Keys returned by pthread_key_create can be 0.
Addresses issue #19567.
Codegen
}
+impl Copy for Mode {}
+
impl FromStr for Mode {
fn from_str(s: &str) -> Option<Mode> {
match s {
desc: test::TestDesc {
name: make_test_name(config, testfile),
ignore: header::is_test_ignored(config, testfile),
- should_fail: false
+ should_fail: test::ShouldFail::No,
},
testfn: f(),
}
}
~~~
+`#[should_fail]` tests can be fragile as it's hard to guarantee that the test
+didn't fail for an unexpected reason. To help with this, an optional `expected`
+parameter can be added to the `should_fail` attribute. The test harness will
+make sure that the failure message contains the provided text. A safer version
+of the example above would be:
+
+~~~test_harness
+#[test]
+#[should_fail(expected = "index out of bounds")]
+fn test_out_of_bounds_failure() {
+ let v: &[int] = &[];
+ v[0];
+}
+~~~
+
A test runner built with the `--test` flag supports a limited set of
arguments to control which tests are run:
fn abort() -> !;
}
+#[lang = "owned_box"]
+pub struct Box<T>(*mut T);
+
#[lang="exchange_malloc"]
unsafe fn allocate(size: uint, _align: uint) -> *mut u8 {
let p = libc::malloc(size as libc::size_t) as *mut u8;
```
Rust files always end in a `.rs` extension. If you're using more than one word
-in your file name, use an underscore. `hello_world.rs` rather than
+in your filename, use an underscore. `hello_world.rs` rather than
`helloworld.rs`.
Now that you've got your file open, type this in:
means that you're calling a macro instead of a normal function. Rust implements
`println!` as a macro rather than a function for good reasons, but that's a
very advanced topic. You'll learn more when we talk about macros later. One
-last thing to mention: Rust's macros are significantly different than C macros,
+last thing to mention: Rust's macros are significantly different from C macros,
if you've used those. Don't be scared of using macros. We'll get to the details
-eventually, you'll just have to trust us for now.
+eventually; you'll just have to trust us for now.
```
This reveals two interesting things about Rust: it is an expression-based
-language, and semicolons are different than in other 'curly brace and
-semicolon'-based languages. These two things are related.
+language, and semicolons are different from semicolons in other 'curly brace
+and semicolon'-based languages. These two things are related.
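The distinction can be sketched in a few lines (current Rust syntax; the names `y` and `parity` are purely illustrative):

```rust
fn main() {
    // A block is an expression: its value is the last expression inside it.
    let y = {
        let x = 3;
        x + 1 // no trailing semicolon, so this is the block's value
    };
    assert_eq!(y, 4);

    // Adding a semicolon turns an expression into a statement and discards
    // its value. `if` is itself an expression, so it can sit on the right
    // of a `let`.
    let parity = if y % 2 == 0 { "even" } else { "odd" };
    assert_eq!(parity, "even");
}
```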
## Expressions vs. Statements
# Strings
Strings are an important concept for any programmer to master. Rust's string
-handling system is a bit different than in other languages, due to its systems
+handling system is a bit different from other languages, due to its systems
focus. Any time you have a data structure of variable size, things can get
tricky, and strings are a re-sizable data structure. That said, Rust's strings
-also work differently than in some other systems languages, such as C.
+also work differently from those in some other systems languages, such as C.
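As a rough sketch of that variable-size aspect (written in current Rust syntax; the names `greeting` and `owned` are invented for illustration):

```rust
fn main() {
    // A string slice: a fixed-size view into string data.
    let greeting: &str = "Hello";

    // An owned, growable string: resizing is where allocation comes in.
    let mut owned: String = greeting.to_string();
    owned.push_str(", world!");
    assert_eq!(owned, "Hello, world!");

    // Borrowing a slice of the owned string needs no extra allocation.
    let slice: &str = &owned[0..5];
    assert_eq!(slice, "Hello");
}
```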
## Comparing guesses
If you remember, earlier in the guide, we made a `cmp` function that compared
-two numbers. Let's add that in, along with a `match` statement to compare the
-guess to the secret guess:
+two numbers. Let's add that in, along with a `match` statement to compare our
+guess to the secret number:
```{rust,ignore}
use std::io;
* experimental: This item was only recently introduced or is otherwise in a
state of flux. It may change significantly, or even be removed. No guarantee
of backwards-compatibility.
-* unstable: This item is still under development, but requires more testing to
+* unstable: This item is still under development and requires more testing to
be considered stable. No guarantee of backwards-compatibility.
* stable: This item is considered stable, and will not change significantly.
Guarantee of backwards-compatibility.
-This is useful, because if we wait to try to borrow `x` after this borrow is
-over, then everything will work.
+This is useful, because if we wait to borrow `x` until after this borrow is
+over, everything will work.
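A minimal sketch of that idea, in current Rust syntax (the `y` binding is illustrative):

```rust
fn main() {
    let mut x = 5;
    {
        let y = &x; // borrow of `x` starts here...
        println!("y is {}", y);
    } // ...and ends here, when `y` goes out of scope
    // The borrow is over, so we are free to mutate `x` again.
    x += 1;
    assert_eq!(x, 6);
}
```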
-For more advanced patterns, please consult the [Lifetime
-Guide](guide-lifetimes.html). You'll also learn what this type signature with
+For more advanced patterns, please consult the [Ownership
+Guide](guide-ownership.html). You'll also learn what this type signature with
the `'a` syntax is:
```{rust,ignore}
issues that programmers have with concurrency. Many concurrency errors that are
runtime errors in other languages are compile-time errors in Rust.
-Rust's concurrency primitive is called a **task**. Tasks are lightweight, and
-do not share memory in an unsafe manner, preferring message passing to
-communicate. It's worth noting that tasks are implemented as a library, and
-not part of the language. This means that in the future, other concurrency
-libraries can be written for Rust to help in specific scenarios. Here's an
-example of creating a task:
+Rust's concurrency primitive is called a **task**. Tasks are similar to
+threads, and do not share memory in an unsafe manner, preferring message
+passing to communicate. It's worth noting that tasks are implemented as a
+library, and not part of the language. This means that in the future, other
+concurrency libraries can be written for Rust to help in specific scenarios.
+Here's an example of creating a task:
```{rust}
spawn(proc() {
### Symbols
```{.ebnf .gram}
-symbol : "::" "->"
+symbol : "::" | "->"
| '#' | '[' | ']' | '(' | ')' | '{' | '}'
| ',' | ';' ;
```
```
use std::iter::range_step;
-use std::option::{Some, None};
+use std::option::Option::{Some, None};
use std::collections::hash_map::{mod, HashMap};
fn foo<T>(_: T){}
// Equivalent to 'std::iter::range_step(0u, 10u, 2u);'
range_step(0u, 10u, 2u);
- // Equivalent to 'foo(vec![std::option::Some(1.0f64),
- // std::option::None]);'
+ // Equivalent to 'foo(vec![std::option::Option::Some(1.0f64),
+ // std::option::Option::None]);'
foo(vec![Some(1.0f64), None]);
// Both `hash_map` and `HashMap` are in scope.
```
# struct Point {x: f64, y: f64};
+# impl Copy for Point {}
# type Surface = int;
# struct BoundingBox {x: f64, y: f64, width: f64, height: f64};
# trait Shape { fn draw(&self, Surface); fn bounding_box(&self) -> BoundingBox; }
center: Point,
}
+impl Copy for Circle {}
+
impl Shape for Circle {
fn draw(&self, s: Surface) { do_draw_circle(s, *self); }
fn bounding_box(&self) -> BoundingBox {
it can be thought of as being accessible to the outside world. For example:
```
+# #![allow(missing_copy_implementations)]
# fn main() {}
// Declare a private struct
struct Foo;
return RustStructPrinter(val, false)
if enum_member_count == 1:
- if enum_members[0].name == None:
+ first_variant_name = enum_members[0].name
+ if first_variant_name is None:
# This is a singleton enum
return rust_pretty_printer_lookup_function(val[enum_members[0]])
else:
- assert enum_members[0].name.startswith("RUST$ENCODED$ENUM$")
+ assert first_variant_name.startswith("RUST$ENCODED$ENUM$")
# This is a space-optimized enum
- last_separator_index = enum_members[0].name.rfind("$")
+ last_separator_index = first_variant_name.rfind("$")
second_last_separator_index = first_variant_name.rfind("$", 0, last_separator_index)
disr_field_index = first_variant_name[second_last_separator_index + 1 :
last_separator_index]
sole_variant_val = val[enum_members[0]]
disr_field = get_field_at_index(sole_variant_val, disr_field_index)
- discriminant = int(sole_variant_val[disr_field])
+ discriminant = sole_variant_val[disr_field]
+
+ # If the discriminant field is a fat pointer we have to consider the
+ # first word as the true discriminant
+ if discriminant.type.code == gdb.TYPE_CODE_STRUCT:
+ discriminant = discriminant[get_field_at_index(discriminant, 0)]
if discriminant == 0:
null_variant_name = first_variant_name[last_separator_index + 1:]
class IdentityPrinter:
def __init__(self, string):
- self.string
+ self.string = string
def to_string(self):
return self.string
"rt/isaac/randport.cpp", # public domain
"rt/isaac/rand.h", # public domain
"rt/isaac/standard.h", # public domain
- "libstd/sync/mpsc_queue.rs", # BSD
- "libstd/sync/spsc_queue.rs", # BSD
- "libstd/sync/mpmc_bounded_queue.rs", # BSD
+ "libstd/comm/mpsc_queue.rs", # BSD
+ "libstd/comm/spsc_queue.rs", # BSD
"test/bench/shootout-binarytrees.rs", # BSD
"test/bench/shootout-chameneos-redux.rs", # BSD
"test/bench/shootout-fannkuch-redux.rs", # BSD
return "<invalid enum encoding: %s>" % first_variant_name
# Read the discriminant
- disr_val = val.GetChildAtIndex(0).GetChildAtIndex(disr_field_index).GetValueAsUnsigned()
+ disr_val = val.GetChildAtIndex(0).GetChildAtIndex(disr_field_index)
- if disr_val == 0:
+ # If the discriminant field is a fat pointer we have to consider the
+ # first word as the true discriminant
+ if disr_val.GetType().GetTypeClass() == lldb.eTypeClassStruct:
+ disr_val = disr_val.GetChildAtIndex(0)
+
+ if disr_val.GetValueAsUnsigned() == 0:
# Null case: Print the name of the null-variant
null_variant_name = first_variant_name[last_separator_index + 1:]
return null_variant_name
def emit_bsearch_range_table(f):
f.write("""
fn bsearch_range_table(c: char, r: &'static [(char,char)]) -> bool {
- use core::cmp::{Equal, Less, Greater};
+ use core::cmp::Ordering::{Equal, Less, Greater};
use core::slice::SlicePrelude;
r.binary_search(|&(lo,hi)| {
if lo <= c && c <= hi { Equal }
def emit_conversions_module(f, lowerupper, upperlower):
f.write("pub mod conversions {")
f.write("""
- use core::cmp::{Equal, Less, Greater};
+ use core::cmp::Ordering::{Equal, Less, Greater};
use core::slice::SlicePrelude;
use core::tuple::Tuple2;
- use core::option::{Option, Some, None};
+ use core::option::Option;
+ use core::option::Option::{Some, None};
use core::slice;
pub fn to_lower(c: char) -> char {
f.write(""" }
fn bsearch_range_value_table(c: char, r: &'static [(char, char, GraphemeCat)]) -> GraphemeCat {
- use core::cmp::{Equal, Less, Greater};
+ use core::cmp::Ordering::{Equal, Less, Greater};
match r.binary_search(|&(lo, hi, _)| {
if lo <= c && c <= hi { Equal }
else if hi < c { Less }
def emit_charwidth_module(f, width_table):
f.write("pub mod charwidth {\n")
- f.write(" use core::option::{Option, Some, None};\n")
+ f.write(" use core::option::Option;\n")
+ f.write(" use core::option::Option::{Some, None};\n")
f.write(" use core::slice::SlicePrelude;\n")
f.write(" use core::slice;\n")
f.write("""
fn bsearch_range_value_table(c: char, is_cjk: bool, r: &'static [(char, char, u8, u8)]) -> u8 {
- use core::cmp::{Equal, Less, Greater};
+ use core::cmp::Ordering::{Equal, Less, Greater};
match r.binary_search(|&(lo, hi, _, _)| {
if lo <= c && c <= hi { Equal }
else if hi < c { Less }
f.write("""
fn bsearch_range_value_table(c: char, r: &'static [(char, char, u8)]) -> u8 {
- use core::cmp::{Equal, Less, Greater};
+ use core::cmp::Ordering::{Equal, Less, Greater};
use core::slice::SlicePrelude;
use core::slice;
match r.binary_search(|&(lo, hi, _)| {
use core::mem::{min_align_of, size_of, drop};
use core::mem;
use core::ops::{Drop, Deref};
-use core::option::{Some, None, Option};
+use core::option::Option;
+use core::option::Option::{Some, None};
use core::ptr::RawPtr;
use core::ptr;
use heap::deallocate;
use std::comm::channel;
use std::mem::drop;
use std::ops::Drop;
- use std::option::{Option, Some, None};
+ use std::option::Option;
+ use std::option::Option::{Some, None};
use std::str::Str;
use std::sync::atomic;
use std::task;
#[test]
fn show_arc() {
let a = Arc::new(5u32);
- assert!(format!("{}", a).as_slice() == "5")
+ assert!(format!("{}", a) == "5")
}
// Make sure deriving works with Arc<T>
use core::mem;
use core::option::Option;
use core::raw::TraitObject;
-use core::result::{Ok, Err, Result};
+use core::result::Result;
+use core::result::Result::{Ok, Err};
/// A value that represents the global exchange heap. This is the default
/// place that the `box` keyword allocates into when no place is supplied.
let b = box Test as Box<Any>;
let a_str = a.to_str();
let b_str = b.to_str();
- assert_eq!(a_str.as_slice(), "Box<Any>");
- assert_eq!(b_str.as_slice(), "Box<Any>");
+ assert_eq!(a_str, "Box<Any>");
+ assert_eq!(b_str, "Box<Any>");
let a = &8u as &Any;
let b = &Test as &Any;
let s = format!("{}", a);
- assert_eq!(s.as_slice(), "&Any");
+ assert_eq!(s, "&Any");
let s = format!("{}", b);
- assert_eq!(s.as_slice(), "&Any");
+ assert_eq!(s, "&Any");
}
}
target_arch = "x86_64"))]
const MIN_ALIGN: uint = 16;
-#[cfg(jemalloc)]
+#[cfg(external_funcs)]
mod imp {
- use core::option::{None, Option};
+ extern {
+ fn rust_allocate(size: uint, align: uint) -> *mut u8;
+ fn rust_deallocate(ptr: *mut u8, old_size: uint, align: uint);
+ fn rust_reallocate(ptr: *mut u8, old_size: uint, size: uint, align: uint) -> *mut u8;
+ fn rust_reallocate_inplace(ptr: *mut u8, old_size: uint, size: uint,
+ align: uint) -> uint;
+ fn rust_usable_size(size: uint, align: uint) -> uint;
+ fn rust_stats_print();
+ }
+
+ #[inline]
+ pub unsafe fn allocate(size: uint, align: uint) -> *mut u8 {
+ rust_allocate(size, align)
+ }
+
+ #[inline]
+ pub unsafe fn reallocate(ptr: *mut u8, old_size: uint, size: uint,
+ align: uint) -> *mut u8 {
+ rust_reallocate(ptr, old_size, size, align)
+ }
+
+ #[inline]
+ pub unsafe fn deallocate(ptr: *mut u8, old_size: uint, align: uint) {
+ rust_deallocate(ptr, old_size, align)
+ }
+
+ #[inline]
+ pub unsafe fn reallocate_inplace(ptr: *mut u8, old_size: uint, size: uint,
+ align: uint) -> uint {
+ rust_reallocate_inplace(ptr, old_size, size, align)
+ }
+
+ #[inline]
+ pub fn usable_size(size: uint, align: uint) -> uint {
+ unsafe { rust_usable_size(size, align) }
+ }
+
+ #[inline]
+ pub fn stats_print() {
+ unsafe { rust_stats_print() }
+ }
+}
+
+#[cfg(external_crate)]
+mod imp {
+ extern crate external;
+ pub use self::external::{allocate, deallocate, reallocate_inplace, reallocate};
+ pub use self::external::{usable_size, stats_print};
+}
+
+#[cfg(all(not(external_funcs), not(external_crate), jemalloc))]
+mod imp {
+ use core::option::Option;
+ use core::option::Option::None;
use core::ptr::{null_mut, null};
use core::num::Int;
use libc::{c_char, c_int, c_void, size_t};
}
}
-#[cfg(all(not(jemalloc), unix))]
+#[cfg(all(not(external_funcs), not(external_crate), not(jemalloc), unix))]
mod imp {
use core::cmp;
use core::ptr;
pub fn stats_print() {}
}
-#[cfg(all(not(jemalloc), windows))]
+#[cfg(all(not(external_funcs), not(external_crate), not(jemalloc), windows))]
mod imp {
use libc::{c_void, size_t};
use libc;
use core::kinds::marker;
use core::mem::{transmute, min_align_of, size_of, forget};
use core::ops::{Deref, Drop};
-use core::option::{Option, Some, None};
+use core::option::Option;
+use core::option::Option::{Some, None};
use core::ptr;
use core::ptr::RawPtr;
-use core::result::{Result, Ok, Err};
+use core::result::Result;
+use core::result::Result::{Ok, Err};
use heap::deallocate;
mod tests {
use super::{Rc, Weak, weak_count, strong_count};
use std::cell::RefCell;
- use std::option::{Option, Some, None};
- use std::result::{Err, Ok};
+ use std::option::Option;
+ use std::option::Option::{Some, None};
+ use std::result::Result::{Err, Ok};
use std::mem::drop;
use std::clone::Clone;
}
let ptr: &mut T = unsafe {
- let ptr: &mut T = mem::transmute(self.ptr);
+ let ptr: &mut T = mem::transmute(self.ptr.clone());
ptr::write(ptr, object);
self.ptr.set(self.ptr.get().offset(1));
ptr
//! position: uint
//! }
//!
+//! impl Copy for State {}
+//!
//! // The priority queue depends on `Ord`.
//! // Explicitly implement the trait so the queue becomes a min-heap
//! // instead of a max-heap.
let mut end = q.len();
while end > 1 {
end -= 1;
- q.data.as_mut_slice().swap(0, end);
+ q.data.swap(0, end);
q.siftdown_range(0, end)
}
q.into_vec()
v.sort();
data.sort();
- assert_eq!(v.as_slice(), data.as_slice());
- assert_eq!(heap.into_sorted_vec().as_slice(), data.as_slice());
+ assert_eq!(v, data);
+ assert_eq!(heap.into_sorted_vec(), data);
}
#[test]
fn test_from_iter() {
let xs = vec!(9u, 8, 7, 6, 5, 4, 3, 2, 1);
- let mut q: BinaryHeap<uint> = xs.as_slice().iter().rev().map(|&x| x).collect();
+ let mut q: BinaryHeap<uint> = xs.iter().rev().map(|&x| x).collect();
for &x in xs.iter() {
assert_eq!(q.pop().unwrap(), x);
#[test]
fn test_to_str() {
let zerolen = Bitv::new();
- assert_eq!(zerolen.to_string().as_slice(), "");
+ assert_eq!(zerolen.to_string(), "");
let eightbits = Bitv::with_capacity(8u, false);
- assert_eq!(eightbits.to_string().as_slice(), "00000000")
+ assert_eq!(eightbits.to_string(), "00000000");
}
#[test]
let mut b = bitv::Bitv::with_capacity(2, false);
b.set(0, true);
b.set(1, false);
- assert_eq!(b.to_string().as_slice(), "10");
+ assert_eq!(b.to_string(), "10");
}
#[test]
fn test_from_bytes() {
let bitv = from_bytes(&[0b10110110, 0b00000000, 0b11111111]);
let str = format!("{}{}{}", "10110110", "00000000", "11111111");
- assert_eq!(bitv.to_string().as_slice(), str.as_slice());
+ assert_eq!(bitv.to_string(), str);
}
#[test]
fn test_from_bools() {
let bools = vec![true, false, true, true];
let bitv: Bitv = bools.iter().map(|n| *n).collect();
- assert_eq!(bitv.to_string().as_slice(), "1011");
+ assert_eq!(bitv.to_string(), "1011");
}
#[test]
let expected = [3, 5, 11, 77];
let actual = a.intersection(&b).collect::<Vec<uint>>();
- assert_eq!(actual.as_slice(), expected.as_slice());
+ assert_eq!(actual, expected);
}
#[test]
let expected = [1, 5, 500];
let actual = a.difference(&b).collect::<Vec<uint>>();
- assert_eq!(actual.as_slice(), expected.as_slice());
+ assert_eq!(actual, expected);
}
#[test]
let expected = [1, 5, 11, 14, 220];
let actual = a.symmetric_difference(&b).collect::<Vec<uint>>();
- assert_eq!(actual.as_slice(), expected.as_slice());
+ assert_eq!(actual, expected);
}
#[test]
let expected = [1, 3, 5, 9, 11, 13, 19, 24, 160, 200];
let actual = a.union(&b).collect::<Vec<uint>>();
- assert_eq!(actual.as_slice(), expected.as_slice());
+ assert_eq!(actual, expected);
}
#[test]
s.insert(10);
s.insert(50);
s.insert(2);
- assert_eq!("{1, 2, 10, 50}".to_string(), s.to_string());
+ assert_eq!("{1, 2, 10, 50}", s.to_string());
}
fn rng() -> rand::IsaacRng {
}
/// Gets an iterator over the keys of the map.
+ ///
+ /// # Example
+ ///
+ /// ```
+ /// use std::collections::BTreeMap;
+ ///
+ /// let mut a = BTreeMap::new();
+ /// a.insert(1u, "a");
+ /// a.insert(2u, "b");
+ ///
+ /// let keys: Vec<uint> = a.keys().cloned().collect();
+ /// assert_eq!(keys, vec![1u, 2]);
+ /// ```
#[unstable = "matches collection reform specification, waiting for dust to settle"]
pub fn keys<'a>(&'a self) -> Keys<'a, K, V> {
self.iter().map(|(k, _)| k)
}
/// Gets an iterator over the values of the map.
+ ///
+ /// # Example
+ ///
+ /// ```
+ /// use std::collections::BTreeMap;
+ ///
+ /// let mut a = BTreeMap::new();
+ /// a.insert(1u, "a");
+ /// a.insert(2u, "b");
+ ///
+ /// let values: Vec<&str> = a.values().cloned().collect();
+ /// assert_eq!(values, vec!["a", "b"]);
+ /// ```
#[unstable = "matches collection reform specification, waiting for dust to settle"]
pub fn values<'a>(&'a self) -> Values<'a, K, V> {
self.iter().map(|(_, v)| v)
/// Swap the given key-value pair with the key-value pair stored in the node's index,
/// without checking bounds.
pub unsafe fn unsafe_swap(&mut self, index: uint, key: &mut K, val: &mut V) {
- mem::swap(self.keys.as_mut_slice().unsafe_mut(index), key);
- mem::swap(self.vals.as_mut_slice().unsafe_mut(index), val);
+ mem::swap(self.keys.unsafe_mut(index), key);
+ mem::swap(self.vals.unsafe_mut(index), val);
}
/// Get the node's key mutably without any bounds checks.
pub unsafe fn unsafe_key_mut(&mut self, index: uint) -> &mut K {
- self.keys.as_mut_slice().unsafe_mut(index)
+ self.keys.unsafe_mut(index)
}
/// Get the node's value at the given index
pub fn val(&self, index: uint) -> Option<&V> {
- self.vals.as_slice().get(index)
+ self.vals.get(index)
}
/// Get the node's value at the given index
pub fn val_mut(&mut self, index: uint) -> Option<&mut V> {
- self.vals.as_mut_slice().get_mut(index)
+ self.vals.get_mut(index)
}
/// Get the node's value mutably without any bounds checks.
pub unsafe fn unsafe_val_mut(&mut self, index: uint) -> &mut V {
- self.vals.as_mut_slice().unsafe_mut(index)
+ self.vals.unsafe_mut(index)
}
/// Get the node's edge at the given index
pub fn edge(&self, index: uint) -> Option<&Node<K,V>> {
- self.edges.as_slice().get(index)
+ self.edges.get(index)
}
/// Get the node's edge mutably at the given index
pub fn edge_mut(&mut self, index: uint) -> Option<&mut Node<K,V>> {
- self.edges.as_mut_slice().get_mut(index)
+ self.edges.get_mut(index)
}
/// Get the node's edge mutably without any bounds checks.
pub unsafe fn unsafe_edge_mut(&mut self, index: uint) -> &mut Node<K,V> {
- self.edges.as_mut_slice().unsafe_mut(index)
+ self.edges.unsafe_mut(index)
}
/// Pop an edge off the end of the node
pub fn iter<'a>(&'a self) -> Traversal<'a, K, V> {
let is_leaf = self.is_leaf();
Traversal {
- elems: self.keys.as_slice().iter().zip(self.vals.as_slice().iter()),
- edges: self.edges.as_slice().iter(),
+ elems: self.keys.iter().zip(self.vals.iter()),
+ edges: self.edges.iter(),
head_is_edge: true,
tail_is_edge: true,
has_edges: !is_leaf,
pub fn iter_mut<'a>(&'a mut self) -> MutTraversal<'a, K, V> {
let is_leaf = self.is_leaf();
MutTraversal {
- elems: self.keys.as_slice().iter().zip(self.vals.as_mut_slice().iter_mut()),
- edges: self.edges.as_mut_slice().iter_mut(),
+ elems: self.keys.iter().zip(self.vals.iter_mut()),
+ edges: self.edges.iter_mut(),
head_is_edge: true,
tail_is_edge: true,
has_edges: !is_leaf,
let left_len = len - right_len;
let mut right = Vec::with_capacity(left.capacity());
unsafe {
- let left_ptr = left.as_slice().unsafe_get(left_len) as *const _;
- let right_ptr = right.as_mut_slice().as_mut_ptr();
+ let left_ptr = left.unsafe_get(left_len) as *const _;
+ let right_ptr = right.as_mut_ptr();
ptr::copy_nonoverlapping_memory(right_ptr, left_ptr, right_len);
left.set_len(left_len);
right.set_len(right_len);
impl<T: Ord> BTreeSet<T> {
/// Visits the values representing the difference, in ascending order.
+ ///
+ /// # Example
+ ///
+ /// ```
+ /// use std::collections::BTreeSet;
+ ///
+ /// let mut a = BTreeSet::new();
+ /// a.insert(1u);
+ /// a.insert(2u);
+ ///
+ /// let mut b = BTreeSet::new();
+ /// b.insert(2u);
+ /// b.insert(3u);
+ ///
+ /// let diff: Vec<uint> = a.difference(&b).cloned().collect();
+ /// assert_eq!(diff, vec![1u]);
+ /// ```
#[unstable = "matches collection reform specification, waiting for dust to settle"]
pub fn difference<'a>(&'a self, other: &'a BTreeSet<T>) -> DifferenceItems<'a, T> {
DifferenceItems{a: self.iter().peekable(), b: other.iter().peekable()}
}
/// Visits the values representing the symmetric difference, in ascending order.
+ ///
+ /// # Example
+ ///
+ /// ```
+ /// use std::collections::BTreeSet;
+ ///
+ /// let mut a = BTreeSet::new();
+ /// a.insert(1u);
+ /// a.insert(2u);
+ ///
+ /// let mut b = BTreeSet::new();
+ /// b.insert(2u);
+ /// b.insert(3u);
+ ///
+ /// let sym_diff: Vec<uint> = a.symmetric_difference(&b).cloned().collect();
+ /// assert_eq!(sym_diff, vec![1u, 3]);
+ /// ```
#[unstable = "matches collection reform specification, waiting for dust to settle"]
pub fn symmetric_difference<'a>(&'a self, other: &'a BTreeSet<T>)
-> SymDifferenceItems<'a, T> {
}
/// Visits the values representing the intersection, in ascending order.
+ ///
+ /// # Example
+ ///
+ /// ```
+ /// use std::collections::BTreeSet;
+ ///
+ /// let mut a = BTreeSet::new();
+ /// a.insert(1u);
+ /// a.insert(2u);
+ ///
+ /// let mut b = BTreeSet::new();
+ /// b.insert(2u);
+ /// b.insert(3u);
+ ///
+ /// let intersection: Vec<uint> = a.intersection(&b).cloned().collect();
+ /// assert_eq!(intersection, vec![2u]);
+ /// ```
#[unstable = "matches collection reform specification, waiting for dust to settle"]
pub fn intersection<'a>(&'a self, other: &'a BTreeSet<T>)
-> IntersectionItems<'a, T> {
}
/// Visits the values representing the union, in ascending order.
+ ///
+ /// # Example
+ ///
+ /// ```
+ /// use std::collections::BTreeSet;
+ ///
+ /// let mut a = BTreeSet::new();
+ /// a.insert(1u);
+ ///
+ /// let mut b = BTreeSet::new();
+ /// b.insert(2u);
+ ///
+ /// let union: Vec<uint> = a.union(&b).cloned().collect();
+ /// assert_eq!(union, vec![1u, 2]);
+ /// ```
#[unstable = "matches collection reform specification, waiting for dust to settle"]
pub fn union<'a>(&'a self, other: &'a BTreeSet<T>) -> UnionItems<'a, T> {
UnionItems{a: self.iter().peekable(), b: other.iter().peekable()}
let set_str = format!("{}", set);
- assert!(set_str == "{1, 2}".to_string());
- assert_eq!(format!("{}", empty), "{}".to_string());
+ assert!(set_str == "{1, 2}");
+ assert_eq!(format!("{}", empty), "{}");
}
}
}
type Link<T> = Option<Box<Node<T>>>;
-struct Rawlink<T> { p: *mut T }
+
+struct Rawlink<T> {
+ p: *mut T,
+}
+
+impl<T> Copy for Rawlink<T> {}
struct Node<T> {
next: Link<T>,
fn clone(&self) -> Items<'a, T> { *self }
}
+impl<'a,T> Copy for Items<'a,T> {}
+
/// An iterator over mutable references to the items of a `DList`.
pub struct MutItems<'a, T:'a> {
list: &'a mut DList<T>,
let mut m = list_from(v.as_slice());
m.prepend(list_from(u.as_slice()));
check_links(&m);
- u.extend(v.as_slice().iter().map(|&b| b));
+ u.extend(v.iter().map(|&b| b));
assert_eq!(u.len(), m.len());
for elt in u.into_iter() {
assert_eq!(m.pop_front(), Some(elt))
spawn(proc() {
check_links(&n);
let a: &[_] = &[&1,&2,&3];
- assert_eq!(a, n.iter().collect::<Vec<&int>>().as_slice());
+ assert_eq!(a, n.iter().collect::<Vec<&int>>());
});
}
#[test]
fn test_show() {
let list: DList<int> = range(0i, 10).collect();
- assert!(list.to_string().as_slice() == "[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]");
+ assert!(list.to_string() == "[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]");
let list: DList<&str> = vec!["just", "one", "test", "more"].iter()
.map(|&s| s)
.collect();
- assert!(list.to_string().as_slice() == "[just, one, test, more]");
+ assert!(list.to_string() == "[just, one, test, more]");
}
#[cfg(test)]
bits: uint
}
+impl<E> Copy for EnumSet<E> {}
+
impl<E:CLike+fmt::Show> fmt::Show for EnumSet<E> {
fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
try!(write!(fmt, "{{"));
A, B, C
}
+ impl Copy for Foo {}
+
impl CLike for Foo {
fn to_uint(&self) -> uint {
*self as uint
#[test]
fn test_show() {
let mut e = EnumSet::new();
- assert_eq!("{}", e.to_string().as_slice());
+ assert_eq!("{}", e.to_string());
e.insert(A);
- assert_eq!("{A}", e.to_string().as_slice());
+ assert_eq!("{A}", e.to_string());
e.insert(C);
- assert_eq!("{A, C}", e.to_string().as_slice());
+ assert_eq!("{A, C}", e.to_string());
}
#[test]
V50, V51, V52, V53, V54, V55, V56, V57, V58, V59,
V60, V61, V62, V63, V64, V65, V66, V67, V68, V69,
}
+
+ impl Copy for Bar {}
+
impl CLike for Bar {
fn to_uint(&self) -> uint {
*self as uint
ntail: uint, // how many bytes in tail are valid
}
+impl Copy for SipState {}
+
// sadly, these macro definitions can't appear later,
// because they're needed in the following defs;
// this design could be improved.
/// `SipHasher` computes the SipHash algorithm from a stream of bytes.
#[deriving(Clone)]
+#[allow(missing_copy_implementations)]
pub struct SipHasher {
k0: u64,
k1: u64,
}
{
let b: &[_] = &[&0,&1,&2,&3,&4];
- assert_eq!(d.iter().collect::<Vec<&int>>().as_slice(), b);
+ assert_eq!(d.iter().collect::<Vec<&int>>(), b);
}
for i in range(6i, 9) {
}
{
let b: &[_] = &[&8,&7,&6,&0,&1,&2,&3,&4];
- assert_eq!(d.iter().collect::<Vec<&int>>().as_slice(), b);
+ assert_eq!(d.iter().collect::<Vec<&int>>(), b);
}
let mut it = d.iter();
}
{
let b: &[_] = &[&4,&3,&2,&1,&0];
- assert_eq!(d.iter().rev().collect::<Vec<&int>>().as_slice(), b);
+ assert_eq!(d.iter().rev().collect::<Vec<&int>>(), b);
}
for i in range(6i, 9) {
d.push_front(i);
}
let b: &[_] = &[&4,&3,&2,&1,&0,&6,&7,&8];
- assert_eq!(d.iter().rev().collect::<Vec<&int>>().as_slice(), b);
+ assert_eq!(d.iter().rev().collect::<Vec<&int>>(), b);
}
#[test]
#[test]
fn test_show() {
let ringbuf: RingBuf<int> = range(0i, 10).collect();
- assert!(format!("{}", ringbuf).as_slice() == "[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]");
+ assert!(format!("{}", ringbuf) == "[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]");
let ringbuf: RingBuf<&str> = vec!["just", "one", "test", "more"].iter()
.map(|&s| s)
.collect();
- assert!(format!("{}", ringbuf).as_slice() == "[just, one, test, more]");
+ assert!(format!("{}", ringbuf) == "[just, one, test, more]");
}
#[test]
use alloc::boxed::Box;
use core::borrow::{BorrowFrom, BorrowFromMut, ToOwned};
use core::cmp;
-use core::kinds::Sized;
+use core::kinds::{Copy, Sized};
use core::mem::size_of;
use core::mem;
use core::prelude::{Clone, Greater, Iterator, IteratorExt, Less, None, Option};
enum Direction { Pos, Neg }
+impl Copy for Direction {}
+
/// An `Index` and `Direction` together.
struct SizeDirection {
size: uint,
dir: Direction,
}
+impl Copy for SizeDirection {}
+
impl Iterator<(uint, uint)> for ElementSwaps {
#[inline]
fn next(&mut self) -> Option<(uint, uint)> {
match max {
Some((i, sd)) => {
let j = new_pos(i, sd.dir);
- self.sdir.as_mut_slice().swap(i, j);
+ self.sdir.swap(i, j);
// Swap the direction of each larger SizeDirection
for x in self.sdir.iter_mut() {
Some((0,0)) => Some(self.v.clone()),
Some((a, b)) => {
let elt = self.v.clone();
- self.v.as_mut_slice().swap(a, b);
+ self.v.swap(a, b);
Some(elt)
}
}
#[test]
fn test_head_mut() {
let mut a = vec![];
- assert_eq!(a.as_mut_slice().head_mut(), None);
+ assert_eq!(a.head_mut(), None);
a = vec![11i];
- assert_eq!(*a.as_mut_slice().head_mut().unwrap(), 11);
+ assert_eq!(*a.head_mut().unwrap(), 11);
a = vec![11i, 12];
- assert_eq!(*a.as_mut_slice().head_mut().unwrap(), 11);
+ assert_eq!(*a.head_mut().unwrap(), 11);
}
#[test]
fn test_tail_mut() {
let mut a = vec![11i];
let b: &mut [int] = &mut [];
- assert!(a.as_mut_slice().tail_mut() == b);
+ assert!(a.tail_mut() == b);
a = vec![11i, 12];
let b: &mut [int] = &mut [12];
- assert!(a.as_mut_slice().tail_mut() == b);
+ assert!(a.tail_mut() == b);
}
#[test]
#[should_fail]
fn test_tail_mut_empty() {
let mut a: Vec<int> = vec![];
- a.as_mut_slice().tail_mut();
+ a.tail_mut();
}
#[test]
fn test_init_mut() {
let mut a = vec![11i];
let b: &mut [int] = &mut [];
- assert!(a.as_mut_slice().init_mut() == b);
+ assert!(a.init_mut() == b);
a = vec![11i, 12];
let b: &mut [int] = &mut [11];
- assert!(a.as_mut_slice().init_mut() == b);
+ assert!(a.init_mut() == b);
}
#[test]
#[should_fail]
fn test_init_mut_empty() {
let mut a: Vec<int> = vec![];
- a.as_mut_slice().init_mut();
+ a.init_mut();
}
#[test]
#[test]
fn test_last_mut() {
let mut a = vec![];
- assert_eq!(a.as_mut_slice().last_mut(), None);
+ assert_eq!(a.last_mut(), None);
a = vec![11i];
- assert_eq!(*a.as_mut_slice().last_mut().unwrap(), 11);
+ assert_eq!(*a.last_mut().unwrap(), 11);
a = vec![11i, 12];
- assert_eq!(*a.as_mut_slice().last_mut().unwrap(), 12);
+ assert_eq!(*a.last_mut().unwrap(), 12);
}
#[test]
.collect::<Vec<uint>>();
let mut v1 = v.clone();
- v.as_mut_slice().sort();
+ v.sort();
assert!(v.as_slice().windows(2).all(|w| w[0] <= w[1]));
- v1.as_mut_slice().sort_by(|a, b| a.cmp(b));
+ v1.sort_by(|a, b| a.cmp(b));
assert!(v1.as_slice().windows(2).all(|w| w[0] <= w[1]));
- v1.as_mut_slice().sort_by(|a, b| b.cmp(a));
+ v1.sort_by(|a, b| b.cmp(a));
assert!(v1.as_slice().windows(2).all(|w| w[0] >= w[1]));
}
}
fn clone(&self) -> S {
self.f.set(self.f.get() + 1);
if self.f.get() == 10 { panic!() }
- S { f: self.f, boxes: self.boxes.clone() }
+ S {
+ f: self.f.clone(),
+ boxes: self.boxes.clone(),
+ }
}
}
- let s = S { f: Cell::new(0), boxes: (box 0, Rc::new(0)) };
+ let s = S {
+ f: Cell::new(0),
+ boxes: (box 0, Rc::new(0)),
+ };
let _ = Vec::from_elem(100, s);
}
let xs = &[1i,2,3,4,5];
let splits: &[&[int]] = &[&[1], &[3], &[5]];
- assert_eq!(xs.split(|x| *x % 2 == 0).collect::<Vec<&[int]>>().as_slice(),
+ assert_eq!(xs.split(|x| *x % 2 == 0).collect::<Vec<&[int]>>(),
splits);
let splits: &[&[int]] = &[&[], &[2,3,4,5]];
- assert_eq!(xs.split(|x| *x == 1).collect::<Vec<&[int]>>().as_slice(),
+ assert_eq!(xs.split(|x| *x == 1).collect::<Vec<&[int]>>(),
splits);
let splits: &[&[int]] = &[&[1,2,3,4], &[]];
- assert_eq!(xs.split(|x| *x == 5).collect::<Vec<&[int]>>().as_slice(),
+ assert_eq!(xs.split(|x| *x == 5).collect::<Vec<&[int]>>(),
splits);
let splits: &[&[int]] = &[&[1,2,3,4,5]];
- assert_eq!(xs.split(|x| *x == 10).collect::<Vec<&[int]>>().as_slice(),
+ assert_eq!(xs.split(|x| *x == 10).collect::<Vec<&[int]>>(),
splits);
let splits: &[&[int]] = &[&[], &[], &[], &[], &[], &[]];
- assert_eq!(xs.split(|_| true).collect::<Vec<&[int]>>().as_slice(),
+ assert_eq!(xs.split(|_| true).collect::<Vec<&[int]>>(),
splits);
let xs: &[int] = &[];
let splits: &[&[int]] = &[&[]];
- assert_eq!(xs.split(|x| *x == 5).collect::<Vec<&[int]>>().as_slice(), splits);
+ assert_eq!(xs.split(|x| *x == 5).collect::<Vec<&[int]>>(), splits);
}
#[test]
let xs = &[1i,2,3,4,5];
let splits: &[&[int]] = &[&[1,2,3,4,5]];
- assert_eq!(xs.splitn(0, |x| *x % 2 == 0).collect::<Vec<&[int]>>().as_slice(),
+ assert_eq!(xs.splitn(0, |x| *x % 2 == 0).collect::<Vec<&[int]>>(),
splits);
let splits: &[&[int]] = &[&[1], &[3,4,5]];
- assert_eq!(xs.splitn(1, |x| *x % 2 == 0).collect::<Vec<&[int]>>().as_slice(),
+ assert_eq!(xs.splitn(1, |x| *x % 2 == 0).collect::<Vec<&[int]>>(),
splits);
let splits: &[&[int]] = &[&[], &[], &[], &[4,5]];
- assert_eq!(xs.splitn(3, |_| true).collect::<Vec<&[int]>>().as_slice(),
+ assert_eq!(xs.splitn(3, |_| true).collect::<Vec<&[int]>>(),
splits);
let xs: &[int] = &[];
let splits: &[&[int]] = &[&[]];
- assert_eq!(xs.splitn(1, |x| *x == 5).collect::<Vec<&[int]>>().as_slice(), splits);
+ assert_eq!(xs.splitn(1, |x| *x == 5).collect::<Vec<&[int]>>(), splits);
}
#[test]
let xs = &mut [1i,2,3,4,5];
let splits: &[&mut [int]] = &[&mut [1,2,3,4,5]];
- assert_eq!(xs.splitn_mut(0, |x| *x % 2 == 0).collect::<Vec<&mut [int]>>().as_slice(),
+ assert_eq!(xs.splitn_mut(0, |x| *x % 2 == 0).collect::<Vec<&mut [int]>>(),
splits);
let splits: &[&mut [int]] = &[&mut [1], &mut [3,4,5]];
- assert_eq!(xs.splitn_mut(1, |x| *x % 2 == 0).collect::<Vec<&mut [int]>>().as_slice(),
+ assert_eq!(xs.splitn_mut(1, |x| *x % 2 == 0).collect::<Vec<&mut [int]>>(),
splits);
let splits: &[&mut [int]] = &[&mut [], &mut [], &mut [], &mut [4,5]];
- assert_eq!(xs.splitn_mut(3, |_| true).collect::<Vec<&mut [int]>>().as_slice(),
+ assert_eq!(xs.splitn_mut(3, |_| true).collect::<Vec<&mut [int]>>(),
splits);
let xs: &mut [int] = &mut [];
let splits: &[&mut [int]] = &[&mut []];
- assert_eq!(xs.splitn_mut(1, |x| *x == 5).collect::<Vec<&mut [int]>>().as_slice(),
+ assert_eq!(xs.splitn_mut(1, |x| *x == 5).collect::<Vec<&mut [int]>>(),
splits);
}
let xs = &[1i,2,3,4,5];
let splits: &[&[int]] = &[&[5], &[3], &[1]];
- assert_eq!(xs.split(|x| *x % 2 == 0).rev().collect::<Vec<&[int]>>().as_slice(),
+ assert_eq!(xs.split(|x| *x % 2 == 0).rev().collect::<Vec<&[int]>>(),
splits);
let splits: &[&[int]] = &[&[2,3,4,5], &[]];
- assert_eq!(xs.split(|x| *x == 1).rev().collect::<Vec<&[int]>>().as_slice(),
+ assert_eq!(xs.split(|x| *x == 1).rev().collect::<Vec<&[int]>>(),
splits);
let splits: &[&[int]] = &[&[], &[1,2,3,4]];
- assert_eq!(xs.split(|x| *x == 5).rev().collect::<Vec<&[int]>>().as_slice(),
+ assert_eq!(xs.split(|x| *x == 5).rev().collect::<Vec<&[int]>>(),
splits);
let splits: &[&[int]] = &[&[1,2,3,4,5]];
- assert_eq!(xs.split(|x| *x == 10).rev().collect::<Vec<&[int]>>().as_slice(),
+ assert_eq!(xs.split(|x| *x == 10).rev().collect::<Vec<&[int]>>(),
splits);
let xs: &[int] = &[];
let splits: &[&[int]] = &[&[]];
- assert_eq!(xs.split(|x| *x == 5).rev().collect::<Vec<&[int]>>().as_slice(), splits);
+ assert_eq!(xs.split(|x| *x == 5).rev().collect::<Vec<&[int]>>(), splits);
}
#[test]
let xs = &[1,2,3,4,5];
let splits: &[&[int]] = &[&[1,2,3,4,5]];
- assert_eq!(xs.rsplitn(0, |x| *x % 2 == 0).collect::<Vec<&[int]>>().as_slice(),
+ assert_eq!(xs.rsplitn(0, |x| *x % 2 == 0).collect::<Vec<&[int]>>(),
splits);
let splits: &[&[int]] = &[&[5], &[1,2,3]];
- assert_eq!(xs.rsplitn(1, |x| *x % 2 == 0).collect::<Vec<&[int]>>().as_slice(),
+ assert_eq!(xs.rsplitn(1, |x| *x % 2 == 0).collect::<Vec<&[int]>>(),
splits);
let splits: &[&[int]] = &[&[], &[], &[], &[1,2]];
- assert_eq!(xs.rsplitn(3, |_| true).collect::<Vec<&[int]>>().as_slice(),
+ assert_eq!(xs.rsplitn(3, |_| true).collect::<Vec<&[int]>>(),
splits);
let xs: &[int] = &[];
let splits: &[&[int]] = &[&[]];
- assert_eq!(xs.rsplitn(1, |x| *x == 5).collect::<Vec<&[int]>>().as_slice(), splits);
+ assert_eq!(xs.rsplitn(1, |x| *x == 5).collect::<Vec<&[int]>>(), splits);
}
#[test]
let v = &[1i,2,3,4];
let wins: &[&[int]] = &[&[1,2], &[2,3], &[3,4]];
- assert_eq!(v.windows(2).collect::<Vec<&[int]>>().as_slice(), wins);
+ assert_eq!(v.windows(2).collect::<Vec<&[int]>>(), wins);
let wins: &[&[int]] = &[&[1i,2,3], &[2,3,4]];
- assert_eq!(v.windows(3).collect::<Vec<&[int]>>().as_slice(), wins);
+ assert_eq!(v.windows(3).collect::<Vec<&[int]>>(), wins);
assert!(v.windows(6).next().is_none());
}
let v = &[1i,2,3,4,5];
let chunks: &[&[int]] = &[&[1i,2], &[3,4], &[5]];
- assert_eq!(v.chunks(2).collect::<Vec<&[int]>>().as_slice(), chunks);
+ assert_eq!(v.chunks(2).collect::<Vec<&[int]>>(), chunks);
let chunks: &[&[int]] = &[&[1i,2,3], &[4,5]];
- assert_eq!(v.chunks(3).collect::<Vec<&[int]>>().as_slice(), chunks);
+ assert_eq!(v.chunks(3).collect::<Vec<&[int]>>(), chunks);
let chunks: &[&[int]] = &[&[1i,2,3,4,5]];
- assert_eq!(v.chunks(6).collect::<Vec<&[int]>>().as_slice(), chunks);
+ assert_eq!(v.chunks(6).collect::<Vec<&[int]>>(), chunks);
let chunks: &[&[int]] = &[&[5i], &[3,4], &[1,2]];
- assert_eq!(v.chunks(2).rev().collect::<Vec<&[int]>>().as_slice(), chunks);
+ assert_eq!(v.chunks(2).rev().collect::<Vec<&[int]>>(), chunks);
let mut it = v.chunks(2);
assert_eq!(it.indexable(), 3);
let chunk: &[int] = &[1,2];
})
)
let empty: Vec<int> = vec![];
- test_show_vec!(empty, "[]".to_string());
- test_show_vec!(vec![1i], "[1]".to_string());
- test_show_vec!(vec![1i, 2, 3], "[1, 2, 3]".to_string());
+ test_show_vec!(empty, "[]");
+ test_show_vec!(vec![1i], "[1]");
+ test_show_vec!(vec![1i, 2, 3], "[1, 2, 3]");
test_show_vec!(vec![vec![], vec![1u], vec![1u, 1u]],
- "[[], [1], [1, 1]]".to_string());
+ "[[], [1], [1, 1]]");
let empty_mut: &mut [int] = &mut[];
- test_show_vec!(empty_mut, "[]".to_string());
+ test_show_vec!(empty_mut, "[]");
let v: &mut[int] = &mut[1];
- test_show_vec!(v, "[1]".to_string());
+ test_show_vec!(v, "[1]");
let v: &mut[int] = &mut[1, 2, 3];
- test_show_vec!(v, "[1, 2, 3]".to_string());
+ test_show_vec!(v, "[1, 2, 3]");
let v: &mut [&mut[uint]] = &mut[&mut[], &mut[1u], &mut[1u, 1u]];
- test_show_vec!(v, "[[], [1], [1, 1]]".to_string());
+ test_show_vec!(v, "[[], [1], [1, 1]]");
}
#[test]
fn test_to_vec() {
let xs = box [1u, 2, 3];
let ys = xs.to_vec();
- assert_eq!(ys.as_slice(), [1u, 2, 3].as_slice());
+ assert_eq!(ys, [1u, 2, 3]);
}
}
impl<'a> Iterator<char> for Decompositions<'a> {
#[inline]
fn next(&mut self) -> Option<char> {
- match self.buffer.as_slice().head() {
+ match self.buffer.head() {
Some(&(c, 0)) => {
self.sorted = false;
self.buffer.remove(0);
_ => self.sorted = false
}
- let decomposer = match self.kind {
- Canonical => unicode::char::decompose_canonical,
- Compatible => unicode::char::decompose_compatible
- };
-
if !self.sorted {
for ch in self.iter {
let buffer = &mut self.buffer;
let sorted = &mut self.sorted;
- decomposer(ch, |d| {
- let class = unicode::char::canonical_combining_class(d);
- if class == 0 && !*sorted {
- canonical_sort(buffer.as_mut_slice());
- *sorted = true;
+ {
+ let callback = |d| {
+ let class =
+ unicode::char::canonical_combining_class(d);
+ if class == 0 && !*sorted {
+ canonical_sort(buffer.as_mut_slice());
+ *sorted = true;
+ }
+ buffer.push((d, class));
+ };
+ match self.kind {
+ Canonical => {
+ unicode::char::decompose_canonical(ch, callback)
+ }
+ Compatible => {
+ unicode::char::decompose_compatible(ch, callback)
+ }
}
- buffer.push((d, class));
- });
- if *sorted { break }
+ }
+ if *sorted {
+ break
+ }
}
}
use std::default::Default;
use std::char::Char;
use std::clone::Clone;
- use std::cmp::{Equal, Greater, Less, Ord, PartialOrd, Equiv};
- use std::option::{Some, None};
+ use std::cmp::{Ord, PartialOrd, Equiv};
+ use std::cmp::Ordering::{Equal, Greater, Less};
+ use std::option::Option;
+ use std::option::Option::{Some, None};
use std::ptr::RawPtr;
use std::iter::{Iterator, IteratorExt, DoubleEndedIteratorExt};
#[test]
fn test_collect() {
let empty = String::from_str("");
- let s: String = empty.as_slice().chars().collect();
+ let s: String = empty.chars().collect();
assert_eq!(empty, s);
let data = String::from_str("ประเทศไทย中");
- let s: String = data.as_slice().chars().collect();
+ let s: String = data.chars().collect();
assert_eq!(data, s);
}
fn test_into_bytes() {
let data = String::from_str("asdf");
let buf = data.into_bytes();
- assert_eq!(b"asdf", buf.as_slice());
+ assert_eq!(b"asdf", buf);
}
#[test]
let string = "ประเทศไทย中华Việt Nam";
let mut data = String::from_str(string);
data.push_str(string);
- assert!(data.as_slice().find_str("ไท华").is_none());
- assert_eq!(data.as_slice().slice(0u, 43u).find_str(""), Some(0u));
- assert_eq!(data.as_slice().slice(6u, 43u).find_str(""), Some(6u - 6u));
+ assert!(data.find_str("ไท华").is_none());
+ assert_eq!(data.slice(0u, 43u).find_str(""), Some(0u));
+ assert_eq!(data.slice(6u, 43u).find_str(""), Some(6u - 6u));
- assert_eq!(data.as_slice().slice(0u, 43u).find_str("ประ"), Some( 0u));
- assert_eq!(data.as_slice().slice(0u, 43u).find_str("ทศไ"), Some(12u));
- assert_eq!(data.as_slice().slice(0u, 43u).find_str("ย中"), Some(24u));
- assert_eq!(data.as_slice().slice(0u, 43u).find_str("iệt"), Some(34u));
- assert_eq!(data.as_slice().slice(0u, 43u).find_str("Nam"), Some(40u));
+ assert_eq!(data.slice(0u, 43u).find_str("ประ"), Some( 0u));
+ assert_eq!(data.slice(0u, 43u).find_str("ทศไ"), Some(12u));
+ assert_eq!(data.slice(0u, 43u).find_str("ย中"), Some(24u));
+ assert_eq!(data.slice(0u, 43u).find_str("iệt"), Some(34u));
+ assert_eq!(data.slice(0u, 43u).find_str("Nam"), Some(40u));
- assert_eq!(data.as_slice().slice(43u, 86u).find_str("ประ"), Some(43u - 43u));
- assert_eq!(data.as_slice().slice(43u, 86u).find_str("ทศไ"), Some(55u - 43u));
- assert_eq!(data.as_slice().slice(43u, 86u).find_str("ย中"), Some(67u - 43u));
- assert_eq!(data.as_slice().slice(43u, 86u).find_str("iệt"), Some(77u - 43u));
- assert_eq!(data.as_slice().slice(43u, 86u).find_str("Nam"), Some(83u - 43u));
+ assert_eq!(data.slice(43u, 86u).find_str("ประ"), Some(43u - 43u));
+ assert_eq!(data.slice(43u, 86u).find_str("ทศไ"), Some(55u - 43u));
+ assert_eq!(data.slice(43u, 86u).find_str("ย中"), Some(67u - 43u));
+ assert_eq!(data.slice(43u, 86u).find_str("iệt"), Some(77u - 43u));
+ assert_eq!(data.slice(43u, 86u).find_str("Nam"), Some(83u - 43u));
}
#[test]
($expected: expr, $string: expr) => {
{
let s = $string.concat();
- assert_eq!($expected, s.as_slice());
+ assert_eq!($expected, s);
}
}
}
($expected: expr, $string: expr, $delim: expr) => {
{
let s = $string.connect($delim);
- assert_eq!($expected, s.as_slice());
+ assert_eq!($expected, s);
}
}
}
let a = "ประเ";
let a2 = "دولة الكويتทศไทย中华";
- assert_eq!(data.replace(a, repl).as_slice(), a2);
+ assert_eq!(data.replace(a, repl), a2);
}
#[test]
let b = "ะเ";
let b2 = "ปรدولة الكويتทศไทย中华";
- assert_eq!(data.replace(b, repl).as_slice(), b2);
+ assert_eq!(data.replace(b, repl), b2);
}
#[test]
let c = "中华";
let c2 = "ประเทศไทยدولة الكويت";
- assert_eq!(data.replace(c, repl).as_slice(), c2);
+ assert_eq!(data.replace(c, repl), c2);
}
#[test]
let repl = "دولة الكويت";
let d = "ไท华";
- assert_eq!(data.replace(d, repl).as_slice(), data);
+ assert_eq!(data.replace(d, repl), data);
}
#[test]
}
let letters = a_million_letter_x();
assert!(half_a_million_letter_x() ==
- String::from_str(letters.as_slice().slice(0u, 3u * 500000u)));
+ String::from_str(letters.slice(0u, 3u * 500000u)));
}
#[test]
let b: &[u8] = &[];
assert_eq!("".as_bytes(), b);
assert_eq!("abc".as_bytes(), b"abc");
- assert_eq!("ศไทย中华Việt Nam".as_bytes(), v.as_slice());
+ assert_eq!("ศไทย中华Việt Nam".as_bytes(), v);
}
#[test]
let string = "a\nb\nc";
let lines: Vec<&str> = string.lines().collect();
- let lines = lines.as_slice();
assert_eq!(string.subslice_offset(lines[0]), 0);
assert_eq!(string.subslice_offset(lines[1]), 2);
assert_eq!(string.subslice_offset(lines[2]), 4);
fn test_nfd_chars() {
macro_rules! t {
($input: expr, $expected: expr) => {
- assert_eq!($input.nfd_chars().collect::<String>(), $expected.into_string());
+ assert_eq!($input.nfd_chars().collect::<String>(), $expected);
}
}
t!("abc", "abc");
fn test_nfkd_chars() {
macro_rules! t {
($input: expr, $expected: expr) => {
- assert_eq!($input.nfkd_chars().collect::<String>(), $expected.into_string());
+ assert_eq!($input.nfkd_chars().collect::<String>(), $expected);
}
}
t!("abc", "abc");
fn test_nfc_chars() {
macro_rules! t {
($input: expr, $expected: expr) => {
- assert_eq!($input.nfc_chars().collect::<String>(), $expected.into_string());
+ assert_eq!($input.nfc_chars().collect::<String>(), $expected);
}
}
t!("abc", "abc");
fn test_nfkc_chars() {
macro_rules! t {
($input: expr, $expected: expr) => {
- assert_eq!($input.nfkc_chars().collect::<String>(), $expected.into_string());
+ assert_eq!($input.nfkc_chars().collect::<String>(), $expected);
}
}
t!("abc", "abc");
let s = "a̐éö̲\r\n";
let gr_inds = s.grapheme_indices(true).collect::<Vec<(uint, &str)>>();
let b: &[_] = &[(0u, "a̐"), (3, "é"), (6, "ö̲"), (11, "\r\n")];
- assert_eq!(gr_inds.as_slice(), b);
+ assert_eq!(gr_inds, b);
let gr_inds = s.grapheme_indices(true).rev().collect::<Vec<(uint, &str)>>();
let b: &[_] = &[(11, "\r\n"), (6, "ö̲"), (3, "é"), (0u, "a̐")];
- assert_eq!(gr_inds.as_slice(), b);
+ assert_eq!(gr_inds, b);
let mut gr_inds_iter = s.grapheme_indices(true);
{
let gr_inds = gr_inds_iter.by_ref();
let s = "\n\r\n\r";
let gr = s.graphemes(true).rev().collect::<Vec<&str>>();
let b: &[_] = &["\r", "\r\n", "\n"];
- assert_eq!(gr.as_slice(), b);
+ assert_eq!(gr, b);
}
#[test]
fn test_split_strator() {
fn t(s: &str, sep: &str, u: &[&str]) {
let v: Vec<&str> = s.split_str(sep).collect();
- assert_eq!(v.as_slice(), u.as_slice());
+ assert_eq!(v, u);
}
t("--1233345--", "12345", &["--1233345--"]);
t("abc::hello::there", "::", &["abc", "hello", "there"]);
#[inline]
#[unstable = "the panic conventions for strings are under development"]
pub fn truncate(&mut self, new_len: uint) {
- assert!(self.as_slice().is_char_boundary(new_len));
+ assert!(self.is_char_boundary(new_len));
self.vec.truncate(new_len)
}
return None
}
- let CharRange {ch, next} = self.as_slice().char_range_at_reverse(len);
+ let CharRange {ch, next} = self.char_range_at_reverse(len);
unsafe {
self.vec.set_len(next);
}
let len = self.len();
if idx >= len { return None }
- let CharRange { ch, next } = self.as_slice().char_range_at(idx);
+ let CharRange { ch, next } = self.char_range_at(idx);
unsafe {
ptr::copy_memory(self.vec.as_mut_ptr().offset(idx as int),
self.vec.as_ptr().offset(next as int),
pub fn insert(&mut self, idx: uint, ch: char) {
let len = self.len();
assert!(idx <= len);
- assert!(self.as_slice().is_char_boundary(idx));
+ assert!(self.is_char_boundary(idx));
self.vec.reserve(4);
let mut bits = [0, ..4];
let amt = ch.encode_utf8(&mut bits).unwrap();
for p in pairs.iter() {
let (s, u) = (*p).clone();
- let s_as_utf16 = s.as_slice().utf16_units().collect::<Vec<u16>>();
+ let s_as_utf16 = s.utf16_units().collect::<Vec<u16>>();
let u_as_string = String::from_utf16(u.as_slice()).unwrap();
assert!(str::is_utf16(u.as_slice()));
assert_eq!(String::from_utf16_lossy(u.as_slice()), s);
assert_eq!(String::from_utf16(s_as_utf16.as_slice()).unwrap(), s);
- assert_eq!(u_as_string.as_slice().utf16_units().collect::<Vec<u16>>(), u);
+ assert_eq!(u_as_string.utf16_units().collect::<Vec<u16>>(), u);
}
}
let mv = s.as_mut_vec();
mv.push_all(&[b'D']);
}
- assert_eq!(s.as_slice(), "ABCD");
+ assert_eq!(s, "ABCD");
}
#[test]
fn test_push_str() {
let mut s = String::new();
s.push_str("");
- assert_eq!(s.as_slice().slice_from(0), "");
+ assert_eq!(s.slice_from(0), "");
s.push_str("abc");
- assert_eq!(s.as_slice().slice_from(0), "abc");
+ assert_eq!(s.slice_from(0), "abc");
s.push_str("ประเทศไทย中华Việt Nam");
- assert_eq!(s.as_slice().slice_from(0), "abcประเทศไทย中华Việt Nam");
+ assert_eq!(s.slice_from(0), "abcประเทศไทย中华Việt Nam");
}
#[test]
data.push('¢'); // 2 byte
data.push('€'); // 3 byte
data.push('𤭢'); // 4 byte
- assert_eq!(data.as_slice(), "ประเทศไทย中华b¢€𤭢");
+ assert_eq!(data, "ประเทศไทย中华b¢€𤭢");
}
#[test]
assert_eq!(data.pop().unwrap(), '¢'); // 2 bytes
assert_eq!(data.pop().unwrap(), 'b'); // 1 byte
assert_eq!(data.pop().unwrap(), '华');
- assert_eq!(data.as_slice(), "ประเทศไทย中");
+ assert_eq!(data, "ประเทศไทย中");
}
#[test]
fn test_str_truncate() {
let mut s = String::from_str("12345");
s.truncate(5);
- assert_eq!(s.as_slice(), "12345");
+ assert_eq!(s, "12345");
s.truncate(3);
- assert_eq!(s.as_slice(), "123");
+ assert_eq!(s, "123");
s.truncate(0);
- assert_eq!(s.as_slice(), "");
+ assert_eq!(s, "");
let mut s = String::from_str("12345");
- let p = s.as_slice().as_ptr();
+ let p = s.as_ptr();
s.truncate(3);
s.push_str("6");
- let p_ = s.as_slice().as_ptr();
+ let p_ = s.as_ptr();
assert_eq!(p_, p);
}
let mut s = String::from_str("12345");
s.clear();
assert_eq!(s.len(), 0);
- assert_eq!(s.as_slice(), "");
+ assert_eq!(s, "");
}
#[test]
let b = a + "2";
let b = b + String::from_str("2");
assert_eq!(b.len(), 7);
- assert_eq!(b.as_slice(), "1234522");
+ assert_eq!(b, "1234522");
}
#[test]
let mut s = "ศไทย中华Việt Nam; foobar".to_string();
assert_eq!(s.remove(0), Some('ศ'));
assert_eq!(s.len(), 33);
- assert_eq!(s.as_slice(), "ไทย中华Việt Nam; foobar");
+ assert_eq!(s, "ไทย中华Việt Nam; foobar");
assert_eq!(s.remove(33), None);
assert_eq!(s.remove(300), None);
assert_eq!(s.remove(17), Some('ệ'));
- assert_eq!(s.as_slice(), "ไทย中华Vit Nam; foobar");
+ assert_eq!(s, "ไทย中华Vit Nam; foobar");
}
#[test] #[should_fail]
fn insert() {
let mut s = "foobar".to_string();
s.insert(0, 'ệ');
- assert_eq!(s.as_slice(), "ệfoobar");
+ assert_eq!(s, "ệfoobar");
s.insert(6, 'ย');
- assert_eq!(s.as_slice(), "ệfooยbar");
+ assert_eq!(s, "ệfooยbar");
}
#[test] #[should_fail] fn insert_bad1() { "".to_string().insert(1, 't'); }
#[test]
fn test_simple_types() {
- assert_eq!(1i.to_string(), "1".to_string());
- assert_eq!((-1i).to_string(), "-1".to_string());
- assert_eq!(200u.to_string(), "200".to_string());
- assert_eq!(2u8.to_string(), "2".to_string());
- assert_eq!(true.to_string(), "true".to_string());
- assert_eq!(false.to_string(), "false".to_string());
- assert_eq!(().to_string(), "()".to_string());
- assert_eq!(("hi".to_string()).to_string(), "hi".to_string());
+ assert_eq!(1i.to_string(), "1");
+ assert_eq!((-1i).to_string(), "-1");
+ assert_eq!(200u.to_string(), "200");
+ assert_eq!(2u8.to_string(), "2");
+ assert_eq!(true.to_string(), "true");
+ assert_eq!(false.to_string(), "false");
+ assert_eq!(().to_string(), "()");
+ assert_eq!(("hi".to_string()).to_string(), "hi");
}
#[test]
fn test_vectors() {
let x: Vec<int> = vec![];
- assert_eq!(x.to_string(), "[]".to_string());
- assert_eq!((vec![1i]).to_string(), "[1]".to_string());
- assert_eq!((vec![1i, 2, 3]).to_string(), "[1, 2, 3]".to_string());
+ assert_eq!(x.to_string(), "[]");
+ assert_eq!((vec![1i]).to_string(), "[1]");
+ assert_eq!((vec![1i, 2, 3]).to_string(), "[1, 2, 3]");
assert!((vec![vec![], vec![1i], vec![1i, 1]]).to_string() ==
- "[[], [1], [1, 1]]".to_string());
+ "[[], [1], [1, 1]]");
}
#[bench]
let map_str = format!("{}", map);
- assert!(map_str == "{1: 2, 3: 4}".to_string());
- assert_eq!(format!("{}", empty), "{}".to_string());
+ assert!(map_str == "{1: 2, 3: 4}");
+ assert_eq!(format!("{}", empty), "{}");
}
#[test]
use tree_map::{TreeMap, Entries, RevEntries, MoveEntries};
// FIXME(conventions): implement bounded iterators
-// FIXME(conventions): implement BitOr, BitAnd, BitXor, and Sub
// FIXME(conventions): replace rev_iter(_mut) by making iter(_mut) DoubleEnded
/// An implementation of the `Set` trait on top of the `TreeMap` container. The
}
}
+#[unstable = "matches collection reform specification, waiting for dust to settle"]
+impl<T: Ord + Clone> BitOr<TreeSet<T>, TreeSet<T>> for TreeSet<T> {
+ /// Returns the union of `self` and `rhs` as a new `TreeSet<T>`.
+ ///
+ /// # Example
+ ///
+ /// ```
+ /// use std::collections::TreeSet;
+ ///
+ /// let a: TreeSet<int> = vec![1, 2, 3].into_iter().collect();
+ /// let b: TreeSet<int> = vec![3, 4, 5].into_iter().collect();
+ ///
+ /// let set: TreeSet<int> = a | b;
+ /// let v: Vec<int> = set.into_iter().collect();
+ /// assert_eq!(v, vec![1, 2, 3, 4, 5]);
+ /// ```
+ fn bitor(&self, rhs: &TreeSet<T>) -> TreeSet<T> {
+ self.union(rhs).cloned().collect()
+ }
+}
+
+#[unstable = "matches collection reform specification, waiting for dust to settle"]
+impl<T: Ord + Clone> BitAnd<TreeSet<T>, TreeSet<T>> for TreeSet<T> {
+ /// Returns the intersection of `self` and `rhs` as a new `TreeSet<T>`.
+ ///
+ /// # Example
+ ///
+ /// ```
+ /// use std::collections::TreeSet;
+ ///
+ /// let a: TreeSet<int> = vec![1, 2, 3].into_iter().collect();
+ /// let b: TreeSet<int> = vec![2, 3, 4].into_iter().collect();
+ ///
+ /// let set: TreeSet<int> = a & b;
+ /// let v: Vec<int> = set.into_iter().collect();
+ /// assert_eq!(v, vec![2, 3]);
+ /// ```
+ fn bitand(&self, rhs: &TreeSet<T>) -> TreeSet<T> {
+ self.intersection(rhs).cloned().collect()
+ }
+}
+
+#[unstable = "matches collection reform specification, waiting for dust to settle"]
+impl<T: Ord + Clone> BitXor<TreeSet<T>, TreeSet<T>> for TreeSet<T> {
+ /// Returns the symmetric difference of `self` and `rhs` as a new `TreeSet<T>`.
+ ///
+ /// # Example
+ ///
+ /// ```
+ /// use std::collections::TreeSet;
+ ///
+ /// let a: TreeSet<int> = vec![1, 2, 3].into_iter().collect();
+ /// let b: TreeSet<int> = vec![3, 4, 5].into_iter().collect();
+ ///
+ /// let set: TreeSet<int> = a ^ b;
+ /// let v: Vec<int> = set.into_iter().collect();
+ /// assert_eq!(v, vec![1, 2, 4, 5]);
+ /// ```
+ fn bitxor(&self, rhs: &TreeSet<T>) -> TreeSet<T> {
+ self.symmetric_difference(rhs).cloned().collect()
+ }
+}
+
+#[unstable = "matches collection reform specification, waiting for dust to settle"]
+impl<T: Ord + Clone> Sub<TreeSet<T>, TreeSet<T>> for TreeSet<T> {
+ /// Returns the difference of `self` and `rhs` as a new `TreeSet<T>`.
+ ///
+ /// # Example
+ ///
+ /// ```
+ /// use std::collections::TreeSet;
+ ///
+ /// let a: TreeSet<int> = vec![1, 2, 3].into_iter().collect();
+ /// let b: TreeSet<int> = vec![3, 4, 5].into_iter().collect();
+ ///
+ /// let set: TreeSet<int> = a - b;
+ /// let v: Vec<int> = set.into_iter().collect();
+ /// assert_eq!(v, vec![1, 2]);
+ /// ```
+ fn sub(&self, rhs: &TreeSet<T>) -> TreeSet<T> {
+ self.difference(rhs).cloned().collect()
+ }
+}
+
impl<T: Ord> FromIterator<T> for TreeSet<T> {
fn from_iter<Iter: Iterator<T>>(iter: Iter) -> TreeSet<T> {
let mut set = TreeSet::new();
mod test {
use std::prelude::*;
use std::hash;
+ use vec::Vec;
use super::TreeSet;
&[-2, 1, 3, 5, 9, 11, 13, 16, 19, 24]);
}
+ #[test]
+ fn test_bit_or() {
+ let a: TreeSet<int> = vec![1, 3, 5, 9, 11, 16, 19, 24].into_iter().collect();
+ let b: TreeSet<int> = vec![-2, 1, 5, 9, 13, 19].into_iter().collect();
+
+ let set: TreeSet<int> = a | b;
+ let v: Vec<int> = set.into_iter().collect();
+ assert_eq!(v, vec![-2, 1, 3, 5, 9, 11, 13, 16, 19, 24]);
+ }
+
+ #[test]
+ fn test_bit_and() {
+ let a: TreeSet<int> = vec![11, 1, 3, 77, 103, 5, -5].into_iter().collect();
+ let b: TreeSet<int> = vec![2, 11, 77, -9, -42, 5, 3].into_iter().collect();
+
+ let set: TreeSet<int> = a & b;
+ let v: Vec<int> = set.into_iter().collect();
+ assert_eq!(v, vec![3, 5, 11, 77]);
+ }
+
+ #[test]
+ fn test_bit_xor() {
+ let a: TreeSet<int> = vec![1, 3, 5, 9, 11].into_iter().collect();
+ let b: TreeSet<int> = vec![-2, 3, 9, 14, 22].into_iter().collect();
+
+ let set: TreeSet<int> = a ^ b;
+ let v: Vec<int> = set.into_iter().collect();
+ assert_eq!(v, vec![-2, 1, 5, 11, 14, 22]);
+ }
+
+ #[test]
+ fn test_sub() {
+ let a: TreeSet<int> = vec![-5, 11, 22, 33, 40, 42].into_iter().collect();
+ let b: TreeSet<int> = vec![-12, -5, 14, 23, 34, 38, 39, 50].into_iter().collect();
+
+ let set: TreeSet<int> = a - b;
+ let v: Vec<int> = set.into_iter().collect();
+ assert_eq!(v, vec![11, 22, 33, 40, 42]);
+ }
+
#[test]
fn test_zip() {
let mut x = TreeSet::new();
let set_str = format!("{}", set);
- assert!(set_str == "{1, 2}".to_string());
- assert_eq!(format!("{}", empty), "{}".to_string());
+ assert!(set_str == "{1, 2}");
+ assert_eq!(format!("{}", empty), "{}");
}
}
let map_str = format!("{}", map);
- assert!(map_str == "{1: a, 2: b}".to_string());
- assert_eq!(format!("{}", empty), "{}".to_string());
+ assert!(map_str == "{1: a, 2: b}");
+ assert_eq!(format!("{}", empty), "{}");
}
#[test]
// except according to those terms.
// FIXME(conventions): implement bounded iterators
-// FIXME(conventions): implement BitOr, BitAnd, BitXor, and Sub
// FIXME(conventions): replace each_reverse by making iter DoubleEnded
// FIXME(conventions): implement iter_mut and into_iter
}
}
+#[unstable = "matches collection reform specification, waiting for dust to settle"]
+impl BitOr<TrieSet, TrieSet> for TrieSet {
+ /// Returns the union of `self` and `rhs` as a new `TrieSet`.
+ ///
+ /// # Example
+ ///
+ /// ```
+ /// use std::collections::TrieSet;
+ ///
+ /// let a: TrieSet = vec![1, 2, 3].into_iter().collect();
+ /// let b: TrieSet = vec![3, 4, 5].into_iter().collect();
+ ///
+ /// let set: TrieSet = a | b;
+ /// let v: Vec<uint> = set.iter().collect();
+ /// assert_eq!(v, vec![1u, 2, 3, 4, 5]);
+ /// ```
+ fn bitor(&self, rhs: &TrieSet) -> TrieSet {
+ self.union(rhs).collect()
+ }
+}
+
+#[unstable = "matches collection reform specification, waiting for dust to settle"]
+impl BitAnd<TrieSet, TrieSet> for TrieSet {
+ /// Returns the intersection of `self` and `rhs` as a new `TrieSet`.
+ ///
+ /// # Example
+ ///
+ /// ```
+ /// use std::collections::TrieSet;
+ ///
+ /// let a: TrieSet = vec![1, 2, 3].into_iter().collect();
+ /// let b: TrieSet = vec![2, 3, 4].into_iter().collect();
+ ///
+ /// let set: TrieSet = a & b;
+ /// let v: Vec<uint> = set.iter().collect();
+ /// assert_eq!(v, vec![2u, 3]);
+ /// ```
+ fn bitand(&self, rhs: &TrieSet) -> TrieSet {
+ self.intersection(rhs).collect()
+ }
+}
+
+#[unstable = "matches collection reform specification, waiting for dust to settle"]
+impl BitXor<TrieSet, TrieSet> for TrieSet {
+ /// Returns the symmetric difference of `self` and `rhs` as a new `TrieSet`.
+ ///
+ /// # Example
+ ///
+ /// ```
+ /// use std::collections::TrieSet;
+ ///
+ /// let a: TrieSet = vec![1, 2, 3].into_iter().collect();
+ /// let b: TrieSet = vec![3, 4, 5].into_iter().collect();
+ ///
+ /// let set: TrieSet = a ^ b;
+ /// let v: Vec<uint> = set.iter().collect();
+ /// assert_eq!(v, vec![1u, 2, 4, 5]);
+ /// ```
+ fn bitxor(&self, rhs: &TrieSet) -> TrieSet {
+ self.symmetric_difference(rhs).collect()
+ }
+}
+
+#[unstable = "matches collection reform specification, waiting for dust to settle"]
+impl Sub<TrieSet, TrieSet> for TrieSet {
+ /// Returns the difference of `self` and `rhs` as a new `TrieSet`.
+ ///
+ /// # Example
+ ///
+ /// ```
+ /// use std::collections::TrieSet;
+ ///
+ /// let a: TrieSet = vec![1, 2, 3].into_iter().collect();
+ /// let b: TrieSet = vec![3, 4, 5].into_iter().collect();
+ ///
+ /// let set: TrieSet = a - b;
+ /// let v: Vec<uint> = set.iter().collect();
+ /// assert_eq!(v, vec![1u, 2]);
+ /// ```
+ fn sub(&self, rhs: &TrieSet) -> TrieSet {
+ self.difference(rhs).collect()
+ }
+}
+
/// A forward iterator over a set.
pub struct SetItems<'a> {
iter: Entries<'a, ()>
mod test {
use std::prelude::*;
use std::uint;
+ use vec::Vec;
use super::TrieSet;
let set_str = format!("{}", set);
- assert!(set_str == "{1, 2}".to_string());
- assert_eq!(format!("{}", empty), "{}".to_string());
+ assert!(set_str == "{1, 2}");
+ assert_eq!(format!("{}", empty), "{}");
}
#[test]
&[1, 5, 9, 13, 19],
&[1, 3, 5, 9, 11, 13, 16, 19, 24]);
}
+
+ #[test]
+ fn test_bit_or() {
+ let a: TrieSet = vec![1, 2, 3].into_iter().collect();
+ let b: TrieSet = vec![3, 4, 5].into_iter().collect();
+
+ let set: TrieSet = a | b;
+ let v: Vec<uint> = set.iter().collect();
+ assert_eq!(v, vec![1u, 2, 3, 4, 5]);
+ }
+
+ #[test]
+ fn test_bit_and() {
+ let a: TrieSet = vec![1, 2, 3].into_iter().collect();
+ let b: TrieSet = vec![2, 3, 4].into_iter().collect();
+
+ let set: TrieSet = a & b;
+ let v: Vec<uint> = set.iter().collect();
+ assert_eq!(v, vec![2u, 3]);
+ }
+
+ #[test]
+ fn test_bit_xor() {
+ let a: TrieSet = vec![1, 2, 3].into_iter().collect();
+ let b: TrieSet = vec![3, 4, 5].into_iter().collect();
+
+ let set: TrieSet = a ^ b;
+ let v: Vec<uint> = set.iter().collect();
+ assert_eq!(v, vec![1u, 2, 4, 5]);
+ }
+
+ #[test]
+ fn test_sub() {
+ let a: TrieSet = vec![1, 2, 3].into_iter().collect();
+ let b: TrieSet = vec![3, 4, 5].into_iter().collect();
+
+ let set: TrieSet = a - b;
+ let v: Vec<uint> = set.iter().collect();
+ assert_eq!(v, vec![1u, 2]);
+ }
}
let mut xs = Vec::with_capacity(length);
while xs.len < length {
let len = xs.len;
- ptr::write(xs.as_mut_slice().unsafe_mut(len), op(len));
+ ptr::write(xs.unsafe_mut(len), op(len));
xs.len += 1;
}
xs
}
}
- /// Creates a `Vec<T>` directly from the raw constituents.
+ /// Creates a `Vec<T>` directly from the raw components of another vector.
///
- /// This is highly unsafe:
- ///
- /// - if `ptr` is null, then `length` and `capacity` should be 0
- /// - `ptr` must point to an allocation of size `capacity`
- /// - there must be `length` valid instances of type `T` at the
- /// beginning of that allocation
- /// - `ptr` must be allocated by the default `Vec` allocator
+ /// This is highly unsafe, due to the number of invariants that aren't checked.
///
/// # Example
///
let mut xs = Vec::with_capacity(length);
while xs.len < length {
let len = xs.len;
- ptr::write(xs.as_mut_slice().unsafe_mut(len),
+ ptr::write(xs.unsafe_mut(len),
value.clone());
xs.len += 1;
}
// during the loop can prevent this optimisation.
unsafe {
ptr::write(
- self.as_mut_slice().unsafe_mut(len),
+ self.unsafe_mut(len),
other.unsafe_get(i).clone());
self.set_len(len + 1);
}
Some(new_cap) => {
let amort_cap = new_cap.next_power_of_two();
// next_power_of_two will overflow to exactly 0 for really big capacities
- if amort_cap == 0 {
- self.grow_capacity(new_cap);
+ let cap = if amort_cap == 0 {
+ new_cap
} else {
- self.grow_capacity(amort_cap);
- }
+ amort_cap
+ };
+ self.grow_capacity(cap)
}
}
}
// decrement len before the read(), so a panic on Drop doesn't
// re-drop the just-failed value.
self.len -= 1;
- ptr::read(self.as_slice().unsafe_get(self.len));
+ ptr::read(self.unsafe_get(self.len));
}
}
}
pub fn swap_remove(&mut self, index: uint) -> Option<T> {
let length = self.len();
if length > 0 && index < length - 1 {
- self.as_mut_slice().swap(index, length - 1);
+ self.swap(index, length - 1);
} else if index >= length {
return None
}
} else {
unsafe {
self.len -= 1;
- Some(ptr::read(self.as_slice().unsafe_get(self.len())))
+ Some(ptr::read(self.unsafe_get(self.len())))
}
}
}
if ln < 1 { return; }
// Avoid bounds checks by using unsafe pointers.
- let p = self.as_mut_slice().as_mut_ptr();
+ let p = self.as_mut_ptr();
let mut r = 1;
let mut w = 1;
// zeroed (when moving out, because of #[unsafe_no_drop_flag]).
if self.cap != 0 {
unsafe {
- for x in self.as_mut_slice().iter() {
+ for x in self.iter() {
ptr::read(x);
}
dealloc(self.ptr, self.cap)
#[test]
fn test_as_vec() {
let xs = [1u8, 2u8, 3u8];
- assert_eq!(as_vec(&xs).as_slice(), xs.as_slice());
+ assert_eq!(as_vec(&xs).as_slice(), xs);
}
#[test]
}
}
- assert!(values.as_slice() == [1, 2, 5, 6, 7]);
+ assert!(values == [1, 2, 5, 6, 7]);
}
#[test]
}
}
- assert!(values.as_slice() == [2, 3, 3, 4, 5]);
+ assert!(values == [2, 3, 3, 4, 5]);
}
#[test]
let (left, right) = unzip(z1.iter().map(|&x| x));
- let (left, right) = (left.as_slice(), right.as_slice());
assert_eq!((1, 4), (left[0], right[0]));
assert_eq!((2, 5), (left[1], right[1]));
assert_eq!((3, 6), (left[2], right[2]));
#[test]
fn test_map_in_place() {
let v = vec![0u, 1, 2];
- assert_eq!(v.map_in_place(|i: uint| i as int - 1).as_slice(), [-1i, 0, 1].as_slice());
+ assert_eq!(v.map_in_place(|i: uint| i as int - 1), [-1i, 0, 1]);
}
#[test]
let v = vec![(), ()];
#[deriving(PartialEq, Show)]
struct ZeroSized;
- assert_eq!(v.map_in_place(|_| ZeroSized).as_slice(), [ZeroSized, ZeroSized].as_slice());
+ assert_eq!(v.map_in_place(|_| ZeroSized), [ZeroSized, ZeroSized]);
}
#[test]
fn test_into_boxed_slice() {
let xs = vec![1u, 2, 3];
let ys = xs.into_boxed_slice();
- assert_eq!(ys.as_slice(), [1u, 2, 3].as_slice());
+ assert_eq!(ys.as_slice(), [1u, 2, 3]);
}
#[bench]
VecMap { v: Vec::with_capacity(capacity) }
}
+ /// Returns the number of elements the `VecMap` can hold without
+ /// reallocating.
+ ///
+ /// # Example
+ ///
+ /// ```
+ /// use std::collections::VecMap;
+ /// let map: VecMap<String> = VecMap::with_capacity(10);
+ /// assert!(map.capacity() >= 10);
+ /// ```
+ #[inline]
+ #[unstable = "matches collection reform specification, waiting for dust to settle"]
+ pub fn capacity(&self) -> uint {
+ self.v.capacity()
+ }
+
/// Returns an iterator visiting all keys in ascending order by the keys.
/// The iterator's element type is `uint`.
#[unstable = "matches collection reform specification, waiting for dust to settle"]
map.insert(3, 4i);
let map_str = map.to_string();
- let map_str = map_str.as_slice();
assert!(map_str == "{1: 2, 3: 4}" || map_str == "{3: 4, 1: 2}");
- assert_eq!(format!("{}", empty), "{}".to_string());
+ assert_eq!(format!("{}", empty), "{}");
}
#[test]
#![stable]
use mem::{transmute};
-use option::{Option, Some, None};
+use option::Option;
+use option::Option::{Some, None};
use raw::TraitObject;
use intrinsics::TypeId;
use intrinsics;
use std::kinds::marker;
use cell::UnsafeCell;
+use kinds::Copy;
/// A boolean type which can be safely shared between threads.
#[stable]
SeqCst,
}
+impl Copy for Ordering {}
+
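These `impl Copy for …` lines reflect the opt-in built-in traits change: `Copy` is no longer implicit and each type must ask for it. A minimal modern-Rust sketch of the same opt-in (today `Copy` also requires a `Clone` impl):

```rust
#[derive(Clone, PartialEq, Debug)]
enum Ordering { Relaxed, SeqCst }

// Explicit opt-in, as in the patch, rather than deriving Copy.
impl Copy for Ordering {}

fn main() {
    let a = Ordering::SeqCst;
    let b = a;        // a copy, not a move
    assert_eq!(a, b); // `a` is still usable afterwards
    println!("ok");
}
```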
/// An `AtomicBool` initialized to `false`.
#[unstable = "may be renamed, pending conventions for static initalizers"]
pub const INIT_ATOMIC_BOOL: AtomicBool =
Owned(T)
}
+impl<'a, T, Sized? B> Clone for Cow<'a, T, B> where B: ToOwned<T> {
+ fn clone(&self) -> Cow<'a, T, B> {
+ match *self {
+ Borrowed(b) => Borrowed(b),
+ Owned(ref o) => {
+ let b: &B = BorrowFrom::borrow_from(o);
+ Owned(b.to_owned())
+ },
+ }
+ }
+}
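The `Clone` impl added above preserves the borrowed/owned split: cloning a `Borrowed` stays borrowed (no allocation), while cloning an `Owned` round-trips through `borrow`/`to_owned`. Today's `std::borrow::Cow` behaves the same way (a sketch, not this patch's code):

```rust
use std::borrow::Cow;

fn main() {
    let borrowed: Cow<str> = Cow::Borrowed("hello");
    let owned: Cow<str> = Cow::Owned(String::from("hello"));

    // Cloning preserves the variant.
    assert!(matches!(borrowed.clone(), Cow::Borrowed(_)));
    assert!(matches!(owned.clone(), Cow::Owned(_)));
    println!("ok");
}
```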
+
impl<'a, T, Sized? B> Cow<'a, T, B> where B: ToOwned<T> {
/// Acquire a mutable reference to the owned form of the data.
///
use default::Default;
use kinds::{marker, Copy};
use ops::{Deref, DerefMut, Drop};
-use option::{None, Option, Some};
+use option::Option;
+use option::Option::{None, Some};
/// A mutable memory location that admits only `Copy` data.
#[unstable = "likely to be renamed; otherwise stable"]
/// Returns `None` if the value is currently mutably borrowed.
#[unstable = "may be renamed, depending on global conventions"]
pub fn try_borrow<'a>(&'a self) -> Option<Ref<'a, T>> {
- match self.borrow.get() {
- WRITING => None,
- borrow => {
- self.borrow.set(borrow + 1);
- Some(Ref { _parent: self })
- }
+ match BorrowRef::new(&self.borrow) {
+ Some(b) => Some(Ref { _value: unsafe { &*self.value.get() }, _borrow: b }),
+ None => None,
}
}
/// Returns `None` if the value is currently borrowed.
#[unstable = "may be renamed, depending on global conventions"]
pub fn try_borrow_mut<'a>(&'a self) -> Option<RefMut<'a, T>> {
- match self.borrow.get() {
- UNUSED => {
- self.borrow.set(WRITING);
- Some(RefMut { _parent: self })
- },
- _ => None
+ match BorrowRefMut::new(&self.borrow) {
+ Some(b) => Some(RefMut { _value: unsafe { &mut *self.value.get() }, _borrow: b }),
+ None => None,
}
}
}
}
-/// Wraps a borrowed reference to a value in a `RefCell` box.
-#[unstable]
-pub struct Ref<'b, T:'b> {
- // FIXME #12808: strange name to try to avoid interfering with
- // field accesses of the contained type via Deref
- _parent: &'b RefCell<T>
+struct BorrowRef<'b> {
+ _borrow: &'b Cell<BorrowFlag>,
+}
+
+impl<'b> BorrowRef<'b> {
+ fn new(borrow: &'b Cell<BorrowFlag>) -> Option<BorrowRef<'b>> {
+ match borrow.get() {
+ WRITING => None,
+ b => {
+ borrow.set(b + 1);
+ Some(BorrowRef { _borrow: borrow })
+ },
+ }
+ }
}
#[unsafe_destructor]
-#[unstable]
-impl<'b, T> Drop for Ref<'b, T> {
+impl<'b> Drop for BorrowRef<'b> {
fn drop(&mut self) {
- let borrow = self._parent.borrow.get();
+ let borrow = self._borrow.get();
debug_assert!(borrow != WRITING && borrow != UNUSED);
- self._parent.borrow.set(borrow - 1);
+ self._borrow.set(borrow - 1);
}
}
+impl<'b> Clone for BorrowRef<'b> {
+ fn clone(&self) -> BorrowRef<'b> {
+ // Since this BorrowRef exists, we know the borrow flag
+ // is not set to WRITING.
+ let borrow = self._borrow.get();
+ debug_assert!(borrow != WRITING && borrow != UNUSED);
+ self._borrow.set(borrow + 1);
+ BorrowRef { _borrow: self._borrow }
+ }
+}
+
+/// Wraps a borrowed reference to a value in a `RefCell` box.
+#[unstable]
+pub struct Ref<'b, T:'b> {
+ // FIXME #12808: strange name to try to avoid interfering with
+ // field accesses of the contained type via Deref
+ _value: &'b T,
+ _borrow: BorrowRef<'b>,
+}
+
#[unstable = "waiting for `Deref` to become stable"]
impl<'b, T> Deref<T> for Ref<'b, T> {
#[inline]
fn deref<'a>(&'a self) -> &'a T {
- unsafe { &*self._parent.value.get() }
+ self._value
}
}
/// A `Clone` implementation would interfere with the widespread
/// use of `r.borrow().clone()` to clone the contents of a `RefCell`.
#[experimental = "likely to be moved to a method, pending language changes"]
-pub fn clone_ref<'b, T>(orig: &Ref<'b, T>) -> Ref<'b, T> {
- // Since this Ref exists, we know the borrow flag
- // is not set to WRITING.
- let borrow = orig._parent.borrow.get();
- debug_assert!(borrow != WRITING && borrow != UNUSED);
- orig._parent.borrow.set(borrow + 1);
-
+pub fn clone_ref<'b, T>(orig: &Ref<'b, T>) -> Ref<'b, T> {
Ref {
- _parent: orig._parent,
+ _value: orig._value,
+ _borrow: orig._borrow.clone(),
}
}
-/// Wraps a mutable borrowed reference to a value in a `RefCell` box.
-#[unstable]
-pub struct RefMut<'b, T:'b> {
- // FIXME #12808: strange name to try to avoid interfering with
- // field accesses of the contained type via Deref
- _parent: &'b RefCell<T>
+struct BorrowRefMut<'b> {
+ _borrow: &'b Cell<BorrowFlag>,
}
#[unsafe_destructor]
-#[unstable]
-impl<'b, T> Drop for RefMut<'b, T> {
+impl<'b> Drop for BorrowRefMut<'b> {
fn drop(&mut self) {
- let borrow = self._parent.borrow.get();
+ let borrow = self._borrow.get();
debug_assert!(borrow == WRITING);
- self._parent.borrow.set(UNUSED);
+ self._borrow.set(UNUSED);
}
}
+impl<'b> BorrowRefMut<'b> {
+ fn new(borrow: &'b Cell<BorrowFlag>) -> Option<BorrowRefMut<'b>> {
+ match borrow.get() {
+ UNUSED => {
+ borrow.set(WRITING);
+ Some(BorrowRefMut { _borrow: borrow })
+ },
+ _ => None,
+ }
+ }
+}
+
+/// Wraps a mutable borrowed reference to a value in a `RefCell` box.
+#[unstable]
+pub struct RefMut<'b, T:'b> {
+ // FIXME #12808: strange name to try to avoid interfering with
+ // field accesses of the contained type via Deref
+ _value: &'b mut T,
+ _borrow: BorrowRefMut<'b>,
+}
+
#[unstable = "waiting for `Deref` to become stable"]
impl<'b, T> Deref<T> for RefMut<'b, T> {
#[inline]
fn deref<'a>(&'a self) -> &'a T {
- unsafe { &*self._parent.value.get() }
+ self._value
}
}
impl<'b, T> DerefMut<T> for RefMut<'b, T> {
#[inline]
fn deref_mut<'a>(&'a mut self) -> &'a mut T {
- unsafe { &mut *self._parent.value.get() }
+ self._value
}
}
#![doc(primitive = "char")]
use mem::transmute;
-use option::{None, Option, Some};
+use option::Option;
+use option::Option::{None, Some};
use iter::{range_step, Iterator, RangeStep};
use slice::SlicePrelude;
}
}
}
+
pub use self::Ordering::*;
-use kinds::Sized;
+use kinds::{Copy, Sized};
use option::{Option, Some, None};
/// Trait for values that can be compared for equality and inequality.
Greater = 1i,
}
+impl Copy for Ordering {}
+
impl Ordering {
/// Reverse the `Ordering`, so that `Less` becomes `Greater` and
/// vice versa.
// Implementation of PartialEq, Eq, PartialOrd and Ord for primitive types
mod impls {
- use cmp::{PartialOrd, Ord, PartialEq, Eq, Ordering,
- Less, Greater, Equal};
+ use cmp::{PartialOrd, Ord, PartialEq, Eq, Ordering};
+ use cmp::Ordering::{Less, Greater, Equal};
use kinds::Sized;
- use option::{Option, Some, None};
+ use option::Option;
+ use option::Option::{Some, None};
macro_rules! partial_eq_impl(
($($t:ty)*) => ($(
use iter::{range, DoubleEndedIteratorExt};
use num::{Float, FPNaN, FPInfinite, ToPrimitive};
use num::cast;
-use result::Ok;
+use result::Result::Ok;
use slice::{mod, SlicePrelude};
use str::StrPrelude;
use iter::{Iterator, IteratorExt, range};
use kinds::{Copy, Sized};
use mem;
-use option::{Option, Some, None};
+use option::Option;
+use option::Option::{Some, None};
use ops::Deref;
-use result::{Ok, Err};
+use result::Result::{Ok, Err};
use result;
use slice::SlicePrelude;
use slice;
pub mod rt;
#[experimental = "core and I/O reconciliation may alter this definition"]
+/// The type returned by formatter methods.
pub type Result = result::Result<(), Error>;
/// The error type which is returned from formatting a message into a stream.
#[experimental = "core and I/O reconciliation may alter this definition"]
pub struct Error;
+impl Copy for Error {}
+
/// A collection of methods that are required to format a message into a stream.
///
/// This trait is the type which this modules requires when formatting
args: &'a [Argument<'a>],
}
+// NB. Argument is essentially an optimized partially applied formatting function,
+// equivalent to `exists T.(&T, fn(&T, &mut Formatter) -> Result)`.
+
enum Void {}
/// This struct represents the generic "argument" which is taken by the Xprintf
/// types, and then this struct is used to canonicalize arguments to one type.
#[experimental = "implementation detail of the `format_args!` macro"]
pub struct Argument<'a> {
- formatter: extern "Rust" fn(&Void, &mut Formatter) -> Result,
value: &'a Void,
+ formatter: fn(&Void, &mut Formatter) -> Result,
}
+impl<'a> Argument<'a> {
+ #[inline(never)]
+ fn show_uint(x: &uint, f: &mut Formatter) -> Result {
+ Show::fmt(x, f)
+ }
+
+ fn new<'a, T>(x: &'a T, f: fn(&T, &mut Formatter) -> Result) -> Argument<'a> {
+ unsafe {
+ Argument {
+ formatter: mem::transmute(f),
+ value: mem::transmute(x)
+ }
+ }
+ }
+
+ fn from_uint<'a>(x: &'a uint) -> Argument<'a> {
+ Argument::new(x, Argument::show_uint)
+ }
+
+ fn as_uint(&self) -> Option<uint> {
+ if self.formatter as uint == Argument::show_uint as uint {
+ Some(unsafe { *(self.value as *const _ as *const uint) })
+ } else {
+ None
+ }
+ }
+}
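`as_uint` above recovers a width/precision argument by checking whether the stored formatter is exactly `show_uint` — a function-pointer identity test, replacing the previous unchecked pointer read. The trick can be sketched on its own (hypothetical names, simplified signatures):

```rust
fn show_uint(x: &usize) -> String { x.to_string() }
fn other(x: &usize) -> String { format!("<{}>", x) }

/// Detect a specific formatter by comparing function-pointer addresses,
/// as `Argument::as_uint` does for the type-erased formatter.
fn is_show_uint(f: fn(&usize) -> String) -> bool {
    f as usize == show_uint as usize
}

fn main() {
    assert!(is_show_uint(show_uint));
    assert!(!is_show_uint(other));
    println!("ok");
}
```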
+
+impl<'a> Copy for Argument<'a> {}
+
impl<'a> Arguments<'a> {
/// When using the format_args!() macro, this function is used to generate the
- /// Arguments structure. The compiler inserts an `unsafe` block to call this,
- /// which is valid because the compiler performs all necessary validation to
- /// ensure that the resulting call to format/write would be safe.
+ /// Arguments structure.
#[doc(hidden)] #[inline]
#[experimental = "implementation detail of the `format_args!` macro"]
- pub unsafe fn new<'a>(pieces: &'static [&'static str],
- args: &'a [Argument<'a>]) -> Arguments<'a> {
+ pub fn new<'a>(pieces: &'a [&'a str],
+ args: &'a [Argument<'a>]) -> Arguments<'a> {
Arguments {
- pieces: mem::transmute(pieces),
+ pieces: pieces,
fmt: None,
args: args
}
/// This function is used to specify nonstandard formatting parameters.
/// The `pieces` array must be at least as long as `fmt` to construct
- /// a valid Arguments structure.
+ /// a valid Arguments structure. Also, any `Count` within `fmt` that is
+ /// `CountIsParam` or `CountIsNextParam` has to point to an argument
+ /// created with `argumentuint`. However, failing to do so doesn't cause
+ /// unsafety; the invalid count is simply ignored.
#[doc(hidden)] #[inline]
#[experimental = "implementation detail of the `format_args!` macro"]
- pub unsafe fn with_placeholders<'a>(pieces: &'static [&'static str],
- fmt: &'static [rt::Argument<'static>],
- args: &'a [Argument<'a>]) -> Arguments<'a> {
+ pub fn with_placeholders<'a>(pieces: &'a [&'a str],
+ fmt: &'a [rt::Argument<'a>],
+ args: &'a [Argument<'a>]) -> Arguments<'a> {
Arguments {
- pieces: mem::transmute(pieces),
- fmt: Some(mem::transmute(fmt)),
+ pieces: pieces,
+ fmt: Some(fmt),
args: args
}
}
fn getcount(&mut self, cnt: &rt::Count) -> Option<uint> {
match *cnt {
- rt::CountIs(n) => { Some(n) }
- rt::CountImplied => { None }
+ rt::CountIs(n) => Some(n),
+ rt::CountImplied => None,
rt::CountIsParam(i) => {
- let v = self.args[i].value;
- unsafe { Some(*(v as *const _ as *const uint)) }
+ self.args[i].as_uint()
}
rt::CountIsNextParam => {
- let v = self.curarg.next().unwrap().value;
- unsafe { Some(*(v as *const _ as *const uint)) }
+ self.curarg.next().and_then(|arg| arg.as_uint())
}
}
}
/// create the Argument structures that are passed into the `format` function.
#[doc(hidden)] #[inline]
#[experimental = "implementation detail of the `format_args!` macro"]
-pub fn argument<'a, T>(f: extern "Rust" fn(&T, &mut Formatter) -> Result,
+pub fn argument<'a, T>(f: fn(&T, &mut Formatter) -> Result,
t: &'a T) -> Argument<'a> {
- unsafe {
- Argument {
- formatter: mem::transmute(f),
- value: mem::transmute(t)
- }
- }
-}
-
-/// When the compiler determines that the type of an argument *must* be a string
-/// (such as for select), then it invokes this method.
-#[doc(hidden)] #[inline]
-#[experimental = "implementation detail of the `format_args!` macro"]
-pub fn argumentstr<'a>(s: &'a &str) -> Argument<'a> {
- argument(Show::fmt, s)
+ Argument::new(t, f)
}
/// When the compiler determines that the type of an argument *must* be a uint
-/// (such as for plural), then it invokes this method.
+/// (such as for width and precision), then it invokes this method.
#[doc(hidden)] #[inline]
#[experimental = "implementation detail of the `format_args!` macro"]
pub fn argumentuint<'a>(s: &'a uint) -> Argument<'a> {
- argument(Show::fmt, s)
+ Argument::from_uint(s)
}
// Implementations of the core formatting traits
use fmt;
use iter::DoubleEndedIteratorExt;
+use kinds::Copy;
use num::{Int, cast};
use slice::SlicePrelude;
base: u8,
}
+impl Copy for Radix {}
+
impl Radix {
fn new(base: u8) -> Radix {
assert!(2 <= base && base <= 36, "the base must be in the range of 2..36: {}", base);
#[unstable = "may be renamed or move to a different module"]
pub struct RadixFmt<T, R>(T, R);
+impl<T,R> Copy for RadixFmt<T,R> where T: Copy, R: Copy {}
+
/// Constructs a radix formatter in the range of `2..36`.
///
/// # Example
pub use self::Count::*;
pub use self::Position::*;
pub use self::Flag::*;
+use kinds::Copy;
#[doc(hidden)]
pub struct Argument<'a> {
pub format: FormatSpec,
}
+impl<'a> Copy for Argument<'a> {}
+
#[doc(hidden)]
pub struct FormatSpec {
pub fill: char,
pub width: Count,
}
+impl Copy for FormatSpec {}
+
/// Possible alignments that can be requested as part of a formatting directive.
#[deriving(PartialEq)]
pub enum Alignment {
AlignUnknown,
}
+impl Copy for Alignment {}
+
#[doc(hidden)]
pub enum Count {
CountIs(uint), CountIsParam(uint), CountIsNextParam, CountImplied,
}
+impl Copy for Count {}
+
#[doc(hidden)]
pub enum Position {
ArgumentNext, ArgumentIs(uint)
}
+impl Copy for Position {}
+
/// Flags which can be passed to formatting via a directive.
///
/// These flags are discovered through the `flags` field of the `Formatter`
/// being aware of the sign to be printed.
FlagSignAwareZeroPad,
}
+
+impl Copy for Flag {}
//! rustc compiler intrinsics.
//!
-//! The corresponding definitions are in librustc/middle/trans/foreign.rs.
+//! The corresponding definitions are in librustc_trans/trans/intrinsic.rs.
//!
//! # Volatiles
//!
#![experimental]
#![allow(missing_docs)]
+use kinds::Copy;
+
pub type GlueFn = extern "Rust" fn(*const i8);
#[lang="ty_desc"]
pub name: &'static str,
}
+impl Copy for TyDesc {}
+
extern "rust-intrinsic" {
// NB: These intrinsics take unsafe pointers because they mutate aliased
t: u64,
}
+impl Copy for TypeId {}
+
impl TypeId {
/// Returns the `TypeId` of the type this generic function has been instantiated with
pub fn of<T: 'static>() -> TypeId {
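`TypeId` and its `of` method were later stabilized in `std::any`, and behave as this hunk describes: each `'static` type gets a distinct, comparable id (modern sketch):

```rust
use std::any::TypeId;

fn main() {
    // Same instantiation, same id; different types, different ids.
    assert_eq!(TypeId::of::<u32>(), TypeId::of::<u32>());
    assert_ne!(TypeId::of::<u32>(), TypeId::of::<i32>());
    println!("ok");
}
```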
use clone::Clone;
use cmp;
use cmp::Ord;
+use kinds::Copy;
use mem;
use num::{ToPrimitive, Int};
use ops::{Add, Deref};
-use option::{Option, Some, None};
+use option::Option;
+use option::Option::{Some, None};
use uint;
#[deprecated = "renamed to Extend"] pub use self::Extend as Extendable;
iter: T,
}
-#[unstable = "trait is unstable"]
+impl<T:Copy> Copy for Cycle<T> {}
+
impl<A, T: Clone + Iterator<A>> Iterator<A> for Cycle<T> {
#[inline]
fn next(&mut self) -> Option<A> {
peeked: Option<A>,
}
-#[unstable = "trait is unstable"]
+impl<T:Copy,A:Copy> Copy for Peekable<A,T> {}
+
impl<A, T: Iterator<A>> Iterator<A> for Peekable<A, T> {
#[inline]
fn next(&mut self) -> Option<A> {
}
}
-/// An iterator which just modifies the contained state throughout iteration.
+/// An iterator which passes mutable state to a closure and yields the result.
+///
+/// # Example: The Fibonacci Sequence
+///
+/// An iterator that yields sequential Fibonacci numbers, and stops on overflow.
+///
+/// ```rust
+/// use std::iter::Unfold;
+/// use std::num::Int; // For `.checked_add()`
+///
+/// // This iterator will yield up to the last Fibonacci number before the max value of `u32`.
+/// // You can simply change `u32` to `u64` in this line if you want higher values than that.
+/// let mut fibonacci = Unfold::new((Some(0u32), Some(1u32)), |&(ref mut x2, ref mut x1)| {
+/// // Attempt to get the next Fibonacci number
+/// // `x1` will be `None` if previously overflowed.
+/// let next = match (*x2, *x1) {
+/// (Some(x2), Some(x1)) => x2.checked_add(x1),
+/// _ => None,
+/// };
+///
+/// // Shift left: ret <- x2 <- x1 <- next
+/// let ret = *x2;
+/// *x2 = *x1;
+/// *x1 = next;
+///
+/// ret
+/// });
+///
+/// for i in fibonacci {
+/// println!("{}", i);
+/// }
+/// ```
#[experimental]
pub struct Unfold<'a, A, St> {
f: |&mut St|: 'a -> Option<A>,
- /// Internal state that will be yielded on the next iteration
+ /// Internal state that will be passed to the closure on the next iteration
pub state: St,
}
#[experimental]
impl<'a, A, St> Unfold<'a, A, St> {
/// Creates a new iterator with the specified closure as the "iterator
- /// function" and an initial state to eventually pass to the iterator
+ /// function" and an initial state to eventually pass to the closure
#[inline]
pub fn new<'a>(initial_state: St, f: |&mut St|: 'a -> Option<A>)
-> Unfold<'a, A, St> {
step: A,
}
+impl<A:Copy> Copy for Counter<A> {}
+
/// Creates a new counter with the specified start/step
#[inline]
#[unstable = "may be renamed"]
one: A,
}
+impl<A:Copy> Copy for Range<A> {}
+
/// Returns an iterator over the given range [start, stop) (that is, starting
/// at start (inclusive), and ending at stop (exclusive)).
///
pub mod order {
use cmp;
use cmp::{Eq, Ord, PartialOrd, PartialEq};
- use option::{Option, Some, None};
+ use cmp::Ordering::{Equal, Less, Greater};
+ use option::Option;
+ use option::Option::{Some, None};
use super::Iterator;
/// Compare `a` and `b` for equality using `Eq`
pub fn cmp<A: Ord, T: Iterator<A>, S: Iterator<A>>(mut a: T, mut b: S) -> cmp::Ordering {
loop {
match (a.next(), b.next()) {
- (None, None) => return cmp::Equal,
- (None, _ ) => return cmp::Less,
- (_ , None) => return cmp::Greater,
+ (None, None) => return Equal,
+ (None, _ ) => return Less,
+ (_ , None) => return Greater,
(Some(x), Some(y)) => match x.cmp(&y) {
- cmp::Equal => (),
+ Equal => (),
non_eq => return non_eq,
},
}
-> Option<cmp::Ordering> {
loop {
match (a.next(), b.next()) {
- (None, None) => return Some(cmp::Equal),
- (None, _ ) => return Some(cmp::Less),
- (_ , None) => return Some(cmp::Greater),
+ (None, None) => return Some(Equal),
+ (None, _ ) => return Some(Less),
+ (_ , None) => return Some(Greater),
(Some(x), Some(y)) => match x.partial_cmp(&y) {
- Some(cmp::Equal) => (),
+ Some(Equal) => (),
non_eq => return non_eq,
},
}
/// implemented using unsafe code. In that case, you may want to embed
/// some of the marker types below into your type.
pub mod marker {
+ use super::Copy;
+
/// A marker type whose type parameter `T` is considered to be
/// covariant with respect to the type itself. This is (typically)
/// used to indicate that an instance of the type `T` is being stored
#[deriving(Clone, PartialEq, Eq, PartialOrd, Ord)]
pub struct CovariantType<T>;
+ impl<T> Copy for CovariantType<T> {}
+
/// A marker type whose type parameter `T` is considered to be
/// contravariant with respect to the type itself. This is (typically)
/// used to indicate that an instance of the type `T` will be consumed
#[deriving(Clone, PartialEq, Eq, PartialOrd, Ord)]
pub struct ContravariantType<T>;
+ impl<T> Copy for ContravariantType<T> {}
+
/// A marker type whose type parameter `T` is considered to be
/// invariant with respect to the type itself. This is (typically)
/// used to indicate that instances of the type `T` may be read or
#[deriving(Clone, PartialEq, Eq, PartialOrd, Ord)]
pub struct InvariantType<T>;
+ impl<T> Copy for InvariantType<T> {}
+
/// As `CovariantType`, but for lifetime parameters. Using
/// `CovariantLifetime<'a>` indicates that it is ok to substitute
/// a *longer* lifetime for `'a` than the one you originally
#[deriving(Clone, PartialEq, Eq, PartialOrd, Ord)]
pub struct CovariantLifetime<'a>;
+ impl<'a> Copy for CovariantLifetime<'a> {}
+
/// As `ContravariantType`, but for lifetime parameters. Using
/// `ContravariantLifetime<'a>` indicates that it is ok to
/// substitute a *shorter* lifetime for `'a` than the one you
#[deriving(Clone, PartialEq, Eq, PartialOrd, Ord)]
pub struct ContravariantLifetime<'a>;
+ impl<'a> Copy for ContravariantLifetime<'a> {}
+
/// As `InvariantType`, but for lifetime parameters. Using
/// `InvariantLifetime<'a>` indicates that it is not ok to
/// substitute any other lifetime for `'a` besides its original
/// their instances remain thread-local.
#[lang="no_send_bound"]
#[deriving(Clone, PartialEq, Eq, PartialOrd, Ord)]
+ #[allow(missing_copy_implementations)]
pub struct NoSend;
/// A type which is considered "not POD", meaning that it is not
/// ensure that they are never copied, even if they lack a destructor.
#[lang="no_copy_bound"]
#[deriving(Clone, PartialEq, Eq, PartialOrd, Ord)]
+ #[allow(missing_copy_implementations)]
pub struct NoCopy;
/// A type which is considered "not sync", meaning that
/// shared between tasks.
#[lang="no_sync_bound"]
#[deriving(Clone, PartialEq, Eq, PartialOrd, Ord)]
+ #[allow(missing_copy_implementations)]
pub struct NoSync;
/// A type which is considered managed by the GC. This is typically
/// embedded in other types.
#[lang="managed_bound"]
#[deriving(Clone, PartialEq, Eq, PartialOrd, Ord)]
+ #[allow(missing_copy_implementations)]
pub struct Managed;
}
+
html_playground_url = "http://play.rust-lang.org/")]
#![no_std]
-#![allow(unknown_features)]
+#![allow(unknown_features, raw_pointer_deriving)]
#![feature(globs, intrinsics, lang_items, macro_rules, phase)]
#![feature(simd, unsafe_destructor, slicing_syntax)]
#![feature(default_type_params)]
use mem::size_of;
use ops::{Add, Sub, Mul, Div, Rem, Neg};
use ops::{Not, BitAnd, BitOr, BitXor, Shl, Shr};
-use option::{Option, Some, None};
+use option::Option;
+use option::Option::{Some, None};
use str::{FromStr, from_str, StrPrelude};
/// Simultaneous division and remainder
/// ```
fn checked_add(self, other: Self) -> Option<Self>;
- /// Checked integer subtraction. Computes `self + other`, returning `None`
+ /// Checked integer subtraction. Computes `self - other`, returning `None`
/// if underflow occurred.
///
/// # Example
/// ```
fn checked_sub(self, other: Self) -> Option<Self>;
- /// Checked integer multiplication. Computes `self + other`, returning
+ /// Checked integer multiplication. Computes `self * other`, returning
/// `None` if underflow or overflow occurred.
///
/// # Example
/// ```
fn checked_mul(self, other: Self) -> Option<Self>;
- /// Checked integer division. Computes `self + other` returning `None` if
- /// `self == 0` or the operation results in underflow or overflow.
+ /// Checked integer division. Computes `self / other`, returning `None` if
+ /// `other == 0` or the operation results in underflow or overflow.
///
/// # Example
///
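The corrected doc comments (`self - other`, `self * other`, `self / other` with `other == 0`) match the checked methods as they exist today, e.g.:

```rust
fn main() {
    assert_eq!(5u8.checked_sub(3), Some(2)); // self - other
    assert_eq!(0u8.checked_sub(1), None);    // underflow
    assert_eq!(16u8.checked_mul(16), None);  // 256 overflows u8
    assert_eq!(10u8.checked_div(2), Some(5));
    assert_eq!(10u8.checked_div(0), None);   // other == 0, not self == 0
    println!("ok");
}
```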
FPNormal,
}
+impl Copy for FPCategory {}
+
/// A built-in floating point number.
// FIXME(#5527): In a future version of Rust, many of these functions will
// become constants.
/// ```rust
/// struct Foo;
///
+/// impl Copy for Foo {}
+///
/// impl Add<Foo, Foo> for Foo {
/// fn add(&self, _rhs: &Foo) -> Foo {
/// println!("Adding!");
/// ```rust
/// struct Foo;
///
+/// impl Copy for Foo {}
+///
/// impl Sub<Foo, Foo> for Foo {
/// fn sub(&self, _rhs: &Foo) -> Foo {
/// println!("Subtracting!");
/// ```rust
/// struct Foo;
///
+/// impl Copy for Foo {}
+///
/// impl Mul<Foo, Foo> for Foo {
/// fn mul(&self, _rhs: &Foo) -> Foo {
/// println!("Multiplying!");
/// ```
/// struct Foo;
///
+/// impl Copy for Foo {}
+///
/// impl Div<Foo, Foo> for Foo {
/// fn div(&self, _rhs: &Foo) -> Foo {
/// println!("Dividing!");
/// ```
/// struct Foo;
///
+/// impl Copy for Foo {}
+///
/// impl Rem<Foo, Foo> for Foo {
/// fn rem(&self, _rhs: &Foo) -> Foo {
/// println!("Remainder-ing!");
/// ```
/// struct Foo;
///
+/// impl Copy for Foo {}
+///
/// impl Neg<Foo> for Foo {
/// fn neg(&self) -> Foo {
/// println!("Negating!");
/// ```
/// struct Foo;
///
+/// impl Copy for Foo {}
+///
/// impl Not<Foo> for Foo {
/// fn not(&self) -> Foo {
/// println!("Not-ing!");
/// ```
/// struct Foo;
///
+/// impl Copy for Foo {}
+///
/// impl BitAnd<Foo, Foo> for Foo {
/// fn bitand(&self, _rhs: &Foo) -> Foo {
/// println!("Bitwise And-ing!");
/// ```
/// struct Foo;
///
+/// impl Copy for Foo {}
+///
/// impl BitOr<Foo, Foo> for Foo {
/// fn bitor(&self, _rhs: &Foo) -> Foo {
/// println!("Bitwise Or-ing!");
/// ```
/// struct Foo;
///
+/// impl Copy for Foo {}
+///
/// impl BitXor<Foo, Foo> for Foo {
/// fn bitxor(&self, _rhs: &Foo) -> Foo {
/// println!("Bitwise Xor-ing!");
/// ```
/// struct Foo;
///
+/// impl Copy for Foo {}
+///
/// impl Shl<Foo, Foo> for Foo {
/// fn shl(&self, _rhs: &Foo) -> Foo {
/// println!("Shifting left!");
/// ```
/// struct Foo;
///
+/// impl Copy for Foo {}
+///
/// impl Shr<Foo, Foo> for Foo {
/// fn shr(&self, _rhs: &Foo) -> Foo {
/// println!("Shifting right!");
/// ```
/// struct Foo;
///
+/// impl Copy for Foo {}
+///
/// impl Index<Foo, Foo> for Foo {
/// fn index<'a>(&'a self, _index: &Foo) -> &'a Foo {
/// println!("Indexing!");
/// ```
/// struct Foo;
///
+/// impl Copy for Foo {}
+///
/// impl IndexMut<Foo, Foo> for Foo {
/// fn index_mut<'a>(&'a mut self, _index: &Foo) -> &'a mut Foo {
/// println!("Indexing!");
/// ```ignore
/// struct Foo;
///
+/// impl Copy for Foo {}
+///
/// impl Slice<Foo, Foo> for Foo {
/// fn as_slice_<'a>(&'a self) -> &'a Foo {
/// println!("Slicing!");
/// ```ignore
/// struct Foo;
///
+/// impl Copy for Foo {}
+///
/// impl SliceMut<Foo, Foo> for Foo {
/// fn as_mut_slice_<'a>(&'a mut self) -> &'a mut Foo {
/// println!("Slicing!");
/// }
/// ```
#[lang="deref_mut"]
-pub trait DerefMut<Sized? Result>: Deref<Result> {
+pub trait DerefMut<Sized? Result> for Sized? : Deref<Result> {
/// The method called to mutably dereference a value
fn deref_mut<'a>(&'a mut self) -> &'a mut Result;
}
use cmp::{Eq, Ord};
use default::Default;
-use iter::{Iterator, IteratorExt, DoubleEndedIterator, FromIterator, ExactSizeIterator};
+use iter::{Iterator, IteratorExt, DoubleEndedIterator, FromIterator};
+use iter::{ExactSizeIterator};
+use kinds::Copy;
use mem;
-use result::{Result, Ok, Err};
+use result::Result;
+use result::Result::{Ok, Err};
use slice;
use slice::AsSlice;
use clone::Clone;
}
}
}
+
+impl<T:Copy> Copy for Option<T> {}
+
use mem;
use clone::Clone;
use intrinsics;
-use option::{Some, None, Option};
+use option::Option;
+use option::Option::{Some, None};
-use cmp::{PartialEq, Eq, PartialOrd, Equiv, Ordering, Less, Equal, Greater};
+use cmp::{PartialEq, Eq, PartialOrd, Equiv};
+use cmp::Ordering;
+use cmp::Ordering::{Less, Equal, Greater};
pub use intrinsics::copy_memory;
pub use intrinsics::copy_nonoverlapping_memory;
#[inline]
fn ge(&self, other: &*mut T) -> bool { *self >= *other }
}
+
//!
//! Their definition should always match the ABI defined in `rustc::back::abi`.
+use kinds::Copy;
use mem;
use kinds::Sized;
pub len: uint,
}
+impl<T> Copy for Slice<T> {}
+
/// The representation of a Rust closure
#[repr(C)]
pub struct Closure {
pub env: *mut (),
}
+impl Copy for Closure {}
+
/// The representation of a Rust procedure (`proc()`)
#[repr(C)]
pub struct Procedure {
pub env: *mut (),
}
+impl Copy for Procedure {}
+
/// The representation of a Rust trait object.
///
/// This struct does not have a `Repr` implementation
pub vtable: *mut (),
}
+impl Copy for TraitObject {}
+
/// This trait is meant to map equivalences between raw structs and their
/// corresponding rust values.
pub trait Repr<T> for Sized? {
pub use self::Result::*;
+use kinds::Copy;
use std::fmt::Show;
use slice;
use slice::AsSlice;
use iter::{Iterator, IteratorExt, DoubleEndedIterator, FromIterator, ExactSizeIterator};
-use option::{None, Option, Some};
+use option::Option;
+use option::Option::{None, Some};
/// `Result` is a type that represents either success (`Ok`) or failure (`Err`).
///
}
Ok(init)
}
+
+#[cfg(not(stage0))]
+impl<T:Copy,U:Copy> Copy for Result<T,U> {}
+
#![allow(non_camel_case_types)]
#![allow(missing_docs)]
+use kinds::Copy;
+
#[experimental]
#[simd]
#[deriving(Show)]
pub i8, pub i8, pub i8, pub i8,
pub i8, pub i8, pub i8, pub i8);
+impl Copy for i8x16 {}
+
#[experimental]
#[simd]
#[deriving(Show)]
pub struct i16x8(pub i16, pub i16, pub i16, pub i16,
pub i16, pub i16, pub i16, pub i16);
+impl Copy for i16x8 {}
+
#[experimental]
#[simd]
#[deriving(Show)]
#[repr(C)]
pub struct i32x4(pub i32, pub i32, pub i32, pub i32);
+impl Copy for i32x4 {}
+
#[experimental]
#[simd]
#[deriving(Show)]
#[repr(C)]
pub struct i64x2(pub i64, pub i64);
+impl Copy for i64x2 {}
+
#[experimental]
#[simd]
#[deriving(Show)]
pub u8, pub u8, pub u8, pub u8,
pub u8, pub u8, pub u8, pub u8);
+impl Copy for u8x16 {}
+
#[experimental]
#[simd]
#[deriving(Show)]
pub struct u16x8(pub u16, pub u16, pub u16, pub u16,
pub u16, pub u16, pub u16, pub u16);
+impl Copy for u16x8 {}
+
#[experimental]
#[simd]
#[deriving(Show)]
#[repr(C)]
pub struct u32x4(pub u32, pub u32, pub u32, pub u32);
+impl Copy for u32x4 {}
+
#[experimental]
#[simd]
#[deriving(Show)]
#[repr(C)]
pub struct u64x2(pub u64, pub u64);
+impl Copy for u64x2 {}
+
#[experimental]
#[simd]
#[deriving(Show)]
#[repr(C)]
pub struct f32x4(pub f32, pub f32, pub f32, pub f32);
+impl Copy for f32x4 {}
+
#[experimental]
#[simd]
#[deriving(Show)]
#[repr(C)]
pub struct f64x2(pub f64, pub f64);
+
+impl Copy for f64x2 {}
+
use mem::transmute;
use clone::Clone;
-use cmp::{PartialEq, PartialOrd, Eq, Ord, Ordering, Less, Equal, Greater, Equiv};
+use cmp::{Ordering, PartialEq, PartialOrd, Eq, Ord, Equiv};
+use cmp::Ordering::{Less, Equal, Greater};
use cmp;
use default::Default;
use iter::*;
+use kinds::Copy;
use num::Int;
use ops;
-use option::{None, Option, Some};
+use option::Option;
+use option::Option::{None, Some};
use ptr;
use ptr::RawPtr;
use mem;
}
}
+impl<'a,T> Copy for Items<'a,T> {}
+
iterator!{struct Items -> *const T, &'a T}
#[experimental = "needs review"]
NotFound(uint)
}
+impl Copy for BinarySearchResult {}
+
#[experimental = "needs review"]
impl BinarySearchResult {
/// Converts a `Found` to `Some`, `NotFound` to `None`.
use mem::transmute;
use ptr::RawPtr;
use raw::Slice;
- use option::{None, Option, Some};
+ use option::Option;
+ use option::Option::{None, Some};
/// Form a slice from a pointer and length (as a number of units,
/// not bytes).
/// Copies data from `src` to `dst`
///
- /// `src` and `dst` must not overlap. Panics if the length of `dst`
- /// is less than the length of `src`.
+ /// Panics if the length of `dst` is less than the length of `src`.
#[inline]
pub fn copy_memory(dst: &mut [u8], src: &[u8]) {
let len_src = src.len();
assert!(dst.len() >= len_src);
+ // `dst` is unaliasable, so we know statically it doesn't overlap
+ // with `src`.
unsafe {
ptr::copy_nonoverlapping_memory(dst.as_mut_ptr(),
src.as_ptr(),
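The new comment encodes the soundness argument behind the doc change: a `&mut [u8]` destination is unaliasable, so `dst` cannot overlap `src`, which is what justifies the nonoverlapping copy. A standalone sketch of the same function against the modern slice API:

```rust
// Sketch of the aliasing argument above: `&mut [u8]` cannot overlap
// `&[u8]`, so a nonoverlapping copy is statically safe. Panics only
// if `dst` is shorter than `src`, matching the revised doc comment.
fn copy_memory(dst: &mut [u8], src: &[u8]) {
    let len = src.len();
    assert!(dst.len() >= len, "dst is too short");
    dst[..len].copy_from_slice(src);
}

fn main() {
    let mut dst = [0u8; 4];
    copy_memory(&mut dst, &[1, 2, 3]);
    assert_eq!(dst, [1, 2, 3, 0]);
}
```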
impl_int_slice!(u32, i32)
impl_int_slice!(u64, i64)
impl_int_slice!(uint, int)
+
use iter::{Map, Iterator, IteratorExt, DoubleEndedIterator};
use iter::{DoubleEndedIteratorExt, ExactSizeIterator};
use iter::range;
-use kinds::Sized;
+use kinds::{Copy, Sized};
use mem;
use num::Int;
-use option::{Option, None, Some};
+use option::Option;
+use option::Option::{None, Some};
use ptr::RawPtr;
use raw::{Repr, Slice};
use slice::{mod, SlicePrelude};
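Much of the import churn in these hunks exists because enum variants became namespaced under their enum, so `None`/`Some` (and `Less`/`Equal`/`Greater`) must now be imported through `Option::` and `Ordering::` paths. A small illustration with a hypothetical `Mode` enum, in current syntax:

```rust
// Variants live under the enum's namespace; a `use` pulls them back
// into scope, mirroring `use option::Option::{None, Some};` above.
#[derive(Debug, PartialEq)]
enum Mode { Fast, Careful }

use Mode::{Fast, Careful};

fn pick(fast: bool) -> Mode {
    if fast { Fast } else { Careful }
}

fn main() {
    assert_eq!(pick(true), Mode::Fast);
    assert_eq!(pick(false), Mode::Careful);
}
```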
iter: slice::Items<'a, u8>
}
+impl<'a> Copy for Chars<'a> {}
+
// Return the initial codepoint accumulator for the first byte.
// The first byte is special, only want bottom 5 bits for width 2, 4 bits
// for width 3, and 3 bits for width 4
LoneSurrogate(u16)
}
+impl Copy for Utf16Item {}
+
impl Utf16Item {
/// Convert `self` to a `char`, taking `LoneSurrogate`s to the
/// replacement character (U+FFFD).
pub next: uint,
}
+impl Copy for CharRange {}
+
/// Mask of the value bits of a continuation byte
const CONT_MASK: u8 = 0b0011_1111u8;
/// Value of the tag bits (tag mask is !CONT_MASK) of a continuation byte
#[allow(missing_docs)]
pub mod traits {
- use cmp::{Ord, Ordering, Less, Equal, Greater, PartialEq, PartialOrd, Equiv, Eq};
+ use cmp::{Ordering, Ord, PartialEq, PartialOrd, Equiv, Eq};
+ use cmp::Ordering::{Less, Equal, Greater};
use iter::IteratorExt;
- use option::{Option, Some};
+ use option::Option;
+ use option::Option::Some;
use ops;
use str::{Str, StrPrelude, eq_slice};
impl<'a> Default for &'a str {
fn default() -> &'a str { "" }
}
+
use clone::Clone;
use cmp::*;
use default::Default;
-use option::{Option, Some};
+use option::Option;
+use option::Option::Some;
// macro for implementing n-ary tuple functions and operations
macro_rules! tuple_impls {
#[test]
fn cell_has_sensible_show() {
let x = Cell::new("foo bar");
- assert!(format!("{}", x).as_slice().contains(x.get()));
+ assert!(format!("{}", x).contains(x.get()));
x.set("baz qux");
- assert!(format!("{}", x).as_slice().contains(x.get()));
+ assert!(format!("{}", x).contains(x.get()));
}
#[test]
let refcell = RefCell::new("foo");
let refcell_refmut = refcell.borrow_mut();
- assert!(format!("{}", refcell_refmut).as_slice().contains("foo"));
+ assert!(format!("{}", refcell_refmut).contains("foo"));
drop(refcell_refmut);
let refcell_ref = refcell.borrow();
- assert!(format!("{}", refcell_ref).as_slice().contains("foo"));
+ assert!(format!("{}", refcell_ref).contains("foo"));
drop(refcell_ref);
}
return result;
}
let s = string('\n');
- assert_eq!(s.as_slice(), "\\n");
+ assert_eq!(s, "\\n");
let s = string('\r');
- assert_eq!(s.as_slice(), "\\r");
+ assert_eq!(s, "\\r");
let s = string('\'');
- assert_eq!(s.as_slice(), "\\'");
+ assert_eq!(s, "\\'");
let s = string('"');
- assert_eq!(s.as_slice(), "\\\"");
+ assert_eq!(s, "\\\"");
let s = string(' ');
- assert_eq!(s.as_slice(), " ");
+ assert_eq!(s, " ");
let s = string('a');
- assert_eq!(s.as_slice(), "a");
+ assert_eq!(s, "a");
let s = string('~');
- assert_eq!(s.as_slice(), "~");
+ assert_eq!(s, "~");
let s = string('\x00');
- assert_eq!(s.as_slice(), "\\x00");
+ assert_eq!(s, "\\x00");
let s = string('\x1f');
- assert_eq!(s.as_slice(), "\\x1f");
+ assert_eq!(s, "\\x1f");
let s = string('\x7f');
- assert_eq!(s.as_slice(), "\\x7f");
+ assert_eq!(s, "\\x7f");
let s = string('\u00ff');
- assert_eq!(s.as_slice(), "\\u00ff");
+ assert_eq!(s, "\\u00ff");
let s = string('\u011b');
- assert_eq!(s.as_slice(), "\\u011b");
+ assert_eq!(s, "\\u011b");
let s = string('\U0001d4b6');
- assert_eq!(s.as_slice(), "\\U0001d4b6");
+ assert_eq!(s, "\\U0001d4b6");
}
#[test]
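The `.as_slice()` deletions in the test hunks above work because `String` implements cross-type equality with `&str`, so `assert_eq!` can compare the two directly. The same holds in modern Rust (where `.as_slice()` on strings became `.as_str()`):

```rust
// String == &str comparisons need no conversion call, which is what
// lets the patch drop `.as_slice()` from every assertion above.
fn main() {
    let s = '\n'.escape_default().to_string();
    assert_eq!(s, "\\n");             // String compared against &str
    assert_eq!(s, "\\n".to_string()); // String against String also works
}
```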
return result;
}
let s = string('\x00');
- assert_eq!(s.as_slice(), "\\x00");
+ assert_eq!(s, "\\x00");
let s = string('\n');
- assert_eq!(s.as_slice(), "\\x0a");
+ assert_eq!(s, "\\x0a");
let s = string(' ');
- assert_eq!(s.as_slice(), "\\x20");
+ assert_eq!(s, "\\x20");
let s = string('a');
- assert_eq!(s.as_slice(), "\\x61");
+ assert_eq!(s, "\\x61");
let s = string('\u011b');
- assert_eq!(s.as_slice(), "\\u011b");
+ assert_eq!(s, "\\u011b");
let s = string('\U0001d4b6');
- assert_eq!(s.as_slice(), "\\U0001d4b6");
+ assert_eq!(s, "\\U0001d4b6");
}
#[test]
// Formatting integers should select the right implementation based off
// the type of the argument. Also, hex/octal/binary should be defined
// for integers, but they shouldn't emit the negative sign.
- assert!(format!("{}", 1i).as_slice() == "1");
- assert!(format!("{}", 1i8).as_slice() == "1");
- assert!(format!("{}", 1i16).as_slice() == "1");
- assert!(format!("{}", 1i32).as_slice() == "1");
- assert!(format!("{}", 1i64).as_slice() == "1");
- assert!(format!("{}", -1i).as_slice() == "-1");
- assert!(format!("{}", -1i8).as_slice() == "-1");
- assert!(format!("{}", -1i16).as_slice() == "-1");
- assert!(format!("{}", -1i32).as_slice() == "-1");
- assert!(format!("{}", -1i64).as_slice() == "-1");
- assert!(format!("{:b}", 1i).as_slice() == "1");
- assert!(format!("{:b}", 1i8).as_slice() == "1");
- assert!(format!("{:b}", 1i16).as_slice() == "1");
- assert!(format!("{:b}", 1i32).as_slice() == "1");
- assert!(format!("{:b}", 1i64).as_slice() == "1");
- assert!(format!("{:x}", 1i).as_slice() == "1");
- assert!(format!("{:x}", 1i8).as_slice() == "1");
- assert!(format!("{:x}", 1i16).as_slice() == "1");
- assert!(format!("{:x}", 1i32).as_slice() == "1");
- assert!(format!("{:x}", 1i64).as_slice() == "1");
- assert!(format!("{:X}", 1i).as_slice() == "1");
- assert!(format!("{:X}", 1i8).as_slice() == "1");
- assert!(format!("{:X}", 1i16).as_slice() == "1");
- assert!(format!("{:X}", 1i32).as_slice() == "1");
- assert!(format!("{:X}", 1i64).as_slice() == "1");
- assert!(format!("{:o}", 1i).as_slice() == "1");
- assert!(format!("{:o}", 1i8).as_slice() == "1");
- assert!(format!("{:o}", 1i16).as_slice() == "1");
- assert!(format!("{:o}", 1i32).as_slice() == "1");
- assert!(format!("{:o}", 1i64).as_slice() == "1");
-
- assert!(format!("{}", 1u).as_slice() == "1");
- assert!(format!("{}", 1u8).as_slice() == "1");
- assert!(format!("{}", 1u16).as_slice() == "1");
- assert!(format!("{}", 1u32).as_slice() == "1");
- assert!(format!("{}", 1u64).as_slice() == "1");
- assert!(format!("{:b}", 1u).as_slice() == "1");
- assert!(format!("{:b}", 1u8).as_slice() == "1");
- assert!(format!("{:b}", 1u16).as_slice() == "1");
- assert!(format!("{:b}", 1u32).as_slice() == "1");
- assert!(format!("{:b}", 1u64).as_slice() == "1");
- assert!(format!("{:x}", 1u).as_slice() == "1");
- assert!(format!("{:x}", 1u8).as_slice() == "1");
- assert!(format!("{:x}", 1u16).as_slice() == "1");
- assert!(format!("{:x}", 1u32).as_slice() == "1");
- assert!(format!("{:x}", 1u64).as_slice() == "1");
- assert!(format!("{:X}", 1u).as_slice() == "1");
- assert!(format!("{:X}", 1u8).as_slice() == "1");
- assert!(format!("{:X}", 1u16).as_slice() == "1");
- assert!(format!("{:X}", 1u32).as_slice() == "1");
- assert!(format!("{:X}", 1u64).as_slice() == "1");
- assert!(format!("{:o}", 1u).as_slice() == "1");
- assert!(format!("{:o}", 1u8).as_slice() == "1");
- assert!(format!("{:o}", 1u16).as_slice() == "1");
- assert!(format!("{:o}", 1u32).as_slice() == "1");
- assert!(format!("{:o}", 1u64).as_slice() == "1");
+ assert!(format!("{}", 1i) == "1");
+ assert!(format!("{}", 1i8) == "1");
+ assert!(format!("{}", 1i16) == "1");
+ assert!(format!("{}", 1i32) == "1");
+ assert!(format!("{}", 1i64) == "1");
+ assert!(format!("{}", -1i) == "-1");
+ assert!(format!("{}", -1i8) == "-1");
+ assert!(format!("{}", -1i16) == "-1");
+ assert!(format!("{}", -1i32) == "-1");
+ assert!(format!("{}", -1i64) == "-1");
+ assert!(format!("{:b}", 1i) == "1");
+ assert!(format!("{:b}", 1i8) == "1");
+ assert!(format!("{:b}", 1i16) == "1");
+ assert!(format!("{:b}", 1i32) == "1");
+ assert!(format!("{:b}", 1i64) == "1");
+ assert!(format!("{:x}", 1i) == "1");
+ assert!(format!("{:x}", 1i8) == "1");
+ assert!(format!("{:x}", 1i16) == "1");
+ assert!(format!("{:x}", 1i32) == "1");
+ assert!(format!("{:x}", 1i64) == "1");
+ assert!(format!("{:X}", 1i) == "1");
+ assert!(format!("{:X}", 1i8) == "1");
+ assert!(format!("{:X}", 1i16) == "1");
+ assert!(format!("{:X}", 1i32) == "1");
+ assert!(format!("{:X}", 1i64) == "1");
+ assert!(format!("{:o}", 1i) == "1");
+ assert!(format!("{:o}", 1i8) == "1");
+ assert!(format!("{:o}", 1i16) == "1");
+ assert!(format!("{:o}", 1i32) == "1");
+ assert!(format!("{:o}", 1i64) == "1");
+
+ assert!(format!("{}", 1u) == "1");
+ assert!(format!("{}", 1u8) == "1");
+ assert!(format!("{}", 1u16) == "1");
+ assert!(format!("{}", 1u32) == "1");
+ assert!(format!("{}", 1u64) == "1");
+ assert!(format!("{:b}", 1u) == "1");
+ assert!(format!("{:b}", 1u8) == "1");
+ assert!(format!("{:b}", 1u16) == "1");
+ assert!(format!("{:b}", 1u32) == "1");
+ assert!(format!("{:b}", 1u64) == "1");
+ assert!(format!("{:x}", 1u) == "1");
+ assert!(format!("{:x}", 1u8) == "1");
+ assert!(format!("{:x}", 1u16) == "1");
+ assert!(format!("{:x}", 1u32) == "1");
+ assert!(format!("{:x}", 1u64) == "1");
+ assert!(format!("{:X}", 1u) == "1");
+ assert!(format!("{:X}", 1u8) == "1");
+ assert!(format!("{:X}", 1u16) == "1");
+ assert!(format!("{:X}", 1u32) == "1");
+ assert!(format!("{:X}", 1u64) == "1");
+ assert!(format!("{:o}", 1u) == "1");
+ assert!(format!("{:o}", 1u8) == "1");
+ assert!(format!("{:o}", 1u16) == "1");
+ assert!(format!("{:o}", 1u32) == "1");
+ assert!(format!("{:o}", 1u64) == "1");
// Test a larger number
- assert!(format!("{:b}", 55i).as_slice() == "110111");
- assert!(format!("{:o}", 55i).as_slice() == "67");
- assert!(format!("{}", 55i).as_slice() == "55");
- assert!(format!("{:x}", 55i).as_slice() == "37");
- assert!(format!("{:X}", 55i).as_slice() == "37");
+ assert!(format!("{:b}", 55i) == "110111");
+ assert!(format!("{:o}", 55i) == "67");
+ assert!(format!("{}", 55i) == "55");
+ assert!(format!("{:x}", 55i) == "37");
+ assert!(format!("{:X}", 55i) == "37");
}
#[test]
fn test_format_int_zero() {
- assert!(format!("{}", 0i).as_slice() == "0");
- assert!(format!("{:b}", 0i).as_slice() == "0");
- assert!(format!("{:o}", 0i).as_slice() == "0");
- assert!(format!("{:x}", 0i).as_slice() == "0");
- assert!(format!("{:X}", 0i).as_slice() == "0");
-
- assert!(format!("{}", 0u).as_slice() == "0");
- assert!(format!("{:b}", 0u).as_slice() == "0");
- assert!(format!("{:o}", 0u).as_slice() == "0");
- assert!(format!("{:x}", 0u).as_slice() == "0");
- assert!(format!("{:X}", 0u).as_slice() == "0");
+ assert!(format!("{}", 0i) == "0");
+ assert!(format!("{:b}", 0i) == "0");
+ assert!(format!("{:o}", 0i) == "0");
+ assert!(format!("{:x}", 0i) == "0");
+ assert!(format!("{:X}", 0i) == "0");
+
+ assert!(format!("{}", 0u) == "0");
+ assert!(format!("{:b}", 0u) == "0");
+ assert!(format!("{:o}", 0u) == "0");
+ assert!(format!("{:x}", 0u) == "0");
+ assert!(format!("{:X}", 0u) == "0");
}
#[test]
fn test_format_int_flags() {
- assert!(format!("{:3}", 1i).as_slice() == " 1");
- assert!(format!("{:>3}", 1i).as_slice() == " 1");
- assert!(format!("{:>+3}", 1i).as_slice() == " +1");
- assert!(format!("{:<3}", 1i).as_slice() == "1 ");
- assert!(format!("{:#}", 1i).as_slice() == "1");
- assert!(format!("{:#x}", 10i).as_slice() == "0xa");
- assert!(format!("{:#X}", 10i).as_slice() == "0xA");
- assert!(format!("{:#5x}", 10i).as_slice() == " 0xa");
- assert!(format!("{:#o}", 10i).as_slice() == "0o12");
- assert!(format!("{:08x}", 10i).as_slice() == "0000000a");
- assert!(format!("{:8x}", 10i).as_slice() == " a");
- assert!(format!("{:<8x}", 10i).as_slice() == "a ");
- assert!(format!("{:>8x}", 10i).as_slice() == " a");
- assert!(format!("{:#08x}", 10i).as_slice() == "0x00000a");
- assert!(format!("{:08}", -10i).as_slice() == "-0000010");
- assert!(format!("{:x}", -1u8).as_slice() == "ff");
- assert!(format!("{:X}", -1u8).as_slice() == "FF");
- assert!(format!("{:b}", -1u8).as_slice() == "11111111");
- assert!(format!("{:o}", -1u8).as_slice() == "377");
- assert!(format!("{:#x}", -1u8).as_slice() == "0xff");
- assert!(format!("{:#X}", -1u8).as_slice() == "0xFF");
- assert!(format!("{:#b}", -1u8).as_slice() == "0b11111111");
- assert!(format!("{:#o}", -1u8).as_slice() == "0o377");
+ assert!(format!("{:3}", 1i) == " 1");
+ assert!(format!("{:>3}", 1i) == " 1");
+ assert!(format!("{:>+3}", 1i) == " +1");
+ assert!(format!("{:<3}", 1i) == "1 ");
+ assert!(format!("{:#}", 1i) == "1");
+ assert!(format!("{:#x}", 10i) == "0xa");
+ assert!(format!("{:#X}", 10i) == "0xA");
+ assert!(format!("{:#5x}", 10i) == " 0xa");
+ assert!(format!("{:#o}", 10i) == "0o12");
+ assert!(format!("{:08x}", 10i) == "0000000a");
+ assert!(format!("{:8x}", 10i) == " a");
+ assert!(format!("{:<8x}", 10i) == "a ");
+ assert!(format!("{:>8x}", 10i) == " a");
+ assert!(format!("{:#08x}", 10i) == "0x00000a");
+ assert!(format!("{:08}", -10i) == "-0000010");
+ assert!(format!("{:x}", -1u8) == "ff");
+ assert!(format!("{:X}", -1u8) == "FF");
+ assert!(format!("{:b}", -1u8) == "11111111");
+ assert!(format!("{:o}", -1u8) == "377");
+ assert!(format!("{:#x}", -1u8) == "0xff");
+ assert!(format!("{:#X}", -1u8) == "0xFF");
+ assert!(format!("{:#b}", -1u8) == "0b11111111");
+ assert!(format!("{:#o}", -1u8) == "0o377");
}
#[test]
fn test_format_int_sign_padding() {
- assert!(format!("{:+5}", 1i).as_slice() == " +1");
- assert!(format!("{:+5}", -1i).as_slice() == " -1");
- assert!(format!("{:05}", 1i).as_slice() == "00001");
- assert!(format!("{:05}", -1i).as_slice() == "-0001");
- assert!(format!("{:+05}", 1i).as_slice() == "+0001");
- assert!(format!("{:+05}", -1i).as_slice() == "-0001");
+ assert!(format!("{:+5}", 1i) == " +1");
+ assert!(format!("{:+5}", -1i) == " -1");
+ assert!(format!("{:05}", 1i) == "00001");
+ assert!(format!("{:05}", -1i) == "-0001");
+ assert!(format!("{:+05}", 1i) == "+0001");
+ assert!(format!("{:+05}", -1i) == "-0001");
}
#[test]
fn test_format_int_twos_complement() {
use core::{i8, i16, i32, i64};
- assert!(format!("{}", i8::MIN).as_slice() == "-128");
- assert!(format!("{}", i16::MIN).as_slice() == "-32768");
- assert!(format!("{}", i32::MIN).as_slice() == "-2147483648");
- assert!(format!("{}", i64::MIN).as_slice() == "-9223372036854775808");
+ assert!(format!("{}", i8::MIN) == "-128");
+ assert!(format!("{}", i16::MIN) == "-32768");
+ assert!(format!("{}", i32::MIN) == "-2147483648");
+ assert!(format!("{}", i64::MIN) == "-9223372036854775808");
}
#[test]
fn test_format_radix() {
- assert!(format!("{:04}", radix(3i, 2)).as_slice() == "0011");
- assert!(format!("{}", radix(55i, 36)).as_slice() == "1j");
+ assert!(format!("{:04}", radix(3i, 2)) == "0011");
+ assert!(format!("{}", radix(55i, 36)) == "1j");
}
#[test]
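The flag assertions above exercise the alternate (`#`), zero-pad (`0`), and sign (`+`) format flags. The same behavior holds in current Rust, minus the integer suffixes:

```rust
// A few of the format-flag behaviors tested in the hunks above,
// in modern syntax (no `10i` / `-1u8` suffixes).
fn main() {
    assert_eq!(format!("{:#x}", 10), "0xa");        // alternate hex prefix
    assert_eq!(format!("{:#o}", 10), "0o12");       // alternate octal prefix
    assert_eq!(format!("{:08x}", 10), "0000000a");  // zero-padded to width 8
    assert_eq!(format!("{:+05}", -1), "-0001");     // sign-aware zero padding
    assert_eq!(format!("{:#b}", 255u8), "0b11111111");
}
```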
#[cfg(test)]
mod test {
- use core::option::{Option, Some, None};
+ use core::option::Option;
+ use core::option::Option::{Some, None};
use core::num::Float;
use core::num::from_str_radix;
#[test]
fn test_get_str() {
let x = "test".to_string();
- let addr_x = x.as_slice().as_ptr();
+ let addr_x = x.as_ptr();
let opt = Some(x);
let y = opt.unwrap();
- let addr_y = y.as_slice().as_ptr();
+ let addr_y = y.as_ptr();
assert_eq!(addr_x, addr_y);
}
fn test_unwrap() {
assert_eq!(Some(1i).unwrap(), 1);
let s = Some("hello".to_string()).unwrap();
- assert_eq!(s.as_slice(), "hello");
+ assert_eq!(s, "hello");
}
#[test]
assert_eq!(opt_ref_ref.clone(), Some(&val1_ref));
assert_eq!(opt_ref_ref.clone().cloned(), Some(&val1));
assert_eq!(opt_ref_ref.cloned().cloned(), Some(1u32));
-}
\ No newline at end of file
+}
let err: Result<int, &'static str> = Err("Err");
let s = format!("{}", ok);
- assert_eq!(s.as_slice(), "Ok(100)");
+ assert_eq!(s, "Ok(100)");
let s = format!("{}", err);
- assert_eq!(s.as_slice(), "Err(Err)");
+ assert_eq!(s, "Err(Err)");
}
#[test]
#[test]
fn test_show() {
let s = format!("{}", (1i,));
- assert_eq!(s.as_slice(), "(1,)");
+ assert_eq!(s, "(1,)");
let s = format!("{}", (1i, true));
- assert_eq!(s.as_slice(), "(1, true)");
+ assert_eq!(s, "(1, true)");
let s = format!("{}", (1i, "hi", true));
- assert_eq!(s.as_slice(), "(1, hi, true)");
+ assert_eq!(s, "(1, hi, true)");
}
debug!("{} bytes deflated to {} ({:.1}% size)",
input.len(), cmp.len(),
100.0 * ((cmp.len() as f64) / (input.len() as f64)));
- assert_eq!(input.as_slice(), out.as_slice());
+ assert_eq!(input, out.as_slice());
}
}
let bytes = vec!(1, 2, 3, 4, 5);
let deflated = deflate_bytes(bytes.as_slice()).expect("deflation failed");
let inflated = inflate_bytes(deflated.as_slice()).expect("inflation failed");
- assert_eq!(inflated.as_slice(), bytes.as_slice());
+ assert_eq!(inflated.as_slice(), bytes);
}
}
#![experimental]
#![crate_type = "rlib"]
#![crate_type = "dylib"]
+#![doc(html_logo_url = "http://www.rust-lang.org/logos/rust-logo-128x128-blk-v2.png",
+ html_favicon_url = "http://www.rust-lang.org/favicon.ico",
+ html_root_url = "http://doc.rust-lang.org/nightly/",
+ html_playground_url = "http://play.rust-lang.org/")]
+
#![feature(macro_rules, globs, import_shadowing)]
pub use self::Piece::*;
pub use self::Position::*;
NextArgument(Argument<'a>),
}
+impl<'a> Copy for Piece<'a> {}
+
/// Representation of an argument specification.
#[deriving(PartialEq)]
pub struct Argument<'a> {
pub format: FormatSpec<'a>,
}
+impl<'a> Copy for Argument<'a> {}
+
/// Specification for the formatting of an argument in the format string.
#[deriving(PartialEq)]
pub struct FormatSpec<'a> {
pub ty: &'a str
}
+impl<'a> Copy for FormatSpec<'a> {}
+
/// Enum describing where an argument for a format can be located.
#[deriving(PartialEq)]
pub enum Position<'a> {
ArgumentNamed(&'a str),
}
+impl<'a> Copy for Position<'a> {}
+
/// Enum of alignments which are supported.
#[deriving(PartialEq)]
pub enum Alignment {
AlignUnknown,
}
+impl Copy for Alignment {}
+
/// Various flags which can be applied to format strings. The meaning of these
/// flags is defined by the formatters themselves.
#[deriving(PartialEq)]
FlagSignAwareZeroPad,
}
+impl Copy for Flag {}
+
/// A count is used for the precision and width parameters of an integer, and
/// can reference either an argument or a literal integer.
#[deriving(PartialEq)]
CountImplied,
}
+impl<'a> Copy for Count<'a> {}
+
/// The parser structure for interpreting the input format string. This is
/// modelled as an iterator over `Piece` structures to form a stream of tokens
/// being output.
fn same(fmt: &'static str, p: &[Piece<'static>]) {
let mut parser = Parser::new(fmt);
- assert!(p == parser.collect::<Vec<Piece<'static>>>().as_slice());
+ assert!(p == parser.collect::<Vec<Piece<'static>>>());
}
fn fmtdflt() -> FormatSpec<'static> {
use self::Occur::*;
use self::Fail::*;
use self::Optval::*;
+use self::SplitWithinState::*;
+use self::Whitespace::*;
+use self::LengthLimit::*;
use std::fmt;
-use std::result::{Err, Ok};
+use std::result::Result::{Err, Ok};
use std::result;
use std::string::String;
Maybe,
}
+impl Copy for HasArg {}
+
/// Describes how often an option may occur.
#[deriving(Clone, PartialEq, Eq)]
pub enum Occur {
Multi,
}
+impl Copy for Occur {}
+
/// A description of a possible option.
#[deriving(Clone, PartialEq, Eq)]
pub struct Opt {
UnexpectedArgument(String),
}
+/// The type of failure that occurred.
+#[deriving(PartialEq, Eq)]
+#[allow(missing_docs)]
+pub enum FailType {
+ ArgumentMissing_,
+ UnrecognizedOption_,
+ OptionMissing_,
+ OptionDuplicated_,
+ UnexpectedArgument_,
+}
+
+impl Copy for FailType {}
+
/// The result of parsing a command line with a set of options.
pub type Result = result::Result<Matches, Fail>;
aliases: Vec::new()
},
(1,0) => Opt {
- name: Short(short_name.as_slice().char_at(0)),
+ name: Short(short_name.char_at(0)),
hasarg: hasarg,
occur: occur,
aliases: Vec::new()
occur: occur,
aliases: vec!(
Opt {
- name: Short(short_name.as_slice().char_at(0)),
+ name: Short(short_name.char_at(0)),
hasarg: hasarg,
occur: occur,
aliases: Vec::new()
let curlen = cur.len();
if !is_arg(cur.as_slice()) {
free.push(cur);
- } else if cur.as_slice() == "--" {
+ } else if cur == "--" {
let mut j = i + 1;
while j < l { free.push(args[j].clone()); j += 1; }
break;
let mut names;
let mut i_arg = None;
if cur.as_bytes()[1] == b'-' {
- let tail = cur.as_slice().slice(2, curlen);
+ let tail = cur.slice(2, curlen);
let tail_eq: Vec<&str> = tail.split('=').collect();
if tail_eq.len() <= 1 {
names = vec!(Long(tail.to_string()));
let mut j = 1;
names = Vec::new();
while j < curlen {
- let range = cur.as_slice().char_range_at(j);
+ let range = cur.char_range_at(j);
let opt = Short(range.ch);
/* In a series of potential options (eg. -aheJ), if we
};
if arg_follows && range.next < curlen {
- i_arg = Some(cur.as_slice()
- .slice(range.next, curlen).to_string());
+ i_arg = Some(cur.slice(range.next, curlen).to_string());
break;
}
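The parsing loop above splits a `--name=value` argument at the first `=`. A standalone sketch of that split using today's `str` API (`split_long` is an invented helper for illustration; the real parser handles a bare `--` separately, before this point):

```rust
// Sketch of the long-option split above: "--name=value" -> (name, value).
// Returns None for arguments that are not long options.
fn split_long(arg: &str) -> Option<(&str, Option<&str>)> {
    let tail = arg.strip_prefix("--")?;
    match tail.split_once('=') {
        Some((name, val)) => Some((name, Some(val))),
        None => Some((tail, None)),
    }
}

fn main() {
    assert_eq!(split_long("--color=auto"), Some(("color", Some("auto"))));
    assert_eq!(split_long("--verbose"), Some(("verbose", None)));
    assert_eq!(split_long("-v"), None); // short options take the other branch
}
```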
// FIXME: #5516 should be graphemes not codepoints
// here we just need to indent the start of the description
- let rowlen = row.as_slice().char_len();
+ let rowlen = row.char_len();
if rowlen < 24 {
for _ in range(0, 24 - rowlen) {
row.push(' ');
// Normalize desc to contain words separated by one space character
let mut desc_normalized_whitespace = String::new();
- for word in desc.as_slice().words() {
+ for word in desc.words() {
desc_normalized_whitespace.push_str(word);
desc_normalized_whitespace.push(' ');
}
B, // words
C, // internal and trailing whitespace
}
+impl Copy for SplitWithinState {}
enum Whitespace {
Ws, // current char is whitespace
Cr // current char is not whitespace
}
+impl Copy for Whitespace {}
enum LengthLimit {
UnderLim, // current char makes current substring still fit in limit
OverLim // current char makes current substring no longer fit in limit
}
+impl Copy for LengthLimit {}
/// Splits a string into substrings with possibly internal whitespace,
/// sequence longer than the limit.
fn each_split_within<'a>(ss: &'a str, lim: uint, it: |&'a str| -> bool)
-> bool {
- use self::SplitWithinState::*;
- use self::Whitespace::*;
- use self::LengthLimit::*;
// Just for fun, let's write this as a state machine:
let mut slice_start = 0;
use super::*;
use super::Fail::*;
- use std::result::{Err, Ok};
+ use std::result::Result::{Err, Ok};
use std::result;
// Tests for reqopt
match rs {
Ok(ref m) => {
assert!(m.opt_present("test"));
- assert_eq!(m.opt_str("test").unwrap(), "20".to_string());
+ assert_eq!(m.opt_str("test").unwrap(), "20");
assert!(m.opt_present("t"));
- assert_eq!(m.opt_str("t").unwrap(), "20".to_string());
+ assert_eq!(m.opt_str("t").unwrap(), "20");
}
_ => { panic!("test_reqopt failed (long arg)"); }
}
match getopts(short_args.as_slice(), opts.as_slice()) {
Ok(ref m) => {
assert!((m.opt_present("test")));
- assert_eq!(m.opt_str("test").unwrap(), "20".to_string());
+ assert_eq!(m.opt_str("test").unwrap(), "20");
assert!((m.opt_present("t")));
- assert_eq!(m.opt_str("t").unwrap(), "20".to_string());
+ assert_eq!(m.opt_str("t").unwrap(), "20");
}
_ => { panic!("test_reqopt failed (short arg)"); }
}
match rs {
Ok(ref m) => {
assert!(m.opt_present("test"));
- assert_eq!(m.opt_str("test").unwrap(), "20".to_string());
+ assert_eq!(m.opt_str("test").unwrap(), "20");
assert!((m.opt_present("t")));
- assert_eq!(m.opt_str("t").unwrap(), "20".to_string());
+ assert_eq!(m.opt_str("t").unwrap(), "20");
}
_ => panic!()
}
match getopts(short_args.as_slice(), opts.as_slice()) {
Ok(ref m) => {
assert!((m.opt_present("test")));
- assert_eq!(m.opt_str("test").unwrap(), "20".to_string());
+ assert_eq!(m.opt_str("test").unwrap(), "20");
assert!((m.opt_present("t")));
- assert_eq!(m.opt_str("t").unwrap(), "20".to_string());
+ assert_eq!(m.opt_str("t").unwrap(), "20");
}
_ => panic!()
}
Ok(ref m) => {
// The next variable after the flag is just a free argument
- assert!(m.free[0] == "20".to_string());
+ assert!(m.free[0] == "20");
}
_ => panic!()
}
match rs {
Ok(ref m) => {
assert!((m.opt_present("test")));
- assert_eq!(m.opt_str("test").unwrap(), "20".to_string());
+ assert_eq!(m.opt_str("test").unwrap(), "20");
assert!((m.opt_present("t")));
- assert_eq!(m.opt_str("t").unwrap(), "20".to_string());
+ assert_eq!(m.opt_str("t").unwrap(), "20");
}
_ => panic!()
}
match getopts(short_args.as_slice(), opts.as_slice()) {
Ok(ref m) => {
assert!((m.opt_present("test")));
- assert_eq!(m.opt_str("test").unwrap(), "20".to_string());
+ assert_eq!(m.opt_str("test").unwrap(), "20");
assert!((m.opt_present("t")));
- assert_eq!(m.opt_str("t").unwrap(), "20".to_string());
+ assert_eq!(m.opt_str("t").unwrap(), "20");
}
_ => panic!()
}
match rs {
Ok(ref m) => {
assert!(m.opt_present("test"));
- assert_eq!(m.opt_str("test").unwrap(), "20".to_string());
+ assert_eq!(m.opt_str("test").unwrap(), "20");
assert!(m.opt_present("t"));
- assert_eq!(m.opt_str("t").unwrap(), "20".to_string());
+ assert_eq!(m.opt_str("t").unwrap(), "20");
let pair = m.opt_strs("test");
- assert!(pair[0] == "20".to_string());
- assert!(pair[1] == "30".to_string());
+ assert!(pair[0] == "20");
+ assert!(pair[1] == "30");
}
_ => panic!()
}
let rs = getopts(args.as_slice(), opts.as_slice());
match rs {
Ok(ref m) => {
- assert!(m.free[0] == "prog".to_string());
- assert!(m.free[1] == "free1".to_string());
- assert_eq!(m.opt_str("s").unwrap(), "20".to_string());
- assert!(m.free[2] == "free2".to_string());
+ assert!(m.free[0] == "prog");
+ assert!(m.free[1] == "free1");
+ assert_eq!(m.opt_str("s").unwrap(), "20");
+ assert!(m.free[2] == "free2");
assert!((m.opt_present("flag")));
- assert_eq!(m.opt_str("long").unwrap(), "30".to_string());
+ assert_eq!(m.opt_str("long").unwrap(), "30");
assert!((m.opt_present("f")));
let pair = m.opt_strs("m");
- assert!(pair[0] == "40".to_string());
- assert!(pair[1] == "50".to_string());
+ assert!(pair[0] == "40");
+ assert!(pair[1] == "50");
let pair = m.opt_strs("n");
- assert!(pair[0] == "-A B".to_string());
- assert!(pair[1] == "-60 70".to_string());
+ assert!(pair[0] == "-A B");
+ assert!(pair[1] == "-60 70");
assert!((!m.opt_present("notpresent")));
}
_ => panic!()
let args_single = vec!("-e".to_string(), "foo".to_string());
let matches_single = &match getopts(args_single.as_slice(),
opts.as_slice()) {
- result::Ok(m) => m,
- result::Err(_) => panic!()
+ result::Result::Ok(m) => m,
+ result::Result::Err(_) => panic!()
};
assert!(matches_single.opts_present(&["e".to_string()]));
assert!(matches_single.opts_present(&["encrypt".to_string(), "e".to_string()]));
assert!(!matches_single.opts_present(&["thing".to_string()]));
assert!(!matches_single.opts_present(&[]));
- assert_eq!(matches_single.opts_str(&["e".to_string()]).unwrap(), "foo".to_string());
+ assert_eq!(matches_single.opts_str(&["e".to_string()]).unwrap(), "foo");
assert_eq!(matches_single.opts_str(&["e".to_string(), "encrypt".to_string()]).unwrap(),
- "foo".to_string());
+ "foo");
assert_eq!(matches_single.opts_str(&["encrypt".to_string(), "e".to_string()]).unwrap(),
- "foo".to_string());
+ "foo");
let args_both = vec!("-e".to_string(), "foo".to_string(), "--encrypt".to_string(),
"foo".to_string());
let matches_both = &match getopts(args_both.as_slice(),
opts.as_slice()) {
- result::Ok(m) => m,
- result::Err(_) => panic!()
+ result::Result::Ok(m) => m,
+ result::Result::Err(_) => panic!()
};
assert!(matches_both.opts_present(&["e".to_string()]));
assert!(matches_both.opts_present(&["encrypt".to_string()]));
assert!(!matches_both.opts_present(&["thing".to_string()]));
assert!(!matches_both.opts_present(&[]));
- assert_eq!(matches_both.opts_str(&["e".to_string()]).unwrap(), "foo".to_string());
- assert_eq!(matches_both.opts_str(&["encrypt".to_string()]).unwrap(), "foo".to_string());
+ assert_eq!(matches_both.opts_str(&["e".to_string()]).unwrap(), "foo");
+ assert_eq!(matches_both.opts_str(&["encrypt".to_string()]).unwrap(), "foo");
assert_eq!(matches_both.opts_str(&["e".to_string(), "encrypt".to_string()]).unwrap(),
- "foo".to_string());
+ "foo");
assert_eq!(matches_both.opts_str(&["encrypt".to_string(), "e".to_string()]).unwrap(),
- "foo".to_string());
+ "foo");
}
#[test]
let opts = vec!(optmulti("L", "", "library directory", "LIB"),
optmulti("M", "", "something", "MMMM"));
let matches = &match getopts(args.as_slice(), opts.as_slice()) {
- result::Ok(m) => m,
- result::Err(_) => panic!()
+ result::Result::Ok(m) => m,
+ result::Result::Err(_) => panic!()
};
assert!(matches.opts_present(&["L".to_string()]));
- assert_eq!(matches.opts_str(&["L".to_string()]).unwrap(), "foo".to_string());
+ assert_eq!(matches.opts_str(&["L".to_string()]).unwrap(), "foo");
assert!(matches.opts_present(&["M".to_string()]));
- assert_eq!(matches.opts_str(&["M".to_string()]).unwrap(), ".".to_string());
+ assert_eq!(matches.opts_str(&["M".to_string()]).unwrap(), ".");
}
let opts = vec!(optmulti("L", "", "library directory", "LIB"),
optflagmulti("v", "verbose", "Verbose"));
let matches = &match getopts(args.as_slice(), opts.as_slice()) {
- result::Ok(m) => m,
- result::Err(e) => panic!( "{}", e )
+ result::Result::Ok(m) => m,
+ result::Result::Err(e) => panic!("{}", e)
};
assert!(matches.opts_present(&["L".to_string()]));
- assert_eq!(matches.opts_str(&["L".to_string()]).unwrap(), "verbose".to_string());
+ assert_eq!(matches.opts_str(&["L".to_string()]).unwrap(), "verbose");
assert!(matches.opts_present(&["v".to_string()]));
assert_eq!(3, matches.opt_count("v"));
}
-k --kiwi Desc
-p [VAL] Desc
-l VAL Desc
-".to_string();
+";
let generated_usage = usage("Usage: fruits", optgroups.as_slice());
-k --kiwi This is a long description which won't be wrapped..+..
-a --apple This is a long description which _will_ be
wrapped..+..
-".to_string();
+";
let usage = usage("Usage: fruits", optgroups.as_slice());
-a --apple This “description” has some characters that could
confuse the line wrapping; an apple costs 0.51€ in
some parts of Europe.
-".to_string();
+";
let usage = usage("Usage: fruits", optgroups.as_slice());
/// Renders text as string suitable for a label in a .dot file.
pub fn escape(&self) -> String {
match self {
- &LabelStr(ref s) => s.as_slice().escape_default(),
+ &LabelStr(ref s) => s.escape_default(),
&EscStr(ref s) => LabelText::escape_str(s.as_slice()),
}
}
fn empty_graph() {
let labels : Trivial = UnlabelledNodes(0);
let r = test_input(LabelledGraph::new("empty_graph", labels, vec!()));
- assert_eq!(r.unwrap().as_slice(),
+ assert_eq!(r.unwrap(),
r#"digraph empty_graph {
}
"#);
fn single_node() {
let labels : Trivial = UnlabelledNodes(1);
let r = test_input(LabelledGraph::new("single_node", labels, vec!()));
- assert_eq!(r.unwrap().as_slice(),
+ assert_eq!(r.unwrap(),
r#"digraph single_node {
N0[label="N0"];
}
let labels : Trivial = UnlabelledNodes(2);
let result = test_input(LabelledGraph::new("single_edge", labels,
vec!(edge(0, 1, "E"))));
- assert_eq!(result.unwrap().as_slice(),
+ assert_eq!(result.unwrap(),
r#"digraph single_edge {
N0[label="N0"];
N1[label="N1"];
let labels : Trivial = SomeNodesLabelled(vec![Some("A"), None]);
let result = test_input(LabelledGraph::new("test_some_labelled", labels,
vec![edge(0, 1, "A-1")]));
- assert_eq!(result.unwrap().as_slice(),
+ assert_eq!(result.unwrap(),
r#"digraph test_some_labelled {
N0[label="A"];
N1[label="N1"];
let labels : Trivial = UnlabelledNodes(1);
let r = test_input(LabelledGraph::new("single_cyclic_node", labels,
vec!(edge(0, 0, "E"))));
- assert_eq!(r.unwrap().as_slice(),
+ assert_eq!(r.unwrap(),
r#"digraph single_cyclic_node {
N0[label="N0"];
N0 -> N0[label="E"];
"hasse_diagram", labels,
vec!(edge(0, 1, ""), edge(0, 2, ""),
edge(1, 3, ""), edge(2, 3, ""))));
- assert_eq!(r.unwrap().as_slice(),
+ assert_eq!(r.unwrap(),
r#"digraph hasse_diagram {
N0[label="{x,y}"];
N1[label="{x}"];
render(&g, &mut writer).unwrap();
let r = (&mut writer.as_slice()).read_to_string();
- assert_eq!(r.unwrap().as_slice(),
+ assert_eq!(r.unwrap(),
r#"digraph syntax_tree {
N0[label="if test {\l branch1\l} else {\l branch2\l}\lafterward\l"];
N1[label="branch1"];
#![allow(non_upper_case_globals)]
#![allow(missing_docs)]
#![allow(non_snake_case)]
+#![allow(raw_pointer_deriving)]
extern crate core;
/// variants, because the compiler complains about the repr attribute
/// otherwise.
#[repr(u8)]
+ #[allow(missing_copy_implementations)]
pub enum c_void {
__variant1,
__variant2,
}
+ #[allow(missing_copy_implementations)]
pub enum FILE {}
+ #[allow(missing_copy_implementations)]
pub enum fpos_t {}
}
pub mod c99 {
pub type uint64_t = u64;
}
pub mod posix88 {
+ #[allow(missing_copy_implementations)]
pub enum DIR {}
+ #[allow(missing_copy_implementations)]
pub enum dirent_t {}
}
pub mod posix01 {}
pub type pthread_t = c_ulong;
#[repr(C)]
- pub struct glob_t {
+ #[deriving(Copy)] pub struct glob_t {
pub gl_pathc: size_t,
pub gl_pathv: *mut *mut c_char,
pub gl_offs: size_t,
}
#[repr(C)]
- pub struct timeval {
+ #[deriving(Copy)] pub struct timeval {
pub tv_sec: time_t,
pub tv_usec: suseconds_t,
}
#[repr(C)]
- pub struct timespec {
+ #[deriving(Copy)] pub struct timespec {
pub tv_sec: time_t,
pub tv_nsec: c_long,
}
- pub enum timezone {}
+ #[deriving(Copy)] pub enum timezone {}
pub type sighandler_t = size_t;
}
pub type in_port_t = u16;
pub type in_addr_t = u32;
#[repr(C)]
- pub struct sockaddr {
+ #[deriving(Copy)] pub struct sockaddr {
pub sa_family: sa_family_t,
pub sa_data: [u8, ..14],
}
#[repr(C)]
- pub struct sockaddr_storage {
+ #[deriving(Copy)] pub struct sockaddr_storage {
pub ss_family: sa_family_t,
pub __ss_align: i64,
pub __ss_pad2: [u8, ..112],
}
#[repr(C)]
- pub struct sockaddr_in {
+ #[deriving(Copy)] pub struct sockaddr_in {
pub sin_family: sa_family_t,
pub sin_port: in_port_t,
pub sin_addr: in_addr,
pub sin_zero: [u8, ..8],
}
#[repr(C)]
- pub struct in_addr {
+ #[deriving(Copy)] pub struct in_addr {
pub s_addr: in_addr_t,
}
#[repr(C)]
- pub struct sockaddr_in6 {
+ #[deriving(Copy)] pub struct sockaddr_in6 {
pub sin6_family: sa_family_t,
pub sin6_port: in_port_t,
pub sin6_flowinfo: u32,
pub sin6_scope_id: u32,
}
#[repr(C)]
- pub struct in6_addr {
+ #[deriving(Copy)] pub struct in6_addr {
pub s6_addr: [u16, ..8]
}
#[repr(C)]
- pub struct ip_mreq {
+ #[deriving(Copy)] pub struct ip_mreq {
pub imr_multiaddr: in_addr,
pub imr_interface: in_addr,
}
#[repr(C)]
- pub struct ip6_mreq {
+ #[deriving(Copy)] pub struct ip6_mreq {
pub ipv6mr_multiaddr: in6_addr,
pub ipv6mr_interface: c_uint,
}
#[repr(C)]
- pub struct addrinfo {
+ #[deriving(Copy)] pub struct addrinfo {
pub ai_flags: c_int,
pub ai_family: c_int,
pub ai_socktype: c_int,
pub ai_next: *mut addrinfo,
}
#[repr(C)]
- pub struct sockaddr_un {
+ #[deriving(Copy)] pub struct sockaddr_un {
pub sun_family: sa_family_t,
pub sun_path: [c_char, ..108]
}
#[repr(C)]
- pub struct ifaddrs {
+ #[deriving(Copy)] pub struct ifaddrs {
pub ifa_next: *mut ifaddrs,
pub ifa_name: *mut c_char,
pub ifa_flags: c_uint,
pub type blkcnt_t = i32;
#[repr(C)]
- pub struct stat {
+ #[deriving(Copy)] pub struct stat {
pub st_dev: dev_t,
pub __pad1: c_short,
pub st_ino: ino_t,
}
#[repr(C)]
- pub struct utimbuf {
+ #[deriving(Copy)] pub struct utimbuf {
pub actime: time_t,
pub modtime: time_t,
}
#[repr(C)]
- pub struct pthread_attr_t {
+ #[deriving(Copy)] pub struct pthread_attr_t {
pub __size: [u32, ..9]
}
}
pub type blkcnt_t = u32;
#[repr(C)]
- pub struct stat {
+ #[deriving(Copy)] pub struct stat {
pub st_dev: c_ulonglong,
pub __pad0: [c_uchar, ..4],
pub __st_ino: ino_t,
}
#[repr(C)]
- pub struct utimbuf {
+ #[deriving(Copy)] pub struct utimbuf {
pub actime: time_t,
pub modtime: time_t,
}
#[repr(C)]
- pub struct pthread_attr_t {
+ #[deriving(Copy)] pub struct pthread_attr_t {
pub __size: [u32, ..9]
}
}
pub type blkcnt_t = i32;
#[repr(C)]
- pub struct stat {
+ #[deriving(Copy)] pub struct stat {
pub st_dev: c_ulong,
pub st_pad1: [c_long, ..3],
pub st_ino: ino_t,
}
#[repr(C)]
- pub struct utimbuf {
+ #[deriving(Copy)] pub struct utimbuf {
pub actime: time_t,
pub modtime: time_t,
}
#[repr(C)]
- pub struct pthread_attr_t {
+ #[deriving(Copy)] pub struct pthread_attr_t {
pub __size: [u32, ..9]
}
}
pub mod extra {
use types::os::arch::c95::{c_ushort, c_int, c_uchar};
#[repr(C)]
- pub struct sockaddr_ll {
+ #[deriving(Copy)] pub struct sockaddr_ll {
pub sll_family: c_ushort,
pub sll_protocol: c_ushort,
pub sll_ifindex: c_int,
pub type blksize_t = i64;
pub type blkcnt_t = i64;
#[repr(C)]
- pub struct stat {
+ #[deriving(Copy)] pub struct stat {
pub st_dev: dev_t,
pub st_ino: ino_t,
pub st_nlink: nlink_t,
}
#[repr(C)]
- pub struct utimbuf {
+ #[deriving(Copy)] pub struct utimbuf {
pub actime: time_t,
pub modtime: time_t,
}
#[repr(C)]
- pub struct pthread_attr_t {
+ #[deriving(Copy)] pub struct pthread_attr_t {
pub __size: [u64, ..7]
}
}
}
pub mod extra {
use types::os::arch::c95::{c_ushort, c_int, c_uchar};
- pub struct sockaddr_ll {
+ #[deriving(Copy)] pub struct sockaddr_ll {
pub sll_family: c_ushort,
pub sll_protocol: c_ushort,
pub sll_ifindex: c_int,
pub type pthread_t = uintptr_t;
#[repr(C)]
- pub struct glob_t {
+ #[deriving(Copy)] pub struct glob_t {
pub gl_pathc: size_t,
pub __unused1: size_t,
pub gl_offs: size_t,
}
#[repr(C)]
- pub struct timeval {
+ #[deriving(Copy)] pub struct timeval {
pub tv_sec: time_t,
pub tv_usec: suseconds_t,
}
#[repr(C)]
- pub struct timespec {
+ #[deriving(Copy)] pub struct timespec {
pub tv_sec: time_t,
pub tv_nsec: c_long,
}
- pub enum timezone {}
+ #[deriving(Copy)] pub enum timezone {}
pub type sighandler_t = size_t;
}
pub type in_port_t = u16;
pub type in_addr_t = u32;
#[repr(C)]
- pub struct sockaddr {
+ #[deriving(Copy)] pub struct sockaddr {
pub sa_len: u8,
pub sa_family: sa_family_t,
pub sa_data: [u8, ..14],
}
#[repr(C)]
- pub struct sockaddr_storage {
+ #[deriving(Copy)] pub struct sockaddr_storage {
pub ss_len: u8,
pub ss_family: sa_family_t,
pub __ss_pad1: [u8, ..6],
pub __ss_pad2: [u8, ..112],
}
#[repr(C)]
- pub struct sockaddr_in {
+ #[deriving(Copy)] pub struct sockaddr_in {
pub sin_len: u8,
pub sin_family: sa_family_t,
pub sin_port: in_port_t,
pub sin_zero: [u8, ..8],
}
#[repr(C)]
- pub struct in_addr {
+ #[deriving(Copy)] pub struct in_addr {
pub s_addr: in_addr_t,
}
#[repr(C)]
- pub struct sockaddr_in6 {
+ #[deriving(Copy)] pub struct sockaddr_in6 {
pub sin6_len: u8,
pub sin6_family: sa_family_t,
pub sin6_port: in_port_t,
pub sin6_scope_id: u32,
}
#[repr(C)]
- pub struct in6_addr {
+ #[deriving(Copy)] pub struct in6_addr {
pub s6_addr: [u16, ..8]
}
#[repr(C)]
- pub struct ip_mreq {
+ #[deriving(Copy)] pub struct ip_mreq {
pub imr_multiaddr: in_addr,
pub imr_interface: in_addr,
}
#[repr(C)]
- pub struct ip6_mreq {
+ #[deriving(Copy)] pub struct ip6_mreq {
pub ipv6mr_multiaddr: in6_addr,
pub ipv6mr_interface: c_uint,
}
#[repr(C)]
- pub struct addrinfo {
+ #[deriving(Copy)] pub struct addrinfo {
pub ai_flags: c_int,
pub ai_family: c_int,
pub ai_socktype: c_int,
pub ai_next: *mut addrinfo,
}
#[repr(C)]
- pub struct sockaddr_un {
+ #[deriving(Copy)] pub struct sockaddr_un {
pub sun_len: u8,
pub sun_family: sa_family_t,
pub sun_path: [c_char, ..104]
}
#[repr(C)]
- pub struct ifaddrs {
+ #[deriving(Copy)] pub struct ifaddrs {
pub ifa_next: *mut ifaddrs,
pub ifa_name: *mut c_char,
pub ifa_flags: c_uint,
pub type blkcnt_t = i64;
pub type fflags_t = u32;
#[repr(C)]
- pub struct stat {
+ #[deriving(Copy)] pub struct stat {
pub st_dev: dev_t,
pub st_ino: ino_t,
pub st_mode: mode_t,
}
#[repr(C)]
- pub struct utimbuf {
+ #[deriving(Copy)] pub struct utimbuf {
pub actime: time_t,
pub modtime: time_t,
}
pub type pthread_t = uintptr_t;
#[repr(C)]
- pub struct glob_t {
+ #[deriving(Copy)] pub struct glob_t {
pub gl_pathc: size_t,
pub __unused1: size_t,
pub gl_offs: size_t,
}
#[repr(C)]
- pub struct timeval {
+ #[deriving(Copy)] pub struct timeval {
pub tv_sec: time_t,
pub tv_usec: suseconds_t,
}
#[repr(C)]
- pub struct timespec {
+ #[deriving(Copy)] pub struct timespec {
pub tv_sec: time_t,
pub tv_nsec: c_long,
}
- pub enum timezone {}
+ #[deriving(Copy)] pub enum timezone {}
pub type sighandler_t = size_t;
}
pub type in_port_t = u16;
pub type in_addr_t = u32;
#[repr(C)]
- pub struct sockaddr {
+ #[deriving(Copy)] pub struct sockaddr {
pub sa_len: u8,
pub sa_family: sa_family_t,
pub sa_data: [u8, ..14],
}
#[repr(C)]
- pub struct sockaddr_storage {
+ #[deriving(Copy)] pub struct sockaddr_storage {
pub ss_len: u8,
pub ss_family: sa_family_t,
pub __ss_pad1: [u8, ..6],
pub __ss_pad2: [u8, ..112],
}
#[repr(C)]
- pub struct sockaddr_in {
+ #[deriving(Copy)] pub struct sockaddr_in {
pub sin_len: u8,
pub sin_family: sa_family_t,
pub sin_port: in_port_t,
pub sin_zero: [u8, ..8],
}
#[repr(C)]
- pub struct in_addr {
+ #[deriving(Copy)] pub struct in_addr {
pub s_addr: in_addr_t,
}
#[repr(C)]
- pub struct sockaddr_in6 {
+ #[deriving(Copy)] pub struct sockaddr_in6 {
pub sin6_len: u8,
pub sin6_family: sa_family_t,
pub sin6_port: in_port_t,
pub sin6_scope_id: u32,
}
#[repr(C)]
- pub struct in6_addr {
+ #[deriving(Copy)] pub struct in6_addr {
pub s6_addr: [u16, ..8]
}
#[repr(C)]
- pub struct ip_mreq {
+ #[deriving(Copy)] pub struct ip_mreq {
pub imr_multiaddr: in_addr,
pub imr_interface: in_addr,
}
#[repr(C)]
- pub struct ip6_mreq {
+ #[deriving(Copy)] pub struct ip6_mreq {
pub ipv6mr_multiaddr: in6_addr,
pub ipv6mr_interface: c_uint,
}
#[repr(C)]
- pub struct addrinfo {
+ #[deriving(Copy)] pub struct addrinfo {
pub ai_flags: c_int,
pub ai_family: c_int,
pub ai_socktype: c_int,
pub ai_next: *mut addrinfo,
}
#[repr(C)]
- pub struct sockaddr_un {
+ #[deriving(Copy)] pub struct sockaddr_un {
pub sun_len: u8,
pub sun_family: sa_family_t,
pub sun_path: [c_char, ..104]
pub type fflags_t = u32;
#[repr(C)]
- pub struct stat {
+ #[deriving(Copy)] pub struct stat {
pub st_ino: ino_t,
pub st_nlink: nlink_t,
pub st_dev: dev_t,
pub st_qspare2: int64_t,
}
#[repr(C)]
- pub struct utimbuf {
+ #[deriving(Copy)] pub struct utimbuf {
pub actime: time_t,
pub modtime: time_t,
}
// Note: this is the struct called stat64 in Windows. Not stat,
// nor stati64.
#[repr(C)]
- pub struct stat {
+ #[deriving(Copy)] pub struct stat {
pub st_dev: dev_t,
pub st_ino: ino_t,
pub st_mode: u16,
// note that this is called utimbuf64 in Windows
#[repr(C)]
- pub struct utimbuf {
+ #[deriving(Copy)] pub struct utimbuf {
pub actime: time64_t,
pub modtime: time64_t,
}
#[repr(C)]
- pub struct timeval {
+ #[deriving(Copy)] pub struct timeval {
pub tv_sec: c_long,
pub tv_usec: c_long,
}
#[repr(C)]
- pub struct timespec {
+ #[deriving(Copy)] pub struct timespec {
pub tv_sec: time_t,
pub tv_nsec: c_long,
}
- pub enum timezone {}
+ #[deriving(Copy)] pub enum timezone {}
}
pub mod bsd44 {
pub type in_port_t = u16;
pub type in_addr_t = u32;
#[repr(C)]
- pub struct sockaddr {
+ #[deriving(Copy)] pub struct sockaddr {
pub sa_family: sa_family_t,
pub sa_data: [u8, ..14],
}
#[repr(C)]
- pub struct sockaddr_storage {
+ #[deriving(Copy)] pub struct sockaddr_storage {
pub ss_family: sa_family_t,
pub __ss_pad1: [u8, ..6],
pub __ss_align: i64,
pub __ss_pad2: [u8, ..112],
}
#[repr(C)]
- pub struct sockaddr_in {
+ #[deriving(Copy)] pub struct sockaddr_in {
pub sin_family: sa_family_t,
pub sin_port: in_port_t,
pub sin_addr: in_addr,
pub sin_zero: [u8, ..8],
}
#[repr(C)]
- pub struct in_addr {
+ #[deriving(Copy)] pub struct in_addr {
pub s_addr: in_addr_t,
}
#[repr(C)]
- pub struct sockaddr_in6 {
+ #[deriving(Copy)] pub struct sockaddr_in6 {
pub sin6_family: sa_family_t,
pub sin6_port: in_port_t,
pub sin6_flowinfo: u32,
pub sin6_scope_id: u32,
}
#[repr(C)]
- pub struct in6_addr {
+ #[deriving(Copy)] pub struct in6_addr {
pub s6_addr: [u16, ..8]
}
#[repr(C)]
- pub struct ip_mreq {
+ #[deriving(Copy)] pub struct ip_mreq {
pub imr_multiaddr: in_addr,
pub imr_interface: in_addr,
}
#[repr(C)]
- pub struct ip6_mreq {
+ #[deriving(Copy)] pub struct ip6_mreq {
pub ipv6mr_multiaddr: in6_addr,
pub ipv6mr_interface: c_uint,
}
#[repr(C)]
- pub struct addrinfo {
+ #[deriving(Copy)] pub struct addrinfo {
pub ai_flags: c_int,
pub ai_family: c_int,
pub ai_socktype: c_int,
pub ai_next: *mut addrinfo,
}
#[repr(C)]
- pub struct sockaddr_un {
+ #[deriving(Copy)] pub struct sockaddr_un {
pub sun_family: sa_family_t,
pub sun_path: [c_char, ..108]
}
pub type LPCH = *mut CHAR;
#[repr(C)]
- pub struct SECURITY_ATTRIBUTES {
+ #[deriving(Copy)] pub struct SECURITY_ATTRIBUTES {
pub nLength: DWORD,
pub lpSecurityDescriptor: LPVOID,
pub bInheritHandle: BOOL,
pub type int64 = i64;
#[repr(C)]
- pub struct STARTUPINFO {
+ #[deriving(Copy)] pub struct STARTUPINFO {
pub cb: DWORD,
pub lpReserved: LPWSTR,
pub lpDesktop: LPWSTR,
pub type LPSTARTUPINFO = *mut STARTUPINFO;
#[repr(C)]
- pub struct PROCESS_INFORMATION {
+ #[deriving(Copy)] pub struct PROCESS_INFORMATION {
pub hProcess: HANDLE,
pub hThread: HANDLE,
pub dwProcessId: DWORD,
pub type LPPROCESS_INFORMATION = *mut PROCESS_INFORMATION;
#[repr(C)]
- pub struct SYSTEM_INFO {
+ #[deriving(Copy)] pub struct SYSTEM_INFO {
pub wProcessorArchitecture: WORD,
pub wReserved: WORD,
pub dwPageSize: DWORD,
pub type LPSYSTEM_INFO = *mut SYSTEM_INFO;
#[repr(C)]
- pub struct MEMORY_BASIC_INFORMATION {
+ #[deriving(Copy)] pub struct MEMORY_BASIC_INFORMATION {
pub BaseAddress: LPVOID,
pub AllocationBase: LPVOID,
pub AllocationProtect: DWORD,
pub type LPMEMORY_BASIC_INFORMATION = *mut MEMORY_BASIC_INFORMATION;
#[repr(C)]
- pub struct OVERLAPPED {
+ #[deriving(Copy)] pub struct OVERLAPPED {
pub Internal: *mut c_ulong,
pub InternalHigh: *mut c_ulong,
pub Offset: DWORD,
pub type LPOVERLAPPED = *mut OVERLAPPED;
#[repr(C)]
- pub struct FILETIME {
+ #[deriving(Copy)] pub struct FILETIME {
pub dwLowDateTime: DWORD,
pub dwHighDateTime: DWORD,
}
pub type LPFILETIME = *mut FILETIME;
#[repr(C)]
- pub struct GUID {
+ #[deriving(Copy)] pub struct GUID {
pub Data1: DWORD,
pub Data2: WORD,
pub Data3: WORD,
}
#[repr(C)]
- pub struct WSAPROTOCOLCHAIN {
+ #[deriving(Copy)] pub struct WSAPROTOCOLCHAIN {
pub ChainLen: c_int,
pub ChainEntries: [DWORD, ..MAX_PROTOCOL_CHAIN as uint],
}
pub type LPWSAPROTOCOLCHAIN = *mut WSAPROTOCOLCHAIN;
#[repr(C)]
- pub struct WSAPROTOCOL_INFO {
+ #[deriving(Copy)] pub struct WSAPROTOCOL_INFO {
pub dwServiceFlags1: DWORD,
pub dwServiceFlags2: DWORD,
pub dwServiceFlags3: DWORD,
pub type GROUP = c_uint;
#[repr(C)]
- pub struct WIN32_FIND_DATAW {
+ #[deriving(Copy)] pub struct WIN32_FIND_DATAW {
pub dwFileAttributes: DWORD,
pub ftCreationTime: FILETIME,
pub ftLastAccessTime: FILETIME,
pub mod common {
pub mod posix01 {
use types::common::c95::c_void;
- use types::os::arch::c95::{c_char, c_int, size_t,
- time_t, suseconds_t, c_long};
+ use types::os::arch::c95::{c_char, c_int, size_t, time_t};
+ use types::os::arch::c95::{suseconds_t, c_long};
use types::os::arch::c99::{uintptr_t};
pub type pthread_t = uintptr_t;
#[repr(C)]
- pub struct glob_t {
+ #[deriving(Copy)] pub struct glob_t {
pub gl_pathc: size_t,
pub __unused1: c_int,
pub gl_offs: size_t,
}
#[repr(C)]
- pub struct timeval {
+ #[deriving(Copy)] pub struct timeval {
pub tv_sec: time_t,
pub tv_usec: suseconds_t,
}
#[repr(C)]
- pub struct timespec {
+ #[deriving(Copy)] pub struct timespec {
pub tv_sec: time_t,
pub tv_nsec: c_long,
}
- pub enum timezone {}
+ #[deriving(Copy)] pub enum timezone {}
pub type sighandler_t = size_t;
}
pub type in_port_t = u16;
pub type in_addr_t = u32;
#[repr(C)]
- pub struct sockaddr {
+ #[deriving(Copy)] pub struct sockaddr {
pub sa_len: u8,
pub sa_family: sa_family_t,
pub sa_data: [u8, ..14],
}
+
#[repr(C)]
- pub struct sockaddr_storage {
+ #[deriving(Copy)] pub struct sockaddr_storage {
pub ss_len: u8,
pub ss_family: sa_family_t,
pub __ss_pad1: [u8, ..6],
pub __ss_align: i64,
pub __ss_pad2: [u8, ..112],
}
+
#[repr(C)]
- pub struct sockaddr_in {
+ #[deriving(Copy)] pub struct sockaddr_in {
pub sin_len: u8,
pub sin_family: sa_family_t,
pub sin_port: in_port_t,
pub sin_addr: in_addr,
pub sin_zero: [u8, ..8],
}
+
#[repr(C)]
- pub struct in_addr {
+ #[deriving(Copy)] pub struct in_addr {
pub s_addr: in_addr_t,
}
+
#[repr(C)]
- pub struct sockaddr_in6 {
+ #[deriving(Copy)] pub struct sockaddr_in6 {
pub sin6_len: u8,
pub sin6_family: sa_family_t,
pub sin6_port: in_port_t,
pub sin6_addr: in6_addr,
pub sin6_scope_id: u32,
}
+
#[repr(C)]
- pub struct in6_addr {
+ #[deriving(Copy)] pub struct in6_addr {
pub s6_addr: [u16, ..8]
}
+
#[repr(C)]
- pub struct ip_mreq {
+ #[deriving(Copy)] pub struct ip_mreq {
pub imr_multiaddr: in_addr,
pub imr_interface: in_addr,
}
+
#[repr(C)]
- pub struct ip6_mreq {
+ #[deriving(Copy)] pub struct ip6_mreq {
pub ipv6mr_multiaddr: in6_addr,
pub ipv6mr_interface: c_uint,
}
+
#[repr(C)]
- pub struct addrinfo {
+ #[deriving(Copy)] pub struct addrinfo {
pub ai_flags: c_int,
pub ai_family: c_int,
pub ai_socktype: c_int,
pub ai_addr: *mut sockaddr,
pub ai_next: *mut addrinfo,
}
+
#[repr(C)]
- pub struct sockaddr_un {
+ #[deriving(Copy)] pub struct sockaddr_un {
pub sun_len: u8,
pub sun_family: sa_family_t,
pub sun_path: [c_char, ..104]
}
+
#[repr(C)]
- pub struct ifaddrs {
+ #[deriving(Copy)] pub struct ifaddrs {
pub ifa_next: *mut ifaddrs,
pub ifa_name: *mut c_char,
pub ifa_flags: c_uint,
pub type blkcnt_t = i32;
#[repr(C)]
- pub struct stat {
+ #[deriving(Copy)] pub struct stat {
pub st_dev: dev_t,
pub st_mode: mode_t,
pub st_nlink: nlink_t,
}
#[repr(C)]
- pub struct utimbuf {
+ #[deriving(Copy)] pub struct utimbuf {
pub actime: time_t,
pub modtime: time_t,
}
#[repr(C)]
- pub struct pthread_attr_t {
+ #[deriving(Copy)] pub struct pthread_attr_t {
pub __sig: c_long,
pub __opaque: [c_char, ..36]
}
}
pub mod extra {
#[repr(C)]
- pub struct mach_timebase_info {
+ #[deriving(Copy)] pub struct mach_timebase_info {
pub numer: u32,
pub denom: u32,
}
pub type blkcnt_t = i32;
#[repr(C)]
- pub struct stat {
+ #[deriving(Copy)] pub struct stat {
pub st_dev: dev_t,
pub st_mode: mode_t,
pub st_nlink: nlink_t,
}
#[repr(C)]
- pub struct utimbuf {
+ #[deriving(Copy)] pub struct utimbuf {
pub actime: time_t,
pub modtime: time_t,
}
#[repr(C)]
- pub struct pthread_attr_t {
+ #[deriving(Copy)] pub struct pthread_attr_t {
pub __sig: c_long,
pub __opaque: [c_char, ..56]
}
}
pub mod extra {
#[repr(C)]
- pub struct mach_timebase_info {
+ #[deriving(Copy)] pub struct mach_timebase_info {
pub numer: u32,
pub denom: u32,
}
pub fn issue_14344_workaround() {} // FIXME #14344 force linkage to happen correctly
#[test] fn work_on_windows() { } // FIXME #10872 needed for a happy windows
+
+#[doc(hidden)]
+#[cfg(not(test))]
+mod std {
+ pub use core::kinds;
+}
#[test]
fn parse_logging_spec_valid() {
let (dirs, filter) = parse_logging_spec("crate1::mod1=1,crate1::mod2,crate2=4");
- let dirs = dirs.as_slice();
assert_eq!(dirs.len(), 3);
assert_eq!(dirs[0].name, Some("crate1::mod1".to_string()));
assert_eq!(dirs[0].level, 1);
fn parse_logging_spec_invalid_crate() {
// test parse_logging_spec with multiple = in specification
let (dirs, filter) = parse_logging_spec("crate1::mod1=1=2,crate2=4");
- let dirs = dirs.as_slice();
assert_eq!(dirs.len(), 1);
assert_eq!(dirs[0].name, Some("crate2".to_string()));
assert_eq!(dirs[0].level, 4);
fn parse_logging_spec_invalid_log_level() {
// test parse_logging_spec with 'noNumber' as log level
let (dirs, filter) = parse_logging_spec("crate1::mod1=noNumber,crate2=4");
- let dirs = dirs.as_slice();
assert_eq!(dirs.len(), 1);
assert_eq!(dirs[0].name, Some("crate2".to_string()));
assert_eq!(dirs[0].level, 4);
fn parse_logging_spec_string_log_level() {
// test parse_logging_spec with 'warn' as log level
let (dirs, filter) = parse_logging_spec("crate1::mod1=wrong,crate2=warn");
- let dirs = dirs.as_slice();
assert_eq!(dirs.len(), 1);
assert_eq!(dirs[0].name, Some("crate2".to_string()));
assert_eq!(dirs[0].level, ::WARN);
fn parse_logging_spec_empty_log_level() {
// test parse_logging_spec with '' as log level
let (dirs, filter) = parse_logging_spec("crate1::mod1=wrong,crate2=");
- let dirs = dirs.as_slice();
assert_eq!(dirs.len(), 1);
assert_eq!(dirs[0].name, Some("crate2".to_string()));
assert_eq!(dirs[0].level, ::MAX_LOG_LEVEL);
fn parse_logging_spec_global() {
// test parse_logging_spec with no crate
let (dirs, filter) = parse_logging_spec("warn,crate2=4");
- let dirs = dirs.as_slice();
assert_eq!(dirs.len(), 2);
assert_eq!(dirs[0].name, None);
assert_eq!(dirs[0].level, 2);
#[test]
fn parse_logging_spec_valid_filter() {
let (dirs, filter) = parse_logging_spec("crate1::mod1=1,crate1::mod2,crate2=4/abc");
- let dirs = dirs.as_slice();
assert_eq!(dirs.len(), 3);
assert_eq!(dirs[0].name, Some("crate1::mod1".to_string()));
assert_eq!(dirs[0].level, 1);
assert_eq!(dirs[2].name, Some("crate2".to_string()));
assert_eq!(dirs[2].level, 4);
- assert!(filter.is_some() && filter.unwrap().to_string().as_slice() == "abc");
+ assert!(filter.is_some() && filter.unwrap().to_string() == "abc");
}
#[test]
fn parse_logging_spec_invalid_crate_filter() {
let (dirs, filter) = parse_logging_spec("crate1::mod1=1=2,crate2=4/a.c");
- let dirs = dirs.as_slice();
assert_eq!(dirs.len(), 1);
assert_eq!(dirs[0].name, Some("crate2".to_string()));
assert_eq!(dirs[0].level, 4);
- assert!(filter.is_some() && filter.unwrap().to_string().as_slice() == "a.c");
+ assert!(filter.is_some() && filter.unwrap().to_string() == "a.c");
}
#[test]
fn parse_logging_spec_empty_with_filter() {
let (dirs, filter) = parse_logging_spec("crate1/a*c");
- let dirs = dirs.as_slice();
assert_eq!(dirs.len(), 1);
assert_eq!(dirs[0].name, Some("crate1".to_string()));
assert_eq!(dirs[0].level, ::MAX_LOG_LEVEL);
- assert!(filter.is_some() && filter.unwrap().to_string().as_slice() == "a*c");
+ assert!(filter.is_some() && filter.unwrap().to_string() == "a*c");
}
}
#[deriving(PartialEq, PartialOrd)]
pub struct LogLevel(pub u32);
+impl Copy for LogLevel {}
+
impl fmt::Show for LogLevel {
fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
let LogLevel(level) = *self;
pub line: uint,
}
+impl Copy for LogLocation {}
+
/// Tests whether a given module's name is enabled for a particular level of
/// logging. This is the second layer of defense in determining whether a
/// module's log statement should be emitted.
index: uint, // Index into state
}
+impl Copy for ChaChaRng {}
+
static EMPTY: ChaChaRng = ChaChaRng {
buffer: [0, ..STATE_WORDS],
state: [0, ..STATE_WORDS],
//! The exponential distribution.
+use core::kinds::Copy;
use core::num::Float;
use {Rng, Rand};
/// College, Oxford
pub struct Exp1(pub f64);
+impl Copy for Exp1 {}
+
// This could be done via `-rng.gen::<f64>().ln()` but that is slower.
impl Rand for Exp1 {
#[inline]
lambda_inverse: f64
}
+impl Copy for Exp {}
+
impl Exp {
/// Construct a new `Exp` with the given shape parameter
/// `lambda`. Panics if `lambda <= 0`.
//! The normal and derived distributions.
+use core::kinds::Copy;
use core::num::Float;
use {Rng, Rand, Open01};
/// College, Oxford
pub struct StandardNormal(pub f64);
+impl Copy for StandardNormal {}
+
impl Rand for StandardNormal {
fn rand<R:Rng>(rng: &mut R) -> StandardNormal {
#[inline]
std_dev: f64,
}
+impl Copy for Normal {}
+
impl Normal {
/// Construct a new `Normal` distribution with the given mean and
/// standard deviation.
norm: Normal
}
+impl Copy for LogNormal {}
+
impl LogNormal {
/// Construct a new `LogNormal` distribution with the given mean
/// and standard deviation.
b: u32,
c: u32
}
+
+impl Copy for IsaacRng {}
+
static EMPTY: IsaacRng = IsaacRng {
cnt: 0,
rsl: [0, ..RAND_SIZE_UINT],
c: u64,
}
+impl Copy for Isaac64Rng {}
+
static EMPTY_64: Isaac64Rng = Isaac64Rng {
cnt: 0,
rsl: [0, .. RAND_SIZE_64],
/// [1]: Marsaglia, George (July 2003). ["Xorshift
/// RNGs"](http://www.jstatsoft.org/v08/i14/paper). *Journal of
/// Statistical Software*. Vol. 8 (Issue 14).
+#[allow(missing_copy_implementations)]
pub struct XorShiftRng {
x: u32,
y: u32,
w: u32,
}
+impl Clone for XorShiftRng {
+ fn clone(&self) -> XorShiftRng {
+ XorShiftRng {
+ x: self.x,
+ y: self.y,
+ z: self.z,
+ w: self.w,
+ }
+ }
+}
+
impl XorShiftRng {
/// Creates a new XorShiftRng instance which is not seeded.
///
/// replacing the RNG with the result of a `Default::default` call.
pub struct ReseedWithDefault;
+impl Copy for ReseedWithDefault {}
+
impl<R: Rng + Default> Reseeder<R> for ReseedWithDefault {
fn reseed(&mut self, rng: &mut R) {
*rng = Default::default();
pub end: uint,
}
+impl<'doc> Copy for Doc<'doc> {}
+
impl<'doc> Doc<'doc> {
pub fn new(data: &'doc [u8]) -> Doc<'doc> {
Doc { data: data, start: 0u, end: data.len() }
EsLabel, // Used only when debugging
}
+impl Copy for EbmlEncoderTag {}
+
#[deriving(Show)]
pub enum Error {
IntTooBig(uint),
use std::io::extensions::u64_from_be_bytes;
use std::mem::transmute;
use std::num::Int;
- use std::option::{None, Option, Some};
+ use std::option::Option;
+ use std::option::Option::{None, Some};
use serialize;
pub next: uint
}
+ impl Copy for Res {}
+
#[inline(never)]
fn vuint_at_slow(data: &[u8], start: uint) -> DecodeResult<Res> {
let a = data[start];
use serialize::{Encodable, Decodable};
- use std::option::{None, Option, Some};
+ use std::option::Option;
+ use std::option::Option::{None, Some};
#[test]
fn test_vuint_at() {
Ungreedy,
}
+impl Copy for Greed {}
+
impl Greed {
pub fn is_greedy(&self) -> bool {
match *self {
// Parse the min and max values from the regex.
let (mut min, mut max): (uint, Option<uint>);
- if !inner.as_slice().contains(",") {
+ if !inner.contains(",") {
min = try!(self.parse_uint(inner.as_slice()));
max = Some(min);
} else {
- let pieces: Vec<&str> = inner.as_slice().splitn(1, ',').collect();
+ let pieces: Vec<&str> = inner.splitn(1, ',').collect();
let (smin, smax) = (pieces[0], pieces[1]);
if smin.len() == 0 {
return self.err("Max repetitions cannot be specified \
return self.err("Capture names must have at least 1 character.")
}
let name = self.slice(self.chari, closer);
- if !name.as_slice().chars().all(is_valid_cap) {
+ if !name.chars().all(is_valid_cap) {
return self.err(
"Capture names can only have underscores, letters and digits.")
}
pub prog: fn(MatchKind, &str, uint, uint) -> Vec<Option<uint>>
}
+impl Copy for ExNative {}
+
impl Clone for ExNative {
- fn clone(&self) -> ExNative { *self }
+ fn clone(&self) -> ExNative {
+ *self
+ }
}
impl fmt::Show for Regex {
input: &str, s: uint, e: uint) -> CaptureLocs {
match *re {
Dynamic(ExDynamic { ref prog, .. }) => vm::run(which, prog, input, s, e),
- Native(ExNative { prog, .. }) => prog(which, input, s, e),
+ Native(ExNative { ref prog, .. }) => (*prog)(which, input, s, e),
}
}
};
// The test set sometimes leave out capture groups, so truncate
// actual capture groups to match test set.
- let (sexpect, mut sgot) = (expected.as_slice(), got.as_slice());
- if sgot.len() > sexpect.len() {
- sgot = sgot[0..sexpect.len()]
+ let mut sgot = got.as_slice();
+ if sgot.len() > expected.len() {
+ sgot = sgot[0..expected.len()]
}
- if sexpect != sgot {
+ if expected != sgot {
panic!("For RE '{}' against '{}', expected '{}' but got '{}'",
- $re, text, sexpect, sgot);
+ $re, text, expected, sgot);
}
}
);
Submatches,
}
+impl Copy for MatchKind {}
+
/// Runs an NFA simulation on the compiled expression given on the search text
/// `input`. The search begins at byte index `start` and ends at byte index
/// `end`. (The range is specified here so that zero-width assertions will work
StepContinue,
}
+impl Copy for StepState {}
+
impl<'r, 't> Nfa<'r, 't> {
fn run(&mut self) -> CaptureLocs {
let ncaps = match self.which {
// jump ahead quickly. If it can't be found, then we can bail
// out early.
if self.prog.prefix.len() > 0 && clist.size == 0 {
- let needle = self.prog.prefix.as_slice().as_bytes();
+ let needle = self.prog.prefix.as_bytes();
let haystack = self.input.as_bytes()[self.ic..];
match find_prefix(needle, haystack) {
None => break,
// expression returned.
let num_cap_locs = 2 * self.prog.num_captures();
let num_insts = self.prog.insts.len();
- let cap_names = self.vec_expr(self.names.as_slice().iter(),
+ let cap_names = self.vec_expr(self.names.iter(),
|cx, name| match *name {
Some(ref name) => {
let name = name.as_slice();
}
);
let prefix_anchor =
- match self.prog.insts.as_slice()[1] {
+ match self.prog.insts[1] {
EmptyBegin(flags) if flags & FLAG_MULTI == 0 => true,
_ => false,
};
let init_groups = self.vec_expr(range(0, num_cap_locs),
|cx, _| cx.expr_none(self.sp));
- let prefix_lit = Rc::new(self.prog.prefix.as_slice().as_bytes().to_vec());
+ let prefix_lit = Rc::new(self.prog.prefix.as_bytes().to_vec());
let prefix_bytes = self.cx.expr_lit(self.sp, ast::LitBinary(prefix_lit));
let check_prefix = self.check_prefix();
pub mod privacy;
pub mod reachable;
pub mod region;
+ pub mod recursion_limit;
pub mod resolve;
pub mod resolve_lifetime;
pub mod stability;
use metadata::csearch;
use middle::def::*;
+use middle::subst::Substs;
use middle::ty::{mod, Ty};
use middle::{def, pat_util, stability};
use middle::const_eval::{eval_const_expr_partial, const_int, const_uint};
use std::num::SignedInt;
use std::{i8, i16, i32, i64, u8, u16, u32, u64, f32, f64};
use syntax::{abi, ast, ast_map};
-use syntax::ast_util::{mod, is_shift_binop};
+use syntax::ast_util::is_shift_binop;
use syntax::attr::{mod, AttrMetaMethods};
use syntax::codemap::{Span, DUMMY_SP};
use syntax::parse::token;
use syntax::ast::{TyI, TyU, TyI8, TyU8, TyI16, TyU16, TyI32, TyU32, TyI64, TyU64};
+use syntax::ast_util;
use syntax::ptr::P;
use syntax::visit::{mod, Visitor};
pub struct WhileTrue;
+impl Copy for WhileTrue {}
+
impl LintPass for WhileTrue {
fn get_lints(&self) -> LintArray {
lint_array!(WHILE_TRUE)
pub struct UnusedCasts;
+impl Copy for UnusedCasts {}
+
impl LintPass for UnusedCasts {
fn get_lints(&self) -> LintArray {
lint_array!(UNUSED_TYPECASTS)
negated_expr_id: ast::NodeId,
}
+impl Copy for TypeLimits {}
+
impl TypeLimits {
pub fn new() -> TypeLimits {
TypeLimits {
pub struct ImproperCTypes;
+impl Copy for ImproperCTypes {}
+
impl LintPass for ImproperCTypes {
fn get_lints(&self) -> LintArray {
lint_array!(IMPROPER_CTYPES)
pub struct BoxPointers;
+impl Copy for BoxPointers {}
+
impl BoxPointers {
fn check_heap_type<'a, 'tcx>(&self, cx: &Context<'a, 'tcx>,
span: Span, ty: Ty<'tcx>) {
pub struct UnusedAttributes;
+impl Copy for UnusedAttributes {}
+
impl LintPass for UnusedAttributes {
fn get_lints(&self) -> LintArray {
lint_array!(UNUSED_ATTRIBUTES)
pub struct PathStatements;
+impl Copy for PathStatements {}
+
impl LintPass for PathStatements {
fn get_lints(&self) -> LintArray {
lint_array!(PATH_STATEMENTS)
pub struct UnusedResults;
+impl Copy for UnusedResults {}
+
impl LintPass for UnusedResults {
fn get_lints(&self) -> LintArray {
lint_array!(UNUSED_MUST_USE, UNUSED_RESULTS)
pub struct NonCamelCaseTypes;
+impl Copy for NonCamelCaseTypes {}
+
impl NonCamelCaseTypes {
fn check_case(&self, cx: &Context, sort: &str, ident: ast::Ident, span: Span) {
fn is_camel_case(ident: ast::Ident) -> bool {
pub struct NonSnakeCase;
+impl Copy for NonSnakeCase {}
+
impl NonSnakeCase {
fn check_snake_case(&self, cx: &Context, sort: &str, ident: ast::Ident, span: Span) {
fn is_snake_case(ident: ast::Ident) -> bool {
let mut buf = String::new();
if s.is_empty() { continue; }
for ch in s.chars() {
- if !buf.is_empty() && buf.as_slice() != "'"
+ if !buf.is_empty() && buf != "'"
&& ch.is_uppercase()
&& !last_upper {
words.push(buf);
pub struct NonUpperCaseGlobals;
+impl Copy for NonUpperCaseGlobals {}
+
impl LintPass for NonUpperCaseGlobals {
fn get_lints(&self) -> LintArray {
lint_array!(NON_UPPER_CASE_GLOBALS)
pub struct UnusedParens;
+impl Copy for UnusedParens {}
+
impl UnusedParens {
fn check_unused_parens_core(&self, cx: &Context, value: &ast::Expr, msg: &str,
struct_lit_needs_parens: bool) {
pub struct UnusedImportBraces;
+impl Copy for UnusedImportBraces {}
+
impl LintPass for UnusedImportBraces {
fn get_lints(&self) -> LintArray {
lint_array!(UNUSED_IMPORT_BRACES)
pub struct NonShorthandFieldPatterns;
+impl Copy for NonShorthandFieldPatterns {}
+
impl LintPass for NonShorthandFieldPatterns {
fn get_lints(&self) -> LintArray {
lint_array!(NON_SHORTHAND_FIELD_PATTERNS)
pub struct UnusedUnsafe;
+impl Copy for UnusedUnsafe {}
+
impl LintPass for UnusedUnsafe {
fn get_lints(&self) -> LintArray {
lint_array!(UNUSED_UNSAFE)
pub struct UnsafeBlocks;
+impl Copy for UnsafeBlocks {}
+
impl LintPass for UnsafeBlocks {
fn get_lints(&self) -> LintArray {
lint_array!(UNSAFE_BLOCKS)
pub struct UnusedMut;
+impl Copy for UnusedMut {}
+
impl UnusedMut {
fn check_unused_mut_pat(&self, cx: &Context, pats: &[P<ast::Pat>]) {
// collect all mutable patterns and group their NodeIDs by their Identifier to
pub struct UnusedAllocation;
+impl Copy for UnusedAllocation {}
+
impl LintPass for UnusedAllocation {
fn get_lints(&self) -> LintArray {
lint_array!(UNUSED_ALLOCATION)
ast::ItemEnum(..) => "an enum",
ast::ItemStruct(..) => "a struct",
ast::ItemTrait(..) => "a trait",
+ ast::ItemTy(..) => "a type alias",
_ => return
};
self.check_missing_docs_attrs(cx, Some(it.id), it.attrs.as_slice(),
}
}
+pub struct MissingCopyImplementations;
+
+impl Copy for MissingCopyImplementations {}
+
+impl LintPass for MissingCopyImplementations {
+ fn get_lints(&self) -> LintArray {
+ lint_array!(MISSING_COPY_IMPLEMENTATIONS)
+ }
+
+ fn check_item(&mut self, cx: &Context, item: &ast::Item) {
+ if !cx.exported_items.contains(&item.id) {
+ return
+ }
+ if cx.tcx
+ .destructor_for_type
+ .borrow()
+ .contains_key(&ast_util::local_def(item.id)) {
+ return
+ }
+ let ty = match item.node {
+ ast::ItemStruct(_, ref ast_generics) => {
+ if ast_generics.is_parameterized() {
+ return
+ }
+ ty::mk_struct(cx.tcx,
+ ast_util::local_def(item.id),
+ Substs::empty())
+ }
+ ast::ItemEnum(_, ref ast_generics) => {
+ if ast_generics.is_parameterized() {
+ return
+ }
+ ty::mk_enum(cx.tcx,
+ ast_util::local_def(item.id),
+ Substs::empty())
+ }
+ _ => return,
+ };
+ let parameter_environment = ty::empty_parameter_environment();
+ if !ty::type_moves_by_default(cx.tcx,
+ ty,
+ &parameter_environment) {
+ return
+ }
+ if ty::can_type_implement_copy(cx.tcx,
+ ty,
+ &parameter_environment).is_ok() {
+ cx.span_lint(MISSING_COPY_IMPLEMENTATIONS,
+ item.span,
+ "type could implement `Copy`; consider adding `impl \
+ Copy`")
+ }
+ }
+}
+
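The new `MissingCopyImplementations` pass warns on exported, non-generic types that could opt in to `Copy` but don't. A minimal sketch of the code shape the lint targets and the opt-in it suggests — using modern Rust syntax (`i32` rather than the period's `int`) and a hypothetical `Point` type, not code from this patch:

```rust
// A public, non-generic struct whose fields are all `Copy` but which has no
// `impl Copy` itself -- exactly what `missing_copy_implementations` flags.
pub struct Point {
    pub x: i32,
    pub y: i32,
}

// Opting in, as the lint suggests. `Copy` requires `Clone`, so both impls
// are provided (today this is usually written `#[derive(Clone, Copy)]`).
impl Clone for Point {
    fn clone(&self) -> Point { *self }
}
impl Copy for Point {}

fn main() {
    let p = Point { x: 1, y: 2 };
    let q = p; // a copy, not a move: `p` stays usable afterwards
    println!("{}", p.x + q.y); // prints "3"
}
```

This reflects the opt-in `Copy` design the surrounding diff implements: types move by default, and `type_moves_by_default`/`can_type_implement_copy` let the lint detect types that could cheaply opt back in.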
declare_lint!(DEPRECATED, Warn,
"detects use of #[deprecated] items")
/// `#[unstable]` attributes, or no stability attribute.
pub struct Stability;
+impl Copy for Stability {}
+
impl Stability {
fn lint(&self, cx: &Context, id: ast::DefId, span: Span) {
let stability = stability::lookup(cx.tcx, id);
declare_lint!(pub FAT_PTR_TRANSMUTES, Allow,
"detects transmutes of fat pointers")
+declare_lint!(pub MISSING_COPY_IMPLEMENTATIONS, Warn,
+ "detects potentially-forgotten implementations of `Copy`")
+
/// Does nothing as a lint pass, but registers some `Lint`s
/// which are used by other parts of the compiler.
pub struct HardwiredLints;
+impl Copy for HardwiredLints {}
+
impl LintPass for HardwiredLints {
fn get_lints(&self) -> LintArray {
lint_array!(
UnusedMut,
UnusedAllocation,
Stability,
+ MissingCopyImplementations,
)
add_builtin_with_new!(sess,
pub desc: &'static str,
}
+impl Copy for Lint {}
+
impl Lint {
/// Get the lint's name, with ASCII letters converted to lowercase.
pub fn name_lower(&self) -> String {
lint: &'static Lint,
}
+impl Copy for LintId {}
+
impl PartialEq for LintId {
fn eq(&self, other: &LintId) -> bool {
(self.lint as *const Lint) == (other.lint as *const Lint)
Allow, Warn, Deny, Forbid
}
+impl Copy for Level {}
+
impl Level {
/// Convert a level to a lower-case string.
pub fn as_str(self) -> &'static str {
CommandLine,
}
+impl Copy for LintSource {}
+
pub type LevelSource = (Level, LintSource);
pub mod builtin;
tag_table_capture_modes = 0x56,
tag_table_object_cast_map = 0x57,
}
+
+impl Copy for astencode_tag {}
static first_astencode_tag: uint = tag_ast as uint;
static last_astencode_tag: uint = tag_table_object_cast_map as uint;
impl astencode_tag {
}
}
-fn register_native_lib(sess: &Session, span: Option<Span>, name: String,
- kind: cstore::NativeLibaryKind) {
- if name.as_slice().is_empty() {
+fn register_native_lib(sess: &Session,
+ span: Option<Span>,
+ name: String,
+ kind: cstore::NativeLibraryKind) {
+ if name.is_empty() {
match span {
Some(span) => {
sess.span_err(span, "#[link(name = \"\")] given with \
hash: Option<&Svh>) -> Option<ast::CrateNum> {
let mut ret = None;
e.sess.cstore.iter_crate_data(|cnum, data| {
- if data.name.as_slice() != name { return }
+ if data.name != name { return }
match hash {
Some(hash) if *hash == data.hash() => { ret = Some(cnum); return }
pub vis: ast::Visibility,
}
+impl Copy for MethodInfo {}
+
pub fn get_symbol(cstore: &cstore::CStore, def: ast::DefId) -> String {
let cdata = cstore.get_crate_data(def.krate);
decoder::get_symbol(cdata.data(), def.node)
decoder::get_impl_vtables(&*cdata, def.node, tcx)
}
-pub fn get_native_libraries(cstore: &cstore::CStore,
- crate_num: ast::CrateNum)
- -> Vec<(cstore::NativeLibaryKind, String)> {
+pub fn get_native_libraries(cstore: &cstore::CStore, crate_num: ast::CrateNum)
+ -> Vec<(cstore::NativeLibraryKind, String)> {
let cdata = cstore.get_crate_data(crate_num);
decoder::get_native_libraries(&*cdata)
}
pub use self::MetadataBlob::*;
pub use self::LinkagePreference::*;
-pub use self::NativeLibaryKind::*;
+pub use self::NativeLibraryKind::*;
use back::svh::Svh;
use metadata::decoder;
RequireStatic,
}
-#[deriving(PartialEq, FromPrimitive, Clone)]
-pub enum NativeLibaryKind {
+impl Copy for LinkagePreference {}
+
+#[deriving(Clone, PartialEq, FromPrimitive)]
+pub enum NativeLibraryKind {
NativeStatic, // native static library (.a archive)
NativeFramework, // OSX-specific
NativeUnknown, // default way to specify a dynamic library
}
+impl Copy for NativeLibraryKind {}
+
// Where a crate came from on the local filesystem. One of these two options
// must be non-None.
#[deriving(PartialEq, Clone)]
/// Map from NodeId's of local extern crate statements to crate numbers
extern_mod_crate_map: RefCell<NodeMap<ast::CrateNum>>,
used_crate_sources: RefCell<Vec<CrateSource>>,
- used_libraries: RefCell<Vec<(String, NativeLibaryKind)>>,
+ used_libraries: RefCell<Vec<(String, NativeLibraryKind)>>,
used_link_args: RefCell<Vec<String>>,
pub intr: Rc<IdentInterner>,
}
let mut ordering = Vec::new();
fn visit(cstore: &CStore, cnum: ast::CrateNum,
ordering: &mut Vec<ast::CrateNum>) {
- if ordering.as_slice().contains(&cnum) { return }
+ if ordering.contains(&cnum) { return }
let meta = cstore.get_crate_data(cnum);
for (_, &dep) in meta.cnum_map.iter() {
visit(cstore, dep, ordering);
for (&num, _) in self.metas.borrow().iter() {
visit(self, num, &mut ordering);
}
- ordering.as_mut_slice().reverse();
- let ordering = ordering.as_slice();
+ ordering.reverse();
let mut libs = self.used_crate_sources.borrow()
.iter()
.map(|src| (src.cnum, match prefer {
libs
}
- pub fn add_used_library(&self, lib: String, kind: NativeLibaryKind) {
+ pub fn add_used_library(&self, lib: String, kind: NativeLibraryKind) {
assert!(!lib.is_empty());
self.used_libraries.borrow_mut().push((lib, kind));
}
pub fn get_used_libraries<'a>(&'a self)
- -> &'a RefCell<Vec<(String, NativeLibaryKind)> > {
+ -> &'a RefCell<Vec<(String,
+ NativeLibraryKind)>> {
&self.used_libraries
}
DlField
}
+impl Copy for DefLike {}
+
/// Iterates over the language items in the given crate.
pub fn each_lang_item(cdata: Cmd, f: |ast::NodeId, uint| -> bool) -> bool {
let root = rbml::Doc::new(cdata.data());
pub fn get_native_libraries(cdata: Cmd)
- -> Vec<(cstore::NativeLibaryKind, String)> {
+ -> Vec<(cstore::NativeLibraryKind, String)> {
let libraries = reader::get_doc(rbml::Doc::new(cdata.data()),
tag_native_libraries);
let mut result = Vec::new();
reader::tagged_docs(libraries, tag_native_libraries_lib, |lib_doc| {
let kind_doc = reader::get_doc(lib_doc, tag_native_libraries_kind);
let name_doc = reader::get_doc(lib_doc, tag_native_libraries_name);
- let kind: cstore::NativeLibaryKind =
+ let kind: cstore::NativeLibraryKind =
FromPrimitive::from_u32(reader::doc_as_u32(kind_doc)).unwrap();
let name = name_doc.as_str().to_string();
result.push((kind, name));
// encoded metadata for static methods relative to Bar,
// but not yet for Foo.
//
- if path_differs || original_name.get() != exp.name.as_slice() {
+ if path_differs || original_name.get() != exp.name {
if !encode_reexported_static_base_methods(ecx, rbml_w, exp) {
if encode_reexported_static_trait_methods(ecx, rbml_w, exp) {
debug!("(encode reexported static methods) {} [trait]",
use util::fs as myfs;
-pub enum FileMatch { FileMatches, FileDoesntMatch }
+pub enum FileMatch {
+ FileMatches,
+ FileDoesntMatch,
+}
+
+impl Copy for FileMatch {}
// A module for searching for libraries
// FIXME (#2658): I'm not happy with how this module turned out. Should
let mut env_rust_path: Vec<Path> = match get_rust_path() {
Some(env_path) => {
let env_path_components =
- env_path.as_slice().split_str(PATH_ENTRY_SEPARATOR);
+ env_path.split_str(PATH_ENTRY_SEPARATOR);
env_path_components.map(|s| Path::new(s)).collect()
}
None => Vec::new()
fn crate_matches(&mut self, crate_data: &[u8], libpath: &Path) -> bool {
if self.should_match_name {
match decoder::maybe_get_crate_name(crate_data) {
- Some(ref name) if self.crate_name == name.as_slice() => {}
+ Some(ref name) if self.crate_name == *name => {}
_ => { info!("Rejecting via crate name"); return false }
}
}
None => { debug!("triple not present"); return false }
Some(t) => t,
};
- if triple.as_slice() != self.triple {
+ if triple != self.triple {
info!("Rejecting via crate triple: expected {} got {}", self.triple, triple);
self.rejected_via_triple.push(CrateMismatch {
path: libpath.clone(),
let name = String::from_raw_buf_len(name_buf as *const u8,
name_len as uint);
debug!("get_metadata_section: name {}", name);
- if read_meta_section_name(is_osx).as_slice() == name.as_slice() {
+ if read_meta_section_name(is_osx) == name {
let cbuf = llvm::LLVMGetSectionContents(si.llsi);
let csz = llvm::LLVMGetSectionSize(si.llsi) as uint;
let cvbuf: *const u8 = cbuf as *const u8;
// Identifies an unboxed closure
UnboxedClosureSource
}
+
+impl Copy for DefIdSource {}
pub type conv_did<'a> =
|source: DefIdSource, ast::DefId|: 'a -> ast::DefId;
use middle::expr_use_visitor as euv;
use middle::mem_categorization as mc;
use middle::region;
+use middle::ty::ParameterEnvironment;
use middle::ty;
+use syntax::ast::NodeId;
use syntax::ast;
use syntax::codemap::Span;
use util::ppaux::Repr;
dfcx_loans: &'a LoanDataFlow<'a, 'tcx>,
move_data: move_data::FlowedMoveData<'a, 'tcx>,
all_loans: &'a [Loan<'tcx>],
+ param_env: &'a ParameterEnvironment<'tcx>,
}
impl<'a, 'tcx> euv::Delegate<'tcx> for CheckLoanCtxt<'a, 'tcx> {
dfcx_loans: &LoanDataFlow<'b, 'tcx>,
move_data: move_data::FlowedMoveData<'c, 'tcx>,
all_loans: &[Loan<'tcx>],
+ fn_id: NodeId,
decl: &ast::FnDecl,
body: &ast::Block) {
debug!("check_loans(body id={})", body.id);
+ let param_env = ParameterEnvironment::for_item(bccx.tcx, fn_id);
+
let mut clcx = CheckLoanCtxt {
bccx: bccx,
dfcx_loans: dfcx_loans,
move_data: move_data,
all_loans: all_loans,
+ param_env: &param_env,
};
{
- let mut euv = euv::ExprUseVisitor::new(&mut clcx, bccx.tcx);
+ let mut euv = euv::ExprUseVisitor::new(&mut clcx,
+ bccx.tcx,
+ param_env.clone());
euv.walk_fn(decl, body);
}
}
use_kind,
&**lp,
the_move,
- moved_lp);
+ moved_lp,
+ self.param_env);
false
});
}
debug!("fragments 3 assigned: {}", path_lps(assigned.as_slice()));
// Fourth, build the leftover from the moved, assigned, and parents.
- for m in moved.as_slice().iter() {
+ for m in moved.iter() {
let lp = this.path_loan_path(*m);
add_fragment_siblings(this, tcx, &mut unmoved, lp, None);
}
- for a in assigned.as_slice().iter() {
+ for a in assigned.iter() {
let lp = this.path_loan_path(*a);
add_fragment_siblings(this, tcx, &mut unmoved, lp, None);
}
- for p in parents.as_slice().iter() {
+ for p in parents.iter() {
let lp = this.path_loan_path(*p);
add_fragment_siblings(this, tcx, &mut unmoved, lp, None);
}
use middle::expr_use_visitor as euv;
use middle::mem_categorization as mc;
use middle::region;
+use middle::ty::ParameterEnvironment;
use middle::ty;
use util::ppaux::{Repr};
mod move_error;
pub fn gather_loans_in_fn<'a, 'tcx>(bccx: &BorrowckCtxt<'a, 'tcx>,
+ fn_id: NodeId,
decl: &ast::FnDecl,
body: &ast::Block)
- -> (Vec<Loan<'tcx>>, move_data::MoveData<'tcx>)
-{
+ -> (Vec<Loan<'tcx>>,
+ move_data::MoveData<'tcx>) {
let mut glcx = GatherLoanCtxt {
bccx: bccx,
all_loans: Vec::new(),
move_error_collector: move_error::MoveErrorCollector::new(),
};
+ let param_env = ParameterEnvironment::for_item(bccx.tcx, fn_id);
+
{
- let mut euv = euv::ExprUseVisitor::new(&mut glcx, bccx.tcx);
+ let mut euv = euv::ExprUseVisitor::new(&mut glcx,
+ bccx.tcx,
+ param_env);
euv.walk_fn(decl, body);
}
Assigns,
}
+impl Copy for Variant {}
+
impl Variant {
pub fn short_name(&self) -> &'static str {
match *self {
use middle::expr_use_visitor as euv;
use middle::mem_categorization as mc;
use middle::region;
-use middle::ty::{mod, Ty};
+use middle::ty::{mod, ParameterEnvironment, Ty};
use util::ppaux::{note_and_explain_region, Repr, UserString};
use std::rc::Rc;
#[deriving(Clone)]
pub struct LoanDataFlowOperator;
+impl Copy for LoanDataFlowOperator {}
+
pub type LoanDataFlow<'a, 'tcx> = DataFlowContext<'a, 'tcx, LoanDataFlowOperator>;
impl<'a, 'tcx, 'v> Visitor<'v> for BorrowckCtxt<'a, 'tcx> {
move_data::fragments::instrument_move_fragments(&flowed_moves.move_data,
this.tcx, sp, id);
- check_loans::check_loans(this, &loan_dfcx, flowed_moves,
- all_loans.as_slice(), decl, body);
+ check_loans::check_loans(this,
+ &loan_dfcx,
+ flowed_moves,
+ all_loans.as_slice(),
+ id,
+ decl,
+ body);
visit::walk_fn(this, fk, decl, body, sp);
}
// Check the body of fn items.
let id_range = ast_util::compute_id_range_for_fn_body(fk, decl, body, sp, id);
let (all_loans, move_data) =
- gather_loans::gather_loans_in_fn(this, decl, body);
+ gather_loans::gather_loans_in_fn(this, id, decl, body);
let mut loan_dfcx =
DataFlowContext::new(this.tcx,
LpInterior(mc::InteriorKind) // `LV.f` in doc.rs
}
+impl Copy for LoanPathElem {}
+
pub fn closure_to_block(closure_id: ast::NodeId,
tcx: &ty::ctxt) -> ast::NodeId {
match tcx.map.get(closure_id) {
// Errors that can occur
#[deriving(PartialEq)]
+#[allow(missing_copy_implementations)]
pub enum bckerr_code {
err_mutbl,
err_out_of_scope(ty::Region, ty::Region), // superscope, subscope
BorrowViolation(euv::LoanCause)
}
+impl Copy for AliasableViolationKind {}
+
#[deriving(Show)]
pub enum MovedValueUseKind {
MovedInUse,
MovedInCapture,
}
+impl Copy for MovedValueUseKind {}
+
///////////////////////////////////////////////////////////////////////////
// Misc
use_kind: MovedValueUseKind,
lp: &LoanPath<'tcx>,
the_move: &move_data::Move,
- moved_lp: &LoanPath<'tcx>) {
+ moved_lp: &LoanPath<'tcx>,
+ param_env: &ParameterEnvironment<'tcx>) {
let verb = match use_kind {
MovedInUse => "use",
MovedInCapture => "capture",
r).as_slice())
}
};
- let (suggestion, _) = move_suggestion(self.tcx, expr_ty,
+ let (suggestion, _) = move_suggestion(self.tcx, param_env, expr_ty,
("moved by default", ""));
self.tcx.sess.span_note(
expr_span,
r).as_slice())
}
};
- let (suggestion, help) = move_suggestion(self.tcx, expr_ty,
+ let (suggestion, help) = move_suggestion(self.tcx,
+ param_env,
+ expr_ty,
("moved by default", "make a copy and \
capture that instead to override"));
self.tcx.sess.span_note(
}
}
- fn move_suggestion<'tcx>(tcx: &ty::ctxt<'tcx>, ty: Ty<'tcx>,
+ fn move_suggestion<'tcx>(tcx: &ty::ctxt<'tcx>,
+ param_env: &ty::ParameterEnvironment<'tcx>,
+ ty: Ty<'tcx>,
default_msgs: (&'static str, &'static str))
-> (&'static str, &'static str) {
match ty.sty {
}) =>
("a non-copyable stack closure",
"capture it in a new closure, e.g. `|x| f(x)`, to override"),
- _ if ty::type_moves_by_default(tcx, ty) =>
+ _ if ty::type_moves_by_default(tcx, ty, param_env) =>
("non-copyable",
"perhaps you meant to use `clone()`?"),
_ => default_msgs,
#[deriving(PartialEq, Eq, PartialOrd, Ord, Show)]
pub struct MovePathIndex(uint);
+impl Copy for MovePathIndex {}
+
impl MovePathIndex {
fn get(&self) -> uint {
let MovePathIndex(v) = *self; v
#[deriving(PartialEq)]
pub struct MoveIndex(uint);
+impl Copy for MoveIndex {}
+
impl MoveIndex {
fn get(&self) -> uint {
let MoveIndex(v) = *self; v
Captured // Closure creation that moves a value
}
+impl Copy for MoveKind {}
+
pub struct Move {
/// Path being moved.
pub path: MovePathIndex,
pub next_move: MoveIndex
}
+impl Copy for Move {}
+
pub struct Assignment {
/// Path being assigned.
pub path: MovePathIndex,
pub span: Span,
}
+impl Copy for Assignment {}
+
pub struct VariantMatch {
/// downcast to the variant.
pub path: MovePathIndex,
pub mode: euv::MatchMode
}
+impl Copy for VariantMatch {}
+
#[deriving(Clone)]
pub struct MoveDataFlowOperator;
+impl Copy for MoveDataFlowOperator {}
+
pub type MoveDataFlow<'a, 'tcx> = DataFlowContext<'a, 'tcx, MoveDataFlowOperator>;
#[deriving(Clone)]
pub struct AssignDataFlowOperator;
+impl Copy for AssignDataFlowOperator {}
+
pub type AssignDataFlow<'a, 'tcx> = DataFlowContext<'a, 'tcx, AssignDataFlowOperator>;
fn loan_path_is_precise(loan_path: &LoanPath) -> bool {
break_index: CFGIndex, // where to go on a `break`
}
+impl Copy for LoopScope {}
+
pub fn construct(tcx: &ty::ctxt,
blk: &ast::Block) -> CFG {
let mut graph = graph::Graph::new();
fn replace_newline_with_backslash_l(s: String) -> String {
// Replacing newlines with \\l causes each line to be left-aligned,
// improving presentation of (long) pretty-printed expressions.
- if s.as_slice().contains("\n") {
+ if s.contains("\n") {
let mut s = s.replace("\n", "\\l");
// Apparently left-alignment applies to the line that precedes
// \l, not the line that follows; so, add \l at end of string
// if not already present, ensuring last line gets left-aligned
// as well.
let mut last_two: Vec<_> =
- s.as_slice().chars().rev().take(2).collect();
+ s.chars().rev().take(2).collect();
last_two.reverse();
if last_two != ['\\', 'l'] {
s.push_str("\\l");
pub id: ast::NodeId
}
+impl Copy for CFGNodeData {}
+
pub struct CFGEdgeData {
pub exiting_scopes: Vec<ast::NodeId>
}
Normal, Loop, Closure
}
+impl Copy for Context {}
+
struct CheckLoopVisitor<'a> {
sess: &'a Session,
cx: Context
}
+impl<'a> Copy for CheckLoopVisitor<'a> {}
+
pub fn check_crate(sess: &Session, krate: &ast::Crate) {
visit::walk_crate(&mut CheckLoopVisitor { sess: sess, cx: Normal }, krate)
}
}
pub struct MatchCheckCtxt<'a, 'tcx: 'a> {
- pub tcx: &'a ty::ctxt<'tcx>
+ pub tcx: &'a ty::ctxt<'tcx>,
+ pub param_env: ParameterEnvironment<'tcx>,
}
#[deriving(Clone, PartialEq)]
LeaveOutWitness
}
+impl Copy for WitnessPreference {}
+
impl<'a, 'tcx, 'v> Visitor<'v> for MatchCheckCtxt<'a, 'tcx> {
fn visit_expr(&mut self, ex: &ast::Expr) {
check_expr(self, ex);
}
pub fn check_crate(tcx: &ty::ctxt) {
- visit::walk_crate(&mut MatchCheckCtxt { tcx: tcx }, tcx.map.krate());
+ visit::walk_crate(&mut MatchCheckCtxt {
+ tcx: tcx,
+ param_env: ty::empty_parameter_environment(),
+ }, tcx.map.krate());
tcx.sess.abort_if_errors();
}
decl: &ast::FnDecl,
body: &ast::Block,
sp: Span,
- _: NodeId) {
+ fn_id: NodeId) {
+ match kind {
+ visit::FkFnBlock => {}
+ _ => cx.param_env = ParameterEnvironment::for_item(cx.tcx, fn_id),
+ }
+
visit::walk_fn(cx, kind, decl, body, sp);
+
for input in decl.inputs.iter() {
is_refutable(cx, &*input.pat, |pat| {
span_err!(cx.tcx.sess, input.pat.span, E0006,
match p.node {
ast::PatIdent(ast::BindByValue(_), _, ref sub) => {
let pat_ty = ty::node_id_to_type(tcx, p.id);
- if ty::type_moves_by_default(tcx, pat_ty) {
+ if ty::type_moves_by_default(tcx,
+ pat_ty,
+ &cx.param_env) {
check_move(p, sub.as_ref().map(|p| &**p));
}
}
let mut checker = MutationChecker {
cx: cx,
};
- let mut visitor = ExprUseVisitor::new(&mut checker, checker.cx.tcx);
+ let mut visitor = ExprUseVisitor::new(&mut checker,
+ checker.cx.tcx,
+ cx.param_env.clone());
visitor.walk_expr(guard);
}
use middle::expr_use_visitor as euv;
use middle::mem_categorization as mc;
+use middle::ty::ParameterEnvironment;
use middle::ty;
use util::ppaux::ty_to_string;
fd: &'v ast::FnDecl,
b: &'v ast::Block,
s: Span,
- _: ast::NodeId) {
+ fn_id: ast::NodeId) {
{
- let mut euv = euv::ExprUseVisitor::new(self, self.tcx);
+ let param_env = ParameterEnvironment::for_item(self.tcx, fn_id);
+ let mut euv = euv::ExprUseVisitor::new(self, self.tcx, param_env);
euv.walk_fn(fd, b);
}
visit::walk_fn(self, fk, fd, b, s)
InNothing,
}
+impl Copy for Mode {}
+
struct CheckStaticVisitor<'a, 'tcx: 'a> {
tcx: &'a ty::ctxt<'tcx>,
mode: Mode,
checker: &'a mut GlobalChecker,
}
-struct GlobalVisitor<'a, 'b, 'tcx: 'b>(euv::ExprUseVisitor<'a, 'b, 'tcx, ty::ctxt<'tcx>>);
+struct GlobalVisitor<'a,'b,'tcx:'a+'b>(
+ euv::ExprUseVisitor<'a,'b,'tcx,ty::ctxt<'tcx>>);
struct GlobalChecker {
static_consumptions: NodeSet,
const_borrows: NodeSet,
static_local_borrows: NodeSet::new(),
};
{
- let visitor = euv::ExprUseVisitor::new(&mut checker, tcx);
+ let param_env = ty::empty_parameter_environment();
+ let visitor = euv::ExprUseVisitor::new(&mut checker, tcx, param_env);
visit::walk_crate(&mut GlobalVisitor(visitor), tcx.map.krate());
}
visit::walk_crate(&mut CheckStaticVisitor {
}
}
-impl<'a, 'b, 't, 'v> Visitor<'v> for GlobalVisitor<'a, 'b, 't> {
+impl<'a,'b,'t,'v> Visitor<'v> for GlobalVisitor<'a,'b,'t> {
fn visit_item(&mut self, item: &ast::Item) {
match item.node {
ast::ItemConst(_, ref e) |
non_const
}
+impl Copy for constness {}
+
type constness_cache = DefIdMap<constness>;
pub fn join(a: constness, b: constness) -> constness {
use util::nodemap::NodeMap;
#[deriving(Show)]
-pub enum EntryOrExit { Entry, Exit }
+pub enum EntryOrExit {
+ Entry,
+ Exit,
+}
+
+impl Copy for EntryOrExit {}
#[deriving(Clone)]
pub struct DataFlowContext<'a, 'tcx: 'a, O> {
for attr in lint::gather_attrs(attrs).into_iter() {
match attr {
Ok((ref name, lint::Allow, _))
- if name.get() == dead_code.as_slice() => return true,
+ if name.get() == dead_code => return true,
_ => (),
}
}
DefMethod(ast::DefId /* method */, Option<ast::DefId> /* trait */, MethodProvenance),
}
+impl Copy for Def {}
+
#[deriving(Clone, PartialEq, Eq, Encodable, Decodable, Hash, Show)]
pub enum MethodProvenance {
FromTrait(ast::DefId),
}
}
+impl Copy for MethodProvenance {}
+
impl Def {
pub fn def_id(&self) -> ast::DefId {
match *self {
UnsafeBlock(ast::NodeId),
}
+impl Copy for UnsafeContext {}
+
fn type_is_unsafe_function(ty: Ty) -> bool {
match ty.sty {
ty::ty_bare_fn(ref f) => f.fn_style == ast::UnsafeFn,
use middle::{def, region, pat_util};
use middle::mem_categorization as mc;
use middle::mem_categorization::Typer;
-use middle::ty::{mod, Ty};
+use middle::ty::{mod, ParameterEnvironment, Ty};
use middle::ty::{MethodCall, MethodObject, MethodTraitObject};
use middle::ty::{MethodOrigin, MethodParam, MethodTypeParam};
use middle::ty::{MethodStatic, MethodStaticUnboxedClosure};
use util::ppaux::Repr;
+use std::kinds;
use syntax::ast;
use syntax::ptr::P;
use syntax::codemap::Span;
MatchDiscriminant
}
+impl kinds::Copy for LoanCause {}
+
#[deriving(PartialEq, Show)]
pub enum ConsumeMode {
Copy, // reference to x where x has a type that copies
Move(MoveReason), // reference to x where x has a type that moves
}
+impl kinds::Copy for ConsumeMode {}
+
#[deriving(PartialEq,Show)]
pub enum MoveReason {
DirectRefMove,
CaptureMove,
}
+impl kinds::Copy for MoveReason {}
+
#[deriving(PartialEq,Show)]
pub enum MatchMode {
NonBindingMatch,
MovingMatch,
}
+impl kinds::Copy for MatchMode {}
+
#[deriving(PartialEq,Show)]
enum TrackMatchMode<T> {
- Unknown, Definite(MatchMode), Conflicting,
+ Unknown,
+ Definite(MatchMode),
+ Conflicting,
}
+impl<T> kinds::Copy for TrackMatchMode<T> {}
+
impl<T> TrackMatchMode<T> {
// Builds up the whole match mode for a pattern from its constituent
// parts. The lattice looks like this:
WriteAndRead, // x += y
}
+impl kinds::Copy for MutateMode {}
+
enum OverloadedCallType {
FnOverloadedCall,
FnMutOverloadedCall,
FnOnceOverloadedCall,
}
+impl kinds::Copy for OverloadedCallType {}
+
impl OverloadedCallType {
fn from_trait_id(tcx: &ty::ctxt, trait_id: ast::DefId)
-> OverloadedCallType {
typer: &'t TYPER,
mc: mc::MemCategorizationContext<'t,TYPER>,
delegate: &'d mut (Delegate<'tcx>+'d),
+ param_env: ParameterEnvironment<'tcx>,
}
// If the TYPER results in an error, it's because the type check
impl<'d,'t,'tcx,TYPER:mc::Typer<'tcx>> ExprUseVisitor<'d,'t,'tcx,TYPER> {
pub fn new(delegate: &'d mut Delegate<'tcx>,
- typer: &'t TYPER)
+ typer: &'t TYPER,
+ param_env: ParameterEnvironment<'tcx>)
-> ExprUseVisitor<'d,'t,'tcx,TYPER> {
- ExprUseVisitor { typer: typer,
- mc: mc::MemCategorizationContext::new(typer),
- delegate: delegate }
+ ExprUseVisitor {
+ typer: typer,
+ mc: mc::MemCategorizationContext::new(typer),
+ delegate: delegate,
+ param_env: param_env,
+ }
}
pub fn walk_fn(&mut self,
consume_id: ast::NodeId,
consume_span: Span,
cmt: mc::cmt<'tcx>) {
- let mode = copy_or_move(self.tcx(), cmt.ty, DirectRefMove);
+ let mode = copy_or_move(self.tcx(),
+ cmt.ty,
+ &self.param_env,
+ DirectRefMove);
self.delegate.consume(consume_id, consume_span, cmt, mode);
}
ast::PatIdent(ast::BindByRef(_), _, _) =>
mode.lub(BorrowingMatch),
ast::PatIdent(ast::BindByValue(_), _, _) => {
- match copy_or_move(tcx, cmt_pat.ty, PatBindingMove) {
+ match copy_or_move(tcx,
+ cmt_pat.ty,
+ &self.param_env,
+ PatBindingMove) {
Copy => mode.lub(CopyingMatch),
Move(_) => mode.lub(MovingMatch),
}
let tcx = typer.tcx();
let def_map = &self.typer.tcx().def_map;
let delegate = &mut self.delegate;
-
+ let param_env = &mut self.param_env;
return_if_err!(mc.cat_pattern(cmt_discr.clone(), pat, |mc, cmt_pat, pat| {
if pat_util::pat_is_binding(def_map, pat) {
let tcx = typer.tcx();
r, bk, RefBinding);
}
ast::PatIdent(ast::BindByValue(_), _, _) => {
- let mode = copy_or_move(typer.tcx(), cmt_pat.ty, PatBindingMove);
+ let mode = copy_or_move(typer.tcx(),
+ cmt_pat.ty,
+ param_env,
+ PatBindingMove);
debug!("walk_pat binding consuming pat");
delegate.consume_pat(pat, cmt_pat, mode);
}
let cmt_var = return_if_err!(self.cat_captured_var(closure_expr.id,
closure_expr.span,
freevar.def));
- let mode = copy_or_move(self.tcx(), cmt_var.ty, CaptureMove);
+ let mode = copy_or_move(self.tcx(),
+ cmt_var.ty,
+ &self.param_env,
+ CaptureMove);
self.delegate.consume(closure_expr.id, freevar.span, cmt_var, mode);
}
}
}
}
-fn copy_or_move<'tcx>(tcx: &ty::ctxt<'tcx>, ty: Ty<'tcx>,
- move_reason: MoveReason) -> ConsumeMode {
- if ty::type_moves_by_default(tcx, ty) { Move(move_reason) } else { Copy }
+fn copy_or_move<'tcx>(tcx: &ty::ctxt<'tcx>,
+ ty: Ty<'tcx>,
+ param_env: &ParameterEnvironment<'tcx>,
+ move_reason: MoveReason)
+ -> ConsumeMode {
+ if ty::type_moves_by_default(tcx, ty, param_env) {
+ Move(move_reason)
+ } else {
+ Copy
+ }
}
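The `copy_or_move` helper now consults the `ParameterEnvironment` to decide whether a use consumes by `Copy` or by move. The language-level distinction it models can be sketched in modern Rust (the `Cm`/`Mv` types are hypothetical illustrations, not part of the patch):

```rust
// A type that opts in to `Copy`: bindings duplicate the value.
#[derive(Clone, Copy)]
struct Cm(u32);

// A type that moves by default (`String` is not `Copy`), so a binding
// transfers ownership -- the case `Move(move_reason)` covers above.
struct Mv(String);

fn main() {
    let a = Cm(1);
    let b = a;       // Copy: `a` remains usable
    println!("{}", a.0 + b.0); // prints "2"

    let s = Mv(String::from("x"));
    let t = s;       // Move: using `s` after this point would not compile
    println!("{}", t.0.len()); // prints "1"
}
```

Threading the parameter environment through matters because whether a type "moves by default" can depend on bounds in scope, so the old signature without `param_env` was no longer sufficient.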
ParameterSimplifiedType,
}
+impl Copy for SimplifiedType {}
+
/// Tries to simplify a type by dropping type parameters, deref'ing away any reference types, etc.
/// The idea is to get something simple that we can use to quickly decide if two types could unify
/// during method lookup.
#[allow(non_upper_case_globals)]
pub const InvalidNodeIndex: NodeIndex = NodeIndex(uint::MAX);
+impl Copy for NodeIndex {}
+
#[deriving(PartialEq, Show)]
pub struct EdgeIndex(pub uint);
#[allow(non_upper_case_globals)]
pub const InvalidEdgeIndex: EdgeIndex = EdgeIndex(uint::MAX);
+impl Copy for EdgeIndex {}
+
// Use a private field here to guarantee no more instances are created:
#[deriving(Show)]
pub struct Direction { repr: uint }
#[allow(non_upper_case_globals)]
pub const Incoming: Direction = Direction { repr: 1 };
+impl Copy for Direction {}
+
impl NodeIndex {
fn get(&self) -> uint { let NodeIndex(v) = *self; v }
/// Returns unique id (unique with respect to the graph holding associated node).
IfExpressionWithNoElse(Span)
}
+impl Copy for TypeOrigin {}
+
/// See `error_reporting.rs` for more details
#[deriving(Clone, Show)]
pub enum ValuePairs<'tcx> {
HigherRankedType,
}
+impl Copy for LateBoundRegionConversionTime {}
+
/// Reasons to create a region inference variable
///
/// See `error_reporting.rs` for more details
unresolved_ty(TyVid)
}
+impl Copy for fixup_err {}
+
pub fn fixup_err_to_string(f: fixup_err) -> String {
match f {
unresolved_int_ty(_) => {
ConstrainVarSubReg(RegionVid, Region),
}
+impl Copy for Constraint {}
+
// Something we have to verify after region inference is done, but
// which does not directly influence the inference process
pub enum Verify<'tcx> {
b: Region,
}
+impl Copy for TwoRegions {}
+
#[deriving(PartialEq)]
pub enum UndoLogEntry {
OpenSnapshot,
AddCombination(CombineMapType, TwoRegions)
}
+impl Copy for UndoLogEntry {}
+
#[deriving(PartialEq)]
pub enum CombineMapType {
Lub, Glb
}
+impl Copy for CombineMapType {}
+
#[deriving(Clone, Show)]
pub enum RegionResolutionError<'tcx> {
/// `ConcreteFailure(o, a, b)`:
length: uint
}
+impl Copy for RegionSnapshot {}
+
#[deriving(Show)]
pub struct RegionMark {
length: uint
}
+impl Copy for RegionMark {}
+
impl<'a, 'tcx> RegionVarBindings<'a, 'tcx> {
pub fn new(tcx: &'a ty::ctxt<'tcx>) -> RegionVarBindings<'a, 'tcx> {
RegionVarBindings {
#[deriving(PartialEq, Show)]
enum Classification { Expanding, Contracting }
+impl Copy for Classification {}
+
pub enum VarValue { NoValue, Value(Region), ErrorValue }
+impl Copy for VarValue {}
+
struct VarData {
classification: Classification,
value: VarValue,
SubtypeOf, SupertypeOf, EqTo
}
+impl Copy for RelationDir {}
+
impl RelationDir {
fn opposite(self) -> RelationDir {
match self {
pub struct Delegate;
+impl Copy for Delegate {}
+
// We can't use V:LatticeValue, much as I would like to,
// because frequently the pattern is that V=Option<U> for some
// other type parameter U, and we have no way to say
$($variant),*
}
+impl Copy for LangItem {}
+
pub struct LanguageItems {
pub items: Vec<Option<ast::DefId>>,
pub missing: Vec<LangItem>,
#[deriving(PartialEq)]
struct Variable(uint);
+
+impl Copy for Variable {}
+
#[deriving(PartialEq)]
struct LiveNode(uint);
+impl Copy for LiveNode {}
+
impl Variable {
fn get(&self) -> uint { let Variable(v) = *self; v }
}
ExitNode
}
+impl Copy for LiveNodeKind {}
+
fn live_node_kind_to_string(lnk: LiveNodeKind, cx: &ty::ctxt) -> String {
let cm = cx.sess.codemap();
match lnk {
ident: ast::Ident
}
+impl Copy for LocalInfo {}
+
#[deriving(Show)]
enum VarKind {
Arg(NodeId, ast::Ident),
CleanExit
}
+impl Copy for VarKind {}
+
struct IrMaps<'a, 'tcx: 'a> {
tcx: &'a ty::ctxt<'tcx>,
used: bool
}
+impl Copy for Users {}
+
fn invalid_users() -> Users {
Users {
reader: invalid_node(),
clean_exit_var: Variable
}
+impl Copy for Specials {}
+
static ACC_READ: uint = 1u;
static ACC_WRITE: uint = 2u;
static ACC_USE: uint = 4u;
// the same bindings, and we also consider the first pattern to be
// the "authoritative" set of ids
let arm_succ =
- self.define_bindings_in_arm_pats(arm.pats.as_slice().head().map(|p| &**p),
+ self.define_bindings_in_arm_pats(arm.pats.head().map(|p| &**p),
guard_succ);
self.merge_from_succ(ln, arm_succ, first_merge);
first_merge = false;
// only consider the first pattern; any later patterns must have
// the same bindings, and we also consider the first pattern to be
// the "authoritative" set of ids
- this.arm_pats_bindings(arm.pats.as_slice().head().map(|p| &**p), |this, ln, var, sp, id| {
+ this.arm_pats_bindings(arm.pats.head().map(|p| &**p), |this, ln, var, sp, id| {
this.warn_about_unused(sp, id, ln, var);
});
visit::walk_arm(this, arm);
pub is_unboxed: bool
}
+impl Copy for Upvar {}
+
// different kinds of pointers:
#[deriving(Clone, PartialEq, Eq, Hash, Show)]
pub enum PointerKind {
UnsafePtr(ast::Mutability)
}
+impl Copy for PointerKind {}
+
// We use the term "interior" to mean "something reachable from the
// base without a pointer dereference", e.g. a field
#[deriving(Clone, PartialEq, Eq, Hash, Show)]
InteriorElement(ElementKind),
}
+impl Copy for InteriorKind {}
+
#[deriving(Clone, PartialEq, Eq, Hash, Show)]
pub enum FieldName {
NamedField(ast::Name),
PositionalField(uint)
}
+impl Copy for FieldName {}
+
#[deriving(Clone, PartialEq, Eq, Hash, Show)]
pub enum ElementKind {
VecElement,
OtherElement,
}
+impl Copy for ElementKind {}
+
#[deriving(Clone, PartialEq, Eq, Hash, Show)]
pub enum MutabilityCategory {
McImmutable, // Immutable.
McInherited, // Inherited from the fact that owner is mutable.
}
+impl Copy for MutabilityCategory {}
+
// A note about the provenance of a `cmt`. This is used for
// special-case handling of upvars such as mutability inference.
// Upvar categorization can generate a variable number of nested
NoteNone // Nothing special
}
+impl Copy for Note {}
+
// `cmt`: "Category, Mutability, and Type".
//
// a complete categorization of a value indicating where it originated
deref_interior(InteriorKind),
}
+impl Copy for deref_kind {}
+
// Categorizes a derefable type. Note that we include vectors and strings as
// derefable (we model an index as the combination of a deref and then a
// pointer adjustment).
typer: &'t TYPER
}
+impl<'t,TYPER:'t> Copy for MemCategorizationContext<'t,TYPER> {}
+
pub type McResult<T> = Result<T, ()>;
/// The `Typer` trait provides the interface for the mem-categorization
InteriorSafe
}
+impl Copy for InteriorSafety {}
+
pub enum AliasableReason {
AliasableBorrowed,
AliasableClosure(ast::NodeId), // Aliasable due to capture Fn closure env
AliasableStaticMut(InteriorSafety),
}
+impl Copy for AliasableReason {}
+
impl<'tcx> cmt_<'tcx> {
pub fn guarantor(&self) -> cmt<'tcx> {
//! Returns `self` after stripping away any owned pointer derefs or
--- /dev/null
+// Copyright 2012 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Recursion limit.
+//
+// There are various parts of the compiler that must impose arbitrary limits
+// on how deeply they recurse to prevent stack overflow. Users can override
+// this via an attribute on the crate like `#![recursion_limit="22"]`. This pass
+// just peeks and looks for that attribute.
+
+use session::Session;
+use syntax::ast;
+use syntax::attr::AttrMetaMethods;
+use std::str::FromStr;
+
+pub fn update_recursion_limit(sess: &Session, krate: &ast::Crate) {
+ for attr in krate.attrs.iter() {
+ if !attr.check_name("recursion_limit") {
+ continue;
+ }
+
+ if let Some(s) = attr.value_str() {
+ if let Some(n) = FromStr::from_str(s.get()) {
+ sess.recursion_limit.set(n);
+ return;
+ }
+ }
+
+ sess.span_err(attr.span, "malformed recursion limit attribute, \
+ expected #![recursion_limit=\"N\"]");
+ }
+}
Misc(ast::NodeId)
}
+impl Copy for CodeExtent {}
+
impl CodeExtent {
/// Creates a scope that represents the dynamic extent associated
/// with `node_id`.
parent: Option<ast::NodeId>,
}
+impl Copy for Context {}
+
struct RegionResolutionVisitor<'a> {
sess: &'a Session,
binding_mode: BindingMode,
}
+impl Copy for binding_info {}
+
// Map from the name in a pattern to its binding mode.
type BindingMap = HashMap<Name,binding_info>;
type_used: ImportUse},
}
+impl Copy for LastPrivate {}
+
#[deriving(Show)]
pub enum PrivateDep {
AllPublic,
DependsOn(DefId),
}
+impl Copy for PrivateDep {}
+
// How an import is used.
#[deriving(PartialEq, Show)]
pub enum ImportUse {
Used, // The import is used.
}
+impl Copy for ImportUse {}
+
impl LastPrivate {
fn or(self, other: LastPrivate) -> LastPrivate {
match (self, other) {
ArgumentIrrefutableMode,
}
+impl Copy for PatternBindingMode {}
+
#[deriving(PartialEq, Eq, Hash, Show)]
enum Namespace {
TypeNS,
ValueNS
}
+impl Copy for Namespace {}
+
#[deriving(PartialEq)]
enum NamespaceError {
NoError,
ValueError
}
+impl Copy for NamespaceError {}
+
/// A NamespaceResult represents the result of resolving an import in
/// a particular namespace. The result is either definitely-resolved,
/// definitely- unresolved, or unknown.
GlobImport
}
+impl Copy for ImportDirectiveSubclass {}
+
/// The context that we thread through while building the reduced graph.
#[deriving(Clone)]
enum ReducedGraphParent {
RibKind)
}
+impl<'a> Copy for TypeParameters<'a> {}
+
// The rib kind controls the translation of local
// definitions (`DefLocal`) to upvars (`DefUpvar`).
ConstantItemRibKind
}
+impl Copy for RibKind {}
+
// Methods can be required or provided. RequiredMethod methods only occur in traits.
enum MethodSort {
RequiredMethod,
ProvidedMethod(NodeId)
}
+impl Copy for MethodSort {}
+
enum UseLexicalScopeFlag {
DontUseLexicalScope,
UseLexicalScope
}
+impl Copy for UseLexicalScopeFlag {}
+
enum ModulePrefixResult {
NoPrefixFound,
PrefixFound(Rc<Module>, uint)
TypeTraitItemKind,
}
+impl Copy for TraitItemKind {}
+
impl TraitItemKind {
pub fn from_explicit_self_category(explicit_self_category:
ExplicitSelfCategory)
PathSearch,
}
+impl Copy for NameSearchType {}
+
enum BareIdentifierPatternResolution {
FoundStructOrEnumVariant(Def, LastPrivate),
FoundConst(Def, LastPrivate),
BareIdentifierPatternUnresolved
}
+impl Copy for BareIdentifierPatternResolution {}
+
// Specifies how duplicates should be handled when adding a child item if
// another item exists with the same name in some namespace.
#[deriving(PartialEq)]
OverwriteDuplicates
}
+impl Copy for DuplicateCheckingMode {}
+
/// One local scope.
struct Rib {
bindings: HashMap<Name, DefLike>,
AnonymousModuleKind,
}
+impl Copy for ModuleKind {}
+
/// One node in the tree of modules.
struct Module {
parent_link: ParentLink,
}
}
+impl Copy for DefModifiers {}
+
// Records a possibly-private type definition.
#[deriving(Clone)]
struct TypeNsDef {
value_span: Option<Span>,
}
+impl Copy for ValueNsDef {}
+
// Records the definitions (at most one for each namespace) that a name is
// bound to.
struct NameBindings {
TraitQPath, // <T as SomeTrait>::
}
+impl Copy for TraitReferenceType {}
+
impl NameBindings {
fn new() -> NameBindings {
NameBindings {
let module_path = match view_path.node {
ViewPathSimple(_, ref full_path, _) => {
full_path.segments
- .as_slice().init()
+ .init()
.iter().map(|ident| ident.identifier.name)
.collect()
}
continue;
}
};
- let module_path = module_path.as_slice().init();
+ let module_path = module_path.init();
(module_path.to_vec(), name)
}
};
}
Some(_) => {
- // The import is unresolved. Bail out.
- debug!("(resolving single import) unresolved import; \
- bailing out");
- return Indeterminate;
+ // If containing_module is the same module whose import we are
+ // resolving and it has an unresolved import with the same name
+ // as `source`, then the user is actually trying to import an
+ // item that is declared in the same scope
+ //
+ // e.g.
+ // use self::submodule;
+ // pub mod submodule;
+ //
+ // In this case we continue as if we resolved the import and let
+ // the check_for_conflicts_between_imports_and_items call below
+ // handle the conflict
+ match (module_.def_id.get(), containing_module.def_id.get()) {
+ (Some(id1), Some(id2)) if id1 == id2 => {
+ if value_result.is_unknown() {
+ value_result = UnboundResult;
+ }
+ if type_result.is_unknown() {
+ type_result = UnboundResult;
+ }
+ }
+ _ => {
+ // The import is unresolved. Bail out.
+ debug!("(resolving single import) unresolved import; \
+ bailing out");
+ return Indeterminate;
+ }
+ }
}
}
}
fn check_for_conflicts_between_imports_and_items(&mut self,
module: &Module,
import_resolution:
- &mut ImportResolution,
+ &ImportResolution,
import_span: Span,
name: Name) {
if self.session.features.borrow().import_shadowing {
.contains_key(&name) {
match import_resolution.type_target {
Some(ref target) if !target.shadowable => {
- let msg = format!("import `{}` conflicts with imported \
- crate in this module",
+ let msg = format!("import `{0}` conflicts with imported \
+ crate in this module \
+ (maybe you meant `use {0}::*`?)",
token::get_name(name).get());
self.session.span_err(import_span, msg.as_slice());
}
.codemap()
.span_to_snippet((*imports)[index].span)
.unwrap();
- if sn.as_slice().contains("::") {
+ if sn.contains("::") {
self.resolve_error((*imports)[index].span,
"unresolved import");
} else {
let err = format!("unresolved import (maybe you meant `{}::*`?)",
- sn.as_slice().slice(0, sn.len()));
+ sn.slice(0, sn.len()));
self.resolve_error((*imports)[index].span, err.as_slice());
}
}
});
if method_scope && token::get_name(self.self_name).get()
- == wrong_name.as_slice() {
+ == wrong_name {
self.resolve_error(
expr.span,
"`self` is not available \
/* lifetime decl */ ast::NodeId),
}
+impl Copy for DefRegion {}
+
// maps the id of each lifetime reference to the lifetime decl
// that it corresponds to
pub type NamedRegionMap = NodeMap<DefRegion>;
FnSpace, // Type parameters attached to a method or fn
}
+impl Copy for ParamSpace {}
+
impl ParamSpace {
pub fn all() -> [ParamSpace, ..4] {
[TypeSpace, SelfSpace, AssocSpace, FnSpace]
pub code: ObligationCauseCode<'tcx>
}
+impl<'tcx> Copy for ObligationCause<'tcx> {}
+
#[deriving(Clone)]
pub enum ObligationCauseCode<'tcx> {
/// Not well classified or should be obvious from span.
pub type Obligations<'tcx> = subst::VecPerParamSpace<Obligation<'tcx>>;
+impl<'tcx> Copy for ObligationCauseCode<'tcx> {}
+
pub type Selection<'tcx> = Vtable<'tcx, Obligation<'tcx>>;
#[deriving(Clone,Show)]
VtableFnPointer(ref sig) => VtableFnPointer((*sig).clone()),
VtableUnboxedClosure(d, ref s) => VtableUnboxedClosure(d, s.clone()),
VtableParam(ref p) => VtableParam((*p).clone()),
- VtableBuiltin(ref i) => VtableBuiltin(i.map_nested(op)),
+ VtableBuiltin(ref b) => VtableBuiltin(b.map_nested(op)),
}
}
VtableFnPointer(sig) => VtableFnPointer(sig),
VtableUnboxedClosure(d, s) => VtableUnboxedClosure(d, s),
VtableParam(p) => VtableParam(p),
- VtableBuiltin(i) => VtableBuiltin(i.map_move_nested(op)),
+ VtableBuiltin(no) => VtableBuiltin(no.map_move_nested(op)),
}
}
}
previous: Option<&'prev ObligationStack<'prev, 'tcx>>
}
+#[deriving(Clone)]
pub struct SelectionCache<'tcx> {
hashmap: RefCell<HashMap<Rc<ty::TraitRef<'tcx>>,
SelectionResult<'tcx, Candidate<'tcx>>>>,
CoerciveMethodMatch(/* impl we matched */ ast::DefId)
}
+impl Copy for MethodMatchedData {}
+
/// The selection process begins by considering all impls, where
/// clauses, and so forth that might resolve an obligation. Sometimes
/// we'll be able to say definitively that (e.g.) an impl does not
}
#[deriving(Show)]
-enum EvaluationResult {
+enum EvaluationResult<'tcx> {
EvaluatedToOk,
- EvaluatedToErr,
EvaluatedToAmbig,
+ EvaluatedToErr(SelectionError<'tcx>),
}
impl<'cx, 'tcx> SelectionContext<'cx, 'tcx> {
bound: ty::BuiltinBound,
previous_stack: &ObligationStack<'o, 'tcx>,
ty: Ty<'tcx>)
- -> EvaluationResult
+ -> EvaluationResult<'tcx>
{
let obligation =
util::obligation_for_builtin_bound(
fn evaluate_obligation_recursively<'o>(&mut self,
previous_stack: Option<&ObligationStack<'o, 'tcx>>,
obligation: &Obligation<'tcx>)
- -> EvaluationResult
+ -> EvaluationResult<'tcx>
{
debug!("evaluate_obligation_recursively({})",
obligation.repr(self.tcx()));
fn evaluate_stack<'o>(&mut self,
stack: &ObligationStack<'o, 'tcx>)
- -> EvaluationResult
+ -> EvaluationResult<'tcx>
{
// In intercrate mode, whenever any of the types are unbound,
// there can always be an impl. Even if there are no impls in
match self.candidate_from_obligation(stack) {
Ok(Some(c)) => self.winnow_candidate(stack, &c),
Ok(None) => EvaluatedToAmbig,
- Err(_) => EvaluatedToErr,
+ Err(e) => EvaluatedToErr(e),
}
}
})
}
- ///////////////////////////////////////////////////////////////////////////
- // METHOD MATCHING
- //
- // Method matching is a variation on the normal select/evaluation
- // situation. In this scenario, rather than having a full trait
- // reference to select from, we start with an expression like
- // `receiver.method(...)`. This means that we have `rcvr_ty`, the
- // type of the receiver, and we have a possible trait that
- // supplies `method`. We must determine whether the receiver is
- // applicable, taking into account the transformed self type
- // declared on `method`. We also must consider the possibility
- // that `receiver` can be *coerced* into a suitable type (for
- // example, a receiver type like `&(Any+Send)` might be coerced
- // into a receiver like `&Any` to allow for method dispatch). See
- // the body of `evaluate_method_obligation()` for more details on
- // the algorithm.
-
- /// Determine whether a trait-method is applicable to a receiver of
- /// type `rcvr_ty`. *Does not affect the inference state.*
- ///
- /// - `rcvr_ty` -- type of the receiver
- /// - `xform_self_ty` -- transformed self type declared on the method, with `Self`
- /// to a fresh type variable
- /// - `obligation` -- a reference to the trait where the method is declared, with
- /// the input types on the trait replaced with fresh type variables
- pub fn evaluate_method_obligation(&mut self,
- rcvr_ty: Ty<'tcx>,
- xform_self_ty: Ty<'tcx>,
- obligation: &Obligation<'tcx>)
- -> MethodMatchResult
- {
- // Here is the situation. We have a trait method declared (say) like so:
- //
- // trait TheTrait {
- // fn the_method(self: Rc<Self>, ...) { ... }
- // }
- //
- // And then we have a call looking (say) like this:
- //
- // let x: Rc<Foo> = ...;
- // x.the_method()
- //
- // Now we want to decide if `TheTrait` is applicable. As a
- // human, we can see that `TheTrait` is applicable if there is
- // an impl for the type `Foo`. But how does the compiler know
- // what impl to look for, given that our receiver has type
- // `Rc<Foo>`? We need to take the method's self type into
- // account.
- //
- // On entry to this function, we have the following inputs:
- //
- // - `rcvr_ty = Rc<Foo>`
- // - `xform_self_ty = Rc<$0>`
- // - `obligation = $0 as TheTrait`
- //
- // We do the match in two phases. The first is a *precise
- // match*, which means that no coercion is required. This is
- // the preferred way to match. It works by first making
- // `rcvr_ty` a subtype of `xform_self_ty`. This unifies `$0`
- // and `Foo`. We can then evaluate (roughly as normal) the
- // trait reference `Foo as TheTrait`.
- //
- // If this fails, we fallback to a coercive match, described below.
-
- match self.infcx.probe(|| self.match_method_precise(rcvr_ty, xform_self_ty, obligation)) {
- Ok(()) => { return MethodMatched(PreciseMethodMatch); }
- Err(_) => { }
- }
-
- // Coercive matches work slightly differently and cannot
- // completely reuse the normal trait matching machinery
- // (though they employ many of the same bits and pieces). To
- // see how it works, let's continue with our previous example,
- // but with the following declarations:
- //
- // ```
- // trait Foo : Bar { .. }
- // trait Bar : Baz { ... }
- // trait Baz { ... }
- // impl TheTrait for Bar {
- // fn the_method(self: Rc<Bar>, ...) { ... }
- // }
- // ```
- //
- // Now we see that the receiver type `Rc<Foo>` is actually an
- // object type. And in fact the impl we want is an impl on the
- // supertrait `Rc<Bar>`. The precise matching procedure won't
- // find it, however, because `Rc<Foo>` is not a subtype of
- // `Rc<Bar>` -- it is *coercible* to `Rc<Bar>` (actually, such
- // coercions are not yet implemented, but let's leave that
- // aside for now).
- //
- // To handle this case, we employ a different procedure. Recall
- // that our initial state is as follows:
- //
- // - `rcvr_ty = Rc<Foo>`
- // - `xform_self_ty = Rc<$0>`
- // - `obligation = $0 as TheTrait`
- //
- // We now go through each impl and instantiate all of its type
- // variables, yielding the trait reference that the impl
- // provides. In our example, the impl would provide `Bar as
- // TheTrait`. Next we (try to) unify the trait reference that
- // the impl provides with the input obligation. This would
- // unify `$0` and `Bar`. Now we can see whether the receiver
- // type (`Rc<Foo>`) is *coercible to* the transformed self
- // type (`Rc<$0> == Rc<Bar>`). In this case, the answer is
- // yes, so the impl is considered a candidate.
- //
- // Note that there is the possibility of ambiguity here, even
- // when all types are known. In our example, this might occur
- // if there was *also* an impl of `TheTrait` for `Baz`. In
- // this case, `Rc<Foo>` would be coercible to both `Rc<Bar>`
- // and `Rc<Baz>`. (Note that it is not a *coherence violation*
- // to have impls for both `Bar` and `Baz`, despite this
- // ambiguity). In this case, we report an error, listing all
- // the applicable impls. The user can explicitly "up-coerce"
- // to the type they want.
- //
- // Note that this coercion step only considers actual impls
- // found in the source. This is because all the
- // compiler-provided impls (such as those for unboxed
- // closures) do not have relevant coercions. This simplifies
- // life immensely.
-
- let mut impls =
- self.assemble_method_candidates_from_impls(rcvr_ty, xform_self_ty, obligation);
-
- if impls.len() > 1 {
- impls.retain(|&c| self.winnow_method_impl(c, rcvr_ty, xform_self_ty, obligation));
- }
-
- if impls.len() > 1 {
- return MethodAmbiguous(impls);
- }
-
- match impls.pop() {
- Some(def_id) => MethodMatched(CoerciveMethodMatch(def_id)),
- None => MethodDidNotMatch
- }
- }
-
- /// Given the successful result of a method match, this function "confirms" the result, which
- /// basically repeats the various matching operations, but outside of any snapshot so that
- /// their effects are committed into the inference state.
- pub fn confirm_method_match(&mut self,
- rcvr_ty: Ty<'tcx>,
- xform_self_ty: Ty<'tcx>,
- obligation: &Obligation<'tcx>,
- data: MethodMatchedData)
- {
- let is_ok = match data {
- PreciseMethodMatch => {
- self.match_method_precise(rcvr_ty, xform_self_ty, obligation).is_ok()
- }
-
- CoerciveMethodMatch(impl_def_id) => {
- self.match_method_coerce(impl_def_id, rcvr_ty, xform_self_ty, obligation).is_ok()
- }
- };
-
- if !is_ok {
- self.tcx().sess.span_bug(
- obligation.cause.span,
- format!("match not repeatable: {}, {}, {}, {}",
- rcvr_ty.repr(self.tcx()),
- xform_self_ty.repr(self.tcx()),
- obligation.repr(self.tcx()),
- data)[]);
- }
- }
-
- /// Implements the *precise method match* procedure described in
- /// `evaluate_method_obligation()`.
- fn match_method_precise(&mut self,
- rcvr_ty: Ty<'tcx>,
- xform_self_ty: Ty<'tcx>,
- obligation: &Obligation<'tcx>)
- -> Result<(),()>
- {
- self.infcx.commit_if_ok(|| {
- match self.infcx.sub_types(false, infer::RelateSelfType(obligation.cause.span),
- rcvr_ty, xform_self_ty) {
- Ok(()) => { }
- Err(_) => { return Err(()); }
- }
-
- if self.evaluate_obligation(obligation) {
- Ok(())
- } else {
- Err(())
- }
- })
- }
-
- /// Assembles a list of potentially applicable impls using the *coercive match* procedure
- /// described in `evaluate_method_obligation()`.
- fn assemble_method_candidates_from_impls(&mut self,
- rcvr_ty: Ty<'tcx>,
- xform_self_ty: Ty<'tcx>,
- obligation: &Obligation<'tcx>)
- -> Vec<ast::DefId>
- {
- let mut candidates = Vec::new();
-
- let all_impls = self.all_impls(obligation.trait_ref.def_id);
- for &impl_def_id in all_impls.iter() {
- self.infcx.probe(|| {
- match self.match_method_coerce(impl_def_id, rcvr_ty, xform_self_ty, obligation) {
- Ok(_) => { candidates.push(impl_def_id); }
- Err(_) => { }
- }
- });
- }
-
- candidates
- }
-
- /// Applies the *coercive match* procedure described in `evaluate_method_obligation()` to a
- /// particular impl.
- fn match_method_coerce(&mut self,
- impl_def_id: ast::DefId,
- rcvr_ty: Ty<'tcx>,
- xform_self_ty: Ty<'tcx>,
- obligation: &Obligation<'tcx>)
- -> Result<Substs<'tcx>, ()>
- {
- // This is almost always expected to succeed. It
- // causes the impl's self-type etc to be unified with
- // the type variable that is shared between
- // obligation/xform_self_ty. In our example, after
- // this is done, the type of `xform_self_ty` would
- // change from `Rc<$0>` to `Rc<Foo>` (because $0 is
- // unified with `Foo`).
- let substs = try!(self.match_impl(impl_def_id, obligation));
-
- // Next, check whether we can coerce. For now we require
- // that the coercion be a no-op.
- let origin = infer::Misc(obligation.cause.span);
- match infer::mk_coercety(self.infcx, true, origin,
- rcvr_ty, xform_self_ty) {
- Ok(None) => { /* Fallthrough */ }
- Ok(Some(_)) | Err(_) => { return Err(()); }
- }
-
- Ok(substs)
- }
-
- /// A version of `winnow_impl` applicable to coerice method matching. This is basically the
- /// same as `winnow_impl` but it uses the method matching procedure and is specific to impls.
- fn winnow_method_impl(&mut self,
- impl_def_id: ast::DefId,
- rcvr_ty: Ty<'tcx>,
- xform_self_ty: Ty<'tcx>,
- obligation: &Obligation<'tcx>)
- -> bool
- {
- debug!("winnow_method_impl: impl_def_id={} rcvr_ty={} xform_self_ty={} obligation={}",
- impl_def_id.repr(self.tcx()),
- rcvr_ty.repr(self.tcx()),
- xform_self_ty.repr(self.tcx()),
- obligation.repr(self.tcx()));
-
- self.infcx.probe(|| {
- match self.match_method_coerce(impl_def_id, rcvr_ty, xform_self_ty, obligation) {
- Ok(substs) => {
- let vtable_impl = self.vtable_impl(impl_def_id,
- substs,
- obligation.cause,
- obligation.recursion_depth + 1);
- self.winnow_selection(None, VtableImpl(vtable_impl)).may_apply()
- }
- Err(()) => {
- false
- }
- }
- })
- }
-
///////////////////////////////////////////////////////////////////////////
// CANDIDATE ASSEMBLY
//
// and applicable impls. There is a certain set of precedence rules here.
match self.tcx().lang_items.to_builtin_kind(obligation.trait_ref.def_id) {
- Some(bound) => {
- try!(self.assemble_builtin_bound_candidates(bound, stack, &mut candidates));
+ Some(ty::BoundCopy) => {
+ debug!("obligation self ty is {}",
+ obligation.self_ty().repr(self.tcx()));
+
+ // If the user has asked for the older, compatibility
+ // behavior, ignore user-defined impls here. This will
+ // go away by the time 1.0 is released.
+ if !self.tcx().sess.features.borrow().opt_out_copy {
+ try!(self.assemble_candidates_from_impls(obligation, &mut candidates));
+ }
+
+ try!(self.assemble_builtin_bound_candidates(ty::BoundCopy,
+ stack,
+ &mut candidates));
}
None => {
- // For the time being, we ignore user-defined impls for builtin-bounds.
+ // For the time being, we ignore user-defined impls for builtin-bounds, other than
+ // `Copy`.
// (And unboxed candidates only apply to the Fn/FnMut/etc traits.)
try!(self.assemble_unboxed_closure_candidates(obligation, &mut candidates));
try!(self.assemble_fn_pointer_candidates(obligation, &mut candidates));
try!(self.assemble_candidates_from_impls(obligation, &mut candidates));
}
+
+ Some(bound) => {
+ try!(self.assemble_builtin_bound_candidates(bound, stack, &mut candidates));
+ }
}
try!(self.assemble_candidates_from_caller_bounds(obligation, &mut candidates));
+ debug!("candidate list size: {}", candidates.vec.len());
Ok(candidates)
}
fn winnow_candidate<'o>(&mut self,
stack: &ObligationStack<'o, 'tcx>,
candidate: &Candidate<'tcx>)
- -> EvaluationResult
+ -> EvaluationResult<'tcx>
{
debug!("winnow_candidate: candidate={}", candidate.repr(self.tcx()));
self.infcx.probe(|| {
let candidate = (*candidate).clone();
match self.confirm_candidate(stack.obligation, candidate) {
Ok(selection) => self.winnow_selection(Some(stack), selection),
- Err(_) => EvaluatedToErr,
+ Err(error) => EvaluatedToErr(error),
}
})
}
fn winnow_selection<'o>(&mut self,
stack: Option<&ObligationStack<'o, 'tcx>>,
selection: Selection<'tcx>)
- -> EvaluationResult
+ -> EvaluationResult<'tcx>
{
let mut result = EvaluatedToOk;
for obligation in selection.iter_nested() {
match self.evaluate_obligation_recursively(stack, obligation) {
- EvaluatedToErr => { return EvaluatedToErr; }
+ EvaluatedToErr(e) => { return EvaluatedToErr(e); }
EvaluatedToAmbig => { result = EvaluatedToAmbig; }
EvaluatedToOk => { }
}
}
ty::BoundCopy => {
- if
- Some(def_id) == tcx.lang_items.no_copy_bound() ||
- Some(def_id) == tcx.lang_items.managed_bound() ||
- ty::has_dtor(tcx, def_id)
- {
- return Err(Unimplemented);
+ // This is an Opt-In Built-In Trait. So, unless
+ // the user is asking for the old behavior, we
+ // don't supply any form of builtin impl.
+ if !this.tcx().sess.features.borrow().opt_out_copy {
+ return Ok(ParameterBuiltin)
}
}
}
}
-impl EvaluationResult {
+impl<'tcx> EvaluationResult<'tcx> {
fn may_apply(&self) -> bool {
match *self {
- EvaluatedToOk | EvaluatedToAmbig => true,
- EvaluatedToErr => false,
+ EvaluatedToOk |
+ EvaluatedToAmbig |
+ EvaluatedToErr(Overflow) |
+ EvaluatedToErr(OutputTypeParameterMismatch(..)) => {
+ true
+ }
+ EvaluatedToErr(Unimplemented) => {
+ false
+ }
}
}
}
pub use self::ExprAdjustment::*;
pub use self::vtable_origin::*;
pub use self::MethodOrigin::*;
+pub use self::CopyImplementationError::*;
use back::svh::Svh;
use session::Session;
use middle::region;
use middle::resolve;
use middle::resolve_lifetime;
+use middle::infer;
use middle::stability;
use middle::subst::{mod, Subst, Substs, VecPerParamSpace};
+use middle::traits::ObligationCause;
use middle::traits;
use middle::ty;
use middle::ty_fold::{mod, TypeFoldable, TypeFolder, HigherRankedFoldable};
use std::mem;
use std::ops;
use std::rc::Rc;
-use std::collections::hash_map::{Occupied, Vacant};
+use std::collections::hash_map::{HashMap, Occupied, Vacant};
use arena::TypedArena;
use syntax::abi;
use syntax::ast::{CrateNum, DefId, FnStyle, Ident, ItemTrait, LOCAL_CRATE};
use syntax::ast::{Visibility};
use syntax::ast_util::{mod, is_local, lit_is_str, local_def, PostExpansionMethod};
use syntax::attr::{mod, AttrMetaMethods};
-use syntax::codemap::Span;
+use syntax::codemap::{DUMMY_SP, Span};
use syntax::parse::token::{mod, InternedString};
use syntax::{ast, ast_map};
use std::collections::enum_set::{EnumSet, CLike};
pub mt: mt<'tcx>
}
+impl<'tcx> Copy for field<'tcx> {}
+
#[deriving(Clone, Show)]
pub enum ImplOrTraitItemContainer {
TraitContainer(ast::DefId),
ImplContainer(ast::DefId),
}
+impl Copy for ImplOrTraitItemContainer {}
+
impl ImplOrTraitItemContainer {
pub fn id(&self) -> ast::DefId {
match *self {
TypeTraitItemId(ast::DefId),
}
+impl Copy for ImplOrTraitItemId {}
+
impl ImplOrTraitItemId {
pub fn def_id(&self) -> ast::DefId {
match *self {
pub container: ImplOrTraitItemContainer,
}
+impl Copy for AssociatedType {}
+
#[deriving(Clone, PartialEq, Eq, Hash, Show)]
pub struct mt<'tcx> {
pub ty: Ty<'tcx>,
pub mutbl: ast::Mutability,
}
+impl<'tcx> Copy for mt<'tcx> {}
+
#[deriving(Clone, PartialEq, Eq, Hash, Encodable, Decodable, Show)]
pub enum TraitStore {
/// Box<Trait>
RegionTraitStore(Region, ast::Mutability),
}
+impl Copy for TraitStore {}
+
#[deriving(Clone, Show)]
pub struct field_ty {
pub name: Name,
pub origin: ast::DefId, // The DefId of the struct in which the field is declared.
}
+impl Copy for field_ty {}
+
// Contains information needed to resolve types and (in the future) look up
// the types of AST nodes.
#[deriving(PartialEq, Eq, Hash)]
pub len: uint
}
+impl Copy for creader_cache_key {}
+
pub enum ast_ty_to_ty_cache_entry<'tcx> {
atttce_unresolved, /* not resolved yet */
atttce_resolved(Ty<'tcx>) /* resolved to a type, irrespective of region */
}
+impl<'tcx> Copy for ast_ty_to_ty_cache_entry<'tcx> {}
+
#[deriving(Clone, PartialEq, Decodable, Encodable)]
pub struct ItemVariances {
pub types: VecPerParamSpace<Variance>,
Bivariant, // T<A> <: T<B> -- e.g., unused type parameter
}
+impl Copy for Variance {}
+
#[deriving(Clone, Show)]
pub enum AutoAdjustment<'tcx> {
AdjustAddEnv(ty::TraitStore),
pub index: uint
}
+impl Copy for param_index {}
+
#[deriving(Clone, Show)]
pub enum MethodOrigin<'tcx> {
// fully statically resolved method
pub substs: subst::Substs<'tcx>
}
+impl Copy for MethodCall {}
+
/// With method calls, we store some extra information in
/// side tables (i.e method_map). We use
/// MethodCall as a key to index into these tables instead of
AutoObject
}
+impl Copy for ExprAdjustment {}
+
impl MethodCall {
pub fn expr(id: ast::NodeId) -> MethodCall {
MethodCall {
pub id: ast::NodeId,
}
+impl<'tcx> Copy for TransmuteRestriction<'tcx> {}
+
/// The data structure to keep track of all the information that the
/// typechecker generates so that it can be reused and doesn't have to be
/// redone later on.
/// Caches the representation hints for struct definitions.
pub repr_hint_cache: RefCell<DefIdMap<Rc<Vec<attr::ReprAttr>>>>,
+
+ /// Caches whether types move by default.
+ pub type_moves_by_default_cache: RefCell<HashMap<Ty<'tcx>,bool>>,
}
// Flags that we track on types. These flags are propagated upwards
}
}
+impl Copy for TypeFlags {}
+
#[deriving(Show)]
pub struct TyS<'tcx> {
pub sty: sty<'tcx>,
self.ty.sty == other.ty.sty
}
}
+
impl<'tcx> Eq for InternedTy<'tcx> {}
impl<'tcx, S: Writer> Hash<S> for InternedTy<'tcx> {
}
}
+impl<'tcx> Copy for FnOutput<'tcx> {}
+
/// Signature of a function type, which I have arbitrarily
/// decided to use to refer to the input/output types.
///
pub def_id: DefId
}
+impl Copy for ParamTy {}
+
/// A [De Bruijn index][dbi] is a standard means of representing
/// regions (and perhaps later types) in a higher-ranked setting. In
/// particular, imagine a type like this:
pub closure_expr_id: ast::NodeId,
}
+impl Copy for UpvarId {}
+
#[deriving(Clone, PartialEq, Eq, Hash, Show, Encodable, Decodable)]
pub enum BorrowKind {
/// Data must be immutable and is aliasable.
MutBorrow
}
+impl Copy for BorrowKind {}
+
/// Information describing the borrowing of an upvar. This is computed
/// during `typeck`, specifically by `regionck`. The general idea is
/// that the compiler analyses treat closures like:
pub type UpvarBorrowMap = FnvHashMap<UpvarId, UpvarBorrow>;
+impl Copy for UpvarBorrow {}
+
impl Region {
pub fn is_bound(&self) -> bool {
match *self {
}
}
+impl Copy for Region {}
+
#[deriving(Clone, PartialEq, PartialOrd, Eq, Ord, Hash, Encodable, Decodable, Show)]
/// A "free" region `fr` can be interpreted as "some region
/// at least as big as the scope `fr.scope`".
pub bound_region: BoundRegion
}
+impl Copy for FreeRegion {}
+
#[deriving(Clone, PartialEq, PartialOrd, Eq, Ord, Hash, Encodable, Decodable, Show)]
pub enum BoundRegion {
/// An anonymous region parameter for a given fn (&T)
BrEnv
}
+impl Copy for BoundRegion {}
+
#[inline]
pub fn mk_prim_t<'tcx>(primitive: &'tcx TyS<'static>) -> Ty<'tcx> {
// FIXME(#17596) Ty<'tcx> is incorrectly invariant w.r.t 'tcx.
UintType(ast::UintTy),
}
+impl Copy for IntVarValue {}
+
#[deriving(Clone, Show)]
pub enum terr_vstore_kind {
terr_vec,
terr_trait
}
+impl Copy for terr_vstore_kind {}
+
#[deriving(Clone, Show)]
pub struct expected_found<T> {
pub expected: T,
pub found: T
}
+impl<T:Copy> Copy for expected_found<T> {}
+
// Data structures used in type unification
#[deriving(Clone, Show)]
pub enum type_err<'tcx> {
terr_convergence_mismatch(expected_found<bool>)
}
+impl<'tcx> Copy for type_err<'tcx> {}
+
/// Bounds suitable for a named type parameter like `A` in `fn foo<A>`
/// as well as the existential type parameter in an object type.
#[deriving(PartialEq, Eq, Hash, Clone, Show)]
pub builtin_bounds: BuiltinBounds
}
+impl Copy for ExistentialBounds {}
+
pub type BuiltinBounds = EnumSet<BuiltinBound>;
#[deriving(Clone, Encodable, PartialEq, Eq, Decodable, Hash, Show)]
BoundSync,
}
+impl Copy for BuiltinBound {}
+
pub fn empty_builtin_bounds() -> BuiltinBounds {
EnumSet::new()
}
pub index: uint
}
+impl Copy for TyVid {}
+
#[deriving(Clone, PartialEq, Eq, Hash)]
pub struct IntVid {
pub index: uint
}
+impl Copy for IntVid {}
+
#[deriving(Clone, PartialEq, Eq, Hash)]
pub struct FloatVid {
pub index: uint
}
+impl Copy for FloatVid {}
+
#[deriving(Clone, PartialEq, Eq, Encodable, Decodable, Hash)]
pub struct RegionVid {
pub index: uint
}
+impl Copy for RegionVid {}
+
#[deriving(Clone, PartialEq, Eq, Hash)]
pub enum InferTy {
TyVar(TyVid),
SkolemizedIntTy(uint),
}
+impl Copy for InferTy {}
+
#[deriving(Clone, Encodable, Decodable, Eq, Hash, Show)]
pub enum InferRegion {
ReVar(RegionVid),
ReSkolemized(uint, BoundRegion)
}
+impl Copy for InferRegion {}
+
impl cmp::PartialEq for InferRegion {
fn eq(&self, other: &InferRegion) -> bool {
match ((*self), *other) {
/// bound lifetime parameters are replaced with free ones, but in the
/// future I hope to refine the representation of types so as to make
/// more distinctions clearer.
+#[deriving(Clone)]
pub struct ParameterEnvironment<'tcx> {
/// A substitution that can be applied to move from
/// the "outer" view of a type or method to the "inner" view.
}
TypeTraitItem(_) => {
cx.sess
- .bug("ParameterEnvironment::from_item(): \
+ .bug("ParameterEnvironment::for_item(): \
can't create a parameter environment \
for type trait items")
}
}
}
ast::TypeImplItem(_) => {
- cx.sess.bug("ParameterEnvironment::from_item(): \
+ cx.sess.bug("ParameterEnvironment::for_item(): \
can't create a parameter environment \
for type impl items")
}
match *trait_method {
ast::RequiredMethod(ref required) => {
cx.sess.span_bug(required.span,
- "ParameterEnvironment::from_item():
+ "ParameterEnvironment::for_item():
can't create a parameter \
environment for required trait \
methods")
}
TypeTraitItem(_) => {
cx.sess
- .bug("ParameterEnvironment::from_item(): \
+ .bug("ParameterEnvironment::for_item(): \
can't create a parameter environment \
for type trait items")
}
}
}
}
+ Some(ast_map::NodeExpr(..)) => {
+ // This is a convenience to allow closures to work.
+ ParameterEnvironment::for_item(cx, cx.map.get_parent(id))
+ }
_ => {
-            cx.sess.bug(format!("ParameterEnvironment::from_item(): \
+            cx.sess.bug(format!("ParameterEnvironment::for_item(): \
`{}` is not an item",
FnOnceUnboxedClosureKind,
}
+impl Copy for UnboxedClosureKind {}
+
impl UnboxedClosureKind {
pub fn trait_did(&self, cx: &ctxt) -> ast::DefId {
let result = match *self {
associated_types: RefCell::new(DefIdMap::new()),
selection_cache: traits::SelectionCache::new(),
repr_hint_cache: RefCell::new(DefIdMap::new()),
+ type_moves_by_default_cache: RefCell::new(HashMap::new()),
}
}
pub bits: u64
}
+impl Copy for TypeContents {}
+
macro_rules! def_type_content_sets(
(mod $mname:ident { $($name:ident = $bits:expr),+ }) => {
#[allow(non_snake_case)]
OwnsOwned = 0b0000_0000__0000_0001__0000,
OwnsDtor = 0b0000_0000__0000_0010__0000,
OwnsManaged /* see [1] below */ = 0b0000_0000__0000_0100__0000,
- OwnsAffine = 0b0000_0000__0000_1000__0000,
OwnsAll = 0b0000_0000__1111_1111__0000,
// Things that are reachable by the value in any way (fourth nibble):
ReachesFfiUnsafe = 0b0010_0000__0000_0000__0000,
ReachesAll = 0b0011_1111__0000_0000__0000,
- // Things that cause values to *move* rather than *copy*. This
- // is almost the same as the `Copy` trait, but for managed
- // data -- atm, we consider managed data to copy, not move,
- // but it does not impl Copy as a pure memcpy is not good
- // enough. Yuck.
- Moves = 0b0000_0000__0000_1011__0000,
-
// Things that mean drop glue is necessary
NeedsDrop = 0b0000_0000__0000_0111__0000,
// Things that prevent values from being considered sized
Nonsized = 0b0000_0000__0000_0000__0001,
- // Things that make values considered not POD (would be same
- // as `Moves`, but for the fact that managed data `@` is
- // not considered POD)
- Noncopy = 0b0000_0000__0000_1111__0000,
-
// Bits to set when a managed value is encountered
//
// [1] Do not set the bits TC::OwnsManaged or
self.intersects(TC::InteriorUnsized)
}
- pub fn moves_by_default(&self, _: &ctxt) -> bool {
- self.intersects(TC::Moves)
- }
-
pub fn needs_drop(&self, _: &ctxt) -> bool {
self.intersects(TC::NeedsDrop)
}
res = res | TC::ReachesFfiUnsafe;
}
- match repr_hints.as_slice().get(0) {
+ match repr_hints.get(0) {
Some(h) => if !h.is_ffi_safe() {
res = res | TC::ReachesFfiUnsafe;
},
mc | tc_ty(cx, mt.ty, cache)
}
- fn apply_lang_items(cx: &ctxt,
- did: ast::DefId,
- tc: TypeContents)
- -> TypeContents
- {
+ fn apply_lang_items(cx: &ctxt, did: ast::DefId, tc: TypeContents)
+ -> TypeContents {
if Some(did) == cx.lang_items.managed_bound() {
tc | TC::Managed
- } else if Some(did) == cx.lang_items.no_copy_bound() {
- tc | TC::OwnsAffine
} else if Some(did) == cx.lang_items.unsafe_type() {
tc | TC::InteriorUnsafe
} else {
mutbl: ast::Mutability)
-> TypeContents {
let b = match mutbl {
- ast::MutMutable => TC::ReachesMutable | TC::OwnsAffine,
+ ast::MutMutable => TC::ReachesMutable,
ast::MutImmutable => TC::None,
};
b | (TC::ReachesBorrowed).when(region != ty::ReStatic)
}
};
- // This also prohibits "@once fn" from being copied, which allows it to
- // be called. Neither way really makes much sense.
- let ot = match cty.onceness {
- ast::Once => TC::OwnsAffine,
- ast::Many => TC::None,
- };
-
- st | ot
+ st
}
fn object_contents(cx: &ctxt,
let mut tc = TC::All;
each_inherited_builtin_bound(cx, bounds, traits, |bound| {
tc = tc - match bound {
- BoundSync | BoundSend => TC::None,
+ BoundSync | BoundSend | BoundCopy => TC::None,
BoundSized => TC::Nonsized,
- BoundCopy => TC::Noncopy,
};
});
return tc;
}
}
-pub fn type_moves_by_default<'tcx>(cx: &ctxt<'tcx>, ty: Ty<'tcx>) -> bool {
- type_contents(cx, ty).moves_by_default(cx)
+pub fn type_moves_by_default<'tcx>(cx: &ctxt<'tcx>,
+ ty: Ty<'tcx>,
+ param_env: &ParameterEnvironment<'tcx>)
+ -> bool {
+ if !type_has_params(ty) && !type_has_self(ty) {
+ match cx.type_moves_by_default_cache.borrow().get(&ty) {
+ None => {}
+ Some(&result) => {
+ debug!("determined whether {} moves by default (cached): {}",
+ ty_to_string(cx, ty),
+ result);
+ return result
+ }
+ }
+ }
+
+ let infcx = infer::new_infer_ctxt(cx);
+ let mut fulfill_cx = traits::FulfillmentContext::new();
+ let obligation = traits::obligation_for_builtin_bound(
+ cx,
+ ObligationCause::misc(DUMMY_SP),
+ ty,
+ ty::BoundCopy).unwrap();
+ fulfill_cx.register_obligation(cx, obligation);
+ let result = !fulfill_cx.select_all_or_error(&infcx,
+ param_env,
+ cx).is_ok();
+ cx.type_moves_by_default_cache.borrow_mut().insert(ty, result);
+ debug!("determined whether {} moves by default: {}",
+ ty_to_string(cx, ty),
+ result);
+ result
}
pub fn is_ffi_safe<'tcx>(cx: &ctxt<'tcx>, ty: Ty<'tcx>) -> bool {
SelfRecursive,
}
+impl Copy for Representability {}
+
/// Check whether a type is representable. This means it cannot contain unboxed
/// structural recursion. This check is needed for structs and enums.
pub fn is_type_representable<'tcx>(cx: &ctxt<'tcx>, sp: Span, ty: Ty<'tcx>)
variant: Option<ast::DefId>) -> Option<Ty<'tcx>> {
match (&ty.sty, variant) {
- (&ty_tup(ref v), None) => v.as_slice().get(i).map(|&t| t),
+ (&ty_tup(ref v), None) => v.get(i).map(|&t| t),
(&ty_struct(def_id, ref substs), None) => lookup_struct_fields(cx, def_id)
- .as_slice().get(i)
+ .get(i)
.map(|&t|lookup_item_type(cx, t.id).ty.subst(cx, substs)),
(&ty_enum(def_id, ref substs), Some(variant_def_id)) => {
let variant_info = enum_variant_with_id(cx, def_id, variant_def_id);
- variant_info.args.as_slice().get(i).map(|t|t.subst(cx, substs))
+ variant_info.args.get(i).map(|t|t.subst(cx, substs))
}
(&ty_enum(def_id, ref substs), None) => {
assert!(enum_is_univariant(cx, def_id));
let enum_variants = enum_variants(cx, def_id);
let variant_info = &(*enum_variants)[0];
- variant_info.args.as_slice().get(i).map(|t|t.subst(cx, substs))
+ variant_info.args.get(i).map(|t|t.subst(cx, substs))
}
_ => None
RvalueStmtExpr
}
+impl Copy for ExprKind {}
+
pub fn expr_kind(tcx: &ctxt, expr: &ast::Expr) -> ExprKind {
if tcx.method_map.borrow().contains_key(&MethodCall::expr(expr.id)) {
// Overloaded operations are generally calls, and hence they are
pub name: ast::Name,
}
+impl Copy for AssociatedTypeInfo {}
+
impl PartialOrd for AssociatedTypeInfo {
fn partial_cmp(&self, other: &AssociatedTypeInfo) -> Option<Ordering> {
Some(self.index.cmp(&other.index))
TraitDtor(DefId, bool)
}
+impl Copy for DtorKind {}
+
impl DtorKind {
pub fn is_present(&self) -> bool {
match *self {
pub ty: Ty<'tcx>,
}
+impl<'tcx> Copy for UnboxedClosureUpvar<'tcx> {}
+
// Returns a list of `UnboxedClosureUpvar`s for each upvar.
pub fn unboxed_closure_upvars<'tcx>(tcx: &ctxt<'tcx>, closure_id: ast::DefId, substs: &Substs<'tcx>)
-> Vec<UnboxedClosureUpvar<'tcx>> {
ByBoxExplicitSelfCategory,
}
+impl Copy for ExplicitSelfCategory {}
+
/// Pushes all the lifetimes in the given type onto the given list. A
/// "lifetime in a type" is a lifetime specified by a reference or a lifetime
/// in a list of type substitutions. This does *not* traverse into nominal
pub span: Span
}
+impl Copy for Freevar {}
+
pub type FreevarMap = NodeMap<Vec<Freevar>>;
pub type CaptureModeMap = NodeMap<ast::CaptureClause>;
}
}
+impl Copy for DebruijnIndex {}
+
impl<'tcx> Repr<'tcx> for AutoAdjustment<'tcx> {
fn repr(&self, tcx: &ctxt<'tcx>) -> String {
match *self {
trait_ref.substs.clone().with_method(meth_tps, meth_regions)
}
+pub enum CopyImplementationError {
+ FieldDoesNotImplementCopy(ast::Name),
+ VariantDoesNotImplementCopy(ast::Name),
+ TypeIsStructural,
+}
+
+impl Copy for CopyImplementationError {}
+
+pub fn can_type_implement_copy<'tcx>(tcx: &ctxt<'tcx>,
+ self_type: Ty<'tcx>,
+ param_env: &ParameterEnvironment<'tcx>)
+ -> Result<(),CopyImplementationError> {
+ match self_type.sty {
+ ty::ty_struct(struct_did, ref substs) => {
+ let fields = ty::struct_fields(tcx, struct_did, substs);
+ for field in fields.iter() {
+ if type_moves_by_default(tcx, field.mt.ty, param_env) {
+ return Err(FieldDoesNotImplementCopy(field.name))
+ }
+ }
+ }
+ ty::ty_enum(enum_did, ref substs) => {
+ let enum_variants = ty::enum_variants(tcx, enum_did);
+ for variant in enum_variants.iter() {
+ for variant_arg_type in variant.args.iter() {
+ let substd_arg_type =
+ variant_arg_type.subst(tcx, substs);
+ if type_moves_by_default(tcx,
+ substd_arg_type,
+ param_env) {
+ return Err(VariantDoesNotImplementCopy(variant.name))
+ }
+ }
+ }
+ }
+ _ => return Err(TypeIsStructural),
+ }
+
+ Ok(())
+}
Aggressive // -O3
}
+impl Copy for OptLevel {}
+
#[deriving(Clone, PartialEq)]
pub enum DebugInfoLevel {
NoDebugInfo,
FullDebugInfo,
}
+impl Copy for DebugInfoLevel {}
+
#[deriving(Clone, PartialEq, PartialOrd, Ord, Eq)]
pub enum OutputType {
OutputTypeBitcode,
OutputTypeExe,
}
+impl Copy for OutputType {}
+
#[deriving(Clone)]
pub struct Options {
// The crate config requested for the session, which may be combined
// parsed code. It remains mutable in case its replacements wants to use
// this.
pub addl_lib_search_paths: RefCell<Vec<Path>>,
- pub libs: Vec<(String, cstore::NativeLibaryKind)>,
+ pub libs: Vec<(String, cstore::NativeLibraryKind)>,
pub maybe_sysroot: Option<Path>,
pub target_triple: String,
// User-specified cfg meta items. The compiler itself will add additional
EntryNone,
}
+impl Copy for EntryFnType {}
+
#[deriving(PartialEq, PartialOrd, Clone, Ord, Eq, Hash)]
pub enum CrateType {
CrateTypeExecutable,
CrateTypeStaticlib,
}
+impl Copy for CrateType {}
+
macro_rules! debugging_opts(
([ $opt:ident ] $cnt:expr ) => (
pub const $opt: u64 = 1 << $cnt;
{
let mut cg = basic_codegen_options();
for option in matches.opt_strs("C").into_iter() {
- let mut iter = option.as_slice().splitn(1, '=');
+ let mut iter = option.splitn(1, '=');
let key = iter.next().unwrap();
let value = iter.next();
let option_to_lookup = key.replace("-", "_");
let mut found = false;
for &(candidate, setter, opt_type_desc, _) in CG_OPTIONS.iter() {
- if option_to_lookup.as_slice() != candidate { continue }
+ if option_to_lookup != candidate { continue }
if !setter(&mut cg, value) {
match (value, opt_type_desc) {
(Some(..), None) => {
for &level in [lint::Allow, lint::Warn, lint::Deny, lint::Forbid].iter() {
for lint_name in matches.opt_strs(level.as_str()).into_iter() {
- if lint_name.as_slice() == "help" {
+ if lint_name == "help" {
describe_lints = true;
} else {
lint_opts.push((lint_name.replace("-", "_").into_string(), level));
let debug_map = debugging_opts_map();
for debug_flag in debug_flags.iter() {
let mut this_bit = 0;
- for tuple in debug_map.iter() {
- let (name, bit) = match *tuple { (ref a, _, b) => (a, b) };
- if *name == debug_flag.as_slice() {
+ for &(name, _, bit) in debug_map.iter() {
+ if name == *debug_flag {
this_bit = bit;
break;
}
if !parse_only && !no_trans {
let unparsed_output_types = matches.opt_strs("emit");
for unparsed_output_type in unparsed_output_types.iter() {
- for part in unparsed_output_type.as_slice().split(',') {
+ for part in unparsed_output_type.split(',') {
let output_type = match part.as_slice() {
"asm" => OutputTypeAssembly,
"ir" => OutputTypeLlvmAssembly,
}
}
};
- output_types.as_mut_slice().sort();
+ output_types.sort();
output_types.dedup();
if output_types.len() == 0 {
output_types.push(OutputTypeExe);
}).collect();
let libs = matches.opt_strs("l").into_iter().map(|s| {
- let mut parts = s.as_slice().rsplitn(1, ':');
+ let mut parts = s.rsplitn(1, ':');
let kind = parts.next().unwrap();
let (name, kind) = match (parts.next(), kind) {
(None, name) |
let mut externs = HashMap::new();
for arg in matches.opt_strs("extern").iter() {
- let mut parts = arg.as_slice().splitn(1, '=');
+ let mut parts = arg.splitn(1, '=');
let name = match parts.next() {
Some(s) => s,
None => early_error("--extern value must not be empty"),
let mut crate_types: Vec<CrateType> = Vec::new();
for unparsed_crate_type in list_list.iter() {
- for part in unparsed_crate_type.as_slice().split(',') {
+ for part in unparsed_crate_type.split(',') {
let new_part = match part {
"lib" => default_lib_output(),
"rlib" => CrateTypeRlib,
let can_print_warnings = sopts.lint_opts
.iter()
- .filter(|&&(ref key, _)| key.as_slice() == "warnings")
+ .filter(|&&(ref key, _)| *key == "warnings")
.map(|&(_, ref level)| *level != lint::Allow)
.last()
.unwrap_or(true);
#[deriving(Clone,Show)]
pub struct ErrorReported;
+impl Copy for ErrorReported {}
+
pub fn time<T, U>(do_it: bool, what: &str, u: U, f: |U| -> T) -> T {
thread_local!(static DEPTH: Cell<uint> = Cell::new(0));
if !do_it { return f(u); }
#[deriving(Clone, Default)]
pub struct FnvHasher;
+impl Copy for FnvHasher {}
+
+#[allow(missing_copy_implementations)]
pub struct FnvState(u64);
impl Hasher<FnvState> for FnvHasher {
pub fn ty_to_short_str<'tcx>(cx: &ctxt<'tcx>, typ: Ty<'tcx>) -> String {
let mut s = typ.repr(cx).to_string();
if s.len() >= 32u {
- s = s.as_slice().slice(0u, 32u).to_string();
+ s = s.slice(0u, 32u).to_string();
}
return s;
}
use syntax::abi;
pub fn get_target_strs(target_triple: String, target_os: abi::Os) -> target_strs::t {
- let cc_args = if target_triple.as_slice().contains("thumb") {
+ let cc_args = if target_triple.contains("thumb") {
vec!("-mthumb".to_string())
} else {
vec!("-marm".to_string())
"path2".to_string()
]);
assert_eq!(flags,
- vec!("-Wl,-rpath,path1".to_string(),
- "-Wl,-rpath,path2".to_string()));
+ ["-Wl,-rpath,path1",
+ "-Wl,-rpath,path2"]);
}
#[test]
"rpath2".to_string(),
"rpath1".to_string()
]);
- assert!(res.as_slice() == [
- "rpath1".to_string(),
- "rpath2".to_string()
+ assert!(res == [
+ "rpath1",
+ "rpath2",
]);
}
"4a".to_string(),
"3".to_string()
]);
- assert!(res.as_slice() == [
- "1a".to_string(),
- "2".to_string(),
- "4a".to_string(),
- "3".to_string()
+ assert!(res == [
+ "1a",
+ "2",
+ "4a",
+ "3",
]);
}
realpath: |p| Ok(p.clone())
};
let res = get_rpath_relative_to_output(config, &Path::new("lib/libstd.so"));
- assert_eq!(res.as_slice(), "$ORIGIN/../lib");
+ assert_eq!(res, "$ORIGIN/../lib");
}
#[test]
realpath: |p| Ok(p.clone())
};
let res = get_rpath_relative_to_output(config, &Path::new("lib/libstd.so"));
- assert_eq!(res.as_slice(), "$ORIGIN/../lib");
+ assert_eq!(res, "$ORIGIN/../lib");
}
#[test]
realpath: |p| Ok(p.clone())
};
let res = get_rpath_relative_to_output(config, &Path::new("lib/libstd.so"));
- assert_eq!(res.as_slice(), "$ORIGIN/../lib");
+ assert_eq!(res, "$ORIGIN/../lib");
}
#[test]
realpath: |p| Ok(p.clone())
};
let res = get_rpath_relative_to_output(config, &Path::new("lib/libstd.so"));
- assert_eq!(res.as_slice(), "@loader_path/../lib");
+ assert_eq!(res, "@loader_path/../lib");
}
}
/// Convenience function that retrieves the result of a digest as a
/// String in hexadecimal format.
fn result_str(&mut self) -> String {
- self.result_bytes().as_slice().to_hex().to_string()
+ self.result_bytes().to_hex().to_string()
}
}
while left > 0u {
let take = (left + 1u) / 2u;
sh.input_str(t.input
- .as_slice()
.slice(len - left, take + len - left));
left = left - take;
}
pub fn adjust_abi(&self, abi: abi::Abi) -> abi::Abi {
match abi {
abi::System => {
- if self.options.is_like_windows && self.arch.as_slice() == "x86" {
+ if self.options.is_like_windows && self.arch == "x86" {
abi::Stdcall
} else {
abi::C
( $($name:ident),+ ) => (
{
let target = target.replace("-", "_");
- let target = target.as_slice();
if false { }
$(
else if target == stringify!($name) {
*sess.features.borrow_mut() = features;
});
+ time(time_passes, "recursion limit", (), |_| {
+ middle::recursion_limit::update_recursion_limit(sess, &krate);
+ });
+
// strip before expansion to allow macros to depend on
// configuration variables e.g/ in
//
if base.len() == 0 {
base.push(link::default_output_for_target(session));
}
- base.as_mut_slice().sort();
+ base.sort();
base.dedup();
}
// Don't handle -W help here, because we might first load plugins.
let r = matches.opt_strs("Z");
- if r.iter().any(|x| x.as_slice() == "help") {
+ if r.iter().any(|x| *x == "help") {
describe_debug_flags();
return None;
}
let cg_flags = matches.opt_strs("C");
- if cg_flags.iter().any(|x| x.as_slice() == "help") {
+ if cg_flags.iter().any(|x| *x == "help") {
describe_codegen_flags();
return None;
}
PpmExpandedHygiene,
}
+impl Copy for PpSourceMode {}
+
#[deriving(PartialEq, Show)]
pub enum PpMode {
PpmSource(PpSourceMode),
PpmFlowGraph,
}
+impl Copy for PpMode {}
+
pub fn parse_pretty(sess: &Session, name: &str) -> (PpMode, Option<UserIdentifiedItem>) {
let mut split = name.splitn(1, '=');
let first = split.next().unwrap();
OptimizationFailure,
}
+impl Copy for OptimizationDiagnosticKind {}
+
impl OptimizationDiagnosticKind {
pub fn describe(self) -> &'static str {
match self {
pub message: TwineRef,
}
+impl Copy for OptimizationDiagnostic {}
+
impl OptimizationDiagnostic {
unsafe fn unpack(kind: OptimizationDiagnosticKind, di: DiagnosticInfoRef)
-> OptimizationDiagnostic {
UnknownDiagnostic(DiagnosticInfoRef),
}
+impl Copy for Diagnostic {}
+
impl Diagnostic {
pub unsafe fn unpack(di: DiagnosticInfoRef) -> Diagnostic {
let kind = super::LLVMGetDiagInfoKind(di);
X86_64_Win64 = 79,
}
+impl Copy for CallConv {}
+
pub enum Visibility {
LLVMDefaultVisibility = 0,
HiddenVisibility = 1,
ProtectedVisibility = 2,
}
+impl Copy for Visibility {}
+
// This enum omits the obsolete (and no-op) linkage types DLLImportLinkage,
// DLLExportLinkage, GhostLinkage and LinkOnceODRAutoHideLinkage.
// LinkerPrivateLinkage and LinkerPrivateWeakLinkage are not included either;
CommonLinkage = 14,
}
+impl Copy for Linkage {}
+
#[repr(C)]
#[deriving(Show)]
pub enum DiagnosticSeverity {
Note,
}
+impl Copy for DiagnosticSeverity {}
+
bitflags! {
flags Attribute : u32 {
const ZExtAttribute = 1 << 0,
}
}
+impl Copy for Attribute {}
+
#[repr(u64)]
pub enum OtherAttribute {
// The following are not really exposed in
NonNullAttribute = 1 << 44,
}
+impl Copy for OtherAttribute {}
+
pub enum SpecialAttribute {
DereferenceableAttribute(u64)
}
+impl Copy for SpecialAttribute {}
+
#[repr(C)]
pub enum AttributeSet {
ReturnIndex = 0,
FunctionIndex = !0
}
+impl Copy for AttributeSet {}
+
pub trait AttrHelper {
fn apply_llfn(&self, idx: c_uint, llfn: ValueRef);
fn apply_callsite(&self, idx: c_uint, callsite: ValueRef);
IntSLE = 41,
}
+impl Copy for IntPredicate {}
+
// enum for the LLVM RealPredicate type
pub enum RealPredicate {
RealPredicateFalse = 0,
RealPredicateTrue = 15,
}
+impl Copy for RealPredicate {}
+
// The LLVM TypeKind type - must stay in sync with the def of
// LLVMTypeKind in llvm/include/llvm-c/Core.h
#[deriving(PartialEq)]
X86_MMX = 15,
}
+impl Copy for TypeKind {}
+
#[repr(C)]
pub enum AtomicBinOp {
AtomicXchg = 0,
AtomicUMin = 10,
}
+impl Copy for AtomicBinOp {}
+
#[repr(C)]
pub enum AtomicOrdering {
NotAtomic = 0,
SequentiallyConsistent = 7
}
+impl Copy for AtomicOrdering {}
+
// Consts for the LLVMCodeGenFileType type (in include/llvm/c/TargetMachine.h)
#[repr(C)]
pub enum FileType {
ObjectFileType = 1
}
+impl Copy for FileType {}
+
pub enum MetadataType {
MD_dbg = 0,
MD_tbaa = 1,
MD_tbaa_struct = 5
}
+impl Copy for MetadataType {}
+
// Inline Asm Dialect
pub enum AsmDialect {
AD_ATT = 0,
AD_Intel = 1
}
+impl Copy for AsmDialect {}
+
#[deriving(PartialEq, Clone)]
#[repr(C)]
pub enum CodeGenOptLevel {
CodeGenLevelAggressive = 3,
}
+impl Copy for CodeGenOptLevel {}
+
#[deriving(PartialEq)]
#[repr(C)]
pub enum RelocMode {
RelocDynamicNoPic = 3,
}
+impl Copy for RelocMode {}
+
#[repr(C)]
pub enum CodeGenModel {
CodeModelDefault = 0,
CodeModelLarge = 5,
}
+impl Copy for CodeGenModel {}
+
#[repr(C)]
pub enum DiagnosticKind {
DK_InlineAsm = 0,
DK_OptimizationFailure,
}
+impl Copy for DiagnosticKind {}
+
// Opaque pointer types
+#[allow(missing_copy_implementations)]
pub enum Module_opaque {}
pub type ModuleRef = *mut Module_opaque;
+#[allow(missing_copy_implementations)]
pub enum Context_opaque {}
pub type ContextRef = *mut Context_opaque;
+#[allow(missing_copy_implementations)]
pub enum Type_opaque {}
pub type TypeRef = *mut Type_opaque;
+#[allow(missing_copy_implementations)]
pub enum Value_opaque {}
pub type ValueRef = *mut Value_opaque;
+#[allow(missing_copy_implementations)]
pub enum BasicBlock_opaque {}
pub type BasicBlockRef = *mut BasicBlock_opaque;
+#[allow(missing_copy_implementations)]
pub enum Builder_opaque {}
pub type BuilderRef = *mut Builder_opaque;
+#[allow(missing_copy_implementations)]
pub enum ExecutionEngine_opaque {}
pub type ExecutionEngineRef = *mut ExecutionEngine_opaque;
+#[allow(missing_copy_implementations)]
pub enum MemoryBuffer_opaque {}
pub type MemoryBufferRef = *mut MemoryBuffer_opaque;
+#[allow(missing_copy_implementations)]
pub enum PassManager_opaque {}
pub type PassManagerRef = *mut PassManager_opaque;
+#[allow(missing_copy_implementations)]
pub enum PassManagerBuilder_opaque {}
pub type PassManagerBuilderRef = *mut PassManagerBuilder_opaque;
+#[allow(missing_copy_implementations)]
pub enum Use_opaque {}
pub type UseRef = *mut Use_opaque;
+#[allow(missing_copy_implementations)]
pub enum TargetData_opaque {}
pub type TargetDataRef = *mut TargetData_opaque;
+#[allow(missing_copy_implementations)]
pub enum ObjectFile_opaque {}
pub type ObjectFileRef = *mut ObjectFile_opaque;
+#[allow(missing_copy_implementations)]
pub enum SectionIterator_opaque {}
pub type SectionIteratorRef = *mut SectionIterator_opaque;
+#[allow(missing_copy_implementations)]
pub enum Pass_opaque {}
pub type PassRef = *mut Pass_opaque;
+#[allow(missing_copy_implementations)]
pub enum TargetMachine_opaque {}
pub type TargetMachineRef = *mut TargetMachine_opaque;
+#[allow(missing_copy_implementations)]
pub enum Archive_opaque {}
pub type ArchiveRef = *mut Archive_opaque;
+#[allow(missing_copy_implementations)]
pub enum Twine_opaque {}
pub type TwineRef = *mut Twine_opaque;
+#[allow(missing_copy_implementations)]
pub enum DiagnosticInfo_opaque {}
pub type DiagnosticInfoRef = *mut DiagnosticInfo_opaque;
+#[allow(missing_copy_implementations)]
pub enum DebugLoc_opaque {}
pub type DebugLocRef = *mut DebugLoc_opaque;
+#[allow(missing_copy_implementations)]
pub enum SMDiagnostic_opaque {}
pub type SMDiagnosticRef = *mut SMDiagnostic_opaque;
pub use self::DIDescriptorFlags::*;
use super::{ValueRef};
+ #[allow(missing_copy_implementations)]
pub enum DIBuilder_opaque {}
pub type DIBuilderRef = *mut DIBuilder_opaque;
FlagLValueReference = 1 << 14,
FlagRValueReference = 1 << 15
}
+
+ impl Copy for DIDescriptorFlags {}
}
}
}
+#[allow(missing_copy_implementations)]
pub enum RustString_opaque {}
pub type RustStringRef = *mut RustString_opaque;
type RustStringRepr = *mut RefCell<Vec<u8>>;
if let Some(sess) = sess {
if let Some(ref s) = sess.opts.crate_name {
if let Some((attr, ref name)) = attr_crate_name {
- if s.as_slice() != name.get() {
+ if *s != name.get() {
let msg = format!("--crate-name and #[crate_name] are \
required to match, but `{}` != `{}`",
s, name);
let mut tstr = String::new();
for c in c.escape_unicode() { tstr.push(c) }
result.push('$');
- result.push_str(tstr.as_slice().slice_from(1));
+ result.push_str(tstr.slice_from(1));
}
}
}
fn write_rlib_bytecode_object_v1<T: Writer>(writer: &mut T,
bc_data_deflated: &[u8])
-> ::std::io::IoResult<()> {
- let bc_data_deflated_size: u64 = bc_data_deflated.as_slice().len() as u64;
+ let bc_data_deflated_size: u64 = bc_data_deflated.len() as u64;
try! { writer.write(RLIB_BYTECODE_OBJECT_MAGIC) };
try! { writer.write_le_u32(1) };
let args = sess.opts.cg.link_args.as_ref().unwrap_or(&empty_vec);
let mut args = args.iter().chain(used_link_args.iter());
if !dylib
- && (t.options.relocation_model.as_slice() == "pic"
- || sess.opts.cg.relocation_model.as_ref()
- .unwrap_or(&empty_str).as_slice() == "pic")
- && !args.any(|x| x.as_slice() == "-static") {
+ && (t.options.relocation_model == "pic"
+ || *sess.opts.cg.relocation_model.as_ref()
+ .unwrap_or(&empty_str) == "pic")
+ && !args.any(|x| *x == "-static") {
cmd.arg("-pie");
}
}
// Internalize everything but the reachable symbols of the current module
let cstrs: Vec<::std::c_str::CString> =
- reachable.iter().map(|s| s.as_slice().to_c_str()).collect();
+ reachable.iter().map(|s| s.to_c_str()).collect();
let arr: Vec<*const i8> = cstrs.iter().map(|c| c.as_ptr()).collect();
let ptr = arr.as_ptr();
unsafe {
use std::task::TaskBuilder;
use libc::{c_uint, c_int, c_void};
+#[deriving(Clone, PartialEq, PartialOrd, Ord, Eq)]
+pub enum OutputType {
+ OutputTypeBitcode,
+ OutputTypeAssembly,
+ OutputTypeLlvmAssembly,
+ OutputTypeObject,
+ OutputTypeExe,
+}
+
+impl Copy for OutputType {}
+
pub fn llvm_err(handler: &diagnostic::Handler, msg: String) -> ! {
unsafe {
let cstr = llvm::LLVMRustGetLastError();
let pass_name = pass_name.as_str().expect("got a non-UTF8 pass name from LLVM");
let enabled = match cgcx.remark {
AllPasses => true,
- SomePasses(ref v) => v.iter().any(|s| s.as_slice() == pass_name),
+ SomePasses(ref v) => v.iter().any(|s| *s == pass_name),
};
if enabled {
// If we're verifying or linting, add them to the function pass
// manager.
let addpass = |pass: &str| {
- pass.as_slice().with_c_str(|s| llvm::LLVMRustAddPass(fpm, s))
+ pass.with_c_str(|s| llvm::LLVMRustAddPass(fpm, s))
};
if !config.no_verify { assert!(addpass("verify")); }
}
for pass in config.passes.iter() {
- pass.as_slice().with_c_str(|s| {
+ pass.with_c_str(|s| {
if !llvm::LLVMRustAddPass(mpm, s) {
cgcx.handler.warn(format!("unknown pass {}, ignoring",
*pass).as_slice());
self.collecting = true;
self.visit_pat(&*arg.pat);
self.collecting = false;
- let span_utils = self.span;
+ let span_utils = self.span.clone();
for &(id, ref p, _, _) in self.collected_paths.iter() {
let typ = ppaux::ty_to_string(&self.analysis.ty_cx,
(*self.analysis.ty_cx.node_types.borrow())[id]);
FnRef,
}
+impl Copy for Row {}
+
impl<'a> FmtStrs<'a> {
pub fn new(rec: Box<Recorder>, span: SpanUtils<'a>, krate: String) -> FmtStrs<'a> {
FmtStrs {
let values = values.iter().map(|s| {
// Never take more than 1020 chars
if s.len() > 1020 {
- s.as_slice().slice_to(1020)
+ s.slice_to(1020)
} else {
s.as_slice()
}
if self.recorder.dump_spans {
if dump_spans {
- self.recorder.dump_span(self.span, label, span, Some(sub_span));
+ self.recorder.dump_span(self.span.clone(),
+ label,
+ span,
+ Some(sub_span));
}
return;
}
use syntax::parse::token;
use syntax::parse::token::{keywords, Token};
+#[deriving(Clone)]
pub struct SpanUtils<'a> {
pub sess: &'a Session,
pub err_count: Cell<int>,
#[deriving(Show)]
struct ConstantExpr<'a>(&'a ast::Expr);
+impl<'a> Copy for ConstantExpr<'a> {}
+
impl<'a> ConstantExpr<'a> {
fn eq(self, other: ConstantExpr<'a>, tcx: &ty::ctxt) -> bool {
let ConstantExpr(expr) = self;
CompareSliceLength
}
+impl Copy for BranchKind {}
+
pub enum OptResult<'blk, 'tcx: 'blk> {
SingleResult(Result<'blk, 'tcx>),
RangeResult(Result<'blk, 'tcx>, Result<'blk, 'tcx>),
TrByRef,
}
+impl Copy for TransBindingMode {}
+
/// Information about a pattern binding:
/// - `llmatch` is a pointer to a stack slot. The stack slot contains a
/// pointer into the value being matched. Hence, llmatch has type `T**`
pub ty: Ty<'tcx>,
}
+impl<'tcx> Copy for BindingInfo<'tcx> {}
+
type BindingsMap<'tcx> = FnvHashMap<Ident, BindingInfo<'tcx>>;
struct ArmData<'p, 'blk, 'tcx: 'blk> {
check_match::Constructor::Variant(def_id)
};
- let mcx = check_match::MatchCheckCtxt { tcx: bcx.tcx() };
+ let param_env = ty::empty_parameter_environment();
+ let mcx = check_match::MatchCheckCtxt {
+ tcx: bcx.tcx(),
+ param_env: param_env,
+ };
enter_match(bcx, dm, m, col, val, |pats|
check_match::specialize(&mcx, pats.as_slice(), &ctor, col, variant_size)
)
node_id_type(bcx, pat_id)
};
- let mcx = check_match::MatchCheckCtxt { tcx: bcx.tcx() };
+ let mcx = check_match::MatchCheckCtxt {
+ tcx: bcx.tcx(),
+ param_env: ty::empty_parameter_environment(),
+ };
let adt_vals = if any_irrefutable_adt_pat(bcx.tcx(), m, col) {
let repr = adt::represent_type(bcx.ccx(), left_ty);
let arg_count = adt::num_args(&*repr, 0);
/// Checks whether the binding in `discr` is assigned to anywhere in the expression `body`
fn is_discr_reassigned(bcx: Block, discr: &ast::Expr, body: &ast::Expr) -> bool {
- match discr.node {
+ let (vid, field) = match discr.node {
ast::ExprPath(..) => match bcx.def(discr.id) {
- def::DefLocal(vid) | def::DefUpvar(vid, _, _) => {
- let mut rc = ReassignmentChecker {
- node: vid,
- reassigned: false
- };
- {
- let mut visitor = euv::ExprUseVisitor::new(&mut rc, bcx);
- visitor.walk_expr(body);
- }
- rc.reassigned
- }
- _ => false
+ def::DefLocal(vid) | def::DefUpvar(vid, _, _) => (vid, None),
+ _ => return false
+ },
+ ast::ExprField(ref base, field) => {
+ let vid = match bcx.tcx().def_map.borrow().get(&base.id) {
+ Some(&def::DefLocal(vid)) | Some(&def::DefUpvar(vid, _, _)) => vid,
+ _ => return false
+ };
+ (vid, Some(mc::NamedField(field.node.name)))
},
- _ => false
+ ast::ExprTupField(ref base, field) => {
+ let vid = match bcx.tcx().def_map.borrow().get(&base.id) {
+ Some(&def::DefLocal(vid)) | Some(&def::DefUpvar(vid, _, _)) => vid,
+ _ => return false
+ };
+ (vid, Some(mc::PositionalField(field.node)))
+ },
+ _ => return false
+ };
+
+ let mut rc = ReassignmentChecker {
+ node: vid,
+ field: field,
+ reassigned: false
+ };
+ {
+ let param_env = ty::empty_parameter_environment();
+ let mut visitor = euv::ExprUseVisitor::new(&mut rc, bcx, param_env);
+ visitor.walk_expr(body);
}
+ rc.reassigned
}
struct ReassignmentChecker {
node: ast::NodeId,
+ field: Option<mc::FieldName>,
reassigned: bool
}
+// Determines whether the expression being matched on is reassigned
+// within the body of the match arm.
+// We only care about the `mutate` callback, since this check only
+// matters when the matched value is moved.
impl<'tcx> euv::Delegate<'tcx> for ReassignmentChecker {
fn consume(&mut self, _: ast::NodeId, _: Span, _: mc::cmt, _: euv::ConsumeMode) {}
fn matched_pat(&mut self, _: &ast::Pat, _: mc::cmt, _: euv::MatchMode) {}
match cmt.cat {
mc::cat_upvar(mc::Upvar { id: ty::UpvarId { var_id: vid, .. }, .. }) |
mc::cat_local(vid) => self.reassigned = self.node == vid,
+ mc::cat_interior(ref base_cmt, mc::InteriorField(field)) => {
+ match base_cmt.cat {
+ mc::cat_upvar(mc::Upvar { id: ty::UpvarId { var_id: vid, .. }, .. }) |
+ mc::cat_local(vid) => {
+ self.reassigned = self.node == vid && Some(field) == self.field
+ },
+ _ => {}
+ }
+ },
_ => {}
}
}
let variable_ty = node_id_type(bcx, p_id);
let llvariable_ty = type_of::type_of(ccx, variable_ty);
let tcx = bcx.tcx();
+ let param_env = ty::empty_parameter_environment();
let llmatch;
let trmode;
match bm {
ast::BindByValue(_)
- if !ty::type_moves_by_default(tcx, variable_ty) || reassigned => {
+ if !ty::type_moves_by_default(tcx,
+ variable_ty,
+                                         &param_env) || reassigned => {
llmatch = alloca_no_lifetime(bcx,
llvariable_ty.ptr_to(),
"__llmatch");
FatPointer(uint)
}
+impl Copy for PointerField {}
+
impl<'tcx> Case<'tcx> {
- fn is_zerolen<'a>(&self, cx: &CrateContext<'a, 'tcx>, scapegoat: Ty<'tcx>) -> bool {
+ fn is_zerolen<'a>(&self, cx: &CrateContext<'a, 'tcx>, scapegoat: Ty<'tcx>)
+ -> bool {
mk_struct(cx, self.tys.as_slice(), false, scapegoat).size == 0
}
};
let r = ia.asm.get().with_c_str(|a| {
- constraints.as_slice().with_c_str(|c| {
+ constraints.with_c_str(|c| {
InlineAsmCall(bcx,
a,
c,
use std::c_str::ToCStr;
use std::cell::{Cell, RefCell};
use std::collections::HashSet;
+use std::mem;
use std::rc::Rc;
use std::{i8, i16, i32, i64};
use syntax::abi::{Rust, RustCall, RustIntrinsic, Abi};
// Used only for creating scalar comparison glue.
pub enum scalar_type { nil_type, signed_int, unsigned_int, floating_point, }
+impl Copy for scalar_type {}
+
pub fn compare_scalar_types<'blk, 'tcx>(cx: Block<'blk, 'tcx>,
lhs: ValueRef,
rhs: ValueRef,
in iter_structural_ty")
}
}
- _ => cx.sess().unimpl("type in iter_structural_ty")
+ _ => {
+ cx.sess().unimpl(format!("type in iter_structural_ty: {}",
+ ty_to_string(cx.tcx(), t)).as_slice())
+ }
}
return cx;
}
}
}
+#[deriving(Clone, Eq, PartialEq)]
+pub enum IsUnboxedClosureFlag {
+ NotUnboxedClosure,
+ IsUnboxedClosure,
+}
+
+impl Copy for IsUnboxedClosureFlag {}
+
// trans_closure: Builds an LLVM function out of a source function.
// If the function closes over its environment a closure will be
// returned.
InlinedCopy,
}
+impl Copy for ValueOrigin {}
+
/// Set the appropriate linkage for an LLVM `ValueRef` (function or global).
/// If the `llval` is the direct translation of a specific Rust item, `id`
/// should be set to the `NodeId` of that item. (This mapping should be
format!("Illegal null byte in export_name \
value: `{}`", sym).as_slice());
}
- let g = sym.as_slice().with_c_str(|buf| {
+ let g = sym.with_c_str(|buf| {
llvm::LLVMAddGlobal(ccx.llmod(), llty, buf)
});
fn next(&mut self) -> Option<ValueRef> {
let old = self.cur;
if !old.is_null() {
- self.cur = unsafe { (self.step)(old) };
+ self.cur = unsafe {
+ let step: unsafe extern "C" fn(ValueRef) -> ValueRef =
+ mem::transmute_copy(&self.step);
+ step(old)
+ };
Some(old)
} else {
None
pub struct BasicBlock(pub BasicBlockRef);
+impl Copy for BasicBlock {}
+
pub type Preds<'a> = Map<'a, Value, BasicBlock, Filter<'a, Value, Users>>;
/// Wrapper for LLVM BasicBlockRef
let comment_text = format!("{} {}", "#",
sanitized.replace("\n", "\n\t# "));
self.count_insn("inlineasm");
- let asm = comment_text.as_slice().with_c_str(|c| {
+ let asm = comment_text.with_c_str(|c| {
unsafe {
llvm::LLVMConstInlineAsm(Type::func(&[], &Type::void(self.ccx)).to_ref(),
c, noname(), False, False)
Ignore,
}
+impl Copy for ArgKind {}
+
/// Information about how a specific C type
/// should be passed to or returned from a function
///
pub attr: option::Option<Attribute>
}
+impl Copy for ArgType {}
+
impl ArgType {
pub fn direct(ty: Type, cast: option::Option<Type>,
pad: option::Option<Type>,
ArgType {
kind: Indirect,
ty: ty,
- cast: option::None,
- pad: option::None,
+ cast: option::Option::None,
+ pad: option::Option::None,
attr: attr
}
}
Memory
}
+impl Copy for RegClass {}
+
trait TypeMethods {
fn is_reg_ty(&self) -> bool;
}
pub llself: ValueRef,
}
+impl Copy for MethodData {}
+
pub enum CalleeData<'tcx> {
Closure(Datum<'tcx, Lvalue>),
DoAutorefArg(ast::NodeId)
}
+impl Copy for AutorefArg {}
+
pub fn trans_arg_datum<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
formal_arg_ty: Ty<'tcx>,
arg_datum: Datum<'tcx, Expr>,
index: uint
}
+impl Copy for CustomScopeIndex {}
+
pub const EXIT_BREAK: uint = 0;
pub const EXIT_LOOP: uint = 1;
pub const EXIT_MAX: uint = 2;
LoopExit(ast::NodeId, uint)
}
+impl Copy for EarlyExitLabel {}
+
pub struct CachedEarlyExit {
label: EarlyExitLabel,
cleanup_block: BasicBlockRef,
}
+impl Copy for CachedEarlyExit {}
+
pub trait Cleanup<'tcx> {
fn must_unwind(&self) -> bool;
fn clean_on_unwind(&self) -> bool;
CustomScope(CustomScopeIndex)
}
+impl Copy for ScopeId {}
+
impl<'blk, 'tcx> CleanupMethods<'blk, 'tcx> for FunctionContext<'blk, 'tcx> {
/// Invoked when we start to trans the code contained within a new cleanup scope.
fn push_ast_cleanup_scope(&self, debug_loc: NodeInfo) {
zero: bool
}
+impl<'tcx> Copy for DropValue<'tcx> {}
+
impl<'tcx> Cleanup<'tcx> for DropValue<'tcx> {
fn must_unwind(&self) -> bool {
self.must_unwind
HeapExchange
}
+impl Copy for Heap {}
+
pub struct FreeValue<'tcx> {
ptr: ValueRef,
heap: Heap,
content_ty: Ty<'tcx>
}
+impl<'tcx> Copy for FreeValue<'tcx> {}
+
impl<'tcx> Cleanup<'tcx> for FreeValue<'tcx> {
fn must_unwind(&self) -> bool {
true
heap: Heap,
}
+impl Copy for FreeSlice {}
+
impl<'tcx> Cleanup<'tcx> for FreeSlice {
fn must_unwind(&self) -> bool {
true
ptr: ValueRef,
}
+impl Copy for LifetimeEnd {}
+
impl<'tcx> Cleanup<'tcx> for LifetimeEnd {
fn must_unwind(&self) -> bool {
false
datum: Datum<'tcx, Lvalue>
}
+impl<'tcx> Copy for EnvValue<'tcx> {}
+
impl<'tcx> EnvValue<'tcx> {
pub fn to_string<'a>(&self, ccx: &CrateContext<'a, 'tcx>) -> String {
format!("{}({})", self.action, self.datum.to_string(ccx))
pub name: ValueRef,
}
+impl<'tcx> Copy for tydesc_info<'tcx> {}
+
/*
* A note on nomenclature of linking: "extern", "foreign", and "upcall".
*
pub span: Span,
}
+impl Copy for NodeInfo {}
+
pub fn expr_info(expr: &ast::Expr) -> NodeInfo {
NodeInfo { id: expr.id, span: expr.span }
}
MethodCall(ty::MethodCall)
}
+impl Copy for ExprOrMethodCall {}
+
pub fn node_id_substs<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
node: ExprOrMethodCall)
- -> subst::Substs<'tcx>
-{
+ -> subst::Substs<'tcx> {
let tcx = bcx.tcx();
let substs = match node {
sess.target
.target
.data_layout
- .as_slice()
.with_c_str(|buf| {
llvm::LLVMSetDataLayout(llmod, buf);
});
sess.target
.target
.llvm_target
- .as_slice()
.with_c_str(|buf| {
llvm::LLVMRustSetNormalizedTarget(llmod, buf);
});
pub kind: K,
}
+impl<'tcx,K:Copy> Copy for Datum<'tcx,K> {}
+
pub struct DatumBlock<'blk, 'tcx: 'blk, K> {
pub bcx: Block<'blk, 'tcx>,
pub datum: Datum<'tcx, K>,
#[deriving(Clone, Show)]
pub struct Lvalue;
+impl Copy for Lvalue {}
+
#[deriving(Show)]
pub struct Rvalue {
pub mode: RvalueMode
ByValue,
}
+impl Copy for RvalueMode {}
+
pub fn immediate_rvalue<'tcx>(val: ValueRef, ty: Ty<'tcx>) -> Datum<'tcx, Rvalue> {
return Datum::new(val, ty, Rvalue::new(ByValue));
}
/// Copies the value into a new location. This function always preserves the existing datum as
/// a valid value. Therefore, it does not consume `self` and, also, cannot be applied to affine
/// values (since they must never be duplicated).
- pub fn shallow_copy<'blk>(&self,
- bcx: Block<'blk, 'tcx>,
- dst: ValueRef)
- -> Block<'blk, 'tcx> {
- assert!(!ty::type_moves_by_default(bcx.tcx(), self.ty));
+ pub fn shallow_copy<'blk>(&self,
+ bcx: Block<'blk, 'tcx>,
+ dst: ValueRef)
+ -> Block<'blk, 'tcx> {
+ let param_env = ty::empty_parameter_environment();
+ assert!(!ty::type_moves_by_default(bcx.tcx(), self.ty, &param_env));
self.shallow_copy_raw(bcx, dst)
}
#[deriving(Show, Hash, Eq, PartialEq, Clone)]
struct UniqueTypeId(ast::Name);
+impl Copy for UniqueTypeId {}
+
// The TypeMap is where the CrateDebugContext holds the type metadata nodes
// created so far. The metadata nodes are indexed by UniqueTypeId, and, for
// faster lookup, also by Ty. The TypeMap is responsible for creating
namespace_node.mangled_name_of_contained_item(var_name.as_slice());
let var_scope = namespace_node.scope;
- var_name.as_slice().with_c_str(|var_name| {
- linkage_name.as_slice().with_c_str(|linkage_name| {
+ var_name.with_c_str(|var_name| {
+ linkage_name.with_c_str(|linkage_name| {
unsafe {
llvm::LLVMDIBuilderCreateStaticVariable(DIB(cx),
var_scope,
let containing_scope = namespace_node.scope;
(linkage_name, containing_scope)
} else {
- (function_name.as_slice().to_string(), file_metadata)
+ (function_name.clone(), file_metadata)
};
// Clang sets this parameter to the opening brace of the function's block,
let is_local_to_unit = is_node_local_to_unit(cx, fn_ast_id);
- let fn_metadata = function_name.as_slice().with_c_str(|function_name| {
- linkage_name.as_slice().with_c_str(|linkage_name| {
+ let fn_metadata = function_name.with_c_str(|function_name| {
+ linkage_name.with_c_str(|linkage_name| {
unsafe {
llvm::LLVMDIBuilderCreateFunction(
DIB(cx),
path_bytes.insert(1, prefix[1]);
}
- path_bytes.as_slice().to_c_str()
+ path_bytes.to_c_str()
}
_ => fallback_path(cx)
}
});
fn fallback_path(cx: &CrateContext) -> CString {
- cx.link_meta().crate_name.as_slice().to_c_str()
+ cx.link_meta().crate_name.to_c_str()
}
}
let pointer_llvm_type = type_of::type_of(cx, pointer_type);
let (pointer_size, pointer_align) = size_and_align_of(cx, pointer_llvm_type);
let name = compute_debuginfo_type_name(cx, pointer_type, false);
- let ptr_metadata = name.as_slice().with_c_str(|name| {
+ let ptr_metadata = name.with_c_str(|name| {
unsafe {
llvm::LLVMDIBuilderCreatePointerType(
DIB(cx),
NoDiscriminant
}
+impl Copy for EnumDiscriminantInfo {}
+
// Returns a tuple of (1) type_metadata_stub of the variant, (2) the llvm_type
// of the variant, and (3) a MemberDescriptionFactory for producing the
// descriptions of the fields of the variant. This is a rudimentary version of a
.borrow()
.get_unique_type_id_as_string(unique_type_id);
- let enum_metadata = enum_name.as_slice().with_c_str(|enum_name| {
- unique_type_id_str.as_slice().with_c_str(|unique_type_id_str| {
+ let enum_metadata = enum_name.with_c_str(|enum_name| {
+ unique_type_id_str.with_c_str(|unique_type_id_str| {
unsafe {
llvm::LLVMDIBuilderCreateUnionType(
DIB(cx),
ComputedMemberOffset => machine::llelement_offset(cx, composite_llvm_type, i)
};
- member_description.name.as_slice().with_c_str(|member_name| {
+ member_description.name.with_c_str(|member_name| {
unsafe {
llvm::LLVMDIBuilderCreateMemberType(
DIB(cx),
.get_unique_type_id_as_string(unique_type_id);
let metadata_stub = unsafe {
struct_type_name.with_c_str(|name| {
- unique_type_id_str.as_slice().with_c_str(|unique_type_id| {
+ unique_type_id_str.with_c_str(|unique_type_id| {
// LLVMDIBuilderCreateStructType() wants an empty array. A null
// pointer will lead to hard to trace and debug LLVM assertions
// later on in llvm/lib/IR/Value.cpp.
UnknownLocation
}
+impl Copy for DebugLocation {}
+
impl DebugLocation {
fn new(scope: DIScope, line: uint, col: uint) -> DebugLocation {
KnownLocation {
Ignore,
}
+impl Copy for Dest {}
+
impl Dest {
pub fn to_string(&self, ccx: &CrateContext) -> String {
match *self {
cast_other,
}
+impl Copy for cast_kind {}
+
pub fn cast_type_kind<'tcx>(tcx: &ty::ctxt<'tcx>, t: Ty<'tcx>) -> cast_kind {
match t.sty {
ty::ty_char => cast_integral,
let llalign = llalign_of(ccx, llty);
let name = mangle_internal_name_by_type_and_seq(ccx, t, "tydesc");
debug!("+++ declare_tydesc {} {}", ppaux::ty_to_string(ccx.tcx(), t), name);
- let gvar = name.as_slice().with_c_str(|buf| {
+ let gvar = name.with_c_str(|buf| {
unsafe {
llvm::LLVMAddGlobal(ccx.llmod(), ccx.tydesc_type().to_ref(), buf)
}
pub llmod: ModuleRef,
}
+impl Copy for ModuleTranslation {}
+
pub struct CrateTranslation {
pub modules: Vec<ModuleTranslation>,
pub metadata_module: ModuleTranslation,
pub llunit_alloc_size: u64
}
+impl<'tcx> Copy for VecTypes<'tcx> {}
+
impl<'tcx> VecTypes<'tcx> {
pub fn to_string<'a>(&self, ccx: &CrateContext<'a, 'tcx>) -> String {
format!("VecTypes {{unit_ty={}, llunit_ty={}, \
1 => expr::trans_into(bcx, &**element, SaveIn(lldest)),
count => {
let elem = unpack_datum!(bcx, expr::trans(bcx, &**element));
- assert!(!ty::type_moves_by_default(bcx.tcx(), elem.ty));
-
let bcx = iter_vec_loop(bcx, lldest, vt,
C_uint(bcx.ccx(), count),
|set_bcx, lleltptr, _| {
rf: TypeRef
}
+impl Copy for Type {}
+
macro_rules! ty (
($e:expr) => ( Type::from_ref(unsafe { $e }))
)
an_unboxed_closure,
}
+impl Copy for named_ty {}
+
pub fn llvm_type_name<'a, 'tcx>(cx: &CrateContext<'a, 'tcx>,
what: named_ty,
did: ast::DefId,
tps: &[Ty<'tcx>])
- -> String
-{
+ -> String {
let name = match what {
a_struct => "struct",
an_enum => "enum",
pub struct Value(pub ValueRef);
+impl Copy for Value {}
+
macro_rules! opt_val ( ($e:expr) => (
unsafe {
match $e {
}
}
+/// Wrapper for LLVM UseRef
pub struct Use(UseRef);
-/// Wrapper for LLVM UseRef
+impl Copy for Use {}
+
impl Use {
pub fn get(&self) -> UseRef {
let Use(v) = *self; v
}
/// Iterator for the users of a value
+#[allow(missing_copy_implementations)]
pub struct Users {
next: Option<Use>
}
use middle::infer::resolve_type;
use middle::infer::resolve::try_resolve_tvar_shallow;
-use std::result::{Err, Ok};
-use std::result;
+use std::result::Result::{Err, Ok};
use syntax::ast;
use syntax::codemap::Span;
use util::ppaux::Repr;
// n.b.: order of actual, expected is reversed
match infer::mk_subty(fcx.infcx(), b_is_expected, infer::Misc(sp),
ty_b, ty_a) {
- result::Ok(()) => { /* ok */ }
- result::Err(ref err) => {
+ Ok(()) => { /* ok */ }
+ Err(ref err) => {
handle_err(sp, ty_a, ty_b, err);
}
}
try_resolve_tvar_shallow).unwrap_or(expected)
} else { expected };
match fcx.mk_assignty(expr, expr_ty, expected) {
- result::Ok(()) => { /* ok */ }
- result::Err(ref err) => {
+ Ok(()) => { /* ok */ }
+ Err(ref err) => {
fcx.report_mismatched_types(sp, expected, expr_ty, err);
}
}
TraitSource(/* trait id */ ast::DefId),
}
+impl Copy for CandidateSource {}
+
type MethodIndex = uint; // just for doc purposes
/// Determines whether the type `self_ty` supports a method name `method_name` or not.
ExpectCastableToType(Ty<'tcx>),
}
+impl<'tcx> Copy for Expectation<'tcx> {}
+
#[deriving(Clone)]
pub struct FnStyleState {
pub def: ast::NodeId,
from_fn: bool
}
+impl Copy for FnStyleState {}
+
impl FnStyleState {
pub fn function(fn_style: ast::FnStyle, def: ast::NodeId) -> FnStyleState {
FnStyleState { def: def, fn_style: fn_style, from_fn: true }
decl, id, body, &inh);
vtable::select_all_fcx_obligations_or_error(&fcx);
- regionck::regionck_fn(&fcx, id, body);
+ regionck::regionck_fn(&fcx, id, decl, body);
fcx.default_diverging_type_variables_to_nil();
writeback::resolve_type_vars_in_fn(&fcx, decl, body);
}
self.register_unsize_obligations(span, &**u)
}
ty::UnsizeVtable(ref ty_trait, self_ty) => {
- vtable::check_object_safety(self.tcx(), ty_trait, span);
-
+ vtable::check_object_safety(self.tcx(), &ty_trait.principal, span);
// If the type is `Foo+'a`, ensures that the type
// being cast to `Foo+'a` implements `Foo`:
vtable::register_object_cast_obligations(self,
NoPreference
}
+impl Copy for LvaluePreference {}
+
/// Executes an autoderef loop for the type `t`. At each step, invokes `should_stop` to decide
/// whether to terminate the loop. Returns the final type and number of derefs that it performed.
///
DoDerefArgs
}
+impl Copy for DerefArgs {}
+
/// Controls whether the arguments are tupled. This is used for the call
/// operator.
///
fcx.infcx().resolve_regions_and_report_errors();
}
-pub fn regionck_fn(fcx: &FnCtxt, id: ast::NodeId, blk: &ast::Block) {
+pub fn regionck_fn(fcx: &FnCtxt, id: ast::NodeId, decl: &ast::FnDecl, blk: &ast::Block) {
let mut rcx = Rcx::new(fcx, blk.id);
if fcx.err_count_since_creation() == 0 {
// regionck assumes typeck succeeded
- rcx.visit_fn_body(id, blk);
+ rcx.visit_fn_body(id, decl, blk);
}
// Region checking a fn can introduce new trait obligations,
fn visit_fn_body(&mut self,
id: ast::NodeId,
+ fn_decl: &ast::FnDecl,
body: &ast::Block)
{
// When we enter a function, we can derive
let len = self.region_param_pairs.len();
self.relate_free_regions(fn_sig.as_slice(), body.id);
+ link_fn_args(self, CodeExtent::from_node_id(body.id), fn_decl.inputs.as_slice());
self.visit_block(body);
self.visit_region_obligations(body.id);
self.region_param_pairs.truncate(len);
// hierarchy, and in particular the relationships between free
// regions, until regionck, as described in #3238.
- fn visit_fn(&mut self, _fk: visit::FnKind<'v>, _fd: &'v ast::FnDecl,
+ fn visit_fn(&mut self, _fk: visit::FnKind<'v>, fd: &'v ast::FnDecl,
b: &'v ast::Block, _s: Span, id: ast::NodeId) {
- self.visit_fn_body(id, b)
+ self.visit_fn_body(id, fd, b)
}
fn visit_item(&mut self, i: &ast::Item) { visit_item(self, i); }
/// then ensures that the lifetime of the resulting pointer is
/// linked to the lifetime of its guarantor (if any).
fn link_match(rcx: &Rcx, discr: &ast::Expr, arms: &[ast::Arm]) {
-
debug!("regionck::for_match()");
let mc = mc::MemCategorizationContext::new(rcx);
let discr_cmt = ignore_err!(mc.cat_expr(discr));
}
}
+/// Computes the guarantors for any ref bindings in the function
+/// arguments and then ensures that the lifetime of each resulting
+/// pointer is linked to the scope of the function body.
+fn link_fn_args(rcx: &Rcx, body_scope: CodeExtent, args: &[ast::Arg]) {
+ debug!("regionck::link_fn_args(body_scope={})", body_scope);
+ let mc = mc::MemCategorizationContext::new(rcx);
+ for arg in args.iter() {
+ let arg_ty = rcx.fcx.node_ty(arg.id);
+ let re_scope = ty::ReScope(body_scope);
+ let arg_cmt = mc.cat_rvalue(arg.id, arg.ty.span, re_scope, arg_ty);
+ debug!("arg_ty={} arg_cmt={}",
+ arg_ty.repr(rcx.tcx()),
+ arg_cmt.repr(rcx.tcx()));
+ link_pattern(rcx, mc, arg_cmt, &*arg.pat);
+ }
+}
+
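As a hedged sketch of the situation `link_fn_args` handles (modern Rust syntax, names invented for illustration): a `ref` binding in a function-argument pattern borrows from the by-value argument, and that borrow must be confined to the scope of the function body.

```rust
// `ref v` borrows from the tuple argument; the borrow's region is the
// body scope, which is exactly what link_fn_args records.
fn first_len((ref v, n): (Vec<i32>, usize)) -> usize {
    v.len() + n
}

fn main() {
    let total = first_len((vec![1, 2, 3], 10));
    assert_eq!(total, 13);
    println!("{}", total);
}
```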
/// Link lifetimes of any ref bindings in `root_pat` to the pointers found in the discriminant, if
/// needed.
fn link_pattern<'a, 'tcx>(rcx: &Rcx<'a, 'tcx>,
mc: mc::MemCategorizationContext<Rcx<'a, 'tcx>>,
discr_cmt: mc::cmt<'tcx>,
root_pat: &ast::Pat) {
+ debug!("link_pattern(discr_cmt={}, root_pat={})",
+ discr_cmt.repr(rcx.tcx()),
+ root_pat.repr(rcx.tcx()));
let _ = mc.cat_pattern(discr_cmt, root_pat, |mc, sub_cmt, sub_pat| {
match sub_pat.node {
// `ref x` pattern
// Ensure that if ~T is cast to ~Trait, then T : Trait
push_cast_obligation(fcx, cast_expr, object_trait, referent_ty);
- check_object_safety(fcx.tcx(), object_trait, source_expr.span);
+ check_object_safety(fcx.tcx(), &object_trait.principal, source_expr.span);
}
(&ty::ty_rptr(referent_region, ty::mt { ty: referent_ty,
target_region,
referent_region);
- check_object_safety(fcx.tcx(), object_trait, source_expr.span);
+ check_object_safety(fcx.tcx(), &object_trait.principal, source_expr.span);
}
}
// self by value, has no type parameters and does not use the `Self` type, except
// in self position.
pub fn check_object_safety<'tcx>(tcx: &ty::ctxt<'tcx>,
- object_trait: &ty::TyTrait<'tcx>,
+ object_trait: &ty::TraitRef<'tcx>,
+ span: Span) {
+
+ let mut object = object_trait.clone();
+ if object.substs.types.len(SelfSpace) == 0 {
+ object.substs.types.push(SelfSpace, ty::mk_err());
+ }
+
+ let object = Rc::new(object);
+ for tr in traits::supertraits(tcx, object) {
+ check_object_safety_inner(tcx, &*tr, span);
+ }
+}
+
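As a hedged illustration (modern Rust syntax, invented names) of the rule `check_object_safety` enforces: a trait is usable as a trait object only if its methods take `self` by reference and do not mention `Self` outside receiver position.

```rust
// Object-safe trait: `&self` receiver, no `Self` in other positions.
trait Named {
    fn name(&self) -> String;
    // fn consume(self) -> String;  // a by-value `self` method (without a
    //                              // `Self: Sized` bound) would make the
    //                              // trait non-object-safe
}

struct Dog;

impl Named for Dog {
    fn name(&self) -> String { "dog".to_string() }
}

fn main() {
    // Building the trait object compiles only because `Named` is object-safe.
    let d: Box<dyn Named> = Box::new(Dog);
    println!("{}", d.name());
}
```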
+fn check_object_safety_inner<'tcx>(tcx: &ty::ctxt<'tcx>,
+ object_trait: &ty::TraitRef<'tcx>,
span: Span) {
// Skip the fn_once lang item trait since only the compiler should call
// `call_once` which is the method which takes self by value. What could go
// wrong?
match tcx.lang_items.fn_once_trait() {
- Some(def_id) if def_id == object_trait.principal.def_id => return,
+ Some(def_id) if def_id == object_trait.def_id => return,
_ => {}
}
- let trait_items = ty::trait_items(tcx, object_trait.principal.def_id);
+ let trait_items = ty::trait_items(tcx, object_trait.def_id);
let mut errors = Vec::new();
for item in trait_items.iter() {
let mut errors = errors.iter().flat_map(|x| x.iter()).peekable();
if errors.peek().is_some() {
- let trait_name = ty::item_path_str(tcx, object_trait.principal.def_id);
+ let trait_name = ty::item_path_str(tcx, object_trait.def_id);
span_err!(tcx.sess, span, E0038,
"cannot convert to a trait object because trait `{}` is not object-safe",
trait_name);
"overflow evaluating the trait `{}` for the type `{}`",
trait_ref.user_string(fcx.tcx()),
self_ty.user_string(fcx.tcx())).as_slice());
+
+ let current_limit = fcx.tcx().sess.recursion_limit.get();
+ let suggested_limit = current_limit * 2;
+ fcx.tcx().sess.span_note(
+ obligation.cause.span,
+ format!(
+ "consider adding a `#![recursion_limit=\"{}\"]` attribute to your crate",
+ suggested_limit)[]);
+
note_obligation_cause(fcx, obligation);
}
Unimplemented => {
}
}
+ if fcx.tcx().lang_items.copy_trait() == Some(trait_ref.def_id) {
+ // This is checked in coherence.
+ return
+ }
+
// We are stricter on the trait-ref in an impl than the
// self-type. In particular, we enforce region
// relationships. The reason for this is that (at least
ResolvingUnboxedClosure(ast::DefId),
}
+impl Copy for ResolveReason {}
+
impl ResolveReason {
fn span(&self, tcx: &ty::ctxt) -> Span {
match *self {
use metadata::csearch::{each_impl, get_impl_trait};
use metadata::csearch;
-use middle::subst;
+use middle::subst::{mod, Subst};
use middle::ty::{ImplContainer, ImplOrTraitItemId, MethodTraitItemId};
-use middle::ty::{TypeTraitItemId, lookup_item_type};
-use middle::ty::{Ty, ty_bool, ty_char, ty_enum, ty_err};
-use middle::ty::{ty_str, ty_vec, ty_float, ty_infer, ty_int, ty_open};
+use middle::ty::{ParameterEnvironment, TypeTraitItemId, lookup_item_type};
+use middle::ty::{Ty, ty_bool, ty_char, ty_closure, ty_enum, ty_err};
use middle::ty::{ty_param, Polytype, ty_ptr};
use middle::ty::{ty_rptr, ty_struct, ty_trait, ty_tup};
+use middle::ty::{ty_str, ty_vec, ty_float, ty_infer, ty_int, ty_open};
use middle::ty::{ty_uint, ty_unboxed_closure, ty_uniq, ty_bare_fn};
-use middle::ty::{ty_closure};
-use middle::ty::type_is_ty_var;
-use middle::subst::Subst;
+use middle::ty::{type_is_ty_var};
use middle::ty;
use CrateCtxt;
use middle::infer::combine::Combine;
// do this here, but it's actually the most convenient place, since
// the coherence tables contain the trait -> type mappings.
self.populate_destructor_table();
+
+ // Check to make sure implementations of `Copy` are legal.
+ self.check_implementations_of_copy();
}
fn check_implementation(&self,
}
}
}
+
+ /// Ensures that implementations of the built-in trait `Copy` are legal.
+ fn check_implementations_of_copy(&self) {
+ let tcx = self.crate_context.tcx;
+ let copy_trait = match tcx.lang_items.copy_trait() {
+ Some(id) => id,
+ None => return,
+ };
+
+ let trait_impls = match tcx.trait_impls
+ .borrow()
+ .get(&copy_trait)
+ .cloned() {
+ None => {
+ debug!("check_implementations_of_copy(): no types with \
+ implementations of `Copy` found");
+ return
+ }
+ Some(found_impls) => found_impls
+ };
+
+ // Clone first to avoid a double borrow error.
+ let trait_impls = trait_impls.borrow().clone();
+
+ for &impl_did in trait_impls.iter() {
+ if impl_did.krate != ast::LOCAL_CRATE {
+ debug!("check_implementations_of_copy(): impl not in this \
+ crate");
+ continue
+ }
+
+ let self_type = self.get_self_type_for_implementation(impl_did);
+ let span = tcx.map.span(impl_did.node);
+ let param_env = ParameterEnvironment::for_item(tcx,
+ impl_did.node);
+ let self_type = self_type.ty.subst(tcx, &param_env.free_substs);
+
+ match ty::can_type_implement_copy(tcx, self_type, &param_env) {
+ Ok(()) => {}
+ Err(ty::FieldDoesNotImplementCopy(name)) => {
+ tcx.sess
+ .span_err(span,
+ format!("the trait `Copy` may not be \
+ implemented for this type; field \
+ `{}` does not implement `Copy`",
+ token::get_name(name)).as_slice())
+ }
+ Err(ty::VariantDoesNotImplementCopy(name)) => {
+ tcx.sess
+ .span_err(span,
+ format!("the trait `Copy` may not be \
+ implemented for this type; variant \
+ `{}` does not implement `Copy`",
+ token::get_name(name)).as_slice())
+ }
+ Err(ty::TypeIsStructural) => {
+ tcx.sess
+ .span_err(span,
+ "the trait `Copy` may not be implemented \
+ for this type; type is not a structure or \
+ enumeration")
+ }
+ }
+ }
+ }
}
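A minimal sketch, in today's syntax, of the behavior this coherence pass guards (names invented for the example): `Copy` is opt-in, the derive only compiles when every field is itself `Copy`, and a `Copy` value stays usable after being assigned elsewhere.

```rust
// Copy is opt-in: both Clone and Copy are derived here, and the derive
// would be rejected if any field (e.g. a String) were not Copy.
#[derive(Clone, Copy)]
struct Point { x: i32, y: i32 }

fn sum(p: Point) -> i32 { p.x + p.y }

fn main() {
    let p = Point { x: 1, y: 2 };
    let q = p;              // bitwise copy: `p` is still usable afterwards
    assert_eq!(sum(p), sum(q));
    println!("{}", sum(p));
}
```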
fn subst_receiver_types_in_method_ty<'tcx>(tcx: &ty::ctxt<'tcx>,
TraitConvertMethodContext(ast::DefId, &'a [ast::TraitItem]),
}
+impl<'a> Copy for ConvertMethodContext<'a> {}
+
fn convert_methods<'a,'tcx,'i,I>(ccx: &CrateCtxt<'a, 'tcx>,
convert_method_context: ConvertMethodContext,
container: ImplOrTraitItemContainer,
// for types that appear in structs and so on.
pub struct ExplicitRscope;
+impl Copy for ExplicitRscope {}
+
impl RegionScope for ExplicitRscope {
fn default_region_bound(&self, _span: Span) -> Option<ty::Region> {
None
// A scope in which any omitted region defaults to `default`. This is
// used after the `->` in function signatures, but also for backwards
// compatibility with object types. The latter use may go away.
+#[allow(missing_copy_implementations)]
pub struct SpecificRscope {
default: ty::Region
}
#[deriving(Show)]
struct InferredIndex(uint);
+impl Copy for InferredIndex {}
+
enum VarianceTerm<'a> {
ConstantTerm(ty::Variance),
TransformTerm(VarianceTermPtr<'a>, VarianceTermPtr<'a>),
InferredTerm(InferredIndex),
}
+impl<'a> Copy for VarianceTerm<'a> {}
+
impl<'a> fmt::Show for VarianceTerm<'a> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match *self {
RegionParam
}
+impl Copy for ParamKind {}
+
struct InferredInfo<'a> {
item_id: ast::NodeId,
kind: ParamKind,
variance: &'a VarianceTerm<'a>,
}
+impl<'a> Copy for Constraint<'a> {}
+
fn add_constraints_from_crate<'a, 'tcx>(terms_cx: TermsContext<'a, 'tcx>,
krate: &ast::Crate)
-> ConstraintContext<'a, 'tcx> {
while index < num_inferred &&
inferred_infos[index].item_id == item_id {
- let info = inferred_infos[index];
+ let info = &inferred_infos[index];
let variance = solutions[index];
debug!("Index {} Info {} / {} / {} Variance {}",
index, info.index, info.kind, info.space, variance);
fn is_doc_hidden(a: &clean::Attribute) -> bool {
match *a {
- clean::List(ref name, ref inner) if name.as_slice() == "doc" => {
+ clean::List(ref name, ref inner) if *name == "doc" => {
inner.iter().any(|a| {
match *a {
- clean::Word(ref s) => s.as_slice() == "hidden",
+ clean::Word(ref s) => *s == "hidden",
_ => false,
}
})
pub fn doc_list<'a>(&'a self) -> Option<&'a [Attribute]> {
for attr in self.attrs.iter() {
match *attr {
- List(ref x, ref list) if "doc" == x.as_slice() => {
+ List(ref x, ref list) if "doc" == *x => {
return Some(list.as_slice());
}
_ => {}
pub fn doc_value<'a>(&'a self) -> Option<&'a str> {
for attr in self.attrs.iter() {
match *attr {
- NameValue(ref x, ref v) if "doc" == x.as_slice() => {
+ NameValue(ref x, ref v) if "doc" == *x => {
return Some(v.as_slice());
}
_ => {}
Some(ref l) => {
for innerattr in l.iter() {
match *innerattr {
- Word(ref s) if "hidden" == s.as_slice() => {
+ Word(ref s) if "hidden" == *s => {
return true
}
_ => (),
PrimitiveTuple,
}
+impl Copy for PrimitiveType {}
+
#[deriving(Clone, Encodable, Decodable)]
pub enum TypeKind {
TypeEnum,
TypeTypedef,
}
+impl Copy for TypeKind {}
+
impl PrimitiveType {
fn from_str(s: &str) -> Option<PrimitiveType> {
match s.as_slice() {
fn find(attrs: &[Attribute]) -> Option<PrimitiveType> {
for attr in attrs.iter() {
let list = match *attr {
- List(ref k, ref l) if k.as_slice() == "doc" => l,
+ List(ref k, ref l) if *k == "doc" => l,
_ => continue,
};
for sub_attr in list.iter() {
let value = match *sub_attr {
NameValue(ref k, ref v)
- if k.as_slice() == "primitive" => v.as_slice(),
+ if *k == "primitive" => v.as_slice(),
_ => continue,
};
match PrimitiveType::from_str(value) {
Immutable,
}
+impl Copy for Mutability {}
+
impl Clean<Mutability> for ast::Mutability {
fn clean(&self, _: &DocContext) -> Mutability {
match self {
Unit
}
+impl Copy for StructType {}
+
pub enum TypeBound {
RegionBound,
TraitBound(ast::TraitRef)
use clean;
use stability_summary::ModuleSummary;
-use html::item_type;
use html::item_type::ItemType;
use html::render;
use html::render::{cache, CURRENT_LOCATION_KEY};
/// Wrapper struct for emitting type parameter bounds.
pub struct TyParamBounds<'a>(pub &'a [clean::TyParamBound]);
+impl Copy for VisSpace {}
+impl Copy for FnStyleSpace {}
+impl Copy for MutableSpace {}
+impl Copy for RawMutableSpace {}
+
impl VisSpace {
pub fn get(&self) -> Option<ast::Visibility> {
let VisSpace(v) = *self; v
Some(root) => {
let mut root = String::from_str(root.as_slice());
for seg in path.segments[..amt].iter() {
- if "super" == seg.name.as_slice() ||
- "self" == seg.name.as_slice() {
+ if "super" == seg.name ||
+ "self" == seg.name {
try!(write!(w, "{}::", seg.name));
} else {
root.push_str(seg.name.as_slice());
url.push_str("/");
}
match shortty {
- item_type::Module => {
+ ItemType::Module => {
url.push_str(fqp.last().unwrap().as_slice());
url.push_str("/index.html");
}
Some(root) => {
try!(write!(f, "<a href='{}{}/primitive.{}.html'>",
root,
- path.ref0().as_slice().head().unwrap(),
+ path.ref0().head().unwrap(),
prim.to_url_str()));
needs_termination = true;
}
// except according to those terms.
//! Item types.
-pub use self::ItemType::*;
use std::fmt;
use clean;
Method = 10,
StructField = 11,
Variant = 12,
- ForeignFunction = 13,
- ForeignStatic = 14,
+ // ForeignFunction (13) and ForeignStatic (14) have been retired; the
+ // explicit values below skip them so the remaining numbering stays stable.
Macro = 15,
Primitive = 16,
AssociatedType = 17,
Constant = 18,
}
+impl Copy for ItemType {}
+
impl ItemType {
+ pub fn from_item(item: &clean::Item) -> ItemType {
+ match item.inner {
+ clean::ModuleItem(..) => ItemType::Module,
+ clean::StructItem(..) => ItemType::Struct,
+ clean::EnumItem(..) => ItemType::Enum,
+ clean::FunctionItem(..) => ItemType::Function,
+ clean::TypedefItem(..) => ItemType::Typedef,
+ clean::StaticItem(..) => ItemType::Static,
+ clean::ConstantItem(..) => ItemType::Constant,
+ clean::TraitItem(..) => ItemType::Trait,
+ clean::ImplItem(..) => ItemType::Impl,
+ clean::ViewItemItem(..) => ItemType::ViewItem,
+ clean::TyMethodItem(..) => ItemType::TyMethod,
+ clean::MethodItem(..) => ItemType::Method,
+ clean::StructFieldItem(..) => ItemType::StructField,
+ clean::VariantItem(..) => ItemType::Variant,
+ clean::ForeignFunctionItem(..) => ItemType::Function, // no ForeignFunction
+ clean::ForeignStaticItem(..) => ItemType::Static, // no ForeignStatic
+ clean::MacroItem(..) => ItemType::Macro,
+ clean::PrimitiveItem(..) => ItemType::Primitive,
+ clean::AssociatedTypeItem(..) => ItemType::AssociatedType,
+ }
+ }
+
+ pub fn from_type_kind(kind: clean::TypeKind) -> ItemType {
+ match kind {
+ clean::TypeStruct => ItemType::Struct,
+ clean::TypeEnum => ItemType::Enum,
+ clean::TypeFunction => ItemType::Function,
+ clean::TypeTrait => ItemType::Trait,
+ clean::TypeModule => ItemType::Module,
+ clean::TypeStatic => ItemType::Static,
+ clean::TypeVariant => ItemType::Variant,
+ clean::TypeTypedef => ItemType::Typedef,
+ }
+ }
+
pub fn to_static_str(&self) -> &'static str {
match *self {
- Module => "mod",
- Struct => "struct",
- Enum => "enum",
- Function => "fn",
- Typedef => "type",
- Static => "static",
- Trait => "trait",
- Impl => "impl",
- ViewItem => "viewitem",
- TyMethod => "tymethod",
- Method => "method",
- StructField => "structfield",
- Variant => "variant",
- ForeignFunction => "ffi",
- ForeignStatic => "ffs",
- Macro => "macro",
- Primitive => "primitive",
- AssociatedType => "associatedtype",
- Constant => "constant",
+ ItemType::Module => "mod",
+ ItemType::Struct => "struct",
+ ItemType::Enum => "enum",
+ ItemType::Function => "fn",
+ ItemType::Typedef => "type",
+ ItemType::Static => "static",
+ ItemType::Trait => "trait",
+ ItemType::Impl => "impl",
+ ItemType::ViewItem => "viewitem",
+ ItemType::TyMethod => "tymethod",
+ ItemType::Method => "method",
+ ItemType::StructField => "structfield",
+ ItemType::Variant => "variant",
+ ItemType::Macro => "macro",
+ ItemType::Primitive => "primitive",
+ ItemType::AssociatedType => "associatedtype",
+ ItemType::Constant => "constant",
}
}
}
}
}
-pub fn shortty(item: &clean::Item) -> ItemType {
- match item.inner {
- clean::ModuleItem(..) => Module,
- clean::StructItem(..) => Struct,
- clean::EnumItem(..) => Enum,
- clean::FunctionItem(..) => Function,
- clean::TypedefItem(..) => Typedef,
- clean::StaticItem(..) => Static,
- clean::ConstantItem(..) => Constant,
- clean::TraitItem(..) => Trait,
- clean::ImplItem(..) => Impl,
- clean::ViewItemItem(..) => ViewItem,
- clean::TyMethodItem(..) => TyMethod,
- clean::MethodItem(..) => Method,
- clean::StructFieldItem(..) => StructField,
- clean::VariantItem(..) => Variant,
- clean::ForeignFunctionItem(..) => ForeignFunction,
- clean::ForeignStaticItem(..) => ForeignStatic,
- clean::MacroItem(..) => Macro,
- clean::PrimitiveItem(..) => Primitive,
- clean::AssociatedTypeItem(..) => AssociatedType,
- }
-}
-
}
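The explicit discriminants in `ItemType` follow a common pattern; a hedged sketch (invented names, modern syntax) of retiring enum variants while keeping the surviving numbers stable:

```rust
// Explicit discriminants keep externally-visible numbering stable even
// after variants are removed from the middle of the enum.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum ItemKind {
    Module = 0,
    Function = 3,
    // 13 and 14 (the old foreign-item kinds) are retired; the next
    // variant keeps its historical value.
    Macro = 15,
}

fn as_str(kind: ItemKind) -> &'static str {
    match kind {
        ItemKind::Module => "mod",
        ItemKind::Function => "fn",
        ItemKind::Macro => "macro",
    }
}

fn main() {
    assert_eq!(ItemKind::Macro as u32, 15);
    println!("{} {}", as_str(ItemKind::Function), ItemKind::Macro as u32);
}
```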
pub fn redirect(dst: &mut io::Writer, url: &str) -> io::IoResult<()> {
+ // <script> triggers a redirect before refresh, so this is fine.
write!(dst,
r##"<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="refresh" content="0;URL={url}">
</head>
<body>
+ <p>Redirecting to <a href="{url}">{url}</a>...</p>
+ <script>location.replace("{url}" + location.search + location.hash);</script>
</body>
</html>"##,
url = url,
};
// Transform the contents of the header into a hyphenated string
- let id = s.as_slice().words().map(|s| s.to_ascii_lower())
+ let id = s.words().map(|s| s.to_ascii_lower())
.collect::<Vec<String>>().connect("-");
// This is a terrible hack working around how hoedown gives us rendered
let mut seen_other_tags = false;
let mut data = LangString::all_false();
- let mut tokens = string.as_slice().split(|c: char|
+ let mut tokens = string.split(|c: char|
!(c == '_' || c == '-' || c.is_alphanumeric())
);
use html::format::{VisSpace, Method, FnStyleSpace, MutableSpace, Stability};
use html::format::{ConciseStability, TyParamBounds, WhereClause};
use html::highlight;
-use html::item_type::{ItemType, shortty};
-use html::item_type;
+use html::item_type::ItemType;
use html::layout;
use html::markdown::Markdown;
use html::markdown;
// Helper structs for rendering items/sidebars and carrying along contextual
// information
-struct Item<'a> { cx: &'a Context, item: &'a clean::Item, }
+struct Item<'a> {
+ cx: &'a Context,
+ item: &'a clean::Item,
+}
+
+impl<'a> Copy for Item<'a> {}
+
struct Sidebar<'a> { cx: &'a Context, item: &'a clean::Item, }
/// Struct representing one entry in the JS search index. These are all emitted
for attr in attrs.iter() {
match *attr {
clean::NameValue(ref x, ref s)
- if "html_favicon_url" == x.as_slice() => {
+ if "html_favicon_url" == *x => {
cx.layout.favicon = s.to_string();
}
clean::NameValue(ref x, ref s)
- if "html_logo_url" == x.as_slice() => {
+ if "html_logo_url" == *x => {
cx.layout.logo = s.to_string();
}
clean::NameValue(ref x, ref s)
- if "html_playground_url" == x.as_slice() => {
+ if "html_playground_url" == *x => {
cx.layout.playground_url = s.to_string();
markdown::PLAYGROUND_KRATE.with(|slot| {
if slot.borrow().is_none() {
});
}
clean::Word(ref x)
- if "html_no_source" == x.as_slice() => {
+ if "html_no_source" == *x => {
cx.include_sources = false;
}
_ => {}
let paths: HashMap<ast::DefId, (Vec<String>, ItemType)> =
analysis.as_ref().map(|a| {
let paths = a.external_paths.borrow_mut().take().unwrap();
- paths.into_iter().map(|(k, (v, t))| {
- (k, (v, match t {
- clean::TypeStruct => item_type::Struct,
- clean::TypeEnum => item_type::Enum,
- clean::TypeFunction => item_type::Function,
- clean::TypeTrait => item_type::Trait,
- clean::TypeModule => item_type::Module,
- clean::TypeStatic => item_type::Static,
- clean::TypeVariant => item_type::Variant,
- clean::TypeTypedef => item_type::Typedef,
- }))
- }).collect()
- }).unwrap_or(HashMap::new());
+ paths.into_iter().map(|(k, (v, t))| (k, (v, ItemType::from_type_kind(t)))).collect()
+ }).unwrap_or(HashMap::new());
let mut cache = Cache {
impls: HashMap::new(),
external_paths: paths.iter().map(|(&k, v)| (k, v.ref0().clone()))
for &(n, ref e) in krate.externs.iter() {
cache.extern_locations.insert(n, extern_location(e, &cx.dst));
let did = ast::DefId { krate: n, node: ast::CRATE_NODE_ID };
- cache.paths.insert(did, (vec![e.name.to_string()], item_type::Module));
+ cache.paths.insert(did, (vec![e.name.to_string()], ItemType::Module));
}
// Cache where all known primitives have their documentation located.
for (i, item) in cache.search_index.iter().enumerate() {
// Omit the path if it is same to that of the prior item.
let path;
- if lastpath.as_slice() == item.path.as_slice() {
+ if lastpath == item.path {
path = "";
} else {
lastpath = item.path.to_string();
if path.exists() {
for line in BufferedReader::new(File::open(path)).lines() {
let line = try!(line);
- if !line.as_slice().starts_with(key) {
+ if !line.starts_with(key) {
continue
}
- if line.as_slice().starts_with(
+ if line.starts_with(
format!("{}['{}']", key, krate).as_slice()) {
continue
}
}
}
+/// Returns a documentation-level item type from the item.
+fn shortty(item: &clean::Item) -> ItemType {
+ ItemType::from_item(item)
+}
+
/// Takes a path to a source file and cleans the path to it. This canonicalizes
/// things like ".." to components which preserve the "top down" hierarchy of a
/// static HTML tree.
// external crate
for attr in e.attrs.iter() {
match *attr {
- clean::List(ref x, ref list) if "doc" == x.as_slice() => {
+ clean::List(ref x, ref list) if "doc" == *x => {
for attr in list.iter() {
match *attr {
clean::NameValue(ref x, ref s)
- if "html_root_url" == x.as_slice() => {
- if s.as_slice().ends_with("/") {
+ if "html_root_url" == *x => {
+ if s.ends_with("/") {
return Remote(s.to_string());
}
return Remote(format!("{}/", s));
let last = self.parent_stack.last().unwrap();
let did = *last;
let path = match self.paths.get(&did) {
- Some(&(_, item_type::Trait)) =>
+ Some(&(_, ItemType::Trait)) =>
Some(self.stack[..self.stack.len() - 1]),
// The current stack not necessarily has correlation for
// where the type was defined. On the other hand,
// `paths` always has the right information if present.
- Some(&(ref fqp, item_type::Struct)) |
- Some(&(ref fqp, item_type::Enum)) =>
+ Some(&(ref fqp, ItemType::Struct)) |
+ Some(&(ref fqp, ItemType::Enum)) =>
Some(fqp[..fqp.len() - 1]),
Some(..) => Some(self.stack.as_slice()),
None => None
clean::VariantItem(..) if !self.privmod => {
let mut stack = self.stack.clone();
stack.pop();
- self.paths.insert(item.def_id, (stack, item_type::Enum));
+ self.paths.insert(item.def_id, (stack, ItemType::Enum));
}
clean::PrimitiveItem(..) if item.visibility.is_some() => {
let dox = match attrs.into_iter().find(|a| {
match *a {
clean::NameValue(ref x, _)
- if "doc" == x.as_slice() => {
+ if "doc" == *x => {
true
}
_ => false
for item in m.items.iter() {
if self.ignore_private_item(item) { continue }
+            // avoid putting foreign items into the sidebar.
+ if let &clean::ForeignFunctionItem(..) = &item.inner { continue }
+ if let &clean::ForeignStaticItem(..) = &item.inner { continue }
+
let short = shortty(item).to_static_str();
let myname = match item.name {
None => continue,
}
for (_, items) in map.iter_mut() {
- items.as_mut_slice().sort();
+ items.sort();
}
return map;
}
clean::TypedefItem(ref t) => item_typedef(fmt, self.item, t),
clean::MacroItem(ref m) => item_macro(fmt, self.item, m),
clean::PrimitiveItem(ref p) => item_primitive(fmt, self.item, p),
- clean::StaticItem(ref i) => item_static(fmt, self.item, i),
+ clean::StaticItem(ref i) | clean::ForeignStaticItem(ref i) =>
+ item_static(fmt, self.item, i),
clean::ConstantItem(ref c) => item_constant(fmt, self.item, c),
_ => Ok(())
}
!cx.ignore_private_item(&items[*i])
}).collect::<Vec<uint>>();
+    // The order in which item types appear in a module listing.
+ fn reorder(ty: ItemType) -> u8 {
+ match ty {
+ ItemType::ViewItem => 0,
+ ItemType::Primitive => 1,
+ ItemType::Module => 2,
+ ItemType::Macro => 3,
+ ItemType::Struct => 4,
+ ItemType::Enum => 5,
+ ItemType::Constant => 6,
+ ItemType::Static => 7,
+ ItemType::Trait => 8,
+ ItemType::Function => 9,
+ ItemType::Typedef => 10,
+ _ => 11 + ty as u8,
+ }
+ }
+
fn cmp(i1: &clean::Item, i2: &clean::Item, idx1: uint, idx2: uint) -> Ordering {
- if shortty(i1) == shortty(i2) {
+ let ty1 = shortty(i1);
+ let ty2 = shortty(i2);
+ if ty1 == ty2 {
return i1.name.cmp(&i2.name);
}
- match (&i1.inner, &i2.inner) {
- (&clean::ViewItemItem(ref a), &clean::ViewItemItem(ref b)) => {
- match (&a.inner, &b.inner) {
- (&clean::ExternCrate(..), _) => Less,
- (_, &clean::ExternCrate(..)) => Greater,
- _ => idx1.cmp(&idx2),
+
+ let tycmp = reorder(ty1).cmp(&reorder(ty2));
+ if let Equal = tycmp {
+ // for reexports, `extern crate` takes precedence.
+ match (&i1.inner, &i2.inner) {
+ (&clean::ViewItemItem(ref a), &clean::ViewItemItem(ref b)) => {
+ match (&a.inner, &b.inner) {
+ (&clean::ExternCrate(..), _) => return Less,
+ (_, &clean::ExternCrate(..)) => return Greater,
+ _ => {}
+ }
}
+ (_, _) => {}
}
- (&clean::ViewItemItem(..), _) => Less,
- (_, &clean::ViewItemItem(..)) => Greater,
- (&clean::PrimitiveItem(..), _) => Less,
- (_, &clean::PrimitiveItem(..)) => Greater,
- (&clean::ModuleItem(..), _) => Less,
- (_, &clean::ModuleItem(..)) => Greater,
- (&clean::MacroItem(..), _) => Less,
- (_, &clean::MacroItem(..)) => Greater,
- (&clean::StructItem(..), _) => Less,
- (_, &clean::StructItem(..)) => Greater,
- (&clean::EnumItem(..), _) => Less,
- (_, &clean::EnumItem(..)) => Greater,
- (&clean::ConstantItem(..), _) => Less,
- (_, &clean::ConstantItem(..)) => Greater,
- (&clean::StaticItem(..), _) => Less,
- (_, &clean::StaticItem(..)) => Greater,
- (&clean::ForeignFunctionItem(..), _) => Less,
- (_, &clean::ForeignFunctionItem(..)) => Greater,
- (&clean::ForeignStaticItem(..), _) => Less,
- (_, &clean::ForeignStaticItem(..)) => Greater,
- (&clean::TraitItem(..), _) => Less,
- (_, &clean::TraitItem(..)) => Greater,
- (&clean::FunctionItem(..), _) => Less,
- (_, &clean::FunctionItem(..)) => Greater,
- (&clean::TypedefItem(..), _) => Less,
- (_, &clean::TypedefItem(..)) => Greater,
- _ => idx1.cmp(&idx2),
+
+ idx1.cmp(&idx2)
+ } else {
+ tycmp
}
}
try!(write!(w, "</table>"));
}
curty = myty;
- let (short, name) = match myitem.inner {
- clean::ModuleItem(..) => ("modules", "Modules"),
- clean::StructItem(..) => ("structs", "Structs"),
- clean::EnumItem(..) => ("enums", "Enums"),
- clean::FunctionItem(..) => ("functions", "Functions"),
- clean::TypedefItem(..) => ("types", "Type Definitions"),
- clean::StaticItem(..) => ("statics", "Statics"),
- clean::ConstantItem(..) => ("constants", "Constants"),
- clean::TraitItem(..) => ("traits", "Traits"),
- clean::ImplItem(..) => ("impls", "Implementations"),
- clean::ViewItemItem(..) => ("reexports", "Reexports"),
- clean::TyMethodItem(..) => ("tymethods", "Type Methods"),
- clean::MethodItem(..) => ("methods", "Methods"),
- clean::StructFieldItem(..) => ("fields", "Struct Fields"),
- clean::VariantItem(..) => ("variants", "Variants"),
- clean::ForeignFunctionItem(..) => ("ffi-fns", "Foreign Functions"),
- clean::ForeignStaticItem(..) => ("ffi-statics", "Foreign Statics"),
- clean::MacroItem(..) => ("macros", "Macros"),
- clean::PrimitiveItem(..) => ("primitives", "Primitive Types"),
- clean::AssociatedTypeItem(..) => ("associated-types", "Associated Types"),
+ let (short, name) = match myty.unwrap() {
+ ItemType::Module => ("modules", "Modules"),
+ ItemType::Struct => ("structs", "Structs"),
+ ItemType::Enum => ("enums", "Enums"),
+ ItemType::Function => ("functions", "Functions"),
+ ItemType::Typedef => ("types", "Type Definitions"),
+ ItemType::Static => ("statics", "Statics"),
+ ItemType::Constant => ("constants", "Constants"),
+ ItemType::Trait => ("traits", "Traits"),
+ ItemType::Impl => ("impls", "Implementations"),
+ ItemType::ViewItem => ("reexports", "Reexports"),
+ ItemType::TyMethod => ("tymethods", "Type Methods"),
+ ItemType::Method => ("methods", "Methods"),
+ ItemType::StructField => ("fields", "Struct Fields"),
+ ItemType::Variant => ("variants", "Variants"),
+ ItemType::Macro => ("macros", "Macros"),
+ ItemType::Primitive => ("primitives", "Primitive Types"),
+ ItemType::AssociatedType => ("associated-types", "Associated Types"),
};
try!(write!(w,
"<h2 id='{id}' class='section-header'>\
for (var i = results.length - 1; i > 0; i -= 1) {
if (results[i].word === results[i - 1].word &&
results[i].item.ty === results[i - 1].item.ty &&
- results[i].item.path === results[i - 1].item.path)
+ results[i].item.path === results[i - 1].item.path &&
+ (results[i].item.parent || {}).name === (results[i - 1].item.parent || {}).name)
{
results[i].id = -1;
}
"method",
"structfield",
"variant",
- "ffi",
- "ffs",
+ "ffi", // retained for backward compatibility
+ "ffs", // retained for backward compatibility
"macro",
"primitive",
"associatedtype",
var code = $('<code>').append(structs[j]);
$.each(code.find('a'), function(idx, a) {
var href = $(a).attr('href');
- if (!href.startsWith('http')) {
- $(a).attr('href', rootPath + $(a).attr('href'));
+ if (href && !href.startsWith('http')) {
+ $(a).attr('href', rootPath + href);
}
});
var li = $('<li>').append(code);
#![experimental]
#![crate_type = "dylib"]
#![crate_type = "rlib"]
+#![doc(html_logo_url = "http://www.rust-lang.org/logos/rust-logo-128x128-blk-v2.png",
+ html_favicon_url = "http://www.rust-lang.org/favicon.ico",
+ html_root_url = "http://doc.rust-lang.org/nightly/",
+ html_playground_url = "http://play.rust-lang.org/")]
#![allow(unknown_features)]
#![feature(globs, if_let, macro_rules, phase, slicing_syntax, tuple_indexing)]
fn parse_externs(matches: &getopts::Matches) -> Result<core::Externs, String> {
let mut externs = HashMap::new();
for arg in matches.opt_strs("extern").iter() {
- let mut parts = arg.as_slice().splitn(1, '=');
+ let mut parts = arg.splitn(1, '=');
let name = match parts.next() {
Some(s) => s,
None => {
for inner in nested.iter() {
match *inner {
clean::Word(ref x)
- if "no_default_passes" == x.as_slice() => {
+ if "no_default_passes" == *x => {
default_passes = false;
}
clean::NameValue(ref x, ref value)
- if "passes" == x.as_slice() => {
- for pass in value.as_slice().words() {
+ if "passes" == *x => {
+ for pass in value.words() {
passes.push(pass.to_string());
}
}
clean::NameValue(ref x, ref value)
- if "plugins" == x.as_slice() => {
- for p in value.as_slice().words() {
+ if "plugins" == *x => {
+ for p in value.words() {
plugins.push(p.to_string());
}
}
for pass in passes.iter() {
let plugin = match PASSES.iter()
.position(|&(p, _, _)| {
- p == pass.as_slice()
+ p == *pass
}) {
Some(i) => PASSES[i].val1(),
None => {
// Make sure the schema is what we expect
match obj.remove(&"schema".to_string()) {
Some(Json::String(version)) => {
- if version.as_slice() != SCHEMA_VERSION {
+ if version != SCHEMA_VERSION {
return Err(format!(
"sorry, but I only understand version {}",
SCHEMA_VERSION))
for attr in i.attrs.iter() {
match attr {
&clean::NameValue(ref x, ref s)
- if "doc" == x.as_slice() => {
+ if "doc" == *x => {
avec.push(clean::NameValue("doc".to_string(),
unindent(s.as_slice())))
}
for attr in i.attrs.iter() {
match *attr {
clean::NameValue(ref x, ref s)
- if "doc" == x.as_slice() => {
+ if "doc" == *x => {
docstr.push_str(s.as_slice());
docstr.push('\n');
},
}
}
let mut a: Vec<clean::Attribute> = i.attrs.iter().filter(|&a| match a {
- &clean::NameValue(ref x, _) if "doc" == x.as_slice() => false,
+ &clean::NameValue(ref x, _) if "doc" == *x => false,
_ => true
}).map(|x| x.clone()).collect();
if docstr.len() > 0 {
fn should_unindent() {
let s = " line1\n line2".to_string();
let r = unindent(s.as_slice());
- assert_eq!(r.as_slice(), "line1\nline2");
+ assert_eq!(r, "line1\nline2");
}
#[test]
fn should_unindent_multiple_paragraphs() {
let s = " line1\n\n line2".to_string();
let r = unindent(s.as_slice());
- assert_eq!(r.as_slice(), "line1\n\nline2");
+ assert_eq!(r, "line1\n\nline2");
}
#[test]
// base indentation and should be preserved
let s = " line1\n\n line2".to_string();
let r = unindent(s.as_slice());
- assert_eq!(r.as_slice(), "line1\n\n line2");
+ assert_eq!(r, "line1\n\n line2");
}
#[test]
// and continue here"]
let s = "line1\n line2".to_string();
let r = unindent(s.as_slice());
- assert_eq!(r.as_slice(), "line1\nline2");
+ assert_eq!(r, "line1\nline2");
}
#[test]
fn should_not_ignore_first_line_indent_in_a_single_line_para() {
let s = "line1\n\n line2".to_string();
let r = unindent(s.as_slice());
- assert_eq!(r.as_slice(), "line1\n\n line2");
+ assert_eq!(r, "line1\n\n line2");
}
}
pub unmarked: uint,
}
+impl Copy for Counts {}
+
impl Add<Counts, Counts> for Counts {
fn add(&self, other: &Counts) -> Counts {
Counts {
desc: testing::TestDesc {
name: testing::DynTestName(name),
ignore: should_ignore,
- should_fail: false, // compiler failures are test failures
+ should_fail: testing::ShouldFail::No, // compiler failures are test failures
},
testfn: testing::DynTestFn(proc() {
runtest(test.as_slice(),
static TASK_COUNT: atomic::AtomicUint = atomic::INIT_ATOMIC_UINT;
static TASK_LOCK: StaticNativeMutex = NATIVE_MUTEX_INIT;
+#[allow(missing_copy_implementations)]
pub struct Token { _private: () }
impl Drop for Token {
///
/// This structure wraps a `*libc::c_char`, and will automatically free the
/// memory it is pointing to when it goes out of scope.
+#[allow(missing_copy_implementations)]
pub struct CString {
buf: *const libc::c_char,
owns_buffer_: bool,
#[cfg(any(target_os = "macos", target_os = "ios"))]
mod os {
+ use core::kinds::Copy;
use libc;
#[cfg(target_arch = "x86_64")]
__sig: libc::c_long,
__opaque: [u8, ..__PTHREAD_MUTEX_SIZE__],
}
+
+ impl Copy for pthread_mutex_t {}
+
#[repr(C)]
pub struct pthread_cond_t {
__sig: libc::c_long,
__opaque: [u8, ..__PTHREAD_COND_SIZE__],
}
+ impl Copy for pthread_cond_t {}
+
pub const PTHREAD_MUTEX_INITIALIZER: pthread_mutex_t = pthread_mutex_t {
__sig: _PTHREAD_MUTEX_SIG_INIT,
__opaque: [0, ..__PTHREAD_MUTEX_SIZE__],
#[cfg(target_os = "linux")]
mod os {
+ use core::kinds::Copy;
use libc;
// minus 8 because we have an 'align' field
__align: libc::c_longlong,
size: [u8, ..__SIZEOF_PTHREAD_MUTEX_T],
}
+
+ impl Copy for pthread_mutex_t {}
+
#[repr(C)]
pub struct pthread_cond_t {
__align: libc::c_longlong,
size: [u8, ..__SIZEOF_PTHREAD_COND_T],
}
+ impl Copy for pthread_cond_t {}
+
pub const PTHREAD_MUTEX_INITIALIZER: pthread_mutex_t = pthread_mutex_t {
__align: 0,
size: [0, ..__SIZEOF_PTHREAD_MUTEX_T],
use libunwind as uw;
+#[allow(missing_copy_implementations)]
pub struct Unwinder {
unwinding: bool,
}
#[allow(non_camel_case_types, non_snake_case)]
pub mod eabi {
pub use self::EXCEPTION_DISPOSITION::*;
+ use core::prelude::*;
use libunwind as uw;
use libc::{c_void, c_int};
#[repr(C)]
+ #[allow(missing_copy_implementations)]
pub struct EXCEPTION_RECORD;
#[repr(C)]
+ #[allow(missing_copy_implementations)]
pub struct CONTEXT;
#[repr(C)]
+ #[allow(missing_copy_implementations)]
pub struct DISPATCHER_CONTEXT;
#[repr(C)]
ExceptionCollidedUnwind
}
+ impl Copy for EXCEPTION_DISPOSITION {}
+
type _Unwind_Personality_Fn =
extern "C" fn(
version: c_int,
pub struct Stdio(libc::c_int);
#[allow(non_upper_case_globals)]
+impl Copy for Stdio {}
+
+#[allow(non_upper_case_globals)]
pub const Stdout: Stdio = Stdio(libc::STDOUT_FILENO);
#[allow(non_upper_case_globals)]
pub const Stderr: Stdio = Stdio(libc::STDERR_FILENO);
UrlSafe
}
+impl Copy for CharacterSet {}
+
/// Contains configuration parameters for `to_base64`.
pub struct Config {
/// Character set to use
pub line_length: Option<uint>
}
+impl Copy for Config {}
+
/// Configuration for RFC 4648 standard base64 encoding
pub static STANDARD: Config =
Config {char_set: Standard, pad: true, line_length: None};
InvalidBase64Length,
}
+impl Copy for FromBase64Error {}
+
impl fmt::Show for FromBase64Error {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match *self {
#[test]
fn test_to_base64_basic() {
- assert_eq!("".as_bytes().to_base64(STANDARD), "".to_string());
- assert_eq!("f".as_bytes().to_base64(STANDARD), "Zg==".to_string());
- assert_eq!("fo".as_bytes().to_base64(STANDARD), "Zm8=".to_string());
- assert_eq!("foo".as_bytes().to_base64(STANDARD), "Zm9v".to_string());
- assert_eq!("foob".as_bytes().to_base64(STANDARD), "Zm9vYg==".to_string());
- assert_eq!("fooba".as_bytes().to_base64(STANDARD), "Zm9vYmE=".to_string());
- assert_eq!("foobar".as_bytes().to_base64(STANDARD), "Zm9vYmFy".to_string());
+ assert_eq!("".as_bytes().to_base64(STANDARD), "");
+ assert_eq!("f".as_bytes().to_base64(STANDARD), "Zg==");
+ assert_eq!("fo".as_bytes().to_base64(STANDARD), "Zm8=");
+ assert_eq!("foo".as_bytes().to_base64(STANDARD), "Zm9v");
+ assert_eq!("foob".as_bytes().to_base64(STANDARD), "Zm9vYg==");
+ assert_eq!("fooba".as_bytes().to_base64(STANDARD), "Zm9vYmE=");
+ assert_eq!("foobar".as_bytes().to_base64(STANDARD), "Zm9vYmFy");
}
#[test]
fn test_to_base64_line_break() {
assert!(![0u8, ..1000].to_base64(Config {line_length: None, ..STANDARD})
- .as_slice()
.contains("\r\n"));
assert_eq!("foobar".as_bytes().to_base64(Config {line_length: Some(4),
..STANDARD}),
- "Zm9v\r\nYmFy".to_string());
+ "Zm9v\r\nYmFy");
}
#[test]
fn test_to_base64_padding() {
- assert_eq!("f".as_bytes().to_base64(Config {pad: false, ..STANDARD}), "Zg".to_string());
- assert_eq!("fo".as_bytes().to_base64(Config {pad: false, ..STANDARD}), "Zm8".to_string());
+ assert_eq!("f".as_bytes().to_base64(Config {pad: false, ..STANDARD}), "Zg");
+ assert_eq!("fo".as_bytes().to_base64(Config {pad: false, ..STANDARD}), "Zm8");
}
#[test]
fn test_to_base64_url_safe() {
- assert_eq!([251, 255].to_base64(URL_SAFE), "-_8".to_string());
- assert_eq!([251, 255].to_base64(STANDARD), "+/8=".to_string());
+ assert_eq!([251, 255].to_base64(URL_SAFE), "-_8");
+ assert_eq!([251, 255].to_base64(STANDARD), "+/8=");
}
#[test]
fn test_from_base64_basic() {
- assert_eq!("".from_base64().unwrap().as_slice(), "".as_bytes());
- assert_eq!("Zg==".from_base64().unwrap().as_slice(), "f".as_bytes());
- assert_eq!("Zm8=".from_base64().unwrap().as_slice(), "fo".as_bytes());
- assert_eq!("Zm9v".from_base64().unwrap().as_slice(), "foo".as_bytes());
- assert_eq!("Zm9vYg==".from_base64().unwrap().as_slice(), "foob".as_bytes());
- assert_eq!("Zm9vYmE=".from_base64().unwrap().as_slice(), "fooba".as_bytes());
- assert_eq!("Zm9vYmFy".from_base64().unwrap().as_slice(), "foobar".as_bytes());
+ assert_eq!("".from_base64().unwrap(), b"");
+ assert_eq!("Zg==".from_base64().unwrap(), b"f");
+ assert_eq!("Zm8=".from_base64().unwrap(), b"fo");
+ assert_eq!("Zm9v".from_base64().unwrap(), b"foo");
+ assert_eq!("Zm9vYg==".from_base64().unwrap(), b"foob");
+ assert_eq!("Zm9vYmE=".from_base64().unwrap(), b"fooba");
+ assert_eq!("Zm9vYmFy".from_base64().unwrap(), b"foobar");
}
#[test]
fn test_from_base64_bytes() {
- assert_eq!(b"Zm9vYmFy".from_base64().unwrap().as_slice(), "foobar".as_bytes());
+ assert_eq!(b"Zm9vYmFy".from_base64().unwrap(), b"foobar");
}
#[test]
fn test_from_base64_newlines() {
- assert_eq!("Zm9v\r\nYmFy".from_base64().unwrap().as_slice(),
- "foobar".as_bytes());
- assert_eq!("Zm9vYg==\r\n".from_base64().unwrap().as_slice(),
- "foob".as_bytes());
+ assert_eq!("Zm9v\r\nYmFy".from_base64().unwrap(),
+ b"foobar");
+ assert_eq!("Zm9vYg==\r\n".from_base64().unwrap(),
+ b"foob");
}
#[test]
for _ in range(0u, 1000) {
let times = task_rng().gen_range(1u, 100);
let v = Vec::from_fn(times, |_| random::<u8>());
- assert_eq!(v.as_slice()
- .to_base64(STANDARD)
- .as_slice()
+ assert_eq!(v.to_base64(STANDARD)
.from_base64()
- .unwrap()
- .as_slice(),
- v.as_slice());
+ .unwrap(),
+ v);
}
}
ウヰノオクヤマ ケフコエテ アサキユメミシ ヱヒモセスン";
let sb = s.as_bytes().to_base64(STANDARD);
b.iter(|| {
- sb.as_slice().from_base64().unwrap();
+ sb.from_base64().unwrap();
});
b.bytes = sb.len() as u64;
}
InvalidHexLength,
}
+impl Copy for FromHexError {}
+
impl fmt::Show for FromHexError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match *self {
#[test]
pub fn test_to_hex() {
- assert_eq!("foobar".as_bytes().to_hex(), "666f6f626172".to_string());
+ assert_eq!("foobar".as_bytes().to_hex(), "666f6f626172");
}
#[test]
pub fn test_from_hex_okay() {
- assert_eq!("666f6f626172".from_hex().unwrap().as_slice(),
- "foobar".as_bytes());
- assert_eq!("666F6F626172".from_hex().unwrap().as_slice(),
- "foobar".as_bytes());
+ assert_eq!("666f6f626172".from_hex().unwrap(),
+ b"foobar");
+ assert_eq!("666F6F626172".from_hex().unwrap(),
+ b"foobar");
}
#[test]
#[test]
pub fn test_from_hex_ignores_whitespace() {
- assert_eq!("666f 6f6\r\n26172 ".from_hex().unwrap().as_slice(),
- "foobar".as_bytes());
+ assert_eq!("666f 6f6\r\n26172 ".from_hex().unwrap(),
+ b"foobar");
}
#[test]
pub fn test_from_hex_all_bytes() {
for i in range(0u, 256) {
let ii: &[u8] = &[i as u8];
- assert_eq!(format!("{:02x}", i as uint).as_slice()
- .from_hex()
- .unwrap()
- .as_slice(),
+ assert_eq!(format!("{:02x}", i as uint).from_hex()
+ .unwrap(),
ii);
- assert_eq!(format!("{:02X}", i as uint).as_slice()
- .from_hex()
- .unwrap()
- .as_slice(),
+ assert_eq!(format!("{:02X}", i as uint).from_hex()
+ .unwrap(),
ii);
}
}
ウヰノオクヤマ ケフコエテ アサキユメミシ ヱヒモセスン";
let sb = s.as_bytes().to_hex();
b.iter(|| {
- sb.as_slice().from_hex().unwrap();
+ sb.from_hex().unwrap();
});
b.bytes = sb.len() as u64;
}
//! fn main() {
//! let object = TestStruct {
//! data_int: 1,
-//! data_str: "toto".to_string(),
+//! data_str: "homura".to_string(),
//! data_vector: vec![2,3,4,5],
//! };
//!
//! // Serialize using `ToJson`
//! let input_data = TestStruct {
//! data_int: 1,
-//! data_str: "toto".to_string(),
+//! data_str: "madoka".to_string(),
//! data_vector: vec![2,3,4,5],
//! };
//! let json_obj: Json = input_data.to_json();
NotUtf8,
}
+impl Copy for ErrorCode {}
+
#[deriving(Clone, PartialEq, Show)]
pub enum ParserError {
/// msg, line, col
IoError(io::IoErrorKind, &'static str),
}
+impl Copy for ParserError {}
+
// Builder and Parser have the same errors.
pub type BuilderError = ParserError;
fn fmt_number_or_null(v: f64) -> string::String {
match v.classify() {
FPNaN | FPInfinite => string::String::from_str("null"),
- _ => f64::to_str_digits(v, 6u)
+ _ if v.fract() != 0f64 => f64::to_str_digits(v, 6u),
+ _ => f64::to_str_digits(v, 6u) + ".0",
}
}
impl<'a> ::Encoder<io::IoError> for Encoder<'a> {
fn emit_nil(&mut self) -> EncodeResult { write!(self.writer, "null") }
- fn emit_uint(&mut self, v: uint) -> EncodeResult { self.emit_f64(v as f64) }
- fn emit_u64(&mut self, v: u64) -> EncodeResult { self.emit_f64(v as f64) }
- fn emit_u32(&mut self, v: u32) -> EncodeResult { self.emit_f64(v as f64) }
- fn emit_u16(&mut self, v: u16) -> EncodeResult { self.emit_f64(v as f64) }
- fn emit_u8(&mut self, v: u8) -> EncodeResult { self.emit_f64(v as f64) }
+ fn emit_uint(&mut self, v: uint) -> EncodeResult { write!(self.writer, "{}", v) }
+ fn emit_u64(&mut self, v: u64) -> EncodeResult { write!(self.writer, "{}", v) }
+ fn emit_u32(&mut self, v: u32) -> EncodeResult { write!(self.writer, "{}", v) }
+ fn emit_u16(&mut self, v: u16) -> EncodeResult { write!(self.writer, "{}", v) }
+ fn emit_u8(&mut self, v: u8) -> EncodeResult { write!(self.writer, "{}", v) }
- fn emit_int(&mut self, v: int) -> EncodeResult { self.emit_f64(v as f64) }
- fn emit_i64(&mut self, v: i64) -> EncodeResult { self.emit_f64(v as f64) }
- fn emit_i32(&mut self, v: i32) -> EncodeResult { self.emit_f64(v as f64) }
- fn emit_i16(&mut self, v: i16) -> EncodeResult { self.emit_f64(v as f64) }
- fn emit_i8(&mut self, v: i8) -> EncodeResult { self.emit_f64(v as f64) }
+ fn emit_int(&mut self, v: int) -> EncodeResult { write!(self.writer, "{}", v) }
+ fn emit_i64(&mut self, v: i64) -> EncodeResult { write!(self.writer, "{}", v) }
+ fn emit_i32(&mut self, v: i32) -> EncodeResult { write!(self.writer, "{}", v) }
+ fn emit_i16(&mut self, v: i16) -> EncodeResult { write!(self.writer, "{}", v) }
+ fn emit_i8(&mut self, v: i8) -> EncodeResult { write!(self.writer, "{}", v) }
fn emit_bool(&mut self, v: bool) -> EncodeResult {
if v {
fn emit_f64(&mut self, v: f64) -> EncodeResult {
write!(self.writer, "{}", fmt_number_or_null(v))
}
- fn emit_f32(&mut self, v: f32) -> EncodeResult { self.emit_f64(v as f64) }
+ fn emit_f32(&mut self, v: f32) -> EncodeResult {
+ self.emit_f64(v as f64)
+ }
fn emit_char(&mut self, v: char) -> EncodeResult {
escape_char(self.writer, v)
escape_str(self.writer, v)
}
- fn emit_enum(&mut self, _name: &str, f: |&mut Encoder<'a>| -> EncodeResult) -> EncodeResult {
+ fn emit_enum(&mut self,
+ _name: &str,
+ f: |&mut Encoder<'a>| -> EncodeResult) -> EncodeResult {
f(self)
}
impl<'a> ::Encoder<io::IoError> for PrettyEncoder<'a> {
fn emit_nil(&mut self) -> EncodeResult { write!(self.writer, "null") }
- fn emit_uint(&mut self, v: uint) -> EncodeResult { self.emit_f64(v as f64) }
- fn emit_u64(&mut self, v: u64) -> EncodeResult { self.emit_f64(v as f64) }
- fn emit_u32(&mut self, v: u32) -> EncodeResult { self.emit_f64(v as f64) }
- fn emit_u16(&mut self, v: u16) -> EncodeResult { self.emit_f64(v as f64) }
- fn emit_u8(&mut self, v: u8) -> EncodeResult { self.emit_f64(v as f64) }
+ fn emit_uint(&mut self, v: uint) -> EncodeResult { write!(self.writer, "{}", v) }
+ fn emit_u64(&mut self, v: u64) -> EncodeResult { write!(self.writer, "{}", v) }
+ fn emit_u32(&mut self, v: u32) -> EncodeResult { write!(self.writer, "{}", v) }
+ fn emit_u16(&mut self, v: u16) -> EncodeResult { write!(self.writer, "{}", v) }
+ fn emit_u8(&mut self, v: u8) -> EncodeResult { write!(self.writer, "{}", v) }
- fn emit_int(&mut self, v: int) -> EncodeResult { self.emit_f64(v as f64) }
- fn emit_i64(&mut self, v: i64) -> EncodeResult { self.emit_f64(v as f64) }
- fn emit_i32(&mut self, v: i32) -> EncodeResult { self.emit_f64(v as f64) }
- fn emit_i16(&mut self, v: i16) -> EncodeResult { self.emit_f64(v as f64) }
- fn emit_i8(&mut self, v: i8) -> EncodeResult { self.emit_f64(v as f64) }
+ fn emit_int(&mut self, v: int) -> EncodeResult { write!(self.writer, "{}", v) }
+ fn emit_i64(&mut self, v: i64) -> EncodeResult { write!(self.writer, "{}", v) }
+ fn emit_i32(&mut self, v: i32) -> EncodeResult { write!(self.writer, "{}", v) }
+ fn emit_i16(&mut self, v: i16) -> EncodeResult { write!(self.writer, "{}", v) }
+ fn emit_i8(&mut self, v: i8) -> EncodeResult { write!(self.writer, "{}", v) }
fn emit_bool(&mut self, v: bool) -> EncodeResult {
if v {
fn emit_enum_variant(&mut self,
name: &str,
- _: uint,
+ _id: uint,
cnt: uint,
f: |&mut PrettyEncoder<'a>| -> EncodeResult) -> EncodeResult {
if cnt == 0 {
($e:expr, Null) => ({
match $e {
Json::Null => Ok(()),
- other => Err(ExpectedError("Null".to_string(),
+ other => Err(ExpectedError("Null".into_string(),
format!("{}", other)))
}
});
($name:ident, $ty:ty) => {
fn $name(&mut self) -> DecodeResult<$ty> {
match self.pop() {
- Json::I64(f) => {
- match num::cast(f) {
- Some(f) => Ok(f),
- None => Err(ExpectedError("Number".to_string(), format!("{}", f))),
- }
- }
- Json::U64(f) => {
- match num::cast(f) {
- Some(f) => Ok(f),
- None => Err(ExpectedError("Number".to_string(), format!("{}", f))),
- }
- }
- Json::F64(f) => {
- match num::cast(f) {
- Some(f) => Ok(f),
- None => Err(ExpectedError("Number".to_string(), format!("{}", f))),
- }
- }
- Json::String(s) => {
- // re: #12967.. a type w/ numeric keys (ie HashMap<uint, V> etc)
- // is going to have a string here, as per JSON spec.
- match std::str::from_str(s.as_slice()) {
- Some(f) => Ok(f),
- None => Err(ExpectedError("Number".to_string(), s)),
- }
+ Json::I64(f) => match num::cast(f) {
+ Some(f) => Ok(f),
+ None => Err(ExpectedError("Number".into_string(), format!("{}", f))),
+ },
+ Json::U64(f) => match num::cast(f) {
+ Some(f) => Ok(f),
+ None => Err(ExpectedError("Number".into_string(), format!("{}", f))),
},
- value => Err(ExpectedError("Number".to_string(), format!("{}", value)))
+ Json::F64(f) => Err(ExpectedError("Integer".into_string(), format!("{}", f))),
+                // See #12967: a type with numeric keys (e.g. HashMap<uint, V>)
+                // will have a string here, since JSON object keys are strings.
+ Json::String(s) => match std::str::from_str(s.as_slice()) {
+ Some(f) => Ok(f),
+ None => Err(ExpectedError("Number".into_string(), s)),
+ },
+ value => Err(ExpectedError("Number".into_string(), format!("{}", value))),
}
}
}
// is going to have a string here, as per JSON spec.
match std::str::from_str(s.as_slice()) {
Some(f) => Ok(f),
- None => Err(ExpectedError("Number".to_string(), s)),
+ None => Err(ExpectedError("Number".into_string(), s)),
}
},
Json::Null => Ok(f64::NAN),
- value => Err(ExpectedError("Number".to_string(), format!("{}", value)))
+ value => Err(ExpectedError("Number".into_string(), format!("{}", value)))
}
}
fn read_char(&mut self) -> DecodeResult<char> {
let s = try!(self.read_str());
{
- let mut it = s.as_slice().chars();
+ let mut it = s.chars();
match (it.next(), it.next()) {
// exactly one character
(Some(c), None) => return Ok(c),
_ => ()
}
}
- Err(ExpectedError("single character string".to_string(), format!("{}", s)))
+ Err(ExpectedError("single character string".into_string(), format!("{}", s)))
}
fn read_str(&mut self) -> DecodeResult<string::String> {
let name = match self.pop() {
Json::String(s) => s,
Json::Object(mut o) => {
- let n = match o.remove(&"variant".to_string()) {
+ let n = match o.remove(&"variant".into_string()) {
Some(Json::String(s)) => s,
Some(val) => {
- return Err(ExpectedError("String".to_string(), format!("{}", val)))
+ return Err(ExpectedError("String".into_string(), format!("{}", val)))
}
None => {
- return Err(MissingFieldError("variant".to_string()))
+ return Err(MissingFieldError("variant".into_string()))
}
};
- match o.remove(&"fields".to_string()) {
+ match o.remove(&"fields".into_string()) {
Some(Json::Array(l)) => {
for field in l.into_iter().rev() {
self.stack.push(field);
}
},
Some(val) => {
- return Err(ExpectedError("Array".to_string(), format!("{}", val)))
+ return Err(ExpectedError("Array".into_string(), format!("{}", val)))
}
None => {
- return Err(MissingFieldError("fields".to_string()))
+ return Err(MissingFieldError("fields".into_string()))
}
}
n
}
json => {
- return Err(ExpectedError("String or Object".to_string(), format!("{}", json)))
+ return Err(ExpectedError("String or Object".into_string(), format!("{}", json)))
}
};
let idx = match names.iter()
#[test]
fn test_decode_option_malformed() {
check_err::<OptionData>("{ \"opt\": [] }",
- ExpectedError("Number".to_string(), "[]".to_string()));
+ ExpectedError("Number".into_string(), "[]".into_string()));
check_err::<OptionData>("{ \"opt\": false }",
- ExpectedError("Number".to_string(), "false".to_string()));
+ ExpectedError("Number".into_string(), "false".into_string()));
}
#[deriving(PartialEq, Encodable, Decodable, Show)]
#[test]
fn test_write_null() {
- assert_eq!(Null.to_string().into_string(), "null".to_string());
- assert_eq!(Null.to_pretty_str().into_string(), "null".to_string());
+ assert_eq!(Null.to_string(), "null");
+ assert_eq!(Null.to_pretty_str(), "null");
}
#[test]
fn test_write_i64() {
- assert_eq!(U64(0).to_string().into_string(), "0".to_string());
- assert_eq!(U64(0).to_pretty_str().into_string(), "0".to_string());
+ assert_eq!(U64(0).to_string(), "0");
+ assert_eq!(U64(0).to_pretty_str(), "0");
+
- assert_eq!(U64(1234).to_string().into_string(), "1234".to_string());
- assert_eq!(U64(1234).to_pretty_str().into_string(), "1234".to_string());
+ assert_eq!(U64(1234).to_string(), "1234");
+ assert_eq!(U64(1234).to_pretty_str(), "1234");
- assert_eq!(I64(-5678).to_string().into_string(), "-5678".to_string());
- assert_eq!(I64(-5678).to_pretty_str().into_string(), "-5678".to_string());
+ assert_eq!(I64(-5678).to_string(), "-5678");
+ assert_eq!(I64(-5678).to_pretty_str(), "-5678");
+ assert_eq!(U64(7650007200025252000).to_string(), "7650007200025252000");
+ assert_eq!(U64(7650007200025252000).to_pretty_str(), "7650007200025252000");
}
#[test]
fn test_write_f64() {
- assert_eq!(F64(3.0).to_string().into_string(), "3".to_string());
- assert_eq!(F64(3.0).to_pretty_str().into_string(), "3".to_string());
+ assert_eq!(F64(3.0).to_string(), "3.0");
+ assert_eq!(F64(3.0).to_pretty_str(), "3.0");
- assert_eq!(F64(3.1).to_string().into_string(), "3.1".to_string());
- assert_eq!(F64(3.1).to_pretty_str().into_string(), "3.1".to_string());
+ assert_eq!(F64(3.1).to_string(), "3.1");
+ assert_eq!(F64(3.1).to_pretty_str(), "3.1");
- assert_eq!(F64(-1.5).to_string().into_string(), "-1.5".to_string());
- assert_eq!(F64(-1.5).to_pretty_str().into_string(), "-1.5".to_string());
+ assert_eq!(F64(-1.5).to_string(), "-1.5");
+ assert_eq!(F64(-1.5).to_pretty_str(), "-1.5");
- assert_eq!(F64(0.5).to_string().into_string(), "0.5".to_string());
- assert_eq!(F64(0.5).to_pretty_str().into_string(), "0.5".to_string());
+ assert_eq!(F64(0.5).to_string(), "0.5");
+ assert_eq!(F64(0.5).to_pretty_str(), "0.5");
- assert_eq!(F64(f64::NAN).to_string().into_string(), "null".to_string());
- assert_eq!(F64(f64::NAN).to_pretty_str().into_string(), "null".to_string());
+ assert_eq!(F64(f64::NAN).to_string(), "null");
+ assert_eq!(F64(f64::NAN).to_pretty_str(), "null");
- assert_eq!(F64(f64::INFINITY).to_string().into_string(), "null".to_string());
- assert_eq!(F64(f64::INFINITY).to_pretty_str().into_string(), "null".to_string());
+ assert_eq!(F64(f64::INFINITY).to_string(), "null");
+ assert_eq!(F64(f64::INFINITY).to_pretty_str(), "null");
- assert_eq!(F64(f64::NEG_INFINITY).to_string().into_string(), "null".to_string());
- assert_eq!(F64(f64::NEG_INFINITY).to_pretty_str().into_string(), "null".to_string());
+ assert_eq!(F64(f64::NEG_INFINITY).to_string(), "null");
+ assert_eq!(F64(f64::NEG_INFINITY).to_pretty_str(), "null");
}
#[test]
fn test_write_str() {
- assert_eq!(String("".to_string()).to_string().into_string(), "\"\"".to_string());
- assert_eq!(String("".to_string()).to_pretty_str().into_string(), "\"\"".to_string());
+ assert_eq!(String("".into_string()).to_string(), "\"\"");
+ assert_eq!(String("".into_string()).to_pretty_str(), "\"\"");
- assert_eq!(String("foo".to_string()).to_string().into_string(), "\"foo\"".to_string());
- assert_eq!(String("foo".to_string()).to_pretty_str().into_string(), "\"foo\"".to_string());
+ assert_eq!(String("homura".into_string()).to_string(), "\"homura\"");
+ assert_eq!(String("madoka".into_string()).to_pretty_str(), "\"madoka\"");
}
#[test]
fn test_write_bool() {
- assert_eq!(Boolean(true).to_string().into_string(), "true".to_string());
- assert_eq!(Boolean(true).to_pretty_str().into_string(), "true".to_string());
+ assert_eq!(Boolean(true).to_string(), "true");
+ assert_eq!(Boolean(true).to_pretty_str(), "true");
- assert_eq!(Boolean(false).to_string().into_string(), "false".to_string());
- assert_eq!(Boolean(false).to_pretty_str().into_string(), "false".to_string());
+ assert_eq!(Boolean(false).to_string(), "false");
+ assert_eq!(Boolean(false).to_pretty_str(), "false");
}
#[test]
fn test_write_array() {
- assert_eq!(Array(vec![]).to_string().into_string(), "[]".to_string());
- assert_eq!(Array(vec![]).to_pretty_str().into_string(), "[]".to_string());
+ assert_eq!(Array(vec![]).to_string(), "[]");
+ assert_eq!(Array(vec![]).to_pretty_str(), "[]");
- assert_eq!(Array(vec![Boolean(true)]).to_string().into_string(), "[true]".to_string());
+ assert_eq!(Array(vec![Boolean(true)]).to_string(), "[true]");
assert_eq!(
- Array(vec![Boolean(true)]).to_pretty_str().into_string(),
+ Array(vec![Boolean(true)]).to_pretty_str(),
"\
[\n \
true\n\
- ]".to_string()
+ ]"
);
let long_test_array = Array(vec![
Boolean(false),
Null,
- Array(vec![String("foo\nbar".to_string()), F64(3.5)])]);
+ Array(vec![String("foo\nbar".into_string()), F64(3.5)])]);
- assert_eq!(long_test_array.to_string().into_string(),
- "[false,null,[\"foo\\nbar\",3.5]]".to_string());
+ assert_eq!(long_test_array.to_string(),
+ "[false,null,[\"foo\\nbar\",3.5]]");
assert_eq!(
- long_test_array.to_pretty_str().into_string(),
+ long_test_array.to_pretty_str(),
"\
[\n \
false,\n \
\"foo\\nbar\",\n \
3.5\n \
]\n\
- ]".to_string()
+ ]"
);
}
#[test]
fn test_write_object() {
- assert_eq!(mk_object(&[]).to_string().into_string(), "{}".to_string());
- assert_eq!(mk_object(&[]).to_pretty_str().into_string(), "{}".to_string());
+ assert_eq!(mk_object(&[]).to_string(), "{}");
+ assert_eq!(mk_object(&[]).to_pretty_str(), "{}");
assert_eq!(
mk_object(&[
- ("a".to_string(), Boolean(true))
- ]).to_string().into_string(),
- "{\"a\":true}".to_string()
+ ("a".into_string(), Boolean(true))
+ ]).to_string(),
+ "{\"a\":true}"
);
assert_eq!(
- mk_object(&[("a".to_string(), Boolean(true))]).to_pretty_str(),
+ mk_object(&[("a".into_string(), Boolean(true))]).to_pretty_str(),
"\
{\n \
\"a\": true\n\
- }".to_string()
+ }"
);
let complex_obj = mk_object(&[
- ("b".to_string(), Array(vec![
- mk_object(&[("c".to_string(), String("\x0c\r".to_string()))]),
- mk_object(&[("d".to_string(), String("".to_string()))])
+ ("b".into_string(), Array(vec![
+ mk_object(&[("c".into_string(), String("\x0c\r".into_string()))]),
+ mk_object(&[("d".into_string(), String("".into_string()))])
]))
]);
assert_eq!(
- complex_obj.to_string().into_string(),
+ complex_obj.to_string(),
"{\
\"b\":[\
{\"c\":\"\\f\\r\"},\
{\"d\":\"\"}\
]\
- }".to_string()
+ }"
);
assert_eq!(
- complex_obj.to_pretty_str().into_string(),
+ complex_obj.to_pretty_str(),
"\
{\n \
\"b\": [\n \
\"d\": \"\"\n \
}\n \
]\n\
- }".to_string()
+ }"
);
let a = mk_object(&[
- ("a".to_string(), Boolean(true)),
- ("b".to_string(), Array(vec![
- mk_object(&[("c".to_string(), String("\x0c\r".to_string()))]),
- mk_object(&[("d".to_string(), String("".to_string()))])
+ ("a".into_string(), Boolean(true)),
+ ("b".into_string(), Array(vec![
+ mk_object(&[("c".into_string(), String("\x0c\r".into_string()))]),
+ mk_object(&[("d".into_string(), String("".into_string()))])
]))
]);
let mut encoder = Encoder::new(writer);
animal.encode(&mut encoder).unwrap();
}),
- "\"Dog\"".to_string()
+ "\"Dog\""
);
assert_eq!(
with_str_writer(|writer| {
let mut encoder = PrettyEncoder::new(writer);
animal.encode(&mut encoder).unwrap();
}),
- "\"Dog\"".to_string()
+ "\"Dog\""
);
- let animal = Frog("Henry".to_string(), 349);
+ let animal = Frog("Henry".into_string(), 349);
assert_eq!(
with_str_writer(|writer| {
let mut encoder = Encoder::new(writer);
animal.encode(&mut encoder).unwrap();
}),
- "{\"variant\":\"Frog\",\"fields\":[\"Henry\",349]}".to_string()
+ "{\"variant\":\"Frog\",\"fields\":[\"Henry\",349]}"
);
assert_eq!(
with_str_writer(|writer| {
\"Henry\",\n \
349\n \
]\n\
- }".to_string()
+ }"
);
}
#[test]
fn test_write_some() {
- let value = Some("jodhpurs".to_string());
+ let value = Some("jodhpurs".into_string());
let s = with_str_writer(|writer| {
let mut encoder = Encoder::new(writer);
value.encode(&mut encoder).unwrap();
});
- assert_eq!(s, "\"jodhpurs\"".to_string());
+ assert_eq!(s, "\"jodhpurs\"");
- let value = Some("jodhpurs".to_string());
+ let value = Some("jodhpurs".into_string());
let s = with_str_writer(|writer| {
let mut encoder = PrettyEncoder::new(writer);
value.encode(&mut encoder).unwrap();
});
- assert_eq!(s, "\"jodhpurs\"".to_string());
+ assert_eq!(s, "\"jodhpurs\"");
}
#[test]
let mut encoder = Encoder::new(writer);
value.encode(&mut encoder).unwrap();
});
- assert_eq!(s, "null".to_string());
+ assert_eq!(s, "null");
let s = with_str_writer(|writer| {
let mut encoder = Encoder::new(writer);
value.encode(&mut encoder).unwrap();
});
- assert_eq!(s, "null".to_string());
+ assert_eq!(s, "null");
}
#[test]
let v: i64 = super::decode("9223372036854775807").unwrap();
assert_eq!(v, i64::MAX);
+
+ let res: DecodeResult<i64> = super::decode("765.25252");
+ assert_eq!(res, Err(ExpectedError("Integer".into_string(), "765.25252".into_string())));
}
#[test]
assert_eq!(from_str("\""), Err(SyntaxError(EOFWhileParsingString, 1, 2)));
assert_eq!(from_str("\"lol"), Err(SyntaxError(EOFWhileParsingString, 1, 5)));
- assert_eq!(from_str("\"\""), Ok(String("".to_string())));
- assert_eq!(from_str("\"foo\""), Ok(String("foo".to_string())));
- assert_eq!(from_str("\"\\\"\""), Ok(String("\"".to_string())));
- assert_eq!(from_str("\"\\b\""), Ok(String("\x08".to_string())));
- assert_eq!(from_str("\"\\n\""), Ok(String("\n".to_string())));
- assert_eq!(from_str("\"\\r\""), Ok(String("\r".to_string())));
- assert_eq!(from_str("\"\\t\""), Ok(String("\t".to_string())));
- assert_eq!(from_str(" \"foo\" "), Ok(String("foo".to_string())));
- assert_eq!(from_str("\"\\u12ab\""), Ok(String("\u12ab".to_string())));
- assert_eq!(from_str("\"\\uAB12\""), Ok(String("\uAB12".to_string())));
+ assert_eq!(from_str("\"\""), Ok(String("".into_string())));
+ assert_eq!(from_str("\"foo\""), Ok(String("foo".into_string())));
+ assert_eq!(from_str("\"\\\"\""), Ok(String("\"".into_string())));
+ assert_eq!(from_str("\"\\b\""), Ok(String("\x08".into_string())));
+ assert_eq!(from_str("\"\\n\""), Ok(String("\n".into_string())));
+ assert_eq!(from_str("\"\\r\""), Ok(String("\r".into_string())));
+ assert_eq!(from_str("\"\\t\""), Ok(String("\t".into_string())));
+ assert_eq!(from_str(" \"foo\" "), Ok(String("foo".into_string())));
+ assert_eq!(from_str("\"\\u12ab\""), Ok(String("\u12ab".into_string())));
+ assert_eq!(from_str("\"\\uAB12\""), Ok(String("\uAB12".into_string())));
}
#[test]
for &(i, o) in s.iter() {
let v: string::String = super::decode(i).unwrap();
- assert_eq!(v.as_slice(), o);
+ assert_eq!(v, o);
}
}
assert_eq!(t, (1u, 2, 3));
let t: (uint, string::String) = super::decode("[1, \"two\"]").unwrap();
- assert_eq!(t, (1u, "two".to_string()));
+ assert_eq!(t, (1u, "two".into_string()));
}
#[test]
assert_eq!(from_str("{}").unwrap(), mk_object(&[]));
assert_eq!(from_str("{\"a\": 3}").unwrap(),
- mk_object(&[("a".to_string(), U64(3))]));
+ mk_object(&[("a".into_string(), U64(3))]));
assert_eq!(from_str(
"{ \"a\": null, \"b\" : true }").unwrap(),
mk_object(&[
- ("a".to_string(), Null),
- ("b".to_string(), Boolean(true))]));
+ ("a".into_string(), Null),
+ ("b".into_string(), Boolean(true))]));
assert_eq!(from_str("\n{ \"a\": null, \"b\" : true }\n").unwrap(),
mk_object(&[
- ("a".to_string(), Null),
- ("b".to_string(), Boolean(true))]));
+ ("a".into_string(), Null),
+ ("b".into_string(), Boolean(true))]));
assert_eq!(from_str(
"{\"a\" : 1.0 ,\"b\": [ true ]}").unwrap(),
mk_object(&[
- ("a".to_string(), F64(1.0)),
- ("b".to_string(), Array(vec![Boolean(true)]))
+ ("a".into_string(), F64(1.0)),
+ ("b".into_string(), Array(vec![Boolean(true)]))
]));
assert_eq!(from_str(
"{\
]\
}").unwrap(),
mk_object(&[
- ("a".to_string(), F64(1.0)),
- ("b".to_string(), Array(vec![
+ ("a".into_string(), F64(1.0)),
+ ("b".into_string(), Array(vec![
Boolean(true),
- String("foo\nbar".to_string()),
+ String("foo\nbar".into_string()),
mk_object(&[
- ("c".to_string(), mk_object(&[("d".to_string(), Null)]))
+ ("c".into_string(), mk_object(&[("d".into_string(), Null)]))
])
]))
]));
v,
Outer {
inner: vec![
- Inner { a: (), b: 2, c: vec!["abc".to_string(), "xyz".to_string()] }
+ Inner { a: (), b: 2, c: vec!["abc".into_string(), "xyz".into_string()] }
]
}
);
assert_eq!(value, None);
let value: Option<string::String> = super::decode("\"jodhpurs\"").unwrap();
- assert_eq!(value, Some("jodhpurs".to_string()));
+ assert_eq!(value, Some("jodhpurs".into_string()));
}
#[test]
let s = "{\"variant\":\"Frog\",\"fields\":[\"Henry\",349]}";
let value: Animal = super::decode(s).unwrap();
- assert_eq!(value, Frog("Henry".to_string(), 349));
+ assert_eq!(value, Frog("Henry".into_string(), 349));
}
#[test]
\"fields\":[\"Henry\", 349]}}";
let mut map: TreeMap<string::String, Animal> = super::decode(s).unwrap();
- assert_eq!(map.remove(&"a".to_string()), Some(Dog));
- assert_eq!(map.remove(&"b".to_string()), Some(Frog("Henry".to_string(), 349)));
+ assert_eq!(map.remove(&"a".into_string()), Some(Dog));
+ assert_eq!(map.remove(&"b".into_string()), Some(Frog("Henry".into_string(), 349)));
}
#[test]
}
#[test]
fn test_decode_errors_struct() {
- check_err::<DecodeStruct>("[]", ExpectedError("Object".to_string(), "[]".to_string()));
+ check_err::<DecodeStruct>("[]", ExpectedError("Object".into_string(), "[]".into_string()));
check_err::<DecodeStruct>("{\"x\": true, \"y\": true, \"z\": \"\", \"w\": []}",
- ExpectedError("Number".to_string(), "true".to_string()));
+ ExpectedError("Number".into_string(), "true".into_string()));
check_err::<DecodeStruct>("{\"x\": 1, \"y\": [], \"z\": \"\", \"w\": []}",
- ExpectedError("Boolean".to_string(), "[]".to_string()));
+ ExpectedError("Boolean".into_string(), "[]".into_string()));
check_err::<DecodeStruct>("{\"x\": 1, \"y\": true, \"z\": {}, \"w\": []}",
- ExpectedError("String".to_string(), "{}".to_string()));
+ ExpectedError("String".into_string(), "{}".into_string()));
check_err::<DecodeStruct>("{\"x\": 1, \"y\": true, \"z\": \"\", \"w\": null}",
- ExpectedError("Array".to_string(), "null".to_string()));
+ ExpectedError("Array".into_string(), "null".into_string()));
check_err::<DecodeStruct>("{\"x\": 1, \"y\": true, \"z\": \"\"}",
- MissingFieldError("w".to_string()));
+ MissingFieldError("w".into_string()));
}
#[test]
fn test_decode_errors_enum() {
check_err::<DecodeEnum>("{}",
- MissingFieldError("variant".to_string()));
+ MissingFieldError("variant".into_string()));
check_err::<DecodeEnum>("{\"variant\": 1}",
- ExpectedError("String".to_string(), "1".to_string()));
+ ExpectedError("String".into_string(), "1".into_string()));
check_err::<DecodeEnum>("{\"variant\": \"A\"}",
- MissingFieldError("fields".to_string()));
+ MissingFieldError("fields".into_string()));
check_err::<DecodeEnum>("{\"variant\": \"A\", \"fields\": null}",
- ExpectedError("Array".to_string(), "null".to_string()));
+ ExpectedError("Array".into_string(), "null".into_string()));
check_err::<DecodeEnum>("{\"variant\": \"C\", \"fields\": []}",
- UnknownVariantError("C".to_string()));
+ UnknownVariantError("C".into_string()));
}
#[test]
};
let mut decoder = Decoder::new(json_obj);
let result: Result<HashMap<uint, bool>, DecoderError> = Decodable::decode(&mut decoder);
- assert_eq!(result, Err(ExpectedError("Number".to_string(), "a".to_string())));
+ assert_eq!(result, Err(ExpectedError("Number".into_string(), "a".into_string())));
}
fn assert_stream_equal(src: &str,
r#"{ "foo":"bar", "array" : [0, 1, 2, 3, 4, 5], "idents":[null,true,false]}"#,
vec![
(ObjectStart, vec![]),
- (StringValue("bar".to_string()), vec![Key("foo")]),
+ (StringValue("bar".into_string()), vec![Key("foo")]),
(ArrayStart, vec![Key("array")]),
(U64Value(0), vec![Key("array"), Index(0)]),
(U64Value(1), vec![Key("array"), Index(1)]),
(F64Value(1.0), vec![Key("a")]),
(ArrayStart, vec![Key("b")]),
(BooleanValue(true), vec![Key("b"), Index(0)]),
- (StringValue("foo\nbar".to_string()), vec![Key("b"), Index(1)]),
+ (StringValue("foo\nbar".into_string()), vec![Key("b"), Index(1)]),
(ObjectStart, vec![Key("b"), Index(2)]),
(ObjectStart, vec![Key("b"), Index(2), Key("c")]),
(NullValue, vec![Key("b"), Index(2), Key("c"), Key("d")]),
assert!(stack.last_is_index());
assert!(stack.get(0) == Index(1));
- stack.push_key("foo".to_string());
+ stack.push_key("foo".into_string());
assert!(stack.len() == 2);
assert!(stack.is_equal_to(&[Index(1), Key("foo")]));
assert!(stack.get(0) == Index(1));
assert!(stack.get(1) == Key("foo"));
- stack.push_key("bar".to_string());
+ stack.push_key("bar".into_string());
assert!(stack.len() == 3);
assert!(stack.is_equal_to(&[Index(1), Key("foo"), Key("bar")]));
let array3 = Array(vec!(U64(1), U64(2), U64(3)));
let object = {
let mut tree_map = TreeMap::new();
- tree_map.insert("a".to_string(), U64(1));
- tree_map.insert("b".to_string(), U64(2));
+ tree_map.insert("a".into_string(), U64(1));
+ tree_map.insert("b".into_string(), U64(2));
Object(tree_map)
};
assert_eq!((vec![1u, 2]).to_json(), array2);
assert_eq!(vec!(1u, 2, 3).to_json(), array3);
let mut tree_map = TreeMap::new();
- tree_map.insert("a".to_string(), 1u);
- tree_map.insert("b".to_string(), 2);
+ tree_map.insert("a".into_string(), 1u);
+ tree_map.insert("b".into_string(), 2);
assert_eq!(tree_map.to_json(), object);
let mut hash_map = HashMap::new();
- hash_map.insert("a".to_string(), 1u);
- hash_map.insert("b".to_string(), 2);
+ hash_map.insert("a".into_string(), 1u);
+ hash_map.insert("b".into_string(), 2);
assert_eq!(hash_map.to_json(), object);
assert_eq!(Some(15i).to_json(), I64(15));
assert_eq!(Some(15u).to_json(), U64(15));
}
fn big_json() -> string::String {
- let mut src = "[\n".to_string();
+ let mut src = "[\n".into_string();
for _ in range(0i, 500) {
src.push_str(r#"{ "a": true, "b": null, "c":3.1415, "d": "Hello world", "e": \
[1,2,3]},"#);
fn bench_streaming_large(b: &mut Bencher) {
let src = big_json();
b.iter( || {
- let mut parser = Parser::new(src.as_slice().chars());
+ let mut parser = Parser::new(src.chars());
loop {
match parser.next() {
None => return,
use core::kinds::Sized;
use fmt;
use iter::IteratorExt;
+use kinds::Copy;
use mem;
-use option::{Option, Some, None};
+use option::Option;
+use option::Option::{Some, None};
use slice::{SlicePrelude, AsSlice};
use str::{Str, StrPrelude};
use string::{String, IntoString};
#[deriving(Clone, PartialEq, PartialOrd, Ord, Eq, Hash)]
pub struct Ascii { chr: u8 }
+impl Copy for Ascii {}
+
impl Ascii {
/// Converts an ascii character into a `u8`.
#[inline]
assert_eq!(test.to_ascii(), b);
assert_eq!("( ;".to_ascii(), b);
let v = vec![40u8, 32u8, 59u8];
- assert_eq!(v.as_slice().to_ascii(), b);
- assert_eq!("( ;".to_string().as_slice().to_ascii(), b);
+ assert_eq!(v.to_ascii(), b);
+ assert_eq!("( ;".to_string().to_ascii(), b);
- assert_eq!("abCDef&?#".to_ascii().to_lowercase().into_string(), "abcdef&?#".to_string());
- assert_eq!("abCDef&?#".to_ascii().to_uppercase().into_string(), "ABCDEF&?#".to_string());
+ assert_eq!("abCDef&?#".to_ascii().to_lowercase().into_string(), "abcdef&?#");
+ assert_eq!("abCDef&?#".to_ascii().to_uppercase().into_string(), "ABCDEF&?#");
- assert_eq!("".to_ascii().to_lowercase().into_string(), "".to_string());
- assert_eq!("YMCA".to_ascii().to_lowercase().into_string(), "ymca".to_string());
+ assert_eq!("".to_ascii().to_lowercase().into_string(), "");
+ assert_eq!("YMCA".to_ascii().to_lowercase().into_string(), "ymca");
let mixed = "abcDEFxyz:.;".to_ascii();
- assert_eq!(mixed.to_uppercase().into_string(), "ABCDEFXYZ:.;".to_string());
+ assert_eq!(mixed.to_uppercase().into_string(), "ABCDEFXYZ:.;");
assert!("aBcDeF&?#".to_ascii().eq_ignore_case("AbCdEf&?#".to_ascii()));
#[test]
fn test_ascii_vec_ng() {
- assert_eq!("abCDef&?#".to_ascii().to_lowercase().into_string(), "abcdef&?#".to_string());
- assert_eq!("abCDef&?#".to_ascii().to_uppercase().into_string(), "ABCDEF&?#".to_string());
- assert_eq!("".to_ascii().to_lowercase().into_string(), "".to_string());
- assert_eq!("YMCA".to_ascii().to_lowercase().into_string(), "ymca".to_string());
+ assert_eq!("abCDef&?#".to_ascii().to_lowercase().into_string(), "abcdef&?#");
+ assert_eq!("abCDef&?#".to_ascii().to_uppercase().into_string(), "ABCDEF&?#");
+ assert_eq!("".to_ascii().to_lowercase().into_string(), "");
+ assert_eq!("YMCA".to_ascii().to_lowercase().into_string(), "ymca");
let mixed = "abcDEFxyz:.;".to_ascii();
- assert_eq!(mixed.to_uppercase().into_string(), "ABCDEFXYZ:.;".to_string());
+ assert_eq!(mixed.to_uppercase().into_string(), "ABCDEFXYZ:.;");
}
#[test]
#[test]
fn test_ascii_into_string() {
- assert_eq!(vec2ascii![40, 32, 59].into_string(), "( ;".to_string());
- assert_eq!(vec2ascii!(40, 32, 59).into_string(), "( ;".to_string());
+ assert_eq!(vec2ascii![40, 32, 59].into_string(), "( ;");
+ assert_eq!(vec2ascii!(40, 32, 59).into_string(), "( ;");
}
#[test]
#[test]
fn test_to_ascii_upper() {
- assert_eq!("url()URL()uRl()ürl".to_ascii_upper(), "URL()URL()URL()üRL".to_string());
- assert_eq!("hıKß".to_ascii_upper(), "HıKß".to_string());
+ assert_eq!("url()URL()uRl()ürl".to_ascii_upper(), "URL()URL()URL()üRL");
+ assert_eq!("hıKß".to_ascii_upper(), "HıKß");
let mut i = 0;
while i <= 500 {
let upper = if 'a' as u32 <= i && i <= 'z' as u32 { i + 'A' as u32 - 'a' as u32 }
else { i };
- assert_eq!((from_u32(i).unwrap()).to_string().as_slice().to_ascii_upper(),
+ assert_eq!((from_u32(i).unwrap()).to_string().to_ascii_upper(),
(from_u32(upper).unwrap()).to_string());
i += 1;
}
#[test]
fn test_to_ascii_lower() {
- assert_eq!("url()URL()uRl()Ürl".to_ascii_lower(), "url()url()url()Ürl".to_string());
+ assert_eq!("url()URL()uRl()Ürl".to_ascii_lower(), "url()url()url()Ürl");
// Dotted capital I, Kelvin sign, Sharp S.
- assert_eq!("HİKß".to_ascii_lower(), "hİKß".to_string());
+ assert_eq!("HİKß".to_ascii_lower(), "hİKß");
let mut i = 0;
while i <= 500 {
let lower = if 'A' as u32 <= i && i <= 'Z' as u32 { i + 'a' as u32 - 'A' as u32 }
else { i };
- assert_eq!((from_u32(i).unwrap()).to_string().as_slice().to_ascii_lower(),
+ assert_eq!((from_u32(i).unwrap()).to_string().to_ascii_lower(),
(from_u32(lower).unwrap()).to_string());
i += 1;
}
fn test_into_ascii_upper() {
assert_eq!(("url()URL()uRl()ürl".to_string()).into_ascii_upper(),
"URL()URL()URL()üRL".to_string());
- assert_eq!(("hıKß".to_string()).into_ascii_upper(), "HıKß".to_string());
+ assert_eq!(("hıKß".to_string()).into_ascii_upper(), "HıKß");
let mut i = 0;
while i <= 500 {
#[test]
fn test_into_ascii_lower() {
assert_eq!(("url()URL()uRl()Ürl".to_string()).into_ascii_lower(),
- "url()url()url()Ürl".to_string());
+ "url()url()url()Ürl");
// Dotted capital I, Kelvin sign, Sharp S.
- assert_eq!(("HİKß".to_string()).into_ascii_lower(), "hİKß".to_string());
+ assert_eq!(("HİKß".to_string()).into_ascii_lower(), "hİKß");
let mut i = 0;
while i <= 500 {
let c = i;
let lower = if 'A' as u32 <= c && c <= 'Z' as u32 { c + 'a' as u32 - 'A' as u32 }
else { c };
- assert!((from_u32(i).unwrap()).to_string().as_slice().eq_ignore_ascii_case(
+ assert!((from_u32(i).unwrap()).to_string().eq_ignore_ascii_case(
(from_u32(lower).unwrap()).to_string().as_slice()));
i += 1;
}
#[test]
fn test_to_string() {
let s = Ascii{ chr: b't' }.to_string();
- assert_eq!(s, "t".to_string());
+ assert_eq!(s, "t");
}
#[test]
fn test_show() {
let c = Ascii { chr: b't' };
- assert_eq!(format!("{}", c), "t".to_string());
+ assert_eq!(format!("{}", c), "t");
}
}
/// }
/// }
///
+/// impl Copy for Flags {}
+///
/// fn main() {
/// let e1 = FLAG_A | FLAG_C;
/// let e2 = FLAG_B | FLAG_C;
/// }
/// }
///
+/// impl Copy for Flags {}
+///
/// impl Flags {
/// pub fn clear(&mut self) {
/// self.bits = 0; // The `bits` field can be accessed from within the
#[inline]
pub fn from_bits(bits: $T) -> ::std::option::Option<$BitFlags> {
if (bits & !$BitFlags::all().bits()) != 0 {
- ::std::option::None
+ ::std::option::Option::None
} else {
- ::std::option::Some($BitFlags { bits: bits })
+ ::std::option::Option::Some($BitFlags { bits: bits })
}
}
#[cfg(test)]
#[allow(non_upper_case_globals)]
mod tests {
+ use kinds::Copy;
use hash;
- use option::{Some, None};
+ use option::Option::{Some, None};
use ops::{BitOr, BitAnd, BitXor, Sub, Not};
bitflags! {
}
}
+ impl Copy for Flags {}
+
bitflags! {
flags AnotherSetOfFlags: i8 {
const AnotherFlag = -1_i8,
}
}
+ impl Copy for AnotherSetOfFlags {}
+
#[test]
fn test_bits(){
assert_eq!(Flags::empty().bits(), 0x00000000);
use kinds::Send;
use mem;
use ops::Drop;
-use option::{Option, Some, None};
+use option::Option;
+use option::Option::{Some, None};
use ptr::RawPtr;
use ptr;
use raw;
use mem::{mod, replace};
use num::{Int, UnsignedInt};
use ops::{Deref, Index, IndexMut};
-use option::{Some, None, Option};
-use result::{Result, Ok, Err};
+use option::Option;
+use option::Option::{Some, None};
+use result::Result;
+use result::Result::{Ok, Err};
use super::table;
use super::table::{
DROP_VECTOR.with(|v| {
for i in range(0u, 200) {
- assert_eq!(v.borrow().as_slice()[i], 0);
+ assert_eq!(v.borrow()[i], 0);
}
});
DROP_VECTOR.with(|v| {
for i in range(0u, 200) {
- assert_eq!(v.borrow().as_slice()[i], 1);
+ assert_eq!(v.borrow()[i], 1);
}
});
assert!(v.is_some());
DROP_VECTOR.with(|v| {
- assert_eq!(v.borrow().as_slice()[i], 1);
- assert_eq!(v.borrow().as_slice()[i+100], 1);
+ assert_eq!(v.borrow()[i], 1);
+ assert_eq!(v.borrow()[i+100], 1);
});
}
DROP_VECTOR.with(|v| {
for i in range(0u, 50) {
- assert_eq!(v.borrow().as_slice()[i], 0);
- assert_eq!(v.borrow().as_slice()[i+100], 0);
+ assert_eq!(v.borrow()[i], 0);
+ assert_eq!(v.borrow()[i+100], 0);
}
for i in range(50u, 100) {
- assert_eq!(v.borrow().as_slice()[i], 1);
- assert_eq!(v.borrow().as_slice()[i+100], 1);
+ assert_eq!(v.borrow()[i], 1);
+ assert_eq!(v.borrow()[i+100], 1);
}
});
}
DROP_VECTOR.with(|v| {
for i in range(0u, 200) {
- assert_eq!(v.borrow().as_slice()[i], 0);
+ assert_eq!(v.borrow()[i], 0);
}
});
}
DROP_VECTOR.with(|v| {
for i in range(0u, 200) {
- assert_eq!(v.borrow().as_slice()[i], 0);
+ assert_eq!(v.borrow()[i], 0);
}
});
DROP_VECTOR.with(|v| {
for i in range(0u, 200) {
- assert_eq!(v.borrow().as_slice()[i], 1);
+ assert_eq!(v.borrow()[i], 1);
}
});
DROP_VECTOR.with(|v| {
for i in range(0u, 200) {
- assert_eq!(v.borrow().as_slice()[i], 1);
+ assert_eq!(v.borrow()[i], 1);
}
});
DROP_VECTOR.with(|v| {
let nk = range(0u, 100).filter(|&i| {
- v.borrow().as_slice()[i] == 1
+ v.borrow()[i] == 1
}).count();
let nv = range(0u, 100).filter(|&i| {
- v.borrow().as_slice()[i+100] == 1
+ v.borrow()[i+100] == 1
}).count();
assert_eq!(nk, 50);
DROP_VECTOR.with(|v| {
for i in range(0u, 200) {
- assert_eq!(v.borrow().as_slice()[i], 0);
+ assert_eq!(v.borrow()[i], 0);
}
});
}
let map_str = format!("{}", map);
- assert!(map_str == "{1: 2, 3: 4}".to_string() || map_str == "{3: 4, 1: 2}".to_string());
- assert_eq!(format!("{}", empty), "{}".to_string());
+ assert!(map_str == "{1: 2, 3: 4}" || map_str == "{3: 4, 1: 2}");
+ assert_eq!(format!("{}", empty), "{}");
}
#[test]
use hash::{Hash, Hasher, RandomSipHasher};
use iter::{Iterator, IteratorExt, FromIterator, FilterMap, Chain, Repeat, Zip, Extend, repeat};
use iter;
-use option::{Some, None};
-use result::{Ok, Err};
+use option::Option::{Some, None};
+use result::Result::{Ok, Err};
use super::map::{HashMap, Entries, MoveEntries, INITIAL_CAPACITY};
};
let v = hs.into_iter().collect::<Vec<char>>();
- assert!(['a', 'b'][] == v.as_slice() || ['b', 'a'][] == v.as_slice());
+ assert!(['a', 'b'] == v || ['b', 'a'] == v);
}
#[test]
let set_str = format!("{}", set);
- assert!(set_str == "{1, 2}".to_string() || set_str == "{2, 1}".to_string());
- assert_eq!(format!("{}", empty), "{}".to_string());
+ assert!(set_str == "{1, 2}" || set_str == "{2, 1}");
+ assert_eq!(format!("{}", empty), "{}");
}
}
use cmp;
use hash::{Hash, Hasher};
use iter::{Iterator, count};
-use kinds::{Sized, marker};
+use kinds::{Copy, Sized, marker};
use mem::{min_align_of, size_of};
use mem;
use num::{Int, UnsignedInt};
use ops::{Deref, DerefMut, Drop};
-use option::{Some, None, Option};
+use option::Option;
+use option::Option::{Some, None};
use ptr::{RawPtr, copy_nonoverlapping_memory, zero_memory};
use ptr;
use rt::heap::{allocate, deallocate};
val: *mut V
}
+impl<K,V> Copy for RawBucket<K,V> {}
+
pub struct Bucket<K, V, M> {
raw: RawBucket<K, V>,
idx: uint,
table: M
}
+impl<K,V,M:Copy> Copy for Bucket<K,V,M> {}
+
pub struct EmptyBucket<K, V, M> {
raw: RawBucket<K, V>,
idx: uint,
use iter::{range, Iterator, Extend};
use mem;
use ops::Drop;
-use option::{Some, None, Option};
+use option::Option;
+use option::Option::{Some, None};
use boxed::Box;
use ptr;
-use result::{Ok, Err};
+use result::Result::{Ok, Err};
// FIXME(conventions): implement iterators?
// FIXME(conventions): implement indexing?
cache.insert(1, 10);
cache.insert(2, 20);
cache.insert(3, 30);
- assert_eq!(cache.to_string(), "{3: 30, 2: 20, 1: 10}".to_string());
+ assert_eq!(cache.to_string(), "{3: 30, 2: 20, 1: 10}");
cache.insert(2, 22);
- assert_eq!(cache.to_string(), "{2: 22, 3: 30, 1: 10}".to_string());
+ assert_eq!(cache.to_string(), "{2: 22, 3: 30, 1: 10}");
cache.insert(6, 60);
- assert_eq!(cache.to_string(), "{6: 60, 2: 22, 3: 30}".to_string());
+ assert_eq!(cache.to_string(), "{6: 60, 2: 22, 3: 30}");
cache.get(&3);
- assert_eq!(cache.to_string(), "{3: 30, 6: 60, 2: 22}".to_string());
+ assert_eq!(cache.to_string(), "{3: 30, 6: 60, 2: 22}");
cache.set_capacity(2);
- assert_eq!(cache.to_string(), "{3: 30, 6: 60}".to_string());
+ assert_eq!(cache.to_string(), "{3: 30, 6: 60}");
}
#[test]
cache.clear();
assert!(cache.get(&1).is_none());
assert!(cache.get(&2).is_none());
- assert_eq!(cache.to_string(), "{}".to_string());
+ assert_eq!(cache.to_string(), "{}");
}
}
mod shared;
mod stream;
mod sync;
+mod mpsc_queue;
+mod spsc_queue;
/// The receiving-half of Rust's channel type. This half can only be owned by
/// one task
Disconnected,
}
+impl Copy for TryRecvError {}
+
/// This enumeration is the list of the possible error outcomes for the
/// `SyncSender::try_send` method.
#[deriving(PartialEq, Clone, Show)]
#[unstable]
impl<T: Send> Clone for Sender<T> {
fn clone(&self) -> Sender<T> {
- let (packet, sleeper) = match *unsafe { self.inner() } {
+ let (packet, sleeper, guard) = match *unsafe { self.inner() } {
Oneshot(ref p) => {
let a = Arc::new(UnsafeCell::new(shared::Packet::new()));
unsafe {
- (*a.get()).postinit_lock();
+ let guard = (*a.get()).postinit_lock();
match (*p.get()).upgrade(Receiver::new(Shared(a.clone()))) {
- oneshot::UpSuccess | oneshot::UpDisconnected => (a, None),
- oneshot::UpWoke(task) => (a, Some(task))
+ oneshot::UpSuccess |
+ oneshot::UpDisconnected => (a, None, guard),
+ oneshot::UpWoke(task) => (a, Some(task), guard)
}
}
}
Stream(ref p) => {
let a = Arc::new(UnsafeCell::new(shared::Packet::new()));
unsafe {
- (*a.get()).postinit_lock();
+ let guard = (*a.get()).postinit_lock();
match (*p.get()).upgrade(Receiver::new(Shared(a.clone()))) {
- stream::UpSuccess | stream::UpDisconnected => (a, None),
- stream::UpWoke(task) => (a, Some(task)),
+ stream::UpSuccess |
+ stream::UpDisconnected => (a, None, guard),
+ stream::UpWoke(task) => (a, Some(task), guard),
}
}
}
};
unsafe {
- (*packet.get()).inherit_blocker(sleeper);
+ (*packet.get()).inherit_blocker(sleeper, guard);
let tmp = Sender::new(Shared(packet.clone()));
mem::swap(self.inner_mut(), tmp.inner_mut());
--- /dev/null
+/* Copyright (c) 2010-2011 Dmitry Vyukov. All rights reserved.
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY DMITRY VYUKOV "AS IS" AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
+ * SHALL DMITRY VYUKOV OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+ * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * The views and conclusions contained in the software and documentation are
+ * those of the authors and should not be interpreted as representing official
+ * policies, either expressed or implied, of Dmitry Vyukov.
+ */
+
+//! A mostly lock-free multi-producer, single-consumer queue.
+//!
+//! This module contains an implementation of a concurrent MPSC queue. This
+//! queue can be used to share data between tasks, and is also used as the
+//! building block of channels in Rust.
+//!
+//! Note that the current implementation of this queue has a caveat in its
+//! `pop` method; see that method's documentation for more information. Due
+//! to this caveat, this queue may not be appropriate for all use-cases.
+
+#![experimental]
+
+// http://www.1024cores.net/home/lock-free-algorithms
+// /queues/non-intrusive-mpsc-node-based-queue
+
+pub use self::PopResult::*;
+
+use core::prelude::*;
+
+use alloc::boxed::Box;
+use core::mem;
+use core::cell::UnsafeCell;
+
+use sync::atomic::{AtomicPtr, Release, Acquire, AcqRel, Relaxed};
+
+/// A result of the `pop` function.
+pub enum PopResult<T> {
+ /// Some data has been popped
+ Data(T),
+ /// The queue is empty
+ Empty,
+ /// The queue is in an inconsistent state. Popping data should succeed, but
+ /// some pushers have yet to make enough progress to allow a pop to
+ /// succeed. It is recommended that a `pop()` be retried "in the near
+ /// future" to see whether the sender has made progress.
+ Inconsistent,
+}
+
+struct Node<T> {
+ next: AtomicPtr<Node<T>>,
+ value: Option<T>,
+}
+
+/// The multi-producer single-consumer structure. This is not cloneable, but it
+/// may be safely shared so long as it is guaranteed that there is only one
+/// popper at a time (many pushers are allowed).
+pub struct Queue<T> {
+ head: AtomicPtr<Node<T>>,
+ tail: UnsafeCell<*mut Node<T>>,
+}
+
+impl<T> Node<T> {
+ unsafe fn new(v: Option<T>) -> *mut Node<T> {
+ mem::transmute(box Node {
+ next: AtomicPtr::new(0 as *mut Node<T>),
+ value: v,
+ })
+ }
+}
+
+impl<T: Send> Queue<T> {
+ /// Creates a new queue that is safe to share among multiple producers and
+ /// one consumer.
+ pub fn new() -> Queue<T> {
+ let stub = unsafe { Node::new(None) };
+ Queue {
+ head: AtomicPtr::new(stub),
+ tail: UnsafeCell::new(stub),
+ }
+ }
+
+ /// Pushes a new value onto this queue.
+ pub fn push(&self, t: T) {
+ unsafe {
+ let n = Node::new(Some(t));
+ let prev = self.head.swap(n, AcqRel);
+ (*prev).next.store(n, Release);
+ }
+ }
+
+ /// Pops some data from this queue.
+ ///
+ /// Note that the current implementation means that this function cannot
+ /// return `Option<T>`. It is possible for this queue to be in an
+ /// inconsistent state where many pushes have succeeded and completely
+ /// finished, but pops cannot return `Some(t)`. This inconsistent state
+ /// happens when a pusher is pre-empted at an inopportune moment.
+ ///
+ /// This inconsistent state means that this queue does indeed have data, but
+ /// the consumer does not currently have access to it.
+ pub fn pop(&self) -> PopResult<T> {
+ unsafe {
+ let tail = *self.tail.get();
+ let next = (*tail).next.load(Acquire);
+
+ if !next.is_null() {
+ *self.tail.get() = next;
+ assert!((*tail).value.is_none());
+ assert!((*next).value.is_some());
+ let ret = (*next).value.take().unwrap();
+ let _: Box<Node<T>> = mem::transmute(tail);
+ return Data(ret);
+ }
+
+ if self.head.load(Acquire) == tail {Empty} else {Inconsistent}
+ }
+ }
+}
+
+#[unsafe_destructor]
+impl<T: Send> Drop for Queue<T> {
+ fn drop(&mut self) {
+ unsafe {
+ let mut cur = *self.tail.get();
+ while !cur.is_null() {
+ let next = (*cur).next.load(Relaxed);
+ let _: Box<Node<T>> = mem::transmute(cur);
+ cur = next;
+ }
+ }
+ }
+}
+
+#[cfg(test)]
+mod tests {
+ use prelude::*;
+
+ use alloc::arc::Arc;
+
+ use super::{Queue, Data, Empty, Inconsistent};
+
+ #[test]
+ fn test_full() {
+ let q = Queue::new();
+ q.push(box 1i);
+ q.push(box 2i);
+ }
+
+ #[test]
+ fn test() {
+ let nthreads = 8u;
+ let nmsgs = 1000u;
+ let q = Queue::new();
+ match q.pop() {
+ Empty => {}
+ Inconsistent | Data(..) => panic!()
+ }
+ let (tx, rx) = channel();
+ let q = Arc::new(q);
+
+ for _ in range(0, nthreads) {
+ let tx = tx.clone();
+ let q = q.clone();
+ spawn(proc() {
+ for i in range(0, nmsgs) {
+ q.push(i);
+ }
+ tx.send(());
+ });
+ }
+
+ let mut i = 0u;
+ while i < nthreads * nmsgs {
+ match q.pop() {
+ Empty | Inconsistent => {},
+ Data(_) => { i += 1 }
+ }
+ }
+ drop(tx);
+ for _ in range(0, nthreads) {
+ rx.recv();
+ }
+ }
+}
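The queue added above is the building block of Rust's channels; in today's standard library the same multi-producer, single-consumer pattern is exposed as `std::sync::mpsc`. A minimal sketch in modern Rust (not the 2014 syntax used in this patch), showing several producers fanning into one consumer:

```rust
use std::sync::mpsc::channel;
use std::thread;

// Collect one message from each of `n` producer threads over a single
// shared channel, returning the sorted payloads.
fn fan_in(n: u32) -> Vec<u32> {
    let (tx, rx) = channel();
    for id in 0..n {
        let tx = tx.clone(); // each producer gets its own Sender
        thread::spawn(move || tx.send(id).unwrap());
    }
    drop(tx); // hang up the original Sender so `rx.iter()` can terminate
    let mut received: Vec<u32> = rx.iter().collect();
    received.sort();
    received
}

fn main() {
    assert_eq!(fan_in(4), vec![0, 1, 2, 3]);
}
```

Dropping every `Sender` is what lets `rx.iter()` terminate; the receiver only sees the end of the stream once all pushers are done.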
use core::cmp;
use core::int;
use rustrt::local::Local;
-use rustrt::mutex::NativeMutex;
use rustrt::task::{Task, BlockedTask};
use rustrt::thread::Thread;
-use sync::atomic;
-use sync::mpsc_queue as mpsc;
+use sync::{atomic, Mutex, MutexGuard};
+use comm::mpsc_queue as mpsc;
const DISCONNECTED: int = int::MIN;
const FUDGE: int = 1024;
// this lock protects various portions of this implementation during
// select()
- select_lock: NativeMutex,
+ select_lock: Mutex<()>,
}
pub enum Failure {
channels: atomic::AtomicInt::new(2),
port_dropped: atomic::AtomicBool::new(false),
sender_drain: atomic::AtomicInt::new(0),
- select_lock: unsafe { NativeMutex::new() },
+ select_lock: Mutex::new(()),
};
return p;
}
// In other case mutex data will be duplicated while cloning
// and that could cause problems on platforms where it is
// represented by opaque data structure
- pub fn postinit_lock(&mut self) {
- unsafe { self.select_lock.lock_noguard() }
+ pub fn postinit_lock(&self) -> MutexGuard<()> {
+ self.select_lock.lock()
}
// This function is used at the creation of a shared packet to inherit a
// tasks in select().
//
// This can only be called at channel-creation time
- pub fn inherit_blocker(&mut self, task: Option<BlockedTask>) {
+ pub fn inherit_blocker(&mut self,
+ task: Option<BlockedTask>,
+ guard: MutexGuard<()>) {
match task {
Some(task) => {
assert_eq!(self.cnt.load(atomic::SeqCst), 0);
// interfere with this method. After we unlock this lock, we're
// signifying that we're done modifying self.cnt and self.to_wake and
// the port is ready for the world to continue using it.
- unsafe { self.select_lock.unlock_noguard() }
+ drop(guard);
}
pub fn send(&mut self, t: T) -> Result<(), T> {
// done with. Without this bounce, we can race with inherit_blocker
// about looking at and dealing with to_wake. Once we have acquired the
// lock, we are guaranteed that inherit_blocker is done.
- unsafe {
+ {
let _guard = self.select_lock.lock();
}
--- /dev/null
+/* Copyright (c) 2010-2011 Dmitry Vyukov. All rights reserved.
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY DMITRY VYUKOV "AS IS" AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
+ * SHALL DMITRY VYUKOV OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+ * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * The views and conclusions contained in the software and documentation are
+ * those of the authors and should not be interpreted as representing official
+ * policies, either expressed or implied, of Dmitry Vyukov.
+ */
+
+// http://www.1024cores.net/home/lock-free-algorithms/queues/unbounded-spsc-queue
+
+//! A single-producer single-consumer concurrent queue
+//!
+//! This module contains the implementation of an SPSC queue which can be used
+//! concurrently between two tasks. This data structure is safe to use
+//! provided the caller upholds the semantics that there is one pusher and
+//! one popper.
+
+#![experimental]
+
+use core::prelude::*;
+
+use alloc::boxed::Box;
+use core::mem;
+use core::cell::UnsafeCell;
+
+use sync::atomic::{AtomicPtr, Relaxed, AtomicUint, Acquire, Release};
+
+// Node within the linked list queue of messages to send
+struct Node<T> {
+ // FIXME: this could be an uninitialized T if we're careful enough, and
+ // that would reduce memory usage (and be a bit faster).
+ // Is it worth it?
+ value: Option<T>, // nullable for re-use of nodes
+ next: AtomicPtr<Node<T>>, // next node in the queue
+}
+
+/// The single-producer single-consumer queue. This structure is not cloneable,
+/// but it can be safely shared in an Arc if it is guaranteed that there
+/// is only one popper and one pusher touching the queue at any one point in
+/// time.
+pub struct Queue<T> {
+ // consumer fields
+ tail: UnsafeCell<*mut Node<T>>, // where to pop from
+ tail_prev: AtomicPtr<Node<T>>, // where to pop from
+
+ // producer fields
+ head: UnsafeCell<*mut Node<T>>, // where to push to
+ first: UnsafeCell<*mut Node<T>>, // where to get new nodes from
+ tail_copy: UnsafeCell<*mut Node<T>>, // between first/tail
+
+ // Cache maintenance fields. Additions and subtractions are stored
+ // separately in order to allow them to use nonatomic addition/subtraction.
+ cache_bound: uint,
+ cache_additions: AtomicUint,
+ cache_subtractions: AtomicUint,
+}
+
+impl<T: Send> Node<T> {
+ fn new() -> *mut Node<T> {
+ unsafe {
+ mem::transmute(box Node {
+ value: None,
+ next: AtomicPtr::new(0 as *mut Node<T>),
+ })
+ }
+ }
+}
+
+impl<T: Send> Queue<T> {
+ /// Creates a new queue.
+ ///
+ /// This is unsafe as the type system doesn't enforce a single
+ /// consumer-producer relationship. It also allows the consumer to `pop`
+ /// items while there is a `peek` active due to all methods having a
+ /// non-mutable receiver.
+ ///
+ /// # Arguments
+ ///
+ /// * `bound` - This queue is implemented with a linked
+ /// list, and this means that a push is always an allocation. In
+ /// order to amortize this cost, an internal cache of nodes is
+ /// maintained to prevent a malloc from always being
+ /// necessary. This bound is the limit on the size of the
+ /// cache (if desired). If the value is 0, then the cache has
+ /// no bound. Otherwise, the cache will never grow larger than
+ /// `bound` (although the queue itself could be much larger).
+ pub unsafe fn new(bound: uint) -> Queue<T> {
+ let n1 = Node::new();
+ let n2 = Node::new();
+ (*n1).next.store(n2, Relaxed);
+ Queue {
+ tail: UnsafeCell::new(n2),
+ tail_prev: AtomicPtr::new(n1),
+ head: UnsafeCell::new(n2),
+ first: UnsafeCell::new(n1),
+ tail_copy: UnsafeCell::new(n1),
+ cache_bound: bound,
+ cache_additions: AtomicUint::new(0),
+ cache_subtractions: AtomicUint::new(0),
+ }
+ }
+
+ /// Pushes a new value onto this queue. Note that to use this function
+ /// safely, it must be externally guaranteed that there is only one pusher.
+ pub fn push(&self, t: T) {
+ unsafe {
+ // Acquire a node (which either uses a cached one or allocates a new
+ // one), and then append this to the 'head' node.
+ let n = self.alloc();
+ assert!((*n).value.is_none());
+ (*n).value = Some(t);
+ (*n).next.store(0 as *mut Node<T>, Relaxed);
+ (**self.head.get()).next.store(n, Release);
+ *self.head.get() = n;
+ }
+ }
+
+ unsafe fn alloc(&self) -> *mut Node<T> {
+ // First try to see if we can consume the 'first' node for our uses.
+ // We try to avoid as many atomic instructions as possible here, so
+ // the addition to cache_subtractions is not atomic (plus we're the
+ // only one subtracting from the cache).
+ if *self.first.get() != *self.tail_copy.get() {
+ if self.cache_bound > 0 {
+ let b = self.cache_subtractions.load(Relaxed);
+ self.cache_subtractions.store(b + 1, Relaxed);
+ }
+ let ret = *self.first.get();
+ *self.first.get() = (*ret).next.load(Relaxed);
+ return ret;
+ }
+ // If the above fails, then update our copy of the tail and try
+ // again.
+ *self.tail_copy.get() = self.tail_prev.load(Acquire);
+ if *self.first.get() != *self.tail_copy.get() {
+ if self.cache_bound > 0 {
+ let b = self.cache_subtractions.load(Relaxed);
+ self.cache_subtractions.store(b + 1, Relaxed);
+ }
+ let ret = *self.first.get();
+ *self.first.get() = (*ret).next.load(Relaxed);
+ return ret;
+ }
+ // If all of that fails, then we have to allocate a new node
+ // (there's nothing in the node cache).
+ Node::new()
+ }
+
+ /// Attempts to pop a value from this queue. Remember that to use this type
+ /// safely you must ensure that there is only one popper at a time.
+ pub fn pop(&self) -> Option<T> {
+ unsafe {
+ // The `tail` node is not actually a used node, but rather a
+ // sentinel from which we start popping. Hence, look at
+ // tail's next field and see if we can use it. If we do a pop, then
+ // the current tail node is a candidate for going into the cache.
+ let tail = *self.tail.get();
+ let next = (*tail).next.load(Acquire);
+ if next.is_null() { return None }
+ assert!((*next).value.is_some());
+ let ret = (*next).value.take();
+
+ *self.tail.get() = next;
+ if self.cache_bound == 0 {
+ self.tail_prev.store(tail, Release);
+ } else {
+ // FIXME: this is dubious with overflow.
+ let additions = self.cache_additions.load(Relaxed);
+ let subtractions = self.cache_subtractions.load(Relaxed);
+ let size = additions - subtractions;
+
+ if size < self.cache_bound {
+ self.tail_prev.store(tail, Release);
+ self.cache_additions.store(additions + 1, Relaxed);
+ } else {
+ (*self.tail_prev.load(Relaxed)).next.store(next, Relaxed);
+ // We have successfully erased all references to 'tail', so
+ // now we can safely drop it.
+ let _: Box<Node<T>> = mem::transmute(tail);
+ }
+ }
+ return ret;
+ }
+ }
+
+ /// Attempts to peek at the head of the queue, returning `None` if the queue
+ /// has no data currently.
+ ///
+ /// # Warning
+ ///
+ /// The reference returned is invalid if it is not used before the consumer
+ /// pops the value off the queue. If the producer then pushes another value
+ /// onto the queue, it will overwrite the value pointed to by the reference.
+ pub fn peek<'a>(&'a self) -> Option<&'a mut T> {
+ // This is essentially the same as above with all the popping bits
+ // stripped out.
+ unsafe {
+ let tail = *self.tail.get();
+ let next = (*tail).next.load(Acquire);
+ if next.is_null() { return None }
+ return (*next).value.as_mut();
+ }
+ }
+}
+
+#[unsafe_destructor]
+impl<T: Send> Drop for Queue<T> {
+ fn drop(&mut self) {
+ unsafe {
+ let mut cur = *self.first.get();
+ while !cur.is_null() {
+ let next = (*cur).next.load(Relaxed);
+ let _n: Box<Node<T>> = mem::transmute(cur);
+ cur = next;
+ }
+ }
+ }
+}
+
+#[cfg(test)]
+mod test {
+ use prelude::*;
+
+ use sync::Arc;
+ use super::Queue;
+
+ #[test]
+ fn smoke() {
+ unsafe {
+ let queue = Queue::new(0);
+ queue.push(1i);
+ queue.push(2);
+ assert_eq!(queue.pop(), Some(1i));
+ assert_eq!(queue.pop(), Some(2));
+ assert_eq!(queue.pop(), None);
+ queue.push(3);
+ queue.push(4);
+ assert_eq!(queue.pop(), Some(3));
+ assert_eq!(queue.pop(), Some(4));
+ assert_eq!(queue.pop(), None);
+ }
+ }
+
+ #[test]
+ fn peek() {
+ unsafe {
+ let queue = Queue::new(0);
+ queue.push(vec![1i]);
+
+ // Ensure the borrow checker works
+ match queue.peek() {
+ Some(vec) => match vec.as_slice() {
+ // Note that `pop` is not allowed here due to borrow
+ [1] => {}
+ _ => return
+ },
+ None => unreachable!()
+ }
+
+ queue.pop();
+ }
+ }
+
+ #[test]
+ fn drop_full() {
+ unsafe {
+ let q = Queue::new(0);
+ q.push(box 1i);
+ q.push(box 2i);
+ }
+ }
+
+ #[test]
+ fn smoke_bound() {
+ unsafe {
+ let q = Queue::new(0);
+ q.push(1i);
+ q.push(2);
+ assert_eq!(q.pop(), Some(1));
+ assert_eq!(q.pop(), Some(2));
+ assert_eq!(q.pop(), None);
+ q.push(3);
+ q.push(4);
+ assert_eq!(q.pop(), Some(3));
+ assert_eq!(q.pop(), Some(4));
+ assert_eq!(q.pop(), None);
+ }
+ }
+
+ #[test]
+ fn stress() {
+ unsafe {
+ stress_bound(0);
+ stress_bound(1);
+ }
+
+ unsafe fn stress_bound(bound: uint) {
+ let q = Arc::new(Queue::new(bound));
+
+ let (tx, rx) = channel();
+ let q2 = q.clone();
+ spawn(proc() {
+ for _ in range(0u, 100000) {
+ loop {
+ match q2.pop() {
+ Some(1i) => break,
+ Some(_) => panic!(),
+ None => {}
+ }
+ }
+ }
+ tx.send(());
+ });
+ for _ in range(0i, 100000) {
+ q.push(1);
+ }
+ rx.recv();
+ }
+ }
+}
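For comparison, modern Rust exposes a bounded channel as `std::sync::mpsc::sync_channel`; with one producer and one consumer it plays a role loosely analogous to this SPSC queue. The sketch below uses today's syntax; note that `sync_channel`'s bound limits in-flight messages and applies backpressure, whereas the `cache_bound` above only limits node reuse, not queue length:

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

// One producer, one consumer, bounded buffer: the producer blocks once
// `bound` messages are queued, and the consumer drains until the producer
// hangs up. Returns the sum of everything received.
fn bounded_transfer(n: u32, bound: usize) -> u32 {
    let (tx, rx) = sync_channel::<u32>(bound);
    let producer = thread::spawn(move || {
        for i in 0..n {
            tx.send(i).unwrap(); // blocks when `bound` items are in flight
        }
        // `tx` dropped here, which ends the consumer's iterator
    });
    let sum: u32 = rx.iter().sum();
    producer.join().unwrap();
    sum
}

fn main() {
    // 0 + 1 + ... + 99 = 4950
    assert_eq!(bounded_transfer(100, 2), 4950);
}
```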
use rustrt::thread::Thread;
use sync::atomic;
-use sync::spsc_queue as spsc;
+use comm::spsc_queue as spsc;
use comm::Receiver;
const DISCONNECTED: int = int::MIN;
use string::String;
use vec::Vec;
-pub struct DynamicLibrary { handle: *mut u8 }
+#[allow(missing_copy_implementations)]
+pub struct DynamicLibrary {
+ handle: *mut u8
+}
impl Drop for DynamicLibrary {
fn drop(&mut self) {
use c_str::{CString, ToCStr};
use libc;
+ use kinds::Copy;
use ptr;
use result::*;
use string::String;
}
pub fn check_for_errors_in<T>(f: || -> T) -> Result<T, String> {
- use rustrt::mutex::{StaticNativeMutex, NATIVE_MUTEX_INIT};
- static LOCK: StaticNativeMutex = NATIVE_MUTEX_INIT;
+ use sync::{StaticMutex, MUTEX_INIT};
+ static LOCK: StaticMutex = MUTEX_INIT;
unsafe {
// dlerror isn't thread safe, so we need to lock around this entire
// sequence
Local = 0,
}
+ impl Copy for Rtld {}
+
#[link_name = "dl"]
extern {
fn dlopen(filename: *const libc::c_char,
use libc;
use os;
use ptr;
- use result::{Ok, Err, Result};
+ use result::Result;
+ use result::Result::{Ok, Err};
use slice::SlicePrelude;
use str::StrPrelude;
use str;
//! }
//! ```
-use option::{Option, None};
+use option::Option;
+use option::Option::None;
use kinds::Send;
use string::String;
use fmt;
use io::{Writer, IoResult};
use kinds::Send;
-use option::{Some, None, Option};
-use result::Ok;
+use option::Option;
+use option::Option::{Some, None};
+use result::Result::Ok;
use rt::backtrace;
use rustrt::{Stderr, Stdio};
use rustrt::local::Local;
use io::Writer;
use io;
-use result::{Ok, Err};
+use result::Result::{Ok, Err};
use string;
use vec::Vec;
pub use core::fmt::{Argument, Arguments, write, radix, Radix, RadixFmt};
#[doc(hidden)]
-pub use core::fmt::{argument, argumentstr, argumentuint};
+pub use core::fmt::{argument, argumentuint};
/// The format function takes a precompiled format string and a list of
/// arguments, to return the resulting formatted string.
use io::{Reader, Writer, Stream, Buffer, DEFAULT_BUF_SIZE, IoResult};
use iter::ExactSizeIterator;
use ops::Drop;
-use option::{Some, None, Option};
-use result::{Ok, Err};
+use option::Option;
+use option::Option::{Some, None};
+use result::Result::{Ok, Err};
use slice::{SlicePrelude};
use slice;
use vec::Vec;
let nread = reader.read(&mut buf);
assert_eq!(Ok(3), nread);
let b: &[_] = &[5, 6, 7];
- assert_eq!(buf.as_slice(), b);
+ assert_eq!(buf, b);
let mut buf = [0, 0];
let nread = reader.read(&mut buf);
assert_eq!(Ok(2), nread);
let b: &[_] = &[0, 1];
- assert_eq!(buf.as_slice(), b);
+ assert_eq!(buf, b);
let mut buf = [0];
let nread = reader.read(&mut buf);
assert_eq!(Ok(1), nread);
let b: &[_] = &[2];
- assert_eq!(buf.as_slice(), b);
+ assert_eq!(buf, b);
let mut buf = [0, 0, 0];
let nread = reader.read(&mut buf);
assert_eq!(Ok(1), nread);
let b: &[_] = &[3, 0, 0];
- assert_eq!(buf.as_slice(), b);
+ assert_eq!(buf, b);
let nread = reader.read(&mut buf);
assert_eq!(Ok(1), nread);
let b: &[_] = &[4, 0, 0];
- assert_eq!(buf.as_slice(), b);
+ assert_eq!(buf, b);
assert!(reader.read(&mut buf).is_err());
}
use cmp;
use comm::{Sender, Receiver};
use io;
-use option::{None, Some};
-use result::{Ok, Err};
+use option::Option::{None, Some};
+use result::Result::{Ok, Err};
use slice::{bytes, CloneSliceAllocPrelude, SlicePrelude};
use super::{Buffer, Reader, Writer, IoResult};
use vec::Vec;
assert_eq!(Ok(3), reader.read(&mut buf));
let a: &[u8] = &[1,2,3];
- assert_eq!(a, buf.as_slice());
+ assert_eq!(a, buf);
assert_eq!(Ok(3), reader.read(&mut buf));
let a: &[u8] = &[4,5,6];
- assert_eq!(a, buf.as_slice());
+ assert_eq!(a, buf);
assert_eq!(Ok(2), reader.read(&mut buf));
let a: &[u8] = &[7,8,6];
- assert_eq!(a, buf.as_slice());
+ assert_eq!(a, buf);
match reader.read(buf.as_mut_slice()) {
Ok(..) => panic!(),
Err(e) => assert_eq!(e.kind, io::EndOfFile),
}
- assert_eq!(a, buf.as_slice());
+ assert_eq!(a, buf);
// Ensure it continues to panic in the same way.
match reader.read(buf.as_mut_slice()) {
Ok(..) => panic!(),
Err(e) => assert_eq!(e.kind, io::EndOfFile),
}
- assert_eq!(a, buf.as_slice());
+ assert_eq!(a, buf);
}
#[test]
use io;
use iter::Iterator;
use num::Int;
-use option::{Option, Some, None};
+use option::Option;
+use option::Option::{Some, None};
use ptr::RawPtr;
-use result::{Ok, Err};
+use result::Result::{Ok, Err};
use slice::{SlicePrelude, AsSlice};
/// An iterator that reads a single byte on each iteration,
use io::UpdateIoError;
use io;
use iter::{Iterator, Extend};
-use option::{Some, None, Option};
+use option::Option;
+use option::Option::{Some, None};
use path::{Path, GenericPath};
use path;
-use result::{Err, Ok};
+use result::Result::{Err, Ok};
use slice::SlicePrelude;
use string::String;
use vec::Vec;
macro_rules! error( ($e:expr, $s:expr) => (
match $e {
Ok(_) => panic!("Unexpected success. Should've been: {}", $s),
- Err(ref err) => assert!(err.to_string().as_slice().contains($s.as_slice()),
+ Err(ref err) => assert!(err.to_string().contains($s.as_slice()),
format!("`{}` did not contain `{}`", err, $s))
}
) )
}
check!(unlink(filename));
let read_str = str::from_utf8(&read_mem).unwrap();
- assert!(read_str.as_slice() == final_msg.as_slice());
+ assert!(read_str == final_msg);
}
#[test]
let f = dir.join(format!("{}.txt", n));
let mut w = check!(File::create(&f));
let msg_str = format!("{}{}", prefix, n.to_string());
- let msg = msg_str.as_slice().as_bytes();
+ let msg = msg_str.as_bytes();
check!(w.write(msg));
}
let files = check!(readdir(dir));
#![allow(deprecated)]
use cmp::min;
-use option::None;
-use result::{Err, Ok};
+use option::Option::None;
+use result::Result::{Err, Ok};
use io;
use io::{Reader, Writer, Seek, Buffer, IoError, SeekStyle, IoResult};
use slice::{mod, AsSlice, SlicePrelude};
impl<'a> Writer for BufWriter<'a> {
#[inline]
- fn write(&mut self, buf: &[u8]) -> IoResult<()> {
- // return an error if the entire write does not fit in the buffer
- let cap = if self.pos >= self.buf.len() { 0 } else { self.buf.len() - self.pos };
- if buf.len() > cap {
- return Err(IoError {
- kind: io::OtherIoError,
- desc: "Trying to write past end of buffer",
- detail: None
- })
+ fn write(&mut self, src: &[u8]) -> IoResult<()> {
+ let dst = self.buf[mut self.pos..];
+ let dst_len = dst.len();
+
+ if dst_len == 0 {
+ return Err(io::standard_error(io::EndOfFile));
}
- slice::bytes::copy_memory(self.buf[mut self.pos..], buf);
- self.pos += buf.len();
- Ok(())
+ let src_len = src.len();
+
+ if dst_len >= src_len {
+ slice::bytes::copy_memory(dst, src);
+
+ self.pos += src_len;
+
+ Ok(())
+ } else {
+ slice::bytes::copy_memory(dst, src[..dst_len]);
+
+ self.pos += dst_len;
+
+ Err(io::standard_error(io::ShortWrite(dst_len)))
+ }
}
}
#[inline]
fn seek(&mut self, pos: i64, style: SeekStyle) -> IoResult<()> {
let new = try!(combine(style, self.pos, self.buf.len(), pos));
- self.pos = new as uint;
+ self.pos = min(new as uint, self.buf.len());
Ok(())
}
}
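The patched `BufWriter::write` above reports a partial copy via a `ShortWrite(n)` error; modern `std::io` expresses the same idea through `Write::write`'s return count instead. A sketch in today's Rust, using a hypothetical helper `write_into` (not part of this patch):

```rust
use std::io::Write;

// A short write against a fixed-size buffer: copy what fits and report
// how many bytes were accepted, mirroring the amount carried by the
// `ShortWrite(n)` error in the patched `BufWriter` above.
fn write_into(dst: &mut [u8], src: &[u8]) -> usize {
    let mut cursor: &mut [u8] = dst;
    cursor.write(src).unwrap() // `Write for &mut [u8]` copies what fits
}

fn main() {
    let mut buf = [0u8; 4];
    let n = write_into(&mut buf, &[1, 2, 3, 4, 5, 6]);
    assert_eq!(n, 4); // only four bytes fit
    assert_eq!(buf, [1, 2, 3, 4]);
}
```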
/// # #![allow(unused_must_use)]
/// use std::io::BufReader;
///
-/// let mut buf = [0, 1, 2, 3];
-/// let mut r = BufReader::new(&mut buf);
+/// let buf = [0, 1, 2, 3];
+/// let mut r = BufReader::new(&buf);
///
-/// assert_eq!(r.read_to_end().unwrap(), vec!(0, 1, 2, 3));
+/// assert_eq!(r.read_to_end().unwrap(), vec![0, 1, 2, 3]);
/// ```
pub struct BufReader<'a> {
buf: &'a [u8],
#[test]
fn test_buf_writer() {
- let mut buf = [0 as u8, ..8];
+ let mut buf = [0 as u8, ..9];
{
let mut writer = BufWriter::new(&mut buf);
assert_eq!(writer.tell(), Ok(0));
writer.write(&[]).unwrap();
assert_eq!(writer.tell(), Ok(8));
- assert!(writer.write(&[1]).is_err());
+ assert_eq!(writer.write(&[8, 9]).unwrap_err().kind, io::ShortWrite(1));
+ assert_eq!(writer.write(&[10]).unwrap_err().kind, io::EndOfFile);
}
- let b: &[_] = &[0, 1, 2, 3, 4, 5, 6, 7];
- assert_eq!(buf.as_slice(), b);
+ let b: &[_] = &[0, 1, 2, 3, 4, 5, 6, 7, 8];
+ assert_eq!(buf, b);
}
#[test]
}
let b: &[_] = &[1, 3, 2, 0, 0, 0, 0, 4];
- assert_eq!(buf.as_slice(), b);
+ assert_eq!(buf, b);
}
#[test]
match writer.write(&[0, 0]) {
Ok(..) => panic!(),
- Err(e) => assert_eq!(e.kind, io::OtherIoError),
+ Err(e) => assert_eq!(e.kind, io::ShortWrite(1)),
}
}
assert_eq!(reader.read(&mut buf), Ok(1));
assert_eq!(reader.tell(), Ok(1));
let b: &[_] = &[0];
- assert_eq!(buf.as_slice(), b);
+ assert_eq!(buf, b);
let mut buf = [0, ..4];
assert_eq!(reader.read(&mut buf), Ok(4));
assert_eq!(reader.tell(), Ok(5));
let b: &[_] = &[1, 2, 3, 4];
- assert_eq!(buf.as_slice(), b);
+ assert_eq!(buf, b);
assert_eq!(reader.read(&mut buf), Ok(3));
let b: &[_] = &[5, 6, 7];
assert_eq!(buf[0..3], b);
assert_eq!(reader.read(&mut buf), Ok(1));
assert_eq!(reader.tell(), Ok(1));
let b: &[_] = &[0];
- assert_eq!(buf.as_slice(), b);
+ assert_eq!(buf, b);
let mut buf = [0, ..4];
assert_eq!(reader.read(&mut buf), Ok(4));
assert_eq!(reader.tell(), Ok(5));
let b: &[_] = &[1, 2, 3, 4];
- assert_eq!(buf.as_slice(), b);
+ assert_eq!(buf, b);
assert_eq!(reader.read(&mut buf), Ok(3));
let b: &[_] = &[5, 6, 7];
assert_eq!(buf[0..3], b);
writer.write_line("testing").unwrap();
writer.write_str("testing").unwrap();
let mut r = BufReader::new(writer.get_ref());
- assert_eq!(r.read_to_string().unwrap(), "testingtesting\ntesting".to_string());
+ assert_eq!(r.read_to_string().unwrap(), "testingtesting\ntesting");
}
#[test]
writer.write_char('\n').unwrap();
writer.write_char('ệ').unwrap();
let mut r = BufReader::new(writer.get_ref());
- assert_eq!(r.read_to_string().unwrap(), "a\nệ".to_string());
+ assert_eq!(r.read_to_string().unwrap(), "a\nệ");
}
#[test]
let mut buf = [0, ..3];
assert!(r.read_at_least(buf.len(), &mut buf).is_ok());
let b: &[_] = &[1, 2, 3];
- assert_eq!(buf.as_slice(), b);
+ assert_eq!(buf, b);
assert!(r.read_at_least(0, buf[mut ..0]).is_ok());
- assert_eq!(buf.as_slice(), b);
+ assert_eq!(buf, b);
assert!(r.read_at_least(buf.len(), &mut buf).is_ok());
let b: &[_] = &[4, 5, 6];
- assert_eq!(buf.as_slice(), b);
+ assert_eq!(buf, b);
assert!(r.read_at_least(buf.len(), &mut buf).is_err());
let b: &[_] = &[7, 8, 6];
- assert_eq!(buf.as_slice(), b);
+ assert_eq!(buf, b);
}
fn do_bench_mem_writer(b: &mut Bencher, times: uint, len: uint) {
for _i in range(0u, 10) {
let mut buf = [0 as u8, .. 10];
rdr.read(&mut buf).unwrap();
- assert_eq!(buf.as_slice(), [5, .. 10].as_slice());
+ assert_eq!(buf, [5, .. 10]);
}
}
});
//! ```rust
//! use std::io;
//!
-//! for line in io::stdin().lines() {
+//! for line in io::stdin().lock().lines() {
//! print!("{}", line.unwrap());
//! }
//! ```
use fmt;
use int;
use iter::{Iterator, IteratorExt};
+use kinds::Copy;
use mem::transmute;
use ops::{BitOr, BitXor, BitAnd, Sub, Not};
-use option::{Option, Some, None};
+use option::Option;
+use option::Option::{Some, None};
use os;
use boxed::Box;
-use result::{Ok, Err, Result};
+use result::Result;
+use result::Result::{Ok, Err};
use sys;
-use slice::{AsSlice, SlicePrelude};
-use str::{Str, StrPrelude};
+use slice::SlicePrelude;
+use str::StrPrelude;
use str;
use string::String;
use uint;
pub fn from_errno(errno: uint, detail: bool) -> IoError {
let mut err = sys::decode_error(errno as i32);
if detail && err.kind == OtherIoError {
- err.detail = Some(os::error_string(errno).as_slice().chars()
+ err.detail = Some(os::error_string(errno).chars()
.map(|c| c.to_lowercase()).collect())
}
err
NoProgress,
}
+impl Copy for IoErrorKind {}
+
/// A trait that lets you add a `detail` to an IoError easily
trait UpdateIoError<T> {
/// Returns an IoError with updated description and detail
/// # Example
///
/// ```rust
- /// use std::io;
+ /// use std::io::BufReader;
///
- /// let mut reader = io::stdin();
- /// let input = reader.read_line().ok().unwrap_or("nothing".to_string());
+ /// let mut reader = BufReader::new(b"hello\nworld");
+ /// assert_eq!("hello\n", &*reader.read_line().unwrap());
/// ```
///
/// # Error
SeekCur,
}
+impl Copy for SeekStyle {}
+
/// An object implementing `Seek` internally has some form of cursor which can
/// be moved within a stream of bytes. The stream typically has a fixed size,
/// allowing seeking relative to either end.
Truncate,
}
+impl Copy for FileMode {}
+
/// Access permissions with which the file should be opened. `File`s
/// opened with `Read` will return an error if written to.
pub enum FileAccess {
ReadWrite,
}
+impl Copy for FileAccess {}
+
/// Different kinds of files which can be identified by a call to stat
#[deriving(PartialEq, Show, Hash, Clone)]
pub enum FileType {
Unknown,
}
+impl Copy for FileType {}
+
/// A structure used to describe metadata information about a file. This
/// structure is created through the `stat` method on a `Path`.
///
pub unstable: UnstableFileStat,
}
+impl Copy for FileStat {}
+
/// This structure represents all of the possible information which can be
/// returned from a `stat` syscall which is not contained in the `FileStat`
/// structure. This information is not necessarily platform independent, and may
pub gen: u64,
}
+impl Copy for UnstableFileStat {}
+
bitflags! {
#[doc = "A set of permissions for a file or directory is represented"]
#[doc = "by a set of flags which are or'd together."]
}
}
+impl Copy for FilePermission {}
+
impl Default for FilePermission {
#[inline]
fn default() -> FilePermission { FilePermission::empty() }
fn test_show() {
use super::*;
- assert_eq!(format!("{}", USER_READ), "0400".to_string());
- assert_eq!(format!("{}", USER_FILE), "0644".to_string());
- assert_eq!(format!("{}", USER_EXEC), "0755".to_string());
- assert_eq!(format!("{}", USER_RWX), "0700".to_string());
- assert_eq!(format!("{}", GROUP_RWX), "0070".to_string());
- assert_eq!(format!("{}", OTHER_RWX), "0007".to_string());
- assert_eq!(format!("{}", ALL_PERMISSIONS), "0777".to_string());
- assert_eq!(format!("{}", USER_READ | USER_WRITE | OTHER_WRITE), "0602".to_string());
+ assert_eq!(format!("{}", USER_READ), "0400");
+ assert_eq!(format!("{}", USER_FILE), "0644");
+ assert_eq!(format!("{}", USER_EXEC), "0755");
+ assert_eq!(format!("{}", USER_RWX), "0700");
+ assert_eq!(format!("{}", GROUP_RWX), "0070");
+ assert_eq!(format!("{}", OTHER_RWX), "0007");
+ assert_eq!(format!("{}", ALL_PERMISSIONS), "0777");
+ assert_eq!(format!("{}", USER_READ | USER_WRITE | OTHER_WRITE), "0602");
}
fn _ensure_buffer_is_object_safe<T: Buffer>(x: &T) -> &Buffer {
use iter::IteratorExt;
use io::{IoResult};
use io::net::ip::{SocketAddr, IpAddr};
-use option::{Option, Some, None};
+use kinds::Copy;
+use option::Option;
+use option::Option::{Some, None};
use sys;
use vec::Vec;
Stream, Datagram, Raw
}
+impl Copy for SocketType {}
+
/// Flags which can be or'd into the `flags` field of a `Hint`. These are used
/// to manipulate how a query is performed.
///
V4Mapped,
}
+impl Copy for Flag {}
+
/// A transport protocol associated with either a hint or a return value of
/// `lookup`
pub enum Protocol {
TCP, UDP
}
+impl Copy for Protocol {}
+
/// This structure is used to provide hints when fetching addresses for a
/// remote host to control how the lookup is performed.
///
pub flags: uint,
}
+impl Copy for Hint {}
+
pub struct Info {
pub address: SocketAddr,
pub family: uint,
pub flags: uint,
}
+impl Copy for Info {}
+
/// Easy name resolution. Given a hostname, returns the list of IP addresses for
/// that hostname.
pub fn get_host_addresses(host: &str) -> IoResult<Vec<IpAddr>> {
pub use self::IpAddr::*;
use fmt;
+use kinds::Copy;
use io::{mod, IoResult, IoError};
use io::net;
use iter::{Iterator, IteratorExt};
-use option::{Option, None, Some};
-use result::{Ok, Err};
+use option::Option;
+use option::Option::{None, Some};
+use result::Result::{Ok, Err};
use str::{FromStr, StrPrelude};
use slice::{CloneSlicePrelude, SlicePrelude};
use vec::Vec;
Ipv6Addr(u16, u16, u16, u16, u16, u16, u16, u16)
}
+impl Copy for IpAddr {}
+
impl fmt::Show for IpAddr {
fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
match *self {
pub port: Port,
}
+impl Copy for SocketAddr {}
+
impl fmt::Show for SocketAddr {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self.ip {
#[test]
fn ipv6_addr_to_string() {
let a1 = Ipv6Addr(0, 0, 0, 0, 0, 0xffff, 0xc000, 0x280);
- assert!(a1.to_string() == "::ffff:192.0.2.128".to_string() ||
- a1.to_string() == "::FFFF:192.0.2.128".to_string());
+ assert!(a1.to_string() == "::ffff:192.0.2.128" ||
+ a1.to_string() == "::FFFF:192.0.2.128");
assert_eq!(Ipv6Addr(8, 9, 10, 11, 12, 13, 14, 15).to_string(),
- "8:9:a:b:c:d:e:f".to_string());
+ "8:9:a:b:c:d:e:f");
}
#[test]
//! Networking I/O
use io::{IoError, IoResult, InvalidInput};
-use option::None;
-use result::{Ok, Err};
+use option::Option::None;
+use result::Result::{Ok, Err};
use self::ip::{SocketAddr, ToSocketAddr};
pub use self::addrinfo::get_host_addresses;
use clone::Clone;
use io::IoResult;
-use result::Err;
+use result::Result::Err;
use io::net::ip::{SocketAddr, ToSocketAddr};
use io::{Reader, Writer, Listener, Acceptor};
use io::{standard_error, TimedOut};
-use option::{None, Some, Option};
+use option::Option;
+use option::Option::{None, Some};
use time::Duration;
use sys::tcp::TcpStream as TcpStreamImp;
use io::net::ip::{SocketAddr, IpAddr, ToSocketAddr};
use io::{Reader, Writer, IoResult};
use option::Option;
-use result::{Ok, Err};
+use result::Result::{Ok, Err};
use sys::udp::UdpSocket as UdpSocketImp;
use sys_common;
// if the env is currently just inheriting from the parent's,
// materialize the parent's env into a hashtable.
self.env = Some(os::env_as_bytes().into_iter()
- .map(|(k, v)| (EnvKey(k.as_slice().to_c_str()),
- v.as_slice().to_c_str()))
+ .map(|(k, v)| (EnvKey(k.to_c_str()),
+ v.to_c_str()))
.collect());
self.env.as_mut().unwrap()
}
CreatePipe(bool /* readable */, bool /* writable */),
}
+impl Copy for StdioContainer {}
+
/// Describes the result of a process after it has terminated.
/// Note that Windows has no signals, so the result is usually ExitStatus.
#[deriving(PartialEq, Eq, Clone)]
ExitSignal(int),
}
+impl Copy for ProcessExit {}
+
impl fmt::Show for ProcessExit {
/// Format a ProcessExit enum, to nicely present the information.
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
fn stdout_works() {
let mut cmd = Command::new("echo");
cmd.arg("foobar").stdout(CreatePipe(false, true));
- assert_eq!(run_output(cmd), "foobar\n".to_string());
+ assert_eq!(run_output(cmd), "foobar\n");
}
#[cfg(all(unix, not(target_os="android")))]
cmd.arg("-c").arg("pwd")
.cwd(&Path::new("/"))
.stdout(CreatePipe(false, true));
- assert_eq!(run_output(cmd), "/\n".to_string());
+ assert_eq!(run_output(cmd), "/\n");
}
#[cfg(all(unix, not(target_os="android")))]
drop(p.stdin.take());
let out = read_all(p.stdout.as_mut().unwrap() as &mut Reader);
assert!(p.wait().unwrap().success());
- assert_eq!(out, "foobar\n".to_string());
+ assert_eq!(out, "foobar\n");
}
#[cfg(not(target_os="android"))]
let output_str = str::from_utf8(output.as_slice()).unwrap();
assert!(status.success());
- assert_eq!(output_str.trim().to_string(), "hello".to_string());
+ assert_eq!(output_str.trim().to_string(), "hello");
// FIXME #7224
if !running_on_valgrind() {
assert_eq!(error, Vec::new());
let output_str = str::from_utf8(output.as_slice()).unwrap();
assert!(status.success());
- assert_eq!(output_str.trim().to_string(), "hello".to_string());
+ assert_eq!(output_str.trim().to_string(), "hello");
// FIXME #7224
if !running_on_valgrind() {
assert_eq!(error, Vec::new());
let output = String::from_utf8(prog.wait_with_output().unwrap().output).unwrap();
let parent_dir = os::getcwd().unwrap();
- let child_dir = Path::new(output.as_slice().trim());
+ let child_dir = Path::new(output.trim());
let parent_stat = parent_dir.stat().unwrap();
let child_stat = child_dir.stat().unwrap();
let prog = pwd_cmd().cwd(&parent_dir).spawn().unwrap();
let output = String::from_utf8(prog.wait_with_output().unwrap().output).unwrap();
- let child_dir = Path::new(output.as_slice().trim());
+ let child_dir = Path::new(output.trim());
let parent_stat = parent_dir.stat().unwrap();
let child_stat = child_dir.stat().unwrap();
for &(ref k, ref v) in r.iter() {
// don't check windows magical empty-named variables
assert!(k.is_empty() ||
- output.as_slice()
- .contains(format!("{}={}", *k, *v).as_slice()),
+ output.contains(format!("{}={}", *k, *v).as_slice()),
"output doesn't contain `{}={}`\n{}",
k, v, output);
}
for &(ref k, ref v) in r.iter() {
// don't check android RANDOM variables
if *k != "RANDOM".to_string() {
- assert!(output.as_slice()
- .contains(format!("{}={}",
+ assert!(output.contains(format!("{}={}",
*k,
*v).as_slice()) ||
- output.as_slice()
- .contains(format!("{}=\'{}\'",
+ output.contains(format!("{}=\'{}\'",
*k,
*v).as_slice()));
}
let result = prog.wait_with_output().unwrap();
let output = String::from_utf8_lossy(result.output.as_slice()).into_string();
- assert!(output.as_slice().contains("RUN_TEST_NEW_ENV=123"),
+ assert!(output.contains("RUN_TEST_NEW_ENV=123"),
"didn't find RUN_TEST_NEW_ENV inside of:\n\n{}", output);
}
let result = prog.wait_with_output().unwrap();
let output = String::from_utf8_lossy(result.output.as_slice()).into_string();
- assert!(output.as_slice().contains("RUN_TEST_NEW_ENV=123"),
+ assert!(output.contains("RUN_TEST_NEW_ENV=123"),
"didn't find RUN_TEST_NEW_ENV inside of:\n\n{}", output);
}
//! as a `Reader` without unwrapping the result first.
use clone::Clone;
-use result::{Ok, Err};
+use result::Result::{Ok, Err};
use super::{Reader, Writer, Listener, Acceptor, Seek, SeekStyle, IoResult};
impl<W: Writer> Writer for IoResult<W> {
use boxed::Box;
use cell::RefCell;
+use clone::Clone;
use failure::LOCAL_STDERR;
use fmt;
-use io::{Reader, Writer, IoResult, IoError, OtherIoError,
+use io::{Reader, Writer, IoResult, IoError, OtherIoError, Buffer,
standard_error, EndOfFile, LineBufferedWriter, BufferedReader};
use kinds::Send;
use libc;
use mem;
-use option::{Option, Some, None};
-use result::{Ok, Err};
+use option::Option;
+use option::Option::{Some, None};
+use ops::{Deref, DerefMut};
+use result::Result::{Ok, Err};
use rustrt;
use rustrt::local::Local;
use rustrt::task::Task;
use slice::SlicePrelude;
use str::StrPrelude;
+use string::String;
use sys::{fs, tty};
+use sync::{Arc, Mutex, MutexGuard, Once, ONCE_INIT};
use uint;
+use vec::Vec;
// And so begins the tale of acquiring a uv handle to a stdio stream on all
// platforms in all situations. Our story begins by splitting the world into two
RefCell::new(None)
})
-/// Creates a new non-blocking handle to the stdin of the current process.
-///
-/// The returned handled is buffered by default with a `BufferedReader`. If
-/// buffered access is not desired, the `stdin_raw` function is provided to
-/// provided unbuffered access to stdin.
+/// A synchronized wrapper around a buffered reader from stdin
+#[deriving(Clone)]
+pub struct StdinReader {
+ inner: Arc<Mutex<BufferedReader<StdReader>>>,
+}
+
+/// A guard for exclusive access to `StdinReader`'s internal `BufferedReader`.
+pub struct StdinReaderGuard<'a> {
+ inner: MutexGuard<'a, BufferedReader<StdReader>>,
+}
+
+impl<'a> Deref<BufferedReader<StdReader>> for StdinReaderGuard<'a> {
+ fn deref(&self) -> &BufferedReader<StdReader> {
+ &*self.inner
+ }
+}
+
+impl<'a> DerefMut<BufferedReader<StdReader>> for StdinReaderGuard<'a> {
+ fn deref_mut(&mut self) -> &mut BufferedReader<StdReader> {
+ &mut *self.inner
+ }
+}
+
+impl StdinReader {
+ /// Locks the `StdinReader`, granting the calling thread exclusive access
+ /// to the underlying `BufferedReader`.
+ ///
+ /// This provides access to methods like `chars` and `lines`.
+ ///
+ /// ## Example
+ ///
+ /// ```rust
+ /// use std::io;
+ ///
+ /// for line in io::stdin().lock().lines() {
+ /// println!("{}", line.unwrap());
+ /// }
+ /// ```
+ pub fn lock<'a>(&'a mut self) -> StdinReaderGuard<'a> {
+ StdinReaderGuard {
+ inner: self.inner.lock()
+ }
+ }
+
+ /// Like `Buffer::read_line`.
+ ///
+ /// The read is performed atomically - concurrent read calls in other
+ /// threads will not interleave with this one.
+ pub fn read_line(&mut self) -> IoResult<String> {
+ self.inner.lock().read_line()
+ }
+
+ /// Like `Buffer::read_until`.
+ ///
+ /// The read is performed atomically - concurrent read calls in other
+ /// threads will not interleave with this one.
+ pub fn read_until(&mut self, byte: u8) -> IoResult<Vec<u8>> {
+ self.inner.lock().read_until(byte)
+ }
+
+ /// Like `Buffer::read_char`.
+ ///
+ /// The read is performed atomically - concurrent read calls in other
+ /// threads will not interleave with this one.
+ pub fn read_char(&mut self) -> IoResult<char> {
+ self.inner.lock().read_char()
+ }
+}
+
+impl Reader for StdinReader {
+ fn read(&mut self, buf: &mut [u8]) -> IoResult<uint> {
+ self.inner.lock().read(buf)
+ }
+
+ // We have to manually delegate all of these because the default impls call
+ // read more than once and we don't want those calls to interleave (or
+ // incur the costs of repeated locking).
+
+ fn read_at_least(&mut self, min: uint, buf: &mut [u8]) -> IoResult<uint> {
+ self.inner.lock().read_at_least(min, buf)
+ }
+
+ fn push_at_least(&mut self, min: uint, len: uint, buf: &mut Vec<u8>) -> IoResult<uint> {
+ self.inner.lock().push_at_least(min, len, buf)
+ }
+
+ fn read_to_end(&mut self) -> IoResult<Vec<u8>> {
+ self.inner.lock().read_to_end()
+ }
+
+ fn read_le_uint_n(&mut self, nbytes: uint) -> IoResult<u64> {
+ self.inner.lock().read_le_uint_n(nbytes)
+ }
+
+ fn read_be_uint_n(&mut self, nbytes: uint) -> IoResult<u64> {
+ self.inner.lock().read_be_uint_n(nbytes)
+ }
+}
+
+/// Creates a new handle to the stdin of the current process.
///
-/// Care should be taken when creating multiple handles to the stdin of a
-/// process. Because this is a buffered reader by default, it's possible for
-/// pending input to be unconsumed in one reader and unavailable to other
-/// readers. It is recommended that only one handle at a time is created for the
-/// stdin of a process.
+/// The returned handle is a wrapper around a global `BufferedReader` shared
+/// by all threads. If buffered access is not desired, the `stdin_raw` function
+/// is provided to provide unbuffered access to stdin.
///
/// See `stdout()` for more notes about this function.
-pub fn stdin() -> BufferedReader<StdReader> {
- // The default buffer capacity is 64k, but apparently windows doesn't like
- // 64k reads on stdin. See #13304 for details, but the idea is that on
- // windows we use a slightly smaller buffer that's been seen to be
- // acceptable.
- if cfg!(windows) {
- BufferedReader::with_capacity(8 * 1024, stdin_raw())
- } else {
- BufferedReader::new(stdin_raw())
+pub fn stdin() -> StdinReader {
+ // We're following the same strategy as kimundi's lazy_static library
+ static mut STDIN: *const StdinReader = 0 as *const StdinReader;
+ static ONCE: Once = ONCE_INIT;
+
+ unsafe {
+ ONCE.doit(|| {
+ // The default buffer capacity is 64k, but apparently windows doesn't like
+ // 64k reads on stdin. See #13304 for details, but the idea is that on
+ // windows we use a slightly smaller buffer that's been seen to be
+ // acceptable.
+ let stdin = if cfg!(windows) {
+ BufferedReader::with_capacity(8 * 1024, stdin_raw())
+ } else {
+ BufferedReader::new(stdin_raw())
+ };
+ let stdin = StdinReader {
+ inner: Arc::new(Mutex::new(stdin))
+ };
+ STDIN = mem::transmute(box stdin);
+ });
+
+ (*STDIN).clone()
}
}
set_stdout(box w);
println!("hello!");
});
- assert_eq!(r.read_to_string().unwrap(), "hello!\n".to_string());
+ assert_eq!(r.read_to_string().unwrap(), "hello!\n");
}
#[test]
panic!("my special message");
});
let s = r.read_to_string().unwrap();
- assert!(s.as_slice().contains("my special message"));
+ assert!(s.contains("my special message"));
}
}
use io;
use libc;
use ops::Drop;
-use option::{Option, None, Some};
+use option::Option;
+use option::Option::{None, Some};
use os;
use path::{Path, GenericPath};
-use result::{Ok, Err};
+use result::Result::{Ok, Err};
use sync::atomic;
/// A wrapper for a path to temporary directory implementing automatic
/// A `Writer` which ignores bytes written to it, like /dev/null.
pub struct NullWriter;
+impl Copy for NullWriter {}
+
impl Writer for NullWriter {
#[inline]
fn write(&mut self, _buf: &[u8]) -> io::IoResult<()> { Ok(()) }
/// A `Reader` which returns an infinite stream of 0 bytes, like /dev/zero.
pub struct ZeroReader;
+impl Copy for ZeroReader {}
+
impl Reader for ZeroReader {
#[inline]
fn read(&mut self, buf: &mut [u8]) -> io::IoResult<uint> {
/// A `Reader` which is always at EOF, like /dev/null.
pub struct NullReader;
+impl Copy for NullReader {}
+
impl Reader for NullReader {
#[inline]
fn read(&mut self, _buf: &mut [u8]) -> io::IoResult<uint> {
#![allow(unknown_features)]
#![feature(macro_rules, globs, linkage)]
#![feature(default_type_params, phase, lang_items, unsafe_destructor)]
-#![feature(import_shadowing, slicing_syntax)]
+#![feature(import_shadowing, slicing_syntax, tuple_indexing)]
// Don't link to std. We are std.
#![no_std]
use char;
use char::Char;
+use kinds::Copy;
use num;
use num::{Int, Float, FPNaN, FPInfinite, ToPrimitive};
use slice::{SlicePrelude, CloneSliceAllocPrelude};
ExpBin,
}
+impl Copy for ExponentFormat {}
+
/// The number of digits used for emitting the fractional part of a number, if
/// any.
pub enum SignificantDigits {
DigExact(uint)
}
+impl Copy for SignificantDigits {}
+
/// How to emit the sign of a number.
pub enum SignFormat {
/// No sign will be printed. The exponent sign will also be emitted.
SignAll,
}
-/// Converts an integral number to its string representation as a byte vector.
-/// This is meant to be a common base implementation for all integral string
-/// conversion functions like `to_string()` or `to_str_radix()`.
-///
-/// # Arguments
-///
-/// - `num` - The number to convert. Accepts any number that
-/// implements the numeric traits.
-/// - `radix` - Base to use. Accepts only the values 2-36.
-/// - `sign` - How to emit the sign. Options are:
-/// - `SignNone`: No sign at all. Basically emits `abs(num)`.
-/// - `SignNeg`: Only `-` on negative values.
-/// - `SignAll`: Both `+` on positive, and `-` on negative numbers.
-/// - `f` - a callback which will be invoked for each ascii character
-/// which composes the string representation of this integer
-///
-/// # Panics
-///
-/// - Panics if `radix` < 2 or `radix` > 36.
+impl Copy for SignFormat {}
+
+/**
+ * Converts an integral number to its string representation as a byte vector.
+ * This is meant to be a common base implementation for all integral string
+ * conversion functions like `to_string()` or `to_str_radix()`.
+ *
+ * # Arguments
+ * - `num` - The number to convert. Accepts any number that
+ * implements the numeric traits.
+ * - `radix` - Base to use. Accepts only the values 2-36.
+ * - `sign` - How to emit the sign. Options are:
+ * - `SignNone`: No sign at all. Basically emits `abs(num)`.
+ * - `SignNeg`: Only `-` on negative values.
+ * - `SignAll`: Both `+` on positive, and `-` on negative numbers.
+ * - `f` - a callback which will be invoked for each ascii character
+ * which composes the string representation of this integer
+ *
+ * # Panics
+ * - Panics if `radix` < 2 or `radix` > 36.
+ */
fn int_to_str_bytes_common<T: Int>(num: T, radix: uint, sign: SignFormat, f: |u8|) {
assert!(2 <= radix && radix <= 36);
#[test]
fn test_int_to_str_overflow() {
let mut i8_val: i8 = 127_i8;
- assert_eq!(i8_val.to_string(), "127".to_string());
+ assert_eq!(i8_val.to_string(), "127");
i8_val += 1 as i8;
- assert_eq!(i8_val.to_string(), "-128".to_string());
+ assert_eq!(i8_val.to_string(), "-128");
let mut i16_val: i16 = 32_767_i16;
- assert_eq!(i16_val.to_string(), "32767".to_string());
+ assert_eq!(i16_val.to_string(), "32767");
i16_val += 1 as i16;
- assert_eq!(i16_val.to_string(), "-32768".to_string());
+ assert_eq!(i16_val.to_string(), "-32768");
let mut i32_val: i32 = 2_147_483_647_i32;
- assert_eq!(i32_val.to_string(), "2147483647".to_string());
+ assert_eq!(i32_val.to_string(), "2147483647");
i32_val += 1 as i32;
- assert_eq!(i32_val.to_string(), "-2147483648".to_string());
+ assert_eq!(i32_val.to_string(), "-2147483648");
let mut i64_val: i64 = 9_223_372_036_854_775_807_i64;
- assert_eq!(i64_val.to_string(), "9223372036854775807".to_string());
+ assert_eq!(i64_val.to_string(), "9223372036854775807");
i64_val += 1 as i64;
- assert_eq!(i64_val.to_string(), "-9223372036854775808".to_string());
+ assert_eq!(i64_val.to_string(), "-9223372036854775808");
}
}
#[test]
fn test_uint_to_str_overflow() {
let mut u8_val: u8 = 255_u8;
- assert_eq!(u8_val.to_string(), "255".to_string());
+ assert_eq!(u8_val.to_string(), "255");
u8_val += 1 as u8;
- assert_eq!(u8_val.to_string(), "0".to_string());
+ assert_eq!(u8_val.to_string(), "0");
let mut u16_val: u16 = 65_535_u16;
- assert_eq!(u16_val.to_string(), "65535".to_string());
+ assert_eq!(u16_val.to_string(), "65535");
u16_val += 1 as u16;
- assert_eq!(u16_val.to_string(), "0".to_string());
+ assert_eq!(u16_val.to_string(), "0");
let mut u32_val: u32 = 4_294_967_295_u32;
- assert_eq!(u32_val.to_string(), "4294967295".to_string());
+ assert_eq!(u32_val.to_string(), "4294967295");
u32_val += 1 as u32;
- assert_eq!(u32_val.to_string(), "0".to_string());
+ assert_eq!(u32_val.to_string(), "0");
let mut u64_val: u64 = 18_446_744_073_709_551_615_u64;
- assert_eq!(u64_val.to_string(), "18446744073709551615".to_string());
+ assert_eq!(u64_val.to_string(), "18446744073709551615");
u64_val += 1 as u64;
- assert_eq!(u64_val.to_string(), "0".to_string());
+ assert_eq!(u64_val.to_string(), "0");
}
#[test]
use fmt;
use io::{IoResult, IoError};
use iter::{Iterator, IteratorExt};
+use kinds::Copy;
use libc::{c_void, c_int};
use libc;
use boxed::Box;
use ops::Drop;
-use option::{Some, None, Option};
+use option::Option;
+use option::Option::{Some, None};
use os;
use path::{Path, GenericPath, BytesContainer};
use sys;
use sys::os as os_imp;
use ptr::RawPtr;
use ptr;
-use result::{Err, Ok, Result};
+use result::Result;
+use result::Result::{Err, Ok};
use slice::{AsSlice, SlicePrelude, PartialEqSlicePrelude};
use slice::CloneSliceAllocPrelude;
use str::{Str, StrPrelude, StrAllocating};
pub mod windows {
use libc::types::os::arch::extra::DWORD;
use libc;
- use option::{None, Option};
+ use option::Option;
+ use option::Option::None;
use option;
use os::TMPBUF_SZ;
use slice::{SlicePrelude};
// set `res` to None and continue.
let s = String::from_utf16(sub)
.expect("fill_utf16_buf_and_decode: closure created invalid UTF-16");
- res = option::Some(s)
+ res = option::Option::Some(s)
}
}
return res;
Serialize access through a global lock.
*/
fn with_env_lock<T>(f: || -> T) -> T {
- use rustrt::mutex::{StaticNativeMutex, NATIVE_MUTEX_INIT};
+ use sync::{StaticMutex, MUTEX_INIT};
- static LOCK: StaticNativeMutex = NATIVE_MUTEX_INIT;
+ static LOCK: StaticMutex = MUTEX_INIT;
- unsafe {
- let _guard = LOCK.lock();
- f()
- }
+ let _guard = LOCK.lock();
+ f()
}
/// Returns a vector of (variable, value) pairs, for all the environment
fn env_convert(input: Vec<Vec<u8>>) -> Vec<(Vec<u8>, Vec<u8>)> {
let mut pairs = Vec::new();
for p in input.iter() {
- let mut it = p.as_slice().splitn(1, |b| *b == b'=');
+ let mut it = p.splitn(1, |b| *b == b'=');
let key = it.next().unwrap().to_vec();
let default: &[u8] = &[];
let val = it.next().unwrap_or(default).to_vec();
pub writer: c_int,
}
+impl Copy for Pipe {}
+
/// Creates a new low-level OS in-memory pipe.
///
/// This function can fail to succeed if there are no more resources available
kind: MemoryMapKind,
}
+#[cfg(not(stage0))]
+impl Copy for MemoryMap {}
+
/// Type of memory map
pub enum MemoryMapKind {
/// Virtual memory map. Usually used to change the permissions of a given
MapVirtual
}
+impl Copy for MemoryMapKind {}
+
/// Options the memory map is created with
pub enum MapOption {
/// The memory should be readable
MapNonStandardFlags(c_int),
}
+impl Copy for MapOption {}
+
/// Possible errors when creating a map.
pub enum MapError {
/// ## The following are POSIX-specific
ErrMapViewOfFile(uint)
}
+impl Copy for MapError {}
+
impl fmt::Show for MapError {
fn fmt(&self, out: &mut fmt::Formatter) -> fmt::Result {
let str = match *self {
fn test_setenv() {
let n = make_rand_name();
setenv(n.as_slice(), "VALUE");
- assert_eq!(getenv(n.as_slice()), option::Some("VALUE".to_string()));
+ assert_eq!(getenv(n.as_slice()), option::Option::Some("VALUE".to_string()));
}
#[test]
let n = make_rand_name();
setenv(n.as_slice(), "VALUE");
unsetenv(n.as_slice());
- assert_eq!(getenv(n.as_slice()), option::None);
+ assert_eq!(getenv(n.as_slice()), option::Option::None);
}
#[test]
let n = make_rand_name();
setenv(n.as_slice(), "1");
setenv(n.as_slice(), "2");
- assert_eq!(getenv(n.as_slice()), option::Some("2".to_string()));
+ assert_eq!(getenv(n.as_slice()), option::Option::Some("2".to_string()));
setenv(n.as_slice(), "");
- assert_eq!(getenv(n.as_slice()), option::Some("".to_string()));
+ assert_eq!(getenv(n.as_slice()), option::Option::Some("".to_string()));
}
// Windows GetEnvironmentVariable requires some extra work to make sure
let n = make_rand_name();
setenv(n.as_slice(), s.as_slice());
debug!("{}", s.clone());
- assert_eq!(getenv(n.as_slice()), option::Some(s));
+ assert_eq!(getenv(n.as_slice()), option::Option::Some(s));
}
#[test]
// MingW seems to set some funky environment variables like
// "=C:=C:\MinGW\msys\1.0\bin" and "!::=::\" that are returned
// from env() but not visible from getenv().
- assert!(v2.is_none() || v2 == option::Some(v));
+ assert!(v2.is_none() || v2 == option::Option::Some(v));
}
}
#[test]
fn memory_map_rw() {
- use result::{Ok, Err};
+ use result::Result::{Ok, Err};
let chunk = match os::MemoryMap::new(16, &[
os::MapReadable,
#[test]
fn memory_map_file() {
- use result::{Ok, Err};
+ use result::Result::{Ok, Err};
use os::*;
use libc::*;
use io::fs;
#[cfg(unix)]
fn join_paths_unix() {
fn test_eq(input: &[&str], output: &str) -> bool {
- join_paths(input).unwrap().as_slice() == output.as_bytes()
+ join_paths(input).unwrap() == output.as_bytes()
}
assert!(test_eq(&[], ""));
#[cfg(windows)]
fn join_paths_windows() {
fn test_eq(input: &[&str], output: &str) -> bool {
- join_paths(input).unwrap().as_slice() == output.as_bytes()
+ join_paths(input).unwrap() == output.as_bytes()
}
assert!(test_eq(&[], ""));
use clone::Clone;
use fmt;
use iter::IteratorExt;
-use option::{Option, None, Some};
+use option::Option;
+use option::Option::{None, Some};
use str;
use str::{CowString, MaybeOwned, Str, StrPrelude};
use string::String;
let input = r"\foo\bar\baz";
let path: WindowsPath = WindowsPath::new(input.to_c_str());
- assert_eq!(path.as_str().unwrap(), input.as_slice());
+ assert_eq!(path.as_str().unwrap(), input);
}
}
use io::Writer;
use iter::{DoubleEndedIteratorExt, AdditiveIterator, Extend};
use iter::{Iterator, IteratorExt, Map};
-use option::{Option, None, Some};
+use option::Option;
+use option::Option::{None, Some};
use kinds::Sized;
use str::{FromStr, Str};
use str;
unsafe fn set_filename_unchecked<T: BytesContainer>(&mut self, filename: T) {
let filename = filename.container_as_bytes();
match self.sepidx {
- None if b".." == self.repr.as_slice() => {
+ None if b".." == self.repr => {
let mut v = Vec::with_capacity(3 + filename.len());
v.push_all(dot_dot_static);
v.push(SEP_BYTE);
self.repr = Path::normalize(v.as_slice());
}
}
- self.sepidx = self.repr.as_slice().rposition_elem(&SEP_BYTE);
+ self.sepidx = self.repr.rposition_elem(&SEP_BYTE);
}
unsafe fn push_unchecked<T: BytesContainer>(&mut self, path: T) {
// FIXME: this is slow
self.repr = Path::normalize(v.as_slice());
}
- self.sepidx = self.repr.as_slice().rposition_elem(&SEP_BYTE);
+ self.sepidx = self.repr.rposition_elem(&SEP_BYTE);
}
}
}
fn dirname<'a>(&'a self) -> &'a [u8] {
match self.sepidx {
- None if b".." == self.repr.as_slice() => self.repr.as_slice(),
+ None if b".." == self.repr => self.repr.as_slice(),
None => dot_static,
Some(0) => self.repr[..1],
Some(idx) if self.repr[idx+1..] == b".." => self.repr.as_slice(),
fn filename<'a>(&'a self) -> Option<&'a [u8]> {
match self.sepidx {
- None if b"." == self.repr.as_slice() ||
- b".." == self.repr.as_slice() => None,
+ None if b"." == self.repr ||
+ b".." == self.repr => None,
None => Some(self.repr.as_slice()),
Some(idx) if self.repr[idx+1..] == b".." => None,
Some(0) if self.repr[1..].is_empty() => None,
fn pop(&mut self) -> bool {
match self.sepidx {
- None if b"." == self.repr.as_slice() => false,
+ None if b"." == self.repr => false,
None => {
self.repr = vec![b'.'];
self.sepidx = None;
true
}
- Some(0) if b"/" == self.repr.as_slice() => false,
+ Some(0) if b"/" == self.repr => false,
Some(idx) => {
if idx == 0 {
self.repr.truncate(idx+1);
} else {
self.repr.truncate(idx);
}
- self.sepidx = self.repr.as_slice().rposition_elem(&SEP_BYTE);
+ self.sepidx = self.repr.rposition_elem(&SEP_BYTE);
true
}
}
} else {
let mut ita = self.components();
let mut itb = other.components();
- if b"." == self.repr.as_slice() {
+ if b"." == self.repr {
return match itb.next() {
None => true,
Some(b) => b != b".."
}
}
}
- Some(Path::new(comps.as_slice().connect_vec(&SEP_BYTE)))
+ Some(Path::new(comps.connect_vec(&SEP_BYTE)))
}
}
// None result means the byte vector didn't need normalizing
fn normalize_helper<'a>(v: &'a [u8], is_abs: bool) -> Option<Vec<&'a [u8]>> {
- if is_abs && v.as_slice().is_empty() {
+ if is_abs && v.is_empty() {
return None;
}
let mut comps: Vec<&'a [u8]> = vec![];
t!(s: Path::new("foo/../../.."), "../..");
t!(s: Path::new("foo/../../bar"), "../bar");
- assert_eq!(Path::new(b"foo/bar").into_vec().as_slice(), b"foo/bar");
- assert_eq!(Path::new(b"/foo/../../bar").into_vec().as_slice(),
+ assert_eq!(Path::new(b"foo/bar").into_vec(), b"foo/bar");
+ assert_eq!(Path::new(b"/foo/../../bar").into_vec(),
b"/bar");
let p = Path::new(b"foo/bar\x80");
($path:expr, $disp:ident, $exp:expr) => (
{
let path = Path::new($path);
- assert!(path.$disp().to_string().as_slice() == $exp);
+ assert!(path.$disp().to_string() == $exp);
}
)
)
{
let path = Path::new($path);
let f = format!("{}", path.display());
- assert!(f.as_slice() == $exp);
+ assert!(f == $exp);
let f = format!("{}", path.filename_display());
- assert!(f.as_slice() == $expf);
+ assert!(f == $expf);
}
)
)
let path = Path::new($arg);
let comps = path.components().collect::<Vec<&[u8]>>();
let exp: &[&[u8]] = &[$($exp),*];
- assert_eq!(comps.as_slice(), exp);
+ assert_eq!(comps, exp);
let comps = path.components().rev().collect::<Vec<&[u8]>>();
let exp = exp.iter().rev().map(|&x|x).collect::<Vec<&[u8]>>();
assert_eq!(comps, exp)
let path = Path::new($arg);
let comps = path.str_components().collect::<Vec<Option<&str>>>();
let exp: &[Option<&str>] = &$exp;
- assert_eq!(comps.as_slice(), exp);
+ assert_eq!(comps, exp);
let comps = path.str_components().rev().collect::<Vec<Option<&str>>>();
let exp = exp.iter().rev().map(|&x|x).collect::<Vec<Option<&str>>>();
assert_eq!(comps, exp);
use io::Writer;
use iter::{AdditiveIterator, DoubleEndedIteratorExt, Extend};
use iter::{Iterator, IteratorExt, Map};
+use kinds::Copy;
use mem;
-use option::{Option, Some, None};
+use option::Option;
+use option::Option::{Some, None};
use slice::{AsSlice, SlicePrelude};
use str::{CharSplits, FromStr, Str, StrAllocating, StrVector, StrPrelude};
use string::String;
unsafe fn set_filename_unchecked<T: BytesContainer>(&mut self, filename: T) {
let filename = filename.container_as_str().unwrap();
match self.sepidx_or_prefix_len() {
- None if ".." == self.repr.as_slice() => {
+ None if ".." == self.repr => {
let mut s = String::with_capacity(3 + filename.len());
s.push_str("..");
s.push(SEP);
None => {
self.update_normalized(filename);
}
- Some((_,idxa,end)) if self.repr.as_slice().slice(idxa,end) == ".." => {
+ Some((_,idxa,end)) if self.repr.slice(idxa,end) == ".." => {
let mut s = String::with_capacity(end + 1 + filename.len());
- s.push_str(self.repr.as_slice().slice_to(end));
+ s.push_str(self.repr.slice_to(end));
s.push(SEP);
s.push_str(filename);
self.update_normalized(s);
}
Some((idxb,idxa,_)) if self.prefix == Some(DiskPrefix) && idxa == self.prefix_len() => {
let mut s = String::with_capacity(idxb + filename.len());
- s.push_str(self.repr.as_slice().slice_to(idxb));
+ s.push_str(self.repr.slice_to(idxb));
s.push_str(filename);
self.update_normalized(s);
}
Some((idxb,_,_)) => {
let mut s = String::with_capacity(idxb + 1 + filename.len());
- s.push_str(self.repr.as_slice().slice_to(idxb));
+ s.push_str(self.repr.slice_to(idxb));
s.push(SEP);
s.push_str(filename);
self.update_normalized(s);
/// Always returns a `Some` value.
fn dirname_str<'a>(&'a self) -> Option<&'a str> {
Some(match self.sepidx_or_prefix_len() {
- None if ".." == self.repr.as_slice() => self.repr.as_slice(),
+ None if ".." == self.repr => self.repr.as_slice(),
None => ".",
- Some((_,idxa,end)) if self.repr.as_slice().slice(idxa, end) == ".." => {
+ Some((_,idxa,end)) if self.repr.slice(idxa, end) == ".." => {
self.repr.as_slice()
}
- Some((idxb,_,end)) if self.repr.as_slice().slice(idxb, end) == "\\" => {
+ Some((idxb,_,end)) if self.repr.slice(idxb, end) == "\\" => {
self.repr.as_slice()
}
- Some((0,idxa,_)) => self.repr.as_slice().slice_to(idxa),
+ Some((0,idxa,_)) => self.repr.slice_to(idxa),
Some((idxb,idxa,_)) => {
match self.prefix {
Some(DiskPrefix) | Some(VerbatimDiskPrefix) if idxb == self.prefix_len() => {
- self.repr.as_slice().slice_to(idxa)
+ self.repr.slice_to(idxa)
}
- _ => self.repr.as_slice().slice_to(idxb)
+ _ => self.repr.slice_to(idxb)
}
}
})
#[inline]
fn pop(&mut self) -> bool {
match self.sepidx_or_prefix_len() {
- None if "." == self.repr.as_slice() => false,
+ None if "." == self.repr => false,
None => {
self.repr = String::from_str(".");
self.sepidx = None;
true
}
Some((idxb,idxa,end)) if idxb == idxa && idxb == end => false,
- Some((idxb,_,end)) if self.repr.as_slice().slice(idxb, end) == "\\" => false,
+ Some((idxb,_,end)) if self.repr.slice(idxb, end) == "\\" => false,
Some((idxb,idxa,_)) => {
let trunc = match self.prefix {
Some(DiskPrefix) | Some(VerbatimDiskPrefix) | None => {
if self.prefix.is_some() {
Some(Path::new(match self.prefix {
Some(DiskPrefix) if self.is_absolute() => {
- self.repr.as_slice().slice_to(self.prefix_len()+1)
+ self.repr.slice_to(self.prefix_len()+1)
}
Some(VerbatimDiskPrefix) => {
- self.repr.as_slice().slice_to(self.prefix_len()+1)
+ self.repr.slice_to(self.prefix_len()+1)
}
- _ => self.repr.as_slice().slice_to(self.prefix_len())
+ _ => self.repr.slice_to(self.prefix_len())
}))
} else if is_vol_relative(self) {
- Some(Path::new(self.repr.as_slice().slice_to(1)))
+ Some(Path::new(self.repr.slice_to(1)))
} else {
None
}
fn is_absolute(&self) -> bool {
match self.prefix {
Some(DiskPrefix) => {
- let rest = self.repr.as_slice().slice_from(self.prefix_len());
+ let rest = self.repr.slice_from(self.prefix_len());
rest.len() > 0 && rest.as_bytes()[0] == SEP_BYTE
}
Some(_) => true,
} else {
let mut ita = self.str_components().map(|x|x.unwrap());
let mut itb = other.str_components().map(|x|x.unwrap());
- if "." == self.repr.as_slice() {
+ if "." == self.repr {
return itb.next() != Some("..");
}
loop {
fn update_sepidx(&mut self) {
let s = if self.has_nonsemantic_trailing_slash() {
- self.repr.as_slice().slice_to(self.repr.len()-1)
+ self.repr.slice_to(self.repr.len()-1)
} else { self.repr.as_slice() };
let idx = s.rfind(if !prefix_is_verbatim(self.prefix) { is_sep }
else { is_sep_verbatim });
}
// now ensure normalization didn't change anything
if repr.slice_from(path.prefix_len()) ==
- new_path.repr.as_slice().slice_from(new_path.prefix_len()) {
+ new_path.repr.slice_from(new_path.prefix_len()) {
Some(new_path)
} else {
None
DiskPrefix
}
+impl Copy for PathPrefix {}
+
fn parse_prefix<'a>(mut path: &'a str) -> Option<PathPrefix> {
if path.starts_with("\\\\") {
// \\
t!(s: Path::new("foo\\..\\..\\.."), "..\\..");
t!(s: Path::new("foo\\..\\..\\bar"), "..\\bar");
- assert_eq!(Path::new(b"foo\\bar").into_vec().as_slice(), b"foo\\bar");
- assert_eq!(Path::new(b"\\foo\\..\\..\\bar").into_vec().as_slice(), b"\\bar");
+ assert_eq!(Path::new(b"foo\\bar").into_vec(), b"foo\\bar");
+ assert_eq!(Path::new(b"\\foo\\..\\..\\bar").into_vec(), b"\\bar");
t!(s: Path::new("\\\\a"), "\\a");
t!(s: Path::new("\\\\a\\"), "\\a");
#[test]
fn test_display_str() {
let path = Path::new("foo");
- assert_eq!(path.display().to_string(), "foo".to_string());
+ assert_eq!(path.display().to_string(), "foo");
let path = Path::new(b"\\");
- assert_eq!(path.filename_display().to_string(), "".to_string());
+ assert_eq!(path.filename_display().to_string(), "");
let path = Path::new("foo");
let mo = path.display().as_cow();
{
let path = Path::new($path);
let f = format!("{}", path.display());
- assert_eq!(f.as_slice(), $exp);
+ assert_eq!(f, $exp);
let f = format!("{}", path.filename_display());
- assert_eq!(f.as_slice(), $expf);
+ assert_eq!(f, $expf);
}
)
)
let comps = path.str_components().map(|x|x.unwrap())
.collect::<Vec<&str>>();
let exp: &[&str] = &$exp;
- assert_eq!(comps.as_slice(), exp);
+ assert_eq!(comps, exp);
let comps = path.str_components().rev().map(|x|x.unwrap())
.collect::<Vec<&str>>();
let exp = exp.iter().rev().map(|&x|x).collect::<Vec<&str>>();
let path = Path::new($path);
let comps = path.components().collect::<Vec<&[u8]>>();
let exp: &[&[u8]] = &$exp;
- assert_eq!(comps.as_slice(), exp);
+ assert_eq!(comps, exp);
let comps = path.components().rev().collect::<Vec<&[u8]>>();
let exp = exp.iter().rev().map(|&x|x).collect::<Vec<&[u8]>>();
assert_eq!(comps, exp);
use clone::Clone;
use io::IoResult;
use iter::{Iterator, IteratorExt};
+use kinds::Copy;
use mem;
use rc::Rc;
-use result::{Ok, Err};
+use result::Result::{Ok, Err};
use vec::Vec;
#[cfg(not(target_word_size="64"))]
/// The standard RNG. This is designed to be efficient on the current
/// platform.
-pub struct StdRng { rng: IsaacWordRng }
+pub struct StdRng {
+ rng: IsaacWordRng,
+}
+
+impl Copy for StdRng {}
impl StdRng {
/// Create a randomly seeded instance of `StdRng`.
let mut one = [1i];
r.shuffle(&mut one);
let b: &[_] = &[1];
- assert_eq!(one.as_slice(), b);
+ assert_eq!(one, b);
let mut two = [1i, 2];
r.shuffle(&mut two);
let mut x = [1i, 1, 1];
r.shuffle(&mut x);
let b: &[_] = &[1, 1, 1];
- assert_eq!(x.as_slice(), b);
+ assert_eq!(x, b);
}
#[test]
let mut v = [1i, 1, 1];
r.shuffle(&mut v);
let b: &[_] = &[1, 1, 1];
- assert_eq!(v.as_slice(), b);
+ assert_eq!(v, b);
assert_eq!(r.gen_range(0u, 1u), 0u);
}
use path::Path;
use rand::Rng;
use rand::reader::ReaderRng;
- use result::{Ok, Err};
+ use result::Result::{Ok, Err};
use slice::SlicePrelude;
use mem;
use os::errno;
use mem;
use os;
use rand::Rng;
- use result::{Ok};
+ use result::Result::{Ok};
use self::libc::{c_int, size_t};
use slice::{SlicePrelude};
use ops::Drop;
use os;
use rand::Rng;
- use result::{Ok, Err};
+ use result::Result::{Ok, Err};
use self::libc::{DWORD, BYTE, LPCSTR, BOOL};
use self::libc::types::os::arch::extra::{LONG_PTR};
use slice::{SlicePrelude};
use io::Reader;
use rand::Rng;
-use result::{Ok, Err};
+use result::Result::{Ok, Err};
use slice::SlicePrelude;
/// An RNG that reads random bytes straight from a `Reader`. This will
use io::{IoResult, Writer};
use iter::{Iterator, IteratorExt};
-use option::{Some, None};
+use option::Option::{Some, None};
use os;
-use result::{Ok, Err};
+use result::Result::{Ok, Err};
use str::{StrPrelude, from_str};
use sync::atomic;
use unicode::char::UnicodeChar;
use io::{IoResult, Writer};
use libc;
use mem;
- use option::{Some, None, Option};
- use result::{Ok, Err};
- use rustrt::mutex::{StaticNativeMutex, NATIVE_MUTEX_INIT};
+ use option::Option;
+ use option::Option::{Some, None};
+ use result::Result::{Ok, Err};
+ use sync::{StaticMutex, MUTEX_INIT};
/// As always - iOS on arm uses SjLj exceptions and
/// _Unwind_Backtrace is even not available there. Still,
// while it doesn't require a lock to work as everything is
// local, it still displays much nicer backtraces when a
// couple of tasks panic simultaneously
- static LOCK: StaticNativeMutex = NATIVE_MUTEX_INIT;
- let _g = unsafe { LOCK.lock() };
+ static LOCK: StaticMutex = MUTEX_INIT;
+ let _g = LOCK.lock();
try!(writeln!(w, "stack backtrace:"));
// 100 lines should be enough
// is semi-reasonable in terms of printing anyway, and we know that all
// I/O done here is blocking I/O, not green I/O, so we don't have to
// worry about this being a native vs green mutex.
- static LOCK: StaticNativeMutex = NATIVE_MUTEX_INIT;
- let _g = unsafe { LOCK.lock() };
+ static LOCK: StaticMutex = MUTEX_INIT;
+ let _g = LOCK.lock();
try!(writeln!(w, "stack backtrace:"));
use libc;
use mem;
use ops::Drop;
- use option::{Some, None};
+ use option::Option::{Some, None};
use path::Path;
- use result::{Ok, Err};
- use rustrt::mutex::{StaticNativeMutex, NATIVE_MUTEX_INIT};
+ use result::Result::{Ok, Err};
+ use sync::{StaticMutex, MUTEX_INIT};
use slice::SlicePrelude;
use str::StrPrelude;
use dynamic_lib::DynamicLibrary;
pub fn write(w: &mut Writer) -> IoResult<()> {
// According to windows documentation, all dbghelp functions are
// single-threaded.
- static LOCK: StaticNativeMutex = NATIVE_MUTEX_INIT;
- let _g = unsafe { LOCK.lock() };
+ static LOCK: StaticMutex = MUTEX_INIT;
+ let _g = LOCK.lock();
// Open up dbghelp.dll, we don't link to it explicitly because it can't
// always be found. Additionally, it's nice having fewer dependencies.
macro_rules! t( ($a:expr, $b:expr) => ({
let mut m = Vec::new();
super::demangle(&mut m, $a).unwrap();
- assert_eq!(String::from_utf8(m).unwrap(), $b.to_string());
+ assert_eq!(String::from_utf8(m).unwrap(), $b);
}) )
#[test]
// except according to those terms.
use libc::uintptr_t;
-use option::{Some, None, Option};
+use option::Option;
+use option::Option::{Some, None};
use os;
use str::{FromStr, from_str, Str};
use sync::atomic;
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use sync::{Mutex, Condvar};
+
+/// A barrier enables multiple tasks to synchronize the beginning
+/// of some computation.
+///
+/// ```rust
+/// use std::sync::{Arc, Barrier};
+///
+/// let barrier = Arc::new(Barrier::new(10));
+/// for _ in range(0u, 10) {
+/// let c = barrier.clone();
+/// // The same messages will be printed together.
+/// // You will NOT see any interleaving.
+/// spawn(proc() {
+/// println!("before wait");
+/// c.wait();
+/// println!("after wait");
+/// });
+/// }
+/// ```
+pub struct Barrier {
+ lock: Mutex<BarrierState>,
+ cvar: Condvar,
+ num_threads: uint,
+}
+
+// The inner state of a barrier
+struct BarrierState {
+ count: uint,
+ generation_id: uint,
+}
+
+impl Barrier {
+ /// Create a new barrier that can block a given number of threads.
+ ///
+ /// A barrier will block `n`-1 threads which call `wait` and then wake up
+ /// all threads at once when the `n`th thread calls `wait`.
+ pub fn new(n: uint) -> Barrier {
+ Barrier {
+ lock: Mutex::new(BarrierState {
+ count: 0,
+ generation_id: 0,
+ }),
+ cvar: Condvar::new(),
+ num_threads: n,
+ }
+ }
+
+ /// Block the current thread until all threads have rendezvoused here.
+ ///
+ /// Barriers are re-usable after all threads have rendezvoused once, and can
+ /// be used continuously.
+ pub fn wait(&self) {
+ let mut lock = self.lock.lock();
+ let local_gen = lock.generation_id;
+ lock.count += 1;
+ if lock.count < self.num_threads {
+ // We need a while loop to guard against spurious wakeups.
+ // http://en.wikipedia.org/wiki/Spurious_wakeup
+ while local_gen == lock.generation_id &&
+ lock.count < self.num_threads {
+ self.cvar.wait(&lock);
+ }
+ } else {
+ lock.count = 0;
+ lock.generation_id += 1;
+ self.cvar.notify_all();
+ }
+ }
+}
+
+#[cfg(test)]
+mod tests {
+ use prelude::*;
+
+ use sync::{Arc, Barrier};
+ use comm::Empty;
+
+ #[test]
+ fn test_barrier() {
+ let barrier = Arc::new(Barrier::new(10));
+ let (tx, rx) = channel();
+
+ for _ in range(0u, 9) {
+ let c = barrier.clone();
+ let tx = tx.clone();
+ spawn(proc() {
+ c.wait();
+ tx.send(true);
+ });
+ }
+
+ // At this point, all spawned tasks should be blocked,
+ // so we shouldn't get anything from the port
+ assert!(match rx.try_recv() {
+ Err(Empty) => true,
+ _ => false,
+ });
+
+ barrier.wait();
+ // Now, the barrier is cleared and we should get data.
+ for _ in range(0u, 9) {
+ rx.recv();
+ }
+ }
+}
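The `Barrier` added above survives in today's `std::sync` with essentially the same shape, though the surrounding task API has changed (`thread::spawn` with closures replaced `spawn(proc() ...)`). A minimal sketch of the rendezvous behavior against the current API, mirroring the test above by counting how many threads make it past `wait`:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Barrier};
use std::thread;

// Spawn `n` threads that all rendezvous at a barrier; no thread passes
// `wait` until the `n`th thread has arrived.
fn run_barrier(n: usize) -> usize {
    let barrier = Arc::new(Barrier::new(n));
    let passed = Arc::new(AtomicUsize::new(0));
    let handles: Vec<_> = (0..n)
        .map(|_| {
            let b = Arc::clone(&barrier);
            let p = Arc::clone(&passed);
            thread::spawn(move || {
                b.wait();
                p.fetch_add(1, Ordering::SeqCst);
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    passed.load(Ordering::SeqCst)
}

fn main() {
    assert_eq!(run_barrier(10), 10);
}
```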
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use prelude::*;
+
+use sync::atomic::{mod, AtomicUint};
+use sync::{mutex, StaticMutexGuard};
+use sys_common::condvar as sys;
+use sys_common::mutex as sys_mutex;
+use time::Duration;
+
+/// A Condition Variable
+///
+/// Condition variables represent the ability to block a thread such that it
+/// consumes no CPU time while waiting for an event to occur. Condition
+/// variables are typically associated with a boolean predicate (a condition)
+/// and a mutex. The predicate is always verified inside of the mutex before
+/// determining that the thread must block.
+///
+/// Functions in this module will block the current **thread** of execution and
+/// are bindings to system-provided condition variables where possible. Note
+/// that this module places one additional restriction over the system condition
+/// variables: each condvar can be used with precisely one mutex at runtime. Any
+/// attempt to use multiple mutexes on the same condition variable will result
+/// in a runtime panic. If this is not desired, then the unsafe primitives in
+/// `sys` do not have this restriction but may result in undefined behavior.
+///
+/// # Example
+///
+/// ```
+/// use std::sync::{Arc, Mutex, Condvar};
+///
+/// let pair = Arc::new((Mutex::new(false), Condvar::new()));
+/// let pair2 = pair.clone();
+///
+/// // Inside of our lock, spawn a new thread, and then wait for it to start
+/// spawn(proc() {
+/// let &(ref lock, ref cvar) = &*pair2;
+/// let mut started = lock.lock();
+/// *started = true;
+/// cvar.notify_one();
+/// });
+///
+/// // wait for the thread to start up
+/// let &(ref lock, ref cvar) = &*pair;
+/// let started = lock.lock();
+/// while !*started {
+/// cvar.wait(&started);
+/// }
+/// ```
+pub struct Condvar { inner: Box<StaticCondvar> }
+
+/// Statically allocated condition variables.
+///
+/// This structure is identical to `Condvar` except that it is suitable for use
+/// in static initializers for other structures.
+///
+/// # Example
+///
+/// ```
+/// use std::sync::{StaticCondvar, CONDVAR_INIT};
+///
+/// static CVAR: StaticCondvar = CONDVAR_INIT;
+/// ```
+pub struct StaticCondvar {
+ inner: sys::Condvar,
+ mutex: AtomicUint,
+}
+
+/// Constant initializer for a statically allocated condition variable.
+pub const CONDVAR_INIT: StaticCondvar = StaticCondvar {
+ inner: sys::CONDVAR_INIT,
+ mutex: atomic::INIT_ATOMIC_UINT,
+};
+
+/// A trait for values which can be passed to the waiting methods of condition
+/// variables. This is implemented by the mutex guards in this module.
+///
+/// Note that this trait should likely not be implemented manually unless you
+/// really know what you're doing.
+pub trait AsMutexGuard {
+ #[allow(missing_docs)]
+ unsafe fn as_mutex_guard(&self) -> &StaticMutexGuard;
+}
+
+impl Condvar {
+ /// Creates a new condition variable which is ready to be waited on and
+ /// notified.
+ pub fn new() -> Condvar {
+ Condvar {
+ inner: box StaticCondvar {
+ inner: unsafe { sys::Condvar::new() },
+ mutex: AtomicUint::new(0),
+ }
+ }
+ }
+
+ /// Block the current thread until this condition variable receives a
+ /// notification.
+ ///
+ /// This function will atomically unlock the mutex specified (represented by
+ /// `guard`) and block the current thread. This means that any calls to
+ /// `notify_*()` which happen logically after the mutex is unlocked are
+ /// candidates to wake this thread up. When this function call returns, the
+ /// lock specified will have been re-acquired.
+ ///
+ /// Note that this function is susceptible to spurious wakeups. Condition
+ /// variables normally have a boolean predicate associated with them, and
+ /// the predicate must always be checked each time this function returns to
+ /// protect against spurious wakeups.
+ ///
+ /// # Panics
+ ///
+ /// This function will `panic!()` if it is used with more than one mutex
+ /// over time. Each condition variable is dynamically bound to exactly one
+ /// mutex to ensure defined behavior across platforms. If this functionality
+ /// is not desired, then unsafe primitives in `sys` are provided.
+ pub fn wait<T: AsMutexGuard>(&self, mutex_guard: &T) {
+ unsafe {
+ let me: &'static Condvar = &*(self as *const _);
+ me.inner.wait(mutex_guard)
+ }
+ }
+
+ /// Wait on this condition variable for a notification, timing out after a
+ /// specified duration.
+ ///
+ /// The semantics of this function are equivalent to `wait()` except that
+ /// the thread will be blocked for roughly no longer than `dur`. This method
+ /// should not be used for precise timing due to anomalies such as
+ /// preemption or platform differences that mean the actual time waited may
+ /// not be precisely `dur`.
+ ///
+ /// If the wait timed out, then `false` will be returned. Otherwise if a
+ /// notification was received then `true` will be returned.
+ ///
+ /// Like `wait`, the lock specified will be re-acquired when this function
+ /// returns, regardless of whether the timeout elapsed or not.
+ // Note that this method is *not* public, and this is quite intentional
+ // because we're not quite sure about the semantics of relative vs absolute
+ // durations or how the timing guarantees play into what the system APIs
+ // provide. There are also additional concerns about the unix-specific
+ // implementation which may need to be addressed.
+ #[allow(dead_code)]
+ fn wait_timeout<T: AsMutexGuard>(&self, mutex_guard: &T,
+ dur: Duration) -> bool {
+ unsafe {
+ let me: &'static Condvar = &*(self as *const _);
+ me.inner.wait_timeout(mutex_guard, dur)
+ }
+ }
+
+ /// Wake up one blocked thread on this condvar.
+ ///
+ /// If there is a blocked thread on this condition variable, then it will
+ /// be woken up from its call to `wait` or `wait_timeout`. Calls to
+ /// `notify_one` are not buffered in any way.
+ ///
+ /// To wake up all threads, see `notify_all()`.
+ pub fn notify_one(&self) { unsafe { self.inner.inner.notify_one() } }
+
+ /// Wake up all blocked threads on this condvar.
+ ///
+ /// This method will ensure that any current waiters on the condition
+ /// variable are awoken. Calls to `notify_all()` are not buffered in any
+ /// way.
+ ///
+ /// To wake up only one thread, see `notify_one()`.
+ pub fn notify_all(&self) { unsafe { self.inner.inner.notify_all() } }
+}
+
+impl Drop for Condvar {
+ fn drop(&mut self) {
+ unsafe { self.inner.inner.destroy() }
+ }
+}
+
+impl StaticCondvar {
+ /// Block the current thread until this condition variable receives a
+ /// notification.
+ ///
+ /// See `Condvar::wait`.
+ pub fn wait<T: AsMutexGuard>(&'static self, mutex_guard: &T) {
+ unsafe {
+ let lock = mutex_guard.as_mutex_guard();
+ let sys = mutex::guard_lock(lock);
+ self.verify(sys);
+ self.inner.wait(sys);
+ (*mutex::guard_poison(lock)).check("mutex");
+ }
+ }
+
+ /// Wait on this condition variable for a notification, timing out after a
+ /// specified duration.
+ ///
+ /// See `Condvar::wait_timeout`.
+ #[allow(dead_code)] // may want to stabilize this later, see wait_timeout above
+ fn wait_timeout<T: AsMutexGuard>(&'static self, mutex_guard: &T,
+ dur: Duration) -> bool {
+ unsafe {
+ let lock = mutex_guard.as_mutex_guard();
+ let sys = mutex::guard_lock(lock);
+ self.verify(sys);
+ let ret = self.inner.wait_timeout(sys, dur);
+ (*mutex::guard_poison(lock)).check("mutex");
+ return ret;
+ }
+ }
+
+ /// Wake up one blocked thread on this condvar.
+ ///
+ /// See `Condvar::notify_one`.
+ pub fn notify_one(&'static self) { unsafe { self.inner.notify_one() } }
+
+ /// Wake up all blocked threads on this condvar.
+ ///
+ /// See `Condvar::notify_all`.
+ pub fn notify_all(&'static self) { unsafe { self.inner.notify_all() } }
+
+ /// Deallocate all resources associated with this static condvar.
+ ///
+ /// This method is unsafe to call as there is no guarantee that there are no
+ /// active users of the condvar, and this also doesn't prevent any future
+ /// users of the condvar. This method is required to be called to not leak
+ /// memory on all platforms.
+ pub unsafe fn destroy(&'static self) {
+ self.inner.destroy()
+ }
+
+ fn verify(&self, mutex: &sys_mutex::Mutex) {
+ let addr = mutex as *const _ as uint;
+ match self.mutex.compare_and_swap(0, addr, atomic::SeqCst) {
+ // If we got out 0, then we have successfully bound the mutex to
+ // this cvar.
+ 0 => {}
+
+ // If we get out a value that's the same as `addr`, then someone
+ // already beat us to the punch.
+ n if n == addr => {}
+
+ // Anything else and we're using more than one mutex on this cvar,
+ // which is currently disallowed.
+ _ => panic!("attempted to use a condition variable with two \
+ mutexes"),
+ }
+ }
+}
+
+#[cfg(test)]
+mod tests {
+ use prelude::*;
+
+ use time::Duration;
+ use super::{StaticCondvar, CONDVAR_INIT};
+ use sync::{StaticMutex, MUTEX_INIT, Condvar, Mutex, Arc};
+
+ #[test]
+ fn smoke() {
+ let c = Condvar::new();
+ c.notify_one();
+ c.notify_all();
+ }
+
+ #[test]
+ fn static_smoke() {
+ static C: StaticCondvar = CONDVAR_INIT;
+ C.notify_one();
+ C.notify_all();
+ unsafe { C.destroy(); }
+ }
+
+ #[test]
+ fn notify_one() {
+ static C: StaticCondvar = CONDVAR_INIT;
+ static M: StaticMutex = MUTEX_INIT;
+
+ let g = M.lock();
+ spawn(proc() {
+ let _g = M.lock();
+ C.notify_one();
+ });
+ C.wait(&g);
+ drop(g);
+ unsafe { C.destroy(); M.destroy(); }
+ }
+
+ #[test]
+ fn notify_all() {
+ const N: uint = 10;
+
+ let data = Arc::new((Mutex::new(0), Condvar::new()));
+ let (tx, rx) = channel();
+ for _ in range(0, N) {
+ let data = data.clone();
+ let tx = tx.clone();
+ spawn(proc() {
+ let &(ref lock, ref cond) = &*data;
+ let mut cnt = lock.lock();
+ *cnt += 1;
+ if *cnt == N {
+ tx.send(());
+ }
+ while *cnt != 0 {
+ cond.wait(&cnt);
+ }
+ tx.send(());
+ });
+ }
+ drop(tx);
+
+ let &(ref lock, ref cond) = &*data;
+ rx.recv();
+ let mut cnt = lock.lock();
+ *cnt = 0;
+ cond.notify_all();
+ drop(cnt);
+
+ for _ in range(0, N) {
+ rx.recv();
+ }
+ }
+
+ #[test]
+ fn wait_timeout() {
+ static C: StaticCondvar = CONDVAR_INIT;
+ static M: StaticMutex = MUTEX_INIT;
+
+ let g = M.lock();
+ assert!(!C.wait_timeout(&g, Duration::nanoseconds(1000)));
+ spawn(proc() {
+ let _g = M.lock();
+ C.notify_one();
+ });
+ assert!(C.wait_timeout(&g, Duration::days(1)));
+ drop(g);
+ unsafe { C.destroy(); M.destroy(); }
+ }
+
+ #[test]
+ #[should_fail]
+ fn two_mutexes() {
+ static M1: StaticMutex = MUTEX_INIT;
+ static M2: StaticMutex = MUTEX_INIT;
+ static C: StaticCondvar = CONDVAR_INIT;
+
+ let g = M1.lock();
+ spawn(proc() {
+ let _g = M1.lock();
+ C.notify_one();
+ });
+ C.wait(&g);
+ drop(g);
+
+ C.wait(&M2.lock());
+
+ }
+}
+
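The condvar usage pattern documented above — always re-checking the predicate in a loop to guard against spurious wakeups — is unchanged in modern Rust, though `lock()` and `wait()` now return `Result`s for poison handling. A sketch of the module-level doc example against the current API:

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

// Signal a boolean flag from one thread to another. The `while` loop is
// the predicate check the docs above require: a wakeup alone does not
// prove the condition holds.
fn wait_for_start() -> bool {
    let pair = Arc::new((Mutex::new(false), Condvar::new()));
    let pair2 = Arc::clone(&pair);

    thread::spawn(move || {
        let (lock, cvar) = &*pair2;
        *lock.lock().unwrap() = true;
        cvar.notify_one();
    });

    let (lock, cvar) = &*pair;
    let mut started = lock.lock().unwrap();
    while !*started {
        // `wait` atomically releases the lock and re-acquires it on wakeup.
        started = cvar.wait(started).unwrap();
    }
    *started
}

fn main() {
    assert!(wait_for_start());
}
```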
+++ /dev/null
-// Copyright 2013 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-//! A (mostly) lock-free concurrent work-stealing deque
-//!
-//! This module contains an implementation of the Chase-Lev work stealing deque
-//! described in "Dynamic Circular Work-Stealing Deque". The implementation is
-//! heavily based on the pseudocode found in the paper.
-//!
-//! This implementation does not want to have the restriction of a garbage
-//! collector for reclamation of buffers, and instead it uses a shared pool of
-//! buffers. This shared pool is required for correctness in this
-//! implementation.
-//!
-//! The only lock-synchronized portions of this deque are the buffer allocation
-//! and deallocation portions. Otherwise all operations are lock-free.
-//!
-//! # Example
-//!
-//! use std::sync::deque::BufferPool;
-//!
-//! let mut pool = BufferPool::new();
-//! let (mut worker, mut stealer) = pool.deque();
-//!
-//! // Only the worker may push/pop
-//! worker.push(1i);
-//! worker.pop();
-//!
-//! // Stealers take data from the other end of the deque
-//! worker.push(1i);
-//! stealer.steal();
-//!
-//! // Stealers can be cloned to have many stealers stealing in parallel
-//! worker.push(1i);
-//! let mut stealer2 = stealer.clone();
-//! stealer2.steal();
-
-#![experimental]
-
-// NB: the "buffer pool" strategy is not done for speed, but rather for
-// correctness. For more info, see the comment on `swap_buffer`
-
-// FIXME: all atomic operations in this module use a SeqCst ordering. That is
-// probably overkill
-
-pub use self::Stolen::*;
-
-use core::prelude::*;
-
-use alloc::arc::Arc;
-use alloc::heap::{allocate, deallocate};
-use alloc::boxed::Box;
-use vec::Vec;
-use core::kinds::marker;
-use core::mem::{forget, min_align_of, size_of, transmute};
-use core::ptr;
-use rustrt::exclusive::Exclusive;
-
-use sync::atomic::{AtomicInt, AtomicPtr, SeqCst};
-
-// Once the queue is less than 1/K full, then it will be downsized. Note that
-// the deque requires that this number be less than 2.
-static K: int = 4;
-
-// Minimum number of bits that a buffer size should be. No buffer will resize to
-// under this value, and all deques will initially contain a buffer of this
-// size.
-//
-// The size in question is 1 << MIN_BITS
-static MIN_BITS: uint = 7;
-
-struct Deque<T> {
- bottom: AtomicInt,
- top: AtomicInt,
- array: AtomicPtr<Buffer<T>>,
- pool: BufferPool<T>,
-}
-
-/// Worker half of the work-stealing deque. This worker has exclusive access to
-/// one side of the deque, and uses `push` and `pop` method to manipulate it.
-///
-/// There may only be one worker per deque.
-pub struct Worker<T> {
- deque: Arc<Deque<T>>,
- _noshare: marker::NoSync,
-}
-
-/// The stealing half of the work-stealing deque. Stealers have access to the
-/// opposite end of the deque from the worker, and they only have access to the
-/// `steal` method.
-pub struct Stealer<T> {
- deque: Arc<Deque<T>>,
- _noshare: marker::NoSync,
-}
-
-/// When stealing some data, this is an enumeration of the possible outcomes.
-#[deriving(PartialEq, Show)]
-pub enum Stolen<T> {
- /// The deque was empty at the time of stealing
- Empty,
- /// The stealer lost the race for stealing data, and a retry may return more
- /// data.
- Abort,
- /// The stealer has successfully stolen some data.
- Data(T),
-}
-
-/// The allocation pool for buffers used by work-stealing deques. Right now this
-/// structure is used for reclamation of memory after it is no longer in use by
-/// deques.
-///
-/// This data structure is protected by a mutex, but it is rarely used. Deques
-/// will only use this structure when allocating a new buffer or deallocating a
-/// previous one.
-pub struct BufferPool<T> {
- pool: Arc<Exclusive<Vec<Box<Buffer<T>>>>>,
-}
-
-/// An internal buffer used by the chase-lev deque. This structure is actually
-/// implemented as a circular buffer, and is used as the intermediate storage of
-/// the data in the deque.
-///
-/// This type is implemented with *T instead of Vec<T> for two reasons:
-///
-/// 1. There is nothing safe about using this buffer. This easily allows the
-/// same value to be read twice in to rust, and there is nothing to
-/// prevent this. The usage by the deque must ensure that one of the
-/// values is forgotten. Furthermore, we only ever want to manually run
-/// destructors for values in this buffer (on drop) because the bounds
-/// are defined by the deque it's owned by.
-///
-/// 2. We can certainly avoid bounds checks using *T instead of Vec<T>, although
-/// LLVM is probably pretty good at doing this already.
-struct Buffer<T> {
- storage: *const T,
- log_size: uint,
-}
-
-impl<T: Send> BufferPool<T> {
- /// Allocates a new buffer pool which in turn can be used to allocate new
- /// deques.
- pub fn new() -> BufferPool<T> {
- BufferPool { pool: Arc::new(Exclusive::new(Vec::new())) }
- }
-
- /// Allocates a new work-stealing deque which will send/receiving memory to
- /// and from this buffer pool.
- pub fn deque(&self) -> (Worker<T>, Stealer<T>) {
- let a = Arc::new(Deque::new(self.clone()));
- let b = a.clone();
- (Worker { deque: a, _noshare: marker::NoSync },
- Stealer { deque: b, _noshare: marker::NoSync })
- }
-
- fn alloc(&mut self, bits: uint) -> Box<Buffer<T>> {
- unsafe {
- let mut pool = self.pool.lock();
- match pool.iter().position(|x| x.size() >= (1 << bits)) {
- Some(i) => pool.remove(i).unwrap(),
- None => box Buffer::new(bits)
- }
- }
- }
-
- fn free(&self, buf: Box<Buffer<T>>) {
- unsafe {
- let mut pool = self.pool.lock();
- match pool.iter().position(|v| v.size() > buf.size()) {
- Some(i) => pool.insert(i, buf),
- None => pool.push(buf),
- }
- }
- }
-}
-
-impl<T: Send> Clone for BufferPool<T> {
- fn clone(&self) -> BufferPool<T> { BufferPool { pool: self.pool.clone() } }
-}
-
-impl<T: Send> Worker<T> {
- /// Pushes data onto the front of this work queue.
- pub fn push(&self, t: T) {
- unsafe { self.deque.push(t) }
- }
- /// Pops data off the front of the work queue, returning `None` on an empty
- /// queue.
- pub fn pop(&self) -> Option<T> {
- unsafe { self.deque.pop() }
- }
-
- /// Gets access to the buffer pool that this worker is attached to. This can
- /// be used to create more deques which share the same buffer pool as this
- /// deque.
- pub fn pool<'a>(&'a self) -> &'a BufferPool<T> {
- &self.deque.pool
- }
-}
-
-impl<T: Send> Stealer<T> {
- /// Steals work off the end of the queue (opposite of the worker's end)
- pub fn steal(&self) -> Stolen<T> {
- unsafe { self.deque.steal() }
- }
-
- /// Gets access to the buffer pool that this stealer is attached to. This
- /// can be used to create more deques which share the same buffer pool as
- /// this deque.
- pub fn pool<'a>(&'a self) -> &'a BufferPool<T> {
- &self.deque.pool
- }
-}
-
-impl<T: Send> Clone for Stealer<T> {
- fn clone(&self) -> Stealer<T> {
- Stealer { deque: self.deque.clone(), _noshare: marker::NoSync }
- }
-}
-
-// Almost all of this code can be found directly in the paper so I'm not
-// personally going to heavily comment what's going on here.
-
-impl<T: Send> Deque<T> {
- fn new(mut pool: BufferPool<T>) -> Deque<T> {
- let buf = pool.alloc(MIN_BITS);
- Deque {
- bottom: AtomicInt::new(0),
- top: AtomicInt::new(0),
- array: AtomicPtr::new(unsafe { transmute(buf) }),
- pool: pool,
- }
- }
-
- unsafe fn push(&self, data: T) {
- let mut b = self.bottom.load(SeqCst);
- let t = self.top.load(SeqCst);
- let mut a = self.array.load(SeqCst);
- let size = b - t;
- if size >= (*a).size() - 1 {
- // You won't find this code in the chase-lev deque paper. This is
- // alluded to in a small footnote, however. We always free a buffer
- // when growing in order to prevent leaks.
- a = self.swap_buffer(b, a, (*a).resize(b, t, 1));
- b = self.bottom.load(SeqCst);
- }
- (*a).put(b, data);
- self.bottom.store(b + 1, SeqCst);
- }
-
- unsafe fn pop(&self) -> Option<T> {
- let b = self.bottom.load(SeqCst);
- let a = self.array.load(SeqCst);
- let b = b - 1;
- self.bottom.store(b, SeqCst);
- let t = self.top.load(SeqCst);
- let size = b - t;
- if size < 0 {
- self.bottom.store(t, SeqCst);
- return None;
- }
- let data = (*a).get(b);
- if size > 0 {
- self.maybe_shrink(b, t);
- return Some(data);
- }
- if self.top.compare_and_swap(t, t + 1, SeqCst) == t {
- self.bottom.store(t + 1, SeqCst);
- return Some(data);
- } else {
- self.bottom.store(t + 1, SeqCst);
- forget(data); // someone else stole this value
- return None;
- }
- }
-
- unsafe fn steal(&self) -> Stolen<T> {
- let t = self.top.load(SeqCst);
- let old = self.array.load(SeqCst);
- let b = self.bottom.load(SeqCst);
- let a = self.array.load(SeqCst);
- let size = b - t;
- if size <= 0 { return Empty }
- if size % (*a).size() == 0 {
- if a == old && t == self.top.load(SeqCst) {
- return Empty
- }
- return Abort
- }
- let data = (*a).get(t);
- if self.top.compare_and_swap(t, t + 1, SeqCst) == t {
- Data(data)
- } else {
- forget(data); // someone else stole this value
- Abort
- }
- }
-
- unsafe fn maybe_shrink(&self, b: int, t: int) {
- let a = self.array.load(SeqCst);
- if b - t < (*a).size() / K && b - t > (1 << MIN_BITS) {
- self.swap_buffer(b, a, (*a).resize(b, t, -1));
- }
- }
-
- // Helper routine, not mentioned in the paper, which is used when growing
- // and shrinking buffers to swap a new buffer into place. As a bit of a
- // recap, the whole reason we need a buffer pool rather than just
- // calling malloc/free directly is that stealers can continue using a
- // buffer after this method has called 'free' on it. The continued usage
- // is simply a read followed by a forget, but we must make sure that the
- // memory can continue to be read after we flag this buffer for reclamation.
- unsafe fn swap_buffer(&self, b: int, old: *mut Buffer<T>,
- buf: Buffer<T>) -> *mut Buffer<T> {
- let newbuf: *mut Buffer<T> = transmute(box buf);
- self.array.store(newbuf, SeqCst);
- let ss = (*newbuf).size();
- self.bottom.store(b + ss, SeqCst);
- let t = self.top.load(SeqCst);
- if self.top.compare_and_swap(t, t + ss, SeqCst) != t {
- self.bottom.store(b, SeqCst);
- }
- self.pool.free(transmute(old));
- return newbuf;
- }
-}
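The pop/steal arbitration in the removed code comes down to a single compare-and-swap on `top`: whichever side successfully bumps it owns the element, and the loser must `forget` the value it speculatively read. A minimal sketch of that race resolution in modern Rust syntax (where `compare_exchange` has replaced the old `compare_and_swap`):

```rust
use std::sync::atomic::{AtomicIsize, Ordering::SeqCst};

fn main() {
    let top = AtomicIsize::new(0);
    let t = top.load(SeqCst);

    // First claimant: observes top == t and bumps it, winning the element.
    assert!(top.compare_exchange(t, t + 1, SeqCst, SeqCst).is_ok());

    // A second claimant holding the same stale t loses the race; in the
    // deque this is the branch that calls forget() and returns Abort/None.
    assert!(top.compare_exchange(t, t + 1, SeqCst, SeqCst).is_err());
    assert_eq!(top.load(SeqCst), 1);
}
```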
-
-
-#[unsafe_destructor]
-impl<T: Send> Drop for Deque<T> {
- fn drop(&mut self) {
- let t = self.top.load(SeqCst);
- let b = self.bottom.load(SeqCst);
- let a = self.array.load(SeqCst);
- // Free whatever is left over in the deque, and then move the buffer
- // back into the pool.
- for i in range(t, b) {
- let _: T = unsafe { (*a).get(i) };
- }
- self.pool.free(unsafe { transmute(a) });
- }
-}
-
-#[inline]
-fn buffer_alloc_size<T>(log_size: uint) -> uint {
- (1 << log_size) * size_of::<T>()
-}
-
-impl<T: Send> Buffer<T> {
- unsafe fn new(log_size: uint) -> Buffer<T> {
- let size = buffer_alloc_size::<T>(log_size);
- let buffer = allocate(size, min_align_of::<T>());
- if buffer.is_null() { ::alloc::oom() }
- Buffer {
- storage: buffer as *const T,
- log_size: log_size,
- }
- }
-
- fn size(&self) -> int { 1 << self.log_size }
-
- // Apparently LLVM cannot optimize (foo % (1 << bar)) into this implicitly
- fn mask(&self) -> int { (1 << self.log_size) - 1 }
-
- unsafe fn elem(&self, i: int) -> *const T {
- self.storage.offset(i & self.mask())
- }
-
- // This does not protect against loading duplicate values of the same cell,
- // nor does this clear out the contents contained within. Hence, this is a
- // very unsafe method which the caller needs to treat specially in case a
- // race is lost.
- unsafe fn get(&self, i: int) -> T {
- ptr::read(self.elem(i))
- }
-
- // Unsafe because this unsafely overwrites possibly uninitialized or
- // initialized data.
- unsafe fn put(&self, i: int, t: T) {
- ptr::write(self.elem(i) as *mut T, t);
- }
-
- // Again, unsafe because this has incredibly dubious ownership violations.
- // It is assumed that this buffer is immediately dropped.
- unsafe fn resize(&self, b: int, t: int, delta: int) -> Buffer<T> {
- // NB: not entirely obvious, but thanks to 2's complement,
- // casting delta to uint and then adding gives the desired
- // effect.
- let buf = Buffer::new(self.log_size + delta as uint);
- for i in range(t, b) {
- buf.put(i, self.get(i));
- }
- return buf;
- }
-}
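The `mask()` helper above relies on the buffer size being a power of two: for `size == 1 << log_size`, `i & (size - 1)` equals `i % size` for any non-negative `i`, but compiles down to a single AND. A quick self-contained check in modern Rust:

```rust
fn main() {
    let log_size = 3u32;
    let size: i64 = 1 << log_size; // 8
    let mask = size - 1;           // 0b111
    for i in 0..1000i64 {
        // The wrap-around indexing used by elem() above.
        assert_eq!(i & mask, i % size);
    }
    // e.g. index 17 in an 8-slot ring lands in slot 1.
    assert_eq!(17 & mask, 1);
}
```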
-
-#[unsafe_destructor]
-impl<T: Send> Drop for Buffer<T> {
- fn drop(&mut self) {
- // It is assumed that all buffers are empty on drop.
- let size = buffer_alloc_size::<T>(self.log_size);
- unsafe { deallocate(self.storage as *mut u8, size, min_align_of::<T>()) }
- }
-}
-
-#[cfg(test)]
-mod tests {
- use prelude::*;
- use super::{Data, BufferPool, Abort, Empty, Worker, Stealer};
-
- use mem;
- use rustrt::thread::Thread;
- use rand;
- use rand::Rng;
- use sync::atomic::{AtomicBool, INIT_ATOMIC_BOOL, SeqCst,
- AtomicUint, INIT_ATOMIC_UINT};
- use vec;
-
- #[test]
- fn smoke() {
- let pool = BufferPool::new();
- let (w, s) = pool.deque();
- assert_eq!(w.pop(), None);
- assert_eq!(s.steal(), Empty);
- w.push(1i);
- assert_eq!(w.pop(), Some(1));
- w.push(1);
- assert_eq!(s.steal(), Data(1));
- w.push(1);
- assert_eq!(s.clone().steal(), Data(1));
- }
-
- #[test]
- fn stealpush() {
- static AMT: int = 100000;
- let pool = BufferPool::<int>::new();
- let (w, s) = pool.deque();
- let t = Thread::start(proc() {
- let mut left = AMT;
- while left > 0 {
- match s.steal() {
- Data(i) => {
- assert_eq!(i, 1);
- left -= 1;
- }
- Abort | Empty => {}
- }
- }
- });
-
- for _ in range(0, AMT) {
- w.push(1);
- }
-
- t.join();
- }
-
- #[test]
- fn stealpush_large() {
- static AMT: int = 100000;
- let pool = BufferPool::<(int, int)>::new();
- let (w, s) = pool.deque();
- let t = Thread::start(proc() {
- let mut left = AMT;
- while left > 0 {
- match s.steal() {
- Data((1, 10)) => { left -= 1; }
- Data(..) => panic!(),
- Abort | Empty => {}
- }
- }
- });
-
- for _ in range(0, AMT) {
- w.push((1, 10));
- }
-
- t.join();
- }
-
- fn stampede(w: Worker<Box<int>>, s: Stealer<Box<int>>,
- nthreads: int, amt: uint) {
- for _ in range(0, amt) {
- w.push(box 20);
- }
- let mut remaining = AtomicUint::new(amt);
- let unsafe_remaining: *mut AtomicUint = &mut remaining;
-
- let threads = range(0, nthreads).map(|_| {
- let s = s.clone();
- Thread::start(proc() {
- unsafe {
- while (*unsafe_remaining).load(SeqCst) > 0 {
- match s.steal() {
- Data(box 20) => {
- (*unsafe_remaining).fetch_sub(1, SeqCst);
- }
- Data(..) => panic!(),
- Abort | Empty => {}
- }
- }
- }
- })
- }).collect::<Vec<Thread<()>>>();
-
- while remaining.load(SeqCst) > 0 {
- match w.pop() {
- Some(box 20) => { remaining.fetch_sub(1, SeqCst); }
- Some(..) => panic!(),
- None => {}
- }
- }
-
- for thread in threads.into_iter() {
- thread.join();
- }
- }
-
- #[test]
- fn run_stampede() {
- let pool = BufferPool::<Box<int>>::new();
- let (w, s) = pool.deque();
- stampede(w, s, 8, 10000);
- }
-
- #[test]
- fn many_stampede() {
- static AMT: uint = 4;
- let pool = BufferPool::<Box<int>>::new();
- let threads = range(0, AMT).map(|_| {
- let (w, s) = pool.deque();
- Thread::start(proc() {
- stampede(w, s, 4, 10000);
- })
- }).collect::<Vec<Thread<()>>>();
-
- for thread in threads.into_iter() {
- thread.join();
- }
- }
-
- #[test]
- fn stress() {
- static AMT: int = 100000;
- static NTHREADS: int = 8;
- static DONE: AtomicBool = INIT_ATOMIC_BOOL;
- static HITS: AtomicUint = INIT_ATOMIC_UINT;
- let pool = BufferPool::<int>::new();
- let (w, s) = pool.deque();
-
- let threads = range(0, NTHREADS).map(|_| {
- let s = s.clone();
- Thread::start(proc() {
- loop {
- match s.steal() {
- Data(2) => { HITS.fetch_add(1, SeqCst); }
- Data(..) => panic!(),
- _ if DONE.load(SeqCst) => break,
- _ => {}
- }
- }
- })
- }).collect::<Vec<Thread<()>>>();
-
- let mut rng = rand::task_rng();
- let mut expected = 0;
- while expected < AMT {
- if rng.gen_range(0i, 3) == 2 {
- match w.pop() {
- None => {}
- Some(2) => { HITS.fetch_add(1, SeqCst); },
- Some(_) => panic!(),
- }
- } else {
- expected += 1;
- w.push(2);
- }
- }
-
- while HITS.load(SeqCst) < AMT as uint {
- match w.pop() {
- None => {}
- Some(2) => { HITS.fetch_add(1, SeqCst); },
- Some(_) => panic!(),
- }
- }
- DONE.store(true, SeqCst);
-
- for thread in threads.into_iter() {
- thread.join();
- }
-
- assert_eq!(HITS.load(SeqCst), expected as uint);
- }
-
- #[test]
- #[cfg_attr(windows, ignore)] // apparently windows scheduling is weird?
- fn no_starvation() {
- static AMT: int = 10000;
- static NTHREADS: int = 4;
- static DONE: AtomicBool = INIT_ATOMIC_BOOL;
- let pool = BufferPool::<(int, uint)>::new();
- let (w, s) = pool.deque();
-
- let (threads, hits) = vec::unzip(range(0, NTHREADS).map(|_| {
- let s = s.clone();
- let unique_box = box AtomicUint::new(0);
- let thread_box = unsafe {
- *mem::transmute::<&Box<AtomicUint>,
- *const *mut AtomicUint>(&unique_box)
- };
- (Thread::start(proc() {
- unsafe {
- loop {
- match s.steal() {
- Data((1, 2)) => {
- (*thread_box).fetch_add(1, SeqCst);
- }
- Data(..) => panic!(),
- _ if DONE.load(SeqCst) => break,
- _ => {}
- }
- }
- }
- }), unique_box)
- }));
-
- let mut rng = rand::task_rng();
- let mut myhit = false;
- 'outer: loop {
- for _ in range(0, rng.gen_range(0, AMT)) {
- if !myhit && rng.gen_range(0i, 3) == 2 {
- match w.pop() {
- None => {}
- Some((1, 2)) => myhit = true,
- Some(_) => panic!(),
- }
- } else {
- w.push((1, 2));
- }
- }
-
- for slot in hits.iter() {
- let amt = slot.load(SeqCst);
- if amt == 0 { continue 'outer; }
- }
- if myhit {
- break
- }
- }
-
- DONE.store(true, SeqCst);
-
- for thread in threads.into_iter() {
- thread.join();
- }
- }
-}
use prelude::*;
use sync::Future;
use task;
- use comm::{channel, Sender};
+ use comm::channel;
#[test]
fn test_from_value() {
let mut f = Future::from_value("snail".to_string());
- assert_eq!(f.get(), "snail".to_string());
+ assert_eq!(f.get(), "snail");
}
#[test]
let (tx, rx) = channel();
tx.send("whale".to_string());
let mut f = Future::from_receiver(rx);
- assert_eq!(f.get(), "whale".to_string());
+ assert_eq!(f.get(), "whale");
}
#[test]
fn test_from_fn() {
let mut f = Future::from_fn(proc() "brail".to_string());
- assert_eq!(f.get(), "brail".to_string());
+ assert_eq!(f.get(), "brail");
}
#[test]
fn test_interface_get() {
let mut f = Future::from_value("fail".to_string());
- assert_eq!(f.get(), "fail".to_string());
+ assert_eq!(f.get(), "fail");
}
#[test]
fn test_interface_unwrap() {
let f = Future::from_value("fail".to_string());
- assert_eq!(f.unwrap(), "fail".to_string());
+ assert_eq!(f.unwrap(), "fail");
}
#[test]
#[test]
fn test_spawn() {
let mut f = Future::spawn(proc() "bale".to_string());
- assert_eq!(f.get(), "bale".to_string());
+ assert_eq!(f.get(), "bale");
}
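`sync::Future` did not survive into modern Rust; the closest standard-library analogue to `Future::spawn(proc() ...)` followed by `get()` is spawning a thread and joining on its result. A hedged sketch of the same `test_spawn` shape in today's syntax:

```rust
use std::thread;

fn main() {
    // The closure runs on another thread; join() blocks until the
    // value is ready, playing the role of Future::get().
    let f = thread::spawn(|| "bale".to_string());
    assert_eq!(f.join().unwrap(), "bale");
}
```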
#[test]
+++ /dev/null
-// Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-//! Wrappers for safe, shared, mutable memory between tasks
-//!
-//! The wrappers in this module build on the primitives from `sync::raw` to
-//! provide safe interfaces around the primitive locks. These wrappers
-//! implement a technique called "poisoning": if a task panics while holding
-//! a lock, all future attempts to use that lock will panic.
-//!
-//! For example, if two tasks are contending on a mutex and one of them panics
-//! after grabbing the lock, the second task will immediately panic because the
-//! lock is now poisoned.
-
-use core::prelude::*;
-
-use self::Inner::*;
-
-use core::cell::UnsafeCell;
-use rustrt::local::Local;
-use rustrt::task::Task;
-
-use super::raw;
-
-// Poisoning helpers
-
-struct PoisonOnFail<'a> {
- flag: &'a mut bool,
- failed: bool,
-}
-
-fn failing() -> bool {
- Local::borrow(None::<Task>).unwinder.unwinding()
-}
-
-impl<'a> PoisonOnFail<'a> {
- fn check(flag: bool, name: &str) {
- if flag {
- panic!("Poisoned {} - another task failed inside!", name);
- }
- }
-
- fn new<'a>(flag: &'a mut bool, name: &str) -> PoisonOnFail<'a> {
- PoisonOnFail::check(*flag, name);
- PoisonOnFail {
- flag: flag,
- failed: failing()
- }
- }
-}
-
-#[unsafe_destructor]
-impl<'a> Drop for PoisonOnFail<'a> {
- fn drop(&mut self) {
- if !self.failed && failing() {
- *self.flag = true;
- }
- }
-}
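The poisoning technique that `PoisonOnFail` implements here survives in today's `std::sync::Mutex`, where `lock()` returns a `Result` that is an error once any thread has panicked while holding the guard. A small sketch in modern Rust:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let m = Arc::new(Mutex::new(0i32));
    let m2 = Arc::clone(&m);

    // A thread that panics while holding the guard poisons the mutex as
    // it unwinds -- the same flag-on-unwind idea as PoisonOnFail above.
    let _ = thread::spawn(move || {
        let _guard = m2.lock().unwrap();
        panic!("task failed inside");
    })
    .join();

    // Subsequent lock attempts report the poison...
    assert!(m.lock().is_err());
    // ...though the data is still reachable if you opt in explicitly.
    let value = *m.lock().unwrap_or_else(|poisoned| poisoned.into_inner());
    assert_eq!(value, 0);
}
```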
-
-// Condvar
-
-enum Inner<'a> {
- InnerMutex(raw::MutexGuard<'a>),
- InnerRWLock(raw::RWLockWriteGuard<'a>),
-}
-
-impl<'b> Inner<'b> {
- fn cond<'a>(&'a self) -> &'a raw::Condvar<'b> {
- match *self {
- InnerMutex(ref m) => &m.cond,
- InnerRWLock(ref m) => &m.cond,
- }
- }
-}
-
-/// A condition variable, a mechanism for unlock-and-descheduling and
-/// signaling, for use with the lock types.
-pub struct Condvar<'a> {
- name: &'static str,
- // n.b. Inner must be after PoisonOnFail because we must set the poison flag
- // *inside* the mutex, and struct fields are destroyed top-to-bottom
- // (destroy the lock guard last).
- poison: PoisonOnFail<'a>,
- inner: Inner<'a>,
-}
-
-impl<'a> Condvar<'a> {
- /// Atomically exit the associated lock and block until a signal is sent.
- ///
- /// wait() is equivalent to wait_on(0).
- ///
- /// # Panics
- ///
- /// A task which is killed while waiting on a condition variable will wake
- /// up, panic, and unlock the associated lock as it unwinds.
- #[inline]
- pub fn wait(&self) { self.wait_on(0) }
-
- /// Atomically exit the associated lock and block on a specified condvar
- /// until a signal is sent on that same condvar.
- ///
- /// The associated lock must have been initialised with an appropriate
- /// number of condvars. The condvar_id must be between 0 and num_condvars-1
- /// or else this call will fail.
- #[inline]
- pub fn wait_on(&self, condvar_id: uint) {
- assert!(!*self.poison.flag);
- self.inner.cond().wait_on(condvar_id);
- // This is why we need to wrap sync::condvar.
- PoisonOnFail::check(*self.poison.flag, self.name);
- }
-
- /// Wake up a blocked task. Returns false if there was no blocked task.
- #[inline]
- pub fn signal(&self) -> bool { self.signal_on(0) }
-
- /// Wake up a blocked task on a specified condvar (as
- /// sync::cond.signal_on). Returns false if there was no blocked task.
- #[inline]
- pub fn signal_on(&self, condvar_id: uint) -> bool {
- assert!(!*self.poison.flag);
- self.inner.cond().signal_on(condvar_id)
- }
-
- /// Wake up all blocked tasks. Returns the number of tasks woken.
- #[inline]
- pub fn broadcast(&self) -> uint { self.broadcast_on(0) }
-
- /// Wake up all blocked tasks on a specified condvar (as
- /// sync::cond.broadcast_on). Returns the number of tasks woken.
- #[inline]
- pub fn broadcast_on(&self, condvar_id: uint) -> uint {
- assert!(!*self.poison.flag);
- self.inner.cond().broadcast_on(condvar_id)
- }
-}
-
-/// A wrapper type which provides synchronized access to the underlying data, of
-/// type `T`. A mutex always provides exclusive access, and concurrent requests
-/// will block while the mutex is already locked.
-///
-/// # Example
-///
-/// ```
-/// use std::sync::{Mutex, Arc};
-///
-/// let mutex = Arc::new(Mutex::new(1i));
-/// let mutex2 = mutex.clone();
-///
-/// spawn(proc() {
-/// let mut val = mutex2.lock();
-/// *val += 1;
-/// val.cond.signal();
-/// });
-///
-/// let value = mutex.lock();
-/// while *value != 2 {
-/// value.cond.wait();
-/// }
-/// ```
-pub struct Mutex<T> {
- lock: raw::Mutex,
- failed: UnsafeCell<bool>,
- data: UnsafeCell<T>,
-}
-
-/// A guard which is created by locking a mutex. Through this guard the
-/// underlying data can be accessed.
-pub struct MutexGuard<'a, T:'a> {
- // FIXME #12808: strange name to try to avoid interfering with
- // field accesses of the contained type via Deref
- _data: &'a mut T,
- /// Inner condition variable connected to the locked mutex that this guard
- /// was created from. This can be used for atomic-unlock-and-deschedule.
- pub cond: Condvar<'a>,
-}
-
-impl<T: Send> Mutex<T> {
- /// Creates a new mutex to protect the user-supplied data.
- pub fn new(user_data: T) -> Mutex<T> {
- Mutex::new_with_condvars(user_data, 1)
- }
-
- /// Create a new mutex, with a specified number of associated condvars.
- ///
- /// This will allow calling wait_on/signal_on/broadcast_on with condvar IDs
- /// between 0 and num_condvars-1. (If num_condvars is 0, lock_cond will be
- /// allowed but any operations on the condvar will fail.)
- pub fn new_with_condvars(user_data: T, num_condvars: uint) -> Mutex<T> {
- Mutex {
- lock: raw::Mutex::new_with_condvars(num_condvars),
- failed: UnsafeCell::new(false),
- data: UnsafeCell::new(user_data),
- }
- }
-
- /// Access the underlying mutable data with mutual exclusion from other
- /// tasks. The returned value is an RAII guard which will unlock the mutex
- /// when dropped. All concurrent tasks attempting to lock the mutex will
- /// block while the returned value is still alive.
- ///
- /// # Panics
- ///
- /// Panicking while inside the Mutex will unlock the Mutex while unwinding, so
- /// that other tasks won't block forever. It will also poison the Mutex:
- /// any tasks that subsequently try to access it (including those already
- /// blocked on the mutex) will also panic immediately.
- #[inline]
- pub fn lock<'a>(&'a self) -> MutexGuard<'a, T> {
- let guard = self.lock.lock();
-
- // These two accesses are safe because we're guaranteed at this point
- // that we have exclusive access to this mutex. We are indeed able to
- // promote ourselves from &Mutex to `&mut T`
- let poison = unsafe { &mut *self.failed.get() };
- let data = unsafe { &mut *self.data.get() };
-
- MutexGuard {
- _data: data,
- cond: Condvar {
- name: "Mutex",
- poison: PoisonOnFail::new(poison, "Mutex"),
- inner: InnerMutex(guard),
- },
- }
- }
-}
-
-impl<'a, T: Send> Deref<T> for MutexGuard<'a, T> {
- fn deref<'a>(&'a self) -> &'a T { &*self._data }
-}
-impl<'a, T: Send> DerefMut<T> for MutexGuard<'a, T> {
- fn deref_mut<'a>(&'a mut self) -> &'a mut T { &mut *self._data }
-}
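The `Mutex` doc example being removed above (signal on one side, wait-in-a-loop on the other) maps directly onto today's `std::sync` API, with the one difference that the condvar now lives beside the mutex rather than inside the guard. A sketch of the same handshake in modern Rust:

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

fn main() {
    let pair = Arc::new((Mutex::new(1i32), Condvar::new()));
    let pair2 = Arc::clone(&pair);

    thread::spawn(move || {
        let (m, c) = &*pair2;
        *m.lock().unwrap() += 1;
        c.notify_one();
    });

    let (m, c) = &*pair;
    let mut val = m.lock().unwrap();
    // Loop to guard against spurious wakeups, as in the original example.
    while *val != 2 {
        val = c.wait(val).unwrap();
    }
    assert_eq!(*val, 2);
}
```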
-
-/// A dual-mode reader-writer lock. The data can be accessed mutably or
-/// immutably, and immutably-accessing tasks may run concurrently.
-///
-/// # Example
-///
-/// ```
-/// use std::sync::{RWLock, Arc};
-///
-/// let lock1 = Arc::new(RWLock::new(1i));
-/// let lock2 = lock1.clone();
-///
-/// spawn(proc() {
-/// let mut val = lock2.write();
-/// *val = 3;
-/// let val = val.downgrade();
-/// println!("{}", *val);
-/// });
-///
-/// let val = lock1.read();
-/// println!("{}", *val);
-/// ```
-pub struct RWLock<T> {
- lock: raw::RWLock,
- failed: UnsafeCell<bool>,
- data: UnsafeCell<T>,
-}
-
-/// A guard which is created by locking an rwlock in write mode. Through this
-/// guard the underlying data can be accessed.
-pub struct RWLockWriteGuard<'a, T:'a> {
- // FIXME #12808: strange name to try to avoid interfering with
- // field accesses of the contained type via Deref
- _data: &'a mut T,
- /// Inner condition variable that can be used to sleep on the write mode of
- /// this rwlock.
- pub cond: Condvar<'a>,
-}
-
-/// A guard which is created by locking an rwlock in read mode. Through this
-/// guard the underlying data can be accessed.
-pub struct RWLockReadGuard<'a, T:'a> {
- // FIXME #12808: strange names to try to avoid interfering with
- // field accesses of the contained type via Deref
- _data: &'a T,
- _guard: raw::RWLockReadGuard<'a>,
-}
-
-impl<T: Send + Sync> RWLock<T> {
- /// Create a reader/writer lock with the supplied data.
- pub fn new(user_data: T) -> RWLock<T> {
- RWLock::new_with_condvars(user_data, 1)
- }
-
- /// Create a reader/writer lock with the supplied data and a specified number
- /// of condvars (as sync::RWLock::new_with_condvars).
- pub fn new_with_condvars(user_data: T, num_condvars: uint) -> RWLock<T> {
- RWLock {
- lock: raw::RWLock::new_with_condvars(num_condvars),
- failed: UnsafeCell::new(false),
- data: UnsafeCell::new(user_data),
- }
- }
-
- /// Access the underlying data mutably. Locks the rwlock in write mode;
- /// other readers and writers will block.
- ///
- /// # Panics
- ///
- /// Panicking while inside the lock will unlock the lock while unwinding, so
- /// that other tasks won't block forever. As Mutex.lock, it will also poison
- /// the lock, so subsequent readers and writers will both also panic.
- #[inline]
- pub fn write<'a>(&'a self) -> RWLockWriteGuard<'a, T> {
- let guard = self.lock.write();
-
- // These two accesses are safe because we're guaranteed at this point
- // that we have exclusive access to this rwlock. We are indeed able to
- // promote ourselves from &RWLock to `&mut T`
- let poison = unsafe { &mut *self.failed.get() };
- let data = unsafe { &mut *self.data.get() };
-
- RWLockWriteGuard {
- _data: data,
- cond: Condvar {
- name: "RWLock",
- poison: PoisonOnFail::new(poison, "RWLock"),
- inner: InnerRWLock(guard),
- },
- }
- }
-
- /// Access the underlying data immutably. May run concurrently with other
- /// reading tasks.
- ///
- /// # Panics
- ///
- /// Panicking will unlock the lock while unwinding. However, unlike all other
- /// access modes, this will not poison the lock.
- pub fn read<'a>(&'a self) -> RWLockReadGuard<'a, T> {
- let guard = self.lock.read();
- PoisonOnFail::check(unsafe { *self.failed.get() }, "RWLock");
- RWLockReadGuard {
- _guard: guard,
- _data: unsafe { &*self.data.get() },
- }
- }
-}
-
-impl<'a, T: Send + Sync> RWLockWriteGuard<'a, T> {
- /// Consumes this write lock token, returning a new read lock token.
- ///
- /// This will allow pending readers to come into the lock.
- pub fn downgrade(self) -> RWLockReadGuard<'a, T> {
- let RWLockWriteGuard { _data, cond } = self;
- // convert the data to read-only explicitly
- let data = &*_data;
- let guard = match cond.inner {
- InnerMutex(..) => unreachable!(),
- InnerRWLock(guard) => guard.downgrade()
- };
- RWLockReadGuard { _guard: guard, _data: data }
- }
-}
-
-impl<'a, T: Send + Sync> Deref<T> for RWLockReadGuard<'a, T> {
- fn deref<'a>(&'a self) -> &'a T { self._data }
-}
-impl<'a, T: Send + Sync> Deref<T> for RWLockWriteGuard<'a, T> {
- fn deref<'a>(&'a self) -> &'a T { &*self._data }
-}
-impl<'a, T: Send + Sync> DerefMut<T> for RWLockWriteGuard<'a, T> {
- fn deref_mut<'a>(&'a mut self) -> &'a mut T { &mut *self._data }
-}
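The reader/writer semantics described above (shared readers, exclusive writer) carry over to the modern `std::sync::RwLock`, minus the built-in condvar and `downgrade`. A minimal sketch, assuming only the standard library:

```rust
use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    let lock = Arc::new(RwLock::new(1i32));

    // Many readers may hold the lock concurrently...
    let readers: Vec<_> = (0..4)
        .map(|_| {
            let l = Arc::clone(&lock);
            thread::spawn(move || assert_eq!(*l.read().unwrap(), 1))
        })
        .collect();
    for r in readers {
        r.join().unwrap();
    }

    // ...while a writer gets exclusive access.
    *lock.write().unwrap() = 3;
    assert_eq!(*lock.read().unwrap(), 3);
}
```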
-
-/// A barrier enables multiple tasks to synchronize the beginning
-/// of some computation.
-///
-/// ```rust
-/// use std::sync::{Arc, Barrier};
-///
-/// let barrier = Arc::new(Barrier::new(10));
-/// for _ in range(0u, 10) {
-/// let c = barrier.clone();
-/// // The same messages will be printed together.
-/// // You will NOT see any interleaving.
-/// spawn(proc() {
-/// println!("before wait");
-/// c.wait();
-/// println!("after wait");
-/// });
-/// }
-/// ```
-pub struct Barrier {
- lock: Mutex<BarrierState>,
- num_tasks: uint,
-}
-
-// The inner state of a double barrier
-struct BarrierState {
- count: uint,
- generation_id: uint,
-}
-
-impl Barrier {
- /// Create a new barrier that can block a given number of tasks.
- pub fn new(num_tasks: uint) -> Barrier {
- Barrier {
- lock: Mutex::new(BarrierState {
- count: 0,
- generation_id: 0,
- }),
- num_tasks: num_tasks,
- }
- }
-
- /// Block the current task until a certain number of tasks is waiting.
- pub fn wait(&self) {
- let mut lock = self.lock.lock();
- let local_gen = lock.generation_id;
- lock.count += 1;
- if lock.count < self.num_tasks {
- // We need a while loop to guard against spurious wakeups.
- // http://en.wikipedia.org/wiki/Spurious_wakeup
- while local_gen == lock.generation_id &&
- lock.count < self.num_tasks {
- lock.cond.wait();
- }
- } else {
- lock.count = 0;
- lock.generation_id += 1;
- lock.cond.broadcast();
- }
- }
-}
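The generation-counter scheme in the `Barrier` above (bump `generation_id` on release so waiters can distinguish a real release from a spurious wakeup) can be re-built on today's `Mutex` and `Condvar`; the struct and field names below are illustrative, not the std implementation:

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

struct Barrier {
    state: Mutex<(usize, usize)>, // (count, generation_id)
    cvar: Condvar,
    n: usize,
}

impl Barrier {
    fn new(n: usize) -> Barrier {
        Barrier { state: Mutex::new((0, 0)), cvar: Condvar::new(), n }
    }

    fn wait(&self) {
        let mut st = self.state.lock().unwrap();
        let gen = st.1;
        st.0 += 1;
        if st.0 < self.n {
            // Only a generation bump (not a stray notify or spurious
            // wakeup) releases us -- the same guard as the while loop above.
            while st.1 == gen {
                st = self.cvar.wait(st).unwrap();
            }
        } else {
            st.0 = 0;
            st.1 += 1;
            self.cvar.notify_all();
        }
    }
}

fn main() {
    let b = Arc::new(Barrier::new(4));
    let handles: Vec<_> = (0..3)
        .map(|_| {
            let b = Arc::clone(&b);
            thread::spawn(move || b.wait())
        })
        .collect();
    b.wait();
    for h in handles {
        h.join().unwrap();
    }
    println!("all past the barrier");
}
```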
-
-#[cfg(test)]
-mod tests {
- use prelude::*;
- use comm::Empty;
- use task;
- use task::try_future;
- use sync::Arc;
-
- use super::{Mutex, Barrier, RWLock};
-
- #[test]
- fn test_mutex_arc_condvar() {
- let arc = Arc::new(Mutex::new(false));
- let arc2 = arc.clone();
- let (tx, rx) = channel();
- task::spawn(proc() {
- // wait until parent gets in
- rx.recv();
- let mut lock = arc2.lock();
- *lock = true;
- lock.cond.signal();
- });
-
- let lock = arc.lock();
- tx.send(());
- assert!(!*lock);
- while !*lock {
- lock.cond.wait();
- }
- }
-
- #[test] #[should_fail]
- fn test_arc_condvar_poison() {
- let arc = Arc::new(Mutex::new(1i));
- let arc2 = arc.clone();
- let (tx, rx) = channel();
-
- spawn(proc() {
- rx.recv();
- let lock = arc2.lock();
- lock.cond.signal();
- // Parent should fail when it wakes up.
- panic!();
- });
-
- let lock = arc.lock();
- tx.send(());
- while *lock == 1 {
- lock.cond.wait();
- }
- }
-
- #[test] #[should_fail]
- fn test_mutex_arc_poison() {
- let arc = Arc::new(Mutex::new(1i));
- let arc2 = arc.clone();
- let _ = task::try(proc() {
- let lock = arc2.lock();
- assert_eq!(*lock, 2);
- });
- let lock = arc.lock();
- assert_eq!(*lock, 1);
- }
-
- #[test]
- fn test_mutex_arc_nested() {
- // Tests nested mutexes and access
- // to underlying data.
- let arc = Arc::new(Mutex::new(1i));
- let arc2 = Arc::new(Mutex::new(arc));
- task::spawn(proc() {
- let lock = arc2.lock();
- let lock2 = lock.deref().lock();
- assert_eq!(*lock2, 1);
- });
- }
-
- #[test]
- fn test_mutex_arc_access_in_unwind() {
- let arc = Arc::new(Mutex::new(1i));
- let arc2 = arc.clone();
- let _ = task::try::<()>(proc() {
- struct Unwinder {
- i: Arc<Mutex<int>>,
- }
- impl Drop for Unwinder {
- fn drop(&mut self) {
- let mut lock = self.i.lock();
- *lock += 1;
- }
- }
- let _u = Unwinder { i: arc2 };
- panic!();
- });
- let lock = arc.lock();
- assert_eq!(*lock, 2);
- }
-
- #[test] #[should_fail]
- fn test_rw_arc_poison_wr() {
- let arc = Arc::new(RWLock::new(1i));
- let arc2 = arc.clone();
- let _ = task::try(proc() {
- let lock = arc2.write();
- assert_eq!(*lock, 2);
- });
- let lock = arc.read();
- assert_eq!(*lock, 1);
- }
- #[test] #[should_fail]
- fn test_rw_arc_poison_ww() {
- let arc = Arc::new(RWLock::new(1i));
- let arc2 = arc.clone();
- let _ = task::try(proc() {
- let lock = arc2.write();
- assert_eq!(*lock, 2);
- });
- let lock = arc.write();
- assert_eq!(*lock, 1);
- }
- #[test]
- fn test_rw_arc_no_poison_rr() {
- let arc = Arc::new(RWLock::new(1i));
- let arc2 = arc.clone();
- let _ = task::try(proc() {
- let lock = arc2.read();
- assert_eq!(*lock, 2);
- });
- let lock = arc.read();
- assert_eq!(*lock, 1);
- }
- #[test]
- fn test_rw_arc_no_poison_rw() {
- let arc = Arc::new(RWLock::new(1i));
- let arc2 = arc.clone();
- let _ = task::try(proc() {
- let lock = arc2.read();
- assert_eq!(*lock, 2);
- });
- let lock = arc.write();
- assert_eq!(*lock, 1);
- }
- #[test]
- fn test_rw_arc_no_poison_dr() {
- let arc = Arc::new(RWLock::new(1i));
- let arc2 = arc.clone();
- let _ = task::try(proc() {
- let lock = arc2.write().downgrade();
- assert_eq!(*lock, 2);
- });
- let lock = arc.write();
- assert_eq!(*lock, 1);
- }
-
- #[test]
- fn test_rw_arc() {
- let arc = Arc::new(RWLock::new(0i));
- let arc2 = arc.clone();
- let (tx, rx) = channel();
-
- task::spawn(proc() {
- let mut lock = arc2.write();
- for _ in range(0u, 10) {
- let tmp = *lock;
- *lock = -1;
- task::deschedule();
- *lock = tmp + 1;
- }
- tx.send(());
- });
-
- // Readers try to catch the writer in the act
- let mut children = Vec::new();
- for _ in range(0u, 5) {
- let arc3 = arc.clone();
- children.push(try_future(proc() {
- let lock = arc3.read();
- assert!(*lock >= 0);
- }));
- }
-
- // Wait for children to pass their asserts
- for r in children.iter_mut() {
- assert!(r.get_ref().is_ok());
- }
-
- // Wait for writer to finish
- rx.recv();
- let lock = arc.read();
- assert_eq!(*lock, 10);
- }
-
- #[test]
- fn test_rw_arc_access_in_unwind() {
- let arc = Arc::new(RWLock::new(1i));
- let arc2 = arc.clone();
- let _ = task::try::<()>(proc() {
- struct Unwinder {
- i: Arc<RWLock<int>>,
- }
- impl Drop for Unwinder {
- fn drop(&mut self) {
- let mut lock = self.i.write();
- *lock += 1;
- }
- }
- let _u = Unwinder { i: arc2 };
- panic!();
- });
- let lock = arc.read();
- assert_eq!(*lock, 2);
- }
-
- #[test]
- fn test_rw_downgrade() {
- // (1) A downgrader gets in write mode and does cond.wait.
- // (2) A writer gets in write mode, sets state to 42, and does signal.
- // (3) Downgrader wakes, sets state to 31337.
- // (4) tells writer and all other readers to contend as it downgrades.
- // (5) Writer attempts to set state back to 42, while downgraded task
- // and all reader tasks assert that it's 31337.
- let arc = Arc::new(RWLock::new(0i));
-
- // Reader tasks
- let mut reader_convos = Vec::new();
- for _ in range(0u, 10) {
- let ((tx1, rx1), (tx2, rx2)) = (channel(), channel());
- reader_convos.push((tx1, rx2));
- let arcn = arc.clone();
- task::spawn(proc() {
- rx1.recv(); // wait for downgrader to give go-ahead
- let lock = arcn.read();
- assert_eq!(*lock, 31337);
- tx2.send(());
- });
- }
-
- // Writer task
- let arc2 = arc.clone();
- let ((tx1, rx1), (tx2, rx2)) = (channel(), channel());
- task::spawn(proc() {
- rx1.recv();
- {
- let mut lock = arc2.write();
- assert_eq!(*lock, 0);
- *lock = 42;
- lock.cond.signal();
- }
- rx1.recv();
- {
- let mut lock = arc2.write();
- // This shouldn't happen until after the downgrade read
- // section, and all other readers, finish.
- assert_eq!(*lock, 31337);
- *lock = 42;
- }
- tx2.send(());
- });
-
- // Downgrader (us)
- let mut lock = arc.write();
- tx1.send(()); // send to another writer who will wake us up
- while *lock == 0 {
- lock.cond.wait();
- }
- assert_eq!(*lock, 42);
- *lock = 31337;
- // send to other readers
- for &(ref mut rc, _) in reader_convos.iter_mut() {
- rc.send(())
- }
- let lock = lock.downgrade();
- // complete handshake with other readers
- for &(_, ref mut rp) in reader_convos.iter_mut() {
- rp.recv()
- }
- tx1.send(()); // tell writer to try again
- assert_eq!(*lock, 31337);
- drop(lock);
-
- rx2.recv(); // complete handshake with writer
- }
-
- #[cfg(test)]
- fn test_rw_write_cond_downgrade_read_race_helper() {
- // Tests that when a downgrader hands off the "reader cloud" lock
- // because of a contending reader, a writer can't race to get it
- // instead, which would result in readers and writers running at the
- // same time. This tests the raw module rather than this one, but it's
- // here because an rwarc gives us extra shared state to help check for
- // the race.
- let x = Arc::new(RWLock::new(true));
- let (tx, rx) = channel();
-
- // writer task
- let xw = x.clone();
- task::spawn(proc() {
- let mut lock = xw.write();
- tx.send(()); // tell downgrader it's ok to go
- lock.cond.wait();
- // The core of the test is here: the condvar reacquire path
- // must involve order_lock, so that it cannot race with a reader
- // trying to receive the "reader cloud lock hand-off".
- *lock = false;
- });
-
- rx.recv(); // wait for writer to get in
-
- let lock = x.write();
- assert!(*lock);
- // make writer contend in the cond-reacquire path
- lock.cond.signal();
- // make a reader task to trigger the "reader cloud lock" handoff
- let xr = x.clone();
- let (tx, rx) = channel();
- task::spawn(proc() {
- tx.send(());
- drop(xr.read());
- });
- rx.recv(); // wait for reader task to exist
-
- let lock = lock.downgrade();
- // if writer mistakenly got in, make sure it mutates state
- // before we assert on it
- for _ in range(0u, 5) { task::deschedule(); }
- // make sure writer didn't get in.
- assert!(*lock);
- }
- #[test]
- fn test_rw_write_cond_downgrade_read_race() {
- // Ideally the above test case would have deschedule statements in it
- // that helped to expose the race nearly 100% of the time... but adding
- // deschedules in the intuitively-right locations made it even less
- // likely, and I wasn't sure why :( . This is a mediocre "next best"
- // option.
- for _ in range(0u, 8) {
- test_rw_write_cond_downgrade_read_race_helper();
- }
- }
-
- #[test]
- fn test_barrier() {
- let barrier = Arc::new(Barrier::new(10));
- let (tx, rx) = channel();
-
- for _ in range(0u, 9) {
- let c = barrier.clone();
- let tx = tx.clone();
- spawn(proc() {
- c.wait();
- tx.send(true);
- });
- }
-
- // At this point, all spawned tasks should be blocked,
- // so we shouldn't get anything from the port
- assert!(match rx.try_recv() {
- Err(Empty) => true,
- _ => false,
- });
-
- barrier.wait();
- // Now, the barrier is cleared and we should get data.
- for _ in range(0u, 9) {
- rx.recv();
- }
- }
-}
#![experimental]
-pub use self::one::{Once, ONCE_INIT};
-
pub use alloc::arc::{Arc, Weak};
-pub use self::lock::{Mutex, MutexGuard, Condvar, Barrier,
- RWLock, RWLockReadGuard, RWLockWriteGuard};
-// The mutex/rwlock in this module are not meant for reexport
-pub use self::raw::{Semaphore, SemaphoreGuard};
+pub use self::mutex::{Mutex, MutexGuard, StaticMutex, StaticMutexGuard, MUTEX_INIT};
+pub use self::rwlock::{RWLock, StaticRWLock, RWLOCK_INIT};
+pub use self::rwlock::{RWLockReadGuard, RWLockWriteGuard};
+pub use self::rwlock::{StaticRWLockReadGuard, StaticRWLockWriteGuard};
+pub use self::condvar::{Condvar, StaticCondvar, CONDVAR_INIT, AsMutexGuard};
+pub use self::once::{Once, ONCE_INIT};
+pub use self::semaphore::{Semaphore, SemaphoreGuard};
+pub use self::barrier::Barrier;
pub use self::future::Future;
pub use self::task_pool::TaskPool;
-// Core building blocks for all primitives in this crate
-
-#[stable]
pub mod atomic;
-
-// Concurrent data structures
-
-pub mod spsc_queue;
-pub mod mpsc_queue;
-pub mod mpmc_bounded_queue;
-pub mod deque;
-
-// Low-level concurrency primitives
-
-mod raw;
-mod mutex;
-mod one;
-
-// Higher level primitives based on those above
-
-mod lock;
-
-// Task management
-
+mod barrier;
+mod condvar;
mod future;
+mod mutex;
+mod once;
+mod poison;
+mod rwlock;
+mod semaphore;
mod task_pool;
+++ /dev/null
-/* Copyright (c) 2010-2011 Dmitry Vyukov. All rights reserved.
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *
- * 1. Redistributions of source code must retain the above copyright notice,
- * this list of conditions and the following disclaimer.
- *
- * 2. Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY DMITRY VYUKOV "AS IS" AND ANY EXPRESS OR IMPLIED
- * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
- * SHALL DMITRY VYUKOV OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
- * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
- * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
- * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
- * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- * The views and conclusions contained in the software and documentation are
- * those of the authors and should not be interpreted as representing official
- * policies, either expressed or implied, of Dmitry Vyukov.
- */
-
-#![experimental]
-#![allow(missing_docs, dead_code)]
-
-// http://www.1024cores.net/home/lock-free-algorithms/queues/bounded-mpmc-queue
-
-use core::prelude::*;
-
-use alloc::arc::Arc;
-use vec::Vec;
-use core::num::UnsignedInt;
-use core::cell::UnsafeCell;
-
-use sync::atomic::{AtomicUint,Relaxed,Release,Acquire};
-
-struct Node<T> {
- sequence: AtomicUint,
- value: Option<T>,
-}
-
-struct State<T> {
- pad0: [u8, ..64],
- buffer: Vec<UnsafeCell<Node<T>>>,
- mask: uint,
- pad1: [u8, ..64],
- enqueue_pos: AtomicUint,
- pad2: [u8, ..64],
- dequeue_pos: AtomicUint,
- pad3: [u8, ..64],
-}
-
-pub struct Queue<T> {
- state: Arc<State<T>>,
-}
-
-impl<T: Send> State<T> {
- fn with_capacity(capacity: uint) -> State<T> {
- let capacity = if capacity < 2 || (capacity & (capacity - 1)) != 0 {
- if capacity < 2 {
- 2u
- } else {
- // use next power of 2 as capacity
- capacity.next_power_of_two()
- }
- } else {
- capacity
- };
- let buffer = Vec::from_fn(capacity, |i| {
- UnsafeCell::new(Node { sequence:AtomicUint::new(i), value: None })
- });
- State{
- pad0: [0, ..64],
- buffer: buffer,
- mask: capacity-1,
- pad1: [0, ..64],
- enqueue_pos: AtomicUint::new(0),
- pad2: [0, ..64],
- dequeue_pos: AtomicUint::new(0),
- pad3: [0, ..64],
- }
- }
-
- fn push(&self, value: T) -> bool {
- let mask = self.mask;
- let mut pos = self.enqueue_pos.load(Relaxed);
- loop {
- let node = &self.buffer[pos & mask];
- let seq = unsafe { (*node.get()).sequence.load(Acquire) };
- let diff: int = seq as int - pos as int;
-
- if diff == 0 {
- let enqueue_pos = self.enqueue_pos.compare_and_swap(pos, pos+1, Relaxed);
- if enqueue_pos == pos {
- unsafe {
- (*node.get()).value = Some(value);
- (*node.get()).sequence.store(pos+1, Release);
- }
- break
- } else {
- pos = enqueue_pos;
- }
- } else if diff < 0 {
- return false
- } else {
- pos = self.enqueue_pos.load(Relaxed);
- }
- }
- true
- }
-
- fn pop(&self) -> Option<T> {
- let mask = self.mask;
- let mut pos = self.dequeue_pos.load(Relaxed);
- loop {
- let node = &self.buffer[pos & mask];
- let seq = unsafe { (*node.get()).sequence.load(Acquire) };
- let diff: int = seq as int - (pos + 1) as int;
- if diff == 0 {
- let dequeue_pos = self.dequeue_pos.compare_and_swap(pos, pos+1, Relaxed);
- if dequeue_pos == pos {
- unsafe {
- let value = (*node.get()).value.take();
- (*node.get()).sequence.store(pos + mask + 1, Release);
- return value
- }
- } else {
- pos = dequeue_pos;
- }
- } else if diff < 0 {
- return None
- } else {
- pos = self.dequeue_pos.load(Relaxed);
- }
- }
- }
-}
-
-impl<T: Send> Queue<T> {
- pub fn with_capacity(capacity: uint) -> Queue<T> {
- Queue{
- state: Arc::new(State::with_capacity(capacity))
- }
- }
-
- pub fn push(&self, value: T) -> bool {
- self.state.push(value)
- }
-
- pub fn pop(&self) -> Option<T> {
- self.state.pop()
- }
-}
-
-impl<T: Send> Clone for Queue<T> {
- fn clone(&self) -> Queue<T> {
- Queue { state: self.state.clone() }
- }
-}
-
-#[cfg(test)]
-mod tests {
- use prelude::*;
- use super::Queue;
-
- #[test]
- fn test() {
- let nthreads = 8u;
- let nmsgs = 1000u;
- let q = Queue::with_capacity(nthreads*nmsgs);
- assert_eq!(None, q.pop());
- let (tx, rx) = channel();
-
- for _ in range(0, nthreads) {
- let q = q.clone();
- let tx = tx.clone();
- spawn(proc() {
- let q = q;
- for i in range(0, nmsgs) {
- assert!(q.push(i));
- }
- tx.send(());
- });
- }
-
- let mut completion_rxs = vec![];
- for _ in range(0, nthreads) {
- let (tx, rx) = channel();
- completion_rxs.push(rx);
- let q = q.clone();
- spawn(proc() {
- let q = q;
- let mut i = 0u;
- loop {
- match q.pop() {
- None => {},
- Some(_) => {
- i += 1;
- if i == nmsgs { break }
- }
- }
- }
- tx.send(i);
- });
- }
-
- for rx in completion_rxs.iter_mut() {
- assert_eq!(nmsgs, rx.recv());
- }
- for _ in range(0, nthreads) {
- rx.recv();
- }
- }
-}
+++ /dev/null
-/* Copyright (c) 2010-2011 Dmitry Vyukov. All rights reserved.
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *
- * 1. Redistributions of source code must retain the above copyright notice,
- * this list of conditions and the following disclaimer.
- *
- * 2. Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY DMITRY VYUKOV "AS IS" AND ANY EXPRESS OR IMPLIED
- * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
- * SHALL DMITRY VYUKOV OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
- * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
- * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
- * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
- * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- * The views and conclusions contained in the software and documentation are
- * those of the authors and should not be interpreted as representing official
- * policies, either expressed or implied, of Dmitry Vyukov.
- */
-
-//! A mostly lock-free multi-producer, single consumer queue.
-//!
-//! This module contains an implementation of a concurrent MPSC queue. This
-//! queue can be used to share data between tasks, and is also used as the
-//! building block of channels in rust.
-//!
-//! Note that the current implementation of this queue has a caveat of the `pop`
-//! method, and see the method for more information about it. Due to this
-//! caveat, this queue may not be appropriate for all use-cases.
-
-#![experimental]
-
-// http://www.1024cores.net/home/lock-free-algorithms
-// /queues/non-intrusive-mpsc-node-based-queue
-
-pub use self::PopResult::*;
-
-use core::prelude::*;
-
-use alloc::boxed::Box;
-use core::mem;
-use core::cell::UnsafeCell;
-
-use sync::atomic::{AtomicPtr, Release, Acquire, AcqRel, Relaxed};
-
-/// A result of the `pop` function.
-pub enum PopResult<T> {
- /// Some data has been popped
- Data(T),
- /// The queue is empty
- Empty,
- /// The queue is in an inconsistent state. Popping data should succeed, but
- /// some pushers have yet to make enough progress in order allow a pop to
- /// succeed. It is recommended that a pop() occur "in the near future" in
- /// order to see if the sender has made progress or not
- Inconsistent,
-}
-
-struct Node<T> {
- next: AtomicPtr<Node<T>>,
- value: Option<T>,
-}
-
-/// The multi-producer single-consumer structure. This is not cloneable, but it
-/// may be safely shared so long as it is guaranteed that there is only one
-/// popper at a time (many pushers are allowed).
-pub struct Queue<T> {
- head: AtomicPtr<Node<T>>,
- tail: UnsafeCell<*mut Node<T>>,
-}
-
-impl<T> Node<T> {
- unsafe fn new(v: Option<T>) -> *mut Node<T> {
- mem::transmute(box Node {
- next: AtomicPtr::new(0 as *mut Node<T>),
- value: v,
- })
- }
-}
-
-impl<T: Send> Queue<T> {
- /// Creates a new queue that is safe to share among multiple producers and
- /// one consumer.
- pub fn new() -> Queue<T> {
- let stub = unsafe { Node::new(None) };
- Queue {
- head: AtomicPtr::new(stub),
- tail: UnsafeCell::new(stub),
- }
- }
-
- /// Pushes a new value onto this queue.
- pub fn push(&self, t: T) {
- unsafe {
- let n = Node::new(Some(t));
- let prev = self.head.swap(n, AcqRel);
- (*prev).next.store(n, Release);
- }
- }
-
- /// Pops some data from this queue.
- ///
- /// Note that the current implementation means that this function cannot
- /// return `Option<T>`. It is possible for this queue to be in an
- /// inconsistent state where many pushes have succeeded and completely
- /// finished, but pops cannot return `Some(t)`. This inconsistent state
- /// happens when a pusher is pre-empted at an inopportune moment.
- ///
- /// This inconsistent state means that this queue does indeed have data, but
- /// it does not currently have access to it at this time.
- pub fn pop(&self) -> PopResult<T> {
- unsafe {
- let tail = *self.tail.get();
- let next = (*tail).next.load(Acquire);
-
- if !next.is_null() {
- *self.tail.get() = next;
- assert!((*tail).value.is_none());
- assert!((*next).value.is_some());
- let ret = (*next).value.take().unwrap();
- let _: Box<Node<T>> = mem::transmute(tail);
- return Data(ret);
- }
-
- if self.head.load(Acquire) == tail {Empty} else {Inconsistent}
- }
- }
-
- /// Attempts to pop data from this queue, but doesn't attempt too hard. This
- /// will canonicalize inconsistent states to a `None` value.
- pub fn casual_pop(&self) -> Option<T> {
- match self.pop() {
- Data(t) => Some(t),
- Empty | Inconsistent => None,
- }
- }
-}
-
-#[unsafe_destructor]
-impl<T: Send> Drop for Queue<T> {
- fn drop(&mut self) {
- unsafe {
- let mut cur = *self.tail.get();
- while !cur.is_null() {
- let next = (*cur).next.load(Relaxed);
- let _: Box<Node<T>> = mem::transmute(cur);
- cur = next;
- }
- }
- }
-}
-
-#[cfg(test)]
-mod tests {
- use prelude::*;
-
- use alloc::arc::Arc;
-
- use super::{Queue, Data, Empty, Inconsistent};
-
- #[test]
- fn test_full() {
- let q = Queue::new();
- q.push(box 1i);
- q.push(box 2i);
- }
-
- #[test]
- fn test() {
- let nthreads = 8u;
- let nmsgs = 1000u;
- let q = Queue::new();
- match q.pop() {
- Empty => {}
- Inconsistent | Data(..) => panic!()
- }
- let (tx, rx) = channel();
- let q = Arc::new(q);
-
- for _ in range(0, nthreads) {
- let tx = tx.clone();
- let q = q.clone();
- spawn(proc() {
- for i in range(0, nmsgs) {
- q.push(i);
- }
- tx.send(());
- });
- }
-
- let mut i = 0u;
- while i < nthreads * nmsgs {
- match q.pop() {
- Empty | Inconsistent => {},
- Data(_) => { i += 1 }
- }
- }
- drop(tx);
- for _ in range(0, nthreads) {
- rx.recv();
- }
- }
-}
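The `pop` caveat documented in the MPSC queue deleted above comes from push being two separate steps: an atomic swap of `head`, then a store linking the old head's `next`. A pusher pre-empted between the two leaves data in the queue that the consumer cannot yet reach. This Python sketch (illustrative only, with hypothetical names) splits push in two to expose that window:

```python
# Single-threaded model of the stub-node MPSC queue's "inconsistent" state.
class Node:
    def __init__(self, value=None):
        self.next = None
        self.value = value

class MpscQueue:
    def __init__(self):
        stub = Node()
        self.head = stub   # producers append here
        self.tail = stub   # the single consumer reads here

    def push_swap(self, value):
        # Step 1 (atomic swap in the real code): publish the new head.
        n = Node(value)
        prev, self.head = self.head, n
        return prev, n

    def push_link(self, prev, n):
        # Step 2: link the old head forward. Pre-emption between the two
        # steps is exactly the "inconsistent" window.
        prev.next = n

    def pop(self):
        nxt = self.tail.next
        if nxt is not None:
            self.tail = nxt
            return ("data", nxt.value)
        # head != tail: data exists but is not linked in yet.
        return ("empty", None) if self.head is self.tail else ("inconsistent", None)

q = MpscQueue()
assert q.pop() == ("empty", None)
prev, n = q.push_swap(1)
assert q.pop() == ("inconsistent", None)  # swapped but not yet linked
q.push_link(prev, n)
assert q.pop() == ("data", 1)
```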
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-//! A simple native mutex implementation. Warning: this API is likely
-//! to change soon.
+use prelude::*;
-#![allow(dead_code)]
-
-use core::prelude::*;
-use alloc::boxed::Box;
-use rustrt::mutex;
-
-pub const LOCKED: uint = 1 << 0;
-pub const BLOCKED: uint = 1 << 1;
+use cell::UnsafeCell;
+use kinds::marker;
+use sync::{poison, AsMutexGuard};
+use sys_common::mutex as sys;
/// A mutual exclusion primitive useful for protecting shared data
///
-/// This mutex will properly block tasks waiting for the lock to become
-/// available. The mutex can also be statically initialized or created via a
-/// `new` constructor.
+/// This mutex will block threads waiting for the lock to become available. The
+/// mutex can also be statically initialized or created via a `new`
+/// constructor. Each mutex has a type parameter which represents the data that
+/// it is protecting. The data can only be accessed through the RAII guards
+/// returned from `lock` and `try_lock`, which guarantees that the data is only
+/// ever accessed when the mutex is locked.
+///
+/// # Poisoning
+///
+/// In order to prevent access to otherwise invalid data, each mutex will
+/// propagate any panics which occur while the lock is held. Once a thread has
+/// panicked while holding the lock, all other threads will immediately
+/// panic as well once they acquire the lock.
///
/// # Example
///
-/// ```rust,ignore
-/// use std::sync::mutex::Mutex;
+/// ```rust
+/// use std::sync::{Arc, Mutex};
+/// const N: uint = 10;
///
-/// let m = Mutex::new();
-/// let guard = m.lock();
-/// // do some work
-/// drop(guard); // unlock the lock
+/// // Spawn a few threads to increment a shared variable (non-atomically), and
+/// // let the main thread know once all increments are done.
+/// //
+/// // Here we're using an Arc to share memory among tasks, and the data inside
+/// // the Arc is protected with a mutex.
+/// let data = Arc::new(Mutex::new(0));
+///
+/// let (tx, rx) = channel();
+/// for _ in range(0u, 10) {
+/// let (data, tx) = (data.clone(), tx.clone());
+/// spawn(proc() {
+///         // The shared state can only be accessed once the lock is held.
+/// // Our non-atomic increment is safe because we're the only thread
+/// // which can access the shared state when the lock is held.
+/// let mut data = data.lock();
+/// *data += 1;
+/// if *data == N {
+/// tx.send(());
+/// }
+/// // the lock is unlocked here when `data` goes out of scope.
+/// });
+/// }
+///
+/// rx.recv();
/// ```
-pub struct Mutex {
+pub struct Mutex<T> {
// Note that this static mutex is in a *box*, not inlined into the struct
- // itself. This is done for memory safety reasons with the usage of a
- // StaticNativeMutex inside the static mutex above. Once a native mutex has
- // been used once, its address can never change (it can't be moved). This
- // mutex type can be safely moved at any time, so to ensure that the native
- // mutex is used correctly we box the inner lock to give it a constant
- // address.
- lock: Box<StaticMutex>,
+ // itself. Once a native mutex has been used once, its address can never
+ // change (it can't be moved). This mutex type can be safely moved at any
+ // time, so to ensure that the native mutex is used correctly we box the
+ // inner lock to give it a constant address.
+ inner: Box<StaticMutex>,
+ data: UnsafeCell<T>,
}
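The poisoning behaviour described in the doc comment above can be modeled without any real threads: a panic while the lock is held marks the mutex, and every later lock attempt fails immediately. This is a sketch of the semantics only, not the std implementation, and the names are illustrative:

```python
# Minimal model of mutex poisoning semantics.
class PoisonError(Exception):
    pass

class PoisoningMutex:
    def __init__(self, value):
        self.value = value
        self.poisoned = False

    def lock(self, critical_section):
        if self.poisoned:
            raise PoisonError("mutex was poisoned by a panicked holder")
        try:
            self.value = critical_section(self.value)
        except Exception:
            self.poisoned = True   # holder "panicked": poison the mutex
            raise

m = PoisoningMutex(0)
m.lock(lambda v: v + 1)
assert m.value == 1

def bad(v):
    raise RuntimeError("panic while holding the lock")

try:
    m.lock(bad)          # this holder panics...
except RuntimeError:
    pass

try:
    m.lock(lambda v: v + 1)
    raised = False
except PoisonError:      # ...so every later locker panics too
    raised = True
assert raised
```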
/// The static mutex type is provided to allow for static allocation of mutexes.
///
/// # Example
///
-/// ```rust,ignore
-/// use std::sync::mutex::{StaticMutex, MUTEX_INIT};
+/// ```rust
+/// use std::sync::{StaticMutex, MUTEX_INIT};
///
/// static LOCK: StaticMutex = MUTEX_INIT;
///
/// // lock is unlocked here.
/// ```
pub struct StaticMutex {
- lock: mutex::StaticNativeMutex,
+ lock: sys::Mutex,
+ poison: UnsafeCell<poison::Flag>,
}
/// An RAII implementation of a "scoped lock" of a mutex. When this structure is
/// dropped (falls out of scope), the lock will be unlocked.
+///
+/// The data protected by the mutex can be accessed through this guard via its
+/// `Deref` and `DerefMut` implementations.
#[must_use]
-pub struct Guard<'a> {
- guard: mutex::LockGuard<'a>,
+pub struct MutexGuard<'a, T: 'a> {
+ // funny underscores due to how Deref/DerefMut currently work (they
+ // disregard field privacy).
+ __lock: &'a Mutex<T>,
+ __guard: StaticMutexGuard,
}
-fn lift_guard(guard: mutex::LockGuard) -> Guard {
- Guard { guard: guard }
+/// An RAII implementation of a "scoped lock" of a static mutex. When this
+/// structure is dropped (falls out of scope), the lock will be unlocked.
+#[must_use]
+pub struct StaticMutexGuard {
+ lock: &'static sys::Mutex,
+ marker: marker::NoSend,
+ poison: poison::Guard<'static>,
}
/// Static initialization of a mutex. This constant can be used to initialize
/// other mutex constants.
pub const MUTEX_INIT: StaticMutex = StaticMutex {
- lock: mutex::NATIVE_MUTEX_INIT
+ lock: sys::MUTEX_INIT,
+ poison: UnsafeCell { value: poison::Flag { failed: false } },
};
-impl StaticMutex {
- /// Attempts to grab this lock, see `Mutex::try_lock`
- pub fn try_lock<'a>(&'a self) -> Option<Guard<'a>> {
- unsafe { self.lock.trylock().map(lift_guard) }
+impl<T: Send> Mutex<T> {
+ /// Creates a new mutex in an unlocked state ready for use.
+ pub fn new(t: T) -> Mutex<T> {
+ Mutex {
+ inner: box MUTEX_INIT,
+ data: UnsafeCell::new(t),
+ }
+ }
+
+ /// Acquires a mutex, blocking the current task until it is able to do so.
+ ///
+    /// This function will block the local task until it is able to acquire
+ /// the mutex. Upon returning, the task is the only task with the mutex
+ /// held. An RAII guard is returned to allow scoped unlock of the lock. When
+ /// the guard goes out of scope, the mutex will be unlocked.
+ ///
+ /// # Panics
+ ///
+ /// If another user of this mutex panicked while holding the mutex, then
+ /// this call will immediately panic once the mutex is acquired.
+ pub fn lock(&self) -> MutexGuard<T> {
+ unsafe {
+ let lock: &'static StaticMutex = &*(&*self.inner as *const _);
+ MutexGuard::new(self, lock.lock())
+ }
+ }
+
+ /// Attempts to acquire this lock.
+ ///
+ /// If the lock could not be acquired at this time, then `None` is returned.
+ /// Otherwise, an RAII guard is returned. The lock will be unlocked when the
+ /// guard is dropped.
+ ///
+ /// This function does not block.
+ ///
+ /// # Panics
+ ///
+ /// If another user of this mutex panicked while holding the mutex, then
+ /// this call will immediately panic if the mutex would otherwise be
+ /// acquired.
+ pub fn try_lock(&self) -> Option<MutexGuard<T>> {
+ unsafe {
+ let lock: &'static StaticMutex = &*(&*self.inner as *const _);
+ lock.try_lock().map(|guard| {
+ MutexGuard::new(self, guard)
+ })
+ }
}
+}
+#[unsafe_destructor]
+impl<T: Send> Drop for Mutex<T> {
+ fn drop(&mut self) {
+        // This is actually safe because we know that there is no further usage of
+ // this mutex (it's up to the user to arrange for a mutex to get
+ // dropped, that's not our job)
+ unsafe { self.inner.lock.destroy() }
+ }
+}
+
+impl StaticMutex {
/// Acquires this lock, see `Mutex::lock`
- pub fn lock<'a>(&'a self) -> Guard<'a> {
- lift_guard(unsafe { self.lock.lock() })
+ pub fn lock(&'static self) -> StaticMutexGuard {
+ unsafe { self.lock.lock() }
+ StaticMutexGuard::new(self)
+ }
+
+ /// Attempts to grab this lock, see `Mutex::try_lock`
+ pub fn try_lock(&'static self) -> Option<StaticMutexGuard> {
+ if unsafe { self.lock.try_lock() } {
+ Some(StaticMutexGuard::new(self))
+ } else {
+ None
+ }
}
/// Deallocates resources associated with this static mutex.
/// *all* platforms. It may be the case that some platforms do not leak
/// memory if this method is not called, but this is not guaranteed to be
/// true on all platforms.
- pub unsafe fn destroy(&self) {
+ pub unsafe fn destroy(&'static self) {
self.lock.destroy()
}
}
-impl Mutex {
- /// Creates a new mutex in an unlocked state ready for use.
- pub fn new() -> Mutex {
- Mutex {
- lock: box StaticMutex {
- lock: unsafe { mutex::StaticNativeMutex::new() },
- }
- }
+impl<'mutex, T> MutexGuard<'mutex, T> {
+ fn new(lock: &Mutex<T>, guard: StaticMutexGuard) -> MutexGuard<T> {
+ MutexGuard { __lock: lock, __guard: guard }
}
+}
- /// Attempts to acquire this lock.
- ///
- /// If the lock could not be acquired at this time, then `None` is returned.
- /// Otherwise, an RAII guard is returned. The lock will be unlocked when the
- /// guard is dropped.
- ///
- /// This function does not block.
- pub fn try_lock<'a>(&'a self) -> Option<Guard<'a>> {
- self.lock.try_lock()
+impl<'mutex, T> AsMutexGuard for MutexGuard<'mutex, T> {
+ unsafe fn as_mutex_guard(&self) -> &StaticMutexGuard { &self.__guard }
+}
+
+impl<'mutex, T> Deref<T> for MutexGuard<'mutex, T> {
+ fn deref<'a>(&'a self) -> &'a T { unsafe { &*self.__lock.data.get() } }
+}
+impl<'mutex, T> DerefMut<T> for MutexGuard<'mutex, T> {
+ fn deref_mut<'a>(&'a mut self) -> &'a mut T {
+ unsafe { &mut *self.__lock.data.get() }
}
+}
- /// Acquires a mutex, blocking the current task until it is able to do so.
- ///
- /// This function will block the local task until it is available to acquire
- /// the mutex. Upon returning, the task is the only task with the mutex
- /// held. An RAII guard is returned to allow scoped unlock of the lock. When
- /// the guard goes out of scope, the mutex will be unlocked.
- pub fn lock<'a>(&'a self) -> Guard<'a> { self.lock.lock() }
+impl StaticMutexGuard {
+ fn new(lock: &'static StaticMutex) -> StaticMutexGuard {
+ unsafe {
+ let guard = StaticMutexGuard {
+ lock: &lock.lock,
+ marker: marker::NoSend,
+ poison: (*lock.poison.get()).borrow(),
+ };
+ guard.poison.check("mutex");
+ return guard;
+ }
+ }
+}
+
+pub fn guard_lock(guard: &StaticMutexGuard) -> &sys::Mutex { guard.lock }
+pub fn guard_poison(guard: &StaticMutexGuard) -> &poison::Guard {
+ &guard.poison
+}
+
+impl AsMutexGuard for StaticMutexGuard {
+ unsafe fn as_mutex_guard(&self) -> &StaticMutexGuard { self }
}
-impl Drop for Mutex {
+#[unsafe_destructor]
+impl Drop for StaticMutexGuard {
fn drop(&mut self) {
- // This is actually safe b/c we know that there is no further usage of
- // this mutex (it's up to the user to arrange for a mutex to get
- // dropped, that's not our job)
- unsafe { self.lock.destroy() }
+ unsafe {
+ self.poison.done();
+ self.lock.unlock();
+ }
}
}
#[cfg(test)]
mod test {
use prelude::*;
- use super::{Mutex, StaticMutex, MUTEX_INIT};
+
+ use task;
+ use sync::{Arc, Mutex, StaticMutex, MUTEX_INIT, Condvar};
#[test]
fn smoke() {
- let m = Mutex::new();
+ let m = Mutex::new(());
drop(m.lock());
drop(m.lock());
}
}
#[test]
- fn trylock() {
- let m = Mutex::new();
+ fn try_lock() {
+ let m = Mutex::new(());
assert!(m.try_lock().is_some());
}
+
+ #[test]
+ fn test_mutex_arc_condvar() {
+ let arc = Arc::new((Mutex::new(false), Condvar::new()));
+ let arc2 = arc.clone();
+ let (tx, rx) = channel();
+ spawn(proc() {
+ // wait until parent gets in
+ rx.recv();
+ let &(ref lock, ref cvar) = &*arc2;
+ let mut lock = lock.lock();
+ *lock = true;
+ cvar.notify_one();
+ });
+
+ let &(ref lock, ref cvar) = &*arc;
+ let lock = lock.lock();
+ tx.send(());
+ assert!(!*lock);
+ while !*lock {
+ cvar.wait(&lock);
+ }
+ }
+
+ #[test]
+ #[should_fail]
+ fn test_arc_condvar_poison() {
+ let arc = Arc::new((Mutex::new(1i), Condvar::new()));
+ let arc2 = arc.clone();
+ let (tx, rx) = channel();
+
+ spawn(proc() {
+ rx.recv();
+ let &(ref lock, ref cvar) = &*arc2;
+ let _g = lock.lock();
+ cvar.notify_one();
+ // Parent should fail when it wakes up.
+ panic!();
+ });
+
+ let &(ref lock, ref cvar) = &*arc;
+ let lock = lock.lock();
+ tx.send(());
+ while *lock == 1 {
+ cvar.wait(&lock);
+ }
+ }
+
+ #[test]
+ #[should_fail]
+ fn test_mutex_arc_poison() {
+ let arc = Arc::new(Mutex::new(1i));
+ let arc2 = arc.clone();
+ let _ = task::try(proc() {
+ let lock = arc2.lock();
+ assert_eq!(*lock, 2);
+ });
+ let lock = arc.lock();
+ assert_eq!(*lock, 1);
+ }
+
+ #[test]
+ fn test_mutex_arc_nested() {
+ // Tests nested mutexes and access
+ // to underlying data.
+ let arc = Arc::new(Mutex::new(1i));
+ let arc2 = Arc::new(Mutex::new(arc));
+ let (tx, rx) = channel();
+ spawn(proc() {
+ let lock = arc2.lock();
+ let lock2 = lock.deref().lock();
+ assert_eq!(*lock2, 1);
+ tx.send(());
+ });
+ rx.recv();
+ }
+
+ #[test]
+ fn test_mutex_arc_access_in_unwind() {
+ let arc = Arc::new(Mutex::new(1i));
+ let arc2 = arc.clone();
+ let _ = task::try::<()>(proc() {
+ struct Unwinder {
+ i: Arc<Mutex<int>>,
+ }
+ impl Drop for Unwinder {
+ fn drop(&mut self) {
+ *self.i.lock() += 1;
+ }
+ }
+ let _u = Unwinder { i: arc2 };
+ panic!();
+ });
+ let lock = arc.lock();
+ assert_eq!(*lock, 2);
+ }
}
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+//! A "once initialization" primitive
+//!
+//! This primitive is meant to be used to run one-time initialization. An
+//! example use case would be for initializing an FFI library.
+
+use int;
+use mem::drop;
+use sync::atomic;
+use sync::{StaticMutex, MUTEX_INIT};
+
+/// A synchronization primitive which can be used to run a one-time global
+/// initialization. Useful for one-time initialization for FFI or related
+/// functionality. This type can only be constructed with the `ONCE_INIT`
+/// value.
+///
+/// # Example
+///
+/// ```rust
+/// use std::sync::{Once, ONCE_INIT};
+///
+/// static START: Once = ONCE_INIT;
+///
+/// START.doit(|| {
+/// // run initialization here
+/// });
+/// ```
+pub struct Once {
+ mutex: StaticMutex,
+ cnt: atomic::AtomicInt,
+ lock_cnt: atomic::AtomicInt,
+}
+
+/// Initialization value for static `Once` values.
+pub const ONCE_INIT: Once = Once {
+ mutex: MUTEX_INIT,
+ cnt: atomic::INIT_ATOMIC_INT,
+ lock_cnt: atomic::INIT_ATOMIC_INT,
+};
+
+impl Once {
+ /// Perform an initialization routine once and only once. The given closure
+ /// will be executed if this is the first time `doit` has been called, and
+ /// otherwise the routine will *not* be invoked.
+ ///
+ /// This method will block the calling task if another initialization
+ /// routine is currently running.
+ ///
+ /// When this function returns, it is guaranteed that some initialization
+ /// has run and completed (it may not be the closure specified).
+ pub fn doit(&'static self, f: ||) {
+ // Optimize common path: load is much cheaper than fetch_add.
+ if self.cnt.load(atomic::SeqCst) < 0 {
+ return
+ }
+
+ // Implementation-wise, this would seem like a fairly trivial primitive.
+ // The stickler part is where our mutexes currently require an
+ // allocation, and usage of a `Once` shouldn't leak this allocation.
+ //
+ // This means that there must be a deterministic destroyer of the mutex
+ // contained within (because it's not needed after the initialization
+ // has run).
+ //
+ // The general scheme here is to gate all future threads once
+ // initialization has completed with a "very negative" count, and to
+        // allow through threads to lock the mutex if they see a non-negative
+ // count. For all threads grabbing the mutex, exactly one of them should
+ // be responsible for unlocking the mutex, and this should only be done
+ // once everyone else is done with the mutex.
+ //
+ // This atomicity is achieved by swapping a very negative value into the
+ // shared count when the initialization routine has completed. This will
+ // read the number of threads which will at some point attempt to
+ // acquire the mutex. This count is then squirreled away in a separate
+ // variable, and the last person on the way out of the mutex is then
+ // responsible for destroying the mutex.
+ //
+ // It is crucial that the negative value is swapped in *after* the
+ // initialization routine has completed because otherwise new threads
+ // calling `doit` will return immediately before the initialization has
+ // completed.
+
+ let prev = self.cnt.fetch_add(1, atomic::SeqCst);
+ if prev < 0 {
+ // Make sure we never overflow, we'll never have int::MIN
+ // simultaneous calls to `doit` to make this value go back to 0
+ self.cnt.store(int::MIN, atomic::SeqCst);
+ return
+ }
+
+ // If the count is negative, then someone else finished the job,
+ // otherwise we run the job and record how many people will try to grab
+ // this lock
+ let guard = self.mutex.lock();
+ if self.cnt.load(atomic::SeqCst) > 0 {
+ f();
+ let prev = self.cnt.swap(int::MIN, atomic::SeqCst);
+ self.lock_cnt.store(prev, atomic::SeqCst);
+ }
+ drop(guard);
+
+ // Last one out cleans up after everyone else, no leaks!
+ if self.lock_cnt.fetch_add(-1, atomic::SeqCst) == 1 {
+ unsafe { self.mutex.destroy() }
+ }
+ }
+}
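The counting scheme spelled out in the comments above can be walked through sequentially: callers increment `cnt`, the initializer swaps a very negative sentinel in after running, records how many ticket-holders must still pass through the mutex in `lock_cnt`, and the last one out destroys the mutex. This Python model is a single-threaded illustration only; the real code uses atomics and an actual mutex, and all names here are invented for the sketch:

```python
# Sequential walkthrough of the Once counting scheme.
INT_MIN = -(2**63)

class OnceModel:
    def __init__(self):
        self.cnt = 0
        self.lock_cnt = 0
        self.ran = 0
        self.mutex_destroyed = False

    def doit(self, f):
        if self.cnt < 0:                 # fast path: already initialized
            return
        prev = self.cnt
        self.cnt += 1                    # fetch_add(1) in the real code
        if prev < 0:
            self.cnt = INT_MIN           # re-pin the sentinel; never overflow
            return
        # --- mutex acquired here in the real code ---
        if self.cnt > 0:                 # first one in runs the closure
            f()
            prev, self.cnt = self.cnt, INT_MIN   # swap(int::MIN)
            self.lock_cnt = prev         # callers that still hold "tickets"
        # --- mutex released here ---
        self.lock_cnt -= 1               # fetch_add(-1)
        if self.lock_cnt == 0:
            self.mutex_destroyed = True  # last one out cleans up, no leaks

o = OnceModel()
o.doit(lambda: setattr(o, "ran", o.ran + 1))
assert o.ran == 1 and o.mutex_destroyed
o.doit(lambda: setattr(o, "ran", o.ran + 1))  # fast path: closure not rerun
assert o.ran == 1
```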
+
+#[cfg(test)]
+mod test {
+ use prelude::*;
+
+ use task;
+ use super::{ONCE_INIT, Once};
+
+ #[test]
+ fn smoke_once() {
+ static O: Once = ONCE_INIT;
+ let mut a = 0i;
+ O.doit(|| a += 1);
+ assert_eq!(a, 1);
+ O.doit(|| a += 1);
+ assert_eq!(a, 1);
+ }
+
+ #[test]
+ fn stampede_once() {
+ static O: Once = ONCE_INIT;
+ static mut run: bool = false;
+
+ let (tx, rx) = channel();
+ for _ in range(0u, 10) {
+ let tx = tx.clone();
+ spawn(proc() {
+ for _ in range(0u, 4) { task::deschedule() }
+ unsafe {
+ O.doit(|| {
+ assert!(!run);
+ run = true;
+ });
+ assert!(run);
+ }
+ tx.send(());
+ });
+ }
+
+ unsafe {
+ O.doit(|| {
+ assert!(!run);
+ run = true;
+ });
+ assert!(run);
+ }
+
+ for _ in range(0u, 10) {
+ rx.recv();
+ }
+ }
+}
+++ /dev/null
-// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-//! A "once initialization" primitive
-//!
-//! This primitive is meant to be used to run one-time initialization. An
-//! example use case would be for initializing an FFI library.
-
-use core::prelude::*;
-
-use core::int;
-use core::atomic;
-
-use super::mutex::{StaticMutex, MUTEX_INIT};
-
-/// A synchronization primitive which can be used to run a one-time global
-/// initialization. Useful for one-time initialization for FFI or related
-/// functionality. This type can only be constructed with the `ONCE_INIT`
-/// value.
-///
-/// # Example
-///
-/// ```rust,ignore
-/// use std::sync::one::{Once, ONCE_INIT};
-///
-/// static START: Once = ONCE_INIT;
-///
-/// START.doit(|| {
-/// // run initialization here
-/// });
-/// ```
-pub struct Once {
- mutex: StaticMutex,
- cnt: atomic::AtomicInt,
- lock_cnt: atomic::AtomicInt,
-}
-
-/// Initialization value for static `Once` values.
-pub const ONCE_INIT: Once = Once {
- mutex: MUTEX_INIT,
- cnt: atomic::INIT_ATOMIC_INT,
- lock_cnt: atomic::INIT_ATOMIC_INT,
-};
-
-impl Once {
- /// Perform an initialization routine once and only once. The given closure
- /// will be executed if this is the first time `doit` has been called, and
- /// otherwise the routine will *not* be invoked.
- ///
- /// This method will block the calling task if another initialization
- /// routine is currently running.
- ///
- /// When this function returns, it is guaranteed that some initialization
- /// has run and completed (it may not be the closure specified).
- pub fn doit(&self, f: ||) {
- // Optimize common path: load is much cheaper than fetch_add.
- if self.cnt.load(atomic::SeqCst) < 0 {
- return
- }
-
- // Implementation-wise, this would seem like a fairly trivial primitive.
- // The stickler part is where our mutexes currently require an
- // allocation, and usage of a `Once` shouldn't leak this allocation.
- //
- // This means that there must be a deterministic destroyer of the mutex
- // contained within (because it's not needed after the initialization
- // has run).
- //
- // The general scheme here is to gate all future threads once
- // initialization has completed with a "very negative" count, and to
- // allow threads through to lock the mutex if they see a non-negative
- // count. For all threads grabbing the mutex, exactly one of them should
- // be responsible for unlocking the mutex, and this should only be done
- // once everyone else is done with the mutex.
- //
- // This atomicity is achieved by swapping a very negative value into the
- // shared count when the initialization routine has completed. This will
- // read the number of threads which will at some point attempt to
- // acquire the mutex. This count is then squirreled away in a separate
- // variable, and the last person on the way out of the mutex is then
- // responsible for destroying the mutex.
- //
- // It is crucial that the negative value is swapped in *after* the
- // initialization routine has completed because otherwise new threads
- // calling `doit` will return immediately before the initialization has
- // completed.
-
- let prev = self.cnt.fetch_add(1, atomic::SeqCst);
- if prev < 0 {
- // Make sure we never overflow; we'll never see int::MIN
- // simultaneous calls to `doit`, so this value can't wrap back to 0
- self.cnt.store(int::MIN, atomic::SeqCst);
- return
- }
-
- // If the count is negative, then someone else finished the job,
- // otherwise we run the job and record how many people will try to grab
- // this lock
- let guard = self.mutex.lock();
- if self.cnt.load(atomic::SeqCst) > 0 {
- f();
- let prev = self.cnt.swap(int::MIN, atomic::SeqCst);
- self.lock_cnt.store(prev, atomic::SeqCst);
- }
- drop(guard);
-
- // Last one out cleans up after everyone else, no leaks!
- if self.lock_cnt.fetch_add(-1, atomic::SeqCst) == 1 {
- unsafe { self.mutex.destroy() }
- }
- }
-}
-
-#[cfg(test)]
-mod test {
- use prelude::*;
- use task;
- use super::{ONCE_INIT, Once};
-
- #[test]
- fn smoke_once() {
- static O: Once = ONCE_INIT;
- let mut a = 0i;
- O.doit(|| a += 1);
- assert_eq!(a, 1);
- O.doit(|| a += 1);
- assert_eq!(a, 1);
- }
-
- #[test]
- fn stampede_once() {
- static O: Once = ONCE_INIT;
- static mut run: bool = false;
-
- let (tx, rx) = channel();
- for _ in range(0u, 10) {
- let tx = tx.clone();
- spawn(proc() {
- for _ in range(0u, 4) { task::deschedule() }
- unsafe {
- O.doit(|| {
- assert!(!run);
- run = true;
- });
- assert!(run);
- }
- tx.send(());
- });
- }
-
- unsafe {
- O.doit(|| {
- assert!(!run);
- run = true;
- });
- assert!(run);
- }
-
- for _ in range(0u, 10) {
- rx.recv();
- }
- }
-}
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use option::None;
+use rustrt::task::Task;
+use rustrt::local::Local;
+
+pub struct Flag { pub failed: bool }
+
+impl Flag {
+ pub fn borrow(&mut self) -> Guard {
+ Guard { flag: &mut self.failed, failing: failing() }
+ }
+}
+
+pub struct Guard<'a> {
+ flag: &'a mut bool,
+ failing: bool,
+}
+
+impl<'a> Guard<'a> {
+ pub fn check(&self, name: &str) {
+ if *self.flag {
+ panic!("poisoned {} - another task failed inside", name);
+ }
+ }
+
+ pub fn done(&mut self) {
+ if !self.failing && failing() {
+ *self.flag = true;
+ }
+ }
+}
+
+fn failing() -> bool {
+ if Local::exists(None::<Task>) {
+ Local::borrow(None::<Task>).unwinder.unwinding()
+ } else {
+ false
+ }
+}
+++ /dev/null
-// Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-//! Raw concurrency primitives you know and love.
-//!
-//! These primitives are not recommended for general use, but are provided for
-//! flavorful use-cases. It is recommended to use the types at the top of the
-//! `sync` crate which wrap values directly and provide safer abstractions for
-//! containing data.
-
-// A side-effect of merging libsync into libstd; will go away once
-// libsync rewrite lands
-#![allow(dead_code)]
-
-use core::prelude::*;
-use self::ReacquireOrderLock::*;
-
-use core::atomic;
-use core::finally::Finally;
-use core::kinds::marker;
-use core::mem;
-use core::cell::UnsafeCell;
-use vec::Vec;
-
-use super::mutex;
-use comm::{Receiver, Sender, channel};
-
-// Each waiting task receives on one of these.
-type WaitEnd = Receiver<()>;
-type SignalEnd = Sender<()>;
-// A doubly-ended queue of waiting tasks.
-struct WaitQueue {
- head: Receiver<SignalEnd>,
- tail: Sender<SignalEnd>,
-}
-
-impl WaitQueue {
- fn new() -> WaitQueue {
- let (block_tail, block_head) = channel();
- WaitQueue { head: block_head, tail: block_tail }
- }
-
- // Signals one live task from the queue.
- fn signal(&self) -> bool {
- match self.head.try_recv() {
- Ok(ch) => {
- // Send a wakeup signal. If the waiter was killed, its port will
- // have closed. Keep trying until we get a live task.
- if ch.send_opt(()).is_ok() {
- true
- } else {
- self.signal()
- }
- }
- _ => false
- }
- }
-
- fn broadcast(&self) -> uint {
- let mut count = 0;
- loop {
- match self.head.try_recv() {
- Ok(ch) => {
- if ch.send_opt(()).is_ok() {
- count += 1;
- }
- }
- _ => break
- }
- }
- count
- }
-
- fn wait_end(&self) -> WaitEnd {
- let (signal_end, wait_end) = channel();
- self.tail.send(signal_end);
- wait_end
- }
-}
-
-// The building-block used to make semaphores, mutexes, and rwlocks.
-struct Sem<Q> {
- lock: mutex::Mutex,
- // n.b, we need Sem to be `Sync`, but the WaitQueue type is not send/share
- // (for good reason). We have an internal invariant on this semaphore,
- // however, that the queue is never accessed outside of a locked
- // context.
- inner: UnsafeCell<SemInner<Q>>
-}
-
-struct SemInner<Q> {
- count: int,
- waiters: WaitQueue,
- // Can be either unit or another waitqueue. Some sems shouldn't come with
- // a condition variable attached, others should.
- blocked: Q,
-}
-
-#[must_use]
-struct SemGuard<'a, Q:'a> {
- sem: &'a Sem<Q>,
-}
-
-impl<Q: Send> Sem<Q> {
- fn new(count: int, q: Q) -> Sem<Q> {
- assert!(count >= 0,
- "semaphores cannot be initialized with negative values");
- Sem {
- lock: mutex::Mutex::new(),
- inner: UnsafeCell::new(SemInner {
- waiters: WaitQueue::new(),
- count: count,
- blocked: q,
- })
- }
- }
-
- unsafe fn with(&self, f: |&mut SemInner<Q>|) {
- let _g = self.lock.lock();
- // This &mut is safe because, due to the lock, we are the only one who can touch the data
- f(&mut *self.inner.get())
- }
-
- pub fn acquire(&self) {
- unsafe {
- let mut waiter_nobe = None;
- self.with(|state| {
- state.count -= 1;
- if state.count < 0 {
- // Create waiter nobe, enqueue ourself, and tell
- // outer scope we need to block.
- waiter_nobe = Some(state.waiters.wait_end());
- }
- });
- // Uncomment if you wish to test for sem races. Not
- // valgrind-friendly.
- /* for _ in range(0u, 1000) { task::deschedule(); } */
- // Need to wait outside the exclusive.
- if waiter_nobe.is_some() {
- let _ = waiter_nobe.unwrap().recv();
- }
- }
- }
-
- pub fn release(&self) {
- unsafe {
- self.with(|state| {
- state.count += 1;
- if state.count <= 0 {
- state.waiters.signal();
- }
- })
- }
- }
-
- pub fn access<'a>(&'a self) -> SemGuard<'a, Q> {
- self.acquire();
- SemGuard { sem: self }
- }
-}
-
-#[unsafe_destructor]
-impl<'a, Q: Send> Drop for SemGuard<'a, Q> {
- fn drop(&mut self) {
- self.sem.release();
- }
-}
-
-impl Sem<Vec<WaitQueue>> {
- fn new_and_signal(count: int, num_condvars: uint) -> Sem<Vec<WaitQueue>> {
- let mut queues = Vec::new();
- for _ in range(0, num_condvars) { queues.push(WaitQueue::new()); }
- Sem::new(count, queues)
- }
-
- // The only other places that condvars get built are rwlock.write_cond()
- // and rwlock_write_mode.
- pub fn access_cond<'a>(&'a self) -> SemCondGuard<'a> {
- SemCondGuard {
- guard: self.access(),
- cvar: Condvar { sem: self, order: Nothing, nocopy: marker::NoCopy },
- }
- }
-}
-
-// FIXME(#3598): Want to use an Option down below, but we need a custom enum
-// that's not polymorphic to get around the fact that lifetimes are invariant
-// inside of type parameters.
-enum ReacquireOrderLock<'a> {
- Nothing, // c.c
- Just(&'a Semaphore),
-}
-
-/// A mechanism for atomic-unlock-and-deschedule blocking and signalling.
-pub struct Condvar<'a> {
- // The 'Sem' object associated with this condvar. This is the one that's
- // atomically-unlocked-and-descheduled upon and reacquired during wakeup.
- sem: &'a Sem<Vec<WaitQueue> >,
- // This is (can be) an extra semaphore which is held around the reacquire
- // operation on the first one. This is only used in cvars associated with
- // rwlocks, and is needed to ensure that, when a downgrader is trying to
- // hand off the access lock (which would be the first field, here), a 2nd
- // writer waking up from a cvar wait can't race with a reader to steal it,
- // See the comment in write_cond for more detail.
- order: ReacquireOrderLock<'a>,
- // Make sure condvars are non-copyable.
- nocopy: marker::NoCopy,
-}
-
-impl<'a> Condvar<'a> {
- /// Atomically drop the associated lock, and block until a signal is sent.
- ///
- /// # Panics
- ///
- /// A task which is killed while waiting on a condition variable will wake
- /// up, panic, and unlock the associated lock as it unwinds.
- pub fn wait(&self) { self.wait_on(0) }
-
- /// As wait(), but can specify which of multiple condition variables to
- /// wait on. Only a signal_on() or broadcast_on() with the same condvar_id
- /// will wake this thread.
- ///
- /// The associated lock must have been initialised with an appropriate
- /// number of condvars. The condvar_id must be between 0 and num_condvars-1
- /// or else this call will panic.
- ///
- /// wait() is equivalent to wait_on(0).
- pub fn wait_on(&self, condvar_id: uint) {
- let mut wait_end = None;
- let mut out_of_bounds = None;
- // Release lock, 'atomically' enqueuing ourselves in so doing.
- unsafe {
- self.sem.with(|state| {
- if condvar_id < state.blocked.len() {
- // Drop the lock.
- state.count += 1;
- if state.count <= 0 {
- state.waiters.signal();
- }
- // Create waiter nobe, and enqueue ourself to
- // be woken up by a signaller.
- wait_end = Some(state.blocked[condvar_id].wait_end());
- } else {
- out_of_bounds = Some(state.blocked.len());
- }
- })
- }
-
- // If deschedule checks start getting inserted anywhere, we can be
- // killed before or after enqueueing.
- check_cvar_bounds(out_of_bounds, condvar_id, "cond.wait_on()", || {
- // Unconditionally "block". (Might not actually block if a
- // signaller already sent -- I mean 'unconditionally' in contrast
- // with acquire().)
- (|| {
- let _ = wait_end.take().unwrap().recv();
- }).finally(|| {
- // Reacquire the condvar.
- match self.order {
- Just(lock) => {
- let _g = lock.access();
- self.sem.acquire();
- }
- Nothing => self.sem.acquire(),
- }
- })
- })
- }
-
- /// Wake up a blocked task. Returns false if there was no blocked task.
- pub fn signal(&self) -> bool { self.signal_on(0) }
-
- /// As signal, but with a specified condvar_id. See wait_on.
- pub fn signal_on(&self, condvar_id: uint) -> bool {
- unsafe {
- let mut out_of_bounds = None;
- let mut result = false;
- self.sem.with(|state| {
- if condvar_id < state.blocked.len() {
- result = state.blocked[condvar_id].signal();
- } else {
- out_of_bounds = Some(state.blocked.len());
- }
- });
- check_cvar_bounds(out_of_bounds,
- condvar_id,
- "cond.signal_on()",
- || result)
- }
- }
-
- /// Wake up all blocked tasks. Returns the number of tasks woken.
- pub fn broadcast(&self) -> uint { self.broadcast_on(0) }
-
- /// As broadcast, but with a specified condvar_id. See wait_on.
- pub fn broadcast_on(&self, condvar_id: uint) -> uint {
- let mut out_of_bounds = None;
- let mut queue = None;
- unsafe {
- self.sem.with(|state| {
- if condvar_id < state.blocked.len() {
- // To avoid :broadcast_heavy, we make a new waitqueue,
- // swap it out with the old one, and broadcast on the
- // old one outside of the little-lock.
- queue = Some(mem::replace(&mut state.blocked[condvar_id],
- WaitQueue::new()));
- } else {
- out_of_bounds = Some(state.blocked.len());
- }
- });
- check_cvar_bounds(out_of_bounds,
- condvar_id,
- "cond.broadcast_on()",
- || {
- queue.take().unwrap().broadcast()
- })
- }
- }
-}
-
-// Checks whether a condvar ID was out of bounds, and panics if so, or does
-// something else next on success.
-#[inline]
-fn check_cvar_bounds<U>(
- out_of_bounds: Option<uint>,
- id: uint,
- act: &str,
- blk: || -> U)
- -> U {
- match out_of_bounds {
- Some(0) =>
- panic!("{} with illegal ID {} - this lock has no condvars!", act, id),
- Some(length) =>
- panic!("{} with illegal ID {} - ID must be less than {}", act, id, length),
- None => blk()
- }
-}
-
-#[must_use]
-struct SemCondGuard<'a> {
- guard: SemGuard<'a, Vec<WaitQueue>>,
- cvar: Condvar<'a>,
-}
-
-/// A counting, blocking, bounded-waiting semaphore.
-pub struct Semaphore {
- sem: Sem<()>,
-}
-
-/// An RAII guard used to represent an acquired resource to a semaphore. When
-/// dropped, this value will release the resource back to the semaphore.
-#[must_use]
-pub struct SemaphoreGuard<'a> {
- _guard: SemGuard<'a, ()>,
-}
-
-impl Semaphore {
- /// Create a new semaphore with the specified count.
- ///
- /// # Panics
- ///
- /// This function will panic if `count` is negative.
- pub fn new(count: int) -> Semaphore {
- Semaphore { sem: Sem::new(count, ()) }
- }
-
- /// Acquire a resource represented by the semaphore. Blocks if necessary
- /// until resource(s) become available.
- pub fn acquire(&self) { self.sem.acquire() }
-
- /// Release a held resource represented by the semaphore. Wakes a blocked
- /// contending task, if any exist. Won't block the caller.
- pub fn release(&self) { self.sem.release() }
-
- /// Acquire a resource of this semaphore, returning an RAII guard which will
- /// release the resource when dropped.
- pub fn access<'a>(&'a self) -> SemaphoreGuard<'a> {
- SemaphoreGuard { _guard: self.sem.access() }
- }
-}
-
-/// A blocking, bounded-waiting, mutual exclusion lock with an associated
-/// FIFO condition variable.
-///
-/// # Panics
-///
-/// A task which panics while holding a mutex will unlock the mutex as it
-/// unwinds.
-pub struct Mutex {
- sem: Sem<Vec<WaitQueue>>,
-}
-
-/// An RAII structure which is used to gain access to a mutex's condition
-/// variable. Additionally, when a value of this type is dropped, the
-/// corresponding mutex is also unlocked.
-#[must_use]
-pub struct MutexGuard<'a> {
- _guard: SemGuard<'a, Vec<WaitQueue>>,
- /// Inner condition variable which is connected to the outer mutex, and can
- /// be used for atomic-unlock-and-deschedule.
- pub cond: Condvar<'a>,
-}
-
-impl Mutex {
- /// Create a new mutex, with one associated condvar.
- pub fn new() -> Mutex { Mutex::new_with_condvars(1) }
-
- /// Create a new mutex, with a specified number of associated condvars. This
- /// will allow calling wait_on/signal_on/broadcast_on with condvar IDs
- /// between 0 and num_condvars-1. (If num_condvars is 0, lock_cond will be
- /// allowed but any operations on the condvar will panic.)
- pub fn new_with_condvars(num_condvars: uint) -> Mutex {
- Mutex { sem: Sem::new_and_signal(1, num_condvars) }
- }
-
- /// Acquires ownership of this mutex, returning an RAII guard which will
- /// unlock the mutex when dropped. The associated condition variable can
- /// also be accessed through the returned guard.
- pub fn lock<'a>(&'a self) -> MutexGuard<'a> {
- let SemCondGuard { guard, cvar } = self.sem.access_cond();
- MutexGuard { _guard: guard, cond: cvar }
- }
-}
-
-// NB: Wikipedia - Readers-writers_problem#The_third_readers-writers_problem
-
-/// A blocking, no-starvation, reader-writer lock with an associated condvar.
-///
-/// # Panics
-///
-/// A task which panics while holding an rwlock will unlock the rwlock as it
-/// unwinds.
-pub struct RWLock {
- order_lock: Semaphore,
- access_lock: Sem<Vec<WaitQueue>>,
-
- // The only way the count flag is ever accessed is with xadd. Since it is
- // a read-modify-write operation, multiple xadds on different cores will
- // always be consistent with respect to each other, so a monotonic/relaxed
- // consistency ordering suffices (i.e., no extra barriers are needed).
- //
- // FIXME(#6598): The atomics module has no relaxed ordering flag, so I use
- // acquire/release orderings superfluously. Change these someday.
- read_count: atomic::AtomicUint,
-}
-
-/// An RAII helper which is created by acquiring a read lock on an RWLock. When
-/// dropped, this will unlock the RWLock.
-#[must_use]
-pub struct RWLockReadGuard<'a> {
- lock: &'a RWLock,
-}
-
-/// An RAII helper which is created by acquiring a write lock on an RWLock. When
-/// dropped, this will unlock the RWLock.
-///
-/// A value of this type can also be consumed to downgrade to a read-only lock.
-#[must_use]
-pub struct RWLockWriteGuard<'a> {
- lock: &'a RWLock,
- /// Inner condition variable that is connected to the write-mode of the
- /// outer rwlock.
- pub cond: Condvar<'a>,
-}
-
-impl RWLock {
- /// Create a new rwlock, with one associated condvar.
- pub fn new() -> RWLock { RWLock::new_with_condvars(1) }
-
- /// Create a new rwlock, with a specified number of associated condvars.
- /// Similar to mutex_with_condvars.
- pub fn new_with_condvars(num_condvars: uint) -> RWLock {
- RWLock {
- order_lock: Semaphore::new(1),
- access_lock: Sem::new_and_signal(1, num_condvars),
- read_count: atomic::AtomicUint::new(0),
- }
- }
-
- /// Acquires a read-lock, returning an RAII guard that will unlock the lock
- /// when dropped. Calls to 'read' from other tasks may run concurrently with
- /// this one.
- pub fn read<'a>(&'a self) -> RWLockReadGuard<'a> {
- let _guard = self.order_lock.access();
- let old_count = self.read_count.fetch_add(1, atomic::Acquire);
- if old_count == 0 {
- self.access_lock.acquire();
- }
- RWLockReadGuard { lock: self }
- }
-
- /// Acquire a write-lock, returning an RAII guard that will unlock the lock
- /// when dropped. No calls to 'read' or 'write' from other tasks will run
- /// concurrently with this one.
- ///
- /// You can also downgrade a write to a read by calling the `downgrade`
- /// method on the returned guard. Additionally, the guard will contain a
- /// `Condvar` attached to this lock.
- ///
- /// # Example
- ///
- /// ```{rust,ignore}
- /// use std::sync::raw::RWLock;
- ///
- /// let lock = RWLock::new();
- /// let write = lock.write();
- /// // ... exclusive access ...
- /// let read = write.downgrade();
- /// // ... shared access ...
- /// drop(read);
- /// ```
- pub fn write<'a>(&'a self) -> RWLockWriteGuard<'a> {
- let _g = self.order_lock.access();
- self.access_lock.acquire();
-
- // It's important to thread our order lock into the condvar, so that
- // when a cond.wait() wakes up, it uses it while reacquiring the
- // access lock. If we permitted a waking-up writer to "cut in line",
- // there could arise a subtle race when a downgrader attempts to hand
- // off the reader cloud lock to a waiting reader. This race is tested
- // in arc.rs (test_rw_write_cond_downgrade_read_race) and looks like:
- // T1 (writer) T2 (downgrader) T3 (reader)
- // [in cond.wait()]
- // [locks for writing]
- // [holds access_lock]
- // [is signalled, perhaps by
- // downgrader or a 4th thread]
- // tries to lock access(!)
- // lock order_lock
- // xadd read_count[0->1]
- // tries to lock access
- // [downgrade]
- // xadd read_count[1->2]
- // unlock access
- // Since T1 contended on the access lock before T3 did, it will steal
- // the lock handoff. Adding order_lock in the condvar reacquire path
- // solves this because T1 will hold order_lock while waiting on access,
- // which will cause T3 to have to wait until T1 finishes its write,
- // which can't happen until T2 finishes the downgrade-read entirely.
- // The astute reader will also note that making waking writers use the
- // order_lock is better for not starving readers.
- RWLockWriteGuard {
- lock: self,
- cond: Condvar {
- sem: &self.access_lock,
- order: Just(&self.order_lock),
- nocopy: marker::NoCopy,
- }
- }
- }
-}
-
-impl<'a> RWLockWriteGuard<'a> {
- /// Consumes this write lock and converts it into a read lock.
- pub fn downgrade(self) -> RWLockReadGuard<'a> {
- let lock = self.lock;
- // Don't run the destructor of the write guard, we're in charge of
- // things from now on
- unsafe { mem::forget(self) }
-
- let old_count = lock.read_count.fetch_add(1, atomic::Release);
- // If another reader was already blocking, we need to hand-off
- // the "reader cloud" access lock to them.
- if old_count != 0 {
- // Guaranteed not to let another writer in, because
- // another reader was holding the order_lock. Hence they
- // must be the one to get the access_lock (because all
- // access_locks are acquired with order_lock held). See
- // the comment in write_cond for more justification.
- lock.access_lock.release();
- }
- RWLockReadGuard { lock: lock }
- }
-}
-
-#[unsafe_destructor]
-impl<'a> Drop for RWLockWriteGuard<'a> {
- fn drop(&mut self) {
- self.lock.access_lock.release();
- }
-}
-
-#[unsafe_destructor]
-impl<'a> Drop for RWLockReadGuard<'a> {
- fn drop(&mut self) {
- let old_count = self.lock.read_count.fetch_sub(1, atomic::Release);
- assert!(old_count > 0);
- if old_count == 1 {
- // Note: this release used to be outside of a locked access
- // to exclusive-protected state. If this code is ever
- // converted back to such (instead of using atomic ops),
- // this access MUST NOT go inside the exclusive access.
- self.lock.access_lock.release();
- }
- }
-}
-
-#[cfg(test)]
-mod tests {
- pub use self::RWLockMode::*;
-
- use sync::Arc;
- use prelude::*;
- use super::{Semaphore, Mutex, RWLock, Condvar};
-
- use mem;
- use result;
- use task;
-
- #[test]
- fn test_sem_acquire_release() {
- let s = Semaphore::new(1);
- s.acquire();
- s.release();
- s.acquire();
- }
-
- #[test]
- fn test_sem_basic() {
- let s = Semaphore::new(1);
- let _g = s.access();
- }
-
- #[test]
- #[should_fail]
- fn test_sem_basic2() {
- Semaphore::new(-1);
- }
-
- #[test]
- fn test_sem_as_mutex() {
- let s = Arc::new(Semaphore::new(1));
- let s2 = s.clone();
- task::spawn(proc() {
- let _g = s2.access();
- for _ in range(0u, 5) { task::deschedule(); }
- });
- let _g = s.access();
- for _ in range(0u, 5) { task::deschedule(); }
- }
-
- #[test]
- fn test_sem_as_cvar() {
- /* Child waits and parent signals */
- let (tx, rx) = channel();
- let s = Arc::new(Semaphore::new(0));
- let s2 = s.clone();
- task::spawn(proc() {
- s2.acquire();
- tx.send(());
- });
- for _ in range(0u, 5) { task::deschedule(); }
- s.release();
- let _ = rx.recv();
-
- /* Parent waits and child signals */
- let (tx, rx) = channel();
- let s = Arc::new(Semaphore::new(0));
- let s2 = s.clone();
- task::spawn(proc() {
- for _ in range(0u, 5) { task::deschedule(); }
- s2.release();
- let _ = rx.recv();
- });
- s.acquire();
- tx.send(());
- }
-
- #[test]
- fn test_sem_multi_resource() {
- // Parent and child both get in the critical section at the same
- // time, and shake hands.
- let s = Arc::new(Semaphore::new(2));
- let s2 = s.clone();
- let (tx1, rx1) = channel();
- let (tx2, rx2) = channel();
- task::spawn(proc() {
- let _g = s2.access();
- let _ = rx2.recv();
- tx1.send(());
- });
- let _g = s.access();
- tx2.send(());
- let _ = rx1.recv();
- }
-
- #[test]
- fn test_sem_runtime_friendly_blocking() {
- // Force the runtime to schedule two threads on the same sched_loop.
- // When one blocks, it should schedule the other one.
- let s = Arc::new(Semaphore::new(1));
- let s2 = s.clone();
- let (tx, rx) = channel();
- {
- let _g = s.access();
- task::spawn(proc() {
- tx.send(());
- drop(s2.access());
- tx.send(());
- });
- rx.recv(); // wait for child to come alive
- for _ in range(0u, 5) { task::deschedule(); } // let the child contend
- }
- rx.recv(); // wait for child to be done
- }
-
- #[test]
- fn test_mutex_lock() {
- // Unsafely achieve shared state, and do the textbook
- // "load tmp = move ptr; inc tmp; store ptr <- tmp" dance.
- let (tx, rx) = channel();
- let m = Arc::new(Mutex::new());
- let m2 = m.clone();
- let mut sharedstate = box 0;
- {
- let ptr: *mut int = &mut *sharedstate;
- task::spawn(proc() {
- access_shared(ptr, &m2, 10);
- tx.send(());
- });
- }
- {
- access_shared(&mut *sharedstate, &m, 10);
- let _ = rx.recv();
-
- assert_eq!(*sharedstate, 20);
- }
-
- fn access_shared(sharedstate: *mut int, m: &Arc<Mutex>, n: uint) {
- for _ in range(0u, n) {
- let _g = m.lock();
- let oldval = unsafe { *sharedstate };
- task::deschedule();
- unsafe { *sharedstate = oldval + 1; }
- }
- }
- }
-
- #[test]
- fn test_mutex_cond_wait() {
- let m = Arc::new(Mutex::new());
-
- // Child wakes up parent
- {
- let lock = m.lock();
- let m2 = m.clone();
- task::spawn(proc() {
- let lock = m2.lock();
- let woken = lock.cond.signal();
- assert!(woken);
- });
- lock.cond.wait();
- }
- // Parent wakes up child
- let (tx, rx) = channel();
- let m3 = m.clone();
- task::spawn(proc() {
- let lock = m3.lock();
- tx.send(());
- lock.cond.wait();
- tx.send(());
- });
- rx.recv(); // Wait until child gets in the mutex
- {
- let lock = m.lock();
- let woken = lock.cond.signal();
- assert!(woken);
- }
- rx.recv(); // Wait until child wakes up
- }
-
- fn test_mutex_cond_broadcast_helper(num_waiters: uint) {
- let m = Arc::new(Mutex::new());
- let mut rxs = Vec::new();
-
- for _ in range(0u, num_waiters) {
- let mi = m.clone();
- let (tx, rx) = channel();
- rxs.push(rx);
- task::spawn(proc() {
- let lock = mi.lock();
- tx.send(());
- lock.cond.wait();
- tx.send(());
- });
- }
-
- // wait until all children get in the mutex
- for rx in rxs.iter_mut() { rx.recv(); }
- {
- let lock = m.lock();
- let num_woken = lock.cond.broadcast();
- assert_eq!(num_woken, num_waiters);
- }
- // wait until all children wake up
- for rx in rxs.iter_mut() { rx.recv(); }
- }
-
- #[test]
- fn test_mutex_cond_broadcast() {
- test_mutex_cond_broadcast_helper(12);
- }
-
- #[test]
- fn test_mutex_cond_broadcast_none() {
- test_mutex_cond_broadcast_helper(0);
- }
-
- #[test]
- fn test_mutex_cond_no_waiter() {
- let m = Arc::new(Mutex::new());
- let m2 = m.clone();
- let _ = task::try(proc() {
- drop(m.lock());
- });
- let lock = m2.lock();
- assert!(!lock.cond.signal());
- }
-
- #[test]
- fn test_mutex_killed_simple() {
- use any::Any;
-
- // Mutex must get automatically unlocked if panicked/killed within.
- let m = Arc::new(Mutex::new());
- let m2 = m.clone();
-
- let result: result::Result<(), Box<Any + Send>> = task::try(proc() {
- let _lock = m2.lock();
- panic!();
- });
- assert!(result.is_err());
- // child task must have finished by the time try returns
- drop(m.lock());
- }
-
- #[test]
- fn test_mutex_cond_signal_on_0() {
- // Tests that signal_on(0) is equivalent to signal().
- let m = Arc::new(Mutex::new());
- let lock = m.lock();
- let m2 = m.clone();
- task::spawn(proc() {
- let lock = m2.lock();
- lock.cond.signal_on(0);
- });
- lock.cond.wait();
- }
-
- #[test]
- fn test_mutex_no_condvars() {
- let result = task::try(proc() {
- let m = Mutex::new_with_condvars(0);
- m.lock().cond.wait();
- });
- assert!(result.is_err());
- let result = task::try(proc() {
- let m = Mutex::new_with_condvars(0);
- m.lock().cond.signal();
- });
- assert!(result.is_err());
- let result = task::try(proc() {
- let m = Mutex::new_with_condvars(0);
- m.lock().cond.broadcast();
- });
- assert!(result.is_err());
- }
-
- #[cfg(test)]
- pub enum RWLockMode { Read, Write, Downgrade, DowngradeRead }
-
- #[cfg(test)]
- fn lock_rwlock_in_mode(x: &Arc<RWLock>, mode: RWLockMode, blk: ||) {
- match mode {
- Read => { let _g = x.read(); blk() }
- Write => { let _g = x.write(); blk() }
- Downgrade => { let _g = x.write(); blk() }
- DowngradeRead => { let _g = x.write().downgrade(); blk() }
- }
- }
-
- #[cfg(test)]
- fn test_rwlock_exclusion(x: Arc<RWLock>,
- mode1: RWLockMode,
- mode2: RWLockMode) {
- // Test mutual exclusion between readers and writers. Just like the
- // mutex mutual exclusion test, a ways above.
- let (tx, rx) = channel();
- let x2 = x.clone();
- let mut sharedstate = box 0;
- {
- let ptr: *const int = &*sharedstate;
- task::spawn(proc() {
- let sharedstate: &mut int =
- unsafe { mem::transmute(ptr) };
- access_shared(sharedstate, &x2, mode1, 10);
- tx.send(());
- });
- }
- {
- access_shared(&mut *sharedstate, &x, mode2, 10);
- let _ = rx.recv();
-
- assert_eq!(*sharedstate, 20);
- }
-
- fn access_shared(sharedstate: &mut int, x: &Arc<RWLock>,
- mode: RWLockMode, n: uint) {
- for _ in range(0u, n) {
- lock_rwlock_in_mode(x, mode, || {
- let oldval = *sharedstate;
- task::deschedule();
- *sharedstate = oldval + 1;
- })
- }
- }
- }
-
- #[test]
- fn test_rwlock_readers_wont_modify_the_data() {
- test_rwlock_exclusion(Arc::new(RWLock::new()), Read, Write);
- test_rwlock_exclusion(Arc::new(RWLock::new()), Write, Read);
- test_rwlock_exclusion(Arc::new(RWLock::new()), Read, Downgrade);
- test_rwlock_exclusion(Arc::new(RWLock::new()), Downgrade, Read);
- test_rwlock_exclusion(Arc::new(RWLock::new()), Write, DowngradeRead);
- test_rwlock_exclusion(Arc::new(RWLock::new()), DowngradeRead, Write);
- }
-
- #[test]
- fn test_rwlock_writers_and_writers() {
- test_rwlock_exclusion(Arc::new(RWLock::new()), Write, Write);
- test_rwlock_exclusion(Arc::new(RWLock::new()), Write, Downgrade);
- test_rwlock_exclusion(Arc::new(RWLock::new()), Downgrade, Write);
- test_rwlock_exclusion(Arc::new(RWLock::new()), Downgrade, Downgrade);
- }
-
- #[cfg(test)]
- fn test_rwlock_handshake(x: Arc<RWLock>,
- mode1: RWLockMode,
- mode2: RWLockMode,
- make_mode2_go_first: bool) {
- // Much like sem_multi_resource.
- let x2 = x.clone();
- let (tx1, rx1) = channel();
- let (tx2, rx2) = channel();
- task::spawn(proc() {
- if !make_mode2_go_first {
- rx2.recv(); // parent sends to us once it locks, or ...
- }
- lock_rwlock_in_mode(&x2, mode2, || {
- if make_mode2_go_first {
- tx1.send(()); // ... we send to it once we lock
- }
- rx2.recv();
- tx1.send(());
- })
- });
- if make_mode2_go_first {
- rx1.recv(); // child sends to us once it locks, or ...
- }
- lock_rwlock_in_mode(&x, mode1, || {
- if !make_mode2_go_first {
- tx2.send(()); // ... we send to it once we lock
- }
- tx2.send(());
- rx1.recv();
- })
- }
-
- #[test]
- fn test_rwlock_readers_and_readers() {
- test_rwlock_handshake(Arc::new(RWLock::new()), Read, Read, false);
- // The downgrader needs to get in before the reader gets in, otherwise
- // they cannot end up reading at the same time.
- test_rwlock_handshake(Arc::new(RWLock::new()), DowngradeRead, Read, false);
- test_rwlock_handshake(Arc::new(RWLock::new()), Read, DowngradeRead, true);
- // Two downgrade_reads can never both end up reading at the same time.
- }
-
- #[test]
- fn test_rwlock_downgrade_unlock() {
- // Tests that downgrade can unlock the lock in both modes
- let x = Arc::new(RWLock::new());
- lock_rwlock_in_mode(&x, Downgrade, || { });
- test_rwlock_handshake(x, Read, Read, false);
- let y = Arc::new(RWLock::new());
- lock_rwlock_in_mode(&y, DowngradeRead, || { });
- test_rwlock_exclusion(y, Write, Write);
- }
-
- #[test]
- fn test_rwlock_read_recursive() {
- let x = RWLock::new();
- let _g1 = x.read();
- let _g2 = x.read();
- }
-
- #[test]
- fn test_rwlock_cond_wait() {
- // As test_mutex_cond_wait above.
- let x = Arc::new(RWLock::new());
-
- // Child wakes up parent
- {
- let lock = x.write();
- let x2 = x.clone();
- task::spawn(proc() {
- let lock = x2.write();
- assert!(lock.cond.signal());
- });
- lock.cond.wait();
- }
- // Parent wakes up child
- let (tx, rx) = channel();
- let x3 = x.clone();
- task::spawn(proc() {
- let lock = x3.write();
- tx.send(());
- lock.cond.wait();
- tx.send(());
- });
- rx.recv(); // Wait until child gets in the rwlock
- drop(x.read()); // Must be able to get in as a reader
- {
- let x = x.write();
- assert!(x.cond.signal());
- }
- rx.recv(); // Wait until child wakes up
- drop(x.read()); // Just for good measure
- }
-
- #[cfg(test)]
- fn test_rwlock_cond_broadcast_helper(num_waiters: uint) {
- // Much like the mutex broadcast test. Downgrade-enabled.
- fn lock_cond(x: &Arc<RWLock>, blk: |c: &Condvar|) {
- let lock = x.write();
- blk(&lock.cond);
- }
-
- let x = Arc::new(RWLock::new());
- let mut rxs = Vec::new();
-
- for _ in range(0u, num_waiters) {
- let xi = x.clone();
- let (tx, rx) = channel();
- rxs.push(rx);
- task::spawn(proc() {
- lock_cond(&xi, |cond| {
- tx.send(());
- cond.wait();
- tx.send(());
- })
- });
- }
-
- // wait until all children get in the mutex
- for rx in rxs.iter_mut() { let _ = rx.recv(); }
- lock_cond(&x, |cond| {
- let num_woken = cond.broadcast();
- assert_eq!(num_woken, num_waiters);
- });
- // wait until all children wake up
- for rx in rxs.iter_mut() { let _ = rx.recv(); }
- }
-
- #[test]
- fn test_rwlock_cond_broadcast() {
- test_rwlock_cond_broadcast_helper(0);
- test_rwlock_cond_broadcast_helper(12);
- }
-
- #[cfg(test)]
- fn rwlock_kill_helper(mode1: RWLockMode, mode2: RWLockMode) {
- use any::Any;
-
- // Mutex must get automatically unlocked if panicked/killed within.
- let x = Arc::new(RWLock::new());
- let x2 = x.clone();
-
- let result: result::Result<(), Box<Any + Send>> = task::try(proc() {
- lock_rwlock_in_mode(&x2, mode1, || {
- panic!();
- })
- });
- assert!(result.is_err());
- // child task must have finished by the time try returns
- lock_rwlock_in_mode(&x, mode2, || { })
- }
-
- #[test]
- fn test_rwlock_reader_killed_writer() {
- rwlock_kill_helper(Read, Write);
- }
-
- #[test]
- fn test_rwlock_writer_killed_reader() {
- rwlock_kill_helper(Write, Read);
- }
-
- #[test]
- fn test_rwlock_reader_killed_reader() {
- rwlock_kill_helper(Read, Read);
- }
-
- #[test]
- fn test_rwlock_writer_killed_writer() {
- rwlock_kill_helper(Write, Write);
- }
-
- #[test]
- fn test_rwlock_kill_downgrader() {
- rwlock_kill_helper(Downgrade, Read);
- rwlock_kill_helper(Read, Downgrade);
- rwlock_kill_helper(Downgrade, Write);
- rwlock_kill_helper(Write, Downgrade);
- rwlock_kill_helper(DowngradeRead, Read);
- rwlock_kill_helper(Read, DowngradeRead);
- rwlock_kill_helper(DowngradeRead, Write);
- rwlock_kill_helper(Write, DowngradeRead);
- rwlock_kill_helper(DowngradeRead, Downgrade);
- rwlock_kill_helper(DowngradeRead, Downgrade);
- rwlock_kill_helper(Downgrade, DowngradeRead);
- rwlock_kill_helper(Downgrade, DowngradeRead);
- }
-}
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use prelude::*;
+
+use kinds::marker;
+use cell::UnsafeCell;
+use sys_common::rwlock as sys;
+use sync::poison;
+
+/// A reader-writer lock
+///
+/// This type of lock allows a number of readers or at most one writer at any
+/// point in time. The write portion of this lock typically allows modification
+/// of the underlying data (exclusive access) and the read portion of this lock
+/// typically allows for read-only access (shared access).
+///
+/// The type parameter `T` represents the data that this lock protects. It is
+/// required that `T` satisfies `Send` to be shared across tasks and `Sync` to
+/// allow concurrent access through readers. The RAII guards returned from the
+/// locking methods implement `Deref` (and `DerefMut` for the `write` methods)
+/// to allow access to the contents of the lock.
+///
+/// RWLocks, like Mutexes, will become poisoned on panics. Note, however, that
+/// an RWLock may only be poisoned if a panic occurs while it is locked
+/// exclusively (write mode). If a panic occurs in any reader, then the lock
+/// will not be poisoned.
+///
+/// # Example
+///
+/// ```
+/// use std::sync::RWLock;
+///
+/// let lock = RWLock::new(5i);
+///
+/// // many reader locks can be held at once
+/// {
+/// let r1 = lock.read();
+/// let r2 = lock.read();
+/// assert_eq!(*r1, 5);
+/// assert_eq!(*r2, 5);
+/// } // read locks are dropped at this point
+///
+/// // only one write lock may be held, however
+/// {
+/// let mut w = lock.write();
+/// *w += 1;
+/// assert_eq!(*w, 6);
+/// } // write lock is dropped here
+/// ```
+pub struct RWLock<T> {
+ inner: Box<StaticRWLock>,
+ data: UnsafeCell<T>,
+}
+
+/// Structure representing a statically allocated RWLock.
+///
+/// This structure is intended to be used inside of a `static` and will provide
+/// automatic global access as well as lazy initialization. The internal
+/// resources of this RWLock, however, must be manually deallocated.
+///
+/// # Example
+///
+/// ```
+/// use std::sync::{StaticRWLock, RWLOCK_INIT};
+///
+/// static LOCK: StaticRWLock = RWLOCK_INIT;
+///
+/// {
+/// let _g = LOCK.read();
+/// // ... shared read access
+/// }
+/// {
+/// let _g = LOCK.write();
+/// // ... exclusive write access
+/// }
+/// unsafe { LOCK.destroy() } // free all resources
+/// ```
+pub struct StaticRWLock {
+ inner: sys::RWLock,
+ poison: UnsafeCell<poison::Flag>,
+}
+
+/// Constant initialization for a statically-initialized rwlock.
+pub const RWLOCK_INIT: StaticRWLock = StaticRWLock {
+ inner: sys::RWLOCK_INIT,
+ poison: UnsafeCell { value: poison::Flag { failed: false } },
+};
+
+/// RAII structure used to release the shared read access of a lock when
+/// dropped.
+#[must_use]
+pub struct RWLockReadGuard<'a, T: 'a> {
+ __lock: &'a RWLock<T>,
+ __guard: StaticRWLockReadGuard,
+}
+
+/// RAII structure used to release the exclusive write access of a lock when
+/// dropped.
+#[must_use]
+pub struct RWLockWriteGuard<'a, T: 'a> {
+ __lock: &'a RWLock<T>,
+ __guard: StaticRWLockWriteGuard,
+}
+
+/// RAII structure used to release the shared read access of a lock when
+/// dropped.
+#[must_use]
+pub struct StaticRWLockReadGuard {
+ lock: &'static sys::RWLock,
+ marker: marker::NoSend,
+}
+
+/// RAII structure used to release the exclusive write access of a lock when
+/// dropped.
+#[must_use]
+pub struct StaticRWLockWriteGuard {
+ lock: &'static sys::RWLock,
+ marker: marker::NoSend,
+ poison: poison::Guard<'static>,
+}
+
+impl<T: Send + Sync> RWLock<T> {
+    /// Creates a new instance of an RWLock which is unlocked and ready to go.
+ pub fn new(t: T) -> RWLock<T> {
+ RWLock { inner: box RWLOCK_INIT, data: UnsafeCell::new(t) }
+ }
+
+ /// Locks this rwlock with shared read access, blocking the current thread
+ /// until it can be acquired.
+ ///
+ /// The calling thread will be blocked until there are no more writers which
+ /// hold the lock. There may be other readers currently inside the lock when
+    /// this method returns. This method does not provide any guarantee as to
+    /// whether contending readers or writers will acquire the lock first.
+ ///
+ /// Returns an RAII guard which will release this thread's shared access
+ /// once it is dropped.
+ ///
+ /// # Panics
+ ///
+ /// This function will panic if the RWLock is poisoned. An RWLock is
+ /// poisoned whenever a writer panics while holding an exclusive lock. The
+ /// panic will occur immediately after the lock has been acquired.
+ #[inline]
+ pub fn read(&self) -> RWLockReadGuard<T> {
+ unsafe {
+ let lock: &'static StaticRWLock = &*(&*self.inner as *const _);
+ RWLockReadGuard::new(self, lock.read())
+ }
+ }
+
+ /// Attempt to acquire this lock with shared read access.
+ ///
+    /// This function will never block and will return immediately, whether or
+    /// not the lock is available. Returns `Some` of an RAII guard which will
+ /// release the shared access of this thread when dropped, or `None` if the
+    /// access could not be granted. This method does not provide any guarantee
+    /// as to whether contending readers or writers will acquire the lock first.
+ ///
+ /// # Panics
+ ///
+ /// This function will panic if the RWLock is poisoned. An RWLock is
+ /// poisoned whenever a writer panics while holding an exclusive lock. A
+ /// panic will only occur if the lock is acquired.
+ #[inline]
+ pub fn try_read(&self) -> Option<RWLockReadGuard<T>> {
+ unsafe {
+ let lock: &'static StaticRWLock = &*(&*self.inner as *const _);
+ lock.try_read().map(|guard| {
+ RWLockReadGuard::new(self, guard)
+ })
+ }
+ }
+
+ /// Lock this rwlock with exclusive write access, blocking the current
+ /// thread until it can be acquired.
+ ///
+ /// This function will not return while other writers or other readers
+ /// currently have access to the lock.
+ ///
+ /// Returns an RAII guard which will drop the write access of this rwlock
+ /// when dropped.
+ ///
+ /// # Panics
+ ///
+ /// This function will panic if the RWLock is poisoned. An RWLock is
+ /// poisoned whenever a writer panics while holding an exclusive lock. The
+ /// panic will occur when the lock is acquired.
+ #[inline]
+ pub fn write(&self) -> RWLockWriteGuard<T> {
+ unsafe {
+ let lock: &'static StaticRWLock = &*(&*self.inner as *const _);
+ RWLockWriteGuard::new(self, lock.write())
+ }
+ }
+
+ /// Attempt to lock this rwlock with exclusive write access.
+ ///
+    /// This function will never block, and it will return `None` if a call
+ /// to `write` would otherwise block. If successful, an RAII guard is
+ /// returned.
+ ///
+ /// # Panics
+ ///
+ /// This function will panic if the RWLock is poisoned. An RWLock is
+ /// poisoned whenever a writer panics while holding an exclusive lock. A
+ /// panic will only occur if the lock is acquired.
+ #[inline]
+ pub fn try_write(&self) -> Option<RWLockWriteGuard<T>> {
+ unsafe {
+ let lock: &'static StaticRWLock = &*(&*self.inner as *const _);
+ lock.try_write().map(|guard| {
+ RWLockWriteGuard::new(self, guard)
+ })
+ }
+ }
+}
+
+#[unsafe_destructor]
+impl<T> Drop for RWLock<T> {
+ fn drop(&mut self) {
+ unsafe { self.inner.inner.destroy() }
+ }
+}
+
+impl StaticRWLock {
+ /// Locks this rwlock with shared read access, blocking the current thread
+ /// until it can be acquired.
+ ///
+ /// See `RWLock::read`.
+ #[inline]
+ pub fn read(&'static self) -> StaticRWLockReadGuard {
+ unsafe { self.inner.read() }
+ StaticRWLockReadGuard::new(self)
+ }
+
+ /// Attempt to acquire this lock with shared read access.
+ ///
+ /// See `RWLock::try_read`.
+ #[inline]
+ pub fn try_read(&'static self) -> Option<StaticRWLockReadGuard> {
+ if unsafe { self.inner.try_read() } {
+ Some(StaticRWLockReadGuard::new(self))
+ } else {
+ None
+ }
+ }
+
+ /// Lock this rwlock with exclusive write access, blocking the current
+ /// thread until it can be acquired.
+ ///
+ /// See `RWLock::write`.
+ #[inline]
+ pub fn write(&'static self) -> StaticRWLockWriteGuard {
+ unsafe { self.inner.write() }
+ StaticRWLockWriteGuard::new(self)
+ }
+
+ /// Attempt to lock this rwlock with exclusive write access.
+ ///
+ /// See `RWLock::try_write`.
+ #[inline]
+ pub fn try_write(&'static self) -> Option<StaticRWLockWriteGuard> {
+ if unsafe { self.inner.try_write() } {
+ Some(StaticRWLockWriteGuard::new(self))
+ } else {
+ None
+ }
+ }
+
+ /// Deallocate all resources associated with this static lock.
+ ///
+ /// This method is unsafe to call as there is no guarantee that there are no
+    /// active users of the lock, and it also does not prevent any future users
+    /// of this lock. This method must be called to avoid leaking memory on all
+    /// platforms.
+ pub unsafe fn destroy(&'static self) {
+ self.inner.destroy()
+ }
+}
+
+impl<'rwlock, T> RWLockReadGuard<'rwlock, T> {
+ fn new(lock: &RWLock<T>, guard: StaticRWLockReadGuard)
+ -> RWLockReadGuard<T> {
+ RWLockReadGuard { __lock: lock, __guard: guard }
+ }
+}
+impl<'rwlock, T> RWLockWriteGuard<'rwlock, T> {
+ fn new(lock: &RWLock<T>, guard: StaticRWLockWriteGuard)
+ -> RWLockWriteGuard<T> {
+ RWLockWriteGuard { __lock: lock, __guard: guard }
+ }
+}
+
+impl<'rwlock, T> Deref<T> for RWLockReadGuard<'rwlock, T> {
+ fn deref(&self) -> &T { unsafe { &*self.__lock.data.get() } }
+}
+impl<'rwlock, T> Deref<T> for RWLockWriteGuard<'rwlock, T> {
+ fn deref(&self) -> &T { unsafe { &*self.__lock.data.get() } }
+}
+impl<'rwlock, T> DerefMut<T> for RWLockWriteGuard<'rwlock, T> {
+ fn deref_mut(&mut self) -> &mut T { unsafe { &mut *self.__lock.data.get() } }
+}
+
+impl StaticRWLockReadGuard {
+ fn new(lock: &'static StaticRWLock) -> StaticRWLockReadGuard {
+ let guard = StaticRWLockReadGuard {
+ lock: &lock.inner,
+ marker: marker::NoSend,
+ };
+ unsafe { (*lock.poison.get()).borrow().check("rwlock"); }
+ return guard;
+ }
+}
+impl StaticRWLockWriteGuard {
+ fn new(lock: &'static StaticRWLock) -> StaticRWLockWriteGuard {
+ unsafe {
+ let guard = StaticRWLockWriteGuard {
+ lock: &lock.inner,
+ marker: marker::NoSend,
+ poison: (*lock.poison.get()).borrow(),
+ };
+ guard.poison.check("rwlock");
+ return guard;
+ }
+ }
+}
+
+#[unsafe_destructor]
+impl Drop for StaticRWLockReadGuard {
+ fn drop(&mut self) {
+ unsafe { self.lock.read_unlock(); }
+ }
+}
+
+#[unsafe_destructor]
+impl Drop for StaticRWLockWriteGuard {
+ fn drop(&mut self) {
+ self.poison.done();
+ unsafe { self.lock.write_unlock(); }
+ }
+}
+
+#[cfg(test)]
+mod tests {
+ use prelude::*;
+
+ use rand::{mod, Rng};
+ use task;
+ use sync::{Arc, RWLock, StaticRWLock, RWLOCK_INIT};
+
+ #[test]
+ fn smoke() {
+ let l = RWLock::new(());
+ drop(l.read());
+ drop(l.write());
+ drop((l.read(), l.read()));
+ drop(l.write());
+ }
+
+ #[test]
+ fn static_smoke() {
+ static R: StaticRWLock = RWLOCK_INIT;
+ drop(R.read());
+ drop(R.write());
+ drop((R.read(), R.read()));
+ drop(R.write());
+ unsafe { R.destroy(); }
+ }
+
+ #[test]
+ fn frob() {
+ static R: StaticRWLock = RWLOCK_INIT;
+ static N: uint = 10;
+ static M: uint = 1000;
+
+ let (tx, rx) = channel::<()>();
+ for _ in range(0, N) {
+ let tx = tx.clone();
+ spawn(proc() {
+ let mut rng = rand::task_rng();
+ for _ in range(0, M) {
+ if rng.gen_weighted_bool(N) {
+ drop(R.write());
+ } else {
+ drop(R.read());
+ }
+ }
+ drop(tx);
+ });
+ }
+ drop(tx);
+ let _ = rx.recv_opt();
+ unsafe { R.destroy(); }
+ }
+
+ #[test]
+ #[should_fail]
+ fn test_rw_arc_poison_wr() {
+ let arc = Arc::new(RWLock::new(1i));
+ let arc2 = arc.clone();
+ let _ = task::try(proc() {
+ let lock = arc2.write();
+ assert_eq!(*lock, 2);
+ });
+ let lock = arc.read();
+ assert_eq!(*lock, 1);
+ }
+
+ #[test]
+ #[should_fail]
+ fn test_rw_arc_poison_ww() {
+ let arc = Arc::new(RWLock::new(1i));
+ let arc2 = arc.clone();
+ let _ = task::try(proc() {
+ let lock = arc2.write();
+ assert_eq!(*lock, 2);
+ });
+ let lock = arc.write();
+ assert_eq!(*lock, 1);
+ }
+
+ #[test]
+ fn test_rw_arc_no_poison_rr() {
+ let arc = Arc::new(RWLock::new(1i));
+ let arc2 = arc.clone();
+ let _ = task::try(proc() {
+ let lock = arc2.read();
+ assert_eq!(*lock, 2);
+ });
+ let lock = arc.read();
+ assert_eq!(*lock, 1);
+ }
+ #[test]
+ fn test_rw_arc_no_poison_rw() {
+ let arc = Arc::new(RWLock::new(1i));
+ let arc2 = arc.clone();
+ let _ = task::try(proc() {
+ let lock = arc2.read();
+ assert_eq!(*lock, 2);
+ });
+ let lock = arc.write();
+ assert_eq!(*lock, 1);
+ }
+
+ #[test]
+ fn test_rw_arc() {
+ let arc = Arc::new(RWLock::new(0i));
+ let arc2 = arc.clone();
+ let (tx, rx) = channel();
+
+ task::spawn(proc() {
+ let mut lock = arc2.write();
+ for _ in range(0u, 10) {
+ let tmp = *lock;
+ *lock = -1;
+ task::deschedule();
+ *lock = tmp + 1;
+ }
+ tx.send(());
+ });
+
+ // Readers try to catch the writer in the act
+ let mut children = Vec::new();
+ for _ in range(0u, 5) {
+ let arc3 = arc.clone();
+ children.push(task::try_future(proc() {
+ let lock = arc3.read();
+ assert!(*lock >= 0);
+ }));
+ }
+
+ // Wait for children to pass their asserts
+ for r in children.iter_mut() {
+ assert!(r.get_ref().is_ok());
+ }
+
+ // Wait for writer to finish
+ rx.recv();
+ let lock = arc.read();
+ assert_eq!(*lock, 10);
+ }
+
+ #[test]
+ fn test_rw_arc_access_in_unwind() {
+ let arc = Arc::new(RWLock::new(1i));
+ let arc2 = arc.clone();
+ let _ = task::try::<()>(proc() {
+ struct Unwinder {
+ i: Arc<RWLock<int>>,
+ }
+ impl Drop for Unwinder {
+ fn drop(&mut self) {
+ let mut lock = self.i.write();
+ *lock += 1;
+ }
+ }
+ let _u = Unwinder { i: arc2 };
+ panic!();
+ });
+ let lock = arc.read();
+ assert_eq!(*lock, 2);
+ }
+}
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use ops::Drop;
+use sync::{Mutex, Condvar};
+
+/// A counting, blocking, semaphore.
+///
+/// Semaphores are a form of atomic counter where access is only granted if the
+/// counter is a positive value. Each acquisition will block the calling thread
+/// until the counter is positive, and each release will increment the counter
+/// and unblock any threads if necessary.
+///
+/// # Example
+///
+/// ```
+/// use std::sync::Semaphore;
+///
+/// // Create a semaphore that represents 5 resources
+/// let sem = Semaphore::new(5);
+///
+/// // Acquire one of the resources
+/// sem.acquire();
+///
+/// // Acquire one of the resources for a limited period of time
+/// {
+/// let _guard = sem.access();
+/// // ...
+/// } // resource is released here
+///
+/// // Release our initially acquired resource
+/// sem.release();
+/// ```
+pub struct Semaphore {
+ lock: Mutex<int>,
+ cvar: Condvar,
+}
+
+/// An RAII guard which will release a resource acquired from a semaphore when
+/// dropped.
+pub struct SemaphoreGuard<'a> {
+ sem: &'a Semaphore,
+}
+
+impl Semaphore {
+ /// Creates a new semaphore with the initial count specified.
+ ///
+ /// The count specified can be thought of as a number of resources, and a
+ /// call to `acquire` or `access` will block until at least one resource is
+ /// available. It is valid to initialize a semaphore with a negative count.
+ pub fn new(count: int) -> Semaphore {
+ Semaphore {
+ lock: Mutex::new(count),
+ cvar: Condvar::new(),
+ }
+ }
+
+ /// Acquires a resource of this semaphore, blocking the current thread until
+ /// it can do so.
+ ///
+ /// This method will block until the internal count of the semaphore is at
+ /// least 1.
+ pub fn acquire(&self) {
+ let mut count = self.lock.lock();
+ while *count <= 0 {
+ self.cvar.wait(&count);
+ }
+ *count -= 1;
+ }
+
+ /// Release a resource from this semaphore.
+ ///
+ /// This will increment the number of resources in this semaphore by 1 and
+ /// will notify any pending waiters in `acquire` or `access` if necessary.
+ pub fn release(&self) {
+ *self.lock.lock() += 1;
+ self.cvar.notify_one();
+ }
+
+ /// Acquires a resource of this semaphore, returning an RAII guard to
+ /// release the semaphore when dropped.
+ ///
+ /// This function is semantically equivalent to an `acquire` followed by a
+ /// `release` when the guard returned is dropped.
+ pub fn access(&self) -> SemaphoreGuard {
+ self.acquire();
+ SemaphoreGuard { sem: self }
+ }
+}
+
+#[unsafe_destructor]
+impl<'a> Drop for SemaphoreGuard<'a> {
+ fn drop(&mut self) {
+ self.sem.release();
+ }
+}
+
+#[cfg(test)]
+mod tests {
+ use prelude::*;
+
+ use sync::Arc;
+ use super::Semaphore;
+
+ #[test]
+ fn test_sem_acquire_release() {
+ let s = Semaphore::new(1);
+ s.acquire();
+ s.release();
+ s.acquire();
+ }
+
+ #[test]
+ fn test_sem_basic() {
+ let s = Semaphore::new(1);
+ let _g = s.access();
+ }
+
+ #[test]
+ fn test_sem_as_mutex() {
+ let s = Arc::new(Semaphore::new(1));
+ let s2 = s.clone();
+ spawn(proc() {
+ let _g = s2.access();
+ });
+ let _g = s.access();
+ }
+
+ #[test]
+ fn test_sem_as_cvar() {
+ /* Child waits and parent signals */
+ let (tx, rx) = channel();
+ let s = Arc::new(Semaphore::new(0));
+ let s2 = s.clone();
+ spawn(proc() {
+ s2.acquire();
+ tx.send(());
+ });
+ s.release();
+ let _ = rx.recv();
+
+ /* Parent waits and child signals */
+ let (tx, rx) = channel();
+ let s = Arc::new(Semaphore::new(0));
+ let s2 = s.clone();
+ spawn(proc() {
+ s2.release();
+ let _ = rx.recv();
+ });
+ s.acquire();
+ tx.send(());
+ }
+
+ #[test]
+ fn test_sem_multi_resource() {
+ // Parent and child both get in the critical section at the same
+ // time, and shake hands.
+ let s = Arc::new(Semaphore::new(2));
+ let s2 = s.clone();
+ let (tx1, rx1) = channel();
+ let (tx2, rx2) = channel();
+ spawn(proc() {
+ let _g = s2.access();
+ let _ = rx2.recv();
+ tx1.send(());
+ });
+ let _g = s.access();
+ tx2.send(());
+ let _ = rx1.recv();
+ }
+
+ #[test]
+ fn test_sem_runtime_friendly_blocking() {
+ let s = Arc::new(Semaphore::new(1));
+ let s2 = s.clone();
+ let (tx, rx) = channel();
+ {
+ let _g = s.access();
+ spawn(proc() {
+ tx.send(());
+ drop(s2.access());
+ tx.send(());
+ });
+ rx.recv(); // wait for child to come alive
+ }
+ rx.recv(); // wait for child to be done
+ }
+}
+++ /dev/null
-/* Copyright (c) 2010-2011 Dmitry Vyukov. All rights reserved.
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *
- * 1. Redistributions of source code must retain the above copyright notice,
- * this list of conditions and the following disclaimer.
- *
- * 2. Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY DMITRY VYUKOV "AS IS" AND ANY EXPRESS OR IMPLIED
- * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
- * SHALL DMITRY VYUKOV OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
- * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
- * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
- * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
- * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- * The views and conclusions contained in the software and documentation are
- * those of the authors and should not be interpreted as representing official
- * policies, either expressed or implied, of Dmitry Vyukov.
- */
-
-// http://www.1024cores.net/home/lock-free-algorithms/queues/unbounded-spsc-queue
-
-//! A single-producer single-consumer concurrent queue
-//!
-//! This module contains the implementation of an SPSC queue which can be used
-//! concurrently between two tasks. This data structure is safe to use and
-//! enforces the semantics that there is one pusher and one popper.
-
-#![experimental]
-
-use core::prelude::*;
-
-use alloc::boxed::Box;
-use core::mem;
-use core::cell::UnsafeCell;
-use alloc::arc::Arc;
-
-use sync::atomic::{AtomicPtr, Relaxed, AtomicUint, Acquire, Release};
-
-// Node within the linked list queue of messages to send
-struct Node<T> {
- // FIXME: this could be an uninitialized T if we're careful enough, and
- // that would reduce memory usage (and be a bit faster).
- // is it worth it?
- value: Option<T>, // nullable for re-use of nodes
- next: AtomicPtr<Node<T>>, // next node in the queue
-}
-
-/// The single-producer single-consumer queue. This structure is not cloneable,
-/// but it can be safely shared in an Arc if it is guaranteed that there
-/// is only one popper and one pusher touching the queue at any one point in
-/// time.
-pub struct Queue<T> {
- // consumer fields
- tail: UnsafeCell<*mut Node<T>>, // where to pop from
- tail_prev: AtomicPtr<Node<T>>, // where to pop from
-
- // producer fields
- head: UnsafeCell<*mut Node<T>>, // where to push to
- first: UnsafeCell<*mut Node<T>>, // where to get new nodes from
- tail_copy: UnsafeCell<*mut Node<T>>, // between first/tail
-
- // Cache maintenance fields. Additions and subtractions are stored
- // separately in order to allow them to use nonatomic addition/subtraction.
- cache_bound: uint,
- cache_additions: AtomicUint,
- cache_subtractions: AtomicUint,
-}
-
-/// A safe abstraction for the consumer in a single-producer single-consumer
-/// queue.
-pub struct Consumer<T> {
- inner: Arc<Queue<T>>
-}
-
-impl<T: Send> Consumer<T> {
- /// Attempts to pop the value from the head of the queue, returning `None`
- /// if the queue is empty.
- pub fn pop(&mut self) -> Option<T> {
- self.inner.pop()
- }
-
- /// Attempts to peek at the head of the queue, returning `None` if the queue
- /// is empty.
- pub fn peek<'a>(&'a mut self) -> Option<&'a mut T> {
- self.inner.peek()
- }
-}
-
-/// A safe abstraction for the producer in a single-producer single-consumer
-/// queue.
-pub struct Producer<T> {
- inner: Arc<Queue<T>>
-}
-
-impl<T: Send> Producer<T> {
- /// Pushes a new value onto the queue.
- pub fn push(&mut self, t: T) {
- self.inner.push(t)
- }
-}
-
-impl<T: Send> Node<T> {
- fn new() -> *mut Node<T> {
- unsafe {
- mem::transmute(box Node {
- value: None,
- next: AtomicPtr::new(0 as *mut Node<T>),
- })
- }
- }
-}
-
-/// Creates a new queue with a consumer-producer pair.
-///
-/// The producer returned is connected to the consumer to push all data to
-/// the consumer.
-///
-/// # Arguments
-///
-/// * `bound` - This queue implementation is implemented with a linked
-/// list, and this means that a push is always a malloc. In
-/// order to amortize this cost, an internal cache of nodes is
-/// maintained to prevent a malloc from always being
-/// necessary. This bound is the limit on the size of the
-/// cache (if desired). If the value is 0, then the cache has
-/// no bound. Otherwise, the cache will never grow larger than
-/// `bound` (although the queue itself could be much larger.
-pub fn queue<T: Send>(bound: uint) -> (Consumer<T>, Producer<T>) {
- let q = unsafe { Queue::new(bound) };
- let arc = Arc::new(q);
- let consumer = Consumer { inner: arc.clone() };
- let producer = Producer { inner: arc };
-
- (consumer, producer)
-}
-
-impl<T: Send> Queue<T> {
- /// Creates a new queue.
- ///
- /// This is unsafe as the type system doesn't enforce a single
- /// consumer-producer relationship. It also allows the consumer to `pop`
- /// items while there is a `peek` active due to all methods having a
- /// non-mutable receiver.
- ///
- /// # Arguments
- ///
- /// * `bound` - This queue implementation is implemented with a linked
- /// list, and this means that a push is always a malloc. In
- /// order to amortize this cost, an internal cache of nodes is
- /// maintained to prevent a malloc from always being
- /// necessary. This bound is the limit on the size of the
- /// cache (if desired). If the value is 0, then the cache has
- /// no bound. Otherwise, the cache will never grow larger than
- /// `bound` (although the queue itself could be much larger.
- pub unsafe fn new(bound: uint) -> Queue<T> {
- let n1 = Node::new();
- let n2 = Node::new();
- (*n1).next.store(n2, Relaxed);
- Queue {
- tail: UnsafeCell::new(n2),
- tail_prev: AtomicPtr::new(n1),
- head: UnsafeCell::new(n2),
- first: UnsafeCell::new(n1),
- tail_copy: UnsafeCell::new(n1),
- cache_bound: bound,
- cache_additions: AtomicUint::new(0),
- cache_subtractions: AtomicUint::new(0),
- }
- }
-
- /// Pushes a new value onto this queue. Note that to use this function
- /// safely, it must be externally guaranteed that there is only one pusher.
- pub fn push(&self, t: T) {
- unsafe {
- // Acquire a node (which either uses a cached one or allocates a new
- // one), and then append this to the 'head' node.
- let n = self.alloc();
- assert!((*n).value.is_none());
- (*n).value = Some(t);
- (*n).next.store(0 as *mut Node<T>, Relaxed);
- (**self.head.get()).next.store(n, Release);
- *self.head.get() = n;
- }
- }
-
- unsafe fn alloc(&self) -> *mut Node<T> {
- // First try to see if we can consume the 'first' node for our uses.
- // We try to avoid as many atomic instructions as possible here, so
- // the addition to cache_subtractions is not atomic (plus we're the
- // only one subtracting from the cache).
- if *self.first.get() != *self.tail_copy.get() {
- if self.cache_bound > 0 {
- let b = self.cache_subtractions.load(Relaxed);
- self.cache_subtractions.store(b + 1, Relaxed);
- }
- let ret = *self.first.get();
- *self.first.get() = (*ret).next.load(Relaxed);
- return ret;
- }
- // If the above fails, then update our copy of the tail and try
- // again.
- *self.tail_copy.get() = self.tail_prev.load(Acquire);
- if *self.first.get() != *self.tail_copy.get() {
- if self.cache_bound > 0 {
- let b = self.cache_subtractions.load(Relaxed);
- self.cache_subtractions.store(b + 1, Relaxed);
- }
- let ret = *self.first.get();
- *self.first.get() = (*ret).next.load(Relaxed);
- return ret;
- }
- // If all of that fails, then we have to allocate a new node
- // (there's nothing in the node cache).
- Node::new()
- }
-
- /// Attempts to pop a value from this queue. Remember that to use this type
- /// safely you must ensure that there is only one popper at a time.
- pub fn pop(&self) -> Option<T> {
- unsafe {
- // The `tail` node is not actually a used node, but rather a
- // sentinel from where we should start popping from. Hence, look at
- // tail's next field and see if we can use it. If we do a pop, then
- // the current tail node is a candidate for going into the cache.
- let tail = *self.tail.get();
- let next = (*tail).next.load(Acquire);
- if next.is_null() { return None }
- assert!((*next).value.is_some());
- let ret = (*next).value.take();
-
- *self.tail.get() = next;
- if self.cache_bound == 0 {
- self.tail_prev.store(tail, Release);
- } else {
- // FIXME: this is dubious with overflow.
- let additions = self.cache_additions.load(Relaxed);
- let subtractions = self.cache_subtractions.load(Relaxed);
- let size = additions - subtractions;
-
- if size < self.cache_bound {
- self.tail_prev.store(tail, Release);
- self.cache_additions.store(additions + 1, Relaxed);
- } else {
- (*self.tail_prev.load(Relaxed)).next.store(next, Relaxed);
- // We have successfully erased all references to 'tail', so
- // now we can safely drop it.
- let _: Box<Node<T>> = mem::transmute(tail);
- }
- }
- return ret;
- }
- }
-
- /// Attempts to peek at the head of the queue, returning `None` if the queue
- /// has no data currently
- ///
- /// # Warning
- /// The reference returned is invalid if it is not used before the consumer
- /// pops the value off the queue. If the producer then pushes another value
- /// onto the queue, it will overwrite the value pointed to by the reference.
- pub fn peek<'a>(&'a self) -> Option<&'a mut T> {
- // This is essentially the same as above with all the popping bits
- // stripped out.
- unsafe {
- let tail = *self.tail.get();
- let next = (*tail).next.load(Acquire);
- if next.is_null() { return None }
- return (*next).value.as_mut();
- }
- }
-}
-
-#[unsafe_destructor]
-impl<T: Send> Drop for Queue<T> {
- fn drop(&mut self) {
- unsafe {
- let mut cur = *self.first.get();
- while !cur.is_null() {
- let next = (*cur).next.load(Relaxed);
- let _n: Box<Node<T>> = mem::transmute(cur);
- cur = next;
- }
- }
- }
-}
-
-#[cfg(test)]
-mod test {
- use prelude::*;
-
- use super::{queue};
-
- #[test]
- fn smoke() {
- let (mut consumer, mut producer) = queue(0);
- producer.push(1i);
- producer.push(2);
- assert_eq!(consumer.pop(), Some(1i));
- assert_eq!(consumer.pop(), Some(2));
- assert_eq!(consumer.pop(), None);
- producer.push(3);
- producer.push(4);
- assert_eq!(consumer.pop(), Some(3));
- assert_eq!(consumer.pop(), Some(4));
- assert_eq!(consumer.pop(), None);
- }
-
- #[test]
- fn peek() {
- let (mut consumer, mut producer) = queue(0);
- producer.push(vec![1i]);
-
- // Ensure the borrowchecker works
- match consumer.peek() {
- Some(vec) => match vec.as_slice() {
- // Note that `pop` is not allowed here due to borrow
- [1] => {}
- _ => return
- },
- None => unreachable!()
- }
-
- consumer.pop();
- }
-
- #[test]
- fn drop_full() {
- let (_, mut producer) = queue(0);
- producer.push(box 1i);
- producer.push(box 2i);
- }
-
- #[test]
- fn smoke_bound() {
- let (mut consumer, mut producer) = queue(1);
- producer.push(1i);
- producer.push(2);
- assert_eq!(consumer.pop(), Some(1));
- assert_eq!(consumer.pop(), Some(2));
- assert_eq!(consumer.pop(), None);
- producer.push(3);
- producer.push(4);
- assert_eq!(consumer.pop(), Some(3));
- assert_eq!(consumer.pop(), Some(4));
- assert_eq!(consumer.pop(), None);
- }
-
- #[test]
- fn stress() {
- stress_bound(0);
- stress_bound(1);
-
- fn stress_bound(bound: uint) {
- let (consumer, mut producer) = queue(bound);
-
- let (tx, rx) = channel();
- spawn(proc() {
- // Move the consumer to a local mutable slot
- let mut consumer = consumer;
- for _ in range(0u, 100000) {
- loop {
- match consumer.pop() {
- Some(1i) => break,
- Some(_) => panic!(),
- None => {}
- }
- }
- }
- tx.send(());
- });
- for _ in range(0i, 100000) {
- producer.push(1);
- }
- rx.recv();
- }
- }
-}
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use time::Duration;
+use sys_common::mutex::{mod, Mutex};
+use sys::condvar as imp;
+
+/// An OS-based condition variable.
+///
+/// This structure is the lowest layer possible on top of the OS-provided
+/// condition variables. It is consequently entirely unsafe to use. It is
+/// recommended to use the safer types at the top level of this crate instead of
+/// this type.
+pub struct Condvar(imp::Condvar);
+
+/// Static initializer for condition variables.
+pub const CONDVAR_INIT: Condvar = Condvar(imp::CONDVAR_INIT);
+
+impl Condvar {
+ /// Creates a new condition variable for use.
+ ///
+ /// Behavior is undefined if the condition variable is moved after it is
+ /// first used with any of the functions below.
+ #[inline]
+ pub unsafe fn new() -> Condvar { Condvar(imp::Condvar::new()) }
+
+ /// Signal one waiter on this condition variable to wake up.
+ #[inline]
+ pub unsafe fn notify_one(&self) { self.0.notify_one() }
+
+ /// Awaken all current waiters on this condition variable.
+ #[inline]
+ pub unsafe fn notify_all(&self) { self.0.notify_all() }
+
+ /// Wait for a signal on the specified mutex.
+ ///
+ /// Behavior is undefined if the mutex is not locked by the current thread.
+ /// Behavior is also undefined if more than one mutex is used concurrently
+ /// on this condition variable.
+ #[inline]
+ pub unsafe fn wait(&self, mutex: &Mutex) { self.0.wait(mutex::raw(mutex)) }
+
+ /// Wait for a signal on the specified mutex with a timeout duration
+ /// specified by `dur` (a relative time into the future).
+ ///
+ /// Behavior is undefined if the mutex is not locked by the current thread.
+ /// Behavior is also undefined if more than one mutex is used concurrently
+ /// on this condition variable.
+ #[inline]
+ pub unsafe fn wait_timeout(&self, mutex: &Mutex, dur: Duration) -> bool {
+ self.0.wait_timeout(mutex::raw(mutex), dur)
+ }
+
+ /// Deallocate all resources associated with this condition variable.
+ ///
+    /// Behavior is undefined if there are any current users of this
+    /// condition variable, or if there will be any in the future.
+ #[inline]
+ pub unsafe fn destroy(&self) { self.0.destroy() }
+}
//! can be created in the future and there must be no active timers at that
//! time.
+use prelude::*;
+
+use cell::UnsafeCell;
use mem;
use rustrt::bookkeeping;
-use rustrt::mutex::StaticNativeMutex;
use rustrt;
-use cell::UnsafeCell;
+use sync::{StaticMutex, StaticCondvar};
use sys::helper_signal;
-use prelude::*;
use task;
/// is for static initialization.
pub struct Helper<M> {
/// Internal lock which protects the remaining fields
- pub lock: StaticNativeMutex,
+ pub lock: StaticMutex,
+ pub cond: StaticCondvar,
// You'll notice that the remaining fields are UnsafeCell<T>, and this is
// because all helper thread operations are done through &self, but we need
/// Flag if this helper thread has booted and been initialized yet.
pub initialized: UnsafeCell<bool>,
+
+ /// Flag if this helper thread has shut down
+ pub shutdown: UnsafeCell<bool>,
}
impl<M: Send> Helper<M> {
task::spawn(proc() {
bookkeeping::decrement();
helper(receive, rx, t);
- self.lock.lock().signal()
+ let _g = self.lock.lock();
+ *self.shutdown.get() = true;
+ self.cond.notify_one()
});
rustrt::at_exit(proc() { self.shutdown() });
helper_signal::signal(*self.signal.get() as helper_signal::signal);
// Wait for the child to exit
- guard.wait();
+ while !*self.shutdown.get() {
+ self.cond.wait(&guard);
+ }
drop(guard);
// Clean up after ourselves
use path::BytesContainer;
use collections;
-pub mod net;
+pub mod condvar;
pub mod helper_thread;
+pub mod mutex;
+pub mod net;
+pub mod rwlock;
pub mod thread_local;
// common error constructors
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+pub use sys::mutex::raw;
+
+use sys::mutex as imp;
+
+/// An OS-based mutual exclusion lock.
+///
+/// This is the thinnest cross-platform wrapper around OS mutexes. All usage of
+/// this mutex is unsafe, and it is recommended to use the safe wrapper
+/// at the top level of the crate instead of this type.
+pub struct Mutex(imp::Mutex);
+
+/// Constant initializer for statically allocated mutexes.
+pub const MUTEX_INIT: Mutex = Mutex(imp::MUTEX_INIT);
+
+impl Mutex {
+ /// Creates a newly initialized mutex.
+ ///
+ /// Behavior is undefined if the mutex is moved after the first method is
+ /// called on the mutex.
+ #[inline]
+ pub unsafe fn new() -> Mutex { Mutex(imp::Mutex::new()) }
+
+ /// Lock the mutex blocking the current thread until it is available.
+ ///
+ /// Behavior is undefined if the mutex has been moved between this and any
+ /// previous function call.
+ #[inline]
+ pub unsafe fn lock(&self) { self.0.lock() }
+
+ /// Attempt to lock the mutex without blocking, returning whether it was
+ /// successfully acquired or not.
+ ///
+ /// Behavior is undefined if the mutex has been moved between this and any
+ /// previous function call.
+ #[inline]
+ pub unsafe fn try_lock(&self) -> bool { self.0.try_lock() }
+
+ /// Unlock the mutex.
+ ///
+ /// Behavior is undefined if the current thread does not actually hold the
+ /// mutex.
+ #[inline]
+ pub unsafe fn unlock(&self) { self.0.unlock() }
+
+ /// Deallocate all resources associated with this mutex.
+ ///
+    /// Behavior is undefined if there are any current users of this
+    /// mutex, or if there will be any in the future.
+ #[inline]
+ pub unsafe fn destroy(&self) { self.0.destroy() }
+}
+
+// not meant to be exported to the outside world, just the containing module
+pub fn raw(mutex: &Mutex) -> &imp::Mutex { &mutex.0 }
use mem;
use num::Int;
use ptr::{mod, null, null_mut};
-use rustrt::mutex;
use io::net::ip::{SocketAddr, IpAddr, Ipv4Addr, Ipv6Addr};
use io::net::addrinfo;
use io::{IoResult, IoError};
use sys::{mod, retry, c, sock_t, last_error, last_net_error, last_gai_error, close_sock,
wrlen, msglen_t, os, wouldblock, set_nonblocking, timer, ms_to_timeval,
decode_error_detailed};
+use sync::{Mutex, MutexGuard};
use sys_common::{mod, keep_going, short_write, timeout};
use prelude::*;
use cmp;
// Unused on Linux, where this lock is not necessary.
#[allow(dead_code)]
- lock: mutex::NativeMutex
+ lock: Mutex<()>,
}
impl Inner {
fn new(fd: sock_t) -> Inner {
- Inner { fd: fd, lock: unsafe { mutex::NativeMutex::new() } }
+ Inner { fd: fd, lock: Mutex::new(()) }
}
}
pub struct Guard<'a> {
pub fd: sock_t,
- pub guard: mutex::LockGuard<'a>,
+ pub guard: MutexGuard<'a, ()>,
}
#[unsafe_destructor]
fn lock_nonblocking<'a>(&'a self) -> Guard<'a> {
let ret = Guard {
fd: self.fd(),
- guard: unsafe { self.inner.lock.lock() },
+ guard: self.inner.lock.lock(),
};
assert!(set_nonblocking(self.fd(), true).is_ok());
ret
fn lock_nonblocking<'a>(&'a self) -> Guard<'a> {
let ret = Guard {
fd: self.fd(),
- guard: unsafe { self.inner.lock.lock() },
+ guard: self.inner.lock.lock(),
};
assert!(set_nonblocking(self.fd(), true).is_ok());
ret
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use sys::rwlock as imp;
+
+/// An OS-based reader-writer lock.
+///
+/// This structure is entirely unsafe and serves as the lowest layer of a
+/// cross-platform binding of system rwlocks. It is recommended to use the
+/// safer types at the top level of this crate instead of this type.
+pub struct RWLock(imp::RWLock);
+
+/// Constant initializer for static RWLocks.
+pub const RWLOCK_INIT: RWLock = RWLock(imp::RWLOCK_INIT);
+
+impl RWLock {
+ /// Creates a new instance of an RWLock.
+ ///
+ /// Usage of an RWLock is undefined if it is moved after its first use (any
+ /// function calls below).
+ #[inline]
+ pub unsafe fn new() -> RWLock { RWLock(imp::RWLock::new()) }
+
+ /// Acquire shared access to the underlying lock, blocking the current
+ /// thread to do so.
+ ///
+ /// Behavior is undefined if the rwlock has been moved between this and any
+    /// previous method call.
+ #[inline]
+ pub unsafe fn read(&self) { self.0.read() }
+
+ /// Attempt to acquire shared access to this lock, returning whether it
+ /// succeeded or not.
+ ///
+ /// This function does not block the current thread.
+ ///
+ /// Behavior is undefined if the rwlock has been moved between this and any
+    /// previous method call.
+ #[inline]
+ pub unsafe fn try_read(&self) -> bool { self.0.try_read() }
+
+ /// Acquire write access to the underlying lock, blocking the current thread
+ /// to do so.
+ ///
+ /// Behavior is undefined if the rwlock has been moved between this and any
+    /// previous method call.
+ #[inline]
+ pub unsafe fn write(&self) { self.0.write() }
+
+ /// Attempt to acquire exclusive access to this lock, returning whether it
+ /// succeeded or not.
+ ///
+ /// This function does not block the current thread.
+ ///
+ /// Behavior is undefined if the rwlock has been moved between this and any
+    /// previous method call.
+ #[inline]
+ pub unsafe fn try_write(&self) -> bool { self.0.try_write() }
+
+ /// Unlock previously acquired shared access to this lock.
+ ///
+ /// Behavior is undefined if the current thread does not have shared access.
+ #[inline]
+ pub unsafe fn read_unlock(&self) { self.0.read_unlock() }
+
+ /// Unlock previously acquired exclusive access to this lock.
+ ///
+ /// Behavior is undefined if the current thread does not currently have
+ /// exclusive access.
+ #[inline]
+ pub unsafe fn write_unlock(&self) { self.0.write_unlock() }
+
+    /// Destroy OS-related resources associated with this RWLock.
+ ///
+ /// Behavior is undefined if there are any currently active users of this
+ /// lock.
+ #[inline]
+ pub unsafe fn destroy(&self) { self.0.destroy() }
+}
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use cell::UnsafeCell;
+use libc;
+use sys::mutex::{mod, Mutex};
+use sys::sync as ffi;
+use time::Duration;
+
+pub struct Condvar { inner: UnsafeCell<ffi::pthread_cond_t> }
+
+pub const CONDVAR_INIT: Condvar = Condvar {
+ inner: UnsafeCell { value: ffi::PTHREAD_COND_INITIALIZER },
+};
+
+impl Condvar {
+ #[inline]
+ pub unsafe fn new() -> Condvar {
+        // The value might still be moved, changing its address, so it is
+        // better to avoid initializing the potentially opaque OS data
+        // before it has landed at its final address.
+ Condvar { inner: UnsafeCell::new(ffi::PTHREAD_COND_INITIALIZER) }
+ }
+
+ #[inline]
+ pub unsafe fn notify_one(&self) {
+ let r = ffi::pthread_cond_signal(self.inner.get());
+ debug_assert_eq!(r, 0);
+ }
+
+ #[inline]
+ pub unsafe fn notify_all(&self) {
+ let r = ffi::pthread_cond_broadcast(self.inner.get());
+ debug_assert_eq!(r, 0);
+ }
+
+ #[inline]
+ pub unsafe fn wait(&self, mutex: &Mutex) {
+ let r = ffi::pthread_cond_wait(self.inner.get(), mutex::raw(mutex));
+ debug_assert_eq!(r, 0);
+ }
+
+ pub unsafe fn wait_timeout(&self, mutex: &Mutex, dur: Duration) -> bool {
+ assert!(dur >= Duration::nanoseconds(0));
+
+ // First, figure out what time it currently is
+ let mut tv = libc::timeval { tv_sec: 0, tv_usec: 0 };
+ let r = ffi::gettimeofday(&mut tv, 0 as *mut _);
+ debug_assert_eq!(r, 0);
+
+ // Offset that time with the specified duration
+ let abs = Duration::seconds(tv.tv_sec as i64) +
+ Duration::microseconds(tv.tv_usec as i64) +
+ dur;
+ let ns = abs.num_nanoseconds().unwrap() as u64;
+ let timeout = libc::timespec {
+ tv_sec: (ns / 1000000000) as libc::time_t,
+ tv_nsec: (ns % 1000000000) as libc::c_long,
+ };
+
+ // And wait!
+ let r = ffi::pthread_cond_timedwait(self.inner.get(), mutex::raw(mutex),
+ &timeout);
+ if r != 0 {
+ debug_assert_eq!(r as int, libc::ETIMEDOUT as int);
+ false
+ } else {
+ true
+ }
+ }
+
+ #[inline]
+ pub unsafe fn destroy(&self) {
+ let r = ffi::pthread_cond_destroy(self.inner.get());
+ debug_assert_eq!(r, 0);
+ }
+}
use io::{FilePermission, Write, UnstableFileStat, Open, FileAccess, FileMode};
use io::{IoResult, FileStat, SeekStyle, Reader};
use io::{Read, Truncate, SeekCur, SeekSet, ReadWrite, SeekEnd, Append};
-use result::{Ok, Err};
+use result::Result::{Ok, Err};
use sys::retry;
use sys_common::{keep_going, eof, mkerr_libc};
let size = unsafe { rust_dirent_t_size() };
let mut buf = Vec::<u8>::with_capacity(size as uint);
- let ptr = buf.as_mut_slice().as_mut_ptr() as *mut dirent_t;
+ let ptr = buf.as_mut_ptr() as *mut dirent_t;
let p = p.to_c_str();
let dir_ptr = unsafe {opendir(p.as_ptr())};
macro_rules! helper_init( (static $name:ident: Helper<$m:ty>) => (
static $name: Helper<$m> = Helper {
- lock: ::rustrt::mutex::NATIVE_MUTEX_INIT,
+ lock: ::sync::MUTEX_INIT,
+ cond: ::sync::CONDVAR_INIT,
chan: ::cell::UnsafeCell { value: 0 as *mut Sender<$m> },
signal: ::cell::UnsafeCell { value: 0 },
initialized: ::cell::UnsafeCell { value: false },
+ shutdown: ::cell::UnsafeCell { value: false },
};
) )
pub mod c;
pub mod ext;
+pub mod condvar;
pub mod fs;
pub mod helper_signal;
+pub mod mutex;
pub mod os;
pub mod pipe;
pub mod process;
+pub mod rwlock;
+pub mod sync;
pub mod tcp;
-pub mod timer;
pub mod thread_local;
+pub mod timer;
pub mod tty;
pub mod udp;
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use cell::UnsafeCell;
+use sys::sync as ffi;
+use sys_common::mutex;
+
+pub struct Mutex { inner: UnsafeCell<ffi::pthread_mutex_t> }
+
+#[inline]
+pub unsafe fn raw(m: &Mutex) -> *mut ffi::pthread_mutex_t {
+ m.inner.get()
+}
+
+pub const MUTEX_INIT: Mutex = Mutex {
+ inner: UnsafeCell { value: ffi::PTHREAD_MUTEX_INITIALIZER },
+};
+
+impl Mutex {
+ #[inline]
+ pub unsafe fn new() -> Mutex {
+        // The value might still be moved, changing its address, so it is
+        // better to avoid initializing the potentially opaque OS data
+        // before it has landed at its final address.
+ MUTEX_INIT
+ }
+ #[inline]
+ pub unsafe fn lock(&self) {
+ let r = ffi::pthread_mutex_lock(self.inner.get());
+ debug_assert_eq!(r, 0);
+ }
+ #[inline]
+ pub unsafe fn unlock(&self) {
+ let r = ffi::pthread_mutex_unlock(self.inner.get());
+ debug_assert_eq!(r, 0);
+ }
+ #[inline]
+ pub unsafe fn try_lock(&self) -> bool {
+ ffi::pthread_mutex_trylock(self.inner.get()) == 0
+ }
+ #[inline]
+ pub unsafe fn destroy(&self) {
+ let r = ffi::pthread_mutex_destroy(self.inner.get());
+ debug_assert_eq!(r, 0);
+ }
+}
use libc;
use c_str::CString;
use mem;
-use rustrt::mutex;
-use sync::atomic;
+use sync::{atomic, Mutex};
use io::{mod, IoResult, IoError};
use prelude::*;
// Unused on Linux, where this lock is not necessary.
#[allow(dead_code)]
- lock: mutex::NativeMutex
+ lock: Mutex<()>,
}
impl Inner {
fn new(fd: fd_t) -> Inner {
- Inner { fd: fd, lock: unsafe { mutex::NativeMutex::new() } }
+ Inner { fd: fd, lock: Mutex::new(()) }
}
}
use libc::{mod, pid_t, c_void, c_int};
use c_str::CString;
-use io::{mod, IoResult, IoError};
+use io::{mod, IoResult, IoError, EndOfFile};
use mem;
use os;
use ptr;
NewChild(libc::pid_t, Sender<ProcessExit>, u64),
}
+const CLOEXEC_MSG_FOOTER: &'static [u8] = b"NOEX";
+
impl Process {
pub fn id(&self) -> pid_t {
self.pid
if pid < 0 {
return Err(super::last_error())
} else if pid > 0 {
+ #[inline]
+ fn combine(arr: &[u8]) -> i32 {
+ let a = arr[0] as u32;
+ let b = arr[1] as u32;
+ let c = arr[2] as u32;
+ let d = arr[3] as u32;
+
+ ((a << 24) | (b << 16) | (c << 8) | (d << 0)) as i32
+ }
+
+ let p = Process{ pid: pid };
drop(output);
- let mut bytes = [0, ..4];
+ let mut bytes = [0, ..8];
return match input.read(&mut bytes) {
- Ok(4) => {
- let errno = (bytes[0] as i32 << 24) |
- (bytes[1] as i32 << 16) |
- (bytes[2] as i32 << 8) |
- (bytes[3] as i32 << 0);
+ Ok(8) => {
+ assert!(combine(CLOEXEC_MSG_FOOTER) == combine(bytes.slice(4, 8)),
+ "Validation on the CLOEXEC pipe failed: {}", bytes);
+ let errno = combine(bytes.slice(0, 4));
+ assert!(p.wait(0).is_ok(), "wait(0) should either return Ok or panic");
Err(super::decode_error(errno))
}
- Err(..) => Ok(Process { pid: pid }),
- Ok(..) => panic!("short read on the cloexec pipe"),
+ Err(ref e) if e.kind == EndOfFile => Ok(p),
+ Err(e) => {
+ assert!(p.wait(0).is_ok(), "wait(0) should either return Ok or panic");
+ panic!("the CLOEXEC pipe failed: {}", e)
+ },
+ Ok(..) => { // pipe I/O up to PIPE_BUF bytes should be atomic
+ assert!(p.wait(0).is_ok(), "wait(0) should either return Ok or panic");
+ panic!("short read on the CLOEXEC pipe")
+ }
};
}
let _ = libc::close(input.fd());
fn fail(output: &mut FileDesc) -> ! {
- let errno = sys::os::errno();
+ let errno = sys::os::errno() as u32;
let bytes = [
(errno >> 24) as u8,
(errno >> 16) as u8,
(errno >> 8) as u8,
(errno >> 0) as u8,
+ CLOEXEC_MSG_FOOTER[0], CLOEXEC_MSG_FOOTER[1],
+ CLOEXEC_MSG_FOOTER[2], CLOEXEC_MSG_FOOTER[3]
];
+ // pipe I/O up to PIPE_BUF bytes should be atomic
assert!(output.write(&bytes).is_ok());
unsafe { libc::_exit(1) }
}
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use cell::UnsafeCell;
+use sys::sync as ffi;
+
+pub struct RWLock { inner: UnsafeCell<ffi::pthread_rwlock_t> }
+
+pub const RWLOCK_INIT: RWLock = RWLock {
+ inner: UnsafeCell { value: ffi::PTHREAD_RWLOCK_INITIALIZER },
+};
+
+impl RWLock {
+ #[inline]
+ pub unsafe fn new() -> RWLock {
+        // The value might still be moved, changing its address, so it is
+        // better to avoid initializing the potentially opaque OS data
+        // before it has landed at its final address.
+ RWLOCK_INIT
+ }
+ #[inline]
+ pub unsafe fn read(&self) {
+ let r = ffi::pthread_rwlock_rdlock(self.inner.get());
+ debug_assert_eq!(r, 0);
+ }
+ #[inline]
+ pub unsafe fn try_read(&self) -> bool {
+ ffi::pthread_rwlock_tryrdlock(self.inner.get()) == 0
+ }
+ #[inline]
+ pub unsafe fn write(&self) {
+ let r = ffi::pthread_rwlock_wrlock(self.inner.get());
+ debug_assert_eq!(r, 0);
+ }
+ #[inline]
+ pub unsafe fn try_write(&self) -> bool {
+ ffi::pthread_rwlock_trywrlock(self.inner.get()) == 0
+ }
+ #[inline]
+ pub unsafe fn read_unlock(&self) {
+ let r = ffi::pthread_rwlock_unlock(self.inner.get());
+ debug_assert_eq!(r, 0);
+ }
+ #[inline]
+ pub unsafe fn write_unlock(&self) { self.read_unlock() }
+ #[inline]
+ pub unsafe fn destroy(&self) {
+ let r = ffi::pthread_rwlock_destroy(self.inner.get());
+ debug_assert_eq!(r, 0);
+ }
+}
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#![allow(bad_style)]
+
+use libc;
+
+pub use self::os::{PTHREAD_MUTEX_INITIALIZER, pthread_mutex_t};
+pub use self::os::{PTHREAD_COND_INITIALIZER, pthread_cond_t};
+pub use self::os::{PTHREAD_RWLOCK_INITIALIZER, pthread_rwlock_t};
+
+extern {
+ // mutexes
+ pub fn pthread_mutex_destroy(lock: *mut pthread_mutex_t) -> libc::c_int;
+ pub fn pthread_mutex_lock(lock: *mut pthread_mutex_t) -> libc::c_int;
+ pub fn pthread_mutex_trylock(lock: *mut pthread_mutex_t) -> libc::c_int;
+ pub fn pthread_mutex_unlock(lock: *mut pthread_mutex_t) -> libc::c_int;
+
+ // cvars
+ pub fn pthread_cond_wait(cond: *mut pthread_cond_t,
+ lock: *mut pthread_mutex_t) -> libc::c_int;
+ pub fn pthread_cond_timedwait(cond: *mut pthread_cond_t,
+ lock: *mut pthread_mutex_t,
+ abstime: *const libc::timespec) -> libc::c_int;
+ pub fn pthread_cond_signal(cond: *mut pthread_cond_t) -> libc::c_int;
+ pub fn pthread_cond_broadcast(cond: *mut pthread_cond_t) -> libc::c_int;
+ pub fn pthread_cond_destroy(cond: *mut pthread_cond_t) -> libc::c_int;
+ pub fn gettimeofday(tp: *mut libc::timeval,
+ tz: *mut libc::c_void) -> libc::c_int;
+
+ // rwlocks
+ pub fn pthread_rwlock_destroy(lock: *mut pthread_rwlock_t) -> libc::c_int;
+ pub fn pthread_rwlock_rdlock(lock: *mut pthread_rwlock_t) -> libc::c_int;
+ pub fn pthread_rwlock_tryrdlock(lock: *mut pthread_rwlock_t) -> libc::c_int;
+ pub fn pthread_rwlock_wrlock(lock: *mut pthread_rwlock_t) -> libc::c_int;
+ pub fn pthread_rwlock_trywrlock(lock: *mut pthread_rwlock_t) -> libc::c_int;
+ pub fn pthread_rwlock_unlock(lock: *mut pthread_rwlock_t) -> libc::c_int;
+}
+
+#[cfg(any(target_os = "freebsd", target_os = "dragonfly"))]
+mod os {
+ use libc;
+
+ pub type pthread_mutex_t = *mut libc::c_void;
+ pub type pthread_cond_t = *mut libc::c_void;
+ pub type pthread_rwlock_t = *mut libc::c_void;
+
+ pub const PTHREAD_MUTEX_INITIALIZER: pthread_mutex_t = 0 as *mut _;
+ pub const PTHREAD_COND_INITIALIZER: pthread_cond_t = 0 as *mut _;
+ pub const PTHREAD_RWLOCK_INITIALIZER: pthread_rwlock_t = 0 as *mut _;
+}
+
+#[cfg(any(target_os = "macos", target_os = "ios"))]
+mod os {
+ use libc;
+
+ #[cfg(target_arch = "x86_64")]
+ const __PTHREAD_MUTEX_SIZE__: uint = 56;
+ #[cfg(any(target_arch = "x86",
+ target_arch = "arm"))]
+ const __PTHREAD_MUTEX_SIZE__: uint = 40;
+
+ #[cfg(target_arch = "x86_64")]
+ const __PTHREAD_COND_SIZE__: uint = 40;
+ #[cfg(any(target_arch = "x86",
+ target_arch = "arm"))]
+ const __PTHREAD_COND_SIZE__: uint = 24;
+
+ #[cfg(target_arch = "x86_64")]
+ const __PTHREAD_RWLOCK_SIZE__: uint = 192;
+ #[cfg(any(target_arch = "x86",
+ target_arch = "arm"))]
+ const __PTHREAD_RWLOCK_SIZE__: uint = 124;
+
+ const _PTHREAD_MUTEX_SIG_INIT: libc::c_long = 0x32AAABA7;
+ const _PTHREAD_COND_SIG_INIT: libc::c_long = 0x3CB0B1BB;
+ const _PTHREAD_RWLOCK_SIG_INIT: libc::c_long = 0x2DA8B3B4;
+
+ #[repr(C)]
+ pub struct pthread_mutex_t {
+ __sig: libc::c_long,
+ __opaque: [u8, ..__PTHREAD_MUTEX_SIZE__],
+ }
+ #[repr(C)]
+ pub struct pthread_cond_t {
+ __sig: libc::c_long,
+ __opaque: [u8, ..__PTHREAD_COND_SIZE__],
+ }
+ #[repr(C)]
+ pub struct pthread_rwlock_t {
+ __sig: libc::c_long,
+ __opaque: [u8, ..__PTHREAD_RWLOCK_SIZE__],
+ }
+
+ pub const PTHREAD_MUTEX_INITIALIZER: pthread_mutex_t = pthread_mutex_t {
+ __sig: _PTHREAD_MUTEX_SIG_INIT,
+ __opaque: [0, ..__PTHREAD_MUTEX_SIZE__],
+ };
+ pub const PTHREAD_COND_INITIALIZER: pthread_cond_t = pthread_cond_t {
+ __sig: _PTHREAD_COND_SIG_INIT,
+ __opaque: [0, ..__PTHREAD_COND_SIZE__],
+ };
+ pub const PTHREAD_RWLOCK_INITIALIZER: pthread_rwlock_t = pthread_rwlock_t {
+ __sig: _PTHREAD_RWLOCK_SIG_INIT,
+ __opaque: [0, ..__PTHREAD_RWLOCK_SIZE__],
+ };
+}
+
+#[cfg(target_os = "linux")]
+mod os {
+ use libc;
+
+ // minus 8 because we have an 'align' field
+ #[cfg(target_arch = "x86_64")]
+ const __SIZEOF_PTHREAD_MUTEX_T: uint = 40 - 8;
+ #[cfg(any(target_arch = "x86",
+ target_arch = "arm",
+ target_arch = "mips",
+ target_arch = "mipsel"))]
+ const __SIZEOF_PTHREAD_MUTEX_T: uint = 24 - 8;
+
+ #[cfg(any(target_arch = "x86_64",
+ target_arch = "x86",
+ target_arch = "arm",
+ target_arch = "mips",
+ target_arch = "mipsel"))]
+ const __SIZEOF_PTHREAD_COND_T: uint = 48 - 8;
+
+ #[cfg(target_arch = "x86_64")]
+ const __SIZEOF_PTHREAD_RWLOCK_T: uint = 56 - 8;
+
+ #[cfg(any(target_arch = "x86",
+ target_arch = "arm",
+ target_arch = "mips",
+ target_arch = "mipsel"))]
+ const __SIZEOF_PTHREAD_RWLOCK_T: uint = 32 - 8;
+
+ #[repr(C)]
+ pub struct pthread_mutex_t {
+ __align: libc::c_longlong,
+ size: [u8, ..__SIZEOF_PTHREAD_MUTEX_T],
+ }
+ #[repr(C)]
+ pub struct pthread_cond_t {
+ __align: libc::c_longlong,
+ size: [u8, ..__SIZEOF_PTHREAD_COND_T],
+ }
+ #[repr(C)]
+ pub struct pthread_rwlock_t {
+ __align: libc::c_longlong,
+ size: [u8, ..__SIZEOF_PTHREAD_RWLOCK_T],
+ }
+
+ pub const PTHREAD_MUTEX_INITIALIZER: pthread_mutex_t = pthread_mutex_t {
+ __align: 0,
+ size: [0, ..__SIZEOF_PTHREAD_MUTEX_T],
+ };
+ pub const PTHREAD_COND_INITIALIZER: pthread_cond_t = pthread_cond_t {
+ __align: 0,
+ size: [0, ..__SIZEOF_PTHREAD_COND_T],
+ };
+ pub const PTHREAD_RWLOCK_INITIALIZER: pthread_rwlock_t = pthread_rwlock_t {
+ __align: 0,
+ size: [0, ..__SIZEOF_PTHREAD_RWLOCK_T],
+ };
+}
+#[cfg(target_os = "android")]
+mod os {
+ use libc;
+
+ #[repr(C)]
+ pub struct pthread_mutex_t { value: libc::c_int }
+ #[repr(C)]
+ pub struct pthread_cond_t { value: libc::c_int }
+ #[repr(C)]
+ pub struct pthread_rwlock_t {
+ lock: pthread_mutex_t,
+ cond: pthread_cond_t,
+ numLocks: libc::c_int,
+ writerThreadId: libc::c_int,
+ pendingReaders: libc::c_int,
+ pendingWriters: libc::c_int,
+ reserved: [*mut libc::c_void, ..4],
+ }
+
+ pub const PTHREAD_MUTEX_INITIALIZER: pthread_mutex_t = pthread_mutex_t {
+ value: 0,
+ };
+ pub const PTHREAD_COND_INITIALIZER: pthread_cond_t = pthread_cond_t {
+ value: 0,
+ };
+ pub const PTHREAD_RWLOCK_INITIALIZER: pthread_rwlock_t = pthread_rwlock_t {
+ lock: PTHREAD_MUTEX_INITIALIZER,
+ cond: PTHREAD_COND_INITIALIZER,
+ numLocks: 0,
+ writerThreadId: 0,
+ pendingReaders: 0,
+ pendingWriters: 0,
+ reserved: [0 as *mut _, ..4],
+ };
+}
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use cell::UnsafeCell;
+use libc::{mod, DWORD};
+use libc;
+use os;
+use sys::mutex::{mod, Mutex};
+use sys::sync as ffi;
+use time::Duration;
+
+pub struct Condvar { inner: UnsafeCell<ffi::CONDITION_VARIABLE> }
+
+pub const CONDVAR_INIT: Condvar = Condvar {
+ inner: UnsafeCell { value: ffi::CONDITION_VARIABLE_INIT }
+};
+
+impl Condvar {
+ #[inline]
+ pub unsafe fn new() -> Condvar { CONDVAR_INIT }
+
+ #[inline]
+ pub unsafe fn wait(&self, mutex: &Mutex) {
+ let r = ffi::SleepConditionVariableCS(self.inner.get(),
+ mutex::raw(mutex),
+ libc::INFINITE);
+ debug_assert!(r != 0);
+ }
+
+ pub unsafe fn wait_timeout(&self, mutex: &Mutex, dur: Duration) -> bool {
+ let r = ffi::SleepConditionVariableCS(self.inner.get(),
+ mutex::raw(mutex),
+ dur.num_milliseconds() as DWORD);
+ if r == 0 {
+ const ERROR_TIMEOUT: DWORD = 0x5B4;
+ debug_assert_eq!(os::errno() as uint, ERROR_TIMEOUT as uint);
+ false
+ } else {
+ true
+ }
+ }
+
+ #[inline]
+ pub unsafe fn notify_one(&self) {
+ ffi::WakeConditionVariable(self.inner.get())
+ }
+
+ #[inline]
+ pub unsafe fn notify_all(&self) {
+ ffi::WakeAllConditionVariable(self.inner.get())
+ }
+
+ pub unsafe fn destroy(&self) {
+ // ...
+ }
+}
libc::VOLUME_NAME_DOS)
});
let ret = match ret {
- Some(ref s) if s.as_slice().starts_with(r"\\?\") => { // "
- Ok(Path::new(s.as_slice().slice_from(4)))
+ Some(ref s) if s.starts_with(r"\\?\") => { // "
+ Ok(Path::new(s.slice_from(4)))
}
Some(s) => Ok(Path::new(s)),
None => Err(super::last_error()),
macro_rules! helper_init( (static $name:ident: Helper<$m:ty>) => (
static $name: Helper<$m> = Helper {
- lock: ::rustrt::mutex::NATIVE_MUTEX_INIT,
+ lock: ::sync::MUTEX_INIT,
+ cond: ::sync::CONDVAR_INIT,
chan: ::cell::UnsafeCell { value: 0 as *mut Sender<$m> },
signal: ::cell::UnsafeCell { value: 0 },
initialized: ::cell::UnsafeCell { value: false },
+ shutdown: ::cell::UnsafeCell { value: false },
};
) )
pub mod c;
pub mod ext;
+pub mod condvar;
pub mod fs;
pub mod helper_signal;
+pub mod mutex;
pub mod os;
pub mod pipe;
pub mod process;
+pub mod rwlock;
+pub mod sync;
pub mod tcp;
pub mod thread_local;
pub mod timer;
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use prelude::*;
+
+use sync::atomic;
+use alloc::{mod, heap};
+
+use libc::DWORD;
+use sys::sync as ffi;
+
+const SPIN_COUNT: DWORD = 4000;
+
+pub struct Mutex { inner: atomic::AtomicUint }
+
+pub const MUTEX_INIT: Mutex = Mutex { inner: atomic::INIT_ATOMIC_UINT };
+
+#[inline]
+pub unsafe fn raw(m: &Mutex) -> ffi::LPCRITICAL_SECTION {
+ m.get()
+}
+
+impl Mutex {
+ #[inline]
+ pub unsafe fn new() -> Mutex {
+ Mutex { inner: atomic::AtomicUint::new(init_lock() as uint) }
+ }
+ #[inline]
+ pub unsafe fn lock(&self) {
+ ffi::EnterCriticalSection(self.get())
+ }
+ #[inline]
+ pub unsafe fn try_lock(&self) -> bool {
+ ffi::TryEnterCriticalSection(self.get()) != 0
+ }
+ #[inline]
+ pub unsafe fn unlock(&self) {
+ ffi::LeaveCriticalSection(self.get())
+ }
+ pub unsafe fn destroy(&self) {
+ let lock = self.inner.swap(0, atomic::SeqCst);
+ if lock != 0 { free_lock(lock as ffi::LPCRITICAL_SECTION) }
+ }
+
+ unsafe fn get(&self) -> ffi::LPCRITICAL_SECTION {
+ match self.inner.load(atomic::SeqCst) {
+ 0 => {}
+ n => return n as ffi::LPCRITICAL_SECTION
+ }
+ let lock = init_lock();
+ match self.inner.compare_and_swap(0, lock as uint, atomic::SeqCst) {
+ 0 => return lock as ffi::LPCRITICAL_SECTION,
+ _ => {}
+ }
+ free_lock(lock);
+ return self.inner.load(atomic::SeqCst) as ffi::LPCRITICAL_SECTION;
+ }
+}
+
+unsafe fn init_lock() -> ffi::LPCRITICAL_SECTION {
+ let block = heap::allocate(ffi::CRITICAL_SECTION_SIZE, 8)
+ as ffi::LPCRITICAL_SECTION;
+ if block.is_null() { alloc::oom() }
+ ffi::InitializeCriticalSectionAndSpinCount(block, SPIN_COUNT);
+ return block;
+}
+
+unsafe fn free_lock(h: ffi::LPCRITICAL_SECTION) {
+ ffi::DeleteCriticalSection(h);
+ heap::deallocate(h as *mut _, ffi::CRITICAL_SECTION_SIZE, 8);
+}
use c_str::CString;
use mem;
use ptr;
-use sync::atomic;
-use rustrt::mutex;
+use sync::{atomic, Mutex};
use io::{mod, IoError, IoResult};
use prelude::*;
struct Inner {
handle: libc::HANDLE,
- lock: mutex::NativeMutex,
+ lock: Mutex<()>,
read_closed: atomic::AtomicBool,
write_closed: atomic::AtomicBool,
}
fn new(handle: libc::HANDLE) -> Inner {
Inner {
handle: handle,
- lock: unsafe { mutex::NativeMutex::new() },
+ lock: Mutex::new(()),
read_closed: atomic::AtomicBool::new(false),
write_closed: atomic::AtomicBool::new(false),
}
with_envp(cfg.env(), |envp| {
with_dirp(cfg.cwd(), |dirp| {
- let mut cmd_str: Vec<u16> = cmd_str.as_slice().utf16_units().collect();
+ let mut cmd_str: Vec<u16> = cmd_str.utf16_units().collect();
cmd_str.push(0);
let created = CreateProcessW(ptr::null(),
cmd_str.as_mut_ptr(),
let kv = format!("{}={}",
pair.ref0().container_as_str().unwrap(),
pair.ref1().container_as_str().unwrap());
- blk.extend(kv.as_slice().utf16_units());
+ blk.extend(kv.utf16_units());
blk.push(0);
}
assert_eq!(
test_wrapper("prog", &["aaa", "bbb", "ccc"]),
- "prog aaa bbb ccc".to_string()
+ "prog aaa bbb ccc"
);
assert_eq!(
test_wrapper("C:\\Program Files\\blah\\blah.exe", &["aaa"]),
- "\"C:\\Program Files\\blah\\blah.exe\" aaa".to_string()
+ "\"C:\\Program Files\\blah\\blah.exe\" aaa"
);
assert_eq!(
test_wrapper("C:\\Program Files\\test", &["aa\"bb"]),
- "\"C:\\Program Files\\test\" aa\\\"bb".to_string()
+ "\"C:\\Program Files\\test\" aa\\\"bb"
);
assert_eq!(
test_wrapper("echo", &["a b c"]),
- "echo \"a b c\"".to_string()
+ "echo \"a b c\""
);
assert_eq!(
test_wrapper("\u03c0\u042f\u97f3\u00e6\u221e", &[]),
- "\u03c0\u042f\u97f3\u00e6\u221e".to_string()
+ "\u03c0\u042f\u97f3\u00e6\u221e"
);
}
}
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use cell::UnsafeCell;
+use sys::sync as ffi;
+
+pub struct RWLock { inner: UnsafeCell<ffi::SRWLOCK> }
+
+pub const RWLOCK_INIT: RWLock = RWLock {
+ inner: UnsafeCell { value: ffi::SRWLOCK_INIT }
+};
+
+impl RWLock {
+ #[inline]
+ pub unsafe fn new() -> RWLock { RWLOCK_INIT }
+
+ #[inline]
+ pub unsafe fn read(&self) {
+ ffi::AcquireSRWLockShared(self.inner.get())
+ }
+ #[inline]
+ pub unsafe fn try_read(&self) -> bool {
+ ffi::TryAcquireSRWLockShared(self.inner.get()) != 0
+ }
+ #[inline]
+ pub unsafe fn write(&self) {
+ ffi::AcquireSRWLockExclusive(self.inner.get())
+ }
+ #[inline]
+ pub unsafe fn try_write(&self) -> bool {
+ ffi::TryAcquireSRWLockExclusive(self.inner.get()) != 0
+ }
+ #[inline]
+ pub unsafe fn read_unlock(&self) {
+ ffi::ReleaseSRWLockShared(self.inner.get())
+ }
+ #[inline]
+ pub unsafe fn write_unlock(&self) {
+ ffi::ReleaseSRWLockExclusive(self.inner.get())
+ }
+
+ #[inline]
+ pub unsafe fn destroy(&self) {
+ // ...
+ }
+}
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use libc::{BOOL, DWORD, c_void, LPVOID};
+use libc::types::os::arch::extra::BOOLEAN;
+
+pub type LPCRITICAL_SECTION = *mut c_void;
+pub type LPCONDITION_VARIABLE = *mut CONDITION_VARIABLE;
+pub type LPSRWLOCK = *mut SRWLOCK;
+
+#[cfg(target_arch = "x86")]
+pub const CRITICAL_SECTION_SIZE: uint = 24;
+#[cfg(target_arch = "x86_64")]
+pub const CRITICAL_SECTION_SIZE: uint = 40;
+
+#[repr(C)]
+pub struct CONDITION_VARIABLE { pub ptr: LPVOID }
+#[repr(C)]
+pub struct SRWLOCK { pub ptr: LPVOID }
+
+pub const CONDITION_VARIABLE_INIT: CONDITION_VARIABLE = CONDITION_VARIABLE {
+ ptr: 0 as *mut _,
+};
+pub const SRWLOCK_INIT: SRWLOCK = SRWLOCK { ptr: 0 as *mut _ };
+
+extern "system" {
+ // critical sections
+ pub fn InitializeCriticalSectionAndSpinCount(
+ lpCriticalSection: LPCRITICAL_SECTION,
+ dwSpinCount: DWORD) -> BOOL;
+ pub fn DeleteCriticalSection(lpCriticalSection: LPCRITICAL_SECTION);
+ pub fn EnterCriticalSection(lpCriticalSection: LPCRITICAL_SECTION);
+ pub fn LeaveCriticalSection(lpCriticalSection: LPCRITICAL_SECTION);
+ pub fn TryEnterCriticalSection(lpCriticalSection: LPCRITICAL_SECTION) -> BOOL;
+
+ // condition variables
+ pub fn SleepConditionVariableCS(ConditionVariable: LPCONDITION_VARIABLE,
+ CriticalSection: LPCRITICAL_SECTION,
+ dwMilliseconds: DWORD) -> BOOL;
+ pub fn WakeConditionVariable(ConditionVariable: LPCONDITION_VARIABLE);
+ pub fn WakeAllConditionVariable(ConditionVariable: LPCONDITION_VARIABLE);
+
+ // slim rwlocks
+ pub fn AcquireSRWLockExclusive(SRWLock: LPSRWLOCK);
+ pub fn AcquireSRWLockShared(SRWLock: LPSRWLOCK);
+ pub fn ReleaseSRWLockExclusive(SRWLock: LPSRWLOCK);
+ pub fn ReleaseSRWLockShared(SRWLock: LPSRWLOCK);
+ pub fn TryAcquireSRWLockExclusive(SRWLock: LPSRWLOCK) -> BOOLEAN;
+ pub fn TryAcquireSRWLockShared(SRWLock: LPSRWLOCK) -> BOOLEAN;
+}
+
pub fn write(&mut self, buf: &[u8]) -> IoResult<()> {
let utf16 = match from_utf8(buf) {
Some(utf8) => {
- utf8.as_slice().utf16_units().collect::<Vec<u16>>()
+ utf8.utf16_units().collect::<Vec<u16>>()
}
None => return Err(invalid_encoding()),
};
use comm::channel;
use io::{Writer, stdio};
use kinds::{Send, marker};
-use option::{None, Some, Option};
+use option::Option;
+use option::Option::{None, Some};
use result::Result;
use rustrt::local::Local;
use rustrt::task::Task;
use rustrt::task;
-use str::{Str, SendStr};
+use str::SendStr;
use string::{String, ToString};
use sync::Future;
/// # Return value
///
/// If the child task executes successfully (without panicking) then the
- /// future returns `result::Ok` containing the value returned by the
- /// function. If the child task panics then the future returns `result::Err`
- /// containing the argument to `panic!(...)` as an `Any` trait object.
+ /// future returns `result::Result::Ok` containing the value returned by the
+ /// function. If the child task panics then the future returns
+ /// `result::Result::Err` containing the argument to `panic!(...)` as an
+ /// `Any` trait object.
#[experimental = "Futures are experimental."]
pub fn try_future<T:Send>(self, f: proc():Send -> T)
-> Future<Result<T, Box<Any + Send>>> {
let task = Local::borrow(None::<Task>);
match task.name {
- Some(ref name) => Some(name.as_slice().to_string()),
+ Some(ref name) => Some(name.to_string()),
None => None
}
}
use borrow::IntoCow;
use boxed::BoxAny;
use prelude::*;
- use result::{Ok, Err};
+ use result::Result::{Ok, Err};
use result;
use std::io::{ChanReader, ChanWriter};
use string::String;
#[test]
fn test_owned_named_task() {
TaskBuilder::new().named("ada lovelace".to_string()).try(proc() {
- assert!(name().unwrap() == "ada lovelace".to_string());
+ assert!(name().unwrap() == "ada lovelace");
}).map_err(|_| ()).unwrap();
}
#[test]
fn test_static_named_task() {
TaskBuilder::new().named("ada lovelace").try(proc() {
- assert!(name().unwrap() == "ada lovelace".to_string());
+ assert!(name().unwrap() == "ada lovelace");
}).map_err(|_| ()).unwrap();
}
#[test]
fn test_send_named_task() {
TaskBuilder::new().named("ada lovelace".into_cow()).try(proc() {
- assert!(name().unwrap() == "ada lovelace".to_string());
+ assert!(name().unwrap() == "ada lovelace");
}).map_err(|_| ()).unwrap();
}
match try(proc() {
"Success!".to_string()
}).as_ref().map(|s| s.as_slice()) {
- result::Ok("Success!") => (),
+ result::Result::Ok("Success!") => (),
_ => panic!()
}
}
match try(proc() {
panic!()
}) {
- result::Err(_) => (),
- result::Ok(()) => panic!()
+ result::Result::Err(_) => (),
+ result::Result::Ok(()) => panic!()
}
}
Err(e) => {
type T = String;
assert!(e.is::<T>());
- assert_eq!(*e.downcast::<T>().unwrap(), "owned string".to_string());
+ assert_eq!(*e.downcast::<T>().unwrap(), "owned string");
}
Ok(()) => panic!()
}
assert!(r.is_ok());
let output = reader.read_to_string().unwrap();
- assert_eq!(output, "Hello, world!".to_string());
+ assert_eq!(output, "Hello, world!");
}
// NOTE: the corresponding test for stderr is in run-pass/task-stderr, due
use std::cell::UnsafeCell as __UnsafeCell;
use std::thread_local::KeyInner as __KeyInner;
use std::option::Option as __Option;
- use std::option::None as __None;
+ use std::option::Option::None as __None;
__thread_local_inner!(static __KEY: __UnsafeCell<__Option<$t>> = {
__UnsafeCell { value: __None }
use std::cell::UnsafeCell as __UnsafeCell;
use std::thread_local::KeyInner as __KeyInner;
use std::option::Option as __Option;
- use std::option::None as __None;
+ use std::option::Option::None as __None;
__thread_local_inner!(static __KEY: __UnsafeCell<__Option<$t>> = {
__UnsafeCell { value: __None }
inner: ::std::cell::UnsafeCell { value: $init },
os: ::std::thread_local::OsStaticKey {
inner: ::std::thread_local::OS_INIT_INNER,
- dtor: ::std::option::Some(__destroy),
+ dtor: ::std::option::Option::Some(__destroy),
},
}
};
#![experimental]
use {fmt, i64};
+use kinds::Copy;
use ops::{Add, Sub, Mul, Div, Neg};
-use option::{Option, Some, None};
+use option::Option;
+use option::Option::{Some, None};
use num::Int;
-use result::{Result, Ok, Err};
+use result::Result;
+use result::Result::{Ok, Err};
/// The number of nanoseconds in a microsecond.
const NANOS_PER_MICRO: i32 = 1000;
nanos: (i64::MAX % MILLIS_PER_SEC) as i32 * NANOS_PER_MILLI
};
+impl Copy for Duration {}
+
impl Duration {
/// Makes a new `Duration` with given number of weeks.
/// Equivalent to `Duration::seconds(weeks * 7 * 24 * 60 * 60), with overflow checks.
mod tests {
use super::{Duration, MIN, MAX};
use {i32, i64};
- use option::{Some, None};
+ use option::Option::{Some, None};
use string::ToString;
#[test]
#[test]
fn test_duration_fmt() {
- assert_eq!(Duration::zero().to_string(), "PT0S".to_string());
- assert_eq!(Duration::days(42).to_string(), "P42D".to_string());
- assert_eq!(Duration::days(-42).to_string(), "-P42D".to_string());
- assert_eq!(Duration::seconds(42).to_string(), "PT42S".to_string());
- assert_eq!(Duration::milliseconds(42).to_string(), "PT0.042S".to_string());
- assert_eq!(Duration::microseconds(42).to_string(), "PT0.000042S".to_string());
- assert_eq!(Duration::nanoseconds(42).to_string(), "PT0.000000042S".to_string());
+ assert_eq!(Duration::zero().to_string(), "PT0S");
+ assert_eq!(Duration::days(42).to_string(), "P42D");
+ assert_eq!(Duration::days(-42).to_string(), "-P42D");
+ assert_eq!(Duration::seconds(42).to_string(), "PT42S");
+ assert_eq!(Duration::milliseconds(42).to_string(), "PT0.042S");
+ assert_eq!(Duration::microseconds(42).to_string(), "PT0.000042S");
+ assert_eq!(Duration::nanoseconds(42).to_string(), "PT0.000000042S");
assert_eq!((Duration::days(7) + Duration::milliseconds(6543)).to_string(),
- "P7DT6.543S".to_string());
- assert_eq!(Duration::seconds(-86401).to_string(), "-P1DT1S".to_string());
- assert_eq!(Duration::nanoseconds(-1).to_string(), "-PT0.000000001S".to_string());
+ "P7DT6.543S");
+ assert_eq!(Duration::seconds(-86401).to_string(), "-P1DT1S");
+ assert_eq!(Duration::nanoseconds(-1).to_string(), "-PT0.000000001S");
// the format specifier should have no effect on `Duration`
assert_eq!(format!("{:30}", Duration::days(1) + Duration::milliseconds(2345)),
- "P1DT2.345S".to_string());
+ "P1DT2.345S");
}
}
use std::fmt;
#[deriving(PartialEq)]
-pub enum Os { OsWindows, OsMacos, OsLinux, OsAndroid, OsFreebsd, OsiOS,
- OsDragonfly }
+pub enum Os {
+ OsWindows,
+ OsMacos,
+ OsLinux,
+ OsAndroid,
+ OsFreebsd,
+ OsiOS,
+ OsDragonfly,
+}
+
+impl Copy for Os {}
#[deriving(PartialEq, Eq, Hash, Encodable, Decodable, Clone)]
pub enum Abi {
RustCall,
}
+impl Copy for Abi {}
+
#[allow(non_camel_case_types)]
#[deriving(PartialEq)]
pub enum Architecture {
Mipsel
}
+impl Copy for Architecture {}
+
pub struct AbiData {
abi: Abi,
name: &'static str,
}
+impl Copy for AbiData {}
+
pub enum AbiArchitecture {
/// Not a real ABI (e.g., intrinsic)
RustArch,
Archs(u32)
}
+impl Copy for AbiArchitecture {}
+
#[allow(non_upper_case_globals)]
static AbiDatas: &'static [AbiData] = &[
// Platform-specific ABIs
pub ctxt: SyntaxContext
}
+impl Copy for Ident {}
+
impl Ident {
/// Construct an identifier with the given name and an empty context:
pub fn new(name: Name) -> Ident { Ident {name: name, ctxt: EMPTY_CTXT}}
#[deriving(Eq, Ord, PartialEq, PartialOrd, Hash, Encodable, Decodable, Clone)]
pub struct Name(pub u32);
+impl Copy for Name {}
+
impl Name {
pub fn as_str<'a>(&'a self) -> &'a str {
unsafe {
pub name: Name
}
+impl Copy for Lifetime {}
+
#[deriving(Clone, PartialEq, Eq, Encodable, Decodable, Hash, Show)]
pub struct LifetimeDef {
pub lifetime: Lifetime,
pub node: NodeId,
}
+impl Copy for DefId {}
+
/// Item definitions in the currently-compiled crate would have the CrateNum
/// LOCAL_CRATE in their DefId.
pub const LOCAL_CRATE: CrateNum = 0;
BindByValue(Mutability),
}
+impl Copy for BindingMode {}
+
#[deriving(Clone, PartialEq, Eq, Encodable, Decodable, Hash, Show)]
pub enum PatWildKind {
/// Represents the wildcard pattern `_`
PatWildMulti,
}
+impl Copy for PatWildKind {}
+
#[deriving(Clone, PartialEq, Eq, Encodable, Decodable, Hash, Show)]
pub enum Pat_ {
/// Represents a wildcard pattern (either `_` or `..`)
MutImmutable,
}
+impl Copy for Mutability {}
+
#[deriving(Clone, PartialEq, Eq, Encodable, Decodable, Hash, Show)]
pub enum BinOp {
BiAdd,
BiGt,
}
+#[cfg(not(stage0))]
+impl Copy for BinOp {}
+
#[deriving(Clone, PartialEq, Eq, Encodable, Decodable, Hash, Show)]
pub enum UnOp {
UnUniq,
UnNeg
}
+impl Copy for UnOp {}
+
pub type Stmt = Spanned<Stmt_>;
#[deriving(Clone, PartialEq, Eq, Encodable, Decodable, Hash, Show)]
LocalFor,
}
+impl Copy for LocalSource {}
+
// FIXME (pending discussion of #1697, #2178...): local should really be
// a refinement on pat.
/// Local represents a `let` statement, e.g., `let <pat>:<ty> = <expr>;`
UnsafeBlock(UnsafeSource),
}
+impl Copy for BlockCheckMode {}
+
#[deriving(Clone, PartialEq, Eq, Encodable, Decodable, Hash, Show)]
pub enum UnsafeSource {
CompilerGenerated,
UserProvided,
}
+impl Copy for UnsafeSource {}
+
#[deriving(Clone, PartialEq, Eq, Encodable, Decodable, Hash, Show)]
pub struct Expr {
pub id: NodeId,
MatchWhileLetDesugar,
}
+impl Copy for MatchSource {}
+
#[deriving(Clone, PartialEq, Eq, Encodable, Decodable, Hash, Show)]
pub enum CaptureClause {
CaptureByValue,
CaptureByRef,
}
+impl Copy for CaptureClause {}
+
/// A delimited sequence of token trees
#[deriving(Clone, PartialEq, Eq, Encodable, Decodable, Hash, Show)]
pub struct Delimited {
OneOrMore,
}
+impl Copy for KleeneOp {}
+
/// When the main rust parser encounters a syntax-extension invocation, it
/// parses the arguments to the invocation as a token-tree. This is a very
/// loose structure, such that all sorts of different AST-fragments can
RawStr(uint)
}
+impl Copy for StrStyle {}
+
pub type Lit = Spanned<Lit_>;
#[deriving(Clone, PartialEq, Eq, Encodable, Decodable, Hash, Show)]
Plus
}
-impl<T: Int> Sign {
+impl Copy for Sign {}
+
+impl<T> Sign where T: Int {
pub fn new(n: T) -> Sign {
if n < Int::zero() {
Minus
UnsuffixedIntLit(Sign)
}
+impl Copy for LitIntType {}
+
impl LitIntType {
pub fn suffix_len(&self) -> uint {
match *self {
TyI64,
}
+impl Copy for IntTy {}
+
impl fmt::Show for IntTy {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "{}", ast_util::int_ty_to_string(*self, None))
TyU64,
}
+impl Copy for UintTy {}
+
impl UintTy {
pub fn suffix_len(&self) -> uint {
match *self {
TyF64,
}
+impl Copy for FloatTy {}
+
impl fmt::Show for FloatTy {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "{}", ast_util::float_ty_to_string(*self))
TyChar
}
+impl Copy for PrimTy {}
+
#[deriving(Clone, PartialEq, Eq, Encodable, Decodable, Hash)]
pub enum Onceness {
Once,
Many
}
+impl Copy for Onceness {}
+
impl fmt::Show for Onceness {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match *self {
AsmIntel
}
+impl Copy for AsmDialect {}
+
#[deriving(Clone, PartialEq, Eq, Encodable, Decodable, Hash, Show)]
pub struct InlineAsm {
pub asm: InternedString,
NormalFn,
}
+impl Copy for FnStyle {}
+
impl fmt::Show for FnStyle {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match *self {
PathListMod { id: NodeId }
}
+impl Copy for PathListItem_ {}
+
impl PathListItem_ {
pub fn id(&self) -> NodeId {
match *self {
AttrInner,
}
+impl Copy for AttrStyle {}
+
#[deriving(Clone, PartialEq, Eq, Encodable, Decodable, Hash, Show)]
pub struct AttrId(pub uint);
+impl Copy for AttrId {}
+
/// Doc-comments are promoted to attributes that have is_sugared_doc = true
#[deriving(Clone, PartialEq, Eq, Encodable, Decodable, Hash, Show)]
pub struct Attribute_ {
Inherited,
}
+impl Copy for Visibility {}
+
impl Visibility {
pub fn inherit_from(&self, parent_visibility: Visibility) -> Visibility {
match self {
UnnamedField(Visibility),
}
+impl Copy for StructFieldKind {}
+
impl StructFieldKind {
pub fn is_unnamed(&self) -> bool {
match *self {
FnOnceUnboxedClosureKind,
}
+impl Copy for UnboxedClosureKind {}
+
/// The data we save and restore about an inlined item or method. This is not
/// part of the AST that we parse from a file, but it becomes part of the tree
/// that we trans.
/// To construct one, use the `Code::from_node` function.
pub struct FnLikeNode<'a> { node: ast_map::Node<'a> }
+impl<'a> Copy for FnLikeNode<'a> {}
+
/// MaybeFnLike wraps a method that indicates if an object
/// corresponds to some FnLikeNode.
pub trait MaybeFnLike { fn is_fn_like(&self) -> bool; }
BlockCode(&'a Block),
}
+impl<'a> Copy for Code<'a> {}
+
impl<'a> Code<'a> {
pub fn id(&self) -> ast::NodeId {
match *self {
PathName(Name)
}
+impl Copy for PathElem {}
+
impl PathElem {
pub fn name(&self) -> Name {
match *self {
NodeLifetime(&'ast Lifetime),
}
+impl<'ast> Copy for Node<'ast> {}
+
/// Represents an entry and its parent Node ID
/// The odd layout is to bring down the total size.
#[deriving(Show)]
RootInlinedParent(&'ast InlinedParent)
}
+impl<'ast> Copy for MapEntry<'ast> {}
+
impl<'ast> Clone for MapEntry<'ast> {
fn clone(&self) -> MapEntry<'ast> {
*self
}
fn find_entry(&self, id: NodeId) -> Option<MapEntry<'ast>> {
- self.map.borrow().as_slice().get(id as uint).map(|e| *e)
+ self.map.borrow().get(id as uint).map(|e| *e)
}
pub fn krate(&self) -> &'ast Crate {
pub max: NodeId,
}
+impl Copy for IdRange {}
+
impl IdRange {
pub fn max() -> IdRange {
IdRange {
InlineNever,
}
+impl Copy for InlineAttr {}
+
/// Determine what `#[inline]` attribute is present in `attrs`, if any.
pub fn find_inline_attr(attrs: &[Attribute]) -> InlineAttr {
// FIXME (#2809)---validate the usage of #[inline] and #[inline]
Locked
}
+impl Copy for StabilityLevel {}
+
pub fn find_stability_generic<'a,
AM: AttrMetaMethods,
I: Iterator<&'a AM>>
ReprPacked,
}
+impl Copy for ReprAttr {}
+
impl ReprAttr {
pub fn is_ffi_safe(&self) -> bool {
match *self {
UnsignedInt(ast::UintTy)
}
+impl Copy for IntType {}
+
impl IntType {
#[inline]
pub fn is_signed(self) -> bool {
#[deriving(Clone, PartialEq, Eq, Hash, PartialOrd, Show)]
pub struct BytePos(pub u32);
+impl Copy for BytePos {}
+
/// A character offset. Because of multibyte utf8 characters, a byte offset
/// is not equivalent to a character offset. The CodeMap will convert BytePos
/// values to CharPos values as necessary.
#[deriving(PartialEq, Hash, PartialOrd, Show)]
pub struct CharPos(pub uint);
+impl Copy for CharPos {}
+
// FIXME: Lots of boilerplate in these impls, but so far my attempts to fix
// have been unsuccessful
pub expn_id: ExpnId
}
+impl Copy for Span {}
+
pub const DUMMY_SP: Span = Span { lo: BytePos(0), hi: BytePos(0), expn_id: NO_EXPANSION };
#[deriving(Clone, PartialEq, Eq, Encodable, Decodable, Hash, Show)]
pub span: Span,
}
+impl<T:Copy> Copy for Spanned<T> {}
+
impl PartialEq for Span {
fn eq(&self, other: &Span) -> bool {
return (*self).lo == (*other).lo && (*self).hi == (*other).hi;
MacroBang
}
+impl Copy for MacroFormat {}
+
#[deriving(Clone, Hash, Show)]
pub struct NameAndSpan {
/// The name of the macro that was invoked to create the thing
#[deriving(PartialEq, Eq, Clone, Show, Hash, Encodable, Decodable)]
pub struct ExpnId(u32);
+impl Copy for ExpnId {}
+
pub const NO_EXPANSION: ExpnId = ExpnId(-1);
impl ExpnId {
pub bytes: uint,
}
+impl Copy for MultiByteChar {}
+
/// A single source in the CodeMap
pub struct FileMap {
/// The name of the file that the source came from, source that doesn't
lines.get(line_number).map(|&line| {
let begin: BytePos = line - self.start_pos;
let begin = begin.to_uint();
- let slice = self.src.as_slice().slice_from(begin);
+ let slice = self.src.slice_from(begin);
match slice.find('\n') {
Some(e) => slice.slice_to(e),
None => slice
}
pub fn is_real_file(&self) -> bool {
- !(self.name.as_slice().starts_with("<") &&
- self.name.as_slice().ends_with(">"))
+ !(self.name.starts_with("<") &&
+ self.name.ends_with(">"))
}
}
// Remove utf-8 BOM if any.
// FIXME #12884: no efficient/safe way to remove from the start of a string
// and reuse the allocation.
- let mut src = if src.as_slice().starts_with("\ufeff") {
- String::from_str(src.as_slice().slice_from(3))
+ let mut src = if src.starts_with("\ufeff") {
+ String::from_str(src.slice_from(3))
} else {
String::from_str(src.as_slice())
};
// This is a workaround to prevent CodeMap.lookup_filemap_idx from accidentally
// overflowing into the next filemap in case the last byte of span is also the last
// byte of filemap, which leads to incorrect results from CodeMap.span_to_*.
- if src.len() > 0 && !src.as_slice().ends_with("\n") {
+ if src.len() > 0 && !src.ends_with("\n") {
src.push('\n');
}
if begin.fm.start_pos != end.fm.start_pos {
None
} else {
- Some(begin.fm.src.as_slice().slice(begin.pos.to_uint(),
- end.pos.to_uint()).to_string())
+ Some(begin.fm.src.slice(begin.pos.to_uint(),
+ end.pos.to_uint()).to_string())
}
}
pub fn get_filemap(&self, filename: &str) -> Rc<FileMap> {
for fm in self.files.borrow().iter() {
- if filename == fm.name.as_slice() {
+ if filename == fm.name {
return fm.clone();
}
}
let cm = init_code_map();
let fmabp1 = cm.lookup_byte_offset(BytePos(22));
- assert_eq!(fmabp1.fm.name, "blork.rs".to_string());
+ assert_eq!(fmabp1.fm.name, "blork.rs");
assert_eq!(fmabp1.pos, BytePos(22));
let fmabp2 = cm.lookup_byte_offset(BytePos(24));
- assert_eq!(fmabp2.fm.name, "blork2.rs".to_string());
+ assert_eq!(fmabp2.fm.name, "blork2.rs");
assert_eq!(fmabp2.pos, BytePos(0));
}
let cm = init_code_map();
let loc1 = cm.lookup_char_pos(BytePos(22));
- assert_eq!(loc1.file.name, "blork.rs".to_string());
+ assert_eq!(loc1.file.name, "blork.rs");
assert_eq!(loc1.line, 2);
assert_eq!(loc1.col, CharPos(10));
let loc2 = cm.lookup_char_pos(BytePos(24));
- assert_eq!(loc2.file.name, "blork2.rs".to_string());
+ assert_eq!(loc2.file.name, "blork2.rs");
assert_eq!(loc2.line, 1);
assert_eq!(loc2.col, CharPos(0));
}
let span = Span {lo: BytePos(12), hi: BytePos(23), expn_id: NO_EXPANSION};
let file_lines = cm.span_to_lines(span);
- assert_eq!(file_lines.file.name, "blork.rs".to_string());
+ assert_eq!(file_lines.file.name, "blork.rs");
assert_eq!(file_lines.lines.len(), 1);
assert_eq!(file_lines.lines[0], 1u);
}
let span = Span {lo: BytePos(12), hi: BytePos(23), expn_id: NO_EXPANSION};
let sstr = cm.span_to_string(span);
- assert_eq!(sstr, "blork.rs:2:1: 2:12".to_string());
+ assert_eq!(sstr, "blork.rs:2:1: 2:12");
}
}
FileLine(Span),
}
+impl Copy for RenderSpan {}
+
impl RenderSpan {
fn span(self) -> Span {
match self {
Never
}
+impl Copy for ColorConfig {}
+
pub trait Emitter {
fn emit(&mut self, cmsp: Option<(&codemap::CodeMap, Span)>,
msg: &str, code: Option<&str>, lvl: Level);
/// how a rustc task died (if so desired).
pub struct FatalError;
+impl Copy for FatalError {}
+
/// Signifies that the compiler died with an explicit call to `.bug`
/// or `.span_bug` rather than a failed assertion, etc.
pub struct ExplicitBug;
+impl Copy for ExplicitBug {}
+
/// A span-handler is like a handler but also
/// accepts span information for source-location
/// reporting.
Help,
}
+impl Copy for Level {}
+
impl fmt::Show for Level {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
use std::fmt::Show;
meta_item: &ast::MetaItem,
item: &ast::Item,
push: |P<ast::Item>|) {
- (*self)(ecx, sp, meta_item, item, push)
+ self.clone()(ecx, sp, meta_item, item, push)
}
}
meta_item: &ast::MetaItem,
item: P<ast::Item>)
-> P<ast::Item> {
- (*self)(ecx, span, meta_item, item)
+ self.clone()(ecx, span, meta_item, item)
}
}
span: Span,
token_tree: &[ast::TokenTree])
-> Box<MacResult+'cx> {
- (*self)(ecx, span, token_tree)
+ self.clone()(ecx, span, token_tree)
}
}
ident: ast::Ident,
token_tree: Vec<ast::TokenTree> )
-> Box<MacResult+'cx> {
- (*self)(cx, sp, ident, token_tree)
+ self.clone()(cx, sp, ident, token_tree)
}
}
span: Span
}
+impl Copy for DummyResult {}
+
impl DummyResult {
/// Create a default MacResult that can be anything.
///
let mut call_site = None;
loop {
let expn_info = self.codemap().with_expn_info(expn_id, |ei| {
- ei.map(|ei| (ei.call_site, ei.callee.name.as_slice() == "include"))
+ ei.map(|ei| (ei.call_site, ei.callee.name == "include"))
});
match expn_info {
None => break,
let some = vec!(
self.ident_of("std"),
self.ident_of("option"),
+ self.ident_of("Option"),
self.ident_of("Some"));
self.expr_call_global(sp, some, vec!(expr))
}
let none = self.path_global(sp, vec!(
self.ident_of("std"),
self.ident_of("option"),
+ self.ident_of("Option"),
self.ident_of("None")));
self.expr_path(none)
}
let ok = vec!(
self.ident_of("std"),
self.ident_of("result"),
+ self.ident_of("Result"),
self.ident_of("Ok"));
self.expr_call_global(sp, ok, vec!(expr))
}
let err = vec!(
self.ident_of("std"),
self.ident_of("result"),
+ self.ident_of("Result"),
self.ident_of("Err"));
self.expr_call_global(sp, err, vec!(expr))
}
let some = vec!(
self.ident_of("std"),
self.ident_of("option"),
+ self.ident_of("Option"),
self.ident_of("Some"));
let path = self.path_global(span, some);
self.pat_enum(span, path, vec!(pat))
let some = vec!(
self.ident_of("std"),
self.ident_of("option"),
+ self.ident_of("Option"),
self.ident_of("None"));
let path = self.path_global(span, some);
self.pat_enum(span, path, vec!())
let some = vec!(
self.ident_of("std"),
self.ident_of("result"),
+ self.ident_of("Result"),
self.ident_of("Ok"));
let path = self.path_global(span, some);
self.pat_enum(span, path, vec!(pat))
let some = vec!(
self.ident_of("std"),
self.ident_of("result"),
+ self.ident_of("Result"),
self.ident_of("Err"));
let path = self.path_global(span, some);
self.pat_enum(span, path, vec!(pat))
PartialCmpOp, LtOp, LeOp, GtOp, GeOp,
}
+impl Copy for OrderingOp {}
+
pub fn some_ordering_collapsed(cx: &mut ExtCtxt,
span: Span,
op: OrderingOp,
let ordering = cx.path_global(span,
vec!(cx.ident_of("std"),
cx.ident_of("cmp"),
+ cx.ident_of("Ordering"),
cx.ident_of("Equal")));
let ordering = cx.expr_path(ordering);
let equals_expr = cx.expr_some(span, ordering);
Builds:
let __test = ::std::cmp::PartialOrd::partial_cmp(&self_field1, &other_field1);
- if __test == ::std::option::Some(::std::cmp::Equal) {
+ if __test == ::std::option::Option::Some(::std::cmp::Ordering::Equal) {
let __test = ::std::cmp::PartialOrd::partial_cmp(&self_field2, &other_field2);
- if __test == ::std::option::Some(::std::cmp::Equal) {
+ if __test == ::std::option::Option::Some(::std::cmp::Ordering::Equal) {
...
} else {
__test
false,
|cx, span, old, self_f, other_fs| {
// let __test = new;
- // if __test == Some(::std::cmp::Equal) {
+ // if __test == Some(::std::cmp::Ordering::Equal) {
// old
// } else {
// __test
Builds:
let __test = self_field1.cmp(&other_field2);
- if other == ::std::cmp::Equal {
+ if other == ::std::cmp::Ordering::Equal {
let __test = self_field2.cmp(&other_field2);
- if __test == ::std::cmp::Equal {
+ if __test == ::std::cmp::Ordering::Equal {
...
} else {
__test
false,
|cx, span, old, new| {
// let __test = new;
- // if __test == ::std::cmp::Equal {
+ // if __test == ::std::cmp::Ordering::Equal {
// old
// } else {
// __test
true,
vec!(cx.ident_of("std"),
cx.ident_of("option"),
+ cx.ident_of("Option"),
cx.ident_of("None")),
Vec::new(),
vec!(cx.ty_rptr(sp,
cx.expr_call_global(sp,
vec!(cx.ident_of("std"),
cx.ident_of("option"),
+ cx.ident_of("Option"),
cx.ident_of("Some")),
vec!(cx.expr_str(sp,
token::intern_and_get_ident(
}
// Now create a vector containing all the arguments
- let slicename = self.ecx.ident_of("__args_vec");
- {
- let args = names.into_iter().map(|a| a.unwrap());
- let args = locals.into_iter().chain(args);
- let args = self.ecx.expr_vec_slice(self.fmtsp, args.collect());
- lets.push(self.ecx.stmt_let(self.fmtsp, false, slicename, args));
- }
+ let args = locals.into_iter().chain(names.into_iter().map(|a| a.unwrap()));
// Now create the fmt::Arguments struct with all our locals we created.
let pieces = self.ecx.expr_ident(self.fmtsp, static_str_name);
- let args_slice = self.ecx.expr_ident(self.fmtsp, slicename);
+ let args_slice = self.ecx.expr_vec_slice(self.fmtsp, args.collect());
let (fn_name, fn_args) = if self.all_pieces_simple {
("new", vec![pieces, args_slice])
self.ecx.ident_of("Arguments"),
self.ecx.ident_of(fn_name)), fn_args);
- // We did all the work of making sure that the arguments
- // structure is safe, so we can safely have an unsafe block.
- let result = self.ecx.expr_block(P(ast::Block {
- view_items: Vec::new(),
- stmts: Vec::new(),
- expr: Some(result),
- id: ast::DUMMY_NODE_ID,
- rules: ast::UnsafeBlock(ast::CompilerGenerated),
- span: self.fmtsp,
- }));
- let resname = self.ecx.ident_of("__args");
- lets.push(self.ecx.stmt_let(self.fmtsp, false, resname, result));
- let res = self.ecx.expr_ident(self.fmtsp, resname);
let result = match invocation {
Call(e) => {
let span = e.span;
- self.ecx.expr_call(span, e,
- vec!(self.ecx.expr_addr_of(span, res)))
+ self.ecx.expr_call(span, e, vec![
+ self.ecx.expr_addr_of(span, result)
+ ])
}
MethodCall(e, m) => {
let span = e.span;
- self.ecx.expr_method_call(span, e, m,
- vec!(self.ecx.expr_addr_of(span, res)))
+ self.ecx.expr_method_call(span, e, m, vec![
+ self.ecx.expr_addr_of(span, result)
+ ])
}
};
let body = self.ecx.expr_block(self.ecx.block(self.fmtsp, lets,
IllegalCtxt
}
+impl Copy for SyntaxContext_ {}
+
/// A list of ident->name renamings
pub type RenameList = Vec<(Ident, Name)>;
sp: Span,
tts: &[ast::TokenTree])
-> Box<base::MacResult+'static> {
- let e_param_colons = cx.expr_lit(sp, ast::LitBool(false));
let expanded = expand_parse_call(cx, sp, "parse_ty",
- vec!(e_param_colons), tts);
+ vec![], tts);
base::MacExpr::new(expanded)
}
use std::slice;
-/// This is a list of all known features since the beginning of time. This list
-/// can never shrink, it may only be expanded (in order to prevent old programs
-/// from failing to compile). The status of each feature may change, however.
+// if you change this list without updating src/doc/reference.md, @cmr will be sad
static KNOWN_FEATURES: &'static [(&'static str, Status)] = &[
("globs", Active),
("macro_rules", Active),
("unboxed_closures", Active),
("import_shadowing", Active),
("advanced_slice_patterns", Active),
- ("tuple_indexing", Active),
+ ("tuple_indexing", Accepted),
("associated_types", Active),
("visible_private_types", Active),
("slicing_syntax", Active),
- ("if_let", Active),
- ("while_let", Active),
-
- // if you change this list without updating src/doc/reference.md, cmr will be sad
+ ("if_let", Accepted),
+ ("while_let", Accepted),
// A temporary feature gate used to enable parser extensions needed
// to bootstrap fix for #5723.
("issue_5723_bootstrap", Accepted),
+ // A way to temporarily opt out of opt-in copy. This will *never* be accepted.
+ ("opt_out_copy", Active),
+
// These are used to test this portion of the compiler, they don't actually
// mean anything
("test_accepted_feature", Accepted),
pub import_shadowing: bool,
pub visible_private_types: bool,
pub quote: bool,
+ pub opt_out_copy: bool,
}
+impl Copy for Features {}
+
impl Features {
pub fn new() -> Features {
Features {
import_shadowing: false,
visible_private_types: false,
quote: false,
+ opt_out_copy: false,
}
}
}
}
fn has_feature(&self, feature: &str) -> bool {
- self.features.iter().any(|n| n.as_slice() == feature)
+ self.features.iter().any(|&n| n == feature)
}
}
"unboxed closures are a work-in-progress \
feature with known bugs");
}
- ast::ExprTupField(..) => {
- self.gate_feature("tuple_indexing",
- e.span,
- "tuple indexing is experimental");
- }
- ast::ExprIfLet(..) => {
- self.gate_feature("if_let", e.span,
- "`if let` syntax is experimental");
- }
ast::ExprSlice(..) => {
self.gate_feature("slicing_syntax",
e.span,
"slicing syntax is experimental");
}
- ast::ExprWhileLet(..) => {
- self.gate_feature("while_let", e.span,
- "`while let` syntax is experimental");
- }
_ => {}
}
visit::walk_expr(self, e);
import_shadowing: cx.has_feature("import_shadowing"),
visible_private_types: cx.has_feature("visible_private_types"),
quote: cx.has_feature("quote"),
+ opt_out_copy: cx.has_feature("opt_out_copy"),
},
unknown_features)
}
fn parse_meta_seq(&mut self) -> Vec<P<ast::MetaItem>> {
self.parse_seq(&token::OpenDelim(token::Paren),
&token::CloseDelim(token::Paren),
- seq_sep_trailing_disallowed(token::Comma),
+ seq_sep_trailing_allowed(token::Comma),
|p| p.parse_meta_item()).node
}
pub trailing_sep_allowed: bool
}
-pub fn seq_sep_trailing_disallowed(t: token::Token) -> SeqSep {
- SeqSep {
- sep: Some(t),
- trailing_sep_allowed: false,
- }
-}
pub fn seq_sep_trailing_allowed(t: token::Token) -> SeqSep {
SeqSep {
sep: Some(t),
trailing_sep_allowed: true,
}
}
+
pub fn seq_sep_none() -> SeqSep {
SeqSep {
sep: None,
BlankLine,
}
+impl Copy for CommentStyle {}
+
#[deriving(Clone)]
pub struct Comment {
pub style: CommentStyle,
let mut j = lines.len();
// first line of all-stars should be omitted
if lines.len() > 0 &&
- lines[0].as_slice().chars().all(|c| c == '*') {
+ lines[0].chars().all(|c| c == '*') {
i += 1;
}
- while i < j && lines[i].as_slice().trim().is_empty() {
+ while i < j && lines[i].trim().is_empty() {
i += 1;
}
// like the first, a last line of all stars should be omitted
if j > i && lines[j - 1]
- .as_slice()
.chars()
.skip(1)
.all(|c| c == '*') {
j -= 1;
}
- while j > i && lines[j - 1].as_slice().trim().is_empty() {
+ while j > i && lines[j - 1].trim().is_empty() {
j -= 1;
}
return lines.slice(i, j).iter().map(|x| (*x).clone()).collect();
let mut can_trim = true;
let mut first = true;
for line in lines.iter() {
- for (j, c) in line.as_slice().chars().enumerate() {
+ for (j, c) in line.chars().enumerate() {
if j > i || !"* \t".contains_char(c) {
can_trim = false;
break;
if can_trim {
lines.iter().map(|line| {
- line.as_slice().slice(i + 1, line.len()).to_string()
+ line.slice(i + 1, line.len()).to_string()
}).collect()
} else {
lines
let s1 = match all_whitespace(s.as_slice(), col) {
Some(col) => {
if col < len {
- s.as_slice().slice(col, len).to_string()
+ s.slice(col, len).to_string()
} else {
"".to_string()
}
if is_block_doc_comment(curr_line.as_slice()) {
return
}
- assert!(!curr_line.as_slice().contains_char('\n'));
+ assert!(!curr_line.contains_char('\n'));
lines.push(curr_line);
} else {
let mut level: int = 1;
#[test] fn test_block_doc_comment_1() {
let comment = "/**\n * Test \n ** Test\n * Test\n*/";
let stripped = strip_doc_comment_decoration(comment);
- assert_eq!(stripped, " Test \n* Test\n Test".to_string());
+ assert_eq!(stripped, " Test \n* Test\n Test");
}
#[test] fn test_block_doc_comment_2() {
let comment = "/**\n * Test\n * Test\n*/";
let stripped = strip_doc_comment_decoration(comment);
- assert_eq!(stripped, " Test\n Test".to_string());
+ assert_eq!(stripped, " Test\n Test");
}
#[test] fn test_block_doc_comment_3() {
let comment = "/**\n let a: *int;\n *a = 5;\n*/";
let stripped = strip_doc_comment_decoration(comment);
- assert_eq!(stripped, " let a: *int;\n *a = 5;".to_string());
+ assert_eq!(stripped, " let a: *int;\n *a = 5;");
}
#[test] fn test_block_doc_comment_4() {
let comment = "/*******************\n test\n *********************/";
let stripped = strip_doc_comment_decoration(comment);
- assert_eq!(stripped, " test".to_string());
+ assert_eq!(stripped, " test");
}
#[test] fn test_line_doc_comment() {
let stripped = strip_doc_comment_decoration("/// test");
- assert_eq!(stripped, " test".to_string());
+ assert_eq!(stripped, " test");
let stripped = strip_doc_comment_decoration("///! test");
- assert_eq!(stripped, " test".to_string());
+ assert_eq!(stripped, " test");
let stripped = strip_doc_comment_decoration("// test");
- assert_eq!(stripped, " test".to_string());
+ assert_eq!(stripped, " test");
let stripped = strip_doc_comment_decoration("// test");
- assert_eq!(stripped, " test".to_string());
+ assert_eq!(stripped, " test");
let stripped = strip_doc_comment_decoration("///test");
- assert_eq!(stripped, "test".to_string());
+ assert_eq!(stripped, "test");
let stripped = strip_doc_comment_decoration("///!test");
- assert_eq!(stripped, "test".to_string());
+ assert_eq!(stripped, "test");
let stripped = strip_doc_comment_decoration("//test");
- assert_eq!(stripped, "test".to_string());
+ assert_eq!(stripped, "test");
}
}
/// Calls `f` with a string slice of the source text spanning from `start`
/// up to but excluding `end`.
fn with_str_from_to<T>(&self, start: BytePos, end: BytePos, f: |s: &str| -> T) -> T {
- f(self.filemap.src.as_slice().slice(
+ f(self.filemap.src.slice(
self.byte_offset(start).to_uint(),
self.byte_offset(end).to_uint()))
}
let last_char = self.curr.unwrap();
let next = self.filemap
.src
- .as_slice()
.char_range_at(current_byte_offset);
let byte_offset_diff = next.next - current_byte_offset;
self.pos = self.pos + Pos::from_uint(byte_offset_diff);
pub fn nextch(&self) -> Option<char> {
let offset = self.byte_offset(self.pos).to_uint();
if offset < self.filemap.src.len() {
- Some(self.filemap.src.as_slice().char_at(offset))
+ Some(self.filemap.src.char_at(offset))
} else {
None
}
}
}
+ // SNAP c9f6d69
+ #[allow(unused)]
+ fn old_escape_warning(&mut self, sp: Span) {
+ self.span_diagnostic
+ .span_warn(sp, "\\U00ABCD12 and \\uABCD escapes are deprecated");
+ self.span_diagnostic
+ .span_help(sp, "use \\u{ABCD12} escapes instead");
+ }
+
/// Scan for a single (possibly escaped) byte or char
/// in a byte, (non-raw) byte string, char, or (non-raw) string literal.
/// `start` is the position of `first_source_char`, which is already consumed.
Some(e) => {
return match e {
'n' | 'r' | 't' | '\\' | '\'' | '"' | '0' => true,
- 'x' => self.scan_hex_digits(2u, delim, !ascii_only),
+ 'x' => self.scan_byte_escape(delim, !ascii_only),
'u' if !ascii_only => {
- self.scan_hex_digits(4u, delim, false)
+ if self.curr == Some('{') {
+ self.scan_unicode_escape(delim)
+ } else {
+ let res = self.scan_hex_digits(4u, delim, false);
+ // SNAP c9f6d69
+ //let sp = codemap::mk_sp(escaped_pos, self.last_pos);
+ //self.old_escape_warning(sp);
+ res
+ }
}
'U' if !ascii_only => {
- self.scan_hex_digits(8u, delim, false)
+ let res = self.scan_hex_digits(8u, delim, false);
+ // SNAP c9f6d69
+ //let sp = codemap::mk_sp(escaped_pos, self.last_pos);
+ //self.old_escape_warning(sp);
+ res
}
'\n' if delim == '"' => {
self.consume_whitespace();
true
}
+ /// Scan over a \u{...} escape
+ ///
+ /// At this point, we have already seen the `\` and the `u`; the `{` is the current
+ /// character. We will read at least one digit, and up to six, and pass over the `}`.
+ fn scan_unicode_escape(&mut self, delim: char) -> bool {
+ self.bump(); // past the {
+ let start_bpos = self.last_pos;
+ let mut count: uint = 0;
+ let mut accum_int = 0;
+
+ while !self.curr_is('}') && count <= 6 {
+ let c = match self.curr {
+ Some(c) => c,
+ None => {
+ self.fatal_span_(start_bpos, self.last_pos,
+ "unterminated unicode escape (found EOF)");
+ }
+ };
+ accum_int *= 16;
+ accum_int += c.to_digit(16).unwrap_or_else(|| {
+ if c == delim {
+ self.fatal_span_(self.last_pos, self.pos,
+ "unterminated unicode escape (needed a `}`)");
+ } else {
+ self.fatal_span_char(self.last_pos, self.pos,
+ "illegal character in unicode escape", c);
+ }
+ }) as u32;
+ self.bump();
+ count += 1;
+ }
+
+ if count > 6 {
+ self.fatal_span_(start_bpos, self.last_pos,
+ "overlong unicode escape (can have at most 6 hex digits)");
+ }
+
+ self.bump(); // past the ending }
+
+ let mut valid = count >= 1 && count <= 6;
+ if char::from_u32(accum_int).is_none() {
+ valid = false;
+ }
+
+ if !valid {
+ self.fatal_span_(start_bpos, self.last_pos, "illegal unicode character escape");
+ }
+ valid
+ }
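The scanning rule implemented above (read between one and six hex digits inside `\u{...}` braces, then reject anything that is not a valid Unicode scalar value) can be sketched as a standalone function. This is an illustrative model in modern Rust, not the lexer's actual code; `parse_unicode_escape` and its input convention (starting at the `{`) are assumptions for the sketch:

```rust
// Sketch of the \u{...} scanning rule: 1..=6 hex digits between braces,
// and the value must be a valid Unicode scalar value.
fn parse_unicode_escape(s: &str) -> Option<char> {
    // The input is assumed to start at the `{` that follows `\u`.
    let body = s.strip_prefix('{')?;
    let end = body.find('}')?;
    let digits = &body[..end];
    if digits.is_empty() || digits.len() > 6 {
        return None; // at least one digit, at most six
    }
    let value = u32::from_str_radix(digits, 16).ok()?;
    char::from_u32(value) // None for surrogates and values past U+10FFFF
}

fn main() {
    assert_eq!(parse_unicode_escape("{1F600}"), Some('\u{1F600}'));
    assert_eq!(parse_unicode_escape("{}"), None);       // no digits
    assert_eq!(parse_unicode_escape("{110000}"), None); // beyond U+10FFFF
    println!("ok");
}
```

The real lexer differs in that it reports each failure as a fatal span error rather than returning `None`, but the accept/reject behavior is the same.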
+
/// Scan over a float exponent.
fn scan_float_exponent(&mut self) {
if self.curr_is('e') || self.curr_is('E') {
return token::Byte(id);
}
+ fn scan_byte_escape(&mut self, delim: char, below_0x7f_only: bool) -> bool {
+ self.scan_hex_digits(2, delim, below_0x7f_only)
+ }
+
fn scan_byte_string(&mut self) -> token::Lit {
self.bump();
let start = self.last_pos;
let msg = format!("lexer should have rejected a bad character escape {}", lit);
let msg2 = msg.as_slice();
- let esc: |uint| -> Option<(char, int)> = |len|
+ fn esc(len: uint, lit: &str) -> Option<(char, int)> {
num::from_str_radix(lit.slice(2, len), 16)
.and_then(char::from_u32)
- .map(|x| (x, len as int));
+ .map(|x| (x, len as int))
+ }
+
+ let unicode_escape: || -> Option<(char, int)> = ||
+ if lit.as_bytes()[2] == b'{' {
+ let idx = lit.find('}').expect(msg2);
+ let subslice = lit.slice(3, idx);
+ num::from_str_radix(subslice, 16)
+ .and_then(char::from_u32)
+ .map(|x| (x, subslice.char_len() as int + 4))
+ } else {
+ esc(6, lit)
+ };
// Unicode escapes
return match lit.as_bytes()[1] as char {
- 'x' | 'X' => esc(4),
- 'u' => esc(6),
- 'U' => esc(10),
+ 'x' | 'X' => esc(4, lit),
+ 'u' => unicode_escape(),
+ 'U' => esc(10, lit),
_ => None,
}.expect(msg2);
}
}\
]\
}\
-]".to_string()
+]"
);
}
ObsoleteExternCrateRenaming,
}
+impl Copy for ObsoleteSyntax {}
+
pub trait ParserObsoleteMethods {
/// Reports an obsolete syntax non-fatal error.
fn obsolete(&mut self, sp: Span, kind: ObsoleteSyntax);
use std::num::Float;
use std::rc::Rc;
use std::iter;
+use std::slice;
bitflags! {
flags Restrictions: u8 {
}
}
+impl Copy for Restrictions {}
+
type ItemInfo = (Ident, Item_, Option<Vec<Attribute> >);
/// How to parse a path. There are four different kinds of paths, all of which
LifetimeAndTypesWithColons,
}
+impl Copy for PathParsingMode {}
+
enum ItemOrViewItem {
/// Indicates a failure to parse any kind of item. The attributes are
/// returned.
/// name is not known. This does not change while the parser is descending
/// into modules, and sub-parsers have new values for this name.
pub root_module_name: Option<String>,
+ pub expected_tokens: Vec<TokenType>,
+}
+
+#[deriving(PartialEq, Eq, Clone)]
+pub enum TokenType {
+ Token(token::Token),
+ Operator,
+}
+
+impl TokenType {
+ fn to_string(&self) -> String {
+ match *self {
+ TokenType::Token(ref t) => format!("`{}`", Parser::token_to_string(t)),
+ TokenType::Operator => "an operator".into_string(),
+ }
+ }
}
fn is_plain_ident_or_underscore(t: &token::Token) -> bool {
open_braces: Vec::new(),
owns_directory: true,
root_module_name: None,
+ expected_tokens: Vec::new(),
}
}
/// Expect and consume the token t. Signal an error if
/// the next token is not t.
pub fn expect(&mut self, t: &token::Token) {
- if self.token == *t {
- self.bump();
+ if self.expected_tokens.is_empty() {
+ if self.token == *t {
+ self.bump();
+ } else {
+ let token_str = Parser::token_to_string(t);
+ let this_token_str = self.this_token_to_string();
+ self.fatal(format!("expected `{}`, found `{}`",
+ token_str,
+ this_token_str).as_slice())
+ }
} else {
- let token_str = Parser::token_to_string(t);
- let this_token_str = self.this_token_to_string();
- self.fatal(format!("expected `{}`, found `{}`",
- token_str,
- this_token_str).as_slice())
+ self.expect_one_of(slice::ref_slice(t), &[]);
}
}
pub fn expect_one_of(&mut self,
edible: &[token::Token],
inedible: &[token::Token]) {
- fn tokens_to_string(tokens: &[token::Token]) -> String {
+ fn tokens_to_string(tokens: &[TokenType]) -> String {
let mut i = tokens.iter();
// This might be a sign we need a connect method on Iterator.
let b = i.next()
- .map_or("".to_string(), |t| Parser::token_to_string(t));
- i.fold(b, |b,a| {
- let mut b = b;
- b.push_str("`, `");
- b.push_str(Parser::token_to_string(a).as_slice());
+ .map_or("".into_string(), |t| t.to_string());
+ i.enumerate().fold(b, |mut b, (i, ref a)| {
+ if tokens.len() > 2 && i == tokens.len() - 2 {
+ b.push_str(", or ");
+ } else if tokens.len() == 2 && i == tokens.len() - 2 {
+ b.push_str(" or ");
+ } else {
+ b.push_str(", ");
+ }
+ b.push_str(&*a.to_string());
b
})
}
} else if inedible.contains(&self.token) {
// leave it in the input
} else {
- let mut expected = edible.iter().map(|x| x.clone()).collect::<Vec<_>>();
- expected.push_all(inedible);
+ let mut expected = edible.iter().map(|x| TokenType::Token(x.clone()))
+ .collect::<Vec<_>>();
+ expected.extend(inedible.iter().map(|x| TokenType::Token(x.clone())));
+ expected.push_all(&*self.expected_tokens);
+ expected.sort_by(|a, b| a.to_string().cmp(&b.to_string()));
+ expected.dedup();
let expect = tokens_to_string(expected.as_slice());
let actual = self.this_token_to_string();
self.fatal(
(if expected.len() != 1 {
- (format!("expected one of `{}`, found `{}`",
+ (format!("expected one of {}, found `{}`",
expect,
actual))
} else {
- (format!("expected `{}`, found `{}`",
+ (format!("expected {}, found `{}`",
expect,
actual))
}).as_slice()
spanned(lo, hi, node)
}
+ /// Check if the next token is `tok`, and return `true` if so.
+ ///
+ /// This method will automatically add `tok` to `expected_tokens` if `tok` is not
+ /// encountered.
+ pub fn check(&mut self, tok: &token::Token) -> bool {
+ let is_present = self.token == *tok;
+ if !is_present { self.expected_tokens.push(TokenType::Token(tok.clone())); }
+ is_present
+ }
+
/// Consume token 'tok' if it exists. Returns true if the given
/// token was present, false otherwise.
pub fn eat(&mut self, tok: &token::Token) -> bool {
- let is_present = self.token == *tok;
+ let is_present = self.check(tok);
if is_present { self.bump() }
is_present
}
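The `check`/`eat` refactor above implements a simple error-recovery idea: every failed token test records what was expected, so a later parse error can list all the alternatives tried at that position ("expected one of `,`, `}`, found ...") instead of only the last one. A minimal standalone sketch of the mechanism (the `Parser` here is a toy model, not the real parser; tokens are single chars for brevity):

```rust
// Toy model of expected-token tracking: failed checks accumulate into
// `expected`, which is drained into one combined error message.
struct Parser {
    token: char,
    expected: Vec<String>,
}

impl Parser {
    // Like the real `check`: test without consuming, record on failure.
    fn check(&mut self, tok: char) -> bool {
        let present = self.token == tok;
        if !present {
            self.expected.push(format!("`{}`", tok));
        }
        present
    }

    fn error_message(&self) -> String {
        format!("expected one of {}, found `{}`",
                self.expected.join(", "), self.token)
    }
}

fn main() {
    let mut p = Parser { token: ';', expected: Vec::new() };
    assert!(!p.check(','));
    assert!(!p.check('}'));
    assert_eq!(p.error_message(), "expected one of `,`, `}`, found `;`");
    println!("ok");
}
```

This also explains why `bump` clears `expected_tokens` in the diff: once the parser consumes a token, the expectations recorded at the old position no longer apply.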
// commas in generic parameters, because it can stop either after
// parsing a type or after parsing a comma.
for i in iter::count(0u, 1) {
- if self.token == token::Gt
+ if self.check(&token::Gt)
|| self.token == token::BinOp(token::Shr)
|| self.token == token::Ge
|| self.token == token::BinOpEq(token::Shr) {
}
_ => ()
}
- if sep.trailing_sep_allowed && self.token == *ket { break; }
+ if sep.trailing_sep_allowed && self.check(ket) { break; }
v.push(f(self));
}
return v;
self.span = next.sp;
self.token = next.tok;
self.tokens_consumed += 1u;
+ self.expected_tokens.clear();
}
/// Advance the parser by one token and return the bumped token.
self.parse_proc_type(lifetime_defs)
} else if self.token_is_bare_fn_keyword() || self.token_is_closure_keyword() {
self.parse_ty_bare_fn_or_ty_closure(lifetime_defs)
- } else if self.token == token::ModSep ||
+ } else if self.check(&token::ModSep) ||
self.token.is_ident() ||
self.token.is_path()
{
/// Parses an optional unboxed closure kind (`&:`, `&mut:`, or `:`).
pub fn parse_optional_unboxed_closure_kind(&mut self)
-> Option<UnboxedClosureKind> {
- if self.token == token::BinOp(token::And) &&
+ if self.check(&token::BinOp(token::And)) &&
self.look_ahead(1, |t| t.is_keyword(keywords::Mut)) &&
self.look_ahead(2, |t| *t == token::Colon) {
self.bump();
lifetime_defs: Vec<ast::LifetimeDef>)
-> Vec<ast::LifetimeDef>
{
- if self.eat(&token::Lt) {
+ if self.token == token::Lt {
+ self.bump();
if lifetime_defs.is_empty() {
self.warn("deprecated syntax; use the `for` keyword now \
(e.g. change `fn<'a>` to `for<'a> fn`)");
let lo = self.span.lo;
- let t = if self.token == token::OpenDelim(token::Paren) {
+ let t = if self.check(&token::OpenDelim(token::Paren)) {
self.bump();
// (t) is a parenthesized ty
let mut last_comma = false;
while self.token != token::CloseDelim(token::Paren) {
ts.push(self.parse_ty_sum());
- if self.token == token::Comma {
+ if self.check(&token::Comma) {
last_comma = true;
self.bump();
} else {
_ => self.obsolete(last_span, ObsoleteOwnedType)
}
TyTup(vec![self.parse_ty()])
- } else if self.token == token::BinOp(token::Star) {
+ } else if self.check(&token::BinOp(token::Star)) {
// STAR POINTER (bare pointer?)
self.bump();
TyPtr(self.parse_ptr())
- } else if self.token == token::OpenDelim(token::Bracket) {
+ } else if self.check(&token::OpenDelim(token::Bracket)) {
// VECTOR
self.expect(&token::OpenDelim(token::Bracket));
let t = self.parse_ty_sum();
};
self.expect(&token::CloseDelim(token::Bracket));
t
- } else if self.token == token::BinOp(token::And) ||
+ } else if self.check(&token::BinOp(token::And)) ||
self.token == token::AndAnd {
// BORROWED POINTER
self.expect_and();
self.token_is_closure_keyword() {
// BARE FUNCTION OR CLOSURE
self.parse_ty_bare_fn_or_ty_closure(Vec::new())
- } else if self.token == token::BinOp(token::Or) ||
+ } else if self.check(&token::BinOp(token::Or)) ||
self.token == token::OrOr ||
(self.token == token::Lt &&
self.look_ahead(1, |t| {
TyTypeof(e)
} else if self.eat_keyword(keywords::Proc) {
self.parse_proc_type(Vec::new())
- } else if self.token == token::Lt {
+ } else if self.check(&token::Lt) {
// QUALIFIED PATH `<TYPE as TRAIT_REF>::item`
self.bump();
let self_type = self.parse_ty_sum();
trait_ref: P(trait_ref),
item_name: item_name,
}))
- } else if self.token == token::ModSep ||
+ } else if self.check(&token::ModSep) ||
self.token.is_ident() ||
self.token.is_path() {
// NAMED TYPE
// TYPE TO BE INFERRED
TyInfer
} else {
- let msg = format!("expected type, found token {}", self.token);
+ let this_token_str = self.this_token_to_string();
+ let msg = format!("expected type, found `{}`", this_token_str);
self.fatal(msg.as_slice());
};
}
pub fn maybe_parse_fixed_vstore(&mut self) -> Option<P<ast::Expr>> {
- if self.token == token::Comma &&
+ if self.check(&token::Comma) &&
self.look_ahead(1, |t| *t == token::DotDot) {
self.bump();
self.bump();
token::Gt => { return res; }
token::BinOp(token::Shr) => { return res; }
_ => {
+ let this_token_str = self.this_token_to_string();
let msg = format!("expected `,` or `>` after lifetime \
- name, got: {}",
- self.token);
+ name, found `{}`",
+ this_token_str);
self.fatal(msg.as_slice());
}
}
es.push(self.parse_expr());
self.commit_expr(&**es.last().unwrap(), &[],
&[token::Comma, token::CloseDelim(token::Paren)]);
- if self.token == token::Comma {
+ if self.check(&token::Comma) {
trailing_comma = true;
self.bump();
token::OpenDelim(token::Bracket) => {
self.bump();
- if self.token == token::CloseDelim(token::Bracket) {
+ if self.check(&token::CloseDelim(token::Bracket)) {
// Empty vector.
self.bump();
ex = ExprVec(Vec::new());
} else {
// Nonempty vector.
let first_expr = self.parse_expr();
- if self.token == token::Comma &&
+ if self.check(&token::Comma) &&
self.look_ahead(1, |t| *t == token::DotDot) {
// Repeating vector syntax: [ 0, ..512 ]
self.bump();
let count = self.parse_expr();
self.expect(&token::CloseDelim(token::Bracket));
ex = ExprRepeat(first_expr, count);
- } else if self.token == token::Comma {
+ } else if self.check(&token::Comma) {
// Vector with two or more elements.
self.bump();
let remaining_exprs = self.parse_seq_to_end(
ex = ExprBreak(None);
}
hi = self.span.hi;
- } else if self.token == token::ModSep ||
+ } else if self.check(&token::ModSep) ||
self.token.is_ident() &&
!self.token.is_keyword(keywords::True) &&
!self.token.is_keyword(keywords::False) {
self.parse_path(LifetimeAndTypesWithColons);
// `!`, as an operator, is prefix, so we know this isn't that
- if self.token == token::Not {
+ if self.check(&token::Not) {
// MACRO INVOCATION expression
self.bump();
tts,
EMPTY_CTXT));
}
- if self.token == token::OpenDelim(token::Brace) {
+ if self.check(&token::OpenDelim(token::Brace)) {
// This is a struct literal, unless we're prohibited
// from parsing struct literals here.
if !self.restrictions.contains(RESTRICTION_NO_STRUCT_LITERAL) {
self.restrictions.contains(RESTRICTION_NO_BAR_OP) {
return lhs;
}
+ self.expected_tokens.push(TokenType::Operator);
let cur_opt = self.token.to_binop();
match cur_opt {
/// Parse the RHS of a local variable declaration (e.g. '= 14;')
fn parse_initializer(&mut self) -> Option<P<Expr>> {
- if self.token == token::Eq {
+ if self.check(&token::Eq) {
self.bump();
Some(self.parse_expr())
} else {
let mut pats = Vec::new();
loop {
pats.push(self.parse_pat());
- if self.token == token::BinOp(token::Or) { self.bump(); }
+ if self.check(&token::BinOp(token::Or)) { self.bump(); }
else { return pats; }
};
}
first = false;
} else {
self.expect(&token::Comma);
+
+ if self.token == token::CloseDelim(token::Bracket)
+ && (before_slice || after.len() != 0) {
+ break
+ }
}
if before_slice {
- if self.token == token::DotDot {
+ if self.check(&token::DotDot) {
self.bump();
- if self.token == token::Comma ||
- self.token == token::CloseDelim(token::Bracket) {
+ if self.check(&token::Comma) ||
+ self.check(&token::CloseDelim(token::Bracket)) {
slice = Some(P(ast::Pat {
id: ast::DUMMY_NODE_ID,
node: PatWild(PatWildMulti),
}
let subpat = self.parse_pat();
- if before_slice && self.token == token::DotDot {
+ if before_slice && self.check(&token::DotDot) {
self.bump();
slice = Some(subpat);
before_slice = false;
} else {
self.expect(&token::Comma);
// accept trailing commas
- if self.token == token::CloseDelim(token::Brace) { break }
+ if self.check(&token::CloseDelim(token::Brace)) { break }
}
let lo = self.span.lo;
let hi;
- if self.token == token::DotDot {
+ if self.check(&token::DotDot) {
self.bump();
if self.token != token::CloseDelim(token::Brace) {
let token_str = self.this_token_to_string();
let fieldname = self.parse_ident();
- let (subpat, is_shorthand) = if self.token == token::Colon {
+ let (subpat, is_shorthand) = if self.check(&token::Colon) {
match bind_type {
BindByRef(..) | BindByValue(MutMutable) => {
let token_str = self.this_token_to_string();
token::OpenDelim(token::Paren) => {
// parse (pat,pat,pat,...) as tuple
self.bump();
- if self.token == token::CloseDelim(token::Paren) {
+ if self.check(&token::CloseDelim(token::Paren)) {
self.bump();
pat = PatTup(vec![]);
} else {
let mut fields = vec!(self.parse_pat());
if self.look_ahead(1, |t| *t != token::CloseDelim(token::Paren)) {
- while self.token == token::Comma {
+ while self.check(&token::Comma) {
self.bump();
- if self.token == token::CloseDelim(token::Paren) { break; }
+ if self.check(&token::CloseDelim(token::Paren)) { break; }
fields.push(self.parse_pat());
}
}
// These expressions are limited to literals (possibly
// preceded by unary-minus) or identifiers.
let val = self.parse_literal_maybe_minus();
- if (self.token == token::DotDotDot) &&
+ if (self.check(&token::DotDotDot)) &&
self.look_ahead(1, |t| {
*t != token::Comma && *t != token::CloseDelim(token::Bracket)
}) {
let hi = self.span.hi;
if id.name == token::special_idents::invalid.name {
- if self.token == token::Dot {
+ if self.check(&token::Dot) {
let span = self.span;
let token_string = self.this_token_to_string();
self.span_err(span,
let bounds = self.parse_colon_then_ty_param_bounds();
- let default = if self.token == token::Eq {
+ let default = if self.check(&token::Eq) {
self.bump();
Some(self.parse_ty_sum())
}
(optional_unboxed_closure_kind, args)
}
};
- let output = if self.token == token::RArrow {
+ let output = if self.check(&token::RArrow) {
self.parse_ret_ty()
} else {
Return(P(Ty {
seq_sep_trailing_allowed(token::Comma),
|p| p.parse_fn_block_arg());
- let output = if self.token == token::RArrow {
+ let output = if self.check(&token::RArrow) {
self.parse_ret_ty()
} else {
Return(P(Ty {
token::get_ident(class_name)).as_slice());
}
self.bump();
- } else if self.token == token::OpenDelim(token::Paren) {
+ } else if self.check(&token::OpenDelim(token::Paren)) {
// It's a tuple-like struct.
is_tuple_like = true;
fields = self.parse_unspanned_seq(
fn parse_item_mod(&mut self, outer_attrs: &[Attribute]) -> ItemInfo {
let id_span = self.span;
let id = self.parse_ident();
- if self.token == token::Semi {
+ if self.check(&token::Semi) {
self.bump();
// This mod is in an external file. Let's go get it!
let (m, attrs) = self.eval_src_mod(id, outer_attrs, id_span);
let (maybe_path, ident) = match self.token {
token::Ident(..) => {
let the_ident = self.parse_ident();
- let path = if self.eat(&token::Eq) {
+ let path = if self.token == token::Eq {
+ self.bump();
let path = self.parse_str();
let span = self.span;
self.obsolete(span, ObsoleteExternCrateRenaming);
token::get_ident(ident)).as_slice());
}
kind = StructVariantKind(struct_def);
- } else if self.token == token::OpenDelim(token::Paren) {
+ } else if self.check(&token::OpenDelim(token::Paren)) {
all_nullary = false;
let arg_tys = self.parse_enum_variant_seq(
&token::OpenDelim(token::Paren),
visibility,
maybe_append(attrs, extra_attrs));
return IoviItem(item);
- } else if self.token == token::OpenDelim(token::Brace) {
+ } else if self.check(&token::OpenDelim(token::Brace)) {
return self.parse_item_foreign_mod(lo, opt_abi, visibility, attrs);
}
fn parse_view_path(&mut self) -> P<ViewPath> {
let lo = self.span.lo;
- if self.token == token::OpenDelim(token::Brace) {
+ if self.check(&token::OpenDelim(token::Brace)) {
// use {foo,bar}
let idents = self.parse_unspanned_seq(
&token::OpenDelim(token::Brace),
self.bump();
let path_lo = self.span.lo;
path = vec!(self.parse_ident());
- while self.token == token::ModSep {
+ while self.check(&token::ModSep) {
self.bump();
let id = self.parse_ident();
path.push(id);
token::ModSep => {
// foo::bar or foo::{a,b,c} or foo::*
- while self.token == token::ModSep {
+ while self.check(&token::ModSep) {
self.bump();
match self.token {
loop {
match self.parse_foreign_item(attrs, macros_allowed) {
IoviNone(returned_attrs) => {
- if self.token == token::CloseDelim(token::Brace) {
+ if self.check(&token::CloseDelim(token::Brace)) {
attrs = returned_attrs;
break
}
Shr,
}
+impl Copy for BinOpToken {}
+
/// A delimeter token
#[deriving(Clone, Encodable, Decodable, PartialEq, Eq, Hash, Show)]
pub enum DelimToken {
Brace,
}
+impl Copy for DelimToken {}
+
#[deriving(Clone, Encodable, Decodable, PartialEq, Eq, Hash, Show)]
pub enum IdentStyle {
/// `::` follows the identifier with no whitespace in-between.
}
}
+#[cfg(not(stage0))]
+impl Copy for Lit {}
+
+#[cfg(not(stage0))]
+impl Copy for IdentStyle {}
+
#[allow(non_camel_case_types)]
#[deriving(Clone, Encodable, Decodable, PartialEq, Eq, Hash, Show)]
pub enum Token {
$( $rk_variant, )*
}
+ impl Copy for Keyword {}
+
impl Keyword {
pub fn to_name(&self) -> ast::Name {
match *self {
Inconsistent,
}
+impl Copy for Breaks {}
+
#[deriving(Clone)]
pub struct BreakToken {
offset: int,
blank_space: int
}
+impl Copy for BreakToken {}
+
#[deriving(Clone)]
pub struct BeginToken {
offset: int,
breaks: Breaks
}
+impl Copy for BeginToken {}
+
#[deriving(Clone)]
pub enum Token {
String(string::String, int),
Broken(Breaks),
}
+impl Copy for PrintStackBreak {}
+
pub struct PrintStackElem {
offset: int,
pbreak: PrintStackBreak
}
+impl Copy for PrintStackElem {}
+
static SIZE_INFINITY: int = 0xffff;
pub fn mk_printer(out: Box<io::Writer+'static>, linewidth: uint) -> Printer {
pub struct NoAnn;
+impl Copy for NoAnn {}
+
impl PpAnn for NoAnn {}
pub struct CurrentCommentAndLiteral {
cur_lit: uint,
}
+impl Copy for CurrentCommentAndLiteral {}
+
pub struct State<'a> {
pub s: pp::Printer,
cm: Option<&'a CodeMap>,
comments::BlankLine => {
// We need to do at least one, possibly two hardbreaks.
let is_semi = match self.s.last_token() {
- pp::String(s, _) => ";" == s.as_slice(),
+ pp::String(s, _) => ";" == s,
_ => false
};
if is_semi || self.is_begin() || self.is_end() {
variadic: false
};
let generics = ast_util::empty_generics();
- assert_eq!(&fun_to_string(&decl, ast::NormalFn, abba_ident,
+ assert_eq!(fun_to_string(&decl, ast::NormalFn, abba_ident,
None, &generics),
- &"fn abba()".to_string());
+ "fn abba()");
}
#[test]
});
let varstr = variant_to_string(&var);
- assert_eq!(&varstr,&"pub principal_skinner".to_string());
+ assert_eq!(varstr, "pub principal_skinner");
}
#[test]
use ptr::P;
use util::small_vector::SmallVector;
+enum ShouldFail {
+ No,
+ Yes(Option<InternedString>),
+}
+
struct Test {
span: Span,
path: Vec<ast::Ident> ,
bench: bool,
ignore: bool,
- should_fail: bool
+ should_fail: ShouldFail
}
struct TestCtxt<'a> {
i.attrs.iter().any(|attr| attr.check_name("ignore"))
}
-fn should_fail(i: &ast::Item) -> bool {
- attr::contains_name(i.attrs.as_slice(), "should_fail")
+fn should_fail(i: &ast::Item) -> ShouldFail {
+ match i.attrs.iter().find(|attr| attr.check_name("should_fail")) {
+ Some(attr) => {
+ let msg = attr.meta_item_list()
+ .and_then(|list| list.iter().find(|mi| mi.check_name("expected")))
+ .and_then(|mi| mi.value_str());
+ ShouldFail::Yes(msg)
+ }
+ None => ShouldFail::No,
+ }
}
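The `ShouldFail::Yes(Option<InternedString>)` variant built above feeds a run-time check in the test harness: with an `expected` message, a test passes only if it panicked *and* the panic message contains that substring. A sketch of that predicate (names are illustrative, not the harness's real internals):

```rust
// Model of the #[should_fail(expected = "...")] pass/fail decision.
// `panic_msg` is Some(msg) if the test panicked, None if it ran to completion.
fn failure_matches(panic_msg: Option<&str>, expected: Option<&str>) -> bool {
    match (panic_msg, expected) {
        (None, _) => false,                      // test did not fail at all
        (Some(_), None) => true,                 // plain #[should_fail]: any failure
        (Some(msg), Some(exp)) => msg.contains(exp),
    }
}

fn main() {
    assert!(failure_matches(Some("index out of bounds: the len is 0"),
                            Some("index out of bounds")));
    assert!(!failure_matches(Some("assertion failed"),
                             Some("index out of bounds")));
    assert!(!failure_matches(None, None));
    println!("ok");
}
```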
/*
vec![name_expr]);
let ignore_expr = ecx.expr_bool(span, test.ignore);
- let fail_expr = ecx.expr_bool(span, test.should_fail);
+ let should_fail_path = |name| {
+ ecx.path(span, vec![self_id, test_id, ecx.ident_of("ShouldFail"), ecx.ident_of(name)])
+ };
+ let fail_expr = match test.should_fail {
+ ShouldFail::No => ecx.expr_path(should_fail_path("No")),
+ ShouldFail::Yes(ref msg) => {
+ let path = should_fail_path("Yes");
+ let arg = match *msg {
+ Some(ref msg) => ecx.expr_some(span, ecx.expr_str(span, msg.clone())),
+ None => ecx.expr_none(span),
+ };
+ ecx.expr_call(span, ecx.expr_path(path), vec![arg])
+ }
+ };
// self::test::TestDesc { ... }
let desc_expr = ecx.expr_struct(
FkFnBlock,
}
+impl<'a> Copy for FnKind<'a> {}
+
/// Each method of the Visitor trait is a hook to be potentially
/// overridden. Each method's default implementation recursively visits
/// the substructure of the input via the corresponding `walk` method;
/// Terminal attributes
pub mod attr {
pub use self::Attr::*;
+ use std::kinds::Copy;
/// Terminal attributes for use with term.attr().
///
/// Convenience attribute to set the background color
BackgroundColor(super::color::Color)
}
+
+ impl Copy for Attr {}
}
/// A terminal with similar capabilities to an ANSI Terminal
let entry = open(term.as_slice());
if entry.is_err() {
if os::getenv("MSYSCON").map_or(false, |s| {
- "mintty.exe" == s.as_slice()
+ "mintty.exe" == s
}) {
// msys terminal
return Some(box TerminfoTerminal {out: out,
SeekIfEndPercent(int)
}
+impl Copy for States {}
+
#[deriving(PartialEq)]
enum FormatState {
FormatStateFlags,
FormatStatePrecision
}
+impl Copy for FormatState {}
+
/// Types of parameters a capability can use
#[allow(missing_docs)]
#[deriving(Clone)]
space: bool
}
+impl Copy for Flags {}
+
impl Flags {
fn new() -> Flags {
Flags{ width: 0, precision: 0, alternate: false,
FormatString
}
+impl Copy for FormatOp {}
+
impl FormatOp {
fn from_char(c: char) -> FormatOp {
match c {
}
}
FormatHEX => {
- s = s.as_slice()
- .to_ascii()
+ s = s.to_ascii()
.iter()
.map(|b| b.to_uppercase().as_byte())
.collect();
#[cfg(test)]
mod test {
use super::{expand,Param,Words,Variables,Number};
- use std::result::Ok;
+ use std::result::Result::Ok;
#[test]
fn test_basic_setabf() {
Err(_) => return Err("input not utf-8".to_string()),
};
- let term_names: Vec<String> = names_str.as_slice()
- .split('|')
+ let term_names: Vec<String> = names_str.split('|')
.map(|s| s.to_string())
.collect();
dirs_to_search.push(homedir.unwrap().join(".terminfo"))
}
match getenv("TERMINFO_DIRS") {
- Some(dirs) => for i in dirs.as_slice().split(':') {
+ Some(dirs) => for i in dirs.split(':') {
if i == "" {
dirs_to_search.push(Path::new("/usr/share/terminfo"));
} else {
let p = get_dbpath_for_term(t).expect("no terminfo entry found");
p.as_str().unwrap().to_string()
};
- assert!(x("screen") == "/usr/share/terminfo/s/screen".to_string());
+ assert!(x("screen") == "/usr/share/terminfo/s/screen");
assert!(get_dbpath_for_term("") == None);
setenv("TERMINFO_DIRS", ":");
- assert!(x("screen") == "/usr/share/terminfo/s/screen".to_string());
+ assert!(x("screen") == "/usr/share/terminfo/s/screen");
unsetenv("TERMINFO_DIRS");
}
use self::NamePadding::*;
use self::OutputLocation::*;
+use std::any::{Any, AnyRefExt};
use std::collections::TreeMap;
use stats::Stats;
use getopts::{OptGroup, optflag, optopt};
use regex::Regex;
-use serialize::{json, Decodable};
-use serialize::json::{Json, ToJson};
+use serialize::{json, Decodable, Encodable};
use term::Terminal;
use term::color::{Color, RED, YELLOW, GREEN, CYAN};
MetricChange, Improvement, Regression, LikelyNoise,
StaticTestFn, StaticTestName, DynTestName, DynTestFn,
run_test, test_main, test_main_static, filter_tests,
- parse_opts, StaticBenchFn};
+ parse_opts, StaticBenchFn, ShouldFail};
}
pub mod stats;
}
#[deriving(Clone)]
-enum NamePadding { PadNone, PadOnLeft, PadOnRight }
+enum NamePadding {
+ PadNone,
+ PadOnLeft,
+ PadOnRight,
+}
+
+impl Copy for NamePadding {}
impl TestDesc {
fn padded_name(&self, column_count: uint, align: NamePadding) -> String {
/// This is fed into functions marked with `#[bench]` to allow for
/// set-up & tear-down before running a piece of code repeatedly via a
/// call to `iter`.
+#[deriving(Copy)]
pub struct Bencher {
iterations: u64,
dur: Duration,
pub bytes: u64,
}
+#[deriving(Copy, Clone, Show, PartialEq, Eq, Hash)]
+pub enum ShouldFail {
+ No,
+ Yes(Option<&'static str>)
+}
+
// The definition of a single test. A test runner will run a list of
// these.
#[deriving(Clone, Show, PartialEq, Eq, Hash)]
pub struct TestDesc {
pub name: TestName,
pub ignore: bool,
- pub should_fail: bool,
+ pub should_fail: ShouldFail,
}
#[deriving(Show)]
noise: f64
}
+impl Copy for Metric {}
+
impl Metric {
pub fn new(value: f64, noise: f64) -> Metric {
Metric {value: value, noise: noise}
Regression(f64)
}
+impl Copy for MetricChange {}
+
pub type MetricDiff = TreeMap<String,MetricChange>;
// The default console test runner. It accepts the command line
NeverColor,
}
+impl Copy for ColorConfig {}
+
pub struct TestOpts {
pub filter: Option<Regex>,
pub run_ignored: bool,
fn usage(binary: &str) {
let message = format!("Usage: {} [OPTIONS] [FILTER]", binary);
- println!(r"{usage}
+ println!(r#"{usage}
The FILTER regex is tested against the name of all tests to run, and
only those tests that match are run.
function takes one argument (test::Bencher).
#[should_fail] - This function (also labeled with #[test]) will only pass if
the code causes a failure (an assertion failure or panic!)
+ A message may be provided, which the failure string must
+ contain: #[should_fail(expected = "foo")].
#[ignore] - When applied to a function which is already attributed as a
test, then the test runner will ignore these tests during
normal test runs. Running with --ignored will run these
- tests.",
+ tests."#,
usage = getopts::usage(message.as_slice(),
optgroups().as_slice()));
}
match maybestr {
None => None,
Some(s) => {
- let mut it = s.as_slice().split('.');
+ let mut it = s.split('.');
match (it.next().and_then(from_str::<uint>), it.next().and_then(from_str::<uint>),
it.next()) {
(Some(a), Some(b), None) => {
}
try!(self.write_plain("\nfailures:\n"));
- failures.as_mut_slice().sort();
+ failures.sort();
for name in failures.iter() {
try!(self.write_plain(format!(" {}\n",
name.as_slice()).as_slice()));
let test_a = TestDesc {
name: StaticTestName("a"),
ignore: false,
- should_fail: false
+ should_fail: ShouldFail::No
};
let test_b = TestDesc {
name: StaticTestName("b"),
ignore: false,
- should_fail: false
+ should_fail: ShouldFail::No
};
let mut st = ConsoleTestState {
Pretty(_) => unreachable!()
};
- let apos = s.as_slice().find_str("a").unwrap();
- let bpos = s.as_slice().find_str("b").unwrap();
+ let apos = s.find_str("a").unwrap();
+ let bpos = s.find_str("b").unwrap();
assert!(apos < bpos);
}
let stdout = reader.read_to_end().unwrap().into_iter().collect();
let task_result = result_future.into_inner();
- let test_result = calc_result(&desc, task_result.is_ok());
+ let test_result = calc_result(&desc, task_result);
monitor_ch.send((desc.clone(), test_result, stdout));
})
}
return;
}
StaticBenchFn(benchfn) => {
- let bs = ::bench::benchmark(|harness| benchfn(harness));
+ let bs = ::bench::benchmark(|harness| (benchfn.clone())(harness));
monitor_ch.send((desc, TrBench(bs), Vec::new()));
return;
}
}
}
-fn calc_result(desc: &TestDesc, task_succeeded: bool) -> TestResult {
- if task_succeeded {
- if desc.should_fail { TrFailed }
- else { TrOk }
- } else {
- if desc.should_fail { TrOk }
- else { TrFailed }
- }
-}
-
-
-impl ToJson for Metric {
- fn to_json(&self) -> json::Json {
- let mut map = TreeMap::new();
- map.insert("value".to_string(), json::Json::F64(self.value));
- map.insert("noise".to_string(), json::Json::F64(self.noise));
- json::Json::Object(map)
+fn calc_result(desc: &TestDesc, task_result: Result<(), Box<Any+Send>>) -> TestResult {
+ match (&desc.should_fail, task_result) {
+ (&ShouldFail::No, Ok(())) |
+ (&ShouldFail::Yes(None), Err(_)) => TrOk,
+ (&ShouldFail::Yes(Some(msg)), Err(ref err))
+ if err.downcast_ref::<String>()
+ .map(|e| &**e)
+ .or_else(|| err.downcast_ref::<&'static str>().map(|e| *e))
+ .map(|e| e.contains(msg))
+ .unwrap_or(false) => TrOk,
+ _ => TrFailed,
}
}
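The rewritten `calc_result` above decides a `should_fail` test's outcome by downcasting the panic payload and checking that it contains the expected message. The matching rule can be sketched as a standalone predicate; this is a minimal illustration in modern Rust syntax (the function name and shape are ours, not the library's API — the real code pattern-matches on `ShouldFail` and `downcast_ref` directly):

```rust
// Does a should-fail test pass, given its optional `expected` message
// and the panic message the test actually produced (None = no panic)?
fn failure_matches(expected: Option<&str>, panic_msg: Option<&str>) -> bool {
    match (expected, panic_msg) {
        // No expected message: any failure is accepted.
        (None, Some(_)) => true,
        // Expected message: the panic text must contain it.
        (Some(exp), Some(msg)) => msg.contains(exp),
        // The test did not fail at all, so it cannot pass.
        _ => false,
    }
}

fn main() {
    assert!(failure_matches(None, Some("boom")));
    assert!(failure_matches(Some("out of bounds"),
                            Some("index out of bounds: the len is 0")));
    assert!(!failure_matches(Some("foobar"), Some("an error message")));
    assert!(!failure_matches(Some("x"), None));
}
```

Note that the check is a substring containment, not an equality test, which is why `expected = "error message"` matches a panic of `"an error message"` in the tests below.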
-
impl MetricMap {
pub fn new() -> MetricMap {
pub fn save(&self, p: &Path) -> io::IoResult<()> {
let mut file = try!(File::create(p));
let MetricMap(ref map) = *self;
-
- // FIXME(pcwalton): Yuck.
- let mut new_map = TreeMap::new();
- for (ref key, ref value) in map.iter() {
- new_map.insert(key.to_string(), (*value).clone());
- }
-
- new_map.to_json().to_pretty_writer(&mut file)
+ let mut enc = json::PrettyEncoder::new(&mut file);
+ map.encode(&mut enc)
}
/// Compare against another MetricMap. Optionally compare all
TestDesc, TestDescAndFn, TestOpts, run_test,
Metric, MetricMap, MetricAdded, MetricRemoved,
Improvement, Regression, LikelyNoise,
- StaticTestName, DynTestName, DynTestFn};
+ StaticTestName, DynTestName, DynTestFn, ShouldFail};
use std::io::TempDir;
#[test]
desc: TestDesc {
name: StaticTestName("whatever"),
ignore: true,
- should_fail: false
+ should_fail: ShouldFail::No,
},
testfn: DynTestFn(proc() f()),
};
desc: TestDesc {
name: StaticTestName("whatever"),
ignore: true,
- should_fail: false
+ should_fail: ShouldFail::No,
},
testfn: DynTestFn(proc() f()),
};
desc: TestDesc {
name: StaticTestName("whatever"),
ignore: false,
- should_fail: true
+ should_fail: ShouldFail::Yes(None)
+ },
+ testfn: DynTestFn(proc() f()),
+ };
+ let (tx, rx) = channel();
+ run_test(&TestOpts::new(), false, desc, tx);
+ let (_, res, _) = rx.recv();
+ assert!(res == TrOk);
+ }
+
+ #[test]
+ fn test_should_fail_good_message() {
+ fn f() { panic!("an error message"); }
+ let desc = TestDescAndFn {
+ desc: TestDesc {
+ name: StaticTestName("whatever"),
+ ignore: false,
+ should_fail: ShouldFail::Yes(Some("error message"))
},
testfn: DynTestFn(proc() f()),
};
assert!(res == TrOk);
}
+ #[test]
+ fn test_should_fail_bad_message() {
+ fn f() { panic!("an error message"); }
+ let desc = TestDescAndFn {
+ desc: TestDesc {
+ name: StaticTestName("whatever"),
+ ignore: false,
+ should_fail: ShouldFail::Yes(Some("foobar"))
+ },
+ testfn: DynTestFn(proc() f()),
+ };
+ let (tx, rx) = channel();
+ run_test(&TestOpts::new(), false, desc, tx);
+ let (_, res, _) = rx.recv();
+ assert!(res == TrFailed);
+ }
+
#[test]
fn test_should_fail_but_succeeds() {
fn f() { }
desc: TestDesc {
name: StaticTestName("whatever"),
ignore: false,
- should_fail: true
+ should_fail: ShouldFail::Yes(None)
},
testfn: DynTestFn(proc() f()),
};
desc: TestDesc {
name: StaticTestName("1"),
ignore: true,
- should_fail: false,
+ should_fail: ShouldFail::No,
},
testfn: DynTestFn(proc() {}),
},
desc: TestDesc {
name: StaticTestName("2"),
ignore: false,
- should_fail: false
+ should_fail: ShouldFail::No,
},
testfn: DynTestFn(proc() {}),
});
assert_eq!(filtered.len(), 1);
assert_eq!(filtered[0].desc.name.to_string(),
- "1".to_string());
+ "1");
assert!(filtered[0].desc.ignore == false);
}
desc: TestDesc {
name: DynTestName((*name).clone()),
ignore: false,
- should_fail: false
+ should_fail: ShouldFail::No,
},
testfn: DynTestFn(testfn),
};
desc: TestDesc {
name: DynTestName(name.to_string()),
ignore: false,
- should_fail: false
+ should_fail: ShouldFail::No,
},
testfn: DynTestFn(test_fn)
}
// This constant is derived by smarter statistics brains than me, but it is
// consistent with how R and other packages treat the MAD.
let number = FromPrimitive::from_f64(1.4826).unwrap();
- abs_devs.as_slice().median() * number
+ abs_devs.median() * number
}
fn median_abs_dev_pct(&self) -> T {
/// A record specifying a time value in seconds and nanoseconds.
#[deriving(Clone, PartialEq, Eq, PartialOrd, Ord, Encodable, Decodable, Show)]
-pub struct Timespec { pub sec: i64, pub nsec: i32 }
+pub struct Timespec {
+ pub sec: i64,
+ pub nsec: i32,
+}
+
+impl Copy for Timespec {}
+
/*
* Timespec assumes that pre-epoch Timespecs have negative sec and positive
* nsec fields. Darwin's and Linux's struct timespec functions handle pre-
pub tm_nsec: i32,
}
+impl Copy for Tm {}
+
pub fn empty_tm() -> Tm {
Tm {
tm_sec: 0_i32,
UnexpectedCharacter(char, char),
}
+impl Copy for ParseError {}
+
impl Show for ParseError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match *self {
InvalidFormatSpecifier};
use std::f64;
- use std::result::{Err, Ok};
+ use std::result::Result::{Err, Ok};
use std::time::Duration;
use self::test::Bencher;
debug!("test_ctime: {} {}", utc.asctime(), local.asctime());
- assert_eq!(utc.asctime().to_string(), "Fri Feb 13 23:31:30 2009".to_string());
- assert_eq!(local.asctime().to_string(), "Fri Feb 13 15:31:30 2009".to_string());
+ assert_eq!(utc.asctime().to_string(), "Fri Feb 13 23:31:30 2009");
+ assert_eq!(local.asctime().to_string(), "Fri Feb 13 15:31:30 2009");
}
fn test_ctime() {
debug!("test_ctime: {} {}", utc.ctime(), local.ctime());
- assert_eq!(utc.ctime().to_string(), "Fri Feb 13 15:31:30 2009".to_string());
- assert_eq!(local.ctime().to_string(), "Fri Feb 13 15:31:30 2009".to_string());
+ assert_eq!(utc.ctime().to_string(), "Fri Feb 13 15:31:30 2009");
+ assert_eq!(local.ctime().to_string(), "Fri Feb 13 15:31:30 2009");
}
fn test_strftime() {
let utc = at_utc(time);
let local = at(time);
- assert_eq!(local.strftime("").unwrap().to_string(), "".to_string());
- assert_eq!(local.strftime("%A").unwrap().to_string(), "Friday".to_string());
- assert_eq!(local.strftime("%a").unwrap().to_string(), "Fri".to_string());
- assert_eq!(local.strftime("%B").unwrap().to_string(), "February".to_string());
- assert_eq!(local.strftime("%b").unwrap().to_string(), "Feb".to_string());
- assert_eq!(local.strftime("%C").unwrap().to_string(), "20".to_string());
+ assert_eq!(local.strftime("").unwrap().to_string(), "");
+ assert_eq!(local.strftime("%A").unwrap().to_string(), "Friday");
+ assert_eq!(local.strftime("%a").unwrap().to_string(), "Fri");
+ assert_eq!(local.strftime("%B").unwrap().to_string(), "February");
+ assert_eq!(local.strftime("%b").unwrap().to_string(), "Feb");
+ assert_eq!(local.strftime("%C").unwrap().to_string(), "20");
assert_eq!(local.strftime("%c").unwrap().to_string(),
- "Fri Feb 13 15:31:30 2009".to_string());
- assert_eq!(local.strftime("%D").unwrap().to_string(), "02/13/09".to_string());
- assert_eq!(local.strftime("%d").unwrap().to_string(), "13".to_string());
- assert_eq!(local.strftime("%e").unwrap().to_string(), "13".to_string());
- assert_eq!(local.strftime("%F").unwrap().to_string(), "2009-02-13".to_string());
- assert_eq!(local.strftime("%f").unwrap().to_string(), "000054321".to_string());
- assert_eq!(local.strftime("%G").unwrap().to_string(), "2009".to_string());
- assert_eq!(local.strftime("%g").unwrap().to_string(), "09".to_string());
- assert_eq!(local.strftime("%H").unwrap().to_string(), "15".to_string());
- assert_eq!(local.strftime("%h").unwrap().to_string(), "Feb".to_string());
- assert_eq!(local.strftime("%I").unwrap().to_string(), "03".to_string());
- assert_eq!(local.strftime("%j").unwrap().to_string(), "044".to_string());
- assert_eq!(local.strftime("%k").unwrap().to_string(), "15".to_string());
- assert_eq!(local.strftime("%l").unwrap().to_string(), " 3".to_string());
- assert_eq!(local.strftime("%M").unwrap().to_string(), "31".to_string());
- assert_eq!(local.strftime("%m").unwrap().to_string(), "02".to_string());
- assert_eq!(local.strftime("%n").unwrap().to_string(), "\n".to_string());
- assert_eq!(local.strftime("%P").unwrap().to_string(), "pm".to_string());
- assert_eq!(local.strftime("%p").unwrap().to_string(), "PM".to_string());
- assert_eq!(local.strftime("%R").unwrap().to_string(), "15:31".to_string());
- assert_eq!(local.strftime("%r").unwrap().to_string(), "03:31:30 PM".to_string());
- assert_eq!(local.strftime("%S").unwrap().to_string(), "30".to_string());
- assert_eq!(local.strftime("%s").unwrap().to_string(), "1234567890".to_string());
- assert_eq!(local.strftime("%T").unwrap().to_string(), "15:31:30".to_string());
- assert_eq!(local.strftime("%t").unwrap().to_string(), "\t".to_string());
- assert_eq!(local.strftime("%U").unwrap().to_string(), "06".to_string());
- assert_eq!(local.strftime("%u").unwrap().to_string(), "5".to_string());
- assert_eq!(local.strftime("%V").unwrap().to_string(), "07".to_string());
- assert_eq!(local.strftime("%v").unwrap().to_string(), "13-Feb-2009".to_string());
- assert_eq!(local.strftime("%W").unwrap().to_string(), "06".to_string());
- assert_eq!(local.strftime("%w").unwrap().to_string(), "5".to_string());
+ "Fri Feb 13 15:31:30 2009");
+ assert_eq!(local.strftime("%D").unwrap().to_string(), "02/13/09");
+ assert_eq!(local.strftime("%d").unwrap().to_string(), "13");
+ assert_eq!(local.strftime("%e").unwrap().to_string(), "13");
+ assert_eq!(local.strftime("%F").unwrap().to_string(), "2009-02-13");
+ assert_eq!(local.strftime("%f").unwrap().to_string(), "000054321");
+ assert_eq!(local.strftime("%G").unwrap().to_string(), "2009");
+ assert_eq!(local.strftime("%g").unwrap().to_string(), "09");
+ assert_eq!(local.strftime("%H").unwrap().to_string(), "15");
+ assert_eq!(local.strftime("%h").unwrap().to_string(), "Feb");
+ assert_eq!(local.strftime("%I").unwrap().to_string(), "03");
+ assert_eq!(local.strftime("%j").unwrap().to_string(), "044");
+ assert_eq!(local.strftime("%k").unwrap().to_string(), "15");
+ assert_eq!(local.strftime("%l").unwrap().to_string(), " 3");
+ assert_eq!(local.strftime("%M").unwrap().to_string(), "31");
+ assert_eq!(local.strftime("%m").unwrap().to_string(), "02");
+ assert_eq!(local.strftime("%n").unwrap().to_string(), "\n");
+ assert_eq!(local.strftime("%P").unwrap().to_string(), "pm");
+ assert_eq!(local.strftime("%p").unwrap().to_string(), "PM");
+ assert_eq!(local.strftime("%R").unwrap().to_string(), "15:31");
+ assert_eq!(local.strftime("%r").unwrap().to_string(), "03:31:30 PM");
+ assert_eq!(local.strftime("%S").unwrap().to_string(), "30");
+ assert_eq!(local.strftime("%s").unwrap().to_string(), "1234567890");
+ assert_eq!(local.strftime("%T").unwrap().to_string(), "15:31:30");
+ assert_eq!(local.strftime("%t").unwrap().to_string(), "\t");
+ assert_eq!(local.strftime("%U").unwrap().to_string(), "06");
+ assert_eq!(local.strftime("%u").unwrap().to_string(), "5");
+ assert_eq!(local.strftime("%V").unwrap().to_string(), "07");
+ assert_eq!(local.strftime("%v").unwrap().to_string(), "13-Feb-2009");
+ assert_eq!(local.strftime("%W").unwrap().to_string(), "06");
+ assert_eq!(local.strftime("%w").unwrap().to_string(), "5");
// FIXME (#2350): support locale
- assert_eq!(local.strftime("%X").unwrap().to_string(), "15:31:30".to_string());
+ assert_eq!(local.strftime("%X").unwrap().to_string(), "15:31:30");
// FIXME (#2350): support locale
- assert_eq!(local.strftime("%x").unwrap().to_string(), "02/13/09".to_string());
- assert_eq!(local.strftime("%Y").unwrap().to_string(), "2009".to_string());
- assert_eq!(local.strftime("%y").unwrap().to_string(), "09".to_string());
+ assert_eq!(local.strftime("%x").unwrap().to_string(), "02/13/09");
+ assert_eq!(local.strftime("%Y").unwrap().to_string(), "2009");
+ assert_eq!(local.strftime("%y").unwrap().to_string(), "09");
// FIXME (#2350): support locale
- assert_eq!(local.strftime("%Z").unwrap().to_string(), "".to_string());
- assert_eq!(local.strftime("%z").unwrap().to_string(), "-0800".to_string());
+ assert_eq!(local.strftime("%Z").unwrap().to_string(), "");
+ assert_eq!(local.strftime("%z").unwrap().to_string(), "-0800");
assert_eq!(local.strftime("%+").unwrap().to_string(),
- "2009-02-13T15:31:30-08:00".to_string());
- assert_eq!(local.strftime("%%").unwrap().to_string(), "%".to_string());
+ "2009-02-13T15:31:30-08:00");
+ assert_eq!(local.strftime("%%").unwrap().to_string(), "%");
let invalid_specifiers = ["%E", "%J", "%K", "%L", "%N", "%O", "%o", "%Q", "%q"];
for &sp in invalid_specifiers.iter() {
assert_eq!(local.strftime("%").unwrap_err(), MissingFormatConverter);
assert_eq!(local.strftime("%A %").unwrap_err(), MissingFormatConverter);
- assert_eq!(local.asctime().to_string(), "Fri Feb 13 15:31:30 2009".to_string());
- assert_eq!(local.ctime().to_string(), "Fri Feb 13 15:31:30 2009".to_string());
- assert_eq!(local.rfc822z().to_string(), "Fri, 13 Feb 2009 15:31:30 -0800".to_string());
- assert_eq!(local.rfc3339().to_string(), "2009-02-13T15:31:30-08:00".to_string());
+ assert_eq!(local.asctime().to_string(), "Fri Feb 13 15:31:30 2009");
+ assert_eq!(local.ctime().to_string(), "Fri Feb 13 15:31:30 2009");
+ assert_eq!(local.rfc822z().to_string(), "Fri, 13 Feb 2009 15:31:30 -0800");
+ assert_eq!(local.rfc3339().to_string(), "2009-02-13T15:31:30-08:00");
- assert_eq!(utc.asctime().to_string(), "Fri Feb 13 23:31:30 2009".to_string());
- assert_eq!(utc.ctime().to_string(), "Fri Feb 13 15:31:30 2009".to_string());
- assert_eq!(utc.rfc822().to_string(), "Fri, 13 Feb 2009 23:31:30 GMT".to_string());
- assert_eq!(utc.rfc822z().to_string(), "Fri, 13 Feb 2009 23:31:30 -0000".to_string());
- assert_eq!(utc.rfc3339().to_string(), "2009-02-13T23:31:30Z".to_string());
+ assert_eq!(utc.asctime().to_string(), "Fri Feb 13 23:31:30 2009");
+ assert_eq!(utc.ctime().to_string(), "Fri Feb 13 15:31:30 2009");
+ assert_eq!(utc.rfc822().to_string(), "Fri, 13 Feb 2009 23:31:30 GMT");
+ assert_eq!(utc.rfc822z().to_string(), "Fri, 13 Feb 2009 23:31:30 -0000");
+ assert_eq!(utc.rfc3339().to_string(), "2009-02-13T23:31:30Z");
}
fn test_timespec_eq_ord() {
//! Functions for computing canonical and compatible decompositions for Unicode characters.
-use core::cmp::{Equal, Less, Greater};
-use core::option::{Option, Some, None};
+use core::cmp::Ordering::{Equal, Less, Greater};
+use core::option::Option;
+use core::option::Option::{Some, None};
use core::slice;
use core::slice::SlicePrelude;
use tables::normalization::{canonical_table, compatibility_table, composition_table};
pub const UNICODE_VERSION: (uint, uint, uint) = (7, 0, 0);
fn bsearch_range_table(c: char, r: &'static [(char,char)]) -> bool {
- use core::cmp::{Equal, Less, Greater};
+ use core::cmp::Ordering::{Equal, Less, Greater};
use core::slice::SlicePrelude;
r.binary_search(|&(lo,hi)| {
if lo <= c && c <= hi { Equal }
fn bsearch_range_value_table(c: char, r: &'static [(char, char, u8)]) -> u8 {
- use core::cmp::{Equal, Less, Greater};
+ use core::cmp::Ordering::{Equal, Less, Greater};
use core::slice::SlicePrelude;
use core::slice;
match r.binary_search(|&(lo, hi, _)| {
}
pub mod conversions {
- use core::cmp::{Equal, Less, Greater};
+ use core::cmp::Ordering::{Equal, Less, Greater};
use core::slice::SlicePrelude;
use core::tuple::Tuple2;
- use core::option::{Option, Some, None};
+ use core::option::Option;
+ use core::option::Option::{Some, None};
use core::slice;
pub fn to_lower(c: char) -> char {
}
pub mod charwidth {
- use core::option::{Option, Some, None};
+ use core::option::Option;
+ use core::option::Option::{Some, None};
use core::slice::SlicePrelude;
use core::slice;
fn bsearch_range_value_table(c: char, is_cjk: bool, r: &'static [(char, char, u8, u8)]) -> u8 {
- use core::cmp::{Equal, Less, Greater};
+ use core::cmp::Ordering::{Equal, Less, Greater};
match r.binary_search(|&(lo, hi, _, _)| {
if lo <= c && c <= hi { Equal }
else if hi < c { Less }
pub mod grapheme {
pub use self::GraphemeCat::*;
use core::slice::SlicePrelude;
+ use core::kinds::Copy;
use core::slice;
#[allow(non_camel_case_types)]
GC_Any,
}
+ impl Copy for GraphemeCat {}
+
fn bsearch_range_value_table(c: char, r: &'static [(char, char, GraphemeCat)]) -> GraphemeCat {
- use core::cmp::{Equal, Less, Greater};
+ use core::cmp::Ordering::{Equal, Less, Greater};
match r.binary_search(|&(lo, hi, _)| {
if lo <= c && c <= hi { Equal }
else if hi < c { Less }
use core::iter::{Filter, AdditiveIterator, Iterator, IteratorExt};
use core::iter::{DoubleEndedIterator, DoubleEndedIteratorExt};
use core::kinds::Sized;
-use core::option::{Option, None, Some};
+use core::option::Option;
+use core::option::Option::{None, Some};
use core::str::{CharSplits, StrPrelude};
use u_char::UnicodeChar;
use tables::grapheme::GraphemeCat;
pub struct A;
+ impl Copy for A {}
+
pub fn make() -> B { A }
impl A {
p: i32,
}
pub const THREE: P = P { p: 3 };
+ impl Copy for P {}
}
pub static A: S = S { p: private::THREE };
+
+impl Copy for S {}
+
#[lang = "eh_personality"]
extern fn eh_personality() {}
+
+#[lang="copy"]
+pub trait Copy {}
+
+
pub struct Foo;
+impl Copy for Foo {}
+
impl Foo {
pub fn foo(self, x: &Foo) {
unsafe { COUNT *= 2; }
pub struct Foo;
+impl Copy for Foo {}
+
impl Foo {
pub fn run_trait(self) {
unsafe { COUNT *= 17; }
pub struct Struct;
+impl Copy for Struct {}
+
pub enum Unit {
UnitVariant,
Argument(Struct)
}
+impl Copy for Unit {}
+
pub struct TupleStruct(pub uint, pub &'static str);
+impl Copy for TupleStruct {}
+
// used by the cfail test
pub struct StructWithFields {
foo: int,
}
+impl Copy for StructWithFields {}
+
pub enum EnumWithVariants {
EnumVariant,
EnumVariantArg(int)
}
+
+impl Copy for EnumWithVariants {}
+
// ignore-lexer-test FIXME #15679
use std::os;
-use std::sync::{Arc, Future, Mutex};
+use std::sync::{Arc, Future, Mutex, Condvar};
use std::time::Duration;
use std::uint;
// A poor man's pipe.
-type pipe = Arc<Mutex<Vec<uint>>>;
+type pipe = Arc<(Mutex<Vec<uint>>, Condvar)>;
fn send(p: &pipe, msg: uint) {
- let mut arr = p.lock();
+ let &(ref lock, ref cond) = &**p;
+ let mut arr = lock.lock();
arr.push(msg);
- arr.cond.signal();
+ cond.notify_one();
}
fn recv(p: &pipe) -> uint {
- let mut arr = p.lock();
+ let &(ref lock, ref cond) = &**p;
+ let mut arr = lock.lock();
while arr.is_empty() {
- arr.cond.wait();
+ cond.wait(&arr);
}
arr.pop().unwrap()
}
fn init() -> (pipe,pipe) {
- let m = Arc::new(Mutex::new(Vec::new()));
+ let m = Arc::new((Mutex::new(Vec::new()), Condvar::new()));
((&m).clone(), m)
}
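The hunk above replaces the old `Mutex` with an embedded condition variable (`arr.cond.signal()`) by a tuple of a plain `Mutex` and a separate `Condvar` inside one `Arc`. The same pattern in today's Rust looks like this — a sketch under modern syntax (`usize` for `uint`, `lock()` returning a `Result`), not the 2014 API the diff targets:

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

// The "poor man's pipe": shared queue plus a condvar to wake readers.
type Pipe = Arc<(Mutex<Vec<usize>>, Condvar)>;

fn send(p: &Pipe, msg: usize) {
    let (lock, cond) = &**p;
    lock.lock().unwrap().push(msg);
    cond.notify_one();
}

fn recv(p: &Pipe) -> usize {
    let (lock, cond) = &**p;
    let mut arr = lock.lock().unwrap();
    // Loop to guard against spurious wakeups; wait() atomically
    // releases the lock and reacquires it before returning.
    while arr.is_empty() {
        arr = cond.wait(arr).unwrap();
    }
    arr.pop().unwrap()
}

fn main() {
    let p: Pipe = Arc::new((Mutex::new(Vec::new()), Condvar::new()));
    let q = p.clone();
    let t = thread::spawn(move || send(&q, 7));
    assert_eq!(recv(&p), 7);
    t.join().unwrap();
}
```

The key difference from the deleted code is that `Condvar::wait` consumes and returns the `MutexGuard`, rather than living as a field on the guard itself.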
+++ /dev/null
-// Copyright 2012 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-// This test creates a bunch of tasks that simultaneously send to each
-// other in a ring. The messages should all be basically
-// independent.
-// This is like msgsend-ring-pipes but adapted to use Arcs.
-
-// This also serves as a pipes test, because Arcs are implemented with pipes.
-
-// no-pretty-expanded FIXME #15189
-// ignore-lexer-test FIXME #15679
-
-use std::os;
-use std::sync::{RWLock, Arc, Future};
-use std::time::Duration;
-use std::uint;
-
-// A poor man's pipe.
-type pipe = Arc<RWLock<Vec<uint>>>;
-
-fn send(p: &pipe, msg: uint) {
- let mut arr = p.write();
- arr.push(msg);
- arr.cond.signal();
-}
-fn recv(p: &pipe) -> uint {
- let mut arr = p.write();
- while arr.is_empty() {
- arr.cond.wait();
- }
- arr.pop().unwrap()
-}
-
-fn init() -> (pipe,pipe) {
- let x = Arc::new(RWLock::new(Vec::new()));
- ((&x).clone(), x)
-}
-
-
-fn thread_ring(i: uint, count: uint, num_chan: pipe, num_port: pipe) {
- let mut num_chan = Some(num_chan);
- let mut num_port = Some(num_port);
- // Send/Receive lots of messages.
- for j in range(0u, count) {
- //println!("task %?, iter %?", i, j);
- let num_chan2 = num_chan.take().unwrap();
- let num_port2 = num_port.take().unwrap();
- send(&num_chan2, i * j);
- num_chan = Some(num_chan2);
- let _n = recv(&num_port2);
- //log(error, _n);
- num_port = Some(num_port2);
- };
-}
-
-fn main() {
- let args = os::args();
- let args = if os::getenv("RUST_BENCH").is_some() {
- vec!("".to_string(), "100".to_string(), "10000".to_string())
- } else if args.len() <= 1u {
- vec!("".to_string(), "10".to_string(), "100".to_string())
- } else {
- args.clone().into_iter().collect()
- };
-
- let num_tasks = from_str::<uint>(args[1].as_slice()).unwrap();
- let msg_per_task = from_str::<uint>(args[2].as_slice()).unwrap();
-
- let (mut num_chan, num_port) = init();
-
- let mut p = Some((num_chan, num_port));
- let dur = Duration::span(|| {
- let (mut num_chan, num_port) = p.take().unwrap();
-
- // create the ring
- let mut futures = Vec::new();
-
- for i in range(1u, num_tasks) {
- //println!("spawning %?", i);
- let (new_chan, num_port) = init();
- let num_chan_2 = num_chan.clone();
- let new_future = Future::spawn(proc() {
- thread_ring(i, msg_per_task, num_chan_2, num_port)
- });
- futures.push(new_future);
- num_chan = new_chan;
- };
-
- // do our iteration
- thread_ring(0, msg_per_task, num_chan, num_port);
-
- // synchronize
- for f in futures.iter_mut() {
- let _ = f.get();
- }
- });
-
- // all done, report stats.
- let num_msgs = num_tasks * msg_per_task;
- let rate = (num_msgs as f64) / (dur.num_milliseconds() as f64);
-
- println!("Sent {} messages in {} ms", num_msgs, dur.num_milliseconds());
- println!(" {} messages / second", rate / 1000.0);
- println!(" {} μs / message", 1000000. / rate / 1000.0);
-}
y: f32,
}
+impl Copy for Vec2 {}
+
fn lerp(a: f32, b: f32, v: f32) -> f32 { a * (1.0 - v) + b * v }
fn smooth(v: f32) -> f32 { v * v * (3.0 - 2.0 * v) }
}
}
-enum Color { Red, Yellow, Blue }
+enum Color {
+ Red,
+ Yellow,
+ Blue,
+}
+
+impl Copy for Color {}
+
impl fmt::Show for Color {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
let str = match *self {
color: Color
}
+impl Copy for CreatureInfo {}
+
fn show_color_list(set: Vec<Color>) -> String {
let mut out = String::new();
for col in set.iter() {
p: [i32, .. 16],
}
+impl Copy for P {}
+
struct Perm {
cnt: [i32, .. 16],
fact: [u32, .. 16],
perm: P,
}
+impl Copy for Perm {}
+
impl Perm {
fn new(n: u32) -> Perm {
let mut fact = [1, .. 16];
p: f32,
}
+impl Copy for AminoAcid {}
+
struct RepeatFasta<'a, W:'a> {
alu: &'static str,
out: &'a mut W
fn find(mm: &HashMap<Vec<u8> , uint>, key: String) -> uint {
let key = key.into_ascii().as_slice().to_lowercase().into_string();
match mm.get(key.as_bytes()) {
- option::None => { return 0u; }
- option::Some(&num) => { return num; }
+ option::Option::None => { return 0u; }
+ option::Option::Some(&num) => { return num; }
}
}
// start processing if this is the one
('>', false) => {
match line.as_slice().slice_from(1).find_str("THREE") {
- option::Some(_) => { proc_mode = true; }
- option::None => { }
+ option::Option::Some(_) => { proc_mode = true; }
+ option::Option::None => { }
}
}
#[deriving(PartialEq, PartialOrd, Ord, Eq)]
struct Code(u64);
+impl Copy for Code {}
+
impl Code {
fn hash(&self) -> u64 {
let Code(ret) = *self;
let fd = std::io::File::open(&Path::new("shootout-k-nucleotide.data"));
get_sequence(&mut std::io::BufferedReader::new(fd), ">THREE")
} else {
- get_sequence(&mut std::io::stdin(), ">THREE")
+ get_sequence(&mut *std::io::stdin().lock(), ">THREE")
};
let input = Arc::new(input);
mass: f64,
}
+impl Copy for Planet {}
+
fn advance(bodies: &mut [Planet, ..N_BODIES], dt: f64, steps: int) {
for _ in range(0, steps) {
let mut b_slice = bodies.as_mut_slice();
extern crate getopts;
use std::os;
-use std::result::{Ok, Err};
+use std::result::Result::{Ok, Err};
use std::task;
use std::time::Duration;
return true;
}
- pub fn read(mut reader: BufferedReader<StdReader>) -> Sudoku {
+ pub fn read(mut reader: &mut BufferedReader<StdReader>) -> Sudoku {
/* assert first line is exactly "9,9" */
assert!(reader.read_line().unwrap() == "9,9".to_string());
let mut sudoku = if use_default {
Sudoku::from_vec(&DEFAULT_SUDOKU)
} else {
- Sudoku::read(io::stdin())
+ Sudoku::read(&mut *io::stdin().lock())
};
sudoku.solve();
sudoku.write(&mut io::stdout());
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+fn main() {
+ let x: [int ..3]; //~ ERROR expected one of `(`, `+`, `,`, `::`, or `]`, found `..`
+}
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-#![feature(tuple_indexing)]
-
struct Foo(Box<int>, int);
struct Bar(int, int);
bar2: Bar
}
+impl Copy for Foo {}
+
struct Bar {
int1: int,
int2: int,
}
+impl Copy for Bar {}
+
fn make_foo() -> Box<Foo> { panic!() }
fn borrow_same_field_twice_mut_mut() {
bar2: Bar
}
+impl Copy for Foo {}
+
struct Bar {
int1: int,
int2: int,
}
+impl Copy for Bar {}
+
fn make_foo() -> Foo { panic!() }
fn borrow_same_field_twice_mut_mut() {
+++ /dev/null
-// Copyright 2012 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-enum Either<T, U> { Left(T), Right(U) }
-
- fn f(x: &mut Either<int,f64>, y: &Either<int,f64>) -> int {
- match *y {
- Either::Left(ref z) => {
- *x = Either::Right(1.0);
- *z
- }
- _ => panic!()
- }
- }
-
- fn g() {
- let mut x: Either<int,f64> = Either::Left(3);
- println!("{}", f(&mut x, &x)); //~ ERROR cannot borrow
- }
-
- fn h() {
- let mut x: Either<int,f64> = Either::Left(3);
- let y: &Either<int, f64> = &x;
- let z: &mut Either<int, f64> = &mut x; //~ ERROR cannot borrow
- *z = *y;
- }
-
- fn main() {}
// except according to those terms.
struct A { a: int, b: int }
+
+impl Copy for A {}
+
struct B { a: int, b: Box<int> }
fn var_copy_after_var_borrow() {
//~^ ERROR: cannot assign to immutable captured outer variable in a proc `x`
let s = std::io::stdin();
- proc() { s.lines(); };
+ proc() { s.read_to_end(); };
//~^ ERROR: cannot borrow immutable captured outer variable in a proc `s` as mutable
}
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-# //~ ERROR 11:1: 11:2 error: expected `[`, found `<eof>`
+# //~ ERROR 11:1: 11:2 error: expected one of `!` or `[`, found `<eof>`
struct S;
+impl Copy for S {}
+
impl Index<uint, str> for S {
fn index<'a>(&'a self, _: &uint) -> &'a str {
"hello"
struct T;
+impl Copy for T {}
+
impl Index<uint, Show + 'static> for T {
fn index<'a>(&'a self, idx: &uint) -> &'a (Show + 'static) {
static x: uint = 42;
fn main() {
S[0];
- //~^ ERROR E0161
+ //~^ ERROR cannot move out of dereference
+ //~^^ ERROR E0161
T[0];
//~^ ERROR cannot move out of dereference
//~^^ ERROR E0161
pub fn main() {
let _x: Box<str> = box *"hello world";
//~^ ERROR E0161
+ //~^^ ERROR cannot move out of dereference
let array: &[int] = &[1, 2, 3];
let _x: Box<[int]> = box *array;
//~^ ERROR E0161
+ //~^^ ERROR cannot move out of dereference
}
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-impl Foo; //~ ERROR expected `{`, found `;`
+impl Foo; //~ ERROR expected one of `(`, `+`, `::`, or `{`, found `;`
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-#![feature(macro_rules,if_let)]
+#![feature(macro_rules)]
fn macros() {
macro_rules! foo{
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-// error-pattern:expected `[`, found `vec`
+// error-pattern:expected one of `!` or `[`, found `vec`
mod blade_runner {
#vec[doc(
brief = "Blade Runner is probably the best movie ever",
fn main() {
(|| box *[0u].as_slice())();
- //~^ ERROR cannot move a value of type [uint]
+ //~^ ERROR cannot move out of dereference
+ //~^^ ERROR cannot move a value of type [uint]
}
+++ /dev/null
-// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-extern {
- const FOO: uint; //~ ERROR: unexpected token: `const`
-}
-
-fn main() {}
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-#![feature(tuple_indexing)]
-
struct MyPtr<'a>(&'a mut uint);
impl<'a> Deref<uint> for MyPtr<'a> {
fn deref<'b>(&'b self) -> &'b uint { self.0 }
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+pub trait Foo for Sized? { fn foo<T>(&self, ext_thing: &T); }
+pub trait Bar for Sized?: Foo { }
+impl<T: Foo> Bar for T { }
+
+pub struct Thing;
+impl Foo for Thing {
+ fn foo<T>(&self, _: &T) {}
+}
+
+#[inline(never)] fn foo(b: &Bar) { b.foo(&0u) }
+
+fn main() {
+ let mut thing = Thing;
+ let test: &Bar = &mut thing; //~ ERROR cannot convert to a trait object because trait `Foo`
+ foo(test);
+}
fn main() {
let t = (42i, 42i);
- t.0::<int>; //~ ERROR expected one of `;`, `}`, found `::`
+ t.0::<int>; //~ ERROR expected one of `.`, `;`, `}`, or an operator, found `::`
}
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-#![feature(tuple_indexing)]
-
const TUP: (uint,) = (42,);
fn main() {
--- /dev/null
+// Copyright 2013 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use self::A; //~ ERROR import `A` conflicts with existing submodule
+use self::B; //~ ERROR import `B` conflicts with existing submodule
+mod A {}
+pub mod B {}
+
+mod C {
+ use C::D; //~ ERROR import `D` conflicts with existing submodule
+ mod D {}
+}
+
+fn main() {}
fn main()
{
let x = 3
-} //~ ERROR: expected `;`, found `}`
+} //~ ERROR: expected one of `.`, `;`, or an operator, found `}`
use std::rc::Rc;
fn assert_copy<T:Copy>() { }
+
trait Dummy { }
struct MyStruct {
y: int,
}
+impl Copy for MyStruct {}
+
struct MyNoncopyStruct {
x: Box<char>,
}
#![allow(unused_variables)]
#![allow(non_camel_case_types)]
#![allow(non_upper_case_globals)]
+#![allow(missing_copy_implementations)]
#![deny(dead_code)]
#![crate_type="lib"]
#![feature(globs)]
#![deny(missing_docs)]
#![allow(dead_code)]
+#![allow(missing_copy_implementations)]
//! Some garbage docs for the crate here
#![doc="More garbage"]
+type Typedef = String;
+pub type PubTypedef = String; //~ ERROR: missing documentation
+
struct Foo {
a: int,
b: int,
// except according to those terms.
#![deny(unused_parens)]
-#![feature(if_let,while_let)]
#[deriving(Eq, PartialEq)]
struct X { y: bool }
// everything imported
// Should get errors for both 'Some' and 'None'
-use std::option::{Some, None}; //~ ERROR unused import
- //~^ ERROR unused import
+use std::option::Option::{Some, None}; //~ ERROR unused import
+ //~^ ERROR unused import
use test::A; //~ ERROR unused import
// Be sure that if we just bring some methods into scope that they're also
fn main() {
let a = Vec::new();
match a {
- [1, tail.., tail..] => {}, //~ ERROR: expected `,`, found `..`
+ [1, tail.., tail..] => {}, //~ ERROR: expected one of `!`, `,`, or `@`, found `..`
_ => ()
}
}
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-#![feature(tuple_indexing)]
-
// Test that we correctly compute the move fragments for a fn.
//
// Note that the code below is not actually incorrect; the
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-#![feature(tuple_indexing)]
-
struct Foo(Box<int>);
fn main() {
y: int
}
-impl Cmp, ToString for S { //~ ERROR: expected `{`, found `,`
+impl Cmp, ToString for S { //~ ERROR: expected one of `(`, `+`, `::`, or `{`, found `,`
fn eq(&&other: S) { false }
fn to_string(&self) -> String { "hi".to_string() }
}
pub fn main() {
struct Foo { x: int }
- let mut Foo { x: x } = Foo { x: 3 }; //~ ERROR: expected `;`, found `{`
+ let mut Foo { x: x } = Foo { x: 3 }; //~ ERROR: expected one of `:`, `;`, `=`, or `@`, found `{`
}
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+pub fn main() {
+ let s = "\u{2603"; //~ ERROR unterminated unicode escape (needed a `}`)
+}
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+pub fn main() {
+ let s = "\u{260311111111}"; //~ ERROR overlong unicode escape (can have at most 6 hex digits)
+}
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+pub fn main() {
+ let s = "\u{d805}"; //~ ERROR illegal unicode character escape
+}
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+pub fn main() {
+ let s = "\u{lol}"; //~ ERROR illegal character in unicode escape
+}
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-fn foo(x) { //~ ERROR expected `:`, found `)`
+fn foo(x) { //~ ERROR expected one of `!`, `:`, or `@`, found `)`
}
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+struct CantCopyThis;
+
+struct IWantToCopyThis {
+ but_i_cant: CantCopyThis,
+}
+
+impl Copy for IWantToCopyThis {}
+//~^ ERROR the trait `Copy` may not be implemented for this type
+
+enum CantCopyThisEither {
+ A,
+ B,
+}
+
+enum IWantToCopyThisToo {
+ ButICant(CantCopyThisEither),
+}
+
+impl Copy for IWantToCopyThisToo {}
+//~^ ERROR the trait `Copy` may not be implemented for this type
+
+fn main() {}
+
pub fn main() {
match 22i {
- 0 .. 3 => {} //~ ERROR expected `=>`, found `..`
+ 0 .. 3 => {} //~ ERROR expected one of `...`, `=>`, or `|`, found `..`
_ => {}
}
}
static s: &'static str =
r#"
- "## //~ ERROR expected `;`, found `#`
+ "## //~ ERROR expected one of `.`, `;`, or an operator, found `#`
;
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Test that the recursion limit can be changed. In this case, we have
+// deeply nested types that will fail the `Send` check by overflow
+// when the recursion limit is set very low.
+
+#![feature(macro_rules)]
+#![allow(dead_code)]
+#![recursion_limit="10"]
+
+macro_rules! link {
+ ($id:ident, $t:ty) => {
+ enum $id { $id($t) }
+ }
+}
+
+link!(A,B)
+link!(B,C)
+link!(C,D)
+link!(D,E)
+link!(E,F)
+link!(F,G)
+link!(G,H)
+link!(H,I)
+link!(I,J)
+link!(J,K)
+link!(K,L)
+link!(L,M)
+link!(M,N)
+
+enum N { N(uint) }
+
+fn is_send<T:Send>() { }
+
+fn main() {
+ is_send::<A>();
+ //~^ ERROR overflow evaluating
+ //~^^ NOTE consider adding a `#![recursion_limit="20"]` attribute to your crate
+ //~^^^ NOTE must be implemented
+ //~^^^^ ERROR overflow evaluating
+ //~^^^^^ NOTE consider adding a `#![recursion_limit="20"]` attribute to your crate
+ //~^^^^^^ NOTE must be implemented
+}
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-type closure = Box<lt/fn()>; //~ ERROR expected `,`, found `/`
+type closure = Box<lt/fn()>; //~ ERROR expected one of `(`, `+`, `,`, `::`, or `>`, found `/`
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-enum e = int; //~ ERROR expected `{`, found `=`
+enum e = int; //~ ERROR expected one of `<` or `{`, found `=`
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-type v = [int * 3]; //~ ERROR expected `]`, found `*`
+type v = [int * 3]; //~ ERROR expected one of `(`, `+`, `,`, `::`, or `]`, found `*`
fn removed_moves() {
let mut x = 0;
let y <- x;
- //~^ ERROR expected `;`, found `<-`
+ //~^ ERROR expected one of `!`, `:`, `;`, `=`, or `@`, found `<-`
}
let mut x = 0;
let y = 0;
y <- x;
- //~^ ERROR expected one of `;`, `}`, found `<-`
+ //~^ ERROR expected one of `!`, `.`, `::`, `;`, `{`, `}`, or an operator, found `<-`
}
fn f() {
let v = [mut 1, 2, 3, 4];
//~^ ERROR expected identifier, found keyword `mut`
- //~^^ ERROR expected `]`, found `1`
+ //~^^ ERROR expected one of `!`, `,`, `.`, `::`, `]`, `{`, or an operator, found `1`
}
type v = [mut int];
//~^ ERROR expected identifier, found keyword `mut`
- //~^^ ERROR expected `]`, found `int`
+ //~^^ ERROR expected one of `(`, `+`, `,`, `::`, or `]`, found `int`
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-type bptr = &lifetime/int; //~ ERROR expected `;`, found `/`
+type bptr = &lifetime/int; //~ ERROR expected one of `(`, `+`, `::`, or `;`, found `/`
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-type t = { f: () }; //~ ERROR expected type, found token OpenDelim(Brace)
+type t = { f: () }; //~ ERROR expected type, found `{`
fn f() {
let a_box = box mut 42;
//~^ ERROR expected identifier, found keyword `mut`
- //~^^ ERROR expected `;`, found `42`
+ //~^^ ERROR expected one of `!`, `.`, `::`, `;`, `{`, or an operator, found `42`
}
type mut_box = Box<mut int>;
//~^ ERROR expected identifier, found keyword `mut`
- //~^^ ERROR expected `,`, found `int`
+ //~^^ ERROR expected one of `(`, `+`, `,`, `::`, or `>`, found `int`
let a = S { foo: (), bar: () };
let b = S { foo: () with a };
- //~^ ERROR expected one of `,`, `}`, found `with`
+ //~^ ERROR expected one of `,`, `.`, `}`, or an operator, found `with`
}
+++ /dev/null
-// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-// A zero-dependency test that covers some basic traits, default
-// methods, etc. When mucking about with basic type system stuff I
-// often encounter problems in the iterator trait, so it's useful to
-// have hanging around. -nmatsakis
-
-// error-pattern: requires `start` lang_item
-
-#![no_std]
-#![feature(lang_items)]
-
-#[lang = "sized"]
-pub trait Sized for Sized? {
- // Empty.
-}
-
-pub mod std {
- pub mod clone {
- pub trait Clone {
- fn clone(&self) -> Self;
- }
- }
-}
-
-pub struct ContravariantLifetime<'a>;
-
-impl <'a> ::std::clone::Clone for ContravariantLifetime<'a> {
- #[inline]
- fn clone(&self) -> ContravariantLifetime<'a> {
- match *self { ContravariantLifetime => ContravariantLifetime, }
- }
-}
-
-fn main() { }
+++ /dev/null
-// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-
-// A zero-dependency test that covers some basic traits, default
-// methods, etc. When mucking about with basic type system stuff I
-// often encounter problems in the iterator trait, so it's useful to
-// have hanging around. -nmatsakis
-
-// error-pattern: requires `start` lang_item
-
-#![no_std]
-#![feature(lang_items)]
-
-#[lang = "sized"]
-pub trait Sized for Sized? {
- // Empty.
-}
-
-#[unstable = "Definition may change slightly after trait reform"]
-pub trait PartialEq for Sized? {
- /// This method tests for `self` and `other` values to be equal, and is used by `==`.
- fn eq(&self, other: &Self) -> bool;
-}
-
-#[unstable = "Trait is unstable."]
-impl<'a, Sized? T: PartialEq> PartialEq for &'a T {
- #[inline]
- fn eq(&self, other: & &'a T) -> bool { PartialEq::eq(*self, *other) }
-}
-
-fn main() { }
fn main() {
for x in Foo {
- x: 3 //~ ERROR expected one of `;`, `}`
+ x: 3 //~ ERROR expected one of `!`, `.`, `::`, `;`, `{`, `}`, or an operator, found `:`
}.hi() {
println!("yo");
}
fn main() {
if Foo {
- x: 3 //~ ERROR expected one of `;`, `}`
+ x: 3 //~ ERROR expected one of `!`, `.`, `::`, `;`, `{`, `}`, or an operator, found `:`
}.hi() {
println!("yo");
}
fn main() {
match Foo {
- x: 3 //~ ERROR expected `=>`
+ x: 3 //~ ERROR expected one of `!`, `=>`, `@`, or `|`, found `:`
} {
Foo {
x: x
fn main() {
while Foo {
- x: 3 //~ ERROR expected one of `;`, `}`
+ x: 3 //~ ERROR expected one of `!`, `.`, `::`, `;`, `{`, `}`, or an operator, found `:`
}.hi() {
println!("yo");
}
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+fn main() {
+ let [_, ..,] = [(), ()]; //~ ERROR unexpected token: `]`
+}
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-#![feature(tuple_indexing)]
-
struct Point { x: int, y: int }
struct Empty;
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-#![feature(tuple_indexing)]
-
struct Point(int, int);
fn main() {
pub fn main() {
let mut f = |&mut: x: int, y: int| -> int { x + y };
- let z = f.call_mut((1u, 2)); //~ ERROR not implemented
+ let z = f.call_mut((1u, 2)); //~ ERROR type mismatch
println!("{}", z);
}
pub fn main() {
let f = |&mut: x: uint, y: int| -> int { (x as int) + y };
- let z = call_it(3, f); //~ ERROR not implemented
+ let z = call_it(3, f); //~ ERROR type mismatch
println!("{}", z);
}
// except according to those terms.
extern crate core;
-use core; //~ ERROR unresolved import (maybe you meant `core::*`?)
+use core;
+//~^ ERROR import `core` conflicts with imported crate in this module
fn main() {}
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-#![feature(macro_rules,while_let)]
+#![feature(macro_rules)]
fn macros() {
macro_rules! foo{
use self::ManualDiscriminant::{OneHundred, OneThousand, OneMillion};
use self::SingleVariant::TheOnlyVariant;
+#[deriving(Copy)]
enum AutoDiscriminant {
One,
Two,
Three
}
+#[deriving(Copy)]
enum ManualDiscriminant {
OneHundred = 100,
OneThousand = 1000,
OneMillion = 1000000
}
+#[deriving(Copy)]
enum SingleVariant {
TheOnlyVariant
}
// gdb-command: print none
// gdb-check:$12 = None
+// gdb-command: print some_fat
+// gdb-check:$13 = Some = {"abc"}
+
+// gdb-command: print none_fat
+// gdb-check:$14 = None
+
// gdb-command: print nested_variant1
-// gdb-check:$13 = NestedVariant1 = {NestedStruct = {regular_struct = RegularStruct = {the_first_field = 111, the_second_field = 112.5, the_third_field = true, the_fourth_field = "NestedStructString1"}, tuple_struct = TupleStruct = {113.5, 114}, empty_struct = EmptyStruct, c_style_enum = CStyleEnumVar2, mixed_enum = MixedEnumTupleVar = {115, 116, false}}}
+// gdb-check:$15 = NestedVariant1 = {NestedStruct = {regular_struct = RegularStruct = {the_first_field = 111, the_second_field = 112.5, the_third_field = true, the_fourth_field = "NestedStructString1"}, tuple_struct = TupleStruct = {113.5, 114}, empty_struct = EmptyStruct, c_style_enum = CStyleEnumVar2, mixed_enum = MixedEnumTupleVar = {115, 116, false}}}
// gdb-command: print nested_variant2
-// gdb-check:$14 = NestedVariant2 = {abc = NestedStruct = {regular_struct = RegularStruct = {the_first_field = 117, the_second_field = 118.5, the_third_field = false, the_fourth_field = "NestedStructString10"}, tuple_struct = TupleStruct = {119.5, 120}, empty_struct = EmptyStruct, c_style_enum = CStyleEnumVar3, mixed_enum = MixedEnumStructVar = {field1 = 121.5, field2 = -122}}}
+// gdb-check:$16 = NestedVariant2 = {abc = NestedStruct = {regular_struct = RegularStruct = {the_first_field = 117, the_second_field = 118.5, the_third_field = false, the_fourth_field = "NestedStructString10"}, tuple_struct = TupleStruct = {119.5, 120}, empty_struct = EmptyStruct, c_style_enum = CStyleEnumVar3, mixed_enum = MixedEnumStructVar = {field1 = 121.5, field2 = -122}}}
use self::CStyleEnum::{CStyleEnumVar1, CStyleEnumVar2, CStyleEnumVar3};
use self::MixedEnum::{MixedEnumCStyleVar, MixedEnumTupleVar, MixedEnumStructVar};
let some = Some(110u);
let none: Option<int> = None;
+ let some_fat = Some("abc");
+ let none_fat: Option<&'static str> = None;
let nested_variant1 = NestedVariant1(
NestedStruct {
}
fn zzz() {()}
+
+impl<T:Copy> Copy for Struct<T> {}
+
}
fn zzz() {()}
+
+impl Copy for Enum {}
+
}
fn zzz() {()}
+
+impl<T:Copy> Copy for Struct<T> {}
+
}
fn zzz() {()}
+
+impl Copy for Struct {}
+
}
fn zzz() {()}
+
+impl Copy for Struct {}
+
}
fn zzz() {()}
+
+impl Copy for TupleStruct {}
+
// lldb-command:print void_droid
// lldb-check:[...]$5 = Void
+// lldb-command:print some_str
+// lldb-check:[...]$6 = Some(&str { data_ptr: [...], length: 3 })
+
+// lldb-command:print none_str
+// lldb-check:[...]$7 = None
+
// If a struct has exactly two variants, one of them is empty, and the other one
// contains a non-nullable pointer, then this value is used as the discriminator.
fn main() {
+ let some_str: Option<&'static str> = Some("abc");
+ let none_str: Option<&'static str> = None;
+
let some: Option<&u32> = Some(unsafe { std::mem::transmute(0x12345678u) });
let none: Option<&u32> = None;
}
fn zzz() {()}
+
+impl Copy for Struct {}
+
}
fn zzz() {()}
+
+impl Copy for Struct {}
+
struct S { eax: int }
+impl Copy for S {}
+
fn test3() {
let regs = &Cell::new(S {eax: 0});
match true { true => { } _ => { } }
static __STATIC_FMTSTR: &'static [&'static str] =
(&([("test" as &'static str)] as [&'static str, ..1]) as
&'static [&'static str, ..1]);
- let __args_vec =
- (&([] as [core::fmt::Argument<'_>, ..0]) as
- &[core::fmt::Argument<'_>, ..0]);
- let __args =
- (unsafe {
- ((::std::fmt::Arguments::new as
- unsafe fn(&'static [&'static str], &'a [core::fmt::Argument<'a>]) -> core::fmt::Arguments<'a>)((__STATIC_FMTSTR
- as
- &'static [&'static str]),
- (__args_vec
- as
- &[core::fmt::Argument<'_>, ..0]))
- as core::fmt::Arguments<'_>)
- } as core::fmt::Arguments<'_>);
((::std::fmt::format as
- fn(&core::fmt::Arguments<'_>) -> collections::string::String)((&(__args
+ fn(&core::fmt::Arguments<'_>) -> collections::string::String)((&((::std::fmt::Arguments::new
+ as
+ fn(&'a [&'a str], &'a [core::fmt::Argument<'a>]) -> core::fmt::Arguments<'a>)((__STATIC_FMTSTR
+ as
+ &'static [&'static str]),
+ (&([]
+ as
+ [core::fmt::Argument<'_>, ..0])
+ as
+ &[core::fmt::Argument<'_>, ..0]))
as
core::fmt::Arguments<'_>)
as
// error-pattern:called `Result::unwrap()` on an `Err` value
-use std::result;
+use std::result::Result::Err;
fn main() {
- println!("{}", result::Err::<int,String>("kitty".to_string()).unwrap());
+ println!("{}", Err::<int,String>("kitty".to_string()).unwrap());
}
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// check-stdout
+// error-pattern:task 'test_foo' panicked at
+// compile-flags: --test
+// ignore-pretty: does not work well with `--test`
+
+#[test]
+#[should_fail(expected = "foobar")]
+fn test_foo() {
+ panic!("blah")
+}
+
+
c: i8
}
+impl Copy for Foo {}
+
#[link(name = "test", kind = "static")]
extern {
fn foo(f: Foo) -> Foo;
#![feature(lang_items)]
#![no_std]
+#[lang="copy"]
+trait Copy { }
+
#[lang="sized"]
trait Sized { }
newvar(int)
}
+impl Copy for newtype {}
+
pub fn main() {
// Test that borrowck treats enums with a single variant
struct X<T>(T);
-impl <T:Sync> RequiresShare for X<T> { }
-impl <T:Sync+Send> RequiresRequiresShareAndSend for X<T> { }
-impl <T:Copy> RequiresCopy for X<T> { }
+impl<T:Copy> Copy for X<T> {}
+impl<T:Sync> RequiresShare for X<T> { }
+impl<T:Sync+Send> RequiresRequiresShareAndSend for X<T> { }
+
+impl<T:Copy> RequiresCopy for X<T> { }
pub fn main() { }
}
}
+impl Copy for Foo {}
+
pub fn main() {
let x = Cell::new(Foo { x: 22 });
let _y = x.get();
#[deriving(Show)]
enum cat_type { tuxedo, tabby, tortoiseshell }
+impl Copy for cat_type {}
+
impl cmp::PartialEq for cat_type {
fn eq(&self, other: &cat_type) -> bool {
((*self) as uint) == ((*other) as uint)
pub fn main() {
enum x { foo }
+ impl Copy for x {}
impl ::std::cmp::PartialEq for x {
fn eq(&self, other: &x) -> bool {
(*self) as int == (*other) as int
dummy: uint
}
+impl Copy for MyType {}
+
impl MyTrait for MyType {
fn get(&self) -> MyType { (*self).clone() }
}
Bar = 0xDEADBEE
}
+impl Copy for Foo {}
+
static X: Foo = Foo::Bar;
pub fn main() {
assert!(a != b);
assert!(a < b);
- assert_eq!(a.cmp(&b), ::std::cmp::Less);
+ assert_eq!(a.cmp(&b), ::std::cmp::Ordering::Less);
}
// ignore-test FIXME #11820: & is unreliable in deriving
-use std::cmp::{Less,Equal,Greater};
+use std::cmp::Ordering::{Less,Equal,Greater};
#[deriving(Eq,Ord)]
struct A<'a> {
#[deriving(PartialEq,Eq)]
struct Bar;
+impl Copy for Bar {}
+
trait ToBar {
fn to_bar(&self) -> Bar;
}
#[deriving(PartialEq,Eq)]
struct Bar;
+impl Copy for Bar {}
+
trait ToBar {
fn to_bar(&self) -> Bar;
}
#[deriving(PartialEq,Eq)]
struct Bar;
+impl Copy for Bar {}
+
#[deriving(PartialEq,Eq)]
struct Bar1 {
f: int
}
+impl Copy for Bar1 {}
+
trait ToBar {
fn to_bar(&self) -> Bar;
fn to_val(&self) -> int;
#[deriving(Show)]
enum chan { chan_t, }
+impl Copy for chan {}
+
impl PartialEq for chan {
fn eq(&self, other: &chan) -> bool {
((*self) as uint) == ((*other) as uint)
A = 0
}
static C: E = E::V;
+ impl Copy for E {}
pub fn check() {
assert_eq!(size_of::<E>(), size_of::<$t>());
assert_eq!(E::V as $t, $v as $t);
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-use std::result::{Result,Ok};
+use std::result::Result;
+use std::result::Result::Ok;
static C: Result<(), Box<int>> = Ok(());
struct LM { resize_at: uint, size: uint }
+impl Copy for LM {}
+
enum HashMap<K,V> {
HashMap_(LM)
}
+impl<K,V> Copy for HashMap<K,V> {}
+
fn linear_map<K,V>() -> HashMap<K,V> {
HashMap::HashMap_(LM{
resize_at: 32,
// not exported
enum t { t1, t2, }
+ impl Copy for t {}
+
impl PartialEq for t {
fn eq(&self, other: &t) -> bool {
((*self) as uint) == ((*other) as uint)
struct A { a: int }
+impl Copy for A {}
+
pub fn main() {
let mut x = A {a: 10};
f(&mut x);
struct I { i: int }
+impl Copy for I {}
+
fn test_rec() {
let rs = if true { I {i: 100} } else { I {i: 101} };
assert_eq!(rs.i, 100);
#[deriving(Show)]
enum mood { happy, sad, }
+impl Copy for mood {}
+
impl PartialEq for mood {
fn eq(&self, other: &mood) -> bool {
((*self) as uint) == ((*other) as uint)
// Tests for match as expressions resulting in struct types
struct R { i: int }
+impl Copy for R {}
+
fn test_rec() {
let rs = match true { true => R {i: 100}, _ => panic!() };
assert_eq!(rs.i, 100);
#[deriving(Show)]
enum mood { happy, sad, }
+impl Copy for mood {}
+
impl PartialEq for mood {
fn eq(&self, other: &mood) -> bool {
((*self) as uint) == ((*other) as uint)
struct Point {x: int, y: int, z: int}
+impl Copy for Point {}
+
fn f(p: &Cell<Point>) {
assert!((p.get().z == 12));
p.set(Point {x: 10, y: 11, z: 13});
one: u16, two: u16
}
+impl Copy for TwoU16s {}
+
#[link(name = "rust_test_helpers")]
extern {
pub fn rust_dbg_extern_identity_TwoU16s(v: TwoU16s) -> TwoU16s;
one: u32, two: u32
}
+impl Copy for TwoU32s {}
+
#[link(name = "rust_test_helpers")]
extern {
pub fn rust_dbg_extern_identity_TwoU32s(v: TwoU32s) -> TwoU32s;
one: u64, two: u64
}
+impl Copy for TwoU64s {}
+
#[link(name = "rust_test_helpers")]
extern {
pub fn rust_dbg_extern_identity_TwoU64s(v: TwoU64s) -> TwoU64s;
one: u8, two: u8
}
+impl Copy for TwoU8s {}
+
#[link(name = "rust_test_helpers")]
extern {
pub fn rust_dbg_extern_identity_TwoU8s(v: TwoU8s) -> TwoU8s;
z: u64,
}
+impl Copy for S {}
+
#[link(name = "rust_test_helpers")]
extern {
pub fn get_x(x: S) -> u64;
struct Triple {x: int, y: int, z: int}
+impl Copy for Triple {}
+
pub fn main() {
let mut x = 62;
let mut y = 63;
enum Q { R(Option<uint>) }
+impl Copy for Q {}
+
fn xyzzy(q: Q) -> uint {
match q {
Q::R(S) if S.is_some() => { 0 }
struct Pair { x: int, y: int }
+impl Copy for Pair {}
+
pub fn main() {
let a: int =
match 10i { x if x < 7 => { 1i } x if x < 11 => { 2i } 10 => { 3i } _ => { 4i } };
z: int
}
+impl Copy for XYZ {}
+
fn main() {
let mut connected = HashSet::new();
let mut border = HashSet::new();
}
fn child() {
- for line in io::stdin().lines() {
+ for line in io::stdin().lock().lines() {
println!("{}", line.unwrap());
}
}
fn child() {
io::stdout().write_line("foo").unwrap();
io::stderr().write_line("bar").unwrap();
- assert_eq!(io::stdin().read_line().err().unwrap().kind, io::EndOfFile);
+ assert_eq!(io::stdin().lock().read_line().err().unwrap().kind, io::EndOfFile);
}
fn test() {
// option. This file may not be copied, modified, or distributed
// except according to those terms.
+// ignore-android seems to block forever
+
#![forbid(warnings)]
// Pretty printing tests complain about `use std::prelude::*`
pub fn main() {
let mut stdin = std::io::stdin();
spawn(proc() {
- let _ = stdin.lines();
+ let _ = stdin.read_to_end();
});
}
Baz
}
+impl Copy for Foo {}
+
impl Foo {
fn foo(&self) {
match self {
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#![feature(tuple_indexing)]
+struct S {
+ o: Option<String>
+}
+
+// Make sure we don't reuse the same alloca when matching
+// on a field of a struct or tuple that we reassign in the match body.
+
+fn main() {
+ let mut a = (0i, Some("right".into_string()));
+ let b = match a.1 {
+ Some(v) => {
+ a.1 = Some("wrong".into_string());
+ v
+ }
+ None => String::new()
+ };
+ println!("{}", b);
+ assert_eq!(b, "right");
+
+
+ let mut s = S{ o: Some("right".into_string()) };
+ let b = match s.o {
+ Some(v) => {
+ s.o = Some("wrong".into_string());
+ v
+ }
+ None => String::new(),
+ };
+ println!("{}", b);
+ assert_eq!(b, "right");
+}
trait clam<A> {
fn chowder(&self, y: A);
}
+
struct foo<A> {
x: A,
}
+impl<A:Copy> Copy for foo<A> {}
+
impl<A> clam<A> for foo<A> {
fn chowder(&self, _y: A) {
}
meow: extern "Rust" fn(),
}
+impl Copy for cat {}
+
fn meow() {
println!("meow")
}
struct KittyInfo {kitty: cat}
+impl Copy for KittyInfo {}
+
// Code compiles and runs successfully if we add a + before the first arg
fn nyan(kitty: cat, _kitty_info: KittyInfo) {
(kitty.meow)();
fn lookup(table: json::Object, key: String, default: String) -> String
{
match table.find(&key.to_string()) {
- option::Some(&Json::String(ref s)) => {
+ option::Option::Some(&Json::String(ref s)) => {
s.to_string()
}
- option::Some(value) => {
+ option::Option::Some(value) => {
println!("{} was expected to be a string but is a {}", key, value);
default
}
- option::None => {
+ option::Option::None => {
default
}
}
enum order { hamburger, fries(side), shake }
enum meal { to_go(order), for_here(order) }
+impl Copy for side {}
+impl Copy for order {}
+impl Copy for meal {}
+
fn foo(m: Box<meal>, cond: bool) {
match *m {
meal::to_go(_) => { }
fn tester()
{
let loader: rsrc_loader = proc(_path) {
- result::Ok("more blah".to_string())
+ result::Result::Ok("more blah".to_string())
};
let path = path::Path::new("blah");
y: int,
}
+impl Copy for Point {}
+
// Represents an offset on a canvas. (This has the same structure as a Point,
// but different semantics).
struct Size {
height: int,
}
+impl Copy for Size {}
+
struct Rect {
top_left: Point,
size: Size,
}
+impl Copy for Rect {}
+
// Contains the information needed to do shape rendering via ASCII art.
struct AsciiArt {
width: uint,
y: f64
}
+impl Copy for Vec2 {}
+
// methods we want to export as methods as well as operators
impl Vec2 {
#[inline(always)]
y: f64
}
+impl Copy for Point {}
+
pub enum Shape {
Circle(Point, f64),
Rectangle(Point, Point)
}
+impl Copy for Shape {}
+
impl Shape {
pub fn area(&self, sh: Shape) -> f64 {
match sh {
*/
struct X { vec: &'static [int] }
+
+impl Copy for X {}
+
static V: &'static [X] = &[X { vec: &[1, 2, 3] }];
+
pub fn main() {
for &v in V.iter() {
println!("{}", v.vec);
// ignore-windows #13361
#![no_std]
+#![feature(lang_items)]
extern crate "lang-item-public" as lang_lib;
pub mod glfw {
pub struct InputState(uint);
+ impl Copy for InputState {}
+
pub const RELEASE : InputState = InputState(0);
pub const PRESS : InputState = InputState(1);
pub const REPEAT : InputState = InputState(2);
struct Foo;
+impl Copy for Foo {}
+
trait Bar {
fn foo1(&self);
fn foo2(self);
struct Foo;
+impl Copy for Foo {}
+
impl Foo {
fn foo(self, x: &Foo) {
unsafe { COUNT *= 2; }
*/
struct S<T> { i:u8, t:T }
-impl<T> S<T> { fn unwrap(self) -> T { self.t } }
+
+impl<T:Copy> Copy for S<T> {}
+
+impl<T> S<T> {
+ fn unwrap(self) -> T {
+ self.t
+ }
+}
+
#[deriving(PartialEq, Show)]
struct A((u32, u32));
+
+impl Copy for A {}
+
#[deriving(PartialEq, Show)]
struct B(u64);
+impl Copy for B {}
+
pub fn main() {
static Ca: S<A> = S { i: 0, t: A((13, 104)) };
static Cb: S<B> = S { i: 0, t: B(31337) };
dummy: uint
}
+impl Copy for MyType {}
+
impl MyTrait<uint> for MyType {
fn get(&self) -> uint { self.dummy }
}
dummy: uint
}
+impl Copy for MyType {}
+
impl MyTrait<uint> for MyType {
fn get(&self) -> uint { self.dummy }
}
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+pub fn main() {
+ let s = "\u{2603}";
+ assert_eq!(s, "☃");
+
+ let s = "\u{2a10}\u{2A01}\u{2Aa0}";
+ assert_eq!(s, "⨐⨁⪠");
+
+ let s = "\\{20}";
+ let mut correct_s = String::from_str("\\");
+ correct_s.push_str("{20}");
+ assert_eq!(s, correct_s.as_slice());
+}
struct mytype(Mytype);
-struct Mytype {compute: fn(mytype) -> int, val: int}
+impl Copy for mytype {}
+
+struct Mytype {
+ compute: fn(mytype) -> int,
+ val: int,
+}
+
+impl Copy for Mytype {}
fn compute(i: mytype) -> int {
let mytype(m) = i;
check_option!($e: $T, |ptr| assert!(*ptr == $e));
}};
($e:expr: $T:ty, |$v:ident| $chk:expr) => {{
- assert!(option::None::<$T>.is_none());
+ assert!(option::Option::None::<$T>.is_none());
let e = $e;
- let s_ = option::Some::<$T>(e);
+ let s_ = option::Option::Some::<$T>(e);
let $v = s_.as_ref().unwrap();
$chk
}}
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#![feature(opt_out_copy)]
+
+// Test the opt-out-copy feature guard. This is the same as the
+// "opt-in-copy.rs" test from compile-fail, except that it is using
+// the feature guard, and hence the structures in this file are
+// implicitly copyable, and hence we get no errors. This test can be
+// safely removed once the opt-out-copy "feature" is rejected.
+
+struct CantCopyThis;
+
+struct IWantToCopyThis {
+ but_i_cant: CantCopyThis,
+}
+
+impl Copy for IWantToCopyThis {}
+
+enum CantCopyThisEither {
+ A,
+ B,
+}
+
+enum IWantToCopyThisToo {
+ ButICant(CantCopyThisEither),
+}
+
+impl Copy for IWantToCopyThisToo {}
+
+fn is_copy<T:Copy>() { }
+
+fn main() {
+ is_copy::<CantCopyThis>();
+ is_copy::<CantCopyThisEither>();
+ is_copy::<IWantToCopyThis>();
+ is_copy::<IWantToCopyThisToo>();
+}
+
_f2: int,
}
+impl Copy for Foo {}
+
#[inline(never)]
pub fn foo(f: &mut Foo) -> Foo {
let ret = *f;
y: Y
}
+impl<X:Copy,Y:Copy> Copy for DerefWrapper<X,Y> {}
+
impl<X, Y> DerefWrapper<X, Y> {
fn get_x(self) -> X {
self.x
pub y: Y
}
+ impl<X:Copy,Y:Copy> Copy for DerefWrapperHideX<X,Y> {}
+
impl<X, Y> DerefWrapperHideX<X, Y> {
pub fn new(x: X, y: Y) -> DerefWrapperHideX<X, Y> {
DerefWrapperHideX {
baz: u64
}
+impl Copy for Foo {}
+
pub fn main() {
let foos = [Foo { bar: 1, baz: 2 }, .. 10];
struct Point {x: int, y: int}
+impl Copy for Point {}
+
type rect = (Point, Point);
fn fst(r: rect) -> Point { let (fst, _) = r; return fst; }
struct Rect {x: int, y: int, w: int, h: int}
+impl Copy for Rect {}
+
fn f(r: Rect, x: int, y: int, w: int, h: int) {
assert_eq!(r.x, x);
assert_eq!(r.y, y);
f: int
}
+impl Copy for C {}
+
fn get_v1(a: &A) -> &int {
// Region inferencer must deduce that &v < L2 < L1
let foo = &a.value; // L1
t: &'a int
}
+impl<'a> Copy for Box<'a> {}
+
impl<'a> GetRef<'a> for Box<'a> {
fn get(&self) -> &'a int {
self.t
t: &'a T
}
+impl<'a,T:'a> Copy for Box<'a,T> {}
+
impl<'a,T:Clone> GetRef<'a,T> for Box<'a,T> {
fn get(&self) -> &'a T {
self.t
t: T
}
+impl<T:Copy> Copy for Box<T> {}
+
impl<T:Clone> Get<T> for Box<T> {
fn get(&self) -> T {
self.t.clone()
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Test that region inference correctly links up the regions when a
+// `ref` borrow occurs inside a fn argument.
+
+#![allow(dead_code)]
+
+fn with<'a>(_: |&'a Vec<int>| -> &'a Vec<int>) { }
+
+fn foo() {
+ with(|&ref ints| ints);
+}
+
+fn main() { }
TypeInt,
TypeFunction(Type<'tcx>, Type<'tcx>),
}
+
+impl<'tcx> Copy for TypeStructure<'tcx> {}
+
impl<'tcx> PartialEq for TypeStructure<'tcx> {
fn eq(&self, other: &TypeStructure<'tcx>) -> bool {
match (*self, *other) {
id: uint
}
+impl Copy for NodeId {}
+
type Ast<'ast> = &'ast AstStructure<'ast>;
struct AstStructure<'ast> {
kind: AstKind<'ast>
}
+impl<'ast> Copy for AstStructure<'ast> {}
+
enum AstKind<'ast> {
ExprInt,
ExprVar(uint),
ExprLambda(Ast<'ast>),
}
+impl<'ast> Copy for AstKind<'ast> {}
+
fn compute_types<'tcx,'ast>(tcx: &mut TypeContext<'tcx,'ast>,
ast: Ast<'ast>) -> Type<'tcx>
{
n: int
}
+impl Copy for Value {}
+
impl Value {
fn squared(mut self) -> Value {
self.n *= self.n;
extern crate collections;
use std::collections::HashMap;
-use std::option::Some;
+use std::option::Option::Some;
use std::str::SendStr;
pub fn main() {
extern crate collections;
use self::collections::TreeMap;
-use std::option::Some;
+use std::option::Option::Some;
use std::str::SendStr;
use std::string::ToString;
+++ /dev/null
-// Copyright 2012 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-
-// Exercises a bug in the shape code that was exposed
-// on x86_64: when there is an enum embedded in an
-// interior record which is then itself interior to
-// something else, shape calculations were off.
-
-#[deriving(Clone, Show)]
-enum opt_span {
- //hack (as opposed to option), to make `span` compile
- os_none,
- os_some(Box<Span>),
-}
-
-#[deriving(Clone, Show)]
-struct Span {
- lo: uint,
- hi: uint,
- expanded_from: opt_span,
-}
-
-#[deriving(Clone, Show)]
-struct Spanned<T> {
- data: T,
- span: Span,
-}
-
-type ty_ = uint;
-
-#[deriving(Clone, Show)]
-struct Path_ {
- global: bool,
- idents: Vec<String> ,
- types: Vec<Box<ty>>,
-}
-
-type path = Spanned<Path_>;
-type ty = Spanned<ty_>;
-
-#[deriving(Clone, Show)]
-struct X {
- sp: Span,
- path: path,
-}
-
-pub fn main() {
- let sp: Span = Span {lo: 57451u, hi: 57542u, expanded_from: opt_span::os_none};
- let t: Box<ty> = box Spanned { data: 3u, span: sp.clone() };
- let p_: Path_ = Path_ {
- global: true,
- idents: vec!("hi".to_string()),
- types: vec!(t),
- };
- let p: path = Spanned { data: p_, span: sp.clone() };
- let x = X { sp: sp, path: p };
- println!("{}", x.path.clone());
- println!("{}", x.clone());
-}
#[simd] struct f32x4(f32, f32, f32, f32);
+impl Copy for f32x4 {}
+
fn add<T: ops::Add<T, T>>(lhs: T, rhs: T) -> T {
lhs + rhs
}
#[repr(u8)]
enum Eu { Lu = 0, Hu = 255 }
+
+impl Copy for Eu {}
+
static CLu: Eu = Eu::Lu;
static CHu: Eu = Eu::Hu;
#[repr(i8)]
enum Es { Ls = -128, Hs = 127 }
+
+impl Copy for Es {}
+
static CLs: Es = Es::Ls;
static CHs: Es = Es::Hs;
// ignore-lexer-test FIXME #15883
pub struct Quad { a: u64, b: u64, c: u64, d: u64 }
+
+impl Copy for Quad {}
+
pub struct Floats { a: f64, b: u8, c: f64 }
+impl Copy for Floats {}
+
mod rustrt {
use super::{Floats, Quad};
#[deriving(Show)]
enum foo { large, small, }
+impl Copy for foo {}
+
impl PartialEq for foo {
fn eq(&self, other: &foo) -> bool {
((*self) as uint) == ((*other) as uint)
orange = 8 >> 1
}
+impl Copy for color {}
+
impl PartialEq for color {
fn eq(&self, other: &color) -> bool {
((*self) as uint) == ((*other) as uint)
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// compile-flags: --test
+// ignore-pretty: does not work well with `--test`
+
+#[test]
+#[should_fail(expected = "foo")]
+fn test_foo() {
+ panic!("foo bar")
+}
+
+#[test]
+#[should_fail(expected = "foo")]
+fn test_foo_dynamic() {
+ panic!("{} bar", "foo")
+}
+
+
// option. This file may not be copied, modified, or distributed
// except according to those terms.
+#![feature(advanced_slice_patterns,)]
+
fn f<T,>(_: T,) {}
struct Foo<T,>;
Qux(int,),
}
+#[allow(unused,)]
pub fn main() {
f::<int,>(0i,);
let (_, _,) = (1i, 1i,);
+ let [_, _,] = [1i, 1,];
+ let [_, _, .., _,] = [1i, 1, 1, 1,];
+ let [_, _, _.., _,] = [1i, 1, 1, 1,];
let x: Foo<int,> = Foo::<int,>;
y: int,
}
+impl Copy for Struct {}
+
impl Trait<&'static str> for Struct {
fn f(&self, x: &'static str) {
println!("Hi, {}!", x);
y: int,
}
+impl Copy for Struct {}
+
impl Trait for Struct {
fn f(&self) {
println!("Hi!");
fn isEq(a: &Self, b: &Self) -> bool;
}
+#[deriving(Clone)]
enum Color { cyan, magenta, yellow, black }
+impl Copy for Color {}
+
impl Equal for Color {
fn isEq(a: &Color, b: &Color) -> bool {
match (*a, *b) {
}
}
+#[deriving(Clone)]
enum ColorTree {
leaf(Color),
branch(Box<ColorTree>, Box<ColorTree>)
impl Equal for ColorTree {
fn isEq(a: &ColorTree, b: &ColorTree) -> bool {
match (a, b) {
- (&leaf(x), &leaf(y)) => { Equal::isEq(&x, &y) }
+ (&leaf(ref x), &leaf(ref y)) => {
+ Equal::isEq(&(*x).clone(), &(*y).clone())
+ }
(&branch(ref l1, ref r1), &branch(ref l2, ref r2)) => {
- Equal::isEq(&**l1, &**l2) && Equal::isEq(&**r1, &**r2)
+ Equal::isEq(&(**l1).clone(), &(**l2).clone()) &&
+ Equal::isEq(&(**r1).clone(), &(**r2).clone())
}
_ => { false }
}
fn isEq(&self, a: &Self) -> bool;
}
+#[deriving(Clone)]
enum Color { cyan, magenta, yellow, black }
+impl Copy for Color {}
+
impl Equal for Color {
fn isEq(&self, a: &Color) -> bool {
match (*self, *a) {
}
}
+#[deriving(Clone)]
enum ColorTree {
leaf(Color),
branch(Box<ColorTree>, Box<ColorTree>)
impl Equal for ColorTree {
fn isEq(&self, a: &ColorTree) -> bool {
match (self, a) {
- (&leaf(x), &leaf(y)) => { x.isEq(&y) }
+ (&leaf(ref x), &leaf(ref y)) => { x.isEq(&(*y).clone()) }
(&branch(ref l1, ref r1), &branch(ref l2, ref r2)) => {
- (&**l1).isEq(&**l2) && (&**r1).isEq(&**r2)
+ (*l1).isEq(&(**l2).clone()) && (*r1).isEq(&(**r2).clone())
}
_ => { false }
}
f: int,
}
+impl Copy for Foo {}
+
impl Foo {
fn foo(self: Foo, x: int) -> int {
self.f + x
f: T,
}
+impl<T:Copy> Copy for Bar<T> {}
+
impl<T> Bar<T> {
fn foo(self: Bar<T>, x: int) -> int {
x
#[deriving(Show, PartialEq)]
struct Foo(uint, &'static str);
+
+ impl Copy for Foo {}
+
let x = Foo(42, "forty-two");
let f = bar(x);
assert_eq!(f.call_once(()), x);
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// ignore-test
+
+extern crate libc;
+
+use std::io::process::Command;
+use std::iter::IteratorExt;
+
+use libc::funcs::posix88::unistd;
+
+
+// "ps -A -o pid,sid,command" with GNU ps should output something like this:
+// PID SID COMMAND
+// 1 1 /sbin/init
+// 2 0 [kthreadd]
+// 3 0 [ksoftirqd/0]
+// ...
+// 12562 9237 ./spawn-failure
+// 12563 9237 [spawn-failure] <defunct>
+// 12564 9237 [spawn-failure] <defunct>
+// ...
+// 12592 9237 [spawn-failure] <defunct>
+// 12593 9237 ps -A -o pid,sid,command
+// 12884 12884 /bin/zsh
+// 12922 12922 /bin/zsh
+// ...
+
+#[cfg(unix)]
+fn find_zombies() {
+ // http://man.freebsd.org/ps(1)
+ // http://man7.org/linux/man-pages/man1/ps.1.html
+ #[cfg(not(target_os = "macos"))]
+ const FIELDS: &'static str = "pid,sid,command";
+
+ // https://developer.apple.com/library/mac/documentation/Darwin/
+ // Reference/ManPages/man1/ps.1.html
+ #[cfg(target_os = "macos")]
+ const FIELDS: &'static str = "pid,sess,command";
+
+ let my_sid = unsafe { unistd::getsid(0) };
+
+ let ps_cmd_output = Command::new("ps").args(&["-A", "-o", FIELDS]).output().unwrap();
+ let ps_output = String::from_utf8_lossy(ps_cmd_output.output.as_slice());
+
+ let found = ps_output.split('\n').enumerate().any(|(line_no, line)|
+ 0 < line_no && 0 < line.len() &&
+ my_sid == from_str(line.split(' ').filter(|w| 0 < w.len()).nth(1)
+ .expect("1st column should be Session ID")
+ ).expect("Session ID string into integer") &&
+ line.contains("defunct") && {
+ println!("Zombie child {}", line);
+ true
+ }
+ );
+
+ assert!( ! found, "Found at least one zombie child");
+}
+
+#[cfg(windows)]
+fn find_zombies() { }
+
+fn main() {
+ let too_long = format!("/NoSuchCommand{:0300}", 0u8);
+
+ for _ in range(0u32, 100) {
+ let invalid = Command::new(too_long.as_slice()).spawn();
+ assert!(invalid.is_err());
+ }
+
+ find_zombies();
+}