This makes some use of `x` and `y`, instead of setting them to the same value.
* Data races
* Dereferencing a null/dangling raw pointer
-* Mutating an immutable value/reference without `UnsafeCell`
* Reads of [undef](http://llvm.org/docs/LangRef.html#undefined-values)
(uninitialized) memory
* Breaking the [pointer aliasing
rules](http://llvm.org/docs/LangRef.html#pointer-aliasing-rules)
with raw pointers (a subset of the rules used by C)
+* `&mut` and `&` follow LLVM’s scoped [noalias] model, except if the `&T`
+ contains an `UnsafeCell<U>`. Unsafe code must not violate these aliasing
+ guarantees.
+* Mutating an immutable value/reference without `UnsafeCell<U>`
* Invoking undefined behavior via compiler intrinsics:
* Indexing outside of the bounds of an object with `std::ptr::offset`
(`offset` intrinsic), with
code. Rust's failure system is not compatible with exception handling in
other languages. Unwinding must be caught and handled at FFI boundaries.
+[noalias]: http://llvm.org/docs/LangRef.html#noalias
+
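As a minimal sketch of the `UnsafeCell<U>` rule above (an illustration, not
part of the original list): mutating through a shared reference is only
defined when the data sits inside an `UnsafeCell`, e.g. via `std::cell::Cell`.

```rust
use std::cell::Cell;

fn main() {
    let shared = Cell::new(1);
    let r = &shared;          // a plain `&` reference
    r.set(2);                 // fine: `Cell` wraps its value in an `UnsafeCell`
    assert_eq!(r.get(), 2);

    let x = 1;
    let p = &x as *const i32 as *mut i32;
    // unsafe { *p = 2; }     // undefined behavior: no `UnsafeCell` involved
}
```
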
##### Behaviour not considered unsafe
This is a list of behaviour not considered *unsafe* in Rust terms, but that may
## Lifetime Elision
-Earlier, we mentioned *lifetime elision*, a feature of Rust which allows you to
-not write lifetime annotations in certain circumstances. All references have a
-lifetime, and so if you elide a lifetime (like `&T` instead of `&'a T`), Rust
-will do three things to determine what those lifetimes should be.
+Rust supports powerful local type inference in function bodies, but it’s
+forbidden in item signatures so that the types can be understood from the
+item signature alone. However, for ergonomic reasons, a very restricted
+secondary inference algorithm called “lifetime elision” applies to function
+signatures. It infers only based on the signature components themselves, not
+on the body of the function, infers only lifetime parameters, and does this
+with only three easily memorizable and unambiguous rules. This makes lifetime
+elision a shorthand for writing an item signature, while not hiding away the
+actual types involved as full local inference would if applied to it.
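
For example, these two signatures are equivalent; the second merely elides
the lifetime that the rules infer (an illustrative sketch, not part of the
original text):

```rust
// fully annotated: one input lifetime, shared with the output
fn annotated<'a>(s: &'a str) -> &'a str { s }

// elided: with exactly one input lifetime, it is assigned to the output
fn elided(s: &str) -> &str { s }

fn main() {
    assert_eq!(annotated("hello"), elided("hello"));
}
```
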
When talking about lifetime elision, we use the term *input lifetime* and
*output lifetime*. An *input lifetime* is a lifetime associated with a parameter
}
#[cfg(test)]
-mod tests {
+mod test {
use super::add_two;
#[test]
}
```
-There's a few changes here. The first is the introduction of a `mod tests` with
+There are a few changes here. The first is the introduction of a `mod test` with
a `cfg` attribute. The module allows us to group all of our tests together, and
to define helper functions, if needed, that don't become part of the rest of
our crate. The `cfg` attribute only compiles our test code if we're
}
#[cfg(test)]
-mod tests {
+mod test {
use super::*;
#[test]
libc::realloc(ptr as *mut libc::c_void, size as libc::size_t) as *mut u8
} else {
let new_ptr = allocate(size, align);
- ptr::copy(new_ptr, ptr, cmp::min(size, old_size));
+ ptr::copy(ptr, new_ptr, cmp::min(size, old_size));
deallocate(ptr, old_size, align);
new_ptr
}
#[inline]
unsafe fn insert_kv(&mut self, index: usize, key: K, val: V) -> &mut V {
ptr::copy(
- self.keys_mut().as_mut_ptr().offset(index as isize + 1),
self.keys().as_ptr().offset(index as isize),
+ self.keys_mut().as_mut_ptr().offset(index as isize + 1),
self.len() - index
);
ptr::copy(
- self.vals_mut().as_mut_ptr().offset(index as isize + 1),
self.vals().as_ptr().offset(index as isize),
+ self.vals_mut().as_mut_ptr().offset(index as isize + 1),
self.len() - index
);
#[inline]
unsafe fn insert_edge(&mut self, index: usize, edge: Node<K, V>) {
ptr::copy(
- self.edges_mut().as_mut_ptr().offset(index as isize + 1),
self.edges().as_ptr().offset(index as isize),
+ self.edges_mut().as_mut_ptr().offset(index as isize + 1),
self.len() - index
);
ptr::write(self.edges_mut().get_unchecked_mut(index), edge);
let val = ptr::read(self.vals().get_unchecked(index));
ptr::copy(
- self.keys_mut().as_mut_ptr().offset(index as isize),
self.keys().as_ptr().offset(index as isize + 1),
+ self.keys_mut().as_mut_ptr().offset(index as isize),
self.len() - index - 1
);
ptr::copy(
- self.vals_mut().as_mut_ptr().offset(index as isize),
self.vals().as_ptr().offset(index as isize + 1),
+ self.vals_mut().as_mut_ptr().offset(index as isize),
self.len() - index - 1
);
let edge = ptr::read(self.edges().get_unchecked(index));
ptr::copy(
- self.edges_mut().as_mut_ptr().offset(index as isize),
self.edges().as_ptr().offset(index as isize + 1),
+ self.edges_mut().as_mut_ptr().offset(index as isize),
// index can be == len+1, so do the +1 first to avoid underflow.
(self.len() + 1) - index
);
right._len = self.len() / 2;
let right_offset = self.len() - right.len();
ptr::copy_nonoverlapping(
- right.keys_mut().as_mut_ptr(),
self.keys().as_ptr().offset(right_offset as isize),
+ right.keys_mut().as_mut_ptr(),
right.len()
);
ptr::copy_nonoverlapping(
- right.vals_mut().as_mut_ptr(),
self.vals().as_ptr().offset(right_offset as isize),
+ right.vals_mut().as_mut_ptr(),
right.len()
);
if !self.is_leaf() {
ptr::copy_nonoverlapping(
- right.edges_mut().as_mut_ptr(),
self.edges().as_ptr().offset(right_offset as isize),
+ right.edges_mut().as_mut_ptr(),
right.len() + 1
);
}
ptr::write(self.vals_mut().get_unchecked_mut(old_len), val);
ptr::copy_nonoverlapping(
- self.keys_mut().as_mut_ptr().offset(old_len as isize + 1),
right.keys().as_ptr(),
+ self.keys_mut().as_mut_ptr().offset(old_len as isize + 1),
right.len()
);
ptr::copy_nonoverlapping(
- self.vals_mut().as_mut_ptr().offset(old_len as isize + 1),
right.vals().as_ptr(),
+ self.vals_mut().as_mut_ptr().offset(old_len as isize + 1),
right.len()
);
if !self.is_leaf() {
ptr::copy_nonoverlapping(
- self.edges_mut().as_mut_ptr().offset(old_len as isize + 1),
right.edges().as_ptr(),
+ self.edges_mut().as_mut_ptr().offset(old_len as isize + 1),
right.len() + 1
);
}
if i != j {
let tmp = ptr::read(read_ptr);
- ptr::copy(buf_v.offset(j + 1),
- &*buf_v.offset(j),
+ ptr::copy(&*buf_v.offset(j),
+ buf_v.offset(j + 1),
(i - j) as usize);
- ptr::copy_nonoverlapping(buf_v.offset(j), &tmp, 1);
+ ptr::copy_nonoverlapping(&tmp, buf_v.offset(j), 1);
mem::forget(tmp);
}
}
// j + 1 could be `len` (for the last `i`), but in
// that case, `i == j` so we don't copy. The
// `.offset(j)` is always in bounds.
- ptr::copy(buf_dat.offset(j + 1),
- &*buf_dat.offset(j),
+ ptr::copy(&*buf_dat.offset(j),
+ buf_dat.offset(j + 1),
i - j as usize);
- ptr::copy_nonoverlapping(buf_dat.offset(j), read_ptr, 1);
+ ptr::copy_nonoverlapping(read_ptr, buf_dat.offset(j), 1);
}
}
}
if left == right_start {
// the number remaining in this run.
let elems = (right_end as usize - right as usize) / mem::size_of::<T>();
- ptr::copy_nonoverlapping(out, &*right, elems);
+ ptr::copy_nonoverlapping(&*right, out, elems);
break;
} else if right == right_end {
let elems = (right_start as usize - left as usize) / mem::size_of::<T>();
- ptr::copy_nonoverlapping(out, &*left, elems);
+ ptr::copy_nonoverlapping(&*left, out, elems);
break;
}
} else {
step(&mut left)
};
- ptr::copy_nonoverlapping(out, &*to_copy, 1);
+ ptr::copy_nonoverlapping(&*to_copy, out, 1);
step(&mut out);
}
}
// write the result to `v` in one go, so that there are never two copies
// of the same object in `v`.
unsafe {
- ptr::copy_nonoverlapping(v.as_mut_ptr(), &*buf_dat, len);
+ ptr::copy_nonoverlapping(&*buf_dat, v.as_mut_ptr(), len);
}
// increment the pointer, returning the old pointer.
let ch = self.char_at(idx);
let next = idx + ch.len_utf8();
unsafe {
- ptr::copy(self.vec.as_mut_ptr().offset(idx as isize),
- self.vec.as_ptr().offset(next as isize),
+ ptr::copy(self.vec.as_ptr().offset(next as isize),
+ self.vec.as_mut_ptr().offset(idx as isize),
len - next);
self.vec.set_len(len - (next - idx));
}
let amt = ch.encode_utf8(&mut bits).unwrap();
unsafe {
- ptr::copy(self.vec.as_mut_ptr().offset((idx + amt) as isize),
- self.vec.as_ptr().offset(idx as isize),
+ ptr::copy(self.vec.as_ptr().offset(idx as isize),
+ self.vec.as_mut_ptr().offset((idx + amt) as isize),
len - idx);
- ptr::copy(self.vec.as_mut_ptr().offset(idx as isize),
- bits.as_ptr(),
+ ptr::copy(bits.as_ptr(),
+ self.vec.as_mut_ptr().offset(idx as isize),
amt);
self.vec.set_len(len + amt);
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a> From<&'a str> for String {
+ #[inline]
fn from(s: &'a str) -> String {
s.to_string()
}
}
+#[stable(feature = "rust1", since = "1.0.0")]
+impl<'a> From<&'a str> for Cow<'a, str> {
+ #[inline]
+ fn from(s: &'a str) -> Cow<'a, str> {
+ Cow::Borrowed(s)
+ }
+}
+
+#[stable(feature = "rust1", since = "1.0.0")]
+impl<'a> From<String> for Cow<'a, str> {
+ #[inline]
+ fn from(s: String) -> Cow<'a, str> {
+ Cow::Owned(s)
+ }
+}
+
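As a usage sketch for the two `From` impls added above (hypothetical call
sites, assuming `From` is in scope as in the prelude):

```rust
use std::borrow::Cow;

fn main() {
    let b: Cow<str> = Cow::from("borrowed");             // Cow::Borrowed
    let o: Cow<str> = Cow::from(String::from("owned"));  // Cow::Owned
    assert_eq!(b, "borrowed");
    assert_eq!(o, "owned");
}
```
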
#[stable(feature = "rust1", since = "1.0.0")]
impl Into<Vec<u8>> for String {
fn into(self) -> Vec<u8> {
}
}
-#[stable(feature = "rust1", since = "1.0.0")]
+#[unstable(feature = "into_cow", reason = "may be replaced by `convert::Into`")]
impl IntoCow<'static, str> for String {
#[inline]
fn into_cow(self) -> Cow<'static, str> {
}
}
-#[stable(feature = "rust1", since = "1.0.0")]
+#[unstable(feature = "into_cow", reason = "may be replaced by `convert::Into`")]
impl<'a> IntoCow<'a, str> for &'a str {
#[inline]
fn into_cow(self) -> Cow<'a, str> {
/// Creates a vector by copying the elements from a raw pointer.
///
- /// This function will copy `elts` contiguous elements starting at `ptr` into a new allocation
- /// owned by the returned `Vec<T>`. The elements of the buffer are copied into the vector
- /// without cloning, as if `ptr::read()` were called on them.
+ /// This function will copy `elts` contiguous elements starting at `ptr`
+ /// into a new allocation owned by the returned `Vec<T>`. The elements of
+ /// the buffer are copied into the vector without cloning, as if
+ /// `ptr::read()` were called on them.
#[inline]
#[unstable(feature = "collections",
reason = "may be better expressed via composition")]
pub unsafe fn from_raw_buf(ptr: *const T, elts: usize) -> Vec<T> {
let mut dst = Vec::with_capacity(elts);
dst.set_len(elts);
- ptr::copy_nonoverlapping(dst.as_mut_ptr(), ptr, elts);
+ ptr::copy_nonoverlapping(ptr, dst.as_mut_ptr(), elts);
dst
}
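
A short usage sketch for `from_raw_buf` (hedged: the function was unstable,
behind the `collections` feature, at the time):

```rust
#![feature(collections)]

fn main() {
    let src = [1u16, 2, 3];
    // copies `src.len()` elements starting at `src.as_ptr()`, as if by `ptr::read`
    let v = unsafe { Vec::from_raw_buf(src.as_ptr(), src.len()) };
    assert_eq!(&v[..], &src[..]);
}
```
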
self.cap
}
- /// Reserves capacity for at least `additional` more elements to be inserted in the given
- /// `Vec<T>`. The collection may reserve more space to avoid frequent reallocations.
+ /// Reserves capacity for at least `additional` more elements to be inserted
+ /// in the given `Vec<T>`. The collection may reserve more space to avoid
+ /// frequent reallocations.
///
/// # Panics
///
let p = self.as_mut_ptr().offset(index as isize);
// Shift everything over to make space. (Duplicating the
// `index`th element into two consecutive places.)
- ptr::copy(p.offset(1), &*p, len - index);
+ ptr::copy(&*p, p.offset(1), len - index);
// Write it in, overwriting the first copy of the `index`th
// element.
ptr::write(&mut *p, element);
ret = ptr::read(ptr);
// Shift everything down to fill in that spot.
- ptr::copy(ptr, &*ptr.offset(1), len - index - 1);
+ ptr::copy(&*ptr.offset(1), ptr, len - index - 1);
}
self.set_len(len - 1);
ret
let len = self.len();
unsafe {
ptr::copy_nonoverlapping(
- self.get_unchecked_mut(len),
other.as_ptr(),
+ self.get_unchecked_mut(len),
other.len());
}
other.set_len(other_len);
ptr::copy_nonoverlapping(
- other.as_mut_ptr(),
self.as_ptr().offset(at as isize),
+ other.as_mut_ptr(),
other.len());
}
other
debug_assert!(src + len <= self.cap, "dst={} src={} len={} cap={}", dst, src, len,
self.cap);
ptr::copy(
- self.ptr.offset(dst as isize),
self.ptr.offset(src as isize),
+ self.ptr.offset(dst as isize),
len);
}
debug_assert!(src + len <= self.cap, "dst={} src={} len={} cap={}", dst, src, len,
self.cap);
ptr::copy_nonoverlapping(
- self.ptr.offset(dst as isize),
self.ptr.offset(src as isize),
+ self.ptr.offset(dst as isize),
len);
}
}
// `at` lies in the first half.
let amount_in_first = first_len - at;
- ptr::copy_nonoverlapping(*other.ptr,
- first_half.as_ptr().offset(at as isize),
+ ptr::copy_nonoverlapping(first_half.as_ptr().offset(at as isize),
+ *other.ptr,
amount_in_first);
// just take all of the second half.
- ptr::copy_nonoverlapping(other.ptr.offset(amount_in_first as isize),
- second_half.as_ptr(),
+ ptr::copy_nonoverlapping(second_half.as_ptr(),
+ other.ptr.offset(amount_in_first as isize),
second_len);
} else {
// `at` lies in the second half, need to factor in the elements we skipped
// in the first half.
let offset = at - first_len;
let amount_in_second = second_len - offset;
- ptr::copy_nonoverlapping(*other.ptr,
- second_half.as_ptr().offset(offset as isize),
+ ptr::copy_nonoverlapping(second_half.as_ptr().offset(offset as isize),
+ *other.ptr,
amount_in_second);
}
}
//! For example,
//!
//! ```
-//! # #![feature(os, old_io, old_path)]
+//! #![feature(core)]
//! use std::error::FromError;
-//! use std::old_io::{File, IoError};
-//! use std::os::{MemoryMap, MapError};
-//! use std::old_path::Path;
+//! use std::{io, str};
+//! use std::fs::File;
//!
//! enum MyError {
-//! Io(IoError),
-//! Map(MapError)
+//! Io(io::Error),
+//! Utf8(str::Utf8Error),
//! }
//!
-//! impl FromError<IoError> for MyError {
-//! fn from_error(err: IoError) -> MyError {
-//! MyError::Io(err)
-//! }
+//! impl FromError<io::Error> for MyError {
+//! fn from_error(err: io::Error) -> MyError { MyError::Io(err) }
//! }
//!
-//! impl FromError<MapError> for MyError {
-//! fn from_error(err: MapError) -> MyError {
-//! MyError::Map(err)
-//! }
+//! impl FromError<str::Utf8Error> for MyError {
+//! fn from_error(err: str::Utf8Error) -> MyError { MyError::Utf8(err) }
//! }
//!
//! #[allow(unused_variables)]
//! fn open_and_map() -> Result<(), MyError> {
-//! let f = try!(File::open(&Path::new("foo.txt")));
-//! let m = try!(MemoryMap::new(0, &[]));
+//! let b = b"foo.txt";
+//! let s = try!(str::from_utf8(b));
+//! let f = try!(File::open(s));
+//!
//! // do something interesting here...
//! Ok(())
//! }
impl<'a> fmt::Write for Filler<'a> {
fn write_str(&mut self, s: &str) -> fmt::Result {
- slice::bytes::copy_memory(&mut self.buf[(*self.end)..],
- s.as_bytes());
+ slice::bytes::copy_memory(s.as_bytes(),
+ &mut self.buf[(*self.end)..]);
*self.end += s.len();
Ok(())
}
/// let mut t: T = mem::uninitialized();
///
/// // Perform the swap, `&mut` pointers never alias
- /// ptr::copy_nonoverlapping(&mut t, &*x, 1);
- /// ptr::copy_nonoverlapping(x, &*y, 1);
- /// ptr::copy_nonoverlapping(y, &t, 1);
+ /// ptr::copy_nonoverlapping(x, &mut t, 1);
+ /// ptr::copy_nonoverlapping(y, x, 1);
+ /// ptr::copy_nonoverlapping(&t, y, 1);
///
    /// // y and t now point to the same thing, but we need to completely forget `t`
/// // because it's no longer relevant.
/// }
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
+ #[cfg(not(stage0))]
+ pub fn copy_nonoverlapping<T>(src: *const T, dst: *mut T, count: usize);
+
+ /// dox
+ #[stable(feature = "rust1", since = "1.0.0")]
+ #[cfg(stage0)]
pub fn copy_nonoverlapping<T>(dst: *mut T, src: *const T, count: usize);
/// Copies `count * size_of<T>` bytes from `src` to `dst`. The source
/// unsafe fn from_buf_raw<T>(ptr: *const T, elts: usize) -> Vec<T> {
/// let mut dst = Vec::with_capacity(elts);
/// dst.set_len(elts);
- /// ptr::copy(dst.as_mut_ptr(), ptr, elts);
+ /// ptr::copy(ptr, dst.as_mut_ptr(), elts);
/// dst
/// }
/// ```
///
#[stable(feature = "rust1", since = "1.0.0")]
+ #[cfg(not(stage0))]
+ pub fn copy<T>(src: *const T, dst: *mut T, count: usize);
+
+ /// dox
+ #[stable(feature = "rust1", since = "1.0.0")]
+ #[cfg(stage0)]
pub fn copy<T>(dst: *mut T, src: *const T, count: usize);
/// Invokes memset on the specified pointer, setting `count * size_of::<T>()`
let mut t: T = uninitialized();
// Perform the swap, `&mut` pointers never alias
- ptr::copy_nonoverlapping(&mut t, &*x, 1);
- ptr::copy_nonoverlapping(x, &*y, 1);
- ptr::copy_nonoverlapping(y, &t, 1);
+ ptr::copy_nonoverlapping(&*x, &mut t, 1);
+ ptr::copy_nonoverlapping(&*y, x, 1);
+ ptr::copy_nonoverlapping(&t, y, 1);
// y and t now point to the same thing, but we need to completely forget `t`
// because it's no longer relevant.
}
}
- /// Moves the value `v` out of the `Option<T>` if the content of the `Option<T>` is a `Some(v)`.
+ /// Moves the value `v` out of the `Option<T>` if it is `Some(v)`.
///
/// # Panics
///
// FIXME #19649: intrinsic docs don't render, so these have no docs :(
#[stable(feature = "rust1", since = "1.0.0")]
+#[cfg(not(stage0))]
pub use intrinsics::copy_nonoverlapping;
+/// dox
+#[cfg(stage0)]
#[stable(feature = "rust1", since = "1.0.0")]
+pub unsafe fn copy_nonoverlapping<T>(src: *const T, dst: *mut T, count: usize) {
+ intrinsics::copy_nonoverlapping(dst, src, count)
+}
+
+#[stable(feature = "rust1", since = "1.0.0")]
+#[cfg(not(stage0))]
pub use intrinsics::copy;
+/// dox
+#[cfg(stage0)]
+#[stable(feature = "rust1", since = "1.0.0")]
+pub unsafe fn copy<T>(src: *const T, dst: *mut T, count: usize) {
+ intrinsics::copy(dst, src, count)
+}
+
+
#[stable(feature = "rust1", since = "1.0.0")]
pub use intrinsics::write_bytes;
pub unsafe fn swap<T>(x: *mut T, y: *mut T) {
// Give ourselves some scratch space to work with
let mut tmp: T = mem::uninitialized();
- let t: *mut T = &mut tmp;
// Perform the swap
- copy_nonoverlapping(t, &*x, 1);
- copy(x, &*y, 1); // `x` and `y` may overlap
- copy_nonoverlapping(y, &*t, 1);
+ copy_nonoverlapping(x, &mut tmp, 1);
+ copy(y, x, 1); // `x` and `y` may overlap
+ copy_nonoverlapping(&tmp, y, 1);
// y and t now point to the same thing, but we need to completely forget `tmp`
// because it's no longer relevant.
#[stable(feature = "rust1", since = "1.0.0")]
pub unsafe fn read<T>(src: *const T) -> T {
let mut tmp: T = mem::uninitialized();
- copy_nonoverlapping(&mut tmp, src, 1);
+ copy_nonoverlapping(src, &mut tmp, 1);
tmp
}
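
As a small illustration of `read` with the corrected argument order (a
sketch, not taken from the patch):

```rust
use std::ptr;

fn main() {
    let x = 42i32;
    // bitwise copy of `x`; `x` itself is left untouched
    let y = unsafe { ptr::read(&x) };
    assert_eq!(y, 42);
}
```
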
///
/// Panics if the length of `dst` is less than the length of `src`.
#[inline]
- pub fn copy_memory(dst: &mut [u8], src: &[u8]) {
+ pub fn copy_memory(src: &[u8], dst: &mut [u8]) {
let len_src = src.len();
assert!(dst.len() >= len_src);
// `dst` is unaliasable, so we know statically it doesn't overlap
// with `src`.
unsafe {
- ptr::copy_nonoverlapping(dst.as_mut_ptr(),
- src.as_ptr(),
+ ptr::copy_nonoverlapping(src.as_ptr(),
+ dst.as_mut_ptr(),
len_src);
}
}
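
A usage sketch for the new `(src, dst)` parameter order (hedged:
`slice::bytes` was unstable, behind the `core` feature, at the time):

```rust
#![feature(core)]

use std::slice::bytes;

fn main() {
    let src = [1u8, 2, 3];
    let mut dst = [0u8; 4];
    // copies `src` into the front of `dst`; panics if `dst` is too short
    bytes::copy_memory(&src, &mut dst);
    assert_eq!(&dst[..3], &src[..]);
}
```
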
let v0 = vec![32000u16, 32001u16, 32002u16];
let mut v1 = vec![0u16, 0u16, 0u16];
- copy(v1.as_mut_ptr().offset(1),
- v0.as_ptr().offset(1), 1);
+ copy(v0.as_ptr().offset(1), v1.as_mut_ptr().offset(1), 1);
assert!((v1[0] == 0u16 &&
v1[1] == 32001u16 &&
v1[2] == 0u16));
- copy(v1.as_mut_ptr(),
- v0.as_ptr().offset(2), 1);
+ copy(v0.as_ptr().offset(2), v1.as_mut_ptr(), 1);
assert!((v1[0] == 32002u16 &&
v1[1] == 32001u16 &&
v1[2] == 0u16));
- copy(v1.as_mut_ptr().offset(2),
- v0.as_ptr(), 1);
+ copy(v0.as_ptr(), v1.as_mut_ptr().offset(2), 1);
assert!((v1[0] == 32002u16 &&
v1[1] == 32001u16 &&
v1[2] == 32000u16));
pub fn doc_as_u16(d: Doc) -> u16 {
assert_eq!(d.end, d.start + 2);
let mut b = [0; 2];
- bytes::copy_memory(&mut b, &d.data[d.start..d.end]);
+ bytes::copy_memory(&d.data[d.start..d.end], &mut b);
unsafe { (*(b.as_ptr() as *const u16)).to_be() }
}
pub fn doc_as_u32(d: Doc) -> u32 {
assert_eq!(d.end, d.start + 4);
let mut b = [0; 4];
- bytes::copy_memory(&mut b, &d.data[d.start..d.end]);
+ bytes::copy_memory(&d.data[d.start..d.end], &mut b);
unsafe { (*(b.as_ptr() as *const u32)).to_be() }
}
pub fn doc_as_u64(d: Doc) -> u64 {
assert_eq!(d.end, d.start + 8);
let mut b = [0; 8];
- bytes::copy_memory(&mut b, &d.data[d.start..d.end]);
+ bytes::copy_memory(&d.data[d.start..d.end], &mut b);
unsafe { (*(b.as_ptr() as *const u64)).to_be() }
}
{
let last_size_pos = last_size_pos as usize;
let data = &self.writer.get_ref()[last_size_pos+4..cur_pos as usize];
- bytes::copy_memory(&mut buf, data);
+ bytes::copy_memory(data, &mut buf);
}
// overwrite the size and data and continue
fn u32_from_be_bytes(bytes: &[u8]) -> u32 {
let mut b = [0; 4];
- bytes::copy_memory(&mut b, &bytes[..4]);
+ bytes::copy_memory(&bytes[..4], &mut b);
unsafe { (*(b.as_ptr() as *const u32)).to_be() }
}
}
}
+ // When this returns true, it means that the expression *is* a
+ // method-call (i.e. via the operator-overload). This true result
+ // also implies that walk_overloaded_operator already took care of
+ // recursively processing the input arguments, and thus the caller
+ // should not do so.
fn walk_overloaded_operator(&mut self,
expr: &ast::Expr,
receiver: &ast::Expr,
pub use self::deref_kind::*;
pub use self::categorization::*;
+use self::Aliasability::*;
+
use middle::check_const;
use middle::def;
use middle::region;
impl MutabilityCategory {
pub fn from_mutbl(m: ast::Mutability) -> MutabilityCategory {
- match m {
+ let ret = match m {
MutImmutable => McImmutable,
MutMutable => McDeclared
- }
+ };
+ debug!("MutabilityCategory::{}({:?}) => {:?}",
+ "from_mutbl", m, ret);
+ ret
}
pub fn from_borrow_kind(borrow_kind: ty::BorrowKind) -> MutabilityCategory {
- match borrow_kind {
+ let ret = match borrow_kind {
ty::ImmBorrow => McImmutable,
ty::UniqueImmBorrow => McImmutable,
ty::MutBorrow => McDeclared,
- }
+ };
+ debug!("MutabilityCategory::{}({:?}) => {:?}",
+ "from_borrow_kind", borrow_kind, ret);
+ ret
}
- pub fn from_pointer_kind(base_mutbl: MutabilityCategory,
- ptr: PointerKind) -> MutabilityCategory {
- match ptr {
+ fn from_pointer_kind(base_mutbl: MutabilityCategory,
+ ptr: PointerKind) -> MutabilityCategory {
+ let ret = match ptr {
Unique => {
base_mutbl.inherit()
}
UnsafePtr(m) => {
MutabilityCategory::from_mutbl(m)
}
- }
+ };
+ debug!("MutabilityCategory::{}({:?}, {:?}) => {:?}",
+ "from_pointer_kind", base_mutbl, ptr, ret);
+ ret
}
fn from_local(tcx: &ty::ctxt, id: ast::NodeId) -> MutabilityCategory {
- match tcx.map.get(id) {
+ let ret = match tcx.map.get(id) {
ast_map::NodeLocal(p) | ast_map::NodeArg(p) => match p.node {
ast::PatIdent(bind_mode, _, _) => {
if bind_mode == ast::BindByValue(ast::MutMutable) {
_ => tcx.sess.span_bug(p.span, "expected identifier pattern")
},
_ => tcx.sess.span_bug(tcx.map.span(id), "expected identifier pattern")
- }
+ };
+ debug!("MutabilityCategory::{}(tcx, id={:?}) => {:?}",
+ "from_local", id, ret);
+ ret
}
pub fn inherit(&self) -> MutabilityCategory {
- match *self {
+ let ret = match *self {
McImmutable => McImmutable,
McDeclared => McInherited,
McInherited => McInherited,
- }
+ };
+ debug!("{:?}.inherit() => {:?}", self, ret);
+ ret
}
pub fn is_mutable(&self) -> bool {
- match *self {
+ let ret = match *self {
McImmutable => false,
McInherited => true,
McDeclared => true,
- }
+ };
+ debug!("{:?}.is_mutable() => {:?}", self, ret);
+ ret
}
pub fn is_immutable(&self) -> bool {
- match *self {
+ let ret = match *self {
McImmutable => true,
McDeclared | McInherited => false
- }
+ };
+ debug!("{:?}.is_immutable() => {:?}", self, ret);
+ ret
}
pub fn to_user_str(&self) -> &'static str {
}
};
- Ok(Rc::new(cmt_result))
+ let ret = Rc::new(cmt_result);
+ debug!("cat_upvar ret={}", ret.repr(self.tcx()));
+ Ok(ret)
}
fn env_deref(&self,
McDeclared | McInherited => { }
}
- cmt_ {
+ let ret = cmt_ {
id: id,
span: span,
cat: cat_deref(Rc::new(cmt_result), 0, env_ptr),
mutbl: deref_mutbl,
ty: var_ty,
note: NoteClosureEnv(upvar_id)
- }
+ };
+
+ debug!("env_deref ret {}", ret.repr(self.tcx()));
+
+ ret
}
pub fn cat_rvalue_node(&self,
}
}
};
- self.cat_rvalue(id, span, re, expr_ty)
+ let ret = self.cat_rvalue(id, span, re, expr_ty);
+ debug!("cat_rvalue_node ret {}", ret.repr(self.tcx()));
+ ret
}
pub fn cat_rvalue(&self,
span: Span,
temp_scope: ty::Region,
expr_ty: Ty<'tcx>) -> cmt<'tcx> {
- Rc::new(cmt_ {
+ let ret = Rc::new(cmt_ {
id:cmt_id,
span:span,
cat:cat_rvalue(temp_scope),
mutbl:McDeclared,
ty:expr_ty,
note: NoteNone
- })
+ });
+ debug!("cat_rvalue ret {}", ret.repr(self.tcx()));
+ ret
}
pub fn cat_field<N:ast_node>(&self,
f_name: ast::Name,
f_ty: Ty<'tcx>)
-> cmt<'tcx> {
- Rc::new(cmt_ {
+ let ret = Rc::new(cmt_ {
id: node.id(),
span: node.span(),
mutbl: base_cmt.mutbl.inherit(),
cat: cat_interior(base_cmt, InteriorField(NamedField(f_name))),
ty: f_ty,
note: NoteNone
- })
+ });
+ debug!("cat_field ret {}", ret.repr(self.tcx()));
+ ret
}
pub fn cat_tup_field<N:ast_node>(&self,
f_idx: usize,
f_ty: Ty<'tcx>)
-> cmt<'tcx> {
- Rc::new(cmt_ {
+ let ret = Rc::new(cmt_ {
id: node.id(),
span: node.span(),
mutbl: base_cmt.mutbl.inherit(),
cat: cat_interior(base_cmt, InteriorField(PositionalField(f_idx))),
ty: f_ty,
note: NoteNone
- })
+ });
+ debug!("cat_tup_field ret {}", ret.repr(self.tcx()));
+ ret
}
fn cat_deref<N:ast_node>(&self,
};
let base_cmt_ty = base_cmt.ty;
match ty::deref(base_cmt_ty, true) {
- Some(mt) => self.cat_deref_common(node, base_cmt, deref_cnt,
+ Some(mt) => {
+ let ret = self.cat_deref_common(node, base_cmt, deref_cnt,
mt.ty,
deref_context,
- /* implicit: */ false),
+ /* implicit: */ false);
+ debug!("cat_deref ret {}", ret.repr(self.tcx()));
+ ret
+ }
None => {
debug!("Explicit deref of non-derefable type: {}",
base_cmt_ty.repr(self.tcx()));
(base_cmt.mutbl.inherit(), cat_interior(base_cmt, interior))
}
};
- Ok(Rc::new(cmt_ {
+ let ret = Rc::new(cmt_ {
id: node.id(),
span: node.span(),
cat: cat,
mutbl: m,
ty: deref_ty,
note: NoteNone
- }))
+ });
+ debug!("cat_deref_common ret {}", ret.repr(self.tcx()));
+ Ok(ret)
}
pub fn cat_index<N:ast_node>(&self,
};
let m = base_cmt.mutbl.inherit();
- return Ok(interior(elt, base_cmt.clone(), base_cmt.ty,
- m, context, element_ty));
+ let ret = interior(elt, base_cmt.clone(), base_cmt.ty,
+ m, context, element_ty);
+ debug!("cat_index ret {}", ret.repr(self.tcx()));
+ return Ok(ret);
fn interior<'tcx, N: ast_node>(elt: &N,
of_cmt: cmt<'tcx>,
context: InteriorOffsetKind)
-> McResult<cmt<'tcx>>
{
- match try!(deref_kind(base_cmt.ty, Some(context))) {
+ let ret = match try!(deref_kind(base_cmt.ty, Some(context))) {
deref_ptr(ptr) => {
// for unique ptrs, we inherit mutability from the
// owning reference.
let m = MutabilityCategory::from_pointer_kind(base_cmt.mutbl, ptr);
// the deref is explicit in the resulting cmt
- Ok(Rc::new(cmt_ {
+ Rc::new(cmt_ {
id:elt.id(),
span:elt.span(),
cat:cat_deref(base_cmt.clone(), 0, ptr),
None => self.tcx().sess.bug("Found non-derefable type")
},
note: NoteNone
- }))
+ })
}
deref_interior(_) => {
- Ok(base_cmt)
+ base_cmt
}
- }
+ };
+ debug!("deref_vec ret {}", ret.repr(self.tcx()));
+ Ok(ret)
}
/// Given a pattern P like: `[_, ..Q, _]`, where `vec_cmt` is the cmt for `P`, `slice_pat` is
interior_ty: Ty<'tcx>,
interior: InteriorKind)
-> cmt<'tcx> {
- Rc::new(cmt_ {
+ let ret = Rc::new(cmt_ {
id: node.id(),
span: node.span(),
mutbl: base_cmt.mutbl.inherit(),
cat: cat_interior(base_cmt, interior),
ty: interior_ty,
note: NoteNone
- })
+ });
+ debug!("cat_imm_interior ret={}", ret.repr(self.tcx()));
+ ret
}
pub fn cat_downcast<N:ast_node>(&self,
downcast_ty: Ty<'tcx>,
variant_did: ast::DefId)
-> cmt<'tcx> {
- Rc::new(cmt_ {
+ let ret = Rc::new(cmt_ {
id: node.id(),
span: node.span(),
mutbl: base_cmt.mutbl.inherit(),
cat: cat_downcast(base_cmt, variant_did),
ty: downcast_ty,
note: NoteNone
- })
+ });
+ debug!("cat_downcast ret={}", ret.repr(self.tcx()));
+ ret
}
pub fn cat_pattern<F>(&self, cmt: cmt<'tcx>, pat: &ast::Pat, mut op: F) -> McResult<()>
}
}
-#[derive(Copy)]
+#[derive(Copy, Clone, Debug)]
pub enum InteriorSafety {
InteriorUnsafe,
InteriorSafe
}
-#[derive(Copy)]
+#[derive(Clone, Debug)]
+pub enum Aliasability {
+ FreelyAliasable(AliasableReason),
+ NonAliasable,
+ ImmutableUnique(Box<Aliasability>),
+}
+
+#[derive(Copy, Clone, Debug)]
pub enum AliasableReason {
AliasableBorrowed,
AliasableClosure(ast::NodeId), // Aliasable due to capture Fn closure env
AliasableOther,
+ UnaliasableImmutable, // Created as needed upon seeing ImmutableUnique
AliasableStatic(InteriorSafety),
AliasableStaticMut(InteriorSafety),
}
}
}
- /// Returns `Some(_)` if this lvalue represents a freely aliasable pointer type.
+ /// Returns `FreelyAliasable(_)` if this lvalue represents a freely aliasable pointer type.
pub fn freely_aliasable(&self, ctxt: &ty::ctxt<'tcx>)
- -> Option<AliasableReason> {
+ -> Aliasability {
// Maybe non-obvious: copied upvars can only be considered
// non-aliasable in once closures, since any other kind can be
        // aliased and eventually reused.
cat_deref(ref b, _, BorrowedPtr(ty::UniqueImmBorrow, _)) |
cat_deref(ref b, _, Implicit(ty::UniqueImmBorrow, _)) |
cat_downcast(ref b, _) |
- cat_deref(ref b, _, Unique) |
cat_interior(ref b, _) => {
// Aliasability depends on base cmt
b.freely_aliasable(ctxt)
}
+ cat_deref(ref b, _, Unique) => {
+ let sub = b.freely_aliasable(ctxt);
+ if b.mutbl.is_mutable() {
+ // Aliasability depends on base cmt alone
+ sub
+ } else {
+ // Do not allow mutation through an immutable box.
+ ImmutableUnique(Box::new(sub))
+ }
+ }
+
cat_rvalue(..) |
cat_local(..) |
cat_upvar(..) |
cat_deref(_, _, UnsafePtr(..)) => { // yes, it's aliasable, but...
- None
+ NonAliasable
}
cat_static_item(..) => {
};
if self.mutbl.is_mutable() {
- Some(AliasableStaticMut(int_safe))
+ FreelyAliasable(AliasableStaticMut(int_safe))
} else {
- Some(AliasableStatic(int_safe))
+ FreelyAliasable(AliasableStatic(int_safe))
}
}
cat_deref(ref base, _, BorrowedPtr(ty::ImmBorrow, _)) |
cat_deref(ref base, _, Implicit(ty::ImmBorrow, _)) => {
match base.cat {
- cat_upvar(Upvar{ id, .. }) => Some(AliasableClosure(id.closure_expr_id)),
- _ => Some(AliasableBorrowed)
+ cat_upvar(Upvar{ id, .. }) =>
+ FreelyAliasable(AliasableClosure(id.closure_expr_id)),
+ _ => FreelyAliasable(AliasableBorrowed)
}
}
}
let buffer_remaining = size - self.buffer_idx;
if input.len() >= buffer_remaining {
copy_memory(
- &mut self.buffer[self.buffer_idx..size],
- &input[..buffer_remaining]);
+ &input[..buffer_remaining],
+ &mut self.buffer[self.buffer_idx..size]);
self.buffer_idx = 0;
func(&self.buffer);
i += buffer_remaining;
} else {
copy_memory(
- &mut self.buffer[self.buffer_idx..self.buffer_idx + input.len()],
- input);
+ input,
+ &mut self.buffer[self.buffer_idx..self.buffer_idx + input.len()]);
self.buffer_idx += input.len();
return;
}
// be empty.
let input_remaining = input.len() - i;
copy_memory(
- &mut self.buffer[..input_remaining],
- &input[i..]);
+ &input[i..],
+ &mut self.buffer[..input_remaining]);
self.buffer_idx += input_remaining;
}
cmt: mc::cmt<'tcx>)
-> bool {
match cmt.freely_aliasable(this.tcx()) {
- None => {
+ mc::Aliasability::NonAliasable => {
return true;
}
- Some(mc::AliasableStaticMut(..)) => {
+ mc::Aliasability::FreelyAliasable(mc::AliasableStaticMut(..)) => {
return true;
}
- Some(cause) => {
+ mc::Aliasability::ImmutableUnique(_) => {
+ this.bccx.report_aliasability_violation(
+ span,
+ MutabilityViolation,
+ mc::AliasableReason::UnaliasableImmutable);
+ return false;
+ }
+ mc::Aliasability::FreelyAliasable(cause) => {
this.bccx.report_aliasability_violation(
span,
MutabilityViolation,
assignee_cmt: mc::cmt<'tcx>,
mode: euv::MutateMode)
{
- debug!("mutate(assignment_id={}, assignee_cmt={})",
- assignment_id, assignee_cmt.repr(self.tcx()));
+ let opt_lp = opt_loan_path(&assignee_cmt);
+ debug!("mutate(assignment_id={}, assignee_cmt={}) opt_lp={:?}",
+ assignment_id, assignee_cmt.repr(self.tcx()), opt_lp);
- match opt_loan_path(&assignee_cmt) {
+ match opt_lp {
Some(lp) => {
gather_moves::gather_assignment(self.bccx, &self.move_data,
assignment_id, assignment_span,
req_kind: ty::BorrowKind)
-> Result<(),()> {
- match (cmt.freely_aliasable(bccx.tcx), req_kind) {
- (None, _) => {
+ let aliasability = cmt.freely_aliasable(bccx.tcx);
+ debug!("check_aliasability aliasability={:?} req_kind={:?}",
+ aliasability, req_kind);
+
+ match (aliasability, req_kind) {
+ (mc::Aliasability::NonAliasable, _) => {
/* Uniquely accessible path -- OK for `&` and `&mut` */
Ok(())
}
- (Some(mc::AliasableStatic(safety)), ty::ImmBorrow) => {
+ (mc::Aliasability::FreelyAliasable(mc::AliasableStatic(safety)), ty::ImmBorrow) => {
// Borrow of an immutable static item:
match safety {
mc::InteriorUnsafe => {
}
}
}
- (Some(mc::AliasableStaticMut(..)), _) => {
+ (mc::Aliasability::FreelyAliasable(mc::AliasableStaticMut(..)), _) => {
// Even touching a static mut is considered unsafe. We assume the
// user knows what they're doing in these cases.
Ok(())
}
- (Some(alias_cause), ty::UniqueImmBorrow) |
- (Some(alias_cause), ty::MutBorrow) => {
+ (mc::Aliasability::ImmutableUnique(_), ty::MutBorrow) => {
+ bccx.report_aliasability_violation(
+ borrow_span,
+ BorrowViolation(loan_cause),
+ mc::AliasableReason::UnaliasableImmutable);
+ Err(())
+ }
+ (mc::Aliasability::FreelyAliasable(alias_cause), ty::UniqueImmBorrow) |
+ (mc::Aliasability::FreelyAliasable(alias_cause), ty::MutBorrow) => {
bccx.report_aliasability_violation(
borrow_span,
BorrowViolation(loan_cause),
req_kind: ty::BorrowKind)
-> Result<(),()> {
//! Implements the M-* rules in README.md.
-
+    debug!("check_mutability(cause={:?} cmt={} req_kind={:?})",
+ cause, cmt.repr(bccx.tcx), req_kind);
match req_kind {
ty::UniqueImmBorrow | ty::ImmBorrow => {
match cmt.mutbl {
&format!("{} in an aliasable location",
prefix));
}
+ mc::AliasableReason::UnaliasableImmutable => {
+ self.tcx.sess.span_err(
+ span,
+ &format!("{} in an immutable container",
+ prefix));
+ }
mc::AliasableClosure(id) => {
self.tcx.sess.span_err(span,
&format!("{} in a captured outer \
use util::nodemap::{FnvHashMap, NodeSet};
use lint::{Level, Context, LintPass, LintArray, Lint};
-use std::collections::BitSet;
+use std::collections::{HashSet, BitSet};
use std::collections::hash_map::Entry::{Occupied, Vacant};
use std::num::SignedInt;
use std::{cmp, slice};
/// Stack of whether #[doc(hidden)] is set
/// at each level which has lint attributes.
doc_hidden_stack: Vec<bool>,
+
+ /// Private traits or trait items that leaked through. Don't check their methods.
+ private_traits: HashSet<ast::NodeId>,
}
impl MissingDoc {
struct_def_stack: vec!(),
in_variant: false,
doc_hidden_stack: vec!(false),
+ private_traits: HashSet::new(),
}
}
ast::ItemMod(..) => "a module",
ast::ItemEnum(..) => "an enum",
ast::ItemStruct(..) => "a struct",
- ast::ItemTrait(..) => "a trait",
+ ast::ItemTrait(_, _, _, ref items) => {
+ // Issue #11592, traits are always considered exported, even when private.
+ if it.vis == ast::Visibility::Inherited {
+ self.private_traits.insert(it.id);
+ for itm in items {
+ self.private_traits.insert(itm.id);
+ }
+ return
+ }
+ "a trait"
+ },
ast::ItemTy(..) => "a type alias",
+ ast::ItemImpl(_, _, _, Some(ref trait_ref), _, ref impl_items) => {
+ // If the trait is private, add the impl items to private_traits so they don't get
+ // reported for missing docs.
+ let real_trait = ty::trait_ref_to_def_id(cx.tcx, trait_ref);
+ match cx.tcx.map.find(real_trait.node) {
+ Some(ast_map::NodeItem(item)) => if item.vis == ast::Visibility::Inherited {
+ for itm in impl_items {
+ self.private_traits.insert(itm.id);
+ }
+ },
+ _ => { }
+ }
+ return
+ },
_ => return
};
+
self.check_missing_docs_attrs(cx, Some(it.id), &it.attrs, it.span, desc);
}
fn check_trait_item(&mut self, cx: &Context, trait_item: &ast::TraitItem) {
+ if self.private_traits.contains(&trait_item.id) { return }
+
let desc = match trait_item.node {
ast::MethodTraitItem(..) => "a trait method",
ast::TypeTraitItem(..) => "an associated type"
};
+
self.check_missing_docs_attrs(cx, Some(trait_item.id),
&trait_item.attrs,
trait_item.span, desc);
false,
false,
*substs.types.get(FnSpace, 0),
- llargs[0],
llargs[1],
+ llargs[0],
llargs[2],
call_debug_location)
}
true,
false,
*substs.types.get(FnSpace, 0),
- llargs[0],
llargs[1],
+ llargs[0],
llargs[2],
call_debug_location)
}
mutbl: ast::MutImmutable
}))
}
- "copy" | "copy_nonoverlapping" |
+ "copy" | "copy_nonoverlapping" => {
+ (1,
+ vec!(
+ ty::mk_ptr(tcx, ty::mt {
+ ty: param(ccx, 0),
+ mutbl: ast::MutImmutable
+ }),
+ ty::mk_ptr(tcx, ty::mt {
+ ty: param(ccx, 0),
+ mutbl: ast::MutMutable
+ }),
+ tcx.types.usize,
+ ),
+ ty::mk_nil(tcx))
+ }
"volatile_copy_memory" | "volatile_copy_nonoverlapping_memory" => {
(1,
vec!(
if should_panic && out.status.success() {
panic!("test executable succeeded when it should have failed");
} else if !should_panic && !out.status.success() {
- panic!("test executable failed:\n{:?}",
- str::from_utf8(&out.stdout));
+ panic!("test executable failed:\n{}\n{}",
+ str::from_utf8(&out.stdout).unwrap_or(""),
+ str::from_utf8(&out.stderr).unwrap_or(""));
}
}
}
pub fn shift(mut self) -> Option<GapThenFull<K, V, M>> {
unsafe {
*self.gap.raw.hash = mem::replace(&mut *self.full.raw.hash, EMPTY_BUCKET);
- ptr::copy_nonoverlapping(self.gap.raw.key, self.full.raw.key, 1);
- ptr::copy_nonoverlapping(self.gap.raw.val, self.full.raw.val, 1);
+ ptr::copy_nonoverlapping(self.full.raw.key, self.gap.raw.key, 1);
+ ptr::copy_nonoverlapping(self.full.raw.val, self.gap.raw.val, 1);
}
let FullBucket { raw: prev_raw, idx: prev_idx, .. } = self.full;
if written > 0 {
// NB: would be better expressed as .remove(0..n) if it existed
unsafe {
- ptr::copy(self.buf.as_mut_ptr(),
- self.buf.as_ptr().offset(written as isize),
+ ptr::copy(self.buf.as_ptr().offset(written as isize),
+ self.buf.as_mut_ptr(),
len - written);
}
}
// there (left), and what will be appended on the end (right)
let space = self.inner.len() - pos as usize;
let (left, right) = buf.split_at(cmp::min(space, buf.len()));
- slice::bytes::copy_memory(&mut self.inner[(pos as usize)..], left);
+ slice::bytes::copy_memory(left, &mut self.inner[(pos as usize)..]);
self.inner.push_all(right);
// Bump us forward
fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
let amt = cmp::min(buf.len(), self.len());
let (a, b) = self.split_at(amt);
- slice::bytes::copy_memory(buf, a);
+ slice::bytes::copy_memory(a, buf);
*self = b;
Ok(amt)
}
fn write(&mut self, data: &[u8]) -> io::Result<usize> {
let amt = cmp::min(data.len(), self.len());
let (a, b) = mem::replace(self, &mut []).split_at_mut(amt);
- slice::bytes::copy_memory(a, &data[..amt]);
+ slice::bytes::copy_memory(&data[..amt], a);
*self = b;
Ok(amt)
}
///
/// Equivalent to the `println!` macro except that a newline is not printed at
/// the end of the message.
+///
+/// Note that stdout is frequently line-buffered by default so it may be
+/// necessary to use `io::stdout().flush()` to ensure the output is emitted
+/// immediately.
#[macro_export]
#[stable(feature = "rust1", since = "1.0.0")]
#[allow_internal_unstable]
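
A short sketch of the buffering note above (assuming the 1.0-era `std::io`
API):

```rust
use std::io::{self, Write};

fn main() {
    print!("type something: ");
    // without this flush, the prompt may not appear until a newline is written
    io::stdout().flush().unwrap();
}
```
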
let nread = {
let available = try!(self.fill_buf());
let nread = cmp::min(available.len(), buf.len());
- slice::bytes::copy_memory(buf, &available[..nread]);
+ slice::bytes::copy_memory(&available[..nread], buf);
nread
};
self.pos += nread;
self.inner.as_mut().unwrap().write_all(buf)
} else {
let dst = &mut self.buf[self.pos..];
- slice::bytes::copy_memory(dst, buf);
+ slice::bytes::copy_memory(buf, dst);
self.pos += buf.len();
Ok(())
}
Some(src) => {
let dst = &mut buf[num_read..];
let count = cmp::min(src.len(), dst.len());
- bytes::copy_memory(dst, &src[..count]);
+ bytes::copy_memory(&src[..count], dst);
count
},
None => 0,
unsafe {
let ptr = data.as_ptr().offset(start as isize);
let out = buf.as_mut_ptr();
- copy_nonoverlapping(out.offset((8 - size) as isize), ptr, size);
+ copy_nonoverlapping(ptr, out.offset((8 - size) as isize), size);
(*(out as *const u64)).to_be()
}
}
let input = &self.buf[self.pos.. self.pos + write_len];
let output = &mut buf[..write_len];
assert_eq!(input.len(), output.len());
- slice::bytes::copy_memory(output, input);
+ slice::bytes::copy_memory(input, output);
}
self.pos += write_len;
assert!(self.pos <= self.buf.len());
{
let input = &self[..write_len];
let output = &mut buf[.. write_len];
- slice::bytes::copy_memory(output, input);
+ slice::bytes::copy_memory(input, output);
}
*self = &self[write_len..];
let src_len = src.len();
if dst_len >= src_len {
- slice::bytes::copy_memory(dst, src);
+ slice::bytes::copy_memory(src, dst);
self.pos += src_len;
Ok(())
} else {
- slice::bytes::copy_memory(dst, &src[..dst_len]);
+ slice::bytes::copy_memory(&src[..dst_len], dst);
self.pos += dst_len;
let input = &self.buf[self.pos.. self.pos + write_len];
let output = &mut buf[..write_len];
assert_eq!(input.len(), output.len());
- slice::bytes::copy_memory(output, input);
+ slice::bytes::copy_memory(input, output);
}
self.pos += write_len;
assert!(self.pos <= self.buf.len());
Some((surrogate_pos, _)) => {
pos = surrogate_pos + 3;
slice::bytes::copy_memory(
+ UTF8_REPLACEMENT_CHARACTER,
&mut self.bytes[surrogate_pos .. pos],
- UTF8_REPLACEMENT_CHARACTER
);
},
None => return unsafe { String::from_utf8_unchecked(self.bytes) }
let mut buf = repeat(0).take(alu_len + LINE_LEN).collect::<Vec<_>>();
let alu: &[u8] = self.alu.as_bytes();
- copy_memory(&mut buf, alu);
+ copy_memory(alu, &mut buf);
let buf_len = buf.len();
- copy_memory(&mut buf[alu_len..buf_len],
- &alu[..LINE_LEN]);
+ copy_memory(&alu[..LINE_LEN], &mut buf[alu_len..buf_len]);
let mut pos = 0;
let mut bytes;
let mut i = LINE_LEN;
while i < len {
unsafe {
- copy(seq.as_mut_ptr().offset((i - off + 1) as isize),
- seq.as_ptr().offset((i - off) as isize), off);
+ copy(seq.as_ptr().offset((i - off) as isize),
+ seq.as_mut_ptr().offset((i - off + 1) as isize), off);
*seq.get_unchecked_mut(i - off) = b'\n';
}
i += LINE_LEN + 1;
// except according to those terms.
// This tests that we can't modify Box<&mut T> contents while they
-// are borrowed.
+// are borrowed (#14498).
+//
+// Also includes tests of the errors reported when the Box in question
+// is immutable (#14270).
#![feature(box_syntax)]
struct A { a: isize }
struct B<'a> { a: Box<&'a mut isize> }
+fn indirect_write_to_imm_box() {
+ let mut x: isize = 1;
+ let y: Box<_> = box &mut x;
+ let p = &y;
+ ***p = 2; //~ ERROR cannot assign to data in an immutable container
+ drop(p);
+}
+
fn borrow_in_var_from_var() {
+ let mut x: isize = 1;
+ let mut y: Box<_> = box &mut x;
+ let p = &y;
+ let q = &***p;
+ **y = 2; //~ ERROR cannot assign to `**y` because it is borrowed
+ drop(p);
+ drop(q);
+}
+
+fn borrow_in_var_from_var_via_imm_box() {
let mut x: isize = 1;
let y: Box<_> = box &mut x;
let p = &y;
let q = &***p;
**y = 2; //~ ERROR cannot assign to `**y` because it is borrowed
+ //~^ ERROR cannot assign to data in an immutable container
drop(p);
drop(q);
}
fn borrow_in_var_from_field() {
+ let mut x = A { a: 1 };
+ let mut y: Box<_> = box &mut x.a;
+ let p = &y;
+ let q = &***p;
+ **y = 2; //~ ERROR cannot assign to `**y` because it is borrowed
+ drop(p);
+ drop(q);
+}
+
+fn borrow_in_var_from_field_via_imm_box() {
let mut x = A { a: 1 };
let y: Box<_> = box &mut x.a;
let p = &y;
let q = &***p;
**y = 2; //~ ERROR cannot assign to `**y` because it is borrowed
+ //~^ ERROR cannot assign to data in an immutable container
drop(p);
drop(q);
}
fn borrow_in_field_from_var() {
+ let mut x: isize = 1;
+ let mut y = B { a: box &mut x };
+ let p = &y.a;
+ let q = &***p;
+ **y.a = 2; //~ ERROR cannot assign to `**y.a` because it is borrowed
+ drop(p);
+ drop(q);
+}
+
+fn borrow_in_field_from_var_via_imm_box() {
let mut x: isize = 1;
let y = B { a: box &mut x };
let p = &y.a;
let q = &***p;
**y.a = 2; //~ ERROR cannot assign to `**y.a` because it is borrowed
+ //~^ ERROR cannot assign to data in an immutable container
drop(p);
drop(q);
}
fn borrow_in_field_from_field() {
+ let mut x = A { a: 1 };
+ let mut y = B { a: box &mut x.a };
+ let p = &y.a;
+ let q = &***p;
+ **y.a = 2; //~ ERROR cannot assign to `**y.a` because it is borrowed
+ drop(p);
+ drop(q);
+}
+
+fn borrow_in_field_from_field_via_imm_box() {
let mut x = A { a: 1 };
let y = B { a: box &mut x.a };
let p = &y.a;
let q = &***p;
**y.a = 2; //~ ERROR cannot assign to `**y.a` because it is borrowed
+ //~^ ERROR cannot assign to data in an immutable container
drop(p);
drop(q);
}
fn main() {
+ indirect_write_to_imm_box();
borrow_in_var_from_var();
+ borrow_in_var_from_var_via_imm_box();
borrow_in_var_from_field();
+ borrow_in_var_from_field_via_imm_box();
borrow_in_field_from_var();
+ borrow_in_field_from_var_via_imm_box();
borrow_in_field_from_field();
+ borrow_in_field_from_field_via_imm_box();
}
--- /dev/null
+// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+//! Ensure the private trait Bar isn't complained about.
+
+#![deny(missing_docs)]
+
+mod foo {
+ trait Bar { fn bar(&self) { } }
+ impl Bar for i8 { fn bar(&self) { } }
+}
+
+fn main() { }
impl<'a> MyWriter for &'a mut [u8] {
fn my_write(&mut self, buf: &[u8]) -> IoResult<()> {
- slice::bytes::copy_memory(*self, buf);
+ slice::bytes::copy_memory(buf, *self);
let write_len = buf.len();
unsafe {