- [alloc_system](alloc-system.md)
- [allocator](allocator.md)
- [allow_internal_unstable](allow-internal-unstable.md)
+- [as_c_str](as-c-str.md)
- [as_unsafe_cell](as-unsafe-cell.md)
- [ascii_ctype](ascii-ctype.md)
- [asm](asm.md)
- [collections](collections.md)
- [collections_range](collections-range.md)
- [command_envs](command-envs.md)
+- [compiler_barriers](compiler-barriers.md)
- [compiler_builtins](compiler-builtins.md)
- [compiler_builtins_lib](compiler-builtins-lib.md)
- [concat_idents](concat-idents.md)
--- /dev/null
+# `as_c_str`
+
+The tracking issue for this feature is: [#40380]
+
+[#40380]: https://github.com/rust-lang/rust/issues/40380
+
+------------------------
+
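+The `as_c_str` feature adds an `as_c_str` method to `std::ffi::CString`,
+borrowing the contents as a `&CStr` without consuming the `CString`.
+
+A minimal usage sketch (only `as_c_str` itself is unstable here; the
+rest is stable `std::ffi` API):
+
+```rust
+#![feature(as_c_str)]
+
+use std::ffi::{CString, CStr};
+
+let c_string = CString::new("foo").unwrap();
+let c_str: &CStr = c_string.as_c_str();
+assert_eq!(c_str.to_bytes(), b"foo");
+```
+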
--- /dev/null
+# `compiler_barriers`
+
+The tracking issue for this feature is: [#41091]
+
+[#41091]: https://github.com/rust-lang/rust/issues/41091
+
+------------------------
+
+The `compiler_barriers` feature exposes the `compiler_barrier` function
+in `std::sync::atomic`. This function is conceptually similar to C++'s
+`atomic_signal_fence`, whose functionality can currently only be
+obtained in nightly Rust using the `atomic_singlethreadfence_*`
+intrinsic functions in `core`, or through the mostly equivalent inline
+assembly:
+
+```rust
+#![feature(asm)]
+unsafe { asm!("" ::: "memory" : "volatile") };
+```
+
+A `compiler_barrier` restricts the kinds of memory re-ordering the
+compiler is allowed to do. Specifically, depending on the given ordering
+semantics, the compiler may be disallowed from moving reads or writes
+from before or after the call to the other side of the call to
+`compiler_barrier`. Note that it does **not** prevent the *hardware*
+from doing such re-ordering. This is not a problem in a single-threaded
+execution context, but when other threads may modify memory at the same
+time, stronger synchronization primitives are required.
+
+## Examples
+
+`compiler_barrier` is generally only useful for preventing a thread from
+racing *with itself*. That is, it matters when a given thread executes
+one piece of code, is interrupted, and then starts executing code
+elsewhere (while still in the same thread, and conceptually still on the
+same core). In traditional programs, this can only occur when a signal
+handler is registered. In lower-level code, such situations can also
+arise when handling interrupts, when implementing green threads with
+pre-emption, etc.
+
+For a straightforward example of when a `compiler_barrier` is
+necessary, consider the following code:
+
+```rust
+# use std::sync::atomic::{AtomicBool, AtomicUsize};
+# use std::sync::atomic::{ATOMIC_BOOL_INIT, ATOMIC_USIZE_INIT};
+# use std::sync::atomic::Ordering;
+static IMPORTANT_VARIABLE: AtomicUsize = ATOMIC_USIZE_INIT;
+static IS_READY: AtomicBool = ATOMIC_BOOL_INIT;
+
+fn main() {
+ IMPORTANT_VARIABLE.store(42, Ordering::Relaxed);
+ IS_READY.store(true, Ordering::Relaxed);
+}
+
+fn signal_handler() {
+ if IS_READY.load(Ordering::Relaxed) {
+ assert_eq!(IMPORTANT_VARIABLE.load(Ordering::Relaxed), 42);
+ }
+}
+```
+
+The way it is currently written, the `assert_eq!` is *not* guaranteed to
+succeed, despite everything happening in a single thread. To see why,
+remember that the compiler is free to swap the stores to
+`IMPORTANT_VARIABLE` and `IS_READY`, since both stores use
+`Ordering::Relaxed`. If it does, and the signal handler is invoked right
+after `IS_READY` is updated, then the signal handler will see
+`IS_READY` set to `true` while `IMPORTANT_VARIABLE` is still `0`.
+
+Using a `compiler_barrier`, we can remedy this situation:
+
+```rust
+#![feature(compiler_barriers)]
+# use std::sync::atomic::{AtomicBool, AtomicUsize};
+# use std::sync::atomic::{ATOMIC_BOOL_INIT, ATOMIC_USIZE_INIT};
+# use std::sync::atomic::Ordering;
+use std::sync::atomic::compiler_barrier;
+
+static IMPORTANT_VARIABLE: AtomicUsize = ATOMIC_USIZE_INIT;
+static IS_READY: AtomicBool = ATOMIC_BOOL_INIT;
+
+fn main() {
+ IMPORTANT_VARIABLE.store(42, Ordering::Relaxed);
+ // prevent earlier writes from being moved beyond this point
+ compiler_barrier(Ordering::Release);
+ IS_READY.store(true, Ordering::Relaxed);
+}
+
+fn signal_handler() {
+ if IS_READY.load(Ordering::Relaxed) {
+ assert_eq!(IMPORTANT_VARIABLE.load(Ordering::Relaxed), 42);
+ }
+}
+```
+
+A deeper discussion of compiler barriers with various re-ordering
+semantics (such as `Ordering::SeqCst`) is beyond the scope of this text.
+Curious readers are encouraged to read the Linux kernel's discussion of
+[memory barriers][1], the C++ references on [`std::memory_order`][2] and
+[`atomic_signal_fence`][3], and [this StackOverflow answer][4] for
+further details.
+
+[1]: https://www.kernel.org/doc/Documentation/memory-barriers.txt
+[2]: http://en.cppreference.com/w/cpp/atomic/memory_order
+[3]: http://www.cplusplus.com/reference/atomic/atomic_signal_fence/
+[4]: http://stackoverflow.com/a/18454971/472927
}
}
}
+
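+    /// Like `find`, but searches from the back of the iterator,
+    /// returning the first element from the end that matches the predicate.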
+ fn rfind<F>(&mut self, mut predicate: F) -> Option<Self::Item>
+ where F: FnMut(&Self::Item) -> bool,
+ {
+ self.rsearch_while(None, move |elt| {
+ if predicate(&elt) {
+ SearchWhile::Done(Some(elt))
+ } else {
+ SearchWhile::Continue
+ }
+ })
+ }
+
}
// search_while is a generalization of the internal iteration methods.
}
+/// A compiler memory barrier.
+///
+/// `compiler_barrier` does not emit any machine code, but prevents the compiler from re-ordering
+/// memory operations across this point. Which reorderings are disallowed is dictated by the given
+/// [`Ordering`]. Note that `compiler_barrier` does *not* introduce inter-thread memory
+/// synchronization; for that, a [`fence`] is needed.
+///
+/// The re-orderings prevented by the different ordering semantics are:
+///
+/// - with [`SeqCst`], no re-ordering of reads and writes across this point is allowed.
+/// - with [`Release`], preceding reads and writes cannot be moved past subsequent writes.
+/// - with [`Acquire`], subsequent reads and writes cannot be moved ahead of preceding reads.
+/// - with [`AcqRel`], both of the above rules are enforced.
+///
+/// # Panics
+///
+/// Panics if `order` is [`Relaxed`].
+///
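+/// # Examples
+///
+/// A minimal, single-threaded sketch (the `compiler_barriers` feature
+/// documentation walks through a fuller signal-handler example):
+///
+/// ```
+/// #![feature(compiler_barriers)]
+///
+/// use std::sync::atomic::{compiler_barrier, Ordering};
+///
+/// // The compiler may not move writes before this call to after it.
+/// compiler_barrier(Ordering::Release);
+/// ```
+///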
+/// [`fence`]: fn.fence.html
+/// [`Ordering`]: enum.Ordering.html
+/// [`Acquire`]: enum.Ordering.html#variant.Acquire
+/// [`SeqCst`]: enum.Ordering.html#variant.SeqCst
+/// [`Release`]: enum.Ordering.html#variant.Release
+/// [`AcqRel`]: enum.Ordering.html#variant.AcqRel
+/// [`Relaxed`]: enum.Ordering.html#variant.Relaxed
+#[inline]
+#[unstable(feature = "compiler_barriers", issue = "41091")]
+pub fn compiler_barrier(order: Ordering) {
+ unsafe {
+ match order {
+ Acquire => intrinsics::atomic_singlethreadfence_acq(),
+ Release => intrinsics::atomic_singlethreadfence_rel(),
+ AcqRel => intrinsics::atomic_singlethreadfence_acqrel(),
+ SeqCst => intrinsics::atomic_singlethreadfence(),
+ Relaxed => panic!("there is no such thing as a relaxed barrier"),
+ __Nonexhaustive => panic!("invalid memory ordering"),
+ }
+ }
+}
+
#[cfg(target_has_atomic = "8")]
#[stable(feature = "atomic_debug", since = "1.3.0")]
impl fmt::Debug for AtomicBool {
#![feature(fixed_size_array)]
#![feature(flt2dec)]
#![feature(fmt_internals)]
+#![feature(iter_rfind)]
#![feature(libc)]
#![feature(nonzero)]
#![feature(rand)]
}
}
+#[test]
+fn test_find_rfind() {
+ let v = [0, 1, 2, 3, 4, 5];
+ let mut iter = v.iter();
+ let mut i = v.len();
+ while let Some(&elt) = iter.rfind(|_| true) {
+ i -= 1;
+ assert_eq!(elt, v[i]);
+ }
+ assert_eq!(i, 0);
+ assert_eq!(v.iter().rfind(|&&x| x <= 3), Some(&3));
+}
+
#[test]
fn sort_unstable() {
let mut v = [0; 600];
where O: BitDenotation
{
flow_state: DataflowState<O>,
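+    /// Basic blocks whose `unwind` edges are known to be unreachable;
+    /// dataflow propagation does not follow unwind edges out of these blocks.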
+ dead_unwinds: &'a IdxSet<mir::BasicBlock>,
mir: &'a Mir<'tcx>,
}
{
pub fn new(_tcx: TyCtxt<'a, 'tcx, 'tcx>,
mir: &'a Mir<'tcx>,
+ dead_unwinds: &'a IdxSet<mir::BasicBlock>,
denotation: D) -> Self {
let bits_per_block = denotation.bits_per_block();
let usize_bits = mem::size_of::<usize>() * 8;
DataflowAnalysis {
mir: mir,
+ dead_unwinds: dead_unwinds,
flow_state: DataflowState {
sets: AllSets {
bits_per_block: bits_per_block,
ref target, value: _, location: _, unwind: Some(ref unwind)
} => {
self.propagate_bits_into_entry_set_for(in_out, changed, target);
- self.propagate_bits_into_entry_set_for(in_out, changed, unwind);
+ if !self.dead_unwinds.contains(&bb) {
+ self.propagate_bits_into_entry_set_for(in_out, changed, unwind);
+ }
}
mir::TerminatorKind::SwitchInt { ref targets, .. } => {
for target in targets {
}
mir::TerminatorKind::Call { ref cleanup, ref destination, func: _, args: _ } => {
if let Some(ref unwind) = *cleanup {
- self.propagate_bits_into_entry_set_for(in_out, changed, unwind);
+ if !self.dead_unwinds.contains(&bb) {
+ self.propagate_bits_into_entry_set_for(in_out, changed, unwind);
+ }
}
if let Some((ref dest_lval, ref dest_bb)) = *destination {
// N.B.: This must be done *last*, after all other
use super::gather_moves::{HasMoveData, MoveData, MovePathIndex, LookupResult};
use super::dataflow::{MaybeInitializedLvals, MaybeUninitializedLvals};
use super::dataflow::{DataflowResults};
-use super::{drop_flag_effects_for_location, on_all_children_bits};
-use super::on_lookup_result_bits;
+use super::{on_all_children_bits, on_all_drop_children_bits};
+use super::{drop_flag_effects_for_location, on_lookup_result_bits};
use super::MoveDataParamEnv;
use rustc::ty::{self, TyCtxt};
use rustc::mir::*;
use rustc_mir::util::patch::MirPatch;
use rustc_mir::util::elaborate_drops::{DropFlagState, elaborate_drop};
use rustc_mir::util::elaborate_drops::{DropElaborator, DropStyle, DropFlagMode};
+use syntax::ast;
use syntax_pos::Span;
use std::fmt;
move_data: move_data,
param_env: param_env
};
+ let dead_unwinds = find_dead_unwinds(tcx, mir, id, &env);
let flow_inits =
- super::do_dataflow(tcx, mir, id, &[],
+ super::do_dataflow(tcx, mir, id, &[], &dead_unwinds,
MaybeInitializedLvals::new(tcx, mir, &env),
|bd, p| &bd.move_data().move_paths[p]);
let flow_uninits =
- super::do_dataflow(tcx, mir, id, &[],
+ super::do_dataflow(tcx, mir, id, &[], &dead_unwinds,
MaybeUninitializedLvals::new(tcx, mir, &env),
|bd, p| &bd.move_data().move_paths[p]);
impl Pass for ElaborateDrops {}
+/// Returns the set of basic blocks whose unwind edges are known to be
+/// unreachable, because their terminators are `drop`s of values that
+/// are known not to need dropping at that point.
+fn find_dead_unwinds<'a, 'tcx>(
+ tcx: TyCtxt<'a, 'tcx, 'tcx>,
+ mir: &Mir<'tcx>,
+ id: ast::NodeId,
+ env: &MoveDataParamEnv<'tcx>)
+ -> IdxSetBuf<BasicBlock>
+{
+ debug!("find_dead_unwinds({:?})", mir.span);
+ // We only need to do this pass once, because unwind edges can only
+ // reach cleanup blocks, which can't have unwind edges themselves.
+ let mut dead_unwinds = IdxSetBuf::new_empty(mir.basic_blocks().len());
+ let flow_inits =
+ super::do_dataflow(tcx, mir, id, &[], &dead_unwinds,
+ MaybeInitializedLvals::new(tcx, mir, &env),
+ |bd, p| &bd.move_data().move_paths[p]);
+ for (bb, bb_data) in mir.basic_blocks().iter_enumerated() {
+ match bb_data.terminator().kind {
+ TerminatorKind::Drop { ref location, unwind: Some(_), .. } |
+ TerminatorKind::DropAndReplace { ref location, unwind: Some(_), .. } => {
+ let mut init_data = InitializationData {
+ live: flow_inits.sets().on_entry_set_for(bb.index()).to_owned(),
+ dead: IdxSetBuf::new_empty(env.move_data.move_paths.len()),
+ };
+ debug!("find_dead_unwinds @ {:?}: {:?}; init_data={:?}",
+ bb, bb_data, init_data.live);
+ for stmt in 0..bb_data.statements.len() {
+ let loc = Location { block: bb, statement_index: stmt };
+ init_data.apply_location(tcx, mir, env, loc);
+ }
+
+ let path = match env.move_data.rev_lookup.find(location) {
+ LookupResult::Exact(e) => e,
+ LookupResult::Parent(..) => {
+ debug!("find_dead_unwinds: has parent; skipping");
+ continue
+ }
+ };
+
+ debug!("find_dead_unwinds @ {:?}: path({:?})={:?}", bb, location, path);
+
+ let mut maybe_live = false;
+ on_all_drop_children_bits(tcx, mir, &env, path, |child| {
+ let (child_maybe_live, _) = init_data.state(child);
+ maybe_live |= child_maybe_live;
+ });
+
+ debug!("find_dead_unwinds @ {:?}: maybe_live={}", bb, maybe_live);
+ if !maybe_live {
+ dead_unwinds.add(&bb);
+ }
+ }
+ _ => {}
+ }
+ }
+
+ dead_unwinds
+}
+
struct InitializationData {
live: IdxSetBuf<MovePathIndex>,
dead: IdxSetBuf<MovePathIndex>
let mut some_live = false;
let mut some_dead = false;
let mut children_count = 0;
- on_all_children_bits(
- self.tcx(), self.mir(), self.ctxt.move_data(),
- path, |child| {
- if self.ctxt.path_needs_drop(child) {
- let (live, dead) = self.init_data.state(child);
- debug!("elaborate_drop: state({:?}) = {:?}",
- child, (live, dead));
- some_live |= live;
- some_dead |= dead;
- children_count += 1;
- }
+ on_all_drop_children_bits(
+ self.tcx(), self.mir(), self.ctxt.env, path, |child| {
+ let (live, dead) = self.init_data.state(child);
+ debug!("elaborate_drop: state({:?}) = {:?}",
+ child, (live, dead));
+ some_live |= live;
+ some_dead |= dead;
+ children_count += 1;
});
((some_live, some_dead), children_count != 1)
}
self.patch
}
- fn path_needs_drop(&self, path: MovePathIndex) -> bool
- {
- let lvalue = &self.move_data().move_paths[path].lvalue;
- let ty = lvalue.ty(self.mir, self.tcx).to_ty(self.tcx);
- debug!("path_needs_drop({:?}, {:?} : {:?})", path, lvalue, ty);
-
- self.tcx.type_needs_drop_given_env(ty, self.param_env())
- }
-
fn collect_drop_flags(&mut self)
{
for (bb, data) in self.mir.basic_blocks().iter_enumerated() {
}
};
- on_all_children_bits(self.tcx, self.mir, self.move_data(), path, |child| {
- if self.path_needs_drop(child) {
- let (maybe_live, maybe_dead) = init_data.state(child);
- debug!("collect_drop_flags: collecting {:?} from {:?}@{:?} - {:?}",
- child, location, path, (maybe_live, maybe_dead));
- if maybe_live && maybe_dead {
- self.create_drop_flag(child)
- }
+ on_all_drop_children_bits(self.tcx, self.mir, self.env, path, |child| {
+ let (maybe_live, maybe_dead) = init_data.state(child);
+ debug!("collect_drop_flags: collecting {:?} from {:?}@{:?} - {:?}",
+ child, location, path, (maybe_live, maybe_dead));
+ if maybe_live && maybe_dead {
+ self.create_drop_flag(child)
}
});
}
use rustc::session::Session;
use rustc::ty::{self, TyCtxt};
use rustc_mir::util::elaborate_drops::DropFlagState;
+use rustc_data_structures::indexed_set::{IdxSet, IdxSetBuf};
mod abs_domain;
pub mod elaborate_drops;
let param_env = ty::ParameterEnvironment::for_item(tcx, id);
let move_data = MoveData::gather_moves(mir, tcx, ¶m_env);
let mdpe = MoveDataParamEnv { move_data: move_data, param_env: param_env };
+ let dead_unwinds = IdxSetBuf::new_empty(mir.basic_blocks().len());
let flow_inits =
- do_dataflow(tcx, mir, id, attributes, MaybeInitializedLvals::new(tcx, mir, &mdpe),
+ do_dataflow(tcx, mir, id, attributes, &dead_unwinds,
+ MaybeInitializedLvals::new(tcx, mir, &mdpe),
|bd, i| &bd.move_data().move_paths[i]);
let flow_uninits =
- do_dataflow(tcx, mir, id, attributes, MaybeUninitializedLvals::new(tcx, mir, &mdpe),
+ do_dataflow(tcx, mir, id, attributes, &dead_unwinds,
+ MaybeUninitializedLvals::new(tcx, mir, &mdpe),
|bd, i| &bd.move_data().move_paths[i]);
let flow_def_inits =
- do_dataflow(tcx, mir, id, attributes, DefinitelyInitializedLvals::new(tcx, mir, &mdpe),
+ do_dataflow(tcx, mir, id, attributes, &dead_unwinds,
+ DefinitelyInitializedLvals::new(tcx, mir, &mdpe),
|bd, i| &bd.move_data().move_paths[i]);
if has_rustc_mir_with(attributes, "rustc_peek_maybe_init").is_some() {
mir: &Mir<'tcx>,
node_id: ast::NodeId,
attributes: &[ast::Attribute],
+ dead_unwinds: &IdxSet<BasicBlock>,
bd: BD,
p: P)
-> DataflowResults<BD>
node_id: node_id,
print_preflow_to: print_preflow_to,
print_postflow_to: print_postflow_to,
- flow_state: DataflowAnalysis::new(tcx, mir, bd),
+ flow_state: DataflowAnalysis::new(tcx, mir, dead_unwinds, bd),
};
mbcx.dataflow(p);
on_all_children_bits(tcx, mir, move_data, move_path_index, &mut each_child);
}
+fn on_all_drop_children_bits<'a, 'tcx, F>(
+ tcx: TyCtxt<'a, 'tcx, 'tcx>,
+ mir: &Mir<'tcx>,
+ ctxt: &MoveDataParamEnv<'tcx>,
+ path: MovePathIndex,
+ mut each_child: F)
+ where F: FnMut(MovePathIndex)
+{
+ on_all_children_bits(tcx, mir, &ctxt.move_data, path, |child| {
+        let lvalue = &ctxt.move_data.move_paths[child].lvalue;
+ let ty = lvalue.ty(mir, tcx).to_ty(tcx);
+ debug!("on_all_drop_children_bits({:?}, {:?} : {:?})", path, lvalue, ty);
+
+ if tcx.type_needs_drop_given_env(ty, &ctxt.param_env) {
+ each_child(child);
+ } else {
+ debug!("on_all_drop_children_bits - skipping")
+ }
+ })
+}
+
fn drop_flag_effects_for_function_entry<'a, 'tcx, F>(
tcx: TyCtxt<'a, 'tcx, 'tcx>,
mir: &Mir<'tcx>,
// hack around this by replacing the host triple with the target and pray
// that those -L directories are the same!
let mut cmd = Command::new(&llvm_config);
+ if let Some(link_arg) = llvm_link_arg {
+ cmd.arg(link_arg);
+ }
cmd.arg("--ldflags");
for lib in output(&mut cmd).split_whitespace() {
if lib.starts_with("-LIBPATH:") {
&self.inner
}
+ /// Extracts a `CStr` slice containing the entire string.
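+    ///
+    /// # Examples
+    ///
+    /// A minimal illustration:
+    ///
+    /// ```
+    /// #![feature(as_c_str)]
+    ///
+    /// use std::ffi::{CString, CStr};
+    ///
+    /// let c_string = CString::new(b"foo".to_vec()).unwrap();
+    /// let c_str = c_string.as_c_str();
+    /// assert_eq!(c_str, CStr::from_bytes_with_nul(b"foo\0").unwrap());
+    /// ```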
+ #[unstable(feature = "as_c_str", issue = "40380")]
+ pub fn as_c_str(&self) -> &CStr {
+ &*self
+ }
+
/// Converts this `CString` into a boxed `CStr`.
#[unstable(feature = "into_boxed_c_str", issue = "40380")]
pub fn into_boxed_c_str(self) -> Box<CStr> {
--- /dev/null
+// Copyright 2017 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// check that we don't emit multiple drop flags when they are not needed.
+
+fn main() {
+ let x = S.other(S.id());
+}
+
+pub fn test() {
+ let u = S;
+ let mut v = S;
+ drop(v);
+ v = u;
+}
+
+struct S;
+impl Drop for S {
+ fn drop(&mut self) {
+ }
+}
+
+impl S {
+ fn id(self) -> Self { self }
+ fn other(self, s: Self) {}
+}
+
+// END RUST SOURCE
+// START rustc.node4.ElaborateDrops.after.mir
+// let mut _2: S;
+// let mut _3: ();
+// let mut _4: S;
+// let mut _5: S;
+// let mut _6: bool;
+//
+// bb0: {
+// END rustc.node4.ElaborateDrops.after.mir
+// START rustc.node13.ElaborateDrops.after.mir
+// let mut _2: ();
+// let mut _4: ();
+// let mut _5: S;
+// let mut _6: S;
+// let mut _7: bool;
+//
+// bb0: {
+// END rustc.node13.ElaborateDrops.after.mir