Auto merge of #40658 - eddyb:lay-more-out, r=arielb1
author bors <bors@rust-lang.org>
Sun, 9 Apr 2017 13:08:10 +0000 (13:08 +0000)
committer bors <bors@rust-lang.org>
Sun, 9 Apr 2017 13:08:10 +0000 (13:08 +0000)
Use ty::layout for ABI computation instead of LLVM types.

This is the first step in creating a backend-agnostic library for computing call ABI details from signatures.
I wanted to open the PR *before* attempting to move `cabi_*` from trans to avoid rebase churn in #39999.
**EDIT**: As I suspected, #39999 needs this PR to fully work (see https://github.com/rust-lang/rust/pull/39999#issuecomment-287723379).

The first 3 commits add more APIs to `ty::layout` and replace non-ABI uses of `sizing_type_of`.
These APIs are probably usable by other backends, and miri too (cc @stoklund @solson).

The last commit rewrites `rustc_trans::cabi_*` to use `ty::layout` and new `rustc_trans::abi` APIs.
Also, during the process, a couple of trivial bugs were identified and fixed:
* `msp430`, `nvptx`, `nvptx64`: type sizes *in bytes* were compared with `32` and `64`
* `x86` (`fastcall`): `f64` was incorrectly not treated the same way as `f32`

Although not urgent, this PR also uses the more general "homogeneous aggregate" logic to fix #32045.
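
For context, a "homogeneous aggregate" is a composite type whose flattened fields all share a single floating-point (or vector) type; several C ABIs (e.g. ARM's AAPCS VFP rules) pass such aggregates directly in floating-point registers. The following is a minimal, hypothetical sketch of that classification over an already-flattened field list, not the actual `rustc_trans::abi` code introduced here:

```rust
// Simplified model: each field is one scalar "unit"; a homogeneous
// aggregate is at most four fields of a single floating-point unit type.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Unit {
    F32,
    F64,
}

fn homogeneous_aggregate(fields: &[Unit]) -> Option<(Unit, usize)> {
    let first = *fields.first()?;
    if fields.len() <= 4 && fields.iter().all(|&f| f == first) {
        Some((first, fields.len()))
    } else {
        None
    }
}

fn main() {
    // struct { f32, f32, f32 } is homogeneous: three f32 members.
    assert_eq!(
        homogeneous_aggregate(&[Unit::F32, Unit::F32, Unit::F32]),
        Some((Unit::F32, 3))
    );
    // struct { f32, f64 } mixes unit types, so it is not.
    assert_eq!(homogeneous_aggregate(&[Unit::F32, Unit::F64]), None);
}
```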

13 files changed:
src/doc/unstable-book/src/SUMMARY.md
src/doc/unstable-book/src/as-c-str.md [new file with mode: 0644]
src/doc/unstable-book/src/compiler-barriers.md [new file with mode: 0644]
src/libcore/slice/mod.rs
src/libcore/sync/atomic.rs
src/libcore/tests/lib.rs
src/libcore/tests/slice.rs
src/librustc_borrowck/borrowck/mir/dataflow/mod.rs
src/librustc_borrowck/borrowck/mir/elaborate_drops.rs
src/librustc_borrowck/borrowck/mir/mod.rs
src/librustc_llvm/build.rs
src/libstd/ffi/c_str.rs
src/test/mir-opt/issue-41110.rs [new file with mode: 0644]

index 20812de524add29aa089e6bdd28dabcc08b5771a..d86766bac02a2829e15b97f433211f7eca237c70 100644 (file)
@@ -12,6 +12,7 @@
 - [alloc_system](alloc-system.md)
 - [allocator](allocator.md)
 - [allow_internal_unstable](allow-internal-unstable.md)
+- [as_c_str](as-c-str.md)
 - [as_unsafe_cell](as-unsafe-cell.md)
 - [ascii_ctype](ascii-ctype.md)
 - [asm](asm.md)
@@ -37,6 +38,7 @@
 - [collections](collections.md)
 - [collections_range](collections-range.md)
 - [command_envs](command-envs.md)
+- [compiler_barriers](compiler-barriers.md)
 - [compiler_builtins](compiler-builtins.md)
 - [compiler_builtins_lib](compiler-builtins-lib.md)
 - [concat_idents](concat-idents.md)
diff --git a/src/doc/unstable-book/src/as-c-str.md b/src/doc/unstable-book/src/as-c-str.md
new file mode 100644 (file)
index 0000000..ed32eed
--- /dev/null
@@ -0,0 +1,8 @@
+# `as_c_str`
+
+The tracking issue for this feature is: [#40380]
+
+[#40380]: https://github.com/rust-lang/rust/issues/40380
+
+------------------------
+
diff --git a/src/doc/unstable-book/src/compiler-barriers.md b/src/doc/unstable-book/src/compiler-barriers.md
new file mode 100644 (file)
index 0000000..827447f
--- /dev/null
@@ -0,0 +1,106 @@
+# `compiler_barriers`
+
+The tracking issue for this feature is: [#41091]
+
+[#41091]: https://github.com/rust-lang/rust/issues/41091
+
+------------------------
+
+The `compiler_barriers` feature exposes the `compiler_barrier` function
+in `std::sync::atomic`. This function is conceptually similar to C++'s
+`atomic_signal_fence`; equivalent functionality is currently only
+accessible in nightly Rust through the `atomic_singlethreadfence_*`
+intrinsic functions in `core`, or through the mostly equivalent inline
+assembly:
+
+```rust
+#![feature(asm)]
+unsafe { asm!("" ::: "memory" : "volatile") };
+```
+
+A `compiler_barrier` restricts the kinds of memory re-ordering the
+compiler is allowed to do. Specifically, depending on the given ordering
+semantics, the compiler may be disallowed from moving reads or writes
+from before or after the call to the other side of the call to
+`compiler_barrier`. Note that it does **not** prevent the *hardware*
+from doing such re-ordering. This is not a problem in a single-threaded
+execution context, but when other threads may modify memory at the same
+time, stronger synchronization primitives are required.
+
+## Examples
+
+`compiler_barrier` is generally only useful for preventing a thread from
+racing *with itself*. That is, it matters when a given thread is
+executing one piece of code, is then interrupted, and starts executing
+code elsewhere (while still in the same thread, and conceptually still
+on the same core). In traditional programs, this can only occur when a
+signal handler is registered. In lower-level code, such situations can
+also arise when handling interrupts, when implementing green threads
+with pre-emption, and so on.
+
+For a straightforward example of when a `compiler_barrier` is
+necessary, consider the following:
+
+```rust
+# use std::sync::atomic::{AtomicBool, AtomicUsize};
+# use std::sync::atomic::{ATOMIC_BOOL_INIT, ATOMIC_USIZE_INIT};
+# use std::sync::atomic::Ordering;
+static IMPORTANT_VARIABLE: AtomicUsize = ATOMIC_USIZE_INIT;
+static IS_READY: AtomicBool = ATOMIC_BOOL_INIT;
+
+fn main() {
+    IMPORTANT_VARIABLE.store(42, Ordering::Relaxed);
+    IS_READY.store(true, Ordering::Relaxed);
+}
+
+fn signal_handler() {
+    if IS_READY.load(Ordering::Relaxed) {
+        assert_eq!(IMPORTANT_VARIABLE.load(Ordering::Relaxed), 42);
+    }
+}
+```
+
+The way it is currently written, the `assert_eq!` is *not* guaranteed to
+succeed, despite everything happening in a single thread. To see why,
+remember that the compiler is free to swap the stores to
+`IMPORTANT_VARIABLE` and `IS_READY` since they are both
+`Ordering::Relaxed`. If it does, and the signal handler is invoked right
+after `IS_READY` is updated, then the signal handler will see
+`IS_READY` set to `true`, but `IMPORTANT_VARIABLE` still equal to `0`.
+
+Using a `compiler_barrier`, we can remedy this situation:
+
+```rust
+#![feature(compiler_barriers)]
+# use std::sync::atomic::{AtomicBool, AtomicUsize};
+# use std::sync::atomic::{ATOMIC_BOOL_INIT, ATOMIC_USIZE_INIT};
+# use std::sync::atomic::Ordering;
+use std::sync::atomic::compiler_barrier;
+
+static IMPORTANT_VARIABLE: AtomicUsize = ATOMIC_USIZE_INIT;
+static IS_READY: AtomicBool = ATOMIC_BOOL_INIT;
+
+fn main() {
+    IMPORTANT_VARIABLE.store(42, Ordering::Relaxed);
+    // prevent earlier writes from being moved beyond this point
+    compiler_barrier(Ordering::Release);
+    IS_READY.store(true, Ordering::Relaxed);
+}
+
+fn signal_handler() {
+    if IS_READY.load(Ordering::Relaxed) {
+        assert_eq!(IMPORTANT_VARIABLE.load(Ordering::Relaxed), 42);
+    }
+}
+```
+
+A deeper discussion of compiler barriers with various re-ordering
+semantics (such as `Ordering::SeqCst`) is beyond the scope of this text.
+Curious readers are encouraged to read the Linux kernel's discussion of
+[memory barriers][1], the C++ references on [`std::memory_order`][2] and
+[`atomic_signal_fence`][3], and [this StackOverflow answer][4] for
+further details.
+
+[1]: https://www.kernel.org/doc/Documentation/memory-barriers.txt
+[2]: http://en.cppreference.com/w/cpp/atomic/memory_order
+[3]: http://www.cplusplus.com/reference/atomic/atomic_signal_fence/
+[4]: http://stackoverflow.com/a/18454971/472927
index 6d598677c9ba4f0cad90dc5bf1458d1d94433f58..87dfdfe57b65c77db0443d99ea6e7d46143c5089 100644 (file)
@@ -1190,6 +1190,19 @@ fn next_back(&mut self) -> Option<$elem> {
                     }
                 }
             }
+
+            fn rfind<F>(&mut self, mut predicate: F) -> Option<Self::Item>
+                where F: FnMut(&Self::Item) -> bool,
+            {
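+                // Search from the back via internal iteration, stopping
+                // at the first element for which `predicate` returns true.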
+                self.rsearch_while(None, move |elt| {
+                    if predicate(&elt) {
+                        SearchWhile::Done(Some(elt))
+                    } else {
+                        SearchWhile::Continue
+                    }
+                })
+            }
+
         }
 
         // search_while is a generalization of the internal iteration methods.
index a4050f271eb99b30a08eb91961680bf468444e28..0c70524ead246beee3a85bbe16754f9612f34329 100644 (file)
@@ -1591,6 +1591,47 @@ pub fn fence(order: Ordering) {
 }
 
 
+/// A compiler memory barrier.
+///
+/// `compiler_barrier` does not emit any machine code, but prevents the compiler from re-ordering
+/// memory operations across this point. Which re-orderings are disallowed is dictated by the given
+/// [`Ordering`]. Note that `compiler_barrier` does *not* introduce inter-thread memory
+/// synchronization; for that, a [`fence`] is needed.
+///
+/// The re-orderings prevented by the different ordering semantics are:
+///
+///  - with [`SeqCst`], no re-ordering of reads and writes across this point is allowed.
+///  - with [`Release`], preceding reads and writes cannot be moved past subsequent writes.
+///  - with [`Acquire`], subsequent reads and writes cannot be moved ahead of preceding reads.
+///  - with [`AcqRel`], both of the above rules are enforced.
+///
+/// # Panics
+///
+/// Panics if `order` is [`Relaxed`].
+///
+/// [`fence`]: fn.fence.html
+/// [`Ordering`]: enum.Ordering.html
+/// [`Acquire`]: enum.Ordering.html#variant.Acquire
+/// [`SeqCst`]: enum.Ordering.html#variant.SeqCst
+/// [`Release`]: enum.Ordering.html#variant.Release
+/// [`AcqRel`]: enum.Ordering.html#variant.AcqRel
+/// [`Relaxed`]: enum.Ordering.html#variant.Relaxed
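+///
+/// # Examples
+///
+/// A minimal usage sketch:
+///
+/// ```
+/// #![feature(compiler_barriers)]
+/// use std::sync::atomic::{compiler_barrier, Ordering};
+///
+/// // Emits no machine code, but the compiler may not move earlier
+/// // writes below this point.
+/// compiler_barrier(Ordering::Release);
+/// ```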
+#[inline]
+#[unstable(feature = "compiler_barriers", issue = "41091")]
+pub fn compiler_barrier(order: Ordering) {
+    unsafe {
+        match order {
+            Acquire => intrinsics::atomic_singlethreadfence_acq(),
+            Release => intrinsics::atomic_singlethreadfence_rel(),
+            AcqRel => intrinsics::atomic_singlethreadfence_acqrel(),
+            SeqCst => intrinsics::atomic_singlethreadfence(),
+            Relaxed => panic!("there is no such thing as a relaxed barrier"),
+            __Nonexhaustive => panic!("invalid memory ordering"),
+        }
+    }
+}
+
+
 #[cfg(target_has_atomic = "8")]
 #[stable(feature = "atomic_debug", since = "1.3.0")]
 impl fmt::Debug for AtomicBool {
index d92c378160d2e0e4ba3eabcc5fcdc813675b07c4..528ab3bc84523e184ee7b41b12bbce7d9e36ee57 100644 (file)
@@ -20,6 +20,7 @@
 #![feature(fixed_size_array)]
 #![feature(flt2dec)]
 #![feature(fmt_internals)]
+#![feature(iter_rfind)]
 #![feature(libc)]
 #![feature(nonzero)]
 #![feature(rand)]
index ec38345030fa5897423ff55d2f4f882dcff4c1ee..15047204e50d609699e5df8bcee730d9aa37ee6c 100644 (file)
@@ -225,6 +225,19 @@ fn get_unchecked_mut_range() {
     }
 }
 
+#[test]
+fn test_find_rfind() {
+    let v = [0, 1, 2, 3, 4, 5];
+    let mut iter = v.iter();
+    let mut i = v.len();
+    while let Some(&elt) = iter.rfind(|_| true) {
+        i -= 1;
+        assert_eq!(elt, v[i]);
+    }
+    assert_eq!(i, 0);
+    assert_eq!(v.iter().rfind(|&&x| x <= 3), Some(&3));
+}
+
 #[test]
 fn sort_unstable() {
     let mut v = [0; 600];
index 8b246105f61693b147688622d62203ddfa33653b..f0f082a2561cca6f3814dd1acaf46b2ec4828805 100644 (file)
@@ -181,6 +181,7 @@ pub struct DataflowAnalysis<'a, 'tcx: 'a, O>
     where O: BitDenotation
 {
     flow_state: DataflowState<O>,
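+    /// Blocks whose unwind edges are known to be unreachable, so the
+    /// analysis does not propagate bits along them.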
+    dead_unwinds: &'a IdxSet<mir::BasicBlock>,
     mir: &'a Mir<'tcx>,
 }
 
@@ -377,6 +378,7 @@ impl<'a, 'tcx: 'a, D> DataflowAnalysis<'a, 'tcx, D>
 {
     pub fn new(_tcx: TyCtxt<'a, 'tcx, 'tcx>,
                mir: &'a Mir<'tcx>,
+               dead_unwinds: &'a IdxSet<mir::BasicBlock>,
                denotation: D) -> Self {
         let bits_per_block = denotation.bits_per_block();
         let usize_bits = mem::size_of::<usize>() * 8;
@@ -397,6 +399,7 @@ pub fn new(_tcx: TyCtxt<'a, 'tcx, 'tcx>,
 
         DataflowAnalysis {
             mir: mir,
+            dead_unwinds: dead_unwinds,
             flow_state: DataflowState {
                 sets: AllSets {
                     bits_per_block: bits_per_block,
@@ -452,7 +455,9 @@ fn propagate_bits_into_graph_successors_of(
                 ref target, value: _, location: _, unwind: Some(ref unwind)
             } => {
                 self.propagate_bits_into_entry_set_for(in_out, changed, target);
-                self.propagate_bits_into_entry_set_for(in_out, changed, unwind);
+                if !self.dead_unwinds.contains(&bb) {
+                    self.propagate_bits_into_entry_set_for(in_out, changed, unwind);
+                }
             }
             mir::TerminatorKind::SwitchInt { ref targets, .. } => {
                 for target in targets {
@@ -461,7 +466,9 @@ fn propagate_bits_into_graph_successors_of(
             }
             mir::TerminatorKind::Call { ref cleanup, ref destination, func: _, args: _ } => {
                 if let Some(ref unwind) = *cleanup {
-                    self.propagate_bits_into_entry_set_for(in_out, changed, unwind);
+                    if !self.dead_unwinds.contains(&bb) {
+                        self.propagate_bits_into_entry_set_for(in_out, changed, unwind);
+                    }
                 }
                 if let Some((ref dest_lval, ref dest_bb)) = *destination {
                     // N.B.: This must be done *last*, after all other
index 88ec86cc95d614dfe461a30620d73dc6c5c9a064..713e656666271a97c32cae318612ddb3af349bdf 100644 (file)
@@ -11,8 +11,8 @@
 use super::gather_moves::{HasMoveData, MoveData, MovePathIndex, LookupResult};
 use super::dataflow::{MaybeInitializedLvals, MaybeUninitializedLvals};
 use super::dataflow::{DataflowResults};
-use super::{drop_flag_effects_for_location, on_all_children_bits};
-use super::on_lookup_result_bits;
+use super::{on_all_children_bits, on_all_drop_children_bits};
+use super::{drop_flag_effects_for_location, on_lookup_result_bits};
 use super::MoveDataParamEnv;
 use rustc::ty::{self, TyCtxt};
 use rustc::mir::*;
@@ -24,6 +24,7 @@
 use rustc_mir::util::patch::MirPatch;
 use rustc_mir::util::elaborate_drops::{DropFlagState, elaborate_drop};
 use rustc_mir::util::elaborate_drops::{DropElaborator, DropStyle, DropFlagMode};
+use syntax::ast;
 use syntax_pos::Span;
 
 use std::fmt;
@@ -49,12 +50,13 @@ fn run_pass<'a>(&mut self, tcx: TyCtxt<'a, 'tcx, 'tcx>,
                 move_data: move_data,
                 param_env: param_env
             };
+            let dead_unwinds = find_dead_unwinds(tcx, mir, id, &env);
             let flow_inits =
-                super::do_dataflow(tcx, mir, id, &[],
+                super::do_dataflow(tcx, mir, id, &[], &dead_unwinds,
                                    MaybeInitializedLvals::new(tcx, mir, &env),
                                    |bd, p| &bd.move_data().move_paths[p]);
             let flow_uninits =
-                super::do_dataflow(tcx, mir, id, &[],
+                super::do_dataflow(tcx, mir, id, &[], &dead_unwinds,
                                    MaybeUninitializedLvals::new(tcx, mir, &env),
                                    |bd, p| &bd.move_data().move_paths[p]);
 
@@ -74,6 +76,67 @@ fn run_pass<'a>(&mut self, tcx: TyCtxt<'a, 'tcx, 'tcx>,
 
 impl Pass for ElaborateDrops {}
 
+/// Return the set of basic blocks whose unwind edges are known
+/// to not be reachable, because they are `drop` terminators
+/// that can't drop anything.
+fn find_dead_unwinds<'a, 'tcx>(
+    tcx: TyCtxt<'a, 'tcx, 'tcx>,
+    mir: &Mir<'tcx>,
+    id: ast::NodeId,
+    env: &MoveDataParamEnv<'tcx>)
+    -> IdxSetBuf<BasicBlock>
+{
+    debug!("find_dead_unwinds({:?})", mir.span);
+    // We only need to do this pass once, because unwind edges can only
+    // reach cleanup blocks, which can't have unwind edges themselves.
+    let mut dead_unwinds = IdxSetBuf::new_empty(mir.basic_blocks().len());
+    let flow_inits =
+        super::do_dataflow(tcx, mir, id, &[], &dead_unwinds,
+                           MaybeInitializedLvals::new(tcx, mir, &env),
+                           |bd, p| &bd.move_data().move_paths[p]);
+    for (bb, bb_data) in mir.basic_blocks().iter_enumerated() {
+        match bb_data.terminator().kind {
+            TerminatorKind::Drop { ref location, unwind: Some(_), .. } |
+            TerminatorKind::DropAndReplace { ref location, unwind: Some(_), .. } => {
+                let mut init_data = InitializationData {
+                    live: flow_inits.sets().on_entry_set_for(bb.index()).to_owned(),
+                    dead: IdxSetBuf::new_empty(env.move_data.move_paths.len()),
+                };
+                debug!("find_dead_unwinds @ {:?}: {:?}; init_data={:?}",
+                       bb, bb_data, init_data.live);
+                for stmt in 0..bb_data.statements.len() {
+                    let loc = Location { block: bb, statement_index: stmt };
+                    init_data.apply_location(tcx, mir, env, loc);
+                }
+
+                let path = match env.move_data.rev_lookup.find(location) {
+                    LookupResult::Exact(e) => e,
+                    LookupResult::Parent(..) => {
+                        debug!("find_dead_unwinds: has parent; skipping");
+                        continue
+                    }
+                };
+
+                debug!("find_dead_unwinds @ {:?}: path({:?})={:?}", bb, location, path);
+
+                let mut maybe_live = false;
+                on_all_drop_children_bits(tcx, mir, &env, path, |child| {
+                    let (child_maybe_live, _) = init_data.state(child);
+                    maybe_live |= child_maybe_live;
+                });
+
+                debug!("find_dead_unwinds @ {:?}: maybe_live={}", bb, maybe_live);
+                if !maybe_live {
+                    dead_unwinds.add(&bb);
+                }
+            }
+            _ => {}
+        }
+    }
+
+    dead_unwinds
+}
+
 struct InitializationData {
     live: IdxSetBuf<MovePathIndex>,
     dead: IdxSetBuf<MovePathIndex>
@@ -144,17 +207,14 @@ fn drop_style(&self, path: Self::Path, mode: DropFlagMode) -> DropStyle {
                 let mut some_live = false;
                 let mut some_dead = false;
                 let mut children_count = 0;
-                on_all_children_bits(
-                    self.tcx(), self.mir(), self.ctxt.move_data(),
-                    path, |child| {
-                        if self.ctxt.path_needs_drop(child) {
-                            let (live, dead) = self.init_data.state(child);
-                            debug!("elaborate_drop: state({:?}) = {:?}",
-                                   child, (live, dead));
-                            some_live |= live;
-                            some_dead |= dead;
-                            children_count += 1;
-                        }
+                on_all_drop_children_bits(
+                    self.tcx(), self.mir(), self.ctxt.env, path, |child| {
+                        let (live, dead) = self.init_data.state(child);
+                        debug!("elaborate_drop: state({:?}) = {:?}",
+                               child, (live, dead));
+                        some_live |= live;
+                        some_dead |= dead;
+                        children_count += 1;
                     });
                 ((some_live, some_dead), children_count != 1)
             }
@@ -276,15 +336,6 @@ fn elaborate(mut self) -> MirPatch<'tcx>
         self.patch
     }
 
-    fn path_needs_drop(&self, path: MovePathIndex) -> bool
-    {
-        let lvalue = &self.move_data().move_paths[path].lvalue;
-        let ty = lvalue.ty(self.mir, self.tcx).to_ty(self.tcx);
-        debug!("path_needs_drop({:?}, {:?} : {:?})", path, lvalue, ty);
-
-        self.tcx.type_needs_drop_given_env(ty, self.param_env())
-    }
-
     fn collect_drop_flags(&mut self)
     {
         for (bb, data) in self.mir.basic_blocks().iter_enumerated() {
@@ -318,14 +369,12 @@ fn collect_drop_flags(&mut self)
                 }
             };
 
-            on_all_children_bits(self.tcx, self.mir, self.move_data(), path, |child| {
-                if self.path_needs_drop(child) {
-                    let (maybe_live, maybe_dead) = init_data.state(child);
-                    debug!("collect_drop_flags: collecting {:?} from {:?}@{:?} - {:?}",
-                           child, location, path, (maybe_live, maybe_dead));
-                    if maybe_live && maybe_dead {
-                        self.create_drop_flag(child)
-                    }
+            on_all_drop_children_bits(self.tcx, self.mir, self.env, path, |child| {
+                let (maybe_live, maybe_dead) = init_data.state(child);
+                debug!("collect_drop_flags: collecting {:?} from {:?}@{:?} - {:?}",
+                       child, location, path, (maybe_live, maybe_dead));
+                if maybe_live && maybe_dead {
+                    self.create_drop_flag(child)
                 }
             });
         }
index 9237bb31f6bd7f62c267550d29723c46d69bcf13..dc01cbe5e7605eb3a4fd4a9e19e269d25fe531a8 100644 (file)
@@ -17,6 +17,7 @@
 use rustc::session::Session;
 use rustc::ty::{self, TyCtxt};
 use rustc_mir::util::elaborate_drops::DropFlagState;
+use rustc_data_structures::indexed_set::{IdxSet, IdxSetBuf};
 
 mod abs_domain;
 pub mod elaborate_drops;
@@ -64,14 +65,18 @@ pub fn borrowck_mir(bcx: &mut BorrowckCtxt,
     let param_env = ty::ParameterEnvironment::for_item(tcx, id);
     let move_data = MoveData::gather_moves(mir, tcx, &param_env);
     let mdpe = MoveDataParamEnv { move_data: move_data, param_env: param_env };
+    let dead_unwinds = IdxSetBuf::new_empty(mir.basic_blocks().len());
     let flow_inits =
-        do_dataflow(tcx, mir, id, attributes, MaybeInitializedLvals::new(tcx, mir, &mdpe),
+        do_dataflow(tcx, mir, id, attributes, &dead_unwinds,
+                    MaybeInitializedLvals::new(tcx, mir, &mdpe),
                     |bd, i| &bd.move_data().move_paths[i]);
     let flow_uninits =
-        do_dataflow(tcx, mir, id, attributes, MaybeUninitializedLvals::new(tcx, mir, &mdpe),
+        do_dataflow(tcx, mir, id, attributes, &dead_unwinds,
+                    MaybeUninitializedLvals::new(tcx, mir, &mdpe),
                     |bd, i| &bd.move_data().move_paths[i]);
     let flow_def_inits =
-        do_dataflow(tcx, mir, id, attributes, DefinitelyInitializedLvals::new(tcx, mir, &mdpe),
+        do_dataflow(tcx, mir, id, attributes, &dead_unwinds,
+                    DefinitelyInitializedLvals::new(tcx, mir, &mdpe),
                     |bd, i| &bd.move_data().move_paths[i]);
 
     if has_rustc_mir_with(attributes, "rustc_peek_maybe_init").is_some() {
@@ -108,6 +113,7 @@ fn do_dataflow<'a, 'tcx, BD, P>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
                                 mir: &Mir<'tcx>,
                                 node_id: ast::NodeId,
                                 attributes: &[ast::Attribute],
+                                dead_unwinds: &IdxSet<BasicBlock>,
                                 bd: BD,
                                 p: P)
                                 -> DataflowResults<BD>
@@ -137,7 +143,7 @@ fn do_dataflow<'a, 'tcx, BD, P>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
         node_id: node_id,
         print_preflow_to: print_preflow_to,
         print_postflow_to: print_postflow_to,
-        flow_state: DataflowAnalysis::new(tcx, mir, bd),
+        flow_state: DataflowAnalysis::new(tcx, mir, dead_unwinds, bd),
     };
 
     mbcx.dataflow(p);
@@ -303,6 +309,27 @@ fn on_all_children_bits<'a, 'tcx, F>(
     on_all_children_bits(tcx, mir, move_data, move_path_index, &mut each_child);
 }
 
+fn on_all_drop_children_bits<'a, 'tcx, F>(
+    tcx: TyCtxt<'a, 'tcx, 'tcx>,
+    mir: &Mir<'tcx>,
+    ctxt: &MoveDataParamEnv<'tcx>,
+    path: MovePathIndex,
+    mut each_child: F)
+    where F: FnMut(MovePathIndex)
+{
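+    // Visit every child move path of `path`, but invoke `each_child`
+    // only for children whose type actually needs dropping.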
+    on_all_children_bits(tcx, mir, &ctxt.move_data, path, |child| {
+        let lvalue = &ctxt.move_data.move_paths[child].lvalue;
+        let ty = lvalue.ty(mir, tcx).to_ty(tcx);
+        debug!("on_all_drop_children_bits({:?}, {:?} : {:?})", child, lvalue, ty);
+
+        if tcx.type_needs_drop_given_env(ty, &ctxt.param_env) {
+            each_child(child);
+        } else {
+            debug!("on_all_drop_children_bits - skipping")
+        }
+    })
+}
+
 fn drop_flag_effects_for_function_entry<'a, 'tcx, F>(
     tcx: TyCtxt<'a, 'tcx, 'tcx>,
     mir: &Mir<'tcx>,
index 7d5887e699fd7fdb2329291736ca32a7130ec1f6..a8def4bafd864b6e4203025c2082f233d9f87eb6 100644 (file)
@@ -217,6 +217,9 @@ fn main() {
     // hack around this by replacing the host triple with the target and pray
     // that those -L directories are the same!
     let mut cmd = Command::new(&llvm_config);
+    if let Some(link_arg) = llvm_link_arg {
+        cmd.arg(link_arg);
+    }
     cmd.arg("--ldflags");
     for lib in output(&mut cmd).split_whitespace() {
         if lib.starts_with("-LIBPATH:") {
index fc1b9a976322ea50e4580380368cd437a39bcc08..29f977ecd8c33101c655425c565e6c491a4cf969 100644 (file)
@@ -324,6 +324,12 @@ pub fn as_bytes_with_nul(&self) -> &[u8] {
         &self.inner
     }
 
+    /// Extracts a `CStr` slice containing the entire string.
+    #[unstable(feature = "as_c_str", issue = "40380")]
+    pub fn as_c_str(&self) -> &CStr {
+        &*self
+    }
+
     /// Converts this `CString` into a boxed `CStr`.
     #[unstable(feature = "into_boxed_c_str", issue = "40380")]
     pub fn into_boxed_c_str(self) -> Box<CStr> {
diff --git a/src/test/mir-opt/issue-41110.rs b/src/test/mir-opt/issue-41110.rs
new file mode 100644 (file)
index 0000000..fec635b
--- /dev/null
@@ -0,0 +1,53 @@
+// Copyright 2017 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// check that we don't emit multiple drop flags when they are not needed.
+
+fn main() {
+    let x = S.other(S.id());
+}
+
+pub fn test() {
+    let u = S;
+    let mut v = S;
+    drop(v);
+    v = u;
+}
+
+struct S;
+impl Drop for S {
+    fn drop(&mut self) {
+    }
+}
+
+impl S {
+    fn id(self) -> Self { self }
+    fn other(self, s: Self) {}
+}
+
+// END RUST SOURCE
+// START rustc.node4.ElaborateDrops.after.mir
+//    let mut _2: S;
+//    let mut _3: ();
+//    let mut _4: S;
+//    let mut _5: S;
+//    let mut _6: bool;
+//
+//    bb0: {
+// END rustc.node4.ElaborateDrops.after.mir
+// START rustc.node13.ElaborateDrops.after.mir
+//    let mut _2: ();
+//    let mut _4: ();
+//    let mut _5: S;
+//    let mut _6: S;
+//    let mut _7: bool;
+//
+//    bb0: {
+// END rustc.node13.ElaborateDrops.after.mir