Managing the scope stack. The scopes are tied to lexical scopes, so as
we descend the HAIR, we push a scope on the stack, build its contents,
and then pop it off. Every scope is named by a `region::Scope`.
When pushing a new scope, we record the current point in the graph (a
basic block); this marks the entry to the scope. We then generate more
stuff in the control-flow graph. Whenever the scope is exited, either
via a `break` or `return` or just by fallthrough, that marks an exit
from the scope. Each lexical scope thus corresponds to a single-entry,
multiple-exit (SEME) region in the control-flow graph.
For now, we keep a mapping from each `region::Scope` to its
corresponding SEME region for later reference (see the caveat in the
next paragraph), because region scopes are tied to these SEME regions.
Eventually, when we shift to non-lexical lifetimes, there should be no
need to remember this mapping.
There is one additional wrinkle, actually, that I wanted to hide from
you but duty compels me to mention. In the course of building matches,
it sometimes happens that certain code (namely guards) gets executed
multiple times. This means that a single lexical scope may in fact
correspond to multiple, disjoint SEME regions. So in fact our mapping
is from one scope to a vector of SEME regions.
The primary purpose for scopes is to insert drops: while building the
contents, we also accumulate places that need to be dropped upon exit
from each scope. This is done by calling `schedule_drop`. Once a drop
is scheduled, whenever we branch out we will insert drops of all those
places onto the outgoing edge. Note that we don't know the full set of
scheduled drops up front, and so whenever we exit from the scope we
only drop the values scheduled thus far. For example, consider the
scope S corresponding to this loop:
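
    loop {
        let x = ..;
        if cond { break; }
        let y = ..;
    }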
When processing the `let x`, we will add one drop to the scope for
`x`. The break will then insert a drop for `x`. When we process `let
y`, we will add another drop (in fact, to a subscope, but let's ignore
that for now); any later drops would also drop `y`.
There are numerous "normal" ways to early exit a scope: `break`,
`continue`, `return` (panics are handled separately). Whenever an
early exit occurs, the method `exit_scope` is called. It is given the
current point in execution where the early exit occurs, as well as the
scope you want to branch to (note that all early exits go to some
other enclosing scope). `exit_scope` will record this exit point and
insert all drops needed along that edge.
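
For instance, lowering a `break` might look roughly like this (a
sketch only; `this`, `current_block`, and `label_scope` are
illustrative names, not taken from the code below):

    let break_block = this.find_breakable_scope(span, label_scope).break_block;
    this.exit_scope(span, (label_scope, source_info), current_block, break_block);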
Panics are handled in a similar fashion, except that a panic always
returns out to the `DIVERGE_BLOCK`. To trigger a panic, simply call
`panic(p)` with the current point `p`; or else you can call
`diverge_cleanup`, which will produce a block that you can branch to,
which does the appropriate cleanup and then diverges. `panic(p)`
simply calls `diverge_cleanup()` and adds an edge from `p` to the
result.
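
The `build_drop` helper defined below wires a drop's unwind edge
through this cleanup path; in rough outline:

    let diverge_target = this.diverge_cleanup();
    this.cfg.terminate(block, source_info, TerminatorKind::Drop {
        location,
        target: next_target,
        unwind: Some(diverge_target),
    });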
In addition to the normal scope stack, we track a loop scope stack
that contains only loops. It tracks where a `break` and `continue`
should go to.
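
Breakable scopes are entered via `in_breakable_scope` (defined below).
A loop might be lowered roughly like this (a sketch; names are
illustrative):

    let loop_block = this.cfg.start_new_block();
    let exit_block = this.cfg.start_new_block();
    this.in_breakable_scope(Some(loop_block), exit_block, destination, |this| {
        // ...build the loop body; a `break` branches to `exit_block`,
        // a `continue` back to `loop_block`...
    });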
use crate::build::{BlockAnd, BlockAndExtension, Builder, CFG};
use crate::hair::LintLevel;
use rustc::middle::region;
use rustc::ty::Ty;
use rustc::hir;
use rustc::hir::def_id::LOCAL_CRATE;
use rustc::mir::*;
use syntax_pos::Span;
use rustc_data_structures::fx::FxHashMap;
use std::collections::hash_map::Entry;
#[derive(Debug)]
pub struct Scope<'tcx> {
    /// The source scope this scope was created in.
    source_scope: SourceScope,

    /// The region span of this scope within source code.
    region_scope: region::Scope,

    /// The span of that `region_scope`.
    region_scope_span: Span,

    /// Whether there's anything to do for the cleanup path, that is,
    /// when unwinding through this scope. This includes destructors,
    /// but not StorageDead statements, which don't get emitted at all
    /// for unwinding, for several reasons:
    /// * clang doesn't emit llvm.lifetime.end for C++ unwinding
    /// * LLVM's memory dependency analysis can't handle it atm
    /// * polluting the cleanup MIR with StorageDead creates
    ///   landing pads even though there are no actual destructors
    /// * freeing up stack space has no effect during unwinding
    needs_cleanup: bool,

    /// Set of places to drop when exiting this scope. This starts
    /// out empty but grows as variables are declared during the
    /// building process. This is a stack, so we always drop from the
    /// end of the vector (top of the stack) first.
    drops: Vec<DropData<'tcx>>,

    /// The cache for drop chain on “normal” exit into a particular BasicBlock.
    cached_exits: FxHashMap<(BasicBlock, region::Scope), BasicBlock>,

    /// The cache for drop chain on "generator drop" exit.
    cached_generator_drop: Option<BasicBlock>,

    /// The cache for drop chain on "unwind" exit.
    cached_unwind: CachedBlock,
}
#[derive(Debug)]
struct DropData<'tcx> {
    /// The span where the drop obligation was incurred (typically where the
    /// place was declared).
    span: Span,

    /// The place to drop.
    location: Place<'tcx>,

    /// Whether this is a value Drop or a StorageDead.
    kind: DropKind,
}
#[derive(Debug, Default, Clone, Copy)]
pub(crate) struct CachedBlock {
    /// The cached block for the cleanups-on-diverge path. This block
    /// contains code to run the current drop and all of the preceding
    /// drops (i.e., those having lower index in the current Scope's
    /// drop array).
    unwind: Option<BasicBlock>,

    /// The cached block for unwinds during the cleanups-on-generator-drop
    /// path.
    ///
    /// This is split from the standard unwind path here to prevent drop
    /// elaboration from creating drop flags that would have to be captured
    /// by the generator. I'm not sure how important this optimization is.
    generator_drop: Option<BasicBlock>,
}
#[derive(Debug)]
pub(crate) enum DropKind {
    Value {
        cached_block: CachedBlock,
    },
    Storage,
}
#[derive(Clone, Debug)]
pub struct BreakableScope<'tcx> {
    /// Region scope of the loop.
    pub region_scope: region::Scope,
    /// Where the body of the loop begins. `None` if this is a `break`-able
    /// block rather than a loop.
    pub continue_block: Option<BasicBlock>,
    /// Block to branch into when the loop or block terminates (either by being
    /// `break`-en out from, or by having its condition become false).
    pub break_block: BasicBlock,
    /// The destination of the loop/block expression itself (i.e., where to put
    /// the result of a `break` expression).
    pub break_destination: Place<'tcx>,
}
impl CachedBlock {
    fn invalidate(&mut self) {
        self.generator_drop = None;
        self.unwind = None;
    }

    fn get(&self, generator_drop: bool) -> Option<BasicBlock> {
        if generator_drop {
            self.generator_drop
        } else {
            self.unwind
        }
    }

    fn ref_mut(&mut self, generator_drop: bool) -> &mut Option<BasicBlock> {
        if generator_drop {
            &mut self.generator_drop
        } else {
            &mut self.unwind
        }
    }
}

impl DropKind {
    fn may_panic(&self) -> bool {
        match *self {
            DropKind::Value { .. } => true,
            DropKind::Storage => false
        }
    }
}
impl<'tcx> Scope<'tcx> {
    /// Invalidates all the cached blocks in the scope.
    ///
    /// Should always be run for all inner scopes when a drop is pushed into some scope enclosing a
    /// larger extent of code.
    ///
    /// `storage_only` controls whether to invalidate only drop paths that run `StorageDead`.
    /// `this_scope_only` controls whether to invalidate only drop paths that refer to the current
    /// top-of-scope (as opposed to dependent scopes).
    fn invalidate_cache(&mut self, storage_only: bool, this_scope_only: bool) {
        // FIXME: maybe do shared caching of `cached_exits` etc. to handle functions
        // with lots of `try!`?

        // cached exits drop storage and refer to the top-of-scope
        self.cached_exits.clear();

        if !storage_only {
            // the current generator drop and unwind ignore
            // storage but refer to top-of-scope
            self.cached_generator_drop = None;
            self.cached_unwind.invalidate();
        }

        if !storage_only && !this_scope_only {
            for drop_data in &mut self.drops {
                if let DropKind::Value { ref mut cached_block } = drop_data.kind {
                    cached_block.invalidate();
                }
            }
        }
    }
    /// Given a span and this scope's source scope, make a SourceInfo.
    fn source_info(&self, span: Span) -> SourceInfo {
        SourceInfo {
            span,
            scope: self.source_scope
        }
    }
}
impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> {
    // Adding and removing scopes
    // ==========================
    /// Start a breakable scope, which tracks where `continue` and `break`
    /// should branch to. See module comment for more details.
    ///
    /// Returns the result of evaluating `f` with the breakable scope pushed.
    pub fn in_breakable_scope<F, R>(&mut self,
                                    loop_block: Option<BasicBlock>,
                                    break_block: BasicBlock,
                                    break_destination: Place<'tcx>,
                                    f: F) -> R
        where F: FnOnce(&mut Builder<'a, 'gcx, 'tcx>) -> R
    {
        let region_scope = self.topmost_scope();
        let scope = BreakableScope {
            region_scope,
            continue_block: loop_block,
            break_block,
            break_destination,
        };
        self.breakable_scopes.push(scope);
        let res = f(self);
        let breakable_scope = self.breakable_scopes.pop().unwrap();
        assert!(breakable_scope.region_scope == region_scope);
        res
    }
    /// Convenience wrapper that pushes a scope (if one is given) and then
    /// executes `f` to build its contents, popping the scope afterwards.
    pub fn in_opt_scope<F, R>(&mut self,
                              opt_scope: Option<(region::Scope, SourceInfo)>,
                              mut block: BasicBlock,
                              f: F)
                              -> BlockAnd<R>
        where F: FnOnce(&mut Builder<'a, 'gcx, 'tcx>) -> BlockAnd<R>
    {
        debug!("in_opt_scope(opt_scope={:?}, block={:?})", opt_scope, block);
        if let Some(region_scope) = opt_scope { self.push_scope(region_scope); }
        let rv = unpack!(block = f(self));
        if let Some(region_scope) = opt_scope {
            unpack!(block = self.pop_scope(region_scope, block));
        }
        debug!("in_scope: exiting opt_scope={:?} block={:?}", opt_scope, block);
        block.and(rv)
    }
    /// Convenience wrapper that pushes a scope and then executes `f`
    /// to build its contents, popping the scope afterwards.
    pub fn in_scope<F, R>(&mut self,
                          region_scope: (region::Scope, SourceInfo),
                          lint_level: LintLevel,
                          mut block: BasicBlock,
                          f: F)
                          -> BlockAnd<R>
        where F: FnOnce(&mut Builder<'a, 'gcx, 'tcx>) -> BlockAnd<R>
    {
        debug!("in_scope(region_scope={:?}, block={:?})", region_scope, block);
        let source_scope = self.source_scope;
        let tcx = self.hir.tcx();
        if let LintLevel::Explicit(node_id) = lint_level {
            let same_lint_scopes = tcx.dep_graph.with_ignore(|| {
                let sets = tcx.lint_levels(LOCAL_CRATE);
                let parent_hir_id =
                    tcx.hir().definitions().node_to_hir_id(
                        self.source_scope_local_data[source_scope].lint_root
                    );
                let current_hir_id =
                    tcx.hir().definitions().node_to_hir_id(node_id);
                sets.lint_level_set(parent_hir_id) ==
                    sets.lint_level_set(current_hir_id)
            });
            if !same_lint_scopes {
                self.source_scope =
                    self.new_source_scope(region_scope.1.span, lint_level,
                                          None);
            }
        }
        self.push_scope(region_scope);
        let rv = unpack!(block = f(self));
        unpack!(block = self.pop_scope(region_scope, block));
        self.source_scope = source_scope;
        debug!("in_scope: exiting region_scope={:?} block={:?}", region_scope, block);
        block.and(rv)
    }
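
    // Hedged usage sketch (names are illustrative, not from this file): an
    // expression with its own region scope and lint level would typically be
    // built as:
    //
    //     let value = unpack!(block = this.in_scope(
    //         (region_scope, source_info),
    //         lint_level,
    //         block,
    //         |this| { /* build the subexpression's MIR here */ },
    //     ));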
    /// Push a scope onto the stack. You can then build code in this
    /// scope and call `pop_scope` afterwards. Note that these two
    /// calls must be paired; using `in_scope` as a convenience
    /// wrapper may be preferable.
    pub fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo)) {
        debug!("push_scope({:?})", region_scope);
        let vis_scope = self.source_scope;
        self.scopes.push(Scope {
            source_scope: vis_scope,
            region_scope: region_scope.0,
            region_scope_span: region_scope.1.span,
            needs_cleanup: false,
            drops: vec![],
            cached_generator_drop: None,
            cached_exits: Default::default(),
            cached_unwind: CachedBlock::default(),
        });
    }
    /// Pops a scope, which should have region scope `region_scope`,
    /// adding any drops onto the end of `block` that are needed.
    /// This must match 1-to-1 with `push_scope`.
    pub fn pop_scope(&mut self,
                     region_scope: (region::Scope, SourceInfo),
                     mut block: BasicBlock)
                     -> BlockAnd<()> {
        debug!("pop_scope({:?}, {:?})", region_scope, block);
        // If we are emitting a `drop` statement, we need to have the cached
        // diverge cleanup pads ready in case that drop panics.
        let may_panic =
            self.scopes.last().unwrap().drops.iter().any(|s| s.kind.may_panic());
        if may_panic {
            self.diverge_cleanup();
        }
        let scope = self.scopes.pop().unwrap();
        assert_eq!(scope.region_scope, region_scope.0);

        let unwind_to = self.scopes.last().and_then(|next_scope| {
            next_scope.cached_unwind.get(false)
        }).unwrap_or_else(|| self.resume_block());

        unpack!(block = build_scope_drops(
            &mut self.cfg,
            &scope,
            block,
            unwind_to,
            self.arg_count,
            false,
        ));

        block.unit()
    }
    /// Branch out of `block` to `target`, exiting all scopes up to
    /// and including `region_scope`. This will insert whatever drops are
    /// needed. See module comment for details.
    pub fn exit_scope(&mut self,
                      span: Span,
                      region_scope: (region::Scope, SourceInfo),
                      mut block: BasicBlock,
                      target: BasicBlock) {
        debug!("exit_scope(region_scope={:?}, block={:?}, target={:?})",
               region_scope, block, target);
        let scope_count = 1 + self.scopes.iter().rev()
            .position(|scope| scope.region_scope == region_scope.0)
            .unwrap_or_else(|| {
                span_bug!(span, "region_scope {:?} does not enclose", region_scope)
            });
        let len = self.scopes.len();
        assert!(scope_count < len, "should not use `exit_scope` to pop ALL scopes");

        // If we are emitting a `drop` statement, we need to have the cached
        // diverge cleanup pads ready in case that drop panics.
        let may_panic = self.scopes[(len - scope_count)..].iter().any(|s| s.needs_cleanup);
        if may_panic {
            self.diverge_cleanup();
        }

        let mut scopes = self.scopes[(len - scope_count - 1)..].iter_mut().rev();
        let mut scope = scopes.next().unwrap();
        for next_scope in scopes {
            if scope.drops.is_empty() {
                scope = next_scope;
                continue;
            }
            let source_info = scope.source_info(span);
            block = match scope.cached_exits.entry((target, region_scope.0)) {
                Entry::Occupied(e) => {
                    self.cfg.terminate(block, source_info,
                                       TerminatorKind::Goto { target: *e.get() });
                    return;
                }
                Entry::Vacant(v) => {
                    let b = self.cfg.start_new_block();
                    self.cfg.terminate(block, source_info,
                                       TerminatorKind::Goto { target: b });
                    v.insert(b);
                    b
                }
            };

            let unwind_to = next_scope.cached_unwind.get(false).unwrap_or_else(|| {
                debug_assert!(!may_panic, "cached block not present?");
                START_BLOCK
            });

            unpack!(block = build_scope_drops(
                &mut self.cfg,
                scope,
                block,
                unwind_to,
                self.arg_count,
                false,
            ));

            scope = next_scope;
        }

        let scope = &self.scopes[len - scope_count];
        self.cfg.terminate(block, scope.source_info(span),
                           TerminatorKind::Goto { target });
    }
    /// Creates a path that performs all required cleanup for dropping a generator.
    ///
    /// This path terminates in GeneratorDrop. Returns the start of the path.
    /// `None` indicates there’s no cleanup to do at this point.
    pub fn generator_drop_cleanup(&mut self) -> Option<BasicBlock> {
        if !self.scopes.iter().any(|scope| scope.needs_cleanup) {
            return None;
        }

        // Fill in the cache for unwinds
        self.diverge_cleanup_gen(true);

        let src_info = self.scopes[0].source_info(self.fn_span);
        let resume_block = self.resume_block();
        let mut scopes = self.scopes.iter_mut().rev().peekable();
        let mut block = self.cfg.start_new_block();
        let result = block;

        while let Some(scope) = scopes.next() {
            if !scope.needs_cleanup {
                continue;
            }

            block = if let Some(b) = scope.cached_generator_drop {
                self.cfg.terminate(block, src_info,
                                   TerminatorKind::Goto { target: b });
                return Some(result);
            } else {
                let b = self.cfg.start_new_block();
                scope.cached_generator_drop = Some(b);
                self.cfg.terminate(block, src_info,
                                   TerminatorKind::Goto { target: b });
                b
            };

            let unwind_to = scopes.peek().as_ref().map(|scope| {
                scope.cached_unwind.get(true).unwrap_or_else(|| {
                    span_bug!(src_info.span, "cached block not present?")
                })
            }).unwrap_or(resume_block);

            unpack!(block = build_scope_drops(
                &mut self.cfg,
                scope,
                block,
                unwind_to,
                self.arg_count,
                true,
            ));
        }

        self.cfg.terminate(block, src_info, TerminatorKind::GeneratorDrop);

        Some(result)
    }
    /// Creates a new source scope, nested in the current one.
    pub fn new_source_scope(&mut self,
                            span: Span,
                            lint_level: LintLevel,
                            safety: Option<Safety>) -> SourceScope {
        let parent = self.source_scope;
        debug!("new_source_scope({:?}, {:?}, {:?}) - parent({:?})={:?}",
               span, lint_level, safety,
               parent, self.source_scope_local_data.get(parent));
        let scope = self.source_scopes.push(SourceScopeData {
            span,
            parent_scope: Some(parent),
        });
        let scope_local_data = SourceScopeLocalData {
            lint_root: if let LintLevel::Explicit(lint_root) = lint_level {
                lint_root
            } else {
                self.source_scope_local_data[parent].lint_root
            },
            safety: safety.unwrap_or_else(|| {
                self.source_scope_local_data[parent].safety
            })
        };
        self.source_scope_local_data.push(scope_local_data);
        scope
    }
    /// Finds the breakable scope for a given label. This is used for
    /// resolving `break` and `continue`.
    pub fn find_breakable_scope(&self,
                                span: Span,
                                label: region::Scope)
                                -> &BreakableScope<'tcx> {
        // find the loop-scope with the correct id
        self.breakable_scopes.iter()
            .rev()
            .filter(|breakable_scope| breakable_scope.region_scope == label)
            .next()
            .unwrap_or_else(|| span_bug!(span, "no enclosing breakable scope found"))
    }

    /// Given a span and the current source scope, make a SourceInfo.
    pub fn source_info(&self, span: Span) -> SourceInfo {
        SourceInfo {
            span,
            scope: self.source_scope
        }
    }
    /// Returns the `region::Scope` of the scope which should be exited by a
    /// return.
    pub fn region_scope_of_return_scope(&self) -> region::Scope {
        // The outermost scope (`scopes[0]`) will be the `CallSiteScope`.
        // We want `scopes[1]`, which is the `ParameterScope`.
        assert!(self.scopes.len() >= 2);
        assert!(match self.scopes[1].region_scope.data {
            region::ScopeData::Arguments => true,
            _ => false,
        });
        self.scopes[1].region_scope
    }

    /// Returns the topmost active scope, which is known to be alive until
    /// the next scope expression.
    pub fn topmost_scope(&self) -> region::Scope {
        self.scopes.last().expect("topmost_scope: no scopes present").region_scope
    }
    /// Returns the scope that we should use as the lifetime of an
    /// operand. Basically, an operand must live until it is consumed.
    /// This is similar to, but not quite the same as, the temporary
    /// scope (which can be larger or smaller).
    ///
    /// Consider:
    ///
    ///     let x = foo(bar(X, Y));
    ///
    /// We wish to pop the storage for X and Y after `bar()` is
    /// called, not after the whole `let` is completed.
    ///
    /// As another example, if the second argument diverges:
    ///
    ///     foo(Box::new(2), panic!())
    ///
    /// We would allocate the box but then free it on the unwinding
    /// path; we would also emit a free on the 'success' path from
    /// panic, but that will turn out to be removed as dead-code.
    ///
    /// When building statics/constants, returns `None` since
    /// intermediate values do not have to be dropped in that case.
    pub fn local_scope(&self) -> Option<region::Scope> {
        match self.hir.body_owner_kind {
            hir::BodyOwnerKind::Const |
            hir::BodyOwnerKind::Static(_) =>
                // No need to free storage in this context.
                None,
            hir::BodyOwnerKind::Closure |
            hir::BodyOwnerKind::Fn =>
                Some(self.topmost_scope()),
        }
    }
    // Schedule an abort block - this is used for some ABIs that cannot unwind
    pub fn schedule_abort(&mut self) -> BasicBlock {
        self.scopes[0].needs_cleanup = true;
        let abortblk = self.cfg.start_new_cleanup_block();
        let source_info = self.scopes[0].source_info(self.fn_span);
        self.cfg.terminate(abortblk, source_info, TerminatorKind::Abort);
        self.cached_resume_block = Some(abortblk);
        abortblk
    }
    /// Schedules both the `StorageDead` and the value drop for `place` on
    /// exit from `region_scope`.
    pub fn schedule_drop_storage_and_value(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        place: &Place<'tcx>,
        place_ty: Ty<'tcx>,
    ) {
        self.schedule_drop(
            span, region_scope, place, place_ty,
            DropKind::Storage,
        );
        self.schedule_drop(
            span, region_scope, place, place_ty,
            DropKind::Value {
                cached_block: CachedBlock::default(),
            },
        );
    }
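
    // Hedged usage sketch (names are illustrative): after creating a fresh
    // temporary `temp` of type `ty` in `temp_scope`, a caller schedules both
    // its `StorageDead` and its value drop on scope exit:
    //
    //     this.schedule_drop_storage_and_value(
    //         span, temp_scope, &Place::Local(temp), ty);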
    /// Indicates that `place` should be dropped on exit from
    /// `region_scope`.
    ///
    /// When called with `DropKind::Storage`, `place` should be a local
    /// with an index higher than the current `self.arg_count`.
    pub fn schedule_drop(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        place: &Place<'tcx>,
        place_ty: Ty<'tcx>,
        drop_kind: DropKind,
    ) {
        let needs_drop = self.hir.needs_drop(place_ty);
        match drop_kind {
            DropKind::Value { .. } => if !needs_drop { return },
            DropKind::Storage => {
                match *place {
                    Place::Local(index) => if index.index() <= self.arg_count {
                        span_bug!(
                            span, "`schedule_drop` called with index {} and arg_count {}",
                            index.index(),
                            self.arg_count,
                        )
                    },
                    _ => span_bug!(
                        span, "`schedule_drop` called with non-`Local` place {:?}", place
                    ),
                }
            }
        }

        for scope in self.scopes.iter_mut().rev() {
            let this_scope = scope.region_scope == region_scope;
            // When building drops, we try to cache chains of drops in such a way that these
            // drops can be reused by other drops which branch into the cached (already built)
            // blocks. This, however, means that whenever we add a drop into a scope which already
            // had some blocks built (and thus, cached) for it, we must invalidate all caches which
            // might branch into the scope which had a drop just added to it. This is necessary,
            // because otherwise some other code might use the cache to branch into an already
            // built chain of drops, essentially ignoring the newly added drop.
            //
            // For example, consider two scopes with a drop in each. These are built and
            // thus the caches are filled:
            //
            // +--------------------------------------------------------+
            // | +---------------------------------+                    |
            // | | +--------+     +-------------+  |  +---------------+ |
            // | | | return | <-+ | drop(outer) | <-+ | drop(middle)  | |
            // | | +--------+  |  +-------------+  |  +---------------+ |
            // | +------------|outer_scope cache|--+                    |
            // +------------------------------|middle_scope cache|------+
            //
            // Now, a new, inner-most scope is added along with a new drop into both the
            // inner-most and the outer-most scope:
            //
            // +------------------------------------------------------------+
            // | +----------------------------------+                       |
            // | | +--------+      +-------------+  |  +---------------+    | +-------------+
            // | | | return | <-+  | drop(new)   | <-+ | drop(middle)  | <--+ | drop(inner) |
            // | | +--------+  |   | drop(outer) |  |  +---------------+    | +-------------+
            // | |             +-+ +-------------+  |                       |
            // | +---|invalid outer_scope cache|----+                       |
            // +----------------------|invalid middle_scope cache|----------+
            //
            // If, when adding `drop(new)`, we do not invalidate the cached blocks for both
            // outer_scope and middle_scope, then, when building drops for the inner (right-most)
            // scope, the old, cached blocks, without `drop(new)`, will get used, producing the
            // wrong results.
            //
            // The cache and its invalidation for the unwind branch is somewhat special. The cache
            // is per-drop, rather than per-scope, which has several different implications. Adding
            // a new drop into a scope will not invalidate cached blocks of the prior drops in the
            // scope. That is true, because none of the already existing drops will have an edge
            // into a block with the newly added drop.
            //
            // Note that this code iterates scopes from the inner-most to the outer-most,
            // invalidating caches of each scope visited. This way the bare minimum of the
            // caches gets invalidated; i.e., if a new drop is added into the middle scope, the
            // cache of the outer scope stays intact.
            scope.invalidate_cache(!needs_drop, this_scope);
            if this_scope {
                if let DropKind::Value { .. } = drop_kind {
                    scope.needs_cleanup = true;
                }

                let region_scope_span = region_scope.span(self.hir.tcx(),
                                                          &self.hir.region_scope_tree);
                // Attribute scope exit drops to scope's closing brace.
                let scope_end = self.hir.tcx().sess.source_map().end_point(region_scope_span);

                scope.drops.push(DropData {
                    span: scope_end,
                    location: place.clone(),
                    kind: drop_kind
                });
                return;
            }
        }
        span_bug!(span, "region scope {:?} not in scope to drop {:?}", region_scope, place);
    }
    /// Creates a path that performs all required cleanup for unwinding.
    ///
    /// This path terminates in Resume. Returns the start of the path.
    /// See module comment for more details.
    pub fn diverge_cleanup(&mut self) -> BasicBlock {
        self.diverge_cleanup_gen(false)
    }

    fn resume_block(&mut self) -> BasicBlock {
        if let Some(target) = self.cached_resume_block {
            target
        } else {
            let resumeblk = self.cfg.start_new_cleanup_block();
            self.cfg.terminate(resumeblk,
                               SourceInfo {
                                   scope: OUTERMOST_SOURCE_SCOPE,
                                   span: self.fn_span
                               },
                               TerminatorKind::Resume);
            self.cached_resume_block = Some(resumeblk);
            resumeblk
        }
    }
    fn diverge_cleanup_gen(&mut self, generator_drop: bool) -> BasicBlock {
        // Build up the drops in **reverse** order. The end result will
        // look like:
        //
        //     scopes[n] -> scopes[n-1] -> ... -> scopes[0]
        //
        // However, we build this in **reverse order**. That is, we
        // process scopes[0], then scopes[1], etc, pointing each one at
        // the result generated from the one before. Along the way, we
        // store caches. If everything is cached, we'll just walk right
        // to left reading the cached results but never creating anything.

        // Find the last cached block
        let (mut target, first_uncached) = if let Some(cached_index) = self.scopes.iter()
            .rposition(|scope| scope.cached_unwind.get(generator_drop).is_some()) {
            (self.scopes[cached_index].cached_unwind.get(generator_drop).unwrap(),
             cached_index + 1)
        } else {
            (self.resume_block(), 0)
        };

        for scope in self.scopes[first_uncached..].iter_mut() {
            target = build_diverge_scope(&mut self.cfg, scope.region_scope_span,
                                         scope, target, generator_drop);
        }

        target
    }
    /// Utility function for *non*-scope code to build their own drops.
    pub fn build_drop(&mut self,
                      block: BasicBlock,
                      span: Span,
                      location: Place<'tcx>,
                      ty: Ty<'tcx>) -> BlockAnd<()> {
        if !self.hir.needs_drop(ty) {
            return block.unit();
        }
        let source_info = self.source_info(span);
        let next_target = self.cfg.start_new_block();
        let diverge_target = self.diverge_cleanup();
        self.cfg.terminate(block, source_info,
                           TerminatorKind::Drop {
                               location,
                               target: next_target,
                               unwind: Some(diverge_target),
                           });
        next_target.unit()
    }
    /// Utility function for *non*-scope code to build their own drops.
    pub fn build_drop_and_replace(&mut self,
                                  block: BasicBlock,
                                  span: Span,
                                  location: Place<'tcx>,
                                  value: Operand<'tcx>) -> BlockAnd<()> {
        let source_info = self.source_info(span);
        let next_target = self.cfg.start_new_block();
        let diverge_target = self.diverge_cleanup();
        self.cfg.terminate(block, source_info,
                           TerminatorKind::DropAndReplace {
                               location,
                               value,
                               target: next_target,
                               unwind: Some(diverge_target),
                           });
        next_target.unit()
    }
    /// Creates an Assert terminator and returns the success block.
    /// If the boolean condition operand is not the expected value,
    /// a runtime panic will be caused with the given message.
    pub fn assert(&mut self, block: BasicBlock,
                  cond: Operand<'tcx>,
                  expected: bool,
                  msg: AssertMessage<'tcx>,
                  span: Span)
                  -> BasicBlock {
        let source_info = self.source_info(span);

        let success_block = self.cfg.start_new_block();
        let cleanup = self.diverge_cleanup();

        self.cfg.terminate(block, source_info,
                           TerminatorKind::Assert {
                               cond,
                               expected,
                               msg,
                               target: success_block,
                               cleanup: Some(cleanup),
                           });

        success_block
    }
}
/// Builds drops for pop_scope and exit_scope.
fn build_scope_drops<'tcx>(
    cfg: &mut CFG<'tcx>,
    scope: &Scope<'tcx>,
    mut block: BasicBlock,
    last_unwind_to: BasicBlock,
    arg_count: usize,
    generator_drop: bool,
) -> BlockAnd<()> {
    debug!("build_scope_drops({:?} -> {:?})", block, scope);

    // Build up the drops in evaluation order. The end result will
    // look like:
    //
    // [SDs, drops[n]] --..> [SDs, drop[1]] -> [SDs, drop[0]] -> [[SDs]]
    //                            |                 |
    //                            v                 v
    // [drop[n]] -...-> [drop[1]] ------> [drop[0]] ------> [last_unwind_to]
    //
    // The horizontal arrows represent the execution path when the drops return
    // successfully. The downwards arrows represent the execution path when the
    // drops panic (panicking while unwinding will abort, so there's no need for
    // another set of arrows). The drops for the unwind path should have already
    // been generated by `diverge_cleanup_gen`.
    //
    // The code in this function reads from right to left: we walk the drops
    // in reverse declaration order, emitting a `Drop` terminator for each
    // value drop (with its unwind edge wired to the corresponding cached
    // unwind block) and a `StorageDead` statement for each storage drop.

    let mut unwind_blocks = scope.drops.iter().rev().filter_map(|drop_data| {
        if let DropKind::Value { cached_block } = drop_data.kind {
            Some(cached_block.get(generator_drop).unwrap_or_else(|| {
                span_bug!(drop_data.span, "cached block not present?")
            }))
        } else {
            None
        }
    });

    // When we unwind from a drop, we start cleaning up from the next one, so
    // we don't need this block.
    unwind_blocks.next();

    for drop_data in scope.drops.iter().rev() {
        let source_info = scope.source_info(drop_data.span);
        match drop_data.kind {
            DropKind::Value { .. } => {
                let unwind_to = unwind_blocks.next().unwrap_or(last_unwind_to);

                let next = cfg.start_new_block();
                cfg.terminate(block, source_info, TerminatorKind::Drop {
                    location: drop_data.location.clone(),
                    target: next,
                    unwind: Some(unwind_to)
                });
                block = next;
            }
            DropKind::Storage => {
                // We do not need to emit StorageDead for generator drops
                if generator_drop {
                    continue
                }

                // Drop the storage for both value and storage drops.
                // Only temps and vars need their storage dead.
                match drop_data.location {
                    Place::Local(index) if index.index() > arg_count => {
                        cfg.push(block, Statement {
                            source_info,
                            kind: StatementKind::StorageDead(index)
                        });
                    }
                    _ => unreachable!(),
                }
            }
        }
    }
    block.unit()
}
fn build_diverge_scope<'tcx>(cfg: &mut CFG<'tcx>,
                             span: Span,
                             scope: &mut Scope<'tcx>,
                             mut target: BasicBlock,
                             generator_drop: bool)
                             -> BasicBlock
{
    // Build up the drops in **reverse** order. The end result will
    // look like:
    //
    //     [drops[n]] -...-> [drops[0]] -> [target]
    //
    // The code in this function reads from right to left. At each
    // point, we check for cached blocks representing the
    // remainder. If everything is cached, we'll just walk right to
    // left reading the cached results but never create anything.

    let source_scope = scope.source_scope;
    let source_info = |span| SourceInfo {
        span,
        scope: source_scope
    };

    // Next, build up the drops. Here we iterate the vector in
    // *forward* order, so that we generate drops[0] first (right to
    // left in the diagram above).
    for (j, drop_data) in scope.drops.iter_mut().enumerate() {
        debug!("build_diverge_scope drop_data[{}]: {:?}", j, drop_data);
        // Only full value drops are emitted in the diverging path,
        // not `StorageDead`.
        //
        // Note: This may not actually be what we desire (are we
        // "freeing" stack storage as we unwind, or merely observing a
        // frozen stack)? In particular, the intent may have been to
        // match the behavior of clang, but on inspection eddyb says
        // this is not what clang does.
        let cached_block = match drop_data.kind {
            DropKind::Value { ref mut cached_block } => cached_block.ref_mut(generator_drop),
            DropKind::Storage => continue
        };
        target = if let Some(cached_block) = *cached_block {
            cached_block
        } else {
            let block = cfg.start_new_cleanup_block();
            cfg.terminate(block, source_info(drop_data.span),
                          TerminatorKind::Drop {
                              location: drop_data.location.clone(),
                              target,
                              unwind: None
                          });
            *cached_block = Some(block);
            block
        };
    }

    *scope.cached_unwind.ref_mut(generator_drop) = Some(target);

    debug!("build_diverge_scope({:?}, {:?}) = {:?}", scope, span, target);

    target
}