Managing the scope stack. The scopes are tied to lexical scopes, so as
we descend the HAIR, we push a scope on the stack, build its
contents, and then pop it off. Every scope is named by a
`region::Scope`.

### SEME Regions

When pushing a new scope, we record the current point in the graph (a
basic block); this marks the entry to the scope. We then generate more
stuff in the control-flow graph. Whenever the scope is exited, either
via a `break` or `return` or just by fallthrough, that marks an exit
from the scope. Each lexical scope thus corresponds to a single-entry,
multiple-exit (SEME) region in the control-flow graph.

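As a rough sketch of this discipline (mirroring what `in_scope` does below; `build_contents` is a hypothetical stand-in for whatever CFG-building happens inside the scope):

```rust
builder.push_scope(region_scope);                         // record the entry point
let rv = unpack!(block = build_contents(builder, block)); // hypothetical: build the scope's CFG
unpack!(block = builder.pop_scope(region_scope, block));  // fallthrough exit: emit scheduled drops
```
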
For now, we keep a mapping from each `region::Scope` to its
corresponding SEME region for later reference (see caveat in next
paragraph). This is because region scopes are tied to
these SEME regions. Eventually, when we shift to non-lexical lifetimes,
there should be no need to remember this mapping.

### Not so SEME Regions

In the course of building matches, it sometimes happens that certain code
(namely guards) gets executed multiple times. This means that a single lexical
scope may in fact correspond to multiple, disjoint SEME regions. So in fact our
mapping is from one scope to a vector of SEME regions.

Also in matches, the scopes assigned to arms are not even SEME regions! Each
arm has a single region with one entry for each pattern. We manually
manipulate the scheduled drops in this scope to avoid dropping things multiple
times, although drop elaboration would clean this up for value drops.

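For example, in an illustrative snippet like the following (all names hypothetical), an or-pattern arm with a guard has one entry per pattern, and the guard may run once for each of them:

```rust
match val {
    // Two entries into the arm's scope (one per pattern); the guard
    // `predicate` may execute on both paths, so its code corresponds
    // to disjoint regions of the CFG.
    Pat::A(x) | Pat::B(x) if predicate(&x) => use_binding(x),
    _ => fallback(),
}
```
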
### Drops

The primary purpose for scopes is to insert drops: while building
the contents, we also accumulate places that need to be dropped upon
exit from each scope. This is done by calling `schedule_drop`. Once a
drop is scheduled, whenever we branch out we will insert drops of all
those places onto the outgoing edge. Note that we don't know the full
set of scheduled drops up front, and so whenever we exit from the
scope we only drop the values scheduled thus far. For example, consider
the scope S corresponding to this loop:

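```rust
let cond = true;
loop {
    // the loop body is the scope S; `String` stands in for any value
    // with a destructor
    let x = String::new();
    if cond { break; } // exiting S here drops only `x`, the sole drop scheduled so far
    let y = String::new();
} // the fallthrough exit drops `y` and then `x`
```
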
When processing the `let x`, we will add one drop to the scope for
`x`. The break will then insert a drop for `x`. When we process `let
y`, we will add another drop (in fact, to a subscope, but let's ignore
that for now); any later drops would also drop `y`.

### Early exit

There are numerous "normal" ways to early exit a scope: `break`,
`continue`, `return` (panics are handled separately). Whenever an
early exit occurs, the method `exit_scope` is called. It is given the
current point in execution where the early exit occurs, as well as the
scope you want to branch to (note that all early exits from a scope are
to some other enclosing scope). `exit_scope` will record this exit point
and insert all of the drops scheduled thus far.

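Sketched in terms of this file's API (with `builder`, `breakable_scope`, `block`, `span`, and `source_info` as contextual placeholders), lowering a `break` looks roughly like:

```rust
// Branch from the current block to the loop's break target, emitting
// drops for every scope popped on the way out:
let region_scope = breakable_scope.region_scope;
let break_block = breakable_scope.break_block;
builder.exit_scope(span, (region_scope, source_info), block, break_block);
```
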
Panics are handled in a similar fashion, except that a panic always
returns out to the `DIVERGE_BLOCK`. To trigger a panic, simply call
`panic(p)` with the current point `p`. Or else you can call
`diverge_cleanup`, which will produce a block that you can branch to
which does the appropriate cleanup and then diverges. `panic(p)`
simply calls `diverge_cleanup()` and adds an edge from `p` to the
result.

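The drop-building helpers later in this file use this same pattern; roughly (with `place` and the block names as placeholders):

```rust
// Materialize the diverging cleanup path once, then use it as the
// unwind target of a terminator that can panic:
let diverge_target = builder.diverge_cleanup();
builder.cfg.terminate(block, source_info, TerminatorKind::Drop {
    location: place,
    target: next_block,
    unwind: Some(diverge_target),
});
```
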
### Loop scopes

In addition to the normal scope stack, we track a loop scope stack
that contains only loops. It tracks where a `break` and `continue`
should go to.

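For example, with nested loops, a labeled `break` consults this stack to find which scope to exit:

```rust
'outer: loop {
    loop {
        // Branches to the `break_block` of the `'outer` breakable scope,
        // emitting scheduled drops from both loop scopes on the way out.
        break 'outer;
    }
}
```
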
use crate::build::{BlockAnd, BlockAndExtension, Builder, CFG};
use crate::hair::LintLevel;
use rustc::middle::region;
use rustc::ty::Ty;
use rustc::hir;
use rustc::mir::*;
use syntax_pos::Span;
use rustc_data_structures::fx::FxHashMap;
use std::collections::hash_map::Entry;

#[derive(Debug)]
pub struct Scope<'tcx> {
    /// The source scope this scope was created in.
    source_scope: SourceScope,

    /// The region scope of this scope within the source code.
    region_scope: region::Scope,

    /// The span of that `region_scope`.
    region_scope_span: Span,

    /// Whether there's anything to do for the cleanup path, that is,
    /// when unwinding through this scope. This includes destructors,
    /// but not StorageDead statements, which don't get emitted at all
    /// for unwinding, for several reasons:
    /// * clang doesn't emit llvm.lifetime.end for C++ unwinding
    /// * LLVM's memory dependency analysis can't handle it at the moment
    /// * polluting the cleanup MIR with StorageDead creates
    ///   landing pads even though there are no actual destructors
    /// * freeing up stack space has no effect during unwinding
    needs_cleanup: bool,

    /// Set of places to drop when exiting this scope. This starts
    /// out empty but grows as variables are declared during the
    /// building process. This is a stack, so we always drop from the
    /// end of the vector (top of the stack) first.
    drops: Vec<DropData<'tcx>>,

    /// The cache for drop chain on "normal" exit into a particular BasicBlock.
    cached_exits: FxHashMap<(BasicBlock, region::Scope), BasicBlock>,

    /// The cache for drop chain on "generator drop" exit.
    cached_generator_drop: Option<BasicBlock>,

    /// The cache for drop chain on "unwind" exit.
    cached_unwind: CachedBlock,
}

#[derive(Debug)]
struct DropData<'tcx> {
    /// Span where the drop obligation was incurred (typically where the place was declared).
    span: Span,

    /// Place to drop.
    location: Place<'tcx>,

    /// Whether this is a value Drop or a StorageDead.
    kind: DropKind,
}

#[derive(Debug, Default, Clone, Copy)]
pub(crate) struct CachedBlock {
    /// The cached block for the cleanups-on-diverge path. This block
    /// contains code to run the current drop and all the preceding
    /// drops (i.e., those having lower index in Drop's Scope drop
    /// array).
    unwind: Option<BasicBlock>,

    /// The cached block for unwinds during the cleanups-on-generator-drop path.
    ///
    /// This is split from the standard unwind path here to prevent drop
    /// elaboration from creating drop flags that would have to be captured
    /// by the generator. I'm not sure how important this optimization is,
    /// but it is here.
    generator_drop: Option<BasicBlock>,
}

#[derive(Debug)]
pub(crate) enum DropKind {
    Value {
        cached_block: CachedBlock,
    },
    Storage,
}

#[derive(Clone, Debug)]
pub struct BreakableScope<'tcx> {
    /// Region scope of the loop.
    pub region_scope: region::Scope,
    /// Where the body of the loop begins. `None` if this is a block
    /// rather than a loop.
    pub continue_block: Option<BasicBlock>,
    /// Block to branch into when the loop or block terminates (either by
    /// being `break`-ed out of, or by its condition becoming false).
    pub break_block: BasicBlock,
    /// The destination of the loop/block expression itself (i.e., where to put the result of a
    /// `break` expression).
    pub break_destination: Place<'tcx>,
}

impl CachedBlock {
    fn invalidate(&mut self) {
        self.generator_drop = None;
        self.unwind = None;
    }

    fn get(&self, generator_drop: bool) -> Option<BasicBlock> {
        if generator_drop { self.generator_drop } else { self.unwind }
    }

    fn ref_mut(&mut self, generator_drop: bool) -> &mut Option<BasicBlock> {
        if generator_drop { &mut self.generator_drop } else { &mut self.unwind }
    }
}

impl DropKind {
    fn may_panic(&self) -> bool {
        match *self {
            DropKind::Value { .. } => true,
            DropKind::Storage => false,
        }
    }
}

impl<'tcx> Scope<'tcx> {
    /// Invalidates all the cached blocks in the scope.
    ///
    /// Should always be run for all inner scopes when a drop is pushed into some scope enclosing a
    /// larger extent of code.
    ///
    /// `storage_only` controls whether to invalidate only drop paths that run `StorageDead`.
    /// `this_scope_only` controls whether to invalidate only drop paths that refer to the current
    /// top-of-scope (as opposed to dependent scopes).
    fn invalidate_cache(&mut self, storage_only: bool, this_scope_only: bool) {
        // FIXME: maybe do shared caching of `cached_exits` etc. to handle functions
        // with lots of `try!`?

        // cached exits drop storage and refer to the top-of-scope
        self.cached_exits.clear();

        if !storage_only {
            // the current generator drop and unwind ignore
            // storage but refer to top-of-scope
            self.cached_generator_drop = None;
            self.cached_unwind.invalidate();
        }

        if !storage_only && !this_scope_only {
            for drop_data in &mut self.drops {
                if let DropKind::Value { ref mut cached_block } = drop_data.kind {
                    cached_block.invalidate();
                }
            }
        }
    }

    /// Given a span and this scope's source scope, make a SourceInfo.
    fn source_info(&self, span: Span) -> SourceInfo {
        SourceInfo {
            span,
            scope: self.source_scope,
        }
    }
}

impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> {
    // Adding and removing scopes
    // ==========================
    /// Start a breakable scope, which tracks where `continue` and `break`
    /// should branch to. See module comment for more details.
    ///
    /// Returns the result of the closure `f`, which builds the scope's contents.
    pub fn in_breakable_scope<F, R>(&mut self,
                                    loop_block: Option<BasicBlock>,
                                    break_block: BasicBlock,
                                    break_destination: Place<'tcx>,
                                    f: F) -> R
        where F: FnOnce(&mut Builder<'a, 'gcx, 'tcx>) -> R
    {
        let region_scope = self.topmost_scope();
        let scope = BreakableScope {
            region_scope,
            continue_block: loop_block,
            break_block,
            break_destination,
        };
        self.breakable_scopes.push(scope);
        let res = f(self);
        let breakable_scope = self.breakable_scopes.pop().unwrap();
        assert!(breakable_scope.region_scope == region_scope);
        res
    }

    pub fn in_opt_scope<F, R>(&mut self,
                              opt_scope: Option<(region::Scope, SourceInfo)>,
                              f: F) -> BlockAnd<R>
        where F: FnOnce(&mut Builder<'a, 'gcx, 'tcx>) -> BlockAnd<R>
    {
        debug!("in_opt_scope(opt_scope={:?})", opt_scope);
        if let Some(region_scope) = opt_scope { self.push_scope(region_scope); }
        let mut block;
        let rv = unpack!(block = f(self));
        if let Some(region_scope) = opt_scope {
            unpack!(block = self.pop_scope(region_scope, block));
        }
        debug!("in_scope: exiting opt_scope={:?} block={:?}", opt_scope, block);
        block.and(rv)
    }

    /// Convenience wrapper that pushes a scope and then executes `f`
    /// to build its contents, popping the scope afterwards.
    pub fn in_scope<F, R>(&mut self,
                          region_scope: (region::Scope, SourceInfo),
                          lint_level: LintLevel,
                          f: F) -> BlockAnd<R>
        where F: FnOnce(&mut Builder<'a, 'gcx, 'tcx>) -> BlockAnd<R>
    {
        debug!("in_scope(region_scope={:?})", region_scope);
        let source_scope = self.source_scope;
        let tcx = self.hir.tcx();
        if let LintLevel::Explicit(current_hir_id) = lint_level {
            // Use `maybe_lint_level_root_bounded` with `root_lint_level` as a bound
            // to avoid adding Hir dependencies on our parents.
            // We estimate the true lint roots here to avoid creating a lot of source scopes.
            let parent_root = tcx.maybe_lint_level_root_bounded(
                self.source_scope_local_data[source_scope].lint_root,
                self.hir.root_lint_level,
            );
            let current_root = tcx.maybe_lint_level_root_bounded(
                current_hir_id,
                self.hir.root_lint_level,
            );

            if parent_root != current_root {
                self.source_scope = self.new_source_scope(
                    region_scope.1.span,
                    LintLevel::Explicit(current_root),
                    None,
                );
            }
        }
        self.push_scope(region_scope);
        let mut block;
        let rv = unpack!(block = f(self));
        unpack!(block = self.pop_scope(region_scope, block));
        self.source_scope = source_scope;
        debug!("in_scope: exiting region_scope={:?} block={:?}", region_scope, block);
        block.and(rv)
    }

    /// Push a scope onto the stack. You can then build code in this
    /// scope and call `pop_scope` afterwards. Note that these two
    /// calls must be paired; using `in_scope` as a convenience
    /// wrapper may be preferable.
    pub fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo)) {
        debug!("push_scope({:?})", region_scope);
        let vis_scope = self.source_scope;
        self.scopes.push(Scope {
            source_scope: vis_scope,
            region_scope: region_scope.0,
            region_scope_span: region_scope.1.span,
            needs_cleanup: false,
            drops: vec![],
            cached_generator_drop: None,
            cached_exits: Default::default(),
            cached_unwind: CachedBlock::default(),
        });
    }

    /// Pops a scope, which should have region scope `region_scope`,
    /// adding any drops onto the end of `block` that are needed.
    /// This must match 1-to-1 with `push_scope`.
    pub fn pop_scope(&mut self,
                     region_scope: (region::Scope, SourceInfo),
                     mut block: BasicBlock)
                     -> BlockAnd<()> {
        debug!("pop_scope({:?}, {:?})", region_scope, block);
        // If we are emitting a `drop` statement, we need to have the cached
        // diverge cleanup pads ready in case that drop panics.
        let may_panic =
            self.scopes.last().unwrap().drops.iter().any(|s| s.kind.may_panic());
        if may_panic {
            self.diverge_cleanup();
        }
        let scope = self.scopes.pop().unwrap();
        assert_eq!(scope.region_scope, region_scope.0);

        let unwind_to = self.scopes.last().and_then(|next_scope| {
            next_scope.cached_unwind.get(false)
        }).unwrap_or_else(|| self.resume_block());

        unpack!(block = build_scope_drops(
            &mut self.cfg,
            &scope,
            block,
            unwind_to,
            self.arg_count,
            false, // not a generator drop
        ));

        block.unit()
    }

    /// Branch out of `block` to `target`, exiting all scopes up to
    /// and including `region_scope`. This will insert whatever drops are
    /// needed. See module comment for details.
    pub fn exit_scope(&mut self,
                      span: Span,
                      region_scope: (region::Scope, SourceInfo),
                      mut block: BasicBlock,
                      target: BasicBlock) {
        debug!("exit_scope(region_scope={:?}, block={:?}, target={:?})",
               region_scope, block, target);
        let scope_count = 1 + self.scopes.iter().rev()
            .position(|scope| scope.region_scope == region_scope.0)
            .unwrap_or_else(|| {
                span_bug!(span, "region_scope {:?} does not enclose", region_scope)
            });
        let len = self.scopes.len();
        assert!(scope_count < len, "should not use `exit_scope` to pop ALL scopes");

        // If we are emitting a `drop` statement, we need to have the cached
        // diverge cleanup pads ready in case that drop panics.
        let may_panic = self.scopes[(len - scope_count)..].iter().any(|s| s.needs_cleanup);
        if may_panic {
            self.diverge_cleanup();
        }

        let mut scopes = self.scopes[(len - scope_count - 1)..].iter_mut().rev();
        let mut scope = scopes.next().unwrap();
        for next_scope in scopes {
            if scope.drops.is_empty() {
                scope = next_scope;
                continue;
            }
            let source_info = scope.source_info(span);
            block = match scope.cached_exits.entry((target, region_scope.0)) {
                Entry::Occupied(e) => {
                    self.cfg.terminate(block, source_info,
                                       TerminatorKind::Goto { target: *e.get() });
                    return;
                }
                Entry::Vacant(v) => {
                    let b = self.cfg.start_new_block();
                    self.cfg.terminate(block, source_info,
                                       TerminatorKind::Goto { target: b });
                    v.insert(b);
                    b
                }
            };

            let unwind_to = next_scope.cached_unwind.get(false).unwrap_or_else(|| {
                debug_assert!(!may_panic, "cached block not present?");
                START_BLOCK
            });

            unpack!(block = build_scope_drops(
                &mut self.cfg,
                scope,
                block,
                unwind_to,
                self.arg_count,
                false, // not a generator drop
            ));

            scope = next_scope;
        }

        let scope = &self.scopes[len - scope_count];
        self.cfg.terminate(block, scope.source_info(span),
                           TerminatorKind::Goto { target });
    }

    /// Creates a path that performs all required cleanup for dropping a generator.
    ///
    /// This path terminates in GeneratorDrop. Returns the start of the path.
    /// None indicates there's no cleanup to do at this point.
    pub fn generator_drop_cleanup(&mut self) -> Option<BasicBlock> {
        if !self.scopes.iter().any(|scope| scope.needs_cleanup) {
            return None;
        }

        // Fill in the cache for unwinds
        self.diverge_cleanup_gen(true);

        let src_info = self.scopes[0].source_info(self.fn_span);
        let resume_block = self.resume_block();
        let mut scopes = self.scopes.iter_mut().rev().peekable();
        let mut block = self.cfg.start_new_block();
        let result = block;

        while let Some(scope) = scopes.next() {
            if !scope.needs_cleanup {
                continue;
            }

            block = if let Some(b) = scope.cached_generator_drop {
                self.cfg.terminate(block, src_info,
                                   TerminatorKind::Goto { target: b });
                return Some(result);
            } else {
                let b = self.cfg.start_new_block();
                scope.cached_generator_drop = Some(b);
                self.cfg.terminate(block, src_info,
                                   TerminatorKind::Goto { target: b });
                b
            };

            let unwind_to = scopes.peek().as_ref().map(|scope| {
                scope.cached_unwind.get(true).unwrap_or_else(|| {
                    span_bug!(src_info.span, "cached block not present?")
                })
            }).unwrap_or(resume_block);

            unpack!(block = build_scope_drops(
                &mut self.cfg, scope, block, unwind_to, self.arg_count, true,
            ));
        }

        self.cfg.terminate(block, src_info, TerminatorKind::GeneratorDrop);

        Some(result)
    }

    /// Creates a new source scope, nested in the current one.
    pub fn new_source_scope(&mut self,
                            span: Span,
                            lint_level: LintLevel,
                            safety: Option<Safety>) -> SourceScope {
        let parent = self.source_scope;
        debug!("new_source_scope({:?}, {:?}, {:?}) - parent({:?})={:?}",
               span, lint_level, safety,
               parent, self.source_scope_local_data.get(parent));
        let scope = self.source_scopes.push(SourceScopeData {
            span,
            parent_scope: Some(parent),
        });
        let scope_local_data = SourceScopeLocalData {
            lint_root: if let LintLevel::Explicit(lint_root) = lint_level {
                lint_root
            } else {
                self.source_scope_local_data[parent].lint_root
            },
            safety: safety.unwrap_or_else(|| {
                self.source_scope_local_data[parent].safety
            }),
        };
        self.source_scope_local_data.push(scope_local_data);
        scope
    }

    // Finding scopes
    // ==============
    /// Finds the breakable scope for a given label. This is used for
    /// resolving `break` and `continue`.
    pub fn find_breakable_scope(&self,
                                span: Span,
                                label: region::Scope)
                                -> &BreakableScope<'tcx> {
        // find the loop-scope with the correct id
        self.breakable_scopes.iter()
            .rev()
            .filter(|breakable_scope| breakable_scope.region_scope == label)
            .next()
            .unwrap_or_else(|| span_bug!(span, "no enclosing breakable scope found"))
    }

    /// Given a span and the current source scope, make a SourceInfo.
    pub fn source_info(&self, span: Span) -> SourceInfo {
        SourceInfo {
            span,
            scope: self.source_scope,
        }
    }

    /// Returns the `region::Scope` of the scope which should be exited by a
    /// return.
    pub fn region_scope_of_return_scope(&self) -> region::Scope {
        // The outermost scope (`scopes[0]`) will be the `CallSiteScope`.
        // We want `scopes[1]`, which is the `ParameterScope`.
        assert!(self.scopes.len() >= 2);
        assert!(match self.scopes[1].region_scope.data {
            region::ScopeData::Arguments => true,
            _ => false,
        });
        self.scopes[1].region_scope
    }

    /// Returns the topmost active scope, which is known to be alive until
    /// the next scope expression.
    pub fn topmost_scope(&self) -> region::Scope {
        self.scopes.last().expect("topmost_scope: no scopes present").region_scope
    }

    /// Returns the scope that we should use as the lifetime of an
    /// operand. Basically, an operand must live until it is consumed.
    /// This is similar to, but not quite the same as, the temporary
    /// scope (which can be larger or smaller).
    ///
    /// Consider:
    ///
    ///     let x = foo(bar(X, Y));
    ///
    /// We wish to pop the storage for X and Y after `bar()` is
    /// called, not after the whole `let` is completed.
    ///
    /// As another example, if the second argument diverges:
    ///
    ///     foo(Box::new(2), panic!())
    ///
    /// We would allocate the box but then free it on the unwinding
    /// path; we would also emit a free on the 'success' path from
    /// panic, but that will turn out to be removed as dead-code.
    ///
    /// When building statics/constants, returns `None` since
    /// intermediate values do not have to be dropped in that case.
    pub fn local_scope(&self) -> Option<region::Scope> {
        match self.hir.body_owner_kind {
            hir::BodyOwnerKind::Const |
            hir::BodyOwnerKind::Static(_) =>
                // No need to free storage in this context.
                None,
            hir::BodyOwnerKind::Closure |
            hir::BodyOwnerKind::Fn =>
                Some(self.topmost_scope()),
        }
    }

    // Schedule an abort block - this is used for some ABIs that cannot unwind
    pub fn schedule_abort(&mut self) -> BasicBlock {
        self.scopes[0].needs_cleanup = true;
        let abortblk = self.cfg.start_new_cleanup_block();
        let source_info = self.scopes[0].source_info(self.fn_span);
        self.cfg.terminate(abortblk, source_info, TerminatorKind::Abort);
        self.cached_resume_block = Some(abortblk);
        abortblk
    }

    pub fn schedule_drop_storage_and_value(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        place: &Place<'tcx>,
        place_ty: Ty<'tcx>,
    ) {
        self.schedule_drop(span, region_scope, place, place_ty, DropKind::Storage);
        self.schedule_drop(span, region_scope, place, place_ty, DropKind::Value {
            cached_block: CachedBlock::default(),
        });
    }

    /// Indicates that `place` should be dropped on exit from
    /// `region_scope`.
    ///
    /// When called with `DropKind::Storage`, `place` should be a local
    /// with an index higher than the current `self.arg_count`.
    pub fn schedule_drop(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        place: &Place<'tcx>,
        place_ty: Ty<'tcx>,
        drop_kind: DropKind,
    ) {
        let needs_drop = self.hir.needs_drop(place_ty);
        match drop_kind {
            DropKind::Value { .. } => if !needs_drop { return },
            DropKind::Storage => {
                match *place {
                    Place::Base(PlaceBase::Local(index)) => if index.index() <= self.arg_count {
                        span_bug!(
                            span, "`schedule_drop` called with index {} and arg_count {}",
                            index.index(),
                            self.arg_count,
                        )
                    },
                    _ => span_bug!(
                        span, "`schedule_drop` called with non-`Local` place {:?}", place
                    ),
                }
            }
        }

        for scope in self.scopes.iter_mut().rev() {
            let this_scope = scope.region_scope == region_scope;
            // When building drops, we try to cache chains of drops in such a way that these
            // drops can be reused by other drops which branch into the cached (already built)
            // blocks. This, however, means that whenever we add a drop into a scope which
            // already had some blocks built (and thus, cached) for it, we must invalidate all
            // caches which might branch into the scope which had a drop just added to it. This
            // is necessary, because otherwise some other code might use the cache to branch
            // into an already built chain of drops, essentially ignoring the newly added drop.
            //
            // For example, consider that there are two scopes with a drop in each. These are
            // built and thus the caches are filled:
            //
            // +--------------------------------------------------------+
            // | +---------------------------------+                    |
            // | | +--------+     +-------------+  |  +---------------+ |
            // | | | return | <-+ | drop(outer) | <-+ |  drop(middle) | |
            // | | +--------+     +-------------+  |  +---------------+ |
            // | +------------|outer_scope cache|--+                    |
            // +------------------------------|middle_scope cache|------+
            //
            // Now, a new, inner-most scope is added along with a new drop into both inner-most
            // and outer-most scopes:
            //
            // +------------------------------------------------------------+
            // | +----------------------------------+                       |
            // | | +--------+      +-------------+  |   +---------------+   | +-------------+
            // | | | return | <+   | drop(new)   | <-+  |  drop(middle) | <--+| drop(inner) |
            // | | +--------+  |   | drop(outer) |  |   +---------------+   | +-------------+
            // | |             +-+ +-------------+  |                       |
            // | +---|invalid outer_scope cache|----+                       |
            // +----=----------------|invalid middle_scope cache|-----------+
            //
            // If, when adding `drop(new)`, we do not invalidate the cached blocks for both
            // outer_scope and middle_scope, then, when building drops for the inner (right-most)
            // scope, the old cached blocks, without `drop(new)`, will get used, producing the
            // wrong results.
            //
            // The cache and its invalidation for the unwind branch is somewhat special. The
            // cache is per-drop, rather than per-scope, which has several different
            // implications. Adding a new drop into a scope will not invalidate cached blocks
            // of the prior drops in the scope. That is true, because none of the already
            // existing drops will have an edge into a block with the newly added drop.
            //
            // Note that this code iterates scopes from the inner-most to the outer-most,
            // invalidating caches of each scope visited. This way the bare minimum of the
            // caches gets invalidated; i.e., if a new drop is added into the middle scope, the
            // cache of the outer scope stays intact.
            scope.invalidate_cache(!needs_drop, this_scope);
            if this_scope {
                if let DropKind::Value { .. } = drop_kind {
                    scope.needs_cleanup = true;
                }

                let region_scope_span = region_scope.span(self.hir.tcx(),
                                                          &self.hir.region_scope_tree);
                // Attribute scope exit drops to scope's closing brace.
                let scope_end = self.hir.tcx().sess.source_map().end_point(region_scope_span);

                scope.drops.push(DropData {
                    span: scope_end,
                    location: place.clone(),
                    kind: drop_kind,
                });
                return;
            }
        }
        span_bug!(span, "region scope {:?} not in scope to drop {:?}", region_scope, place);
    }

    /// Creates a path that performs all required cleanup for unwinding.
    ///
    /// This path terminates in Resume. Returns the start of the path.
    /// See module comment for more details.
    pub fn diverge_cleanup(&mut self) -> BasicBlock {
        self.diverge_cleanup_gen(false)
    }

    fn resume_block(&mut self) -> BasicBlock {
        if let Some(target) = self.cached_resume_block {
            target
        } else {
            let resumeblk = self.cfg.start_new_cleanup_block();
            self.cfg.terminate(resumeblk,
                               SourceInfo {
                                   scope: OUTERMOST_SOURCE_SCOPE,
                                   span: self.fn_span,
                               },
                               TerminatorKind::Resume);
            self.cached_resume_block = Some(resumeblk);
            resumeblk
        }
    }

    fn diverge_cleanup_gen(&mut self, generator_drop: bool) -> BasicBlock {
        // Build up the drops in **reverse** order. The end result will
        // look like:
        //
        // scopes[n] -> scopes[n-1] -> ... -> scopes[0]
        //
        // However, we build this in **reverse order**. That is, we
        // process scopes[0], then scopes[1], etc, pointing each one at
        // the result generated from the one before. Along the way, we
        // store caches. If everything is cached, we'll just walk right
        // to left reading the cached results but never creating anything.

        // Find the last cached block
        let (mut target, first_uncached) = if let Some(cached_index) = self.scopes.iter()
            .rposition(|scope| scope.cached_unwind.get(generator_drop).is_some()) {
            (self.scopes[cached_index].cached_unwind.get(generator_drop).unwrap(),
             cached_index + 1)
        } else {
            (self.resume_block(), 0)
        };

        for scope in self.scopes[first_uncached..].iter_mut() {
            target = build_diverge_scope(&mut self.cfg, scope.region_scope_span,
                                         scope, target, generator_drop);
        }

        target
    }

    /// Utility function for *non*-scope code to build its own drops
    pub fn build_drop(&mut self,
                      block: BasicBlock,
                      span: Span,
                      location: Place<'tcx>,
                      ty: Ty<'tcx>) -> BlockAnd<()> {
        if !self.hir.needs_drop(ty) {
            return block.unit();
        }
        let source_info = self.source_info(span);
        let next_target = self.cfg.start_new_block();
        let diverge_target = self.diverge_cleanup();
        self.cfg.terminate(block, source_info,
                           TerminatorKind::Drop {
                               location,
                               target: next_target,
                               unwind: Some(diverge_target),
                           });
        next_target.unit()
    }

    /// Utility function for *non*-scope code to build its own drops
    pub fn build_drop_and_replace(&mut self,
                                  block: BasicBlock,
                                  span: Span,
                                  location: Place<'tcx>,
                                  value: Operand<'tcx>) -> BlockAnd<()> {
        let source_info = self.source_info(span);
        let next_target = self.cfg.start_new_block();
        let diverge_target = self.diverge_cleanup();
        self.cfg.terminate(block, source_info,
                           TerminatorKind::DropAndReplace {
                               location,
                               value,
                               target: next_target,
                               unwind: Some(diverge_target),
                           });
        next_target.unit()
    }

    /// Creates an Assert terminator and returns the success block.
    /// If the boolean condition operand is not the expected value,
    /// a runtime panic will be caused with the given message.
    pub fn assert(&mut self, block: BasicBlock,
                  cond: Operand<'tcx>,
                  expected: bool,
                  msg: AssertMessage<'tcx>,
                  span: Span)
                  -> BasicBlock {
        let source_info = self.source_info(span);

        let success_block = self.cfg.start_new_block();
        let cleanup = self.diverge_cleanup();

        self.cfg.terminate(block, source_info,
                           TerminatorKind::Assert {
                               cond,
                               expected,
                               msg,
                               target: success_block,
                               cleanup: Some(cleanup),
                           });

        success_block
    }

    // `match` arm scopes
    // ==================
    /// Unschedules any drops in the top scope.
    ///
    /// This is only needed for `match` arm scopes, because they have one
    /// entrance per pattern, but only one exit.
    pub fn clear_top_scope(&mut self, region_scope: region::Scope) {
        let top_scope = self.scopes.last_mut().unwrap();

        assert_eq!(top_scope.region_scope, region_scope);

        top_scope.drops.clear();
        top_scope.invalidate_cache(false, true);
    }

    /// Drops the single variable provided
    ///
    /// * The scope must be the top scope.
    /// * The variable must be in that scope.
    /// * The variable must be at the top of that scope: it's the next thing
    ///   scheduled to drop.
    /// * The drop must be of `DropKind::Storage`.
    ///
    /// This is used for the boolean holding the result of the match guard. We
    /// do this because:
    ///
    /// * The boolean is different for each pattern
    /// * There is only one exit for the arm scope
    /// * The guard expression scope is too short, it ends just before the
    ///   boolean is tested.
    pub fn pop_variable(
        &mut self,
        block: BasicBlock,
        region_scope: region::Scope,
        variable: Local,
    ) {
        let top_scope = self.scopes.last_mut().unwrap();

        assert_eq!(top_scope.region_scope, region_scope);

        let top_drop_data = top_scope.drops.pop().unwrap();

        match top_drop_data.kind {
            DropKind::Value { .. } => {
                bug!("Should not be calling pop_top_variable on non-copy type!")
            }
            DropKind::Storage => {
                // Drop the storage for both value and storage drops.
                // Only temps and vars need their storage dead.
                match top_drop_data.location {
                    Place::Base(PlaceBase::Local(index)) => {
                        let source_info = top_scope.source_info(top_drop_data.span);
                        assert_eq!(index, variable);
                        self.cfg.push(block, Statement {
                            source_info,
                            kind: StatementKind::StorageDead(index),
                        });
                    }
                    _ => unreachable!(),
                }
            }
        }

        top_scope.invalidate_cache(true, true);
    }
}

/// Builds drops for pop_scope and exit_scope.
fn build_scope_drops<'tcx>(
    cfg: &mut CFG<'tcx>,
    scope: &Scope<'tcx>,
    mut block: BasicBlock,
    last_unwind_to: BasicBlock,
    arg_count: usize,
    generator_drop: bool,
) -> BlockAnd<()> {
    debug!("build_scope_drops({:?} -> {:?})", block, scope);

    // Build up the drops in evaluation order. The end result will
    // look like:
    //
    // [SDs, drops[n]] --..> [SDs, drop[1]] -> [SDs, drop[0]] -> [[SDs]]
    //        |                    |                 |
    //        v                    v                 v
    // [drop[n]] -...-> [drop[1]] ------> [drop[0]] ------> [last_unwind_to]
    //
    // The horizontal arrows represent the execution path when the drops return
    // successfully. The downwards arrows represent the execution path when the
    // drops panic (panicking while unwinding will abort, so there's no need for
    // another set of arrows). The drops for the unwind path should have already
    // been generated by `diverge_cleanup_gen`.
    //
    // The code in this function reads from right to left.
    // Storage dead drops have to be done left to right (since we can only push
    // to the end of a Vec). So, we find the next drop and then call
    // push_storage_deads, which will iterate backwards through them so that
    // they are added in the correct order.

    let mut unwind_blocks = scope.drops.iter().rev().filter_map(|drop_data| {
        if let DropKind::Value { cached_block } = drop_data.kind {
            Some(cached_block.get(generator_drop).unwrap_or_else(|| {
                span_bug!(drop_data.span, "cached block not present?")
            }))
        } else {
            None
        }
    });

    // When we unwind from a drop, we start cleaning up from the next one, so
    // we don't need this block.
    unwind_blocks.next();

    for drop_data in scope.drops.iter().rev() {
        let source_info = scope.source_info(drop_data.span);
        match drop_data.kind {
            DropKind::Value { .. } => {
                let unwind_to = unwind_blocks.next().unwrap_or(last_unwind_to);

                let next = cfg.start_new_block();
                cfg.terminate(block, source_info, TerminatorKind::Drop {
                    location: drop_data.location.clone(),
                    target: next,
                    unwind: Some(unwind_to),
                });
                block = next;
            }
            DropKind::Storage => {
                // We do not need to emit StorageDead for generator drops
                if generator_drop {
                    continue;
                }

                // Drop the storage for both value and storage drops.
                // Only temps and vars need their storage dead.
                match drop_data.location {
                    Place::Base(PlaceBase::Local(index)) if index.index() > arg_count => {
                        cfg.push(block, Statement {
                            source_info,
                            kind: StatementKind::StorageDead(index),
                        });
                    }
                    _ => unreachable!(),
                }
            }
        }
    }
    block.unit()
}

fn build_diverge_scope<'tcx>(cfg: &mut CFG<'tcx>,
                             span: Span,
                             scope: &mut Scope<'tcx>,
                             mut target: BasicBlock,
                             generator_drop: bool)
                             -> BasicBlock
{
    // Build up the drops in **reverse** order. The end result will
    // look like:
    //
    // [drops[n]] -...-> [drops[0]] -> [target]
    //
    // The code in this function reads from right to left. At each
    // point, we check for cached blocks representing the
    // remainder. If everything is cached, we'll just walk right to
    // left reading the cached results but never create anything.

    let source_scope = scope.source_scope;
    let source_info = |span| SourceInfo {
        span,
        scope: source_scope,
    };

    // Next, build up the drops. Here we iterate the vector in
    // *forward* order, so that we generate drops[0] first (right to
    // left in the diagram above).
    for (j, drop_data) in scope.drops.iter_mut().enumerate() {
        debug!("build_diverge_scope drop_data[{}]: {:?}", j, drop_data);
        // Only full value drops are emitted in the diverging path,
        // not StorageDead.
        //
        // Note: This may not actually be what we desire (are we
        // "freeing" stack storage as we unwind, or merely observing a
        // frozen stack)? In particular, the intent may have been to
        // match the behavior of clang, but on inspection eddyb says
        // this is not what clang does.
        let cached_block = match drop_data.kind {
            DropKind::Value { ref mut cached_block } => cached_block.ref_mut(generator_drop),
            DropKind::Storage => continue,
        };
        target = if let Some(cached_block) = *cached_block {
            cached_block
        } else {
            let block = cfg.start_new_cleanup_block();
            cfg.terminate(block, source_info(drop_data.span),
                          TerminatorKind::Drop {
                              location: drop_data.location.clone(),
                              target,
                              unwind: None,
                          });
            *cached_block = Some(block);
            block
        };
    }

    *scope.cached_unwind.ref_mut(generator_drop) = Some(target);

    debug!("build_diverge_scope({:?}, {:?}) = {:?}", scope, span, target);

    target
}