Managing the scope stack. The scopes are tied to lexical scopes, so as
we descend the HAIR, we push a scope on the stack, build its
contents, and then pop it off. Every scope is named by a
`region::Scope`.

### SEME Regions
When pushing a new scope, we record the current point in the graph (a
basic block); this marks the entry to the scope. We then generate more
stuff in the control-flow graph. Whenever the scope is exited, either
via a `break` or `return` or just by fallthrough, that marks an exit
from the scope. Each lexical scope thus corresponds to a single-entry,
multiple-exit (SEME) region in the control-flow graph.
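For instance, here is a minimal sketch of how lowering brackets a scope (the names `this`, `region_scope`, `source_info`, and `lint_level` are illustrative, not taken from any particular caller):

```rust
// Push the scope, build its contents, pop it again. The `in_scope`
// helper defined below does the push/pop pairing and inserts any
// scheduled drops on the way out.
let result = this.in_scope((region_scope, source_info), lint_level, |this| {
    // ... lower the scope's contents here, calling `schedule_drop`
    // as variables are declared ...
    this.cfg.start_new_block().unit()
});
```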
For now, we keep a mapping from each `region::Scope` to its
corresponding SEME region for later reference (see caveat in next
paragraph). This is because region scopes are tied to these SEME
regions. Eventually, when we shift to non-lexical lifetimes, there
should be no need to remember this mapping.
### Not so SEME Regions
In the course of building matches, it sometimes happens that certain code
(namely guards) gets executed multiple times. This means that a single lexical
scope may in fact correspond to multiple, disjoint SEME regions. So in fact our
mapping is from one scope to a vector of SEME regions.
Also in matches, the scopes assigned to arms are not even SEME regions! Each
arm has a single region with one entry for each pattern. We manually
manipulate the schedule of drops in this scope to avoid dropping things
multiple times, although drop elaboration would clean this up for value drops.

### Drops
The primary purpose for scopes is to insert drops: while building
the contents, we also accumulate places that need to be dropped upon
exit from each scope. This is done by calling `schedule_drop`. Once a
drop is scheduled, whenever we branch out we will insert drops of all
those places onto the outgoing edge. Note that we don't know the full
set of scheduled drops up front, and so whenever we exit from the
scope we only drop the values scheduled thus far. For example, consider
the scope S corresponding to this loop:
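A sketch of such a loop (`cond`, `x`, and `y` are the names used in the walkthrough below):

```rust
loop {
    let x = ..;          // drop of `x` is scheduled in scope S
    if cond { break; }   // exiting S here drops only what is scheduled so far
    let y = ..;          // drop of `y` is scheduled in S after `x`
}
```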
When processing the `let x`, we will add one drop to the scope for
`x`. The break will then insert a drop for `x`. When we process `let
y`, we will add another drop (in fact, to a subscope, but let's ignore
that for now); any later drops would also drop `y`.
### Early exit

There are numerous "normal" ways to early exit a scope: `break`,
`continue`, `return` (panics are handled separately). Whenever an
early exit occurs, the method `exit_scope` is called. It is given the
current point in execution where the early exit occurs, as well as the
scope you want to branch to (note that all early exits go to some
enclosing scope). `exit_scope` will record this exit point and insert
all of the drops scheduled thus far for the scopes being exited.
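At the source level, that means a sketch like the following (hedged; `cond`, `a`, and `b` are hypothetical):

```rust
fn example(cond: bool) {
    let a = String::new();      // drop scheduled in the function's scope
    loop {
        let b = String::new();  // drop scheduled in the loop's scope
        if cond {
            break;              // early exit: `exit_scope` inserts drop(b)
                                // on the edge leaving the loop scope
        }
    }
    // `a` is dropped on the normal exit from the function's scope.
}
```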
Panics are handled in a similar fashion, except that a panic always
returns out to the `DIVERGE_BLOCK`. To trigger a panic, simply call
`panic(p)` with the current point `p`. Or else you can call
`diverge_cleanup`, which will produce a block that you can branch to
which does the appropriate cleanup and then diverges. `panic(p)`
simply calls `diverge_cleanup()` and adds an edge from `p` to the
result.
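A minimal sketch of how the cleanup block is consumed (hedged: `block`, `place`, and `next` are hypothetical; `diverge_cleanup` and `TerminatorKind::Drop` are as used later in this file):

```rust
// Build (or reuse) the cleanup path, then point the unwind edge of a
// panicking terminator at it.
let cleanup = this.diverge_cleanup(); // runs scheduled drops, then diverges
this.cfg.terminate(
    block,
    source_info,
    TerminatorKind::Drop { location: place, target: next, unwind: Some(cleanup) },
);
```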
### Loop scopes

In addition to the normal scope stack, we track a loop scope stack
that contains only loops. It tracks where a `break` and `continue`
should go to.
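For example (a sketch; `done()` is a hypothetical helper):

```rust
'outer: loop {
    loop {
        if done() { break 'outer; } // targets the outer loop's breakable scope
        continue;                   // targets the innermost loop's scope
    }
}
```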
use crate::build::{BlockAnd, BlockAndExtension, BlockFrame, Builder, CFG};
use crate::hair::{Expr, ExprRef, LintLevel};
use rustc_data_structures::fx::FxHashMap;
use rustc_hir as hir;
use rustc_hir::GeneratorKind;
use rustc_middle::middle::region;
use rustc_middle::mir::*;
use rustc_span::{Span, DUMMY_SP};
use std::collections::hash_map::Entry;
use std::mem;
#[derive(Debug)]
struct Scope {
    /// The source scope this scope was created in.
    source_scope: SourceScope,

    /// The region scope of this scope within source code.
    region_scope: region::Scope,

    /// The span of that `region_scope`.
    region_scope_span: Span,

    /// Set of places to drop when exiting this scope. This starts
    /// out empty but grows as variables are declared during the
    /// building process. This is a stack, so we always drop from the
    /// end of the vector (top of the stack) first.
    drops: Vec<DropData>,

    /// Locals that have been moved out of and therefore do not need a
    /// value drop on the non-unwind paths (see `record_operands_moved`).
    moved_locals: Vec<Local>,

    /// The cache for drop chain on "normal" exit into a particular BasicBlock.
    cached_exits: FxHashMap<(BasicBlock, region::Scope), BasicBlock>,

    /// The cache for drop chain on "generator drop" exit.
    cached_generator_drop: Option<BasicBlock>,

    /// The cache for drop chain on "unwind" exit.
    cached_unwind: CachedBlock,
}
#[derive(Debug, Default)]
crate struct Scopes<'tcx> {
    scopes: Vec<Scope>,

    /// The current set of breakable scopes. See module comment for more details.
    breakable_scopes: Vec<BreakableScope<'tcx>>,
}
#[derive(Debug)]
struct DropData {
    /// Span where drop obligation was incurred (typically where place was declared).
    span: Span,

    /// Local to drop.
    local: Local,

    /// Whether this is a value Drop or a StorageDead.
    kind: DropKind,

    /// The cached blocks for unwinds.
    cached_block: CachedBlock,
}
#[derive(Debug, Default, Clone, Copy)]
struct CachedBlock {
    /// The cached block for the cleanups-on-diverge path. This block
    /// contains code to run the current drop and all the preceding
    /// drops (i.e., those having lower index in Drop's Scope drop
    /// array).
    unwind: Option<BasicBlock>,

    /// The cached block for unwinds during the cleanups-on-generator-drop path.
    ///
    /// This is split from the standard unwind path here to prevent drop
    /// elaboration from creating drop flags that would have to be captured
    /// by the generator. I'm not sure how important this optimization is.
    generator_drop: Option<BasicBlock>,
}
#[derive(Debug, PartialEq, Eq)]
pub(crate) enum DropKind {
    Value,
    Storage,
}
#[derive(Clone, Debug)]
struct BreakableScope<'tcx> {
    /// Region scope of the loop
    region_scope: region::Scope,
    /// Where the body of the loop begins. `None` if this is a breakable
    /// block rather than a loop.
    continue_block: Option<BasicBlock>,
    /// Block to branch into when the loop or block terminates (either by being
    /// `break`-en out from, or by its condition becoming false)
    break_block: BasicBlock,
    /// The destination of the loop/block expression itself (i.e., where to put
    /// the result of a `break` expression)
    break_destination: Place<'tcx>,
}
/// The target of an expression that breaks out of a scope.
#[derive(Clone, Copy, Debug)]
crate enum BreakableTarget {
    Continue(region::Scope),
    Break(region::Scope),
    Return,
}
impl CachedBlock {
    fn invalidate(&mut self) {
        *self = CachedBlock::default();
    }

    fn get(&self, generator_drop: bool) -> Option<BasicBlock> {
        if generator_drop { self.generator_drop } else { self.unwind }
    }

    fn ref_mut(&mut self, generator_drop: bool) -> &mut Option<BasicBlock> {
        if generator_drop { &mut self.generator_drop } else { &mut self.unwind }
    }
}
impl Scope {
    /// Invalidates all the cached blocks in the scope.
    ///
    /// Should always be run for all inner scopes when a drop is pushed into some scope enclosing a
    /// larger extent of code.
    ///
    /// `storage_only` controls whether to invalidate only drop paths that run `StorageDead`.
    /// `this_scope_only` controls whether to invalidate only drop paths that refer to the current
    /// top-of-scope (as opposed to dependent scopes).
    fn invalidate_cache(
        &mut self,
        storage_only: bool,
        generator_kind: Option<GeneratorKind>,
        this_scope_only: bool,
    ) {
        // FIXME: maybe do shared caching of `cached_exits` etc. to handle functions
        // with lots of `try!`?

        // cached exits drop storage and refer to the top-of-scope
        self.cached_exits.clear();

        // the current generator drop and unwind refer to top-of-scope
        self.cached_generator_drop = None;

        let ignore_unwinds = storage_only && generator_kind.is_none();
        if !ignore_unwinds {
            self.cached_unwind.invalidate();
        }

        if !ignore_unwinds && !this_scope_only {
            for drop_data in &mut self.drops {
                drop_data.cached_block.invalidate();
            }
        }
    }
    /// Given a span and this scope's source scope, make a SourceInfo.
    fn source_info(&self, span: Span) -> SourceInfo {
        SourceInfo { span, scope: self.source_scope }
    }
    /// Whether there's anything to do for the cleanup path, that is,
    /// when unwinding through this scope. This includes destructors,
    /// but not StorageDead statements, which don't get emitted at all
    /// for unwinding, for several reasons:
    ///  * clang doesn't emit llvm.lifetime.end for C++ unwinding
    ///  * LLVM's memory dependency analysis can't handle it atm
    ///  * polluting the cleanup MIR with StorageDead creates
    ///    landing pads even though there are no actual destructors
    ///  * freeing up stack space has no effect during unwinding
    /// Note that for generators we do emit StorageDeads, for the
    /// use of optimizations in the MIR generator transform.
    fn needs_cleanup(&self) -> bool {
        self.drops.iter().any(|drop| match drop.kind {
            DropKind::Value => true,
            DropKind::Storage => false,
        })
    }
}
impl<'tcx> Scopes<'tcx> {
    fn len(&self) -> usize {
        self.scopes.len()
    }
    fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo), vis_scope: SourceScope) {
        debug!("push_scope({:?})", region_scope);
        self.scopes.push(Scope {
            source_scope: vis_scope,
            region_scope: region_scope.0,
            region_scope_span: region_scope.1.span,
            drops: vec![],
            moved_locals: vec![],
            cached_generator_drop: None,
            cached_exits: Default::default(),
            cached_unwind: CachedBlock::default(),
        });
    }
    fn pop_scope(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
    ) -> (Scope, Option<BasicBlock>) {
        let scope = self.scopes.pop().unwrap();
        assert_eq!(scope.region_scope, region_scope.0);
        let unwind_to =
            self.scopes.last().and_then(|next_scope| next_scope.cached_unwind.get(false));
        (scope, unwind_to)
    }
    fn may_panic(&self, scope_count: usize) -> bool {
        let len = self.len();
        self.scopes[(len - scope_count)..].iter().any(|s| s.needs_cleanup())
    }
    /// Finds the breakable scope for a given label. This is used for
    /// resolving `return`, `break` and `continue`.
    fn find_breakable_scope(
        &self,
        span: Span,
        target: BreakableTarget,
    ) -> (BasicBlock, region::Scope, Option<Place<'tcx>>) {
        let get_scope = |scope: region::Scope| {
            // find the loop-scope by its `region::Scope`.
            self.breakable_scopes
                .iter()
                .rfind(|breakable_scope| breakable_scope.region_scope == scope)
                .unwrap_or_else(|| span_bug!(span, "no enclosing breakable scope found"))
        };
        match target {
            BreakableTarget::Return => {
                let scope = &self.breakable_scopes[0];
                if scope.break_destination != Place::return_place() {
                    span_bug!(span, "`return` in item with no return scope");
                }
                (scope.break_block, scope.region_scope, Some(scope.break_destination))
            }
            BreakableTarget::Break(scope) => {
                let scope = get_scope(scope);
                (scope.break_block, scope.region_scope, Some(scope.break_destination))
            }
            BreakableTarget::Continue(scope) => {
                let scope = get_scope(scope);
                let continue_block = scope
                    .continue_block
                    .unwrap_or_else(|| span_bug!(span, "missing `continue` block"));
                (continue_block, scope.region_scope, None)
            }
        }
    }
    fn num_scopes_above(&self, region_scope: region::Scope, span: Span) -> usize {
        let scope_count = self
            .scopes
            .iter()
            .rev()
            .position(|scope| scope.region_scope == region_scope)
            .unwrap_or_else(|| span_bug!(span, "region_scope {:?} does not enclose", region_scope));
        let len = self.len();
        assert!(scope_count < len, "should not use `exit_scope` to pop ALL scopes");
        scope_count
    }
    fn iter_mut(&mut self) -> impl DoubleEndedIterator<Item = &mut Scope> + '_ {
        self.scopes.iter_mut().rev()
    }

    fn top_scopes(&mut self, count: usize) -> impl DoubleEndedIterator<Item = &mut Scope> + '_ {
        let len = self.len();
        self.scopes[len - count..].iter_mut()
    }
    /// Returns the topmost active scope, which is known to be alive until
    /// the next scope expression.
    pub(super) fn topmost(&self) -> region::Scope {
        self.scopes.last().expect("topmost_scope: no scopes present").region_scope
    }

    fn source_info(&self, index: usize, span: Span) -> SourceInfo {
        self.scopes[self.len() - index].source_info(span)
    }
}
impl<'a, 'tcx> Builder<'a, 'tcx> {
    // Adding and removing scopes
    // ==========================
    // Start a breakable scope, which tracks where `continue`, `break` and
    // `return` should branch to.
    crate fn in_breakable_scope<F, R>(
        &mut self,
        loop_block: Option<BasicBlock>,
        break_block: BasicBlock,
        break_destination: Place<'tcx>,
        f: F,
    ) -> R
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> R,
    {
        let region_scope = self.scopes.topmost();
        let scope = BreakableScope {
            region_scope,
            continue_block: loop_block,
            break_block,
            break_destination,
        };
        self.scopes.breakable_scopes.push(scope);
        let res = f(self);
        let breakable_scope = self.scopes.breakable_scopes.pop().unwrap();
        assert!(breakable_scope.region_scope == region_scope);
        res
    }
    crate fn in_opt_scope<F, R>(
        &mut self,
        opt_scope: Option<(region::Scope, SourceInfo)>,
        f: F,
    ) -> BlockAnd<R>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
    {
        debug!("in_opt_scope(opt_scope={:?})", opt_scope);
        if let Some(region_scope) = opt_scope {
            self.push_scope(region_scope);
        }
        let mut block;
        let rv = unpack!(block = f(self));
        if let Some(region_scope) = opt_scope {
            unpack!(block = self.pop_scope(region_scope, block));
        }
        debug!("in_scope: exiting opt_scope={:?} block={:?}", opt_scope, block);
        block.and(rv)
    }
    /// Convenience wrapper that pushes a scope and then executes `f`
    /// to build its contents, popping the scope afterwards.
    crate fn in_scope<F, R>(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
        lint_level: LintLevel,
        f: F,
    ) -> BlockAnd<R>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
    {
        debug!("in_scope(region_scope={:?})", region_scope);
        let source_scope = self.source_scope;
        let tcx = self.hir.tcx();
        if let LintLevel::Explicit(current_hir_id) = lint_level {
            // Use `maybe_lint_level_root_bounded` with `root_lint_level` as a bound
            // to avoid adding Hir dependencies on our parents.
            // We estimate the true lint roots here to avoid creating a lot of source scopes.

            let parent_root = tcx.maybe_lint_level_root_bounded(
                self.source_scopes[source_scope].local_data.as_ref().assert_crate_local().lint_root,
                self.hir.root_lint_level,
            );
            let current_root =
                tcx.maybe_lint_level_root_bounded(current_hir_id, self.hir.root_lint_level);

            if parent_root != current_root {
                self.source_scope = self.new_source_scope(
                    region_scope.1.span,
                    LintLevel::Explicit(current_root),
                    None,
                );
            }
        }
        self.push_scope(region_scope);
        let mut block;
        let rv = unpack!(block = f(self));
        unpack!(block = self.pop_scope(region_scope, block));
        self.source_scope = source_scope;
        debug!("in_scope: exiting region_scope={:?} block={:?}", region_scope, block);
        block.and(rv)
    }
    /// Push a scope onto the stack. You can then build code in this
    /// scope and call `pop_scope` afterwards. Note that these two
    /// calls must be paired; using `in_scope` as a convenience
    /// wrapper may be preferable.
    crate fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo)) {
        self.scopes.push_scope(region_scope, self.source_scope);
    }
    /// Pops a scope, which should have region scope `region_scope`,
    /// adding any drops onto the end of `block` that are needed.
    /// This must match 1-to-1 with `push_scope`.
    crate fn pop_scope(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
        mut block: BasicBlock,
    ) -> BlockAnd<()> {
        debug!("pop_scope({:?}, {:?})", region_scope, block);
        // If we are emitting a `drop` statement, we need to have the cached
        // diverge cleanup pads ready in case that drop panics.
        if self.scopes.may_panic(1) {
            self.diverge_cleanup();
        }
        let (scope, unwind_to) = self.scopes.pop_scope(region_scope);
        let unwind_to = unwind_to.unwrap_or_else(|| self.resume_block());

        block = build_scope_drops(
            &mut self.cfg,
            self.generator_kind,
            &scope,
            block,
            unwind_to,
            self.arg_count,
            false, // not generator
            false, // not unwind path
        );

        block.unit()
    }
    crate fn break_scope(
        &mut self,
        mut block: BasicBlock,
        value: Option<ExprRef<'tcx>>,
        scope: BreakableTarget,
        source_info: SourceInfo,
    ) -> BlockAnd<()> {
        let (mut target_block, region_scope, destination) =
            self.scopes.find_breakable_scope(source_info.span, scope);
        if let BreakableTarget::Return = scope {
            // We call this now, rather than when we start lowering the
            // function so that the return block doesn't precede the entire
            // rest of the CFG. Some passes and LLVM prefer blocks to be in
            // approximately CFG order.
            target_block = self.return_block();
        }
        if let Some(destination) = destination {
            if let Some(value) = value {
                debug!("stmt_expr Break val block_context.push(SubExpr)");
                self.block_context.push(BlockFrame::SubExpr);
                unpack!(block = self.into(destination, block, value));
                self.block_context.pop();
            } else {
                self.cfg.push_assign_unit(block, source_info, destination, self.hir.tcx())
            }
        } else {
            assert!(value.is_none(), "`return` and `break` should have a destination");
        }
        self.exit_scope(source_info.span, region_scope, block, target_block);
        self.cfg.start_new_block().unit()
    }
    /// Branch out of `block` to `target`, exiting all scopes up to
    /// and including `region_scope`. This will insert whatever drops are
    /// needed. See module comment for details.
    crate fn exit_scope(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        mut block: BasicBlock,
        target: BasicBlock,
    ) {
        debug!(
            "exit_scope(region_scope={:?}, block={:?}, target={:?})",
            region_scope, block, target
        );
        let scope_count = self.scopes.num_scopes_above(region_scope, span);

        // If we are emitting a `drop` statement, we need to have the cached
        // diverge cleanup pads ready in case that drop panics.
        let may_panic = self.scopes.may_panic(scope_count);
        if may_panic {
            self.diverge_cleanup();
        }

        let mut scopes = self.scopes.top_scopes(scope_count + 1).rev();
        let mut scope = scopes.next().unwrap();
        for next_scope in scopes {
            if scope.drops.is_empty() {
                scope = next_scope;
                continue;
            }
            let source_info = scope.source_info(span);
            block = match scope.cached_exits.entry((target, region_scope)) {
                Entry::Occupied(e) => {
                    self.cfg.goto(block, source_info, *e.get());
                    return;
                }
                Entry::Vacant(v) => {
                    let b = self.cfg.start_new_block();
                    self.cfg.goto(block, source_info, b);
                    v.insert(b);
                    b
                }
            };

            let unwind_to = next_scope.cached_unwind.get(false).unwrap_or_else(|| {
                debug_assert!(!may_panic, "cached block not present?");
                START_BLOCK
            });

            block = build_scope_drops(
                &mut self.cfg,
                self.generator_kind,
                scope,
                block,
                unwind_to,
                self.arg_count,
                false, // not generator
                false, // not unwind path
            );

            scope = next_scope;
        }

        self.cfg.goto(block, self.scopes.source_info(scope_count, span), target);
    }
    /// Creates a path that performs all required cleanup for dropping a generator.
    ///
    /// This path terminates in GeneratorDrop. Returns the start of the path.
    /// None indicates there's no cleanup to do at this point.
    crate fn generator_drop_cleanup(&mut self) -> Option<BasicBlock> {
        // Fill in the cache for unwinds
        self.diverge_cleanup_gen(true);

        let src_info = self.scopes.source_info(self.scopes.len(), self.fn_span);
        let resume_block = self.resume_block();
        let mut scopes = self.scopes.iter_mut().peekable();
        let mut block = self.cfg.start_new_block();
        let result = block;

        while let Some(scope) = scopes.next() {
            block = if let Some(b) = scope.cached_generator_drop {
                self.cfg.goto(block, src_info, b);
                return Some(result);
            } else {
                let b = self.cfg.start_new_block();
                scope.cached_generator_drop = Some(b);
                self.cfg.goto(block, src_info, b);
                b
            };

            let unwind_to = scopes
                .peek()
                .as_ref()
                .map(|scope| {
                    scope
                        .cached_unwind
                        .get(true)
                        .unwrap_or_else(|| span_bug!(src_info.span, "cached block not present?"))
                })
                .unwrap_or(resume_block);

            block = build_scope_drops(
                &mut self.cfg,
                self.generator_kind,
                scope,
                block,
                unwind_to,
                self.arg_count,
                true, // is generator
                true, // is cached path
            );
        }

        self.cfg.terminate(block, src_info, TerminatorKind::GeneratorDrop);

        Some(result)
    }
    /// Creates a new source scope, nested in the current one.
    crate fn new_source_scope(
        &mut self,
        span: Span,
        lint_level: LintLevel,
        safety: Option<Safety>,
    ) -> SourceScope {
        let parent = self.source_scope;
        debug!(
            "new_source_scope({:?}, {:?}, {:?}) - parent({:?})={:?}",
            span,
            lint_level,
            safety,
            parent,
            self.source_scopes.get(parent)
        );
        let scope_local_data = SourceScopeLocalData {
            lint_root: if let LintLevel::Explicit(lint_root) = lint_level {
                lint_root
            } else {
                self.source_scopes[parent].local_data.as_ref().assert_crate_local().lint_root
            },
            safety: safety.unwrap_or_else(|| {
                self.source_scopes[parent].local_data.as_ref().assert_crate_local().safety
            }),
        };
        self.source_scopes.push(SourceScopeData {
            span,
            parent_scope: Some(parent),
            local_data: ClearCrossCrate::Set(scope_local_data),
        })
    }

    /// Given a span and the current source scope, make a SourceInfo.
    crate fn source_info(&self, span: Span) -> SourceInfo {
        SourceInfo { span, scope: self.source_scope }
    }
    /// Returns the scope that we should use as the lifetime of an
    /// operand. Basically, an operand must live until it is consumed.
    /// This is similar to, but not quite the same as, the temporary
    /// scope (which can be larger or smaller).
    ///
    /// Consider:
    ///
    ///     let x = foo(bar(X, Y));
    ///
    /// We wish to pop the storage for X and Y after `bar()` is
    /// called, not after the whole `let` is completed.
    ///
    /// As another example, if the second argument diverges:
    ///
    ///     foo(Box::new(2), panic!())
    ///
    /// We would allocate the box but then free it on the unwinding
    /// path; we would also emit a free on the 'success' path from
    /// panic, but that will turn out to be removed as dead-code.
    ///
    /// When building statics/constants, returns `None` since
    /// intermediate values do not have to be dropped in that case.
    crate fn local_scope(&self) -> Option<region::Scope> {
        match self.hir.body_owner_kind {
            hir::BodyOwnerKind::Const | hir::BodyOwnerKind::Static(_) =>
            // No need to free storage in this context.
            {
                None
            }
            hir::BodyOwnerKind::Closure | hir::BodyOwnerKind::Fn => Some(self.scopes.topmost()),
        }
    }
    // Schedule an abort block - this is used for some ABIs that cannot unwind.
    crate fn schedule_abort(&mut self) -> BasicBlock {
        let source_info = self.scopes.source_info(self.scopes.len(), self.fn_span);
        let abortblk = self.cfg.start_new_cleanup_block();
        self.cfg.terminate(abortblk, source_info, TerminatorKind::Abort);
        self.cached_resume_block = Some(abortblk);
        abortblk
    }
    crate fn schedule_drop_storage_and_value(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        local: Local,
    ) {
        self.schedule_drop(span, region_scope, local, DropKind::Storage);
        self.schedule_drop(span, region_scope, local, DropKind::Value);
    }
    /// Indicates that `place` should be dropped on exit from
    /// `region_scope`.
    ///
    /// When called with `DropKind::Storage`, `place` should be a local
    /// with an index higher than the current `self.arg_count`.
    crate fn schedule_drop(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        local: Local,
        drop_kind: DropKind,
    ) {
        let needs_drop = match drop_kind {
            DropKind::Value => {
                if !self.hir.needs_drop(self.local_decls[local].ty) {
                    return;
                }
                true
            }
            DropKind::Storage => {
                if local.index() <= self.arg_count {
                    span_bug!(
                        span,
                        "`schedule_drop` called with local {:?} and arg_count {}",
                        local,
                        self.arg_count,
                    )
                }
                false
            }
        };

        for scope in self.scopes.iter_mut() {
            let this_scope = scope.region_scope == region_scope;
            // When building drops, we try to cache chains of drops in such a way so these drops
            // could be reused by the drops which would branch into the cached (already built)
            // blocks. This, however, means that whenever we add a drop into a scope which already
            // had some blocks built (and thus, cached) for it, we must invalidate all caches which
            // might branch into the scope which had a drop just added to it. This is necessary,
            // because otherwise some other code might use the cache to branch into already built
            // chain of drops, essentially ignoring the newly added drop.
            //
            // For example consider there's two scopes with a drop in each. These are built and
            // thus the caches are filled:
            //
            // +------------------------------------------------------------+
            // |  +-----------------------------------+                     |
            // |  |  +--------+      +-------------+  |   +--------------+  |
            // |  |  | return | <----| drop(outer) | <+---| drop(middle) |  |
            // |  |  +--------+      +-------------+  |   +--------------+  |
            // |  +--------|outer_scope cache|--------+                     |
            // +--------------------|middle_scope cache|--------------------+
            //
            // Now, a new, inner-most scope is added along with a new drop into both inner-most and
            // outer-most scopes:
            //
            // +------------------------------------------------------------+
            // |  +-----------------------------------+                     |
            // |  |  +--------+      +-------------+  |   +--------------+  |   +-------------+
            // |  |  | return | <----| drop(new)   | <+---| drop(middle) | <+----| drop(inner) |
            // |  |  +--------+      | drop(outer) |  |   +--------------+  |   +-------------+
            // |  |                  +-------------+  |                     |
            // |  +----|invalid outer_scope cache|----+                     |
            // +----------------|invalid middle_scope cache|----------------+
            //
            // If, when adding `drop(new)` we do not invalidate the cached blocks for both
            // outer_scope and middle_scope, then, when building drops for the inner (right-most)
            // scope, the old, cached blocks, without `drop(new)`, will get used, producing the
            // wrong MIR.
            //
            // The cache and its invalidation for unwind branch is somewhat special. The cache is
            // per-drop, rather than per scope, which has several different implications. Adding
            // a new drop into a scope will not invalidate cached blocks of the prior drops in the
            // scope. That is true, because none of the already existing drops will have an edge
            // into a block with the newly added drop.
            //
            // Note that this code iterates scopes from the inner-most to the outer-most,
            // invalidating caches of each scope visited. This way the bare minimum of caches
            // gets invalidated. i.e., if a new drop is added into the middle scope, the cache
            // of the outer scope stays intact.
            scope.invalidate_cache(!needs_drop, self.generator_kind, this_scope);
            if this_scope {
                let region_scope_span =
                    region_scope.span(self.hir.tcx(), &self.hir.region_scope_tree);
                // Attribute scope exit drops to scope's closing brace.
                let scope_end = self.hir.tcx().sess.source_map().end_point(region_scope_span);

                scope.drops.push(DropData {
                    span: scope_end,
                    local,
                    kind: drop_kind,
                    cached_block: CachedBlock::default(),
                });

                return;
            }
        }

        span_bug!(span, "region scope {:?} not in scope to drop {:?}", region_scope, local);
    }
    /// Indicates that the "local operand" stored in `local` is
    /// *moved* at some point during execution (see `local_scope` for
    /// more information about what a "local operand" is -- in short,
    /// it's an intermediate operand created as part of preparing some
    /// MIR instruction). We use this information to suppress
    /// redundant drops on the non-unwind paths. This results in less
    /// MIR, but also avoids spurious borrow check errors (see #64391).
    ///
    /// Example: when compiling the call to `foo` here:
    ///
    ///     foo(bar(), ...)
    ///
    /// we would evaluate `bar()` to an operand `_X`. We would also
    /// schedule `_X` to be dropped when the expression scope for
    /// `foo(bar())` is exited. This is relevant, for example, if the
    /// later arguments should unwind (it would ensure that `_X` gets
    /// dropped). However, if no unwind occurs, then `_X` will be
    /// unconditionally consumed by the `call`:
    ///
    ///     _R = CALL(foo, _X, ...)
    ///
    /// However, `_X` is still registered to be dropped, and so if we
    /// do nothing else, we would generate a `DROP(_X)` that occurs
    /// after the call. This will later be optimized out by the
    /// drop-elaboration code, but in the meantime it can lead to
    /// spurious borrow-check errors -- the problem, ironically, is
    /// not the `DROP(_X)` itself, but the (spurious) unwind pathways
    /// that it creates. See #64391 for an example.
    crate fn record_operands_moved(&mut self, operands: &[Operand<'tcx>]) {
        let scope = match self.local_scope() {
            None => {
                // if there is no local scope, operands won't be dropped anyway
                return;
            }

            Some(local_scope) => self
                .scopes
                .iter_mut()
                .find(|scope| scope.region_scope == local_scope)
                .unwrap_or_else(|| bug!("scope {:?} not found in scope list!", local_scope)),
        };

        // look for moves of a local variable, like `MOVE(_X)`
        let locals_moved = operands.iter().flat_map(|operand| match operand {
            Operand::Copy(_) | Operand::Constant(_) => None,
            Operand::Move(place) => place.as_local(),
        });

        for local in locals_moved {
            // check if we have a Drop for this operand and -- if so
            // -- add it to the list of moved operands. Note that this
            // local might not have been an operand created for this
            // call, it could come from other places too.
            if scope.drops.iter().any(|drop| drop.local == local && drop.kind == DropKind::Value) {
                scope.moved_locals.push(local);
            }
        }
    }
    /// Branch based on a boolean condition.
    ///
    /// This is a special case because the temporary for the condition needs to
    /// be dropped on both the true and the false arm.
    crate fn test_bool(
        &mut self,
        mut block: BasicBlock,
        condition: Expr<'tcx>,
        source_info: SourceInfo,
    ) -> (BasicBlock, BasicBlock) {
        let cond = unpack!(block = self.as_local_operand(block, condition));
        let true_block = self.cfg.start_new_block();
        let false_block = self.cfg.start_new_block();
        let term = TerminatorKind::if_(self.hir.tcx(), cond.clone(), true_block, false_block);
        self.cfg.terminate(block, source_info, term);

        match cond {
            // Don't try to drop a constant
            Operand::Constant(_) => (),
            // If constants and statics, we don't generate StorageLive for this
            // temporary, so don't try to generate StorageDead for it either.
            _ if self.local_scope().is_none() => (),
            Operand::Copy(place) | Operand::Move(place) => {
                if let Some(cond_temp) = place.as_local() {
                    // Manually drop the condition on both branches.
                    let top_scope = self.scopes.scopes.last_mut().unwrap();
                    let top_drop_data = top_scope.drops.pop().unwrap();

                    match top_drop_data.kind {
                        DropKind::Value { .. } => {
                            bug!("Drop scheduled on top of condition variable")
                        }
                        DropKind::Storage => {
                            let source_info = top_scope.source_info(top_drop_data.span);
                            let local = top_drop_data.local;
                            assert_eq!(local, cond_temp, "Drop scheduled on top of condition");
                            self.cfg.push(
                                true_block,
                                Statement { source_info, kind: StatementKind::StorageDead(local) },
                            );
                            self.cfg.push(
                                false_block,
                                Statement { source_info, kind: StatementKind::StorageDead(local) },
                            );
                        }
                    }

                    top_scope.invalidate_cache(true, self.generator_kind, true);
                } else {
                    bug!("Expected as_local_operand to produce a temporary");
                }
            }
        }

        (true_block, false_block)
    }
    /// Creates a path that performs all required cleanup for unwinding.
    ///
    /// This path terminates in Resume. Returns the start of the path.
    /// See module comment for more details.
    crate fn diverge_cleanup(&mut self) -> BasicBlock {
        self.diverge_cleanup_gen(false)
    }
    fn resume_block(&mut self) -> BasicBlock {
        if let Some(target) = self.cached_resume_block {
            target
        } else {
            let resumeblk = self.cfg.start_new_cleanup_block();
            self.cfg.terminate(
                resumeblk,
                SourceInfo { scope: OUTERMOST_SOURCE_SCOPE, span: self.fn_span },
                TerminatorKind::Resume,
            );
            self.cached_resume_block = Some(resumeblk);
            resumeblk
        }
    }
    fn diverge_cleanup_gen(&mut self, generator_drop: bool) -> BasicBlock {
        // Build up the drops in **reverse** order. The end result will
        // look like:
        //
        // scopes[n] -> scopes[n-1] -> ... -> scopes[0]
        //
        // However, we build this in **reverse order**. That is, we
        // process scopes[0], then scopes[1], etc, pointing each one at
        // the result generated from the one before. Along the way, we
        // store caches. If everything is cached, we'll just walk right
        // to left reading the cached results but never create anything.

        // Find the last cached block
        debug!("diverge_cleanup_gen(self.scopes = {:?})", self.scopes);
        let cached_cleanup = self.scopes.iter_mut().enumerate().find_map(|(idx, ref scope)| {
            let cached_block = scope.cached_unwind.get(generator_drop)?;
            Some((cached_block, idx))
        });
        let (mut target, first_uncached) =
            cached_cleanup.unwrap_or_else(|| (self.resume_block(), self.scopes.len()));

        for scope in self.scopes.top_scopes(first_uncached) {
            target = build_diverge_scope(
                &mut self.cfg,
                scope.region_scope_span,
                scope,
                target,
                generator_drop,
                self.generator_kind,
            );
        }

        target
    }
    /// Utility function for *non*-scope code to build their own drops.
    crate fn build_drop_and_replace(
        &mut self,
        block: BasicBlock,
        span: Span,
        location: Place<'tcx>,
        value: Operand<'tcx>,
    ) -> BlockAnd<()> {
        let source_info = self.source_info(span);
        let next_target = self.cfg.start_new_block();
        let diverge_target = self.diverge_cleanup();
        self.cfg.terminate(
            block,
            source_info,
            TerminatorKind::DropAndReplace {
                location,
                value,
                target: next_target,
                unwind: Some(diverge_target),
            },
        );
        next_target.unit()
    }
    /// Creates an Assert terminator and returns the success block.
    /// If the boolean condition operand is not the expected value,
    /// a runtime panic will be caused with the given message.
    crate fn assert(
        &mut self,
        block: BasicBlock,
        cond: Operand<'tcx>,
        expected: bool,
        msg: AssertMessage<'tcx>,
        span: Span,
    ) -> BasicBlock {
        let source_info = self.source_info(span);

        let success_block = self.cfg.start_new_block();
        let cleanup = self.diverge_cleanup();

        self.cfg.terminate(
            block,
            source_info,
            TerminatorKind::Assert {
                cond,
                expected,
                msg,
                target: success_block,
                cleanup: Some(cleanup),
            },
        );

        success_block
    }
    // `match` arm scopes
    // ==================
    /// Unschedules any drops in the top scope.
    ///
    /// This is only needed for `match` arm scopes, because they have one
    /// entrance per pattern, but only one exit.
    pub(crate) fn clear_top_scope(&mut self, region_scope: region::Scope) {
        let top_scope = self.scopes.scopes.last_mut().unwrap();

        assert_eq!(top_scope.region_scope, region_scope);

        top_scope.drops.clear();
        top_scope.invalidate_cache(false, self.generator_kind, true);
    }
}
/// Builds drops for pop_scope and exit_scope.
fn build_scope_drops<'tcx>(
    cfg: &mut CFG<'tcx>,
    generator_kind: Option<GeneratorKind>,
    scope: &Scope,
    mut block: BasicBlock,
    last_unwind_to: BasicBlock,
    arg_count: usize,
    generator_drop: bool,
    is_cached_path: bool,
) -> BasicBlock {
    debug!("build_scope_drops({:?} -> {:?})", block, scope);

    // Build up the drops in evaluation order. The end result will
    // look like:
    //
    // [SDs, drops[n]] --..> [SDs, drop[1]] -> [SDs, drop[0]] -> [[SDs]]
    //               |                     |                 |
    //               :                     |                 |
    //                                     V                 V
    // [drop[n]] -...-> [drop[1]] ------> [drop[0]] ------> [last_unwind_to]
    //
    // The horizontal arrows represent the execution path when the drops return
    // successfully. The downwards arrows represent the execution path when the
    // drops panic (panicking while unwinding will abort, so there's no need for
    // another set of arrows).
    //
    // For generators, we unwind from a drop on a local to its StorageDead
    // statement. For other functions we don't worry about StorageDead. The
    // drops for the unwind path should have already been generated by
    // `diverge_cleanup_gen`.

    for drop_idx in (0..scope.drops.len()).rev() {
        let drop_data = &scope.drops[drop_idx];
        let source_info = scope.source_info(drop_data.span);
        let local = drop_data.local;

        match drop_data.kind {
            DropKind::Value => {
                // If the operand has been moved, and we are not on an unwind
                // path, then don't generate the drop. (We only take this into
                // account for non-unwind paths so as not to disturb the
                // caching mechanism.)
                if !is_cached_path && scope.moved_locals.iter().any(|&o| o == local) {
                    continue;
                }

                let unwind_to = get_unwind_to(scope, generator_kind, drop_idx, generator_drop)
                    .unwrap_or(last_unwind_to);

                let next = cfg.start_new_block();
                cfg.terminate(
                    block,
                    source_info,
                    TerminatorKind::Drop {
                        location: local.into(),
                        target: next,
                        unwind: Some(unwind_to),
                    },
                );
                block = next;
            }
            DropKind::Storage => {
                // Only temps and vars need their storage dead.
                assert!(local.index() > arg_count);
                cfg.push(block, Statement { source_info, kind: StatementKind::StorageDead(local) });
            }
        }
    }
    block
}
fn get_unwind_to(
    scope: &Scope,
    generator_kind: Option<GeneratorKind>,
    unwind_from: usize,
    generator_drop: bool,
) -> Option<BasicBlock> {
    for drop_idx in (0..unwind_from).rev() {
        let drop_data = &scope.drops[drop_idx];
        match (generator_kind, &drop_data.kind) {
            (Some(_), DropKind::Storage) => {
                return Some(drop_data.cached_block.get(generator_drop).unwrap_or_else(|| {
                    span_bug!(drop_data.span, "cached block not present for {:?}", drop_data)
                }));
            }
            (None, DropKind::Value) => {
                return Some(drop_data.cached_block.get(generator_drop).unwrap_or_else(|| {
                    span_bug!(drop_data.span, "cached block not present for {:?}", drop_data)
                }));
            }
            _ => (),
        }
    }
    None
}
fn build_diverge_scope<'tcx>(
    cfg: &mut CFG<'tcx>,
    span: Span,
    scope: &mut Scope,
    mut target: BasicBlock,
    generator_drop: bool,
    generator_kind: Option<GeneratorKind>,
) -> BasicBlock {
    // Build up the drops in **reverse** order. The end result will
    // look like:
    //
    // [drops[n]] -...-> [drops[0]] -> [target]
    //
    // The code in this function reads from right to left. At each
    // point, we check for cached blocks representing the
    // remainder. If everything is cached, we'll just walk right to
    // left reading the cached results but never create anything.

    let source_scope = scope.source_scope;
    let source_info = |span| SourceInfo { span, scope: source_scope };

    // We keep track of StorageDead statements to prepend to our current block
    // and store them here, in reverse order.
    let mut storage_deads = vec![];

    let mut target_built_by_us = false;

    // Build up the drops. Here we iterate the vector in
    // *forward* order, so that we generate drops[0] first (right to
    // left in diagram above).
    debug!("build_diverge_scope({:?})", scope.drops);
    for (j, drop_data) in scope.drops.iter_mut().enumerate() {
        debug!("build_diverge_scope drop_data[{}]: {:?}", j, drop_data);
        // Only full value drops are emitted in the diverging path,
        // not StorageDead, except in the case of generators.
        //
        // Note: This may not actually be what we desire (are we
        // "freeing" stack storage as we unwind, or merely observing a
        // frozen stack)? In particular, the intent may have been to
        // match the behavior of clang, but on inspection eddyb says
        // this is not what clang does.
        match drop_data.kind {
            DropKind::Storage if generator_kind.is_some() => {
                storage_deads.push(Statement {
                    source_info: source_info(drop_data.span),
                    kind: StatementKind::StorageDead(drop_data.local),
                });
                if !target_built_by_us {
                    // We cannot add statements to an existing block, so we create a new
                    // block for our StorageDead statements.
                    let block = cfg.start_new_cleanup_block();
                    let source_info = SourceInfo { span: DUMMY_SP, scope: source_scope };
                    cfg.goto(block, source_info, target);
                    target = block;
                    target_built_by_us = true;
                }
                *drop_data.cached_block.ref_mut(generator_drop) = Some(target);
            }
            DropKind::Storage => {}
            DropKind::Value => {
                let cached_block = drop_data.cached_block.ref_mut(generator_drop);
                target = if let Some(cached_block) = *cached_block {
                    storage_deads.clear();
                    target_built_by_us = false;
                    cached_block
                } else {
                    push_storage_deads(cfg, target, &mut storage_deads);
                    let block = cfg.start_new_cleanup_block();
                    cfg.terminate(
                        block,
                        source_info(drop_data.span),
                        TerminatorKind::Drop {
                            location: drop_data.local.into(),
                            target,
                            unwind: None,
                        },
                    );
                    *cached_block = Some(block);
                    target_built_by_us = true;
                    block
                };
            }
        }
    }
    push_storage_deads(cfg, target, &mut storage_deads);
    *scope.cached_unwind.ref_mut(generator_drop) = Some(target);

    assert!(storage_deads.is_empty());
    debug!("build_diverge_scope({:?}, {:?}) = {:?}", scope, span, target);

    target
}
fn push_storage_deads<'tcx>(
    cfg: &mut CFG<'tcx>,
    target: BasicBlock,
    storage_deads: &mut Vec<Statement<'tcx>>,
) {
    if storage_deads.is_empty() {
        return;
    }
    let statements = &mut cfg.block_data_mut(target).statements;
    storage_deads.reverse();
    debug!(
        "push_storage_deads({:?}), storage_deads={:?}, statements={:?}",
        target, storage_deads, statements
    );
    storage_deads.append(statements);
    mem::swap(statements, storage_deads);
    assert!(storage_deads.is_empty());
}