2 Managing the scope stack. The scopes are tied to lexical scopes, so as
3 we descend the THIR, we push a scope on the stack, build its
contents, and then pop it off. Every scope is named by a
`region::Scope`.
### SEME Regions

When pushing a new [Scope], we record the current point in the graph (a
basic block); this marks the entry to the scope. We then build more of
the control-flow graph as we lower the scope's contents. Whenever the scope is exited, either
12 via a `break` or `return` or just by fallthrough, that marks an exit
13 from the scope. Each lexical scope thus corresponds to a single-entry,
14 multiple-exit (SEME) region in the control-flow graph.
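
For example (an illustrative sketch, not code from this crate), the inner
block below is entered at one point but exited at two:

```ignore (illustrative)
fn f(cond: bool) -> i32 {
    {
        // one entry point into the scope...
        if cond { return 1; } // ...exit 1: early return
    } // ...exit 2: normal fallthrough
    0
}
```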
For now, we record the `region::Scope` for each SEME region for later reference
(see the caveat in the next paragraph). This is because destruction scopes are tied to
them. This may change in the future so that MIR lowering determines its own
scopes.
21 ### Not so SEME Regions
In the course of building matches, it sometimes happens that certain code
(namely guards) gets executed multiple times. This means that a single lexical
scope may in fact correspond to multiple, disjoint SEME regions. So in fact our
mapping is from one scope to a vector of SEME regions. Since the SEME regions
are disjoint, the mapping is still one-to-one for the set of SEME regions that
make up a given scope.
Also in matches, the scopes assigned to arms are not even always SEME regions!
Each arm has a single region with one entry for each pattern. We manually
manipulate the scheduled drops in this scope to avoid dropping things multiple
times.

### Drops
37 The primary purpose for scopes is to insert drops: while building
38 the contents, we also accumulate places that need to be dropped upon
39 exit from each scope. This is done by calling `schedule_drop`. Once a
40 drop is scheduled, whenever we branch out we will insert drops of all
41 those places onto the outgoing edge. Note that we don't know the full
42 set of scheduled drops up front, and so whenever we exit from the
43 scope we only drop the values scheduled thus far. For example, consider
44 the scope S corresponding to this loop:
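
```ignore (illustrative)
loop {
    let x = compute();     // `x`'s drop is scheduled in `S` here
    if cond { break; }     // exiting `S` here drops only `x`
    let y = compute();     // `y`'s drop is scheduled after `x`'s
}
```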
55 When processing the `let x`, we will add one drop to the scope for
56 `x`. The break will then insert a drop for `x`. When we process `let
57 y`, we will add another drop (in fact, to a subscope, but let's ignore
58 that for now); any later drops would also drop `y`.

### Early exit

There are numerous "normal" ways to early exit a scope: `break`,
`continue`, `return` (panics are handled separately). Whenever an
early exit occurs, the method `break_scope` is called. It is given the
current point in execution where the early exit occurs, as well as the
scope you want to branch to (note that all early exits go to some
67 other enclosing scope). `break_scope` will record the set of drops currently
68 scheduled in a [DropTree]. Later, before `in_breakable_scope` exits, the drops
69 will be added to the CFG.
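
As an illustrative sketch, a `break` that crosses two scopes records both
scheduled drops in the loop's [DropTree]:

```ignore (illustrative)
loop {
    let a = String::new();
    {
        let b = String::new();
        if cond { break; } // `break_scope` records drops for `b`, then `a`
    }
}
```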
71 Panics are handled in a similar fashion, except that the drops are added to the
MIR once the rest of the function has finished being lowered. If a terminator
can panic, call `diverge_from(block)` with the block containing that terminator.
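
A minimal sketch of how lowering might use this (simplified, not the exact
builder code):

```ignore (illustrative)
let success = this.cfg.start_new_block();
this.cfg.terminate(block, source_info, TerminatorKind::Call { /* ... */ });
this.diverge_from(block); // the terminator may panic: register `block`
                          // as an entry into the unwind drop tree
```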

### Breakable scopes

In addition to the normal scope stack, we track a loop scope stack
79 that contains only loops and breakable blocks. It tracks where a `break`,
80 `continue` or `return` should go to.
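
For example, a labeled `break` or `continue` picks its target from that stack
(illustrative):

```ignore (illustrative)
'outer: loop {
    loop {
        if a { break 'outer; }    // exits both loops
        if b { continue 'outer; } // next iteration of the outer loop
    }
}
```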
86 use crate::build::{BlockAnd, BlockAndExtension, BlockFrame, Builder, CFG};
87 use rustc_data_structures::fx::FxHashMap;
88 use rustc_index::vec::IndexVec;
89 use rustc_middle::middle::region;
90 use rustc_middle::mir::*;
91 use rustc_middle::thir::{Expr, LintLevel};
use rustc_span::{Span, DUMMY_SP};

use std::mem;
pub struct Scopes<'tcx> {
    scopes: Vec<Scope>,
99 /// The current set of breakable scopes. See module comment for more details.
100 breakable_scopes: Vec<BreakableScope<'tcx>>,
102 /// The scope of the innermost if-then currently being lowered.
103 if_then_scope: Option<IfThenScope>,
105 /// Drops that need to be done on unwind paths. See the comment on
106 /// [DropTree] for more details.
107 unwind_drops: DropTree,
109 /// Drops that need to be done on paths to the `GeneratorDrop` terminator.
    generator_drops: DropTree,
}

#[derive(Debug)]
struct Scope {
115 /// The source scope this scope was created in.
116 source_scope: SourceScope,
    /// The region span of this scope within source code.
119 region_scope: region::Scope,
    /// The set of places to drop when exiting this scope. This starts
122 /// out empty but grows as variables are declared during the
123 /// building process. This is a stack, so we always drop from the
124 /// end of the vector (top of the stack) first.
125 drops: Vec<DropData>,
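
    /// Locals that have been moved from within this scope, as recorded by
    /// `record_operands_moved`; their scheduled value drops are skipped on
    /// the normal (non-unwind) exit paths.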
127 moved_locals: Vec<Local>,
    /// The drop index that will drop everything in and below this scope on an
    /// unwind path.
131 cached_unwind_block: Option<DropIdx>,
133 /// The drop index that will drop everything in and below this scope on a
134 /// generator drop path.
    cached_generator_drop_block: Option<DropIdx>,
}

#[derive(Clone, Copy, Debug)]
struct DropData {
    /// The `Span` where drop obligation was incurred (typically where place was
    /// declared).
    source_info: SourceInfo,

    /// The local whose storage or value is to be dropped.
    local: Local,

    /// Whether this is a value Drop or a StorageDead.
    kind: DropKind,
}
151 #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub(crate) enum DropKind {
    Value,
    Storage,
}

#[derive(Debug)]
158 struct BreakableScope<'tcx> {
159 /// Region scope of the loop
160 region_scope: region::Scope,
161 /// The destination of the loop/block expression itself (i.e., where to put
162 /// the result of a `break` or `return` expression)
163 break_destination: Place<'tcx>,
164 /// Drops that happen on the `break`/`return` path.
165 break_drops: DropTree,
166 /// Drops that happen on the `continue` path.
    continue_drops: Option<DropTree>,
}

#[derive(Debug)]
struct IfThenScope {
    /// The if-then scope or arm scope
    region_scope: region::Scope,
174 /// Drops that happen on the `else` path.
    else_drops: DropTree,
}
178 /// The target of an expression that breaks out of a scope
179 #[derive(Clone, Copy, Debug)]
180 pub(crate) enum BreakableTarget {
181 Continue(region::Scope),
    Break(region::Scope),
    Return,
}
186 rustc_index::newtype_index! {
    struct DropIdx { .. }
}
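
/// The root node of every [DropTree]: it does not represent a drop itself, but
/// the block that is reached once all required drops have been performed.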
190 const ROOT_NODE: DropIdx = DropIdx::from_u32(0);
192 /// A tree of drops that we have deferred lowering. It's used for:
194 /// * Drops on unwind paths
195 /// * Drops on generator drop paths (when a suspended generator is dropped)
196 /// * Drops on return and loop exit paths
197 /// * Drops on the else path in an `if let` chain
/// Once no more nodes could be added to the tree, we lower it to MIR in one go
/// in `build_mir`.
#[derive(Debug)]
struct DropTree {
203 /// Drops in the tree.
204 drops: IndexVec<DropIdx, (DropData, DropIdx)>,
205 /// Map for finding the inverse of the `next_drop` relation:
207 /// `previous_drops[(drops[i].1, drops[i].0.local, drops[i].0.kind)] == i`
208 previous_drops: FxHashMap<(DropIdx, Local, DropKind), DropIdx>,
209 /// Edges into the `DropTree` that need to be added once it's lowered.
    entry_points: Vec<(DropIdx, BasicBlock)>,
}

impl Scope {
214 /// Whether there's anything to do for the cleanup path, that is,
215 /// when unwinding through this scope. This includes destructors,
216 /// but not StorageDead statements, which don't get emitted at all
217 /// for unwinding, for several reasons:
218 /// * clang doesn't emit llvm.lifetime.end for C++ unwinding
219 /// * LLVM's memory dependency analysis can't handle it atm
    /// * polluting the cleanup MIR with StorageDead creates
    ///   landing pads even though there are no actual destructors
222 /// * freeing up stack space has no effect during unwinding
223 /// Note that for generators we do emit StorageDeads, for the
224 /// use of optimizations in the MIR generator transform.
225 fn needs_cleanup(&self) -> bool {
226 self.drops.iter().any(|drop| match drop.kind {
227 DropKind::Value => true,
            DropKind::Storage => false,
        })
    }
232 fn invalidate_cache(&mut self) {
233 self.cached_unwind_block = None;
        self.cached_generator_drop_block = None;
    }
}
/// A trait that determines how [DropTree] creates its blocks and
239 /// links to any entry nodes.
240 trait DropTreeBuilder<'tcx> {
241 /// Create a new block for the tree. This should call either
242 /// `cfg.start_new_block()` or `cfg.start_new_cleanup_block()`.
243 fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock;
    /// Links a block outside the drop tree, `from`, to the block `to` inside
    /// the drop tree.
    fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock);
}

impl DropTree {
    fn new() -> Self {
252 // The root node of the tree doesn't represent a drop, but instead
253 // represents the block in the tree that should be jumped to once all
254 // of the required drops have been performed.
255 let fake_source_info = SourceInfo::outermost(DUMMY_SP);
        let fake_data =
            DropData { source_info: fake_source_info, local: Local::MAX, kind: DropKind::Storage };
258 let drop_idx = DropIdx::MAX;
259 let drops = IndexVec::from_elem_n((fake_data, drop_idx), 1);
        Self { drops, entry_points: Vec::new(), previous_drops: FxHashMap::default() }
    }
263 fn add_drop(&mut self, drop: DropData, next: DropIdx) -> DropIdx {
264 let drops = &mut self.drops;
        *self
            .previous_drops
            .entry((next, drop.local, drop.kind))
            .or_insert_with(|| drops.push((drop, next)))
    }
271 fn add_entry(&mut self, from: BasicBlock, to: DropIdx) {
272 debug_assert!(to < self.drops.next_index());
        self.entry_points.push((to, from));
    }
276 /// Builds the MIR for a given drop tree.
278 /// `blocks` should have the same length as `self.drops`, and may have its
279 /// first value set to some already existing block.
280 fn build_mir<'tcx, T: DropTreeBuilder<'tcx>>(
        &mut self,
        cfg: &mut CFG<'tcx>,
        blocks: &mut IndexVec<DropIdx, Option<BasicBlock>>,
    ) {
285 debug!("DropTree::build_mir(drops = {:#?})", self);
286 assert_eq!(blocks.len(), self.drops.len());
288 self.assign_blocks::<T>(cfg, blocks);
        self.link_blocks(cfg, blocks)
    }
292 /// Assign blocks for all of the drops in the drop tree that need them.
293 fn assign_blocks<'tcx, T: DropTreeBuilder<'tcx>>(
        &mut self,
        cfg: &mut CFG<'tcx>,
        blocks: &mut IndexVec<DropIdx, Option<BasicBlock>>,
    ) {
298 // StorageDead statements can share blocks with each other and also with
299 // a Drop terminator. We iterate through the drops to find which drops
300 // need their own block.
        #[derive(Clone, Copy)]
        enum Block {
            // This drop is unreachable
            None,
            // This drop is only reachable through the `StorageDead` with the
            // given index, so it can share that drop's block.
            Shares(DropIdx),
            // This drop has more than one way of being reached, or it is
            // branched to from outside the tree, or its predecessor is a
            // `Value` drop.
            Own,
        }
314 let mut needs_block = IndexVec::from_elem(Block::None, &self.drops);
315 if blocks[ROOT_NODE].is_some() {
316 // In some cases (such as drops for `continue`) the root node
            // already has a block. In this case, make sure that we don't
            // override it.
            needs_block[ROOT_NODE] = Block::Own;
        }
322 // Sort so that we only need to check the last value.
        let entry_points = &mut self.entry_points;
        entry_points.sort();
326 for (drop_idx, drop_data) in self.drops.iter_enumerated().rev() {
327 if entry_points.last().map_or(false, |entry_point| entry_point.0 == drop_idx) {
328 let block = *blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
329 needs_block[drop_idx] = Block::Own;
330 while entry_points.last().map_or(false, |entry_point| entry_point.0 == drop_idx) {
331 let entry_block = entry_points.pop().unwrap().1;
                    T::add_entry(cfg, entry_block, block);
                }
            }

            match needs_block[drop_idx] {
                Block::None => continue,
                Block::Own => {
                    blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
                }
                Block::Shares(pred) => {
                    blocks[drop_idx] = blocks[pred];
                }
            }
344 if let DropKind::Value = drop_data.0.kind {
345 needs_block[drop_data.1] = Block::Own;
346 } else if drop_idx != ROOT_NODE {
347 match &mut needs_block[drop_data.1] {
348 pred @ Block::None => *pred = Block::Shares(drop_idx),
                    pred @ Block::Shares(_) => *pred = Block::Own,
                    Block::Own => (),
                }
            }
        }
355 debug!("assign_blocks: blocks = {:#?}", blocks);
        assert!(entry_points.is_empty());
    }
359 fn link_blocks<'tcx>(
        &self,
        cfg: &mut CFG<'tcx>,
        blocks: &IndexVec<DropIdx, Option<BasicBlock>>,
    ) {
364 for (drop_idx, drop_data) in self.drops.iter_enumerated().rev() {
365 let Some(block) = blocks[drop_idx] else { continue };
            match drop_data.0.kind {
                DropKind::Value => {
                    let terminator = TerminatorKind::Drop {
                        target: blocks[drop_data.1].unwrap(),
                        // The caller will handle this if needed.
                        unwind: None,
                        place: drop_data.0.local.into(),
                    };
                    cfg.terminate(block, drop_data.0.source_info, terminator);
                }
376 // Root nodes don't correspond to a drop.
377 DropKind::Storage if drop_idx == ROOT_NODE => {}
378 DropKind::Storage => {
379 let stmt = Statement {
380 source_info: drop_data.0.source_info,
                        kind: StatementKind::StorageDead(drop_data.0.local),
                    };
                    cfg.push(block, stmt);
                    let target = blocks[drop_data.1].unwrap();
                    if target != block {
                        // Diagnostics don't use this `Span` but debuginfo
                        // might. Since we don't want breakpoints to be placed
                        // here, especially when this is on an unwind path, we
                        // use `DUMMY_SP`.
                        let source_info = SourceInfo { span: DUMMY_SP, ..drop_data.0.source_info };
                        let terminator = TerminatorKind::Goto { target };
                        cfg.terminate(block, source_info, terminator);
                    }
                }
            }
        }
    }
}
400 impl<'tcx> Scopes<'tcx> {
    pub(crate) fn new() -> Self {
        Self {
            scopes: Vec::new(),
            breakable_scopes: Vec::new(),
            if_then_scope: None,
            unwind_drops: DropTree::new(),
            generator_drops: DropTree::new(),
        }
    }
411 fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo), vis_scope: SourceScope) {
412 debug!("push_scope({:?})", region_scope);
413 self.scopes.push(Scope {
414 source_scope: vis_scope,
415 region_scope: region_scope.0,
            drops: vec![],
            moved_locals: vec![],
418 cached_unwind_block: None,
            cached_generator_drop_block: None,
        });
    }
423 fn pop_scope(&mut self, region_scope: (region::Scope, SourceInfo)) -> Scope {
424 let scope = self.scopes.pop().unwrap();
        assert_eq!(scope.region_scope, region_scope.0);
        scope
    }
429 fn scope_index(&self, region_scope: region::Scope, span: Span) -> usize {
        self.scopes
            .iter()
            .rposition(|scope| scope.region_scope == region_scope)
            .unwrap_or_else(|| span_bug!(span, "region_scope {:?} does not enclose", region_scope))
    }
436 /// Returns the topmost active scope, which is known to be alive until
437 /// the next scope expression.
438 fn topmost(&self) -> region::Scope {
        self.scopes.last().expect("topmost_scope: no scopes present").region_scope
    }
}
443 impl<'a, 'tcx> Builder<'a, 'tcx> {
444 // Adding and removing scopes
445 // ==========================
447 /// Start a breakable scope, which tracks where `continue`, `break` and
448 /// `return` should branch to.
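    ///
    /// An illustrative sketch of a call site (the names here are hypothetical,
    /// not the exact lowering code):
    /// ```ignore (illustrative)
    /// let loop_exit = builder.in_breakable_scope(Some(loop_head), destination, span, |this| {
    ///     // Lower the loop body here; any `break`/`continue` lowered inside
    ///     // will find this scope on the breakable-scope stack.
    ///     None
    /// });
    /// ```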
449 pub(crate) fn in_breakable_scope<F>(
        &mut self,
        loop_block: Option<BasicBlock>,
        break_destination: Place<'tcx>,
        span: Span,
        f: F,
    ) -> BlockAnd<()>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> Option<BlockAnd<()>>,
    {
459 let region_scope = self.scopes.topmost();
460 let scope = BreakableScope {
            region_scope,
            break_destination,
            break_drops: DropTree::new(),
            continue_drops: loop_block.map(|_| DropTree::new()),
        };
466 self.scopes.breakable_scopes.push(scope);
467 let normal_exit_block = f(self);
468 let breakable_scope = self.scopes.breakable_scopes.pop().unwrap();
469 assert!(breakable_scope.region_scope == region_scope);
        let break_block =
            self.build_exit_tree(breakable_scope.break_drops, region_scope, span, None);
472 if let Some(drops) = breakable_scope.continue_drops {
            self.build_exit_tree(drops, region_scope, span, loop_block);
        }
475 match (normal_exit_block, break_block) {
476 (Some(block), None) | (None, Some(block)) => block,
477 (None, None) => self.cfg.start_new_block().unit(),
478 (Some(normal_block), Some(exit_block)) => {
479 let target = self.cfg.start_new_block();
480 let source_info = self.source_info(span);
                self.cfg.terminate(
                    unpack!(normal_block),
                    source_info,
                    TerminatorKind::Goto { target },
                );
                self.cfg.terminate(
                    unpack!(exit_block),
                    source_info,
                    TerminatorKind::Goto { target },
                );
                target.unit()
            }
        }
    }
    /// Start an if-then scope which tracks drops for `if` expressions and `if`
    /// guards.
    ///
499 /// For an if-let chain:
    /// ```ignore (illustrative)
    /// if let Some(x) = a && let Some(y) = b && let Some(z) = c { ... }
    /// ```
503 /// There are three possible ways the condition can be false and we may have
504 /// to drop `x`, `x` and `y`, or neither depending on which binding fails.
505 /// To handle this correctly we use a `DropTree` in a similar way to a
506 /// `loop` expression and 'break' out on all of the 'else' paths.
509 /// - We don't need to keep a stack of scopes in the `Builder` because the
510 /// 'else' paths will only leave the innermost scope.
511 /// - This is also used for match guards.
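    ///
    /// An illustrative sketch of a call site (names hypothetical):
    /// ```ignore (illustrative)
    /// let (then_blk, else_blk) = builder.in_if_then_scope(scope, span, |this| {
    ///     // Lower the condition here; each failing `let` binding breaks
    ///     // to the else drop tree via `break_for_else`.
    ///     this.lower_condition(/* ... */)
    /// });
    /// ```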
512 pub(crate) fn in_if_then_scope<F>(
        &mut self,
        region_scope: region::Scope,
        span: Span,
        f: F,
517 ) -> (BasicBlock, BasicBlock)
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<()>,
    {
521 let scope = IfThenScope { region_scope, else_drops: DropTree::new() };
522 let previous_scope = mem::replace(&mut self.scopes.if_then_scope, Some(scope));
524 let then_block = unpack!(f(self));
526 let if_then_scope = mem::replace(&mut self.scopes.if_then_scope, previous_scope).unwrap();
527 assert!(if_then_scope.region_scope == region_scope);
529 let else_block = self
530 .build_exit_tree(if_then_scope.else_drops, region_scope, span, None)
531 .map_or_else(|| self.cfg.start_new_block(), |else_block_and| unpack!(else_block_and));
        (then_block, else_block)
    }
536 pub(crate) fn in_opt_scope<F, R>(
        &mut self,
        opt_scope: Option<(region::Scope, SourceInfo)>,
        f: F,
    ) -> BlockAnd<R>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
    {
544 debug!("in_opt_scope(opt_scope={:?})", opt_scope);
545 if let Some(region_scope) = opt_scope {
            self.push_scope(region_scope);
        }
        let mut block;
549 let rv = unpack!(block = f(self));
550 if let Some(region_scope) = opt_scope {
            unpack!(block = self.pop_scope(region_scope, block));
        }
        debug!("in_scope: exiting opt_scope={:?} block={:?}", opt_scope, block);
        block.and(rv)
    }
557 /// Convenience wrapper that pushes a scope and then executes `f`
558 /// to build its contents, popping the scope afterwards.
559 #[instrument(skip(self, f), level = "debug")]
560 pub(crate) fn in_scope<F, R>(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
        lint_level: LintLevel,
        f: F,
    ) -> BlockAnd<R>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
    {
569 let source_scope = self.source_scope;
571 if let LintLevel::Explicit(current_hir_id) = lint_level {
572 // Use `maybe_lint_level_root_bounded` with `root_lint_level` as a bound
573 // to avoid adding Hir dependencies on our parents.
574 // We estimate the true lint roots here to avoid creating a lot of source scopes.
            let tcx = self.tcx;
            let parent_root = tcx.maybe_lint_level_root_bounded(
                self.source_scopes[source_scope].local_data.as_ref().assert_crate_local().lint_root,
                self.hir_id,
            );
580 let current_root = tcx.maybe_lint_level_root_bounded(current_hir_id, self.hir_id);
582 if parent_root != current_root {
583 self.source_scope = self.new_source_scope(
                    region_scope.1.span,
                    LintLevel::Explicit(current_root),
                    None,
                );
            }
        }
        self.push_scope(region_scope);
        let mut block;
592 let rv = unpack!(block = f(self));
593 unpack!(block = self.pop_scope(region_scope, block));
        self.source_scope = source_scope;

        block.and(rv)
    }
599 /// Push a scope onto the stack. You can then build code in this
600 /// scope and call `pop_scope` afterwards. Note that these two
601 /// calls must be paired; using `in_scope` as a convenience
    /// wrapper may be preferable.
603 pub(crate) fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo)) {
        self.scopes.push_scope(region_scope, self.source_scope);
    }
607 /// Pops a scope, which should have region scope `region_scope`,
608 /// adding any drops onto the end of `block` that are needed.
609 /// This must match 1-to-1 with `push_scope`.
610 pub(crate) fn pop_scope(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
        mut block: BasicBlock,
    ) -> BlockAnd<()> {
615 debug!("pop_scope({:?}, {:?})", region_scope, block);
617 block = self.leave_top_scope(block);
        self.scopes.pop_scope(region_scope);

        block.unit()
    }
624 /// Sets up the drops for breaking from `block` to `target`.
625 pub(crate) fn break_scope(
        &mut self,
        mut block: BasicBlock,
628 value: Option<&Expr<'tcx>>,
629 target: BreakableTarget,
        source_info: SourceInfo,
    ) -> BlockAnd<()> {
632 let span = source_info.span;
634 let get_scope_index = |scope: region::Scope| {
            // find the loop-scope by its `region::Scope`.
            self.scopes
                .breakable_scopes
                .iter()
                .rposition(|breakable_scope| breakable_scope.region_scope == scope)
                .unwrap_or_else(|| span_bug!(span, "no enclosing breakable scope found"))
        };
642 let (break_index, destination) = match target {
643 BreakableTarget::Return => {
644 let scope = &self.scopes.breakable_scopes[0];
645 if scope.break_destination != Place::return_place() {
                    span_bug!(span, "`return` in item with no return scope");
                }
                (0, Some(scope.break_destination))
            }
650 BreakableTarget::Break(scope) => {
651 let break_index = get_scope_index(scope);
652 let scope = &self.scopes.breakable_scopes[break_index];
                (break_index, Some(scope.break_destination))
            }
655 BreakableTarget::Continue(scope) => {
                let break_index = get_scope_index(scope);
                (break_index, None)
            }
        };
661 if let Some(destination) = destination {
662 if let Some(value) = value {
663 debug!("stmt_expr Break val block_context.push(SubExpr)");
664 self.block_context.push(BlockFrame::SubExpr);
665 unpack!(block = self.expr_into_dest(destination, block, value));
                self.block_context.pop();
            } else {
                self.cfg.push_assign_unit(block, source_info, destination, self.tcx)
            }
        } else {
671 assert!(value.is_none(), "`return` and `break` should have a destination");
672 if self.tcx.sess.instrument_coverage() {
673 // Unlike `break` and `return`, which push an `Assign` statement to MIR, from which
674 // a Coverage code region can be generated, `continue` needs no `Assign`; but
675 // without one, the `InstrumentCoverage` MIR pass cannot generate a code region for
676 // `continue`. Coverage will be missing unless we add a dummy `Assign` to MIR.
                self.add_dummy_assignment(span, block, source_info);
            }
        }
681 let region_scope = self.scopes.breakable_scopes[break_index].region_scope;
682 let scope_index = self.scopes.scope_index(region_scope, span);
683 let drops = if destination.is_some() {
            &mut self.scopes.breakable_scopes[break_index].break_drops
        } else {
            self.scopes.breakable_scopes[break_index].continue_drops.as_mut().unwrap()
        };
688 let mut drop_idx = ROOT_NODE;
689 for scope in &self.scopes.scopes[scope_index + 1..] {
690 for drop in &scope.drops {
                drop_idx = drops.add_drop(*drop, drop_idx);
            }
        }
        drops.add_entry(block, drop_idx);
696 // `build_drop_trees` doesn't have access to our source_info, so we
697 // create a dummy terminator now. `TerminatorKind::Resume` is used
698 // because MIR type checking will panic if it hasn't been overwritten.
699 self.cfg.terminate(block, source_info, TerminatorKind::Resume);
        self.cfg.start_new_block().unit()
    }
704 pub(crate) fn break_for_else(
        &mut self,
        block: BasicBlock,
        target: region::Scope,
        source_info: SourceInfo,
    ) {
710 let scope_index = self.scopes.scope_index(target, source_info.span);
711 let if_then_scope = self
            .scopes
            .if_then_scope
            .as_mut()
            .unwrap_or_else(|| span_bug!(source_info.span, "no if-then scope found"));
717 assert_eq!(if_then_scope.region_scope, target, "breaking to incorrect scope");
719 let mut drop_idx = ROOT_NODE;
720 let drops = &mut if_then_scope.else_drops;
721 for scope in &self.scopes.scopes[scope_index + 1..] {
722 for drop in &scope.drops {
                drop_idx = drops.add_drop(*drop, drop_idx);
            }
        }
        drops.add_entry(block, drop_idx);
728 // `build_drop_trees` doesn't have access to our source_info, so we
729 // create a dummy terminator now. `TerminatorKind::Resume` is used
730 // because MIR type checking will panic if it hasn't been overwritten.
        self.cfg.terminate(block, source_info, TerminatorKind::Resume);
    }
    // Add a dummy `Assign` statement to the CFG, with the span for the source code's `continue`
    // statement.
736 fn add_dummy_assignment(&mut self, span: Span, block: BasicBlock, source_info: SourceInfo) {
737 let local_decl = LocalDecl::new(self.tcx.mk_unit(), span).internal();
738 let temp_place = Place::from(self.local_decls.push(local_decl));
        self.cfg.push_assign_unit(block, source_info, temp_place, self.tcx);
    }
742 fn leave_top_scope(&mut self, block: BasicBlock) -> BasicBlock {
743 // If we are emitting a `drop` statement, we need to have the cached
744 // diverge cleanup pads ready in case that drop panics.
745 let needs_cleanup = self.scopes.scopes.last().map_or(false, |scope| scope.needs_cleanup());
746 let is_generator = self.generator_kind.is_some();
747 let unwind_to = if needs_cleanup { self.diverge_cleanup() } else { DropIdx::MAX };
749 let scope = self.scopes.scopes.last().expect("leave_top_scope called with no scopes");
        unpack!(build_scope_drops(
            &mut self.cfg,
            &mut self.scopes.unwind_drops,
            scope,
            block,
            unwind_to,
            is_generator && needs_cleanup,
            self.arg_count,
        ))
    }
761 /// Creates a new source scope, nested in the current one.
762 pub(crate) fn new_source_scope(
        &mut self,
        span: Span,
        lint_level: LintLevel,
        safety: Option<Safety>,
    ) -> SourceScope {
768 let parent = self.source_scope;
770 "new_source_scope({:?}, {:?}, {:?}) - parent({:?})={:?}",
775 self.source_scopes.get(parent)
777 let scope_local_data = SourceScopeLocalData {
            lint_root: if let LintLevel::Explicit(lint_root) = lint_level {
                lint_root
            } else {
                self.source_scopes[parent].local_data.as_ref().assert_crate_local().lint_root
            },
783 safety: safety.unwrap_or_else(|| {
                self.source_scopes[parent].local_data.as_ref().assert_crate_local().safety
            }),
        };
787 self.source_scopes.push(SourceScopeData {
            span,
            parent_scope: Some(parent),
            inlined: None,
            inlined_parent_scope: None,
            local_data: ClearCrossCrate::Set(scope_local_data),
        })
    }
796 /// Given a span and the current source scope, make a SourceInfo.
797 pub(crate) fn source_info(&self, span: Span) -> SourceInfo {
        SourceInfo { span, scope: self.source_scope }
    }
804 /// Returns the scope that we should use as the lifetime of an
805 /// operand. Basically, an operand must live until it is consumed.
806 /// This is similar to, but not quite the same as, the temporary
807 /// scope (which can be larger or smaller).
810 /// ```ignore (illustrative)
    /// let x = foo(bar(X, Y));
    /// ```
813 /// We wish to pop the storage for X and Y after `bar()` is
814 /// called, not after the whole `let` is completed.
816 /// As another example, if the second argument diverges:
817 /// ```ignore (illustrative)
    /// foo(Box::new(2), panic!())
    /// ```
820 /// We would allocate the box but then free it on the unwinding
821 /// path; we would also emit a free on the 'success' path from
822 /// panic, but that will turn out to be removed as dead-code.
823 pub(crate) fn local_scope(&self) -> region::Scope {
        self.scopes.topmost()
    }
830 pub(crate) fn schedule_drop_storage_and_value(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        local: Local,
    ) {
836 self.schedule_drop(span, region_scope, local, DropKind::Storage);
        self.schedule_drop(span, region_scope, local, DropKind::Value);
    }
840 /// Indicates that `place` should be dropped on exit from `region_scope`.
842 /// When called with `DropKind::Storage`, `place` shouldn't be the return
843 /// place, or a function parameter.
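    ///
    /// An illustrative sketch (hypothetical names): lowering `let s = mk();`
    /// would typically schedule both a storage and a value drop for `s`:
    /// ```ignore (illustrative)
    /// builder.schedule_drop(span, scope, s_local, DropKind::Storage);
    /// builder.schedule_drop(span, scope, s_local, DropKind::Value);
    /// ```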
844 pub(crate) fn schedule_drop(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        local: Local,
        drop_kind: DropKind,
    ) {
        let needs_drop = match drop_kind {
            DropKind::Value => {
                if !self.local_decls[local].ty.needs_drop(self.tcx, self.param_env) {
                    return;
                }
                true
            }
            DropKind::Storage => {
                if local.index() <= self.arg_count {
                    span_bug!(
                        span,
                        "`schedule_drop` called with local {:?} and arg_count {}",
                        local,
                        self.arg_count,
                    );
                }
                false
            }
        };
871 // When building drops, we try to cache chains of drops to reduce the
872 // number of `DropTree::add_drop` calls. This, however, means that
873 // whenever we add a drop into a scope which already had some entries
874 // in the drop tree built (and thus, cached) for it, we must invalidate
875 // all caches which might branch into the scope which had a drop just
876 // added to it. This is necessary, because otherwise some other code
877 // might use the cache to branch into already built chain of drops,
878 // essentially ignoring the newly added drop.
880 // For example consider there’s two scopes with a drop in each. These
881 // are built and thus the caches are filled:
883 // +--------------------------------------------------------+
884 // | +---------------------------------+ |
885 // | | +--------+ +-------------+ | +---------------+ |
886 // | | | return | <-+ | drop(outer) | <-+ | drop(middle) | |
887 // | | +--------+ +-------------+ | +---------------+ |
888 // | +------------|outer_scope cache|--+ |
889 // +------------------------------|middle_scope cache|------+
891 // Now, a new, inner-most scope is added along with a new drop into
892 // both inner-most and outer-most scopes:
894 // +------------------------------------------------------------+
895 // | +----------------------------------+ |
896 // | | +--------+ +-------------+ | +---------------+ | +-------------+
897 // | | | return | <+ | drop(new) | <-+ | drop(middle) | <--+| drop(inner) |
898 // | | +--------+ | | drop(outer) | | +---------------+ | +-------------+
899 // | | +-+ +-------------+ | |
900 // | +---|invalid outer_scope cache|----+ |
        // +---------------------|invalid middle_scope cache|-----------+
903 // If, when adding `drop(new)` we do not invalidate the cached blocks for both
904 // outer_scope and middle_scope, then, when building drops for the inner (right-most)
        // scope, the old, cached blocks, without `drop(new)`, will get used, producing the
        // wrong results.
        //
908 // Note that this code iterates scopes from the inner-most to the outer-most,
909 // invalidating caches of each scope visited. This way bare minimum of the
910 // caches gets invalidated. i.e., if a new drop is added into the middle scope, the
911 // cache of outer scope stays intact.
913 // Since we only cache drops for the unwind path and the generator drop
914 // path, we only need to invalidate the cache for drops that happen on
915 // the unwind or generator drop paths. This means that for
916 // non-generators we don't need to invalidate caches for `DropKind::Storage`.
917 let invalidate_caches = needs_drop || self.generator_kind.is_some();
918 for scope in self.scopes.scopes.iter_mut().rev() {
919 if invalidate_caches {
920 scope.invalidate_cache();
923 if scope.region_scope == region_scope {
924 let region_scope_span = region_scope.span(self.tcx, &self.region_scope_tree);
925 // Attribute scope exit drops to scope's closing brace.
926 let scope_end = self.tcx.sess.source_map().end_point(region_scope_span);
928 scope.drops.push(DropData {
                    source_info: SourceInfo { span: scope_end, scope: scope.source_scope },
                    local,
                    kind: drop_kind,
                });

                return;
            }
        }

        span_bug!(span, "region scope {:?} not in scope to drop {:?}", region_scope, local);
    }
941 /// Indicates that the "local operand" stored in `local` is
942 /// *moved* at some point during execution (see `local_scope` for
943 /// more information about what a "local operand" is -- in short,
944 /// it's an intermediate operand created as part of preparing some
945 /// MIR instruction). We use this information to suppress
946 /// redundant drops on the non-unwind paths. This results in less
947 /// MIR, but also avoids spurious borrow check errors
950 /// Example: when compiling the call to `foo` here:
    /// ```ignore (illustrative)
    /// foo(bar(), ...)
    /// ```
    ///
956 /// we would evaluate `bar()` to an operand `_X`. We would also
957 /// schedule `_X` to be dropped when the expression scope for
958 /// `foo(bar())` is exited. This is relevant, for example, if the
959 /// later arguments should unwind (it would ensure that `_X` gets
960 /// dropped). However, if no unwind occurs, then `_X` will be
961 /// unconditionally consumed by the `call`:
963 /// ```ignore (illustrative)
    /// _R = CALL(foo, _X, ...)
    /// ```
    ///
970 /// However, `_X` is still registered to be dropped, and so if we
971 /// do nothing else, we would generate a `DROP(_X)` that occurs
972 /// after the call. This will later be optimized out by the
973 /// drop-elaboration code, but in the meantime it can lead to
974 /// spurious borrow-check errors -- the problem, ironically, is
975 /// not the `DROP(_X)` itself, but the (spurious) unwind pathways
976 /// that it creates. See #64391 for an example.
977 pub(crate) fn record_operands_moved(&mut self, operands: &[Operand<'tcx>]) {
978 let local_scope = self.local_scope();
979 let scope = self.scopes.scopes.last_mut().unwrap();
        assert_eq!(scope.region_scope, local_scope, "local scope is not the topmost scope!");
983 // look for moves of a local variable, like `MOVE(_X)`
984 let locals_moved = operands.iter().flat_map(|operand| match operand {
985 Operand::Copy(_) | Operand::Constant(_) => None,
            Operand::Move(place) => place.as_local(),
        });
989 for local in locals_moved {
990 // check if we have a Drop for this operand and -- if so
991 // -- add it to the list of moved operands. Note that this
992 // local might not have been an operand created for this
993 // call, it could come from other places too.
994 if scope.drops.iter().any(|drop| drop.local == local && drop.kind == DropKind::Value) {
                scope.moved_locals.push(local);
            }
        }
    }
1003 /// Returns the [DropIdx] for the innermost drop if the function unwound at
1004 /// this point. The `DropIdx` will be created if it doesn't already exist.
1005 fn diverge_cleanup(&mut self) -> DropIdx {
        // It is okay to use a dummy span because getting the scope index on
        // the topmost scope must always succeed.
        self.diverge_cleanup_target(self.scopes.topmost(), DUMMY_SP)
    }
1011 /// This is similar to [diverge_cleanup](Self::diverge_cleanup) except its target is set to
1012 /// some ancestor scope instead of the current scope.
1013 /// It is possible to unwind to some ancestor scope if some drop panics as
    /// the program breaks out of an if-then scope.
1015 fn diverge_cleanup_target(&mut self, target_scope: region::Scope, span: Span) -> DropIdx {
1016 let target = self.scopes.scope_index(target_scope, span);
1017 let (uncached_scope, mut cached_drop) = self.scopes.scopes[..=target]
            .iter()
            .enumerate()
            .rev()
            .find_map(|(scope_idx, scope)| {
                scope.cached_unwind_block.map(|cached_block| (scope_idx + 1, cached_block))
            })
            .unwrap_or((0, ROOT_NODE));
        if uncached_scope > target {
            return cached_drop;
        }
1030 let is_generator = self.generator_kind.is_some();
1031 for scope in &mut self.scopes.scopes[uncached_scope..=target] {
1032 for drop in &scope.drops {
1033 if is_generator || drop.kind == DropKind::Value {
                    cached_drop = self.scopes.unwind_drops.add_drop(*drop, cached_drop);
                }
            }
            scope.cached_unwind_block = Some(cached_drop);
        }

        cached_drop
    }
1043 /// Prepares to create a path that performs all required cleanup for a
1044 /// terminator that can unwind at the given basic block.
1046 /// This path terminates in Resume. The path isn't created until after all
1047 /// of the non-unwind paths in this item have been lowered.
1048 pub(crate) fn diverge_from(&mut self, start: BasicBlock) {
        debug_assert!(
            matches!(
                self.cfg.block_data(start).terminator().kind,
1052 TerminatorKind::Assert { .. }
1053 | TerminatorKind::Call { .. }
1054 | TerminatorKind::Drop { .. }
1055 | TerminatorKind::DropAndReplace { .. }
1056 | TerminatorKind::FalseUnwind { .. }
1057 | TerminatorKind::InlineAsm { .. }
1059 "diverge_from called on block with terminator that cannot unwind."
1062 let next_drop = self.diverge_cleanup();
        self.scopes.unwind_drops.add_entry(start, next_drop);
    }
1066 /// Sets up a path that performs all required cleanup for dropping a
1067 /// generator, starting from the given block that ends in
1068 /// [TerminatorKind::Yield].
1070 /// This path terminates in GeneratorDrop.
1071 pub(crate) fn generator_drop_cleanup(&mut self, yield_block: BasicBlock) {
        debug_assert!(
            matches!(
                self.cfg.block_data(yield_block).terminator().kind,
                TerminatorKind::Yield { .. }
            ),
            "generator_drop_cleanup called on block with non-yield terminator."
        );
1079 let (uncached_scope, mut cached_drop) = self
            .scopes
            .scopes
            .iter()
            .enumerate()
            .rev()
            .find_map(|(scope_idx, scope)| {
                scope.cached_generator_drop_block.map(|cached_block| (scope_idx + 1, cached_block))
            })
            .unwrap_or((0, ROOT_NODE));
1090 for scope in &mut self.scopes.scopes[uncached_scope..] {
1091 for drop in &scope.drops {
                cached_drop = self.scopes.generator_drops.add_drop(*drop, cached_drop);
            }
            scope.cached_generator_drop_block = Some(cached_drop);
        }

        self.scopes.generator_drops.add_entry(yield_block, cached_drop);
    }
    /// Utility function for *non*-scope code to build its own drops.
1101 pub(crate) fn build_drop_and_replace(
        &mut self,
        block: BasicBlock,
        span: Span,
        place: Place<'tcx>,
        value: Operand<'tcx>,
    ) -> BlockAnd<()> {
1108 let source_info = self.source_info(span);
1109 let next_target = self.cfg.start_new_block();
        self.cfg.terminate(
            block,
            source_info,
            TerminatorKind::DropAndReplace { place, value, target: next_target, unwind: None },
        );
        self.diverge_from(block);

        next_target.unit()
    }
    /// Creates an `Assert` terminator and returns the success block.
1122 /// If the boolean condition operand is not the expected value,
1123 /// a runtime panic will be caused with the given message.
1124 pub(crate) fn assert(
        &mut self,
        block: BasicBlock,
        cond: Operand<'tcx>,
        expected: bool,
        msg: AssertMessage<'tcx>,
        span: Span,
    ) -> BasicBlock {
1132 let source_info = self.source_info(span);
1133 let success_block = self.cfg.start_new_block();
        self.cfg.terminate(
            block,
            source_info,
            TerminatorKind::Assert { cond, expected, msg, target: success_block, cleanup: None },
        );
        self.diverge_from(block);

        success_block
    }
1145 /// Unschedules any drops in the top scope.
1147 /// This is only needed for `match` arm scopes, because they have one
1148 /// entrance per pattern, but only one exit.
1149 pub(crate) fn clear_top_scope(&mut self, region_scope: region::Scope) {
1150 let top_scope = self.scopes.scopes.last_mut().unwrap();
1152 assert_eq!(top_scope.region_scope, region_scope);
1154 top_scope.drops.clear();
        top_scope.invalidate_cache();
    }
}
1159 /// Builds drops for `pop_scope` and `leave_top_scope`.
1160 fn build_scope_drops<'tcx>(
1161 cfg: &mut CFG<'tcx>,
    unwind_drops: &mut DropTree,
    scope: &Scope,
    mut block: BasicBlock,
1165 mut unwind_to: DropIdx,
    storage_dead_on_unwind: bool,
    arg_count: usize,
) -> BlockAnd<()> {
1169 debug!("build_scope_drops({:?} -> {:?})", block, scope);
    // Build up the drops in evaluation order. The end result will
    // look like:
    //
    // [SDs, drops[n]] --..> [SDs, drop[1]] -> [SDs, drop[0]] -> [[SDs]]
    //        |                                    |                 |
    //        :                                    |                 |
    //        V                                    V                 V
    // [drop[n]] -...-> [drop[1]] ------> [drop[0]] ------> [last_unwind_to]
    //
1180 // The horizontal arrows represent the execution path when the drops return
1181 // successfully. The downwards arrows represent the execution path when the
1182 // drops panic (panicking while unwinding will abort, so there's no need for
1183 // another set of arrows).
1185 // For generators, we unwind from a drop on a local to its StorageDead
1186 // statement. For other functions we don't worry about StorageDead. The
1187 // drops for the unwind path should have already been generated by
    // `diverge_cleanup`.
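
    // Drops were scheduled in declaration order, so iterating `scope.drops`
    // in reverse emits them in stack order: the most recently declared local
    // is dropped first.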
1190 for drop_data in scope.drops.iter().rev() {
1191 let source_info = drop_data.source_info;
1192 let local = drop_data.local;
1194 match drop_data.kind {
1195 DropKind::Value => {
1196 // `unwind_to` should drop the value that we're about to
1197 // schedule. If dropping this value panics, then we continue
1198 // with the *next* value on the unwind path.
1199 debug_assert_eq!(unwind_drops.drops[unwind_to].0.local, drop_data.local);
1200 debug_assert_eq!(unwind_drops.drops[unwind_to].0.kind, drop_data.kind);
1201 unwind_to = unwind_drops.drops[unwind_to].1;
1203 // If the operand has been moved, and we are not on an unwind
1204 // path, then don't generate the drop. (We only take this into
1205 // account for non-unwind paths so as not to disturb the
1206 // caching mechanism.)
                if scope.moved_locals.iter().any(|&o| o == local) {
                    continue;
                }
1211 unwind_drops.add_entry(block, unwind_to);
1213 let next = cfg.start_new_block();
                cfg.terminate(
                    block,
                    source_info,
                    TerminatorKind::Drop { place: local.into(), target: next, unwind: None },
                );
                block = next;
            }
1221 DropKind::Storage => {
1222 if storage_dead_on_unwind {
1223 debug_assert_eq!(unwind_drops.drops[unwind_to].0.local, drop_data.local);
1224 debug_assert_eq!(unwind_drops.drops[unwind_to].0.kind, drop_data.kind);
                    unwind_to = unwind_drops.drops[unwind_to].1;
                }
1227 // Only temps and vars need their storage dead.
1228 assert!(local.index() > arg_count);
                cfg.push(block, Statement { source_info, kind: StatementKind::StorageDead(local) });
            }
        }
    }

    block.unit()
}
1236 impl<'a, 'tcx: 'a> Builder<'a, 'tcx> {
1237 /// Build a drop tree for a breakable scope.
1239 /// If `continue_block` is `Some`, then the tree is for `continue` inside a
1240 /// loop. Otherwise this is for `break` or `return`.
    fn build_exit_tree(
        &mut self,
        mut drops: DropTree,
        else_scope: region::Scope,
        span: Span,
        continue_block: Option<BasicBlock>,
1247 ) -> Option<BlockAnd<()>> {
1248 let mut blocks = IndexVec::from_elem(None, &drops.drops);
1249 blocks[ROOT_NODE] = continue_block;
1251 drops.build_mir::<ExitScopes>(&mut self.cfg, &mut blocks);
1252 let is_generator = self.generator_kind.is_some();
1254 // Link the exit drop tree to unwind drop tree.
1255 if drops.drops.iter().any(|(drop, _)| drop.kind == DropKind::Value) {
1256 let unwind_target = self.diverge_cleanup_target(else_scope, span);
1257 let mut unwind_indices = IndexVec::from_elem_n(unwind_target, 1);
1258 for (drop_idx, drop_data) in drops.drops.iter_enumerated().skip(1) {
1259 match drop_data.0.kind {
                    DropKind::Storage => {
                        if is_generator {
                            let unwind_drop = self
                                .scopes
                                .unwind_drops
                                .add_drop(drop_data.0, unwind_indices[drop_data.1]);
                            unwind_indices.push(unwind_drop);
                        } else {
                            unwind_indices.push(unwind_indices[drop_data.1]);
                        }
                    }
                    DropKind::Value => {
                        let unwind_drop = self
                            .scopes
                            .unwind_drops
                            .add_drop(drop_data.0, unwind_indices[drop_data.1]);
                        self.scopes
                            .unwind_drops
                            .add_entry(blocks[drop_idx].unwrap(), unwind_indices[drop_data.1]);
                        unwind_indices.push(unwind_drop);
                    }
                }
            }
        }
        blocks[ROOT_NODE].map(BasicBlock::unit)
    }
1287 /// Build the unwind and generator drop trees.
1288 pub(crate) fn build_drop_trees(&mut self) {
        if self.generator_kind.is_some() {
            self.build_generator_drop_trees();
        } else {
            Self::build_unwind_tree(
                &mut self.cfg,
                &mut self.scopes.unwind_drops,
                self.fn_span,
                &mut None,
            );
        }
    }
1301 fn build_generator_drop_trees(&mut self) {
1302 // Build the drop tree for dropping the generator while it's suspended.
1303 let drops = &mut self.scopes.generator_drops;
1304 let cfg = &mut self.cfg;
1305 let fn_span = self.fn_span;
1306 let mut blocks = IndexVec::from_elem(None, &drops.drops);
1307 drops.build_mir::<GeneratorDrop>(cfg, &mut blocks);
        if let Some(root_block) = blocks[ROOT_NODE] {
            cfg.terminate(
                root_block,
                SourceInfo::outermost(fn_span),
                TerminatorKind::GeneratorDrop,
            );
        }
1316 // Build the drop tree for unwinding in the normal control flow paths.
1317 let resume_block = &mut None;
1318 let unwind_drops = &mut self.scopes.unwind_drops;
1319 Self::build_unwind_tree(cfg, unwind_drops, fn_span, resume_block);
        // Build the drop tree for unwinding when dropping a suspended
        // generator.
        //
        // This is a different tree to the standard unwind paths here to
1325 // prevent drop elaboration from creating drop flags that would have
1326 // to be captured by the generator. I'm not sure how important this
1327 // optimization is, but it is here.
1328 for (drop_idx, drop_data) in drops.drops.iter_enumerated() {
1329 if let DropKind::Value = drop_data.0.kind {
1330 debug_assert!(drop_data.1 < drops.drops.next_index());
                drops.entry_points.push((drop_data.1, blocks[drop_idx].unwrap()));
            }
        }
        Self::build_unwind_tree(cfg, drops, fn_span, resume_block);
    }
1337 fn build_unwind_tree(
1338 cfg: &mut CFG<'tcx>,
        drops: &mut DropTree,
        fn_span: Span,
1341 resume_block: &mut Option<BasicBlock>,
1343 let mut blocks = IndexVec::from_elem(None, &drops.drops);
1344 blocks[ROOT_NODE] = *resume_block;
1345 drops.build_mir::<Unwind>(cfg, &mut blocks);
1346 if let (None, Some(resume)) = (*resume_block, blocks[ROOT_NODE]) {
1347 cfg.terminate(resume, SourceInfo::outermost(fn_span), TerminatorKind::Resume);
        *resume_block = blocks[ROOT_NODE];
    }
}
1354 // DropTreeBuilder implementations.
struct ExitScopes;

impl<'tcx> DropTreeBuilder<'tcx> for ExitScopes {
1359 fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
        cfg.start_new_block()
    }
1362 fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
        cfg.block_data_mut(from).terminator_mut().kind = TerminatorKind::Goto { target: to };
    }
}
1367 struct GeneratorDrop;
1369 impl<'tcx> DropTreeBuilder<'tcx> for GeneratorDrop {
1370 fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
        cfg.start_new_block()
    }
1373 fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
1374 let term = cfg.block_data_mut(from).terminator_mut();
        if let TerminatorKind::Yield { ref mut drop, .. } = term.kind {
            *drop = Some(to);
        } else {
            span_bug!(
                term.source_info.span,
                "cannot enter generator drop tree from {:?}",
                term.kind,
            )
        }
    }
}
struct Unwind;

impl<'tcx> DropTreeBuilder<'tcx> for Unwind {
1390 fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
        cfg.start_new_cleanup_block()
    }
1393 fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
        let term = cfg.block_data_mut(from).terminator_mut();
1395 match &mut term.kind {
1396 TerminatorKind::Drop { unwind, .. }
1397 | TerminatorKind::DropAndReplace { unwind, .. }
1398 | TerminatorKind::FalseUnwind { unwind, .. }
1399 | TerminatorKind::Call { cleanup: unwind, .. }
1400 | TerminatorKind::Assert { cleanup: unwind, .. }
            | TerminatorKind::InlineAsm { cleanup: unwind, .. } => {
                *unwind = Some(to);
            }
1404 TerminatorKind::Goto { .. }
1405 | TerminatorKind::SwitchInt { .. }
1406 | TerminatorKind::Resume
1407 | TerminatorKind::Abort
1408 | TerminatorKind::Return
1409 | TerminatorKind::Unreachable
1410 | TerminatorKind::Yield { .. }
1411 | TerminatorKind::GeneratorDrop
1412 | TerminatorKind::FalseEdge { .. } => {
                span_bug!(term.source_info.span, "cannot unwind from {:?}", term.kind)
            }
        }
    }
}