Managing the scope stack. The scopes are tied to lexical scopes, so as
we descend the THIR, we push a scope on the stack, build its
contents, and then pop it off. Every scope is named by a
`region::Scope`.
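
For example (an illustrative source shape, not taken from the compiler itself):

    fn f() {
        let a = String::new();     // drop of `a` is scheduled in the outer scope
        {
            let b = String::new(); // drop of `b` is scheduled in the inner scope
        } // inner scope popped: drop(b) emitted here
    } // outer scope popped: drop(a) emitted here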
### SEME Regions

When pushing a new [Scope], we record the current point in the graph (a
basic block); this marks the entry to the scope. We then build the
scope's contents in the control-flow graph. Whenever the scope is exited,
either via a `break` or `return` or just by fallthrough, that marks an exit
from the scope. Each lexical scope thus corresponds to a single-entry,
multiple-exit (SEME) region in the control-flow graph.
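
For instance, in this illustrative function the loop body's scope has a
single entry but three exits:

    fn g(v: &[i32]) -> i32 {
        for &x in v {
            if x < 0 { return x; } // exit 1: `return`
            if x == 0 { break; }   // exit 2: `break`
        }                          // exit 3: fallthrough
        0
    }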
For now, we record the `region::Scope` of each SEME region for later reference
(see caveat in next paragraph). This is because destruction scopes are tied to
them. This may change in the future so that MIR lowering determines its own
destruction scopes.
### Not so SEME Regions
In the course of building matches, it sometimes happens that certain code
(namely guards) gets executed multiple times. This means that the lexical
scope may in fact correspond to multiple, disjoint SEME regions. So in fact our
mapping is from one scope to a vector of SEME regions. Since the SEME regions
are disjoint, the mapping is still one-to-one for the set of basic blocks they
contain.
Also in matches, the scopes assigned to arms are not always even SEME regions!
Each arm has a single region with one entry for each pattern. We manually
manipulate the scheduled drops in this scope to avoid dropping things multiple
times.
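
For example (illustrative; `expensive_check` is a stand-in for any guard):

    match x {
        Some(0) | Some(1) if expensive_check() => { /* ... */ }
        _ => {}
    }

The guard may run once per sub-pattern, so the arm's code can be reached
through more than one entry.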
### Drops

The primary purpose for scopes is to insert drops: while building
the contents, we also accumulate places that need to be dropped upon
exit from each scope. This is done by calling `schedule_drop`. Once a
drop is scheduled, whenever we branch out we will insert drops of all
those places onto the outgoing edge. Note that we don't know the full
set of scheduled drops up front, and so whenever we exit from the
scope we only drop the values scheduled thus far. For example, consider
the scope S corresponding to this loop:
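
    loop {
        let x = ..;
        if cond { break; }
        let y = ..;
    }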
When processing the `let x`, we will add one drop to the scope for
`x`. The break will then insert a drop for `x`. When we process `let
y`, we will add another drop (in fact, to a subscope, but let's ignore
that for now); any later drops would also drop `y`.
### Early exit

There are numerous "normal" ways to early exit a scope: `break`,
`continue`, `return` (panics are handled separately). Whenever an
early exit occurs, the method `break_scope` is called. It is given the
current point in execution where the early exit occurs, as well as the
scope you want to branch to (note that all early exits are to some
other enclosing scope). `break_scope` will record the set of drops currently
scheduled in a [DropTree]. Later, before `in_breakable_scope` exits, the drops
will be added to the CFG.
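
For example (illustrative; `done` is a stand-in for any condition):

    'outer: loop {
        let a = String::new();
        loop {
            let b = String::new();
            if done() {
                // `break_scope` records drop(b), drop(a) for this exit.
                break 'outer;
            }
        }
    }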
Panics are handled in a similar fashion, except that the drops are added to the
MIR once the rest of the function has finished being lowered. If a terminator
can panic, call `diverge_from(block)` with the block containing the terminator.
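
For example, blocks ending in these (illustrative) statements get an entry in
the unwind drop tree via `diverge_from`:

    let s = String::from("x"); // `Call` terminator: the call may unwind
    let i = s.len() - 2;       // `Assert` terminator: the overflow check may panic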
### Breakable scopes

In addition to the normal scope stack, we track a loop scope stack
that contains only loops and breakable blocks. It tracks where a `break`,
`continue` or `return` should go to.
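
For example (illustrative):

    'outer: for i in 0..10 {
        for j in 0..10 {
            if j == i { continue 'outer; }  // resolved via the loop scope stack
            if i + j > 12 { break 'outer; } // likewise
        }
    }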
use crate::build::matches::{ArmHasGuard, Candidate};
use crate::build::{BlockAnd, BlockAndExtension, BlockFrame, Builder, CFG};
use crate::thir::{Arm, Expr, ExprRef, LintLevel};
use rustc_data_structures::fx::{FxHashMap, FxHashSet};
use rustc_hir as hir;
use rustc_index::vec::IndexVec;
use rustc_middle::middle::region;
use rustc_middle::mir::*;
use rustc_span::{Span, DUMMY_SP};
#[derive(Debug)]
pub struct Scopes<'tcx> {
    scopes: Vec<Scope>,

    /// The current set of breakable scopes. See module comment for more details.
    breakable_scopes: Vec<BreakableScope<'tcx>>,

    /// Drops that need to be done on unwind paths. See the comment on
    /// [DropTree] for more details.
    unwind_drops: DropTree,

    /// Drops that need to be done on paths to the `GeneratorDrop` terminator.
    generator_drops: DropTree,
}
#[derive(Debug)]
struct Scope {
    /// The source scope this scope was created in.
    source_scope: SourceScope,

    /// The region scope of this scope within source code.
    region_scope: region::Scope,

    /// The span of that `region_scope`.
    region_scope_span: Span,

    /// Set of places to drop when exiting this scope. This starts
    /// out empty but grows as variables are declared during the
    /// building process. This is a stack, so we always drop from the
    /// end of the vector (top of the stack) first.
    drops: Vec<DropData>,

    /// The drop index that will drop everything in and below this scope on an
    /// unwind path.
    cached_unwind_block: Option<DropIdx>,

    /// The drop index that will drop everything in and below this scope on a
    /// generator drop path.
    cached_generator_drop_block: Option<DropIdx>,
}
#[derive(Clone, Copy, Debug)]
struct DropData {
    /// The `Span` where drop obligation was incurred (typically where place was
    /// declared).
    source_info: SourceInfo,

    /// Local to drop.
    local: Local,

    /// Whether this is a value Drop or a StorageDead.
    kind: DropKind,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub(crate) enum DropKind {
    Value,
    Storage,
}
#[derive(Debug)]
struct BreakableScope<'tcx> {
    /// Region scope of the loop
    region_scope: region::Scope,
    /// The destination of the loop/block expression itself (i.e., where to put
    /// the result of a `break` or `return` expression)
    break_destination: Place<'tcx>,
    /// The scope that the destination should have its drop scheduled in.
    destination_scope: Option<region::Scope>,
    /// Drops that happen on the `break`/`return` path.
    break_drops: DropTree,
    /// Drops that happen on the `continue` path.
    continue_drops: Option<DropTree>,
}
/// The target of an expression that breaks out of a scope
#[derive(Clone, Copy, Debug)]
crate enum BreakableTarget {
    Continue(region::Scope),
    Break(region::Scope),
    Return,
}
rustc_index::newtype_index! {
    struct DropIdx { .. }
}

const ROOT_NODE: DropIdx = DropIdx::from_u32(0);
/// A tree of drops that we have deferred lowering. It's used for:
///
/// * Drops on unwind paths
/// * Drops on generator drop paths (when a suspended generator is dropped)
/// * Drops on return and loop exit paths
///
/// Once no more nodes can be added to the tree, we lower it to MIR in one go
/// in `build_mir`.
#[derive(Debug)]
struct DropTree {
    /// Drops in the tree.
    drops: IndexVec<DropIdx, (DropData, DropIdx)>,
    /// Map for finding the inverse of the `next_drop` relation:
    ///
    /// `previous_drops[(drops[i].1, drops[i].0.local, drops[i].0.kind)] == i`
    previous_drops: FxHashMap<(DropIdx, Local, DropKind), DropIdx>,
    /// Edges into the `DropTree` that need to be added once it's lowered.
    entry_points: Vec<(DropIdx, BasicBlock)>,
}
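
// An illustrative sketch of the sharing this enables: two exits that both
// need to drop `b` and then `a` on the way out get a single shared chain,
// because `add_drop` consults `previous_drops` before allocating a node:
//
//     ROOT <- drop(a) <- drop(b) <-- entry points from both exiting blocks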
impl Scope {
    /// Whether there's anything to do for the cleanup path, that is,
    /// when unwinding through this scope. This includes destructors,
    /// but not StorageDead statements, which don't get emitted at all
    /// for unwinding, for several reasons:
    ///  * clang doesn't emit llvm.lifetime.end for C++ unwinding
    ///  * LLVM's memory dependency analysis can't handle it atm
    ///  * polluting the cleanup MIR with StorageDead creates
    ///    landing pads even though there are no actual destructors
    ///  * freeing up stack space has no effect during unwinding
    /// Note that for generators we do emit StorageDeads, for the
    /// use of optimizations in the MIR generator transform.
    fn needs_cleanup(&self) -> bool {
        self.drops.iter().any(|drop| match drop.kind {
            DropKind::Value => true,
            DropKind::Storage => false,
        })
    }

    fn invalidate_cache(&mut self) {
        self.cached_unwind_block = None;
        self.cached_generator_drop_block = None;
    }
}
/// A trait that determines how [DropTree] creates its blocks and
/// links to any entry nodes.
trait DropTreeBuilder<'tcx> {
    /// Create a new block for the tree. This should call either
    /// `cfg.start_new_block()` or `cfg.start_new_cleanup_block()`.
    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock;

    /// Links a block outside the drop tree, `from`, to the block `to` inside
    /// the drop tree.
    fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock);
}
impl DropTree {
    fn new() -> Self {
        // The root node of the tree doesn't represent a drop, but instead
        // represents the block in the tree that should be jumped to once all
        // of the required drops have been performed.
        let fake_source_info = SourceInfo::outermost(DUMMY_SP);
        let fake_data =
            DropData { source_info: fake_source_info, local: Local::MAX, kind: DropKind::Storage };
        let drop_idx = DropIdx::MAX;
        let drops = IndexVec::from_elem_n((fake_data, drop_idx), 1);
        Self { drops, entry_points: Vec::new(), previous_drops: FxHashMap::default() }
    }
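
    /// Adds a drop node for `drop` to the tree, to be followed by the drop
    /// `next` (its parent in the tree). If an identical (next, local, kind)
    /// node already exists, it is reused.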
    fn add_drop(&mut self, drop: DropData, next: DropIdx) -> DropIdx {
        let drops = &mut self.drops;
        *self
            .previous_drops
            .entry((next, drop.local, drop.kind))
            .or_insert_with(|| drops.push((drop, next)))
    }
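
    /// Registers the block `from`, outside the tree, as an entry point that
    /// will branch to the drop `to` once the tree is lowered to MIR.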
    fn add_entry(&mut self, from: BasicBlock, to: DropIdx) {
        debug_assert!(to < self.drops.next_index());
        self.entry_points.push((to, from));
    }
    /// Builds the MIR for a given drop tree.
    ///
    /// `blocks` should have the same length as `self.drops`, and may have its
    /// first value set to some already existing block.
    fn build_mir<'tcx, T: DropTreeBuilder<'tcx>>(
        &mut self,
        cfg: &mut CFG<'tcx>,
        blocks: &mut IndexVec<DropIdx, Option<BasicBlock>>,
    ) {
        debug!("DropTree::build_mir(drops = {:#?})", self);
        assert_eq!(blocks.len(), self.drops.len());

        self.assign_blocks::<T>(cfg, blocks);
        self.link_blocks(cfg, blocks)
    }
    /// Assign blocks for all of the drops in the drop tree that need them.
    fn assign_blocks<'tcx, T: DropTreeBuilder<'tcx>>(
        &mut self,
        cfg: &mut CFG<'tcx>,
        blocks: &mut IndexVec<DropIdx, Option<BasicBlock>>,
    ) {
        // StorageDead statements can share blocks with each other and also with
        // a Drop terminator. We iterate through the drops to find which drops
        // need their own block.
        #[derive(Clone, Copy)]
        enum Block {
            // This drop is unreachable
            None,
            // This drop is only reachable through the `StorageDead` with the
            // given index.
            Shares(DropIdx),
            // This drop has more than one way of being reached, or it is
            // branched to from outside the tree, or its predecessor is a
            // `Value` drop.
            Own,
        }

        let mut needs_block = IndexVec::from_elem(Block::None, &self.drops);
        if blocks[ROOT_NODE].is_some() {
            // In some cases (such as drops for `continue`) the root node
            // already has a block. In this case, make sure that we don't
            // override it.
            needs_block[ROOT_NODE] = Block::Own;
        }

        // Sort so that we only need to check the last value.
        let entry_points = &mut self.entry_points;
        entry_points.sort();

        for (drop_idx, drop_data) in self.drops.iter_enumerated().rev() {
            if entry_points.last().map_or(false, |entry_point| entry_point.0 == drop_idx) {
                let block = *blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
                needs_block[drop_idx] = Block::Own;
                while entry_points.last().map_or(false, |entry_point| entry_point.0 == drop_idx) {
                    let entry_block = entry_points.pop().unwrap().1;
                    T::add_entry(cfg, entry_block, block);
                }
            }
            match needs_block[drop_idx] {
                Block::None => continue,
                Block::Own => {
                    blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
                }
                Block::Shares(pred) => {
                    blocks[drop_idx] = blocks[pred];
                }
            }
            if let DropKind::Value = drop_data.0.kind {
                needs_block[drop_data.1] = Block::Own;
            } else if drop_idx != ROOT_NODE {
                match &mut needs_block[drop_data.1] {
                    pred @ Block::None => *pred = Block::Shares(drop_idx),
                    pred @ Block::Shares(_) => *pred = Block::Own,
                    Block::Own => (),
                }
            }
        }

        debug!("assign_blocks: blocks = {:#?}", blocks);
        assert!(entry_points.is_empty());
    }
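
    /// Adds the drops to the blocks assigned by `assign_blocks`: `Value`
    /// drops become `Drop` terminators and `Storage` drops become
    /// `StorageDead` statements followed, if needed, by a `Goto` to the
    /// next drop's block.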
    fn link_blocks<'tcx>(
        &self,
        cfg: &mut CFG<'tcx>,
        blocks: &IndexVec<DropIdx, Option<BasicBlock>>,
    ) {
        for (drop_idx, drop_data) in self.drops.iter_enumerated().rev() {
            let block = if let Some(block) = blocks[drop_idx] {
                block
            } else {
                continue;
            };
            match drop_data.0.kind {
                DropKind::Value => {
                    let terminator = TerminatorKind::Drop {
                        target: blocks[drop_data.1].unwrap(),
                        // The caller will handle this if needed.
                        unwind: None,
                        place: drop_data.0.local.into(),
                    };
                    cfg.terminate(block, drop_data.0.source_info, terminator);
                }
                // Root nodes don't correspond to a drop.
                DropKind::Storage if drop_idx == ROOT_NODE => {}
                DropKind::Storage => {
                    let stmt = Statement {
                        source_info: drop_data.0.source_info,
                        kind: StatementKind::StorageDead(drop_data.0.local),
                    };
                    cfg.push(block, stmt);
                    let target = blocks[drop_data.1].unwrap();
                    if target != block {
                        // Diagnostics don't use this `Span` but debuginfo
                        // might. Since we don't want breakpoints to be placed
                        // here, especially when this is on an unwind path, we
                        // use `DUMMY_SP`.
                        let source_info = SourceInfo { span: DUMMY_SP, ..drop_data.0.source_info };
                        let terminator = TerminatorKind::Goto { target };
                        cfg.terminate(block, source_info, terminator);
                    }
                }
            }
        }
    }
}
impl<'tcx> Scopes<'tcx> {
    pub(crate) fn new() -> Self {
        Self {
            scopes: Vec::new(),
            breakable_scopes: Vec::new(),
            unwind_drops: DropTree::new(),
            generator_drops: DropTree::new(),
        }
    }
    fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo), vis_scope: SourceScope) {
        debug!("push_scope({:?})", region_scope);
        self.scopes.push(Scope {
            source_scope: vis_scope,
            region_scope: region_scope.0,
            region_scope_span: region_scope.1.span,
            drops: vec![],
            cached_unwind_block: None,
            cached_generator_drop_block: None,
        });
    }
    fn pop_scope(&mut self, region_scope: (region::Scope, SourceInfo)) -> Scope {
        let scope = self.scopes.pop().unwrap();
        assert_eq!(scope.region_scope, region_scope.0);
        scope
    }
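
    /// Returns the index on the scope stack of the innermost scope with the
    /// given `region_scope`, or ICEs via `span_bug!` if it is not on the stack.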
    fn scope_index(&self, region_scope: region::Scope, span: Span) -> usize {
        self.scopes
            .iter()
            .rposition(|scope| scope.region_scope == region_scope)
            .unwrap_or_else(|| span_bug!(span, "region_scope {:?} does not enclose", region_scope))
    }
    /// Returns the topmost active scope, which is known to be alive until
    /// the next scope expression.
    fn topmost(&self) -> region::Scope {
        self.scopes.last().expect("topmost_scope: no scopes present").region_scope
    }
}
impl<'a, 'tcx> Builder<'a, 'tcx> {
    // Adding and removing scopes
    // ==========================

    //  Start a breakable scope, which tracks where `continue`, `break` and
    //  `return` should branch to.
    crate fn in_breakable_scope<F>(
        &mut self,
        loop_block: Option<BasicBlock>,
        break_destination: Place<'tcx>,
        destination_scope: Option<region::Scope>,
        span: Span,
        f: F,
    ) -> BlockAnd<()>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> Option<BlockAnd<()>>,
    {
        let region_scope = self.scopes.topmost();
        let scope = BreakableScope {
            region_scope,
            break_destination,
            destination_scope,
            break_drops: DropTree::new(),
            continue_drops: loop_block.map(|_| DropTree::new()),
        };
        let continue_block = loop_block.map(|block| (block, self.diverge_cleanup()));
        self.scopes.breakable_scopes.push(scope);
        let normal_exit_block = f(self);
        let breakable_scope = self.scopes.breakable_scopes.pop().unwrap();
        assert!(breakable_scope.region_scope == region_scope);
        if let Some(drops) = breakable_scope.continue_drops {
            self.build_exit_tree(drops, continue_block);
        }
        let break_block = self.build_exit_tree(breakable_scope.break_drops, None);
        match (normal_exit_block, break_block) {
            (Some(block), None) | (None, Some(block)) => block,
            (None, None) => self.cfg.start_new_block().unit(),
            (Some(normal_block), Some(exit_block)) => {
                let target = self.cfg.start_new_block();
                let source_info = self.source_info(span);
                self.cfg.terminate(
                    unpack!(normal_block),
                    source_info,
                    TerminatorKind::Goto { target },
                );
                self.cfg.terminate(
                    unpack!(exit_block),
                    source_info,
                    TerminatorKind::Goto { target },
                );
                target.unit()
            }
        }
    }
    crate fn in_opt_scope<F, R>(
        &mut self,
        opt_scope: Option<(region::Scope, SourceInfo)>,
        f: F,
    ) -> BlockAnd<R>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
    {
        debug!("in_opt_scope(opt_scope={:?})", opt_scope);
        if let Some(region_scope) = opt_scope {
            self.push_scope(region_scope);
        }
        let mut block;
        let rv = unpack!(block = f(self));
        if let Some(region_scope) = opt_scope {
            unpack!(block = self.pop_scope(region_scope, block));
        }
        debug!("in_scope: exiting opt_scope={:?} block={:?}", opt_scope, block);
        block.and(rv)
    }
    /// Convenience wrapper that pushes a scope and then executes `f`
    /// to build its contents, popping the scope afterwards.
    crate fn in_scope<F, R>(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
        lint_level: LintLevel,
        f: F,
    ) -> BlockAnd<R>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
    {
        debug!("in_scope(region_scope={:?})", region_scope);
        let source_scope = self.source_scope;
        let tcx = self.hir.tcx();
        if let LintLevel::Explicit(current_hir_id) = lint_level {
            // Use `maybe_lint_level_root_bounded` with `root_lint_level` as a bound
            // to avoid adding Hir dependencies on our parents.
            // We estimate the true lint roots here to avoid creating a lot of source scopes.

            let parent_root = tcx.maybe_lint_level_root_bounded(
                self.source_scopes[source_scope].local_data.as_ref().assert_crate_local().lint_root,
                self.hir.root_lint_level,
            );
            let current_root =
                tcx.maybe_lint_level_root_bounded(current_hir_id, self.hir.root_lint_level);

            if parent_root != current_root {
                self.source_scope = self.new_source_scope(
                    region_scope.1.span,
                    LintLevel::Explicit(current_root),
                    None,
                );
            }
        }
        self.push_scope(region_scope);
        let mut block;
        let rv = unpack!(block = f(self));
        unpack!(block = self.pop_scope(region_scope, block));
        self.source_scope = source_scope;
        debug!("in_scope: exiting region_scope={:?} block={:?}", region_scope, block);
        block.and(rv)
    }
    /// Push a scope onto the stack. You can then build code in this
    /// scope and call `pop_scope` afterwards. Note that these two
    /// calls must be paired; using `in_scope` as a convenience
    /// wrapper may be preferable.
    crate fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo)) {
        self.scopes.push_scope(region_scope, self.source_scope);
    }
    /// Pops a scope, which should have region scope `region_scope`,
    /// adding any drops onto the end of `block` that are needed.
    /// This must match 1-to-1 with `push_scope`.
    crate fn pop_scope(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
        mut block: BasicBlock,
    ) -> BlockAnd<()> {
        debug!("pop_scope({:?}, {:?})", region_scope, block);

        block = self.leave_top_scope(block);

        self.scopes.pop_scope(region_scope);

        block.unit()
    }
    /// Sets up the drops for breaking from `block` to `target`.
    crate fn break_scope(
        &mut self,
        mut block: BasicBlock,
        value: Option<ExprRef<'tcx>>,
        target: BreakableTarget,
        source_info: SourceInfo,
    ) -> BlockAnd<()> {
        let span = source_info.span;

        let get_scope_index = |scope: region::Scope| {
            // find the loop-scope by its `region::Scope`.
            self.scopes
                .breakable_scopes
                .iter()
                .rposition(|breakable_scope| breakable_scope.region_scope == scope)
                .unwrap_or_else(|| span_bug!(span, "no enclosing breakable scope found"))
        };
        let (break_index, destination, dest_scope) = match target {
            BreakableTarget::Return => {
                let scope = &self.scopes.breakable_scopes[0];
                if scope.break_destination != Place::return_place() {
                    span_bug!(span, "`return` in item with no return scope");
                }
                (0, Some(scope.break_destination), scope.destination_scope)
            }
            BreakableTarget::Break(scope) => {
                let break_index = get_scope_index(scope);
                let scope = &self.scopes.breakable_scopes[break_index];
                (break_index, Some(scope.break_destination), scope.destination_scope)
            }
            BreakableTarget::Continue(scope) => {
                let break_index = get_scope_index(scope);
                (break_index, None, None)
            }
        };

        if let Some(destination) = destination {
            if let Some(value) = value {
                debug!("stmt_expr Break val block_context.push(SubExpr)");
                self.block_context.push(BlockFrame::SubExpr);
                unpack!(block = self.into(destination, dest_scope, block, value));
                dest_scope
                    .map(|scope| self.unschedule_drop(scope, destination.as_local().unwrap()));
                self.block_context.pop();
            } else {
                self.cfg.push_assign_unit(block, source_info, destination, self.hir.tcx())
            }
        } else {
            assert!(value.is_none(), "`return` and `break` should have a destination");
        }

        let region_scope = self.scopes.breakable_scopes[break_index].region_scope;
        let scope_index = self.scopes.scope_index(region_scope, span);
        let drops = if destination.is_some() {
            &mut self.scopes.breakable_scopes[break_index].break_drops
        } else {
            self.scopes.breakable_scopes[break_index].continue_drops.as_mut().unwrap()
        };
        let mut drop_idx = ROOT_NODE;
        for scope in &self.scopes.scopes[scope_index + 1..] {
            for drop in &scope.drops {
                drop_idx = drops.add_drop(*drop, drop_idx);
            }
        }
        drops.add_entry(block, drop_idx);

        // `build_drop_tree` doesn't have access to our source_info, so we
        // create a dummy terminator now. `TerminatorKind::Resume` is used
        // because MIR type checking will panic if it hasn't been overwritten.
        self.cfg.terminate(block, source_info, TerminatorKind::Resume);

        self.cfg.start_new_block().unit()
    }
    crate fn exit_top_scope(
        &mut self,
        mut block: BasicBlock,
        target: BasicBlock,
        source_info: SourceInfo,
    ) {
        block = self.leave_top_scope(block);
        self.cfg.terminate(block, source_info, TerminatorKind::Goto { target });
    }
    fn leave_top_scope(&mut self, block: BasicBlock) -> BasicBlock {
        // If we are emitting a `drop` statement, we need to have the cached
        // diverge cleanup pads ready in case that drop panics.
        let needs_cleanup = self.scopes.scopes.last().map_or(false, |scope| scope.needs_cleanup());
        let is_generator = self.generator_kind.is_some();
        let unwind_to = if needs_cleanup { self.diverge_cleanup() } else { DropIdx::MAX };

        let scope = self.scopes.scopes.last().expect("leave_top_scope called with no scopes");
        unpack!(build_scope_drops(
            &mut self.cfg,
            &mut self.scopes.unwind_drops,
            scope,
            block,
            unwind_to,
            is_generator && needs_cleanup,
            self.arg_count,
        ))
    }
    /// Creates a new source scope, nested in the current one.
    crate fn new_source_scope(
        &mut self,
        span: Span,
        lint_level: LintLevel,
        safety: Option<Safety>,
    ) -> SourceScope {
        let parent = self.source_scope;
        debug!(
            "new_source_scope({:?}, {:?}, {:?}) - parent({:?})={:?}",
            span,
            lint_level,
            safety,
            parent,
            self.source_scopes.get(parent)
        );
        let scope_local_data = SourceScopeLocalData {
            lint_root: if let LintLevel::Explicit(lint_root) = lint_level {
                lint_root
            } else {
                self.source_scopes[parent].local_data.as_ref().assert_crate_local().lint_root
            },
            safety: safety.unwrap_or_else(|| {
                self.source_scopes[parent].local_data.as_ref().assert_crate_local().safety
            }),
        };
        self.source_scopes.push(SourceScopeData {
            span,
            parent_scope: Some(parent),
            inlined: None,
            inlined_parent_scope: None,
            local_data: ClearCrossCrate::Set(scope_local_data),
        })
    }

    /// Given a span and the current source scope, make a SourceInfo.
    crate fn source_info(&self, span: Span) -> SourceInfo {
        SourceInfo { span, scope: self.source_scope }
    }
    /// Returns the scope that we should use as the lifetime of an
    /// operand. Basically, an operand must live until it is consumed.
    /// This is similar to, but not quite the same as, the temporary
    /// scope (which can be larger or smaller).
    ///
    /// Consider:
    ///
    ///     let x = foo(bar(X, Y));
    ///
    /// We wish to pop the storage for X and Y after `bar()` is
    /// called, not after the whole `let` is completed.
    ///
    /// As another example, if the second argument diverges:
    ///
    ///     foo(Box::new(2), panic!())
    ///
    /// We would allocate the box but then free it on the unwinding
    /// path; we would also emit a free on the 'success' path from
    /// panic, but that will turn out to be removed as dead-code.
    ///
    /// When building statics/constants, returns `None` since
    /// intermediate values do not have to be dropped in that case.
    crate fn local_scope(&self) -> Option<region::Scope> {
        match self.hir.body_owner_kind {
            hir::BodyOwnerKind::Const | hir::BodyOwnerKind::Static(_) =>
            // No need to free storage in this context.
            {
                None
            }
            hir::BodyOwnerKind::Closure | hir::BodyOwnerKind::Fn => Some(self.scopes.topmost()),
        }
    }
    crate fn schedule_drop_storage_and_value(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        local: Local,
    ) {
        self.schedule_drop(span, region_scope, local, DropKind::Storage);
        self.schedule_drop(span, region_scope, local, DropKind::Value);
    }
    /// Indicates that `place` should be dropped on exit from `region_scope`.
    ///
    /// When called with `DropKind::Storage`, `place` shouldn't be the return
    /// place, or a function parameter.
    crate fn schedule_drop(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        local: Local,
        drop_kind: DropKind,
    ) {
        let needs_drop = match drop_kind {
            DropKind::Value => {
                if !self.hir.needs_drop(self.local_decls[local].ty) {
                    return;
                }
                true
            }
            DropKind::Storage => {
                if local.index() <= self.arg_count {
                    span_bug!(
                        span,
                        "`schedule_drop` called with local {:?} and arg_count {}",
                        local,
                        self.arg_count,
                    )
                }
                false
            }
        };

        // When building drops, we try to cache chains of drops to reduce the
        // number of `DropTree::add_drop` calls. This, however, means that
        // whenever we add a drop into a scope which already had some entries
        // in the drop tree built (and thus, cached) for it, we must invalidate
        // all caches which might branch into the scope which had a drop just
        // added to it. This is necessary, because otherwise some other code
        // might use the cache to branch into an already built chain of drops,
        // essentially ignoring the newly added drop.
        //
        // For example consider there's two scopes with a drop in each. These
        // are built and thus the caches are filled:
        //
        // +--------------------------------------------------------+
        // | +---------------------------------+                    |
        // | | +--------+     +-------------+  |  +---------------+ |
        // | | | return | <-+ | drop(outer) | <-+ | drop(middle)  | |
        // | | +--------+     +-------------+  |  +---------------+ |
        // | +------------|outer_scope cache|--+                    |
        // +------------------------------|middle_scope cache|------+
        //
        // Now, a new, inner-most scope is added along with a new drop into
        // both inner-most and outer-most scopes:
        //
        // +------------------------------------------------------------+
        // | +----------------------------------+                       |
        // | | +--------+      +-------------+  |   +---------------+   | +-------------+
        // | | | return | <+   | drop(new)   | <-+  | drop(middle)  | <--+| drop(inner) |
        // | | +--------+  |   | drop(outer) |      +---------------+   | +-------------+
        // | |             +-+ +-------------+                          |
        // | +---|invalid outer_scope cache|----+                       |
        // +---------------------|invalid middle_scope cache|-----------+
        //
        // If, when adding `drop(new)` we do not invalidate the cached blocks for both
        // outer_scope and middle_scope, then, when building drops for the inner (right-most)
        // scope, the old, cached blocks, without `drop(new)`, will get used, producing the
        // wrong results.
        //
        // Note that this code iterates scopes from the inner-most to the outer-most,
        // invalidating caches of each scope visited. This way the bare minimum of
        // caches gets invalidated, i.e., if a new drop is added into the middle scope,
        // the cache of the outer scope stays intact.
        //
        // Since we only cache drops for the unwind path and the generator drop
        // path, we only need to invalidate the cache for drops that happen on
        // the unwind or generator drop paths. This means that for
        // non-generators we don't need to invalidate caches for `DropKind::Storage`.
        let invalidate_caches = needs_drop || self.generator_kind.is_some();
        for scope in self.scopes.scopes.iter_mut().rev() {
            if invalidate_caches {
                scope.invalidate_cache();
            }

            if scope.region_scope == region_scope {
                let region_scope_span =
                    region_scope.span(self.hir.tcx(), &self.hir.region_scope_tree);
                // Attribute scope exit drops to scope's closing brace.
                let scope_end = self.hir.tcx().sess.source_map().end_point(region_scope_span);

                scope.drops.push(DropData {
                    source_info: SourceInfo { span: scope_end, scope: scope.source_scope },
                    local,
                    kind: drop_kind,
                });

                return;
            }
        }

        span_bug!(span, "region scope {:?} not in scope to drop {:?}", region_scope, local);
    }
    /// Unschedule a drop. Used for `break`, `return` and `match` expressions,
    /// where `record_operands_moved` is not powerful enough.
    ///
    /// The given local is expected to have a value drop scheduled in the given
    /// scope and for that drop to be the most recent thing scheduled in that
    /// scope.
    fn unschedule_drop(&mut self, region_scope: region::Scope, local: Local) {
        if !self.hir.needs_drop(self.local_decls[local].ty) {
            return;
        }
        for scope in self.scopes.scopes.iter_mut().rev() {
            scope.invalidate_cache();

            if scope.region_scope == region_scope {
                let drop = scope.drops.pop();

                match drop {
                    Some(DropData { local: removed_local, kind: DropKind::Value, .. })
                        if removed_local == local =>
                    {
                        return;
                    }
                    _ => bug!(
                        "found wrong drop, expected value drop of {:?}, found {:?}",
                        local,
                        drop,
                    ),
                }
            }
        }

        bug!("region scope {:?} not in scope to unschedule drop of {:?}", region_scope, local);
    }
    /// Indicates that the "local operand" stored in `local` is
    /// *moved* at some point during execution (see `local_scope` for
    /// more information about what a "local operand" is -- in short,
    /// it's an intermediate operand created as part of preparing some
    /// MIR instruction). We use this information to suppress
    /// redundant drops. This results in less MIR, but also avoids spurious
    /// borrow check errors (c.f. #64391).
    ///
    /// Example: when compiling the call to `foo` here:
    ///
    ///     foo(bar(), ...)
    ///
    /// we would evaluate `bar()` to an operand `_X`. We would also
    /// schedule `_X` to be dropped when the expression scope for
    /// `foo(bar())` is exited. This is relevant, for example, if the
    /// later arguments should unwind (it would ensure that `_X` gets
    /// dropped). However, if no unwind occurs, then `_X` will be
    /// unconditionally consumed by the `call`:
    ///
    ///     bb {
    ///         ...
    ///         _R = CALL(foo, _X, ...)
    ///     }
    ///
    /// However, `_X` is still registered to be dropped, and so if we
    /// do nothing else, we would generate a `DROP(_X)` that occurs
    /// after the call. This will later be optimized out by the
    /// drop-elaboration code, but in the meantime it can lead to
    /// spurious borrow-check errors -- the problem, ironically, is
    /// not the `DROP(_X)` itself, but the (spurious) unwind pathways
    /// that it creates. See #64391 for an example.
    crate fn record_operands_moved(&mut self, operands: &[Operand<'tcx>]) {
        let scope = match self.local_scope() {
            None => {
                // if there is no local scope, operands won't be dropped anyway
                return;
            }

            Some(local_scope) => {
                let top_scope = self.scopes.scopes.last_mut().unwrap();
                assert!(
                    top_scope.region_scope == local_scope,
                    "local scope ({:?}) is not the topmost scope!",
                    local_scope
                );

                top_scope
            }
        };

        // look for moves of a local variable, like `MOVE(_X)`
        let locals_moved = operands
            .iter()
            .filter_map(|operand| match operand {
                Operand::Copy(_) | Operand::Constant(_) => None,
                Operand::Move(place) => place.as_local(),
            })
            .collect::<FxHashSet<_>>();

        // Remove the drops for the moved operands.
        scope
            .drops
            .retain(|drop| drop.kind == DropKind::Storage || !locals_moved.contains(&drop.local));
        scope.invalidate_cache();
    }
    /// Branch based on a boolean condition.
    ///
    /// This is a special case because the temporary for the condition needs to
    /// be dropped on both the true and the false arm.
    crate fn test_bool(
        &mut self,
        mut block: BasicBlock,
        condition: Expr<'tcx>,
        source_info: SourceInfo,
    ) -> (BasicBlock, BasicBlock) {
        let cond = unpack!(block = self.as_local_operand(block, condition));
        let true_block = self.cfg.start_new_block();
        let false_block = self.cfg.start_new_block();
        let term = TerminatorKind::if_(self.hir.tcx(), cond.clone(), true_block, false_block);
        self.cfg.terminate(block, source_info, term);

        match cond {
            // Don't try to drop a constant
            Operand::Constant(_) => (),
            // For constants and statics, we don't generate StorageLive for this
            // temporary, so don't try to generate StorageDead for it either.
            _ if self.local_scope().is_none() => (),
            Operand::Copy(place) | Operand::Move(place) => {
                if let Some(cond_temp) = place.as_local() {
                    // Manually drop the condition on both branches.
                    let top_scope = self.scopes.scopes.last_mut().unwrap();
                    let top_drop_data = top_scope.drops.pop().unwrap();
                    if self.generator_kind.is_some() {
                        top_scope.invalidate_cache();
                    }

                    match top_drop_data.kind {
                        DropKind::Value { .. } => {
                            bug!("Drop scheduled on top of condition variable")
                        }
                        DropKind::Storage => {
                            let source_info = top_drop_data.source_info;
                            let local = top_drop_data.local;
                            assert_eq!(local, cond_temp, "Drop scheduled on top of condition");
                            self.cfg.push(
                                true_block,
                                Statement { source_info, kind: StatementKind::StorageDead(local) },
                            );
                            self.cfg.push(
                                false_block,
                                Statement { source_info, kind: StatementKind::StorageDead(local) },
                            );
                        }
                    }
                } else {
                    bug!("Expected as_local_operand to produce a temporary");
                }
            }
        }

        (true_block, false_block)
    }
    /// Returns the [DropIdx] for the innermost drop if the function unwound at
    /// this point. The `DropIdx` will be created if it doesn't already exist.
    fn diverge_cleanup(&mut self) -> DropIdx {
        let is_generator = self.generator_kind.is_some();
        let (uncached_scope, mut cached_drop) = self
            .scopes
            .scopes
            .iter()
            .enumerate()
            .rev()
            .find_map(|(scope_idx, scope)| {
                scope.cached_unwind_block.map(|cached_block| (scope_idx + 1, cached_block))
            })
            .unwrap_or((0, ROOT_NODE));

        for scope in &mut self.scopes.scopes[uncached_scope..] {
            for drop in &scope.drops {
                if is_generator || drop.kind == DropKind::Value {
                    cached_drop = self.scopes.unwind_drops.add_drop(*drop, cached_drop);
                }
            }
            scope.cached_unwind_block = Some(cached_drop);
        }

        cached_drop
    }
    /// Prepares to create a path that performs all required cleanup for a
    /// terminator that can unwind at the given basic block.
    ///
    /// This path terminates in Resume. The path isn't created until after all
    /// of the non-unwind paths in this item have been lowered.
    crate fn diverge_from(&mut self, start: BasicBlock) {
        debug_assert!(
            matches!(
                self.cfg.block_data(start).terminator().kind,
                TerminatorKind::Assert { .. }
                    | TerminatorKind::Call { .. }
                    | TerminatorKind::DropAndReplace { .. }
                    | TerminatorKind::FalseUnwind { .. }
            ),
            "diverge_from called on block with terminator that cannot unwind."
        );

        let next_drop = self.diverge_cleanup();
        self.scopes.unwind_drops.add_entry(start, next_drop);
    }
    /// Sets up a path that performs all required cleanup for dropping a
    /// generator, starting from the given block that ends in
    /// [TerminatorKind::Yield].
    ///
    /// This path terminates in GeneratorDrop.
    crate fn generator_drop_cleanup(&mut self, yield_block: BasicBlock) {
        debug_assert!(
            matches!(
                self.cfg.block_data(yield_block).terminator().kind,
                TerminatorKind::Yield { .. }
            ),
            "generator_drop_cleanup called on block with non-yield terminator."
        );
        let (uncached_scope, mut cached_drop) = self
            .scopes
            .scopes
            .iter()
            .enumerate()
            .rev()
            .find_map(|(scope_idx, scope)| {
                scope.cached_generator_drop_block.map(|cached_block| (scope_idx + 1, cached_block))
            })
            .unwrap_or((0, ROOT_NODE));

        for scope in &mut self.scopes.scopes[uncached_scope..] {
            for drop in &scope.drops {
                cached_drop = self.scopes.generator_drops.add_drop(*drop, cached_drop);
            }
            scope.cached_generator_drop_block = Some(cached_drop);
        }

        self.scopes.generator_drops.add_entry(yield_block, cached_drop);
    }
    /// Utility function for *non*-scope code to build their own drops
    crate fn build_drop_and_replace(
        &mut self,
        block: BasicBlock,
        span: Span,
        place: Place<'tcx>,
        value: Operand<'tcx>,
    ) -> BlockAnd<()> {
        let source_info = self.source_info(span);
        let next_target = self.cfg.start_new_block();

        self.cfg.terminate(
            block,
            source_info,
            TerminatorKind::DropAndReplace { place, value, target: next_target, unwind: None },
        );
        self.diverge_from(block);

        next_target.unit()
    }
    /// Creates an `Assert` terminator and returns the success block.
    /// If the boolean condition operand is not the expected value,
    /// a runtime panic will be caused with the given message.
    crate fn assert(
        &mut self,
        block: BasicBlock,
        cond: Operand<'tcx>,
        expected: bool,
        msg: AssertMessage<'tcx>,
        span: Span,
    ) -> BasicBlock {
        let source_info = self.source_info(span);
        let success_block = self.cfg.start_new_block();

        self.cfg.terminate(
            block,
            source_info,
            TerminatorKind::Assert { cond, expected, msg, target: success_block, cleanup: None },
        );
        self.diverge_from(block);

        success_block
    }
    /// Lower the arms and guards of a match.
    ///
    /// The decision tree should have already been created (by
    /// [Builder::lower_match_tree]).
    ///
    /// This is in this module, and not in `build::matches`, because we have to
    /// do some careful scope manipulation to have the drop of the destination
    /// be scheduled at the end of each arm and then cleared for the next arm.
    crate fn lower_match_arms(
        &mut self,
        destination: Place<'tcx>,
        destination_scope: Option<region::Scope>,
        scrutinee_place: Place<'tcx>,
        scrutinee_span: Span,
        arm_candidates: Vec<(&'_ Arm<'tcx>, Candidate<'_, 'tcx>)>,
        outer_source_info: SourceInfo,
        fake_borrow_temps: Vec<(Place<'tcx>, Local)>,
    ) -> BlockAnd<()> {
        if arm_candidates.is_empty() {
            // If there are no arms to schedule drops, then we have to do it
            // ourselves.
            if let Some(scope) = destination_scope {
                self.schedule_drop(
                    outer_source_info.span,
                    scope,
                    destination.as_local().unwrap(),
                    DropKind::Value,
                );
            }
            return self.cfg.start_new_block().unit();
        }
        let mut first_arm = true;
        let arm_end_blocks: Vec<_> = arm_candidates
            .into_iter()
            .map(|(arm, candidate)| {
                debug!("lowering arm {:?}\ncandidate = {:?}", arm, candidate);

                if first_arm {
                    first_arm = false;
                } else if let Some(scope) = destination_scope {
                    self.unschedule_drop(scope, destination.as_local().unwrap());
                }

                let arm_source_info = self.source_info(arm.span);
                let arm_scope = (arm.scope, arm_source_info);
                self.in_scope(arm_scope, arm.lint_level, |this| {
                    let body = this.hir.mirror(arm.body.clone());
                    let scope = this.declare_bindings(
                        None,
                        arm.span,
                        &arm.pattern,
                        ArmHasGuard(arm.guard.is_some()),
                        Some((Some(&scrutinee_place), scrutinee_span)),
                    );

                    let arm_block = this.bind_pattern(
                        outer_source_info,
                        candidate,
                        arm.guard.as_ref(),
                        &fake_borrow_temps,
                        scrutinee_span,
                        arm.span,
                        arm.scope,
                    );

                    if let Some(source_scope) = scope {
                        this.source_scope = source_scope;
                    }

                    this.into(destination, destination_scope, arm_block, body)
                })
            })
            .collect();

        // all the arm blocks will rejoin here
        let end_block = self.cfg.start_new_block();

        for arm_block in arm_end_blocks {
            self.cfg.goto(unpack!(arm_block), outer_source_info, end_block);
        }

        self.source_scope = outer_source_info.scope;

        end_block.unit()
    }
    /// Unschedules any drops in the top scope.
    ///
    /// This is only needed for `match` arm scopes, because they have one
    /// entrance per pattern, but only one exit.
    pub(super) fn clear_top_scope(&mut self, region_scope: region::Scope) {
        let top_scope = self.scopes.scopes.last_mut().unwrap();

        assert_eq!(top_scope.region_scope, region_scope);

        top_scope.drops.clear();
        top_scope.invalidate_cache();
    }
    /// Unschedules the drop of the return place.
    ///
    /// If the return type of a function requires drop, then we schedule it
    /// in the outermost scope so that it's dropped if there's a panic while
    /// we drop any local variables. But we don't want to drop it if we
    /// return normally.
    crate fn unschedule_return_place_drop(&mut self) {
        assert_eq!(self.scopes.scopes.len(), 1);
        assert!(self.scopes.scopes[0].drops.len() <= 1);
        self.scopes.scopes[0].drops.clear();
    }
}
/// Builds drops for `pop_scope` and `leave_top_scope`.
fn build_scope_drops<'tcx>(
    cfg: &mut CFG<'tcx>,
    unwind_drops: &mut DropTree,
    scope: &Scope,
    mut block: BasicBlock,
    mut unwind_to: DropIdx,
    storage_dead_on_unwind: bool,
    arg_count: usize,
) -> BlockAnd<()> {
    debug!("build_scope_drops({:?} -> {:?})", block, scope);

    // Build up the drops in evaluation order. The end result will
    // look like:
    //
    // [SDs, drops[n]] --..> [SDs, drop[1]] -> [SDs, drop[0]] -> [[SDs]]
    //        |                    |               |                 |
    //        v                    v               v                 v
    // [drop[n]] -...-> [drop[1]] ------> [drop[0]] ------> [last_unwind_to]
    //
    // The horizontal arrows represent the execution path when the drops return
    // successfully. The downwards arrows represent the execution path when the
    // drops panic (panicking while unwinding will abort, so there's no need for
    // another set of arrows).
    //
    // For generators, we unwind from a drop on a local to its StorageDead
    // statement. For other functions we don't worry about StorageDead. The
    // drops for the unwind path should have already been generated by
    // `diverge_cleanup_gen`.

    for drop_data in scope.drops.iter().rev() {
        let source_info = drop_data.source_info;
        let local = drop_data.local;

        match drop_data.kind {
            DropKind::Value => {
                // `unwind_to` should drop the value that we're about to
                // schedule. If dropping this value panics, then we continue
                // with the *next* value on the unwind path.
                debug_assert_eq!(unwind_drops.drops[unwind_to].0.local, drop_data.local);
                debug_assert_eq!(unwind_drops.drops[unwind_to].0.kind, drop_data.kind);
                unwind_to = unwind_drops.drops[unwind_to].1;

                unwind_drops.add_entry(block, unwind_to);

                let next = cfg.start_new_block();
                cfg.terminate(
                    block,
                    source_info,
                    TerminatorKind::Drop { place: local.into(), target: next, unwind: None },
                );
                block = next;
            }
            DropKind::Storage => {
                if storage_dead_on_unwind {
                    debug_assert_eq!(unwind_drops.drops[unwind_to].0.local, drop_data.local);
                    debug_assert_eq!(unwind_drops.drops[unwind_to].0.kind, drop_data.kind);
                    unwind_to = unwind_drops.drops[unwind_to].1;
                }
                // Only temps and vars need their storage dead.
                assert!(local.index() > arg_count);
                cfg.push(block, Statement { source_info, kind: StatementKind::StorageDead(local) });
            }
        }
    }
    block.unit()
}
impl<'a, 'tcx: 'a> Builder<'a, 'tcx> {
    /// Build a drop tree for a breakable scope.
    ///
    /// If `continue_block` is `Some`, then the tree is for `continue` inside a
    /// loop. Otherwise this is for `break` or `return`. The `DropIdx` is the
    /// next drop in the case that the drop tree unwinds. This is needed
    /// because the drop of the break destination has already been scheduled
    /// but it hasn't been initialized on the `continue` paths.
    fn build_exit_tree(
        &mut self,
        mut drops: DropTree,
        continue_block: Option<(BasicBlock, DropIdx)>,
    ) -> Option<BlockAnd<()>> {
        let mut blocks = IndexVec::from_elem(None, &drops.drops);
        blocks[ROOT_NODE] = continue_block.map(|(block, _)| block);

        drops.build_mir::<ExitScopes>(&mut self.cfg, &mut blocks);

        // Link the exit drop tree to unwind drop tree.
        if drops.drops.iter().any(|(drop, _)| drop.kind == DropKind::Value) {
            let unwind_target = continue_block
                .map_or_else(|| self.diverge_cleanup(), |(_, unwind_target)| unwind_target);
            let mut unwind_indices = IndexVec::from_elem_n(unwind_target, 1);
            for (drop_idx, drop_data) in drops.drops.iter_enumerated().skip(1) {
                match drop_data.0.kind {
                    DropKind::Storage => {
                        if self.generator_kind.is_some() {
                            let unwind_drop = self
                                .scopes
                                .unwind_drops
                                .add_drop(drop_data.0, unwind_indices[drop_data.1]);
                            unwind_indices.push(unwind_drop);
                        } else {
                            unwind_indices.push(unwind_indices[drop_data.1]);
                        }
                    }
                    DropKind::Value => {
                        let unwind_drop = self
                            .scopes
                            .unwind_drops
                            .add_drop(drop_data.0, unwind_indices[drop_data.1]);
                        self.scopes
                            .unwind_drops
                            .add_entry(blocks[drop_idx].unwrap(), unwind_indices[drop_data.1]);
                        unwind_indices.push(unwind_drop);
                    }
                }
            }
        }
        blocks[ROOT_NODE].map(BasicBlock::unit)
    }
    /// Build the unwind and generator drop trees.
    crate fn build_drop_trees(&mut self, should_abort: bool) {
        if self.generator_kind.is_some() {
            self.build_generator_drop_trees(should_abort);
        } else {
            Self::build_unwind_tree(
                &mut self.cfg,
                &mut self.scopes.unwind_drops,
                self.fn_span,
                should_abort,
                &mut None,
            );
        }
    }
    fn build_generator_drop_trees(&mut self, should_abort: bool) {
        // Build the drop tree for dropping the generator while it's suspended.
        let drops = &mut self.scopes.generator_drops;
        let cfg = &mut self.cfg;
        let fn_span = self.fn_span;
        let mut blocks = IndexVec::from_elem(None, &drops.drops);
        drops.build_mir::<GeneratorDrop>(cfg, &mut blocks);
        if let Some(root_block) = blocks[ROOT_NODE] {
            cfg.terminate(
                root_block,
                SourceInfo::outermost(fn_span),
                TerminatorKind::GeneratorDrop,
            );
        }

        // Build the drop tree for unwinding in the normal control flow paths.
        let resume_block = &mut None;
        let unwind_drops = &mut self.scopes.unwind_drops;
        Self::build_unwind_tree(cfg, unwind_drops, fn_span, should_abort, resume_block);

        // Build the drop tree for unwinding when dropping a suspended
        // generator.
        //
        // This is a different tree to the standard unwind paths here to
        // prevent drop elaboration from creating drop flags that would have
        // to be captured by the generator. I'm not sure how important this
        // optimization is, but it is here.
        for (drop_idx, drop_data) in drops.drops.iter_enumerated() {
            if let DropKind::Value = drop_data.0.kind {
                debug_assert!(drop_data.1 < drops.drops.next_index());
                drops.entry_points.push((drop_data.1, blocks[drop_idx].unwrap()));
            }
        }
        Self::build_unwind_tree(cfg, drops, fn_span, should_abort, resume_block);
    }
    fn build_unwind_tree(
        cfg: &mut CFG<'tcx>,
        drops: &mut DropTree,
        fn_span: Span,
        should_abort: bool,
        resume_block: &mut Option<BasicBlock>,
    ) {
        let mut blocks = IndexVec::from_elem(None, &drops.drops);
        blocks[ROOT_NODE] = *resume_block;
        drops.build_mir::<Unwind>(cfg, &mut blocks);
        if let (None, Some(resume)) = (*resume_block, blocks[ROOT_NODE]) {
            // `TerminatorKind::Abort` is used for `#[unwind(aborts)]`
            // functions.
            let terminator =
                if should_abort { TerminatorKind::Abort } else { TerminatorKind::Resume };

            cfg.terminate(resume, SourceInfo::outermost(fn_span), terminator);

            *resume_block = blocks[ROOT_NODE];
        }
    }
}
// DropTreeBuilder implementations.

struct ExitScopes;

impl<'tcx> DropTreeBuilder<'tcx> for ExitScopes {
    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
        cfg.start_new_block()
    }
    fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
        cfg.block_data_mut(from).terminator_mut().kind = TerminatorKind::Goto { target: to };
    }
}
struct GeneratorDrop;

impl<'tcx> DropTreeBuilder<'tcx> for GeneratorDrop {
    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
        cfg.start_new_block()
    }
    fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
        let term = cfg.block_data_mut(from).terminator_mut();
        if let TerminatorKind::Yield { ref mut drop, .. } = term.kind {
            *drop = Some(to);
        } else {
            span_bug!(
                term.source_info.span,
                "cannot enter generator drop tree from {:?}",
                term.kind,
            )
        }
    }
}
struct Unwind;

impl<'tcx> DropTreeBuilder<'tcx> for Unwind {
    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
        cfg.start_new_cleanup_block()
    }
    fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
        let term = &mut cfg.block_data_mut(from).terminator_mut();
        match &mut term.kind {
            TerminatorKind::Drop { unwind, .. }
            | TerminatorKind::DropAndReplace { unwind, .. }
            | TerminatorKind::FalseUnwind { unwind, .. }
            | TerminatorKind::Call { cleanup: unwind, .. }
            | TerminatorKind::Assert { cleanup: unwind, .. } => {
                *unwind = Some(to);
            }
            TerminatorKind::Goto { .. }
            | TerminatorKind::SwitchInt { .. }
            | TerminatorKind::Resume
            | TerminatorKind::Abort
            | TerminatorKind::Return
            | TerminatorKind::Unreachable
            | TerminatorKind::Yield { .. }
            | TerminatorKind::GeneratorDrop
            | TerminatorKind::FalseEdge { .. }
            | TerminatorKind::InlineAsm { .. } => {
                span_bug!(term.source_info.span, "cannot unwind from {:?}", term.kind)
            }
        }
    }
}