Managing the scope stack. The scopes are tied to lexical scopes, so as
we descend the HAIR, we push a scope on the stack, build its
contents, and then pop it off. Every scope is named by a
`region::Scope`.
When pushing a new scope, we record the current point in the graph (a
basic block); this marks the entry to the scope. We then generate more
code in the control-flow graph. Whenever the scope is exited, either
via a `break` or `return` or just by fallthrough, that marks an exit
from the scope. Each lexical scope thus corresponds to a single-entry,
multiple-exit (SEME) region in the control-flow graph.
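For example, a block with a conditional `return` forms a SEME region with one entry and two exits (an illustrative sketch, not code from this module):

```rust
fn f(cond: bool) -> i32 {
    // entering the function body pushes its scope: one entry point
    let x = 1;
    if cond {
        return x; // first exit from the scope
    }
    x + 1 // fallthrough: second exit from the same scope
}
```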
For now, we keep a mapping from each `region::Scope` to its
corresponding SEME region for later reference (see caveat in next
paragraph). This is because lifetimes under the current (lexical)
borrow checker are tied to these scopes, so we need to know the CFG
region each one maps to. Eventually, when we shift to non-lexical
lifetimes, there should be no need to remember this mapping.
### Not so SEME Regions
In the course of building matches, it sometimes happens that certain code
(namely guards) gets executed multiple times. This means that a single lexical
scope may in fact correspond to multiple, disjoint SEME regions. So in fact our
mapping is from one scope to a vector of SEME regions.
Also in matches, the scopes assigned to arms are not even SEME regions! Each
arm has a single region with one entry for each pattern. We manually
manipulate the scheduled drops in this scope to avoid dropping things multiple
times, although drop elaboration would clean this up for value drops.
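A hypothetical illustration (the helper `g` and the patterns are invented for this sketch): in the arm below, the guard may be evaluated once per candidate pattern, and the arm's scope gets one entry per pattern:

```rust
fn g(x: i32) -> bool { x > 0 }

fn demo(v: (i32, i32)) -> i32 {
    match v {
        // The arm's scope has one entry per pattern, and the guard
        // `g(x)` may be evaluated once for each candidate pattern,
        // so the guard's code need not form a single SEME region.
        (0, x) | (x, 0) if g(x) => x,
        _ => -1,
    }
}
```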
### Drops

The primary purpose for scopes is to insert drops: while building
the contents, we also accumulate places that need to be dropped upon
exit from each scope. This is done by calling `schedule_drop`. Once a
drop is scheduled, whenever we branch out we will insert drops of all
those places onto the outgoing edge. Note that we don't know the full
set of scheduled drops up front, and so whenever we exit from the
scope we only drop the values scheduled thus far. For example, consider
the scope S corresponding to this loop:
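```
loop {
    // the scope S for the loop body starts here
    let x = ..;
    if cond { break; } // exits S: drops only what is scheduled so far (`x`)
    let y = ..;
    // fallthrough to the next iteration: drops `y`, then `x`
}
```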
When processing the `let x`, we will add one drop to the scope for
`x`. The break will then insert a drop for `x`. When we process `let
y`, we will add another drop (in fact, to a subscope, but let's ignore
that for now); any later drops would also drop `y`.
### Early exit

There are numerous "normal" ways to early exit a scope: `break`,
`continue`, `return` (panics are handled separately). Whenever an
early exit occurs, the method `exit_scope` is called. It is given the
current point in execution where the early exit occurs, as well as the
scope you want to branch to (note that all early exits from a scope are
to some other enclosing scope). `exit_scope` will record this exit point
and add the drops scheduled so far for the scopes being exited.
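For instance, in the following sketch (hypothetical source, not this module's code), the `break` edge receives the drops scheduled so far in the scopes being exited:

```rust
fn demo(cond: bool) -> i32 {
    let a = String::new(); // drop scheduled in the function's scope
    loop {
        let b = String::new(); // drop scheduled in the loop body's scope
        if cond {
            // `exit_scope` handles this early exit; the edge out of
            // the loop gets a drop of `b`, while `a` stays scheduled.
            break;
        }
    }
    0 // `a` is dropped on the fallthrough exit of the function scope
}
```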
Panics are handled in a similar fashion, except that a panic always
returns out to the `DIVERGE_BLOCK`. To trigger a panic, simply call
`panic(p)` with the current point `p`. Or else you can call
`diverge_cleanup`, which will produce a block that you can branch to
which does the appropriate cleanup and then diverges. `panic(p)`
simply calls `diverge_cleanup()` and adds an edge from `p` to the
result.
### Loop scopes

In addition to the normal scope stack, we track a loop scope stack
that contains only loops. It tracks where a `break` and `continue`
should go to.
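As a sketch (hypothetical source), labeled loops show why this is a stack: `continue` targets the innermost entry, while `break 'outer` searches down the stack for the loop scope matching the label:

```rust
fn a() -> bool { false }
fn b() -> bool { true }

fn demo() {
    'outer: loop {
        loop {
            if a() { continue; }      // top of the loop scope stack
            if b() { break 'outer; }  // an enclosing entry, found by label
        }
    }
}
```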
use crate::build::{BlockAnd, BlockAndExtension, BlockFrame, Builder, CFG};
use crate::hair::{Expr, ExprRef, LintLevel};
use rustc::middle::region;
use rustc::hir;
use rustc::mir::*;
use rustc::ty::Ty;
use syntax_pos::{DUMMY_SP, Span};
use rustc_data_structures::fx::FxHashMap;
use std::collections::hash_map::Entry;
use std::mem;
#[derive(Debug)]
struct Scope {
    /// The source scope this scope was created in.
    source_scope: SourceScope,

    /// The region scope of this scope within source code.
    region_scope: region::Scope,

    /// The span of that `region_scope`.
    region_scope_span: Span,

    /// Set of places to drop when exiting this scope. This starts
    /// out empty but grows as variables are declared during the
    /// building process. This is a stack, so we always drop from the
    /// end of the vector (top of the stack) first.
    drops: Vec<DropData>,

    /// Local variables that have been moved out of this scope and
    /// therefore do not need a value drop on the non-unwind exit paths.
    moved_locals: Vec<Local>,

    /// The cache for drop chain on "normal" exit into a particular BasicBlock.
    cached_exits: FxHashMap<(BasicBlock, region::Scope), BasicBlock>,

    /// The cache for drop chain on "generator drop" exit.
    cached_generator_drop: Option<BasicBlock>,

    /// The cache for drop chain on "unwind" exit.
    cached_unwind: CachedBlock,
}
#[derive(Debug, Default)]
pub struct Scopes<'tcx> {
    scopes: Vec<Scope>,

    /// The current set of breakable scopes. See module comment for more details.
    breakable_scopes: Vec<BreakableScope<'tcx>>,
}
#[derive(Debug)]
struct DropData {
    /// The span where the drop obligation was incurred (typically where the place was declared).
    span: Span,

    /// The local to drop.
    local: Local,

    /// Whether this is a value Drop or a StorageDead.
    kind: DropKind,

    /// The cached blocks for unwinds.
    cached_block: CachedBlock,
}
#[derive(Debug, Default, Clone, Copy)]
struct CachedBlock {
    /// The cached block for the cleanups-on-diverge path. This block
    /// contains code to run the current drop and all the preceding
    /// drops (i.e., those having lower index in Drop's Scope drop
    /// array).
    unwind: Option<BasicBlock>,

    /// The cached block for unwinds during the cleanups-on-generator-drop path.
    ///
    /// This is split from the standard unwind path here to prevent drop
    /// elaboration from creating drop flags that would have to be captured
    /// by the generator. I'm not sure how important this optimization is.
    generator_drop: Option<BasicBlock>,
}
#[derive(Debug, PartialEq, Eq)]
pub(crate) enum DropKind {
    Value,
    Storage,
}
#[derive(Clone, Debug)]
struct BreakableScope<'tcx> {
    /// Region scope of the loop
    region_scope: region::Scope,
    /// Where the body of the loop begins. `None` if this is a block
    /// rather than a loop (blocks cannot be `continue`d).
    continue_block: Option<BasicBlock>,
    /// Block to branch into when the loop or block terminates (either by being
    /// broken out of via `break`, or by its condition becoming false)
    break_block: BasicBlock,
    /// The destination of the loop/block expression itself (i.e., where to put the result of a
    /// `break` expression)
    break_destination: Place<'tcx>,
}
/// The target of an expression that breaks out of a scope
#[derive(Clone, Copy, Debug)]
pub enum BreakableTarget {
    Continue(region::Scope),
    Break(region::Scope),
    Return,
}
impl CachedBlock {
    fn invalidate(&mut self) {
        *self = CachedBlock::default();
    }

    fn get(&self, generator_drop: bool) -> Option<BasicBlock> {
        if generator_drop {
            self.generator_drop
        } else {
            self.unwind
        }
    }

    fn ref_mut(&mut self, generator_drop: bool) -> &mut Option<BasicBlock> {
        if generator_drop {
            &mut self.generator_drop
        } else {
            &mut self.unwind
        }
    }
}
impl Scope {
    /// Invalidates all the cached blocks in the scope.
    ///
    /// Should always be run for all inner scopes when a drop is pushed into some scope enclosing a
    /// larger extent of code.
    ///
    /// `storage_only` controls whether to invalidate only drop paths that run `StorageDead`.
    /// `this_scope_only` controls whether to invalidate only drop paths that refer to the current
    /// top-of-scope (as opposed to dependent scopes).
    fn invalidate_cache(&mut self, storage_only: bool, is_generator: bool, this_scope_only: bool) {
        // FIXME: maybe do shared caching of `cached_exits` etc. to handle functions
        // with lots of `try!`?

        // cached exits drop storage and refer to the top-of-scope
        self.cached_exits.clear();

        // the current generator drop and unwind refer to top-of-scope
        self.cached_generator_drop = None;

        let ignore_unwinds = storage_only && !is_generator;
        if !ignore_unwinds {
            self.cached_unwind.invalidate();
        }

        if !ignore_unwinds && !this_scope_only {
            for drop_data in &mut self.drops {
                drop_data.cached_block.invalidate();
            }
        }
    }

    /// Given a span and this scope's source scope, make a SourceInfo.
    fn source_info(&self, span: Span) -> SourceInfo {
        SourceInfo {
            span,
            scope: self.source_scope
        }
    }

    /// Whether there's anything to do for the cleanup path, that is,
    /// when unwinding through this scope. This includes destructors,
    /// but not StorageDead statements, which don't get emitted at all
    /// for unwinding, for several reasons:
    /// * clang doesn't emit llvm.lifetime.end for C++ unwinding
    /// * LLVM's memory dependency analysis can't handle it atm
    /// * polluting the cleanup MIR with StorageDead creates
    ///   landing pads even though there are no actual destructors
    /// * freeing up stack space has no effect during unwinding
    /// Note that for generators we do emit StorageDeads, for the
    /// use of optimizations in the MIR generator transform.
    fn needs_cleanup(&self) -> bool {
        self.drops.iter().any(|drop| match drop.kind {
            DropKind::Value => true,
            DropKind::Storage => false,
        })
    }
}
impl<'tcx> Scopes<'tcx> {
    fn len(&self) -> usize {
        self.scopes.len()
    }

    fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo), vis_scope: SourceScope) {
        debug!("push_scope({:?})", region_scope);
        self.scopes.push(Scope {
            source_scope: vis_scope,
            region_scope: region_scope.0,
            region_scope_span: region_scope.1.span,
            drops: vec![],
            moved_locals: vec![],
            cached_generator_drop: None,
            cached_exits: Default::default(),
            cached_unwind: CachedBlock::default(),
        });
    }
    fn pop_scope(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
    ) -> (Scope, Option<BasicBlock>) {
        let scope = self.scopes.pop().unwrap();
        assert_eq!(scope.region_scope, region_scope.0);
        let unwind_to = self.scopes.last()
            .and_then(|next_scope| next_scope.cached_unwind.get(false));
        (scope, unwind_to)
    }

    fn may_panic(&self, scope_count: usize) -> bool {
        let len = self.len();
        self.scopes[(len - scope_count)..].iter().any(|s| s.needs_cleanup())
    }
    /// Finds the breakable scope for a given label. This is used for
    /// resolving `return`, `break` and `continue`.
    fn find_breakable_scope(
        &self,
        span: Span,
        target: BreakableTarget,
    ) -> (BasicBlock, region::Scope, Option<Place<'tcx>>) {
        let get_scope = |scope: region::Scope| {
            // find the loop-scope by its `region::Scope`.
            self.breakable_scopes.iter()
                .rfind(|breakable_scope| breakable_scope.region_scope == scope)
                .unwrap_or_else(|| span_bug!(span, "no enclosing breakable scope found"))
        };
        match target {
            BreakableTarget::Return => {
                let scope = &self.breakable_scopes[0];
                if scope.break_destination != Place::return_place() {
                    span_bug!(span, "`return` in item with no return scope");
                }
                (scope.break_block, scope.region_scope, Some(scope.break_destination.clone()))
            }
            BreakableTarget::Break(scope) => {
                let scope = get_scope(scope);
                (scope.break_block, scope.region_scope, Some(scope.break_destination.clone()))
            }
            BreakableTarget::Continue(scope) => {
                let scope = get_scope(scope);
                let continue_block = scope.continue_block
                    .unwrap_or_else(|| span_bug!(span, "missing `continue` block"));
                (continue_block, scope.region_scope, None)
            }
        }
    }
    fn num_scopes_above(&self, region_scope: region::Scope, span: Span) -> usize {
        let scope_count = self.scopes.iter().rev()
            .position(|scope| scope.region_scope == region_scope)
            .unwrap_or_else(|| {
                span_bug!(span, "region_scope {:?} does not enclose", region_scope)
            });
        let len = self.len();
        assert!(scope_count < len, "should not use `exit_scope` to pop ALL scopes");
        scope_count
    }
    fn iter_mut(&mut self) -> impl DoubleEndedIterator<Item=&mut Scope> + '_ {
        self.scopes.iter_mut().rev()
    }

    fn top_scopes(&mut self, count: usize) -> impl DoubleEndedIterator<Item=&mut Scope> + '_ {
        let len = self.len();
        self.scopes[len - count..].iter_mut()
    }

    /// Returns the topmost active scope, which is known to be alive until
    /// the next scope expression.
    pub(super) fn topmost(&self) -> region::Scope {
        self.scopes.last().expect("topmost_scope: no scopes present").region_scope
    }

    fn source_info(&self, index: usize, span: Span) -> SourceInfo {
        self.scopes[self.len() - index].source_info(span)
    }
}
impl<'a, 'tcx> Builder<'a, 'tcx> {
    // Adding and removing scopes
    // ==========================
    // Start a breakable scope, which tracks where `continue`, `break` and
    // `return` should branch to.
    pub fn in_breakable_scope<F, R>(&mut self,
                                    loop_block: Option<BasicBlock>,
                                    break_block: BasicBlock,
                                    break_destination: Place<'tcx>,
                                    f: F) -> R
        where F: FnOnce(&mut Builder<'a, 'tcx>) -> R
    {
        let region_scope = self.scopes.topmost();
        let scope = BreakableScope {
            region_scope,
            continue_block: loop_block,
            break_block,
            break_destination,
        };
        self.scopes.breakable_scopes.push(scope);
        let res = f(self);
        let breakable_scope = self.scopes.breakable_scopes.pop().unwrap();
        assert!(breakable_scope.region_scope == region_scope);
        res
    }
    pub fn in_opt_scope<F, R>(&mut self,
                              opt_scope: Option<(region::Scope, SourceInfo)>,
                              f: F) -> BlockAnd<R>
        where F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>
    {
        debug!("in_opt_scope(opt_scope={:?})", opt_scope);
        if let Some(region_scope) = opt_scope { self.push_scope(region_scope); }
        let mut block;
        let rv = unpack!(block = f(self));
        if let Some(region_scope) = opt_scope {
            unpack!(block = self.pop_scope(region_scope, block));
        }
        debug!("in_scope: exiting opt_scope={:?} block={:?}", opt_scope, block);
        block.and(rv)
    }
    /// Convenience wrapper that pushes a scope and then executes `f`
    /// to build its contents, popping the scope afterwards.
    pub fn in_scope<F, R>(&mut self,
                          region_scope: (region::Scope, SourceInfo),
                          lint_level: LintLevel,
                          f: F) -> BlockAnd<R>
        where F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>
    {
        debug!("in_scope(region_scope={:?})", region_scope);
        let source_scope = self.source_scope;
        let tcx = self.hir.tcx();
        if let LintLevel::Explicit(current_hir_id) = lint_level {
            // Use `maybe_lint_level_root_bounded` with `root_lint_level` as a bound
            // to avoid adding Hir dependences on our parents.
            // We estimate the true lint roots here to avoid creating a lot of source scopes.

            let parent_root = tcx.maybe_lint_level_root_bounded(
                self.source_scope_local_data[source_scope].lint_root,
                self.hir.root_lint_level,
            );
            let current_root = tcx.maybe_lint_level_root_bounded(
                current_hir_id,
                self.hir.root_lint_level,
            );

            if parent_root != current_root {
                self.source_scope = self.new_source_scope(
                    region_scope.1.span,
                    LintLevel::Explicit(current_root),
                    None,
                );
            }
        }
        self.push_scope(region_scope);
        let mut block;
        let rv = unpack!(block = f(self));
        unpack!(block = self.pop_scope(region_scope, block));
        self.source_scope = source_scope;
        debug!("in_scope: exiting region_scope={:?} block={:?}", region_scope, block);
        block.and(rv)
    }
    /// Push a scope onto the stack. You can then build code in this
    /// scope and call `pop_scope` afterwards. Note that these two
    /// calls must be paired; using `in_scope` as a convenience
    /// wrapper may be preferable.
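    ///
    /// A hypothetical sketch of the pairing (this mirrors what `in_scope`
    /// does internally; `build_contents` stands in for arbitrary building):
    ///
    ///     builder.push_scope(region_scope);
    ///     let mut block;
    ///     let rv = unpack!(block = build_contents(builder));
    ///     unpack!(block = builder.pop_scope(region_scope, block));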
    pub fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo)) {
        self.scopes.push_scope(region_scope, self.source_scope);
    }
    /// Pops a scope, which should have region scope `region_scope`,
    /// adding any drops onto the end of `block` that are needed.
    /// This must match 1-to-1 with `push_scope`.
    pub fn pop_scope(&mut self,
                     region_scope: (region::Scope, SourceInfo),
                     mut block: BasicBlock)
                     -> BlockAnd<()> {
        debug!("pop_scope({:?}, {:?})", region_scope, block);
        // If we are emitting a `drop` statement, we need to have the cached
        // diverge cleanup pads ready in case that drop panics.
        if self.scopes.may_panic(1) {
            self.diverge_cleanup();
        }
        let (scope, unwind_to) = self.scopes.pop_scope(region_scope);
        let unwind_to = unwind_to.unwrap_or_else(|| self.resume_block());

        unpack!(block = build_scope_drops(
            &mut self.cfg,
            self.is_generator,
            &scope,
            block,
            unwind_to,
            self.arg_count,
            false, // not generator
            false, // not unwind path
        ));

        block.unit()
    }
    pub fn break_scope(
        &mut self,
        mut block: BasicBlock,
        value: Option<ExprRef<'tcx>>,
        scope: BreakableTarget,
        source_info: SourceInfo,
    ) -> BlockAnd<()> {
        let (mut target_block, region_scope, destination)
            = self.scopes.find_breakable_scope(source_info.span, scope);
        if let BreakableTarget::Return = scope {
            // We call this now, rather than when we start lowering the
            // function so that the return block doesn't precede the entire
            // rest of the CFG. Some passes and LLVM prefer blocks to be in
            // approximately CFG order.
            target_block = self.return_block();
        }
        if let Some(destination) = destination {
            if let Some(value) = value {
                debug!("stmt_expr Break val block_context.push(SubExpr)");
                self.block_context.push(BlockFrame::SubExpr);
                unpack!(block = self.into(&destination, block, value));
                self.block_context.pop();
            } else {
                self.cfg.push_assign_unit(block, source_info, &destination)
            }
        } else {
            assert!(value.is_none(), "`return` and `break` should have a destination");
        }
        self.exit_scope(source_info.span, region_scope, block, target_block);
        self.cfg.start_new_block().unit()
    }
    /// Branch out of `block` to `target`, exiting all scopes up to
    /// and including `region_scope`. This will insert whatever drops are
    /// needed. See module comment for details.
    pub fn exit_scope(&mut self,
                      span: Span,
                      region_scope: region::Scope,
                      mut block: BasicBlock,
                      target: BasicBlock) {
        debug!("exit_scope(region_scope={:?}, block={:?}, target={:?})",
               region_scope, block, target);
        let scope_count = self.scopes.num_scopes_above(region_scope, span);

        // If we are emitting a `drop` statement, we need to have the cached
        // diverge cleanup pads ready in case that drop panics.
        let may_panic = self.scopes.may_panic(scope_count);
        if may_panic {
            self.diverge_cleanup();
        }

        let mut scopes = self.scopes.top_scopes(scope_count + 1).rev();
        let mut scope = scopes.next().unwrap();
        for next_scope in scopes {
            if scope.drops.is_empty() {
                scope = next_scope;
                continue;
            }
            let source_info = scope.source_info(span);
            block = match scope.cached_exits.entry((target, region_scope)) {
                Entry::Occupied(e) => {
                    self.cfg.terminate(block, source_info,
                                       TerminatorKind::Goto { target: *e.get() });
                    return;
                }
                Entry::Vacant(v) => {
                    let b = self.cfg.start_new_block();
                    self.cfg.terminate(block, source_info,
                                       TerminatorKind::Goto { target: b });
                    v.insert(b);
                    b
                }
            };

            let unwind_to = next_scope.cached_unwind.get(false).unwrap_or_else(|| {
                debug_assert!(!may_panic, "cached block not present?");
                START_BLOCK // dummy value; unused since nothing here can panic
            });

            unpack!(block = build_scope_drops(
                &mut self.cfg,
                self.is_generator,
                scope,
                block,
                unwind_to,
                self.arg_count,
                false, // not generator
                false, // not unwind path
            ));

            scope = next_scope;
        }

        let source_info = self.scopes.source_info(scope_count, span);
        self.cfg.terminate(block, source_info, TerminatorKind::Goto { target });
    }
    /// Creates a path that performs all required cleanup for dropping a generator.
    ///
    /// This path terminates in GeneratorDrop. Returns the start of the path.
    /// None indicates there's no cleanup to do at this point.
    pub fn generator_drop_cleanup(&mut self) -> Option<BasicBlock> {
        // Fill in the cache for unwinds
        self.diverge_cleanup_gen(true);

        let src_info = self.scopes.source_info(self.scopes.len(), self.fn_span);
        let resume_block = self.resume_block();
        let mut scopes = self.scopes.iter_mut().peekable();
        let mut block = self.cfg.start_new_block();
        let result = block;

        while let Some(scope) = scopes.next() {
            block = if let Some(b) = scope.cached_generator_drop {
                self.cfg.terminate(block, src_info,
                                   TerminatorKind::Goto { target: b });
                return Some(result);
            } else {
                let b = self.cfg.start_new_block();
                scope.cached_generator_drop = Some(b);
                self.cfg.terminate(block, src_info,
                                   TerminatorKind::Goto { target: b });
                b
            };

            let unwind_to = scopes.peek().as_ref().map(|scope| {
                scope.cached_unwind.get(true).unwrap_or_else(|| {
                    span_bug!(src_info.span, "cached block not present?")
                })
            }).unwrap_or(resume_block);

            unpack!(block = build_scope_drops(
                &mut self.cfg,
                self.is_generator,
                scope,
                block,
                unwind_to,
                self.arg_count,
                true, // is generator
                true, // is cached path
            ));
        }

        self.cfg.terminate(block, src_info, TerminatorKind::GeneratorDrop);

        Some(result)
    }
    /// Creates a new source scope, nested in the current one.
    pub fn new_source_scope(&mut self,
                            span: Span,
                            lint_level: LintLevel,
                            safety: Option<Safety>) -> SourceScope {
        let parent = self.source_scope;
        debug!("new_source_scope({:?}, {:?}, {:?}) - parent({:?})={:?}",
               span, lint_level, safety,
               parent, self.source_scope_local_data.get(parent));
        let scope = self.source_scopes.push(SourceScopeData {
            span,
            parent_scope: Some(parent),
        });
        let scope_local_data = SourceScopeLocalData {
            lint_root: if let LintLevel::Explicit(lint_root) = lint_level {
                lint_root
            } else {
                self.source_scope_local_data[parent].lint_root
            },
            safety: safety.unwrap_or_else(|| {
                self.source_scope_local_data[parent].safety
            }),
        };
        self.source_scope_local_data.push(scope_local_data);
        scope
    }

    /// Given a span and the current source scope, make a SourceInfo.
    pub fn source_info(&self, span: Span) -> SourceInfo {
        SourceInfo {
            span,
            scope: self.source_scope
        }
    }
    /// Returns the scope that we should use as the lifetime of an
    /// operand. Basically, an operand must live until it is consumed.
    /// This is similar to, but not quite the same as, the temporary
    /// scope (which can be larger or smaller).
    ///
    /// Consider:
    ///
    ///     let x = foo(bar(X, Y));
    ///
    /// We wish to pop the storage for X and Y after `bar()` is
    /// called, not after the whole `let` is completed.
    ///
    /// As another example, if the second argument diverges:
    ///
    ///     foo(Box::new(2), panic!())
    ///
    /// We would allocate the box but then free it on the unwinding
    /// path; we would also emit a free on the 'success' path from
    /// panic, but that will turn out to be removed as dead-code.
    ///
    /// When building statics/constants, returns `None` since
    /// intermediate values do not have to be dropped in that case.
    pub fn local_scope(&self) -> Option<region::Scope> {
        match self.hir.body_owner_kind {
            hir::BodyOwnerKind::Const |
            hir::BodyOwnerKind::Static(_) =>
                // No need to free storage in this context.
                None,
            hir::BodyOwnerKind::Closure |
            hir::BodyOwnerKind::Fn =>
                Some(self.scopes.topmost()),
        }
    }
    // Schedule an abort block - this is used for some ABIs that cannot unwind
    pub fn schedule_abort(&mut self) -> BasicBlock {
        let source_info = self.scopes.source_info(self.scopes.len(), self.fn_span);
        let abortblk = self.cfg.start_new_cleanup_block();
        self.cfg.terminate(abortblk, source_info, TerminatorKind::Abort);
        self.cached_resume_block = Some(abortblk);
        abortblk
    }

    // Scheduling drops
    // ================
    pub fn schedule_drop_storage_and_value(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        local: Local,
        place_ty: Ty<'tcx>,
    ) {
        self.schedule_drop(span, region_scope, local, place_ty, DropKind::Storage);
        self.schedule_drop(span, region_scope, local, place_ty, DropKind::Value);
    }
    /// Indicates that `place` should be dropped on exit from `region_scope`.
    ///
    /// When called with `DropKind::Storage`, `place` should be a local
    /// with an index higher than the current `self.arg_count`.
    pub fn schedule_drop(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        local: Local,
        place_ty: Ty<'tcx>,
        drop_kind: DropKind,
    ) {
        let needs_drop = self.hir.needs_drop(place_ty);
        match drop_kind {
            DropKind::Value => if !needs_drop { return },
            DropKind::Storage => {
                if local.index() <= self.arg_count {
                    span_bug!(
                        span, "`schedule_drop` called with local {:?} and arg_count {}",
                        local,
                        self.arg_count,
                    )
                }
            }
        }
        for scope in self.scopes.iter_mut() {
            let this_scope = scope.region_scope == region_scope;
            // When building drops, we try to cache chains of drops in such a way so these drops
            // could be reused by the drops which would branch into the cached (already built)
            // blocks. This, however, means that whenever we add a drop into a scope which already
            // had some blocks built (and thus, cached) for it, we must invalidate all caches which
            // might branch into the scope which had a drop just added to it. This is necessary,
            // because otherwise some other code might use the cache to branch into an already built
            // chain of drops, essentially ignoring the newly added drop.
            //
            // For example, consider there are two scopes with a drop in each. These are built and
            // thus the caches are filled:
            //
            // +--------------------------------------------------------+
            // | +---------------------------------+                    |
            // | | +--------+     +-------------+  |  +---------------+ |
            // | | | return | <-+ | drop(outer) | <-+ |  drop(middle) | |
            // | | +--------+     +-------------+  |  +---------------+ |
            // | +------------|outer_scope cache|--+                    |
            // +------------------------------|middle_scope cache|------+
            //
            // Now, a new, inner-most scope is added along with a new drop into both inner-most and
            // outer-most scopes:
            //
            // +------------------------------------------------------------+
            // | +----------------------------------+                       |
            // | | +--------+      +-------------+  |   +---------------+   | +-------------+
            // | | | return | <+   | drop(new)   | <-+  |  drop(middle) | <--+| drop(inner) |
            // | | +--------+  |   | drop(outer) |  |   +---------------+   | +-------------+
            // | |             +-+ +-------------+  |                       |
            // | +---|invalid outer_scope cache|----+                       |
            // +---------------------|invalid middle_scope cache|-----------+
            //
            // If, when adding `drop(new)` we do not invalidate the cached blocks for both
            // outer_scope and middle_scope, then, when building drops for the inner (right-most)
            // scope, the old, cached blocks, without `drop(new)`, will get used, producing the
            // wrong MIR.
            //
            // The cache and its invalidation for the unwind branch is somewhat special. The cache
            // is per-drop, rather than per-scope, which has several different implications. Adding
            // a new drop into a scope will not invalidate cached blocks of the prior drops in the
            // scope. That is true, because none of the already existing drops will have an edge
            // into a block with the newly added drop.
            //
            // Note that this code iterates scopes from the inner-most to the outer-most,
            // invalidating caches of each scope visited. This way the bare minimum of the
            // caches gets invalidated. i.e., if a new drop is added into the middle scope, the
            // cache of the outer scope stays intact.
            scope.invalidate_cache(!needs_drop, self.is_generator, this_scope);
            if this_scope {
                let region_scope_span = region_scope.span(self.hir.tcx(),
                                                          &self.hir.region_scope_tree);
                // Attribute scope exit drops to scope's closing brace.
                let scope_end = self.hir.tcx().sess.source_map().end_point(region_scope_span);

                scope.drops.push(DropData {
                    span: scope_end,
                    local,
                    kind: drop_kind,
                    cached_block: CachedBlock::default(),
                });
                return;
            }
        }
        span_bug!(span, "region scope {:?} not in scope to drop {:?}", region_scope, local);
    }
    /// Indicates that the "local operand" stored in `local` is
    /// *moved* at some point during execution (see `local_scope` for
    /// more information about what a "local operand" is -- in short,
    /// it's an intermediate operand created as part of preparing some
    /// MIR instruction). We use this information to suppress
    /// redundant drops on the non-unwind paths. This results in less
    /// MIR, but also avoids spurious borrow check errors
    /// (c.f. #64391).
    ///
    /// Example: when compiling the call to `foo` here:
    ///
    ///     foo(bar(), ...)
    ///
    /// we would evaluate `bar()` to an operand `_X`. We would also
    /// schedule `_X` to be dropped when the expression scope for
    /// `foo(bar())` is exited. This is relevant, for example, if the
    /// later arguments should unwind (it would ensure that `_X` gets
    /// dropped). However, if no unwind occurs, then `_X` will be
    /// unconditionally consumed by the `call`:
    ///
    ///     bb {
    ///         ...
    ///         _R = CALL(foo, _X, ...)
    ///     }
    ///
    /// However, `_X` is still registered to be dropped, and so if we
    /// do nothing else, we would generate a `DROP(_X)` that occurs
    /// after the call. This will later be optimized out by the
    /// drop-elaboration code, but in the meantime it can lead to
    /// spurious borrow-check errors -- the problem, ironically, is
    /// not the `DROP(_X)` itself, but the (spurious) unwind pathways
    /// that it creates. See #64391 for an example.
    pub fn record_operands_moved(
        &mut self,
        operands: &[Operand<'tcx>],
    ) {
        let scope = match self.local_scope() {
            // if there is no local scope, operands won't be dropped anyway
            None => return,

            Some(local_scope) => {
                self.scopes.iter_mut().find(|scope| scope.region_scope == local_scope)
                    .unwrap_or_else(|| bug!("scope {:?} not found in scope list!", local_scope))
            }
        };

        // look for moves of a local variable, like `MOVE(_X)`
        let locals_moved = operands.iter().flat_map(|operand| match operand {
            Operand::Copy(_) | Operand::Constant(_) => None,
            Operand::Move(place) => place.as_local(),
        });

        for local in locals_moved {
            // check if we have a Drop for this operand and -- if so
            // -- add it to the list of moved operands. Note that this
            // local might not have been an operand created for this
            // call, it could come from other places too.
            if scope.drops.iter().any(|drop| drop.local == local && drop.kind == DropKind::Value) {
                scope.moved_locals.push(local);
            }
        }
    }
    /// Branch based on a boolean condition.
    ///
    /// This is a special case because the temporary for the condition needs to
    /// be dropped on both the true and the false arm.
    pub fn test_bool(
        &mut self,
        mut block: BasicBlock,
        condition: Expr<'tcx>,
        source_info: SourceInfo,
    ) -> (BasicBlock, BasicBlock) {
        let cond = unpack!(block = self.as_local_operand(block, condition));
        let true_block = self.cfg.start_new_block();
        let false_block = self.cfg.start_new_block();
        let term = TerminatorKind::if_(
            self.hir.tcx(),
            cond.clone(),
            true_block,
            false_block,
        );
        self.cfg.terminate(block, source_info, term);

        match cond {
            // Don't try to drop a constant
            Operand::Constant(_) => (),
            // In constants and statics, we don't generate StorageLive for this
            // temporary, so don't try to generate StorageDead for it either.
            _ if self.local_scope().is_none() => (),
            Operand::Copy(Place {
                base: PlaceBase::Local(cond_temp),
                projection: box [],
            })
            | Operand::Move(Place {
                base: PlaceBase::Local(cond_temp),
                projection: box [],
            }) => {
                // Manually drop the condition on both branches.
                let top_scope = self.scopes.scopes.last_mut().unwrap();
                let top_drop_data = top_scope.drops.pop().unwrap();

                match top_drop_data.kind {
                    DropKind::Value { .. } => {
                        bug!("Drop scheduled on top of condition variable")
                    }
                    DropKind::Storage => {
                        let source_info = top_scope.source_info(top_drop_data.span);
                        let local = top_drop_data.local;
                        assert_eq!(local, cond_temp, "Drop scheduled on top of condition");
                        self.cfg.push(
                            true_block,
                            Statement {
                                source_info,
                                kind: StatementKind::StorageDead(local)
                            },
                        );
                        self.cfg.push(
                            false_block,
                            Statement {
                                source_info,
                                kind: StatementKind::StorageDead(local)
                            },
                        );
                    }
                }

                top_scope.invalidate_cache(true, self.is_generator, true);
            }
            _ => bug!("Expected as_local_operand to produce a temporary"),
        }

        (true_block, false_block)
    }
    /// Creates a path that performs all required cleanup for unwinding.
    ///
    /// This path terminates in Resume. Returns the start of the path.
    /// See module comment for more details.
    pub fn diverge_cleanup(&mut self) -> BasicBlock {
        self.diverge_cleanup_gen(false)
    }
    fn resume_block(&mut self) -> BasicBlock {
        if let Some(target) = self.cached_resume_block {
            target
        } else {
            let resumeblk = self.cfg.start_new_cleanup_block();
            self.cfg.terminate(resumeblk,
                               SourceInfo {
                                   scope: OUTERMOST_SOURCE_SCOPE,
                                   span: self.fn_span
                               },
                               TerminatorKind::Resume);
            self.cached_resume_block = Some(resumeblk);
            resumeblk
        }
    }
    fn diverge_cleanup_gen(&mut self, generator_drop: bool) -> BasicBlock {
        // Build up the drops in **reverse** order. The end result will
        // look like:
        //
        // scopes[n] -> scopes[n-1] -> ... -> scopes[0]
        //
        // However, we build this in **reverse order**. That is, we
        // process scopes[0], then scopes[1], etc, pointing each one at
        // the result generated from the one before. Along the way, we
        // store caches. If everything is cached, we'll just walk right
        // to left reading the cached results but never create anything.

        // Find the last cached block
        debug!("diverge_cleanup_gen(self.scopes = {:?})", self.scopes);
        let cached_cleanup = self.scopes.iter_mut().enumerate()
            .find_map(|(idx, ref scope)| {
                let cached_block = scope.cached_unwind.get(generator_drop)?;
                Some((cached_block, idx))
            });
        let (mut target, first_uncached) = cached_cleanup
            .unwrap_or_else(|| (self.resume_block(), self.scopes.len()));

        for scope in self.scopes.top_scopes(first_uncached) {
            target = build_diverge_scope(&mut self.cfg, scope.region_scope_span,
                                         scope, target, generator_drop, self.is_generator);
        }

        target
    }
    /// Utility function for *non*-scope code to build their own drops
    pub fn build_drop_and_replace(&mut self,
                                  block: BasicBlock,
                                  span: Span,
                                  location: Place<'tcx>,
                                  value: Operand<'tcx>) -> BlockAnd<()> {
        let source_info = self.source_info(span);
        let next_target = self.cfg.start_new_block();
        let diverge_target = self.diverge_cleanup();
        self.cfg.terminate(block, source_info,
                           TerminatorKind::DropAndReplace {
                               location,
                               value,
                               target: next_target,
                               unwind: Some(diverge_target),
                           });
        next_target.unit()
    }
    /// Creates an Assert terminator and returns the success block.
    /// If the boolean condition operand is not the expected value,
    /// a runtime panic will be caused with the given message.
    pub fn assert(&mut self, block: BasicBlock,
                  cond: Operand<'tcx>,
                  expected: bool,
                  msg: AssertMessage<'tcx>,
                  span: Span)
                  -> BasicBlock {
        let source_info = self.source_info(span);

        let success_block = self.cfg.start_new_block();
        let cleanup = self.diverge_cleanup();

        self.cfg.terminate(block, source_info,
                           TerminatorKind::Assert {
                               cond,
                               expected,
                               msg,
                               target: success_block,
                               cleanup: Some(cleanup),
                           });

        success_block
    }
    // `match` arm scopes
    // ==================
    /// Unschedules any drops in the top scope.
    ///
    /// This is only needed for `match` arm scopes, because they have one
    /// entrance per pattern, but only one exit.
    pub(crate) fn clear_top_scope(&mut self, region_scope: region::Scope) {
        let top_scope = self.scopes.scopes.last_mut().unwrap();

        assert_eq!(top_scope.region_scope, region_scope);

        top_scope.drops.clear();
        top_scope.invalidate_cache(false, self.is_generator, true);
    }
}
/// Builds drops for pop_scope and exit_scope.
fn build_scope_drops<'tcx>(
    cfg: &mut CFG<'tcx>,
    is_generator: bool,
    scope: &Scope,
    mut block: BasicBlock,
    last_unwind_to: BasicBlock,
    arg_count: usize,
    generator_drop: bool,
    is_cached_path: bool,
) -> BlockAnd<()> {
    debug!("build_scope_drops({:?} -> {:?})", block, scope);

    // Build up the drops in evaluation order. The end result will
    // look like:
    //
    // [SDs, drops[n]] --..> [SDs, drop[1]] -> [SDs, drop[0]] -> [[SDs]]
    //        |                    |               |
    //        v                    v               v
    // [drop[n]] -...-> [drop[1]] ------> [drop[0]] ------> [last_unwind_to]
    //
    // The horizontal arrows represent the execution path when the drops return
    // successfully. The downwards arrows represent the execution path when the
    // drops panic (panicking while unwinding will abort, so there's no need for
    // another set of arrows).
    //
    // For generators, we unwind from a drop on a local to its StorageDead
    // statement. For other functions we don't worry about StorageDead. The
    // drops for the unwind path should have already been generated by
    // `diverge_cleanup_gen`.

    for drop_idx in (0..scope.drops.len()).rev() {
        let drop_data = &scope.drops[drop_idx];
        let source_info = scope.source_info(drop_data.span);
        let local = drop_data.local;

        match drop_data.kind {
            DropKind::Value => {
                // If the operand has been moved, and we are not on an unwind
                // path, then don't generate the drop. (We only take this into
                // account for non-unwind paths so as not to disturb the
                // caching mechanism.)
                if !is_cached_path && scope.moved_locals.iter().any(|&o| o == local) {
                    continue;
                }

                let unwind_to = get_unwind_to(scope, is_generator, drop_idx, generator_drop)
                    .unwrap_or(last_unwind_to);

                let next = cfg.start_new_block();
                cfg.terminate(block, source_info, TerminatorKind::Drop {
                    location: local.into(),
                    target: next,
                    unwind: Some(unwind_to)
                });
                block = next;
            }
            DropKind::Storage => {
                // Only temps and vars need their storage dead.
                assert!(local.index() > arg_count);
                cfg.push(block, Statement {
                    source_info,
                    kind: StatementKind::StorageDead(local)
                });
            }
        }
    }

    block.unit()
}
fn get_unwind_to(
    scope: &Scope,
    is_generator: bool,
    unwind_from: usize,
    generator_drop: bool,
) -> Option<BasicBlock> {
    for drop_idx in (0..unwind_from).rev() {
        let drop_data = &scope.drops[drop_idx];
        match (is_generator, &drop_data.kind) {
            (true, DropKind::Storage) => {
                return Some(drop_data.cached_block.get(generator_drop).unwrap_or_else(|| {
                    span_bug!(drop_data.span, "cached block not present for {:?}", drop_data)
                }));
            }
            (false, DropKind::Value) => {
                return Some(drop_data.cached_block.get(generator_drop).unwrap_or_else(|| {
                    span_bug!(drop_data.span, "cached block not present for {:?}", drop_data)
                }));
            }
            _ => (),
        }
    }
    None
}
fn build_diverge_scope<'tcx>(cfg: &mut CFG<'tcx>,
                             span: Span,
                             scope: &mut Scope,
                             mut target: BasicBlock,
                             generator_drop: bool,
                             is_generator: bool)
                             -> BasicBlock {
    // Build up the drops in **reverse** order. The end result will
    // look like:
    //
    // [drops[n]] -...-> [drops[0]] -> [target]
    //
    // The code in this function reads from right to left. At each
    // point, we check for cached blocks representing the
    // remainder. If everything is cached, we'll just walk right to
    // left reading the cached results but never create anything.

    let source_scope = scope.source_scope;
    let source_info = |span| SourceInfo {
        span,
        scope: source_scope
    };

    // We keep track of StorageDead statements to prepend to our current block
    // and store them here, in reverse order.
    let mut storage_deads = vec![];

    let mut target_built_by_us = false;

    // Build up the drops. Here we iterate the vector in
    // *forward* order, so that we generate drops[0] first (right to
    // left in diagram above).
    debug!("build_diverge_scope({:?})", scope.drops);
    for (j, drop_data) in scope.drops.iter_mut().enumerate() {
        debug!("build_diverge_scope drop_data[{}]: {:?}", j, drop_data);
        // Only full value drops are emitted in the diverging path,
        // not StorageDead, except in the case of generators.
        //
        // Note: This may not actually be what we desire (are we
        // "freeing" stack storage as we unwind, or merely observing a
        // frozen stack)? In particular, the intent may have been to
        // match the behavior of clang, but on inspection eddyb says
        // this is not what clang does.
        match drop_data.kind {
            DropKind::Storage if is_generator => {
                storage_deads.push(Statement {
                    source_info: source_info(drop_data.span),
                    kind: StatementKind::StorageDead(drop_data.local)
                });
                if !target_built_by_us {
                    // We cannot add statements to an existing block, so we create a new
                    // block for our StorageDead statements.
                    let block = cfg.start_new_cleanup_block();
                    let source_info = SourceInfo { span: DUMMY_SP, scope: source_scope };
                    cfg.terminate(block, source_info,
                                  TerminatorKind::Goto { target: target });
                    target = block;
                    target_built_by_us = true;
                }
                *drop_data.cached_block.ref_mut(generator_drop) = Some(target);
            }
            DropKind::Storage => {}
            DropKind::Value => {
                let cached_block = drop_data.cached_block.ref_mut(generator_drop);
                target = if let Some(cached_block) = *cached_block {
                    storage_deads.clear();
                    target_built_by_us = false;
                    cached_block
                } else {
                    push_storage_deads(cfg, target, &mut storage_deads);
                    let block = cfg.start_new_cleanup_block();
                    cfg.terminate(
                        block,
                        source_info(drop_data.span),
                        TerminatorKind::Drop {
                            location: drop_data.local.into(),
                            target,
                            unwind: None,
                        },
                    );
                    *cached_block = Some(block);
                    target_built_by_us = true;
                    block
                };
            }
        };
    }
    push_storage_deads(cfg, target, &mut storage_deads);
    *scope.cached_unwind.ref_mut(generator_drop) = Some(target);

    assert!(storage_deads.is_empty());
    debug!("build_diverge_scope({:?}, {:?}) = {:?}", scope, span, target);

    target
}
fn push_storage_deads<'tcx>(cfg: &mut CFG<'tcx>,
                            target: BasicBlock,
                            storage_deads: &mut Vec<Statement<'tcx>>) {
    if storage_deads.is_empty() { return; }
    let statements = &mut cfg.block_data_mut(target).statements;
    storage_deads.reverse();
    debug!("push_storage_deads({:?}), storage_deads={:?}, statements={:?}",
           target, storage_deads, statements);
    storage_deads.append(statements);
    mem::swap(statements, storage_deads);
    assert!(storage_deads.is_empty());
}