// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

/*!
Managing the scope stack. The scopes are tied to lexical scopes, so as
we descend the HAIR, we push a scope on the stack, translate its
contents, and then pop it off. Every scope is named by a
`CodeExtent`.

When pushing a new scope, we record the current point in the graph (a
basic block); this marks the entry to the scope. We then generate more
stuff in the control-flow graph. Whenever the scope is exited, either
via a `break` or `return` or just by fallthrough, that marks an exit
from the scope. Each lexical scope thus corresponds to a single-entry,
multiple-exit (SEME) region in the control-flow graph.
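
For instance, every exit from the loop body below must drop `guard`, so
the body forms one SEME region with several exit edges (schematic
source; `acquire_lock`, `a` and `b` are hypothetical):

```
loop {
    let guard = acquire_lock(); // drop of `guard` is scheduled here
    if a { break; }             // exit 1: drops `guard`
    if b { return; }            // exit 2: drops `guard`
}                               // exit 3 (fallthrough): drops `guard`
```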

For now, we keep a mapping from each `CodeExtent` to its
corresponding SEME region for later reference (see caveat in next
paragraph). This is because region scopes are tied to
them. Eventually, when we shift to non-lexical lifetimes, there should
be no need to remember this mapping.

There is one additional wrinkle, actually, that I wanted to hide from
you but duty compels me to mention. In the course of translating
matches, it sometimes happens that certain code (namely guards) gets
executed multiple times. This means that a single lexical scope may
in fact correspond to multiple, disjoint SEME regions. So in fact our
mapping is from one scope to a vector of SEME regions.
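
For example, in the (schematic) match below the guard `cond(x)` is
tested once per pattern alternative as the lowered match is traversed,
so the guard's code can land in multiple disjoint regions of the CFG:

```
match p {
    (x, _) | (_, x) if cond(x) => { /* arm body */ }
    _ => { /* other arm */ }
}
```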

The primary purpose for scopes is to insert drops: while translating
the contents, we also accumulate lvalues that need to be dropped upon
exit from each scope. This is done by calling `schedule_drop`. Once a
drop is scheduled, whenever we branch out we will insert drops of all
those lvalues onto the outgoing edge. Note that we don't know the full
set of scheduled drops up front, and so whenever we exit from the
scope we only drop the values scheduled thus far. For example, consider
the scope S corresponding to this loop:
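
```
loop {
    let x = ..;
    if cond { break; }
    let y = ..;
}
```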

When processing the `let x`, we will add one drop to the scope for
`x`. The break will then insert a drop for `x`. When we process `let
y`, we will add another drop (in fact, to a subscope, but let's ignore
that for now); any later drops would also drop `y`.

There are numerous "normal" ways to early exit a scope: `break`,
`continue`, `return` (panics are handled separately). Whenever an
early exit occurs, the method `exit_scope` is called. It is given the
current point in execution where the early exit occurs, as well as the
scope you want to branch to (note that all early exits branch to some
other enclosing scope). `exit_scope` will record this exit point and
also add all drops.
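
As a rough sketch (hypothetical caller; the real lowering lives in the
expression-building code), a `break` is handled by locating the loop
scope and then exiting every scope in between:

```
// Find where `break` should branch to, then emit all intervening drops.
let (break_block, extent) = {
    let loop_scope = this.find_loop_scope(span, label);
    (loop_scope.break_block, loop_scope.extent)
};
this.exit_scope(span, extent, block, break_block);
```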

Panics are handled in a similar fashion, except that a panic always
returns out to the `DIVERGE_BLOCK`. To trigger a panic, simply call
`panic(p)` with the current point `p`. Or else you can call
`diverge_cleanup`, which will produce a block that you can branch to
which does the appropriate cleanup and then diverges. `panic(p)`
simply calls `diverge_cleanup()` and adds an edge from `p` to the
result.
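
For example, `build_drop` below uses this pattern to give a potentially
panicking `Drop` terminator its unwind edge (a sketch of the pattern,
not a complete function):

```
let unwind = this.diverge_cleanup(); // cleanup chain, built on demand
this.cfg.terminate(block, source_info, TerminatorKind::Drop {
    location: lvalue,
    target: next_block,
    unwind: unwind,
});
```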

In addition to the normal scope stack, we track a loop scope stack
that contains only loops. It tracks where a `break` or `continue`
should branch to.
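
A loop is therefore lowered roughly like this (schematic; the block and
destination names are hypothetical):

```
let loop_block = this.cfg.start_new_block();  // `continue` target
let exit_block = this.cfg.start_new_block();  // `break` target
this.in_loop_scope(loop_block, exit_block, destination, |this| {
    // ...translate the loop body; `break`/`continue` consult the
    // `LoopScope` pushed by `in_loop_scope`...
});
```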

*/

use build::{BlockAnd, BlockAndExtension, Builder, CFG};
use rustc::middle::region::{CodeExtent, CodeExtentData};
use rustc::middle::lang_items;
use rustc::ty::subst::{Kind, Subst};
use rustc::ty::{Ty, TyCtxt};
use rustc::mir::*;
use syntax_pos::Span;
use rustc_data_structures::indexed_vec::Idx;
use rustc_data_structures::fx::FxHashMap;

pub struct Scope<'tcx> {
    /// The visibility scope this scope was created in.
    visibility_scope: VisibilityScope,

    /// The extent of this scope within source code.
    extent: CodeExtent,

    /// Whether there's anything to do for the cleanup path, that is,
    /// when unwinding through this scope. This includes destructors,
    /// but not StorageDead statements, which don't get emitted at all
    /// for unwinding, for several reasons:
    ///  * clang doesn't emit llvm.lifetime.end for C++ unwinding
    ///  * LLVM's memory dependency analysis can't handle it atm
    ///  * polluting the cleanup MIR with StorageDead creates
    ///    landing pads even though there are no actual destructors
    ///  * freeing up stack space has no effect during unwinding
    needs_cleanup: bool,

    /// Set of lvalues to drop when exiting this scope. This starts
    /// out empty but grows as variables are declared during the
    /// building process. This is a stack, so we always drop from the
    /// end of the vector (top of the stack) first.
    drops: Vec<DropData<'tcx>>,

    /// A scope may only have one associated free, because:
    ///
    /// 1. We require a `free` to only be scheduled in the scope of
    ///    `EXPR` in `box EXPR`;
    /// 2. It only makes sense to have it translated into the diverge-path.
    ///
    /// This kind of drop will be run *after* all the regular drops
    /// scheduled onto this scope, because drops may have dependencies
    /// on the allocated memory.
    ///
    /// This is expected to go away once `box EXPR` becomes a sugar
    /// for placement protocol and gets desugared in some earlier
    /// stage.
    free: Option<FreeData<'tcx>>,

    /// The cache for drop chain on “normal” exit into a particular BasicBlock.
    cached_exits: FxHashMap<(BasicBlock, CodeExtent), BasicBlock>,
}

struct DropData<'tcx> {
    /// span where drop obligation was incurred (typically where lvalue was declared)
    span: Span,

    /// lvalue to drop
    location: Lvalue<'tcx>,

    /// Whether this is a full value Drop, or just a StorageDead.
    kind: DropKind
}

enum DropKind {
    Value {
        /// The cached block for the cleanups-on-diverge path. This block
        /// contains code to run the current drop and all the preceding
        /// drops (i.e. those having lower index in Drop’s Scope drop
        /// array)
        cached_block: Option<BasicBlock>
    },
    Storage
}

struct FreeData<'tcx> {
    /// span where free obligation was incurred
    span: Span,

    /// Lvalue containing the allocated box.
    value: Lvalue<'tcx>,

    /// Type of the item for which the box was allocated (i.e. the T in Box<T>).
    item_ty: Ty<'tcx>,

    /// The cached block containing code to run the free. The block will also execute all the drops
    /// in the scope.
    cached_block: Option<BasicBlock>
}

#[derive(Clone, Debug)]
pub struct LoopScope<'tcx> {
    /// Extent of the loop
    pub extent: CodeExtent,
    /// Where the body of the loop begins
    pub continue_block: BasicBlock,
    /// Block to branch into when the loop terminates (either by being `break`-en out from, or by
    /// having its condition become false)
    pub break_block: BasicBlock,
    /// The destination of the loop expression itself (i.e. where to put the result of a `break`
    /// expression)
    pub break_destination: Lvalue<'tcx>,
}

impl<'tcx> Scope<'tcx> {
    /// Invalidate all the cached blocks in the scope.
    ///
    /// Should always be run for all inner scopes when a drop is pushed into some scope enclosing a
    /// larger extent of code.
    ///
    /// `unwind` controls whether caches for the unwind branch are also invalidated.
    fn invalidate_cache(&mut self, unwind: bool) {
        self.cached_exits.clear();
        if !unwind { return; }
        for dropdata in &mut self.drops {
            if let DropKind::Value { ref mut cached_block } = dropdata.kind {
                *cached_block = None;
            }
        }
        if let Some(ref mut freedata) = self.free {
            freedata.cached_block = None;
        }
    }

    /// Returns the cached entrypoint for diverging exit from this scope.
    ///
    /// Precondition: the caches must be fully filled (i.e. diverge_cleanup is called) in order for
    /// this method to work correctly.
    fn cached_block(&self) -> Option<BasicBlock> {
        let mut drops = self.drops.iter().rev().filter_map(|data| {
            match data.kind {
                DropKind::Value { cached_block } => Some(cached_block),
                DropKind::Storage => None
            }
        });
        if let Some(cached_block) = drops.next() {
            Some(cached_block.expect("drop cache is not filled"))
        } else if let Some(ref data) = self.free {
            Some(data.cached_block.expect("free cache is not filled"))
        } else {
            None
        }
    }

    /// Given a span and this scope's visibility scope, make a SourceInfo.
    fn source_info(&self, span: Span) -> SourceInfo {
        SourceInfo {
            span: span,
            scope: self.visibility_scope
        }
    }
}

impl<'a, 'gcx, 'tcx> Builder<'a, 'gcx, 'tcx> {
    // Adding and removing scopes
    // ==========================
    /// Start a loop scope, which tracks where `continue` and `break`
    /// should branch to. See module comment for more details.
    pub fn in_loop_scope<F>(&mut self,
                            loop_block: BasicBlock,
                            break_block: BasicBlock,
                            break_destination: Lvalue<'tcx>,
                            f: F)
        where F: FnOnce(&mut Builder<'a, 'gcx, 'tcx>)
    {
        let extent = self.scopes.last().map(|scope| scope.extent).unwrap();
        let loop_scope = LoopScope {
            extent: extent.clone(),
            continue_block: loop_block,
            break_block: break_block,
            break_destination: break_destination,
        };
        self.loop_scopes.push(loop_scope);
        f(self);
        let loop_scope = self.loop_scopes.pop().unwrap();
        assert!(loop_scope.extent == extent);
    }

    /// Convenience wrapper that pushes a scope and then executes `f`
    /// to build its contents, popping the scope afterwards.
    pub fn in_scope<F, R>(&mut self, extent: CodeExtent, mut block: BasicBlock, f: F) -> BlockAnd<R>
        where F: FnOnce(&mut Builder<'a, 'gcx, 'tcx>) -> BlockAnd<R>
    {
        debug!("in_scope(extent={:?}, block={:?})", extent, block);
        self.push_scope(extent);
        let rv = unpack!(block = f(self));
        unpack!(block = self.pop_scope(extent, block));
        debug!("in_scope: exiting extent={:?} block={:?}", extent, block);
        block.and(rv)
    }

    /// Push a scope onto the stack. You can then build code in this
    /// scope and call `pop_scope` afterwards. Note that these two
    /// calls must be paired; using `in_scope` as a convenience
    /// wrapper may be preferable.
    pub fn push_scope(&mut self, extent: CodeExtent) {
        debug!("push_scope({:?})", extent);
        let vis_scope = self.visibility_scope;
        self.scopes.push(Scope {
            visibility_scope: vis_scope,
            extent: extent,
            needs_cleanup: false,
            drops: vec![],
            free: None,
            cached_exits: FxHashMap()
        });
    }

    /// Pops a scope, which should have extent `extent`, adding any
    /// drops onto the end of `block` that are needed. This must
    /// match 1-to-1 with `push_scope`.
    pub fn pop_scope(&mut self,
                     extent: CodeExtent,
                     mut block: BasicBlock)
                     -> BlockAnd<()> {
        debug!("pop_scope({:?}, {:?})", extent, block);
        // We need to have `cached_block`s available for all the drops, so we call diverge_cleanup
        // to make sure all the `cached_block`s are filled in.
        self.diverge_cleanup();
        let scope = self.scopes.pop().unwrap();
        assert_eq!(scope.extent, extent);
        unpack!(block = build_scope_drops(&mut self.cfg,
                                          &scope,
                                          &self.scopes,
                                          block,
                                          self.arg_count));
        block.unit()
    }

    /// Branch out of `block` to `target`, exiting all scopes up to
    /// and including `extent`. This will insert whatever drops are
    /// needed, as well as tracking this exit for the SEME region. See
    /// module comment for details.
    pub fn exit_scope(&mut self,
                      span: Span,
                      extent: CodeExtent,
                      mut block: BasicBlock,
                      target: BasicBlock) {
        debug!("exit_scope(extent={:?}, block={:?}, target={:?})", extent, block, target);
        let scope_count = 1 + self.scopes.iter().rev().position(|scope| scope.extent == extent)
                                                      .unwrap_or_else(|| {
            span_bug!(span, "extent {:?} does not enclose", extent)
        });
        let len = self.scopes.len();
        assert!(scope_count < len, "should not use `exit_scope` to pop ALL scopes");
        let tmp = self.get_unit_temp();

        let mut rest = &mut self.scopes[(len - scope_count)..];
        while let Some((scope, rest_)) = {rest}.split_last_mut() {
            rest = rest_;
            block = if let Some(&e) = scope.cached_exits.get(&(target, extent)) {
                self.cfg.terminate(block, scope.source_info(span),
                                   TerminatorKind::Goto { target: e });
                return;
            } else {
                let b = self.cfg.start_new_block();
                self.cfg.terminate(block, scope.source_info(span),
                                   TerminatorKind::Goto { target: b });
                scope.cached_exits.insert((target, extent), b);
                b
            };
            unpack!(block = build_scope_drops(&mut self.cfg,
                                              scope,
                                              rest,
                                              block,
                                              self.arg_count));
            if let Some(ref free_data) = scope.free {
                let next = self.cfg.start_new_block();
                let free = build_free(self.hir.tcx(), &tmp, free_data, next);
                self.cfg.terminate(block, scope.source_info(span), free);
                block = next;
            }
        }

        let scope = &self.scopes[len - scope_count];
        self.cfg.terminate(block, scope.source_info(span),
                           TerminatorKind::Goto { target: target });
    }

    /// Creates a new visibility scope, nested in the current one.
    pub fn new_visibility_scope(&mut self, span: Span) -> VisibilityScope {
        let parent = self.visibility_scope;
        let scope = VisibilityScope::new(self.visibility_scopes.len());
        self.visibility_scopes.push(VisibilityScopeData {
            span: span,
            parent_scope: Some(parent),
        });
        scope
    }

    // Finding scopes
    // ==============
    /// Finds the loop scope for a given label. This is used for
    /// resolving `break` and `continue`.
    pub fn find_loop_scope(&mut self,
                           span: Span,
                           label: Option<CodeExtent>)
                           -> &mut LoopScope<'tcx> {
        let loop_scopes = &mut self.loop_scopes;
        match label {
            None => {
                // no label? return the innermost loop scope
                loop_scopes.iter_mut().rev().next()
            }
            Some(label) => {
                // otherwise, find the loop-scope with the correct id
                loop_scopes.iter_mut()
                           .rev()
                           .filter(|loop_scope| loop_scope.extent == label)
                           .next()
            }
        }.unwrap_or_else(|| span_bug!(span, "no enclosing loop scope found?"))
    }

    /// Given a span and the current visibility scope, make a SourceInfo.
    pub fn source_info(&self, span: Span) -> SourceInfo {
        SourceInfo {
            span: span,
            scope: self.visibility_scope
        }
    }

    /// Returns the extent of the scope which should be exited by a
    /// return.
    pub fn extent_of_return_scope(&self) -> CodeExtent {
        // The outermost scope (`scopes[0]`) will be the `CallSiteScope`.
        // We want `scopes[1]`, which is the `ParameterScope`.
        assert!(self.scopes.len() >= 2);
        assert!(match self.hir.tcx().region_maps.code_extent_data(self.scopes[1].extent) {
            CodeExtentData::ParameterScope { .. } => true,
            _ => false,
        });
        self.scopes[1].extent
    }

    // Scheduling drops
    // ================
    /// Indicates that `lvalue` should be dropped on exit from
    /// `extent`.
    pub fn schedule_drop(&mut self,
                         span: Span,
                         extent: CodeExtent,
                         lvalue: &Lvalue<'tcx>,
                         lvalue_ty: Ty<'tcx>) {
        let needs_drop = self.hir.needs_drop(lvalue_ty);
        let drop_kind = if needs_drop {
            DropKind::Value { cached_block: None }
        } else {
            // Only temps and vars need their storage dead.
            match *lvalue {
                Lvalue::Local(index) if index.index() > self.arg_count => DropKind::Storage,
                _ => return
            }
        };

        for scope in self.scopes.iter_mut().rev() {
            let this_scope = scope.extent == extent;
            // When building drops, we try to cache chains of drops in such a way that these drops
            // can be reused by other drops which branch into the cached (already built)
            // blocks. This, however, means that whenever we add a drop into a scope which already
            // had some blocks built (and thus, cached) for it, we must invalidate all caches which
            // might branch into the scope which had a drop just added to it. This is necessary,
            // because otherwise some other code might use the cache to branch into an already built
            // chain of drops, essentially ignoring the newly added drop.
            //
            // For example, consider there are two scopes with a drop in each. These are built and
            // thus the caches are filled:
            //
            // +--------------------------------------------------------+
            // | +---------------------------------+                    |
            // | | +--------+     +-------------+  |  +---------------+ |
            // | | | return | <-+ | drop(outer) | <-+ | drop(middle)  | |
            // | | +--------+     +-------------+  |  +---------------+ |
            // | +------------|outer_scope cache|--+                    |
            // +------------------------------|middle_scope cache|------+
            //
            // Now, a new, inner-most scope is added along with a new drop into both inner-most and
            // outer-most scopes:
            //
            // +------------------------------------------------------------+
            // | +----------------------------------+                       |
            // | | +--------+      +-------------+  |   +---------------+   | +-------------+
            // | | | return | <+   | drop(new)   | <-+  | drop(middle)  | <--+| drop(inner) |
            // | | +--------+  |   | drop(outer) |  |   +---------------+   | +-------------+
            // | |             +-+ +-------------+  |                       |
            // | +---|invalid outer_scope cache|----+                       |
            // +----------------------|invalid middle_scope cache|----------+
            //
            // If, when adding `drop(new)`, we do not invalidate the cached blocks for both
            // outer_scope and middle_scope, then, when building drops for the inner (right-most)
            // scope, the old, cached blocks, without `drop(new)`, will get used, producing the
            // wrong results.
            //
            // The cache and its invalidation for the unwind branch is somewhat special. The cache
            // is per-drop, rather than per scope, which has several different implications. Adding
            // a new drop into a scope will not invalidate cached blocks of the prior drops in the
            // scope. That is true, because none of the already existing drops will have an edge
            // into a block with the newly added drop.
            //
            // Note that this code iterates scopes from the inner-most to the outer-most,
            // invalidating caches of each scope visited. This way the bare minimum of
            // caches gets invalidated, i.e. if a new drop is added into the middle scope, the
            // cache of the outer scope stays intact.
            let invalidate_unwind = needs_drop && !this_scope;
            scope.invalidate_cache(invalidate_unwind);
            if this_scope {
                if let DropKind::Value { .. } = drop_kind {
                    scope.needs_cleanup = true;
                }
                let tcx = self.hir.tcx();
                let extent_span = extent.span(&tcx.region_maps, &tcx.map).unwrap();
                // Attribute scope exit drops to scope's closing brace
                let scope_end = Span { lo: extent_span.hi, ..extent_span };
                scope.drops.push(DropData {
                    span: scope_end,
                    location: lvalue.clone(),
                    kind: drop_kind
                });
                return;
            }
        }
        span_bug!(span, "extent {:?} not in scope to drop {:?}", extent, lvalue);
    }

    /// Schedule dropping of a not-yet-fully-initialised box.
    ///
    /// This cleanup will only be translated into the unwind branch.
    /// The extent should be for the `EXPR` inside `box EXPR`.
    /// There may only be one “free” scheduled in any given scope.
    pub fn schedule_box_free(&mut self,
                             span: Span,
                             extent: CodeExtent,
                             value: &Lvalue<'tcx>,
                             item_ty: Ty<'tcx>) {
        for scope in self.scopes.iter_mut().rev() {
            // See the comment in schedule_drop above. The primary difference is that we invalidate
            // the unwind blocks unconditionally. That’s because the box free may be considered
            // the outer-most cleanup within the scope.
            scope.invalidate_cache(true);
            if scope.extent == extent {
                assert!(scope.free.is_none(), "scope already has a scheduled free!");
                scope.needs_cleanup = true;
                scope.free = Some(FreeData {
                    span: span,
                    value: value.clone(),
                    item_ty: item_ty,
                    cached_block: None
                });
                return;
            }
        }
        span_bug!(span, "extent {:?} not in scope to free {:?}", extent, value);
    }

    /// Creates a path that performs all required cleanup for unwinding.
    ///
    /// This path terminates in Resume. Returns the start of the path.
    /// See module comment for more details. None indicates there’s no
    /// cleanup to do at this point.
    pub fn diverge_cleanup(&mut self) -> Option<BasicBlock> {
        if !self.scopes.iter().any(|scope| scope.needs_cleanup) {
            return None;
        }
        assert!(!self.scopes.is_empty()); // or `any` above would be false

        let unit_temp = self.get_unit_temp();
        let Builder { ref mut hir, ref mut cfg, ref mut scopes,
                      ref mut cached_resume_block, .. } = *self;

        // Build up the drops in **reverse** order. The end result will
        // look like:
        //
        //    scopes[n] -> scopes[n-1] -> ... -> scopes[0]
        //
        // However, we build this in **reverse order**. That is, we
        // process scopes[0], then scopes[1], etc, pointing each one at
        // the result generated from the one before. Along the way, we
        // store caches. If everything is cached, we'll just walk right
        // to left reading the cached results but never create anything.

        // To start, create the resume terminator.
        let mut target = if let Some(target) = *cached_resume_block {
            target
        } else {
            let resumeblk = cfg.start_new_cleanup_block();
            cfg.terminate(resumeblk,
                          scopes[0].source_info(self.fn_span),
                          TerminatorKind::Resume);
            *cached_resume_block = Some(resumeblk);
            resumeblk
        };

        for scope in scopes.iter_mut().filter(|s| s.needs_cleanup) {
            target = build_diverge_scope(hir.tcx(), cfg, &unit_temp, scope, target);
        }
        Some(target)
    }

    /// Utility function for *non*-scope code to build their own drops
    pub fn build_drop(&mut self,
                      block: BasicBlock,
                      span: Span,
                      location: Lvalue<'tcx>,
                      ty: Ty<'tcx>) -> BlockAnd<()> {
        if !self.hir.needs_drop(ty) {
            return block.unit();
        }
        let source_info = self.source_info(span);
        let next_target = self.cfg.start_new_block();
        let diverge_target = self.diverge_cleanup();
        self.cfg.terminate(block, source_info,
                           TerminatorKind::Drop {
                               location: location,
                               target: next_target,
                               unwind: diverge_target,
                           });
        next_target.unit()
    }

    /// Utility function for *non*-scope code to build their own drops
    pub fn build_drop_and_replace(&mut self,
                                  block: BasicBlock,
                                  span: Span,
                                  location: Lvalue<'tcx>,
                                  value: Operand<'tcx>) -> BlockAnd<()> {
        let source_info = self.source_info(span);
        let next_target = self.cfg.start_new_block();
        let diverge_target = self.diverge_cleanup();
        self.cfg.terminate(block, source_info,
                           TerminatorKind::DropAndReplace {
                               location: location,
                               value: value,
                               target: next_target,
                               unwind: diverge_target,
                           });
        next_target.unit()
    }

    /// Create an Assert terminator and return the success block.
    /// If the boolean condition operand is not the expected value,
    /// a runtime panic will be caused with the given message.
    pub fn assert(&mut self, block: BasicBlock,
                  cond: Operand<'tcx>,
                  expected: bool,
                  msg: AssertMessage<'tcx>,
                  span: Span)
                  -> BasicBlock {
        let source_info = self.source_info(span);

        let success_block = self.cfg.start_new_block();
        let cleanup = self.diverge_cleanup();

        self.cfg.terminate(block, source_info,
                           TerminatorKind::Assert {
                               cond: cond,
                               expected: expected,
                               msg: msg,
                               target: success_block,
                               cleanup: cleanup
                           });

        success_block
    }
}

/// Builds drops for pop_scope and exit_scope.
fn build_scope_drops<'tcx>(cfg: &mut CFG<'tcx>,
                           scope: &Scope<'tcx>,
                           earlier_scopes: &[Scope<'tcx>],
                           mut block: BasicBlock,
                           arg_count: usize)
                           -> BlockAnd<()> {
    let mut iter = scope.drops.iter().rev().peekable();
    while let Some(drop_data) = iter.next() {
        let source_info = scope.source_info(drop_data.span);
        if let DropKind::Value { .. } = drop_data.kind {
            // Try to find the next block with its cached block
            // for us to diverge into in case the drop panics.
            let on_diverge = iter.peek().iter().filter_map(|dd| {
                match dd.kind {
                    DropKind::Value { cached_block } => cached_block,
                    DropKind::Storage => None
                }
            }).next();
            // If there’s no `cached_block` within the current scope,
            // we must look for one in the enclosing scope.
            let on_diverge = on_diverge.or_else(|| {
                earlier_scopes.iter().rev().flat_map(|s| s.cached_block()).next()
            });
            let next = cfg.start_new_block();
            cfg.terminate(block, source_info, TerminatorKind::Drop {
                location: drop_data.location.clone(),
                target: next,
                unwind: on_diverge
            });
            block = next;
        }
        match drop_data.kind {
            DropKind::Value { .. } |
            DropKind::Storage => {
                // Only temps and vars need their storage dead.
                match drop_data.location {
                    Lvalue::Local(index) if index.index() > arg_count => {}
                    _ => continue
                }

                cfg.push(block, Statement {
                    source_info: source_info,
                    kind: StatementKind::StorageDead(drop_data.location.clone())
                });
            }
        }
    }
    block.unit()
}

fn build_diverge_scope<'a, 'gcx, 'tcx>(tcx: TyCtxt<'a, 'gcx, 'tcx>,
                                       cfg: &mut CFG<'tcx>,
                                       unit_temp: &Lvalue<'tcx>,
                                       scope: &mut Scope<'tcx>,
                                       mut target: BasicBlock)
                                       -> BasicBlock
{
    // Build up the drops in **reverse** order. The end result will
    // look like:
    //
    //    [drops[n]] -...-> [drops[0]] -> [Free] -> [target]
    //    |                                    |
    //    +------------------------------------+
    //     code for scope
    //
    // The code in this function reads from right to left. At each
    // point, we check for cached blocks representing the
    // remainder. If everything is cached, we'll just walk right to
    // left reading the cached results but never create anything.

    let visibility_scope = scope.visibility_scope;
    let source_info = |span| SourceInfo {
        span: span,
        scope: visibility_scope
    };

    // Next, build up any free.
    if let Some(ref mut free_data) = scope.free {
        target = if let Some(cached_block) = free_data.cached_block {
            cached_block
        } else {
            let into = cfg.start_new_cleanup_block();
            cfg.terminate(into, source_info(free_data.span),
                          build_free(tcx, unit_temp, free_data, target));
            free_data.cached_block = Some(into);
            into
        };
    }

    // Next, build up the drops. Here we iterate the vector in
    // *forward* order, so that we generate drops[0] first (right to
    // left in diagram above).
    for drop_data in &mut scope.drops {
        // Only full value drops are emitted in the diverging path,
        // not StorageDead.
        let cached_block = match drop_data.kind {
            DropKind::Value { ref mut cached_block } => cached_block,
            DropKind::Storage => continue
        };
        target = if let Some(cached_block) = *cached_block {
            cached_block
        } else {
            let block = cfg.start_new_cleanup_block();
            cfg.terminate(block, source_info(drop_data.span),
                          TerminatorKind::Drop {
                              location: drop_data.location.clone(),
                              target: target,
                              unwind: None
                          });
            *cached_block = Some(block);
            block
        };
    }

    target
}

fn build_free<'a, 'gcx, 'tcx>(tcx: TyCtxt<'a, 'gcx, 'tcx>,
                              unit_temp: &Lvalue<'tcx>,
                              data: &FreeData<'tcx>,
                              target: BasicBlock)
                              -> TerminatorKind<'tcx> {
    let free_func = tcx.require_lang_item(lang_items::BoxFreeFnLangItem);
    let substs = tcx.intern_substs(&[Kind::from(data.item_ty)]);
    TerminatorKind::Call {
        func: Operand::Constant(Constant {
            span: data.span,
            ty: tcx.item_type(free_func).subst(tcx, substs),
            literal: Literal::Item {
                def_id: free_func,
                substs: substs
            }
        }),
        args: vec![Operand::Consume(data.value.clone())],
        destination: Some((unit_temp.clone(), target)),
        cleanup: None
    }
}