// Copyright 2013-2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! ## The Cleanup module
//!
//! The cleanup module tracks what values need to be cleaned up as scopes
//! are exited, either via panic or just normal control flow. The basic
//! idea is that the function context maintains a stack of cleanup scopes
//! that are pushed/popped as we traverse the AST. There is typically
//! at least one cleanup scope per AST node; some AST nodes may introduce
//! additional temporary scopes.
//!
//! Cleanup items can be scheduled into any of the scopes on the stack.
//! Typically, when a scope is popped, we will also generate the code for
//! each of its cleanups at that time. This corresponds to a normal exit
//! from a block (for example, an expression completing evaluation
//! successfully without panic). However, it is also possible to pop a
//! block *without* executing its cleanups; this is typically used to
//! guard intermediate values that must be cleaned up on panic, but not
//! if everything goes right. See the section on custom scopes below for
//! more information.
//!
//! Cleanup scopes come in three kinds:
//!
//! - **AST scopes:** each AST node in a function body has a corresponding
//!   AST scope. We push the AST scope when we start generating code for an
//!   AST node and pop it once the AST node has been fully generated.
//! - **Loop scopes:** loops have an additional cleanup scope. Cleanups are
//!   never scheduled into loop scopes; instead, they are used to record the
//!   basic blocks that we should branch to when a `continue` or `break`
//!   statement is encountered.
//! - **Custom scopes:** custom scopes are typically used to ensure the
//!   cleanup of intermediate values.
//!
//! ### When to schedule cleanup
//!
//! Although the cleanup system is intended to *feel* fairly declarative,
//! it's still important to time calls to `schedule_clean()` correctly.
//! Basically, you should not schedule cleanup for memory until it has
//! been initialized, because if an unwind should occur before the memory
//! is fully initialized, then the cleanup will run and try to free or
//! drop uninitialized memory. If the initialization itself produces
//! byproducts that need to be freed, then you should use temporary custom
//! scopes to ensure that those byproducts will get freed on unwind. For
//! example, an expression like `box foo()` will first allocate a box in the
//! heap and then call `foo()` -- if `foo()` should panic, this box needs
//! to be *shallowly* freed.
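//!
//! As a rough, hedged sketch of the ordering rule (the names `fcx`, `bcx`,
//! `lldest`, `ty`, and `expr_id` are illustrative assumptions, not the exact
//! trans code):
//!
//! ```ignore
//! // Wrong order: if initialization unwinds, the scheduled cleanup would
//! // try to drop memory that was never fully initialized.
//! //     fcx.schedule_drop_mem(AstScope(expr_id), lldest, ty);
//! //     /* ... code that initializes `lldest` and may panic ... */
//!
//! // Right order: initialize first, then schedule the (deep) drop.
//! /* ... code that initializes `lldest` ... */
//! fcx.schedule_drop_mem(AstScope(expr_id), lldest, ty);
//! ```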
//!
//! ### Long-distance jumps
//!
//! In addition to popping a scope, which corresponds to normal control
//! flow exiting the scope, we may also *jump out* of a scope into some
//! earlier scope on the stack. This can occur in response to a `return`,
//! `break`, or `continue` statement, but also in response to panic. In
//! any of these cases, we will generate a series of cleanup blocks for
//! each of the scopes that is exited. So, if the stack contains scopes A
//! ... Z, and we break out of a loop whose corresponding cleanup scope is
//! X, we would generate cleanup blocks for the cleanups in X, Y, and Z.
//! After cleanup is done we would branch to the exit point for scope X.
//! But if panic should occur, we would generate cleanups for all the
//! scopes from A to Z and then resume the unwind process afterwards.
//!
//! To avoid generating tons of code, we cache the cleanup blocks that we
//! create for breaks, returns, unwinds, and other jumps. Whenever a new
//! cleanup is scheduled, though, we must clear these cached blocks. A
//! possible improvement would be to keep the cached blocks but simply
//! generate a new block which performs the additional cleanup and then
//! branches to the existing cached blocks.
//!
//! ### AST and loop cleanup scopes
//!
//! AST cleanup scopes are pushed when we begin processing an AST node and
//! popped when we finish it. They are used to house cleanups related to
//! rvalue temporaries that get referenced (e.g., due to an expression like
//! `&Foo()`). Whenever an AST scope is popped, we always trans all the
//! cleanups, adding the cleanup code after the postdominator of the AST
//! node.
//!
//! AST nodes that represent breakable loops also push a loop scope; the
//! loop scope never has any actual cleanups, it's just used to point to
//! the basic blocks where control should flow after a "continue" or
//! "break" statement. Popping a loop scope never generates code.
//!
//! ### Custom cleanup scopes
//!
//! Custom cleanup scopes are used for a variety of purposes. The most
//! common, though, is to handle temporary byproducts, where cleanup only
//! needs to occur on panic. The general strategy is to push a custom
//! cleanup scope, schedule *shallow* cleanups into the custom scope, and
//! then pop the custom scope (without transing the cleanups) when
//! execution succeeds normally. This way the cleanups are only trans'd on
//! unwind, and only up until the point where execution succeeded, at
//! which time the complete value should be stored in an lvalue or some
//! other place where normal cleanup applies.
//!
//! To spell it out, here is an example. Imagine an expression `box expr`.
//! We would basically:
//!
//! 1. Push a custom cleanup scope C.
//! 2. Allocate the box.
//! 3. Schedule a shallow free in the scope C.
//! 4. Trans `expr` into the box.
//! 5. Pop the scope C.
//! 6. Return the box as an rvalue.
//!
//! This way, if a panic occurs while transing `expr`, the custom cleanup
//! scope C is still on the stack, and hence the box will be freed during
//! unwinding. The trans code for `expr` itself is responsible for freeing
//! any other byproducts that may be in play.
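//!
//! In terms of the methods defined below, the sequence looks roughly like
//! this (a hedged sketch: `fcx`, `bcx`, `llbox`, `contents_ty`, and the
//! exact exchange-heap variant are illustrative assumptions):
//!
//! ```ignore
//! let custom = fcx.push_custom_cleanup_scope();            // (1)
//! /* (2) allocate the box, yielding `llbox` */
//! fcx.schedule_free_value(CustomScope(custom), llbox,      // (3)
//!                         HeapExchange, contents_ty);
//! /* (4) trans `expr` into the box; a panic here frees the box shallowly */
//! fcx.pop_custom_cleanup_scope(custom);                    // (5)
//! /* (6) return the box as an rvalue */
//! ```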
pub use self::ScopeId::*;
pub use self::CleanupScopeKind::*;
pub use self::EarlyExitLabel::*;
pub use self::Heap::*;

use llvm::{BasicBlockRef, ValueRef};
use middle::region;
use trans::base;
use trans::build;
use trans::common;
use trans::common::{Block, FunctionContext, NodeIdAndSpan};
use trans::debuginfo::{DebugLoc, ToDebugLoc};
use trans::glue;
use trans::type_::Type;
use middle::ty::{self, Ty};

use std::fmt;
use syntax::ast;
pub struct CleanupScope<'blk, 'tcx: 'blk> {
    // The id of this cleanup scope. If the id is None,
    // this is a *temporary scope* that is pushed during trans to
    // cleanup miscellaneous garbage that trans may generate whose
    // lifetime is a subset of some expression. See module doc for
    // more details.
    kind: CleanupScopeKind<'blk, 'tcx>,

    // Cleanups to run upon scope exit.
    cleanups: Vec<CleanupObj<'tcx>>,

    // The debug location any drop calls generated for this scope will be
    // assigned.
    debug_loc: DebugLoc,

    cached_early_exits: Vec<CachedEarlyExit>,
    cached_landing_pad: Option<BasicBlockRef>,
}

#[derive(Copy, Clone, Debug)]
pub struct CustomScopeIndex {
    index: usize,
}

pub const EXIT_BREAK: usize = 0;
pub const EXIT_LOOP: usize = 1;
pub const EXIT_MAX: usize = 2;
pub enum CleanupScopeKind<'blk, 'tcx: 'blk> {
    CustomScopeKind,
    AstScopeKind(ast::NodeId),
    LoopScopeKind(ast::NodeId, [Block<'blk, 'tcx>; EXIT_MAX])
}

impl<'blk, 'tcx: 'blk> fmt::Debug for CleanupScopeKind<'blk, 'tcx> {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match *self {
            CustomScopeKind => write!(f, "CustomScopeKind"),
            AstScopeKind(nid) => write!(f, "AstScopeKind({})", nid),
            LoopScopeKind(nid, ref blks) => {
                try!(write!(f, "LoopScopeKind({}, [", nid));
                for blk in blks {
                    try!(write!(f, "{:p}, ", blk));
                }
                write!(f, "])")
            }
        }
    }
}
#[derive(Copy, Clone, PartialEq, Debug)]
pub enum EarlyExitLabel {
    UnwindExit,
    ReturnExit,
    LoopExit(ast::NodeId, usize)
}

#[derive(Copy, Clone)]
pub struct CachedEarlyExit {
    label: EarlyExitLabel,
    cleanup_block: BasicBlockRef,
}

pub trait Cleanup<'tcx> {
    fn must_unwind(&self) -> bool;
    fn is_lifetime_end(&self) -> bool;
    fn trans<'blk>(&self,
                   bcx: Block<'blk, 'tcx>,
                   debug_loc: DebugLoc)
                   -> Block<'blk, 'tcx>;
}

pub type CleanupObj<'tcx> = Box<Cleanup<'tcx>+'tcx>;

#[derive(Copy, Clone, Debug)]
pub enum ScopeId {
    AstScope(ast::NodeId),
    CustomScope(CustomScopeIndex)
}
impl<'blk, 'tcx> CleanupMethods<'blk, 'tcx> for FunctionContext<'blk, 'tcx> {
    /// Invoked when we start to trans the code contained within a new cleanup scope.
    fn push_ast_cleanup_scope(&self, debug_loc: NodeIdAndSpan) {
        debug!("push_ast_cleanup_scope({})",
               self.ccx.tcx().map.node_to_string(debug_loc.id));

        // FIXME(#2202) -- currently closure bodies have a parent
        // region, which messes up the assertion below, since there
        // are no cleanup scopes on the stack at the start of
        // trans'ing a closure body. I think though that this should
        // eventually be fixed by closure bodies not having a parent
        // region, though that's a touch unclear, and it might also be
        // better just to narrow this assertion more (i.e., by
        // excluding id's that correspond to closure bodies only). For
        // now we just say that if there is already an AST scope on the stack,
        // this new AST scope had better be its immediate child.
        let top_scope = self.top_ast_scope();
        if top_scope.is_some() {
            assert!((self.ccx.tcx().region_maps
                         .opt_encl_scope(region::CodeExtent::from_node_id(debug_loc.id))
                         .map(|s| s.node_id()) == top_scope)
                    ||
                    (self.ccx.tcx().region_maps
                         .opt_encl_scope(region::CodeExtent::DestructionScope(debug_loc.id))
                         .map(|s| s.node_id()) == top_scope));
        }

        self.push_scope(CleanupScope::new(AstScopeKind(debug_loc.id),
                                          debug_loc.debug_loc()));
    }
    fn push_loop_cleanup_scope(&self,
                               id: ast::NodeId,
                               exits: [Block<'blk, 'tcx>; EXIT_MAX]) {
        debug!("push_loop_cleanup_scope({})",
               self.ccx.tcx().map.node_to_string(id));
        assert_eq!(Some(id), self.top_ast_scope());

        // Just copy the debuginfo source location from the enclosing scope
        let debug_loc = self.scopes
                            .borrow()
                            .last()
                            .unwrap()
                            .debug_loc;

        self.push_scope(CleanupScope::new(LoopScopeKind(id, exits), debug_loc));
    }
    fn push_custom_cleanup_scope(&self) -> CustomScopeIndex {
        let index = self.scopes_len();
        debug!("push_custom_cleanup_scope(): {}", index);

        // Just copy the debuginfo source location from the enclosing scope
        let debug_loc = self.scopes
                            .borrow()
                            .last()
                            .map(|opt_scope| opt_scope.debug_loc)
                            .unwrap_or(DebugLoc::None);

        self.push_scope(CleanupScope::new(CustomScopeKind, debug_loc));
        CustomScopeIndex { index: index }
    }

    fn push_custom_cleanup_scope_with_debug_loc(&self,
                                                debug_loc: NodeIdAndSpan)
                                                -> CustomScopeIndex {
        let index = self.scopes_len();
        debug!("push_custom_cleanup_scope_with_debug_loc(): {}", index);

        self.push_scope(CleanupScope::new(CustomScopeKind,
                                          debug_loc.debug_loc()));
        CustomScopeIndex { index: index }
    }
    /// Removes the cleanup scope for id `cleanup_scope`, which must be at the top of the cleanup
    /// stack, and generates the code to do its cleanups for normal exit.
    fn pop_and_trans_ast_cleanup_scope(&self,
                                       bcx: Block<'blk, 'tcx>,
                                       cleanup_scope: ast::NodeId)
                                       -> Block<'blk, 'tcx> {
        debug!("pop_and_trans_ast_cleanup_scope({})",
               self.ccx.tcx().map.node_to_string(cleanup_scope));

        assert!(self.top_scope(|s| s.kind.is_ast_with_id(cleanup_scope)));

        let scope = self.pop_scope();
        self.trans_scope_cleanups(bcx, &scope)
    }

    /// Removes the loop cleanup scope for id `cleanup_scope`, which must be at the top of the
    /// cleanup stack. Does not generate any cleanup code, since loop scopes should exit by
    /// branching to a block generated by `normal_exit_block`.
    fn pop_loop_cleanup_scope(&self,
                              cleanup_scope: ast::NodeId) {
        debug!("pop_loop_cleanup_scope({})",
               self.ccx.tcx().map.node_to_string(cleanup_scope));

        assert!(self.top_scope(|s| s.kind.is_loop_with_id(cleanup_scope)));

        let _ = self.pop_scope();
    }

    /// Removes the top cleanup scope from the stack without executing its cleanups. The top
    /// cleanup scope must be the temporary scope `custom_scope`.
    fn pop_custom_cleanup_scope(&self,
                                custom_scope: CustomScopeIndex) {
        debug!("pop_custom_cleanup_scope({})", custom_scope.index);
        assert!(self.is_valid_to_pop_custom_scope(custom_scope));
        let _ = self.pop_scope();
    }

    /// Removes the top cleanup scope from the stack, which must be a temporary scope, and
    /// generates the code to do its cleanups for normal exit.
    fn pop_and_trans_custom_cleanup_scope(&self,
                                          bcx: Block<'blk, 'tcx>,
                                          custom_scope: CustomScopeIndex)
                                          -> Block<'blk, 'tcx> {
        debug!("pop_and_trans_custom_cleanup_scope({:?})", custom_scope);
        assert!(self.is_valid_to_pop_custom_scope(custom_scope));

        let scope = self.pop_scope();
        self.trans_scope_cleanups(bcx, &scope)
    }

    /// Returns the id of the top-most loop scope
    fn top_loop_scope(&self) -> ast::NodeId {
        for scope in self.scopes.borrow().iter().rev() {
            if let LoopScopeKind(id, _) = scope.kind {
                return id;
            }
        }
        self.ccx.sess().bug("no loop scope found");
    }
    /// Returns a block to branch to which will perform all pending cleanups and then
    /// break/continue (depending on `exit`) out of the loop with id `cleanup_scope`.
    fn normal_exit_block(&'blk self,
                         cleanup_scope: ast::NodeId,
                         exit: usize) -> BasicBlockRef {
        self.trans_cleanups_to_exit_scope(LoopExit(cleanup_scope, exit))
    }

    /// Returns a block to branch to which will perform all pending cleanups and then return from
    /// this function.
    fn return_exit_block(&'blk self) -> BasicBlockRef {
        self.trans_cleanups_to_exit_scope(ReturnExit)
    }
    fn schedule_lifetime_end(&self,
                             cleanup_scope: ScopeId,
                             val: ValueRef) {
        let drop = box LifetimeEnd {
            ptr: val,
        };

        debug!("schedule_lifetime_end({:?}, val={})",
               cleanup_scope,
               self.ccx.tn().val_to_string(val));

        self.schedule_clean(cleanup_scope, drop as CleanupObj);
    }
    /// Schedules a (deep) drop of `val`, which is a pointer to an instance of `ty`
    fn schedule_drop_mem(&self,
                         cleanup_scope: ScopeId,
                         val: ValueRef,
                         ty: Ty<'tcx>) {
        if !self.type_needs_drop(ty) { return; }
        let drop = box DropValue { is_immediate: false, val: val, ty: ty,
                                   fill_on_drop: false, skip_dtor: false };

        debug!("schedule_drop_mem({:?}, val={}, ty={:?}) fill_on_drop={} skip_dtor={}",
               cleanup_scope, self.ccx.tn().val_to_string(val), ty,
               drop.fill_on_drop, drop.skip_dtor);

        self.schedule_clean(cleanup_scope, drop as CleanupObj);
    }

    /// Schedules a (deep) drop and filling of `val`, which is a pointer to an instance of `ty`
    fn schedule_drop_and_fill_mem(&self,
                                  cleanup_scope: ScopeId,
                                  val: ValueRef,
                                  ty: Ty<'tcx>) {
        if !self.type_needs_drop(ty) { return; }

        let drop = box DropValue { is_immediate: false, val: val, ty: ty,
                                   fill_on_drop: true, skip_dtor: false };

        debug!("schedule_drop_and_fill_mem({:?}, val={}, ty={:?}, fill_on_drop={}, skip_dtor={})",
               cleanup_scope, self.ccx.tn().val_to_string(val), ty,
               drop.fill_on_drop, drop.skip_dtor);

        self.schedule_clean(cleanup_scope, drop as CleanupObj);
    }

    /// Issue #23611: Schedules a (deep) drop of the contents of
    /// `val`, which is a pointer to an instance of struct/enum type
    /// `ty`. The scheduled code handles extracting the discriminant
    /// and dropping the contents associated with that variant
    /// *without* executing any associated drop implementation.
    fn schedule_drop_adt_contents(&self,
                                  cleanup_scope: ScopeId,
                                  val: ValueRef,
                                  ty: Ty<'tcx>) {
        // `if` below could be "!contents_needs_drop"; skipping drop
        // is just an optimization, so it is sound to be conservative.
        if !self.type_needs_drop(ty) { return; }

        let drop = box DropValue { is_immediate: false, val: val, ty: ty,
                                   fill_on_drop: false, skip_dtor: true };

        debug!("schedule_drop_adt_contents({:?}, val={}, ty={:?}) fill_on_drop={} skip_dtor={}",
               cleanup_scope, self.ccx.tn().val_to_string(val), ty,
               drop.fill_on_drop, drop.skip_dtor);

        self.schedule_clean(cleanup_scope, drop as CleanupObj);
    }

    /// Schedules a (deep) drop of `val`, which is an instance of `ty`
    fn schedule_drop_immediate(&self,
                               cleanup_scope: ScopeId,
                               val: ValueRef,
                               ty: Ty<'tcx>) {
        if !self.type_needs_drop(ty) { return; }
        let drop = box DropValue { is_immediate: true, val: val, ty: ty,
                                   fill_on_drop: false, skip_dtor: false };

        debug!("schedule_drop_immediate({:?}, val={}, ty={:?}) fill_on_drop={} skip_dtor={}",
               cleanup_scope, self.ccx.tn().val_to_string(val), ty,
               drop.fill_on_drop, drop.skip_dtor);

        self.schedule_clean(cleanup_scope, drop as CleanupObj);
    }

    /// Schedules a call to `free(val)`. Note that this is a shallow operation.
    fn schedule_free_value(&self,
                           cleanup_scope: ScopeId,
                           val: ValueRef,
                           heap: Heap,
                           content_ty: Ty<'tcx>) {
        let drop = box FreeValue { ptr: val, heap: heap, content_ty: content_ty };

        debug!("schedule_free_value({:?}, val={}, heap={:?})",
               cleanup_scope, self.ccx.tn().val_to_string(val), heap);

        self.schedule_clean(cleanup_scope, drop as CleanupObj);
    }
    fn schedule_clean(&self,
                      cleanup_scope: ScopeId,
                      cleanup: CleanupObj<'tcx>) {
        match cleanup_scope {
            AstScope(id) => self.schedule_clean_in_ast_scope(id, cleanup),
            CustomScope(id) => self.schedule_clean_in_custom_scope(id, cleanup),
        }
    }
    /// Schedules a cleanup to occur upon exit from `cleanup_scope`. If `cleanup_scope` is not
    /// provided, then the cleanup is scheduled in the topmost scope, which must be a temporary
    /// (custom) scope.
    fn schedule_clean_in_ast_scope(&self,
                                   cleanup_scope: ast::NodeId,
                                   cleanup: CleanupObj<'tcx>) {
        debug!("schedule_clean_in_ast_scope(cleanup_scope={})",
               cleanup_scope);

        for scope in self.scopes.borrow_mut().iter_mut().rev() {
            if scope.kind.is_ast_with_id(cleanup_scope) {
                scope.cleanups.push(cleanup);
                scope.clear_cached_exits();
                return;
            } else {
                // will be adding a cleanup to some enclosing scope
                scope.clear_cached_exits();
            }
        }

        self.ccx.sess().bug(
            &format!("no cleanup scope {} found",
                     self.ccx.tcx().map.node_to_string(cleanup_scope)));
    }
    /// Schedules a cleanup to occur in the top-most scope, which must be a temporary scope.
    fn schedule_clean_in_custom_scope(&self,
                                      custom_scope: CustomScopeIndex,
                                      cleanup: CleanupObj<'tcx>) {
        debug!("schedule_clean_in_custom_scope(custom_scope={})",
               custom_scope.index);

        assert!(self.is_valid_custom_scope(custom_scope));

        let mut scopes = self.scopes.borrow_mut();
        let scope = &mut (*scopes)[custom_scope.index];
        scope.cleanups.push(cleanup);
        scope.clear_cached_exits();
    }

    /// Returns true if there are pending cleanups that should execute on panic.
    fn needs_invoke(&self) -> bool {
        self.scopes.borrow().iter().rev().any(|s| s.needs_invoke())
    }

    /// Returns a basic block to branch to in the event of a panic. This block will run the panic
    /// cleanups and eventually invoke the LLVM `Resume` instruction.
    fn get_landing_pad(&'blk self) -> BasicBlockRef {
        let _icx = base::push_ctxt("get_landing_pad");

        debug!("get_landing_pad");

        let orig_scopes_len = self.scopes_len();
        assert!(orig_scopes_len > 0);

        // Remove any scopes that do not have cleanups on panic:
        let mut popped_scopes = vec!();
        while !self.top_scope(|s| s.needs_invoke()) {
            debug!("top scope does not need invoke");
            popped_scopes.push(self.pop_scope());
        }

        // Check for an existing landing pad in the new topmost scope:
        let llbb = self.get_or_create_landing_pad();

        // Push the scopes we removed back on:
        loop {
            match popped_scopes.pop() {
                Some(scope) => self.push_scope(scope),
                None => break
            }
        }

        assert_eq!(self.scopes_len(), orig_scopes_len);

        llbb
    }
}
impl<'blk, 'tcx> CleanupHelperMethods<'blk, 'tcx> for FunctionContext<'blk, 'tcx> {
    /// Returns the id of the current top-most AST scope, if any.
    fn top_ast_scope(&self) -> Option<ast::NodeId> {
        for scope in self.scopes.borrow().iter().rev() {
            match scope.kind {
                CustomScopeKind | LoopScopeKind(..) => {}
                AstScopeKind(i) => {
                    return Some(i);
                }
            }
        }
        None
    }

    fn top_nonempty_cleanup_scope(&self) -> Option<usize> {
        self.scopes.borrow().iter().rev().position(|s| !s.cleanups.is_empty())
    }

    fn is_valid_to_pop_custom_scope(&self, custom_scope: CustomScopeIndex) -> bool {
        self.is_valid_custom_scope(custom_scope) &&
            custom_scope.index == self.scopes.borrow().len() - 1
    }

    fn is_valid_custom_scope(&self, custom_scope: CustomScopeIndex) -> bool {
        let scopes = self.scopes.borrow();
        custom_scope.index < scopes.len() &&
            (*scopes)[custom_scope.index].kind.is_temp()
    }
    /// Generates the cleanups for `scope` into `bcx`
    fn trans_scope_cleanups(&self, // cannot borrow self, will recurse
                            bcx: Block<'blk, 'tcx>,
                            scope: &CleanupScope<'blk, 'tcx>) -> Block<'blk, 'tcx> {
        let mut bcx = bcx;
        if !bcx.unreachable.get() {
            for cleanup in scope.cleanups.iter().rev() {
                bcx = cleanup.trans(bcx, scope.debug_loc);
            }
        }
        bcx
    }

    fn scopes_len(&self) -> usize {
        self.scopes.borrow().len()
    }

    fn push_scope(&self, scope: CleanupScope<'blk, 'tcx>) {
        self.scopes.borrow_mut().push(scope)
    }

    fn pop_scope(&self) -> CleanupScope<'blk, 'tcx> {
        debug!("popping cleanup scope {}, {} scopes remaining",
               self.top_scope(|s| s.block_name("")),
               self.scopes_len() - 1);

        self.scopes.borrow_mut().pop().unwrap()
    }

    fn top_scope<R, F>(&self, f: F) -> R where F: FnOnce(&CleanupScope<'blk, 'tcx>) -> R {
        f(self.scopes.borrow().last().unwrap())
    }
    /// Used when the caller wishes to jump to an early exit, such as a return, break, continue, or
    /// unwind. This function will generate all cleanups between the top of the stack and the exit
    /// `label` and return a basic block that the caller can branch to.
    ///
    /// For example, if the current stack of cleanups were as follows:
    ///
    ///      AST 22
    ///      Custom 1
    ///      AST 23
    ///      Loop 23
    ///      Custom 2
    ///      AST 24
    ///
    /// and the `label` specifies a break from `Loop 23`, then this function would generate a
    /// series of basic blocks as follows:
    ///
    ///      Cleanup(AST 24) -> Cleanup(Custom 2) -> break_blk
    ///
    /// where `break_blk` is the block specified in `Loop 23` as the target for breaks. The return
    /// value would be the first basic block in that sequence (`Cleanup(AST 24)`). The caller could
    /// then branch to `Cleanup(AST 24)` and it will perform all cleanups and finally branch to the
    /// exit point.
    fn trans_cleanups_to_exit_scope(&'blk self,
                                    label: EarlyExitLabel)
                                    -> BasicBlockRef {
        debug!("trans_cleanups_to_exit_scope label={:?} scopes={}",
               label, self.scopes_len());
        let orig_scopes_len = self.scopes_len();
        let mut prev_llbb;
        let mut popped_scopes = vec!();

        // First we pop off all the cleanup stacks that are
        // traversed until the exit is reached, pushing them
        // onto the side vector `popped_scopes`. No code is
        // generated at this time.
        //
        // So, continuing the example from above, we would wind up
        // with a `popped_scopes` vector of `[AST 24, Custom 2]`.
        // (Presuming that there are no cached exits)
        loop {
            if self.scopes_len() == 0 {
                match label {
                    UnwindExit => {
                        // Generate a block that will `Resume`.
                        let prev_bcx = self.new_block(true, "resume", None);
                        let personality = self.personality.get().expect(
                            "create_landing_pad() should have set this");
                        build::Resume(prev_bcx,
                                      build::Load(prev_bcx, personality));
                        prev_llbb = prev_bcx.llbb;
                        break;
                    }

                    ReturnExit => {
                        prev_llbb = self.get_llreturn();
                        break;
                    }

                    LoopExit(id, _) => {
                        self.ccx.sess().bug(&format!(
                            "cannot exit from scope {}, \
                            not in scope", id));
                    }
                }
            }

            // Check if we have already cached the unwinding of this
            // scope for this label. If so, we can stop popping scopes
            // and branch to the cached label, since it contains the
            // cleanups for any subsequent scopes.
            match self.top_scope(|s| s.cached_early_exit(label)) {
                Some(cleanup_block) => {
                    prev_llbb = cleanup_block;
                    break;
                }
                None => { }
            }

            // Pop off the scope, since we will be generating
            // unwinding code for it. If we are searching for a loop exit,
            // and this scope is that loop, then stop popping and set
            // `prev_llbb` to the appropriate exit block from the loop.
            popped_scopes.push(self.pop_scope());
            let scope = popped_scopes.last().unwrap();
            match label {
                UnwindExit | ReturnExit => { }
                LoopExit(id, exit) => {
                    match scope.kind.early_exit_block(id, exit) {
                        Some(exitllbb) => {
                            prev_llbb = exitllbb;
                            break;
                        }

                        None => { }
                    }
                }
            }
        }

        debug!("trans_cleanups_to_exit_scope: popped {} scopes",
               popped_scopes.len());
        // Now push the popped scopes back on. As we go,
        // we track in `prev_llbb` the exit to which this scope
        // should branch when it's done.
        //
        // So, continuing with our example, we will start out with
        // `prev_llbb` being set to `break_blk` (or possibly a cached
        // early exit). We will then pop the scopes from `popped_scopes`
        // and generate a basic block for each one, prepending it in the
        // series and updating `prev_llbb`. So we begin by popping `Custom 2`
        // and generating `Cleanup(Custom 2)`. We make `Cleanup(Custom 2)`
        // branch to `prev_llbb == break_blk`, giving us a sequence like:
        //
        //     Cleanup(Custom 2) -> prev_llbb
        //
        // We then pop `AST 24` and repeat the process, giving us the sequence:
        //
        //     Cleanup(AST 24) -> Cleanup(Custom 2) -> prev_llbb
        //
        // At this point, `popped_scopes` is empty, and so the final block
        // that we return to the user is `Cleanup(AST 24)`.
        while let Some(mut scope) = popped_scopes.pop() {
            if !scope.cleanups.is_empty() {
                let name = scope.block_name("clean");
                debug!("generating cleanups for {}", name);
                let bcx_in = self.new_block(label.is_unwind(),
                                            &name[..],
                                            None);
                let mut bcx_out = bcx_in;
                for cleanup in scope.cleanups.iter().rev() {
                    bcx_out = cleanup.trans(bcx_out,
                                            scope.debug_loc);
                }
                build::Br(bcx_out, prev_llbb, DebugLoc::None);
                prev_llbb = bcx_in.llbb;

                scope.add_cached_early_exit(label, prev_llbb);
            }
            self.push_scope(scope);
        }

        debug!("trans_cleanups_to_exit_scope: prev_llbb={:?}", prev_llbb);
        assert_eq!(self.scopes_len(), orig_scopes_len);

        prev_llbb
    }
    /// Creates a landing pad for the top scope, if one does not exist. The landing pad will
    /// perform all cleanups necessary for an unwind and then `resume` to continue error
    /// propagation:
    ///
    ///     landing_pad -> ... cleanups ... -> [resume]
    ///
    /// (The cleanups and resume instruction are created by `trans_cleanups_to_exit_scope()`, not
    /// in this function itself.)
    fn get_or_create_landing_pad(&'blk self) -> BasicBlockRef {
        let pad_bcx;

        debug!("get_or_create_landing_pad");

        // Check if a landing pad block exists; if not, create one.
        {
            let mut scopes = self.scopes.borrow_mut();
            let last_scope = scopes.last_mut().unwrap();
            match last_scope.cached_landing_pad {
                Some(llbb) => { return llbb; }
                None => {
                    let name = last_scope.block_name("unwind");
                    pad_bcx = self.new_block(true, &name[..], None);
                    last_scope.cached_landing_pad = Some(pad_bcx.llbb);
                }
            }
        }

        // The landing pad return type (the type being propagated). Not sure what
        // this represents but it's determined by the personality function and
        // this is what the EH proposal example uses.
        let llretty = Type::struct_(self.ccx,
                                    &[Type::i8p(self.ccx), Type::i32(self.ccx)],
                                    false);

        let llpersonality = pad_bcx.fcx.eh_personality();

        // The only landing pad clause will be 'cleanup'
        let llretval = build::LandingPad(pad_bcx, llretty, llpersonality, 1);

        // The landing pad block is a cleanup
        build::SetCleanup(pad_bcx, llretval);

        // We store the retval in a function-central alloca, so that calls to
        // Resume can find it.
        match self.personality.get() {
            Some(addr) => {
                build::Store(pad_bcx, llretval, addr);
            }
            None => {
                let addr = base::alloca(pad_bcx, common::val_ty(llretval), "");
                self.personality.set(Some(addr));
                build::Store(pad_bcx, llretval, addr);
            }
        }

        // Generate the cleanup block and branch to it.
        let cleanup_llbb = self.trans_cleanups_to_exit_scope(UnwindExit);
        build::Br(pad_bcx, cleanup_llbb, DebugLoc::None);

        pad_bcx.llbb
    }
}
impl<'blk, 'tcx> CleanupScope<'blk, 'tcx> {
    fn new(kind: CleanupScopeKind<'blk, 'tcx>,
           debug_loc: DebugLoc)
           -> CleanupScope<'blk, 'tcx> {
        CleanupScope {
            kind: kind,
            debug_loc: debug_loc,
            cleanups: vec!(),
            cached_early_exits: vec!(),
            cached_landing_pad: None,
        }
    }

    fn clear_cached_exits(&mut self) {
        self.cached_early_exits = vec!();
        self.cached_landing_pad = None;
    }

    fn cached_early_exit(&self,
                         label: EarlyExitLabel)
                         -> Option<BasicBlockRef> {
        self.cached_early_exits.iter().
            find(|e| e.label == label).
            map(|e| e.cleanup_block)
    }

    fn add_cached_early_exit(&mut self,
                             label: EarlyExitLabel,
                             blk: BasicBlockRef) {
        self.cached_early_exits.push(
            CachedEarlyExit { label: label,
                              cleanup_block: blk });
    }
    /// True if this scope has cleanups that need unwinding
    fn needs_invoke(&self) -> bool {
        self.cached_landing_pad.is_some() ||
            self.cleanups.iter().any(|c| c.must_unwind())
    }

    /// Returns a suitable name to use for the basic block that handles this cleanup scope
    fn block_name(&self, prefix: &str) -> String {
        match self.kind {
            CustomScopeKind => format!("{}_custom_", prefix),
            AstScopeKind(id) => format!("{}_ast_{}_", prefix, id),
            LoopScopeKind(id, _) => format!("{}_loop_{}_", prefix, id),
        }
    }

    /// Manipulate cleanup scope for call arguments. Conceptually, each
    /// argument to a call is an lvalue, and performing the call moves each
    /// of the arguments into a new rvalue (which gets cleaned up by the
    /// callee). As an optimization, instead of actually performing all of
    /// those moves, trans just manipulates the cleanup scope to obtain the
    /// same effect.
    pub fn drop_non_lifetime_clean(&mut self) {
        self.cleanups.retain(|c| c.is_lifetime_end());
        self.clear_cached_exits();
    }
}
impl<'blk, 'tcx> CleanupScopeKind<'blk, 'tcx> {
    fn is_temp(&self) -> bool {
        match *self {
            CustomScopeKind => true,
            LoopScopeKind(..) | AstScopeKind(..) => false,
        }
    }

    fn is_ast_with_id(&self, id: ast::NodeId) -> bool {
        match *self {
            CustomScopeKind | LoopScopeKind(..) => false,
            AstScopeKind(i) => i == id
        }
    }

    fn is_loop_with_id(&self, id: ast::NodeId) -> bool {
        match *self {
            CustomScopeKind | AstScopeKind(..) => false,
            LoopScopeKind(i, _) => i == id
        }
    }

    /// If this is a loop scope with id `id`, return the early exit block `exit`, else `None`
    fn early_exit_block(&self,
                        id: ast::NodeId,
                        exit: usize) -> Option<BasicBlockRef> {
        match *self {
            LoopScopeKind(i, ref exits) if id == i => Some(exits[exit].llbb),
            _ => None,
        }
    }
}

impl EarlyExitLabel {
    fn is_unwind(&self) -> bool {
        match *self {
            UnwindExit => true,
            _ => false
        }
    }
}
///////////////////////////////////////////////////////////////////////////

#[derive(Copy, Clone)]
pub struct DropValue<'tcx> {
    is_immediate: bool,
    val: ValueRef,
    ty: Ty<'tcx>,
    fill_on_drop: bool,
    skip_dtor: bool,
}

impl<'tcx> Cleanup<'tcx> for DropValue<'tcx> {
    fn must_unwind(&self) -> bool { true }

    fn is_lifetime_end(&self) -> bool { false }

    fn trans<'blk>(&self,
                   bcx: Block<'blk, 'tcx>,
                   debug_loc: DebugLoc)
                   -> Block<'blk, 'tcx> {
        let skip_dtor = self.skip_dtor;
        let _icx = if skip_dtor {
            base::push_ctxt("<DropValue as Cleanup>::trans skip_dtor=true")
        } else {
            base::push_ctxt("<DropValue as Cleanup>::trans skip_dtor=false")
        };
        let bcx = if self.is_immediate {
            glue::drop_ty_immediate(bcx, self.val, self.ty, debug_loc, self.skip_dtor)
        } else {
            glue::drop_ty_core(bcx, self.val, self.ty, debug_loc, self.skip_dtor)
        };
        if self.fill_on_drop {
            base::drop_done_fill_mem(bcx, self.val, self.ty);
        }
        bcx
    }
}
#[derive(Copy, Clone, Debug)]
pub enum Heap {
    HeapExchange
}

#[derive(Copy, Clone)]
pub struct FreeValue<'tcx> {
    ptr: ValueRef,
    heap: Heap,
    content_ty: Ty<'tcx>
}
impl<'tcx> Cleanup<'tcx> for FreeValue<'tcx> {
    fn must_unwind(&self) -> bool { true }

    fn is_lifetime_end(&self) -> bool { false }

    fn trans<'blk>(&self,
                   bcx: Block<'blk, 'tcx>,
                   debug_loc: DebugLoc)
                   -> Block<'blk, 'tcx> {
        match self.heap {
            HeapExchange => {
                glue::trans_exchange_free_ty(bcx,
                                             self.ptr,
                                             self.content_ty,
                                             debug_loc)
            }
        }
    }
}

#[derive(Copy, Clone)]
pub struct LifetimeEnd {
    ptr: ValueRef,
}

impl<'tcx> Cleanup<'tcx> for LifetimeEnd {
    fn must_unwind(&self) -> bool { false }

    fn is_lifetime_end(&self) -> bool { true }

    fn trans<'blk>(&self,
                   bcx: Block<'blk, 'tcx>,
                   debug_loc: DebugLoc)
                   -> Block<'blk, 'tcx> {
        debug_loc.apply(bcx.fcx);
        base::call_lifetime_end(bcx, self.ptr);
        bcx
    }
}
pub fn temporary_scope(tcx: &ty::ctxt,
                       id: ast::NodeId)
                       -> ScopeId {
    match tcx.region_maps.temporary_scope(id) {
        Some(scope) => {
            let r = AstScope(scope.node_id());
            debug!("temporary_scope({}) = {:?}", id, r);
            r
        }
        None => {
            tcx.sess.bug(&format!("no temporary scope available for expr {}",
                                  id))
        }
    }
}

pub fn var_scope(tcx: &ty::ctxt,
                 id: ast::NodeId)
                 -> ScopeId {
    let r = AstScope(tcx.region_maps.var_scope(id).node_id());
    debug!("var_scope({}) = {:?}", id, r);
    r
}
///////////////////////////////////////////////////////////////////////////
// These traits just exist to put the methods into this file.
pub trait CleanupMethods<'blk, 'tcx> {
    fn push_ast_cleanup_scope(&self, id: NodeIdAndSpan);
    fn push_loop_cleanup_scope(&self,
                               id: ast::NodeId,
                               exits: [Block<'blk, 'tcx>; EXIT_MAX]);
    fn push_custom_cleanup_scope(&self) -> CustomScopeIndex;
    fn push_custom_cleanup_scope_with_debug_loc(&self,
                                                debug_loc: NodeIdAndSpan)
                                                -> CustomScopeIndex;
    fn pop_and_trans_ast_cleanup_scope(&self,
                                       bcx: Block<'blk, 'tcx>,
                                       cleanup_scope: ast::NodeId)
                                       -> Block<'blk, 'tcx>;
    fn pop_loop_cleanup_scope(&self,
                              cleanup_scope: ast::NodeId);
    fn pop_custom_cleanup_scope(&self,
                                custom_scope: CustomScopeIndex);
    fn pop_and_trans_custom_cleanup_scope(&self,
                                          bcx: Block<'blk, 'tcx>,
                                          custom_scope: CustomScopeIndex)
                                          -> Block<'blk, 'tcx>;
    fn top_loop_scope(&self) -> ast::NodeId;
    fn normal_exit_block(&'blk self,
                         cleanup_scope: ast::NodeId,
                         exit: usize) -> BasicBlockRef;
    fn return_exit_block(&'blk self) -> BasicBlockRef;
    fn schedule_lifetime_end(&self,
                             cleanup_scope: ScopeId,
                             val: ValueRef);
    fn schedule_drop_mem(&self,
                         cleanup_scope: ScopeId,
                         val: ValueRef,
                         ty: Ty<'tcx>);
    fn schedule_drop_and_fill_mem(&self,
                                  cleanup_scope: ScopeId,
                                  val: ValueRef,
                                  ty: Ty<'tcx>);
    fn schedule_drop_adt_contents(&self,
                                  cleanup_scope: ScopeId,
                                  val: ValueRef,
                                  ty: Ty<'tcx>);
    fn schedule_drop_immediate(&self,
                               cleanup_scope: ScopeId,
                               val: ValueRef,
                               ty: Ty<'tcx>);
    fn schedule_free_value(&self,
                           cleanup_scope: ScopeId,
                           val: ValueRef,
                           heap: Heap,
                           content_ty: Ty<'tcx>);
    fn schedule_clean(&self,
                      cleanup_scope: ScopeId,
                      cleanup: CleanupObj<'tcx>);
    fn schedule_clean_in_ast_scope(&self,
                                   cleanup_scope: ast::NodeId,
                                   cleanup: CleanupObj<'tcx>);
    fn schedule_clean_in_custom_scope(&self,
                                      custom_scope: CustomScopeIndex,
                                      cleanup: CleanupObj<'tcx>);
    fn needs_invoke(&self) -> bool;
    fn get_landing_pad(&'blk self) -> BasicBlockRef;
}
trait CleanupHelperMethods<'blk, 'tcx> {
    fn top_ast_scope(&self) -> Option<ast::NodeId>;
    fn top_nonempty_cleanup_scope(&self) -> Option<usize>;
    fn is_valid_to_pop_custom_scope(&self, custom_scope: CustomScopeIndex) -> bool;
    fn is_valid_custom_scope(&self, custom_scope: CustomScopeIndex) -> bool;
    fn trans_scope_cleanups(&self,
                            bcx: Block<'blk, 'tcx>,
                            scope: &CleanupScope<'blk, 'tcx>) -> Block<'blk, 'tcx>;
    fn trans_cleanups_to_exit_scope(&'blk self,
                                    label: EarlyExitLabel)
                                    -> BasicBlockRef;
    fn get_or_create_landing_pad(&'blk self) -> BasicBlockRef;
    fn scopes_len(&self) -> usize;
    fn push_scope(&self, scope: CleanupScope<'blk, 'tcx>);
    fn pop_scope(&self) -> CleanupScope<'blk, 'tcx>;
    fn top_scope<R, F>(&self, f: F) -> R where F: FnOnce(&CleanupScope<'blk, 'tcx>) -> R;
}