//! A classic liveness analysis based on dataflow over the AST. Computes,
//! for each local variable in a function, whether that variable is live
//! at a given point. Program execution points are identified by their
//! IDs.
//!
//! # Basic idea
//!
//! The basic model is that each local variable is assigned an index. We
//! represent sets of local variables using a vector indexed by this
//! index. The value in the vector is either 0, indicating the variable
//! is dead, or the ID of an expression that uses the variable.
//!
//! We conceptually walk over the AST in reverse execution order. If we
//! find a use of a variable, we add it to the set of live variables. If
//! we find an assignment to a variable, we remove it from the set of live
//! variables. When we have to merge two flows, we take the union of
//! those two flows -- if the variable is live on both paths, we simply
//! pick one ID. In the event of loops, we continue doing this until a
//! fixed point is reached.
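The reverse walk with union merging and fixed-point iteration can be demonstrated outside the compiler. The `Node` type, the numeric variable IDs, and the `liveness` function below are illustrative inventions, not compiler APIs (the real analysis records reader/writer node IDs rather than plain set membership):

```rust
use std::collections::HashSet;

// A tiny CFG node: its successors, the variables it reads, the variables
// it writes.
struct Node {
    succs: Vec<usize>,
    uses: HashSet<u32>,
    defs: HashSet<u32>,
}

// Classic backward liveness: live_in[n] = uses[n] union (live_out[n] minus
// defs[n]), where live_out[n] is the union over the successors' live_in.
// Loops in the graph are what make the fixed-point iteration necessary.
fn liveness(nodes: &[Node]) -> Vec<HashSet<u32>> {
    let mut live_in = vec![HashSet::new(); nodes.len()];
    let mut changed = true;
    while changed {
        changed = false;
        // Reverse execution order converges faster for mostly-forward graphs.
        for (i, node) in nodes.iter().enumerate().rev() {
            // Merge: union of the successors' live-in sets.
            let mut out = HashSet::new();
            for &s in &node.succs {
                out.extend(live_in[s].iter().copied());
            }
            // Kill the definitions, then add the uses.
            let mut new_in: HashSet<u32> =
                out.difference(&node.defs).copied().collect();
            new_in.extend(node.uses.iter().copied());
            if new_in != live_in[i] {
                live_in[i] = new_in;
                changed = true;
            }
        }
    }
    live_in
}

fn main() {
    // 0: x = ...; 1: loop head; 2: use(x) with a back-edge to 1; 3: exit.
    let nodes = vec![
        Node { succs: vec![1], uses: HashSet::new(), defs: HashSet::from([0]) },
        Node { succs: vec![2, 3], uses: HashSet::new(), defs: HashSet::new() },
        Node { succs: vec![1], uses: HashSet::from([0]), defs: HashSet::new() },
        Node { succs: vec![], uses: HashSet::new(), defs: HashSet::new() },
    ];
    let live_in = liveness(&nodes);
    assert!(live_in[1].contains(&0));  // live around the loop
    assert!(!live_in[0].contains(&0)); // dead before its definition
    println!("live_in: {:?}", live_in);
}
```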
//! ## Checking initialization
//!
//! At the function entry point, all variables must be dead. If this is
//! not the case, we can report an error using the ID found in the set of
//! live variables, which identifies a use of the variable that is not
//! dominated by an assignment.
//!
//! ## Checking moves
//!
//! After each explicit move, the variable must be dead.
//!
//! ## Computing last uses
//!
//! Any use of the variable where the variable is dead afterwards is a
//! *last use*.
//!
//! # Implementation details
//! The actual implementation contains two (nested) walks over the AST.
//! The outer walk has the job of building up the ir_maps instance for the
//! enclosing function. On the way down the tree, it identifies those AST
//! nodes and variable IDs that will be needed for the liveness analysis
//! and assigns them contiguous IDs. The liveness ID for an AST node is
//! called a `live_node` (it's a newtype'd `u32`) and the ID for a variable
//! is called a `variable` (another newtype'd `u32`).
//!
//! On the way back up the tree, as we are about to exit from a function
//! declaration we allocate a `liveness` instance. Now that we know
//! precisely how many nodes and variables we need, we can allocate all
//! the various arrays that we will need to precisely the right size. We then
//! perform the actual propagation on the `liveness` instance.
//!
//! This propagation is encoded in the various `propagate_through_*()`
//! methods. It effectively does a reverse walk of the AST; whenever we
//! reach a loop node, we iterate until a fixed point is reached.
//! ## The `RWU` struct
//!
//! At each live node `N`, we track three pieces of information for each
//! variable `V` (these are encapsulated in the `RWU` struct):
//!
//! - `reader`: the `LiveNode` ID of some node which will read the value
//!   that `V` holds on entry to `N`. Formally: a node `M` such
//!   that there exists a path `P` from `N` to `M` where `P` does not
//!   write `V`. If the `reader` is `invalid_node()`, then the current
//!   value will never be read (the variable is dead, essentially).
//!
//! - `writer`: the `LiveNode` ID of some node which will write the
//!   variable `V` and which is reachable from `N`. Formally: a node `M`
//!   such that there exists a path `P` from `N` to `M` and `M` writes
//!   `V`. If the `writer` is `invalid_node()`, then there is no writer
//!   of `V` that follows `N`.
//!
//! - `used`: a boolean value indicating whether `V` is *used*. We
//!   distinguish a *read* from a *use* in that a *use* is some read that
//!   is not just used to generate a new value. For example, `x += 1` is
//!   a read but not a use. This is used to generate better warnings.
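A toy model of these three fields may help: the sketch below uses `Option<usize>` in place of `LiveNode` IDs, and its `acc` function mirrors the access rules implemented further down in `Liveness::acc()`. The `Rwu` type and `acc` here are illustrative stand-ins, not the compiler's own definitions:

```rust
// Simplified stand-in for the compiler's RWU: `None` plays the role of
// `invalid_node()`, and plain usizes stand in for `LiveNode` IDs.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Rwu {
    reader: Option<usize>,
    writer: Option<usize>,
    used: bool,
}

const ACC_READ: u32 = 1;
const ACC_WRITE: u32 = 2;
const ACC_USE: u32 = 4;

// A write invalidates the pending reader (the old value can no longer be
// observed) and records the writer; a read records the reading node; only
// ACC_USE marks the variable as genuinely *used*.
fn acc(mut rwu: Rwu, ln: usize, acc: u32) -> Rwu {
    if acc & ACC_WRITE != 0 {
        rwu.reader = None;
        rwu.writer = Some(ln);
    }
    // Read applied after write, so a read+write access keeps its reader.
    if acc & ACC_READ != 0 {
        rwu.reader = Some(ln);
    }
    if acc & ACC_USE != 0 {
        rwu.used = true;
    }
    rwu
}

fn main() {
    let dead = Rwu { reader: None, writer: None, used: false };
    // `x += 1` at node 7 is a read and a write, but not a use:
    let after_incr = acc(dead, 7, ACC_READ | ACC_WRITE);
    assert_eq!(after_incr.reader, Some(7));
    assert_eq!(after_incr.writer, Some(7));
    assert!(!after_incr.used); // still counts as "unused" for warnings
    // Printing `x` at node 9 would be a read *and* a use:
    let after_print = acc(dead, 9, ACC_READ | ACC_USE);
    assert!(after_print.used);
    println!("{:?} / {:?}", after_incr, after_print);
}
```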
//! ## Special Variables
//!
//! We generate various special variables for various, well, special purposes.
//! These are described in the `specials` struct:
//!
//! - `exit_ln`: a live node that is generated to represent every 'exit' from
//!   the function, whether it be by explicit return, panic, or other means.
//!
//! - `fallthrough_ln`: a live node that represents a fallthrough
//!   exit from the function.
//!
//! - `clean_exit_var`: a synthetic variable that is only 'read' from the
//!   fallthrough node. It is only live if the function could converge
//!   via means other than an explicit `return` expression. That is, it is
//!   only dead if the end of the function's block can never be reached.
//!   It is the responsibility of typeck to ensure that there are no
//!   `return` expressions in a function declared as diverging.
use self::LoopKind::*;
use self::LiveNodeKind::*;
use self::VarKind::*;

use crate::hir::def::*;
use crate::hir::Node;
use crate::ty::{self, TyCtxt};
use crate::ty::query::Providers;

use crate::util::nodemap::{HirIdMap, HirIdSet};

use errors::Applicability;
use std::collections::{BTreeMap, VecDeque};
use std::{fmt, io};
use std::io::prelude::*;
use std::rc::Rc;

use syntax::ast::{self, NodeId};
use syntax::ptr::P;
use syntax::symbol::{kw, sym};
use syntax_pos::Span;

use crate::hir;
use crate::hir::{Expr, HirId};
use crate::hir::def_id::DefId;
use crate::hir::intravisit::{self, Visitor, FnKind, NestedVisitorMap};
/// For use with `propagate_through_loop`.
enum LoopKind<'a> {
    /// An endless `loop` loop.
    LoopLoop,
    /// A `while` loop, with the given expression as condition.
    WhileLoop(&'a Expr),
}

#[derive(Copy, Clone, PartialEq)]
struct Variable(u32);

#[derive(Copy, Clone, PartialEq)]
struct LiveNode(u32);

impl Variable {
    fn get(&self) -> usize { self.0 as usize }
}

impl LiveNode {
    fn get(&self) -> usize { self.0 as usize }
}
#[derive(Copy, Clone, PartialEq, Debug)]
enum LiveNodeKind {
    UpvarNode(Span),
    ExprNode(Span),
    VarDefNode(Span),
    ExitNode,
}

fn live_node_kind_to_string(lnk: LiveNodeKind, tcx: TyCtxt<'_, '_>) -> String {
    let cm = tcx.sess.source_map();
    match lnk {
        UpvarNode(s) => format!("Upvar node [{}]", cm.span_to_string(s)),
        ExprNode(s) => format!("Expr node [{}]", cm.span_to_string(s)),
        VarDefNode(s) => format!("Var def node [{}]", cm.span_to_string(s)),
        ExitNode => "Exit node".to_owned(),
    }
}
impl<'tcx> Visitor<'tcx> for IrMaps<'tcx> {
    fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'tcx> {
        NestedVisitorMap::OnlyBodies(&self.tcx.hir())
    }

    fn visit_fn(&mut self, fk: FnKind<'tcx>, fd: &'tcx hir::FnDecl,
                b: hir::BodyId, s: Span, id: HirId) {
        visit_fn(self, fk, fd, b, s, id);
    }

    fn visit_local(&mut self, l: &'tcx hir::Local) { visit_local(self, l); }
    fn visit_expr(&mut self, ex: &'tcx Expr) { visit_expr(self, ex); }
    fn visit_arm(&mut self, a: &'tcx hir::Arm) { visit_arm(self, a); }
}

fn check_mod_liveness<'tcx>(tcx: TyCtxt<'tcx, 'tcx>, module_def_id: DefId) {
    tcx.hir().visit_item_likes_in_module(
        module_def_id,
        &mut IrMaps::new(tcx, module_def_id).as_deep_visitor(),
    );
}

pub fn provide(providers: &mut Providers<'_>) {
    *providers = Providers {
        check_mod_liveness,
        ..*providers
    };
}
impl fmt::Debug for LiveNode {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "ln({})", self.get())
    }
}

impl fmt::Debug for Variable {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "v({})", self.get())
    }
}
// ______________________________________________________________________
// Creating ir_maps
//
// This is the first pass and the one that drives the main
// computation. It walks up and down the IR once. On the way down,
// we count for each function the number of variables as well as
// liveness nodes. A liveness node is basically an expression or
// capture clause that does something of interest: either it has
// interesting control flow or it uses/defines a local variable.
//
// On the way back up, at each function node we create liveness sets
// (we now know precisely how big to make our various vectors and so
// forth) and then do the data-flow propagation to compute the set
// of live variables at each program point.
//
// Finally, we run back over the IR one last time and, using the
// computed liveness, check various safety conditions. For example,
// there must be no live nodes at the definition site for a variable
// unless it has an initializer. Similarly, each non-mutable local
// variable must not be assigned if there is some successor
// assignment. And so forth.

impl LiveNode {
    fn is_valid(&self) -> bool {
        self.0 != u32::MAX
    }
}

fn invalid_node() -> LiveNode { LiveNode(u32::MAX) }
#[derive(Copy, Clone, Debug)]
struct CaptureInfo {
    ln: LiveNode,
    var_hid: HirId,
}

#[derive(Copy, Clone, Debug)]
struct LocalInfo {
    id: HirId,
    name: ast::Name,
    is_shorthand: bool,
}

#[derive(Copy, Clone, Debug)]
enum VarKind {
    Arg(HirId, ast::Name),
    Local(LocalInfo),
    CleanExit,
}
struct IrMaps<'tcx> {
    tcx: TyCtxt<'tcx, 'tcx>,
    body_owner: DefId,
    num_live_nodes: usize,
    num_vars: usize,
    live_node_map: HirIdMap<LiveNode>,
    variable_map: HirIdMap<Variable>,
    capture_info_map: HirIdMap<Rc<Vec<CaptureInfo>>>,
    var_kinds: Vec<VarKind>,
    lnks: Vec<LiveNodeKind>,
}

impl<'tcx> IrMaps<'tcx> {
    fn new(tcx: TyCtxt<'tcx, 'tcx>, body_owner: DefId) -> IrMaps<'tcx> {
        IrMaps {
            tcx,
            body_owner,
            num_live_nodes: 0,
            num_vars: 0,
            live_node_map: HirIdMap::default(),
            variable_map: HirIdMap::default(),
            capture_info_map: Default::default(),
            var_kinds: Vec::new(),
            lnks: Vec::new(),
        }
    }

    fn add_live_node(&mut self, lnk: LiveNodeKind) -> LiveNode {
        let ln = LiveNode(self.num_live_nodes as u32);
        self.lnks.push(lnk);
        self.num_live_nodes += 1;

        debug!("{:?} is of kind {}", ln,
               live_node_kind_to_string(lnk, self.tcx));

        ln
    }

    fn add_live_node_for_node(&mut self, hir_id: HirId, lnk: LiveNodeKind) {
        let ln = self.add_live_node(lnk);
        self.live_node_map.insert(hir_id, ln);

        debug!("{:?} is node {:?}", ln, hir_id);
    }

    fn add_variable(&mut self, vk: VarKind) -> Variable {
        let v = Variable(self.num_vars as u32);
        self.var_kinds.push(vk);
        self.num_vars += 1;

        match vk {
            Local(LocalInfo { id: node_id, .. }) | Arg(node_id, _) => {
                self.variable_map.insert(node_id, v);
            },
            CleanExit => {}
        }

        debug!("{:?} is {:?}", v, vk);

        v
    }

    fn variable(&self, hir_id: HirId, span: Span) -> Variable {
        match self.variable_map.get(&hir_id) {
            Some(&var) => var,
            None => {
                span_bug!(span, "no variable registered for id {:?}", hir_id);
            }
        }
    }

    fn variable_name(&self, var: Variable) -> String {
        match self.var_kinds[var.get()] {
            Local(LocalInfo { name, .. }) | Arg(_, name) => {
                name.to_string()
            },
            CleanExit => "<clean-exit>".to_owned()
        }
    }

    fn variable_is_shorthand(&self, var: Variable) -> bool {
        match self.var_kinds[var.get()] {
            Local(LocalInfo { is_shorthand, .. }) => is_shorthand,
            Arg(..) | CleanExit => false
        }
    }

    fn set_captures(&mut self, hir_id: HirId, cs: Vec<CaptureInfo>) {
        self.capture_info_map.insert(hir_id, Rc::new(cs));
    }

    fn lnk(&self, ln: LiveNode) -> LiveNodeKind {
        self.lnks[ln.get()]
    }
}
fn visit_fn<'a, 'tcx: 'a>(
    ir: &mut IrMaps<'tcx>,
    fk: FnKind<'tcx>,
    decl: &'tcx hir::FnDecl,
    body_id: hir::BodyId,
    sp: Span,
    id: hir::HirId,
) {
    debug!("visit_fn");

    // swap in a new set of IR maps for this function body:
    let def_id = ir.tcx.hir().local_def_id_from_hir_id(id);
    let mut fn_maps = IrMaps::new(ir.tcx, def_id);

    // Don't run unused pass for #[derive()]
    if let FnKind::Method(..) = fk {
        let parent = ir.tcx.hir().get_parent_item(id);
        if let Some(Node::Item(i)) = ir.tcx.hir().find_by_hir_id(parent) {
            if i.attrs.iter().any(|a| a.check_name(sym::automatically_derived)) {
                return;
            }
        }
    }

    debug!("creating fn_maps: {:p}", &fn_maps);

    let body = ir.tcx.hir().body(body_id);

    for arg in &body.arguments {
        let is_shorthand = match arg.pat.node {
            crate::hir::PatKind::Struct(..) => true,
            _ => false,
        };
        arg.pat.each_binding(|_bm, hir_id, _x, ident| {
            debug!("adding argument {:?}", hir_id);
            let var = if is_shorthand {
                Local(LocalInfo {
                    id: hir_id,
                    name: ident.name,
                    is_shorthand: true,
                })
            } else {
                Arg(hir_id, ident.name)
            };
            fn_maps.add_variable(var);
        })
    }

    // gather up the various local variables, significant expressions,
    // and so forth:
    intravisit::walk_fn(&mut fn_maps, fk, decl, body_id, sp, id);

    // compute liveness
    let mut lsets = Liveness::new(&mut fn_maps, body_id);
    let entry_ln = lsets.compute(&body.value);

    // check for various error conditions
    lsets.visit_body(body);
    lsets.warn_about_unused_args(body, entry_ln);
}
fn add_from_pat<'tcx>(ir: &mut IrMaps<'tcx>, pat: &P<hir::Pat>) {
    // For struct patterns, take note of which fields used shorthand
    // (`x` rather than `x: x`).
    let mut shorthand_field_ids = HirIdSet::default();
    let mut pats = VecDeque::new();
    pats.push_back(pat);
    while let Some(pat) = pats.pop_front() {
        use crate::hir::PatKind::*;
        match pat.node {
            Binding(_, _, _, ref inner_pat) => {
                pats.extend(inner_pat.iter());
            }
            Struct(_, ref fields, _) => {
                for field in fields {
                    if field.node.is_shorthand {
                        shorthand_field_ids.insert(field.node.pat.hir_id);
                    }
                }
            }
            Ref(ref inner_pat, _) |
            Box(ref inner_pat) => {
                pats.push_back(inner_pat);
            }
            TupleStruct(_, ref inner_pats, _) |
            Tuple(ref inner_pats, _) => {
                pats.extend(inner_pats.iter());
            }
            Slice(ref pre_pats, ref inner_pat, ref post_pats) => {
                pats.extend(pre_pats.iter());
                pats.extend(inner_pat.iter());
                pats.extend(post_pats.iter());
            }
            _ => {}
        }
    }

    pat.each_binding(|_bm, hir_id, _sp, ident| {
        ir.add_live_node_for_node(hir_id, VarDefNode(ident.span));
        ir.add_variable(Local(LocalInfo {
            id: hir_id,
            name: ident.name,
            is_shorthand: shorthand_field_ids.contains(&hir_id)
        }));
    });
}

fn visit_local<'tcx>(ir: &mut IrMaps<'tcx>, local: &'tcx hir::Local) {
    add_from_pat(ir, &local.pat);
    intravisit::walk_local(ir, local);
}

fn visit_arm<'tcx>(ir: &mut IrMaps<'tcx>, arm: &'tcx hir::Arm) {
    for pat in &arm.pats {
        add_from_pat(ir, pat);
    }
    intravisit::walk_arm(ir, arm);
}
fn visit_expr<'tcx>(ir: &mut IrMaps<'tcx>, expr: &'tcx Expr) {
    match expr.node {
        // live nodes required for uses or definitions of variables:
        hir::ExprKind::Path(hir::QPath::Resolved(_, ref path)) => {
            debug!("expr {}: path that leads to {:?}", expr.hir_id, path.res);
            if let Res::Local(var_hir_id) = path.res {
                let upvars = ir.tcx.upvars(ir.body_owner);
                if !upvars.map_or(false, |upvars| upvars.contains_key(&var_hir_id)) {
                    ir.add_live_node_for_node(expr.hir_id, ExprNode(expr.span));
                }
            }
            intravisit::walk_expr(ir, expr);
        }
        hir::ExprKind::Closure(..) => {
            // Interesting control flow (for loops can contain labeled
            // breaks or continues)
            ir.add_live_node_for_node(expr.hir_id, ExprNode(expr.span));

            // Make a live_node for each captured variable, with the span
            // being the location that the variable is used. This results
            // in better error messages than just pointing at the closure
            // construction site.
            let mut call_caps = Vec::new();
            let closure_def_id = ir.tcx.hir().local_def_id_from_hir_id(expr.hir_id);
            if let Some(upvars) = ir.tcx.upvars(closure_def_id) {
                let parent_upvars = ir.tcx.upvars(ir.body_owner);
                call_caps.extend(upvars.iter().filter_map(|(&var_id, upvar)| {
                    let has_parent = parent_upvars
                        .map_or(false, |upvars| upvars.contains_key(&var_id));
                    if !has_parent {
                        let upvar_ln = ir.add_live_node(UpvarNode(upvar.span));
                        Some(CaptureInfo { ln: upvar_ln, var_hid: var_id })
                    } else {
                        None
                    }
                }));
            }
            ir.set_captures(expr.hir_id, call_caps);
            let old_body_owner = ir.body_owner;
            ir.body_owner = closure_def_id;
            intravisit::walk_expr(ir, expr);
            ir.body_owner = old_body_owner;
        }

        // live nodes required for interesting control flow:
        hir::ExprKind::Match(..) |
        hir::ExprKind::While(..) |
        hir::ExprKind::Loop(..) => {
            ir.add_live_node_for_node(expr.hir_id, ExprNode(expr.span));
            intravisit::walk_expr(ir, expr);
        }
        hir::ExprKind::Binary(op, ..) if op.node.is_lazy() => {
            ir.add_live_node_for_node(expr.hir_id, ExprNode(expr.span));
            intravisit::walk_expr(ir, expr);
        }

        // otherwise, live nodes are not required:
        hir::ExprKind::Index(..) |
        hir::ExprKind::Field(..) |
        hir::ExprKind::Array(..) |
        hir::ExprKind::Call(..) |
        hir::ExprKind::MethodCall(..) |
        hir::ExprKind::Tup(..) |
        hir::ExprKind::Binary(..) |
        hir::ExprKind::AddrOf(..) |
        hir::ExprKind::Cast(..) |
        hir::ExprKind::DropTemps(..) |
        hir::ExprKind::Unary(..) |
        hir::ExprKind::Break(..) |
        hir::ExprKind::Continue(_) |
        hir::ExprKind::Lit(_) |
        hir::ExprKind::Ret(..) |
        hir::ExprKind::Block(..) |
        hir::ExprKind::Assign(..) |
        hir::ExprKind::AssignOp(..) |
        hir::ExprKind::Struct(..) |
        hir::ExprKind::Repeat(..) |
        hir::ExprKind::InlineAsm(..) |
        hir::ExprKind::Box(..) |
        hir::ExprKind::Yield(..) |
        hir::ExprKind::Type(..) |
        hir::ExprKind::Path(hir::QPath::TypeRelative(..)) => {
            intravisit::walk_expr(ir, expr);
        }
    }
}
// ______________________________________________________________________
// Computing liveness sets
//
// Actually we compute a bit more than just liveness, but we use
// the same basic propagation framework in all cases.

#[derive(Clone, Copy)]
struct RWU {
    reader: LiveNode,
    writer: LiveNode,
    used: bool
}

/// Conceptually, this is like a `Vec<RWU>`. But the number of `RWU`s can get
/// very large, so it uses a more compact representation that takes advantage
/// of the fact that when the number of `RWU`s is large, most of them have an
/// invalid reader and an invalid writer.
///
/// Each entry in `packed_rwus` is either INV_INV_FALSE, INV_INV_TRUE, or
/// an index into `unpacked_rwus`. In the common cases, this compacts the
/// 65 bits of data into 32; in the uncommon cases, it expands the 65 bits
/// into 96.
///
/// More compact representations are possible -- e.g., use only 2 bits per
/// packed `RWU` and make the secondary table a HashMap that maps from
/// indices to `RWU`s -- but this one strikes a good balance between size
/// and speed.
struct RWUTable {
    packed_rwus: Vec<u32>,
    unpacked_rwus: Vec<RWU>,
}

// A constant representing `RWU { reader: invalid_node(), writer: invalid_node(), used: false }`.
const INV_INV_FALSE: u32 = u32::MAX;

// A constant representing `RWU { reader: invalid_node(), writer: invalid_node(), used: true }`.
const INV_INV_TRUE: u32 = u32::MAX - 1;
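The two-level packing can be illustrated with a self-contained miniature. The `Table` type below is a toy stand-in for `RWUTable` (plain `u32::MAX` plays the role of `invalid_node()`), not the compiler's own implementation:

```rust
const INV_INV_FALSE: u32 = u32::MAX;
const INV_INV_TRUE: u32 = u32::MAX - 1;

// Toy RWU: u32::MAX means "invalid node".
#[derive(Clone, Copy, PartialEq, Debug)]
struct Rwu { reader: u32, writer: u32, used: bool }

// The common "no reader, no writer" states are encoded directly as
// sentinel words; anything else spills into a side table, and the packed
// word stores the side-table index instead.
struct Table { packed: Vec<u32>, unpacked: Vec<Rwu> }

impl Table {
    fn new(n: usize) -> Table {
        Table { packed: vec![INV_INV_FALSE; n], unpacked: vec![] }
    }
    fn get(&self, i: usize) -> Rwu {
        match self.packed[i] {
            INV_INV_FALSE => Rwu { reader: u32::MAX, writer: u32::MAX, used: false },
            INV_INV_TRUE => Rwu { reader: u32::MAX, writer: u32::MAX, used: true },
            idx => self.unpacked[idx as usize],
        }
    }
    fn assign(&mut self, i: usize, rwu: Rwu) {
        if rwu.reader == u32::MAX && rwu.writer == u32::MAX {
            self.packed[i] = if rwu.used { INV_INV_TRUE } else { INV_INV_FALSE };
        } else {
            // Stale side-table entries are simply abandoned, as in the
            // real table; we never shift entries around.
            self.packed[i] = self.unpacked.len() as u32;
            self.unpacked.push(rwu);
        }
    }
}

fn main() {
    let mut t = Table::new(4);
    assert!(!t.get(0).used); // everything starts as INV_INV_FALSE
    t.assign(2, Rwu { reader: 5, writer: u32::MAX, used: true });
    assert_eq!(t.get(2).reader, 5);
    assert_eq!(t.unpacked.len(), 1); // only entry 2 needed the side table
    t.assign(2, Rwu { reader: u32::MAX, writer: u32::MAX, used: true });
    assert_eq!(t.packed[2], INV_INV_TRUE); // back to a packed sentinel
    println!("packed: {:?}", t.packed);
}
```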
impl RWUTable {
    fn new(num_rwus: usize) -> RWUTable {
        RWUTable {
            packed_rwus: vec![INV_INV_FALSE; num_rwus],
            unpacked_rwus: vec![],
        }
    }

    fn get(&self, idx: usize) -> RWU {
        let packed_rwu = self.packed_rwus[idx];
        match packed_rwu {
            INV_INV_FALSE => RWU { reader: invalid_node(), writer: invalid_node(), used: false },
            INV_INV_TRUE => RWU { reader: invalid_node(), writer: invalid_node(), used: true },
            _ => self.unpacked_rwus[packed_rwu as usize],
        }
    }

    fn get_reader(&self, idx: usize) -> LiveNode {
        let packed_rwu = self.packed_rwus[idx];
        match packed_rwu {
            INV_INV_FALSE | INV_INV_TRUE => invalid_node(),
            _ => self.unpacked_rwus[packed_rwu as usize].reader,
        }
    }

    fn get_writer(&self, idx: usize) -> LiveNode {
        let packed_rwu = self.packed_rwus[idx];
        match packed_rwu {
            INV_INV_FALSE | INV_INV_TRUE => invalid_node(),
            _ => self.unpacked_rwus[packed_rwu as usize].writer,
        }
    }

    fn get_used(&self, idx: usize) -> bool {
        let packed_rwu = self.packed_rwus[idx];
        match packed_rwu {
            INV_INV_FALSE => false,
            INV_INV_TRUE => true,
            _ => self.unpacked_rwus[packed_rwu as usize].used,
        }
    }

    fn copy_packed(&mut self, dst_idx: usize, src_idx: usize) {
        self.packed_rwus[dst_idx] = self.packed_rwus[src_idx];
    }

    fn assign_unpacked(&mut self, idx: usize, rwu: RWU) {
        if rwu.reader == invalid_node() && rwu.writer == invalid_node() {
            // When we overwrite an indexing entry in `self.packed_rwus` with
            // `INV_INV_{TRUE,FALSE}` we don't remove the corresponding entry
            // from `self.unpacked_rwus`; it's not worth the effort, and we
            // can't have entries shifting around anyway.
            self.packed_rwus[idx] = if rwu.used {
                INV_INV_TRUE
            } else {
                INV_INV_FALSE
            };
        } else {
            // Add a new RWU to `unpacked_rwus` and make `packed_rwus[idx]`
            // point to it.
            self.packed_rwus[idx] = self.unpacked_rwus.len() as u32;
            self.unpacked_rwus.push(rwu);
        }
    }

    fn assign_inv_inv(&mut self, idx: usize) {
        self.packed_rwus[idx] = if self.get_used(idx) {
            INV_INV_TRUE
        } else {
            INV_INV_FALSE
        };
    }
}
#[derive(Copy, Clone)]
struct Specials {
    exit_ln: LiveNode,
    fallthrough_ln: LiveNode,
    clean_exit_var: Variable
}

const ACC_READ: u32 = 1;
const ACC_WRITE: u32 = 2;
const ACC_USE: u32 = 4;
struct Liveness<'a, 'tcx: 'a> {
    ir: &'a mut IrMaps<'tcx>,
    tables: &'a ty::TypeckTables<'tcx>,
    s: Specials,
    successors: Vec<LiveNode>,
    rwu_table: RWUTable,

    // mappings from loop node ID to LiveNode
    // ("break" label should map to loop node ID,
    // it probably doesn't now)
    break_ln: HirIdMap<LiveNode>,
    cont_ln: HirIdMap<LiveNode>,
}

impl<'a, 'tcx> Liveness<'a, 'tcx> {
    fn new(ir: &'a mut IrMaps<'tcx>, body: hir::BodyId) -> Liveness<'a, 'tcx> {
        // Special nodes and variables:
        // - exit_ln represents the end of the fn, either by return or panic
        // - implicit_ret_var is a pseudo-variable that represents
        //   an implicit return
        let specials = Specials {
            exit_ln: ir.add_live_node(ExitNode),
            fallthrough_ln: ir.add_live_node(ExitNode),
            clean_exit_var: ir.add_variable(CleanExit)
        };

        let tables = ir.tcx.body_tables(body);

        let num_live_nodes = ir.num_live_nodes;
        let num_vars = ir.num_vars;

        Liveness {
            ir,
            tables,
            s: specials,
            successors: vec![invalid_node(); num_live_nodes],
            rwu_table: RWUTable::new(num_live_nodes * num_vars),
            break_ln: Default::default(),
            cont_ln: Default::default(),
        }
    }
    fn live_node(&self, hir_id: HirId, span: Span) -> LiveNode {
        match self.ir.live_node_map.get(&hir_id) {
            Some(&ln) => ln,
            None => {
                // This must be a mismatch between the ir_map construction
                // above and the propagation code below; the two sets of
                // code have to agree about which AST nodes are worth
                // creating liveness nodes for.
                span_bug!(
                    span,
                    "no live node registered for node {:?}",
                    hir_id);
            }
        }
    }

    fn variable(&self, hir_id: HirId, span: Span) -> Variable {
        self.ir.variable(hir_id, span)
    }

    fn pat_bindings<F>(&mut self, pat: &hir::Pat, mut f: F) where
        F: FnMut(&mut Liveness<'a, 'tcx>, LiveNode, Variable, Span, HirId),
    {
        pat.each_binding(|_bm, hir_id, sp, n| {
            let ln = self.live_node(hir_id, sp);
            let var = self.variable(hir_id, n.span);
            f(self, ln, var, n.span, hir_id);
        })
    }

    fn arm_pats_bindings<F>(&mut self, pat: Option<&hir::Pat>, f: F) where
        F: FnMut(&mut Liveness<'a, 'tcx>, LiveNode, Variable, Span, HirId),
    {
        if let Some(pat) = pat {
            self.pat_bindings(pat, f);
        }
    }

    fn define_bindings_in_pat(&mut self, pat: &hir::Pat, succ: LiveNode)
                              -> LiveNode {
        self.define_bindings_in_arm_pats(Some(pat), succ)
    }

    fn define_bindings_in_arm_pats(&mut self, pat: Option<&hir::Pat>, succ: LiveNode)
                                   -> LiveNode {
        let mut succ = succ;
        self.arm_pats_bindings(pat, |this, ln, var, _sp, _id| {
            this.init_from_succ(ln, succ);
            this.define(ln, var);
            succ = ln;
        });
        succ
    }

    fn idx(&self, ln: LiveNode, var: Variable) -> usize {
        ln.get() * self.ir.num_vars + var.get()
    }
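The `(live node, variable)` grid behind this index is flattened row-major, so all of a node's variables sit contiguously in the table. As a free-standing sketch (this `idx` is a toy version of the method above, with `num_vars` passed explicitly):

```rust
// Row-major flattening: node `ln`'s entries occupy the contiguous range
// [ln * num_vars, (ln + 1) * num_vars).
fn idx(ln: usize, var: usize, num_vars: usize) -> usize {
    ln * num_vars + var
}

fn main() {
    let num_vars = 3;
    // Node 2's three variables occupy slots 6, 7, 8.
    assert_eq!(idx(2, 0, num_vars), 6);
    assert_eq!(idx(2, 2, num_vars), 8);
    // Adjacent variables of one node are adjacent in memory, which is what
    // makes per-node scans (as in `indices2` below) cache-friendly.
    assert_eq!(idx(1, 1, num_vars) + 1, idx(1, 2, num_vars));
    println!("ok");
}
```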
    fn live_on_entry(&self, ln: LiveNode, var: Variable) -> Option<LiveNodeKind> {
        assert!(ln.is_valid());
        let reader = self.rwu_table.get_reader(self.idx(ln, var));
        if reader.is_valid() { Some(self.ir.lnk(reader)) } else { None }
    }

    // Is this variable live on entry to any of its successor nodes?
    fn live_on_exit(&self, ln: LiveNode, var: Variable)
                    -> Option<LiveNodeKind> {
        let successor = self.successors[ln.get()];
        self.live_on_entry(successor, var)
    }

    fn used_on_entry(&self, ln: LiveNode, var: Variable) -> bool {
        assert!(ln.is_valid());
        self.rwu_table.get_used(self.idx(ln, var))
    }

    fn assigned_on_entry(&self, ln: LiveNode, var: Variable)
                         -> Option<LiveNodeKind> {
        assert!(ln.is_valid());
        let writer = self.rwu_table.get_writer(self.idx(ln, var));
        if writer.is_valid() { Some(self.ir.lnk(writer)) } else { None }
    }

    fn assigned_on_exit(&self, ln: LiveNode, var: Variable)
                        -> Option<LiveNodeKind> {
        let successor = self.successors[ln.get()];
        self.assigned_on_entry(successor, var)
    }

    fn indices2<F>(&mut self, ln: LiveNode, succ_ln: LiveNode, mut op: F) where
        F: FnMut(&mut Liveness<'a, 'tcx>, usize, usize),
    {
        let node_base_idx = self.idx(ln, Variable(0));
        let succ_base_idx = self.idx(succ_ln, Variable(0));
        for var_idx in 0..self.ir.num_vars {
            op(self, node_base_idx + var_idx, succ_base_idx + var_idx);
        }
    }

    fn write_vars<F>(&self,
                     wr: &mut dyn Write,
                     ln: LiveNode,
                     mut test: F)
                     -> io::Result<()> where
        F: FnMut(usize) -> LiveNode,
    {
        let node_base_idx = self.idx(ln, Variable(0));
        for var_idx in 0..self.ir.num_vars {
            let idx = node_base_idx + var_idx;
            if test(idx).is_valid() {
                write!(wr, " {:?}", Variable(var_idx as u32))?;
            }
        }
        Ok(())
    }

    #[allow(unused_must_use)]
    fn ln_str(&self, ln: LiveNode) -> String {
        let mut wr = Vec::new();
        {
            let wr = &mut wr as &mut dyn Write;
            write!(wr, "[ln({:?}) of kind {:?} reads", ln.get(), self.ir.lnk(ln));
            self.write_vars(wr, ln, |idx| self.rwu_table.get_reader(idx));
            write!(wr, " writes");
            self.write_vars(wr, ln, |idx| self.rwu_table.get_writer(idx));
            write!(wr, " precedes {:?}]", self.successors[ln.get()]);
        }
        String::from_utf8(wr).unwrap()
    }
    fn init_empty(&mut self, ln: LiveNode, succ_ln: LiveNode) {
        self.successors[ln.get()] = succ_ln;

        // It is not necessary to initialize the RWUs here because they are all
        // set to INV_INV_FALSE when they are created, and the sets only grow
        // during iterations.
    }

    fn init_from_succ(&mut self, ln: LiveNode, succ_ln: LiveNode) {
        // more efficient version of init_empty() / merge_from_succ()
        self.successors[ln.get()] = succ_ln;

        self.indices2(ln, succ_ln, |this, idx, succ_idx| {
            this.rwu_table.copy_packed(idx, succ_idx);
        });
        debug!("init_from_succ(ln={}, succ={})",
               self.ln_str(ln), self.ln_str(succ_ln));
    }

    fn merge_from_succ(&mut self,
                       ln: LiveNode,
                       succ_ln: LiveNode,
                       first_merge: bool)
                       -> bool {
        if ln == succ_ln { return false; }

        let mut changed = false;
        self.indices2(ln, succ_ln, |this, idx, succ_idx| {
            let mut rwu = this.rwu_table.get(idx);
            let succ_rwu = this.rwu_table.get(succ_idx);
            if succ_rwu.reader.is_valid() && !rwu.reader.is_valid() {
                rwu.reader = succ_rwu.reader;
                changed = true;
            }

            if succ_rwu.writer.is_valid() && !rwu.writer.is_valid() {
                rwu.writer = succ_rwu.writer;
                changed = true;
            }

            if succ_rwu.used && !rwu.used {
                rwu.used = true;
                changed = true;
            }

            if changed {
                this.rwu_table.assign_unpacked(idx, rwu);
            }
        });

        debug!("merge_from_succ(ln={:?}, succ={}, first_merge={}, changed={})",
               ln, self.ln_str(succ_ln), first_merge, changed);
        changed
    }

    // Indicates that a local variable was *defined*; we know that no
    // uses of the variable can precede the definition (resolve checks
    // this) so we just clear out all the data.
    fn define(&mut self, writer: LiveNode, var: Variable) {
        let idx = self.idx(writer, var);
        self.rwu_table.assign_inv_inv(idx);

        debug!("{:?} defines {:?} (idx={}): {}", writer, var,
               idx, self.ln_str(writer));
    }

    // Either read, write, or both depending on the acc bitset
    fn acc(&mut self, ln: LiveNode, var: Variable, acc: u32) {
        debug!("{:?} accesses[{:x}] {:?}: {}",
               ln, acc, var, self.ln_str(ln));

        let idx = self.idx(ln, var);
        let mut rwu = self.rwu_table.get(idx);

        if (acc & ACC_WRITE) != 0 {
            rwu.reader = invalid_node();
            rwu.writer = ln;
        }

        // Important: if we both read/write, must do read second
        // or else the write will override.
        if (acc & ACC_READ) != 0 {
            rwu.reader = ln;
        }

        if (acc & ACC_USE) != 0 {
            rwu.used = true;
        }

        self.rwu_table.assign_unpacked(idx, rwu);
    }
    fn compute(&mut self, body: &hir::Expr) -> LiveNode {
        debug!("compute: using id for body, {}",
               self.ir.tcx.hir().hir_to_pretty_string(body.hir_id));

        // the fallthrough exit is only for those cases where we do not
        // explicitly return:
        let s = self.s;
        self.init_from_succ(s.fallthrough_ln, s.exit_ln);
        self.acc(s.fallthrough_ln, s.clean_exit_var, ACC_READ);

        let entry_ln = self.propagate_through_expr(body, s.fallthrough_ln);

        // hack to skip the loop unless debug! is enabled:
        debug!("^^ liveness computation results for body {} (entry={:?})", {
            for ln_idx in 0..self.ir.num_live_nodes {
                debug!("{:?}", self.ln_str(LiveNode(ln_idx as u32)));
            }
            body.hir_id
        }, entry_ln);

        entry_ln
    }

    fn propagate_through_block(&mut self, blk: &hir::Block, succ: LiveNode)
                               -> LiveNode {
        if blk.targeted_by_break {
            self.break_ln.insert(blk.hir_id, succ);
        }
        let succ = self.propagate_through_opt_expr(blk.expr.as_ref().map(|e| &**e), succ);
        blk.stmts.iter().rev().fold(succ, |succ, stmt| {
            self.propagate_through_stmt(stmt, succ)
        })
    }

    fn propagate_through_stmt(&mut self, stmt: &hir::Stmt, succ: LiveNode)
                              -> LiveNode {
        match stmt.node {
            hir::StmtKind::Local(ref local) => {
                // Note: we mark the variable as defined regardless of whether
                // there is an initializer. Initially I had thought to only mark
                // the live variable as defined if it was initialized, and then we
                // could check for uninit variables just by scanning what is live
                // at the start of the function. But that doesn't work so well for
                // immutable variables defined in a loop:
                //     loop { let x; x = 5; }
                // because the "assignment" loops back around and generates an error.
                //
                // So now we just check that variables defined w/o an
                // initializer are not live at the point of their
                // initialization, which is mildly more complex than checking
                // once at the func header but otherwise equivalent.

                let succ = self.propagate_through_opt_expr(local.init.as_ref().map(|e| &**e), succ);
                self.define_bindings_in_pat(&local.pat, succ)
            }
            hir::StmtKind::Item(..) => succ,
            hir::StmtKind::Expr(ref expr) | hir::StmtKind::Semi(ref expr) => {
                self.propagate_through_expr(&expr, succ)
            }
        }
    }
1013 fn propagate_through_exprs(&mut self, exprs: &[Expr], succ: LiveNode)
1015 exprs.iter().rev().fold(succ, |succ, expr| {
1016 self.propagate_through_expr(&expr, succ)
1020 fn propagate_through_opt_expr(&mut self,
1021 opt_expr: Option<&Expr>,
1024 opt_expr.map_or(succ, |expr| self.propagate_through_expr(expr, succ))
1027 fn propagate_through_expr(&mut self, expr: &Expr, succ: LiveNode)
1029 debug!("propagate_through_expr: {}", self.ir.tcx.hir().hir_to_pretty_string(expr.hir_id));
1032 // Interesting cases with control flow or which gen/kill
1033 hir::ExprKind::Path(hir::QPath::Resolved(_, ref path)) => {
1034 self.access_path(expr.hir_id, path, succ, ACC_READ | ACC_USE)
1037 hir::ExprKind::Field(ref e, _) => {
1038 self.propagate_through_expr(&e, succ)
1041 hir::ExprKind::Closure(..) => {
1042 debug!("{} is an ExprKind::Closure",
1043 self.ir.tcx.hir().hir_to_pretty_string(expr.hir_id));
1045 // the construction of a closure itself is not important,
1046 // but we have to consider the closed over variables.
1047 let caps = self.ir.capture_info_map.get(&expr.hir_id).cloned().unwrap_or_else(||
1048 span_bug!(expr.span, "no registered caps"));
1050 caps.iter().rev().fold(succ, |succ, cap| {
1051 self.init_from_succ(cap.ln, succ);
1052 let var = self.variable(cap.var_hid, expr.span);
1053 self.acc(cap.ln, var, ACC_READ | ACC_USE);
1058 hir::ExprKind::While(ref cond, ref blk, _) => {
1059 self.propagate_through_loop(expr, WhileLoop(&cond), &blk, succ)
1062 // Note that labels have been resolved, so we don't need to look
1063 // at the label ident
1064 hir::ExprKind::Loop(ref blk, _, _) => {
1065 self.propagate_through_loop(expr, LoopLoop, &blk, succ)
1068 hir::ExprKind::Match(ref e, ref arms, _) => {
1083 let ln = self.live_node(expr.hir_id, expr.span);
1084 self.init_empty(ln, succ);
1085 let mut first_merge = true;
1087 let body_succ = self.propagate_through_expr(&arm.body, succ);
1089 let guard_succ = self.propagate_through_opt_expr(
1090 arm.guard.as_ref().map(|hir::Guard::If(e)| &**e),
1093 // only consider the first pattern; any later patterns must have
1094 // the same bindings, and we also consider the first pattern to be
1095 // the "authoritative" set of ids
1097 self.define_bindings_in_arm_pats(arm.pats.first().map(|p| &**p),
1099 self.merge_from_succ(ln, arm_succ, first_merge);
1100 first_merge = false;
1102 self.propagate_through_expr(&e, ln)
1105 hir::ExprKind::Ret(ref o_e) => {
1106 // ignore succ and subst exit_ln:
1107 let exit_ln = self.s.exit_ln;
1108 self.propagate_through_opt_expr(o_e.as_ref().map(|e| &**e), exit_ln)
1111 hir::ExprKind::Break(label, ref opt_expr) => {
1112 // Find which label this break jumps to
1113 let target = match label.target_id {
1114 Ok(hir_id) => self.break_ln.get(&hir_id),
1115 Err(err) => span_bug!(expr.span, "loop scope error: {}", err),
1118 // Now that we know the label we're going to,
1119 // look it up in the break loop nodes table
1122 Some(b) => self.propagate_through_opt_expr(opt_expr.as_ref().map(|e| &**e), b),
1123 None => span_bug!(expr.span, "break to unknown label")
1127 hir::ExprKind::Continue(label) => {
1128 // Find which label this expr continues to
1129 let sc = label.target_id.unwrap_or_else(|err|
1130 span_bug!(expr.span, "loop scope error: {}", err));
1132 // Now that we know the label we're going to,
1133 // look it up in the continue loop nodes table
1134 self.cont_ln.get(&sc).cloned().unwrap_or_else(||
1135 span_bug!(expr.span, "continue to unknown label"))
            hir::ExprKind::Assign(ref l, ref r) => {
                // see comment on places in
                // propagate_through_place_components()
                let succ = self.write_place(&l, succ, ACC_WRITE);
                let succ = self.propagate_through_place_components(&l, succ);
                self.propagate_through_expr(&r, succ)
            }

            hir::ExprKind::AssignOp(_, ref l, ref r) => {
                // an overloaded assign op is like a method call
                if self.tables.is_method_call(expr) {
                    let succ = self.propagate_through_expr(&l, succ);
                    self.propagate_through_expr(&r, succ)
                } else {
                    // see comment on places in
                    // propagate_through_place_components()
                    let succ = self.write_place(&l, succ, ACC_WRITE|ACC_READ);
                    let succ = self.propagate_through_expr(&r, succ);
                    self.propagate_through_place_components(&l, succ)
                }
            }
            // Uninteresting cases: just propagate in rev exec order

            hir::ExprKind::Array(ref exprs) => {
                self.propagate_through_exprs(exprs, succ)
            }

            hir::ExprKind::Struct(_, ref fields, ref with_expr) => {
                let succ = self.propagate_through_opt_expr(with_expr.as_ref().map(|e| &**e), succ);
                fields.iter().rev().fold(succ, |succ, field| {
                    self.propagate_through_expr(&field.expr, succ)
                })
            }

            hir::ExprKind::Call(ref f, ref args) => {
                let m = self.ir.tcx.hir().get_module_parent_by_hir_id(expr.hir_id);
                let succ = if self.ir.tcx.is_ty_uninhabited_from(m, self.tables.expr_ty(expr)) {
                    self.s.exit_ln
                } else {
                    succ
                };
                let succ = self.propagate_through_exprs(args, succ);
                self.propagate_through_expr(&f, succ)
            }

            hir::ExprKind::MethodCall(.., ref args) => {
                let m = self.ir.tcx.hir().get_module_parent_by_hir_id(expr.hir_id);
                let succ = if self.ir.tcx.is_ty_uninhabited_from(m, self.tables.expr_ty(expr)) {
                    self.s.exit_ln
                } else {
                    succ
                };
                self.propagate_through_exprs(args, succ)
            }

            hir::ExprKind::Tup(ref exprs) => {
                self.propagate_through_exprs(exprs, succ)
            }
            hir::ExprKind::Binary(op, ref l, ref r) if op.node.is_lazy() => {
                let r_succ = self.propagate_through_expr(&r, succ);

                let ln = self.live_node(expr.hir_id, expr.span);
                self.init_from_succ(ln, succ);
                self.merge_from_succ(ln, r_succ, false);

                self.propagate_through_expr(&l, ln)
            }

            hir::ExprKind::Index(ref l, ref r) |
            hir::ExprKind::Binary(_, ref l, ref r) => {
                let r_succ = self.propagate_through_expr(&r, succ);
                self.propagate_through_expr(&l, r_succ)
            }

            hir::ExprKind::Box(ref e) |
            hir::ExprKind::AddrOf(_, ref e) |
            hir::ExprKind::Cast(ref e, _) |
            hir::ExprKind::Type(ref e, _) |
            hir::ExprKind::DropTemps(ref e) |
            hir::ExprKind::Unary(_, ref e) |
            hir::ExprKind::Yield(ref e) |
            hir::ExprKind::Repeat(ref e, _) => {
                self.propagate_through_expr(&e, succ)
            }
            hir::ExprKind::InlineAsm(ref ia, ref outputs, ref inputs) => {
                let succ = ia.outputs.iter().zip(outputs).rev().fold(succ, |succ, (o, output)| {
                    // see comment on places
                    // in propagate_through_place_components()
                    if o.is_indirect {
                        self.propagate_through_expr(output, succ)
                    } else {
                        let acc = if o.is_rw { ACC_WRITE|ACC_READ } else { ACC_WRITE };
                        let succ = self.write_place(output, succ, acc);
                        self.propagate_through_place_components(output, succ)
                    }
                });

                // Inputs are executed first. Propagate last because of rev order
                self.propagate_through_exprs(inputs, succ)
            }
            hir::ExprKind::Lit(..) | hir::ExprKind::Err |
            hir::ExprKind::Path(hir::QPath::TypeRelative(..)) => {
                succ
            }

            // Note that labels have been resolved, so we don't need to look
            // at the label ident
            hir::ExprKind::Block(ref blk, _) => {
                self.propagate_through_block(&blk, succ)
            }
        }
    }
    fn propagate_through_place_components(&mut self,
                                          expr: &Expr,
                                          succ: LiveNode)
                                          -> LiveNode {
        // In general, the full flow graph structure for an
        // assignment/move/etc can be handled in one of two ways,
        // depending on whether what is being assigned is a "tracked
        // value" or not. A tracked value is basically a local
        // variable or argument.
        //
        // The two kinds of graphs are:
        //
        //      Tracked place          Untracked place
        // ----------------------++-----------------------
        //      (rvalue)         ||     (rvalue)
        //      (write of place) ||     (place components)
        // ----------------------++-----------------------
        //
        // I will cover the two cases in turn:
        //
        // # Tracked places
        //
        // A tracked place is a local variable/argument `x`. In
        // these cases, the link_node where the write occurs is linked
        // to the node id of `x`. The `write_place()` routine generates
        // the contents of this node. There are no subcomponents to
        // consider.
        //
        // # Non-tracked places
        //
        // These are places like `x[5]` or `x.f`. In that case, we
        // basically ignore the value which is written to but generate
        // reads for the components---`x` in these two examples. The
        // component reads are generated by
        // `propagate_through_place_components()` (this fn).
        //
        // # Illegal places
        //
        // It is still possible to observe assignments to non-places;
        // these errors are detected in the later pass borrowck. We
        // just ignore such cases and treat them as reads.
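        //
        // For example (an illustrative sketch, not code from this pass):
        // for `x = v`, `x` is tracked, so `write_place()` links the write
        // directly to `x` and this function adds nothing; for `x.f = v` or
        // `x[i] = v`, the write itself is untracked, so this function
        // instead generates reads of the component expressions (`x`, and
        // `i` in the index case).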
        match expr.node {
            hir::ExprKind::Path(_) => succ,
            hir::ExprKind::Field(ref e, _) => self.propagate_through_expr(&e, succ),
            _ => self.propagate_through_expr(expr, succ)
        }
    }
    // see comment on propagate_through_place()
    fn write_place(&mut self, expr: &Expr, succ: LiveNode, acc: u32) -> LiveNode {
        match expr.node {
            hir::ExprKind::Path(hir::QPath::Resolved(_, ref path)) => {
                self.access_path(expr.hir_id, path, succ, acc)
            }

            // We do not track other places, so just propagate through
            // to their subcomponents. Also, it may happen that
            // non-places occur here, because those are detected in the
            // later pass borrowck.
            _ => succ
        }
    }
    fn access_var(&mut self, hir_id: HirId, nid: NodeId, succ: LiveNode, acc: u32, span: Span)
                  -> LiveNode {
        let ln = self.live_node(hir_id, span);
        if acc != 0 {
            self.init_from_succ(ln, succ);
            let var_hid = self.ir.tcx.hir().node_to_hir_id(nid);
            let var = self.variable(var_hid, span);
            self.acc(ln, var, acc);
        }
        ln
    }
    fn access_path(&mut self, hir_id: HirId, path: &hir::Path, succ: LiveNode, acc: u32)
                   -> LiveNode {
        match path.res {
            Res::Local(hid) => {
                let upvars = self.ir.tcx.upvars(self.ir.body_owner);
                if !upvars.map_or(false, |upvars| upvars.contains_key(&hid)) {
                    let nid = self.ir.tcx.hir().hir_to_node_id(hid);
                    self.access_var(hir_id, nid, succ, acc, path.span)
                } else {
                    succ
                }
            }
            _ => succ
        }
    }
    fn propagate_through_loop(&mut self,
                              expr: &Expr,
                              kind: LoopKind<'_>,
                              body: &hir::Block,
                              succ: LiveNode)
                              -> LiveNode {
        // We model control flow like this:

        let mut first_merge = true;
        let ln = self.live_node(expr.hir_id, expr.span);
        self.init_empty(ln, succ);
        match kind {
            LoopLoop => {}
            _ => {
                // If this is not a `loop` loop, then it's possible we bypass
                // the body altogether. Otherwise, the only way is via a `break`
                // in the loop body.
                self.merge_from_succ(ln, succ, first_merge);
                first_merge = false;
            }
        }

        debug!("propagate_through_loop: using id for loop body {} {}",
               expr.hir_id, self.ir.tcx.hir().hir_to_pretty_string(body.hir_id));

        self.break_ln.insert(expr.hir_id, succ);

        let cond_ln = match kind {
            LoopLoop => ln,
            WhileLoop(ref cond) => self.propagate_through_expr(&cond, ln),
        };

        self.cont_ln.insert(expr.hir_id, cond_ln);

        let body_ln = self.propagate_through_block(body, cond_ln);

        // repeat until fixed point is reached:
        while self.merge_from_succ(ln, body_ln, first_merge) {
            first_merge = false;

            let new_cond_ln = match kind {
                LoopLoop => ln,
                WhileLoop(ref cond) => {
                    self.propagate_through_expr(&cond, ln)
                }
            };
            assert_eq!(cond_ln, new_cond_ln);
            assert_eq!(body_ln, self.propagate_through_block(body, cond_ln));
        }

        cond_ln
    }
}
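            // An illustrative sketch (not code from this pass): for
            // `loop { if c { break; } n += 1; }`, the first pass over the
            // body adds `c` and `n` to the live set at `ln`; the second
            // pass contributes nothing new, so `merge_from_succ` returns
            // false and the fixed point has been reached.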
// _______________________________________________________________________
// Checking for error conditions

impl<'a, 'tcx> Visitor<'tcx> for Liveness<'a, 'tcx> {
    fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'tcx> {
        NestedVisitorMap::None
    }

    fn visit_local(&mut self, l: &'tcx hir::Local) {
        check_local(self, l);
    }

    fn visit_expr(&mut self, ex: &'tcx Expr) {
        check_expr(self, ex);
    }

    fn visit_arm(&mut self, a: &'tcx hir::Arm) {
        check_arm(self, a);
    }
}
fn check_local<'a, 'tcx>(this: &mut Liveness<'a, 'tcx>, local: &'tcx hir::Local) {
    match local.init {
        Some(_) => {
            this.warn_about_unused_or_dead_vars_in_pat(&local.pat);
        },
        None => {
            this.pat_bindings(&local.pat, |this, ln, var, sp, id| {
                let span = local.pat.simple_ident().map_or(sp, |ident| ident.span);
                this.warn_about_unused(vec![span], id, ln, var);
            })
        }
    }

    intravisit::walk_local(this, local);
}
fn check_arm<'a, 'tcx>(this: &mut Liveness<'a, 'tcx>, arm: &'tcx hir::Arm) {
    // Only consider the variable from the first pattern; any later patterns must have
    // the same bindings, and we also consider the first pattern to be the "authoritative" set of
    // ids. However, we should take the spans of variables with the same name from the later
    // patterns so the suggestions to prefix with underscores will apply to those too.
    let mut vars: BTreeMap<String, (LiveNode, Variable, HirId, Vec<Span>)> = Default::default();

    for pat in &arm.pats {
        this.arm_pats_bindings(Some(&*pat), |this, ln, var, sp, id| {
            let name = this.ir.variable_name(var);
            vars.entry(name)
                .and_modify(|(.., spans)| {
                    spans.push(sp);
                })
                .or_insert_with(|| {
                    (ln, var, id, vec![sp])
                });
        });
    }

    for (_, (ln, var, id, spans)) in vars {
        this.warn_about_unused(spans, id, ln, var);
    }

    intravisit::walk_arm(this, arm);
}
fn check_expr<'a, 'tcx>(this: &mut Liveness<'a, 'tcx>, expr: &'tcx Expr) {
    match expr.node {
        hir::ExprKind::Assign(ref l, _) => {
            this.check_place(&l);

            intravisit::walk_expr(this, expr);
        }

        hir::ExprKind::AssignOp(_, ref l, _) => {
            if !this.tables.is_method_call(expr) {
                this.check_place(&l);
            }

            intravisit::walk_expr(this, expr);
        }
        hir::ExprKind::InlineAsm(ref ia, ref outputs, ref inputs) => {
            for input in inputs {
                this.visit_expr(input);
            }

            // Output operands must be places
            for (o, output) in ia.outputs.iter().zip(outputs) {
                if !o.is_indirect {
                    this.check_place(output);
                }
                this.visit_expr(output);
            }

            intravisit::walk_expr(this, expr);
        }
        // no correctness conditions related to liveness
        hir::ExprKind::Call(..) | hir::ExprKind::MethodCall(..) |
        hir::ExprKind::Match(..) | hir::ExprKind::While(..) | hir::ExprKind::Loop(..) |
        hir::ExprKind::Index(..) | hir::ExprKind::Field(..) |
        hir::ExprKind::Array(..) | hir::ExprKind::Tup(..) | hir::ExprKind::Binary(..) |
        hir::ExprKind::Cast(..) | hir::ExprKind::DropTemps(..) | hir::ExprKind::Unary(..) |
        hir::ExprKind::Ret(..) | hir::ExprKind::Break(..) | hir::ExprKind::Continue(..) |
        hir::ExprKind::Lit(_) | hir::ExprKind::Block(..) | hir::ExprKind::AddrOf(..) |
        hir::ExprKind::Struct(..) | hir::ExprKind::Repeat(..) |
        hir::ExprKind::Closure(..) | hir::ExprKind::Path(_) | hir::ExprKind::Yield(..) |
        hir::ExprKind::Box(..) | hir::ExprKind::Type(..) | hir::ExprKind::Err => {
            intravisit::walk_expr(this, expr);
        }
    }
}
impl<'a, 'tcx> Liveness<'a, 'tcx> {
    fn check_place(&mut self, expr: &'tcx Expr) {
        match expr.node {
            hir::ExprKind::Path(hir::QPath::Resolved(_, ref path)) => {
                if let Res::Local(var_hid) = path.res {
                    let upvars = self.ir.tcx.upvars(self.ir.body_owner);
                    if !upvars.map_or(false, |upvars| upvars.contains_key(&var_hid)) {
                        // Assignment to an immutable variable or argument: only legal
                        // if there is no later assignment. If this local is actually
                        // mutable, then check for a reassignment to flag the mutability
                        // as being used.
                        let ln = self.live_node(expr.hir_id, expr.span);
                        let var = self.variable(var_hid, expr.span);
                        self.warn_about_dead_assign(expr.span, expr.hir_id, ln, var);
                    }
                }
            }
            _ => {
                // For other kinds of places, no checks are required,
                // and any embedded expressions are actually rvalues
                intravisit::walk_expr(self, expr);
            }
        }
    }
    fn should_warn(&self, var: Variable) -> Option<String> {
        let name = self.ir.variable_name(var);
        if name.is_empty() || name.as_bytes()[0] == b'_' {
            None
        } else {
            Some(name)
        }
    }
    fn warn_about_unused_args(&self, body: &hir::Body, entry_ln: LiveNode) {
        for arg in &body.arguments {
            arg.pat.each_binding(|_bm, hir_id, _, ident| {
                let sp = ident.span;
                let var = self.variable(hir_id, sp);
                // Ignore unused self.
                if ident.name != kw::SelfLower {
                    if !self.warn_about_unused(vec![sp], hir_id, entry_ln, var) {
                        if self.live_on_entry(entry_ln, var).is_none() {
                            self.report_dead_assign(hir_id, sp, var, true);
                        }
                    }
                }
            })
        }
    }
    fn warn_about_unused_or_dead_vars_in_pat(&mut self, pat: &hir::Pat) {
        self.pat_bindings(pat, |this, ln, var, sp, id| {
            if !this.warn_about_unused(vec![sp], id, ln, var) {
                this.warn_about_dead_assign(sp, id, ln, var);
            }
        });
    }
    fn warn_about_unused(&self,
                         spans: Vec<Span>,
                         hir_id: HirId,
                         ln: LiveNode,
                         var: Variable)
                         -> bool {
        if !self.used_on_entry(ln, var) {
            let r = self.should_warn(var);
            if let Some(name) = r {
                // annoying: for parameters in funcs like `fn(x: i32)
                // {ret}`, there is only one node, so asking about
                // assigned_on_exit() is not meaningful.
                let is_assigned = if ln == self.s.exit_ln {
                    false
                } else {
                    self.assigned_on_exit(ln, var).is_some()
                };

                if is_assigned {
                    self.ir.tcx.lint_hir_note(
                        lint::builtin::UNUSED_VARIABLES,
                        hir_id,
                        spans,
                        &format!("variable `{}` is assigned to, but never used", name),
                        &format!("consider using `_{}` instead", name),
                    );
                } else if name != "self" {
                    let mut err = self.ir.tcx.struct_span_lint_hir(
                        lint::builtin::UNUSED_VARIABLES,
                        hir_id,
                        spans.clone(),
                        &format!("unused variable: `{}`", name),
                    );

                    if self.ir.variable_is_shorthand(var) {
                        if let Node::Binding(pat) = self.ir.tcx.hir().get_by_hir_id(hir_id) {
                            // Handle `ref` and `ref mut`.
                            let spans = spans.iter()
                                .map(|_span| (pat.span, format!("{}: _", name)))
                                .collect();

                            err.multipart_suggestion(
                                "try ignoring the field",
                                spans,
                                Applicability::MachineApplicable,
                            );
                        }
                    } else {
                        err.multipart_suggestion(
                            "consider prefixing with an underscore",
                            spans.iter().map(|span| (*span, format!("_{}", name))).collect(),
                            Applicability::MachineApplicable,
                        );
                    }

                    err.emit()
                }
            }
            true
        } else {
            false
        }
    }
    fn warn_about_dead_assign(&self, sp: Span, hir_id: HirId, ln: LiveNode, var: Variable) {
        if self.live_on_exit(ln, var).is_none() {
            self.report_dead_assign(hir_id, sp, var, false);
        }
    }

    fn report_dead_assign(&self, hir_id: HirId, sp: Span, var: Variable, is_argument: bool) {
        if let Some(name) = self.should_warn(var) {
            if is_argument {
                self.ir.tcx.struct_span_lint_hir(lint::builtin::UNUSED_ASSIGNMENTS, hir_id, sp,
                    &format!("value passed to `{}` is never read", name))
                    .help("maybe it is overwritten before being read?")
                    .emit();
            } else {
                self.ir.tcx.struct_span_lint_hir(lint::builtin::UNUSED_ASSIGNMENTS, hir_id, sp,
                    &format!("value assigned to `{}` is never read", name))
                    .help("maybe it is overwritten before being read?")
                    .emit();
            }
        }
    }
}