// Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

/*!
 * # Compilation of match statements
 *
 * I will endeavor to explain the code as best I can. I have only a loose
 * understanding of some parts of it.
 *
 * ## Matching
 *
 * The basic state of the code is maintained in an array `m` of `Match`
 * objects. Each `Match` describes some list of patterns, all of which must
 * match against the current list of values. If those patterns match, then
 * the arm listed in the match is the correct arm. A given arm may have
 * multiple corresponding match entries, one for each alternative that
 * remains. As we proceed, these sets of matches are adjusted by the various
 * `enter_XXX()` functions, each of which adjusts the set of options given
 * some information about the value which has been matched.
 *
 * So, initially, there is one value and N matches, each of which has one
 * constituent pattern. N here is usually the number of arms but may be
 * greater, if some arms have multiple alternatives. For example, here:
 *
 *     enum Foo { A, B(int), C(uint, uint) }
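 *     match foo {
 *         A => ...,
 *         B(x) if x > 10 => ...,
 *         C(1u, 2) => ...,
 *         C(..) => ...
 *     }
 *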
 * The value would be `foo`. There would be four matches, each of which
 * contains one pattern (and, in one case, a guard). We could collect the
 * various options and then compile the code for the case where `foo` is an
 * `A`, a `B`, and a `C`. When we generate the code for `C`, we would (1)
 * drop the two matches that do not match a `C` and (2) expand the other two
 * into two patterns each. In the first case, the two patterns would be `1u`
 * and `2`, and in the second case the `_` pattern would be expanded into
 * `_` and `_`. The two values are of course the arguments to `C`.
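 *
 * A hypothetical sketch of that step (each row is one `Match`; the
 * brackets hold its `pats`):
 *
 *     values: [foo]                    values: [arg0, arg1]
 *     [C(1u, 2)] => arm 2      ==>     [1u, 2] => arm 2
 *     [C(..)]    => arm 3              [_,  _] => arm 3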
 *
 * Here is a quick guide to the various functions:
 *
 * - `compile_submatch()`: The main workhorse. It takes a list of values and
 *   a list of matches and finds the various possibilities that could occur.
 *
 * - `enter_XXX()`: modifies the list of matches based on some information
 *   about the value that has been matched. For example,
 *   `enter_rec_or_struct()` adjusts the values given that a record or struct
 *   has been matched. This is an infallible pattern, so *all* of the matches
 *   must be either wildcards or record/struct patterns. `enter_opt()`
 *   handles the fallible cases, and it is correspondingly more complex.
 *
 * ## Bindings
 *
 * We store information about the bound variables for each arm as part of the
 * per-arm `ArmData` struct. There is a mapping from identifiers to
 * `BindingInfo` structs. These structs contain the mode/id/type of the
 * binding, but they also contain an LLVM value which points at an alloca
 * called `llmatch`. For by-value bindings that are Copy, we also create
 * an extra alloca that we copy the matched value to, so that any changes
 * we make to our copy are not reflected in the original and vice versa.
 * We don't do this if it's a move, since the original value can't be used
 * anyway, which lets us cheat by not creating an extra alloca.
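 *
 * For example (an illustrative sketch): in
 *
 *     match x { mut y => { y += 1; ... } }
 *
 * with a Copy `x`, the binding `y` gets its own alloca; the mutation hits
 * that copy and leaves `x` intact.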
 *
 * The `llmatch` binding always stores a pointer into the value being matched
 * which points at the data for the binding. If the value being matched has
 * type `T`, then `llmatch` will point at an alloca of type `T*` (and hence
 * `llmatch` has type `T**`). So, if you have a pattern like:
 *
 *     match (a, b) { (ref c, d) => { ... } }
 *
 * For `c` and `d`, we would generate allocas of type `C*` and `D*`
 * respectively. These are called the `llmatch` allocas. As we match, when we
 * come up against an identifier, we store the current pointer into the
 * corresponding alloca.
 *
 * Once a pattern is completely matched, and assuming that there is no guard
 * pattern, we will branch to a block that leads to the body itself. For any
 * by-value bindings, this block will first load the ptr from `llmatch` (the
 * one of type `D*`) and then load a second time to get the actual value (the
 * one of type `D`). For by-ref bindings, the value of the local variable is
 * simply the first alloca.
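 *
 * In pseudo-code, for the by-value binding `d` above (names illustrative):
 *
 *     llmatch_d: D**            // alloca created before the match begins
 *     ptr = load(llmatch_d)     // D*, points into the matched tuple
 *     d   = load(ptr)           // D, the value seen by the arm body
 *
 * whereas for `ref c` the alloca `llmatch_c` itself serves as the local's
 * slot.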
 *
 * So, for the example above, we would generate a setup kind of like this:
 *
 *     +--------------------------------------------+
 *     | llmatch_c = (addr of first half of tuple)  |
 *     | llmatch_d = (addr of second half of tuple) |
 *     +--------------------------------------------+
 *         |
 *     +--------------------------------------+
 *     | *llbinding_d = **llmatch_d           |
 *     +--------------------------------------+
 *
 * If there is a guard, the situation is slightly different, because we must
 * execute the guard code. Moreover, we need to do so once for each of the
 * alternatives that lead to the arm, because if the guard fails, they may
 * have different points from which to continue the search. Therefore, in
 * that case, we generate code that looks more like:
 *
 *     +--------------------------------------------+
 *     | llmatch_c = (addr of first half of tuple)  |
 *     | llmatch_d = (addr of second half of tuple) |
 *     +--------------------------------------------+
 *         |
 *     +-------------------------------------------------+
 *     | *llbinding_d = **llmatch_d                      |
 *     | check condition                                 |
 *     | if false { goto next case }                     |
 *     | if true { goto body }                           |
 *     +-------------------------------------------------+
 *
 * The handling for the cleanups is a bit... sensitive. Basically, the body
 * is the one that invokes `add_clean()` for each binding. During the guard
 * evaluation, we add temporary cleanups and revoke them after the guard is
 * evaluated (it could fail, after all). Note that guards and moves are
 * just plain incompatible.
 *
 * Some relevant helper functions that manage bindings:
 * - `create_bindings_map()`
 * - `insert_lllocals()`
 *
 * ## Notes on vector pattern matching.
 *
 * Vector pattern matching is surprisingly tricky. The problem is that
 * the structure of the vector isn't fully known, and slice matches
 * can be done on subparts of it.
 *
 * The way that vector pattern matches are dealt with, then, is as
 * follows. First, we make the actual condition associated with a
 * vector pattern simply a vector length comparison. So the pattern
 * [1, .. x] gets the condition "vec len >= 1", and the pattern
 * [.. x] gets the condition "vec len >= 0". The problem here is that
 * having the condition "vec len >= 1" hold clearly does not mean that
 * only a pattern that has exactly that condition will match. This
 * means that it may well be the case that a condition holds, but none
 * of the patterns matching that condition match; to deal with this,
 * when doing vector length matches, we have match failures proceed to
 * the next condition to check.
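 *
 * For example (an illustrative sketch):
 *
 *     match v {
 *         [1, .. _] => 0,   // condition: "vec len >= 1"
 *         [] => 1,          // condition: "vec len == 0"
 *         _ => 2
 *     }
 *
 * If `v` is `[2]`, the condition "vec len >= 1" holds but the first
 * pattern still fails; the failure then proceeds to the next length
 * condition rather than aborting the whole match.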
 *
 * There are a couple more subtleties to deal with. While the "actual"
 * condition associated with vector length tests is simply a test on
 * the vector length, the actual vec_len Opt entry contains more
 * information used to restrict which matches are associated with it.
 * So that all matches in a submatch are matching against the same
 * values from inside the vector, they are split up by how many
 * elements they match at the front and at the back of the vector. In
 * order to make sure that arms are properly checked in order, even
 * with the overmatching conditions, each vec_len Opt entry is
 * associated with a range of matches.
 * Consider the following:
 *
 *     match &[1, 2, 3] {
 *         [1, 1, .. _] => 0,
 *         [1, 2, 2, .. _] => 1,
 *         [1, 2, 3, .. _] => 2,
 *         [1, 2, .. _] => 3,
 *         _ => 4
 *     }
 *
 * The proper arm to match is arm 2, but arms 0 and 3 both have the
 * condition "len >= 2". If arm 3 was lumped in with arm 0, then the
 * wrong branch would be taken. Instead, vec_len Opts are associated
 * with a contiguous range of matches that have the same "shape".
 * This is sort of ugly and requires a bunch of special handling of
 * vec_len options.
 */

use back::abi;
use driver::config::FullDebugInfo;
use llvm::{ValueRef, BasicBlockRef};
use middle::check_match::StaticInliner;
use middle::check_match;
use middle::const_eval;
use middle::def;
use middle::expr_use_visitor as euv;
use middle::lang_items::StrEqFnLangItem;
use middle::mem_categorization as mc;
use middle::pat_util::*;
use middle::resolve::DefMap;
use middle::trans::adt;
use middle::trans::base::*;
use middle::trans::build::{AddCase, And, BitCast, Br, CondBr, GEPi, InBoundsGEP, Load};
use middle::trans::build::{Mul, Not, Store, Sub, add_comment};
use middle::trans::build;
use middle::trans::callee;
use middle::trans::cleanup::{mod, CleanupMethods};
use middle::trans::common::*;
use middle::trans::consts;
use middle::trans::datum::*;
use middle::trans::expr::{mod, Dest};
use middle::trans::tvec;
use middle::trans::type_of;
use middle::trans::debuginfo;
use middle::ty;
use util::common::indenter;
use util::ppaux::{Repr, vec_map_to_string};

use std;
use std::collections::HashMap;
use std::rc::Rc;
use syntax::ast;
use syntax::ast::{DUMMY_NODE_ID, Ident};
use syntax::codemap::Span;
use syntax::fold::Folder;
use syntax::ptr::P;

struct ConstantExpr<'a>(&'a ast::Expr);

impl<'a> ConstantExpr<'a> {
    fn eq(self, other: ConstantExpr<'a>, tcx: &ty::ctxt) -> bool {
        let ConstantExpr(expr) = self;
        let ConstantExpr(other_expr) = other;
        match const_eval::compare_lit_exprs(tcx, expr, other_expr) {
            Some(val1) => val1 == 0,
            None => fail!("compare_lit_exprs: type mismatch"),
        }
    }
}

// An option identifying a branch (either a literal, an enum variant or a range)
enum Opt<'a> {
    ConstantValue(ConstantExpr<'a>),
    ConstantRange(ConstantExpr<'a>, ConstantExpr<'a>),
    Variant(ty::Disr, Rc<adt::Repr>, ast::DefId),
    SliceLengthEqual(uint),
    SliceLengthGreaterOrEqual(/* prefix length */ uint, /* suffix length */ uint),
}

impl<'a> Opt<'a> {
    fn eq(&self, other: &Opt<'a>, tcx: &ty::ctxt) -> bool {
        match (self, other) {
            (&ConstantValue(a), &ConstantValue(b)) => a.eq(b, tcx),
            (&ConstantRange(a1, a2), &ConstantRange(b1, b2)) => {
                a1.eq(b1, tcx) && a2.eq(b2, tcx)
            }
            (&Variant(a_disr, ref a_repr, a_def), &Variant(b_disr, ref b_repr, b_def)) => {
                a_disr == b_disr && *a_repr == *b_repr && a_def == b_def
            }
            (&SliceLengthEqual(a), &SliceLengthEqual(b)) => a == b,
            (&SliceLengthGreaterOrEqual(a1, a2), &SliceLengthGreaterOrEqual(b1, b2)) => {
                a1 == b1 && a2 == b2
            }
            _ => false
        }
    }

    fn trans<'blk, 'tcx>(&self, mut bcx: Block<'blk, 'tcx>) -> OptResult<'blk, 'tcx> {
        let _icx = push_ctxt("match::trans_opt");
        let ccx = bcx.ccx();
        match *self {
            ConstantValue(ConstantExpr(lit_expr)) => {
                let lit_ty = ty::node_id_to_type(bcx.tcx(), lit_expr.id);
                let (llval, _, _) = consts::const_expr(ccx, &*lit_expr, true);
                let lit_datum = immediate_rvalue(llval, lit_ty);
                let lit_datum = unpack_datum!(bcx, lit_datum.to_appropriate_datum(bcx));
                SingleResult(Result::new(bcx, lit_datum.val))
            }
            ConstantRange(ConstantExpr(ref l1), ConstantExpr(ref l2)) => {
                let (l1, _, _) = consts::const_expr(ccx, &**l1, true);
                let (l2, _, _) = consts::const_expr(ccx, &**l2, true);
                RangeResult(Result::new(bcx, l1), Result::new(bcx, l2))
            }
            Variant(disr_val, ref repr, _) => {
                adt::trans_case(bcx, &**repr, disr_val)
            }
            SliceLengthEqual(length) => {
                SingleResult(Result::new(bcx, C_uint(ccx, length)))
            }
            SliceLengthGreaterOrEqual(prefix, suffix) => {
                LowerBound(Result::new(bcx, C_uint(ccx, prefix + suffix)))
            }
        }
    }
}

#[deriving(PartialEq)]
pub enum BranchKind {
    NoBranch,
    Single,
    Switch,
    Compare,
    CompareSliceLength
}

pub enum OptResult<'blk, 'tcx: 'blk> {
    SingleResult(Result<'blk, 'tcx>),
    RangeResult(Result<'blk, 'tcx>, Result<'blk, 'tcx>),
    LowerBound(Result<'blk, 'tcx>)
}

pub enum TransBindingMode {
    TrByCopy(/* llbinding */ ValueRef),
    TrByMove,
    TrByRef,
}

/**
 * Information about a pattern binding:
 * - `llmatch` is a pointer to a stack slot. The stack slot contains a
 *   pointer into the value being matched. Hence, `llmatch` has type `T**`
 *   where `T` is the type of the value being matched.
 * - `trmode` is the trans binding mode
 * - `id` is the node id of the binding
 * - `ty` is the Rust type of the binding
 */
pub struct BindingInfo {
    pub llmatch: ValueRef,
    pub trmode: TransBindingMode,
    pub id: ast::NodeId,
    pub ty: ty::t,
}

type BindingsMap = HashMap<Ident, BindingInfo>;

struct ArmData<'p, 'blk, 'tcx: 'blk> {
    bodycx: Block<'blk, 'tcx>,
    arm: &'p ast::Arm,
    bindings_map: BindingsMap
}

/**
 * If all `pats` are matched, then the arm `data` will be executed.
 * As we proceed, `bound_ptrs` is filled with pointers to the values to be
 * bound; these pointers are stored into the `llmatch` variables just before
 * the `data` arm is executed.
 */
struct Match<'a, 'p: 'a, 'blk: 'a, 'tcx: 'blk> {
    pats: Vec<&'p ast::Pat>,
    data: &'a ArmData<'p, 'blk, 'tcx>,
    bound_ptrs: Vec<(Ident, ValueRef)>
}

impl<'a, 'p, 'blk, 'tcx> Repr for Match<'a, 'p, 'blk, 'tcx> {
    fn repr(&self, tcx: &ty::ctxt) -> String {
        if tcx.sess.verbose() {
            // for many programs, this just takes too long to serialize
            self.pats.repr(tcx)
        } else {
            format!("{} pats", self.pats.len())
        }
    }
}

fn has_nested_bindings(m: &[Match], col: uint) -> bool {
    for br in m.iter() {
        match br.pats.get(col).node {
            ast::PatIdent(_, _, Some(_)) => return true,
            _ => ()
        }
    }
    false
}

fn expand_nested_bindings<'a, 'p, 'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
                                              m: &[Match<'a, 'p, 'blk, 'tcx>],
                                              col: uint,
                                              val: ValueRef)
                                              -> Vec<Match<'a, 'p, 'blk, 'tcx>> {
    debug!("expand_nested_bindings(bcx={}, m={}, col={}, val={})",
           bcx.to_str(),
           m.repr(bcx.tcx()),
           col,
           bcx.val_to_string(val));
    let _indenter = indenter();

    m.iter().map(|br| {
        let mut bound_ptrs = br.bound_ptrs.clone();
        let mut pat = *br.pats.get(col);
        loop {
            pat = match pat.node {
                ast::PatIdent(_, ref path, Some(ref inner)) => {
                    bound_ptrs.push((path.node, val));
                    &**inner
                },
                _ => break
            };
        }

        let mut pats = br.pats.clone();
        *pats.get_mut(col) = pat;
        Match {
            pats: pats,
            data: br.data,
            bound_ptrs: bound_ptrs
        }
    }).collect()
}

type EnterPatterns<'a> = <'p> |&[&'p ast::Pat]|: 'a -> Option<Vec<&'p ast::Pat>>;

fn enter_match<'a, 'p, 'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
                                   dm: &DefMap,
                                   m: &[Match<'a, 'p, 'blk, 'tcx>],
                                   col: uint,
                                   val: ValueRef,
                                   e: EnterPatterns)
                                   -> Vec<Match<'a, 'p, 'blk, 'tcx>> {
    debug!("enter_match(bcx={}, m={}, col={}, val={})",
           bcx.to_str(),
           m.repr(bcx.tcx()),
           col,
           bcx.val_to_string(val));
    let _indenter = indenter();

    m.iter().filter_map(|br| {
        e(br.pats.as_slice()).map(|pats| {
            let this = *br.pats.get(col);
            let mut bound_ptrs = br.bound_ptrs.clone();
            match this.node {
                ast::PatIdent(_, ref path, None) => {
                    if pat_is_binding(dm, &*this) {
                        bound_ptrs.push((path.node, val));
                    }
                }
                ast::PatVec(ref before, Some(ref slice), ref after) => {
                    match slice.node {
                        ast::PatIdent(_, ref path, None) => {
                            let subslice_val = bind_subslice_pat(
                                bcx, this.id, val,
                                before.len(), after.len());
                            bound_ptrs.push((path.node, subslice_val));
                        }
                        _ => {}
                    }
                }
                _ => {}
            }

            Match {
                pats: pats,
                data: br.data,
                bound_ptrs: bound_ptrs
            }
        })
    }).collect()
}

fn enter_default<'a, 'p, 'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
                                     dm: &DefMap,
                                     m: &[Match<'a, 'p, 'blk, 'tcx>],
                                     col: uint,
                                     val: ValueRef)
                                     -> Vec<Match<'a, 'p, 'blk, 'tcx>> {
    debug!("enter_default(bcx={}, m={}, col={}, val={})",
           bcx.to_str(),
           m.repr(bcx.tcx()),
           col,
           bcx.val_to_string(val));
    let _indenter = indenter();

    // Collect all of the matches that can match against anything.
    enter_match(bcx, dm, m, col, val, |pats| {
        if pat_is_binding_or_wild(dm, &*pats[col]) {
            Some(Vec::from_slice(pats.slice_to(col)).append(pats.slice_from(col + 1)))
        } else {
            None
        }
    })
}

// <pcwalton> nmatsakis: what does enter_opt do?
// <pcwalton> in trans/match
// <pcwalton> trans/match.rs is like stumbling around in a dark cave
// <nmatsakis> pcwalton: the enter family of functions adjust the set of
// patterns as needed
// <nmatsakis> yeah, at some point I kind of achieved some level of
// understanding
// <nmatsakis> anyhow, they adjust the patterns given that something of that
// kind has been found
// <nmatsakis> pcwalton: ok, right, so enter_XXX() adjusts the patterns, as I
// said
// <nmatsakis> enter_match() kind of embodies the generic code
// <nmatsakis> it is provided with a function that tests each pattern to see
// if it might possibly apply and so forth
// <nmatsakis> so, if you have a pattern like {a: _, b: _, _} and one like _
// <nmatsakis> then _ would be expanded to (_, _)
// <nmatsakis> one spot for each of the sub-patterns
// <nmatsakis> enter_opt() is one of the more complex; it covers the fallible
// cases
// <nmatsakis> enter_rec_or_struct() or enter_tuple() are simpler, since they
// are infallible patterns
// <nmatsakis> so all patterns must either be records (resp. tuples) or
// wildcards

/// The above is now outdated in that enter_match() now takes a function that
/// takes the complete row of patterns rather than just the first one.
/// Also, most of the enter_() family functions have been unified with
/// the check_match specialization step.
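///
/// For example (an illustrative sketch of specialization): given the rows
/// `[Some(1), x]` and `[_, x]`, specializing on the constructor `Some`
/// (arity 1) yields `[1, x]` and `[_, x]`, while rows headed by any other
/// constructor are dropped.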
fn enter_opt<'a, 'p, 'blk, 'tcx>(
             bcx: Block<'blk, 'tcx>,
             _: ast::NodeId,
             dm: &DefMap,
             m: &[Match<'a, 'p, 'blk, 'tcx>],
             opt: &Opt,
             col: uint,
             variant_size: uint,
             val: ValueRef)
             -> Vec<Match<'a, 'p, 'blk, 'tcx>> {
    debug!("enter_opt(bcx={}, m={}, opt={:?}, col={}, val={})",
           bcx.to_str(),
           m.repr(bcx.tcx()),
           *opt,
           col,
           bcx.val_to_string(val));
    let _indenter = indenter();

    let ctor = match opt {
        &ConstantValue(ConstantExpr(expr)) => check_match::ConstantValue(
            const_eval::eval_const_expr(bcx.tcx(), &*expr)
        ),
        &ConstantRange(ConstantExpr(lo), ConstantExpr(hi)) => check_match::ConstantRange(
            const_eval::eval_const_expr(bcx.tcx(), &*lo),
            const_eval::eval_const_expr(bcx.tcx(), &*hi)
        ),
        &SliceLengthEqual(n) =>
            check_match::Slice(n),
        &SliceLengthGreaterOrEqual(before, after) =>
            check_match::SliceWithSubslice(before, after),
        &Variant(_, _, def_id) =>
            check_match::Variant(def_id)
    };

    let mcx = check_match::MatchCheckCtxt { tcx: bcx.tcx() };
    enter_match(bcx, dm, m, col, val, |pats|
        check_match::specialize(&mcx, pats.as_slice(), &ctor, col, variant_size)
    )
}

// Returns the options in one column of matches. An option is something that
// needs to be conditionally matched at runtime; for example, the discriminant
// on a set of enum variants or a literal.
fn get_branches<'a, 'p, 'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
                                    m: &[Match<'a, 'p, 'blk, 'tcx>], col: uint)
                                    -> Vec<Opt<'a>> {
    let tcx = bcx.tcx();

    let mut found: Vec<Opt> = vec![];
    for br in m.iter() {
        let cur = *br.pats.get(col);
        let opt = match cur.node {
            ast::PatLit(ref l) => ConstantValue(ConstantExpr(&**l)),
            ast::PatIdent(..) | ast::PatEnum(..) | ast::PatStruct(..) => {
                // This is either an enum variant or a variable binding.
                let opt_def = tcx.def_map.borrow().find_copy(&cur.id);
                match opt_def {
                    Some(def::DefVariant(enum_id, var_id, _)) => {
                        let variant = ty::enum_variant_with_id(tcx, enum_id, var_id);
                        Variant(variant.disr_val, adt::represent_node(bcx, cur.id), var_id)
                    }
                    _ => continue
                }
            }
            ast::PatRange(ref l1, ref l2) => {
                ConstantRange(ConstantExpr(&**l1), ConstantExpr(&**l2))
            }
            ast::PatVec(ref before, None, ref after) => {
                SliceLengthEqual(before.len() + after.len())
            }
            ast::PatVec(ref before, Some(_), ref after) => {
                SliceLengthGreaterOrEqual(before.len(), after.len())
            }
            _ => continue
        };

        if !found.iter().any(|x| x.eq(&opt, tcx)) {
            found.push(opt);
        }
    }
    found
}

struct ExtractedBlock<'blk, 'tcx: 'blk> {
    vals: Vec<ValueRef>,
    bcx: Block<'blk, 'tcx>,
}

fn extract_variant_args<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
                                    repr: &adt::Repr,
                                    disr_val: ty::Disr,
                                    val: ValueRef)
                                    -> ExtractedBlock<'blk, 'tcx> {
    let _icx = push_ctxt("match::extract_variant_args");
    let args = Vec::from_fn(adt::num_args(repr, disr_val), |i| {
        adt::trans_field_ptr(bcx, repr, val, disr_val, i)
    });

    ExtractedBlock { vals: args, bcx: bcx }
}

fn match_datum(val: ValueRef, left_ty: ty::t) -> Datum<Lvalue> {
    /*!
     * Helper for converting from the ValueRef that we pass around in
     * the match code, which is always an lvalue, into a Datum. Eventually
     * we should just pass around a Datum and be done with it.
     */

    Datum::new(val, left_ty, Lvalue)
}

fn bind_subslice_pat(bcx: Block,
                     pat_id: ast::NodeId,
                     val: ValueRef,
                     offset_left: uint,
                     offset_right: uint) -> ValueRef {
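    // Illustrative: for a pattern like `[a, .. rest, b]`, offset_left is 1
    // and offset_right is 1, so the bound subslice is v[1 .. v.len() - 1].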
    let _icx = push_ctxt("match::bind_subslice_pat");
    let vec_ty = node_id_type(bcx, pat_id);
    let vt = tvec::vec_types(bcx, ty::sequence_element_type(bcx.tcx(), ty::type_content(vec_ty)));
    let vec_datum = match_datum(val, vec_ty);
    let (base, len) = vec_datum.get_vec_base_and_len(bcx);

    let slice_byte_offset = Mul(bcx, vt.llunit_size, C_uint(bcx.ccx(), offset_left));
    let slice_begin = tvec::pointer_add_byte(bcx, base, slice_byte_offset);
    let slice_len_offset = C_uint(bcx.ccx(), offset_left + offset_right);
    let slice_len = Sub(bcx, len, slice_len_offset);
    let slice_ty = ty::mk_slice(bcx.tcx(),
                                ty::ReStatic,
                                ty::mt {ty: vt.unit_ty, mutbl: ast::MutImmutable});
    let scratch = rvalue_scratch_datum(bcx, slice_ty, "");
    Store(bcx, slice_begin,
          GEPi(bcx, scratch.val, [0u, abi::slice_elt_base]));
    Store(bcx, slice_len, GEPi(bcx, scratch.val, [0u, abi::slice_elt_len]));
    scratch.val
}

fn extract_vec_elems<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
                                 left_ty: ty::t,
                                 before: uint,
                                 after: uint,
                                 val: ValueRef)
                                 -> ExtractedBlock<'blk, 'tcx> {
    let _icx = push_ctxt("match::extract_vec_elems");
    let vec_datum = match_datum(val, left_ty);
    let (base, len) = vec_datum.get_vec_base_and_len(bcx);
    let mut elems = vec![];
    elems.extend(range(0, before).map(|i| GEPi(bcx, base, [i])));
    elems.extend(range(0, after).rev().map(|i| {
        InBoundsGEP(bcx, base, [
            Sub(bcx, len, C_uint(bcx.ccx(), i + 1))
        ])
    }));
    ExtractedBlock { vals: elems, bcx: bcx }
}

// Macro for deciding whether any of the remaining matches fit a given kind of
// pattern. Note that, because the macro is well-typed, either ALL of the
// matches should fit that sort of pattern or NONE (however, some of the
// matches may be wildcards like _ or identifiers).
macro_rules! any_pat (
    ($m:expr, $col:expr, $pattern:pat) => (
        ($m).iter().any(|br| {
            match br.pats.get($col).node {
                $pattern => true,
                _ => false
            }
        })
    )
)

fn any_uniq_pat(m: &[Match], col: uint) -> bool {
    any_pat!(m, col, ast::PatBox(_))
}

fn any_region_pat(m: &[Match], col: uint) -> bool {
    any_pat!(m, col, ast::PatRegion(_))
}

fn any_irrefutable_adt_pat(tcx: &ty::ctxt, m: &[Match], col: uint) -> bool {
    m.iter().any(|br| {
        let pat = *br.pats.get(col);
        match pat.node {
            ast::PatTup(_) => true,
            ast::PatStruct(..) => {
                match tcx.def_map.borrow().find(&pat.id) {
                    Some(&def::DefVariant(..)) => false,
                    _ => true
                }
            }
            ast::PatEnum(..) | ast::PatIdent(_, _, None) => {
                match tcx.def_map.borrow().find(&pat.id) {
                    Some(&def::DefFn(..)) |
                    Some(&def::DefStruct(..)) => true,
                    _ => false
                }
            }
            _ => false
        }
    })
}

/// What to do when the pattern match fails.
enum FailureHandler {
    Infallible,
    JumpToBasicBlock(BasicBlockRef),
    Unreachable
}

impl FailureHandler {
    fn is_fallible(&self) -> bool {
        match *self {
            Infallible => false,
            _ => true
        }
    }

    fn is_infallible(&self) -> bool {
        !self.is_fallible()
    }

    fn handle_fail(&self, bcx: Block) {
        match *self {
            Infallible =>
                fail!("attempted to fail in an infallible failure handler!"),
            JumpToBasicBlock(basic_block) =>
                Br(bcx, basic_block),
            Unreachable =>
                build::Unreachable(bcx)
        }
    }
}

fn pick_col(m: &[Match]) -> uint {
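    // Illustrative: given the rows `[_, 1]` and `[x, 2]`, column 0 scores 0
    // (it is irrefutable) and is returned immediately; column 1 would score
    // 2, the number of refutable patterns in it.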
    fn score(p: &ast::Pat) -> uint {
        match p.node {
            ast::PatLit(_) | ast::PatEnum(_, _) | ast::PatRange(_, _) => 1u,
            ast::PatIdent(_, _, Some(ref p)) => score(&**p),
            _ => 0u
        }
    }
    let mut scores = Vec::from_elem(m[0].pats.len(), 0u);
    for br in m.iter() {
        for (i, ref p) in br.pats.iter().enumerate() {
            *scores.get_mut(i) += score(&***p);
        }
    }
    let mut max_score = 0u;
    let mut best_col = 0u;
    for (i, &score) in scores.iter().enumerate() {
        // Irrefutable columns always go first, they'd only be duplicated in
        // the branches.
        if score == 0u { return i; }
        // If no irrefutable ones are found, we pick the one with the biggest
        // branching factor.
        if score > max_score { max_score = score; best_col = i; }
    }
    best_col
}

// Compiles a comparison between two things.
fn compare_values<'blk, 'tcx>(cx: Block<'blk, 'tcx>,
                              lhs: ValueRef,
                              rhs: ValueRef,
                              rhs_t: ty::t)
                              -> Result<'blk, 'tcx> {
    fn compare_str<'blk, 'tcx>(cx: Block<'blk, 'tcx>,
                               lhs: ValueRef,
                               rhs: ValueRef,
                               rhs_t: ty::t)
                               -> Result<'blk, 'tcx> {
        let did = langcall(cx,
                           None,
                           format!("comparison of `{}`",
                                   cx.ty_to_string(rhs_t)).as_slice(),
                           StrEqFnLangItem);
        callee::trans_lang_call(cx, did, [lhs, rhs], None)
    }

    let _icx = push_ctxt("compare_values");
    if ty::type_is_scalar(rhs_t) {
        let rs = compare_scalar_types(cx, lhs, rhs, rhs_t, ast::BiEq);
        return Result::new(rs.bcx, rs.val);
    }

    match ty::get(rhs_t).sty {
        ty::ty_rptr(_, mt) => match ty::get(mt.ty).sty {
            ty::ty_str => compare_str(cx, lhs, rhs, rhs_t),
            ty::ty_vec(ty, _) => match ty::get(ty).sty {
                ty::ty_uint(ast::TyU8) => {
                    // NOTE: cast &[u8] to &str and abuse the str_eq lang item,
                    // which calls memcmp().
                    let t = ty::mk_str_slice(cx.tcx(), ty::ReStatic, ast::MutImmutable);
                    let lhs = BitCast(cx, lhs, type_of::type_of(cx.ccx(), t).ptr_to());
                    let rhs = BitCast(cx, rhs, type_of::type_of(cx.ccx(), t).ptr_to());
                    compare_str(cx, lhs, rhs, rhs_t)
                }
                _ => cx.sess().bug("only byte strings supported in compare_values"),
            },
            _ => cx.sess().bug("only string and byte strings supported in compare_values"),
        },
        _ => cx.sess().bug("only scalars, byte strings, and strings supported in compare_values"),
    }
}

fn insert_lllocals<'blk, 'tcx>(mut bcx: Block<'blk, 'tcx>,
                               bindings_map: &BindingsMap,
                               cs: Option<cleanup::ScopeId>)
                               -> Block<'blk, 'tcx> {
    /*!
     * For each binding in `data.bindings_map`, adds an appropriate entry into
     * the `fcx.lllocals` map
     */

    for (&ident, &binding_info) in bindings_map.iter() {
        let llval = match binding_info.trmode {
            // By value mut binding for a copy type: load from the ptr
            // into the matched value and copy to our alloca
            TrByCopy(llbinding) => {
                let llval = Load(bcx, binding_info.llmatch);
                let datum = Datum::new(llval, binding_info.ty, Lvalue);
                call_lifetime_start(bcx, llbinding);
                bcx = datum.store_to(bcx, llbinding);
                match cs {
                    Some(cs) => bcx.fcx.schedule_lifetime_end(cs, llbinding),
                    _ => {}
                }

                llbinding
            },

            // By value move bindings: load from the ptr into the matched value
            TrByMove => Load(bcx, binding_info.llmatch),

            // By ref binding: use the ptr into the matched value
            TrByRef => binding_info.llmatch
        };

        let datum = Datum::new(llval, binding_info.ty, Lvalue);
        match cs {
            Some(cs) => {
                bcx.fcx.schedule_drop_and_zero_mem(cs, llval, binding_info.ty);
                bcx.fcx.schedule_lifetime_end(cs, binding_info.llmatch);
            }
            _ => {}
        }

        debug!("binding {:?} to {}",
               binding_info.id,
               bcx.val_to_string(llval));
        bcx.fcx.lllocals.borrow_mut().insert(binding_info.id, datum);

        if bcx.sess().opts.debuginfo == FullDebugInfo {
            debuginfo::create_match_binding_metadata(bcx,
                                                     ident,
                                                     binding_info);
        }
    }
    bcx
}

fn compile_guard<'a, 'p, 'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
                                     guard_expr: &ast::Expr,
                                     data: &ArmData<'p, 'blk, 'tcx>,
                                     m: &[Match<'a, 'p, 'blk, 'tcx>],
                                     vals: &[ValueRef],
                                     chk: &FailureHandler,
                                     has_genuine_default: bool)
                                     -> Block<'blk, 'tcx> {
    debug!("compile_guard(bcx={}, guard_expr={}, m={}, vals={})",
           bcx.to_str(),
           bcx.expr_to_string(guard_expr),
           m.repr(bcx.tcx()),
           vec_map_to_string(vals, |v| bcx.val_to_string(*v)));
    let _indenter = indenter();

    let mut bcx = insert_lllocals(bcx, &data.bindings_map, None);

    let val = unpack_datum!(bcx, expr::trans(bcx, guard_expr));
    let val = val.to_llbool(bcx);

    for (_, &binding_info) in data.bindings_map.iter() {
        match binding_info.trmode {
            TrByCopy(llbinding) => call_lifetime_end(bcx, llbinding),
            _ => {}
        }
    }

    with_cond(bcx, Not(bcx, val), |bcx| {
        // Guard does not match: remove all bindings from the lllocals table
        for (_, &binding_info) in data.bindings_map.iter() {
            call_lifetime_end(bcx, binding_info.llmatch);
            bcx.fcx.lllocals.borrow_mut().remove(&binding_info.id);
        }
        match chk {
            // If the default arm is the only one left, move on to the next
            // condition explicitly rather than (possibly) falling back to
            // the default arm.
            &JumpToBasicBlock(_) if m.len() == 1 && has_genuine_default => {
                chk.handle_fail(bcx);
            }
            _ => {
                compile_submatch(bcx, m, vals, chk, has_genuine_default);
            }
        }
        bcx
    })
}

fn compile_submatch<'a, 'p, 'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
                                        m: &[Match<'a, 'p, 'blk, 'tcx>],
                                        vals: &[ValueRef],
                                        chk: &FailureHandler,
                                        has_genuine_default: bool) {
    debug!("compile_submatch(bcx={}, m={}, vals={})",
           bcx.to_str(),
           m.repr(bcx.tcx()),
           vec_map_to_string(vals, |v| bcx.val_to_string(*v)));
    let _indenter = indenter();
    let _icx = push_ctxt("match::compile_submatch");
    let mut bcx = bcx;
    if m.len() == 0u {
        if chk.is_fallible() {
            chk.handle_fail(bcx);
        }
        return;
    }

    let col_count = m[0].pats.len();
    if col_count == 0u {
        let data = &m[0].data;
        for &(ref ident, ref value_ptr) in m[0].bound_ptrs.iter() {
            let llmatch = data.bindings_map.get(ident).llmatch;
            call_lifetime_start(bcx, llmatch);
            Store(bcx, *value_ptr, llmatch);
        }
        match data.arm.guard {
            Some(ref guard_expr) => {
                bcx = compile_guard(bcx,
                                    &**guard_expr,
                                    m[0].data,
                                    m.slice(1, m.len()),
                                    vals,
                                    chk,
                                    has_genuine_default);
            }
            _ => ()
        }
        Br(bcx, data.bodycx.llbb);
        return;
    }

    let col = pick_col(m);
    let val = vals[col];

    if has_nested_bindings(m, col) {
        let expanded = expand_nested_bindings(bcx, m, col, val);
        compile_submatch_continue(bcx,
                                  expanded.as_slice(),
                                  vals,
                                  chk,
                                  col,
                                  val,
                                  has_genuine_default)
    } else {
        compile_submatch_continue(bcx, m, vals, chk, col, val, has_genuine_default)
    }
}

fn compile_submatch_continue<'a, 'p, 'blk, 'tcx>(mut bcx: Block<'blk, 'tcx>,
                                                 m: &[Match<'a, 'p, 'blk, 'tcx>],
                                                 vals: &[ValueRef],
                                                 chk: &FailureHandler,
                                                 col: uint,
                                                 val: ValueRef,
                                                 has_genuine_default: bool) {
    let fcx = bcx.fcx;
    let tcx = bcx.tcx();
    let dm = &tcx.def_map;

    let vals_left = Vec::from_slice(vals.slice(0u, col)).append(vals.slice(col + 1u, vals.len()));
    let ccx = bcx.fcx.ccx;

    // Find a real id (we're adding placeholder wildcard patterns, but
    // each column is guaranteed to have at least one real pattern)
    let pat_id = m.iter().map(|br| br.pats.get(col).id)
                  .find(|&id| id != DUMMY_NODE_ID)
                  .unwrap_or(DUMMY_NODE_ID);

    let left_ty = if pat_id == DUMMY_NODE_ID {
        ty::mk_nil()
    } else {
        node_id_type(bcx, pat_id)
    };

    let mcx = check_match::MatchCheckCtxt { tcx: bcx.tcx() };
    let adt_vals = if any_irrefutable_adt_pat(bcx.tcx(), m, col) {
        let repr = adt::represent_type(bcx.ccx(), left_ty);
        let arg_count = adt::num_args(&*repr, 0);
        let field_vals: Vec<ValueRef> = std::iter::range(0, arg_count).map(|ix|
            adt::trans_field_ptr(bcx, &*repr, val, 0, ix)
        ).collect();
        Some(field_vals)
    } else if any_uniq_pat(m, col) || any_region_pat(m, col) {
        Some(vec!(Load(bcx, val)))
    } else {
        match ty::get(left_ty).sty {
            ty::ty_vec(_, Some(n)) => {
                let args = extract_vec_elems(bcx, left_ty, n, 0, val);
                Some(args.vals)
            }
            _ => None
        }
    };

    match adt_vals {
        Some(field_vals) => {
            let pats = enter_match(bcx, dm, m, col, val, |pats|
                check_match::specialize(&mcx, pats, &check_match::Single, col, field_vals.len())
            );
            let vals = field_vals.append(vals_left.as_slice());
            compile_submatch(bcx, pats.as_slice(), vals.as_slice(), chk, has_genuine_default);
            return;
        }
        _ => ()
    }

    // Decide what kind of branch we need
    let opts = get_branches(bcx, m, col);
    debug!("options={:?}", opts);
    let mut kind = NoBranch;
    let mut test_val = val;
    debug!("test_val={}", bcx.val_to_string(test_val));
    if opts.len() > 0u {
        match *opts.get(0) {
            ConstantValue(_) | ConstantRange(_, _) => {
                test_val = load_if_immediate(bcx, val, left_ty);
                kind = if ty::type_is_integral(left_ty) {
                    Switch
                } else {
                    Compare
                };
            }
            Variant(_, ref repr, _) => {
                let (the_kind, val_opt) = adt::trans_switch(bcx, &**repr, val);
                kind = the_kind;
                for &tval in val_opt.iter() { test_val = tval; }
            }
            SliceLengthEqual(_) | SliceLengthGreaterOrEqual(_, _) => {
                let (_, len) = tvec::get_base_and_len(bcx, val, left_ty);
                test_val = len;
                kind = Switch;
            }
        }
    }
    for o in opts.iter() {
        match *o {
            ConstantRange(_, _) => { kind = Compare; break },
            SliceLengthGreaterOrEqual(_, _) => { kind = CompareSliceLength; break },
            _ => ()
        }
    }

    let else_cx = match kind {
        NoBranch | Single => bcx,
        _ => bcx.fcx.new_temp_block("match_else")
    };
    let sw = if kind == Switch {
        build::Switch(bcx, test_val, else_cx.llbb, opts.len())
    } else {
        C_int(ccx, 0) // Placeholder for when not using a switch
    };

    let defaults = enter_default(else_cx, dm, m, col, val);
    let exhaustive = chk.is_infallible() && defaults.len() == 0u;
    let len = opts.len();

    // Compile subtrees for each option
    for (i, opt) in opts.iter().enumerate() {
        // In some cases of range and vector pattern matching, we need to
        // override the failure case so that instead of failing, it proceeds
        // to try more matching. branch_chk, then, is the proper failure case
        // for the current conditional branch.
        let mut branch_chk = None;
        let mut opt_cx = else_cx;
        if !exhaustive || i + 1 < len {
            opt_cx = bcx.fcx.new_temp_block("match_case");
            match kind {
                Single => Br(bcx, opt_cx.llbb),
                Switch => {
                    match opt.trans(bcx) {
                        SingleResult(r) => {
                            AddCase(sw, r.val, opt_cx.llbb);
                        }
                        _ => {
                            bcx.sess().bug(
                                "in compile_submatch, expected \
                                 opt.trans() to return a SingleResult")
                        }
                    }
                }
                Compare | CompareSliceLength => {
                    let t = if kind == Compare {
                        left_ty
                    } else {
                        ty::mk_uint() // vector length
                    };
                    let Result { bcx: after_cx, val: matches } = {
                        match opt.trans(bcx) {
                            SingleResult(Result { bcx, val }) => {
                                compare_values(bcx, test_val, val, t)
                            }
                            RangeResult(Result { val: vbegin, .. },
                                        Result { bcx, val: vend }) => {
                                let Result { bcx, val: llge } =
                                    compare_scalar_types(
                                        bcx, test_val,
                                        vbegin, t, ast::BiGe);
                                let Result { bcx, val: llle } =
                                    compare_scalar_types(
                                        bcx, test_val, vend,
                                        t, ast::BiLe);
                                Result::new(bcx, And(bcx, llge, llle))
                            }
                            LowerBound(Result { bcx, val }) => {
                                compare_scalar_types(bcx, test_val, val, t, ast::BiGe)
                            }
                        }
                    };
                    bcx = fcx.new_temp_block("compare_next");

                    // If none of the sub-cases match, and the current condition
                    // is guarded or has multiple patterns, move on to the next
                    // condition, if there is any, rather than falling back to
                    // the default.
                    let guarded = m[i].data.arm.guard.is_some();
                    let multi_pats = m[i].pats.len() > 1;
                    if i + 1 < len && (guarded || multi_pats || kind == CompareSliceLength) {
                        branch_chk = Some(JumpToBasicBlock(bcx.llbb));
                    }
                    CondBr(after_cx, matches, opt_cx.llbb, bcx.llbb);
                }
                _ => ()
            }
        } else if kind == Compare || kind == CompareSliceLength {
            Br(bcx, else_cx.llbb);
        }

        let mut size = 0u;
        let mut unpacked = Vec::new();
        match *opt {
            Variant(disr_val, ref repr, _) => {
                let ExtractedBlock {vals: argvals, bcx: new_bcx} =
                    extract_variant_args(opt_cx, &**repr, disr_val, val);
                size = argvals.len();
                unpacked = argvals;
                opt_cx = new_bcx;
            }
            SliceLengthEqual(len) => {
                let args = extract_vec_elems(opt_cx, left_ty, len, 0, val);
                size = args.vals.len();
                unpacked = args.vals.clone();
                opt_cx = args.bcx;
            }
            SliceLengthGreaterOrEqual(before, after) => {
                let args = extract_vec_elems(opt_cx, left_ty, before, after, val);
                size = args.vals.len();
                unpacked = args.vals.clone();
                opt_cx = args.bcx;
            }
            ConstantValue(_) | ConstantRange(_, _) => ()
        }
        let opt_ms = enter_opt(opt_cx, pat_id, dm, m, opt, col, size, val);
        let opt_vals = unpacked.append(vals_left.as_slice());
        compile_submatch(opt_cx,
                         opt_ms.as_slice(),
                         opt_vals.as_slice(),
                         branch_chk.as_ref().unwrap_or(chk),
                         has_genuine_default);
    }

    // Compile the fall-through case, if any
    if !exhaustive && kind != Single {
        if kind == Compare || kind == CompareSliceLength {
            Br(bcx, else_cx.llbb);
        }
        match chk {
            // If there is only one default arm left, move on to the next
            // condition explicitly rather than (eventually) falling back to
            // the last default arm.
            &JumpToBasicBlock(_) if defaults.len() == 1 && has_genuine_default => {
                chk.handle_fail(else_cx);
            }
            _ => {
                compile_submatch(else_cx,
                                 defaults.as_slice(),
                                 vals_left.as_slice(),
                                 chk,
                                 has_genuine_default);
            }
        }
    }
}

pub fn trans_match<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
                               match_expr: &ast::Expr,
                               discr_expr: &ast::Expr,
                               arms: &[ast::Arm],
                               dest: Dest)
                               -> Block<'blk, 'tcx> {
    let _icx = push_ctxt("match::trans_match");
    trans_match_inner(bcx, match_expr.id, discr_expr, arms, dest)
}

/// Checks whether the binding in `discr` is assigned to anywhere in the expression `body`
fn is_discr_reassigned(bcx: Block, discr: &ast::Expr, body: &ast::Expr) -> bool {
    match discr.node {
        ast::ExprPath(..) => match bcx.def(discr.id) {
            def::DefLocal(vid, _) | def::DefUpvar(vid, _, _, _) => {
                let mut rc = ReassignmentChecker {
                    node: vid,
                    reassigned: false
                };
                {
                    let mut visitor = euv::ExprUseVisitor::new(&mut rc, bcx);
                    visitor.walk_expr(body);
                }
                rc.reassigned
            }
            _ => false
        },
        _ => false
    }
}

struct ReassignmentChecker {
    node: ast::NodeId,
    reassigned: bool
}

impl euv::Delegate for ReassignmentChecker {
    fn consume(&mut self, _: ast::NodeId, _: Span, _: mc::cmt, _: euv::ConsumeMode) {}
    fn consume_pat(&mut self, _: &ast::Pat, _: mc::cmt, _: euv::ConsumeMode) {}
    fn borrow(&mut self, _: ast::NodeId, _: Span, _: mc::cmt, _: ty::Region,
              _: ty::BorrowKind, _: euv::LoanCause) {}
    fn decl_without_init(&mut self, _: ast::NodeId, _: Span) {}

    fn mutate(&mut self, _: ast::NodeId, _: Span, cmt: mc::cmt, _: euv::MutateMode) {
        match cmt.cat {
            mc::cat_copied_upvar(mc::CopiedUpvar { upvar_id: vid, .. }) |
            mc::cat_local(vid) => self.reassigned = self.node == vid,
            _ => {}
        }
    }
}

fn create_bindings_map(bcx: Block, pat: &ast::Pat,
                       discr: &ast::Expr, body: &ast::Expr) -> BindingsMap {
    // Create the bindings map, which is a mapping from each binding name
    // to an alloca() that will be the value for that local variable.
    // Note that we use the names because each binding will have many ids
    // from the various alternatives.
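    // (e.g. in an arm like `Foo(x) | Bar(x) => ...`, the two `x` patterns
    // have distinct node ids but must share a single binding slot, so the
    // map is keyed by name.)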
    let ccx = bcx.ccx();
    let tcx = bcx.tcx();
    let reassigned = is_discr_reassigned(bcx, discr, body);
    let mut bindings_map = HashMap::new();
    pat_bindings(&tcx.def_map, &*pat, |bm, p_id, span, path1| {
        let ident = path1.node;
        let variable_ty = node_id_type(bcx, p_id);
        let llvariable_ty = type_of::type_of(ccx, variable_ty);
        let tcx = bcx.tcx();

        let llmatch;
        let trmode;
        match bm {
            ast::BindByValue(_)
                if !ty::type_moves_by_default(tcx, variable_ty) || reassigned => {
                llmatch = alloca_no_lifetime(bcx,
                                             llvariable_ty.ptr_to(),
                                             "__llmatch");
                trmode = TrByCopy(alloca_no_lifetime(bcx,
                                                     llvariable_ty,
                                                     bcx.ident(ident).as_slice()));
            }
            ast::BindByValue(_) => {
                // in this case, the final type of the variable will be T,
                // but during matching we need to store a *T as explained
                // above
                llmatch = alloca_no_lifetime(bcx,
                                             llvariable_ty.ptr_to(),
                                             bcx.ident(ident).as_slice());
                trmode = TrByMove;
            }
            ast::BindByRef(_) => {
                llmatch = alloca_no_lifetime(bcx,
                                             llvariable_ty,
                                             bcx.ident(ident).as_slice());
                trmode = TrByRef;
            }
        };
        bindings_map.insert(ident, BindingInfo {
            llmatch: llmatch,
            trmode: trmode,
            id: p_id,
            ty: variable_ty
        });
    });
    return bindings_map;
}

fn trans_match_inner<'blk, 'tcx>(scope_cx: Block<'blk, 'tcx>,
                                 match_id: ast::NodeId,
                                 discr_expr: &ast::Expr,
                                 arms: &[ast::Arm],
                                 dest: Dest) -> Block<'blk, 'tcx> {
    let _icx = push_ctxt("match::trans_match_inner");
    let fcx = scope_cx.fcx;
    let mut bcx = scope_cx;
    let tcx = bcx.tcx();

    let discr_datum = unpack_datum!(bcx, expr::trans_to_lvalue(bcx, discr_expr,
                                                               "match"));
    if bcx.unreachable.get() {
        return bcx;
    }

    let t = node_id_type(bcx, discr_expr.id);
    let chk = if ty::type_is_empty(tcx, t) {
        Unreachable
    } else {
        Infallible
    };

    let arm_datas: Vec<ArmData> = arms.iter().map(|arm| ArmData {
        bodycx: fcx.new_id_block("case_body", arm.body.id),
        arm: arm,
        bindings_map: create_bindings_map(bcx, &**arm.pats.get(0), discr_expr, &*arm.body)
    }).collect();

    let mut static_inliner = StaticInliner::new(scope_cx.tcx());
    let arm_pats: Vec<Vec<P<ast::Pat>>> = arm_datas.iter().map(|arm_data| {
        arm_data.arm.pats.iter().map(|p| static_inliner.fold_pat((*p).clone())).collect()
    }).collect();

    let mut matches = Vec::new();
    for (arm_data, pats) in arm_datas.iter().zip(arm_pats.iter()) {
        matches.extend(pats.iter().map(|p| Match {
            pats: vec![&**p],
            data: arm_data,
            bound_ptrs: Vec::new(),
        }));
    }

    // `compile_submatch` works one column of arm patterns at a time and
    // then peels that column off. So as we progress, it may become
    // impossible to tell whether we have a genuine default arm, i.e.
    // `_ => foo`, or not. Sometimes it is important to know that in order
    // to decide whether to move on to the next condition or fall back
    // to the default arm.
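    // (e.g. a trailing `_ => ...` is a genuine default arm, while
    // `x => ...` or `A | _ => ...` is not: the former is a binding pattern
    // rather than a wildcard, and the latter has two pattern alternatives.)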
    let has_default = arms.last().map_or(false, |arm| {
        arm.pats.len() == 1
        && arm.pats.last().unwrap().node == ast::PatWild(ast::PatWildSingle)
    });

    compile_submatch(bcx, matches.as_slice(), [discr_datum.val], &chk, has_default);

    let mut arm_cxs = Vec::new();
    for arm_data in arm_datas.iter() {
        let mut bcx = arm_data.bodycx;

        // insert bindings into the lllocals map and add cleanups
        let cs = fcx.push_custom_cleanup_scope();
        bcx = insert_lllocals(bcx, &arm_data.bindings_map, Some(cleanup::CustomScope(cs)));
        bcx = expr::trans_into(bcx, &*arm_data.arm.body, dest);
        bcx = fcx.pop_and_trans_custom_cleanup_scope(bcx, cs);
        arm_cxs.push(bcx);
    }

    bcx = scope_cx.fcx.join_blocks(match_id, arm_cxs.as_slice());
    return bcx;
}

pub fn store_local<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
                               local: &ast::Local)
                               -> Block<'blk, 'tcx> {
    /*!
     * Generates code for a local variable declaration like
     * `let <pat>;` or `let <pat> = <opt_init_expr>`.
     */

    let _icx = push_ctxt("match::store_local");
    let tcx = bcx.tcx();
    let pat = &*local.pat;

    fn create_dummy_locals<'blk, 'tcx>(mut bcx: Block<'blk, 'tcx>,
                                       pat: &ast::Pat)
                                       -> Block<'blk, 'tcx> {
        // create dummy memory for the variables if we have no
        // value to store into them immediately
        let tcx = bcx.tcx();
        pat_bindings(&tcx.def_map, pat, |_, p_id, _, path1| {
            let scope = cleanup::var_scope(tcx, p_id);
            bcx = mk_binding_alloca(
                bcx, p_id, &path1.node, scope, (),
                |(), bcx, llval, ty| { zero_mem(bcx, llval, ty); bcx });
        });
        bcx
    }

    match local.init {
        Some(ref init_expr) => {
            // Optimize the "let x = expr" case. This just writes
            // the result of evaluating `expr` directly into the alloca
            // for `x`. Often the general path results in similar or the
            // same code post-optimization, but not always. In particular,
            // in unsafe code, you can have expressions like
            //
            //    let x = intrinsics::uninit();
            //
            // In such cases, the more general path is unsafe, because
            // it assumes it is matching against a valid value.
            match simple_identifier(&*pat) {
                Some(ident) => {
                    let var_scope = cleanup::var_scope(tcx, local.id);
                    return mk_binding_alloca(
                        bcx, pat.id, ident, var_scope, (),
                        |(), bcx, v, _| expr::trans_into(bcx, &**init_expr,
                                                         expr::SaveIn(v)));
                }
                None => {}
            }

            // General path.
            let init_datum =
                unpack_datum!(bcx, expr::trans_to_lvalue(bcx, &**init_expr, "let"));
            if ty::type_is_bot(expr_ty(bcx, &**init_expr)) {
                create_dummy_locals(bcx, pat)
            } else {
                if bcx.sess().asm_comments() {
                    add_comment(bcx, "creating zeroable ref llval");
                }
                let var_scope = cleanup::var_scope(tcx, local.id);
                bind_irrefutable_pat(bcx, pat, init_datum.val, var_scope)
            }
        }
        None => {
            create_dummy_locals(bcx, pat)
        }
    }
}

pub fn store_arg<'blk, 'tcx>(mut bcx: Block<'blk, 'tcx>,
                             pat: &ast::Pat,
                             arg: Datum<Rvalue>,
                             arg_scope: cleanup::ScopeId)
                             -> Block<'blk, 'tcx> {
    /*!
     * Generates code for argument patterns like `fn foo(<pat>: T)`.
     * Creates entries in the `lllocals` map for each of the bindings
     * in `pat`.
     *
     * # Arguments
     *
     * - `pat` is the argument pattern
     * - `llval` is a pointer to the argument value (in other words,
     *   if the argument type is `T`, then `llval` is a `T*`). In some
     *   cases, this code may zero out the memory `llval` points at.
     */

    let _icx = push_ctxt("match::store_arg");

    match simple_identifier(&*pat) {
        Some(ident) => {
            // Generate nicer LLVM for the common case of an argument
            // pattern like `x: T`
            let arg_ty = node_id_type(bcx, pat.id);
            if type_of::arg_is_indirect(bcx.ccx(), arg_ty)
                && bcx.sess().opts.debuginfo != FullDebugInfo {
                // Don't copy an indirect argument to an alloca, the caller
                // already put it in a temporary alloca and gave it up, unless
                // we emit extra-debug-info, which requires local allocas :(.
                let arg_val = arg.add_clean(bcx.fcx, arg_scope);
                bcx.fcx.lllocals.borrow_mut()
                   .insert(pat.id, Datum::new(arg_val, arg_ty, Lvalue));
                bcx
            } else {
                mk_binding_alloca(
                    bcx, pat.id, ident, arg_scope, arg,
                    |arg, bcx, llval, _| arg.store_to(bcx, llval))
            }
        }

        None => {
            // General path. Copy out the values that are used in the
            // pattern.
            let arg = unpack_datum!(
                bcx, arg.to_lvalue_datum_in_scope(bcx, "__arg", arg_scope));
            bind_irrefutable_pat(bcx, pat, arg.val, arg_scope)
        }
    }
}

/// Generates code for the pattern binding in a `for` loop like
/// `for <pat> in <expr> { ... }`.
pub fn store_for_loop_binding<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
                                          pat: &ast::Pat,
                                          llvalue: ValueRef,
                                          body_scope: cleanup::ScopeId)
                                          -> Block<'blk, 'tcx> {
    let _icx = push_ctxt("match::store_for_loop_binding");

    if simple_identifier(&*pat).is_some() {
        // Generate nicer LLVM for the common case of a `for` loop pattern
        // like `for x in blahblah { ... }`.
        let binding_type = node_id_type(bcx, pat.id);
        bcx.fcx.lllocals.borrow_mut().insert(pat.id,
                                             Datum::new(llvalue,
                                                        binding_type,
                                                        Lvalue));
        return bcx;
    }

    // General path. Copy out the values that are used in the pattern.
    bind_irrefutable_pat(bcx, pat, llvalue, body_scope)
}

fn mk_binding_alloca<'blk, 'tcx, A>(bcx: Block<'blk, 'tcx>,
                                    p_id: ast::NodeId,
                                    ident: &ast::Ident,
                                    cleanup_scope: cleanup::ScopeId,
                                    arg: A,
                                    populate: |A, Block<'blk, 'tcx>, ValueRef, ty::t|
                                              -> Block<'blk, 'tcx>)
                                    -> Block<'blk, 'tcx> {
    let var_ty = node_id_type(bcx, p_id);

    // Allocate memory on stack for the binding.
    let llval = alloc_ty(bcx, var_ty, bcx.ident(*ident).as_slice());

    // Subtle: be sure that we *populate* the memory *before*
    // we schedule the cleanup.
    let bcx = populate(arg, bcx, llval, var_ty);
    bcx.fcx.schedule_lifetime_end(cleanup_scope, llval);
    bcx.fcx.schedule_drop_mem(cleanup_scope, llval, var_ty);

    // Now that memory is initialized and has cleanup scheduled,
    // create the datum and insert into the local variable map.
    let datum = Datum::new(llval, var_ty, Lvalue);
    bcx.fcx.lllocals.borrow_mut().insert(p_id, datum);
    bcx
}

fn bind_irrefutable_pat<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
                                    pat: &ast::Pat,
                                    val: ValueRef,
                                    cleanup_scope: cleanup::ScopeId)
                                    -> Block<'blk, 'tcx> {
    /*!
     * A simple version of the pattern matching code that only handles
     * irrefutable patterns. This is used in let/argument patterns,
     * not in match statements. Unifying this code with the code above
     * sounds nice, but in practice it produces very inefficient code,
     * since the match code is so much more general. In most cases,
     * LLVM is able to optimize the code, but it causes longer compile
     * times and makes the generated code nigh impossible to read.
     *
     * # Arguments
     * - bcx: starting basic block context
     * - pat: the irrefutable pattern being matched.
     * - val: the value being matched -- must be an lvalue (by ref, with cleanup)
     */

    debug!("bind_irrefutable_pat(bcx={}, pat={})",
           bcx.to_str(),
           pat.repr(bcx.tcx()));

    if bcx.sess().asm_comments() {
        add_comment(bcx, format!("bind_irrefutable_pat(pat={})",
                                 pat.repr(bcx.tcx())).as_slice());
    }

    let _indenter = indenter();

    let _icx = push_ctxt("match::bind_irrefutable_pat");
    let mut bcx = bcx;
    let tcx = bcx.tcx();
    let ccx = bcx.ccx();

    match pat.node {
        ast::PatIdent(pat_binding_mode, ref path1, ref inner) => {
            if pat_is_binding(&tcx.def_map, &*pat) {
                // Allocate the stack slot where the value of this
                // binding will live and place it into the appropriate
                // map.
                bcx = mk_binding_alloca(
                    bcx, pat.id, &path1.node, cleanup_scope, (),
                    |(), bcx, llval, ty| {
                        match pat_binding_mode {
                            ast::BindByValue(_) => {
                                // By value binding: move the value that `val`
                                // points at into the binding's stack slot.
                                let d = Datum::new(val, ty, Lvalue);
                                d.store_to(bcx, llval)
                            }

                            ast::BindByRef(_) => {
                                // By ref binding: the value of the variable
                                // is the pointer `val` itself.
                                Store(bcx, val, llval);
                                bcx
                            }
                        }
                    });
            }

            for inner_pat in inner.iter() {
                bcx = bind_irrefutable_pat(bcx, &**inner_pat, val, cleanup_scope);
            }
        }
        ast::PatEnum(_, ref sub_pats) => {
            let opt_def = bcx.tcx().def_map.borrow().find_copy(&pat.id);
            match opt_def {
                Some(def::DefVariant(enum_id, var_id, _)) => {
                    let repr = adt::represent_node(bcx, pat.id);
                    let vinfo = ty::enum_variant_with_id(ccx.tcx(),
                                                         enum_id,
                                                         var_id);
                    let args = extract_variant_args(bcx,
                                                    &*repr,
                                                    vinfo.disr_val,
                                                    val);
                    for sub_pat in sub_pats.iter() {
                        for (i, &argval) in args.vals.iter().enumerate() {
                            bcx = bind_irrefutable_pat(bcx, &**sub_pat.get(i),
                                                       argval, cleanup_scope);
                        }
                    }
                }
                Some(def::DefFn(..)) |
                Some(def::DefStruct(..)) => {
                    match *sub_pats {
                        None => {
                            // This is a unit-like struct. Nothing to do here.
                        }
                        Some(ref elems) => {
                            // This is the tuple struct case.
                            let repr = adt::represent_node(bcx, pat.id);
                            for (i, elem) in elems.iter().enumerate() {
                                let fldptr = adt::trans_field_ptr(bcx, &*repr,
                                                                  val, 0, i);
                                bcx = bind_irrefutable_pat(bcx, &**elem,
                                                           fldptr, cleanup_scope);
                            }
                        }
                    }
                }
                _ => {
                    // Nothing to do here.
                }
            }
        }
        ast::PatStruct(_, ref fields, _) => {
            let tcx = bcx.tcx();
            let pat_ty = node_id_type(bcx, pat.id);
            let pat_repr = adt::represent_type(bcx.ccx(), pat_ty);
            expr::with_field_tys(tcx, pat_ty, Some(pat.id), |discr, field_tys| {
                for f in fields.iter() {
                    let ix = ty::field_idx_strict(tcx, f.ident.name, field_tys);
                    let fldptr = adt::trans_field_ptr(bcx, &*pat_repr, val,
                                                      discr, ix);
                    bcx = bind_irrefutable_pat(bcx, &*f.pat, fldptr, cleanup_scope);
                }
            })
        }
        ast::PatTup(ref elems) => {
            let repr = adt::represent_node(bcx, pat.id);
            for (i, elem) in elems.iter().enumerate() {
                let fldptr = adt::trans_field_ptr(bcx, &*repr, val, 0, i);
                bcx = bind_irrefutable_pat(bcx, &**elem, fldptr, cleanup_scope);
            }
        }
        ast::PatBox(ref inner) => {
            let llbox = Load(bcx, val);
            bcx = bind_irrefutable_pat(bcx, &**inner, llbox, cleanup_scope);
        }
        ast::PatRegion(ref inner) => {
            let loaded_val = Load(bcx, val);
            bcx = bind_irrefutable_pat(bcx, &**inner, loaded_val, cleanup_scope);
        }
        ast::PatVec(ref before, ref slice, ref after) => {
            let pat_ty = node_id_type(bcx, pat.id);
            let mut extracted = extract_vec_elems(bcx, pat_ty, before.len(), after.len(), val);
            if slice.is_some() {
                extracted.vals.insert(
                    before.len(),
                    bind_subslice_pat(bcx, pat.id, val, before.len(), after.len())
                );
            }
            bcx = before
                .iter()
                .chain(slice.iter())
                .chain(after.iter())
                .zip(extracted.vals.into_iter())
                .fold(bcx, |bcx, (inner, elem)|
                    bind_irrefutable_pat(bcx, &**inner, elem, cleanup_scope)
                );
        }
        ast::PatMac(..) => {
            bcx.sess().span_bug(pat.span, "unexpanded macro");
        }
        ast::PatWild(_) | ast::PatLit(_) | ast::PatRange(_, _) => ()
    }
    return bcx;
}