1 // Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT
2 // file at the top-level directory of this distribution and at
3 // http://rust-lang.org/COPYRIGHT.
5 // Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
6 // http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
7 // <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
8 // option. This file may not be copied, modified, or distributed
9 // except according to those terms.
13 * # Compilation of match statements
15 * I will endeavor to explain the code as best I can. I have only a loose
16 * understanding of some parts of it.
20 * The basic state of the code is maintained in an array `m` of `Match`
21 * objects. Each `Match` describes some list of patterns, all of which must
22 * match against the current list of values. If those patterns match, then
23 * the arm listed in the match is the correct arm. A given arm may have
24 * multiple corresponding match entries, one for each alternative that
25 * remains. As we proceed these sets of matches are adjusted by the various
26 * `enter_XXX()` functions, each of which adjusts the set of options given
27 * some information about the value which has been matched.
29 * So, initially, there is one value and N matches, each of which has one
30 * constituent pattern. N here is usually the number of arms but may be
31 * greater, if some arms have multiple alternatives. For example, here:
33 * enum Foo { A, B(int), C(uint, uint) }
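 *
 * and, say, a match along these lines (the guard condition and the arm
 * bodies are only illustrative):
 *
 *     match foo {
 *         A => ...,
 *         B(x) if x > 10 => ...,
 *         C(1u, 2) => ...,
 *         _ => ...
 *     }
 *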
41 * The value would be `foo`. There would be four matches, each of which
42 * contains one pattern (and, in one case, a guard). We could collect the
43 * various options and then compile the code for the case where `foo` is an
44 * `A`, a `B`, and a `C`. When we generate the code for `C`, we would (1)
45 * drop the two matches that do not match a `C` and (2) expand the other two
46 * into two patterns each. In the first case, the two patterns would be `1u`
47 * and `2`, and in the second case the `_` pattern would be expanded into
48 * `_` and `_`. The two values are of course the arguments to `C`.
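 *
 * Schematically, after entering `C` the remaining state looks roughly like
 * this (an illustrative sketch, not literal compiler data):
 *
 *     values:   [arg 0 of C, arg 1 of C]
 *     match 1:  [1u, 2]     (from the `C(1u, 2)` arm)
 *     match 2:  [_, _]      (from the `_` arm)
 *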
50 * Here is a quick guide to the various functions:
52 * - `compile_submatch()`: The main workhorse. It takes a list of values and
53 * a list of matches and finds the various possibilities that could occur.
55 * - `enter_XXX()`: modifies the list of matches based on some information
56 * about the value that has been matched. For example,
57 * `enter_rec_or_struct()` adjusts the values given that a record or struct
58 * has been matched. This is an infallible pattern, so *all* of the matches
59 * must be either wildcards or record/struct patterns. `enter_opt()`
60 * handles the fallible cases, and it is correspondingly more complex.
64 * We store information about the bound variables for each arm as part of the
65 * per-arm `ArmData` struct. There is a mapping from identifiers to
66 * `BindingInfo` structs. These structs contain the mode/id/type of the
67 * binding, but they also contain an LLVM value which points at an alloca
68 * called `llmatch`. For by value bindings that are Copy, we also create
69 * an extra alloca into which we copy the matched value, so that any changes
70 * we make to our copy are not reflected in the original and vice versa.
71 * We don't do this if the binding is a move, since the original value can't be
72 * used afterwards anyway, which lets us cheat and skip the extra alloca.
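 *
 * For example (an illustrative sketch):
 *
 *     match x { mut y => { y += 1; } }
 *
 * Here `y` is a by-value binding of a Copy value, so it gets its own
 * alloca holding a copy of `x`, and the increment leaves `x` untouched.
 *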
74 * The `llmatch` binding always stores a pointer into the value being matched
75 * which points at the data for the binding. If the value being matched has
76 * type `T`, then, `llmatch` will point at an alloca of type `T*` (and hence
77 * `llmatch` has type `T**`). So, if you have a pattern like:
81 * match (a, b) { (ref c, d) => { ... } }
83 * For `c` and `d`, we would generate allocas of type `C*` and `D*`
84 * respectively. These are the `llmatch` allocas. As we match, when we come
85 * up against an identifier, we store the current pointer into the
86 * corresponding alloca.
88 * Once a pattern is completely matched, and assuming that there is no guard
89 * pattern, we will branch to a block that leads to the body itself. For any
90 * by-value bindings, this block will first load the ptr from `llmatch` (the
91 * one of type `D*`) and then load a second time to get the actual value (the
92 * one of type `D`). For by ref bindings, the value of the local variable is
93 * simply the first alloca.
95 * So, for the example above, we would generate a setup kind of like this:
101 * +--------------------------------------------+
102 * | llmatch_c = (addr of first half of tuple) |
103 * | llmatch_d = (addr of second half of tuple) |
104 * +--------------------------------------------+
106 * +--------------------------------------+
107 * | *llbinding_d = **llmatch_d |
108 * +--------------------------------------+
110 * If there is a guard, the situation is slightly different, because we must
111 * execute the guard code. Moreover, we need to do so once for each of the
112 * alternatives that lead to the arm, because if the guard fails, they may
113 * have different points from which to continue the search. Therefore, in that
114 * case, we generate code that looks more like:
120 * +---------------------------------------------+
121 * | llmatch_c = (addr of first half of tuple)  |
122 * | llmatch_d = (addr of second half of tuple) |
123 * +---------------------------------------------+
125 * +-------------------------------------------------+
126 * | *llbinding_d = **llmatch_d |
127 * | check condition |
128 * | if false { goto next case } |
129 * | if true { goto body } |
130 * +-------------------------------------------------+
132 * The handling for the cleanups is a bit... sensitive. Basically, the body
133 * is the one that invokes `add_clean()` for each binding. During the guard
134 * evaluation, we add temporary cleanups and revoke them after the guard is
135 * evaluated (it could fail, after all). Note that guards and moves are
136 * just plain incompatible.
138 * Some relevant helper functions that manage bindings:
139 * - `create_bindings_map()`
140 * - `insert_lllocals()`
143 * ## Notes on vector pattern matching.
145 * Vector pattern matching is surprisingly tricky. The problem is that
146 * the structure of the vector isn't fully known, and slice matches
147 * can be done on subparts of it.
149 * The way that vector pattern matches are dealt with, then, is as
150 * follows. First, we make the actual condition associated with a
151 * vector pattern simply a vector length comparison. So the pattern
152 * [1, .. x] gets the condition "vec len >= 1", and the pattern
153 * [.. x] gets the condition "vec len >= 0". The problem here is that
154 * having the condition "vec len >= 1" hold clearly does not mean that
155 * only a pattern that has exactly that condition will match. This
156 * means that it may well be the case that a condition holds, but none
157 * of the patterns associated with that condition actually match; to deal with
158 * this, when doing vector length matches, we have match failures proceed to
158 * when doing vector length matches, we have match failures proceed to
159 * the next condition to check.
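 *
 * For example, a sketch of the conditions that get generated (fixed-length
 * patterns use an equality test rather than a lower bound):
 *
 *     pattern          condition tested
 *     [a, b, c]        vec len == 3
 *     [1, .. x]        vec len >= 1
 *     [.. x]           vec len >= 0
 *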
161 * There are a couple more subtleties to deal with. While the "actual"
162 * condition associated with vector length tests is simply a test on
163 * the vector length, the actual vec_len Opt entry contains more
164 * information used to restrict which matches are associated with it.
165 * So that all matches in a submatch are matching against the same
166 * values from inside the vector, they are split up by how many
167 * elements they match at the front and at the back of the vector. In
168 * order to make sure that arms are properly checked in order, even
169 * with the overmatching conditions, each vec_len Opt entry is
170 * associated with a range of matches.
171 * Consider the following:
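 * (the concrete values in this example are only illustrative)
 *
 * match &[1, 2, 3] {
 * [1, 1, .. _] => 0,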
175 * [1, 2, 2, .. _] => 1,
176 * [1, 2, 3, .. _] => 2,
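 * [1, 2, .. _] => 3,
 * _ => 4
 * }
 *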
180 * The proper arm to match is arm 2, but arms 0 and 3 both have the
181 * condition "len >= 2". If arm 3 was lumped in with arm 0, then the
182 * wrong branch would be taken. Instead, vec_len Opts are associated
183 * with a contiguous range of matches that have the same "shape".
184 * This is sort of ugly and requires a bunch of special handling of
 * vec_len options.
189 #![allow(non_camel_case_types)]
192 use driver::config::FullDebugInfo;
194 use llvm::{ValueRef, BasicBlockRef};
195 use middle::const_eval;
197 use middle::check_match;
198 use middle::lang_items::StrEqFnLangItem;
199 use middle::pat_util::*;
200 use middle::resolve::DefMap;
201 use middle::trans::adt;
202 use middle::trans::base::*;
203 use middle::trans::build::*;
204 use middle::trans::callee;
205 use middle::trans::cleanup;
206 use middle::trans::cleanup::CleanupMethods;
207 use middle::trans::common::*;
208 use middle::trans::consts;
209 use middle::trans::controlflow;
210 use middle::trans::datum::*;
211 use middle::trans::expr::Dest;
212 use middle::trans::expr;
213 use middle::trans::tvec;
214 use middle::trans::type_of;
215 use middle::trans::debuginfo;
217 use util::common::indenter;
218 use util::ppaux::{Repr, vec_map_to_string};
221 use std::collections::HashMap;
226 use syntax::ast::Ident;
227 use syntax::codemap::Span;
228 use syntax::parse::token::InternedString;
230 // An option identifying a literal: either an expression or a DefId of a static expression.
232 ExprLit(Gc<ast::Expr>),
233 ConstLit(ast::DefId), // the def ID of the constant
236 #[deriving(PartialEq)]
239 vec_len_ge(/* length of prefix */uint)
242 // An option identifying a branch (either a literal, an enum variant or a
246 var(ty::Disr, Rc<adt::Repr>, ast::DefId),
247 range(Gc<ast::Expr>, Gc<ast::Expr>),
248 vec_len(/* length */ uint, VecLenOpt, /*range of matches*/(uint, uint))
251 fn lit_to_expr(tcx: &ty::ctxt, a: &Lit) -> Gc<ast::Expr> {
253 ExprLit(existing_a_expr) => existing_a_expr,
254 ConstLit(a_const) => const_eval::lookup_const_by_id(tcx, a_const).unwrap()
258 fn opt_eq(tcx: &ty::ctxt, a: &Opt, b: &Opt) -> bool {
260 (&lit(a), &lit(b)) => {
261 let a_expr = lit_to_expr(tcx, &a);
262 let b_expr = lit_to_expr(tcx, &b);
263 match const_eval::compare_lit_exprs(tcx, &*a_expr, &*b_expr) {
264 Some(val1) => val1 == 0,
265 None => fail!("compare_lit_exprs: type mismatch"),
268 (&range(ref a1, ref a2), &range(ref b1, ref b2)) => {
269 let m1 = const_eval::compare_lit_exprs(tcx, &**a1, &**b1);
270 let m2 = const_eval::compare_lit_exprs(tcx, &**a2, &**b2);
272 (Some(val1), Some(val2)) => (val1 == 0 && val2 == 0),
273 _ => fail!("compare_lit_exprs: type mismatch"),
276 (&var(a, _, _), &var(b, _, _)) => a == b,
277 (&vec_len(a1, a2, _), &vec_len(b1, b2, _)) =>
278 a1 == b1 && a2 == b2,
283 pub enum opt_result<'a> {
284 single_result(Result<'a>),
285 lower_bound(Result<'a>),
286 range_result(Result<'a>, Result<'a>),
289 fn trans_opt<'a>(bcx: &'a Block<'a>, o: &Opt) -> opt_result<'a> {
290 let _icx = push_ctxt("match::trans_opt");
294 lit(ExprLit(ref lit_expr)) => {
295 let lit_datum = unpack_datum!(bcx, expr::trans(bcx, &**lit_expr));
296 let lit_datum = lit_datum.assert_rvalue(bcx); // literals are rvalues
297 let lit_datum = unpack_datum!(bcx, lit_datum.to_appropriate_datum(bcx));
298 return single_result(Result::new(bcx, lit_datum.val));
300 lit(l @ ConstLit(ref def_id)) => {
301 let lit_ty = ty::node_id_to_type(bcx.tcx(), lit_to_expr(bcx.tcx(), &l).id);
302 let (llval, _) = consts::get_const_val(bcx.ccx(), *def_id);
303 let lit_datum = immediate_rvalue(llval, lit_ty);
304 let lit_datum = unpack_datum!(bcx, lit_datum.to_appropriate_datum(bcx));
305 return single_result(Result::new(bcx, lit_datum.val));
307 var(disr_val, ref repr, _) => {
308 return adt::trans_case(bcx, &**repr, disr_val);
310 range(ref l1, ref l2) => {
311 let (l1, _) = consts::const_expr(ccx, &**l1, true);
312 let (l2, _) = consts::const_expr(ccx, &**l2, true);
313 return range_result(Result::new(bcx, l1), Result::new(bcx, l2));
315 vec_len(n, vec_len_eq, _) => {
316 return single_result(Result::new(bcx, C_int(ccx, n as int)));
318 vec_len(n, vec_len_ge(_), _) => {
319 return lower_bound(Result::new(bcx, C_int(ccx, n as int)));
324 fn variant_opt(bcx: &Block, pat_id: ast::NodeId) -> Opt {
326 let def = ccx.tcx.def_map.borrow().get_copy(&pat_id);
328 def::DefVariant(enum_id, var_id, _) => {
329 let variant = ty::enum_variant_with_id(ccx.tcx(), enum_id, var_id);
330 var(variant.disr_val, adt::represent_node(bcx, pat_id), var_id)
333 ccx.sess().bug("non-variant or struct in variant_opt()");
339 pub enum TransBindingMode {
340 TrByCopy(/* llbinding */ ValueRef),
346 * Information about a pattern binding:
347 * - `llmatch` is a pointer to a stack slot. The stack slot contains a
348 * pointer into the value being matched. Hence, llmatch has type `T**`
349 * where `T` is the type of the value being matched.
350 * - `trmode` is the trans binding mode
351 * - `id` is the node id of the binding
352 * - `ty` is the Rust type of the binding */
354 pub struct BindingInfo {
355 pub llmatch: ValueRef,
356 pub trmode: TransBindingMode,
362 type BindingsMap = HashMap<Ident, BindingInfo>;
364 struct ArmData<'a, 'b> {
365 bodycx: &'b Block<'b>,
367 bindings_map: BindingsMap
372 * If all `pats` are matched then arm `data` will be executed.
373 * As we proceed, `bound_ptrs` is filled with pointers to the values to be bound;
374 * these pointers are stored into the `llmatch` allocas just before the `data` arm is executed.
376 struct Match<'a, 'b> {
377 pats: Vec<Gc<ast::Pat>>,
378 data: &'a ArmData<'a, 'b>,
379 bound_ptrs: Vec<(Ident, ValueRef)>
382 impl<'a, 'b> Repr for Match<'a, 'b> {
383 fn repr(&self, tcx: &ty::ctxt) -> String {
384 if tcx.sess.verbose() {
385 // for many programs, this just takes too long to serialize
388 format!("{} pats", self.pats.len())
393 fn has_nested_bindings(m: &[Match], col: uint) -> bool {
395 match br.pats.get(col).node {
396 ast::PatIdent(_, _, Some(_)) => return true,
403 fn expand_nested_bindings<'a, 'b>(
405 m: &'a [Match<'a, 'b>],
408 -> Vec<Match<'a, 'b>> {
409 debug!("expand_nested_bindings(bcx={}, m={}, col={}, val={})",
413 bcx.val_to_string(val));
414 let _indenter = indenter();
417 let mut bound_ptrs = br.bound_ptrs.clone();
418 let mut pat = *br.pats.get(col);
420 pat = match pat.node {
421 ast::PatIdent(_, ref path, Some(inner)) => {
422 bound_ptrs.push((path.node, val));
429 let mut pats = br.pats.clone();
430 *pats.get_mut(col) = pat;
434 bound_ptrs: bound_ptrs
439 type enter_pats<'a> = |&[Gc<ast::Pat>]|: 'a -> Option<Vec<Gc<ast::Pat>>>;
441 fn enter_match<'a, 'b>(
444 m: &'a [Match<'a, 'b>],
448 -> Vec<Match<'a, 'b>> {
449 debug!("enter_match(bcx={}, m={}, col={}, val={})",
453 bcx.val_to_string(val));
454 let _indenter = indenter();
456 m.iter().filter_map(|br| {
457 e(br.pats.as_slice()).map(|pats| {
458 let this = *br.pats.get(col);
459 let mut bound_ptrs = br.bound_ptrs.clone();
461 ast::PatIdent(_, ref path1, None) => {
462 if pat_is_binding(dm, &*this) {
463 bound_ptrs.push((path1.node, val));
472 bound_ptrs: bound_ptrs
478 fn enter_default<'a, 'b>(
481 m: &'a [Match<'a, 'b>],
484 -> Vec<Match<'a, 'b>> {
485 debug!("enter_default(bcx={}, m={}, col={}, val={})",
489 bcx.val_to_string(val));
490 let _indenter = indenter();
492 // Collect all of the matches that can match against anything.
493 enter_match(bcx, dm, m, col, val, |pats| {
494 if pat_is_binding_or_wild(dm, &*pats[col]) {
495 Some(Vec::from_slice(pats.slice_to(col)).append(pats.slice_from(col + 1)))
502 // <pcwalton> nmatsakis: what does enter_opt do?
503 // <pcwalton> in trans/match
504 // <pcwalton> trans/match.rs is like stumbling around in a dark cave
505 // <nmatsakis> pcwalton: the enter family of functions adjust the set of
506 // patterns as needed
507 // <nmatsakis> yeah, at some point I kind of achieved some level of
509 // <nmatsakis> anyhow, they adjust the patterns given that something of that
510 // kind has been found
511 // <nmatsakis> pcwalton: ok, right, so enter_XXX() adjusts the patterns, as I
513 // <nmatsakis> enter_match() kind of embodies the generic code
514 // <nmatsakis> it is provided with a function that tests each pattern to see
515 // if it might possibly apply and so forth
516 // <nmatsakis> so, if you have a pattern like {a: _, b: _, _} and one like _
517 // <nmatsakis> then _ would be expanded to (_, _)
518 // <nmatsakis> one spot for each of the sub-patterns
519 // <nmatsakis> enter_opt() is one of the more complex; it covers the fallible
521 // <nmatsakis> enter_rec_or_struct() or enter_tuple() are simpler, since they
522 // are infallible patterns
523 // <nmatsakis> so all patterns must either be records (resp. tuples) or
526 /// The above is now outdated in that enter_match() now takes a function that
527 /// takes the complete row of patterns rather than just the first one.
528 /// Also, most of the enter_() family functions have been unified with
529 /// the check_match specialization step.
530 fn enter_opt<'a, 'b>(
534 m: &'a [Match<'a, 'b>],
539 -> Vec<Match<'a, 'b>> {
540 debug!("enter_opt(bcx={}, m={}, opt={:?}, col={}, val={})",
545 bcx.val_to_string(val));
546 let _indenter = indenter();
548 let ctor = match opt {
550 check_match::ConstantValue(const_eval::eval_const_expr(
551 bcx.tcx(), &*lit_to_expr(bcx.tcx(), &x)))
553 &range(ref lo, ref hi) => check_match::ConstantRange(
554 const_eval::eval_const_expr(bcx.tcx(), &**lo),
555 const_eval::eval_const_expr(bcx.tcx(), &**hi)
557 &vec_len(len, _, _) => check_match::Slice(len),
558 &var(_, _, def_id) => check_match::Variant(def_id)
563 let mcx = check_match::MatchCheckCtxt { tcx: bcx.tcx() };
564 enter_match(bcx, dm, m, col, val, |pats| {
565 let span = pats[col].span;
566 let specialized = match pats[col].node {
567 ast::PatVec(ref before, slice, ref after) => {
568 let (lo, hi) = match *opt {
569 vec_len(_, _, (lo, hi)) => (lo, hi),
570 _ => tcx.sess.span_bug(span,
571 "vec pattern but not vec opt")
574 let elems = match slice {
575 Some(slice) if i >= lo && i <= hi => {
576 let n = before.len() + after.len();
577 let this_opt = vec_len(n, vec_len_ge(before.len()),
579 if opt_eq(tcx, &this_opt, opt) {
580 let mut new_before = Vec::new();
581 for pat in before.iter() {
582 new_before.push(*pat);
584 new_before.push(slice);
585 for pat in after.iter() {
586 new_before.push(*pat);
593 None if i >= lo && i <= hi => {
594 let n = before.len();
595 if opt_eq(tcx, &vec_len(n, vec_len_eq, (lo,hi)), opt) {
596 let mut new_before = Vec::new();
597 for pat in before.iter() {
598 new_before.push(*pat);
607 elems.map(|head| head.append(pats.slice_to(col)).append(pats.slice_from(col + 1)))
610 check_match::specialize(&mcx, pats.as_slice(), &ctor, col, variant_size)
618 // Returns the options in one column of matches. An option is something that
619 // needs to be conditionally matched at runtime; for example, the discriminant
620 // on a set of enum variants or a literal.
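// For example (an illustrative sketch), given the column of patterns
//
//     Some(1), None, x
//
// the options are the variants `Some` and `None`; the binding `x`
// contributes no option because it matches unconditionally.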
621 fn get_options(bcx: &Block, m: &[Match], col: uint) -> Vec<Opt> {
623 fn add_to_set(tcx: &ty::ctxt, set: &mut Vec<Opt>, val: Opt) {
624 if set.iter().any(|l| opt_eq(tcx, l, &val)) {return;}
627 // Vector comparisons are special: since the actual conditions
628 // over-match, we need to be careful about them. This
629 // means that, to handle things in the correct order, we cannot
630 // always merge conditions.
631 fn add_veclen_to_set(set: &mut Vec<Opt> , i: uint,
632 len: uint, vlo: VecLenOpt) {
634 // If the last condition in the list matches the one we want
635 // to add, then extend its range. Otherwise, make a new
636 // vec_len with a range just covering the new entry.
637 Some(&vec_len(len2, vlo2, (start, end)))
638 if len == len2 && vlo == vlo2 => {
639 let length = set.len();
640 *set.get_mut(length - 1) =
641 vec_len(len, vlo, (start, end+1))
643 _ => set.push(vec_len(len, vlo, (i, i)))
647 let mut found = Vec::new();
648 for (i, br) in m.iter().enumerate() {
649 let cur = *br.pats.get(col);
652 add_to_set(ccx.tcx(), &mut found, lit(ExprLit(l)));
654 ast::PatIdent(..) => {
655 // This is either an enum variant or a variable binding.
656 let opt_def = ccx.tcx.def_map.borrow().find_copy(&cur.id);
658 Some(def::DefVariant(..)) => {
659 add_to_set(ccx.tcx(), &mut found,
660 variant_opt(bcx, cur.id));
662 Some(def::DefStatic(const_did, false)) => {
663 add_to_set(ccx.tcx(), &mut found,
664 lit(ConstLit(const_did)));
669 ast::PatEnum(..) | ast::PatStruct(..) => {
670 // This could be one of: a tuple-like enum variant, a
671 // struct-like enum variant, or a struct.
672 let opt_def = ccx.tcx.def_map.borrow().find_copy(&cur.id);
674 Some(def::DefFn(..)) |
675 Some(def::DefVariant(..)) => {
676 add_to_set(ccx.tcx(), &mut found,
677 variant_opt(bcx, cur.id));
679 Some(def::DefStatic(const_did, false)) => {
680 add_to_set(ccx.tcx(), &mut found,
681 lit(ConstLit(const_did)));
686 ast::PatRange(l1, l2) => {
687 add_to_set(ccx.tcx(), &mut found, range(l1, l2));
689 ast::PatVec(ref before, slice, ref after) => {
690 let (len, vec_opt) = match slice {
691 None => (before.len(), vec_len_eq),
692 Some(_) => (before.len() + after.len(),
693 vec_len_ge(before.len()))
695 add_veclen_to_set(&mut found, i, len, vec_opt);
703 struct ExtractedBlock<'a> {
704 vals: Vec<ValueRef> ,
708 fn extract_variant_args<'a>(
713 -> ExtractedBlock<'a> {
714 let _icx = push_ctxt("match::extract_variant_args");
715 let args = Vec::from_fn(adt::num_args(repr, disr_val), |i| {
716 adt::trans_field_ptr(bcx, repr, val, disr_val, i)
719 ExtractedBlock { vals: args, bcx: bcx }
722 fn match_datum(bcx: &Block,
727 * Helper for converting from the ValueRef that we pass around in
728 * the match code, which is always an lvalue, into a Datum. Eventually
729 * we should just pass around a Datum and be done with it.
732 let ty = node_id_type(bcx, pat_id);
733 Datum::new(val, ty, Lvalue)
737 fn extract_vec_elems<'a>(
743 -> ExtractedBlock<'a> {
744 let _icx = push_ctxt("match::extract_vec_elems");
745 let vec_datum = match_datum(bcx, val, pat_id);
746 let (base, len) = vec_datum.get_vec_base_and_len(bcx);
747 let vec_ty = node_id_type(bcx, pat_id);
748 let vt = tvec::vec_types(bcx, ty::sequence_element_type(bcx.tcx(), vec_ty));
750 let mut elems = Vec::from_fn(elem_count, |i| {
752 None => GEPi(bcx, base, [i]),
753 Some(n) if i < n => GEPi(bcx, base, [i]),
754 Some(n) if i > n => {
755 InBoundsGEP(bcx, base, [
757 C_int(bcx.ccx(), (elem_count - i) as int))])
759 _ => unsafe { llvm::LLVMGetUndef(vt.llunit_ty.to_ref()) }
763 let n = slice.unwrap();
764 let slice_byte_offset = Mul(bcx, vt.llunit_size, C_uint(bcx.ccx(), n));
765 let slice_begin = tvec::pointer_add_byte(bcx, base, slice_byte_offset);
766 let slice_len_offset = C_uint(bcx.ccx(), elem_count - 1u);
767 let slice_len = Sub(bcx, len, slice_len_offset);
768 let slice_ty = ty::mk_slice(bcx.tcx(),
770 ty::mt {ty: vt.unit_ty, mutbl: ast::MutImmutable});
771 let scratch = rvalue_scratch_datum(bcx, slice_ty, "");
772 Store(bcx, slice_begin,
773 GEPi(bcx, scratch.val, [0u, abi::slice_elt_base]));
774 Store(bcx, slice_len, GEPi(bcx, scratch.val, [0u, abi::slice_elt_len]));
775 *elems.get_mut(n) = scratch.val;
778 ExtractedBlock { vals: elems, bcx: bcx }
781 // Macro for deciding whether any of the remaining matches fit a given kind of
782 // pattern. Note that, because the `match` being compiled is well-typed, either ALL of the
783 // matches should fit that sort of pattern or NONE (however, some of the
784 // matches may be wildcards like _ or identifiers).
785 macro_rules! any_pat (
786 ($m:expr, $col:expr, $pattern:pat) => (
787 ($m).iter().any(|br| {
788 match br.pats.get($col).node {
796 fn any_uniq_pat(m: &[Match], col: uint) -> bool {
797 any_pat!(m, col, ast::PatBox(_))
800 fn any_region_pat(m: &[Match], col: uint) -> bool {
801 any_pat!(m, col, ast::PatRegion(_))
804 fn any_irrefutable_adt_pat(bcx: &Block, m: &[Match], col: uint) -> bool {
806 let pat = *br.pats.get(col);
808 ast::PatTup(_) => true,
809 ast::PatStruct(..) => {
810 match bcx.tcx().def_map.borrow().find(&pat.id) {
811 Some(&def::DefVariant(..)) => false,
815 ast::PatEnum(..) | ast::PatIdent(_, _, None) => {
816 match bcx.tcx().def_map.borrow().find(&pat.id) {
817 Some(&def::DefFn(..)) |
818 Some(&def::DefStruct(..)) => true,
827 struct DynamicFailureHandler<'a> {
831 finished: Cell<Option<BasicBlockRef>>,
834 impl<'a> DynamicFailureHandler<'a> {
835 fn handle_fail(&self) -> BasicBlockRef {
836 match self.finished.get() {
837 Some(bb) => return bb,
841 let fcx = self.bcx.fcx;
842 let fail_cx = fcx.new_block(false, "case_fallthrough", None);
843 controlflow::trans_fail(fail_cx, self.sp, self.msg.clone());
844 self.finished.set(Some(fail_cx.llbb));
849 /// What to do when the pattern match fails.
850 enum FailureHandler<'a> {
852 JumpToBasicBlock(BasicBlockRef),
853 DynamicFailureHandlerClass(Box<DynamicFailureHandler<'a>>),
856 impl<'a> FailureHandler<'a> {
857 fn is_infallible(&self) -> bool {
864 fn is_fallible(&self) -> bool {
865 !self.is_infallible()
868 fn handle_fail(&self) -> BasicBlockRef {
871 fail!("attempted to fail in infallible failure handler!")
873 JumpToBasicBlock(basic_block) => basic_block,
874 DynamicFailureHandlerClass(ref dynamic_failure_handler) => {
875 dynamic_failure_handler.handle_fail()
881 fn pick_col(m: &[Match]) -> uint {
882 fn score(p: &ast::Pat) -> uint {
884 ast::PatLit(_) | ast::PatEnum(_, _) | ast::PatRange(_, _) => 1u,
885 ast::PatIdent(_, _, Some(ref p)) => score(&**p),
889 let mut scores = Vec::from_elem(m[0].pats.len(), 0u);
891 for (i, ref p) in br.pats.iter().enumerate() {
892 *scores.get_mut(i) += score(&***p);
895 let mut max_score = 0u;
896 let mut best_col = 0u;
897 for (i, score) in scores.iter().enumerate() {
900 // Irrefutable columns always go first; they'd only be duplicated in
 // the branches.
902 if score == 0u { return i; }
903 // If no irrefutable ones are found, we pick the one with the biggest
 // branching factor.
905 if score > max_score { max_score = score; best_col = i; }
910 #[deriving(PartialEq)]
911 pub enum branch_kind { no_branch, single, switch, compare, compare_vec_len }
913 // Compiles a comparison between two values of type `rhs_t`.
914 fn compare_values<'a>(
920 fn compare_str<'a>(cx: &'a Block<'a>,
925 let did = langcall(cx,
927 format!("comparison of `{}`",
928 cx.ty_to_string(rhs_t)).as_slice(),
930 callee::trans_lang_call(cx, did, [lhs, rhs], None)
933 let _icx = push_ctxt("compare_values");
934 if ty::type_is_scalar(rhs_t) {
935 let rs = compare_scalar_types(cx, lhs, rhs, rhs_t, ast::BiEq);
936 return Result::new(rs.bcx, rs.val);
939 match ty::get(rhs_t).sty {
940 ty::ty_rptr(_, mt) => match ty::get(mt.ty).sty {
941 ty::ty_str => compare_str(cx, lhs, rhs, rhs_t),
942 ty::ty_vec(mt, _) => match ty::get(mt.ty).sty {
943 ty::ty_uint(ast::TyU8) => {
944 // NOTE: cast &[u8] to &str and abuse the str_eq lang item,
945 // which calls memcmp().
946 let t = ty::mk_str_slice(cx.tcx(), ty::ReStatic, ast::MutImmutable);
947 let lhs = BitCast(cx, lhs, type_of::type_of(cx.ccx(), t).ptr_to());
948 let rhs = BitCast(cx, rhs, type_of::type_of(cx.ccx(), t).ptr_to());
949 compare_str(cx, lhs, rhs, rhs_t)
951 _ => cx.sess().bug("only byte strings supported in compare_values"),
953 _ => cx.sess().bug("only string and byte strings supported in compare_values"),
955 _ => cx.sess().bug("only scalars, byte strings, and strings supported in compare_values"),
959 fn insert_lllocals<'a>(mut bcx: &'a Block<'a>, bindings_map: &BindingsMap,
960 cs: Option<cleanup::ScopeId>)
963 * For each binding in `data.bindings_map`, adds an appropriate entry into
964 * the `fcx.lllocals` map
967 for (&ident, &binding_info) in bindings_map.iter() {
968 let llval = match binding_info.trmode {
969 // By value mut binding for a copy type: load from the ptr
970 // into the matched value and copy to our alloca
971 TrByCopy(llbinding) => {
972 let llval = Load(bcx, binding_info.llmatch);
973 let datum = Datum::new(llval, binding_info.ty, Lvalue);
974 bcx = datum.store_to(bcx, llbinding);
979 // By value move bindings: load from the ptr into the matched value
980 TrByMove => Load(bcx, binding_info.llmatch),
982 // By ref binding: use the ptr into the matched value
983 TrByRef => binding_info.llmatch
986 let datum = Datum::new(llval, binding_info.ty, Lvalue);
988 Some(cs) => bcx.fcx.schedule_drop_and_zero_mem(cs, llval, binding_info.ty),
992 debug!("binding {:?} to {}",
994 bcx.val_to_string(llval));
995 bcx.fcx.lllocals.borrow_mut().insert(binding_info.id, datum);
997 if bcx.sess().opts.debuginfo == FullDebugInfo {
998 debuginfo::create_match_binding_metadata(bcx,
1006 fn compile_guard<'a, 'b>(
1008 guard_expr: &ast::Expr,
1010 m: &'a [Match<'a, 'b>],
1012 chk: &FailureHandler,
1013 has_genuine_default: bool)
1015 debug!("compile_guard(bcx={}, guard_expr={}, m={}, vals={})",
1017 bcx.expr_to_string(guard_expr),
1019 vec_map_to_string(vals, |v| bcx.val_to_string(*v)));
1020 let _indenter = indenter();
1022 let mut bcx = insert_lllocals(bcx, &data.bindings_map, None);
1024 let val = unpack_datum!(bcx, expr::trans(bcx, guard_expr));
1025 let val = val.to_llbool(bcx);
1027 return with_cond(bcx, Not(bcx, val), |bcx| {
1028 // Guard does not match: remove all bindings from the lllocals table
1029 for (_, &binding_info) in data.bindings_map.iter() {
1030 bcx.fcx.lllocals.borrow_mut().remove(&binding_info.id);
1033 // If the default arm is the only one left, move on to the next
1034 // condition explicitly rather than (possibly) falling back to
1036 &JumpToBasicBlock(_) if m.len() == 1 && has_genuine_default => {
1037 Br(bcx, chk.handle_fail());
1040 compile_submatch(bcx, m, vals, chk, has_genuine_default);
1047 fn compile_submatch<'a, 'b>(
1049 m: &'a [Match<'a, 'b>],
1051 chk: &FailureHandler,
1052 has_genuine_default: bool) {
1053 debug!("compile_submatch(bcx={}, m={}, vals={})",
1056 vec_map_to_string(vals, |v| bcx.val_to_string(*v)));
1057 let _indenter = indenter();
1058 let _icx = push_ctxt("match::compile_submatch");
1061 if chk.is_fallible() {
1062 Br(bcx, chk.handle_fail());
1066 if m[0].pats.len() == 0u {
1067 let data = &m[0].data;
1068 for &(ref ident, ref value_ptr) in m[0].bound_ptrs.iter() {
1069 let llmatch = data.bindings_map.get(ident).llmatch;
1070 Store(bcx, *value_ptr, llmatch);
1072 match data.arm.guard {
1073 Some(ref guard_expr) => {
1074 bcx = compile_guard(bcx,
1077 m.slice(1, m.len()),
1080 has_genuine_default);
1084 Br(bcx, data.bodycx.llbb);
1088 let col = pick_col(m);
1089 let val = vals[col];
1091 if has_nested_bindings(m, col) {
1092 let expanded = expand_nested_bindings(bcx, m, col, val);
1093 compile_submatch_continue(bcx,
1094 expanded.as_slice(),
1099 has_genuine_default)
1101 compile_submatch_continue(bcx, m, vals, chk, col, val, has_genuine_default)
1105 fn compile_submatch_continue<'a, 'b>(
1106 mut bcx: &'b Block<'b>,
1107 m: &'a [Match<'a, 'b>],
1109 chk: &FailureHandler,
1112 has_genuine_default: bool) {
1114 let tcx = bcx.tcx();
1115 let dm = &tcx.def_map;
1117 let vals_left = Vec::from_slice(vals.slice(0u, col)).append(vals.slice(col + 1u, vals.len()));
1118 let ccx = bcx.fcx.ccx;
1120 // Find a real id (we're adding placeholder wildcard patterns, but
1121 // each column is guaranteed to have at least one real pattern)
1122 let pat_id = m.iter().map(|br| br.pats.get(col).id).find(|&id| id != 0).unwrap_or(0);
1124 let left_ty = if pat_id == 0 {
1127 node_id_type(bcx, pat_id)
1130 let mcx = check_match::MatchCheckCtxt { tcx: bcx.tcx() };
1131 let adt_vals = if any_irrefutable_adt_pat(bcx, m, col) {
1132 let repr = adt::represent_type(bcx.ccx(), left_ty);
1133 let arg_count = adt::num_args(&*repr, 0);
1134 let field_vals: Vec<ValueRef> = std::iter::range(0, arg_count).map(|ix|
1135 adt::trans_field_ptr(bcx, &*repr, val, 0, ix)
1138 } else if any_uniq_pat(m, col) || any_region_pat(m, col) {
1139 Some(vec!(Load(bcx, val)))
1145 Some(field_vals) => {
1146 let pats = enter_match(bcx, dm, m, col, val, |pats|
1147 check_match::specialize(&mcx, pats, &check_match::Single, col, field_vals.len())
1149 let vals = field_vals.append(vals_left.as_slice());
1150 compile_submatch(bcx, pats.as_slice(), vals.as_slice(), chk, has_genuine_default);
1156 // Decide what kind of branch we need
1157 let opts = get_options(bcx, m, col);
1158 debug!("options={:?}", opts);
1159 let mut kind = no_branch;
1160 let mut test_val = val;
1161 debug!("test_val={}", bcx.val_to_string(test_val));
1162 if opts.len() > 0u {
1163 match *opts.get(0) {
1164 var(_, ref repr, _) => {
1165 let (the_kind, val_opt) = adt::trans_switch(bcx, &**repr, val);
1167 for &tval in val_opt.iter() { test_val = tval; }
1170 test_val = load_if_immediate(bcx, val, left_ty);
1171 kind = if ty::type_is_integral(left_ty) { switch }
1175 test_val = Load(bcx, val);
1179 let (_, len) = tvec::get_base_and_len(bcx, val, left_ty);
1181 kind = compare_vec_len;
1185 for o in opts.iter() {
1187 range(_, _) => { kind = compare; break }
1191 let else_cx = match kind {
1192 no_branch | single => bcx,
1193 _ => bcx.fcx.new_temp_block("match_else")
1195 let sw = if kind == switch {
1196 Switch(bcx, test_val, else_cx.llbb, opts.len())
1198 C_int(ccx, 0) // Placeholder for when not using a switch
1201 let defaults = enter_default(else_cx, dm, m, col, val);
1202 let exhaustive = chk.is_infallible() && defaults.len() == 0u;
1203 let len = opts.len();
1205 // Compile subtrees for each option
1206 for (i, opt) in opts.iter().enumerate() {
1207 // In some cases of range and vector pattern matching, we need to
1208 // override the failure case so that instead of failing, it proceeds
1209 // to try more matching. branch_chk, then, is the proper failure case
1210 // for the current conditional branch.
1211 let mut branch_chk = None;
1212 let mut opt_cx = else_cx;
1213 if !exhaustive || i+1 < len {
1214 opt_cx = bcx.fcx.new_temp_block("match_case");
1216 single => Br(bcx, opt_cx.llbb),
1218 match trans_opt(bcx, opt) {
1219 single_result(r) => {
1221 llvm::LLVMAddCase(sw, r.val, opt_cx.llbb);
1227 "in compile_submatch, expected \
1228 trans_opt to return a single_result")
1232 compare | compare_vec_len => {
1233 let t = if kind == compare {
1236 ty::mk_uint() // vector length
1238 let Result {bcx: after_cx, val: matches} = {
1239 match trans_opt(bcx, opt) {
1240 single_result(Result {bcx, val}) => {
1241 compare_values(bcx, test_val, val, t)
1243 lower_bound(Result {bcx, val}) => {
1244 compare_scalar_types(bcx, test_val, val, t, ast::BiGe)
1246 range_result(Result {val: vbegin, ..},
1247 Result {bcx, val: vend}) => {
1248 let Result {bcx, val: llge} =
1249 compare_scalar_types(
1251 vbegin, t, ast::BiGe);
1252 let Result {bcx, val: llle} =
1253 compare_scalar_types(
1254 bcx, test_val, vend,
1256 Result::new(bcx, And(bcx, llge, llle))
1260 bcx = fcx.new_temp_block("compare_next");
1262 // If none of the sub-cases match, and the current condition
1263 // is guarded or has multiple patterns, move on to the next
1264 // condition, if there is any, rather than falling back to
 // the default arm.
1266 let guarded = m[i].data.arm.guard.is_some();
1267 let multi_pats = m[i].pats.len() > 1;
1268 if i + 1 < len && (guarded || multi_pats || kind == compare_vec_len) {
1269 branch_chk = Some(JumpToBasicBlock(bcx.llbb));
1271 CondBr(after_cx, matches, opt_cx.llbb, bcx.llbb);
1275 } else if kind == compare || kind == compare_vec_len {
1276 Br(bcx, else_cx.llbb);
1280 let mut unpacked = Vec::new();
1282 var(disr_val, ref repr, _) => {
1283 let ExtractedBlock {vals: argvals, bcx: new_bcx} =
1284 extract_variant_args(opt_cx, &**repr, disr_val, val);
1285 size = argvals.len();
1289 vec_len(n, vt, _) => {
1290 let (n, slice) = match vt {
1291 vec_len_ge(i) => (n + 1u, Some(i)),
1292 vec_len_eq => (n, None)
1294 let args = extract_vec_elems(opt_cx, pat_id, n,
1296 size = args.vals.len();
1297 unpacked = args.vals.clone();
1300 lit(_) | range(_, _) => ()
1302 let opt_ms = enter_opt(opt_cx, pat_id, dm, m, opt, col, size, val);
1303 let opt_vals = unpacked.append(vals_left.as_slice());
1307 compile_submatch(opt_cx,
1309 opt_vals.as_slice(),
1311 has_genuine_default)
1313 Some(branch_chk) => {
1314 compile_submatch(opt_cx,
1316 opt_vals.as_slice(),
1318 has_genuine_default)
1323 // Compile the fall-through case, if any
1324 if !exhaustive && kind != single {
1325 if kind == compare || kind == compare_vec_len {
1326 Br(bcx, else_cx.llbb);
1329 // If there is only one default arm left, move on to the next
1330 // condition explicitly rather than (eventually) falling back to
1331 // the last default arm.
1332 &JumpToBasicBlock(_) if defaults.len() == 1 && has_genuine_default => {
1333 Br(else_cx, chk.handle_fail());
1336 compile_submatch(else_cx,
1337 defaults.as_slice(),
1338 vals_left.as_slice(),
1340 has_genuine_default);
1346 pub fn trans_match<'a>(
1348 match_expr: &ast::Expr,
1349 discr_expr: &ast::Expr,
1353 let _icx = push_ctxt("match::trans_match");
1354 trans_match_inner(bcx, match_expr.id, discr_expr, arms, dest)
1357 fn create_bindings_map(bcx: &Block, pat: Gc<ast::Pat>) -> BindingsMap {
1358 // Create the bindings map, which is a mapping from each binding name
1359 // to an alloca() that will be the value for that local variable.
1360 // Note that we use the names because each binding will have many ids
1361 // from the various alternatives.
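    // For example (illustrative), in an arm like `V1(x) | V2(x) => ...` the
    // two `x` patterns have distinct node ids but a single name, and hence a
    // single entry in the bindings map.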
1362 let ccx = bcx.ccx();
1363 let tcx = bcx.tcx();
1364 let mut bindings_map = HashMap::new();
1365 pat_bindings(&tcx.def_map, &*pat, |bm, p_id, span, path1| {
1366 let ident = path1.node;
1367 let variable_ty = node_id_type(bcx, p_id);
1368 let llvariable_ty = type_of::type_of(ccx, variable_ty);
1369 let tcx = bcx.tcx();
1375 if !ty::type_moves_by_default(tcx, variable_ty) => {
1376 llmatch = alloca(bcx,
1377 llvariable_ty.ptr_to(),
1379 trmode = TrByCopy(alloca(bcx,
1381 bcx.ident(ident).as_slice()));
1383 ast::BindByValue(_) => {
1384 // in this case, the final type of the variable will be T,
1385 // but during matching we need to store a *T as explained
 // above.
1387 llmatch = alloca(bcx,
1388 llvariable_ty.ptr_to(),
1389 bcx.ident(ident).as_slice());
1392 ast::BindByRef(_) => {
1393 llmatch = alloca(bcx,
1395 bcx.ident(ident).as_slice());
1399 bindings_map.insert(ident, BindingInfo {
1407 return bindings_map;
1410 fn trans_match_inner<'a>(scope_cx: &'a Block<'a>,
1411 match_id: ast::NodeId,
1412 discr_expr: &ast::Expr,
1414 dest: Dest) -> &'a Block<'a> {
1415 let _icx = push_ctxt("match::trans_match_inner");
1416 let fcx = scope_cx.fcx;
1417 let mut bcx = scope_cx;
1418 let tcx = bcx.tcx();
1420 let discr_datum = unpack_datum!(bcx, expr::trans_to_lvalue(bcx, discr_expr,
1422 if bcx.unreachable.get() {
1426 let t = node_id_type(bcx, discr_expr.id);
1428 if ty::type_is_empty(tcx, t) {
1429 // Special case for empty types
1430 let fail_cx = Cell::new(None);
1431 let fail_handler = box DynamicFailureHandler {
1433 sp: discr_expr.span,
1434 msg: InternedString::new("scrutinizing value that can't \
1438 DynamicFailureHandlerClass(fail_handler)
1444 let arm_datas: Vec<ArmData> = arms.iter().map(|arm| ArmData {
1445 bodycx: fcx.new_id_block("case_body", arm.body.id),
1447 bindings_map: create_bindings_map(bcx, *arm.pats.get(0))
1450 let mut matches = Vec::new();
1451 for arm_data in arm_datas.iter() {
1452 matches.extend(arm_data.arm.pats.iter().map(|p| Match {
1455 bound_ptrs: Vec::new(),
1459 // `compile_submatch` works on one column of arm patterns at a time and
1460 // then peels that column off. So as we progress, it may become
1461 // impossible to tell whether we have a genuine default arm, i.e.
1462 // `_ => foo`, or not. Sometimes it is important to know that in order
1463 // to decide whether to move on to the next condition or to fall back
1464 // to the default arm.
1465 let has_default = arms.last().map_or(false, |arm| {
1467 && arm.pats.last().unwrap().node == ast::PatWild
1470 compile_submatch(bcx, matches.as_slice(), [discr_datum.val], &chk, has_default);
1472 let mut arm_cxs = Vec::new();
1473 for arm_data in arm_datas.iter() {
1474 let mut bcx = arm_data.bodycx;
1476 // insert bindings into the lllocals map and add cleanups
1477 let cs = fcx.push_custom_cleanup_scope();
1478 bcx = insert_lllocals(bcx, &arm_data.bindings_map, Some(cleanup::CustomScope(cs)));
1479 bcx = expr::trans_into(bcx, &*arm_data.arm.body, dest);
1480 bcx = fcx.pop_and_trans_custom_cleanup_scope(bcx, cs);
1484 bcx = scope_cx.fcx.join_blocks(match_id, arm_cxs.as_slice());
1488 enum IrrefutablePatternBindingMode {
1489 // Stores the association between node ID and LLVM value in `lllocals`.
1491 // Stores the association between node ID and LLVM value in `llargs`.
1495 pub fn store_local<'a>(bcx: &'a Block<'a>,
1499 * Generates code for a local variable declaration like
1500 * `let <pat>;` or `let <pat> = <opt_init_expr>`.
1502 let _icx = push_ctxt("match::store_local");
1504 let tcx = bcx.tcx();
1505 let pat = local.pat;
1506 let opt_init_expr = local.init;
1508 return match opt_init_expr {
1509 Some(init_expr) => {
1510 // Optimize the "let x = expr" case. This just writes
1511 // the result of evaluating `expr` directly into the alloca
1512 // for `x`. Often the general path results in similar or the
1513 // same code post-optimization, but not always. In particular,
1514 // in unsafe code, you can have expressions like
1516 // let x = intrinsics::uninit();
1518 // In such cases, the more general path is unsafe, because
1519 // it assumes it is matching against a valid value.
1520 match simple_identifier(&*pat) {
1522 let var_scope = cleanup::var_scope(tcx, local.id);
1523 return mk_binding_alloca(
1524 bcx, pat.id, ident, BindLocal, var_scope, (),
1525 |(), bcx, v, _| expr::trans_into(bcx, &*init_expr,
1534 unpack_datum!(bcx, expr::trans_to_lvalue(bcx, &*init_expr, "let"));
1535 if ty::type_is_bot(expr_ty(bcx, &*init_expr)) {
1536 create_dummy_locals(bcx, pat)
1538 if bcx.sess().asm_comments() {
1539 add_comment(bcx, "creating zeroable ref llval");
1541 let var_scope = cleanup::var_scope(tcx, local.id);
1542 bind_irrefutable_pat(bcx, pat, init_datum.val, BindLocal, var_scope)
1546 create_dummy_locals(bcx, pat)
1550 fn create_dummy_locals<'a>(mut bcx: &'a Block<'a>,
1553 // create dummy memory for the variables if we have no
1554 // value to store into them immediately
1555 let tcx = bcx.tcx();
1556 pat_bindings(&tcx.def_map, &*pat, |_, p_id, _, path1| {
1557 let scope = cleanup::var_scope(tcx, p_id);
1558 bcx = mk_binding_alloca(
1559 bcx, p_id, &path1.node, BindLocal, scope, (),
1560 |(), bcx, llval, ty| { zero_mem(bcx, llval, ty); bcx });
1566 pub fn store_arg<'a>(mut bcx: &'a Block<'a>,
1569 arg_scope: cleanup::ScopeId)
1572 * Generates code for argument patterns like `fn foo(<pat>: T)`.
1573 * Creates entries in the `llargs` map for each of the bindings
1578 * - `pat` is the argument pattern
1579 * - `llval` is a pointer to the argument value (in other words,
1580 * if the argument type is `T`, then `llval` is a `T*`). In some
1581 * cases, this code may zero out the memory `llval` points at.
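 *
 * For example (illustrative):
 *
 *     fn foo((x, y): (int, int)) { ... }
 *
 * creates `llargs` entries for both `x` and `y`, each bound to its half of
 * the tuple argument.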
1584 let _icx = push_ctxt("match::store_arg");
1586 match simple_identifier(&*pat) {
1588 // Generate nicer LLVM for the common case of a plain identifier
 // argument pattern, like `x: T`.
1590 let arg_ty = node_id_type(bcx, pat.id);
1591 if type_of::arg_is_indirect(bcx.ccx(), arg_ty)
1592 && bcx.sess().opts.debuginfo != FullDebugInfo {
1593 // Don't copy an indirect argument to an alloca; the caller
1594 // already put it in a temporary alloca and gave it up, unless
1595 // we emit extra-debug-info, which requires local allocas :(.
1596 let arg_val = arg.add_clean(bcx.fcx, arg_scope);
1597 bcx.fcx.llargs.borrow_mut()
1598 .insert(pat.id, Datum::new(arg_val, arg_ty, Lvalue));
1602 bcx, pat.id, ident, BindArgument, arg_scope, arg,
1603 |arg, bcx, llval, _| arg.store_to(bcx, llval))
1608 // General path. Copy out the values that are used in the
 // pattern.
1610 let arg = unpack_datum!(
1611 bcx, arg.to_lvalue_datum_in_scope(bcx, "__arg", arg_scope));
1612 bind_irrefutable_pat(bcx, pat, arg.val,
1613 BindArgument, arg_scope)
1618 fn mk_binding_alloca<'a,A>(bcx: &'a Block<'a>,
1621 binding_mode: IrrefutablePatternBindingMode,
1622 cleanup_scope: cleanup::ScopeId,
1624 populate: |A, &'a Block<'a>, ValueRef, ty::t| -> &'a Block<'a>)
1626 let var_ty = node_id_type(bcx, p_id);
1628 // Allocate memory on the stack for the binding.
1629 let llval = alloc_ty(bcx, var_ty, bcx.ident(*ident).as_slice());
1631 // Subtle: be sure that we *populate* the memory *before*
1632 // we schedule the cleanup.
1633 let bcx = populate(arg, bcx, llval, var_ty);
1634 bcx.fcx.schedule_drop_mem(cleanup_scope, llval, var_ty);
1636 // Now that memory is initialized and has cleanup scheduled,
1637 // create the datum and insert into the local variable map.
1638 let datum = Datum::new(llval, var_ty, Lvalue);
1639 let mut llmap = match binding_mode {
1640 BindLocal => bcx.fcx.lllocals.borrow_mut(),
1641 BindArgument => bcx.fcx.llargs.borrow_mut()
1643 llmap.insert(p_id, datum);
1647 fn bind_irrefutable_pat<'a>(
1651 binding_mode: IrrefutablePatternBindingMode,
1652 cleanup_scope: cleanup::ScopeId)
1655 * A simple version of the pattern matching code that only handles
1656 * irrefutable patterns. This is used in let/argument patterns,
1657 * not in match statements. Unifying this code with the code above
1658 * sounds nice, but in practice it produces very inefficient code,
1659 * since the match code is so much more general. In most cases,
1660 * LLVM is able to optimize the code, but it causes longer compile
1661 * times and makes the generated code nigh impossible to read.
1664 * - bcx: starting basic block context
1665 * - pat: the irrefutable pattern being matched.
1666 * - val: the value being matched -- must be an lvalue (by ref, with cleanup)
1667 * - binding_mode: is this for an argument or a local variable?
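 *
 * For example (illustrative), a `let` such as
 *
 *     let (x, ref y) = make_pair();
 *
 * is compiled through this function rather than through the general
 * `compile_submatch()` machinery.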
1670 debug!("bind_irrefutable_pat(bcx={}, pat={}, binding_mode={:?})",
1672 pat.repr(bcx.tcx()),
1675 if bcx.sess().asm_comments() {
1676 add_comment(bcx, format!("bind_irrefutable_pat(pat={})",
1677 pat.repr(bcx.tcx())).as_slice());
1680 let _indenter = indenter();
1682 let _icx = push_ctxt("match::bind_irrefutable_pat");
1684 let tcx = bcx.tcx();
1685 let ccx = bcx.ccx();
1687 ast::PatIdent(pat_binding_mode, ref path1, inner) => {
1688 if pat_is_binding(&tcx.def_map, &*pat) {
1689 // Allocate the stack slot where the value of this
1690 // binding will live and place it into the appropriate
 // map.
1692 bcx = mk_binding_alloca(
1693 bcx, pat.id, &path1.node, binding_mode, cleanup_scope, (),
1694 |(), bcx, llval, ty| {
1695 match pat_binding_mode {
1696 ast::BindByValue(_) => {
1697 // By value binding: move the value that `val`
1698 // points at into the binding's stack slot.
1699 let d = Datum::new(val, ty, Lvalue);
1700 d.store_to(bcx, llval)
1703 ast::BindByRef(_) => {
1704 // By ref binding: the value of the variable
1705 // is the pointer `val` itself.
1706 Store(bcx, val, llval);
1713 for &inner_pat in inner.iter() {
1714 bcx = bind_irrefutable_pat(bcx, inner_pat, val,
1715 binding_mode, cleanup_scope);
1718 ast::PatEnum(_, ref sub_pats) => {
1719 let opt_def = bcx.tcx().def_map.borrow().find_copy(&pat.id);
1721 Some(def::DefVariant(enum_id, var_id, _)) => {
1722 let repr = adt::represent_node(bcx, pat.id);
1723 let vinfo = ty::enum_variant_with_id(ccx.tcx(),
1726 let args = extract_variant_args(bcx,
1730 for sub_pat in sub_pats.iter() {
1731 for (i, argval) in args.vals.iter().enumerate() {
1732 bcx = bind_irrefutable_pat(bcx, *sub_pat.get(i),
1733 *argval, binding_mode,
1738 Some(def::DefFn(..)) |
1739 Some(def::DefStruct(..)) => {
1742 // This is a unit-like struct. Nothing to do here.
1744 Some(ref elems) => {
1745 // This is the tuple struct case.
1746 let repr = adt::represent_node(bcx, pat.id);
1747 for (i, elem) in elems.iter().enumerate() {
1748 let fldptr = adt::trans_field_ptr(bcx, &*repr,
1750 bcx = bind_irrefutable_pat(bcx, *elem,
1751 fldptr, binding_mode,
1757 Some(def::DefStatic(_, false)) => {
1760 // Nothing to do here.
1764 ast::PatStruct(_, ref fields, _) => {
1765 let tcx = bcx.tcx();
1766 let pat_ty = node_id_type(bcx, pat.id);
1767 let pat_repr = adt::represent_type(bcx.ccx(), pat_ty);
1768 expr::with_field_tys(tcx, pat_ty, Some(pat.id), |discr, field_tys| {
1769 for f in fields.iter() {
1770 let ix = ty::field_idx_strict(tcx, f.ident.name, field_tys);
1771 let fldptr = adt::trans_field_ptr(bcx, &*pat_repr, val,
1773 bcx = bind_irrefutable_pat(bcx, f.pat, fldptr,
1774 binding_mode, cleanup_scope);
1778 ast::PatTup(ref elems) => {
1779 let repr = adt::represent_node(bcx, pat.id);
1780 for (i, elem) in elems.iter().enumerate() {
1781 let fldptr = adt::trans_field_ptr(bcx, &*repr, val, 0, i);
1782 bcx = bind_irrefutable_pat(bcx, *elem, fldptr,
1783 binding_mode, cleanup_scope);
1786 ast::PatBox(inner) => {
1787 let llbox = Load(bcx, val);
1788 bcx = bind_irrefutable_pat(bcx, inner, llbox, binding_mode, cleanup_scope);
1790 ast::PatRegion(inner) => {
1791 let loaded_val = Load(bcx, val);
1792 bcx = bind_irrefutable_pat(bcx, inner, loaded_val, binding_mode, cleanup_scope);
1794 ast::PatVec(ref before, ref slice, ref after) => {
1795 let extracted = extract_vec_elems(
1796 bcx, pat.id, before.len() + 1u + after.len(),
1797 slice.map(|_| before.len()), val
1800 .iter().map(|v| Some(*v))
1801 .chain(Some(*slice).move_iter())
1802 .chain(after.iter().map(|v| Some(*v)))
1803 .zip(extracted.vals.iter())
1804 .fold(bcx, |bcx, (inner, elem)| {
1805 inner.map_or(bcx, |inner| {
1806 bind_irrefutable_pat(bcx, inner, *elem, binding_mode, cleanup_scope)
1810 ast::PatMac(..) => {
1811 bcx.sess().span_bug(pat.span, "unexpanded macro");
1813 ast::PatWild | ast::PatWildMulti | ast::PatLit(_) | ast::PatRange(_, _) => ()