1 // Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT
2 // file at the top-level directory of this distribution and at
3 // http://rust-lang.org/COPYRIGHT.
5 // Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
6 // http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
7 // <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
8 // option. This file may not be copied, modified, or distributed
9 // except according to those terms.
13 * # Compilation of match statements
15 * I will endeavor to explain the code as best I can. I have only a loose
16 * understanding of some parts of it.
20 * The basic state of the code is maintained in an array `m` of `Match`
21 * objects. Each `Match` describes some list of patterns, all of which must
22 * match against the current list of values. If those patterns match, then
23 * the arm listed in the match is the correct arm. A given arm may have
24 * multiple corresponding match entries, one for each alternative that
25 * remains. As we proceed these sets of matches are adjusted by the various
26 * `enter_XXX()` functions, each of which adjusts the set of options given
27 * some information about the value which has been matched.
29 * So, initially, there is one value and N matches, each of which have one
30 * constituent pattern. N here is usually the number of arms but may be
31 * greater, if some arms have multiple alternatives. For example, here:
* enum Foo { A, B(int), C(uint, uint) }
* match foo {
*     A => ...,
*     B(x) if x > 10 => ...,
*     C(1u, 2) => ...,
*     C(_) => ...
* }
41 * The value would be `foo`. There would be four matches, each of which
42 * contains one pattern (and, in one case, a guard). We could collect the
43 * various options and then compile the code for the case where `foo` is an
44 * `A`, a `B`, and a `C`. When we generate the code for `C`, we would (1)
45 * drop the two matches that do not match a `C` and (2) expand the other two
46 * into two patterns each. In the first case, the two patterns would be `1u`
* and `2`, and in the second case the `_` pattern would be expanded into
48 * `_` and `_`. The two values are of course the arguments to `C`.
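*
* As a rough sketch of the matrix manipulation (illustrative only --
* `Pat` and `specialize_variant` are invented stand-ins, not the
* `Match`/`enter_XXX()` machinery defined below), specializing the
* match matrix for one enum-variant constructor looks something like:
*
*     #[deriving(Clone)]
*     enum Pat {
*         Wild,                     // `_`
*         Variant(uint, Vec<Pat>),  // variant tag plus sub-patterns
*     }
*
*     // Keep the rows whose first pattern could match variant `tag`
*     // (of arity `arity`), replacing that pattern with its
*     // sub-patterns; a wildcard expands into `arity` wildcards.
*     fn specialize_variant(rows: &[Vec<Pat>], tag: uint, arity: uint)
*                           -> Vec<Vec<Pat>> {
*         rows.iter().filter_map(|row| {
*             match *row.get(0) {
*                 Wild => {
*                     let mut new_row = Vec::from_elem(arity, Wild);
*                     new_row.push_all(row.slice_from(1));
*                     Some(new_row)
*                 }
*                 Variant(t, ref subs) if t == tag => {
*                     let mut new_row = subs.clone();
*                     new_row.push_all(row.slice_from(1));
*                     Some(new_row)
*                 }
*                 _ => None // this row cannot match the variant
*             }
*         }).collect()
*     }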
50 * Here is a quick guide to the various functions:
* - `compile_submatch()`: The main workhorse. It takes a list of values and
53 * a list of matches and finds the various possibilities that could occur.
55 * - `enter_XXX()`: modifies the list of matches based on some information
56 * about the value that has been matched. For example,
57 * `enter_rec_or_struct()` adjusts the values given that a record or struct
58 * has been matched. This is an infallible pattern, so *all* of the matches
59 * must be either wildcards or record/struct patterns. `enter_opt()`
60 * handles the fallible cases, and it is correspondingly more complex.
64 * We store information about the bound variables for each arm as part of the
65 * per-arm `ArmData` struct. There is a mapping from identifiers to
66 * `BindingInfo` structs. These structs contain the mode/id/type of the
67 * binding, but they also contain up to two LLVM values, called `llmatch` and
68 * `llbinding` respectively (the `llbinding`, as will be described shortly, is
69 * optional and only present for by-value bindings---therefore it is bundled
70 * up as part of the `TransBindingMode` type). Both point at allocas.
72 * The `llmatch` binding always stores a pointer into the value being matched
73 * which points at the data for the binding. If the value being matched has
74 * type `T`, then, `llmatch` will point at an alloca of type `T*` (and hence
75 * `llmatch` has type `T**`). So, if you have a pattern like:
79 * match (a, b) { (ref c, d) => { ... } }
81 * For `c` and `d`, we would generate allocas of type `C*` and `D*`
82 * respectively. These are called the `llmatch`. As we match, when we come
83 * up against an identifier, we store the current pointer into the
84 * corresponding alloca.
86 * In addition, for each by-value binding (copy or move), we will create a
87 * second alloca (`llbinding`) that will hold the final value. In this
88 * example, that means that `d` would have this second alloca of type `D` (and
89 * hence `llbinding` has type `D*`).
91 * Once a pattern is completely matched, and assuming that there is no guard
92 * pattern, we will branch to a block that leads to the body itself. For any
93 * by-value bindings, this block will first load the ptr from `llmatch` (the
94 * one of type `D*`) and copy/move the value into `llbinding` (the one of type
95 * `D`). The second alloca then becomes the value of the local variable. For
* by ref bindings, the value of the local variable is simply the first
* alloca.
99 * So, for the example above, we would generate a setup kind of like this:
* +---------------------------------------------+
* | llmatch_c = (addr of first half of tuple)   |
* | llmatch_d = (addr of second half of tuple)  |
* +---------------------------------------------+
110 * +--------------------------------------+
* | *llbinding_d = **llmatch_d           |
112 * +--------------------------------------+
114 * If there is a guard, the situation is slightly different, because we must
115 * execute the guard code. Moreover, we need to do so once for each of the
116 * alternatives that lead to the arm, because if the guard fails, they may
117 * have different points from which to continue the search. Therefore, in that
118 * case, we generate code that looks more like:
* +---------------------------------------------+
* | llmatch_c = (addr of first half of tuple)   |
* | llmatch_d = (addr of second half of tuple)  |
* +---------------------------------------------+
129 * +-------------------------------------------------+
* | *llbinding_d = **llmatch_d                      |
131 * | check condition |
132 * | if false { free *llbinding_d, goto next case } |
133 * | if true { goto body } |
134 * +-------------------------------------------------+
136 * The handling for the cleanups is a bit... sensitive. Basically, the body
137 * is the one that invokes `add_clean()` for each binding. During the guard
138 * evaluation, we add temporary cleanups and revoke them after the guard is
139 * evaluated (it could fail, after all). Presuming the guard fails, we drop
140 * the various values we copied explicitly. Note that guards and moves are
141 * just plain incompatible.
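*
* For instance (a source-level illustration, not code from this
* module), in:
*
*     match pair {
*         (x, y) if x > y => { ... }
*         _ => { ... }
*     }
*
* the by-value bindings `x` and `y` are copied into their `llbinding`
* allocas before the guard runs; if `x > y` is false, those copies are
* dropped explicitly and the search resumes at the next case.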
143 * Some relevant helper functions that manage bindings:
144 * - `create_bindings_map()`
145 * - `store_non_ref_bindings()`
146 * - `insert_lllocals()`
149 * ## Notes on vector pattern matching.
151 * Vector pattern matching is surprisingly tricky. The problem is that
152 * the structure of the vector isn't fully known, and slice matches
153 * can be done on subparts of it.
155 * The way that vector pattern matches are dealt with, then, is as
156 * follows. First, we make the actual condition associated with a
157 * vector pattern simply a vector length comparison. So the pattern
158 * [1, .. x] gets the condition "vec len >= 1", and the pattern
159 * [.. x] gets the condition "vec len >= 0". The problem here is that
160 * having the condition "vec len >= 1" hold clearly does not mean that
161 * only a pattern that has exactly that condition will match. This
162 * means that it may well be the case that a condition holds, but none
163 * of the patterns matching that condition match; to deal with this,
164 * when doing vector length matches, we have match failures proceed to
165 * the next condition to check.
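*
* As a minimal sketch of the implied length test (`VecCond` and
* `vec_condition` are invented for illustration; `get_options()` below
* is what actually builds the corresponding vec_len Opts):
*
*     enum VecCond {
*         LenEq(uint), // fixed-length pattern such as [a, b]
*         LenGe(uint), // slice pattern such as [a, .. rest]
*     }
*
*     fn vec_condition(before: uint, has_slice: bool, after: uint)
*                      -> VecCond {
*         if has_slice {
*             LenGe(before + after) // e.g. [1, .. x] => LenGe(1)
*         } else {
*             LenEq(before)         // e.g. [a, b]    => LenEq(2)
*         }
*     }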
167 * There are a couple more subtleties to deal with. While the "actual"
168 * condition associated with vector length tests is simply a test on
169 * the vector length, the actual vec_len Opt entry contains more
170 * information used to restrict which matches are associated with it.
171 * So that all matches in a submatch are matching against the same
172 * values from inside the vector, they are split up by how many
173 * elements they match at the front and at the back of the vector. In
174 * order to make sure that arms are properly checked in order, even
175 * with the overmatching conditions, each vec_len Opt entry is
176 * associated with a range of matches.
177 * Consider the following:
181 * [1, 2, 2, .. _] => 1,
182 * [1, 2, 3, .. _] => 2,
186 * The proper arm to match is arm 2, but arms 0 and 3 both have the
187 * condition "len >= 2". If arm 3 was lumped in with arm 0, then the
188 * wrong branch would be taken. Instead, vec_len Opts are associated
189 * with a contiguous range of matches that have the same "shape".
* This is sort of ugly and requires a bunch of special handling of
* the failure cases.
195 #![allow(non_camel_case_types)]
198 use driver::config::FullDebugInfo;
199 use lib::llvm::{llvm, ValueRef, BasicBlockRef};
200 use middle::const_eval;
201 use middle::lang_items::{UniqStrEqFnLangItem, StrEqFnLangItem};
202 use middle::pat_util::*;
203 use middle::resolve::DefMap;
204 use middle::trans::adt;
205 use middle::trans::base::*;
206 use middle::trans::build::*;
207 use middle::trans::callee;
208 use middle::trans::cleanup;
209 use middle::trans::cleanup::CleanupMethods;
210 use middle::trans::common::*;
211 use middle::trans::consts;
212 use middle::trans::controlflow;
213 use middle::trans::datum;
214 use middle::trans::datum::*;
215 use middle::trans::expr::Dest;
216 use middle::trans::expr;
217 use middle::trans::glue;
218 use middle::trans::tvec;
219 use middle::trans::type_of;
220 use middle::trans::debuginfo;
222 use util::common::indenter;
223 use util::ppaux::{Repr, vec_map_to_str};
225 use collections::HashMap;
229 use syntax::ast::Ident;
230 use syntax::ast_util::path_to_ident;
231 use syntax::ast_util;
232 use syntax::codemap::{Span, DUMMY_SP};
233 use syntax::parse::token::InternedString;
// An option identifying a literal: either a unit-like struct or an
// expression.
238 UnitLikeStructLit(ast::NodeId), // the node ID of the pattern
240 ConstLit(ast::DefId), // the def ID of the constant
vec_len_ge(/* length of prefix */ uint)
// An option identifying a branch (either a literal, an enum variant or a
// range).
253 var(ty::Disr, Rc<adt::Repr>),
254 range(@ast::Expr, @ast::Expr),
255 vec_len(/* length */ uint, VecLenOpt, /*range of matches*/(uint, uint))
258 fn lit_to_expr(tcx: &ty::ctxt, a: &Lit) -> @ast::Expr {
260 ExprLit(existing_a_expr) => existing_a_expr,
261 ConstLit(a_const) => const_eval::lookup_const_by_id(tcx, a_const).unwrap(),
262 UnitLikeStructLit(_) => fail!("lit_to_expr: unexpected struct lit"),
266 fn opt_eq(tcx: &ty::ctxt, a: &Opt, b: &Opt) -> bool {
268 (&lit(UnitLikeStructLit(a)), &lit(UnitLikeStructLit(b))) => a == b,
269 (&lit(a), &lit(b)) => {
270 let a_expr = lit_to_expr(tcx, &a);
271 let b_expr = lit_to_expr(tcx, &b);
272 match const_eval::compare_lit_exprs(tcx, a_expr, b_expr) {
273 Some(val1) => val1 == 0,
None => fail!("compare_lit_exprs: type mismatch"),
277 (&range(a1, a2), &range(b1, b2)) => {
278 let m1 = const_eval::compare_lit_exprs(tcx, a1, b1);
279 let m2 = const_eval::compare_lit_exprs(tcx, a2, b2);
281 (Some(val1), Some(val2)) => (val1 == 0 && val2 == 0),
_ => fail!("compare_lit_exprs: type mismatch"),
285 (&var(a, _), &var(b, _)) => a == b,
286 (&vec_len(a1, a2, _), &vec_len(b1, b2, _)) =>
287 a1 == b1 && a2 == b2,
292 pub enum opt_result<'a> {
293 single_result(Result<'a>),
294 lower_bound(Result<'a>),
295 range_result(Result<'a>, Result<'a>),
298 fn trans_opt<'a>(bcx: &'a Block<'a>, o: &Opt) -> opt_result<'a> {
299 let _icx = push_ctxt("match::trans_opt");
303 lit(ExprLit(lit_expr)) => {
304 let lit_datum = unpack_datum!(bcx, expr::trans(bcx, lit_expr));
305 let lit_datum = lit_datum.assert_rvalue(bcx); // literals are rvalues
306 let lit_datum = unpack_datum!(bcx, lit_datum.to_appropriate_datum(bcx));
307 return single_result(Result::new(bcx, lit_datum.val));
309 lit(UnitLikeStructLit(pat_id)) => {
310 let struct_ty = ty::node_id_to_type(bcx.tcx(), pat_id);
311 let datum = datum::rvalue_scratch_datum(bcx, struct_ty, "");
312 return single_result(Result::new(bcx, datum.val));
314 lit(ConstLit(lit_id)) => {
315 let (llval, _) = consts::get_const_val(bcx.ccx(), lit_id);
316 return single_result(Result::new(bcx, llval));
318 var(disr_val, ref repr) => {
319 return adt::trans_case(bcx, &**repr, disr_val);
322 let (l1, _) = consts::const_expr(ccx, l1, true);
323 let (l2, _) = consts::const_expr(ccx, l2, true);
324 return range_result(Result::new(bcx, l1), Result::new(bcx, l2));
326 vec_len(n, vec_len_eq, _) => {
327 return single_result(Result::new(bcx, C_int(ccx, n as int)));
329 vec_len(n, vec_len_ge(_), _) => {
330 return lower_bound(Result::new(bcx, C_int(ccx, n as int)));
335 fn variant_opt(bcx: &Block, pat_id: ast::NodeId) -> Opt {
337 let def = ccx.tcx.def_map.borrow().get_copy(&pat_id);
339 ast::DefVariant(enum_id, var_id, _) => {
340 let variants = ty::enum_variants(ccx.tcx(), enum_id);
341 for v in (*variants).iter() {
343 return var(v.disr_val,
344 adt::represent_node(bcx, pat_id))
350 ast::DefStruct(_) => {
351 return lit(UnitLikeStructLit(pat_id));
354 ccx.sess().bug("non-variant or struct in variant_opt()");
360 enum TransBindingMode {
361 TrByValue(/*llbinding:*/ ValueRef),
366 * Information about a pattern binding:
367 * - `llmatch` is a pointer to a stack slot. The stack slot contains a
368 * pointer into the value being matched. Hence, llmatch has type `T**`
369 * where `T` is the value being matched.
370 * - `trmode` is the trans binding mode
371 * - `id` is the node id of the binding
372 * - `ty` is the Rust type of the binding */
376 trmode: TransBindingMode,
382 type BindingsMap = HashMap<Ident, BindingInfo>;
384 struct ArmData<'a, 'b> {
385 bodycx: &'b Block<'b>,
387 bindings_map: BindingsMap
* If all `pats` are matched, then arm `data` will be executed.
* As we proceed, `bound_ptrs` is filled with pointers to the values to be
* bound; these pointers are stored into the llmatch variables just before
* the `data` arm is executed.
396 struct Match<'a, 'b> {
397 pats: Vec<@ast::Pat>,
398 data: &'a ArmData<'a, 'b>,
399 bound_ptrs: Vec<(Ident, ValueRef)>
402 impl<'a, 'b> Repr for Match<'a, 'b> {
403 fn repr(&self, tcx: &ty::ctxt) -> String {
404 if tcx.sess.verbose() {
// for many programs, this just takes too long to serialize
408 format_strbuf!("{} pats", self.pats.len())
413 fn has_nested_bindings(m: &[Match], col: uint) -> bool {
415 match br.pats.get(col).node {
416 ast::PatIdent(_, _, Some(_)) => return true,
423 fn expand_nested_bindings<'a, 'b>(
425 m: &'a [Match<'a, 'b>],
428 -> Vec<Match<'a, 'b>> {
429 debug!("expand_nested_bindings(bcx={}, m={}, col={}, val={})",
433 bcx.val_to_str(val));
434 let _indenter = indenter();
437 match br.pats.get(col).node {
438 ast::PatIdent(_, ref path, Some(inner)) => {
439 let pats = Vec::from_slice(br.pats.slice(0u, col))
440 .append((vec!(inner))
441 .append(br.pats.slice(col + 1u, br.pats.len())).as_slice());
443 let mut bound_ptrs = br.bound_ptrs.clone();
444 bound_ptrs.push((path_to_ident(path), val));
448 bound_ptrs: bound_ptrs
452 pats: br.pats.clone(),
454 bound_ptrs: br.bound_ptrs.clone()
460 fn assert_is_binding_or_wild(bcx: &Block, p: @ast::Pat) {
461 if !pat_is_binding_or_wild(&bcx.tcx().def_map, p) {
464 format!("expected an identifier pattern but found p: {}",
465 p.repr(bcx.tcx())).as_slice());
469 type enter_pat<'a> = |@ast::Pat|: 'a -> Option<Vec<@ast::Pat>>;
471 fn enter_match<'a, 'b>(
474 m: &'a [Match<'a, 'b>],
478 -> Vec<Match<'a, 'b>> {
479 debug!("enter_match(bcx={}, m={}, col={}, val={})",
483 bcx.val_to_str(val));
484 let _indenter = indenter();
486 m.iter().filter_map(|br| {
487 e(*br.pats.get(col)).map(|sub| {
488 let pats = sub.append(br.pats.slice(0u, col))
489 .append(br.pats.slice(col + 1u, br.pats.len()));
491 let this = *br.pats.get(col);
492 let mut bound_ptrs = br.bound_ptrs.clone();
494 ast::PatIdent(_, ref path, None) => {
495 if pat_is_binding(dm, this) {
496 bound_ptrs.push((path_to_ident(path), val));
505 bound_ptrs: bound_ptrs
511 fn enter_default<'a, 'b>(
514 m: &'a [Match<'a, 'b>],
517 chk: &FailureHandler)
518 -> Vec<Match<'a, 'b>> {
519 debug!("enter_default(bcx={}, m={}, col={}, val={})",
523 bcx.val_to_str(val));
524 let _indenter = indenter();
526 // Collect all of the matches that can match against anything.
527 let matches = enter_match(bcx, dm, m, col, val, |p| {
529 ast::PatWild | ast::PatWildMulti => Some(Vec::new()),
530 ast::PatIdent(_, _, None) if pat_is_binding(dm, p) => Some(Vec::new()),
535 // Ok, now, this is pretty subtle. A "default" match is a match
536 // that needs to be considered if none of the actual checks on the
537 // value being considered succeed. The subtlety lies in that sometimes
538 // identifier/wildcard matches are *not* default matches. Consider:
539 // "match x { _ if something => foo, true => bar, false => baz }".
540 // There is a wildcard match, but it is *not* a default case. The boolean
541 // case on the value being considered is exhaustive. If the case is
542 // exhaustive, then there are no defaults.
544 // We detect whether the case is exhaustive in the following
545 // somewhat kludgy way: if the last wildcard/binding match has a
546 // guard, then by non-redundancy, we know that there aren't any
547 // non guarded matches, and thus by exhaustiveness, we know that
// we don't need any default cases. If the check *is* fallible (so the
// failure handler may actually be reached), then we need the defaults
// anyway.
550 let is_exhaustive = match matches.last() {
551 Some(m) if m.data.arm.guard.is_some() && chk.is_infallible() => true,
555 if is_exhaustive { Vec::new() } else { matches }
558 // <pcwalton> nmatsakis: what does enter_opt do?
559 // <pcwalton> in trans/match
560 // <pcwalton> trans/match.rs is like stumbling around in a dark cave
561 // <nmatsakis> pcwalton: the enter family of functions adjust the set of
562 // patterns as needed
// <nmatsakis> yeah, at some point I kind of achieved some level of
//             understanding
565 // <nmatsakis> anyhow, they adjust the patterns given that something of that
566 // kind has been found
// <nmatsakis> pcwalton: ok, right, so enter_XXX() adjusts the patterns, as I
//             said
569 // <nmatsakis> enter_match() kind of embodies the generic code
570 // <nmatsakis> it is provided with a function that tests each pattern to see
571 // if it might possibly apply and so forth
572 // <nmatsakis> so, if you have a pattern like {a: _, b: _, _} and one like _
573 // <nmatsakis> then _ would be expanded to (_, _)
574 // <nmatsakis> one spot for each of the sub-patterns
// <nmatsakis> enter_opt() is one of the more complex; it covers the fallible
//             cases
577 // <nmatsakis> enter_rec_or_struct() or enter_tuple() are simpler, since they
578 // are infallible patterns
// <nmatsakis> so all patterns must either be records (resp. tuples) or
//             wildcards
582 fn enter_opt<'a, 'b>(
584 m: &'a [Match<'a, 'b>],
589 -> Vec<Match<'a, 'b>> {
590 debug!("enter_opt(bcx={}, m={}, opt={:?}, col={}, val={})",
595 bcx.val_to_str(val));
596 let _indenter = indenter();
599 let dummy = @ast::Pat {id: 0, node: ast::PatWild, span: DUMMY_SP};
601 enter_match(bcx, &tcx.def_map, m, col, val, |p| {
602 let answer = match p.node {
604 ast::PatIdent(_, _, None) if pat_is_const(&tcx.def_map, p) => {
605 let const_def = tcx.def_map.borrow().get_copy(&p.id);
606 let const_def_id = ast_util::def_id_of_def(const_def);
607 if opt_eq(tcx, &lit(ConstLit(const_def_id)), opt) {
613 ast::PatEnum(_, ref subpats) => {
614 if opt_eq(tcx, &variant_opt(bcx, p.id), opt) {
615 // FIXME: Must we clone?
617 None => Some(Vec::from_elem(variant_size, dummy)),
618 Some(ref subpats) => {
619 Some((*subpats).iter().map(|x| *x).collect())
626 ast::PatIdent(_, _, None)
627 if pat_is_variant_or_struct(&tcx.def_map, p) => {
628 if opt_eq(tcx, &variant_opt(bcx, p.id), opt) {
635 if opt_eq(tcx, &lit(ExprLit(l)), opt) { Some(Vec::new()) }
638 ast::PatRange(l1, l2) => {
639 if opt_eq(tcx, &range(l1, l2), opt) { Some(Vec::new()) }
642 ast::PatStruct(_, ref field_pats, _) => {
643 if opt_eq(tcx, &variant_opt(bcx, p.id), opt) {
644 // Look up the struct variant ID.
646 match tcx.def_map.borrow().get_copy(&p.id) {
647 ast::DefVariant(_, found_struct_id, _) => {
648 struct_id = found_struct_id;
651 tcx.sess.span_bug(p.span, "expected enum variant def");
655 // Reorder the patterns into the same order they were
656 // specified in the struct definition. Also fill in
657 // unspecified fields with dummy.
658 let mut reordered_patterns = Vec::new();
659 let r = ty::lookup_struct_fields(tcx, struct_id);
660 for field in r.iter() {
661 match field_pats.iter().find(|p| p.ident.name
663 None => reordered_patterns.push(dummy),
664 Some(fp) => reordered_patterns.push(fp.pat)
667 Some(reordered_patterns)
672 ast::PatVec(ref before, slice, ref after) => {
673 let (lo, hi) = match *opt {
674 vec_len(_, _, (lo, hi)) => (lo, hi),
675 _ => tcx.sess.span_bug(p.span,
676 "vec pattern but not vec opt")
680 Some(slice) if i >= lo && i <= hi => {
681 let n = before.len() + after.len();
682 let this_opt = vec_len(n, vec_len_ge(before.len()),
684 if opt_eq(tcx, &this_opt, opt) {
685 let mut new_before = Vec::new();
686 for pat in before.iter() {
687 new_before.push(*pat);
689 new_before.push(slice);
690 for pat in after.iter() {
691 new_before.push(*pat);
698 None if i >= lo && i <= hi => {
699 let n = before.len();
700 if opt_eq(tcx, &vec_len(n, vec_len_eq, (lo,hi)), opt) {
701 let mut new_before = Vec::new();
702 for pat in before.iter() {
703 new_before.push(*pat);
714 assert_is_binding_or_wild(bcx, p);
715 Some(Vec::from_elem(variant_size, dummy))
723 fn enter_rec_or_struct<'a, 'b>(
726 m: &'a [Match<'a, 'b>],
728 fields: &[ast::Ident],
730 -> Vec<Match<'a, 'b>> {
731 debug!("enter_rec_or_struct(bcx={}, m={}, col={}, val={})",
735 bcx.val_to_str(val));
736 let _indenter = indenter();
738 let dummy = @ast::Pat {id: 0, node: ast::PatWild, span: DUMMY_SP};
739 enter_match(bcx, dm, m, col, val, |p| {
741 ast::PatStruct(_, ref fpats, _) => {
742 let mut pats = Vec::new();
743 for fname in fields.iter() {
744 match fpats.iter().find(|p| p.ident.name == fname.name) {
745 None => pats.push(dummy),
746 Some(pat) => pats.push(pat.pat)
752 assert_is_binding_or_wild(bcx, p);
753 Some(Vec::from_elem(fields.len(), dummy))
759 fn enter_tup<'a, 'b>(
762 m: &'a [Match<'a, 'b>],
766 -> Vec<Match<'a, 'b>> {
767 debug!("enter_tup(bcx={}, m={}, col={}, val={})",
771 bcx.val_to_str(val));
772 let _indenter = indenter();
774 let dummy = @ast::Pat {id: 0, node: ast::PatWild, span: DUMMY_SP};
775 enter_match(bcx, dm, m, col, val, |p| {
777 ast::PatTup(ref elts) => {
778 let mut new_elts = Vec::new();
779 for elt in elts.iter() {
780 new_elts.push((*elt).clone())
785 assert_is_binding_or_wild(bcx, p);
786 Some(Vec::from_elem(n_elts, dummy))
792 fn enter_tuple_struct<'a, 'b>(
795 m: &'a [Match<'a, 'b>],
799 -> Vec<Match<'a, 'b>> {
800 debug!("enter_tuple_struct(bcx={}, m={}, col={}, val={})",
804 bcx.val_to_str(val));
805 let _indenter = indenter();
807 let dummy = @ast::Pat {id: 0, node: ast::PatWild, span: DUMMY_SP};
808 enter_match(bcx, dm, m, col, val, |p| {
810 ast::PatEnum(_, Some(ref elts)) => {
811 Some(elts.iter().map(|x| (*x)).collect())
813 ast::PatEnum(_, None) => {
814 Some(Vec::from_elem(n_elts, dummy))
817 assert_is_binding_or_wild(bcx, p);
818 Some(Vec::from_elem(n_elts, dummy))
824 fn enter_uniq<'a, 'b>(
827 m: &'a [Match<'a, 'b>],
830 -> Vec<Match<'a, 'b>> {
831 debug!("enter_uniq(bcx={}, m={}, col={}, val={})",
835 bcx.val_to_str(val));
836 let _indenter = indenter();
838 let dummy = @ast::Pat {id: 0, node: ast::PatWild, span: DUMMY_SP};
839 enter_match(bcx, dm, m, col, val, |p| {
841 ast::PatUniq(sub) => {
845 assert_is_binding_or_wild(bcx, p);
852 fn enter_region<'a, 'b>(
855 m: &'a [Match<'a, 'b>],
858 -> Vec<Match<'a, 'b>> {
859 debug!("enter_region(bcx={}, m={}, col={}, val={})",
863 bcx.val_to_str(val));
864 let _indenter = indenter();
866 let dummy = @ast::Pat { id: 0, node: ast::PatWild, span: DUMMY_SP };
867 enter_match(bcx, dm, m, col, val, |p| {
869 ast::PatRegion(sub) => {
873 assert_is_binding_or_wild(bcx, p);
880 // Returns the options in one column of matches. An option is something that
881 // needs to be conditionally matched at runtime; for example, the discriminant
882 // on a set of enum variants or a literal.
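// For example, if the column contains the patterns `1`, `3` and `_`, the
// options are the literals `1` and `3`; wildcards and plain bindings
// contribute no options.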
883 fn get_options(bcx: &Block, m: &[Match], col: uint) -> Vec<Opt> {
885 fn add_to_set(tcx: &ty::ctxt, set: &mut Vec<Opt>, val: Opt) {
886 if set.iter().any(|l| opt_eq(tcx, l, &val)) {return;}
// Vector comparisons are special: since the actual conditions
// over-match, we need to be careful about them. This means that in
// order to handle things in the proper order, we cannot always merge
// conditions.
893 fn add_veclen_to_set(set: &mut Vec<Opt> , i: uint,
894 len: uint, vlo: VecLenOpt) {
896 // If the last condition in the list matches the one we want
897 // to add, then extend its range. Otherwise, make a new
898 // vec_len with a range just covering the new entry.
899 Some(&vec_len(len2, vlo2, (start, end)))
900 if len == len2 && vlo == vlo2 => {
901 let length = set.len();
902 *set.get_mut(length - 1) =
903 vec_len(len, vlo, (start, end+1))
905 _ => set.push(vec_len(len, vlo, (i, i)))
909 let mut found = Vec::new();
910 for (i, br) in m.iter().enumerate() {
911 let cur = *br.pats.get(col);
914 add_to_set(ccx.tcx(), &mut found, lit(ExprLit(l)));
916 ast::PatIdent(..) => {
// This is one of: an enum variant, a unit-like struct, or a
// constant.
919 let opt_def = ccx.tcx.def_map.borrow().find_copy(&cur.id);
921 Some(ast::DefVariant(..)) => {
922 add_to_set(ccx.tcx(), &mut found,
923 variant_opt(bcx, cur.id));
925 Some(ast::DefStruct(..)) => {
926 add_to_set(ccx.tcx(), &mut found,
927 lit(UnitLikeStructLit(cur.id)));
929 Some(ast::DefStatic(const_did, false)) => {
930 add_to_set(ccx.tcx(), &mut found,
931 lit(ConstLit(const_did)));
936 ast::PatEnum(..) | ast::PatStruct(..) => {
937 // This could be one of: a tuple-like enum variant, a
938 // struct-like enum variant, or a struct.
939 let opt_def = ccx.tcx.def_map.borrow().find_copy(&cur.id);
941 Some(ast::DefFn(..)) |
942 Some(ast::DefVariant(..)) => {
943 add_to_set(ccx.tcx(), &mut found,
944 variant_opt(bcx, cur.id));
946 Some(ast::DefStatic(const_did, false)) => {
947 add_to_set(ccx.tcx(), &mut found,
948 lit(ConstLit(const_did)));
953 ast::PatRange(l1, l2) => {
954 add_to_set(ccx.tcx(), &mut found, range(l1, l2));
956 ast::PatVec(ref before, slice, ref after) => {
957 let (len, vec_opt) = match slice {
958 None => (before.len(), vec_len_eq),
959 Some(_) => (before.len() + after.len(),
960 vec_len_ge(before.len()))
962 add_veclen_to_set(&mut found, i, len, vec_opt);
970 struct ExtractedBlock<'a> {
vals: Vec<ValueRef>,
975 fn extract_variant_args<'a>(
980 -> ExtractedBlock<'a> {
981 let _icx = push_ctxt("match::extract_variant_args");
982 let args = Vec::from_fn(adt::num_args(repr, disr_val), |i| {
983 adt::trans_field_ptr(bcx, repr, val, disr_val, i)
986 ExtractedBlock { vals: args, bcx: bcx }
989 fn match_datum(bcx: &Block,
994 * Helper for converting from the ValueRef that we pass around in
995 * the match code, which is always an lvalue, into a Datum. Eventually
996 * we should just pass around a Datum and be done with it.
999 let ty = node_id_type(bcx, pat_id);
1000 Datum(val, ty, Lvalue)
1004 fn extract_vec_elems<'a>(
1006 pat_id: ast::NodeId,
1008 slice: Option<uint>,
1011 -> ExtractedBlock<'a> {
1012 let _icx = push_ctxt("match::extract_vec_elems");
1013 let vec_datum = match_datum(bcx, val, pat_id);
1014 let (base, len) = vec_datum.get_vec_base_and_len(bcx);
1015 let vec_ty = node_id_type(bcx, pat_id);
1016 let vt = tvec::vec_types(bcx, ty::sequence_element_type(bcx.tcx(), vec_ty));
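    // Compute a pointer for each matched element: elements before the
    // slice come straight off the vector base, elements after the slice
    // are addressed backwards from the runtime length, and the slice
    // position itself is left undef here and patched up below.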
1018 let mut elems = Vec::from_fn(elem_count, |i| {
1020 None => GEPi(bcx, base, [i]),
1021 Some(n) if i < n => GEPi(bcx, base, [i]),
1022 Some(n) if i > n => {
1023 InBoundsGEP(bcx, base, [
1025 C_int(bcx.ccx(), (elem_count - i) as int))])
1027 _ => unsafe { llvm::LLVMGetUndef(vt.llunit_ty.to_ref()) }
1030 if slice.is_some() {
1031 let n = slice.unwrap();
1032 let slice_byte_offset = Mul(bcx, vt.llunit_size, C_uint(bcx.ccx(), n));
1033 let slice_begin = tvec::pointer_add_byte(bcx, base, slice_byte_offset);
1034 let slice_len_offset = C_uint(bcx.ccx(), elem_count - 1u);
1035 let slice_len = Sub(bcx, len, slice_len_offset);
1036 let slice_ty = ty::mk_slice(bcx.tcx(),
1038 ty::mt {ty: vt.unit_ty, mutbl: ast::MutImmutable});
1039 let scratch = rvalue_scratch_datum(bcx, slice_ty, "");
1040 Store(bcx, slice_begin,
1041 GEPi(bcx, scratch.val, [0u, abi::slice_elt_base]));
1042 Store(bcx, slice_len, GEPi(bcx, scratch.val, [0u, abi::slice_elt_len]));
1043 *elems.get_mut(n) = scratch.val;
1046 ExtractedBlock { vals: elems, bcx: bcx }
/// Checks every pattern in `m` at column `col`.
/// If there is a struct pattern among them, this function returns the list
/// of all the fields that those patterns match.
/// It returns `None` if there is no struct pattern.
/// It does not collect fields from struct-like enum variants.
/// It can return an empty list if the only struct pattern is a wildcard.
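/// For example, if the column contains the patterns `Foo { a: _, .. }` and
/// `Foo { b: _, .. }`, the result is the field list `[a, b]`.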
1055 fn collect_record_or_struct_fields<'a>(
-> Option<Vec<ast::Ident>> {
1060 let mut fields: Vec<ast::Ident> = Vec::new();
1061 let mut found = false;
1062 for br in m.iter() {
1063 match br.pats.get(col).node {
1064 ast::PatStruct(_, ref fs, _) => {
1065 match ty::get(node_id_type(bcx, br.pats.get(col).id)).sty {
1066 ty::ty_struct(..) => {
1067 extend(&mut fields, fs.as_slice());
1077 return Some(fields);
fn extend(idents: &mut Vec<ast::Ident>, field_pats: &[ast::FieldPat]) {
1083 for field_pat in field_pats.iter() {
1084 let field_ident = field_pat.ident;
1085 if !idents.iter().any(|x| x.name == field_ident.name) {
1086 idents.push(field_ident);
1092 // Macro for deciding whether any of the remaining matches fit a given kind of
// pattern. Note that, because the match is well-typed, either ALL of the
1094 // matches should fit that sort of pattern or NONE (however, some of the
1095 // matches may be wildcards like _ or identifiers).
1096 macro_rules! any_pat (
1097 ($m:expr, $pattern:pat) => (
1098 ($m).iter().any(|br| {
1099 match br.pats.get(col).node {
1107 fn any_uniq_pat(m: &[Match], col: uint) -> bool {
1108 any_pat!(m, ast::PatUniq(_))
1111 fn any_region_pat(m: &[Match], col: uint) -> bool {
1112 any_pat!(m, ast::PatRegion(_))
1115 fn any_tup_pat(m: &[Match], col: uint) -> bool {
1116 any_pat!(m, ast::PatTup(_))
1119 fn any_tuple_struct_pat(bcx: &Block, m: &[Match], col: uint) -> bool {
1121 let pat = *br.pats.get(col);
1123 ast::PatEnum(_, _) => {
1124 match bcx.tcx().def_map.borrow().find(&pat.id) {
1125 Some(&ast::DefFn(..)) |
1126 Some(&ast::DefStruct(..)) => true,
1135 struct DynamicFailureHandler<'a> {
1138 msg: InternedString,
1139 finished: Cell<Option<BasicBlockRef>>,
1142 impl<'a> DynamicFailureHandler<'a> {
1143 fn handle_fail(&self) -> BasicBlockRef {
1144 match self.finished.get() {
1145 Some(bb) => return bb,
1149 let fcx = self.bcx.fcx;
1150 let fail_cx = fcx.new_block(false, "case_fallthrough", None);
1151 controlflow::trans_fail(fail_cx, self.sp, self.msg.clone());
1152 self.finished.set(Some(fail_cx.llbb));
1157 /// What to do when the pattern match fails.
1158 enum FailureHandler<'a> {
1160 JumpToBasicBlock(BasicBlockRef),
1161 DynamicFailureHandlerClass(Box<DynamicFailureHandler<'a>>),
1164 impl<'a> FailureHandler<'a> {
1165 fn is_infallible(&self) -> bool {
1172 fn is_fallible(&self) -> bool {
1173 !self.is_infallible()
1176 fn handle_fail(&self) -> BasicBlockRef {
1179 fail!("attempted to fail in infallible failure handler!")
1181 JumpToBasicBlock(basic_block) => basic_block,
1182 DynamicFailureHandlerClass(ref dynamic_failure_handler) => {
1183 dynamic_failure_handler.handle_fail()
1189 fn pick_col(m: &[Match]) -> uint {
1190 fn score(p: &ast::Pat) -> uint {
1192 ast::PatLit(_) | ast::PatEnum(_, _) | ast::PatRange(_, _) => 1u,
1193 ast::PatIdent(_, _, Some(p)) => score(p),
1197 let mut scores = Vec::from_elem(m[0].pats.len(), 0u);
1198 for br in m.iter() {
1199 for (i, p) in br.pats.iter().enumerate() {
1200 *scores.get_mut(i) += score(*p);
1203 let mut max_score = 0u;
1204 let mut best_col = 0u;
1205 for (i, score) in scores.iter().enumerate() {
// Irrefutable columns always go first, they'd only be duplicated in
// the branches.
1210 if score == 0u { return i; }
1211 // If no irrefutable ones are found, we pick the one with the biggest
1212 // branching factor.
1213 if score > max_score { max_score = score; best_col = i; }
1219 pub enum branch_kind { no_branch, single, switch, compare, compare_vec_len, }
1221 // Compiles a comparison between two things.
1223 // NB: This must produce an i1, not a Rust bool (i8).
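// (LLVM's conditional branch takes an i1 operand, while Rust bools are
// stored in memory as i8; hence the `bool_to_i1` calls below.)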
1224 fn compare_values<'a>(
1230 fn compare_str<'a>(cx: &'a Block<'a>,
1235 let did = langcall(cx,
1237 format!("comparison of `{}`",
1238 cx.ty_to_str(rhs_t)).as_slice(),
1240 let result = callee::trans_lang_call(cx, did, [lhs, rhs], None);
1243 val: bool_to_i1(result.bcx, result.val)
1247 let _icx = push_ctxt("compare_values");
1248 if ty::type_is_scalar(rhs_t) {
1249 let rs = compare_scalar_types(cx, lhs, rhs, rhs_t, ast::BiEq);
1250 return Result::new(rs.bcx, rs.val);
1253 match ty::get(rhs_t).sty {
1254 ty::ty_uniq(t) => match ty::get(t).sty {
1256 let scratch_lhs = alloca(cx, val_ty(lhs), "__lhs");
1257 Store(cx, lhs, scratch_lhs);
1258 let scratch_rhs = alloca(cx, val_ty(rhs), "__rhs");
1259 Store(cx, rhs, scratch_rhs);
1260 let did = langcall(cx,
1262 format!("comparison of `{}`",
1263 cx.ty_to_str(rhs_t)).as_slice(),
1264 UniqStrEqFnLangItem);
1265 let result = callee::trans_lang_call(cx, did, [scratch_lhs, scratch_rhs], None);
1268 val: bool_to_i1(result.bcx, result.val)
1271 _ => cx.sess().bug("only scalars and strings supported in compare_values"),
1273 ty::ty_rptr(_, mt) => match ty::get(mt.ty).sty {
1274 ty::ty_str => compare_str(cx, lhs, rhs, rhs_t),
1275 _ => cx.sess().bug("only scalars and strings supported in compare_values"),
1277 _ => cx.sess().bug("only scalars and strings supported in compare_values"),
1281 fn store_non_ref_bindings<'a>(
1283 bindings_map: &BindingsMap,
1284 opt_cleanup_scope: Option<cleanup::ScopeId>)
1288 * For each copy/move binding, copy the value from the value being
1289 * matched into its final home. This code executes once one of
1290 * the patterns for a given arm has completely matched. It adds
1291 * cleanups to the `opt_cleanup_scope`, if one is provided.
1296 for (_, &binding_info) in bindings_map.iter() {
1297 match binding_info.trmode {
1298 TrByValue(lldest) => {
1299 let llval = Load(bcx, binding_info.llmatch); // get a T*
1300 let datum = Datum(llval, binding_info.ty, Lvalue);
1301 bcx = datum.store_to(bcx, lldest);
1303 match opt_cleanup_scope {
1306 fcx.schedule_drop_mem(s, lldest, binding_info.ty);
1316 fn insert_lllocals<'a>(bcx: &'a Block<'a>,
1317 bindings_map: &BindingsMap,
1318 cleanup_scope: cleanup::ScopeId)
1321 * For each binding in `data.bindings_map`, adds an appropriate entry into
1322 * the `fcx.lllocals` map, scheduling cleanup in `cleanup_scope`.
1327 for (&ident, &binding_info) in bindings_map.iter() {
1328 let llval = match binding_info.trmode {
1329 // By value bindings: use the stack slot that we
1330 // copied/moved the value into
1331 TrByValue(lldest) => lldest,
1333 // By ref binding: use the ptr into the matched value
1334 TrByRef => binding_info.llmatch
1337 let datum = Datum(llval, binding_info.ty, Lvalue);
1338 fcx.schedule_drop_mem(cleanup_scope, llval, binding_info.ty);
1340 debug!("binding {:?} to {}",
1342 bcx.val_to_str(llval));
1343 bcx.fcx.lllocals.borrow_mut().insert(binding_info.id, datum);
1345 if bcx.sess().opts.debuginfo == FullDebugInfo {
1346 debuginfo::create_match_binding_metadata(bcx,
1356 fn compile_guard<'a, 'b>(
1358 guard_expr: &ast::Expr,
1360 m: &'a [Match<'a, 'b>],
1362 chk: &FailureHandler,
1363 has_genuine_default: bool)
1365 debug!("compile_guard(bcx={}, guard_expr={}, m={}, vals={})",
1367 bcx.expr_to_str(guard_expr),
1369 vec_map_to_str(vals, |v| bcx.val_to_str(*v)));
1370 let _indenter = indenter();
1372 // Lest the guard itself should fail, introduce a temporary cleanup
1373 // scope for any non-ref bindings we create.
1374 let temp_scope = bcx.fcx.push_custom_cleanup_scope();
1377 bcx = store_non_ref_bindings(bcx, &data.bindings_map,
1378 Some(cleanup::CustomScope(temp_scope)));
1379 bcx = insert_lllocals(bcx, &data.bindings_map,
1380 cleanup::CustomScope(temp_scope));
1382 let val = unpack_datum!(bcx, expr::trans(bcx, guard_expr));
1383 let val = val.to_llbool(bcx);
1385 // Cancel cleanups now that the guard successfully executed. If
1386 // the guard was false, we will drop the values explicitly
1387 // below. Otherwise, we'll add lvalue cleanups at the end.
1388 bcx.fcx.pop_custom_cleanup_scope(temp_scope);
1390 return with_cond(bcx, Not(bcx, val), |bcx| {
1391 // Guard does not match: free the values we copied,
1392 // and remove all bindings from the lllocals table
1393 let bcx = drop_bindings(bcx, data);
1395 // If the default arm is the only one left, move on to the next
// condition explicitly rather than (possibly) falling back to
// the default arm.
1398 &JumpToBasicBlock(_) if m.len() == 1 && has_genuine_default => {
1399 Br(bcx, chk.handle_fail());
1402 compile_submatch(bcx, m, vals, chk, has_genuine_default);
1408 fn drop_bindings<'a>(bcx: &'a Block<'a>, data: &ArmData)
1411 for (_, &binding_info) in data.bindings_map.iter() {
1412 match binding_info.trmode {
1413 TrByValue(llval) => {
1414 bcx = glue::drop_ty(bcx, llval, binding_info.ty);
1418 bcx.fcx.lllocals.borrow_mut().remove(&binding_info.id);
1424 fn compile_submatch<'a, 'b>(
1426 m: &'a [Match<'a, 'b>],
1428 chk: &FailureHandler,
1429 has_genuine_default: bool) {
1430 debug!("compile_submatch(bcx={}, m={}, vals={})",
1433 vec_map_to_str(vals, |v| bcx.val_to_str(*v)));
1434 let _indenter = indenter();
1437 For an empty match, a fall-through case must exist
1439 assert!((m.len() > 0u || chk.is_fallible()));
1440 let _icx = push_ctxt("match::compile_submatch");
1443 Br(bcx, chk.handle_fail());
1446 if m[0].pats.len() == 0u {
1447 let data = &m[0].data;
1448 for &(ref ident, ref value_ptr) in m[0].bound_ptrs.iter() {
1449 let llmatch = data.bindings_map.get(ident).llmatch;
1450 Store(bcx, *value_ptr, llmatch);
1452 match data.arm.guard {
1453 Some(guard_expr) => {
1454 bcx = compile_guard(bcx,
1457 m.slice(1, m.len()),
1460 has_genuine_default);
1464 Br(bcx, data.bodycx.llbb);
1468 let col = pick_col(m);
1469 let val = vals[col];
1471 if has_nested_bindings(m, col) {
1472 let expanded = expand_nested_bindings(bcx, m, col, val);
1473 compile_submatch_continue(bcx,
1474 expanded.as_slice(),
1479 has_genuine_default)
1481 compile_submatch_continue(bcx, m, vals, chk, col, val, has_genuine_default)
1485 fn compile_submatch_continue<'a, 'b>(
1486 mut bcx: &'b Block<'b>,
1487 m: &'a [Match<'a, 'b>],
1489 chk: &FailureHandler,
1492 has_genuine_default: bool) {
1494 let tcx = bcx.tcx();
1495 let dm = &tcx.def_map;
1497 let vals_left = Vec::from_slice(vals.slice(0u, col)).append(vals.slice(col + 1u, vals.len()));
1498 let ccx = bcx.fcx.ccx;
1500 for br in m.iter() {
1501 // Find a real id (we're adding placeholder wildcard patterns, but
1502 // each column is guaranteed to have at least one real pattern)
1504 pat_id = br.pats.get(col).id;
1508 match collect_record_or_struct_fields(bcx, m, col) {
1509 Some(ref rec_fields) => {
1510 let pat_ty = node_id_type(bcx, pat_id);
1511 let pat_repr = adt::represent_type(bcx.ccx(), pat_ty);
1512 expr::with_field_tys(tcx, pat_ty, Some(pat_id), |discr, field_tys| {
1513 let rec_vals = rec_fields.iter().map(|field_name| {
1514 let ix = ty::field_idx_strict(tcx, field_name.name, field_tys);
1515 adt::trans_field_ptr(bcx, &*pat_repr, val, discr, ix)
1516 }).collect::<Vec<_>>();
1519 enter_rec_or_struct(bcx,
1523 rec_fields.as_slice(),
1525 rec_vals.append(vals_left.as_slice()).as_slice(),
1526 chk, has_genuine_default);
1533 if any_tup_pat(m, col) {
1534 let tup_ty = node_id_type(bcx, pat_id);
1535 let tup_repr = adt::represent_type(bcx.ccx(), tup_ty);
1536 let n_tup_elts = match ty::get(tup_ty).sty {
1537 ty::ty_tup(ref elts) => elts.len(),
1538 _ => ccx.sess().bug("non-tuple type in tuple pattern")
1540 let tup_vals = Vec::from_fn(n_tup_elts, |i| {
1541 adt::trans_field_ptr(bcx, &*tup_repr, val, 0, i)
1543 compile_submatch(bcx,
1549 n_tup_elts).as_slice(),
1550 tup_vals.append(vals_left.as_slice()).as_slice(),
1551 chk, has_genuine_default);
1555 if any_tuple_struct_pat(bcx, m, col) {
1556 let struct_ty = node_id_type(bcx, pat_id);
1557 let struct_element_count;
1558 match ty::get(struct_ty).sty {
1559 ty::ty_struct(struct_id, _) => {
1560 struct_element_count =
1561 ty::lookup_struct_fields(tcx, struct_id).len();
1564 ccx.sess().bug("non-struct type in tuple struct pattern");
1568 let struct_repr = adt::represent_type(bcx.ccx(), struct_ty);
1569 let llstructvals = Vec::from_fn(struct_element_count, |i| {
1570 adt::trans_field_ptr(bcx, &*struct_repr, val, 0, i)
1572 compile_submatch(bcx,
1573 enter_tuple_struct(bcx, dm, m, col, val,
1574 struct_element_count).as_slice(),
1575 llstructvals.append(vals_left.as_slice()).as_slice(),
1576 chk, has_genuine_default);
1580 if any_uniq_pat(m, col) {
1581 let llbox = Load(bcx, val);
1582 compile_submatch(bcx,
1583 enter_uniq(bcx, dm, m, col, val).as_slice(),
1584 (vec!(llbox)).append(vals_left.as_slice()).as_slice(),
1585 chk, has_genuine_default);
1589 if any_region_pat(m, col) {
1590 let loaded_val = Load(bcx, val);
1591 compile_submatch(bcx,
1592 enter_region(bcx, dm, m, col, val).as_slice(),
1593 (vec!(loaded_val)).append(vals_left.as_slice()).as_slice(),
1594 chk, has_genuine_default);
1598 // Decide what kind of branch we need
1599 let opts = get_options(bcx, m, col);
1600 debug!("options={:?}", opts);
1601 let mut kind = no_branch;
1602 let mut test_val = val;
1603 debug!("test_val={}", bcx.val_to_str(test_val));
1604 if opts.len() > 0u {
1605 match *opts.get(0) {
1606 var(_, ref repr) => {
1607 let (the_kind, val_opt) = adt::trans_switch(bcx, &**repr, val);
1609 for &tval in val_opt.iter() { test_val = tval; }
1612 let pty = node_id_type(bcx, pat_id);
1613 test_val = load_if_immediate(bcx, val, pty);
1614 kind = if ty::type_is_integral(pty) { switch }
1618 test_val = Load(bcx, val);
1622 let vec_ty = node_id_type(bcx, pat_id);
1623 let (_, len) = tvec::get_base_and_len(bcx, val, vec_ty);
1625 kind = compare_vec_len;
1629 for o in opts.iter() {
1631 range(_, _) => { kind = compare; break }
1635 let else_cx = match kind {
1636 no_branch | single => bcx,
1637 _ => bcx.fcx.new_temp_block("match_else")
1639 let sw = if kind == switch {
1640 Switch(bcx, test_val, else_cx.llbb, opts.len())
1642 C_int(ccx, 0) // Placeholder for when not using a switch
1645 let defaults = enter_default(else_cx, dm, m, col, val, chk);
1646 let exhaustive = chk.is_infallible() && defaults.len() == 0u;
1647 let len = opts.len();
1649 // Compile subtrees for each option
1650 for (i, opt) in opts.iter().enumerate() {
1651 // In some cases of range and vector pattern matching, we need to
1652 // override the failure case so that instead of failing, it proceeds
1653 // to try more matching. branch_chk, then, is the proper failure case
1654 // for the current conditional branch.
1655 let mut branch_chk = None;
1656 let mut opt_cx = else_cx;
1657 if !exhaustive || i+1 < len {
1658 opt_cx = bcx.fcx.new_temp_block("match_case");
1660 single => Br(bcx, opt_cx.llbb),
1662 match trans_opt(bcx, opt) {
1663 single_result(r) => {
1665 llvm::LLVMAddCase(sw, r.val, opt_cx.llbb);
1671 "in compile_submatch, expected \
1672 trans_opt to return a single_result")
1677 let t = node_id_type(bcx, pat_id);
1678 let Result {bcx: after_cx, val: matches} = {
1679 match trans_opt(bcx, opt) {
1680 single_result(Result {bcx, val}) => {
1681 compare_values(bcx, test_val, val, t)
1683 lower_bound(Result {bcx, val}) => {
1684 compare_scalar_types(
1688 range_result(Result {val: vbegin, ..},
1689 Result {bcx, val: vend}) => {
1690 let Result {bcx, val: llge} =
1691 compare_scalar_types(
1693 vbegin, t, ast::BiGe);
1694 let Result {bcx, val: llle} =
1695 compare_scalar_types(
1696 bcx, test_val, vend,
1698 Result::new(bcx, And(bcx, llge, llle))
1702 bcx = fcx.new_temp_block("compare_next");
1704 // If none of the sub-cases match, and the current condition
1705 // is guarded or has multiple patterns, move on to the next
// condition, if there is any, rather than falling back to
// the default.
1708 let guarded = m[i].data.arm.guard.is_some();
1709 let multi_pats = m[i].pats.len() > 1;
1710 if i+1 < len && (guarded || multi_pats) {
1711 branch_chk = Some(JumpToBasicBlock(bcx.llbb));
1713 CondBr(after_cx, matches, opt_cx.llbb, bcx.llbb);
1715 compare_vec_len => {
1716 let Result {bcx: after_cx, val: matches} = {
1717 match trans_opt(bcx, opt) {
1719 Result {bcx, val}) => {
1720 let value = compare_scalar_values(
1722 signed_int, ast::BiEq);
1723 Result::new(bcx, value)
1726 Result {bcx, val: val}) => {
1727 let value = compare_scalar_values(
1729 signed_int, ast::BiGe);
1730 Result::new(bcx, value)
1733 Result {val: vbegin, ..},
1734 Result {bcx, val: vend}) => {
1736 compare_scalar_values(
1738 vbegin, signed_int, ast::BiGe);
1740 compare_scalar_values(
1741 bcx, test_val, vend,
1742 signed_int, ast::BiLe);
1743 Result::new(bcx, And(bcx, llge, llle))
1747 bcx = fcx.new_temp_block("compare_vec_len_next");
1749 // If none of these subcases match, move on to the
1750 // next condition if there is any.
1752 branch_chk = Some(JumpToBasicBlock(bcx.llbb));
1754 CondBr(after_cx, matches, opt_cx.llbb, bcx.llbb);
1758 } else if kind == compare || kind == compare_vec_len {
1759 Br(bcx, else_cx.llbb);
1763 let mut unpacked = Vec::new();
1765 var(disr_val, ref repr) => {
1766 let ExtractedBlock {vals: argvals, bcx: new_bcx} =
1767 extract_variant_args(opt_cx, &**repr, disr_val, val);
1768 size = argvals.len();
1772 vec_len(n, vt, _) => {
1773 let (n, slice) = match vt {
1774 vec_len_ge(i) => (n + 1u, Some(i)),
1775 vec_len_eq => (n, None)
1777 let args = extract_vec_elems(opt_cx, pat_id, n,
1778 slice, val, test_val);
1779 size = args.vals.len();
1780 unpacked = args.vals.clone();
1783 lit(_) | range(_, _) => ()
1785 let opt_ms = enter_opt(opt_cx, m, opt, col, size, val);
1786 let opt_vals = unpacked.append(vals_left.as_slice());
1790 compile_submatch(opt_cx,
1792 opt_vals.as_slice(),
1794 has_genuine_default)
1796 Some(branch_chk) => {
1797 compile_submatch(opt_cx,
1799 opt_vals.as_slice(),
1801 has_genuine_default)
1806 // Compile the fall-through case, if any
1807 if !exhaustive && kind != single {
1808 if kind == compare || kind == compare_vec_len {
1809 Br(bcx, else_cx.llbb);
1812 // If there is only one default arm left, move on to the next
1813 // condition explicitly rather than (eventually) falling back to
1814 // the last default arm.
1815 &JumpToBasicBlock(_) if defaults.len() == 1 && has_genuine_default => {
1816 Br(else_cx, chk.handle_fail());
1819 compile_submatch(else_cx,
1820 defaults.as_slice(),
1821 vals_left.as_slice(),
1823 has_genuine_default);
1829 pub fn trans_match<'a>(
1831 match_expr: &ast::Expr,
1832 discr_expr: &ast::Expr,
1836 let _icx = push_ctxt("match::trans_match");
1837 trans_match_inner(bcx, match_expr.id, discr_expr, arms, dest)
1840 fn create_bindings_map(bcx: &Block, pat: @ast::Pat) -> BindingsMap {
1841 // Create the bindings map, which is a mapping from each binding name
1842 // to an alloca() that will be the value for that local variable.
1843 // Note that we use the names because each binding will have many ids
1844 // from the various alternatives.
1845 let ccx = bcx.ccx();
1846 let tcx = bcx.tcx();
1847 let mut bindings_map = HashMap::new();
1848 pat_bindings(&tcx.def_map, pat, |bm, p_id, span, path| {
1849 let ident = path_to_ident(path);
1850 let variable_ty = node_id_type(bcx, p_id);
1851 let llvariable_ty = type_of::type_of(ccx, variable_ty);
1856 ast::BindByValue(_) => {
1857 // in this case, the final type of the variable will be T,
// but during matching we need to store a *T as explained
// above.
1860 llmatch = alloca(bcx, llvariable_ty.ptr_to(), "__llmatch");
1861 trmode = TrByValue(alloca(bcx,
1863 bcx.ident(ident).as_slice()));
1865 ast::BindByRef(_) => {
1866 llmatch = alloca(bcx,
1868 bcx.ident(ident).as_slice());
1872 bindings_map.insert(ident, BindingInfo {
1880 return bindings_map;
1883 fn trans_match_inner<'a>(scope_cx: &'a Block<'a>,
1884 match_id: ast::NodeId,
1885 discr_expr: &ast::Expr,
1887 dest: Dest) -> &'a Block<'a> {
1888 let _icx = push_ctxt("match::trans_match_inner");
1889 let fcx = scope_cx.fcx;
1890 let mut bcx = scope_cx;
1891 let tcx = bcx.tcx();
1893 let discr_datum = unpack_datum!(bcx, expr::trans_to_lvalue(bcx, discr_expr,
1895 if bcx.unreachable.get() {
1899 let t = node_id_type(bcx, discr_expr.id);
1901 if ty::type_is_empty(tcx, t) {
1902 // Special case for empty types
1903 let fail_cx = Cell::new(None);
1904 let fail_handler = box DynamicFailureHandler {
1906 sp: discr_expr.span,
1907 msg: InternedString::new("scrutinizing value that can't \
1911 DynamicFailureHandlerClass(fail_handler)
1917 let arm_datas: Vec<ArmData> = arms.iter().map(|arm| ArmData {
1918 bodycx: fcx.new_id_block("case_body", arm.body.id),
1920 bindings_map: create_bindings_map(bcx, *arm.pats.get(0))
1923 let mut matches = Vec::new();
1924 for arm_data in arm_datas.iter() {
1925 matches.extend(arm_data.arm.pats.iter().map(|p| Match {
1928 bound_ptrs: Vec::new(),
// `compile_submatch` works on one column of arm patterns at a time and
// then peels that column off. So as we progress, it may become
// impossible to tell whether we have a genuine default arm, i.e.
// `_ => foo`, or not. Sometimes it is important to know that in order
// to decide whether to move on to the next condition or to fall back
// to the default arm.
1938 let has_default = arms.len() > 0 && {
1939 let ref pats = arms.last().unwrap().pats;
pats.len() == 1
&& match pats.last().unwrap().node {
1943 ast::PatWild => true, _ => false
1947 compile_submatch(bcx, matches.as_slice(), [discr_datum.val], &chk, has_default);
1949 let mut arm_cxs = Vec::new();
1950 for arm_data in arm_datas.iter() {
1951 let mut bcx = arm_data.bodycx;
1953 // If this arm has a guard, then the various by-value bindings have
1954 // already been copied into their homes. If not, we do it here. This
1955 // is just to reduce code space. See extensive comment at the start
1956 // of the file for more details.
1957 if arm_data.arm.guard.is_none() {
1958 bcx = store_non_ref_bindings(bcx, &arm_data.bindings_map, None);
1961 // insert bindings into the lllocals map and add cleanups
1962 let cleanup_scope = fcx.push_custom_cleanup_scope();
1963 bcx = insert_lllocals(bcx, &arm_data.bindings_map,
1964 cleanup::CustomScope(cleanup_scope));
1965 bcx = expr::trans_into(bcx, arm_data.arm.body, dest);
1966 bcx = fcx.pop_and_trans_custom_cleanup_scope(bcx, cleanup_scope);
1970 bcx = scope_cx.fcx.join_blocks(match_id, arm_cxs.as_slice());
1974 enum IrrefutablePatternBindingMode {
1975 // Stores the association between node ID and LLVM value in `lllocals`.
1977 // Stores the association between node ID and LLVM value in `llargs`.
1981 pub fn store_local<'a>(bcx: &'a Block<'a>,
1985 * Generates code for a local variable declaration like
1986 * `let <pat>;` or `let <pat> = <opt_init_expr>`.
1988 let _icx = push_ctxt("match::store_local");
1990 let tcx = bcx.tcx();
1991 let pat = local.pat;
1992 let opt_init_expr = local.init;
1994 return match opt_init_expr {
1995 Some(init_expr) => {
1996 // Optimize the "let x = expr" case. This just writes
1997 // the result of evaluating `expr` directly into the alloca
1998 // for `x`. Often the general path results in similar or the
1999 // same code post-optimization, but not always. In particular,
2000 // in unsafe code, you can have expressions like
2002 // let x = intrinsics::uninit();
2004 // In such cases, the more general path is unsafe, because
2005 // it assumes it is matching against a valid value.
2006 match simple_identifier(pat) {
2008 let var_scope = cleanup::var_scope(tcx, local.id);
2009 return mk_binding_alloca(
2010 bcx, pat.id, path, BindLocal, var_scope, (),
2011 |(), bcx, v, _| expr::trans_into(bcx, init_expr,
2020 unpack_datum!(bcx, expr::trans_to_lvalue(bcx, init_expr, "let"));
2021 if ty::type_is_bot(expr_ty(bcx, init_expr)) {
2022 create_dummy_locals(bcx, pat)
2024 if bcx.sess().asm_comments() {
2025 add_comment(bcx, "creating zeroable ref llval");
2027 let var_scope = cleanup::var_scope(tcx, local.id);
2028 bind_irrefutable_pat(bcx, pat, init_datum.val, BindLocal, var_scope)
2032 create_dummy_locals(bcx, pat)
2036 fn create_dummy_locals<'a>(mut bcx: &'a Block<'a>,
2039 // create dummy memory for the variables if we have no
2040 // value to store into them immediately
2041 let tcx = bcx.tcx();
2042 pat_bindings(&tcx.def_map, pat, |_, p_id, _, path| {
2043 let scope = cleanup::var_scope(tcx, p_id);
2044 bcx = mk_binding_alloca(
2045 bcx, p_id, path, BindLocal, scope, (),
2046 |(), bcx, llval, ty| { zero_mem(bcx, llval, ty); bcx });
2052 pub fn store_arg<'a>(mut bcx: &'a Block<'a>,
2055 arg_scope: cleanup::ScopeId)
2058 * Generates code for argument patterns like `fn foo(<pat>: T)`.
2059 * Creates entries in the `llargs` map for each of the bindings
2064 * - `pat` is the argument pattern
2065 * - `llval` is a pointer to the argument value (in other words,
2066 * if the argument type is `T`, then `llval` is a `T*`). In some
2067 * cases, this code may zero out the memory `llval` points at.
2070 let _icx = push_ctxt("match::store_arg");
2072 match simple_identifier(pat) {
// Generate nicer LLVM for the common case of a fn pattern
2076 let arg_ty = node_id_type(bcx, pat.id);
2077 if type_of::arg_is_indirect(bcx.ccx(), arg_ty)
2078 && bcx.sess().opts.debuginfo != FullDebugInfo {
2079 // Don't copy an indirect argument to an alloca, the caller
2080 // already put it in a temporary alloca and gave it up, unless
2081 // we emit extra-debug-info, which requires local allocas :(.
2082 let arg_val = arg.add_clean(bcx.fcx, arg_scope);
2083 bcx.fcx.llargs.borrow_mut()
2084 .insert(pat.id, Datum(arg_val, arg_ty, Lvalue));
2088 bcx, pat.id, path, BindArgument, arg_scope, arg,
2089 |arg, bcx, llval, _| arg.store_to(bcx, llval))
// General path. Copy out the values that are used in the
// pattern.
2096 let arg = unpack_datum!(
2097 bcx, arg.to_lvalue_datum_in_scope(bcx, "__arg", arg_scope));
2098 bind_irrefutable_pat(bcx, pat, arg.val,
2099 BindArgument, arg_scope)
2104 fn mk_binding_alloca<'a,A>(bcx: &'a Block<'a>,
2107 binding_mode: IrrefutablePatternBindingMode,
2108 cleanup_scope: cleanup::ScopeId,
2110 populate: |A, &'a Block<'a>, ValueRef, ty::t| -> &'a Block<'a>)
2112 let var_ty = node_id_type(bcx, p_id);
2113 let ident = ast_util::path_to_ident(path);
2115 // Allocate memory on stack for the binding.
2116 let llval = alloc_ty(bcx, var_ty, bcx.ident(ident).as_slice());
2118 // Subtle: be sure that we *populate* the memory *before*
2119 // we schedule the cleanup.
2120 let bcx = populate(arg, bcx, llval, var_ty);
2121 bcx.fcx.schedule_drop_mem(cleanup_scope, llval, var_ty);
2123 // Now that memory is initialized and has cleanup scheduled,
2124 // create the datum and insert into the local variable map.
2125 let datum = Datum(llval, var_ty, Lvalue);
2126 let mut llmap = match binding_mode {
2127 BindLocal => bcx.fcx.lllocals.borrow_mut(),
2128 BindArgument => bcx.fcx.llargs.borrow_mut()
2130 llmap.insert(p_id, datum);
2134 fn bind_irrefutable_pat<'a>(
2138 binding_mode: IrrefutablePatternBindingMode,
2139 cleanup_scope: cleanup::ScopeId)
2142 * A simple version of the pattern matching code that only handles
2143 * irrefutable patterns. This is used in let/argument patterns,
2144 * not in match statements. Unifying this code with the code above
2145 * sounds nice, but in practice it produces very inefficient code,
2146 * since the match code is so much more general. In most cases,
2147 * LLVM is able to optimize the code, but it causes longer compile
2148 * times and makes the generated code nigh impossible to read.
2151 * - bcx: starting basic block context
2152 * - pat: the irrefutable pattern being matched.
2153 * - val: the value being matched -- must be an lvalue (by ref, with cleanup)
2154 * - binding_mode: is this for an argument or a local variable?
2157 debug!("bind_irrefutable_pat(bcx={}, pat={}, binding_mode={:?})",
2159 pat.repr(bcx.tcx()),
2162 if bcx.sess().asm_comments() {
2163 add_comment(bcx, format!("bind_irrefutable_pat(pat={})",
2164 pat.repr(bcx.tcx())).as_slice());
2167 let _indenter = indenter();
2169 let _icx = push_ctxt("match::bind_irrefutable_pat");
2171 let tcx = bcx.tcx();
2172 let ccx = bcx.ccx();
2174 ast::PatIdent(pat_binding_mode, ref path, inner) => {
2175 if pat_is_binding(&tcx.def_map, pat) {
2176 // Allocate the stack slot where the value of this
2177 // binding will live and place it into the appropriate
2179 bcx = mk_binding_alloca(
2180 bcx, pat.id, path, binding_mode, cleanup_scope, (),
2181 |(), bcx, llval, ty| {
2182 match pat_binding_mode {
2183 ast::BindByValue(_) => {
2184 // By value binding: move the value that `val`
2185 // points at into the binding's stack slot.
2186 let d = Datum(val, ty, Lvalue);
2187 d.store_to(bcx, llval)
2190 ast::BindByRef(_) => {
2191 // By ref binding: the value of the variable
2192 // is the pointer `val` itself.
2193 Store(bcx, val, llval);
2200 for &inner_pat in inner.iter() {
2201 bcx = bind_irrefutable_pat(bcx, inner_pat, val,
2202 binding_mode, cleanup_scope);
2205 ast::PatEnum(_, ref sub_pats) => {
2206 let opt_def = bcx.tcx().def_map.borrow().find_copy(&pat.id);
2208 Some(ast::DefVariant(enum_id, var_id, _)) => {
2209 let repr = adt::represent_node(bcx, pat.id);
2210 let vinfo = ty::enum_variant_with_id(ccx.tcx(),
2213 let args = extract_variant_args(bcx,
2217 for sub_pat in sub_pats.iter() {
2218 for (i, argval) in args.vals.iter().enumerate() {
2219 bcx = bind_irrefutable_pat(bcx, *sub_pat.get(i),
2220 *argval, binding_mode,
2225 Some(ast::DefFn(..)) |
2226 Some(ast::DefStruct(..)) => {
2229 // This is a unit-like struct. Nothing to do here.
2231 Some(ref elems) => {
2232 // This is the tuple struct case.
2233 let repr = adt::represent_node(bcx, pat.id);
2234 for (i, elem) in elems.iter().enumerate() {
2235 let fldptr = adt::trans_field_ptr(bcx, &*repr,
2237 bcx = bind_irrefutable_pat(bcx, *elem,
2238 fldptr, binding_mode,
2244 Some(ast::DefStatic(_, false)) => {
2247 // Nothing to do here.
2251 ast::PatStruct(_, ref fields, _) => {
2252 let tcx = bcx.tcx();
2253 let pat_ty = node_id_type(bcx, pat.id);
2254 let pat_repr = adt::represent_type(bcx.ccx(), pat_ty);
2255 expr::with_field_tys(tcx, pat_ty, Some(pat.id), |discr, field_tys| {
2256 for f in fields.iter() {
2257 let ix = ty::field_idx_strict(tcx, f.ident.name, field_tys);
2258 let fldptr = adt::trans_field_ptr(bcx, &*pat_repr, val,
2260 bcx = bind_irrefutable_pat(bcx, f.pat, fldptr,
2261 binding_mode, cleanup_scope);
2265 ast::PatTup(ref elems) => {
2266 let repr = adt::represent_node(bcx, pat.id);
2267 for (i, elem) in elems.iter().enumerate() {
2268 let fldptr = adt::trans_field_ptr(bcx, &*repr, val, 0, i);
2269 bcx = bind_irrefutable_pat(bcx, *elem, fldptr,
2270 binding_mode, cleanup_scope);
2273 ast::PatUniq(inner) => {
2274 let llbox = Load(bcx, val);
2275 bcx = bind_irrefutable_pat(bcx, inner, llbox, binding_mode, cleanup_scope);
2277 ast::PatRegion(inner) => {
2278 let loaded_val = Load(bcx, val);
2279 bcx = bind_irrefutable_pat(bcx, inner, loaded_val, binding_mode, cleanup_scope);
2281 ast::PatVec(..) => {
2282 bcx.sess().span_bug(pat.span,
2283 "vector patterns are never irrefutable!");
2285 ast::PatWild | ast::PatWildMulti | ast::PatLit(_) | ast::PatRange(_, _) => ()