<p id="keyword-table-marker"></p>
-| | | | | |
-|----------|----------|----------|----------|--------|
-| abstract | alignof | as | become | box |
-| break | const | continue | crate | do |
-| else | enum | extern | false | final |
-| fn | for | if | impl | in |
-| let | loop | match | mod | move |
-| mut | offsetof | once | override | priv |
-| proc | pub | pure | ref | return |
-| sizeof | static | self | struct | super |
-| true | trait | type | typeof | unsafe |
-| unsized | use | virtual | where | while |
-| yield | | | | |
+| | | | | |
+|----------|----------|----------|----------|---------|
+| abstract | alignof | as | become | box |
+| break | const | continue | crate | do |
+| else | enum | extern | false | final |
+| fn | for | if | impl | in |
+| let | loop | macro | match | mod |
+| move | mut | offsetof | override | priv |
+| proc | pub | pure | ref | return |
+| Self | self | sizeof | static | struct |
+| super | trait | true | type | typeof |
+| unsafe | unsized | use | virtual | where |
+| while | yield | | | |
Each of these keywords has special meaning in its grammar, and all of them are
idx_expr : expr '[' expr ']' ;
```
+### Range expressions
+
+```antlr
+range_expr : expr ".." expr |
+ expr ".." |
+ ".." expr |
+ ".." ;
+```
+
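For example, each of the four range forms above can appear as a slice index:

```rust
fn main() {
    let v = [1, 2, 3, 4, 5];
    assert_eq!(&v[1..3], &[2, 3]); // expr ".." expr
    assert_eq!(&v[3..], &[4, 5]);  // expr ".."
    assert_eq!(&v[..2], &[1, 2]);  // ".." expr
    assert_eq!(v[..].len(), 5);    // ".."
}
```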
### Unary operator expressions
**FIXME:** grammar?
### While loops
```antlr
-while_expr : "while" no_struct_literal_expr '{' block '}' ;
+while_expr : [ lifetime ':' ] "while" no_struct_literal_expr '{' block '}' ;
```
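An example of a `while` loop with the optional label, which a nested `break` may target:

```rust
fn main() {
    let mut n = 0;
    'countup: while n < 10 {
        n += 1;
        if n == 3 {
            break 'countup; // exits the labeled loop
        }
    }
    assert_eq!(n, 3);
}
```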
### Infinite loops
### For expressions
```antlr
-for_expr : "for" pat "in" no_struct_literal_expr '{' block '}' ;
+for_expr : [ lifetime ':' ] "for" pat "in" no_struct_literal_expr '{' block '}' ;
```
### If expressions
# Notation
-Rust's grammar is defined over Unicode code points, each conventionally denoted
-`U+XXXX`, for 4 or more hexadecimal digits `X`. _Most_ of Rust's grammar is
-confined to the ASCII range of Unicode, and is described in this document by a
-dialect of Extended Backus-Naur Form (EBNF), specifically a dialect of EBNF
-supported by common automated LL(k) parsing tools such as `llgen`, rather than
-the dialect given in ISO 14977. The dialect can be defined self-referentially
-as follows:
-
-```{.ebnf .notation}
-grammar : rule + ;
-rule : nonterminal ':' productionrule ';' ;
-productionrule : production [ '|' production ] * ;
-production : term * ;
-term : element repeats ;
-element : LITERAL | IDENTIFIER | '[' productionrule ']' ;
-repeats : [ '*' | '+' ] NUMBER ? | NUMBER ? | '?' ;
-```
-
-Where:
-
-- Whitespace in the grammar is ignored.
-- Square brackets are used to group rules.
-- `LITERAL` is a single printable ASCII character, or an escaped hexadecimal
- ASCII code of the form `\xQQ`, in single quotes, denoting the corresponding
- Unicode code point `U+00QQ`.
-- `IDENTIFIER` is a nonempty string of ASCII letters and underscores.
-- The `repeat` forms apply to the adjacent `element`, and are as follows:
- - `?` means zero or one repetition
- - `*` means zero or more repetitions
- - `+` means one or more repetitions
- - NUMBER trailing a repeat symbol gives a maximum repetition count
- - NUMBER on its own gives an exact repetition count
-
-This EBNF dialect should hopefully be familiar to many readers.
-
## Unicode productions
A few productions in Rust's grammar permit Unicode code points outside the ASCII
## Special Unicode Productions
The following productions in the Rust grammar are defined in terms of Unicode
-properties: `ident`, `non_null`, `non_star`, `non_eol`, `non_slash_or_star`,
-`non_single_quote` and `non_double_quote`.
+properties: `ident`, `non_null`, `non_eol`, `non_single_quote` and `non_double_quote`.
### Identifiers
-The `ident` production is any nonempty Unicode string of the following form:
+The `ident` production is any nonempty Unicode[^non_ascii_idents] string of the following form:
+
+[^non_ascii_idents]: Non-ASCII characters in identifiers are currently feature
+ gated. This is expected to improve soon.
- The first character has property `XID_start`
- The remaining characters have property `XID_continue`
- `non_null` is any single Unicode character aside from `U+0000` (null)
- `non_eol` is `non_null` restricted to exclude `U+000A` (`'\n'`)
-- `non_star` is `non_null` restricted to exclude `U+002A` (`*`)
-- `non_slash_or_star` is `non_null` restricted to exclude `U+002F` (`/`) and `U+002A` (`*`)
- `non_single_quote` is `non_null` restricted to exclude `U+0027` (`'`)
- `non_double_quote` is `non_null` restricted to exclude `U+0022` (`"`)
## Comments
-```{.ebnf .gram}
-comment : block_comment | line_comment ;
-block_comment : "/*" block_comment_body * "*/" ;
-block_comment_body : [block_comment | character] * ;
-line_comment : "//" non_eol * ;
-```
-
Comments in Rust code follow the general C++ style of line and block-comment
forms. Nested block comments are supported.
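An example showing that block comments nest, unlike in C:

```rust
fn main() {
    /* outer /* inner comment */ still part of the outer comment */
    // A line comment runs to the end of the line.
    let commented = true;
    assert!(commented);
}
```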
## Whitespace
-```{.ebnf .gram}
-whitespace_char : '\x20' | '\x09' | '\x0a' | '\x0d' ;
-whitespace : [ whitespace_char | comment ] + ;
-```
-
The `whitespace_char` production is any nonempty Unicode string consisting of
any of the following Unicode characters: `U+0020` (space, `' '`), `U+0009`
(tab, `'\t'`), `U+000A` (LF, `'\n'`), `U+000D` (CR, `'\r'`).
## Tokens
-```{.ebnf .gram}
-simple_token : keyword | unop | binop ;
-token : simple_token | ident | literal | symbol | whitespace token ;
-```
-
Tokens are primitive productions in the grammar defined by regular
(non-recursive) languages. "Simple" tokens are given in [string table
production](#string-table-productions) form, and occur in the rest of the
grammar as double-quoted strings. Other tokens have exact rules given.
-### Keywords
-
-<p id="keyword-table-marker"></p>
-
-| | | | | |
-|----------|----------|----------|----------|---------|
-| abstract | alignof | as | become | box |
-| break | const | continue | crate | do |
-| else | enum | extern | false | final |
-| fn | for | if | impl | in |
-| let | loop | macro | match | mod |
-| move | mut | offsetof | override | priv |
-| proc | pub | pure | ref | return |
-| Self | self | sizeof | static | struct |
-| super | trait | true | type | typeof |
-| unsafe | unsized | use | virtual | where |
-| while | yield | | | |
-
-
-Each of these keywords has special meaning in its grammar, and all of them are
-excluded from the `ident` rule.
-
-Note that some of these keywords are reserved, and do not currently do
-anything.
-
### Literals
A literal is an expression consisting of a single token, rather than a sequence
rather than referring to it by name or some other evaluation rule. A literal is
a form of constant expression, so is evaluated (primarily) at compile time.
-```{.ebnf .gram}
-lit_suffix : ident;
-literal : [ string_lit | char_lit | byte_string_lit | byte_lit | num_lit ] lit_suffix ?;
-```
-
The optional suffix is only used for certain numeric literals, but is
reserved for future extension. That is, the above gives the lexical
grammar, but a Rust parser will reject everything but the 12 special
##### Suffixes
| Integer | Floating-point |
|---------|----------------|
-| `u8`, `i8`, `u16`, `i16`, `u32`, `i32`, `u64`, `i64`, `is` (`isize`), `us` (`usize`) | `f32`, `f64` |
+| `u8`, `i8`, `u16`, `i16`, `u32`, `i32`, `u64`, `i64`, `isize`, `usize` | `f32`, `f64` |
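An example of suffixed literals:

```rust
fn main() {
    let a = 255u8;   // u8 suffix
    let b = -7i64;   // i64 suffix
    let c = 1.5f32;  // f32 suffix
    let n = 4usize;  // usize suffix
    assert_eq!(a as i64 + b, 248);
    assert!(c < 2.0);
    assert_eq!(n, 4);
}
```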
#### Character and string literals
-```{.ebnf .gram}
-char_lit : '\x27' char_body '\x27' ;
-string_lit : '"' string_body * '"' | 'r' raw_string ;
-
-char_body : non_single_quote
- | '\x5c' [ '\x27' | common_escape | unicode_escape ] ;
-
-string_body : non_double_quote
- | '\x5c' [ '\x22' | common_escape | unicode_escape ] ;
-raw_string : '"' raw_string_body '"' | '#' raw_string '#' ;
-
-common_escape : '\x5c'
- | 'n' | 'r' | 't' | '0'
- | 'x' hex_digit 2
-
-unicode_escape : 'u' '{' hex_digit+ 6 '}';
-
-hex_digit : 'a' | 'b' | 'c' | 'd' | 'e' | 'f'
- | 'A' | 'B' | 'C' | 'D' | 'E' | 'F'
- | dec_digit ;
-oct_digit : '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' ;
-dec_digit : '0' | nonzero_dec ;
-nonzero_dec: '1' | '2' | '3' | '4'
- | '5' | '6' | '7' | '8' | '9' ;
-```
-
##### Character literals
A _character literal_ is a single Unicode character enclosed within two
Raw string literals do not process any escapes. They start with the character
`U+0072` (`r`), followed by zero or more of the character `U+0023` (`#`) and a
-`U+0022` (double-quote) character. The _raw string body_ is not defined in the
-EBNF grammar above: it can contain any sequence of Unicode characters and is
-terminated only by another `U+0022` (double-quote) character, followed by the
-same number of `U+0023` (`#`) characters that preceded the opening `U+0022`
-(double-quote) character.
+`U+0022` (double-quote) character. The _raw string body_ can contain any sequence
+of Unicode characters and is terminated only by another `U+0022` (double-quote)
+character, followed by the same number of `U+0023` (`#`) characters that preceded
+the opening `U+0022` (double-quote) character.
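An example of raw string literals:

```rust
fn main() {
    // No escape processing: the backslash and 'n' are two literal characters.
    let raw = r"C:\new\dir";
    assert!(raw.contains("\\n"));

    // One '#' lets the body contain an unescaped double-quote.
    let quoted = r#"she said "hi""#;
    assert_eq!(quoted, "she said \"hi\"");
}
```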
All Unicode characters contained in the raw string body represent themselves,
the characters `U+0022` (double-quote) (except when followed by at least as
#### Byte and byte string literals
-```{.ebnf .gram}
-byte_lit : "b\x27" byte_body '\x27' ;
-byte_string_lit : "b\x22" string_body * '\x22' | "br" raw_byte_string ;
-
-byte_body : ascii_non_single_quote
- | '\x5c' [ '\x27' | common_escape ] ;
-
-byte_string_body : ascii_non_double_quote
- | '\x5c' [ '\x22' | common_escape ] ;
-raw_byte_string : '"' raw_byte_string_body '"' | '#' raw_byte_string '#' ;
-
-```
-
##### Byte literals
A _byte literal_ is a single ASCII character (in the `U+0000` to `U+007F`
-range) enclosed within two `U+0027` (single-quote) characters, with the
-exception of `U+0027` itself, which must be _escaped_ by a preceding U+005C
-character (`\`), or a single _escape_. It is equivalent to a `u8` unsigned
-8-bit integer _number literal_.
+range) or a single _escape_ preceded by the characters `U+0062` (`b`) and
+`U+0027` (single-quote), and followed by the character `U+0027`. If the character
+`U+0027` is present within the literal, it must be _escaped_ by a preceding
+`U+005C` (`\`) character. It is equivalent to a `u8` unsigned 8-bit integer
+_number literal_.
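Examples of byte literals:

```rust
fn main() {
    let a = b'A';        // the ASCII byte 0x41
    let quote = b'\'';   // U+0027 must be escaped
    let newline = b'\n'; // a common escape
    assert_eq!(a, 65u8);
    assert_eq!(quote, 0x27);
    assert_eq!(newline, 10);
}
```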
##### Byte string literals
followed by the character `U+0022`. If the character `U+0022` is present within
the literal, it must be _escaped_ by a preceding `U+005C` (`\`) character.
Alternatively, a byte string literal can be a _raw byte string literal_, defined
-below. A byte string literal is equivalent to a `&'static [u8]` borrowed array
+below. A byte string literal of length `n` is equivalent to a `&'static [u8; n]` borrowed fixed-size array
of unsigned 8-bit integers.
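An example of a byte string literal and its fixed-size array type:

```rust
fn main() {
    let bs: &'static [u8; 5] = b"hello"; // the length is part of the type
    assert_eq!(bs[0], b'h');
    assert_eq!(bs.len(), 5);
}
```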
Some additional _escapes_ are available in either byte or non-raw byte string
Raw byte string literals do not process any escapes. They start with the
character `U+0062` (`b`), followed by `U+0072` (`r`), followed by zero or more
of the character `U+0023` (`#`), and a `U+0022` (double-quote) character. The
-_raw string body_ is not defined in the EBNF grammar above: it can contain any
-sequence of ASCII characters and is terminated only by another `U+0022`
-(double-quote) character, followed by the same number of `U+0023` (`#`)
-characters that preceded the opening `U+0022` (double-quote) character. A raw
-byte string literal can not contain any non-ASCII byte.
+_raw string body_ can contain any sequence of ASCII characters and is terminated
+only by another `U+0022` (double-quote) character, followed by the same number of
+`U+0023` (`#`) characters that preceded the opening `U+0022` (double-quote)
+character. A raw byte string literal cannot contain any non-ASCII byte.
All characters contained in the raw string body represent their ASCII encoding,
the characters `U+0022` (double-quote) (except when followed by at least as
#### Number literals
-```{.ebnf .gram}
-num_lit : nonzero_dec [ dec_digit | '_' ] * float_suffix ?
- | '0' [ [ dec_digit | '_' ] * float_suffix ?
- | 'b' [ '1' | '0' | '_' ] +
- | 'o' [ oct_digit | '_' ] +
- | 'x' [ hex_digit | '_' ] + ] ;
-
-float_suffix : [ exponent | '.' dec_lit exponent ? ] ? ;
-
-exponent : ['E' | 'e'] ['-' | '+' ] ? dec_lit ;
-dec_lit : [ dec_digit | '_' ] + ;
-```
-
A _number literal_ is either an _integer literal_ or a _floating-point
literal_. The grammar for recognizing the two kinds of literals is mixed.
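Examples of number literals in each base, with separators and exponents:

```rust
fn main() {
    assert_eq!(0b1010, 10);         // binary
    assert_eq!(0o17, 15);           // octal
    assert_eq!(0xff, 255);          // hexadecimal
    assert_eq!(1_000_000, 1000000); // '_' separators are ignored
    assert_eq!(2e3, 2000.0);        // floating-point with exponent
}
```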
### Symbols
-```{.ebnf .gram}
-symbol : "::" | "->"
- | '#' | '[' | ']' | '(' | ')' | '{' | '}'
- | ',' | ';' ;
-```
-
Symbols are a general class of printable [token](#tokens) that play structural
roles in a variety of grammar productions. They are catalogued here for
completeness as the set of remaining miscellaneous printable tokens that do not
## Paths
-```{.ebnf .gram}
-expr_path : [ "::" ] ident [ "::" expr_path_tail ] + ;
-expr_path_tail : '<' type_expr [ ',' type_expr ] + '>'
- | expr_path ;
-
-type_path : ident [ type_path_tail ] + ;
-type_path_tail : '<' type_expr [ ',' type_expr ] + '>'
- | "::" type_path ;
-```
-
A _path_ is a sequence of one or more path components _logically_ separated by
a namespace qualifier (`::`). If a path consists of only one component, it may
refer to either an [item](#items) or a [variable](#variables) in a local control
## Macros
-```{.ebnf .gram}
-expr_macro_rules : "macro_rules" '!' ident '(' macro_rule * ')' ;
-macro_rule : '(' matcher * ')' "=>" '(' transcriber * ')' ';' ;
-matcher : '(' matcher * ')' | '[' matcher * ']'
- | '{' matcher * '}' | '$' ident ':' ident
- | '$' '(' matcher * ')' sep_token? [ '*' | '+' ]
- | non_special_token ;
-transcriber : '(' transcriber * ')' | '[' transcriber * ']'
- | '{' transcriber * '}' | '$' ident
- | '$' '(' transcriber * ')' sep_token? [ '*' | '+' ]
- | non_special_token ;
-```
-
`macro_rules` allows users to define syntax extensions in a declarative way. We
call such extensions "macros by example" or simply "macros" — to be distinguished
from the "procedural macros" defined in [compiler plugins][plugin].
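A minimal example of a macro with one matcher and one transcriber:

```rust
// `$e:expr` is a matcher binding; the transcriber re-emits it.
macro_rules! square {
    ($e:expr) => { $e * $e };
}

fn main() {
    assert_eq!(square!(3), 9);
    assert_eq!(square!(2 + 2), 16); // `expr` fragments keep their grouping
}
```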
# Crates and source files
-Rust is a *compiled* language. Its semantics obey a *phase distinction* between
-compile-time and run-time. Those semantic rules that have a *static
+Although Rust, like any other language, can be implemented by an interpreter as
+well as a compiler, the only existing implementation is a compiler —
+from now on referred to as *the* Rust compiler — and the language has
+always been designed to be compiled. For these reasons, this section assumes a
+compiler.
+
+Rust's semantics obey a *phase distinction* between compile-time and
+run-time.[^phase-distinction] Those semantic rules that have a *static
interpretation* govern the success or failure of compilation. Those semantics
that have a *dynamic interpretation* govern the behavior of the program at
run-time.
+[^phase-distinction]: This distinction would also exist in an interpreter.
+ Static checks like syntactic analysis, type checking, and lints should
+ happen before the program is executed regardless of when it is executed.
+
The compilation model centers on artifacts called _crates_. Each compilation
processes a single crate in source form, and if successful, produces a single
-crate in binary form: either an executable or a library.[^cratesourcefile]
+crate in binary form: either an executable or some sort of
+library.[^cratesourcefile]
[^cratesourcefile]: A crate is somewhat analogous to an *assembly* in the
ECMA-335 CLI model, a *library* in the SML/NJ Compilation Manager, a *unit*
A Rust source file describes a module, the name and location of which —
in the module tree of the current crate — are defined from outside the
source file: either by an explicit `mod_item` in a referencing source file, or
-by the name of the crate itself.
+by the name of the crate itself. Every source file is a module, but not every
+module needs its own source file: [module definitions](#modules) can be nested
+within one file.
Each source file contains a sequence of zero or more `item` definitions, and
-may optionally begin with any number of `attributes` that apply to the
-containing module. Attributes on the anonymous crate module define important
-metadata that influences the behavior of the compiler.
+may optionally begin with any number of [attributes](#items-and-attributes)
+that apply to the containing module, most of which influence the behavior of
+the compiler. The anonymous crate module can have additional attributes that
+apply to the crate as a whole.
```no_run
-// Crate name
+// Specify the crate name.
#![crate_name = "projx"]
-// Specify the output type
+// Specify the type of output artifact.
#![crate_type = "lib"]
-// Turn on a warning
+// Turn on a warning.
+// This can be done in any module, not just the anonymous crate module.
#![warn(non_camel_case_types)]
```
## Items
-```{.ebnf .gram}
-item : extern_crate_decl | use_decl | mod_item | fn_item | type_item
- | struct_item | enum_item | static_item | trait_item | impl_item
- | extern_block ;
-```
-
An _item_ is a component of a crate. Items are organized within a crate by a
nested set of [modules](#modules). Every crate has a single "outermost"
anonymous module; all further items within the crate have [paths](#paths)
### Modules
-```{.ebnf .gram}
-mod_item : "mod" ident ( ';' | '{' mod '}' );
-mod : item * ;
-```
-
A module is a container for zero or more [items](#items).
A _module item_ is a module, surrounded in braces, named, and prefixed with the
##### Extern crate declarations
-```{.ebnf .gram}
-extern_crate_decl : "extern" "crate" crate_name
-crate_name: ident | ( string_lit "as" ident )
-```
-
An _`extern crate` declaration_ specifies a dependency on an external crate.
The external crate is then bound into the declaring scope as the `ident`
provided in the `extern_crate_decl`.
##### Use declarations
-```{.ebnf .gram}
-use_decl : "pub" ? "use" [ path "as" ident
- | path_glob ] ;
-
-path_glob : ident [ "::" [ path_glob
- | '*' ] ] ?
- | '{' path_item [ ',' path_item ] * '}' ;
-
-path_item : ident | "self" ;
-```
-
A _use declaration_ creates one or more local name bindings synonymous with
some other [path](#paths). Usually a `use` declaration is used to shorten the
path required to refer to a module item. These declarations may appear at the
### Constant items
-```{.ebnf .gram}
-const_item : "const" ident ':' type '=' expr ';' ;
-```
-
A *constant item* is a named _constant value_ which is not associated with a
specific memory location in the program. Constants are essentially inlined
wherever they are used, meaning that they are copied directly into the relevant
### Static items
-```{.ebnf .gram}
-static_item : "static" ident ':' type '=' expr ';' ;
-```
-
A *static item* is similar to a *constant*, except that it represents a precise
memory location in the program. A static is never "inlined" at the usage site,
and all references to it refer to the same memory location. Static items have
### External blocks
-```{.ebnf .gram}
-extern_block_item : "extern" '{' extern_block '}' ;
-extern_block : [ foreign_fn ] * ;
-```
-
External blocks form the basis for Rust's foreign function interface.
Declarations in an external block describe symbols in external, non-Rust
libraries.
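A sketch of an external block declaring a function from the C standard library (this assumes the platform links `libc`, as is typical):

```rust
// Declare a symbol from an external, non-Rust library; calling it is unsafe.
extern {
    fn abs(input: i32) -> i32;
}

fn main() {
    let v = unsafe { abs(-3) };
    assert_eq!(v, 3);
}
```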
## Attributes
-```{.ebnf .gram}
-attribute : '#' '!' ? '[' meta_item ']' ;
-meta_item : ident [ '=' literal
- | '(' meta_seq ')' ] ? ;
-meta_seq : meta_item [ ',' meta_seq ] ? ;
-```
-
Any item declaration may have an _attribute_ applied to it. Attributes in Rust
are modeled on Attributes in ECMA-335, with the syntax coming from ECMA-334
(C#). An attribute is a general, free-form metadatum that is interpreted
The lint checks supported by the compiler can be found via `rustc -W help`,
along with their default settings. [Compiler
-plugins](book/plugins.html#lint-plugins) can provide additional lint checks.
+plugins](book/compiler-plugins.html#lint-plugins) can provide additional lint checks.
```{.ignore}
mod m1 {
terms of encapsulation).
If a feature is promoted to a language feature, then all existing programs will
-start to receive compilation warnings about #[feature] directives which enabled
+start to receive compilation warnings about `#![feature]` directives which enabled
the new feature (because the directive is no longer necessary). However, if it
is decided that a feature should be removed from the language, errors will be issued (if
there isn't a parser error first). The directive in this case is no longer
#### Variable declarations
-```{.ebnf .gram}
-let_decl : "let" pat [':' type ] ? [ init ] ? ';' ;
-init : [ '=' ] expr ;
-```
-
A _variable declaration_ introduces a new set of variables, given by a pattern. The
pattern may be followed by a type annotation, and/or an initializer expression.
When no type annotation is given, the compiler will infer the type, or signal
```{.tuple}
(0,);
(0.0, 4.5);
-("a", 4us, true);
+("a", 4usize, true);
```
### Unit expressions
### Structure expressions
-```{.ebnf .gram}
-struct_expr : expr_path '{' ident ':' expr
- [ ',' ident ':' expr ] *
- [ ".." expr ] '}' |
- expr_path '(' expr
- [ ',' expr ] * ')' |
- expr_path ;
-```
-
There are several forms of structure expressions. A _structure expression_
consists of the [path](#paths) of a [structure item](#structures), followed by
a brace-enclosed list of one or more comma-separated name-value pairs,
### Block expressions
-```{.ebnf .gram}
-block_expr : '{' [ stmt ';' | item ] *
- [ expr ] '}' ;
-```
-
A _block expression_ is similar to a module in terms of the declarations that
are possible. Each block conceptually introduces a new namespace scope. Use
items can bring new names into scopes and declared items are in scope for only
### Method-call expressions
-```{.ebnf .gram}
-method_call_expr : expr '.' ident paren_expr_list ;
-```
-
A _method call_ consists of an expression followed by a single dot, an
identifier, and a parenthesized expression-list. Method calls are resolved to
methods on specific traits, either statically dispatching to a method if the
### Field expressions
-```{.ebnf .gram}
-field_expr : expr '.' ident ;
-```
-
A _field expression_ consists of an expression followed by a single dot and an
identifier, when not immediately followed by a parenthesized expression-list
(the latter is a [method call expression](#method-call-expressions)). A field
### Array expressions
-```{.ebnf .gram}
-array_expr : '[' "mut" ? array_elems? ']' ;
-
-array_elems : [expr [',' expr]*] | [expr ';' expr] ;
-```
-
An [array](#array,-and-slice-types) _expression_ is written by enclosing zero
or more comma-separated expressions of uniform type in square brackets.
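Examples of both element forms from the grammar above:

```rust
fn main() {
    let a = [1, 2, 3]; // comma-separated form
    let b = [0u8; 4];  // repeat form: [expr ';' expr]
    assert_eq!(a[2], 3);
    assert_eq!(b.len(), 4);
}
```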
### Index expressions
-```{.ebnf .gram}
-idx_expr : expr '[' expr ']' ;
-```
-
[Array](#array,-and-slice-types)-typed expressions can be indexed by
writing a square-bracket-enclosed expression (the index) after them. When the
array is mutable, the resulting [lvalue](#lvalues,-rvalues-and-temporaries) can
### Range expressions
-```{.ebnf .gram}
-range_expr : expr ".." expr |
- expr ".." |
- ".." expr |
- ".." ;
-```
-
The `..` operator will construct an object of one of the `std::ops::Range` variants.
```
### Binary operator expressions
-```{.ebnf .gram}
-binop_expr : expr binop expr ;
-```
-
Binary operator expressions are given in terms of [operator
precedence](#operator-precedence).
expression. Parentheses can be used to explicitly specify evaluation order
within an expression.
-```{.ebnf .gram}
-paren_expr : '(' expr ')' ;
-```
-
An example of a parenthesized expression:
```
### Call expressions
-```{.ebnf .gram}
-expr_list : [ expr [ ',' expr ]* ] ? ;
-paren_expr_list : '(' expr_list ')' ;
-call_expr : expr paren_expr_list ;
-```
-
A _call expression_ invokes a function, providing zero or more input variables
and an optional location to move the function's output into. If the function
eventually returns, then the expression completes.
### Lambda expressions
-```{.ebnf .gram}
-ident_list : [ ident [ ',' ident ]* ] ? ;
-lambda_expr : '|' ident_list '|' expr ;
-```
-
A _lambda expression_ (sometimes called an "anonymous function expression")
defines a function and denotes it as a value, in a single expression. A lambda
expression is a pipe-symbol-delimited (`|`) list of identifiers followed by an
A `loop` expression denotes an infinite loop.
-```{.ebnf .gram}
-loop_expr : [ lifetime ':' ] "loop" '{' block '}';
-```
-
A `loop` expression may optionally have a _label_. The label is written as
a lifetime preceding the loop expression, as in `'foo: loop { }`. If a
label is present, then labeled `break` and `continue` expressions nested
### Break expressions
-```{.ebnf .gram}
-break_expr : "break" [ lifetime ];
-```
-
A `break` expression has an optional _label_. If the label is absent, then
executing a `break` expression immediately terminates the innermost loop
enclosing it. It is only permitted in the body of a loop. If the label is
### Continue expressions
-```{.ebnf .gram}
-continue_expr : "continue" [ lifetime ];
-```
-
A `continue` expression has an optional _label_. If the label is absent, then
executing a `continue` expression immediately terminates the current iteration
of the innermost loop enclosing it, returning control to the loop *head*. In
### While loops
-```{.ebnf .gram}
-while_expr : [ lifetime ':' ] "while" no_struct_literal_expr '{' block '}' ;
-```
-
A `while` loop begins by evaluating the boolean loop conditional expression.
If the loop conditional expression evaluates to `true`, the loop body block
executes and control returns to the loop conditional expression. If the loop
### For expressions
-```{.ebnf .gram}
-for_expr : [ lifetime ':' ] "for" pat "in" no_struct_literal_expr '{' block '}' ;
-```
-
A `for` expression is a syntactic construct for looping over elements provided
-by an implementation of `std::iter::Iterator`.
+by an implementation of `std::iter::IntoIterator`.
An example of a for loop over the contents of an array:
```
# type Foo = i32;
-# fn bar(f: Foo) { }
+# fn bar(f: &Foo) { }
# let a = 0;
# let b = 0;
# let c = 0;
let v: &[Foo] = &[a, b, c];
-for e in v.iter() {
- bar(*e);
+for e in v {
+ bar(e);
}
```
### If expressions
-```{.ebnf .gram}
-if_expr : "if" no_struct_literal_expr '{' block '}'
- else_tail ? ;
-
-else_tail : "else" [ if_expr | if_let_expr
- | '{' block '}' ] ;
-```
-
An `if` expression is a conditional branch in program control. The form of an
`if` expression is a condition expression, followed by a consequent block, any
number of `else if` conditions and blocks, and an optional trailing `else`
### Match expressions
-```{.ebnf .gram}
-match_expr : "match" no_struct_literal_expr '{' match_arm * '}' ;
-
-match_arm : attribute * match_pat "=>" [ expr "," | '{' block '}' ] ;
-
-match_pat : pat [ '|' pat ] * [ "if" expr ] ? ;
-```
-
A `match` expression branches on a *pattern*. The exact form of matching that
occurs depends on the pattern. Patterns consist of some combination of
literals, destructured arrays or enum constructors, structures and tuples,
### If let expressions
-```{.ebnf .gram}
-if_let_expr : "if" "let" pat '=' expr '{' block '}'
- else_tail ? ;
-else_tail : "else" [ if_expr | if_let_expr | '{' block '}' ] ;
-```
-
An `if let` expression is semantically identical to an `if` expression but in place
of a condition expression it expects a refutable let statement. If the value of the
expression on the right hand side of the let statement matches the pattern, the corresponding
### While let loops
-```{.ebnf .gram}
-while_let_expr : "while" "let" pat '=' expr '{' block '}' ;
-```
-
A `while let` loop is semantically identical to a `while` loop but in place of a
condition expression it expects a refutable let statement. If the value of the
expression on the right hand side of the let statement matches the pattern, the
### Return expressions
-```{.ebnf .gram}
-return_expr : "return" expr ? ;
-```
-
Return expressions are denoted with the keyword `return`. Evaluating a `return`
expression moves its argument into the designated output location for the
current function call, destroys the current function activation frame, and
The primitive types are the following:
-* The "unit" type `()`, having the single "unit" value `()` (occasionally called
- "nil"). [^unittype]
* The boolean type `bool` with values `true` and `false`.
* The machine types.
* The machine-dependent integer and floating-point types.
-[^unittype]: The "unit" value `()` is *not* a sentinel "null pointer" value for
- reference variables; the "unit" type is the implicit return type from functions
- otherwise lacking a return type, and can be used in other contexts (such as
- message-sending or type-parametric code) as a zero-size type.]
-
#### Machine types
The machine types are the following:
A value of type `str` is a Unicode string, represented as an array of 8-bit
unsigned bytes holding a sequence of UTF-8 code points. Since `str` is of
unknown size, it is not a _first-class_ type, but can only be instantiated
-through a pointer type, such as `&str` or `String`.
+through a pointer type, such as `&str`.
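An example showing that the length of a `&str` is measured in UTF-8 bytes, not characters:

```rust
fn main() {
    let s: &str = "héllo";            // 5 code points, 6 UTF-8 bytes
    assert_eq!(s.len(), 6);           // len() counts bytes
    assert_eq!(s.chars().count(), 5); // chars() yields code points
}
```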
### Tuple types
A `struct` *type* is a heterogeneous product of other types, called the
*fields* of the type.[^structtype]
-[^structtype]: `struct` types are analogous `struct` types in C,
+[^structtype]: `struct` types are analogous to `struct` types in C,
the *record* types of the ML family,
or the *structure* types of the Lisp family.
have the same memory layout.
The fields of a `struct` may be qualified by [visibility
-modifiers](#re-exporting-and-visibility), to allow access to data in a
+modifiers](#visibility-and-privacy), to allow access to data in a
structure outside a module.
A _tuple struct_ type is just like a structure type, except that the fields are
* References (`&`)
: These point to memory _owned by some other value_.
- A reference type is written `&type` for some lifetime-variable `f`,
- or just `&'a type` when you need an explicit lifetime.
+ A reference type is written `&type`,
+ or `&'a type` when you need to specify an explicit lifetime.
Copying a reference is a "shallow" operation:
it involves only copying the pointer itself.
- Releasing a reference typically has no effect on the value it points to,
- with the exception of temporary values, which are released when the last
- reference to them is released.
+ Releasing a reference has no effect on the value it points to,
+ but a reference to a temporary value will keep it alive during the scope
+ of the reference itself.
* Raw pointers (`*`)
: Raw pointers are pointers without safety or liveness guarantees.
Raw pointers are written as `*const T` or `*mut T`,
- for example `*const int` means a raw pointer to an integer.
+ for example `*const i32` means a raw pointer to a 32-bit integer.
Copying or dropping a raw pointer has no effect on the lifecycle of any
other value. Dereferencing a raw pointer or converting it to any other
pointer type is an [`unsafe` operation](#unsafe-functions).
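An example contrasting a reference with a raw pointer:

```rust
fn main() {
    let x = 10i32;
    let r: &i32 = &x;      // reference: safe, shallow to copy
    let p: *const i32 = r; // coercing to a raw pointer is safe...
    let y = unsafe { *p }; // ...but dereferencing it is not
    assert_eq!(y, 10);
}
```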
### Closure types
-```{.ebnf .notation}
-closure_type := [ 'unsafe' ] [ '<' lifetime-list '>' ] '|' arg-list '|'
- [ ':' bound-list ] [ '->' type ]
-lifetime-list := lifetime | lifetime ',' lifetime-list
-arg-list := ident ':' type | ident ':' type ',' arg-list
-bound-list := bound | bound '+' bound-list
-bound := path | lifetime
-```
-
-The type of a closure mapping an input of type `A` to an output of type `B` is
-`|A| -> B`. A closure with no arguments or return values has type `||`.
+A [lambda expression](#lambda-expressions) produces a closure value with
+a unique, anonymous type that cannot be written out.
-An example of creating and calling a closure:
+Depending on the requirements of the closure, its type implements one or
+more of the closure traits:
-```rust
-let captured_var = 10;
+* `FnOnce`
+ : The closure can be called once. A closure called as `FnOnce`
+ can move out values from its environment.
-let closure_no_args = || println!("captured_var={}", captured_var);
+* `FnMut`
+ : The closure can be called multiple times as mutable. A closure called as
+ `FnMut` can mutate values from its environment. `FnMut` implies
+ `FnOnce`.
-let closure_args = |arg: i32| -> i32 {
- println!("captured_var={}, arg={}", captured_var, arg);
- arg // Note lack of semicolon after 'arg'
-};
+* `Fn`
+ : The closure can be called multiple times through a shared reference.
+ A closure called as `Fn` can neither move out from nor mutate values
+ from its environment. `Fn` implies `FnMut` and `FnOnce`.
-fn call_closure<F: Fn(), G: Fn(i32) -> i32>(c1: F, c2: G) {
- c1();
- c2(2);
-}
-
-call_closure(closure_no_args, closure_args);
-
-```
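The three traits can be sketched with generic helper functions bounded on each of them (the `call_*` helpers are illustrative, not from the standard library):

```rust
// Each helper demands a different closure trait bound.
fn call_once<F: FnOnce() -> String>(f: F) -> String { f() }
fn call_mut<F: FnMut()>(mut f: F) { f(); f(); }
fn call_fn<F: Fn() -> i32>(f: F) -> i32 { f() + f() }

fn main() {
    let s = String::from("hello");
    // Moves `s` out of its environment, so this closure is only `FnOnce`.
    assert_eq!(call_once(move || s), "hello");

    let mut count = 0;
    // Mutates `count`, so this closure is `FnMut` (and `FnOnce`).
    call_mut(|| count += 1);
    assert_eq!(count, 2);

    let x = 5;
    // Only reads `x`, so this closure is `Fn` (and `FnMut` and `FnOnce`).
    assert_eq!(call_fn(|| x), 10);
}
```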
### Trait objects
its type parameters are types:
```ignore
-fn map<A: Clone, B: Clone>(f: |A| -> B, xs: &[A]) -> Vec<B> {
+fn to_vec<A: Clone>(xs: &[A]) -> Vec<A> {
if xs.is_empty() {
return vec![];
}
- let first: B = f(xs[0].clone());
- let mut rest: Vec<B> = map(f, xs.slice(1, xs.len()));
+ let first: A = xs[0].clone();
+ let mut rest: Vec<A> = to_vec(&xs[1..]);
rest.insert(0, first);
- return rest;
+ rest
}
```
-Here, `first` has type `B`, referring to `map`'s `B` type parameter; and `rest`
-has type `Vec<B>`, a vector type with element type `B`.
+Here, `first` has type `A`, referring to `to_vec`'s `A` type parameter; and `rest`
+has type `Vec<A>`, a vector with element type `A`.
### Self types
pattern syntax
[ffi]: book/ffi.html
-[plugin]: book/plugins.html
+[plugin]: book/compiler-plugins.html
% Unit testing
-Unit tests should live in a `test` submodule at the bottom of the module they
-test. Mark the `test` submodule with `#[cfg(test)]` so it is only compiled when
+Unit tests should live in a `tests` submodule at the bottom of the module they
+test. Mark the `tests` submodule with `#[cfg(test)]` so it is only compiled when
testing.
-The `test` module should contain:
+The `tests` module should contain:
* Imports needed only for testing.
* Functions marked with `#[test]` striving for full coverage of the parent module's
// Excerpt from std::str
#[cfg(test)]
-mod test {
+mod tests {
#[test]
fn test_eq() {
assert!((eq(&"".to_owned(), &"".to_owned())));
requirements, and writing low-level code, like device drivers and operating
systems. It improves on current languages targeting this space by having a
number of compile-time safety checks that produce no runtime overhead, while
-eliminating all data races. Rust also aims to achieve ‘zero-cost abstrations’
+eliminating all data races. Rust also aims to achieve ‘zero-cost abstractions’
even though some of these abstractions feel like those of a high-level
language. Even then, Rust still allows precise control like a low-level
language would.
* [Learn Rust](learn-rust.md)
* [Effective Rust](effective-rust.md)
* [The Stack and the Heap](the-stack-and-the-heap.md)
- * [Debug and Display](debug-and-display.md)
* [Testing](testing.md)
* [Conditional Compilation](conditional-compilation.md)
* [Documentation](documentation.md)
* [Strings](strings.md)
* [Generics](generics.md)
* [Traits](traits.md)
- * [Operators and Overloading](operators-and-overloading.md)
* [Drop](drop.md)
* [if let](if-let.md)
* [Trait Objects](trait-objects.md)
* [Casting between types](casting-between-types.md)
* [Associated Types](associated-types.md)
* [Unsized Types](unsized-types.md)
+ * [Operators and Overloading](operators-and-overloading.md)
* [Deref coercions](deref-coercions.md)
* [Macros](macros.md)
* [Raw Pointers](raw-pointers.md)
There’s one other key point here: because we’re bounding a generic with a
trait, this will get monomorphized, and therefore, we’ll be doing static
-dispatch into the closure. That’s pretty neat. In many langauges, closures are
+dispatch into the closure. That’s pretty neat. In many languages, closures are
inherently heap allocated, and will always involve dynamic dispatch. In Rust,
we can stack allocate our closure environment, and statically dispatch the
call. This happens quite often with iterators and their adapters, which often
+++ /dev/null
-% Debug and Display
-
-Coming soon!
% `Deref` coercions
-Coming soon!
+The standard library provides a special trait, [`Deref`][deref]. It’s normally
+used to overload `*`, the dereference operator:
+
+```rust
+use std::ops::Deref;
+
+struct DerefExample<T> {
+ value: T,
+}
+
+impl<T> Deref for DerefExample<T> {
+ type Target = T;
+
+ fn deref(&self) -> &T {
+ &self.value
+ }
+}
+
+fn main() {
+ let x = DerefExample { value: 'a' };
+ assert_eq!('a', *x);
+}
+```
+
+[deref]: ../std/ops/trait.Deref.html
+
+This is useful for writing custom pointer types. However, there’s a language
+feature related to `Deref`: ‘deref coercions’. Here’s the rule: If you have a
+type `U`, and it implements `Deref<Target=T>`, values of `&U` will
+automatically coerce to a `&T`. Here’s an example:
+
+```rust
+fn foo(s: &str) {
+ // borrow a string for a second
+}
+
+// String implements Deref<Target=str>
+let owned = "Hello".to_string();
+
+// therefore, this works:
+foo(&owned);
+```
+
+Using an ampersand in front of a value takes a reference to it. So `owned` is a
+`String`, `&owned` is an `&String`, and since `impl Deref<Target=str> for
+String`, `&String` will deref to `&str`, which `foo()` takes.
+
+That’s it. This rule is one of the only places in which Rust does an automatic
+conversion for you, but it adds a lot of flexibility. For example, the `Rc<T>`
+type implements `Deref<Target=T>`, so this works:
+
+```rust
+use std::rc::Rc;
+
+fn foo(s: &str) {
+ // borrow a string for a second
+}
+
+// String implements Deref<Target=str>
+let owned = "Hello".to_string();
+let counted = Rc::new(owned);
+
+// therefore, this works:
+foo(&counted);
+```
+
+All we’ve done is wrap our `String` in an `Rc<T>`. But we can now pass the
+`Rc<String>` around anywhere we’d have a `String`. The signature of `foo`
+didn’t change, but works just as well with either type. This example has two
+conversions: `Rc<String>` to `String` and then `String` to `&str`. Rust will do
+this as many times as possible until the types match.
+
+Another very common implementation provided by the standard library is:
+
+```rust
+fn foo(s: &[i32]) {
+ // borrow a slice for a second
+}
+
+// Vec<T> implements Deref<Target=[T]>
+let owned = vec![1, 2, 3];
+
+foo(&owned);
+```
+
+Vectors can `Deref` to a slice.
+
+## Deref and method calls
+
+`Deref` will also kick in when calling a method. In other words, these are
+the same two things in Rust:
+
+```rust
+struct Foo;
+
+impl Foo {
+ fn foo(&self) { println!("Foo"); }
+}
+
+let f = Foo;
+
+f.foo();
+```
+
+Even though `f` isn’t a reference, and `foo` takes `&self`, this works.
+That’s because these things are the same:
+
+```rust,ignore
+f.foo();
+(&f).foo();
+(&&f).foo();
+(&&&&&&&&f).foo();
+```
+
+A value of type `&&&&&&&&&&&&&&&&Foo` can still have methods defined on `Foo`
+called, because the compiler will insert as many * operations as necessary to
+get it right. And since it’s inserting `*`s, that uses `Deref`.
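The same mechanism applies to custom pointer types. As an illustrative sketch (the `Wrapper` type is hypothetical, not from the standard library), a `&str` method resolves through a user-defined `Deref` impl:

```rust
use std::ops::Deref;

struct Wrapper(String);

impl Deref for Wrapper {
    type Target = str;

    fn deref(&self) -> &str {
        &self.0
    }
}

fn main() {
    let w = Wrapper(String::from("hello"));
    // `len` is defined on `str`; method resolution derefs `Wrapper` to find it.
    assert_eq!(w.len(), 5);
}
```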
## Generation options
-`rustdoc` also contains a few other options on the command line, for further customiziation:
+`rustdoc` also contains a few other options on the command line, for further customization:
- `--html-in-header FILE`: includes the contents of FILE at the end of the
`<head>...</head>` section.
% Getting Started
This first section of the book will get you going with Rust and its tooling.
-First, we’ll install Rust. Then: the classic ‘Hello World’ program. Finally,
+First, we’ll install Rust. Then, the classic ‘Hello World’ program. Finally,
we’ll talk about Cargo, Rust’s build system and package manager.
When a compiler is compiling your program, it does a number of different
things. One of the things that it does is turn the text of your program into an
-'abstract syntax tree,' or 'AST.' This tree is a representation of the
+‘abstract syntax tree’, or ‘AST’. This tree is a representation of the
structure of your program. For example, `2 + 3` can be turned into a tree:
```text
Luckily, as you may have guessed with the leading question, you can! Rust provides
the ability to use this ‘method call syntax’ via the `impl` keyword.
-## Method calls
+# Method calls
Here’s how it works:
}
```
-## Chaining method calls
+# Chaining method calls
So, now we know how to call a method, such as `foo.bar()`. But what about our
original example, `foo.bar().baz()`? This is called ‘method chaining’, and we
We just say we’re returning a `Circle`. With this method, we can grow a new
circle to any arbitrary size.
-## Static methods
+# Static methods
You can also define methods that do not take a `self` parameter. Here’s a
pattern that’s very common in Rust code:
are called with the `Struct::method()` syntax, rather than the `ref.method()`
syntax.
-## Builder Pattern
+# Builder Pattern
Let’s say that we want our users to be able to create Circles, but we will
allow them to only set the properties they care about. Otherwise, the `x`
% Mutability
-Coming Soon
+Mutability, the ability to change something, works a bit differently in Rust
+than in other languages. The first aspect of mutability is its non-default
+status:
+
+```rust,ignore
+let x = 5;
+x = 6; // error!
+```
+
+We can introduce mutability with the `mut` keyword:
+
+```rust
+let mut x = 5;
+
+x = 6; // no problem!
+```
+
+This is a mutable [variable binding][vb]. When a binding is mutable, it means
+you’re allowed to change what the binding points to. So in the above example,
+it’s not so much that the value at `x` is changing, but that the binding
+changed from one `i32` to another.
+
+[vb]: variable-bindings.html
+
+If you want to change the value a reference points to, you’ll need a [mutable reference][mr]:
+
+```rust
+let mut x = 5;
+let y = &mut x;
+```
+
+[mr]: references-and-borrowing.html
+
+`y` is an immutable binding to a mutable reference, which means that you can’t
+bind `y` to something else (`y = &mut z`), but you can mutate the thing that’s
+bound to `y` (`*y = 5`). A subtle distinction.
+
+Of course, if you need both:
+
+```rust
+let mut x = 5;
+let mut y = &mut x;
+```
+
+Now `y` can be bound to another value, and the value it’s referencing can be
+changed.
+
+It’s important to note that `mut` is part of a [pattern][pattern], so you
+can do things like this:
+
+```rust
+let (mut x, y) = (5, 6);
+
+fn foo(mut x: i32) {
+# }
+```
+
+[pattern]: patterns.html
+
+# Interior vs. Exterior Mutability
+
+However, when we say something is ‘immutable’ in Rust, that doesn’t mean that
+it’s not able to be changed: we mean something has ‘exterior mutability’. Consider,
+for example, [`Arc<T>`][arc]:
+
+```rust
+use std::sync::Arc;
+
+let x = Arc::new(5);
+let y = x.clone();
+```
+
+[arc]: ../std/sync/struct.Arc.html
+
+When we call `clone()`, the `Arc<T>` needs to update the reference count. Yet
+we’ve not used any `mut`s here, `x` is an immutable binding, and we didn’t take
+`&mut 5` or anything. So what gives?
+
+To understand this, we have to go back to the core of Rust’s guiding philosophy, memory
+safety, and the mechanism by which Rust guarantees it, the
+[ownership][ownership] system, and more specifically, [borrowing][borrowing]:
+
+> You may have one or the other of these two kinds of borrows, but not both at
+> the same time:
+>
+> * 0 to N references (`&T`) to a resource.
+> * exactly one mutable reference (`&mut T`)
+
+[ownership]: ownership.html
+[borrowing]: borrowing.html#The-Rules
+
+So, that’s the real definition of ‘immutability’: is this safe to have two
+pointers to? In `Arc<T>`’s case, yes: the mutation is entirely contained inside
+the structure itself. It’s not user facing. For this reason, it hands out `&T`
+with `clone()`. If it handed out `&mut T`s, though, that would be a problem.
+
+Other types, like the ones in the [`std::cell`][stdcell] module, have the
+opposite: interior mutability. For example:
+
+```rust
+use std::cell::RefCell;
+
+let x = RefCell::new(42);
+
+let y = x.borrow_mut();
+```
+
+[stdcell]: ../std/cell/index.html
+
+RefCell hands out `&mut` references to what’s inside of it with the
+`borrow_mut()` method. Isn’t that dangerous? What if we do:
+
+```rust,ignore
+use std::cell::RefCell;
+
+let x = RefCell::new(42);
+
+let y = x.borrow_mut();
+let z = x.borrow_mut();
+# (y, z);
+```
+
+This will in fact panic, at runtime. This is what `RefCell` does: it enforces
+Rust’s borrowing rules at runtime, and `panic!`s if they’re violated. This
+allows us to get around another aspect of Rust’s mutability rules. Let’s talk
+about it first.
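For contrast, a sketch where the first borrow ends before the second begins, so the runtime check passes:

```rust
use std::cell::RefCell;

fn main() {
    let x = RefCell::new(42);

    {
        let mut y = x.borrow_mut();
        *y += 1;
    } // the first mutable borrow ends here

    // A second borrow is fine once the first has been released.
    assert_eq!(*x.borrow(), 43);
}
```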
+
+## Field-level mutability
+
+Mutability is a property of either a borrow (`&mut`) or a binding (`let mut`).
+This means that, for example, you cannot have a [`struct`][struct] with
+some fields mutable and some immutable:
+
+```rust,ignore
+struct Point {
+ x: i32,
+ mut y: i32, // nope
+}
+```
+
+The mutability of a struct is in its binding:
+
+```rust,ignore
+struct Point {
+ x: i32,
+ y: i32,
+}
+
+let mut a = Point { x: 5, y: 6 };
+
+a.x = 10;
+
+let b = Point { x: 5, y: 6};
+
+b.x = 10; // error: cannot assign to immutable field `b.x`
+```
+
+[struct]: structs.html
+
+However, by using `Cell<T>`, you can emulate field-level mutability:
+
+```rust
+use std::cell::Cell;
+
+struct Point {
+ x: i32,
+ y: Cell<i32>,
+}
+
+let mut point = Point { x: 5, y: Cell::new(6) };
+
+point.y.set(7);
+
+println!("y: {:?}", point.y);
+```
+
+This will print `y: Cell { value: 7 }`. We’ve successfully updated `y`.
% Operators and Overloading
-Coming soon!
+Rust allows for a limited form of operator overloading. There are certain
+operators that are able to be overloaded. To support a particular operator
+between types, there’s a specific trait that you can implement, which then
+overloads the operator.
+
+For example, the `+` operator can be overloaded with the `Add` trait:
+
+```rust
+use std::ops::Add;
+
+#[derive(Debug)]
+struct Point {
+ x: i32,
+ y: i32,
+}
+
+impl Add for Point {
+ type Output = Point;
+
+ fn add(self, other: Point) -> Point {
+ Point { x: self.x + other.x, y: self.y + other.y }
+ }
+}
+
+fn main() {
+ let p1 = Point { x: 1, y: 0 };
+ let p2 = Point { x: 2, y: 3 };
+
+ let p3 = p1 + p2;
+
+ println!("{:?}", p3);
+}
+```
+
+In `main`, we can use `+` on our two `Point`s, since we’ve implemented
+`Add<Output=Point>` for `Point`.
+
+There are a number of operators that can be overloaded this way, and all of
+their associated traits live in the [`std::ops`][stdops] module. Check out its
+documentation for the full list.
+
+[stdops]: ../std/ops/index.html
+
+Implementing these traits follows a pattern. Let’s look at [`Add`][add] in more
+detail:
+
+```rust
+# mod foo {
+pub trait Add<RHS = Self> {
+ type Output;
+
+ fn add(self, rhs: RHS) -> Self::Output;
+}
+# }
+```
+
+[add]: ../std/ops/trait.Add.html
+
+There are three types in total involved here: the type you `impl Add` for, `RHS`,
+which defaults to `Self`, and `Output`. For an expression `let z = x + y`, `x`
+is the `Self` type, `y` is the RHS, and `z` is the `Self::Output` type.
+
+```rust
+# struct Point;
+# use std::ops::Add;
+impl Add<i32> for Point {
+ type Output = f64;
+
+ fn add(self, rhs: i32) -> f64 {
+ // add an i32 to a Point and get an f64
+# 1.0
+ }
+}
+```
+
+will let you do this:
+
+```rust,ignore
+let p: Point = // ...
+let x: f64 = p + 2i32;
+```
Here’s a list of the different numeric types, with links to their documentation
in the standard library:
+* [i8](../std/primitive.i8.html)
* [i16](../std/primitive.i16.html)
* [i32](../std/primitive.i32.html)
* [i64](../std/primitive.i64.html)
-* [i8](../std/primitive.i8.html)
+* [u8](../std/primitive.u8.html)
* [u16](../std/primitive.u16.html)
* [u32](../std/primitive.u32.html)
* [u64](../std/primitive.u64.html)
-* [u8](../std/primitive.u8.html)
* [isize](../std/primitive.isize.html)
* [usize](../std/primitive.usize.html)
* [f32](../std/primitive.f32.html)
Integer types come in two varieties: signed and unsigned. To understand the
difference, let’s consider a number with four bits of size. A signed, four-bit
number would let you store numbers from `-8` to `+7`. Signed numbers use
-â\80\98twoâ\80\99s compliment representationâ\80\99. An unsigned four bit number, since it does
+â\80\9ctwoâ\80\99s complement representationâ\80\9d. An unsigned four-bit number, since it does
not need to store negatives, can store values from `0` to `+15`.
Unsigned types use a `u` for their category, and signed types use `i`. The `i`
is for ‘integer’. So `u8` is an eight-bit unsigned number, and `i8` is an
-eight-bit signed number.
+eight-bit signed number.
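These ranges can be sketched with Rust’s eight-bit types:

```rust
fn main() {
    // Signed eight-bit range: -128 to 127; unsigned: 0 to 255.
    assert_eq!(i8::MIN, -128);
    assert_eq!(i8::MAX, 127);
    assert_eq!(u8::MAX, 255);

    // Two’s complement: the bit pattern of -1i8 is all ones, i.e. 255 as a u8.
    assert_eq!((-1i8) as u8, 255);
}
```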
## Fixed size types
## Floating-point types
-Rust also two floating point types: `f32` and `f64`. These correspond to
+Rust also has two floating point types: `f32` and `f64`. These correspond to
IEEE-754 single and double precision numbers.
# Arrays
Remember [before][let] when I said the left-hand side of a `let` statement was more
powerful than just assigning a binding? Here we are. We can put a pattern on
the left-hand side of the `let`, and if it matches up to the right-hand side,
-we can assign multiple bindings at once. In this case, `let` "destructures,"
-or "breaks up," the tuple, and assigns the bits to three bindings.
+we can assign multiple bindings at once. In this case, `let` “destructures”
+or “breaks up” the tuple, and assigns the bits to three bindings.
[let]: variable-bindings.html
This is a very common use of `assert_eq!`: call some function with
some known arguments and compare it to the expected output.
-# The `test` module
+# The `tests` module
There is one way in which our existing example is not idiomatic: it's
-missing the test module. The idiomatic way of writing our example
+missing the `tests` module. The idiomatic way of writing our example
looks like this:
```{rust,ignore}
}
#[cfg(test)]
-mod test {
+mod tests {
use super::add_two;
#[test]
}
```
-There's a few changes here. The first is the introduction of a `mod test` with
+There are a few changes here. The first is the introduction of a `mod tests` with
a `cfg` attribute. The module allows us to group all of our tests together, and
to also define helper functions if needed, that don't become a part of the rest
of our crate. The `cfg` attribute only compiles our test code if we're
}
#[cfg(test)]
-mod test {
+mod tests {
use super::*;
#[test]
Running target/adder-91b3e234d4ed382a
running 1 test
-test test::it_works ... ok
+test tests::it_works ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured
It works!
-The current convention is to use the `test` module to hold your "unit-style"
+The current convention is to use the `tests` module to hold your "unit-style"
tests. Anything that just tests one small bit of functionality makes sense to
go here. But what about "integration-style" tests instead? For that, we have
the `tests` directory
Running target/adder-91b3e234d4ed382a
running 1 test
-test test::it_works ... ok
+test tests::it_works ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured
Now we have three sections: our previous test is also run, as well as our new
one.
-That's all there is to the `tests` directory. The `test` module isn't needed
+That's all there is to the `tests` directory. The `tests` module isn't needed
here, since the whole thing is focused on tests.
Let's finally check out that third section: documentation tests.
}
#[cfg(test)]
-mod test {
+mod tests {
use super::*;
#[test]
Running target/adder-91b3e234d4ed382a
running 1 test
-test test::it_works ... ok
+test tests::it_works ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured
that implements `Foo`: only one copy is generated, often (but not always)
resulting in less code bloat. However, this comes at the cost of requiring
slower virtual function calls, and effectively inhibiting any chance of
-inlining and related optimisations from occurring.
+inlining and related optimizations from occurring.
### Why pointers?
```rust,ignore
let mut f = std::fs::File::open("foo.txt").ok().expect("Couldn’t open foo.txt");
let result = f.write("whatever".as_bytes());
-# result.unwrap(); // ignore the erorr
+# result.unwrap(); // ignore the error
```
Here’s the error:
let mut f = std::fs::File::open("foo.txt").ok().expect("Couldn’t open foo.txt");
let result = f.write("whatever".as_bytes());
-# result.unwrap(); // ignore the erorr
+# result.unwrap(); // ignore the error
```
This will compile without error.
% Variable Bindings
-Vitually every non-’Hello World’ Rust program uses *variable bindings*. They
+Virtually every non-‘Hello World’ Rust program uses *variable bindings*. They
look like this:
```rust
}
#[cfg(test)]
-mod test {
+mod tests {
extern crate test;
use self::test::Bencher;
use boxed::Box;
//!
//! ## Precision
//!
-//! For non-numeric types, this can be considered a "maximum width". If the
-//! resulting string is longer than this width, then it is truncated down to
-//! this many characters and only those are emitted.
+//! For non-numeric types, this can be considered a "maximum width". If the resulting string is
+//! longer than this width, then it is truncated down to this many characters and only those are
+//! emitted.
//!
//! For integral types, this has no meaning currently.
//!
-//! For floating-point types, this indicates how many digits after the decimal
-//! point should be printed.
+//! For floating-point types, this indicates how many digits after the decimal point should be
+//! printed.
+//!
+//! There are three possible ways to specify the desired `precision`:
+//!
+//! 1. An integer `.N`,
+//! 2. an integer followed by dollar sign `.N$`, or
+//! 3. an asterisk `.*`.
+//!
+//! The first specification, `.N`, means the integer `N` itself is the precision.
+//!
+//! The second, `.N$`, means use format *argument* `N` (which must be a `usize`) as the precision.
+//!
+//! Finally, `.*` means that this `{...}` is associated with *two* format inputs rather than one:
+//! the first input holds the `usize` precision, and the second holds the value to print. Note
+//! that in this case, if one uses the format string `{<arg>:<spec>.*}`, then the `<arg>` part
+//! refers to the *value* to print, and the `precision` must come in the input preceding `<arg>`.
+//!
+//! For example, these:
+//!
+//! ```
+//! // Hello {arg 0 (x)} is {arg 1 (0.01) with precision specified inline (5)}
+//! println!("Hello {0} is {1:.5}", "x", 0.01);
+//!
+//! // Hello {arg 1 (x)} is {arg 2 (0.01) with precision specified in arg 0 (5)}
+//! println!("Hello {1} is {2:.0$}", 5, "x", 0.01);
+//!
+//! // Hello {arg 0 (x)} is {arg 2 (0.01) with precision specified in arg 1 (5)}
+//! println!("Hello {0} is {2:.1$}", "x", 5, 0.01);
+//!
+//! // Hello {next arg (x)} is {second of next two args (0.01) with precision
+//! // specified in first of next two args (5)}
+//! println!("Hello {} is {:.*}", "x", 5, 0.01);
+//!
+//! // Hello {next arg (x)} is {arg 2 (0.01) with precision
+//! // specified in its predecessor (5)}
+//! println!("Hello {} is {2:.*}", "x", 5, 0.01);
+//! ```
+//!
+//! All print the same thing:
+//!
+//! ```text
+//! Hello x is 0.01000
+//! ```
+//!
+//! While these:
+//!
+//! ```
+//! println!("{}, `{name:.*}` has 3 fractional digits", "Hello", 3, name=1234.56);
+//! println!("{}, `{name:.*}` has 3 characters", "Hello", 3, name="1234.56");
+//! ```
+//!
+//! print two significantly different things:
+//!
+//! ```text
+//! Hello, `1234.560` has 3 fractional digits
+//! Hello, `123` has 3 characters
+//! ```
//!
//! # Escaping
//!
}
#[cfg(test)]
-mod test {
+mod tests {
use std::clone::Clone;
use std::iter::{Iterator, IntoIterator};
use std::option::Option::{Some, None, self};
}
#[cfg(test)]
-mod test {
+mod tests {
use core::iter::{Iterator, self};
use core::option::Option::Some;
//! Traits for conversions between types.
//!
//! The traits in this module provide a general way to talk about conversions from one type to
-//! another. They follow the standard Rust conventions of `as`/`to`/`into`/`from`.
+//! another. They follow the standard Rust conventions of `as`/`into`/`from`.
//!
//! Like many traits, these are often used as bounds for generic functions, to support arguments of
//! multiple types.
/// `TraitObject` is guaranteed to match layouts, but it is not the
/// type of trait objects (e.g. the fields are not directly accessible
/// on a `&SomeTrait`) nor does it control that layout (changing the
-/// definition will not change the layout of a `&SometTrait`). It is
+/// definition will not change the layout of a `&SomeTrait`). It is
/// only designed to be used by unsafe code that needs to manipulate
/// the low-level details.
///
}
#[cfg(test)]
-mod test {
+mod tests {
use core::option::Option;
use core::option::Option::{Some, None};
use core::num::Float;
#[cfg(test)]
-mod test {
+mod tests {
use std::prelude::v1::*;
use core::iter::order;
}
#[cfg(test)]
-mod test {
+mod tests {
use std::prelude::v1::*;
use distributions::{Sample, IndependentSample};
}
#[cfg(test)]
-mod test {
+mod tests {
use std::prelude::v1::*;
use distributions::{Sample, IndependentSample};
#[cfg(test)]
-mod test {
+mod tests {
use std::prelude::v1::*;
use core::iter::order;
}
#[cfg(test)]
-mod test {
+mod tests {
use std::prelude::v1::*;
use core::iter::{order, repeat};
into a variable called `op_string` while simultaneously requiring the inner
String to be moved into a variable called `s`.
+```
let x = Some("s".to_string());
match x {
op_string @ Some(s) => ...
None => ...
}
+```
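One way to avoid the conflict is to bind the whole value by reference and not name the inner value; a sketch:

```rust
fn main() {
    let x = Some("s".to_string());
    // Bind the whole Option by reference; the inner String is not moved.
    match x {
        ref op_string @ Some(_) => assert_eq!(format!("{:?}", op_string), "Some(\"s\")"),
        None => unreachable!(),
    }
}
```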
See also Error 303.
"##,
referenced in the pattern guard code. Doing so however would prevent the name
from being available in the body of the match arm. Consider the following:
+```
match Some("hi".to_string()) {
Some(s) if s.len() == 0 => // use s.
...
}
+```
The variable `s` has type String, and its use in the guard is as a variable of
type String. The guard code effectively executes in a separate scope to the body
innocuous, the problem is most clear when considering functions that take their
argument by value.
+```
match Some("hi".to_string()) {
Some(s) if { drop(s); false } => (),
Some(s) => // use s.
...
}
+```
The value would be dropped in the guard then become unavailable not only in the
body of that arm but also in all subsequent arms! The solution is to bind by
You can build a free-standing crate by adding `#![no_std]` to the crate
attributes:
+```
#![feature(no_std)]
#![no_std]
+```
See also https://doc.rust-lang.org/book/no-stdlib.html
"##,
If you want to match against a `static`, consider using a guard instead:
+```
static FORTY_TWO: i32 = 42;
match Some(42) {
Some(x) if x == FORTY_TWO => ...
...
}
+```
"##,
E0161: r##"
match was successful. If the match is irrefutable (when it cannot fail to match),
use a regular `let`-binding instead. For instance:
+```
struct Irrefutable(i32);
let irr = Irrefutable(0);
// Try this instead:
let Irrefutable(x) = irr;
foo(x);
+```
"##,
E0165: r##"
match was successful. If the match is irrefutable (when it cannot fail to match),
use a regular `let`-binding inside a `loop` instead. For instance:
+```
struct Irrefutable(i32);
let irr = Irrefutable(0);
let Irrefutable(x) = irr;
...
}
+```
"##,
E0170: r##"
Enum variants are qualified by default. For example, given this type:
+```
enum Method {
GET,
POST
}
+```
you would match it using:
+```
match m {
Method::GET => ...
Method::POST => ...
}
+```
If you don't qualify the names, the code will bind new variables named "GET" and
"POST" instead. This behavior is likely not what you want, so rustc warns when
Qualified names are good practice, and most code works well with them. But if
you prefer them unqualified, you can import the variants into scope:
+```
use Method::*;
enum Method { GET, POST }
+```
"##,
E0267: r##"
This error indicates that the given recursion limit could not be parsed. Ensure
that the value provided is a positive integer between quotes, like so:
+```
#![recursion_limit="1000"]
+```
"##,
E0297: r##"
loop variable, consider using a `match` or `if let` inside the loop body. For
instance:
+```
// This fails because `None` is not covered.
for Some(x) in xs {
...
...
}
}
+```
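A working version, sketched with a concrete vector, moves the refutable pattern into the loop body:

```rust
fn main() {
    let xs = vec![Some(1), None, Some(3)];
    let mut sum = 0;
    for item in xs {
        // `if let` handles the `None` case by simply skipping it.
        if let Some(x) = item {
            sum += x;
        }
    }
    assert_eq!(sum, 4);
}
```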
"##,
E0301: r##"
exhaustive. For instance, the following would not match any arm if mutable
borrows were allowed:
+```
match Some(()) {
None => { },
option if option.take().is_none() => { /* impossible, option is `Some` */ },
Some(_) => { } // When the previous match failed, the option became `None`.
}
+```
"##,
E0302: r##"
exhaustive. For instance, the following would not match any arm if assignments
were allowed:
+```
match Some(()) {
None => { },
option if { option = None; false } => { },
Some(_) => { } // When the previous match failed, the option became `None`.
}
+```
"##,
E0303: r##"
Updates to the borrow checker in a future version of Rust may remove this
restriction, but for now patterns must be rewritten without sub-bindings.
-// Before.
-match Some("hi".to_string()) {
- ref op_string_ref @ Some(ref s) => ...
+```
+// Code like this...
+match Some(5) {
+ ref op_num @ Some(num) => ...
None => ...
}
}
None => ...
}
+```
The `op_num` binding has type `&Option<i32>` in both cases.
pub mod entry;
pub mod expr_use_visitor;
pub mod fast_reject;
+ pub mod free_region;
pub mod intrinsicck;
pub mod infer;
+ pub mod implicator;
pub mod lang_items;
pub mod liveness;
pub mod mem_categorization;
--- /dev/null
+// Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+//! This file defines the `FreeRegionMap`, which tracks the relationships between free regions.
+
+use middle::implicator::Implication;
+use middle::ty::{self, FreeRegion};
+use util::common::can_reach;
+use util::nodemap::FnvHashMap;
+use util::ppaux::Repr;
+
+#[derive(Clone)]
+pub struct FreeRegionMap {
+ /// `free_region_map` maps from a free region `a` to a list of
+    /// free regions `bs` such that `a <= b` for all `b` in `bs`
+ map: FnvHashMap<FreeRegion, Vec<FreeRegion>>,
+}
+
+impl FreeRegionMap {
+ pub fn new() -> FreeRegionMap {
+ FreeRegionMap { map: FnvHashMap() }
+ }
+
+ pub fn relate_free_regions_from_implications<'tcx>(&mut self,
+ tcx: &ty::ctxt<'tcx>,
+ implications: &[Implication<'tcx>])
+ {
+ for implication in implications {
+ debug!("implication: {}", implication.repr(tcx));
+ match *implication {
+ Implication::RegionSubRegion(_, ty::ReFree(free_a), ty::ReFree(free_b)) => {
+ self.relate_free_regions(free_a, free_b);
+ }
+ Implication::RegionSubRegion(..) |
+ Implication::RegionSubClosure(..) |
+ Implication::RegionSubGeneric(..) |
+ Implication::Predicate(..) => {
+ }
+ }
+ }
+ }
+
+ pub fn relate_free_regions_from_predicates<'tcx>(&mut self,
+ tcx: &ty::ctxt<'tcx>,
+ predicates: &[ty::Predicate<'tcx>]) {
+ debug!("relate_free_regions_from_predicates(predicates={})", predicates.repr(tcx));
+ for predicate in predicates {
+ match *predicate {
+ ty::Predicate::Projection(..) |
+ ty::Predicate::Trait(..) |
+ ty::Predicate::Equate(..) |
+ ty::Predicate::TypeOutlives(..) => {
+ // No region bounds here
+ }
+ ty::Predicate::RegionOutlives(ty::Binder(ty::OutlivesPredicate(r_a, r_b))) => {
+ match (r_a, r_b) {
+ (ty::ReFree(fr_a), ty::ReFree(fr_b)) => {
+ // Record that `'a:'b`. Or, put another way, `'b <= 'a`.
+ self.relate_free_regions(fr_b, fr_a);
+ }
+ _ => {
+ // All named regions are instantiated with free regions.
+ tcx.sess.bug(
+ &format!("record_region_bounds: non free region: {} / {}",
+ r_a.repr(tcx),
+ r_b.repr(tcx)));
+ }
+ }
+ }
+ }
+ }
+ }
+
+ pub fn relate_free_regions(&mut self, sub: FreeRegion, sup: FreeRegion) {
+ let mut sups = self.map.entry(sub).or_insert(Vec::new());
+ if !sups.contains(&sup) {
+ sups.push(sup);
+ }
+ }
+
+ /// Determines whether two free regions have a subregion relationship
+ /// by walking the graph encoded in `map`. Note that
+ /// it is possible that `sub != sup` and `sub <= sup` and `sup <= sub`
+ /// (that is, the user can give two different names to the same lifetime).
+ pub fn sub_free_region(&self, sub: FreeRegion, sup: FreeRegion) -> bool {
+ can_reach(&self.map, sub, sup)
+ }
+
+ /// Determines whether one region is a subregion of another. This is intended to run *after
+ /// inference* and sadly the logic is somewhat duplicated with the code in infer.rs.
+ pub fn is_subregion_of(&self,
+ tcx: &ty::ctxt,
+ sub_region: ty::Region,
+ super_region: ty::Region)
+ -> bool {
+ debug!("is_subregion_of(sub_region={:?}, super_region={:?})",
+ sub_region, super_region);
+
+ sub_region == super_region || {
+ match (sub_region, super_region) {
+ (ty::ReEmpty, _) |
+ (_, ty::ReStatic) =>
+ true,
+
+ (ty::ReScope(sub_scope), ty::ReScope(super_scope)) =>
+ tcx.region_maps.is_subscope_of(sub_scope, super_scope),
+
+ (ty::ReScope(sub_scope), ty::ReFree(ref fr)) =>
+ tcx.region_maps.is_subscope_of(sub_scope, fr.scope.to_code_extent()),
+
+ (ty::ReFree(sub_fr), ty::ReFree(super_fr)) =>
+ self.sub_free_region(sub_fr, super_fr),
+
+ _ =>
+ false,
+ }
+ }
+ }
+}
+
--- /dev/null
+// Copyright 2012 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// #![warn(deprecated_mode)]
+
+use middle::infer::{InferCtxt, GenericKind};
+use middle::subst::Substs;
+use middle::traits;
+use middle::ty::{self, RegionEscape, ToPolyTraitRef, Ty};
+use middle::ty_fold::{TypeFoldable, TypeFolder};
+
+use std::rc::Rc;
+use syntax::ast;
+use syntax::codemap::Span;
+
+use util::common::ErrorReported;
+use util::nodemap::FnvHashSet;
+use util::ppaux::Repr;
+
+// Helper functions related to manipulating region types.
+
+pub enum Implication<'tcx> {
+ RegionSubRegion(Option<Ty<'tcx>>, ty::Region, ty::Region),
+ RegionSubGeneric(Option<Ty<'tcx>>, ty::Region, GenericKind<'tcx>),
+ RegionSubClosure(Option<Ty<'tcx>>, ty::Region, ast::DefId, &'tcx Substs<'tcx>),
+ Predicate(ast::DefId, ty::Predicate<'tcx>),
+}
+
+struct Implicator<'a, 'tcx: 'a> {
+ infcx: &'a InferCtxt<'a,'tcx>,
+ closure_typer: &'a (ty::ClosureTyper<'tcx>+'a),
+ body_id: ast::NodeId,
+ stack: Vec<(ty::Region, Option<Ty<'tcx>>)>,
+ span: Span,
+ out: Vec<Implication<'tcx>>,
+ visited: FnvHashSet<Ty<'tcx>>,
+}
+
+/// This routine computes the well-formedness constraints that must hold for the type `ty` to
+/// appear in a context with lifetime `outer_region`.
+pub fn implications<'a,'tcx>(
+ infcx: &'a InferCtxt<'a,'tcx>,
+ closure_typer: &ty::ClosureTyper<'tcx>,
+ body_id: ast::NodeId,
+ ty: Ty<'tcx>,
+ outer_region: ty::Region,
+ span: Span)
+ -> Vec<Implication<'tcx>>
+{
+ debug!("implications(body_id={}, ty={}, outer_region={})",
+ body_id,
+ ty.repr(closure_typer.tcx()),
+ outer_region.repr(closure_typer.tcx()));
+
+ let mut stack = Vec::new();
+ stack.push((outer_region, None));
+ let mut wf = Implicator { closure_typer: closure_typer,
+ infcx: infcx,
+ body_id: body_id,
+ span: span,
+ stack: stack,
+ out: Vec::new(),
+ visited: FnvHashSet() };
+ wf.accumulate_from_ty(ty);
+ debug!("implications: out={}", wf.out.repr(closure_typer.tcx()));
+ wf.out
+}
+
+impl<'a, 'tcx> Implicator<'a, 'tcx> {
+ fn tcx(&self) -> &'a ty::ctxt<'tcx> {
+ self.infcx.tcx
+ }
+
+ fn accumulate_from_ty(&mut self, ty: Ty<'tcx>) {
+ debug!("accumulate_from_ty(ty={})",
+ ty.repr(self.tcx()));
+
+ // When expanding out associated types, we can visit a cyclic
+ // set of types. Issue #23003.
+ if !self.visited.insert(ty) {
+ return;
+ }
+
+ match ty.sty {
+ ty::ty_bool |
+ ty::ty_char |
+ ty::ty_int(..) |
+ ty::ty_uint(..) |
+ ty::ty_float(..) |
+ ty::ty_bare_fn(..) |
+ ty::ty_err |
+ ty::ty_str => {
+ // No borrowed content reachable here.
+ }
+
+ ty::ty_closure(def_id, substs) => {
+ let &(r_a, opt_ty) = self.stack.last().unwrap();
+ self.out.push(Implication::RegionSubClosure(opt_ty, r_a, def_id, substs));
+ }
+
+ ty::ty_trait(ref t) => {
+ let required_region_bounds =
+ object_region_bounds(self.tcx(), &t.principal, t.bounds.builtin_bounds);
+ self.accumulate_from_object_ty(ty, t.bounds.region_bound, required_region_bounds)
+ }
+
+ ty::ty_enum(def_id, substs) |
+ ty::ty_struct(def_id, substs) => {
+ let item_scheme = ty::lookup_item_type(self.tcx(), def_id);
+ self.accumulate_from_adt(ty, def_id, &item_scheme.generics, substs)
+ }
+
+ ty::ty_vec(t, _) |
+ ty::ty_ptr(ty::mt { ty: t, .. }) |
+ ty::ty_uniq(t) => {
+ self.accumulate_from_ty(t)
+ }
+
+ ty::ty_rptr(r_b, mt) => {
+ self.accumulate_from_rptr(ty, *r_b, mt.ty);
+ }
+
+ ty::ty_param(p) => {
+ self.push_param_constraint_from_top(p);
+ }
+
+ ty::ty_projection(ref data) => {
+ // `<T as TraitRef<..>>::Name`
+
+ self.push_projection_constraint_from_top(data);
+ }
+
+ ty::ty_tup(ref tuptys) => {
+ for &tupty in tuptys {
+ self.accumulate_from_ty(tupty);
+ }
+ }
+
+ ty::ty_infer(_) => {
+ // This should not happen, BUT:
+ //
+ // Currently we uncover region relationships on
+ // entering the fn check. We should do this after
+ // the fn check, then we can call this case a bug().
+ }
+ }
+ }
+
+ fn accumulate_from_rptr(&mut self,
+ ty: Ty<'tcx>,
+ r_b: ty::Region,
+ ty_b: Ty<'tcx>) {
+        // We are walking down a type like this, and the current
+        // position is indicated by the caret:
+ //
+ // &'a &'b ty_b
+ // ^
+ //
+ // At this point, top of stack will be `'a`. We must
+ // require that `'a <= 'b`.
+
+ self.push_region_constraint_from_top(r_b);
+
+ // Now we push `'b` onto the stack, because it must
+ // constrain any borrowed content we find within `T`.
+
+ self.stack.push((r_b, Some(ty)));
+ self.accumulate_from_ty(ty_b);
+ self.stack.pop().unwrap();
+ }
+
+ /// Pushes a constraint that `r_b` must outlive the top region on the stack.
+ fn push_region_constraint_from_top(&mut self,
+ r_b: ty::Region) {
+
+ // Indicates that we have found borrowed content with a lifetime
+ // of at least `r_b`. This adds a constraint that `r_b` must
+ // outlive the region `r_a` on top of the stack.
+ //
+ // As an example, imagine walking a type like:
+ //
+ // &'a &'b T
+ // ^
+ //
+ // when we hit the inner pointer (indicated by caret), `'a` will
+ // be on top of stack and `'b` will be the lifetime of the content
+ // we just found. So we add constraint that `'a <= 'b`.
+
+ let &(r_a, opt_ty) = self.stack.last().unwrap();
+ self.push_sub_region_constraint(opt_ty, r_a, r_b);
+ }
+
+ /// Pushes a constraint that `r_a <= r_b`, due to `opt_ty`
+ fn push_sub_region_constraint(&mut self,
+ opt_ty: Option<Ty<'tcx>>,
+ r_a: ty::Region,
+ r_b: ty::Region) {
+ self.out.push(Implication::RegionSubRegion(opt_ty, r_a, r_b));
+ }
+
+ /// Pushes a constraint that `param_ty` must outlive the top region on the stack.
+ fn push_param_constraint_from_top(&mut self,
+ param_ty: ty::ParamTy) {
+ let &(region, opt_ty) = self.stack.last().unwrap();
+ self.push_param_constraint(region, opt_ty, param_ty);
+ }
+
+ /// Pushes a constraint that `projection_ty` must outlive the top region on the stack.
+ fn push_projection_constraint_from_top(&mut self,
+ projection_ty: &ty::ProjectionTy<'tcx>) {
+ let &(region, opt_ty) = self.stack.last().unwrap();
+ self.out.push(Implication::RegionSubGeneric(
+ opt_ty, region, GenericKind::Projection(projection_ty.clone())));
+ }
+
+ /// Pushes a constraint that `region <= param_ty`, due to `opt_ty`
+ fn push_param_constraint(&mut self,
+ region: ty::Region,
+ opt_ty: Option<Ty<'tcx>>,
+ param_ty: ty::ParamTy) {
+ self.out.push(Implication::RegionSubGeneric(
+ opt_ty, region, GenericKind::Param(param_ty)));
+ }
+
+ fn accumulate_from_adt(&mut self,
+ ty: Ty<'tcx>,
+ def_id: ast::DefId,
+ _generics: &ty::Generics<'tcx>,
+ substs: &Substs<'tcx>)
+ {
+ let predicates =
+ ty::lookup_predicates(self.tcx(), def_id).instantiate(self.tcx(), substs);
+ let predicates = match self.fully_normalize(&predicates) {
+ Ok(predicates) => predicates,
+ Err(ErrorReported) => { return; }
+ };
+
+ for predicate in predicates.predicates.as_slice() {
+ match *predicate {
+ ty::Predicate::Trait(ref data) => {
+ self.accumulate_from_assoc_types_transitive(data);
+ }
+ ty::Predicate::Equate(..) => { }
+ ty::Predicate::Projection(..) => { }
+ ty::Predicate::RegionOutlives(ref data) => {
+ match ty::no_late_bound_regions(self.tcx(), data) {
+ None => { }
+ Some(ty::OutlivesPredicate(r_a, r_b)) => {
+ self.push_sub_region_constraint(Some(ty), r_b, r_a);
+ }
+ }
+ }
+ ty::Predicate::TypeOutlives(ref data) => {
+ match ty::no_late_bound_regions(self.tcx(), data) {
+ None => { }
+ Some(ty::OutlivesPredicate(ty_a, r_b)) => {
+ self.stack.push((r_b, Some(ty)));
+ self.accumulate_from_ty(ty_a);
+ self.stack.pop().unwrap();
+ }
+ }
+ }
+ }
+ }
+
+ let obligations = predicates.predicates
+ .into_iter()
+ .map(|pred| Implication::Predicate(def_id, pred));
+ self.out.extend(obligations);
+
+ let variances = ty::item_variances(self.tcx(), def_id);
+
+        for (&region, &variance) in substs.regions().iter().zip(variances.regions.iter()) {
+ match variance {
+ ty::Contravariant | ty::Invariant => {
+ // If any data with this lifetime is reachable
+ // within, it must be at least contravariant.
+ self.push_region_constraint_from_top(region)
+ }
+ ty::Covariant | ty::Bivariant => { }
+ }
+ }
+
+ for (&ty, &variance) in substs.types.iter().zip(variances.types.iter()) {
+ match variance {
+ ty::Covariant | ty::Invariant => {
+ // If any data of this type is reachable within,
+ // it must be at least covariant.
+ self.accumulate_from_ty(ty);
+ }
+ ty::Contravariant | ty::Bivariant => { }
+ }
+ }
+ }
+
+ /// Given that there is a requirement that `Foo<X> : 'a`, where
+ /// `Foo` is declared like `struct Foo<T> where T : SomeTrait`,
+ /// this code finds all the associated types defined in
+ /// `SomeTrait` (and supertraits) and adds a requirement that `<X
+ /// as SomeTrait>::N : 'a` (where `N` is some associated type
+ /// defined in `SomeTrait`). This rule only applies to
+ /// trait-bounds that are not higher-ranked, because we cannot
+ /// project out of a HRTB. This rule helps code using associated
+ /// types to compile, see Issue #22246 for an example.
+ fn accumulate_from_assoc_types_transitive(&mut self,
+ data: &ty::PolyTraitPredicate<'tcx>)
+ {
+ debug!("accumulate_from_assoc_types_transitive({})",
+ data.repr(self.tcx()));
+
+ for poly_trait_ref in traits::supertraits(self.tcx(), data.to_poly_trait_ref()) {
+ match ty::no_late_bound_regions(self.tcx(), &poly_trait_ref) {
+ Some(trait_ref) => { self.accumulate_from_assoc_types(trait_ref); }
+ None => { }
+ }
+ }
+ }
+
+ fn accumulate_from_assoc_types(&mut self,
+ trait_ref: Rc<ty::TraitRef<'tcx>>)
+ {
+ debug!("accumulate_from_assoc_types({})",
+ trait_ref.repr(self.tcx()));
+
+ let trait_def_id = trait_ref.def_id;
+ let trait_def = ty::lookup_trait_def(self.tcx(), trait_def_id);
+ let assoc_type_projections: Vec<_> =
+ trait_def.associated_type_names
+ .iter()
+ .map(|&name| ty::mk_projection(self.tcx(), trait_ref.clone(), name))
+ .collect();
+ debug!("accumulate_from_assoc_types: assoc_type_projections={}",
+ assoc_type_projections.repr(self.tcx()));
+ let tys = match self.fully_normalize(&assoc_type_projections) {
+ Ok(tys) => { tys }
+ Err(ErrorReported) => { return; }
+ };
+ for ty in tys {
+ self.accumulate_from_ty(ty);
+ }
+ }
+
+ fn accumulate_from_object_ty(&mut self,
+ ty: Ty<'tcx>,
+ region_bound: ty::Region,
+ required_region_bounds: Vec<ty::Region>)
+ {
+ // Imagine a type like this:
+ //
+ // trait Foo { }
+ // trait Bar<'c> : 'c { }
+ //
+ // &'b (Foo+'c+Bar<'d>)
+ // ^
+ //
+ // In this case, the following relationships must hold:
+ //
+ // 'b <= 'c
+ // 'd <= 'c
+ //
+        // The first condition is due to the normal region pointer
+ // rules, which say that a reference cannot outlive its
+ // referent.
+ //
+ // The final condition may be a bit surprising. In particular,
+ // you may expect that it would have been `'c <= 'd`, since
+ // usually lifetimes of outer things are conservative
+ // approximations for inner things. However, it works somewhat
+ // differently with trait objects: here the idea is that if the
+ // user specifies a region bound (`'c`, in this case) it is the
+ // "master bound" that *implies* that bounds from other traits are
+ // all met. (Remember that *all bounds* in a type like
+ // `Foo+Bar+Zed` must be met, not just one, hence if we write
+ // `Foo<'x>+Bar<'y>`, we know that the type outlives *both* 'x and
+ // 'y.)
+ //
+        // Note: in fact we only permit builtin traits, not `Bar<'d>`;
+        // supporting the latter is left for the future.
+
+ // The content of this object type must outlive
+ // `bounds.region_bound`:
+ let r_c = region_bound;
+ self.push_region_constraint_from_top(r_c);
+
+ // And then, in turn, to be well-formed, the
+ // `region_bound` that user specified must imply the
+ // region bounds required from all of the trait types:
+ for &r_d in &required_region_bounds {
+            // Each of these is an instance of the `'d <= 'c`
+            // constraint above
+ self.out.push(Implication::RegionSubRegion(Some(ty), r_d, r_c));
+ }
+ }
+
+ fn fully_normalize<T>(&self, value: &T) -> Result<T,ErrorReported>
+ where T : TypeFoldable<'tcx> + ty::HasProjectionTypes + Clone + Repr<'tcx>
+ {
+ let value =
+ traits::fully_normalize(self.infcx,
+ self.closure_typer,
+ traits::ObligationCause::misc(self.span, self.body_id),
+ value);
+ match value {
+ Ok(value) => Ok(value),
+ Err(errors) => {
+ // I don't like reporting these errors here, but I
+ // don't know where else to report them just now. And
+ // I don't really expect errors to arise here
+ // frequently. I guess the best option would be to
+ // propagate them out.
+ traits::report_fulfillment_errors(self.infcx, &errors);
+ Err(ErrorReported)
+ }
+ }
+ }
+}
+
+/// Given an object type like `SomeTrait+Send`, computes the lifetime
+/// bounds that must hold on the elided self type. These are derived
+/// from the declarations of `SomeTrait`, `Send`, and friends -- if
+/// they declare `trait SomeTrait : 'static`, for example, then
+/// `'static` would appear in the list. The hard work is done by
+/// `ty::required_region_bounds`, see that for more information.
+pub fn object_region_bounds<'tcx>(
+ tcx: &ty::ctxt<'tcx>,
+ principal: &ty::PolyTraitRef<'tcx>,
+ others: ty::BuiltinBounds)
+ -> Vec<ty::Region>
+{
+ // Since we don't actually *know* the self type for an object,
+ // this "open(err)" serves as a kind of dummy standin -- basically
+ // a skolemized type.
+ let open_ty = ty::mk_infer(tcx, ty::FreshTy(0));
+
+ // Note that we preserve the overall binding levels here.
+ assert!(!open_ty.has_escaping_regions());
+ let substs = tcx.mk_substs(principal.0.substs.with_self_ty(open_ty));
+ let trait_refs = vec!(ty::Binder(Rc::new(ty::TraitRef::new(principal.0.def_id, substs))));
+
+ let param_bounds = ty::ParamBounds {
+ region_bounds: Vec::new(),
+ builtin_bounds: others,
+ trait_bounds: trait_refs,
+ projection_bounds: Vec::new(), // not relevant to computing region bounds
+ };
+
+    let predicates = ty::predicates(tcx, open_ty, &param_bounds);
+ ty::required_region_bounds(tcx, open_ty, predicates)
+}
+
+impl<'tcx> Repr<'tcx> for Implication<'tcx> {
+ fn repr(&self, tcx: &ty::ctxt<'tcx>) -> String {
+ match *self {
+ Implication::RegionSubRegion(_, ref r_a, ref r_b) => {
+ format!("RegionSubRegion({}, {})",
+ r_a.repr(tcx),
+ r_b.repr(tcx))
+ }
+
+ Implication::RegionSubGeneric(_, ref r, ref p) => {
+ format!("RegionSubGeneric({}, {})",
+ r.repr(tcx),
+ p.repr(tcx))
+ }
+
+ Implication::RegionSubClosure(_, ref a, ref b, ref c) => {
+ format!("RegionSubClosure({}, {}, {})",
+ a.repr(tcx),
+ b.repr(tcx),
+ c.repr(tcx))
+ }
+
+ Implication::Predicate(ref def_id, ref p) => {
+ format!("Predicate({}, {})",
+ def_id.repr(tcx),
+ p.repr(tcx))
+ }
+ }
+ }
+}
pub use self::freshen::TypeFreshener;
pub use self::region_inference::GenericKind;
+use middle::free_region::FreeRegionMap;
use middle::subst;
use middle::subst::Substs;
use middle::ty::{TyVid, IntVid, FloatVid, RegionVid, UnconstrainedNumeric};
self.region_vars.new_bound(debruijn)
}
- pub fn resolve_regions_and_report_errors(&self, subject_node_id: ast::NodeId) {
- let errors = self.region_vars.resolve_regions(subject_node_id);
+ pub fn resolve_regions_and_report_errors(&self,
+ free_regions: &FreeRegionMap,
+ subject_node_id: ast::NodeId) {
+ let errors = self.region_vars.resolve_regions(free_regions, subject_node_id);
self.report_region_errors(&errors); // see error_reporting.rs
}
use super::{RegionVariableOrigin, SubregionOrigin, TypeTrace, MiscVariable};
use rustc_data_structures::graph::{self, Direction, NodeIndex};
+use middle::free_region::FreeRegionMap;
use middle::region;
use middle::ty::{self, Ty};
use middle::ty::{BoundRegion, FreeRegion, Region, RegionVid};
/// fixed-point iteration to find region values which satisfy all
/// constraints, assuming such values can be found; if they cannot,
/// errors are reported.
- pub fn resolve_regions(&self, subject_node: ast::NodeId) -> Vec<RegionResolutionError<'tcx>> {
+ pub fn resolve_regions(&self,
+ free_regions: &FreeRegionMap,
+ subject_node: ast::NodeId)
+ -> Vec<RegionResolutionError<'tcx>>
+ {
debug!("RegionVarBindings: resolve_regions()");
let mut errors = vec!();
- let v = self.infer_variable_values(&mut errors, subject_node);
+ let v = self.infer_variable_values(free_regions, &mut errors, subject_node);
*self.values.borrow_mut() = Some(v);
errors
}
- fn is_subregion_of(&self, sub: Region, sup: Region) -> bool {
- self.tcx.region_maps.is_subregion_of(sub, sup)
- }
-
- fn lub_concrete_regions(&self, a: Region, b: Region) -> Region {
+ fn lub_concrete_regions(&self, free_regions: &FreeRegionMap, a: Region, b: Region) -> Region {
match (a, b) {
(ReLateBound(..), _) |
(_, ReLateBound(..)) |
}
(ReFree(ref a_fr), ReFree(ref b_fr)) => {
- self.lub_free_regions(a_fr, b_fr)
+ self.lub_free_regions(free_regions, a_fr, b_fr)
}
// For these types, we cannot define any additional
/// Computes a region that encloses both free region arguments. Guarantee that if the same two
/// regions are given as argument, in any order, a consistent result is returned.
fn lub_free_regions(&self,
+ free_regions: &FreeRegionMap,
a: &FreeRegion,
b: &FreeRegion)
-> ty::Region
{
return match a.cmp(b) {
- Less => helper(self, a, b),
- Greater => helper(self, b, a),
+ Less => helper(self, free_regions, a, b),
+ Greater => helper(self, free_regions, b, a),
Equal => ty::ReFree(*a)
};
- fn helper(this: &RegionVarBindings,
+ fn helper(_this: &RegionVarBindings,
+ free_regions: &FreeRegionMap,
a: &FreeRegion,
b: &FreeRegion) -> ty::Region
{
- if this.tcx.region_maps.sub_free_region(*a, *b) {
+ if free_regions.sub_free_region(*a, *b) {
ty::ReFree(*b)
- } else if this.tcx.region_maps.sub_free_region(*b, *a) {
+ } else if free_regions.sub_free_region(*b, *a) {
ty::ReFree(*a)
} else {
ty::ReStatic
}
fn glb_concrete_regions(&self,
+ free_regions: &FreeRegionMap,
a: Region,
b: Region)
-> RelateResult<'tcx, Region>
}
(ReFree(ref a_fr), ReFree(ref b_fr)) => {
- self.glb_free_regions(a_fr, b_fr)
+ self.glb_free_regions(free_regions, a_fr, b_fr)
}
// For these types, we cannot define any additional
/// if the same two regions are given as argument, in any order, a consistent result is
/// returned.
fn glb_free_regions(&self,
+ free_regions: &FreeRegionMap,
a: &FreeRegion,
b: &FreeRegion)
-> RelateResult<'tcx, ty::Region>
{
return match a.cmp(b) {
- Less => helper(self, a, b),
- Greater => helper(self, b, a),
+ Less => helper(self, free_regions, a, b),
+ Greater => helper(self, free_regions, b, a),
Equal => Ok(ty::ReFree(*a))
};
fn helper<'a, 'tcx>(this: &RegionVarBindings<'a, 'tcx>,
+ free_regions: &FreeRegionMap,
a: &FreeRegion,
b: &FreeRegion) -> RelateResult<'tcx, ty::Region>
{
- if this.tcx.region_maps.sub_free_region(*a, *b) {
+ if free_regions.sub_free_region(*a, *b) {
Ok(ty::ReFree(*a))
- } else if this.tcx.region_maps.sub_free_region(*b, *a) {
+ } else if free_regions.sub_free_region(*b, *a) {
Ok(ty::ReFree(*b))
} else {
this.intersect_scopes(ty::ReFree(*a), ty::ReFree(*b),
impl<'a, 'tcx> RegionVarBindings<'a, 'tcx> {
fn infer_variable_values(&self,
+ free_regions: &FreeRegionMap,
errors: &mut Vec<RegionResolutionError<'tcx>>,
subject: ast::NodeId) -> Vec<VarValue>
{
debug!("----() End constraint listing {:?}---", self.dump_constraints());
graphviz::maybe_print_constraints_for(self, subject);
- self.expansion(&mut var_data);
- self.contraction(&mut var_data);
+ self.expansion(free_regions, &mut var_data);
+ self.contraction(free_regions, &mut var_data);
let values =
- self.extract_values_and_collect_conflicts(&var_data[..],
+ self.extract_values_and_collect_conflicts(free_regions,
+ &var_data[..],
errors);
- self.collect_concrete_region_errors(&values, errors);
+ self.collect_concrete_region_errors(free_regions, &values, errors);
values
}
}
}
- fn expansion(&self, var_data: &mut [VarData]) {
+ fn expansion(&self, free_regions: &FreeRegionMap, var_data: &mut [VarData]) {
self.iterate_until_fixed_point("Expansion", |constraint| {
debug!("expansion: constraint={} origin={}",
constraint.repr(self.tcx),
match *constraint {
ConstrainRegSubVar(a_region, b_vid) => {
let b_data = &mut var_data[b_vid.index as usize];
- self.expand_node(a_region, b_vid, b_data)
+ self.expand_node(free_regions, a_region, b_vid, b_data)
}
ConstrainVarSubVar(a_vid, b_vid) => {
match var_data[a_vid.index as usize].value {
NoValue | ErrorValue => false,
Value(a_region) => {
let b_node = &mut var_data[b_vid.index as usize];
- self.expand_node(a_region, b_vid, b_node)
+ self.expand_node(free_regions, a_region, b_vid, b_node)
}
}
}
}
fn expand_node(&self,
+ free_regions: &FreeRegionMap,
a_region: Region,
b_vid: RegionVid,
b_data: &mut VarData)
}
Value(cur_region) => {
- let lub = self.lub_concrete_regions(a_region, cur_region);
+ let lub = self.lub_concrete_regions(free_regions, a_region, cur_region);
if lub == cur_region {
return false;
}
}
fn contraction(&self,
+ free_regions: &FreeRegionMap,
var_data: &mut [VarData]) {
self.iterate_until_fixed_point("Contraction", |constraint| {
debug!("contraction: constraint={} origin={}",
NoValue | ErrorValue => false,
Value(b_region) => {
let a_data = &mut var_data[a_vid.index as usize];
- self.contract_node(a_vid, a_data, b_region)
+ self.contract_node(free_regions, a_vid, a_data, b_region)
}
}
}
ConstrainVarSubReg(a_vid, b_region) => {
let a_data = &mut var_data[a_vid.index as usize];
- self.contract_node(a_vid, a_data, b_region)
+ self.contract_node(free_regions, a_vid, a_data, b_region)
}
}
})
}
fn contract_node(&self,
+ free_regions: &FreeRegionMap,
a_vid: RegionVid,
a_data: &mut VarData,
b_region: Region)
Value(a_region) => {
match a_data.classification {
- Expanding => check_node(self, a_vid, a_data, a_region, b_region),
- Contracting => adjust_node(self, a_vid, a_data, a_region, b_region),
+ Expanding =>
+ check_node(self, free_regions, a_vid, a_data, a_region, b_region),
+ Contracting =>
+ adjust_node(self, free_regions, a_vid, a_data, a_region, b_region),
}
}
};
fn check_node(this: &RegionVarBindings,
+ free_regions: &FreeRegionMap,
a_vid: RegionVid,
a_data: &mut VarData,
a_region: Region,
b_region: Region)
- -> bool {
- if !this.is_subregion_of(a_region, b_region) {
+ -> bool
+ {
+ if !free_regions.is_subregion_of(this.tcx, a_region, b_region) {
debug!("Setting {:?} to ErrorValue: {} not subregion of {}",
a_vid,
a_region.repr(this.tcx),
}
fn adjust_node(this: &RegionVarBindings,
+ free_regions: &FreeRegionMap,
a_vid: RegionVid,
a_data: &mut VarData,
a_region: Region,
b_region: Region)
-> bool {
- match this.glb_concrete_regions(a_region, b_region) {
+ match this.glb_concrete_regions(free_regions, a_region, b_region) {
Ok(glb) => {
if glb == a_region {
false
}
fn collect_concrete_region_errors(&self,
+ free_regions: &FreeRegionMap,
values: &Vec<VarValue>,
errors: &mut Vec<RegionResolutionError<'tcx>>)
{
for verify in &*self.verifys.borrow() {
match *verify {
VerifyRegSubReg(ref origin, sub, sup) => {
- if self.is_subregion_of(sub, sup) {
+ if free_regions.is_subregion_of(self.tcx, sub, sup) {
continue;
}
let sub = normalize(values, sub);
if sups.iter()
.map(|&sup| normalize(values, sup))
- .any(|sup| self.is_subregion_of(sub, sup))
+ .any(|sup| free_regions.is_subregion_of(self.tcx, sub, sup))
{
continue;
}
fn extract_values_and_collect_conflicts(
&self,
+ free_regions: &FreeRegionMap,
var_data: &[VarData],
errors: &mut Vec<RegionResolutionError<'tcx>>)
-> Vec<VarValue>
match var_data[idx].classification {
Expanding => {
self.collect_error_for_expanding_node(
- graph, var_data, &mut dup_vec,
+ free_regions, graph, var_data, &mut dup_vec,
node_vid, errors);
}
Contracting => {
self.collect_error_for_contracting_node(
- graph, var_data, &mut dup_vec,
+ free_regions, graph, var_data, &mut dup_vec,
node_vid, errors);
}
}
return graph;
}
- fn collect_error_for_expanding_node(
- &self,
- graph: &RegionGraph,
- var_data: &[VarData],
- dup_vec: &mut [u32],
- node_idx: RegionVid,
- errors: &mut Vec<RegionResolutionError<'tcx>>)
+ fn collect_error_for_expanding_node(&self,
+ free_regions: &FreeRegionMap,
+ graph: &RegionGraph,
+ var_data: &[VarData],
+ dup_vec: &mut [u32],
+ node_idx: RegionVid,
+ errors: &mut Vec<RegionResolutionError<'tcx>>)
{
// Errors in expanding nodes result from a lower-bound that is
// not contained by an upper-bound.
for lower_bound in &lower_bounds {
for upper_bound in &upper_bounds {
- if !self.is_subregion_of(lower_bound.region,
- upper_bound.region) {
+ if !free_regions.is_subregion_of(self.tcx,
+ lower_bound.region,
+ upper_bound.region) {
debug!("pushing SubSupConflict sub: {:?} sup: {:?}",
lower_bound.region, upper_bound.region);
errors.push(SubSupConflict(
fn collect_error_for_contracting_node(
&self,
+ free_regions: &FreeRegionMap,
graph: &RegionGraph,
var_data: &[VarData],
dup_vec: &mut [u32],
for upper_bound_1 in &upper_bounds {
for upper_bound_2 in &upper_bounds {
- match self.glb_concrete_regions(upper_bound_1.region,
+ match self.glb_concrete_regions(free_regions,
+ upper_bound_1.region,
upper_bound_2.region) {
Ok(_) => {}
Err(_) => {
//! `middle/typeck/infer/region_inference.rs`
use session::Session;
-use middle::ty::{self, Ty, FreeRegion};
+use middle::ty::{self, Ty};
use util::nodemap::{FnvHashMap, FnvHashSet, NodeMap};
-use util::common::can_reach;
use std::cell::RefCell;
use syntax::codemap::{self, Span};
/// which that variable is declared.
var_map: RefCell<NodeMap<CodeExtent>>,
- /// `free_region_map` maps from a free region `a` to a list of
- /// free regions `bs` such that `a <= b for all b in bs`
- ///
- /// NB. the free region map is populated during type check as we
- /// check each function. See the function `relate_free_regions`
- /// for more information.
- free_region_map: RefCell<FnvHashMap<FreeRegion, Vec<FreeRegion>>>,
-
/// `rvalue_scopes` includes entries for those expressions whose cleanup scope is
/// larger than the default. The map goes from the expression id
/// to the cleanup scope id. For rvalues not present in this
e(child, parent)
}
}
- pub fn each_encl_free_region<E>(&self, mut e:E) where E: FnMut(&FreeRegion, &FreeRegion) {
- for (child, parents) in self.free_region_map.borrow().iter() {
- for parent in parents.iter() {
- e(child, parent)
- }
- }
- }
pub fn each_rvalue_scope<E>(&self, mut e:E) where E: FnMut(&ast::NodeId, &CodeExtent) {
for (child, parent) in self.rvalue_scopes.borrow().iter() {
e(child, parent)
}
}
- pub fn relate_free_regions(&self, sub: FreeRegion, sup: FreeRegion) {
- match self.free_region_map.borrow_mut().get_mut(&sub) {
- Some(sups) => {
- if !sups.iter().any(|x| x == &sup) {
- sups.push(sup);
- }
- return;
- }
- None => {}
- }
-
- debug!("relate_free_regions(sub={:?}, sup={:?})", sub, sup);
- self.free_region_map.borrow_mut().insert(sub, vec!(sup));
- }
-
/// Records that `sub_fn` is defined within `sup_fn`. These ids
/// should be the id of the block that is the fn body, which is
/// also the root of the region hierarchy for that fn.
return true;
}
- /// Determines whether two free regions have a subregion relationship
- /// by walking the graph encoded in `free_region_map`. Note that
- /// it is possible that `sub != sup` and `sub <= sup` and `sup <= sub`
- /// (that is, the user can give two different names to the same lifetime).
- pub fn sub_free_region(&self, sub: FreeRegion, sup: FreeRegion) -> bool {
- can_reach(&*self.free_region_map.borrow(), sub, sup)
- }
-
- /// Determines whether one region is a subregion of another. This is intended to run *after
- /// inference* and sadly the logic is somewhat duplicated with the code in infer.rs.
- pub fn is_subregion_of(&self,
- sub_region: ty::Region,
- super_region: ty::Region)
- -> bool {
- debug!("is_subregion_of(sub_region={:?}, super_region={:?})",
- sub_region, super_region);
-
- sub_region == super_region || {
- match (sub_region, super_region) {
- (ty::ReEmpty, _) |
- (_, ty::ReStatic) => {
- true
- }
-
- (ty::ReScope(sub_scope), ty::ReScope(super_scope)) => {
- self.is_subscope_of(sub_scope, super_scope)
- }
-
- (ty::ReScope(sub_scope), ty::ReFree(ref fr)) => {
- self.is_subscope_of(sub_scope, fr.scope.to_code_extent())
- }
-
- (ty::ReFree(sub_fr), ty::ReFree(super_fr)) => {
- self.sub_free_region(sub_fr, super_fr)
- }
-
- (ty::ReEarlyBound(data_a), ty::ReEarlyBound(data_b)) => {
- // This case is used only to make sure that explicitly-
- // specified `Self` types match the real self type in
- // implementations. Yuck.
- data_a == data_b
- }
-
- _ => {
- false
- }
- }
- }
- }
-
/// Finds the nearest common ancestor (if any) of two scopes. That is, finds the smallest
/// scope which is greater than or equal to both `scope_a` and `scope_b`.
pub fn nearest_common_ancestor(&self,
let maps = RegionMaps {
scope_map: RefCell::new(FnvHashMap()),
var_map: RefCell::new(NodeMap()),
- free_region_map: RefCell::new(FnvHashMap()),
rvalue_scopes: RefCell::new(NodeMap()),
terminating_scopes: RefCell::new(FnvHashSet()),
fn_tree: RefCell::new(NodeMap()),
pub use self::Vtable::*;
pub use self::ObligationCauseCode::*;
+use middle::free_region::FreeRegionMap;
use middle::subst;
use middle::ty::{self, HasProjectionTypes, Ty};
use middle::ty_fold::TypeFoldable;
}
};
- infcx.resolve_regions_and_report_errors(body_id);
+ let free_regions = FreeRegionMap::new();
+ infcx.resolve_regions_and_report_errors(&free_regions, body_id);
let predicates = match infcx.fully_resolve(&predicates) {
Ok(predicates) => predicates,
Err(fixup_err) => {
use middle::const_eval;
use middle::def::{self, DefMap, ExportMap};
use middle::dependency_format;
+use middle::free_region::FreeRegionMap;
use middle::lang_items::{FnTraitLangItem, FnMutTraitLangItem, FnOnceTraitLangItem};
use middle::mem_categorization as mc;
use middle::region;
use middle::resolve_lifetime;
use middle::infer;
use middle::pat_util;
+use middle::region::RegionMaps;
use middle::stability;
use middle::subst::{self, ParamSpace, Subst, Substs, VecPerParamSpace};
use middle::traits;
pub named_region_map: resolve_lifetime::NamedRegionMap,
- pub region_maps: middle::region::RegionMaps,
+ pub region_maps: RegionMaps,
+
+ // For each fn declared in the local crate, type check stores the
+ // free-region relationships that were deduced from its where
+    // clauses and parameter types. These are then read again by
+ // borrowck. (They are not used during trans, and hence are not
+ // serialized or needed for cross-crate fns.)
+ free_region_maps: RefCell<NodeMap<FreeRegionMap>>,
/// Stores the types for various nodes in the AST. Note that this table
/// is not guaranteed to be populated until after typeck. See
pub fn node_type_insert(&self, id: NodeId, ty: Ty<'tcx>) {
self.node_types.borrow_mut().insert(id, ty);
}
+
+ pub fn store_free_region_map(&self, id: NodeId, map: FreeRegionMap) {
+ self.free_region_maps.borrow_mut()
+ .insert(id, map);
+ }
+
+ pub fn free_region_map(&self, id: NodeId) -> FreeRegionMap {
+ self.free_region_maps.borrow()[&id].clone()
+ }
}
// Flags that we track on types. These flags are propagated upwards
named_region_map: resolve_lifetime::NamedRegionMap,
map: ast_map::Map<'tcx>,
freevars: RefCell<FreevarMap>,
- region_maps: middle::region::RegionMaps,
+ region_maps: RegionMaps,
lang_items: middle::lang_items::LanguageItems,
stability: stability::Index) -> ctxt<'tcx>
{
region_interner: RefCell::new(FnvHashMap()),
types: common_types,
named_region_map: named_region_map,
+ region_maps: region_maps,
+ free_region_maps: RefCell::new(FnvHashMap()),
item_variance_map: RefCell::new(DefIdMap()),
variance_computed: Cell::new(false),
sess: s,
def_map: def_map,
- region_maps: region_maps,
node_types: RefCell::new(FnvHashMap()),
item_substs: RefCell::new(NodeMap()),
impl_trait_refs: RefCell::new(NodeMap()),
let bounds = liberate_late_bound_regions(tcx, free_id_outlive, &ty::Binder(bounds));
let predicates = bounds.predicates.into_vec();
- //
- // Compute region bounds. For now, these relations are stored in a
- // global table on the tcx, so just enter them there. I'm not
- // crazy about this scheme, but it's convenient, at least.
- //
-
- record_region_bounds(tcx, &*predicates);
-
debug!("construct_parameter_environment: free_id={:?} free_subst={:?} predicates={:?}",
free_id,
free_substs.repr(tcx),
};
let cause = traits::ObligationCause::misc(span, free_id);
- return traits::normalize_param_env_or_error(unnormalized_env, cause);
-
- fn record_region_bounds<'tcx>(tcx: &ty::ctxt<'tcx>, predicates: &[ty::Predicate<'tcx>]) {
- debug!("record_region_bounds(predicates={:?})", predicates.repr(tcx));
-
- for predicate in predicates {
- match *predicate {
- Predicate::Projection(..) |
- Predicate::Trait(..) |
- Predicate::Equate(..) |
- Predicate::TypeOutlives(..) => {
- // No region bounds here
- }
- Predicate::RegionOutlives(ty::Binder(ty::OutlivesPredicate(r_a, r_b))) => {
- match (r_a, r_b) {
- (ty::ReFree(fr_a), ty::ReFree(fr_b)) => {
- // Record that `'a:'b`. Or, put another way, `'b <= 'a`.
- tcx.region_maps.relate_free_regions(fr_b, fr_a);
- }
- _ => {
- // All named regions are instantiated with free regions.
- tcx.sess.bug(
- &format!("record_region_bounds: non free region: {} / {}",
- r_a.repr(tcx),
- r_b.repr(tcx)));
- }
- }
- }
- }
- }
- }
+ traits::normalize_param_env_or_error(unnormalized_env, cause)
}
impl BorrowKind {
}
#[cfg(test)]
-mod test {
+mod tests {
use session::config::{build_configuration, optgroups, build_session_options};
use session::build_session;
}
#[cfg(all(not(windows), test))]
-mod test {
+mod tests {
use tempdir::TempDir;
use std::fs::{self, File};
use super::realpath;
}
#[cfg(all(unix, test))]
-mod test {
+mod tests {
use super::{RPathConfig};
use super::{minimize_rpaths, rpaths_to_flags, get_rpath_relative_to_output};
use std::path::{Path, PathBuf};
use rustc::middle::dataflow::KillFrom;
use rustc::middle::expr_use_visitor as euv;
use rustc::middle::mem_categorization as mc;
+use rustc::middle::free_region::FreeRegionMap;
use rustc::middle::region;
use rustc::middle::ty::{self, Ty};
use rustc::util::ppaux::{note_and_explain_region, Repr, UserString};
+use std::mem;
use std::rc::Rc;
use std::string::String;
use syntax::ast;
impl<'a, 'tcx, 'v> Visitor<'v> for BorrowckCtxt<'a, 'tcx> {
fn visit_fn(&mut self, fk: FnKind<'v>, fd: &'v FnDecl,
b: &'v Block, s: Span, id: ast::NodeId) {
- borrowck_fn(self, fk, fd, b, s, id);
+ match fk {
+ visit::FkItemFn(..) |
+ visit::FkMethod(..) => {
+ let new_free_region_map = self.tcx.free_region_map(id);
+ let old_free_region_map =
+ mem::replace(&mut self.free_region_map, new_free_region_map);
+ borrowck_fn(self, fk, fd, b, s, id);
+ self.free_region_map = old_free_region_map;
+ }
+
+ visit::FkFnBlock => {
+ borrowck_fn(self, fk, fd, b, s, id);
+ }
+ }
}
fn visit_item(&mut self, item: &ast::Item) {
pub fn check_crate(tcx: &ty::ctxt) {
let mut bccx = BorrowckCtxt {
tcx: tcx,
+ free_region_map: FreeRegionMap::new(),
stats: BorrowStats {
loaned_paths_same: 0,
loaned_paths_imm: 0,
let cfg = cfg::CFG::new(this.tcx, body);
let AnalysisData { all_loans,
loans: loan_dfcx,
- move_data:flowed_moves } =
+ move_data: flowed_moves } =
build_borrowck_dataflow_data(this, fk, decl, &cfg, body, sp, id);
move_data::fragments::instrument_move_fragments(&flowed_moves.move_data,
- this.tcx, sp, id);
+ this.tcx,
+ sp,
+ id);
check_loans::check_loans(this,
&loan_dfcx,
cfg: &cfg::CFG,
body: &ast::Block,
sp: Span,
- id: ast::NodeId) -> AnalysisData<'a, 'tcx> {
+ id: ast::NodeId)
+ -> AnalysisData<'a, 'tcx>
+{
// Check the body of fn items.
let id_range = ast_util::compute_id_range_for_fn_body(fk, decl, body, sp, id);
let (all_loans, move_data) =
/// the `BorrowckCtxt` itself, e.g. the flowgraph visualizer.
pub fn build_borrowck_dataflow_data_for_fn<'a, 'tcx>(
tcx: &'a ty::ctxt<'tcx>,
- input: FnPartsWithCFG<'a>) -> (BorrowckCtxt<'a, 'tcx>, AnalysisData<'a, 'tcx>) {
+ input: FnPartsWithCFG<'a>)
+ -> (BorrowckCtxt<'a, 'tcx>, AnalysisData<'a, 'tcx>)
+{
let mut bccx = BorrowckCtxt {
tcx: tcx,
+ free_region_map: FreeRegionMap::new(),
stats: BorrowStats {
loaned_paths_same: 0,
loaned_paths_imm: 0,
pub struct BorrowckCtxt<'a, 'tcx: 'a> {
tcx: &'a ty::ctxt<'tcx>,
+ // Hacky. As we visit various fns, we have to load up the
+    // free-region map for each one. This map is computed during
+ // typeck for each fn item and stored -- closures just use the map
+ // from the fn item that encloses them. Since we walk the fns in
+ // order, we basically just overwrite this field as we enter a fn
+ // item and restore it afterwards in a stack-like fashion. Then
+ // the borrow checking code can assume that `free_region_map` is
+ // always the correct map for the current fn. Feels like it'd be
+ // better to just recompute this, rather than store it, but it's a
+ // bit of a pain to factor that code out at the moment.
+ free_region_map: FreeRegionMap,
+
// Statistics:
stats: BorrowStats
}
impl<'a, 'tcx> BorrowckCtxt<'a, 'tcx> {
pub fn is_subregion_of(&self, r_sub: ty::Region, r_sup: ty::Region)
- -> bool {
- self.tcx.region_maps.is_subregion_of(r_sub, r_sup)
+ -> bool
+ {
+ self.free_region_map.is_subregion_of(self.tcx, r_sub, r_sup)
}
pub fn report(&self, err: BckError<'tcx>) {
use snapshot_vec::{SnapshotVec, SnapshotVecDelegate};
#[cfg(test)]
-mod test;
+mod tests;
pub struct Graph<N,E> {
nodes: SnapshotVec<Node<N>> ,
+++ /dev/null
-// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-use graph::*;
-use std::fmt::Debug;
-
-type TestNode = Node<&'static str>;
-type TestEdge = Edge<&'static str>;
-type TestGraph = Graph<&'static str, &'static str>;
-
-fn create_graph() -> TestGraph {
- let mut graph = Graph::new();
-
- // Create a simple graph
- //
- // A -+> B --> C
- // | | ^
- // | v |
- // F D --> E
-
- let a = graph.add_node("A");
- let b = graph.add_node("B");
- let c = graph.add_node("C");
- let d = graph.add_node("D");
- let e = graph.add_node("E");
- let f = graph.add_node("F");
-
- graph.add_edge(a, b, "AB");
- graph.add_edge(b, c, "BC");
- graph.add_edge(b, d, "BD");
- graph.add_edge(d, e, "DE");
- graph.add_edge(e, c, "EC");
- graph.add_edge(f, b, "FB");
-
- return graph;
-}
-
-#[test]
-fn each_node() {
- let graph = create_graph();
- let expected = ["A", "B", "C", "D", "E", "F"];
- graph.each_node(|idx, node| {
- assert_eq!(&expected[idx.0], graph.node_data(idx));
- assert_eq!(expected[idx.0], node.data);
- true
- });
-}
-
-#[test]
-fn each_edge() {
- let graph = create_graph();
- let expected = ["AB", "BC", "BD", "DE", "EC", "FB"];
- graph.each_edge(|idx, edge| {
- assert_eq!(&expected[idx.0], graph.edge_data(idx));
- assert_eq!(expected[idx.0], edge.data);
- true
- });
-}
-
-fn test_adjacent_edges<N:PartialEq+Debug,E:PartialEq+Debug>(graph: &Graph<N,E>,
- start_index: NodeIndex,
- start_data: N,
- expected_incoming: &[(E,N)],
- expected_outgoing: &[(E,N)]) {
- assert!(graph.node_data(start_index) == &start_data);
-
- let mut counter = 0;
- for (edge_index, edge) in graph.incoming_edges(start_index) {
- assert!(graph.edge_data(edge_index) == &edge.data);
- assert!(counter < expected_incoming.len());
- debug!("counter={:?} expected={:?} edge_index={:?} edge={:?}",
- counter, expected_incoming[counter], edge_index, edge);
- match expected_incoming[counter] {
- (ref e, ref n) => {
- assert!(e == &edge.data);
- assert!(n == graph.node_data(edge.source()));
- assert!(start_index == edge.target);
- }
- }
- counter += 1;
- }
- assert_eq!(counter, expected_incoming.len());
-
- let mut counter = 0;
- for (edge_index, edge) in graph.outgoing_edges(start_index) {
- assert!(graph.edge_data(edge_index) == &edge.data);
- assert!(counter < expected_outgoing.len());
- debug!("counter={:?} expected={:?} edge_index={:?} edge={:?}",
- counter, expected_outgoing[counter], edge_index, edge);
- match expected_outgoing[counter] {
- (ref e, ref n) => {
- assert!(e == &edge.data);
- assert!(start_index == edge.source);
- assert!(n == graph.node_data(edge.target));
- }
- }
- counter += 1;
- }
- assert_eq!(counter, expected_outgoing.len());
-}
-
-#[test]
-fn each_adjacent_from_a() {
- let graph = create_graph();
- test_adjacent_edges(&graph, NodeIndex(0), "A",
- &[],
- &[("AB", "B")]);
-}
-
-#[test]
-fn each_adjacent_from_b() {
- let graph = create_graph();
- test_adjacent_edges(&graph, NodeIndex(1), "B",
- &[("FB", "F"), ("AB", "A"),],
- &[("BD", "D"), ("BC", "C"),]);
-}
-
-#[test]
-fn each_adjacent_from_c() {
- let graph = create_graph();
- test_adjacent_edges(&graph, NodeIndex(2), "C",
- &[("EC", "E"), ("BC", "B")],
- &[]);
-}
-
-#[test]
-fn each_adjacent_from_d() {
- let graph = create_graph();
- test_adjacent_edges(&graph, NodeIndex(3), "D",
- &[("BD", "B")],
- &[("DE", "E")]);
-}
--- /dev/null
+// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use graph::*;
+use std::fmt::Debug;
+
+type TestNode = Node<&'static str>;
+type TestEdge = Edge<&'static str>;
+type TestGraph = Graph<&'static str, &'static str>;
+
+fn create_graph() -> TestGraph {
+ let mut graph = Graph::new();
+
+ // Create a simple graph
+ //
+ // A -+> B --> C
+ // | | ^
+ // | v |
+ // F D --> E
+
+ let a = graph.add_node("A");
+ let b = graph.add_node("B");
+ let c = graph.add_node("C");
+ let d = graph.add_node("D");
+ let e = graph.add_node("E");
+ let f = graph.add_node("F");
+
+ graph.add_edge(a, b, "AB");
+ graph.add_edge(b, c, "BC");
+ graph.add_edge(b, d, "BD");
+ graph.add_edge(d, e, "DE");
+ graph.add_edge(e, c, "EC");
+ graph.add_edge(f, b, "FB");
+
+ return graph;
+}
+
+#[test]
+fn each_node() {
+ let graph = create_graph();
+ let expected = ["A", "B", "C", "D", "E", "F"];
+ graph.each_node(|idx, node| {
+ assert_eq!(&expected[idx.0], graph.node_data(idx));
+ assert_eq!(expected[idx.0], node.data);
+ true
+ });
+}
+
+#[test]
+fn each_edge() {
+ let graph = create_graph();
+ let expected = ["AB", "BC", "BD", "DE", "EC", "FB"];
+ graph.each_edge(|idx, edge| {
+ assert_eq!(&expected[idx.0], graph.edge_data(idx));
+ assert_eq!(expected[idx.0], edge.data);
+ true
+ });
+}
+
+fn test_adjacent_edges<N:PartialEq+Debug,E:PartialEq+Debug>(graph: &Graph<N,E>,
+ start_index: NodeIndex,
+ start_data: N,
+ expected_incoming: &[(E,N)],
+ expected_outgoing: &[(E,N)]) {
+ assert!(graph.node_data(start_index) == &start_data);
+
+ let mut counter = 0;
+ for (edge_index, edge) in graph.incoming_edges(start_index) {
+ assert!(graph.edge_data(edge_index) == &edge.data);
+ assert!(counter < expected_incoming.len());
+ debug!("counter={:?} expected={:?} edge_index={:?} edge={:?}",
+ counter, expected_incoming[counter], edge_index, edge);
+ match expected_incoming[counter] {
+ (ref e, ref n) => {
+ assert!(e == &edge.data);
+ assert!(n == graph.node_data(edge.source()));
+ assert!(start_index == edge.target);
+ }
+ }
+ counter += 1;
+ }
+ assert_eq!(counter, expected_incoming.len());
+
+ let mut counter = 0;
+ for (edge_index, edge) in graph.outgoing_edges(start_index) {
+ assert!(graph.edge_data(edge_index) == &edge.data);
+ assert!(counter < expected_outgoing.len());
+ debug!("counter={:?} expected={:?} edge_index={:?} edge={:?}",
+ counter, expected_outgoing[counter], edge_index, edge);
+ match expected_outgoing[counter] {
+ (ref e, ref n) => {
+ assert!(e == &edge.data);
+ assert!(start_index == edge.source);
+ assert!(n == graph.node_data(edge.target));
+ }
+ }
+ counter += 1;
+ }
+ assert_eq!(counter, expected_outgoing.len());
+}
+
+#[test]
+fn each_adjacent_from_a() {
+ let graph = create_graph();
+ test_adjacent_edges(&graph, NodeIndex(0), "A",
+ &[],
+ &[("AB", "B")]);
+}
+
+#[test]
+fn each_adjacent_from_b() {
+ let graph = create_graph();
+ test_adjacent_edges(&graph, NodeIndex(1), "B",
+ &[("FB", "F"), ("AB", "A"),],
+ &[("BD", "D"), ("BC", "C"),]);
+}
+
+#[test]
+fn each_adjacent_from_c() {
+ let graph = create_graph();
+ test_adjacent_edges(&graph, NodeIndex(2), "C",
+ &[("EC", "E"), ("BC", "B")],
+ &[]);
+}
+
+#[test]
+fn each_adjacent_from_d() {
+ let graph = create_graph();
+ test_adjacent_edges(&graph, NodeIndex(3), "D",
+ &[("BD", "B")],
+ &[("DE", "E")]);
+}
use snapshot_vec as sv;
#[cfg(test)]
-mod test;
+mod tests;
/// This trait is implemented by any type that can serve as a type
/// variable. We call such variables *unification keys*. For example,
+++ /dev/null
-// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-#![allow(non_snake_case)]
-
-extern crate test;
-use self::test::Bencher;
-use std::collections::HashSet;
-use unify::{UnifyKey, UnificationTable};
-
-#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq)]
-struct UnitKey(u32);
-
-impl UnifyKey for UnitKey {
- type Value = ();
- fn index(&self) -> u32 { self.0 }
- fn from_index(u: u32) -> UnitKey { UnitKey(u) }
- fn tag(_: Option<UnitKey>) -> &'static str { "UnitKey" }
-}
-
-#[test]
-fn basic() {
- let mut ut: UnificationTable<UnitKey> = UnificationTable::new();
- let k1 = ut.new_key(());
- let k2 = ut.new_key(());
- assert_eq!(ut.unioned(k1, k2), false);
- ut.union(k1, k2);
- assert_eq!(ut.unioned(k1, k2), true);
-}
-
-#[test]
-fn big_array() {
- let mut ut: UnificationTable<UnitKey> = UnificationTable::new();
- let mut keys = Vec::new();
- const MAX: usize = 1 << 15;
-
- for _ in 0..MAX {
- keys.push(ut.new_key(()));
- }
-
- for i in 1..MAX {
- let l = keys[i-1];
- let r = keys[i];
- ut.union(l, r);
- }
-
- for i in 0..MAX {
- assert!(ut.unioned(keys[0], keys[i]));
- }
-}
-
-#[bench]
-fn big_array_bench(b: &mut Bencher) {
- let mut ut: UnificationTable<UnitKey> = UnificationTable::new();
- let mut keys = Vec::new();
- const MAX: usize = 1 << 15;
-
- for _ in 0..MAX {
- keys.push(ut.new_key(()));
- }
-
-
- b.iter(|| {
- for i in 1..MAX {
- let l = keys[i-1];
- let r = keys[i];
- ut.union(l, r);
- }
-
- for i in 0..MAX {
- assert!(ut.unioned(keys[0], keys[i]));
- }
- })
-}
-
-#[test]
-fn even_odd() {
- let mut ut: UnificationTable<UnitKey> = UnificationTable::new();
- let mut keys = Vec::new();
- const MAX: usize = 1 << 10;
-
- for i in 0..MAX {
- let key = ut.new_key(());
- keys.push(key);
-
- if i >= 2 {
- ut.union(key, keys[i-2]);
- }
- }
-
- for i in 1..MAX {
- assert!(!ut.unioned(keys[i-1], keys[i]));
- }
-
- for i in 2..MAX {
- assert!(ut.unioned(keys[i-2], keys[i]));
- }
-}
-
-#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq)]
-struct IntKey(u32);
-
-impl UnifyKey for IntKey {
- type Value = Option<i32>;
- fn index(&self) -> u32 { self.0 }
- fn from_index(u: u32) -> IntKey { IntKey(u) }
- fn tag(_: Option<IntKey>) -> &'static str { "IntKey" }
-}
-
-/// Test unifying a key whose value is `Some(_)` with a key whose value is `None`.
-/// Afterwards both should be `Some(_)`.
-#[test]
-fn unify_key_Some_key_None() {
- let mut ut: UnificationTable<IntKey> = UnificationTable::new();
- let k1 = ut.new_key(Some(22));
- let k2 = ut.new_key(None);
- assert!(ut.unify_var_var(k1, k2).is_ok());
- assert_eq!(ut.probe(k2), Some(22));
- assert_eq!(ut.probe(k1), Some(22));
-}
-
-/// Test unifying a key whose value is `None` with a key whose value is `Some(_)`.
-/// Afterwards both should be `Some(_)`.
-#[test]
-fn unify_key_None_key_Some() {
- let mut ut: UnificationTable<IntKey> = UnificationTable::new();
- let k1 = ut.new_key(Some(22));
- let k2 = ut.new_key(None);
- assert!(ut.unify_var_var(k2, k1).is_ok());
- assert_eq!(ut.probe(k2), Some(22));
- assert_eq!(ut.probe(k1), Some(22));
-}
-
-/// Test unifying a key whose value is `Some(x)` with a key whose value is `Some(y)`.
-/// This should yield an error.
-#[test]
-fn unify_key_Some_x_key_Some_y() {
- let mut ut: UnificationTable<IntKey> = UnificationTable::new();
- let k1 = ut.new_key(Some(22));
- let k2 = ut.new_key(Some(23));
- assert_eq!(ut.unify_var_var(k1, k2), Err((22, 23)));
- assert_eq!(ut.unify_var_var(k2, k1), Err((23, 22)));
- assert_eq!(ut.probe(k1), Some(22));
- assert_eq!(ut.probe(k2), Some(23));
-}
-
-/// Test unifying a key whose value is `Some(x)` with a key whose value is `Some(x)`.
-/// This should be ok.
-#[test]
-fn unify_key_Some_x_key_Some_x() {
- let mut ut: UnificationTable<IntKey> = UnificationTable::new();
- let k1 = ut.new_key(Some(22));
- let k2 = ut.new_key(Some(22));
- assert!(ut.unify_var_var(k1, k2).is_ok());
- assert_eq!(ut.probe(k1), Some(22));
- assert_eq!(ut.probe(k2), Some(22));
-}
-
-/// Test unifying a key whose value is `None` with a value is `x`.
-/// Afterwards key should be `x`.
-#[test]
-fn unify_key_None_val() {
- let mut ut: UnificationTable<IntKey> = UnificationTable::new();
- let k1 = ut.new_key(None);
- assert!(ut.unify_var_value(k1, 22).is_ok());
- assert_eq!(ut.probe(k1), Some(22));
-}
-
-/// Test unifying a key whose value is `Some(x)` with the value `y`.
-/// This should yield an error.
-#[test]
-fn unify_key_Some_x_val_y() {
- let mut ut: UnificationTable<IntKey> = UnificationTable::new();
- let k1 = ut.new_key(Some(22));
- assert_eq!(ut.unify_var_value(k1, 23), Err((22, 23)));
- assert_eq!(ut.probe(k1), Some(22));
-}
-
-/// Test unifying a key whose value is `Some(x)` with the value `x`.
-/// This should be ok.
-#[test]
-fn unify_key_Some_x_val_x() {
- let mut ut: UnificationTable<IntKey> = UnificationTable::new();
- let k1 = ut.new_key(Some(22));
- assert!(ut.unify_var_value(k1, 22).is_ok());
- assert_eq!(ut.probe(k1), Some(22));
-}
-
--- /dev/null
+// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#![allow(non_snake_case)]
+
+extern crate test;
+use self::test::Bencher;
+use unify::{UnifyKey, UnificationTable};
+
+#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq)]
+struct UnitKey(u32);
+
+impl UnifyKey for UnitKey {
+ type Value = ();
+ fn index(&self) -> u32 { self.0 }
+ fn from_index(u: u32) -> UnitKey { UnitKey(u) }
+ fn tag(_: Option<UnitKey>) -> &'static str { "UnitKey" }
+}
+
+#[test]
+fn basic() {
+ let mut ut: UnificationTable<UnitKey> = UnificationTable::new();
+ let k1 = ut.new_key(());
+ let k2 = ut.new_key(());
+ assert_eq!(ut.unioned(k1, k2), false);
+ ut.union(k1, k2);
+ assert_eq!(ut.unioned(k1, k2), true);
+}
+
+#[test]
+fn big_array() {
+ let mut ut: UnificationTable<UnitKey> = UnificationTable::new();
+ let mut keys = Vec::new();
+ const MAX: usize = 1 << 15;
+
+ for _ in 0..MAX {
+ keys.push(ut.new_key(()));
+ }
+
+ for i in 1..MAX {
+ let l = keys[i-1];
+ let r = keys[i];
+ ut.union(l, r);
+ }
+
+ for i in 0..MAX {
+ assert!(ut.unioned(keys[0], keys[i]));
+ }
+}
+
+#[bench]
+fn big_array_bench(b: &mut Bencher) {
+ let mut ut: UnificationTable<UnitKey> = UnificationTable::new();
+ let mut keys = Vec::new();
+ const MAX: usize = 1 << 15;
+
+ for _ in 0..MAX {
+ keys.push(ut.new_key(()));
+ }
+
+ b.iter(|| {
+ for i in 1..MAX {
+ let l = keys[i-1];
+ let r = keys[i];
+ ut.union(l, r);
+ }
+
+ for i in 0..MAX {
+ assert!(ut.unioned(keys[0], keys[i]));
+ }
+ })
+}
+
+#[test]
+fn even_odd() {
+ let mut ut: UnificationTable<UnitKey> = UnificationTable::new();
+ let mut keys = Vec::new();
+ const MAX: usize = 1 << 10;
+
+ for i in 0..MAX {
+ let key = ut.new_key(());
+ keys.push(key);
+
+ if i >= 2 {
+ ut.union(key, keys[i-2]);
+ }
+ }
+
+ for i in 1..MAX {
+ assert!(!ut.unioned(keys[i-1], keys[i]));
+ }
+
+ for i in 2..MAX {
+ assert!(ut.unioned(keys[i-2], keys[i]));
+ }
+}
+
+#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq)]
+struct IntKey(u32);
+
+impl UnifyKey for IntKey {
+ type Value = Option<i32>;
+ fn index(&self) -> u32 { self.0 }
+ fn from_index(u: u32) -> IntKey { IntKey(u) }
+ fn tag(_: Option<IntKey>) -> &'static str { "IntKey" }
+}
+
+/// Test unifying a key whose value is `Some(_)` with a key whose value is `None`.
+/// Afterwards both should be `Some(_)`.
+#[test]
+fn unify_key_Some_key_None() {
+ let mut ut: UnificationTable<IntKey> = UnificationTable::new();
+ let k1 = ut.new_key(Some(22));
+ let k2 = ut.new_key(None);
+ assert!(ut.unify_var_var(k1, k2).is_ok());
+ assert_eq!(ut.probe(k2), Some(22));
+ assert_eq!(ut.probe(k1), Some(22));
+}
+
+/// Test unifying a key whose value is `None` with a key whose value is `Some(_)`.
+/// Afterwards both should be `Some(_)`.
+#[test]
+fn unify_key_None_key_Some() {
+ let mut ut: UnificationTable<IntKey> = UnificationTable::new();
+ let k1 = ut.new_key(Some(22));
+ let k2 = ut.new_key(None);
+ assert!(ut.unify_var_var(k2, k1).is_ok());
+ assert_eq!(ut.probe(k2), Some(22));
+ assert_eq!(ut.probe(k1), Some(22));
+}
+
+/// Test unifying a key whose value is `Some(x)` with a key whose value is `Some(y)`.
+/// This should yield an error.
+#[test]
+fn unify_key_Some_x_key_Some_y() {
+ let mut ut: UnificationTable<IntKey> = UnificationTable::new();
+ let k1 = ut.new_key(Some(22));
+ let k2 = ut.new_key(Some(23));
+ assert_eq!(ut.unify_var_var(k1, k2), Err((22, 23)));
+ assert_eq!(ut.unify_var_var(k2, k1), Err((23, 22)));
+ assert_eq!(ut.probe(k1), Some(22));
+ assert_eq!(ut.probe(k2), Some(23));
+}
+
+/// Test unifying a key whose value is `Some(x)` with a key whose value is `Some(x)`.
+/// This should be ok.
+#[test]
+fn unify_key_Some_x_key_Some_x() {
+ let mut ut: UnificationTable<IntKey> = UnificationTable::new();
+ let k1 = ut.new_key(Some(22));
+ let k2 = ut.new_key(Some(22));
+ assert!(ut.unify_var_var(k1, k2).is_ok());
+ assert_eq!(ut.probe(k1), Some(22));
+ assert_eq!(ut.probe(k2), Some(22));
+}
+
+/// Test unifying a key whose value is `None` with the value `x`.
+/// Afterwards the key's value should be `Some(x)`.
+#[test]
+fn unify_key_None_val() {
+ let mut ut: UnificationTable<IntKey> = UnificationTable::new();
+ let k1 = ut.new_key(None);
+ assert!(ut.unify_var_value(k1, 22).is_ok());
+ assert_eq!(ut.probe(k1), Some(22));
+}
+
+/// Test unifying a key whose value is `Some(x)` with the value `y`.
+/// This should yield an error.
+#[test]
+fn unify_key_Some_x_val_y() {
+ let mut ut: UnificationTable<IntKey> = UnificationTable::new();
+ let k1 = ut.new_key(Some(22));
+ assert_eq!(ut.unify_var_value(k1, 23), Err((22, 23)));
+ assert_eq!(ut.probe(k1), Some(22));
+}
+
+/// Test unifying a key whose value is `Some(x)` with the value `x`.
+/// This should be ok.
+#[test]
+fn unify_key_Some_x_val_x() {
+ let mut ut: UnificationTable<IntKey> = UnificationTable::new();
+ let k1 = ut.new_key(Some(22));
+ assert!(ut.unify_var_value(k1, 22).is_ok());
+ assert_eq!(ut.probe(k1), Some(22));
+}
+
use rustc_lint;
use rustc_resolve as resolve;
use rustc_typeck::middle::lang_items;
+use rustc_typeck::middle::free_region::FreeRegionMap;
use rustc_typeck::middle::region::{self, CodeExtent, DestructionScopeData};
use rustc_typeck::middle::resolve_lifetime;
use rustc_typeck::middle::stability;
stability::Index::new(krate));
let infcx = infer::new_infer_ctxt(&tcx);
body(Env { infcx: &infcx });
- infcx.resolve_regions_and_report_errors(ast::CRATE_NODE_ID);
+ let free_regions = FreeRegionMap::new();
+ infcx.resolve_regions_and_report_errors(&free_regions, ast::CRATE_NODE_ID);
assert_eq!(tcx.sess.err_count(), expected_err_count);
}
use middle::astconv_util::{prim_ty_to_ty, check_path_args, NO_TPS, NO_REGIONS};
use middle::const_eval;
use middle::def;
+use middle::implicator::object_region_bounds;
use middle::resolve_lifetime as rl;
use middle::privacy::{AllPublic, LastMod};
use middle::subst::{FnSpace, TypeSpace, SelfSpace, Subst, Substs};
return r;
}
-/// Given an object type like `SomeTrait+Send`, computes the lifetime
-/// bounds that must hold on the elided self type. These are derived
-/// from the declarations of `SomeTrait`, `Send`, and friends -- if
-/// they declare `trait SomeTrait : 'static`, for example, then
-/// `'static` would appear in the list. The hard work is done by
-/// `ty::required_region_bounds`, see that for more information.
-pub fn object_region_bounds<'tcx>(
- tcx: &ty::ctxt<'tcx>,
- principal: &ty::PolyTraitRef<'tcx>,
- others: ty::BuiltinBounds)
- -> Vec<ty::Region>
-{
- // Since we don't actually *know* the self type for an object,
- // this "open(err)" serves as a kind of dummy standin -- basically
- // a skolemized type.
- let open_ty = ty::mk_infer(tcx, ty::FreshTy(0));
-
- // Note that we preserve the overall binding levels here.
- assert!(!open_ty.has_escaping_regions());
- let substs = tcx.mk_substs(principal.0.substs.with_self_ty(open_ty));
- let trait_refs = vec!(ty::Binder(Rc::new(ty::TraitRef::new(principal.0.def_id, substs))));
-
- let param_bounds = ty::ParamBounds {
- region_bounds: Vec::new(),
- builtin_bounds: others,
- trait_bounds: trait_refs,
- projection_bounds: Vec::new(), // not relevant to computing region bounds
- };
-
- let predicates = ty::predicates(tcx, open_ty, ¶m_bounds);
- ty::required_region_bounds(tcx, open_ty, predicates)
-}
-
pub struct PartitionedBounds<'a> {
pub builtin_bounds: ty::BuiltinBounds,
pub trait_bounds: Vec<&'a ast::PolyTraitRef>,
// option. This file may not be copied, modified, or distributed
// except according to those terms.
+use middle::free_region::FreeRegionMap;
use middle::infer;
use middle::traits;
use middle::ty::{self};
Ok(_) => {}
}
- // Finally, resolve all regions. This catches wily misuses of lifetime
- // parameters.
- infcx.resolve_regions_and_report_errors(impl_m_body_id);
+ // Finally, resolve all regions. This catches wily misuses of
+ // lifetime parameters. We have to build up a plausible lifetime
+ // environment based on what we find in the trait. We could also
+ // include the obligations derived from the method argument types,
+ // but I don't think it's necessary -- after all, those are still
+ // in effect when type-checking the body, and all the
+ // where-clauses in the header etc should be implied by the trait
+ // anyway, so it shouldn't be needed there either. Anyway, we can
+ // always add more relations later (it's backwards compat).
+ let mut free_regions = FreeRegionMap::new();
+ free_regions.relate_free_regions_from_predicates(tcx, &trait_param_env.caller_bounds);
+
+ infcx.resolve_regions_and_report_errors(&free_regions, impl_m_body_id);
fn check_region_bounds_on_impl_method<'tcx>(tcx: &ty::ctxt<'tcx>,
span: Span,
+++ /dev/null
-// Copyright 2012 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-// #![warn(deprecated_mode)]
-
-use astconv::object_region_bounds;
-use middle::infer::{InferCtxt, GenericKind};
-use middle::subst::Substs;
-use middle::traits;
-use middle::ty::{self, ToPolyTraitRef, Ty};
-use middle::ty_fold::{TypeFoldable, TypeFolder};
-
-use std::rc::Rc;
-use syntax::ast;
-use syntax::codemap::Span;
-
-use util::common::ErrorReported;
-use util::nodemap::FnvHashSet;
-use util::ppaux::Repr;
-
-// Helper functions related to manipulating region types.
-
-pub enum Implication<'tcx> {
- RegionSubRegion(Option<Ty<'tcx>>, ty::Region, ty::Region),
- RegionSubGeneric(Option<Ty<'tcx>>, ty::Region, GenericKind<'tcx>),
- RegionSubClosure(Option<Ty<'tcx>>, ty::Region, ast::DefId, &'tcx Substs<'tcx>),
- Predicate(ast::DefId, ty::Predicate<'tcx>),
-}
-
-struct Implicator<'a, 'tcx: 'a> {
- infcx: &'a InferCtxt<'a,'tcx>,
- closure_typer: &'a (ty::ClosureTyper<'tcx>+'a),
- body_id: ast::NodeId,
- stack: Vec<(ty::Region, Option<Ty<'tcx>>)>,
- span: Span,
- out: Vec<Implication<'tcx>>,
- visited: FnvHashSet<Ty<'tcx>>,
-}
-
-/// This routine computes the well-formedness constraints that must hold for the type `ty` to
-/// appear in a context with lifetime `outer_region`
-pub fn implications<'a,'tcx>(
- infcx: &'a InferCtxt<'a,'tcx>,
- closure_typer: &ty::ClosureTyper<'tcx>,
- body_id: ast::NodeId,
- ty: Ty<'tcx>,
- outer_region: ty::Region,
- span: Span)
- -> Vec<Implication<'tcx>>
-{
- debug!("implications(body_id={}, ty={}, outer_region={})",
- body_id,
- ty.repr(closure_typer.tcx()),
- outer_region.repr(closure_typer.tcx()));
-
- let mut stack = Vec::new();
- stack.push((outer_region, None));
- let mut wf = Implicator { closure_typer: closure_typer,
- infcx: infcx,
- body_id: body_id,
- span: span,
- stack: stack,
- out: Vec::new(),
- visited: FnvHashSet() };
- wf.accumulate_from_ty(ty);
- debug!("implications: out={}", wf.out.repr(closure_typer.tcx()));
- wf.out
-}
-
-impl<'a, 'tcx> Implicator<'a, 'tcx> {
- fn tcx(&self) -> &'a ty::ctxt<'tcx> {
- self.infcx.tcx
- }
-
- fn accumulate_from_ty(&mut self, ty: Ty<'tcx>) {
- debug!("accumulate_from_ty(ty={})",
- ty.repr(self.tcx()));
-
- // When expanding out associated types, we can visit a cyclic
- // set of types. Issue #23003.
- if !self.visited.insert(ty) {
- return;
- }
-
- match ty.sty {
- ty::ty_bool |
- ty::ty_char |
- ty::ty_int(..) |
- ty::ty_uint(..) |
- ty::ty_float(..) |
- ty::ty_bare_fn(..) |
- ty::ty_err |
- ty::ty_str => {
- // No borrowed content reachable here.
- }
-
- ty::ty_closure(def_id, substs) => {
- let &(r_a, opt_ty) = self.stack.last().unwrap();
- self.out.push(Implication::RegionSubClosure(opt_ty, r_a, def_id, substs));
- }
-
- ty::ty_trait(ref t) => {
- let required_region_bounds =
- object_region_bounds(self.tcx(), &t.principal, t.bounds.builtin_bounds);
- self.accumulate_from_object_ty(ty, t.bounds.region_bound, required_region_bounds)
- }
-
- ty::ty_enum(def_id, substs) |
- ty::ty_struct(def_id, substs) => {
- let item_scheme = ty::lookup_item_type(self.tcx(), def_id);
- self.accumulate_from_adt(ty, def_id, &item_scheme.generics, substs)
- }
-
- ty::ty_vec(t, _) |
- ty::ty_ptr(ty::mt { ty: t, .. }) |
- ty::ty_uniq(t) => {
- self.accumulate_from_ty(t)
- }
-
- ty::ty_rptr(r_b, mt) => {
- self.accumulate_from_rptr(ty, *r_b, mt.ty);
- }
-
- ty::ty_param(p) => {
- self.push_param_constraint_from_top(p);
- }
-
- ty::ty_projection(ref data) => {
- // `<T as TraitRef<..>>::Name`
-
- self.push_projection_constraint_from_top(data);
- }
-
- ty::ty_tup(ref tuptys) => {
- for &tupty in tuptys {
- self.accumulate_from_ty(tupty);
- }
- }
-
- ty::ty_infer(_) => {
- // This should not happen, BUT:
- //
- // Currently we uncover region relationships on
- // entering the fn check. We should do this after
- // the fn check, then we can call this case a bug().
- }
- }
- }
-
- fn accumulate_from_rptr(&mut self,
- ty: Ty<'tcx>,
- r_b: ty::Region,
- ty_b: Ty<'tcx>) {
- // We are walking down a type like this, and current
- // position is indicated by caret:
- //
- // &'a &'b ty_b
- // ^
- //
- // At this point, top of stack will be `'a`. We must
- // require that `'a <= 'b`.
-
- self.push_region_constraint_from_top(r_b);
-
- // Now we push `'b` onto the stack, because it must
- // constrain any borrowed content we find within `T`.
-
- self.stack.push((r_b, Some(ty)));
- self.accumulate_from_ty(ty_b);
- self.stack.pop().unwrap();
- }
-
- /// Pushes a constraint that `r_b` must outlive the top region on the stack.
- fn push_region_constraint_from_top(&mut self,
- r_b: ty::Region) {
-
- // Indicates that we have found borrowed content with a lifetime
- // of at least `r_b`. This adds a constraint that `r_b` must
- // outlive the region `r_a` on top of the stack.
- //
- // As an example, imagine walking a type like:
- //
- // &'a &'b T
- // ^
- //
- // when we hit the inner pointer (indicated by caret), `'a` will
- // be on top of stack and `'b` will be the lifetime of the content
- // we just found. So we add constraint that `'a <= 'b`.
-
- let &(r_a, opt_ty) = self.stack.last().unwrap();
- self.push_sub_region_constraint(opt_ty, r_a, r_b);
- }
-
- /// Pushes a constraint that `r_a <= r_b`, due to `opt_ty`
- fn push_sub_region_constraint(&mut self,
- opt_ty: Option<Ty<'tcx>>,
- r_a: ty::Region,
- r_b: ty::Region) {
- self.out.push(Implication::RegionSubRegion(opt_ty, r_a, r_b));
- }
-
- /// Pushes a constraint that `param_ty` must outlive the top region on the stack.
- fn push_param_constraint_from_top(&mut self,
- param_ty: ty::ParamTy) {
- let &(region, opt_ty) = self.stack.last().unwrap();
- self.push_param_constraint(region, opt_ty, param_ty);
- }
-
- /// Pushes a constraint that `projection_ty` must outlive the top region on the stack.
- fn push_projection_constraint_from_top(&mut self,
- projection_ty: &ty::ProjectionTy<'tcx>) {
- let &(region, opt_ty) = self.stack.last().unwrap();
- self.out.push(Implication::RegionSubGeneric(
- opt_ty, region, GenericKind::Projection(projection_ty.clone())));
- }
-
- /// Pushes a constraint that `region <= param_ty`, due to `opt_ty`
- fn push_param_constraint(&mut self,
- region: ty::Region,
- opt_ty: Option<Ty<'tcx>>,
- param_ty: ty::ParamTy) {
- self.out.push(Implication::RegionSubGeneric(
- opt_ty, region, GenericKind::Param(param_ty)));
- }
-
- fn accumulate_from_adt(&mut self,
- ty: Ty<'tcx>,
- def_id: ast::DefId,
- _generics: &ty::Generics<'tcx>,
- substs: &Substs<'tcx>)
- {
- let predicates =
- ty::lookup_predicates(self.tcx(), def_id).instantiate(self.tcx(), substs);
- let predicates = match self.fully_normalize(&predicates) {
- Ok(predicates) => predicates,
- Err(ErrorReported) => { return; }
- };
-
- for predicate in predicates.predicates.as_slice() {
- match *predicate {
- ty::Predicate::Trait(ref data) => {
- self.accumulate_from_assoc_types_transitive(data);
- }
- ty::Predicate::Equate(..) => { }
- ty::Predicate::Projection(..) => { }
- ty::Predicate::RegionOutlives(ref data) => {
- match ty::no_late_bound_regions(self.tcx(), data) {
- None => { }
- Some(ty::OutlivesPredicate(r_a, r_b)) => {
- self.push_sub_region_constraint(Some(ty), r_b, r_a);
- }
- }
- }
- ty::Predicate::TypeOutlives(ref data) => {
- match ty::no_late_bound_regions(self.tcx(), data) {
- None => { }
- Some(ty::OutlivesPredicate(ty_a, r_b)) => {
- self.stack.push((r_b, Some(ty)));
- self.accumulate_from_ty(ty_a);
- self.stack.pop().unwrap();
- }
- }
- }
- }
- }
-
- let obligations = predicates.predicates
- .into_iter()
- .map(|pred| Implication::Predicate(def_id, pred));
- self.out.extend(obligations);
-
- let variances = ty::item_variances(self.tcx(), def_id);
-
-        for (&region, &variance) in substs.regions().iter().zip(variances.regions.iter()) {
- match variance {
- ty::Contravariant | ty::Invariant => {
- // If any data with this lifetime is reachable
- // within, it must be at least contravariant.
- self.push_region_constraint_from_top(region)
- }
- ty::Covariant | ty::Bivariant => { }
- }
- }
-
- for (&ty, &variance) in substs.types.iter().zip(variances.types.iter()) {
- match variance {
- ty::Covariant | ty::Invariant => {
- // If any data of this type is reachable within,
- // it must be at least covariant.
- self.accumulate_from_ty(ty);
- }
- ty::Contravariant | ty::Bivariant => { }
- }
- }
- }
-
- /// Given that there is a requirement that `Foo<X> : 'a`, where
- /// `Foo` is declared like `struct Foo<T> where T : SomeTrait`,
- /// this code finds all the associated types defined in
- /// `SomeTrait` (and supertraits) and adds a requirement that `<X
- /// as SomeTrait>::N : 'a` (where `N` is some associated type
- /// defined in `SomeTrait`). This rule only applies to
- /// trait-bounds that are not higher-ranked, because we cannot
- /// project out of a HRTB. This rule helps code using associated
- /// types to compile, see Issue #22246 for an example.
- fn accumulate_from_assoc_types_transitive(&mut self,
- data: &ty::PolyTraitPredicate<'tcx>)
- {
- debug!("accumulate_from_assoc_types_transitive({})",
- data.repr(self.tcx()));
-
- for poly_trait_ref in traits::supertraits(self.tcx(), data.to_poly_trait_ref()) {
- match ty::no_late_bound_regions(self.tcx(), &poly_trait_ref) {
- Some(trait_ref) => { self.accumulate_from_assoc_types(trait_ref); }
- None => { }
- }
- }
- }
-
- fn accumulate_from_assoc_types(&mut self,
- trait_ref: Rc<ty::TraitRef<'tcx>>)
- {
- debug!("accumulate_from_assoc_types({})",
- trait_ref.repr(self.tcx()));
-
- let trait_def_id = trait_ref.def_id;
- let trait_def = ty::lookup_trait_def(self.tcx(), trait_def_id);
- let assoc_type_projections: Vec<_> =
- trait_def.associated_type_names
- .iter()
- .map(|&name| ty::mk_projection(self.tcx(), trait_ref.clone(), name))
- .collect();
- debug!("accumulate_from_assoc_types: assoc_type_projections={}",
- assoc_type_projections.repr(self.tcx()));
- let tys = match self.fully_normalize(&assoc_type_projections) {
- Ok(tys) => { tys }
- Err(ErrorReported) => { return; }
- };
- for ty in tys {
- self.accumulate_from_ty(ty);
- }
- }
-
- fn accumulate_from_object_ty(&mut self,
- ty: Ty<'tcx>,
- region_bound: ty::Region,
- required_region_bounds: Vec<ty::Region>)
- {
- // Imagine a type like this:
- //
- // trait Foo { }
- // trait Bar<'c> : 'c { }
- //
- // &'b (Foo+'c+Bar<'d>)
- // ^
- //
- // In this case, the following relationships must hold:
- //
- // 'b <= 'c
- // 'd <= 'c
- //
- // The first conditions is due to the normal region pointer
- // rules, which say that a reference cannot outlive its
- // referent.
- //
- // The final condition may be a bit surprising. In particular,
- // you may expect that it would have been `'c <= 'd`, since
- // usually lifetimes of outer things are conservative
- // approximations for inner things. However, it works somewhat
- // differently with trait objects: here the idea is that if the
- // user specifies a region bound (`'c`, in this case) it is the
- // "master bound" that *implies* that bounds from other traits are
- // all met. (Remember that *all bounds* in a type like
- // `Foo+Bar+Zed` must be met, not just one, hence if we write
- // `Foo<'x>+Bar<'y>`, we know that the type outlives *both* 'x and
- // 'y.)
- //
- // Note: in fact we only permit builtin traits, not `Bar<'d>`, I
- // am looking forward to the future here.
-
- // The content of this object type must outlive
- // `bounds.region_bound`:
- let r_c = region_bound;
- self.push_region_constraint_from_top(r_c);
-
- // And then, in turn, to be well-formed, the
- // `region_bound` that user specified must imply the
- // region bounds required from all of the trait types:
- for &r_d in &required_region_bounds {
- // Each of these is an instance of the `'c <= 'b`
- // constraint above
- self.out.push(Implication::RegionSubRegion(Some(ty), r_d, r_c));
- }
- }
-
- fn fully_normalize<T>(&self, value: &T) -> Result<T,ErrorReported>
- where T : TypeFoldable<'tcx> + ty::HasProjectionTypes + Clone + Repr<'tcx>
- {
- let value =
- traits::fully_normalize(self.infcx,
- self.closure_typer,
- traits::ObligationCause::misc(self.span, self.body_id),
- value);
- match value {
- Ok(value) => Ok(value),
- Err(errors) => {
- // I don't like reporting these errors here, but I
- // don't know where else to report them just now. And
- // I don't really expect errors to arise here
- // frequently. I guess the best option would be to
- // propagate them out.
- traits::report_fulfillment_errors(self.infcx, &errors);
- Err(ErrorReported)
- }
- }
- }
-}
-
-impl<'tcx> Repr<'tcx> for Implication<'tcx> {
- fn repr(&self, tcx: &ty::ctxt<'tcx>) -> String {
- match *self {
- Implication::RegionSubRegion(_, ref r_a, ref r_b) => {
- format!("RegionSubRegion({}, {})",
- r_a.repr(tcx),
- r_b.repr(tcx))
- }
-
- Implication::RegionSubGeneric(_, ref r, ref p) => {
- format!("RegionSubGeneric({}, {})",
- r.repr(tcx),
- p.repr(tcx))
- }
-
- Implication::RegionSubClosure(_, ref a, ref b, ref c) => {
- format!("RegionSubClosure({}, {}, {})",
- a.repr(tcx),
- b.repr(tcx),
- c.repr(tcx))
- }
-
- Implication::Predicate(ref def_id, ref p) => {
- format!("Predicate({}, {})",
- def_id.repr(tcx),
- p.repr(tcx))
- }
- }
- }
-}
pub mod _match;
pub mod vtable;
pub mod writeback;
-pub mod implicator;
pub mod regionck;
pub mod coercion;
pub mod demand;
use astconv::AstConv;
use check::dropck;
use check::FnCtxt;
-use check::implicator;
use check::vtable;
+use middle::free_region::FreeRegionMap;
+use middle::implicator;
use middle::mem_categorization as mc;
use middle::region::CodeExtent;
use middle::subst::Substs;
pub fn regionck_item(fcx: &FnCtxt, item: &ast::Item) {
let mut rcx = Rcx::new(fcx, RepeatingScope(item.id), item.id, Subject(item.id));
+ let tcx = fcx.tcx();
+ rcx.free_region_map.relate_free_regions_from_predicates(tcx, &fcx.inh.param_env.caller_bounds);
rcx.visit_region_obligations(item.id);
rcx.resolve_regions_and_report_errors();
}
blk: &ast::Block) {
debug!("regionck_fn(id={})", fn_id);
let mut rcx = Rcx::new(fcx, RepeatingScope(blk.id), blk.id, Subject(fn_id));
+
if fcx.err_count_since_creation() == 0 {
// regionck assumes typeck succeeded
rcx.visit_fn_body(fn_id, decl, blk, fn_span);
}
+ let tcx = fcx.tcx();
+ rcx.free_region_map.relate_free_regions_from_predicates(tcx, &fcx.inh.param_env.caller_bounds);
+
rcx.resolve_regions_and_report_errors();
+
+ // For the top-level fn, store the free-region-map. We don't store
+ // any map for closures; they just share the same map as the
+ // function that created them.
+ fcx.tcx().store_free_region_map(fn_id, rcx.free_region_map);
}
/// Checks that the types in `component_tys` are well-formed. This will add constraints into the
region_bound_pairs: Vec<(ty::Region, GenericKind<'tcx>)>,
+ free_region_map: FreeRegionMap,
+
// id of innermost fn body id
body_id: ast::NodeId,
repeating_scope: initial_repeating_scope,
body_id: initial_body_id,
subject: subject,
- region_bound_pairs: Vec::new()
+ region_bound_pairs: Vec::new(),
+ free_region_map: FreeRegionMap::new(),
}
}
}
};
- let len = self.region_bound_pairs.len();
+ let old_region_bounds_pairs_len = self.region_bound_pairs.len();
+
let old_body_id = self.set_body_id(body.id);
self.relate_free_regions(&fn_sig[..], body.id, span);
link_fn_args(self, CodeExtent::from_node_id(body.id), &fn_decl.inputs[..]);
self.visit_block(body);
self.visit_region_obligations(body.id);
- self.region_bound_pairs.truncate(len);
+
+ self.region_bound_pairs.truncate(old_region_bounds_pairs_len);
+
self.set_body_id(old_body_id);
}
let body_scope = ty::ReScope(body_scope);
let implications = implicator::implications(self.fcx.infcx(), self.fcx, body_id,
ty, body_scope, span);
+
+ // Record any relations between free regions that we observe into the free-region-map.
+ self.free_region_map.relate_free_regions_from_implications(tcx, &implications);
+
+ // But also record other relationships, such as `T:'x`,
+ // that don't go into the free-region-map but which we use
+ // here.
for implication in implications {
debug!("implication: {}", implication.repr(tcx));
match implication {
- implicator::Implication::RegionSubRegion(_,
- ty::ReFree(free_a),
- ty::ReFree(free_b)) => {
- tcx.region_maps.relate_free_regions(free_a, free_b);
- }
implicator::Implication::RegionSubRegion(_,
ty::ReFree(free_a),
ty::ReInfer(ty::ReVar(vid_b))) => {
}
};
- self.fcx.infcx().resolve_regions_and_report_errors(subject_node_id);
+ self.fcx.infcx().resolve_regions_and_report_errors(&self.free_region_map,
+ subject_node_id);
}
}
use middle::def;
use constrained_type_params as ctp;
use middle::lang_items::SizedTraitLangItem;
+use middle::free_region::FreeRegionMap;
use middle::region;
use middle::resolve_lifetime;
use middle::subst::{Substs, FnSpace, ParamSpace, SelfSpace, TypeSpace, VecPerParamSpace};
format!("mismatched self type: expected `{}`",
ppaux::ty_to_string(tcx, required_type))
}));
- infcx.resolve_regions_and_report_errors(body_id);
+
+    // We could conceivably add more free-region relations here,
+ // but since this code is just concerned with checking that
+ // the `&Self` types etc match up, it's not really necessary.
+ // It would just allow people to be more approximate in some
+ // cases. In any case, we can do it later as we feel the need;
+ // I'd like this function to go away eventually.
+ let free_regions = FreeRegionMap::new();
+
+ infcx.resolve_regions_and_report_errors(&free_regions, body_id);
}
fn liberate_early_bound_regions<'tcx,T>(
$(document).on("click", ".collapse-toggle", function() {
var toggle = $(this);
var relatedDoc = toggle.parent().next();
+ if (relatedDoc.is(".stability")) {
+ relatedDoc = relatedDoc.next();
+ }
if (relatedDoc.is(".docblock")) {
if (relatedDoc.is(":visible")) {
relatedDoc.slideUp({duration:'fast', easing:'linear'});
.html("[<span class='inner'>-</span>]");
$(".method").each(function() {
- if ($(this).next().is(".docblock")) {
- $(this).children().first().after(toggle.clone());
- }
+ if ($(this).next().is(".docblock") ||
+ ($(this).next().is(".stability") && $(this).next().next().is(".docblock"))) {
+ $(this).children().first().after(toggle.clone());
+ }
});
var mainToggle =
}
#[cfg(test)]
-mod test {
+mod tests {
use super::{TocBuilder, Toc, TocEntry};
#[test]
}
impl DynamicLibrary {
- // FIXME (#12938): Until DST lands, we cannot decompose &str into
- // & and str, so we cannot usefully take ToCStr arguments by
- // reference (without forcing an additional & around &str). So we
- // are instead temporarily adding an instance for &Path, so that
- // we can take ToCStr as owned. When DST lands, the &Path instance
- // should be removed, and arguments bound by ToCStr should be
- // passed by reference. (Here: in the `open` method.)
-
/// Lazily open a dynamic library. When passed None it gives a
/// handle to the calling process
pub fn open(filename: Option<&Path>) -> Result<DynamicLibrary, String> {
}
#[cfg(all(test, not(target_os = "ios")))]
-mod test {
+mod tests {
use super::*;
use prelude::v1::*;
use libc;
///
/// This handle implements the `Read` trait, but beware that concurrent reads
/// of `Stdin` must be executed with care.
+///
+/// Created by the function `io::stdin()`.
#[stable(feature = "rust1", since = "1.0.0")]
pub struct Stdin {
inner: Arc<Mutex<BufReader<StdinRaw>>>,
/// Each handle shares a global buffer of data to be written to the standard
/// output stream. Access is also synchronized via a lock and explicit control
/// over locking is available via the `lock` method.
+///
+/// Created by the function `io::stdout()`.
#[stable(feature = "rust1", since = "1.0.0")]
pub struct Stdout {
// FIXME: this should be LineWriter or BufWriter depending on the state of
}
#[cfg(test)]
-mod test {
+mod tests {
use thread;
use super::*;
}
#[cfg(test)]
-mod test {
+mod tests {
use prelude::v1::*;
use io::prelude::*;
///
/// Path::new("foo.txt");
/// ```
+ ///
+ /// You can create `Path`s from `String`s, or even other `Path`s:
+ ///
+ /// ```
+ /// use std::path::Path;
+ ///
+ /// let s = String::from("bar.txt");
+ /// let p = Path::new(&s);
+ /// Path::new(&p);
+ /// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn new<S: AsRef<OsStr> + ?Sized>(s: &S) -> &Path {
unsafe { mem::transmute(s.as_ref()) }
}
#[cfg(test)]
-mod test {
+mod tests {
use prelude::v1::*;
use sync::mpsc::channel;
}
#[cfg(test)]
-mod test {
+mod tests {
use prelude::v1::*;
use super::ReaderRng;
}
#[cfg(test)]
-mod test {
+mod tests {
use prelude::v1::*;
use sys_common;
macro_rules! t { ($a:expr, $b:expr) => ({
}
#[cfg(test)]
-mod test {
+mod tests {
use prelude::v1::*;
use sync::mpsc::channel;
use sync::Future;
}
#[cfg(test)]
-mod test {
+mod tests {
use prelude::v1::*;
use std::env;
#[cfg(test)]
#[allow(unused_imports)]
-mod test {
+mod tests {
use prelude::v1::*;
use thread;
}
#[cfg(test)]
-mod test {
+mod tests {
use prelude::v1::*;
use sync::Arc;
}
#[cfg(test)]
-mod test {
+mod tests {
use prelude::v1::*;
use sync::mpsc::channel;
}
#[cfg(test)]
-mod test {
+mod tests {
use prelude::v1::*;
use thread;
setsockopt(&self.inner, libc::IPPROTO_TCP, libc::TCP_KEEPALIVE,
seconds as c_int)
}
- #[cfg(any(target_os = "freebsd", target_os = "dragonfly"))]
+ #[cfg(any(target_os = "freebsd",
+ target_os = "dragonfly",
+ target_os = "linux"))]
fn set_tcp_keepalive(&self, seconds: u32) -> io::Result<()> {
setsockopt(&self.inner, libc::IPPROTO_TCP, libc::TCP_KEEPIDLE,
seconds as c_int)
}
+
#[cfg(not(any(target_os = "macos",
target_os = "ios",
target_os = "freebsd",
- target_os = "dragonfly")))]
+ target_os = "dragonfly",
+ target_os = "linux")))]
fn set_tcp_keepalive(&self, _seconds: u32) -> io::Result<()> {
Ok(())
}
#[cfg(test)]
-mod test {
+mod tests {
use prelude::v1::*;
use sys_common::remutex::{ReentrantMutex, ReentrantMutexGuard};
use cell::RefCell;
/// # Ok(())
/// # }
/// ```
+ #[stable(feature = "rust1", since = "1.0.0")]
pub fn symlink<P: AsRef<Path>, Q: AsRef<Path>>(src: P, dst: Q) -> io::Result<()>
{
sys::fs2::symlink(src.as_ref(), dst.as_ref())
/// # Ok(())
/// # }
/// ```
+ #[stable(feature = "rust1", since = "1.0.0")]
pub fn symlink_file<P: AsRef<Path>, Q: AsRef<Path>>(src: P, dst: Q)
-> io::Result<()>
{
/// # Ok(())
/// # }
/// ```
+ #[stable(feature = "rust1", since = "1.0.0")]
pub fn symlink_dir<P: AsRef<Path>, Q: AsRef<Path>> (src: P, dst: Q)
-> io::Result<()>
{
////////////////////////////////////////////////////////////////////////////////
#[cfg(test)]
-mod test {
+mod tests {
use prelude::v1::*;
use any::Any;
}
#[cfg(test)]
-mod test {
+mod tests {
use serialize;
use super::*;
}
#[cfg(test)]
-mod test {
+mod tests {
use ast::*;
use super::*;
//
#[cfg(test)]
-mod test {
+mod tests {
use super::*;
use std::rc::Rc;
#[cfg(test)]
-mod test {
+mod tests {
use super::{pattern_bindings, expand_crate};
use super::{PatIdentFinder, IdentRenamer, PatIdentRenamer, ExpansionConfig};
use ast;
are reserved for internal compiler diagnostics");
} else if name.starts_with("derive_") {
self.gate_feature("custom_derive", attr.span,
- "attributes of the form `#[derive_*]` are reserved
+ "attributes of the form `#[derive_*]` are reserved \
for the compiler");
} else {
self.gate_feature("custom_attribute", attr.span,
pattern.span,
"multiple-element slice matches anywhere \
but at the end of a slice (e.g. \
- `[0, ..xs, 0]` are experimental")
+ `[0, ..xs, 0]`) are experimental")
}
ast::PatVec(..) => {
self.gate_feature("slice_patterns",
}
#[cfg(test)]
-mod test {
+mod tests {
use std::io;
use ast;
use util::parser_testing::{string_to_crate, matches_codepattern};
}
#[cfg(test)]
-mod test {
+mod tests {
use super::*;
#[test] fn test_block_doc_comment_1() {
}
#[cfg(test)]
-mod test {
+mod tests {
use super::*;
use codemap::{BytePos, CodeMap, Span, NO_EXPANSION};
}
#[cfg(test)]
-mod test {
+mod tests {
use super::*;
use std::rc::Rc;
use codemap::{Span, BytePos, Pos, Spanned, NO_EXPANSION};
}
#[cfg(test)]
-mod test {
+mod tests {
use super::*;
use ast;
use ext::mtwt;
fn repeat(s: &str, n: usize) -> String { iter::repeat(s).take(n).collect() }
#[cfg(test)]
-mod test {
+mod tests {
use super::*;
use ast;
}
#[cfg(test)]
-mod test {
+mod tests {
use super::*;
#[test] fn eqmodws() {
}
#[cfg(test)]
-mod test {
+mod tests {
use super::*;
#[test]
}
#[cfg(test)]
-mod test {
+mod tests {
use super::{expand,Param,Words,Variables,Number};
use std::result::Result::Ok;
}
#[cfg(test)]
-mod test {
+mod tests {
use super::{boolnames, boolfnames, numnames, numfnames, stringnames, stringfnames};
}
for (var i = 0; i < toc.length; i++) {
- if (toc[i].attributes['href'].value === href) {
+ if (toc[i].attributes['href'].value.split('/').pop() === href) {
var nav = document.createElement('p');
if (i > 0) {
var prevNode = toc[i-1].cloneNode(true);
--- /dev/null
+// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Test that negating unsigned integers is gated by `negate_unsigned` feature
+// gate
+
+const MAX: usize = -1;
+//~^ ERROR unary negation of unsigned integers may be removed in the future
+
+fn main() {}
--- /dev/null
+// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Test that `#[rustc_on_unimplemented]` is gated by `on_unimplemented` feature
+// gate.
+
+#[rustc_on_unimplemented = "test error `{Self}` with `{Bar}`"]
+//~^ ERROR the `#[rustc_on_unimplemented]` attribute is an experimental feature
+trait Foo<Bar>
+{}
+
+fn main() {}
--- /dev/null
+// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Test that default and negative trait implementations are gated by
+// `optin_builtin_traits` feature gate
+
+struct DummyStruct;
+
+trait DummyTrait {
+ fn dummy(&self) {}
+}
+
+impl DummyTrait for .. {}
+//~^ ERROR default trait implementations are experimental and possibly buggy
+
+impl !DummyTrait for DummyStruct {}
+//~^ ERROR negative trait bounds are not yet fully implemented; use marker types for now
+
+fn main() {}
--- /dev/null
+// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Test that `#![plugin(...)]` attribute is gated by `plugin` feature gate
+
+#![plugin(foo)]
+//~^ ERROR compiler plugins are experimental and possibly buggy
+
+fn main() {}
--- /dev/null
+// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// ignore-tidy-linelength
+
+// Test that `#[rustc_*]` attributes are gated by `rustc_attrs` feature gate.
+
+#[rustc_variance] //~ ERROR the `#[rustc_variance]` attribute is an experimental feature
+#[rustc_error] //~ ERROR the `#[rustc_error]` attribute is an experimental feature
+#[rustc_move_fragments] //~ ERROR the `#[rustc_move_fragments]` attribute is an experimental feature
+#[rustc_foo]
+//~^ ERROR unless otherwise specified, attributes with the prefix `rustc_` are reserved for internal compiler diagnostics
+
+fn main() {}
--- /dev/null
+// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Test that diagnostic macros are gated by `rustc_diagnostic_macros` feature
+// gate
+
+__register_diagnostic!(E0001);
+//~^ ERROR macro undefined: '__register_diagnostic!'
+
+fn main() {
+ __diagnostic_used!(E0001);
+ //~^ ERROR macro undefined: '__diagnostic_used!'
+}
+
+__build_diagnostic_array!(DIAGNOSTICS);
+//~^ ERROR macro undefined: '__build_diagnostic_array!'
--- /dev/null
+// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Test that slice pattern syntax is gated by `slice_patterns` feature gate
+
+fn main() {
+ let x = [1, 2, 3, 4, 5];
+ match x {
+ [1, 2, xs..] => {} //~ ERROR slice pattern syntax is experimental
+ }
+}
+++ /dev/null
-// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-// Test that patterns including the box syntax are gated by `box_patterns` feature gate.
-
-fn main() {
- let x = Box::new(1);
-
- match x {
- box 1 => (),
- //~^ box pattern syntax is experimental
- _ => ()
- };
-}
+++ /dev/null
-// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-// Test that the use of smid types in the ffi is gated by `smid_ffi` feature gate.
-
-#![feature(simd)]
-
-#[repr(C)]
-#[derive(Copy, Clone)]
-#[simd]
-pub struct f32x4(f32, f32, f32, f32);
-
-#[allow(dead_code)]
-extern {
- fn foo(x: f32x4);
- //~^ ERROR use of SIMD type `f32x4` in FFI is highly experimental and may result in invalid code
- //~| HELP add #![feature(simd_ffi)] to the crate attributes to enable
-}
-
-fn main() {}
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Regression test for issue #22779. An extra where clause was
+// permitted on the impl that is not present on the trait.
+
+trait Tr<'a, T> {
+ fn renew<'b: 'a>(self) -> &'b mut [T];
+}
+
+impl<'a, T> Tr<'a, T> for &'a mut [T] {
+ fn renew<'b: 'a>(self) -> &'b mut [T] where 'a: 'b {
+ //~^ ERROR lifetime bound not satisfied
+ &mut self[..]
+ }
+}
+
+fn main() { }
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Test related to #22779. In this case, the impl is an inherent impl,
+// so it doesn't have to match any trait, so no error results.
+
+#![feature(rustc_attrs)]
+#![allow(dead_code)]
+
+struct MySlice<'a, T:'a>(&'a mut [T]);
+
+impl<'a, T> MySlice<'a, T> {
+ fn renew<'b: 'a>(self) -> &'b mut [T] where 'a: 'b {
+ &mut self.0[..]
+ }
+}
+
+#[rustc_error]
+fn main() { } //~ ERROR compilation successful
--- /dev/null
+// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// Test related to #22779, but where the `'a:'b` relation
+// appears in the trait too. No error here.
+
+#![feature(rustc_attrs)]
+
+trait Tr<'a, T> {
+ fn renew<'b: 'a>(self) -> &'b mut [T] where 'a: 'b;
+}
+
+impl<'a, T> Tr<'a, T> for &'a mut [T] {
+ fn renew<'b: 'a>(self) -> &'b mut [T] where 'a: 'b {
+ &mut self[..]
+ }
+}
+
+#[rustc_error]
+fn main() { } //~ ERROR compilation successful
fn okay_bound<'b,'c,'e:'b+'c>(self, b: Inv<'b>, c: Inv<'c>, e: Inv<'e>) {
}
- fn another_bound<'x: 't>(self, x: Inv<'x>, y: Inv<'t>) {}
+ fn another_bound<'x: 't>(self, x: Inv<'x>, y: Inv<'t>) {
+ //~^ ERROR lifetime bound not satisfied
+ }
}
fn main() { }
// compile-flags:--test
// ignore-pretty turns out the pretty-printer doesn't handle gensym'd things...
-mod test {
+mod tests {
use super::*;
#[test]