make_dir $h/test/doc-tutorial-ffi
make_dir $h/test/doc-tutorial-macros
make_dir $h/test/doc-tutorial-borrowed-ptr
+ make_dir $h/test/doc-tutorial-container
make_dir $h/test/doc-tutorial-tasks
+ make_dir $h/test/doc-tutorial-conditions
make_dir $h/test/doc-rust
done
Each source file contains a sequence of zero or more `item` definitions,
and may optionally begin with any number of `attributes` that apply to the containing module.
-Atributes on the anonymous crate module define important metadata that influences
+Attributes on the anonymous crate module define important metadata that influences
the behavior of the compiler.
~~~~~~~~
In type-parameterized functions,
methods of the supertrait may be called on values of subtrait-bound type parameters.
-Refering to the previous example of `trait Circle : Shape`:
+Referring to the previous example of `trait Circle : Shape`:
~~~
# trait Shape { fn area(&self) -> float; }
When the field is mutable, it can be [assigned](#assignment-expressions) to.
When the type of the expression to the left of the dot is a pointer to a record or structure,
-it is automatically derferenced to make the field access possible.
+it is automatically dereferenced to make the field access possible.
### Vector expressions
An example of an object type:
~~~~~~~~
-# use std::int;
trait Printable {
- fn to_str(&self) -> ~str;
+ fn to_string(&self) -> ~str;
}
impl Printable for int {
- fn to_str(&self) -> ~str { int::to_str(*self) }
+ fn to_string(&self) -> ~str { self.to_str() }
}
fn print(a: @Printable) {
- println(a.to_str());
+ println(a.to_string());
}
fn main() {
--- /dev/null
+% Rust Condition and Error-handling Tutorial
+
+# Introduction
+
+Rust does not provide exception handling[^why-no-exceptions]
+in the form most commonly seen in other programming languages such as C++ or Java.
+Instead, it provides four mechanisms that work together to handle errors or other rare events.
+The four mechanisms are:
+
+ - Options
+ - Results
+ - Failure
+ - Conditions
+
+This tutorial will lead you through the use of these mechanisms
+in order to understand the trade-offs of each and the relationships between them.
+
+# Example program
+
+This tutorial will be based around an example program
+that attempts to read lines from a file
+consisting of pairs of numbers,
+and then print them back out with slightly different formatting.
+The input to the program might look like this:
+
+~~~~ {.notrust}
+$ cat numbers.txt
+1 2
+34 56
+789 123
+45 67
+~~~~
+
+For which the intended output looks like this:
+
+~~~~ {.notrust}
+$ ./example numbers.txt
+0001, 0002
+0034, 0056
+0789, 0123
+0045, 0067
+~~~~
+
+An example program that does this task reads like this:
+
+~~~~
+# #[allow(unused_imports)];
+extern mod extra;
+use extra::fileinput::FileInput;
+use std::int;
+# mod FileInput {
+# use std::io::{Reader, BytesReader};
+# static s : &'static [u8] = bytes!("1 2\n\
+# 34 56\n\
+# 789 123\n\
+# 45 67\n\
+# ");
+# pub fn from_args() -> @Reader{
+# @BytesReader {
+# bytes: s,
+# pos: @mut 0
+# } as @Reader
+# }
+# }
+
+fn main() {
+ let pairs = read_int_pairs();
+ for &(a,b) in pairs.iter() {
+ println(fmt!("%4.4d, %4.4d", a, b));
+ }
+}
+
+
+fn read_int_pairs() -> ~[(int,int)] {
+
+ let mut pairs = ~[];
+
+ let fi = FileInput::from_args();
+ while ! fi.eof() {
+
+ // 1. Read a line of input.
+ let line = fi.read_line();
+
+ // 2. Split the line into fields ("words").
+ let fields = line.word_iter().to_owned_vec();
+
+ // 3. Match the vector of fields against a vector pattern.
+ match fields {
+
+ // 4. When the line had two fields:
+ [a, b] => {
+
+ // 5. Try parsing both fields as ints.
+ match (int::from_str(a), int::from_str(b)) {
+
+ // 6. If parsing succeeded for both, push both.
+ (Some(a), Some(b)) => pairs.push((a,b)),
+
+ // 7. Ignore non-int fields.
+ _ => ()
+ }
+ }
+
+ // 8. Ignore lines that don't have 2 fields.
+ _ => ()
+ }
+ }
+
+ pairs
+}
+~~~~
+
+This example shows the use of `Option`,
+along with some other forms of error-handling (and non-handling).
+We will look at these mechanisms
+and then modify parts of the example to perform "better" error handling.
+
+
+# Options
+
+The simplest and most lightweight mechanism in Rust for indicating an error is the type `std::option::Option<T>`.
+This type is a general purpose `enum`
+for conveying a value of type `T`, represented as `Some(T)`
+_or_ the sentinel `None`, to indicate the absence of a `T` value.
+For simple APIs, it may be sufficient to encode errors as `Option<T>`,
+returning `Some(T)` on success and `None` on error.
+In the example program, the call to `int::from_str` returns `Option<int>`
+with the understanding that "all parse errors" result in `None`.
+The resulting `Option<int>` values are matched against the pattern `(Some(a), Some(b))`
+in steps 5 and 6 in the example program,
+to handle the case in which both fields were parsed successfully.
+
+Using `Option` as in this API has some advantages:
+
+ - Simple API, users can read it and guess how it works.
+ - Very efficient, only an extra `enum` tag on return values.
+ - Caller has flexibility in handling or propagating errors.
+ - Caller is forced to acknowledge existence of possible-error before using value.
+
+However, it has serious disadvantages too:
+
+ - Verbose, requires matching results or calling `Option::unwrap` everywhere.
+ - Infects the caller: if the caller doesn't know how to handle the error, it must propagate the error (or force it).
+ - Temptation to do just that: force the `Some(T)` case by blindly calling `unwrap`,
+ which hides the error from the API without providing any way to make the program robust against the error.
+ - Collapses all errors into one:
+ - Caller can't handle different errors differently.
+  - Caller can't even report a very precise error message.
+
+Note that in order to keep the example code reasonably compact,
+several unwanted cases are silently ignored:
+lines that do not contain two fields, as well as fields that do not parse as ints.
+To propagate these cases to the caller using `Option` would require even more verbose code.
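For a sense of what that propagation looks like, here is a hedged sketch in present-day Rust syntax (the tutorial's own `int::from_str` and `~[]` types predate it; `parse_pair` is a hypothetical helper, not part of the example program):

```rust
// Hypothetical sketch in modern syntax: propagating both error cases
// (wrong field count, unparseable ints) to the caller forces the
// helper itself to return an Option, and every caller must match on it.
fn parse_pair(line: &str) -> Option<(i32, i32)> {
    let fields: Vec<&str> = line.split_whitespace().collect();
    match fields.as_slice() {
        // Exactly two fields: try to parse both.
        [a, b] => match (a.parse::<i32>(), b.parse::<i32>()) {
            (Ok(x), Ok(y)) => Some((x, y)),
            _ => None, // a field was not an int
        },
        _ => None, // wrong number of fields
    }
}

fn main() {
    assert_eq!(parse_pair("34 56"), Some((34, 56)));
    assert_eq!(parse_pair("ostrich"), None);
    assert_eq!(parse_pair("7 marmot"), None);
}
```

Note that the two failure cases are already indistinguishable to the caller: both collapse into `None`.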
+
+
+# Results
+
+Before getting into _trapping_ the error,
+we will look at a slight refinement on the `Option` type above.
+This second mechanism for indicating an error is called a `Result`.
+The type `std::result::Result<T,E>` is another simple `enum` type with two forms, `Ok(T)` and `Err(E)`.
+The `Result` type is not substantially different from the `Option` type in terms of its ergonomics.
+Its main advantage is that the error constructor `Err(E)` can convey _more detail_ about the error.
+For example, the `int::from_str` API could be reformed
+to return a `Result` carrying an informative description of a parse error,
+like this:
+
+~~~~ {.ignore}
+enum IntParseErr {
+ EmptyInput,
+ Overflow,
+ BadChar(char)
+}
+
+fn int::from_str(&str) -> Result<int,IntParseErr> {
+ // ...
+}
+~~~~
+
+This would give the caller more information for both handling and reporting the error,
+but would otherwise retain the verbosity problems of using `Option`.
+In particular, it would still be necessary for the caller to return a further `Result` to _its_ caller if it did not want to handle the error.
+Manually propagating result values this way can be attractive in certain circumstances
+-- for example when processing must halt on the very first error, or backtrack --
+but as we will see later, many cases have simpler options available.
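To illustrate the verbosity of manual propagation, here is a hedged sketch in modern syntax (the `PairParseErr` enum and `parse_pair` helper are hypothetical, invented for this illustration):

```rust
// Hypothetical sketch: a caller that does not handle the parse error
// must wrap it and return a further Result to *its* caller.
#[derive(Debug, PartialEq)]
enum PairParseErr {
    First(String),  // error text from parsing the first field
    Second(String), // error text from parsing the second field
}

fn parse_pair(a: &str, b: &str) -> Result<(i32, i32), PairParseErr> {
    let x = match a.parse::<i32>() {
        Ok(v) => v,
        Err(e) => return Err(PairParseErr::First(e.to_string())),
    };
    let y = match b.parse::<i32>() {
        Ok(v) => v,
        Err(e) => return Err(PairParseErr::Second(e.to_string())),
    };
    Ok((x, y))
}

fn main() {
    assert_eq!(parse_pair("34", "56"), Ok((34, 56)));
    assert!(matches!(parse_pair("34", "marmot"),
                     Err(PairParseErr::Second(_))));
}
```

The `Err` value now says exactly which field failed and why, but every intermediate caller still has to thread it along.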
+
+# Failure
+
+The third and arguably easiest mechanism for handling errors is called "failure".
+In fact it was hinted at earlier by suggesting that one can choose to propagate `Option` or `Result` types _or "force" them_.
+"Forcing" them, in this case, means calling a method like `Option<T>::unwrap`,
+which contains the following code:
+
+~~~~ {.ignore}
+pub fn unwrap(self) -> T {
+ match self {
+ Some(x) => return x,
+ None => fail!("option::unwrap `None`")
+ }
+}
+~~~~
+
+That is, it returns `T` when `self` is `Some(T)`, and _fails_ when `self` is `None`.
+
+Every Rust task can _fail_, either indirectly due to a kill signal or other asynchronous event,
+or directly by failing an `assert!` or calling the `fail!` macro.
+Failure is an _unrecoverable event_ at the task level:
+it causes the task to halt normal execution and unwind its control stack,
+freeing all task-local resources (the local heap as well as any task-owned values from the global heap)
+and running destructors (the `drop` method of the `Drop` trait)
+as frames are unwound and heap values destroyed.
+A failing task is not permitted to "catch" the unwinding during failure and recover;
+it is only allowed to clean up and exit.
+
+Failure has advantages:
+
+ - Simple and non-verbose. Suitable for programs that can't reasonably continue past an error anyway.
+ - _All_ errors (except memory-safety errors) can be uniformly trapped in a supervisory task outside the failing task.
+   For a large program to be robust against a variety of errors,
+   often some form of task-level partitioning to contain pervasive errors (arithmetic overflow, division by zero,
+   logic bugs) is necessary anyway.
+
+As well as obvious disadvantages:
+
+ - A blunt instrument, terminates the containing task entirely.
+
+Recall that in the first two approaches to error handling,
+the example program was only handling success cases, and ignoring error cases.
+That is, if the input is changed to contain a malformed line:
+
+~~~~ {.notrust}
+$ cat bad.txt
+1 2
+34 56
+ostrich
+789 123
+45 67
+~~~~
+
+Then the program would give the same output as if there were no error:
+
+~~~~ {.notrust}
+$ ./example bad.txt
+0001, 0002
+0034, 0056
+0789, 0123
+0045, 0067
+~~~~
+
+If the example is rewritten to use failure, these error cases can be trapped.
+In this rewriting, failures are trapped by placing the I/O logic in a sub-task,
+and trapping its exit status using `task::try`:
+
+~~~~ {.xfail-test}
+# #[allow(unused_imports)];
+extern mod extra;
+use extra::fileinput::FileInput;
+use std::int;
+use std::task;
+# mod FileInput {
+# use std::io::{Reader, BytesReader};
+# static s : &'static [u8] = bytes!("1 2\n\
+# 34 56\n\
+# ostrich\n\
+# 789 123\n\
+# 45 67\n\
+# ");
+# pub fn from_args() -> @Reader{
+# @BytesReader {
+# bytes: s,
+# pos: @mut 0
+# } as @Reader
+# }
+# }
+
+fn main() {
+
+ // Isolate failure within a subtask.
+ let result = do task::try {
+
+ // The protected logic.
+ let pairs = read_int_pairs();
+ for &(a,b) in pairs.iter() {
+ println(fmt!("%4.4d, %4.4d", a, b));
+ }
+
+ };
+ if result.is_err() {
+ println("parsing failed");
+ }
+}
+
+fn read_int_pairs() -> ~[(int,int)] {
+ let mut pairs = ~[];
+ let fi = FileInput::from_args();
+ while ! fi.eof() {
+ let line = fi.read_line();
+ let fields = line.word_iter().to_owned_vec();
+ match fields {
+ [a, b] => pairs.push((int::from_str(a).unwrap(),
+ int::from_str(b).unwrap())),
+
+ // Explicitly fail on malformed lines.
+ _ => fail!()
+ }
+ }
+ pairs
+}
+~~~~
+
+With these changes in place, running the program on malformed input gives a different answer:
+
+~~~~ {.notrust}
+$ ./example bad.txt
+rust: task failed at 'explicit failure', ./example.rs:44
+parsing failed
+~~~~
+
+Note that while failure unwinds the sub-task performing I/O in `read_int_pairs`,
+control returns to `main` and can easily continue uninterrupted.
+In this case, control simply prints out `parsing failed` and then exits `main` (successfully).
+Failure of a (sub-)task is analogous to calling `exit(1)` or `abort()` in a Unix C program:
+all the state of a sub-task is cleanly discarded on exit,
+and a supervisor task can take appropriate action
+without worrying about its own state having been corrupted.
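The 0.8-era `task::try` has a rough present-day analogue: spawning a thread and inspecting its `join` result. A hedged sketch of the same supervisory pattern, in modern syntax:

```rust
use std::thread;

// Sketch of the supervisory pattern: run fallible logic in a separate
// thread. A panic there unwinds that thread only and is reported
// through `join`, leaving the supervisor's own state untouched.
fn main() {
    let result = thread::spawn(|| {
        // The protected logic; a malformed line triggers a panic.
        let line = "ostrich";
        let n: i32 = line.parse().expect("malformed line");
        n
    })
    .join();

    assert!(result.is_err());
    println!("parsing failed");
}
```

As in the tutorial's version, the supervisor only learns that the child failed; the child's partially-built state is discarded wholesale.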
+
+
+# Conditions
+
+The final mechanism for handling errors is called a "condition".
+Conditions are less blunt than failure, and less cumbersome than the `Option` or `Result` types;
+indeed they are designed to strike just the right balance between the two.
+Conditions require some care to use effectively, but give maximum flexibility with minimum verbosity.
+While conditions use exception-like terminology ("trap", "raise") they are significantly different:
+
+ - Like exceptions and failure, conditions separate the site at which the error is raised from the site where it is trapped.
+ - Unlike exceptions and unlike failure, when a condition is raised and trapped, _no unwinding occurs_.
+ - A successfully trapped condition causes execution to continue _at the site of the error_, as though no error occurred.
+
+Conditions are declared with the `condition!` macro.
+Each condition has a name, an input type and an output type, much like a function.
+In fact, conditions are implemented as dynamically-scoped functions held in task-local storage.
+
+The `condition!` macro declares a module with the name of the condition;
+the module contains a single static value called `cond`, of type `std::condition::Condition`.
+The `cond` value within the module is the rendezvous point
+between the site of error and the site that handles the error.
+It has two methods of interest: `raise` and `trap`.
+
+The `raise` method maps a value of the condition's input type to its output type.
+The input type should therefore convey all relevant information to the condition handler.
+The output type should convey all relevant information _for continuing execution at the site of error_.
+When the error site raises a condition,
+the `Condition::raise` method searches task-local storage (TLS) for the innermost installed _handler_,
+and if any such handler is found, calls it with the provided input value.
+If no handler is found, `Condition::raise` will fail the task with an appropriate error message.
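To make the rendezvous mechanism concrete, here is a hedged miniature of how such a condition could be modeled (modern syntax, hypothetical names; the real `std::condition::Condition` differed in detail): `trap` installs a handler in thread-local storage for a dynamic extent, and `raise` looks up the innermost installed handler.

```rust
use std::cell::RefCell;

// Hypothetical miniature of a condition: a dynamically-scoped handler
// held in thread-local storage, keyed by the condition's identity.
thread_local! {
    static MALFORMED_LINE: RefCell<Option<fn(&str) -> (i64, i64)>> =
        RefCell::new(None);
}

// Map the condition's input to its output via the innermost handler;
// fail (panic) if no handler is installed.
fn raise(input: &str) -> (i64, i64) {
    let handler = MALFORMED_LINE.with(|h| *h.borrow());
    match handler {
        Some(handler) => handler(input),
        None => panic!("Unhandled condition: malformed_line: {:?}", input),
    }
}

// Install `handler` for the dynamic extent of `protected`, then
// restore whatever handler (if any) was installed before.
fn trap<R>(handler: fn(&str) -> (i64, i64),
           protected: impl FnOnce() -> R) -> R {
    let prev = MALFORMED_LINE.with(|h| h.borrow_mut().replace(handler));
    let result = protected();
    MALFORMED_LINE.with(|h| *h.borrow_mut() = prev);
    result
}

fn main() {
    // Trapped: execution continues at the raise site with (-1, -1),
    // and no unwinding occurs.
    let fixed = trap(|_line| (-1, -1), || raise("ostrich"));
    assert_eq!(fixed, (-1, -1));
}
```

The key property is visible here: the handler runs and returns a value *to the raise site*, so the raising code simply continues.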
+
+Rewriting the example to use a condition in place of ignoring malformed lines makes it slightly longer,
+but keeps it as clear as the version that used `fail!` in the logic where the error occurs:
+
+~~~~ {.xfail-test}
+# #[allow(unused_imports)];
+extern mod extra;
+use extra::fileinput::FileInput;
+use std::int;
+# mod FileInput {
+# use std::io::{Reader, BytesReader};
+# static s : &'static [u8] = bytes!("1 2\n\
+# 34 56\n\
+# ostrich\n\
+# 789 123\n\
+# 45 67\n\
+# ");
+# pub fn from_args() -> @Reader{
+# @BytesReader {
+# bytes: s,
+# pos: @mut 0
+# } as @Reader
+# }
+# }
+
+// Introduce a new condition.
+condition! {
+ pub malformed_line : ~str -> (int,int);
+}
+
+fn main() {
+ let pairs = read_int_pairs();
+ for &(a,b) in pairs.iter() {
+ println(fmt!("%4.4d, %4.4d", a, b));
+ }
+}
+
+fn read_int_pairs() -> ~[(int,int)] {
+ let mut pairs = ~[];
+ let fi = FileInput::from_args();
+ while ! fi.eof() {
+ let line = fi.read_line();
+ let fields = line.word_iter().to_owned_vec();
+ match fields {
+ [a, b] => pairs.push((int::from_str(a).unwrap(),
+ int::from_str(b).unwrap())),
+
+ // On malformed lines, call the condition handler and
+ // push whatever the condition handler returns.
+ _ => pairs.push(malformed_line::cond.raise(line.clone()))
+ }
+ }
+ pairs
+}
+~~~~
+
+When this is run on malformed input, it still fails,
+but with a slightly different failure message than before:
+
+~~~~ {.notrust}
+$ ./example bad.txt
+rust: task failed at 'Unhandled condition: malformed_line: ~"ostrich"', .../libstd/condition.rs:43
+~~~~
+
+While this superficially resembles the trapped `fail!` call before,
+it is only because the example did not install a handler for the condition.
+The different failure message indicates, among other things,
+that the condition-handling system was invoked but failed
+only because no _handler_ was installed to trap the condition.
+
+# Trapping a condition
+
+To trap a condition, use `Condition::trap` in some caller of the site that calls `Condition::raise`.
+For example, this version of the program traps the `malformed_line` condition
+and replaces bad input lines with the pair `(-1,-1)`:
+
+~~~~
+# #[allow(unused_imports)];
+extern mod extra;
+use extra::fileinput::FileInput;
+use std::int;
+# mod FileInput {
+# use std::io::{Reader, BytesReader};
+# static s : &'static [u8] = bytes!("1 2\n\
+# 34 56\n\
+# ostrich\n\
+# 789 123\n\
+# 45 67\n\
+# ");
+# pub fn from_args() -> @Reader{
+# @BytesReader {
+# bytes: s,
+# pos: @mut 0
+# } as @Reader
+# }
+# }
+
+condition! {
+ pub malformed_line : ~str -> (int,int);
+}
+
+fn main() {
+ // Trap the condition:
+ do malformed_line::cond.trap(|_| (-1,-1)).inside {
+
+ // The protected logic.
+ let pairs = read_int_pairs();
+ for &(a,b) in pairs.iter() {
+ println(fmt!("%4.4d, %4.4d", a, b));
+ }
+
+ }
+}
+
+fn read_int_pairs() -> ~[(int,int)] {
+ let mut pairs = ~[];
+ let fi = FileInput::from_args();
+ while ! fi.eof() {
+ let line = fi.read_line();
+ let fields = line.word_iter().to_owned_vec();
+ match fields {
+ [a, b] => pairs.push((int::from_str(a).unwrap(),
+ int::from_str(b).unwrap())),
+ _ => pairs.push(malformed_line::cond.raise(line.clone()))
+ }
+ }
+ pairs
+}
+~~~~
+
+Note that the remainder of the program is _unchanged_ with this trap in place;
+only the caller that installs the trap changed.
+Yet when the condition-trapping variant runs on the malformed input,
+it continues execution past the malformed line, substituting the handler's return value.
+
+~~~~ {.notrust}
+$ ./example bad.txt
+0001, 0002
+0034, 0056
+-0001, -0001
+0789, 0123
+0045, 0067
+~~~~
+
+# Refining a condition
+
+As you work with a condition, you may find that the original set of options you present for recovery is insufficient.
+This is no different than any other issue of API design:
+a condition handler is an API for recovering from the condition, and sometimes APIs need to be enriched.
+In the example program, the first form of the `malformed_line` API implicitly assumes that recovery involves a substitute value.
+This assumption may not be correct; some callers may wish to skip malformed lines, for example.
+Changing the condition's return type from `(int,int)` to `Option<(int,int)>` will suffice to support this type of recovery:
+
+~~~~
+# #[allow(unused_imports)];
+extern mod extra;
+use extra::fileinput::FileInput;
+use std::int;
+# mod FileInput {
+# use std::io::{Reader, BytesReader};
+# static s : &'static [u8] = bytes!("1 2\n\
+# 34 56\n\
+# ostrich\n\
+# 789 123\n\
+# 45 67\n\
+# ");
+# pub fn from_args() -> @Reader{
+# @BytesReader {
+# bytes: s,
+# pos: @mut 0
+# } as @Reader
+# }
+# }
+
+// Modify the condition signature to return an Option.
+condition! {
+ pub malformed_line : ~str -> Option<(int,int)>;
+}
+
+fn main() {
+ // Trap the condition and return `None`
+ do malformed_line::cond.trap(|_| None).inside {
+
+ // The protected logic.
+ let pairs = read_int_pairs();
+ for &(a,b) in pairs.iter() {
+ println(fmt!("%4.4d, %4.4d", a, b));
+ }
+
+ }
+}
+
+fn read_int_pairs() -> ~[(int,int)] {
+ let mut pairs = ~[];
+ let fi = FileInput::from_args();
+ while ! fi.eof() {
+ let line = fi.read_line();
+ let fields = line.word_iter().to_owned_vec();
+ match fields {
+ [a, b] => pairs.push((int::from_str(a).unwrap(),
+ int::from_str(b).unwrap())),
+
+ // On malformed lines, call the condition handler and
+ // either ignore the line (if the handler returns `None`)
+ // or push any `Some(pair)` value returned instead.
+ _ => {
+ match malformed_line::cond.raise(line.clone()) {
+ Some(pair) => pairs.push(pair),
+ None => ()
+ }
+ }
+ }
+ }
+ pairs
+}
+~~~~
+
+Again, note that the remainder of the program is _unchanged_,
+in particular the signature of `read_int_pairs` is unchanged,
+even though the innermost part of its reading-loop has a new way of handling a malformed line.
+When the example is run with the `None` trap in place,
+the line is ignored as it was in the first example,
+but the choice of whether to ignore or use a substitute value has been moved to some caller,
+possibly a distant caller.
+
+~~~~ {.notrust}
+$ ./example bad.txt
+0001, 0002
+0034, 0056
+0789, 0123
+0045, 0067
+~~~~
+
+# Further refining a condition
+
+As with any API, the process of refining a condition's argument and return types will continue
+until all relevant combinations encountered in practice are encoded.
+In the example, suppose a third possible recovery form arose: reusing the previous value read.
+This can be encoded in the handler API by introducing a helper type: `enum MalformedLineFix`.
+
+~~~~
+# #[allow(unused_imports)];
+extern mod extra;
+use extra::fileinput::FileInput;
+use std::int;
+# mod FileInput {
+# use std::io::{Reader, BytesReader};
+# static s : &'static [u8] = bytes!("1 2\n\
+# 34 56\n\
+# ostrich\n\
+# 789 123\n\
+# 45 67\n\
+# ");
+# pub fn from_args() -> @Reader{
+# @BytesReader {
+# bytes: s,
+# pos: @mut 0
+# } as @Reader
+# }
+# }
+
+// Introduce a new enum to convey condition-handling strategy to error site.
+pub enum MalformedLineFix {
+ UsePair(int,int),
+ IgnoreLine,
+ UsePreviousLine
+}
+
+// Modify the condition signature to return the new enum.
+// Note: a condition introduces a new module, so the enum must be
+// named with the `super::` prefix to access it.
+condition! {
+ pub malformed_line : ~str -> super::MalformedLineFix;
+}
+
+fn main() {
+ // Trap the condition and return `UsePreviousLine`
+ do malformed_line::cond.trap(|_| UsePreviousLine).inside {
+
+ // The protected logic.
+ let pairs = read_int_pairs();
+ for &(a,b) in pairs.iter() {
+ println(fmt!("%4.4d, %4.4d", a, b));
+ }
+
+ }
+}
+
+fn read_int_pairs() -> ~[(int,int)] {
+ let mut pairs = ~[];
+ let fi = FileInput::from_args();
+ while ! fi.eof() {
+ let line = fi.read_line();
+ let fields = line.word_iter().to_owned_vec();
+ match fields {
+ [a, b] => pairs.push((int::from_str(a).unwrap(),
+ int::from_str(b).unwrap())),
+
+ // On malformed lines, call the condition handler and
+ // take action appropriate to the enum value returned.
+ _ => {
+ match malformed_line::cond.raise(line.clone()) {
+ UsePair(a,b) => pairs.push((a,b)),
+ IgnoreLine => (),
+ UsePreviousLine => {
+ let prev = pairs[pairs.len() - 1];
+ pairs.push(prev)
+ }
+ }
+ }
+ }
+ }
+ pairs
+}
+~~~~
+
+Running the example with `UsePreviousLine` as the fix code returned from the handler
+gives the expected result:
+
+~~~~ {.notrust}
+$ ./example bad.txt
+0001, 0002
+0034, 0056
+0034, 0056
+0789, 0123
+0045, 0067
+~~~~
+
+At this point the example has a rich variety of recovery options,
+none of which is visible to casual users of the `read_int_pairs` function.
+This is intentional: part of the purpose of using a condition
+is to free intermediate callers from the burden of having to write repetitive error-propagation logic,
+and/or having to change function call and return types as error-handling strategies are refined.
+
+# Multiple conditions, intermediate callers
+
+So far the function trapping the condition and the function raising it have been immediately adjacent in the call stack.
+That is, the caller traps and its immediate callee raises.
+In most programs, the function that traps may be separated by many function calls from the function that raises.
+Again, this is part of the point of using conditions:
+to support that separation without having to thread multiple error values and recovery strategies all the way through the program's main logic.
+
+Careful readers will notice that there is a remaining failure mode in the example program: the call to `.unwrap()` when parsing each integer.
+For example, when presented with a file that has the correct number of fields on a line,
+but a non-numeric value in one of them, such as this:
+
+~~~~ {.notrust}
+$ cat bad.txt
+1 2
+34 56
+7 marmot
+789 123
+45 67
+~~~~
+
+
+Then the program fails once more:
+
+~~~~ {.notrust}
+$ ./example bad.txt
+task <unnamed> failed at 'called `Option::unwrap()` on a `None` value', .../libstd/option.rs:314
+~~~~
+
+To make the program robust -- or at least flexible -- in the face of this potential failure,
+a second condition and a helper function will suffice:
+
+~~~~
+# #[allow(unused_imports)];
+extern mod extra;
+use extra::fileinput::FileInput;
+use std::int;
+# mod FileInput {
+# use std::io::{Reader, BytesReader};
+# static s : &'static [u8] = bytes!("1 2\n\
+# 34 56\n\
+# 7 marmot\n\
+# 789 123\n\
+# 45 67\n\
+# ");
+# pub fn from_args() -> @Reader{
+# @BytesReader {
+# bytes: s,
+# pos: @mut 0
+# } as @Reader
+# }
+# }
+
+pub enum MalformedLineFix {
+ UsePair(int,int),
+ IgnoreLine,
+ UsePreviousLine
+}
+
+condition! {
+ pub malformed_line : ~str -> ::MalformedLineFix;
+}
+
+// Introduce a second condition.
+condition! {
+ pub malformed_int : ~str -> int;
+}
+
+fn main() {
+ // Trap the `malformed_int` condition and return -1
+ do malformed_int::cond.trap(|_| -1).inside {
+
+ // Trap the `malformed_line` condition and return `UsePreviousLine`
+ do malformed_line::cond.trap(|_| UsePreviousLine).inside {
+
+ // The protected logic.
+ let pairs = read_int_pairs();
+ for &(a,b) in pairs.iter() {
+ println(fmt!("%4.4d, %4.4d", a, b));
+ }
+
+ }
+ }
+}
+
+// Parse an int; if parsing fails, call the condition handler and
+// return whatever it returns.
+fn parse_int(x: &str) -> int {
+ match int::from_str(x) {
+ Some(v) => v,
+ None => malformed_int::cond.raise(x.to_owned())
+ }
+}
+
+fn read_int_pairs() -> ~[(int,int)] {
+ let mut pairs = ~[];
+ let fi = FileInput::from_args();
+ while ! fi.eof() {
+ let line = fi.read_line();
+ let fields = line.word_iter().to_owned_vec();
+ match fields {
+
+ // Delegate parsing ints to helper function that will
+ // handle parse errors by calling `malformed_int`.
+ [a, b] => pairs.push((parse_int(a), parse_int(b))),
+
+ _ => {
+ match malformed_line::cond.raise(line.clone()) {
+ UsePair(a,b) => pairs.push((a,b)),
+ IgnoreLine => (),
+ UsePreviousLine => {
+ let prev = pairs[pairs.len() - 1];
+ pairs.push(prev)
+ }
+ }
+ }
+ }
+ }
+ pairs
+}
+~~~~
+
+Again, note that `read_int_pairs` has not changed signature,
+nor has any of the machinery for trapping or raising `malformed_line`,
+but now the program can handle the "right number of fields, non-integral field" form of bad input:
+
+~~~~ {.notrust}
+$ ./example bad.txt
+0001, 0002
+0034, 0056
+0007, -0001
+0789, 0123
+0045, 0067
+~~~~
+
+There are three other things to note in this variant of the example program:
+
+ - It traps multiple conditions simultaneously,
+ nesting the protected logic of one `trap` call inside the other.
+
+ - There is a function in between the `trap` site and `raise` site for the `malformed_int` condition.
+ There could be any number of calls between them:
+ so long as the `raise` occurs within a callee (of any depth) of the logic protected by the `trap` call,
+ it will invoke the handler.
+
+  - This variant insulates callers from a design choice in the `int` library:
+    the `int::from_str` function was designed to return an `Option<int>`,
+    but this program hides that choice from its callers,
+    routing all `None` values that arise from parsing integers in this file into the condition.
+
+
+# When to use which technique
+
+This tutorial explored several techniques for handling errors.
+Each is appropriate to different circumstances:
+
+ - If an error may be extremely frequent, expected, and very likely dealt with by an immediate caller,
+ then returning an `Option` or `Result` type is best. These types force the caller to handle the error,
+ and incur the lowest speed overhead, usually only returning one extra word to tag the return value.
+ Between `Option` and `Result`: use an `Option` when there is only one kind of error,
+ otherwise make an `enum FooErr` to represent the possible error codes and use `Result<T,FooErr>`.
+
+ - If an error can reasonably be handled at the site it occurs by one of a few strategies -- possibly including failure --
+ and it is not clear which strategy a caller would want to use, a condition is best.
+ For many errors, the only reasonable "non-stop" recovery strategies are to retry some number of times,
+ create or substitute an empty or sentinel value, ignore the error, or fail.
+
+ - If an error cannot reasonably be handled at the site it occurs,
+ and the only reasonable response is to abandon a large set of operations in progress,
+ then directly failing is best.
+
+Note that an unhandled condition will cause failure (along with a more-informative-than-usual message),
+so if there is any possibility that a caller might wish to "ignore and keep going",
+it is usually harmless to use a condition in place of a direct call to `fail!()`.
+
+
+[^why-no-exceptions]: Exceptions in languages like C++ and Java permit unwinding, like Rust's failure system,
+but with the option to halt unwinding partway through the process and continue execution.
+This behavior unfortunately means that the _heap_ may be left in an inconsistent but accessible state
+if an exception is thrown partway through the process of initializing or modifying memory.
+To compensate for this risk, correct C++ and Java code must program in an extremely elaborate and difficult "exception-safe" style
+-- effectively transactional style against heap structures --
+or else risk introducing silent and very difficult-to-debug errors due to control resuming in a corrupted heap after a caught exception.
+These errors are frequently memory-safety errors, which Rust strives to eliminate,
+and so Rust unwinding is unrecoverable within a single task:
+once unwinding starts, the entire local heap of a task is destroyed and the task is terminated.
\ No newline at end of file
The `for` keyword can be used as sugar for iterating through any iterator:
~~~
-let xs = [2, 3, 5, 7, 11, 13, 17];
+let xs = [2u, 3, 5, 7, 11, 13, 17];
// print out all the elements in the vector
for x in xs.iter() {
implementing the `FromIterator` trait. For example, the implementation for
vectors is as follows:
-~~~
+~~~ {.xfail-test}
impl<A> FromIterator<A> for ~[A] {
pub fn from_iterator<T: Iterator<A>>(iterator: &mut T) -> ~[A] {
let (lower, _) = iterator.size_hint();
The `Iterator` trait provides a `size_hint` default method, returning a lower
bound and optionally an upper bound on the length of the iterator:
-~~~
+~~~ {.xfail-test}
fn size_hint(&self) -> (uint, Option<uint>) { (0, None) }
~~~
}
~~~
+The `reverse_` method is also available for any double-ended iterator yielding
+mutable references. It can be used to reverse a container in-place. Note that
+the trailing underscore is a workaround for issue #5898 and will be removed.
+
+~~~
+let mut ys = [1, 2, 3, 4, 5];
+ys.mut_iter().reverse_();
+assert_eq!(ys, [5, 4, 3, 2, 1]);
+~~~
+
## Random-access iterators
The `RandomAccessIterator` trait represents an iterator offering random access
fn snappy_max_compressed_length(source_length: size_t) -> size_t;
}
+#[fixed_stack_segment]
fn main() {
let x = unsafe { snappy_max_compressed_length(100) };
println(fmt!("max compressed length of a 100 byte buffer: %?", x));
valid for all possible inputs since the pointer could be dangling, and raw pointers fall outside of
Rust's safe memory model.
+Finally, the `#[fixed_stack_segment]` annotation that appears on
+`main()` instructs the Rust compiler that when `main()` executes, it
+should request a "very large" stack segment. More details on
+stack management can be found in the following sections.
+
When declaring the argument types to a foreign function, the Rust compiler will not check if the
declaration is correct, so specifying it correctly is part of keeping the binding correct at
runtime.
the allocated memory. The length is less than or equal to the capacity.
~~~~ {.xfail-test}
+#[fixed_stack_segment]
+#[inline(never)]
pub fn validate_compressed_buffer(src: &[u8]) -> bool {
unsafe {
snappy_validate_compressed_buffer(vec::raw::to_ptr(src), src.len() as size_t) == 0
guarantee that calling it is safe for all inputs by leaving off `unsafe` from the function
signature.
+The `validate_compressed_buffer` wrapper is also annotated with two
+attributes `#[fixed_stack_segment]` and `#[inline(never)]`. The
+purpose of these attributes is to guarantee that there will be
+sufficient stack for the C function to execute. This is necessary
+because Rust, unlike C, does not assume that the stack is allocated in
+one contiguous chunk. Instead, we rely on a *segmented stack* scheme,
+in which the stack grows and shrinks as necessary. C code, however,
+expects one large stack, and so callers of C functions must request a
+large stack segment to ensure that the C routine will not run off the
+end of the stack.
+
+The compiler includes a lint mode that will report an error if you
+call a C function without a `#[fixed_stack_segment]` attribute. More
+details on the lint mode are given in a later section.
+
+You may be wondering why we include a `#[inline(never)]` directive.
+This directive informs the compiler never to inline this function.
+While not strictly necessary, it is usually a good idea to use an
+`#[inline(never)]` directive in concert with `#[fixed_stack_segment]`.
+The reason is that if a fn annotated with `fixed_stack_segment` is
+inlined, then its caller also inherits the `fixed_stack_segment`
+annotation. This means that rather than requesting a large stack
+segment only for the duration of the call into C, the large stack
+segment would be used for the entire duration of the caller. This is
+not necessarily *bad* -- it can for example be more efficient,
+particularly if `validate_compressed_buffer()` is called multiple
+times in a row -- but it does work against the purpose of the
+segmented stack scheme, which is to keep stacks small and thus
+conserve address space.
+
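+The same pattern applies to any small wrapper around a C call. As a
+brief sketch (the C routine `c_rand` and its library are hypothetical,
+shown only for illustration), both attributes go inside the wrapper fn:
+
+~~~~ {.xfail-test}
+use std::libc::c_int;
+
+#[link_args = "-lmyclib"]
+extern {
+    fn c_rand() -> c_int; // hypothetical C routine
+}
+
+pub fn rand_from_c() -> int {
+    // Request a large stack segment, and prevent inlining so the
+    // segment is held only for the duration of this call into C.
+    #[fixed_stack_segment]; #[inline(never)];
+
+    unsafe { c_rand() as int }
+}
+~~~~
+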
The `snappy_compress` and `snappy_uncompress` functions are more complex, since a buffer has to be
allocated to hold the output too.
~~~~ {.xfail-test}
pub fn compress(src: &[u8]) -> ~[u8] {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let srclen = src.len() as size_t;
let psrc = vec::raw::to_ptr(src);
~~~~ {.xfail-test}
pub fn uncompress(src: &[u8]) -> Option<~[u8]> {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let srclen = src.len() as size_t;
let psrc = vec::raw::to_ptr(src);
-For reference, the examples used here are also available as an [library on
+For reference, the examples used here are also available as a [library on
GitHub](https://github.com/thestinger/rust-snappy).
+# Automatic wrappers
+
+Sometimes writing Rust wrappers can be quite tedious. For example, if
+a function does not take any pointer arguments, there is often no need
+to translate types. In such cases, it is usually still a good idea
+to have a Rust wrapper so as to manage the segmented stacks, but you
+can take advantage of the (standard) `externfn!` macro to remove some
+of the tedium.
+
+In the initial section, we showed an extern block that added a call
+to a specific snappy API:
+
+~~~~ {.xfail-test}
+use std::libc::size_t;
+
+#[link_args = "-lsnappy"]
+extern {
+ fn snappy_max_compressed_length(source_length: size_t) -> size_t;
+}
+
+#[fixed_stack_segment]
+fn main() {
+ let x = unsafe { snappy_max_compressed_length(100) };
+ println(fmt!("max compressed length of a 100 byte buffer: %?", x));
+}
+~~~~
+
+To avoid the need to create a wrapper fn for `snappy_max_compressed_length()`,
+and also to avoid the need to think about `#[fixed_stack_segment]`, we
+could simply use the `externfn!` macro instead, as shown here:
+
+~~~~ {.xfail-test}
+use std::libc::size_t;
+
+externfn!(#[link_args = "-lsnappy"]
+ fn snappy_max_compressed_length(source_length: size_t) -> size_t)
+
+fn main() {
+ let x = unsafe { snappy_max_compressed_length(100) };
+ println(fmt!("max compressed length of a 100 byte buffer: %?", x));
+}
+~~~~
+
+As you can see from the example, `externfn!` replaces the extern block
+entirely. After macro expansion, it will create something like this:
+
+~~~~ {.xfail-test}
+use std::libc::size_t;
+
+// Automatically generated by
+// externfn!(#[link_args = "-lsnappy"]
+// fn snappy_max_compressed_length(source_length: size_t) -> size_t)
+unsafe fn snappy_max_compressed_length(source_length: size_t) -> size_t {
+ #[fixed_stack_segment]; #[inline(never)];
+ return snappy_max_compressed_length(source_length);
+
+ #[link_args = "-lsnappy"]
+ extern {
+ fn snappy_max_compressed_length(source_length: size_t) -> size_t;
+ }
+}
+
+fn main() {
+ let x = unsafe { snappy_max_compressed_length(100) };
+ println(fmt!("max compressed length of a 100 byte buffer: %?", x));
+}
+~~~~
+
+# Segmented stacks and the linter
+
+By default, whenever you invoke a non-Rust fn, the `cstack` lint will
+check that one of the following conditions holds:
+
+1. The call occurs inside of a fn that has been annotated with
+ `#[fixed_stack_segment]`;
+2. The call occurs inside of an `extern fn`;
+3. The call occurs within a stack closure created by some other
+ safe fn.
+
+All of these conditions ensure that you are running on a large stack
+segment. However, they are sometimes too strict. If your application
+will be making many calls into C, it is often beneficial to promote
+the `#[fixed_stack_segment]` attribute higher up the call chain. For
+example, the Rust compiler actually labels `main` itself as requiring a
+`#[fixed_stack_segment]`. In such cases, the linter is just an
+annoyance, because all C calls that occur from within the Rust
+compiler are made on a large stack. Another situation where this
+frequently occurs is on a 64-bit architecture, where large stacks are
+the default. In such cases, you can disable the linter by including a
+`#[allow(cstack)]` directive somewhere, which permits violations of
+the "cstack" rules given above (you can also use `#[warn(cstack)]` to
+convert the errors into warnings, if you prefer).
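+
+As a sketch of that last point (the foreign function `c_helper` is
+hypothetical, shown only for illustration), disabling the lint for a
+module looks like this:
+
+~~~~ {.xfail-test}
+// Every C call made from this module already runs on a large stack,
+// so permit violations of the cstack rules throughout.
+#[allow(cstack)];
+
+use std::libc::c_int;
+
+extern {
+    fn c_helper() -> c_int; // hypothetical C routine
+}
+
+fn call_helper() -> int {
+    // No #[fixed_stack_segment] wrapper is required here.
+    unsafe { c_helper() as int }
+}
+~~~~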
+
# Destructors
Foreign libraries often hand off ownership of resources to the calling code,
impl<T: Send> Unique<T> {
pub fn new(value: T) -> Unique<T> {
+ #[fixed_stack_segment];
+ #[inline(never)];
+
unsafe {
let ptr = malloc(std::sys::size_of::<T>() as size_t) as *mut T;
assert!(!ptr::is_null(ptr));
#[unsafe_destructor]
impl<T: Send> Drop for Unique<T> {
fn drop(&self) {
+ #[fixed_stack_segment];
+ #[inline(never)];
+
unsafe {
let x = intrinsics::init(); // dummy value to swap in
// moving the object out is needed to call the destructor
`loop` denotes an infinite loop, and is the preferred way of writing `while true`:
~~~~
-use std::int;
-let mut x = 5;
+let mut x = 5u;
loop {
x += x - 3;
if x % 5 == 0 { break; }
- println(int::to_str(x));
+ println(x.to_str());
}
~~~~
--include-before-body=doc/version_info.html \
--output=$@
+DOCS += doc/tutorial-conditions.html
+doc/tutorial-conditions.html: tutorial-conditions.md doc/version_info.html doc/rust.css
+ @$(call E, pandoc: $@)
+ $(Q)$(CFG_NODE) $(S)doc/prep.js --highlight $< | \
+ $(CFG_PANDOC) --standalone --toc \
+ --section-divs --number-sections \
+ --from=markdown --to=html --css=rust.css \
+ --include-before-body=doc/version_info.html \
+ --output=$@
+
ifeq ($(CFG_PDFLATEX),)
$(info cfg: no pdflatex found, omitting doc/rust.pdf)
else
$$(Q)$(AR_$(1)) rcs $$@ $$<
rt/$(1)/stage$(2)/$(CFG_RUNTIME_$(1)): $$(RUNTIME_OBJS_$(1)_$(2)) $$(MKFILE_DEPS) \
- $$(RUNTIME_DEF_$(1)_$(2)) $$(LIBUV_LIB_$(1)_$(2))
+ $$(RUNTIME_DEF_$(1)_$(2)) $$(LIBUV_LIB_$(1)_$(2)) $$(JEMALLOC_LIB_$(1)_$(2))
@$$(call E, link: $$@)
$$(Q)$$(call CFG_LINK_CXX_$(1),$$@, $$(RUNTIME_OBJS_$(1)_$(2)) \
- $$(CFG_GCCISH_POST_LIB_FLAGS_$(1)) $$(LIBUV_LIB_$(1)_$(2)) \
+ $$(JEMALLOC_LIB_$(1)_$(2)) $$(CFG_GCCISH_POST_LIB_FLAGS_$(1)) $$(LIBUV_LIB_$(1)_$(2)) \
$$(CFG_LIBUV_LINK_FLAGS_$(1)),$$(RUNTIME_DEF_$(1)_$(2)),$$(CFG_RUNTIME_$(1)))
# FIXME: For some reason libuv's makefiles can't figure out the
TEST_CRATES = $(TEST_TARGET_CRATES) $(TEST_HOST_CRATES)
# Markdown files under doc/ that should have their code extracted and run
-DOC_TEST_NAMES = tutorial tutorial-ffi tutorial-macros tutorial-borrowed-ptr tutorial-tasks rust
+DOC_TEST_NAMES = tutorial tutorial-ffi tutorial-macros tutorial-borrowed-ptr \
+ tutorial-tasks tutorial-conditions tutorial-container rust
######################################################################
# Environment configuration
use std::str;
use std::task::{spawn_sched, SingleThreaded};
use std::vec;
+use std::unstable::running_on_valgrind;
use extra::test::MetricMap;
// that destroys parallelism if we let normal schedulers block.
// It should be possible to remove this spawn once std::run is
// rewritten to be non-blocking.
- do spawn_sched(SingleThreaded) {
+ //
+ // We do _not_ create another thread if we're running on valgrind
+ // because it serializes all threads anyway.
+ if running_on_valgrind() {
let config = config.take();
let testfile = testfile.take();
let mut _mm = MetricMap::new();
run_metrics(config, testfile, &mut _mm);
+ } else {
+ do spawn_sched(SingleThreaded) {
+ let config = config.take();
+ let testfile = testfile.take();
+ let mut _mm = MetricMap::new();
+ run_metrics(config, testfile, &mut _mm);
+ }
}
}
#include <math.h>
#include <stdio.h>
-// must match core::ctypes
+// must match std::ctypes
#define C_FLT(x) (float)x
#define C_DBL(x) (double)x
# xfail-license
# This creates the tables used for distributions implemented using the
-# ziggurat algorithm in `core::rand::distributions;`. They are
+# ziggurat algorithm in `std::rand::distributions;`. They are
# (basically) the tables as used in the ZIGNOR variant (Doornik 2005).
# They are changed rarely, so the generated file should be checked in
# to git.
'path-statement[path statements with no effect]'
'missing-trait-doc[detects missing documentation for traits]'
'missing-struct-doc[detects missing documentation for structs]'
- 'ctypes[proper use of core::libc types in foreign modules]'
+ 'ctypes[proper use of std::libc types in foreign modules]'
"unused-mut[detect mut variables which don't need to be mutable]"
'unused-imports[imports that are never used]'
'heap-memory[use of any (~ type or @ type) heap memory]'
do 10.times {
let tmp = *num;
*num = -1;
- task::yield();
+ task::deschedule();
*num = tmp + 1;
}
c.send(());
do read_mode.read |state| {
// if writer mistakenly got in, make sure it mutates state
// before we assert on it
- do 5.times { task::yield(); }
+ do 5.times { task::deschedule(); }
// make sure writer didn't get in.
assert!(*state);
}
}
#[test]
fn test_rw_write_cond_downgrade_read_race() {
- // Ideally the above test case would have yield statements in it that
+ // Ideally the above test case would have deschedule statements in it that
// helped to expose the race nearly 100% of the time... but adding
- // yields in the intuitively-right locations made it even less likely,
+ // deschedules in the intuitively-right locations made it even less likely,
// and I wasn't sure why :( . This is a mediocre "next best" option.
do 8.times { test_rw_write_cond_downgrade_read_race_helper() }
}
use std::libc;
fn malloc(n: size_t) -> CVec<u8> {
+ #[fixed_stack_segment];
+ #[inline(never)];
+
unsafe {
let mem = libc::malloc(n);
assert!(mem as int != 0);
- c_vec_with_dtor(mem as *mut u8, n as uint, || free(mem))
+ return c_vec_with_dtor(mem as *mut u8, n as uint, || f(mem));
+ }
+
+ fn f(mem: *c_void) {
+ #[fixed_stack_segment]; #[inline(never)];
+ unsafe { libc::free(mem) }
}
}
-/// method that modifies the buffer directory or provides the caller with bytes that can be modifies
+/// method that modifies the buffer directly or provides the caller with bytes that can be modified
/// results in those bytes being marked as used by the buffer.
pub trait FixedBuffer {
- /// Input a vector of bytes. If the buffer becomes full, proccess it with the provided
+ /// Input a vector of bytes. If the buffer becomes full, process it with the provided
/// function and then clear the buffer.
fn input(&mut self, input: &[u8], func: &fn(&[u8]));
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-use std::uint;
use std::vec;
fn output_bits(&self) -> uint;
/**
- * Convenience functon that feeds a string into a digest
+ * Convenience function that feeds a string into a digest.
*
* # Arguments
*
- * * in The string to feed into the digest
+ * * `input` The string to feed into the digest
*/
fn input_str(&mut self, input: &str) {
self.input(input.as_bytes());
}
/**
- * Convenience functon that retrieves the result of a digest as a
+ * Convenience function that retrieves the result of a digest as a
* ~str in hexadecimal format.
*/
fn result_str(&mut self) -> ~str {
fn to_hex(rr: &[u8]) -> ~str {
let mut s = ~"";
for b in rr.iter() {
- let hex = uint::to_str_radix(*b as uint, 16u);
+ let hex = (*b as uint).to_str_radix(16u);
if hex.len() == 1 {
s.push_char('0');
}
priv bits: uint
}
-/// An iterface for casting C-like enum to uint and back.
+/// An interface for casting C-like enum to uint and back.
pub trait CLike {
/// Converts C-like enum to uint.
fn to_uint(&self) -> uint;
pub mod container;
pub mod bitv;
-pub mod fun_treemap;
pub mod list;
pub mod ringbuf;
pub mod priority_queue;
/**
Create a `FileInput` object from a vec of files. An empty
vec means lines are read from `stdin` (use `from_vec_raw` to stop
- this behaviour). Any occurence of `None` represents `stdin`.
+ this behaviour). Any occurrence of `None` represents `stdin`.
*/
pub fn from_vec(files: ~[Option<Path>]) -> FileInput {
FileInput::from_vec_raw(
static TDEFL_WRITE_ZLIB_HEADER : c_int = 0x01000; // write zlib header and adler32 checksum
fn deflate_bytes_internal(bytes: &[u8], flags: c_int) -> ~[u8] {
+ #[fixed_stack_segment]; #[inline(never)];
+
do bytes.as_imm_buf |b, len| {
unsafe {
let mut outsz : size_t = 0;
}
fn inflate_bytes_internal(bytes: &[u8], flags: c_int) -> ~[u8] {
+ #[fixed_stack_segment]; #[inline(never)];
+
do bytes.as_imm_buf |b, len| {
unsafe {
let mut outsz : size_t = 0;
This module is currently unsafe because it uses `Clone + Send` as a type
-parameter bounds meaning POD (plain old data), but `Clone + Send` and
-POD are not equivelant.
+parameter bound meaning POD (plain old data), but `Clone + Send` and
+POD are not equivalent.
*/
pub mod pod {
+++ /dev/null
-// Copyright 2012 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-/*!
- * A functional key,value store that works on anything.
- *
- * This works using a binary search tree. In the first version, it's a
- * very naive algorithm, but it will probably be updated to be a
- * red-black tree or something else.
- *
- * This is copied and modified from treemap right now. It's missing a lot
- * of features.
- */
-
-
-use std::cmp::{Eq, Ord};
-use std::option::{Some, None};
-
-pub type Treemap<K, V> = @TreeNode<K, V>;
-
-enum TreeNode<K, V> {
- Empty,
- Node(@K, @V, @TreeNode<K, V>, @TreeNode<K, V>)
-}
-
-/// Create a treemap
-pub fn init<K: 'static, V: 'static>() -> Treemap<K, V> {
- @Empty
-}
-
-/// Insert a value into the map
-pub fn insert<K:Eq + Ord + 'static,
- V:'static>(
- m: Treemap<K, V>,
- k: K,
- v: V)
- -> Treemap<K, V> {
- @match m {
- @Empty => Node(@k, @v, @Empty, @Empty),
- @Node(kk, vv, left, right) => cond!(
- (k < *kk) { Node(kk, vv, insert(left, k, v), right) }
- (k == *kk) { Node(kk, @v, left, right) }
- _ { Node(kk, vv, left, insert(right, k, v)) }
- )
- }
-}
-
-/// Find a value based on the key
-pub fn find<K:Eq + Ord + 'static,
- V:Clone + 'static>(
- m: Treemap<K, V>,
- k: K)
- -> Option<V> {
- match *m {
- Empty => None,
- Node(kk, v, left, right) => cond!(
- (k == *kk) { Some((*v).clone()) }
- (k < *kk) { find(left, k) }
- _ { find(right, k) }
- )
- }
-}
-
-/// Visit all pairs in the map in order.
-pub fn traverse<K, V>(m: Treemap<K, V>, f: &fn(&K, &V)) {
- match *m {
- Empty => (),
- // Previously, this had what looked like redundant
- // matches to me, so I changed it. but that may be a
- // de-optimization -- tjc
- Node(@ref k, @ref v, left, right) => {
- traverse(left, |k,v| f(k,v));
- f(k, v);
- traverse(right, |k,v| f(k,v));
- }
- }
-}
// except according to those terms.
-use std::uint;
use std::vec;
struct Quad {
if byte <= 16u8 {
result.push_char('0')
}
- result.push_str(uint::to_str_radix(byte as uint, 16u));
+ result.push_str((byte as uint).to_str_radix(16u));
i += 1u32;
}
}
if v.is_empty() { return ~"0" }
let mut s = str::with_capacity(v.len() * l);
for n in v.rev_iter() {
- let ss = uint::to_str_radix(*n as uint, radix);
+ let ss = (*n as uint).to_str_radix(radix);
s.push_str("0".repeat(l - ss.len()));
s.push_str(ss);
}
pub mod rustrt {
use std::libc::{c_char, c_int};
- extern {
- pub fn linenoise(prompt: *c_char) -> *c_char;
- pub fn linenoiseHistoryAdd(line: *c_char) -> c_int;
- pub fn linenoiseHistorySetMaxLen(len: c_int) -> c_int;
- pub fn linenoiseHistorySave(file: *c_char) -> c_int;
- pub fn linenoiseHistoryLoad(file: *c_char) -> c_int;
- pub fn linenoiseSetCompletionCallback(callback: *u8);
- pub fn linenoiseAddCompletion(completions: *(), line: *c_char);
+ #[cfg(stage0)]
+ mod macro_hack {
+ #[macro_escape];
+ macro_rules! externfn(
+ (fn $name:ident ($($arg_name:ident : $arg_ty:ty),*) $(-> $ret_ty:ty),*) => (
+ extern {
+ fn $name($($arg_name : $arg_ty),*) $(-> $ret_ty),*;
+ }
+ )
+ )
}
+
+ externfn!(fn linenoise(prompt: *c_char) -> *c_char)
+ externfn!(fn linenoiseHistoryAdd(line: *c_char) -> c_int)
+ externfn!(fn linenoiseHistorySetMaxLen(len: c_int) -> c_int)
+ externfn!(fn linenoiseHistorySave(file: *c_char) -> c_int)
+ externfn!(fn linenoiseHistoryLoad(file: *c_char) -> c_int)
+ externfn!(fn linenoiseSetCompletionCallback(callback: *u8))
+ externfn!(fn linenoiseAddCompletion(completions: *(), line: *c_char))
}
/// Add a line to history
rustrt::linenoiseAddCompletion(completions, buf);
}
}
-}
+ }
}
}
w.write_str(histr);
}
-/// Returns a HashMap with the number of occurences of every element in the
+/// Returns a HashMap with the number of occurrences of every element in the
/// sequence that the iterator exposes.
pub fn freq_count<T: Iterator<U>, U: Eq+Hash>(mut iter: T) -> hashmap::HashMap<U, uint> {
let mut map = hashmap::HashMap::new::<U, uint>();
}
}
// Uncomment if you wish to test for sem races. Not valgrind-friendly.
- /* do 1000.times { task::yield(); } */
+ /* do 1000.times { task::deschedule(); } */
// Need to wait outside the exclusive.
if waiter_nobe.is_some() {
let _ = waiter_nobe.unwrap().recv();
}
}
- // If yield checks start getting inserted anywhere, we can be
+ // If deschedule checks start getting inserted anywhere, we can be
// killed before or after enqueueing. Deciding whether to
// unkillably reacquire the lock needs to happen atomically
// wrt enqueuing.
let s2 = ~s.clone();
do task::spawn || {
do s2.access {
- do 5.times { task::yield(); }
+ do 5.times { task::deschedule(); }
}
}
do s.access {
- do 5.times { task::yield(); }
+ do 5.times { task::deschedule(); }
}
}
#[test]
s2.acquire();
c.send(());
}
- do 5.times { task::yield(); }
+ do 5.times { task::deschedule(); }
s.release();
let _ = p.recv();
let s = ~Semaphore::new(0);
let s2 = ~s.clone();
do task::spawn || {
- do 5.times { task::yield(); }
+ do 5.times { task::deschedule(); }
s2.release();
let _ = p.recv();
}
c.send(());
}
let _ = p.recv(); // wait for child to come alive
- do 5.times { task::yield(); } // let the child contend
+ do 5.times { task::deschedule(); } // let the child contend
}
let _ = p.recv(); // wait for child to be done
}
do n.times {
do m.lock {
let oldval = *sharedstate;
- task::yield();
+ task::deschedule();
*sharedstate = oldval + 1;
}
}
let (p,c) = comm::stream();
do task::spawn || { // linked
let _ = p.recv(); // wait for sibling to get in the mutex
- task::yield();
+ task::deschedule();
fail!();
}
do m2.lock_cond |cond| {
do n.times {
do lock_rwlock_in_mode(x, mode) {
let oldval = *sharedstate;
- task::yield();
+ task::deschedule();
*sharedstate = oldval + 1;
}
}
/// If the color is a bright color, but the terminal only supports 8 colors,
/// the corresponding normal color will be used instead.
///
- /// Rturns true if the color was set, false otherwise.
+ /// Returns true if the color was set, false otherwise.
pub fn bg(&self, color: color::Color) -> bool {
let color = self.dim_if_necessary(color);
if self.num_colors > color {
}
fn usage(binary: &str, helpstr: &str) -> ! {
+ #[fixed_stack_segment]; #[inline(never)];
+
let message = fmt!("Usage: %s [OPTIONS] [FILTER]", binary);
println(groups::usage(message, optgroups()));
println("");
#[allow(missing_doc)];
-
-use std::int;
use std::io;
use std::num;
use std::str;
}
/// A record specifying a time value in seconds and nanoseconds.
-#[deriving(Eq, Encodable, Decodable)]
+#[deriving(Clone, DeepClone, Eq, Encodable, Decodable)]
pub struct Timespec { sec: i64, nsec: i32 }
/*
* nanoseconds since 1970-01-01T00:00:00Z.
*/
pub fn get_time() -> Timespec {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let mut sec = 0i64;
let mut nsec = 0i32;
* in nanoseconds since an unspecified epoch.
*/
pub fn precise_time_ns() -> u64 {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let mut ns = 0u64;
rustrt::precise_time_ns(&mut ns);
}
pub fn tzset() {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
rustrt::rust_tzset();
}
}
-#[deriving(Eq, Encodable, Decodable)]
+#[deriving(Clone, DeepClone, Eq, Encodable, Decodable)]
pub struct Tm {
tm_sec: i32, // seconds after the minute ~[0-60]
tm_min: i32, // minutes after the hour ~[0-59]
/// Returns the specified time in UTC
pub fn at_utc(clock: Timespec) -> Tm {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let Timespec { sec, nsec } = clock;
let mut tm = empty_tm();
/// Returns the specified time in the local timezone
pub fn at(clock: Timespec) -> Tm {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let Timespec { sec, nsec } = clock;
let mut tm = empty_tm();
impl Tm {
/// Convert time to the seconds from January 1, 1970
pub fn to_timespec(&self) -> Timespec {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let sec = match self.tm_gmtoff {
0_i32 => rustrt::rust_timegm(self),
//'U' {}
'u' => {
let i = tm.tm_wday as int;
- int::to_str(if i == 0 { 7 } else { i })
+ (if i == 0 { 7 } else { i }).to_str()
}
//'V' {}
'v' => {
parse_type('Y', tm))
}
//'W' {}
- 'w' => int::to_str(tm.tm_wday as int),
+ 'w' => (tm.tm_wday as int).to_str(),
//'X' {}
//'x' {}
- 'Y' => int::to_str(tm.tm_year as int + 1900),
+ 'Y' => (tm.tm_year as int + 1900).to_str(),
'y' => fmt!("%02d", (tm.tm_year as int + 1900) % 100),
'Z' => tm.tm_zone.clone(),
'z' => {
}
pub fn main() {
+ #[fixed_stack_segment]; #[inline(never)];
+
let os_args = os::args();
if (os_args.len() > 1 && (os_args[1] == ~"-v" || os_args[1] == ~"--version")) {
("scalarrepl", "Scalar Replacement of Aggregates (DT)"),
("scalarrepl-ssa", "Scalar Replacement of Aggregates (SSAUp)"),
("sccp", "Sparse Conditional Constant Propagation"),
- ("simplify-libcalls", "Simplify well-known library calls"),
("simplifycfg", "Simplify the CFG"),
("sink", "Code sinking"),
("strip", "Strip all symbols from a module"),
pub struct Upcalls {
trace: ValueRef,
- call_shim_on_c_stack: ValueRef,
- call_shim_on_rust_stack: ValueRef,
rust_personality: ValueRef,
reset_stack_limit: ValueRef
}
@Upcalls {
trace: upcall!(fn trace(opaque_ptr, opaque_ptr, int_ty) -> Type::void()),
- call_shim_on_c_stack: upcall!(fn call_shim_on_c_stack(opaque_ptr, opaque_ptr) -> int_ty),
- call_shim_on_rust_stack:
- upcall!(fn call_shim_on_rust_stack(opaque_ptr, opaque_ptr) -> int_ty),
rust_personality: upcall!(nothrow fn rust_personality -> Type::i32()),
reset_stack_limit: upcall!(nothrow fn reset_stack_limit -> Type::void())
}
use util::ppaux;
use std::hashmap::{HashMap,HashSet};
-use std::int;
use std::io;
use std::os;
use std::vec;
time(time_passes, ~"loop checking", ||
middle::check_loop::check_crate(ty_cx, crate));
+ time(time_passes, ~"stack checking", ||
+ middle::stack_check::stack_check_crate(ty_cx, crate));
+
let middle::moves::MoveMaps {moves_map, moved_variables_set,
capture_map} =
time(time_passes, ~"compute moves", ||
match node {
pprust::node_item(s, item) => {
pp::space(s.s);
- pprust::synth_comment(s, int::to_str(item.id));
+ pprust::synth_comment(s, item.id.to_str());
}
pprust::node_block(s, ref blk) => {
pp::space(s.s);
pprust::synth_comment(
- s, ~"block " + int::to_str(blk.id));
+ s, ~"block " + blk.id.to_str());
}
pprust::node_expr(s, expr) => {
pp::space(s.s);
- pprust::synth_comment(s, int::to_str(expr.id));
+ pprust::synth_comment(s, expr.id.to_str());
pprust::pclose(s);
}
pprust::node_pat(s, pat) => {
pp::space(s.s);
- pprust::synth_comment(s, ~"pat " + int::to_str(pat.id));
+ pprust::synth_comment(s, ~"pat " + pat.id.to_str());
}
}
}
}
debugging_opts |= this_bit;
}
+
if debugging_opts & session::debug_llvm != 0 {
- unsafe {
- llvm::LLVMSetDebug(1);
+ set_llvm_debug();
+
+ fn set_llvm_debug() {
+ #[fixed_stack_segment]; #[inline(never)];
+ unsafe { llvm::LLVMSetDebug(1); }
}
}
}
}
+#[cfg(stage0)]
fn mk_test_module(cx: &TestCtxt) -> @ast::item {
// Link to extra
return @item;
}
+#[cfg(not(stage0))]
+fn mk_test_module(cx: &TestCtxt) -> @ast::item {
+
+ // Link to extra
+ let view_items = ~[mk_std(cx)];
+
+ // A constant vector of test descriptors.
+ let tests = mk_tests(cx);
+
+ // The synthesized main function which will call the console test runner
+ // with our list of tests
+ let mainfn = (quote_item!(cx.ext_cx,
+ pub fn main() {
+ #[main];
+ extra::test::test_main_static(::std::os::args(), TESTS);
+ }
+ )).unwrap();
+
+ let testmod = ast::_mod {
+ view_items: view_items,
+ items: ~[mainfn, tests],
+ };
+ let item_ = ast::item_mod(testmod);
+
+ // This attribute tells resolve to let us call unexported functions
+ let resolve_unexported_attr =
+ attr::mk_attr(attr::mk_word_item(@"!resolve_unexported"));
+
+ let item = ast::item {
+ ident: cx.sess.ident_of("__test"),
+ attrs: ~[resolve_unexported_attr],
+ id: cx.sess.next_node_id(),
+ node: item_,
+ vis: ast::public,
+ span: dummy_sp(),
+ };
+
+ debug!("Synthetic test module:\n%s\n",
+ pprust::item_to_str(@item.clone(), cx.sess.intr()));
+
+ return @item;
+}
fn nospan<T>(t: T) -> codemap::spanned<T> {
codemap::spanned { node: t, span: dummy_sp() }
types: ~[] }
}
+#[cfg(stage0)]
fn mk_tests(cx: &TestCtxt) -> @ast::item {
let ext_cx = cx.ext_cx;
;
)).unwrap()
}
+#[cfg(not(stage0))]
+fn mk_tests(cx: &TestCtxt) -> @ast::item {
+ // The vector of test_descs for this crate
+ let test_descs = mk_test_descs(cx);
+
+ (quote_item!(cx.ext_cx,
+ pub static TESTS : &'static [self::extra::test::TestDescAndFn] =
+ $test_descs
+ ;
+ )).unwrap()
+}
fn is_extra(cx: &TestCtxt) -> bool {
let items = attr::find_linkage_metas(cx.crate.attrs);
}
}
+#[cfg(stage0)]
fn mk_test_desc_and_fn_rec(cx: &TestCtxt, test: &Test) -> @ast::expr {
let span = test.span;
let path = test.path.clone();
);
e
}
+#[cfg(not(stage0))]
+fn mk_test_desc_and_fn_rec(cx: &TestCtxt, test: &Test) -> @ast::expr {
+ let span = test.span;
+ let path = test.path.clone();
+
+ debug!("encoding %s", ast_util::path_name_i(path));
+
+ let name_lit: ast::lit =
+ nospan(ast::lit_str(ast_util::path_name_i(path).to_managed()));
+
+ let name_expr = @ast::expr {
+ id: cx.sess.next_node_id(),
+ node: ast::expr_lit(@name_lit),
+ span: span
+ };
+
+ let fn_path = path_node_global(path);
+
+ let fn_expr = @ast::expr {
+ id: cx.sess.next_node_id(),
+ node: ast::expr_path(fn_path),
+ span: span,
+ };
+
+ let t_expr = if test.bench {
+ quote_expr!(cx.ext_cx, self::extra::test::StaticBenchFn($fn_expr) )
+ } else {
+ quote_expr!(cx.ext_cx, self::extra::test::StaticTestFn($fn_expr) )
+ };
+
+ let ignore_expr = if test.ignore {
+ quote_expr!(cx.ext_cx, true )
+ } else {
+ quote_expr!(cx.ext_cx, false )
+ };
+
+ let fail_expr = if test.should_fail {
+ quote_expr!(cx.ext_cx, true )
+ } else {
+ quote_expr!(cx.ext_cx, false )
+ };
+
+ let e = quote_expr!(cx.ext_cx,
+ self::extra::test::TestDescAndFn {
+ desc: self::extra::test::TestDesc {
+ name: self::extra::test::StaticTestName($name_expr),
+ ignore: $ignore_expr,
+ should_fail: $fail_expr
+ },
+ testfn: $t_expr,
+ }
+ );
+ e
+}
// option. This file may not be copied, modified, or distributed
// except according to those terms.
+// LLVM wrappers are intended to be called from trans,
+// which already runs in a #[fixed_stack_segment]
+#[allow(cstack)];
+
use std::c_str::ToCStr;
use std::hashmap::HashMap;
use std::libc::{c_uint, c_ushort};
ReturnsTwiceAttribute = 1 << 29,
UWTableAttribute = 1 << 30,
NonLazyBindAttribute = 1 << 31,
-
- // Not added to LLVM yet, so may need to stay updated if LLVM changes.
- // FIXME(#8199): if this changes, be sure to change the relevant constant
- // down below
- // FixedStackSegment = 1 << 41,
}
// enum for the LLVM IntPredicate type
#[fast_ffi]
pub fn LLVMSetGC(Fn: ValueRef, Name: *c_char);
#[fast_ffi]
- pub fn LLVMAddFunctionAttr(Fn: ValueRef, PA: c_uint, HighPA: c_uint);
+ pub fn LLVMAddFunctionAttr(Fn: ValueRef, PA: c_uint);
+ #[fast_ffi]
+ pub fn LLVMAddFunctionAttrString(Fn: ValueRef, Name: *c_char);
#[fast_ffi]
pub fn LLVMGetFunctionAttr(Fn: ValueRef) -> c_ulonglong;
#[fast_ffi]
pub fn SetFunctionAttribute(Fn: ValueRef, attr: Attribute) {
unsafe {
- let attr = attr as u64;
- let lower = attr & 0xffffffff;
- let upper = (attr >> 32) & 0xffffffff;
- llvm::LLVMAddFunctionAttr(Fn, lower as c_uint, upper as c_uint);
- }
-}
-
-// FIXME(#8199): this shouldn't require this hackery. On i686
-// (FixedStackSegment as u64) will return 0 instead of 1 << 41.
-// Furthermore, if we use a match of any sort then an LLVM
-// assertion is generated!
-pub fn SetFixedStackSegmentAttribute(Fn: ValueRef) {
- unsafe {
- let attr = 1u64 << 41;
- let lower = attr & 0xffffffff;
- let upper = (attr >> 32) & 0xffffffff;
- llvm::LLVMAddFunctionAttr(Fn, lower as c_uint, upper as c_uint);
+ llvm::LLVMAddFunctionAttr(Fn, attr as c_uint)
}
}
/* Memory-managed object interface to type handles. */
self.type_to_str_depth(ty, 30)
}
+ pub fn types_to_str(&self, tys: &[Type]) -> ~str {
+ let strs = tys.map(|t| self.type_to_str(*t));
+ fmt!("[%s]", strs.connect(","))
+ }
+
pub fn val_to_str(&self, val: ValueRef) -> ~str {
unsafe {
let ty = Type::from_ref(llvm::LLVMTypeOf(val));
use std::hashmap::{HashMap, HashSet};
use std::io;
use std::str;
-use std::uint;
use std::vec;
use extra::flate;
use extra::serialize::Encodable;
ebml_w: &mut writer::Encoder,
disr_val: uint) {
ebml_w.start_tag(tag_disr_val);
- let s = uint::to_str(disr_val);
+ let s = disr_val.to_str();
ebml_w.writer.write(s.as_bytes());
ebml_w.end_tag();
}
use std::hashmap::HashMap;
use std::io::WriterUtil;
use std::io;
-use std::uint;
use syntax::abi::AbiSet;
use syntax::ast;
use syntax::ast::*;
w.write_char('p');
w.write_str((cx.ds)(did));
w.write_char('|');
- w.write_str(uint::to_str(id));
+ w.write_str(id.to_str());
}
ty::ty_self(did) => {
w.write_char('s');
}
)
}
- typeck::method_trait(did, m) => {
- typeck::method_trait(did.tr(xcx), m)
+ typeck::method_object(ref mo) => {
+ typeck::method_object(
+ typeck::method_object {
+ trait_id: mo.trait_id.tr(xcx),
+ .. *mo
+ }
+ )
}
}
}
#[test]
fn test_basic() {
- let ext_cx = mk_ctxt();
- roundtrip(quote_item!(
+ let cx = mk_ctxt();
+ roundtrip(quote_item!(cx,
fn foo() {}
));
}
#[test]
fn test_smalltalk() {
- let ext_cx = mk_ctxt();
- roundtrip(quote_item!(
+ let cx = mk_ctxt();
+ roundtrip(quote_item!(cx,
fn foo() -> int { 3 + 4 } // first smalltalk program ever executed.
));
}
#[test]
fn test_more() {
- let ext_cx = mk_ctxt();
- roundtrip(quote_item!(
+ let cx = mk_ctxt();
+ roundtrip(quote_item!(cx,
fn foo(x: uint, y: uint) -> uint {
let z = x + y;
return z;
#[test]
fn test_simplification() {
- let ext_cx = mk_ctxt();
- let item_in = ast::ii_item(quote_item!(
+ let cx = mk_ctxt();
+ let item_in = ast::ii_item(quote_item!(cx,
fn new_int_alist<B>() -> alist<int, B> {
fn eq_int(a: int, b: int) -> bool { a == b }
return alist {eq_fn: eq_int, data: ~[]};
}
).unwrap());
let item_out = simplify_ast(&item_in);
- let item_exp = ast::ii_item(quote_item!(
+ let item_exp = ast::ii_item(quote_item!(cx,
fn new_int_alist<B>() -> alist<int, B> {
return alist {eq_fn: eq_int, data: ~[]};
}
use syntax::ast;
use syntax::ast_util;
use syntax::codemap::span;
-use syntax::oldvisit;
+use syntax::visit;
+use syntax::visit::Visitor;
use util::ppaux::Repr;
#[deriving(Clone)]
reported: @mut HashSet<ast::NodeId>,
}
+struct CheckLoanVisitor;
+
+impl<'self> Visitor<CheckLoanCtxt<'self>> for CheckLoanVisitor {
+ fn visit_expr<'a>(&mut self, ex:@ast::expr, e:CheckLoanCtxt<'a>) {
+ check_loans_in_expr(self, ex, e);
+ }
+ fn visit_local(&mut self, l:@ast::Local, e:CheckLoanCtxt) {
+ check_loans_in_local(self, l, e);
+ }
+ fn visit_block(&mut self, b:&ast::Block, e:CheckLoanCtxt) {
+ check_loans_in_block(self, b, e);
+ }
+ fn visit_pat(&mut self, p:@ast::pat, e:CheckLoanCtxt) {
+ check_loans_in_pat(self, p, e);
+ }
+ fn visit_fn(&mut self, fk:&visit::fn_kind, fd:&ast::fn_decl,
+ b:&ast::Block, s:span, n:ast::NodeId, e:CheckLoanCtxt) {
+ check_loans_in_fn(self, fk, fd, b, s, n, e);
+ }
+}
+
pub fn check_loans(bccx: @BorrowckCtxt,
dfcx_loans: &LoanDataFlow,
move_data: move_data::FlowedMoveData,
reported: @mut HashSet::new(),
};
- let vt = oldvisit::mk_vt(@oldvisit::Visitor {
- visit_expr: check_loans_in_expr,
- visit_local: check_loans_in_local,
- visit_block: check_loans_in_block,
- visit_pat: check_loans_in_pat,
- visit_fn: check_loans_in_fn,
- .. *oldvisit::default_visitor()
- });
- (vt.visit_block)(body, (clcx, vt));
+ let mut vt = CheckLoanVisitor;
+ vt.visit_block(body, clcx);
}
enum MoveError {
}
}
-fn check_loans_in_fn<'a>(fk: &oldvisit::fn_kind,
+fn check_loans_in_fn<'a>(visitor: &mut CheckLoanVisitor,
+ fk: &visit::fn_kind,
decl: &ast::fn_decl,
body: &ast::Block,
sp: span,
id: ast::NodeId,
- (this, visitor): (CheckLoanCtxt<'a>,
- oldvisit::vt<CheckLoanCtxt<'a>>)) {
+ this: CheckLoanCtxt<'a>) {
match *fk {
- oldvisit::fk_item_fn(*) |
- oldvisit::fk_method(*) => {
+ visit::fk_item_fn(*) |
+ visit::fk_method(*) => {
// Don't process nested items.
return;
}
- oldvisit::fk_anon(*) |
- oldvisit::fk_fn_block(*) => {
+ visit::fk_anon(*) |
+ visit::fk_fn_block(*) => {
check_captured_variables(this, id, sp);
}
}
- oldvisit::visit_fn(fk, decl, body, sp, id, (this, visitor));
+ visit::walk_fn(visitor, fk, decl, body, sp, id, this);
fn check_captured_variables(this: CheckLoanCtxt,
closure_id: ast::NodeId,
}
}
-fn check_loans_in_local<'a>(local: @ast::Local,
- (this, vt): (CheckLoanCtxt<'a>,
- oldvisit::vt<CheckLoanCtxt<'a>>)) {
- oldvisit::visit_local(local, (this, vt));
+fn check_loans_in_local<'a>(vt: &mut CheckLoanVisitor,
+ local: @ast::Local,
+ this: CheckLoanCtxt<'a>) {
+ visit::walk_local(vt, local, this);
}
-fn check_loans_in_expr<'a>(expr: @ast::expr,
- (this, vt): (CheckLoanCtxt<'a>,
- oldvisit::vt<CheckLoanCtxt<'a>>)) {
- oldvisit::visit_expr(expr, (this, vt));
+fn check_loans_in_expr<'a>(vt: &mut CheckLoanVisitor,
+ expr: @ast::expr,
+ this: CheckLoanCtxt<'a>) {
+ visit::walk_expr(vt, expr, this);
debug!("check_loans_in_expr(expr=%s)",
expr.repr(this.tcx()));
}
}
-fn check_loans_in_pat<'a>(pat: @ast::pat,
- (this, vt): (CheckLoanCtxt<'a>,
- oldvisit::vt<CheckLoanCtxt<'a>>))
+fn check_loans_in_pat<'a>(vt: &mut CheckLoanVisitor,
+ pat: @ast::pat,
+ this: CheckLoanCtxt<'a>)
{
this.check_for_conflicting_loans(pat.id);
this.check_move_out_from_id(pat.id, pat.span);
- oldvisit::visit_pat(pat, (this, vt));
+ visit::walk_pat(vt, pat, this);
}
-fn check_loans_in_block<'a>(blk: &ast::Block,
- (this, vt): (CheckLoanCtxt<'a>,
- oldvisit::vt<CheckLoanCtxt<'a>>))
+fn check_loans_in_block<'a>(vt: &mut CheckLoanVisitor,
+ blk: &ast::Block,
+ this: CheckLoanCtxt<'a>)
{
- oldvisit::visit_block(blk, (this, vt));
+ visit::walk_block(vt, blk, this);
this.check_for_conflicting_loans(blk.id);
}
use syntax::ast::*;
use syntax::codemap;
-use syntax::{oldvisit, ast_util, ast_map};
+use syntax::{ast_util, ast_map};
+use syntax::visit::Visitor;
+use syntax::visit;
+
+struct CheckCrateVisitor {
+ sess: Session,
+ ast_map: ast_map::map,
+ def_map: resolve::DefMap,
+ method_map: typeck::method_map,
+ tcx: ty::ctxt,
+}
+
+impl Visitor<bool> for CheckCrateVisitor {
+ fn visit_item(&mut self, i:@item, env:bool) {
+ check_item(self, self.sess, self.ast_map, self.def_map, i, env);
+ }
+ fn visit_pat(&mut self, p:@pat, env:bool) {
+ check_pat(self, p, env);
+ }
+ fn visit_expr(&mut self, ex:@expr, env:bool) {
+ check_expr(self, self.sess, self.def_map, self.method_map,
+ self.tcx, ex, env);
+ }
+}
pub fn check_crate(sess: Session,
crate: &Crate,
def_map: resolve::DefMap,
method_map: typeck::method_map,
tcx: ty::ctxt) {
- oldvisit::visit_crate(crate, (false, oldvisit::mk_vt(@oldvisit::Visitor {
- visit_item: |a,b| check_item(sess, ast_map, def_map, a, b),
- visit_pat: check_pat,
- visit_expr: |a,b|
- check_expr(sess, def_map, method_map, tcx, a, b),
- .. *oldvisit::default_visitor()
- })));
+ let mut v = CheckCrateVisitor {
+ sess: sess,
+ ast_map: ast_map,
+ def_map: def_map,
+ method_map: method_map,
+ tcx: tcx,
+ };
+ visit::walk_crate(&mut v, crate, false);
sess.abort_if_errors();
}
-pub fn check_item(sess: Session,
+pub fn check_item(v: &mut CheckCrateVisitor,
+ sess: Session,
ast_map: ast_map::map,
def_map: resolve::DefMap,
it: @item,
- (_is_const, v): (bool,
- oldvisit::vt<bool>)) {
+ _is_const: bool) {
match it.node {
item_static(_, _, ex) => {
- (v.visit_expr)(ex, (true, v));
+ v.visit_expr(ex, true);
check_item_recursion(sess, ast_map, def_map, it);
}
item_enum(ref enum_definition, _) => {
for var in (*enum_definition).variants.iter() {
for ex in var.node.disr_expr.iter() {
- (v.visit_expr)(*ex, (true, v));
+ v.visit_expr(*ex, true);
}
}
}
- _ => oldvisit::visit_item(it, (false, v))
+ _ => visit::walk_item(v, it, false)
}
}
-pub fn check_pat(p: @pat, (_is_const, v): (bool, oldvisit::vt<bool>)) {
+pub fn check_pat(v: &mut CheckCrateVisitor, p: @pat, _is_const: bool) {
fn is_str(e: @expr) -> bool {
match e.node {
expr_vstore(
}
match p.node {
// Let through plain ~-string literals here
- pat_lit(a) => if !is_str(a) { (v.visit_expr)(a, (true, v)); },
+ pat_lit(a) => if !is_str(a) { v.visit_expr(a, true); },
pat_range(a, b) => {
- if !is_str(a) { (v.visit_expr)(a, (true, v)); }
- if !is_str(b) { (v.visit_expr)(b, (true, v)); }
+ if !is_str(a) { v.visit_expr(a, true); }
+ if !is_str(b) { v.visit_expr(b, true); }
}
- _ => oldvisit::visit_pat(p, (false, v))
+ _ => visit::walk_pat(v, p, false)
}
}
-pub fn check_expr(sess: Session,
+pub fn check_expr(v: &mut CheckCrateVisitor,
+ sess: Session,
def_map: resolve::DefMap,
method_map: typeck::method_map,
tcx: ty::ctxt,
e: @expr,
- (is_const, v): (bool,
- oldvisit::vt<bool>)) {
+ is_const: bool) {
if is_const {
match e.node {
expr_unary(_, deref, _) => { }
}
}
}
- expr_paren(e) => { check_expr(sess, def_map, method_map,
- tcx, e, (is_const, v)); }
+ expr_paren(e) => { check_expr(v, sess, def_map, method_map,
+ tcx, e, is_const); }
expr_vstore(_, expr_vstore_slice) |
expr_vec(_, m_imm) |
expr_addr_of(m_imm, _) |
}
_ => ()
}
- oldvisit::visit_expr(e, (is_const, v));
+ visit::walk_expr(v, e, is_const);
}
#[deriving(Clone)]
idstack: @mut ~[NodeId]
}
+struct CheckItemRecursionVisitor;
+
// Make sure a const item doesn't recursively refer to itself
// FIXME: Should use the dependency graph when it's available (#1356)
pub fn check_item_recursion(sess: Session,
idstack: @mut ~[]
};
- let visitor = oldvisit::mk_vt(@oldvisit::Visitor {
- visit_item: visit_item,
- visit_expr: visit_expr,
- .. *oldvisit::default_visitor()
- });
- (visitor.visit_item)(it, (env, visitor));
+ let mut visitor = CheckItemRecursionVisitor;
+ visitor.visit_item(it, env);
+}
- fn visit_item(it: @item, (env, v): (env, oldvisit::vt<env>)) {
+impl Visitor<env> for CheckItemRecursionVisitor {
+ fn visit_item(&mut self, it: @item, env: env) {
if env.idstack.iter().any(|x| x == &(it.id)) {
env.sess.span_fatal(env.root_it.span, "recursive constant");
}
env.idstack.push(it.id);
- oldvisit::visit_item(it, (env, v));
+ visit::walk_item(self, it, env);
env.idstack.pop();
}
- fn visit_expr(e: @expr, (env, v): (env, oldvisit::vt<env>)) {
+ fn visit_expr(&mut self, e: @expr, env: env) {
match e.node {
expr_path(*) => match env.def_map.find(&e.id) {
Some(&def_static(def_id, _)) if ast_util::is_local(def_id) =>
match env.ast_map.get_copy(&def_id.node) {
ast_map::node_item(it, _) => {
- (v.visit_item)(it, (env, v));
+ self.visit_item(it, env);
}
_ => fail!("const not bound to an item")
},
},
_ => ()
}
- oldvisit::visit_expr(e, (env, v));
+ visit::walk_expr(self, e, env);
}
}
use middle::ty;
use syntax::ast::*;
-use syntax::oldvisit;
+use syntax::visit;
+use syntax::visit::Visitor;
#[deriving(Clone)]
pub struct Context {
can_ret: bool
}
+struct CheckLoopVisitor {
+ tcx: ty::ctxt,
+}
+
pub fn check_crate(tcx: ty::ctxt, crate: &Crate) {
- oldvisit::visit_crate(crate,
- (Context { in_loop: false, can_ret: true },
- oldvisit::mk_vt(@oldvisit::Visitor {
- visit_item: |i, (_cx, v)| {
- oldvisit::visit_item(i, (Context {
+ visit::walk_crate(&mut CheckLoopVisitor { tcx: tcx },
+ crate,
+ Context { in_loop: false, can_ret: true });
+}
+
+impl Visitor<Context> for CheckLoopVisitor {
+ fn visit_item(&mut self, i:@item, _cx:Context) {
+ visit::walk_item(self, i, Context {
in_loop: false,
can_ret: true
- }, v));
- },
- visit_expr: |e: @expr, (cx, v): (Context, oldvisit::vt<Context>)| {
+ });
+ }
+
+ fn visit_expr(&mut self, e:@expr, cx:Context) {
+
match e.node {
expr_while(e, ref b) => {
- (v.visit_expr)(e, (cx, v));
- (v.visit_block)(b, (Context { in_loop: true,.. cx }, v));
+ self.visit_expr(e, cx);
+ self.visit_block(b, Context { in_loop: true,.. cx });
}
expr_loop(ref b, _) => {
- (v.visit_block)(b, (Context { in_loop: true,.. cx }, v));
+ self.visit_block(b, Context { in_loop: true,.. cx });
}
expr_fn_block(_, ref b) => {
- (v.visit_block)(b, (Context {
- in_loop: false,
- can_ret: false
- }, v));
+ self.visit_block(b, Context { in_loop: false, can_ret: false });
}
expr_break(_) => {
if !cx.in_loop {
- tcx.sess.span_err(e.span, "`break` outside of loop");
+ self.tcx.sess.span_err(e.span, "`break` outside of loop");
}
}
expr_again(_) => {
if !cx.in_loop {
- tcx.sess.span_err(e.span, "`loop` outside of loop");
+ self.tcx.sess.span_err(e.span, "`loop` outside of loop");
}
}
expr_ret(oe) => {
if !cx.can_ret {
- tcx.sess.span_err(e.span, "`return` in block function");
+ self.tcx.sess.span_err(e.span, "`return` in block function");
}
- oldvisit::visit_expr_opt(oe, (cx, v));
+ visit::walk_expr_opt(self, oe, cx);
}
- _ => oldvisit::visit_expr(e, (cx, v))
+ _ => visit::walk_expr(self, e, cx)
}
- },
- .. *oldvisit::default_visitor()
- })));
+
+ }
}
use syntax::ast::*;
use syntax::ast_util::{unguarded_pat, walk_pat};
use syntax::codemap::{span, dummy_sp, spanned};
-use syntax::oldvisit;
+use syntax::visit;
+use syntax::visit::{Visitor,fn_kind};
pub struct MatchCheckCtxt {
tcx: ty::ctxt,
moves_map: moves::MovesMap
}
+struct CheckMatchVisitor {
+ cx: @MatchCheckCtxt
+}
+
+impl Visitor<()> for CheckMatchVisitor {
+ fn visit_expr(&mut self, ex:@expr, e:()) {
+ check_expr(self, self.cx, ex, e);
+ }
+ fn visit_local(&mut self, l:@Local, e:()) {
+ check_local(self, self.cx, l, e);
+ }
+ fn visit_fn(&mut self, fk:&fn_kind, fd:&fn_decl, b:&Block, s:span, n:NodeId, e:()) {
+ check_fn(self, self.cx, fk, fd, b, s, n, e);
+ }
+}
+
pub fn check_crate(tcx: ty::ctxt,
method_map: method_map,
moves_map: moves::MovesMap,
let cx = @MatchCheckCtxt {tcx: tcx,
method_map: method_map,
moves_map: moves_map};
- oldvisit::visit_crate(crate, ((), oldvisit::mk_vt(@oldvisit::Visitor {
- visit_expr: |a,b| check_expr(cx, a, b),
- visit_local: |a,b| check_local(cx, a, b),
- visit_fn: |kind, decl, body, sp, id, (e, v)|
- check_fn(cx, kind, decl, body, sp, id, (e, v)),
- .. *oldvisit::default_visitor::<()>()
- })));
+ let mut v = CheckMatchVisitor { cx: cx };
+
+ visit::walk_crate(&mut v, crate, ());
+
tcx.sess.abort_if_errors();
}
-pub fn check_expr(cx: @MatchCheckCtxt,
+pub fn check_expr(v: &mut CheckMatchVisitor,
+ cx: @MatchCheckCtxt,
ex: @expr,
- (s, v): ((), oldvisit::vt<()>)) {
- oldvisit::visit_expr(ex, (s, v));
+ s: ()) {
+ visit::walk_expr(v, ex, s);
match ex.node {
expr_match(scrut, ref arms) => {
// First, check legality of move bindings.
else { None }
}
-pub fn check_local(cx: &MatchCheckCtxt,
+pub fn check_local(v: &mut CheckMatchVisitor,
+ cx: &MatchCheckCtxt,
loc: @Local,
- (s, v): ((), oldvisit::vt<()>)) {
- oldvisit::visit_local(loc, (s, v));
+ s: ()) {
+ visit::walk_local(v, loc, s);
if is_refutable(cx, loc.pat) {
cx.tcx.sess.span_err(loc.pat.span,
"refutable pattern in local binding");
check_legality_of_move_bindings(cx, false, [ loc.pat ]);
}
-pub fn check_fn(cx: &MatchCheckCtxt,
- kind: &oldvisit::fn_kind,
+pub fn check_fn(v: &mut CheckMatchVisitor,
+ cx: &MatchCheckCtxt,
+ kind: &visit::fn_kind,
decl: &fn_decl,
body: &Block,
sp: span,
id: NodeId,
- (s, v): ((),
- oldvisit::vt<()>)) {
- oldvisit::visit_fn(kind, decl, body, sp, id, (s, v));
+ s: ()) {
+ visit::walk_fn(v, kind, decl, body, sp, id, s);
for input in decl.inputs.iter() {
if is_refutable(cx, input.pat) {
cx.tcx.sess.span_err(input.pat.span,
use middle::ty;
use middle;
-use syntax::{ast, ast_map, ast_util, oldvisit};
+use syntax::{ast, ast_map, ast_util};
+use syntax::visit;
+use syntax::visit::Visitor;
use syntax::ast::*;
use std::float;
}
}
+struct ConstEvalVisitor { tcx: ty::ctxt }
+
+impl Visitor<()> for ConstEvalVisitor {
+ fn visit_expr_post(&mut self, e:@expr, _:()) {
+ classify(e, self.tcx);
+ }
+}
+
pub fn process_crate(crate: &ast::Crate,
tcx: ty::ctxt) {
- let v = oldvisit::mk_simple_visitor(@oldvisit::SimpleVisitor {
- visit_expr_post: |e| { classify(e, tcx); },
- .. *oldvisit::default_simple_visitor()
- });
- oldvisit::visit_crate(crate, ((), v));
+ let mut v = ConstEvalVisitor { tcx: tcx };
+ visit::walk_crate(&mut v, crate, ());
tcx.sess.abort_if_errors();
}
use syntax::ast::{expr_unary, unsafe_fn, expr_path};
use syntax::ast;
use syntax::codemap::span;
-use syntax::oldvisit::{fk_item_fn, fk_method};
-use syntax::oldvisit;
+use syntax::visit::{fk_item_fn, fk_method};
+use syntax::visit;
+use syntax::visit::{Visitor,fn_kind};
+use syntax::ast::{fn_decl,Block,NodeId,expr};
#[deriving(Eq)]
enum UnsafeContext {
}
}
-pub fn check_crate(tcx: ty::ctxt,
- method_map: method_map,
- crate: &ast::Crate) {
- let context = @mut Context {
- method_map: method_map,
- unsafe_context: SafeContext,
- };
+struct EffectCheckVisitor {
+ tcx: ty::ctxt,
+ context: @mut Context,
+}
- let require_unsafe: @fn(span: span,
- description: &str) = |span, description| {
- match context.unsafe_context {
+impl EffectCheckVisitor {
+ fn require_unsafe(&mut self, span: span, description: &str) {
+ match self.context.unsafe_context {
SafeContext => {
// Report an error.
- tcx.sess.span_err(span,
+ self.tcx.sess.span_err(span,
fmt!("%s requires unsafe function or block",
description))
}
UnsafeBlock(block_id) => {
// OK, but record this.
debug!("effect: recording unsafe block as used: %?", block_id);
- let _ = tcx.used_unsafe.insert(block_id);
+ let _ = self.tcx.used_unsafe.insert(block_id);
}
UnsafeFn => {}
}
- };
+ }
+}
+
+impl Visitor<()> for EffectCheckVisitor {
+ fn visit_fn(&mut self, fn_kind:&fn_kind, fn_decl:&fn_decl,
+ block:&Block, span:span, node_id:NodeId, _:()) {
- let visitor = oldvisit::mk_vt(@oldvisit::Visitor {
- visit_fn: |fn_kind, fn_decl, block, span, node_id, (_, visitor)| {
let (is_item_fn, is_unsafe_fn) = match *fn_kind {
fk_item_fn(_, _, purity, _) => (true, purity == unsafe_fn),
fk_method(_, _, method) => (true, method.purity == unsafe_fn),
_ => (false, false),
};
- let old_unsafe_context = context.unsafe_context;
+ let old_unsafe_context = self.context.unsafe_context;
if is_unsafe_fn {
- context.unsafe_context = UnsafeFn
+ self.context.unsafe_context = UnsafeFn
} else if is_item_fn {
- context.unsafe_context = SafeContext
+ self.context.unsafe_context = SafeContext
}
- oldvisit::visit_fn(fn_kind,
+ visit::walk_fn(self,
+ fn_kind,
fn_decl,
block,
span,
node_id,
- ((),
- visitor));
+ ());
+
+ self.context.unsafe_context = old_unsafe_context
+ }
- context.unsafe_context = old_unsafe_context
- },
+ fn visit_block(&mut self, block:&Block, _:()) {
- visit_block: |block, (_, visitor)| {
- let old_unsafe_context = context.unsafe_context;
+ let old_unsafe_context = self.context.unsafe_context;
if block.rules == ast::UnsafeBlock &&
- context.unsafe_context == SafeContext {
- context.unsafe_context = UnsafeBlock(block.id)
+ self.context.unsafe_context == SafeContext {
+ self.context.unsafe_context = UnsafeBlock(block.id)
}
- oldvisit::visit_block(block, ((), visitor));
+ visit::walk_block(self, block, ());
- context.unsafe_context = old_unsafe_context
- },
+ self.context.unsafe_context = old_unsafe_context
+ }
+
+ fn visit_expr(&mut self, expr:@expr, _:()) {
- visit_expr: |expr, (_, visitor)| {
match expr.node {
expr_method_call(callee_id, _, _, _, _, _) => {
- let base_type = ty::node_id_to_type(tcx, callee_id);
+ let base_type = ty::node_id_to_type(self.tcx, callee_id);
debug!("effect: method call case, base type is %s",
- ppaux::ty_to_str(tcx, base_type));
+ ppaux::ty_to_str(self.tcx, base_type));
if type_is_unsafe_function(base_type) {
- require_unsafe(expr.span,
+ self.require_unsafe(expr.span,
"invocation of unsafe method")
}
}
expr_call(base, _, _) => {
- let base_type = ty::node_id_to_type(tcx, base.id);
+ let base_type = ty::node_id_to_type(self.tcx, base.id);
debug!("effect: call case, base type is %s",
- ppaux::ty_to_str(tcx, base_type));
+ ppaux::ty_to_str(self.tcx, base_type));
if type_is_unsafe_function(base_type) {
- require_unsafe(expr.span, "call to unsafe function")
+ self.require_unsafe(expr.span, "call to unsafe function")
}
}
expr_unary(_, deref, base) => {
- let base_type = ty::node_id_to_type(tcx, base.id);
+ let base_type = ty::node_id_to_type(self.tcx, base.id);
debug!("effect: unary case, base type is %s",
- ppaux::ty_to_str(tcx, base_type));
+ ppaux::ty_to_str(self.tcx, base_type));
match ty::get(base_type).sty {
ty_ptr(_) => {
- require_unsafe(expr.span,
+ self.require_unsafe(expr.span,
"dereference of unsafe pointer")
}
_ => {}
}
}
expr_inline_asm(*) => {
- require_unsafe(expr.span, "use of inline assembly")
+ self.require_unsafe(expr.span, "use of inline assembly")
}
expr_path(*) => {
- match ty::resolve_expr(tcx, expr) {
+ match ty::resolve_expr(self.tcx, expr) {
ast::def_static(_, true) => {
- require_unsafe(expr.span, "use of mutable static")
+ self.require_unsafe(expr.span, "use of mutable static")
}
_ => {}
}
_ => {}
}
- oldvisit::visit_expr(expr, ((), visitor))
- },
+ visit::walk_expr(self, expr, ());
+ }
+}
- .. *oldvisit::default_visitor()
- });
+pub fn check_crate(tcx: ty::ctxt,
+ method_map: method_map,
+ crate: &ast::Crate) {
+ let context = @mut Context {
+ method_map: method_map,
+ unsafe_context: SafeContext,
+ };
+
+ let mut visitor = EffectCheckVisitor {
+ tcx: tcx,
+ context: context,
+ };
- oldvisit::visit_crate(crate, ((), visitor))
+ visit::walk_crate(&mut visitor, crate, ());
}
use syntax::ast_map;
use syntax::attr;
use syntax::codemap::span;
-use syntax::oldvisit::{default_visitor, mk_vt, vt, Visitor, visit_crate};
-use syntax::oldvisit::{visit_item};
use syntax::parse::token::special_idents;
+use syntax::visit;
+use syntax::visit::Visitor;
use std::util;
struct EntryContext {
non_main_fns: ~[(NodeId, span)],
}
-type EntryVisitor = vt<@mut EntryContext>;
+struct EntryVisitor;
+
+impl Visitor<@mut EntryContext> for EntryVisitor {
+ fn visit_item(&mut self, item:@item, ctxt:@mut EntryContext) {
+ find_item(item, ctxt, self);
+ }
+}
pub fn find_entry_point(session: Session, crate: &Crate, ast_map: ast_map::map) {
non_main_fns: ~[],
};
- visit_crate(crate, (ctxt, mk_vt(@Visitor {
- visit_item: |item, (ctxt, visitor)| find_item(item, ctxt, visitor),
- .. *default_visitor()
- })));
+ let mut v = EntryVisitor;
+
+ visit::walk_crate(&mut v, crate, ctxt);
configure_main(ctxt);
}
-fn find_item(item: @item, ctxt: @mut EntryContext, visitor: EntryVisitor) {
+fn find_item(item: @item, ctxt: @mut EntryContext, visitor: &mut EntryVisitor) {
match item.node {
item_fn(*) => {
if item.ident == special_idents::main {
_ => ()
}
- visit_item(item, (ctxt, visitor));
+ visit::walk_item(visitor, item, ctxt);
}
fn configure_main(ctxt: @mut EntryContext) {
use std::hashmap::HashMap;
use syntax::codemap::span;
-use syntax::{ast, ast_util, oldvisit};
+use syntax::{ast, ast_util};
+use syntax::visit;
+use syntax::visit::Visitor;
+use syntax::ast::{item};
// A vector of defs representing the free variables referred to in a function.
// (The def_upvar will already have been stripped).
pub type freevar_info = @~[@freevar_entry];
pub type freevar_map = @mut HashMap<ast::NodeId, freevar_info>;
-// Searches through part of the AST for all references to locals or
-// upvars in this frame and returns the list of definition IDs thus found.
-// Since we want to be able to collect upvars in some arbitrary piece
-// of the AST, we take a walker function that we invoke with a visitor
-// in order to start the search.
-fn collect_freevars(def_map: resolve::DefMap, blk: &ast::Block)
- -> freevar_info {
- let seen = @mut HashMap::new();
- let refs = @mut ~[];
+struct CollectFreevarsVisitor {
+ seen: @mut HashMap<ast::NodeId, ()>,
+ refs: @mut ~[@freevar_entry],
+ def_map: resolve::DefMap,
+}
+
+impl Visitor<int> for CollectFreevarsVisitor {
- fn ignore_item(_i: @ast::item, (_depth, _v): (int, oldvisit::vt<int>)) { }
+ fn visit_item(&mut self, _:@item, _:int) {
+ // do not recurse into nested items (replaces ignore_item)
+ }
+
+ fn visit_expr(&mut self, expr:@ast::expr, depth:int) {
- let walk_expr: @fn(expr: @ast::expr, (int, oldvisit::vt<int>)) =
- |expr, (depth, v)| {
match expr.node {
ast::expr_fn_block(*) => {
- oldvisit::visit_expr(expr, (depth + 1, v))
+ visit::walk_expr(self, expr, depth + 1)
}
ast::expr_path(*) | ast::expr_self => {
let mut i = 0;
- match def_map.find(&expr.id) {
+ match self.def_map.find(&expr.id) {
None => fail!("path not found"),
Some(&df) => {
let mut def = df;
}
if i == depth { // Made it to end of loop
let dnum = ast_util::def_id_of_def(def).node;
- if !seen.contains_key(&dnum) {
- refs.push(@freevar_entry {
+ if !self.seen.contains_key(&dnum) {
+ self.refs.push(@freevar_entry {
def: def,
span: expr.span,
});
- seen.insert(dnum, ());
+ self.seen.insert(dnum, ());
}
}
}
}
}
- _ => oldvisit::visit_expr(expr, (depth, v))
+ _ => visit::walk_expr(self, expr, depth)
}
- };
+ }
- let v = oldvisit::mk_vt(@oldvisit::Visitor {visit_item: ignore_item,
- visit_expr: walk_expr,
- .. *oldvisit::default_visitor()});
- (v.visit_block)(blk, (1, v));
+
+}
+
+// Searches through part of the AST for all references to locals or
+// upvars in this frame and returns the list of definition IDs thus found.
+// Since we want to be able to collect upvars in some arbitrary piece
+// of the AST, we build a visitor and walk the given block with it to
+// start the search.
+fn collect_freevars(def_map: resolve::DefMap, blk: &ast::Block)
+ -> freevar_info {
+ let seen = @mut HashMap::new();
+ let refs = @mut ~[];
+
+ let mut v = CollectFreevarsVisitor {
+ seen: seen,
+ refs: refs,
+ def_map: def_map,
+ };
+
+ v.visit_block(blk, 1);
return @(*refs).clone();
}
+struct AnnotateFreevarsVisitor {
+ def_map: resolve::DefMap,
+ freevars: freevar_map,
+}
+
+impl Visitor<()> for AnnotateFreevarsVisitor {
+ fn visit_fn(&mut self, fk:&visit::fn_kind, fd:&ast::fn_decl,
+ blk:&ast::Block, s:span, nid:ast::NodeId, _:()) {
+ let vars = collect_freevars(self.def_map, blk);
+ self.freevars.insert(nid, vars);
+ visit::walk_fn(self, fk, fd, blk, s, nid, ());
+ }
+}
+
// Build a map from every function and for-each body to a set of the
// freevars contained in it. The implementation is not particularly
// efficient as it fully recomputes the free variables at every
freevar_map {
let freevars = @mut HashMap::new();
- let walk_fn: @fn(&oldvisit::fn_kind,
- &ast::fn_decl,
- &ast::Block,
- span,
- ast::NodeId) = |_, _, blk, _, nid| {
- let vars = collect_freevars(def_map, blk);
- freevars.insert(nid, vars);
+ let mut visitor = AnnotateFreevarsVisitor {
+ def_map: def_map,
+ freevars: freevars,
};
-
- let visitor =
- oldvisit::mk_simple_visitor(@oldvisit::SimpleVisitor {
- visit_fn: walk_fn,
- .. *oldvisit::default_simple_visitor()});
- oldvisit::visit_crate(crate, ((), visitor));
+ visit::walk_crate(&mut visitor, crate, ());
return freevars;
}
use syntax::codemap::span;
use syntax::opt_vec;
use syntax::print::pprust::expr_to_str;
-use syntax::{oldvisit, ast_util};
+use syntax::{visit,ast_util};
+use syntax::visit::Visitor;
// Kind analysis pass.
//
current_item: NodeId
}
+struct KindAnalysisVisitor;
+
+impl Visitor<Context> for KindAnalysisVisitor {
+
+ fn visit_expr(&mut self, ex:@expr, e:Context) {
+ check_expr(self, ex, e);
+ }
+
+ fn visit_fn(&mut self, fk:&visit::fn_kind, fd:&fn_decl, b:&Block, s:span, n:NodeId, e:Context) {
+ check_fn(self, fk, fd, b, s, n, e);
+ }
+
+ fn visit_ty(&mut self, t:&Ty, e:Context) {
+ check_ty(self, t, e);
+ }
+ fn visit_item(&mut self, i:@item, e:Context) {
+ check_item(self, i, e);
+ }
+ fn visit_block(&mut self, b:&Block, e:Context) {
+ check_block(self, b, e);
+ }
+}
+
pub fn check_crate(tcx: ty::ctxt,
method_map: typeck::method_map,
crate: &Crate) {
method_map: method_map,
current_item: -1
};
- let visit = oldvisit::mk_vt(@oldvisit::Visitor {
- visit_expr: check_expr,
- visit_fn: check_fn,
- visit_ty: check_ty,
- visit_item: check_item,
- visit_block: check_block,
- .. *oldvisit::default_visitor()
- });
- oldvisit::visit_crate(crate, (ctx, visit));
+ let mut visit = KindAnalysisVisitor;
+ visit::walk_crate(&mut visit, crate, ctx);
tcx.sess.abort_if_errors();
}
}
}
-fn check_block(block: &Block,
- (cx, visitor): (Context, oldvisit::vt<Context>)) {
- oldvisit::visit_block(block, (cx, visitor));
+fn check_block(visitor: &mut KindAnalysisVisitor,
+ block: &Block,
+ cx: Context) {
+ visit::walk_block(visitor, block, cx);
}
-fn check_item(item: @item, (cx, visitor): (Context, oldvisit::vt<Context>)) {
+fn check_item(visitor: &mut KindAnalysisVisitor, item: @item, cx: Context) {
// If this is a destructor, check kinds.
if !attr::contains_name(item.attrs, "unsafe_destructor") {
match item.node {
}
let cx = Context { current_item: item.id, ..cx };
- oldvisit::visit_item(item, (cx, visitor));
+ visit::walk_item(visitor, item, cx);
}
// Yields the appropriate function to check the kind of closed over
// Check that the free variables used in a shared/sendable closure conform
// to the copy/move kind bounds. Then recursively check the function body.
fn check_fn(
- fk: &oldvisit::fn_kind,
+ v: &mut KindAnalysisVisitor,
+ fk: &visit::fn_kind,
decl: &fn_decl,
body: &Block,
sp: span,
fn_id: NodeId,
- (cx, v): (Context,
- oldvisit::vt<Context>)) {
+ cx: Context) {
// Check kinds on free variables:
do with_appropriate_checker(cx, fn_id) |chk| {
}
}
- oldvisit::visit_fn(fk, decl, body, sp, fn_id, (cx, v));
+ visit::walk_fn(v, fk, decl, body, sp, fn_id, cx);
}
-pub fn check_expr(e: @expr, (cx, v): (Context, oldvisit::vt<Context>)) {
+pub fn check_expr(v: &mut KindAnalysisVisitor, e: @expr, cx: Context) {
debug!("kind::check_expr(%s)", expr_to_str(e, cx.tcx.sess.intr()));
// Handle any kind bounds on type parameters
}
_ => {}
}
- oldvisit::visit_expr(e, (cx, v));
+ visit::walk_expr(v, e, cx);
}
-fn check_ty(aty: &Ty, (cx, v): (Context, oldvisit::vt<Context>)) {
+fn check_ty(v: &mut KindAnalysisVisitor, aty: &Ty, cx: Context) {
match aty.node {
ty_path(_, _, id) => {
let r = cx.tcx.node_type_substs.find(&id);
}
_ => {}
}
- oldvisit::visit_ty(aty, (cx, v));
+ visit::walk_ty(v, aty, cx);
}
// Calls "any_missing" if any bounds were missing.
use syntax::ast::{Crate, def_id, MetaItem};
use syntax::ast_util::local_def;
use syntax::attr::AttrMetaMethods;
-use syntax::oldvisit::{default_simple_visitor, mk_simple_visitor};
-use syntax::oldvisit::{SimpleVisitor, visit_crate};
+use syntax::ast::{item};
+use syntax::visit;
+use syntax::visit::Visitor;
use std::hashmap::HashMap;
item_refs: HashMap<@str, uint>,
}
+struct LanguageItemVisitor<'self> {
+ this: *mut LanguageItemCollector<'self>,
+}
+
+impl<'self> Visitor<()> for LanguageItemVisitor<'self> {
+
+ fn visit_item(&mut self, item:@item, _:()) {
+
+ for attribute in item.attrs.iter() {
+ unsafe {
+ (*self.this).match_and_collect_meta_item(
+ local_def(item.id),
+ attribute.node.value
+ );
+ }
+ }
+
+ visit::walk_item(self, item, ());
+ }
+}
+
impl<'self> LanguageItemCollector<'self> {
pub fn new<'a>(crate: &'a Crate, session: Session)
-> LanguageItemCollector<'a> {
pub fn collect_local_language_items(&mut self) {
let this: *mut LanguageItemCollector = &mut *self;
- visit_crate(self.crate, ((), mk_simple_visitor(@SimpleVisitor {
- visit_item: |item| {
- for attribute in item.attrs.iter() {
- unsafe {
- (*this).match_and_collect_meta_item(
- local_def(item.id),
- attribute.node.value
- );
- }
- }
- },
- .. *default_simple_visitor()
- })));
+ let mut v = LanguageItemVisitor { this: this };
+ visit::walk_crate(&mut v, self.crate, ());
}
pub fn collect_external_language_items(&mut self) {
use syntax::codemap::span;
use syntax::codemap;
use syntax::parse::token;
-use syntax::{ast, oldvisit, ast_util, visit};
+use syntax::{ast, ast_util, visit};
+use syntax::visit::Visitor;
/**
* A 'lint' check is a kind of miscellaneous constraint that a user _might_
#[deriving(Clone, Eq)]
pub enum lint {
ctypes,
+ cstack,
unused_imports,
unnecessary_qualification,
while_true,
default: warn
}),
+ ("cstack",
+ LintSpec {
+ lint: cstack,
+ desc: "only invoke foreign functions from fixedstacksegment fns",
+ default: deny
+ }),
+
("unused_imports",
LintSpec {
lint: unused_imports,
return map;
}
+trait OuterLint {
+ fn process_item(@mut self, i:@ast::item, e:@mut Context);
+ fn process_fn(@mut self, fk:&visit::fn_kind, fd:&ast::fn_decl,
+ b:&ast::Block, s:span, n:ast::NodeId, e:@mut Context);
+
+ // Returned inner variant will not proceed past subitems.
+ // Supports decomposition of simple lints into subitem-traversing
+ // outer lint visitor and subitem-stopping inner lint visitor.
+ fn inner_variant(@mut self) -> @mut InnerLint;
+}
+
+trait InnerLint {
+ fn descend_item(@mut self, i:&ast::item, e:@mut Context);
+ fn descend_crate(@mut self, crate: &ast::Crate, env: @mut Context);
+ fn descend_fn(@mut self,
+ function_kind: &visit::fn_kind,
+ function_declaration: &ast::fn_decl,
+ function_body: &ast::Block,
+ sp: span,
+ id: ast::NodeId,
+ env: @mut Context);
+}
+
+impl<V:Visitor<@mut Context>> InnerLint for V {
+ fn descend_item(@mut self, i:&ast::item, e:@mut Context) {
+ visit::walk_item(self, i, e);
+ }
+ fn descend_crate(@mut self, crate: &ast::Crate, env: @mut Context) {
+ visit::walk_crate(self, crate, env);
+ }
+ fn descend_fn(@mut self, fk: &visit::fn_kind, fd: &ast::fn_decl, fb: &ast::Block,
+ sp: span, id: ast::NodeId, env: @mut Context) {
+ visit::walk_fn(self, fk, fd, fb, sp, id, env);
+ }
+}
+
enum AnyVisitor {
// This is a pair so every visitor can visit every node. When a lint pass
// is registered, another visitor is created which stops at all items
// first element. This means that when visiting a node, the original
// recursive call can use the original visitor's method, although the
// recursing visitor supplied to the method is the item stopping visitor.
- OldVisitor(oldvisit::vt<@mut Context>, oldvisit::vt<@mut Context>),
+ OldVisitor(@mut OuterLint, @mut InnerLint),
NewVisitor(@mut visit::Visitor<()>),
}
+type VCObj = @mut Visitor<@mut Context>;
+
struct Context {
// All known lint modes (string versions)
dict: @LintDict,
}
}
- fn add_oldvisit_lint(&mut self, v: oldvisit::vt<@mut Context>) {
- self.visitors.push(OldVisitor(v, item_stopping_visitor(v)));
+ fn add_oldvisit_lint(&mut self, v: @mut OuterLint) {
+ self.visitors.push(OldVisitor(v, v.inner_variant()));
}
fn add_lint(&mut self, v: @mut visit::Visitor<()>) {
for visitor in self.visitors.iter() {
match *visitor {
OldVisitor(orig, stopping) => {
- (orig.visit_item)(it, (self, stopping));
+ orig.process_item(it, self);
+ stopping.descend_item(it, self);
}
NewVisitor(new_visitor) => {
let new_visitor = new_visitor;
for visitor in self.visitors.iter() {
match *visitor {
OldVisitor(_, stopping) => {
- oldvisit::visit_crate(c, (self, stopping))
+ stopping.descend_crate(c, self)
}
NewVisitor(new_visitor) => {
let mut new_visitor = new_visitor;
for visitor in self.visitors.iter() {
match *visitor {
OldVisitor(orig, stopping) => {
- let fk = oldvisit::fk_method(m.ident,
- &m.generics,
- m);
- (orig.visit_fn)(&fk,
- &m.decl,
- &m.body,
- m.span,
- m.id,
- (self, stopping));
+ let fk = visit::fk_method(m.ident, &m.generics, m);
+ orig.process_fn(&fk, &m.decl, &m.body, m.span, m.id, self);
+ stopping.descend_fn(&fk, &m.decl, &m.body, m.span, m.id, self);
}
NewVisitor(new_visitor) => {
let fk = visit::fk_method(m.ident,
true
}
-// Take a visitor, and modify it so that it will not proceed past subitems.
-// This is used to make the simple visitors used for the lint passes
-// not traverse into subitems, since that is handled by the outer
-// lint visitor.
-fn item_stopping_visitor<E>(outer: oldvisit::vt<E>) -> oldvisit::vt<E> {
- oldvisit::mk_vt(@oldvisit::Visitor {
- visit_item: |_i, (_e, _v)| { },
- visit_fn: |fk, fd, b, s, id, (e, v)| {
+trait SubitemStoppableVisitor : Visitor<@mut Context> {
+ fn is_running_on_items(&mut self) -> bool;
+
+ fn visit_item_action(&mut self, _i:@ast::item, _e:@mut Context) {
+        // fill in with a particular action (without recursion) if desired
+ }
+
+ fn visit_fn_action(&mut self, _fk:&visit::fn_kind, _fd:&ast::fn_decl,
+ _b:&ast::Block, _s:span, _n:ast::NodeId, _e:@mut Context) {
+        // fill in with a particular action (without recursion) if desired
+ }
+
+ // The two OVERRIDE methods:
+ //
+ // OVERRIDE_visit_item
+ // OVERRIDE_visit_fn
+ //
+ // *must* be included as initial reimplementations of the standard
+ // default behavior of visit_item and visit_fn for every impl of
+ // Visitor, in order to recreate the effect of having two variant
+ // Outer/Inner behaviors of lint visitors. (See earlier versions
+ // of this module to see what the original encoding was of this
+ // emulated behavior.)
+
+ fn OVERRIDE_visit_item(&mut self, i:@ast::item, e:@mut Context) {
+ if self.is_running_on_items() {
+ self.visit_item_action(i, e);
+ visit::walk_item(self, i, e);
+ }
+ }
+
+ fn OVERRIDE_visit_fn(&mut self, fk:&visit::fn_kind, fd:&ast::fn_decl,
+ b:&ast::Block, s:span, n:ast::NodeId, e:@mut Context) {
+ if self.is_running_on_items() {
+ self.visit_fn_action(fk, fd, b, s, n, e);
+ visit::walk_fn(self, fk, fd, b, s, n, e);
+ } else {
match *fk {
- oldvisit::fk_method(*) => {}
- _ => (outer.visit_fn)(fk, fd, b, s, id, (e, v))
+ visit::fk_method(*) => {}
+ _ => {
+ self.visit_fn_action(fk, fd, b, s, n, e);
+ visit::walk_fn(self, fk, fd, b, s, n, e);
+ }
}
- },
- .. **outer})
+ }
+ }
}
-fn lint_while_true() -> oldvisit::vt<@mut Context> {
- oldvisit::mk_vt(@oldvisit::Visitor {
- visit_expr: |e,
- (cx, vt): (@mut Context, oldvisit::vt<@mut Context>)| {
+struct WhileTrueLintVisitor { stopping_on_items: bool }
+
+impl SubitemStoppableVisitor for WhileTrueLintVisitor {
+ fn is_running_on_items(&mut self) -> bool { !self.stopping_on_items }
+}
+
+impl Visitor<@mut Context> for WhileTrueLintVisitor {
+ fn visit_item(&mut self, i:@ast::item, e:@mut Context) {
+ self.OVERRIDE_visit_item(i, e);
+ }
+
+ fn visit_fn(&mut self, fk:&visit::fn_kind, fd:&ast::fn_decl,
+ b:&ast::Block, s:span, n:ast::NodeId, e:@mut Context) {
+ self.OVERRIDE_visit_fn(fk, fd, b, s, n, e);
+ }
+
+ fn visit_expr(&mut self, e:@ast::expr, cx:@mut Context) {
match e.node {
ast::expr_while(cond, _) => {
match cond.node {
}
_ => ()
}
- oldvisit::visit_expr(e, (cx, vt));
- },
- .. *oldvisit::default_visitor()
- })
+ visit::walk_expr(self, e, cx);
+ }
+}
+
+macro_rules! outer_lint_boilerplate_impl(
+ ($Visitor:ident) =>
+ (
+ impl OuterLint for $Visitor {
+ fn process_item(@mut self, i:@ast::item, e:@mut Context) {
+ self.visit_item_action(i, e);
+ }
+ fn process_fn(@mut self, fk:&visit::fn_kind, fd:&ast::fn_decl,
+ b:&ast::Block, s:span, n:ast::NodeId, e:@mut Context) {
+ self.visit_fn_action(fk, fd, b, s, n, e);
+ }
+ fn inner_variant(@mut self) -> @mut InnerLint {
+ @mut $Visitor { stopping_on_items: true } as @mut InnerLint
+ }
+ }
+ ))
+
+outer_lint_boilerplate_impl!(WhileTrueLintVisitor)
+
+fn lint_while_true() -> @mut OuterLint {
+ @mut WhileTrueLintVisitor{ stopping_on_items: false } as @mut OuterLint
}
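The `outer_lint_boilerplate_impl!` macro above stamps out an identical `OuterLint` impl for each visitor struct, all of which share a `stopping_on_items` field. The technique (a macro generating the same trait impl for several types) can be illustrated self-containedly in modern Rust; names here are illustrative, not the compiler's:

```rust
// Sketch: a macro generating one identical trait impl per struct, where
// every struct carries a `stopping_on_items` field the impl can rely on.
trait Lint {
    fn name(&self) -> String;
    fn inner_variant(&self) -> bool;
}

macro_rules! lint_boilerplate_impl {
    ($T:ident) => {
        impl Lint for $T {
            fn name(&self) -> String {
                // stringify! captures the type name at expansion time.
                format!("{} (stopping: {})", stringify!($T), self.stopping_on_items)
            }
            // The inner variant is always item-stopping.
            fn inner_variant(&self) -> bool { true }
        }
    };
}

struct WhileTrue { stopping_on_items: bool }
struct TypeLimits { stopping_on_items: bool }

lint_boilerplate_impl!(WhileTrue);
lint_boilerplate_impl!(TypeLimits);

fn main() {
    let lints: Vec<Box<dyn Lint>> = vec![
        Box::new(WhileTrue { stopping_on_items: false }),
        Box::new(TypeLimits { stopping_on_items: true }),
    ];
    for l in &lints {
        println!("{} inner={}", l.name(), l.inner_variant());
    }
}
```

Each additional lint pass then costs one struct definition and one macro invocation, which is the trade the patch makes to avoid hand-writing the same impl a dozen times.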
-fn lint_type_limits() -> oldvisit::vt<@mut Context> {
- fn is_valid<T:cmp::Ord>(binop: ast::binop, v: T,
+struct TypeLimitsLintVisitor { stopping_on_items: bool }
+
+impl SubitemStoppableVisitor for TypeLimitsLintVisitor {
+ fn is_running_on_items(&mut self) -> bool { !self.stopping_on_items }
+}
+
+impl TypeLimitsLintVisitor {
+ fn is_valid<T:cmp::Ord>(&mut self, binop: ast::binop, v: T,
min: T, max: T) -> bool {
match binop {
ast::lt => v <= max,
}
}
- fn rev_binop(binop: ast::binop) -> ast::binop {
+ fn rev_binop(&mut self, binop: ast::binop) -> ast::binop {
match binop {
ast::lt => ast::gt,
ast::le => ast::ge,
// for int & uint, be conservative with the warnings, so that the
// warnings are consistent between 32- and 64-bit platforms
- fn int_ty_range(int_ty: ast::int_ty) -> (i64, i64) {
+ fn int_ty_range(&mut self, int_ty: ast::int_ty) -> (i64, i64) {
match int_ty {
ast::ty_i => (i64::min_value, i64::max_value),
ast::ty_char => (u32::min_value as i64, u32::max_value as i64),
}
}
- fn uint_ty_range(uint_ty: ast::uint_ty) -> (u64, u64) {
+ fn uint_ty_range(&mut self, uint_ty: ast::uint_ty) -> (u64, u64) {
match uint_ty {
ast::ty_u => (u64::min_value, u64::max_value),
ast::ty_u8 => (u8::min_value as u64, u8::max_value as u64),
}
}
- fn check_limits(cx: &Context,
+ fn check_limits(&mut self,
+ cx: &Context,
binop: ast::binop,
l: @ast::expr,
r: @ast::expr)
// Normalize the binop so that the literal is always on the RHS in
// the comparison
let norm_binop = if swap {
- rev_binop(binop)
+ self.rev_binop(binop)
} else {
binop
};
match ty::get(ty::expr_ty(cx.tcx, expr)).sty {
ty::ty_int(int_ty) => {
- let (min, max) = int_ty_range(int_ty);
+ let (min, max) = self.int_ty_range(int_ty);
let lit_val: i64 = match lit.node {
ast::expr_lit(@li) => match li.node {
ast::lit_int(v, _) => v,
},
_ => fail!()
};
- is_valid(norm_binop, lit_val, min, max)
+ self.is_valid(norm_binop, lit_val, min, max)
}
ty::ty_uint(uint_ty) => {
- let (min, max): (u64, u64) = uint_ty_range(uint_ty);
+ let (min, max): (u64, u64) = self.uint_ty_range(uint_ty);
let lit_val: u64 = match lit.node {
ast::expr_lit(@li) => match li.node {
ast::lit_int(v, _) => v as u64,
},
_ => fail!()
};
- is_valid(norm_binop, lit_val, min, max)
+ self.is_valid(norm_binop, lit_val, min, max)
}
_ => true
}
}
- fn is_comparison(binop: ast::binop) -> bool {
+ fn is_comparison(&mut self, binop: ast::binop) -> bool {
match binop {
ast::eq | ast::lt | ast::le |
ast::ne | ast::ge | ast::gt => true,
_ => false
}
}
+}
+
+impl Visitor<@mut Context> for TypeLimitsLintVisitor {
+ fn visit_item(&mut self, i:@ast::item, e:@mut Context) {
+ self.OVERRIDE_visit_item(i, e);
+ }
+
+ fn visit_fn(&mut self, fk:&visit::fn_kind, fd:&ast::fn_decl,
+ b:&ast::Block, s:span, n:ast::NodeId, e:@mut Context) {
+ self.OVERRIDE_visit_fn(fk, fd, b, s, n, e);
+ }
+
+ fn visit_expr(&mut self, e:@ast::expr, cx:@mut Context) {
- oldvisit::mk_vt(@oldvisit::Visitor {
- visit_expr: |e,
- (cx, vt): (@mut Context, oldvisit::vt<@mut Context>)| {
match e.node {
ast::expr_binary(_, ref binop, l, r) => {
- if is_comparison(*binop)
- && !check_limits(cx, *binop, l, r) {
+ if self.is_comparison(*binop)
+ && !self.check_limits(cx, *binop, l, r) {
cx.span_lint(type_limits, e.span,
"comparison is useless due to type limits");
}
}
_ => ()
}
- oldvisit::visit_expr(e, (cx, vt));
- },
+ visit::walk_expr(self, e, cx);
+ }
+}
+
+outer_lint_boilerplate_impl!(TypeLimitsLintVisitor)
- .. *oldvisit::default_visitor()
- })
+fn lint_type_limits() -> @mut OuterLint {
+ @mut TypeLimitsLintVisitor{ stopping_on_items: false } as @mut OuterLint
}
fn check_item_ctypes(cx: &Context, it: &ast::item) {
}
}
-fn lint_heap() -> oldvisit::vt<@mut Context> {
- oldvisit::mk_vt(@oldvisit::Visitor {
- visit_expr: |e,
- (cx, vt): (@mut Context, oldvisit::vt<@mut Context>)| {
+struct HeapLintVisitor { stopping_on_items: bool }
+
+impl SubitemStoppableVisitor for HeapLintVisitor {
+ fn is_running_on_items(&mut self) -> bool { !self.stopping_on_items }
+}
+
+impl Visitor<@mut Context> for HeapLintVisitor {
+ fn visit_item(&mut self, i:@ast::item, e:@mut Context) {
+ self.OVERRIDE_visit_item(i, e);
+ }
+
+ fn visit_fn(&mut self, fk:&visit::fn_kind, fd:&ast::fn_decl,
+ b:&ast::Block, s:span, n:ast::NodeId, e:@mut Context) {
+ self.OVERRIDE_visit_fn(fk, fd, b, s, n, e);
+ }
+
+ fn visit_expr(&mut self, e:@ast::expr, cx:@mut Context) {
let ty = ty::expr_ty(cx.tcx, e);
check_type(cx, e.span, ty);
- oldvisit::visit_expr(e, (cx, vt));
- },
- .. *oldvisit::default_visitor()
- })
+ visit::walk_expr(self, e, cx);
+ }
}
-fn lint_path_statement() -> oldvisit::vt<@mut Context> {
- oldvisit::mk_vt(@oldvisit::Visitor {
- visit_stmt: |s,
- (cx, vt): (@mut Context, oldvisit::vt<@mut Context>)| {
+outer_lint_boilerplate_impl!(HeapLintVisitor)
+
+fn lint_heap() -> @mut OuterLint {
+ @mut HeapLintVisitor { stopping_on_items: false } as @mut OuterLint
+}
+
+struct PathStatementLintVisitor {
+ stopping_on_items: bool
+}
+
+impl SubitemStoppableVisitor for PathStatementLintVisitor {
+ fn is_running_on_items(&mut self) -> bool { !self.stopping_on_items }
+}
+
+impl Visitor<@mut Context> for PathStatementLintVisitor {
+ fn visit_item(&mut self, i:@ast::item, e:@mut Context) {
+ self.OVERRIDE_visit_item(i, e);
+ }
+
+ fn visit_fn(&mut self, fk:&visit::fn_kind, fd:&ast::fn_decl,
+ b:&ast::Block, s:span, n:ast::NodeId, e:@mut Context) {
+ self.OVERRIDE_visit_fn(fk, fd, b, s, n, e);
+ }
+
+ fn visit_stmt(&mut self, s:@ast::stmt, cx:@mut Context) {
match s.node {
ast::stmt_semi(
@ast::expr { node: ast::expr_path(_), _ },
}
_ => ()
}
- oldvisit::visit_stmt(s, (cx, vt));
- },
- .. *oldvisit::default_visitor()
- })
+ visit::walk_stmt(self, s, cx);
+ }
+}
+
+outer_lint_boilerplate_impl!(PathStatementLintVisitor)
+
+fn lint_path_statement() -> @mut OuterLint {
+ @mut PathStatementLintVisitor{ stopping_on_items: false } as @mut OuterLint
}
fn check_item_non_camel_case_types(cx: &Context, it: &ast::item) {
}
}
-fn lint_unused_unsafe() -> oldvisit::vt<@mut Context> {
- oldvisit::mk_vt(@oldvisit::Visitor {
- visit_expr: |e,
- (cx, vt): (@mut Context, oldvisit::vt<@mut Context>)| {
+struct UnusedUnsafeLintVisitor { stopping_on_items: bool }
+
+impl SubitemStoppableVisitor for UnusedUnsafeLintVisitor {
+ fn is_running_on_items(&mut self) -> bool { !self.stopping_on_items }
+}
+
+impl Visitor<@mut Context> for UnusedUnsafeLintVisitor {
+ fn visit_item(&mut self, i:@ast::item, e:@mut Context) {
+ self.OVERRIDE_visit_item(i, e);
+ }
+
+ fn visit_fn(&mut self, fk:&visit::fn_kind, fd:&ast::fn_decl,
+ b:&ast::Block, s:span, n:ast::NodeId, e:@mut Context) {
+ self.OVERRIDE_visit_fn(fk, fd, b, s, n, e);
+ }
+
+ fn visit_expr(&mut self, e:@ast::expr, cx:@mut Context) {
match e.node {
ast::expr_block(ref blk) if blk.rules == ast::UnsafeBlock => {
if !cx.tcx.used_unsafe.contains(&blk.id) {
}
_ => ()
}
- oldvisit::visit_expr(e, (cx, vt));
- },
- .. *oldvisit::default_visitor()
- })
+ visit::walk_expr(self, e, cx);
+ }
+}
+
+outer_lint_boilerplate_impl!(UnusedUnsafeLintVisitor)
+
+fn lint_unused_unsafe() -> @mut OuterLint {
+ @mut UnusedUnsafeLintVisitor{ stopping_on_items: false } as @mut OuterLint
}
-fn lint_unused_mut() -> oldvisit::vt<@mut Context> {
- fn check_pat(cx: &Context, p: @ast::pat) {
+struct UnusedMutLintVisitor { stopping_on_items: bool }
+
+impl UnusedMutLintVisitor {
+ fn check_pat(&mut self, cx: &Context, p: @ast::pat) {
let mut used = false;
let mut bindings = 0;
do pat_util::pat_bindings(cx.tcx.def_map, p) |_, id, _, _| {
}
}
- fn visit_fn_decl(cx: &Context, fd: &ast::fn_decl) {
+ fn visit_fn_decl(&mut self, cx: &Context, fd: &ast::fn_decl) {
for arg in fd.inputs.iter() {
if arg.is_mutbl {
- check_pat(cx, arg.pat);
+ self.check_pat(cx, arg.pat);
}
}
}
+}
- oldvisit::mk_vt(@oldvisit::Visitor {
- visit_local: |l,
- (cx, vt): (@mut Context, oldvisit::vt<@mut Context>)| {
+impl SubitemStoppableVisitor for UnusedMutLintVisitor {
+ fn is_running_on_items(&mut self) -> bool { !self.stopping_on_items }
+
+ fn visit_fn_action(&mut self, _a:&visit::fn_kind, fd:&ast::fn_decl,
+ _b:&ast::Block, _c:span, _d:ast::NodeId, cx:@mut Context) {
+ self.visit_fn_decl(cx, fd);
+ }
+}
+
+impl Visitor<@mut Context> for UnusedMutLintVisitor {
+ fn visit_item(&mut self, i:@ast::item, e:@mut Context) {
+ self.OVERRIDE_visit_item(i, e);
+ }
+
+ fn visit_fn(&mut self, fk:&visit::fn_kind, fd:&ast::fn_decl,
+ b:&ast::Block, s:span, n:ast::NodeId, e:@mut Context) {
+ self.OVERRIDE_visit_fn(fk, fd, b, s, n, e);
+ }
+
+ fn visit_local(&mut self, l:@ast::Local, cx:@mut Context) {
if l.is_mutbl {
- check_pat(cx, l.pat);
+ self.check_pat(cx, l.pat);
}
- oldvisit::visit_local(l, (cx, vt));
- },
- visit_fn: |a, fd, b, c, d, (cx, vt)| {
- visit_fn_decl(cx, fd);
- oldvisit::visit_fn(a, fd, b, c, d, (cx, vt));
- },
- visit_ty_method: |tm, (cx, vt)| {
- visit_fn_decl(cx, &tm.decl);
- oldvisit::visit_ty_method(tm, (cx, vt));
- },
- visit_trait_method: |tm, (cx, vt)| {
+ visit::walk_local(self, l, cx);
+ }
+
+ fn visit_ty_method(&mut self, tm:&ast::TypeMethod, cx:@mut Context) {
+ self.visit_fn_decl(cx, &tm.decl);
+ visit::walk_ty_method(self, tm, cx);
+ }
+
+ fn visit_trait_method(&mut self, tm:&ast::trait_method, cx:@mut Context) {
match *tm {
- ast::required(ref tm) => visit_fn_decl(cx, &tm.decl),
- ast::provided(m) => visit_fn_decl(cx, &m.decl)
+ ast::required(ref tm) => self.visit_fn_decl(cx, &tm.decl),
+ ast::provided(m) => self.visit_fn_decl(cx, &m.decl)
}
- oldvisit::visit_trait_method(tm, (cx, vt));
- },
- .. *oldvisit::default_visitor()
- })
+ visit::walk_trait_method(self, tm, cx);
+ }
+}
+
+outer_lint_boilerplate_impl!(UnusedMutLintVisitor)
+
+fn lint_unused_mut() -> @mut OuterLint {
+ @mut UnusedMutLintVisitor{ stopping_on_items: false } as @mut OuterLint
}
fn lint_session(cx: @mut Context) -> @mut visit::Visitor<()> {
}, false)
}
-fn lint_unnecessary_allocations() -> oldvisit::vt<@mut Context> {
+struct UnnecessaryAllocationLintVisitor { stopping_on_items: bool }
+
+impl SubitemStoppableVisitor for UnnecessaryAllocationLintVisitor {
+ fn is_running_on_items(&mut self) -> bool { !self.stopping_on_items }
+}
+
+impl Visitor<@mut Context> for UnnecessaryAllocationLintVisitor {
+ fn visit_item(&mut self, i:@ast::item, e:@mut Context) {
+ self.OVERRIDE_visit_item(i, e);
+ }
+
+ fn visit_fn(&mut self, fk:&visit::fn_kind, fd:&ast::fn_decl,
+ b:&ast::Block, s:span, n:ast::NodeId, e:@mut Context) {
+ self.OVERRIDE_visit_fn(fk, fd, b, s, n, e);
+ }
+
+ fn visit_expr(&mut self, e:@ast::expr, cx:@mut Context) {
+ self.check(cx, e);
+ visit::walk_expr(self, e, cx);
+ }
+}
+
+impl UnnecessaryAllocationLintVisitor {
// Warn if string and vector literals with sigils are immediately borrowed.
// Those can have the sigil removed.
- fn check(cx: &Context, e: &ast::expr) {
+ fn check(&mut self, cx: &Context, e: &ast::expr) {
match e.node {
ast::expr_vstore(e2, ast::expr_vstore_uniq) |
ast::expr_vstore(e2, ast::expr_vstore_box) => {
_ => ()
}
}
+}
+
+outer_lint_boilerplate_impl!(UnnecessaryAllocationLintVisitor)
- oldvisit::mk_vt(@oldvisit::Visitor {
- visit_expr: |e,
- (cx, vt): (@mut Context, oldvisit::vt<@mut Context>)| {
- check(cx, e);
- oldvisit::visit_expr(e, (cx, vt));
- },
- .. *oldvisit::default_visitor()
- })
+fn lint_unnecessary_allocations() -> @mut OuterLint {
+ @mut UnnecessaryAllocationLintVisitor{ stopping_on_items: false } as @mut OuterLint
}
-fn lint_missing_doc() -> oldvisit::vt<@mut Context> {
- fn check_attrs(cx: @mut Context,
+struct MissingDocLintVisitor { stopping_on_items: bool }
+
+impl MissingDocLintVisitor {
+ fn check_attrs(&mut self,
+ cx: @mut Context,
attrs: &[ast::Attribute],
sp: span,
msg: &str) {
// otherwise, warn!
cx.span_lint(missing_doc, sp, msg);
}
+}
- oldvisit::mk_vt(@oldvisit::Visitor {
- visit_ty_method: |m, (cx, vt)| {
+impl Visitor<@mut Context> for MissingDocLintVisitor {
+ fn visit_item(&mut self, i:@ast::item, e:@mut Context) {
+ self.OVERRIDE_visit_item(i, e);
+ }
+
+ fn visit_fn(&mut self, fk:&visit::fn_kind, fd:&ast::fn_decl,
+ b:&ast::Block, s:span, n:ast::NodeId, e:@mut Context) {
+ self.OVERRIDE_visit_fn(fk, fd, b, s, n, e);
+ }
+
+ fn visit_ty_method(&mut self, m:&ast::TypeMethod, cx:@mut Context) {
// All ty_method objects are linted about because they're part of a
// trait (no visibility)
- check_attrs(cx, m.attrs, m.span,
+ self.check_attrs(cx, m.attrs, m.span,
"missing documentation for a method");
- oldvisit::visit_ty_method(m, (cx, vt));
- },
+ visit::walk_ty_method(self, m, cx);
+ }
+}
+
+impl SubitemStoppableVisitor for MissingDocLintVisitor {
+ fn is_running_on_items(&mut self) -> bool { !self.stopping_on_items }
+
+ fn visit_fn_action(&mut self, fk:&visit::fn_kind, _d:&ast::fn_decl,
+ _b:&ast::Block, sp:span, _id:ast::NodeId, cx:@mut Context) {
- visit_fn: |fk, d, b, sp, id, (cx, vt)| {
// Only warn about explicitly public methods. Soon implicit
// public-ness will hopefully be going away.
match *fk {
- oldvisit::fk_method(_, _, m) if m.vis == ast::public => {
+ visit::fk_method(_, _, m) if m.vis == ast::public => {
// If we're in a trait implementation, no need to duplicate
// documentation
if !cx.in_trait_impl {
- check_attrs(cx, m.attrs, sp,
+ self.check_attrs(cx, m.attrs, sp,
"missing documentation for a method");
}
}
_ => {}
}
- oldvisit::visit_fn(fk, d, b, sp, id, (cx, vt));
- },
+ }
+
+ fn visit_item_action(&mut self, it:@ast::item, cx:@mut Context) {
- visit_item: |it, (cx, vt)| {
match it.node {
// Go ahead and match the fields here instead of using
// visit_struct_field while we have access to the enclosing
// struct's visibility
ast::item_struct(sdef, _) if it.vis == ast::public => {
- check_attrs(cx, it.attrs, it.span,
+ self.check_attrs(cx, it.attrs, it.span,
"missing documentation for a struct");
for field in sdef.fields.iter() {
match field.node.kind {
ast::named_field(_, vis) if vis != ast::private => {
- check_attrs(cx, field.node.attrs, field.span,
+ self.check_attrs(cx, field.node.attrs, field.span,
"missing documentation for a field");
}
ast::unnamed_field | ast::named_field(*) => {}
}
ast::item_trait(*) if it.vis == ast::public => {
- check_attrs(cx, it.attrs, it.span,
+ self.check_attrs(cx, it.attrs, it.span,
"missing documentation for a trait");
}
ast::item_fn(*) if it.vis == ast::public => {
- check_attrs(cx, it.attrs, it.span,
+ self.check_attrs(cx, it.attrs, it.span,
"missing documentation for a function");
}
_ => {}
- };
+ }
+ }
+}
+
+outer_lint_boilerplate_impl!(MissingDocLintVisitor)
+
+fn lint_missing_doc() -> @mut OuterLint {
+ @mut MissingDocLintVisitor { stopping_on_items: false } as @mut OuterLint
+}
+
+struct LintCheckVisitor;
+
+impl Visitor<@mut Context> for LintCheckVisitor {
+
+ fn visit_item(&mut self, it:@ast::item, cx: @mut Context) {
+ do cx.with_lint_attrs(it.attrs) {
+ match it.node {
+ ast::item_impl(_, Some(*), _, _) => {
+ cx.in_trait_impl = true;
+ }
+ _ => {}
+ }
+ check_item_ctypes(cx, it);
+ check_item_non_camel_case_types(cx, it);
+ check_item_non_uppercase_statics(cx, it);
+ check_item_heap(cx, it);
+
+ cx.process(Item(it));
+ visit::walk_item(self, it, cx);
+ cx.in_trait_impl = false;
+ }
+ }
- oldvisit::visit_item(it, (cx, vt));
- },
+ fn visit_fn(&mut self, fk:&visit::fn_kind, decl:&ast::fn_decl,
+ body:&ast::Block, span:span, id:ast::NodeId, cx:@mut Context) {
- .. *oldvisit::default_visitor()
- })
+ match *fk {
+ visit::fk_method(_, _, m) => {
+ do cx.with_lint_attrs(m.attrs) {
+ cx.process(Method(m));
+ visit::walk_fn(self,
+ fk,
+ decl,
+ body,
+ span,
+ id,
+ cx);
+ }
+ }
+ _ => {
+ visit::walk_fn(self,
+ fk,
+ decl,
+ body,
+ span,
+ id,
+ cx);
+ }
+ }
+ }
}
pub fn check_crate(tcx: ty::ctxt, crate: @ast::Crate) {
do cx.with_lint_attrs(crate.attrs) {
cx.process(Crate(crate));
- oldvisit::visit_crate(crate, (cx, oldvisit::mk_vt(@oldvisit::Visitor {
- visit_item: |it,
- (cx, vt):
- (@mut Context, oldvisit::vt<@mut Context>)| {
- do cx.with_lint_attrs(it.attrs) {
- match it.node {
- ast::item_impl(_, Some(*), _, _) => {
- cx.in_trait_impl = true;
- }
- _ => {}
- }
- check_item_ctypes(cx, it);
- check_item_non_camel_case_types(cx, it);
- check_item_non_uppercase_statics(cx, it);
- check_item_heap(cx, it);
+ let mut visitor = LintCheckVisitor;
- cx.process(Item(it));
- oldvisit::visit_item(it, (cx, vt));
- cx.in_trait_impl = false;
- }
- },
- visit_fn: |fk, decl, body, span, id, (cx, vt)| {
- match *fk {
- oldvisit::fk_method(_, _, m) => {
- do cx.with_lint_attrs(m.attrs) {
- cx.process(Method(m));
- oldvisit::visit_fn(fk,
- decl,
- body,
- span,
- id,
- (cx, vt));
- }
- }
- _ => {
- oldvisit::visit_fn(fk,
- decl,
- body,
- span,
- id,
- (cx, vt));
- }
- }
- },
- .. *oldvisit::default_visitor()
- })));
+ visit::walk_crate(&mut visitor, crate, cx);
}
// If we missed any lints added to the session, then there's a bug somewhere
use syntax::codemap::span;
use syntax::parse::token::special_idents;
use syntax::print::pprust::{expr_to_str, block_to_str};
-use syntax::oldvisit::{fk_anon, fk_fn_block, fk_item_fn, fk_method};
-use syntax::oldvisit::{vt};
-use syntax::{oldvisit, ast_util};
+use syntax::{visit, ast_util};
+use syntax::visit::{Visitor,fn_kind};
#[deriving(Eq)]
struct Variable(uint);
}
}
+struct LivenessVisitor;
+
+impl Visitor<@mut IrMaps> for LivenessVisitor {
+ fn visit_fn(&mut self, fk:&fn_kind, fd:&fn_decl, b:&Block, s:span, n:NodeId, e:@mut IrMaps) {
+ visit_fn(self, fk, fd, b, s, n, e);
+ }
+ fn visit_local(&mut self, l:@Local, e:@mut IrMaps) { visit_local(self, l, e); }
+ fn visit_expr(&mut self, ex:@expr, e:@mut IrMaps) { visit_expr(self, ex, e); }
+ fn visit_arm(&mut self, a:&arm, e:@mut IrMaps) { visit_arm(self, a, e); }
+}
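`LivenessVisitor`'s methods above are one-line forwarders to the pre-existing free functions (`visit_fn`, `visit_local`, and so on), which keeps the original function bodies untouched during the migration. A minimal modern-Rust sketch of that delegation shape, with toy types standing in for the real `IrMaps` environment:

```rust
// Sketch: trait methods delegating to pre-existing free functions.
// The visitor is passed back into the free function so it can continue
// the walk. Types here are toys; the real environment is elided.
struct LivenessVisitor;

// The pre-existing free function, now taking the visitor explicitly.
fn visit_local(_v: &mut LivenessVisitor, local: u32, seen: &mut Vec<u32>) {
    seen.push(local);
}

trait Visitor {
    fn visit_local(&mut self, local: u32, seen: &mut Vec<u32>);
}

impl Visitor for LivenessVisitor {
    fn visit_local(&mut self, local: u32, seen: &mut Vec<u32>) {
        visit_local(self, local, seen); // one-line forwarder
    }
}

fn main() {
    let mut seen = Vec::new();
    let mut v = LivenessVisitor;
    v.visit_local(7, &mut seen);
    println!("{:?}", seen);
}
```

This keeps the diff mechanical: only the signatures change (the `(this, vt)` tuple environment becomes an explicit visitor parameter plus context), not the logic inside.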
+
pub fn check_crate(tcx: ty::ctxt,
method_map: typeck::method_map,
capture_map: moves::CaptureMap,
crate: &Crate) {
- let visitor = oldvisit::mk_vt(@oldvisit::Visitor {
- visit_fn: visit_fn,
- visit_local: visit_local,
- visit_expr: visit_expr,
- visit_arm: visit_arm,
- .. *oldvisit::default_visitor()
- });
+ let mut visitor = LivenessVisitor;
let initial_maps = @mut IrMaps(tcx,
method_map,
capture_map);
- oldvisit::visit_crate(crate, (initial_maps, visitor));
+ visit::walk_crate(&mut visitor, crate, initial_maps);
tcx.sess.abort_if_errors();
}
}
}
-fn visit_fn(fk: &oldvisit::fn_kind,
+struct ErrorCheckVisitor;
+
+impl Visitor<@Liveness> for ErrorCheckVisitor {
+ fn visit_fn(&mut self, fk:&fn_kind, fd:&fn_decl, b:&Block, s:span, n:NodeId, e:@Liveness) {
+ check_fn(self, fk, fd, b, s, n, e);
+ }
+ fn visit_local(&mut self, l:@Local, e:@Liveness) {
+ check_local(self, l, e);
+ }
+ fn visit_expr(&mut self, ex:@expr, e:@Liveness) {
+ check_expr(self, ex, e);
+ }
+ fn visit_arm(&mut self, a:&arm, e:@Liveness) {
+ check_arm(self, a, e);
+ }
+}
+
+fn visit_fn(v: &mut LivenessVisitor,
+ fk: &visit::fn_kind,
decl: &fn_decl,
body: &Block,
sp: span,
id: NodeId,
- (this, v): (@mut IrMaps,
- vt<@mut IrMaps>)) {
+ this: @mut IrMaps) {
debug!("visit_fn: id=%d", id);
let _i = ::util::common::indenter();
// Add `this`, whether explicit or implicit.
match *fk {
- fk_method(_, _, method) => {
+ visit::fk_method(_, _, method) => {
match method.explicit_self.node {
sty_value | sty_region(*) | sty_box(_) | sty_uniq => {
fn_maps.add_variable(Arg(method.self_id,
sty_static => {}
}
}
- fk_item_fn(*) | fk_anon(*) | fk_fn_block(*) => {}
+ visit::fk_item_fn(*) | visit::fk_anon(*) | visit::fk_fn_block(*) => {}
}
// gather up the various local variables, significant expressions,
// and so forth:
- oldvisit::visit_fn(fk, decl, body, sp, id, (fn_maps, v));
+ visit::walk_fn(v, fk, decl, body, sp, id, fn_maps);
// Special nodes and variables:
// - exit_ln represents the end of the fn, either by return or fail
let entry_ln = (*lsets).compute(decl, body);
// check for various error conditions
- let check_vt = oldvisit::mk_vt(@oldvisit::Visitor {
- visit_fn: check_fn,
- visit_local: check_local,
- visit_expr: check_expr,
- visit_arm: check_arm,
- .. *oldvisit::default_visitor()
- });
- (check_vt.visit_block)(body, (lsets, check_vt));
+ let mut check_vt = ErrorCheckVisitor;
+ check_vt.visit_block(body, lsets);
lsets.check_ret(id, sp, fk, entry_ln);
lsets.warn_about_unused_args(decl, entry_ln);
}
-fn visit_local(local: @Local, (this, vt): (@mut IrMaps, vt<@mut IrMaps>)) {
+fn visit_local(v: &mut LivenessVisitor, local: @Local, this: @mut IrMaps) {
let def_map = this.tcx.def_map;
do pat_util::pat_bindings(def_map, local.pat) |_bm, p_id, sp, path| {
debug!("adding local variable %d", p_id);
kind: kind
}));
}
- oldvisit::visit_local(local, (this, vt));
+ visit::walk_local(v, local, this);
}
-fn visit_arm(arm: &arm, (this, vt): (@mut IrMaps, vt<@mut IrMaps>)) {
+fn visit_arm(v: &mut LivenessVisitor, arm: &arm, this: @mut IrMaps) {
let def_map = this.tcx.def_map;
for pat in arm.pats.iter() {
do pat_util::pat_bindings(def_map, *pat) |bm, p_id, sp, path| {
}));
}
}
- oldvisit::visit_arm(arm, (this, vt));
+ visit::walk_arm(v, arm, this);
}
-fn visit_expr(expr: @expr, (this, vt): (@mut IrMaps, vt<@mut IrMaps>)) {
+fn visit_expr(v: &mut LivenessVisitor, expr: @expr, this: @mut IrMaps) {
match expr.node {
// live nodes required for uses or definitions of variables:
expr_path(_) | expr_self => {
if moves::moved_variable_node_id_from_def(def).is_some() {
this.add_live_node_for_node(expr.id, ExprNode(expr.span));
}
- oldvisit::visit_expr(expr, (this, vt));
+ visit::walk_expr(v, expr, this);
}
expr_fn_block(*) => {
// Interesting control flow (for loops can contain labeled
}
this.set_captures(expr.id, call_caps);
- oldvisit::visit_expr(expr, (this, vt));
+ visit::walk_expr(v, expr, this);
}
// live nodes required for interesting control flow:
expr_if(*) | expr_match(*) | expr_while(*) | expr_loop(*) => {
this.add_live_node_for_node(expr.id, ExprNode(expr.span));
- oldvisit::visit_expr(expr, (this, vt));
+ visit::walk_expr(v, expr, this);
}
expr_for_loop(*) => fail!("non-desugared expr_for_loop"),
expr_binary(_, op, _, _) if ast_util::lazy_binop(op) => {
this.add_live_node_for_node(expr.id, ExprNode(expr.span));
- oldvisit::visit_expr(expr, (this, vt));
+ visit::walk_expr(v, expr, this);
}
// otherwise, live nodes are not required:
expr_assign(*) | expr_assign_op(*) | expr_mac(*) |
expr_struct(*) | expr_repeat(*) | expr_paren(*) |
expr_inline_asm(*) => {
- oldvisit::visit_expr(expr, (this, vt));
+ visit::walk_expr(v, expr, this);
}
}
}
// _______________________________________________________________________
// Checking for error conditions
-fn check_local(local: @Local, (this, vt): (@Liveness, vt<@Liveness>)) {
+fn check_local(vt: &mut ErrorCheckVisitor, local: @Local, this: @Liveness) {
match local.init {
Some(_) => {
this.warn_about_unused_or_dead_vars_in_pat(local.pat);
}
}
- oldvisit::visit_local(local, (this, vt));
+ visit::walk_local(vt, local, this);
}
-fn check_arm(arm: &arm, (this, vt): (@Liveness, vt<@Liveness>)) {
+fn check_arm(vt: &mut ErrorCheckVisitor, arm: &arm, this: @Liveness) {
do this.arm_pats_bindings(arm.pats) |ln, var, sp, id| {
this.warn_about_unused(sp, id, ln, var);
}
- oldvisit::visit_arm(arm, (this, vt));
+ visit::walk_arm(vt, arm, this);
}
-fn check_expr(expr: @expr, (this, vt): (@Liveness, vt<@Liveness>)) {
+fn check_expr(vt: &mut ErrorCheckVisitor, expr: @expr, this: @Liveness) {
match expr.node {
expr_assign(l, r) => {
this.check_lvalue(l, vt);
- (vt.visit_expr)(r, (this, vt));
+ vt.visit_expr(r, this);
- oldvisit::visit_expr(expr, (this, vt));
+ visit::walk_expr(vt, expr, this);
}
expr_assign_op(_, _, l, _) => {
this.check_lvalue(l, vt);
- oldvisit::visit_expr(expr, (this, vt));
+ visit::walk_expr(vt, expr, this);
}
expr_inline_asm(ref ia) => {
for &(_, input) in ia.inputs.iter() {
- (vt.visit_expr)(input, (this, vt));
+ vt.visit_expr(input, this);
}
// Output operands must be lvalues
}
_ => {}
}
- (vt.visit_expr)(out, (this, vt));
+ vt.visit_expr(out, this);
}
- oldvisit::visit_expr(expr, (this, vt));
+ visit::walk_expr(vt, expr, this);
}
// no correctness conditions related to liveness
expr_again(*) | expr_lit(_) | expr_block(*) |
expr_mac(*) | expr_addr_of(*) | expr_struct(*) | expr_repeat(*) |
expr_paren(*) | expr_fn_block(*) | expr_path(*) | expr_self(*) => {
- oldvisit::visit_expr(expr, (this, vt));
+ visit::walk_expr(vt, expr, this);
}
expr_for_loop(*) => fail!("non-desugared expr_for_loop")
}
}
-fn check_fn(_fk: &oldvisit::fn_kind,
+fn check_fn(_v: &mut ErrorCheckVisitor,
+ _fk: &visit::fn_kind,
_decl: &fn_decl,
_body: &Block,
_sp: span,
_id: NodeId,
- (_self, _v): (@Liveness, vt<@Liveness>)) {
+ _self: @Liveness) {
// do not check contents of nested fns
}
pub fn check_ret(&self,
id: NodeId,
sp: span,
- _fk: &oldvisit::fn_kind,
+ _fk: &visit::fn_kind,
entry_ln: LiveNode) {
if self.live_on_entry(entry_ln, self.s.no_ret_var).is_some() {
// if no_ret_var is live, then we fall off the end of the
}
}
- pub fn check_lvalue(@self, expr: @expr, vt: vt<@Liveness>) {
+ pub fn check_lvalue(@self, expr: @expr, vt: &mut ErrorCheckVisitor) {
match expr.node {
expr_path(_) => {
match self.tcx.def_map.get_copy(&expr.id) {
_ => {
// For other kinds of lvalues, no checks are required,
// and any embedded expressions are actually rvalues
- oldvisit::visit_expr(expr, (self, vt));
+ visit::walk_expr(vt, expr, self);
}
}
}
use middle::ty::{ty_struct, ty_enum};
use middle::ty;
use middle::typeck::{method_map, method_origin, method_param};
-use middle::typeck::{method_static, method_trait};
+use middle::typeck::{method_static, method_object};
use std::util::ignore;
use syntax::ast::{decl_item, def, def_fn, def_id, def_static_method};
use syntax::attr;
use syntax::codemap::span;
use syntax::parse::token;
-use syntax::oldvisit;
+use syntax::visit;
+use syntax::visit::Visitor;
+use syntax::ast::{_mod,expr,item,Block,pat};
-pub fn check_crate<'mm>(tcx: ty::ctxt,
- method_map: &'mm method_map,
- crate: &ast::Crate) {
- let privileged_items = @mut ~[];
+struct PrivacyVisitor {
+ tcx: ty::ctxt,
+ privileged_items: @mut ~[NodeId],
+}
+impl PrivacyVisitor {
// Adds an item to its scope.
- let add_privileged_item: @fn(@ast::item, &mut uint) = |item, count| {
+ fn add_privileged_item(&mut self, item: @ast::item, count: &mut uint) {
match item.node {
item_struct(*) | item_trait(*) | item_enum(*) |
item_fn(*) => {
- privileged_items.push(item.id);
+ self.privileged_items.push(item.id);
*count += 1;
}
item_impl(_, _, _, ref methods) => {
for method in methods.iter() {
- privileged_items.push(method.id);
+ self.privileged_items.push(method.id);
*count += 1;
}
- privileged_items.push(item.id);
+ self.privileged_items.push(item.id);
*count += 1;
}
item_foreign_mod(ref foreign_mod) => {
for foreign_item in foreign_mod.items.iter() {
- privileged_items.push(foreign_item.id);
+ self.privileged_items.push(foreign_item.id);
*count += 1;
}
}
_ => {}
}
- };
+ }
// Adds items that are privileged to this scope.
- let add_privileged_items: @fn(&[@ast::item]) -> uint = |items| {
+ fn add_privileged_items(&mut self, items: &[@ast::item]) -> uint {
let mut count = 0;
for &item in items.iter() {
- add_privileged_item(item, &mut count);
+ self.add_privileged_item(item, &mut count);
}
count
- };
+ }
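The privacy refactor above converts environment-capturing `@fn` closures into methods on a `PrivacyVisitor` struct, so the captured state (`tcx`, `privileged_items`) becomes named fields reached through `self`. The shape of that refactor, sketched with toy types in modern Rust (hypothetical names; the real context carries a type context and AST items):

```rust
// Sketch: closure-captured state promoted to struct fields with methods.
struct PrivacyCtx {
    privileged: Vec<u32>, // was a captured `@mut ~[]` in the closure version
}

impl PrivacyCtx {
    // Was: let add_privileged_item: @fn(...) = |item, count| { ... };
    fn add_privileged_item(&mut self, id: u32, count: &mut usize) {
        self.privileged.push(id);
        *count += 1;
    }

    // Was: let add_privileged_items: @fn(...) = |items| { ... };
    fn add_privileged_items(&mut self, ids: &[u32]) -> usize {
        let mut count = 0;
        for &id in ids {
            self.add_privileged_item(id, &mut count);
        }
        count
    }
}

fn main() {
    let mut cx = PrivacyCtx { privileged: Vec::new() };
    let n = cx.add_privileged_items(&[1, 2, 3]);
    println!("{} {:?}", n, cx.privileged);
}
```

Besides dropping the `@fn` allocations, this lets the borrow relationships between the helpers and their shared state be expressed through ordinary `&mut self` methods instead of a web of captures.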
// Checks that an enum variant is in scope
- let check_variant: @fn(span: span, enum_id: ast::def_id) =
- |span, enum_id| {
- let variant_info = ty::enum_variants(tcx, enum_id)[0];
+ fn check_variant(&mut self, span: span, enum_id: ast::def_id) {
+ let variant_info = ty::enum_variants(self.tcx, enum_id)[0];
let parental_privacy = if is_local(enum_id) {
- let parent_vis = ast_map::node_item_query(tcx.items, enum_id.node,
+ let parent_vis = ast_map::node_item_query(self.tcx.items,
+ enum_id.node,
|it| { it.vis },
~"unbound enum parent when checking \
dereference of enum type");
if variant_visibility_to_privacy(variant_info.vis,
parental_privacy == Public)
== Private {
- tcx.sess.span_err(span,
+ self.tcx.sess.span_err(span,
"can only dereference enums \
with a single, public variant");
}
- };
+ }
// Returns true if a crate-local method is private and false otherwise.
- let method_is_private: @fn(span: span, method_id: NodeId) -> bool =
- |span, method_id| {
+ fn method_is_private(&mut self, span: span, method_id: NodeId) -> bool {
let check = |vis: visibility, container_id: def_id| {
let mut is_private = false;
if vis == private {
} else {
// Look up the enclosing impl.
if container_id.crate != LOCAL_CRATE {
- tcx.sess.span_bug(span,
+ self.tcx.sess.span_bug(span,
"local method isn't in local \
impl?!");
}
- match tcx.items.find(&container_id.node) {
+ match self.tcx.items.find(&container_id.node) {
Some(&node_item(item, _)) => {
match item.node {
item_impl(_, None, _, _)
}
}
Some(_) => {
- tcx.sess.span_bug(span, "impl wasn't an item?!");
+ self.tcx.sess.span_bug(span, "impl wasn't an item?!");
}
None => {
- tcx.sess.span_bug(span, "impl wasn't in AST map?!");
+ self.tcx.sess.span_bug(span, "impl wasn't in AST map?!");
}
}
}
is_private
};
- match tcx.items.find(&method_id) {
+ match self.tcx.items.find(&method_id) {
Some(&node_method(method, impl_id, _)) => {
check(method.vis, impl_id)
}
}
}
Some(_) => {
- tcx.sess.span_bug(span,
+ self.tcx.sess.span_bug(span,
fmt!("method_is_private: method was a %s?!",
ast_map::node_id_to_str(
- tcx.items,
+ self.tcx.items,
method_id,
token::get_ident_interner())));
}
None => {
- tcx.sess.span_bug(span, "method not found in \
+ self.tcx.sess.span_bug(span, "method not found in \
AST map?!");
}
}
- };
+ }
// Returns true if the given local item is private and false otherwise.
- let local_item_is_private: @fn(span: span, item_id: NodeId) -> bool =
- |span, item_id| {
+ fn local_item_is_private(&mut self, span: span, item_id: NodeId) -> bool {
let mut f: &fn(NodeId) -> bool = |_| false;
f = |item_id| {
- match tcx.items.find(&item_id) {
+ match self.tcx.items.find(&item_id) {
Some(&node_item(item, _)) => item.vis != public,
Some(&node_foreign_item(*)) => false,
Some(&node_method(method, impl_did, _)) => {
}
Some(&node_trait_method(_, trait_did, _)) => f(trait_did.node),
Some(_) => {
- tcx.sess.span_bug(span,
+ self.tcx.sess.span_bug(span,
fmt!("local_item_is_private: item was \
a %s?!",
ast_map::node_id_to_str(
- tcx.items,
+ self.tcx.items,
item_id,
token::get_ident_interner())));
}
None => {
- tcx.sess.span_bug(span, "item not found in AST map?!");
+ self.tcx.sess.span_bug(span, "item not found in AST map?!");
}
}
};
f(item_id)
- };
+ }
// Checks that a private field is in scope.
- let check_field: @fn(span: span, id: ast::def_id, ident: ast::ident) =
- |span, id, ident| {
- let fields = ty::lookup_struct_fields(tcx, id);
+ fn check_field(&mut self, span: span, id: ast::def_id, ident: ast::ident) {
+ let fields = ty::lookup_struct_fields(self.tcx, id);
for field in fields.iter() {
if field.ident != ident { loop; }
if field.vis == private {
- tcx.sess.span_err(span, fmt!("field `%s` is private",
+ self.tcx.sess.span_err(span, fmt!("field `%s` is private",
token::ident_to_str(&ident)));
}
break;
}
- };
+ }
// Given the ID of a method, checks to ensure it's in scope.
- let check_method_common: @fn(span: span,
- method_id: def_id,
- name: &ident) =
- |span, method_id, name| {
+ fn check_method_common(&mut self, span: span, method_id: def_id, name: &ident) {
// If the method is a default method, we need to use the def_id of
// the default implementation.
        // Having to do this is really unfortunate.
- let method_id = ty::method(tcx, method_id).provided_source
+ let method_id = ty::method(self.tcx, method_id).provided_source
.unwrap_or_default(method_id);
if method_id.crate == LOCAL_CRATE {
- let is_private = method_is_private(span, method_id.node);
- let container_id = ty::method(tcx, method_id).container_id;
+ let is_private = self.method_is_private(span, method_id.node);
+ let container_id = ty::method(self.tcx, method_id).container_id;
if is_private &&
(container_id.crate != LOCAL_CRATE ||
- !privileged_items.iter().any(|x| x == &(container_id.node))) {
- tcx.sess.span_err(span,
+ !self.privileged_items.iter().any(|x| x == &(container_id.node))) {
+ self.tcx.sess.span_err(span,
fmt!("method `%s` is private",
token::ident_to_str(name)));
}
} else {
let visibility =
- csearch::get_item_visibility(tcx.sess.cstore, method_id);
+ csearch::get_item_visibility(self.tcx.sess.cstore, method_id);
if visibility != public {
- tcx.sess.span_err(span,
+ self.tcx.sess.span_err(span,
fmt!("method `%s` is private",
token::ident_to_str(name)));
}
}
- };
+ }
// Checks that a private path is in scope.
- let check_path: @fn(span: span, def: def, path: &Path) =
- |span, def, path| {
+ fn check_path(&mut self, span: span, def: def, path: &Path) {
debug!("checking path");
match def {
def_static_method(method_id, _, _) => {
debug!("found static method def, checking it");
- check_method_common(span, method_id, path.idents.last())
+ self.check_method_common(span, method_id, path.idents.last())
}
def_fn(def_id, _) => {
if def_id.crate == LOCAL_CRATE {
- if local_item_is_private(span, def_id.node) &&
- !privileged_items.iter().any(|x| x == &def_id.node) {
- tcx.sess.span_err(span,
+ if self.local_item_is_private(span, def_id.node) &&
+ !self.privileged_items.iter().any(|x| x == &def_id.node) {
+ self.tcx.sess.span_err(span,
fmt!("function `%s` is private",
token::ident_to_str(path.idents.last())));
}
- } else if csearch::get_item_visibility(tcx.sess.cstore,
+ } else if csearch::get_item_visibility(self.tcx.sess.cstore,
def_id) != public {
- tcx.sess.span_err(span,
+ self.tcx.sess.span_err(span,
fmt!("function `%s` is private",
token::ident_to_str(path.idents.last())));
}
}
_ => {}
}
- };
+ }
// Checks that a private method is in scope.
- let check_method: @fn(span: span,
- origin: &method_origin,
- ident: ast::ident) =
- |span, origin, ident| {
+ fn check_method(&mut self, span: span, origin: &method_origin, ident: ast::ident) {
match *origin {
method_static(method_id) => {
- check_method_common(span, method_id, &ident)
+ self.check_method_common(span, method_id, &ident)
}
method_param(method_param {
trait_id: trait_id,
- method_num: method_num,
+ method_num: method_num,
_
}) |
- method_trait(trait_id, method_num) => {
+ method_object(method_object {
+ trait_id: trait_id,
+ method_num: method_num,
+ _
+ }) => {
if trait_id.crate == LOCAL_CRATE {
- match tcx.items.find(&trait_id.node) {
+ match self.tcx.items.find(&trait_id.node) {
Some(&node_item(item, _)) => {
match item.node {
item_trait(_, _, ref methods) => {
if method_num >= (*methods).len() {
- tcx.sess.span_bug(span, "method number out of range?!");
+ self.tcx.sess.span_bug(span,
+ "method number out of range?!");
}
match (*methods)[method_num] {
provided(method)
if method.vis == private &&
- !privileged_items.iter()
+ !self.privileged_items.iter()
.any(|x| x == &(trait_id.node)) => {
- tcx.sess.span_err(span,
+ self.tcx.sess.span_err(span,
fmt!("method `%s` is private",
token::ident_to_str(&method
.ident)));
}
}
_ => {
- tcx.sess.span_bug(span, "trait wasn't actually a trait?!");
+ self.tcx.sess.span_bug(span, "trait wasn't actually a trait?!");
}
}
}
Some(_) => {
- tcx.sess.span_bug(span, "trait wasn't an item?!");
+ self.tcx.sess.span_bug(span, "trait wasn't an item?!");
}
None => {
- tcx.sess.span_bug(span, "trait item wasn't found in the AST map?!");
+ self.tcx.sess.span_bug(span,
+ "trait item wasn't found in the AST map?!");
}
}
} else {
}
}
}
- };
+ }
+}
- let visitor = oldvisit::mk_vt(@oldvisit::Visitor {
- visit_mod: |the_module, span, node_id, (method_map, visitor)| {
- let n_added = add_privileged_items(the_module.items);
+impl<'self> Visitor<&'self method_map> for PrivacyVisitor {
- oldvisit::visit_mod(the_module,
- span,
- node_id,
- (method_map, visitor));
+ fn visit_mod<'mm>(&mut self, the_module:&_mod, _:span, _:NodeId,
+ method_map:&'mm method_map) {
+
+ let n_added = self.add_privileged_items(the_module.items);
+
+ visit::walk_mod(self, the_module, method_map);
do n_added.times {
- ignore(privileged_items.pop());
+ ignore(self.privileged_items.pop());
}
- },
- visit_item: |item, (method_map, visitor)| {
+ }
+
+ fn visit_item<'mm>(&mut self, item:@item, method_map:&'mm method_map) {
+
// Do not check privacy inside items with the resolve_unexported
// attribute. This is used for the test runner.
if !attr::contains_name(item.attrs, "!resolve_unexported") {
- check_sane_privacy(tcx, item);
- oldvisit::visit_item(item, (method_map, visitor));
+ check_sane_privacy(self.tcx, item);
+ visit::walk_item(self, item, method_map);
}
- },
- visit_block: |block, (method_map, visitor)| {
+ }
+
+ fn visit_block<'mm>(&mut self, block:&Block, method_map:&'mm method_map) {
+
// Gather up all the privileged items.
let mut n_added = 0;
for stmt in block.stmts.iter() {
stmt_decl(decl, _) => {
match decl.node {
decl_item(item) => {
- add_privileged_item(item, &mut n_added);
+ self.add_privileged_item(item, &mut n_added);
}
_ => {}
}
}
}
- oldvisit::visit_block(block, (method_map, visitor));
+ visit::walk_block(self, block, method_map);
do n_added.times {
- ignore(privileged_items.pop());
+ ignore(self.privileged_items.pop());
}
- },
- visit_expr: |expr,
- (method_map, visitor):
- (&'mm method_map, oldvisit::vt<&'mm method_map>)| {
+
+ }
+
+ fn visit_expr<'mm>(&mut self, expr:@expr, method_map:&'mm method_map) {
+
match expr.node {
expr_field(base, ident, _) => {
// Method calls are now a special syntactic form,
// With type_autoderef, make sure we don't
// allow pointers to violate privacy
- match ty::get(ty::type_autoderef(tcx, ty::expr_ty(tcx,
+ match ty::get(ty::type_autoderef(self.tcx, ty::expr_ty(self.tcx,
base))).sty {
ty_struct(id, _)
- if id.crate != LOCAL_CRATE || !privileged_items.iter()
+ if id.crate != LOCAL_CRATE || !self.privileged_items.iter()
.any(|x| x == &(id.node)) => {
debug!("(privacy checking) checking field access");
- check_field(expr.span, id, ident);
+ self.check_field(expr.span, id, ident);
}
_ => {}
}
}
expr_method_call(_, base, ident, _, _, _) => {
// Ditto
- match ty::get(ty::type_autoderef(tcx, ty::expr_ty(tcx,
+ match ty::get(ty::type_autoderef(self.tcx, ty::expr_ty(self.tcx,
base))).sty {
ty_enum(id, _) |
ty_struct(id, _)
if id.crate != LOCAL_CRATE ||
- !privileged_items.iter().any(|x| x == &(id.node)) => {
+ !self.privileged_items.iter().any(|x| x == &(id.node)) => {
match method_map.find(&expr.id) {
None => {
- tcx.sess.span_bug(expr.span,
+ self.tcx.sess.span_bug(expr.span,
"method call not in \
method map");
}
Some(ref entry) => {
debug!("(privacy checking) checking \
impl method");
- check_method(expr.span, &entry.origin, ident);
+ self.check_method(expr.span, &entry.origin, ident);
}
}
}
}
}
expr_path(ref path) => {
- check_path(expr.span, tcx.def_map.get_copy(&expr.id), path);
+ self.check_path(expr.span, self.tcx.def_map.get_copy(&expr.id), path);
}
expr_struct(_, ref fields, _) => {
- match ty::get(ty::expr_ty(tcx, expr)).sty {
+ match ty::get(ty::expr_ty(self.tcx, expr)).sty {
ty_struct(id, _) => {
if id.crate != LOCAL_CRATE ||
- !privileged_items.iter().any(|x| x == &(id.node)) {
+ !self.privileged_items.iter().any(|x| x == &(id.node)) {
for field in (*fields).iter() {
debug!("(privacy checking) checking \
field in struct literal");
- check_field(expr.span, id, field.ident);
+ self.check_field(expr.span, id, field.ident);
}
}
}
ty_enum(id, _) => {
if id.crate != LOCAL_CRATE ||
- !privileged_items.iter().any(|x| x == &(id.node)) {
- match tcx.def_map.get_copy(&expr.id) {
+ !self.privileged_items.iter().any(|x| x == &(id.node)) {
+ match self.tcx.def_map.get_copy(&expr.id) {
def_variant(_, variant_id) => {
for field in (*fields).iter() {
debug!("(privacy checking) \
checking field in \
struct variant \
literal");
- check_field(expr.span, variant_id, field.ident);
+ self.check_field(expr.span, variant_id, field.ident);
}
}
_ => {
- tcx.sess.span_bug(expr.span,
+ self.tcx.sess.span_bug(expr.span,
"resolve didn't \
map enum struct \
constructor to a \
}
}
_ => {
- tcx.sess.span_bug(expr.span, "struct expr \
+ self.tcx.sess.span_bug(expr.span, "struct expr \
didn't have \
struct type?!");
}
// enum type t, then t's first variant is public or
// privileged. (We can assume it has only one variant
// since typeck already happened.)
- match ty::get(ty::expr_ty(tcx, operand)).sty {
+ match ty::get(ty::expr_ty(self.tcx, operand)).sty {
ty_enum(id, _) => {
if id.crate != LOCAL_CRATE ||
- !privileged_items.iter().any(|x| x == &(id.node)) {
- check_variant(expr.span, id);
+ !self.privileged_items.iter().any(|x| x == &(id.node)) {
+ self.check_variant(expr.span, id);
}
}
_ => { /* No check needed */ }
_ => {}
}
- oldvisit::visit_expr(expr, (method_map, visitor));
- },
- visit_pat: |pattern, (method_map, visitor)| {
+ visit::walk_expr(self, expr, method_map);
+
+ }
+
+ fn visit_pat<'mm>(&mut self, pattern:@pat, method_map:&'mm method_map) {
+
match pattern.node {
pat_struct(_, ref fields, _) => {
- match ty::get(ty::pat_ty(tcx, pattern)).sty {
+ match ty::get(ty::pat_ty(self.tcx, pattern)).sty {
ty_struct(id, _) => {
if id.crate != LOCAL_CRATE ||
- !privileged_items.iter().any(|x| x == &(id.node)) {
+ !self.privileged_items.iter().any(|x| x == &(id.node)) {
for field in fields.iter() {
debug!("(privacy checking) checking \
struct pattern");
- check_field(pattern.span, id, field.ident);
+ self.check_field(pattern.span, id, field.ident);
}
}
}
ty_enum(enum_id, _) => {
if enum_id.crate != LOCAL_CRATE ||
- !privileged_items.iter().any(|x| x == &enum_id.node) {
- match tcx.def_map.find(&pattern.id) {
+ !self.privileged_items.iter().any(|x| x == &enum_id.node) {
+ match self.tcx.def_map.find(&pattern.id) {
Some(&def_variant(_, variant_id)) => {
for field in fields.iter() {
debug!("(privacy checking) \
checking field in \
struct variant pattern");
- check_field(pattern.span, variant_id, field.ident);
+ self.check_field(pattern.span, variant_id, field.ident);
}
}
_ => {
- tcx.sess.span_bug(pattern.span,
+ self.tcx.sess.span_bug(pattern.span,
"resolve didn't \
map enum struct \
pattern to a \
}
}
_ => {
- tcx.sess.span_bug(pattern.span,
+ self.tcx.sess.span_bug(pattern.span,
"struct pattern didn't have \
struct type?!");
}
_ => {}
}
- oldvisit::visit_pat(pattern, (method_map, visitor));
- },
- .. *oldvisit::default_visitor()
- });
- oldvisit::visit_crate(crate, (method_map, visitor));
+ visit::walk_pat(self, pattern, method_map);
+ }
+}
+
+pub fn check_crate<'mm>(tcx: ty::ctxt,
+ method_map: &'mm method_map,
+ crate: &ast::Crate) {
+ let privileged_items = @mut ~[];
+
+ let mut visitor = PrivacyVisitor {
+ tcx: tcx,
+ privileged_items: privileged_items,
+ };
+ visit::walk_crate(&mut visitor, crate, method_map);
}
/// Validates all of the visibility qualifiers placed on the item given. This
use syntax::ast_util::def_id_of_def;
use syntax::attr;
use syntax::parse::token;
-use syntax::oldvisit::Visitor;
-use syntax::oldvisit;
+use syntax::visit::Visitor;
+use syntax::visit;
// Returns true if the given set of attributes contains the `#[inline]`
// attribute.
worklist: @mut ~[NodeId],
}
-impl ReachableContext {
- // Creates a new reachability computation context.
- fn new(tcx: ty::ctxt, method_map: typeck::method_map)
- -> ReachableContext {
- ReachableContext {
- tcx: tcx,
- method_map: method_map,
- reachable_symbols: @mut HashSet::new(),
- worklist: @mut ~[],
- }
- }
+struct ReachableVisitor {
+ reachable_symbols: @mut HashSet<NodeId>,
+ worklist: @mut ~[NodeId],
+}
+
+impl Visitor<PrivacyContext> for ReachableVisitor {
+
+ fn visit_item(&mut self, item:@item, privacy_context:PrivacyContext) {
- // Step 1: Mark all public symbols, and add all public symbols that might
- // be inlined to a worklist.
- fn mark_public_symbols(&self, crate: @Crate) {
- let reachable_symbols = self.reachable_symbols;
- let worklist = self.worklist;
- let visitor = oldvisit::mk_vt(@Visitor {
- visit_item: |item, (privacy_context, visitor):
- (PrivacyContext, oldvisit::vt<PrivacyContext>)| {
match item.node {
item_fn(*) => {
if privacy_context == PublicContext {
- reachable_symbols.insert(item.id);
+ self.reachable_symbols.insert(item.id);
}
if item_might_be_inlined(item) {
- worklist.push(item.id)
+ self.worklist.push(item.id)
}
}
item_struct(ref struct_def, _) => {
match struct_def.ctor_id {
Some(ctor_id) if
privacy_context == PublicContext => {
- reachable_symbols.insert(ctor_id);
+ self.reachable_symbols.insert(ctor_id);
}
Some(_) | None => {}
}
item_enum(ref enum_def, _) => {
if privacy_context == PublicContext {
for variant in enum_def.variants.iter() {
- reachable_symbols.insert(variant.node.id);
+ self.reachable_symbols.insert(variant.node.id);
}
}
}
// Mark all public methods as reachable.
for &method in methods.iter() {
if should_be_considered_public(method) {
- reachable_symbols.insert(method.id);
+ self.reachable_symbols.insert(method.id);
}
}
// symbols to the worklist.
for &method in methods.iter() {
if should_be_considered_public(method) {
- worklist.push(method.id)
+ self.worklist.push(method.id)
}
}
} else {
if generics_require_inlining(generics) ||
attributes_specify_inlining(*attrs) ||
should_be_considered_public(*method) {
- worklist.push(method.id)
+ self.worklist.push(method.id)
}
}
}
for trait_method in trait_methods.iter() {
match *trait_method {
provided(method) => {
- reachable_symbols.insert(method.id);
- worklist.push(method.id)
+ self.reachable_symbols.insert(method.id);
+ self.worklist.push(method.id)
}
required(_) => {}
}
}
if item.vis == public && privacy_context == PublicContext {
- oldvisit::visit_item(item, (PublicContext, visitor))
+ visit::walk_item(self, item, PublicContext)
} else {
- oldvisit::visit_item(item, (PrivateContext, visitor))
+ visit::walk_item(self, item, PrivateContext)
}
- },
- .. *oldvisit::default_visitor()
- });
+ }
+
+}
+
+struct MarkSymbolVisitor {
+ worklist: @mut ~[NodeId],
+ method_map: typeck::method_map,
+ tcx: ty::ctxt,
+ reachable_symbols: @mut HashSet<NodeId>,
+}
+
+impl Visitor<()> for MarkSymbolVisitor {
- oldvisit::visit_crate(crate, (PublicContext, visitor))
+ fn visit_expr(&mut self, expr:@expr, _:()) {
+
+ match expr.node {
+ expr_path(_) => {
+ let def = match self.tcx.def_map.find(&expr.id) {
+ Some(&def) => def,
+ None => {
+ self.tcx.sess.span_bug(expr.span,
+ "def ID not in def map?!")
+ }
+ };
+
+ let def_id = def_id_of_def(def);
+ if ReachableContext::
+ def_id_represents_local_inlined_item(self.tcx,
+ def_id) {
+ self.worklist.push(def_id.node)
+ }
+ self.reachable_symbols.insert(def_id.node);
+ }
+ expr_method_call(*) => {
+ match self.method_map.find(&expr.id) {
+ Some(&typeck::method_map_entry {
+ origin: typeck::method_static(def_id),
+ _
+ }) => {
+ if ReachableContext::
+ def_id_represents_local_inlined_item(
+ self.tcx,
+ def_id) {
+ self.worklist.push(def_id.node)
+ }
+ self.reachable_symbols.insert(def_id.node);
+ }
+ Some(_) => {}
+ None => {
+ self.tcx.sess.span_bug(expr.span,
+ "method call expression \
+ not in method map?!")
+ }
+ }
+ }
+ _ => {}
+ }
+
+ visit::walk_expr(self, expr, ())
+ }
+}
+
+impl ReachableContext {
+ // Creates a new reachability computation context.
+ fn new(tcx: ty::ctxt, method_map: typeck::method_map)
+ -> ReachableContext {
+ ReachableContext {
+ tcx: tcx,
+ method_map: method_map,
+ reachable_symbols: @mut HashSet::new(),
+ worklist: @mut ~[],
+ }
+ }
+
+ // Step 1: Mark all public symbols, and add all public symbols that might
+ // be inlined to a worklist.
+ fn mark_public_symbols(&self, crate: @Crate) {
+ let reachable_symbols = self.reachable_symbols;
+ let worklist = self.worklist;
+
+ let mut visitor = ReachableVisitor {
+ reachable_symbols: reachable_symbols,
+ worklist: worklist,
+ };
+
+
+ visit::walk_crate(&mut visitor, crate, PublicContext);
}
// Returns true if the given def ID represents a local item that is
}
// Helper function to set up a visitor for `propagate()` below.
- fn init_visitor(&self) -> oldvisit::vt<()> {
+ fn init_visitor(&self) -> MarkSymbolVisitor {
let (worklist, method_map) = (self.worklist, self.method_map);
let (tcx, reachable_symbols) = (self.tcx, self.reachable_symbols);
- oldvisit::mk_vt(@oldvisit::Visitor {
- visit_expr: |expr, (_, visitor)| {
- match expr.node {
- expr_path(_) => {
- let def = match tcx.def_map.find(&expr.id) {
- Some(&def) => def,
- None => {
- tcx.sess.span_bug(expr.span,
- "def ID not in def map?!")
- }
- };
-
- let def_id = def_id_of_def(def);
- if ReachableContext::
- def_id_represents_local_inlined_item(tcx,
- def_id) {
- worklist.push(def_id.node)
- }
- reachable_symbols.insert(def_id.node);
- }
- expr_method_call(*) => {
- match method_map.find(&expr.id) {
- Some(&typeck::method_map_entry {
- origin: typeck::method_static(def_id),
- _
- }) => {
- if ReachableContext::
- def_id_represents_local_inlined_item(
- tcx,
- def_id) {
- worklist.push(def_id.node)
- }
- reachable_symbols.insert(def_id.node);
- }
- Some(_) => {}
- None => {
- tcx.sess.span_bug(expr.span,
- "method call expression \
- not in method map?!")
- }
- }
- }
- _ => {}
- }
- oldvisit::visit_expr(expr, ((), visitor))
- },
- ..*oldvisit::default_visitor()
- })
+ MarkSymbolVisitor {
+ worklist: worklist,
+ method_map: method_map,
+ tcx: tcx,
+ reachable_symbols: reachable_symbols,
+ }
}
// Step 2: Mark all symbols that the symbols on the worklist touch.
fn propagate(&self) {
- let visitor = self.init_visitor();
+ let mut visitor = self.init_visitor();
let mut scanned = HashSet::new();
while self.worklist.len() > 0 {
let search_item = self.worklist.pop();
Some(&ast_map::node_item(item, _)) => {
match item.node {
item_fn(_, _, _, _, ref search_block) => {
- oldvisit::visit_block(search_block, ((), visitor))
+ visit::walk_block(&mut visitor, search_block, ())
}
_ => {
self.tcx.sess.span_bug(item.span,
worklist?!")
}
provided(ref method) => {
- oldvisit::visit_block(&method.body, ((), visitor))
+ visit::walk_block(&mut visitor, &method.body, ())
}
}
}
Some(&ast_map::node_method(ref method, _, _)) => {
- oldvisit::visit_block(&method.body, ((), visitor))
+ visit::walk_block(&mut visitor, &method.body, ())
}
Some(_) => {
let ident_interner = token::get_ident_interner();
use syntax::print::pprust;
use syntax::parse::token;
use syntax::parse::token::special_idents;
-use syntax::{ast, oldvisit};
+use syntax::{ast, visit};
+use syntax::visit::{Visitor,fn_kind};
+use syntax::ast::{Block,item,fn_decl,NodeId,arm,pat,stmt,expr,Local};
+use syntax::ast::{Ty,TypeMethod,struct_field};
/**
The region maps encode information about region relationships.
}
}
-fn resolve_block(blk: &ast::Block,
- (cx, visitor): (Context, oldvisit::vt<Context>)) {
+fn resolve_block(visitor: &mut RegionResolutionVisitor,
+ blk: &ast::Block,
+ cx: Context) {
// Record the parent of this block.
parent_to_expr(cx, blk.id, blk.span);
let new_cx = Context {var_parent: Some(blk.id),
parent: Some(blk.id),
..cx};
- oldvisit::visit_block(blk, (new_cx, visitor));
+ visit::walk_block(visitor, blk, new_cx);
}
-fn resolve_arm(arm: &ast::arm,
- (cx, visitor): (Context, oldvisit::vt<Context>)) {
- oldvisit::visit_arm(arm, (cx, visitor));
+fn resolve_arm(visitor: &mut RegionResolutionVisitor,
+ arm: &ast::arm,
+ cx: Context) {
+ visit::walk_arm(visitor, arm, cx);
}
-fn resolve_pat(pat: @ast::pat,
- (cx, visitor): (Context, oldvisit::vt<Context>)) {
+fn resolve_pat(visitor: &mut RegionResolutionVisitor,
+ pat: @ast::pat,
+ cx: Context) {
assert_eq!(cx.var_parent, cx.parent);
parent_to_expr(cx, pat.id, pat.span);
- oldvisit::visit_pat(pat, (cx, visitor));
+ visit::walk_pat(visitor, pat, cx);
}
-fn resolve_stmt(stmt: @ast::stmt,
- (cx, visitor): (Context, oldvisit::vt<Context>)) {
+fn resolve_stmt(visitor: &mut RegionResolutionVisitor,
+ stmt: @ast::stmt,
+ cx: Context) {
match stmt.node {
ast::stmt_decl(*) => {
- oldvisit::visit_stmt(stmt, (cx, visitor));
+ visit::walk_stmt(visitor, stmt, cx);
}
ast::stmt_expr(_, stmt_id) |
ast::stmt_semi(_, stmt_id) => {
parent_to_expr(cx, stmt_id, stmt.span);
let expr_cx = Context {parent: Some(stmt_id), ..cx};
- oldvisit::visit_stmt(stmt, (expr_cx, visitor));
+ visit::walk_stmt(visitor, stmt, expr_cx);
}
ast::stmt_mac(*) => cx.sess.bug("unexpanded macro")
}
}
-fn resolve_expr(expr: @ast::expr,
- (cx, visitor): (Context, oldvisit::vt<Context>)) {
+fn resolve_expr(visitor: &mut RegionResolutionVisitor,
+ expr: @ast::expr,
+ cx: Context) {
parent_to_expr(cx, expr.id, expr.span);
let mut new_cx = cx;
};
- oldvisit::visit_expr(expr, (new_cx, visitor));
+ visit::walk_expr(visitor, expr, new_cx);
}
-fn resolve_local(local: @ast::Local,
- (cx, visitor): (Context, oldvisit::vt<Context>)) {
+fn resolve_local(visitor: &mut RegionResolutionVisitor,
+ local: @ast::Local,
+ cx: Context) {
assert_eq!(cx.var_parent, cx.parent);
parent_to_expr(cx, local.id, local.span);
- oldvisit::visit_local(local, (cx, visitor));
+ visit::walk_local(visitor, local, cx);
}
-fn resolve_item(item: @ast::item,
- (cx, visitor): (Context, oldvisit::vt<Context>)) {
+fn resolve_item(visitor: &mut RegionResolutionVisitor,
+ item: @ast::item,
+ cx: Context) {
// Items create a new outer block scope as far as we're concerned.
let new_cx = Context {var_parent: None, parent: None, ..cx};
- oldvisit::visit_item(item, (new_cx, visitor));
+ visit::walk_item(visitor, item, new_cx);
}
-fn resolve_fn(fk: &oldvisit::fn_kind,
+fn resolve_fn(visitor: &mut RegionResolutionVisitor,
+ fk: &visit::fn_kind,
decl: &ast::fn_decl,
body: &ast::Block,
sp: span,
id: ast::NodeId,
- (cx, visitor): (Context,
- oldvisit::vt<Context>)) {
+ cx: Context) {
debug!("region::resolve_fn(id=%?, \
span=%?, \
body.id=%?, \
var_parent: Some(body.id),
..cx};
match *fk {
- oldvisit::fk_method(_, _, method) => {
+ visit::fk_method(_, _, method) => {
cx.region_maps.record_parent(method.self_id, body.id);
}
_ => {}
}
- oldvisit::visit_fn_decl(decl, (decl_cx, visitor));
+ visit::walk_fn_decl(visitor, decl, decl_cx);
// The body of the fn itself is either a root scope (top-level fn)
// or it continues with the inherited scope (closures).
let body_cx = match *fk {
- oldvisit::fk_item_fn(*) |
- oldvisit::fk_method(*) => {
+ visit::fk_item_fn(*) |
+ visit::fk_method(*) => {
Context {parent: None, var_parent: None, ..cx}
}
- oldvisit::fk_anon(*) |
- oldvisit::fk_fn_block(*) => {
+ visit::fk_anon(*) |
+ visit::fk_fn_block(*) => {
cx
}
};
- (visitor.visit_block)(body, (body_cx, visitor));
+ visitor.visit_block(body, body_cx);
+}
+
+struct RegionResolutionVisitor;
+
+impl Visitor<Context> for RegionResolutionVisitor {
+
+ fn visit_block(&mut self, b:&Block, cx:Context) {
+ resolve_block(self, b, cx);
+ }
+
+ fn visit_item(&mut self, i:@item, cx:Context) {
+ resolve_item(self, i, cx);
+ }
+
+ fn visit_fn(&mut self, fk:&fn_kind, fd:&fn_decl, b:&Block, s:span, n:NodeId, cx:Context) {
+ resolve_fn(self, fk, fd, b, s, n, cx);
+ }
+ fn visit_arm(&mut self, a:&arm, cx:Context) {
+ resolve_arm(self, a, cx);
+ }
+ fn visit_pat(&mut self, p:@pat, cx:Context) {
+ resolve_pat(self, p, cx);
+ }
+ fn visit_stmt(&mut self, s:@stmt, cx:Context) {
+ resolve_stmt(self, s, cx);
+ }
+ fn visit_expr(&mut self, ex:@expr, cx:Context) {
+ resolve_expr(self, ex, cx);
+ }
+ fn visit_local(&mut self, l:@Local, cx:Context) {
+ resolve_local(self, l, cx);
+ }
}
pub fn resolve_crate(sess: Session,
region_maps: region_maps,
parent: None,
var_parent: None};
- let visitor = oldvisit::mk_vt(@oldvisit::Visitor {
- visit_block: resolve_block,
- visit_item: resolve_item,
- visit_fn: resolve_fn,
- visit_arm: resolve_arm,
- visit_pat: resolve_pat,
- visit_stmt: resolve_stmt,
- visit_expr: resolve_expr,
- visit_local: resolve_local,
- .. *oldvisit::default_visitor()
- });
- oldvisit::visit_crate(crate, (cx, visitor));
+ let mut visitor = RegionResolutionVisitor;
+ visit::walk_crate(&mut visitor, crate, cx);
return region_maps;
}
}
}
-fn determine_rp_in_item(item: @ast::item,
- (cx, visitor): (@mut DetermineRpCtxt,
- oldvisit::vt<@mut DetermineRpCtxt>)) {
+fn determine_rp_in_item(visitor: &mut DetermineRpVisitor,
+ item: @ast::item,
+ cx: @mut DetermineRpCtxt) {
do cx.with(item.id, true) {
- oldvisit::visit_item(item, (cx, visitor));
+ visit::walk_item(visitor, item, cx);
}
}
-fn determine_rp_in_fn(fk: &oldvisit::fn_kind,
+fn determine_rp_in_fn(visitor: &mut DetermineRpVisitor,
+ fk: &visit::fn_kind,
decl: &ast::fn_decl,
body: &ast::Block,
_: span,
_: ast::NodeId,
- (cx, visitor): (@mut DetermineRpCtxt,
- oldvisit::vt<@mut DetermineRpCtxt>)) {
+ cx: @mut DetermineRpCtxt) {
do cx.with(cx.item_id, false) {
do cx.with_ambient_variance(rv_contravariant) {
for a in decl.inputs.iter() {
- (visitor.visit_ty)(&a.ty, (cx, visitor));
+ visitor.visit_ty(&a.ty, cx);
}
}
- (visitor.visit_ty)(&decl.output, (cx, visitor));
- let generics = oldvisit::generics_of_fn(fk);
- (visitor.visit_generics)(&generics, (cx, visitor));
- (visitor.visit_block)(body, (cx, visitor));
+ visitor.visit_ty(&decl.output, cx);
+ let generics = visit::generics_of_fn(fk);
+ visitor.visit_generics(&generics, cx);
+ visitor.visit_block(body, cx);
}
}
-fn determine_rp_in_ty_method(ty_m: &ast::TypeMethod,
- (cx, visitor):
- (@mut DetermineRpCtxt,
- oldvisit::vt<@mut DetermineRpCtxt>)) {
+fn determine_rp_in_ty_method(visitor: &mut DetermineRpVisitor,
+ ty_m: &ast::TypeMethod,
+ cx: @mut DetermineRpCtxt) {
do cx.with(cx.item_id, false) {
- oldvisit::visit_ty_method(ty_m, (cx, visitor));
+ visit::walk_ty_method(visitor, ty_m, cx);
}
}
-fn determine_rp_in_ty(ty: &ast::Ty,
- (cx, visitor): (@mut DetermineRpCtxt,
- oldvisit::vt<@mut DetermineRpCtxt>)) {
+fn determine_rp_in_ty(visitor: &mut DetermineRpVisitor,
+ ty: &ast::Ty,
+ cx: @mut DetermineRpCtxt) {
// we are only interested in types that will require an item to
// be region-parameterized. if cx.item_id is zero, then this type
    // is not a member of a type defn nor is it a constituent of an
match ty.node {
ast::ty_box(ref mt) | ast::ty_uniq(ref mt) | ast::ty_vec(ref mt) |
ast::ty_rptr(_, ref mt) | ast::ty_ptr(ref mt) => {
- visit_mt(mt, (cx, visitor));
+ visit_mt(visitor, mt, cx);
}
ast::ty_path(ref path, _, _) => {
// type parameters are---for now, anyway---always invariant
do cx.with_ambient_variance(rv_invariant) {
for tp in path.types.iter() {
- (visitor.visit_ty)(tp, (cx, visitor));
+ visitor.visit_ty(tp, cx);
}
}
}
// parameters are contravariant
do cx.with_ambient_variance(rv_contravariant) {
for a in decl.inputs.iter() {
- (visitor.visit_ty)(&a.ty, (cx, visitor));
+ visitor.visit_ty(&a.ty, cx);
}
}
- (visitor.visit_ty)(&decl.output, (cx, visitor));
+ visitor.visit_ty(&decl.output, cx);
}
}
_ => {
- oldvisit::visit_ty(ty, (cx, visitor));
+ visit::walk_ty(visitor, ty, cx);
}
}
- fn visit_mt(mt: &ast::mt,
- (cx, visitor): (@mut DetermineRpCtxt,
- oldvisit::vt<@mut DetermineRpCtxt>)) {
+ fn visit_mt(visitor: &mut DetermineRpVisitor,
+ mt: &ast::mt,
+ cx: @mut DetermineRpCtxt) {
// mutability is invariant
if mt.mutbl == ast::m_mutbl {
do cx.with_ambient_variance(rv_invariant) {
- (visitor.visit_ty)(mt.ty, (cx, visitor));
+ visitor.visit_ty(mt.ty, cx);
}
} else {
- (visitor.visit_ty)(mt.ty, (cx, visitor));
+ visitor.visit_ty(mt.ty, cx);
}
}
}
-fn determine_rp_in_struct_field(
- cm: @ast::struct_field,
- (cx, visitor): (@mut DetermineRpCtxt,
- oldvisit::vt<@mut DetermineRpCtxt>)) {
- oldvisit::visit_struct_field(cm, (cx, visitor));
+fn determine_rp_in_struct_field(visitor: &mut DetermineRpVisitor,
+ cm: @ast::struct_field,
+ cx: @mut DetermineRpCtxt) {
+ visit::walk_struct_field(visitor, cm, cx);
+}
+
+struct DetermineRpVisitor;
+
+impl Visitor<@mut DetermineRpCtxt> for DetermineRpVisitor {
+
+ fn visit_fn(&mut self, fk:&fn_kind, fd:&fn_decl,
+ b:&Block, s:span, n:NodeId, e:@mut DetermineRpCtxt) {
+ determine_rp_in_fn(self, fk, fd, b, s, n, e);
+ }
+ fn visit_item(&mut self, i:@item, e:@mut DetermineRpCtxt) {
+ determine_rp_in_item(self, i, e);
+ }
+ fn visit_ty(&mut self, t:&Ty, e:@mut DetermineRpCtxt) {
+ determine_rp_in_ty(self, t, e);
+ }
+ fn visit_ty_method(&mut self, t:&TypeMethod, e:@mut DetermineRpCtxt) {
+ determine_rp_in_ty_method(self, t, e);
+ }
+ fn visit_struct_field(&mut self, s:@struct_field, e:@mut DetermineRpCtxt) {
+ determine_rp_in_struct_field(self, s, e);
+ }
+
}
pub fn determine_rp_in_crate(sess: Session,
};
// Gather up the base set, worklist and dep_map
- let visitor = oldvisit::mk_vt(@oldvisit::Visitor {
- visit_fn: determine_rp_in_fn,
- visit_item: determine_rp_in_item,
- visit_ty: determine_rp_in_ty,
- visit_ty_method: determine_rp_in_ty_method,
- visit_struct_field: determine_rp_in_struct_field,
- .. *oldvisit::default_visitor()
- });
- oldvisit::visit_crate(crate, (cx, visitor));
+ let mut visitor = DetermineRpVisitor;
+ visit::walk_crate(&mut visitor, crate, cx);
// Propagate indirect dependencies
//
use syntax::ast_util::{Privacy, Public, Private};
use syntax::ast_util::{variant_visibility_to_privacy, visibility_to_privacy};
use syntax::attr;
-use syntax::oldvisit::{mk_simple_visitor, default_simple_visitor};
-use syntax::oldvisit::{default_visitor, mk_vt, Visitor, visit_block};
-use syntax::oldvisit::{visit_crate, visit_expr, visit_expr_opt};
-use syntax::oldvisit::{visit_foreign_item, visit_item};
-use syntax::oldvisit::{visit_mod, visit_ty, vt, SimpleVisitor};
use syntax::parse::token;
use syntax::parse::token::ident_interner;
use syntax::parse::token::special_idents;
use syntax::print::pprust::path_to_str;
use syntax::codemap::{span, dummy_sp, BytePos};
use syntax::opt_vec::OptVec;
+use syntax::visit;
+use syntax::visit::Visitor;
use std::str;
use std::uint;
HasSelfBinding(NodeId, bool /* is implicit */)
}
-pub type ResolveVisitor = vt<()>;
+struct ResolveVisitor {
+ resolver: @mut Resolver,
+}
+
+impl Visitor<()> for ResolveVisitor {
+ fn visit_item(&mut self, item:@item, _:()) {
+ self.resolver.resolve_item(item, self);
+ }
+ fn visit_arm(&mut self, arm:&arm, _:()) {
+ self.resolver.resolve_arm(arm, self);
+ }
+ fn visit_block(&mut self, block:&Block, _:()) {
+ self.resolver.resolve_block(block, self);
+ }
+ fn visit_expr(&mut self, expr:@expr, _:()) {
+ self.resolver.resolve_expr(expr, self);
+ }
+ fn visit_local(&mut self, local:@Local, _:()) {
+ self.resolver.resolve_local(local, self);
+ }
+ fn visit_ty(&mut self, ty:&Ty, _:()) {
+ self.resolver.resolve_type(ty, self);
+ }
+}
/// Contains data for specific types of import directives.
pub enum ImportDirectiveSubclass {
used_imports: HashSet<NodeId>,
}
+struct BuildReducedGraphVisitor {
+ resolver: @mut Resolver,
+}
+
+impl Visitor<ReducedGraphParent> for BuildReducedGraphVisitor {
+
+ fn visit_item(&mut self, item:@item, context:ReducedGraphParent) {
+ self.resolver.build_reduced_graph_for_item(item, (context, self));
+ }
+
+ fn visit_foreign_item(&mut self, foreign_item:@foreign_item, context:ReducedGraphParent) {
+ self.resolver.build_reduced_graph_for_foreign_item(foreign_item,
+ (context,
+ self));
+ }
+
+ fn visit_view_item(&mut self, view_item:&view_item, context:ReducedGraphParent) {
+ self.resolver.build_reduced_graph_for_view_item(view_item,
+ (context,
+ self));
+ }
+
+ fn visit_block(&mut self, block:&Block, context:ReducedGraphParent) {
+ self.resolver.build_reduced_graph_for_block(block,
+ (context,
+ self));
+ }
+
+}
+
+struct UnusedImportCheckVisitor { resolver: @mut Resolver }
+
+impl Visitor<()> for UnusedImportCheckVisitor {
+ fn visit_view_item(&mut self, vi:&view_item, _:()) {
+ self.resolver.check_for_item_unused_imports(vi);
+ visit::walk_view_item(self, vi, ());
+ }
+}
+
impl Resolver {
/// The main name resolution procedure.
pub fn resolve(@mut self) {
pub fn build_reduced_graph(@mut self) {
let initial_parent =
ModuleReducedGraphParent(self.graph_root.get_module());
- visit_crate(self.crate, (initial_parent, mk_vt(@Visitor {
- visit_item: |item, (context, visitor)|
- self.build_reduced_graph_for_item(item, (context, visitor)),
-
- visit_foreign_item: |foreign_item, (context, visitor)|
- self.build_reduced_graph_for_foreign_item(foreign_item,
- (context,
- visitor)),
-
- visit_view_item: |view_item, (context, visitor)|
- self.build_reduced_graph_for_view_item(view_item,
- (context,
- visitor)),
-
- visit_block: |block, (context, visitor)|
- self.build_reduced_graph_for_block(block,
- (context,
- visitor)),
- .. *default_visitor()
- })));
+ let mut visitor = BuildReducedGraphVisitor { resolver: self, };
+ visit::walk_crate(&mut visitor, self.crate, initial_parent);
}
/// Returns the current module tracked by the reduced graph parent.
pub fn build_reduced_graph_for_item(@mut self,
item: @item,
(parent, visitor): (ReducedGraphParent,
- vt<ReducedGraphParent>)) {
+ &mut BuildReducedGraphVisitor)) {
let ident = item.ident;
let sp = item.span;
let privacy = visibility_to_privacy(item.vis);
let new_parent =
ModuleReducedGraphParent(name_bindings.get_module());
- visit_mod(module_, sp, item.id, (new_parent, visitor));
+ visit::walk_mod(visitor, module_, new_parent);
}
item_foreign_mod(ref fm) => {
anonymous => parent
};
- visit_item(item, (new_parent, visitor));
+ visit::walk_item(visitor, item, new_parent);
}
// These items live in the value namespace.
let def = def_fn(local_def(item.id), purity);
name_bindings.define_value(privacy, def, sp);
- visit_item(item, (new_parent, visitor));
+ visit::walk_item(visitor, item, new_parent);
}
// These items live in the type namespace.
// inherited => privacy of the enum item
variant_visibility_to_privacy(variant.node.vis,
privacy == Public),
- (new_parent, visitor));
+ new_parent, visitor);
}
}
// Record the def ID of this struct.
self.structs.insert(local_def(item.id));
- visit_item(item, (new_parent, visitor));
+ visit::walk_item(visitor, item, new_parent);
}
item_impl(_, None, ref ty, ref methods) => {
_ => {}
}
- visit_item(item, (parent, visitor));
+ visit::walk_item(visitor, item, parent);
}
item_impl(_, Some(_), _, _) => {
- visit_item(item, (parent, visitor));
+ visit::walk_item(visitor, item, parent);
}
item_trait(_, _, ref methods) => {
}
name_bindings.define_type(privacy, def_trait(def_id), sp);
- visit_item(item, (new_parent, visitor));
+ visit::walk_item(visitor, item, new_parent);
}
item_mac(*) => {
variant: &variant,
item_id: def_id,
parent_privacy: Privacy,
- (parent, _visitor):
- (ReducedGraphParent,
- vt<ReducedGraphParent>)) {
+ parent: ReducedGraphParent,
+ _: &mut BuildReducedGraphVisitor) {
let ident = variant.node.name;
let privacy =
view_item: &view_item,
(parent, _):
(ReducedGraphParent,
- vt<ReducedGraphParent>)) {
+ &mut BuildReducedGraphVisitor)) {
let privacy = visibility_to_privacy(view_item.vis);
match view_item.node {
view_item_use(ref view_paths) => {
foreign_item: @foreign_item,
(parent, visitor):
(ReducedGraphParent,
- vt<ReducedGraphParent>)) {
+ &mut BuildReducedGraphVisitor)) {
let name = foreign_item.ident;
let (name_bindings, new_parent) =
self.add_child(name, parent, ForbidDuplicateValues,
HasTypeParameters(
generics, foreign_item.id, 0, NormalRibKind))
{
- visit_foreign_item(foreign_item, (new_parent, visitor));
+ visit::walk_foreign_item(visitor, foreign_item, new_parent);
}
}
foreign_item_static(_, m) => {
let def = def_static(local_def(foreign_item.id), m);
name_bindings.define_value(Public, def, foreign_item.span);
- visit_foreign_item(foreign_item, (new_parent, visitor));
+ visit::walk_foreign_item(visitor, foreign_item, new_parent);
}
}
}
block: &Block,
(parent, visitor):
(ReducedGraphParent,
- vt<ReducedGraphParent>)) {
+ &mut BuildReducedGraphVisitor)) {
let new_parent;
if self.block_needs_anonymous_module(block) {
let block_id = block.id;
new_parent = parent;
}
- visit_block(block, (new_parent, visitor));
+ visit::walk_block(visitor, block, new_parent);
}
pub fn handle_external_def(@mut self,
pub fn resolve_crate(@mut self) {
debug!("(resolving crate) starting");
- visit_crate(self.crate, ((), mk_vt(@Visitor {
- visit_item: |item, (_context, visitor)|
- self.resolve_item(item, visitor),
- visit_arm: |arm, (_context, visitor)|
- self.resolve_arm(arm, visitor),
- visit_block: |block, (_context, visitor)|
- self.resolve_block(block, visitor),
- visit_expr: |expr, (_context, visitor)|
- self.resolve_expr(expr, visitor),
- visit_local: |local, (_context, visitor)|
- self.resolve_local(local, visitor),
- visit_ty: |ty, (_context, visitor)|
- self.resolve_type(ty, visitor),
- .. *default_visitor()
- })));
+ let mut visitor = ResolveVisitor{ resolver: self };
+ visit::walk_crate(&mut visitor, self.crate, ());
}
- pub fn resolve_item(@mut self, item: @item, visitor: ResolveVisitor) {
+ pub fn resolve_item(@mut self, item: @item, visitor: &mut ResolveVisitor) {
debug!("(resolving item) resolving %s",
self.session.str_of(item.ident));
do self.with_type_parameter_rib(
HasTypeParameters(
generics, item.id, 0, NormalRibKind)) {
- visit_item(item, ((), visitor));
+ visit::walk_item(visitor, item, ());
}
}
NormalRibKind))
|| {
- visit_item(item, ((), visitor));
+ visit::walk_item(visitor, item, ());
}
}
HasTypeParameters(
generics, foreign_item.id, 0,
NormalRibKind),
- || visit_foreign_item(*foreign_item,
- ((), visitor)));
+ || visit::walk_foreign_item(visitor,
+ *foreign_item,
+ ()));
}
foreign_item_static(*) => {
- visit_foreign_item(*foreign_item,
- ((), visitor));
+ visit::walk_foreign_item(visitor,
+ *foreign_item,
+ ());
}
}
}
item_static(*) => {
self.with_constant_rib(|| {
- visit_item(item, ((), visitor));
+ visit::walk_item(visitor, item, ());
});
}
type_parameters: TypeParameters,
block: &Block,
self_binding: SelfBinding,
- visitor: ResolveVisitor) {
+ visitor: &mut ResolveVisitor) {
// Create a value rib for the function.
let function_value_rib = @Rib(rib_kind);
self.value_ribs.push(function_value_rib);
pub fn resolve_type_parameters(@mut self,
type_parameters: &OptVec<TyParam>,
- visitor: ResolveVisitor) {
+ visitor: &mut ResolveVisitor) {
for type_parameter in type_parameters.iter() {
for bound in type_parameter.bounds.iter() {
self.resolve_type_parameter_bound(type_parameter.id, bound, visitor);
pub fn resolve_type_parameter_bound(@mut self,
id: NodeId,
type_parameter_bound: &TyParamBound,
- visitor: ResolveVisitor) {
+ visitor: &mut ResolveVisitor) {
match *type_parameter_bound {
TraitTyParamBound(ref tref) => {
self.resolve_trait_reference(id, tref, visitor, TraitBoundingTypeParameter)
pub fn resolve_trait_reference(@mut self,
id: NodeId,
trait_reference: &trait_ref,
- visitor: ResolveVisitor,
+ visitor: &mut ResolveVisitor,
reference_type: TraitReferenceType) {
match self.resolve_path(id, &trait_reference.path, TypeNS, true, visitor) {
None => {
id: NodeId,
generics: &Generics,
fields: &[@struct_field],
- visitor: ResolveVisitor) {
+ visitor: &mut ResolveVisitor) {
let mut ident_map = HashMap::new::<ast::ident, @struct_field>();
for &field in fields.iter() {
match field.node.kind {
rib_kind: RibKind,
method: @method,
outer_type_parameter_count: uint,
- visitor: ResolveVisitor) {
+ visitor: &mut ResolveVisitor) {
let method_generics = &method.generics;
let type_parameters =
HasTypeParameters(method_generics,
opt_trait_reference: &Option<trait_ref>,
self_type: &Ty,
methods: &[@method],
- visitor: ResolveVisitor) {
+ visitor: &mut ResolveVisitor) {
// If applicable, create a rib for the type parameters.
let outer_type_parameter_count = generics.ty_params.len();
do self.with_type_parameter_rib(HasTypeParameters
pub fn resolve_module(@mut self,
module_: &_mod,
- span: span,
+ _span: span,
_name: ident,
id: NodeId,
- visitor: ResolveVisitor) {
+ visitor: &mut ResolveVisitor) {
// Write the implementations in scope into the module metadata.
debug!("(resolving module) resolving module ID %d", id);
- visit_mod(module_, span, id, ((), visitor));
+ visit::walk_mod(visitor, module_, ());
}
- pub fn resolve_local(@mut self, local: @Local, visitor: ResolveVisitor) {
+ pub fn resolve_local(@mut self, local: @Local, visitor: &mut ResolveVisitor) {
let mutability = if local.is_mutbl {Mutable} else {Immutable};
// Resolve the type.
}
}
- pub fn resolve_arm(@mut self, arm: &arm, visitor: ResolveVisitor) {
+ pub fn resolve_arm(@mut self, arm: &arm, visitor: &mut ResolveVisitor) {
self.value_ribs.push(@Rib(NormalRibKind));
let bindings_list = @mut HashMap::new();
// pat_idents are variants
self.check_consistent_bindings(arm);
- visit_expr_opt(arm.guard, ((), visitor));
+ visit::walk_expr_opt(visitor, arm.guard, ());
self.resolve_block(&arm.body, visitor);
self.value_ribs.pop();
}
- pub fn resolve_block(@mut self, block: &Block, visitor: ResolveVisitor) {
+ pub fn resolve_block(@mut self, block: &Block, visitor: &mut ResolveVisitor) {
debug!("(resolving block) entering block");
self.value_ribs.push(@Rib(NormalRibKind));
}
// Descend into the block.
- visit_block(block, ((), visitor));
+ visit::walk_block(visitor, block, ());
// Move back up.
self.current_module = orig_module;
debug!("(resolving block) leaving block");
}
- pub fn resolve_type(@mut self, ty: &Ty, visitor: ResolveVisitor) {
+ pub fn resolve_type(@mut self, ty: &Ty, visitor: &mut ResolveVisitor) {
match ty.node {
// Like path expressions, the interpretation of path types depends
// on whether the path has multiple elements in it or not.
self.resolve_type_parameter_bound(ty.id, bound, visitor);
}
};
- visit_ty(ty, ((), visitor));
+ visit::walk_ty(visitor, ty, ());
}
_ => {
// Just resolve embedded types.
- visit_ty(ty, ((), visitor));
+ visit::walk_ty(visitor, ty, ());
}
}
}
// Maps idents to the node ID for the (outermost)
// pattern that binds them
bindings_list: Option<@mut HashMap<ident,NodeId>>,
- visitor: ResolveVisitor) {
+ visitor: &mut ResolveVisitor) {
let pat_id = pattern.id;
do walk_pat(pattern) |pattern| {
match pattern.node {
path: &Path,
namespace: Namespace,
check_ribs: bool,
- visitor: ResolveVisitor)
+ visitor: &mut ResolveVisitor)
-> Option<def> {
// First, resolve the types.
for ty in path.types.iter() {
return false;
}
- pub fn resolve_expr(@mut self, expr: @expr, visitor: ResolveVisitor) {
+ pub fn resolve_expr(@mut self, expr: @expr, visitor: &mut ResolveVisitor) {
// First, record candidate traits for this expression if it could
// result in the invocation of a method call.
}
}
- visit_expr(expr, ((), visitor));
+ visit::walk_expr(visitor, expr, ());
}
expr_fn_block(ref fn_decl, ref block) => {
}
}
- visit_expr(expr, ((), visitor));
+ visit::walk_expr(visitor, expr, ());
}
expr_loop(_, Some(label)) => {
rib.bindings.insert(label, def_like);
}
- visit_expr(expr, ((), visitor));
+ visit::walk_expr(visitor, expr, ());
}
}
}
_ => {
- visit_expr(expr, ((), visitor));
+ visit::walk_expr(visitor, expr, ());
}
}
}
//
pub fn check_for_unused_imports(@mut self) {
- let vt = mk_simple_visitor(@SimpleVisitor {
- visit_view_item: |vi| self.check_for_item_unused_imports(vi),
- .. *default_simple_visitor()
- });
- visit_crate(self.crate, ((), vt));
+ let mut visitor = UnusedImportCheckVisitor{ resolver: self };
+ visit::walk_crate(&mut visitor, self.crate, ());
}
pub fn check_for_item_unused_imports(&mut self, vi: &view_item) {
--- /dev/null
+// Copyright 2012-2013 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+/*!
+
+Lint mode to detect cases where we call non-Rust fns, which do not
+have a stack growth check, from locations not annotated to request
+large stacks.
+
+*/
+
+use middle::lint;
+use middle::ty;
+use syntax::ast;
+use syntax::ast_map;
+use syntax::attr;
+use syntax::codemap::span;
+use visit = syntax::oldvisit;
+use util::ppaux::Repr;
+
+#[deriving(Clone)]
+struct Context {
+ tcx: ty::ctxt,
+ safe_stack: bool
+}
+
+pub fn stack_check_crate(tcx: ty::ctxt,
+ crate: &ast::Crate) {
+ let new_cx = Context {
+ tcx: tcx,
+ safe_stack: false
+ };
+ let visitor = visit::mk_vt(@visit::Visitor {
+ visit_item: stack_check_item,
+ visit_fn: stack_check_fn,
+ visit_expr: stack_check_expr,
+ ..*visit::default_visitor()
+ });
+ visit::visit_crate(crate, (new_cx, visitor));
+}
+
+fn stack_check_item(item: @ast::item,
+ (in_cx, v): (Context, visit::vt<Context>)) {
+ match item.node {
+ ast::item_fn(_, ast::extern_fn, _, _, _) => {
+ // an extern fn is already being called from C code...
+ let new_cx = Context {safe_stack: true, ..in_cx};
+ visit::visit_item(item, (new_cx, v));
+ }
+ ast::item_fn(*) => {
+ let safe_stack = fixed_stack_segment(item.attrs);
+ let new_cx = Context {safe_stack: safe_stack, ..in_cx};
+ visit::visit_item(item, (new_cx, v));
+ }
+ ast::item_impl(_, _, _, ref methods) => {
+ // visit_method() would make this nicer
+ for &method in methods.iter() {
+ let safe_stack = fixed_stack_segment(method.attrs);
+ let new_cx = Context {safe_stack: safe_stack, ..in_cx};
+ visit::visit_method_helper(method, (new_cx, v));
+ }
+ }
+ _ => {
+ visit::visit_item(item, (in_cx, v));
+ }
+ }
+
+ fn fixed_stack_segment(attrs: &[ast::Attribute]) -> bool {
+ attr::contains_name(attrs, "fixed_stack_segment")
+ }
+}
+
+fn stack_check_fn<'a>(fk: &visit::fn_kind,
+ decl: &ast::fn_decl,
+ body: &ast::Block,
+ sp: span,
+ id: ast::NodeId,
+ (in_cx, v): (Context, visit::vt<Context>)) {
+ let safe_stack = match *fk {
+ visit::fk_method(*) | visit::fk_item_fn(*) => {
+ in_cx.safe_stack // see stack_check_item above
+ }
+ visit::fk_anon(*) | visit::fk_fn_block => {
+ match ty::get(ty::node_id_to_type(in_cx.tcx, id)).sty {
+ ty::ty_bare_fn(*) |
+ ty::ty_closure(ty::ClosureTy {sigil: ast::OwnedSigil, _}) |
+ ty::ty_closure(ty::ClosureTy {sigil: ast::ManagedSigil, _}) => {
+ false
+ }
+ _ => {
+ in_cx.safe_stack
+ }
+ }
+ }
+ };
+ let new_cx = Context {safe_stack: safe_stack, ..in_cx};
+ debug!("stack_check_fn(safe_stack=%b, id=%?)", safe_stack, id);
+ visit::visit_fn(fk, decl, body, sp, id, (new_cx, v));
+}
+
+fn stack_check_expr<'a>(expr: @ast::expr,
+ (cx, v): (Context, visit::vt<Context>)) {
+ debug!("stack_check_expr(safe_stack=%b, expr=%s)",
+ cx.safe_stack, expr.repr(cx.tcx));
+ if !cx.safe_stack {
+ match expr.node {
+ ast::expr_call(callee, _, _) => {
+ let callee_ty = ty::expr_ty(cx.tcx, callee);
+ debug!("callee_ty=%s", callee_ty.repr(cx.tcx));
+ match ty::get(callee_ty).sty {
+ ty::ty_bare_fn(ref fty) => {
+ if !fty.abis.is_rust() && !fty.abis.is_intrinsic() {
+ call_to_extern_fn(cx, callee);
+ }
+ }
+ _ => {}
+ }
+ }
+ _ => {}
+ }
+ }
+ visit::visit_expr(expr, (cx, v));
+}
+
+fn call_to_extern_fn(cx: Context, callee: @ast::expr) {
+ // Permit direct calls to extern fns that are annotated with
+ // #[rust_stack]. This is naturally a horrible pain to achieve.
+ match callee.node {
+ ast::expr_path(*) => {
+ match cx.tcx.def_map.find(&callee.id) {
+ Some(&ast::def_fn(id, _)) if id.crate == ast::LOCAL_CRATE => {
+ match cx.tcx.items.find(&id.node) {
+ Some(&ast_map::node_foreign_item(item, _, _, _)) => {
+ if attr::contains_name(item.attrs, "rust_stack") {
+ return;
+ }
+ }
+ _ => {}
+ }
+ }
+ _ => {}
+ }
+ }
+ _ => {}
+ }
+
+ cx.tcx.sess.add_lint(lint::cstack,
+ callee.id,
+ callee.span,
+ fmt!("invoking non-Rust fn in fn without \
+ #[fixed_stack_segment]"));
+}
use std::hashmap::HashMap;
use std::io;
use std::libc::c_uint;
-use std::uint;
use std::vec;
use std::local_data;
use extra::time;
use syntax::parse::token;
use syntax::parse::token::{special_idents};
use syntax::print::pprust::stmt_to_str;
-use syntax::oldvisit;
+use syntax::visit;
use syntax::{ast, ast_util, codemap, ast_map};
use syntax::abi::{X86, X86_64, Arm, Mips};
return llfn;
}
-pub fn get_extern_fn(externs: &mut ExternMap, llmod: ModuleRef, name: @str,
+pub fn get_extern_fn(externs: &mut ExternMap, llmod: ModuleRef, name: &str,
cc: lib::llvm::CallConv, ty: Type) -> ValueRef {
- match externs.find_copy(&name) {
- Some(n) => return n,
+ match externs.find_equiv(&name) {
+ Some(n) => return *n,
None => ()
}
let f = decl_fn(llmod, name, cc, ty);
- externs.insert(name, f);
+ externs.insert(name.to_owned(), f);
return f;
}
pub fn get_extern_const(externs: &mut ExternMap, llmod: ModuleRef,
- name: @str, ty: Type) -> ValueRef {
- match externs.find_copy(&name) {
- Some(n) => return n,
+ name: &str, ty: Type) -> ValueRef {
+ match externs.find_equiv(&name) {
+ Some(n) => return *n,
None => ()
}
unsafe {
let c = do name.with_c_str |buf| {
llvm::LLVMAddGlobal(llmod, ty.to_ref(), buf)
};
- externs.insert(name, c);
+ externs.insert(name.to_owned(), c);
return c;
}
}
lib::llvm::SetFunctionAttribute(f, lib::llvm::InlineHintAttribute)
}
-pub fn set_inline_hint_if_appr(attrs: &[ast::Attribute],
- llfn: ValueRef) {
+pub fn set_llvm_fn_attrs(attrs: &[ast::Attribute], llfn: ValueRef) {
use syntax::attr::*;
+ // Set the inline hint if there is one
match find_inline_attr(attrs) {
InlineHint => set_inline_hint(llfn),
InlineAlways => set_always_inline(llfn),
InlineNever => set_no_inline(llfn),
InlineNone => { /* fallthrough */ }
}
+
+ // Add the no-split-stack attribute if requested
+ if contains_name(attrs, "no_split_stack") {
+ set_no_split_stack(llfn);
+ }
}
pub fn set_always_inline(f: ValueRef) {
}
pub fn set_fixed_stack_segment(f: ValueRef) {
- lib::llvm::SetFixedStackSegmentAttribute(f);
+ do "fixed-stack-segment".to_c_str().with_ref |buf| {
+ unsafe { llvm::LLVMAddFunctionAttrString(f, buf); }
+ }
+}
+
+pub fn set_no_split_stack(f: ValueRef) {
+ do "no-split-stack".to_c_str().with_ref |buf| {
+ unsafe { llvm::LLVMAddFunctionAttrString(f, buf); }
+ }
}
pub fn set_glue_inlining(f: ValueRef, t: ty::t) {
None,
ty::lookup_item_type(tcx, parent_id).ty);
let llty = type_of_dtor(ccx, class_ty);
- let name = name.to_managed(); // :-(
get_extern_fn(&mut ccx.externs,
ccx.llmod,
name,
for variant in (*variants).iter() {
let variant_cx =
sub_block(cx, ~"enum-iter-variant-" +
- uint::to_str(variant.disr_val));
+ variant.disr_val.to_str());
let variant_cx =
iter_variant(variant_cx, repr, av, *variant,
substs.tps, |x,y,z| f(x,y,z));
}
}
-pub fn null_env_ptr(bcx: @mut Block) -> ValueRef {
- C_null(Type::opaque_box(bcx.ccx()).ptr_to())
+pub fn null_env_ptr(ccx: &CrateContext) -> ValueRef {
+ C_null(Type::opaque_box(ccx).ptr_to())
}
pub fn trans_external_path(ccx: &mut CrateContext, did: ast::def_id, t: ty::t)
-> ValueRef {
- let name = csearch::get_symbol(ccx.sess.cstore, did).to_managed(); // Sad
+ let name = csearch::get_symbol(ccx.sess.cstore, did);
match ty::get(t).sty {
ty::ty_bare_fn(_) | ty::ty_closure(_) => {
let llty = type_of_fn_from_ty(ccx, t);
// slot where the return value of the function must go.
pub fn make_return_pointer(fcx: @mut FunctionContext, output_type: ty::t) -> ValueRef {
unsafe {
- if !ty::type_is_immediate(fcx.ccx.tcx, output_type) {
+ if type_of::return_uses_outptr(fcx.ccx.tcx, output_type) {
llvm::LLVMGetParam(fcx.llfn, 0)
} else {
let lloutputtype = type_of::type_of(fcx.ccx, output_type);
ty::subst_tps(ccx.tcx, substs.tys, substs.self_ty, output_type)
}
};
- let is_immediate = ty::type_is_immediate(ccx.tcx, substd_output_type);
+ let uses_outptr = type_of::return_uses_outptr(ccx.tcx, substd_output_type);
let fcx = @mut FunctionContext {
llfn: llfndecl,
llenv: unsafe {
llreturn: None,
llself: None,
personality: None,
- has_immediate_return_value: is_immediate,
+ caller_expects_out_pointer: uses_outptr,
llargs: @mut HashMap::new(),
lllocals: @mut HashMap::new(),
llupvars: @mut HashMap::new(),
fcx.alloca_insert_pt = Some(llvm::LLVMGetFirstInstruction(entry_bcx.llbb));
}
- if !ty::type_is_nil(substd_output_type) && !(is_immediate && skip_retptr) {
- fcx.llretptr = Some(make_return_pointer(fcx, substd_output_type));
+ if !ty::type_is_voidish(substd_output_type) {
+ // If the function returns nil/bot, there is no real return
+ // value, so do not set `llretptr`.
+ if !skip_retptr || uses_outptr {
+ // Otherwise, we normally allocate the llretptr, unless we
+ // have been instructed to skip it for immediate return
+ // values.
+ fcx.llretptr = Some(make_return_pointer(fcx, substd_output_type));
+ }
}
fcx
}
// Builds the return block for a function.
pub fn build_return_block(fcx: &FunctionContext, ret_cx: @mut Block) {
    // Return the value if this function is immediate; otherwise, return void.
- if fcx.llretptr.is_none() || !fcx.has_immediate_return_value {
+ if fcx.llretptr.is_none() || fcx.caller_expects_out_pointer {
return RetVoid(ret_cx);
}
// translation calls that don't have a return value (trans_crate,
// trans_mod, trans_item, et cetera) and those that do
// (trans_block, trans_expr, et cetera).
- if body.expr.is_none() || ty::type_is_bot(block_ty) ||
- ty::type_is_nil(block_ty)
- {
+ if body.expr.is_none() || ty::type_is_voidish(block_ty) {
bcx = controlflow::trans_block(bcx, body, expr::Ignore);
} else {
let dest = expr::SaveIn(fcx.llretptr.unwrap());
ast::item_fn(ref decl, purity, _abis, ref generics, ref body) => {
if purity == ast::extern_fn {
let llfndecl = get_item_val(ccx, item.id);
- foreign::trans_foreign_fn(ccx,
- vec::append((*path).clone(),
- [path_name(item.ident)]),
- decl,
- body,
- llfndecl,
- item.id);
+ foreign::trans_rust_fn_with_foreign_abi(
+ ccx,
+ &vec::append((*path).clone(),
+ [path_name(item.ident)]),
+ decl,
+ body,
+ llfndecl,
+ item.id);
} else if !generics.is_type_parameterized() {
let llfndecl = get_item_val(ccx, item.id);
trans_fn(ccx,
}
},
ast::item_foreign_mod(ref foreign_mod) => {
- foreign::trans_foreign_mod(ccx, path, foreign_mod);
+ foreign::trans_foreign_mod(ccx, foreign_mod);
}
ast::item_struct(struct_def, ref generics) => {
if !generics.is_type_parameterized() {
fn create_main(ccx: @mut CrateContext, main_llfn: ValueRef) -> ValueRef {
let nt = ty::mk_nil();
-
- let llfty = type_of_fn(ccx, [], nt);
+ let llfty = type_of_rust_fn(ccx, [], nt);
let llfdecl = decl_fn(ccx.llmod, "_rust_main",
lib::llvm::CCallConv, llfty);
    // the args vector built in create_entry_fn will need to
    // be updated if this assertion starts to fail.
- assert!(fcx.has_immediate_return_value);
+ assert!(!fcx.caller_expects_out_pointer);
let bcx = fcx.entry_bcx.unwrap();
// Call main.
let llfn = if purity != ast::extern_fn {
register_fn(ccx, i.span, sym, i.id, ty)
} else {
- foreign::register_foreign_fn(ccx, i.span, sym, i.id)
+ foreign::register_rust_fn_with_foreign_abi(ccx,
+ i.span,
+ sym,
+ i.id)
};
- set_inline_hint_if_appr(i.attrs, llfn);
+ set_llvm_fn_attrs(i.attrs, llfn);
llfn
}
register_method(ccx, id, pth, m)
}
- ast_map::node_foreign_item(ni, _, _, pth) => {
+ ast_map::node_foreign_item(ni, abis, _, pth) => {
let ty = ty::node_id_to_type(ccx.tcx, ni.id);
exprt = true;
match ni.node {
ast::foreign_item_fn(*) => {
let path = vec::append((*pth).clone(), [path_name(ni.ident)]);
- let sym = exported_name(ccx, path, ty, ni.attrs);
-
- register_fn(ccx, ni.span, sym, ni.id, ty)
+ foreign::register_foreign_item_fn(ccx, abis, &path, ni)
}
ast::foreign_item_static(*) => {
let ident = token::ident_to_str(&ni.ident);
let sym = exported_name(ccx, path, mty, m.attrs);
let llfn = register_fn(ccx, m.span, sym, id, mty);
- set_inline_hint_if_appr(m.attrs, llfn);
+ set_llvm_fn_attrs(m.attrs, llfn);
llfn
}
}
}
+struct TransConstantsVisitor { ccx: @mut CrateContext }
+
+impl visit::Visitor<()> for TransConstantsVisitor {
+ fn visit_item(&mut self, i:@ast::item, _:()) {
+ trans_constant(self.ccx, i);
+ visit::walk_item(self, i, ());
+ }
+}
+
pub fn trans_constants(ccx: @mut CrateContext, crate: &ast::Crate) {
- oldvisit::visit_crate(
- crate, ((),
- oldvisit::mk_simple_visitor(@oldvisit::SimpleVisitor {
- visit_item: |a| trans_constant(ccx, a),
- ..*oldvisit::default_simple_visitor()
- })));
+ let mut v = TransConstantsVisitor { ccx: ccx };
+ visit::walk_crate(&mut v, crate, ());
}
pub fn vp2i(cx: @mut Block, v: ValueRef) -> ValueRef {
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-use lib::llvm::{llvm, ValueRef, Attribute, Void};
-use middle::trans::base::*;
-use middle::trans::build::*;
-use middle::trans::common::*;
-
-use middle::trans::type_::Type;
-
-use std::libc::c_uint;
+use lib::llvm::Attribute;
use std::option;
-
-pub trait ABIInfo {
- fn compute_info(&self, atys: &[Type], rty: Type, ret_def: bool) -> FnType;
-}
+use middle::trans::context::CrateContext;
+use middle::trans::cabi_x86;
+use middle::trans::cabi_x86_64;
+use middle::trans::cabi_arm;
+use middle::trans::cabi_mips;
+use middle::trans::type_::Type;
+use syntax::abi::{X86, X86_64, Arm, Mips};
#[deriving(Clone)]
pub struct LLVMType {
ty: Type
}
+/// Metadata describing how the arguments to a native function
+/// should be passed in order to respect the native ABI.
+///
+/// I will do my best to describe this structure, but these
+/// comments are reverse-engineered and may be inaccurate. -NDM
pub struct FnType {
+ /// The LLVM types of each argument. If the cast flag is true,
+ /// then the argument should be cast, typically because the
+ /// official argument type will be an int and the rust type is i8
+ /// or something like that.
arg_tys: ~[LLVMType],
- ret_ty: LLVMType,
- attrs: ~[option::Option<Attribute>],
- sret: bool
-}
-
-impl FnType {
- pub fn decl_fn(&self, decl: &fn(fnty: Type) -> ValueRef) -> ValueRef {
- let atys = self.arg_tys.iter().map(|t| t.ty).collect::<~[Type]>();
- let rty = self.ret_ty.ty;
- let fnty = Type::func(atys, &rty);
- let llfn = decl(fnty);
-
- for (i, a) in self.attrs.iter().enumerate() {
- match *a {
- option::Some(attr) => {
- unsafe {
- let llarg = get_param(llfn, i);
- llvm::LLVMAddAttribute(llarg, attr as c_uint);
- }
- }
- _ => ()
- }
- }
- return llfn;
- }
- pub fn build_shim_args(&self, bcx: @mut Block, arg_tys: &[Type], llargbundle: ValueRef)
- -> ~[ValueRef] {
- let mut atys: &[LLVMType] = self.arg_tys;
- let mut attrs: &[option::Option<Attribute>] = self.attrs;
-
- let mut llargvals = ~[];
- let mut i = 0u;
- let n = arg_tys.len();
-
- if self.sret {
- let llretptr = GEPi(bcx, llargbundle, [0u, n]);
- let llretloc = Load(bcx, llretptr);
- llargvals = ~[llretloc];
- atys = atys.tail();
- attrs = attrs.tail();
- }
-
- while i < n {
- let llargval = if atys[i].cast {
- let arg_ptr = GEPi(bcx, llargbundle, [0u, i]);
- let arg_ptr = BitCast(bcx, arg_ptr, atys[i].ty.ptr_to());
- Load(bcx, arg_ptr)
- } else if attrs[i].is_some() {
- GEPi(bcx, llargbundle, [0u, i])
- } else {
- load_inbounds(bcx, llargbundle, [0u, i])
- };
- llargvals.push(llargval);
- i += 1u;
- }
-
- return llargvals;
- }
-
- pub fn build_shim_ret(&self, bcx: @mut Block, arg_tys: &[Type], ret_def: bool,
- llargbundle: ValueRef, llretval: ValueRef) {
- for (i, a) in self.attrs.iter().enumerate() {
- match *a {
- option::Some(attr) => {
- unsafe {
- llvm::LLVMAddInstrAttribute(llretval, (i + 1u) as c_uint, attr as c_uint);
- }
- }
- _ => ()
- }
- }
- if self.sret || !ret_def {
- return;
- }
- let n = arg_tys.len();
- // R** llretptr = &args->r;
- let llretptr = GEPi(bcx, llargbundle, [0u, n]);
- // R* llretloc = *llretptr; /* (args->r) */
- let llretloc = Load(bcx, llretptr);
- if self.ret_ty.cast {
- let tmp_ptr = BitCast(bcx, llretloc, self.ret_ty.ty.ptr_to());
- // *args->r = r;
- Store(bcx, llretval, tmp_ptr);
- } else {
- // *args->r = r;
- Store(bcx, llretval, llretloc);
- };
- }
-
- pub fn build_wrap_args(&self, bcx: @mut Block, ret_ty: Type,
- llwrapfn: ValueRef, llargbundle: ValueRef) {
- let mut atys: &[LLVMType] = self.arg_tys;
- let mut attrs: &[option::Option<Attribute>] = self.attrs;
- let mut j = 0u;
- let llretptr = if self.sret {
- atys = atys.tail();
- attrs = attrs.tail();
- j = 1u;
- get_param(llwrapfn, 0u)
- } else if self.ret_ty.cast {
- let retptr = alloca(bcx, self.ret_ty.ty, "");
- BitCast(bcx, retptr, ret_ty.ptr_to())
- } else {
- alloca(bcx, ret_ty, "")
- };
+    /// A list of attributes to be attached to each argument (parallels
+    /// the `arg_tys` array). If the attribute for a given argument is
+    /// Some, then the argument should be passed by reference.
+ attrs: ~[option::Option<Attribute>],
- let mut i = 0u;
- let n = atys.len();
- while i < n {
- let mut argval = get_param(llwrapfn, i + j);
- if attrs[i].is_some() {
- argval = Load(bcx, argval);
- store_inbounds(bcx, argval, llargbundle, [0u, i]);
- } else if atys[i].cast {
- let argptr = GEPi(bcx, llargbundle, [0u, i]);
- let argptr = BitCast(bcx, argptr, atys[i].ty.ptr_to());
- Store(bcx, argval, argptr);
- } else {
- store_inbounds(bcx, argval, llargbundle, [0u, i]);
- }
- i += 1u;
- }
- store_inbounds(bcx, llretptr, llargbundle, [0u, n]);
- }
+ /// LLVM return type.
+ ret_ty: LLVMType,
- pub fn build_wrap_ret(&self, bcx: @mut Block, arg_tys: &[Type], llargbundle: ValueRef) {
- if self.ret_ty.ty.kind() == Void {
- return;
- }
+ /// If true, then an implicit pointer should be added for the result.
+ sret: bool
+}
- if bcx.fcx.llretptr.is_some() {
- let llretval = load_inbounds(bcx, llargbundle, [ 0, arg_tys.len() ]);
- let llretval = if self.ret_ty.cast {
- let retptr = BitCast(bcx, llretval, self.ret_ty.ty.ptr_to());
- Load(bcx, retptr)
- } else {
- Load(bcx, llretval)
- };
- let llretptr = BitCast(bcx, bcx.fcx.llretptr.unwrap(), self.ret_ty.ty.ptr_to());
- Store(bcx, llretval, llretptr);
- }
+pub fn compute_abi_info(ccx: &mut CrateContext,
+ atys: &[Type],
+ rty: Type,
+ ret_def: bool) -> FnType {
+ match ccx.sess.targ_cfg.arch {
+ X86 => cabi_x86::compute_abi_info(ccx, atys, rty, ret_def),
+ X86_64 => cabi_x86_64::compute_abi_info(ccx, atys, rty, ret_def),
+ Arm => cabi_arm::compute_abi_info(ccx, atys, rty, ret_def),
+ Mips => cabi_mips::compute_abi_info(ccx, atys, rty, ret_def),
}
}
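The new `compute_abi_info` replaces the old `@ABIInfo` trait-object dispatch with a plain match on the target architecture. The pattern in miniature, as a sketch in present-day Rust (`Arch` and the returned strings are illustrative, not compiler types):

```rust
enum Arch { X86, X86_64, Arm, Mips }

// Instead of allocating a boxed trait object per target and calling
// through it, select the per-architecture implementation directly,
// as the new compute_abi_info dispatcher does.
fn compute(arch: Arch, n_args: usize) -> String {
    match arch {
        Arch::X86 => format!("x86:{}", n_args),
        Arch::X86_64 => format!("x86_64:{}", n_args),
        Arch::Arm => format!("arm:{}", n_args),
        Arch::Mips => format!("mips:{}", n_args),
    }
}

fn main() {
    assert_eq!(compute(Arch::Arm, 2), "arm:2");
    println!("ok");
}
```

The match is exhaustive over the supported architectures, so adding a new target forces every caller-visible case to be handled, which the trait-object version did not guarantee.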
use lib::llvm::{llvm, Integer, Pointer, Float, Double, Struct, Array};
use lib::llvm::{Attribute, StructRetAttribute};
-use middle::trans::cabi::{ABIInfo, FnType, LLVMType};
+use middle::trans::cabi::{FnType, LLVMType};
+use middle::trans::context::CrateContext;
use middle::trans::type_::Type;
}
}
-enum ARM_ABIInfo { ARM_ABIInfo }
-
-impl ABIInfo for ARM_ABIInfo {
- fn compute_info(&self,
- atys: &[Type],
- rty: Type,
- ret_def: bool) -> FnType {
- let mut arg_tys = ~[];
- let mut attrs = ~[];
- for &aty in atys.iter() {
- let (ty, attr) = classify_arg_ty(aty);
- arg_tys.push(ty);
- attrs.push(attr);
- }
-
- let (ret_ty, ret_attr) = if ret_def {
- classify_ret_ty(rty)
- } else {
- (LLVMType { cast: false, ty: Type::void() }, None)
- };
+pub fn compute_abi_info(_ccx: &mut CrateContext,
+ atys: &[Type],
+ rty: Type,
+ ret_def: bool) -> FnType {
+ let mut arg_tys = ~[];
+ let mut attrs = ~[];
+ for &aty in atys.iter() {
+ let (ty, attr) = classify_arg_ty(aty);
+ arg_tys.push(ty);
+ attrs.push(attr);
+ }
- let mut ret_ty = ret_ty;
+ let (ret_ty, ret_attr) = if ret_def {
+ classify_ret_ty(rty)
+ } else {
+ (LLVMType { cast: false, ty: Type::void() }, None)
+ };
- let sret = ret_attr.is_some();
- if sret {
- arg_tys.unshift(ret_ty);
- attrs.unshift(ret_attr);
- ret_ty = LLVMType { cast: false, ty: Type::void() };
- }
+ let mut ret_ty = ret_ty;
- return FnType {
- arg_tys: arg_tys,
- ret_ty: ret_ty,
- attrs: attrs,
- sret: sret
- };
+ let sret = ret_attr.is_some();
+ if sret {
+ arg_tys.unshift(ret_ty);
+ attrs.unshift(ret_attr);
+ ret_ty = LLVMType { cast: false, ty: Type::void() };
}
-}
-pub fn abi_info() -> @ABIInfo {
- return @ARM_ABIInfo as @ABIInfo;
+ return FnType {
+ arg_tys: arg_tys,
+ ret_ty: ret_ty,
+ attrs: attrs,
+ sret: sret
+ };
}
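The sret rewrite performed above (unshift the return slot as a hidden first argument, then void the declared return type) can be sketched independently of LLVM. A toy model in present-day Rust; `Sig` and `apply_sret` are illustrative names, not compiler APIs:

```rust
// Toy model of the sret rewrite: when the return value must travel
// through a hidden pointer, that pointer becomes argument 0 and the
// function's declared return type becomes void.
#[derive(Debug, PartialEq)]
struct Sig {
    args: Vec<String>,
    ret: String,
    sret: bool,
}

fn apply_sret(mut args: Vec<String>, ret: String, ret_by_pointer: bool) -> Sig {
    if ret_by_pointer {
        // unshift the return slot, as the ARM/MIPS code does
        args.insert(0, format!("{}*", ret));
        Sig { args, ret: "void".to_string(), sret: true }
    } else {
        Sig { args, ret, sret: false }
    }
}

fn main() {
    let s = apply_sret(vec!["i32".into()], "struct.Big".into(), true);
    assert_eq!(s.args, vec!["struct.Big*", "i32"]);
    assert_eq!(s.ret, "void");
    println!("{:?}", s);
}
```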
use std::vec;
use lib::llvm::{llvm, Integer, Pointer, Float, Double, Struct, Array};
use lib::llvm::{Attribute, StructRetAttribute};
+use middle::trans::context::CrateContext;
use middle::trans::context::task_llcx;
use middle::trans::cabi::*;
return Type::struct_(fields, false);
}
-enum MIPS_ABIInfo { MIPS_ABIInfo }
-
-impl ABIInfo for MIPS_ABIInfo {
- fn compute_info(&self,
- atys: &[Type],
- rty: Type,
- ret_def: bool) -> FnType {
- let (ret_ty, ret_attr) = if ret_def {
- classify_ret_ty(rty)
- } else {
- (LLVMType { cast: false, ty: Type::void() }, None)
- };
-
- let mut ret_ty = ret_ty;
-
- let sret = ret_attr.is_some();
- let mut arg_tys = ~[];
- let mut attrs = ~[];
- let mut offset = if sret { 4 } else { 0 };
-
- for aty in atys.iter() {
- let (ty, attr) = classify_arg_ty(*aty, &mut offset);
- arg_tys.push(ty);
- attrs.push(attr);
- };
-
- if sret {
- arg_tys = vec::append(~[ret_ty], arg_tys);
- attrs = vec::append(~[ret_attr], attrs);
- ret_ty = LLVMType { cast: false, ty: Type::void() };
- }
+pub fn compute_abi_info(_ccx: &mut CrateContext,
+ atys: &[Type],
+ rty: Type,
+ ret_def: bool) -> FnType {
+ let (ret_ty, ret_attr) = if ret_def {
+ classify_ret_ty(rty)
+ } else {
+ (LLVMType { cast: false, ty: Type::void() }, None)
+ };
+
+ let mut ret_ty = ret_ty;
+
+ let sret = ret_attr.is_some();
+ let mut arg_tys = ~[];
+ let mut attrs = ~[];
+ let mut offset = if sret { 4 } else { 0 };
- return FnType {
- arg_tys: arg_tys,
- ret_ty: ret_ty,
- attrs: attrs,
- sret: sret
- };
+ for aty in atys.iter() {
+ let (ty, attr) = classify_arg_ty(*aty, &mut offset);
+ arg_tys.push(ty);
+ attrs.push(attr);
+ };
+
+ if sret {
+ arg_tys = vec::append(~[ret_ty], arg_tys);
+ attrs = vec::append(~[ret_attr], attrs);
+ ret_ty = LLVMType { cast: false, ty: Type::void() };
}
-}
-pub fn abi_info() -> @ABIInfo {
- return @MIPS_ABIInfo as @ABIInfo;
+ return FnType {
+ arg_tys: arg_tys,
+ ret_ty: ret_ty,
+ attrs: attrs,
+ sret: sret
+ };
}
use super::cabi::*;
use super::common::*;
use super::machine::*;
-
use middle::trans::type_::Type;
-struct X86_ABIInfo {
- ccx: @mut CrateContext
-}
+pub fn compute_abi_info(ccx: &mut CrateContext,
+ atys: &[Type],
+ rty: Type,
+ ret_def: bool) -> FnType {
+ let mut arg_tys = ~[];
+ let mut attrs = ~[];
-impl ABIInfo for X86_ABIInfo {
- fn compute_info(&self,
- atys: &[Type],
- rty: Type,
- ret_def: bool) -> FnType {
- let mut arg_tys = do atys.map |a| {
- LLVMType { cast: false, ty: *a }
- };
- let mut ret_ty = LLVMType {
+ let ret_ty;
+ let sret;
+ if !ret_def {
+ ret_ty = LLVMType {
cast: false,
- ty: rty
+ ty: Type::void(),
};
- let mut attrs = do atys.map |_| {
- None
- };
-
- // Rules for returning structs taken from
+ sret = false;
+ } else if rty.kind() == Struct {
+ // Returning a structure. Most often, this will use
+ // a hidden first argument. On some platforms, though,
+ // small structs are returned as integers.
+ //
+ // Some links:
// http://www.angelcode.com/dev/callconv/callconv.html
// Clang's ABI handling is in lib/CodeGen/TargetInfo.cpp
- let sret = {
- let returning_a_struct = rty.kind() == Struct && ret_def;
- let big_struct = match self.ccx.sess.targ_cfg.os {
- os_win32 | os_macos => llsize_of_alloc(self.ccx, rty) > 8,
- _ => true
- };
- returning_a_struct && big_struct
+
+ enum Strategy { RetValue(Type), RetPointer }
+ let strategy = match ccx.sess.targ_cfg.os {
+ os_win32 | os_macos => {
+ match llsize_of_alloc(ccx, rty) {
+ 1 => RetValue(Type::i8()),
+ 2 => RetValue(Type::i16()),
+ 4 => RetValue(Type::i32()),
+ 8 => RetValue(Type::i64()),
+ _ => RetPointer
+ }
+ }
+ _ => {
+ RetPointer
+ }
};
- if sret {
- let ret_ptr_ty = LLVMType {
- cast: false,
- ty: ret_ty.ty.ptr_to()
- };
- arg_tys = ~[ret_ptr_ty] + arg_tys;
- attrs = ~[Some(StructRetAttribute)] + attrs;
- ret_ty = LLVMType {
- cast: false,
- ty: Type::void(),
- };
- } else if !ret_def {
- ret_ty = LLVMType {
- cast: false,
- ty: Type::void()
- };
- }
+ match strategy {
+ RetValue(t) => {
+ ret_ty = LLVMType {
+ cast: true,
+ ty: t
+ };
+ sret = false;
+ }
+ RetPointer => {
+ arg_tys.push(LLVMType {
+ cast: false,
+ ty: rty.ptr_to()
+ });
+ attrs.push(Some(StructRetAttribute));
- return FnType {
- arg_tys: arg_tys,
- ret_ty: ret_ty,
- attrs: attrs,
- sret: sret
+ ret_ty = LLVMType {
+ cast: false,
+ ty: Type::void(),
+ };
+ sret = true;
+ }
+ }
+ } else {
+ ret_ty = LLVMType {
+ cast: false,
+ ty: rty
};
+ sret = false;
+ }
+
+ for &a in atys.iter() {
+ arg_tys.push(LLVMType { cast: false, ty: a });
+ attrs.push(None);
}
-}
-pub fn abi_info(ccx: @mut CrateContext) -> @ABIInfo {
- return @X86_ABIInfo {
- ccx: ccx
- } as @ABIInfo;
+ return FnType {
+ arg_tys: arg_tys,
+ ret_ty: ret_ty,
+ attrs: attrs,
+ sret: sret
+ };
}
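The x86 return-strategy selection above (on win32/macOS, return 1/2/4/8-byte structs in an integer register; otherwise fall back to a hidden pointer) can be sketched as a standalone function. This is a minimal model; the names are illustrative:

```rust
#[derive(Debug, PartialEq)]
enum RetStrategy {
    ByValueInt(u32), // width in bits of the integer register used
    ByPointer,       // hidden sret out-pointer
}

// Mirror of the match in the x86 compute_abi_info: only on targets
// that allow it, and only for the exact sizes 1/2/4/8, is a struct
// returned as i8/i16/i32/i64 rather than via a pointer.
fn struct_return_strategy(size_bytes: u64, small_struct_in_reg: bool) -> RetStrategy {
    if !small_struct_in_reg {
        return RetStrategy::ByPointer;
    }
    match size_bytes {
        1 => RetStrategy::ByValueInt(8),
        2 => RetStrategy::ByValueInt(16),
        4 => RetStrategy::ByValueInt(32),
        8 => RetStrategy::ByValueInt(64),
        _ => RetStrategy::ByPointer,
    }
}

fn main() {
    assert_eq!(struct_return_strategy(4, true), RetStrategy::ByValueInt(32));
    assert_eq!(struct_return_strategy(12, true), RetStrategy::ByPointer);
    assert_eq!(struct_return_strategy(4, false), RetStrategy::ByPointer);
    println!("ok");
}
```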
use lib::llvm::{Struct, Array, Attribute};
use lib::llvm::{StructRetAttribute, ByValAttribute};
use middle::trans::cabi::*;
+use middle::trans::context::CrateContext;
use middle::trans::type_::Type;
return Type::struct_(tys, false);
}
-fn x86_64_tys(atys: &[Type],
- rty: Type,
- ret_def: bool) -> FnType {
-
+pub fn compute_abi_info(_ccx: &mut CrateContext,
+ atys: &[Type],
+ rty: Type,
+ ret_def: bool) -> FnType {
fn x86_64_ty(ty: Type,
is_mem_cls: &fn(cls: &[RegClass]) -> bool,
attr: Attribute) -> (LLVMType, Option<Attribute>) {
sret: sret
};
}
-
-enum X86_64_ABIInfo { X86_64_ABIInfo }
-
-impl ABIInfo for X86_64_ABIInfo {
- fn compute_info(&self,
- atys: &[Type],
- rty: Type,
- ret_def: bool) -> FnType {
- return x86_64_tys(atys, rty, ret_def);
- }
-}
-
-pub fn abi_info() -> @ABIInfo {
- return @X86_64_ABIInfo as @ABIInfo;
-}
use middle::trans::meth;
use middle::trans::monomorphize;
use middle::trans::type_of;
+use middle::trans::foreign;
use middle::ty;
use middle::subst::Subst;
use middle::typeck;
use middle::trans::type_::Type;
use syntax::ast;
+use syntax::abi::AbiSet;
use syntax::ast_map;
-use syntax::oldvisit;
+use syntax::visit;
+use syntax::visit::Visitor;
// Represents a (possibly monomorphized) top-level fn item or method
// item. Note that this is just the fn-ptr and is not a Rust closure
type_params: &[ty::t], // values for fn's ty params
vtables: Option<typeck::vtable_res>) // vtables for the call
-> FnData {
- //!
- //
- // Translates a reference to a fn/method item, monomorphizing and
- // inlining as it goes.
- //
- // # Parameters
- //
- // - `bcx`: the current block where the reference to the fn occurs
- // - `def_id`: def id of the fn or method item being referenced
- // - `ref_id`: node id of the reference to the fn/method, if applicable.
- // This parameter may be zero; but, if so, the resulting value may not
- // have the right type, so it must be cast before being used.
- // - `type_params`: values for each of the fn/method's type parameters
- // - `vtables`: values for each bound on each of the type parameters
+ /*!
+ * Translates a reference to a fn/method item, monomorphizing and
+ * inlining as it goes.
+ *
+ * # Parameters
+ *
+ * - `bcx`: the current block where the reference to the fn occurs
+ * - `def_id`: def id of the fn or method item being referenced
+ * - `ref_id`: node id of the reference to the fn/method, if applicable.
+ * This parameter may be zero; but, if so, the resulting value may not
+ * have the right type, so it must be cast before being used.
+ * - `type_params`: values for each of the fn/method's type parameters
+ * - `vtables`: values for each bound on each of the type parameters
+ */
let _icx = push_ctxt("trans_fn_ref_with_vtables");
let ccx = bcx.ccx();
}
// Find the actual function pointer.
- let val = {
+ let mut val = {
if def_id.crate == ast::LOCAL_CRATE {
// Internal reference.
get_item_val(ccx, def_id.node)
}
};
+ // This is subtle and surprising, but sometimes we have to bitcast
+ // the resulting fn pointer. The reason has to do with external
+ // functions. If you have two crates that both bind the same C
+ // library, they may not use precisely the same types: for
+ // example, they will probably each declare their own structs,
+ // which are distinct types from LLVM's point of view (nominal
+ // types).
+ //
+ // Now, if those two crates are linked into an application, and
+ // they contain inlined code, you can wind up with a situation
+ // where both of those functions are loaded into this
+ // application simultaneously. In that case, the same function
+ // (from LLVM's point of view) requires two types. But of course
+ // LLVM won't allow one function to have two types.
+ //
+ // What we currently do, therefore, is declare the function with
+ // one of the two types (whichever happens to come first) and then
+ // bitcast as needed when the function is referenced to make sure
+ // it has the type we expect.
+ //
+ // This can occur on either a crate-local or crate-external
+ // reference. It also occurs when testing libcore and in some
+ // other weird situations. Annoying.
+ let llty = type_of::type_of_fn_from_ty(ccx, fn_tpt.ty);
+ let llptrty = llty.ptr_to();
+ if val_ty(val) != llptrty {
+ val = BitCast(bcx, val, llptrty);
+ }
+
return FnData {llfn: val};
}
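The situation that comment describes can be reproduced in miniature in present-day Rust: two nominally distinct types with identical layout, each standing in for one crate's declaration of the same C struct. A function pointer over one is a different type from a function pointer over the other, so reusing it requires an explicit cast, which is what the LLVM `BitCast` above does. This sketch uses `transmute` purely as an analogy for that bitcast:

```rust
use std::mem;

// Two nominally distinct structs with identical #[repr(C)] layout,
// mirroring two crates that each declare their own copy of a C struct.
#[repr(C)]
struct CratePointA { x: i32, y: i32 }
#[repr(C)]
struct CratePointB { x: i32, y: i32 }

fn sum_a(p: CratePointA) -> i32 { p.x + p.y }

fn main() {
    // The fn types differ nominally even though the ABI is identical;
    // an explicit cast (here a transmute of the fn pointer, analogous
    // to trans's BitCast) is needed to use one as the other.
    let f: fn(CratePointB) -> i32 =
        unsafe { mem::transmute(sum_a as fn(CratePointA) -> i32) };
    assert_eq!(f(CratePointB { x: 2, y: 3 }), 5);
    println!("ok");
}
```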
ArgVals(args), Some(dest), DontAutorefArg).bcx;
}
-pub fn body_contains_ret(body: &ast::Block) -> bool {
- let cx = @mut false;
- oldvisit::visit_block(body, (cx, oldvisit::mk_vt(@oldvisit::Visitor {
- visit_item: |_i, (_cx, _v)| { },
- visit_expr: |e: @ast::expr,
- (cx, v): (@mut bool, oldvisit::vt<@mut bool>)| {
+
+struct CalleeTranslationVisitor;
+
+impl Visitor<@mut bool> for CalleeTranslationVisitor {
+
+ fn visit_item(&mut self, _:@ast::item, _:@mut bool) { }
+
+ fn visit_expr(&mut self, e:@ast::expr, cx:@mut bool) {
+
if !*cx {
match e.node {
ast::expr_ret(_) => *cx = true,
- _ => oldvisit::visit_expr(e, (cx, v)),
+ _ => visit::walk_expr(self, e, cx),
}
}
- },
- ..*oldvisit::default_visitor()
- })));
+ }
+
+}
+
+pub fn body_contains_ret(body: &ast::Block) -> bool {
+ let cx = @mut false;
+ let mut v = CalleeTranslationVisitor;
+ visit::walk_block(&mut v, body, cx);
*cx
}
-// See [Note-arg-mode]
pub fn trans_call_inner(in_cx: @mut Block,
call_info: Option<NodeInfo>,
- fn_expr_ty: ty::t,
+ callee_ty: ty::t,
ret_ty: ty::t,
get_callee: &fn(@mut Block) -> Callee,
args: CallArgs,
dest: Option<expr::Dest>,
autoref_arg: AutorefArg)
-> Result {
+ /*!
+ * This behemoth of a function translates function calls.
+ * Unfortunately, in order to generate more efficient LLVM
+ * output at -O0, it has quite a complex signature (refactoring
+ * this into two functions seems like a good idea).
+ *
+ * In particular, for lang items, it is invoked with a dest of
+ * None, and lang items are always Rust fns.
+ */
+
+
do base::with_scope_result(in_cx, call_info, "call") |cx| {
let callee = get_callee(cx);
let mut bcx = callee.bcx;
}
};
- let llretslot = trans_ret_slot(bcx, fn_expr_ty, dest);
+ let abi = match ty::get(callee_ty).sty {
+ ty::ty_bare_fn(ref f) => f.abis,
+ _ => AbiSet::Rust()
+ };
+ let is_rust_fn =
+ abi.is_rust() ||
+ abi.is_intrinsic();
+
+ // Generate a location to store the result. If the user does
+ // not care about the result, just make a stack slot.
+ let opt_llretslot = match dest {
+ None => {
+ assert!(!type_of::return_uses_outptr(in_cx.tcx(), ret_ty));
+ None
+ }
+ Some(expr::SaveIn(dst)) => Some(dst),
+ Some(expr::Ignore) => {
+ if !ty::type_is_voidish(ret_ty) {
+ Some(alloc_ty(bcx, ret_ty, "__llret"))
+ } else {
+ unsafe {
+ Some(llvm::LLVMGetUndef(Type::nil().ptr_to().to_ref()))
+ }
+ }
+ }
+ };
+
+ let mut llresult = unsafe {
+ llvm::LLVMGetUndef(Type::nil().ptr_to().to_ref())
+ };
- let mut llargs = ~[];
+ // The code below invokes the function, using either the Rust
+ // conventions (if it is a rust fn) or the native conventions
+ // (otherwise). The important part is that, when all is sad
+ // and done, either the return value of the function will have been
+ // written in opt_llretslot (if it is Some) or `llresult` will be
+ // set appropriately (otherwise).
+ if is_rust_fn {
+ let mut llargs = ~[];
+
+ // Push the out-pointer if we use an out-pointer for this
+ // return type, otherwise push "undef".
+ if type_of::return_uses_outptr(in_cx.tcx(), ret_ty) {
+ llargs.push(opt_llretslot.unwrap());
+ }
- if !ty::type_is_immediate(bcx.tcx(), ret_ty) {
- llargs.push(llretslot);
- }
+ // Push the environment.
+ llargs.push(llenv);
- llargs.push(llenv);
- bcx = trans_args(bcx, args, fn_expr_ty, autoref_arg, &mut llargs);
+ // Push the arguments.
+ bcx = trans_args(bcx, args, callee_ty,
+ autoref_arg, &mut llargs);
- // Now that the arguments have finished evaluating, we need to revoke
- // the cleanup for the self argument
- match callee.data {
- Method(d) => {
- for &v in d.temp_cleanup.iter() {
- revoke_clean(bcx, v);
+ // Now that the arguments have finished evaluating, we
+ // need to revoke the cleanup for the self argument
+ match callee.data {
+ Method(d) => {
+ for &v in d.temp_cleanup.iter() {
+ revoke_clean(bcx, v);
+ }
}
+ _ => {}
}
- _ => {}
- }
- // Uncomment this to debug calls.
- /*
- printfln!("calling: %s", bcx.val_to_str(llfn));
- for llarg in llargs.iter() {
- printfln!("arg: %s", bcx.val_to_str(*llarg));
+ // Invoke the actual rust fn and update bcx/llresult.
+ let (llret, b) = base::invoke(bcx, llfn, llargs);
+ bcx = b;
+ llresult = llret;
+
+ // If the Rust convention for this type is return via
+ // the return value, copy it into llretslot.
+ match opt_llretslot {
+ Some(llretslot) => {
+ if !type_of::return_uses_outptr(bcx.tcx(), ret_ty) &&
+ !ty::type_is_voidish(ret_ty)
+ {
+ Store(bcx, llret, llretslot);
+ }
+ }
+ None => {}
+ }
+ } else {
+ // Lang items are the only case where dest is None, and
+ // they are always Rust fns.
+ assert!(dest.is_some());
+
+ let mut llargs = ~[];
+ bcx = trans_args(bcx, args, callee_ty,
+ autoref_arg, &mut llargs);
+ bcx = foreign::trans_native_call(bcx, callee_ty,
+ llfn, opt_llretslot.unwrap(), llargs);
}
- io::println("---");
- */
-
- // If the block is terminated, then one or more of the args
- // has type _|_. Since that means it diverges, the code for
- // the call itself is unreachable.
- let (llresult, new_bcx) = base::invoke(bcx, llfn, llargs);
- bcx = new_bcx;
+ // If the caller doesn't care about the result of this fn call,
+ // drop the temporary slot we made.
match dest {
- None => { assert!(ty::type_is_immediate(bcx.tcx(), ret_ty)) }
+ None => {
+ assert!(!type_of::return_uses_outptr(bcx.tcx(), ret_ty));
+ }
Some(expr::Ignore) => {
// drop the value if it is not being saved.
- if ty::type_needs_drop(bcx.tcx(), ret_ty) {
- if ty::type_is_immediate(bcx.tcx(), ret_ty) {
- let llscratchptr = alloc_ty(bcx, ret_ty, "__ret");
- Store(bcx, llresult, llscratchptr);
- bcx = glue::drop_ty(bcx, llscratchptr, ret_ty);
- } else {
- bcx = glue::drop_ty(bcx, llretslot, ret_ty);
- }
- }
- }
- Some(expr::SaveIn(lldest)) => {
- // If this is an immediate, store into the result location.
- // (If this was not an immediate, the result will already be
- // directly written into the output slot.)
- if ty::type_is_immediate(bcx.tcx(), ret_ty) {
- Store(bcx, llresult, lldest);
- }
+ bcx = glue::drop_ty(bcx, opt_llretslot.unwrap(), ret_ty);
}
+ Some(expr::SaveIn(_)) => { }
}
if ty::type_is_bot(ret_ty) {
Unreachable(bcx);
}
+
rslt(bcx, llresult)
}
}
-
pub enum CallArgs<'self> {
ArgExprs(&'self [@ast::expr]),
ArgVals(&'self [ValueRef])
}
-pub fn trans_ret_slot(bcx: @mut Block, fn_ty: ty::t, dest: Option<expr::Dest>)
- -> ValueRef {
- let retty = ty::ty_fn_ret(fn_ty);
-
- match dest {
- Some(expr::SaveIn(dst)) => dst,
- _ => {
- if ty::type_is_immediate(bcx.tcx(), retty) {
- unsafe {
- llvm::LLVMGetUndef(Type::nil().ptr_to().to_ref())
- }
- } else {
- alloc_ty(bcx, retty, "__trans_ret_slot")
- }
- }
- }
-}
-
pub fn trans_args(cx: @mut Block,
args: CallArgs,
fn_ty: ty::t,
if formal_arg_ty != arg_datum.ty {
// this could happen due to e.g. subtyping
- let llformal_arg_ty = type_of::type_of_explicit_arg(ccx, &formal_arg_ty);
+ let llformal_arg_ty = type_of::type_of_explicit_arg(ccx, formal_arg_ty);
debug!("casting actual type (%s) to match formal (%s)",
bcx.val_to_str(val), bcx.llty_str(llformal_arg_ty));
val = PointerCast(bcx, val, llformal_arg_ty);
}
}
-pub type ExternMap = HashMap<@str, ValueRef>;
+pub type ExternMap = HashMap<~str, ValueRef>;
// Types used for llself.
pub struct ValSelfData {
// outputting the resume instruction.
personality: Option<ValueRef>,
- // True if this function has an immediate return value, false otherwise.
- // If this is false, the llretptr will alias the first argument of the
- // function.
- has_immediate_return_value: bool,
+ // True if the caller expects this fn to use the out pointer to
+ // return. Either way, your code should write into llretptr, but if
+ // this value is false, llretptr will be a local alloca.
+ caller_expects_out_pointer: bool,
// Maps arguments to allocas created for them in llallocas.
llargs: @mut HashMap<ast::NodeId, ValueRef>,
impl FunctionContext {
pub fn arg_pos(&self, arg: uint) -> uint {
- if self.has_immediate_return_value {
- arg + 1u
- } else {
+ if self.caller_expects_out_pointer {
arg + 2u
+ } else {
+ arg + 1u
}
}
pub fn out_arg_pos(&self) -> uint {
- assert!(!self.has_immediate_return_value);
+ assert!(self.caller_expects_out_pointer);
0u
}
pub fn env_arg_pos(&self) -> uint {
- if !self.has_immediate_return_value {
+ if self.caller_expects_out_pointer {
1u
} else {
0u
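The argument numbering encoded by `arg_pos`, `out_arg_pos`, and `env_arg_pos` above (out pointer first when the caller expects one, then the environment, then the user's arguments) can be sketched as a single table-building function. `arg_layout` is an illustrative name, not a compiler API:

```rust
/// Positions of the implicit out-pointer, the environment pointer, and
/// the first user argument under the two Rust calling conventions
/// described above.
fn arg_layout(caller_expects_out_pointer: bool) -> (Option<usize>, usize, usize) {
    if caller_expects_out_pointer {
        // retptr at 0, env at 1, user args start at 2
        (Some(0), 1, 2)
    } else {
        // no retptr; env at 0, user args start at 1
        (None, 0, 1)
    }
}

fn main() {
    assert_eq!(arg_layout(true), (Some(0), 1, 2));
    assert_eq!(arg_layout(false), (None, 0, 1));
    println!("ok");
}
```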
raw_vtables.map_move(|vts| resolve_vtables_in_fn_ctxt(bcx.fcx, *vts))
}
+// Apply the typaram substitutions in the FunctionContext to some
+// vtables. This should eliminate any vtable_params.
pub fn resolve_vtables_in_fn_ctxt(fcx: &FunctionContext, vts: typeck::vtable_res)
-> typeck::vtable_res {
resolve_vtables_under_param_substs(fcx.ccx.tcx,
-// Apply the typaram substitutions in the FunctionContext to a vtable. This should
-// eliminate any vtable_params.
-pub fn resolve_vtable_in_fn_ctxt(fcx: &FunctionContext, vt: &typeck::vtable_origin)
- -> typeck::vtable_origin {
- resolve_vtable_under_param_substs(fcx.ccx.tcx,
- fcx.param_substs,
- vt)
-}
-
pub fn resolve_vtable_under_param_substs(tcx: ty::ctxt,
param_substs: Option<@param_substs>,
vt: &typeck::vtable_origin)
}
_ => {
tcx.sess.bug(fmt!(
- "resolve_vtable_in_fn_ctxt: asked to lookup but \
- no vtables in the fn_ctxt!"))
+ "resolve_vtable_under_param_substs: asked to lookup \
+ but no vtables in the fn_ctxt!"))
}
}
}
// Cache computed type parameter uses (see type_use.rs)
type_use_cache: HashMap<ast::def_id, @~[type_use::type_uses]>,
// Cache generated vtables
- vtables: HashMap<mono_id, ValueRef>,
+ vtables: HashMap<(ty::t, mono_id), ValueRef>,
// Cache of constant strings,
const_cstr_cache: HashMap<@str, ValueRef>,
pub fn appropriate_mode(tcx: ty::ctxt, ty: ty::t) -> DatumMode {
/*!
- *
- * Indicates the "appropriate" mode for this value,
- * which is either by ref or by value, depending
- * on whether type is immediate or not. */
+ * Indicates the "appropriate" mode for this value,
+ * which is either by ref or by value, depending
+ * on whether type is immediate or not.
+ */
- if ty::type_is_nil(ty) || ty::type_is_bot(ty) {
+ if ty::type_is_voidish(ty) {
ByValue
} else if ty::type_is_immediate(tcx, ty) {
ByValue
let _icx = push_ctxt("copy_to");
- if ty::type_is_nil(self.ty) || ty::type_is_bot(self.ty) {
+ if ty::type_is_voidish(self.ty) {
return bcx;
}
debug!("move_to(self=%s, action=%?, dst=%s)",
self.to_str(bcx.ccx()), action, bcx.val_to_str(dst));
- if ty::type_is_nil(self.ty) || ty::type_is_bot(self.ty) {
+ if ty::type_is_voidish(self.ty) {
return bcx;
}
*
* Yields the value itself. */
- if ty::type_is_nil(self.ty) || ty::type_is_bot(self.ty) {
+ if ty::type_is_voidish(self.ty) {
C_nil()
} else {
match self.mode {
match self.mode {
ByRef(_) => self.val,
ByValue => {
- if ty::type_is_nil(self.ty) || ty::type_is_bot(self.ty) {
+ if ty::type_is_voidish(self.ty) {
C_null(type_of::type_of(bcx.ccx(), self.ty).ptr_to())
} else {
let slot = alloc_ty(bcx, self.ty, "");
assert_eq!(datum.appropriate_mode(tcx), ByValue);
Store(bcx, datum.to_appropriate_llval(bcx), llfn);
let llenv = GEPi(bcx, scratch.val, [0u, abi::fn_field_box]);
- Store(bcx, base::null_env_ptr(bcx), llenv);
+ Store(bcx, base::null_env_ptr(bcx.ccx()), llenv);
DatumBlock {bcx: bcx, datum: scratch}
}
debuginfo::update_source_pos(bcx.fcx, expr.id, expr.span);
let dest = {
- if ty::type_is_nil(ty) || ty::type_is_bot(ty) {
+ if ty::type_is_voidish(ty) {
Ignore
} else {
dest
ty::RvalueDpsExpr => {
let ty = expr_ty(bcx, expr);
- if ty::type_is_nil(ty) || ty::type_is_bot(ty) {
+ if ty::type_is_voidish(ty) {
bcx = trans_rvalue_dps_unadjusted(bcx, expr, Ignore);
return nil(bcx, ty);
} else {
return bcx;
}
}
- ast::def_struct(*) => {
+ ast::def_struct(def_id) => {
let ty = expr_ty(bcx, ref_expr);
match ty::get(ty).sty {
ty::ty_struct(did, _) if ty::has_dtor(ccx.tcx, did) => {
let repr = adt::represent_type(ccx, ty);
adt::trans_start_init(bcx, repr, lldest, 0);
}
- _ => {}
+ ty::ty_bare_fn(*) => {
+ let fn_data = callee::trans_fn_ref(bcx, def_id, ref_expr.id);
+ Store(bcx, fn_data.llfn, lldest);
+ }
+ _ => ()
}
return bcx;
}
// except according to those terms.
-use back::{link, abi};
-use lib::llvm::{Pointer, ValueRef};
+use back::{link};
+use std::libc::c_uint;
+use lib::llvm::{ValueRef, Attribute, CallConv};
+use lib::llvm::llvm;
use lib;
-use middle::trans::base::*;
+use middle::trans::machine;
+use middle::trans::base;
+use middle::trans::base::push_ctxt;
use middle::trans::cabi;
-use middle::trans::cabi_x86;
-use middle::trans::cabi_x86_64;
-use middle::trans::cabi_arm;
-use middle::trans::cabi_mips;
use middle::trans::build::*;
-use middle::trans::callee::*;
+use middle::trans::builder::noname;
use middle::trans::common::*;
-use middle::trans::datum::*;
-use middle::trans::expr::Ignore;
-use middle::trans::machine::llsize_of;
-use middle::trans::glue;
-use middle::trans::machine;
use middle::trans::type_of::*;
use middle::trans::type_of;
use middle::ty;
use middle::ty::FnSig;
-use util::ppaux::ty_to_str;
-use std::cell::Cell;
+use std::uint;
use std::vec;
use syntax::codemap::span;
-use syntax::{ast, ast_util};
+use syntax::{ast};
use syntax::{attr, ast_map};
-use syntax::opt_vec;
use syntax::parse::token::special_idents;
-use syntax::parse::token;
-use syntax::abi::{X86, X86_64, Arm, Mips};
use syntax::abi::{RustIntrinsic, Rust, Stdcall, Fastcall,
- Cdecl, Aapcs, C};
+ Cdecl, Aapcs, C, AbiSet};
+use util::ppaux::{Repr, UserString};
use middle::trans::type_::Type;
-fn abi_info(ccx: @mut CrateContext) -> @cabi::ABIInfo {
- return match ccx.sess.targ_cfg.arch {
- X86 => cabi_x86::abi_info(ccx),
- X86_64 => cabi_x86_64::abi_info(),
- Arm => cabi_arm::abi_info(),
- Mips => cabi_mips::abi_info(),
- }
-}
-
-pub fn link_name(ccx: &CrateContext, i: &ast::foreign_item) -> @str {
- match attr::first_attr_value_str_by_name(i.attrs, "link_name") {
- None => ccx.sess.str_of(i.ident),
- Some(ln) => ln,
- }
-}
+///////////////////////////////////////////////////////////////////////////
+// Type definitions
-struct ShimTypes {
+struct ForeignTypes {
+ /// Rust signature of the function
fn_sig: ty::FnSig,
+ /// Adapter object for handling native ABI rules (trust me, you
+ /// don't want to know)
+ fn_ty: cabi::FnType,
+
/// LLVM types that will appear on the foreign function
llsig: LlvmSignature,
/// True if there is a return value (not bottom, not unit)
ret_def: bool,
-
- /// Type of the struct we will use to shuttle values back and forth.
- /// This is always derived from the llsig.
- bundle_ty: Type,
-
- /// Type of the shim function itself.
- shim_fn_ty: Type,
-
- /// Adapter object for handling native ABI rules (trust me, you
- /// don't want to know).
- fn_ty: cabi::FnType
}
struct LlvmSignature {
+ // LLVM versions of the types of this function's arguments.
llarg_tys: ~[Type],
- llret_ty: Type,
- sret: bool,
-}
-fn foreign_signature(ccx: &mut CrateContext, fn_sig: &ty::FnSig)
- -> LlvmSignature {
- /*!
- * The ForeignSignature is the LLVM types of the arguments/return type
- * of a function. Note that these LLVM types are not quite the same
- * as the LLVM types would be for a native Rust function because foreign
- * functions just plain ignore modes. They also don't pass aggregate
- * values by pointer like we do.
- */
+ // LLVM version of the type that this function returns. Note that
+ // this *may not be* the declared return type of the foreign
+ // function, because the foreign function may opt to return via an
+ // out pointer.
+ llret_ty: Type,
- let llarg_tys = fn_sig.inputs.map(|arg_ty| type_of(ccx, *arg_ty));
- let llret_ty = type_of::type_of(ccx, fn_sig.output);
- LlvmSignature {
- llarg_tys: llarg_tys,
- llret_ty: llret_ty,
- sret: !ty::type_is_immediate(ccx.tcx, fn_sig.output),
- }
+ // True if *Rust* would use an outpointer for this function.
+ sret: bool,
}
-fn shim_types(ccx: @mut CrateContext, id: ast::NodeId) -> ShimTypes {
- let fn_sig = match ty::get(ty::node_id_to_type(ccx.tcx, id)).sty {
- ty::ty_bare_fn(ref fn_ty) => fn_ty.sig.clone(),
- _ => ccx.sess.bug("c_arg_and_ret_lltys called on non-function type")
- };
- let llsig = foreign_signature(ccx, &fn_sig);
- let bundle_ty = Type::struct_(llsig.llarg_tys + &[llsig.llret_ty.ptr_to()], false);
- let ret_def = !ty::type_is_bot(fn_sig.output) &&
- !ty::type_is_nil(fn_sig.output);
- let fn_ty = abi_info(ccx).compute_info(llsig.llarg_tys, llsig.llret_ty, ret_def);
- ShimTypes {
- fn_sig: fn_sig,
- llsig: llsig,
- ret_def: ret_def,
- bundle_ty: bundle_ty,
- shim_fn_ty: Type::func([bundle_ty.ptr_to()], &Type::void()),
- fn_ty: fn_ty
- }
-}
-type shim_arg_builder<'self> =
- &'self fn(bcx: @mut Block, tys: &ShimTypes,
- llargbundle: ValueRef) -> ~[ValueRef];
-
-type shim_ret_builder<'self> =
- &'self fn(bcx: @mut Block, tys: &ShimTypes,
- llargbundle: ValueRef,
- llretval: ValueRef);
-
-fn build_shim_fn_(ccx: @mut CrateContext,
- shim_name: &str,
- llbasefn: ValueRef,
- tys: &ShimTypes,
- cc: lib::llvm::CallConv,
- arg_builder: shim_arg_builder,
- ret_builder: shim_ret_builder)
- -> ValueRef {
- let llshimfn = decl_internal_cdecl_fn(
- ccx.llmod, shim_name, tys.shim_fn_ty);
-
- // Declare the body of the shim function:
- let fcx = new_fn_ctxt(ccx, ~[], llshimfn, tys.fn_sig.output, None);
- let bcx = fcx.entry_bcx.unwrap();
-
- let llargbundle = get_param(llshimfn, 0u);
- let llargvals = arg_builder(bcx, tys, llargbundle);
-
- // Create the call itself and store the return value:
- let llretval = CallWithConv(bcx, llbasefn, llargvals, cc);
-
- ret_builder(bcx, tys, llargbundle, llretval);
-
- // Don't finish up the function in the usual way, because this doesn't
- // follow the normal Rust calling conventions.
- let ret_cx = match fcx.llreturn {
- Some(llreturn) => raw_block(fcx, false, llreturn),
- None => bcx
- };
- RetVoid(ret_cx);
- fcx.cleanup();
+///////////////////////////////////////////////////////////////////////////
+// Calls to external functions
- return llshimfn;
-}
+fn llvm_calling_convention(ccx: @mut CrateContext,
+ abis: AbiSet)
+ -> Option<CallConv> {
+ let arch = ccx.sess.targ_cfg.arch;
+ abis.for_arch(arch).map(|abi| {
+ match *abi {
+ RustIntrinsic => {
+ // Intrinsics are emitted by monomorphic fn
+ ccx.sess.bug(fmt!("Asked to register intrinsic fn"));
+ }
-type wrap_arg_builder<'self> = &'self fn(bcx: @mut Block,
- tys: &ShimTypes,
- llwrapfn: ValueRef,
- llargbundle: ValueRef);
-
-type wrap_ret_builder<'self> = &'self fn(bcx: @mut Block,
- tys: &ShimTypes,
- llargbundle: ValueRef);
-
-fn build_wrap_fn_(ccx: @mut CrateContext,
- tys: &ShimTypes,
- llshimfn: ValueRef,
- llwrapfn: ValueRef,
- shim_upcall: ValueRef,
- needs_c_return: bool,
- arg_builder: wrap_arg_builder,
- ret_builder: wrap_ret_builder) {
- let _icx = push_ctxt("foreign::build_wrap_fn_");
- let fcx = new_fn_ctxt(ccx, ~[], llwrapfn, tys.fn_sig.output, None);
- let bcx = fcx.entry_bcx.unwrap();
-
- // Patch up the return type if it's not immediate and we're returning via
- // the C ABI.
- if needs_c_return && !ty::type_is_immediate(ccx.tcx, tys.fn_sig.output) {
- let lloutputtype = type_of::type_of(fcx.ccx, tys.fn_sig.output);
- fcx.llretptr = Some(alloca(bcx, lloutputtype, ""));
- }
+ Rust => {
+ // FIXME(#3678) Implement linking to foreign fns with Rust ABI
+ ccx.sess.unimpl(
+ fmt!("Foreign functions with Rust ABI"));
+ }
- // Allocate the struct and write the arguments into it.
- let llargbundle = alloca(bcx, tys.bundle_ty, "__llargbundle");
- arg_builder(bcx, tys, llwrapfn, llargbundle);
+ Stdcall => lib::llvm::X86StdcallCallConv,
+ Fastcall => lib::llvm::X86FastcallCallConv,
+ C => lib::llvm::CCallConv,
- // Create call itself.
- let llshimfnptr = PointerCast(bcx, llshimfn, Type::i8p());
- let llrawargbundle = PointerCast(bcx, llargbundle, Type::i8p());
- Call(bcx, shim_upcall, [llrawargbundle, llshimfnptr]);
- ret_builder(bcx, tys, llargbundle);
+ // NOTE These API constants ought to be more specific
+ Cdecl => lib::llvm::CCallConv,
+ Aapcs => lib::llvm::CCallConv,
+ }
+ })
+}
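The ABI-to-calling-convention mapping in `llvm_calling_convention` collapses several source-level ABIs onto LLVM's plain C convention. A minimal sketch of that mapping in present-day Rust, with stand-in enums rather than the compiler's `AbiSet`/`CallConv` types:

```rust
enum Abi { Stdcall, Fastcall, C, Cdecl, Aapcs }

#[derive(Debug, PartialEq)]
enum CallConv { CCallConv, X86StdcallCallConv, X86FastcallCallConv }

// Mirror of the match above: Cdecl and Aapcs currently share the
// generic C calling convention, while stdcall/fastcall keep their
// x86-specific conventions.
fn to_llvm_cc(abi: Abi) -> CallConv {
    match abi {
        Abi::Stdcall => CallConv::X86StdcallCallConv,
        Abi::Fastcall => CallConv::X86FastcallCallConv,
        Abi::C | Abi::Cdecl | Abi::Aapcs => CallConv::CCallConv,
    }
}

fn main() {
    assert_eq!(to_llvm_cc(Abi::Aapcs), CallConv::CCallConv);
    println!("ok");
}
```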
- // Then return according to the C ABI.
- let return_context = match fcx.llreturn {
- Some(llreturn) => raw_block(fcx, false, llreturn),
- None => bcx
- };
- let llfunctiontype = val_ty(llwrapfn);
- let llfunctiontype = llfunctiontype.element_type();
- let return_type = llfunctiontype.return_type();
- if return_type.kind() == ::lib::llvm::Void {
- // XXX: This might be wrong if there are any functions for which
- // the C ABI specifies a void output pointer and the Rust ABI
- // does not.
- RetVoid(return_context);
- } else {
- // Cast if we have to...
- // XXX: This is ugly.
- let llretptr = BitCast(return_context, fcx.llretptr.unwrap(), return_type.ptr_to());
- Ret(return_context, Load(return_context, llretptr));
- }
- fcx.cleanup();
-}
+pub fn register_foreign_item_fn(ccx: @mut CrateContext,
+ abis: AbiSet,
+ path: &ast_map::path,
+ foreign_item: @ast::foreign_item) -> ValueRef {
+ /*!
+ * Registers a foreign function found in a library.
+ * Just adds an LLVM global.
+ */
-// For each foreign function F, we generate a wrapper function W and a shim
-// function S that all work together. The wrapper function W is the function
-// that other rust code actually invokes. Its job is to marshall the
-// arguments into a struct. It then uses a small bit of assembly to switch
-// over to the C stack and invoke the shim function. The shim function S then
-// unpacks the arguments from the struct and invokes the actual function F
-// according to its specified calling convention.
-//
-// Example: Given a foreign c-stack function F(x: X, y: Y) -> Z,
-// we generate a wrapper function W that looks like:
-//
-// void W(Z* dest, void *env, X x, Y y) {
-// struct { X x; Y y; Z *z; } args = { x, y, z };
-// call_on_c_stack_shim(S, &args);
-// }
-//
-// The shim function S then looks something like:
-//
-// void S(struct { X x; Y y; Z *z; } *args) {
-// *args->z = F(args->x, args->y);
-// }
-//
-// However, if the return type of F is dynamically sized or of aggregate type,
-// the shim function looks like:
-//
-// void S(struct { X x; Y y; Z *z; } *args) {
-// F(args->z, args->x, args->y);
-// }
-//
-// Note: on i386, the layout of the args struct is generally the same
-// as the desired layout of the arguments on the C stack. Therefore,
-// we could use upcall_alloc_c_stack() to allocate the `args`
-// structure and switch the stack pointer appropriately to avoid a
-// round of copies. (In fact, the shim function itself is
-// unnecessary). We used to do this, in fact, and will perhaps do so
-// in the future.
-pub fn trans_foreign_mod(ccx: @mut CrateContext,
- path: &ast_map::path,
- foreign_mod: &ast::foreign_mod) {
- let _icx = push_ctxt("foreign::trans_foreign_mod");
+ debug!("register_foreign_item_fn(abis=%s, \
+ path=%s, \
+ foreign_item.id=%?)",
+ abis.repr(ccx.tcx),
+ path.repr(ccx.tcx),
+ foreign_item.id);
- let arch = ccx.sess.targ_cfg.arch;
- let abi = match foreign_mod.abis.for_arch(arch) {
+ let cc = match llvm_calling_convention(ccx, abis) {
+ Some(cc) => cc,
None => {
+ // FIXME(#8357) We really ought to report a span here
ccx.sess.fatal(
- fmt!("No suitable ABI for target architecture \
+ fmt!("ABI string `%s` has no suitable calling convention \
+ for target architecture \
in module %s",
+ abis.user_string(ccx.tcx),
ast_map::path_to_str(*path,
ccx.sess.intr())));
}
-
- Some(abi) => abi,
};
- for &foreign_item in foreign_mod.items.iter() {
- match foreign_item.node {
- ast::foreign_item_fn(*) => {
- let id = foreign_item.id;
- match abi {
- RustIntrinsic => {
- // Intrinsics are emitted by monomorphic fn
- }
-
- Rust => {
- // FIXME(#3678) Implement linking to foreign fns with Rust ABI
- ccx.sess.unimpl(
- fmt!("Foreign functions with Rust ABI"));
- }
-
- Stdcall => {
- build_foreign_fn(ccx, id, foreign_item,
- lib::llvm::X86StdcallCallConv);
- }
-
- Fastcall => {
- build_foreign_fn(ccx, id, foreign_item,
- lib::llvm::X86FastcallCallConv);
- }
-
- Cdecl => {
- // FIXME(#3678) should really be more specific
- build_foreign_fn(ccx, id, foreign_item,
- lib::llvm::CCallConv);
- }
-
- Aapcs => {
- // FIXME(#3678) should really be more specific
- build_foreign_fn(ccx, id, foreign_item,
- lib::llvm::CCallConv);
- }
-
- C => {
- build_foreign_fn(ccx, id, foreign_item,
- lib::llvm::CCallConv);
- }
- }
- }
- ast::foreign_item_static(*) => {
- let ident = token::ident_to_str(&foreign_item.ident);
- ccx.item_symbols.insert(foreign_item.id, /* bad */ident.to_owned());
- }
- }
- }
+ // Register the function as a C extern fn
+ let lname = link_name(ccx, foreign_item);
+ let tys = foreign_types_for_id(ccx, foreign_item.id);
- fn build_foreign_fn(ccx: @mut CrateContext,
- id: ast::NodeId,
- foreign_item: @ast::foreign_item,
- cc: lib::llvm::CallConv) {
- let llwrapfn = get_item_val(ccx, id);
- let tys = shim_types(ccx, id);
- if attr::contains_name(foreign_item.attrs, "rust_stack") {
- build_direct_fn(ccx, llwrapfn, foreign_item,
- &tys, cc);
- } else if attr::contains_name(foreign_item.attrs, "fast_ffi") {
- build_fast_ffi_fn(ccx, llwrapfn, foreign_item, &tys, cc);
- } else {
- let llshimfn = build_shim_fn(ccx, foreign_item, &tys, cc);
- build_wrap_fn(ccx, &tys, llshimfn, llwrapfn);
- }
- }
+ // Create the LLVM value for the C extern fn
+ let llfn_ty = lltype_for_fn_from_foreign_types(&tys);
+ let llfn = base::get_extern_fn(&mut ccx.externs, ccx.llmod,
+ lname, cc, llfn_ty);
+ add_argument_attributes(&tys, llfn);
- fn build_shim_fn(ccx: @mut CrateContext,
- foreign_item: &ast::foreign_item,
- tys: &ShimTypes,
- cc: lib::llvm::CallConv)
- -> ValueRef {
- /*!
- *
- * Build S, from comment above:
- *
- * void S(struct { X x; Y y; Z *z; } *args) {
- * F(args->z, args->x, args->y);
- * }
- */
-
- let _icx = push_ctxt("foreign::build_shim_fn");
-
- fn build_args(bcx: @mut Block, tys: &ShimTypes, llargbundle: ValueRef)
- -> ~[ValueRef] {
- let _icx = push_ctxt("foreign::shim::build_args");
- tys.fn_ty.build_shim_args(bcx, tys.llsig.llarg_tys, llargbundle)
- }
+ return llfn;
+}
- fn build_ret(bcx: @mut Block,
- tys: &ShimTypes,
- llargbundle: ValueRef,
- llretval: ValueRef) {
- let _icx = push_ctxt("foreign::shim::build_ret");
- tys.fn_ty.build_shim_ret(bcx,
- tys.llsig.llarg_tys,
- tys.ret_def,
- llargbundle,
- llretval);
- }
+pub fn trans_native_call(bcx: @mut Block,
+ callee_ty: ty::t,
+ llfn: ValueRef,
+ llretptr: ValueRef,
+ llargs_rust: &[ValueRef]) -> @mut Block {
+ /*!
+ * Prepares a call to a native function. This requires adapting
+ * from the Rust argument passing rules to the native rules.
+ *
+ * # Parameters
+ *
+ * - `callee_ty`: Rust type for the function we are calling
+ * - `llfn`: the function pointer we are calling
+ * - `llretptr`: where to store the return value of the function
+ * - `llargs_rust`: a list of the argument values, prepared
+ * as they would be if calling a Rust function
+ */
- let lname = link_name(ccx, foreign_item);
- let llbasefn = base_fn(ccx, lname, tys, cc);
- // Name the shim function
- let shim_name = fmt!("%s__c_stack_shim", lname);
- build_shim_fn_(ccx,
- shim_name,
- llbasefn,
- tys,
- cc,
- build_args,
- build_ret)
- }
+ let ccx = bcx.ccx();
+ let tcx = bcx.tcx();
- fn base_fn(ccx: &CrateContext,
- lname: &str,
- tys: &ShimTypes,
- cc: lib::llvm::CallConv)
- -> ValueRef {
- // Declare the "prototype" for the base function F:
- do tys.fn_ty.decl_fn |fnty| {
- decl_fn(ccx.llmod, lname, cc, fnty)
- }
- }
+ debug!("trans_native_call(callee_ty=%s, \
+ llfn=%s, \
+ llretptr=%s)",
+ callee_ty.repr(tcx),
+ ccx.tn.val_to_str(llfn),
+ ccx.tn.val_to_str(llretptr));
- // FIXME (#2535): this is very shaky and probably gets ABIs wrong all
- // over the place
- fn build_direct_fn(ccx: @mut CrateContext,
- decl: ValueRef,
- item: &ast::foreign_item,
- tys: &ShimTypes,
- cc: lib::llvm::CallConv) {
- debug!("build_direct_fn(%s)", link_name(ccx, item));
-
- let fcx = new_fn_ctxt(ccx, ~[], decl, tys.fn_sig.output, None);
- let bcx = fcx.entry_bcx.unwrap();
- let llbasefn = base_fn(ccx, link_name(ccx, item), tys, cc);
- let ty = ty::lookup_item_type(ccx.tcx,
- ast_util::local_def(item.id)).ty;
- let ret_ty = ty::ty_fn_ret(ty);
- let args = vec::from_fn(ty::ty_fn_args(ty).len(), |i| {
- get_param(decl, fcx.arg_pos(i))
- });
- let retval = Call(bcx, llbasefn, args);
- if !ty::type_is_nil(ret_ty) && !ty::type_is_bot(ret_ty) {
- Store(bcx, retval, fcx.llretptr.unwrap());
+ let (fn_abis, fn_sig) = match ty::get(callee_ty).sty {
+ ty::ty_bare_fn(ref fn_ty) => (fn_ty.abis, fn_ty.sig.clone()),
+ _ => ccx.sess.bug("trans_native_call called on non-function type")
+ };
+ let llsig = foreign_signature(ccx, &fn_sig);
+ let ret_def = !ty::type_is_voidish(fn_sig.output);
+ let fn_type = cabi::compute_abi_info(ccx,
+ llsig.llarg_tys,
+ llsig.llret_ty,
+ ret_def);
+
+ let all_arg_tys: &[cabi::LLVMType] = fn_type.arg_tys;
+ let all_attributes: &[Option<Attribute>] = fn_type.attrs;
+
+ let mut llargs_foreign = ~[];
+
+ // If the foreign ABI expects the return value by pointer, supply the
+ // pointer that Rust gave us. Sometimes we have to bitcast
+ // because foreign fns return slightly different (but equivalent)
+ // views on the same type (e.g., i64 in place of {i32,i32}).
+ let (arg_tys, attributes) = {
+ if fn_type.sret {
+ if all_arg_tys[0].cast {
+ let llcastedretptr =
+ BitCast(bcx, llretptr, all_arg_tys[0].ty.ptr_to());
+ llargs_foreign.push(llcastedretptr);
+ } else {
+ llargs_foreign.push(llretptr);
+ }
+ (all_arg_tys.tail(), all_attributes.tail())
+ } else {
+ (all_arg_tys, all_attributes)
}
- finish_fn(fcx, bcx);
- }
+ };
- // FIXME (#2535): this is very shaky and probably gets ABIs wrong all
- // over the place
- fn build_fast_ffi_fn(ccx: @mut CrateContext,
- decl: ValueRef,
- item: &ast::foreign_item,
- tys: &ShimTypes,
- cc: lib::llvm::CallConv) {
- debug!("build_fast_ffi_fn(%s)", link_name(ccx, item));
-
- let fcx = new_fn_ctxt(ccx, ~[], decl, tys.fn_sig.output, None);
- let bcx = fcx.entry_bcx.unwrap();
- let llbasefn = base_fn(ccx, link_name(ccx, item), tys, cc);
- set_no_inline(fcx.llfn);
- set_fixed_stack_segment(fcx.llfn);
- let ty = ty::lookup_item_type(ccx.tcx,
- ast_util::local_def(item.id)).ty;
- let ret_ty = ty::ty_fn_ret(ty);
- let args = vec::from_fn(ty::ty_fn_args(ty).len(), |i| {
- get_param(decl, fcx.arg_pos(i))
- });
- let retval = Call(bcx, llbasefn, args);
- if !ty::type_is_nil(ret_ty) && !ty::type_is_bot(ret_ty) {
- Store(bcx, retval, fcx.llretptr.unwrap());
- }
- finish_fn(fcx, bcx);
- }
+ for (i, &llarg_rust) in llargs_rust.iter().enumerate() {
+ let mut llarg_rust = llarg_rust;
- fn build_wrap_fn(ccx: @mut CrateContext,
- tys: &ShimTypes,
- llshimfn: ValueRef,
- llwrapfn: ValueRef) {
- /*!
- *
- * Build W, from comment above:
- *
- * void W(Z* dest, void *env, X x, Y y) {
- * struct { X x; Y y; Z *z; } args = { x, y, z };
- * call_on_c_stack_shim(S, &args);
- * }
- *
- * One thing we have to be very careful of is to
- * account for the Rust modes.
- */
-
- let _icx = push_ctxt("foreign::build_wrap_fn");
-
- build_wrap_fn_(ccx,
- tys,
- llshimfn,
- llwrapfn,
- ccx.upcalls.call_shim_on_c_stack,
- false,
- build_args,
- build_ret);
-
- fn build_args(bcx: @mut Block,
- tys: &ShimTypes,
- llwrapfn: ValueRef,
- llargbundle: ValueRef) {
- let _icx = push_ctxt("foreign::wrap::build_args");
- let ccx = bcx.ccx();
- let n = tys.llsig.llarg_tys.len();
- for i in range(0u, n) {
- let arg_i = bcx.fcx.arg_pos(i);
- let mut llargval = get_param(llwrapfn, arg_i);
-
- // In some cases, Rust will pass a pointer which the
- // native C type doesn't have. In that case, just
- // load the value from the pointer.
- if type_of::arg_is_indirect(ccx, &tys.fn_sig.inputs[i]) {
- llargval = Load(bcx, llargval);
- }
+ // Does Rust pass this argument by pointer?
+ let rust_indirect = type_of::arg_is_indirect(ccx, fn_sig.inputs[i]);
- store_inbounds(bcx, llargval, llargbundle, [0u, i]);
- }
+ debug!("argument %u, llarg_rust=%s, rust_indirect=%b, arg_ty=%s",
+ i,
+ ccx.tn.val_to_str(llarg_rust),
+ rust_indirect,
+ ccx.tn.type_to_str(arg_tys[i].ty));
- for &retptr in bcx.fcx.llretptr.iter() {
- store_inbounds(bcx, retptr, llargbundle, [0u, n]);
- }
+ // Ensure that we always have the Rust value indirectly,
+ // because it makes bitcasting easier.
+ if !rust_indirect {
+ let scratch = base::alloca(bcx, arg_tys[i].ty, "__arg");
+ Store(bcx, llarg_rust, scratch);
+ llarg_rust = scratch;
}
- fn build_ret(bcx: @mut Block,
- shim_types: &ShimTypes,
- llargbundle: ValueRef) {
- let _icx = push_ctxt("foreign::wrap::build_ret");
- let arg_count = shim_types.fn_sig.inputs.len();
- for &retptr in bcx.fcx.llretptr.iter() {
- let llretptr = load_inbounds(bcx, llargbundle, [0, arg_count]);
- Store(bcx, Load(bcx, llretptr), retptr);
- }
- }
- }
-}
+ debug!("llarg_rust=%s (after indirection)",
+ ccx.tn.val_to_str(llarg_rust));
-pub fn trans_intrinsic(ccx: @mut CrateContext,
- decl: ValueRef,
- item: &ast::foreign_item,
- path: ast_map::path,
- substs: @param_substs,
- attributes: &[ast::Attribute],
- ref_id: Option<ast::NodeId>) {
- debug!("trans_intrinsic(item.ident=%s)", ccx.sess.str_of(item.ident));
-
- fn simple_llvm_intrinsic(bcx: @mut Block, name: &'static str, num_args: uint) {
- assert!(num_args <= 4);
- let mut args = [0 as ValueRef, ..4];
- let first_real_arg = bcx.fcx.arg_pos(0u);
- for i in range(0u, num_args) {
- args[i] = get_param(bcx.fcx.llfn, first_real_arg + i);
+ // Check whether we need to do any casting
+ let llforeign_arg_ty = arg_tys[i].ty;
+ if arg_tys[i].cast {
+ llarg_rust = BitCast(bcx, llarg_rust, llforeign_arg_ty.ptr_to());
}
- let llfn = bcx.ccx().intrinsics.get_copy(&name);
- Ret(bcx, Call(bcx, llfn, args.slice(0, num_args)));
- }
-
- fn with_overflow_instrinsic(bcx: @mut Block, name: &'static str) {
- let first_real_arg = bcx.fcx.arg_pos(0u);
- let a = get_param(bcx.fcx.llfn, first_real_arg);
- let b = get_param(bcx.fcx.llfn, first_real_arg + 1);
- let llfn = bcx.ccx().intrinsics.get_copy(&name);
-
- // convert `i1` to a `bool`, and write to the out parameter
- let val = Call(bcx, llfn, [a, b]);
- let result = ExtractValue(bcx, val, 0);
- let overflow = ZExt(bcx, ExtractValue(bcx, val, 1), Type::bool());
- let retptr = get_param(bcx.fcx.llfn, bcx.fcx.out_arg_pos());
- let ret = Load(bcx, retptr);
- let ret = InsertValue(bcx, ret, result, 0);
- let ret = InsertValue(bcx, ret, overflow, 1);
- Store(bcx, ret, retptr);
- RetVoid(bcx)
- }
- fn memcpy_intrinsic(bcx: @mut Block, name: &'static str, tp_ty: ty::t, sizebits: u8) {
- let ccx = bcx.ccx();
- let lltp_ty = type_of::type_of(ccx, tp_ty);
- let align = C_i32(machine::llalign_of_min(ccx, lltp_ty) as i32);
- let size = match sizebits {
- 32 => C_i32(machine::llsize_of_real(ccx, lltp_ty) as i32),
- 64 => C_i64(machine::llsize_of_real(ccx, lltp_ty) as i64),
- _ => ccx.sess.fatal("Invalid value for sizebits")
- };
+ debug!("llarg_rust=%s (after casting)",
+ ccx.tn.val_to_str(llarg_rust));
- let decl = bcx.fcx.llfn;
- let first_real_arg = bcx.fcx.arg_pos(0u);
- let dst_ptr = PointerCast(bcx, get_param(decl, first_real_arg), Type::i8p());
- let src_ptr = PointerCast(bcx, get_param(decl, first_real_arg + 1), Type::i8p());
- let count = get_param(decl, first_real_arg + 2);
- let volatile = C_i1(false);
- let llfn = bcx.ccx().intrinsics.get_copy(&name);
- Call(bcx, llfn, [dst_ptr, src_ptr, Mul(bcx, size, count), align, volatile]);
- RetVoid(bcx);
- }
-
- fn memset_intrinsic(bcx: @mut Block, name: &'static str, tp_ty: ty::t, sizebits: u8) {
- let ccx = bcx.ccx();
- let lltp_ty = type_of::type_of(ccx, tp_ty);
- let align = C_i32(machine::llalign_of_min(ccx, lltp_ty) as i32);
- let size = match sizebits {
- 32 => C_i32(machine::llsize_of_real(ccx, lltp_ty) as i32),
- 64 => C_i64(machine::llsize_of_real(ccx, lltp_ty) as i64),
- _ => ccx.sess.fatal("Invalid value for sizebits")
+ // Finally, load the value if needed for the foreign ABI
+ let foreign_indirect = attributes[i].is_some();
+ let llarg_foreign = if foreign_indirect {
+ llarg_rust
+ } else {
+ Load(bcx, llarg_rust)
};
- let decl = bcx.fcx.llfn;
- let first_real_arg = bcx.fcx.arg_pos(0u);
- let dst_ptr = PointerCast(bcx, get_param(decl, first_real_arg), Type::i8p());
- let val = get_param(decl, first_real_arg + 1);
- let count = get_param(decl, first_real_arg + 2);
- let volatile = C_i1(false);
- let llfn = bcx.ccx().intrinsics.get_copy(&name);
- Call(bcx, llfn, [dst_ptr, val, Mul(bcx, size, count), align, volatile]);
- RetVoid(bcx);
- }
+ debug!("argument %u, llarg_foreign=%s",
+ i, ccx.tn.val_to_str(llarg_foreign));
- fn count_zeros_intrinsic(bcx: @mut Block, name: &'static str) {
- let x = get_param(bcx.fcx.llfn, bcx.fcx.arg_pos(0u));
- let y = C_i1(false);
- let llfn = bcx.ccx().intrinsics.get_copy(&name);
- Ret(bcx, Call(bcx, llfn, [x, y]));
+ llargs_foreign.push(llarg_foreign);
}
- let output_type = ty::ty_fn_ret(ty::node_id_to_type(ccx.tcx, item.id));
-
- let fcx = new_fn_ctxt_w_id(ccx,
- path,
- decl,
- item.id,
- output_type,
- true,
- Some(substs),
- None,
- Some(item.span));
-
- set_always_inline(fcx.llfn);
+ let cc = match llvm_calling_convention(ccx, fn_abis) {
+ Some(cc) => cc,
+ None => {
+ // FIXME(#8357) We really ought to report a span here
+ ccx.sess.fatal(
+ fmt!("ABI string `%s` has no suitable calling convention \
+ for target architecture",
+ fn_abis.user_string(ccx.tcx)));
+ }
+ };
- // Set the fixed stack segment flag if necessary.
- if attr::contains_name(attributes, "fixed_stack_segment") {
- set_fixed_stack_segment(fcx.llfn);
- }
+ let llforeign_retval = CallWithConv(bcx, llfn, llargs_foreign, cc);
- let mut bcx = fcx.entry_bcx.unwrap();
- let first_real_arg = fcx.arg_pos(0u);
+ // If the function we just called does not use an outpointer,
+ // store the result into the rust outpointer. Cast the outpointer
+ // type to match because some ABIs will use a different type than
+ // the Rust type. e.g., a {u32,u32} struct could be returned as
+ // u64.
+ if ret_def && !fn_type.sret {
+ let llrust_ret_ty = llsig.llret_ty;
+ let llforeign_ret_ty = fn_type.ret_ty.ty;
- let nm = ccx.sess.str_of(item.ident);
- let name = nm.as_slice();
+ debug!("llretptr=%s", ccx.tn.val_to_str(llretptr));
+ debug!("llforeign_retval=%s", ccx.tn.val_to_str(llforeign_retval));
+ debug!("llrust_ret_ty=%s", ccx.tn.type_to_str(llrust_ret_ty));
+ debug!("llforeign_ret_ty=%s", ccx.tn.type_to_str(llforeign_ret_ty));
- // This requires that atomic intrinsics follow a specific naming pattern:
- // "atomic_<operation>[_<ordering>], and no ordering means SeqCst
- if name.starts_with("atomic_") {
- let split : ~[&str] = name.split_iter('_').collect();
- assert!(split.len() >= 2, "Atomic intrinsic not correct format");
- let order = if split.len() == 2 {
- lib::llvm::SequentiallyConsistent
+ if llrust_ret_ty == llforeign_ret_ty {
+ Store(bcx, llforeign_retval, llretptr);
} else {
- match split[2] {
- "relaxed" => lib::llvm::Monotonic,
- "acq" => lib::llvm::Acquire,
- "rel" => lib::llvm::Release,
- "acqrel" => lib::llvm::AcquireRelease,
- _ => ccx.sess.fatal("Unknown ordering in atomic intrinsic")
- }
- };
-
- match split[1] {
- "cxchg" => {
- let old = AtomicCmpXchg(bcx, get_param(decl, first_real_arg),
- get_param(decl, first_real_arg + 1u),
- get_param(decl, first_real_arg + 2u),
- order);
- Ret(bcx, old);
- }
- "load" => {
- let old = AtomicLoad(bcx, get_param(decl, first_real_arg),
- order);
- Ret(bcx, old);
- }
- "store" => {
- AtomicStore(bcx, get_param(decl, first_real_arg + 1u),
- get_param(decl, first_real_arg),
- order);
- RetVoid(bcx);
- }
- "fence" => {
- AtomicFence(bcx, order);
- RetVoid(bcx);
- }
- op => {
- // These are all AtomicRMW ops
- let atom_op = match op {
- "xchg" => lib::llvm::Xchg,
- "xadd" => lib::llvm::Add,
- "xsub" => lib::llvm::Sub,
- "and" => lib::llvm::And,
- "nand" => lib::llvm::Nand,
- "or" => lib::llvm::Or,
- "xor" => lib::llvm::Xor,
- "max" => lib::llvm::Max,
- "min" => lib::llvm::Min,
- "umax" => lib::llvm::UMax,
- "umin" => lib::llvm::UMin,
- _ => ccx.sess.fatal("Unknown atomic operation")
- };
-
- let old = AtomicRMW(bcx, atom_op, get_param(decl, first_real_arg),
- get_param(decl, first_real_arg + 1u),
- order);
- Ret(bcx, old);
- }
+ // The actual return type is a struct, but the ABI
+ // adaptation code has cast it into some scalar type. The
+ // code that follows is the only reliable way I have
+ // found to do a transform like i64 -> {i32,i32}.
+ // Basically we dump the data onto the stack then memcpy it.
+ //
+ // Other approaches I tried:
+ // - Casting rust ret pointer to the foreign type and using Store
+ // is (a) unsafe if size of foreign type > size of rust type and
+ // (b) runs afoul of strict aliasing rules, yielding invalid
+ // assembly under -O (specifically, the store gets removed).
+ // - Truncating foreign type to correct integral type and then
+ // bitcasting to the struct type yields invalid cast errors.
+ let llscratch = base::alloca(bcx, llforeign_ret_ty, "__cast");
+ Store(bcx, llforeign_retval, llscratch);
+ let llscratch_i8 = BitCast(bcx, llscratch, Type::i8().ptr_to());
+ let llretptr_i8 = BitCast(bcx, llretptr, Type::i8().ptr_to());
+ let llrust_size = machine::llsize_of_store(ccx, llrust_ret_ty);
+ let llforeign_align = machine::llalign_of_min(ccx, llforeign_ret_ty);
+ let llrust_align = machine::llalign_of_min(ccx, llrust_ret_ty);
+ let llalign = uint::min(llforeign_align, llrust_align);
+ debug!("llrust_size=%?", llrust_size);
+ base::call_memcpy(bcx, llretptr_i8, llscratch_i8,
+ C_uint(ccx, llrust_size), llalign as u32);
}
-
- fcx.cleanup();
- return;
}
- match name {
- "size_of" => {
- let tp_ty = substs.tys[0];
- let lltp_ty = type_of::type_of(ccx, tp_ty);
- Ret(bcx, C_uint(ccx, machine::llsize_of_real(ccx, lltp_ty)));
- }
- "move_val" => {
- // Create a datum reflecting the value being moved.
- // Use `appropriate_mode` so that the datum is by ref
- // if the value is non-immediate. Note that, with
- // intrinsics, there are no argument cleanups to
- // concern ourselves with.
- let tp_ty = substs.tys[0];
- let mode = appropriate_mode(ccx.tcx, tp_ty);
- let src = Datum {val: get_param(decl, first_real_arg + 1u),
- ty: tp_ty, mode: mode};
- bcx = src.move_to(bcx, DROP_EXISTING,
- get_param(decl, first_real_arg));
- RetVoid(bcx);
- }
- "move_val_init" => {
- // See comments for `"move_val"`.
- let tp_ty = substs.tys[0];
- let mode = appropriate_mode(ccx.tcx, tp_ty);
- let src = Datum {val: get_param(decl, first_real_arg + 1u),
- ty: tp_ty, mode: mode};
- bcx = src.move_to(bcx, INIT, get_param(decl, first_real_arg));
- RetVoid(bcx);
- }
- "min_align_of" => {
- let tp_ty = substs.tys[0];
- let lltp_ty = type_of::type_of(ccx, tp_ty);
- Ret(bcx, C_uint(ccx, machine::llalign_of_min(ccx, lltp_ty)));
- }
- "pref_align_of"=> {
- let tp_ty = substs.tys[0];
- let lltp_ty = type_of::type_of(ccx, tp_ty);
- Ret(bcx, C_uint(ccx, machine::llalign_of_pref(ccx, lltp_ty)));
- }
- "get_tydesc" => {
- let tp_ty = substs.tys[0];
- let static_ti = get_tydesc(ccx, tp_ty);
- glue::lazily_emit_all_tydesc_glue(ccx, static_ti);
-
- // FIXME (#3730): ideally this shouldn't need a cast,
- // but there's a circularity between translating rust types to llvm
- // types and having a tydesc type available. So I can't directly access
- // the llvm type of intrinsic::TyDesc struct.
- let userland_tydesc_ty = type_of::type_of(ccx, output_type);
- let td = PointerCast(bcx, static_ti.tydesc, userland_tydesc_ty);
- Ret(bcx, td);
- }
- "init" => {
- let tp_ty = substs.tys[0];
- let lltp_ty = type_of::type_of(ccx, tp_ty);
- match bcx.fcx.llretptr {
- Some(ptr) => { Store(bcx, C_null(lltp_ty), ptr); RetVoid(bcx); }
- None if ty::type_is_nil(tp_ty) => RetVoid(bcx),
- None => Ret(bcx, C_null(lltp_ty)),
- }
- }
- "uninit" => {
- // Do nothing, this is effectively a no-op
- let retty = substs.tys[0];
- if ty::type_is_immediate(ccx.tcx, retty) && !ty::type_is_nil(retty) {
- unsafe {
- Ret(bcx, lib::llvm::llvm::LLVMGetUndef(type_of(ccx, retty).to_ref()));
- }
- } else {
- RetVoid(bcx)
- }
- }
- "forget" => {
- RetVoid(bcx);
- }
- "transmute" => {
- let (in_type, out_type) = (substs.tys[0], substs.tys[1]);
- let llintype = type_of::type_of(ccx, in_type);
- let llouttype = type_of::type_of(ccx, out_type);
-
- let in_type_size = machine::llbitsize_of_real(ccx, llintype);
- let out_type_size = machine::llbitsize_of_real(ccx, llouttype);
- if in_type_size != out_type_size {
- let sp = match ccx.tcx.items.get_copy(&ref_id.unwrap()) {
- ast_map::node_expr(e) => e.span,
- _ => fail!("transmute has non-expr arg"),
- };
- let pluralize = |n| if 1u == n { "" } else { "s" };
- ccx.sess.span_fatal(sp,
- fmt!("transmute called on types with \
- different sizes: %s (%u bit%s) to \
- %s (%u bit%s)",
- ty_to_str(ccx.tcx, in_type),
- in_type_size,
- pluralize(in_type_size),
- ty_to_str(ccx.tcx, out_type),
- out_type_size,
- pluralize(out_type_size)));
- }
+ return bcx;
+}
- if !ty::type_is_nil(out_type) {
- let llsrcval = get_param(decl, first_real_arg);
- if ty::type_is_immediate(ccx.tcx, in_type) {
- match fcx.llretptr {
- Some(llretptr) => {
- Store(bcx, llsrcval, PointerCast(bcx, llretptr, llintype.ptr_to()));
- RetVoid(bcx);
- }
- None => match (llintype.kind(), llouttype.kind()) {
- (Pointer, other) | (other, Pointer) if other != Pointer => {
- let tmp = Alloca(bcx, llouttype, "");
- Store(bcx, llsrcval, PointerCast(bcx, tmp, llintype.ptr_to()));
- Ret(bcx, Load(bcx, tmp));
- }
- _ => Ret(bcx, BitCast(bcx, llsrcval, llouttype))
- }
- }
- } else if ty::type_is_immediate(ccx.tcx, out_type) {
- let llsrcptr = PointerCast(bcx, llsrcval, llouttype.ptr_to());
- Ret(bcx, Load(bcx, llsrcptr));
- } else {
- // NB: Do not use a Load and Store here. This causes massive
- // code bloat when `transmute` is used on large structural
- // types.
- let lldestptr = fcx.llretptr.unwrap();
- let lldestptr = PointerCast(bcx, lldestptr, Type::i8p());
- let llsrcptr = PointerCast(bcx, llsrcval, Type::i8p());
-
- let llsize = llsize_of(ccx, llintype);
- call_memcpy(bcx, lldestptr, llsrcptr, llsize, 1);
- RetVoid(bcx);
- };
- } else {
- RetVoid(bcx);
- }
- }
- "needs_drop" => {
- let tp_ty = substs.tys[0];
- Ret(bcx, C_bool(ty::type_needs_drop(ccx.tcx, tp_ty)));
- }
- "contains_managed" => {
- let tp_ty = substs.tys[0];
- Ret(bcx, C_bool(ty::type_contents(ccx.tcx, tp_ty).contains_managed()));
- }
- "visit_tydesc" => {
- let td = get_param(decl, first_real_arg);
- let visitor = get_param(decl, first_real_arg + 1u);
- let td = PointerCast(bcx, td, ccx.tydesc_type.ptr_to());
- glue::call_tydesc_glue_full(bcx, visitor, td,
- abi::tydesc_field_visit_glue, None);
- RetVoid(bcx);
- }
- "frame_address" => {
- let frameaddress = ccx.intrinsics.get_copy(& &"llvm.frameaddress");
- let frameaddress_val = Call(bcx, frameaddress, [C_i32(0i32)]);
- let star_u8 = ty::mk_imm_ptr(
- bcx.tcx(),
- ty::mk_mach_uint(ast::ty_u8));
- let fty = ty::mk_closure(bcx.tcx(), ty::ClosureTy {
- purity: ast::impure_fn,
- sigil: ast::BorrowedSigil,
- onceness: ast::Many,
- region: ty::re_bound(ty::br_anon(0)),
- bounds: ty::EmptyBuiltinBounds(),
- sig: FnSig {
- bound_lifetime_names: opt_vec::Empty,
- inputs: ~[ star_u8 ],
- output: ty::mk_nil()
- }
- });
- let datum = Datum {val: get_param(decl, first_real_arg),
- mode: ByRef(ZeroMem), ty: fty};
- let arg_vals = ~[frameaddress_val];
- bcx = trans_call_inner(
- bcx, None, fty, ty::mk_nil(),
- |bcx| Callee {bcx: bcx, data: Closure(datum)},
- ArgVals(arg_vals), Some(Ignore), DontAutorefArg).bcx;
- RetVoid(bcx);
- }
- "morestack_addr" => {
- // XXX This is a hack to grab the address of this particular
- // native function. There should be a general in-language
- // way to do this
- let llfty = type_of_fn(bcx.ccx(), [], ty::mk_nil());
- let morestack_addr = decl_cdecl_fn(
- bcx.ccx().llmod, "__morestack", llfty);
- let morestack_addr = PointerCast(bcx, morestack_addr, Type::nil().ptr_to());
- Ret(bcx, morestack_addr);
- }
- "offset" => {
- let ptr = get_param(decl, first_real_arg);
- let offset = get_param(decl, first_real_arg + 1);
- Ret(bcx, GEP(bcx, ptr, [offset]));
- }
- "offset_inbounds" => {
- let ptr = get_param(decl, first_real_arg);
- let offset = get_param(decl, first_real_arg + 1);
- Ret(bcx, InBoundsGEP(bcx, ptr, [offset]));
- }
- "memcpy32" => memcpy_intrinsic(bcx, "llvm.memcpy.p0i8.p0i8.i32", substs.tys[0], 32),
- "memcpy64" => memcpy_intrinsic(bcx, "llvm.memcpy.p0i8.p0i8.i64", substs.tys[0], 64),
- "memmove32" => memcpy_intrinsic(bcx, "llvm.memmove.p0i8.p0i8.i32", substs.tys[0], 32),
- "memmove64" => memcpy_intrinsic(bcx, "llvm.memmove.p0i8.p0i8.i64", substs.tys[0], 64),
- "memset32" => memset_intrinsic(bcx, "llvm.memset.p0i8.i32", substs.tys[0], 32),
- "memset64" => memset_intrinsic(bcx, "llvm.memset.p0i8.i64", substs.tys[0], 64),
- "sqrtf32" => simple_llvm_intrinsic(bcx, "llvm.sqrt.f32", 1),
- "sqrtf64" => simple_llvm_intrinsic(bcx, "llvm.sqrt.f64", 1),
- "powif32" => simple_llvm_intrinsic(bcx, "llvm.powi.f32", 2),
- "powif64" => simple_llvm_intrinsic(bcx, "llvm.powi.f64", 2),
- "sinf32" => simple_llvm_intrinsic(bcx, "llvm.sin.f32", 1),
- "sinf64" => simple_llvm_intrinsic(bcx, "llvm.sin.f64", 1),
- "cosf32" => simple_llvm_intrinsic(bcx, "llvm.cos.f32", 1),
- "cosf64" => simple_llvm_intrinsic(bcx, "llvm.cos.f64", 1),
- "powf32" => simple_llvm_intrinsic(bcx, "llvm.pow.f32", 2),
- "powf64" => simple_llvm_intrinsic(bcx, "llvm.pow.f64", 2),
- "expf32" => simple_llvm_intrinsic(bcx, "llvm.exp.f32", 1),
- "expf64" => simple_llvm_intrinsic(bcx, "llvm.exp.f64", 1),
- "exp2f32" => simple_llvm_intrinsic(bcx, "llvm.exp2.f32", 1),
- "exp2f64" => simple_llvm_intrinsic(bcx, "llvm.exp2.f64", 1),
- "logf32" => simple_llvm_intrinsic(bcx, "llvm.log.f32", 1),
- "logf64" => simple_llvm_intrinsic(bcx, "llvm.log.f64", 1),
- "log10f32" => simple_llvm_intrinsic(bcx, "llvm.log10.f32", 1),
- "log10f64" => simple_llvm_intrinsic(bcx, "llvm.log10.f64", 1),
- "log2f32" => simple_llvm_intrinsic(bcx, "llvm.log2.f32", 1),
- "log2f64" => simple_llvm_intrinsic(bcx, "llvm.log2.f64", 1),
- "fmaf32" => simple_llvm_intrinsic(bcx, "llvm.fma.f32", 3),
- "fmaf64" => simple_llvm_intrinsic(bcx, "llvm.fma.f64", 3),
- "fabsf32" => simple_llvm_intrinsic(bcx, "llvm.fabs.f32", 1),
- "fabsf64" => simple_llvm_intrinsic(bcx, "llvm.fabs.f64", 1),
- "floorf32" => simple_llvm_intrinsic(bcx, "llvm.floor.f32", 1),
- "floorf64" => simple_llvm_intrinsic(bcx, "llvm.floor.f64", 1),
- "ceilf32" => simple_llvm_intrinsic(bcx, "llvm.ceil.f32", 1),
- "ceilf64" => simple_llvm_intrinsic(bcx, "llvm.ceil.f64", 1),
- "truncf32" => simple_llvm_intrinsic(bcx, "llvm.trunc.f32", 1),
- "truncf64" => simple_llvm_intrinsic(bcx, "llvm.trunc.f64", 1),
- "ctpop8" => simple_llvm_intrinsic(bcx, "llvm.ctpop.i8", 1),
- "ctpop16" => simple_llvm_intrinsic(bcx, "llvm.ctpop.i16", 1),
- "ctpop32" => simple_llvm_intrinsic(bcx, "llvm.ctpop.i32", 1),
- "ctpop64" => simple_llvm_intrinsic(bcx, "llvm.ctpop.i64", 1),
- "ctlz8" => count_zeros_intrinsic(bcx, "llvm.ctlz.i8"),
- "ctlz16" => count_zeros_intrinsic(bcx, "llvm.ctlz.i16"),
- "ctlz32" => count_zeros_intrinsic(bcx, "llvm.ctlz.i32"),
- "ctlz64" => count_zeros_intrinsic(bcx, "llvm.ctlz.i64"),
- "cttz8" => count_zeros_intrinsic(bcx, "llvm.cttz.i8"),
- "cttz16" => count_zeros_intrinsic(bcx, "llvm.cttz.i16"),
- "cttz32" => count_zeros_intrinsic(bcx, "llvm.cttz.i32"),
- "cttz64" => count_zeros_intrinsic(bcx, "llvm.cttz.i64"),
- "bswap16" => simple_llvm_intrinsic(bcx, "llvm.bswap.i16", 1),
- "bswap32" => simple_llvm_intrinsic(bcx, "llvm.bswap.i32", 1),
- "bswap64" => simple_llvm_intrinsic(bcx, "llvm.bswap.i64", 1),
-
- "i8_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.sadd.with.overflow.i8"),
- "i16_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.sadd.with.overflow.i16"),
- "i32_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.sadd.with.overflow.i32"),
- "i64_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.sadd.with.overflow.i64"),
-
- "u8_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.uadd.with.overflow.i8"),
- "u16_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.uadd.with.overflow.i16"),
- "u32_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.uadd.with.overflow.i32"),
- "u64_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.uadd.with.overflow.i64"),
-
- "i8_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.ssub.with.overflow.i8"),
- "i16_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.ssub.with.overflow.i16"),
- "i32_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.ssub.with.overflow.i32"),
- "i64_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.ssub.with.overflow.i64"),
-
- "u8_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.usub.with.overflow.i8"),
- "u16_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.usub.with.overflow.i16"),
- "u32_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.usub.with.overflow.i32"),
- "u64_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.usub.with.overflow.i64"),
-
- "i8_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.smul.with.overflow.i8"),
- "i16_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.smul.with.overflow.i16"),
- "i32_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.smul.with.overflow.i32"),
- "i64_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.smul.with.overflow.i64"),
-
- "u8_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.umul.with.overflow.i8"),
- "u16_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.umul.with.overflow.i16"),
- "u32_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.umul.with.overflow.i32"),
- "u64_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.umul.with.overflow.i64"),
-
- _ => {
- // Could we make this an enum rather than a string? does it get
- // checked earlier?
- ccx.sess.span_bug(item.span, "unknown intrinsic");
- }
+pub fn trans_foreign_mod(ccx: @mut CrateContext,
+ foreign_mod: &ast::foreign_mod) {
+ let _icx = push_ctxt("foreign::trans_foreign_mod");
+ for &foreign_item in foreign_mod.items.iter() {
+ let lname = link_name(ccx, foreign_item);
+ ccx.item_symbols.insert(foreign_item.id, lname.to_owned());
}
- fcx.cleanup();
}
-/**
- * Translates a "crust" fn, meaning a Rust fn that can be called
- * from C code. In this case, we have to perform some adaptation
- * to (1) switch back to the Rust stack and (2) adapt the C calling
- * convention to our own.
- *
- * Example: Given a crust fn F(x: X, y: Y) -> Z, we generate a
- * Rust function R as normal:
- *
- * void R(Z* dest, void *env, X x, Y y) {...}
- *
- * and then we generate a wrapper function W that looks like:
- *
- * Z W(X x, Y y) {
- * struct { X x; Y y; Z *z; } args = { x, y, z };
- * call_on_c_stack_shim(S, &args);
- * }
- *
- * Note that the wrapper follows the foreign (typically "C") ABI.
- * The wrapper is the actual "value" of the foreign fn. Finally,
- * we generate a shim function S that looks like:
- *
- * void S(struct { X x; Y y; Z *z; } *args) {
- * R(args->z, NULL, args->x, args->y);
- * }
- */
-pub fn trans_foreign_fn(ccx: @mut CrateContext,
- path: ast_map::path,
- decl: &ast::fn_decl,
- body: &ast::Block,
- llwrapfn: ValueRef,
- id: ast::NodeId) {
+///////////////////////////////////////////////////////////////////////////
+// Rust functions with foreign ABIs
+//
+// These are normal Rust functions defined with foreign ABIs. For
+// now, and perhaps forever, we translate these using a "layer of
+// indirection". That is, given a Rust declaration like:
+//
+// extern "C" fn foo(i: u32) -> u32 { ... }
+//
+// we will generate a function like:
+//
+// S foo(T i) {
+// S r;
+// foo0(&r, NULL, i);
+// return r;
+// }
+//
+// #[inline_always]
+// void foo0(uint32_t *r, void *env, uint32_t i) { ... }
+//
+// Here the (internal) `foo0` function follows the Rust ABI as normal,
+// while the `foo` function follows the C ABI. We rely on LLVM to
+// inline the one into the other. Of course we could just generate the
+// correct code in the first place, but this is much simpler.
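+//
+// As a hypothetical illustration of why the wrapper matters to
+// callers (not code in this module): a Rust item such as
+//
+//     extern "C" fn foo(i: u32) -> u32 { i + 1 }
+//
+// can be handed to C as a plain function pointer and invoked there as
+//
+//     uint32_t call_it(uint32_t (*f)(uint32_t)) { return f(10); }
+//
+// The generated `foo` wrapper is what makes that call site follow the
+// C calling convention rather than the Rust one.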
+
+pub fn register_rust_fn_with_foreign_abi(ccx: @mut CrateContext,
+ sp: span,
+ sym: ~str,
+ node_id: ast::NodeId)
+ -> ValueRef {
+ let _icx = push_ctxt("foreign::register_foreign_fn");
+
+ let tys = foreign_types_for_id(ccx, node_id);
+ let llfn_ty = lltype_for_fn_from_foreign_types(&tys);
+ let llfn = base::register_fn_llvmty(ccx,
+ sp,
+ sym,
+ node_id,
+ lib::llvm::CCallConv,
+ llfn_ty);
+ add_argument_attributes(&tys, llfn);
+ debug!("register_rust_fn_with_foreign_abi(node_id=%?, llfn_ty=%s, llfn=%s)",
+ node_id, ccx.tn.type_to_str(llfn_ty), ccx.tn.val_to_str(llfn));
+ llfn
+}
+
+pub fn trans_rust_fn_with_foreign_abi(ccx: @mut CrateContext,
+ path: &ast_map::path,
+ decl: &ast::fn_decl,
+ body: &ast::Block,
+ llwrapfn: ValueRef,
+ id: ast::NodeId) {
let _icx = push_ctxt("foreign::build_foreign_fn");
+ let tys = foreign_types_for_id(ccx, id);
+
+ unsafe { // unsafe because we call LLVM operations
+ // Build up the Rust function (`foo0` above).
+ let llrustfn = build_rust_fn(ccx, path, decl, body, id);
+
+ // Build up the foreign wrapper (`foo` above).
+ return build_wrap_fn(ccx, llrustfn, llwrapfn, &tys);
+ }
fn build_rust_fn(ccx: @mut CrateContext,
path: &ast_map::path,
decl: &ast::fn_decl,
body: &ast::Block,
id: ast::NodeId)
- -> ValueRef {
+ -> ValueRef {
let _icx = push_ctxt("foreign::foreign::build_rust_fn");
- let t = ty::node_id_to_type(ccx.tcx, id);
- // XXX: Bad copy.
+ let tcx = ccx.tcx;
+ let t = ty::node_id_to_type(tcx, id);
let ps = link::mangle_internal_name_by_path(
- ccx,
- vec::append_one((*path).clone(),
- ast_map::path_name(
- special_idents::clownshoe_abi)));
+ ccx, vec::append_one((*path).clone(), ast_map::path_name(
+ special_idents::clownshoe_abi
+ )));
let llty = type_of_fn_from_ty(ccx, t);
- let llfndecl = decl_internal_cdecl_fn(ccx.llmod, ps, llty);
- trans_fn(ccx,
- (*path).clone(),
- decl,
- body,
- llfndecl,
- no_self,
- None,
- id,
- []);
+ let llfndecl = base::decl_internal_cdecl_fn(ccx.llmod, ps, llty);
+ base::trans_fn(ccx,
+ (*path).clone(),
+ decl,
+ body,
+ llfndecl,
+ base::no_self,
+ None,
+ id,
+ []);
return llfndecl;
}
- fn build_shim_fn(ccx: @mut CrateContext,
- path: ast_map::path,
- llrustfn: ValueRef,
- tys: &ShimTypes)
- -> ValueRef {
- /*!
- *
- * Generate the shim S:
- *
- * void S(struct { X x; Y y; Z *z; } *args) {
- * R(args->z, NULL, &args->x, args->y);
- * }
- *
- * One complication is that we must adapt to the Rust
- * calling convention, which introduces indirection
- * in some cases. To demonstrate this, I wrote one of the
- * entries above as `&args->x`, because presumably `X` is
- * one of those types that is passed by pointer in Rust.
- */
-
- let _icx = push_ctxt("foreign::foreign::build_shim_fn");
-
- fn build_args(bcx: @mut Block, tys: &ShimTypes, llargbundle: ValueRef)
- -> ~[ValueRef] {
- let _icx = push_ctxt("foreign::extern::shim::build_args");
- let ccx = bcx.ccx();
- let mut llargvals = ~[];
- let mut i = 0u;
- let n = tys.fn_sig.inputs.len();
-
- if !ty::type_is_immediate(bcx.tcx(), tys.fn_sig.output) {
- let llretptr = load_inbounds(bcx, llargbundle, [0u, n]);
- llargvals.push(llretptr);
+ unsafe fn build_wrap_fn(ccx: @mut CrateContext,
+ llrustfn: ValueRef,
+ llwrapfn: ValueRef,
+ tys: &ForeignTypes) {
+ let _icx = push_ctxt(
+ "foreign::trans_rust_fn_with_foreign_abi::build_wrap_fn");
+ let tcx = ccx.tcx;
+
+ debug!("build_wrap_fn(llrustfn=%s, llwrapfn=%s)",
+ ccx.tn.val_to_str(llrustfn),
+ ccx.tn.val_to_str(llwrapfn));
+
+ // Avoid all the Rust generation stuff and just generate raw
+ // LLVM here.
+ //
+ // We want to generate code like this:
+ //
+ // S foo(T i) {
+ // S r;
+ // foo0(&r, NULL, i);
+ // return r;
+ // }
+
+ let the_block =
+ "the block".to_c_str().with_ref(
+ |s| llvm::LLVMAppendBasicBlockInContext(ccx.llcx, llwrapfn, s));
+
+ let builder = ccx.builder.B;
+ llvm::LLVMPositionBuilderAtEnd(builder, the_block);
+
+ // Array for the arguments we will pass to the rust function.
+ let mut llrust_args = ~[];
+ let mut next_foreign_arg_counter: c_uint = 0;
+ let next_foreign_arg: &fn() -> c_uint = {
+ || {
+ next_foreign_arg_counter += 1;
+ next_foreign_arg_counter - 1
}
+ };
- let llenvptr = C_null(Type::opaque_box(bcx.ccx()).ptr_to());
- llargvals.push(llenvptr);
- while i < n {
- // Get a pointer to the argument:
- let mut llargval = GEPi(bcx, llargbundle, [0u, i]);
+ // If there is an out pointer on the foreign function
+ let foreign_outptr = {
+ if tys.fn_ty.sret {
+ Some(llvm::LLVMGetParam(llwrapfn, next_foreign_arg()))
+ } else {
+ None
+ }
+ };
- if !type_of::arg_is_indirect(ccx, &tys.fn_sig.inputs[i]) {
- // If Rust would pass this by value, load the value.
- llargval = Load(bcx, llargval);
+ // Push Rust return pointer, using null if it will be unused.
+ let rust_uses_outptr =
+ type_of::return_uses_outptr(tcx, tys.fn_sig.output);
+ let return_alloca: Option<ValueRef>;
+ let llrust_ret_ty = tys.llsig.llret_ty;
+ let llrust_retptr_ty = llrust_ret_ty.ptr_to();
+ if rust_uses_outptr {
+ // Rust expects to use an outpointer. If the foreign fn
+ // also uses an outpointer, we can reuse it, but the types
+ // may vary, so cast first to the Rust type. If the
+ // foreign fn does NOT use an outpointer, we will have to
+ // alloca some scratch space on the stack.
+ match foreign_outptr {
+ Some(llforeign_outptr) => {
+ debug!("out pointer, foreign=%s",
+ ccx.tn.val_to_str(llforeign_outptr));
+ let llrust_retptr =
+ llvm::LLVMBuildBitCast(builder,
+ llforeign_outptr,
+ llrust_ret_ty.ptr_to().to_ref(),
+ noname());
+ debug!("out pointer, foreign=%s (casted)",
+ ccx.tn.val_to_str(llrust_retptr));
+ llrust_args.push(llrust_retptr);
+ return_alloca = None;
}
- llargvals.push(llargval);
- i += 1u;
+ None => {
+ let slot = {
+ "return_alloca".to_c_str().with_ref(
+ |s| llvm::LLVMBuildAlloca(builder,
+ llrust_ret_ty.to_ref(),
+ s))
+ };
+ debug!("out pointer, \
+ allocad=%s, \
+ llrust_ret_ty=%s, \
+ return_ty=%s",
+ ccx.tn.val_to_str(slot),
+ ccx.tn.type_to_str(llrust_ret_ty),
+ tys.fn_sig.output.repr(tcx));
+ llrust_args.push(slot);
+ return_alloca = Some(slot);
+ }
+ }
+ } else {
+ // Rust does not expect an outpointer. If the foreign fn
+ // does use an outpointer, then we will do a store of the
+ // value that the Rust fn returns.
+ return_alloca = None;
+ };
+
+ // Push a (null) env pointer
+ let env_pointer = base::null_env_ptr(ccx);
+ debug!("env pointer=%s", ccx.tn.val_to_str(env_pointer));
+ llrust_args.push(env_pointer);
+
+ // Build up the arguments to the call to the rust function.
+ // Careful to adapt for cases where the native convention uses
+ // a pointer and Rust does not or vice versa.
+ for i in range(0, tys.fn_sig.inputs.len()) {
+ let rust_ty = tys.fn_sig.inputs[i];
+ let llrust_ty = tys.llsig.llarg_tys[i];
+ let foreign_index = next_foreign_arg();
+ let rust_indirect = type_of::arg_is_indirect(ccx, rust_ty);
+ let foreign_indirect = tys.fn_ty.attrs[foreign_index].is_some();
+ let mut llforeign_arg = llvm::LLVMGetParam(llwrapfn, foreign_index);
+
+ debug!("llforeign_arg #%u: %s",
+ i, ccx.tn.val_to_str(llforeign_arg));
+ debug!("rust_indirect = %b, foreign_indirect = %b",
+ rust_indirect, foreign_indirect);
+
+ // Ensure that the foreign argument is indirect (by
+ // pointer). It makes adapting types easier, since we can
+ // always just bitcast pointers.
+ if !foreign_indirect {
+ let lltemp =
+ llvm::LLVMBuildAlloca(
+ builder, val_ty(llforeign_arg).to_ref(), noname());
+ llvm::LLVMBuildStore(
+ builder, llforeign_arg, lltemp);
+ llforeign_arg = lltemp;
+ }
+
+ // If the types in the ABI and the Rust types don't match,
+ // bitcast the llforeign_arg pointer so it matches the types
+ // Rust expects.
+ if tys.fn_ty.arg_tys[foreign_index].cast {
+ assert!(!foreign_indirect);
+ llforeign_arg = llvm::LLVMBuildBitCast(
+ builder, llforeign_arg,
+ llrust_ty.ptr_to().to_ref(), noname());
}
- return llargvals;
- }
- fn build_ret(bcx: @mut Block,
- shim_types: &ShimTypes,
- llargbundle: ValueRef,
- llretval: ValueRef) {
- if bcx.fcx.llretptr.is_some() &&
- ty::type_is_immediate(bcx.tcx(), shim_types.fn_sig.output) {
- // Write the value into the argument bundle.
- let arg_count = shim_types.fn_sig.inputs.len();
- let llretptr = load_inbounds(bcx,
- llargbundle,
- [0, arg_count]);
- Store(bcx, llretval, llretptr);
+ let llrust_arg = if rust_indirect {
+ llforeign_arg
} else {
- // NB: The return pointer in the Rust ABI function is wired
- // directly into the return slot in the shim struct.
+ llvm::LLVMBuildLoad(builder, llforeign_arg, noname())
+ };
+
+ debug!("llrust_arg #%u: %s",
+ i, ccx.tn.val_to_str(llrust_arg));
+ llrust_args.push(llrust_arg);
+ }
+
+ // Perform the call itself
+ let llrust_ret_val = do llrust_args.as_imm_buf |ptr, len| {
+ debug!("calling llrustfn = %s", ccx.tn.val_to_str(llrustfn));
+ llvm::LLVMBuildCall(builder, llrustfn, ptr,
+ len as c_uint, noname())
+ };
+
+ // Get the return value where the foreign fn expects it.
+ let llforeign_ret_ty = tys.fn_ty.ret_ty.ty;
+ match foreign_outptr {
+ None if !tys.ret_def => {
+ // Function returns `()` or `bot`, which in Rust is the LLVM
+ // type "{}" but in foreign ABIs is "Void".
+ llvm::LLVMBuildRetVoid(builder);
+ }
+
+ None if rust_uses_outptr => {
+ // Rust uses an outpointer, but the foreign ABI does not. Load.
+ let llrust_outptr = return_alloca.unwrap();
+ let llforeign_outptr_casted =
+ llvm::LLVMBuildBitCast(builder,
+ llrust_outptr,
+ llforeign_ret_ty.ptr_to().to_ref(),
+ noname());
+ let llforeign_retval =
+ llvm::LLVMBuildLoad(builder, llforeign_outptr_casted, noname());
+ llvm::LLVMBuildRet(builder, llforeign_retval);
+ }
+
+ None if llforeign_ret_ty != llrust_ret_ty => {
+ // Neither ABI uses an outpointer, but the types don't
+ // quite match. Must cast. Probably we should try and
+ // examine the types and use a concrete llvm cast, but
+ // right now we just use a temp memory location and
+ // bitcast the pointer, which is the same thing the
+ // old wrappers used to do.
+ let lltemp =
+ llvm::LLVMBuildAlloca(
+ builder, llforeign_ret_ty.to_ref(), noname());
+ let lltemp_casted =
+ llvm::LLVMBuildBitCast(builder,
+ lltemp,
+ llrust_ret_ty.ptr_to().to_ref(),
+ noname());
+ llvm::LLVMBuildStore(
+ builder, llrust_ret_val, lltemp_casted);
+ let llforeign_retval =
+ llvm::LLVMBuildLoad(builder, lltemp, noname());
+ llvm::LLVMBuildRet(builder, llforeign_retval);
+ }
+
+ None => {
+ // Neither ABI uses an outpointer, and the types
+ // match. Easy peasy.
+ llvm::LLVMBuildRet(builder, llrust_ret_val);
+ }
+
+ Some(llforeign_outptr) if !rust_uses_outptr => {
+ // Foreign ABI requires an out pointer, but Rust doesn't.
+ // Store Rust return value.
+ let llforeign_outptr_casted =
+ llvm::LLVMBuildBitCast(builder,
+ llforeign_outptr,
+ llrust_retptr_ty.to_ref(),
+ noname());
+ llvm::LLVMBuildStore(
+ builder, llrust_ret_val, llforeign_outptr_casted);
+ llvm::LLVMBuildRetVoid(builder);
+ }
+
+ Some(_) => {
+ // Both ABIs use outpointers. Easy peasy.
+ llvm::LLVMBuildRetVoid(builder);
}
}
+ }
+}
- let shim_name = link::mangle_internal_name_by_path(
- ccx,
- vec::append_one(path, ast_map::path_name(
- special_idents::clownshoe_stack_shim
- )));
- build_shim_fn_(ccx,
- shim_name,
- llrustfn,
- tys,
- lib::llvm::CCallConv,
- build_args,
- build_ret)
+///////////////////////////////////////////////////////////////////////////
+// General ABI Support
+//
+// This code is kind of a confused mess and needs to be reworked given
+// the massive simplifications that have occurred.
+
+pub fn link_name(ccx: &CrateContext, i: @ast::foreign_item) -> @str {
+ match attr::first_attr_value_str_by_name(i.attrs, "link_name") {
+ None => ccx.sess.str_of(i.ident),
+ Some(ln) => ln,
}
+}
- fn build_wrap_fn(ccx: @mut CrateContext,
- llshimfn: ValueRef,
- llwrapfn: ValueRef,
- tys: &ShimTypes) {
- /*!
- *
- * Generate the wrapper W:
- *
- * Z W(X x, Y y) {
- * struct { X x; Y y; Z *z; } args = { x, y, z };
- * call_on_c_stack_shim(S, &args);
- * }
- */
-
- let _icx = push_ctxt("foreign::foreign::build_wrap_fn");
-
- build_wrap_fn_(ccx,
- tys,
- llshimfn,
- llwrapfn,
- ccx.upcalls.call_shim_on_rust_stack,
- true,
- build_args,
- build_ret);
-
- fn build_args(bcx: @mut Block,
- tys: &ShimTypes,
- llwrapfn: ValueRef,
- llargbundle: ValueRef) {
- let _icx = push_ctxt("foreign::foreign::wrap::build_args");
- tys.fn_ty.build_wrap_args(bcx,
- tys.llsig.llret_ty,
- llwrapfn,
- llargbundle);
- }
+fn foreign_signature(ccx: &mut CrateContext, fn_sig: &ty::FnSig)
+ -> LlvmSignature {
+ /*!
+ * The LlvmSignature gives the LLVM types of the arguments/return type
+ * of a function. Note that these LLVM types are not quite the same
+ * as the LLVM types would be for a native Rust function, because foreign
+ * functions just plain ignore modes. They also don't pass aggregate
+ * values by pointer like we do.
+ */
- fn build_ret(bcx: @mut Block, tys: &ShimTypes, llargbundle: ValueRef) {
- let _icx = push_ctxt("foreign::foreign::wrap::build_ret");
- tys.fn_ty.build_wrap_ret(bcx, tys.llsig.llarg_tys, llargbundle);
- }
+ let llarg_tys = fn_sig.inputs.map(|&arg| type_of(ccx, arg));
+ let llret_ty = type_of::type_of(ccx, fn_sig.output);
+ LlvmSignature {
+ llarg_tys: llarg_tys,
+ llret_ty: llret_ty,
+ sret: type_of::return_uses_outptr(ccx.tcx, fn_sig.output),
}
+}
- let tys = shim_types(ccx, id);
- // The internal Rust ABI function - runs on the Rust stack
- // XXX: Bad copy.
- let llrustfn = build_rust_fn(ccx, &path, decl, body, id);
- // The internal shim function - runs on the Rust stack
- let llshimfn = build_shim_fn(ccx, path, llrustfn, &tys);
- // The foreign C function - runs on the C stack
- build_wrap_fn(ccx, llshimfn, llwrapfn, &tys)
+fn foreign_types_for_id(ccx: &mut CrateContext,
+ id: ast::NodeId) -> ForeignTypes {
+ foreign_types_for_fn_ty(ccx, ty::node_id_to_type(ccx.tcx, id))
}
-pub fn register_foreign_fn(ccx: @mut CrateContext,
- sp: span,
- sym: ~str,
- node_id: ast::NodeId)
- -> ValueRef {
- let _icx = push_ctxt("foreign::register_foreign_fn");
+fn foreign_types_for_fn_ty(ccx: &mut CrateContext,
+ ty: ty::t) -> ForeignTypes {
+ let fn_sig = match ty::get(ty).sty {
+ ty::ty_bare_fn(ref fn_ty) => fn_ty.sig.clone(),
+ _ => ccx.sess.bug("foreign_types_for_fn_ty called on non-function type")
+ };
+ let llsig = foreign_signature(ccx, &fn_sig);
+ let ret_def = !ty::type_is_voidish(fn_sig.output);
+ let fn_ty = cabi::compute_abi_info(ccx,
+ llsig.llarg_tys,
+ llsig.llret_ty,
+ ret_def);
+ debug!("foreign_types_for_fn_ty(\
+ ty=%s, \
+ llsig=%s -> %s, \
+ fn_ty=%s -> %s, \
+ ret_def=%b",
+ ty.repr(ccx.tcx),
+ ccx.tn.types_to_str(llsig.llarg_tys),
+ ccx.tn.type_to_str(llsig.llret_ty),
+ ccx.tn.types_to_str(fn_ty.arg_tys.map(|t| t.ty)),
+ ccx.tn.type_to_str(fn_ty.ret_ty.ty),
+ ret_def);
+
+ ForeignTypes {
+ fn_sig: fn_sig,
+ llsig: llsig,
+ ret_def: ret_def,
+ fn_ty: fn_ty
+ }
+}
- let sym = Cell::new(sym);
+fn lltype_for_fn_from_foreign_types(tys: &ForeignTypes) -> Type {
+ let llargument_tys: ~[Type] =
+ tys.fn_ty.arg_tys.iter().map(|t| t.ty).collect();
+ let llreturn_ty = tys.fn_ty.ret_ty.ty;
+ Type::func(llargument_tys, &llreturn_ty)
+}
+
+pub fn lltype_for_foreign_fn(ccx: &mut CrateContext, ty: ty::t) -> Type {
+ let fn_types = foreign_types_for_fn_ty(ccx, ty);
+ lltype_for_fn_from_foreign_types(&fn_types)
+}
- let tys = shim_types(ccx, node_id);
- do tys.fn_ty.decl_fn |fnty| {
- register_fn_llvmty(ccx, sp, sym.take(), node_id, lib::llvm::CCallConv, fnty)
+fn add_argument_attributes(tys: &ForeignTypes,
+ llfn: ValueRef) {
+ for (i, a) in tys.fn_ty.attrs.iter().enumerate() {
+ match *a {
+ Some(attr) => {
+ let llarg = get_param(llfn, i);
+ unsafe {
+ llvm::LLVMAddAttribute(llarg, attr as c_uint);
+ }
+ }
+ None => ()
+ }
}
}
--- /dev/null
+// Copyright 2012-2013 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use back::{abi};
+use lib::llvm::{SequentiallyConsistent, Acquire, Release, Xchg};
+use lib::llvm::{ValueRef, Pointer};
+use lib;
+use middle::trans::base::*;
+use middle::trans::build::*;
+use middle::trans::callee::*;
+use middle::trans::common::*;
+use middle::trans::datum::*;
+use middle::trans::type_of::*;
+use middle::trans::type_of;
+use middle::trans::expr::Ignore;
+use middle::trans::machine;
+use middle::trans::glue;
+use middle::ty::FnSig;
+use middle::ty;
+use syntax::ast;
+use syntax::ast_map;
+use syntax::attr;
+use syntax::opt_vec;
+use util::ppaux::{ty_to_str};
+use middle::trans::machine::llsize_of;
+use middle::trans::type_::Type;
+
+pub fn trans_intrinsic(ccx: @mut CrateContext,
+ decl: ValueRef,
+ item: &ast::foreign_item,
+ path: ast_map::path,
+ substs: @param_substs,
+ attributes: &[ast::Attribute],
+ ref_id: Option<ast::NodeId>) {
+ debug!("trans_intrinsic(item.ident=%s)", ccx.sess.str_of(item.ident));
+
+ fn simple_llvm_intrinsic(bcx: @mut Block, name: &'static str, num_args: uint) {
+ assert!(num_args <= 4);
+ let mut args = [0 as ValueRef, ..4];
+ let first_real_arg = bcx.fcx.arg_pos(0u);
+ for i in range(0u, num_args) {
+ args[i] = get_param(bcx.fcx.llfn, first_real_arg + i);
+ }
+ let llfn = bcx.ccx().intrinsics.get_copy(&name);
+ Ret(bcx, Call(bcx, llfn, args.slice(0, num_args)));
+ }
+
+ fn with_overflow_instrinsic(bcx: @mut Block, name: &'static str) {
+ let first_real_arg = bcx.fcx.arg_pos(0u);
+ let a = get_param(bcx.fcx.llfn, first_real_arg);
+ let b = get_param(bcx.fcx.llfn, first_real_arg + 1);
+ let llfn = bcx.ccx().intrinsics.get_copy(&name);
+
+ // convert `i1` to a `bool`, and write to the out parameter
+ let val = Call(bcx, llfn, [a, b]);
+ let result = ExtractValue(bcx, val, 0);
+ let overflow = ZExt(bcx, ExtractValue(bcx, val, 1), Type::bool());
+ let retptr = get_param(bcx.fcx.llfn, bcx.fcx.out_arg_pos());
+ let ret = Load(bcx, retptr);
+ let ret = InsertValue(bcx, ret, result, 0);
+ let ret = InsertValue(bcx, ret, overflow, 1);
+ Store(bcx, ret, retptr);
+ RetVoid(bcx)
+ }
+
+ fn memcpy_intrinsic(bcx: @mut Block, name: &'static str, tp_ty: ty::t, sizebits: u8) {
+ let ccx = bcx.ccx();
+ let lltp_ty = type_of::type_of(ccx, tp_ty);
+ let align = C_i32(machine::llalign_of_min(ccx, lltp_ty) as i32);
+ let size = match sizebits {
+ 32 => C_i32(machine::llsize_of_real(ccx, lltp_ty) as i32),
+ 64 => C_i64(machine::llsize_of_real(ccx, lltp_ty) as i64),
+ _ => ccx.sess.fatal("Invalid value for sizebits")
+ };
+
+ let decl = bcx.fcx.llfn;
+ let first_real_arg = bcx.fcx.arg_pos(0u);
+ let dst_ptr = PointerCast(bcx, get_param(decl, first_real_arg), Type::i8p());
+ let src_ptr = PointerCast(bcx, get_param(decl, first_real_arg + 1), Type::i8p());
+ let count = get_param(decl, first_real_arg + 2);
+ let volatile = C_i1(false);
+ let llfn = bcx.ccx().intrinsics.get_copy(&name);
+ Call(bcx, llfn, [dst_ptr, src_ptr, Mul(bcx, size, count), align, volatile]);
+ RetVoid(bcx);
+ }
+
+ fn memset_intrinsic(bcx: @mut Block, name: &'static str, tp_ty: ty::t, sizebits: u8) {
+ let ccx = bcx.ccx();
+ let lltp_ty = type_of::type_of(ccx, tp_ty);
+ let align = C_i32(machine::llalign_of_min(ccx, lltp_ty) as i32);
+ let size = match sizebits {
+ 32 => C_i32(machine::llsize_of_real(ccx, lltp_ty) as i32),
+ 64 => C_i64(machine::llsize_of_real(ccx, lltp_ty) as i64),
+ _ => ccx.sess.fatal("Invalid value for sizebits")
+ };
+
+ let decl = bcx.fcx.llfn;
+ let first_real_arg = bcx.fcx.arg_pos(0u);
+ let dst_ptr = PointerCast(bcx, get_param(decl, first_real_arg), Type::i8p());
+ let val = get_param(decl, first_real_arg + 1);
+ let count = get_param(decl, first_real_arg + 2);
+ let volatile = C_i1(false);
+ let llfn = bcx.ccx().intrinsics.get_copy(&name);
+ Call(bcx, llfn, [dst_ptr, val, Mul(bcx, size, count), align, volatile]);
+ RetVoid(bcx);
+ }
+
+ fn count_zeros_intrinsic(bcx: @mut Block, name: &'static str) {
+ let x = get_param(bcx.fcx.llfn, bcx.fcx.arg_pos(0u));
+ let y = C_i1(false);
+ let llfn = bcx.ccx().intrinsics.get_copy(&name);
+ Ret(bcx, Call(bcx, llfn, [x, y]));
+ }
+
+ let output_type = ty::ty_fn_ret(ty::node_id_to_type(ccx.tcx, item.id));
+
+ let fcx = new_fn_ctxt_w_id(ccx,
+ path,
+ decl,
+ item.id,
+ output_type,
+ true,
+ Some(substs),
+ None,
+ Some(item.span));
+
+ set_always_inline(fcx.llfn);
+
+ // Set the fixed stack segment flag if necessary.
+ if attr::contains_name(attributes, "fixed_stack_segment") {
+ set_fixed_stack_segment(fcx.llfn);
+ }
+
+ let mut bcx = fcx.entry_bcx.unwrap();
+ let first_real_arg = fcx.arg_pos(0u);
+
+ let nm = ccx.sess.str_of(item.ident);
+ let name = nm.as_slice();
+
+ // This requires that atomic intrinsics follow a specific naming pattern:
+ // "atomic_<operation>[_<ordering>]"; omitting the ordering means SeqCst.
+ if name.starts_with("atomic_") {
+ let split : ~[&str] = name.split_iter('_').collect();
+ assert!(split.len() >= 2, "Atomic intrinsic name has incorrect format");
+ let order = if split.len() == 2 {
+ lib::llvm::SequentiallyConsistent
+ } else {
+ match split[2] {
+ "relaxed" => lib::llvm::Monotonic,
+ "acq" => lib::llvm::Acquire,
+ "rel" => lib::llvm::Release,
+ "acqrel" => lib::llvm::AcquireRelease,
+ _ => ccx.sess.fatal("Unknown ordering in atomic intrinsic")
+ }
+ };
+
+ match split[1] {
+ "cxchg" => {
+ let old = AtomicCmpXchg(bcx, get_param(decl, first_real_arg),
+ get_param(decl, first_real_arg + 1u),
+ get_param(decl, first_real_arg + 2u),
+ order);
+ Ret(bcx, old);
+ }
+ "load" => {
+ let old = AtomicLoad(bcx, get_param(decl, first_real_arg),
+ order);
+ Ret(bcx, old);
+ }
+ "store" => {
+ AtomicStore(bcx, get_param(decl, first_real_arg + 1u),
+ get_param(decl, first_real_arg),
+ order);
+ RetVoid(bcx);
+ }
+ "fence" => {
+ AtomicFence(bcx, order);
+ RetVoid(bcx);
+ }
+ op => {
+ // These are all AtomicRMW ops
+ let atom_op = match op {
+ "xchg" => lib::llvm::Xchg,
+ "xadd" => lib::llvm::Add,
+ "xsub" => lib::llvm::Sub,
+ "and" => lib::llvm::And,
+ "nand" => lib::llvm::Nand,
+ "or" => lib::llvm::Or,
+ "xor" => lib::llvm::Xor,
+ "max" => lib::llvm::Max,
+ "min" => lib::llvm::Min,
+ "umax" => lib::llvm::UMax,
+ "umin" => lib::llvm::UMin,
+ _ => ccx.sess.fatal("Unknown atomic operation")
+ };
+
+ let old = AtomicRMW(bcx, atom_op, get_param(decl, first_real_arg),
+ get_param(decl, first_real_arg + 1u),
+ order);
+ Ret(bcx, old);
+ }
+ }
+
+ fcx.cleanup();
+ return;
+ }
+
+ match name {
+ "size_of" => {
+ let tp_ty = substs.tys[0];
+ let lltp_ty = type_of::type_of(ccx, tp_ty);
+ Ret(bcx, C_uint(ccx, machine::llsize_of_real(ccx, lltp_ty)));
+ }
+ "move_val" => {
+ // Create a datum reflecting the value being moved.
+ // Use `appropriate_mode` so that the datum is by ref
+ // if the value is non-immediate. Note that, with
+ // intrinsics, there are no argument cleanups to
+ // concern ourselves with.
+ let tp_ty = substs.tys[0];
+ let mode = appropriate_mode(ccx.tcx, tp_ty);
+ let src = Datum {val: get_param(decl, first_real_arg + 1u),
+ ty: tp_ty, mode: mode};
+ bcx = src.move_to(bcx, DROP_EXISTING,
+ get_param(decl, first_real_arg));
+ RetVoid(bcx);
+ }
+ "move_val_init" => {
+ // See comments for `"move_val"`.
+ let tp_ty = substs.tys[0];
+ let mode = appropriate_mode(ccx.tcx, tp_ty);
+ let src = Datum {val: get_param(decl, first_real_arg + 1u),
+ ty: tp_ty, mode: mode};
+ bcx = src.move_to(bcx, INIT, get_param(decl, first_real_arg));
+ RetVoid(bcx);
+ }
+ "min_align_of" => {
+ let tp_ty = substs.tys[0];
+ let lltp_ty = type_of::type_of(ccx, tp_ty);
+ Ret(bcx, C_uint(ccx, machine::llalign_of_min(ccx, lltp_ty)));
+ }
+ "pref_align_of"=> {
+ let tp_ty = substs.tys[0];
+ let lltp_ty = type_of::type_of(ccx, tp_ty);
+ Ret(bcx, C_uint(ccx, machine::llalign_of_pref(ccx, lltp_ty)));
+ }
+ "get_tydesc" => {
+ let tp_ty = substs.tys[0];
+ let static_ti = get_tydesc(ccx, tp_ty);
+ glue::lazily_emit_all_tydesc_glue(ccx, static_ti);
+
+ // FIXME (#3730): ideally this shouldn't need a cast,
+ // but there's a circularity between translating rust types to llvm
+ // types and having a tydesc type available. So I can't directly access
+ // the llvm type of intrinsic::TyDesc struct.
+ let userland_tydesc_ty = type_of::type_of(ccx, output_type);
+ let td = PointerCast(bcx, static_ti.tydesc, userland_tydesc_ty);
+ Ret(bcx, td);
+ }
+ "init" => {
+ let tp_ty = substs.tys[0];
+ let lltp_ty = type_of::type_of(ccx, tp_ty);
+ match bcx.fcx.llretptr {
+ Some(ptr) => { Store(bcx, C_null(lltp_ty), ptr); RetVoid(bcx); }
+ None if ty::type_is_nil(tp_ty) => RetVoid(bcx),
+ None => Ret(bcx, C_null(lltp_ty)),
+ }
+ }
+ "uninit" => {
+ // Do nothing, this is effectively a no-op
+ let retty = substs.tys[0];
+ if ty::type_is_immediate(ccx.tcx, retty) && !ty::type_is_nil(retty) {
+ unsafe {
+ Ret(bcx, lib::llvm::llvm::LLVMGetUndef(type_of(ccx, retty).to_ref()));
+ }
+ } else {
+ RetVoid(bcx)
+ }
+ }
+ "forget" => {
+ RetVoid(bcx);
+ }
+ "transmute" => {
+ let (in_type, out_type) = (substs.tys[0], substs.tys[1]);
+ let llintype = type_of::type_of(ccx, in_type);
+ let llouttype = type_of::type_of(ccx, out_type);
+
+ let in_type_size = machine::llbitsize_of_real(ccx, llintype);
+ let out_type_size = machine::llbitsize_of_real(ccx, llouttype);
+ if in_type_size != out_type_size {
+ let sp = match ccx.tcx.items.get_copy(&ref_id.unwrap()) {
+ ast_map::node_expr(e) => e.span,
+ _ => fail!("transmute has non-expr arg"),
+ };
+ let pluralize = |n| if 1u == n { "" } else { "s" };
+ ccx.sess.span_fatal(sp,
+ fmt!("transmute called on types with \
+ different sizes: %s (%u bit%s) to \
+ %s (%u bit%s)",
+ ty_to_str(ccx.tcx, in_type),
+ in_type_size,
+ pluralize(in_type_size),
+ ty_to_str(ccx.tcx, out_type),
+ out_type_size,
+ pluralize(out_type_size)));
+ }
+
+ if !ty::type_is_voidish(out_type) {
+ let llsrcval = get_param(decl, first_real_arg);
+ if ty::type_is_immediate(ccx.tcx, in_type) {
+ match fcx.llretptr {
+ Some(llretptr) => {
+ Store(bcx, llsrcval, PointerCast(bcx, llretptr, llintype.ptr_to()));
+ RetVoid(bcx);
+ }
+ None => match (llintype.kind(), llouttype.kind()) {
+ (Pointer, other) | (other, Pointer) if other != Pointer => {
+ let tmp = Alloca(bcx, llouttype, "");
+ Store(bcx, llsrcval, PointerCast(bcx, tmp, llintype.ptr_to()));
+ Ret(bcx, Load(bcx, tmp));
+ }
+ _ => Ret(bcx, BitCast(bcx, llsrcval, llouttype))
+ }
+ }
+ } else if ty::type_is_immediate(ccx.tcx, out_type) {
+ let llsrcptr = PointerCast(bcx, llsrcval, llouttype.ptr_to());
+ Ret(bcx, Load(bcx, llsrcptr));
+ } else {
+ // NB: Do not use a Load and Store here. This causes massive
+ // code bloat when `transmute` is used on large structural
+ // types.
+ let lldestptr = fcx.llretptr.unwrap();
+ let lldestptr = PointerCast(bcx, lldestptr, Type::i8p());
+ let llsrcptr = PointerCast(bcx, llsrcval, Type::i8p());
+
+ let llsize = llsize_of(ccx, llintype);
+ call_memcpy(bcx, lldestptr, llsrcptr, llsize, 1);
+ RetVoid(bcx);
+ };
+ } else {
+ RetVoid(bcx);
+ }
+ }
+ "needs_drop" => {
+ let tp_ty = substs.tys[0];
+ Ret(bcx, C_bool(ty::type_needs_drop(ccx.tcx, tp_ty)));
+ }
+ "contains_managed" => {
+ let tp_ty = substs.tys[0];
+ Ret(bcx, C_bool(ty::type_contents(ccx.tcx, tp_ty).contains_managed()));
+ }
+ "visit_tydesc" => {
+ let td = get_param(decl, first_real_arg);
+ let visitor = get_param(decl, first_real_arg + 1u);
+ let td = PointerCast(bcx, td, ccx.tydesc_type.ptr_to());
+ glue::call_tydesc_glue_full(bcx, visitor, td,
+ abi::tydesc_field_visit_glue, None);
+ RetVoid(bcx);
+ }
+ "frame_address" => {
+ let frameaddress = ccx.intrinsics.get_copy(& &"llvm.frameaddress");
+ let frameaddress_val = Call(bcx, frameaddress, [C_i32(0i32)]);
+ let star_u8 = ty::mk_imm_ptr(
+ bcx.tcx(),
+ ty::mk_mach_uint(ast::ty_u8));
+ let fty = ty::mk_closure(bcx.tcx(), ty::ClosureTy {
+ purity: ast::impure_fn,
+ sigil: ast::BorrowedSigil,
+ onceness: ast::Many,
+ region: ty::re_bound(ty::br_anon(0)),
+ bounds: ty::EmptyBuiltinBounds(),
+ sig: FnSig {
+ bound_lifetime_names: opt_vec::Empty,
+ inputs: ~[ star_u8 ],
+ output: ty::mk_nil()
+ }
+ });
+ let datum = Datum {val: get_param(decl, first_real_arg),
+ mode: ByRef(ZeroMem), ty: fty};
+ let arg_vals = ~[frameaddress_val];
+ bcx = trans_call_inner(
+ bcx, None, fty, ty::mk_nil(),
+ |bcx| Callee {bcx: bcx, data: Closure(datum)},
+ ArgVals(arg_vals), Some(Ignore), DontAutorefArg).bcx;
+ RetVoid(bcx);
+ }
+ "morestack_addr" => {
+ // XXX This is a hack to grab the address of this particular
+ // native function. There should be a general in-language
+ // way to do this
+ let llfty = type_of_rust_fn(bcx.ccx(), [], ty::mk_nil());
+ let morestack_addr = decl_cdecl_fn(
+ bcx.ccx().llmod, "__morestack", llfty);
+ let morestack_addr = PointerCast(bcx, morestack_addr, Type::nil().ptr_to());
+ Ret(bcx, morestack_addr);
+ }
+ "offset" => {
+ let ptr = get_param(decl, first_real_arg);
+ let offset = get_param(decl, first_real_arg + 1);
+ Ret(bcx, GEP(bcx, ptr, [offset]));
+ }
+ "offset_inbounds" => {
+ let ptr = get_param(decl, first_real_arg);
+ let offset = get_param(decl, first_real_arg + 1);
+ Ret(bcx, InBoundsGEP(bcx, ptr, [offset]));
+ }
+ "memcpy32" => memcpy_intrinsic(bcx, "llvm.memcpy.p0i8.p0i8.i32", substs.tys[0], 32),
+ "memcpy64" => memcpy_intrinsic(bcx, "llvm.memcpy.p0i8.p0i8.i64", substs.tys[0], 64),
+ "memmove32" => memcpy_intrinsic(bcx, "llvm.memmove.p0i8.p0i8.i32", substs.tys[0], 32),
+ "memmove64" => memcpy_intrinsic(bcx, "llvm.memmove.p0i8.p0i8.i64", substs.tys[0], 64),
+ "memset32" => memset_intrinsic(bcx, "llvm.memset.p0i8.i32", substs.tys[0], 32),
+ "memset64" => memset_intrinsic(bcx, "llvm.memset.p0i8.i64", substs.tys[0], 64),
+ "sqrtf32" => simple_llvm_intrinsic(bcx, "llvm.sqrt.f32", 1),
+ "sqrtf64" => simple_llvm_intrinsic(bcx, "llvm.sqrt.f64", 1),
+ "powif32" => simple_llvm_intrinsic(bcx, "llvm.powi.f32", 2),
+ "powif64" => simple_llvm_intrinsic(bcx, "llvm.powi.f64", 2),
+ "sinf32" => simple_llvm_intrinsic(bcx, "llvm.sin.f32", 1),
+ "sinf64" => simple_llvm_intrinsic(bcx, "llvm.sin.f64", 1),
+ "cosf32" => simple_llvm_intrinsic(bcx, "llvm.cos.f32", 1),
+ "cosf64" => simple_llvm_intrinsic(bcx, "llvm.cos.f64", 1),
+ "powf32" => simple_llvm_intrinsic(bcx, "llvm.pow.f32", 2),
+ "powf64" => simple_llvm_intrinsic(bcx, "llvm.pow.f64", 2),
+ "expf32" => simple_llvm_intrinsic(bcx, "llvm.exp.f32", 1),
+ "expf64" => simple_llvm_intrinsic(bcx, "llvm.exp.f64", 1),
+ "exp2f32" => simple_llvm_intrinsic(bcx, "llvm.exp2.f32", 1),
+ "exp2f64" => simple_llvm_intrinsic(bcx, "llvm.exp2.f64", 1),
+ "logf32" => simple_llvm_intrinsic(bcx, "llvm.log.f32", 1),
+ "logf64" => simple_llvm_intrinsic(bcx, "llvm.log.f64", 1),
+ "log10f32" => simple_llvm_intrinsic(bcx, "llvm.log10.f32", 1),
+ "log10f64" => simple_llvm_intrinsic(bcx, "llvm.log10.f64", 1),
+ "log2f32" => simple_llvm_intrinsic(bcx, "llvm.log2.f32", 1),
+ "log2f64" => simple_llvm_intrinsic(bcx, "llvm.log2.f64", 1),
+ "fmaf32" => simple_llvm_intrinsic(bcx, "llvm.fma.f32", 3),
+ "fmaf64" => simple_llvm_intrinsic(bcx, "llvm.fma.f64", 3),
+ "fabsf32" => simple_llvm_intrinsic(bcx, "llvm.fabs.f32", 1),
+ "fabsf64" => simple_llvm_intrinsic(bcx, "llvm.fabs.f64", 1),
+ "floorf32" => simple_llvm_intrinsic(bcx, "llvm.floor.f32", 1),
+ "floorf64" => simple_llvm_intrinsic(bcx, "llvm.floor.f64", 1),
+ "ceilf32" => simple_llvm_intrinsic(bcx, "llvm.ceil.f32", 1),
+ "ceilf64" => simple_llvm_intrinsic(bcx, "llvm.ceil.f64", 1),
+ "truncf32" => simple_llvm_intrinsic(bcx, "llvm.trunc.f32", 1),
+ "truncf64" => simple_llvm_intrinsic(bcx, "llvm.trunc.f64", 1),
+ "ctpop8" => simple_llvm_intrinsic(bcx, "llvm.ctpop.i8", 1),
+ "ctpop16" => simple_llvm_intrinsic(bcx, "llvm.ctpop.i16", 1),
+ "ctpop32" => simple_llvm_intrinsic(bcx, "llvm.ctpop.i32", 1),
+ "ctpop64" => simple_llvm_intrinsic(bcx, "llvm.ctpop.i64", 1),
+ "ctlz8" => count_zeros_intrinsic(bcx, "llvm.ctlz.i8"),
+ "ctlz16" => count_zeros_intrinsic(bcx, "llvm.ctlz.i16"),
+ "ctlz32" => count_zeros_intrinsic(bcx, "llvm.ctlz.i32"),
+ "ctlz64" => count_zeros_intrinsic(bcx, "llvm.ctlz.i64"),
+ "cttz8" => count_zeros_intrinsic(bcx, "llvm.cttz.i8"),
+ "cttz16" => count_zeros_intrinsic(bcx, "llvm.cttz.i16"),
+ "cttz32" => count_zeros_intrinsic(bcx, "llvm.cttz.i32"),
+ "cttz64" => count_zeros_intrinsic(bcx, "llvm.cttz.i64"),
+ "bswap16" => simple_llvm_intrinsic(bcx, "llvm.bswap.i16", 1),
+ "bswap32" => simple_llvm_intrinsic(bcx, "llvm.bswap.i32", 1),
+ "bswap64" => simple_llvm_intrinsic(bcx, "llvm.bswap.i64", 1),
+
+ "i8_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.sadd.with.overflow.i8"),
+ "i16_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.sadd.with.overflow.i16"),
+ "i32_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.sadd.with.overflow.i32"),
+ "i64_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.sadd.with.overflow.i64"),
+
+ "u8_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.uadd.with.overflow.i8"),
+ "u16_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.uadd.with.overflow.i16"),
+ "u32_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.uadd.with.overflow.i32"),
+ "u64_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.uadd.with.overflow.i64"),
+
+ "i8_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.ssub.with.overflow.i8"),
+ "i16_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.ssub.with.overflow.i16"),
+ "i32_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.ssub.with.overflow.i32"),
+ "i64_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.ssub.with.overflow.i64"),
+
+ "u8_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.usub.with.overflow.i8"),
+ "u16_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.usub.with.overflow.i16"),
+ "u32_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.usub.with.overflow.i32"),
+ "u64_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.usub.with.overflow.i64"),
+
+ "i8_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.smul.with.overflow.i8"),
+ "i16_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.smul.with.overflow.i16"),
+ "i32_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.smul.with.overflow.i32"),
+ "i64_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.smul.with.overflow.i64"),
+
+ "u8_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.umul.with.overflow.i8"),
+ "u16_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.umul.with.overflow.i16"),
+ "u32_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.umul.with.overflow.i32"),
+ "u64_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.umul.with.overflow.i64"),
+
+ _ => {
+ // Could we make this an enum rather than a string? Does it get
+ // checked earlier?
+ ccx.sess.span_bug(item.span, "unknown intrinsic");
+ }
+ }
+ fcx.cleanup();
+}
}
}
- typeck::method_trait(_, off) => {
+ typeck::method_object(ref mt) => {
trans_trait_callee(bcx,
callee_id,
- off,
+ mt.real_index,
this)
}
}
return (ty_substs, vtables);
}
-
pub fn trans_trait_callee(bcx: @mut Block,
callee_id: ast::NodeId,
n_method: uint,
/// This is used only for objects.
pub fn get_vtable(bcx: @mut Block,
self_ty: ty::t,
- origin: typeck::vtable_origin)
+ origins: typeck::vtable_param_res)
-> ValueRef {
- let hash_id = vtable_id(bcx.ccx(), &origin);
- match bcx.ccx().vtables.find(&hash_id) {
- Some(&val) => val,
- None => {
- match origin {
- typeck::vtable_static(id, substs, sub_vtables) => {
- make_impl_vtable(bcx, id, self_ty, substs, sub_vtables)
- }
- _ => fail!("get_vtable: expected a static origin"),
+ let ccx = bcx.ccx();
+ let _icx = push_ctxt("impl::get_vtable");
+
+ // Check the cache.
+ let hash_id = (self_ty, vtable_id(ccx, &origins[0]));
+ match ccx.vtables.find(&hash_id) {
+ Some(&val) => { return val }
+ None => { }
+ }
+
+ // Not in the cache. Actually build it.
+ let methods = do origins.flat_map |origin| {
+ match *origin {
+ typeck::vtable_static(id, ref substs, sub_vtables) => {
+ emit_vtable_methods(bcx, id, *substs, sub_vtables)
}
+ _ => ccx.sess.bug("get_vtable: expected a static origin"),
}
- }
+ };
+
+ // Generate a type descriptor for the vtable.
+ let tydesc = get_tydesc(ccx, self_ty);
+ glue::lazily_emit_all_tydesc_glue(ccx, tydesc);
+
+ let vtable = make_vtable(ccx, tydesc, methods);
+ ccx.vtables.insert(hash_id, vtable);
+ return vtable;
}
/// Helper function to declare and initialize the vtable.
}
}
-/// Generates a dynamic vtable for objects.
-pub fn make_impl_vtable(bcx: @mut Block,
- impl_id: ast::def_id,
- self_ty: ty::t,
- substs: &[ty::t],
- vtables: typeck::vtable_res)
- -> ValueRef {
+fn emit_vtable_methods(bcx: @mut Block,
+ impl_id: ast::def_id,
+ substs: &[ty::t],
+ vtables: typeck::vtable_res)
+ -> ~[ValueRef] {
let ccx = bcx.ccx();
- let _icx = push_ctxt("impl::make_impl_vtable");
let tcx = ccx.tcx;
let trt_id = match ty::impl_trait_ref(tcx, impl_id) {
};
let trait_method_def_ids = ty::trait_method_def_ids(tcx, trt_id);
- let methods = do trait_method_def_ids.map |method_def_id| {
+ do trait_method_def_ids.map |method_def_id| {
let im = ty::method(tcx, *method_def_id);
let fty = ty::subst_tps(tcx,
substs,
trans_fn_ref_with_vtables(bcx, m_id, 0,
substs, Some(vtables)).llfn
}
- };
-
- // Generate a type descriptor for the vtable.
- let tydesc = get_tydesc(ccx, self_ty);
- glue::lazily_emit_all_tydesc_glue(ccx, tydesc);
-
- make_vtable(ccx, tydesc, methods)
+ }
}
pub fn trans_trait_cast(bcx: @mut Block,
bcx = expr::trans_into(bcx, val, SaveIn(llboxdest));
// Store the vtable into the pair or triple.
- let orig = ccx.maps.vtable_map.get(&id)[0][0].clone();
- let orig = resolve_vtable_in_fn_ctxt(bcx.fcx, &orig);
- let vtable = get_vtable(bcx, v_ty, orig);
+ // This is structured a bit funny because of dynamic borrow failures.
+ let origins = {
+ let res = ccx.maps.vtable_map.get(&id);
+ let res = resolve_vtables_in_fn_ctxt(bcx.fcx, *res);
+ res[0]
+ };
+ let vtable = get_vtable(bcx, v_ty, origins);
Store(bcx, vtable, PointerCast(bcx,
GEPi(bcx, lldest, [0u, abi::trt_field_vtable]),
val_ty(vtable).ptr_to()));
pub mod cabi_arm;
pub mod cabi_mips;
pub mod foreign;
+pub mod intrinsic;
pub mod reflect;
pub mod debuginfo;
pub mod type_use;
use back::link::mangle_exported_name;
use driver::session;
use lib::llvm::ValueRef;
-use middle::trans::base::{set_inline_hint_if_appr, set_inline_hint};
+use middle::trans::base::{set_llvm_fn_attrs, set_inline_hint};
use middle::trans::base::{trans_enum_variant,push_ctxt};
use middle::trans::base::{trans_fn, decl_internal_cdecl_fn};
use middle::trans::base::{get_item_val, no_self};
use middle::trans::base;
use middle::trans::common::*;
use middle::trans::datum;
-use middle::trans::foreign;
use middle::trans::machine;
use middle::trans::meth;
use middle::trans::type_of::type_of_fn_from_ty;
use middle::trans::type_of;
use middle::trans::type_use;
+use middle::trans::intrinsic;
use middle::ty;
use middle::ty::{FnSig};
use middle::typeck;
_
}, _) => {
let d = mk_lldecl();
- set_inline_hint_if_appr(i.attrs, d);
+ set_llvm_fn_attrs(i.attrs, d);
trans_fn(ccx,
pt,
decl,
}
ast_map::node_foreign_item(i, _, _, _) => {
let d = mk_lldecl();
- foreign::trans_intrinsic(ccx, d, i, pt, psubsts, i.attrs,
- ref_id);
+ intrinsic::trans_intrinsic(ccx, d, i, pt, psubsts, i.attrs,
+ ref_id);
d
}
ast_map::node_variant(ref v, enum_item, _) => {
ast_map::node_method(mth, _, _) => {
// XXX: What should the self type be here?
let d = mk_lldecl();
- set_inline_hint_if_appr(mth.attrs.clone(), d);
+ set_llvm_fn_attrs(mth.attrs, d);
meth::trans_method(ccx, pt, mth, Some(psubsts), d);
d
}
ast_map::node_trait_method(@ast::provided(mth), _, pt) => {
let d = mk_lldecl();
- set_inline_hint_if_appr(mth.attrs.clone(), d);
+ set_llvm_fn_attrs(mth.attrs, d);
meth::trans_method(ccx, (*pt).clone(), mth, Some(psubsts), d);
d
}
sub_path,
"get_disr");
- let llfty = type_of_fn(ccx, [opaqueptrty], ty::mk_int());
+ let llfty = type_of_rust_fn(ccx, [opaqueptrty], ty::mk_int());
let llfdecl = decl_internal_cdecl_fn(ccx.llmod, sym, llfty);
let fcx = new_fn_ctxt(ccx,
~[],
use middle::trans::adt;
use middle::trans::common::*;
+use middle::trans::foreign;
use middle::ty;
use util::ppaux;
use syntax::ast;
use syntax::opt_vec;
-pub fn arg_is_indirect(ccx: &CrateContext, arg_ty: &ty::t) -> bool {
- !ty::type_is_immediate(ccx.tcx, *arg_ty)
+pub fn arg_is_indirect(ccx: &CrateContext, arg_ty: ty::t) -> bool {
+ !ty::type_is_immediate(ccx.tcx, arg_ty)
}
-pub fn type_of_explicit_arg(ccx: &mut CrateContext, arg_ty: &ty::t) -> Type {
- let llty = type_of(ccx, *arg_ty);
+pub fn return_uses_outptr(tcx: ty::ctxt, ty: ty::t) -> bool {
+ !ty::type_is_immediate(tcx, ty)
+}
+
+pub fn type_of_explicit_arg(ccx: &mut CrateContext, arg_ty: ty::t) -> Type {
+ let llty = type_of(ccx, arg_ty);
if arg_is_indirect(ccx, arg_ty) {
llty.ptr_to()
} else {
pub fn type_of_explicit_args(ccx: &mut CrateContext,
inputs: &[ty::t]) -> ~[Type] {
- inputs.map(|arg_ty| type_of_explicit_arg(ccx, arg_ty))
+ inputs.map(|&arg_ty| type_of_explicit_arg(ccx, arg_ty))
}
-pub fn type_of_fn(cx: &mut CrateContext, inputs: &[ty::t], output: ty::t) -> Type {
+pub fn type_of_rust_fn(cx: &mut CrateContext,
+ inputs: &[ty::t],
+ output: ty::t) -> Type {
let mut atys: ~[Type] = ~[];
// Arg 0: Output pointer.
// (if the output type is non-immediate)
- let output_is_immediate = ty::type_is_immediate(cx.tcx, output);
+ let use_out_pointer = return_uses_outptr(cx.tcx, output);
let lloutputtype = type_of(cx, output);
- if !output_is_immediate {
+ if use_out_pointer {
atys.push(lloutputtype.ptr_to());
}
atys.push_all(type_of_explicit_args(cx, inputs));
// Use the output as the actual return value if it's immediate.
- if output_is_immediate && !ty::type_is_nil(output) {
+ if !use_out_pointer && !ty::type_is_voidish(output) {
Type::func(atys, &lloutputtype)
} else {
Type::func(atys, &Type::void())
// Given a function type and a count of ty params, construct an llvm type
pub fn type_of_fn_from_ty(cx: &mut CrateContext, fty: ty::t) -> Type {
- match ty::get(fty).sty {
- ty::ty_closure(ref f) => type_of_fn(cx, f.sig.inputs, f.sig.output),
- ty::ty_bare_fn(ref f) => type_of_fn(cx, f.sig.inputs, f.sig.output),
+ return match ty::get(fty).sty {
+ ty::ty_closure(ref f) => {
+ type_of_rust_fn(cx, f.sig.inputs, f.sig.output)
+ }
+ ty::ty_bare_fn(ref f) => {
+ if f.abis.is_rust() || f.abis.is_intrinsic() {
+ type_of_rust_fn(cx, f.sig.inputs, f.sig.output)
+ } else {
+ foreign::lltype_for_foreign_fn(cx, fty)
+ }
+ }
_ => {
cx.sess.bug("type_of_fn_from_ty given non-closure, non-bare-fn")
}
- }
+ };
}
// A "sizing type" is an LLVM type, the size and alignment of which are
Type::array(&type_of(cx, mt.ty), n as u64)
}
- ty::ty_bare_fn(_) => type_of_fn_from_ty(cx, t).ptr_to(),
+ ty::ty_bare_fn(_) => {
+ type_of_fn_from_ty(cx, t).ptr_to()
+ }
ty::ty_closure(_) => {
let ty = type_of_fn_from_ty(cx, t);
Type::func_pair(cx, &ty)
use syntax::ast_map;
use syntax::ast_util;
use syntax::parse::token;
-use syntax::oldvisit;
+use syntax::visit;
+use syntax::visit::Visitor;
pub type type_uses = uint; // Bitmask
pub static use_repr: uint = 1; /* Dependency on size/alignment/mode and
pub static use_tydesc: uint = 2; /* Takes the tydesc, or compares */
pub static use_all: uint = use_repr|use_tydesc;
-
+#[deriving(Clone)]
pub struct Context {
ccx: @mut CrateContext,
uses: @mut ~[type_uses]
}
}
-pub fn handle_body(cx: &Context, body: &Block) {
- let v = oldvisit::mk_vt(@oldvisit::Visitor {
- visit_expr: |e, (cx, v)| {
- oldvisit::visit_expr(e, (cx, v));
+struct TypeUseVisitor;
+
+impl<'self> Visitor<&'self Context> for TypeUseVisitor {
+
+ fn visit_expr<'a>(&mut self, e:@expr, cx: &'a Context) {
+ visit::walk_expr(self, e, cx);
mark_for_expr(cx, e);
- },
- visit_local: |l, (cx, v)| {
- oldvisit::visit_local(l, (cx, v));
+ }
+
+ fn visit_local<'a>(&mut self, l:@Local, cx: &'a Context) {
+ visit::walk_local(self, l, cx);
node_type_needs(cx, use_repr, l.id);
- },
- visit_pat: |p, (cx, v)| {
- oldvisit::visit_pat(p, (cx, v));
+ }
+
+ fn visit_pat<'a>(&mut self, p:@pat, cx: &'a Context) {
+ visit::walk_pat(self, p, cx);
node_type_needs(cx, use_repr, p.id);
- },
- visit_block: |b, (cx, v)| {
- oldvisit::visit_block(b, (cx, v));
+ }
+
+ fn visit_block<'a>(&mut self, b:&Block, cx: &'a Context) {
+ visit::walk_block(self, b, cx);
for e in b.expr.iter() {
node_type_needs(cx, use_repr, e.id);
}
- },
- visit_item: |_i, (_cx, _v)| { },
- ..*oldvisit::default_visitor()
- });
- (v.visit_block)(body, (cx, v));
+ }
+
+ fn visit_item<'a>(&mut self, _:@item, _: &'a Context) {
+ // do nothing
+ }
+
+}
+
+pub fn handle_body(cx: &Context, body: &Block) {
+ let mut v = TypeUseVisitor;
+ v.visit_block(body, cx);
}
use std::ptr::to_unsafe_ptr;
use std::to_bytes;
use std::to_str::ToStr;
-use std::u32;
use std::vec;
use syntax::ast::*;
use syntax::ast_util::is_local;
mt: mt
}
+#[deriving(Clone)]
pub struct Method {
ident: ast::ident,
generics: ty::Generics,
// Type utilities
+pub fn type_is_voidish(ty: t) -> bool {
+ //! "nil" and "bot" are void types in that they represent 0 bits of information
+ type_is_nil(ty) || type_is_bot(ty)
+}
+
pub fn type_is_nil(ty: t) -> bool { get(ty).sty == ty_nil }
pub fn type_is_bot(ty: t) -> bool {
impl ToStr for TypeContents {
fn to_str(&self) -> ~str {
- fmt!("TypeContents(%s)", u32::to_str_radix(self.bits, 2))
+ fmt!("TypeContents(%s)", self.bits.to_str_radix(2))
}
}
typeck::method_param(typeck::method_param {
trait_id: trt_id,
method_num: n_mth, _}) |
- typeck::method_trait(trt_id, n_mth) => {
+ typeck::method_object(typeck::method_object {
+ trait_id: trt_id,
+ method_num: n_mth, _}) => {
// ...trait methods bounds, in contrast, include only the
// method bounds, so we must prepend the tps from the
// trait itself. This ought to be harmonized.
let trait_type_param_defs =
- ty::lookup_trait_def(tcx, trt_id).generics.type_param_defs;
+ lookup_trait_def(tcx, trt_id).generics.type_param_defs;
@vec::append(
(*trait_type_param_defs).clone(),
*ty::trait_method(tcx,
use middle::typeck::check;
use middle::typeck::infer;
use middle::typeck::{method_map_entry, method_origin, method_param};
-use middle::typeck::{method_static, method_trait};
+use middle::typeck::{method_static, method_object};
use middle::typeck::{param_numbered, param_self, param_index};
use middle::typeck::check::regionmanip::replace_bound_regions_in_fn_sig;
use util::common::indenter;
loop {
match get(self_ty).sty {
ty_trait(did, ref substs, _, _, _) => {
- self.push_inherent_candidates_from_trait(did, substs);
+ self.push_inherent_candidates_from_object(did, substs);
self.push_inherent_impl_candidates_for_type(did);
}
ty_enum(did, _) | ty_struct(did, _) => {
}
}
- fn push_inherent_candidates_from_trait(&self,
- did: def_id,
- substs: &ty::substs) {
- debug!("push_inherent_candidates_from_trait(did=%s, substs=%s)",
- self.did_to_str(did),
- substs_to_str(self.tcx(), substs));
- let _indenter = indenter();
-
+ // Determine the index of a method in the list of all methods belonging
+ // to a trait and its supertraits.
+ fn get_method_index(&self,
+ trait_ref: @TraitRef,
+ subtrait_id: ast::def_id,
+ n_method: uint) -> uint {
let tcx = self.tcx();
- let ms = ty::trait_methods(tcx, did);
- let index = match ms.iter().position(|m| m.ident == self.m_name) {
- Some(i) => i,
- None => { return; } // no method with the right name
- };
- let method = ms[index];
- match method.explicit_self {
- ast::sty_static => {
- return; // not a method we can call with dot notation
+ // We need to figure out the "real index" of the method in a
+ // listing of all the methods of an object. We do this by
+ // iterating down the supertraits of the object's trait until
+ // we find the trait the method came from, counting up the
+ // methods of each trait we pass along the way.
+ let mut method_count = 0;
+ do ty::each_bound_trait_and_supertraits(tcx, &[trait_ref])
+ |bound_ref| {
+ if bound_ref.def_id == subtrait_id { false }
+ else {
+ method_count += ty::trait_methods(tcx, bound_ref.def_id).len();
+ true
}
- _ => {}
- }
+ };
+
+ return method_count + n_method;
+ }
+
+ fn push_inherent_candidates_from_object(&self,
+ did: def_id,
+ substs: &ty::substs) {
+ debug!("push_inherent_candidates_from_object(did=%s, substs=%s)",
+ self.did_to_str(did),
+ substs_to_str(self.tcx(), substs));
+ let _indenter = indenter();
// It is illegal to invoke a method on a trait instance that
// refers to the `self` type. An error will be reported by
// to the `Self` type. Substituting ty_err here allows
// the compiler to soldier on.
//
- // NOTE: `confirm_candidate()` also relies upon this substitution
+ // `confirm_candidate()` also relies upon this substitution
// for Self. (fix)
let rcvr_substs = substs {
self_ty: Some(ty::mk_err()),
..(*substs).clone()
};
-
- self.inherent_candidates.push(Candidate {
- rcvr_match_condition: RcvrMatchesIfObject(did),
- rcvr_substs: rcvr_substs,
- method_ty: method,
- origin: method_trait(did, index)
- });
+ let trait_ref = @TraitRef { def_id: did, substs: rcvr_substs.clone() };
+
+ do self.push_inherent_candidates_from_bounds_inner(&[trait_ref])
+ |trait_ref, m, method_num, _bound_num| {
+ let vtable_index =
+ self.get_method_index(trait_ref, trait_ref.def_id, method_num);
+ // We need to fix up the transformed self type.
+ let transformed_self_ty =
+ self.construct_transformed_self_ty_for_object(
+ did, &rcvr_substs, m);
+ let m = @Method {
+ transformed_self_ty: Some(transformed_self_ty),
+ .. (*m).clone()
+ };
+
+ Candidate {
+ rcvr_match_condition: RcvrMatchesIfObject(did),
+ rcvr_substs: trait_ref.substs.clone(),
+ method_ty: m,
+ origin: method_object(method_object {
+ trait_id: trait_ref.def_id,
+ object_trait_id: did,
+ method_num: method_num,
+ real_index: vtable_index
+ })
+ }
+ };
}
fn push_inherent_candidates_from_param(&self,
- rcvr_ty: ty::t,
- param_ty: param_ty) {
+ rcvr_ty: ty::t,
+ param_ty: param_ty) {
debug!("push_inherent_candidates_from_param(param_ty=%?)",
param_ty);
let _indenter = indenter();
}
fn push_inherent_candidates_from_bounds(&self,
- self_ty: ty::t,
- bounds: &[@TraitRef],
- param: param_index) {
+ self_ty: ty::t,
+ bounds: &[@TraitRef],
+ param: param_index) {
+ do self.push_inherent_candidates_from_bounds_inner(bounds)
+ |trait_ref, m, method_num, bound_num| {
+ Candidate {
+ rcvr_match_condition: RcvrMatchesIfSubtype(self_ty),
+ rcvr_substs: trait_ref.substs.clone(),
+ method_ty: m,
+ origin: method_param(
+ method_param {
+ trait_id: trait_ref.def_id,
+ method_num: method_num,
+ param_num: param,
+ bound_num: bound_num,
+ })
+ }
+ }
+ }
+
+ // Do a search through a list of bounds, using a callback to actually
+ // create the candidates.
+ fn push_inherent_candidates_from_bounds_inner(
+ &self,
+ bounds: &[@TraitRef],
+ mk_cand: &fn(trait_ref: @TraitRef, m: @ty::Method, method_num: uint,
+ bound_num: uint) -> Candidate) {
+
let tcx = self.tcx();
let mut next_bound_idx = 0; // count only trait bounds
Some(pos) => {
let method = trait_methods[pos];
- let cand = Candidate {
- rcvr_match_condition: RcvrMatchesIfSubtype(self_ty),
- rcvr_substs: bound_trait_ref.substs.clone(),
- method_ty: method,
- origin: method_param(
- method_param {
- trait_id: bound_trait_ref.def_id,
- method_num: pos,
- param_num: param,
- bound_num: this_bound_idx,
- })
- };
+ let cand = mk_cand(bound_trait_ref, method,
+ pos, this_bound_idx);
debug!("pushing inherent candidate for param: %?", cand);
self.inherent_candidates.push(cand);
fn confirm_candidate(&self, rcvr_ty: ty::t, candidate: &Candidate)
-> method_map_entry {
let tcx = self.tcx();
- let fty = self.fn_ty_from_origin(&candidate.origin);
+ let fty = ty::mk_bare_fn(tcx, candidate.method_ty.fty.clone());
debug!("confirm_candidate(expr=%s, candidate=%s, fty=%s)",
self.expr.repr(tcx),
// static methods should never have gotten this far:
assert!(candidate.method_ty.explicit_self != sty_static);
-
- let transformed_self_ty = match candidate.origin {
- method_trait(trait_def_id, _) => {
- self.construct_transformed_self_ty_for_object(
- trait_def_id, candidate)
- }
- _ => {
- let t = candidate.method_ty.transformed_self_ty.unwrap();
- ty::subst(tcx, &candidate.rcvr_substs, t)
- }
- };
+ let transformed_self_ty =
+ ty::subst(tcx, &candidate.rcvr_substs,
+ candidate.method_ty.transformed_self_ty.unwrap());
// Determine the values for the type parameters of the method.
// If they were not explicitly supplied, just construct fresh
}
}
- fn construct_transformed_self_ty_for_object(&self,
- trait_def_id: ast::def_id,
- candidate: &Candidate) -> ty::t
+ fn construct_transformed_self_ty_for_object(
+ &self,
+ trait_def_id: ast::def_id,
+ rcvr_substs: &ty::substs,
+ method_ty: &ty::Method) -> ty::t
{
/*!
* This is a bit tricky. We have a match against a trait method
* result to be `&'a Foo`. Assuming that `m_method` is being
* called, we want the result to be `@mut Foo`. Of course,
* this transformation has already been done as part of
- * `candidate.method_ty.transformed_self_ty`, but there the
+ * `method_ty.transformed_self_ty`, but there the
* type is expressed in terms of `Self` (i.e., `&'a Self`, `@mut Self`).
* Because objects are not standalone types, we can't just substitute
* `s/Self/Foo/`, so we must instead perform this kind of hokey
* match below.
*/
- let substs = ty::substs {regions: candidate.rcvr_substs.regions.clone(),
+ let substs = ty::substs {regions: rcvr_substs.regions.clone(),
self_ty: None,
- tps: candidate.rcvr_substs.tps.clone()};
- match candidate.method_ty.explicit_self {
+ tps: rcvr_substs.tps.clone()};
+ match method_ty.explicit_self {
ast::sty_static => {
self.bug(~"static method for object type receiver");
}
}
ast::sty_region(*) | ast::sty_box(*) | ast::sty_uniq(*) => {
let transformed_self_ty =
- candidate.method_ty.transformed_self_ty.clone().unwrap();
+ method_ty.transformed_self_ty.clone().unwrap();
match ty::get(transformed_self_ty).sty {
ty::ty_rptr(r, mt) => { // must be sty_region
ty::mk_trait(self.tcx(), trait_def_id,
method_static(*) | method_param(*) => {
return; // not a call to a trait instance
}
- method_trait(*) => {}
+ method_object(*) => {}
}
match candidate.method_ty.explicit_self {
// XXX: does this properly enforce this on everything now
// that self has been merged in? -sully
method_param(method_param { trait_id: trait_id, _ }) |
- method_trait(trait_id, _) => {
+ method_object(method_object { trait_id: trait_id, _ }) => {
bad = self.tcx().destructor_for_type.contains_key(&trait_id);
}
}
}
}
- fn fn_ty_from_origin(&self, origin: &method_origin) -> ty::t {
- return match *origin {
- method_static(did) => {
- ty::lookup_item_type(self.tcx(), did).ty
- }
- method_param(ref mp) => {
- type_of_trait_method(self.tcx(), mp.trait_id, mp.method_num)
- }
- method_trait(did, idx) => {
- type_of_trait_method(self.tcx(), did, idx)
- }
- };
-
- fn type_of_trait_method(tcx: ty::ctxt,
- trait_did: def_id,
- method_num: uint) -> ty::t {
- let trait_methods = ty::trait_methods(tcx, trait_did);
- ty::mk_bare_fn(tcx, trait_methods[method_num].fty.clone())
- }
- }
-
fn report_candidate(&self, idx: uint, origin: &method_origin) {
match *origin {
method_static(impl_did) => {
method_param(ref mp) => {
self.report_param_candidate(idx, (*mp).trait_id)
}
- method_trait(trait_did, _) => {
- self.report_trait_candidate(idx, trait_did)
+ method_object(ref mo) => {
+ self.report_trait_candidate(idx, mo.trait_id)
}
}
}
use syntax::parse::token;
use syntax::parse::token::special_idents;
use syntax::print::pprust;
-use syntax::oldvisit;
+use syntax::visit;
+use syntax::visit::Visitor;
use syntax;
pub mod _match;
}
}
+struct CheckItemTypesVisitor { ccx: @mut CrateCtxt }
+
+impl Visitor<()> for CheckItemTypesVisitor {
+ fn visit_item(&mut self, i:@ast::item, _:()) {
+ check_item(self.ccx, i);
+ visit::walk_item(self, i, ());
+ }
+}
+
pub fn check_item_types(ccx: @mut CrateCtxt, crate: &ast::Crate) {
- let visit = oldvisit::mk_simple_visitor(@oldvisit::SimpleVisitor {
- visit_item: |a| check_item(ccx, a),
- .. *oldvisit::default_simple_visitor()
- });
- oldvisit::visit_crate(crate, ((), visit));
+ let mut visit = CheckItemTypesVisitor { ccx: ccx };
+ visit::walk_crate(&mut visit, crate, ());
}
pub fn check_bare_fn(ccx: @mut CrateCtxt,
}
}
+struct GatherLocalsVisitor {
+ fcx: @mut FnCtxt,
+ tcx: ty::ctxt,
+}
+
+impl GatherLocalsVisitor {
+ fn assign(&mut self, nid: ast::NodeId, ty_opt: Option<ty::t>) {
+ match ty_opt {
+ None => {
+ // infer the variable's type
+ let var_id = self.fcx.infcx().next_ty_var_id();
+ let var_ty = ty::mk_var(self.fcx.tcx(), var_id);
+ self.fcx.inh.locals.insert(nid, var_ty);
+ }
+ Some(typ) => {
+ // take type that the user specified
+ self.fcx.inh.locals.insert(nid, typ);
+ }
+ }
+ }
+}
+
+impl Visitor<()> for GatherLocalsVisitor {
+ // Add explicitly-declared locals.
+ fn visit_local(&mut self, local:@ast::Local, _:()) {
+ let o_ty = match local.ty.node {
+ ast::ty_infer => None,
+ _ => Some(self.fcx.to_ty(&local.ty))
+ };
+ self.assign(local.id, o_ty);
+ debug!("Local variable %s is assigned type %s",
+ self.fcx.pat_to_str(local.pat),
+ self.fcx.infcx().ty_to_str(
+ self.fcx.inh.locals.get_copy(&local.id)));
+ visit::walk_local(self, local, ());
+
+ }
+ // Add pattern bindings.
+ fn visit_pat(&mut self, p:@ast::pat, _:()) {
+ match p.node {
+ ast::pat_ident(_, ref path, _)
+ if pat_util::pat_is_binding(self.fcx.ccx.tcx.def_map, p) => {
+ self.assign(p.id, None);
+ debug!("Pattern binding %s is assigned to %s",
+ self.tcx.sess.str_of(path.idents[0]),
+ self.fcx.infcx().ty_to_str(
+ self.fcx.inh.locals.get_copy(&p.id)));
+ }
+ _ => {}
+ }
+ visit::walk_pat(self, p, ());
+
+ }
+
+ fn visit_block(&mut self, b:&ast::Block, _:()) {
+ // non-obvious: the `blk` variable maps to region lb, so
+ // we have to keep this up-to-date. This
+ // is... unfortunate. It'd be nice to not need this.
+ do self.fcx.with_region_lb(b.id) {
+ visit::walk_block(self, b, ());
+ }
+ }
+
+ // Don't descend into fns and items
+ fn visit_fn(&mut self, _:&visit::fn_kind, _:&ast::fn_decl,
+ _:&ast::Block, _:span, _:ast::NodeId, _:()) { }
+ fn visit_item(&mut self, _:@ast::item, _:()) { }
+
+}
+
pub fn check_fn(ccx: @mut CrateCtxt,
opt_self_info: Option<SelfInfo>,
purity: ast::purity,
opt_self_info: Option<SelfInfo>) {
let tcx = fcx.ccx.tcx;
- let assign: @fn(ast::NodeId, Option<ty::t>) = |nid, ty_opt| {
- match ty_opt {
- None => {
- // infer the variable's type
- let var_id = fcx.infcx().next_ty_var_id();
- let var_ty = ty::mk_var(fcx.tcx(), var_id);
- fcx.inh.locals.insert(nid, var_ty);
- }
- Some(typ) => {
- // take type that the user specified
- fcx.inh.locals.insert(nid, typ);
- }
- }
- };
+ let mut visit = GatherLocalsVisitor { fcx: fcx, tcx: tcx, };
// Add the self parameter
for self_info in opt_self_info.iter() {
- assign(self_info.self_id, Some(self_info.self_ty));
+ visit.assign(self_info.self_id, Some(self_info.self_ty));
debug!("self is assigned to %s",
fcx.infcx().ty_to_str(
fcx.inh.locals.get_copy(&self_info.self_id)));
// Create type variables for each argument.
do pat_util::pat_bindings(tcx.def_map, input.pat)
|_bm, pat_id, _sp, _path| {
- assign(pat_id, None);
+ visit.assign(pat_id, None);
}
// Check the pattern.
_match::check_pat(&pcx, input.pat, *arg_ty);
}
- // Add explicitly-declared locals.
- let visit_local: @fn(@ast::Local, ((), oldvisit::vt<()>)) =
- |local, (e, v)| {
- let o_ty = match local.ty.node {
- ast::ty_infer => None,
- _ => Some(fcx.to_ty(&local.ty))
- };
- assign(local.id, o_ty);
- debug!("Local variable %s is assigned type %s",
- fcx.pat_to_str(local.pat),
- fcx.infcx().ty_to_str(
- fcx.inh.locals.get_copy(&local.id)));
- oldvisit::visit_local(local, (e, v));
- };
-
- // Add pattern bindings.
- let visit_pat: @fn(@ast::pat, ((), oldvisit::vt<()>)) = |p, (e, v)| {
- match p.node {
- ast::pat_ident(_, ref path, _)
- if pat_util::pat_is_binding(fcx.ccx.tcx.def_map, p) => {
- assign(p.id, None);
- debug!("Pattern binding %s is assigned to %s",
- tcx.sess.str_of(path.idents[0]),
- fcx.infcx().ty_to_str(
- fcx.inh.locals.get_copy(&p.id)));
- }
- _ => {}
- }
- oldvisit::visit_pat(p, (e, v));
- };
-
- let visit_block:
- @fn(&ast::Block, ((), oldvisit::vt<()>)) = |b, (e, v)| {
- // non-obvious: the `blk` variable maps to region lb, so
- // we have to keep this up-to-date. This
- // is... unfortunate. It'd be nice to not need this.
- do fcx.with_region_lb(b.id) {
- oldvisit::visit_block(b, (e, v));
- }
- };
-
- // Don't descend into fns and items
- fn visit_fn(_fk: &oldvisit::fn_kind,
- _decl: &ast::fn_decl,
- _body: &ast::Block,
- _sp: span,
- _id: ast::NodeId,
- (_t,_v): ((), oldvisit::vt<()>)) {
- }
- fn visit_item(_i: @ast::item, (_e,_v): ((), oldvisit::vt<()>)) { }
-
- let visit = oldvisit::mk_vt(
- @oldvisit::Visitor {visit_local: visit_local,
- visit_pat: visit_pat,
- visit_fn: visit_fn,
- visit_item: visit_item,
- visit_block: visit_block,
- ..*oldvisit::default_visitor()});
-
- (visit.visit_block)(body, ((), visit));
+ visit.visit_block(body, ());
}
}
if ty::type_is_error(e) || ty::type_is_error(a) {
return;
}
- match self.fn_kind {
- DoBlock if ty::type_is_bool(e) && ty::type_is_nil(a) =>
- // If we expected bool and got ()...
- self.tcx().sess.span_err(sp, fmt!("Do-block body must \
- return %s, but returns () here. Perhaps you meant \
- to write a `for`-loop?",
- ppaux::ty_to_str(self.tcx(), e))),
- _ => self.infcx().report_mismatched_types(sp, e, a, err)
- }
+ self.infcx().report_mismatched_types(sp, e, a, err)
}
pub fn report_mismatched_types(&self,
use syntax::ast::{def_arg, def_binding, def_local, def_self, def_upvar};
use syntax::ast;
use syntax::codemap::span;
-use syntax::oldvisit;
+use syntax::visit;
+use syntax::visit::Visitor;
pub struct Rcx {
fcx: @mut FnCtxt,
repeating_scope: ast::NodeId,
}
-pub type rvt = oldvisit::vt<@mut Rcx>;
-
fn encl_region_of_def(fcx: @mut FnCtxt, def: ast::def) -> ty::Region {
let tcx = fcx.tcx();
match def {
repeating_scope: e.id };
if fcx.err_count_since_creation() == 0 {
// regionck assumes typeck succeeded
- let v = regionck_visitor();
- (v.visit_expr)(e, (rcx, v));
+ let mut v = regionck_visitor();
+ v.visit_expr(e, rcx);
}
fcx.infcx().resolve_regions();
}
repeating_scope: blk.id };
if fcx.err_count_since_creation() == 0 {
// regionck assumes typeck succeeded
- let v = regionck_visitor();
- (v.visit_block)(blk, (rcx, v));
+ let mut v = regionck_visitor();
+ v.visit_block(blk, rcx);
}
fcx.infcx().resolve_regions();
}
-fn regionck_visitor() -> rvt {
+struct RegionckVisitor;
+
+impl Visitor<@mut Rcx> for RegionckVisitor {
// (*) FIXME(#3238) should use visit_pat, not visit_arm/visit_local,
// However, right now we run into an issue whereby some free
// regions are not properly related if they appear within the
// addressed by deferring the construction of the region
// hierarchy, and in particular the relationships between free
// regions, until regionck, as described in #3238.
- oldvisit::mk_vt(@oldvisit::Visitor {
- visit_item: visit_item,
- visit_expr: visit_expr,
+
+ fn visit_item(&mut self, i:@ast::item, e:@mut Rcx) { visit_item(self, i, e); }
+
+ fn visit_expr(&mut self, ex:@ast::expr, e:@mut Rcx) { visit_expr(self, ex, e); }
//visit_pat: visit_pat, // (*) see above
- visit_arm: visit_arm,
- visit_local: visit_local,
- visit_block: visit_block,
- .. *oldvisit::default_visitor()
- })
+ fn visit_arm(&mut self, a:&ast::arm, e:@mut Rcx) { visit_arm(self, a, e); }
+
+ fn visit_local(&mut self, l:@ast::Local, e:@mut Rcx) { visit_local(self, l, e); }
+
+ fn visit_block(&mut self, b:&ast::Block, e:@mut Rcx) { visit_block(self, b, e); }
+}
+
+fn regionck_visitor() -> RegionckVisitor {
+ RegionckVisitor
}
-fn visit_item(_item: @ast::item, (_rcx, _v): (@mut Rcx, rvt)) {
+fn visit_item(_v: &mut RegionckVisitor, _item: @ast::item, _rcx: @mut Rcx) {
// Ignore items
}
-fn visit_block(b: &ast::Block, (rcx, v): (@mut Rcx, rvt)) {
+fn visit_block(v: &mut RegionckVisitor, b: &ast::Block, rcx: @mut Rcx) {
rcx.fcx.tcx().region_maps.record_cleanup_scope(b.id);
- oldvisit::visit_block(b, (rcx, v));
+ visit::walk_block(v, b, rcx);
}
-fn visit_arm(arm: &ast::arm, (rcx, v): (@mut Rcx, rvt)) {
+fn visit_arm(v: &mut RegionckVisitor, arm: &ast::arm, rcx: @mut Rcx) {
// see above
for &p in arm.pats.iter() {
constrain_bindings_in_pat(p, rcx);
}
- oldvisit::visit_arm(arm, (rcx, v));
+ visit::walk_arm(v, arm, rcx);
}
-fn visit_local(l: @ast::Local, (rcx, v): (@mut Rcx, rvt)) {
+fn visit_local(v: &mut RegionckVisitor, l: @ast::Local, rcx: @mut Rcx) {
// see above
constrain_bindings_in_pat(l.pat, rcx);
- oldvisit::visit_local(l, (rcx, v));
+ visit::walk_local(v, l, rcx);
}
fn constrain_bindings_in_pat(pat: @ast::pat, rcx: @mut Rcx) {
}
}
-fn visit_expr(expr: @ast::expr, (rcx, v): (@mut Rcx, rvt)) {
+fn visit_expr(v: &mut RegionckVisitor, expr: @ast::expr, rcx: @mut Rcx) {
debug!("regionck::visit_expr(e=%s, repeating_scope=%?)",
expr.repr(rcx.fcx.tcx()), rcx.repeating_scope);
constrain_callee(rcx, callee.id, expr, callee);
constrain_call(rcx, callee.id, expr, None, *args, false);
- oldvisit::visit_expr(expr, (rcx, v));
+ visit::walk_expr(v, expr, rcx);
}
ast::expr_method_call(callee_id, arg0, _, _, ref args, _) => {
constrain_call(rcx, callee_id, expr, Some(arg0), *args, false);
- oldvisit::visit_expr(expr, (rcx, v));
+ visit::walk_expr(v, expr, rcx);
}
ast::expr_index(callee_id, lhs, rhs) |
// should be converted to an adjustment!
constrain_call(rcx, callee_id, expr, Some(lhs), [rhs], true);
- oldvisit::visit_expr(expr, (rcx, v));
+ visit::walk_expr(v, expr, rcx);
}
ast::expr_unary(callee_id, _, lhs) if has_method_map => {
// As above.
constrain_call(rcx, callee_id, expr, Some(lhs), [], true);
- oldvisit::visit_expr(expr, (rcx, v));
+ visit::walk_expr(v, expr, rcx);
}
ast::expr_unary(_, ast::deref, base) => {
let base_ty = rcx.resolve_node_type(base.id);
constrain_derefs(rcx, expr, 1, base_ty);
- oldvisit::visit_expr(expr, (rcx, v));
+ visit::walk_expr(v, expr, rcx);
}
ast::expr_index(_, vec_expr, _) => {
let vec_type = rcx.resolve_expr_type_adjusted(vec_expr);
constrain_index(rcx, expr, vec_type);
- oldvisit::visit_expr(expr, (rcx, v));
+ visit::walk_expr(v, expr, rcx);
}
ast::expr_cast(source, _) => {
_ => ()
}
- oldvisit::visit_expr(expr, (rcx, v));
+ visit::walk_expr(v, expr, rcx);
}
ast::expr_addr_of(_, base) => {
let ty0 = rcx.resolve_node_type(expr.id);
constrain_regions_in_type(rcx, ty::re_scope(expr.id),
infer::AddrOf(expr.span), ty0);
- oldvisit::visit_expr(expr, (rcx, v));
+ visit::walk_expr(v, expr, rcx);
}
ast::expr_match(discr, ref arms) => {
guarantor::for_match(rcx, discr, *arms);
- oldvisit::visit_expr(expr, (rcx, v));
+ visit::walk_expr(v, expr, rcx);
}
ast::expr_fn_block(*) => {
ast::expr_loop(ref body, _) => {
let repeating_scope = rcx.set_repeating_scope(body.id);
- oldvisit::visit_expr(expr, (rcx, v));
+ visit::walk_expr(v, expr, rcx);
rcx.set_repeating_scope(repeating_scope);
}
ast::expr_while(cond, ref body) => {
let repeating_scope = rcx.set_repeating_scope(cond.id);
- (v.visit_expr)(cond, (rcx, v));
+ v.visit_expr(cond, rcx);
rcx.set_repeating_scope(body.id);
- (v.visit_block)(body, (rcx, v));
+ v.visit_block(body, rcx);
rcx.set_repeating_scope(repeating_scope);
}
_ => {
- oldvisit::visit_expr(expr, (rcx, v));
+ visit::walk_expr(v, expr, rcx);
}
}
}
fn check_expr_fn_block(rcx: @mut Rcx,
expr: @ast::expr,
- v: rvt) {
+ v: &mut RegionckVisitor) {
let tcx = rcx.fcx.tcx();
match expr.node {
ast::expr_fn_block(_, ref body) => {
}
let repeating_scope = rcx.set_repeating_scope(body.id);
- oldvisit::visit_expr(expr, (rcx, v));
+ visit::walk_expr(v, expr, rcx);
rcx.set_repeating_scope(repeating_scope);
}
self_ty: Some(mt.ty)
}
};
- let vtable_opt =
- lookup_vtable(&vcx,
- location_info,
- mt.ty,
- target_trait_ref,
- is_early);
- match vtable_opt {
- Some(vtable) => {
- // Map this expression to that
- // vtable (that is: "ex has vtable
- // <vtable>")
- if !is_early {
- insert_vtables(fcx, ex.id,
- @~[@~[vtable]]);
- }
- }
- None => {
- fcx.tcx().sess.span_err(
- ex.span,
- fmt!("failed to find an implementation \
- of trait %s for %s",
- fcx.infcx().ty_to_str(target_ty),
- fcx.infcx().ty_to_str(mt.ty)));
- }
+
+ let param_bounds = ty::ParamBounds {
+ builtin_bounds: ty::EmptyBuiltinBounds(),
+ trait_bounds: ~[target_trait_ref]
+ };
+ let vtables =
+ lookup_vtables_for_param(&vcx,
+ location_info,
+ None,
+ ¶m_bounds,
+ mt.ty,
+ is_early);
+
+ if !is_early {
+ insert_vtables(fcx, ex.id, @~[vtables]);
}
// Now, if this is &trait, we need to link the
use syntax::ast_util::{def_id_of_def, local_def};
use syntax::codemap::{span, dummy_sp};
use syntax::opt_vec;
-use syntax::oldvisit::{default_simple_visitor, default_visitor};
-use syntax::oldvisit::{mk_simple_visitor, mk_vt, visit_crate, visit_item};
-use syntax::oldvisit::{Visitor, SimpleVisitor};
-use syntax::oldvisit::{visit_mod};
+use syntax::visit;
use syntax::parse;
use util::ppaux::ty_to_str;
base_type_def_ids: @mut HashMap<def_id,def_id>,
}
-impl CoherenceChecker {
- pub fn check_coherence(self, crate: &Crate) {
- // Check implementations and traits. This populates the tables
- // containing the inherent methods and extension methods. It also
- // builds up the trait inheritance table.
- visit_crate(crate, ((), mk_simple_visitor(@SimpleVisitor {
- visit_item: |item| {
+struct CoherenceCheckVisitor { cc: CoherenceChecker }
+
+impl visit::Visitor<()> for CoherenceCheckVisitor {
+ fn visit_item(&mut self, item:@item, _:()) {
+
// debug!("(checking coherence) item '%s'",
-// self.crate_context.tcx.sess.str_of(item.ident));
+// self.cc.crate_context.tcx.sess.str_of(item.ident));
match item.node {
item_impl(_, ref opt_trait, _, _) => {
opt_trait.iter()
.map(|x| (*x).clone())
.collect();
- self.check_implementation(item, opt_trait);
+ self.cc.check_implementation(item, opt_trait);
}
_ => {
// Nothing to do.
}
};
- },
- .. *default_simple_visitor()
- })));
+
+ visit::walk_item(self, item, ());
+ }
+}
+
+struct PrivilegedScopeVisitor { cc: CoherenceChecker }
+
+impl visit::Visitor<()> for PrivilegedScopeVisitor {
+ fn visit_item(&mut self, item:@item, _:()) {
+
+ match item.node {
+ item_mod(ref module_) => {
+ // Then visit the module items.
+ visit::walk_mod(self, module_, ());
+ }
+ item_impl(_, None, ref ast_ty, _) => {
+ if !self.cc.ast_type_is_defined_in_local_crate(ast_ty) {
+ // This is an error.
+ let session = self.cc.crate_context.tcx.sess;
+ session.span_err(item.span,
+ "cannot associate methods with a type outside the \
+ crate the type is defined in; define and implement \
+ a trait or new type instead");
+ }
+ }
+ item_impl(_, Some(ref trait_ref), _, _) => {
+ // `for_ty` is `Type` in `impl Trait for Type`
+ let for_ty =
+ ty::node_id_to_type(self.cc.crate_context.tcx,
+ item.id);
+ if !type_is_defined_in_local_crate(for_ty) {
+ // This implementation is not in scope of its base
+ // type. This still might be OK if the trait is
+ // defined in the same crate.
+
+ let trait_def_id =
+ self.cc.trait_ref_to_trait_def_id(trait_ref);
+
+ if trait_def_id.crate != LOCAL_CRATE {
+ let session = self.cc.crate_context.tcx.sess;
+ session.span_err(item.span,
+ "cannot provide an extension implementation \
+ for a trait not defined in this crate");
+ }
+ }
+
+ visit::walk_item(self, item, ());
+ }
+ _ => {
+ visit::walk_item(self, item, ());
+ }
+ }
+ }
+}
+
+impl CoherenceChecker {
+ pub fn check_coherence(self, crate: &Crate) {
+ // Check implementations and traits. This populates the tables
+ // containing the inherent methods and extension methods. It also
+ // builds up the trait inheritance table.
+
+ let mut visitor = CoherenceCheckVisitor { cc: self };
+ visit::walk_crate(&mut visitor, crate, ());
// Check that there are no overlapping trait instances
self.check_implementation_coherence();
// Privileged scope checking
pub fn check_privileged_scopes(self, crate: &Crate) {
- visit_crate(crate, ((), mk_vt(@Visitor {
- visit_item: |item, (_context, visitor)| {
- match item.node {
- item_mod(ref module_) => {
- // Then visit the module items.
- visit_mod(module_, item.span, item.id, ((), visitor));
- }
- item_impl(_, None, ref ast_ty, _) => {
- if !self.ast_type_is_defined_in_local_crate(ast_ty) {
- // This is an error.
- let session = self.crate_context.tcx.sess;
- session.span_err(item.span,
- "cannot associate methods with a type outside the \
- crate the type is defined in; define and implement \
- a trait or new type instead");
- }
- }
- item_impl(_, Some(ref trait_ref), _, _) => {
- // `for_ty` is `Type` in `impl Trait for Type`
- let for_ty =
- ty::node_id_to_type(self.crate_context.tcx,
- item.id);
- if !type_is_defined_in_local_crate(for_ty) {
- // This implementation is not in scope of its base
- // type. This still might be OK if the trait is
- // defined in the same crate.
-
- let trait_def_id =
- self.trait_ref_to_trait_def_id(trait_ref);
-
- if trait_def_id.crate != LOCAL_CRATE {
- let session = self.crate_context.tcx.sess;
- session.span_err(item.span,
- "cannot provide an extension implementation \
- for a trait not defined in this crate");
- }
- }
-
- visit_item(item, ((), visitor));
- }
- _ => {
- visit_item(item, ((), visitor));
- }
- }
- },
- .. *default_visitor()
- })));
+ let mut visitor = PrivilegedScopeVisitor{ cc: self };
+ visit::walk_crate(&mut visitor, crate, ());
}
pub fn trait_ref_to_trait_def_id(&self, trait_ref: &trait_ref) -> def_id {
use syntax::codemap::span;
use syntax::codemap;
use syntax::print::pprust::{path_to_str, explicit_self_to_str};
-use syntax::oldvisit;
+use syntax::visit;
use syntax::opt_vec::OptVec;
use syntax::opt_vec;
use syntax::parse::token::special_idents;
+struct CollectItemTypesVisitor {
+ ccx: @mut CrateCtxt
+}
+
+impl visit::Visitor<()> for CollectItemTypesVisitor {
+ fn visit_item(&mut self, i:@ast::item, _:()) {
+ convert(self.ccx, i);
+ visit::walk_item(self, i, ());
+ }
+ fn visit_foreign_item(&mut self, i:@ast::foreign_item, _:()) {
+ convert_foreign(self.ccx, i);
+ visit::walk_foreign_item(self, i, ());
+ }
+}
+
pub fn collect_item_types(ccx: @mut CrateCtxt, crate: &ast::Crate) {
fn collect_intrinsic_type(ccx: &CrateCtxt,
lang_item: ast::def_id) {
Some(id) => { collect_intrinsic_type(ccx, id); } None => {}
}
- oldvisit::visit_crate(
- crate, ((),
- oldvisit::mk_simple_visitor(@oldvisit::SimpleVisitor {
- visit_item: |a| convert(ccx, a),
- visit_foreign_item: |a|convert_foreign(ccx, a),
- .. *oldvisit::default_simple_visitor()
- })));
+ let mut visitor = CollectItemTypesVisitor{ ccx: ccx };
+ visit::walk_crate(&mut visitor, crate, ());
}
pub trait ToTy {
use middle::typeck::infer::unify::{Redirect, Root, VarValue};
use util::ppaux::{mt_to_str, ty_to_str, trait_ref_to_str};
-use std::uint;
use syntax::ast;
pub trait InferStr {
match *self {
Redirect(ref vid) => fmt!("Redirect(%s)", vid.to_str()),
Root(ref pt, rk) => fmt!("Root(%s, %s)", pt.inf_str(cx),
- uint::to_str_radix(rk, 10u))
+ rk.to_str_radix(10u))
}
}
}
method_param(method_param),
// method invoked on a trait instance
- method_trait(ast::def_id, uint),
+ method_object(method_object),
}
bound_num: uint,
}
+// details for a method invoked with a receiver whose type is an object
+#[deriving(Clone, Encodable, Decodable)]
+pub struct method_object {
+ // the (super)trait containing the method to be invoked
+ trait_id: ast::def_id,
+
+ // the actual base trait id of the object
+ object_trait_id: ast::def_id,
+
+ // index of the method to be invoked amongst the trait's methods
+ method_num: uint,
+
+ // index into the actual runtime vtable.
+ // the vtable is formed by concatenating together the method lists of
+ // the base object trait and all supertraits; this is the index into
+ // that vtable
+ real_index: uint,
+}
+
+
#[deriving(Clone)]
pub struct method_map_entry {
// the type of the self parameter, which is not reflected in the fn type
#[license = "MIT/ASL2"];
#[crate_type = "lib"];
+// Rustc tasks always run on a fixed_stack_segment, so code in this
+// module can call C functions (in particular, LLVM functions) with
+// impunity.
+#[allow(cstack)];
+
extern mod extra;
extern mod syntax;
pub mod reachable;
pub mod graph;
pub mod cfg;
+ pub mod stack_check;
}
pub mod front {
use syntax::ast;
use syntax::codemap::{span};
-use syntax::oldvisit;
+use syntax::visit;
+use syntax::visit::Visitor;
use std::hashmap::HashSet;
use extra;
fields.map(|f| f.expr)
}
-// Takes a predicate p, returns true iff p is true for any subexpressions
-// of b -- skipping any inner loops (loop, while, loop_body)
-pub fn loop_query(b: &ast::Block, p: @fn(&ast::expr_) -> bool) -> bool {
- let rs = @mut false;
- let visit_expr: @fn(@ast::expr,
- (@mut bool,
- oldvisit::vt<@mut bool>)) = |e, (flag, v)| {
- *flag |= p(&e.node);
+struct LoopQueryVisitor {
+ p: @fn(&ast::expr_) -> bool
+}
+
+impl Visitor<@mut bool> for LoopQueryVisitor {
+ fn visit_expr(&mut self, e:@ast::expr, flag:@mut bool) {
+ *flag |= (self.p)(&e.node);
match e.node {
// Skip inner loops, since a break in the inner loop isn't a
// break inside the outer loop
ast::expr_loop(*) | ast::expr_while(*) => {}
- _ => oldvisit::visit_expr(e, (flag, v))
+ _ => visit::walk_expr(self, e, flag)
}
- };
- let v = oldvisit::mk_vt(@oldvisit::Visitor {
- visit_expr: visit_expr,
- .. *oldvisit::default_visitor()});
- oldvisit::visit_block(b, (rs, v));
+ }
+}
+
+// Takes a predicate p, returns true iff p is true for any subexpressions
+// of b -- skipping any inner loops (loop, while, loop_body)
+pub fn loop_query(b: &ast::Block, p: @fn(&ast::expr_) -> bool) -> bool {
+ let rs = @mut false;
+ let mut v = LoopQueryVisitor { p: p };
+ visit::walk_block(&mut v, b, rs);
return *rs;
}
+struct BlockQueryVisitor {
+ p: @fn(@ast::expr) -> bool
+}
+
+impl Visitor<@mut bool> for BlockQueryVisitor {
+ fn visit_expr(&mut self, e:@ast::expr, flag:@mut bool) {
+ *flag |= (self.p)(e);
+ visit::walk_expr(self, e, flag)
+ }
+}
+
// Takes a predicate p, returns true iff p is true for any subexpressions
// of b -- skipping any inner loops (loop, while, loop_body)
pub fn block_query(b: &ast::Block, p: @fn(@ast::expr) -> bool) -> bool {
let rs = @mut false;
- let visit_expr: @fn(@ast::expr,
- (@mut bool,
- oldvisit::vt<@mut bool>)) = |e, (flag, v)| {
- *flag |= p(e);
- oldvisit::visit_expr(e, (flag, v))
- };
- let v = oldvisit::mk_vt(@oldvisit::Visitor{
- visit_expr: visit_expr,
- .. *oldvisit::default_visitor()});
- oldvisit::visit_block(b, (rs, v));
+ let mut v = BlockQueryVisitor { p: p };
+ visit::walk_block(&mut v, b, rs);
return *rs;
}
&typeck::method_param(ref p) => {
p.repr(tcx)
}
- &typeck::method_trait(def_id, n) => {
- fmt!("method_trait(%s, %?)", def_id.repr(tcx), n)
+ &typeck::method_object(ref p) => {
+ p.repr(tcx)
}
}
}
}
}
+impl Repr for typeck::method_object {
+ fn repr(&self, tcx: ctxt) -> ~str {
+ fmt!("method_object(%s,%?,%?)",
+ self.trait_id.repr(tcx),
+ self.method_num,
+ self.real_index)
+ }
+}
+
+
impl Repr for ty::RegionVid {
fn repr(&self, _tcx: ctxt) -> ~str {
fmt!("%?", *self)
ty_to_str(tcx, *self)
}
}
+
+impl Repr for AbiSet {
+ fn repr(&self, _tcx: ctxt) -> ~str {
+ self.to_str()
+ }
+}
+
+impl UserString for AbiSet {
+ fn user_string(&self, _tcx: ctxt) -> ~str {
+ self.to_str()
+ }
+}
}
pub fn main() {
+ #[fixed_stack_segment]; #[inline(never)];
+
let args = os::args();
let input = io::stdin();
let out = io::stdout();
use syntax::print::pp;
use syntax::print::pprust;
use syntax::parse::token;
+use syntax::visit;
-pub fn each_binding(l: @ast::Local, f: @fn(&ast::Path, ast::NodeId)) {
- use syntax::oldvisit;
+struct EachBindingVisitor {
+ f: @fn(&ast::Path, ast::NodeId)
+}
- let vt = oldvisit::mk_simple_visitor(
- @oldvisit::SimpleVisitor {
- visit_pat: |pat| {
+impl visit::Visitor<()> for EachBindingVisitor {
+ fn visit_pat(&mut self, pat:@ast::pat, _:()) {
match pat.node {
ast::pat_ident(_, ref path, _) => {
- f(path, pat.id);
+ (self.f)(path, pat.id);
}
_ => {}
}
- },
- .. *oldvisit::default_simple_visitor()
- }
- );
- (vt.visit_pat)(l.pat, ((), vt));
+
+ visit::walk_pat(self, pat, ());
+ }
+}
+
+pub fn each_binding(l: @ast::Local, f: @fn(&ast::Path, ast::NodeId)) {
+ use syntax::visit::Visitor;
+
+ let mut vt = EachBindingVisitor{ f: f };
+
+ vt.visit_pat(l.pat, ());
}
/// A utility function that hands off a pretty printer to a callback.
Some(p) => p.to_str()
}
}
+
+ // Hack so that rustpkg can run either out of a rustc target dir,
+ // or the host dir
+ pub fn sysroot_to_use(&self) -> Option<@Path> {
+ if !in_target(self.sysroot_opt) {
+ self.sysroot_opt
+ }
+ else {
+ self.sysroot_opt.map(|p| { @p.pop().pop().pop() })
+ }
+ }
}
/// We assume that if ../../rustc exists, then we're running
debug!("Pushing onto root: %s | %s", self.id.path.to_str(), self.root.to_str());
let dirs = pkgid_src_in_workspace(&self.id, &self.root);
- debug!("Checking dirs: %?", dirs);
+ debug!("Checking dirs: %?", dirs.map(|s| s.to_str()).connect(":"));
let path = dirs.iter().find(|&d| os::path_exists(d));
let dir = match path {
pub use package_id::PkgId;
pub use target::{OutputType, Main, Lib, Test, Bench, Target, Build, Install};
-pub use version::{Version, NoVersion, split_version_general};
+pub use version::{Version, NoVersion, split_version_general, try_parsing_version};
pub use rustc::metadata::filesearch::rust_path;
use std::libc::consts::os::posix88::{S_IRUSR, S_IWUSR, S_IXUSR};
/// Figure out what the library name for <pkgid> in <workspace>'s build
/// directory is, and if the file exists, return it.
pub fn built_library_in_workspace(pkgid: &PkgId, workspace: &Path) -> Option<Path> {
- library_in_workspace(&pkgid.path, pkgid.short_name, Build, workspace, "build")
+ library_in_workspace(&pkgid.path, pkgid.short_name, Build, workspace, "build", &pkgid.version)
}
/// Does the actual searching stuff
pub fn installed_library_in_workspace(short_name: &str, workspace: &Path) -> Option<Path> {
- library_in_workspace(&Path(short_name), short_name, Install, workspace, "lib")
+ // NOTE: this could break once we're handling multiple versions better... want a test for it
+ library_in_workspace(&Path(short_name), short_name, Install, workspace, "lib", &NoVersion)
}
-
-/// This doesn't take a PkgId, so we can use it for `extern mod` inference, where we
-/// don't know the entire package ID.
/// `workspace` is used to figure out the directory to search.
/// `short_name` is taken as the link name of the library.
pub fn library_in_workspace(path: &Path, short_name: &str, where: Target,
- workspace: &Path, prefix: &str) -> Option<Path> {
+ workspace: &Path, prefix: &str, version: &Version) -> Option<Path> {
debug!("library_in_workspace: checking whether a library named %s exists",
short_name);
for p_path in libraries {
// Find a filename that matches the pattern: (lib_prefix)-hash-(version)(lib_suffix)
// and remember what the hash was
- let f_name = match p_path.filename() {
+ let mut f_name = match p_path.filestem() {
Some(s) => s, None => loop
};
-
- let mut hash = None;
- let mut which = 0;
- for piece in f_name.split_iter('-') {
- debug!("a piece = %s", piece);
- if which == 0 && piece != lib_prefix {
- break;
- }
- else if which == 0 {
- which += 1;
- }
- else if which == 1 {
- hash = Some(piece.to_owned());
- break;
- }
- else {
- // something went wrong
- hash = None;
- break;
- }
- }
-
- if hash.is_some() {
- result_filename = Some(p_path);
- break;
- }
- }
+ // Already checked the filetype above
+
+ // This is complicated because library names and versions can both contain dashes
+ loop {
+ if f_name.is_empty() { break; }
+ match f_name.rfind('-') {
+ Some(i) => {
+ debug!("Maybe %s is a version", f_name.slice(i + 1, f_name.len()));
+ match try_parsing_version(f_name.slice(i + 1, f_name.len())) {
+ Some(ref found_vers) if version == found_vers => {
+ match f_name.slice(0, i).rfind('-') {
+ Some(j) => {
+ debug!("Maybe %s equals %s", f_name.slice(0, j), lib_prefix);
+ if f_name.slice(0, j) == lib_prefix {
+ result_filename = Some(p_path);
+ }
+ break;
+ }
+ None => break
+ }
+ }
+ _ => { f_name = f_name.slice(0, i).to_owned(); }
+ }
+ }
+ None => break
+ } // match
+ } // loop
+ } // for
if result_filename.is_none() {
warn(fmt!("library_in_workspace didn't find a library in %s for %s",
/// Given the path name for a package script
/// and a package ID, parse the package script into
/// a PkgScript that we can then execute
- fn parse<'a>(script: Path, workspace: &Path, id: &'a PkgId) -> PkgScript<'a> {
+ fn parse<'a>(sysroot: @Path,
+ script: Path,
+ workspace: &Path,
+ id: &'a PkgId) -> PkgScript<'a> {
// Get the executable name that was invoked
let binary = os::args()[0].to_managed();
// Build the rustc session data structures to pass
// to the compiler
- debug!("pkgscript parse: %?", os::self_exe_path());
+ debug!("pkgscript parse: %s", sysroot.to_str());
let options = @session::options {
binary: binary,
- maybe_sysroot: Some(@os::self_exe_path().unwrap().pop()),
+ maybe_sysroot: Some(sysroot),
crate_type: session::bin_crate,
.. (*session::basic_options()).clone()
};
let crate = driver::phase_2_configure_and_expand(sess, cfg.clone(), crate);
let work_dir = build_pkg_id_in_workspace(id, workspace);
- debug!("Returning package script with id %?", id);
+ debug!("Returning package script with id %s", id.to_str());
PkgScript {
id: id,
let crate = util::ready_crate(sess, self.crate);
debug!("Building output filenames with script name %s",
driver::source_name(&self.input));
- let root = filesearch::get_or_default_sysroot().pop().pop(); // :-\
- debug!("Root is %s, calling compile_rest", root.to_str());
let exe = self.build_dir.push(~"pkg" + util::exe_suffix());
util::compile_crate_from_input(&self.input,
&self.build_dir,
sess,
crate);
- debug!("Running program: %s %s %s %s", exe.to_str(),
- sysroot.to_str(), root.to_str(), "install");
+ debug!("Running program: %s %s %s", exe.to_str(),
+ sysroot.to_str(), "install");
// FIXME #7401 should support commands besides `install`
let status = run::process_status(exe.to_str(), [sysroot.to_str(), ~"install"]);
if status != 0 {
}
else {
debug!("Running program (configs): %s %s %s",
- exe.to_str(), root.to_str(), "configs");
- let output = run::process_output(exe.to_str(), [root.to_str(), ~"configs"]);
+ exe.to_str(), sysroot.to_str(), "configs");
+ let output = run::process_output(exe.to_str(), [sysroot.to_str(), ~"configs"]);
// Run the configs() function to get the configs
let cfgs = str::from_bytes_slice(output.output).word_iter()
.map(|w| w.to_owned()).collect();
debug!("Package source directory = %?", pkg_src_dir);
let cfgs = match pkg_src_dir.chain_ref(|p| src.package_script_option(p)) {
Some(package_script_path) => {
- let pscript = PkgScript::parse(package_script_path,
+ let sysroot = self.sysroot_to_use().expect("custom build needs a sysroot");
+ let pscript = PkgScript::parse(sysroot,
+ package_script_path,
workspace,
pkgid);
- let sysroot = self.sysroot_opt.expect("custom build needs a sysroot");
let (cfgs, hook_result) = pscript.run_custom(sysroot);
debug!("Command return code = %?", hook_result);
if hook_result != 0 {
assert!(os::path_is_dir(&*cwd));
let cwd = (*cwd).clone();
let mut prog = run::Process::new(cmd, args, run::ProcessOptions {
- env: env,
+ env: env.map(|e| e + os::env()),
dir: Some(&cwd),
in_fd: None,
out_fd: None,
short_name,
Build,
workspace,
- "build").expect("lib_output_file_name")
+ "build",
+ &NoVersion).expect("lib_output_file_name")
}
fn output_file_name(workspace: &Path, short_name: &str) -> Path {
}
}
-// FIXME(#7249): these tests fail on multi-platform builds, so for now they're
-// only run one x86
-
-#[test] #[ignore(cfg(target_arch = "x86"))]
+#[test]
fn test_make_dir_rwx() {
let temp = &os::tmpdir();
let dir = temp.push("quux");
assert!(os::remove_dir_recursive(&dir));
}
-#[test] #[ignore(cfg(target_arch = "x86"))]
+#[test]
fn test_install_valid() {
use path_util::installed_library_in_workspace;
assert!(!os::path_exists(&bench));
}
-#[test] #[ignore(cfg(target_arch = "x86"))]
+#[test]
fn test_install_invalid() {
use conditions::nonexistent_package::cond;
use cond1 = conditions::missing_pkg_files::cond;
// Tests above should (maybe) be converted to shell out to rustpkg, too
-// FIXME: #7956: temporarily disabled
-#[ignore(cfg(target_arch = "x86"))]
fn test_install_git() {
let sysroot = test_sysroot();
debug!("sysroot = %s", sysroot.to_str());
assert!(!os::path_exists(&bench));
}
-#[test] #[ignore(cfg(target_arch = "x86"))]
+#[test]
fn test_package_ids_must_be_relative_path_like() {
use conditions::bad_pkg_id::cond;
}
-// FIXME: #7956: temporarily disabled
-#[ignore(cfg(target_arch = "x86"))]
fn test_package_version() {
let local_path = "mockgithub.com/catamorphism/test_pkg_version";
let repo = init_git_repo(&Path(local_path));
&temp_dir);
}
-// FIXME: #7956: temporarily disabled
#[test]
fn rustpkg_library_target() {
let foo_repo = init_git_repo(&Path("foo"));
assert_executable_exists(&dir, "foo");
}
-// FIXME: #7956: temporarily disabled
-// Failing on dist-linux bot
#[test]
-#[ignore]
fn package_script_with_default_build() {
let dir = create_local_package(&PkgId::new("fancy-lib"));
debug!("dir = %s", dir.to_str());
push("testsuite").push("pass").push("src").push("fancy-lib").push("pkg.rs");
debug!("package_script_with_default_build: %s", source.to_str());
if !os::copy_file(&source,
- & dir.push("src").push("fancy_lib-0.1").push("pkg.rs")) {
+ & dir.push("src").push("fancy-lib-0.1").push("pkg.rs")) {
fail!("Couldn't copy file");
}
command_line_test([~"install", ~"fancy-lib"], &dir);
assert_lib_exists(&dir, "fancy-lib", NoVersion);
- assert!(os::path_exists(&dir.push("build").push("fancy_lib").push("generated.rs")));
+ assert!(os::path_exists(&dir.push("build").push("fancy-lib").push("generated.rs")));
}
#[test]
#[test]
fn rustpkg_install_no_arg() {
let tmp = mkdtemp(&os::tmpdir(),
- "rustpkg_install_no_arg").expect("rustpkg_build_no_arg failed");
+ "rustpkg_install_no_arg").expect("rustpkg_install_no_arg failed");
let package_dir = tmp.push("src").push("foo");
assert!(os::mkdir_recursive(&package_dir, U_RWX));
writeFile(&package_dir.push("lib.rs"),
}
#[test]
-#[ignore (reason = "Specifying env doesn't work -- see #8028")]
fn rust_path_test() {
let dir_for_path = mkdtemp(&os::tmpdir(), "more_rust").expect("rust_path_test failed");
let dir = mk_workspace(&dir_for_path, &Path("foo"), &NoVersion);
let cwd = os::getcwd();
debug!("cwd = %s", cwd.to_str());
// use command_line_test_with_env
- let mut prog = run::Process::new("rustpkg",
- [~"install", ~"foo"],
-// This should actually extend the environment; then we can probably
-// un-ignore it
- run::ProcessOptions { env: Some(~[(~"RUST_LOG",
- ~"rustpkg"),
- (~"RUST_PATH",
- dir_for_path.to_str())]),
- dir: Some(&cwd),
- in_fd: None,
- out_fd: None,
- err_fd: None
- });
- prog.finish_with_output();
+ command_line_test_with_env([~"install", ~"foo"],
+ &cwd,
+ Some(~[(~"RUST_PATH", dir_for_path.to_str())]));
assert_executable_exists(&dir_for_path, "foo");
}
return;
}
- let out_path = Path("build/fancy_lib");
+ let out_path = Path("build/fancy-lib");
if !os::path_exists(&out_path) {
assert!(os::make_dir(&out_path, (S_IRUSR | S_IWUSR | S_IXUSR) as i32));
}
#[cfg(windows)]
pub fn link_exe(_src: &Path, _dest: &Path) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
+
/* FIXME (#1768): Investigate how to do this on win32
Node wraps symlinks by having a .bat,
but that won't work with minGW. */
#[cfg(target_os = "freebsd")]
#[cfg(target_os = "macos")]
pub fn link_exe(src: &Path, dest: &Path) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
+
use std::c_str::ToCStr;
use std::libc;
NoVersion // user didn't specify a version -- prints as 0.1
}
+// Equality on versions is non-symmetric: if self is NoVersion, it's equal to
+// anything; but if self is a precise version, it's not equal to NoVersion.
+// We should probably make equality symmetric, and use less-than and greater-than
+// where we currently use eq
impl Eq for Version {
fn eq(&self, other: &Version) -> bool {
match (self, other) {
SawDot
}
-fn try_parsing_version(s: &str) -> Option<Version> {
+pub fn try_parsing_version(s: &str) -> Option<Version> {
let s = s.trim();
debug!("Attempting to parse: %s", s);
let mut parse_state = Start;
/// number, return the prefix before the # and the version.
/// Otherwise, return None.
pub fn split_version<'a>(s: &'a str) -> Option<(&'a str, Version)> {
- split_version_general(s, '#')
+ // Check for extra '#' characters separately
+ if s.split_iter('#').len() > 2 {
+ None
+ }
+ else {
+ split_version_general(s, '#')
+ }
}
pub fn split_version_general<'a>(s: &'a str, sep: char) -> Option<(&'a str, Version)> {
- // reject strings with multiple '#'s
- for st in s.split_iter(sep) {
- debug!("whole = %s part = %s", s, st);
- }
- if s.split_iter(sep).len() > 2 {
- return None;
- }
match s.rfind(sep) {
Some(i) => {
debug!("in %s, i = %?", s, i);
* Sets the length of a vector
*
* This will explicitly set the size of the vector, without actually
- * modifing its buffers, so it is up to the caller to ensure that
+ * modifying its buffers, so it is up to the caller to ensure that
* the vector is actually the specified size.
*/
#[inline]
Also, a few conversion functions: `to_bit` and `to_str`.
-Finally, some inquries into the nature of truth: `is_true` and `is_false`.
+Finally, some inquiries into the nature of truth: `is_true` and `is_false`.
*/
///
/// Fails if the CString is null.
pub fn as_bytes<'a>(&'a self) -> &'a [u8] {
+ #[fixed_stack_segment]; #[inline(never)];
if self.buf.is_null() { fail!("CString is null!"); }
unsafe {
let len = libc::strlen(self.buf) as uint;
impl Drop for CString {
fn drop(&self) {
+ #[fixed_stack_segment]; #[inline(never)];
if self.owns_buffer_ {
unsafe {
libc::free(self.buf as *libc::c_void)
impl<'self> ToCStr for &'self [u8] {
fn to_c_str(&self) -> CString {
+ #[fixed_stack_segment]; #[inline(never)];
let mut cs = unsafe { self.to_c_str_unchecked() };
do cs.with_mut_ref |buf| {
for i in range(0, self.len()) {
}
unsafe fn to_c_str_unchecked(&self) -> CString {
+ #[fixed_stack_segment]; #[inline(never)];
do self.as_imm_buf |self_buf, self_len| {
let buf = libc::malloc(self_len as libc::size_t + 1) as *mut u8;
if buf.is_null() {
#[test]
fn test_unwrap() {
+ #[fixed_stack_segment]; #[inline(never)];
+
let c_str = "hello".to_c_str();
unsafe { libc::free(c_str.unwrap() as *libc::c_void) }
}
#[test]
fn test_with_ref() {
+ #[fixed_stack_segment]; #[inline(never)];
+
let c_str = "hello".to_c_str();
let len = unsafe { c_str.with_ref(|buf| libc::strlen(buf)) };
assert!(!c_str.is_null());
use io::WriterUtil;
use io;
use libc;
- use rt::borrowck;
use sys;
use managed;
n_bytes_freed: 0
};
- // Quick hack: we need to free this list upon task exit, and this
- // is a convenient place to do it.
- borrowck::clear_task_borrow_list();
-
// Pass 1: Make all boxes immortal.
//
// In this pass, nothing gets freed, so it does not matter whether
fn find<'a>(&'a self, key: &K) -> Option<&'a V>;
/// Return true if the map contains a value for the specified key
+ #[inline]
fn contains_key(&self, key: &K) -> bool {
self.find(key).is_some()
}
## Internationalization
The formatting syntax supported by the `ifmt!` extension supports
-internationalization by providing "methods" which execute various differnet
+internationalization by providing "methods" which execute various different
outputs depending on the input. The syntax and methods provided are similar to
other internationalization systems, so again nothing should seem alien.
Currently two methods are supported by this extension: "select" and "plural".
priv value: &'self util::Void,
}
+/// When a format is not otherwise specified, types are formatted by ascribing
+/// to this trait. There is no explicit way of selecting this trait for
+/// formatting; it is used only when no other format is specified.
+#[allow(missing_doc)]
+pub trait Default { fn fmt(&Self, &mut Formatter); }
+
+/// Format trait for the `b` character
#[allow(missing_doc)]
pub trait Bool { fn fmt(&Self, &mut Formatter); }
+/// Format trait for the `c` character
#[allow(missing_doc)]
pub trait Char { fn fmt(&Self, &mut Formatter); }
+/// Format trait for the `i` and `d` characters
#[allow(missing_doc)]
pub trait Signed { fn fmt(&Self, &mut Formatter); }
+/// Format trait for the `u` character
#[allow(missing_doc)]
pub trait Unsigned { fn fmt(&Self, &mut Formatter); }
+/// Format trait for the `o` character
#[allow(missing_doc)]
pub trait Octal { fn fmt(&Self, &mut Formatter); }
+/// Format trait for the `t` character
#[allow(missing_doc)]
pub trait Binary { fn fmt(&Self, &mut Formatter); }
+/// Format trait for the `x` character
#[allow(missing_doc)]
pub trait LowerHex { fn fmt(&Self, &mut Formatter); }
+/// Format trait for the `X` character
#[allow(missing_doc)]
pub trait UpperHex { fn fmt(&Self, &mut Formatter); }
+/// Format trait for the `s` character
#[allow(missing_doc)]
pub trait String { fn fmt(&Self, &mut Formatter); }
+/// Format trait for the `?` character
#[allow(missing_doc)]
pub trait Poly { fn fmt(&Self, &mut Formatter); }
+/// Format trait for the `p` character
#[allow(missing_doc)]
pub trait Pointer { fn fmt(&Self, &mut Formatter); }
+/// Format trait for the `f` character
#[allow(missing_doc)]
pub trait Float { fn fmt(&Self, &mut Formatter); }
}
}
-impl<'self> String for &'self str {
- fn fmt(s: & &'self str, f: &mut Formatter) {
- f.pad(*s);
+impl<'self, T: str::Str> String for T {
+ fn fmt(s: &T, f: &mut Formatter) {
+ f.pad(s.as_slice());
}
}
}
}
+// Implementation of Default for various core types
+
+macro_rules! delegate(($ty:ty to $other:ident) => {
+ impl<'self> Default for $ty {
+ fn fmt(me: &$ty, f: &mut Formatter) {
+ $other::fmt(me, f)
+ }
+ }
+})
+delegate!(int to Signed)
+delegate!( i8 to Signed)
+delegate!(i16 to Signed)
+delegate!(i32 to Signed)
+delegate!(i64 to Signed)
+delegate!(uint to Unsigned)
+delegate!( u8 to Unsigned)
+delegate!( u16 to Unsigned)
+delegate!( u32 to Unsigned)
+delegate!( u64 to Unsigned)
+delegate!(@str to String)
+delegate!(~str to String)
+delegate!(&'self str to String)
+delegate!(bool to Bool)
+delegate!(char to Char)
+delegate!(float to Float)
+delegate!(f32 to Float)
+delegate!(f64 to Float)
+
+impl<T> Default for *const T {
+ fn fmt(me: &*const T, f: &mut Formatter) { Pointer::fmt(me, f) }
+}
+
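The `delegate!` macro above implements `Default` for each core type by forwarding to an existing format trait. The same forwarding pattern in modern Rust syntax (with a hypothetical `Describe` trait standing in for `Default`, and `fmt::Display` standing in for the per-character traits):

```rust
// A hypothetical trait whose impls we want to generate by delegation.
trait Describe {
    fn describe(&self) -> String;
}

// macro_rules! sketch of the delegation pattern: one macro invocation per
// type, each impl forwarding to an existing trait implementation.
macro_rules! delegate_to_display {
    ($($ty:ty),*) => {
        $(
            impl Describe for $ty {
                fn describe(&self) -> String {
                    // Delegation: reuse the std::fmt::Display implementation.
                    format!("{}", self)
                }
            }
        )*
    };
}

delegate_to_display!(i32, u64, bool, char, f64);

fn main() {
    assert_eq!(42i32.describe(), "42");
    assert_eq!(true.describe(), "true");
}
```

The design choice is the same in both eras: a macro keeps the boilerplate per type down to one line, so adding a new delegated type cannot drift out of sync with the others.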
// If you expected tests to be here, look instead at the run-pass/ifmt.rs test,
// it's a lot easier than creating all of the rt::Piece structures here.
ArgumentNext, ArgumentIs(uint), ArgumentNamed(&'self str)
}
-/// Enum of alignments which are supoprted.
+/// Enum of alignments which are supported.
#[deriving(Eq)]
pub enum Alignment { AlignLeft, AlignRight, AlignUnknown }
}
}
// Finally the actual format specifier
- spec.ty = self.word();
+ if self.consume('?') {
+ spec.ty = "?";
+ } else {
+ spec.ty = self.word();
+ }
return spec;
}
use rt::io::Writer;
use str::OwnedStr;
use to_bytes::IterBytes;
-use uint;
use vec::ImmutableVector;
+use num::ToStrRadix;
// Alias `SipState` to `State`.
pub use State = hash::SipState;
let r = self.result_bytes();
let mut s = ~"";
for b in r.iter() {
- s.push_str(uint::to_str_radix(*b as uint, 16u));
+ s.push_str((*b as uint).to_str_radix(16u));
}
s
}
use super::*;
use prelude::*;
- use uint;
+ // Hash just the bytes of the slice, without length prefix
+ struct Bytes<'self>(&'self [u8]);
+ impl<'self> IterBytes for Bytes<'self> {
+ fn iter_bytes(&self, _lsb0: bool, f: &fn(&[u8]) -> bool) -> bool {
+ f(**self)
+ }
+ }
#[test]
fn test_siphash() {
fn to_hex_str(r: &[u8, ..8]) -> ~str {
let mut s = ~"";
for b in r.iter() {
- s.push_str(uint::to_str_radix(*b as uint, 16u));
+ s.push_str((*b as uint).to_str_radix(16u));
}
s
}
while t < 64 {
debug!("siphash test %?", t);
let vec = u8to64_le!(vecs[t], 0);
- let out = buf.hash_keyed(k0, k1);
+ let out = Bytes(buf.as_slice()).hash_keyed(k0, k1);
debug!("got %?, expected %?", out, vec);
assert_eq!(vec, out);
fn test_float_hashes_of_zero() {
assert_eq!(0.0.hash(), (-0.0).hash());
}
+
+ #[test]
+ fn test_hash_no_concat_alias() {
+ let s = ("aa", "bb");
+ let t = ("aabb", "");
+ let u = ("a", "abb");
+
+ let v = (&[1u8], &[0u8, 0], &[0u8]);
+ let w = (&[1u8, 0, 0, 0], &[], &[]);
+
+ assert!(v != w);
+ assert!(s.hash() != t.hash() && s.hash() != u.hash());
+ assert!(v.hash() != w.hash());
+ }
}
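The `test_hash_no_concat_alias` test above checks that hashing a tuple of strings or slices cannot collide with a different split of the same concatenated bytes, which requires the hasher input to be framed (length-prefixed or terminated), not just concatenated. The same property can be demonstrated with today's standard hashers (a sketch; `DefaultHasher` is SipHash-based but not the keyed `hash_keyed` API used above):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash a value with a fresh std hasher.
fn hash_of<T: Hash>(v: &T) -> u64 {
    let mut h = DefaultHasher::new();
    v.hash(&mut h);
    h.finish()
}

fn main() {
    // Three different splits of the same concatenated text. Because Hash for
    // str feeds framing bytes along with the content, they must not collide.
    let s = ("aa", "bb");
    let t = ("aabb", "");
    let u = ("a", "abb");
    assert_ne!(hash_of(&s), hash_of(&t));
    assert_ne!(hash_of(&s), hash_of(&u));
}
```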
impl Reader for *libc::FILE {
fn read(&self, bytes: &mut [u8], len: uint) -> uint {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
do bytes.as_mut_buf |buf_p, buf_len| {
assert!(buf_len >= len);
}
}
fn read_byte(&self) -> int {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
libc::fgetc(*self) as int
}
}
fn eof(&self) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
return libc::feof(*self) != 0 as c_int;
}
}
fn seek(&self, offset: int, whence: SeekStyle) {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
assert!(libc::fseek(*self,
offset as c_long,
}
}
fn tell(&self) -> uint {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
return libc::ftell(*self) as uint;
}
impl Drop for FILERes {
fn drop(&self) {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
libc::fclose(self.f);
}
* # Example
*
* ~~~ {.rust}
-* let stdin = core::io::stdin();
+* let stdin = std::io::stdin();
* let line = stdin.read_line();
-* core::io::print(line);
+* std::io::print(line);
* ~~~
*/
pub fn stdin() -> @Reader {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
@rustrt::rust_get_stdin() as @Reader
}
}
pub fn file_reader(path: &Path) -> Result<@Reader, ~str> {
+ #[fixed_stack_segment]; #[inline(never)];
+
let f = do path.with_c_str |pathbuf| {
do "rb".with_c_str |modebuf| {
unsafe { libc::fopen(pathbuf, modebuf as *libc::c_char) }
impl Writer for *libc::FILE {
fn write(&self, v: &[u8]) {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
do v.as_imm_buf |vbuf, len| {
let nout = libc::fwrite(vbuf as *c_void,
}
}
fn seek(&self, offset: int, whence: SeekStyle) {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
assert!(libc::fseek(*self,
offset as c_long,
}
}
fn tell(&self) -> uint {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
libc::ftell(*self) as uint
}
}
fn flush(&self) -> int {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
libc::fflush(*self) as int
}
}
fn get_type(&self) -> WriterType {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let fd = libc::fileno(*self);
if libc::isatty(fd) == 0 { File }
impl Writer for fd_t {
fn write(&self, v: &[u8]) {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let mut count = 0u;
do v.as_imm_buf |vbuf, len| {
}
fn flush(&self) -> int { 0 }
fn get_type(&self) -> WriterType {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
if libc::isatty(*self) == 0 { File } else { Screen }
}
impl Drop for FdRes {
fn drop(&self) {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
libc::close(self.fd);
}
pub fn mk_file_writer(path: &Path, flags: &[FileFlag])
-> Result<@Writer, ~str> {
+ #[fixed_stack_segment]; #[inline(never)];
+
#[cfg(windows)]
fn wb() -> c_int {
(O_WRONLY | libc::consts::os::extra::O_BINARY) as c_int
/// (8 bytes).
fn write_le_f64(&self, f: f64);
- /// Write a litten-endian IEEE754 single-precision floating-point
+ /// Write a little-endian IEEE754 single-precision floating-point
/// (4 bytes).
fn write_le_f32(&self, f: f32);
// FIXME: fileflags // #2004
pub fn buffered_file_writer(path: &Path) -> Result<@Writer, ~str> {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let f = do path.with_c_str |pathbuf| {
do "w".with_c_str |modebuf| {
* # Example
*
* ~~~ {.rust}
-* let stdout = core::io::stdout();
+* let stdout = std::io::stdout();
* stdout.write_str("hello\n");
* ~~~
*/
* # Example
*
* ~~~ {.rust}
-* let stderr = core::io::stderr();
+* let stderr = std::io::stderr();
* stderr.write_str("hello\n");
* ~~~
*/
blk: &fn(v: Res<*libc::FILE>)) {
blk(Res::new(Arg {
val: file.f, opt_level: opt_level,
- fsync_fn: |file, l| {
- unsafe {
- os::fsync_fd(libc::fileno(*file), l) as int
- }
- }
+ fsync_fn: |file, l| fsync_fd(fileno(*file), l)
}));
+
+ fn fileno(stream: *libc::FILE) -> libc::c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+ unsafe { libc::fileno(stream) }
+ }
}
// fsync fd after executing blk
blk: &fn(v: Res<fd_t>)) {
blk(Res::new(Arg {
val: fd.fd, opt_level: opt_level,
- fsync_fn: |fd, l| os::fsync_fd(*fd, l) as int
+ fsync_fn: |fd, l| fsync_fd(*fd, l)
}));
}
+ fn fsync_fd(fd: libc::c_int, level: Level) -> int {
+ #[fixed_stack_segment]; #[inline(never)];
+
+ os::fsync_fd(fd, level) as int
+ }
+
// Type of objects that may want to fsync
pub trait FSyncable { fn fsync(&self, l: Level) -> int; }
use cmp::Ord;
use clone::Clone;
use uint;
+use util;
/// Conversion from an `Iterator`
pub trait FromIterator<A> {
i
}
- /// Return the element that gives the maximum value from the specfied function
+ /// Return the element that gives the maximum value from the
+ /// specified function.
///
/// # Example
///
}).map_move(|(x, _)| x)
}
- /// Return the element that gives the minimum value from the specfied function
+ /// Return the element that gives the minimum value from the
+ /// specified function.
///
/// # Example
///
}
}
+/// A double-ended iterator yielding mutable references
+pub trait MutableDoubleEndedIterator {
+ // FIXME: #5898: should be called `reverse`
+ /// Use an iterator to reverse a container in-place
+ fn reverse_(&mut self);
+}
+
+impl<'self, A, T: DoubleEndedIterator<&'self mut A>> MutableDoubleEndedIterator for T {
+ // FIXME: #5898: should be called `reverse`
+ /// Use an iterator to reverse a container in-place
+ fn reverse_(&mut self) {
+ loop {
+ match (self.next(), self.next_back()) {
+ (Some(x), Some(y)) => util::swap(x, y),
+ _ => break
+ }
+ }
+ }
+}
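The `reverse_` implementation above swaps elements handed out from both ends of a mutable double-ended iterator until the ends meet. The identical technique still works in modern Rust, since a slice's `iter_mut` yields disjoint `&mut` references from `next` and `next_back` (a sketch; the free function name is illustrative):

```rust
use std::mem;

// In-place reversal via a mutable double-ended iterator: swap the front and
// back elements until next()/next_back() stop yielding a pair.
fn reverse_in_place<T>(xs: &mut [T]) {
    let mut it = xs.iter_mut();
    while let (Some(a), Some(b)) = (it.next(), it.next_back()) {
        mem::swap(a, b);
    }
}

fn main() {
    let mut ys = [1, 2, 3, 4, 5];
    reverse_in_place(&mut ys);
    assert_eq!(ys, [5, 4, 3, 2, 1]);
}
```

With an odd length the middle element is left in place: once the two cursors meet, `next_back` returns `None` and the loop ends, matching the `_ => break` arm above.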
+
/// An object implementing random access indexing by `uint`
///
/// A `RandomAccessIterator` should be either infinite or a `DoubleEndedIterator`.
}
}
+/// A range of numbers from [start, stop], inclusive of both endpoints
+#[deriving(Clone, DeepClone)]
+pub struct RangeInclusive<A> {
+ priv range: Range<A>,
+ priv done: bool
+}
+
+/// Return an iterator over the range [start, stop]
+#[inline]
+pub fn range_inclusive<A: Add<A, A> + Ord + Clone + One>(start: A, stop: A) -> RangeInclusive<A> {
+ RangeInclusive{range: range(start, stop), done: false}
+}
+
+impl<A: Add<A, A> + Ord + Clone> Iterator<A> for RangeInclusive<A> {
+ #[inline]
+ fn next(&mut self) -> Option<A> {
+ match self.range.next() {
+ Some(x) => Some(x),
+ None => {
+ if self.done {
+ None
+ } else {
+ self.done = true;
+ Some(self.range.stop.clone())
+ }
+ }
+ }
+ }
+}
+
+impl<A: Sub<A, A> + Integer + Ord + Clone> DoubleEndedIterator<A> for RangeInclusive<A> {
+ #[inline]
+ fn next_back(&mut self) -> Option<A> {
+ if self.range.stop > self.range.state {
+ let result = self.range.stop.clone();
+ self.range.stop = self.range.stop - self.range.one;
+ Some(result)
+ } else if self.done {
+ None
+ } else {
+ self.done = true;
+ Some(self.range.stop.clone())
+ }
+ }
+}
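The `done` flag above exists because the wrapped half-open `Range` can never yield `stop` itself, so the endpoint has to be emitted exactly once as a special case, in both directions. Modern Rust ships this as the built-in `..=` inclusive range, which uses the same kind of exhaustion flag internally:

```rust
fn main() {
    // Forward iteration includes the upper endpoint exactly once.
    let forward: Vec<i32> = (0..=5).collect();
    assert_eq!(forward, vec![0, 1, 2, 3, 4, 5]);

    // DoubleEndedIterator works too, mirroring the next_back impl above:
    // the endpoint comes out first when iterating in reverse.
    let backward: Vec<i32> = (0..=5).rev().collect();
    assert_eq!(backward, vec![5, 4, 3, 2, 1, 0]);
}
```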
+
impl<A: Add<A, A> + Clone> Iterator<A> for Counter<A> {
#[inline]
fn next(&mut self) -> Option<A> {
}
impl<A: Clone> Repeat<A> {
- /// Create a new `Repeat` that enlessly repeats the element `elt`.
+ /// Create a new `Repeat` that endlessly repeats the element `elt`.
#[inline]
pub fn new(elt: A) -> Repeat<A> {
Repeat{element: elt}
fail!("unreachable");
}
}
+
+ #[test]
+ fn test_range_inclusive() {
+ assert_eq!(range_inclusive(0i, 5).collect::<~[int]>(), ~[0i, 1, 2, 3, 4, 5]);
+ assert_eq!(range_inclusive(0i, 5).invert().collect::<~[int]>(), ~[5i, 4, 3, 2, 1, 0]);
+ }
+
+ #[test]
+ fn test_reverse() {
+ let mut ys = [1, 2, 3, 4, 5];
+ ys.mut_iter().reverse_();
+ assert_eq!(ys, [5, 4, 3, 2, 1]);
+ }
}
// doesn't link it correctly on i686, so we're going
// through a C function that mysteriously does work.
pub unsafe fn opendir(dirname: *c_char) -> *DIR {
+ #[fixed_stack_segment]; #[inline(never)];
rust_opendir(dirname)
}
pub unsafe fn readdir(dirp: *DIR) -> *dirent_t {
+ #[fixed_stack_segment]; #[inline(never)];
rust_readdir(dirp)
}
use unstable::intrinsics;
$(
- #[inline]
+ #[inline] #[fixed_stack_segment] #[inline(never)]
pub fn $name($( $arg : $arg_ty ),*) -> $rv {
unsafe {
$bound_name($( $arg ),*)
pub mod consts {
// FIXME (requires Issue #1433 to fix): replace with mathematical
// staticants from cmath.
- /// Archimedes' staticant
+ /// Archimedes' constant
pub static pi: f32 = 3.14159265358979323846264338327950288_f32;
/// pi/2.0
r
}
-///
-/// Converts a float to a string in a given radix
-///
-/// # Arguments
-///
-/// * num - The float value
-/// * radix - The base to use
-///
-/// # Failure
-///
-/// Fails if called on a special value like `inf`, `-inf` or `NaN` due to
-/// possible misinterpretation of the result at higher bases. If those values
-/// are expected, use `to_str_radix_special()` instead.
-///
-#[inline]
-pub fn to_str_radix(num: f32, rdx: uint) -> ~str {
- let (r, special) = strconv::float_to_str_common(
- num, rdx, true, strconv::SignNeg, strconv::DigAll);
- if special { fail!("number has a special value, \
- try to_str_radix_special() if those are expected") }
- r
-}
-
///
/// Converts a float to a string in a given radix, and a flag indicating
/// whether it's a special value
}
impl num::ToStrRadix for f32 {
+ /// Converts a float to a string in a given radix
+ ///
+ /// # Arguments
+ ///
+ /// * num - The float value
+ /// * radix - The base to use
+ ///
+ /// # Failure
+ ///
+ /// Fails if called on a special value like `inf`, `-inf` or `NaN` due to
+ /// possible misinterpretation of the result at higher bases. If those values
+ /// are expected, use `to_str_radix_special()` instead.
#[inline]
fn to_str_radix(&self, rdx: uint) -> ~str {
- to_str_radix(*self, rdx)
+ let (r, special) = strconv::float_to_str_common(
+ *self, rdx, true, strconv::SignNeg, strconv::DigAll);
+ if special { fail!("number has a special value, \
+ try to_str_radix_special() if those are expected") }
+ r
}
}
use unstable::intrinsics;
$(
- #[inline]
+ #[inline] #[fixed_stack_segment] #[inline(never)]
pub fn $name($( $arg : $arg_ty ),*) -> $rv {
unsafe {
$bound_name($( $arg ),*)
r
}
-///
-/// Converts a float to a string in a given radix
-///
-/// # Arguments
-///
-/// * num - The float value
-/// * radix - The base to use
-///
-/// # Failure
-///
-/// Fails if called on a special value like `inf`, `-inf` or `NaN` due to
-/// possible misinterpretation of the result at higher bases. If those values
-/// are expected, use `to_str_radix_special()` instead.
-///
-#[inline]
-pub fn to_str_radix(num: f64, rdx: uint) -> ~str {
- let (r, special) = strconv::float_to_str_common(
- num, rdx, true, strconv::SignNeg, strconv::DigAll);
- if special { fail!("number has a special value, \
- try to_str_radix_special() if those are expected") }
- r
-}
-
///
/// Converts a float to a string in a given radix, and a flag indicating
/// whether it's a special value
}
impl num::ToStrRadix for f64 {
+ /// Converts a float to a string in a given radix
+ ///
+ /// # Arguments
+ ///
+ /// * num - The float value
+ /// * radix - The base to use
+ ///
+ /// # Failure
+ ///
+ /// Fails if called on a special value like `inf`, `-inf` or `NaN` due to
+ /// possible misinterpretation of the result at higher bases. If those values
+ /// are expected, use `to_str_radix_special()` instead.
#[inline]
fn to_str_radix(&self, rdx: uint) -> ~str {
- to_str_radix(*self, rdx)
+ let (r, special) = strconv::float_to_str_common(
+ *self, rdx, true, strconv::SignNeg, strconv::DigAll);
+ if special { fail!("number has a special value, \
+ try to_str_radix_special() if those are expected") }
+ r
}
}
r
}
-///
-/// Converts a float to a string in a given radix
-///
-/// # Arguments
-///
-/// * num - The float value
-/// * radix - The base to use
-///
-/// # Failure
-///
-/// Fails if called on a special value like `inf`, `-inf` or `NaN` due to
-/// possible misinterpretation of the result at higher bases. If those values
-/// are expected, use `to_str_radix_special()` instead.
-///
-#[inline]
-pub fn to_str_radix(num: float, radix: uint) -> ~str {
- let (r, special) = strconv::float_to_str_common(
- num, radix, true, strconv::SignNeg, strconv::DigAll);
- if special { fail!("number has a special value, \
- try to_str_radix_special() if those are expected") }
- r
-}
-
///
/// Converts a float to a string in a given radix, and a flag indicating
/// whether it's a special value
}
impl num::ToStrRadix for float {
+ /// Converts a float to a string in a given radix
+ ///
+ /// # Arguments
+ ///
+ /// * num - The float value
+ /// * radix - The base to use
+ ///
+ /// # Failure
+ ///
+ /// Fails if called on a special value like `inf`, `-inf` or `NaN` due to
+ /// possible misinterpretation of the result at higher bases. If those values
+ /// are expected, use `to_str_radix_special()` instead.
#[inline]
fn to_str_radix(&self, radix: uint) -> ~str {
- to_str_radix(*self, radix)
+ let (r, special) = strconv::float_to_str_common(
+ *self, radix, true, strconv::SignNeg, strconv::DigAll);
+ if special { fail!("number has a special value, \
+ try to_str_radix_special() if those are expected") }
+ r
}
}
#[test]
pub fn test_to_str_radix() {
- assert_eq!(to_str_radix(36., 36u), ~"10");
- assert_eq!(to_str_radix(8.125, 2u), ~"1000.001");
+ assert_eq!(36.0f.to_str_radix(36u), ~"10");
+ assert_eq!(8.125f.to_str_radix(2u), ~"1000.001");
}
#[test]
#[allow(non_uppercase_statics)];
use num::{ToStrRadix, FromStrRadix};
-use num::{Zero, One, strconv};
+use num::{CheckedDiv, Zero, One, strconv};
use prelude::*;
use str;
pub static min_value: $T = (-1 as $T) << (bits - 1);
pub static max_value: $T = min_value - 1 as $T;
+impl CheckedDiv for $T {
+ #[inline]
+ fn checked_div(&self, v: &$T) -> Option<$T> {
+ if *v == 0 || (*self == min_value && *v == -1) {
+ None
+ } else {
+ Some(self / *v)
+ }
+ }
+}
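The signed `checked_div` above guards two distinct failure cases: a zero divisor, and the single overflowing quotient `min_value / -1`, whose true result (`-min_value`) is not representable in two's complement. Modern Rust's built-in `checked_div` covers exactly the same cases:

```rust
fn main() {
    // Ordinary division succeeds.
    assert_eq!(10i32.checked_div(2), Some(5));
    // Division by zero is rejected.
    assert_eq!(5i32.checked_div(0), None);
    // The one overflowing signed case: MIN / -1 would be -MIN,
    // which does not fit in the type.
    assert_eq!(i32::MIN.checked_div(-1), None);

    // Unsigned division can only fail on a zero divisor.
    assert_eq!(10u32.checked_div(0), None);
    assert_eq!(10u32.checked_div(2), Some(5));
}
```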
+
enum Range { Closed, HalfOpen }
#[inline]
f(buf.slice(0, cur))
}
-/// Convert to a string in base 10.
-#[inline]
-pub fn to_str(num: $T) -> ~str {
- to_str_radix(num, 10u)
-}
-
-/// Convert to a string in a given base.
-#[inline]
-pub fn to_str_radix(num: $T, radix: uint) -> ~str {
- let mut buf: ~[u8] = ~[];
- do strconv::int_to_str_bytes_common(num, radix, strconv::SignNeg) |i| {
- buf.push(i);
- }
- // We know we generated valid utf-8, so we don't need to go through that
- // check.
- unsafe { str::raw::from_bytes_owned(buf) }
-}
-
impl ToStr for $T {
+ /// Convert to a string in base 10.
#[inline]
fn to_str(&self) -> ~str {
- to_str(*self)
+ self.to_str_radix(10)
}
}
impl ToStrRadix for $T {
+ /// Convert to a string in a given base.
#[inline]
fn to_str_radix(&self, radix: uint) -> ~str {
- to_str_radix(*self, radix)
+ let mut buf: ~[u8] = ~[];
+ do strconv::int_to_str_bytes_common(*self, radix, strconv::SignNeg) |i| {
+ buf.push(i);
+ }
+ // We know we generated valid utf-8, so we don't need to go through that
+ // check.
+ unsafe { str::raw::from_bytes_owned(buf) }
}
}
use super::*;
use prelude::*;
+ use int;
use i16;
use i32;
use i64;
#[test]
fn test_to_str() {
- assert_eq!(to_str_radix(0 as $T, 10u), ~"0");
- assert_eq!(to_str_radix(1 as $T, 10u), ~"1");
- assert_eq!(to_str_radix(-1 as $T, 10u), ~"-1");
- assert_eq!(to_str_radix(127 as $T, 16u), ~"7f");
- assert_eq!(to_str_radix(100 as $T, 10u), ~"100");
+ assert_eq!((0 as $T).to_str_radix(10u), ~"0");
+ assert_eq!((1 as $T).to_str_radix(10u), ~"1");
+ assert_eq!((-1 as $T).to_str_radix(10u), ~"-1");
+ assert_eq!((127 as $T).to_str_radix(16u), ~"7f");
+ assert_eq!((100 as $T).to_str_radix(10u), ~"100");
}
#[test]
fn test_int_to_str_overflow() {
let mut i8_val: i8 = 127_i8;
- assert_eq!(i8::to_str(i8_val), ~"127");
+ assert_eq!(i8_val.to_str(), ~"127");
i8_val += 1 as i8;
- assert_eq!(i8::to_str(i8_val), ~"-128");
+ assert_eq!(i8_val.to_str(), ~"-128");
let mut i16_val: i16 = 32_767_i16;
- assert_eq!(i16::to_str(i16_val), ~"32767");
+ assert_eq!(i16_val.to_str(), ~"32767");
i16_val += 1 as i16;
- assert_eq!(i16::to_str(i16_val), ~"-32768");
+ assert_eq!(i16_val.to_str(), ~"-32768");
let mut i32_val: i32 = 2_147_483_647_i32;
- assert_eq!(i32::to_str(i32_val), ~"2147483647");
+ assert_eq!(i32_val.to_str(), ~"2147483647");
i32_val += 1 as i32;
- assert_eq!(i32::to_str(i32_val), ~"-2147483648");
+ assert_eq!(i32_val.to_str(), ~"-2147483648");
let mut i64_val: i64 = 9_223_372_036_854_775_807_i64;
- assert_eq!(i64::to_str(i64_val), ~"9223372036854775807");
+ assert_eq!(i64_val.to_str(), ~"9223372036854775807");
i64_val += 1 as i64;
- assert_eq!(i64::to_str(i64_val), ~"-9223372036854775808");
+ assert_eq!(i64_val.to_str(), ~"-9223372036854775808");
}
#[test]
fn test_range_step_zero_step() {
do range_step(0,10,0) |_i| { true };
}
+
+ #[test]
+ fn test_signed_checked_div() {
+ assert_eq!(10i.checked_div(&2), Some(5));
+ assert_eq!(5i.checked_div(&0), None);
+ assert_eq!(int::min_value.checked_div(&-1), None);
+ }
}
}))
}
}
+/// Division that returns `None` instead of failing on a zero divisor or
+/// (for signed types) overflow.
+pub trait CheckedDiv: Div<Self, Self> {
+    fn checked_div(&self, v: &Self) -> Option<Self>;
+}
+
/// Helper function for testing numeric operations
#[cfg(test)]
pub fn test_num<T:Num + NumCast>(ten: T, two: T) {
mod bench {
use extra::test::BenchHarness;
use rand::{XorShiftRng,RngUtil};
- use uint;
use float;
+ use to_str::ToStr;
#[bench]
fn uint_to_str_rand(bh: &mut BenchHarness) {
let mut rng = XorShiftRng::new();
do bh.iter {
- uint::to_str(rng.gen());
+ rng.gen::<uint>().to_str();
}
}
use num::BitCount;
use num::{ToStrRadix, FromStrRadix};
-use num::{Zero, One, strconv};
+use num::{CheckedDiv, Zero, One, strconv};
use prelude::*;
use str;
pub static min_value: $T = 0 as $T;
pub static max_value: $T = 0 as $T - 1 as $T;
+impl CheckedDiv for $T {
+ #[inline]
+ fn checked_div(&self, v: &$T) -> Option<$T> {
+ if *v == 0 {
+ None
+ } else {
+ Some(self / *v)
+ }
+ }
+}
+
enum Range { Closed, HalfOpen }
#[inline]
f(buf.slice(0, cur))
}
-/// Convert to a string in base 10.
-#[inline]
-pub fn to_str(num: $T) -> ~str {
- to_str_radix(num, 10u)
-}
-
-/// Convert to a string in a given base.
-#[inline]
-pub fn to_str_radix(num: $T, radix: uint) -> ~str {
- let mut buf = ~[];
- do strconv::int_to_str_bytes_common(num, radix, strconv::SignNone) |i| {
- buf.push(i);
- }
- // We know we generated valid utf-8, so we don't need to go through that
- // check.
- unsafe { str::raw::from_bytes_owned(buf) }
-}
-
impl ToStr for $T {
+ /// Convert to a string in base 10.
#[inline]
fn to_str(&self) -> ~str {
- to_str(*self)
+ self.to_str_radix(10u)
}
}
impl ToStrRadix for $T {
+ /// Convert to a string in a given base.
#[inline]
fn to_str_radix(&self, radix: uint) -> ~str {
- to_str_radix(*self, radix)
+ let mut buf = ~[];
+ do strconv::int_to_str_bytes_common(*self, radix, strconv::SignNone) |i| {
+ buf.push(i);
+ }
+ // We know we generated valid utf-8, so we don't need to go through that
+ // check.
+ unsafe { str::raw::from_bytes_owned(buf) }
}
}
use u32;
use u64;
use u8;
- use uint;
#[test]
fn test_num() {
#[test]
pub fn test_to_str() {
- assert_eq!(to_str_radix(0 as $T, 10u), ~"0");
- assert_eq!(to_str_radix(1 as $T, 10u), ~"1");
- assert_eq!(to_str_radix(2 as $T, 10u), ~"2");
- assert_eq!(to_str_radix(11 as $T, 10u), ~"11");
- assert_eq!(to_str_radix(11 as $T, 16u), ~"b");
- assert_eq!(to_str_radix(255 as $T, 16u), ~"ff");
- assert_eq!(to_str_radix(0xff as $T, 10u), ~"255");
+ assert_eq!((0 as $T).to_str_radix(10u), ~"0");
+ assert_eq!((1 as $T).to_str_radix(10u), ~"1");
+ assert_eq!((2 as $T).to_str_radix(10u), ~"2");
+ assert_eq!((11 as $T).to_str_radix(10u), ~"11");
+ assert_eq!((11 as $T).to_str_radix(16u), ~"b");
+ assert_eq!((255 as $T).to_str_radix(16u), ~"ff");
+ assert_eq!((0xff as $T).to_str_radix(10u), ~"255");
}
#[test]
#[test]
fn test_uint_to_str_overflow() {
let mut u8_val: u8 = 255_u8;
- assert_eq!(u8::to_str(u8_val), ~"255");
+ assert_eq!(u8_val.to_str(), ~"255");
u8_val += 1 as u8;
- assert_eq!(u8::to_str(u8_val), ~"0");
+ assert_eq!(u8_val.to_str(), ~"0");
let mut u16_val: u16 = 65_535_u16;
- assert_eq!(u16::to_str(u16_val), ~"65535");
+ assert_eq!(u16_val.to_str(), ~"65535");
u16_val += 1 as u16;
- assert_eq!(u16::to_str(u16_val), ~"0");
+ assert_eq!(u16_val.to_str(), ~"0");
let mut u32_val: u32 = 4_294_967_295_u32;
- assert_eq!(u32::to_str(u32_val), ~"4294967295");
+ assert_eq!(u32_val.to_str(), ~"4294967295");
u32_val += 1 as u32;
- assert_eq!(u32::to_str(u32_val), ~"0");
+ assert_eq!(u32_val.to_str(), ~"0");
let mut u64_val: u64 = 18_446_744_073_709_551_615_u64;
- assert_eq!(u64::to_str(u64_val), ~"18446744073709551615");
+ assert_eq!(u64_val.to_str(), ~"18446744073709551615");
u64_val += 1 as u64;
- assert_eq!(u64::to_str(u64_val), ~"0");
+ assert_eq!(u64_val.to_str(), ~"0");
}
#[test]
#[should_fail]
#[ignore(cfg(windows))]
pub fn to_str_radix1() {
- uint::to_str_radix(100u, 1u);
+ 100u.to_str_radix(1u);
}
#[test]
#[should_fail]
#[ignore(cfg(windows))]
pub fn to_str_radix37() {
- uint::to_str_radix(100u, 37u);
+ 100u.to_str_radix(37u);
}
#[test]
fn test_range_step_zero_step_down() {
do range_step(0,-10,0) |_i| { true };
}
+
+ #[test]
+ fn test_unsigned_checked_div() {
+ assert_eq!(10u.checked_div(&2), Some(5));
+ assert_eq!(5u.checked_div(&0), None);
+ }
}
}))
}
/// An iterator that yields either one or zero elements
+#[deriving(Clone, DeepClone)]
pub struct OptionIterator<A> {
priv opt: Option<A>
}
/// Delegates to the libc close() function, returning the same return value.
pub fn close(fd: c_int) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
libc::close(fd)
}
static BUF_BYTES : uint = 2048u;
pub fn getcwd() -> Path {
+ #[fixed_stack_segment]; #[inline(never)];
let mut buf = [0 as libc::c_char, ..BUF_BYTES];
do buf.as_mut_buf |buf, len| {
unsafe {
pub fn fill_utf16_buf_and_decode(f: &fn(*mut u16, DWORD) -> DWORD)
-> Option<~str> {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let mut n = TMPBUF_SZ as DWORD;
let mut res = None;
}
}
+#[cfg(stage0)]
+mod macro_hack {
+#[macro_escape];
+macro_rules! externfn(
+ (fn $name:ident ()) => (
+ extern {
+ fn $name();
+ }
+ )
+)
+}
+
/*
Accessing environment variables is not generally threadsafe.
Serialize access through a global lock.
};
}
- extern {
- #[fast_ffi]
- fn rust_take_env_lock();
- #[fast_ffi]
- fn rust_drop_env_lock();
- }
+ externfn!(fn rust_take_env_lock());
+ externfn!(fn rust_drop_env_lock());
}
/// Returns a vector of (variable, value) pairs for all the environment
unsafe {
#[cfg(windows)]
unsafe fn get_env_pairs() -> ~[~str] {
+ #[fixed_stack_segment]; #[inline(never)];
+
use libc::funcs::extra::kernel32::{
GetEnvironmentStringsA,
FreeEnvironmentStringsA
}
#[cfg(unix)]
unsafe fn get_env_pairs() -> ~[~str] {
+ #[fixed_stack_segment]; #[inline(never)];
+
extern {
fn rust_env_pairs() -> **libc::c_char;
}
/// Fetches the environment variable `n` from the current process, returning
/// None if the variable isn't set.
pub fn getenv(n: &str) -> Option<~str> {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
do with_env_lock {
let s = do n.with_c_str |buf| {
/// Fetches the environment variable `n` from the current process, returning
/// None if the variable isn't set.
pub fn getenv(n: &str) -> Option<~str> {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
do with_env_lock {
use os::win32::{as_utf16_p, fill_utf16_buf_and_decode};
/// Sets the environment variable `n` to the value `v` for the currently running
/// process
pub fn setenv(n: &str, v: &str) {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
do with_env_lock {
do n.with_c_str |nbuf| {
/// Sets the environment variable `n` to the value `v` for the currently running
/// process
pub fn setenv(n: &str, v: &str) {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
do with_env_lock {
use os::win32::as_utf16_p;
pub fn unsetenv(n: &str) {
#[cfg(unix)]
fn _unsetenv(n: &str) {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
do with_env_lock {
do n.with_c_str |nbuf| {
}
#[cfg(windows)]
fn _unsetenv(n: &str) {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
do with_env_lock {
use os::win32::as_utf16_p;
}
pub fn fdopen(fd: c_int) -> *FILE {
+ #[fixed_stack_segment]; #[inline(never)];
do "r".with_c_str |modebuf| {
unsafe {
libc::fdopen(fd, modebuf)
#[cfg(windows)]
pub fn fsync_fd(fd: c_int, _level: io::fsync::Level) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
use libc::funcs::extra::msvcrt::*;
return commit(fd);
#[cfg(target_os = "linux")]
#[cfg(target_os = "android")]
pub fn fsync_fd(fd: c_int, level: io::fsync::Level) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
use libc::funcs::posix01::unistd::*;
match level {
#[cfg(target_os = "macos")]
pub fn fsync_fd(fd: c_int, level: io::fsync::Level) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
use libc::consts::os::extra::*;
use libc::funcs::posix88::fcntl::*;
#[cfg(target_os = "freebsd")]
pub fn fsync_fd(fd: c_int, _l: io::fsync::Level) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
use libc::funcs::posix01::unistd::*;
return fsync(fd);
#[cfg(unix)]
pub fn pipe() -> Pipe {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
let mut fds = Pipe {input: 0 as c_int,
out: 0 as c_int };
#[cfg(windows)]
pub fn pipe() -> Pipe {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
// Windows pipes work subtly differently than unix pipes, and their
// inheritance has to be handled in a different way that I do not
// fully understand. Here we explicitly make the pipe non-inheritable,
// which means to pass it to a subprocess they need to be duplicated
- // first, as in core::run.
+ // first, as in std::run.
let mut fds = Pipe {input: 0 as c_int,
out: 0 as c_int };
let res = libc::pipe(&mut fds.input, 1024 as ::libc::c_uint,
}
fn dup2(src: c_int, dst: c_int) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
libc::dup2(src, dst)
}
#[cfg(target_os = "freebsd")]
fn load_self() -> Option<~str> {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
use libc::funcs::bsd44::*;
use libc::consts::os::extra::*;
#[cfg(target_os = "linux")]
#[cfg(target_os = "android")]
fn load_self() -> Option<~str> {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
use libc::funcs::posix01::unistd::readlink;
#[cfg(target_os = "macos")]
fn load_self() -> Option<~str> {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
do fill_charp_buf() |buf, sz| {
let mut sz = sz as u32;
#[cfg(windows)]
fn load_self() -> Option<~str> {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
use os::win32::fill_utf16_buf_and_decode;
do fill_utf16_buf_and_decode() |buf, sz| {
/// Indicates whether a path represents a directory
pub fn path_is_dir(p: &Path) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
do p.with_c_str |buf| {
rustrt::rust_path_is_dir(buf) != 0 as c_int
/// Indicates whether a path exists
pub fn path_exists(p: &Path) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
do p.with_c_str |buf| {
rustrt::rust_path_exists(buf) != 0 as c_int
#[cfg(windows)]
fn mkdir(p: &Path, _mode: c_int) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
use os::win32::as_utf16_p;
// FIXME: turn mode into something useful? #2623
#[cfg(unix)]
fn mkdir(p: &Path, mode: c_int) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
do p.with_c_str |buf| {
unsafe {
libc::mkdir(buf, mode as libc::mode_t) == (0 as c_int)
#[cfg(target_os = "freebsd")]
#[cfg(target_os = "macos")]
unsafe fn get_list(p: &Path) -> ~[~str] {
+ #[fixed_stack_segment]; #[inline(never)];
use libc::{dirent_t};
use libc::{opendir, readdir, closedir};
extern {
}
#[cfg(windows)]
unsafe fn get_list(p: &Path) -> ~[~str] {
+ #[fixed_stack_segment]; #[inline(never)];
use libc::consts::os::extra::INVALID_HANDLE_VALUE;
use libc::{wcslen, free};
use libc::funcs::extra::kernel32::{
#[cfg(windows)]
fn rmdir(p: &Path) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
use os::win32::as_utf16_p;
return do as_utf16_p(p.to_str()) |buf| {
#[cfg(unix)]
fn rmdir(p: &Path) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
do p.with_c_str |buf| {
unsafe {
libc::rmdir(buf) == (0 as c_int)
#[cfg(windows)]
fn chdir(p: &Path) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
use os::win32::as_utf16_p;
return do as_utf16_p(p.to_str()) |buf| {
#[cfg(unix)]
fn chdir(p: &Path) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
do p.with_c_str |buf| {
unsafe {
libc::chdir(buf) == (0 as c_int)
#[cfg(windows)]
fn do_copy_file(from: &Path, to: &Path) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
use os::win32::as_utf16_p;
return do as_utf16_p(from.to_str()) |fromp| {
#[cfg(unix)]
fn do_copy_file(from: &Path, to: &Path) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
let istream = do from.with_c_str |fromp| {
do "rb".with_c_str |modebuf| {
#[cfg(windows)]
fn unlink(p: &Path) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
use os::win32::as_utf16_p;
return do as_utf16_p(p.to_str()) |buf| {
#[cfg(unix)]
fn unlink(p: &Path) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
do p.with_c_str |buf| {
libc::unlink(buf) == (0 as c_int)
#[cfg(target_os = "macos")]
#[cfg(target_os = "freebsd")]
fn errno_location() -> *c_int {
+ #[fixed_stack_segment]; #[inline(never)];
#[nolink]
extern {
fn __error() -> *c_int;
#[cfg(target_os = "linux")]
#[cfg(target_os = "android")]
fn errno_location() -> *c_int {
+ #[fixed_stack_segment]; #[inline(never)];
#[nolink]
extern {
fn __errno_location() -> *c_int;
#[cfg(windows)]
/// Returns the platform-specific value of errno
pub fn errno() -> uint {
+ #[fixed_stack_segment]; #[inline(never)];
use libc::types::os::arch::extra::DWORD;
#[link_name = "kernel32"]
#[cfg(target_os = "freebsd")]
fn strerror_r(errnum: c_int, buf: *mut c_char, buflen: size_t)
-> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
#[nolink]
extern {
fn strerror_r(errnum: c_int, buf: *mut c_char, buflen: size_t)
// So we just use __xpg_strerror_r which is always POSIX compliant
#[cfg(target_os = "linux")]
fn strerror_r(errnum: c_int, buf: *mut c_char, buflen: size_t) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
#[nolink]
extern {
fn __xpg_strerror_r(errnum: c_int,
#[cfg(windows)]
fn strerror() -> ~str {
+ #[fixed_stack_segment]; #[inline(never)];
+
use libc::types::os::arch::extra::DWORD;
use libc::types::os::arch::extra::LPSTR;
use libc::types::os::arch::extra::LPVOID;
*/
#[cfg(target_os = "macos")]
pub fn real_args() -> ~[~str] {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let (argc, argv) = (*_NSGetArgc() as c_int,
*_NSGetArgv() as **c_char);
#[cfg(windows)]
pub fn real_args() -> ~[~str] {
+ #[fixed_stack_segment]; #[inline(never)];
+
let mut nArgs: c_int = 0;
let lpArgCount: *mut c_int = &mut nArgs;
let lpCmdLine = unsafe { GetCommandLineW() };
#[cfg(target_os = "freebsd")]
#[cfg(target_os = "macos")]
pub fn glob(pattern: &str) -> ~[Path] {
+ #[fixed_stack_segment]; #[inline(never)];
+
#[cfg(target_os = "linux")]
#[cfg(target_os = "android")]
fn default_glob_t () -> libc::glob_t {
#[cfg(unix)]
pub fn page_size() -> uint {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
libc::sysconf(libc::_SC_PAGESIZE) as uint
}
#[cfg(windows)]
pub fn page_size() -> uint {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let mut info = libc::SYSTEM_INFO::new();
libc::GetSystemInfo(&mut info);
#[cfg(unix)]
impl MemoryMap {
pub fn new(min_len: uint, options: ~[MapOption]) -> Result<~MemoryMap, MapError> {
+ #[fixed_stack_segment]; #[inline(never)];
+
use libc::off_t;
let mut addr: *c_void = ptr::null();
#[cfg(unix)]
impl Drop for MemoryMap {
fn drop(&self) {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
match libc::munmap(self.data as *c_void, self.len) {
0 => (),
#[cfg(windows)]
impl MemoryMap {
pub fn new(min_len: uint, options: ~[MapOption]) -> Result<~MemoryMap, MapError> {
+ #[fixed_stack_segment]; #[inline(never)];
+
use libc::types::os::arch::extra::{LPVOID, DWORD, SIZE_T, HANDLE};
let mut lpAddress: LPVOID = ptr::mut_null();
#[cfg(windows)]
impl Drop for MemoryMap {
fn drop(&self) {
+ #[fixed_stack_segment]; #[inline(never)];
+
use libc::types::os::arch::extra::{LPCVOID, HANDLE};
unsafe {
#[test]
fn copy_file_ok() {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let tempdir = getcwd(); // would like to use $TMPDIR,
// doesn't seem to work on Linux
#[test]
fn memory_map_file() {
+ #[fixed_stack_segment]; #[inline(never)];
+
use result::{Ok, Err};
use os::*;
use libc::*;
#[cfg(unix)]
+ #[fixed_stack_segment]
+ #[inline(never)]
fn lseek_(fd: c_int, size: uint) {
unsafe {
assert!(lseek(fd, size as off_t, SEEK_SET) == size as off_t);
}
}
#[cfg(windows)]
+ #[fixed_stack_segment]
+ #[inline(never)]
fn lseek_(fd: c_int, size: uint) {
unsafe {
assert!(lseek(fd, size as c_long, SEEK_SET) == size as c_long);
#[cfg(target_os = "win32")]
impl WindowsPath {
pub fn stat(&self) -> Option<libc::stat> {
+ #[fixed_stack_segment]; #[inline(never)];
do self.with_c_str |buf| {
let mut st = stat::arch::default_stat();
match unsafe { libc::stat(buf, &mut st) } {
#[cfg(not(target_os = "win32"))]
impl PosixPath {
pub fn stat(&self) -> Option<libc::stat> {
+ #[fixed_stack_segment]; #[inline(never)];
do self.with_c_str |buf| {
let mut st = stat::arch::default_stat();
match unsafe { libc::stat(buf as *libc::c_char, &mut st) } {
#[cfg(unix)]
impl PosixPath {
pub fn lstat(&self) -> Option<libc::stat> {
+ #[fixed_stack_segment]; #[inline(never)];
do self.with_c_str |buf| {
let mut st = stat::arch::default_stat();
match unsafe { libc::lstat(buf, &mut st) } {
}
pub fn extract_drive_prefix(s: &str) -> Option<(~str,~str)> {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
if (s.len() > 1 &&
libc::isalpha(s[0] as libc::c_int) != 0 &&
pub use container::{Container, Mutable, Map, MutableMap, Set, MutableSet};
pub use hash::Hash;
pub use iter::Times;
-pub use iterator::Extendable;
-pub use iterator::{Iterator, DoubleEndedIterator};
-pub use iterator::{ClonableIterator, OrdIterator};
+pub use iterator::{FromIterator, Extendable};
+pub use iterator::{Iterator, DoubleEndedIterator, RandomAccessIterator, ClonableIterator};
+pub use iterator::{OrdIterator, MutableDoubleEndedIterator};
pub use num::{Num, NumCast, CheckedAdd, CheckedSub, CheckedMul};
pub use num::{Orderable, Signed, Unsigned, Round};
pub use num::{Algebraic, Trigonometric, Exponential, Hyperbolic};
pub use num::{Integer, Fractional, Real, RealExt};
pub use num::{Bitwise, BitCount, Bounded};
-pub use num::{Primitive, Int, Float};
+pub use num::{Primitive, Int, Float, ToStrRadix};
pub use path::GenericPath;
pub use path::Path;
pub use path::PosixPath;
passing to the provided callback function
SAFETY NOTE: This will only work with a null-terminated
- pointer array. Barely less-dodgey Pointer Arithmetic.
+ pointer array. Barely less-dodgy Pointer Arithmetic.
Dragons be here.
*/
pub unsafe fn array_each<T>(arr: **T, cb: &fn(*T)) {
/// Create a weak random number generator with a default algorithm and seed.
///
-/// It returns the fatest `Rng` algorithm currently available in Rust without
+/// It returns the fastest `Rng` algorithm currently available in Rust without
/// consideration for cryptography or security. If you require a specifically
/// seeded `Rng` for consistency over time you should pick one algorithm and
/// create the `Rng` yourself.
}
impl XorShiftRng {
- /// Create an xor shift random number generator with a default seed.
+ /// Create an xor shift random number generator with a random seed.
pub fn new() -> XorShiftRng {
- // constants taken from http://en.wikipedia.org/wiki/Xorshift
- XorShiftRng::new_seeded(123456789u32,
- 362436069u32,
- 521288629u32,
- 88675123u32)
+ #[fixed_stack_segment]; #[inline(never)];
+
+ // generate seeds the same way as seed(), except we need a specific size
+ let mut s = [0u8, ..16];
+ loop {
+ do s.as_mut_buf |p, sz| {
+ unsafe {
+ rustrt::rand_gen_seed(p, sz as size_t);
+ }
+ }
+ if !s.iter().all(|x| *x == 0) {
+ break;
+ }
+ }
+ let s: &[u32, ..4] = unsafe { cast::transmute(&s) };
+ XorShiftRng::new_seeded(s[0], s[1], s[2], s[3])
}
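The seeding loop above retries until at least one byte is nonzero because an all-zero state is a fixed point of xorshift: the generator would emit zeros forever. A hypothetical standalone sketch of the xorshift128 step that `new_seeded` ultimately drives (the classic shift/xor recurrence from Marsaglia's xorshift family; names are illustrative, not the library's API):

```rust
/// One step of the xorshift128 recurrence. Mutates the 4-word state
/// in place and returns the next 32-bit output.
fn xorshift128(state: &mut [u32; 4]) -> u32 {
    let t = state[0] ^ (state[0] << 11);
    state[0] = state[1];
    state[1] = state[2];
    state[2] = state[3];
    state[3] = (state[3] ^ (state[3] >> 19)) ^ (t ^ (t >> 8));
    state[3]
}

fn main() {
    // The all-zero state never escapes zero -- hence the retry loop
    // in XorShiftRng::new() above.
    let mut zero = [0u32; 4];
    assert_eq!(xorshift128(&mut zero), 0);
    assert_eq!(zero, [0u32; 4]);
}
```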
/**
/// Create a new random seed.
pub fn seed() -> ~[u8] {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let n = rustrt::rand_seed_size() as uint;
let mut s = vec::from_elem(n, 0_u8);
#[test]
fn compare_isaac_implementation() {
+ #[fixed_stack_segment]; #[inline(never)];
+
// This is to verify that the implementation of the ISAAC rng is
// correct (i.e. matches the output of the upstream implementation,
// which is in the runtime)
/// # Example
///
/// ~~~
-/// use core::rand::distributions::StandardNormal;
+/// use std::rand::distributions::StandardNormal;
///
/// fn main() {
/// let normal = 2.0 + (*rand::random::<StandardNormal>()) * 3.0;
/// # Example
///
/// ~~~
-/// use core::rand::distributions::Exp1;
+/// use std::rand::distributions::Exp1;
///
/// fn main() {
/// let exp2 = (*rand::random::<Exp1>()) * 0.5;
/// `Result` is a type that represents either success (`Ok`) or failure (`Err`).
///
-/// In order to provide informative error messages, `E` is reqired to implement `ToStr`.
+/// In order to provide informative error messages, `E` is required to implement `ToStr`.
-/// It is further recommended for `E` to be a descriptive error type, eg a `enum` for
-/// all possible errors cases.
+/// It is further recommended for `E` to be a descriptive error type, e.g. an `enum` for
+/// all possible error cases.
#[deriving(Clone, Eq)]
args
}
- extern {
- fn rust_take_global_args_lock();
- fn rust_drop_global_args_lock();
- fn rust_get_global_args_ptr() -> *mut Option<~~[~str]>;
+ #[cfg(stage0)]
+ mod macro_hack {
+ #[macro_escape];
+ macro_rules! externfn(
+ (fn $name:ident () $(-> $ret_ty:ty),*) => (
+ extern {
+ fn $name() $(-> $ret_ty),*;
+ }
+ )
+ )
}
+ externfn!(fn rust_take_global_args_lock())
+ externfn!(fn rust_drop_global_args_lock())
+ externfn!(fn rust_get_global_args_ptr() -> *mut Option<~~[~str]>)
+
#[cfg(test)]
mod tests {
use option::{Some, None};
// option. This file may not be copied, modified, or distributed
// except according to those terms.
+use cell::Cell;
use c_str::ToCStr;
use cast::transmute;
use libc::{c_char, size_t, STDERR_FILENO};
use io::{Writer, WriterUtil};
use option::{Option, None, Some};
use uint;
+use rt::env;
+use rt::local::Local;
+use rt::task::Task;
use str;
use str::{OwnedStr, StrSlice};
use sys;
static ALL_BITS: uint = FROZEN_BIT | MUT_BIT;
#[deriving(Eq)]
-struct BorrowRecord {
+pub struct BorrowRecord {
box: *mut raw::Box<()>,
file: *c_char,
line: size_t
}
fn try_take_task_borrow_list() -> Option<~[BorrowRecord]> {
- // XXX
- None
+ do Local::borrow::<Task, Option<~[BorrowRecord]>> |task| {
+ task.borrow_list.take()
+ }
}
-fn swap_task_borrow_list(_f: &fn(~[BorrowRecord]) -> ~[BorrowRecord]) {
- // XXX
+fn swap_task_borrow_list(f: &fn(~[BorrowRecord]) -> ~[BorrowRecord]) {
+ let borrows = match try_take_task_borrow_list() {
+ Some(l) => l,
+ None => ~[]
+ };
+ let borrows = f(borrows);
+ let borrows = Cell::new(borrows);
+ do Local::borrow::<Task, ()> |task| {
+ task.borrow_list = Some(borrows.take());
+ }
}
-pub unsafe fn clear_task_borrow_list() {
+pub fn clear_task_borrow_list() {
// pub because it is used by the box annihilator.
let _ = try_take_task_borrow_list();
}
//! A useful debugging function that prints a pointer + tag + newline
//! without allocating memory.
- // XXX
- if false {
+ if ENABLE_DEBUG && env::debug_borrow() {
debug_borrow_slow(tag, p, old_bits, new_bits, filename, line);
}
}
unsafe fn write_cstr(&self, p: *c_char) {
+ #[fixed_stack_segment]; #[inline(never)];
use libc::strlen;
use vec;
use rt::test::*;
use cell::Cell;
use iter::Times;
+ use rt::util;
#[test]
fn oneshot_single_thread_close_port_first() {
#[test]
fn oneshot_multi_thread_close_stress() {
+ if util::limit_thread_creation_due_to_osx_and_valgrind() { return; }
do stress_factor().times {
do run_in_newsched_task {
let (port, chan) = oneshot::<int>();
#[test]
fn oneshot_multi_thread_send_close_stress() {
+ if util::limit_thread_creation_due_to_osx_and_valgrind() { return; }
do stress_factor().times {
do run_in_newsched_task {
let (port, chan) = oneshot::<int>();
#[test]
fn oneshot_multi_thread_recv_close_stress() {
+ if util::limit_thread_creation_due_to_osx_and_valgrind() { return; }
do stress_factor().times {
do run_in_newsched_task {
let (port, chan) = oneshot::<int>();
#[test]
fn oneshot_multi_thread_send_recv_stress() {
+ if util::limit_thread_creation_due_to_osx_and_valgrind() { return; }
do stress_factor().times {
do run_in_newsched_task {
let (port, chan) = oneshot::<~int>();
#[test]
fn stream_send_recv_stress() {
+ if util::limit_thread_creation_due_to_osx_and_valgrind() { return; }
do stress_factor().times {
do run_in_mt_newsched_task {
let (port, chan) = stream::<~int>();
#[test]
fn shared_chan_stress() {
+ if util::limit_thread_creation_due_to_osx_and_valgrind() { return; }
do run_in_mt_newsched_task {
let (port, chan) = stream();
let chan = SharedChan::new(chan);
#[test]
fn shared_port_stress() {
+ if util::limit_thread_creation_due_to_osx_and_valgrind() { return; }
do run_in_mt_newsched_task {
// XXX: Removing these type annotations causes an ICE
let (end_port, end_chan) = stream::<()>();
use rand;
use rand::RngUtil;
+ if util::limit_thread_creation_due_to_osx_and_valgrind() { return; }
+
do run_in_mt_newsched_task {
let (end_port, end_chan) = stream::<()>();
let end_chan = SharedChan::new(end_chan);
// They are expected to be initialized once then left alone.
static mut MIN_STACK: uint = 2000000;
+static mut DEBUG_BORROW: bool = false;
pub fn init() {
unsafe {
},
None => ()
}
+ match os::getenv("RUST_DEBUG_BORROW") {
+ Some(_) => DEBUG_BORROW = true,
+ None => ()
+ }
}
}
pub fn min_stack() -> uint {
unsafe { MIN_STACK }
}
+
+pub fn debug_borrow() -> bool {
+ unsafe { DEBUG_BORROW }
+}
}
/// A wrapper around libc::malloc, aborting on out-of-memory
-#[inline]
pub unsafe fn malloc_raw(size: uint) -> *c_void {
+ #[fixed_stack_segment]; #[inline(never)];
+
let p = malloc(size as size_t);
if p.is_null() {
// we need a non-allocating way to print an error here
}
/// A wrapper around libc::realloc, aborting on out-of-memory
-#[inline]
pub unsafe fn realloc_raw(ptr: *mut c_void, size: uint) -> *mut c_void {
+ #[fixed_stack_segment]; #[inline(never)];
+
let p = realloc(ptr, size as size_t);
if p.is_null() {
// we need a non-allocating way to print an error here
exchange_free(ptr)
}
-#[inline]
pub unsafe fn exchange_free(ptr: *c_char) {
+ #[fixed_stack_segment]; #[inline(never)];
+
free(ptr as *c_void);
}
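The `malloc_raw`/`realloc_raw` wrappers above abort on out-of-memory rather than returning null, so callers never have to check for a null pointer. A sketch of the same pattern in modern Rust (using `std::alloc` instead of `libc`; the function name mirrors the original but is otherwise hypothetical):

```rust
use std::alloc::{alloc, dealloc, handle_alloc_error, Layout};

/// Allocate `size` bytes, aborting the process (rather than returning
/// null) if the allocation fails.
unsafe fn malloc_raw(size: usize) -> *mut u8 {
    let layout = Layout::from_size_align(size, 1).unwrap();
    let p = alloc(layout);
    if p.is_null() {
        // As in the original: we must not allocate while reporting
        // out-of-memory, so divert straight to the abort path.
        handle_alloc_error(layout);
    }
    p
}

fn main() {
    unsafe {
        let p = malloc_raw(16);
        assert!(!p.is_null());
        dealloc(p, Layout::from_size_align(16, 1).unwrap());
    }
}
```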
/// (8 bytes).
fn write_le_f64(&mut self, f: f64);
- /// Write a litten-endian IEEE754 single-precision floating-point
+ /// Write a little-endian IEEE754 single-precision floating-point
/// (4 bytes).
fn write_le_f32(&mut self, f: f32);
Readers and Writers may be composed to add capabilities like string
parsing, encoding, and compression.
-This will likely live in core::io, not core::rt::io.
+This will likely live in std::io, not std::rt::io.
# Examples
(continuation-passing) style popularised by node.js. Such systems rely
on all computations being run inside an event loop which maintains a
list of all pending I/O events; when one completes the registered
-callback is run and the code that made the I/O request continiues.
+callback is run and the code that made the I/O request continues.
Such interfaces achieve non-blocking at the expense of being more
difficult to reason about.
while still providing feedback about errors. The basic strategy:
* Errors are fatal by default, resulting in task failure
-* Errors raise the `io_error` conditon which provides an opportunity to inspect
+* Errors raise the `io_error` condition which provides an opportunity to inspect
an IoError object containing details.
* Return values must have a sensible null or zero value which is returned
if a condition is handled successfully. This may be an `Option`, an empty
* XXX: How should we use condition handlers that return values?
* XXX: Should EOF raise default conditions when EOF is not an error?
-# Issues withi/o scheduler affinity, work stealing, task pinning
+# Issues with i/o scheduler affinity, work stealing, task pinning
# Resource management
/// println(reader.read_line());
/// }
///
- /// # Failue
+ /// # Failure
///
/// Returns `true` on failure.
fn eof(&mut self) -> bool;
type Port = u16;
-#[deriving(Eq, TotalEq)]
+#[deriving(Eq, TotalEq, Clone)]
pub enum IpAddr {
Ipv4Addr(u8, u8, u8, u8),
Ipv6Addr(u16, u16, u16, u16, u16, u16, u16, u16)
}
}
-#[deriving(Eq, TotalEq)]
+#[deriving(Eq, TotalEq, Clone)]
pub struct SocketAddr {
ip: IpAddr,
port: Port,
fn write(&mut self, buf: &[u8]) {
match (**self).write(buf) {
Ok(_) => (),
- Err(ioerr) => {
- io_error::cond.raise(ioerr);
- }
+ Err(ioerr) => io_error::cond.raise(ioerr),
}
}
impl Listener<TcpStream> for TcpListener {
fn accept(&mut self) -> Option<TcpStream> {
match (**self).accept() {
- Ok(s) => {
- Some(TcpStream::new(s))
- }
+ Ok(s) => Some(TcpStream::new(s)),
Err(ioerr) => {
io_error::cond.raise(ioerr);
return None;
}
impl RtioTimer for Timer {
- fn sleep(&self, msecs: u64) {
+ fn sleep(&mut self, msecs: u64) {
(**self).sleep(msecs);
}
}
mod test {
use super::*;
use rt::test::*;
- use option::{Some, None};
#[test]
fn test_io_timer_sleep_simple() {
do run_in_newsched_task {
let timer = Timer::new();
- match timer {
- Some(t) => t.sleep(1),
- None => assert!(false)
- }
+ do timer.map_move |mut t| { t.sleep(1) };
}
}
-}
\ No newline at end of file
+}
on_exit: Option<~fn(bool)>,
// nesting level counter for task::unkillable calls (0 == killable).
unkillable: int,
- // nesting level counter for unstable::atomically calls (0 == can yield).
+ // nesting level counter for unstable::atomically calls (0 == can deschedule).
wont_sleep: int,
// A "spare" handle to the kill flag inside the kill handle. Used during
// blocking/waking as an optimization to avoid two xadds on the refcount.
}
/// Enter a possibly-nested "atomic" section of code. Just for assertions.
- /// All calls must be paired with a subsequent call to allow_yield.
+ /// All calls must be paired with a subsequent call to allow_deschedule.
#[inline]
- pub fn inhibit_yield(&mut self) {
+ pub fn inhibit_deschedule(&mut self) {
self.wont_sleep += 1;
}
/// Exit a possibly-nested "atomic" section of code. Just for assertions.
- /// All calls must be paired with a preceding call to inhibit_yield.
+ /// All calls must be paired with a preceding call to inhibit_deschedule.
#[inline]
- pub fn allow_yield(&mut self) {
+ pub fn allow_deschedule(&mut self) {
rtassert!(self.wont_sleep != 0);
self.wont_sleep -= 1;
}
}
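The paired calls above maintain a simple nesting counter: each `inhibit_deschedule` must be matched by an `allow_deschedule`, and descheduling is permitted only when the counter returns to zero. A minimal standalone sketch of that pattern (hypothetical names, not the scheduler's actual types):

```rust
/// Nesting counter for "atomic" sections, mirroring the wont_sleep
/// field: zero means the task may deschedule.
struct AtomicSection {
    wont_sleep: u32,
}

impl AtomicSection {
    fn inhibit(&mut self) {
        self.wont_sleep += 1;
    }
    fn allow(&mut self) {
        // Mirrors the rtassert! in allow_deschedule: an unpaired
        // allow() indicates a logic error.
        assert!(self.wont_sleep != 0, "unpaired allow() call");
        self.wont_sleep -= 1;
    }
    fn can_deschedule(&self) -> bool {
        self.wont_sleep == 0
    }
}

fn main() {
    let mut s = AtomicSection { wont_sleep: 0 };
    s.inhibit();
    s.inhibit(); // sections may nest
    s.allow();
    assert!(!s.can_deschedule()); // still one level deep
    s.allow();
    assert!(s.can_deschedule());
}
```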
impl LocalHeap {
+ #[fixed_stack_segment] #[inline(never)]
pub fn new() -> LocalHeap {
unsafe {
// Don't need synchronization for the single-threaded local heap
}
}
+ #[fixed_stack_segment] #[inline(never)]
pub fn alloc(&mut self, td: *TypeDesc, size: uint) -> *OpaqueBox {
unsafe {
return rust_boxed_region_malloc(self.boxed_region, td, size as size_t);
}
}
+ #[fixed_stack_segment] #[inline(never)]
pub fn realloc(&mut self, ptr: *OpaqueBox, size: uint) -> *OpaqueBox {
unsafe {
return rust_boxed_region_realloc(self.boxed_region, ptr, size as size_t);
}
}
+ #[fixed_stack_segment] #[inline(never)]
pub fn free(&mut self, box: *OpaqueBox) {
unsafe {
return rust_boxed_region_free(self.boxed_region, box);
}
impl Drop for LocalHeap {
+ #[fixed_stack_segment] #[inline(never)]
fn drop(&self) {
unsafe {
rust_delete_boxed_region(self.boxed_region);
use tls = rt::thread_local_storage;
/// Initialize the TLS key. Other ops will fail if this isn't executed first.
+#[fixed_stack_segment]
+#[inline(never)]
pub fn init_tls_key() {
unsafe {
rust_initialize_rt_tls_key();
}
}
+#[fixed_stack_segment]
+#[inline(never)]
fn maybe_tls_key() -> Option<tls::Key> {
unsafe {
let key: *mut c_void = rust_get_rt_tls_key();
}
extern {
- #[fast_ffi]
fn rust_get_rt_tls_key() -> *mut c_void;
}
-
}
};
// Truncate the string
- let buf_bytes = 256;
+ let buf_bytes = 2048;
if s.len() > buf_bytes {
let s = s.slice(0, buf_bytes) + "[...]";
print(s);
/// Configure logging by traversing the crate map and setting the
/// per-module global logging flags based on the logging spec
+#[fixed_stack_segment] #[inline(never)]
pub fn init(crate_map: *u8) {
use c_str::ToCStr;
use os;
}
}
+#[fixed_stack_segment] #[inline(never)]
pub fn console_on() { unsafe { rust_log_console_on() } }
+
+#[fixed_stack_segment] #[inline(never)]
pub fn console_off() { unsafe { rust_log_console_off() } }
+
+#[fixed_stack_segment] #[inline(never)]
fn should_log_console() -> bool { unsafe { rust_should_log_console() != 0 } }
extern {
Several modules in `core` are clients of `rt`:
-* `core::task` - The user-facing interface to the Rust task model.
-* `core::task::local_data` - The interface to local data.
-* `core::gc` - The garbage collector.
-* `core::unstable::lang` - Miscellaneous lang items, some of which rely on `core::rt`.
-* `core::condition` - Uses local data.
-* `core::cleanup` - Local heap destruction.
-* `core::io` - In the future `core::io` will use an `rt` implementation.
-* `core::logging`
-* `core::pipes`
-* `core::comm`
-* `core::stackwalk`
+* `std::task` - The user-facing interface to the Rust task model.
+* `std::task::local_data` - The interface to local data.
+* `std::gc` - The garbage collector.
+* `std::unstable::lang` - Miscellaneous lang items, some of which rely on `std::rt`.
+* `std::condition` - Uses local data.
+* `std::cleanup` - Local heap destruction.
+* `std::io` - In the future `std::io` will use an `rt` implementation.
+* `std::logging`
+* `std::pipes`
+* `std::comm`
+* `std::stackwalk`
*/
/// scheduler and task context
pub mod tube;
-/// Simple reimplementation of core::comm
+/// Simple reimplementation of std::comm
pub mod comm;
mod select;
return exit_code;
}
+#[cfg(stage0)]
+mod macro_hack {
+#[macro_escape];
+macro_rules! externfn(
+ (fn $name:ident ($($arg_name:ident : $arg_ty:ty),*) $(-> $ret_ty:ty),*) => (
+ extern {
+ fn $name($($arg_name : $arg_ty),*) $(-> $ret_ty),*;
+ }
+ )
+)
+}
+
/// One-time runtime initialization.
///
/// Initializes global state, including frobbing
rust_update_gc_metadata(crate_map);
}
- extern {
- fn rust_update_gc_metadata(crate_map: *u8);
- }
+ externfn!(fn rust_update_gc_metadata(crate_map: *u8));
}
/// One-time runtime cleanup.
pub type RtioTcpListenerObject = uvio::UvTcpListener;
pub type RtioUdpSocketObject = uvio::UvUdpSocket;
pub type RtioTimerObject = uvio::UvTimer;
+pub type PausibleIdleCallback = uvio::UvPausibleIdleCallback;
pub trait EventLoop {
fn run(&mut self);
fn callback(&mut self, ~fn());
+ fn pausible_idle_callback(&mut self) -> ~PausibleIdleCallback;
fn callback_ms(&mut self, ms: u64, ~fn());
fn remote_callback(&mut self, ~fn()) -> ~RemoteCallbackObject;
/// The asynchronous I/O services. Not all event loops may provide one
}
pub trait RemoteCallback {
- /// Trigger the remote callback. Note that the number of times the callback
- /// is run is not guaranteed. All that is guaranteed is that, after calling 'fire',
- /// the callback will be called at least once, but multiple callbacks may be coalesced
- /// and callbacks may be called more often requested. Destruction also triggers the
- /// callback.
+ /// Trigger the remote callback. Note that the number of times the
+ /// callback is run is not guaranteed. All that is guaranteed is
+ /// that, after calling 'fire', the callback will be called at
+ /// least once, but multiple callbacks may be coalesced and
+ /// callbacks may be called more often than requested. Destruction also
+ /// triggers the callback.
fn fire(&mut self);
}
}
pub trait RtioTimer {
- fn sleep(&self, msecs: u64);
+ fn sleep(&mut self, msecs: u64);
}
use rt::kill::BlockedTask;
use rt::local_ptr;
use rt::local::Local;
-use rt::rtio::RemoteCallback;
+use rt::rtio::{RemoteCallback, PausibleIdleCallback};
use rt::metrics::SchedMetrics;
use borrow::{to_uint};
use cell::Cell;
use iterator::{range};
use vec::{OwnedVector};
-/// The Scheduler is responsible for coordinating execution of Coroutines
-/// on a single thread. When the scheduler is running it is owned by
-/// thread local storage and the running task is owned by the
-/// scheduler.
+/// A scheduler is responsible for coordinating the execution of Tasks
+/// on a single thread. The scheduler runs inside a slightly modified
+/// Rust Task. When not running, this task is stored in the scheduler
+/// struct. The scheduler struct acts like a baton; all scheduling
+/// actions are transfers of the baton.
///
/// XXX: This creates too many callbacks to run_sched_once, resulting
/// in too much allocation and too many events.
stack_pool: StackPool,
/// The event loop used to drive the scheduler and perform I/O
event_loop: ~EventLoopObject,
- /// The scheduler runs on a special task.
+ /// The scheduler runs on a special task. When it is not running
+ /// it is stored here instead of the work queue.
sched_task: Option<~Task>,
/// An action performed after a context switch on behalf of the
/// code running before the context switch
- priv cleanup_job: Option<CleanupJob>,
+ cleanup_job: Option<CleanupJob>,
metrics: SchedMetrics,
/// Should this scheduler run any task, or only pinned tasks?
run_anything: bool,
/// them to.
friend_handle: Option<SchedHandle>,
/// A fast XorShift rng for scheduler use
- rng: XorShiftRng
-
-}
-
-pub struct SchedHandle {
- priv remote: ~RemoteCallbackObject,
- priv queue: MessageQueue<SchedMessage>,
- sched_id: uint
-}
-
-pub enum SchedMessage {
- Wake,
- Shutdown,
- PinnedTask(~Task),
- TaskFromFriend(~Task)
-}
-
-enum CleanupJob {
- DoNothing,
- GiveTask(~Task, UnsafeTaskReceiver)
+ rng: XorShiftRng,
+ /// A toggleable idle callback
+ idle_callback: Option<~PausibleIdleCallback>
}
impl Scheduler {
- pub fn sched_id(&self) -> uint { to_uint(self) }
+ // * Initialization Functions
pub fn new(event_loop: ~EventLoopObject,
work_queue: WorkQueue<~Task>,
}
- // When you create a scheduler it isn't yet "in" a task, so the
- // task field is None.
pub fn new_special(event_loop: ~EventLoopObject,
work_queue: WorkQueue<~Task>,
work_queues: ~[WorkQueue<~Task>],
metrics: SchedMetrics::new(),
run_anything: run_anything,
friend_handle: friend,
- rng: XorShiftRng::new()
+ rng: XorShiftRng::new(),
+ idle_callback: None
}
}
// scheduler task and bootstrap into it.
pub fn bootstrap(~self, task: ~Task) {
+ let mut this = self;
+
+ // Build an Idle callback.
+ this.idle_callback = Some(this.event_loop.pausible_idle_callback());
+
// Initialize the TLS key.
local_ptr::init_tls_key();
// task, put it in TLS.
Local::put::(sched_task);
+ // Before starting our first task, make sure the idle callback
+ // is active. As we do not start in the sleep state this is
+ // important.
+ this.idle_callback.get_mut_ref().start(Scheduler::run_sched_once);
+
// Now, as far as all the scheduler state is concerned, we are
// inside the "scheduler" context. So we can act like the
// scheduler and resume the provided task.
- self.resume_task_immediately(task);
+ this.resume_task_immediately(task);
// Now we are back in the scheduler context, having
// successfully run the input task. Start by running the
let sched = Local::take::<Scheduler>();
rtdebug!("starting scheduler %u", sched.sched_id());
+ sched.run();
+ // Close the idle callback.
+ let mut sched = Local::take::<Scheduler>();
+ sched.idle_callback.get_mut_ref().close();
+ // Make one go through the loop to run the close callback.
sched.run();
// Now that we are done with the scheduler, clean up the
let mut self_sched = self;
- // Always run through the scheduler loop at least once so that
- // we enter the sleep state and can then be woken up by other
- // schedulers.
- self_sched.event_loop.callback(Scheduler::run_sched_once);
-
// This is unsafe because we need to place the scheduler, with
// the event_loop inside, inside our task. But we still need a
// mutable reference to the event_loop to give it the "run"
}
}
- // One iteration of the scheduler loop, always run at least once.
+ // * Execution Functions - Core Loop Logic
// The model for this function is that you continue through it
// until you either use the scheduler while performing a schedule
- // action, in which case you give it away and do not return, or
+ // action, in which case you give it away and return early, or
// you reach the end and sleep. In the case that a scheduler
// action is performed the loop is evented such that this function
// is called again.
// already have a scheduler stored in our local task, so we
// start off by taking it. This is the only path through the
// scheduler where we get the scheduler this way.
- let sched = Local::take::<Scheduler>();
+ let mut sched = Local::take::<Scheduler>();
- // Our first task is to read mail to see if we have important
- // messages.
-
- // 1) A wake message is easy, mutate sched struct and return
- // it.
- // 2) A shutdown is also easy, shutdown.
- // 3) A pinned task - we resume immediately and do not return
- // here.
- // 4) A message from another scheduler with a non-homed task
- // to run here.
-
- let result = sched.interpret_message_queue();
- let sched = match result {
- Some(sched) => {
- // We did not resume a task, so we returned.
- sched
- }
- None => {
- return;
- }
- };
+ // Assume that we need to continue idling unless we reach the
+ // end of this function without performing an action.
+ sched.idle_callback.get_mut_ref().resume();
- // Second activity is to try resuming a task from the queue.
+ // First we check for scheduler messages; these take priority
+ // over regular tasks.
+ let sched = match sched.interpret_message_queue() {
+ Some(sched) => sched,
+ None => return
+ };
- let result = sched.do_work();
- let mut sched = match result {
- Some(sched) => {
- // Failed to dequeue a task, so we return.
- sched
- }
- None => {
- return;
- }
+ // This helper will use a randomized work-stealing algorithm
+ // to find work.
+ let mut sched = match sched.do_work() {
+ Some(sched) => sched,
+ None => return
};
// If we got here then there was no work to do.
sched.sleepy = true;
let handle = sched.make_handle();
sched.sleeper_list.push(handle);
+ // Since we are sleeping, deactivate the idle callback.
+ sched.idle_callback.get_mut_ref().pause();
} else {
rtdebug!("not sleeping, already doing so or no_sleep set");
+ // We may not be sleeping, but we still need to deactivate
+ // the idle callback.
+ sched.idle_callback.get_mut_ref().pause();
}
// Finished a cycle without using the Scheduler. Place it back
Local::put(sched);
}
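The run-until-used-or-sleep model described at the top of `run_sched_once` can be sketched in modern Rust. The stub type and closures below are hypothetical stand-ins for the real phases (`interpret_message_queue` and `do_work`); each phase either consumes the scheduler, meaning an action was performed and the function must return early, or hands it back so the next phase can try.

```rust
// Minimal sketch (modern Rust, stub types) of the scheduler turn.
// `Some(sched)` means "no action taken, scheduler handed back";
// `None` means the phase consumed the scheduler and we return early.
struct Sched { sleepy: bool }

fn run_once(
    sched: Sched,
    messages: impl FnOnce(Sched) -> Option<Sched>,
    work: impl FnOnce(Sched) -> Option<Sched>,
) -> Option<Sched> {
    // Messages are checked first, then the work queue.
    let sched = messages(sched)?;
    let mut sched = work(sched)?;
    // Reached the end without performing an action: go to sleep.
    sched.sleepy = true;
    Some(sched)
}
```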
- pub fn make_handle(&mut self) -> SchedHandle {
- let remote = self.event_loop.remote_callback(Scheduler::run_sched_once);
-
- return SchedHandle {
- remote: remote,
- queue: self.message_queue.clone(),
- sched_id: self.sched_id()
- };
- }
-
- /// Schedule a task to be executed later.
- ///
- /// Pushes the task onto the work stealing queue and tells the
- /// event loop to run it later. Always use this instead of pushing
- /// to the work queue directly.
- pub fn enqueue_task(&mut self, task: ~Task) {
-
- let this = self;
-
- // We push the task onto our local queue clone.
- this.work_queue.push(task);
- this.event_loop.callback(Scheduler::run_sched_once);
-
- // We've made work available. Notify a
- // sleeping scheduler.
-
- // XXX: perf. Check for a sleeper without
- // synchronizing memory. It's not critical
- // that we always find it.
-
- // XXX: perf. If there's a sleeper then we
- // might as well just send it the task
- // directly instead of pushing it to the
- // queue. That is essentially the intent here
- // and it is less work.
- match this.sleeper_list.pop() {
- Some(handle) => {
- let mut handle = handle;
- handle.send(Wake)
- }
- None => { (/* pass */) }
- };
- }
-
- /// As enqueue_task, but with the possibility for the blocked task to
- /// already have been killed.
- pub fn enqueue_blocked_task(&mut self, blocked_task: BlockedTask) {
- do blocked_task.wake().map_move |task| {
- self.enqueue_task(task);
- };
- }
-
- // * Scheduler-context operations
-
// This function returns None if the scheduler is "used", or it
- // returns the still-available scheduler.
+ // returns the still-available scheduler. At this point, handling
+ // any message counts as a turn of work, and therefore returns None.
fn interpret_message_queue(~self) -> Option<~Scheduler> {
let mut this = self;
match this.message_queue.pop() {
Some(PinnedTask(task)) => {
- this.event_loop.callback(Scheduler::run_sched_once);
let mut task = task;
task.give_home(Sched(this.make_handle()));
this.resume_task_immediately(task);
return None;
}
Some(TaskFromFriend(task)) => {
- this.event_loop.callback(Scheduler::run_sched_once);
rtdebug!("got a task from a friend. lovely!");
- return this.sched_schedule_task(task);
+ this.process_task(task,
+ Scheduler::resume_task_immediately_cl).map_move(Local::put);
+ return None;
}
Some(Wake) => {
- this.event_loop.callback(Scheduler::run_sched_once);
this.sleepy = false;
- return Some(this);
+ Local::put(this);
+ return None;
}
Some(Shutdown) => {
- this.event_loop.callback(Scheduler::run_sched_once);
+ rtdebug!("shutting down");
if this.sleepy {
// There may be an outstanding handle on the
// sleeper list. Pop them all to make sure that's
// event loop references we will shut down.
this.no_sleep = true;
this.sleepy = false;
- // YYY: Does a shutdown count as a "use" of the
- // scheduler? This seems to work - so I'm leaving it
- // this way despite not having a solid rational for
- // why I should return the scheduler here.
- return Some(this);
+ Local::put(this);
+ return None;
}
None => {
return Some(this);
}
}
- /// Given an input Coroutine sends it back to its home scheduler.
- fn send_task_home(task: ~Task) {
- let mut task = task;
- let mut home = task.take_unwrap_home();
- match home {
- Sched(ref mut home_handle) => {
- home_handle.send(PinnedTask(task));
- }
- AnySched => {
- rtabort!("error: cannot send anysched task home");
- }
- }
- }
+ fn do_work(~self) -> Option<~Scheduler> {
+ let mut this = self;
- /// Take a non-homed task we aren't allowed to run here and send
- /// it to the designated friend scheduler to execute.
- fn send_to_friend(&mut self, task: ~Task) {
- rtdebug!("sending a task to friend");
- match self.friend_handle {
- Some(ref mut handle) => {
- handle.send(TaskFromFriend(task));
+ rtdebug!("scheduler calling do work");
+ match this.find_work() {
+ Some(task) => {
+ rtdebug!("found some work! processing the task");
+ return this.process_task(task,
+ Scheduler::resume_task_immediately_cl);
}
None => {
- rtabort!("tried to send task to a friend but scheduler has no friends");
+ rtdebug!("no work was found, returning the scheduler struct");
+ return Some(this);
}
}
}
None => {
// Our naive stealing, try kinda hard.
rtdebug!("scheduler trying to steal");
- let _len = self.work_queues.len();
- return self.try_steals(2);
+ let len = self.work_queues.len();
+ return self.try_steals(len/2);
}
}
}
let work_queues = &mut self.work_queues;
match work_queues[index].steal() {
Some(task) => {
- rtdebug!("found task by stealing"); return Some(task)
+ rtdebug!("found task by stealing");
+ return Some(task)
}
None => ()
}
return None;
}
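The stealing change above sizes the number of attempts from the number of peer queues (`len / 2`) rather than a fixed constant. A rough modern-Rust sketch of bounded stealing, with `VecDeque` standing in for the real work-stealing queue and a rotating index standing in for the randomized choice:

```rust
use std::collections::VecDeque;

// Sketch: make a bounded number of steal attempts across peer queues,
// returning the first task found.
fn try_steals(queues: &mut [VecDeque<u32>], attempts: usize) -> Option<u32> {
    for i in 0..attempts {
        let idx = i % queues.len(); // stand-in for a random victim choice
        if let Some(task) = queues[idx].pop_front() {
            return Some(task);
        }
    }
    None // give up after the attempt budget is spent
}
```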
- // Given a task, execute it correctly.
- fn process_task(~self, task: ~Task) -> Option<~Scheduler> {
+ // * Task Routing Functions - Make sure tasks end up in the right
+ // place.
+
+ fn process_task(~self, task: ~Task,
+ schedule_fn: SchedulingFn) -> Option<~Scheduler> {
let mut this = self;
let mut task = task;
} else {
rtdebug!("running task here");
task.give_home(Sched(home_handle));
- this.resume_task_immediately(task);
- return None;
+ return schedule_fn(this, task);
}
}
AnySched if this.run_anything => {
rtdebug!("running anysched task here");
task.give_home(AnySched);
- this.resume_task_immediately(task);
- return None;
+ return schedule_fn(this, task);
}
AnySched => {
rtdebug!("sending task to friend");
}
}
- // Bundle the helpers together.
- fn do_work(~self) -> Option<~Scheduler> {
- let mut this = self;
-
- rtdebug!("scheduler calling do work");
- match this.find_work() {
- Some(task) => {
- rtdebug!("found some work! processing the task");
- return this.process_task(task);
+ fn send_task_home(task: ~Task) {
+ let mut task = task;
+ let mut home = task.take_unwrap_home();
+ match home {
+ Sched(ref mut home_handle) => {
+ home_handle.send(PinnedTask(task));
}
- None => {
- rtdebug!("no work was found, returning the scheduler struct");
- return Some(this);
+ AnySched => {
+ rtabort!("error: cannot send anysched task home");
}
}
}
- /// Called by a running task to end execution, after which it will
- /// be recycled by the scheduler for reuse in a new task.
- pub fn terminate_current_task(~self) {
- // Similar to deschedule running task and then, but cannot go through
- // the task-blocking path. The task is already dying.
- let mut this = self;
- let stask = this.sched_task.take_unwrap();
- do this.change_task_context(stask) |sched, mut dead_task| {
- let coroutine = dead_task.coroutine.take_unwrap();
- coroutine.recycle(&mut sched.stack_pool);
+ /// Take a non-homed task we aren't allowed to run here and send
+ /// it to the designated friend scheduler to execute.
+ fn send_to_friend(&mut self, task: ~Task) {
+ rtdebug!("sending a task to friend");
+ match self.friend_handle {
+ Some(ref mut handle) => {
+ handle.send(TaskFromFriend(task));
+ }
+ None => {
+ rtabort!("tried to send task to a friend but scheduler has no friends");
+ }
}
}
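The routing performed by `process_task`, `send_task_home`, and `send_to_friend` boils down to a small decision table over the task's home and the scheduler's `run_anything` flag. A sketch of that table in modern Rust (the enum and ids are illustrative, not the real types):

```rust
// Run locally when permitted, otherwise forward the task to its home
// scheduler or to the designated friend scheduler.
#[derive(Debug, PartialEq)]
enum Home { AnySched, Sched(u32) }

#[derive(Debug, PartialEq)]
enum Route { RunHere, SendHome, SendToFriend }

fn route(home: &Home, my_id: u32, run_anything: bool) -> Route {
    match home {
        Home::Sched(id) if *id == my_id => Route::RunHere,
        Home::Sched(_) => Route::SendHome,
        Home::AnySched if run_anything => Route::RunHere,
        Home::AnySched => Route::SendToFriend,
    }
}
```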
- // Scheduling a task requires a few checks to make sure the task
- // ends up in the appropriate location. The run_anything flag on
- // the scheduler and the home on the task need to be checked. This
- // helper performs that check. It takes a function that specifies
- // how to queue the the provided task if that is the correct
- // action. This is a "core" function that requires handling the
- // returned Option correctly.
-
- pub fn schedule_task(~self, task: ~Task,
- schedule_fn: ~fn(sched: ~Scheduler, task: ~Task))
- -> Option<~Scheduler> {
-
- // is the task home?
- let is_home = task.is_home_no_tls(&self);
+ /// Schedule a task to be executed later.
+ ///
+ /// Pushes the task onto the work stealing queue and tells the
+ /// event loop to run it later. Always use this instead of pushing
+ /// to the work queue directly.
+ pub fn enqueue_task(&mut self, task: ~Task) {
- // does the task have a home?
- let homed = task.homed();
+ let this = self;
- let mut this = self;
+ // We push the task onto our local queue clone.
+ this.work_queue.push(task);
+ this.idle_callback.get_mut_ref().resume();
- if is_home || (!homed && this.run_anything) {
- // here we know we are home, execute now OR we know we
- // aren't homed, and that this sched doesn't care
- rtdebug!("task: %u is on ok sched, executing", to_uint(task));
- schedule_fn(this, task);
- return None;
- } else if !homed && !this.run_anything {
- // the task isn't homed, but it can't be run here
- this.send_to_friend(task);
- return Some(this);
- } else {
- // task isn't home, so don't run it here, send it home
- Scheduler::send_task_home(task);
- return Some(this);
- }
- }
+ // We've made work available. Notify a
+ // sleeping scheduler.
- // There are two contexts in which schedule_task can be called:
- // inside the scheduler, and inside a task. These contexts handle
- // executing the task slightly differently. In the scheduler
- // context case we want to receive the scheduler as an input, and
- // manually deal with the option. In the task context case we want
- // to use TLS to find the scheduler, and deal with the option
- // inside the helper.
-
- pub fn sched_schedule_task(~self, task: ~Task) -> Option<~Scheduler> {
- do self.schedule_task(task) |sched, next_task| {
- sched.resume_task_immediately(next_task);
- }
+ // XXX: perf. Check for a sleeper without
+ // synchronizing memory. It's not critical
+ // that we always find it.
+ match this.sleeper_list.pop() {
+ Some(handle) => {
+ let mut handle = handle;
+ handle.send(Wake)
+ }
+ None => { (/* pass */) }
+ };
}
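The enqueue-then-wake pattern in `enqueue_task` can be reduced to two steps: publish the work, then pop at most one handle from the sleeper list so a single sleeping scheduler is woken. A simplified sketch, with plain values standing in for tasks and `SchedHandle`s:

```rust
use std::collections::VecDeque;

// Sketch: after making work available, wake at most one sleeper.
fn enqueue(work: &mut VecDeque<u32>, sleepers: &mut Vec<&'static str>,
           task: u32) -> Option<&'static str> {
    work.push_back(task);
    sleepers.pop() // the handle to send `Wake` to, if anyone is asleep
}
```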
- // Task context case - use TLS.
- pub fn run_task(task: ~Task) {
- let sched = Local::take::<Scheduler>();
- let opt = do sched.schedule_task(task) |sched, next_task| {
- do sched.switch_running_tasks_and_then(next_task) |sched, last_task| {
- sched.enqueue_blocked_task(last_task);
- }
+ /// As enqueue_task, but with the possibility for the blocked task to
+ /// already have been killed.
+ pub fn enqueue_blocked_task(&mut self, blocked_task: BlockedTask) {
+ do blocked_task.wake().map_move |task| {
+ self.enqueue_task(task);
};
- opt.map_move(Local::put);
}
+ // * Core Context Switching Functions
+
// The primary function for changing contexts. In the current
// design the scheduler is just a slightly modified GreenTask, so
// all context swaps are from Task to Task. The only difference
// The current task is placed inside an enum with the cleanup
// function. This enum is then placed inside the scheduler.
- this.enqueue_cleanup_job(GiveTask(current_task, f_opaque));
+ this.cleanup_job = Some(CleanupJob::new(current_task, f_opaque));
// The scheduler is then placed inside the next task.
let mut next_task = next_task;
transmute_mut_region(*next_task.sched.get_mut_ref());
let current_task: &mut Task = match sched.cleanup_job {
- Some(GiveTask(ref task, _)) => {
+ Some(CleanupJob { task: ref task, _ }) => {
transmute_mut_region(*transmute_mut_unsafe(task))
}
- Some(DoNothing) => {
- rtabort!("no next task");
- }
None => {
rtabort!("no cleanup job");
}
}
}
- // Old API for task manipulation implemented over the new core
- // function.
+ // Returns a mutable reference to both contexts involved in this
+ // swap. This is unsafe - we are getting mutable internal
+ // references to keep even when we don't own the tasks. It looks
+ // kinda safe because we are doing transmutes before passing in
+ // the arguments.
+ pub fn get_contexts<'a>(current_task: &mut Task, next_task: &mut Task) ->
+ (&'a mut Context, &'a mut Context) {
+ let current_task_context =
+ &mut current_task.coroutine.get_mut_ref().saved_context;
+ let next_task_context =
+ &mut next_task.coroutine.get_mut_ref().saved_context;
+ unsafe {
+ (transmute_mut_region(current_task_context),
+ transmute_mut_region(next_task_context))
+ }
+ }
+
+ // * Context Swapping Helpers - Here be ugliness!
- pub fn resume_task_immediately(~self, task: ~Task) {
+ pub fn resume_task_immediately(~self, task: ~Task) -> Option<~Scheduler> {
do self.change_task_context(task) |sched, stask| {
sched.sched_task = Some(stask);
}
+ return None;
}
+ fn resume_task_immediately_cl(sched: ~Scheduler,
+ task: ~Task) -> Option<~Scheduler> {
+ sched.resume_task_immediately(task)
+ }
+
+
pub fn resume_blocked_task_immediately(~self, blocked_task: BlockedTask) {
match blocked_task.wake() {
- Some(task) => self.resume_task_immediately(task),
- None => Local::put(self),
+ Some(task) => { self.resume_task_immediately(task); }
+ None => Local::put(self)
};
}
}
}
- // A helper that looks up the scheduler and runs a task later by
- // enqueuing it.
+ fn switch_task(sched: ~Scheduler, task: ~Task) -> Option<~Scheduler> {
+ do sched.switch_running_tasks_and_then(task) |sched, last_task| {
+ sched.enqueue_blocked_task(last_task);
+ };
+ return None;
+ }
+
+ // * Task Context Helpers
+
+ /// Called by a running task to end execution, after which it will
+ /// be recycled by the scheduler for reuse in a new task.
+ pub fn terminate_current_task(~self) {
+ // Similar to deschedule running task and then, but cannot go through
+ // the task-blocking path. The task is already dying.
+ let mut this = self;
+ let stask = this.sched_task.take_unwrap();
+ do this.change_task_context(stask) |sched, mut dead_task| {
+ let coroutine = dead_task.coroutine.take_unwrap();
+ coroutine.recycle(&mut sched.stack_pool);
+ }
+ }
+
+ pub fn run_task(task: ~Task) {
+ let sched = Local::take::<Scheduler>();
+ sched.process_task(task, Scheduler::switch_task).map_move(Local::put);
+ }
+
pub fn run_task_later(next_task: ~Task) {
- // We aren't performing a scheduler operation, so we want to
- // put the Scheduler back when we finish.
let next_task = Cell::new(next_task);
do Local::borrow::<Scheduler,()> |sched| {
sched.enqueue_task(next_task.take());
};
}
- // Returns a mutable reference to both contexts involved in this
- // swap. This is unsafe - we are getting mutable internal
- // references to keep even when we don't own the tasks. It looks
- // kinda safe because we are doing transmutes before passing in
- // the arguments.
- pub fn get_contexts<'a>(current_task: &mut Task, next_task: &mut Task) ->
- (&'a mut Context, &'a mut Context) {
- let current_task_context =
- &mut current_task.coroutine.get_mut_ref().saved_context;
- let next_task_context =
- &mut next_task.coroutine.get_mut_ref().saved_context;
- unsafe {
- (transmute_mut_region(current_task_context),
- transmute_mut_region(next_task_context))
- }
- }
+ // * Utility Functions
- pub fn enqueue_cleanup_job(&mut self, job: CleanupJob) {
- self.cleanup_job = Some(job);
- }
+ pub fn sched_id(&self) -> uint { to_uint(self) }
pub fn run_cleanup_job(&mut self) {
- rtdebug!("running cleanup job");
let cleanup_job = self.cleanup_job.take_unwrap();
- match cleanup_job {
- DoNothing => { }
- GiveTask(task, f) => f.to_fn()(self, task)
- }
+ cleanup_job.run(self);
+ }
+
+ pub fn make_handle(&mut self) -> SchedHandle {
+ let remote = self.event_loop.remote_callback(Scheduler::run_sched_once);
+
+ return SchedHandle {
+ remote: remote,
+ queue: self.message_queue.clone(),
+ sched_id: self.sched_id()
+ };
}
}
-// The cases for the below function.
-enum ResumeAction {
- SendHome,
- Requeue,
- ResumeNow,
- Homeless
+// Supporting types
+
+type SchedulingFn = ~fn(~Scheduler, ~Task) -> Option<~Scheduler>;
+
+pub enum SchedMessage {
+ Wake,
+ Shutdown,
+ PinnedTask(~Task),
+ TaskFromFriend(~Task)
+}
+
+pub struct SchedHandle {
+ priv remote: ~RemoteCallbackObject,
+ priv queue: MessageQueue<SchedMessage>,
+ sched_id: uint
}
impl SchedHandle {
}
}
+struct CleanupJob {
+ task: ~Task,
+ f: UnsafeTaskReceiver
+}
+
+impl CleanupJob {
+ pub fn new(task: ~Task, f: UnsafeTaskReceiver) -> CleanupJob {
+ CleanupJob {
+ task: task,
+ f: f
+ }
+ }
+
+ pub fn run(self, sched: &mut Scheduler) {
+ let CleanupJob { task: task, f: f } = self;
+ f.to_fn()(sched, task)
+ }
+}
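The `CleanupJob` refactor replaces the old `GiveTask`/`DoNothing` enum with a struct pairing a task and the closure that consumes it. A modern-Rust sketch of the same shape (generic over the task type, which is an illustrative simplification):

```rust
// Pair a value with the closure that will consume it, and run the pair
// exactly once. `run` takes `self` by value, so a job cannot run twice.
struct CleanupJob<T> {
    task: T,
    f: Box<dyn FnOnce(T) -> T>,
}

impl<T> CleanupJob<T> {
    fn new(task: T, f: Box<dyn FnOnce(T) -> T>) -> CleanupJob<T> {
        CleanupJob { task, f }
    }

    fn run(self) -> T {
        (self.f)(self.task)
    }
}
```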
+
// XXX: Some hacks to put a &fn in Scheduler without borrowck
// complaining
type UnsafeTaskReceiver = raw::Closure;
use cell::Cell;
use rt::thread::Thread;
use rt::task::{Task, Sched};
+ use rt::util;
use option::{Some};
#[test]
#[test]
fn test_stress_schedule_task_states() {
+ if util::limit_thread_creation_due_to_osx_and_valgrind() { return; }
let n = stress_factor() * 120;
for _ in range(0, n as int) {
test_schedule_home_states();
}
}
+ // A regression test that the final message is always handled.
+ // It used to deadlock because the Shutdown message was never received.
+ #[test]
+ fn no_missed_messages() {
+ use rt::work_queue::WorkQueue;
+ use rt::sleeper_list::SleeperList;
+ use rt::stack::StackPool;
+ use rt::uv::uvio::UvEventLoop;
+ use rt::sched::{Shutdown, TaskFromFriend};
+ use util;
+
+ do run_in_bare_thread {
+ do stress_factor().times {
+ let sleepers = SleeperList::new();
+ let queue = WorkQueue::new();
+ let queues = ~[queue.clone()];
+
+ let mut sched = ~Scheduler::new(
+ ~UvEventLoop::new(),
+ queue,
+ queues.clone(),
+ sleepers.clone());
+
+ let mut handle = sched.make_handle();
+
+ let sched = Cell::new(sched);
+
+ let thread = do Thread::start {
+ let mut sched = sched.take();
+ let bootstrap_task = ~Task::new_root(&mut sched.stack_pool, None, ||());
+ sched.bootstrap(bootstrap_task);
+ };
+
+ let mut stack_pool = StackPool::new();
+ let task = ~Task::new_root(&mut stack_pool, None, ||());
+ handle.send(TaskFromFriend(task));
+
+ handle.send(Shutdown);
+ util::ignore(handle);
+
+ thread.join();
+ }
+ }
+ }
+
#[test]
fn multithreading() {
use rt::comm::*;
impl StackSegment {
pub fn new(size: uint) -> StackSegment {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
// Crate a block of uninitialized values
let mut stack = vec::with_capacity(size);
impl Drop for StackSegment {
fn drop(&self) {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
// XXX: Using the FFI to call a C macro. Slow
rust_valgrind_stack_deregister(self.valgrind_id);
use ptr;
use prelude::*;
use option::{Option, Some, None};
+use rt::borrowck;
+use rt::borrowck::BorrowRecord;
use rt::env;
use rt::kill::Death;
use rt::local::Local;
name: Option<~str>,
coroutine: Option<Coroutine>,
sched: Option<~Scheduler>,
- task_type: TaskType
+ task_type: TaskType,
+ // Dynamic borrowck debugging info
+ borrow_list: Option<~[BorrowRecord]>
}
pub enum TaskType {
saved_context: Context
}
-/// Some tasks have a deciated home scheduler that they must run on.
+/// Some tasks have a dedicated home scheduler that they must run on.
pub enum SchedHome {
AnySched,
Sched(SchedHandle)
coroutine: Some(Coroutine::empty()),
name: None,
sched: None,
- task_type: SchedTask
+ task_type: SchedTask,
+ borrow_list: None
}
}
name: None,
coroutine: Some(Coroutine::new(stack_pool, stack_size, start)),
sched: None,
- task_type: GreenTask(Some(~home))
+ task_type: GreenTask(Some(~home)),
+ borrow_list: None
}
}
name: None,
coroutine: Some(Coroutine::new(stack_pool, stack_size, start)),
sched: None,
- task_type: GreenTask(Some(~home))
+ task_type: GreenTask(Some(~home)),
+ borrow_list: None
}
}
}
}
+ // Cleanup the dynamic borrowck debugging info
+ borrowck::clear_task_borrow_list();
+
// NB. We pass the taskgroup into death so that it can be dropped while
// the unkillable counter is set. This is necessary for when the
// taskgroup destruction code drops references on KillHandles, which
// Again - might work while safe, or it might not.
do Local::borrow::<Scheduler,()> |sched| {
- (sched).run_cleanup_job();
+ sched.run_cleanup_job();
}
// To call the run method on a task we need a direct
}
pub fn begin_unwind(&mut self) -> ! {
+ #[fixed_stack_segment]; #[inline(never)];
+
self.unwinding = true;
unsafe {
rust_begin_unwind(UNWIND_TOKEN);
}
}
}
-
use super::io::net::ip::{SocketAddr, Ipv4Addr, Ipv6Addr};
use vec::{OwnedVector, MutableVector, ImmutableVector};
use rt::sched::Scheduler;
-use unstable::run_in_bare_thread;
+use unstable::{run_in_bare_thread};
use rt::thread::Thread;
use rt::task::Task;
use rt::uv::uvio::UvEventLoop;
static RLIMIT_NOFILE: libc::c_int = 8;
pub unsafe fn raise_fd_limit() {
+ #[fixed_stack_segment]; #[inline(never)];
+
// The strategy here is to fetch the current resource limits, read the kern.maxfilesperproc
// sysctl value, and bump the soft resource limit for maxfiles up to the sysctl value.
use ptr::{to_unsafe_ptr, to_mut_unsafe_ptr, mut_null};
let nthreads = match os::getenv("RUST_RT_TEST_THREADS") {
Some(nstr) => FromStr::from_str(nstr).unwrap(),
None => {
- // Using more threads than cores in test code
- // to force the OS to preempt them frequently.
- // Assuming that this help stress test concurrent types.
- util::num_cpus() * 2
+ if util::limit_thread_creation_due_to_osx_and_valgrind() {
+ 1
+ } else {
+ // Using more threads than cores in test code
+ // to force the OS to preempt them frequently.
+ // Assuming that this helps stress-test concurrent types.
+ util::num_cpus() * 2
+ }
}
};
}
/// Get a port number, starting at 9600, for use in tests
+#[fixed_stack_segment] #[inline(never)]
pub fn next_test_port() -> u16 {
unsafe {
return rust_dbg_next_port(base_port() as libc::uintptr_t) as u16;
impl Thread {
pub fn start(main: ~fn()) -> Thread {
fn substart(main: &~fn()) -> *raw_thread {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe { rust_raw_thread_start(main) }
}
let raw = substart(&main);
}
pub fn join(self) {
+ #[fixed_stack_segment]; #[inline(never)];
+
assert!(!self.joined);
let mut this = self;
unsafe { rust_raw_thread_join(this.raw_thread); }
impl Drop for Thread {
fn drop(&self) {
+ #[fixed_stack_segment]; #[inline(never)];
+
assert!(self.joined);
unsafe { rust_raw_thread_delete(self.raw_thread) }
}
pub type Key = pthread_key_t;
#[cfg(unix)]
+#[fixed_stack_segment]
+#[inline(never)]
pub unsafe fn create(key: &mut Key) {
assert_eq!(0, pthread_key_create(key, null()));
}
#[cfg(unix)]
+#[fixed_stack_segment]
+#[inline(never)]
pub unsafe fn set(key: Key, value: *mut c_void) {
assert_eq!(0, pthread_setspecific(key, value));
}
#[cfg(unix)]
+#[fixed_stack_segment]
+#[inline(never)]
pub unsafe fn get(key: Key) -> *mut c_void {
pthread_getspecific(key)
}
pub type Key = DWORD;
#[cfg(windows)]
+#[fixed_stack_segment]
+#[inline(never)]
pub unsafe fn create(key: &mut Key) {
static TLS_OUT_OF_INDEXES: DWORD = 0xFFFFFFFF;
*key = TlsAlloc();
}
#[cfg(windows)]
+#[fixed_stack_segment]
+#[inline(never)]
pub unsafe fn set(key: Key, value: *mut c_void) {
assert!(0 != TlsSetValue(key, value))
}
#[cfg(windows)]
+#[fixed_stack_segment]
+#[inline(never)]
pub unsafe fn get(key: Key) -> *mut c_void {
TlsGetValue(key)
}
use os;
use str::StrSlice;
+#[cfg(target_os="macos")]
+use unstable::running_on_valgrind;
+
/// Get the number of cores available
pub fn num_cpus() -> uint {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
return rust_get_num_cpus();
}
}
}
+/// Valgrind has a fixed-sized array (size around 2000) of segment descriptors wired into it; this
+/// is a hard limit and requires rebuilding valgrind if you want to go beyond it. Normally this is
+/// not a problem, but in some tests, we produce a lot of threads casually. Making lots of threads
+/// alone might not be a problem _either_, except on OSX, the segments produced for new threads
+/// _take a while_ to get reclaimed by the OS. Combined with the fact that libuv schedulers fork off
+/// a separate thread for polling fsevents on OSX, we get a perfect storm of creating "too many
+/// mappings" for valgrind to handle when running certain stress tests in the runtime.
+#[cfg(target_os="macos")]
+pub fn limit_thread_creation_due_to_osx_and_valgrind() -> bool {
+ running_on_valgrind()
+}
+
+#[cfg(not(target_os="macos"))]
+pub fn limit_thread_creation_due_to_osx_and_valgrind() -> bool {
+ false
+}
+
/// Get's the number of scheduler threads requested by the environment
/// either `RUST_THREADS` or `num_cpus`.
pub fn default_sched_threads() -> uint {
match os::getenv("RUST_THREADS") {
Some(nstr) => FromStr::from_str(nstr).unwrap(),
- None => num_cpus()
+ None => {
+ if limit_thread_creation_due_to_osx_and_valgrind() {
+ 1
+ } else {
+ num_cpus()
+ }
+ }
}
}
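The fallback logic in `default_sched_threads` is an environment-override pattern: honor `RUST_THREADS` when set (the real code panics via `unwrap()` on an unparseable value), otherwise use one thread under the valgrind/OSX limit or the CPU count. A sketch with plain parameters standing in for `os::getenv` and `num_cpus`:

```rust
// Hypothetical stand-in for default_sched_threads, with the environment
// lookup and CPU count passed in so the decision is easy to see.
fn default_threads(env: Option<&str>, limited: bool, num_cpus: usize) -> usize {
    match env {
        Some(s) => s.parse().unwrap(), // panics on garbage, as the real code does
        None => if limited { 1 } else { num_cpus },
    }
}
```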
pub fn dumb_println(s: &str) {
use io::WriterUtil;
let dbg = ::libc::STDERR_FILENO as ::io::fd_t;
- dbg.write_str(s);
- dbg.write_str("\n");
+ dbg.write_str(s + "\n");
}
pub fn abort(msg: &str) -> ! {
rterrln!("%s", "");
rterrln!("fatal runtime error: %s", msg);
- unsafe { libc::abort(); }
+ abort();
+
+ fn abort() -> ! {
+ #[fixed_stack_segment]; #[inline(never)];
+ unsafe { libc::abort() }
+ }
}
pub fn set_exit_status(code: int) {
-
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
return rust_set_exit_status_newrt(code as libc::uintptr_t);
}
}
pub fn get_exit_status() -> int {
-
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
return rust_get_exit_status_newrt() as int;
}
}
}
+ pub fn restart(&mut self) {
+ unsafe {
+ assert!(0 == uvll::idle_start(self.native_handle(), idle_cb))
+ };
+
+ extern fn idle_cb(handle: *uvll::uv_idle_t, status: c_int) {
+ let mut idle_watcher: IdleWatcher = NativeHandle::from_native_handle(handle);
+ let data = idle_watcher.get_watcher_data();
+ let cb: &IdleCallback = data.idle_cb.get_ref();
+ let status = status_to_maybe_uv_error(idle_watcher, status);
+ (*cb)(idle_watcher, status);
+ }
+ }
+
pub fn stop(&mut self) {
// NB: Not resetting the Rust idle_cb to None here because `stop` is
// likely called from *within* the idle callback, causing a use after
/*!
-Bindings to libuv, along with the default implementation of `core::rt::rtio`.
+Bindings to libuv, along with the default implementation of `std::rt::rtio`.
UV types consist of the event loop (Loop), Watchers, Requests and
Callbacks.
/// Transmute an owned vector to a Buf
pub fn vec_to_uv_buf(v: ~[u8]) -> Buf {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let data = malloc(v.len() as size_t) as *u8;
assert!(data.is_not_null());
/// Transmute a Buf that was once a ~[u8] back to ~[u8]
pub fn vec_from_uv_buf(buf: Buf) -> Option<~[u8]> {
+ #[fixed_stack_segment]; #[inline(never)];
+
if !(buf.len == 0 && buf.base.is_null()) {
let v = unsafe { vec::from_buf(buf.base, buf.len as uint) };
unsafe { free(buf.base as *c_void) };
extern fn close_cb(handle: *uvll::uv_stream_t) {
let mut stream_watcher: StreamWatcher = NativeHandle::from_native_handle(handle);
- stream_watcher.get_watcher_data().close_cb.take_unwrap()();
+ let cb = stream_watcher.get_watcher_data().close_cb.take_unwrap();
stream_watcher.drop_watcher_data();
unsafe { free_handle(handle as *c_void) }
+ cb();
}
}
}
extern fn close_cb(handle: *uvll::uv_udp_t) {
let mut udp_watcher: UdpWatcher = NativeHandle::from_native_handle(handle);
- udp_watcher.get_watcher_data().close_cb.take_unwrap()();
+ let cb = udp_watcher.get_watcher_data().close_cb.take_unwrap();
udp_watcher.drop_watcher_data();
unsafe { free_handle(handle as *c_void) }
+ cb();
}
}
}
use rt::io::{standard_error, OtherIoError};
use rt::local::Local;
use rt::rtio::*;
-use rt::sched::Scheduler;
+use rt::sched::{Scheduler, SchedHandle};
use rt::tube::Tube;
use rt::uv::*;
use rt::uv::idle::IdleWatcher;
run_in_newsched_task};
#[cfg(test)] use iterator::{Iterator, range};
+// XXX we should not be calling uvll functions in here.
+
+trait HomingIO {
+ fn home<'r>(&'r mut self) -> &'r mut SchedHandle;
+ /* XXX This will move pinned tasks to do IO on the proper scheduler
+ * and then move them back to their home.
+ */
+ fn home_for_io<A>(&mut self, io: &fn(&mut Self) -> A) -> A {
+ use rt::sched::{PinnedTask, TaskFromFriend};
+ // go home
+ let old_home = Cell::new_empty();
+ let old_home_ptr = &old_home;
+ let scheduler = Local::take::<Scheduler>();
+ do scheduler.deschedule_running_task_and_then |_, task| {
+ // get the old home first
+ do task.wake().map_move |mut task| {
+ old_home_ptr.put_back(task.take_unwrap_home());
+ self.home().send(PinnedTask(task));
+ };
+ }
+
+ // do IO
+ let a = io(self);
+
+ // restore the old home
+ let scheduler = Local::take::<Scheduler>();
+ do scheduler.deschedule_running_task_and_then |scheduler, task| {
+ do task.wake().map_move |mut task| {
+ task.give_home(old_home.take());
+ scheduler.make_handle().send(TaskFromFriend(task));
+ };
+ }
+
+ // return the result of the IO
+ a
+ }
+}
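The homing pattern in `home_for_io` is: remember the task's current home, move to the scheduler that owns the I/O handle, run the operation there, then restore the original home. A minimal sketch of that save/do/restore shape, with plain integers standing in for `SchedHome` values:

```rust
// Hypothetical sketch of the homing round trip; the real code does the
// "moves" via scheduler messages rather than simple assignment.
fn home_for_io<A>(current_home: &mut u32, io_home: u32,
                  io: impl FnOnce() -> A) -> A {
    let old_home = *current_home;
    *current_home = io_home;  // "go home" to the I/O scheduler
    let a = io();             // perform the I/O pinned there
    *current_home = old_home; // move back afterwards
    a
}
```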
+
+// get a handle for the current scheduler
+macro_rules! get_handle_to_current_scheduler(
+ () => (do Local::borrow::<Scheduler, SchedHandle> |sched| { sched.make_handle() })
+)
+
enum SocketNameKind {
TcpPeer,
Tcp,
fn socket_name<T, U: Watcher + NativeHandle<*T>>(sk: SocketNameKind,
handle: U) -> Result<SocketAddr, IoError> {
-
let getsockname = match sk {
- TcpPeer => uvll::rust_uv_tcp_getpeername,
- Tcp => uvll::rust_uv_tcp_getsockname,
- Udp => uvll::rust_uv_udp_getsockname
+ TcpPeer => uvll::tcp_getpeername,
+ Tcp => uvll::tcp_getsockname,
+ Udp => uvll::udp_getsockname,
};
// Allocate a sockaddr_storage
}
+// Obviously an Event Loop is always home.
pub struct UvEventLoop {
uvio: UvIoFactory
}
}
}
+ fn pausible_idle_callback(&mut self) -> ~PausibleIdleCallback {
+ let idle_watcher = IdleWatcher::new(self.uvio.uv_loop());
+ return ~UvPausibleIdleCallback {
+ watcher: idle_watcher,
+ idle_flag: false,
+ closed: false
+ };
+ }
+
fn callback_ms(&mut self, ms: u64, f: ~fn()) {
let mut timer = TimerWatcher::new(self.uvio.uv_loop());
do timer.start(ms, 0) |timer, status| {
}
}
+pub struct UvPausibleIdleCallback {
+ watcher: IdleWatcher,
+ idle_flag: bool,
+ closed: bool
+}
+
+impl UvPausibleIdleCallback {
+ #[inline]
+ pub fn start(&mut self, f: ~fn()) {
+ do self.watcher.start |_idle_watcher, _status| {
+ f();
+ };
+ self.idle_flag = true;
+ }
+ #[inline]
+ pub fn pause(&mut self) {
+ if self.idle_flag == true {
+ self.watcher.stop();
+ self.idle_flag = false;
+ }
+ }
+ #[inline]
+ pub fn resume(&mut self) {
+ if self.idle_flag == false {
+ self.watcher.restart();
+ self.idle_flag = true;
+ }
+ }
+ #[inline]
+ pub fn close(&mut self) {
+ self.pause();
+ if !self.closed {
+ self.closed = true;
+ self.watcher.close(||{});
+ }
+ }
+}
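The `idle_flag`/`closed` guards above make `pause` and `resume` idempotent, and `close` pauses first so the underlying watcher is stopped and closed at most once. A sketch of just the guard logic, with counters standing in for the libuv start/stop calls:

```rust
// Hypothetical model of UvPausibleIdleCallback's state machine.
struct PausableIdle { active: bool, closed: bool, starts: u32, stops: u32 }

impl PausableIdle {
    fn pause(&mut self) {
        if self.active { self.active = false; self.stops += 1; }
    }
    fn resume(&mut self) {
        if !self.active { self.active = true; self.starts += 1; }
    }
    fn close(&mut self) {
        self.pause(); // ensure the watcher is stopped before closing
        if !self.closed { self.closed = true; }
    }
}
```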
+
#[test]
fn test_callback_run_once() {
do run_in_bare_thread {
}
}
+// The entire point of async is to call into the loop from other threads, so it does not need a home.
pub struct UvRemoteCallback {
// The uv async handle for triggering the callback
async: AsyncWatcher,
let exit_flag_clone = exit_flag.clone();
let async = do AsyncWatcher::new(loop_) |watcher, status| {
assert!(status.is_none());
+
+ // The synchronization logic here is subtle. To review,
+ // the uv async handle type promises that, after it is
+ // triggered the remote callback is definitely called at
+ // least once. UvRemoteCallback needs to maintain those
+ // semantics while also shutting down cleanly from the
+ // dtor. In our case that means that, when the
+ // UvRemoteCallback dtor calls `async.send()`, here `f` is
+ // always called later.
+
+ // In the dtor, the exit flag is set and the async
+ // callback is fired under a single lock. Here, before
+ // calling `f`, we take the same lock and check the flag.
+ // Because the flag is checked before `f` is called, and
+ // the flag is set under the same lock as the send, a set
+ // flag guarantees that `f` runs after the final send.
+
+ // If the check were done after `f()` instead, there would
+ // be a window between that call and the check during which
+ // the dtor could run on the other thread, destroying the
+ // handle while still missing the final callback.
+
+ let should_exit = unsafe {
+ exit_flag_clone.with_imm(|&should_exit| should_exit)
+ };
+
f();
- unsafe {
- do exit_flag_clone.with_imm |&should_exit| {
- if should_exit {
- watcher.close(||());
- }
- }
+
+ if should_exit {
+ watcher.close(||());
}
+
};
UvRemoteCallback {
async: async,
let tube_clone = tube_clone.clone();
let tube_clone_cell = Cell::new(tube_clone);
let remote = do sched.event_loop.remote_callback {
- tube_clone_cell.take().send(1);
+ // This could be called multiple times
+ if !tube_clone_cell.is_empty() {
+ tube_clone_cell.take().send(1);
+ }
};
remote_cell.put_back(remote);
}
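The shutdown protocol described in the `UvRemoteCallback` comments above can be sketched with standard library primitives. This is a hypothetical illustration, not the uv-backed code: a `Mutex<bool>` stands in for the exit-flag lock, and an mpsc channel stands in for the async handle trigger. Setting the flag and sending under one lock, and checking the flag under the same lock before calling `f`, guarantees `f` runs once after the final send:

```rust
use std::sync::{Arc, Mutex};
use std::sync::mpsc;
use std::thread;

// Returns how many times the callback body ("f") ran.
fn run_shutdown_protocol() -> u32 {
    let exit_flag = Arc::new(Mutex::new(false));
    let (tx, rx) = mpsc::channel::<()>();

    let flag = exit_flag.clone();
    let worker = thread::spawn(move || {
        let mut calls = 0u32;
        while rx.recv().is_ok() {
            // Check the flag under the lock *before* calling f:
            // a send performed together with setting the flag is
            // therefore always followed by one more call to f.
            let should_exit = *flag.lock().unwrap();
            calls += 1; // stands in for f()
            if should_exit {
                break; // "close the handle"
            }
        }
        calls
    });

    // The dtor: set the exit flag and fire the trigger under one lock.
    {
        let mut guard = exit_flag.lock().unwrap();
        *guard = true;
        tx.send(()).unwrap();
    }
    worker.join().unwrap()
}

fn main() {
    // f ran exactly once after the final send, then shut down.
    assert_eq!(run_shutdown_protocol(), 1);
}
```

The ordering argument is the same as in the diff: because the worker cannot observe a stale flag once it has received the final trigger (both happen under the same lock), the last callback is never lost.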
let result_cell = Cell::new_empty();
let result_cell_ptr: *Cell<Result<~RtioTcpStreamObject, IoError>> = &result_cell;
- let scheduler = Local::take::<Scheduler>();
-
// Block this task and take ownership, switch to scheduler context
+ let scheduler = Local::take::<Scheduler>();
do scheduler.deschedule_running_task_and_then |_, task| {
- rtdebug!("connect: entered scheduler context");
- let mut tcp_watcher = TcpWatcher::new(self.uv_loop());
+ let mut tcp = TcpWatcher::new(self.uv_loop());
let task_cell = Cell::new(task);
// Wait for a connection
- do tcp_watcher.connect(addr) |stream_watcher, status| {
- rtdebug!("connect: in connect callback");
- if status.is_none() {
- rtdebug!("status is none");
- let tcp_watcher =
- NativeHandle::from_native_handle(stream_watcher.native_handle());
- let res = Ok(~UvTcpStream(tcp_watcher));
-
- // Store the stream in the task's stack
- unsafe { (*result_cell_ptr).put_back(res); }
-
- // Context switch
- let scheduler = Local::take::<Scheduler>();
- scheduler.resume_blocked_task_immediately(task_cell.take());
- } else {
- rtdebug!("status is some");
- let task_cell = Cell::new(task_cell.take());
- do stream_watcher.close {
- let res = Err(uv_error_to_io_error(status.unwrap()));
+ do tcp.connect(addr) |stream, status| {
+ match status {
+ None => {
+ let tcp = NativeHandle::from_native_handle(stream.native_handle());
+ let home = get_handle_to_current_scheduler!();
+ let res = Ok(~UvTcpStream { watcher: tcp, home: home });
+
+ // Store the stream in the task's stack
unsafe { (*result_cell_ptr).put_back(res); }
+
+ // Context switch
let scheduler = Local::take::<Scheduler>();
scheduler.resume_blocked_task_immediately(task_cell.take());
}
- };
+ Some(_) => {
+ let task_cell = Cell::new(task_cell.take());
+ do stream.close {
+ let res = Err(uv_error_to_io_error(status.unwrap()));
+ unsafe { (*result_cell_ptr).put_back(res); }
+ let scheduler = Local::take::<Scheduler>();
+ scheduler.resume_blocked_task_immediately(task_cell.take());
+ }
+ }
+ }
}
}
fn tcp_bind(&mut self, addr: SocketAddr) -> Result<~RtioTcpListenerObject, IoError> {
let mut watcher = TcpWatcher::new(self.uv_loop());
match watcher.bind(addr) {
- Ok(_) => Ok(~UvTcpListener::new(watcher)),
+ Ok(_) => {
+ let home = get_handle_to_current_scheduler!();
+ Ok(~UvTcpListener::new(watcher, home))
+ }
Err(uverr) => {
let scheduler = Local::take::<Scheduler>();
do scheduler.deschedule_running_task_and_then |_, task| {
fn udp_bind(&mut self, addr: SocketAddr) -> Result<~RtioUdpSocketObject, IoError> {
let mut watcher = UdpWatcher::new(self.uv_loop());
match watcher.bind(addr) {
- Ok(_) => Ok(~UvUdpSocket(watcher)),
+ Ok(_) => {
+ let home = get_handle_to_current_scheduler!();
+ Ok(~UvUdpSocket { watcher: watcher, home: home })
+ }
Err(uverr) => {
let scheduler = Local::take::<Scheduler>();
do scheduler.deschedule_running_task_and_then |_, task| {
}
fn timer_init(&mut self) -> Result<~RtioTimerObject, IoError> {
- Ok(~UvTimer(TimerWatcher::new(self.uv_loop())))
+ let watcher = TimerWatcher::new(self.uv_loop());
+ let home = get_handle_to_current_scheduler!();
+ Ok(~UvTimer::new(watcher, home))
}
}
pub struct UvTcpListener {
watcher: TcpWatcher,
listening: bool,
- incoming_streams: Tube<Result<~RtioTcpStreamObject, IoError>>
+ incoming_streams: Tube<Result<~RtioTcpStreamObject, IoError>>,
+ home: SchedHandle,
+}
+
+impl HomingIO for UvTcpListener {
+ fn home<'r>(&'r mut self) -> &'r mut SchedHandle { &mut self.home }
}
impl UvTcpListener {
- fn new(watcher: TcpWatcher) -> UvTcpListener {
+ fn new(watcher: TcpWatcher, home: SchedHandle) -> UvTcpListener {
UvTcpListener {
watcher: watcher,
listening: false,
- incoming_streams: Tube::new()
+ incoming_streams: Tube::new(),
+ home: home,
}
}
impl Drop for UvTcpListener {
fn drop(&self) {
- let watcher = self.watcher();
- let scheduler = Local::take::<Scheduler>();
- do scheduler.deschedule_running_task_and_then |_, task| {
- let task_cell = Cell::new(task);
- do watcher.as_stream().close {
- let scheduler = Local::take::<Scheduler>();
- scheduler.resume_blocked_task_immediately(task_cell.take());
+ // XXX need mutable finalizer
+ let self_ = unsafe { transmute::<&UvTcpListener, &mut UvTcpListener>(self) };
+ do self_.home_for_io |self_| {
+ let scheduler = Local::take::<Scheduler>();
+ do scheduler.deschedule_running_task_and_then |_, task| {
+ let task_cell = Cell::new(task);
+ do self_.watcher().as_stream().close {
+ let scheduler = Local::take::<Scheduler>();
+ scheduler.resume_blocked_task_immediately(task_cell.take());
+ }
}
}
}
impl RtioSocket for UvTcpListener {
fn socket_name(&mut self) -> Result<SocketAddr, IoError> {
- socket_name(Tcp, self.watcher)
+ do self.home_for_io |self_| {
+ socket_name(Tcp, self_.watcher)
+ }
}
}
impl RtioTcpListener for UvTcpListener {
fn accept(&mut self) -> Result<~RtioTcpStreamObject, IoError> {
- rtdebug!("entering listen");
-
- if self.listening {
- return self.incoming_streams.recv();
- }
-
- self.listening = true;
-
- let server_tcp_watcher = self.watcher();
- let incoming_streams_cell = Cell::new(self.incoming_streams.clone());
-
- let incoming_streams_cell = Cell::new(incoming_streams_cell.take());
- let mut server_tcp_watcher = server_tcp_watcher;
- do server_tcp_watcher.listen |mut server_stream_watcher, status| {
- let maybe_stream = if status.is_none() {
- let mut loop_ = server_stream_watcher.event_loop();
- let client_tcp_watcher = TcpWatcher::new(&mut loop_);
- // XXX: Need's to be surfaced in interface
- server_stream_watcher.accept(client_tcp_watcher.as_stream());
- Ok(~UvTcpStream(client_tcp_watcher))
- } else {
- Err(standard_error(OtherIoError))
- };
+ do self.home_for_io |self_| {
+
+ if !self_.listening {
+ self_.listening = true;
+
+ let incoming_streams_cell = Cell::new(self_.incoming_streams.clone());
+
+ do self_.watcher().listen |mut server, status| {
+ let stream = match status {
+ Some(_) => Err(standard_error(OtherIoError)),
+ None => {
+ let client = TcpWatcher::new(&server.event_loop());
+ // XXX: needs to be surfaced in interface
+ server.accept(client.as_stream());
+ let home = get_handle_to_current_scheduler!();
+ Ok(~UvTcpStream { watcher: client, home: home })
+ }
+ };
+
+ let mut incoming_streams = incoming_streams_cell.take();
+ incoming_streams.send(stream);
+ incoming_streams_cell.put_back(incoming_streams);
+ }
- let mut incoming_streams = incoming_streams_cell.take();
- incoming_streams.send(maybe_stream);
- incoming_streams_cell.put_back(incoming_streams);
+ }
+ self_.incoming_streams.recv()
}
-
- return self.incoming_streams.recv();
}
fn accept_simultaneously(&mut self) -> Result<(), IoError> {
- let r = unsafe {
- uvll::rust_uv_tcp_simultaneous_accepts(self.watcher.native_handle(), 1 as c_int)
- };
+ do self.home_for_io |self_| {
+ let r = unsafe {
+ uvll::tcp_simultaneous_accepts(self_.watcher().native_handle(), 1 as c_int)
+ };
- match status_to_maybe_uv_error(self.watcher, r) {
- Some(err) => Err(uv_error_to_io_error(err)),
- None => Ok(())
+ match status_to_maybe_uv_error(self_.watcher(), r) {
+ Some(err) => Err(uv_error_to_io_error(err)),
+ None => Ok(())
+ }
}
}
fn dont_accept_simultaneously(&mut self) -> Result<(), IoError> {
- let r = unsafe {
- uvll::rust_uv_tcp_simultaneous_accepts(self.watcher.native_handle(), 0 as c_int)
- };
+ do self.home_for_io |self_| {
+ let r = unsafe {
+ uvll::tcp_simultaneous_accepts(self_.watcher().native_handle(), 0 as c_int)
+ };
- match status_to_maybe_uv_error(self.watcher, r) {
- Some(err) => Err(uv_error_to_io_error(err)),
- None => Ok(())
+ match status_to_maybe_uv_error(self_.watcher(), r) {
+ Some(err) => Err(uv_error_to_io_error(err)),
+ None => Ok(())
+ }
}
}
}
-pub struct UvTcpStream(TcpWatcher);
+pub struct UvTcpStream {
+ watcher: TcpWatcher,
+ home: SchedHandle,
+}
+
+impl HomingIO for UvTcpStream {
+ fn home<'r>(&'r mut self) -> &'r mut SchedHandle { &mut self.home }
+}
impl Drop for UvTcpStream {
fn drop(&self) {
- rtdebug!("closing tcp stream");
- let scheduler = Local::take::<Scheduler>();
- do scheduler.deschedule_running_task_and_then |_, task| {
- let task_cell = Cell::new(task);
- do self.as_stream().close {
- let scheduler = Local::take::<Scheduler>();
- scheduler.resume_blocked_task_immediately(task_cell.take());
+ // XXX need mutable finalizer
+ let this = unsafe { transmute::<&UvTcpStream, &mut UvTcpStream>(self) };
+ do this.home_for_io |self_| {
+ let scheduler = Local::take::<Scheduler>();
+ do scheduler.deschedule_running_task_and_then |_, task| {
+ let task_cell = Cell::new(task);
+ do self_.watcher.as_stream().close {
+ let scheduler = Local::take::<Scheduler>();
+ scheduler.resume_blocked_task_immediately(task_cell.take());
+ }
}
}
}
impl RtioSocket for UvTcpStream {
fn socket_name(&mut self) -> Result<SocketAddr, IoError> {
- socket_name(Tcp, **self)
+ do self.home_for_io |self_| {
+ socket_name(Tcp, self_.watcher)
+ }
}
}
impl RtioTcpStream for UvTcpStream {
fn read(&mut self, buf: &mut [u8]) -> Result<uint, IoError> {
- let result_cell = Cell::new_empty();
- let result_cell_ptr: *Cell<Result<uint, IoError>> = &result_cell;
-
- let scheduler = Local::take::<Scheduler>();
- let buf_ptr: *&mut [u8] = &buf;
- do scheduler.deschedule_running_task_and_then |_sched, task| {
- rtdebug!("read: entered scheduler context");
- let task_cell = Cell::new(task);
- // XXX: We shouldn't reallocate these callbacks every
- // call to read
- let alloc: AllocCallback = |_| unsafe {
- slice_to_uv_buf(*buf_ptr)
- };
- let mut watcher = self.as_stream();
- do watcher.read_start(alloc) |mut watcher, nread, _buf, status| {
-
- // Stop reading so that no read callbacks are
- // triggered before the user calls `read` again.
- // XXX: Is there a performance impact to calling
- // stop here?
- watcher.read_stop();
-
- let result = if status.is_none() {
- assert!(nread >= 0);
- Ok(nread as uint)
- } else {
- Err(uv_error_to_io_error(status.unwrap()))
+ do self.home_for_io |self_| {
+ let result_cell = Cell::new_empty();
+ let result_cell_ptr: *Cell<Result<uint, IoError>> = &result_cell;
+
+ let scheduler = Local::take::<Scheduler>();
+ let buf_ptr: *&mut [u8] = &buf;
+ do scheduler.deschedule_running_task_and_then |_sched, task| {
+ let task_cell = Cell::new(task);
+ // XXX: We shouldn't reallocate these callbacks every
+ // call to read
+ let alloc: AllocCallback = |_| unsafe {
+ slice_to_uv_buf(*buf_ptr)
};
+ let mut watcher = self_.watcher.as_stream();
+ do watcher.read_start(alloc) |mut watcher, nread, _buf, status| {
- unsafe { (*result_cell_ptr).put_back(result); }
+ // Stop reading so that no read callbacks are
+ // triggered before the user calls `read` again.
+ // XXX: Is there a performance impact to calling
+ // stop here?
+ watcher.read_stop();
- let scheduler = Local::take::<Scheduler>();
- scheduler.resume_blocked_task_immediately(task_cell.take());
+ let result = if status.is_none() {
+ assert!(nread >= 0);
+ Ok(nread as uint)
+ } else {
+ Err(uv_error_to_io_error(status.unwrap()))
+ };
+
+ unsafe { (*result_cell_ptr).put_back(result); }
+
+ let scheduler = Local::take::<Scheduler>();
+ scheduler.resume_blocked_task_immediately(task_cell.take());
+ }
}
- }
- assert!(!result_cell.is_empty());
- return result_cell.take();
+ assert!(!result_cell.is_empty());
+ result_cell.take()
+ }
}
fn write(&mut self, buf: &[u8]) -> Result<(), IoError> {
- let result_cell = Cell::new_empty();
- let result_cell_ptr: *Cell<Result<(), IoError>> = &result_cell;
- let scheduler = Local::take::<Scheduler>();
- let buf_ptr: *&[u8] = &buf;
- do scheduler.deschedule_running_task_and_then |_, task| {
- let task_cell = Cell::new(task);
- let buf = unsafe { slice_to_uv_buf(*buf_ptr) };
- let mut watcher = self.as_stream();
- do watcher.write(buf) |_watcher, status| {
- let result = if status.is_none() {
- Ok(())
- } else {
- Err(uv_error_to_io_error(status.unwrap()))
- };
-
- unsafe { (*result_cell_ptr).put_back(result); }
+ do self.home_for_io |self_| {
+ let result_cell = Cell::new_empty();
+ let result_cell_ptr: *Cell<Result<(), IoError>> = &result_cell;
+ let scheduler = Local::take::<Scheduler>();
+ let buf_ptr: *&[u8] = &buf;
+ do scheduler.deschedule_running_task_and_then |_, task| {
+ let task_cell = Cell::new(task);
+ let buf = unsafe { slice_to_uv_buf(*buf_ptr) };
+ let mut watcher = self_.watcher.as_stream();
+ do watcher.write(buf) |_watcher, status| {
+ let result = if status.is_none() {
+ Ok(())
+ } else {
+ Err(uv_error_to_io_error(status.unwrap()))
+ };
+
+ unsafe { (*result_cell_ptr).put_back(result); }
- let scheduler = Local::take::<Scheduler>();
- scheduler.resume_blocked_task_immediately(task_cell.take());
+ let scheduler = Local::take::<Scheduler>();
+ scheduler.resume_blocked_task_immediately(task_cell.take());
+ }
}
- }
- assert!(!result_cell.is_empty());
- return result_cell.take();
+ assert!(!result_cell.is_empty());
+ result_cell.take()
+ }
}
fn peer_name(&mut self) -> Result<SocketAddr, IoError> {
- socket_name(TcpPeer, **self)
+ do self.home_for_io |self_| {
+ socket_name(TcpPeer, self_.watcher)
+ }
}
fn control_congestion(&mut self) -> Result<(), IoError> {
- let r = unsafe {
- uvll::rust_uv_tcp_nodelay(self.native_handle(), 0 as c_int)
- };
+ do self.home_for_io |self_| {
+ let r = unsafe { uvll::tcp_nodelay(self_.watcher.native_handle(), 0 as c_int) };
- match status_to_maybe_uv_error(**self, r) {
- Some(err) => Err(uv_error_to_io_error(err)),
- None => Ok(())
+ match status_to_maybe_uv_error(self_.watcher, r) {
+ Some(err) => Err(uv_error_to_io_error(err)),
+ None => Ok(())
+ }
}
}
fn nodelay(&mut self) -> Result<(), IoError> {
- let r = unsafe {
- uvll::rust_uv_tcp_nodelay(self.native_handle(), 1 as c_int)
- };
+ do self.home_for_io |self_| {
+ let r = unsafe { uvll::tcp_nodelay(self_.watcher.native_handle(), 1 as c_int) };
- match status_to_maybe_uv_error(**self, r) {
- Some(err) => Err(uv_error_to_io_error(err)),
- None => Ok(())
+ match status_to_maybe_uv_error(self_.watcher, r) {
+ Some(err) => Err(uv_error_to_io_error(err)),
+ None => Ok(())
+ }
}
}
fn keepalive(&mut self, delay_in_seconds: uint) -> Result<(), IoError> {
- let r = unsafe {
- uvll::rust_uv_tcp_keepalive(self.native_handle(), 1 as c_int,
- delay_in_seconds as c_uint)
- };
+ do self.home_for_io |self_| {
+ let r = unsafe {
+ uvll::tcp_keepalive(self_.watcher.native_handle(), 1 as c_int,
+ delay_in_seconds as c_uint)
+ };
- match status_to_maybe_uv_error(**self, r) {
- Some(err) => Err(uv_error_to_io_error(err)),
- None => Ok(())
+ match status_to_maybe_uv_error(self_.watcher, r) {
+ Some(err) => Err(uv_error_to_io_error(err)),
+ None => Ok(())
+ }
}
}
fn letdie(&mut self) -> Result<(), IoError> {
- let r = unsafe {
- uvll::rust_uv_tcp_keepalive(self.native_handle(), 0 as c_int, 0 as c_uint)
- };
+ do self.home_for_io |self_| {
+ let r = unsafe {
+ uvll::tcp_keepalive(self_.watcher.native_handle(), 0 as c_int, 0 as c_uint)
+ };
- match status_to_maybe_uv_error(**self, r) {
- Some(err) => Err(uv_error_to_io_error(err)),
- None => Ok(())
+ match status_to_maybe_uv_error(self_.watcher, r) {
+ Some(err) => Err(uv_error_to_io_error(err)),
+ None => Ok(())
+ }
}
}
}
-pub struct UvUdpSocket(UdpWatcher);
+pub struct UvUdpSocket {
+ watcher: UdpWatcher,
+ home: SchedHandle,
+}
+
+impl HomingIO for UvUdpSocket {
+ fn home<'r>(&'r mut self) -> &'r mut SchedHandle { &mut self.home }
+}
impl Drop for UvUdpSocket {
fn drop(&self) {
- rtdebug!("closing udp socket");
- let scheduler = Local::take::<Scheduler>();
- do scheduler.deschedule_running_task_and_then |_, task| {
- let task_cell = Cell::new(task);
- do self.close {
- let scheduler = Local::take::<Scheduler>();
- scheduler.resume_blocked_task_immediately(task_cell.take());
+ // XXX need mutable finalizer
+ let this = unsafe { transmute::<&UvUdpSocket, &mut UvUdpSocket>(self) };
+ do this.home_for_io |_| {
+ let scheduler = Local::take::<Scheduler>();
+ do scheduler.deschedule_running_task_and_then |_, task| {
+ let task_cell = Cell::new(task);
+ do this.watcher.close {
+ let scheduler = Local::take::<Scheduler>();
+ scheduler.resume_blocked_task_immediately(task_cell.take());
+ }
}
}
}
impl RtioSocket for UvUdpSocket {
fn socket_name(&mut self) -> Result<SocketAddr, IoError> {
- socket_name(Udp, **self)
+ do self.home_for_io |self_| {
+ socket_name(Udp, self_.watcher)
+ }
}
}
impl RtioUdpSocket for UvUdpSocket {
fn recvfrom(&mut self, buf: &mut [u8]) -> Result<(uint, SocketAddr), IoError> {
- let result_cell = Cell::new_empty();
- let result_cell_ptr: *Cell<Result<(uint, SocketAddr), IoError>> = &result_cell;
-
- let scheduler = Local::take::<Scheduler>();
- let buf_ptr: *&mut [u8] = &buf;
- do scheduler.deschedule_running_task_and_then |_sched, task| {
- rtdebug!("recvfrom: entered scheduler context");
- let task_cell = Cell::new(task);
- let alloc: AllocCallback = |_| unsafe { slice_to_uv_buf(*buf_ptr) };
- do self.recv_start(alloc) |mut watcher, nread, _buf, addr, flags, status| {
- let _ = flags; // XXX add handling for partials?
-
- watcher.recv_stop();
+ do self.home_for_io |self_| {
+ let result_cell = Cell::new_empty();
+ let result_cell_ptr: *Cell<Result<(uint, SocketAddr), IoError>> = &result_cell;
+
+ let scheduler = Local::take::<Scheduler>();
+ let buf_ptr: *&mut [u8] = &buf;
+ do scheduler.deschedule_running_task_and_then |_, task| {
+ let task_cell = Cell::new(task);
+ let alloc: AllocCallback = |_| unsafe { slice_to_uv_buf(*buf_ptr) };
+ do self_.watcher.recv_start(alloc) |mut watcher, nread, _buf, addr, flags, status| {
+ let _ = flags; // XXX: add handling for partials?
+
+ watcher.recv_stop();
+
+ let result = match status {
+ None => {
+ assert!(nread >= 0);
+ Ok((nread as uint, addr))
+ }
+ Some(err) => Err(uv_error_to_io_error(err)),
+ };
+
+ unsafe { (*result_cell_ptr).put_back(result); }
- let result = match status {
- None => {
- assert!(nread >= 0);
- Ok((nread as uint, addr))
- }
- Some(err) => Err(uv_error_to_io_error(err))
- };
-
- unsafe { (*result_cell_ptr).put_back(result); }
-
- let scheduler = Local::take::<Scheduler>();
- scheduler.resume_blocked_task_immediately(task_cell.take());
+ let scheduler = Local::take::<Scheduler>();
+ scheduler.resume_blocked_task_immediately(task_cell.take());
+ }
}
- }
- assert!(!result_cell.is_empty());
- return result_cell.take();
+ assert!(!result_cell.is_empty());
+ result_cell.take()
+ }
}
fn sendto(&mut self, buf: &[u8], dst: SocketAddr) -> Result<(), IoError> {
- let result_cell = Cell::new_empty();
- let result_cell_ptr: *Cell<Result<(), IoError>> = &result_cell;
- let scheduler = Local::take::<Scheduler>();
- let buf_ptr: *&[u8] = &buf;
- do scheduler.deschedule_running_task_and_then |_, task| {
- let task_cell = Cell::new(task);
- let buf = unsafe { slice_to_uv_buf(*buf_ptr) };
- do self.send(buf, dst) |_watcher, status| {
-
- let result = match status {
- None => Ok(()),
- Some(err) => Err(uv_error_to_io_error(err)),
- };
-
- unsafe { (*result_cell_ptr).put_back(result); }
+ do self.home_for_io |self_| {
+ let result_cell = Cell::new_empty();
+ let result_cell_ptr: *Cell<Result<(), IoError>> = &result_cell;
+ let scheduler = Local::take::<Scheduler>();
+ let buf_ptr: *&[u8] = &buf;
+ do scheduler.deschedule_running_task_and_then |_, task| {
+ let task_cell = Cell::new(task);
+ let buf = unsafe { slice_to_uv_buf(*buf_ptr) };
+ do self_.watcher.send(buf, dst) |_watcher, status| {
+
+ let result = match status {
+ None => Ok(()),
+ Some(err) => Err(uv_error_to_io_error(err)),
+ };
+
+ unsafe { (*result_cell_ptr).put_back(result); }
- let scheduler = Local::take::<Scheduler>();
- scheduler.resume_blocked_task_immediately(task_cell.take());
+ let scheduler = Local::take::<Scheduler>();
+ scheduler.resume_blocked_task_immediately(task_cell.take());
+ }
}
- }
- assert!(!result_cell.is_empty());
- return result_cell.take();
+ assert!(!result_cell.is_empty());
+ result_cell.take()
+ }
}
fn join_multicast(&mut self, multi: IpAddr) -> Result<(), IoError> {
- let r = unsafe {
- do multi.to_str().with_c_str |m_addr| {
- uvll::udp_set_membership(self.native_handle(), m_addr,
- ptr::null(), uvll::UV_JOIN_GROUP)
- }
- };
+ do self.home_for_io |self_| {
+ let r = unsafe {
+ do multi.to_str().with_c_str |m_addr| {
+ uvll::udp_set_membership(self_.watcher.native_handle(), m_addr,
+ ptr::null(), uvll::UV_JOIN_GROUP)
+ }
+ };
- match status_to_maybe_uv_error(**self, r) {
- Some(err) => Err(uv_error_to_io_error(err)),
- None => Ok(())
+ match status_to_maybe_uv_error(self_.watcher, r) {
+ Some(err) => Err(uv_error_to_io_error(err)),
+ None => Ok(())
+ }
}
}
fn leave_multicast(&mut self, multi: IpAddr) -> Result<(), IoError> {
- let r = unsafe {
- do multi.to_str().with_c_str |m_addr| {
- uvll::udp_set_membership(self.native_handle(), m_addr,
- ptr::null(), uvll::UV_LEAVE_GROUP)
- }
- };
+ do self.home_for_io |self_| {
+ let r = unsafe {
+ do multi.to_str().with_c_str |m_addr| {
+ uvll::udp_set_membership(self_.watcher.native_handle(), m_addr,
+ ptr::null(), uvll::UV_LEAVE_GROUP)
+ }
+ };
- match status_to_maybe_uv_error(**self, r) {
- Some(err) => Err(uv_error_to_io_error(err)),
- None => Ok(())
+ match status_to_maybe_uv_error(self_.watcher, r) {
+ Some(err) => Err(uv_error_to_io_error(err)),
+ None => Ok(())
+ }
}
}
fn loop_multicast_locally(&mut self) -> Result<(), IoError> {
- let r = unsafe {
- uvll::udp_set_multicast_loop(self.native_handle(), 1 as c_int)
- };
+ do self.home_for_io |self_| {
+
+ let r = unsafe {
+ uvll::udp_set_multicast_loop(self_.watcher.native_handle(), 1 as c_int)
+ };
- match status_to_maybe_uv_error(**self, r) {
- Some(err) => Err(uv_error_to_io_error(err)),
- None => Ok(())
+ match status_to_maybe_uv_error(self_.watcher, r) {
+ Some(err) => Err(uv_error_to_io_error(err)),
+ None => Ok(())
+ }
}
}
fn dont_loop_multicast_locally(&mut self) -> Result<(), IoError> {
- let r = unsafe {
- uvll::udp_set_multicast_loop(self.native_handle(), 0 as c_int)
- };
+ do self.home_for_io |self_| {
- match status_to_maybe_uv_error(**self, r) {
- Some(err) => Err(uv_error_to_io_error(err)),
- None => Ok(())
+ let r = unsafe {
+ uvll::udp_set_multicast_loop(self_.watcher.native_handle(), 0 as c_int)
+ };
+
+ match status_to_maybe_uv_error(self_.watcher, r) {
+ Some(err) => Err(uv_error_to_io_error(err)),
+ None => Ok(())
+ }
}
}
fn multicast_time_to_live(&mut self, ttl: int) -> Result<(), IoError> {
- let r = unsafe {
- uvll::udp_set_multicast_ttl(self.native_handle(), ttl as c_int)
- };
+ do self.home_for_io |self_| {
- match status_to_maybe_uv_error(**self, r) {
- Some(err) => Err(uv_error_to_io_error(err)),
- None => Ok(())
+ let r = unsafe {
+ uvll::udp_set_multicast_ttl(self_.watcher.native_handle(), ttl as c_int)
+ };
+
+ match status_to_maybe_uv_error(self_.watcher, r) {
+ Some(err) => Err(uv_error_to_io_error(err)),
+ None => Ok(())
+ }
}
}
fn time_to_live(&mut self, ttl: int) -> Result<(), IoError> {
- let r = unsafe {
- uvll::udp_set_ttl(self.native_handle(), ttl as c_int)
- };
+ do self.home_for_io |self_| {
+
+ let r = unsafe {
+ uvll::udp_set_ttl(self_.watcher.native_handle(), ttl as c_int)
+ };
- match status_to_maybe_uv_error(**self, r) {
- Some(err) => Err(uv_error_to_io_error(err)),
- None => Ok(())
+ match status_to_maybe_uv_error(self_.watcher, r) {
+ Some(err) => Err(uv_error_to_io_error(err)),
+ None => Ok(())
+ }
}
}
fn hear_broadcasts(&mut self) -> Result<(), IoError> {
- let r = unsafe {
- uvll::udp_set_broadcast(self.native_handle(), 1 as c_int)
- };
+ do self.home_for_io |self_| {
+
+ let r = unsafe {
+ uvll::udp_set_broadcast(self_.watcher.native_handle(), 1 as c_int)
+ };
- match status_to_maybe_uv_error(**self, r) {
- Some(err) => Err(uv_error_to_io_error(err)),
- None => Ok(())
+ match status_to_maybe_uv_error(self_.watcher, r) {
+ Some(err) => Err(uv_error_to_io_error(err)),
+ None => Ok(())
+ }
}
}
fn ignore_broadcasts(&mut self) -> Result<(), IoError> {
- let r = unsafe {
- uvll::udp_set_broadcast(self.native_handle(), 0 as c_int)
- };
+ do self.home_for_io |self_| {
- match status_to_maybe_uv_error(**self, r) {
- Some(err) => Err(uv_error_to_io_error(err)),
- None => Ok(())
+ let r = unsafe {
+ uvll::udp_set_broadcast(self_.watcher.native_handle(), 0 as c_int)
+ };
+
+ match status_to_maybe_uv_error(self_.watcher, r) {
+ Some(err) => Err(uv_error_to_io_error(err)),
+ None => Ok(())
+ }
}
}
}
-pub struct UvTimer(timer::TimerWatcher);
+pub struct UvTimer {
+ watcher: timer::TimerWatcher,
+ home: SchedHandle,
+}
+
+impl HomingIO for UvTimer {
+ fn home<'r>(&'r mut self) -> &'r mut SchedHandle { &mut self.home }
+}
impl UvTimer {
- fn new(w: timer::TimerWatcher) -> UvTimer {
- UvTimer(w)
+ fn new(w: timer::TimerWatcher, home: SchedHandle) -> UvTimer {
+ UvTimer { watcher: w, home: home }
}
}
impl Drop for UvTimer {
fn drop(&self) {
- rtdebug!("closing UvTimer");
- let scheduler = Local::take::<Scheduler>();
- do scheduler.deschedule_running_task_and_then |_, task| {
- let task_cell = Cell::new(task);
- do self.close {
- let scheduler = Local::take::<Scheduler>();
- scheduler.resume_blocked_task_immediately(task_cell.take());
+ let self_ = unsafe { transmute::<&UvTimer, &mut UvTimer>(self) };
+ do self_.home_for_io |self_| {
+ rtdebug!("closing UvTimer");
+ let scheduler = Local::take::<Scheduler>();
+ do scheduler.deschedule_running_task_and_then |_, task| {
+ let task_cell = Cell::new(task);
+ do self_.watcher.close {
+ let scheduler = Local::take::<Scheduler>();
+ scheduler.resume_blocked_task_immediately(task_cell.take());
+ }
}
}
}
}
impl RtioTimer for UvTimer {
- fn sleep(&self, msecs: u64) {
- let scheduler = Local::take::<Scheduler>();
- do scheduler.deschedule_running_task_and_then |_sched, task| {
- rtdebug!("sleep: entered scheduler context");
- let task_cell = Cell::new(task);
- let mut watcher = **self;
- do watcher.start(msecs, 0) |_, status| {
- assert!(status.is_none());
- let scheduler = Local::take::<Scheduler>();
- scheduler.resume_blocked_task_immediately(task_cell.take());
+ fn sleep(&mut self, msecs: u64) {
+ do self.home_for_io |self_| {
+ let scheduler = Local::take::<Scheduler>();
+ do scheduler.deschedule_running_task_and_then |_sched, task| {
+ rtdebug!("sleep: entered scheduler context");
+ let task_cell = Cell::new(task);
+ do self_.watcher.start(msecs, 0) |_, status| {
+ assert!(status.is_none());
+ let scheduler = Local::take::<Scheduler>();
+ scheduler.resume_blocked_task_immediately(task_cell.take());
+ }
}
+ self_.watcher.stop();
}
- let mut w = **self;
- w.stop();
}
}
}
}
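The pattern running through all the `Drop` impls above (`UvTcpListener`, `UvTcpStream`, `UvUdpSocket`, `UvTimer`) is that the value stores a `SchedHandle` to its home event loop, and `home_for_io` ships the actual close back to that loop rather than closing from whichever scheduler the value happens to die on. A minimal sketch of that homing-destructor idea in modern Rust (the channel is a hypothetical stand-in for `SchedHandle`; names are illustrative):

```rust
use std::sync::mpsc::{self, Sender};
use std::thread;

// A handle that remembers its "home" loop. Drop does not close the
// resource directly; it sends a close request home instead.
struct HomedHandle {
    home: Sender<&'static str>,
}

impl Drop for HomedHandle {
    fn drop(&mut self) {
        // May run on any thread; the real close happens back home.
        let _ = self.home.send("close");
    }
}

// Drop the handle on a foreign thread and report what home received.
fn drop_on_other_thread() -> &'static str {
    let (tx, rx) = mpsc::channel();
    let handle = HomedHandle { home: tx };

    thread::spawn(move || drop(handle)).join().unwrap();

    // The home loop still receives and executes the close.
    rx.recv().unwrap()
}

fn main() {
    assert_eq!(drop_on_other_thread(), "close");
}
```

The `transmute::<&T, &mut T>` in the diff's finalizers exists only because old Rust destructors took `&self` (the "XXX need mutable finalizer" notes); modern `Drop::drop(&mut self)` makes that workaround unnecessary.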
+#[test]
+fn test_simple_homed_udp_io_bind_then_move_task_then_home_and_close() {
+ use rt::sleeper_list::SleeperList;
+ use rt::work_queue::WorkQueue;
+ use rt::thread::Thread;
+ use rt::task::Task;
+ use rt::sched::{Shutdown, TaskFromFriend};
+ do run_in_bare_thread {
+ let sleepers = SleeperList::new();
+ let work_queue1 = WorkQueue::new();
+ let work_queue2 = WorkQueue::new();
+ let queues = ~[work_queue1.clone(), work_queue2.clone()];
+
+ let mut sched1 = ~Scheduler::new(~UvEventLoop::new(), work_queue1, queues.clone(),
+ sleepers.clone());
+ let mut sched2 = ~Scheduler::new(~UvEventLoop::new(), work_queue2, queues.clone(),
+ sleepers.clone());
+
+ let handle1 = Cell::new(sched1.make_handle());
+ let handle2 = Cell::new(sched2.make_handle());
+ let tasksFriendHandle = Cell::new(sched2.make_handle());
+
+ let on_exit: ~fn(bool) = |exit_status| {
+ handle1.take().send(Shutdown);
+ handle2.take().send(Shutdown);
+ rtassert!(exit_status);
+ };
+
+ let test_function: ~fn() = || {
+ let io = unsafe { Local::unsafe_borrow::<IoFactoryObject>() };
+ let addr = next_test_ip4();
+ let maybe_socket = unsafe { (*io).udp_bind(addr) };
+ // this socket is bound to this event loop
+ assert!(maybe_socket.is_ok());
+
+ // block self on sched1
+ let scheduler = Local::take::<Scheduler>();
+ do scheduler.deschedule_running_task_and_then |_, task| {
+ // unblock task
+ do task.wake().map_move |task| {
+ // send self to sched2
+ tasksFriendHandle.take().send(TaskFromFriend(task));
+ };
+ // sched1 should now sleep since it has nothing else to do
+ }
+ // sched2 will wake up and get the task
+ // as we do nothing else, the function ends and the socket goes out of scope
+ // sched2 will start to run the destructor
+ // the destructor will first block the task, set its home to sched1, then enqueue it
+ // sched2 will dequeue the task, see that it has a home, and send it to sched1
+ // sched1 will wake up, exec the close function on the correct loop, and then we're done
+ };
+
+ let mut main_task = ~Task::new_root(&mut sched1.stack_pool, None, test_function);
+ main_task.death.on_exit = Some(on_exit);
+ let main_task = Cell::new(main_task);
+
+ let null_task = Cell::new(~do Task::new_root(&mut sched2.stack_pool, None) || {});
+
+ let sched1 = Cell::new(sched1);
+ let sched2 = Cell::new(sched2);
+
+ let thread1 = do Thread::start {
+ sched1.take().bootstrap(main_task.take());
+ };
+ let thread2 = do Thread::start {
+ sched2.take().bootstrap(null_task.take());
+ };
+
+ thread1.join();
+ thread2.join();
+ }
+}
+
+#[test]
+fn test_simple_homed_udp_io_bind_then_move_handle_then_home_and_close() {
+ use rt::sleeper_list::SleeperList;
+ use rt::work_queue::WorkQueue;
+ use rt::thread::Thread;
+ use rt::task::Task;
+ use rt::comm::oneshot;
+ use rt::sched::Shutdown;
+ do run_in_bare_thread {
+ let sleepers = SleeperList::new();
+ let work_queue1 = WorkQueue::new();
+ let work_queue2 = WorkQueue::new();
+ let queues = ~[work_queue1.clone(), work_queue2.clone()];
+
+ let mut sched1 = ~Scheduler::new(~UvEventLoop::new(), work_queue1, queues.clone(),
+ sleepers.clone());
+ let mut sched2 = ~Scheduler::new(~UvEventLoop::new(), work_queue2, queues.clone(),
+ sleepers.clone());
+
+ let handle1 = Cell::new(sched1.make_handle());
+ let handle2 = Cell::new(sched2.make_handle());
+
+ let (port, chan) = oneshot();
+ let port = Cell::new(port);
+ let chan = Cell::new(chan);
+
+ let body1: ~fn() = || {
+ let io = unsafe { Local::unsafe_borrow::<IoFactoryObject>() };
+ let addr = next_test_ip4();
+ let socket = unsafe { (*io).udp_bind(addr) };
+ assert!(socket.is_ok());
+ chan.take().send(socket);
+ };
+
+ let body2: ~fn() = || {
+ let socket = port.take().recv();
+ assert!(socket.is_ok());
+ /* The socket goes out of scope and the destructor is called.
+ * The destructor:
+ * - sends itself back to sched1
+ * - frees the socket
+ * - resets the home of the task to whatever it was previously
+ */
+ };
+
+ let on_exit: ~fn(bool) = |exit| {
+ handle1.take().send(Shutdown);
+ handle2.take().send(Shutdown);
+ rtassert!(exit);
+ };
+
+ let task1 = Cell::new(~Task::new_root(&mut sched1.stack_pool, None, body1));
+
+ let mut task2 = ~Task::new_root(&mut sched2.stack_pool, None, body2);
+ task2.death.on_exit = Some(on_exit);
+ let task2 = Cell::new(task2);
+
+ let sched1 = Cell::new(sched1);
+ let sched2 = Cell::new(sched2);
+
+ let thread1 = do Thread::start {
+ sched1.take().bootstrap(task1.take());
+ };
+ let thread2 = do Thread::start {
+ sched2.take().bootstrap(task2.take());
+ };
+
+ thread1.join();
+ thread2.join();
+ }
+}
+
#[test]
fn test_simple_tcp_server_and_client() {
do run_in_newsched_task {
}
}
+#[test]
+fn test_simple_tcp_server_and_client_on_diff_threads() {
+ use rt::sleeper_list::SleeperList;
+ use rt::work_queue::WorkQueue;
+ use rt::thread::Thread;
+ use rt::task::Task;
+ use rt::sched::{Shutdown};
+ do run_in_bare_thread {
+ let sleepers = SleeperList::new();
+
+ let server_addr = next_test_ip4();
+ let client_addr = server_addr.clone();
+
+ let server_work_queue = WorkQueue::new();
+ let client_work_queue = WorkQueue::new();
+ let queues = ~[server_work_queue.clone(), client_work_queue.clone()];
+
+ let mut server_sched = ~Scheduler::new(~UvEventLoop::new(), server_work_queue,
+ queues.clone(), sleepers.clone());
+ let mut client_sched = ~Scheduler::new(~UvEventLoop::new(), client_work_queue,
+ queues.clone(), sleepers.clone());
+
+ let server_handle = Cell::new(server_sched.make_handle());
+ let client_handle = Cell::new(client_sched.make_handle());
+
+ let server_on_exit: ~fn(bool) = |exit_status| {
+ server_handle.take().send(Shutdown);
+ rtassert!(exit_status);
+ };
+
+ let client_on_exit: ~fn(bool) = |exit_status| {
+ client_handle.take().send(Shutdown);
+ rtassert!(exit_status);
+ };
+
+ let server_fn: ~fn() = || {
+ let io = unsafe { Local::unsafe_borrow::<IoFactoryObject>() };
+ let mut listener = unsafe { (*io).tcp_bind(server_addr).unwrap() };
+ let mut stream = listener.accept().unwrap();
+ let mut buf = [0, .. 2048];
+ let nread = stream.read(buf).unwrap();
+ assert_eq!(nread, 8);
+ for i in range(0u, nread) {
+ assert_eq!(buf[i], i as u8);
+ }
+ };
+
+ let client_fn: ~fn() = || {
+ let io = unsafe { Local::unsafe_borrow::<IoFactoryObject>() };
+ let mut stream = unsafe { (*io).tcp_connect(client_addr) };
+ while stream.is_err() {
+ stream = unsafe { (*io).tcp_connect(client_addr) };
+ }
+ stream.unwrap().write([0, 1, 2, 3, 4, 5, 6, 7]);
+ };
+
+ let mut server_task = ~Task::new_root(&mut server_sched.stack_pool, None, server_fn);
+ server_task.death.on_exit = Some(server_on_exit);
+ let server_task = Cell::new(server_task);
+
+ let mut client_task = ~Task::new_root(&mut client_sched.stack_pool, None, client_fn);
+ client_task.death.on_exit = Some(client_on_exit);
+ let client_task = Cell::new(client_task);
+
+ let server_sched = Cell::new(server_sched);
+ let client_sched = Cell::new(client_sched);
+
+ let server_thread = do Thread::start {
+ server_sched.take().bootstrap(server_task.take());
+ };
+ let client_thread = do Thread::start {
+ client_sched.take().bootstrap(client_task.take());
+ };
+
+ server_thread.join();
+ client_thread.join();
+ }
+}
+
#[test]
fn test_simple_udp_server_and_client() {
do run_in_newsched_task {
}
}
-fn test_timer_sleep_simple_impl() {
- unsafe {
- let io = Local::unsafe_borrow::<IoFactoryObject>();
- let timer = (*io).timer_init();
- match timer {
- Ok(t) => t.sleep(1),
- Err(_) => assert!(false)
- }
- }
-}
#[test]
fn test_timer_sleep_simple() {
do run_in_newsched_task {
- test_timer_sleep_simple_impl();
+ unsafe {
+ let io = Local::unsafe_borrow::<IoFactoryObject>();
+ let timer = (*io).timer_init();
+ do timer.map_move |mut t| { t.sleep(1) };
+ }
}
}
* There are also a collection of helper functions to ease interacting
* with the low-level API.
*
- * As new functionality, existant in uv.h, is added to the rust stdlib,
+ * As new functionality present in uv.h is added to the rust stdlib,
* the mappings should be added in this module.
*/
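The thin-wrapper pattern this module describes (one small `unsafe` Rust function per C entry point, keeping the FFI surface in one place) can be sketched in modern Rust. This is a hedged illustration, using the C standard library's `abs` as a stand-in for the `rust_uv_*` shims, which are specific to this codebase:

```rust
// Declare the foreign function. `abs` lives in the C standard
// library, which Rust links by default on common platforms.
extern "C" {
    fn abs(input: i32) -> i32;
}

// A thin safe wrapper: forward the call and keep the `unsafe`
// block as small as possible, mirroring the helpers in this module.
pub fn c_abs(x: i32) -> i32 {
    // Note: C `abs` has undefined behavior for i32::MIN; a real
    // wrapper would guard against that input.
    unsafe { abs(x) }
}

fn main() {
    assert_eq!(c_abs(-7), 7);
    assert_eq!(c_abs(5), 5);
}
```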
}
pub unsafe fn malloc_handle(handle: uv_handle_type) -> *c_void {
+ #[fixed_stack_segment]; #[inline(never)];
+
assert!(handle != UV_UNKNOWN_HANDLE && handle != UV_HANDLE_TYPE_MAX);
let size = rust_uv_handle_size(handle as uint);
let p = malloc(size);
}
pub unsafe fn free_handle(v: *c_void) {
+ #[fixed_stack_segment]; #[inline(never)];
+
free(v)
}
pub unsafe fn malloc_req(req: uv_req_type) -> *c_void {
+ #[fixed_stack_segment]; #[inline(never)];
+
assert!(req != UV_UNKNOWN_REQ && req != UV_REQ_TYPE_MAX);
let size = rust_uv_req_size(req as uint);
let p = malloc(size);
}
pub unsafe fn free_req(v: *c_void) {
+ #[fixed_stack_segment]; #[inline(never)];
+
free(v)
}
#[test]
fn handle_sanity_check() {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
assert_eq!(UV_HANDLE_TYPE_MAX as uint, rust_uv_handle_type_max());
}
}
#[test]
+#[fixed_stack_segment]
+#[inline(never)]
fn request_sanity_check() {
unsafe {
assert_eq!(UV_REQ_TYPE_MAX as uint, rust_uv_req_type_max());
}
}
+// XXX Event loops ignore SIGPIPE by default.
pub unsafe fn loop_new() -> *c_void {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_loop_new();
}
pub unsafe fn loop_delete(loop_handle: *c_void) {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_loop_delete(loop_handle);
}
pub unsafe fn run(loop_handle: *c_void) {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_run(loop_handle);
}
pub unsafe fn close<T>(handle: *T, cb: *u8) {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_close(handle as *c_void, cb);
}
pub unsafe fn walk(loop_handle: *c_void, cb: *u8, arg: *c_void) {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_walk(loop_handle, cb, arg);
}
pub unsafe fn idle_new() -> *uv_idle_t {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_idle_new()
}
pub unsafe fn idle_delete(handle: *uv_idle_t) {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_idle_delete(handle)
}
pub unsafe fn idle_init(loop_handle: *uv_loop_t, handle: *uv_idle_t) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_idle_init(loop_handle, handle)
}
pub unsafe fn idle_start(handle: *uv_idle_t, cb: uv_idle_cb) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_idle_start(handle, cb)
}
pub unsafe fn idle_stop(handle: *uv_idle_t) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_idle_stop(handle)
}
pub unsafe fn udp_init(loop_handle: *uv_loop_t, handle: *uv_udp_t) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_udp_init(loop_handle, handle);
}
pub unsafe fn udp_bind(server: *uv_udp_t, addr: *sockaddr_in, flags: c_uint) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_udp_bind(server, addr, flags);
}
pub unsafe fn udp_bind6(server: *uv_udp_t, addr: *sockaddr_in6, flags: c_uint) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_udp_bind6(server, addr, flags);
}
pub unsafe fn udp_send<T>(req: *uv_udp_send_t, handle: *T, buf_in: &[uv_buf_t],
addr: *sockaddr_in, cb: uv_udp_send_cb) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
let buf_ptr = vec::raw::to_ptr(buf_in);
let buf_cnt = buf_in.len() as i32;
return rust_uv_udp_send(req, handle as *c_void, buf_ptr, buf_cnt, addr, cb);
pub unsafe fn udp_send6<T>(req: *uv_udp_send_t, handle: *T, buf_in: &[uv_buf_t],
addr: *sockaddr_in6, cb: uv_udp_send_cb) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
let buf_ptr = vec::raw::to_ptr(buf_in);
let buf_cnt = buf_in.len() as i32;
return rust_uv_udp_send6(req, handle as *c_void, buf_ptr, buf_cnt, addr, cb);
pub unsafe fn udp_recv_start(server: *uv_udp_t, on_alloc: uv_alloc_cb,
on_recv: uv_udp_recv_cb) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_udp_recv_start(server, on_alloc, on_recv);
}
pub unsafe fn udp_recv_stop(server: *uv_udp_t) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_udp_recv_stop(server);
}
pub unsafe fn get_udp_handle_from_send_req(send_req: *uv_udp_send_t) -> *uv_udp_t {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_get_udp_handle_from_send_req(send_req);
}
-pub unsafe fn udp_get_sockname(handle: *uv_udp_t, name: *sockaddr_storage) -> c_int {
+pub unsafe fn udp_getsockname(handle: *uv_udp_t, name: *sockaddr_storage) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_udp_getsockname(handle, name);
}
pub unsafe fn udp_set_membership(handle: *uv_udp_t, multicast_addr: *c_char,
interface_addr: *c_char, membership: uv_membership) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_udp_set_membership(handle, multicast_addr, interface_addr, membership as c_int);
}
pub unsafe fn udp_set_multicast_loop(handle: *uv_udp_t, on: c_int) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_udp_set_multicast_loop(handle, on);
}
pub unsafe fn udp_set_multicast_ttl(handle: *uv_udp_t, ttl: c_int) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_udp_set_multicast_ttl(handle, ttl);
}
pub unsafe fn udp_set_ttl(handle: *uv_udp_t, ttl: c_int) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_udp_set_ttl(handle, ttl);
}
pub unsafe fn udp_set_broadcast(handle: *uv_udp_t, on: c_int) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_udp_set_broadcast(handle, on);
}
pub unsafe fn tcp_init(loop_handle: *c_void, handle: *uv_tcp_t) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_tcp_init(loop_handle, handle);
}
pub unsafe fn tcp_connect(connect_ptr: *uv_connect_t, tcp_handle_ptr: *uv_tcp_t,
addr_ptr: *sockaddr_in, after_connect_cb: *u8) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_tcp_connect(connect_ptr, tcp_handle_ptr, after_connect_cb, addr_ptr);
}
pub unsafe fn tcp_connect6(connect_ptr: *uv_connect_t, tcp_handle_ptr: *uv_tcp_t,
addr_ptr: *sockaddr_in6, after_connect_cb: *u8) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_tcp_connect6(connect_ptr, tcp_handle_ptr, after_connect_cb, addr_ptr);
}
pub unsafe fn tcp_bind(tcp_server_ptr: *uv_tcp_t, addr_ptr: *sockaddr_in) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_tcp_bind(tcp_server_ptr, addr_ptr);
}
pub unsafe fn tcp_bind6(tcp_server_ptr: *uv_tcp_t, addr_ptr: *sockaddr_in6) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_tcp_bind6(tcp_server_ptr, addr_ptr);
}
pub unsafe fn tcp_getpeername(tcp_handle_ptr: *uv_tcp_t, name: *sockaddr_storage) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_tcp_getpeername(tcp_handle_ptr, name);
}
pub unsafe fn tcp_getsockname(handle: *uv_tcp_t, name: *sockaddr_storage) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_tcp_getsockname(handle, name);
}
pub unsafe fn tcp_nodelay(handle: *uv_tcp_t, enable: c_int) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_tcp_nodelay(handle, enable);
}
pub unsafe fn tcp_keepalive(handle: *uv_tcp_t, enable: c_int, delay: c_uint) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_tcp_keepalive(handle, enable, delay);
}
pub unsafe fn tcp_simultaneous_accepts(handle: *uv_tcp_t, enable: c_int) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_tcp_simultaneous_accepts(handle, enable);
}
pub unsafe fn listen<T>(stream: *T, backlog: c_int, cb: *u8) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_listen(stream as *c_void, backlog, cb);
}
pub unsafe fn accept(server: *c_void, client: *c_void) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_accept(server as *c_void, client as *c_void);
}
pub unsafe fn write<T>(req: *uv_write_t, stream: *T, buf_in: &[uv_buf_t], cb: *u8) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
let buf_ptr = vec::raw::to_ptr(buf_in);
let buf_cnt = buf_in.len() as i32;
return rust_uv_write(req as *c_void, stream as *c_void, buf_ptr, buf_cnt, cb);
}
pub unsafe fn read_start(stream: *uv_stream_t, on_alloc: uv_alloc_cb, on_read: *u8) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_read_start(stream as *c_void, on_alloc, on_read);
}
pub unsafe fn read_stop(stream: *uv_stream_t) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_read_stop(stream as *c_void);
}
pub unsafe fn last_error(loop_handle: *c_void) -> uv_err_t {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_last_error(loop_handle);
}
pub unsafe fn strerror(err: *uv_err_t) -> *c_char {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_strerror(err);
}
pub unsafe fn err_name(err: *uv_err_t) -> *c_char {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_err_name(err);
}
pub unsafe fn async_init(loop_handle: *c_void, async_handle: *uv_async_t, cb: *u8) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_async_init(loop_handle, async_handle, cb);
}
pub unsafe fn async_send(async_handle: *uv_async_t) {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_async_send(async_handle);
}
pub unsafe fn buf_init(input: *u8, len: uint) -> uv_buf_t {
+ #[fixed_stack_segment]; #[inline(never)];
+
let out_buf = uv_buf_t { base: ptr::null(), len: 0 as size_t };
let out_buf_ptr = ptr::to_unsafe_ptr(&out_buf);
rust_uv_buf_init(out_buf_ptr, input, len as size_t);
}
pub unsafe fn timer_init(loop_ptr: *c_void, timer_ptr: *uv_timer_t) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_timer_init(loop_ptr, timer_ptr);
}
pub unsafe fn timer_start(timer_ptr: *uv_timer_t, cb: *u8, timeout: u64,
repeat: u64) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_timer_start(timer_ptr, cb, timeout, repeat);
}
pub unsafe fn timer_stop(timer_ptr: *uv_timer_t) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_timer_stop(timer_ptr);
}
pub unsafe fn is_ip4_addr(addr: *sockaddr) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
+
match rust_uv_is_ipv4_sockaddr(addr) { 0 => false, _ => true }
}
pub unsafe fn is_ip6_addr(addr: *sockaddr) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
+
match rust_uv_is_ipv6_sockaddr(addr) { 0 => false, _ => true }
}
pub unsafe fn malloc_ip4_addr(ip: &str, port: int) -> *sockaddr_in {
+ #[fixed_stack_segment]; #[inline(never)];
do ip.with_c_str |ip_buf| {
rust_uv_ip4_addrp(ip_buf as *u8, port as libc::c_int)
}
}
pub unsafe fn malloc_ip6_addr(ip: &str, port: int) -> *sockaddr_in6 {
+ #[fixed_stack_segment]; #[inline(never)];
do ip.with_c_str |ip_buf| {
rust_uv_ip6_addrp(ip_buf as *u8, port as libc::c_int)
}
}
pub unsafe fn malloc_sockaddr_storage() -> *sockaddr_storage {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_malloc_sockaddr_storage()
}
pub unsafe fn free_sockaddr_storage(ss: *sockaddr_storage) {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_free_sockaddr_storage(ss);
}
pub unsafe fn free_ip4_addr(addr: *sockaddr_in) {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_free_ip4_addr(addr);
}
pub unsafe fn free_ip6_addr(addr: *sockaddr_in6) {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_free_ip6_addr(addr);
}
pub unsafe fn ip4_name(addr: *sockaddr_in, dst: *u8, size: size_t) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_ip4_name(addr, dst, size);
}
pub unsafe fn ip6_name(addr: *sockaddr_in6, dst: *u8, size: size_t) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_ip6_name(addr, dst, size);
}
pub unsafe fn ip4_port(addr: *sockaddr_in) -> c_uint {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_ip4_port(addr);
}
pub unsafe fn ip6_port(addr: *sockaddr_in6) -> c_uint {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_ip6_port(addr);
}
// data access helpers
pub unsafe fn get_loop_for_uv_handle<T>(handle: *T) -> *c_void {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_get_loop_for_uv_handle(handle as *c_void);
}
pub unsafe fn get_stream_handle_from_connect_req(connect: *uv_connect_t) -> *uv_stream_t {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_get_stream_handle_from_connect_req(connect);
}
pub unsafe fn get_stream_handle_from_write_req(write_req: *uv_write_t) -> *uv_stream_t {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_get_stream_handle_from_write_req(write_req);
}
pub unsafe fn get_data_for_uv_loop(loop_ptr: *c_void) -> *c_void {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_get_data_for_uv_loop(loop_ptr)
}
pub unsafe fn set_data_for_uv_loop(loop_ptr: *c_void, data: *c_void) {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_set_data_for_uv_loop(loop_ptr, data);
}
pub unsafe fn get_data_for_uv_handle<T>(handle: *T) -> *c_void {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_get_data_for_uv_handle(handle as *c_void);
}
pub unsafe fn set_data_for_uv_handle<T, U>(handle: *T, data: *U) {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_set_data_for_uv_handle(handle as *c_void, data as *c_void);
}
pub unsafe fn get_data_for_req<T>(req: *T) -> *c_void {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_get_data_for_req(req as *c_void);
}
pub unsafe fn set_data_for_req<T, U>(req: *T, data: *U) {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_set_data_for_req(req as *c_void, data as *c_void);
}
pub unsafe fn get_base_from_buf(buf: uv_buf_t) -> *u8 {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_get_base_from_buf(buf);
}
pub unsafe fn get_len_from_buf(buf: uv_buf_t) -> size_t {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_get_len_from_buf(buf);
}
pub unsafe fn get_last_err_info(uv_loop: *c_void) -> ~str {
in_fd: Option<c_int>,
/**
- * If this is None then a new pipe will be created for the new progam's
+ * If this is None then a new pipe will be created for the new program's
* output and Process.output() will provide a Reader to read from this pipe.
*
* If this is Some(file-descriptor) then the new process will write its output
out_fd: Option<c_int>,
/**
- * If this is None then a new pipe will be created for the new progam's
+ * If this is None then a new pipe will be created for the new program's
* error stream and Process.error() will provide a Reader to read from this pipe.
*
* If this is Some(file-descriptor) then the new process will write its error output
* * options - Options to configure the environment of the process,
* the working directory and the standard IO streams.
*/
- pub fn new(prog: &str, args: &[~str], options: ProcessOptions)
+ pub fn new(prog: &str, args: &[~str],
+ options: ProcessOptions)
-> Process {
+ #[fixed_stack_segment]; #[inline(never)];
+
let (in_pipe, in_fd) = match options.in_fd {
None => {
let pipe = os::pipe();
* method does nothing.
*/
pub fn close_input(&mut self) {
+ #[fixed_stack_segment]; #[inline(never)];
match self.input {
Some(-1) | None => (),
Some(fd) => {
}
fn close_outputs(&mut self) {
+ #[fixed_stack_segment]; #[inline(never)];
fclose_and_null(&mut self.output);
fclose_and_null(&mut self.error);
fn fclose_and_null(f_opt: &mut Option<*libc::FILE>) {
+ #[allow(cstack)]; // fixed_stack_segment declared on enclosing fn
match *f_opt {
Some(f) if !f.is_null() => {
unsafe {
#[cfg(windows)]
fn killpid(pid: pid_t, _force: bool) {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
libc::funcs::extra::kernel32::TerminateProcess(
cast::transmute(pid), 1);
#[cfg(unix)]
fn killpid(pid: pid_t, force: bool) {
+ #[fixed_stack_segment]; #[inline(never)];
+
let signal = if force {
libc::consts::os::posix88::SIGKILL
} else {
env: Option<~[(~str, ~str)]>,
dir: Option<&Path>,
in_fd: c_int, out_fd: c_int, err_fd: c_int) -> SpawnProcessResult {
+ #[fixed_stack_segment]; #[inline(never)];
use libc::types::os::arch::extra::{DWORD, HANDLE, STARTUPINFO};
use libc::consts::os::extra::{
env: Option<~[(~str, ~str)]>,
dir: Option<&Path>,
in_fd: c_int, out_fd: c_int, err_fd: c_int) -> SpawnProcessResult {
+ #[fixed_stack_segment]; #[inline(never)];
use libc::funcs::posix88::unistd::{fork, dup2, close, chdir, execvp};
use libc::funcs::bsd44::getdtablesize;
#[cfg(windows)]
fn free_handle(handle: *()) {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
libc::funcs::extra::kernel32::CloseHandle(cast::transmute(handle));
}
* Note that this is private to avoid race conditions on unix where if
* a user calls waitpid(some_process.get_id()) then some_process.finish()
* and some_process.destroy() and some_process.finalize() will then either
- * operate on a none-existant process or, even worse, on a newer process
+ * operate on a non-existent process or, even worse, on a newer process
* with the same id.
*/
fn waitpid(pid: pid_t) -> int {
#[cfg(windows)]
fn waitpid_os(pid: pid_t) -> int {
+ #[fixed_stack_segment]; #[inline(never)];
use libc::types::os::arch::extra::DWORD;
use libc::consts::os::extra::{
#[cfg(unix)]
fn waitpid_os(pid: pid_t) -> int {
+ #[fixed_stack_segment]; #[inline(never)];
use libc::funcs::posix01::wait::*;
use path::Path;
use run;
use str;
+ use unstable::running_on_valgrind;
#[test]
#[cfg(windows)]
}
fn readclose(fd: c_int) -> ~str {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let file = os::fdopen(fd);
let reader = io::FILE_reader(file, false);
assert!(output.contains("RUN_TEST_NEW_ENV=123"));
}
-
- fn running_on_valgrind() -> bool {
- unsafe { rust_running_on_valgrind() != 0 }
- }
-
- extern {
- fn rust_running_on_valgrind() -> uintptr_t;
- }
}
use rt::local::Local;
use rt::rtio::EventLoop;
use task;
+use unstable::finally::Finally;
use vec::{OwnedVector, MutableVector};
/// Trait for message-passing primitives that can be select()ed on.
let p = Cell::new(p);
let c = Cell::new(c);
- let sched = Local::take::<Scheduler>();
- do sched.deschedule_running_task_and_then |sched, task| {
- let task_handles = task.make_selectable(ports.len());
-
- for (index, (port, task_handle)) in
- ports.mut_iter().zip(task_handles.move_iter()).enumerate() {
- // If one of the ports has data by now, it will wake the handle.
- if port.block_on(sched, task_handle) {
- ready_index = index;
- break;
+ do (|| {
+ let c = Cell::new(c.take());
+ let sched = Local::take::<Scheduler>();
+ do sched.deschedule_running_task_and_then |sched, task| {
+ let task_handles = task.make_selectable(ports.len());
+
+ for (index, (port, task_handle)) in
+ ports.mut_iter().zip(task_handles.move_iter()).enumerate() {
+ // If one of the ports has data by now, it will wake the handle.
+ if port.block_on(sched, task_handle) {
+ ready_index = index;
+ break;
+ }
}
- }
- let c = Cell::new(c.take());
- do sched.event_loop.callback { c.take().send_deferred(()) }
+ let c = Cell::new(c.take());
+ do sched.event_loop.callback { c.take().send_deferred(()) }
+ }
+ }).finally {
+ let p = Cell::new(p.take());
+ // Unkillable is necessary not because getting killed is dangerous here,
+ // but to force the recv not to use the same kill-flag that we used for
+ // selecting. Otherwise a user-sender could spuriously wake us up here.
+ do task::unkillable { p.take().recv(); }
}
- // Unkillable is necessary not because getting killed is dangerous here,
- // but to force the recv not to use the same kill-flag that we used for
- // selecting. Otherwise a user-sender could spuriously wakeup us here.
- do task::unkillable { p.take().recv(); }
-
// Task resumes. Now unblock ourselves from all the ports we blocked on.
// If the success index wasn't reset, 'take' will just take all of them.
// Iterate in reverse so the 'earliest' index that's ready gets returned.
let (c2, p3, c4) = x.take();
p3.recv(); // handshake parent
c4.send(()); // normal receive
- task::yield();
+ task::deschedule();
c2.send(()); // select receive
}
if send_on_chans.contains(&i) {
let c = Cell::new(c);
do spawntask_random {
- task::yield();
+ task::deschedule();
c.take().send(());
}
}
pub use fmt;
pub use to_bytes;
}
+
use cast;
use char;
use char::Char;
-use clone::Clone;
+use clone::{Clone, DeepClone};
use container::{Container, Mutable};
use iter::Times;
use iterator::{Iterator, FromIterator, Extendable};
}
}
-/// An iterator over the start and end indicies of the matches of a
+/// An iterator over the start and end indices of the matches of a
/// substring within a larger string
#[deriving(Clone)]
pub struct MatchesIndexIterator<'self> {
/// Sets the length of a string
///
/// This will explicitly set the size of the string, without actually
- /// modifing its buffers, so it is up to the caller to ensure that
+ /// modifying its buffers, so it is up to the caller to ensure that
/// the string is actually the specified size.
#[inline]
pub unsafe fn set_len(s: &mut ~str, new_len: uint) {
}
}
+impl DeepClone for ~str {
+ #[inline]
+ fn deep_clone(&self) -> ~str {
+ self.to_owned()
+ }
+}
+
impl Clone for @str {
#[inline]
fn clone(&self) -> @str {
}
}
+impl DeepClone for @str {
+ #[inline]
+ fn deep_clone(&self) -> @str {
+ *self
+ }
+}
+
impl FromIterator<char> for ~str {
#[inline]
fn from_iterator<T: Iterator<char>>(iterator: &mut T) -> ~str {
#[test]
fn test_map() {
+ #[fixed_stack_segment]; #[inline(never)];
assert_eq!(~"", "".map_chars(|c| unsafe {libc::toupper(c as c_char)} as char));
assert_eq!(~"YMCA", "ymca".map_chars(|c| unsafe {libc::toupper(c as c_char)} as char));
}
#[cfg(test)]
mod tests {
use super::*;
- use to_bytes::ToBytes;
use str::from_char;
macro_rules! v2ascii (
#[test]
fn test_ascii_to_bytes() {
- assert_eq!(v2ascii!(~[40, 32, 59]).to_bytes(false), ~[40u8, 32u8, 59u8]);
assert_eq!(v2ascii!(~[40, 32, 59]).into_bytes(), ~[40u8, 32u8, 59u8]);
}
// above.
let data = match util::replace(entry, None) {
Some((_, data, _)) => data,
- None => libc::abort(),
+ None => abort(),
};
// Move `data` into transmute to get out the memory that it
}
}
}
- _ => libc::abort()
+ _ => abort()
}
// n.b. 'data' and 'loans' are both invalid pointers at the point
if return_loan {
match map[i] {
Some((_, _, ref mut loan)) => { *loan = NoLoan; }
- None => { libc::abort(); }
+ None => { abort(); }
}
}
return ret;
}
}
+fn abort() -> ! {
+ #[fixed_stack_segment]; #[inline(never)];
+
+ unsafe { libc::abort() }
+}
+
pub unsafe fn local_set<T: 'static>(handle: Handle,
key: local_data::Key<T>,
data: T) {
spawn::spawn_raw(opts, f);
}
- /// Runs a task, while transfering ownership of one argument to the child.
+ /// Runs a task, while transferring ownership of one argument to the child.
pub fn spawn_with<A:Send>(&mut self, arg: A, f: ~fn(v: A)) {
let arg = Cell::new(arg);
do self.spawn {
pub fn spawn_with<A:Send>(arg: A, f: ~fn(v: A)) {
/*!
- * Runs a task, while transfering ownership of one argument to the
+ * Runs a task, while transferring ownership of one argument to the
* child.
*
- * This is useful for transfering ownership of noncopyables to
+ * This is useful for transferring ownership of noncopyables to
* another task.
*
* This function is equivalent to `task().spawn_with(arg, f)`.
}
}
-pub fn yield() {
+pub fn deschedule() {
//! Yield control to the task scheduler
use rt::local::Local;
*
* ~~~
* do task::unkillable {
- * // detach / yield / destroy must all be called together
+ * // detach / deschedule / destroy must all be called together
* rustrt::rust_port_detach(po);
* // This must not result in the current task being killed
- * task::yield();
+ * task::deschedule();
* rustrt::rust_port_destroy(po);
* }
* ~~~
let ch = ch.clone();
do spawn_unlinked {
// Give middle task a chance to fail-but-not-kill-us.
- do 16.times { task::yield(); }
+ do 16.times { task::deschedule(); }
ch.send(()); // If killed first, grandparent hangs.
}
fail!(); // Shouldn't kill either (grand)parent or (grand)child.
do run_in_newsched_task {
do spawn_supervised { fail!(); }
// Give child a chance to fail-but-not-kill-us.
- do 16.times { task::yield(); }
+ do 16.times { task::deschedule(); }
}
}
#[ignore(reason = "linked failure")]
do spawn_supervised {
do spawn_supervised { block_forever(); }
}
- do 16.times { task::yield(); }
+ do 16.times { task::deschedule(); }
fail!();
};
assert!(result.is_err());
do spawn_supervised {
do spawn { block_forever(); } // linked
}
- do 16.times { task::yield(); }
+ do 16.times { task::deschedule(); }
fail!();
};
assert!(result.is_err());
do spawn { // linked
do spawn_supervised { block_forever(); }
}
- do 16.times { task::yield(); }
+ do 16.times { task::deschedule(); }
fail!();
};
assert!(result.is_err());
do spawn { // linked
do spawn { block_forever(); } // linked
}
- do 16.times { task::yield(); }
+ do 16.times { task::deschedule(); }
fail!();
};
assert!(result.is_err());
mod testrt {
use libc;
- #[nolink]
- extern {
- pub fn rust_dbg_lock_create() -> *libc::c_void;
- pub fn rust_dbg_lock_destroy(lock: *libc::c_void);
- pub fn rust_dbg_lock_lock(lock: *libc::c_void);
- pub fn rust_dbg_lock_unlock(lock: *libc::c_void);
- pub fn rust_dbg_lock_wait(lock: *libc::c_void);
- pub fn rust_dbg_lock_signal(lock: *libc::c_void);
- }
+ externfn!(fn rust_dbg_lock_create() -> *libc::c_void)
+ externfn!(fn rust_dbg_lock_destroy(lock: *libc::c_void))
+ externfn!(fn rust_dbg_lock_lock(lock: *libc::c_void))
+ externfn!(fn rust_dbg_lock_unlock(lock: *libc::c_void))
+ externfn!(fn rust_dbg_lock_wait(lock: *libc::c_void))
+ externfn!(fn rust_dbg_lock_signal(lock: *libc::c_void))
}
#[test]
// We want to do this after failing
do spawn_unlinked {
- do 10.times { yield() }
+ do 10.times { deschedule() }
ch.send(());
}
do spawn {
- yield();
+ deschedule();
// We want to fail after the unkillable task
// blocks on recv
fail!();
// We want to do this after failing
do spawn_unlinked || {
- do 10.times { yield() }
+ do 10.times { deschedule() }
ch.send(());
}
do spawn {
- yield();
+ deschedule();
// We want to fail after the unkillable task
// blocks on recv
fail!();
t.unlinked();
t.watched();
do t.spawn {
- task::yield();
+ task::deschedule();
fail!();
}
}
t.unwatched();
do t.spawn {
p3.recv();
- task::yield();
+ task::deschedule();
fail!();
}
c3.send(());
*/
use cast;
+use container::Container;
use io;
use io::Writer;
use iterator::Iterator;
use option::{None, Option, Some};
-use str::StrSlice;
-use vec::ImmutableVector;
+use str::{Str, StrSlice};
+use vec::{Vector, ImmutableVector};
pub type Cb<'self> = &'self fn(buf: &[u8]) -> bool;
-/**
- * A trait to implement in order to make a type hashable;
- * This works in combination with the trait `Hash::Hash`, and
- * may in the future be merged with that trait or otherwise
- * modified when default methods and trait inheritence are
- * completed.
- */
+///
+/// A trait to implement in order to make a type hashable;
+/// This works in combination with the trait `std::hash::Hash`, and
+/// may in the future be merged with that trait or otherwise
+/// modified when default methods and trait inheritance are
+/// completed.
+///
+/// IterBytes should be implemented so that the extent of the
+/// produced byte stream can be discovered, given the original
+/// type.
+/// For example, the IterBytes implementation for vectors emits
+/// its length first, and enums should emit their discriminant.
+///
pub trait IterBytes {
- /**
- * Call the provided callback `f` one or more times with
- * byte-slices that should be used when computing a hash
- * value or otherwise "flattening" the structure into
- * a sequence of bytes. The `lsb0` parameter conveys
- * whether the caller is asking for little-endian bytes
- * (`true`) or big-endian (`false`); this should only be
- * relevant in implementations that represent a single
- * multi-byte datum such as a 32 bit integer or 64 bit
- * floating-point value. It can be safely ignored for
- * larger structured types as they are usually processed
- * left-to-right in declaration order, regardless of
- * underlying memory endianness.
- */
+ /// Call the provided callback `f` one or more times with
+ /// byte-slices that should be used when computing a hash
+ /// value or otherwise "flattening" the structure into
+ /// a sequence of bytes. The `lsb0` parameter conveys
+ /// whether the caller is asking for little-endian bytes
+ /// (`true`) or big-endian (`false`); this should only be
+ /// relevant in implementations that represent a single
+ /// multi-byte datum such as a 32 bit integer or 64 bit
+ /// floating-point value. It can be safely ignored for
+ /// larger structured types as they are usually processed
+ /// left-to-right in declaration order, regardless of
+ /// underlying memory endianness.
+ ///
fn iter_bytes(&self, lsb0: bool, f: Cb) -> bool;
}
impl<'self,A:IterBytes> IterBytes for &'self [A] {
#[inline]
fn iter_bytes(&self, lsb0: bool, f: Cb) -> bool {
+ self.len().iter_bytes(lsb0, |b| f(b)) &&
self.iter().advance(|elt| elt.iter_bytes(lsb0, |b| f(b)))
}
}
-impl<A:IterBytes,B:IterBytes> IterBytes for (A,B) {
- #[inline]
- fn iter_bytes(&self, lsb0: bool, f: Cb) -> bool {
- match *self {
- (ref a, ref b) => { a.iter_bytes(lsb0, |b| f(b)) &&
- b.iter_bytes(lsb0, |b| f(b)) }
- }
- }
-}
-
-impl<A:IterBytes,B:IterBytes,C:IterBytes> IterBytes for (A,B,C) {
- #[inline]
- fn iter_bytes(&self, lsb0: bool, f: Cb) -> bool {
- match *self {
- (ref a, ref b, ref c) => {
- a.iter_bytes(lsb0, |b| f(b)) &&
- b.iter_bytes(lsb0, |b| f(b)) &&
- c.iter_bytes(lsb0, |b| f(b))
- }
+impl<A: IterBytes> IterBytes for (A, ) {
+ fn iter_bytes(&self, lsb0: bool, f: Cb) -> bool {
+ match *self {
+ (ref a, ) => a.iter_bytes(lsb0, |b| f(b))
+ }
}
- }
}
-// Move this to vec, probably.
-fn borrow<'x,A>(a: &'x [A]) -> &'x [A] {
- a
-}
+macro_rules! iter_bytes_tuple(
+ ($($A:ident),+) => (
+ impl<$($A: IterBytes),+> IterBytes for ($($A),+) {
+ fn iter_bytes(&self, lsb0: bool, f: Cb) -> bool {
+ match *self {
+ ($(ref $A),+) => {
+ $(
+ $A .iter_bytes(lsb0, |b| f(b))
+ )&&+
+ }
+ }
+ }
+ }
+ )
+)
+
+iter_bytes_tuple!(A, B)
+iter_bytes_tuple!(A, B, C)
+iter_bytes_tuple!(A, B, C, D)
+iter_bytes_tuple!(A, B, C, D, E)
+iter_bytes_tuple!(A, B, C, D, E, F)
+iter_bytes_tuple!(A, B, C, D, E, F, G)
+iter_bytes_tuple!(A, B, C, D, E, F, G, H)
impl<A:IterBytes> IterBytes for ~[A] {
#[inline]
fn iter_bytes(&self, lsb0: bool, f: Cb) -> bool {
- borrow(*self).iter_bytes(lsb0, f)
+ self.as_slice().iter_bytes(lsb0, f)
}
}
impl<A:IterBytes> IterBytes for @[A] {
#[inline]
fn iter_bytes(&self, lsb0: bool, f: Cb) -> bool {
- borrow(*self).iter_bytes(lsb0, f)
+ self.as_slice().iter_bytes(lsb0, f)
}
}
impl<'self> IterBytes for &'self str {
#[inline]
fn iter_bytes(&self, _lsb0: bool, f: Cb) -> bool {
- f(self.as_bytes())
+ // Terminate the string with a byte that does not appear in UTF-8
+ f(self.as_bytes()) && f([0xFF])
}
}
impl IterBytes for ~str {
#[inline]
- fn iter_bytes(&self, _lsb0: bool, f: Cb) -> bool {
- // this should possibly include the null terminator, but that
- // breaks .find_equiv on hashmaps.
- f(self.as_bytes())
+ fn iter_bytes(&self, lsb0: bool, f: Cb) -> bool {
+ self.as_slice().iter_bytes(lsb0, f)
}
}
impl IterBytes for @str {
#[inline]
- fn iter_bytes(&self, _lsb0: bool, f: Cb) -> bool {
- // this should possibly include the null terminator, but that
- // breaks .find_equiv on hashmaps.
- f(self.as_bytes())
+ fn iter_bytes(&self, lsb0: bool, f: Cb) -> bool {
+ self.as_slice().iter_bytes(lsb0, f)
}
}
/// Trait for converting a type to a string, consuming it in the process.
pub trait ToStrConsume {
- /// Cosume and convert to a string.
+ /// Consume and convert to a string.
fn into_str(self) -> ~str;
}
}
/**
- * A signed atomic integer type, supporting basic atomic aritmetic operations
+ * A signed atomic integer type, supporting basic atomic arithmetic operations
*/
pub struct AtomicInt {
priv v: int
}
/**
- * An unsigned atomic integer type, supporting basic atomic aritmetic operations
+ * An unsigned atomic integer type, supporting basic atomic arithmetic operations
*/
pub struct AtomicUint {
priv v: uint
* A fence 'A' which has `Release` ordering semantics, synchronizes with a
 * fence 'B' with (at least) `Acquire` semantics, if and only if there exist
* atomic operations X and Y, both operating on some atomic object 'M' such
- * that A is sequenced before X, Y is synchronized before B and Y obsevers
+ * that A is sequenced before X, Y is synchronized before B and Y observes
* the change to M. This provides a happens-before dependence between A and B.
*
* Atomic operations with `Release` or `Acquire` semantics can also synchronize
use result::*;
pub unsafe fn open_external(filename: &path::Path) -> *libc::c_void {
+ #[fixed_stack_segment]; #[inline(never)];
do filename.with_c_str |raw_name| {
dlopen(raw_name, Lazy as libc::c_int)
}
}
pub unsafe fn open_internal() -> *libc::c_void {
+ #[fixed_stack_segment]; #[inline(never)];
+
dlopen(ptr::null(), Lazy as libc::c_int)
}
pub fn check_for_errors_in<T>(f: &fn()->T) -> Result<T, ~str> {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
do atomically {
let _old_error = dlerror();
}
pub unsafe fn symbol(handle: *libc::c_void, symbol: *libc::c_char) -> *libc::c_void {
+ #[fixed_stack_segment]; #[inline(never)];
+
dlsym(handle, symbol)
}
pub unsafe fn close(handle: *libc::c_void) {
+ #[fixed_stack_segment]; #[inline(never)];
+
dlclose(handle); ()
}
use result::*;
pub unsafe fn open_external(filename: &path::Path) -> *libc::c_void {
+ #[fixed_stack_segment]; #[inline(never)];
do os::win32::as_utf16_p(filename.to_str()) |raw_name| {
LoadLibraryW(raw_name)
}
}
pub unsafe fn open_internal() -> *libc::c_void {
+ #[fixed_stack_segment]; #[inline(never)];
let handle = ptr::null();
GetModuleHandleExW(0 as libc::DWORD, ptr::null(), &handle as **libc::c_void);
handle
}
pub fn check_for_errors_in<T>(f: &fn()->T) -> Result<T, ~str> {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
do atomically {
SetLastError(0);
}
}
pub unsafe fn symbol(handle: *libc::c_void, symbol: *libc::c_char) -> *libc::c_void {
+ #[fixed_stack_segment]; #[inline(never)];
GetProcAddress(handle, symbol)
}
pub unsafe fn close(handle: *libc::c_void) {
+ #[fixed_stack_segment]; #[inline(never)];
FreeLibrary(handle); ()
}
use str;
use sys;
use num;
- use uint;
use vec;
use option::{Some, None, Option};
return if prec == 0u && num == 0u {
~""
} else {
- let s = uint::to_str_radix(num, radix);
+ let s = num.to_str_radix(radix);
let len = s.char_len();
if len < prec {
let diff = prec - len;
do || {
...
}.finally {
- alway_run_this();
+ always_run_this();
}
~~~
*/
A quick refresher on memory ordering:
-* Acquire - a barrier for aquiring a lock. Subsequent reads and writes
+* Acquire - a barrier for acquiring a lock. Subsequent reads and writes
take place after the barrier.
* Release - a barrier for releasing a lock. Preceding reads and writes
take place before the barrier.
use comm;
use prelude::*;
use task;
+use libc::uintptr_t;
pub mod dynamic_lib;
/// can lead to deadlock. Calling change_dir_locked recursively will
/// also deadlock.
pub fn change_dir_locked(p: &Path, action: &fn()) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
+
use os;
use os::change_dir;
use unstable::sync::atomically;
fn rust_drop_change_dir_lock();
}
}
+
+
+/// Dynamically inquire about whether we're running under Valgrind.
+/// You should usually not use this unless your test definitely
+/// can't run correctly un-altered. Valgrind is there to help
+/// you notice weirdness in normal, un-doctored code paths!
+pub fn running_on_valgrind() -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
+ unsafe { rust_running_on_valgrind() != 0 }
+}
+
+extern {
+ fn rust_running_on_valgrind() -> uintptr_t;
+}
/**
* Enables a runtime assertion that no operation in the argument closure shall
- * use scheduler operations (yield, recv, spawn, etc). This is for use with
+ * use scheduler operations (deschedule, recv, spawn, etc). This is for use with
* pthread mutexes, which may block the entire scheduler thread, rather than
- * just one task, and is hence prone to deadlocks if mixed with yielding.
+ * just one task, and is hence prone to deadlocks if mixed with descheduling.
*
* NOTE: THIS DOES NOT PROVIDE LOCKING, or any sort of critical-section
* synchronization whatsoever. It only makes sense to use for CPU-local issues.
if in_green_task_context() {
let t = Local::unsafe_borrow::<Task>();
do (|| {
- (*t).death.inhibit_yield();
+ (*t).death.inhibit_deschedule();
f()
}).finally {
- (*t).death.allow_yield();
+ (*t).death.allow_deschedule();
}
} else {
f()
}
}
- #[inline]
pub unsafe fn lock<T>(&self, f: &fn() -> T) -> T {
do atomically {
rust_lock_little_lock(self.l);
* This uses a pthread mutex, not one that's aware of the userspace scheduler.
* The user of an Exclusive must be careful not to invoke any functions that may
* reschedule the task while holding the lock, or deadlock may result. If you
- * need to block or yield while accessing shared state, use extra::sync::RWArc.
+ * need to block or deschedule while accessing shared state, use extra::sync::RWArc.
*/
pub struct Exclusive<T> {
x: UnsafeAtomicRcBox<ExData<T>>
// Exactly like std::arc::MutexArc::access(), but with the LittleLock
// instead of a proper mutex. Same reason for being unsafe.
//
- // Currently, scheduling operations (i.e., yielding, receiving on a pipe,
+ // Currently, scheduling operations (i.e., descheduling, receiving on a pipe,
// accessing the provided condition variable) are prohibited while inside
// the Exclusive. Supporting that is a work in progress.
#[inline]
}
}
-extern {
- fn rust_create_little_lock() -> rust_little_lock;
- fn rust_destroy_little_lock(lock: rust_little_lock);
- fn rust_lock_little_lock(lock: rust_little_lock);
- fn rust_unlock_little_lock(lock: rust_little_lock);
+#[cfg(stage0)]
+mod macro_hack {
+#[macro_escape];
+macro_rules! externfn(
+ (fn $name:ident () $(-> $ret_ty:ty),*) => (
+ extern {
+ fn $name() $(-> $ret_ty),*;
+ }
+ );
+ (fn $name:ident ($($arg_name:ident : $arg_ty:ty),*) $(-> $ret_ty:ty),*) => (
+ extern {
+ fn $name($($arg_name : $arg_ty),*) $(-> $ret_ty),*;
+ }
+ )
+)
}
+externfn!(fn rust_create_little_lock() -> rust_little_lock)
+externfn!(fn rust_destroy_little_lock(lock: rust_little_lock))
+externfn!(fn rust_lock_little_lock(lock: rust_little_lock))
+externfn!(fn rust_unlock_little_lock(lock: rust_little_lock))
+
#[cfg(test)]
mod tests {
use cell::Cell;
fn test_atomically() {
// NB. The whole runtime will abort on an 'atomic-sleep' violation,
// so we can't really test for the converse behaviour.
- unsafe { do atomically { } } task::yield(); // oughtn't fail
+ unsafe { do atomically { } } task::deschedule(); // oughtn't fail
}
#[test]
c.send(());
}
p.recv();
- task::yield(); // Try to make the unwrapper get blocked first.
+ task::deschedule(); // Try to make the unwrapper get blocked first.
let left_x = x.try_unwrap();
assert!(left_x.is_left());
util::ignore(left_x);
do task::spawn {
let x2 = x2.take();
unsafe { do x2.with |_hello| { } }
- task::yield();
+ task::deschedule();
}
assert!(x.unwrap() == ~~"hello");
let x = Exclusive::new(~~"hello");
let x2 = x.clone();
do task::spawn {
- do 10.times { task::yield(); } // try to let the unwrapper go
+ do 10.times { task::deschedule(); } // try to let the unwrapper go
fail!(); // punt it awake from its deadlock
}
let _z = x.unwrap();
/// elements at a time).
///
/// When the vector len is not evenly divided by the chunk size,
-/// the last slice of the iteration will be the remainer.
+/// the last slice of the iteration will be the remainder.
#[deriving(Clone)]
pub struct ChunkIter<'self, T> {
priv v: &'self [T],
fn reserve(&mut self, n: uint);
fn reserve_at_least(&mut self, n: uint);
fn capacity(&self) -> uint;
+ fn shrink_to_fit(&mut self);
fn push(&mut self, t: T);
unsafe fn push_fast(&mut self, t: T);
*
* * n - The number of elements to reserve space for
*/
+ #[inline]
fn reserve_at_least(&mut self, n: uint) {
self.reserve(uint::next_power_of_two(n));
}
}
}
+ /// Shrink the capacity of the vector to match the length
+ fn shrink_to_fit(&mut self) {
+ unsafe {
+ let ptr: *mut *mut Vec<()> = cast::transmute(self);
+ let alloc = (**ptr).fill;
+ let size = alloc + sys::size_of::<Vec<()>>();
+ *ptr = realloc_raw(*ptr as *mut c_void, size) as *mut Vec<()>;
+ (**ptr).alloc = alloc;
+ }
+ }
+
/// Append an element to a vector
#[inline]
fn push(&mut self, t: T) {
* Sets the length of a vector
*
* This will explicitly set the size of the vector, without actually
- * modifing its buffers, so it is up to the caller to ensure that
+ * modifying its buffers, so it is up to the caller to ensure that
* the vector is actually the specified size.
*/
#[inline]
use sys;
use vec::*;
use cmp::*;
+ use prelude::*;
fn square(n: uint) -> uint { n * n }
}
assert!(cnt == 3);
}
+
+ #[test]
+ fn test_shrink_to_fit() {
+ let mut xs = ~[0, 1, 2, 3];
+ for i in range(4, 100) {
+ xs.push(i)
+ }
+ assert_eq!(xs.capacity(), 128);
+ xs.shrink_to_fit();
+ assert_eq!(xs.capacity(), 100);
+ assert_eq!(xs, range(0, 100).to_owned_vec());
+ }
}
#[cfg(test)]
}
// a "Path" is essentially Rust's notion of a name;
-// for instance: core::cmp::Eq . It's represented
+// for instance: std::cmp::Eq. It's represented
// as a sequence of identifiers, along with a bunch
// of supporting information.
#[deriving(Clone, Eq, Encodable, Decodable, IterBytes)]
/* hold off on tests ... they appear in a later merge.
#[cfg(test)]
mod test {
- use core::option::{None, Option, Some};
- use core::uint;
+ use std::option::{None, Option, Some};
+ use std::uint;
use extra;
use codemap::*;
use super::*;
self.expr(span, ast::expr_fn_block(fn_decl, blk))
}
+ #[cfg(stage0)]
fn lambda0(&self, _span: span, blk: ast::Block) -> @ast::expr {
let ext_cx = *self;
let blk_e = self.expr(blk.span, ast::expr_block(blk.clone()));
quote_expr!(|| $blk_e )
}
+ #[cfg(not(stage0))]
+ fn lambda0(&self, _span: span, blk: ast::Block) -> @ast::expr {
+ let blk_e = self.expr(blk.span, ast::expr_block(blk.clone()));
+ quote_expr!(*self, || $blk_e )
+ }
+ #[cfg(stage0)]
fn lambda1(&self, _span: span, blk: ast::Block, ident: ast::ident) -> @ast::expr {
let ext_cx = *self;
let blk_e = self.expr(blk.span, ast::expr_block(blk.clone()));
quote_expr!(|$ident| $blk_e )
}
+ #[cfg(not(stage0))]
+ fn lambda1(&self, _span: span, blk: ast::Block, ident: ast::ident) -> @ast::expr {
+ let blk_e = self.expr(blk.span, ast::expr_block(blk.clone()));
+ quote_expr!(*self, |$ident| $blk_e )
+ }
fn lambda_expr(&self, span: span, ids: ~[ast::ident], expr: @ast::expr) -> @ast::expr {
self.lambda(span, ids, self.block_expr(expr))
use std::os;
+#[cfg(stage0)]
pub fn expand_option_env(ext_cx: @ExtCtxt, sp: span, tts: &[ast::token_tree])
-> base::MacResult {
let var = get_single_str_from_tts(ext_cx, sp, tts, "option_env!");
};
MRExpr(e)
}
+#[cfg(not(stage0))]
+pub fn expand_option_env(cx: @ExtCtxt, sp: span, tts: &[ast::token_tree])
+ -> base::MacResult {
+ let var = get_single_str_from_tts(cx, sp, tts, "option_env!");
+
+ let e = match os::getenv(var) {
+ None => quote_expr!(cx, ::std::option::None::<&'static str>),
+ Some(s) => quote_expr!(cx, ::std::option::Some($s))
+ };
+ MRExpr(e)
+}
-pub fn expand_env(ext_cx: @ExtCtxt, sp: span, tts: &[ast::token_tree])
+pub fn expand_env(cx: @ExtCtxt, sp: span, tts: &[ast::token_tree])
-> base::MacResult {
- let exprs = get_exprs_from_tts(ext_cx, sp, tts);
+ let exprs = get_exprs_from_tts(cx, sp, tts);
if exprs.len() == 0 {
- ext_cx.span_fatal(sp, "env! takes 1 or 2 arguments");
+ cx.span_fatal(sp, "env! takes 1 or 2 arguments");
}
- let var = expr_to_str(ext_cx, exprs[0], "expected string literal");
+ let var = expr_to_str(cx, exprs[0], "expected string literal");
let msg = match exprs.len() {
1 => fmt!("Environment variable %s not defined", var).to_managed(),
- 2 => expr_to_str(ext_cx, exprs[1], "expected string literal"),
- _ => ext_cx.span_fatal(sp, "env! takes 1 or 2 arguments")
+ 2 => expr_to_str(cx, exprs[1], "expected string literal"),
+ _ => cx.span_fatal(sp, "env! takes 1 or 2 arguments")
};
let e = match os::getenv(var) {
- None => ext_cx.span_fatal(sp, msg),
- Some(s) => ext_cx.expr_str(sp, s.to_managed())
+ None => cx.span_fatal(sp, msg),
+ Some(s) => cx.expr_str(sp, s.to_managed())
};
MRExpr(e)
}
pub static $name: ::std::local_data::Key<$ty> = &::std::local_data::Key;
)
)
+
+ // externfn! declares a wrapper for an external function.
+ // It is intended to be used like:
+ //
+ // externfn!(#[nolink]
+ // #[abi = \"cdecl\"]
+ // fn memcmp(cx: *u8, ct: *u8, n: u32) -> u32)
+ //
+ // Due to limitations in the macro parser, this pattern must be
+ // implemented with 4 distinct patterns (with attrs / without
+ // attrs CROSS with args / without ARGS).
+ //
+ // Also, this macro grammar allows for any number of return types
+ // because I couldn't figure out the syntax to specify at most one.
+ macro_rules! externfn(
+ (fn $name:ident () $(-> $ret_ty:ty),*) => (
+ pub unsafe fn $name() $(-> $ret_ty),* {
+ // Note: to avoid obscure bug in macros, keep these
+ // attributes *internal* to the fn
+ #[fixed_stack_segment];
+ #[inline(never)];
+ #[allow(missing_doc)];
+
+ return $name();
+
+ extern {
+ fn $name() $(-> $ret_ty),*;
+ }
+ }
+ );
+ (fn $name:ident ($($arg_name:ident : $arg_ty:ty),*) $(-> $ret_ty:ty),*) => (
+ pub unsafe fn $name($($arg_name : $arg_ty),*) $(-> $ret_ty),* {
+ // Note: to avoid obscure bug in macros, keep these
+ // attributes *internal* to the fn
+ #[fixed_stack_segment];
+ #[inline(never)];
+ #[allow(missing_doc)];
+
+ return $name($($arg_name),*);
+
+ extern {
+ fn $name($($arg_name : $arg_ty),*) $(-> $ret_ty),*;
+ }
+ }
+ );
+ ($($attrs:attr)* fn $name:ident () $(-> $ret_ty:ty),*) => (
+ pub unsafe fn $name() $(-> $ret_ty),* {
+ // Note: to avoid obscure bug in macros, keep these
+ // attributes *internal* to the fn
+ #[fixed_stack_segment];
+ #[inline(never)];
+ #[allow(missing_doc)];
+
+ return $name();
+
+ $($attrs)*
+ extern {
+ fn $name() $(-> $ret_ty),*;
+ }
+ }
+ );
+ ($($attrs:attr)* fn $name:ident ($($arg_name:ident : $arg_ty:ty),*) $(-> $ret_ty:ty),*) => (
+ pub unsafe fn $name($($arg_name : $arg_ty),*) $(-> $ret_ty),* {
+ // Note: to avoid obscure bug in macros, keep these
+ // attributes *internal* to the fn
+ #[fixed_stack_segment];
+ #[inline(never)];
+ #[allow(missing_doc)];
+
+ return $name($($arg_name),*);
+
+ $($attrs)*
+ extern {
+ fn $name($($arg_name : $arg_ty),*) $(-> $ret_ty),*;
+ }
+ }
+ )
+ )
+
}";
}
fn format_arg(&self, sp: span, arg: Either<uint, @str>,
ident: ast::ident) -> @ast::expr {
- let mut ty = match arg {
+ let ty = match arg {
Left(i) => self.arg_types[i].unwrap(),
Right(s) => *self.name_types.get(&s)
};
- // Default types to '?' if nothing else is specified.
- if ty == Unknown {
- ty = Known(@"?");
- }
let argptr = self.ecx.expr_addr_of(sp, self.ecx.expr_ident(sp, ident));
- match ty {
+ let fmt_trait = match ty {
+ Unknown => "Default",
Known(tyname) => {
- let fmt_trait = match tyname.as_slice() {
+ match tyname.as_slice() {
"?" => "Poly",
"b" => "Bool",
"c" => "Char",
`%s`", tyname));
"Dummy"
}
- };
- let format_fn = self.ecx.path_global(sp, ~[
- self.ecx.ident_of("std"),
- self.ecx.ident_of("fmt"),
- self.ecx.ident_of(fmt_trait),
- self.ecx.ident_of("fmt"),
- ]);
- self.ecx.expr_call_global(sp, ~[
- self.ecx.ident_of("std"),
- self.ecx.ident_of("fmt"),
- self.ecx.ident_of("argument"),
- ], ~[self.ecx.expr_path(format_fn), argptr])
+ }
}
String => {
- self.ecx.expr_call_global(sp, ~[
+ return self.ecx.expr_call_global(sp, ~[
self.ecx.ident_of("std"),
self.ecx.ident_of("fmt"),
self.ecx.ident_of("argumentstr"),
], ~[argptr])
}
Unsigned => {
- self.ecx.expr_call_global(sp, ~[
+ return self.ecx.expr_call_global(sp, ~[
self.ecx.ident_of("std"),
self.ecx.ident_of("fmt"),
self.ecx.ident_of("argumentuint"),
], ~[argptr])
}
- Unknown => { fail!() }
- }
+ };
+
+ let format_fn = self.ecx.path_global(sp, ~[
+ self.ecx.ident_of("std"),
+ self.ecx.ident_of("fmt"),
+ self.ecx.ident_of(fmt_trait),
+ self.ecx.ident_of("fmt"),
+ ]);
+ self.ecx.expr_call_global(sp, ~[
+ self.ecx.ident_of("std"),
+ self.ecx.ident_of("fmt"),
+ self.ecx.ident_of("argument"),
+ ], ~[self.ecx.expr_path(format_fn), argptr])
}
}
// Alas ... we write these out instead. All redundant.
- impl ToTokens for ast::ident {
- fn to_tokens(&self, cx: @ExtCtxt) -> ~[token_tree] {
- cx.parse_tts(self.to_source())
- }
- }
-
- impl ToTokens for @ast::item {
- fn to_tokens(&self, cx: @ExtCtxt) -> ~[token_tree] {
- cx.parse_tts(self.to_source())
- }
- }
-
- impl<'self> ToTokens for &'self [@ast::item] {
- fn to_tokens(&self, cx: @ExtCtxt) -> ~[token_tree] {
- cx.parse_tts(self.to_source())
- }
- }
-
- impl ToTokens for ast::Ty {
- fn to_tokens(&self, cx: @ExtCtxt) -> ~[token_tree] {
- cx.parse_tts(self.to_source())
- }
- }
-
- impl<'self> ToTokens for &'self [ast::Ty] {
- fn to_tokens(&self, cx: @ExtCtxt) -> ~[token_tree] {
- cx.parse_tts(self.to_source())
- }
- }
-
- impl ToTokens for Generics {
- fn to_tokens(&self, cx: @ExtCtxt) -> ~[token_tree] {
- cx.parse_tts(self.to_source())
- }
- }
-
- impl ToTokens for @ast::expr {
- fn to_tokens(&self, cx: @ExtCtxt) -> ~[token_tree] {
- cx.parse_tts(self.to_source())
- }
- }
-
- impl ToTokens for ast::Block {
- fn to_tokens(&self, cx: @ExtCtxt) -> ~[token_tree] {
- cx.parse_tts(self.to_source())
- }
- }
-
- impl<'self> ToTokens for &'self str {
- fn to_tokens(&self, cx: @ExtCtxt) -> ~[token_tree] {
- cx.parse_tts(self.to_source())
- }
- }
-
- impl ToTokens for int {
- fn to_tokens(&self, cx: @ExtCtxt) -> ~[token_tree] {
- cx.parse_tts(self.to_source())
- }
- }
-
- impl ToTokens for i8 {
- fn to_tokens(&self, cx: @ExtCtxt) -> ~[token_tree] {
- cx.parse_tts(self.to_source())
- }
- }
-
- impl ToTokens for i16 {
- fn to_tokens(&self, cx: @ExtCtxt) -> ~[token_tree] {
- cx.parse_tts(self.to_source())
- }
- }
-
- impl ToTokens for i32 {
- fn to_tokens(&self, cx: @ExtCtxt) -> ~[token_tree] {
- cx.parse_tts(self.to_source())
- }
- }
-
- impl ToTokens for i64 {
- fn to_tokens(&self, cx: @ExtCtxt) -> ~[token_tree] {
- cx.parse_tts(self.to_source())
- }
- }
-
- impl ToTokens for uint {
- fn to_tokens(&self, cx: @ExtCtxt) -> ~[token_tree] {
- cx.parse_tts(self.to_source())
- }
- }
-
- impl ToTokens for u8 {
- fn to_tokens(&self, cx: @ExtCtxt) -> ~[token_tree] {
- cx.parse_tts(self.to_source())
- }
- }
-
- impl ToTokens for u16 {
- fn to_tokens(&self, cx: @ExtCtxt) -> ~[token_tree] {
- cx.parse_tts(self.to_source())
- }
- }
-
- impl ToTokens for u32 {
- fn to_tokens(&self, cx: @ExtCtxt) -> ~[token_tree] {
- cx.parse_tts(self.to_source())
- }
- }
-
- impl ToTokens for u64 {
- fn to_tokens(&self, cx: @ExtCtxt) -> ~[token_tree] {
- cx.parse_tts(self.to_source())
- }
- }
+ macro_rules! impl_to_tokens(
+ ($t:ty) => (
+ impl ToTokens for $t {
+ fn to_tokens(&self, cx: @ExtCtxt) -> ~[token_tree] {
+ cx.parse_tts(self.to_source())
+ }
+ }
+ )
+ )
+
+ macro_rules! impl_to_tokens_self(
+ ($t:ty) => (
+ impl<'self> ToTokens for $t {
+ fn to_tokens(&self, cx: @ExtCtxt) -> ~[token_tree] {
+ cx.parse_tts(self.to_source())
+ }
+ }
+ )
+ )
+
+ impl_to_tokens!(ast::ident)
+ impl_to_tokens!(@ast::item)
+ impl_to_tokens_self!(&'self [@ast::item])
+ impl_to_tokens!(ast::Ty)
+ impl_to_tokens_self!(&'self [ast::Ty])
+ impl_to_tokens!(Generics)
+ impl_to_tokens!(@ast::expr)
+ impl_to_tokens!(ast::Block)
+ impl_to_tokens_self!(&'self str)
+ impl_to_tokens!(int)
+ impl_to_tokens!(i8)
+ impl_to_tokens!(i16)
+ impl_to_tokens!(i32)
+ impl_to_tokens!(i64)
+ impl_to_tokens!(uint)
+ impl_to_tokens!(u8)
+ impl_to_tokens!(u16)
+ impl_to_tokens!(u32)
+ impl_to_tokens!(u64)
pub trait ExtParseUtils {
fn parse_item(&self, s: @str) -> @ast::item;
pub fn expand_quote_tokens(cx: @ExtCtxt,
sp: span,
tts: &[ast::token_tree]) -> base::MacResult {
- base::MRExpr(expand_tts(cx, sp, tts))
+ let (cx_expr, expr) = expand_tts(cx, sp, tts);
+ base::MRExpr(expand_wrapper(cx, sp, cx_expr, expr))
}
pub fn expand_quote_expr(cx: @ExtCtxt,
fn expand_tts(cx: @ExtCtxt,
sp: span,
- tts: &[ast::token_tree]) -> @ast::expr {
+ tts: &[ast::token_tree]) -> (@ast::expr, @ast::expr) {
// NB: It appears that the main parser loses its mind if we consider
// $foo as a tt_nonterminal during the main parse, so we have to re-parse
tts.to_owned()
);
*p.quote_depth += 1u;
- let tts = p.parse_all_token_trees();
- p.abort_if_errors();
- // We want to emit a block expression that does a sequence of 'use's to
- // import the runtime module, followed by a tt-building expression.
+ let cx_expr = p.parse_expr();
+ if !p.eat(&token::COMMA) {
+ p.fatal("Expected token `,`");
+ }
- let uses = ~[ cx.view_use_glob(sp, ast::public,
- ids_ext(~[~"syntax",
- ~"ext",
- ~"quote",
- ~"rt"])) ];
+ let tts = p.parse_all_token_trees();
+ p.abort_if_errors();
// We also bind a single value, sp, to ext_cx.call_site()
//
// the site the string literal occurred, which was in a source file
// _other_ than the one the user has control over. For example, an
// error in a quote from the protocol compiler, invoked in user code
- // using macro_rules! for example, will be attributed to the macro_rules.rs file in
- // libsyntax, which the user might not even have source to (unless they
- // happen to have a compiler on hand). Over all, the phase distinction
+ // using macro_rules! for example, will be attributed to the macro_rules.rs
+ // file in libsyntax, which the user might not even have source to (unless
+ // they happen to have a compiler on hand). Over all, the phase distinction
// just makes quotes "hard to attribute". Possibly this could be fixed
// by recreating some of the original qq machinery in the tt regime
// (pushing fake FileMaps onto the parser to account for original sites
id_ext("tt"),
cx.expr_vec_uniq(sp, ~[]));
- cx.expr_block(
- cx.block_all(sp, uses,
- ~[stmt_let_sp,
- stmt_let_tt] + mk_tts(cx, sp, tts),
- Some(cx.expr_ident(sp, id_ext("tt")))))
+ let block = cx.expr_block(
+ cx.block_all(sp,
+ ~[],
+ ~[stmt_let_sp, stmt_let_tt] + mk_tts(cx, sp, tts),
+ Some(cx.expr_ident(sp, id_ext("tt")))));
+
+ (cx_expr, block)
+}
+
+fn expand_wrapper(cx: @ExtCtxt,
+ sp: span,
+ cx_expr: @ast::expr,
+ expr: @ast::expr) -> @ast::expr {
+ let uses = ~[ cx.view_use_glob(sp, ast::public,
+ ids_ext(~[~"syntax",
+ ~"ext",
+ ~"quote",
+ ~"rt"])) ];
+
+ let stmt_let_ext_cx = cx.stmt_let(sp, false, id_ext("ext_cx"), cx_expr);
+
+ cx.expr_block(cx.block_all(sp, uses, ~[stmt_let_ext_cx], Some(expr)))
}
fn expand_parse_call(cx: @ExtCtxt,
parse_method: &str,
arg_exprs: ~[@ast::expr],
tts: &[ast::token_tree]) -> @ast::expr {
- let tts_expr = expand_tts(cx, sp, tts);
+ let (cx_expr, tts_expr) = expand_tts(cx, sp, tts);
let cfg_call = || cx.expr_method_call(
sp, cx.expr_ident(sp, id_ext("ext_cx")),
id_ext("parse_sess"), ~[]);
let new_parser_call =
- cx.expr_call_global(sp,
- ids_ext(~[~"syntax",
- ~"ext",
- ~"quote",
- ~"rt",
- ~"new_parser_from_tts"]),
- ~[parse_sess_call(),
- cfg_call(),
- tts_expr]);
-
- cx.expr_method_call(sp, new_parser_call,
- id_ext(parse_method),
- arg_exprs)
+ cx.expr_call(sp,
+ cx.expr_ident(sp, id_ext("new_parser_from_tts")),
+ ~[parse_sess_call(), cfg_call(), tts_expr]);
+
+ let expr = cx.expr_method_call(sp, new_parser_call, id_ext(parse_method),
+ arg_exprs);
+
+ expand_wrapper(cx, sp, cx_expr, expr)
}
c => {
// So the error span points to the unrecognized character
rdr.peek_span = codemap::mk_sp(rdr.last_pos, rdr.pos);
- rdr.fatal(fmt!("unknown start of token: %d", c as int));
+ let mut cs = ~"";
+ char::escape_default(c, |c| cs.push_char(c));
+ rdr.fatal(fmt!("unknown start of token: %s", cs));
}
}
}
pub enum ObsoleteSyntax {
ObsoleteLet,
ObsoleteFieldTerminator,
- ObsoleteStructCtor,
ObsoleteWith,
ObsoleteClassTraits,
ObsoletePrivSection,
fn token_is_obsolete_ident(&self, ident: &str, token: &Token) -> bool;
fn is_obsolete_ident(&self, ident: &str) -> bool;
fn eat_obsolete_ident(&self, ident: &str) -> bool;
- fn try_parse_obsolete_struct_ctor(&self) -> bool;
fn try_parse_obsolete_with(&self) -> bool;
fn try_parse_obsolete_priv_section(&self, attrs: &[Attribute]) -> bool;
}
"field declaration terminated with semicolon",
"fields are now separated by commas"
),
- ObsoleteStructCtor => (
- "struct constructor",
- "structs are now constructed with `MyStruct { foo: val }` \
- syntax. Structs with private fields cannot be created \
- outside of their defining module"
- ),
ObsoleteWith => (
"with",
"record update is done with `..`, e.g. \
}
}
- fn try_parse_obsolete_struct_ctor(&self) -> bool {
- if self.eat_obsolete_ident("new") {
- self.obsolete(*self.last_span, ObsoleteStructCtor);
- self.parse_fn_decl();
- self.parse_block();
- true
- } else {
- false
- }
- }
-
fn try_parse_obsolete_with(&self) -> bool {
if *self.token == token::COMMA
&& self.look_ahead(1,
_ => {
p.fatal(
fmt!(
- "expected `;` or `}` but found `%s`",
+ "expected `;` or `{` but found `%s`",
self.this_token_to_str()
)
);
return ~[self.parse_single_struct_field(public, attrs)];
}
- if self.try_parse_obsolete_struct_ctor() {
- return ~[];
- }
-
return ~[self.parse_single_struct_field(inherited, attrs)];
}
"be", // 64
"pure", // 65
+ "yield", // 66
];
@ident_interner {
Once,
Priv,
Pub,
- Pure,
Ref,
Return,
Static,
// Reserved keywords
Be,
+ Pure,
+ Yield,
}
impl Keyword {
Once => ident { name: 50, ctxt: 0 },
Priv => ident { name: 51, ctxt: 0 },
Pub => ident { name: 52, ctxt: 0 },
- Pure => ident { name: 65, ctxt: 0 },
Ref => ident { name: 53, ctxt: 0 },
Return => ident { name: 54, ctxt: 0 },
Static => ident { name: 27, ctxt: 0 },
Use => ident { name: 61, ctxt: 0 },
While => ident { name: 62, ctxt: 0 },
Be => ident { name: 64, ctxt: 0 },
+ Pure => ident { name: 65, ctxt: 0 },
+ Yield => ident { name: 66, ctxt: 0 },
}
}
}
pub fn is_any_keyword(tok: &Token) -> bool {
match *tok {
token::IDENT(sid, false) => match sid.name {
- 8 | 27 | 32 .. 65 => true,
+ 8 | 27 | 32 .. 66 => true,
_ => false,
},
_ => false
pub fn is_reserved_keyword(tok: &Token) -> bool {
match *tok {
token::IDENT(sid, false) => match sid.name {
- 64 .. 65 => true,
+ 64 .. 66 => true,
_ => false,
},
_ => false,
use print::pprust;
use std::io;
-use std::u64;
// The @ps is stored here to prevent recursive type.
pub enum ann_node<'self> {
ast::lit_int(i, t) => {
if i < 0_i64 {
word(s.s,
- ~"-" + u64::to_str_radix(-i as u64, 10u)
+ ~"-" + (-i as u64).to_str_radix(10u)
+ ast_util::int_ty_to_str(t));
} else {
word(s.s,
- u64::to_str_radix(i as u64, 10u)
+ (i as u64).to_str_radix(10u)
+ ast_util::int_ty_to_str(t));
}
}
ast::lit_uint(u, t) => {
word(s.s,
- u64::to_str_radix(u, 10u)
+ u.to_str_radix(10u)
+ ast_util::uint_ty_to_str(t));
}
ast::lit_int_unsuffixed(i) => {
if i < 0_i64 {
- word(s.s, ~"-" + u64::to_str_radix(-i as u64, 10u));
+ word(s.s, ~"-" + (-i as u64).to_str_radix(10u));
} else {
- word(s.s, u64::to_str_radix(i as u64, 10u));
+ word(s.s, (i as u64).to_str_radix(10u));
}
}
ast::lit_float(f, t) => {
-Subproject commit f67442eee27d3d075a65cf7f9a70f7ec6649ffd1
+Subproject commit 0964c68ddf2c67ce455e7443a06f4bb3db9e92bb
force_lazy_lock="1"
;;
+ *-*-linux-android*)
+ CFLAGS="$CFLAGS"
+ CPPFLAGS="$CPPFLAGS -D_GNU_SOURCE"
+ abi="elf"
+ $as_echo "#define JEMALLOC_HAS_ALLOCA_H 1" >>confdefs.h
+
+ $as_echo "#define JEMALLOC_PURGE_MADVISE_DONTNEED " >>confdefs.h
+
+ $as_echo "#define JEMALLOC_THREADED_INIT " >>confdefs.h
+
+ default_munmap="0"
+ ;;
*-*-linux*)
CFLAGS="$CFLAGS"
CPPFLAGS="$CPPFLAGS -D_GNU_SOURCE"
CTARGET='-o $@'
LDTARGET='-o $@'
EXTRA_LDFLAGS=
-MKLIB='ar crus $@'
+MKLIB='$(AR) crus $@'
CC_MM=1
dnl Platform-specific settings. abi and RPATH can probably be determined
AC_DEFINE([JEMALLOC_PURGE_MADVISE_FREE], [ ])
force_lazy_lock="1"
;;
+ *-*-linux-android*)
+ CFLAGS="$CFLAGS"
+ CPPFLAGS="$CPPFLAGS -D_GNU_SOURCE"
+ abi="elf"
+ AC_DEFINE([JEMALLOC_HAS_ALLOCA_H])
+ AC_DEFINE([JEMALLOC_PURGE_MADVISE_DONTNEED], [ ])
+ AC_DEFINE([JEMALLOC_THREADED_INIT], [ ])
+ default_munmap="0"
+ ;;
*-*-linux*)
CFLAGS="$CFLAGS"
CPPFLAGS="$CPPFLAGS -D_GNU_SOURCE"
// This is a rather ugly parser for strings in the form
// "crate1,crate2.mod3,crate3.x=1". Log levels are 0-255,
-// with the most likely ones being 0-3 (defined in core::).
+// with the most likely ones being 0-3 (defined in std::).
size_t parse_logging_spec(char* spec, log_directive* dirs) {
size_t dir = 0;
while (dir < max_log_directives && *spec) {
#include <malloc.h>
#endif
+#ifndef __WIN32__
+// for signal
+#include <signal.h>
+#endif
+
#include "uv.h"
#include "rust_globals.h"
extern "C" void*
rust_uv_loop_new() {
+// XXX libuv doesn't always ignore SIGPIPE even though we don't need it.
+#ifndef __WIN32__
+ signal(SIGPIPE, SIG_IGN);
+#endif
return (void*)uv_loop_new();
}
const char* path) {
PassManager *PM = unwrap<PassManager>(PMR);
std::string ErrorInfo;
- raw_fd_ostream OS(path, ErrorInfo, raw_fd_ostream::F_Binary);
+ raw_fd_ostream OS(path, ErrorInfo, sys::fs::F_Binary);
formatted_raw_ostream FOS(OS);
PM->add(createPrintModulePass(&FOS));
PM->run(*unwrap(M));
bool NoVerify = false;
std::string ErrorInfo;
raw_fd_ostream OS(path, ErrorInfo,
- raw_fd_ostream::F_Binary);
+ sys::fs::F_Binary);
if (ErrorInfo != "") {
LLVMRustError = ErrorInfo.c_str();
return false;
return wrap(Type::getMetadataTy(*unwrap(C)));
}
+extern "C" void LLVMAddFunctionAttrString(LLVMValueRef fn, const char *Name) {
+ unwrap<Function>(fn)->addFnAttr(Name);
+}
+
extern "C" LLVMValueRef LLVMBuildAtomicLoad(LLVMBuilderRef B,
LLVMValueRef source,
const char* Name,
return wrap(Builder->createFunction(
unwrapDI<DIScope>(Scope), Name, LinkageName,
unwrapDI<DIFile>(File), LineNo,
- unwrapDI<DIType>(Ty), isLocalToUnit, isDefinition, ScopeLine,
+ unwrapDI<DICompositeType>(Ty), isLocalToUnit, isDefinition, ScopeLine,
Flags, isOptimized,
unwrap<Function>(Fn),
unwrapDI<MDNode*>(TParam),
# If this file is modified, then llvm will be forcibly cleaned and then rebuilt.
# The actual contents of this file do not matter; to trigger a change on the
# build bots, change the contents so git updates the mtime.
-2013-07-04
+2013-08-20
LLVMAddEarlyCSEPass
LLVMAddFunction
LLVMAddFunctionAttr
+LLVMAddFunctionAttrString
LLVMAddFunctionAttrsPass
LLVMAddFunctionInliningPass
LLVMAddGVNPass
}
}
+#[fixed_stack_segment] #[inline(never)]
pub fn fact(n: uint) -> uint {
unsafe {
info!("n = %?", n);
--- /dev/null
+// Copyright 2012 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#[crate_type="lib"];
+
+condition! {
+ pub oops: int -> int;
+}
+
+pub fn trouble() -> int {
+ oops::cond.raise(1)
+}
--- /dev/null
+// Copyright 2012 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#[crate_type="lib"];
+
+condition! {
+ pub oops: int -> int;
+}
--- /dev/null
+// Copyright 2012 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#[crate_type="lib"];
+
+condition! {
+ pub oops: int -> int;
+}
+
+pub fn guard(k: extern fn() -> int, x: int) -> int {
+ do oops::cond.trap(|i| i*x).inside {
+ k()
+ }
+}
--- /dev/null
+// Copyright 2012 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#[crate_type="lib"];
+
+#[deriving(Eq)]
+pub enum Color {
+ Red, Green, Blue
+}
+
+condition! {
+ pub oops: (int,float,~str) -> ::Color;
+}
+
+pub trait Thunk<T> {
+ fn call(self) -> T;
+}
+
+pub fn callback<T,TH:Thunk<T>>(t:TH) -> T {
+ t.call()
+}
+
let mut set = f();
do timed(&mut self.sequential_strings) {
for i in range(0u, num_keys) {
- let s = uint::to_str(i);
- set.insert(s);
+ set.insert(i.to_str());
}
for i in range(0u, num_keys) {
- let s = uint::to_str(i);
- assert!(set.contains(&s));
+ assert!(set.contains(&i.to_str()));
}
}
}
let mut set = f();
do timed(&mut self.random_strings) {
for _ in range(0, num_keys) {
- let s = uint::to_str(rng.next() as uint);
+ let s = (rng.next() as uint).to_str();
set.insert(s);
}
}
{
let mut set = f();
for i in range(0u, num_keys) {
- set.insert(uint::to_str(i));
+ set.insert(i.to_str());
}
do timed(&mut self.delete_strings) {
for i in range(0u, num_keys) {
- assert!(set.remove(&uint::to_str(i)));
+ assert!(set.remove(&i.to_str()));
}
}
}
let n = uint::from_str(args[1]).unwrap();
for i in range(0u, n) {
- let x = uint::to_str(i);
+ let x = i.to_str();
info!(x);
}
}
fn gradient(orig: Vec2, grad: Vec2, p: Vec2) -> f32 {
let sp = Vec2 {x: p.x - orig.x, y: p.y - orig.y};
- grad.x * sp.x + grad.y + sp.y
+ grad.x * sp.x + grad.y * sp.y
}
struct Noise2DContext {
let elapsed = stop - start;
out.write_line(fmt!("%d\t%d\t%s", n, fibn,
- u64::to_str(elapsed)));
+ elapsed.to_str()));
}
}
}
do task::spawn_supervised {
let c = c.take();
if gens_left & 1 == 1 {
- task::yield(); // shake things up a bit
+ task::deschedule(); // shake things up a bit
}
if gens_left > 0 {
child_generation(gens_left - 1, c); // recurse
+++ /dev/null
-// Copyright 2012 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-use std::uint;
-
-fn uuid() -> uint { fail!(); }
-
-fn from_str(s: ~str) -> uint { fail!(); }
-fn to_str(u: uint) -> ~str { fail!(); }
-fn uuid_random() -> uint { fail!(); }
-
-fn main() {
- do range(0u, 100000).advance |_i| { //~ ERROR Do-block body must return bool, but
- };
- // should get a more general message if the callback
- // doesn't return nil
- do range(0u, 100000).advance |_i| { //~ ERROR mismatched types
- ~"str"
- };
-}
+++ /dev/null
-// Copyright 2012 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-fn main() {
- fn take_block(f: &fn() -> bool) -> bool { f() }
- do take_block {}; //~ ERROR Do-block body must return bool, but returns () here. Perhaps
-}
// except according to those terms.
// xfail-test
-use core::io::ReaderUtil;
-use core::io::Reader;
+use std::io::ReaderUtil;
+use std::io::Reader;
fn bar(r:@ReaderUtil) -> ~str { r.read_line() }
--- /dev/null
+// Copyright 2013 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+trait Foo { fn a() } //~ ERROR expected `;` or `{` but found `}`
+
+fn main() {}
// Exercise the unused_unsafe attribute in some positive and negative cases
+#[allow(cstack)];
#[deny(unused_unsafe)];
mod foo {
}
}
}
+
unsafe fn good3() { foo::bar() }
fn good4() { unsafe { foo::bar() } }
//~^ ERROR obsolete syntax: `let` in field declaration
bar: ();
//~^ ERROR obsolete syntax: field declaration terminated with semicolon
- new() { }
- //~^ ERROR obsolete syntax: struct constructor
}
struct q : r {
fn main() {
- let ext_cx = mk_ctxt();
+ let cx = mk_ctxt();
- let abc = quote_expr!(23);
+ let abc = quote_expr!(cx, 23);
check_pp(abc, pprust::print_expr, "23");
- let expr3 = quote_expr!(2 - $abcd + 7); //~ ERROR unresolved name: abcd
+ let expr3 = quote_expr!(cx, 2 - $abcd + 7); //~ ERROR unresolved name: abcd
check_pp(expr3, pprust::print_expr, "2 - 23 + 7");
}
fn main() {
- let ext_cx = mk_ctxt();
+ let cx = mk_ctxt();
- let stmt = quote_stmt!(let x int = 20;); //~ ERROR expected end-of-string
+ let stmt = quote_stmt!(cx, let x int = 20;); //~ ERROR expected end-of-string
check_pp(*stmt, pprust::print_stmt, "");
}
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-//error-pattern:libc::c_int or libc::c_long should be used
+#[forbid(ctypes)];
+
mod xx {
extern {
- pub fn strlen(str: *u8) -> uint;
- pub fn foo(x: int, y: uint);
+ pub fn strlen(str: *u8) -> uint; //~ ERROR found rust type `uint`
+ pub fn foo(x: int, y: uint); //~ ERROR found rust type `int`
+ //~^ ERROR found rust type `uint`
}
}
fn main() {
- // let it fail to verify warning message
- fail!()
}
fn count(n: uint) -> uint {
unsafe {
- task::yield();
+ task::deschedule();
rustrt::rust_dbg_call(cb, n)
}
}
fn main() {
let (p, _c) = comm::stream::<()>();
task::spawn(|| child() );
- task::yield();
+ task::deschedule();
}
use std::task;
fn goodfail() {
- task::yield();
+ task::deschedule();
fail!("goodfail");
}
fn main() {
- let ext_cx = mk_ctxt();
- let s = quote_expr!(__s);
- let e = quote_expr!(__e);
- let f = quote_expr!($s.foo {|__e| $e});
+ let cx = mk_ctxt();
+ let s = quote_expr!(cx, __s);
+ let e = quote_expr!(cx, __e);
+ let f = quote_expr!(cx, $s.foo {|__e| $e});
log(error, pprust::expr_to_str(f));
}
}
fn main() {
- let ext_cx = mk_ctxt();
+ let cx = mk_ctxt();
- let abc = quote_expr!(23);
+ let abc = quote_expr!(cx, 23);
check_pp(ext_cx, abc, pprust::print_expr, ~"23");
- let ty = quote_ty!(int);
+ let ty = quote_ty!(cx, int);
check_pp(ext_cx, ty, pprust::print_type, ~"int");
- let item = quote_item!(static x : int = 10;).get();
+ let item = quote_item!(cx, static x : int = 10;).get();
check_pp(ext_cx, item, pprust::print_item, ~"static x: int = 10;");
- let stmt = quote_stmt!(let x = 20;);
+ let stmt = quote_stmt!(cx, let x = 20;);
check_pp(ext_cx, *stmt, pprust::print_stmt, ~"let x = 20;");
- let pat = quote_pat!(Some(_));
+ let pat = quote_pat!(cx, Some(_));
check_pp(ext_cx, pat, pprust::print_pat, ~"Some(_)");
}
use syntax::ext::base::ExtCtxt;
-fn syntax_extension(ext_cx: @ExtCtxt) {
- let e_toks : ~[syntax::ast::token_tree] = quote_tokens!(1 + 2);
- let p_toks : ~[syntax::ast::token_tree] = quote_tokens!((x, 1 .. 4, *));
+fn syntax_extension(cx: @ExtCtxt) {
+ let e_toks : ~[syntax::ast::token_tree] = quote_tokens!(cx, 1 + 2);
+ let p_toks : ~[syntax::ast::token_tree] = quote_tokens!(cx, (x, 1 .. 4, *));
- let a: @syntax::ast::expr = quote_expr!(1 + 2);
- let _b: Option<@syntax::ast::item> = quote_item!( static foo : int = $e_toks; );
- let _c: @syntax::ast::pat = quote_pat!( (x, 1 .. 4, *) );
- let _d: @syntax::ast::stmt = quote_stmt!( let x = $a; );
- let _e: @syntax::ast::expr = quote_expr!( match foo { $p_toks => 10 } );
+ let a: @syntax::ast::expr = quote_expr!(cx, 1 + 2);
+ let _b: Option<@syntax::ast::item> = quote_item!(cx, static foo : int = $e_toks; );
+ let _c: @syntax::ast::pat = quote_pat!(cx, (x, 1 .. 4, *) );
+ let _d: @syntax::ast::stmt = quote_stmt!(cx, let x = $a; );
+ let _e: @syntax::ast::expr = quote_expr!(cx, match foo { $p_toks => 10 } );
}
fn main() {
use anonexternmod::*;
+#[fixed_stack_segment]
pub fn main() {
unsafe {
rust_get_test_int();
fn rust_get_test_int() -> libc::intptr_t;
}
+#[fixed_stack_segment]
pub fn main() {
unsafe {
let _ = rust_get_test_int();
}
}
+#[fixed_stack_segment]
fn atol(s: ~str) -> int {
s.with_c_str(|x| unsafe { libc::atol(x) as int })
}
+#[fixed_stack_segment]
fn atoll(s: ~str) -> i64 {
s.with_c_str(|x| unsafe { libc::atoll(x) as i64 })
}
}
}
+#[fixed_stack_segment]
fn count(n: uint) -> uint {
unsafe {
info!("n = %?", n);
}
}
+#[fixed_stack_segment] #[inline(never)]
fn count(n: uint) -> uint {
unsafe {
info!("n = %?", n);
}
}
+#[fixed_stack_segment] #[inline(never)]
fn count(n: uint) -> uint {
unsafe {
info!("n = %?", n);
}
}
+#[fixed_stack_segment] #[inline(never)]
fn fact(n: uint) -> uint {
unsafe {
info!("n = %?", n);
extern mod externcallback(vers = "0.1");
+#[fixed_stack_segment] #[inline(never)]
fn fact(n: uint) -> uint {
unsafe {
info!("n = %?", n);
pub fn rust_dbg_extern_identity_TwoU32s(v: TwoU32s) -> TwoU32s;
}
+#[fixed_stack_segment] #[inline(never)]
pub fn main() {
unsafe {
let x = TwoU32s {one: 22, two: 23};
pub fn rust_dbg_extern_identity_TwoU64s(u: TwoU64s) -> TwoU64s;
}
+#[fixed_stack_segment] #[inline(never)]
pub fn main() {
unsafe {
let x = TwoU64s {one: 22, two: 23};
pub fn rust_dbg_extern_identity_TwoU64s(v: TwoU64s) -> TwoU64s;
}
+#[fixed_stack_segment] #[inline(never)]
pub fn main() {
unsafe {
let x = TwoU64s {one: 22, two: 23};
pub fn rust_dbg_extern_identity_u8(v: u8) -> u8;
}
+#[fixed_stack_segment] #[inline(never)]
pub fn main() {
unsafe {
assert_eq!(22_u8, rust_dbg_extern_identity_u8(22_u8));
pub fn rust_dbg_extern_identity_double(v: f64) -> f64;
}
+#[fixed_stack_segment] #[inline(never)]
pub fn main() {
unsafe {
assert_eq!(22.0_f64, rust_dbg_extern_identity_double(22.0_f64));
pub fn rust_dbg_extern_identity_u32(v: u32) -> u32;
}
+#[fixed_stack_segment] #[inline(never)]
pub fn main() {
unsafe {
assert_eq!(22_u32, rust_dbg_extern_identity_u32(22_u32));
pub fn rust_dbg_extern_identity_u64(v: u64) -> u64;
}
+#[fixed_stack_segment] #[inline(never)]
pub fn main() {
unsafe {
assert_eq!(22_u64, rust_dbg_extern_identity_u64(22_u64));
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-// xfail-win32 #5745
-// xfail-macos Broken on mac i686
-
struct TwoU16s {
one: u16, two: u16
}
pub fn rust_dbg_extern_return_TwoU16s() -> TwoU16s;
}
+#[fixed_stack_segment] #[inline(never)]
pub fn main() {
unsafe {
let y = rust_dbg_extern_return_TwoU16s();
pub fn rust_dbg_extern_return_TwoU32s() -> TwoU32s;
}
+#[fixed_stack_segment] #[inline(never)]
pub fn main() {
unsafe {
let y = rust_dbg_extern_return_TwoU32s();
pub fn rust_dbg_extern_return_TwoU64s() -> TwoU64s;
}
+#[fixed_stack_segment] #[inline(never)]
pub fn main() {
unsafe {
let y = rust_dbg_extern_return_TwoU64s();
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-// xfail-win32 #5745
-// xfail-macos Broken on mac i686
-
struct TwoU8s {
one: u8, two: u8
}
pub fn rust_dbg_extern_return_TwoU8s() -> TwoU8s;
}
+#[fixed_stack_segment] #[inline(never)]
pub fn main() {
unsafe {
let y = rust_dbg_extern_return_TwoU8s();
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-// This creates a bunch of yielding tasks that run concurrently
+// This creates a bunch of tasks that deschedule while running concurrently
// while holding onto C stacks
use std::libc;
if data == 1u {
data
} else {
- task::yield();
+ task::deschedule();
count(data - 1u) + count(data - 1u)
}
}
+#[fixed_stack_segment] #[inline(never)]
fn count(n: uint) -> uint {
unsafe {
rustrt::rust_dbg_call(cb, n)
}
}
+#[fixed_stack_segment] #[inline(never)]
fn count(n: uint) -> uint {
unsafe {
- task::yield();
+ task::deschedule();
rustrt::rust_dbg_call(cb, n)
}
}
use std::libc;
use std::unstable::run_in_bare_thread;
-extern {
- pub fn rust_dbg_call(cb: *u8, data: libc::uintptr_t) -> libc::uintptr_t;
-}
+externfn!(fn rust_dbg_call(cb: *u8, data: libc::uintptr_t) -> libc::uintptr_t)
pub fn main() {
unsafe {
}
}
+#[fixed_stack_segment] #[inline(never)]
pub fn main() {
unsafe {
rustrt1::rust_get_test_int();
}
}
+#[fixed_stack_segment] #[inline(never)]
fn strlen(str: ~str) -> uint {
// C string is terminated with a zero
do str.with_c_str |buf| {
}
}
+#[fixed_stack_segment] #[inline(never)]
pub fn main() {
unsafe {
rustrt::rust_get_test_int();
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-// xfail-win32
// Passing enums by value
pub enum void { }
macro_rules! t(($a:expr, $b:expr) => { assert_eq!($a, $b.to_owned()) })
// Make sure there's a poly formatter that takes anything
- t!(ifmt!("{}", 1), "1");
- t!(ifmt!("{}", A), "{}");
- t!(ifmt!("{}", ()), "()");
- t!(ifmt!("{}", @(~1, "foo")), "@(~1, \"foo\")");
+ t!(ifmt!("{:?}", 1), "1");
+ t!(ifmt!("{:?}", A), "{}");
+ t!(ifmt!("{:?}", ()), "()");
+ t!(ifmt!("{:?}", @(~1, "foo")), "@(~1, \"foo\")");
// Various edge cases without formats
t!(ifmt!(""), "");
t!(ifmt!("hello"), "hello");
t!(ifmt!("hello \\{"), "hello {");
+ // default formatters should work
+ t!(ifmt!("{}", 1i), "1");
+ t!(ifmt!("{}", 1i8), "1");
+ t!(ifmt!("{}", 1i16), "1");
+ t!(ifmt!("{}", 1i32), "1");
+ t!(ifmt!("{}", 1i64), "1");
+ t!(ifmt!("{}", 1u), "1");
+ t!(ifmt!("{}", 1u8), "1");
+ t!(ifmt!("{}", 1u16), "1");
+ t!(ifmt!("{}", 1u32), "1");
+ t!(ifmt!("{}", 1u64), "1");
+ t!(ifmt!("{}", 1.0f), "1");
+ t!(ifmt!("{}", 1.0f32), "1");
+ t!(ifmt!("{}", 1.0f64), "1");
+ t!(ifmt!("{}", "a"), "a");
+ t!(ifmt!("{}", ~"a"), "a");
+ t!(ifmt!("{}", @"a"), "a");
+ t!(ifmt!("{}", false), "false");
+ t!(ifmt!("{}", 'a'), "a");
+
// At least exercise all the formats
t!(ifmt!("{:b}", true), "true");
t!(ifmt!("{:c}", '☃'), "☃");
t!(ifmt!("{:x}", 10u), "a");
t!(ifmt!("{:X}", 10u), "A");
t!(ifmt!("{:s}", "foo"), "foo");
+ t!(ifmt!("{:s}", ~"foo"), "foo");
+ t!(ifmt!("{:s}", @"foo"), "foo");
t!(ifmt!("{:p}", 0x1234 as *int), "0x1234");
t!(ifmt!("{:p}", 0x1234 as *mut int), "0x1234");
t!(ifmt!("{:d}", A), "aloha");
t!(ifmt!("{foo} {bar}", foo=0, bar=1), "0 1");
t!(ifmt!("{foo} {1} {bar} {0}", 0, 1, foo=2, bar=3), "2 1 3 0");
t!(ifmt!("{} {0:s}", "a"), "a a");
- t!(ifmt!("{} {0}", "a"), "\"a\" \"a\"");
+ t!(ifmt!("{} {0}", "a"), "a a");
// Methods should probably work
t!(ifmt!("{0, plural, =1{a#} =2{b#} zero{c#} other{d#}}", 0u), "c0");
extern mod foreign_lib;
+#[fixed_stack_segment] #[inline(never)]
pub fn main() {
unsafe {
let _foo = foreign_lib::rustrt::rust_get_test_int();
}
}
+#[fixed_stack_segment] #[inline(never)]
fn lgamma(n: c_double, value: &mut int) -> c_double {
unsafe {
return m::lgamma(n, to_c_int(value));
let old_state = swap_state_acq(&mut (*p).state,
blocked);
match old_state {
- empty | blocked => { task::yield(); }
+ empty | blocked => { task::deschedule(); }
full => {
let payload = util::replace(&mut p.payload, None);
return Some(payload.unwrap())
--- /dev/null
+// Copyright 2013 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+fn make_adder(x: int) -> @fn(int) -> int { |y| x + y }
+pub fn main() { }
}
fn transform(x: Option<int>) -> Option<~str> {
- x.bind(|n| Some(*n + 1) ).bind(|n| Some(int::to_str(*n)) )
+ x.bind(|n| Some(*n + 1) ).bind(|n| Some(n.to_str()) )
}
pub fn main() {
pub struct Fd(c_int);
impl Drop for Fd {
+ #[fixed_stack_segment] #[inline(never)]
fn drop(&self) {
unsafe {
libc::close(**self);
}
}
+#[fixed_stack_segment] #[inline(never)]
fn main() {
unsafe {
a::free(transmute(0));
}
fn visit_int(&self) -> bool {
do self.get::<int>() |i| {
- self.vals.push(int::to_str(i));
+ self.vals.push(i.to_str());
};
true
}
x: int
}
+#[fixed_stack_segment] #[inline(never)]
fn alloc<'a>(_bcx : &'a arena) -> &'a Bcx<'a> {
unsafe {
cast::transmute(libc::malloc(sys::size_of::<Bcx<'blk>>()
return alloc(bcx.fcx.arena);
}
+#[fixed_stack_segment] #[inline(never)]
fn g(fcx : &Fcx) {
let bcx = Bcx { fcx: fcx };
let bcx2 = h(&bcx);
task::spawn(|| die() );
let (p, c) = comm::stream::<()>();
loop {
- // Sending and receiving here because these actions yield,
+ // Sending and receiving here because these actions deschedule,
// at which point our child can kill us.
c.send(());
p.recv();
// The above comment no longer makes sense but I'm
// reluctant to remove a linked failure test case.
- task::yield();
+ task::deschedule();
}
}
// option. This file may not be copied, modified, or distributed
// except according to those terms.
+// xfail-test - FIXME(#8538) a linking problem induced by extern "C" fns that I do not understand
// xfail-fast - windows doesn't like this
// Smallest hello world with no runtime
task::spawn(|| x(~"hello from second spawned fn", 66) );
task::spawn(|| x(~"hello from third spawned fn", 67) );
let mut i: int = 30;
- while i > 0 { i = i - 1; info!("parent sleeping"); task::yield(); }
+ while i > 0 { i = i - 1; info!("parent sleeping"); task::deschedule(); }
}
}
impl uint_utils for uint {
- fn str(&self) -> ~str { uint::to_str(*self) }
+ fn str(&self) -> ~str { self.to_str() }
fn multi(&self, f: &fn(uint)) {
let mut c = 0u;
while c < *self { f(c); c += 1u; }
*a = 3;
}
+#[fixed_stack_segment] #[inline(never)]
unsafe fn run() {
assert!(debug_static_mut == 3);
debug_static_mut = 4;
--- /dev/null
+struct Foo {
+ new: int,
+}
+
+pub fn main() {
+ let foo = Foo{ new: 3 };
+ assert_eq!(foo.new, 3);
+}
}
}
+#[fixed_stack_segment] #[inline(never)]
fn test1() {
unsafe {
let q = Quad { a: 0xaaaa_aaaa_aaaa_aaaa_u64,
}
#[cfg(target_arch = "x86_64")]
+#[fixed_stack_segment]
+#[inline(never)]
fn test2() {
unsafe {
let f = Floats { a: 1.234567890e-15_f64,
let c = p.recv();
c.send(~"A");
c.send(~"B");
- task::yield();
+ task::deschedule();
}
// Sleep long enough for the task to finish.
let mut i = 0;
while i < 10000 {
- task::yield();
+ task::deschedule();
i += 1;
}
}
fn supervised() {
- // Yield to make sure the supervisor joins before we
+ // Deschedule to make sure the supervisor joins before we
// fail. This is currently not needed because the supervisor
// runs first, but I can imagine that changing.
error!("supervised task=%?", 0);
- task::yield();
+ task::deschedule();
fail!();
}
use std::task;
fn supervised() {
- // Yield to make sure the supervisor joins before we fail. This is
+ // Deschedule to make sure the supervisor joins before we fail. This is
// currently not needed because the supervisor runs first, but I can
// imagine that changing.
- task::yield();
+ task::deschedule();
fail!();
}
use std::int;
trait to_str {
- fn to_str(&self) -> ~str;
+ fn to_string(&self) -> ~str;
}
impl to_str for int {
- fn to_str(&self) -> ~str { int::to_str(*self) }
+ fn to_string(&self) -> ~str { self.to_str() }
}
impl to_str for ~str {
- fn to_str(&self) -> ~str { self.clone() }
+ fn to_string(&self) -> ~str { self.clone() }
}
impl to_str for () {
- fn to_str(&self) -> ~str { ~"()" }
+ fn to_string(&self) -> ~str { ~"()" }
}
trait map<T> {
x.map(|_e| ~"hi" )
}
fn bar<U:to_str,T:map<U>>(x: T) -> ~[~str] {
- x.map(|_e| _e.to_str() )
+ x.map(|_e| _e.to_string() )
}
pub fn main() {
// xfail-fast
-#[no_std];
-
-extern mod std;
-
-use std::str::StrVector;
-use std::vec::ImmutableVector;
-use std::iterator::Iterator;
-use std::int;
-
trait to_str {
- fn to_str(&self) -> ~str;
+ fn to_string(&self) -> ~str;
}
impl to_str for int {
- fn to_str(&self) -> ~str { int::to_str(*self) }
+ fn to_string(&self) -> ~str { self.to_str() }
}
impl<T:to_str> to_str for ~[T] {
- fn to_str(&self) -> ~str {
- fmt!("[%s]", self.iter().map(|e| e.to_str()).collect::<~[~str]>().connect(", "))
+ fn to_string(&self) -> ~str {
+ fmt!("[%s]", self.iter().map(|e| e.to_string()).collect::<~[~str]>().connect(", "))
}
}
pub fn main() {
- assert!(1.to_str() == ~"1");
- assert!((~[2, 3, 4]).to_str() == ~"[2, 3, 4]");
+ assert!(1.to_string() == ~"1");
+ assert!((~[2, 3, 4]).to_string() == ~"[2, 3, 4]");
fn indirect<T:to_str>(x: T) -> ~str {
- x.to_str() + "!"
+ x.to_string() + "!"
}
assert!(indirect(~[10, 20]) == ~"[10, 20]!");
--- /dev/null
+// Copyright 2012 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#[deriving(Eq)]
+struct Foo(int);
+#[deriving(Eq)]
+struct Bar(int, int);
+
+fn main() {
+ let f: extern fn(int) -> Foo = Foo;
+ let g: extern fn(int, int) -> Bar = Bar;
+ assert_eq!(f(42), Foo(42));
+ assert_eq!(g(4, 7), Bar(4, 7));
+}
#[cfg(target_os = "win32")]
+#[fixed_stack_segment]
pub fn main() {
let heap = unsafe { kernel32::GetProcessHeap() };
let mem = unsafe { kernel32::HeapAlloc(heap, 0u32, 100u32) };
--- /dev/null
+// Copyright 2012 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// xfail-fast
+// aux-build:xc_conditions.rs
+
+extern mod xc_conditions;
+use xc_conditions::oops;
+use xc_conditions::trouble;
+
+// Tests of cross-crate conditions; the condition is
+// defined in lib, and we test various combinations
+// of `trap` and `raise` in the client or the lib where
+// the condition was defined. Also in test #4 we use
+// more complex features (generics, traits) in
+// combination with the condition.
+//
+// trap raise
+// ------------
+// xc_conditions : client lib
+// xc_conditions_2: client client
+// xc_conditions_3: lib client
+// xc_conditions_4: client client (with traits)
+//
+// the trap=lib, raise=lib case isn't tested since there's no
+// cross-crate interaction to exercise in that case.
+
+pub fn main() {
+ do oops::cond.trap(|_i| 12345).inside {
+ let x = trouble();
+ assert_eq!(x,12345);
+ }
+}
\ No newline at end of file
--- /dev/null
+// Copyright 2012 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// xfail-fast
+// aux-build:xc_conditions_2.rs
+
+extern mod xc_conditions_2;
+use xcc = xc_conditions_2;
+
+pub fn main() {
+ do xcc::oops::cond.trap(|_| 1).inside {
+ xcc::oops::cond.raise(1);
+ }
+}
--- /dev/null
+// Copyright 2012 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// xfail-fast
+// aux-build:xc_conditions_3.rs
+
+extern mod xc_conditions_3;
+use xcc = xc_conditions_3;
+
+pub fn main() {
+ assert_eq!(xcc::guard(a, 1), 40);
+}
+
+pub fn a() -> int {
+ assert_eq!(xcc::oops::cond.raise(7), 7);
+ xcc::guard(b, 2)
+}
+
+pub fn b() -> int {
+ assert_eq!(xcc::oops::cond.raise(8), 16);
+ xcc::guard(c, 3)
+}
+
+pub fn c() -> int {
+ assert_eq!(xcc::oops::cond.raise(9), 27);
+ xcc::guard(d, 4)
+}
+
+pub fn d() -> int {
+ xcc::oops::cond.raise(10)
+}
--- /dev/null
+// Copyright 2012 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// xfail-fast
+// aux-build:xc_conditions_4.rs
+
+extern mod xc_conditions_4;
+use xcc = xc_conditions_4;
+
+struct SThunk {
+ x: int
+}
+
+impl xcc::Thunk<xcc::Color> for SThunk {
+ fn call(self) -> xcc::Color {
+ xcc::oops::cond.raise((self.x, 1.23, ~"oh no"))
+ }
+}
+
+pub fn main() {
+ do xcc::oops::cond.trap(|_| xcc::Red).inside {
+ let t = SThunk { x : 10 };
+ assert_eq!(xcc::callback(t), xcc::Red)
+ }
+}
\ No newline at end of file
builder.future_result(|r| { result = Some(r); });
builder.spawn(child);
error!("1");
- task::yield();
+ task::deschedule();
error!("2");
- task::yield();
+ task::deschedule();
error!("3");
result.unwrap().recv();
}
fn child() {
- error!("4"); task::yield(); error!("5"); task::yield(); error!("6");
+ error!("4"); task::deschedule(); error!("5"); task::deschedule(); error!("6");
}
builder.future_result(|r| { result = Some(r); });
builder.spawn(child);
error!("1");
- task::yield();
+ task::deschedule();
result.unwrap().recv();
}
pub fn main() {
let mut i: int = 0;
- while i < 100 { i = i + 1; error!(i); task::yield(); }
+ while i < 100 { i = i + 1; error!(i); task::deschedule(); }
}