Brian Anderson <banderson@mozilla.com>
Chris Double <chris.double@double.co.nz>
Chris Peterson <cpeterson@mozilla.com>
+Damian Gryski <damian@gryski.com>
Damien Grassart <damien@grassart.com>
Daniel Brooks <db48x@db48x.net>
Daniel Luz <dev@mernen.com>
Ian D. Bollinger <ian.bollinger@gmail.com>
Jacob Parker <j3parker@csclub.uwaterloo.ca>
Jason Orendorff <jorendorff@mozilla.com>
+Jed Davis <jld@panix.com>
Jeff Balogh <jbalogh@mozilla.com>
Jeff Muizelaar <jmuizelaar@mozilla.com>
Jeff Olson <olson.jeffery@gmail.com>
# version-string calculation
CFG_GIT_DIR := $(CFG_SRC_DIR).git
-CFG_RELEASE = 0.3
+CFG_RELEASE = 0.3.1
CFG_VERSION = $(CFG_RELEASE)
ifneq ($(wildcard $(CFG_GIT)),)
LLVM_VERSION=$($LLVM_CONFIG --version)
case $LLVM_VERSION in
- (3.1svn)
+ (3.1svn|3.1|3.0svn|3.0)
msg "found ok version of LLVM: $LLVM_VERSION"
;;
(*)
A _path_ is a sequence of one or more path components _logically_ separated by
a namespace qualifier (`::`). If a path consists of only one component, it may
-refer to either an [item](#items) or a [slot](#slot-declarations) in a local
+refer to either an [item](#items) or a [slot](#memory-slots) in a local
control scope. If a path has multiple components, it refers to an item.
Every item has a _canonical path_ within its crate, but the path naming an
A _function item_ defines a sequence of [statements](#statements) and an
optional final [expression](#expressions) associated with a name and a set of
parameters. Functions are declared with the keyword `fn`. Functions declare a
-set of *input [slots](#slot-declarations)* as parameters, through which the
-caller passes arguments into the function, and an *output
-[slot](#slot-declarations)* through which the function passes results back to
+set of *input* [*slots*](#memory-slots) as parameters, through which the
+caller passes arguments into the function, and an *output*
+[*slot*](#memory-slots) through which the function passes results back to
the caller.
A function may also be copied into a first class *value*, in which case the
### Traits
-A _trait item_ describes a set of method types. _[implementation
-items](#implementations)_ can be used to provide implementations of
+A _trait item_ describes a set of method types. [_implementation
+items_](#implementations) can be used to provide implementations of
those methods for a specific type.
~~~~
do: multiple implementations with the same name may exist in a scope at
the same time.
-It is possible to define an implementation without referring to a trait.
-The methods in such an implementation can only be used
+It is possible to define an implementation without referring to a
+trait. The methods in such an implementation can only be used
statically (as direct calls on the values of the type that the
implementation targets). In such an implementation, the `of` clause is
-not given, and the name is mandatory.
-
-~~~~
-impl uint_loops for uint {
- fn times(f: fn(uint)) {
- let mut i = 0u;
- while i < self { f(i); i += 1u; }
- }
-}
-~~~~
+not given, and the name is mandatory. Such implementations are
+limited to nominal types (enums, classes), and the implementation
+must appear in the same module as the receiver type, or in one of
+its sub-modules.
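A rough sketch of the same idea in modern Rust syntax (an assumption for illustration: today's "inherent" impls have no name at all, whereas the 2012 syntax described here required one):

```rust
// Modern-Rust sketch of a trait-less ("inherent") implementation.
// The methods are callable only as direct (static) calls on the
// receiver type; there is no trait to dispatch through.
struct Counter(u32);

impl Counter {
    // Invoke `f` once for each value below `self.0`.
    fn times<F: FnMut(u32)>(&self, mut f: F) {
        let mut i = 0;
        while i < self.0 {
            f(i);
            i += 1;
        }
    }
}

fn main() {
    let mut sum = 0;
    Counter(4).times(|i| sum += i); // 0 + 1 + 2 + 3
    assert_eq!(sum, 6);
    println!("{}", sum);
}
```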
_When_ a trait is specified, all methods declared as part of the
trait must be present, with matching types and type parameter
[ "with" expr ] '}'
~~~~~~~~
-A _[record](#record-types) expression_ is one or more comma-separated
+A [_record_](#record-types) _expression_ is one or more comma-separated
name-value pairs enclosed by braces. A fieldname can be any identifier
(including keywords), and is separated from its value expression by a
colon. To indicate that a field is mutable, the `mut` keyword is
vec_expr : '[' "mut" ? [ expr [ ',' expr ] * ] ? ']'
~~~~~~~~
-A _[vector](#vector-types) expression_ is written by enclosing zero or
+A [_vector_](#vector-types) _expression_ is written by enclosing zero or
more comma-separated expressions of uniform type in square brackets.
The keyword `mut` can be written after the opening bracket to
indicate that the elements of the resulting vector may be mutated.
task in a _failing state_.
~~~~
-# let buildr = task::builder();
-# task::unsupervise(buildr);
-# do task::run(buildr) {
+# do task::spawn_unlinked {
(~[1, 2, 3, 4])[0];
(~[mut 'x', 'y'])[1] = 'z';
Rust is a systems programming language with a focus on type safety,
memory safety, concurrency and performance. It is intended for writing
-large, high performance applications while preventing several classes
+large, high-performance applications while preventing several classes
of errors commonly found in languages like C++. Rust has a
-sophisticated memory model that enables many of the efficient data
-structures used in C++ while disallowing invalid memory access that
-would otherwise cause segmentation faults. Like other systems
-languages it is statically typed and compiled ahead of time.
+sophisticated memory model that makes possible many of the efficient
+data structures used in C++, while disallowing invalid memory accesses
+that would otherwise cause segmentation faults. Like other systems
+languages, it is statically typed and compiled ahead of time.
-As a multi-paradigm language it has strong support for writing code in
-procedural, functional and object-oriented styles. Some of it's nice
+As a multi-paradigm language, Rust supports writing code in
+procedural, functional and object-oriented styles. Some of its nice
high-level features include:
-* Pattern matching and algebraic data types (enums) - common in functional
- languages, pattern matching on ADTs provides a compact and expressive
- way to encode program logic
-* Task-based concurrency - Rust uses lightweight tasks that do not share
- memory
-* Higher-order functions - Closures in Rust are very powerful and used
- pervasively
-* Polymorphism - Rust's type system features a unique combination of
- Java-style interfaces and Haskell-style typeclasses
-* Generics - Functions and types can be parameterized over generic
- types with optional type constraints
+* ***Pattern matching and algebraic data types (enums).*** Common in
+ functional languages, pattern matching on ADTs provides a compact
+ and expressive way to encode program logic.
+* ***Task-based concurrency.*** Rust uses lightweight tasks that do
+ not share memory.
+* ***Higher-order functions.*** Rust functions may take closures as
+ arguments or return closures as return values. Closures in Rust are
+ very powerful and used pervasively.
+* ***Interface polymorphism.*** Rust's type system features a unique
+ combination of Java-style interfaces and Haskell-style typeclasses.
+* ***Parametric polymorphism (generics).*** Functions and types can be
+ parameterized over type variables with optional type constraints.
+* ***Type inference.*** Type annotations on local variable
+ declarations can be omitted.
## First impressions
## Anatomy of a Rust program
-In its simplest form, a Rust program is simply a `.rs` file with some
+In its simplest form, a Rust program is a `.rs` file with some
types and functions defined in it. If it has a `main` function, it can
be compiled to an executable. Rust does not allow code that's not a
declaration to appear at the top level of the file—all statements must
the type of `x` is inferred to be `u16` because it is passed to a
function that takes a `u16` argument:
-~~~~~
+~~~~
let x = 3;
fn identity_u16(n: u16) -> u16 { n }
what the type of the unsuffixed literal should be, you'll get an error
message.
-~~~~~{.xfail-test}
+~~~~{.xfail-test}
let x = 3;
let y: i32 = 3;
Rust has three competing goals that inform its view of memory:
-* Memory safety - memory that is managed by and is accessible to
- the Rust language must be guaranteed to be valid. Under normal
- circumstances it is impossible for Rust to trigger a segmentation
- fault or leak memory
-* Performance - high-performance low-level code tends to employ
- a number of allocation strategies. low-performance high-level
- code often uses a single, GC-based, heap allocation strategy
-* Concurrency - Rust must maintain memory safety guarantees even
- for code running in parallel
+* Memory safety: memory that is managed by and is accessible to the
+ Rust language must be guaranteed to be valid; under normal
+ circumstances it must be impossible for Rust to trigger a
+ segmentation fault or leak memory
+* Performance: high-performance low-level code must be able to employ
+ a number of allocation strategies; low-performance high-level code
+ must be able to employ a single, garbage-collection-based, heap
+ allocation strategy
+* Concurrency: Rust must maintain memory safety guarantees, even for
+ code running in parallel
## How performance considerations influence the memory model
-Many languages that ofter the kinds of memory safety guarentees that
-Rust does have a single allocation strategy: objects live on the heap,
-live for as long as they are needed, and are periodically garbage
-collected. This is very straightforword both conceptually and in
-implementation, but has very significant costs. Such languages tend to
+Most languages that offer strong memory safety guarantees rely upon a
+garbage-collected heap to manage all of the objects. This approach is
+straightforward both in concept and in implementation, but has
+significant costs. Languages that take this approach tend to
aggressively pursue ways to ameliorate allocation costs (think the
-Java virtual machine). Rust supports this strategy with _shared
-boxes_, memory allocated on the heap that may be referred to (shared)
+Java Virtual Machine). Rust supports this strategy with _shared
+boxes_: memory allocated on the heap that may be referred to (shared)
by multiple variables.
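A rough modern counterpart (an assumption for illustration, since the `@T` shared-box syntax of this era is gone): today's reference-counted `Rc<T>` similarly puts one allocation on the heap and lets several variables refer to it.

```rust
// Sketch: one heap allocation, shared by two variables.
use std::rc::Rc;

fn main() {
    let a = Rc::new(vec![1, 2, 3]);
    let b = Rc::clone(&a); // shares the allocation; no deep copy
    assert_eq!(Rc::strong_count(&a), 2);
    assert_eq!(b[2], 3);
}
```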
-In comparison, languages like C++ offer a very precise control over
-where objects are allocated. In particular, it is common to put
-them directly on the stack, avoiding expensive heap allocation. In
-Rust this is possible as well, and the compiler will use a clever
-lifetime analysis to ensure that no variable can refer to stack
+By comparison, languages like C++ offer very precise control over
+where objects are allocated. In particular, it is common to put them
+directly on the stack, avoiding expensive heap allocation. In Rust
+this is possible as well, and the compiler will use a clever _pointer
+lifetime analysis_ to ensure that no variable can refer to stack
objects after they are destroyed.
## How concurrency considerations influence the memory model
-Memory safety in a concurrent environment tends to mean avoiding race
+Memory safety in a concurrent environment involves avoiding race
conditions between two threads of execution accessing the same
-memory. Even high-level languages frequently avoid solving this
-problem, requiring programmers to correctly employ locking to unsure
-their program is free of races.
-
-Rust starts from the position that memory simply cannot be shared
-between tasks. Experience in other languages has proven that isolating
-each tasks' heap from each other is a reliable strategy and one that
-is easy for programmers to reason about. Having isolated heaps
-additionally means that garbage collection must only be done
-per-heap. Rust never 'stops the world' to garbage collect memory.
-
-If Rust tasks have completely isolated heaps then that seems to imply
-that any data transferred between them must be copied. While this
-is a fine and useful way to implement communication between tasks,
-it is also very inefficient for large data structures.
-
-Because of this Rust also introduces a global "exchange heap". Objects
-allocated here have _ownership semantics_, meaning that there is only
-a single variable that refers to them. For this reason they are
-refered to as _unique boxes_. All tasks may allocate objects on this
-heap, then transfer ownership of those allocations to other tasks,
-avoiding expensive copies.
+memory. Even high-level languages often require programmers to
+correctly employ locking to ensure that a program is free of races.
+
+Rust starts from the position that memory cannot be shared between
+tasks. Experience in other languages has proven that isolating each
+task's heap from the others is a reliable strategy and one that is
+easy for programmers to reason about. Heap isolation has the
+additional benefit that garbage collection must only be done
+per-heap. Rust never "stops the world" to garbage-collect memory.
+
+Complete isolation of heaps between tasks implies that any data
+transferred between tasks must be copied. While this is a fine and
+useful way to implement communication between tasks, it is also very
+inefficient for large data structures. Because of this, Rust also
+employs a global _exchange heap_. Objects allocated in the exchange
+heap have _ownership semantics_, meaning that there is only a single
+variable that refers to them. For this reason, they are referred to as
+_unique boxes_. All tasks may allocate objects on the exchange heap,
+then transfer ownership of those objects to other tasks, avoiding
+expensive copies.
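The ownership-transfer idea survives in modern Rust, so it can be sketched with today's syntax (an assumption for illustration: `Box<T>` plays the role of a unique box, and a channel moves ownership to another thread without copying the payload):

```rust
// Sketch: transfer ownership of a heap allocation to another thread.
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel::<Box<[u64; 1024]>>();

    // One owner; the large array lives on the heap.
    let big = Box::new([7u64; 1024]);

    // Sending moves ownership of the box; the 8 KiB payload is not copied.
    tx.send(big).unwrap();

    let handle = thread::spawn(move || {
        let received = rx.recv().unwrap(); // this thread now owns the box
        received[0]
    });
    assert_eq!(handle.join().unwrap(), 7);
}
```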
## What to be aware of
# Boxes and pointers
In contrast to a lot of modern languages, aggregate types like records
-and enums are not represented as pointers to allocated memory. They
-are, like in C and C++, represented directly. This means that if you
-`let x = {x: 1f, y: 1f};`, you are creating a record on the stack. If
-you then copy it into a data structure, the whole record is copied,
-not just a pointer.
+and enums are _not_ represented as pointers to allocated memory in
+Rust. They are, as in C and C++, represented directly. This means that
+if you `let x = {x: 1f, y: 1f};`, you are creating a record on the
+stack. If you then copy it into a data structure, the whole record is
+copied, not just a pointer.
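The by-value representation can be seen directly in modern Rust syntax (a hedged sketch; the record literal `{x: 1f, y: 1f}` of this era corresponds roughly to a small `Copy` struct today):

```rust
// Sketch: a small record is stored inline, so assignment copies the
// whole value rather than a pointer to it.
#[derive(Clone, Copy, Debug)]
struct Point {
    x: f64,
    y: f64,
}

fn main() {
    let a = Point { x: 1.0, y: 1.0 }; // lives directly on the stack
    let mut b = a;                    // copies both fields, not a pointer
    b.x = 2.0;
    assert_eq!(a.x, 1.0);             // `a` is unaffected by the copy
    println!("{:?} {:?}", a, b);
}
```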
For small records like `point`, this is usually more efficient than
allocating memory and going through a pointer. But for big records, or
# Closures
Named functions, like those we've seen so far, may not refer to local
-variables decalared outside the function - they do not "close over
+variables declared outside the function - they do not "close over
their environment". For example you couldn't write the following:
~~~~ {.ignore}
for drop.
In the constructor, the compiler will enforce that all fields are initialized
-before doing anything which might allow them to be accessed. This includes
+before doing anything that might allow them to be accessed. This includes
returning from the constructor, calling any method on 'self', calling any
function with 'self' as an argument, or taking a reference to 'self'. Mutation
of immutable fields is possible only in the constructor, and only before doing
## Safe references
-*This system has recently changed. An explanantion is forthcoming.*
+*This system has recently changed. An explanation is forthcoming.*
## Other uses of safe references
of `seq<int>`—the `of` clause *refers* to a type, rather than defining
one.
+The type parameters bound by an iface are in scope in each of the
+method declarations. So, re-declaring the type parameter
+`T` as an explicit type parameter for `len` -- in either the iface or
+the impl -- would be a compile-time error.
+
+## The `self` type in interfaces
+
+In an interface, `self` is a special type that you can think of as a
+type parameter. An implementation of the interface for any given type
+`T` replaces the `self` type parameter with `T`. The following
+interface describes types that support an equality operation:
+
+~~~~
+iface eq {
+ fn equals(&&other: self) -> bool;
+}
+
+impl of eq for int {
+ fn equals(&&other: int) -> bool { other == self }
+}
+~~~~
+
+Notice that `equals` takes an `int` argument, rather than a `self` argument, in
+an implementation for type `int`.
+
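The `self`-type substitution carried straight through to modern Rust, where it is spelled `Self`; the same example can be sketched in today's syntax (the trait is renamed `Eq2` here only to avoid colliding with the standard library's `Eq`):

```rust
// Sketch: `Self` in a trait stands for the implementing type, so
// `equals` takes an `i32` argument in the impl for `i32`.
trait Eq2 {
    fn equals(&self, other: &Self) -> bool;
}

impl Eq2 for i32 {
    fn equals(&self, other: &i32) -> bool {
        *other == *self
    }
}

fn main() {
    assert!(5i32.equals(&5));
    assert!(!5i32.equals(&6));
}
```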
## Casting to an interface type
The above allows us to define functions that polymorphically act on
If you only intend to use an implementation for static overloading,
and there is no interface available that it conforms to, you are free
-to leave off the `of` clause.
-
-~~~~
-# type currency = ();
-# fn mk_currency(x: int, s: ~str) {}
-impl int_util for int {
- fn times(b: fn(int)) {
- let mut i = 0;
- while i < self { b(i); i += 1; }
- }
- fn dollars() -> currency {
- mk_currency(self, ~"USD")
- }
-}
-~~~~
-
-This allows cutesy things like `send_payment(10.dollars())`. And the
-nice thing is that it's fully scoped, so the uneasy feeling that
-anybody with experience in object-oriented languages (with the
-possible exception of Rubyists) gets at the sight of such things is
-not justified. It's harmless!
+to leave off the `of` clause. However, this is only possible when you
+are defining an implementation in the same module as the receiver
+type, and the receiver type is a named type (i.e., an enum or a
+class); [single-variant enums](#single_variant_enum) are a common
+choice.
# Interacting with foreign code
briefly at how it is used.
To see how `spawn_listener()` works, we will create a child task
-which receives `uint` messages, converts them to a string, and sends
+that receives `uint` messages, converts them to a string, and sends
the string in response. The child terminates when `0` is received.
-Here is the function which implements the child task:
+Here is the function that implements the child task:
~~~~
# import comm::{port, chan, methods};
doc/version_info.html: version_info_template.html
@$(call E, version-info: $@)
sed -e "s/VERSION/$(CFG_RELEASE)/; s/SHORT_HASH/$(shell echo \
- $(CFG_VER_HASH) | head --bytes=8)/;\
+ $(CFG_VER_HASH) | head -c 8)/;\
s/STAMP/$(CFG_VER_HASH)/;" $< >$@
GENERATED += doc/version.md
if vec::len(c.opts.free) < 3u {
for c.sources.each_value |v| {
info(#fmt("%s (%s) via %s",
- copy v.name, copy v.url, copy v.method));
+ v.name, v.url, v.method));
}
ret;
}
import run::spawn_process;
-import io::writer_util;
+import io::{writer_util, reader_util};
import libc::{c_int, pid_t};
export run;
syn keyword rustKeyword alt again as break
syn keyword rustKeyword check claim const copy do drop else export extern fail
syn keyword rustKeyword for if impl import in let log
-syn keyword rustKeyword loop mod mut new of pure
+syn keyword rustKeyword loop mod mut new of owned pure
syn keyword rustKeyword ret self to unchecked
syn match rustKeyword "unsafe" " Allows also matching unsafe::foo()
syn keyword rustKeyword use while with
" FIXME: Scoped impl's name is also fallen in this category
-syn keyword rustKeyword mod iface trait class enum type nextgroup=rustIdentifier skipwhite
+syn keyword rustKeyword mod iface trait class struct enum type nextgroup=rustIdentifier skipwhite
syn keyword rustKeyword fn nextgroup=rustFuncName skipwhite
syn match rustIdentifier "\%([^[:cntrl:][:space:][:punct:][:digit:]]\|_\)\%([^[:cntrl:][:punct:][:space:]]\|_\)*" display contained
syn match rustModPath "\w\(\w\)*::[^<]"he=e-3,me=e-3
syn match rustModPathSep "::"
-syn match rustFuncCall "\w\(\w\)*("he=e-1,me=e-1
-syn match rustFuncCall "\w\(\w\)*::<"he=e-3,me=e-3 " foo::<T>();
+syn match rustFuncCall "\w\(\w\)*("he=e-1,me=e-1 contains=rustAssert
+syn match rustFuncCall "\w\(\w\)*::<"he=e-3,me=e-3 contains=rustAssert " foo::<T>();
+
+syn match rustMacro '\w\(\w\)*!'
syn region rustString start=+L\="+ skip=+\\\\\|\\"+ end=+"+ contains=rustTodo
" Other Suggestions:
" hi def link rustModPathSep Conceal
" hi rustAssert ctermfg=yellow
-" hi rustFuncCall ctermfg=magenta
+" hi rustMacro ctermfg=magenta
syn sync minlines=200
syn sync maxlines=500
Memcheck:Leak
fun:malloc
...
- fun:uv_loop_delete
+ fun:*uv_loop_delete*
}
{
* Access the underlying data in an atomically reference counted
* wrapper.
*/
-fn get<T: const send>(rc: &a.arc<T>) -> &a.T {
+fn get<T: const send>(rc: &arc<T>) -> &T {
unsafe {
let ptr: ~arc_data<T> = unsafe::reinterpret_cast((*rc).data);
// Cast us back into the correct region
--- /dev/null
+//! Shared Vectors
+
+import ptr::addr_of;
+
+export init_op;
+export capacity;
+export build_sized, build;
+export map;
+export from_fn, from_elem;
+export unsafe;
+
+/// Code for dealing with @-vectors. This is pretty incomplete, and
+/// contains a bunch of duplication from the code for ~-vectors.
+
+#[abi = "cdecl"]
+extern mod rustrt {
+ fn vec_reserve_shared_actual(++t: *sys::type_desc,
+ ++v: **vec::unsafe::vec_repr,
+ ++n: libc::size_t);
+}
+
+#[abi = "rust-intrinsic"]
+extern mod rusti {
+ fn move_val_init<T>(&dst: T, -src: T);
+}
+
+/// A function used to initialize the elements of a vector
+type init_op<T> = fn(uint) -> T;
+
+/// Returns the number of elements the vector can hold without reallocating
+#[inline(always)]
+pure fn capacity<T>(&&v: @[const T]) -> uint {
+ unsafe {
+ let repr: **unsafe::vec_repr =
+ ::unsafe::reinterpret_cast(addr_of(v));
+ (**repr).alloc / sys::size_of::<T>()
+ }
+}
+
+/**
+ * Builds a vector by calling a provided function with an argument
+ * function that pushes an element to the back of a vector.
+ * This version takes an initial size for the vector.
+ *
+ * # Arguments
+ *
+ * * size - An initial size of the vector to reserve
+ * * builder - A function that will construct the vector. It receives
+ * as an argument a function that will push an element
+ * onto the vector being constructed.
+ */
+#[inline(always)]
+pure fn build_sized<A>(size: uint, builder: fn(push: pure fn(+A))) -> @[A] {
+ let mut vec = @[];
+ unsafe {
+ unsafe::reserve(vec, size);
+ // This is an awful hack to be able to make the push function
+ // pure. Is there a better way?
+ ::unsafe::reinterpret_cast::
+ <fn(push: pure fn(+A)), fn(push: fn(+A))>
+ (builder)(|+x| unsafe::push(vec, x));
+ }
+ ret vec;
+}
+
+/**
+ * Builds a vector by calling a provided function with an argument
+ * function that pushes an element to the back of a vector.
+ *
+ * # Arguments
+ *
+ * * builder - A function that will construct the vector. It receives
+ * as an argument a function that will push an element
+ * onto the vector being constructed.
+ */
+#[inline(always)]
+pure fn build<A>(builder: fn(push: pure fn(+A))) -> @[A] {
+ build_sized(4, builder)
+}
+
+/// Apply a function to each element of a vector and return the results
+pure fn map<T, U>(v: &[T], f: fn(T) -> U) -> @[U] {
+ do build_sized(v.len()) |push| {
+ for vec::each(v) |elem| {
+ push(f(elem));
+ }
+ }
+}
+
+/**
+ * Creates and initializes an immutable vector.
+ *
+ * Creates an immutable vector of size `n_elts` and initializes the elements
+ * to the value returned by the function `op`.
+ */
+pure fn from_fn<T>(n_elts: uint, op: init_op<T>) -> @[T] {
+ do build_sized(n_elts) |push| {
+ let mut i: uint = 0u;
+ while i < n_elts { push(op(i)); i += 1u; }
+ }
+}
+
+/**
+ * Creates and initializes an immutable vector.
+ *
+ * Creates an immutable vector of size `n_elts` and initializes the elements
+ * to the value `t`.
+ */
+pure fn from_elem<T: copy>(n_elts: uint, t: T) -> @[T] {
+ do build_sized(n_elts) |push| {
+ let mut i: uint = 0u;
+ while i < n_elts { push(t); i += 1u; }
+ }
+}
+
+
+mod unsafe {
+ type vec_repr = vec::unsafe::vec_repr;
+ type slice_repr = vec::unsafe::slice_repr;
+
+ /**
+ * Sets the length of a vector
+ *
+ * This will explicitly set the size of the vector, without actually
+     * modifying its buffers, so it is up to the caller to ensure that
+ * the vector is actually the specified size.
+ */
+ #[inline(always)]
+ unsafe fn set_len<T>(&&v: @[const T], new_len: uint) {
+ let repr: **vec_repr = ::unsafe::reinterpret_cast(addr_of(v));
+ (**repr).fill = new_len * sys::size_of::<T>();
+ }
+
+ /// Append an element to a vector
+ #[inline(always)]
+ unsafe fn push<T>(&v: @[const T], +initval: T) {
+ let repr: **vec_repr = ::unsafe::reinterpret_cast(addr_of(v));
+ let fill = (**repr).fill;
+ if (**repr).alloc > fill {
+ (**repr).fill += sys::size_of::<T>();
+ let p = addr_of((**repr).data);
+ let p = ptr::offset(p, fill) as *mut T;
+ rusti::move_val_init(*p, initval);
+ }
+ else {
+ push_slow(v, initval);
+ }
+ }
+ unsafe fn push_slow<T>(&v: @[const T], +initval: T) {
+ reserve_at_least(v, v.len() + 1u);
+ push(v, initval);
+ }
+ /**
+ * Reserves capacity for exactly `n` elements in the given vector.
+ *
+ * If the capacity for `v` is already equal to or greater than the
+ * requested capacity, then no action is taken.
+ *
+ * # Arguments
+ *
+ * * v - A vector
+ * * n - The number of elements to reserve space for
+ */
+ unsafe fn reserve<T>(&v: @[const T], n: uint) {
+ // Only make the (slow) call into the runtime if we have to
+ if capacity(v) < n {
+ let ptr = addr_of(v) as **vec_repr;
+ rustrt::vec_reserve_shared_actual(sys::get_type_desc::<T>(),
+ ptr, n as libc::size_t);
+ }
+ }
+
+ /**
+ * Reserves capacity for at least `n` elements in the given vector.
+ *
+ * This function will over-allocate in order to amortize the
+ * allocation costs in scenarios where the caller may need to
+ * repeatedly reserve additional space.
+ *
+ * If the capacity for `v` is already equal to or greater than the
+ * requested capacity, then no action is taken.
+ *
+ * # Arguments
+ *
+ * * v - A vector
+ * * n - The number of elements to reserve space for
+ */
+ unsafe fn reserve_at_least<T>(&v: @[const T], n: uint) {
+ reserve(v, uint::next_power_of_two(n));
+ }
+
+}
}
fn peek_(p: *rust_port) -> bool {
+ // Yield here before we check to see if someone sent us a message
+    // FIXME #524, if the compiler generates yields, we don't need this
+ task::yield();
rustrt::rust_port_size(p) != 0u as libc::size_t
}
fn select2<A: send, B: send>(p_a: port<A>, p_b: port<B>)
-> either<A, B> {
let ports = ~[(**p_a).po, (**p_b).po];
- let n_ports = 2 as libc::size_t;
let yield = 0u, yieldp = ptr::addr_of(yield);
let mut resport: *rust_port;
resport = rusti::init::<*rust_port>();
- do vec::as_buf(ports) |ports| {
- rustrt::rust_port_select(ptr::addr_of(resport), ports, n_ports,
- yieldp);
+ do vec::as_buf(ports) |ports, n_ports| {
+ rustrt::rust_port_select(ptr::addr_of(resport), ports,
+ n_ports as size_t, yieldp);
}
if yield != 0u {
#[ignore(cfg(windows))]
fn test_port_detach_fail() {
for iter::repeat(100u) {
- let builder = task::builder();
- task::unsupervise(builder);
- do task::run(builder) {
+ do task::spawn_unlinked {
let po = port();
let ch = po.chan();
export int, i8, i16, i32, i64;
export uint, u8, u16, u32, u64;
export float, f32, f64;
-export box, char, str, ptr, vec, bool;
+export box, char, str, ptr, vec, at_vec, bool;
export either, option, result, iter;
export libc, os, io, run, rand, sys, unsafe, logging;
export arc, comm, task, future, pipes;
export extfmt;
+// The test harness links against core, so don't include runtime in tests.
+// FIXME (#2861): Uncomment this after snapshot gets updated.
+//#[cfg(notest)]
+export rt;
export tuple;
export to_str, to_bytes;
export dvec, dvec_iter;
export dlist, dlist_iter;
+export hash;
export cmp;
export num;
mod str;
mod ptr;
mod vec;
+mod at_vec;
mod bool;
mod tuple;
mod cmp;
mod num;
+mod hash;
mod either;
mod iter;
mod logging;
// Exported but not part of the public interface
mod extfmt;
+// The test harness links against core, so don't include runtime in tests.
+#[cfg(notest)]
+mod rt;
// For internal use, not exported
import option::{some, none};
import option = option::option;
import path = path::path;
-import tuple::extensions;
+import tuple::{extensions, tuple_ops, extended_tuple_ops};
import str::{extensions, str_slice, unique_str};
import vec::extensions;
import vec::{const_vector, copyable_vector, immutable_vector};
import vec::{immutable_copyable_vector, iter_trait_extensions, vec_concat};
-import iter::{base_iter, extended_iter, copyable_iter, times};
+import iter::{base_iter, extended_iter, copyable_iter, times, timesi};
import option::extensions;
import option_iter::extensions;
import ptr::{extensions, ptr};
import rand::extensions;
import result::extensions;
-import int::{num, times};
-import i8::{num, times};
-import i16::{num, times};
-import i32::{num, times};
-import i64::{num, times};
-import uint::{num, times};
-import u8::{num, times};
-import u16::{num, times};
-import u32::{num, times};
-import u64::{num, times};
+import int::{num, times, timesi};
+import i8::{num, times, timesi};
+import i16::{num, times, timesi};
+import i32::{num, times, timesi};
+import i64::{num, times, timesi};
+import uint::{num, times, timesi};
+import u8::{num, times, timesi};
+import u16::{num, times, timesi};
+import u32::{num, times, timesi};
+import u64::{num, times, timesi};
import float::num;
import f32::num;
import f64::num;
export path, option, some, none, unreachable;
export extensions;
// The following exports are the extension impls for numeric types
-export num, times;
+export num, times, timesi;
// The following exports are the common traits
export str_slice, unique_str;
export const_vector, copyable_vector, immutable_vector;
export immutable_copyable_vector, iter_trait_extensions, vec_concat;
export base_iter, copyable_iter, extended_iter;
+export tuple_ops, extended_tuple_ops;
export ptr;
// Export the log levels as global constants. Higher levels mean
import dlist_iter::extensions;
export dlist, dlist_node;
-export create, from_elt, from_vec, extensions;
+export new_dlist, from_elem, from_vec, extensions;
type dlist_link<T> = option<dlist_node<T>>;
enum dlist_node<T> = @{
data: T,
- mut root: option<dlist<T>>,
+ mut linked: bool, // for assertions
mut prev: dlist_link<T>,
mut next: dlist_link<T>
};
none { fail ~"This dlist node has no previous neighbour." }
}
}
-
- /// Remove a node from whatever dlist it's on (failing if none).
- fn remove() {
- if option::is_some(self.root) {
- option::get(self.root).remove(self);
- } else {
- fail ~"Removing an orphaned dlist node - what do I remove from?"
- }
- }
}
/// Creates a new dlist node with the given data.
-pure fn create_node<T>(+data: T) -> dlist_node<T> {
- dlist_node(@{data: data, mut root: none, mut prev: none, mut next: none})
+pure fn new_dlist_node<T>(+data: T) -> dlist_node<T> {
+ dlist_node(@{data: data, mut linked: false,
+ mut prev: none, mut next: none})
}
/// Creates a new, empty dlist.
-pure fn create<T>() -> dlist<T> {
+pure fn new_dlist<T>() -> dlist<T> {
dlist(@{mut size: 0, mut hd: none, mut tl: none})
}
/// Creates a new dlist with a single element
-fn from_elt<T>(+data: T) -> dlist<T> {
- let list = create();
- list.push(data);
+pure fn from_elem<T>(+data: T) -> dlist<T> {
+ let list = new_dlist();
+ unchecked { list.push(data); }
list
}
fn from_vec<T: copy>(+vec: &[T]) -> dlist<T> {
- do vec::foldl(create(), vec) |list,data| {
+ do vec::foldl(new_dlist(), vec) |list,data| {
list.push(data); // Iterating left-to-right -- add newly to the tail.
list
}
}
+/// Produce a list from a list of lists, leaving no elements behind in the
+/// input. O(number of sub-lists).
+fn concat<T>(lists: dlist<dlist<T>>) -> dlist<T> {
+ let result = new_dlist();
+ while !lists.is_empty() {
+ result.append(lists.pop().get());
+ }
+ result
+}
+
impl private_methods<T> for dlist<T> {
pure fn new_link(-data: T) -> dlist_link<T> {
- some(dlist_node(@{data: data, mut root: some(self),
+ some(dlist_node(@{data: data, mut linked: true,
mut prev: none, mut next: none}))
}
pure fn assert_mine(nobe: dlist_node<T>) {
- alt nobe.root {
- some(me) { assert box::ptr_eq(*self, *me); }
- none { fail ~"This node isn't on this dlist." }
+ // These asserts could be stronger if we had node-root back-pointers,
+ // but those wouldn't allow for O(1) append.
+ if self.size == 0 {
+ fail ~"This dlist is empty; that node can't be on it."
+ }
+ if !nobe.linked { fail ~"That node isn't linked to any dlist." }
+ if !((nobe.prev.is_some()
+ || box::ptr_eq(*self.hd.expect(~"headless dlist?"), *nobe)) &&
+ (nobe.next.is_some()
+ || box::ptr_eq(*self.tl.expect(~"tailless dlist?"), *nobe))) {
+ fail ~"That node isn't on this dlist."
}
}
fn make_mine(nobe: dlist_node<T>) {
- if option::is_some(nobe.root) {
+ if nobe.prev.is_some() || nobe.next.is_some() || nobe.linked {
fail ~"Cannot insert node that's already on a dlist!"
}
- nobe.root = some(self);
+ nobe.linked = true;
}
// Link two nodes together. If either of them is 'none', also sets
// the head and/or tail pointers appropriately.
self.link(nobe.prev, nobe.next);
nobe.prev = none; // Release extraneous references.
nobe.next = none;
- nobe.root = none;
+ nobe.linked = false;
self.size -= 1;
}
tl.map(|nobe| self.unlink(nobe));
tl
}
+ /// Remove data from the head of the list. O(1).
+ fn pop() -> option<T> {
+ do option::map_consume(self.pop_n()) |nobe| {
+ let dlist_node(@{ data: x, _ }) <- nobe;
+ x
+ }
+ }
+ /// Remove data from the tail of the list. O(1).
+ fn pop_tail() -> option<T> {
+ do option::map_consume(self.pop_tail_n()) |nobe| {
+ let dlist_node(@{ data: x, _ }) <- nobe;
+ x
+ }
+ }
/// Get the node at the list's head. O(1).
pure fn peek_n() -> option<dlist_node<T>> { self.hd }
/// Get the node at the list's tail. O(1).
/// Remove a node from anywhere in the list. O(1).
fn remove(nobe: dlist_node<T>) { self.unlink(nobe); }
+ /**
+ * Empty another list onto the end of this list, joining this list's tail
+ * to the other list's head. O(1).
+ */
+ fn append(them: dlist<T>) {
+ if box::ptr_eq(*self, *them) {
+ fail ~"Cannot append a dlist to itself!"
+ }
+ if them.len() > 0 {
+ self.link(self.tl, them.hd);
+ self.tl = them.tl;
+ self.size += them.size;
+ them.size = 0;
+ them.hd = none;
+ them.tl = none;
+ }
+ }
+ /**
+ * Empty another list onto the start of this list, joining the other
+ * list's tail to this list's head. O(1).
+ */
+ fn prepend(them: dlist<T>) {
+ if box::ptr_eq(*self, *them) {
+ fail ~"Cannot prepend a dlist to itself!"
+ }
+ if them.len() > 0 {
+ self.link(them.tl, self.hd);
+ self.hd = them.hd;
+ self.size += them.size;
+ them.size = 0;
+ them.hd = none;
+ them.tl = none;
+ }
+ }
+
+ /// Reverse the list's elements in place. O(n).
+ fn reverse() {
+ let temp = new_dlist::<T>();
+ while !self.is_empty() {
+ let nobe = self.pop_n().get();
+ nobe.linked = true; // pop_n unlinked it; re-mark before re-adding.
+ temp.add_head(some(nobe));
+ }
+ self.hd = temp.hd;
+ self.tl = temp.tl;
+ self.size = temp.size;
+ }
+
+ /// Iterate over nodes.
+ pure fn each_node(f: fn(dlist_node<T>) -> bool) {
+ let mut link = self.peek_n();
+ while link.is_some() {
+ let nobe = link.get();
+ if !f(nobe) { break; }
+ link = nobe.next_link();
+ }
+ }
+
/// Check data structure integrity. O(n).
fn assert_consistent() {
if option::is_none(self.hd) || option::is_none(self.tl) {
let mut rabbit = link;
while option::is_some(link) {
let nobe = option::get(link);
- // check self on this list
- assert option::is_some(nobe.root) &&
- box::ptr_eq(*option::get(nobe.root), *self);
+ assert nobe.linked;
// check cycle
if option::is_some(rabbit) { rabbit = option::get(rabbit).next; }
if option::is_some(rabbit) { rabbit = option::get(rabbit).next; }
rabbit = link;
while option::is_some(link) {
let nobe = option::get(link);
- // check self on this list
- assert option::is_some(nobe.root) &&
- box::ptr_eq(*option::get(nobe.root), *self);
+ assert nobe.linked;
// check cycle
if option::is_some(rabbit) { rabbit = option::get(rabbit).prev; }
if option::is_some(rabbit) { rabbit = option::get(rabbit).prev; }
}
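The `rabbit` pointer in `assert_consistent` above is Floyd's tortoise-and-hare cycle check: the hare advances two links for each link the tortoise takes, so the two pointers can only meet again if the chain loops back on itself. A minimal sketch of the same idea in modern Rust, using an index-based arena instead of boxed nodes (the `has_cycle` helper and arena layout are illustrative, not part of this patch):

```rust
// Each node's successor is an index into the arena, or None at the tail.
// Assumes `start` is a valid index into `next`.
fn has_cycle(next: &[Option<usize>], start: usize) -> bool {
    let mut slow = Some(start);
    let mut fast = Some(start);
    loop {
        // The hare takes two steps per iteration; hitting the tail
        // proves the list is acyclic.
        fast = match fast.and_then(|i| next[i]) {
            Some(i) => next[i],
            None => return false,
        };
        // The tortoise takes one step.
        slow = slow.and_then(|i| next[i]);
        // If they ever point at the same node, the chain loops.
        if fast.is_some() && fast == slow {
            return true;
        }
    }
}

fn main() {
    // 0 -> 1 -> 2 -> end: no cycle.
    assert!(!has_cycle(&[Some(1), Some(2), None], 0));
    // 0 -> 1 -> 0: a two-node cycle.
    assert!(has_cycle(&[Some(1), Some(0)], 0));
}
```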
impl extensions<T: copy> for dlist<T> {
- /// Remove data from the head of the list. O(1).
- fn pop() -> option<T> { self.pop_n().map (|nobe| nobe.data) }
- /// Remove data from the tail of the list. O(1).
- fn pop_tail() -> option<T> { self.pop_tail_n().map (|nobe| nobe.data) }
/// Get data at the list's head. O(1).
- fn peek() -> option<T> { self.peek_n().map (|nobe| nobe.data) }
+ pure fn peek() -> option<T> { self.peek_n().map (|nobe| nobe.data) }
/// Get data at the list's tail. O(1).
- fn peek_tail() -> option<T> { self.peek_tail_n().map (|nobe| nobe.data) }
+ pure fn peek_tail() -> option<T> {
+ self.peek_tail_n().map (|nobe| nobe.data)
+ }
/// Get data at the list's head, failing if empty. O(1).
- pure fn head() -> T { self.head_n().data }
+ pure fn head() -> T { self.head_n().data }
/// Get data at the list's tail, failing if empty. O(1).
- pure fn tail() -> T { self.tail_n().data }
+ pure fn tail() -> T { self.tail_n().data }
+ /// Get the elements of the list as a vector. O(n).
+ pure fn to_vec() -> ~[mut T] {
+ let mut v = ~[mut];
+ unchecked {
+ vec::reserve(v, self.size);
+ // Take this out of the unchecked when iter's functions are pure
+ for self.eachi |index,data| {
+ v[index] = data;
+ }
+ }
+ v
+ }
}
#[cfg(test)]
mod tests {
+ #[test]
+ fn test_dlist_concat() {
+ let a = from_vec(~[1,2]);
+ let b = from_vec(~[3,4]);
+ let c = from_vec(~[5,6]);
+ let d = from_vec(~[7,8]);
+ let ab = from_vec(~[a,b]);
+ let cd = from_vec(~[c,d]);
+ let abcd = concat(concat(from_vec(~[ab,cd])));
+ abcd.assert_consistent(); assert abcd.len() == 8;
+ abcd.assert_consistent(); assert abcd.pop().get() == 1;
+ abcd.assert_consistent(); assert abcd.pop().get() == 2;
+ abcd.assert_consistent(); assert abcd.pop().get() == 3;
+ abcd.assert_consistent(); assert abcd.pop().get() == 4;
+ abcd.assert_consistent(); assert abcd.pop().get() == 5;
+ abcd.assert_consistent(); assert abcd.pop().get() == 6;
+ abcd.assert_consistent(); assert abcd.pop().get() == 7;
+ abcd.assert_consistent(); assert abcd.pop().get() == 8;
+ abcd.assert_consistent(); assert abcd.is_empty();
+ }
+ #[test]
+ fn test_dlist_append() {
+ let a = from_vec(~[1,2,3]);
+ let b = from_vec(~[4,5,6]);
+ a.append(b);
+ assert a.len() == 6;
+ assert b.len() == 0;
+ b.assert_consistent();
+ a.assert_consistent(); assert a.pop().get() == 1;
+ a.assert_consistent(); assert a.pop().get() == 2;
+ a.assert_consistent(); assert a.pop().get() == 3;
+ a.assert_consistent(); assert a.pop().get() == 4;
+ a.assert_consistent(); assert a.pop().get() == 5;
+ a.assert_consistent(); assert a.pop().get() == 6;
+ a.assert_consistent(); assert a.is_empty();
+ }
+ #[test]
+ fn test_dlist_append_empty() {
+ let a = from_vec(~[1,2,3]);
+ let b = new_dlist::<int>();
+ a.append(b);
+ assert a.len() == 3;
+ assert b.len() == 0;
+ b.assert_consistent();
+ a.assert_consistent(); assert a.pop().get() == 1;
+ a.assert_consistent(); assert a.pop().get() == 2;
+ a.assert_consistent(); assert a.pop().get() == 3;
+ a.assert_consistent(); assert a.is_empty();
+ }
+ #[test]
+ fn test_dlist_append_to_empty() {
+ let a = new_dlist::<int>();
+ let b = from_vec(~[4,5,6]);
+ a.append(b);
+ assert a.len() == 3;
+ assert b.len() == 0;
+ b.assert_consistent();
+ a.assert_consistent(); assert a.pop().get() == 4;
+ a.assert_consistent(); assert a.pop().get() == 5;
+ a.assert_consistent(); assert a.pop().get() == 6;
+ a.assert_consistent(); assert a.is_empty();
+ }
+ #[test]
+ fn test_dlist_append_two_empty() {
+ let a = new_dlist::<int>();
+ let b = new_dlist::<int>();
+ a.append(b);
+ assert a.len() == 0;
+ assert b.len() == 0;
+ b.assert_consistent();
+ a.assert_consistent();
+ }
+ #[test]
+ #[ignore(cfg(windows))]
+ #[should_fail]
+ fn test_dlist_append_self() {
+ let a = new_dlist::<int>();
+ a.append(a);
+ }
+ #[test]
+ #[ignore(cfg(windows))]
+ #[should_fail]
+ fn test_dlist_prepend_self() {
+ let a = new_dlist::<int>();
+ a.prepend(a);
+ }
+ #[test]
+ fn test_dlist_prepend() {
+ let a = from_vec(~[1,2,3]);
+ let b = from_vec(~[4,5,6]);
+ b.prepend(a);
+ assert a.len() == 0;
+ assert b.len() == 6;
+ a.assert_consistent();
+ b.assert_consistent(); assert b.pop().get() == 1;
+ b.assert_consistent(); assert b.pop().get() == 2;
+ b.assert_consistent(); assert b.pop().get() == 3;
+ b.assert_consistent(); assert b.pop().get() == 4;
+ b.assert_consistent(); assert b.pop().get() == 5;
+ b.assert_consistent(); assert b.pop().get() == 6;
+ b.assert_consistent(); assert b.is_empty();
+ }
+ #[test]
+ fn test_dlist_reverse() {
+ let a = from_vec(~[5,4,3,2,1]);
+ a.reverse();
+ assert a.len() == 5;
+ a.assert_consistent(); assert a.pop().get() == 1;
+ a.assert_consistent(); assert a.pop().get() == 2;
+ a.assert_consistent(); assert a.pop().get() == 3;
+ a.assert_consistent(); assert a.pop().get() == 4;
+ a.assert_consistent(); assert a.pop().get() == 5;
+ a.assert_consistent(); assert a.is_empty();
+ }
+ #[test]
+ fn test_dlist_reverse_empty() {
+ let a = new_dlist::<int>();
+ a.reverse();
+ assert a.len() == 0;
+ a.assert_consistent();
+ }
+ #[test]
+ fn test_dlist_each_node() {
+ let a = from_vec(~[1,2,4,5]);
+ for a.each_node |nobe| {
+ if nobe.data > 3 {
+ a.insert_before(3, nobe);
+ }
+ }
+ assert a.len() == 6;
+ a.assert_consistent(); assert a.pop().get() == 1;
+ a.assert_consistent(); assert a.pop().get() == 2;
+ a.assert_consistent(); assert a.pop().get() == 3;
+ a.assert_consistent(); assert a.pop().get() == 4;
+ a.assert_consistent(); assert a.pop().get() == 3;
+ a.assert_consistent(); assert a.pop().get() == 5;
+ a.assert_consistent(); assert a.is_empty();
+ }
#[test]
fn test_dlist_is_empty() {
- let empty = create::<int>();
+ let empty = new_dlist::<int>();
let full1 = from_vec(~[1,2,3]);
assert empty.is_empty();
assert !full1.is_empty();
}
#[test]
fn test_dlist_push() {
- let l = create::<int>();
+ let l = new_dlist::<int>();
l.push(1);
assert l.head() == 1;
assert l.tail() == 1;
}
#[test]
fn test_dlist_push_head() {
- let l = create::<int>();
+ let l = new_dlist::<int>();
l.push_head(3);
assert l.head() == 3;
assert l.tail() == 3;
}
#[test]
fn test_dlist_remove_head() {
- let l = create::<int>();
+ let l = new_dlist::<int>();
l.assert_consistent(); let one = l.push_n(1);
l.assert_consistent(); let _two = l.push_n(2);
l.assert_consistent(); let _three = l.push_n(3);
}
#[test]
fn test_dlist_remove_mid() {
- let l = create::<int>();
+ let l = new_dlist::<int>();
l.assert_consistent(); let _one = l.push_n(1);
l.assert_consistent(); let two = l.push_n(2);
l.assert_consistent(); let _three = l.push_n(3);
}
#[test]
fn test_dlist_remove_tail() {
- let l = create::<int>();
+ let l = new_dlist::<int>();
l.assert_consistent(); let _one = l.push_n(1);
l.assert_consistent(); let _two = l.push_n(2);
l.assert_consistent(); let three = l.push_n(3);
}
#[test]
fn test_dlist_remove_one_two() {
- let l = create::<int>();
+ let l = new_dlist::<int>();
l.assert_consistent(); let one = l.push_n(1);
l.assert_consistent(); let two = l.push_n(2);
l.assert_consistent(); let _three = l.push_n(3);
}
#[test]
fn test_dlist_remove_one_three() {
- let l = create::<int>();
+ let l = new_dlist::<int>();
l.assert_consistent(); let one = l.push_n(1);
l.assert_consistent(); let _two = l.push_n(2);
l.assert_consistent(); let three = l.push_n(3);
}
#[test]
fn test_dlist_remove_two_three() {
- let l = create::<int>();
+ let l = new_dlist::<int>();
l.assert_consistent(); let _one = l.push_n(1);
l.assert_consistent(); let two = l.push_n(2);
l.assert_consistent(); let three = l.push_n(3);
}
#[test]
fn test_dlist_remove_all() {
- let l = create::<int>();
+ let l = new_dlist::<int>();
l.assert_consistent(); let one = l.push_n(1);
l.assert_consistent(); let two = l.push_n(2);
l.assert_consistent(); let three = l.push_n(3);
}
#[test]
fn test_dlist_insert_n_before() {
- let l = create::<int>();
+ let l = new_dlist::<int>();
l.assert_consistent(); let _one = l.push_n(1);
l.assert_consistent(); let two = l.push_n(2);
- l.assert_consistent(); let three = create_node(3);
+ l.assert_consistent(); let three = new_dlist_node(3);
l.assert_consistent(); assert l.len() == 2;
l.assert_consistent(); l.insert_n_before(three, two);
l.assert_consistent(); assert l.len() == 3;
}
#[test]
fn test_dlist_insert_n_after() {
- let l = create::<int>();
+ let l = new_dlist::<int>();
l.assert_consistent(); let one = l.push_n(1);
l.assert_consistent(); let _two = l.push_n(2);
- l.assert_consistent(); let three = create_node(3);
+ l.assert_consistent(); let three = new_dlist_node(3);
l.assert_consistent(); assert l.len() == 2;
l.assert_consistent(); l.insert_n_after(three, one);
l.assert_consistent(); assert l.len() == 3;
}
#[test]
fn test_dlist_insert_before_head() {
- let l = create::<int>();
+ let l = new_dlist::<int>();
l.assert_consistent(); let one = l.push_n(1);
l.assert_consistent(); let _two = l.push_n(2);
l.assert_consistent(); assert l.len() == 2;
}
#[test]
fn test_dlist_insert_after_tail() {
- let l = create::<int>();
+ let l = new_dlist::<int>();
l.assert_consistent(); let _one = l.push_n(1);
l.assert_consistent(); let two = l.push_n(2);
l.assert_consistent(); assert l.len() == 2;
}
#[test] #[should_fail] #[ignore(cfg(windows))]
fn test_asymmetric_link() {
- let l = create::<int>();
+ let l = new_dlist::<int>();
let _one = l.push_n(1);
let two = l.push_n(2);
two.prev = none;
}
#[test] #[should_fail] #[ignore(cfg(windows))]
fn test_cyclic_list() {
- let l = create::<int>();
+ let l = new_dlist::<int>();
let one = l.push_n(1);
let _two = l.push_n(2);
let three = l.push_n(3);
}
#[test] #[should_fail] #[ignore(cfg(windows))]
fn test_headless() {
- create::<int>().head();
+ new_dlist::<int>().head();
}
#[test] #[should_fail] #[ignore(cfg(windows))]
fn test_insert_already_present_before() {
- let l = create::<int>();
+ let l = new_dlist::<int>();
let one = l.push_n(1);
let two = l.push_n(2);
l.insert_n_before(two, one);
}
#[test] #[should_fail] #[ignore(cfg(windows))]
fn test_insert_already_present_after() {
- let l = create::<int>();
+ let l = new_dlist::<int>();
let one = l.push_n(1);
let two = l.push_n(2);
l.insert_n_after(one, two);
}
#[test] #[should_fail] #[ignore(cfg(windows))]
fn test_insert_before_orphan() {
- let l = create::<int>();
- let one = create_node(1);
- let two = create_node(2);
+ let l = new_dlist::<int>();
+ let one = new_dlist_node(1);
+ let two = new_dlist_node(2);
l.insert_n_before(one, two);
}
#[test] #[should_fail] #[ignore(cfg(windows))]
fn test_insert_after_orphan() {
- let l = create::<int>();
- let one = create_node(1);
- let two = create_node(2);
+ let l = new_dlist::<int>();
+ let one = new_dlist_node(1);
+ let two = new_dlist_node(2);
l.insert_n_after(two, one);
}
}
// almost nothing works without the copy bound due to limitations
// around closures.
impl extensions<A> for dvec<A> {
+ /// Reserves space for `count` elements.
+ fn reserve(count: uint) {
+ vec::reserve(self.data, count)
+ }
+
/**
* Swaps out the current vector and hands it off to a user-provided
* function `f`. The function should transform it however is desired
let mut s = str::from_char(c);
ret pad(cv, s, pad_nozero);
}
- fn conv_str(cv: conv, s: ~str) -> ~str {
+ pure fn conv_str(cv: conv, s: &str) -> ~str {
// For strings, precision is the maximum characters
// displayed
let mut unpadded = alt cv.precision {
- count_implied { s }
+ count_implied { s.to_unique() }
count_is(max) {
if max as uint < str::char_len(s) {
str::substr(s, 0u, max as uint)
- } else { s }
+ } else { s.to_unique() }
}
};
- ret pad(cv, unpadded, pad_nozero);
+ ret unchecked { pad(cv, unpadded, pad_nozero) };
}
fn conv_float(cv: conv, f: float) -> ~str {
let (to_str, digits) = alt cv.precision {
pad_float { {might_zero_pad:true, signed:true } }
pad_unsigned { {might_zero_pad:true, signed:false} }
};
- fn have_precision(cv: conv) -> bool {
+ pure fn have_precision(cv: conv) -> bool {
ret alt cv.precision { count_implied { false } _ { true } };
}
let zero_padding = {
}
ret padstr + s;
}
- fn have_flag(flags: u32, f: u32) -> bool {
+ pure fn have_flag(flags: u32, f: u32) -> bool {
flags & f != 0
}
}
+#[cfg(test)]
+mod test {
+ #[test]
+ fn fmt_slice() {
+ let s = "abc";
+ let _s = #fmt("%s", s);
+ }
+}
// Local Variables:
// mode: rust;
})
}
-fn macros() {
- #macro[
- [#move[x],
- unsafe { let y <- *ptr::addr_of(x); y }]
- ];
+macro_rules! move{
+ {$x:expr} => { unsafe { let y <- *ptr::addr_of($x); y } }
}
fn from_port<A:send>(-port: future_pipe::client::waiting<A>) -> future<A> {
port_ <-> *port;
let port = option::unwrap(port_);
alt recv(port) {
- future_pipe::completed(data, _next) { #move(data) }
+ future_pipe::completed(data) { move!{data} }
}
}
}
proto! future_pipe {
waiting:recv<T:send> {
- completed(T) -> terminated
+ completed(T) -> !
}
-
- terminated:send { }
}
#[test]
--- /dev/null
+/*!
+ * Implementation of SipHash 2-4
+ *
+ * See: http://131002.net/siphash/
+ *
+ * Consider this the main general-purpose hash for all hashtables: it
+ * runs at good speed (competitive with Spooky and City) and permits
+ * cryptographically strong _keyed_ hashing. Key your hashtables from a
+ * CSPRNG such as rand::rng.
+ */
+
+pure fn hash_bytes(buf: &[const u8]) -> u64 {
+ ret hash_bytes_keyed(buf, 0u64, 0u64);
+}
+
+pure fn hash_bytes_keyed(buf: &[const u8], k0: u64, k1: u64) -> u64 {
+
+ let mut v0 : u64 = k0 ^ 0x736f_6d65_7073_6575;
+ let mut v1 : u64 = k1 ^ 0x646f_7261_6e64_6f6d;
+ let mut v2 : u64 = k0 ^ 0x6c79_6765_6e65_7261;
+ let mut v3 : u64 = k1 ^ 0x7465_6462_7974_6573;
+
+ #macro([#u8to64_le(buf,i),
+ (buf[0+i] as u64 |
+ buf[1+i] as u64 << 8 |
+ buf[2+i] as u64 << 16 |
+ buf[3+i] as u64 << 24 |
+ buf[4+i] as u64 << 32 |
+ buf[5+i] as u64 << 40 |
+ buf[6+i] as u64 << 48 |
+ buf[7+i] as u64 << 56)]);
+
+ #macro([#rotl(x,b), (x << b) | (x >> (64 - b))]);
+
+ #macro([#compress(), {
+ v0 += v1; v1 = #rotl(v1, 13); v1 ^= v0; v0 = #rotl(v0, 32);
+ v2 += v3; v3 = #rotl(v3, 16); v3 ^= v2;
+ v0 += v3; v3 = #rotl(v3, 21); v3 ^= v0;
+ v2 += v1; v1 = #rotl(v1, 17); v1 ^= v2; v2 = #rotl(v2, 32);
+ }]);
+
+ let len = vec::len(buf);
+ let end = len & (!0x7);
+ let left = len & 0x7;
+
+ let mut i = 0;
+ while i < end {
+ let m = #u8to64_le(buf, i);
+ v3 ^= m;
+ #compress();
+ #compress();
+ v0 ^= m;
+ i += 8;
+ }
+
+ let mut b : u64 = (len as u64 & 0xff) << 56;
+
+ if left > 0 { b |= buf[i] as u64; }
+ if left > 1 { b |= buf[i + 1] as u64 << 8; }
+ if left > 2 { b |= buf[i + 2] as u64 << 16; }
+ if left > 3 { b |= buf[i + 3] as u64 << 24; }
+ if left > 4 { b |= buf[i + 4] as u64 << 32; }
+ if left > 5 { b |= buf[i + 5] as u64 << 40; }
+ if left > 6 { b |= buf[i + 6] as u64 << 48; }
+
+ v3 ^= b;
+ #compress();
+ #compress();
+ v0 ^= b;
+
+ v2 ^= 0xff;
+ #compress();
+ #compress();
+ #compress();
+ #compress();
+
+ ret v0 ^ v1 ^ v2 ^ v3;
+}
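The `#compress()` macro above is the standard SipHash round. As a cross-check, the same SipHash-2-4 algorithm in modern Rust (a sketch independent of this patch; it reproduces the first two vectors of the reference test table used below, for key bytes `00..0f`):

```rust
// One SipHash round over the four-word state.
fn sipround(v: &mut [u64; 4]) {
    v[0] = v[0].wrapping_add(v[1]); v[1] = v[1].rotate_left(13);
    v[1] ^= v[0]; v[0] = v[0].rotate_left(32);
    v[2] = v[2].wrapping_add(v[3]); v[3] = v[3].rotate_left(16); v[3] ^= v[2];
    v[0] = v[0].wrapping_add(v[3]); v[3] = v[3].rotate_left(21); v[3] ^= v[0];
    v[2] = v[2].wrapping_add(v[1]); v[1] = v[1].rotate_left(17);
    v[1] ^= v[2]; v[2] = v[2].rotate_left(32);
}

fn siphash24(k0: u64, k1: u64, data: &[u8]) -> u64 {
    // Key the four initialization constants ("somepseudorandomlygeneratedbytes").
    let mut v = [
        k0 ^ 0x736f6d6570736575,
        k1 ^ 0x646f72616e646f6d,
        k0 ^ 0x6c7967656e657261,
        k1 ^ 0x7465646279746573,
    ];
    // Process full little-endian 8-byte blocks with two rounds each.
    let mut chunks = data.chunks_exact(8);
    for chunk in &mut chunks {
        let m = u64::from_le_bytes(chunk.try_into().unwrap());
        v[3] ^= m;
        sipround(&mut v); sipround(&mut v);
        v[0] ^= m;
    }
    // Final block: leftover bytes plus the length in the top byte.
    let mut b = (data.len() as u64 & 0xff) << 56;
    for (i, &byte) in chunks.remainder().iter().enumerate() {
        b |= (byte as u64) << (8 * i);
    }
    v[3] ^= b;
    sipround(&mut v); sipround(&mut v);
    v[0] ^= b;
    // Finalization: four more rounds.
    v[2] ^= 0xff;
    for _ in 0..4 { sipround(&mut v); }
    v[0] ^ v[1] ^ v[2] ^ v[3]
}

fn main() {
    let (k0, k1) = (0x0706050403020100u64, 0x0f0e0d0c0b0a0908u64);
    // Empty input: first vector in the table below.
    assert_eq!(siphash24(k0, k1, &[]), 0x726fdb47dd0e0e31);
    // Single byte 0x00: second vector.
    assert_eq!(siphash24(k0, k1, &[0]), 0x74f839c593dc67fd);
}
```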
+
+#[test]
+fn test_siphash() {
+ let vecs : [[u8]/8]/64 = [
+ [ 0x31, 0x0e, 0x0e, 0xdd, 0x47, 0xdb, 0x6f, 0x72, ]/_,
+ [ 0xfd, 0x67, 0xdc, 0x93, 0xc5, 0x39, 0xf8, 0x74, ]/_,
+ [ 0x5a, 0x4f, 0xa9, 0xd9, 0x09, 0x80, 0x6c, 0x0d, ]/_,
+ [ 0x2d, 0x7e, 0xfb, 0xd7, 0x96, 0x66, 0x67, 0x85, ]/_,
+ [ 0xb7, 0x87, 0x71, 0x27, 0xe0, 0x94, 0x27, 0xcf, ]/_,
+ [ 0x8d, 0xa6, 0x99, 0xcd, 0x64, 0x55, 0x76, 0x18, ]/_,
+ [ 0xce, 0xe3, 0xfe, 0x58, 0x6e, 0x46, 0xc9, 0xcb, ]/_,
+ [ 0x37, 0xd1, 0x01, 0x8b, 0xf5, 0x00, 0x02, 0xab, ]/_,
+ [ 0x62, 0x24, 0x93, 0x9a, 0x79, 0xf5, 0xf5, 0x93, ]/_,
+ [ 0xb0, 0xe4, 0xa9, 0x0b, 0xdf, 0x82, 0x00, 0x9e, ]/_,
+ [ 0xf3, 0xb9, 0xdd, 0x94, 0xc5, 0xbb, 0x5d, 0x7a, ]/_,
+ [ 0xa7, 0xad, 0x6b, 0x22, 0x46, 0x2f, 0xb3, 0xf4, ]/_,
+ [ 0xfb, 0xe5, 0x0e, 0x86, 0xbc, 0x8f, 0x1e, 0x75, ]/_,
+ [ 0x90, 0x3d, 0x84, 0xc0, 0x27, 0x56, 0xea, 0x14, ]/_,
+ [ 0xee, 0xf2, 0x7a, 0x8e, 0x90, 0xca, 0x23, 0xf7, ]/_,
+ [ 0xe5, 0x45, 0xbe, 0x49, 0x61, 0xca, 0x29, 0xa1, ]/_,
+ [ 0xdb, 0x9b, 0xc2, 0x57, 0x7f, 0xcc, 0x2a, 0x3f, ]/_,
+ [ 0x94, 0x47, 0xbe, 0x2c, 0xf5, 0xe9, 0x9a, 0x69, ]/_,
+ [ 0x9c, 0xd3, 0x8d, 0x96, 0xf0, 0xb3, 0xc1, 0x4b, ]/_,
+ [ 0xbd, 0x61, 0x79, 0xa7, 0x1d, 0xc9, 0x6d, 0xbb, ]/_,
+ [ 0x98, 0xee, 0xa2, 0x1a, 0xf2, 0x5c, 0xd6, 0xbe, ]/_,
+ [ 0xc7, 0x67, 0x3b, 0x2e, 0xb0, 0xcb, 0xf2, 0xd0, ]/_,
+ [ 0x88, 0x3e, 0xa3, 0xe3, 0x95, 0x67, 0x53, 0x93, ]/_,
+ [ 0xc8, 0xce, 0x5c, 0xcd, 0x8c, 0x03, 0x0c, 0xa8, ]/_,
+ [ 0x94, 0xaf, 0x49, 0xf6, 0xc6, 0x50, 0xad, 0xb8, ]/_,
+ [ 0xea, 0xb8, 0x85, 0x8a, 0xde, 0x92, 0xe1, 0xbc, ]/_,
+ [ 0xf3, 0x15, 0xbb, 0x5b, 0xb8, 0x35, 0xd8, 0x17, ]/_,
+ [ 0xad, 0xcf, 0x6b, 0x07, 0x63, 0x61, 0x2e, 0x2f, ]/_,
+ [ 0xa5, 0xc9, 0x1d, 0xa7, 0xac, 0xaa, 0x4d, 0xde, ]/_,
+ [ 0x71, 0x65, 0x95, 0x87, 0x66, 0x50, 0xa2, 0xa6, ]/_,
+ [ 0x28, 0xef, 0x49, 0x5c, 0x53, 0xa3, 0x87, 0xad, ]/_,
+ [ 0x42, 0xc3, 0x41, 0xd8, 0xfa, 0x92, 0xd8, 0x32, ]/_,
+ [ 0xce, 0x7c, 0xf2, 0x72, 0x2f, 0x51, 0x27, 0x71, ]/_,
+ [ 0xe3, 0x78, 0x59, 0xf9, 0x46, 0x23, 0xf3, 0xa7, ]/_,
+ [ 0x38, 0x12, 0x05, 0xbb, 0x1a, 0xb0, 0xe0, 0x12, ]/_,
+ [ 0xae, 0x97, 0xa1, 0x0f, 0xd4, 0x34, 0xe0, 0x15, ]/_,
+ [ 0xb4, 0xa3, 0x15, 0x08, 0xbe, 0xff, 0x4d, 0x31, ]/_,
+ [ 0x81, 0x39, 0x62, 0x29, 0xf0, 0x90, 0x79, 0x02, ]/_,
+ [ 0x4d, 0x0c, 0xf4, 0x9e, 0xe5, 0xd4, 0xdc, 0xca, ]/_,
+ [ 0x5c, 0x73, 0x33, 0x6a, 0x76, 0xd8, 0xbf, 0x9a, ]/_,
+ [ 0xd0, 0xa7, 0x04, 0x53, 0x6b, 0xa9, 0x3e, 0x0e, ]/_,
+ [ 0x92, 0x59, 0x58, 0xfc, 0xd6, 0x42, 0x0c, 0xad, ]/_,
+ [ 0xa9, 0x15, 0xc2, 0x9b, 0xc8, 0x06, 0x73, 0x18, ]/_,
+ [ 0x95, 0x2b, 0x79, 0xf3, 0xbc, 0x0a, 0xa6, 0xd4, ]/_,
+ [ 0xf2, 0x1d, 0xf2, 0xe4, 0x1d, 0x45, 0x35, 0xf9, ]/_,
+ [ 0x87, 0x57, 0x75, 0x19, 0x04, 0x8f, 0x53, 0xa9, ]/_,
+ [ 0x10, 0xa5, 0x6c, 0xf5, 0xdf, 0xcd, 0x9a, 0xdb, ]/_,
+ [ 0xeb, 0x75, 0x09, 0x5c, 0xcd, 0x98, 0x6c, 0xd0, ]/_,
+ [ 0x51, 0xa9, 0xcb, 0x9e, 0xcb, 0xa3, 0x12, 0xe6, ]/_,
+ [ 0x96, 0xaf, 0xad, 0xfc, 0x2c, 0xe6, 0x66, 0xc7, ]/_,
+ [ 0x72, 0xfe, 0x52, 0x97, 0x5a, 0x43, 0x64, 0xee, ]/_,
+ [ 0x5a, 0x16, 0x45, 0xb2, 0x76, 0xd5, 0x92, 0xa1, ]/_,
+ [ 0xb2, 0x74, 0xcb, 0x8e, 0xbf, 0x87, 0x87, 0x0a, ]/_,
+ [ 0x6f, 0x9b, 0xb4, 0x20, 0x3d, 0xe7, 0xb3, 0x81, ]/_,
+ [ 0xea, 0xec, 0xb2, 0xa3, 0x0b, 0x22, 0xa8, 0x7f, ]/_,
+ [ 0x99, 0x24, 0xa4, 0x3c, 0xc1, 0x31, 0x57, 0x24, ]/_,
+ [ 0xbd, 0x83, 0x8d, 0x3a, 0xaf, 0xbf, 0x8d, 0xb7, ]/_,
+ [ 0x0b, 0x1a, 0x2a, 0x32, 0x65, 0xd5, 0x1a, 0xea, ]/_,
+ [ 0x13, 0x50, 0x79, 0xa3, 0x23, 0x1c, 0xe6, 0x60, ]/_,
+ [ 0x93, 0x2b, 0x28, 0x46, 0xe4, 0xd7, 0x06, 0x66, ]/_,
+ [ 0xe1, 0x91, 0x5f, 0x5c, 0xb1, 0xec, 0xa4, 0x6c, ]/_,
+ [ 0xf3, 0x25, 0x96, 0x5c, 0xa1, 0x6d, 0x62, 0x9f, ]/_,
+ [ 0x57, 0x5f, 0xf2, 0x8e, 0x60, 0x38, 0x1b, 0xe5, ]/_,
+ [ 0x72, 0x45, 0x06, 0xeb, 0x4c, 0x32, 0x8a, 0x95, ]/_
+ ]/_;
+
+ let k0 = 0x_07_06_05_04_03_02_01_00_u64;
+ let k1 = 0x_0f_0e_0d_0c_0b_0a_09_08_u64;
+ let mut buf : ~[u8] = ~[];
+ let mut t = 0;
+ while t < 64 {
+ #debug("siphash test %?", t);
+ let vec = #u8to64_le(vecs[t], 0);
+ let out = hash_bytes_keyed(buf, k0, k1);
+ #debug("got %?, expected %?", out, vec);
+ assert vec == out;
+ buf += ~[t as u8];
+ t += 1;
+ }
+
+}
\ No newline at end of file
export compl;
export abs;
export parse_buf, from_str, to_str, to_str_bytes, str;
-export num, ord, eq, times;
+export num, ord, eq, times, timesi;
const min_value: T = -1 as T << (inst::bits - 1 as T);
const max_value: T = min_value - 1 as T;
/// Convert to a string in a given base
fn to_str(n: T, radix: uint) -> ~str {
do to_str_bytes(n, radix) |slice| {
- do vec::unpack_slice(slice) |p, len| {
+ do vec::as_buf(slice) |p, len| {
unsafe { str::unsafe::from_buf_len(p, len) }
}
}
}
}
+impl timesi of iter::timesi for T {
+ #[inline(always)]
+ /// Like `times`, but provides an index
+ fn timesi(it: fn(uint) -> bool) {
+ if self < 0 as T {
+ fail #fmt("The .timesi method expects a nonnegative number, \
+ but found %?", self);
+ }
+ let slf = self as uint;
+ let mut i = 0u;
+ while i < slf {
+ if !it(i) { break }
+ i += 1u;
+ }
+ }
+}
+
// FIXME: Has alignment issues on windows and 32-bit linux (#2609)
#[test]
#[ignore]
// The raw underlying reader iface. All readers must implement this.
iface reader {
// FIXME (#2004): Seekable really should be orthogonal.
- fn read_bytes(uint) -> ~[u8];
+
+ // FIXME (#2982): This should probably return an error.
+ fn read(buf: &[mut u8], len: uint) -> uint;
fn read_byte() -> int;
fn unread_byte(int);
fn eof() -> bool;
// Generic utility functions defined on readers
impl reader_util for reader {
+ fn read_bytes(len: uint) -> ~[u8] {
+ let mut buf = ~[mut];
+ vec::reserve(buf, len);
+ unsafe { vec::unsafe::set_len(buf, len); }
+
+ let count = self.read(buf, len);
+
+ unsafe { vec::unsafe::set_len(buf, count); }
+ vec::from_mut(buf)
+ }
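The hunk above inverts the old layering: each reader now implements only the low-level `read(buf, len)` primitive, and `read_bytes` is written once in `reader_util` on top of it. Modern Rust's `std::io::Read` has the same shape; a sketch of the shared helper in those terms (the `read_bytes` name here is illustrative):

```rust
use std::io::Read;

// Build a byte-vector read on top of the single `read` primitive,
// truncating to however many bytes the reader actually produced --
// the same allocate/read/truncate pattern as reader_util above.
fn read_bytes<R: Read>(r: &mut R, len: usize) -> std::io::Result<Vec<u8>> {
    let mut buf = vec![0u8; len];
    let count = r.read(&mut buf)?;
    buf.truncate(count);
    Ok(buf)
}

fn main() -> std::io::Result<()> {
    // A byte slice implements Read, like the byte_buf reader in the patch.
    let mut src: &[u8] = b"hello world";
    let bytes = read_bytes(&mut src, 5)?;
    assert_eq!(bytes, b"hello");
    Ok(())
}
```

Note that a single `read` call may return fewer bytes than requested, which is why the helper truncates rather than assuming the buffer is full.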
fn read_chars(n: uint) -> ~[char] {
// returns the (consumed offset, n_req), appends characters to &chars
fn chars_from_buf(buf: ~[u8], &chars: ~[char]) -> (uint, uint) {
}
impl of reader for *libc::FILE {
- fn read_bytes(len: uint) -> ~[u8] {
- let mut buf : ~[mut u8] = ~[mut];
- vec::reserve(buf, len);
- do vec::as_mut_buf(buf) |b| {
- let read = libc::fread(b as *mut c_void, 1u as size_t,
- len as size_t, self);
- unsafe { vec::unsafe::set_len(buf, read as uint) };
+ fn read(buf: &[mut u8], len: uint) -> uint {
+ do vec::as_buf(buf) |buf_p, buf_len| {
+ assert buf_len <= len;
+
+ let count = libc::fread(buf_p as *mut c_void, 1u as size_t,
+ len as size_t, self);
+
+ count as uint
}
- ret vec::from_mut(buf);
}
fn read_byte() -> int { ret libc::fgetc(self) as int; }
fn unread_byte(byte: int) { libc::ungetc(byte as c_int, self); }
// duration of its lifetime.
// FIXME there really should be a better way to do this // #2004
impl <T: reader, C> of reader for {base: T, cleanup: C} {
- fn read_bytes(len: uint) -> ~[u8] { self.base.read_bytes(len) }
+ fn read(buf: &[mut u8], len: uint) -> uint { self.base.read(buf, len) }
fn read_byte() -> int { self.base.read_byte() }
fn unread_byte(byte: int) { self.base.unread_byte(byte); }
fn eof() -> bool { self.base.eof() }
type byte_buf = {buf: ~[const u8], mut pos: uint, len: uint};
impl of reader for byte_buf {
- fn read_bytes(len: uint) -> ~[u8] {
- let rest = self.len - self.pos;
- let mut to_read = len;
- if rest < to_read { to_read = rest; }
- let range = vec::slice(self.buf, self.pos, self.pos + to_read);
- self.pos += to_read;
- ret range;
+ fn read(buf: &[mut u8], len: uint) -> uint {
+ let count = uint::min(len, self.len - self.pos);
+
+ vec::u8::memcpy(buf, vec::const_view(self.buf, self.pos, self.len),
+ count);
+
+ self.pos += count;
+
+ count
}
fn read_byte() -> int {
if self.pos == self.len { ret -1; }
impl of writer for *libc::FILE {
fn write(v: &[const u8]) {
- do vec::unpack_const_slice(v) |vbuf, len| {
+ do vec::as_const_buf(v) |vbuf, len| {
let nout = libc::fwrite(vbuf as *c_void, len as size_t,
1u as size_t, self);
if nout < 1 as size_t {
impl of writer for fd_t {
fn write(v: &[const u8]) {
let mut count = 0u;
- do vec::unpack_const_slice(v) |vbuf, len| {
+ do vec::as_const_buf(v) |vbuf, len| {
while count < len {
let vb = ptr::const_offset(vbuf, count) as *c_void;
let nout = libc::write(self, vb, len as size_t);
fn stdout() -> writer { fd_writer(libc::STDOUT_FILENO as c_int, false) }
fn stderr() -> writer { fd_writer(libc::STDERR_FILENO as c_int, false) }
-fn print(s: ~str) { stdout().write_str(s); }
-fn println(s: ~str) { stdout().write_line(s); }
+fn print(s: &str) { stdout().write_str(s); }
+fn println(s: &str) { stdout().write_line(s); }
type mem_buffer = @{buf: dvec<u8>, mut pos: uint};
* e.g. breadth-first search with in-place enqueues), but removing the current
* node is forbidden.
*/
-fn EACH<A>(self: IMPL_T<A>, f: fn(A) -> bool) {
+pure fn EACH<A>(self: IMPL_T<A>, f: fn(A) -> bool) {
import dlist::extensions;
let mut link = self.peek_n();
while option::is_some(link) {
let nobe = option::get(link);
- // Check dlist invariant.
- if !option::is_some(nobe.root) ||
- !box::ptr_eq(*option::get(nobe.root), *self) {
- fail ~"Iteration encountered a dlist node not on this dlist."
- }
+ assert nobe.linked;
if !f(nobe.data) { break; }
- // Check that the user didn't do a remove.
- // Note that this makes it ok for the user to remove the node and then
- // immediately put it back in a different position. I allow this.
- if !option::is_some(nobe.root) ||
- !box::ptr_eq(*option::get(nobe.root), *self) {
+ // Check (weakly) that the user didn't do a remove.
+ if self.size == 0 {
+ fail ~"The dlist became empty during iteration??"
+ }
+ if !nobe.linked ||
+ (!((nobe.prev.is_some()
+ || box::ptr_eq(*self.hd.expect(~"headless dlist?"), *nobe)) &&
+ (nobe.next.is_some()
+ || box::ptr_eq(*self.tl.expect(~"tailless dlist?"), *nobe)))) {
fail ~"Removing a dlist node during iteration is forbidden!"
}
link = nobe.next_link();
type IMPL_T<A> = option<A>;
-fn EACH<A>(self: IMPL_T<A>, f: fn(A) -> bool) {
+pure fn EACH<A>(self: IMPL_T<A>, f: fn(A) -> bool) {
alt self {
none { }
some(a) { f(a); }
iface times {
fn times(it: fn() -> bool);
}
+iface timesi {
+ fn timesi(it: fn(uint) -> bool);
+}
trait copyable_iter<A:copy> {
fn filter_to_vec(pred: fn(A) -> bool) -> ~[A];
alt opt { some(x) { x } none { fail reason; } }
}
-pure fn map<T, U: copy>(opt: option<T>, f: fn(T) -> U) -> option<U> {
+pure fn map<T, U>(opt: option<T>, f: fn(T) -> U) -> option<U> {
//! Maps a `some` value from one type to another
alt opt { some(x) { some(f(x)) } none { none } }
}
+pure fn map_consume<T, U>(-opt: option<T>, f: fn(-T) -> U) -> option<U> {
+ /*!
+ * As `map`, but consumes the option and gives `f` ownership to avoid
+ * copying.
+ */
+ if opt.is_some() { some(f(option::unwrap(opt))) } else { none }
+}
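`map_consume` moves the option's value into the closure instead of copying it out. Modern Rust's `Option::map` takes the option by value and so has exactly these move semantics; a small illustration (a sketch, not part of this patch):

```rust
fn main() {
    let opt: Option<String> = Some(String::from("hello"));
    // The String is moved into the closure; no copy of the data is made.
    let len: Option<usize> = opt.map(|s| s.len());
    assert_eq!(len, Some(5));
    // `opt` has been consumed and cannot be used past this point.
}
```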
+
pure fn chain<T, U>(opt: option<T>, f: fn(T) -> option<U>) -> option<U> {
/*!
* Update an optional value by optionally running its content through a
}
}
+pure fn unwrap_expect<T>(-opt: option<T>, reason: ~str) -> T {
+ //! As unwrap, but with a specified failure message.
+ if opt.is_none() { fail reason; }
+ unwrap(opt)
+}
+
impl extensions<T> for option<T> {
/**
* Update an optional value by optionally running its content through a
/// Returns true if the option contains some value
pure fn is_some() -> bool { is_some(self) }
/// Maps a `some` value from one type to another
- pure fn map<U:copy>(f: fn(T) -> U) -> option<U> { map(self, f) }
+ pure fn map<U>(f: fn(T) -> U) -> option<U> { map(self, f) }
}
impl extensions<T: copy> for option<T> {
#[test]
fn test_unwrap_str() {
let x = ~"test";
- let addr_x = str::as_buf(x, |buf| ptr::addr_of(buf));
+ let addr_x = str::as_buf(x, |buf, _len| ptr::addr_of(buf));
let opt = some(x);
let y = unwrap(opt);
- let addr_y = str::as_buf(y, |buf| ptr::addr_of(buf));
+ let addr_y = str::as_buf(y, |buf, _len| ptr::addr_of(buf));
assert addr_x == addr_y;
}
import getcwd = rustrt::rust_getcwd;
import consts::*;
+import task::task_builder;
export close, fclose, fsync_fd, waitpid;
export env, getenv, setenv, fdopen, pipe;
export as_c_charp, fill_charp_buf;
extern mod rustrt {
- fn rust_env_pairs() -> ~[~str];
fn rust_getcwd() -> ~str;
fn rust_path_is_dir(path: *libc::c_char) -> c_int;
fn rust_path_exists(path: *libc::c_char) -> c_int;
}
-fn env() -> ~[(~str,~str)] {
- let mut pairs = ~[];
- for vec::each(rustrt::rust_env_pairs()) |p| {
- let vs = str::splitn_char(p, '=', 1u);
- assert vec::len(vs) == 2u;
- vec::push(pairs, (vs[0], vs[1]));
- }
- ret pairs;
-}
-
const tmpbuf_sz : uint = 1000u;
fn as_c_charp<T>(s: ~str, f: fn(*c_char) -> T) -> T {
fn fill_charp_buf(f: fn(*mut c_char, size_t) -> bool)
-> option<~str> {
let buf = vec::to_mut(vec::from_elem(tmpbuf_sz, 0u8 as c_char));
- do vec::as_mut_buf(buf) |b| {
- if f(b, tmpbuf_sz as size_t) unsafe {
+ do vec::as_mut_buf(buf) |b, sz| {
+ if f(b, sz as size_t) unsafe {
some(str::unsafe::from_buf(b as *u8))
} else {
none
let mut done = false;
while !done {
let buf = vec::to_mut(vec::from_elem(n as uint, 0u16));
- do vec::as_mut_buf(buf) |b| {
+ do vec::as_mut_buf(buf) |b, _sz| {
let k : dword = f(b, tmpbuf_sz as dword);
if k == (0 as dword) {
done = true;
let mut t = str::to_utf16(s);
// Null terminate before passing on.
t += ~[0u16];
- vec::as_buf(t, f)
+ vec::as_buf(t, |buf, _len| f(buf))
}
}
global_env::setenv(n, v)
}
+fn env() -> ~[(~str,~str)] {
+ global_env::env()
+}
+
mod global_env {
//! Internal module for serializing access to getenv/setenv
export getenv;
export setenv;
+ export env;
extern mod rustrt {
fn rust_global_env_chan_ptr() -> *libc::uintptr_t;
enum msg {
msg_getenv(~str, comm::chan<option<~str>>),
- msg_setenv(~str, ~str, comm::chan<()>)
+ msg_setenv(~str, ~str, comm::chan<()>),
+ msg_env(comm::chan<~[(~str,~str)]>)
}
fn getenv(n: ~str) -> option<~str> {
comm::recv(po)
}
+ fn env() -> ~[(~str,~str)] {
+ let env_ch = get_global_env_chan();
+ let po = comm::port();
+ comm::send(env_ch, msg_env(comm::chan(po)));
+ comm::recv(po)
+ }
+
fn get_global_env_chan() -> comm::chan<msg> {
let global_ptr = rustrt::rust_global_env_chan_ptr();
- let builder_fn = || {
- let builder = task::builder();
- task::unsupervise(builder);
-
+ let task_build_fn = || {
// FIXME (#2621): This would be a good place to use a very small
// foreign stack
- task::set_sched_mode(builder, task::single_threaded);
-
- builder
+ task::task().sched_mode(task::single_threaded).unlinked()
};
unsafe {
priv::chan_from_global_ptr(
- global_ptr, builder_fn, global_env_task)
+ global_ptr, task_build_fn, global_env_task)
}
}
either::left(msg_setenv(n, v, resp_ch)) {
comm::send(resp_ch, impl::setenv(n, v))
}
+ either::left(msg_env(resp_ch)) {
+ comm::send(resp_ch, impl::env())
+ }
either::right(_) {
break;
}
}
mod impl {
+ extern mod rustrt {
+ fn rust_env_pairs() -> ~[~str];
+ }
+
+ fn env() -> ~[(~str,~str)] {
+ let mut pairs = ~[];
+ for vec::each(rustrt::rust_env_pairs()) |p| {
+ let vs = str::splitn_char(p, '=', 1u);
+ assert vec::len(vs) == 2u;
+ vec::push(pairs, (vs[0], vs[1]));
+ }
+ ret pairs;
+ }
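The `impl::env` function above splits each `"KEY=VALUE"` entry returned by the runtime on the first `=` only (that is what `str::splitn_char(p, '=', 1u)` does), so values containing `=` survive intact. A minimal sketch of the same parsing in today's Rust syntax, not the 0.3 dialect used in this patch; `parse_env_pairs` is a hypothetical name:

```rust
// Split "KEY=VALUE" entries on the first '=' only, mirroring
// str::splitn_char(p, '=', 1u) in the patch above. Values may
// themselves contain '=' and must not be split further.
fn parse_env_pairs(raw: &[&str]) -> Vec<(String, String)> {
    raw.iter()
        .filter_map(|p| p.split_once('='))
        .map(|(k, v)| (k.to_string(), v.to_string()))
        .collect()
}

fn main() {
    let pairs = parse_env_pairs(&["PATH=/usr/bin:/bin", "EQ=a=b"]);
    assert_eq!(pairs[0], ("PATH".to_string(), "/usr/bin:/bin".to_string()));
    // Only the first '=' separates key from value.
    assert_eq!(pairs[1].1, "a=b");
}
```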
#[cfg(unix)]
fn getenv(n: ~str) -> option<~str> {
let mut done = false;
let mut ok = true;
while !done {
- do vec::as_mut_buf(buf) |b| {
+ do vec::as_mut_buf(buf) |b, _sz| {
let nread = libc::fread(b as *mut c_void, 1u as size_t,
bufsize as size_t,
istream);
assert (ostream as uint != 0u);
let s = ~"hello";
let mut buf = vec::to_mut(str::bytes(s) + ~[0 as u8]);
- do vec::as_mut_buf(buf) |b| {
+ do vec::as_mut_buf(buf) |b, _len| {
assert (libc::fwrite(b as *c_void, 1u as size_t,
(str::len(s) + 1u) as size_t, ostream)
== buf.len() as size_t)};
import option::unwrap;
import arc::methods;
-/* Use this after the snapshot
+// Things used by code generated by the pipe compiler.
+export entangle, get_buffer, drop_buffer;
+export send_packet_buffered, recv_packet_buffered;
+export packet, mk_packet, entangle_buffer, has_buffer, buffer_header;
+
+// export these so we can find them in the buffer_resource
+// destructor. This is probably another metadata bug.
+export atomic_add_acq, atomic_sub_rel;
+
+// User-level things
+export send_packet, recv_packet, send, recv, try_recv, peek;
+export select, select2, selecti, select2i, selectable;
+export spawn_service, spawn_service_recv;
+export stream, port, chan, shared_chan, port_set, channel;
+
+const SPIN_COUNT: uint = 0;
+
macro_rules! move {
{ $x:expr } => { unsafe { let y <- *ptr::addr_of($x); y } }
}
-*/
-fn macros() {
- #macro[
- [#move(x), { unsafe { let y <- *ptr::addr_of(x); y } }]
- ];
-}
+// This is to help make sure we only move out of enums in safe
+// places. Once there is unary move, it can be removed.
+fn move<T>(-x: T) -> T { x }
+
+/**
+
+Some thoughts about fixed buffers.
+
+The idea is if a protocol is bounded, we will synthesize a record that
+has a field for each state. Each of these states contains a packet for
+the messages that are legal to be sent in that state. Then, instead of
+allocating, the send code just finds a pointer to the right field and
+uses that instead.
+
+Unfortunately, this makes things kind of tricky. We need to be able to
+find the buffer, which means we need to pass it around. This could
+either be associated with the (send|recv)_packet classes, or with the
+packet itself. We will also need some form of reference counting so we
+can track who has the responsibility of freeing the buffer.
+
+We want to preserve the ability to do things like optimistic buffer
+re-use, and skipping over to a new buffer when necessary. What I mean
+is, suppose we had the typical stream protocol. It'd make sense to
+amortize allocation costs by allocating a buffer with say 16
+messages. When the sender gets to the end of the buffer, it could
+check if the receiver is done with the packet in slot 0. If so, it can
+just reuse that one, checking if the receiver is done with the next
+one in each case. If it is ever not done, it just allocates a new
+buffer and skips over to that.
+
+Also, since protocols are in libcore, we have to do this in a way that
+maintains backwards compatibility.
+
+We use a buffer header and a buffer; cast to c_void when necessary.
+
+===
+
+Okay, here are some new ideas.
+
+It'd be nice to keep the bounded/unbounded case as uniform as
+possible. It leads to less code duplication, and fewer things that can
+go subtly wrong. For the bounded case, we could either have a struct
+with a bunch of unique pointers to pre-allocated packets, or we could
+lay them out inline. Inline layout is better, if for no other reason
+than that we don't have to allocate each packet
+individually. Currently we pass unique packets around as unsafe
+pointers, but they are actually unique pointers. We should instead use
+real unsafe pointers. This makes freeing data and running destructors
+trickier though. Thus, we should allocate all packets as part of a
+higher level buffer structure. Packets can maintain a pointer to their
+buffer, and this is the part that gets freed.
+
+It might be helpful to have some idea of a semi-unique pointer (like
+being partially pregnant, also like an ARC).
+
+*/
enum state {
empty,
terminated
}
-type packet_header_ = {
- mut state: state,
- mut blocked_task: option<*rust_task>,
+class buffer_header {
+ // Tracks whether this buffer needs to be freed. We can probably
+ // get away with restricting it to 0 or 1, if we're careful.
+ let mut ref_count: int;
+
+ new() { self.ref_count = 0; }
+
+ // We may want a drop, and to be careful about stringing this
+ // thing along.
+}
+
+// This is for protocols to associate extra data to thread around.
+type buffer<T: send> = {
+ header: buffer_header,
+ data: T,
};
-enum packet_header {
- packet_header_(packet_header_)
+class packet_header {
+ let mut state: state;
+ let mut blocked_task: option<*rust_task>;
+
+ // This is a reinterpret_cast of a ~buffer, that can also be cast
+ // to a buffer_header if need be.
+ let mut buffer: *libc::c_void;
+
+ new() {
+ self.state = empty;
+ self.blocked_task = none;
+ self.buffer = ptr::null();
+ }
+
+ // Returns the old state.
+ unsafe fn mark_blocked(this: *rust_task) -> state {
+ self.blocked_task = some(this);
+ swap_state_acq(self.state, blocked)
+ }
+
+ unsafe fn unblock() {
+ alt swap_state_acq(self.state, empty) {
+ empty | blocked { }
+ terminated { self.state = terminated; }
+ full { self.state = full; }
+ }
+ }
+
+ // unsafe because this can do weird things to the space/time
+    // continuum. It ends up making multiple unique pointers to the same
+    // thing. You'll probably want to forget them when you're done.
+ unsafe fn buf_header() -> ~buffer_header {
+ assert self.buffer.is_not_null();
+ reinterpret_cast(self.buffer)
+ }
+
+ fn set_buffer<T: send>(b: ~buffer<T>) unsafe {
+ self.buffer = reinterpret_cast(b);
+ }
}
-type packet_<T:send> = {
+type packet<T: send> = {
header: packet_header,
- mut payload: option<T>
+ mut payload: option<T>,
};
-enum packet<T:send> {
- packet_(packet_<T>)
+trait has_buffer {
+ fn set_buffer(b: *libc::c_void);
+}
+
+impl methods<T: send> of has_buffer for packet<T> {
+ fn set_buffer(b: *libc::c_void) {
+ self.header.buffer = b;
+ }
+}
+
+fn mk_packet<T: send>() -> packet<T> {
+ {
+ header: packet_header(),
+ mut payload: none
+ }
}
-fn packet<T: send>() -> *packet<T> unsafe {
- let p: *packet<T> = unsafe::transmute(~{
- header: {
- mut state: empty,
- mut blocked_task: none::<task::task>,
- },
- mut payload: none::<T>
- });
+fn unibuffer<T: send>() -> ~buffer<packet<T>> {
+ let b = ~{
+ header: buffer_header(),
+ data: {
+ header: packet_header(),
+ mut payload: none,
+ }
+ };
+
+ unsafe {
+ b.data.header.buffer = reinterpret_cast(b);
+ }
+
+ b
+}
+
+fn packet<T: send>() -> *packet<T> {
+ let b = unibuffer();
+ let p = ptr::addr_of(b.data);
+ // We'll take over memory management from here.
+ unsafe { forget(b) }
p
}
+fn entangle_buffer<T: send, Tstart: send>(
+ -buffer: ~buffer<T>,
+ init: fn(*libc::c_void, x: &T) -> *packet<Tstart>)
+ -> (send_packet_buffered<Tstart, T>, recv_packet_buffered<Tstart, T>)
+{
+ let p = init(unsafe { reinterpret_cast(buffer) }, &buffer.data);
+ unsafe { forget(buffer) }
+ (send_packet_buffered(p), recv_packet_buffered(p))
+}
+
#[abi = "rust-intrinsic"]
extern mod rusti {
fn atomic_xchng(&dst: int, src: int) -> int;
fn atomic_xchng_acq(&dst: int, src: int) -> int;
fn atomic_xchng_rel(&dst: int, src: int) -> int;
+
+ fn atomic_add_acq(&dst: int, src: int) -> int;
+ fn atomic_sub_rel(&dst: int, src: int) -> int;
+}
+
+// If I call the rusti versions directly from a polymorphic function,
+// I get link errors. This is a bug that needs to be investigated further.
+fn atomic_xchng_rel(&dst: int, src: int) -> int {
+ rusti::atomic_xchng_rel(dst, src)
+}
+
+fn atomic_add_acq(&dst: int, src: int) -> int {
+ rusti::atomic_add_acq(dst, src)
+}
+
+fn atomic_sub_rel(&dst: int, src: int) -> int {
+ rusti::atomic_sub_rel(dst, src)
}
type rust_task = libc::c_void;
#[rust_stack]
fn task_clear_event_reject(task: *rust_task);
- fn task_wait_event(this: *rust_task, killed: &mut bool) -> *libc::c_void;
- fn task_signal_event(target: *rust_task, event: *libc::c_void);
-}
-
-// We should consider moving this to core::unsafe, although I
-// suspect graydon would want us to use void pointers instead.
-unsafe fn uniquify<T>(x: *T) -> ~T {
- unsafe { unsafe::reinterpret_cast(x) }
+ fn task_wait_event(this: *rust_task, killed: &mut *libc::c_void) -> bool;
+ pure fn task_signal_event(target: *rust_task, event: *libc::c_void);
}
fn wait_event(this: *rust_task) -> *libc::c_void {
- let mut killed = false;
+ let mut event = ptr::null();
- let res = rustrt::task_wait_event(this, &mut killed);
+ let killed = rustrt::task_wait_event(this, &mut event);
if killed && !task::failing() {
fail ~"killed"
}
- res
+ event
}
fn swap_state_acq(&dst: state, src: state) -> state {
}
}
-fn send<T: send>(-p: send_packet<T>, -payload: T) {
+unsafe fn get_buffer<T: send>(p: *packet_header) -> ~buffer<T> {
+ transmute((*p).buf_header())
+}
+
+class buffer_resource<T: send> {
+ let buffer: ~buffer<T>;
+ new(+b: ~buffer<T>) {
+ //let p = ptr::addr_of(*b);
+ //#error("take %?", p);
+ atomic_add_acq(b.header.ref_count, 1);
+ self.buffer = b;
+ }
+
+ drop unsafe {
+ let b = move!{self.buffer};
+ //let p = ptr::addr_of(*b);
+ //#error("drop %?", p);
+ let old_count = atomic_sub_rel(b.header.ref_count, 1);
+ //let old_count = atomic_xchng_rel(b.header.ref_count, 0);
+ if old_count == 1 {
+ // The new count is 0.
+
+ // go go gadget drop glue
+ }
+ else {
+ forget(b)
+ }
+ }
+}
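`buffer_resource` above implements a minimal ownership count: `atomic_add_acq` on construction, `atomic_sub_rel` in the destructor, and the buffer is freed only when the *old* count was 1. A sketch of the same discipline using today's `std` atomics instead of the `rusti` intrinsics; `BufferHeader` and its method names are illustrative, not the libcore API:

```rust
use std::sync::atomic::{AtomicIsize, Ordering};

// Mirror of buffer_header's ref_count discipline: acquire on
// construction, release on drop, free only when the old count
// was 1 (i.e. the new count just reached 0).
struct BufferHeader {
    ref_count: AtomicIsize,
}

impl BufferHeader {
    fn new() -> Self {
        BufferHeader { ref_count: AtomicIsize::new(0) }
    }

    // Like atomic_add_acq: returns the old value.
    fn acquire(&self) -> isize {
        self.ref_count.fetch_add(1, Ordering::Acquire)
    }

    // Like atomic_sub_rel: returns true iff the caller was the
    // last owner and should run the drop glue.
    fn release(&self) -> bool {
        self.ref_count.fetch_sub(1, Ordering::Release) == 1
    }
}

fn main() {
    let h = BufferHeader::new();
    h.acquire();
    h.acquire();
    assert!(!h.release()); // old count 2: another owner remains
    assert!(h.release());  // old count 1: last owner frees the buffer
}
```

The acquire/release pairing matches the comment exported above ("export these so we can find them in the buffer_resource destructor"): the add uses acquire ordering, the sub uses release ordering, the classic ref-count protocol.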
+
+fn send<T: send, Tbuffer: send>(-p: send_packet_buffered<T, Tbuffer>,
+ -payload: T) {
+ let header = p.header();
let p_ = p.unwrap();
- let p = unsafe { uniquify(p_) };
- assert (*p).payload == none;
- (*p).payload <- some(payload);
+ let p = unsafe { &*p_ };
+ assert ptr::addr_of(p.header) == header;
+ assert p.payload == none;
+ p.payload <- some(payload);
let old_state = swap_state_rel(p.header.state, full);
alt old_state {
empty {
// Yay, fastpath.
// The receiver will eventually clean this up.
- unsafe { forget(p); }
+ //unsafe { forget(p); }
}
full { fail ~"duplicate send" }
blocked {
}
// The receiver will eventually clean this up.
- unsafe { forget(p); }
+ //unsafe { forget(p); }
}
terminated {
// The receiver will never receive this. Rely on drop_glue
}
}
-fn recv<T: send>(-p: recv_packet<T>) -> T {
+fn recv<T: send, Tbuffer: send>(-p: recv_packet_buffered<T, Tbuffer>) -> T {
option::unwrap(try_recv(p))
}
-fn try_recv<T: send>(-p: recv_packet<T>) -> option<T> {
+fn try_recv<T: send, Tbuffer: send>(-p: recv_packet_buffered<T, Tbuffer>)
+ -> option<T>
+{
let p_ = p.unwrap();
- let p = unsafe { uniquify(p_) };
+ let p = unsafe { &*p_ };
let this = rustrt::rust_get_task();
rustrt::task_clear_event_reject(this);
p.header.blocked_task = some(this);
let mut first = true;
+ let mut count = SPIN_COUNT;
loop {
rustrt::task_clear_event_reject(this);
let old_state = swap_state_acq(p.header.state,
alt old_state {
empty {
#debug("no data available on %?, going to sleep.", p_);
- wait_event(this);
- #debug("woke up, p.state = %?", p.header.state);
+ if count == 0 {
+ wait_event(this);
+ }
+ else {
+ count -= 1;
+ // FIXME (#524): Putting the yield here destroys a lot
+ // of the benefit of spinning, since we still go into
+ // the scheduler at every iteration. However, without
+ // this everything spins too much because we end up
+ // sometimes blocking the thing we are waiting on.
+ task::yield();
+ }
+ #debug("woke up, p.state = %?", copy p.header.state);
}
blocked {
if first {
}
full {
let mut payload = none;
- payload <-> (*p).payload;
- p.header.state = terminated;
+ payload <-> p.payload;
+ p.header.state = empty;
ret some(option::unwrap(payload))
}
terminated {
}
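The receive path above now spins `SPIN_COUNT` times (yielding on each iteration, as the FIXME for #524 notes) before falling back to a blocking `wait_event`. The spin-then-block pattern can be sketched in modern Rust, with an atomic flag standing in for the packet's state word and `yield_now` standing in for `task_wait_event`:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;

const SPIN_COUNT: usize = 100;

// Poll the flag SPIN_COUNT times before deferring to the
// scheduler, trading a little CPU for much lower latency when
// the sender is about to publish.
fn spin_then_block(flag: &AtomicBool) {
    let mut count = SPIN_COUNT;
    while !flag.load(Ordering::Acquire) {
        if count > 0 {
            count -= 1;
            std::hint::spin_loop(); // cheap busy-wait first
        } else {
            thread::yield_now(); // then give up the timeslice
        }
    }
}

fn main() {
    let flag = Arc::new(AtomicBool::new(false));
    let f2 = Arc::clone(&flag);
    let t = thread::spawn(move || f2.store(true, Ordering::Release));
    spin_then_block(&flag);
    t.join().unwrap();
    assert!(flag.load(Ordering::Acquire));
}
```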
/// Returns true if messages are available.
-pure fn peek<T: send>(p: recv_packet<T>) -> bool {
+pure fn peek<T: send, Tb: send>(p: recv_packet_buffered<T, Tb>) -> bool {
alt unsafe {(*p.header()).state} {
empty { false }
blocked { fail ~"peeking on blocked packet" }
}
fn sender_terminate<T: send>(p: *packet<T>) {
- let p = unsafe { uniquify(p) };
+ let p = unsafe { &*p };
alt swap_state_rel(p.header.state, terminated) {
empty {
// The receiver will eventually clean up.
- unsafe { forget(p) }
+ //unsafe { forget(p) }
}
blocked {
// wake up the target
ptr::addr_of(p.header) as *libc::c_void);
// The receiver will eventually clean up.
- unsafe { forget(p) }
+ //unsafe { forget(p) }
}
full {
// This is impossible
}
fn receiver_terminate<T: send>(p: *packet<T>) {
- let p = unsafe { uniquify(p) };
+ let p = unsafe { &*p };
alt swap_state_rel(p.header.state, terminated) {
empty {
// the sender will clean up
- unsafe { forget(p) }
+ //unsafe { forget(p) }
}
blocked {
// this shouldn't happen.
}
}
-impl private_methods for *packet_header {
- // Returns the old state.
- unsafe fn mark_blocked(this: *rust_task) -> state {
- let self = &*self;
- self.blocked_task = some(this);
- swap_state_acq(self.state, blocked)
- }
-
- unsafe fn unblock() {
- let self = &*self;
- alt swap_state_acq(self.state, empty) {
- empty | blocked { }
- terminated { self.state = terminated; }
- full { self.state = full; }
- }
- }
-}
-
#[doc = "Returns when one of the packet headers reports data is
available."]
fn wait_many(pkts: &[*packet_header]) -> uint {
let mut data_avail = false;
let mut ready_packet = pkts.len();
for pkts.eachi |i, p| unsafe {
+ let p = unsafe { &*p };
let old = p.mark_blocked(this);
alt old {
full | terminated {
#debug("%?", pkts[ready_packet]);
- for pkts.each |p| { unsafe{p.unblock()} }
+ for pkts.each |p| { unsafe{ (*p).unblock()} }
#debug("%?, %?", ready_packet, pkts[ready_packet]);
ready_packet
}
-fn select2<A: send, B: send>(
- +a: recv_packet<A>,
- +b: recv_packet<B>)
- -> either<(option<A>, recv_packet<B>), (recv_packet<A>, option<B>)>
+fn select2<A: send, Ab: send, B: send, Bb: send>(
+ +a: recv_packet_buffered<A, Ab>,
+ +b: recv_packet_buffered<B, Bb>)
+ -> either<(option<A>, recv_packet_buffered<B, Bb>),
+ (recv_packet_buffered<A, Ab>, option<B>)>
{
let i = wait_many([a.header(), b.header()]/_);
#[doc = "Waits on a set of endpoints. Returns a message, its index,
and a list of the remaining endpoints."]
-fn select<T: send>(+endpoints: ~[recv_packet<T>])
- -> (uint, option<T>, ~[recv_packet<T>])
+fn select<T: send, Tb: send>(+endpoints: ~[recv_packet_buffered<T, Tb>])
+ -> (uint, option<T>, ~[recv_packet_buffered<T, Tb>])
{
let ready = wait_many(endpoints.map(|p| p.header()));
let mut remaining = ~[];
(ready, result, remaining)
}
-class send_packet<T: send> {
+type send_packet<T: send> = send_packet_buffered<T, packet<T>>;
+
+fn send_packet<T: send>(p: *packet<T>) -> send_packet<T> {
+ send_packet_buffered(p)
+}
+
+class send_packet_buffered<T: send, Tbuffer: send> {
let mut p: option<*packet<T>>;
+ let mut buffer: option<buffer_resource<Tbuffer>>;
new(p: *packet<T>) {
//#debug("take send %?", p);
self.p = some(p);
+ unsafe {
+ self.buffer = some(
+ buffer_resource(
+ get_buffer(ptr::addr_of((*p).header))));
+ };
}
drop {
//if self.p != none {
p <-> self.p;
sender_terminate(option::unwrap(p))
}
+ //unsafe { #error("send_drop: %?",
+ // if self.buffer == none {
+ // "none"
+ // } else { "some" }); }
}
fn unwrap() -> *packet<T> {
let mut p = none;
p <-> self.p;
option::unwrap(p)
}
+
+ pure fn header() -> *packet_header {
+ alt self.p {
+ some(packet) {
+ unsafe {
+ let packet = &*packet;
+ let header = ptr::addr_of(packet.header);
+ //forget(packet);
+ header
+ }
+ }
+ none { fail ~"packet already consumed" }
+ }
+ }
+
+ fn reuse_buffer() -> buffer_resource<Tbuffer> {
+ //#error("send reuse_buffer");
+ let mut tmp = none;
+ tmp <-> self.buffer;
+ option::unwrap(tmp)
+ }
}
-class recv_packet<T: send> {
+type recv_packet<T: send> = recv_packet_buffered<T, packet<T>>;
+
+fn recv_packet<T: send>(p: *packet<T>) -> recv_packet<T> {
+ recv_packet_buffered(p)
+}
+
+class recv_packet_buffered<T: send, Tbuffer: send> : selectable {
let mut p: option<*packet<T>>;
+ let mut buffer: option<buffer_resource<Tbuffer>>;
new(p: *packet<T>) {
//#debug("take recv %?", p);
self.p = some(p);
+ unsafe {
+ self.buffer = some(
+ buffer_resource(
+ get_buffer(ptr::addr_of((*p).header))));
+ };
}
drop {
//if self.p != none {
p <-> self.p;
receiver_terminate(option::unwrap(p))
}
+ //unsafe { #error("recv_drop: %?",
+ // if self.buffer == none {
+ // "none"
+ // } else { "some" }); }
}
fn unwrap() -> *packet<T> {
let mut p = none;
alt self.p {
some(packet) {
unsafe {
- let packet = uniquify(packet);
+ let packet = &*packet;
let header = ptr::addr_of(packet.header);
- forget(packet);
+ //forget(packet);
header
}
}
none { fail ~"packet already consumed" }
}
}
+
+ fn reuse_buffer() -> buffer_resource<Tbuffer> {
+ //#error("recv reuse_buffer");
+ let mut tmp = none;
+ tmp <-> self.buffer;
+ option::unwrap(tmp)
+ }
}
fn entangle<T: send>() -> (send_packet<T>, recv_packet<T>) {
(send_packet(p), recv_packet(p))
}
-fn spawn_service<T: send>(
- init: extern fn() -> (send_packet<T>, recv_packet<T>),
- +service: fn~(+recv_packet<T>))
- -> send_packet<T>
+fn spawn_service<T: send, Tb: send>(
+ init: extern fn() -> (send_packet_buffered<T, Tb>,
+ recv_packet_buffered<T, Tb>),
+ +service: fn~(+recv_packet_buffered<T, Tb>))
+ -> send_packet_buffered<T, Tb>
{
let (client, server) = init();
client
}
-fn spawn_service_recv<T: send>(
- init: extern fn() -> (recv_packet<T>, send_packet<T>),
- +service: fn~(+send_packet<T>))
- -> recv_packet<T>
+fn spawn_service_recv<T: send, Tb: send>(
+ init: extern fn() -> (recv_packet_buffered<T, Tb>,
+ send_packet_buffered<T, Tb>),
+ +service: fn~(+send_packet_buffered<T, Tb>))
+ -> recv_packet_buffered<T, Tb>
{
let (client, server) = init();
}
}
+// It'd be nice to call this send, but it'd conflict with the built-in
+// send kind.
+trait channel<T: send> {
+ fn send(+x: T);
+}
+
+trait recv<T: send> {
+ fn recv() -> T;
+ fn try_recv() -> option<T>;
+ // This should perhaps be a new trait
+ pure fn peek() -> bool;
+}
+
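The `channel`/`recv` trait split introduced above (send on one trait, `recv`/`try_recv`/`peek` on another) restated with today's trait syntax, backed by `std::sync::mpsc` purely for illustration; the trait names are capitalized here to avoid clashing with the std types:

```rust
use std::sync::mpsc;

// Sending and receiving live on separate traits, as in the patch.
trait Channel<T> {
    fn send(&self, x: T);
}

trait Recv<T> {
    fn try_recv(&self) -> Option<T>;
    // recv in terms of try_recv, like port_set::recv above
    // (a busy loop here; the real code blocks).
    fn recv(&self) -> T {
        loop {
            if let Some(x) = self.try_recv() {
                return x;
            }
        }
    }
}

impl<T> Channel<T> for mpsc::Sender<T> {
    fn send(&self, x: T) {
        mpsc::Sender::send(self, x).unwrap();
    }
}

impl<T> Recv<T> for mpsc::Receiver<T> {
    fn try_recv(&self) -> Option<T> {
        mpsc::Receiver::try_recv(self).ok()
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    Channel::send(&tx, 7);
    assert_eq!(Recv::recv(&rx), 7);
}
```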
type chan_<T:send> = { mut endp: option<streamp::client::open<T>> };
enum chan<T:send> {
(chan_({ mut endp: some(c) }), port_({ mut endp: some(s) }))
}
-impl chan<T: send> for chan<T> {
+impl chan<T: send> of channel<T> for chan<T> {
fn send(+x: T) {
let mut endp = none;
endp <-> self.endp;
}
}
-impl port<T: send> for port<T> {
+impl port<T: send> of recv<T> for port<T> {
fn recv() -> T {
let mut endp = none;
endp <-> self.endp;
fn try_recv() -> option<T> {
let mut endp = none;
endp <-> self.endp;
- alt pipes::try_recv(unwrap(endp)) {
+ alt move(pipes::try_recv(unwrap(endp))) {
some(streamp::data(x, endp)) {
- self.endp = some(#move(endp));
- some(#move(x))
+ self.endp = some(move!{endp});
+ some(move!{x})
}
none { none }
}
}
// Treat a whole bunch of ports as one.
-class port_set<T: send> {
+class port_set<T: send> : recv<T> {
let mut ports: ~[pipes::port<T>];
new() { self.ports = ~[]; }
fn try_recv() -> option<T> {
let mut result = none;
while result == none && self.ports.len() > 0 {
- let i = pipes::wait_many(self.ports.map(|p| p.header()));
+ let i = wait_many(self.ports.map(|p| p.header()));
// dereferencing an unsafe pointer nonsense to appease the
// borrowchecker.
- alt unsafe {(*ptr::addr_of(self.ports[i])).try_recv()} {
+ alt move(unsafe {(*ptr::addr_of(self.ports[i])).try_recv()}) {
some(m) {
- result = some(#move(m));
+ result = some(move!{m});
}
none {
// Remove this port.
fn recv() -> T {
option::unwrap(self.try_recv())
}
+
+ pure fn peek() -> bool {
+        // It'd be nice to use self.ports.each, but that version isn't
+ // pure.
+ for vec::each(self.ports) |p| {
+ if p.peek() { ret true }
+ }
+ false
+ }
}
-impl<T: send> of selectable for pipes::port<T> {
- pure fn header() -> *pipes::packet_header unchecked {
+impl<T: send> of selectable for port<T> {
+ pure fn header() -> *packet_header unchecked {
alt self.endp {
some(endp) {
endp.header()
}
-type shared_chan<T: send> = arc::exclusive<pipes::chan<T>>;
-
-trait send_on_shared_chan<T> {
- fn send(+x: T);
-}
+type shared_chan<T: send> = arc::exclusive<chan<T>>;
-impl chan<T: send> of send_on_shared_chan<T> for shared_chan<T> {
+impl chan<T: send> of channel<T> for shared_chan<T> {
fn send(+x: T) {
let mut xx = some(x);
do self.with |_c, chan| {
}
}
-fn shared_chan<T:send>(+c: pipes::chan<T>) -> shared_chan<T> {
+fn shared_chan<T:send>(+c: chan<T>) -> shared_chan<T> {
arc::exclusive(c)
}
export chan_from_global_ptr, weaken_task;
import compare_and_swap = rustrt::rust_compare_and_swap_ptr;
+import task::task_builder;
type rust_port_id = uint;
*/
unsafe fn chan_from_global_ptr<T: send>(
global: global_ptr,
- builder: fn() -> task::builder,
+ task_fn: fn() -> task::task_builder,
+f: fn~(comm::port<T>)
) -> comm::chan<T> {
let setup_po = comm::port();
let setup_ch = comm::chan(setup_po);
- let setup_ch = do task::run_listener(builder()) |setup_po| {
+ let setup_ch = do task_fn().spawn_listener |setup_po| {
let po = comm::port::<T>();
let ch = comm::chan(po);
comm::send(setup_ch, ch);
// Create the global channel, attached to a new task
let ch = unsafe {
- do chan_from_global_ptr(globchanp, task::builder) |po| {
+ do chan_from_global_ptr(globchanp, task::task) |po| {
let ch = comm::recv(po);
comm::send(ch, true);
let ch = comm::recv(po);
// This one just reuses the previous channel
let ch = unsafe {
- do chan_from_global_ptr(globchanp, task::builder) |po| {
+ do chan_from_global_ptr(globchanp, task::task) |po| {
let ch = comm::recv(po);
comm::send(ch, false);
}
do task::spawn {
let ch = unsafe {
do chan_from_global_ptr(
- globchanp, task::builder) |po| {
+ globchanp, task::task) |po| {
for uint::range(0u, 10u) |_j| {
let ch = comm::recv(po);
#[test]
fn test_weaken_task_wait() {
- let builder = task::builder();
- task::unsupervise(builder);
- do task::run(builder) {
+ do task::spawn_unlinked {
unsafe {
do weaken_task |po| {
comm::recv(po);
}
}
}
- let builder = task::builder();
- task::unsupervise(builder);
- do task::run(builder) {
+ do task::spawn_unlinked {
unsafe {
do weaken_task |po| {
// Wait for it to tell us to die
do str::as_c_str(s1) |p1| {
do str::as_c_str(s2) |p2| {
let v = ~[p0, p1, p2, null()];
- do vec::as_buf(v) |vp| {
+ do vec::as_buf(v) |vp, len| {
assert unsafe { buf_len(vp) } == 3u;
+ assert len == 4u;
}
}
}
(self.next() as u64 << 32) | self.next() as u64
}
- /// Return a random float
+ /// Return a random float in the interval [0,1]
fn gen_float() -> float {
self.gen_f64() as float
}
- /// Return a random f32
+ /// Return a random f32 in the interval [0,1]
fn gen_f32() -> f32 {
self.gen_f64() as f32
}
- /// Return a random f64
+ /// Return a random f64 in the interval [0,1]
fn gen_f64() -> f64 {
let u1 = self.next() as f64;
let u2 = self.next() as f64;
let u3 = self.next() as f64;
- let scale = u32::max_value as f64;
+ const scale : f64 = (u32::max_value as f64) + 1.0f64;
ret ((u1 / scale + u2) / scale + u3) / scale;
}
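The change to `gen_f64` replaces the old `scale = u32::max_value` with `(u32::max_value as f64) + 1.0`, i.e. 2^32, and folds three 32-bit draws together for extra low-order entropy. A check of the combination in modern Rust (the `combine` name is illustrative); note that f64 rounding can push the all-ones case up to exactly 1.0, consistent with the closed interval the doc comments state:

```rust
// gen_f64's combination: ((u1/s + u2)/s + u3)/s with s = 2^32.
fn combine(u1: u32, u2: u32, u3: u32) -> f64 {
    let scale = (u32::MAX as f64) + 1.0; // 2^32, exactly representable
    ((u1 as f64 / scale + u2 as f64) / scale + u3 as f64) / scale
}

fn main() {
    assert_eq!(combine(0, 0, 0), 0.0);
    let max = combine(u32::MAX, u32::MAX, u32::MAX);
    // Stays within [0,1]; rounding near 2^32 means the top of
    // the range can land on 1.0 itself.
    assert!((0.0..=1.0).contains(&max));
}
```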
--- /dev/null
+//! Runtime calls emitted by the compiler.
+
+import libc::c_char;
+import libc::c_void;
+import libc::size_t;
+import libc::uintptr_t;
+
+type rust_task = c_void;
+
+extern mod rustrt {
+ #[rust_stack]
+ fn rust_upcall_fail(expr: *c_char, file: *c_char, line: size_t);
+
+ #[rust_stack]
+ fn rust_upcall_exchange_malloc(td: *c_char, size: uintptr_t) -> *c_char;
+
+ #[rust_stack]
+ fn rust_upcall_exchange_free(ptr: *c_char);
+
+ #[rust_stack]
+ fn rust_upcall_malloc(td: *c_char, size: uintptr_t) -> *c_char;
+
+ #[rust_stack]
+ fn rust_upcall_free(ptr: *c_char);
+}
+
+// FIXME (#2861): This needs both the attribute, and the name prefixed with
+// 'rt_', otherwise the compiler won't find it. To fix this, see
+// gather_rust_rtcalls.
+#[rt(fail)]
+fn rt_fail(expr: *c_char, file: *c_char, line: size_t) {
+ rustrt::rust_upcall_fail(expr, file, line);
+}
+
+#[rt(exchange_malloc)]
+fn rt_exchange_malloc(td: *c_char, size: uintptr_t) -> *c_char {
+ ret rustrt::rust_upcall_exchange_malloc(td, size);
+}
+
+// NB: Calls to free CANNOT be allowed to fail, as throwing an exception from
+// inside a landing pad may corrupt the state of the exception handler. If a
+// problem occurs, call exit instead.
+#[rt(exchange_free)]
+fn rt_exchange_free(ptr: *c_char) {
+ rustrt::rust_upcall_exchange_free(ptr);
+}
+
+#[rt(malloc)]
+fn rt_malloc(td: *c_char, size: uintptr_t) -> *c_char {
+ ret rustrt::rust_upcall_malloc(td, size);
+}
+
+// NB: Calls to free CANNOT be allowed to fail, as throwing an exception from
+// inside a landing pad may corrupt the state of the exception handler. If a
+// problem occurs, call exit instead.
+#[rt(free)]
+fn rt_free(ptr: *c_char) {
+ rustrt::rust_upcall_free(ptr);
+}
+
+// Local Variables:
+// mode: rust;
+// fill-column: 78;
+// indent-tabs-mode: nil
+// c-basic-offset: 4
+// buffer-file-coding-system: utf-8-unix
+// End:
//! Process spawning
import option::{some, none};
import libc::{pid_t, c_void, c_int};
+import io::reader_util;
export program;
export run_program;
vec::push_all(argptrs, str::as_c_str(*t, |b| ~[b]));
}
vec::push(argptrs, ptr::null());
- vec::as_buf(argptrs, cb)
+ vec::as_buf(argptrs, |buf, _len| cb(buf))
}
#[cfg(unix)]
vec::push_all(ptrs, str::as_c_str(*t, |b| ~[b]));
}
vec::push(ptrs, ptr::null());
- vec::as_buf(ptrs, |p|
+ vec::as_buf(ptrs, |p, _len|
unsafe { cb(::unsafe::reinterpret_cast(p)) }
)
}
::unsafe::forget(v);
}
blk += ~[0_u8];
- vec::as_buf(blk, |p| cb(::unsafe::reinterpret_cast(p)))
+ vec::as_buf(blk, |p, _len| cb(::unsafe::reinterpret_cast(p)))
}
_ {
cb(ptr::null())
as_bytes,
as_buf,
as_c_str,
- unpack_slice,
// Adding things to and removing things from a string
push_str_no_overallocate,
*
* Fails if invalid UTF-8
*/
-pure fn from_bytes(+vv: ~[u8]) -> ~str {
+pure fn from_bytes(vv: &[const u8]) -> ~str {
assert is_utf8(vv);
ret unsafe { unsafe::from_bytes(vv) };
}
+/// Copy a slice into a new unique str
+pure fn from_slice(s: &str) -> ~str {
+ unsafe { unsafe::slice_bytes(s, 0, len(s)) }
+}
+
/**
* Convert a byte to a UTF-8 string
*
let new_len = len + nb;
reserve_at_least(s, new_len);
let off = len;
- do as_buf(s) |buf| {
+ do as_buf(s) |buf, _len| {
let buf: *mut u8 = ::unsafe::reinterpret_cast(buf);
if nb == 1u {
*ptr::mut_offset(buf, off) =
}
/// Convert a vector of chars to a string
-pure fn from_chars(chs: &[const char]) -> ~str {
+pure fn from_chars(chs: &[char]) -> ~str {
let mut buf = ~"";
unchecked {
reserve(buf, chs.len());
let llen = lhs.len();
let rlen = rhs.len();
reserve(lhs, llen + rlen);
- do as_buf(lhs) |lbuf| {
- do unpack_slice(rhs) |rbuf, _rlen| {
+ do as_buf(lhs) |lbuf, _llen| {
+ do as_buf(rhs) |rbuf, _rlen| {
let dst = ptr::offset(lbuf, llen);
ptr::memcpy(dst, rbuf, rlen);
}
let llen = lhs.len();
let rlen = rhs.len();
reserve_at_least(lhs, llen + rlen);
- do as_buf(lhs) |lbuf| {
- do unpack_slice(rhs) |rbuf, _rlen| {
+ do as_buf(lhs) |lbuf, _llen| {
+ do as_buf(rhs) |rbuf, _rlen| {
let dst = ptr::offset(lbuf, llen);
ptr::memcpy(dst, rbuf, rlen);
}
/// Concatenate a vector of strings
-pure fn concat(v: &[const ~str]) -> ~str {
+pure fn concat(v: &[~str]) -> ~str {
let mut s: ~str = ~"";
for vec::each(v) |ss| { unchecked { push_str(s, ss) }; }
ret s;
}
/// Concatenate a vector of strings, placing a given separator between each
-pure fn connect(v: &[const ~str], sep: ~str) -> ~str {
+pure fn connect(v: &[~str], sep: &str) -> ~str {
let mut s = ~"", first = true;
for vec::each(v) |ss| {
if first { first = false; } else { unchecked { push_str(s, sep); } }
fn unshift_char(&s: ~str, ch: char) { s = from_char(ch) + s; }
/// Returns a string with leading whitespace removed
-pure fn trim_left(+s: ~str) -> ~str {
+pure fn trim_left(s: &str) -> ~str {
alt find(s, |c| !char::is_whitespace(c)) {
none { ~"" }
some(first) {
- if first == 0u { s }
- else unsafe { unsafe::slice_bytes(s, first, len(s)) }
+ unsafe { unsafe::slice_bytes(s, first, len(s)) }
}
}
}
/// Returns a string with trailing whitespace removed
-pure fn trim_right(+s: ~str) -> ~str {
+pure fn trim_right(s: &str) -> ~str {
alt rfind(s, |c| !char::is_whitespace(c)) {
none { ~"" }
some(last) {
let {next, _} = char_range_at(s, last);
- if next == len(s) { s }
- else unsafe { unsafe::slice_bytes(s, 0u, next) }
+ unsafe { unsafe::slice_bytes(s, 0u, next) }
}
}
}
/// Returns a string with leading and trailing whitespace removed
-pure fn trim(+s: ~str) -> ~str { trim_left(trim_right(s)) }
+pure fn trim(s: &str) -> ~str { trim_left(trim_right(s)) }
/*
Section: Transforming strings
*
* The result vector is not null-terminated.
*/
-pure fn bytes(s: ~str) -> ~[u8] {
+pure fn bytes(s: &str) -> ~[u8] {
unsafe {
- let mut s_copy = s;
+ let mut s_copy = from_slice(s);
let mut v: ~[u8] = ::unsafe::transmute(s_copy);
vec::unsafe::set_len(v, len(s));
ret v;
/// Work with the string as a byte slice, not including trailing null.
#[inline(always)]
pure fn byte_slice<T>(s: &str, f: fn(v: &[u8]) -> T) -> T {
- do unpack_slice(s) |p,n| {
+ do as_buf(s) |p,n| {
unsafe { vec::unsafe::form_slice(p, n-1u, f) }
}
}
*
 * The original string with all occurrences of `from` replaced with `to`
*/
-pure fn replace(s: ~str, from: ~str, to: ~str) -> ~str {
+pure fn replace(s: &str, from: &str, to: &str) -> ~str {
let mut result = ~"", first = true;
do iter_between_matches(s, from) |start, end| {
if first { first = false; } else { unchecked {push_str(result, to); }}
/// String hash function
pure fn hash(&&s: ~str) -> uint {
- // djb hash.
- // FIXME: replace with murmur. (see #859 and #1616)
- let mut u: uint = 5381u;
- for each(s) |c| { u *= 33u; u += c as uint; }
- ret u;
+ let x = do as_bytes(s) |bytes| {
+ hash::hash_bytes(bytes)
+ };
+ ret x as uint;
}
/*
/// Returns the string length/size in bytes not counting the null terminator
pure fn len(s: &str) -> uint {
- do unpack_slice(s) |_p, n| { n - 1u }
+ do as_buf(s) |_p, n| { n - 1u }
}
/// Returns the number of characters that a string holds
}
/// Determines if a vector of `u16` contains valid UTF-16
-pure fn is_utf16(v: &[const u16]) -> bool {
+pure fn is_utf16(v: &[u16]) -> bool {
let len = vec::len(v);
let mut i = 0u;
while (i < len) {
ret u;
}
-pure fn utf16_chars(v: &[const u16], f: fn(char)) {
+pure fn utf16_chars(v: &[u16], f: fn(char)) {
let len = vec::len(v);
let mut i = 0u;
while (i < len && v[i] != 0u16) {
}
-pure fn from_utf16(v: &[const u16]) -> ~str {
+pure fn from_utf16(v: &[u16]) -> ~str {
let mut buf = ~"";
unchecked {
reserve(buf, vec::len(v));
}
}
-/**
- * Work with the byte buffer of a string.
- *
- * Allows for unsafe manipulation of strings, which is useful for foreign
- * interop.
- */
-pure fn as_buf<T>(s: ~str, f: fn(*u8) -> T) -> T {
- as_bytes(s, |v| unsafe { vec::as_buf(v, f) })
-}
-
/**
* Work with the byte buffer of a string as a null-terminated C string.
*
* Allows for unsafe manipulation of strings, which is useful for foreign
- * interop, without copying the original string.
+ * interop. This is similar to `str::as_buf`, but guarantees null-termination.
+ * If the given slice is not already null-terminated, this function will
+ * allocate a temporary, copy the slice, null terminate it, and pass
+ * that instead.
*
* # Example
*
* ~~~
- * let s = str::as_buf("PATH", { |path_buf| libc::getenv(path_buf) });
+ * let s = str::as_c_str("PATH", { |path| libc::getenv(path) });
* ~~~
*/
-pure fn as_c_str<T>(s: ~str, f: fn(*libc::c_char) -> T) -> T {
- as_buf(s, |buf| f(buf as *libc::c_char))
+pure fn as_c_str<T>(s: &str, f: fn(*libc::c_char) -> T) -> T {
+ do as_buf(s) |buf, len| {
+ // NB: len includes the trailing null.
+ assert len > 0;
+ if unsafe { *(ptr::offset(buf,len-1)) != 0 } {
+ as_c_str(from_slice(s), f)
+ } else {
+ f(buf as *libc::c_char)
+ }
+ }
}
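The new `as_c_str` guarantees a trailing null by copying the slice when it is not already terminated. In today's standard library that same contract is what `std::ffi::CString` provides; a sketch (the wrapper name is ours, not the 0.3 API):

```rust
use std::ffi::CString;
use std::os::raw::c_char;

// Like as_c_str: hand f a NUL-terminated pointer by allocating a
// terminated copy of the input. CString rejects interior NUL bytes.
fn with_c_str<T>(s: &str, f: impl FnOnce(*const c_char) -> T) -> T {
    let c = CString::new(s).expect("string contains an interior NUL");
    f(c.as_ptr())
}
```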
/**
* Work with the byte buffer and length of a slice.
*
- * The unpacked length is one byte longer than the 'official' indexable
+ * The given length is one byte longer than the 'official' indexable
* length of the string. This is to permit probing the byte past the
* indexable area for a null byte, as is the case in slices pointing
* to full strings, or suffixes of them.
*/
#[inline(always)]
-pure fn unpack_slice<T>(s: &str, f: fn(*u8, uint) -> T) -> T {
+pure fn as_buf<T>(s: &str, f: fn(*u8, uint) -> T) -> T {
unsafe {
let v : *(*u8,uint) = ::unsafe::reinterpret_cast(ptr::addr_of(s));
let (buf,len) = *v;
}
/// Create a Rust string from a *u8 buffer of the given length
- unsafe fn from_buf_len(buf: *u8, len: uint) -> ~str {
- let mut v: ~[u8] = ~[];
+ unsafe fn from_buf_len(buf: *const u8, len: uint) -> ~str {
+ let mut v: ~[mut u8] = ~[mut];
vec::reserve(v, len + 1u);
- vec::as_buf(v, |b| ptr::memcpy(b, buf, len));
+ vec::as_buf(v, |b, _len| ptr::memcpy(b, buf as *u8, len));
vec::unsafe::set_len(v, len);
vec::push(v, 0u8);
from_buf_len(::unsafe::reinterpret_cast(c_str), len)
}
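For reference, the copy-out-of-a-raw-buffer pattern in `from_buf_len`, sketched with today's `slice::from_raw_parts`; unlike the 0.3 version, this sketch validates the bytes as UTF-8:

```rust
// Copy `len` bytes out of a raw pointer into an owned string.
// Caller must guarantee buf points at len readable bytes.
unsafe fn from_buf_len(buf: *const u8, len: usize) -> String {
    let bytes = std::slice::from_raw_parts(buf, len).to_vec();
    String::from_utf8(bytes).expect("buffer was not valid UTF-8")
}
```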
- /**
- * Converts a vector of bytes to a string.
- *
- * Does not verify that the vector contains valid UTF-8.
- */
- unsafe fn from_bytes(+v: ~[const u8]) -> ~str {
- unsafe {
- let mut vcopy = ::unsafe::transmute(v);
- vec::push(vcopy, 0u8);
- ::unsafe::transmute(vcopy)
- }
- }
+ /// Converts a vector of bytes to a string.
+ unsafe fn from_bytes(v: &[const u8]) -> ~str {
+ do vec::as_const_buf(v) |buf, len| {
+ from_buf_len(buf, len)
+ }
+ }
- /**
- * Converts a byte to a string.
- *
- * Does not verify that the byte is valid UTF-8.
- */
- unsafe fn from_byte(u: u8) -> ~str { unsafe::from_bytes(~[u]) }
+ /// Converts a byte to a string.
+ unsafe fn from_byte(u: u8) -> ~str { unsafe::from_bytes([u]) }
/**
* Takes a bytewise (not UTF-8) slice from a string.
* Fails if `end` is greater than the length of the string.
*/
unsafe fn slice_bytes(s: &str, begin: uint, end: uint) -> ~str {
- do unpack_slice(s) |sbuf, n| {
+ do as_buf(s) |sbuf, n| {
assert (begin <= end);
assert (end <= n);
let mut v = ~[];
vec::reserve(v, end - begin + 1u);
unsafe {
- do vec::as_buf(v) |vbuf| {
+ do vec::as_buf(v) |vbuf, _vlen| {
let src = ptr::offset(sbuf, begin);
ptr::memcpy(vbuf, src, end - begin);
}
fn is_whitespace() -> bool;
fn is_alphanumeric() -> bool;
pure fn len() -> uint;
- fn slice(begin: uint, end: uint) -> ~str;
+ pure fn slice(begin: uint, end: uint) -> ~str;
fn split(sepfn: fn(char) -> bool) -> ~[~str];
fn split_char(sep: char) -> ~[~str];
fn split_str(sep: &a/str) -> ~[~str];
fn to_upper() -> ~str;
fn escape_default() -> ~str;
fn escape_unicode() -> ~str;
+ pure fn to_unique() -> ~str;
}
/// Extension methods for strings
* beyond the last character of the string
*/
#[inline]
- fn slice(begin: uint, end: uint) -> ~str { slice(self, begin, end) }
+ pure fn slice(begin: uint, end: uint) -> ~str { slice(self, begin, end) }
/// Splits a string into substrings using a character function
#[inline]
fn split(sepfn: fn(char) -> bool) -> ~[~str] { split(self, sepfn) }
/// Escape each char in `s` with char::escape_unicode.
#[inline]
fn escape_unicode() -> ~str { escape_unicode(self) }
+
+ #[inline]
+ pure fn to_unique() -> ~str { self.slice(0, self.len()) }
}
#[cfg(test)]
#[test]
fn test_as_buf() {
let a = ~"Abcdefg";
- let b = as_buf(a, |buf| {
+ let b = as_buf(a, |buf, _l| {
assert unsafe { *buf } == 65u8;
100
});
#[test]
fn test_as_buf_small() {
let a = ~"A";
- let b = as_buf(a, |buf| {
+ let b = as_buf(a, |buf, _l| {
assert unsafe { *buf } == 65u8;
100
});
fn test_as_buf2() {
unsafe {
let s = ~"hello";
- let sb = as_buf(s, |b| b);
+ let sb = as_buf(s, |b, _l| b);
let s_cstr = unsafe::from_buf(sb);
assert (eq(s_cstr, s));
}
}
+ #[test]
+ fn test_as_buf_3() {
+ let a = ~"hello";
+ do as_buf(a) |buf, len| {
+ unsafe {
+ assert a[0] == 'h' as u8;
+ assert *buf == 'h' as u8;
+ assert len == 6u;
+ assert *ptr::offset(buf,4u) == 'o' as u8;
+ assert *ptr::offset(buf,5u) == 0u8;
+ }
+ }
+ }
+
#[test]
fn vec_str_conversions() {
let s1: ~str = ~"All mimsy were the borogoves";
assert found_b;
}
- #[test]
- fn test_unpack_slice() {
- let a = ~"hello";
- do unpack_slice(a) |buf, len| {
- unsafe {
- assert a[0] == 'h' as u8;
- assert *buf == 'h' as u8;
- assert len == 6u;
- assert *ptr::offset(buf,4u) == 'o' as u8;
- assert *ptr::offset(buf,5u) == 0u8;
- }
- }
- }
-
#[test]
fn test_escape_unicode() {
assert escape_unicode(~"abc") == ~"\\x61\\x62\\x63";
export log_str;
export lock_and_signal, condition, methods;
+import task::atomically;
+
enum type_desc = {
size: uint,
align: uint
#[abi = "cdecl"]
extern mod rustrt {
- fn unsupervise();
pure fn shape_log_str(t: *sys::type_desc, data: *()) -> ~str;
fn rust_create_cond_lock() -> rust_cond_lock;
impl methods for lock_and_signal {
unsafe fn lock<T>(f: fn() -> T) -> T {
- rustrt::rust_lock_cond_lock(self.lock);
- let _r = unlock(self.lock);
- f()
+ do atomically {
+ rustrt::rust_lock_cond_lock(self.lock);
+ let _r = unlock(self.lock);
+ f()
+ }
}
unsafe fn lock_cond<T>(f: fn(condition) -> T) -> T {
- rustrt::rust_lock_cond_lock(self.lock);
- let _r = unlock(self.lock);
- f(condition_(self.lock))
+ do atomically {
+ rustrt::rust_lock_cond_lock(self.lock);
+ let _r = unlock(self.lock);
+ f(condition_(self.lock))
+ }
}
}
export sched_mode;
export sched_opts;
export task_opts;
-export builder;
+export task_builder;
export default_task_opts;
export get_opts;
export run;
export future_result;
-export future_task;
-export unsupervise;
export run_listener;
export run_with;
export spawn;
+export spawn_unlinked;
+export spawn_supervised;
export spawn_with;
export spawn_listener;
export spawn_sched;
export failing;
export get_task;
export unkillable;
+export atomically;
export local_data_key;
export local_data_pop;
/* Data types */
/// A handle to a task
-enum task = task_id;
+enum task { task_handle(task_id) }
/**
* Indicates the manner in which a task exited.
*
* # Fields
*
- * * supervise - Do not propagate failure to the parent task
+ * * linked - Propagate failure bidirectionally between child and parent
*
* All tasks are linked together via a tree, from parents to children. By
* default children are 'supervised' by their parent and when they fail
* scheduler other tasks will be impeded or even blocked indefinitely.
*/
type task_opts = {
- supervise: bool,
+ linked: bool,
+ parented: bool,
notify_chan: option<comm::chan<notification>>,
sched: option<sched_opts>,
};
// when you try to reuse the builder to spawn a new task. We'll just
// sidestep that whole issue by making builders uncopyable and making
// the run function move them in.
-enum builder {
- builder_({
- mut opts: task_opts,
- mut gen_body: fn@(+fn~()) -> fn~(),
- can_not_copy: option<comm::port<()>>
+class dummy { let x: (); new() { self.x = (); } drop { } }
+
+// FIXME (#2585): Replace the 'consumed' bit with move mode on self
+enum task_builder = {
+ opts: task_opts,
+ gen_body: fn@(+fn~()) -> fn~(),
+ can_not_copy: option<dummy>,
+ mut consumed: bool,
+};
+
+/**
+ * Generate the base configuration for spawning a task, off of which more
+ * configuration methods can be chained.
+ * For example, task().unlinked().spawn is equivalent to spawn_unlinked.
+ */
+fn task() -> task_builder {
+ task_builder({
+ opts: default_task_opts(),
+ gen_body: |body| body, // Identity function
+ can_not_copy: none,
+ mut consumed: false,
})
}
-
-/* Task construction */
-
-fn default_task_opts() -> task_opts {
- /*!
- * The default task options
- *
- * By default all tasks are supervised by their parent, are spawned
- * into the same scheduler, and do not post lifecycle notifications.
- */
-
- {
- supervise: true,
- notify_chan: none,
- sched: none
+impl private_methods for task_builder {
+ fn consume() -> task_builder {
+ if self.consumed {
+ fail ~"Cannot copy a task_builder"; // Fake move mode on self
+ }
+ self.consumed = true;
+ task_builder({ can_not_copy: none, mut consumed: false, with *self })
}
}
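The `consumed` flag fakes move mode on `self`, per the FIXME above. In modern Rust the same guarantee falls out of taking `self` by value, so reusing a builder is a compile error rather than a runtime `fail`; a sketch with hypothetical field names:

```rust
// Each configuration method consumes the builder and returns a new
// one, so a spent builder cannot be used again.
#[derive(Debug, PartialEq)]
struct TaskBuilder { linked: bool, parented: bool }

impl TaskBuilder {
    fn new() -> Self { TaskBuilder { linked: true, parented: false } }
    // Neither side kills the other on failure.
    fn unlinked(mut self) -> Self { self.linked = false; self.parented = false; self }
    // Parent's failure kills the child, but not vice versa.
    fn supervised(mut self) -> Self { self.linked = false; self.parented = true; self }
}
```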
-fn builder() -> builder {
- //! Construct a builder
-
- let body_identity = fn@(+body: fn~()) -> fn~() { body };
-
- builder_({
- mut opts: default_task_opts(),
- mut gen_body: body_identity,
- can_not_copy: none
- })
-}
-
-fn get_opts(builder: builder) -> task_opts {
- //! Get the task_opts associated with a builder
-
- builder.opts
-}
+impl task_builder for task_builder {
+ /**
+ * Decouple the child task's failure from the parent's. If either fails,
+ * the other will not be killed.
+ */
+ fn unlinked() -> task_builder {
+ task_builder({
+ opts: { linked: false with self.opts },
+ can_not_copy: none,
+ with *self.consume()
+ })
+ }
+ /**
+ * Unidirectionally link the child task's failure with the parent's. The
+ * child's failure will not kill the parent, but the parent's will kill
+ * the child.
+ */
+ fn supervised() -> task_builder {
+ task_builder({
+ opts: { linked: false, parented: true with self.opts },
+ can_not_copy: none,
+ with *self.consume()
+ })
+ }
+ /**
+ * Link the child task's and parent task's failures. If either fails, the
+ * other will be killed.
+ */
+ fn linked() -> task_builder {
+ task_builder({
+ opts: { linked: true, parented: false with self.opts },
+ can_not_copy: none,
+ with *self.consume()
+ })
+ }
-fn set_opts(builder: builder, opts: task_opts) {
- /*!
- * Set the task_opts associated with a builder
+ /**
+ * Get a future representing the exit status of the task.
*
- * To update a single option use a pattern like the following:
+ * Taking the value of the future will block until the child task
+ * terminates. The future-receiving callback specified will be called
+ * *before* the task is spawned; as such, do not invoke .get() within the
+ * closure; rather, store it in an outer variable/list for later use.
*
- * set_opts(builder, {
- * supervise: false
- * with get_opts(builder)
- * });
+ * Note that the future returned by this function is only useful for
+ * obtaining the value of the next task to be spawned with the
+ * builder. If additional tasks are spawned with the same builder
+ * then a new result future must be obtained prior to spawning each
+ * task.
*/
+ fn future_result(blk: fn(-future::future<task_result>)) -> task_builder {
+ // FIXME (#1087, #1857): Once linked failure and notification are
+ // handled in the library, I can imagine implementing this by just
+ // registering an arbitrary number of task::on_exit handlers and
+ // sending out messages.
+
+ // Construct the future and give it to the caller.
+ let po = comm::port::<notification>();
+ let ch = comm::chan(po);
+
+ blk(do future::from_fn {
+ alt comm::recv(po) {
+ exit(_, result) { result }
+ }
+ });
- builder.opts = opts;
-}
-
-fn set_sched_mode(builder: builder, mode: sched_mode) {
- set_opts(builder, {
- sched: some({
- mode: mode,
- foreign_stack_size: none
+ // Reconfigure self to use a notify channel.
+ task_builder({
+ opts: { notify_chan: some(ch) with self.opts },
+ can_not_copy: none,
+ with *self.consume()
})
- with get_opts(builder)
- });
-}
+ }
+ /// Configure a custom scheduler mode for the task.
+ fn sched_mode(mode: sched_mode) -> task_builder {
+ task_builder({
+ opts: { sched: some({ mode: mode, foreign_stack_size: none})
+ with self.opts },
+ can_not_copy: none,
+ with *self.consume()
+ })
+ }
-fn add_wrapper(builder: builder, gen_body: fn@(+fn~()) -> fn~()) {
- /*!
+ /**
* Add a wrapper to the body of the spawned task.
*
* Before the task is spawned it is passed through a 'body generator'
* generator by applying the task body which results from the
* existing body generator to the new body generator.
*/
+ fn add_wrapper(wrapper: fn@(+fn~()) -> fn~()) -> task_builder {
+ let prev_gen_body = self.gen_body;
+ task_builder({
+ gen_body: |body| { wrapper(prev_gen_body(body)) },
+ can_not_copy: none,
+ with *self.consume()
+ })
+ }
- let prev_gen_body = builder.gen_body;
- builder.gen_body = fn@(+body: fn~()) -> fn~() {
- gen_body(prev_gen_body(body))
- };
-}
-
-fn run(-builder: builder, +f: fn~()) {
- /*!
+ /**
* Creates and executes a new child task
*
* Sets up a new task with its own call stack and schedules it to run
* the provided unique closure. The task has the properties and behavior
- * specified by `builder`.
+ * specified by the task_builder.
*
* # Failure
*
* When spawning into a new scheduler, the number of threads requested
* must be greater than zero.
*/
-
- let body = builder.gen_body(f);
- spawn_raw(builder.opts, body);
-}
-
-
-/* Builder convenience functions */
-
-fn future_result(builder: builder) -> future::future<task_result> {
- /*!
- * Get a future representing the exit status of the task.
- *
- * Taking the value of the future will block until the child task
- * terminates.
- *
- * Note that the future returning by this function is only useful for
- * obtaining the value of the next task to be spawning with the
- * builder. If additional tasks are spawned with the same builder
- * then a new result future must be obtained prior to spawning each
- * task.
- */
-
- // FIXME (#1087, #1857): Once linked failure and notification are
- // handled in the library, I can imagine implementing this by just
- // registering an arbitrary number of task::on_exit handlers and
- // sending out messages.
-
- let po = comm::port();
- let ch = comm::chan(po);
-
- set_opts(builder, {
- notify_chan: some(ch)
- with get_opts(builder)
- });
-
- do future::from_fn {
- alt comm::recv(po) {
- exit(_, result) { result }
- }
+ fn spawn(+f: fn~()) {
+ let x = self.consume();
+ spawn_raw(x.opts, x.gen_body(f));
}
-}
-
-fn future_task(builder: builder) -> future::future<task> {
- //! Get a future representing the handle to the new task
-
- import future::future_pipe;
-
- let (po, ch) = future_pipe::init();
-
- let ch = ~mut some(ch);
-
- do add_wrapper(builder) |body, move ch| {
- let ch = { let mut t = none;
- t <-> *ch;
- ~mut t};
- fn~(move ch) {
- let mut po = none;
- po <-> *ch;
- future_pipe::server::completed(option::unwrap(po),
- get_task());
- body();
+ /// Runs a task while transferring ownership of one argument to the child.
+ fn spawn_with<A: send>(+arg: A, +f: fn~(+A)) {
+ let arg = ~mut some(arg);
+ do self.spawn {
+ let mut my_arg = none;
+ my_arg <-> *arg;
+ f(option::unwrap(my_arg))
}
}
- future::from_port(po)
-}
-
-fn unsupervise(builder: builder) {
- //! Configures the new task to not propagate failure to its parent
-
- set_opts(builder, {
- supervise: false
- with get_opts(builder)
- });
-}
-
-fn run_with<A:send>(-builder: builder,
- +arg: A,
- +f: fn~(+A)) {
-
- /*!
- * Runs a task, while transfering ownership of one argument to the
- * child.
- *
- * This is useful for transfering ownership of noncopyables to
- * another task.
- *
- */
-
- let arg = ~mut some(arg);
- do run(builder) {
- let mut my_arg = none;
- my_arg <-> *arg;
- f(option::unwrap(my_arg))
- }
-}
-fn run_listener<A:send>(-builder: builder,
- +f: fn~(comm::port<A>)) -> comm::chan<A> {
- /*!
+ /**
* Runs a new task while providing a channel from the parent to the child
*
* Sets up a communication channel from the current task to the new
* otherwise be required to establish communication from the parent
* to the child.
*/
+ fn spawn_listener<A: send>(+f: fn~(comm::port<A>)) -> comm::chan<A> {
+ let setup_po = comm::port();
+ let setup_ch = comm::chan(setup_po);
+ do self.spawn {
+ let po = comm::port();
+ let ch = comm::chan(po);
+ comm::send(setup_ch, ch);
+ f(po);
+ }
+ comm::recv(setup_po)
+ }
+}
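The `spawn_listener` handshake works by having the child create its own port, send the corresponding channel back over a one-shot setup channel, and only then run the body. The same shape with `std::sync::mpsc` and OS threads (a sketch, not the 0.3 comm API):

```rust
use std::sync::mpsc;
use std::thread;

// Returns a sender the parent can use to reach the child's receiver.
fn spawn_listener<A: Send + 'static>(
    f: impl FnOnce(mpsc::Receiver<A>) + Send + 'static,
) -> mpsc::Sender<A> {
    let (setup_tx, setup_rx) = mpsc::channel();
    thread::spawn(move || {
        let (tx, rx) = mpsc::channel();
        setup_tx.send(tx).unwrap(); // hand the channel back to the parent
        f(rx);
    });
    setup_rx.recv().unwrap()
}
```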
- let setup_po = comm::port();
- let setup_ch = comm::chan(setup_po);
- do run(builder) {
- let po = comm::port();
- let mut ch = comm::chan(po);
- comm::send(setup_ch, ch);
- f(po);
- }
+/* Task construction */
- comm::recv(setup_po)
-}
+fn default_task_opts() -> task_opts {
+ /*!
+ * The default task options
+ *
+ * By default all tasks are supervised by their parent, are spawned
+ * into the same scheduler, and do not post lifecycle notifications.
+ */
+ {
+ linked: true,
+ parented: false,
+ notify_chan: none,
+ sched: none
+ }
+}
/* Spawn convenience functions */
* Sets up a new task with its own call stack and schedules it to run
* the provided unique closure.
*
- * This function is equivalent to `run(new_builder(), f)`.
+ * This function is equivalent to `task().spawn(f)`.
*/
- run(builder(), f);
+ task().spawn(f)
+}
+
+fn spawn_unlinked(+f: fn~()) {
+ /*!
+ * Creates a child task unlinked from the current one. If either this
+ * task or the child task fails, the other will not be killed.
+ */
+
+ task().unlinked().spawn(f)
+}
+
+fn spawn_supervised(+f: fn~()) {
+ /*!
+ * Creates a child task supervised by the current one. The child's failure
+ * will not kill the parent, but the parent's failure will kill the child.
+ */
+
+ task().supervised().spawn(f)
}
fn spawn_with<A:send>(+arg: A, +f: fn~(+A)) {
* This is useful for transferring ownership of noncopyables to
* another task.
*
- * This function is equivalent to `run_with(builder(), arg, f)`.
+ * This function is equivalent to `task().spawn_with(arg, f)`.
*/
- run_with(builder(), arg, f)
+ task().spawn_with(arg, f)
}
fn spawn_listener<A:send>(+f: fn~(comm::port<A>)) -> comm::chan<A> {
* };
* // Likewise, the parent has both a 'po' and 'ch'
*
- * This function is equivalent to `run_listener(builder(), f)`.
+ * This function is equivalent to `task().spawn_listener(f)`.
*/
- run_listener(builder(), f)
+ task().spawn_listener(f)
}
fn spawn_sched(mode: sched_mode, +f: fn~()) {
* greater than zero.
*/
- let mut builder = builder();
- set_sched_mode(builder, mode);
- run(builder, f);
+ task().sched_mode(mode).spawn(f)
}
fn try<T:send>(+f: fn~() -> T) -> result<T,()> {
let po = comm::port();
let ch = comm::chan(po);
- let mut builder = builder();
- unsupervise(builder);
- let result = future_result(builder);
- do run(builder) {
+
+ let mut result = none;
+
+ do task().unlinked().future_result(|-r| { result = some(r); }).spawn {
comm::send(ch, f());
}
- alt future::get(result) {
+ alt future::get(option::unwrap(result)) {
success { result::ok(comm::recv(po)) }
failure { result::err(()) }
}
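The reworked `try` is the unlinked-spawn-plus-future idiom: run `f` in an isolated task and map its exit status to a `result`. The modern equivalent is a thread join handle, whose `join` already returns `Err` when the thread panicked; a sketch:

```rust
use std::thread;

// Run f on its own thread; a panic in f becomes Err(()) here
// instead of unwinding the caller.
fn try_task<T: Send + 'static>(
    f: impl FnOnce() -> T + Send + 'static,
) -> Result<T, ()> {
    thread::spawn(f).join().map_err(|_| ())
}
```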
//! Yield control to the task scheduler
let task_ = rustrt::rust_get_task();
- let mut killed = false;
- rustrt::rust_task_yield(task_, killed);
+ let killed = rustrt::rust_task_yield(task_);
if killed && !failing() {
fail ~"killed";
}
fn get_task() -> task {
//! Get a handle to the running task
- task(rustrt::get_task_id())
+ task_handle(rustrt::get_task_id())
}
/**
*/
unsafe fn unkillable(f: fn()) {
class allow_failure {
- let i: (); // since a class must have at least one field
- new(_i: ()) { self.i = (); }
- drop { rustrt::rust_task_allow_kill(); }
+ let t: *rust_task;
+ new(t: *rust_task) { self.t = t; }
+ drop { rustrt::rust_task_allow_kill(self.t); }
}
- let _allow_failure = allow_failure(());
- rustrt::rust_task_inhibit_kill();
+ let t = rustrt::rust_get_task();
+ let _allow_failure = allow_failure(t);
+ rustrt::rust_task_inhibit_kill(t);
f();
}
+/**
+ * A stronger version of unkillable that also inhibits scheduling operations.
+ * For use with exclusive ARCs, which use pthread mutexes directly.
+ */
+unsafe fn atomically<U>(f: fn() -> U) -> U {
+ class defer_interrupts {
+ let t: *rust_task;
+ new(t: *rust_task) { self.t = t; }
+ drop {
+ rustrt::rust_task_allow_yield(self.t);
+ rustrt::rust_task_allow_kill(self.t);
+ }
+ }
+ let t = rustrt::rust_get_task();
+ let _interrupts = defer_interrupts(t);
+ rustrt::rust_task_inhibit_kill(t);
+ rustrt::rust_task_inhibit_yield(t);
+ f()
+}
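Both `defer_interrupts` and `allow_failure` above are destructor-based guards: inhibit on entry, re-allow in `drop`, so the re-enable runs even if `f` fails. The same pattern sketched with a counter standing in for the runtime's inhibit/allow calls (the counter is an assumption for illustration only):

```rust
use std::sync::atomic::{AtomicU32, Ordering};

// Stand-in for the runtime's kill-inhibition bookkeeping.
static INHIBIT_DEPTH: AtomicU32 = AtomicU32::new(0);

struct InhibitGuard;
impl InhibitGuard {
    fn new() -> Self {
        INHIBIT_DEPTH.fetch_add(1, Ordering::SeqCst); // "inhibit kill"
        InhibitGuard
    }
}
impl Drop for InhibitGuard {
    fn drop(&mut self) {
        INHIBIT_DEPTH.fetch_sub(1, Ordering::SeqCst); // "allow kill"
    }
}

fn unkillable<U>(f: impl FnOnce() -> U) -> U {
    let _guard = InhibitGuard::new(); // released when f returns or unwinds
    f()
}
```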
/****************************************************************************
* Internal
/* linked failure */
-type taskgroup_arc = arc::exclusive<option<dvec::dvec<option<*rust_task>>>>;
+type taskgroup_arc =
+ arc::exclusive<option<(dvec::dvec<option<*rust_task>>,dvec::dvec<uint>)>>;
class taskgroup {
// FIXME (#2816): Change dvec to an O(1) data structure (and change 'me'
// to a node-handle or somesuch when so done (or remove the field entirely
// if keyed by *rust_task)).
- let tasks: taskgroup_arc; // 'none' means the group already failed.
let me: *rust_task;
- let my_pos: uint;
- // let parent_group: taskgroup_arc; // FIXME (#1868) (bblum)
- // FIXME (#1868) XXX bblum: add a list of empty slots to get runtime back
- // Indicates whether this is the main (root) taskgroup. If so, failure
- // here should take down the entire runtime.
+ // List of tasks with whose fates this one's is intertwined.
+ let tasks: taskgroup_arc; // 'none' means the group already failed.
+ let my_pos: uint; // Index into above for this task's slot.
+ // Lists of tasks who will kill us if they fail, but whom we won't kill.
+ let parents: option<(taskgroup_arc,uint)>;
let is_main: bool;
- new(-tasks: taskgroup_arc, me: *rust_task, my_pos: uint, is_main: bool) {
- self.tasks = tasks;
- self.me = me;
- self.my_pos = my_pos;
- self.is_main = is_main;
+ let notifier: option<auto_notify>;
+ new(me: *rust_task, -tasks: taskgroup_arc, my_pos: uint,
+ -parents: option<(taskgroup_arc,uint)>, is_main: bool,
+ -notifier: option<auto_notify>) {
+ self.me = me;
+ self.tasks = tasks;
+ self.my_pos = my_pos;
+ self.parents = parents;
+ self.is_main = is_main;
+ self.notifier = notifier;
+ self.notifier.iter(|x| { x.failed = false; });
}
// Runs on task exit.
drop {
// If we are failing, the whole taskgroup needs to die.
if rustrt::rust_task_is_unwinding(self.me) {
+ self.notifier.iter(|x| { x.failed = true; });
// Take everybody down with us.
kill_taskgroup(self.tasks, self.me, self.my_pos, self.is_main);
} else {
- // Remove ourselves from the group.
+ // Remove ourselves from the group(s).
leave_taskgroup(self.tasks, self.me, self.my_pos);
}
+ // It doesn't matter whether this happens before or after dealing with
+ // our own taskgroup, so long as both happen before we die.
+ alt self.parents {
+ some((parent_group,pos_in_group)) {
+ leave_taskgroup(parent_group, self.me, pos_in_group);
+ }
+ none { }
+ }
+ }
+}
+
+class auto_notify {
+ let notify_chan: comm::chan<notification>;
+ let mut failed: bool;
+ new(chan: comm::chan<notification>) {
+ self.notify_chan = chan;
+ self.failed = true; // Un-set above when taskgroup successfully made.
+ }
+ drop {
+ let result = if self.failed { failure } else { success };
+ comm::send(self.notify_chan, exit(get_task(), result));
}
}
me: *rust_task) -> option<uint> {
do group_arc.with |_c, state| {
// If 'none', the group was failing. Can't enlist.
- do state.map |tasks| {
+ let mut newstate = none;
+ *state <-> newstate;
+ if newstate.is_some() {
+ let (tasks,empty_slots) = option::unwrap(newstate);
// Try to find an empty slot.
- alt tasks.position(|x| x == none) {
- some(empty_index) {
- tasks.set_elt(empty_index, some(me));
- empty_index
- }
- none {
- tasks.push(some(me));
- tasks.len() - 1
- }
- }
+ let slotno = if empty_slots.len() > 0 {
+ let empty_index = empty_slots.pop();
+ assert tasks[empty_index] == none;
+ tasks.set_elt(empty_index, some(me));
+ empty_index
+ } else {
+ tasks.push(some(me));
+ tasks.len() - 1
+ };
+ *state = some((tasks,empty_slots));
+ some(slotno)
+ } else {
+ none
}
}
}
// NB: Runs in destructor/post-exit context. Can't 'fail'.
fn leave_taskgroup(group_arc: taskgroup_arc, me: *rust_task, index: uint) {
do group_arc.with |_c, state| {
+ let mut newstate = none;
+ *state <-> newstate;
// If 'none', already failing and we've already gotten a kill signal.
- do state.map |tasks| {
+ if newstate.is_some() {
+ let (tasks,empty_slots) = option::unwrap(newstate);
assert tasks[index] == some(me);
tasks.set_elt(index, none);
+ empty_slots.push(index);
+ *state = some((tasks,empty_slots));
};
};
}
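The new `(tasks, empty_slots)` pair makes leave/enlist amortized O(1): leaving pushes the vacated index onto a free list, and enlisting pops it instead of scanning for a `none`. A sketch of just that slot bookkeeping:

```rust
// tasks[i] == None marks a vacant slot; `empty` lists vacant indices.
fn enlist(tasks: &mut Vec<Option<u32>>, empty: &mut Vec<usize>, id: u32) -> usize {
    if let Some(i) = empty.pop() {
        assert!(tasks[i].is_none());
        tasks[i] = Some(id); // reuse a vacated slot
        i
    } else {
        tasks.push(Some(id)); // no free slot: grow the list
        tasks.len() - 1
    }
}

fn leave(tasks: &mut Vec<Option<u32>>, empty: &mut Vec<usize>, i: usize) {
    tasks[i] = None;
    empty.push(i); // remember the slot for reuse
}
```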
// Might already be none, if somebody is failing simultaneously.
// That's ok; only one task needs to do the dirty work. (Might also
// see 'none' if somebody already failed and we got a kill signal.)
- do newstate.map |tasks| {
+ if newstate.is_some() {
+ let (tasks,_empty_slots) = option::unwrap(newstate);
// First remove ourself (killing ourself won't do much good). This
// is duplicated here to avoid having to lock twice.
assert tasks[index] == some(me);
if is_main {
rustrt::rust_task_kill_all(me);
}
- };
+ // Do NOT restore state to some(..)! It stays none to indicate
+ // that the whole taskgroup is failing, to forbid new spawns.
+ }
// (note: multiple tasks may reach this point)
};
}
unsafe::transmute((-2 as uint, 0u))
}
-fn share_parent_taskgroup() -> (taskgroup_arc, bool) {
+// The 'linked' arg tells whether or not to also ref the unidirectionally-
+// linked supervisors' group. False when the spawn is supervised, not linked.
+fn share_spawner_taskgroup(linked: bool)
+ -> (taskgroup_arc, option<taskgroup_arc>, bool) {
let me = rustrt::rust_get_task();
alt unsafe { local_get(me, taskgroup_key()) } {
some(group) {
+ // If they are linked to us, they share our parent group.
+ let parent_arc_opt = if linked {
+ group.parents.map(|x| alt x { (pg,_) { pg.clone() } })
+ } else {
+ none
+ };
// Clone the shared state for the child; propagate main-ness.
- (group.tasks.clone(), group.is_main)
+ (group.tasks.clone(), parent_arc_opt, group.is_main)
}
none {
// Main task, doing first spawn ever.
- let tasks = arc::exclusive(some(dvec::from_elem(some(me))));
- let group = @taskgroup(tasks.clone(), me, 0, true);
+ let tasks = arc::exclusive(some((dvec::from_elem(some(me)),
+ dvec::dvec())));
+ // Main group has no parent group.
+ let group = @taskgroup(me, tasks.clone(), 0, none, true, none);
unsafe { local_set(me, taskgroup_key(), group); }
// Tell child task it's also in the main group.
- (tasks, true)
+ // Whether or not it wanted our parent group, we haven't got one.
+ (tasks, none, true)
}
}
}
fn spawn_raw(opts: task_opts, +f: fn~()) {
// Decide whether the child needs to be in a new linked failure group.
- let (child_tg, is_main) = if opts.supervise {
- share_parent_taskgroup()
+ // This whole conditional should be consolidated with share_spawner above.
+ let (child_tg, parent_tg, is_main) = if opts.linked {
+ // It doesn't mean anything for a linked-spawned task to have a parent
+ // group. The spawning task is already bidirectionally linked to it.
+ share_spawner_taskgroup(true)
} else {
// Detached from the parent group; create a new (non-main) one.
- (arc::exclusive(some(dvec::from_elem(none))), false)
+ (arc::exclusive(some((dvec::dvec(),dvec::dvec()))),
+ // Allow the parent to unidirectionally fail the child?
+ if opts.parented {
+ // Use the spawner's own group as the child's parent group.
+ let (pg,_,_) = share_spawner_taskgroup(false); some(pg)
+ } else {
+ none
+ },
+ false)
};
unsafe {
- let child_data_ptr = ~mut some((child_tg, f));
+ let child_data_ptr = ~mut some((child_tg, parent_tg, f));
// Being killed with the unsafe task/closure pointers would leak them.
do unkillable {
// Agh. Get move-mode items into the closure. FIXME (#2829)
let mut child_data = none;
*child_data_ptr <-> child_data;
- let (child_tg, f) = option::unwrap(child_data);
+ let (child_tg, parent_tg, f) = option::unwrap(child_data);
// Create child task.
let new_task = alt opts.sched {
none { rustrt::new_task() }
// Getting killed after here would leak the task.
let child_wrapper =
- make_child_wrapper(new_task, child_tg,
- opts.supervise, is_main, f);
+ make_child_wrapper(new_task, child_tg, parent_tg, is_main,
+ opts.notify_chan, f);
let fptr = ptr::addr_of(child_wrapper);
let closure: *rust_closure = unsafe::reinterpret_cast(fptr);
- do option::iter(opts.notify_chan) |c| {
- // FIXME (#1087): Would like to do notification in Rust
- rustrt::rust_task_config_notify(new_task, c);
- }
-
// Getting killed between these two calls would free the child's
// closure. (Reordering them wouldn't help - then getting killed
// between them would leak.)
}
}
- fn make_child_wrapper(child_task: *rust_task, -child_tg: taskgroup_arc,
- supervise: bool, is_main: bool,
+ // This function returns a closure-wrapper that we pass to the child task.
+ // In brief, it does the following:
+ // if enlist_in_group(child_group) {
+ // if parent_group {
+ // if !enlist_in_group(parent_group) {
+ // leave_group(child_group); // Roll back
+ // ret; // Parent group failed. Don't run child's f().
+ // }
+ // }
+ // stash_taskgroup_data_in_TLS(child_group, parent_group);
+ // f();
+ // } else {
+ // // My group failed. Don't run child's f().
+ // }
+ fn make_child_wrapper(child: *rust_task, -child_tg: taskgroup_arc,
+ -parent_tg: option<taskgroup_arc>, is_main: bool,
+ notify_chan: option<comm::chan<notification>>,
-f: fn~()) -> fn~() {
- let child_tg_ptr = ~mut some(child_tg);
+ let child_tg_ptr = ~mut some((child_tg, parent_tg));
fn~() {
// Agh. Get move-mode items into the closure. FIXME (#2829)
- let mut child_tg_opt = none;
- *child_tg_ptr <-> child_tg_opt;
- let child_tg = option::unwrap(child_tg_opt);
+ let mut tg_data_opt = none;
+ *child_tg_ptr <-> tg_data_opt;
+ let (child_tg, parent_tg) = option::unwrap(tg_data_opt);
// Child task runs this code.
- if !supervise {
- // FIXME (#1868, #1789) take this out later
- rustrt::unsupervise();
- }
- // Set up membership in taskgroup. If this returns none, the
- // parent was already failing, so don't bother doing anything.
- alt enlist_in_taskgroup(child_tg, child_task) {
- some(my_index) {
- let group =
- @taskgroup(child_tg, child_task, my_index, is_main);
- unsafe { local_set(child_task, taskgroup_key(), group); }
- // Run the child's body.
- f();
- // TLS cleanup code will exit the taskgroup.
- }
- none {
+
+ // Even if the below code fails to kick the child off, we must
+ // send something on the notify channel.
+ let notifier = notify_chan.map(|c| auto_notify(c));
+
+ // Set up membership in taskgroup. If this returns none, some
+ // task was already failing, so don't bother doing anything.
+ alt enlist_in_taskgroup(child_tg, child) {
+ some(my_pos) {
+ // Enlist in parent group too. If enlist returns none, a
+ // parent was failing: don't spawn; leave this group too.
+ let (pg, enlist_ok) = if parent_tg.is_some() {
+ let parent_group = option::unwrap(parent_tg);
+ alt enlist_in_taskgroup(parent_group, child) {
+ some(my_p_index) {
+ // Successful enlist.
+ (some((parent_group, my_p_index)), true)
+ }
+ none {
+ // Couldn't enlist. Have to quit here too.
+ leave_taskgroup(child_tg, child, my_pos);
+ (none, false)
+ }
+ }
+ } else {
+ // No parent group to enlist in. No worry.
+ (none, true)
+ };
+ if enlist_ok {
+ let group = @taskgroup(child, child_tg, my_pos,
+ pg, is_main, notifier);
+ unsafe { local_set(child, taskgroup_key(), group); }
+ // Run the child's body.
+ f();
+ // TLS cleanup code will exit the taskgroup.
+ }
}
+ none { }
}
}
}
* types; arbitrary type coercion is possible this way. The interface is safe
* as long as all key functions are monomorphic.
*/
-type local_data_key<T> = fn@(+@T);
+type local_data_key<T: owned> = fn@(+@T);
iface local_data { }
-impl<T> of local_data for @T { }
+impl<T: owned> of local_data for @T { }
// We use dvec because it's the best data structure in core. If TLS is used
// heavily in future, this could be made more efficient with a proper map.
// Gets the map from the runtime. Lazily initialises if not done so already.
unsafe fn get_task_local_map(task: *rust_task) -> task_local_map {
+
// Relies on the runtime initialising the pointer to null.
// NOTE: The map's box lives in TLS invisibly referenced once. Each time
// we retrieve it for get/set, we make another reference, which get/set
}
}
-unsafe fn key_to_key_value<T>(key: local_data_key<T>) -> *libc::c_void {
+unsafe fn key_to_key_value<T: owned>(
+ key: local_data_key<T>) -> *libc::c_void {
+
// Keys are closures, which are (fnptr,envptr) pairs. Use fnptr.
// Use reinterpret_cast -- transmute would leak (forget) the closure.
let pair: (*libc::c_void, *libc::c_void) = unsafe::reinterpret_cast(key);
}
// If returning some(..), returns with @T with the map's reference. Careful!
-unsafe fn local_data_lookup<T>(map: task_local_map, key: local_data_key<T>)
- -> option<(uint, *libc::c_void)> {
+unsafe fn local_data_lookup<T: owned>(
+ map: task_local_map, key: local_data_key<T>)
+ -> option<(uint, *libc::c_void)> {
+
let key_value = key_to_key_value(key);
let map_pos = (*map).position(|entry|
alt entry { some((k,_,_)) { k == key_value } none { false } }
}
}
-unsafe fn local_get_helper<T>(task: *rust_task, key: local_data_key<T>,
- do_pop: bool) -> option<@T> {
+unsafe fn local_get_helper<T: owned>(
+ task: *rust_task, key: local_data_key<T>,
+ do_pop: bool) -> option<@T> {
+
let map = get_task_local_map(task);
// Interpret our findings from the map
do local_data_lookup(map, key).map |result| {
}
}
-unsafe fn local_pop<T>(task: *rust_task,
- key: local_data_key<T>) -> option<@T> {
+unsafe fn local_pop<T: owned>(
+ task: *rust_task,
+ key: local_data_key<T>) -> option<@T> {
+
local_get_helper(task, key, true)
}
-unsafe fn local_get<T>(task: *rust_task,
- key: local_data_key<T>) -> option<@T> {
+unsafe fn local_get<T: owned>(
+ task: *rust_task,
+ key: local_data_key<T>) -> option<@T> {
+
local_get_helper(task, key, false)
}
-unsafe fn local_set<T>(task: *rust_task, key: local_data_key<T>, +data: @T) {
+unsafe fn local_set<T: owned>(
+ task: *rust_task, key: local_data_key<T>, +data: @T) {
+
let map = get_task_local_map(task);
// Store key+data as *voids. Data is invisibly referenced once; key isn't.
let keyval = key_to_key_value(key);
}
}
-unsafe fn local_modify<T>(task: *rust_task, key: local_data_key<T>,
- modify_fn: fn(option<@T>) -> option<@T>) {
+unsafe fn local_modify<T: owned>(
+ task: *rust_task, key: local_data_key<T>,
+ modify_fn: fn(option<@T>) -> option<@T>) {
+
// Could be more efficient by doing the lookup work, but this is easy.
let newdata = modify_fn(local_pop(task, key));
if newdata.is_some() {
* Remove a task-local data value from the table, returning the
* reference that was originally created to insert it.
*/
-unsafe fn local_data_pop<T>(key: local_data_key<T>) -> option<@T> {
+unsafe fn local_data_pop<T: owned>(
+ key: local_data_key<T>) -> option<@T> {
+
local_pop(rustrt::rust_get_task(), key)
}
/**
* Retrieve a task-local data value. It will also be kept alive in the
* table until explicitly removed.
*/
-unsafe fn local_data_get<T>(key: local_data_key<T>) -> option<@T> {
+unsafe fn local_data_get<T: owned>(
+ key: local_data_key<T>) -> option<@T> {
+
local_get(rustrt::rust_get_task(), key)
}
/**
* Store a value in task-local data. If this key already has a value,
* that value is overwritten (and its destructor is run).
*/
-unsafe fn local_data_set<T>(key: local_data_key<T>, +data: @T) {
+unsafe fn local_data_set<T: owned>(
+ key: local_data_key<T>, +data: @T) {
+
local_set(rustrt::rust_get_task(), key, data)
}
/**
* Modify a task-local data value. If the function returns 'none', the
* data is removed (and its reference dropped).
*/
-unsafe fn local_data_modify<T>(key: local_data_key<T>,
- modify_fn: fn(option<@T>) -> option<@T>) {
+unsafe fn local_data_modify<T: owned>(
+ key: local_data_key<T>,
+ modify_fn: fn(option<@T>) -> option<@T>) {
+
local_modify(rustrt::rust_get_task(), key, modify_fn)
}
extern mod rustrt {
#[rust_stack]
- fn rust_task_yield(task: *rust_task, &killed: bool);
+ fn rust_task_yield(task: *rust_task) -> bool;
fn rust_get_sched_id() -> sched_id;
fn rust_new_sched(num_threads: libc::uintptr_t) -> sched_id;
fn new_task() -> *rust_task;
fn rust_new_task_in_sched(id: sched_id) -> *rust_task;
- fn rust_task_config_notify(
- task: *rust_task, &&chan: comm::chan<notification>);
-
fn start_task(task: *rust_task, closure: *rust_closure);
fn rust_task_is_unwinding(task: *rust_task) -> bool;
- fn unsupervise();
fn rust_osmain_sched_id() -> sched_id;
- fn rust_task_inhibit_kill();
- fn rust_task_allow_kill();
+ fn rust_task_inhibit_kill(t: *rust_task);
+ fn rust_task_allow_kill(t: *rust_task);
+ fn rust_task_inhibit_yield(t: *rust_task);
+ fn rust_task_allow_yield(t: *rust_task);
fn rust_task_kill_other(task: *rust_task);
fn rust_task_kill_all(task: *rust_task);
#[ignore(cfg(windows))]
fn test_spawn_raw_unsupervise() {
let opts = {
- supervise: false
+ linked: false
with default_task_opts()
};
do spawn_raw(opts) {
}
}
+#[test] #[should_fail] #[ignore(cfg(windows))]
+fn test_cant_dup_task_builder() {
+ let b = task().unlinked();
+ do b.spawn { }
+ // FIXME(#2585): For now, this is a -runtime- failure, because we haven't
+ // got modes on self. When 2585 is fixed, this test should fail to compile
+ // instead, and should go in tests/compile-fail.
+ do b.spawn { } // b should have been consumed by the previous call
+}
+
+// The following 8 tests cover the 2^3 combinations of
+// {un,}linked {un,}supervised failure propagation {up,down}wards.
+
+// !!! These tests are dangerous. If something is buggy, they will hang, !!!
+// !!! instead of exiting cleanly. This might wedge the buildbots. !!!
+
+#[test] #[ignore(cfg(windows))]
+fn test_spawn_unlinked_unsup_no_fail_down() { // grandchild sends on a port
+ let po = comm::port();
+ let ch = comm::chan(po);
+ do spawn_unlinked {
+ do spawn_unlinked {
+ // Give middle task a chance to fail-but-not-kill-us.
+ for iter::repeat(8192) { task::yield(); }
+ comm::send(ch, ()); // If killed first, grandparent hangs.
+ }
+ fail; // Shouldn't kill either (grand)parent or (grand)child.
+ }
+ comm::recv(po);
+}
+#[test] #[ignore(cfg(windows))]
+fn test_spawn_unlinked_unsup_no_fail_up() { // child unlinked fails
+ do spawn_unlinked { fail; }
+}
+#[test] #[ignore(cfg(windows))]
+fn test_spawn_unlinked_sup_no_fail_up() { // child unlinked fails
+ do spawn_supervised { fail; }
+ // Give child a chance to fail-but-not-kill-us.
+ for iter::repeat(8192) { task::yield(); }
+}
+#[test] #[should_fail] #[ignore(cfg(windows))]
+fn test_spawn_unlinked_sup_fail_down() {
+ do spawn_supervised { loop { task::yield(); } }
+ fail; // Shouldn't leave a child hanging around.
+}
+
+#[test] #[should_fail] #[ignore(cfg(windows))]
+fn test_spawn_linked_sup_fail_up() { // child fails; parent fails
+ let po = comm::port::<()>();
+ let _ch = comm::chan(po);
+ // Unidirectional "parenting" shouldn't override bidirectional linked.
+ // We have to cheat with opts - the interface doesn't support them because
+ // they don't make sense (redundant with task().supervised()).
+ let b0 = task();
+ let b1 = task_builder({
+ opts: { linked: true, parented: true with b0.opts },
+ can_not_copy: none,
+ with *b0
+ });
+ do b1.spawn { fail; }
+ comm::recv(po); // We should get punted awake
+}
+#[test] #[should_fail] #[ignore(cfg(windows))]
+fn test_spawn_linked_sup_fail_down() { // parent fails; child fails
+ // We have to cheat with opts - the interface doesn't support them because
+ // they don't make sense (redundant with task().supervised()).
+ let b0 = task();
+ let b1 = task_builder({
+ opts: { linked: true, parented: true with b0.opts },
+ can_not_copy: none,
+ with *b0
+ });
+ do b1.spawn { loop { task::yield(); } }
+ fail; // *both* mechanisms would be wrong if this didn't kill the child...
+}
+#[test] #[should_fail] #[ignore(cfg(windows))]
+fn test_spawn_linked_unsup_fail_up() { // child fails; parent fails
+ let po = comm::port::<()>();
+ let _ch = comm::chan(po);
+ // Default options are to spawn linked & unsupervised.
+ do spawn { fail; }
+ comm::recv(po); // We should get punted awake
+}
+#[test] #[should_fail] #[ignore(cfg(windows))]
+fn test_spawn_linked_unsup_fail_down() { // parent fails; child fails
+ // Default options are to spawn linked & unsupervised.
+ do spawn { loop { task::yield(); } }
+ fail;
+}
+#[test] #[should_fail] #[ignore(cfg(windows))]
+fn test_spawn_linked_unsup_default_opts() { // parent fails; child fails
+ // Make sure the above test is the same as this one.
+ do task().linked().spawn { loop { task::yield(); } }
+ fail;
+}
+
+// A couple bonus linked failure tests - testing for failure propagation even
+// when the middle task exits successfully early before kill signals are sent.
+
+#[test] #[should_fail] // #[ignore(cfg(windows))]
+#[ignore] // FIXME (#1868) (bblum) make this work
+fn test_spawn_failure_propagate_grandchild() {
+ // Middle task exits; does grandparent's failure propagate across the gap?
+ do spawn_supervised {
+ do spawn_supervised {
+ loop { task::yield(); }
+ }
+ }
+ for iter::repeat(8192) { task::yield(); }
+ fail;
+}
+
+#[test] #[should_fail] #[ignore(cfg(windows))]
+fn test_spawn_failure_propagate_secondborn() {
+ // First-born child exits; does parent's failure propagate to sibling?
+ do spawn_supervised {
+ do spawn { // linked
+ loop { task::yield(); }
+ }
+ }
+ for iter::repeat(8192) { task::yield(); }
+ fail;
+}
+
+#[test] #[should_fail] #[ignore(cfg(windows))]
+fn test_spawn_failure_propagate_nephew_or_niece() {
+ // Our sibling exits; does our failure propagate to sibling's child?
+ do spawn { // linked
+ do spawn_supervised {
+ loop { task::yield(); }
+ }
+ }
+ for iter::repeat(8192) { task::yield(); }
+ fail;
+}
+
+#[test] #[should_fail] #[ignore(cfg(windows))]
+fn test_spawn_linked_sup_propagate_sibling() {
+ // Middle sibling exits - does eldest's failure propagate to youngest?
+ do spawn { // linked
+ do spawn { // linked
+ loop { task::yield(); }
+ }
+ }
+ for iter::repeat(8192) { task::yield(); }
+ fail;
+}
+
#[test]
#[ignore(cfg(windows))]
fn test_spawn_raw_notify() {
assert comm::recv(notify_po) == exit(task_, success);
let opts = {
- supervise: false,
+ linked: false,
notify_chan: some(notify_ch)
with default_task_opts()
};
fn test_run_basic() {
let po = comm::port();
let ch = comm::chan(po);
- let buildr = builder();
- do run(buildr) {
+ do task().spawn {
comm::send(ch, ());
}
comm::recv(po);
fn test_add_wrapper() {
let po = comm::port();
let ch = comm::chan(po);
- let buildr = builder();
- do add_wrapper(buildr) |body| {
+ let b0 = task();
+ let b1 = do b0.add_wrapper |body| {
fn~() {
body();
comm::send(ch, ());
}
- }
- do run(buildr) { }
+ };
+ do b1.spawn { }
comm::recv(po);
}
#[test]
#[ignore(cfg(windows))]
fn test_future_result() {
- let buildr = builder();
- let result = future_result(buildr);
- do run(buildr) { }
- assert future::get(result) == success;
+ let mut result = none;
+ do task().future_result(|-r| { result = some(r); }).spawn { }
+ assert future::get(option::unwrap(result)) == success;
- let buildr = builder();
- let result = future_result(buildr);
- unsupervise(buildr);
- do run(buildr) { fail }
- assert future::get(result) == failure;
-}
-
-#[test]
-fn test_future_task() {
- let po = comm::port();
- let ch = comm::chan(po);
- let buildr = builder();
- let task1 = future_task(buildr);
- do run(buildr) { comm::send(ch, get_task()) }
- assert future::get(task1) == comm::recv(po);
+ result = none;
+ do task().future_result(|-r| { result = some(r); }).unlinked().spawn {
+ fail;
+ }
+ assert future::get(option::unwrap(result)) == failure;
}
#[test]
}
#[test]
-fn test_avoid_copying_the_body_run() {
+fn test_avoid_copying_the_body_task_spawn() {
do avoid_copying_the_body |f| {
- let buildr = builder();
- do run(buildr) {
+ do task().spawn {
f();
}
}
}
#[test]
-fn test_avoid_copying_the_body_run_listener() {
+fn test_avoid_copying_the_body_spawn_listener() {
do avoid_copying_the_body |f| {
- let buildr = builder();
- run_listener(buildr, fn~(move f, _po: comm::port<int>) {
+ task().spawn_listener(fn~(move f, _po: comm::port<int>) {
f();
});
}
}
#[test]
-fn test_avoid_copying_the_body_future_task() {
+fn test_avoid_copying_the_body_unlinked() {
do avoid_copying_the_body |f| {
- let buildr = builder();
- future_task(buildr);
- do run(buildr) {
- f();
- }
- }
-}
-
-#[test]
-fn test_avoid_copying_the_body_unsupervise() {
- do avoid_copying_the_body |f| {
- let buildr = builder();
- unsupervise(buildr);
- do run(buildr) {
+ do spawn_unlinked {
f();
}
}
#[test]
fn test_osmain() {
- let buildr = builder();
- set_sched_mode(buildr, osmain);
-
let po = comm::port();
let ch = comm::chan(po);
- do run(buildr) {
+ do task().sched_mode(osmain).spawn {
comm::send(ch, ());
}
comm::recv(po);
let ch = po.chan();
// We want to do this after failing
- do spawn_raw({ supervise: false with default_task_opts() }) {
+ do spawn_raw({ linked: false with default_task_opts() }) {
for iter::repeat(10u) { yield() }
ch.send(());
}
let ch = po.chan();
// We want to do this after failing
- do spawn_raw({ supervise: false with default_task_opts() }) {
+ do spawn_raw({ linked: false with default_task_opts() }) {
for iter::repeat(10u) { yield() }
ch.send(());
}
po.recv();
}
+#[test] #[should_fail] #[ignore(cfg(windows))]
+fn test_atomically() {
+ unsafe { do atomically { yield(); } }
+}
+
+#[test]
+fn test_atomically2() {
+ unsafe { do atomically { } } yield(); // shouldn't fail
+}
+
+#[test] #[should_fail] #[ignore(cfg(windows))]
+fn test_atomically_nested() {
+ unsafe { do atomically { do atomically { } yield(); } }
+}
+
+#[test]
+fn test_child_doesnt_ref_parent() {
+ // If the child refcounts the parent task, this will stack overflow when
+ // climbing the task tree to dereference each ancestor. (See #1789)
+ const generations: uint = 8192;
+ fn child_no(x: uint) -> fn~() {
+ ret || {
+ if x < generations {
+ task::spawn(child_no(x+1));
+ }
+ }
+ }
+ task::spawn(child_no(0));
+}
+
#[test]
fn test_tls_multitask() unsafe {
fn my_key(+_x: @~str) { }
//! Operations on tuples
+trait tuple_ops<T,U> {
+ pure fn first() -> T;
+ pure fn second() -> U;
+ pure fn swap() -> (U, T);
+}
-impl extensions <T:copy, U:copy> for (T, U) {
+impl extensions <T:copy, U:copy> of tuple_ops<T,U> for (T, U) {
/// Return the first element of self
pure fn first() -> T {
}
-impl extensions<A: copy, B: copy> for (&[A], &[B]) {
+trait extended_tuple_ops<A,B> {
+ fn zip() -> ~[(A, B)];
+ fn map<C>(f: fn(A, B) -> C) -> ~[C];
+}
+
+impl extensions<A: copy, B: copy> of extended_tuple_ops<A,B>
+ for (&[A], &[B]) {
+
fn zip() -> ~[(A, B)] {
let (a, b) = self;
vec::zip(a, b)
}
}
-impl extensions<A: copy, B: copy> for (~[A], ~[B]) {
+impl extensions<A: copy, B: copy> of extended_tuple_ops<A,B>
+ for (~[A], ~[B]) {
+
fn zip() -> ~[(A, B)] {
let (a, b) = self;
vec::zip(a, b)
export compl;
export to_str, to_str_bytes;
export from_str, from_str_radix, str, parse_buf;
-export num, ord, eq, times;
+export num, ord, eq, times, timesi;
const min_value: T = 0 as T;
const max_value: T = 0 as T - 1 as T;
#[inline(always)]
/// Iterate over the range [`lo`..`hi`)
-fn range(lo: T, hi: T, it: fn(T) -> bool) {
+pure fn range(lo: T, hi: T, it: fn(T) -> bool) {
let mut i = lo;
while i < hi {
if !it(i) { break }
}
}
+impl timesi of iter::timesi for T {
+ #[inline(always)]
+ /// Like `times`, but with an index, `eachi`-style.
+ fn timesi(it: fn(uint) -> bool) {
+ let slf = self as uint;
+ let mut i = 0u;
+ while i < slf {
+ if !it(i) { break }
+ i += 1u;
+ }
+ }
+}
+
/// Parse a string to an int
fn from_str(s: ~str) -> option<T> { parse_buf(str::bytes(s), 10u) }
*/
fn to_str(num: T, radix: uint) -> ~str {
do to_str_bytes(false, num, radix) |slice| {
- do vec::unpack_slice(slice) |p, len| {
+ do vec::as_buf(slice) |p, len| {
unsafe { str::unsafe::from_buf_len(p, len) }
}
}
// in-bounds, no extra cost.
unsafe {
- do vec::unpack_slice(buf) |p, len| {
+ do vec::as_buf(buf) |p, len| {
let mp = p as *mut u8;
let mut i = len;
let mut n = num;
export len;
export from_fn;
export from_elem;
+export build, build_sized;
export to_mut;
export from_mut;
export head;
export last;
export last_opt;
export slice;
-export view;
+export view, mut_view, const_view;
export split;
export splitn;
export rsplit;
export windowed;
export as_buf;
export as_mut_buf;
-export unpack_slice;
-export unpack_const_slice;
+export as_const_buf;
export unsafe;
export u8;
export extensions;
/// Returns true if a vector contains no elements
pure fn is_empty<T>(v: &[const T]) -> bool {
- unpack_const_slice(v, |_p, len| len == 0u)
+ as_const_buf(v, |_p, len| len == 0u)
}
/// Returns true if a vector contains some elements
pure fn is_not_empty<T>(v: &[const T]) -> bool {
- unpack_const_slice(v, |_p, len| len > 0u)
+ as_const_buf(v, |_p, len| len > 0u)
}
/// Returns true if two vectors have the same length
/// Returns the length of a vector
#[inline(always)]
pure fn len<T>(&&v: &[const T]) -> uint {
- unpack_const_slice(v, |_p, len| len)
+ as_const_buf(v, |_p, len| len)
}
/**
* Creates an immutable vector of size `n_elts` and initializes the elements
* to the value returned by the function `op`.
*/
-pure fn from_fn<T: copy>(n_elts: uint, op: init_op<T>) -> ~[T] {
+pure fn from_fn<T>(n_elts: uint, op: init_op<T>) -> ~[T] {
let mut v = ~[];
unchecked{reserve(v, n_elts);}
let mut i: uint = 0u;
ret v;
}
+/**
+ * Builds a vector by calling a provided function with an argument
+ * function that pushes an element to the back of a vector.
+ * This version takes an initial size for the vector.
+ *
+ * # Arguments
+ *
+ * * size - An initial size of the vector to reserve
+ * * builder - A function that will construct the vector. It receives
+ * as an argument a function that will push an element
+ * onto the vector being constructed.
+ */
+#[inline(always)]
+pure fn build_sized<A>(size: uint, builder: fn(push: pure fn(+A))) -> ~[A] {
+ let mut vec = ~[];
+ unsafe {
+ reserve(vec, size);
+ // This is an awful hack to be able to make the push function
+ // pure. Is there a better way?
+ ::unsafe::reinterpret_cast::
+ <fn(push: pure fn(+A)), fn(push: fn(+A))>
+ (builder)(|+x| push(vec, x));
+ }
+ ret vec;
+}
+
+/**
+ * Builds a vector by calling a provided function with an argument
+ * function that pushes an element to the back of a vector.
+ *
+ * # Arguments
+ *
+ * * builder - A function that will construct the vector. It receives
+ * as an argument a function that will push an element
+ * onto the vector being constructed.
+ */
+#[inline(always)]
+pure fn build<A>(builder: fn(push: pure fn(+A))) -> ~[A] {
+ build_sized(4, builder)
+}
+
/// Produces a mut vector from an immutable vector.
pure fn to_mut<T>(+v: ~[T]) -> ~[mut T] {
unsafe { ::unsafe::transmute(v) }
ret result;
}
-#[doc = "Return a slice that points into another slice."]
-pure fn view<T>(v: &a.[T], start: uint, end: uint) -> &a.[T] {
+/// Return a slice that points into another slice.
+pure fn view<T>(v: &[T], start: uint, end: uint) -> &[T] {
+ assert (start <= end);
+ assert (end <= len(v));
+ do as_buf(v) |p, _len| {
+ unsafe {
+ ::unsafe::reinterpret_cast(
+ (ptr::offset(p, start), (end - start) * sys::size_of::<T>()))
+ }
+ }
+}
+
+/// Return a slice that points into another slice.
+pure fn mut_view<T>(v: &[mut T], start: uint, end: uint) -> &[mut T] {
assert (start <= end);
assert (end <= len(v));
- do unpack_slice(v) |p, _len| {
+ do as_buf(v) |p, _len| {
+ unsafe {
+ ::unsafe::reinterpret_cast(
+ (ptr::offset(p, start), (end - start) * sys::size_of::<T>()))
+ }
+ }
+}
+
+/// Return a slice that points into another slice.
+pure fn const_view<T>(v: &[const T], start: uint, end: uint) -> &[const T] {
+ assert (start <= end);
+ assert (end <= len(v));
+ do as_buf(v) |p, _len| {
unsafe {
::unsafe::reinterpret_cast(
(ptr::offset(p, start), (end - start) * sys::size_of::<T>()))
}
fn consume<T>(+v: ~[T], f: fn(uint, +T)) unsafe {
- do unpack_slice(v) |p, ln| {
+ do as_buf(v) |p, ln| {
for uint::range(0, ln) |i| {
let x <- *ptr::offset(p, i);
f(i, x);
let repr: **unsafe::vec_repr = ::unsafe::reinterpret_cast(addr_of(v));
let fill = (**repr).fill;
if (**repr).alloc > fill {
- let sz = sys::size_of::<T>();
- (**repr).fill += sz;
- let p = ptr::addr_of((**repr).data);
- let p = ptr::offset(p, fill) as *mut T;
- rusti::move_val_init(*p, initval);
+ push_fast(v, initval);
}
else {
push_slow(v, initval);
}
}
+// This doesn't bother to make sure we have space.
+#[inline(always)] // really pretty please
+unsafe fn push_fast<T>(&v: ~[const T], +initval: T) {
+ let repr: **unsafe::vec_repr = ::unsafe::reinterpret_cast(addr_of(v));
+ let fill = (**repr).fill;
+ (**repr).fill += sys::size_of::<T>();
+ let p = ptr::addr_of((**repr).data);
+ let p = ptr::offset(p, fill) as *mut T;
+ rusti::move_val_init(*p, initval);
+}
+
+#[inline(never)]
fn push_slow<T>(&v: ~[const T], +initval: T) {
- unsafe {
- let ln = v.len();
- reserve_at_least(v, ln + 1u);
- let repr: **unsafe::vec_repr = ::unsafe::reinterpret_cast(addr_of(v));
- let fill = (**repr).fill;
- let sz = sys::size_of::<T>();
- (**repr).fill += sz;
- let p = ptr::addr_of((**repr).data);
- let p = ptr::offset(p, fill) as *mut T;
- rusti::move_val_init(*p, initval);
- }
+ reserve_at_least(v, v.len() + 1u);
+ unsafe { push_fast(v, initval) }
}
// Unchecked vector indexing
#[inline(always)]
unsafe fn ref<T: copy>(v: &[const T], i: uint) -> T {
- unpack_slice(v, |p, _len| *ptr::offset(p, i))
+ as_buf(v, |p, _len| *ptr::offset(p, i))
}
#[inline(always)]
-unsafe fn ref_set<T: copy>(v: &[mut T], i: uint, +val: T) {
+unsafe fn ref_set<T>(v: &[mut T], i: uint, +val: T) {
let mut box = some(val);
- do unpack_mut_slice(v) |p, _len| {
+ do as_mut_buf(v) |p, _len| {
let mut box2 = none;
box2 <-> box;
rusti::move_val_init(*ptr::mut_offset(p, i), option::unwrap(box2));
fn push_all_move<T>(&v: ~[const T], -rhs: ~[const T]) {
reserve(v, v.len() + rhs.len());
unsafe {
- do unpack_slice(rhs) |p, len| {
+ do as_buf(rhs) |p, len| {
for uint::range(0, len) |i| {
let x <- *ptr::offset(p, i);
push(v, x);
* of the vector, expands the vector by replicating `initval` to fill the
* intervening space.
*/
-#[inline(always)]
fn grow_set<T: copy>(&v: ~[mut T], index: uint, initval: T, val: T) {
if index >= len(v) { grow(v, index - len(v) + 1u, initval); }
v[index] = val;
}
-
// Functional utilities
/// Apply a function to each element of a vector and return the results
*/
#[inline(always)]
pure fn iter_between<T>(v: &[T], start: uint, end: uint, f: fn(T)) {
- do unpack_slice(v) |base_ptr, len| {
+ do as_buf(v) |base_ptr, len| {
assert start <= end;
assert end <= len;
unsafe {
* Return true to continue, false to break.
*/
#[inline(always)]
-pure fn each<T>(v: &[const T], f: fn(T) -> bool) {
- do vec::unpack_slice(v) |p, n| {
+pure fn each<T>(v: &[T], f: fn(T) -> bool) {
+ do vec::as_buf(v) |p, n| {
let mut n = n;
let mut p = p;
while n > 0u {
* Return true to continue, false to break.
*/
#[inline(always)]
-pure fn eachi<T>(v: &[const T], f: fn(uint, T) -> bool) {
- do vec::unpack_slice(v) |p, n| {
+pure fn eachi<T>(v: &[T], f: fn(uint, T) -> bool) {
+ do vec::as_buf(v) |p, n| {
let mut i = 0u;
let mut p = p;
while i < n {
*/
#[inline(always)]
pure fn reach<T>(v: &[T], blk: fn(T) -> bool) {
- do vec::unpack_slice(v) |p, n| {
+ do vec::as_buf(v) |p, n| {
let mut i = 1;
while i <= n {
unsafe {
*/
#[inline(always)]
pure fn reachi<T>(v: &[T], blk: fn(uint, T) -> bool) {
- do vec::unpack_slice(v) |p, n| {
+ do vec::as_buf(v) |p, n| {
let mut i = 1;
while i <= n {
unsafe {
* Allows for unsafe manipulation of vector contents, which is useful for
* foreign interop.
*/
-fn as_buf<E,T>(v: &[E], f: fn(*E) -> T) -> T {
- unpack_slice(v, |buf, _len| f(buf))
-}
-
-fn as_mut_buf<E,T>(v: &[mut E], f: fn(*mut E) -> T) -> T {
- unpack_mut_slice(v, |buf, _len| f(buf))
-}
-
-/// Work with the buffer and length of a slice.
#[inline(always)]
-pure fn unpack_slice<T,U>(s: &[const T],
- f: fn(*T, uint) -> U) -> U {
+pure fn as_buf<T,U>(s: &[const T],
+ f: fn(*T, uint) -> U) -> U {
unsafe {
let v : *(*T,uint) = ::unsafe::reinterpret_cast(ptr::addr_of(s));
let (buf,len) = *v;
}
}
-/// Work with the buffer and length of a slice.
+/// Similar to `as_buf` but passing a `*const T`
#[inline(always)]
-pure fn unpack_const_slice<T,U>(s: &[const T],
- f: fn(*const T, uint) -> U) -> U {
- unsafe {
- let v : *(*const T,uint) =
- ::unsafe::reinterpret_cast(ptr::addr_of(s));
- let (buf,len) = *v;
- f(buf, len / sys::size_of::<T>())
+pure fn as_const_buf<T,U>(s: &[const T],
+ f: fn(*const T, uint) -> U) -> U {
+ do as_buf(s) |p, len| {
+ unsafe {
+ let pp : *const T = ::unsafe::reinterpret_cast(p);
+ f(pp, len)
+ }
}
}
-/// Work with the buffer and length of a slice.
+/// Similar to `as_buf` but passing a `*mut T`
#[inline(always)]
-pure fn unpack_mut_slice<T,U>(s: &[mut T],
- f: fn(*mut T, uint) -> U) -> U {
- unsafe {
- let v : *(*const T,uint) =
- ::unsafe::reinterpret_cast(ptr::addr_of(s));
- let (buf,len) = *v;
- f(buf, len / sys::size_of::<T>())
+pure fn as_mut_buf<T,U>(s: &[mut T],
+ f: fn(*mut T, uint) -> U) -> U {
+ do as_buf(s) |p, len| {
+ unsafe {
+ let pp : *mut T = ::unsafe::reinterpret_cast(p);
+ f(pp, len)
+ }
}
}
pure fn tail() -> ~[T] { tail(self) }
}
-trait immutable_vector/&<T> {
+trait immutable_vector<T> {
pure fn foldr<U: copy>(z: U, p: fn(T, U) -> U) -> U;
pure fn iter(f: fn(T));
pure fn iteri(f: fn(uint, T));
pure fn rposition_elem(x: T) -> option<uint>;
pure fn map<U>(f: fn(T) -> U) -> ~[U];
pure fn mapi<U>(f: fn(uint, T) -> U) -> ~[U];
- fn map_r<U>(f: fn(x: &self.T) -> U) -> ~[U];
+ fn map_r<U>(f: fn(x: &T) -> U) -> ~[U];
pure fn alli(f: fn(uint, T) -> bool) -> bool;
pure fn flat_map<U>(f: fn(T) -> ~[U]) -> ~[U];
pure fn filter_map<U: copy>(f: fn(T) -> option<U>) -> ~[U];
}
#[inline]
- fn map_r<U>(f: fn(x: &self.T) -> U) -> ~[U] {
+ fn map_r<U>(f: fn(x: &T) -> U) -> ~[U] {
let mut r = ~[];
let mut i = 0;
while i < self.len() {
#[inline(always)]
unsafe fn form_slice<T,U>(p: *T, len: uint, f: fn(&& &[T]) -> U) -> U {
let pair = (p, len * sys::size_of::<T>());
- let v : *(&blk.[T]) =
+ let v : *(&blk/[T]) =
::unsafe::reinterpret_cast(ptr::addr_of(pair));
f(*v)
}
+
+ /**
+ * Copies data from one vector to another.
+ *
+ * Copies `count` bytes from `src` to `dst`. The source and destination
+ * may not overlap.
+ */
+ unsafe fn memcpy<T>(dst: &[mut T], src: &[const T], count: uint) {
+ do as_buf(dst) |p_dst, _len_dst| {
+ do as_buf(src) |p_src, _len_src| {
+ ptr::memcpy(p_dst, p_src, count)
+ }
+ }
+ }
+
+ /**
+ * Copies data from one vector to another.
+ *
+ * Copies `count` bytes from `src` to `dst`. The source and destination
+ * may overlap.
+ */
+ unsafe fn memmove<T>(dst: &[mut T], src: &[const T], count: uint) {
+ do as_buf(dst) |p_dst, _len_dst| {
+ do as_buf(src) |p_src, _len_src| {
+ ptr::memmove(p_dst, p_src, count)
+ }
+ }
+ }
}
/// Operations on `[u8]`
export cmp;
export lt, le, eq, ne, ge, gt;
export hash;
+ export memcpy, memmove;
/// Bytewise string comparison
pure fn cmp(&&a: ~[u8], &&b: ~[u8]) -> int {
/// Bytewise greater than
pure fn gt(&&a: ~[u8], &&b: ~[u8]) -> bool { cmp(a, b) > 0 }
- /// String hash function
+ /// Byte-vec hash function
fn hash(&&s: ~[u8]) -> uint {
- /* Seems to have been tragically copy/pasted from str.rs,
- or vice versa. But I couldn't figure out how to abstract
- it out. -- tjc */
+ hash::hash_bytes(s) as uint
+ }
- let mut u: uint = 5381u;
- vec::iter(s, |c| {u *= 33u; u += c as uint;});
- ret u;
+ /**
+ * Copies data from one vector to another.
+ *
+ * Copies `count` bytes from `src` to `dst`. The source and destination
+ * may not overlap.
+ */
+ fn memcpy(dst: &[mut u8], src: &[const u8], count: uint) {
+ assert dst.len() >= count;
+ assert src.len() >= count;
+
+ unsafe { vec::unsafe::memcpy(dst, src, count) }
+ }
+
+ /**
+ * Copies data from one vector to another.
+ *
+ * Copies `count` bytes from `src` to `dst`. The source and destination
+ * may overlap.
+ */
+ fn memmove(dst: &[mut u8], src: &[const u8], count: uint) {
+ assert dst.len() >= count;
+ assert src.len() >= count;
+
+ unsafe { vec::unsafe::memmove(dst, src, count) }
}
}
//
// This cannot be used with iter-trait.rs because of the region pointer
// required in the slice.
-impl extensions/&<A> of iter::base_iter<A> for &[const A] {
+
+impl extensions/&<A> of iter::base_iter<A> for &[A] {
fn each(blk: fn(A) -> bool) { each(self, blk) }
fn size_hint() -> option<uint> { some(len(self)) }
}
-impl extensions/&<A> of iter::extended_iter<A> for &[const A] {
+impl extensions/&<A> of iter::extended_iter<A> for &[A] {
fn eachi(blk: fn(uint, A) -> bool) { iter::eachi(self, blk) }
fn all(blk: fn(A) -> bool) -> bool { iter::all(self, blk) }
fn any(blk: fn(A) -> bool) -> bool { iter::any(self, blk) }
fn max() -> A;
}
-impl extensions/&<A:copy> of iter_trait_extensions<A> for &[const A] {
+impl extensions/&<A:copy> of iter_trait_extensions<A> for &[A] {
fn filter_to_vec(pred: fn(A) -> bool) -> ~[A] {
iter::filter_to_vec(self, pred)
}
export to_vec;
export to_str;
export eq_vec;
+export methods;
// FIXME (#2341): With recursive object types, we could implement binary
// methods like union, intersection, and difference. At that point, we could
// for the case where nbits <= 32.
/// The bitvector type
-type bitv = @{storage: ~[mut uint], nbits: uint};
+type bitv = {storage: ~[mut uint], nbits: uint};
-const uint_bits: uint = 32u + (1u << 32u >> 27u);
+#[cfg(target_arch="x86")]
+const uint_bits: uint = 32;
+#[cfg(target_arch="x86_64")]
+const uint_bits: uint = 64;
/**
* Constructs a bitvector
fn bitv(nbits: uint, init: bool) -> bitv {
let elt = if init { !0u } else { 0u };
let storage = vec::to_mut(vec::from_elem(nbits / uint_bits + 1u, elt));
- ret @{storage: storage, nbits: nbits};
+ ret {storage: storage, nbits: nbits};
}
fn process(v0: bitv, v1: bitv, op: fn(uint, uint) -> uint) -> bool {
}
-fn lor(w0: uint, w1: uint) -> uint { ret w0 | w1; }
-
+/**
+ * Calculates the union of two bitvectors
+ *
+ * Sets `v0` to the union of `v0` and `v1`. Both bitvectors must be the
+ * same length. Returns 'true' if `v0` was changed.
+ */
fn union(v0: bitv, v1: bitv) -> bool {
- let sub = lor; ret process(v0, v1, sub);
+ process(v0, v1, |a, b| a | b)
}
-fn land(w0: uint, w1: uint) -> uint { ret w0 & w1; }
-
/**
* Calculates the intersection of two bitvectors
*
* same length. Returns 'true' if `v0` was changed.
*/
fn intersect(v0: bitv, v1: bitv) -> bool {
- let sub = land;
- ret process(v0, v1, sub);
+ process(v0, v1, |a, b| a & b)
}
fn right(_w0: uint, w1: uint) -> uint { ret w1; }
/// Makes a copy of a bitvector
fn clone(v: bitv) -> bitv {
- let storage = vec::to_mut(vec::from_elem(v.nbits / uint_bits + 1u, 0u));
- let len = vec::len(v.storage);
- for uint::range(0u, len) |i| { storage[i] = v.storage[i]; };
- ret @{storage: storage, nbits: v.nbits};
+ copy v
}
/// Retrieve the value at index `i`
ret true;
}
-fn init_to_vec(v: bitv, i: uint) -> uint {
- ret if get(v, i) { 1u } else { 0u };
-}
-
/**
* Converts the bitvector to a vector of uint with the same length.
*
* Each uint in the resulting vector has either value 0u or 1u.
*/
fn to_vec(v: bitv) -> ~[uint] {
- let sub = |x| init_to_vec(v, x);
- ret vec::from_fn::<uint>(v.nbits, sub);
+ vec::from_fn::<uint>(v.nbits, |i| if get(v, i) { 1 } else { 0 })
}
#[inline(always)]
ret true;
}
+trait methods {
+ fn union(rhs: bitv) -> bool;
+ fn intersect(rhs: bitv) -> bool;
+ fn assign(rhs: bitv) -> bool;
+ fn get(i: uint) -> bool;
+ fn [](i: uint) -> bool;
+ fn eq(rhs: bitv) -> bool;
+ fn clear();
+ fn set_all();
+ fn invert();
+ fn difference(rhs: bitv) -> bool;
+ fn set(i: uint, x: bool);
+ fn is_true() -> bool;
+ fn is_false() -> bool;
+ fn to_vec() -> ~[uint];
+ fn each(f: fn(bool) -> bool);
+ fn each_storage(f: fn(&uint) -> bool);
+ fn eq_vec(v: ~[uint]) -> bool;
+
+ fn ones(f: fn(uint) -> bool);
+}
+
+impl of methods for bitv {
+ fn union(rhs: bitv) -> bool { union(self, rhs) }
+ fn intersect(rhs: bitv) -> bool { intersect(self, rhs) }
+ fn assign(rhs: bitv) -> bool { assign(self, rhs) }
+ fn get(i: uint) -> bool { get(self, i) }
+ fn [](i: uint) -> bool { self.get(i) }
+ fn eq(rhs: bitv) -> bool { equal(self, rhs) }
+ fn clear() { clear(self) }
+ fn set_all() { set_all(self) }
+ fn invert() { invert(self) }
+ fn difference(rhs: bitv) -> bool { difference(self, rhs) }
+ fn set(i: uint, x: bool) { set(self, i, x) }
+ fn is_true() -> bool { is_true(self) }
+ fn is_false() -> bool { is_false(self) }
+ fn to_vec() -> ~[uint] { to_vec(self) }
+ fn each(f: fn(bool) -> bool) { each(self, f) }
+ fn each_storage(f: fn(&uint) -> bool) { each_storage(self, f) }
+ fn eq_vec(v: ~[uint]) -> bool { eq_vec(self, v) }
+
+ fn ones(f: fn(uint) -> bool) {
+ for uint::range(0, self.nbits) |i| {
+ if self.get(i) {
+ if !f(i) { break }
+ }
+ }
+ }
+}
+
+impl of to_str::to_str for bitv {
+ fn to_str() -> ~str { to_str(self) }
+}
+
#[cfg(test)]
mod tests {
#[test]
type eqfn<T> = fn@(T, T) -> bool;
- fn test_parameterized<T: copy>(e: eqfn<T>, a: T, b: T, c: T, d: T) {
+ fn test_parameterized<T: copy owned>(
+ e: eqfn<T>, a: T, b: T, c: T, d: T) {
+
let deq: deque::t<T> = deque::create::<T>();
assert (deq.size() == 0u);
deq.add_front(a);
//! A map type
import chained::hashmap;
+import io::writer_util;
+import to_str::to_str;
export hashmap, hashfn, eqfn, set, map, chained, str_hash;
export box_str_hash;
export bytes_hash, int_hash, uint_hash, set_add;
hasher: hashfn<K>,
eqer: eqfn<K>
};
+ type t<K, V> = @hashmap_<K, V>;
enum hashmap_<K, V> {
hashmap_(@hashmap__<K, V>)
found_after(@entry<K,V>, @entry<K,V>)
}
- impl private_methods<K, V: copy> for t<K, V> {
+ impl private_methods<K, V: copy> for hashmap_<K, V> {
fn search_rem(k: K, h: uint, idx: uint,
e_root: @entry<K,V>) -> search_result<K,V> {
let mut e0 = e_root;
fn each_value(blk: fn(V) -> bool) { self.each(|_k, v| blk(v)) }
}
+ impl hashmap<K: to_str, V: to_str copy> of to_str for hashmap_<K, V> {
+ fn to_writer(wr: io::writer) {
+ if self.count == 0u {
+ wr.write_str("{}");
+ ret;
+ }
+
+ wr.write_str("{ ");
+ let mut first = true;
+ for self.each_entry |entry| {
+ if !first {
+ wr.write_str(", ");
+ }
+ first = false;
+ wr.write_str(entry.key.to_str());
+ wr.write_str(": ");
+ wr.write_str((copy entry.value).to_str());
+ };
+ wr.write_str(" }");
+ }
+
+ fn to_str() -> ~str {
+ do io::with_str_writer |wr| { self.to_writer(wr) }
+ }
+ }
+
+
fn chains<K,V>(nchains: uint) -> ~[mut chain<K,V>] {
ret vec::to_mut(vec::from_elem(nchains, absent));
}
import ip = net_ip;
export ip;
+
+import url = net_url;
+export url;
\ No newline at end of file
fn get_addr(++node: ~str, iotask: iotask)
-> result::result<~[ip_addr], ip_get_addr_err> unsafe {
do comm::listen |output_ch| {
- do str::unpack_slice(node) |node_ptr, len| {
+ do str::as_buf(node) |node_ptr, len| {
log(debug, #fmt("slice len %?", len));
let handle = create_uv_getaddrinfo_t();
let handle_ptr = ptr::addr_of(handle);
import result::*;
import libc::size_t;
import str::extensions;
-import io::{reader, writer};
+import io::{reader, reader_util, writer};
// tcp interfaces
export tcp_socket;
/// Implementation of `io::reader` iface for a buffered `net::tcp::tcp_socket`
impl tcp_socket_buf of io::reader for @tcp_socket_buf {
- fn read_bytes(amt: uint) -> ~[u8] {
- let has_amt_available =
- vec::len((*(self.data)).buf) >= amt;
- if has_amt_available {
- // no arbitrary-length shift in vec::?
- let mut ret_buf = ~[];
- while vec::len(ret_buf) < amt {
- ret_buf += ~[vec::shift((*(self.data)).buf)];
- }
- ret_buf
- }
- else {
- let read_result = read((*(self.data)).sock, 0u);
+ fn read(buf: &[mut u8], len: uint) -> uint {
+ // Loop until our buffer has enough data in it for us to read from.
+ while self.data.buf.len() < len {
+ let read_result = read(self.data.sock, 0u);
if read_result.is_err() {
let err_data = read_result.get_err();
- log(debug, #fmt("ERROR sock_buf as io::reader.read err %? %?",
- err_data.err_name, err_data.err_msg));
- ~[]
+
+ if err_data.err_name == ~"EOF" {
+ break;
+ } else {
+ #debug("ERROR sock_buf as io::reader.read err %? %?",
+ err_data.err_name, err_data.err_msg);
+
+ ret 0;
+ }
}
else {
- let new_chunk = result::unwrap(read_result);
- (*(self.data)).buf += new_chunk;
- self.read_bytes(amt)
+ vec::push_all(self.data.buf, result::unwrap(read_result));
}
}
+
+ let count = uint::min(len, self.data.buf.len());
+
+ let mut data = ~[];
+ self.data.buf <-> data;
+
+ vec::u8::memcpy(buf, vec::view(data, 0, data.len()), count);
+
+ vec::push_all(self.data.buf, vec::view(data, count, data.len()));
+
+ count
}
fn read_byte() -> int {
- self.read_bytes(1u)[0] as int
+ let bytes = ~[0];
+ if self.read(bytes, 1u) == 0 { fail } else { bytes[0] as int }
}
fn unread_byte(amt: int) {
vec::unshift((*(self.data)).buf, amt as u8);
--- /dev/null
+//! Types/fns concerning URLs (see RFC 3986)
+
+import map;
+import map::*;
+
+export url, userinfo, query, from_str, to_str;
+
+type url = {
+ scheme: ~str,
+ user: option<userinfo>,
+ host: ~str,
+ path: ~str,
+ query: query,
+ fragment: option<~str>
+};
+
+type userinfo = {
+ user: ~str,
+ pass: option<~str>
+};
+
+type query = map::hashmap<~str, ~str>;
+
+fn url(-scheme: ~str, -user: option<userinfo>, -host: ~str,
+ -path: ~str, -query: query, -fragment: option<~str>) -> url {
+ { scheme: scheme, user: user, host: host,
+ path: path, query: query, fragment: fragment }
+}
+
+fn userinfo(-user: ~str, -pass: option<~str>) -> userinfo {
+ {user: user, pass: pass}
+}
+
+fn split_char_first(s: ~str, c: char) -> (~str, ~str) {
+ let mut v = str::splitn_char(s, c, 1);
+ if v.len() == 1 {
+ ret (s, ~"");
+ } else {
+ ret (vec::shift(v), vec::pop(v));
+ }
+}
+
+fn userinfo_from_str(uinfo: ~str) -> userinfo {
+ let (user, p) = split_char_first(uinfo, ':');
+ let pass = if str::len(p) == 0 {
+ option::none
+ } else {
+ option::some(p)
+ };
+ ret userinfo(user, pass);
+}
+
+fn userinfo_to_str(-userinfo: userinfo) -> ~str {
+ if option::is_some(userinfo.pass) {
+ ret str::concat(~[copy userinfo.user, ~":",
+ option::unwrap(copy userinfo.pass),
+ ~"@"]);
+ } else {
+ ret str::concat(~[copy userinfo.user, ~"@"]);
+ }
+}
+
+fn query_from_str(rawquery: ~str) -> query {
+ let query: query = map::str_hash();
+ if str::len(rawquery) != 0 {
+ for str::split_char(rawquery, '&').each |p| {
+ let (k, v) = split_char_first(p, '=');
+ query.insert(k, v);
+ };
+ }
+ ret query;
+}
+
+fn query_to_str(query: query) -> ~str {
+ let mut strvec = ~[];
+ for query.each |k, v| {
+ strvec += ~[#fmt("%s=%s", k, v)];
+ };
+ ret str::connect(strvec, ~"&");
+}
+
+fn get_scheme(rawurl: ~str) -> option::option<(~str, ~str)> {
+ for str::each_chari(rawurl) |i,c| {
+ if char::is_alphabetic(c) {
+ again;
+ } else if c == ':' && i != 0 {
+ ret option::some((rawurl.slice(0,i),
+ rawurl.slice(i+3,str::len(rawurl))));
+ } else {
+ ret option::none;
+ }
+ };
+ ret option::none;
+}
+
+/**
+ * Parse a `str` to a `url`
+ *
+ * # Arguments
+ *
+ * `rawurl` - a string representing a full url, including scheme.
+ *
+ * # Returns
+ *
+ * a `url` that contains the parsed representation of the url.
+ *
+ */
+
+fn from_str(rawurl: ~str) -> result::result<url, ~str> {
+ let mut schm = get_scheme(rawurl);
+ if option::is_none(schm) {
+ ret result::err(~"invalid scheme");
+ }
+ let (scheme, rest) = option::unwrap(schm);
+ let (u, rest) = split_char_first(rest, '@');
+ let user = if str::len(rest) == 0 {
+ option::none
+ } else {
+ option::some(userinfo_from_str(u))
+ };
+ let rest = if str::len(rest) == 0 {
+ u
+ } else {
+ rest
+ };
+ let (rest, frag) = split_char_first(rest, '#');
+ let fragment = if str::len(frag) == 0 {
+ option::none
+ } else {
+ option::some(frag)
+ };
+ let (rest, query) = split_char_first(rest, '?');
+ let query = query_from_str(query);
+ let (host, pth) = split_char_first(rest, '/');
+ let mut path = pth;
+ if str::len(path) != 0 {
+ str::unshift_char(path, '/');
+ }
+
+ ret result::ok(url(scheme, user, host, path, query, fragment));
+}
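The `from_str` above peels off components in a fixed order: scheme, optional userinfo before `@`, then fragment at `#`, query at `?`, and finally host/path at the first `/`. As a hedged illustration in modern Rust, with `split_first` standing in for `split_char_first` (all names are illustrative, and the `"//"` skip assumes a hierarchical URL):

```rust
// Split at the first occurrence of c; if absent, the whole string is
// the first component -- the same contract as split_char_first above.
fn split_first(s: &str, c: char) -> (&str, &str) {
    match s.find(c) {
        Some(i) => (&s[..i], &s[i + c.len_utf8()..]),
        None => (s, ""),
    }
}

fn main() {
    let raw = "http://user:pass@rust-lang.org/doc?s=v#something";
    let (scheme, rest) = split_first(raw, ':');
    let rest = &rest[2..]; // skip the "//" after "scheme:"
    let (user, rest) = split_first(rest, '@');
    let (rest, frag) = split_first(rest, '#');
    let (rest, query) = split_first(rest, '?');
    let (host, path) = split_first(rest, '/');
    assert_eq!(scheme, "http");
    assert_eq!(user, "user:pass");
    assert_eq!(host, "rust-lang.org");
    assert_eq!((path, query, frag), ("doc", "s=v", "something"));
}
```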
+
+/**
+ * Format a `url` as a string
+ *
+ * # Arguments
+ *
+ * `url` - a url.
+ *
+ * # Returns
+ *
+ * a `str` that contains the formatted url. Note that this will usually
+ * be an inverse of `from_str` but might strip out unneeded separators.
+ * For example, "http://somehost.com?", when parsed and formatted, will
+ * result in just "http://somehost.com".
+ *
+ */
+fn to_str(url: url) -> ~str {
+ let user = if option::is_some(url.user) {
+ userinfo_to_str(option::unwrap(copy url.user))
+ } else {
+ ~""
+ };
+ let query = if url.query.size() == 0 {
+ ~""
+ } else {
+ str::concat(~[~"?", query_to_str(url.query)])
+ };
+ let fragment = if option::is_some(url.fragment) {
+ str::concat(~[~"#", option::unwrap(copy url.fragment)])
+ } else {
+ ~""
+ };
+
+ ret str::concat(~[copy url.scheme,
+ ~"://",
+ user,
+ copy url.host,
+ copy url.path,
+ query,
+ fragment]);
+}
+
+#[cfg(test)]
+mod tests {
+ #[test]
+ fn test_full_url_parse_and_format() {
+ let url = ~"http://user:pass@rust-lang.org/doc?s=v#something";
+ assert to_str(result::unwrap(from_str(url))) == url;
+ }
+
+ #[test]
+ fn test_userless_url_parse_and_format() {
+ let url = ~"http://rust-lang.org/doc?s=v#something";
+ assert to_str(result::unwrap(from_str(url))) == url;
+ }
+
+ #[test]
+ fn test_queryless_url_parse_and_format() {
+ let url = ~"http://user:pass@rust-lang.org/doc#something";
+ assert to_str(result::unwrap(from_str(url))) == url;
+ }
+
+ #[test]
+ fn test_empty_query_url_parse_and_format() {
+ let url = ~"http://user:pass@rust-lang.org/doc?#something";
+ let should_be = ~"http://user:pass@rust-lang.org/doc#something";
+ assert to_str(result::unwrap(from_str(url))) == should_be;
+ }
+
+ #[test]
+ fn test_fragmentless_url_parse_and_format() {
+ let url = ~"http://user:pass@rust-lang.org/doc?q=v";
+ assert to_str(result::unwrap(from_str(url))) == url;
+ }
+
+ #[test]
+ fn test_minimal_url_parse_and_format() {
+ let url = ~"http://rust-lang.org/doc";
+ assert to_str(result::unwrap(from_str(url))) == url;
+ }
+
+ #[test]
+ fn test_scheme_host_only_url_parse_and_format() {
+ let url = ~"http://rust-lang.org";
+ assert to_str(result::unwrap(from_str(url))) == url;
+ }
+
+ #[test]
+ fn test_pathless_url_parse_and_format() {
+ let url = ~"http://user:pass@rust-lang.org?q=v#something";
+ assert to_str(result::unwrap(from_str(url))) == url;
+ }
+
+ #[test]
+ fn test_scheme_host_fragment_only_url_parse_and_format() {
+ let url = ~"http://rust-lang.org#something";
+ assert to_str(result::unwrap(from_str(url))) == url;
+ }
+
+}
\ No newline at end of file
while base < len {
let end = uint::min(len, base + items_per_task);
// FIXME: why is the ::<A, ()> annotation required here? (#2617)
- do vec::unpack_slice::<A, ()>(xs) |p, _len| {
+ do vec::as_buf::<A, ()>(xs) |p, _len| {
let f = f();
let f = do future_spawn() |copy base| {
unsafe {
/// Create a smallintmap
fn mk<T: copy>() -> smallintmap<T> {
- ret smallintmap_(@{v: dvec()});
+ let v = dvec();
+ ret smallintmap_(@{v: v});
}
/**
*/
#[inline(always)]
fn insert<T: copy>(self: smallintmap<T>, key: uint, val: T) {
self.v.grow_set_elt(key, none, some(val));
}
use core(vers = "0.3");
import core::*;
-export net, net_tcp, net_ip;
+export net, net_tcp, net_ip, net_url;
export uv, uv_ll, uv_iotask, uv_global_loop;
export c_vec, util, timer;
export bitv, deque, fun_treemap, list, map, smallintmap, sort, treemap;
mod net;
mod net_ip;
mod net_tcp;
+mod net_url;
// libuv modules
mod uv;
import result::{ok, err};
import io::writer_util;
import libc::size_t;
+import task::task_builder;
export test_name;
export test_fn;
do task::spawn {
let testfn = copy test.fn;
- let mut builder = task::builder();
- let result_future = task::future_result(builder);
- task::unsupervise(builder);
- task::run(builder, testfn);
- let task_result = future::get(result_future);
+        let mut result_future = none;
+ task::task().unlinked().future_result(|-r| {
+ result_future = some(r);
+ }).spawn(testfn);
+ let task_result = future::get(option::unwrap(result_future));
let test_result = calc_result(test, task_result == task::success);
comm::send(monitor_ch, (copy test, test_result));
};
import iotask::{iotask, spawn_iotask};
import priv::{chan_from_global_ptr, weaken_task};
import comm::{port, chan, methods, select2, listen};
+import task::task_builder;
import either::{left, right};
extern mod rustrt {
monitor_loop_chan_ptr);
let builder_fn = || {
- let builder = task::builder();
- task::unsupervise(builder);
- task::set_sched_mode(builder, task::single_threaded);
- builder
+ task::task().sched_mode(task::single_threaded).unlinked()
};
#debug("before priv::chan_from_global_ptr");
}
fn spawn_loop() -> iotask unsafe {
- let builder = task::builder();
- do task::add_wrapper(builder) |task_body| {
+ let builder = do task::task().add_wrapper |task_body| {
fn~(move task_body) {
// The I/O loop task also needs to be weak so it doesn't keep
// the runtime alive
#debug("global libuv task is leaving weakened state");
}
}
- }
+ };
spawn_iotask(builder)
}
import libc::c_void;
import ptr::addr_of;
import comm::{port, chan, methods, listen};
+import task::task_builder;
import ll = uv_ll;
/// Used to abstract-away direct interaction with a libuv loop.
})
}
-fn spawn_iotask(-builder: task::builder) -> iotask {
-
- task::set_sched_mode(builder, task::single_threaded);
+fn spawn_iotask(-task: task::task_builder) -> iotask {
do listen |iotask_ch| {
- do task::run(copy(builder)) {
+ do task.sched_mode(task::single_threaded).spawn {
#debug("entering libuv task");
run_loop(iotask_ch);
#debug("libuv task exiting");
// ipv4 addr max size: 15 + 1 trailing null byte
let dst: ~[u8] = ~[0u8,0u8,0u8,0u8,0u8,0u8,0u8,0u8,
0u8,0u8,0u8,0u8,0u8,0u8,0u8,0u8];
- let size = 16 as libc::size_t;
- do vec::as_buf(dst) |dst_buf| {
+ do vec::as_buf(dst) |dst_buf, size| {
rustrt::rust_uv_ip4_name(src as *sockaddr_in,
- dst_buf, size);
+ dst_buf, size as libc::size_t);
// seems that checking the result of uv_ip4_name
// doesn't work too well..
// you're stuck looking at the value of dst_buf
0u8,0u8,0u8,0u8,0u8,0u8,0u8,0u8,
0u8,0u8,0u8,0u8,0u8,0u8,0u8,0u8,
0u8,0u8,0u8,0u8,0u8,0u8];
- let size = 46 as libc::size_t;
- do vec::as_buf(dst) |dst_buf| {
+ do vec::as_buf(dst) |dst_buf, size| {
let src_unsafe_ptr = src as *sockaddr_in6;
log(debug, #fmt("val of src *sockaddr_in6: %? sockaddr_in6: %?",
src_unsafe_ptr, src));
let result = rustrt::rust_uv_ip6_name(src_unsafe_ptr,
- dst_buf, size);
+ dst_buf, size as libc::size_t);
alt result {
0i32 {
str::unsafe::from_buf(dst_buf)
/* id for the alloc() call */ node_id,
/* value */ @expr),
- /* just an assert, no significance to typestate */
+ /* just an assert */
expr_assert(@expr),
expr_mac(mac),
+
+ // A struct literal expression.
+ //
+ // XXX: Add functional record update.
+ expr_struct(@path, ~[field])
}
#[auto_serialize]
enum matcher_ {
/* match one token */
mtc_tok(token::token),
- /* match repetitions of a sequence: body, separator, zero ok? : */
- mtc_rep(~[matcher], option<token::token>, bool),
+ /* match repetitions of a sequence: body, separator, zero ok?,
+ lo, hi position-in-match-array used: */
+ mtc_rep(~[matcher], option<token::token>, bool, uint, uint),
/* parse a Rust NT: name to bind, name of NT, position in match array : */
mtc_bb(ident, ident, uint)
}
~[@trait_ref], /* traits this class implements */
~[@class_member], /* methods, etc. */
/* (not including ctor or dtor) */
- class_ctor,
+ /* ctor is optional, and will soon go away */
+ option<class_ctor>,
/* dtor is optional */
option<class_dtor>
),
item_trait(~[ty_param], ~[trait_method]),
- item_impl(~[ty_param], option<@trait_ref> /* trait */,
- @ty /* self */, ~[@method]),
+ item_impl(~[ty_param],
+ ~[@trait_ref], /* traits this impl implements */
+ @ty, /* self */
+ ~[@method]),
item_mac(mac),
}
enum inline_attr {
ia_none,
ia_hint,
- ia_always
+ ia_always,
+ ia_never,
}
/// True if something like #[inline] is found in the list of attrs.
ast::meta_list(@~"inline", items) {
if !vec::is_empty(find_meta_items_by_name(items, ~"always")) {
ia_always
+ } else if !vec::is_empty(
+ find_meta_items_by_name(items, ~"never")) {
+ ia_never
} else {
ia_hint
}
}
let argument_gram = ~[ms(mtc_rep(~[
- ms(mtc_bb(@"arg",@"expr", 0u))
- ], some(parse::token::COMMA), true))];
+ ms(mtc_bb(@~"arg",@~"expr", 0u))
+ ], some(parse::token::COMMA), true, 0u, 1u))];
let arg_reader = new_tt_reader(cx.parse_sess().span_diagnostic,
cx.parse_sess().interner, none, arg);
let args =
alt parse_or_else(cx.parse_sess(), cx.cfg(), arg_reader as reader,
- argument_gram).get(@"arg") {
+ argument_gram).get(@~"arg") {
@seq(s, _) {
do s.map() |lf| {
alt lf {
@leaf(parse::token::w_expr(arg)) {
arg /* whew! list of exprs, here we come! */
}
- _ { fail "badly-structured parse result"; }
+ _ { fail ~"badly-structured parse result"; }
}
}
}
- _ { fail "badly-structured parse result"; }
+ _ { fail ~"badly-structured parse result"; }
};
ret some(@{id: parse::next_node_id(cx.parse_sess()),
+ callee_id: parse::next_node_id(cx.parse_sess()),
node: ast::expr_vec(args, ast::m_imm), span: sp});
}
// check for errors
visit(proto, cx);
+ // do analysis
+ liveness::analyze(proto, cx);
+
// compile
base::mr_item(proto.compile(cx))
}
import codemap::span;
import ext::base::mk_ctxt;
-fn ident(s: ~str) -> ast::ident {
- @(copy s)
-}
-
-fn empty_span() -> span {
- {lo: 0, hi: 0, expn_info: none}
+// Transitional reexports so qquote can find the paths it is looking for
+mod syntax {
+ import ext;
+ export ext;
+ import parse;
+ export parse;
}
-fn span<T>(+x: T) -> ast::spanned<T> {
- {node: x,
- span: empty_span()}
+fn ident(s: ~str) -> ast::ident {
+ @(copy s)
}
-fn path(id: ident) -> @ast::path {
- @{span: empty_span(),
+fn path(id: ident, span: span) -> @ast::path {
+ @{span: span,
global: false,
idents: ~[id],
rp: none,
types: ~[]}
}
+fn empty_span() -> span {
+ {lo: 0, hi: 0, expn_info: none}
+}
+
trait path_concat {
fn +(id: ident) -> @ast::path;
}
impl methods of path_concat for ident {
fn +(id: ident) -> @ast::path {
- path(self) + id
+ path(self, empty_span()) + id
}
}
+params: ~[ast::ty_param]) -> @ast::item;
fn item_ty(name: ident, ty: @ast::ty) -> @ast::item;
fn ty_vars(+ty_params: ~[ast::ty_param]) -> ~[@ast::ty];
+ fn ty_field_imm(name: ident, ty: @ast::ty) -> ast::ty_field;
+ fn ty_rec(+~[ast::ty_field]) -> @ast::ty;
+ fn field_imm(name: ident, e: @ast::expr) -> ast::field;
+ fn rec(+~[ast::field]) -> @ast::expr;
+ fn block(+stmts: ~[@ast::stmt], e: @ast::expr) -> ast::blk;
+ fn stmt_let(ident: ident, e: @ast::expr) -> @ast::stmt;
+ fn stmt_expr(e: @ast::expr) -> @ast::stmt;
+ fn block_expr(b: ast::blk) -> @ast::expr;
+ fn empty_span() -> span;
}
impl ast_builder of ext_ctxt_ast_builder for ext_ctxt {
+ fn empty_span() -> span {
+ {lo: 0, hi: 0, expn_info: self.backtrace()}
+ }
+
+ fn block_expr(b: ast::blk) -> @ast::expr {
+ @{id: self.next_id(),
+ callee_id: self.next_id(),
+ node: ast::expr_block(b),
+ span: self.empty_span()}
+ }
+
+ fn stmt_expr(e: @ast::expr) -> @ast::stmt {
+ @{node: ast::stmt_expr(e, self.next_id()),
+ span: self.empty_span()}
+ }
+
+ fn stmt_let(ident: ident, e: @ast::expr) -> @ast::stmt {
+ // If the quasiquoter could interpolate idents, this is all
+ // we'd need.
+ //
+ //let ext_cx = self;
+ //#ast[stmt] { let $(ident) = $(e) }
+
+ @{node: ast::stmt_decl(@{node: ast::decl_local(~[
+ @{node: {is_mutbl: false,
+ ty: self.ty_infer(),
+ pat: @{id: self.next_id(),
+ node: ast::pat_ident(
+ path(ident, self.empty_span()), none),
+ span: self.empty_span()},
+ init: some({op: ast::init_move,
+ expr: e}),
+ id: self.next_id()},
+ span: self.empty_span()}]),
+ span: self.empty_span()}, self.next_id()),
+ span: self.empty_span()}
+ }
+
+ fn field_imm(name: ident, e: @ast::expr) -> ast::field {
+ {node: {mutbl: ast::m_imm, ident: name, expr: e},
+ span: self.empty_span()}
+ }
+
+ fn rec(+fields: ~[ast::field]) -> @ast::expr {
+ @{id: self.next_id(),
+ callee_id: self.next_id(),
+ node: ast::expr_rec(fields, none),
+ span: self.empty_span()}
+ }
+
+ fn ty_field_imm(name: ident, ty: @ast::ty) -> ast::ty_field {
+ {node: {ident: name, mt: { ty: ty, mutbl: ast::m_imm } },
+ span: self.empty_span()}
+ }
+
+ fn ty_rec(+fields: ~[ast::ty_field]) -> @ast::ty {
+ @{id: self.next_id(),
+ node: ast::ty_rec(fields),
+ span: self.empty_span()}
+ }
+
+ fn ty_infer() -> @ast::ty {
+ @{id: self.next_id(),
+ node: ast::ty_infer,
+ span: self.empty_span()}
+ }
+
fn ty_param(id: ast::ident, +bounds: ~[ast::ty_param_bound])
-> ast::ty_param
{
id: self.next_id()}
}
- fn expr_block(e: @ast::expr) -> ast::blk {
+ fn block(+stmts: ~[@ast::stmt], e: @ast::expr) -> ast::blk {
let blk = {view_items: ~[],
- stmts: ~[],
+ stmts: stmts,
expr: some(e),
id: self.next_id(),
rules: ast::default_blk};
{node: blk,
- span: empty_span()}
+ span: self.empty_span()}
+ }
+
+ fn expr_block(e: @ast::expr) -> ast::blk {
+ self.block(~[], e)
}
fn fn_decl(+inputs: ~[ast::arg],
id: self.next_id(),
node: node,
vis: ast::public,
- span: empty_span()}
+ span: self.empty_span()}
}
fn item_fn_poly(name: ident,
+tys: ~[@ast::ty]) -> ast::variant {
let args = tys.map(|ty| {ty: ty, id: self.next_id()});
- span({name: name,
- attrs: ~[],
- args: args,
- id: self.next_id(),
- disr_expr: none,
- vis: ast::public})
+ {node: {name: name,
+ attrs: ~[],
+ args: args,
+ id: self.next_id(),
+ disr_expr: none,
+ vis: ast::public},
+ span: self.empty_span()}
}
fn item_mod(name: ident,
// FIXME #2886: make sure the node ids are legal.
@{id: self.next_id(),
node: ast::ty_path(path, self.next_id()),
- span: empty_span()}
+ span: self.empty_span()}
}
- fn ty_nil() -> @ast::ty {
+ fn ty_nil_ast_builder() -> @ast::ty {
@{id: self.next_id(),
node: ast::ty_nil,
- span: empty_span()}
+ span: self.empty_span()}
}
fn item_ty_poly(name: ident,
}
fn ty_vars(+ty_params: ~[ast::ty_param]) -> ~[@ast::ty] {
- ty_params.map(|p| self.ty_path_ast_builder(path(p.ident)))
+ ty_params.map(|p| self.ty_path_ast_builder(
+ path(p.ident, self.empty_span())))
}
}
fn visit_state(state: state, _m: &[()]) {
if state.messages.len() == 0 {
self.span_warn(
- empty_span(), // use a real span!
+            state.span,
#fmt("state %s contains no messages, \
consider stepping to a terminal state instead",
*state.name))
}
}
- fn visit_message(name: ident, _tys: &[@ast::ty],
+ fn visit_message(name: ident, _span: span, _tys: &[@ast::ty],
this: state, next: next_state) {
- alt next {
- some({state: next, tys: next_tys}) {
- let proto = this.proto;
- if !proto.has_state(next) {
- // This should be a span fatal, but then we need to
- // track span information.
- self.span_err(
- empty_span(),
- #fmt("message %s steps to undefined state, %s",
- *name, *next));
- }
-
- let next = proto.get_state(next);
-
- if next.ty_params.len() != next_tys.len() {
+ alt next {
+ some({state: next, tys: next_tys}) {
+ let proto = this.proto;
+ if !proto.has_state(next) {
+ // This should be a span fatal, but then we need to
+ // track span information.
self.span_err(
- empty_span(), // use a real span
- #fmt("message %s target (%s) \
- needs %u type parameters, but got %u",
- *name, *next.name,
- next.ty_params.len(),
- next_tys.len()));
+ proto.get_state(next).span,
+ #fmt("message %s steps to undefined state, %s",
+ *name, *next));
+ }
+ else {
+ let next = proto.get_state(next);
+
+ if next.ty_params.len() != next_tys.len() {
+ self.span_err(
+                        next.span,
+ #fmt("message %s target (%s) \
+ needs %u type parameters, but got %u",
+ *name, *next.name,
+ next.ty_params.len(),
+ next_tys.len()));
+ }
}
}
none { }
--- /dev/null
+/*
+
+Liveness analysis for protocols. This is useful for a lot of possible
+optimizations.
+
+This analysis computes the "co-live" relationship between
+states. Co-live is defined inductively as follows.
+
+1. u is co-live with v if u can transition to v in one message.
+
+2. u is co-live with v if there exists a w such that u and w are
+co-live, w and v are co-live, and u and w have the same direction.
+
+This relationship approximates when it is safe to store two states in
+the same memory location. If there is no u such that u is co-live with
+itself, then the protocol is bounded.
+
+(These assertions could use proofs)
+
+In addition, this analysis computes reachability, to warn when we have
+useless states.
+
+The algorithm is a fixpoint computation. For each state, we initialize
+a bitvector containing whether it is co-live with each other state. At
+first we use rule (1) above to set each vector. Then we iterate
+updating the vectors using rule (2) until there are no changes.
+
+*/
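As a hedged illustration of the fixpoint described in the comment above, here is a modern-Rust sketch over plain boolean matrices rather than `std::bitv`. One-step transitions and per-state directions are assumed given; all names are illustrative:

```rust
// step[u] lists the states u can reach in one message (rule 1);
// dir[u] is u's direction. Returns the co-live relation as a matrix.
fn colive(step: &[Vec<usize>], dir: &[bool]) -> Vec<Vec<bool>> {
    let n = step.len();
    // Rule (1): u is co-live with each one-step successor.
    let mut colive = vec![vec![false; n]; n];
    for (u, succs) in step.iter().enumerate() {
        for &v in succs {
            colive[u][v] = true;
        }
    }
    // Rule (2), iterated to a fixpoint: if u~w with matching
    // direction and w~v, then u~v.
    let mut changed = true;
    while changed {
        changed = false;
        for u in 0..n {
            for w in 0..n {
                if colive[u][w] && dir[u] == dir[w] {
                    for v in 0..n {
                        if colive[w][v] && !colive[u][v] {
                            colive[u][v] = true;
                            changed = true;
                        }
                    }
                }
            }
        }
    }
    colive
}

fn main() {
    // Two same-direction states stepping to each other become
    // co-live with themselves, so this protocol is unbounded.
    let c = colive(&[vec![1], vec![0]], &[true, true]);
    assert!((0..2).any(|i| c[i][i]));
}
```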
+
+import dvec::extensions;
+
+import std::bitv::{bitv, methods};
+
+import proto::methods;
+import ast_builder::empty_span;
+
+fn analyze(proto: protocol, _cx: ext_ctxt) {
+ #debug("initializing colive analysis");
+ let num_states = proto.num_states();
+ let colive = do (copy proto.states).map_to_vec |state| {
+ let bv = bitv(num_states, false);
+ for state.reachable |s| {
+ bv.set(s.id, true);
+ }
+ bv
+ };
+
+ let mut i = 0;
+ let mut changed = true;
+ while changed {
+ changed = false;
+ #debug("colive iteration %?", i);
+ for colive.eachi |i, this_colive| {
+ let this = proto.get_state_by_id(i);
+ for this_colive.ones |j| {
+ let next = proto.get_state_by_id(j);
+ if this.dir == next.dir {
+ changed = changed || this_colive.union(colive[j]);
+ }
+ }
+ }
+ i += 1;
+ }
+
+ #debug("colive analysis complete");
+
+ // Determine if we're bounded
+ let mut self_live = ~[];
+ for colive.eachi |i, bv| {
+ if bv.get(i) {
+ vec::push(self_live, proto.get_state_by_id(i))
+ }
+ }
+
+ if self_live.len() > 0 {
+ let states = str::connect(self_live.map(|s| *s.name), ~" ");
+
+ #debug("protocol %s is unbounded due to loops involving: %s",
+ *proto.name, states);
+
+ // Someday this will be configurable with a warning
+ //cx.span_warn(empty_span(),
+ // #fmt("protocol %s is unbounded due to loops \
+ // involving these states: %s",
+ // *proto.name,
+ // states));
+
+ proto.bounded = some(false);
+ }
+ else {
+ #debug("protocol %s is bounded. yay!", *proto.name);
+ proto.bounded = some(true);
+ }
+}
\ No newline at end of file
impl proto_parser of proto_parser for parser {
fn parse_proto(id: ident) -> protocol {
- let proto = protocol(id);
+ let proto = protocol(id, self.span);
self.parse_seq_to_before_end(token::EOF,
{sep: none, trailing_sep_allowed: false},
_ { self.fatal(~"invalid next state") }
};
- state.add_message(mname, args, next);
+ state.add_message(mname, copy self.span, args, next);
}
}
import ast_builder::ast_builder;
import ast_builder::methods;
import ast_builder::path;
+import ast_builder::path_concat;
+
+// Transitional reexports so qquote can find the paths it is looking for
+mod syntax {
+ import ext;
+ export ext;
+ import parse;
+ export parse;
+}
+
+trait gen_send {
+ fn gen_send(cx: ext_ctxt) -> @ast::item;
+}
+
+trait to_type_decls {
+ fn to_type_decls(cx: ext_ctxt) -> ~[@ast::item];
+ fn to_endpoint_decls(cx: ext_ctxt, dir: direction) -> ~[@ast::item];
+}
+
+trait gen_init {
+ fn gen_init(cx: ext_ctxt) -> @ast::item;
+ fn compile(cx: ext_ctxt) -> @ast::item;
+}
-impl compile for message {
+impl compile of gen_send for message {
fn gen_send(cx: ext_ctxt) -> @ast::item {
#debug("pipec: gen_send");
alt self {
- message(id, tys, this, some({state: next, tys: next_tys})) {
+ message(id, span, tys, this, some({state: next, tys: next_tys})) {
#debug("pipec: next state exists");
let next = this.proto.get_state(next);
assert next_tys.len() == next.ty_params.len();
|n, t| cx.arg_mode(n, t, ast::by_copy)
);
+ let pipe_ty = cx.ty_path_ast_builder(
+ path(this.data_name(), span)
+ .add_tys(cx.ty_vars(this.ty_params)));
let args_ast = vec::append(
~[cx.arg_mode(@~"pipe",
- cx.ty_path_ast_builder(path(this.data_name())
- .add_tys(cx.ty_vars(this.ty_params))),
+ pipe_ty,
ast::by_copy)],
args_ast);
- let pat = alt (this.dir, next.dir) {
- (send, send) { ~"(c, s)" }
- (send, recv) { ~"(s, c)" }
- (recv, send) { ~"(s, c)" }
- (recv, recv) { ~"(c, s)" }
- };
+ let mut body = ~"{\n";
+
+ if this.proto.is_bounded() {
+ let (sp, rp) = alt (this.dir, next.dir) {
+ (send, send) { ("c", "s") }
+ (send, recv) { ("s", "c") }
+ (recv, send) { ("s", "c") }
+ (recv, recv) { ("c", "s") }
+ };
+
+ body += "let b = pipe.reuse_buffer();\n";
+ body += #fmt("let %s = pipes::send_packet_buffered(\
+ ptr::addr_of(b.buffer.data.%s));\n",
+ sp, *next.name);
+ body += #fmt("let %s = pipes::recv_packet_buffered(\
+ ptr::addr_of(b.buffer.data.%s));\n",
+ rp, *next.name);
+ }
+ else {
+ let pat = alt (this.dir, next.dir) {
+ (send, send) { ~"(c, s)" }
+ (send, recv) { ~"(s, c)" }
+ (recv, send) { ~"(s, c)" }
+ (recv, recv) { ~"(c, s)" }
+ };
- let mut body = #fmt("{ let %s = pipes::entangle();\n", pat);
+ body += #fmt("let %s = pipes::entangle();\n", pat);
+ }
body += #fmt("let message = %s::%s(%s);\n",
*this.proto.name,
*self.name(),
.map(|x| *x),
~", "));
body += #fmt("pipes::send(pipe, message);\n");
+ // return the new channel
body += ~"c }";
let body = cx.parse_expr(body);
cx.item_fn_poly(self.name(),
args_ast,
- cx.ty_path_ast_builder(path(next.data_name())
+ cx.ty_path_ast_builder(path(next.data_name(),
+ span)
.add_tys(next_tys)),
self.get_params(),
cx.expr_block(body))
}
- message(id, tys, this, none) {
+ message(id, span, tys, this, none) {
#debug("pipec: no next state");
let arg_names = tys.mapi(|i, _ty| @(~"x_" + i.to_str()));
let args_ast = vec::append(
~[cx.arg_mode(@~"pipe",
- cx.ty_path(path(this.data_name())
+ cx.ty_path_ast_builder(path(this.data_name(),
+ span)
.add_tys(cx.ty_vars(this.ty_params))),
ast::by_copy)],
args_ast);
cx.item_fn_poly(self.name(),
args_ast,
- cx.ty_nil(),
+ cx.ty_nil_ast_builder(),
self.get_params(),
cx.expr_block(body))
}
}
fn to_ty(cx: ext_ctxt) -> @ast::ty {
- cx.ty_path_ast_builder(path(self.name)
- .add_tys(cx.ty_vars(self.ty_params)))
+ cx.ty_path_ast_builder(path(self.name(), self.span())
+ .add_tys(cx.ty_vars(self.get_params())))
}
}
-impl compile for state {
+impl compile of to_type_decls for state {
fn to_type_decls(cx: ext_ctxt) -> ~[@ast::item] {
#debug("pipec: to_type_decls");
// This compiles into two different type declarations. Say the
let mut items_msg = ~[];
for self.messages.each |m| {
- let message(name, tys, this, next) = m;
+ let message(name, _span, tys, this, next) = m;
let tys = alt next {
some({state: next, tys: next_tys}) {
};
vec::append_one(tys,
- cx.ty_path((dir + next_name)
+ cx.ty_path_ast_builder((dir + next_name)
.add_tys(next_tys)))
}
none { tys }
}
}
- vec::push(items,
- cx.item_ty_poly(
- self.data_name(),
- cx.ty_path_ast_builder(
- (@~"pipes" + @(dir.to_str() + ~"_packet"))
- .add_ty(cx.ty_path_ast_builder(
- (self.proto.name + self.data_name())
- .add_tys(cx.ty_vars(self.ty_params))))),
- self.ty_params));
+ if !self.proto.is_bounded() {
+ vec::push(items,
+ cx.item_ty_poly(
+ self.data_name(),
+ cx.ty_path_ast_builder(
+ (@~"pipes" + @(dir.to_str() + ~"_packet"))
+ .add_ty(cx.ty_path_ast_builder(
+ (self.proto.name + self.data_name())
+ .add_tys(cx.ty_vars(self.ty_params))))),
+ self.ty_params));
+ }
+ else {
+ vec::push(items,
+ cx.item_ty_poly(
+ self.data_name(),
+ cx.ty_path_ast_builder(
+ (@~"pipes" + @(dir.to_str()
+ + ~"_packet_buffered"))
+ .add_tys(~[cx.ty_path_ast_builder(
+ (self.proto.name + self.data_name())
+ .add_tys(cx.ty_vars(self.ty_params))),
+ self.proto.buffer_ty_path(cx)])),
+ self.ty_params));
+ };
items
}
}
-impl compile for protocol {
+impl compile of gen_init for protocol {
fn gen_init(cx: ext_ctxt) -> @ast::item {
+ let ext_cx = cx;
+
+ #debug("gen_init");
let start_state = self.states[0];
- let body = alt start_state.dir {
- send { cx.parse_expr(~"pipes::entangle()") }
- recv {
- cx.parse_expr(~"{ \
- let (s, c) = pipes::entangle(); \
- (c, s) \
- }")
- }
+ let body = if !self.is_bounded() {
+ alt start_state.dir {
+ send { #ast { pipes::entangle() } }
+ recv {
+ #ast {{
+ let (s, c) = pipes::entangle();
+ (c, s)
+ }}
+ }
+ }
+ }
+ else {
+ let body = self.gen_init_bounded(ext_cx);
+ alt start_state.dir {
+ send { body }
+ recv {
+ #ast {{
+ let (s, c) = $(body);
+ (c, s)
+ }}
+ }
+ }
};
cx.parse_item(#fmt("fn init%s() -> (client::%s, server::%s)\
- { %s }",
+ { import pipes::has_buffer; %s }",
start_state.ty_params.to_source(),
start_state.to_ty(cx).to_source(),
start_state.to_ty(cx).to_source(),
body.to_source()))
}
+ fn gen_buffer_init(ext_cx: ext_ctxt) -> @ast::expr {
+ ext_cx.rec(self.states.map_to_vec(|s| {
+ let fty = s.to_ty(ext_cx);
+ ext_cx.field_imm(s.name, #ast { pipes::mk_packet::<$(fty)>() })
+ }))
+ }
+
+ fn gen_init_bounded(ext_cx: ext_ctxt) -> @ast::expr {
+ #debug("gen_init_bounded");
+ let buffer_fields = self.gen_buffer_init(ext_cx);
+
+ let buffer = #ast {
+ ~{header: pipes::buffer_header(),
+ data: $(buffer_fields)}
+ };
+
+ let entangle_body = ext_cx.block_expr(
+ ext_cx.block(
+ self.states.map_to_vec(
+ |s| ext_cx.parse_stmt(
+ #fmt("data.%s.set_buffer(buffer)", *s.name))),
+ ext_cx.parse_expr(
+ #fmt("ptr::addr_of(data.%s)", *self.states[0].name))));
+
+ #ast {{
+ let buffer = $(buffer);
+ do pipes::entangle_buffer(buffer) |buffer, data| {
+ $(entangle_body)
+ }
+ }}
+ }
+
+ fn buffer_ty_path(cx: ext_ctxt) -> @ast::ty {
+ let mut params: ~[ast::ty_param] = ~[];
+ for (copy self.states).each |s| {
+ for s.ty_params.each |tp| {
+ alt params.find(|tpp| *tp.ident == *tpp.ident) {
+ none { vec::push(params, tp) }
+ _ { }
+ }
+ }
+ }
+
+ cx.ty_path_ast_builder(path(@~"buffer", self.span)
+ .add_tys(cx.ty_vars(params)))
+ }
+
+ fn gen_buffer_type(cx: ext_ctxt) -> @ast::item {
+ let ext_cx = cx;
+ let mut params: ~[ast::ty_param] = ~[];
+ let fields = do (copy self.states).map_to_vec |s| {
+ for s.ty_params.each |tp| {
+ alt params.find(|tpp| *tp.ident == *tpp.ident) {
+ none { vec::push(params, tp) }
+ _ { }
+ }
+ }
+ let ty = s.to_ty(cx);
+ let fty = #ast[ty] {
+ pipes::packet<$(ty)>
+ };
+ cx.ty_field_imm(s.name, fty)
+ };
+
+ cx.item_ty_poly(
+ @~"buffer",
+ cx.ty_rec(fields),
+ params)
+ }
+
fn compile(cx: ext_ctxt) -> @ast::item {
let mut items = ~[self.gen_init(cx)];
let mut client_states = ~[];
server_states += s.to_endpoint_decls(cx, recv);
}
+ if self.is_bounded() {
+ vec::push(items, self.gen_buffer_type(cx))
+ }
+
vec::push(items,
cx.item_mod(@~"client",
client_states));
trait ext_ctxt_parse_utils {
fn parse_item(s: ~str) -> @ast::item;
fn parse_expr(s: ~str) -> @ast::expr;
+ fn parse_stmt(s: ~str) -> @ast::stmt;
}
impl parse_utils of ext_ctxt_parse_utils for ext_ctxt {
}
}
- fn parse_expr(s: ~str) -> @ast::expr {
- parse::parse_expr_from_source_str(
+ fn parse_stmt(s: ~str) -> @ast::stmt {
+ parse::parse_stmt_from_source_str(
~"***protocol expansion***",
@(copy s),
self.cfg(),
+ ~[],
self.parse_sess())
}
-}
-
-trait two_vector_utils<A, B> {
- fn zip() -> ~[(A, B)];
- fn map<C>(f: fn(A, B) -> C) -> ~[C];
-}
-impl methods<A: copy, B: copy> of two_vector_utils<A, B> for (~[A], ~[B]) {
- fn zip() -> ~[(A, B)] {
- let (a, b) = self;
- vec::zip(a, b)
- }
-
- fn map<C>(f: fn(A, B) -> C) -> ~[C] {
- let (a, b) = self;
- vec::map2(a, b, f)
+ fn parse_expr(s: ~str) -> @ast::expr {
+ parse::parse_expr_from_source_str(
+ ~"***protocol expansion***",
+ @(copy s),
+ self.cfg(),
+ self.parse_sess())
}
}
import ast::{ident};
-import ast_builder::{path, methods, ast_builder};
+import ast_builder::{path, methods, ast_builder, append_types};
enum direction {
send, recv
type next_state = option<{state: ident, tys: ~[@ast::ty]}>;
enum message {
- // name, data, current state, next state
- message(ident, ~[@ast::ty], state, next_state)
+ // name, span, data, current state, next state
+ message(ident, span, ~[@ast::ty], state, next_state)
}
impl methods for message {
fn name() -> ident {
alt self {
- message(id, _, _, _) {
+ message(id, _, _, _, _) {
id
}
}
}
+ fn span() -> span {
+ alt self {
+ message(_, span, _, _, _) {
+ span
+ }
+ }
+ }
+
/// Return the type parameters actually used by this message
fn get_params() -> ~[ast::ty_param] {
alt self {
- message(_, _, this, _) {
+ message(_, _, _, this, _) {
this.ty_params
}
}
enum state {
state_(@{
+ id: uint,
name: ident,
+ span: span,
dir: direction,
ty_params: ~[ast::ty_param],
messages: dvec<message>,
}
impl methods for state {
- fn add_message(name: ident, +data: ~[@ast::ty], next: next_state) {
- self.messages.push(message(name, data, self,
+ fn add_message(name: ident, span: span,
+ +data: ~[@ast::ty], next: next_state) {
+ self.messages.push(message(name, span, data, self,
next));
}
self.name
}
+ /// Returns the type that is used for the messages.
fn to_ty(cx: ext_ctxt) -> @ast::ty {
- cx.ty_path(path(self.name).add_tys(cx.ty_vars(self.ty_params)))
+ cx.ty_path_ast_builder
+ (path(self.name, self.span).add_tys(cx.ty_vars(self.ty_params)))
}
-}
-enum protocol {
- protocol_(@{
- name: ident,
- states: dvec<state>,
- }),
+ /// Iterate over the states that can be reached in one message
+ /// from this state.
+ fn reachable(f: fn(state) -> bool) {
+ for self.messages.each |m| {
+ alt m {
+ message(_, _, _, _, some({state: id, _})) {
+ let state = self.proto.get_state(id);
+ if !f(state) { break }
+ }
+ _ { }
+ }
+ }
+ }
}
-fn protocol(name: ident) -> protocol {
- protocol_(@{name: name, states: dvec()})
+type protocol = @protocol_;
+
+fn protocol(name: ident, +span: span) -> protocol {
+ @protocol_(name, span)
}
-impl methods for protocol {
- fn add_state(name: ident, dir: direction) -> state {
- self.add_state_poly(name, dir, ~[])
+class protocol_ {
+ let name: ident;
+ let span: span;
+ let states: dvec<state>;
+
+ let mut bounded: option<bool>;
+
+ new(name: ident, span: span) {
+ self.name = name;
+ self.span = span;
+ self.states = dvec();
+ self.bounded = none;
}
- /// Get or create a state.
+ /// Get a state.
fn get_state(name: ident) -> state {
self.states.find(|i| i.name == name).get()
}
+ fn get_state_by_id(id: uint) -> state { self.states[id] }
+
fn has_state(name: ident) -> bool {
self.states.find(|i| i.name == name) != none
}
+ fn filename() -> ~str {
+ ~"proto://" + *self.name
+ }
+
+ fn num_states() -> uint { self.states.len() }
+
+ fn has_ty_params() -> bool {
+ for self.states.each |s| {
+ if s.ty_params.len() > 0 {
+ ret true;
+ }
+ }
+ false
+ }
+ fn is_bounded() -> bool {
+ let bounded = self.bounded.get();
+ bounded
+ //if bounded && self.has_ty_params() {
+ // #debug("protocol %s is bounded, but type parameters\
+ // are not yet supported.",
+ // *self.name);
+ // false
+ //}
+ //else { bounded }
+ }
+}
+
+impl methods for protocol {
+ fn add_state(name: ident, dir: direction) -> state {
+ self.add_state_poly(name, dir, ~[])
+ }
+
fn add_state_poly(name: ident, dir: direction,
+ty_params: ~[ast::ty_param]) -> state {
let messages = dvec();
let state = state_(@{
+ id: self.states.len(),
name: name,
+ span: self.span,
dir: dir,
ty_params: ty_params,
messages: messages,
self.states.push(state);
state
}
-
- fn filename() -> ~str {
- ~"proto://" + *self.name
- }
}
trait visitor<Tproto, Tstate, Tmessage> {
fn visit_proto(proto: protocol, st: &[Tstate]) -> Tproto;
fn visit_state(state: state, m: &[Tmessage]) -> Tstate;
- fn visit_message(name: ident, tys: &[@ast::ty],
+ fn visit_message(name: ident, span: span, tys: &[@ast::ty],
this: state, next: next_state) -> Tmessage;
}
// the copy keywords prevent recursive use of dvec
let states = do (copy proto.states).map_to_vec |s| {
let messages = do (copy s.messages).map_to_vec |m| {
- let message(name, tys, this, next) = m;
- visitor.visit_message(name, tys, this, next)
+ let message(name, span, tys, this, next) = m;
+ visitor.visit_message(name, span, tys, this, next)
};
visitor.visit_state(s, messages)
};
mut idx: uint,
mut up: matcher_pos_up, // mutable for swapping only
matches: ~[dvec<@arb_depth>],
+ match_lo: uint, match_hi: uint,
sp_lo: uint,
};
vec::foldl(0u, ms, |ct, m| {
ct + alt m.node {
mtc_tok(_) { 0u }
- mtc_rep(more_ms, _, _) { count_names(more_ms) }
+ mtc_rep(more_ms, _, _, _, _) { count_names(more_ms) }
mtc_bb(_,_,_) { 1u }
}})
}
#[warn(no_non_implicitly_copyable_typarams)]
-fn new_matcher_pos(ms: ~[matcher], sep: option<token>, lo: uint)
+fn initial_matcher_pos(ms: ~[matcher], sep: option<token>, lo: uint)
-> matcher_pos {
+ let mut match_idx_hi = 0u;
+ for ms.each() |elt| {
+ alt elt.node {
+ mtc_tok(_) {}
+ mtc_rep(_,_,_,_,hi) { match_idx_hi = hi; } //it is monotonic...
+ mtc_bb(_,_,pos) { match_idx_hi = pos+1u; } //...so latest is highest
+ }
+ }
~{elts: ms, sep: sep, mut idx: 0u, mut up: matcher_pos_up(none),
matches: copy vec::from_fn(count_names(ms), |_i| dvec::dvec()),
- sp_lo: lo}
+ match_lo: 0u, match_hi: match_idx_hi, sp_lo: lo}
}
/* logically, an arb_depth should contain only one kind of nonterminal */
ret_val: hashmap<ident, @arb_depth>) {
alt m {
{node: mtc_tok(_), span: _} { }
- {node: mtc_rep(more_ms, _, _), span: _} {
+ {node: mtc_rep(more_ms, _, _, _, _), span: _} {
for more_ms.each() |next_m| { n_rec(p_s, next_m, res, ret_val) };
}
{node: mtc_bb(bind_name, _, idx), span: sp} {
fn parse(sess: parse_sess, cfg: ast::crate_cfg, rdr: reader, ms: ~[matcher])
-> parse_result {
let mut cur_eis = ~[];
- vec::push(cur_eis, new_matcher_pos(ms, none, rdr.peek().sp.lo));
+ vec::push(cur_eis, initial_matcher_pos(ms, none, rdr.peek().sp.lo));
loop {
let mut bb_eis = ~[]; // black-box parsed by parser.rs
// I bet this is a perf problem: we're preemptively
// doing a lot of array work that will get thrown away
// most of the time.
- for ei.matches.eachi() |idx, elt| {
- let sub = elt.get();
- // Some subtrees don't contain the name at all
- if sub.len() == 0u { again; }
+
+ // Only touch the binders we have actually bound
+ for uint::range(ei.match_lo, ei.match_hi) |idx| {
+ let sub = ei.matches[idx].get();
new_pos.matches[idx]
.push(@seq(sub, mk_sp(ei.sp_lo,sp.hi)));
}
} else {
alt copy ei.elts[idx].node {
/* need to descend into sequence */
- mtc_rep(matchers, sep, zero_ok) {
+ mtc_rep(matchers, sep, zero_ok, match_idx_lo, match_idx_hi){
if zero_ok {
let new_ei = copy ei;
new_ei.idx += 1u;
+ //we specifically matched zero repeats.
+ for uint::range(match_idx_lo, match_idx_hi) |idx| {
+ new_ei.matches[idx].push(@seq(~[], sp));
+ }
+
vec::push(cur_eis, new_ei);
}
vec::push(cur_eis, ~{
elts: matchers, sep: sep, mut idx: 0u,
mut up: matcher_pos_up(some(ei_t)),
- matches: matches, sp_lo: sp.lo
+ matches: matches,
+ match_lo: match_idx_lo, match_hi: match_idx_hi,
+ sp_lo: sp.lo
});
}
mtc_bb(_,_,_) { vec::push(bb_eis, ei) }
if (bb_eis.len() > 0u && next_eis.len() > 0u)
|| bb_eis.len() > 1u {
let nts = str::connect(vec::map(bb_eis, |ei| {
- alt ei.elts[ei.idx].node
- { mtc_bb(_,name,_) { *name } _ { fail; } }
- }), ~" or ");
+ alt ei.elts[ei.idx].node {
+ mtc_bb(bind,name,_) {
+ #fmt["%s ('%s')", *name, *bind]
+ }
+ _ { fail; } } }), ~" or ");
ret failure(sp, #fmt[
"Local ambiguity: multiple parsing options: \
built-in NTs %s or %u other options.",
ms(mtc_bb(@~"lhs",@~"mtcs", 0u)),
ms(mtc_tok(FAT_ARROW)),
ms(mtc_bb(@~"rhs",@~"tt", 1u)),
- ], some(SEMI), false))];
+ ], some(SEMI), false, 0u, 2u))];
let arg_reader = new_tt_reader(cx.parse_sess().span_diagnostic,
cx.parse_sess().interner, none, arg);
fn tt_next_token(&&r: tt_reader) -> {tok: token, sp: span} {
let ret_val = { tok: r.cur_tok, sp: r.cur_span };
- while r.cur.idx >= vec::len(r.cur.readme) {
+ while r.cur.idx >= r.cur.readme.len() {
/* done with this set; pop or repeat? */
if ! r.cur.dotdotdoted
|| r.repeat_idx.last() == r.repeat_len.last() - 1 {
r.sp_diag.span_fatal(sp, msg);
}
lis_constraint(len, _) {
- vec::push(r.repeat_len, len);
- vec::push(r.repeat_idx, 0u);
- r.cur = @{readme: tts, mut idx: 0u, dotdotdoted: true,
- sep: sep, up: tt_frame_up(option::some(r.cur)) };
-
if len == 0 {
if !zerok {
r.sp_diag.span_fatal(sp, /* FIXME #2887 blame invoker
~"this must repeat at least \
once");
}
- /* we need to pop before we proceed, so recur */
+
+ r.cur.idx += 1u;
ret tt_next_token(r);
+ } else {
+ vec::push(r.repeat_len, len);
+ vec::push(r.repeat_idx, 0u);
+ r.cur = @{readme: tts, mut idx: 0u, dotdotdoted: true,
+ sep: sep, up: tt_frame_up(option::some(r.cur))};
}
}
}
item_enum(vec::map(variants, |x| fld.fold_variant(x)),
fold_ty_params(typms, fld))
}
- item_class(typms, traits, items, ctor, m_dtor) {
- let ctor_body = fld.fold_block(ctor.node.body);
- let ctor_decl = fold_fn_decl(ctor.node.dec, fld);
- let ctor_id = fld.new_id(ctor.node.id);
+ item_class(typms, traits, items, m_ctor, m_dtor) {
+ let resulting_optional_constructor;
+ alt m_ctor {
+ none => {
+ resulting_optional_constructor = none;
+ }
+ some(constructor) => {
+ resulting_optional_constructor = some({
+ node: {
+ body: fld.fold_block(constructor.node.body),
+ dec: fold_fn_decl(constructor.node.dec, fld),
+ id: fld.new_id(constructor.node.id)
+ with constructor.node
+ }
+ with constructor
+ });
+ }
+ }
let dtor = do option::map(m_dtor) |dtor| {
let dtor_body = fld.fold_block(dtor.node.body);
let dtor_id = fld.new_id(dtor.node.id);
/* FIXME (#2543) */ copy typms,
vec::map(traits, |p| fold_trait_ref(p, fld)),
vec::map(items, |x| fld.fold_class_item(x)),
- {node: {body: ctor_body,
- dec: ctor_decl,
- id: ctor_id with ctor.node}
- with ctor}, dtor)
+ resulting_optional_constructor,
+ dtor)
}
item_impl(tps, ifce, ty, methods) {
item_impl(fold_ty_params(tps, fld),
fld.fold_expr(e)) }
expr_assert(e) { expr_assert(fld.fold_expr(e)) }
expr_mac(mac) { expr_mac(fold_mac(mac)) }
+ expr_struct(path, fields) {
+ expr_struct(fld.fold_path(path), vec::map(fields, fold_field))
+ }
}
}
export parse_crate_from_file, parse_crate_from_crate_file;
export parse_crate_from_source_str;
export parse_expr_from_source_str, parse_item_from_source_str;
+export parse_stmt_from_source_str;
export parse_from_source_str;
import parser::parser;
ret r;
}
+fn parse_stmt_from_source_str(name: ~str, source: @~str, cfg: ast::crate_cfg,
+ +attrs: ~[ast::attribute],
+ sess: parse_sess) -> @ast::stmt {
+ let (p, rdr) = new_parser_etc_from_source_str(sess, cfg, name,
+ codemap::fss_none, source);
+ let r = p.parse_stmt(attrs);
+ sess.chpos = rdr.chpos;
+ sess.byte_pos = sess.byte_pos + rdr.pos;
+ ret r;
+}
+
fn parse_from_source_str<T>(f: fn (p: parser) -> T,
name: ~str, ss: codemap::file_substr,
source: @~str, cfg: ast::crate_cfg,
fn eat_keyword(word: ~str) -> bool {
self.require_keyword(word);
- // FIXME (#13042): this gratuitous use of @ is to
- // workaround LLVM bug.
- alt @self.token {
- @token::IDENT(sid, false) {
+ let mut bump = false;
+ let val = alt self.token {
+ token::IDENT(sid, false) {
if str::eq(word, *self.get_str(sid)) {
- self.bump();
- ret true;
- } else { ret false; }
+ bump = true;
+ true
+ } else { false }
}
- _ { ret false; }
- }
+ _ { false }
+ };
+ if bump { self.bump() }
+ val
}
fn expect_keyword(word: ~str) {
fn next_token() -> {tok: token::token, sp: span};
fn fatal(~str) -> !;
fn span_diag() -> span_handler;
- fn interner() -> @interner<@~str>;
+ pure fn interner() -> @interner<@~str>;
fn peek() -> {tok: token::token, sp: span};
fn dup() -> reader;
}
self.span_diagnostic.span_fatal(copy self.peek_span, m)
}
fn span_diag() -> span_handler { self.span_diagnostic }
- fn interner() -> @interner<@~str> { self.interner }
+ pure fn interner() -> @interner<@~str> { self.interner }
fn peek() -> {tok: token::token, sp: span} {
{tok: self.peek_tok, sp: self.peek_span}
}
self.sp_diag.span_fatal(copy self.cur_span, m);
}
fn span_diag() -> span_handler { self.sp_diag }
- fn interner() -> @interner<@~str> { self.interner }
+ pure fn interner() -> @interner<@~str> { self.interner }
fn peek() -> {tok: token::token, sp: span} {
{ tok: self.cur_tok, sp: self.cur_span }
}
bound_copy, bound_send, bound_trait, bound_owned,
box, by_copy, by_move,
by_mutbl_ref, by_ref, by_val, capture_clause, capture_item,
- carg_base, carg_ident, cdir_dir_mod, cdir_src_mod,
- cdir_view_item, checked_expr, claimed_expr, class_immutable,
+ cdir_dir_mod, cdir_src_mod,
+ cdir_view_item, class_immutable,
class_member, class_method, class_mutable,
crate, crate_cfg, crate_directive, decl,
decl_item, decl_local, default_blk, deref, div, expl, expr,
expr_fail, expr_field, expr_fn, expr_fn_block, expr_if,
expr_index, expr_lit, expr_log, expr_loop,
expr_loop_body, expr_mac, expr_move, expr_new, expr_path,
- expr_rec, expr_ret, expr_swap, expr_tup, expr_unary, expr_vec,
- expr_vstore, expr_while, extern_fn, field, fn_decl, foreign_item,
- foreign_item_fn, foreign_mod, ident, impure_fn, infer,
- init_assign, init_move, initializer, instance_var, item, item_,
- item_class, item_const, item_enum, item_fn, item_foreign_mod,
- item_impl, item_mac, item_mod, item_trait, item_ty, lit, lit_,
- lit_bool, lit_float, lit_int, lit_int_unsuffixed, lit_nil,
- lit_str, lit_uint, local, m_const, m_imm, m_mutbl, mac_, mac_aq,
- mac_ellipsis, mac_embed_block, mac_embed_type, mac_invoc,
- mac_invoc_tt, mac_var, matcher, method, mode, mt, mtc_bb,
- mtc_rep, mtc_tok, mul, mutability, neg, noreturn, not, pat,
- pat_box, pat_enum, pat_ident, pat_lit, pat_range, pat_rec,
- pat_tup, pat_uniq, pat_wild, path, private, proto, proto_any,
- proto_bare, proto_block, proto_box, proto_uniq, provided, public,
- pure_fn, purity, re_anon, re_named, region, rem, required,
- ret_style, return_val, shl, shr, stmt, stmt_decl, stmt_expr,
- stmt_semi, subtract, token_tree, trait_method, trait_ref,
- tt_delim, tt_dotdotdot, tt_flat, tt_interpolate, ty, ty_, ty_bot,
- ty_box, ty_constr, ty_constr_, ty_constr_arg, ty_field, ty_fn,
- ty_infer, ty_mac, ty_method, ty_nil, ty_param, ty_path, ty_ptr,
- ty_rec, ty_rptr, ty_tup, ty_u32, ty_uniq, ty_vec,
- ty_fixed_length,
- unchecked_blk, uniq, unsafe_blk, unsafe_fn, variant, view_item,
- view_item_, view_item_export, view_item_import, view_item_use,
- view_path, view_path_glob, view_path_list, view_path_simple,
- visibility, vstore, vstore_box, vstore_fixed, vstore_slice,
- vstore_uniq};
+ expr_rec, expr_ret, expr_swap, expr_struct, expr_tup, expr_unary,
+ expr_vec, expr_vstore, expr_while, extern_fn, field, fn_decl,
+ foreign_item, foreign_item_fn, foreign_mod, ident, impure_fn,
+ infer, init_assign, init_move, initializer, instance_var, item,
+ item_, item_class, item_const, item_enum, item_fn,
+ item_foreign_mod, item_impl, item_mac, item_mod, item_trait,
+ item_ty, lit, lit_, lit_bool, lit_float, lit_int,
+ lit_int_unsuffixed, lit_nil, lit_str, lit_uint, local, m_const,
+ m_imm, m_mutbl, mac_, mac_aq, mac_ellipsis, mac_embed_block,
+ mac_embed_type, mac_invoc, mac_invoc_tt, mac_var, matcher,
+ method, mode, mt, mtc_bb, mtc_rep, mtc_tok, mul, mutability, neg,
+ noreturn, not, pat, pat_box, pat_enum, pat_ident, pat_lit,
+ pat_range, pat_rec, pat_tup, pat_uniq, pat_wild, path, private,
+ proto, proto_any, proto_bare, proto_block, proto_box, proto_uniq,
+ provided, public, pure_fn, purity, re_anon, re_named, region,
+ rem, required, ret_style, return_val, shl, shr, stmt, stmt_decl,
+ stmt_expr, stmt_semi, subtract, token_tree, trait_method,
+ trait_ref, tt_delim, tt_dotdotdot, tt_flat, tt_interpolate, ty,
+ ty_, ty_bot, ty_box, ty_field, ty_fn, ty_infer, ty_mac,
+ ty_method, ty_nil, ty_param, ty_path, ty_ptr, ty_rec, ty_rptr,
+ ty_tup, ty_u32, ty_uniq, ty_vec, ty_fixed_length, unchecked_blk,
+ uniq, unsafe_blk, unsafe_fn, variant, view_item, view_item_,
+ view_item_export, view_item_import, view_item_use, view_path,
+ view_path_glob, view_path_list, view_path_simple, visibility,
+ vstore, vstore_box, vstore_fixed, vstore_slice, vstore_uniq};
export file_type;
export parser;
type arg_or_capture_item = either<arg, capture_item>;
type item_info = (ident, item_, option<~[attribute]>);
-fn dummy() {
+/* The expr situation is not as complex as I thought it would be.
+The important thing is to make sure that lookahead doesn't balk
+at ACTUALLY tokens */
+macro_rules! maybe_whole_expr{
+ {$p:expr} => { alt copy $p.token {
+ ACTUALLY(token::w_expr(e)) {
+ $p.bump();
+ ret pexpr(e);
+ }
+ ACTUALLY(token::w_path(pt)) {
+ $p.bump();
+ ret $p.mk_pexpr($p.span.lo, $p.span.lo,
+ expr_path(pt));
+ }
+ _ {}
+ }}
+}
+macro_rules! maybe_whole {
+ {$p:expr, $constructor:path} => { alt copy $p.token {
+ ACTUALLY($constructor(x)) { $p.bump(); ret x; }
+ _ {}
+ }}
+}
- #macro[[#maybe_whole_item[p],
- alt copy p.token {
- ACTUALLY(token::w_item(i)) { p.bump(); ret i; }
- _ {} }]];
- #macro[[#maybe_whole_block[p],
- alt copy p.token {
- ACTUALLY(token::w_block(b)) { p.bump(); ret b; }
- _ {} }]];
- #macro[[#maybe_whole_stmt[p],
- alt copy p.token {
- ACTUALLY(token::w_stmt(s)) { p.bump(); ret s; }
- _ {} }]];
- #macro[[#maybe_whole_pat[p],
- alt copy p.token {
- ACTUALLY(token::w_pat(pt)) { p.bump(); ret pt; }
- _ {} }]];
- /* The expr situation is not as complex as I thought it would be.
- The important thing is to make sure that lookahead doesn't balk
- at ACTUALLY tokens */
- #macro[[#maybe_whole_expr_pexpr[p], /* ack! */
- alt copy p.token {
- ACTUALLY(token::w_expr(e)) {
- p.bump();
- ret pexpr(e);
- }
- ACTUALLY(token::w_path(pt)) {
- p.bump();
- ret p.mk_pexpr(p.span.lo, p.span.lo,
- expr_path(pt));
- }
- _ {} }]];
- #macro[[#maybe_whole_ty[p],
- alt copy p.token {
- ACTUALLY(token::w_ty(t)) { p.bump(); ret t; }
- _ {} }]];
- /* ident is handled by common.rs */
+/* ident is handled by common.rs */
+
+fn dummy() {
+ /* we will need this to bootstrap maybe_whole! */
#macro[[#maybe_whole_path[p],
alt p.token {
ACTUALLY(token::w_path(pt)) { p.bump(); ret pt; }
_ {} }]];
}
+
class parser {
let sess: parse_sess;
let cfg: crate_cfg;
fn warn(m: ~str) {
self.sess.span_diagnostic.span_warn(copy self.span, m)
}
- fn get_str(i: token::str_num) -> @~str {
+ pure fn get_str(i: token::str_num) -> @~str {
interner::get(*self.reader.interner(), i)
}
fn get_id() -> node_id { next_node_id(self.sess) }
let name =
alt copy self.token {
token::IDENT(sid, _) => {
- if self.look_ahead(1u) == token::DOT || // backwards compat
- self.look_ahead(1u) == token::BINOP(token::SLASH) {
+ if self.look_ahead(1u) == token::BINOP(token::SLASH) {
self.bump(); self.bump();
some(self.get_str(sid))
} else {
}
fn parse_bottom_expr() -> pexpr {
- #maybe_whole_expr_pexpr[self];
+ maybe_whole_expr!{self};
let lo = self.span.lo;
let mut hi = self.span.hi;
let hi = self.span.hi;
ret pexpr(self.mk_mac_expr(lo, hi, mac_invoc_tt(pth, tts)));
- } else {
- hi = pth.span.hi;
- ex = expr_path(pth);
+ } else if self.token == token::LBRACE {
+ // This might be a struct literal.
+ let lookahead = self.look_ahead(1);
+ if self.token_is_keyword(~"mut", lookahead) ||
+ (is_plain_ident(lookahead) &&
+ self.look_ahead(2) == token::COLON) {
+
+ // It's a struct literal.
+ self.bump();
+ let mut fields = ~[];
+ if self.is_keyword(~"mut") || is_plain_ident(self.token)
+ && self.look_ahead(1) == token::COLON {
+ vec::push(fields, self.parse_field(token::COLON));
+ while self.token != token::RBRACE {
+ self.expect(token::COMMA);
+ if self.token == token::RBRACE {
+ // Accept an optional trailing comma.
+ break;
+ }
+ vec::push(fields, self.parse_field(token::COLON));
+ }
+ }
+
+ hi = pth.span.hi;
+ self.expect(token::RBRACE);
+ ex = expr_struct(pth, fields);
+ ret self.mk_pexpr(lo, hi, ex);
+ }
}
+
+ hi = pth.span.hi;
+ ex = expr_path(pth);
} else {
let lit = self.parse_lit();
hi = lit.span.hi;
let m = if self.token == token::DOLLAR {
self.bump();
if self.token == token::LPAREN {
+ let name_idx_lo = *name_idx;
let ms = self.parse_matcher_subseq(name_idx, token::LPAREN,
token::RPAREN);
if ms.len() == 0u {
self.fatal(~"repetition body must be nonempty");
}
let (sep, zerok) = self.parse_sep_and_zerok();
- mtc_rep(ms, sep, zerok)
+ mtc_rep(ms, sep, zerok, name_idx_lo, *name_idx)
} else {
let bound_to = self.parse_ident();
self.expect(token::COLON);
(ident, item_trait(tps, meths), none)
}
- // Parses three variants (with the region/type params always optional):
+ // Parses four variants (with the region/type params always optional):
// impl /&<T: copy> of to_str for ~[T] { ... }
// impl name/&<T> of to_str for ~[T] { ... }
// impl name/&<T> for ~[T] { ... }
+ // impl<T> ~[T] : to_str { ... }
fn parse_item_impl() -> item_info {
fn wrap_path(p: parser, pt: @path) -> @ty {
@{id: p.get_id(), node: ty_path(pt, p.get_id()), span: pt.span}
}
- let mut (ident, tps) = {
- if self.token == token::LT {
- (none, self.parse_ty_params())
- } else if self.token == token::BINOP(token::SLASH) {
- self.parse_region_param();
- (none, self.parse_ty_params())
+
+ // We do two separate paths here: old-style impls and new-style impls.
+
+ // First, parse type parameters if necessary.
+ let mut tps;
+ if self.token == token::LT {
+ tps = self.parse_ty_params();
+ } else {
+ tps = ~[];
+ }
+
+ let mut ident;
+ let ty, traits;
+ if !self.is_keyword(~"of") &&
+ !self.token_is_keyword(~"of", self.look_ahead(1)) &&
+ !self.token_is_keyword(~"for", self.look_ahead(1)) &&
+ self.look_ahead(1) != token::BINOP(token::SLASH) &&
+ self.look_ahead(1) != token::LT {
+
+ // This is a new-style impl declaration.
+ ident = @~"__extensions__"; // XXX: clownshoes
+
+ // Parse the type.
+ ty = self.parse_ty(false);
+
+ // Parse traits, if necessary.
+ if self.token == token::COLON {
+ self.bump();
+ traits = self.parse_trait_ref_list(token::LBRACE);
+ } else {
+ traits = ~[];
}
- else if self.is_keyword(~"of") {
- (none, ~[])
+ } else {
+ let mut ident_old;
+ if self.token == token::BINOP(token::SLASH) {
+ self.parse_region_param();
+ ident_old = none;
+ tps = self.parse_ty_params();
+ } else if self.is_keyword(~"of") {
+ ident_old = none;
} else {
- let id = self.parse_ident();
+ ident_old = some(self.parse_ident());
self.parse_region_param();
- (some(id), self.parse_ty_params())
+ tps = self.parse_ty_params();
}
- };
- let ifce = if self.eat_keyword(~"of") {
- let path = self.parse_path_with_tps(false);
- if option::is_none(ident) {
- ident = some(vec::last(path.idents));
- }
- some(@{path: path, ref_id: self.get_id(), impl_id: self.get_id()})
- } else { none };
- let ident = alt ident {
- some(name) { name }
- none { self.expect_keyword(~"of"); fail; }
- };
- self.expect_keyword(~"for");
- let ty = self.parse_ty(false);
+
+ if self.eat_keyword(~"of") {
+ let for_atom = interner::intern(*self.reader.interner(),
+ @~"for");
+ traits = self.parse_trait_ref_list
+ (token::IDENT(for_atom, false));
+ if traits.len() >= 1 && option::is_none(ident_old) {
+ ident_old = some(vec::last(traits[0].path.idents));
+ }
+ if traits.len() == 0 {
+ self.fatal(~"BUG: 'of' but no trait");
+ }
+ if traits.len() > 1 {
+ self.fatal(~"BUG: multiple traits");
+ }
+ } else {
+ traits = ~[];
+ };
+ ident = alt ident_old {
+ some(name) { name }
+ none { self.expect_keyword(~"of"); fail; }
+ };
+ self.expect_keyword(~"for");
+ ty = self.parse_ty(false);
+ }
+
let mut meths = ~[];
self.expect(token::LBRACE);
while !self.eat(token::RBRACE) {
vec::push(meths, self.parse_method(public));
}
- (ident, item_impl(tps, ifce, ty, meths), none)
+ (ident, item_impl(tps, traits, ty, meths), none)
}
// Instantiates ident <i> with references to <typarams> as arguments.
ref_id: self.get_id(), impl_id: self.get_id()}
}
- fn parse_trait_ref_list() -> ~[@trait_ref] {
+ fn parse_trait_ref_list(ket: token::token) -> ~[@trait_ref] {
self.parse_seq_to_before_end(
- token::LBRACE, seq_sep_trailing_disallowed(token::COMMA),
+ ket, seq_sep_trailing_disallowed(token::COMMA),
|p| p.parse_trait_ref())
}
let ty_params = self.parse_ty_params();
let class_path = self.ident_to_path_tys(class_name, ty_params);
let traits : ~[@trait_ref] = if self.eat(token::COLON)
- { self.parse_trait_ref_list() }
+ { self.parse_trait_ref_list(token::LBRACE) }
else { ~[] };
self.expect(token::LBRACE);
let mut ms: ~[@class_member] = ~[];
alt the_ctor {
some((ct_d, ct_attrs, ct_b, ct_s)) {
(class_name,
- item_class(ty_params, traits, ms, {
+ item_class(ty_params, traits, ms, some({
node: {id: ctor_id,
attrs: ct_attrs,
self_id: self.get_id(),
dec: ct_d,
body: ct_b},
- span: ct_s}, actual_dtor),
+ span: ct_s}), actual_dtor),
none)
}
/*
Is it strange for the parser to check this?
*/
none {
- self.fatal(~"class with no constructor");
+ (class_name,
+ item_class(ty_params, traits, ms, none, actual_dtor),
+ none)
}
}
}
ast::ty_rptr(region, mt) {
alt region.node {
ast::re_anon { word(s.s, ~"&"); }
- _ { print_region(s, region); word(s.s, ~"."); }
+ _ { print_region(s, region); word(s.s, ~"/"); }
}
print_mt(s, mt);
}
bclose(s, item.span);
}
}
- ast::item_class(tps, traits, items, ctor, m_dtor) {
+ ast::item_class(tps, traits, items, m_ctor, m_dtor) {
head(s, ~"class");
word_nbsp(s, *item.ident);
print_type_params(s, tps);
}
bopen(s);
hardbreak_if_not_bol(s);
- maybe_print_comment(s, ctor.span.lo);
- print_outer_attributes(s, ctor.node.attrs);
- /* Doesn't call head because there shouldn't be a space after new */
- cbox(s, indent_unit);
- ibox(s, 4);
- word(s.s, ~"new(");
- print_fn_args(s, ctor.node.dec, ~[]);
- word(s.s, ~")");
- space(s.s);
- print_block(s, ctor.node.body);
+ do option::iter(m_ctor) |ctor| {
+ maybe_print_comment(s, ctor.span.lo);
+ print_outer_attributes(s, ctor.node.attrs);
+ // Doesn't call head because there shouldn't be a space after new.
+ cbox(s, indent_unit);
+ ibox(s, 4);
+ word(s.s, ~"new(");
+ print_fn_args(s, ctor.node.dec, ~[]);
+ word(s.s, ~")");
+ space(s.s);
+ print_block(s, ctor.node.body);
+ }
do option::iter(m_dtor) |dtor| {
hardbreak_if_not_bol(s);
maybe_print_comment(s, dtor.span.lo);
}
bclose(s, item.span);
}
- ast::item_impl(tps, ifce, ty, methods) {
+ ast::item_impl(tps, traits, ty, methods) {
head(s, ~"impl");
word(s.s, *item.ident);
print_type_params(s, tps);
space(s.s);
- option::iter(ifce, |p| {
+ if vec::len(traits) != 0u {
word_nbsp(s, ~"of");
- print_path(s, p.path, false);
+ do commasep(s, inconsistent, traits) |s, p| {
+ print_path(s, p.path, false);
+ }
space(s.s);
- });
+ }
word_nbsp(s, ~"for");
print_type(s, ty);
space(s.s);
}
fn print_expr(s: ps, &&expr: @ast::expr) {
+ fn print_field(s: ps, field: ast::field) {
+ ibox(s, indent_unit);
+ if field.node.mutbl == ast::m_mutbl { word_nbsp(s, ~"mut"); }
+ word(s.s, *field.node.ident);
+ word_space(s, ~":");
+ print_expr(s, field.node.expr);
+ end(s);
+ }
+ fn get_span(field: ast::field) -> codemap::span { ret field.span; }
+
maybe_print_comment(s, expr.span.lo);
ibox(s, indent_unit);
let ann_node = node_expr(s, expr);
end(s);
}
ast::expr_rec(fields, wth) {
- fn print_field(s: ps, field: ast::field) {
- ibox(s, indent_unit);
- if field.node.mutbl == ast::m_mutbl { word_nbsp(s, ~"mut"); }
- word(s.s, *field.node.ident);
- word_space(s, ~":");
- print_expr(s, field.node.expr);
- end(s);
- }
- fn get_span(field: ast::field) -> codemap::span { ret field.span; }
word(s.s, ~"{");
commasep_cmnt(s, consistent, fields, print_field, get_span);
alt wth {
}
word(s.s, ~"}");
}
+ ast::expr_struct(path, fields) {
+ print_path(s, path, true);
+ word(s.s, ~"{");
+ commasep_cmnt(s, consistent, fields, print_field, get_span);
+ word(s.s, ~",");
+ word(s.s, ~"}");
+ }
ast::expr_tup(exprs) {
popen(s);
commasep_exprs(s, inconsistent, exprs);
mod pipec;
mod proto;
mod check;
+ mod liveness;
}
}
for vr.node.args.each |va| { v.visit_ty(va.ty, e, v); }
}
}
- item_impl(tps, ifce, ty, methods) {
+ item_impl(tps, traits, ty, methods) {
v.visit_ty_params(tps, e, v);
- option::iter(ifce, |p| visit_path(p.path, e, v));
+ for traits.each |p| {
+ visit_path(p.path, e, v);
+ }
v.visit_ty(ty, e, v);
for methods.each |m| {
visit_method_helper(m, e, v)
}
}
- item_class(tps, traits, members, ctor, m_dtor) {
+ item_class(tps, traits, members, m_ctor, m_dtor) {
v.visit_ty_params(tps, e, v);
for members.each |m| {
v.visit_class_item(m, e, v);
}
for traits.each |p| { visit_path(p.path, e, v); }
- visit_class_ctor_helper(ctor, i.ident, tps,
- ast_util::local_def(i.id), e, v);
+ do option::iter(m_ctor) |ctor| {
+ visit_class_ctor_helper(ctor, i.ident, tps,
+ ast_util::local_def(i.id), e, v);
+ };
do option::iter(m_dtor) |dtor| {
- visit_class_dtor_helper(dtor, tps,
- ast_util::local_def(i.id), e, v)};
+ visit_class_dtor_helper(dtor, tps,
+ ast_util::local_def(i.id), e, v)
+ };
}
item_trait(tps, methods) {
v.visit_ty_params(tps, e, v);
for flds.each |f| { v.visit_expr(f.node.expr, e, v); }
visit_expr_opt(base, e, v);
}
+ expr_struct(p, flds) {
+ visit_path(p, e, v);
+ for flds.each |f| { v.visit_expr(f.node.expr, e, v); }
+ }
expr_tup(elts) { for elts.each |el| { v.visit_expr(el, e, v); } }
expr_call(callee, args, _) {
visit_exprs(args, e, v);
-Subproject commit 3a57b672f89adcb2d2d06adc564dc15ca4e276d6
+Subproject commit 5e5c8465696cbaa8197e391230a7b33cf99afafa
return box;
}
+rust_opaque_box *boxed_region::realloc(rust_opaque_box *box,
+ size_t new_size) {
+ assert(box->ref_count == 1);
+
+ size_t total_size = new_size + sizeof(rust_opaque_box);
+ rust_opaque_box *new_box =
+ (rust_opaque_box*)backing_region->realloc(box, total_size);
+ if (new_box->prev) new_box->prev->next = new_box;
+ if (new_box->next) new_box->next->prev = new_box;
+ if (live_allocs == box) live_allocs = new_box;
+
+ LOG(rust_get_current_task(), box,
+ "@realloc()=%p with orig=%p, size %lu==%lu+%lu",
+ new_box, box, total_size, sizeof(rust_opaque_box), new_size);
+
+ return new_box;
+}
+
+
rust_opaque_box *boxed_region::calloc(type_desc *td, size_t body_size) {
rust_opaque_box *box = malloc(td, body_size);
memset(box_body(box), 0, td->size);
rust_opaque_box *malloc(type_desc *td, size_t body_size);
rust_opaque_box *calloc(type_desc *td, size_t body_size);
+ rust_opaque_box *realloc(rust_opaque_box *box, size_t new_size);
void free(rust_opaque_box *box);
};
#endif
extern "C" CDECL void
-unsupervise() {
+vec_reserve_shared_actual(type_desc* ty, rust_vec_box** vp,
+ size_t n_elts) {
rust_task *task = rust_get_current_task();
- task->unsupervise();
+ reserve_vec_exact_shared(task, vp, n_elts * ty->size);
}
+// This is completely misnamed.
extern "C" CDECL void
vec_reserve_shared(type_desc* ty, rust_vec_box** vp,
size_t n_elts) {
rust_vec *v = (rust_vec *) task->kernel->malloc(vec_size<uint8_t>(size),
"rand_seed");
v->fill = v->alloc = size;
- isaac_seed(task->kernel, (uint8_t*) &v->data);
+ isaac_seed(task->kernel, (uint8_t*) &v->data, size);
return v;
}
return new_task_common(sched, task);
}
-extern "C" CDECL void
-rust_task_config_notify(rust_task *target, rust_port_id *port) {
- target->config_notify(*port);
-}
-
extern "C" rust_task *
rust_get_task() {
return rust_get_current_task();
// This is called by an intrinsic on the Rust stack and must run
// entirely in the red zone. Do not call on the C stack.
-extern "C" CDECL void
+extern "C" CDECL MUST_CHECK bool
rust_task_yield(rust_task *task, bool *killed) {
- task->yield(killed);
+ return task->yield();
}
extern "C" CDECL void
}
extern "C" void
-rust_task_inhibit_kill() {
- rust_task *task = rust_get_current_task();
+rust_task_inhibit_kill(rust_task *task) {
task->inhibit_kill();
}
extern "C" void
-rust_task_allow_kill() {
- rust_task *task = rust_get_current_task();
+rust_task_allow_kill(rust_task *task) {
task->allow_kill();
}
+extern "C" void
+rust_task_inhibit_yield(rust_task *task) {
+ task->inhibit_yield();
+}
+
+extern "C" void
+rust_task_allow_yield(rust_task *task) {
+ task->allow_yield();
+}
+
extern "C" void
rust_task_kill_other(rust_task *task) { /* Used for linked failure */
task->kill();
main_taskgroup_failed = true;
}
+extern "C" CDECL
+bool rust_task_is_unwinding(rust_task *rt) {
+ return rt->unwinding;
+}
+
extern "C" rust_cond_lock*
rust_create_cond_lock() {
return new rust_cond_lock();
lock->waiting = task;
task->block(lock, "waiting for signal");
lock->lock.unlock();
- bool killed = false;
- task->yield(&killed);
+ bool killed = task->yield();
+ assert(!killed && "unimplemented");
lock->lock.lock();
}
// Waits on an event, returning the pointer to the event that unblocked this
// task.
-extern "C" void *
-task_wait_event(rust_task *task, bool *killed) {
- // FIXME #2890: we should assert that the passed in task is the currently
+extern "C" MUST_CHECK bool
+task_wait_event(rust_task *task, void **result) {
+ // Maybe (if not too slow) assert that the passed-in task is the currently
// running task. We wouldn't want to wait some other task.
- return task->wait_event(killed);
+ return task->wait_event(result);
}
extern "C" void
} \
}
+#define MUST_CHECK __attribute__((warn_unused_result))
+
#define PTR "0x%" PRIxPTR
// This accounts for logging buffers.
max_port_id(1),
rval(0),
max_sched_id(1),
+ killed(false),
sched_reaper(this),
osmain_driver(NULL),
non_weak_tasks(0),
id = max_sched_id++;
assert(id != INTPTR_MAX && "Hit the maximum scheduler id");
sched = new (this, "rust_scheduler")
- rust_scheduler(this, num_threads, id, allow_exit, launchfac);
+ rust_scheduler(this, num_threads, id, allow_exit, killed,
+ launchfac);
bool is_new = sched_table
.insert(std::pair<rust_sched_id,
rust_scheduler*>(id, sched)).second;
rust_scheduler *sched = iter->second;
sched_table.erase(iter);
sched->join_task_threads();
- delete sched;
+ sched->deref();
if (sched_table.size() == 1) {
KLOG_("Allowing osmain scheduler to exit");
// It's only the osmain scheduler left. Tell it to exit
void
rust_kernel::fail() {
- // FIXME (#2671): On windows we're getting "Application has
+ // FIXME (#908): On windows we're getting "Application has
// requested the Runtime to terminate it in an unusual way" when
// trying to shutdown cleanly.
set_exit_status(PROC_FAIL_CODE);
#if defined(__WIN32__)
exit(rval);
#endif
+ // I think this only needs to be done by one task ever; as it is,
+ // multiple tasks invoking kill_all might get here. Currently libcore
+ // ensures only one task will ever invoke it, but this would really be
+ // fine either way, so I'm leaving it as it is. -- bblum
+
// Copy the list of schedulers so that we don't hold the lock while
- // running kill_all_tasks.
- // FIXME (#2671): There's a lot that happens under kill_all_tasks,
- // and I don't know that holding sched_lock here is ok, but we need
- // to hold the sched lock to prevent the scheduler from being
- // destroyed while we are using it. Probably we need to make
- // rust_scheduler atomicly reference counted.
+ // running kill_all_tasks. Refcount to ensure they stay alive.
std::vector<rust_scheduler*> scheds;
{
scoped_lock with(sched_lock);
+ // All schedulers created after this flag is set will be doomed.
+ killed = true;
for (sched_map::iterator iter = sched_table.begin();
iter != sched_table.end(); iter++) {
+ iter->second->ref();
scheds.push_back(iter->second);
}
}
- // FIXME (#2671): This is not a foolproof way to kill all tasks
- // while ensuring that no new tasks or schedulers are created in the
- // meantime that keep the scheduler alive.
for (std::vector<rust_scheduler*>::iterator iter = scheds.begin();
iter != scheds.end(); iter++) {
(*iter)->kill_all_tasks();
+ (*iter)->deref();
}
}
lock_and_signal rval_lock;
int rval;
- // Protects max_sched_id and sched_table, join_list
+ // Protects max_sched_id and sched_table, join_list, killed
lock_and_signal sched_lock;
// The next scheduler id
rust_sched_id max_sched_id;
sched_map sched_table;
// A list of scheduler ids that are ready to exit
std::vector<rust_sched_id> join_list;
+ // Whether or not the runtime has to die (triggered when the root/main
+ // task group fails). This propagates to all new schedulers and tasks
+ // created after it is set.
+ bool killed;
rust_sched_reaper sched_reaper;
// The single-threaded scheduler that uses the main thread
const size_t SCHED_STACK_SIZE = 1024*100;
-rust_sched_launcher::rust_sched_launcher(rust_scheduler *sched, int id)
+rust_sched_launcher::rust_sched_launcher(rust_scheduler *sched, int id,
+ bool killed)
: kernel(sched->kernel),
- sched_loop(sched, id),
+ sched_loop(sched, id, killed),
driver(&sched_loop) {
}
rust_thread_sched_launcher::rust_thread_sched_launcher(rust_scheduler *sched,
- int id)
- : rust_sched_launcher(sched, id),
+ int id, bool killed)
+ : rust_sched_launcher(sched, id, killed),
rust_thread(SCHED_STACK_SIZE) {
}
rust_manual_sched_launcher::rust_manual_sched_launcher(rust_scheduler *sched,
- int id)
- : rust_sched_launcher(sched, id) {
+ int id, bool killed)
+ : rust_sched_launcher(sched, id, killed) {
}
rust_sched_launcher *
-rust_thread_sched_launcher_factory::create(rust_scheduler *sched, int id) {
+rust_thread_sched_launcher_factory::create(rust_scheduler *sched, int id,
+ bool killed) {
return new(sched->kernel, "rust_thread_sched_launcher")
- rust_thread_sched_launcher(sched, id);
+ rust_thread_sched_launcher(sched, id, killed);
}
rust_sched_launcher *
-rust_manual_sched_launcher_factory::create(rust_scheduler *sched, int id) {
+rust_manual_sched_launcher_factory::create(rust_scheduler *sched, int id,
+ bool killed) {
assert(launcher == NULL && "I can only track one sched_launcher");
launcher = new(sched->kernel, "rust_manual_sched_launcher")
- rust_manual_sched_launcher(sched, id);
+ rust_manual_sched_launcher(sched, id, killed);
return launcher;
}
rust_sched_driver driver;
public:
- rust_sched_launcher(rust_scheduler *sched, int id);
+ rust_sched_launcher(rust_scheduler *sched, int id, bool killed);
virtual ~rust_sched_launcher() { }
virtual void start() = 0;
:public rust_sched_launcher,
private rust_thread {
public:
- rust_thread_sched_launcher(rust_scheduler *sched, int id);
+ rust_thread_sched_launcher(rust_scheduler *sched, int id, bool killed);
virtual void start() { rust_thread::start(); }
virtual void join() { rust_thread::join(); }
virtual void run() { driver.start_main_loop(); }
class rust_manual_sched_launcher : public rust_sched_launcher {
public:
- rust_manual_sched_launcher(rust_scheduler *sched, int id);
+ rust_manual_sched_launcher(rust_scheduler *sched, int id, bool killed);
virtual void start() { }
virtual void join() { }
rust_sched_driver *get_driver() { return &driver; };
public:
virtual ~rust_sched_launcher_factory() { }
virtual rust_sched_launcher *
- create(rust_scheduler *sched, int id) = 0;
+ create(rust_scheduler *sched, int id, bool killed) = 0;
};
class rust_thread_sched_launcher_factory
: public rust_sched_launcher_factory {
public:
- virtual rust_sched_launcher *create(rust_scheduler *sched, int id);
+ virtual rust_sched_launcher *create(rust_scheduler *sched, int id,
+ bool killed);
};
class rust_manual_sched_launcher_factory
rust_manual_sched_launcher *launcher;
public:
rust_manual_sched_launcher_factory() : launcher(NULL) { }
- virtual rust_sched_launcher *create(rust_scheduler *sched, int id);
+ virtual rust_sched_launcher *create(rust_scheduler *sched, int id,
+ bool killed);
rust_sched_driver *get_driver() {
assert(launcher != NULL);
return launcher->get_driver();
bool rust_sched_loop::tls_initialized = false;
-rust_sched_loop::rust_sched_loop(rust_scheduler *sched,int id) :
+rust_sched_loop::rust_sched_loop(rust_scheduler *sched, int id, bool killed) :
_log(this),
id(id),
should_exit(false),
cached_c_stack(NULL),
dead_task(NULL),
+ killed(killed),
pump_signal(NULL),
kernel(sched->kernel),
sched(sched),
{
scoped_lock with(lock);
+ // Any task created after this will be killed. See transition, below.
+ killed = true;
for (size_t i = 0; i < running_tasks.length(); i++) {
- all_tasks.push_back(running_tasks[i]);
+ rust_task *t = running_tasks[i];
+ t->ref();
+ all_tasks.push_back(t);
}
for (size_t i = 0; i < blocked_tasks.length(); i++) {
- all_tasks.push_back(blocked_tasks[i]);
+ rust_task *t = blocked_tasks[i];
+ t->ref();
+ all_tasks.push_back(t);
}
}
while (!all_tasks.empty()) {
rust_task *task = all_tasks.back();
all_tasks.pop_back();
- // We don't want the failure of these tasks to propagate back
- // to the kernel again since we're already failing everything
- task->unsupervise();
task->kill();
+ task->deref();
}
}
rust_task *task =
new (this->kernel, "rust_task")
rust_task(this, task_state_newborn,
- spawner, name, kernel->env->min_stack_size);
+ name, kernel->env->min_stack_size);
DLOG(this, task, "created task: " PTR ", spawner: %s, name: %s",
- task, spawner ? spawner->name : "null", name);
+ task, spawner ? spawner->name : "(none)", name);
task->id = kernel->generate_task_id();
return task;
}
task->set_state(dst, cond, cond_name);
+ // If the entire runtime is failing, newborn tasks must be doomed.
+ if (src == task_state_newborn && killed) {
+ task->kill_inner();
+ }
+
pump_loop();
}
rust_task_list running_tasks;
rust_task_list blocked_tasks;
rust_task *dead_task;
+ bool killed;
rust_signal *pump_signal;
// Only a pointer to 'name' is kept, so it must live as long as this
// domain.
- rust_sched_loop(rust_scheduler *sched, int id);
+ rust_sched_loop(rust_scheduler *sched, int id, bool killed);
void activate(rust_task *task);
rust_log & get_log();
void fail();
void log_state();
void kill_all_tasks();
+ bool doomed();
rust_task *create_task(rust_task *spawner, const char *name);
size_t num_threads,
rust_sched_id id,
bool allow_exit,
+ bool killed,
rust_sched_launcher_factory *launchfac) :
+ ref_count(1),
kernel(kernel),
live_threads(num_threads),
live_tasks(0),
num_threads(num_threads),
id(id)
{
- create_task_threads(launchfac);
+ create_task_threads(launchfac, killed);
}
-rust_scheduler::~rust_scheduler() {
+void rust_scheduler::delete_this() {
destroy_task_threads();
+ delete this;
}
rust_sched_launcher *
rust_scheduler::create_task_thread(rust_sched_launcher_factory *launchfac,
- int id) {
- rust_sched_launcher *thread = launchfac->create(this, id);
+ int id, bool killed) {
+ rust_sched_launcher *thread = launchfac->create(this, id, killed);
KLOG(kernel, kern, "created task thread: " PTR ", id: %d",
thread, id);
return thread;
}
void
-rust_scheduler::create_task_threads(rust_sched_launcher_factory *launchfac) {
+rust_scheduler::create_task_threads(rust_sched_launcher_factory *launchfac,
+ bool killed) {
KLOG(kernel, kern, "Using %d scheduler threads.", num_threads);
for(size_t i = 0; i < num_threads; ++i) {
- threads.push(create_task_thread(launchfac, i));
+ threads.push(create_task_thread(launchfac, i, killed));
}
}
#include "rust_globals.h"
#include "util/array_list.h"
#include "rust_kernel.h"
+#include "rust_refcount.h"
class rust_sched_launcher;
class rust_sched_launcher_factory;
class rust_scheduler : public kernel_owned<rust_scheduler> {
+ RUST_ATOMIC_REFCOUNT();
// FIXME (#2693): Make these private
public:
rust_kernel *kernel;
rust_sched_id id;
- void create_task_threads(rust_sched_launcher_factory *launchfac);
+ void create_task_threads(rust_sched_launcher_factory *launchfac,
+ bool killed);
void destroy_task_threads();
rust_sched_launcher *
- create_task_thread(rust_sched_launcher_factory *launchfac, int id);
+ create_task_thread(rust_sched_launcher_factory *launchfac, int id,
+ bool killed);
void destroy_task_thread(rust_sched_launcher *thread);
void exit();
+ // Called when refcount reaches zero
+ void delete_this();
+
public:
rust_scheduler(rust_kernel *kernel, size_t num_threads,
- rust_sched_id id, bool allow_exit,
+ rust_sched_id id, bool allow_exit, bool killed,
rust_sched_launcher_factory *launchfac);
- ~rust_scheduler();
void start_task_threads();
void join_task_threads();
#include "rust_env.h"
#include "rust_port.h"
-// FIXME (#1789) (bblum): get rid of supervisors
-
// Tasks
rust_task::rust_task(rust_sched_loop *sched_loop, rust_task_state state,
- rust_task *spawner, const char *name,
- size_t init_stack_sz) :
+ const char *name, size_t init_stack_sz) :
ref_count(1),
id(0),
- notify_enabled(false),
stk(NULL),
runtime_sp(0),
sched(sched_loop->sched),
local_region(&sched_loop->local_region),
boxed(sched_loop->kernel->env, &local_region),
unwinding(false),
- propagate_failure(true),
cc_counter(0),
total_stack_sz(0),
task_local_data(NULL),
killed(false),
reentered_rust_stack(false),
disallow_kill(0),
+ disallow_yield(0),
c_stack(NULL),
next_c_sp(0),
- next_rust_sp(0),
- supervisor(spawner)
+ next_rust_sp(0)
{
LOGPTR(sched_loop, "new task", (uintptr_t)this);
DLOG(sched_loop, task, "sizeof(task) = %d (0x%x)",
sizeof *this, sizeof *this);
new_stack(init_stack_sz);
- if (supervisor) {
- supervisor->ref();
- }
}
// NB: This does not always run on the task's scheduler thread
DLOG(sched_loop, task, "~rust_task %s @0x%" PRIxPTR ", refcnt=%d",
name, (uintptr_t)this, ref_count);
- // FIXME (#2677): We should do this when the task exits, not in the
- // destructor
- {
- scoped_lock with(supervisor_lock);
- if (supervisor) {
- supervisor->deref();
- }
- }
-
/* FIXME (#2677): tighten this up, there are some more
assertions that hold at task-lifecycle events. */
assert(ref_count == 0); // ||
task->die();
- task->notify(!threw_exception);
-
#ifdef __WIN32__
assert(!threw_exception && "No exception-handling yet on windows builds");
#endif
return killed && !reentered_rust_stack && disallow_kill == 0;
}
+void rust_task_yield_fail(rust_task *task) {
+ LOG_ERR(task, task, "task %" PRIxPTR " yielded in an atomic section",
+ task);
+ task->fail();
+}
+
// Only run this on the rust stack
-void
-rust_task::yield(bool *killed) {
- // FIXME (#2875): clean this up
+MUST_CHECK bool rust_task::yield() {
+ bool killed = false;
+
+ if (disallow_yield > 0) {
+ call_on_c_stack(this, (void *)rust_task_yield_fail);
+ }
+
+ // This check is largely superfluous; it's the one after the context swap
+ // that really matters. This one allows us to assert a useful invariant.
if (must_fail_from_being_killed()) {
{
scoped_lock with(lifecycle_lock);
assert(!(state == task_state_blocked));
}
- *killed = true;
+ killed = true;
}
// Return to the scheduler.
ctx.next->swap(ctx);
if (must_fail_from_being_killed()) {
- *killed = true;
+ killed = true;
}
+ return killed;
}
void
rust_task::kill() {
scoped_lock with(lifecycle_lock);
+ kill_inner();
+}
+
+void rust_task::kill_inner() {
+ lifecycle_lock.must_have_lock();
- // XXX: bblum: kill/kill race
+ // Multiple kills should be able to safely race, but check anyway.
+ if (killed) {
+ LOG(this, task, "task %s @0x%" PRIxPTR " already killed", name, this);
+ return;
+ }
// Note the distinction here: kill() is when you're in an upcall
// from task A and want to force-fail task B, you do B->kill().
LOG(this, task, "preparing to unwind task: 0x%" PRIxPTR, this);
}
-// (bblum): Move this to rust_builtin.cpp (cleanup)
-extern "C" CDECL
-bool rust_task_is_unwinding(rust_task *rt) {
- return rt->unwinding;
-}
-
void
rust_task::fail() {
// See note in ::kill() regarding who should call this.
sched_loop->fail();
}
-void
-rust_task::unsupervise()
-{
- scoped_lock with(supervisor_lock);
- if (supervisor) {
- DLOG(sched_loop, task,
- "task %s @0x%" PRIxPTR
- " disconnecting from supervisor %s @0x%" PRIxPTR,
- name, this, supervisor->name, supervisor);
- supervisor->deref();
- }
- supervisor = NULL;
- propagate_failure = false;
-}
-
frame_glue_fns*
rust_task::get_frame_glue_fns(uintptr_t fp) {
fp -= sizeof(uintptr_t);
return local_region.calloc(size, tag);
}
-void
-rust_task::notify(bool success) {
- // FIXME (#1078) Do this in rust code
- if(notify_enabled) {
- rust_port *target_port =
- kernel->get_port_by_id(notify_port);
- if(target_port) {
- task_notification msg;
- msg.id = id;
- msg.result = !success ? tr_failure : tr_success;
-
- target_port->send(&msg);
- target_port->deref();
- }
- }
-}
-
size_t
rust_task::get_next_stack_size(size_t min, size_t current, size_t requested) {
LOG(this, mem, "calculating new stack size for 0x%" PRIxPTR, this);
return (uintptr_t)stk->data <= sp && sp <= stk->end;
}
-struct reset_args {
- rust_task *task;
- uintptr_t sp;
-};
-
-void
-reset_stack_limit_on_c_stack(reset_args *args) {
-}
-
/*
Called by landing pads during unwinding to figure out which stack segment we
are currently running on and record the stack limit (which was not restored
}
}
-void
-rust_task::config_notify(rust_port_id port) {
- notify_enabled = true;
- notify_port = port;
-}
-
/*
Returns true if we're currently running on the Rust stack
*/
disallow_kill--;
}
-void *
-rust_task::wait_event(bool *killed) {
+void rust_task::inhibit_yield() {
+ scoped_lock with(lifecycle_lock);
+ disallow_yield++;
+}
+
+void rust_task::allow_yield() {
+ scoped_lock with(lifecycle_lock);
+ assert(disallow_yield > 0 && "Illegal allow_yield(): already yieldable!");
+ disallow_yield--;
+}
+
+MUST_CHECK bool rust_task::wait_event(void **result) {
+ bool killed = false;
scoped_lock with(lifecycle_lock);
if(!event_reject) {
block_inner(&event_cond, "waiting on event");
lifecycle_lock.unlock();
- yield(killed);
+ killed = yield();
lifecycle_lock.lock();
+ } else if (must_fail_from_being_killed_inner()) {
+ // If the deschedule was rejected, yield won't do our killed check for
+ // us. For thoroughness, do it here. FIXME (#524)
+ killed = true;
}
event_reject = false;
- return event;
+ *result = event;
+ return killed;
}
void
RUST_ATOMIC_REFCOUNT();
rust_task_id id;
- bool notify_enabled;
- rust_port_id notify_port;
context ctx;
stk_seg *stk;
// Indicates that we've called back into Rust from C
bool reentered_rust_stack;
unsigned long disallow_kill;
+ unsigned long disallow_yield;
// The stack used for running C code, borrowed from the scheduler thread
stk_seg *c_stack;
rust_port_selector port_selector;
- lock_and_signal supervisor_lock;
- rust_task *supervisor; // Parent-link for failure propagation.
-
// Called when the atomic refcount reaches zero
void delete_this();
// Only a pointer to 'name' is kept, so it must live as long as this task.
rust_task(rust_sched_loop *sched_loop,
rust_task_state state,
- rust_task *spawner,
const char *name,
size_t init_stack_sz);
void backtrace();
// Yields control to the scheduler. Called from the Rust stack
- void yield(bool *killed);
+ // Returns TRUE if the task was killed and needs to fail.
+ MUST_CHECK bool yield();
// Fail this task (assuming caller-on-stack is different task).
void kill();
+ void kill_inner();
// Indicates that we've been killed and now is an appropriate
// time to fail as a result
void fail(char const *expr, char const *file, size_t line);
// Propagate failure to the entire rust runtime.
- // FIXME (#1868) (bblum): maybe this can be done at rust-level?
void fail_sched_loop();
- // Disconnect from our supervisor.
- void unsupervise();
-
frame_glue_fns *get_frame_glue_fns(uintptr_t fp);
void *calloc(size_t size, const char *tag);
// not at all safe.
intptr_t get_ref_count() const { return ref_count; }
- void notify(bool success);
-
void *next_stack(size_t stk_sz, void *args_addr, size_t args_sz);
void prev_stack();
void record_stack_limit();
void check_stack_canary();
void delete_all_stacks();
- void config_notify(rust_port_id port);
-
void call_on_c_stack(void *args, void *fn_ptr);
void call_on_rust_stack(void *args, void *fn_ptr);
bool have_c_stack() { return c_stack != NULL; }
this->event_reject = false;
}
- void *wait_event(bool *killed);
+ // Returns TRUE if the task was killed and needs to fail.
+ MUST_CHECK bool wait_event(void **result);
void signal_event(void *event);
void cleanup_after_turn();
void inhibit_kill();
void allow_kill();
+ void inhibit_yield();
+ void allow_yield();
};
// FIXME (#2697): It would be really nice to be able to get rid of this.
bool had_reentered_rust_stack = reentered_rust_stack;
{
- // FIXME (#2875) This must be racy. Figure it out.
+ // FIXME (#1868) This must be racy. Figure it out.
scoped_lock with(lifecycle_lock);
reentered_rust_stack = true;
}
UPCALL_SWITCH_STACK(task, &args, upcall_s_fail);
}
+// FIXME (#2861): Alias used by libcore/rt.rs to avoid naming conflicts with
+// autogenerated wrappers for upcall_fail. Remove this when we fully move
+// away from the C upcall path.
+extern "C" CDECL void
+rust_upcall_fail(char const *expr,
+ char const *file,
+ size_t line) {
+ upcall_fail(expr, file, line);
+}
+
struct s_trace_args {
rust_task *task;
char const *msg;
return args.retval;
}
+// FIXME (#2861): Alias used by libcore/rt.rs to avoid naming conflicts with
+// autogenerated wrappers for upcall_exchange_malloc. Remove this when we
+// fully move away from the C upcall path.
+extern "C" CDECL uintptr_t
+rust_upcall_exchange_malloc(type_desc *td, uintptr_t size) {
+ return upcall_exchange_malloc(td, size);
+}
+
struct s_exchange_free_args {
rust_task *task;
void *ptr;
UPCALL_SWITCH_STACK(task, &args, upcall_s_exchange_free);
}
+// FIXME (#2861): Alias used by libcore/rt.rs to avoid naming conflicts with
+// autogenerated wrappers for upcall_exchange_free. Remove this when we fully
+// move away from the C upcall path.
+extern "C" CDECL void
+rust_upcall_exchange_free(void *ptr) {
+ return upcall_exchange_free(ptr);
+}
+
/**********************************************************************
* Allocate an object in the task-local heap.
*/
return args.retval;
}
+// FIXME (#2861): Alias used by libcore/rt.rs to avoid naming conflicts with
+// autogenerated wrappers for upcall_malloc. Remove this when we fully move
+// away from the C upcall path.
+extern "C" CDECL uintptr_t
+rust_upcall_malloc(type_desc *td, uintptr_t size) {
+ return upcall_malloc(td, size);
+}
+
/**********************************************************************
* Called whenever an object in the task-local heap is freed.
*/
UPCALL_SWITCH_STACK(task, &args, upcall_s_free);
}
+// FIXME (#2861): Alias used by libcore/rt.rs to avoid naming conflicts with
+// autogenerated wrappers for upcall_free. Remove this when we fully move
+// away from the C upcall path.
+extern "C" CDECL void
+rust_upcall_free(void* ptr) {
+ upcall_free(ptr);
+}
+
/**********************************************************************
* Sanity checks on boxes, insert when debugging possible
* use-after-free bugs. See maybe_validate_box() in trans.rs.
return reinterpret_cast<T*>(v->data);
}
+inline void reserve_vec_exact_shared(rust_task* task, rust_vec_box** vpp,
+ size_t size) {
+ rust_opaque_box** ovpp = (rust_opaque_box**)vpp;
+ if (size > (*vpp)->body.alloc) {
+ *vpp = (rust_vec_box*)task->boxed.realloc(
+ *ovpp, size + sizeof(rust_vec));
+ (*vpp)->body.alloc = size;
+ }
+}
+
inline void reserve_vec_exact(rust_task* task, rust_vec_box** vpp,
size_t size) {
if (size > (*vpp)->body.alloc) {
// Initialization helpers for ISAAC RNG
-inline void isaac_seed(rust_kernel* kernel, uint8_t* dest)
+inline void isaac_seed(rust_kernel* kernel, uint8_t* dest, size_t size)
{
- size_t size = sizeof(ub4) * RANDSIZ;
#ifdef __WIN32__
HCRYPTPROV hProv;
kernel->win32_require
#else
int fd = open("/dev/urandom", O_RDONLY);
assert(fd > 0);
- assert(read(fd, dest, size) == (int) size);
- assert(close(fd) == 0);
+ size_t amount = 0;
+ do {
+ ssize_t ret = read(fd, dest+amount, size-amount);
+ assert(ret >= 0);
+ amount += (size_t)ret;
+ } while (amount < size);
+ int ret = close(fd);
+ assert(ret == 0);
#endif
}
seed = (seed + 0x7ed55d16) + (seed << 12);
}
} else {
- isaac_seed(kernel, (uint8_t*) &rctx->randrsl);
+ isaac_seed(kernel, (uint8_t*) &rctx->randrsl, sizeof(rctx->randrsl));
}
randinit(rctx, 1);
rust_task_yield
rust_task_is_unwinding
rust_get_task
-rust_task_config_notify
rust_task_weaken
rust_task_unweaken
sched_threads
shape_log_str
start_task
+vec_reserve_shared_actual
vec_reserve_shared
str_reserve_shared
vec_from_buf_shared
task_clear_event_reject
task_wait_event
task_signal_event
-unsupervise
upcall_cmp_type
upcall_fail
upcall_trace
upcall_reset_stack_limit
upcall_exchange_malloc
upcall_exchange_free
+rust_upcall_exchange_free
+rust_upcall_exchange_malloc
+rust_upcall_fail
+rust_upcall_free
+rust_upcall_malloc
rust_uv_loop_new
rust_uv_loop_delete
rust_uv_loop_refcount
rust_port_task
rust_task_inhibit_kill
rust_task_allow_kill
+rust_task_inhibit_yield
+rust_task_allow_yield
rust_task_kill_other
rust_task_kill_all
rust_create_cond_lock
-import libc::{c_int, c_uint};
+import libc::{c_int, c_uint, c_char};
import driver::session;
import session::session;
import lib::llvm::llvm;
} else { sess.fatal(msg + ~": " + str::unsafe::from_c_str(cstr)); }
}
+fn WriteOutputFile(sess:session,
+ PM: lib::llvm::PassManagerRef, M: ModuleRef,
+ Triple: *c_char,
+ // FIXME: When #2334 is fixed, change
+ // c_uint to FileType
+ Output: *c_char, FileType: c_uint,
+ OptLevel: c_int,
+ EnableSegmentedStacks: bool) {
+ let result = llvm::LLVMRustWriteOutputFile(
+ PM, M, Triple, Output, FileType, OptLevel, EnableSegmentedStacks);
+ if (!result) {
+ llvm_err(sess, ~"Could not write output");
+ }
+}
+
mod write {
fn is_object_or_assembly_or_exe(ot: output_type) -> bool {
if ot == output_type_assembly || ot == output_type_object ||
sess.targ_cfg.target_strs.target_triple,
|buf_t| {
str::as_c_str(output, |buf_o| {
- llvm::LLVMRustWriteOutputFile(
+ WriteOutputFile(
+ sess,
pm.llpm,
llmod,
buf_t,
sess.targ_cfg.target_strs.target_triple,
|buf_t| {
str::as_c_str(output, |buf_o| {
- llvm::LLVMRustWriteOutputFile(
+ WriteOutputFile(
+ sess,
pm.llpm,
llmod,
buf_t,
sess.targ_cfg.target_strs.target_triple,
|buf_t| {
str::as_c_str(output, |buf_o| {
- llvm::LLVMRustWriteOutputFile(
+ WriteOutputFile(
+ sess,
pm.llpm,
llmod,
buf_t,
import syntax::parse;
import syntax::{ast, codemap};
import syntax::attr;
-import middle::{trans, resolve, freevars, kind, ty, typeck, lint};
+import middle::{trans, freevars, kind, ty, typeck, lint};
import syntax::print::{pp, pprust};
import util::ppaux;
import back::link;
// If the user wants a test runner, then add the test cfg
let gen_cfg =
{
- if sess.opts.test && !attr::contains_name(user_cfg, ~"test")
- {
+ if sess.opts.test && !attr::contains_name(user_cfg, ~"test") {
~[attr::mk_word_item(@~"test")]
- } else { ~[] }
+ } else {
+ ~[attr::mk_word_item(@~"notest")]
+ }
};
ret vec::append(vec::append(user_cfg, gen_cfg), default_cfg);
}
const borrowck_stats: uint = 1024u;
const borrowck_note_pure: uint = 2048;
const borrowck_note_loan: uint = 4096;
+const no_landing_pads: uint = 8192;
fn debugging_opts_map() -> ~[(~str, ~str, uint)] {
~[(~"ppregions", ~"prettyprint regions with \
(~"borrowck-note-pure", ~"note where purity is req'd",
borrowck_note_pure),
(~"borrowck-note-loan", ~"note where loans are req'd",
- borrowck_note_loan)
+ borrowck_note_loan),
+ (~"no-landing-pads", ~"omit landing pads for unwinding",
+ no_landing_pads)
]
}
// c_uint to FileType
Output: *c_char, FileType: c_uint,
OptLevel: c_int,
- EnableSegmentedStacks: bool);
+ EnableSegmentedStacks: bool) -> bool;
/** Returns a string describing the last error caused by an LLVMRust*
call. */
fn LLVMConstNamedStruct(S: TypeRef, ConstantVals: *ValueRef,
Count: c_uint) -> ValueRef;
-
- /** Links LLVM modules together. `Src` is destroyed by this call and
- must never be referenced again. */
- fn LLVMLinkModules(Dest: ModuleRef, Src: ModuleRef) -> Bool;
}
fn SetInstructionCallConv(Instr: ValueRef, CC: CallConv) {
export get_method_names_if_trait;
export each_path;
export get_type;
-export get_impl_trait;
+export get_impl_traits;
export get_impl_method;
export get_item_path;
export maybe_get_item_ast, found_ast, found, found_parent, not_found;
ret {bounds: @~[], rp: false, ty: ty};
}
-// Given a def_id for an impl or class, return the trait it implements,
-// or none if it's not for an impl or for a class that implements traits
-fn get_impl_trait(tcx: ty::ctxt, def: ast::def_id) -> option<ty::t> {
+// Given a def_id for an impl or class, return the traits it implements,
+// or the empty vector if it's not for an impl or for a class that implements
+// traits
+fn get_impl_traits(tcx: ty::ctxt, def: ast::def_id) -> ~[ty::t] {
let cstore = tcx.cstore;
let cdata = cstore::get_crate_data(cstore, def.crate);
- decoder::get_impl_trait(cdata, def.node, tcx)
+ decoder::get_impl_traits(cdata, def.node, tcx)
}
fn get_impl_method(cstore: cstore::cstore,
export get_type;
export get_region_param;
export get_type_param_count;
-export get_impl_trait;
+export get_impl_traits;
export get_class_method;
export get_impl_method;
export lookup_def;
} else { t }
}
-fn item_impl_trait(item: ebml::doc, tcx: ty::ctxt, cdata: cmd)
- -> option<ty::t> {
- let mut result = none;
+fn item_impl_traits(item: ebml::doc, tcx: ty::ctxt, cdata: cmd) -> ~[ty::t] {
+ let mut results = ~[];
do ebml::tagged_docs(item, tag_impl_trait) |ity| {
- result = some(doc_type(ity, tcx, cdata));
+ vec::push(results, doc_type(ity, tcx, cdata));
};
- result
+ results
}
fn item_ty_param_bounds(item: ebml::doc, tcx: ty::ctxt, cdata: cmd)
item_ty_param_count(lookup_item(id, data))
}
-fn get_impl_trait(cdata: cmd, id: ast::node_id, tcx: ty::ctxt)
- -> option<ty::t> {
- item_impl_trait(lookup_item(id, cdata.data), tcx, cdata)
+fn get_impl_traits(cdata: cmd, id: ast::node_id, tcx: ty::ctxt) -> ~[ty::t] {
+ item_impl_traits(lookup_item(id, cdata.data), tcx, cdata)
}
fn get_impl_method(cdata: cmd, id: ast::node_id,
do ebml::tagged_docs(mod_item, tag_mod_impl) |doc| {
let did = ebml::with_doc_data(doc, |d| parse_def_id(d));
let local_did = translate_def_id(cdata, did);
+ #debug("(get impls for mod) getting did %? for '%?'",
+ local_did, name);
// The impl may be defined in a different crate. Ask the caller
// to give us the metadata
let impl_cdata = get_cdata(local_did.crate);
encode_name_and_def_id(ebml_w, it.ident, it.id);
}
}
- item_class(_, _, items, ctor, m_dtor) {
+ item_class(_, _, items, m_ctor, m_dtor) {
do ebml_w.wr_tag(tag_paths_data_item) {
encode_name_and_def_id(ebml_w, it.ident, it.id);
}
// We add the same ident twice: for the
// class and for its ctor
add_to_index(ebml_w, path, index, it.ident);
- encode_named_def_id(ebml_w, it.ident,
- local_def(ctor.node.id));
+
+ alt m_ctor {
+ none => {
+ // Nothing to do.
+ }
+          some(ctor) => {
+ encode_named_def_id(ebml_w, it.ident,
+ local_def(ctor.node.id));
+ }
+ }
+
encode_class_item_paths(ebml_w, items,
vec::append_one(path, it.ident),
index);
let impls = ecx.impl_map(id);
for impls.each |i| {
let (ident, did) = i;
- #debug("(encoding info for module) ... encoding impl %s (%?), \
+ #debug("(encoding info for module) ... encoding impl %s (%?/%?), \
exported? %?",
- *ident, did, ast_util::is_exported(ident, md));
- if ast_util::is_exported(ident, md) {
- ebml_w.start_tag(tag_mod_impl);
- alt ecx.tcx.items.find(did.node) {
- some(ast_map::node_item(it@@{node: cl@item_class(*),_},_)) {
- /* If did stands for a trait
- ref, we need to map it to its parent class */
- ebml_w.wr_str(def_to_str(local_def(it.id)));
- }
- some(ast_map::node_item(@{node: item_impl(_,
- some(ifce),_,_),_},_)) {
- ebml_w.wr_str(def_to_str(did));
- }
- some(_) {
- ebml_w.wr_str(def_to_str(did));
- }
- none {
- // Must be a re-export, then!
- // ...or an iface ref
- ebml_w.wr_str(def_to_str(did));
- }
- };
- ebml_w.end_tag();
- } // if
+ *ident,
+ did,
+ ast_map::node_id_to_str(ecx.tcx.items, did.node),
+ ast_util::is_exported(ident, md));
+
+ ebml_w.start_tag(tag_mod_impl);
+ alt ecx.tcx.items.find(did.node) {
+ some(ast_map::node_item(it@@{node: cl@item_class(*),_},_)) {
+ /* If did stands for a trait
+ ref, we need to map it to its parent class */
+ ebml_w.wr_str(def_to_str(local_def(it.id)));
+ }
+ _ {
+ // Must be a re-export, then!
+ // ...or an iface ref
+ ebml_w.wr_str(def_to_str(did));
+ }
+ };
+ ebml_w.end_tag();
} // for
encode_path(ebml_w, path, ast_map::path_mod(name));
fn should_inline(attrs: ~[attribute]) -> bool {
alt attr::find_inline_attr(attrs) {
- attr::ia_none { false }
- attr::ia_hint | attr::ia_always { true }
+ attr::ia_none | attr::ia_never { false }
+ attr::ia_hint | attr::ia_always { true }
}
}
let tcx = ecx.tcx;
let must_write =
- alt item.node { item_enum(_, _) { true } _ { false } };
+ alt item.node {
+ item_enum(_, _) | item_impl(*) | item_trait(*) | item_class(*) {
+ true
+ }
+ _ {
+ false
+ }
+ };
if !must_write && !reachable(ecx, item.id) { ret; }
fn add_to_index_(item: @item, ebml_w: ebml::writer,
encode_index(ebml_w, bkts, write_int);
ebml_w.end_tag();
}
- item_impl(tps, ifce, _, methods) {
+ item_impl(tps, traits, _, methods) {
add_to_index();
ebml_w.start_tag(tag_items_data_item);
encode_def_id(ebml_w, local_def(item.id));
encode_type_param_bounds(ebml_w, ecx, tps);
encode_type(ecx, ebml_w, node_id_to_type(tcx, item.id));
encode_name(ebml_w, item.ident);
+ encode_attributes(ebml_w, item.attrs);
for methods.each |m| {
ebml_w.start_tag(tag_item_impl_method);
ebml_w.writer.write(str::bytes(def_to_str(local_def(m.id))));
ebml_w.end_tag();
}
- do option::iter(ifce) |t| {
- encode_trait_ref(ebml_w, ecx, t)
- };
+ if traits.len() > 1 {
+ fail ~"multiple traits!!";
+ }
+ for traits.each |associated_trait| {
+ encode_trait_ref(ebml_w, ecx, associated_trait)
+ }
encode_path(ebml_w, path, ast_map::path_name(item.ident));
ebml_w.end_tag();
encode_info_for_item(ecx, ebml_w, i, index, *pt);
/* encode ctor, then encode items */
alt i.node {
- item_class(tps, _, _, ctor, m_dtor) {
- #debug("encoding info for ctor %s %d", *i.ident,
- ctor.node.id);
- vec::push(*index,
- {val: ctor.node.id, pos: ebml_w.writer.tell()});
- encode_info_for_fn(ecx, ebml_w, ctor.node.id, i.ident,
- *pt, if tps.len() > 0u {
- some(ii_ctor(ctor, i.ident, tps,
- local_def(i.id))) }
- else { none }, tps, ctor.node.dec);
- }
- _ {}
+ item_class(tps, _, _, some(ctor), m_dtor) {
+ #debug("encoding info for ctor %s %d", *i.ident,
+ ctor.node.id);
+ vec::push(*index, {
+ val: ctor.node.id,
+ pos: ebml_w.writer.tell()
+ });
+ encode_info_for_fn(ecx, ebml_w, ctor.node.id, i.ident,
+ *pt, if tps.len() > 0u {
+ some(ii_ctor(ctor, i.ident, tps,
+ local_def(i.id))) }
+ else { none }, tps, ctor.node.dec);
+ }
+ _ {}
}
}
}
's' { ty::br_self }
'a' { ty::br_anon }
'[' { ty::br_named(@parse_str(st, ']')) }
+ 'c' {
+ let id = parse_int(st);
+ assert next(st) == '|';
+ ty::br_cap_avoid(id, @parse_bound_region(st))
+ }
}
}
w.write_str(*s);
w.write_char(']')
}
+ ty::br_cap_avoid(id, br) {
+ w.write_char('c');
+ w.write_int(id);
+ w.write_char('|');
+ enc_bound_region(w, *br);
+ }
}
}
w.write_char('I');
w.write_uint(id.to_uint());
}
- ty::ty_param(id, did) {
+ ty::ty_param({idx: id, def_id: did}) {
w.write_char('p');
w.write_str(cx.ds(did));
w.write_char('|');
mutbl_map: middle::borrowck::mutbl_map,
root_map: middle::borrowck::root_map,
last_use_map: middle::liveness::last_use_map,
- impl_map: middle::resolve::impl_map,
+ impl_map: middle::resolve3::ImplMap,
method_map: middle::typeck::method_map,
vtable_map: middle::typeck::vtable_map,
};
/*!
- * # Borrow check
- *
- * This pass is in job of enforcing *memory safety* and *purity*. As
- * memory safety is by far the more complex topic, I'll focus on that in
- * this description, but purity will be covered later on. In the context
- * of Rust, memory safety means three basic things:
- *
- * - no writes to immutable memory;
- * - all pointers point to non-freed memory;
- * - all pointers point to memory of the same type as the pointer.
- *
- * The last point might seem confusing: after all, for the most part,
- * this condition is guaranteed by the type check. However, there are
- * two cases where the type check effectively delegates to borrow check.
- *
- * The first case has to do with enums. If there is a pointer to the
- * interior of an enum, and the enum is in a mutable location (such as a
- * local variable or field declared to be mutable), it is possible that
- * the user will overwrite the enum with a new value of a different
- * variant, and thus effectively change the type of the memory that the
- * pointer is pointing at.
- *
- * The second case has to do with mutability. Basically, the type
- * checker has only a limited understanding of mutability. It will allow
- * (for example) the user to get an immutable pointer with the address of
- * a mutable local variable. It will also allow a `@mut T` or `~mut T`
- * pointer to be borrowed as a `&r.T` pointer. These seeming oversights
- * are in fact intentional; they allow the user to temporarily treat a
- * mutable value as immutable. It is up to the borrow check to guarantee
- * that the value in question is not in fact mutated during the lifetime
- * `r` of the reference.
- *
- * # Summary of the safety check
- *
- * In order to enforce mutability, the borrow check has three tricks up
- * its sleeve.
- *
- * First, data which is uniquely tied to the current stack frame (that'll
- * be defined shortly) is tracked very precisely. This means that, for
- * example, if an immutable pointer to a mutable local variable is
- * created, the borrowck will simply check for assignments to that
- * particular local variable: no other memory is affected.
- *
- * Second, if the data is not uniquely tied to the stack frame, it may
- * still be possible to ensure its validity by rooting garbage collected
- * pointers at runtime. For example, if there is a mutable local
- * variable `x` of type `@T`, and its contents are borrowed with an
- * expression like `&*x`, then the value of `x` will be rooted (today,
- * that means its ref count will be temporary increased) for the lifetime
- * of the reference that is created. This means that the pointer remains
- * valid even if `x` is reassigned.
- *
- * Finally, if neither of these two solutions are applicable, then we
- * require that all operations within the scope of the reference be
- * *pure*. A pure operation is effectively one that does not write to
- * any aliasable memory. This means that it is still possible to write
- * to local variables or other data that is uniquely tied to the stack
- * frame (there's that term again; formal definition still pending) but
- * not to data reached via a `&T` or `@T` pointer. Such writes could
- * possibly have the side-effect of causing the data which must remain
- * valid to be overwritten.
- *
- * # Possible future directions
- *
- * There are numerous ways that the `borrowck` could be strengthened, but
- * these are the two most likely:
- *
- * - flow-sensitivity: we do not currently consider flow at all but only
- * block-scoping. This means that innocent code like the following is
- * rejected:
- *
- * let mut x: int;
- * ...
- * x = 5;
- * let y: &int = &x; // immutable ptr created
- * ...
- *
- * The reason is that the scope of the pointer `y` is the entire
- * enclosing block, and the assignment `x = 5` occurs within that
- * block. The analysis is not smart enough to see that `x = 5` always
- * happens before the immutable pointer is created. This is relatively
- * easy to fix and will surely be fixed at some point.
- *
- * - finer-grained purity checks: currently, our fallback for
- * guaranteeing random references into mutable, aliasable memory is to
- * require *total purity*. This is rather strong. We could use local
- * type-based alias analysis to distinguish writes that could not
- * possibly invalid the references which must be guaranteed. This
- * would only work within the function boundaries; function calls would
- * still require total purity. This seems less likely to be
- * implemented in the short term as it would make the code
- * significantly more complex; there is currently no code to analyze
- * the types and determine the possible impacts of a write.
- *
- * # Terminology
- *
- * A **loan** is .
- *
- * # How the code works
- *
- * The borrow check code is divided into several major modules, each of
- * which is documented in its own file.
- *
- * The `gather_loans` and `check_loans` are the two major passes of the
- * analysis. The `gather_loans` pass runs over the IR once to determine
- * what memory must remain valid and for how long. Its name is a bit of
- * a misnomer; it does in fact gather up the set of loans which are
- * granted, but it also determines when @T pointers must be rooted and
- * for which scopes purity must be required.
- *
- * The `check_loans` pass walks the IR and examines the loans and purity
- * requirements computed in `gather_loans`. It checks to ensure that (a)
- * the conditions of all loans are honored; (b) no contradictory loans
- * were granted (for example, loaning out the same memory as mutable and
- * immutable simultaneously); and (c) any purity requirements are
- * honored.
- *
- * The remaining modules are helper modules used by `gather_loans` and
- * `check_loans`:
- *
- * - `categorization` has the job of analyzing an expression to determine
- * what kind of memory is used in evaluating it (for example, where
- * dereferences occur and what kind of pointer is dereferenced; whether
- * the memory is mutable; etc)
- * - `loan` determines when data uniquely tied to the stack frame can be
- * loaned out.
- * - `preserve` determines what actions (if any) must be taken to preserve
- * aliasable data. This is the code which decides when to root
- * an @T pointer or to require purity.
- *
- * # Maps that are created
- *
- * Borrowck results in two maps.
- *
- * - `root_map`: identifies those expressions or patterns whose result
- * needs to be rooted. Conceptually the root_map maps from an
- * expression or pattern node to a `node_id` identifying the scope for
- * which the expression must be rooted (this `node_id` should identify
- * a block or call). The actual key to the map is not an expression id,
- * however, but a `root_map_key`, which combines an expression id with a
- * deref count and is used to cope with auto-deref.
- *
- * - `mutbl_map`: identifies those local variables which are modified or
- * moved. This is used by trans to guarantee that such variables are
- * given a memory location and not used as immediates.
+# Borrow check
+
+This pass is in charge of enforcing *memory safety* and *purity*. As
+memory safety is by far the more complex topic, I'll focus on that in
+this description, but purity will be covered later on. In the context
+of Rust, memory safety means three basic things:
+
+- no writes to immutable memory;
+- all pointers point to non-freed memory;
+- all pointers point to memory of the same type as the pointer.
+
+The last point might seem confusing: after all, for the most part,
+this condition is guaranteed by the type check. However, there are
+two cases where the type check effectively delegates to borrow check.
+
+The first case has to do with enums. If there is a pointer to the
+interior of an enum, and the enum is in a mutable location (such as a
+local variable or field declared to be mutable), it is possible that
+the user will overwrite the enum with a new value of a different
+variant, and thus effectively change the type of the memory that the
+pointer is pointing at.
+
+The second case has to do with mutability. Basically, the type
+checker has only a limited understanding of mutability. It will allow
+(for example) the user to get an immutable pointer with the address of
+a mutable local variable. It will also allow a `@mut T` or `~mut T`
+pointer to be borrowed as a `&r.T` pointer. These seeming oversights
+are in fact intentional; they allow the user to temporarily treat a
+mutable value as immutable. It is up to the borrow check to guarantee
+that the value in question is not in fact mutated during the lifetime
+`r` of the reference.
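+
+For instance, here is a sketch (using the same syntax as the examples
+below; names are illustrative) of the kind of program this check must
+reject:
+
+    let mut x = @mut 5;      // mutable shared box
+    let y: &int = &*x;       // borrow the contents as immutable
+    *x = 10;                 // must be rejected for the lifetime of `y`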
+
+# Definition of unstable memory
+
+The primary danger to safety arises due to *unstable memory*.
+Unstable memory is memory whose validity or type may change as a
+result of an assignment, move, or a variable going out of scope.
+There are two cases in Rust where memory is unstable: the contents of
+unique boxes and enums.
+
+Unique boxes are unstable because when the variable containing the
+unique box is reassigned, moved, or goes out of scope, the unique box
+is freed or---in the case of a move---potentially given to another
+task. In either case, if there is an extant and usable pointer into
+the box, then safety guarantees would be compromised.
+
+Enum values are unstable because, when they are reassigned, the types
+of their contents may change if they are assigned a different variant
+than they had previously.
+
+# Safety criteria that must be enforced
+
+Whenever a piece of memory is borrowed for lifetime L, there are two
+things which the borrow checker must guarantee. First, it must
+guarantee that the memory address will remain allocated (and owned by
+the current task) for the entirety of the lifetime L. Second, it must
+guarantee that the type of the data will not change for the entirety
+of the lifetime L. In exchange, the region-based type system will
+guarantee that the pointer is not used outside the lifetime L. These
+guarantees are to some extent independent but are also inter-related.
+
+In some cases, the type of a pointer cannot be invalidated but the
+lifetime can. For example, imagine a pointer to the interior of
+a shared box like:
+
+ let mut x = @mut {f: 5, g: 6};
+ let y = &mut x.f;
+
+Here, a pointer was created to the interior of a shared box which
+contains a record. Even if `*x` were to be mutated like so:
+
+ *x = {f: 6, g: 7};
+
+This would cause `*y` to change from 5 to 6, but the pointer
+`y` remains valid. It still points at an integer even if that integer
+has been overwritten.
+
+However, if we were to reassign `x` itself, like so:
+
+    x = @mut {f: 6, g: 7};
+
+This could potentially invalidate `y`, because if `x` were the final
+reference to the shared box, then that memory would be released and
+now `y` points at freed memory. (We will see that to prevent this
+scenario we will *root* shared boxes that reside in mutable memory
+whose contents are borrowed; rooting means that we create a temporary
+to ensure that the box is not collected.)
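+A sketch of the rooting idea (illustrative only, not the compiler's
+literal output):
+
+    let root = x;            // temporary keeps the box's ref count up
+    let y = &mut root.f;     // borrow through the root
+    x = @mut {f: 6, g: 7};   // ok: the old box stays alive via `root`
+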
+
+In other cases, like an enum on the stack, the memory cannot be freed
+but its type can change:
+
+ let mut x = some(5);
+ alt x {
+ some(ref y) => { ... }
+ none => { ... }
+ }
+
+Here as before, the pointer `y` would be invalidated if we were to
+reassign `x` to `none`. (We will see that this case is prevented
+because borrowck tracks data which resides on the stack and prevents
+variables from being reassigned if there may be pointers to their
+interior.)
+
+Finally, in some cases, both dangers can arise. For example, something
+like the following:
+
+ let mut x = ~some(5);
+ alt x {
+ ~some(ref y) => { ... }
+ ~none => { ... }
+ }
+
+In this case, if `x` were to be reassigned or `*x` were to be mutated,
+then the pointer `y` would be invalidated. (This case is also
+prevented by borrowck tracking data which is owned by the current
+stack frame.)
+
+# Summary of the safety check
+
+In order to enforce mutability, the borrow check has a few tricks up
+its sleeve:
+
+- When data is owned by the current stack frame, we can identify every
+ possible assignment to a local variable and simply prevent
+ potentially dangerous assignments directly.
+
+- If data is owned by a shared box, we can root the box to increase
+ its lifetime.
+
+- If data is found within a borrowed pointer, we can assume that the
+ data will remain live for the entirety of the borrowed pointer.
+
+- We can rely on the fact that pure actions (such as calling pure
+ functions) do not mutate data which is not owned by the current
+ stack frame.
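+
+For instance, the first trick rejects a program like this sketch
+(illustrative syntax):
+
+    let mut x = some(5);
+    alt x {
+      some(ref y) => {
+        x = none;  // error: `x` is assigned while `y` points into it
+      }
+      none => {}
+    }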
+
+# Possible future directions
+
+There are numerous ways that the `borrowck` could be strengthened, but
+these are the two most likely:
+
+- flow-sensitivity: we do not currently consider flow at all but only
+ block-scoping. This means that innocent code like the following is
+ rejected:
+
+ let mut x: int;
+ ...
+ x = 5;
+ let y: &int = &x; // immutable ptr created
+ ...
+
+ The reason is that the scope of the pointer `y` is the entire
+ enclosing block, and the assignment `x = 5` occurs within that
+ block. The analysis is not smart enough to see that `x = 5` always
+ happens before the immutable pointer is created. This is relatively
+ easy to fix and will surely be fixed at some point.
+
+- finer-grained purity checks: currently, our fallback for
+ guaranteeing random references into mutable, aliasable memory is to
+ require *total purity*. This is rather strong. We could use local
+ type-based alias analysis to distinguish writes that could not
+  possibly invalidate the references which must be guaranteed. This
+ would only work within the function boundaries; function calls would
+ still require total purity. This seems less likely to be
+ implemented in the short term as it would make the code
+ significantly more complex; there is currently no code to analyze
+ the types and determine the possible impacts of a write.
+
+# How the code works
+
+The borrow check code is divided into several major modules, each of
+which is documented in its own file.
+
+The `gather_loans` and `check_loans` are the two major passes of the
+analysis. The `gather_loans` pass runs over the IR once to determine
+what memory must remain valid and for how long. Its name is a bit of
+a misnomer; it does in fact gather up the set of loans which are
+granted, but it also determines when @T pointers must be rooted and
+for which scopes purity must be required.
+
+The `check_loans` pass walks the IR and examines the loans and purity
+requirements computed in `gather_loans`. It checks to ensure that (a)
+the conditions of all loans are honored; (b) no contradictory loans
+were granted (for example, loaning out the same memory as mutable and
+immutable simultaneously); and (c) any purity requirements are
+honored.
+
+The remaining modules are helper modules used by `gather_loans` and
+`check_loans`:
+
+- `categorization` has the job of analyzing an expression to determine
+ what kind of memory is used in evaluating it (for example, where
+ dereferences occur and what kind of pointer is dereferenced; whether
+ the memory is mutable; etc)
+- `loan` determines when data uniquely tied to the stack frame can be
+ loaned out.
+- `preserve` determines what actions (if any) must be taken to preserve
+ aliasable data. This is the code which decides when to root
+ an @T pointer or to require purity.
+
+# Maps that are created
+
+Borrowck results in two maps.
+
+- `root_map`: identifies those expressions or patterns whose result
+ needs to be rooted. Conceptually the root_map maps from an
+ expression or pattern node to a `node_id` identifying the scope for
+ which the expression must be rooted (this `node_id` should identify
+ a block or call). The actual key to the map is not an expression id,
+ however, but a `root_map_key`, which combines an expression id with a
+ deref count and is used to cope with auto-deref.
+
+- `mutbl_map`: identifies those local variables which are modified or
+ moved. This is used by trans to guarantee that such variables are
+ given a memory location and not used as immediates.
*/
import syntax::ast;
impl public_methods for borrowck_ctxt {
fn cat_borrow_of_expr(expr: @ast::expr) -> cmt {
- // a borrowed expression must be either an @, ~, or a vec/@, vec/~
+ // a borrowed expression must be either an @, ~, or a @vec, ~vec
let expr_ty = ty::expr_ty(self.tcx, expr);
alt ty::get(expr_ty).struct {
ty::ty_evec(*) | ty::ty_estr(*) {
self.cat_deref(expr, cmt, 0u, true).get()
}
+ /*
+ ty::ty_fn({proto, _}) {
+ self.cat_call(expr, expr, proto)
+ }
+ */
+
_ {
self.tcx.sess.span_bug(
expr.span,
ast::expr_new(*) | ast::expr_binary(*) | ast::expr_while(*) |
ast::expr_block(*) | ast::expr_loop(*) | ast::expr_alt(*) |
ast::expr_lit(*) | ast::expr_break | ast::expr_mac(*) |
- ast::expr_again | ast::expr_rec(*) {
+ ast::expr_again | ast::expr_rec(*) | ast::expr_struct(*) {
ret self.cat_rvalue(expr, expr_ty);
}
}
ret @{cat:cat_discr(cmt, alt_id) with *cmt};
}
+ /// inherited mutability: used in cases where the mutability of a
+ /// component is inherited from the base it is a part of. For
+ /// example, a record field is mutable if it is declared mutable
+ /// or if the container is mutable.
+ fn inherited_mutability(base_m: ast::mutability,
+ comp_m: ast::mutability) -> ast::mutability {
+ alt comp_m {
+ m_imm => {base_m} // imm: as mutable as the container
+ m_mutbl | m_const => {comp_m}
+ }
+ }
+
fn cat_field<N:ast_node>(node: N, base_cmt: cmt,
f_name: ast::ident) -> cmt {
let f_mutbl = alt field_mutbl(self.tcx, base_cmt.ty, f_name) {
*f_name, ty_to_str(self.tcx, base_cmt.ty)]);
}
};
- let m = alt f_mutbl {
- m_imm { base_cmt.mutbl } // imm: as mutable as the container
- m_mutbl | m_const { f_mutbl }
- };
+ let m = self.inherited_mutability(base_cmt.mutbl, f_mutbl);
let f_comp = comp_field(f_name, f_mutbl);
let lp = base_cmt.lp.map(|lp| @lp_comp(lp, f_comp) );
@{id: node.id(), span: node.span(),
// Other ptr types admit aliases and are therefore
// not loanable.
alt ptr {
- uniq_ptr {some(@lp_deref(l, ptr))}
- gc_ptr | region_ptr | unsafe_ptr {none}
+ uniq_ptr => {some(@lp_deref(l, ptr))}
+ gc_ptr | region_ptr | unsafe_ptr => {none}
}
};
+
+ // for unique ptrs, we inherit mutability from the
+ // owning reference.
+ let m = alt ptr {
+ uniq_ptr => {
+ self.inherited_mutability(base_cmt.mutbl, mt.mutbl)
+ }
+ gc_ptr | region_ptr | unsafe_ptr => {
+ mt.mutbl
+ }
+ };
+
@{id:node.id(), span:node.span(),
cat:cat_deref(base_cmt, derefs, ptr), lp:lp,
- mutbl:mt.mutbl, ty:mt.ty}
+ mutbl:m, ty:mt.ty}
}
deref_comp(comp) {
let lp = base_cmt.lp.map(|l| @lp_comp(l, comp) );
+ let m = self.inherited_mutability(base_cmt.mutbl, mt.mutbl);
@{id:node.id(), span:node.span(),
cat:cat_comp(base_cmt, comp), lp:lp,
- mutbl:mt.mutbl, ty:mt.ty}
+ mutbl:m, ty:mt.ty}
}
}
}
ret alt deref_kind(self.tcx, base_cmt.ty) {
deref_ptr(ptr) {
- // make deref of vectors explicit, as explained in the comment at
- // the head of this section
- let deref_lp = base_cmt.lp.map(|lp| @lp_deref(lp, ptr) );
+ // (a) the contents are loanable if the base is loanable
+ // and this is a *unique* vector
+ let deref_lp = alt ptr {
+ uniq_ptr => {base_cmt.lp.map(|lp| @lp_deref(lp, uniq_ptr))}
+ _ => {none}
+ };
+
+ // (b) for unique ptrs, we inherit mutability from the
+ // owning reference.
+ let m = alt ptr {
+ uniq_ptr => {
+ self.inherited_mutability(base_cmt.mutbl, mt.mutbl)
+ }
+ gc_ptr | region_ptr | unsafe_ptr => {
+ mt.mutbl
+ }
+ };
+
+ // (c) the deref is explicit in the resulting cmt
let deref_cmt = @{id:expr.id, span:expr.span,
- cat:cat_deref(base_cmt, 0u, ptr), lp:deref_lp,
- mutbl:m_imm, ty:mt.ty};
- comp(expr, deref_cmt, base_cmt.ty, mt)
+ cat:cat_deref(base_cmt, 0u, ptr), lp:deref_lp,
+ mutbl:m, ty:mt.ty};
+
+ comp(expr, deref_cmt, base_cmt.ty, m, mt.ty)
}
deref_comp(_) {
// fixed-length vectors have no deref
- comp(expr, base_cmt, base_cmt.ty, mt)
+ comp(expr, base_cmt, base_cmt.ty, mt.mutbl, mt.ty)
}
};
fn comp(expr: @ast::expr, of_cmt: cmt,
- vect: ty::t, mt: ty::mt) -> cmt {
- let comp = comp_index(vect, mt.mutbl);
+ vect: ty::t, mutbl: ast::mutability, ty: ty::t) -> cmt {
+ let comp = comp_index(vect, mutbl);
let index_lp = of_cmt.lp.map(|lp| @lp_comp(lp, comp) );
@{id:expr.id, span:expr.span,
cat:cat_comp(of_cmt, comp), lp:index_lp,
- mutbl:mt.mutbl, ty:mt.ty}
+ mutbl:mutbl, ty:ty}
}
}
import dvec::{dvec, extensions};
fn check_crate(sess: session, crate: @crate, ast_map: ast_map::map,
- def_map: resolve::def_map,
+ def_map: resolve3::DefMap,
method_map: typeck::method_map, tcx: ty::ctxt) {
visit::visit_crate(*crate, false, visit::mk_vt(@{
visit_item: |a,b,c| check_item(sess, ast_map, def_map, a, b, c),
sess.abort_if_errors();
}
-fn check_item(sess: session, ast_map: ast_map::map, def_map: resolve::def_map,
+fn check_item(sess: session, ast_map: ast_map::map,
+ def_map: resolve3::DefMap,
it: @item, &&_is_const: bool, v: visit::vt<bool>) {
alt it.node {
item_const(_, ex) {
}
}
-fn check_expr(sess: session, def_map: resolve::def_map,
+fn check_expr(sess: session, def_map: resolve3::DefMap,
method_map: typeck::method_map, tcx: ty::ctxt,
e: @expr, &&is_const: bool, v: visit::vt<bool>) {
if is_const {
// Make sure a const item doesn't recursively refer to itself
// FIXME: Should use the dependency graph when it's available (#1356)
fn check_item_recursion(sess: session, ast_map: ast_map::map,
- def_map: resolve::def_map, it: @item) {
+ def_map: resolve3::DefMap, it: @item) {
type env = {
root_it: @item,
sess: session,
ast_map: ast_map::map,
- def_map: resolve::def_map,
+ def_map: resolve3::DefMap,
idstack: @dvec<node_id>,
};
import option::*;
import syntax::{ast, ast_util, visit};
import syntax::ast::{serialize_span, deserialize_span};
-import middle::resolve;
import syntax::codemap::span;
export annotate_freevars;
// Since we want to be able to collect upvars in some arbitrary piece
// of the AST, we take a walker function that we invoke with a visitor
// in order to start the search.
-fn collect_freevars(def_map: resolve::def_map, blk: ast::blk)
+fn collect_freevars(def_map: resolve3::DefMap, blk: ast::blk)
-> freevar_info {
let seen = int_hash();
let refs = @mut ~[];
// efficient as it fully recomputes the free variables at every
// node of interest rather than building up the free variables in
// one pass. This could be improved upon if it turns out to matter.
-fn annotate_freevars(def_map: resolve::def_map, crate: @ast::crate) ->
+fn annotate_freevars(def_map: resolve3::DefMap, crate: @ast::crate) ->
freevar_map {
let freevars = int_hash();
fn check_for_box(cx: ctx, id: node_id, fv: option<@freevar_entry>,
is_move: bool, var_t: ty::t, sp: span) {
// all captured data must be owned
- if !check_owned(cx, var_t, sp) { ret; }
+ if !check_owned(cx.tcx, var_t, sp) { ret; }
// copied in data must be copyable, but moved in data can be anything
let is_implicit = fv.is_some();
alt e.node {
expr_assign(_, ex) |
expr_unary(box(_), ex) | expr_unary(uniq(_), ex) |
- expr_ret(some(ex)) | expr_cast(ex, _) { maybe_copy(cx, ex); }
+ expr_ret(some(ex)) {
+ maybe_copy(cx, ex);
+ }
+ expr_cast(source, _) {
+ maybe_copy(cx, source);
+ check_cast_for_escaping_regions(cx, source, e);
+ }
expr_copy(expr) { check_copy_ex(cx, expr, false); }
// Vector add copies, but not "implicitly"
expr_assign_op(_, _, ex) { check_copy_ex(cx, ex, false) }
}
}
-fn check_owned(cx: ctx, ty: ty::t, sp: span) -> bool {
- if !ty::kind_is_owned(ty::type_kind(cx.tcx, ty)) {
- cx.tcx.sess.span_err(sp, ~"not an owned value");
+// note: also used from middle::typeck::regionck!
+fn check_owned(tcx: ty::ctxt, ty: ty::t, sp: span) -> bool {
+ if !ty::kind_is_owned(ty::type_kind(tcx, ty)) {
+ alt ty::get(ty).struct {
+ ty::ty_param(*) {
+ tcx.sess.span_err(sp, ~"value may contain borrowed \
+ pointers; use `owned` bound");
+ }
+ _ {
+ tcx.sess.span_err(sp, ~"value may contain borrowed \
+ pointers");
+ }
+ }
false
} else {
true
}
}
+/// This is rather subtle. When we are casting a value to an
+/// instantiated iface like `a as iface/&r`, regionck already ensures
+/// that any borrowed pointers that appear in the type of `a` are
+/// bounded by `&r`. However, it is possible that there are *type
+/// parameters* in the type of `a`, and those *type parameters* may
+/// have borrowed pointers within them. We have to guarantee that the
+/// regions which appear in those type parameters are not obscured.
+///
+/// Therefore, we ensure that one of three conditions holds:
+///
+/// (1) The iface instance cannot escape the current fn. This is
+/// guaranteed if the region bound `&r` is some scope within the fn
+/// itself. This case is safe because whatever borrowed pointers are
+/// found within the type parameter, they must enclose the fn body
+/// itself.
+///
+/// (2) The type parameter appears in the type of the iface. For
+/// example, if the type parameter is `T` and the iface type is
+/// `deque<T>`, then whatever borrowed ptrs may appear in `T` also
+/// appear in `deque<T>`.
+///
+/// (3) The type parameter is owned (and therefore does not contain
+/// borrowed ptrs).
+fn check_cast_for_escaping_regions(
+ cx: ctx,
+ source: @expr,
+ target: @expr) {
+
+ // Determine what type we are casting to; if it is not an iface, then no
+ // worries.
+ let target_ty = ty::expr_ty(cx.tcx, target);
+ let target_substs = alt ty::get(target_ty).struct {
+ ty::ty_trait(_, substs) => {substs}
+ _ => { ret; /* not a cast to a trait */ }
+ };
+
+ // Check, based on the region associated with the iface, whether it can
+ // possibly escape the enclosing fn item (note that all type parameters
+ // must have been declared on the enclosing fn item):
+ alt target_substs.self_r {
+ some(ty::re_scope(*)) => { ret; /* case (1) */ }
+ none | some(ty::re_static) | some(ty::re_free(*)) => {}
+ some(ty::re_bound(*)) | some(ty::re_var(*)) => {
+ cx.tcx.sess.span_bug(
+ source.span,
+ #fmt["bad region found in kind: %?", target_substs.self_r]);
+ }
+ }
+
+ // Assuming the iface instance can escape, then ensure that each parameter
+ // either appears in the iface type or is owned:
+ let target_params = ty::param_tys_in_type(target_ty);
+ let source_ty = ty::expr_ty(cx.tcx, source);
+ do ty::walk_ty(source_ty) |ty| {
+ alt ty::get(ty).struct {
+ ty::ty_param(source_param) => {
+ if target_params.contains(source_param) {
+ /* case (2) */
+ } else {
+ check_owned(cx.tcx, ty, source.span); /* case (3) */
+ }
+ }
+ _ => {}
+ }
+ }
+}
+
//
// Local Variables:
// mode: rust
import std::map::{map,hashmap,int_hash,hash_from_strs};
import std::smallintmap::{map,smallintmap};
import io::writer_util;
-import syntax::print::pprust::expr_to_str;
+import util::ppaux::{ty_to_str};
+import syntax::print::pprust::{expr_to_str, mode_to_str};
export lint, ctypes, unused_imports, while_true, path_statement, old_vecs;
export unrecognized_warning, non_implicitly_copyable_typarams;
export vecs_not_implicitly_copyable, implicit_copies;
unrecognized_warning,
non_implicitly_copyable_typarams,
vecs_not_implicitly_copyable,
+ deprecated_mode,
}
// This is pretty unfortunate. We really want some sort of "deriving Enum"
5 { unrecognized_warning }
6 { non_implicitly_copyable_typarams }
7 { vecs_not_implicitly_copyable }
+ 8 { deprecated_mode }
}
}
(~"implicit_copies",
@{lint: implicit_copies,
desc: ~"implicit copies of non implicitly copyable data",
- default: warn})
+ default: warn}),
+ (~"deprecated_mode",
+ @{lint: deprecated_mode,
+ desc: ~"warn about deprecated uses of modes",
+ default: ignore})
];
hash_from_strs(v)
}
visit::visit_item(it, (), visit);
}
+fn check_fn(tcx: ty::ctxt, fk: visit::fn_kind, decl: ast::fn_decl,
+ _body: ast::blk, span: span, id: ast::node_id) {
+ #debug["lint check_fn fk=%? id=%?", fk, id];
+
+ // don't complain about blocks, since they tend to get their modes
+ // specified from the outside
+ alt fk {
+ visit::fk_fn_block(*) => { ret; }
+ _ => {}
+ }
+
+ let fn_ty = ty::node_id_to_type(tcx, id);
+ alt check ty::get(fn_ty).struct {
+ ty::ty_fn(fn_ty) {
+ let mut counter = 0;
+ do vec::iter2(fn_ty.inputs, decl.inputs) |arg_ty, arg_ast| {
+ counter += 1;
+ #debug["arg %d, ty=%s, mode=%s",
+ counter,
+ ty_to_str(tcx, arg_ty.ty),
+ mode_to_str(arg_ast.mode)];
+ alt arg_ast.mode {
+ ast::expl(ast::by_copy) => {
+ /* always allow by-copy */
+ }
+
+ ast::expl(_) => {
+ tcx.sess.span_lint(
+ deprecated_mode, id, id,
+ span,
+ #fmt["argument %d uses an explicit mode", counter]);
+ }
+
+              ast::infer(_) => {
+ let kind = ty::type_kind(tcx, arg_ty.ty);
+ if !ty::kind_is_safe_for_default_mode(kind) {
+ tcx.sess.span_lint(
+ deprecated_mode, id, id,
+ span,
+ #fmt["argument %d uses the default mode \
+ but shouldn't",
+ counter]);
+ }
+ }
+ }
+ }
+ }
+ }
+}
+
fn check_crate(tcx: ty::ctxt, crate: @ast::crate) {
let v = visit::mk_simple_visitor(@{
- visit_item: fn@(it: @ast::item) { check_item(it, tcx); }
+ visit_item:
+ |it| check_item(it, tcx),
+ visit_fn:
+ |fk, decl, body, span, id| check_fn(tcx, fk, decl, body,
+ span, id),
with *visit::default_simple_visitor()
});
visit::visit_crate(*crate, (), v);
expr_unary(*) | expr_fail(*) |
expr_break | expr_again | expr_lit(_) | expr_ret(*) |
expr_block(*) | expr_move(*) | expr_assign(*) | expr_swap(*) |
- expr_assign_op(*) | expr_mac(*) {
+ expr_assign_op(*) | expr_mac(*) | expr_struct(*) {
visit::visit_expr(expr, self, vt);
}
}
}
}
+ expr_struct(_, fields) {
+ do fields.foldr(succ) |field, succ| {
+ self.propagate_through_expr(field.node.expr, succ)
+ }
+ }
+
expr_call(f, args, _) {
// calling a fn with bot return type means that the fn
// will fail, and hence the successors can be ignored
expr_loop_body(*) | expr_do_body(*) |
expr_cast(*) | expr_unary(*) | expr_fail(*) |
expr_ret(*) | expr_break | expr_again | expr_lit(_) |
- expr_block(*) | expr_swap(*) | expr_mac(*) | expr_addr_of(*) {
+ expr_block(*) | expr_swap(*) | expr_mac(*) | expr_addr_of(*) |
+ expr_struct(*) {
visit::visit_expr(expr, self, vt);
}
}
// This is used because same-named variables in alternative patterns need to
// use the node_id of their namesake in the first pattern.
-fn pat_id_map(dm: resolve::def_map, pat: @pat) -> pat_id_map {
+fn pat_id_map(dm: resolve3::DefMap, pat: @pat) -> pat_id_map {
let map = std::map::box_str_hash();
do pat_bindings(dm, pat) |p_id, _s, n| {
map.insert(path_to_ident(n), p_id);
ret map;
}
-fn pat_is_variant(dm: resolve::def_map, pat: @pat) -> bool {
+fn pat_is_variant(dm: resolve3::DefMap, pat: @pat) -> bool {
alt pat.node {
pat_enum(_, _) { true }
pat_ident(_, none) {
}
}
-fn pat_bindings(dm: resolve::def_map, pat: @pat,
+fn pat_bindings(dm: resolve3::DefMap, pat: @pat,
it: fn(node_id, span, @path)) {
do walk_pat(pat) |p| {
alt p.node {
}
}
-fn pat_binding_ids(dm: resolve::def_map, pat: @pat) -> ~[node_id] {
+fn pat_binding_ids(dm: resolve3::DefMap, pat: @pat) -> ~[node_id] {
let mut found = ~[];
pat_bindings(dm, pat, |b_id, _sp, _pt| vec::push(found, b_id) );
ret found;
-/*
+/*!
-Region resolution. This pass runs before typechecking and resolves region
-names to the appropriate block.
-
-This seems to be as good a place as any to explain in detail how
-region naming, representation, and type check works.
-
-### Naming and so forth
-
-We really want regions to be very lightweight to use. Therefore,
-unlike other named things, the scopes for regions are not explicitly
-declared: instead, they are implicitly defined. Functions declare new
-scopes: if the function is not a bare function, then as always it
-inherits the names in scope from the outer scope. Within a function
-declaration, new names implicitly declare new region variables. Outside
-of function declarations, new names are illegal. To make this more
-concrete, here is an example:
-
- fn foo(s: &a.S, t: &b.T) {
- let s1: &a.S = s; // a refers to the same a as in the decl
- let t1: &c.T = t; // illegal: cannot introduce new name here
- }
-
-The code in this file is what actually handles resolving these names.
-It creates a couple of maps that map from the AST node representing a
-region ptr type to the resolved form of its region parameter. If new
-names are introduced where they shouldn't be, then an error is
-reported.
-
-If regions are not given an explicit name, then the behavior depends
-a bit on the context. Within a function declaration, all unnamed regions
-are mapped to a single, anonymous parameter. That is, a function like:
-
- fn foo(s: &S) -> &S { s }
-
-is equivalent to a declaration like:
-
- fn foo(s: &a.S) -> &a.S { s }
-
-Within a function body or other non-binding context, an unnamed region
-reference is mapped to a fresh region variable whose value can be
-inferred as normal.
-
-The resolved form of regions is `ty::region`. Before I can explain
-why this type is setup the way it is, I have to digress a little bit
-into some ill-explained type theory.
-
-### Universal Quantification
-
-Regions are more complex than type parameters because, unlike type
-parameters, they can be universally quantified within a type. To put
-it another way, you cannot (at least at the time of this writing) have
-a variable `x` of type `fn<T>(T) -> T`. You can have an *item* of
-type `fn<T>(T) -> T`, but whenever it is referenced within a method,
-that type parameter `T` is replaced with a concrete type *variable*
-`$T`. To make this more concrete, imagine this code:
-
- fn identity<T>(x: T) -> T { x }
- let f = identity; // f has type fn($T) -> $T
- f(3u); // $T is bound to uint
- f(3); // Type error
-
-You can see here that a type error will result because the type of `f`
-(as opposed to the type of `identity`) is not universally quantified
-over `$T`. That's fancy math speak for saying that the type variable
-`$T` refers to a specific type that may not yet be known, unlike the
-type parameter `T` which refers to some type which will never be
-known.
-
-Anyway, regions work differently. If you have an item of type
-`fn(&a.T) -> &a.T` and you reference it, its type remains the same:
-only when the function *is called* is `&a` instantiated with a
-concrete region variable. This means you could call it twice and give
-different values for `&a` each time.
-
-This more general form is possible for regions because they do not
-impact code generation. We do not need to monomorphize functions
-differently just because they contain region pointers. In fact, we
-don't really do *anything* differently.
-
-### Representing regions; or, why do I care about all that?
-
-The point of this discussion is that the representation of regions
-must distinguish between a *bound* reference to a region and a *free*
-reference. A bound reference is one which will be replaced with a
-fresh type variable when the function is called, like the type
-parameter `T` in `identity`. They can only appear within function
-types. A free reference is a region that may not yet be concretely
-known, like the variable `$T`.
-
-To see why we must distinguish them carefully, consider this program:
-
- fn item1(s: &a.S) {
- let choose = fn@(s1: &a.S) -> &a.S {
- if some_cond { s } else { s1 }
- };
- }
-
-Here, the variable `s1: &a.S` that appears within the `fn@` is a free
-reference to `a`. That is, when you call `choose()`, you don't
-replace `&a` with a fresh region variable, but rather you expect `s1`
-to be in the same region as the parameter `s`.
-
-But in this program, this is not the case at all:
-
- fn item2() {
- let identity = fn@(s1: &a.S) -> &a.S { s1 };
- }
-
-To distinguish between these two cases, `ty::region` contains two
-variants: `re_bound` and `re_free`. In `item1()`, the outer reference
-to `&a` would be `re_bound(rid_param("a", 0u))`, and the inner reference
-would be `re_free(rid_param("a", 0u))`. In `item2()`, the inner reference
-would be `re_bound(rid_param("a", 0u))`.
-
-#### Implications for typeck
-
-In typeck, whenever we call a function, we must go over and replace
-all references to `re_bound()` regions within its parameters with
-fresh type variables (we do not, however, replace bound regions within
-nested function types, as those nested functions have not yet been
-called).
-
-Also, when we typecheck the *body* of an item, we must replace all
-`re_bound` references with `re_free` references. This means that the
-region in the type of the argument `s` in `item1()` *within `item1()`*
-is not `re_bound(re_param("a", 0u))` but rather `re_free(re_param("a",
-0u))`. This is because, for any particular *invocation of `item1()`*,
-`&a` will be bound to some specific region, and hence it is no longer
-bound.
+This file actually contains two passes related to regions. The first
+pass builds up the `region_map`, which describes the parent links in
+the region hierarchy. The second pass infers which types must be
+region parameterized.
*/
name: ~str,
br: ty::bound_region};
-// Mapping from a block/expr/binding to the innermost scope that
-// bounds its lifetime. For a block/expression, this is the lifetime
-// in which it will be evaluated. For a binding, this is the lifetime
-// in which is in scope.
+/// Mapping from a block/expr/binding to the innermost scope that
+/// bounds its lifetime. For a block/expression, this is the lifetime
+/// in which it will be evaluated. For a binding, this is the lifetime
+/// in which it is in scope.
type region_map = hashmap<ast::node_id, ast::node_id>;
type ctxt = {
sess: session,
- def_map: resolve::def_map,
+ def_map: resolve3::DefMap,
region_map: region_map,
// The parent scope is the innermost block, call, or alt
parent: parent
};
-// Returns true if `subscope` is equal to or is lexically nested inside
-// `superscope` and false otherwise.
+/// Returns true if `subscope` is equal to or is lexically nested inside
+/// `superscope` and false otherwise.
fn scope_contains(region_map: region_map, superscope: ast::node_id,
subscope: ast::node_id) -> bool {
let mut subscope = subscope;
ret true;
}
+/// Finds the nearest common ancestor (if any) of two scopes. That
+/// is, finds the smallest scope which is greater than or equal to
+/// both `scope_a` and `scope_b`.
fn nearest_common_ancestor(region_map: region_map, scope_a: ast::node_id,
scope_b: ast::node_id) -> option<ast::node_id> {
}
}
+/// Extracts the current parent from cx, failing if there is none.
fn parent_id(cx: ctxt, span: span) -> ast::node_id {
alt cx.parent {
none {
}
}
+/// Records the current parent (if any) as the parent of `child_id`.
fn record_parent(cx: ctxt, child_id: ast::node_id) {
alt cx.parent {
none { /* no-op */ }
visit::visit_fn(fk, decl, body, sp, id, fn_cx, visitor);
}
-fn resolve_crate(sess: session, def_map: resolve::def_map, crate: @ast::crate)
- -> region_map {
+fn resolve_crate(sess: session, def_map: resolve3::DefMap,
+ crate: @ast::crate) -> region_map {
let cx: ctxt = {sess: sess,
def_map: def_map,
region_map: int_hash(),
type determine_rp_ctxt_ = {
sess: session,
ast_map: ast_map::map,
- def_map: resolve::def_map,
+ def_map: resolve3::DefMap,
region_paramd_items: region_paramd_items,
dep_map: dep_map,
worklist: dvec<ast::node_id>,
fn determine_rp_in_crate(sess: session,
ast_map: ast_map::map,
- def_map: resolve::def_map,
+ def_map: resolve3::DefMap,
crate: @ast::crate) -> region_paramd_items {
let cx = determine_rp_ctxt_(@{sess: sess,
ast_map: ast_map,
+++ /dev/null
-import syntax::{ast, ast_util, codemap, ast_map};
-import syntax::ast::*;
-import ast::{ident, fn_ident, def, def_id, node_id};
-import ast::{required, provided};
-import syntax::ast_util::{local_def, def_id_of_def, new_def_hash,
- class_item_ident, path_to_ident};
-import pat_util::*;
-
-import syntax::attr;
-import metadata::{csearch, cstore};
-import driver::session::session;
-import util::common::is_main_name;
-import std::map::{int_hash, str_hash, box_str_hash, hashmap};
-import vec::each;
-import syntax::codemap::span;
-import syntax::visit;
-import visit::vt;
-import std::{list};
-import std::list::{list, nil, cons};
-import option::{is_none, is_some};
-import syntax::print::pprust::*;
-import dvec::{dvec, extensions};
-
-export resolve_crate;
-export def_map, ext_map, exp_map, impl_map;
-export _impl, iscopes, method_info;
-
-// Resolving happens in two passes. The first pass collects defids of all
-// (internal) imports and modules, so that they can be looked up when needed,
-// and then uses this information to resolve the imports. The second pass
-// locates all names (in expressions, types, and alt patterns) and resolves
-// them, storing the resulting def in the AST nodes.
-
-/* foreign modules can't contain enums, and we don't store their ASTs because
- we only need to look at them to determine exports, which they can't
- control.*/
-
-type def_map = hashmap<node_id, def>;
-type ext_map = hashmap<def_id, ~[ident]>;
-type impl_map = hashmap<node_id, iscopes>;
-type impl_cache = hashmap<def_id, option<@~[@_impl]>>;
-
-
-// Impl resolution
-
-type method_info = {did: def_id, n_tps: uint, ident: ast::ident};
-/* An _impl represents an implementation that's currently in scope.
- Its fields:
- * did: the def id of the class or impl item
- * ident: the name of the impl, unless it has no name (as in
- "impl of X") in which case the ident
- is the ident of the trait that's being implemented
- * methods: the item's methods
-*/
-type _impl = {did: def_id, ident: ast::ident, methods: ~[@method_info]};
-type iscopes = @list<@~[@_impl]>;
-
-type exp = {reexp: bool, id: def_id};
-type exp_map = hashmap<node_id, ~[exp]>;
-
-// Local Variables:
-// mode: rust
-// fill-column: 78;
-// indent-tabs-mode: nil
-// c-basic-offset: 4
-// buffer-file-coding-system: utf-8-unix
-// End:
import syntax::ast::{def_upvar, def_use, def_variant, expr, expr_assign_op};
import syntax::ast::{expr_binary, expr_cast, expr_field, expr_fn};
import syntax::ast::{expr_fn_block, expr_index, expr_new, expr_path};
-import syntax::ast::{expr_unary, fn_decl, foreign_item, foreign_item_fn};
-import syntax::ast::{ident, trait_ref, impure_fn, instance_var, item};
-import syntax::ast::{item_class, item_const, item_enum, item_fn, item_mac};
-import syntax::ast::{item_foreign_mod, item_trait, item_impl, item_mod};
-import syntax::ast::{item_ty, local, local_crate, method, node_id, pat};
-import syntax::ast::{pat_enum, pat_ident, path, prim_ty, stmt_decl, ty,
- pat_box, pat_uniq, pat_lit, pat_range, pat_rec,
- pat_tup, pat_wild};
-import syntax::ast::{ty_bool, ty_char, ty_f, ty_f32, ty_f64};
+import syntax::ast::{expr_struct, expr_unary, fn_decl, foreign_item};
+import syntax::ast::{foreign_item_fn, ident, trait_ref, impure_fn};
+import syntax::ast::{instance_var, item, item_class, item_const, item_enum};
+import syntax::ast::{item_fn, item_mac, item_foreign_mod, item_impl};
+import syntax::ast::{item_mod, item_trait, item_ty, local, local_crate};
+import syntax::ast::{method, node_id, pat, pat_enum, pat_ident};
+import syntax::ast::{path, prim_ty, pat_box, pat_uniq, pat_lit, pat_range};
+import syntax::ast::{pat_rec, pat_tup, pat_wild, stmt_decl};
+import syntax::ast::{ty, ty_bool, ty_char, ty_f, ty_f32, ty_f64};
import syntax::ast::{ty_float, ty_i, ty_i16, ty_i32, ty_i64, ty_i8, ty_int};
import syntax::ast::{ty_param, ty_path, ty_str, ty_u, ty_u16, ty_u32, ty_u64};
import syntax::ast::{ty_u8, ty_uint, variant, view_item, view_item_export};
let unused_import_lint_level: level;
let trait_info: hashmap<def_id,@hashmap<Atom,()>>;
+ let structs: hashmap<def_id,()>;
// The number of imports that are currently unresolved.
let mut unresolved_imports: uint;
let mut xray_context: XrayFlag;
// The trait that the current context can refer to.
- let mut current_trait_ref: option<def_id>;
+ let mut current_trait_refs: option<@dvec<def_id>>;
// The atom for the keyword "self".
let self_atom: Atom;
self.unused_import_lint_level = unused_import_lint_level(session);
self.trait_info = new_def_hash();
+ self.structs = new_def_hash();
self.unresolved_imports = 0u;
self.type_ribs = @dvec();
self.xray_context = NoXray;
- self.current_trait_ref = none;
+ self.current_trait_refs = none;
self.self_atom = (*self.atom_table).intern(@~"self");
self.primitive_type_table = @PrimitiveTypeTable(self.atom_table);
visitor);
}
}
- item_class(_, _, class_members, ctor, _) {
+ item_class(_, _, class_members, optional_ctor, _) {
(*name_bindings).define_type(def_ty(local_def(item.id)));
- let purity = ctor.node.dec.purity;
- let ctor_def = def_fn(local_def(ctor.node.id), purity);
- (*name_bindings).define_value(ctor_def);
+ alt optional_ctor {
+ none => {
+ // Nothing to do.
+ }
+ some(ctor) => {
+ let purity = ctor.node.dec.purity;
+ let ctor_def = def_fn(local_def(ctor.node.id),
+ purity);
+ (*name_bindings).define_value(ctor_def);
+ }
+ }
// Create the set of implementation information that the
// implementation scopes (ImplScopes) need and write it into
(*name_bindings).define_impl(impl_info);
+ // Record the def ID of this struct.
+ self.structs.insert(local_def(item.id), ());
+
visit_item(item, new_parent, visitor);
}
}
}
+ let i = import_resolution;
+ alt (i.module_target, i.value_target, i.type_target, i.impl_target) {
+ /*
+ If this name wasn't found in any of the four namespaces, it's
+ definitely unresolved
+ */
+ (none, none, none, v) if v.len() == 0 { ret Failed; }
+ _ {}
+ }
+
assert import_resolution.outstanding_references >= 1u;
import_resolution.outstanding_references -= 1u;
}
}
- item_impl(type_parameters, interface_reference, self_type,
+ item_impl(type_parameters, implemented_traits, self_type,
methods) {
- self.resolve_implementation(item.id,
- item.span,
+ self.resolve_implementation(item.id, item.span,
type_parameters,
- interface_reference,
- self_type,
- methods,
- visitor);
+ implemented_traits,
+ self_type, methods, visitor);
}
item_trait(type_parameters, methods) {
(*self.type_ribs).pop();
}
- item_class(ty_params, interfaces, class_members, constructor,
- optional_destructor) {
+ item_class(ty_params, interfaces, class_members,
+ optional_constructor, optional_destructor) {
self.resolve_class(item.id,
@copy ty_params,
interfaces,
class_members,
- constructor,
+ optional_constructor,
optional_destructor,
visitor);
}
type_parameters: @~[ty_param],
interfaces: ~[@trait_ref],
class_members: ~[@class_member],
- constructor: class_ctor,
+ optional_constructor: option<class_ctor>,
optional_destructor: option<class_dtor>,
visitor: ResolveVisitor) {
// Add a type into the def map. This is needed to prevent an ICE in
- // ty::impl_trait.
+ // ty::impl_traits.
// If applicable, create a rib for the type parameters.
let outer_type_parameter_count = (*type_parameters).len();
let borrowed_type_parameters: &~[ty_param] = &*type_parameters;
do self.with_type_parameter_rib(HasTypeParameters
(borrowed_type_parameters, id, 0u,
- NormalRibKind))
- || {
+ NormalRibKind)) {
// Resolve the type parameters.
self.resolve_type_parameters(*type_parameters, visitor);
}
}
- // Resolve the constructor.
- self.resolve_function(NormalRibKind,
- some(@constructor.node.dec),
- NoTypeParameters,
- constructor.node.body,
- HasSelfBinding(constructor.node.self_id),
- NoCaptureClause,
- visitor);
-
+ // Resolve the constructor, if applicable.
+ alt optional_constructor {
+ none => {
+ // Nothing to do.
+ }
+ some(constructor) => {
+ self.resolve_function(NormalRibKind,
+ some(@constructor.node.dec),
+ NoTypeParameters,
+ constructor.node.body,
+ HasSelfBinding(constructor.node.
+ self_id),
+ NoCaptureClause,
+ visitor);
+ }
+ }
// Resolve the destructor, if applicable.
alt optional_destructor {
fn resolve_implementation(id: node_id,
span: span,
type_parameters: ~[ty_param],
- interface_reference: option<@trait_ref>,
+ trait_references: ~[@trait_ref],
self_type: @ty,
methods: ~[@method],
visitor: ResolveVisitor) {
self.resolve_type_parameters(type_parameters, visitor);
// Resolve the interface reference, if necessary.
- let original_trait_ref = self.current_trait_ref;
- alt interface_reference {
- none {
- // Nothing to do.
- }
- some(interface_reference) {
- alt self.resolve_path(interface_reference.path, TypeNS,
- true, visitor) {
+ let original_trait_refs = self.current_trait_refs;
+ if trait_references.len() >= 1 {
+ let mut new_trait_refs = @dvec();
+ for trait_references.each |trait_reference| {
+ alt self.resolve_path(trait_reference.path, TypeNS, true,
+ visitor) {
none {
self.session.span_err(span,
~"attempt to implement an \
- unknown interface");
+ unknown trait");
}
some(def) {
- self.record_def(interface_reference.ref_id, def);
+ self.record_def(trait_reference.ref_id, def);
// Record the current trait reference.
- self.current_trait_ref = some(def_id_of_def(def));
+ (*new_trait_refs).push(def_id_of_def(def));
}
}
}
+
+ // Record the current set of trait references.
+ self.current_trait_refs = some(new_trait_refs);
}
// Resolve the self type.
visitor);
}
- // Restore the original trait reference.
- self.current_trait_ref = original_trait_ref;
+ // Restore the original trait references.
+ self.current_trait_refs = original_trait_refs;
}
}
visitor);
}
+ expr_struct(path, _) {
+            // Resolve the path to the structure it refers to.
+ //
+ // XXX: We might want to support explicit type parameters in
+ // the path, in which case this gets a little more
+ // complicated:
+ //
+ // 1. Should we go through the ast_path_to_ty() path, which
+ // handles typedefs and the like?
+ //
+ // 2. If so, should programmers be able to write this?
+ //
+ // class Foo<A> { ... }
+ // type Bar<A> = Foo<A>;
+ // let bar = Bar { ... } // no type parameters
+
+ alt self.resolve_path(path, TypeNS, false, visitor) {
+ some(definition @ def_ty(class_id))
+ if self.structs.contains_key(class_id) {
+
+ self.record_def(expr.id, def_class(class_id));
+ }
+ _ {
+ self.session.span_err(path.span,
+ #fmt("`%s` does not name a \
+ structure",
+ connect(path.idents.map
+ (|x| *x),
+ ~"::")));
+ }
+ }
+
+ visit_expr(expr, (), visitor);
+ }
+
_ {
visit_expr(expr, (), visitor);
}
let mut search_module = self.current_module;
loop {
// Look for the current trait.
- alt copy self.current_trait_ref {
- some(trait_def_id) {
- self.add_trait_info_if_containing_method(found_traits,
- trait_def_id,
- name);
+ alt copy self.current_trait_refs {
+ some(trait_def_ids) {
+ for trait_def_ids.each |trait_def_id| {
+ self.add_trait_info_if_containing_method
+ (found_traits, trait_def_id, name);
+ }
}
none {
// Nothing to do.
import syntax::ast::def_id;
import syntax::codemap::span;
import syntax::print::pprust::pat_to_str;
-import middle::resolve::def_map;
+import middle::resolve3::DefMap;
import back::abi;
import std::map::hashmap;
import dvec::{dvec, extensions};
type enter_pat = fn(@ast::pat) -> option<~[@ast::pat]>;
-fn enter_match(dm: def_map, m: match, col: uint, val: ValueRef,
+fn enter_match(dm: DefMap, m: match, col: uint, val: ValueRef,
e: enter_pat) -> match {
let mut result = ~[];
for vec::each(m) |br| {
ret result;
}
-fn enter_default(dm: def_map, m: match, col: uint, val: ValueRef) -> match {
+fn enter_default(dm: DefMap, m: match, col: uint, val: ValueRef) -> match {
do enter_match(dm, m, col, val) |p| {
alt p.node {
ast::pat_wild | ast::pat_rec(_, _) | ast::pat_tup(_) { some(~[]) }
}
}
-fn enter_rec(dm: def_map, m: match, col: uint, fields: ~[ast::ident],
+fn enter_rec(dm: DefMap, m: match, col: uint, fields: ~[ast::ident],
val: ValueRef) -> match {
let dummy = @{id: 0, node: ast::pat_wild, span: dummy_sp()};
do enter_match(dm, m, col, val) |p| {
}
}
-fn enter_tup(dm: def_map, m: match, col: uint, val: ValueRef,
+fn enter_tup(dm: DefMap, m: match, col: uint, val: ValueRef,
n_elts: uint) -> match {
let dummy = @{id: 0, node: ast::pat_wild, span: dummy_sp()};
do enter_match(dm, m, col, val) |p| {
}
}
-fn enter_box(dm: def_map, m: match, col: uint, val: ValueRef) -> match {
+fn enter_box(dm: DefMap, m: match, col: uint, val: ValueRef) -> match {
let dummy = @{id: 0, node: ast::pat_wild, span: dummy_sp()};
do enter_match(dm, m, col, val) |p| {
alt p.node {
}
}
-fn enter_uniq(dm: def_map, m: match, col: uint, val: ValueRef) -> match {
+fn enter_uniq(dm: DefMap, m: match, col: uint, val: ValueRef) -> match {
let dummy = @{id: 0, node: ast::pat_wild, span: dummy_sp()};
do enter_match(dm, m, col, val) |p| {
alt p.node {
mangle_internal_name_by_path,
mangle_internal_name_by_path_and_seq,
mangle_exported_name};
-import metadata::{csearch, cstore, encoder};
+import metadata::{csearch, cstore, decoder, encoder};
import metadata::common::link_meta;
import util::ppaux;
import util::ppaux::{ty_to_str, ty_to_short_str};
fn trans_free(cx: block, v: ValueRef) -> block {
let _icx = cx.insn_ctxt(~"trans_free");
- Call(cx, cx.ccx().upcalls.free, ~[PointerCast(cx, v, T_ptr(T_i8()))]);
- cx
+ trans_rtcall(cx, ~"free", ~[PointerCast(cx, v, T_ptr(T_i8()))], ignore)
}
fn trans_unique_free(cx: block, v: ValueRef) -> block {
let _icx = cx.insn_ctxt(~"trans_unique_free");
- Call(cx, cx.ccx().upcalls.exchange_free,
- ~[PointerCast(cx, v, T_ptr(T_i8()))]);
- ret cx;
+ trans_rtcall(cx, ~"exchange_free", ~[PointerCast(cx, v, T_ptr(T_i8()))],
+ ignore)
}
fn umax(cx: block, a: ValueRef, b: ValueRef) -> ValueRef {
fn alloca_maybe_zeroed(cx: block, t: TypeRef, zero: bool) -> ValueRef {
let _icx = cx.insn_ctxt(~"alloca");
if cx.unreachable { ret llvm::LLVMGetUndef(t); }
- let initcx = raw_block(cx.fcx, cx.fcx.llstaticallocas);
+ let initcx = raw_block(cx.fcx, false, cx.fcx.llstaticallocas);
let p = Alloca(initcx, t);
if zero { Store(initcx, C_null(t), p); }
ret p;
fn arrayalloca(cx: block, t: TypeRef, v: ValueRef) -> ValueRef {
let _icx = cx.insn_ctxt(~"arrayalloca");
if cx.unreachable { ret llvm::LLVMGetUndef(t); }
- ret ArrayAlloca(raw_block(cx.fcx, cx.fcx.llstaticallocas), t, v);
+ ret ArrayAlloca(raw_block(cx.fcx, false, cx.fcx.llstaticallocas), t, v);
}
// Given a pointer p, returns a pointer sz(p) (i.e., inc'd by sz bytes).
// malloc_raw_dyn: allocates a box to contain a given type, but with a
// potentially dynamic size.
fn malloc_raw_dyn(bcx: block, t: ty::t, heap: heap,
- size: ValueRef) -> ValueRef {
+ size: ValueRef) -> result {
let _icx = bcx.insn_ctxt(~"malloc_raw");
let ccx = bcx.ccx();
- let (mk_fn, upcall) = alt heap {
- heap_shared { (ty::mk_imm_box, ccx.upcalls.malloc) }
- heap_exchange {
- (ty::mk_imm_uniq, ccx.upcalls.exchange_malloc )
- }
+ let (mk_fn, rtcall) = alt heap {
+ heap_shared { (ty::mk_imm_box, ~"malloc") }
+ heap_exchange { (ty::mk_imm_uniq, ~"exchange_malloc") }
};
// Grab the TypeRef type of box_ptr_ty.
lazily_emit_all_tydesc_glue(ccx, static_ti);
// Allocate space:
- let rval = Call(bcx, upcall, ~[static_ti.tydesc, size]);
- ret PointerCast(bcx, rval, llty);
+ let tydesc = PointerCast(bcx, static_ti.tydesc, T_ptr(T_i8()));
+ let rval = alloca_zeroed(bcx, T_ptr(T_i8()));
+ let bcx = trans_rtcall(bcx, rtcall, ~[tydesc, size], save_in(rval));
+ let retval = {bcx: bcx, val: PointerCast(bcx, Load(bcx, rval), llty)};
+ ret retval;
}
// malloc_raw: expects an unboxed type and returns a pointer to
// enough space for a box of that type. This includes a rust_opaque_box
// header.
-fn malloc_raw(bcx: block, t: ty::t, heap: heap) -> ValueRef {
+fn malloc_raw(bcx: block, t: ty::t, heap: heap) -> result {
malloc_raw_dyn(bcx, t, heap, llsize_of(bcx.ccx(), type_of(bcx.ccx(), t)))
}
// malloc_general_dyn: usefully wraps malloc_raw_dyn; allocates a box,
// and pulls out the body
-fn malloc_general_dyn(bcx: block, t: ty::t, heap: heap, size: ValueRef) ->
- {box: ValueRef, body: ValueRef} {
+fn malloc_general_dyn(bcx: block, t: ty::t, heap: heap, size: ValueRef)
+ -> {bcx: block, box: ValueRef, body: ValueRef} {
let _icx = bcx.insn_ctxt(~"malloc_general");
- let llbox = malloc_raw_dyn(bcx, t, heap, size);
+ let {bcx: bcx, val: llbox} = malloc_raw_dyn(bcx, t, heap, size);
let non_gc_box = non_gc_box_cast(bcx, llbox);
let body = GEPi(bcx, non_gc_box, ~[0u, abi::box_field_body]);
- ret {box: llbox, body: body};
+ ret {bcx: bcx, box: llbox, body: body};
}
-fn malloc_general(bcx: block, t: ty::t, heap: heap) ->
- {box: ValueRef, body: ValueRef} {
+fn malloc_general(bcx: block, t: ty::t, heap: heap)
+ -> {bcx: block, box: ValueRef, body: ValueRef} {
malloc_general_dyn(bcx, t, heap,
llsize_of(bcx.ccx(), type_of(bcx.ccx(), t)))
}
-fn malloc_boxed(bcx: block, t: ty::t) -> {box: ValueRef, body: ValueRef} {
+fn malloc_boxed(bcx: block, t: ty::t)
+ -> {bcx: block, box: ValueRef, body: ValueRef} {
malloc_general(bcx, t, heap_shared)
}
-fn malloc_unique(bcx: block, t: ty::t) -> {box: ValueRef, body: ValueRef} {
+fn malloc_unique(bcx: block, t: ty::t)
+ -> {bcx: block, box: ValueRef, body: ValueRef} {
malloc_general(bcx, t, heap_exchange)
}
alt attr::find_inline_attr(attrs) {
attr::ia_hint { set_inline_hint(llfn); }
attr::ia_always { set_always_inline(llfn); }
+ attr::ia_never { set_no_inline(llfn); }
attr::ia_none { /* fallthrough */ }
}
}
fn type_is_structural_or_param(t: ty::t) -> bool {
if ty::type_is_structural(t) { ret true; }
alt ty::get(t).struct {
- ty::ty_param(_, _) { ret true; }
+ ty::ty_param(*) { ret true; }
_ { ret false; }
}
}
t: ty::t, heap: heap,
dest: dest) -> block {
let _icx = bcx.insn_ctxt(~"trans_boxed_expr");
- let {box, body} = malloc_general(bcx, t, heap);
+ let {bcx, box, body} = malloc_general(bcx, t, heap);
add_clean_free(bcx, box, heap);
let bcx = trans_expr_save_in(bcx, contents, body);
revoke_clean(bcx, box);
}
fn need_invoke(bcx: block) -> bool {
+ if (bcx.ccx().sess.opts.debugging_opts & session::no_landing_pads != 0) {
+ ret false;
+ }
+
+ // Avoid using invoke if we are already inside a landing pad.
+ if bcx.is_lpad {
+ ret false;
+ }
+
if have_cached_lpad(bcx) {
ret true;
}
alt copy inf.landing_pad {
some(target) { cached = some(target); }
none {
- pad_bcx = sub_block(bcx, ~"unwind");
+ pad_bcx = lpad_block(bcx, ~"unwind");
inf.landing_pad = some(pad_bcx.llbb);
}
}
ret bcx;
}
+fn trans_struct(block_context: block, span: span, fields: ~[ast::field],
+ id: ast::node_id, dest: dest) -> block {
+
+ let _instruction_context = block_context.insn_ctxt(~"trans_struct");
+ let mut block_context = block_context;
+ let type_context = block_context.ccx().tcx;
+
+ let struct_type = node_id_type(block_context, id);
+
+ // Get the address to store the structure into. If there is no address,
+ // just translate each field and be done with it.
+ let dest_address;
+ alt dest {
+ ignore => {
+ for fields.each |field| {
+ block_context = trans_expr(block_context,
+ field.node.expr,
+ ignore);
+ }
+ ret block_context;
+ }
+ save_in(destination_address) => {
+ dest_address = destination_address;
+ }
+ by_val(_) => {
+ type_context.sess.span_bug(span, ~"didn't expect by_val");
+ }
+ }
+
+ // Get the class ID and its fields.
+ let class_fields, class_id, substitutions;
+ alt ty::get(struct_type).struct {
+ ty::ty_class(existing_class_id, existing_substitutions) => {
+ class_id = existing_class_id;
+ substitutions = existing_substitutions;
+ class_fields = ty::lookup_class_fields(type_context, class_id);
+ }
+ _ => {
+ type_context.sess.span_bug(span, ~"didn't resolve to a struct");
+ }
+ }
+
+ // Now translate each field.
+ let mut temp_cleanups = ~[];
+ for fields.each |field| {
+ let mut found = none;
+ for class_fields.eachi |i, class_field| {
+ if str::eq(*class_field.ident, *field.node.ident) {
+ found = some((i, class_field.id));
+ break;
+ }
+ }
+
+ let index, field_id;
+ alt found {
+ some((found_index, found_field_id)) => {
+ index = found_index;
+ field_id = found_field_id;
+ }
+ none => {
+ type_context.sess.span_bug(span, ~"unknown field");
+ }
+ }
+
+ let dest = GEPi(block_context, dest_address, ~[0, index]);
+ block_context = trans_expr_save_in(block_context,
+ field.node.expr,
+ dest);
+
+ let field_type = ty::lookup_field_type(type_context, class_id,
+ field_id, substitutions);
+ add_clean_temp_mem(block_context, dest, field_type);
+ vec::push(temp_cleanups, dest);
+ }
+
+ // Now revoke the cleanups, as we pass responsibility for the data
+ // structure onto the caller.
+ for temp_cleanups.each |temp_cleanup| {
+ revoke_clean(block_context, temp_cleanup);
+ }
+
+ block_context
+}
+
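The field-matching loop inside `trans_struct` above is a linear scan: for each written field it searches the class's declared fields by name, recording the index for the `GEPi` and failing on unknown names. A minimal modern-Rust sketch of that lookup (function and type names here are illustrative, not the compiler's):

```rust
// Mirror of the lookup in trans_struct: find each written field's position
// among the declared class fields; None corresponds to the "unknown field"
// span_bug in the patch.
fn field_index(class_fields: &[&str], name: &str) -> Option<usize> {
    class_fields.iter().position(|f| *f == name)
}

fn main() {
    let declared = ["x", "y", "z"];
    // "y" is the second declared field, so it lands at struct index 1.
    assert_eq!(field_index(&declared, "y"), Some(1));
    // A name not declared on the class is a compiler bug in the patch.
    assert_eq!(field_index(&declared, "w"), None);
    println!("ok");
}
```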
// Store the result of an expression in the given memory location, ensuring
// that nil or bot expressions get ignore rather than save_in as destination.
fn trans_expr_save_in(bcx: block, e: @ast::expr, dest: ValueRef)
ast::expr_rec(args, base) {
ret trans_rec(bcx, args, base, e.id, dest);
}
+ ast::expr_struct(_, fields) {
+ ret trans_struct(bcx, e.span, fields, e.id, dest);
+ }
ast::expr_tup(args) { ret trans_tup(bcx, args, dest); }
ast::expr_vstore(e, v) { ret tvec::trans_vstore(bcx, e, v, dest); }
ast::expr_lit(lit) { ret trans_lit(bcx, e, *lit, dest); }
let V_str = PointerCast(bcx, V_fail_str, T_ptr(T_i8()));
let V_filename = PointerCast(bcx, V_filename, T_ptr(T_i8()));
let args = ~[V_str, V_filename, C_int(ccx, V_line)];
- let bcx = invoke(bcx, bcx.ccx().upcalls._fail, args);
+ let bcx = trans_rtcall(bcx, ~"fail", args, ignore);
Unreachable(bcx);
ret bcx;
}
+fn trans_rtcall(bcx: block, name: ~str, args: ~[ValueRef], dest: dest)
+ -> block {
+ let did = bcx.ccx().rtcalls[name];
+ let fty = if did.crate == ast::local_crate {
+ ty::node_id_to_type(bcx.ccx().tcx, did.node)
+ } else {
+ csearch::get_type(bcx.ccx().tcx, did).ty
+ };
+ let rty = ty::ty_fn_ret(fty);
+ ret trans_call_inner(
+ bcx, none, fty, rty,
+ |bcx| lval_static_fn_inner(bcx, did, 0, ~[], none),
+ arg_vals(args), dest);
+}
+
fn trans_break_cont(bcx: block, to_end: bool)
-> block {
let _icx = bcx.insn_ctxt(~"trans_break_cont");
// You probably don't want to use this one. See the
// next three functions instead.
fn new_block(cx: fn_ctxt, parent: option<block>, +kind: block_kind,
- name: ~str, opt_node_info: option<node_info>) -> block {
+ is_lpad: bool, name: ~str, opt_node_info: option<node_info>)
+ -> block {
let s = if cx.ccx.sess.opts.save_temps || cx.ccx.sess.opts.debuginfo {
cx.ccx.names(name)
let llbb: BasicBlockRef = str::as_c_str(s, |buf| {
llvm::LLVMAppendBasicBlock(cx.llfn, buf)
});
- let bcx = mk_block(llbb, parent, kind, opt_node_info, cx);
+ let bcx = mk_block(llbb, parent, kind, is_lpad, opt_node_info, cx);
do option::iter(parent) |cx| {
if cx.unreachable { Unreachable(bcx); }
};
// Use this when you're at the top block of a function or the like.
fn top_scope_block(fcx: fn_ctxt, opt_node_info: option<node_info>) -> block {
- ret new_block(fcx, none, simple_block_scope(),
+ ret new_block(fcx, none, simple_block_scope(), false,
~"function top level", opt_node_info);
}
fn scope_block(bcx: block,
opt_node_info: option<node_info>,
n: ~str) -> block {
- ret new_block(bcx.fcx, some(bcx), simple_block_scope(),
+ ret new_block(bcx.fcx, some(bcx), simple_block_scope(), bcx.is_lpad,
n, opt_node_info);
}
mut cleanups: ~[],
mut cleanup_paths: ~[],
mut landing_pad: none
- }), n, opt_node_info);
+ }), bcx.is_lpad, n, opt_node_info);
}
+// Use this when creating a block for the inside of a landing pad.
+fn lpad_block(bcx: block, n: ~str) -> block {
+ new_block(bcx.fcx, some(bcx), block_non_scope, true, n, none)
+}
// Use this when you're making a general CFG BB within a scope.
fn sub_block(bcx: block, n: ~str) -> block {
- new_block(bcx.fcx, some(bcx), block_non_scope, n, none)
+ new_block(bcx.fcx, some(bcx), block_non_scope, bcx.is_lpad, n, none)
}
-fn raw_block(fcx: fn_ctxt, llbb: BasicBlockRef) -> block {
- mk_block(llbb, none, block_non_scope, none, fcx)
+fn raw_block(fcx: fn_ctxt, is_lpad: bool, llbb: BasicBlockRef) -> block {
+ mk_block(llbb, none, block_non_scope, is_lpad, none, fcx)
}
fn finish_fn(fcx: fn_ctxt, lltop: BasicBlockRef) {
let _icx = fcx.insn_ctxt(~"finish_fn");
tie_up_header_blocks(fcx, lltop);
- let ret_cx = raw_block(fcx, fcx.llreturn);
+ let ret_cx = raw_block(fcx, false, fcx.llreturn);
RetVoid(ret_cx);
}
fn tie_up_header_blocks(fcx: fn_ctxt, lltop: BasicBlockRef) {
let _icx = fcx.insn_ctxt(~"tie_up_header_blocks");
- Br(raw_block(fcx, fcx.llstaticallocas), fcx.llloadenv);
- Br(raw_block(fcx, fcx.llloadenv), lltop);
+ Br(raw_block(fcx, false, fcx.llstaticallocas), fcx.llloadenv);
+ Br(raw_block(fcx, false, fcx.llloadenv), lltop);
}
enum self_arg { impl_self(ty::t), no_self, }
};
foreign::trans_foreign_mod(ccx, foreign_mod, abi);
}
- ast::item_class(tps, _traits, items, ctor, m_dtor) {
+ ast::item_class(tps, _traits, items, m_ctor, m_dtor) {
if tps.len() == 0u {
let psubsts = {tys: ty::ty_params_to_tys(ccx.tcx, tps),
vtables: none,
bounds: @~[]};
- trans_class_ctor(ccx, *path, ctor.node.dec, ctor.node.body,
- get_item_val(ccx, ctor.node.id), psubsts,
- ctor.node.id, local_def(item.id), ctor.span);
- do option::iter(m_dtor) |dtor| {
+ do option::iter(m_ctor) |ctor| {
+ trans_class_ctor(ccx, *path, ctor.node.dec, ctor.node.body,
+ get_item_val(ccx, ctor.node.id), psubsts,
+ ctor.node.id, local_def(item.id), ctor.span);
+ }
+ do option::iter(m_dtor) |dtor| {
trans_class_dtor(ccx, *path, dtor.node.body,
dtor.node.id, none, none, local_def(item.id));
};
}
}
+fn push_rtcall(ccx: @crate_ctxt, name: ~str, did: ast::def_id) {
+ if ccx.rtcalls.contains_key(name) {
+ fail #fmt("multiple definitions for runtime call %s", name);
+ }
+ ccx.rtcalls.insert(name, did);
+}
+
+fn gather_local_rtcalls(ccx: @crate_ctxt, crate: @ast::crate) {
+ visit::visit_crate(*crate, (), visit::mk_simple_visitor(@{
+ visit_item: |item| alt item.node {
+ ast::item_fn(decl, _, _) {
+ let attr_metas = attr::attr_metas(
+ attr::find_attrs_by_name(item.attrs, ~"rt"));
+ do vec::iter(attr_metas) |attr_meta| {
+ alt attr::get_meta_item_list(attr_meta) {
+ some(list) {
+ let name = *attr::get_meta_item_name(vec::head(list));
+ push_rtcall(ccx, name, {crate: ast::local_crate,
+ node: item.id});
+ }
+ none {}
+ }
+ }
+ }
+ _ {}
+ }
+ with *visit::default_simple_visitor()
+ }));
+}
+
+fn gather_external_rtcalls(ccx: @crate_ctxt) {
+ do cstore::iter_crate_data(ccx.sess.cstore) |_cnum, cmeta| {
+ do decoder::each_path(cmeta) |path| {
+ let pathname = path.path_string;
+ alt path.def_like {
+ decoder::dl_def(d) {
+ alt d {
+ ast::def_fn(did, _) {
+ // FIXME (#2861): This should really iterate attributes
+ // like gather_local_rtcalls, but we'll need to
+ // export attributes in metadata/encoder before we can do
+ // that.
+ let sentinel = "rt::rt_";
+ let slen = str::len(sentinel);
+ if str::starts_with(pathname, sentinel) {
+ let name = str::substr(pathname,
+ slen, str::len(pathname)-slen);
+ push_rtcall(ccx, name, did);
+ }
+ }
+ _ {}
+ }
+ }
+ _ {}
+ }
+ true
+ }
+ }
+}
+
+fn gather_rtcalls(ccx: @crate_ctxt, crate: @ast::crate) {
+ gather_local_rtcalls(ccx, crate);
+ gather_external_rtcalls(ccx);
+
+ // FIXME (#2861): Check for other rtcalls too, once they are
+ // supported. Also probably want to check type signature so we don't crash
+ // in some obscure place in LLVM if the user provides the wrong signature
+ // for an rtcall.
+ let expected_rtcalls =
+ ~[~"exchange_free", ~"exchange_malloc", ~"fail", ~"free", ~"malloc"];
+ for vec::each(expected_rtcalls) |name| {
+ if !ccx.rtcalls.contains_key(name) {
+ fail #fmt("no definition for runtime call %s", name);
+ }
+ }
+}
+
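The rtcall machinery above is a name-to-def-id registry with two invariants: a name may be registered at most once (`push_rtcall` fails on duplicates), and after gathering, every expected runtime call must be present (`gather_rtcalls` fails otherwise). A rough sketch of that contract in modern Rust, with `DefId` standing in for `ast::def_id`:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for ast::def_id: (crate number, node id).
type DefId = (u32, u32);

// Mirrors push_rtcall: reject a second definition for the same name.
fn push_rtcall(rtcalls: &mut HashMap<String, DefId>, name: &str, did: DefId)
    -> Result<(), String>
{
    if rtcalls.contains_key(name) {
        return Err(format!("multiple definitions for runtime call {}", name));
    }
    rtcalls.insert(name.to_string(), did);
    Ok(())
}

// Mirrors the final loop of gather_rtcalls: every expected call must exist.
fn check_expected(rtcalls: &HashMap<String, DefId>, expected: &[&str])
    -> Result<(), String>
{
    for name in expected {
        if !rtcalls.contains_key(*name) {
            return Err(format!("no definition for runtime call {}", name));
        }
    }
    Ok(())
}

fn main() {
    let mut rtcalls = HashMap::new();
    push_rtcall(&mut rtcalls, "fail", (0, 1)).unwrap();
    push_rtcall(&mut rtcalls, "malloc", (0, 2)).unwrap();
    // Duplicate registration is an error, as in the patch.
    assert!(push_rtcall(&mut rtcalls, "fail", (0, 3)).is_err());
    // "free" was never gathered, so the final check fails.
    assert!(check_expected(&rtcalls, &["fail", "malloc", "free"]).is_err());
    println!("ok");
}
```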
fn create_module_map(ccx: @crate_ctxt) -> ValueRef {
let elttype = T_struct(~[ccx.int_type, ccx.int_type]);
let maptype = T_array(elttype, ccx.module_data.size() + 1u);
}
fn trans_crate(sess: session::session, crate: @ast::crate, tcx: ty::ctxt,
- output: ~str, emap: resolve::exp_map,
+ output: ~str, emap: resolve3::ExportMap,
maps: astencode::maps)
-> (ModuleRef, link_meta) {
let sha = std::sha1::sha1();
upcalls:
upcall::declare_upcalls(targ_cfg, tn, tydesc_type,
llmod),
+ rtcalls: str_hash::<ast::def_id>(),
tydesc_type: tydesc_type,
int_type: int_type,
float_type: float_type,
mut do_not_commit_warning_issued: false};
+ gather_rtcalls(ccx, crate);
+
{
let _icx = ccx.insn_ctxt(~"data");
trans_constants(ccx, crate);
let ccx = bcx.ccx();
if !ccx.sess.no_asm_comments() {
let sanitized = str::replace(text, ~"$", ~"");
- let comment_text = ~"# " + sanitized;
+ let comment_text = ~"# " + str::replace(sanitized, ~"\n", ~"\n\t# ");
let asm = str::as_c_str(comment_text, |c| {
str::as_c_str(~"", |e| {
count_insn(bcx, ~"inlineasm");
fn allocate_cbox(bcx: block,
ck: ty::closure_kind,
cdata_ty: ty::t)
- -> ValueRef {
+ -> result {
let _icx = bcx.insn_ctxt(~"closure::allocate_cbox");
let ccx = bcx.ccx(), tcx = ccx.tcx;
}
// Allocate and initialize the box:
- let llbox = alt ck {
+ let {bcx, val} = alt ck {
ty::ck_box {
malloc_raw(bcx, cdata_ty, heap_shared)
}
let cbox_ty = tuplify_box_ty(tcx, cdata_ty);
let llbox = base::alloc_ty(bcx, cbox_ty);
nuke_ref_count(bcx, llbox);
- llbox
+ {bcx: bcx, val: llbox}
}
};
- ret llbox;
+ ret {bcx: bcx, val: val};
}
type closure_result = {
let cdata_ty = mk_closure_tys(tcx, bound_values);
// allocate closure in the heap
- let llbox = allocate_cbox(bcx, ck, cdata_ty);
+ let {bcx: bcx, val: llbox} = allocate_cbox(bcx, ck, cdata_ty);
let mut temp_cleanups = ~[];
// cbox_ty has the form of a tuple: (a, b, c) we want a ptr to a
load_ret_handle: bool,
ck: ty::closure_kind) {
let _icx = fcx.insn_ctxt(~"closure::load_environment");
- let bcx = raw_block(fcx, fcx.llloadenv);
+ let bcx = raw_block(fcx, false, fcx.llloadenv);
// Load a pointer to the closure data, skipping over the box header:
let llcdata = base::opaque_box_body(bcx, cdata_ty, fcx.llenv);
dest: dest) -> block {
let _icx = bcx.insn_ctxt(~"closure::trans_expr_fn");
if dest == ignore { ret bcx; }
- let ccx = bcx.ccx(), bcx = bcx;
+ let ccx = bcx.ccx();
let fty = node_id_type(bcx, id);
let llfnty = type_of_fn_from_ty(ccx, fty);
let sub_path = vec::append_one(bcx.fcx.path, path_name(@~"anon"));
let s = mangle_internal_name_by_path(ccx, sub_path);
let llfn = decl_internal_cdecl_fn(ccx.llmod, s, llfnty);
- let trans_closure_env = fn@(ck: ty::closure_kind) -> ValueRef {
+ let trans_closure_env = fn@(ck: ty::closure_kind) -> result {
let cap_vars = capture::compute_capture_vars(
ccx.tcx, id, proto, cap_clause);
let ret_handle = alt is_loop_body { some(x) { x } none { none } };
Store(bcx, C_bool(true), bcx.fcx.llretptr);
}
});
- llbox
+ {bcx: bcx, val: llbox}
};
- let closure = alt proto {
+ let {bcx: bcx, val: closure} = alt proto {
ast::proto_any | ast::proto_block { trans_closure_env(ty::ck_block) }
ast::proto_box { trans_closure_env(ty::ck_box) }
ast::proto_uniq { trans_closure_env(ty::ck_uniq) }
ast::proto_bare {
trans_closure(ccx, sub_path, decl, body, llfn, no_self, none,
id, |_fcx| { }, |_bcx| { });
- C_null(T_opaque_box_ptr(ccx))
+ {bcx: bcx, val: C_null(T_opaque_box_ptr(ccx))}
}
};
fill_fn_pair(bcx, get_dest_addr(dest), llfn, closure);
+
ret bcx;
}
let sz = Add(bcx, sz, shape::llsize_of(ccx, T_box_header(ccx)));
// Allocate memory, update original ptr, and copy existing data
- let malloc = ccx.upcalls.exchange_malloc;
- let cbox_out = Call(bcx, malloc, ~[tydesc, sz]);
- let cbox_out = PointerCast(bcx, cbox_out, llopaquecboxty);
+ let malloc = ~"exchange_malloc";
+ let opaque_tydesc = PointerCast(bcx, tydesc, T_ptr(T_i8()));
+ let rval = alloca_zeroed(bcx, T_ptr(T_i8()));
+ let bcx = trans_rtcall(bcx, malloc, ~[opaque_tydesc, sz],
+ save_in(rval));
+ let cbox_out = PointerCast(bcx, Load(bcx, rval), llopaquecboxty);
call_memmove(bcx, cbox_out, cbox_in, sz);
Store(bcx, cbox_out, cboxptr);
import syntax::{ast, ast_map};
import driver::session;
import session::session;
-import middle::{resolve, ty};
+import middle::ty;
import back::{link, abi, upcall};
import syntax::codemap::span;
import lib::llvm::{llvm, target_data, type_names, associate_type,
externs: hashmap<~str, ValueRef>,
intrinsics: hashmap<~str, ValueRef>,
item_vals: hashmap<ast::node_id, ValueRef>,
- exp_map: resolve::exp_map,
+ exp_map: resolve3::ExportMap,
reachable: reachable::map,
item_symbols: hashmap<ast::node_id, ~str>,
mut main_fn: option<ValueRef>,
maps: astencode::maps,
stats: stats,
upcalls: @upcall::upcalls,
+ rtcalls: hashmap<~str, ast::def_id>,
tydesc_type: TypeRef,
int_type: TypeRef,
float_type: TypeRef,
let parent: option<block>;
// The 'kind' of basic block this is.
let kind: block_kind;
+ // Is this block part of a landing pad?
+ let is_lpad: bool;
// info about the AST node this block originated from, if any
let node_info: option<node_info>;
// The function context for the function to which this block is
// attached.
let fcx: fn_ctxt;
new(llbb: BasicBlockRef, parent: option<block>, -kind: block_kind,
- node_info: option<node_info>, fcx: fn_ctxt) {
+ is_lpad: bool, node_info: option<node_info>, fcx: fn_ctxt) {
// sigh
self.llbb = llbb; self.terminated = false; self.unreachable = false;
- self.parent = parent; self.kind = kind; self.node_info = node_info;
- self.fcx = fcx;
+ self.parent = parent; self.kind = kind; self.is_lpad = is_lpad;
+ self.node_info = node_info; self.fcx = fcx;
}
}
enum block = @block_;
fn mk_block(llbb: BasicBlockRef, parent: option<block>, -kind: block_kind,
- node_info: option<node_info>, fcx: fn_ctxt) -> block {
- block(@block_(llbb, parent, kind, node_info, fcx))
+ is_lpad: bool, node_info: option<node_info>, fcx: fn_ctxt)
+ -> block {
+ block(@block_(llbb, parent, kind, is_lpad, node_info, fcx))
}
// First two args are retptr, env
fn struct_tys(ty: TypeRef) -> ~[TypeRef] {
let n = llvm::LLVMCountStructElementTypes(ty);
let elts = vec::from_elem(n as uint, ptr::null());
- do vec::as_buf(elts) |buf| {
+ do vec::as_buf(elts) |buf, _len| {
llvm::LLVMGetStructElementTypes(ty, buf);
}
ret elts;
tie_up_header_blocks(fcx, lltop);
// Make sure our standard return block (that we didn't use) is terminated
- let ret_cx = raw_block(fcx, fcx.llreturn);
+ let ret_cx = raw_block(fcx, false, fcx.llreturn);
Unreachable(ret_cx);
}
vtables: typeck::vtable_res) -> ValueRef {
let _icx = ccx.insn_ctxt(~"impl::make_impl_vtable");
let tcx = ccx.tcx;
+
+ // XXX: This should support multiple traits.
let ifce_id = expect(ccx.sess,
- ty::ty_to_def_id(option::get(ty::impl_trait(tcx,
- impl_id))),
+ ty::ty_to_def_id(ty::impl_traits(tcx, impl_id)[0]),
|| ~"make_impl_vtable: non-trait-type implemented");
+
let has_tps = (*ty::lookup_item_type(ccx.tcx, impl_id).bounds).len() > 0u;
make_vtable(ccx, vec::map(*ty::trait_methods(tcx, ifce_id), |im| {
let fty = ty::subst_tps(tcx, substs, ty::mk_fn(tcx, im.fty));
if dest == ignore { ret trans_expr(bcx, val, ignore); }
let ccx = bcx.ccx();
let v_ty = expr_ty(bcx, val);
- let {box: llbox, body: body} = malloc_boxed(bcx, v_ty);
+ let {bcx: bcx, box: llbox, body: body} = malloc_boxed(bcx, v_ty);
add_clean_free(bcx, llbox, heap_shared);
let bcx = trans_expr_save_in(bcx, val, body);
revoke_clean(bcx, llbox);
type map = std::map::hashmap<node_id, ()>;
-type ctx = {exp_map: resolve::exp_map,
+type ctx = {exp_map: resolve3::ExportMap,
tcx: ty::ctxt,
method_map: typeck::method_map,
rmap: map};
-fn find_reachable(crate_mod: _mod, exp_map: resolve::exp_map,
+fn find_reachable(crate_mod: _mod, exp_map: resolve3::ExportMap,
tcx: ty::ctxt, method_map: typeck::method_map) -> map {
let rmap = std::map::int_hash();
let cx = {exp_map: exp_map, tcx: tcx, method_map: method_map, rmap: rmap};
}
}
}
- item_class(tps, _traits, items, ctor, m_dtor) {
- cx.rmap.insert(ctor.node.id, ());
- if tps.len() > 0u || attr::find_inline_attr(ctor.node.attrs)
- != attr::ia_none {
- traverse_inline_body(cx, ctor.node.body);
+ item_class(tps, _traits, items, m_ctor, m_dtor) {
+ do option::iter(m_ctor) |ctor| {
+ cx.rmap.insert(ctor.node.id, ());
+ if tps.len() > 0u || attr::find_inline_attr(ctor.node.attrs)
+ != attr::ia_none {
+ traverse_inline_body(cx, ctor.node.body);
+ }
}
do option::iter(m_dtor) |dtor| {
cx.rmap.insert(dtor.node.id, ());
ty::ty_trait(_, _) { self.leaf(~"trait") }
ty::ty_var(_) { self.leaf(~"var") }
ty::ty_var_integral(_) { self.leaf(~"var_integral") }
- ty::ty_param(n, _) { self.visit(~"param", ~[self.c_uint(n)]) }
+ ty::ty_param(p) { self.visit(~"param", ~[self.c_uint(p.idx)]) }
ty::ty_self { self.leaf(~"self") }
ty::ty_type { self.leaf(~"type") }
ty::ty_opaque_box { self.leaf(~"opaque_box") }
let vecbodyty = ty::mk_mut_unboxed_vec(bcx.tcx(), unit_ty);
let vecsize = Add(bcx, alloc, llsize_of(ccx, ccx.opaque_vec_type));
- let {box, body} = base::malloc_general_dyn(bcx, vecbodyty, heap, vecsize);
+ let {bcx, box, body} =
+ base::malloc_general_dyn(bcx, vecbodyty, heap, vecsize);
Store(bcx, fill, GEPi(bcx, body, ~[0u, abi::vec_elt_fill]));
Store(bcx, alloc, GEPi(bcx, body, ~[0u, abi::vec_elt_alloc]));
ret {bcx: bcx, val: box};
}
false
}
- ty::ty_param(n, _) {
- cx.uses[n] |= use;
+ ty::ty_param(p) {
+ cx.uses[p.idx] |= use;
false
}
_ { true }
alt e.node {
expr_vstore(_, _) |
expr_vec(_, _) |
- expr_rec(_, _) | expr_tup(_) |
+ expr_rec(_, _) | expr_struct(*) | expr_tup(_) |
expr_unary(box(_), _) | expr_unary(uniq(_), _) |
expr_binary(add, _, _) |
expr_copy(_) | expr_move(_, _) {
fn duplicate(bcx: block, v: ValueRef, t: ty::t) -> result {
let _icx = bcx.insn_ctxt(~"uniq::duplicate");
let content_ty = content_ty(t);
- let {box: dst_box, body: dst_body} = malloc_unique(bcx, content_ty);
+ let {bcx: bcx, box: dst_box, body: dst_body} =
+ malloc_unique(bcx, content_ty);
let src_box = v;
let src_body = opaque_box_body(bcx, content_ty, src_box);
export t;
export new_ty_hash;
export enum_variants, substd_enum_variants, enum_is_univariant;
-export trait_methods, store_trait_methods, impl_trait;
+export trait_methods, store_trait_methods, impl_traits;
export enum_variant_with_id;
export ty_dtor;
export ty_param_bounds_and_ty;
export ty_class;
export region, bound_region, encl_region;
export re_bound, re_free, re_scope, re_static, re_var;
-export br_self, br_anon, br_named;
+export br_self, br_anon, br_named, br_cap_avoid;
export get, type_has_params, type_needs_infer, type_has_regions;
export type_has_resources, type_id;
export tbox_has_flag;
export kind, kind_implicitly_copyable, kind_send_copy, kind_copyable;
export kind_noncopyable, kind_const;
export kind_can_be_copied, kind_can_be_sent, kind_can_be_implicitly_copied;
+export kind_is_safe_for_default_mode;
export kind_is_owned;
export proto_kind, kind_lteq, type_kind;
export operators;
export terr_proto_mismatch;
export terr_ret_style_mismatch;
export purity_to_str;
+export param_tys_in_type;
// Data types
vecs_implicitly_copyable: bool,
cstore: metadata::cstore::cstore,
sess: session::session,
- def_map: resolve::def_map,
+ def_map: resolve3::DefMap,
region_map: middle::region::region_map,
region_paramd_items: middle::region::region_paramd_items,
ck_uniq,
}
+/// Innards of a function type:
+///
+/// - `purity` is the function's effect (pure, impure, unsafe).
+/// - `proto` is the protocol (fn@, fn~, etc).
+/// - `inputs` is the list of arguments and their modes.
+/// - `output` is the return type.
+/// - `ret_style` indicates whether the function returns a value or fails.
type fn_ty = {purity: ast::purity,
proto: ast::proto,
inputs: ~[arg],
output: t,
ret_style: ret_style};
-// See discussion at head of region.rs
+type param_ty = {idx: uint, def_id: def_id};
+
+/// Representation of regions:
enum region {
+    /// Bound regions are found (primarily) in function types. They indicate
+    /// region parameters that have yet to be replaced with actual regions.
+    /// They are analogous to type parameters, except that, due to the
+    /// monomorphic nature of our type system, bound type parameters are
+    /// always replaced with fresh type variables whenever an item is
+    /// referenced, so type parameters only appear "free" in types; regions,
+    /// in contrast, can appear either free or bound. When a function is
+    /// called, all bound regions tied to that function's node-id are
+    /// replaced with fresh region variables whose values are then inferred.
re_bound(bound_region),
+
+ /// When checking a function body, the types of all arguments and so forth
+ /// that refer to bound region parameters are modified to refer to free
+ /// region parameters.
re_free(node_id, bound_region),
+
+ /// A concrete region naming some expression within the current function.
re_scope(node_id),
+
+ /// Static data that has an "infinite" lifetime.
+ re_static,
+
+ /// A region variable. Should not exist after typeck.
re_var(region_vid),
- re_static // effectively `top` in the region lattice
}
enum bound_region {
- br_self, // The self region for classes, impls
- br_anon, // The anonymous region parameter for a given function.
- br_named(ast::ident) // A named region parameter.
+ /// The self region for classes, impls (&T in a type defn or &self/T)
+ br_self,
+
+ /// Anonymous region parameter for a given fn (&T)
+ br_anon,
+
+ /// Named region parameters for functions (a in &a/T)
+ br_named(ast::ident),
+
+ /// Handles capture-avoiding substitution in a rather subtle case. If you
+ /// have a closure whose argument types are being inferred based on the
+ /// expected type, and the expected type includes bound regions, then we
+ /// will wrap those bound regions in a br_cap_avoid() with the id of the
+ /// fn expression. This ensures that the names are not "captured" by the
+ /// enclosing scope, which may define the same names. For an example of
+ /// where this comes up, see src/test/compile-fail/regions-ret-borrowed.rs
+ /// and regions-ret-borrowed-1.rs.
+ br_cap_avoid(ast::node_id, @bound_region),
}
type opt_region = option<region>;
-// The type substs represents the kinds of things that can be substituted into
-// a type. There may be at most one region parameter (self_r), along with
-// some number of type parameters (tps).
-//
-// The region parameter is present on nominative types (enums, resources,
-// classes) that are declared as having a region parameter. If the type is
-// declared as `enum foo&`, then self_r should always be non-none. If the
-// type is declared as `enum foo`, then self_r will always be none. In the
-// latter case, typeck::ast_ty_to_ty() will reject any references to `&T` or
-// `&self.T` within the type and report an error.
+/// The type substs represents the kinds of things that can be substituted to
+/// convert a polytype into a monotype. Note however that substituting bound
+/// regions other than `self` is done through a different mechanism.
+///
+/// `tps` represents the type parameters in scope. They are indexed according
+/// to the order in which they were declared.
+///
+/// `self_r` indicates the region parameter `self` that is present on nominal
+/// types (enums, classes) declared as having a region parameter. `self_r`
+/// should always be none for types that are not region-parameterized and
+/// some(_) for types that are. The only bound region parameter that should
+/// appear within a region-parameterized type is `self`.
+///
+/// `self_ty` is the type to which `self` should be remapped, if any. The
+/// `self` type is rather funny in that it can only appear on interfaces and
+/// is always substituted away to the implementing type for an interface.
type substs = {
self_r: opt_region,
self_ty: option<ty::t>,
ty_var(tv_vid), // type variable during typechecking
ty_var_integral(tvi_vid), // type variable during typechecking, for
// integral types only
- ty_param(uint, def_id), // type parameter
+ ty_param(param_ty), // type parameter
ty_self, // special, implicit `self` type parameter
// "Fake" types, used for trans purposes
}
fn mk_ctxt(s: session::session,
- dm: resolve::def_map,
+ dm: resolve3::DefMap,
amap: ast_map::map,
freevars: freevars::freevar_map,
region_map: middle::region::region_map,
ty_nil | ty_bot | ty_bool | ty_int(_) | ty_float(_) | ty_uint(_) |
ty_estr(_) | ty_type | ty_opaque_closure_ptr(_) |
ty_opaque_box {}
- ty_param(_, _) { flags |= has_params as uint; }
+ ty_param(_) { flags |= has_params as uint; }
ty_var(_) | ty_var_integral(_) { flags |= needs_infer as uint; }
ty_self { flags |= has_self as uint; }
ty_enum(_, substs) | ty_class(_, substs) | ty_trait(_, substs) {
fn mk_self(cx: ctxt) -> t { mk_t(cx, ty_self) }
-fn mk_param(cx: ctxt, n: uint, k: def_id) -> t { mk_t(cx, ty_param(n, k)) }
+fn mk_param(cx: ctxt, n: uint, k: def_id) -> t {
+ mk_t(cx, ty_param({idx: n, def_id: k}))
+}
fn mk_type(cx: ctxt) -> t { mk_t(cx, ty_type) }
ty_nil | ty_bot | ty_bool | ty_int(_) | ty_uint(_) | ty_float(_) |
ty_estr(_) | ty_type | ty_opaque_box | ty_self |
ty_opaque_closure_ptr(_) | ty_var(_) | ty_var_integral(_) |
- ty_param(_, _) {
+ ty_param(_) {
}
ty_box(tm) | ty_evec(tm, _) | ty_unboxed_vec(tm) |
ty_ptr(tm) | ty_rptr(_, tm) {
let tb = ty::get(typ);
if !tbox_has_flag(tb, has_params) { ret typ; }
alt tb.struct {
- ty_param(idx, _) { tps[idx] }
+ ty_param(p) { tps[p.idx] }
sty { fold_sty_to_ty(cx, sty, |t| subst_tps(cx, tps, t)) }
}
}
let tb = get(typ);
if !tbox_has_flag(tb, needs_subst) { ret typ; }
alt tb.struct {
- ty_param(idx, _) {substs.tps[idx]}
+ ty_param(p) {substs.tps[p.idx]}
ty_self {substs.self_ty.get()}
_ {
fold_regions_and_ty(
enum kind { kind_(u32) }
/// can be copied (implicitly or explicitly)
-const KIND_MASK_COPY : u32 = 0b00000000000000000000000000000001_u32;
+const KIND_MASK_COPY : u32 = 0b000000000000000000000000001_u32;
/// can be sent: no shared box, borrowed ptr (must imply OWNED)
-const KIND_MASK_SEND : u32 = 0b00000000000000000000000000000010_u32;
+const KIND_MASK_SEND : u32 = 0b000000000000000000000000010_u32;
/// is owned (no borrowed ptrs)
-const KIND_MASK_OWNED : u32 = 0b00000000000000000000000000000100_u32;
+const KIND_MASK_OWNED : u32 = 0b000000000000000000000000100_u32;
/// is deeply immutable
-const KIND_MASK_CONST : u32 = 0b00000000000000000000000000001000_u32;
+const KIND_MASK_CONST : u32 = 0b000000000000000000000001000_u32;
/// can be implicitly copied (must imply COPY)
-const KIND_MASK_IMPLICIT : u32 = 0b00000000000000000000000000010000_u32;
+const KIND_MASK_IMPLICIT : u32 = 0b000000000000000000000010000_u32;
+
+/// safe for default mode (subset of KIND_MASK_IMPLICIT)
+const KIND_MASK_DEFAULT_MODE : u32 = 0b000000000000000000000100000_u32;
fn kind_noncopyable() -> kind {
kind_(0u32)
kind_(KIND_MASK_IMPLICIT | KIND_MASK_COPY)
}
+fn kind_safe_for_default_mode() -> kind {
+ // similar to implicit copy, but always includes vectors and strings
+ kind_(KIND_MASK_DEFAULT_MODE | KIND_MASK_IMPLICIT | KIND_MASK_COPY)
+}
+
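The kind constants above form a small bit-lattice: kinds compose with bitwise OR (as in `kind_safe_for_default_mode() | kind_owned()`), predicates are mask tests, and `remove_implicit` strips both `IMPLICIT` and `DEFAULT_MODE` because the latter is defined as a subset of implicit copyability. A sketch of those operations in modern Rust, reusing the same bit positions (names shortened; the old code's `k - kind_(mask)` is modeled here as and-not, which is an assumption about its overloaded subtraction):

```rust
// Same bit positions as the KIND_MASK_* constants in the patch.
const COPY: u32         = 0b000001;
const SEND: u32         = 0b000010;
const OWNED: u32        = 0b000100;
const IMPLICIT: u32     = 0b010000;
const DEFAULT_MODE: u32 = 0b100000;

fn kind_safe_for_default_mode() -> u32 { DEFAULT_MODE | IMPLICIT | COPY }
fn kind_owned() -> u32 { OWNED }

// Mirrors kind_is_safe_for_default_mode: a simple mask test.
fn is_safe_for_default_mode(k: u32) -> bool { k & DEFAULT_MODE == DEFAULT_MODE }

// Mirrors remove_implicit: DEFAULT_MODE must go whenever IMPLICIT does,
// since it is a strictly stronger property.
fn remove_implicit(k: u32) -> u32 { k & !(IMPLICIT | DEFAULT_MODE) }

// Mirrors remove_send.
fn remove_send(k: u32) -> u32 { k & !SEND }

fn main() {
    let k = kind_safe_for_default_mode() | kind_owned();
    assert!(is_safe_for_default_mode(k));
    // Stripping implicit copyability also strips default-mode safety.
    assert!(!is_safe_for_default_mode(remove_implicit(k)));
    // Removing SEND leaves the other bits untouched.
    assert!(is_safe_for_default_mode(remove_send(k)));
    println!("ok");
}
```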
fn kind_implicitly_sendable() -> kind {
kind_(KIND_MASK_IMPLICIT | KIND_MASK_COPY | KIND_MASK_SEND)
}
+fn kind_safe_for_default_mode_send() -> kind {
+ // similar to implicit copy, but always includes vectors and strings
+ kind_(KIND_MASK_DEFAULT_MODE | KIND_MASK_IMPLICIT |
+ KIND_MASK_COPY | KIND_MASK_SEND)
+}
+
fn kind_send_copy() -> kind {
kind_(KIND_MASK_COPY | KIND_MASK_SEND)
}
}
fn remove_implicit(k: kind) -> kind {
- k - kind_(KIND_MASK_IMPLICIT)
+ k - kind_(KIND_MASK_IMPLICIT | KIND_MASK_DEFAULT_MODE)
}
fn remove_send(k: kind) -> kind {
*k & KIND_MASK_IMPLICIT == KIND_MASK_IMPLICIT
}
+pure fn kind_is_safe_for_default_mode(k: kind) -> bool {
+ *k & KIND_MASK_DEFAULT_MODE == KIND_MASK_DEFAULT_MODE
+}
+
pure fn kind_can_be_copied(k: kind) -> bool {
*k & KIND_MASK_COPY == KIND_MASK_COPY
}
alt p {
ast::proto_any { kind_noncopyable() }
ast::proto_block { kind_noncopyable() }
- ast::proto_box { kind_implicitly_copyable() | kind_owned() }
+ ast::proto_box { kind_safe_for_default_mode() | kind_owned() }
ast::proto_uniq { kind_send_copy() | kind_owned() }
- ast::proto_bare { kind_implicitly_sendable() | kind_const() |
+ ast::proto_bare { kind_safe_for_default_mode_send() | kind_const() |
kind_owned() }
}
}
// Insert a default in case we loop back on self recursively.
cx.kind_cache.insert(ty, kind_top());
- let result = alt get(ty).struct {
+ let mut result = alt get(ty).struct {
// Scalar and unique types are sendable, constant, and owned
ty_nil | ty_bot | ty_bool | ty_int(_) | ty_uint(_) | ty_float(_) |
ty_ptr(_) {
- kind_implicitly_sendable() | kind_const() | kind_owned()
+ kind_safe_for_default_mode_send() | kind_const() | kind_owned()
}
// Implicit copyability of strs is configurable
// Those with refcounts raise noncopyable to copyable,
// lower sendable to copyable. Therefore just set result to copyable.
ty_box(tm) {
- remove_send(mutable_type_kind(cx, tm) | kind_implicitly_copyable())
+ remove_send(mutable_type_kind(cx, tm) | kind_safe_for_default_mode())
}
// Iface instances are (for now) like shared boxes, basically
- ty_trait(_, _) { kind_implicitly_copyable() | kind_owned() }
+ ty_trait(_, _) { kind_safe_for_default_mode() | kind_owned() }
// Region pointers are copyable but NOT owned nor sendable
- ty_rptr(_, _) { kind_implicitly_copyable() }
+ ty_rptr(_, _) { kind_safe_for_default_mode() }
// Unique boxes and vecs have the kind of their contained type,
// but unique boxes can't be implicitly copyable.
// contained type, but aren't implicitly copyable. Fixed vectors have
// the kind of the element they contain, taking mutability into account.
ty_evec(tm, vstore_box) {
- remove_send(kind_implicitly_copyable() | mutable_type_kind(cx, tm))
+ remove_send(kind_safe_for_default_mode() | mutable_type_kind(cx, tm))
}
ty_evec(tm, vstore_slice(_)) {
- remove_owned_send(kind_implicitly_copyable() |
+ remove_owned_send(kind_safe_for_default_mode() |
mutable_type_kind(cx, tm))
}
ty_evec(tm, vstore_fixed(_)) {
// All estrs are copyable; uniques and interiors are sendable.
ty_estr(vstore_box) {
- kind_implicitly_copyable() | kind_const() | kind_owned()
+ kind_safe_for_default_mode() | kind_const() | kind_owned()
}
ty_estr(vstore_slice(_)) {
- kind_implicitly_copyable() | kind_const()
+ kind_safe_for_default_mode() | kind_const()
}
ty_estr(vstore_fixed(_)) {
- kind_implicitly_sendable() | kind_const() | kind_owned()
+ kind_safe_for_default_mode_send() | kind_const() | kind_owned()
}
// Records lower to the lowest of their members.
lowest
}
- ty_param(_, did) {
- param_bounds_to_kind(cx.ty_param_bounds.get(did.node))
+ ty_param(p) {
+ param_bounds_to_kind(cx.ty_param_bounds.get(p.def_id.node))
}
// self is a special type parameter that can only appear in ifaces; it
}
};
+ // arbitrary threshold to prevent by-value copying of big records
+ if kind_is_safe_for_default_mode(result) {
+ if type_size(cx, ty) > 4 {
+ result -= kind_(KIND_MASK_DEFAULT_MODE);
+ }
+ }
+
cx.kind_cache.insert(ty, result);
ret result;
}
-// True if instantiating an instance of `ty` requires an instance of `r_ty`.
+/// Gives a rough estimate of how much space it takes to represent
+/// an instance of `ty`. Used for the mode transition.
+fn type_size(cx: ctxt, ty: t) -> uint {
+ alt get(ty).struct {
+ ty_nil | ty_bot | ty_bool | ty_int(_) | ty_uint(_) | ty_float(_) |
+ ty_ptr(_) | ty_box(_) | ty_uniq(_) | ty_estr(vstore_uniq) |
+ ty_trait(*) | ty_rptr(*) | ty_evec(_, vstore_uniq) |
+ ty_evec(_, vstore_box) | ty_estr(vstore_box) => {
+ 1
+ }
+
+ ty_evec(_, vstore_slice(_)) |
+ ty_estr(vstore_slice(_)) |
+ ty_fn(_) => {
+ 2
+ }
+
+ ty_evec(t, vstore_fixed(n)) => {
+ type_size(cx, t.ty) * n
+ }
+
+ ty_estr(vstore_fixed(n)) => {
+ n
+ }
+
+ ty_rec(flds) => {
+ flds.foldl(0, |s, f| s + type_size(cx, f.mt.ty))
+ }
+
+ ty_class(did, substs) {
+ let flds = class_items_as_fields(cx, did, substs);
+ flds.foldl(0, |s, f| s + type_size(cx, f.mt.ty))
+ }
+
+ ty_tup(tys) {
+ tys.foldl(0, |s, t| s + type_size(cx, t))
+ }
+
+ ty_enum(did, substs) {
+ let variants = substd_enum_variants(cx, did, substs);
+ variants.foldl( // find max size of any variant
+ 0,
+ |m, v| uint::max(m,
+ // find size of this variant:
+ v.args.foldl(0, |s, a| s + type_size(cx, a))))
+ }
+
+ ty_param(_) | ty_self {
+ 1
+ }
+
+ ty_var(_) | ty_var_integral(_) {
+        cx.sess.bug(~"Asked to compute size of a type variable");
+ }
+ ty_type | ty_opaque_closure_ptr(_) | ty_opaque_box | ty_unboxed_vec(_) {
+        cx.sess.bug(~"Asked to compute size of fictitious type");
+ }
+ }
+}
+
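The `type_size` estimate above counts words, not bytes: pointers and scalars are one, slice and fn pairs are two, aggregates sum their parts, and enums take the largest variant. Together with the "threshold of 4" check earlier in the hunk, this decides whether a type keeps `KIND_MASK_DEFAULT_MODE`. A toy version over a simplified type tree (the enum and its constructors are illustrative, not the compiler's `ty::t`):

```rust
// Toy mirror of type_size's recursion.
enum Ty {
    Word,                // scalars and boxed pointers: size 1
    Slice,               // slices, strings-by-slice, fn pairs: size 2
    FixedVec(Box<Ty>, usize), // element size times length
    Tup(Vec<Ty>),        // records/tuples sum their fields
    Enum(Vec<Vec<Ty>>),  // size of the largest variant
}

fn type_size(t: &Ty) -> usize {
    match t {
        Ty::Word => 1,
        Ty::Slice => 2,
        Ty::FixedVec(elt, n) => type_size(elt) * n,
        Ty::Tup(tys) => tys.iter().map(type_size).sum(),
        Ty::Enum(variants) => variants
            .iter()
            .map(|args| args.iter().map(type_size).sum::<usize>())
            .max()
            .unwrap_or(0),
    }
}

fn main() {
    // Five words exceed the patch's threshold of 4, so such a record would
    // lose KIND_MASK_DEFAULT_MODE and be passed by reference by default.
    let big = Ty::Tup(vec![Ty::Word, Ty::Word, Ty::Word, Ty::Word, Ty::Word]);
    assert!(type_size(&big) > 4);
    let small = Ty::FixedVec(Box::new(Ty::Word), 2);
    assert_eq!(type_size(&small), 2);
    println!("ok");
}
```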
+// True if instantiating an instance of `r_ty` does not require an instance
+// of `r_ty` itself.
fn is_instantiable(cx: ctxt, r_ty: t) -> bool {
fn type_requires(cx: ctxt, seen: @mut ~[def_id],
ty_fn(_) |
ty_var(_) |
ty_var_integral(_) |
- ty_param(_, _) |
+ ty_param(_) |
ty_self |
ty_type |
ty_opaque_box |
ty_evec(mt, vstore_fixed(_)) | ty_unboxed_vec(mt) {
result = type_is_pod(cx, mt.ty);
}
- ty_param(_, _) { result = false; }
+ ty_param(_) { result = false; }
ty_opaque_closure_ptr(_) { result = true; }
ty_class(did, substs) {
result = vec::any(lookup_class_fields(cx, did), |f| {
fn type_param(ty: t) -> option<uint> {
alt get(ty).struct {
- ty_param(id, _) { ret some(id); }
+ ty_param(p) { ret some(p.idx); }
_ {/* fall through */ }
}
ret none;
ty::br_self { 0u }
ty::br_anon { 1u }
ty::br_named(str) { str::hash(*str) }
+ ty::br_cap_avoid(id, br) { id as uint | hash_bound_region(*br) }
}
}
ty_self { 28u }
ty_var(v) { hash_uint(29u, v.to_uint()) }
ty_var_integral(v) { hash_uint(30u, v.to_uint()) }
- ty_param(pid, did) { hash_def(hash_uint(31u, pid), did) }
+ ty_param(p) { hash_def(hash_uint(31u, p.idx), p.def_id) }
ty_type { 32u }
ty_bot { 34u }
ty_ptr(mt) { hash_subty(35u, mt.ty) }
ret none;
}
+/// Returns a vector containing all the type parameters that appear in `ty`.
+/// The vector may contain duplicates. Probably should be converted to a
+/// bitset or some other representation.
+fn param_tys_in_type(ty: t) -> ~[param_ty] {
+ let mut rslt = ~[];
+ do walk_ty(ty) |ty| {
+ alt get(ty).struct {
+ ty_param(p) {
+ vec::push(rslt, p);
+ }
+ _ { }
+ }
+ }
+ rslt
+}
+
fn occurs_check(tcx: ctxt, sp: span, vid: tv_vid, rt: t) {
// Returns a vec of all the type variables occurring in `ty`. It may
ty_tup(_) { ~"tuple" }
ty_var(_) { ~"variable" }
ty_var_integral(_) { ~"integral variable" }
- ty_param(_, _) { ~"type parameter" }
+ ty_param(_) { ~"type parameter" }
ty_self { ~"self" }
}
}
result
}
-fn impl_trait(cx: ctxt, id: ast::def_id) -> option<t> {
+fn impl_traits(cx: ctxt, id: ast::def_id) -> ~[t] {
if id.crate == ast::local_crate {
- #debug("(impl_trait) searching for trait impl %?", id);
+ #debug("(impl_traits) searching for trait impl %?", id);
alt cx.items.find(id.node) {
- some(ast_map::node_item(@{node: ast::item_impl(
- _, some(@{ref_id: id, _}), _, _), _}, _)) {
- some(node_id_to_type(cx, id))
+ some(ast_map::node_item(@{
+ node: ast::item_impl(_, trait_refs, _, _),
+ _},
+ _)) {
+
+ do vec::map(trait_refs) |trait_ref| {
+ node_id_to_type(cx, trait_ref.ref_id)
+ }
}
some(ast_map::node_item(@{node: ast::item_class(*),
_},_)) {
alt cx.def_map.find(id.node) {
some(def_ty(trait_id)) {
// XXX: Doesn't work cross-crate.
- #debug("(impl_trait) found trait id %?", trait_id);
- some(node_id_to_type(cx, trait_id.node))
+ #debug("(impl_traits) found trait id %?", trait_id);
+ ~[node_id_to_type(cx, trait_id.node)]
}
some(x) {
- cx.sess.bug(#fmt("impl_trait: trait ref is in trait map \
+ cx.sess.bug(#fmt("impl_traits: trait ref is in trait map \
but is bound to %?", x));
}
none {
- none
+ ~[]
}
}
}
- _ { none }
+ _ { ~[] }
}
} else {
- csearch::get_impl_trait(cx, id)
+ csearch::get_impl_traits(cx, id)
}
}
type ty_table = hashmap<ast::def_id, ty::t>;
-type crate_ctxt_ = {impl_map: resolve::impl_map,
+type crate_ctxt_ = {impl_map: resolve3::ImplMap,
trait_map: resolve3::TraitMap,
method_map: method_map,
vtable_map: vtable_map,
}
fn check_crate(tcx: ty::ctxt,
- impl_map: resolve::impl_map,
+ impl_map: resolve3::ImplMap,
trait_map: resolve3::TraitMap,
crate: @ast::crate)
-> (method_map, vtable_map) {
}
}
-fn ast_region_to_region<AC: ast_conv, RS: region_scope>(
+fn ast_region_to_region<AC: ast_conv, RS: region_scope copy owned>(
self: AC, rscope: RS, span: span, a_r: @ast::region) -> ty::region {
let res = alt a_r.node {
get_region_reporting_err(self.tcx(), span, res)
}
-fn ast_path_to_substs_and_ty<AC: ast_conv, RS: region_scope copy>(
+fn ast_path_to_substs_and_ty<AC: ast_conv, RS: region_scope copy owned>(
self: AC, rscope: RS, did: ast::def_id,
path: @ast::path) -> ty_param_substs_and_ty {
{substs: substs, ty: ty::subst(tcx, substs, decl_ty)}
}
-fn ast_path_to_ty<AC: ast_conv, RS: region_scope copy>(
+fn ast_path_to_ty<AC: ast_conv, RS: region_scope copy owned>(
self: AC,
rscope: RS,
did: ast::def_id,
// Parses the programmer's textual representation of a type into our
// internal notion of a type. `getter` is a function that returns the type
// corresponding to a definition ID:
-fn ast_ty_to_ty<AC: ast_conv, RS: region_scope copy>(
+fn ast_ty_to_ty<AC: ast_conv, RS: region_scope copy owned>(
self: AC, rscope: RS, &&ast_ty: @ast::ty) -> ty::t {
- fn ast_mt_to_mt<AC: ast_conv, RS: region_scope copy>(
+ fn ast_mt_to_mt<AC: ast_conv, RS: region_scope copy owned>(
self: AC, rscope: RS, mt: ast::mt) -> ty::mt {
ret {ty: ast_ty_to_ty(self, rscope, mt.ty), mutbl: mt.mutbl};
// Handle @, ~, and & being able to mean estrs and evecs.
// If a_seq_ty is a str or a vec, make it an estr/evec
- fn mk_maybe_vstore<AC: ast_conv, RS: region_scope copy>(
+ fn mk_maybe_vstore<AC: ast_conv, RS: region_scope copy owned>(
self: AC, rscope: RS, a_seq_ty: ast::mt, vst: ty::vstore,
constr: fn(ty::mt) -> ty::t) -> ty::t {
ret typ;
}
-fn ty_of_arg<AC: ast_conv, RS: region_scope copy>(
+fn ty_of_arg<AC: ast_conv, RS: region_scope copy owned>(
self: AC, rscope: RS, a: ast::arg,
expected_ty: option<ty::arg>) -> ty::arg {
type expected_tys = option<{inputs: ~[ty::arg],
output: ty::t}>;
-fn ty_of_fn_decl<AC: ast_conv, RS: region_scope copy>(
+fn ty_of_fn_decl<AC: ast_conv, RS: region_scope copy owned>(
self: AC, rscope: RS,
proto: ast::proto,
decl: ast::fn_decl,
*/
-import astconv::{ast_conv, ast_ty_to_ty, ast_region_to_region};
+import astconv::{ast_conv, ast_path_to_ty, ast_ty_to_ty};
+import astconv::{ast_region_to_region};
import collect::{methods}; // ccx.to_ty()
import middle::ty::{tv_vid, vid};
import regionmanip::{replace_bound_regions_in_fn_ty, region_of};
import typeck::infer::{unify_methods}; // infcx.set()
import typeck::infer::{resolve_type, force_tvar};
+import std::map::str_hash;
+
type fn_ctxt_ =
// var_bindings, locals and next_var_id are shared
// with any nested functions that capture the environment
in_scope_regions: isr_alist,
- node_types: smallintmap::smallintmap<ty::t>,
+ node_types: hashmap<ast::node_id, ty::t>,
node_type_substs: hashmap<ast::node_id, ty::substs>,
ccx: @crate_ctxt};
mut region_lb: region_bnd,
mut region_ub: region_bnd,
in_scope_regions: @nil,
- node_types: smallintmap::mk(),
+ node_types: map::int_hash(),
node_type_substs: map::int_hash(),
ccx: ccx})
}
{infcx: infer::new_infer_ctxt(tcx),
locals: int_hash(),
purity: decl.purity,
- node_types: smallintmap::mk(),
+ node_types: map::int_hash(),
node_type_substs: map::int_hash()}
}
some(fcx) {
let self_ty = ccx.to_ty(rscope::type_rscope(rp), ty);
for ms.each |m| { check_method(ccx, m, self_ty);}
}
- ast::item_class(tps, traits, members, ctor, m_dtor) {
- let tcx = ccx.tcx;
- let class_t = ty::node_id_to_type(tcx, it.id);
- // typecheck the ctor
- check_bare_fn(ccx, ctor.node.dec,
- ctor.node.body, ctor.node.id,
- some(class_t));
- // Write the ctor's self's type
- write_ty_to_tcx(tcx, ctor.node.self_id, class_t);
+ ast::item_class(tps, traits, members, m_ctor, m_dtor) {
+ let tcx = ccx.tcx;
+ let class_t = ty::node_id_to_type(tcx, it.id);
+
+ do option::iter(m_ctor) |ctor| {
+ // typecheck the ctor
+ check_bare_fn(ccx, ctor.node.dec,
+ ctor.node.body, ctor.node.id,
+ some(class_t));
+ // Write the ctor's self's type
+ write_ty_to_tcx(tcx, ctor.node.self_id, class_t);
+ }
do option::iter(m_dtor) |dtor| {
// typecheck the dtor
- check_bare_fn(ccx, ast_util::dtor_dec(),
- dtor.node.body, dtor.node.id,
- some(class_t));
- // Write the dtor's self's type
- write_ty_to_tcx(tcx, dtor.node.self_id, class_t);
- };
- // typecheck the members
+ check_bare_fn(ccx, ast_util::dtor_dec(),
+ dtor.node.body, dtor.node.id,
+ some(class_t));
+ // Write the dtor's self's type
+ write_ty_to_tcx(tcx, dtor.node.self_id, class_t);
+ };
+
+ // typecheck the members
for members.each |m| { check_class_member(ccx, class_t, m); }
// Check that there's at least one field
let (fields,_) = split_class_items(members);
fn write_ty(node_id: ast::node_id, ty: ty::t) {
#debug["write_ty(%d, %s) in fcx %s",
node_id, ty_to_str(self.tcx(), ty), self.tag()];
- self.node_types.insert(node_id as uint, ty);
+ self.node_types.insert(node_id, ty);
}
fn write_substs(node_id: ast::node_id, +substs: ty::substs) {
if !ty::substs_is_noop(substs) {
}
fn expr_ty(ex: @ast::expr) -> ty::t {
- alt self.node_types.find(ex.id as uint) {
+ alt self.node_types.find(ex.id) {
some(t) { t }
none {
self.tcx().sess.bug(#fmt["no type for expr %d (%s) in fcx %s",
}
}
fn node_ty(id: ast::node_id) -> ty::t {
- alt self.node_types.find(id as uint) {
+ alt self.node_types.find(id) {
some(t) { t }
none {
self.tcx().sess.bug(
expected: option<ty::t>) {
let tcx = fcx.ccx.tcx;
+ // Find the expected input/output types (if any). Careful to
+ // avoid capture of bound regions in the expected type. See
+ // def'n of br_cap_avoid() for a more lengthy explanation of
+ // what's going on here.
let expected_tys = do unpack_expected(fcx, expected) |sty| {
alt sty {
- ty::ty_fn(fn_ty) {some({inputs:fn_ty.inputs,
- output:fn_ty.output})}
- _ {none}
+ ty::ty_fn(fn_ty) => {
+ let {fn_ty, _} =
+ replace_bound_regions_in_fn_ty(
+ tcx, @nil, none, fn_ty,
+ |br| ty::re_bound(ty::br_cap_avoid(expr.id, @br)));
+ some({inputs:fn_ty.inputs,
+ output:fn_ty.output})
+ }
+ _ => {none}
}
};
}
}
}
+ ast::expr_struct(path, fields) {
+ // Resolve the path.
+ let class_id;
+ alt tcx.def_map.find(id) {
+ some(ast::def_class(type_def_id)) => {
+ class_id = type_def_id;
+ }
+ _ => {
+ tcx.sess.span_bug(path.span, ~"structure constructor does \
+ not name a structure type");
+ }
+ }
+
+ // Look up the number of type parameters and the raw type, and
+ // determine whether the class is region-parameterized.
+ let type_parameter_count, region_parameterized, raw_type;
+ if class_id.crate == ast::local_crate {
+ region_parameterized =
+ tcx.region_paramd_items.contains_key(class_id.node);
+ alt tcx.items.find(class_id.node) {
+ some(ast_map::node_item(@{
+ node: ast::item_class(type_parameters, _, _, _, _),
+ _
+ }, _)) => {
+
+ type_parameter_count = type_parameters.len();
+
+ let self_region;
+ if region_parameterized {
+ self_region = some(ty::re_bound(ty::br_self));
+ } else {
+ self_region = none;
+ }
+
+ raw_type = ty::mk_class(tcx, class_id, {
+ self_r: self_region,
+ self_ty: none,
+ tps: ty::ty_params_to_tys(tcx, type_parameters)
+ });
+ }
+ _ => {
+ tcx.sess.span_bug(expr.span,
+ ~"resolve didn't map this to a class");
+ }
+ }
+ } else {
+ let item_type = ty::lookup_item_type(tcx, class_id);
+ type_parameter_count = (*item_type.bounds).len();
+ region_parameterized = item_type.rp;
+ raw_type = item_type.ty;
+ }
+
+ // Generate the struct type.
+ let self_region;
+ if region_parameterized {
+ self_region = some(fcx.infcx.next_region_var_nb());
+ } else {
+ self_region = none;
+ }
+
+ let type_parameters = fcx.infcx.next_ty_vars(type_parameter_count);
+ let substitutions = {
+ self_r: self_region,
+ self_ty: none,
+ tps: type_parameters
+ };
+
+ let struct_type = ty::subst(tcx, substitutions, raw_type);
+
+ // Look up the class fields and build up a map.
+ let class_fields = ty::lookup_class_fields(tcx, class_id);
+ let class_field_map = str_hash();
+ let mut fields_found = 0;
+ for class_fields.each |field| {
+ // XXX: Check visibility here.
+ class_field_map.insert(*field.ident, (field.id, false));
+ }
+
+ // Typecheck each field.
+ for fields.each |field| {
+ alt class_field_map.find(*field.node.ident) {
+ none => {
+ tcx.sess.span_err(field.span,
+ #fmt("structure has no field named \
+ `%s`",
+ *field.node.ident));
+ }
+ some((_, true)) => {
+ tcx.sess.span_err(field.span,
+ #fmt("field `%s` specified more than \
+ once",
+ *field.node.ident));
+ }
+ some((field_id, false)) => {
+ let expected_field_type =
+ ty::lookup_field_type(tcx, class_id, field_id,
+ substitutions);
+ bot |= check_expr(fcx,
+ field.node.expr,
+ some(expected_field_type));
+ fields_found += 1;
+ }
+ }
+ }
+
+ // Make sure the programmer specified all the fields.
+ assert fields_found <= class_fields.len();
+ if fields_found < class_fields.len() {
+ let mut missing_fields = ~[];
+ for class_fields.each |class_field| {
+ let name = *class_field.ident;
+ let (_, seen) = class_field_map.get(name);
+ if !seen {
+ vec::push(missing_fields,
+ ~"`" + name + ~"`");
+ }
+ }
+
+ tcx.sess.span_err(expr.span,
+ #fmt("missing field%s: %s",
+ if missing_fields.len() == 1 {
+ ~""
+ } else {
+ ~"s"
+ },
+ str::connect(missing_fields, ~", ")));
+ }
+
+ // Write in the resulting type.
+ fcx.write_ty(id, struct_type);
+ }
ast::expr_field(base, field, tys) {
bot = check_field(fcx, expr, false, base, field, tys);
}
writeback::resolve_type_vars_in_expr(fcx, e);
}
+/// Checks whether a type can be created without an instance of itself.
+/// This is similar to, but distinct from, the question of whether a type
+/// can be represented. For example, the following type:
+///
+/// enum foo { none, some(foo) }
+///
+/// is instantiable but is not representable. Similarly, the type
+///
+/// enum foo { some(@foo) }
+///
+/// is representable, but not instantiable.
fn check_instantiable(tcx: ty::ctxt,
sp: span,
item_id: ast::node_id) {
- let rty = ty::node_id_to_type(tcx, item_id);
- if !ty::is_instantiable(tcx, rty) {
+ let item_ty = ty::node_id_to_type(tcx, item_id);
+ if !ty::is_instantiable(tcx, item_ty) {
tcx.sess.span_err(sp, #fmt["this type cannot be instantiated \
without an instance of itself; \
consider using `option<%s>`",
- ty_to_str(tcx, rty)]);
+ ty_to_str(tcx, item_ty)]);
}
}
}
// Check that it is possible to instantiate this enum:
+ //
+ // This *sounds* like the same thing as representable, but it's
+ // not. See def'n of `check_instantiable()` for details.
check_instantiable(ccx.tcx, sp, id);
}
|_r| {},
|t| {
alt ty::get(t).struct {
- ty::ty_param(idx, _) { tps_used[idx] = true; }
+ ty::ty_param({idx, _}) { tps_used[idx] = true; }
_ { }
}
true
loop {
// First, see whether this is an interface-bounded parameter.
alt ty::get(self.self_ty).struct {
- ty::ty_param(n, did) {
- self.add_candidates_from_param(n, did);
+ ty::ty_param(p) {
+ self.add_candidates_from_param(p.idx, p.def_id);
}
ty::ty_trait(did, substs) {
self.add_candidates_from_trait(did, substs);
for (*trait_ids).each |trait_id| {
#debug("(adding inherent and extension candidates) \
trying trait: %s",
- node_id_to_str(self.tcx().items, trait_id.node));
+ self.def_id_to_str(trait_id));
let coherence_info = self.fcx.ccx.coherence_info;
alt coherence_info.extension_methods.find(trait_id) {
}
some(extension_methods) {
for extension_methods.each |implementation| {
+ #debug("(adding inherent and extension \
+ candidates) adding impl %s",
+ self.def_id_to_str
+ (implementation.did));
self.add_candidates_from_impl
(implementation, use_assignability);
}
}
}
+ fn def_id_to_str(def_id: ast::def_id) -> ~str {
+ if def_id.crate == ast::local_crate {
+ node_id_to_str(self.tcx().items, def_id.node)
+ } else {
+ ast_map::path_to_str(csearch::get_item_path(self.tcx(), def_id))
+ }
+ }
+
fn write_mty_from_candidate(cand: candidate) -> method_map_entry {
let tcx = self.fcx.ccx.tcx;
import util::ppaux;
import syntax::print::pprust;
import infer::{resolve_type, resolve_all, force_all,
- resolve_rvar, force_rvar};
+ resolve_rvar, force_rvar, fres};
+import middle::kind::check_owned;
-type rcx = @{fcx: @fn_ctxt, mut errors_reported: uint};
-type rvt = visit::vt<rcx>;
+enum rcx { rcx_({fcx: @fn_ctxt, mut errors_reported: uint}) }
+type rvt = visit::vt<@rcx>;
+
+impl methods for @rcx {
+ /// Try to resolve the given type.
+ ///
+ /// Note one important point: we do not attempt to resolve *region
+ /// variables* here. This is because regionck is essentially adding
+ /// constraints to those region variables and so may yet influence
+ /// how they are resolved.
+ ///
+ /// Consider this silly example:
+ ///
+ /// fn borrow(x: &int) -> &int {x}
+ /// fn foo(x: @int) -> int { /* block: B */
+ /// let b = borrow(x); /* region: <R0> */
+ /// *b
+ /// }
+ ///
+ /// Here, the region of `b` will be `<R0>`. `<R0>` is constrained
+ /// to be some subregion of the block B and some superregion of
+ /// the call. If we forced it now, we'd choose the smaller region
+ /// (the call). But that would make the *b illegal. Since we don't
+ /// resolve, the type of b will be `&<R0>.int` and then `*b` will require
+ /// that `<R0>` be bigger than the let and the `*b` expression, so we
+ /// will effectively resolve `<R0>` to be the block B.
+ fn resolve_type(unresolved_ty: ty::t) -> fres<ty::t> {
+ resolve_type(self.fcx.infcx, unresolved_ty,
+ (resolve_all | force_all) -
+ (resolve_rvar | force_rvar))
+ }
+
+ /// Try to resolve the type for the given node.
+ fn resolve_node_type(id: ast::node_id) -> fres<ty::t> {
+ self.resolve_type(self.fcx.node_ty(id))
+ }
+}
fn regionck_expr(fcx: @fn_ctxt, e: @ast::expr) {
- let rcx = @{fcx:fcx, mut errors_reported: 0u};
+ let rcx = rcx_({fcx:fcx, mut errors_reported: 0u});
let v = regionck_visitor();
- v.visit_expr(e, rcx, v);
+ v.visit_expr(e, @rcx, v);
}
fn regionck_fn(fcx: @fn_ctxt,
_decl: ast::fn_decl,
blk: ast::blk) {
- let rcx = @{fcx:fcx, mut errors_reported: 0u};
+ let rcx = rcx_({fcx:fcx, mut errors_reported: 0u});
let v = regionck_visitor();
- v.visit_block(blk, rcx, v);
+ v.visit_block(blk, @rcx, v);
}
fn regionck_visitor() -> rvt {
with *visit::default_visitor()})
}
-fn visit_item(_item: @ast::item, &&_rcx: rcx, _v: rvt) {
+fn visit_item(_item: @ast::item, &&_rcx: @rcx, _v: rvt) {
// Ignore items
}
-fn visit_local(l: @ast::local, &&rcx: rcx, v: rvt) {
+fn visit_local(l: @ast::local, &&rcx: @rcx, v: rvt) {
let e = rcx.errors_reported;
v.visit_pat(l.node.pat, rcx, v);
if e != rcx.errors_reported {
}
}
-fn visit_pat(p: @ast::pat, &&rcx: rcx, v: rvt) {
+fn visit_pat(p: @ast::pat, &&rcx: @rcx, v: rvt) {
let fcx = rcx.fcx;
alt p.node {
ast::pat_ident(path, _)
visit::visit_pat(p, rcx, v);
}
-fn visit_block(b: ast::blk, &&rcx: rcx, v: rvt) {
+fn visit_block(b: ast::blk, &&rcx: @rcx, v: rvt) {
visit::visit_block(b, rcx, v);
}
-fn visit_expr(e: @ast::expr, &&rcx: rcx, v: rvt) {
+fn visit_expr(e: @ast::expr, &&rcx: @rcx, v: rvt) {
#debug["visit_expr(e=%s)", pprust::expr_to_str(e)];
alt e.node {
_ { }
}
}
+
+ ast::expr_cast(source, _) {
+ // Determine if we are casting `source` to a trait instance.
+ // If so, we have to be sure that the type of the source obeys
+ // the trait's region bound.
+ //
+ // Note: there is a subtle point here concerning type
+ // parameters. It is possible that the type of `source`
+ // contains type parameters, which in turn may contain regions
+ // that are not visible to us (only the caller knows about
+ // them). The kind checker is ultimately responsible for
+ // guaranteeing region safety in that particular case. There
+ // is an extensive comment on the function
+ // check_cast_for_escaping_regions() in kind.rs explaining how
+ // it goes about doing that.
+ alt rcx.resolve_node_type(e.id) {
+ result::err(_) => { ret; /* typeck will fail anyhow */ }
+ result::ok(target_ty) => {
+ alt ty::get(target_ty).struct {
+ ty::ty_trait(_, substs) {
+ let trait_region = alt substs.self_r {
+ some(r) => {r}
+ none => {ty::re_static}
+ };
+ let source_ty = rcx.fcx.expr_ty(source);
+ constrain_regions_in_type(rcx, trait_region,
+ e.span, source_ty);
+ }
+ _ { }
+ }
+ }
+ };
+
+ }
+
_ { }
}
visit::visit_expr(e, rcx, v);
}
-fn visit_stmt(s: @ast::stmt, &&rcx: rcx, v: rvt) {
+fn visit_stmt(s: @ast::stmt, &&rcx: @rcx, v: rvt) {
visit::visit_stmt(s, rcx, v);
}
// references a region that is not in scope for that node. Returns
// false if an error is reported; this is used to cause us to cut off
// region checking for that subtree to avoid reporting tons of errors.
-fn visit_node(id: ast::node_id, span: span, rcx: rcx) -> bool {
+fn visit_node(id: ast::node_id, span: span, rcx: @rcx) -> bool {
let fcx = rcx.fcx;
// Try to resolve the type. If we encounter an error, then typeck
// is going to fail anyway, so just stop here and let typeck
// report errors later on in the writeback phase.
- //
- // Note one important point: we do not attempt to resolve *region
- // variables* here. This is because regionck is essentially adding
- // constraints to those region variables and so may yet influence
- // how they are resolved.
- //
- // Consider this silly example:
- //
- // fn borrow(x: &int) -> &int {x}
- // fn foo(x: @int) -> int { /* block: B */
- // let b = borrow(x); /* region: <R0> */
- // *b
- // }
- //
- // Here, the region of `b` will be `<R0>`. `<R0>` is constrainted
- // to be some subregion of the block B and some superregion of
- // the call. If we forced it now, we'd choose the smaller region
- // (the call). But that would make the *b illegal. Since we don't
- // resolve, the type of b will be `&<R0>.int` and then `*b` will require
- // that `<R0>` be bigger than the let and the `*b` expression, so we
- // will effectively resolve `<R0>` to be the block B.
- let ty0 = fcx.node_ty(id);
- let ty = alt resolve_type(fcx.infcx, ty0,
- (resolve_all | force_all) -
- (resolve_rvar | force_rvar)) {
+ let ty = alt rcx.resolve_node_type(id) {
result::err(_) { ret true; }
result::ok(ty) { ty }
};
let tcx = fcx.ccx.tcx;
let encl_region = ty::encl_region(tcx, id);
- #debug["visit_node(ty=%s, id=%d, encl_region=%s, ty0=%s)",
+ #debug["visit_node(ty=%s, id=%d, encl_region=%s)",
ppaux::ty_to_str(tcx, ty),
id,
- ppaux::region_to_str(tcx, encl_region),
- ppaux::ty_to_str(tcx, ty0)];
+ ppaux::region_to_str(tcx, encl_region)];
// Otherwise, look at the type and see if it is a region pointer.
+ ret constrain_regions_in_type(rcx, encl_region, span, ty);
+}
+
+fn constrain_regions_in_type(
+ rcx: @rcx,
+ encl_region: ty::region,
+ span: span,
+ ty: ty::t) -> bool {
+
let e = rcx.errors_reported;
ty::walk_regions_and_ty(
- tcx, ty,
+ rcx.fcx.ccx.tcx, ty,
|r| constrain_region(rcx, encl_region, span, r),
|t| ty::type_has_regions(t));
ret (e == rcx.errors_reported);
- fn constrain_region(rcx: rcx,
+ fn constrain_region(rcx: @rcx,
encl_region: ty::region,
span: span,
region: ty::region) {
import check::{fn_ctxt, impl_self_ty, methods};
import infer::{resolve_type, resolve_all, force_all, fixup_err_to_str};
+import ast_util::new_def_hash;
+import dvec::extensions;
fn has_trait_bounds(tps: ~[ty::param_bounds]) -> bool {
vec::any(tps, |bs| {
})
}
-fn lookup_vtables(fcx: @fn_ctxt, isc: resolve::iscopes, sp: span,
+fn lookup_vtables(fcx: @fn_ctxt, sp: span,
bounds: @~[ty::param_bounds], substs: ty::substs,
allow_unsafe: bool) -> vtable_res {
let tcx = fcx.ccx.tcx;
alt bound {
ty::bound_trait(i_ty) {
let i_ty = ty::subst(tcx, substs, i_ty);
- vec::push(result, lookup_vtable(fcx, isc, sp, ty, i_ty,
- allow_unsafe));
+ vec::push(result, lookup_vtable(fcx, sp, ty, i_ty,
+ allow_unsafe));
}
_ {}
}
Look up the vtable to use when treating an item of type <t>
as if it has type <trait_ty>
*/
-fn lookup_vtable(fcx: @fn_ctxt, isc: resolve::iscopes, sp: span,
- ty: ty::t, trait_ty: ty::t, allow_unsafe: bool)
- -> vtable_origin {
+fn lookup_vtable(fcx: @fn_ctxt, sp: span, ty: ty::t, trait_ty: ty::t,
+ allow_unsafe: bool)
+ -> vtable_origin {
#debug["lookup_vtable(ty=%s, trait_ty=%s)",
fcx.infcx.ty_to_str(ty), fcx.infcx.ty_to_str(trait_ty)];
};
let ty = fixup_ty(fcx, sp, ty);
alt ty::get(ty).struct {
- ty::ty_param(n, did) {
+ ty::ty_param({idx: n, def_id: did}) {
let mut n_bound = 0u;
for vec::each(*tcx.ty_param_bounds.get(did.node)) |bound| {
alt bound {
_ {
let mut found = ~[];
- for list::each(isc) |impls| {
- /* For each impl in scope... */
- for vec::each(*impls) |im| {
- // im = one specific impl
- // find the trait that im implements (if any)
- let of_ty = alt ty::impl_trait(tcx, im.did) {
- some(of_ty) { of_ty }
- _ { again; }
- };
+ let mut impls_seen = new_def_hash();
- // it must have the same id as the expected one
- alt ty::get(of_ty).struct {
- ty::ty_trait(id, _) if id != trait_id { again; }
- _ { /* ok */ }
- }
+ alt fcx.ccx.coherence_info.extension_methods.find(trait_id) {
+ none {
+ // Nothing found. Continue.
+ }
+ some(implementations) {
+ for uint::range(0, implementations.len()) |i| {
+ let im = implementations[i];
- // check whether the type unifies with the type
- // that the impl is for, and continue if not
- let {substs: substs, ty: for_ty} =
- impl_self_ty(fcx, im.did);
- let im_bs = ty::lookup_item_type(tcx, im.did).bounds;
- alt fcx.mk_subty(ty, for_ty) {
- result::err(_) { again; }
- result::ok(()) { }
- }
+ // im = one specific impl
- // check that desired trait type unifies
- #debug("(checking vtable) @2 relating trait ty %s to \
- of_ty %s",
- fcx.infcx.ty_to_str(trait_ty),
- fcx.infcx.ty_to_str(of_ty));
- let of_ty = ty::subst(tcx, substs, of_ty);
- relate_trait_tys(fcx, sp, trait_ty, of_ty);
+ // First, ensure that we haven't processed this impl yet.
+ if impls_seen.contains_key(im.did) {
+ again;
+ }
+ impls_seen.insert(im.did, ());
- // recursively process the bounds
- let trait_tps = trait_substs.tps;
- let substs_f = fixup_substs(fcx, sp, trait_id, substs);
- connect_trait_tps(fcx, sp, substs_f.tps,
- trait_tps, im.did);
- let subres = lookup_vtables(fcx, isc, sp,
- im_bs, substs_f, false);
- vec::push(found,
- vtable_static(im.did, substs_f.tps, subres));
- }
+ // find the trait that im implements (if any)
+ for vec::each(ty::impl_traits(tcx, im.did)) |of_ty| {
+ // it must have the same id as the expected one
+ alt ty::get(of_ty).struct {
+ ty::ty_trait(id, _) if id != trait_id { again; }
+ _ { /* ok */ }
+ }
- alt found.len() {
- 0u { /* fallthrough */ }
- 1u { ret found[0]; }
- _ {
- fcx.ccx.tcx.sess.span_err(
- sp, ~"multiple applicable methods in scope");
- ret found[0];
- }
+ // check whether the type unifies with the type
+ // that the impl is for, and continue if not
+ let {substs: substs, ty: for_ty} =
+ impl_self_ty(fcx, im.did);
+ let im_bs = ty::lookup_item_type(tcx, im.did).bounds;
+ alt fcx.mk_subty(ty, for_ty) {
+ result::err(_) { again; }
+ result::ok(()) { }
+ }
+
+ // check that desired trait type unifies
+ #debug("(checking vtable) @2 relating trait ty %s to \
+ of_ty %s",
+ fcx.infcx.ty_to_str(trait_ty),
+ fcx.infcx.ty_to_str(of_ty));
+ let of_ty = ty::subst(tcx, substs, of_ty);
+ relate_trait_tys(fcx, sp, trait_ty, of_ty);
+
+ // recursively process the bounds
+ let trait_tps = trait_substs.tps;
+ let substs_f = fixup_substs(fcx, sp, trait_id,
+ substs);
+ connect_trait_tps(fcx, sp, substs_f.tps,
+ trait_tps, im.did);
+ let subres = lookup_vtables(fcx, sp, im_bs, substs_f,
+ false);
+ vec::push(found,
+ vtable_static(im.did, substs_f.tps,
+ subres));
+ }
+ }
}
}
+
+ alt found.len() {
+ 0u { /* fallthrough */ }
+ 1u { ret found[0]; }
+ _ {
+ fcx.ccx.tcx.sess.span_err(
+ sp, ~"multiple applicable methods in scope");
+ ret found[0];
+ }
+ }
}
}
fn connect_trait_tps(fcx: @fn_ctxt, sp: span, impl_tys: ~[ty::t],
trait_tys: ~[ty::t], impl_did: ast::def_id) {
let tcx = fcx.ccx.tcx;
- let ity = option::get(ty::impl_trait(tcx, impl_did));
+
+ // XXX: This should work for multiple traits.
+ let ity = ty::impl_traits(tcx, impl_did)[0];
let trait_ty = ty::subst_tps(tcx, impl_tys, ity);
#debug("(connect trait tps) trait type is %?, impl did is %?",
ty::get(trait_ty).struct, impl_did);
let did = ast_util::def_id_of_def(cx.tcx.def_map.get(ex.id));
let item_ty = ty::lookup_item_type(cx.tcx, did);
if has_trait_bounds(*item_ty.bounds) {
- let impls = cx.impl_map.get(ex.id);
- cx.vtable_map.insert(ex.id, lookup_vtables(
- fcx, impls, ex.span,
- item_ty.bounds, substs, false));
+ cx.vtable_map.insert(ex.id, lookup_vtables(fcx,
+ ex.span,
+ item_ty.bounds,
+ substs,
+ false));
}
}
_ {}
_ { ex.callee_id }
};
let substs = fcx.node_ty_substs(callee_id);
- let iscs = cx.impl_map.get(ex.id);
- cx.vtable_map.insert(callee_id, lookup_vtables(
- fcx, iscs, ex.span, bounds, substs, false));
+ cx.vtable_map.insert(callee_id, lookup_vtables(fcx,
+ ex.span,
+ bounds,
+ substs,
+ false));
}
}
_ {}
let target_ty = fcx.expr_ty(ex);
alt ty::get(target_ty).struct {
ty::ty_trait(*) {
- /* Casting to an interface type.
- Look up all impls for the cast expr...
- */
- let impls = cx.impl_map.get(ex.id);
/*
Look up vtables for the type we're casting to,
passing in the source and target type
*/
- let vtable = lookup_vtable(fcx, impls, ex.span,
- fcx.expr_ty(src), target_ty,
- true);
+ let vtable = lookup_vtable(fcx, ex.span, fcx.expr_ty(src),
+ target_ty, true);
/*
Map this expression to that vtable (that is: "ex has
vtable <vtable>")
fn maybe_resolve_type_vars_for_node(wbcx: wb_ctxt, sp: span,
id: ast::node_id)
-> option<ty::t> {
- if wbcx.fcx.node_types.contains_key(id as uint) {
+ if wbcx.fcx.node_types.contains_key(id) {
resolve_type_vars_for_node(wbcx, sp, id)
} else {
none
// has at most one implementation for each type. Then we build a mapping from
// each trait in the system to its implementations.
-import metadata::csearch::{each_path, get_impl_trait, get_impls_for_mod};
+import metadata::csearch::{each_path, get_impl_traits, get_impls_for_mod};
import metadata::cstore::{cstore, iter_crate_data};
import metadata::decoder::{dl_def, dl_field, dl_impl};
import middle::resolve3::Impl;
import middle::ty::{ty_opaque_closure_ptr, ty_unboxed_vec, type_is_var};
import middle::typeck::infer::{infer_ctxt, mk_subty};
import middle::typeck::infer::{new_infer_ctxt, resolve_ivar, resolve_type};
-import syntax::ast::{crate, def_id, def_mod, item, item_class, item_const};
-import syntax::ast::{item_enum, item_fn, item_foreign_mod, item_impl};
-import syntax::ast::{item_mac, item_mod, item_trait, item_ty, local_crate};
-import syntax::ast::{method, node_id, region_param, rp_none, rp_self};
+import syntax::ast::{class_method, crate, def_id, def_mod, instance_var};
+import syntax::ast::{item, item_class, item_const, item_enum, item_fn};
+import syntax::ast::{item_foreign_mod, item_impl, item_mac, item_mod};
+import syntax::ast::{item_trait, item_ty, local_crate, method, node_id};
import syntax::ast::{trait_ref};
import syntax::ast_map::node_item;
import syntax::ast_util::{def_id_of_def, dummy_sp, new_def_hash};
class CoherenceInfo {
// Contains implementations of methods that are inherent to a type.
// Methods in these implementations don't need to be exported.
+
let inherent_methods: hashmap<def_id,@dvec<@Impl>>;
// Contains implementations of methods associated with a trait. For these,
// A mapping from implementations to the corresponding base type
// definition ID.
+
let base_type_def_ids: hashmap<def_id,def_id>;
// A set of implementations in privileged scopes; i.e. those
// implementations that are defined in the same scope as their base types.
+
let privileged_implementations: hashmap<node_id,()>;
// The set of types that we are currently in the privileged scope of. This
// is used while we traverse the AST while checking privileged scopes.
+
let privileged_types: hashmap<def_id,()>;
new(crate_context: @crate_ctxt) {
visit_crate(*crate, (), mk_simple_visitor(@{
visit_item: |item| {
+ #debug("(checking coherence) item '%s'", *item.ident);
+
alt item.node {
- item_impl(_, associated_trait, self_type, _) {
- self.check_implementation(item, associated_trait);
+ item_impl(_, associated_traits, _, _) {
+ self.check_implementation(item, associated_traits);
+ }
+ item_class(_, associated_traits, _, _, _) {
+ self.check_implementation(item, associated_traits);
}
_ {
// Nothing to do.
self.add_external_crates();
}
- fn check_implementation(item: @item,
- optional_associated_trait: option<@trait_ref>) {
-
+ fn check_implementation(item: @item, associated_traits: ~[@trait_ref]) {
let self_type = self.crate_context.tcx.tcache.get(local_def(item.id));
- alt optional_associated_trait {
- none {
- alt get_base_type_def_id(self.inference_context,
- item.span,
- self_type.ty) {
- none {
- let session = self.crate_context.tcx.sess;
- session.span_err(item.span,
- ~"no base type found for inherent \
- implementation; implement a \
- trait instead");
- }
- some(_) {
- // Nothing to do.
- }
+
+ // If there are no traits, then this implementation must have a
+ // base type.
+
+ if associated_traits.len() == 0 {
+ #debug("(checking implementation) no associated traits for item \
+ '%s'",
+ *item.ident);
+
+ alt get_base_type_def_id(self.inference_context,
+ item.span,
+ self_type.ty) {
+ none {
+ let session = self.crate_context.tcx.sess;
+ session.span_err(item.span,
+ ~"no base type found for inherent \
+ implementation; implement a \
+ trait instead");
+ }
+ some(_) {
+ // Nothing to do.
}
}
- some(associated_trait) {
- let def = self.crate_context.tcx.def_map.get
- (associated_trait.ref_id);
- let implementation = self.create_impl_from_item(item);
- self.add_trait_method(def_id_of_def(def), implementation);
- }
+ }
+
+ for associated_traits.each |associated_trait| {
+ let def = self.crate_context.tcx.def_map.get
+ (associated_trait.ref_id);
+ #debug("(checking implementation) adding impl for trait \
+ '%s', item '%s'",
+ ast_map::node_id_to_str(self.crate_context.tcx.items,
+ associated_trait.ref_id),
+ *item.ident);
+
+ let implementation = self.create_impl_from_item(item);
+ self.add_trait_method(def_id_of_def(def), implementation);
}
// Add the implementation to the mapping from implementation to base
// type def ID, if there is a base type for this implementation.
+
alt get_base_type_def_id(self.inference_context,
item.span,
self_type.ty) {
// Converts a polytype to a monotype by replacing all parameters with
// type variables.
+
fn universally_quantify_polytype(polytype: ty_param_bounds_and_ty) -> t {
let self_region =
if !polytype.rp {none}
self.privileged_types.remove(privileged_type);
}
}
- item_impl(_, optional_trait_ref, _, _) {
+ item_impl(_, associated_traits, _, _) {
alt self.base_type_def_ids.find(local_def(item.id)) {
none {
// Nothing to do.
} else {
// This implementation is not in scope of
// its base type. This still might be OK
- // if the trait is defined in the same
+ // if the traits are defined in the same
// crate.
- alt optional_trait_ref {
- none {
- // There is no trait to implement,
- // so this is an error.
+ if associated_traits.len() == 0 {
+ // There is no trait to implement, so
+ // this is an error.
+
+ let session =
+ self.crate_context.tcx.sess;
+ session.span_err(item.span,
+ ~"cannot implement \
+ inherent methods \
+ for a type outside \
+ the scope the type \
+ was defined in; \
+ define and \
+ implement a trait \
+ instead");
+ }
- let session =
- self.crate_context.tcx.sess;
+ for associated_traits.each |trait_ref| {
+ // This is OK if and only if the
+ // trait was defined in this
+ // crate.
+
+ let def_map = self.crate_context.tcx
+ .def_map;
+ let trait_def = def_map.get
+ (trait_ref.ref_id);
+ let trait_id =
+ def_id_of_def(trait_def);
+ if trait_id.crate != local_crate {
+ let session = self.crate_context
+ .tcx.sess;
session.span_err(item.span,
~"cannot \
- implement \
- inherent \
- methods for a \
- type outside \
- the scope the \
- type was \
- defined in; \
- define and \
- implement a \
- trait instead");
- }
- some(trait_ref) {
- // This is OK if and only if the
- // trait was defined in this
- // crate.
-
- let def_map = self.crate_context
- .tcx.def_map;
- let trait_def =
- def_map.get(trait_ref.ref_id);
- let trait_id =
- def_id_of_def(trait_def);
- if trait_id.crate != local_crate {
- let session = self
- .crate_context.tcx.sess;
- session.span_err(item.span,
- ~"cannot \
- provide \
- an \
- extension \
- implementa\
- tion \
- for a \
- trait not \
- defined \
- in this \
- crate");
- }
+ provide an \
+ extension \
+ implementa\
+ tion \
+ for a trait \
+ not defined \
+ in this \
+ crate");
}
}
}
methods: methods
};
}
+ item_class(ty_params, _, class_members, _, _) {
+ let mut methods = ~[];
+ for class_members.each |class_member| {
+ alt class_member.node {
+ instance_var(*) {
+ // Nothing to do.
+ }
+ class_method(ast_method) {
+ push(methods, @{
+ did: local_def(ast_method.id),
+ n_tps: ast_method.tps.len(),
+ ident: ast_method.ident
+ });
+ }
+ }
+ }
+
+ ret @{
+ did: local_def(item.id),
+ ident: item.ident,
+ methods: methods
+ };
+ }
_ {
self.crate_context.tcx.sess.span_bug(item.span,
~"can't convert a \
let self_type = lookup_item_type(self.crate_context.tcx,
implementation.did);
- let optional_trait =
- get_impl_trait(self.crate_context.tcx,
- implementation.did);
- alt optional_trait {
- none {
- // This is an inherent method. There should be
- // no problems here, but perform a sanity check
- // anyway.
-
- alt get_base_type_def_id(self.inference_context,
- dummy_sp(),
- self_type.ty) {
- none {
- let session = self.crate_context.tcx.sess;
- session.bug(#fmt("no base type for \
- external impl with no \
- trait: %s (type %s)!",
- *implementation.ident,
- ty_to_str
- (self.crate_context.tcx,
- self_type.ty)));
- }
- some(_) {
- // Nothing to do.
- }
+ let associated_traits = get_impl_traits(self.crate_context.tcx,
+ implementation.did);
+
+ // Do a sanity check to make sure that inherent methods have base
+ // types.
+
+ if associated_traits.len() == 0 {
+ alt get_base_type_def_id(self.inference_context,
+ dummy_sp(),
+ self_type.ty) {
+ none {
+ let session = self.crate_context.tcx.sess;
+ session.bug(#fmt("no base type for external impl \
+ with no trait: %s (type %s)!",
+ *implementation.ident,
+ ty_to_str(self.crate_context.tcx,
+ self_type.ty)));
+ }
+ some(_) {
+ // Nothing to do.
}
}
+ }
- some(trait_type) {
- alt get(trait_type).struct {
- ty_trait(trait_id, _) {
- self.add_trait_method(trait_id,
- implementation);
- }
- _ {
- self.crate_context.tcx.sess
- .bug(~"trait type returned is not a \
- trait");
- }
+ // Record all the trait methods.
+ for associated_traits.each |trait_type| {
+ alt get(trait_type).struct {
+ ty_trait(trait_id, _) {
+ self.add_trait_method(trait_id, implementation);
+ }
+ _ {
+ self.crate_context.tcx.sess.bug(~"trait type \
+ returned is not a \
+ trait");
}
}
}
}
impl methods for @crate_ctxt {
- fn to_ty<RS: region_scope copy>(rs: RS, ast_ty: @ast::ty) -> ty::t {
+ fn to_ty<RS: region_scope copy owned>(
+ rs: RS, ast_ty: @ast::ty) -> ty::t {
+
ast_ty_to_ty(self, rs, ast_ty)
}
}
write_ty_to_tcx(tcx, it.id, tpt.ty);
ensure_trait_methods(ccx, it.id);
}
- ast::item_class(tps, traits, members, ctor, m_dtor) {
+ ast::item_class(tps, traits, members, m_ctor, m_dtor) {
// Write the class type
let tpt = ty_of_item(ccx, it);
write_ty_to_tcx(tcx, it.id, tpt.ty);
tcx.tcache.insert(local_def(it.id), tpt);
- // Write the ctor type
- let t_args = ctor.node.dec.inputs.map(
- |a| ty_of_arg(ccx, type_rscope(rp), a, none) );
- let t_res = ty::mk_class(
- tcx, local_def(it.id),
- {self_r: if rp {some(ty::re_bound(ty::br_self))} else {none},
- self_ty: none,
- tps: ty::ty_params_to_tys(tcx, tps)});
- let t_ctor = ty::mk_fn(
- tcx, {purity: ast::impure_fn,
- proto: ast::proto_any,
- inputs: t_args,
- output: t_res,
- ret_style: ast::return_val});
- // constraints, or remove constraints from the language
- write_ty_to_tcx(tcx, ctor.node.id, t_ctor);
- tcx.tcache.insert(local_def(ctor.node.id),
- {bounds: tpt.bounds,
- rp: rp,
- ty: t_ctor});
+
+ do option::iter(m_ctor) |ctor| {
+ // Write the ctor type
+ let t_args = ctor.node.dec.inputs.map(
+ |a| ty_of_arg(ccx, type_rscope(rp), a, none) );
+ let t_res = ty::mk_class(
+ tcx, local_def(it.id),
+ {self_r: if rp {some(ty::re_bound(ty::br_self))} else {none},
+ self_ty: none,
+ tps: ty::ty_params_to_tys(tcx, tps)});
+ let t_ctor = ty::mk_fn(
+ tcx, {purity: ast::impure_fn,
+ proto: ast::proto_any,
+ inputs: t_args,
+ output: t_res,
+ ret_style: ast::return_val});
+ write_ty_to_tcx(tcx, ctor.node.id, t_ctor);
+ tcx.tcache.insert(local_def(ctor.node.id),
+ {bounds: tpt.bounds,
+ rp: rp,
+ ty: t_ctor});
+ }
+
do option::iter(m_dtor) |dtor| {
// Write the dtor type
let t_dtor = ty::mk_fn(
export resolve_borrowings;
export methods; // for infer_ctxt
export unify_methods; // for infer_ctxt
-export fixup_err, fixup_err_to_str;
+export fres, fixup_err, fixup_err_to_str;
export assignment;
export root, to_str;
export int_ty_set_all;
}
}
- (ty::ty_param(a_n, _), ty::ty_param(b_n, _)) if a_n == b_n {
+ (ty::ty_param(a_p), ty::ty_param(b_p)) if a_p.idx == b_p.idx {
ok(a)
}
}
}
- (ty::re_scope(a_id), ty::re_scope(b_id)) {
+ (ty::re_scope(a_id), ty::re_scope(b_id)) |
+ (ty::re_free(a_id, _), ty::re_free(b_id, _)) {
// We want to generate a region that is contained by both of
// these: so, if one of these scopes is a subscope of the
// other, return it. Otherwise fail.
// For these types, we cannot define any additional
// relationship:
- (ty::re_free(_, _), ty::re_free(_, _)) |
(ty::re_bound(_), ty::re_bound(_)) |
(ty::re_bound(_), ty::re_free(_, _)) |
(ty::re_bound(_), ty::re_scope(_)) |
}
enum anon_rscope = {anon: ty::region, base: region_scope};
-fn in_anon_rscope<RS: region_scope copy>(self: RS, r: ty::region)
+fn in_anon_rscope<RS: region_scope copy owned>(self: RS, r: ty::region)
-> @anon_rscope {
@anon_rscope({anon: r, base: self as region_scope})
}
}
enum binding_rscope = {base: region_scope};
-fn in_binding_rscope<RS: region_scope copy>(self: RS) -> @binding_rscope {
+fn in_binding_rscope<RS: region_scope copy owned>(self: RS)
+ -> @binding_rscope {
let base = self as region_scope;
@binding_rscope({base: base})
}
mod reachable;
}
mod ty;
- mod resolve;
mod resolve3;
mod typeck {
mod check {
import std::map::hashmap;
import middle::ty;
-import middle::ty::{arg, bound_region, br_anon, br_named, canon_mode};
+import middle::ty::{arg, canon_mode};
+import middle::ty::{bound_region, br_anon, br_named, br_self, br_cap_avoid};
import middle::ty::{ck_block, ck_box, ck_uniq, ctxt, field, method};
import middle::ty::{mt, re_bound, re_free, re_scope, re_var, region, t};
import middle::ty::{ty_bool, ty_bot, ty_box, ty_class, ty_enum};
fn bound_region_to_str(cx: ctxt, br: bound_region) -> ~str {
alt br {
- br_anon { ~"&" }
- br_named(str) { #fmt["&%s", *str] }
- br_self if cx.sess.ppregions() { ~"&<self>" }
- br_self { ~"&self" }
+ br_anon => { ~"&" }
+ br_named(str) => { #fmt["&%s", *str] }
+ br_self if cx.sess.ppregions() => { ~"&<self>" }
+ br_self => { ~"&self" }
+
+ // FIXME(#3011) -- even if this arm is removed, exhaustiveness checking
+ // does not fail
+ br_cap_avoid(id, br) => {
+ if cx.sess.ppregions() {
+ #fmt["br_cap_avoid(%?, %s)", id, bound_region_to_str(cx, *br)]
+ } else {
+ bound_region_to_str(cx, *br)
+ }
+ }
}
}
}
ty_var(v) { v.to_str() }
ty_var_integral(v) { v.to_str() }
- ty_param(id, _) {
+ ty_param({idx: id, _}) {
~"'" + str::from_bytes(~[('a' as u8) + (id as u8)])
}
ty_self { ~"self" }
import rustc::back::link;
import rustc::metadata::filesearch;
import rustc::front;
-import rustc::middle::resolve;
-import rustc::middle::resolve3;
export ctxt;
export ctxt_handler;
type ctxt = {
ast: @ast::crate,
- ast_map: ast_map::map,
- exp_map: resolve::exp_map,
- impl_map: resolve::impl_map
+ ast_map: ast_map::map
};
type srv_owner<T> = fn(srv: srv) -> T;
}
fn act(po: comm::port<msg>, source: ~str, parse: parser) {
- let (sess, ignore_errors) = build_session();
+ let sess = build_session();
let ctxt = build_ctxt(
sess,
- parse(sess, source),
- ignore_errors
+ parse(sess, source)
);
let mut keep_going = true;
}
fn build_ctxt(sess: session,
- ast: @ast::crate,
- ignore_errors: @mut bool) -> ctxt {
+ ast: @ast::crate) -> ctxt {
import rustc::front::config;
let ast = config::strip_unconfigured_items(ast);
+ let ast = syntax::ext::expand::expand_crate(sess.parse_sess,
+ sess.opts.cfg, ast);
let ast = front::test::modify_for_testing(sess, ast);
let ast_map = ast_map::map_crate(sess.diagnostic(), *ast);
- *ignore_errors = true;
- let {exp_map, impl_map, _} = resolve3::resolve_crate(sess, ast_map, ast);
- *ignore_errors = false;
{
ast: ast,
ast_map: ast_map,
- exp_map: exp_map,
- impl_map: impl_map
}
}
-fn build_session() -> (session, @mut bool) {
+fn build_session() -> session {
let sopts: @options = basic_options();
let codemap = codemap::new_codemap();
let error_handlers = build_error_handlers(codemap);
- let {emitter, span_handler, ignore_errors} = error_handlers;
+ let {emitter, span_handler} = error_handlers;
let session = driver::build_session_(sopts, codemap, emitter,
span_handler);
- (session, ignore_errors)
+ session
}
type error_handlers = {
emitter: diagnostic::emitter,
- span_handler: diagnostic::span_handler,
- ignore_errors: @mut bool
+ span_handler: diagnostic::span_handler
};
// Build a custom error handler that will allow us to ignore non-fatal
type diagnostic_handler = {
inner: diagnostic::handler,
- ignore_errors: @mut bool
};
impl of diagnostic::handler for diagnostic_handler {
fn fatal(msg: ~str) -> ! { self.inner.fatal(msg) }
fn err(msg: ~str) { self.inner.err(msg) }
fn bump_err_count() {
- if !(*self.ignore_errors) {
- self.inner.bump_err_count();
- }
+ self.inner.bump_err_count();
}
fn has_errors() -> bool { self.inner.has_errors() }
fn abort_if_errors() { self.inner.abort_if_errors() }
}
}
- let ignore_errors = @mut false;
let emitter = fn@(cmsp: option<(codemap::codemap, codemap::span)>,
msg: ~str, lvl: diagnostic::level) {
- if !(*ignore_errors) {
- diagnostic::emit(cmsp, msg, lvl);
- }
+ diagnostic::emit(cmsp, msg, lvl);
};
let inner_handler = diagnostic::mk_handler(some(emitter));
let handler = {
inner: inner_handler,
- ignore_errors: ignore_errors
};
let span_handler = diagnostic::mk_span_handler(
handler as diagnostic::handler, codemap);
{
emitter: emitter,
- span_handler: span_handler,
- ignore_errors: ignore_errors
+ span_handler: span_handler
}
}
}
}
-#[test]
-fn srv_should_build_reexport_map() {
- let source = ~"import a::b; export b; mod a { mod b { } }";
- do from_str(source) |srv| {
- do exec(srv) |ctxt| {
- assert ctxt.exp_map.size() != 0u
- };
- }
-}
-
-#[test]
-fn srv_should_resolve_external_crates() {
- let source = ~"use std;\
- fn f() -> std::sha1::sha1 {\
- std::sha1::mk_sha1() }";
- // Just testing that resolve doesn't crash
- from_str(source, |_srv| { } )
-}
-
-#[test]
-fn srv_should_resolve_core_crate() {
- let source = ~"fn a() -> option { fail }";
- // Just testing that resolve doesn't crash
- from_str(source, |_srv| { } )
-}
-
-#[test]
-fn srv_should_resolve_non_existant_imports() {
- // We want to ignore things we can't resolve. Shouldn't
- // need to be able to find external crates to create docs.
- let source = ~"import wooboo; fn a() { }";
- from_str(source, |_srv| { } )
-}
-
-#[test]
-fn srv_should_resolve_non_existant_uses() {
- let source = ~"use forble; fn a() { }";
- from_str(source, |_srv| { } )
-}
-
#[test]
fn should_ignore_external_import_paths_that_dont_exist() {
let source = ~"use forble; import forble::bippy;";
type impldoc = {
item: itemdoc,
- trait_ty: option<~str>,
+ trait_types: ~[~str],
self_ty: option<~str>,
methods: ~[methoddoc]
};
) -> doc::impldoc {
{
item: itemdoc,
- trait_ty: none,
+ trait_types: ~[],
self_ty: none,
methods: do vec::map(methods) |method| {
{
doc::impltag(doc) {
assert option::is_some(doc.self_ty);
let self_ty = option::get(doc.self_ty);
- alt doc.trait_ty {
- some(trait_ty) {
- #fmt("%s of %s for %s", doc.name(), trait_ty, self_ty)
- }
- none {
- #fmt("%s for %s", doc.name(), self_ty)
- }
+ let mut trait_part = ~"";
+ for doc.trait_types.eachi |i, trait_type| {
+ if i == 0 {
+ trait_part += ~" of ";
+ } else {
+ trait_part += ", ";
+ }
+ trait_part += trait_type;
}
+ #fmt("%s%s for %s", doc.name(), trait_part, self_ty)
}
_ {
doc.name()
+++ /dev/null
-//! Prunes branches of the tree that are not exported
-
-import doc::item_utils;
-import syntax::ast;
-import syntax::ast_util;
-import syntax::ast_map;
-import std::map::hashmap;
-
-export mk_pass;
-
-fn mk_pass() -> pass {
- {
- name: ~"prune_unexported",
- f: run
- }
-}
-
-fn run(srv: astsrv::srv, doc: doc::doc) -> doc::doc {
- let fold = fold::fold({
- fold_mod: fold_mod
- with *fold::default_any_fold(srv)
- });
- fold.fold_doc(fold, doc)
-}
-
-fn fold_mod(fold: fold::fold<astsrv::srv>, doc: doc::moddoc) -> doc::moddoc {
- let doc = fold::default_any_fold_mod(fold, doc);
- doc::moddoc_({
- items: exported_items(fold.ctxt, doc)
- with *doc
- })
-}
-
-fn exported_items(srv: astsrv::srv, doc: doc::moddoc) -> ~[doc::itemtag] {
- exported_things(
- srv, doc,
- exported_items_from_crate,
- exported_items_from_mod
- )
-}
-
-fn exported_things<T>(
- srv: astsrv::srv,
- doc: doc::moddoc,
- from_crate: fn(astsrv::srv, doc::moddoc) -> ~[T],
- from_mod: fn(astsrv::srv, doc::moddoc) -> ~[T]
-) -> ~[T] {
- if doc.id() == ast::crate_node_id {
- from_crate(srv, doc)
- } else {
- from_mod(srv, doc)
- }
-}
-
-fn exported_items_from_crate(
- srv: astsrv::srv,
- doc: doc::moddoc
-) -> ~[doc::itemtag] {
- exported_items_from(srv, doc, is_exported_from_crate)
-}
-
-fn exported_items_from_mod(
- srv: astsrv::srv,
- doc: doc::moddoc
-) -> ~[doc::itemtag] {
- exported_items_from(srv, doc, |a,b| {
- is_exported_from_mod(a, doc.id(), b)
- })
-}
-
-fn exported_items_from(
- srv: astsrv::srv,
- doc: doc::moddoc,
- is_exported: fn(astsrv::srv, ~str) -> bool
-) -> ~[doc::itemtag] {
- do vec::filter_map(doc.items) |itemtag| {
- let itemtag = alt itemtag {
- doc::enumtag(enumdoc) {
- // Also need to check variant exportedness
- doc::enumtag({
- variants: exported_variants_from(srv, enumdoc, is_exported)
- with enumdoc
- })
- }
- _ { itemtag }
- };
-
- if itemtag.item().reexport || is_exported(srv, itemtag.name()) {
- some(itemtag)
- } else {
- none
- }
- }
-}
-
-fn exported_variants_from(
- srv: astsrv::srv,
- doc: doc::enumdoc,
- is_exported: fn(astsrv::srv, ~str) -> bool
-) -> ~[doc::variantdoc] {
- do vec::filter_map(doc.variants) |doc| {
- if is_exported(srv, doc.name) {
- some(doc)
- } else {
- none
- }
- }
-}
-
-fn is_exported_from_mod(
- srv: astsrv::srv,
- mod_id: doc::ast_id,
- item_name: ~str
-) -> bool {
- do astsrv::exec(srv) |ctxt| {
- alt ctxt.ast_map.get(mod_id) {
- ast_map::node_item(item, _) {
- alt item.node {
- ast::item_mod(m) {
- ast_util::is_exported(@item_name, m)
- }
- _ {
- fail ~"is_exported_from_mod: not a mod";
- }
- }
- }
- _ { fail ~"is_exported_from_mod: not an item"; }
- }
- }
-}
-
-fn is_exported_from_crate(
- srv: astsrv::srv,
- item_name: ~str
-) -> bool {
- do astsrv::exec(srv) |ctxt| {
- ast_util::is_exported(@item_name, ctxt.ast.node.module)
- }
-}
-
-#[test]
-fn should_prune_unexported_fns() {
- let doc = test::mk_doc(~"mod b { export a; fn a() { } fn b() { } }");
- assert vec::len(doc.cratemod().mods()[0].fns()) == 1u;
-}
-
-#[test]
-fn should_prune_unexported_fns_from_top_mod() {
- let doc = test::mk_doc(~"export a; fn a() { } fn b() { }");
- assert vec::len(doc.cratemod().fns()) == 1u;
-}
-
-#[test]
-fn should_prune_unexported_modules() {
- let doc = test::mk_doc(~"mod a { export a; mod a { } mod b { } }");
- assert vec::len(doc.cratemod().mods()[0].mods()) == 1u;
-}
-
-#[test]
-fn should_prune_unexported_modules_from_top_mod() {
- let doc = test::mk_doc(~"export a; mod a { } mod b { }");
- assert vec::len(doc.cratemod().mods()) == 1u;
-}
-
-#[test]
-fn should_prune_unexported_consts() {
- let doc = test::mk_doc(
- ~"mod a { export a; \
- const a: bool = true; \
- const b: bool = true; }");
- assert vec::len(doc.cratemod().mods()[0].consts()) == 1u;
-}
-
-#[test]
-fn should_prune_unexported_consts_from_top_mod() {
- let doc = test::mk_doc(
- ~"export a; const a: bool = true; const b: bool = true;");
- assert vec::len(doc.cratemod().consts()) == 1u;
-}
-
-#[test]
-fn should_prune_unexported_enums_from_top_mod() {
- let doc = test::mk_doc(~"export a; mod a { } enum b { c }");
- assert vec::len(doc.cratemod().enums()) == 0u;
-}
-
-#[test]
-fn should_prune_unexported_enums() {
- let doc = test::mk_doc(~"mod a { export a; mod a { } enum b { c } }");
- assert vec::len(doc.cratemod().mods()[0].enums()) == 0u;
-}
-
-#[test]
-fn should_prune_unexported_variants_from_top_mod() {
- let doc = test::mk_doc(~"export b::{}; enum b { c }");
- assert vec::len(doc.cratemod().enums()[0].variants) == 0u;
-}
-
-#[test]
-fn should_prune_unexported_variants() {
- let doc = test::mk_doc(~"mod a { export b::{}; enum b { c } }");
- assert vec::len(doc.cratemod().mods()[0].enums()[0].variants) == 0u;
-}
-
-#[test]
-fn should_prune_unexported_traits_from_top_mod() {
- let doc = test::mk_doc(~"export a; mod a { } iface b { fn c(); }");
- assert vec::is_empty(doc.cratemod().traits());
-}
-
-#[test]
-fn should_prune_unexported_impls_from_top_mod() {
- let doc = test::mk_doc(
- ~"export a; mod a { } impl b for int { fn c() { } }");
- assert vec::is_empty(doc.cratemod().impls())
-}
-
-#[test]
-fn should_prune_unexported_types() {
- let doc = test::mk_doc(~"export a; mod a { } type b = int;");
- assert vec::is_empty(doc.cratemod().types());
-}
-
-#[test]
-fn should_not_prune_reexports() {
- fn mk_doc(source: ~str) -> doc::doc {
- do astsrv::from_str(source) |srv| {
- let doc = extract::from_srv(srv, ~"");
- let doc = reexport_pass::mk_pass().f(srv, doc);
- run(srv, doc)
- }
- }
- let doc = mk_doc(~"import a::b; \
- export b; \
- mod a { fn b() { } }");
- assert vec::is_not_empty(doc.cratemod().fns());
-}
-
-#[cfg(test)]
-mod test {
- fn mk_doc(source: ~str) -> doc::doc {
- do astsrv::from_str(source) |srv| {
- let doc = extract::from_srv(srv, ~"");
- run(srv, doc)
- }
- }
-}
+++ /dev/null
-//! Finds docs for reexported items and duplicates them
-
-import doc::item_utils;
-import std::map;
-import std::map::hashmap;
-import std::list;
-import syntax::ast;
-import syntax::ast_util;
-import syntax::ast_map;
-import syntax::visit;
-import syntax::codemap;
-import rustc::middle::resolve;
-
-export mk_pass;
-
-fn mk_pass() -> pass {
- {
- name: ~"reexport",
- f: run
- }
-}
-
-type def_set = map::set<ast::def_id>;
-type def_map = map::hashmap<ast::def_id, doc::itemtag>;
-type path_map = map::hashmap<~str, ~[(~str, doc::itemtag)]>;
-
-fn run(srv: astsrv::srv, doc: doc::doc) -> doc::doc {
-
- // First gather the set of defs that are used as reexports
- let def_set = build_reexport_def_set(srv);
-
- // Now find the docs that go with those defs
- let def_map = build_reexport_def_map(srv, doc, def_set);
-
- // Now create a map that tells us where to insert the duplicated
- // docs into the existing doc tree
- let path_map = build_reexport_path_map(srv, def_map);
-
- // Finally update the doc tree
- merge_reexports(doc, path_map)
-}
-
-// Hash maps are not sendable so converting them back and forth
-// to association lists. Yuck.
-fn to_assoc_list<K:copy, V:copy>(
- map: map::hashmap<K, V>
-) -> ~[(K, V)] {
-
- let mut vec = ~[];
- for map.each |k, v| {
- vec += ~[(k, v)];
- }
- ret vec;
-}
-
-fn from_assoc_list<K:copy, V:copy>(
- list: ~[(K, V)],
- new_hash: fn() -> map::hashmap<K, V>
-) -> map::hashmap<K, V> {
-
- let map = new_hash();
- do vec::iter(list) |elt| {
- let (k, v) = elt;
- map.insert(k, v);
- }
- ret map;
-}
-
-fn from_def_assoc_list<V:copy>(
- list: ~[(ast::def_id, V)]
-) -> map::hashmap<ast::def_id, V> {
- from_assoc_list(list, ast_util::new_def_hash)
-}
-
-fn from_str_assoc_list<V:copy>(
- list: ~[(~str, V)]
-) -> map::hashmap<~str, V> {
- from_assoc_list(list, map::str_hash)
-}
-
-fn build_reexport_def_set(srv: astsrv::srv) -> def_set {
- let assoc_list = do astsrv::exec(srv) |ctxt| {
- let def_set = ast_util::new_def_hash();
- for ctxt.exp_map.each |_id, defs| {
- for defs.each |def| {
- if def.reexp {
- def_set.insert(def.id, ());
- }
- }
- }
- for find_reexport_impls(ctxt).each |def| {
- def_set.insert(def, ());
- }
- to_assoc_list(def_set)
- };
-
- from_def_assoc_list(assoc_list)
-}
-
-fn find_reexport_impls(ctxt: astsrv::ctxt) -> ~[ast::def_id] {
- let defs = @mut ~[];
- do for_each_reexported_impl(ctxt) |_mod_id, i| {
- *defs += ~[i.did]
- }
- ret *defs;
-}
-
-fn build_reexport_def_map(
- srv: astsrv::srv,
- doc: doc::doc,
- def_set: def_set
-) -> def_map {
-
- type ctxt = {
- srv: astsrv::srv,
- def_set: def_set,
- def_map: def_map
- };
-
- let ctxt = {
- srv: srv,
- def_set: def_set,
- def_map: ast_util::new_def_hash()
- };
-
- // FIXME: Do a parallel fold (#2597)
- let fold = fold::fold({
- fold_mod: fold_mod,
- fold_nmod: fold_nmod
- with *fold::default_seq_fold(ctxt)
- });
-
- fold.fold_doc(fold, doc);
-
- ret ctxt.def_map;
-
- fn fold_mod(fold: fold::fold<ctxt>, doc: doc::moddoc) -> doc::moddoc {
- let doc = fold::default_seq_fold_mod(fold, doc);
-
- for doc.items.each |item| {
- let def_id = ast_util::local_def(item.id());
- if fold.ctxt.def_set.contains_key(def_id) {
- fold.ctxt.def_map.insert(def_id, item);
- }
- }
-
- ret doc;
- }
-
- fn fold_nmod(fold: fold::fold<ctxt>, doc: doc::nmoddoc) -> doc::nmoddoc {
- let doc = fold::default_seq_fold_nmod(fold, doc);
-
- for doc.fns.each |fndoc| {
- let def_id = ast_util::local_def(fndoc.id());
- if fold.ctxt.def_set.contains_key(def_id) {
- fold.ctxt.def_map.insert(def_id, doc::fntag(fndoc));
- }
- }
-
- ret doc;
- }
-}
-
-fn build_reexport_path_map(srv: astsrv::srv, -def_map: def_map) -> path_map {
-
- // This is real unfortunate. Lots of copying going on here
- let def_assoc_list = to_assoc_list(def_map);
- #debug("def_map: %?", def_assoc_list);
-
- let assoc_list = do astsrv::exec(srv) |ctxt| {
-
- let def_map = from_def_assoc_list(def_assoc_list);
- let path_map = map::str_hash::<~[(~str,doc::itemtag)]>();
-
- for ctxt.exp_map.each |exp_id, defs| {
- let path = alt check ctxt.ast_map.get(exp_id) {
- ast_map::node_export(_, path) { path }
- };
- // should be a constraint on the node_export constructor
- // that guarantees path is non-empty
- let name = alt check vec::last(*path) {
- ast_map::path_name(nm) { nm }
- };
- let modpath = ast_map::path_to_str(vec::init(*path));
-
- let mut reexportdocs = ~[];
- for defs.each |def| {
- if !def.reexp { again; }
- alt def_map.find(def.id) {
- some(itemtag) {
- reexportdocs += ~[(*name, itemtag)];
- }
- _ {}
- }
- }
-
- if reexportdocs.len() > 0u {
- do option::iter(path_map.find(modpath)) |docs| {
- reexportdocs = docs + vec::filter(reexportdocs, |x| {
- !vec::contains(docs, x)
- });
- }
- path_map.insert(modpath, reexportdocs);
- #debug("path_map entry: %? - %?",
- modpath, (name, reexportdocs));
- }
- }
-
- for find_reexport_impl_docs(ctxt, def_map).each |elt| {
- let (path, doc) = elt;
- let docs = alt path_map.find(path) {
- some(docs) { docs + ~[(doc)] }
- none { ~[doc] }
- };
- path_map.insert(path, docs);
- }
-
- to_assoc_list(path_map)
- };
-
- from_str_assoc_list(assoc_list)
-}
-
-fn find_reexport_impl_docs(
- ctxt: astsrv::ctxt,
- def_map: def_map
-) -> ~[(~str, (~str, doc::itemtag))] {
- let docs = @mut ~[];
-
- do for_each_reexported_impl(ctxt) |mod_id, i| {
- let path = alt ctxt.ast_map.find(mod_id) {
- some(ast_map::node_item(item, path)) {
- let path = ast_map::path_to_str(*path);
- if str::is_empty(path) {
- *item.ident
- } else {
- path + ~"::" + *item.ident
- }
- }
- _ {
- assert mod_id == ast::crate_node_id;
- ~""
- }
- };
- let ident = *i.ident;
- let doc = alt check def_map.find(i.did) {
- some(doc) { doc }
- };
- *docs += ~[(path, (ident, doc))];
- }
-
- ret *docs;
-}
-
-fn for_each_reexported_impl(
- ctxt: astsrv::ctxt,
- f: fn@(ast::node_id, resolve::_impl)
-) {
- let visitor = @{
- visit_mod: |a,b,c| visit_mod(ctxt, f, a, b, c)
- with *visit::default_simple_visitor()
- };
- let visitor = visit::mk_simple_visitor(visitor);
- visit::visit_crate(*ctxt.ast, (), visitor);
-
- fn visit_mod(
- ctxt: astsrv::ctxt,
- f: fn@(ast::node_id, resolve::_impl),
- m: ast::_mod,
- _sp: codemap::span,
- mod_id: ast::node_id
- ) {
- let all_impls = all_impls(m);
- alt *ctxt.impl_map.get(mod_id) {
- list::cons(impls, _) {
- for vec::each(*impls) |i| {
- // This impl is not an item in the current mod
- if !all_impls.contains_key(i.did) {
- // Ignore external impls because I don't
- // know what to do with them yet
- if i.did.crate == ast::local_crate {
- f(mod_id, *i);
- }
- }
- }
- }
- list::nil {
- // Do nothing -- mod has no impls
- }
- }
- }
-}
-
-fn all_impls(m: ast::_mod) -> map::set<ast::def_id> {
- let all_impls = ast_util::new_def_hash();
- for m.items.each |item| {
- alt item.node {
- ast::item_impl(_, _, _, _) {
- all_impls.insert(ast_util::local_def(item.id), ());
- }
- _ { }
- }
- }
- ret all_impls;
-}
-
-fn merge_reexports(
- doc: doc::doc,
- path_map: path_map
-) -> doc::doc {
-
- let fold = fold::fold({
- fold_mod: fold_mod
- with *fold::default_seq_fold(path_map)
- });
-
- ret fold.fold_doc(fold, doc);
-
- fn fold_mod(fold: fold::fold<path_map>, doc: doc::moddoc) -> doc::moddoc {
- let doc = fold::default_seq_fold_mod(fold, doc);
-
- let is_topmod = doc.id() == ast::crate_node_id;
-
- // In the case of the top mod, it really doesn't have a name;
- // the name we have here is actually the crate name
- let path = if is_topmod {
- doc.path()
- } else {
- doc.path() + ~[doc.name()]
- };
-
- let new_items = get_new_items(path, fold.ctxt);
- #debug("merging into %?: %?", path, new_items);
-
- doc::moddoc_({
- items: (doc.items + new_items)
- with *doc
- })
- }
-
- fn get_new_items(path: ~[~str], path_map: path_map) -> ~[doc::itemtag] {
- #debug("looking for reexports in path %?", path);
- alt path_map.find(str::connect(path, ~"::")) {
- some(name_docs) {
- do vec::foldl(~[], name_docs) |v, name_doc| {
- let (name, doc) = name_doc;
- v + ~[reexport_doc(doc, name)]
- }
- }
- none { ~[] }
- }
- }
-
- fn reexport_doc(doc: doc::itemtag, name: ~str) -> doc::itemtag {
- alt doc {
- doc::modtag(doc @ doc::moddoc_({item, _})) {
- doc::modtag(doc::moddoc_({
- item: reexport(item, name)
- with *doc
- }))
- }
- doc::nmodtag(_) { fail }
- doc::consttag(doc @ {item, _}) {
- doc::consttag({
- item: reexport(item, name)
- with doc
- })
- }
- doc::fntag(doc @ {item, _}) {
- doc::fntag({
- item: reexport(item, name)
- with doc
- })
- }
- doc::enumtag(doc @ {item, _}) {
- doc::enumtag({
- item: reexport(item, name)
- with doc
- })
- }
- doc::traittag(doc @ {item, _}) {
- doc::traittag({
- item: reexport(item, name)
- with doc
- })
- }
- doc::impltag(doc @ {item, _}) {
- doc::impltag({
- item: reexport(item, name)
- with doc
- })
- }
- doc::tytag(doc @ {item, _}) {
- doc::tytag({
- item: reexport(item, name)
- with doc
- })
- }
- }
- }
-
- fn reexport(doc: doc::itemdoc, name: ~str) -> doc::itemdoc {
- {
- name: name,
- reexport: true
- with doc
- }
- }
-}
-
-#[test]
-fn should_duplicate_reexported_items() {
- let source = ~"mod a { export b; fn b() { } } \
- mod c { import a::b; export b; }";
- let doc = test::mk_doc(source);
- assert doc.cratemod().mods()[1].fns()[0].name() == ~"b";
-}
-
-#[test]
-fn should_mark_reepxorts_as_such() {
- let source = ~"mod a { export b; fn b() { } } \
- mod c { import a::b; export b; }";
- let doc = test::mk_doc(source);
- assert doc.cratemod().mods()[1].fns()[0].item.reexport == true;
-}
-
-#[test]
-fn should_duplicate_reexported_impls() {
- let source = ~"mod a { impl b for int { fn c() { } } } \
- mod d { import a::b; export b; }";
- let doc = test::mk_doc(source);
- assert doc.cratemod().mods()[1].impls()[0].name() == ~"b";
-}
-
-#[test]
-fn should_duplicate_reexported_impls_deep() {
- let source = ~"mod a { impl b for int { fn c() { } } } \
- mod d { mod e { import a::b; export b; } }";
- let doc = test::mk_doc(source);
- assert doc.cratemod().mods()[1].mods()[0].impls()[0].name() == ~"b";
-}
-
-#[test]
-fn should_duplicate_reexported_impls_crate() {
- let source = ~"import a::b; export b; \
- mod a { impl b for int { fn c() { } } }";
- let doc = test::mk_doc(source);
- assert doc.cratemod().impls()[0].name() == ~"b";
-}
-
-#[test]
-fn should_duplicate_reexported_foreign_fns() {
- let source = ~"extern mod a { fn b(); } \
- mod c { import a::b; export b; }";
- let doc = test::mk_doc(source);
- assert doc.cratemod().mods()[0].fns()[0].name() == ~"b";
-}
-
-#[test]
-fn should_duplicate_multiple_reexported_items() {
- let source = ~"mod a { \
- export b; export c; \
- fn b() { } fn c() { } \
- } \
- mod d { \
- import a::b; import a::c; \
- export b; export c; \
- }";
- do astsrv::from_str(source) |srv| {
- let doc = extract::from_srv(srv, ~"");
- let doc = path_pass::mk_pass().f(srv, doc);
- let doc = run(srv, doc);
- // Reexports may not be in any specific order
- let doc = sort_item_name_pass::mk_pass().f(srv, doc);
- assert doc.cratemod().mods()[1].fns()[0].name() == ~"b";
- assert doc.cratemod().mods()[1].fns()[1].name() == ~"c";
- }
-}
-
-#[test]
-fn should_rename_items_reexported_with_different_names() {
- let source = ~"mod a { export b; fn b() { } } \
- mod c { import x = a::b; export x; }";
- let doc = test::mk_doc(source);
- assert doc.cratemod().mods()[1].fns()[0].name() == ~"x";
-}
-
-#[test]
-fn should_reexport_in_topmod() {
- fn mk_doc(source: ~str) -> doc::doc {
- do astsrv::from_str(source) |srv| {
- let doc = extract::from_srv(srv, ~"core");
- let doc = path_pass::mk_pass().f(srv, doc);
- run(srv, doc)
- }
- }
- let source = ~"import option::{some, none}; \
- import option = option::t; \
- export option, some, none; \
- mod option { \
- enum t { some, none } \
- }";
- let doc = mk_doc(source);
- assert doc.cratemod().enums()[0].name() == ~"option";
-}
-
-#[test]
-fn should_not_reexport_multiple_times() {
- let source = ~"import option = option::t; \
- export option; \
- export option; \
- mod option { \
- enum t { none, some } \
- }";
- let doc = test::mk_doc(source);
- assert vec::len(doc.cratemod().enums()) == 1u;
-}
-
-#[cfg(test)]
-mod test {
- fn mk_doc(source: ~str) -> doc::doc {
- do astsrv::from_str(source) |srv| {
- let doc = extract::from_srv(srv, ~"");
- let doc = path_pass::mk_pass().f(srv, doc);
- run(srv, doc)
- }
- }
-}
mod path_pass;
mod attr_pass;
mod tystr_pass;
-mod prune_unexported_pass;
mod prune_hidden_pass;
mod desc_to_brief_pass;
mod text_pass;
mod sort_pass;
mod sort_item_name_pass;
mod sort_item_type_pass;
-mod reexport_pass;
mod page_pass;
mod sectionalize_pass;
mod escape_pass;
extract::from_srv(srv, default_name)
});
run_passes(srv, doc, ~[
- reexport_pass::mk_pass(),
- prune_unexported_pass::mk_pass(),
tystr_pass::mk_pass(),
path_pass::mk_pass(),
attr_pass::mk_pass(),
let srv = fold.ctxt;
- let (trait_ty, self_ty) = do astsrv::exec(srv) |ctxt| {
+ let (trait_types, self_ty) = do astsrv::exec(srv) |ctxt| {
alt ctxt.ast_map.get(doc.id()) {
ast_map::node_item(@{
- node: ast::item_impl(_, trait_ty, self_ty, _), _
+ node: ast::item_impl(_, trait_types, self_ty, _), _
}, _) {
- let trait_ty = option::map(trait_ty, |p| {
+ let trait_types = vec::map(trait_types, |p| {
pprust::path_to_str(p.path)
});
- (trait_ty, some(pprust::ty_to_str(self_ty)))
+ (trait_types, some(pprust::ty_to_str(self_ty)))
}
_ { fail ~"expected impl" }
}
};
{
- trait_ty: trait_ty,
+ trait_types: trait_types,
self_ty: self_ty,
methods: merge_methods(fold.ctxt, doc.id(), doc.methods)
with doc
}
#[test]
-fn should_add_impl_trait_ty() {
+fn should_add_impl_trait_types() {
let doc = test::mk_doc(~"impl i of j for int { fn a<T>() { } }");
- assert doc.cratemod().impls()[0].trait_ty == some(~"j");
+ assert doc.cratemod().impls()[0].trait_types[0] == ~"j";
}
#[test]
-fn should_not_add_impl_trait_ty_if_none() {
+fn should_not_add_impl_trait_types_if_none() {
let doc = test::mk_doc(~"impl i for int { fn a() { } }");
- assert doc.cratemod().impls()[0].trait_ty == none;
+ assert vec::len(doc.cratemod().impls()[0].trait_types) == 0;
}
#[test]
PM->run(*unwrap(M));
}
-extern "C" bool LLVMLinkModules(LLVMModuleRef Dest, LLVMModuleRef Src) {
- static std::string err;
-
- // For some strange reason, unwrap() doesn't work here. "No matching
- // function" error.
- Module *DM = reinterpret_cast<Module *>(Dest);
- Module *SM = reinterpret_cast<Module *>(Src);
- if (Linker::LinkModules(DM, SM, Linker::DestroySource, &err)) {
- LLVMRustError = err.c_str();
- return false;
- }
- return true;
-}
+void LLVMInitializeX86TargetInfo();
+void LLVMInitializeX86Target();
+void LLVMInitializeX86TargetMC();
+void LLVMInitializeX86AsmPrinter();
+void LLVMInitializeX86AsmParser();
-extern "C" void
+extern "C" bool
LLVMRustWriteOutputFile(LLVMPassManagerRef PMR,
LLVMModuleRef M,
const char *triple,
CodeGenOpt::Level OptLevel,
bool EnableSegmentedStacks) {
- InitializeAllTargets();
- InitializeAllTargetMCs();
- InitializeAllAsmPrinters();
- InitializeAllAsmParsers();
+    // Only initialize the platforms supported by Rust here, because
+    // building with --llvm-root can pull in targets that rustllvm doesn't
+    // actually link to, and it's pointless to put target info into the
+    // registry for platforms Rust cannot generate machine code for.
+
+ LLVMInitializeX86TargetInfo();
+ LLVMInitializeX86Target();
+ LLVMInitializeX86TargetMC();
+ LLVMInitializeX86AsmPrinter();
+ LLVMInitializeX86AsmParser();
TargetOptions Options;
Options.NoFramePointerElim = true;
std::string ErrorInfo;
raw_fd_ostream OS(path, ErrorInfo,
raw_fd_ostream::F_Binary);
+ if (ErrorInfo != "") {
+ LLVMRustError = ErrorInfo.c_str();
+ return false;
+ }
formatted_raw_ostream FOS(OS);
bool foo = Target->addPassesToEmitFile(*PM, FOS, FileType, NoVerify);
(void)foo;
PM->run(*unwrap(M));
delete Target;
+ return true;
}
extern "C" LLVMModuleRef LLVMRustParseAssemblyFile(const char *Filename) {
LLVMRustParseBitcode
LLVMRustParseAssemblyFile
LLVMRustPrintPassTimings
-LLVMLinkModules
LLVMCreateObjectFile
LLVMDisposeObjectFile
LLVMGetSections
LLVMInitializeX86Disassembler
LLVMInitializeX86Target
LLVMInitializeX86Target
+LLVMInitializeX86TargetMC
+LLVMInitializeX86TargetMC
LLVMInitializeX86TargetInfo
LLVMInitializeX86TargetInfo
LLVMInsertBasicBlock
+S 2012-07-26 5805616
+ macos-i386 cded3df1c96da88a593438b3c473b0c0e2acf211
+ macos-x86_64 2eade230d378daff7ee4eac7c2922df2c4b71277
+ freebsd-x86_64 9190485b8b86dcfb33e4ee14bb5954d55cb92a8b
+ linux-i386 fbb14d21652f49b1eb741e926ba6d7a96436556b
+ linux-x86_64 fbd5dc14d4e99feee3c6086dd3ad11145507ba34
+ winnt-i386 bab3360e67c7e6b333d9f514bbd922a79359e6a3
+
S 2012-07-16 0e42004
macos-i386 67616307e5498327bcf4f0c13287e7f9f4439c1c
macos-x86_64 f3348eb9314895ffa71056fad8c1f79d8d45b161
winnt-i386 b0248e68346a1724c673a2fa5bc5a73eda2d821f
S 2011-09-01 91ea257
- linux-i386 dcea4ce8001eaba3e3b2c404a147fbad47defe96
+ linux-i386 dcea4ce8001eaba3e3b2c404a147fbad47defe96
macos-i386 0807e3a7c2c88dbf459a2a78601403090d38c01d
winnt-i386 03d0fd04f6b080d9f601bb1a3711c98f9eab2490
--- /dev/null
+#[abi = "rust-intrinsic"]
+extern mod rusti {
+ fn atomic_xchng(&dst: int, src: int) -> int;
+ fn atomic_xchng_acq(&dst: int, src: int) -> int;
+ fn atomic_xchng_rel(&dst: int, src: int) -> int;
+
+ fn atomic_add(&dst: int, src: int) -> int;
+ fn atomic_add_acq(&dst: int, src: int) -> int;
+ fn atomic_add_rel(&dst: int, src: int) -> int;
+
+ fn atomic_sub(&dst: int, src: int) -> int;
+ fn atomic_sub_acq(&dst: int, src: int) -> int;
+ fn atomic_sub_rel(&dst: int, src: int) -> int;
+}
+
+#[inline(always)]
+fn atomic_xchng(&dst: int, src: int) -> int {
+ rusti::atomic_xchng(dst, src)
+}
\ No newline at end of file
-#[link(name = "zmq",
+#[link(name = "issue_2526",
vers = "0.2",
uuid = "54cc1bc9-02b8-447c-a227-75ebc923bc29")];
#[crate_type = "lib"];
let start = std::time::precise_time_s();
let mut worker_results = ~[];
for uint::range(0u, workers) |i| {
- let builder = task::builder();
- vec::push(worker_results, task::future_result(builder));
let to_child = to_child.clone();
- do task::run(builder) {
+ do task::task().future_result(|-r| {
+ vec::push(worker_results, r);
+ }).spawn {
for uint::range(0u, size / workers) |_i| {
//#error("worker %?: sending %? bytes", i, num_bytes);
to_child.send(bytes(num_bytes));
let start = std::time::precise_time_s();
let mut worker_results = ~[];
for uint::range(0u, workers) |i| {
- let builder = task::builder();
- vec::push(worker_results, task::future_result(builder));
let (to_child, from_parent_) = pipes::stream();
from_parent.add(from_parent_);
- do task::run(builder) {
+ do task::task().future_result(|-r| {
+ vec::push(worker_results, r);
+ }).spawn {
for uint::range(0u, size / workers) |_i| {
//#error("worker %?: sending %? bytes", i, num_bytes);
to_child.send(bytes(num_bytes));
let to_child = to_child;
let mut worker_results = ~[];
for uint::range(0u, workers) |_i| {
- let builder = task::builder();
- vec::push(worker_results, task::future_result(builder));
- do task::run(builder) {
+ do task::task().future_result(|-r| {
+ vec::push(worker_results, r);
+ }).spawn {
for uint::range(0u, size / workers) |_i| {
comm::send(to_child, bytes(100u));
}
--- /dev/null
+// Compare bounded and unbounded protocol performance.
+
+// xfail-test
+// xfail-pretty
+
+use std;
+
+import pipes::{spawn_service, recv};
+import std::time::precise_time_s;
+
+proto! pingpong {
+ ping: send {
+ ping -> pong
+ }
+
+ pong: recv {
+ pong -> ping
+ }
+}
+
+proto! pingpong_unbounded {
+ ping: send {
+ ping -> pong
+ }
+
+ pong: recv {
+ pong -> ping
+ }
+
+ you_will_never_catch_me: send {
+ never_ever_ever -> you_will_never_catch_me
+ }
+}
+
+// This stuff should go in libcore::pipes
+macro_rules! move {
+ { $x:expr } => { unsafe { let y <- *ptr::addr_of($x); y } }
+}
+
+macro_rules! follow {
+ {
+ $($message:path($($x: ident),+) -> $next:ident $e:expr)+
+ } => (
+ |m| alt move(m) {
+ $(some($message($($x,)* next)) {
+ let $next = move!{next};
+ $e })+
+ _ { fail }
+ }
+ );
+
+ {
+ $($message:path -> $next:ident $e:expr)+
+ } => (
+ |m| alt move(m) {
+ $(some($message(next)) {
+ let $next = move!{next};
+ $e })+
+ _ { fail }
+ }
+ )
+}
+
+fn switch<T: send, Tb: send, U>(+endp: pipes::recv_packet_buffered<T, Tb>,
+ f: fn(+option<T>) -> U) -> U {
+ f(pipes::try_recv(endp))
+}
+
+fn move<T>(-x: T) -> T { x }
+
+// Here's the benchmark
+
+fn bounded(count: uint) {
+ import pingpong::*;
+
+ let mut ch = do spawn_service(init) |ch| {
+ let mut count = count;
+ let mut ch = ch;
+ while count > 0 {
+ ch = switch(ch, follow! {
+ ping -> next { server::pong(next) }
+ });
+
+ count -= 1;
+ }
+ };
+
+ let mut count = count;
+ while count > 0 {
+ let ch_ = client::ping(ch);
+
+ ch = switch(ch_, follow! {
+ pong -> next { next }
+ });
+
+ count -= 1;
+ }
+}
+
+fn unbounded(count: uint) {
+ import pingpong_unbounded::*;
+
+ let mut ch = do spawn_service(init) |ch| {
+ let mut count = count;
+ let mut ch = ch;
+ while count > 0 {
+ ch = switch(ch, follow! {
+ ping -> next { server::pong(next) }
+ });
+
+ count -= 1;
+ }
+ };
+
+ let mut count = count;
+ while count > 0 {
+ let ch_ = client::ping(ch);
+
+ ch = switch(ch_, follow! {
+ pong -> next { next }
+ });
+
+ count -= 1;
+ }
+}
+
+fn timeit(f: fn()) -> float {
+ let start = precise_time_s();
+ f();
+ let stop = precise_time_s();
+ stop - start
+}
+
+fn main() {
+ let count = 1000000;
+ let bounded = do timeit { bounded(count) };
+ let unbounded = do timeit { unbounded(count) };
+
+ io::println(#fmt("count: %?\n", count));
+ io::println(#fmt("bounded: %? s\t(%? μs/message)",
+ bounded, bounded * 1000000. / (count as float)));
+ io::println(#fmt("unbounded: %? s\t(%? μs/message)",
+ unbounded, unbounded * 1000000. / (count as float)));
+
+ io::println(#fmt("\n\
+ bounded is %?%% faster",
+ (unbounded - bounded) / bounded * 100.));
+}
}
}
-fn bottom_up_tree(arena: &a.arena::arena, item: int, depth: int) -> &a.tree {
+fn bottom_up_tree(arena: &arena::arena, item: int, depth: int) -> &tree {
if depth > 0 {
ret new(*arena) node(bottom_up_tree(arena, 2 * item - 1, depth - 1),
bottom_up_tree(arena, 2 * item, depth - 1),
fn stress(num_tasks: int) {
let mut results = ~[];
for range(0, num_tasks) |i| {
- let builder = task::builder();
- results += ~[task::future_result(builder)];
- task::run(builder, || stress_task(i) );
+ do task::task().future_result(|-r| {
+ results += ~[r];
+ }).spawn {
+ stress_task(i);
+ }
}
for results.each |r| { future::get(r); }
}
+// xfail-test
/**
A parallel word-frequency counting program.
proto! ctrl_proto {
open: send<K: copy send, V: copy send> {
find_reducer(K) -> reducer_response<K, V>,
- mapper_done -> terminated
+ mapper_done -> !
}
reducer_response: recv<K: copy send, V: copy send> {
reducer(chan<reduce_proto<V>>) -> open<K, V>
}
-
- terminated: send { }
}
enum reduce_proto<V: copy send> { emit_val(V), done, ref, release }
while num_mappers > 0 {
let (_ready, message, ctrls) = pipes::select(ctrl);
alt option::unwrap(message) {
- ctrl_proto::mapper_done(_) {
+ ctrl_proto::mapper_done {
// #error("received mapper terminated.");
num_mappers -= 1;
ctrl = ctrls;
+// xfail-test
/**
A parallel word-frequency counting program.
~[future::future<task::task_result>] {
let mut results = ~[];
for inputs.each |i| {
- let builder = task::builder();
- results += ~[task::future_result(builder)];
- task::run(builder, || map_task(ctrl, i));
+ do task::task().future_result(|-r| {
+ results += ~[r]; // Add result for this task to the list
+ }).spawn {
+ map_task(ctrl, i); // Task body
+ }
}
ret results;
}
// log(error, "creating new reducer for " + k);
let p = port();
let ch = chan(p);
- let builder = task::builder();
- results += ~[task::future_result(builder)];
- task::run(builder, || reduce_task(k, ch) );
+ do task::task().future_result(|-r| {
+ results += ~[r]; // Add result for this task
+ }).spawn {
+ reduce_task(k, ch); // Task body
+ }
c = recv(p);
reducers.insert(k, c);
}
}
fn to_lambda2(b: fn(uint) -> uint) -> fn@(uint) -> uint {
- ret to_lambda1({|x| b(x)}); //~ ERROR not an owned value
+ ret to_lambda1({|x| b(x)}); //~ ERROR value may contain borrowed pointers
}
fn main() {
--- /dev/null
+// xfail-test #2978
+
+fn call(x: @{mut f: fn~()}) {
+ x.f(); //~ ERROR foo
+ //~^ NOTE bar
+}
+
+fn main() {}
}
fn borrow_from_arg_mut_ref(&v: ~int) {
- borrow(v); //~ ERROR illegal borrow unless pure: unique value in aliasable, mutable location
+ borrow(v); //~ ERROR illegal borrow unless pure: creating immutable alias to aliasable, mutable memory
//~^ NOTE impure due to access to impure function
}
--- /dev/null
+enum foo = ~int;
+
+fn borrow(x: @mut foo) {
+ let _y = &***x; //~ ERROR illegal borrow unless pure: creating immutable alias to aliasable, mutable memory
+ *x = foo(~4); //~ NOTE impure due to assigning to dereference of mutable @ pointer
+}
+
+fn main() {
+}
\ No newline at end of file
// Here, evaluating the second argument actually invalidates the
// first borrow, even though it occurs outside of the scope of the
// borrow!
- pure_borrow(*x, *x = ~5); //~ ERROR illegal borrow unless pure: unique value in aliasable, mutable location
+ pure_borrow(*x, *x = ~5); //~ ERROR illegal borrow unless pure: creating immutable alias to aliasable, mutable memory
//~^ NOTE impure due to assigning to dereference of mutable @ pointer
}
fn borrow(_v: &int) {}
fn box_mut(v: @mut ~int) {
- borrow(*v); //~ ERROR illegal borrow unless pure: unique value in aliasable, mutable location
+ borrow(*v); //~ ERROR illegal borrow unless pure: creating immutable alias to aliasable, mutable memory
//~^ NOTE impure due to access to impure function
}
fn box_rec_mut(v: @{mut f: ~int}) {
- borrow(v.f); //~ ERROR illegal borrow unless pure: unique value in aliasable, mutable location
+ borrow(v.f); //~ ERROR illegal borrow unless pure: creating immutable alias to aliasable, mutable memory
//~^ NOTE impure due to access to impure function
}
fn box_mut_rec(v: @mut {f: ~int}) {
- borrow(v.f); //~ ERROR illegal borrow unless pure: unique value in aliasable, mutable location
+ borrow(v.f); //~ ERROR illegal borrow unless pure: creating immutable alias to aliasable, mutable memory
//~^ NOTE impure due to access to impure function
}
fn box_mut_recs(v: @mut {f: {g: {h: ~int}}}) {
- borrow(v.f.g.h); //~ ERROR illegal borrow unless pure: unique value in aliasable, mutable location
+ borrow(v.f.g.h); //~ ERROR illegal borrow unless pure: creating immutable alias to aliasable, mutable memory
//~^ NOTE impure due to access to impure function
}
}
fn box_const(v: @const ~int) {
- borrow(*v); //~ ERROR illegal borrow unless pure: unique value in aliasable, mutable location
+ borrow(*v); //~ ERROR illegal borrow unless pure: creating immutable alias to aliasable, const memory
//~^ NOTE impure due to access to impure function
}
fn box_rec_const(v: @{const f: ~int}) {
- borrow(v.f); //~ ERROR illegal borrow unless pure: unique value in aliasable, mutable location
+ borrow(v.f); //~ ERROR illegal borrow unless pure: creating immutable alias to aliasable, const memory
//~^ NOTE impure due to access to impure function
}
fn box_recs_const(v: @{f: {g: {const h: ~int}}}) {
- borrow(v.f.g.h); //~ ERROR illegal borrow unless pure: unique value in aliasable, mutable location
+ borrow(v.f.g.h); //~ ERROR illegal borrow unless pure: creating immutable alias to aliasable, const memory
//~^ NOTE impure due to access to impure function
}
fn box_const_rec(v: @const {f: ~int}) {
- borrow(v.f); //~ ERROR illegal borrow unless pure: unique value in aliasable, mutable location
+ borrow(v.f); //~ ERROR illegal borrow unless pure: creating immutable alias to aliasable, const memory
//~^ NOTE impure due to access to impure function
}
fn box_const_recs(v: @const {f: {g: {h: ~int}}}) {
- borrow(v.f.g.h); //~ ERROR illegal borrow unless pure: unique value in aliasable, mutable location
+ borrow(v.f.g.h); //~ ERROR illegal borrow unless pure: creating immutable alias to aliasable, const memory
//~^ NOTE impure due to access to impure function
}
fn borrow(_v: &int) {}
fn box_mut(v: &mut ~int) {
- borrow(*v); //~ ERROR illegal borrow unless pure: unique value in aliasable, mutable location
+ borrow(*v); //~ ERROR illegal borrow unless pure: creating immutable alias to aliasable, mutable memory
//~^ NOTE impure due to access to impure function
}
fn box_rec_mut(v: &{mut f: ~int}) {
- borrow(v.f); //~ ERROR illegal borrow unless pure: unique value in aliasable, mutable location
+ borrow(v.f); //~ ERROR illegal borrow unless pure: creating immutable alias to aliasable, mutable memory
//~^ NOTE impure due to access to impure function
}
fn box_mut_rec(v: &mut {f: ~int}) {
- borrow(v.f); //~ ERROR illegal borrow unless pure: unique value in aliasable, mutable location
+ borrow(v.f); //~ ERROR illegal borrow unless pure: creating immutable alias to aliasable, mutable memory
//~^ NOTE impure due to access to impure function
}
fn box_mut_recs(v: &mut {f: {g: {h: ~int}}}) {
- borrow(v.f.g.h); //~ ERROR illegal borrow unless pure: unique value in aliasable, mutable location
+ borrow(v.f.g.h); //~ ERROR illegal borrow unless pure: creating immutable alias to aliasable, mutable memory
//~^ NOTE impure due to access to impure function
}
}
fn box_const(v: &const ~int) {
- borrow(*v); //~ ERROR illegal borrow unless pure: unique value in aliasable, mutable location
+ borrow(*v); //~ ERROR illegal borrow unless pure: creating immutable alias to aliasable, const memory
//~^ NOTE impure due to access to impure function
}
fn box_rec_const(v: &{const f: ~int}) {
- borrow(v.f); //~ ERROR illegal borrow unless pure: unique value in aliasable, mutable location
+ borrow(v.f); //~ ERROR illegal borrow unless pure: creating immutable alias to aliasable, const memory
//~^ NOTE impure due to access to impure function
}
fn box_recs_const(v: &{f: {g: {const h: ~int}}}) {
- borrow(v.f.g.h); //~ ERROR illegal borrow unless pure: unique value in aliasable, mutable location
+ borrow(v.f.g.h); //~ ERROR illegal borrow unless pure: creating immutable alias to aliasable, const memory
//~^ NOTE impure due to access to impure function
}
fn box_const_rec(v: &const {f: ~int}) {
- borrow(v.f); //~ ERROR illegal borrow unless pure: unique value in aliasable, mutable location
+ borrow(v.f); //~ ERROR illegal borrow unless pure: creating immutable alias to aliasable, const memory
//~^ NOTE impure due to access to impure function
}
fn box_const_recs(v: &const {f: {g: {h: ~int}}}) {
- borrow(v.f.g.h); //~ ERROR illegal borrow unless pure: unique value in aliasable, mutable location
+ borrow(v.f.g.h); //~ ERROR illegal borrow unless pure: creating immutable alias to aliasable, const memory
//~^ NOTE impure due to access to impure function
}
iface foo { fn foo(); }
-impl of int for uint { fn foo() {} } //~ ERROR interface
+impl of int for uint { fn foo() {} } //~ ERROR trait
fn main() {}
// error-pattern:unresolved
+// xfail-test
import spam::{ham, eggs};
mod spam {
--- /dev/null
+// error-pattern:failed to resolve imports
+import x = m::f;
+
+mod m {
+}
+
+fn main() {
+}
--- /dev/null
+iface repeat<A> { fn get() -> A; }
+
+impl<A:copy> of repeat<A> for @A {
+ fn get() -> A { *self }
+}
+
+fn repeater<A:copy>(v: @A) -> repeat<A> {
+ // Note: owned kind is not necessary as A appears in the iface type
+ v as repeat::<A> // No
+}
+
+fn main() {
+ // Here, an error results as the type of y is inferred to
+    // repeater<&lt/3> where lt is the block.
+ let y = { //~ ERROR reference is not valid outside of its lifetime
+ let x = &3;
+ repeater(@x)
+ };
+ assert 3 == *(y.get()); //~ ERROR reference is not valid outside of its lifetime
+}
\ No newline at end of file
--- /dev/null
+// A dummy iface/impl that work close over any type. The iface will
+// be parameterized by a region due to the &self/int constraint.
+
+iface foo {
+ fn foo(i: &self/int) -> int;
+}
+
+impl<T:copy> of foo for T {
+ fn foo(i: &self/int) -> int {*i}
+}
+
+fn to_foo<T:copy>(t: T) {
+ // This version is ok because, although T may contain borrowed
+ // pointers, it never escapes the fn body. We know this because
+ // the type of foo includes a region which will be resolved to
+ // the fn body itself.
+ let v = &3;
+ let x = {f:t} as foo;
+ assert x.foo(v) == 3;
+}
+
+fn to_foo_2<T:copy>(t: T) -> foo {
+ // Not OK---T may contain borrowed ptrs and it is going to escape
+ // as part of the returned foo value
+ {f:t} as foo //~ ERROR value may contain borrowed pointers; use `owned` bound
+}
+
+fn to_foo_3<T:copy owned>(t: T) -> foo {
+ // OK---T may escape as part of the returned foo value, but it is
+ // owned and hence does not contain borrowed ptrs
+ {f:t} as foo
+}
+
+fn main() {
+}
\ No newline at end of file
--- /dev/null
+iface foo { fn foo(); }
+
+fn to_foo<T: copy foo>(t: T) -> foo {
+ t as foo //~ ERROR value may contain borrowed pointers; use `owned` bound
+}
+
+fn to_foo2<T: copy foo owned>(t: T) -> foo {
+ t as foo
+}
+
+fn main() {}
fn copy1<T: copy>(t: T) -> fn@() -> T {
- fn@() -> T { t } //~ ERROR not an owned value
+ fn@() -> T { t } //~ ERROR value may contain borrowed pointers
}
fn copy2<T: copy owned>(t: T) -> fn@() -> T {
fn main() {
let x = &3;
- copy2(&x); //~ ERROR instantiating a type parameter with an incompatible type
+ copy2(&x); //~ ERROR missing `owned`
copy2(@3);
- copy2(@&x); //~ ERROR instantiating a type parameter with an incompatible type
+ copy2(@&x); //~ ERROR missing `owned`
copy2(fn@() {});
copy2(fn~() {}); //~ WARNING instantiating copy type parameter with a not implicitly copyable type
- copy2(fn&() {}); //~ ERROR instantiating a type parameter with an incompatible type
+ copy2(fn&() {}); //~ ERROR missing `copy owned`
}
--- /dev/null
+iface deref {
+ fn get() -> int;
+}
+
+impl of deref for &int {
+ fn get() -> int {
+ *self
+ }
+}
+
+fn with<R: deref>(f: fn(x: &int) -> R) -> int {
+ f(&3).get()
+}
+
+fn return_it() -> int {
+ with(|o| o)
+ //~^ ERROR reference is not valid outside of its lifetime, &
+ //~^^ ERROR reference is not valid outside of its lifetime, &
+}
+
+fn main() {
+ let x = return_it();
+ #debug["foo=%d", x];
+}
--- /dev/null
+fn select(x: &int, y: &int) -> &int { x }
+
+fn with<T>(f: fn(x: &int) -> T) -> T {
+ f(&20)
+}
+
+fn manip(x: &a/int) -> int {
+ let z = do with |y| { select(x, y) };
+ //~^ ERROR reference is not valid outside of its lifetime
+ *z
+}
+
+fn main() {
+}
\ No newline at end of file
--- /dev/null
+// Similar to regions-ret-borrowed.rs, but using a named lifetime. At
+// some point regions-ret-borrowed reported an error but this file did
+// not, due to special hardcoding around the anonymous region.
+
+fn with<R>(f: fn(x: &a/int) -> R) -> R {
+ f(&3)
+}
+
+fn return_it() -> &a/int {
+ with(|o| o) //~ ERROR mismatched types
+ //~^ ERROR reference is not valid outside of its lifetime
+}
+
+fn main() {
+ let x = return_it();
+ #debug["foo=%d", *x];
+}
--- /dev/null
+// Ensure that you cannot use generic types to return a region outside
+// of its bound. Here, in the `return_it()` fn, we call with() but
+// with R bound to &int from the return_it. Meanwhile, with()
+// provides a value that is only good within its own stack frame. This
+// used to successfully compile because we failed to account for the
+// fact that fn(x: &int) rebound the region &.
+
+fn with<R>(f: fn(x: &int) -> R) -> R {
+ f(&3)
+}
+
+fn return_it() -> &int {
+ with(|o| o) //~ ERROR mismatched types
+ //~^ ERROR reference is not valid outside of its lifetime
+}
+
+fn main() {
+ let x = return_it();
+ #debug["foo=%d", *x];
+}
// huge).
let x = ~[1u,2u,3u];
- do vec::unpack_slice(x) |p, _len| {
+ do vec::as_buf(x) |p, _len| {
let base = p as uint; // base = 0x1230 say
let idx = base / sys::size_of::<uint>(); // idx = 0x0246 say
#error("ov1 base = 0x%x", base);
fn iterate(blk: fn(A) -> bool);
}
-impl vec<A> of iterable<A> for &[const A] {
+impl vec<A> of iterable<A> for &[A] {
fn iterate(f: fn(A) -> bool) {
vec::each(self, f);
}
}
-impl vec<A> of iterable<A> for ~[const A] {
+impl vec<A> of iterable<A> for ~[A] {
fn iterate(f: fn(A) -> bool) {
vec::each(self, f);
}
--- /dev/null
+import at_vec::{build, from_fn, from_elem};
+
+// Some code that could use that, then:
+fn seq_range(lo: uint, hi: uint) -> @[uint] {
+ do build |push| {
+ for uint::range(lo, hi) |i| {
+ push(i);
+ }
+ }
+}
+
+fn main() {
+ assert seq_range(10, 15) == @[10, 11, 12, 13, 14];
+ assert from_fn(5, |x| x+1) == @[1, 2, 3, 4, 5];
+ assert from_elem(5, 3.14) == @[3.14, 3.14, 3.14, 3.14, 3.14];
+}
#[abi = "cdecl"]
#[nolink]
extern mod test {
- fn unsupervise();
- fn get_task_id();
+ fn rust_get_sched_id() -> libc::intptr_t;
+ fn get_task_id() -> libc::intptr_t;
}
fn test_foreign_fn() {
- assert test::unsupervise != test::get_task_id;
- assert test::unsupervise == test::unsupervise;
+ assert test::rust_get_sched_id != test::get_task_id;
+ assert test::rust_get_sched_id == test::rust_get_sched_id;
}
class p {
--- /dev/null
+fn sum_slice(x: &[int]) -> int {
+ let mut sum = 0;
+ for x.each |i| { sum += i; }
+ ret sum;
+}
+
+fn main() {
+ let x = @[1, 2, 3];
+ assert sum_slice(x) == 6;
+}
\ No newline at end of file
--- /dev/null
+type ints = {sum: ~int, values: ~[int]};
+
+fn add_int(x: &mut ints, v: int) {
+ *x.sum += v;
+ let mut values = ~[];
+ x.values <-> values;
+ vec::push(values, v);
+ x.values <- values;
+}
+
+fn iter_ints(x: &ints, f: fn(x: &int) -> bool) {
+ let l = x.values.len();
+ uint::range(0, l, |i| f(&x.values[i]))
+}
+
+fn main() {
+ let mut ints = ~{sum: ~0, values: ~[]};
+ add_int(ints, 22);
+ add_int(ints, 44);
+
+ for iter_ints(ints) |i| {
+ #error["int = %d", *i];
+ }
+
+ #error["ints=%?", ints];
+}
#[abi = "cdecl"]
extern mod rustrt {
- fn unsupervise();
+ fn get_task_id() -> libc::intptr_t;
}
fn main() {
- let _foo = rustrt::unsupervise;
+ let _foo = rustrt::get_task_id;
}
}
fn atol(s: ~str) -> int {
- ret str::as_buf(s, { |x| libc::atol(x) });
+ ret str::as_buf(s, { |x, _len| libc::atol(x) });
}
fn atoll(s: ~str) -> i64 {
- ret str::as_buf(s, { |x| libc::atoll(x) });
+ ret str::as_buf(s, { |x, _len| libc::atoll(x) });
}
fn main() {
// ABI is cdecl by default
extern mod rustrt {
- fn unsupervise();
+ fn get_task_id() -> libc::intptr_t;
}
fn main() {
- rustrt::unsupervise();
-}
\ No newline at end of file
+ rustrt::get_task_id();
+}
-
-
-#[abi = "cdecl"]
-extern mod rustrt {
- fn unsupervise();
-}
-
#[abi = "cdecl"]
#[nolink]
extern mod bar { }
--- /dev/null
+// xfail-fast - check-fast doesn't understand aux-build
+// aux-build:cci_intrinsic.rs
+
+// xfail-test
+
+use cci_intrinsic;
+import cci_intrinsic::atomic_xchng;
+
+fn main() {
+ let mut x = 1;
+ atomic_xchng(x, 5);
+ assert x == 5;
+}
-#[link(name = "unsupervise")];
+#[link(name = "get_task_id")];
extern mod rustrt {
- fn unsupervise();
+ fn get_task_id() -> libc::intptr_t;
}
fn main() { }
// exec-env:RUST_CC_ZEAL=1
+// xfail-test
fn main() {
#error["%?", os::getenv(~"RUST_CC_ZEAL")];
// xfail-fast
// aux-build:issue-2526.rs
-use zmq;
-import zmq::*;
+use issue_2526;
+import issue_2526::*;
fn main() {}
iface hax { }
impl <A> of hax for A { }
-fn perform_hax<T>(x: @T) -> hax {
+fn perform_hax<T: owned>(x: @T) -> hax {
x as hax
}
iface hax { }
impl <A> of hax for A { }
-fn perform_hax<T>(x: @T) -> hax {
+fn perform_hax<T: owned>(x: @T) -> hax {
x as hax
}
+// xfail-test
/*
A reduced test case for Issue #506, provided by Rob Arnold.
#[abi = "cdecl"]
extern mod rustrt {
- fn unsupervise();
+ fn get_task_id() -> libc::intptr_t;
}
-fn main() { task::spawn(rustrt::unsupervise); }
+fn main() {
+ let f: fn() -> libc::intptr_t = rustrt::get_task_id;
+ task::spawn(unsafe { unsafe::reinterpret_cast(f) });
+}
#[attr];
#[attr]
- fn unsupervise();
+ fn get_task_id() -> libc::intptr_t;
}
}
--- /dev/null
+iface repeat<A> { fn get() -> A; }
+
+impl<A:copy> of repeat<A> for @A {
+ fn get() -> A { *self }
+}
+
+fn repeater<A:copy>(v: @A) -> repeat<A> {
+ // Note: owned kind is not necessary as A appears in the iface type
+ v as repeat::<A> // No
+}
+
+fn main() {
+ let x = &3;
+ let y = repeater(@x);
+ assert *x == *(y.get());
+}
\ No newline at end of file
fn main() {
for uint::range(0u, 100u) |_i| {
- let builder = task::builder();
- task::unsupervise(builder);
- task::run(builder, || iloop() );
+ task::spawn_unlinked(|| iloop() );
}
-}
\ No newline at end of file
+}
fn add(a: int, b: int) -> int { ret a + b; }
- assert (#apply[add, [1, 15]] == 16);
+ assert(#apply[add, [1, 15]] == 16);
+ assert(apply!{add, [1, 15]} == 16);
assert(apply_tt!{add, (1, 15)} == 16);
}
--- /dev/null
+trait Product {
+ fn product() -> int;
+}
+
+struct Foo {
+ x: int;
+ y: int;
+}
+
+impl Foo {
+ fn sum() -> int {
+ self.x + self.y
+ }
+}
+
+impl Foo : Product {
+ fn product() -> int {
+ self.x * self.y
+ }
+}
+
+fn Foo(x: int, y: int) -> Foo {
+ Foo { x: x, y: y }
+}
+
+fn main() {
+ let foo = Foo(3, 20);
+ io::println(#fmt("%d %d", foo.sum(), foo.product()));
+}
+
-#[no_core];
-
-
#[path = "module-polymorphism-files"]
-mod float {
+mod my_float {
// The type of the float
import inst::T;
}
#[path = "module-polymorphism-files"]
-mod f64 {
+mod my_f64 {
import inst::T;
}
#[path = "module-polymorphism-files"]
-mod f32 {
+mod my_f32 {
import inst::T;
#[path = "inst_f32.rs"]
fn main() {
// All of these functions are defined by a single module
// source file but instantiated for different types
- assert float::template::plus(1.0f, 2.0f) == 3.0f;
- assert f64::template::plus(1.0f64, 2.0f64) == 3.0f64;
- assert f32::template::plus(1.0f32, 2.0f32) == 3.0f32;
+ assert my_float::template::plus(1.0f, 2.0f) == 3.0f;
+ assert my_f64::template::plus(1.0f64, 2.0f64) == 3.0f64;
+ assert my_f32::template::plus(1.0f32, 2.0f32) == 3.0f32;
}
\ No newline at end of file
-#[no_core];
-
-
#[path = "module-polymorphism2-files"]
mod mystd {
-#[no_core];
-
// Use one template module to specify in a single file the implementation
// of functions for multiple types
// This test attempts to force the dynamic linker to resolve
// external symbols as close to the red zone as possible.
-use std;
-import task;
-import std::rand;
-
extern mod rustrt {
fn debug_get_stk_seg() -> *u8;
- fn unsupervise();
+ fn rust_get_sched_id() -> libc::intptr_t;
fn last_os_error() -> ~str;
fn rust_getcwd() -> ~str;
- fn get_task_id();
+ fn get_task_id() -> libc::intptr_t;
fn sched_threads();
fn rust_get_task();
}
-fn calllink01() { rustrt::unsupervise(); }
+fn calllink01() { rustrt::rust_get_sched_id(); }
fn calllink02() { rustrt::last_os_error(); }
fn calllink03() { rustrt::rust_getcwd(); }
fn calllink08() { rustrt::get_task_id(); }
ret;
}
- let builder = task::builder();
- let opts = {
- sched: some({
- mode: task::osmain,
- foreign_stack_size: none
- })
- with task::get_opts(builder)
- };
- task::set_opts(builder, opts);
- task::unsupervise(builder);
- do task::run(builder) {
+ do task::task().sched_mode(task::osmain).unlinked().spawn {
task::yield();
- let builder = task::builder();
- let opts = {
- sched: some({
- mode: task::single_threaded,
- foreign_stack_size: none
- })
- with task::get_opts(builder)
- };
- task::set_opts(builder, opts);
- task::unsupervise(builder);
- do task::run(builder) {
+ do task::task().sched_mode(task::single_threaded).unlinked().spawn {
task::yield();
run(i - 1);
task::yield();
}
}
-fn macros() {
- #macro[
- [#move[x],
- unsafe { let y <- *ptr::addr_of(x); y }]
- ];
+macro_rules! move {
+ { $x:expr } => { unsafe { let y <- *ptr::addr_of($x); y } }
}
+fn switch<T: send, U>(+endp: pipes::recv_packet<T>,
+ f: fn(+option<T>) -> U) -> U {
+ f(pipes::try_recv(endp))
+}
+
+fn move<T>(-x: T) -> T { x }
+
+macro_rules! follow {
+ {
+ $($message:path($($x: ident),+) => $next:ident $e:expr)+
+ } => (
+ |m| alt move(m) {
+ $(some($message($($x,)* next)) {
+ let $next = move!{next};
+ $e })+
+ _ { fail }
+ }
+ );
+
+ {
+ $($message:path => $next:ident $e:expr)+
+ } => (
+ |m| alt move(m) {
+ $(some($message(next)) {
+ let $next = move!{next};
+ $e })+
+ _ { fail }
+ }
+ )
+}
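The `#macro` → `macro_rules!` migration above is the shape macros still have today. A present-day sketch of the `move!` idiom (named `take_it` here because `move` is now a keyword; the `<-` move operator is long gone, so a plain binding suffices):

```rust
// Hypothetical stand-in for the old move! macro: evaluate an
// expression and move the result out through a block.
macro_rules! take_it {
    ($x:expr) => {{
        let y = $x; // ownership moves here; no unsafe needed today
        y
    }};
}

fn main() {
    let v = vec![1, 2, 3];
    let w = take_it!(v); // v is moved, w owns the vector
    assert_eq!(w.len(), 3);
}
```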
+
+/*
+fn client_follow(+bank: bank::client::login) {
+ import bank::*;
+
+ let bank = client::login(bank, ~"theincredibleholk", ~"1234");
+ let bank = switch(bank, follow! {
+ ok => connected { connected }
+ invalid => _next { fail ~"bank closed the connection" }
+ });
+
+ /* // potential alternate syntax
+ let bank = recv_alt! {
+ bank => {
+ | ok -> connected { connected }
+ | invalid -> _next { fail }
+ }
+ bank2 => {
+ | foo -> _n { fail }
+ }
+ }
+ */
+
+ let bank = client::deposit(bank, 100.00);
+ let bank = client::withdrawal(bank, 50.00);
+ alt try_recv(bank) {
+ some(money(m, _)) {
+ io::println(~"Yay! I got money!");
+ }
+ some(insufficient_funds(_)) {
+ fail ~"someone stole my money"
+ }
+ none {
+ fail ~"bank closed the connection"
+ }
+ }
+}
+*/
+
fn bank_client(+bank: bank::client::login) {
import bank::*;
let bank = client::login(bank, ~"theincredibleholk", ~"1234");
let bank = alt try_recv(bank) {
some(ok(connected)) {
- #move(connected)
+ move!{connected}
}
some(invalid(_)) { fail ~"login unsuccessful" }
none { fail ~"bank closed the connection" }
proto! oneshot {
waiting:send {
- signal -> signaled
+ signal -> !
}
-
- signaled:send { }
}
fn main() {
let iotask = uv::global_loop::get();
- let c = pipes::spawn_service(oneshot::init, |p| {
+ pipes::spawn_service(oneshot::init, |p| {
alt try_recv(p) {
some(*) { fail }
none { }
sleep(iotask, 100);
- let b = task::builder();
- task::unsupervise(b);
- task::run(b, failtest);
+ task::spawn_unlinked(failtest);
}
// Make sure the right thing happens during failure.
fn failtest() {
- let iotask = uv::global_loop::get();
-
let (c, p) = oneshot::init();
do task::spawn_with(c) |_c| {
proto! oneshot {
waiting:send {
- signal -> signaled
+ signal -> !
}
-
- signaled:send { }
}
fn main() {
--- /dev/null
+// Ping-pong is a bounded protocol. This is place where I can
+// experiment with what code the compiler should generate for bounded
+// protocols.
+
+// xfail-pretty
+
+// This was generated initially by the pipe compiler, but it's been
+// modified in hopefully straightforward ways.
+mod pingpong {
+ import pipes::*;
+
+ type packets = {
+ // This is probably a resolve bug, I forgot to export packet,
+ // but since I didn't import pipes::*, it worked anyway.
+ ping: packet<ping>,
+ pong: packet<pong>,
+ };
+
+ fn init() -> (client::ping, server::ping) {
+ let buffer = ~{
+ header: buffer_header(),
+ data: {
+ ping: mk_packet::<ping>(),
+ pong: mk_packet::<pong>()
+ }
+ };
+ do pipes::entangle_buffer(buffer) |buffer, data| {
+ data.ping.set_buffer(buffer);
+ data.pong.set_buffer(buffer);
+ ptr::addr_of(data.ping)
+ }
+ }
+ enum ping = server::pong;
+ enum pong = client::ping;
+ mod client {
+ fn ping(+pipe: ping) -> pong {
+ {
+ let b = pipe.reuse_buffer();
+ let s = send_packet_buffered(ptr::addr_of(b.buffer.data.pong));
+ let c = recv_packet_buffered(ptr::addr_of(b.buffer.data.pong));
+ let message = pingpong::ping(s);
+ pipes::send(pipe, message);
+ c
+ }
+ }
+ type ping = pipes::send_packet_buffered<pingpong::ping,
+ pingpong::packets>;
+ type pong = pipes::recv_packet_buffered<pingpong::pong,
+ pingpong::packets>;
+ }
+ mod server {
+ type ping = pipes::recv_packet_buffered<pingpong::ping,
+ pingpong::packets>;
+ fn pong(+pipe: pong) -> ping {
+ {
+ let b = pipe.reuse_buffer();
+ let s = send_packet_buffered(ptr::addr_of(b.buffer.data.ping));
+ let c = recv_packet_buffered(ptr::addr_of(b.buffer.data.ping));
+ let message = pingpong::pong(s);
+ pipes::send(pipe, message);
+ c
+ }
+ }
+ type pong = pipes::send_packet_buffered<pingpong::pong,
+ pingpong::packets>;
+ }
+}
+
+mod test {
+ import pipes::recv;
+ import pingpong::{ping, pong};
+
+ fn client(-chan: pingpong::client::ping) {
+ import pingpong::client;
+
+ let chan = client::ping(chan); ret;
+ log(error, "Sent ping");
+ let pong(_chan) = recv(chan);
+ log(error, "Received pong");
+ }
+
+ fn server(-chan: pingpong::server::ping) {
+ import pingpong::server;
+
+ let ping(chan) = recv(chan); ret;
+ log(error, "Received ping");
+ let _chan = server::pong(chan);
+ log(error, "Sent pong");
+ }
+}
+
+fn main() {
+ let (client_, server_) = pingpong::init();
+ let client_ = ~mut some(client_);
+ let server_ = ~mut some(server_);
+ do task::spawn |move client_| {
+ let mut client__ = none;
+ *client_ <-> client__;
+ test::client(option::unwrap(client__));
+ };
+ do task::spawn |move server_| {
+ let mut server__ = none;
+ *server_ <-> server__;
+ test::server(option::unwrap(server__));
+ };
+}
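The bounded ping-pong protocol above hand-builds buffered send/recv packets; a rough modern analogue of the same exchange can be sketched with `std::sync::mpsc` channels (this drops the static protocol state machine, which the pipes compiler enforced and plain channels do not):

```rust
// Client pings, server pongs, over a pair of one-way channels.
use std::sync::mpsc;
use std::thread;

fn main() {
    let (ping_tx, ping_rx) = mpsc::channel();
    let (pong_tx, pong_rx) = mpsc::channel();

    let server = thread::spawn(move || {
        // server: receive the ping, answer with a pong
        ping_rx.recv().unwrap();
        pong_tx.send("pong").unwrap();
    });

    // client: send the ping, wait for the reply
    ping_tx.send("ping").unwrap();
    assert_eq!(pong_rx.recv().unwrap(), "pong");
    server.join().unwrap();
}
```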
waiting:send {
signal -> !
}
-
- signaled:send { }
}
proto! stream {
proto! oneshot {
waiting:send {
- signal -> signaled
+ signal -> !
}
-
- signaled:send { }
}
fn main() {
let c = pipes::spawn_service(oneshot::init, |p| { recv(p); });
let iotask = uv::global_loop::get();
- sleep(iotask, 5000);
+ sleep(iotask, 500);
signal(c);
}
\ No newline at end of file
}
fn visit_rptr(mtbl: uint, inner: *tydesc) -> bool {
- self.align_to::<&static.u8>();
+ self.align_to::<&static/u8>();
if ! self.inner.visit_rptr(mtbl, inner) { ret false; }
- self.bump_past::<&static.u8>();
+ self.bump_past::<&static/u8>();
true
}
}
fn visit_evec_slice(mtbl: uint, inner: *tydesc) -> bool {
- self.align_to::<&static.[u8]>();
+ self.align_to::<&static/[u8]>();
if ! self.inner.visit_evec_slice(mtbl, inner) { ret false; }
- self.bump_past::<&static.[u8]>();
+ self.bump_past::<&static/[u8]>();
true
}
}
fn visit_self() -> bool {
- self.align_to::<&static.u8>();
+ self.align_to::<&static/u8>();
if ! self.inner.visit_self() { ret false; }
- self.align_to::<&static.u8>();
+ self.align_to::<&static/u8>();
true
}
-fn f(x : &a.int) -> &a.int {
+fn f(x : &a/int) -> &a/int {
ret &*x;
}
// A very limited test of the "bottom" region
-fn produce_static<T>() -> &static.T { fail; }
+fn produce_static<T>() -> &static/T { fail; }
fn foo<T>(x: &T) -> &uint { produce_static() }
// f's type should be a subtype of g's type), because f can be
// used in any context that expects g's type. But this currently
// fails.
- let mut g: fn@(y: &r.uint) = fn@(x: &r.uint) { };
+ let mut g: fn@(y: &r/uint) = fn@(x: &r/uint) { };
g = f;
}
// This version is the same as above, except that here, g's type is
// inferred.
fn ok_inferred(f: fn@(x: &uint)) {
- let mut g = fn@(x: &r.uint) { };
+ let mut g = fn@(x: &r/uint) { };
g = f;
}
fn main() {
let ctxt = { v: 22u };
let hc = { c: &ctxt };
- assert get_v(hc as get_ctxt) == 22u;
+
+ // This no longer works, interestingly, due to the ownership
+ // requirement. Perhaps this ownership requirement is too strict.
+ // assert get_v(hc as get_ctxt) == 22u;
}
--- /dev/null
+fn takes_two(x: &int, y: &int) -> int { *x + *y }
+
+fn with<T>(f: fn(x: &int) -> T) -> T {
+ f(&20)
+}
+
+fn has_one(x: &a/int) -> int {
+ do with |y| { takes_two(x, y) }
+}
+
+fn main() {
+ assert has_one(&2) == 22;
+}
\ No newline at end of file
--- /dev/null
+fn takes_two(x: &int, y: &int) -> int { *x + *y }
+
+fn has_two(x: &a/int, y: &b/int) -> int {
+ takes_two(x, y)
+}
+
+fn main() {
+ assert has_two(&20, &2) == 22;
+}
\ No newline at end of file
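The `&a/int` region syntax these two new tests exercise evolved into today's named lifetime parameters; the same two checks in modern Rust look like this (a sketch, not part of the patch):

```rust
// has_two names two independent lifetimes; has_one mixes a named
// lifetime with one introduced by the `with` callback.
fn takes_two(x: &i32, y: &i32) -> i32 { *x + *y }

fn with<T>(f: impl Fn(&i32) -> T) -> T {
    f(&20)
}

fn has_one<'a>(x: &'a i32) -> i32 {
    with(|y| takes_two(x, y))
}

fn has_two<'a, 'b>(x: &'a i32, y: &'b i32) -> i32 {
    takes_two(x, y)
}

fn main() {
    assert_eq!(has_one(&2), 22);
    assert_eq!(has_two(&20, &2), 22);
}
```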
x: int
};
-fn alloc(_bcx : &a.arena) -> &a.bcx unsafe {
+fn alloc(_bcx : &arena) -> &bcx unsafe {
ret unsafe::reinterpret_cast(
libc::malloc(sys::size_of::<bcx/&blk>() as libc::size_t));
}
-fn h(bcx : &a.bcx) -> &a.bcx {
+fn h(bcx : &bcx) -> &bcx {
ret alloc(bcx.fcx.arena);
}
-fn region_identity(x: &r.uint) -> &r.uint { x }
+fn region_identity(x: &uint) -> &uint { x }
fn apply<T>(t: T, f: fn(T) -> T) -> T { f(t) }
fn main() {
for uint::range(0u, 16u) |_i| {
- let builder = task::builder();
- task::unsupervise(builder);
- task::run(builder, || iloop() );
+ task::spawn_unlinked(|| iloop() );
}
-}
\ No newline at end of file
+}
fn test00() {
let i: int = 0;
- let builder = task::builder();
- let r = task::future_result(builder);
- task::run(builder, || start(i) );
+ let mut result = none;
+ do task::task().future_result(|-r| { result = some(r); }).spawn {
+ start(i)
+ }
// Sleep long enough for the task to finish.
let mut i = 0;
}
// Try joining tasks that have already finished.
- future::get(r);
+ future::get(option::unwrap(result));
#debug("Joined task.");
}
// Create and spawn tasks...
let mut results = ~[];
while i < number_of_tasks {
- let builder = task::builder();
- results += ~[task::future_result(builder)];
- do task::run(builder) |copy i| {
+ do task::task().future_result(|-r| {
+ results += ~[r];
+ }).spawn |copy i| {
test00_start(ch, i, number_of_messages)
}
i = i + 1;
let number_of_messages: int = 10;
let ch = comm::chan(p);
- let builder = task::builder();
- let r = task::future_result(builder);
- do task::run(builder) {
+ let mut result = none;
+ do task::task().future_result(|-r| { result = some(r); }).spawn {
test00_start(ch, number_of_messages);
}
i += 1;
}
- future::get(r);
+ future::get(option::unwrap(result));
assert (sum == number_of_messages * (number_of_messages - 1) / 2);
}
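The recurring rewrite in these hunks, from `task::future_result(builder)` to `task::task().future_result(...)`, captures a completion handle at spawn time and blocks on it later; that idiom corresponds directly to `JoinHandle` in modern Rust:

```rust
// Spawn, keep the handle, then block until the child finishes --
// the modern equivalent of future::get(result).
use std::thread;

fn main() {
    let handle = thread::spawn(|| {
        // child task body; its return value travels through the handle
        21 * 2
    });
    assert_eq!(handle.join().unwrap(), 42);
}
```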
let mut results = ~[];
while i < number_of_tasks {
i = i + 1;
- let builder = task::builder();
- results += ~[task::future_result(builder)];
- do task::run(builder) |copy i| {
+ do task::task().future_result(|-r| {
+ results += ~[r];
+ }).spawn |copy i| {
test00_start(ch, i, number_of_messages);
}
}
let mut results = ~[];
while i < number_of_tasks {
i = i + 1;
- let builder = task::builder();
- results += ~[task::future_result(builder)];
- do task::run(builder) |copy i| {
+ do task::task().future_result(|-r| {
+ results += ~[r];
+ }).spawn |copy i| {
test06_start(i);
};
}
-// xfail-test
+// xfail-win32
// A port of task-killjoin to use a class with a dtor to manage
// the join.
let ch: comm::chan<bool>; let v: @mut bool;
new(ch: comm::chan<bool>, v: @mut bool) { self.ch = ch; self.v = v; }
drop {
- #error~["notify: task=%? v=%x unwinding=%b b=%b",
+ #error["notify: task=%? v=%x unwinding=%b b=%b",
task::get_task(),
ptr::addr_of(*(self.v)) as uint,
task::failing(),
}
}
-fn joinable(f: fn~()) -> comm::port<bool> {
- fn wrapper(+pair: (comm::chan<bool>, fn())) {
- let (c, f) = pair;
+fn joinable(+f: fn~()) -> comm::port<bool> {
+ fn wrapper(+c: comm::chan<bool>, +f: fn()) {
let b = @mut false;
- #error~["wrapper: task=%? allocated v=%x",
+ #error["wrapper: task=%? allocated v=%x",
task::get_task(),
ptr::addr_of(*b) as uint];
let _r = notify(c, b);
}
let p = comm::port();
let c = comm::chan(p);
- let _ = task::spawn {|| wrapper((c, f)) };
+ do task::spawn_unlinked { wrapper(c, copy f) };
p
}
fail;
}
-fn supervisor(b: task::builder) {
- // Unsupervise this task so the process doesn't return a failure status as
- // a result of the main task being killed.
- task::unsupervise(b);
+fn supervisor() {
#error["supervisor task=%?", task::get_task()];
let t = joinable(supervised);
join(t);
}
fn main() {
- join(joinable({|| supervisor(task::builder())}));
+ join(joinable(supervisor));
}
// Local Variables:
// Unsupervise this task so the process doesn't return a failure status as
// a result of the main task being killed.
let f = supervised;
- task::try(|| supervised() );
+ task::try(supervised);
}
fn main() {
- let builder = task::builder();
- task::unsupervise(builder);
- task::run(builder, || supervisor() )
+ task::spawn_unlinked(supervisor)
}
// Local Variables:
// xfail-win32
// error-pattern:ran out of stack
-// Test that the task fails after hiting the recursion limit, but
+// Test that the task fails after hitting the recursion limit, but
// that it doesn't bring down the whole proc
fn main() {
- let builder = task::builder();
- task::unsupervise(builder);
- do task::run(builder) {
+ do task::spawn_unlinked {
fn f() { f() };
f();
};
-}
\ No newline at end of file
+}
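`task::spawn_unlinked`, substituted throughout these hunks, keeps a failing child from taking down its parent. Plain `thread::spawn` gives the same isolation today, with the child's failure surfacing as an `Err` from `join()`:

```rust
// The parent survives a panicking child and observes the failure
// through the join handle.
use std::thread;

fn main() {
    let child = thread::spawn(|| {
        panic!("child fails on purpose");
    });
    // No unwinding propagates here; join() reports the panic instead.
    assert!(child.join().is_err());
}
```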
}
fn main() {
- let builder = task::builder();
- task::unsupervise(builder);
- task::run(builder, || f() );
-}
\ No newline at end of file
+ task::spawn_unlinked(f);
+}
fn main() {
let p = comm::port();
let c = comm::chan(p);
- let builder = task::builder();
- task::unsupervise(builder);
- task::run(builder, || f(c) );
+ task::spawn_unlinked(|| f(c) );
#error("hiiiiiiiii");
assert comm::recv(p);
-}
\ No newline at end of file
+}
}
fn main() {
- let builder = task::builder();
- task::unsupervise(builder);
- task::run(builder, || f() );
-}
\ No newline at end of file
+ task::spawn_unlinked(f);
+}
}
fn main() {
- let builder = task::builder();
- task::unsupervise(builder);
- task::run(builder, || f() );
-}
\ No newline at end of file
+ task::spawn_unlinked(f);
+}
fn main() {
+ let ε = 0.00001;
let Π = 3.14;
let लंच = Π * Π + 1.54;
- assert लंच - 1.54 == Π * Π;
+ assert float::abs((लंच - 1.54) - (Π * Π)) < ε;
assert საჭმელად_გემრიელი_სადილი() == 0;
}
import task::*;
fn main() {
- let builder = task::builder();
- let result = task::future_result(builder);
- task::run(builder, || child() );
+ let mut result = none;
+ task::task().future_result(|-r| { result = some(r); }).spawn(child);
#error("1");
yield();
#error("2");
yield();
#error("3");
- future::get(result);
+ future::get(option::unwrap(result));
}
fn child() {
import task::*;
fn main() {
- let builder = task::builder();
- let result = task::future_result(builder);
- task::run(builder, || child() );
+ let mut result = none;
+ task::task().future_result(|-r| { result = some(r); }).spawn(child);
#error("1");
yield();
- future::get(result);
+ future::get(option::unwrap(result));
}
fn child() { #error("2"); }