Long-standing branch to remove foreign function wrappers altogether. Calls to C functions are made "in place" with no stack manipulation; the scheme relies entirely on correct use of `#[fixed_stack_segment]` to guarantee adequate stack space. A linter is added to detect when `#[fixed_stack_segment]` annotations are missing. An `externfn!` macro is added to make it easier to declare foreign fns and their wrappers in one go. This macro may need some refinement; for example, it might be good to be able to declare a group of foreign fns at once. I leave that for future work (hopefully somebody else's work :) ).
Fixes #3678.
fn snappy_max_compressed_length(source_length: size_t) -> size_t;
}
+#[fixed_stack_segment]
fn main() {
let x = unsafe { snappy_max_compressed_length(100) };
println(fmt!("max compressed length of a 100 byte buffer: %?", x));
valid for all possible inputs since the pointer could be dangling, and raw pointers fall outside of
Rust's safe memory model.
+Finally, the `#[fixed_stack_segment]` annotation that appears on
+`main()` instructs the Rust compiler that when `main()` executes, it
+should request a "very large" stack segment. More details on
+stack management can be found in the following sections.
+
When declaring the argument types to a foreign function, the Rust compiler will not check if the
declaration is correct, so specifying it correctly is part of keeping the binding correct at
runtime.
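
For instance, nothing stops a binding from declaring the wrong argument type. The sketch below (hypothetical, reusing the snappy function from above) would compile without complaint but misbehave at runtime, since the real C prototype takes a `size_t`:

~~~~ {.xfail-test}
use std::libc::c_int;

// WRONG: the C prototype takes and returns size_t, not c_int.
// The Rust compiler cannot catch this mismatch.
extern {
    fn snappy_max_compressed_length(source_length: c_int) -> c_int;
}
~~~~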
the allocated memory. The length is less than or equal to the capacity.
~~~~ {.xfail-test}
+#[fixed_stack_segment]
+#[inline(never)]
pub fn validate_compressed_buffer(src: &[u8]) -> bool {
unsafe {
snappy_validate_compressed_buffer(vec::raw::to_ptr(src), src.len() as size_t) == 0
guarantee that calling it is safe for all inputs by leaving off `unsafe` from the function
signature.
+The `validate_compressed_buffer` wrapper is also annotated with two
+attributes `#[fixed_stack_segment]` and `#[inline(never)]`. The
+purpose of these attributes is to guarantee that there will be
+sufficient stack for the C function to execute. This is necessary
+because Rust, unlike C, does not assume that the stack is allocated in
+one contiguous chunk. Instead, we rely on a *segmented stack* scheme,
+in which the stack grows and shrinks as necessary. C code, however,
+expects one large stack, and so callers of C functions must request a
+large stack segment to ensure that the C routine will not run off the
+end of the stack.
+
+The compiler includes a lint mode that will report an error if you
+call a C function without a `#[fixed_stack_segment]` attribute. More
+details on the lint mode are given in a later section.
+
+You may be wondering why we include a `#[inline(never)]` directive.
+This directive informs the compiler never to inline this function.
+While not strictly necessary, it is usually a good idea to use an
+`#[inline(never)]` directive in concert with `#[fixed_stack_segment]`.
+The reason is that if a fn annotated with `fixed_stack_segment` is
+inlined, then its caller also inherits the `fixed_stack_segment`
+annotation. This means that rather than requesting a large stack
+segment only for the duration of the call into C, the large stack
+segment would be used for the entire duration of the caller. This is
+not necessarily *bad* -- it can for example be more efficient,
+particularly if `validate_compressed_buffer()` is called multiple
+times in a row -- but it does work against the purpose of the
+segmented stack scheme, which is to keep stacks small and thus
+conserve address space.
+
The `snappy_compress` and `snappy_uncompress` functions are more complex, since a buffer has to be
allocated to hold the output too.
~~~~ {.xfail-test}
pub fn compress(src: &[u8]) -> ~[u8] {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let srclen = src.len() as size_t;
let psrc = vec::raw::to_ptr(src);
~~~~ {.xfail-test}
pub fn uncompress(src: &[u8]) -> Option<~[u8]> {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let srclen = src.len() as size_t;
let psrc = vec::raw::to_ptr(src);
For reference, the examples used here are also available as a [library on
GitHub](https://github.com/thestinger/rust-snappy).
+# Automatic wrappers
+
+Sometimes writing Rust wrappers can be quite tedious. For example, if
+a function does not take any pointer arguments, there is often no need
+to translate types. In such cases, it is usually still a good idea
+to have a Rust wrapper so as to manage the segmented stacks, but you
+can take advantage of the (standard) `externfn!` macro to remove some
+of the tedium.
+
+In the initial section, we showed an extern block that declared a
+specific snappy API function:
+
+~~~~ {.xfail-test}
+use std::libc::size_t;
+
+#[link_args = "-lsnappy"]
+extern {
+ fn snappy_max_compressed_length(source_length: size_t) -> size_t;
+}
+
+#[fixed_stack_segment]
+fn main() {
+ let x = unsafe { snappy_max_compressed_length(100) };
+ println(fmt!("max compressed length of a 100 byte buffer: %?", x));
+}
+~~~~
+
+To avoid the need to create a wrapper fn for `snappy_max_compressed_length()`,
+and also to avoid the need to think about `#[fixed_stack_segment]`, we
+could simply use the `externfn!` macro instead, as shown here:
+
+~~~~ {.xfail-test}
+use std::libc::size_t;
+
+externfn!(#[link_args = "-lsnappy"]
+ fn snappy_max_compressed_length(source_length: size_t) -> size_t)
+
+fn main() {
+ let x = unsafe { snappy_max_compressed_length(100) };
+ println(fmt!("max compressed length of a 100 byte buffer: %?", x));
+}
+~~~~
+
+As you can see from the example, `externfn!` replaces the extern block
+entirely. After macro expansion, it will create something like this:
+
+~~~~ {.xfail-test}
+use std::libc::size_t;
+
+// Automatically generated by
+// externfn!(#[link_args = "-lsnappy"]
+// fn snappy_max_compressed_length(source_length: size_t) -> size_t)
+unsafe fn snappy_max_compressed_length(source_length: size_t) -> size_t {
+ #[fixed_stack_segment]; #[inline(never)];
+ return snappy_max_compressed_length(source_length);
+
+ #[link_args = "-lsnappy"]
+ extern {
+ fn snappy_max_compressed_length(source_length: size_t) -> size_t;
+ }
+}
+
+fn main() {
+ let x = unsafe { snappy_max_compressed_length(100) };
+ println(fmt!("max compressed length of a 100 byte buffer: %?", x));
+}
+~~~~
+
+# Segmented stacks and the linter
+
+By default, whenever you invoke a non-Rust fn, the `cstack` lint will
+check that one of the following conditions holds:
+
+1. The call occurs inside of a fn that has been annotated with
+ `#[fixed_stack_segment]`;
+2. The call occurs inside of an `extern fn`;
+3. The call occurs within a stack closure created by some other
+ safe fn.
+
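+For instance, an `extern fn` used as a C callback satisfies condition
+2 automatically, so no annotation is needed. In this hypothetical
+sketch, `c_log` stands in for any foreign function:
+
+~~~~ {.xfail-test}
+extern fn progress_callback(percent: c_int) {
+    // This fn is already being called from C on a large stack, so a
+    // call back into C passes the lint without #[fixed_stack_segment].
+    unsafe { c_log(percent); }
+}
+~~~~
+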
+All of these conditions ensure that you are running on a large stack
+segment. However, they are sometimes too strict. If your application
+will be making many calls into C, it is often beneficial to promote
+the `#[fixed_stack_segment]` attribute higher up the call chain. For
+example, the Rust compiler actually labels main itself as requiring a
+`#[fixed_stack_segment]`. In such cases, the linter is just an
+annoyance, because all C calls that occur from within the Rust
+compiler are made on a large stack. Another situation where this
+frequently occurs is on a 64-bit architecture, where large stacks are
+the default. In such cases, you can disable the linter by including a
+`#[allow(cstack)]` directive somewhere, which permits violations of
+the "cstack" rules given above (you can also use `#[warn(cstack)]` to
+convert the errors into warnings, if you prefer).
+
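+For example, a crate whose C calls are all known to run on a large
+stack could opt out of the lint entirely. This is a hypothetical
+sketch; `c_call` stands in for any foreign function:
+
+~~~~ {.xfail-test}
+#[allow(cstack)];
+
+fn helper() {
+    // No #[fixed_stack_segment] needed here; the lint is disabled
+    // crate-wide by the #[allow(cstack)] directive above.
+    unsafe { c_call(); }
+}
+~~~~
+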
# Destructors
Foreign libraries often hand off ownership of resources to the calling code,
impl<T: Send> Unique<T> {
pub fn new(value: T) -> Unique<T> {
+ #[fixed_stack_segment];
+ #[inline(never)];
+
unsafe {
let ptr = malloc(std::sys::size_of::<T>() as size_t) as *mut T;
assert!(!ptr::is_null(ptr));
#[unsafe_destructor]
impl<T: Send> Drop for Unique<T> {
fn drop(&self) {
+ #[fixed_stack_segment];
+ #[inline(never)];
+
unsafe {
let x = intrinsics::init(); // dummy value to swap in
// moving the object out is needed to call the destructor
use std::libc;
fn malloc(n: size_t) -> CVec<u8> {
+ #[fixed_stack_segment];
+ #[inline(never)];
+
unsafe {
let mem = libc::malloc(n);
assert!(mem as int != 0);
- c_vec_with_dtor(mem as *mut u8, n as uint, || free(mem))
+ return c_vec_with_dtor(mem as *mut u8, n as uint, || f(mem));
+ }
+
+ fn f(mem: *c_void) {
+ #[fixed_stack_segment]; #[inline(never)];
+ unsafe { libc::free(mem) }
}
}
static TDEFL_WRITE_ZLIB_HEADER : c_int = 0x01000; // write zlib header and adler32 checksum
fn deflate_bytes_internal(bytes: &[u8], flags: c_int) -> ~[u8] {
+ #[fixed_stack_segment]; #[inline(never)];
+
do bytes.as_imm_buf |b, len| {
unsafe {
let mut outsz : size_t = 0;
}
fn inflate_bytes_internal(bytes: &[u8], flags: c_int) -> ~[u8] {
+ #[fixed_stack_segment]; #[inline(never)];
+
do bytes.as_imm_buf |b, len| {
unsafe {
let mut outsz : size_t = 0;
pub mod rustrt {
use std::libc::{c_char, c_int};
- extern {
- pub fn linenoise(prompt: *c_char) -> *c_char;
- pub fn linenoiseHistoryAdd(line: *c_char) -> c_int;
- pub fn linenoiseHistorySetMaxLen(len: c_int) -> c_int;
- pub fn linenoiseHistorySave(file: *c_char) -> c_int;
- pub fn linenoiseHistoryLoad(file: *c_char) -> c_int;
- pub fn linenoiseSetCompletionCallback(callback: *u8);
- pub fn linenoiseAddCompletion(completions: *(), line: *c_char);
+ #[cfg(stage0)]
+ mod macro_hack {
+ #[macro_escape];
+ macro_rules! externfn(
+ (fn $name:ident ($($arg_name:ident : $arg_ty:ty),*) $(-> $ret_ty:ty),*) => (
+ extern {
+ fn $name($($arg_name : $arg_ty),*) $(-> $ret_ty),*;
+ }
+ )
+ )
}
+
+ externfn!(fn linenoise(prompt: *c_char) -> *c_char)
+ externfn!(fn linenoiseHistoryAdd(line: *c_char) -> c_int)
+ externfn!(fn linenoiseHistorySetMaxLen(len: c_int) -> c_int)
+ externfn!(fn linenoiseHistorySave(file: *c_char) -> c_int)
+ externfn!(fn linenoiseHistoryLoad(file: *c_char) -> c_int)
+ externfn!(fn linenoiseSetCompletionCallback(callback: *u8))
+ externfn!(fn linenoiseAddCompletion(completions: *(), line: *c_char))
}
/// Add a line to history
rustrt::linenoiseAddCompletion(completions, buf);
}
}
-}
+ }
}
}
}
fn usage(binary: &str, helpstr: &str) -> ! {
+ #[fixed_stack_segment]; #[inline(never)];
+
let message = fmt!("Usage: %s [OPTIONS] [FILTER]", binary);
println(groups::usage(message, optgroups()));
println("");
* nanoseconds since 1970-01-01T00:00:00Z.
*/
pub fn get_time() -> Timespec {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let mut sec = 0i64;
let mut nsec = 0i32;
* in nanoseconds since an unspecified epoch.
*/
pub fn precise_time_ns() -> u64 {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let mut ns = 0u64;
rustrt::precise_time_ns(&mut ns);
}
pub fn tzset() {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
rustrt::rust_tzset();
}
/// Returns the specified time in UTC
pub fn at_utc(clock: Timespec) -> Tm {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let Timespec { sec, nsec } = clock;
let mut tm = empty_tm();
/// Returns the specified time in the local timezone
pub fn at(clock: Timespec) -> Tm {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let Timespec { sec, nsec } = clock;
let mut tm = empty_tm();
impl Tm {
/// Convert time to the seconds from January 1, 1970
pub fn to_timespec(&self) -> Timespec {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let sec = match self.tm_gmtoff {
0_i32 => rustrt::rust_timegm(self),
}
pub fn main() {
+ #[fixed_stack_segment]; #[inline(never)];
+
let os_args = os::args();
if (os_args.len() > 1 && (os_args[1] == ~"-v" || os_args[1] == ~"--version")) {
pub struct Upcalls {
trace: ValueRef,
- call_shim_on_c_stack: ValueRef,
- call_shim_on_rust_stack: ValueRef,
rust_personality: ValueRef,
reset_stack_limit: ValueRef
}
@Upcalls {
trace: upcall!(fn trace(opaque_ptr, opaque_ptr, int_ty) -> Type::void()),
- call_shim_on_c_stack: upcall!(fn call_shim_on_c_stack(opaque_ptr, opaque_ptr) -> int_ty),
- call_shim_on_rust_stack:
- upcall!(fn call_shim_on_rust_stack(opaque_ptr, opaque_ptr) -> int_ty),
rust_personality: upcall!(nothrow fn rust_personality -> Type::i32()),
reset_stack_limit: upcall!(nothrow fn reset_stack_limit -> Type::void())
}
time(time_passes, ~"loop checking", ||
middle::check_loop::check_crate(ty_cx, crate));
+ time(time_passes, ~"stack checking", ||
+ middle::stack_check::stack_check_crate(ty_cx, crate));
+
let middle::moves::MoveMaps {moves_map, moved_variables_set,
capture_map} =
time(time_passes, ~"compute moves", ||
}
debugging_opts |= this_bit;
}
+
if debugging_opts & session::debug_llvm != 0 {
- unsafe {
- llvm::LLVMSetDebug(1);
+ set_llvm_debug();
+
+ fn set_llvm_debug() {
+ #[fixed_stack_segment]; #[inline(never)];
+ unsafe { llvm::LLVMSetDebug(1); }
}
}
// option. This file may not be copied, modified, or distributed
// except according to those terms.
+// LLVM wrappers are intended to be called from trans,
+// which already runs in a #[fixed_stack_segment]
+#[allow(cstack)];
+
use std::c_str::ToCStr;
use std::hashmap::HashMap;
use std::libc::{c_uint, c_ushort};
self.type_to_str_depth(ty, 30)
}
+ pub fn types_to_str(&self, tys: &[Type]) -> ~str {
+ let strs = tys.map(|t| self.type_to_str(*t));
+ fmt!("[%s]", strs.connect(","))
+ }
+
pub fn val_to_str(&self, val: ValueRef) -> ~str {
unsafe {
let ty = Type::from_ref(llvm::LLVMTypeOf(val));
#[deriving(Clone, Eq)]
pub enum lint {
ctypes,
+ cstack,
unused_imports,
unnecessary_qualification,
while_true,
default: warn
}),
+ ("cstack",
+ LintSpec {
+ lint: cstack,
+ desc: "only invoke foreign functions from fixed_stack_segment fns",
+ default: deny
+ }),
+
("unused_imports",
LintSpec {
lint: unused_imports,
--- /dev/null
+// Copyright 2012-2013 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+/*!
+
+Lint mode to detect cases where we call non-Rust fns, which do not
+have a stack growth check, from locations not annotated to request
+large stacks.
+
+*/
+
+use middle::lint;
+use middle::ty;
+use syntax::ast;
+use syntax::ast_map;
+use syntax::attr;
+use syntax::codemap::span;
+use visit = syntax::oldvisit;
+use util::ppaux::Repr;
+
+#[deriving(Clone)]
+struct Context {
+ tcx: ty::ctxt,
+ safe_stack: bool
+}
+
+pub fn stack_check_crate(tcx: ty::ctxt,
+ crate: &ast::Crate) {
+ let new_cx = Context {
+ tcx: tcx,
+ safe_stack: false
+ };
+ let visitor = visit::mk_vt(@visit::Visitor {
+ visit_item: stack_check_item,
+ visit_fn: stack_check_fn,
+ visit_expr: stack_check_expr,
+ ..*visit::default_visitor()
+ });
+ visit::visit_crate(crate, (new_cx, visitor));
+}
+
+fn stack_check_item(item: @ast::item,
+ (in_cx, v): (Context, visit::vt<Context>)) {
+ match item.node {
+ ast::item_fn(_, ast::extern_fn, _, _, _) => {
+ // an extern fn is already being called from C code...
+ let new_cx = Context {safe_stack: true, ..in_cx};
+ visit::visit_item(item, (new_cx, v));
+ }
+ ast::item_fn(*) => {
+ let safe_stack = fixed_stack_segment(item.attrs);
+ let new_cx = Context {safe_stack: safe_stack, ..in_cx};
+ visit::visit_item(item, (new_cx, v));
+ }
+ ast::item_impl(_, _, _, ref methods) => {
+ // visit_method() would make this nicer
+ for &method in methods.iter() {
+ let safe_stack = fixed_stack_segment(method.attrs);
+ let new_cx = Context {safe_stack: safe_stack, ..in_cx};
+ visit::visit_method_helper(method, (new_cx, v));
+ }
+ }
+ _ => {
+ visit::visit_item(item, (in_cx, v));
+ }
+ }
+
+ fn fixed_stack_segment(attrs: &[ast::Attribute]) -> bool {
+ attr::contains_name(attrs, "fixed_stack_segment")
+ }
+}
+
+fn stack_check_fn<'a>(fk: &visit::fn_kind,
+ decl: &ast::fn_decl,
+ body: &ast::Block,
+ sp: span,
+ id: ast::NodeId,
+ (in_cx, v): (Context, visit::vt<Context>)) {
+ let safe_stack = match *fk {
+ visit::fk_method(*) | visit::fk_item_fn(*) => {
+ in_cx.safe_stack // see stack_check_item above
+ }
+ visit::fk_anon(*) | visit::fk_fn_block => {
+ match ty::get(ty::node_id_to_type(in_cx.tcx, id)).sty {
+ ty::ty_bare_fn(*) |
+ ty::ty_closure(ty::ClosureTy {sigil: ast::OwnedSigil, _}) |
+ ty::ty_closure(ty::ClosureTy {sigil: ast::ManagedSigil, _}) => {
+ false
+ }
+ _ => {
+ in_cx.safe_stack
+ }
+ }
+ }
+ };
+ let new_cx = Context {safe_stack: safe_stack, ..in_cx};
+ debug!("stack_check_fn(safe_stack=%b, id=%?)", safe_stack, id);
+ visit::visit_fn(fk, decl, body, sp, id, (new_cx, v));
+}
+
+fn stack_check_expr<'a>(expr: @ast::expr,
+ (cx, v): (Context, visit::vt<Context>)) {
+ debug!("stack_check_expr(safe_stack=%b, expr=%s)",
+ cx.safe_stack, expr.repr(cx.tcx));
+ if !cx.safe_stack {
+ match expr.node {
+ ast::expr_call(callee, _, _) => {
+ let callee_ty = ty::expr_ty(cx.tcx, callee);
+ debug!("callee_ty=%s", callee_ty.repr(cx.tcx));
+ match ty::get(callee_ty).sty {
+ ty::ty_bare_fn(ref fty) => {
+ if !fty.abis.is_rust() && !fty.abis.is_intrinsic() {
+ call_to_extern_fn(cx, callee);
+ }
+ }
+ _ => {}
+ }
+ }
+ _ => {}
+ }
+ }
+ visit::visit_expr(expr, (cx, v));
+}
+
+fn call_to_extern_fn(cx: Context, callee: @ast::expr) {
+ // Permit direct calls to extern fns that are annotated with
+ // #[rust_stack]. This is naturally a horrible pain to achieve.
+ match callee.node {
+ ast::expr_path(*) => {
+ match cx.tcx.def_map.find(&callee.id) {
+ Some(&ast::def_fn(id, _)) if id.crate == ast::LOCAL_CRATE => {
+ match cx.tcx.items.find(&id.node) {
+ Some(&ast_map::node_foreign_item(item, _, _, _)) => {
+ if attr::contains_name(item.attrs, "rust_stack") {
+ return;
+ }
+ }
+ _ => {}
+ }
+ }
+ _ => {}
+ }
+ }
+ _ => {}
+ }
+
+ cx.tcx.sess.add_lint(lint::cstack,
+ callee.id,
+ callee.span,
+ fmt!("invoking non-Rust fn in fn without \
+ #[fixed_stack_segment]"));
+}
return llfn;
}
-pub fn get_extern_fn(externs: &mut ExternMap, llmod: ModuleRef, name: @str,
+pub fn get_extern_fn(externs: &mut ExternMap, llmod: ModuleRef, name: &str,
cc: lib::llvm::CallConv, ty: Type) -> ValueRef {
- match externs.find_copy(&name) {
- Some(n) => return n,
+ match externs.find_equiv(&name) {
+ Some(n) => return *n,
None => ()
}
let f = decl_fn(llmod, name, cc, ty);
- externs.insert(name, f);
+ externs.insert(name.to_owned(), f);
return f;
}
pub fn get_extern_const(externs: &mut ExternMap, llmod: ModuleRef,
- name: @str, ty: Type) -> ValueRef {
- match externs.find_copy(&name) {
- Some(n) => return n,
+ name: &str, ty: Type) -> ValueRef {
+ match externs.find_equiv(&name) {
+ Some(n) => return *n,
None => ()
}
unsafe {
let c = do name.with_c_str |buf| {
llvm::LLVMAddGlobal(llmod, ty.to_ref(), buf)
};
- externs.insert(name, c);
+ externs.insert(name.to_owned(), c);
return c;
}
}
None,
ty::lookup_item_type(tcx, parent_id).ty);
let llty = type_of_dtor(ccx, class_ty);
- let name = name.to_managed(); // :-(
get_extern_fn(&mut ccx.externs,
ccx.llmod,
name,
}
}
-pub fn null_env_ptr(bcx: @mut Block) -> ValueRef {
- C_null(Type::opaque_box(bcx.ccx()).ptr_to())
+pub fn null_env_ptr(ccx: &CrateContext) -> ValueRef {
+ C_null(Type::opaque_box(ccx).ptr_to())
}
pub fn trans_external_path(ccx: &mut CrateContext, did: ast::def_id, t: ty::t)
-> ValueRef {
- let name = csearch::get_symbol(ccx.sess.cstore, did).to_managed(); // Sad
+ let name = csearch::get_symbol(ccx.sess.cstore, did);
match ty::get(t).sty {
ty::ty_bare_fn(_) | ty::ty_closure(_) => {
let llty = type_of_fn_from_ty(ccx, t);
// slot where the return value of the function must go.
pub fn make_return_pointer(fcx: @mut FunctionContext, output_type: ty::t) -> ValueRef {
unsafe {
- if !ty::type_is_immediate(fcx.ccx.tcx, output_type) {
+ if type_of::return_uses_outptr(fcx.ccx.tcx, output_type) {
llvm::LLVMGetParam(fcx.llfn, 0)
} else {
let lloutputtype = type_of::type_of(fcx.ccx, output_type);
ty::subst_tps(ccx.tcx, substs.tys, substs.self_ty, output_type)
}
};
- let is_immediate = ty::type_is_immediate(ccx.tcx, substd_output_type);
+ let uses_outptr = type_of::return_uses_outptr(ccx.tcx, substd_output_type);
let fcx = @mut FunctionContext {
llfn: llfndecl,
llenv: unsafe {
llreturn: None,
llself: None,
personality: None,
- has_immediate_return_value: is_immediate,
+ caller_expects_out_pointer: uses_outptr,
llargs: @mut HashMap::new(),
lllocals: @mut HashMap::new(),
llupvars: @mut HashMap::new(),
fcx.alloca_insert_pt = Some(llvm::LLVMGetFirstInstruction(entry_bcx.llbb));
}
- if !ty::type_is_nil(substd_output_type) && !(is_immediate && skip_retptr) {
- fcx.llretptr = Some(make_return_pointer(fcx, substd_output_type));
+ if !ty::type_is_voidish(substd_output_type) {
+ // If the function returns nil/bot, there is no real return
+ // value, so do not set `llretptr`.
+ if !skip_retptr || uses_outptr {
+ // Otherwise, we normally allocate the llretptr, unless we
+ // have been instructed to skip it for immediate return
+ // values.
+ fcx.llretptr = Some(make_return_pointer(fcx, substd_output_type));
+ }
}
fcx
}
// Builds the return block for a function.
pub fn build_return_block(fcx: &FunctionContext, ret_cx: @mut Block) {
// Return the value if this function immediate; otherwise, return void.
- if fcx.llretptr.is_none() || !fcx.has_immediate_return_value {
+ if fcx.llretptr.is_none() || fcx.caller_expects_out_pointer {
return RetVoid(ret_cx);
}
// translation calls that don't have a return value (trans_crate,
// trans_mod, trans_item, et cetera) and those that do
// (trans_block, trans_expr, et cetera).
- if body.expr.is_none() || ty::type_is_bot(block_ty) ||
- ty::type_is_nil(block_ty)
- {
+ if body.expr.is_none() || ty::type_is_voidish(block_ty) {
bcx = controlflow::trans_block(bcx, body, expr::Ignore);
} else {
let dest = expr::SaveIn(fcx.llretptr.unwrap());
ast::item_fn(ref decl, purity, _abis, ref generics, ref body) => {
if purity == ast::extern_fn {
let llfndecl = get_item_val(ccx, item.id);
- foreign::trans_foreign_fn(ccx,
- vec::append((*path).clone(),
- [path_name(item.ident)]),
- decl,
- body,
- llfndecl,
- item.id);
+ foreign::trans_rust_fn_with_foreign_abi(
+ ccx,
+ &vec::append((*path).clone(),
+ [path_name(item.ident)]),
+ decl,
+ body,
+ llfndecl,
+ item.id);
} else if !generics.is_type_parameterized() {
let llfndecl = get_item_val(ccx, item.id);
trans_fn(ccx,
}
},
ast::item_foreign_mod(ref foreign_mod) => {
- foreign::trans_foreign_mod(ccx, path, foreign_mod);
+ foreign::trans_foreign_mod(ccx, foreign_mod);
}
ast::item_struct(struct_def, ref generics) => {
if !generics.is_type_parameterized() {
fn create_main(ccx: @mut CrateContext, main_llfn: ValueRef) -> ValueRef {
let nt = ty::mk_nil();
-
- let llfty = type_of_fn(ccx, [], nt);
+ let llfty = type_of_rust_fn(ccx, [], nt);
let llfdecl = decl_fn(ccx.llmod, "_rust_main",
lib::llvm::CCallConv, llfty);
// the args vector built in create_entry_fn will need
// be updated if this assertion starts to fail.
- assert!(fcx.has_immediate_return_value);
+ assert!(!fcx.caller_expects_out_pointer);
let bcx = fcx.entry_bcx.unwrap();
// Call main.
let llfn = if purity != ast::extern_fn {
register_fn(ccx, i.span, sym, i.id, ty)
} else {
- foreign::register_foreign_fn(ccx, i.span, sym, i.id)
+ foreign::register_rust_fn_with_foreign_abi(ccx,
+ i.span,
+ sym,
+ i.id)
};
set_inline_hint_if_appr(i.attrs, llfn);
llfn
register_method(ccx, id, pth, m)
}
- ast_map::node_foreign_item(ni, _, _, pth) => {
+ ast_map::node_foreign_item(ni, abis, _, pth) => {
let ty = ty::node_id_to_type(ccx.tcx, ni.id);
exprt = true;
match ni.node {
ast::foreign_item_fn(*) => {
let path = vec::append((*pth).clone(), [path_name(ni.ident)]);
- let sym = exported_name(ccx, path, ty, ni.attrs);
-
- register_fn(ccx, ni.span, sym, ni.id, ty)
+ foreign::register_foreign_item_fn(ccx, abis, &path, ni)
}
ast::foreign_item_static(*) => {
let ident = token::ident_to_str(&ni.ident);
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-use lib::llvm::{llvm, ValueRef, Attribute, Void};
-use middle::trans::base::*;
-use middle::trans::build::*;
-use middle::trans::common::*;
-
-use middle::trans::type_::Type;
-
-use std::libc::c_uint;
+use lib::llvm::Attribute;
use std::option;
-
-pub trait ABIInfo {
- fn compute_info(&self, atys: &[Type], rty: Type, ret_def: bool) -> FnType;
-}
+use middle::trans::context::CrateContext;
+use middle::trans::cabi_x86;
+use middle::trans::cabi_x86_64;
+use middle::trans::cabi_arm;
+use middle::trans::cabi_mips;
+use middle::trans::type_::Type;
+use syntax::abi::{X86, X86_64, Arm, Mips};
#[deriving(Clone)]
pub struct LLVMType {
ty: Type
}
+/// Metadata describing how the arguments to a native function
+/// should be passed in order to respect the native ABI.
+///
+/// I will do my best to describe this structure, but these
+/// comments are reverse-engineered and may be inaccurate. -NDM
pub struct FnType {
+ /// The LLVM types of each argument. If the cast flag is true,
+ /// then the argument should be cast, typically because the
+ /// official argument type will be an int and the rust type is i8
+ /// or something like that.
arg_tys: ~[LLVMType],
- ret_ty: LLVMType,
- attrs: ~[option::Option<Attribute>],
- sret: bool
-}
-
-impl FnType {
- pub fn decl_fn(&self, decl: &fn(fnty: Type) -> ValueRef) -> ValueRef {
- let atys = self.arg_tys.iter().map(|t| t.ty).collect::<~[Type]>();
- let rty = self.ret_ty.ty;
- let fnty = Type::func(atys, &rty);
- let llfn = decl(fnty);
-
- for (i, a) in self.attrs.iter().enumerate() {
- match *a {
- option::Some(attr) => {
- unsafe {
- let llarg = get_param(llfn, i);
- llvm::LLVMAddAttribute(llarg, attr as c_uint);
- }
- }
- _ => ()
- }
- }
- return llfn;
- }
- pub fn build_shim_args(&self, bcx: @mut Block, arg_tys: &[Type], llargbundle: ValueRef)
- -> ~[ValueRef] {
- let mut atys: &[LLVMType] = self.arg_tys;
- let mut attrs: &[option::Option<Attribute>] = self.attrs;
-
- let mut llargvals = ~[];
- let mut i = 0u;
- let n = arg_tys.len();
-
- if self.sret {
- let llretptr = GEPi(bcx, llargbundle, [0u, n]);
- let llretloc = Load(bcx, llretptr);
- llargvals = ~[llretloc];
- atys = atys.tail();
- attrs = attrs.tail();
- }
-
- while i < n {
- let llargval = if atys[i].cast {
- let arg_ptr = GEPi(bcx, llargbundle, [0u, i]);
- let arg_ptr = BitCast(bcx, arg_ptr, atys[i].ty.ptr_to());
- Load(bcx, arg_ptr)
- } else if attrs[i].is_some() {
- GEPi(bcx, llargbundle, [0u, i])
- } else {
- load_inbounds(bcx, llargbundle, [0u, i])
- };
- llargvals.push(llargval);
- i += 1u;
- }
-
- return llargvals;
- }
-
- pub fn build_shim_ret(&self, bcx: @mut Block, arg_tys: &[Type], ret_def: bool,
- llargbundle: ValueRef, llretval: ValueRef) {
- for (i, a) in self.attrs.iter().enumerate() {
- match *a {
- option::Some(attr) => {
- unsafe {
- llvm::LLVMAddInstrAttribute(llretval, (i + 1u) as c_uint, attr as c_uint);
- }
- }
- _ => ()
- }
- }
- if self.sret || !ret_def {
- return;
- }
- let n = arg_tys.len();
- // R** llretptr = &args->r;
- let llretptr = GEPi(bcx, llargbundle, [0u, n]);
- // R* llretloc = *llretptr; /* (args->r) */
- let llretloc = Load(bcx, llretptr);
- if self.ret_ty.cast {
- let tmp_ptr = BitCast(bcx, llretloc, self.ret_ty.ty.ptr_to());
- // *args->r = r;
- Store(bcx, llretval, tmp_ptr);
- } else {
- // *args->r = r;
- Store(bcx, llretval, llretloc);
- };
- }
-
- pub fn build_wrap_args(&self, bcx: @mut Block, ret_ty: Type,
- llwrapfn: ValueRef, llargbundle: ValueRef) {
- let mut atys: &[LLVMType] = self.arg_tys;
- let mut attrs: &[option::Option<Attribute>] = self.attrs;
- let mut j = 0u;
- let llretptr = if self.sret {
- atys = atys.tail();
- attrs = attrs.tail();
- j = 1u;
- get_param(llwrapfn, 0u)
- } else if self.ret_ty.cast {
- let retptr = alloca(bcx, self.ret_ty.ty, "");
- BitCast(bcx, retptr, ret_ty.ptr_to())
- } else {
- alloca(bcx, ret_ty, "")
- };
+ /// A list of attributes to be attached to each argument (parallel
+ /// to the `arg_tys` array). If the attribute for a given argument is
+ /// Some, then the argument should be passed by reference.
+ attrs: ~[option::Option<Attribute>],
- let mut i = 0u;
- let n = atys.len();
- while i < n {
- let mut argval = get_param(llwrapfn, i + j);
- if attrs[i].is_some() {
- argval = Load(bcx, argval);
- store_inbounds(bcx, argval, llargbundle, [0u, i]);
- } else if atys[i].cast {
- let argptr = GEPi(bcx, llargbundle, [0u, i]);
- let argptr = BitCast(bcx, argptr, atys[i].ty.ptr_to());
- Store(bcx, argval, argptr);
- } else {
- store_inbounds(bcx, argval, llargbundle, [0u, i]);
- }
- i += 1u;
- }
- store_inbounds(bcx, llretptr, llargbundle, [0u, n]);
- }
+ /// LLVM return type.
+ ret_ty: LLVMType,
- pub fn build_wrap_ret(&self, bcx: @mut Block, arg_tys: &[Type], llargbundle: ValueRef) {
- if self.ret_ty.ty.kind() == Void {
- return;
- }
+ /// If true, then an implicit pointer should be added for the result.
+ sret: bool
+}
- if bcx.fcx.llretptr.is_some() {
- let llretval = load_inbounds(bcx, llargbundle, [ 0, arg_tys.len() ]);
- let llretval = if self.ret_ty.cast {
- let retptr = BitCast(bcx, llretval, self.ret_ty.ty.ptr_to());
- Load(bcx, retptr)
- } else {
- Load(bcx, llretval)
- };
- let llretptr = BitCast(bcx, bcx.fcx.llretptr.unwrap(), self.ret_ty.ty.ptr_to());
- Store(bcx, llretval, llretptr);
- }
+pub fn compute_abi_info(ccx: &mut CrateContext,
+ atys: &[Type],
+ rty: Type,
+ ret_def: bool) -> FnType {
+ match ccx.sess.targ_cfg.arch {
+ X86 => cabi_x86::compute_abi_info(ccx, atys, rty, ret_def),
+ X86_64 => cabi_x86_64::compute_abi_info(ccx, atys, rty, ret_def),
+ Arm => cabi_arm::compute_abi_info(ccx, atys, rty, ret_def),
+ Mips => cabi_mips::compute_abi_info(ccx, atys, rty, ret_def),
}
}
use lib::llvm::{llvm, Integer, Pointer, Float, Double, Struct, Array};
use lib::llvm::{Attribute, StructRetAttribute};
-use middle::trans::cabi::{ABIInfo, FnType, LLVMType};
+use middle::trans::cabi::{FnType, LLVMType};
+use middle::trans::context::CrateContext;
use middle::trans::type_::Type;
}
}
-enum ARM_ABIInfo { ARM_ABIInfo }
-
-impl ABIInfo for ARM_ABIInfo {
- fn compute_info(&self,
- atys: &[Type],
- rty: Type,
- ret_def: bool) -> FnType {
- let mut arg_tys = ~[];
- let mut attrs = ~[];
- for &aty in atys.iter() {
- let (ty, attr) = classify_arg_ty(aty);
- arg_tys.push(ty);
- attrs.push(attr);
- }
-
- let (ret_ty, ret_attr) = if ret_def {
- classify_ret_ty(rty)
- } else {
- (LLVMType { cast: false, ty: Type::void() }, None)
- };
+pub fn compute_abi_info(_ccx: &mut CrateContext,
+ atys: &[Type],
+ rty: Type,
+ ret_def: bool) -> FnType {
+ let mut arg_tys = ~[];
+ let mut attrs = ~[];
+ for &aty in atys.iter() {
+ let (ty, attr) = classify_arg_ty(aty);
+ arg_tys.push(ty);
+ attrs.push(attr);
+ }
- let mut ret_ty = ret_ty;
+ let (ret_ty, ret_attr) = if ret_def {
+ classify_ret_ty(rty)
+ } else {
+ (LLVMType { cast: false, ty: Type::void() }, None)
+ };
- let sret = ret_attr.is_some();
- if sret {
- arg_tys.unshift(ret_ty);
- attrs.unshift(ret_attr);
- ret_ty = LLVMType { cast: false, ty: Type::void() };
- }
+ let mut ret_ty = ret_ty;
- return FnType {
- arg_tys: arg_tys,
- ret_ty: ret_ty,
- attrs: attrs,
- sret: sret
- };
+ let sret = ret_attr.is_some();
+ if sret {
+ arg_tys.unshift(ret_ty);
+ attrs.unshift(ret_attr);
+ ret_ty = LLVMType { cast: false, ty: Type::void() };
}
-}
-pub fn abi_info() -> @ABIInfo {
- return @ARM_ABIInfo as @ABIInfo;
+ return FnType {
+ arg_tys: arg_tys,
+ ret_ty: ret_ty,
+ attrs: attrs,
+ sret: sret
+ };
}
use std::vec;
use lib::llvm::{llvm, Integer, Pointer, Float, Double, Struct, Array};
use lib::llvm::{Attribute, StructRetAttribute};
+use middle::trans::context::CrateContext;
use middle::trans::context::task_llcx;
use middle::trans::cabi::*;
return Type::struct_(fields, false);
}
-enum MIPS_ABIInfo { MIPS_ABIInfo }
-
-impl ABIInfo for MIPS_ABIInfo {
- fn compute_info(&self,
- atys: &[Type],
- rty: Type,
- ret_def: bool) -> FnType {
- let (ret_ty, ret_attr) = if ret_def {
- classify_ret_ty(rty)
- } else {
- (LLVMType { cast: false, ty: Type::void() }, None)
- };
-
- let mut ret_ty = ret_ty;
-
- let sret = ret_attr.is_some();
- let mut arg_tys = ~[];
- let mut attrs = ~[];
- let mut offset = if sret { 4 } else { 0 };
-
- for aty in atys.iter() {
- let (ty, attr) = classify_arg_ty(*aty, &mut offset);
- arg_tys.push(ty);
- attrs.push(attr);
- };
-
- if sret {
- arg_tys = vec::append(~[ret_ty], arg_tys);
- attrs = vec::append(~[ret_attr], attrs);
- ret_ty = LLVMType { cast: false, ty: Type::void() };
- }
+pub fn compute_abi_info(_ccx: &mut CrateContext,
+ atys: &[Type],
+ rty: Type,
+ ret_def: bool) -> FnType {
+ let (ret_ty, ret_attr) = if ret_def {
+ classify_ret_ty(rty)
+ } else {
+ (LLVMType { cast: false, ty: Type::void() }, None)
+ };
+
+ let mut ret_ty = ret_ty;
+
+ let sret = ret_attr.is_some();
+ let mut arg_tys = ~[];
+ let mut attrs = ~[];
+ let mut offset = if sret { 4 } else { 0 };
- return FnType {
- arg_tys: arg_tys,
- ret_ty: ret_ty,
- attrs: attrs,
- sret: sret
- };
+ for aty in atys.iter() {
+ let (ty, attr) = classify_arg_ty(*aty, &mut offset);
+ arg_tys.push(ty);
+ attrs.push(attr);
+ };
+
+ if sret {
+ arg_tys = vec::append(~[ret_ty], arg_tys);
+ attrs = vec::append(~[ret_attr], attrs);
+ ret_ty = LLVMType { cast: false, ty: Type::void() };
}
-}
-pub fn abi_info() -> @ABIInfo {
- return @MIPS_ABIInfo as @ABIInfo;
+ return FnType {
+ arg_tys: arg_tys,
+ ret_ty: ret_ty,
+ attrs: attrs,
+ sret: sret
+ };
}
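The MIPS hunk above threads a running stack `offset` through `classify_arg_ty` (not shown in this diff), and `let mut offset = if sret { 4 } else { 0 }` reserves the first argument word for the hidden return pointer. A hedged sketch of how such an offset walk typically advances, assuming the usual O32-style align-then-bump rule (the helper names are illustrative, not from the compiler):

```rust
// Sketch of an O32-style argument-offset walk: each argument is
// aligned up to its natural alignment, then the offset advances by
// the argument's size. When sret is in play, the hidden return
// pointer occupies the first 4 bytes, so the walk starts at 4.
pub fn align_up(offset: u32, align: u32) -> u32 {
    (offset + align - 1) / align * align
}

/// Returns the aligned offset assigned to this argument and bumps
/// `offset` past it.
pub fn place_arg(offset: &mut u32, size: u32, align: u32) -> u32 {
    let placed = align_up(*offset, align);
    *offset = placed + size;
    placed
}

fn main() {
    // sret reserves the first word, so the walk starts at 4.
    let mut offset = 4;
    assert_eq!(place_arg(&mut offset, 4, 4), 4); // an i32 lands at 4
    assert_eq!(place_arg(&mut offset, 8, 8), 8); // an i64 aligns up to 8
    assert_eq!(offset, 16);
    println!("ok");
}
```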
use super::cabi::*;
use super::common::*;
use super::machine::*;
-
use middle::trans::type_::Type;
-struct X86_ABIInfo {
- ccx: @mut CrateContext
-}
+pub fn compute_abi_info(ccx: &mut CrateContext,
+ atys: &[Type],
+ rty: Type,
+ ret_def: bool) -> FnType {
+ let mut arg_tys = ~[];
+ let mut attrs = ~[];
-impl ABIInfo for X86_ABIInfo {
- fn compute_info(&self,
- atys: &[Type],
- rty: Type,
- ret_def: bool) -> FnType {
- let mut arg_tys = do atys.map |a| {
- LLVMType { cast: false, ty: *a }
- };
- let mut ret_ty = LLVMType {
+ let ret_ty;
+ let sret;
+ if !ret_def {
+ ret_ty = LLVMType {
cast: false,
- ty: rty
+ ty: Type::void(),
};
- let mut attrs = do atys.map |_| {
- None
- };
-
- // Rules for returning structs taken from
+ sret = false;
+ } else if rty.kind() == Struct {
+ // Returning a structure. Most often, this will use
+ // a hidden first argument. On some platforms, though,
+ // small structs are returned as integers.
+ //
+ // Some links:
// http://www.angelcode.com/dev/callconv/callconv.html
// Clang's ABI handling is in lib/CodeGen/TargetInfo.cpp
- let sret = {
- let returning_a_struct = rty.kind() == Struct && ret_def;
- let big_struct = match self.ccx.sess.targ_cfg.os {
- os_win32 | os_macos => llsize_of_alloc(self.ccx, rty) > 8,
- _ => true
- };
- returning_a_struct && big_struct
+
+ enum Strategy { RetValue(Type), RetPointer }
+ let strategy = match ccx.sess.targ_cfg.os {
+ os_win32 | os_macos => {
+ match llsize_of_alloc(ccx, rty) {
+ 1 => RetValue(Type::i8()),
+ 2 => RetValue(Type::i16()),
+ 4 => RetValue(Type::i32()),
+ 8 => RetValue(Type::i64()),
+ _ => RetPointer
+ }
+ }
+ _ => {
+ RetPointer
+ }
};
- if sret {
- let ret_ptr_ty = LLVMType {
- cast: false,
- ty: ret_ty.ty.ptr_to()
- };
- arg_tys = ~[ret_ptr_ty] + arg_tys;
- attrs = ~[Some(StructRetAttribute)] + attrs;
- ret_ty = LLVMType {
- cast: false,
- ty: Type::void(),
- };
- } else if !ret_def {
- ret_ty = LLVMType {
- cast: false,
- ty: Type::void()
- };
- }
+ match strategy {
+ RetValue(t) => {
+ ret_ty = LLVMType {
+ cast: true,
+ ty: t
+ };
+ sret = false;
+ }
+ RetPointer => {
+ arg_tys.push(LLVMType {
+ cast: false,
+ ty: rty.ptr_to()
+ });
+ attrs.push(Some(StructRetAttribute));
- return FnType {
- arg_tys: arg_tys,
- ret_ty: ret_ty,
- attrs: attrs,
- sret: sret
+ ret_ty = LLVMType {
+ cast: false,
+ ty: Type::void(),
+ };
+ sret = true;
+ }
+ }
+ } else {
+ ret_ty = LLVMType {
+ cast: false,
+ ty: rty
};
+ sret = false;
+ }
+
+ for &a in atys.iter() {
+ arg_tys.push(LLVMType { cast: false, ty: a });
+ attrs.push(None);
}
-}
-pub fn abi_info(ccx: @mut CrateContext) -> @ABIInfo {
- return @X86_ABIInfo {
- ccx: ccx
- } as @ABIInfo;
+ return FnType {
+ arg_tys: arg_tys,
+ ret_ty: ret_ty,
+ attrs: attrs,
+ sret: sret
+ };
}
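The `Strategy` match in the x86 hunk above encodes the rule from the linked ABI references: on win32 and macOS, structs of exactly 1, 2, 4, or 8 bytes come back in a register as an integer of that width, while anything else (and everything on other platforms) goes through a hidden `sret` pointer. A minimal standalone sketch of that classification (the `RetStrategy` names are illustrative):

```rust
// Models the x86 struct-return rule: on win32/macOS, structs of
// 1/2/4/8 bytes are returned as same-width integers; everything
// else uses a hidden first argument (LLVM `sret`). Other platforms
// always use the hidden pointer.
#[derive(Debug, PartialEq)]
pub enum RetStrategy {
    /// Return the struct cast to an integer of this many bits.
    Value(u32),
    /// Return through a hidden first argument (`sret`).
    Pointer,
}

pub fn classify_struct_return(os_is_win32_or_macos: bool, size: u64) -> RetStrategy {
    if !os_is_win32_or_macos {
        return RetStrategy::Pointer;
    }
    match size {
        1 => RetStrategy::Value(8),
        2 => RetStrategy::Value(16),
        4 => RetStrategy::Value(32),
        8 => RetStrategy::Value(64),
        _ => RetStrategy::Pointer,
    }
}

fn main() {
    // A 4-byte struct comes back as an i32 on win32/macOS...
    assert_eq!(classify_struct_return(true, 4), RetStrategy::Value(32));
    // ...but a 12-byte struct, or any struct elsewhere, uses sret.
    assert_eq!(classify_struct_return(true, 12), RetStrategy::Pointer);
    assert_eq!(classify_struct_return(false, 4), RetStrategy::Pointer);
    println!("ok");
}
```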
use lib::llvm::{Struct, Array, Attribute};
use lib::llvm::{StructRetAttribute, ByValAttribute};
use middle::trans::cabi::*;
+use middle::trans::context::CrateContext;
use middle::trans::type_::Type;
return Type::struct_(tys, false);
}
-fn x86_64_tys(atys: &[Type],
- rty: Type,
- ret_def: bool) -> FnType {
-
+pub fn compute_abi_info(_ccx: &mut CrateContext,
+ atys: &[Type],
+ rty: Type,
+ ret_def: bool) -> FnType {
fn x86_64_ty(ty: Type,
is_mem_cls: &fn(cls: &[RegClass]) -> bool,
attr: Attribute) -> (LLVMType, Option<Attribute>) {
sret: sret
};
}
-
-enum X86_64_ABIInfo { X86_64_ABIInfo }
-
-impl ABIInfo for X86_64_ABIInfo {
- fn compute_info(&self,
- atys: &[Type],
- rty: Type,
- ret_def: bool) -> FnType {
- return x86_64_tys(atys, rty, ret_def);
- }
-}
-
-pub fn abi_info() -> @ABIInfo {
- return @X86_64_ABIInfo as @ABIInfo;
-}
use middle::trans::meth;
use middle::trans::monomorphize;
use middle::trans::type_of;
+use middle::trans::foreign;
use middle::ty;
use middle::subst::Subst;
use middle::typeck;
use middle::trans::type_::Type;
use syntax::ast;
+use syntax::abi::AbiSet;
use syntax::ast_map;
use syntax::oldvisit;
type_params: &[ty::t], // values for fn's ty params
vtables: Option<typeck::vtable_res>) // vtables for the call
-> FnData {
- //!
- //
- // Translates a reference to a fn/method item, monomorphizing and
- // inlining as it goes.
- //
- // # Parameters
- //
- // - `bcx`: the current block where the reference to the fn occurs
- // - `def_id`: def id of the fn or method item being referenced
- // - `ref_id`: node id of the reference to the fn/method, if applicable.
- // This parameter may be zero; but, if so, the resulting value may not
- // have the right type, so it must be cast before being used.
- // - `type_params`: values for each of the fn/method's type parameters
- // - `vtables`: values for each bound on each of the type parameters
+ /*!
+ * Translates a reference to a fn/method item, monomorphizing and
+ * inlining as it goes.
+ *
+ * # Parameters
+ *
+ * - `bcx`: the current block where the reference to the fn occurs
+ * - `def_id`: def id of the fn or method item being referenced
+ * - `ref_id`: node id of the reference to the fn/method, if applicable.
+ * This parameter may be zero; but, if so, the resulting value may not
+ * have the right type, so it must be cast before being used.
+ * - `type_params`: values for each of the fn/method's type parameters
+ * - `vtables`: values for each bound on each of the type parameters
+ */
let _icx = push_ctxt("trans_fn_ref_with_vtables");
let ccx = bcx.ccx();
}
// Find the actual function pointer.
- let val = {
+ let mut val = {
if def_id.crate == ast::LOCAL_CRATE {
// Internal reference.
get_item_val(ccx, def_id.node)
}
};
+ // This is subtle and surprising, but sometimes we have to bitcast
+ // the resulting fn pointer. The reason has to do with external
+ // functions. If you have two crates that both bind the same C
+ // library, they may not use precisely the same types: for
+ // example, they will probably each declare their own structs,
+ // which are distinct types from LLVM's point of view (nominal
+ // types).
+ //
+ // Now, if those two crates are linked into an application, and
+ // they contain inlined code, you can wind up with a situation
+ // where both of those functions wind up being loaded into this
+ // application simultaneously. In that case, the same function
+ // (from LLVM's point of view) requires two types. But of course
+ // LLVM won't allow one function to have two types.
+ //
+ // What we currently do, therefore, is declare the function with
+ // one of the two types (whichever happens to come first) and then
+ // bitcast as needed when the function is referenced to make sure
+ // it has the type we expect.
+ //
+ // This can occur on either a crate-local or crate-external
+ // reference. It also occurs when testing libcore and in some
+ // other weird situations. Annoying.
+ let llty = type_of::type_of_fn_from_ty(ccx, fn_tpt.ty);
+ let llptrty = llty.ptr_to();
+ if val_ty(val) != llptrty {
+ val = BitCast(bcx, val, llptrty);
+ }
+
return FnData {llfn: val};
}
*cx
}
-// See [Note-arg-mode]
pub fn trans_call_inner(in_cx: @mut Block,
call_info: Option<NodeInfo>,
- fn_expr_ty: ty::t,
+ callee_ty: ty::t,
ret_ty: ty::t,
get_callee: &fn(@mut Block) -> Callee,
args: CallArgs,
dest: Option<expr::Dest>,
autoref_arg: AutorefArg)
-> Result {
+ /*!
+ * This behemoth of a function translates function calls.
+ * Unfortunately, in order to generate more efficient LLVM
+ * output at -O0, it has quite a complex signature (refactoring
+ * this into two functions seems like a good idea).
+ *
+     * In particular, for lang items, it is invoked with a dest of
+     * None, and lang items are always Rust fns.
+ */
+
do base::with_scope_result(in_cx, call_info, "call") |cx| {
let callee = get_callee(cx);
let mut bcx = callee.bcx;
}
};
- let llretslot = trans_ret_slot(bcx, fn_expr_ty, dest);
+ let abi = match ty::get(callee_ty).sty {
+ ty::ty_bare_fn(ref f) => f.abis,
+ _ => AbiSet::Rust()
+ };
+ let is_rust_fn =
+ abi.is_rust() ||
+ abi.is_intrinsic();
+
+ // Generate a location to store the result. If the user does
+ // not care about the result, just make a stack slot.
+ let opt_llretslot = match dest {
+ None => {
+ assert!(!type_of::return_uses_outptr(in_cx.tcx(), ret_ty));
+ None
+ }
+ Some(expr::SaveIn(dst)) => Some(dst),
+ Some(expr::Ignore) => {
+ if !ty::type_is_voidish(ret_ty) {
+ Some(alloc_ty(bcx, ret_ty, "__llret"))
+ } else {
+ unsafe {
+ Some(llvm::LLVMGetUndef(Type::nil().ptr_to().to_ref()))
+ }
+ }
+ }
+ };
- let mut llargs = ~[];
+ let mut llresult = unsafe {
+ llvm::LLVMGetUndef(Type::nil().ptr_to().to_ref())
+ };
- if !ty::type_is_immediate(bcx.tcx(), ret_ty) {
- llargs.push(llretslot);
- }
+ // The code below invokes the function, using either the Rust
+ // conventions (if it is a rust fn) or the native conventions
+ // (otherwise). The important part is that, when all is sad
+ // and done, either the return value of the function will have been
+ // written in opt_llretslot (if it is Some) or `llresult` will be
+ // set appropriately (otherwise).
+ if is_rust_fn {
+ let mut llargs = ~[];
+
+ // Push the out-pointer if we use an out-pointer for this
+ // return type, otherwise push "undef".
+ if type_of::return_uses_outptr(in_cx.tcx(), ret_ty) {
+ llargs.push(opt_llretslot.unwrap());
+ }
+
+ // Push the environment.
+ llargs.push(llenv);
- llargs.push(llenv);
- bcx = trans_args(bcx, args, fn_expr_ty, autoref_arg, &mut llargs);
+ // Push the arguments.
+ bcx = trans_args(bcx, args, callee_ty,
+ autoref_arg, &mut llargs);
- // Now that the arguments have finished evaluating, we need to revoke
- // the cleanup for the self argument
- match callee.data {
- Method(d) => {
- for &v in d.temp_cleanup.iter() {
- revoke_clean(bcx, v);
+ // Now that the arguments have finished evaluating, we
+ // need to revoke the cleanup for the self argument
+ match callee.data {
+ Method(d) => {
+ for &v in d.temp_cleanup.iter() {
+ revoke_clean(bcx, v);
+ }
}
+ _ => {}
}
- _ => {}
- }
- // Uncomment this to debug calls.
- /*
- printfln!("calling: %s", bcx.val_to_str(llfn));
- for llarg in llargs.iter() {
- printfln!("arg: %s", bcx.val_to_str(*llarg));
+ // Invoke the actual rust fn and update bcx/llresult.
+ let (llret, b) = base::invoke(bcx, llfn, llargs);
+ bcx = b;
+ llresult = llret;
+
+ // If the Rust convention for this type is return via
+ // the return value, copy it into llretslot.
+ match opt_llretslot {
+ Some(llretslot) => {
+ if !type_of::return_uses_outptr(bcx.tcx(), ret_ty) &&
+ !ty::type_is_voidish(ret_ty)
+ {
+ Store(bcx, llret, llretslot);
+ }
+ }
+ None => {}
+ }
+ } else {
+ // Lang items are the only case where dest is None, and
+ // they are always Rust fns.
+ assert!(dest.is_some());
+
+ let mut llargs = ~[];
+ bcx = trans_args(bcx, args, callee_ty,
+ autoref_arg, &mut llargs);
+ bcx = foreign::trans_native_call(bcx, callee_ty,
+ llfn, opt_llretslot.unwrap(), llargs);
}
- io::println("---");
- */
-
- // If the block is terminated, then one or more of the args
- // has type _|_. Since that means it diverges, the code for
- // the call itself is unreachable.
- let (llresult, new_bcx) = base::invoke(bcx, llfn, llargs);
- bcx = new_bcx;
+ // If the caller doesn't care about the result of this fn call,
+ // drop the temporary slot we made.
match dest {
- None => { assert!(ty::type_is_immediate(bcx.tcx(), ret_ty)) }
+ None => {
+ assert!(!type_of::return_uses_outptr(bcx.tcx(), ret_ty));
+ }
Some(expr::Ignore) => {
// drop the value if it is not being saved.
- if ty::type_needs_drop(bcx.tcx(), ret_ty) {
- if ty::type_is_immediate(bcx.tcx(), ret_ty) {
- let llscratchptr = alloc_ty(bcx, ret_ty, "__ret");
- Store(bcx, llresult, llscratchptr);
- bcx = glue::drop_ty(bcx, llscratchptr, ret_ty);
- } else {
- bcx = glue::drop_ty(bcx, llretslot, ret_ty);
- }
- }
- }
- Some(expr::SaveIn(lldest)) => {
- // If this is an immediate, store into the result location.
- // (If this was not an immediate, the result will already be
- // directly written into the output slot.)
- if ty::type_is_immediate(bcx.tcx(), ret_ty) {
- Store(bcx, llresult, lldest);
- }
+ bcx = glue::drop_ty(bcx, opt_llretslot.unwrap(), ret_ty);
}
+ Some(expr::SaveIn(_)) => { }
}
if ty::type_is_bot(ret_ty) {
Unreachable(bcx);
}
+
rslt(bcx, llresult)
}
}
-
pub enum CallArgs<'self> {
ArgExprs(&'self [@ast::expr]),
ArgVals(&'self [ValueRef])
}
-pub fn trans_ret_slot(bcx: @mut Block, fn_ty: ty::t, dest: Option<expr::Dest>)
- -> ValueRef {
- let retty = ty::ty_fn_ret(fn_ty);
-
- match dest {
- Some(expr::SaveIn(dst)) => dst,
- _ => {
- if ty::type_is_immediate(bcx.tcx(), retty) {
- unsafe {
- llvm::LLVMGetUndef(Type::nil().ptr_to().to_ref())
- }
- } else {
- alloc_ty(bcx, retty, "__trans_ret_slot")
- }
- }
- }
-}
-
pub fn trans_args(cx: @mut Block,
args: CallArgs,
fn_ty: ty::t,
if formal_arg_ty != arg_datum.ty {
// this could happen due to e.g. subtyping
- let llformal_arg_ty = type_of::type_of_explicit_arg(ccx, &formal_arg_ty);
+ let llformal_arg_ty = type_of::type_of_explicit_arg(ccx, formal_arg_ty);
debug!("casting actual type (%s) to match formal (%s)",
bcx.val_to_str(val), bcx.llty_str(llformal_arg_ty));
val = PointerCast(bcx, val, llformal_arg_ty);
}
}
-pub type ExternMap = HashMap<@str, ValueRef>;
+pub type ExternMap = HashMap<~str, ValueRef>;
// Types used for llself.
pub struct ValSelfData {
// outputting the resume instruction.
personality: Option<ValueRef>,
- // True if this function has an immediate return value, false otherwise.
- // If this is false, the llretptr will alias the first argument of the
- // function.
- has_immediate_return_value: bool,
+ // True if the caller expects this fn to use the out pointer to
+ // return. Either way, your code should write into llretptr, but if
+ // this value is false, llretptr will be a local alloca.
+ caller_expects_out_pointer: bool,
// Maps arguments to allocas created for them in llallocas.
llargs: @mut HashMap<ast::NodeId, ValueRef>,
impl FunctionContext {
pub fn arg_pos(&self, arg: uint) -> uint {
- if self.has_immediate_return_value {
- arg + 1u
- } else {
+ if self.caller_expects_out_pointer {
arg + 2u
+ } else {
+ arg + 1u
}
}
pub fn out_arg_pos(&self) -> uint {
- assert!(!self.has_immediate_return_value);
+ assert!(self.caller_expects_out_pointer);
0u
}
pub fn env_arg_pos(&self) -> uint {
- if !self.has_immediate_return_value {
+ if self.caller_expects_out_pointer {
1u
} else {
0u
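The renamed `caller_expects_out_pointer` flag fixes the argument layout these methods compute: when the caller passes an out pointer it occupies slot 0 and the environment slot 1, with ordinary arguments starting at 2; otherwise the environment sits at slot 0 and arguments start at 1. The same arithmetic, extracted as free functions for illustration:

```rust
// Mirrors FunctionContext::env_arg_pos / arg_pos in the hunk above:
// slot 0 is the out pointer (if any), then the environment, then the
// ordinary arguments.
pub fn env_arg_pos(caller_expects_out_pointer: bool) -> usize {
    if caller_expects_out_pointer { 1 } else { 0 }
}

pub fn arg_pos(caller_expects_out_pointer: bool, arg: usize) -> usize {
    if caller_expects_out_pointer { arg + 2 } else { arg + 1 }
}

fn main() {
    // With an out pointer: [retptr, env, arg0, arg1, ...]
    assert_eq!(env_arg_pos(true), 1);
    assert_eq!(arg_pos(true, 0), 2);
    // Without one: [env, arg0, arg1, ...]
    assert_eq!(env_arg_pos(false), 0);
    assert_eq!(arg_pos(false, 0), 1);
    println!("ok");
}
```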
pub fn appropriate_mode(tcx: ty::ctxt, ty: ty::t) -> DatumMode {
/*!
- *
- * Indicates the "appropriate" mode for this value,
- * which is either by ref or by value, depending
- * on whether type is immediate or not. */
+ * Indicates the "appropriate" mode for this value,
+ * which is either by ref or by value, depending
+ * on whether type is immediate or not.
+ */
- if ty::type_is_nil(ty) || ty::type_is_bot(ty) {
+ if ty::type_is_voidish(ty) {
ByValue
} else if ty::type_is_immediate(tcx, ty) {
ByValue
let _icx = push_ctxt("copy_to");
- if ty::type_is_nil(self.ty) || ty::type_is_bot(self.ty) {
+ if ty::type_is_voidish(self.ty) {
return bcx;
}
debug!("move_to(self=%s, action=%?, dst=%s)",
self.to_str(bcx.ccx()), action, bcx.val_to_str(dst));
- if ty::type_is_nil(self.ty) || ty::type_is_bot(self.ty) {
+ if ty::type_is_voidish(self.ty) {
return bcx;
}
*
* Yields the value itself. */
- if ty::type_is_nil(self.ty) || ty::type_is_bot(self.ty) {
+ if ty::type_is_voidish(self.ty) {
C_nil()
} else {
match self.mode {
match self.mode {
ByRef(_) => self.val,
ByValue => {
- if ty::type_is_nil(self.ty) || ty::type_is_bot(self.ty) {
+ if ty::type_is_voidish(self.ty) {
C_null(type_of::type_of(bcx.ccx(), self.ty).ptr_to())
} else {
let slot = alloc_ty(bcx, self.ty, "");
assert_eq!(datum.appropriate_mode(tcx), ByValue);
Store(bcx, datum.to_appropriate_llval(bcx), llfn);
let llenv = GEPi(bcx, scratch.val, [0u, abi::fn_field_box]);
- Store(bcx, base::null_env_ptr(bcx), llenv);
+ Store(bcx, base::null_env_ptr(bcx.ccx()), llenv);
DatumBlock {bcx: bcx, datum: scratch}
}
debuginfo::update_source_pos(bcx.fcx, expr.id, expr.span);
let dest = {
- if ty::type_is_nil(ty) || ty::type_is_bot(ty) {
+ if ty::type_is_voidish(ty) {
Ignore
} else {
dest
ty::RvalueDpsExpr => {
let ty = expr_ty(bcx, expr);
- if ty::type_is_nil(ty) || ty::type_is_bot(ty) {
+ if ty::type_is_voidish(ty) {
bcx = trans_rvalue_dps_unadjusted(bcx, expr, Ignore);
return nil(bcx, ty);
} else {
// except according to those terms.
-use back::{link, abi};
-use lib::llvm::{Pointer, ValueRef};
+use back::{link};
+use std::libc::c_uint;
+use lib::llvm::{ValueRef, Attribute, CallConv};
+use lib::llvm::llvm;
use lib;
-use middle::trans::base::*;
+use middle::trans::machine;
+use middle::trans::base;
+use middle::trans::base::push_ctxt;
use middle::trans::cabi;
-use middle::trans::cabi_x86;
-use middle::trans::cabi_x86_64;
-use middle::trans::cabi_arm;
-use middle::trans::cabi_mips;
use middle::trans::build::*;
-use middle::trans::callee::*;
+use middle::trans::builder::noname;
use middle::trans::common::*;
-use middle::trans::datum::*;
-use middle::trans::expr::Ignore;
-use middle::trans::machine::llsize_of;
-use middle::trans::glue;
-use middle::trans::machine;
use middle::trans::type_of::*;
use middle::trans::type_of;
use middle::ty;
use middle::ty::FnSig;
-use util::ppaux::ty_to_str;
-use std::cell::Cell;
+use std::uint;
use std::vec;
use syntax::codemap::span;
-use syntax::{ast, ast_util};
+use syntax::{ast};
use syntax::{attr, ast_map};
-use syntax::opt_vec;
use syntax::parse::token::special_idents;
-use syntax::parse::token;
-use syntax::abi::{X86, X86_64, Arm, Mips};
use syntax::abi::{RustIntrinsic, Rust, Stdcall, Fastcall,
- Cdecl, Aapcs, C};
+ Cdecl, Aapcs, C, AbiSet};
+use util::ppaux::{Repr, UserString};
use middle::trans::type_::Type;
-fn abi_info(ccx: @mut CrateContext) -> @cabi::ABIInfo {
- return match ccx.sess.targ_cfg.arch {
- X86 => cabi_x86::abi_info(ccx),
- X86_64 => cabi_x86_64::abi_info(),
- Arm => cabi_arm::abi_info(),
- Mips => cabi_mips::abi_info(),
- }
-}
-
-pub fn link_name(ccx: &CrateContext, i: &ast::foreign_item) -> @str {
- match attr::first_attr_value_str_by_name(i.attrs, "link_name") {
- None => ccx.sess.str_of(i.ident),
- Some(ln) => ln,
- }
-}
+///////////////////////////////////////////////////////////////////////////
+// Type definitions
-struct ShimTypes {
+struct ForeignTypes {
+ /// Rust signature of the function
fn_sig: ty::FnSig,
+ /// Adapter object for handling native ABI rules (trust me, you
+ /// don't want to know)
+ fn_ty: cabi::FnType,
+
/// LLVM types that will appear on the foreign function
llsig: LlvmSignature,
/// True if there is a return value (not bottom, not unit)
ret_def: bool,
-
- /// Type of the struct we will use to shuttle values back and forth.
- /// This is always derived from the llsig.
- bundle_ty: Type,
-
- /// Type of the shim function itself.
- shim_fn_ty: Type,
-
- /// Adapter object for handling native ABI rules (trust me, you
- /// don't want to know).
- fn_ty: cabi::FnType
}
struct LlvmSignature {
+ // LLVM versions of the types of this function's arguments.
llarg_tys: ~[Type],
- llret_ty: Type,
- sret: bool,
-}
-fn foreign_signature(ccx: &mut CrateContext, fn_sig: &ty::FnSig)
- -> LlvmSignature {
- /*!
- * The ForeignSignature is the LLVM types of the arguments/return type
- * of a function. Note that these LLVM types are not quite the same
- * as the LLVM types would be for a native Rust function because foreign
- * functions just plain ignore modes. They also don't pass aggregate
- * values by pointer like we do.
- */
+ // LLVM version of the type that this function returns. Note that
+ // this *may not be* the declared return type of the foreign
+ // function, because the foreign function may opt to return via an
+ // out pointer.
+ llret_ty: Type,
- let llarg_tys = fn_sig.inputs.map(|arg_ty| type_of(ccx, *arg_ty));
- let llret_ty = type_of::type_of(ccx, fn_sig.output);
- LlvmSignature {
- llarg_tys: llarg_tys,
- llret_ty: llret_ty,
- sret: !ty::type_is_immediate(ccx.tcx, fn_sig.output),
- }
+ // True if *Rust* would use an outpointer for this function.
+ sret: bool,
}
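The comment on `llret_ty` notes that the LLVM return type may differ from the declared foreign return type, because the function may return via an out pointer. A sketch of that signature rewrite, rendered as strings rather than LLVM types (the function and names are illustrative, not compiler API):

```rust
// Sketch of the sret rewrite described above: a function that
// returns a struct by value at the source level may, at the LLVM
// level, take a hidden pointer as its first argument and return
// void instead.
pub fn llvm_signature(arg_tys: &[&str], ret_ty: &str, sret: bool) -> String {
    if sret {
        // The hidden out pointer becomes the first argument; the
        // LLVM return type collapses to void.
        let mut args = vec![format!("{}*", ret_ty)];
        args.extend(arg_tys.iter().map(|t| t.to_string()));
        format!("void ({})", args.join(", "))
    } else {
        format!("{} ({})", ret_ty, arg_tys.join(", "))
    }
}

fn main() {
    // Direct return: the declared type is also the LLVM return type.
    assert_eq!(llvm_signature(&["i32"], "i64", false), "i64 (i32)");
    // sret return: the declared type moves into the argument list
    // as a pointer and the function returns void.
    assert_eq!(llvm_signature(&["i32"], "%S", true), "void (%S*, i32)");
    println!("ok");
}
```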
-fn shim_types(ccx: @mut CrateContext, id: ast::NodeId) -> ShimTypes {
- let fn_sig = match ty::get(ty::node_id_to_type(ccx.tcx, id)).sty {
- ty::ty_bare_fn(ref fn_ty) => fn_ty.sig.clone(),
- _ => ccx.sess.bug("c_arg_and_ret_lltys called on non-function type")
- };
- let llsig = foreign_signature(ccx, &fn_sig);
- let bundle_ty = Type::struct_(llsig.llarg_tys + &[llsig.llret_ty.ptr_to()], false);
- let ret_def = !ty::type_is_bot(fn_sig.output) &&
- !ty::type_is_nil(fn_sig.output);
- let fn_ty = abi_info(ccx).compute_info(llsig.llarg_tys, llsig.llret_ty, ret_def);
- ShimTypes {
- fn_sig: fn_sig,
- llsig: llsig,
- ret_def: ret_def,
- bundle_ty: bundle_ty,
- shim_fn_ty: Type::func([bundle_ty.ptr_to()], &Type::void()),
- fn_ty: fn_ty
- }
-}
-type shim_arg_builder<'self> =
- &'self fn(bcx: @mut Block, tys: &ShimTypes,
- llargbundle: ValueRef) -> ~[ValueRef];
-
-type shim_ret_builder<'self> =
- &'self fn(bcx: @mut Block, tys: &ShimTypes,
- llargbundle: ValueRef,
- llretval: ValueRef);
-
-fn build_shim_fn_(ccx: @mut CrateContext,
- shim_name: &str,
- llbasefn: ValueRef,
- tys: &ShimTypes,
- cc: lib::llvm::CallConv,
- arg_builder: shim_arg_builder,
- ret_builder: shim_ret_builder)
- -> ValueRef {
- let llshimfn = decl_internal_cdecl_fn(
- ccx.llmod, shim_name, tys.shim_fn_ty);
-
- // Declare the body of the shim function:
- let fcx = new_fn_ctxt(ccx, ~[], llshimfn, tys.fn_sig.output, None);
- let bcx = fcx.entry_bcx.unwrap();
-
- let llargbundle = get_param(llshimfn, 0u);
- let llargvals = arg_builder(bcx, tys, llargbundle);
-
- // Create the call itself and store the return value:
- let llretval = CallWithConv(bcx, llbasefn, llargvals, cc);
-
- ret_builder(bcx, tys, llargbundle, llretval);
-
- // Don't finish up the function in the usual way, because this doesn't
- // follow the normal Rust calling conventions.
- let ret_cx = match fcx.llreturn {
- Some(llreturn) => raw_block(fcx, false, llreturn),
- None => bcx
- };
- RetVoid(ret_cx);
- fcx.cleanup();
+///////////////////////////////////////////////////////////////////////////
+// Calls to external functions
- return llshimfn;
-}
+fn llvm_calling_convention(ccx: @mut CrateContext,
+ abis: AbiSet)
+ -> Option<CallConv> {
+ let arch = ccx.sess.targ_cfg.arch;
+ abis.for_arch(arch).map(|abi| {
+ match *abi {
+ RustIntrinsic => {
+ // Intrinsics are emitted by monomorphic fn
+ ccx.sess.bug(fmt!("Asked to register intrinsic fn"));
+ }
-type wrap_arg_builder<'self> = &'self fn(bcx: @mut Block,
- tys: &ShimTypes,
- llwrapfn: ValueRef,
- llargbundle: ValueRef);
-
-type wrap_ret_builder<'self> = &'self fn(bcx: @mut Block,
- tys: &ShimTypes,
- llargbundle: ValueRef);
-
-fn build_wrap_fn_(ccx: @mut CrateContext,
- tys: &ShimTypes,
- llshimfn: ValueRef,
- llwrapfn: ValueRef,
- shim_upcall: ValueRef,
- needs_c_return: bool,
- arg_builder: wrap_arg_builder,
- ret_builder: wrap_ret_builder) {
- let _icx = push_ctxt("foreign::build_wrap_fn_");
- let fcx = new_fn_ctxt(ccx, ~[], llwrapfn, tys.fn_sig.output, None);
- let bcx = fcx.entry_bcx.unwrap();
-
- // Patch up the return type if it's not immediate and we're returning via
- // the C ABI.
- if needs_c_return && !ty::type_is_immediate(ccx.tcx, tys.fn_sig.output) {
- let lloutputtype = type_of::type_of(fcx.ccx, tys.fn_sig.output);
- fcx.llretptr = Some(alloca(bcx, lloutputtype, ""));
- }
+ Rust => {
+ // FIXME(#3678) Implement linking to foreign fns with Rust ABI
+ ccx.sess.unimpl(
+ fmt!("Foreign functions with Rust ABI"));
+ }
- // Allocate the struct and write the arguments into it.
- let llargbundle = alloca(bcx, tys.bundle_ty, "__llargbundle");
- arg_builder(bcx, tys, llwrapfn, llargbundle);
+ Stdcall => lib::llvm::X86StdcallCallConv,
+ Fastcall => lib::llvm::X86FastcallCallConv,
+ C => lib::llvm::CCallConv,
- // Create call itself.
- let llshimfnptr = PointerCast(bcx, llshimfn, Type::i8p());
- let llrawargbundle = PointerCast(bcx, llargbundle, Type::i8p());
- Call(bcx, shim_upcall, [llrawargbundle, llshimfnptr]);
- ret_builder(bcx, tys, llargbundle);
+ // NOTE These API constants ought to be more specific
+ Cdecl => lib::llvm::CCallConv,
+ Aapcs => lib::llvm::CCallConv,
+ }
+ })
+}
- // Then return according to the C ABI.
- let return_context = match fcx.llreturn {
- Some(llreturn) => raw_block(fcx, false, llreturn),
- None => bcx
- };
- let llfunctiontype = val_ty(llwrapfn);
- let llfunctiontype = llfunctiontype.element_type();
- let return_type = llfunctiontype.return_type();
- if return_type.kind() == ::lib::llvm::Void {
- // XXX: This might be wrong if there are any functions for which
- // the C ABI specifies a void output pointer and the Rust ABI
- // does not.
- RetVoid(return_context);
- } else {
- // Cast if we have to...
- // XXX: This is ugly.
- let llretptr = BitCast(return_context, fcx.llretptr.unwrap(), return_type.ptr_to());
- Ret(return_context, Load(return_context, llretptr));
- }
- fcx.cleanup();
-}
+pub fn register_foreign_item_fn(ccx: @mut CrateContext,
+ abis: AbiSet,
+ path: &ast_map::path,
+ foreign_item: @ast::foreign_item) -> ValueRef {
+ /*!
+ * Registers a foreign function found in a library.
+ * Just adds a LLVM global.
+ */
-// For each foreign function F, we generate a wrapper function W and a shim
-// function S that all work together. The wrapper function W is the function
-// that other rust code actually invokes. Its job is to marshall the
-// arguments into a struct. It then uses a small bit of assembly to switch
-// over to the C stack and invoke the shim function. The shim function S then
-// unpacks the arguments from the struct and invokes the actual function F
-// according to its specified calling convention.
-//
-// Example: Given a foreign c-stack function F(x: X, y: Y) -> Z,
-// we generate a wrapper function W that looks like:
-//
-// void W(Z* dest, void *env, X x, Y y) {
-// struct { X x; Y y; Z *z; } args = { x, y, z };
-// call_on_c_stack_shim(S, &args);
-// }
-//
-// The shim function S then looks something like:
-//
-// void S(struct { X x; Y y; Z *z; } *args) {
-// *args->z = F(args->x, args->y);
-// }
-//
-// However, if the return type of F is dynamically sized or of aggregate type,
-// the shim function looks like:
-//
-// void S(struct { X x; Y y; Z *z; } *args) {
-// F(args->z, args->x, args->y);
-// }
-//
-// Note: on i386, the layout of the args struct is generally the same
-// as the desired layout of the arguments on the C stack. Therefore,
-// we could use upcall_alloc_c_stack() to allocate the `args`
-// structure and switch the stack pointer appropriately to avoid a
-// round of copies. (In fact, the shim function itself is
-// unnecessary). We used to do this, in fact, and will perhaps do so
-// in the future.
-pub fn trans_foreign_mod(ccx: @mut CrateContext,
- path: &ast_map::path,
- foreign_mod: &ast::foreign_mod) {
- let _icx = push_ctxt("foreign::trans_foreign_mod");
+ debug!("register_foreign_item_fn(abis=%s, \
+ path=%s, \
+ foreign_item.id=%?)",
+ abis.repr(ccx.tcx),
+ path.repr(ccx.tcx),
+ foreign_item.id);
- let arch = ccx.sess.targ_cfg.arch;
- let abi = match foreign_mod.abis.for_arch(arch) {
+ let cc = match llvm_calling_convention(ccx, abis) {
+ Some(cc) => cc,
None => {
+ // FIXME(#8357) We really ought to report a span here
ccx.sess.fatal(
- fmt!("No suitable ABI for target architecture \
+                fmt!("ABI `%s` has no suitable calling convention \
+                      for target architecture \
in module %s",
+ abis.user_string(ccx.tcx),
ast_map::path_to_str(*path,
ccx.sess.intr())));
}
-
- Some(abi) => abi,
};
- for &foreign_item in foreign_mod.items.iter() {
- match foreign_item.node {
- ast::foreign_item_fn(*) => {
- let id = foreign_item.id;
- match abi {
- RustIntrinsic => {
- // Intrinsics are emitted by monomorphic fn
- }
-
- Rust => {
- // FIXME(#3678) Implement linking to foreign fns with Rust ABI
- ccx.sess.unimpl(
- fmt!("Foreign functions with Rust ABI"));
- }
-
- Stdcall => {
- build_foreign_fn(ccx, id, foreign_item,
- lib::llvm::X86StdcallCallConv);
- }
-
- Fastcall => {
- build_foreign_fn(ccx, id, foreign_item,
- lib::llvm::X86FastcallCallConv);
- }
-
- Cdecl => {
- // FIXME(#3678) should really be more specific
- build_foreign_fn(ccx, id, foreign_item,
- lib::llvm::CCallConv);
- }
-
- Aapcs => {
- // FIXME(#3678) should really be more specific
- build_foreign_fn(ccx, id, foreign_item,
- lib::llvm::CCallConv);
- }
-
- C => {
- build_foreign_fn(ccx, id, foreign_item,
- lib::llvm::CCallConv);
- }
- }
- }
- ast::foreign_item_static(*) => {
- let ident = token::ident_to_str(&foreign_item.ident);
- ccx.item_symbols.insert(foreign_item.id, /* bad */ident.to_owned());
- }
- }
- }
+ // Register the function as a C extern fn
+ let lname = link_name(ccx, foreign_item);
+ let tys = foreign_types_for_id(ccx, foreign_item.id);
- fn build_foreign_fn(ccx: @mut CrateContext,
- id: ast::NodeId,
- foreign_item: @ast::foreign_item,
- cc: lib::llvm::CallConv) {
- let llwrapfn = get_item_val(ccx, id);
- let tys = shim_types(ccx, id);
- if attr::contains_name(foreign_item.attrs, "rust_stack") {
- build_direct_fn(ccx, llwrapfn, foreign_item,
- &tys, cc);
- } else if attr::contains_name(foreign_item.attrs, "fast_ffi") {
- build_fast_ffi_fn(ccx, llwrapfn, foreign_item, &tys, cc);
- } else {
- let llshimfn = build_shim_fn(ccx, foreign_item, &tys, cc);
- build_wrap_fn(ccx, &tys, llshimfn, llwrapfn);
- }
- }
+ // Create the LLVM value for the C extern fn
+ let llfn_ty = lltype_for_fn_from_foreign_types(&tys);
+ let llfn = base::get_extern_fn(&mut ccx.externs, ccx.llmod,
+ lname, cc, llfn_ty);
+ add_argument_attributes(&tys, llfn);
- fn build_shim_fn(ccx: @mut CrateContext,
- foreign_item: &ast::foreign_item,
- tys: &ShimTypes,
- cc: lib::llvm::CallConv)
- -> ValueRef {
- /*!
- *
- * Build S, from comment above:
- *
- * void S(struct { X x; Y y; Z *z; } *args) {
- * F(args->z, args->x, args->y);
- * }
- */
-
- let _icx = push_ctxt("foreign::build_shim_fn");
-
- fn build_args(bcx: @mut Block, tys: &ShimTypes, llargbundle: ValueRef)
- -> ~[ValueRef] {
- let _icx = push_ctxt("foreign::shim::build_args");
- tys.fn_ty.build_shim_args(bcx, tys.llsig.llarg_tys, llargbundle)
- }
+ return llfn;
+}
- fn build_ret(bcx: @mut Block,
- tys: &ShimTypes,
- llargbundle: ValueRef,
- llretval: ValueRef) {
- let _icx = push_ctxt("foreign::shim::build_ret");
- tys.fn_ty.build_shim_ret(bcx,
- tys.llsig.llarg_tys,
- tys.ret_def,
- llargbundle,
- llretval);
- }
+pub fn trans_native_call(bcx: @mut Block,
+ callee_ty: ty::t,
+ llfn: ValueRef,
+ llretptr: ValueRef,
+ llargs_rust: &[ValueRef]) -> @mut Block {
+ /*!
+ * Prepares a call to a native function. This requires adapting
+ * from the Rust argument passing rules to the native rules.
+ *
+ * # Parameters
+ *
+ * - `callee_ty`: Rust type for the function we are calling
+ * - `llfn`: the function pointer we are calling
+ * - `llretptr`: where to store the return value of the function
+ * - `llargs_rust`: a list of the argument values, prepared
+ * as they would be if calling a Rust function
+ */
- let lname = link_name(ccx, foreign_item);
- let llbasefn = base_fn(ccx, lname, tys, cc);
- // Name the shim function
- let shim_name = fmt!("%s__c_stack_shim", lname);
- build_shim_fn_(ccx,
- shim_name,
- llbasefn,
- tys,
- cc,
- build_args,
- build_ret)
- }
+ let ccx = bcx.ccx();
+ let tcx = bcx.tcx();
- fn base_fn(ccx: &CrateContext,
- lname: &str,
- tys: &ShimTypes,
- cc: lib::llvm::CallConv)
- -> ValueRef {
- // Declare the "prototype" for the base function F:
- do tys.fn_ty.decl_fn |fnty| {
- decl_fn(ccx.llmod, lname, cc, fnty)
- }
- }
+ debug!("trans_native_call(callee_ty=%s, \
+ llfn=%s, \
+ llretptr=%s)",
+ callee_ty.repr(tcx),
+ ccx.tn.val_to_str(llfn),
+ ccx.tn.val_to_str(llretptr));
- // FIXME (#2535): this is very shaky and probably gets ABIs wrong all
- // over the place
- fn build_direct_fn(ccx: @mut CrateContext,
- decl: ValueRef,
- item: &ast::foreign_item,
- tys: &ShimTypes,
- cc: lib::llvm::CallConv) {
- debug!("build_direct_fn(%s)", link_name(ccx, item));
-
- let fcx = new_fn_ctxt(ccx, ~[], decl, tys.fn_sig.output, None);
- let bcx = fcx.entry_bcx.unwrap();
- let llbasefn = base_fn(ccx, link_name(ccx, item), tys, cc);
- let ty = ty::lookup_item_type(ccx.tcx,
- ast_util::local_def(item.id)).ty;
- let ret_ty = ty::ty_fn_ret(ty);
- let args = vec::from_fn(ty::ty_fn_args(ty).len(), |i| {
- get_param(decl, fcx.arg_pos(i))
- });
- let retval = Call(bcx, llbasefn, args);
- if !ty::type_is_nil(ret_ty) && !ty::type_is_bot(ret_ty) {
- Store(bcx, retval, fcx.llretptr.unwrap());
+ let (fn_abis, fn_sig) = match ty::get(callee_ty).sty {
+ ty::ty_bare_fn(ref fn_ty) => (fn_ty.abis, fn_ty.sig.clone()),
+ _ => ccx.sess.bug("trans_native_call called on non-function type")
+ };
+ let llsig = foreign_signature(ccx, &fn_sig);
+ let ret_def = !ty::type_is_voidish(fn_sig.output);
+ let fn_type = cabi::compute_abi_info(ccx,
+ llsig.llarg_tys,
+ llsig.llret_ty,
+ ret_def);
+
+ let all_arg_tys: &[cabi::LLVMType] = fn_type.arg_tys;
+ let all_attributes: &[Option<Attribute>] = fn_type.attrs;
+
+ let mut llargs_foreign = ~[];
+
+ // If the foreign ABI expects the return value by pointer, supply the
+ // pointer that Rust gave us. Sometimes we have to bitcast
+ // because foreign fns return slightly different (but equivalent)
+ // views on the same type (e.g., i64 in place of {i32,i32}).
+ let (arg_tys, attributes) = {
+ if fn_type.sret {
+ if all_arg_tys[0].cast {
+ let llcastedretptr =
+ BitCast(bcx, llretptr, all_arg_tys[0].ty.ptr_to());
+ llargs_foreign.push(llcastedretptr);
+ } else {
+ llargs_foreign.push(llretptr);
+ }
+ (all_arg_tys.tail(), all_attributes.tail())
+ } else {
+ (all_arg_tys, all_attributes)
}
- finish_fn(fcx, bcx);
- }
+ };
- // FIXME (#2535): this is very shaky and probably gets ABIs wrong all
- // over the place
- fn build_fast_ffi_fn(ccx: @mut CrateContext,
- decl: ValueRef,
- item: &ast::foreign_item,
- tys: &ShimTypes,
- cc: lib::llvm::CallConv) {
- debug!("build_fast_ffi_fn(%s)", link_name(ccx, item));
-
- let fcx = new_fn_ctxt(ccx, ~[], decl, tys.fn_sig.output, None);
- let bcx = fcx.entry_bcx.unwrap();
- let llbasefn = base_fn(ccx, link_name(ccx, item), tys, cc);
- set_no_inline(fcx.llfn);
- set_fixed_stack_segment(fcx.llfn);
- let ty = ty::lookup_item_type(ccx.tcx,
- ast_util::local_def(item.id)).ty;
- let ret_ty = ty::ty_fn_ret(ty);
- let args = vec::from_fn(ty::ty_fn_args(ty).len(), |i| {
- get_param(decl, fcx.arg_pos(i))
- });
- let retval = Call(bcx, llbasefn, args);
- if !ty::type_is_nil(ret_ty) && !ty::type_is_bot(ret_ty) {
- Store(bcx, retval, fcx.llretptr.unwrap());
- }
- finish_fn(fcx, bcx);
- }
+ for (i, &llarg_rust) in llargs_rust.iter().enumerate() {
+ let mut llarg_rust = llarg_rust;
- fn build_wrap_fn(ccx: @mut CrateContext,
- tys: &ShimTypes,
- llshimfn: ValueRef,
- llwrapfn: ValueRef) {
- /*!
- *
- * Build W, from comment above:
- *
- * void W(Z* dest, void *env, X x, Y y) {
- * struct { X x; Y y; Z *z; } args = { x, y, z };
- * call_on_c_stack_shim(S, &args);
- * }
- *
- * One thing we have to be very careful of is to
- * account for the Rust modes.
- */
-
- let _icx = push_ctxt("foreign::build_wrap_fn");
-
- build_wrap_fn_(ccx,
- tys,
- llshimfn,
- llwrapfn,
- ccx.upcalls.call_shim_on_c_stack,
- false,
- build_args,
- build_ret);
-
- fn build_args(bcx: @mut Block,
- tys: &ShimTypes,
- llwrapfn: ValueRef,
- llargbundle: ValueRef) {
- let _icx = push_ctxt("foreign::wrap::build_args");
- let ccx = bcx.ccx();
- let n = tys.llsig.llarg_tys.len();
- for i in range(0u, n) {
- let arg_i = bcx.fcx.arg_pos(i);
- let mut llargval = get_param(llwrapfn, arg_i);
-
- // In some cases, Rust will pass a pointer which the
- // native C type doesn't have. In that case, just
- // load the value from the pointer.
- if type_of::arg_is_indirect(ccx, &tys.fn_sig.inputs[i]) {
- llargval = Load(bcx, llargval);
- }
+ // Does Rust pass this argument by pointer?
+ let rust_indirect = type_of::arg_is_indirect(ccx, fn_sig.inputs[i]);
- store_inbounds(bcx, llargval, llargbundle, [0u, i]);
- }
+ debug!("argument %u, llarg_rust=%s, rust_indirect=%b, arg_ty=%s",
+ i,
+ ccx.tn.val_to_str(llarg_rust),
+ rust_indirect,
+ ccx.tn.type_to_str(arg_tys[i].ty));
- for &retptr in bcx.fcx.llretptr.iter() {
- store_inbounds(bcx, retptr, llargbundle, [0u, n]);
- }
+ // Ensure that we always have the Rust value indirectly,
+ // because it makes bitcasting easier.
+ if !rust_indirect {
+ let scratch = base::alloca(bcx, arg_tys[i].ty, "__arg");
+ Store(bcx, llarg_rust, scratch);
+ llarg_rust = scratch;
}
- fn build_ret(bcx: @mut Block,
- shim_types: &ShimTypes,
- llargbundle: ValueRef) {
- let _icx = push_ctxt("foreign::wrap::build_ret");
- let arg_count = shim_types.fn_sig.inputs.len();
- for &retptr in bcx.fcx.llretptr.iter() {
- let llretptr = load_inbounds(bcx, llargbundle, [0, arg_count]);
- Store(bcx, Load(bcx, llretptr), retptr);
- }
- }
- }
-}
+ debug!("llarg_rust=%s (after indirection)",
+ ccx.tn.val_to_str(llarg_rust));
-pub fn trans_intrinsic(ccx: @mut CrateContext,
- decl: ValueRef,
- item: &ast::foreign_item,
- path: ast_map::path,
- substs: @param_substs,
- attributes: &[ast::Attribute],
- ref_id: Option<ast::NodeId>) {
- debug!("trans_intrinsic(item.ident=%s)", ccx.sess.str_of(item.ident));
-
- fn simple_llvm_intrinsic(bcx: @mut Block, name: &'static str, num_args: uint) {
- assert!(num_args <= 4);
- let mut args = [0 as ValueRef, ..4];
- let first_real_arg = bcx.fcx.arg_pos(0u);
- for i in range(0u, num_args) {
- args[i] = get_param(bcx.fcx.llfn, first_real_arg + i);
+ // Check whether we need to do any casting
+ let foreignarg_ty = arg_tys[i].ty;
+ if arg_tys[i].cast {
+ llarg_rust = BitCast(bcx, llarg_rust, foreignarg_ty.ptr_to());
}
- let llfn = bcx.ccx().intrinsics.get_copy(&name);
- Ret(bcx, Call(bcx, llfn, args.slice(0, num_args)));
- }
-
- fn with_overflow_instrinsic(bcx: @mut Block, name: &'static str) {
- let first_real_arg = bcx.fcx.arg_pos(0u);
- let a = get_param(bcx.fcx.llfn, first_real_arg);
- let b = get_param(bcx.fcx.llfn, first_real_arg + 1);
- let llfn = bcx.ccx().intrinsics.get_copy(&name);
-
- // convert `i1` to a `bool`, and write to the out parameter
- let val = Call(bcx, llfn, [a, b]);
- let result = ExtractValue(bcx, val, 0);
- let overflow = ZExt(bcx, ExtractValue(bcx, val, 1), Type::bool());
- let retptr = get_param(bcx.fcx.llfn, bcx.fcx.out_arg_pos());
- let ret = Load(bcx, retptr);
- let ret = InsertValue(bcx, ret, result, 0);
- let ret = InsertValue(bcx, ret, overflow, 1);
- Store(bcx, ret, retptr);
- RetVoid(bcx)
- }
- fn memcpy_intrinsic(bcx: @mut Block, name: &'static str, tp_ty: ty::t, sizebits: u8) {
- let ccx = bcx.ccx();
- let lltp_ty = type_of::type_of(ccx, tp_ty);
- let align = C_i32(machine::llalign_of_min(ccx, lltp_ty) as i32);
- let size = match sizebits {
- 32 => C_i32(machine::llsize_of_real(ccx, lltp_ty) as i32),
- 64 => C_i64(machine::llsize_of_real(ccx, lltp_ty) as i64),
- _ => ccx.sess.fatal("Invalid value for sizebits")
- };
+ debug!("llarg_rust=%s (after casting)",
+ ccx.tn.val_to_str(llarg_rust));
- let decl = bcx.fcx.llfn;
- let first_real_arg = bcx.fcx.arg_pos(0u);
- let dst_ptr = PointerCast(bcx, get_param(decl, first_real_arg), Type::i8p());
- let src_ptr = PointerCast(bcx, get_param(decl, first_real_arg + 1), Type::i8p());
- let count = get_param(decl, first_real_arg + 2);
- let volatile = C_i1(false);
- let llfn = bcx.ccx().intrinsics.get_copy(&name);
- Call(bcx, llfn, [dst_ptr, src_ptr, Mul(bcx, size, count), align, volatile]);
- RetVoid(bcx);
- }
-
- fn memset_intrinsic(bcx: @mut Block, name: &'static str, tp_ty: ty::t, sizebits: u8) {
- let ccx = bcx.ccx();
- let lltp_ty = type_of::type_of(ccx, tp_ty);
- let align = C_i32(machine::llalign_of_min(ccx, lltp_ty) as i32);
- let size = match sizebits {
- 32 => C_i32(machine::llsize_of_real(ccx, lltp_ty) as i32),
- 64 => C_i64(machine::llsize_of_real(ccx, lltp_ty) as i64),
- _ => ccx.sess.fatal("Invalid value for sizebits")
+ // Finally, load the value if needed for the foreign ABI
+ let foreign_indirect = attributes[i].is_some();
+ let llarg_foreign = if foreign_indirect {
+ llarg_rust
+ } else {
+ Load(bcx, llarg_rust)
};
- let decl = bcx.fcx.llfn;
- let first_real_arg = bcx.fcx.arg_pos(0u);
- let dst_ptr = PointerCast(bcx, get_param(decl, first_real_arg), Type::i8p());
- let val = get_param(decl, first_real_arg + 1);
- let count = get_param(decl, first_real_arg + 2);
- let volatile = C_i1(false);
- let llfn = bcx.ccx().intrinsics.get_copy(&name);
- Call(bcx, llfn, [dst_ptr, val, Mul(bcx, size, count), align, volatile]);
- RetVoid(bcx);
- }
+ debug!("argument %u, llarg_foreign=%s",
+ i, ccx.tn.val_to_str(llarg_foreign));
- fn count_zeros_intrinsic(bcx: @mut Block, name: &'static str) {
- let x = get_param(bcx.fcx.llfn, bcx.fcx.arg_pos(0u));
- let y = C_i1(false);
- let llfn = bcx.ccx().intrinsics.get_copy(&name);
- Ret(bcx, Call(bcx, llfn, [x, y]));
+ llargs_foreign.push(llarg_foreign);
}
- let output_type = ty::ty_fn_ret(ty::node_id_to_type(ccx.tcx, item.id));
-
- let fcx = new_fn_ctxt_w_id(ccx,
- path,
- decl,
- item.id,
- output_type,
- true,
- Some(substs),
- None,
- Some(item.span));
-
- set_always_inline(fcx.llfn);
+ let cc = match llvm_calling_convention(ccx, fn_abis) {
+ Some(cc) => cc,
+ None => {
+ // FIXME(#8357) We really ought to report a span here
+ ccx.sess.fatal(
+ fmt!("ABI string `%s` has no suitable ABI \
+ for target architecture",
+ fn_abis.user_string(ccx.tcx)));
+ }
+ };
- // Set the fixed stack segment flag if necessary.
- if attr::contains_name(attributes, "fixed_stack_segment") {
- set_fixed_stack_segment(fcx.llfn);
- }
+ let llforeign_retval = CallWithConv(bcx, llfn, llargs_foreign, cc);
- let mut bcx = fcx.entry_bcx.unwrap();
- let first_real_arg = fcx.arg_pos(0u);
+ // If the function we just called does not use an outpointer,
+ // store the result into the rust outpointer. Cast the outpointer
+ // type to match because some ABIs will use a different type than
+ // the Rust type. e.g., a {u32,u32} struct could be returned as
+ // u64.
+ if ret_def && !fn_type.sret {
+ let llrust_ret_ty = llsig.llret_ty;
+ let llforeign_ret_ty = fn_type.ret_ty.ty;
- let nm = ccx.sess.str_of(item.ident);
- let name = nm.as_slice();
+ debug!("llretptr=%s", ccx.tn.val_to_str(llretptr));
+ debug!("llforeign_retval=%s", ccx.tn.val_to_str(llforeign_retval));
+ debug!("llrust_ret_ty=%s", ccx.tn.type_to_str(llrust_ret_ty));
+ debug!("llforeign_ret_ty=%s", ccx.tn.type_to_str(llforeign_ret_ty));
- // This requires that atomic intrinsics follow a specific naming pattern:
- // "atomic_<operation>[_<ordering>], and no ordering means SeqCst
- if name.starts_with("atomic_") {
- let split : ~[&str] = name.split_iter('_').collect();
- assert!(split.len() >= 2, "Atomic intrinsic not correct format");
- let order = if split.len() == 2 {
- lib::llvm::SequentiallyConsistent
+ if llrust_ret_ty == llforeign_ret_ty {
+ Store(bcx, llforeign_retval, llretptr);
} else {
- match split[2] {
- "relaxed" => lib::llvm::Monotonic,
- "acq" => lib::llvm::Acquire,
- "rel" => lib::llvm::Release,
- "acqrel" => lib::llvm::AcquireRelease,
- _ => ccx.sess.fatal("Unknown ordering in atomic intrinsic")
- }
- };
-
- match split[1] {
- "cxchg" => {
- let old = AtomicCmpXchg(bcx, get_param(decl, first_real_arg),
- get_param(decl, first_real_arg + 1u),
- get_param(decl, first_real_arg + 2u),
- order);
- Ret(bcx, old);
- }
- "load" => {
- let old = AtomicLoad(bcx, get_param(decl, first_real_arg),
- order);
- Ret(bcx, old);
- }
- "store" => {
- AtomicStore(bcx, get_param(decl, first_real_arg + 1u),
- get_param(decl, first_real_arg),
- order);
- RetVoid(bcx);
- }
- "fence" => {
- AtomicFence(bcx, order);
- RetVoid(bcx);
- }
- op => {
- // These are all AtomicRMW ops
- let atom_op = match op {
- "xchg" => lib::llvm::Xchg,
- "xadd" => lib::llvm::Add,
- "xsub" => lib::llvm::Sub,
- "and" => lib::llvm::And,
- "nand" => lib::llvm::Nand,
- "or" => lib::llvm::Or,
- "xor" => lib::llvm::Xor,
- "max" => lib::llvm::Max,
- "min" => lib::llvm::Min,
- "umax" => lib::llvm::UMax,
- "umin" => lib::llvm::UMin,
- _ => ccx.sess.fatal("Unknown atomic operation")
- };
-
- let old = AtomicRMW(bcx, atom_op, get_param(decl, first_real_arg),
- get_param(decl, first_real_arg + 1u),
- order);
- Ret(bcx, old);
- }
+ // The actual return type is a struct, but the ABI
+ // adaptation code has cast it into some scalar type. The
+ // code that follows is the only reliable way I have
+ // found to do a transform like i64 -> {i32,i32}.
+ // Basically we dump the data into a stack slot and then memcpy it.
+ //
+ // Other approaches I tried:
+ // - Casting rust ret pointer to the foreign type and using Store
+ // is (a) unsafe if size of foreign type > size of rust type and
+ // (b) runs afoul of strict aliasing rules, yielding invalid
+ // assembly under -O (specifically, the store gets removed).
+ // - Truncating foreign type to correct integral type and then
+ // bitcasting to the struct type yields invalid cast errors.
+ let llscratch = base::alloca(bcx, llforeign_ret_ty, "__cast");
+ Store(bcx, llforeign_retval, llscratch);
+ let llscratch_i8 = BitCast(bcx, llscratch, Type::i8().ptr_to());
+ let llretptr_i8 = BitCast(bcx, llretptr, Type::i8().ptr_to());
+ let llrust_size = machine::llsize_of_store(ccx, llrust_ret_ty);
+ let llforeign_align = machine::llalign_of_min(ccx, llforeign_ret_ty);
+ let llrust_align = machine::llalign_of_min(ccx, llrust_ret_ty);
+ let llalign = uint::min(llforeign_align, llrust_align);
+ debug!("llrust_size=%?", llrust_size);
+ base::call_memcpy(bcx, llretptr_i8, llscratch_i8,
+ C_uint(ccx, llrust_size), llalign as u32);
}
-
- fcx.cleanup();
- return;
}
- match name {
- "size_of" => {
- let tp_ty = substs.tys[0];
- let lltp_ty = type_of::type_of(ccx, tp_ty);
- Ret(bcx, C_uint(ccx, machine::llsize_of_real(ccx, lltp_ty)));
- }
- "move_val" => {
- // Create a datum reflecting the value being moved.
- // Use `appropriate_mode` so that the datum is by ref
- // if the value is non-immediate. Note that, with
- // intrinsics, there are no argument cleanups to
- // concern ourselves with.
- let tp_ty = substs.tys[0];
- let mode = appropriate_mode(ccx.tcx, tp_ty);
- let src = Datum {val: get_param(decl, first_real_arg + 1u),
- ty: tp_ty, mode: mode};
- bcx = src.move_to(bcx, DROP_EXISTING,
- get_param(decl, first_real_arg));
- RetVoid(bcx);
- }
- "move_val_init" => {
- // See comments for `"move_val"`.
- let tp_ty = substs.tys[0];
- let mode = appropriate_mode(ccx.tcx, tp_ty);
- let src = Datum {val: get_param(decl, first_real_arg + 1u),
- ty: tp_ty, mode: mode};
- bcx = src.move_to(bcx, INIT, get_param(decl, first_real_arg));
- RetVoid(bcx);
- }
- "min_align_of" => {
- let tp_ty = substs.tys[0];
- let lltp_ty = type_of::type_of(ccx, tp_ty);
- Ret(bcx, C_uint(ccx, machine::llalign_of_min(ccx, lltp_ty)));
- }
- "pref_align_of"=> {
- let tp_ty = substs.tys[0];
- let lltp_ty = type_of::type_of(ccx, tp_ty);
- Ret(bcx, C_uint(ccx, machine::llalign_of_pref(ccx, lltp_ty)));
- }
- "get_tydesc" => {
- let tp_ty = substs.tys[0];
- let static_ti = get_tydesc(ccx, tp_ty);
- glue::lazily_emit_all_tydesc_glue(ccx, static_ti);
-
- // FIXME (#3730): ideally this shouldn't need a cast,
- // but there's a circularity between translating rust types to llvm
- // types and having a tydesc type available. So I can't directly access
- // the llvm type of intrinsic::TyDesc struct.
- let userland_tydesc_ty = type_of::type_of(ccx, output_type);
- let td = PointerCast(bcx, static_ti.tydesc, userland_tydesc_ty);
- Ret(bcx, td);
- }
- "init" => {
- let tp_ty = substs.tys[0];
- let lltp_ty = type_of::type_of(ccx, tp_ty);
- match bcx.fcx.llretptr {
- Some(ptr) => { Store(bcx, C_null(lltp_ty), ptr); RetVoid(bcx); }
- None if ty::type_is_nil(tp_ty) => RetVoid(bcx),
- None => Ret(bcx, C_null(lltp_ty)),
- }
- }
- "uninit" => {
- // Do nothing, this is effectively a no-op
- let retty = substs.tys[0];
- if ty::type_is_immediate(ccx.tcx, retty) && !ty::type_is_nil(retty) {
- unsafe {
- Ret(bcx, lib::llvm::llvm::LLVMGetUndef(type_of(ccx, retty).to_ref()));
- }
- } else {
- RetVoid(bcx)
- }
- }
- "forget" => {
- RetVoid(bcx);
- }
- "transmute" => {
- let (in_type, out_type) = (substs.tys[0], substs.tys[1]);
- let llintype = type_of::type_of(ccx, in_type);
- let llouttype = type_of::type_of(ccx, out_type);
-
- let in_type_size = machine::llbitsize_of_real(ccx, llintype);
- let out_type_size = machine::llbitsize_of_real(ccx, llouttype);
- if in_type_size != out_type_size {
- let sp = match ccx.tcx.items.get_copy(&ref_id.unwrap()) {
- ast_map::node_expr(e) => e.span,
- _ => fail!("transmute has non-expr arg"),
- };
- let pluralize = |n| if 1u == n { "" } else { "s" };
- ccx.sess.span_fatal(sp,
- fmt!("transmute called on types with \
- different sizes: %s (%u bit%s) to \
- %s (%u bit%s)",
- ty_to_str(ccx.tcx, in_type),
- in_type_size,
- pluralize(in_type_size),
- ty_to_str(ccx.tcx, out_type),
- out_type_size,
- pluralize(out_type_size)));
- }
+ return bcx;
+}
- if !ty::type_is_nil(out_type) {
- let llsrcval = get_param(decl, first_real_arg);
- if ty::type_is_immediate(ccx.tcx, in_type) {
- match fcx.llretptr {
- Some(llretptr) => {
- Store(bcx, llsrcval, PointerCast(bcx, llretptr, llintype.ptr_to()));
- RetVoid(bcx);
- }
- None => match (llintype.kind(), llouttype.kind()) {
- (Pointer, other) | (other, Pointer) if other != Pointer => {
- let tmp = Alloca(bcx, llouttype, "");
- Store(bcx, llsrcval, PointerCast(bcx, tmp, llintype.ptr_to()));
- Ret(bcx, Load(bcx, tmp));
- }
- _ => Ret(bcx, BitCast(bcx, llsrcval, llouttype))
- }
- }
- } else if ty::type_is_immediate(ccx.tcx, out_type) {
- let llsrcptr = PointerCast(bcx, llsrcval, llouttype.ptr_to());
- Ret(bcx, Load(bcx, llsrcptr));
- } else {
- // NB: Do not use a Load and Store here. This causes massive
- // code bloat when `transmute` is used on large structural
- // types.
- let lldestptr = fcx.llretptr.unwrap();
- let lldestptr = PointerCast(bcx, lldestptr, Type::i8p());
- let llsrcptr = PointerCast(bcx, llsrcval, Type::i8p());
-
- let llsize = llsize_of(ccx, llintype);
- call_memcpy(bcx, lldestptr, llsrcptr, llsize, 1);
- RetVoid(bcx);
- };
- } else {
- RetVoid(bcx);
- }
- }
- "needs_drop" => {
- let tp_ty = substs.tys[0];
- Ret(bcx, C_bool(ty::type_needs_drop(ccx.tcx, tp_ty)));
- }
- "contains_managed" => {
- let tp_ty = substs.tys[0];
- Ret(bcx, C_bool(ty::type_contents(ccx.tcx, tp_ty).contains_managed()));
- }
- "visit_tydesc" => {
- let td = get_param(decl, first_real_arg);
- let visitor = get_param(decl, first_real_arg + 1u);
- let td = PointerCast(bcx, td, ccx.tydesc_type.ptr_to());
- glue::call_tydesc_glue_full(bcx, visitor, td,
- abi::tydesc_field_visit_glue, None);
- RetVoid(bcx);
- }
- "frame_address" => {
- let frameaddress = ccx.intrinsics.get_copy(& &"llvm.frameaddress");
- let frameaddress_val = Call(bcx, frameaddress, [C_i32(0i32)]);
- let star_u8 = ty::mk_imm_ptr(
- bcx.tcx(),
- ty::mk_mach_uint(ast::ty_u8));
- let fty = ty::mk_closure(bcx.tcx(), ty::ClosureTy {
- purity: ast::impure_fn,
- sigil: ast::BorrowedSigil,
- onceness: ast::Many,
- region: ty::re_bound(ty::br_anon(0)),
- bounds: ty::EmptyBuiltinBounds(),
- sig: FnSig {
- bound_lifetime_names: opt_vec::Empty,
- inputs: ~[ star_u8 ],
- output: ty::mk_nil()
- }
- });
- let datum = Datum {val: get_param(decl, first_real_arg),
- mode: ByRef(ZeroMem), ty: fty};
- let arg_vals = ~[frameaddress_val];
- bcx = trans_call_inner(
- bcx, None, fty, ty::mk_nil(),
- |bcx| Callee {bcx: bcx, data: Closure(datum)},
- ArgVals(arg_vals), Some(Ignore), DontAutorefArg).bcx;
- RetVoid(bcx);
- }
- "morestack_addr" => {
- // XXX This is a hack to grab the address of this particular
- // native function. There should be a general in-language
- // way to do this
- let llfty = type_of_fn(bcx.ccx(), [], ty::mk_nil());
- let morestack_addr = decl_cdecl_fn(
- bcx.ccx().llmod, "__morestack", llfty);
- let morestack_addr = PointerCast(bcx, morestack_addr, Type::nil().ptr_to());
- Ret(bcx, morestack_addr);
- }
- "offset" => {
- let ptr = get_param(decl, first_real_arg);
- let offset = get_param(decl, first_real_arg + 1);
- Ret(bcx, GEP(bcx, ptr, [offset]));
- }
- "offset_inbounds" => {
- let ptr = get_param(decl, first_real_arg);
- let offset = get_param(decl, first_real_arg + 1);
- Ret(bcx, InBoundsGEP(bcx, ptr, [offset]));
- }
- "memcpy32" => memcpy_intrinsic(bcx, "llvm.memcpy.p0i8.p0i8.i32", substs.tys[0], 32),
- "memcpy64" => memcpy_intrinsic(bcx, "llvm.memcpy.p0i8.p0i8.i64", substs.tys[0], 64),
- "memmove32" => memcpy_intrinsic(bcx, "llvm.memmove.p0i8.p0i8.i32", substs.tys[0], 32),
- "memmove64" => memcpy_intrinsic(bcx, "llvm.memmove.p0i8.p0i8.i64", substs.tys[0], 64),
- "memset32" => memset_intrinsic(bcx, "llvm.memset.p0i8.i32", substs.tys[0], 32),
- "memset64" => memset_intrinsic(bcx, "llvm.memset.p0i8.i64", substs.tys[0], 64),
- "sqrtf32" => simple_llvm_intrinsic(bcx, "llvm.sqrt.f32", 1),
- "sqrtf64" => simple_llvm_intrinsic(bcx, "llvm.sqrt.f64", 1),
- "powif32" => simple_llvm_intrinsic(bcx, "llvm.powi.f32", 2),
- "powif64" => simple_llvm_intrinsic(bcx, "llvm.powi.f64", 2),
- "sinf32" => simple_llvm_intrinsic(bcx, "llvm.sin.f32", 1),
- "sinf64" => simple_llvm_intrinsic(bcx, "llvm.sin.f64", 1),
- "cosf32" => simple_llvm_intrinsic(bcx, "llvm.cos.f32", 1),
- "cosf64" => simple_llvm_intrinsic(bcx, "llvm.cos.f64", 1),
- "powf32" => simple_llvm_intrinsic(bcx, "llvm.pow.f32", 2),
- "powf64" => simple_llvm_intrinsic(bcx, "llvm.pow.f64", 2),
- "expf32" => simple_llvm_intrinsic(bcx, "llvm.exp.f32", 1),
- "expf64" => simple_llvm_intrinsic(bcx, "llvm.exp.f64", 1),
- "exp2f32" => simple_llvm_intrinsic(bcx, "llvm.exp2.f32", 1),
- "exp2f64" => simple_llvm_intrinsic(bcx, "llvm.exp2.f64", 1),
- "logf32" => simple_llvm_intrinsic(bcx, "llvm.log.f32", 1),
- "logf64" => simple_llvm_intrinsic(bcx, "llvm.log.f64", 1),
- "log10f32" => simple_llvm_intrinsic(bcx, "llvm.log10.f32", 1),
- "log10f64" => simple_llvm_intrinsic(bcx, "llvm.log10.f64", 1),
- "log2f32" => simple_llvm_intrinsic(bcx, "llvm.log2.f32", 1),
- "log2f64" => simple_llvm_intrinsic(bcx, "llvm.log2.f64", 1),
- "fmaf32" => simple_llvm_intrinsic(bcx, "llvm.fma.f32", 3),
- "fmaf64" => simple_llvm_intrinsic(bcx, "llvm.fma.f64", 3),
- "fabsf32" => simple_llvm_intrinsic(bcx, "llvm.fabs.f32", 1),
- "fabsf64" => simple_llvm_intrinsic(bcx, "llvm.fabs.f64", 1),
- "floorf32" => simple_llvm_intrinsic(bcx, "llvm.floor.f32", 1),
- "floorf64" => simple_llvm_intrinsic(bcx, "llvm.floor.f64", 1),
- "ceilf32" => simple_llvm_intrinsic(bcx, "llvm.ceil.f32", 1),
- "ceilf64" => simple_llvm_intrinsic(bcx, "llvm.ceil.f64", 1),
- "truncf32" => simple_llvm_intrinsic(bcx, "llvm.trunc.f32", 1),
- "truncf64" => simple_llvm_intrinsic(bcx, "llvm.trunc.f64", 1),
- "ctpop8" => simple_llvm_intrinsic(bcx, "llvm.ctpop.i8", 1),
- "ctpop16" => simple_llvm_intrinsic(bcx, "llvm.ctpop.i16", 1),
- "ctpop32" => simple_llvm_intrinsic(bcx, "llvm.ctpop.i32", 1),
- "ctpop64" => simple_llvm_intrinsic(bcx, "llvm.ctpop.i64", 1),
- "ctlz8" => count_zeros_intrinsic(bcx, "llvm.ctlz.i8"),
- "ctlz16" => count_zeros_intrinsic(bcx, "llvm.ctlz.i16"),
- "ctlz32" => count_zeros_intrinsic(bcx, "llvm.ctlz.i32"),
- "ctlz64" => count_zeros_intrinsic(bcx, "llvm.ctlz.i64"),
- "cttz8" => count_zeros_intrinsic(bcx, "llvm.cttz.i8"),
- "cttz16" => count_zeros_intrinsic(bcx, "llvm.cttz.i16"),
- "cttz32" => count_zeros_intrinsic(bcx, "llvm.cttz.i32"),
- "cttz64" => count_zeros_intrinsic(bcx, "llvm.cttz.i64"),
- "bswap16" => simple_llvm_intrinsic(bcx, "llvm.bswap.i16", 1),
- "bswap32" => simple_llvm_intrinsic(bcx, "llvm.bswap.i32", 1),
- "bswap64" => simple_llvm_intrinsic(bcx, "llvm.bswap.i64", 1),
-
- "i8_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.sadd.with.overflow.i8"),
- "i16_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.sadd.with.overflow.i16"),
- "i32_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.sadd.with.overflow.i32"),
- "i64_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.sadd.with.overflow.i64"),
-
- "u8_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.uadd.with.overflow.i8"),
- "u16_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.uadd.with.overflow.i16"),
- "u32_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.uadd.with.overflow.i32"),
- "u64_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.uadd.with.overflow.i64"),
-
- "i8_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.ssub.with.overflow.i8"),
- "i16_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.ssub.with.overflow.i16"),
- "i32_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.ssub.with.overflow.i32"),
- "i64_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.ssub.with.overflow.i64"),
-
- "u8_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.usub.with.overflow.i8"),
- "u16_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.usub.with.overflow.i16"),
- "u32_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.usub.with.overflow.i32"),
- "u64_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.usub.with.overflow.i64"),
-
- "i8_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.smul.with.overflow.i8"),
- "i16_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.smul.with.overflow.i16"),
- "i32_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.smul.with.overflow.i32"),
- "i64_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.smul.with.overflow.i64"),
-
- "u8_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.umul.with.overflow.i8"),
- "u16_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.umul.with.overflow.i16"),
- "u32_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.umul.with.overflow.i32"),
- "u64_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.umul.with.overflow.i64"),
-
- _ => {
- // Could we make this an enum rather than a string? does it get
- // checked earlier?
- ccx.sess.span_bug(item.span, "unknown intrinsic");
- }
+pub fn trans_foreign_mod(ccx: @mut CrateContext,
+ foreign_mod: &ast::foreign_mod) {
+ let _icx = push_ctxt("foreign::trans_foreign_mod");
+ for &foreign_item in foreign_mod.items.iter() {
+ let lname = link_name(ccx, foreign_item);
+ ccx.item_symbols.insert(foreign_item.id, lname.to_owned());
}
- fcx.cleanup();
}
-/**
- * Translates a "crust" fn, meaning a Rust fn that can be called
- * from C code. In this case, we have to perform some adaptation
- * to (1) switch back to the Rust stack and (2) adapt the C calling
- * convention to our own.
- *
- * Example: Given a crust fn F(x: X, y: Y) -> Z, we generate a
- * Rust function R as normal:
- *
- * void R(Z* dest, void *env, X x, Y y) {...}
- *
- * and then we generate a wrapper function W that looks like:
- *
- * Z W(X x, Y y) {
- * struct { X x; Y y; Z *z; } args = { x, y, z };
- * call_on_c_stack_shim(S, &args);
- * }
- *
- * Note that the wrapper follows the foreign (typically "C") ABI.
- * The wrapper is the actual "value" of the foreign fn. Finally,
- * we generate a shim function S that looks like:
- *
- * void S(struct { X x; Y y; Z *z; } *args) {
- * R(args->z, NULL, args->x, args->y);
- * }
- */
-pub fn trans_foreign_fn(ccx: @mut CrateContext,
- path: ast_map::path,
- decl: &ast::fn_decl,
- body: &ast::Block,
- llwrapfn: ValueRef,
- id: ast::NodeId) {
+///////////////////////////////////////////////////////////////////////////
+// Rust functions with foreign ABIs
+//
+// These are normal Rust functions defined with foreign ABIs. For
+// now, and perhaps forever, we translate these using a "layer of
+// indirection". That is, given a Rust declaration like:
+//
+// extern "C" fn foo(i: u32) -> u32 { ... }
+//
+// we will generate a function like:
+//
+// S foo(T i) {
+// S r;
+// foo0(&r, NULL, i);
+// return r;
+// }
+//
+ // #[inline(always)]
+// void foo0(uint32_t *r, void *env, uint32_t i) { ... }
+//
+// Here the (internal) `foo0` function follows the Rust ABI as normal,
+ // whereas the `foo` function follows the C ABI. We rely on LLVM to
+// inline the one into the other. Of course we could just generate the
+// correct code in the first place, but this is much simpler.
+
+pub fn register_rust_fn_with_foreign_abi(ccx: @mut CrateContext,
+ sp: span,
+ sym: ~str,
+ node_id: ast::NodeId)
+ -> ValueRef {
+ let _icx = push_ctxt("foreign::register_foreign_fn");
+
+ let tys = foreign_types_for_id(ccx, node_id);
+ let llfn_ty = lltype_for_fn_from_foreign_types(&tys);
+ let llfn = base::register_fn_llvmty(ccx,
+ sp,
+ sym,
+ node_id,
+ lib::llvm::CCallConv,
+ llfn_ty);
+ add_argument_attributes(&tys, llfn);
+ debug!("register_rust_fn_with_foreign_abi(node_id=%?, llfn_ty=%s, llfn=%s)",
+ node_id, ccx.tn.type_to_str(llfn_ty), ccx.tn.val_to_str(llfn));
+ llfn
+}
+
+pub fn trans_rust_fn_with_foreign_abi(ccx: @mut CrateContext,
+ path: &ast_map::path,
+ decl: &ast::fn_decl,
+ body: &ast::Block,
+ llwrapfn: ValueRef,
+ id: ast::NodeId) {
let _icx = push_ctxt("foreign::build_foreign_fn");
+ let tys = foreign_types_for_id(ccx, id);
+
+ unsafe { // unsafe because we call LLVM operations
+ // Build up the Rust function (`foo0` above).
+ let llrustfn = build_rust_fn(ccx, path, decl, body, id);
+
+ // Build up the foreign wrapper (`foo` above).
+ return build_wrap_fn(ccx, llrustfn, llwrapfn, &tys);
+ }
fn build_rust_fn(ccx: @mut CrateContext,
path: &ast_map::path,
decl: &ast::fn_decl,
body: &ast::Block,
id: ast::NodeId)
- -> ValueRef {
+ -> ValueRef {
let _icx = push_ctxt("foreign::foreign::build_rust_fn");
- let t = ty::node_id_to_type(ccx.tcx, id);
- // XXX: Bad copy.
+ let tcx = ccx.tcx;
+ let t = ty::node_id_to_type(tcx, id);
let ps = link::mangle_internal_name_by_path(
- ccx,
- vec::append_one((*path).clone(),
- ast_map::path_name(
- special_idents::clownshoe_abi)));
+ ccx, vec::append_one((*path).clone(), ast_map::path_name(
+ special_idents::clownshoe_abi
+ )));
let llty = type_of_fn_from_ty(ccx, t);
- let llfndecl = decl_internal_cdecl_fn(ccx.llmod, ps, llty);
- trans_fn(ccx,
- (*path).clone(),
- decl,
- body,
- llfndecl,
- no_self,
- None,
- id,
- []);
+ let llfndecl = base::decl_internal_cdecl_fn(ccx.llmod, ps, llty);
+ base::trans_fn(ccx,
+ (*path).clone(),
+ decl,
+ body,
+ llfndecl,
+ base::no_self,
+ None,
+ id,
+ []);
return llfndecl;
}
- fn build_shim_fn(ccx: @mut CrateContext,
- path: ast_map::path,
- llrustfn: ValueRef,
- tys: &ShimTypes)
- -> ValueRef {
- /*!
- *
- * Generate the shim S:
- *
- * void S(struct { X x; Y y; Z *z; } *args) {
- * R(args->z, NULL, &args->x, args->y);
- * }
- *
- * One complication is that we must adapt to the Rust
- * calling convention, which introduces indirection
- * in some cases. To demonstrate this, I wrote one of the
- * entries above as `&args->x`, because presumably `X` is
- * one of those types that is passed by pointer in Rust.
- */
-
- let _icx = push_ctxt("foreign::foreign::build_shim_fn");
-
- fn build_args(bcx: @mut Block, tys: &ShimTypes, llargbundle: ValueRef)
- -> ~[ValueRef] {
- let _icx = push_ctxt("foreign::extern::shim::build_args");
- let ccx = bcx.ccx();
- let mut llargvals = ~[];
- let mut i = 0u;
- let n = tys.fn_sig.inputs.len();
-
- if !ty::type_is_immediate(bcx.tcx(), tys.fn_sig.output) {
- let llretptr = load_inbounds(bcx, llargbundle, [0u, n]);
- llargvals.push(llretptr);
+ unsafe fn build_wrap_fn(ccx: @mut CrateContext,
+ llrustfn: ValueRef,
+ llwrapfn: ValueRef,
+ tys: &ForeignTypes) {
+ let _icx = push_ctxt(
+ "foreign::trans_rust_fn_with_foreign_abi::build_wrap_fn");
+ let tcx = ccx.tcx;
+
+ debug!("build_wrap_fn(llrustfn=%s, llwrapfn=%s)",
+ ccx.tn.val_to_str(llrustfn),
+ ccx.tn.val_to_str(llwrapfn));
+
+ // Avoid all the Rust generation stuff and just generate raw
+ // LLVM here.
+ //
+ // We want to generate code like this:
+ //
+ // S foo(T i) {
+ // S r;
+ // foo0(&r, NULL, i);
+ // return r;
+ // }
+
+ let the_block =
+ "the block".to_c_str().with_ref(
+ |s| llvm::LLVMAppendBasicBlockInContext(ccx.llcx, llwrapfn, s));
+
+ let builder = ccx.builder.B;
+ llvm::LLVMPositionBuilderAtEnd(builder, the_block);
+
+ // Array for the arguments we will pass to the rust function.
+ let mut llrust_args = ~[];
+ let mut next_foreign_arg_counter: c_uint = 0;
+ let next_foreign_arg: &fn() -> c_uint = {
+ || {
+ next_foreign_arg_counter += 1;
+ next_foreign_arg_counter - 1
}
+ };
- let llenvptr = C_null(Type::opaque_box(bcx.ccx()).ptr_to());
- llargvals.push(llenvptr);
- while i < n {
- // Get a pointer to the argument:
- let mut llargval = GEPi(bcx, llargbundle, [0u, i]);
+ // If there is an out pointer on the foreign function
+ let foreign_outptr = {
+ if tys.fn_ty.sret {
+ Some(llvm::LLVMGetParam(llwrapfn, next_foreign_arg()))
+ } else {
+ None
+ }
+ };
- if !type_of::arg_is_indirect(ccx, &tys.fn_sig.inputs[i]) {
- // If Rust would pass this by value, load the value.
- llargval = Load(bcx, llargval);
+ // Push Rust return pointer, using null if it will be unused.
+ let rust_uses_outptr =
+ type_of::return_uses_outptr(tcx, tys.fn_sig.output);
+ let return_alloca: Option<ValueRef>;
+ let llrust_ret_ty = tys.llsig.llret_ty;
+ let llrust_retptr_ty = llrust_ret_ty.ptr_to();
+ if rust_uses_outptr {
+ // Rust expects to use an outpointer. If the foreign fn
+ // also uses an outpointer, we can reuse it, but the types
+ // may vary, so cast first to the Rust type. If the
+ // foreign fn does NOT use an outpointer, we will have to
+ // alloca some scratch space on the stack.
+ match foreign_outptr {
+ Some(llforeign_outptr) => {
+ debug!("out pointer, foreign=%s",
+ ccx.tn.val_to_str(llforeign_outptr));
+ let llrust_retptr =
+ llvm::LLVMBuildBitCast(builder,
+ llforeign_outptr,
+ llrust_ret_ty.ptr_to().to_ref(),
+ noname());
+ debug!("out pointer, foreign=%s (casted)",
+ ccx.tn.val_to_str(llrust_retptr));
+ llrust_args.push(llrust_retptr);
+ return_alloca = None;
}
- llargvals.push(llargval);
- i += 1u;
+ None => {
+ let slot = {
+ "return_alloca".to_c_str().with_ref(
+ |s| llvm::LLVMBuildAlloca(builder,
+ llrust_ret_ty.to_ref(),
+ s))
+ };
+ debug!("out pointer, \
+ allocad=%s, \
+ llrust_ret_ty=%s, \
+ return_ty=%s",
+ ccx.tn.val_to_str(slot),
+ ccx.tn.type_to_str(llrust_ret_ty),
+ tys.fn_sig.output.repr(tcx));
+ llrust_args.push(slot);
+ return_alloca = Some(slot);
+ }
+ }
+ } else {
+ // Rust does not expect an outpointer. If the foreign fn
+ // does use an outpointer, then we will do a store of the
+ // value that the Rust fn returns.
+ return_alloca = None;
+ };
+
+ // Push a (null) env pointer
+ let env_pointer = base::null_env_ptr(ccx);
+ debug!("env pointer=%s", ccx.tn.val_to_str(env_pointer));
+ llrust_args.push(env_pointer);
+
+ // Build up the arguments to the call to the rust function.
+ // Careful to adapt for cases where the native convention uses
+ // a pointer and Rust does not or vice versa.
+ for i in range(0, tys.fn_sig.inputs.len()) {
+ let rust_ty = tys.fn_sig.inputs[i];
+ let llrust_ty = tys.llsig.llarg_tys[i];
+ let foreign_index = next_foreign_arg();
+ let rust_indirect = type_of::arg_is_indirect(ccx, rust_ty);
+ let foreign_indirect = tys.fn_ty.attrs[foreign_index].is_some();
+ let mut llforeign_arg = llvm::LLVMGetParam(llwrapfn, foreign_index);
+
+ debug!("llforeign_arg #%u: %s",
+ i, ccx.tn.val_to_str(llforeign_arg));
+ debug!("rust_indirect = %b, foreign_indirect = %b",
+ rust_indirect, foreign_indirect);
+
+ // Ensure that the foreign argument is indirect (by
+ // pointer). It makes adapting types easier, since we can
+ // always just bitcast pointers.
+ if !foreign_indirect {
+ let lltemp =
+ llvm::LLVMBuildAlloca(
+ builder, val_ty(llforeign_arg).to_ref(), noname());
+ llvm::LLVMBuildStore(
+ builder, llforeign_arg, lltemp);
+ llforeign_arg = lltemp;
+ }
+
+ // If the types in the ABI and the Rust types don't match,
+ // bitcast the llforeign_arg pointer so it matches the types
+ // Rust expects.
+ if tys.fn_ty.arg_tys[foreign_index].cast {
+ assert!(!foreign_indirect);
+ llforeign_arg = llvm::LLVMBuildBitCast(
+ builder, llforeign_arg,
+ llrust_ty.ptr_to().to_ref(), noname());
}
- return llargvals;
- }
- fn build_ret(bcx: @mut Block,
- shim_types: &ShimTypes,
- llargbundle: ValueRef,
- llretval: ValueRef) {
- if bcx.fcx.llretptr.is_some() &&
- ty::type_is_immediate(bcx.tcx(), shim_types.fn_sig.output) {
- // Write the value into the argument bundle.
- let arg_count = shim_types.fn_sig.inputs.len();
- let llretptr = load_inbounds(bcx,
- llargbundle,
- [0, arg_count]);
- Store(bcx, llretval, llretptr);
+ let llrust_arg = if rust_indirect {
+ llforeign_arg
} else {
- // NB: The return pointer in the Rust ABI function is wired
- // directly into the return slot in the shim struct.
+ llvm::LLVMBuildLoad(builder, llforeign_arg, noname())
+ };
+
+ debug!("llrust_arg #%u: %s",
+ i, ccx.tn.val_to_str(llrust_arg));
+ llrust_args.push(llrust_arg);
+ }
+
+ // Perform the call itself
+ let llrust_ret_val = do llrust_args.as_imm_buf |ptr, len| {
+ debug!("calling llrustfn = %s", ccx.tn.val_to_str(llrustfn));
+ llvm::LLVMBuildCall(builder, llrustfn, ptr,
+ len as c_uint, noname())
+ };
+
+ // Get the return value where the foreign fn expects it.
+ let llforeign_ret_ty = tys.fn_ty.ret_ty.ty;
+ match foreign_outptr {
+ None if !tys.ret_def => {
+ // Function returns `()` or `bot`, which in Rust is the LLVM
+ // type "{}" but in foreign ABIs is "Void".
+ llvm::LLVMBuildRetVoid(builder);
+ }
+
+ None if rust_uses_outptr => {
+ // Rust uses an outpointer, but the foreign ABI does not. Load.
+ let llrust_outptr = return_alloca.unwrap();
+ let llforeign_outptr_casted =
+ llvm::LLVMBuildBitCast(builder,
+ llrust_outptr,
+ llforeign_ret_ty.ptr_to().to_ref(),
+ noname());
+ let llforeign_retval =
+ llvm::LLVMBuildLoad(builder, llforeign_outptr_casted, noname());
+ llvm::LLVMBuildRet(builder, llforeign_retval);
+ }
+
+ None if llforeign_ret_ty != llrust_ret_ty => {
+ // Neither ABI uses an outpointer, but the types don't
+ // quite match. Must cast. Probably we should try and
+ // examine the types and use a concrete llvm cast, but
+ // right now we just use a temp memory location and
+ // bitcast the pointer, which is the same thing the
+ // old wrappers used to do.
+ let lltemp =
+ llvm::LLVMBuildAlloca(
+ builder, llforeign_ret_ty.to_ref(), noname());
+ let lltemp_casted =
+ llvm::LLVMBuildBitCast(builder,
+ lltemp,
+ llrust_ret_ty.ptr_to().to_ref(),
+ noname());
+ llvm::LLVMBuildStore(
+ builder, llrust_ret_val, lltemp_casted);
+ let llforeign_retval =
+ llvm::LLVMBuildLoad(builder, lltemp, noname());
+ llvm::LLVMBuildRet(builder, llforeign_retval);
+ }
+
+ None => {
+ // Neither ABI uses an outpointer, and the types
+ // match. Easy peasy.
+ llvm::LLVMBuildRet(builder, llrust_ret_val);
+ }
+
+ Some(llforeign_outptr) if !rust_uses_outptr => {
+ // Foreign ABI requires an out pointer, but Rust doesn't.
+ // Store Rust return value.
+ let llforeign_outptr_casted =
+ llvm::LLVMBuildBitCast(builder,
+ llforeign_outptr,
+ llrust_retptr_ty.to_ref(),
+ noname());
+ llvm::LLVMBuildStore(
+ builder, llrust_ret_val, llforeign_outptr_casted);
+ llvm::LLVMBuildRetVoid(builder);
+ }
+
+ Some(_) => {
+ // Both ABIs use outpointers. Easy peasy.
+ llvm::LLVMBuildRetVoid(builder);
}
}
+ }
+}
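The out-pointer juggling above (reuse the foreign sret slot when both ABIs have one, otherwise alloca scratch space and load the value back out) can be illustrated in modern Rust. A minimal sketch, where `rust_style_fn` and `wrap` are hypothetical stand-ins for the generated Rust-ABI function and the wrapper:

```rust
use std::mem::MaybeUninit;

// Callee using an "out-pointer" convention: writes its result through `out`.
unsafe fn rust_style_fn(out: *mut u64, x: u32) {
    unsafe { *out = (x as u64) * 2; }
}

// Wrapper with a value-return convention: allocates scratch space (the
// `return_alloca` case above), calls through, then loads the result.
fn wrap(x: u32) -> u64 {
    let mut slot = MaybeUninit::<u64>::uninit();
    unsafe {
        rust_style_fn(slot.as_mut_ptr(), x);
        slot.assume_init()
    }
}
```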
- let shim_name = link::mangle_internal_name_by_path(
- ccx,
- vec::append_one(path, ast_map::path_name(
- special_idents::clownshoe_stack_shim
- )));
- build_shim_fn_(ccx,
- shim_name,
- llrustfn,
- tys,
- lib::llvm::CCallConv,
- build_args,
- build_ret)
+///////////////////////////////////////////////////////////////////////////
+// General ABI Support
+//
+// This code is kind of a confused mess and needs to be reworked given
+// the massive simplifications that have occurred.
+
+pub fn link_name(ccx: &CrateContext, i: @ast::foreign_item) -> @str {
+ match attr::first_attr_value_str_by_name(i.attrs, "link_name") {
+ None => ccx.sess.str_of(i.ident),
+ Some(ln) => ln,
}
+}
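`link_name` above resolves a foreign item's symbol: the value of the first `link_name` attribute wins, otherwise the item's own identifier is used. A sketch of the same lookup in modern Rust (names are illustrative, not compiler APIs):

```rust
// First "link_name" attribute value if present, else the item's identifier.
fn link_name(attrs: &[(&str, &str)], ident: &str) -> String {
    attrs.iter()
        .find(|&&(name, _)| name == "link_name")
        .map(|&(_, value)| value.to_string())
        .unwrap_or_else(|| ident.to_string())
}
```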
- fn build_wrap_fn(ccx: @mut CrateContext,
- llshimfn: ValueRef,
- llwrapfn: ValueRef,
- tys: &ShimTypes) {
- /*!
- *
- * Generate the wrapper W:
- *
- * Z W(X x, Y y) {
- * struct { X x; Y y; Z *z; } args = { x, y, z };
- * call_on_c_stack_shim(S, &args);
- * }
- */
-
- let _icx = push_ctxt("foreign::foreign::build_wrap_fn");
-
- build_wrap_fn_(ccx,
- tys,
- llshimfn,
- llwrapfn,
- ccx.upcalls.call_shim_on_rust_stack,
- true,
- build_args,
- build_ret);
-
- fn build_args(bcx: @mut Block,
- tys: &ShimTypes,
- llwrapfn: ValueRef,
- llargbundle: ValueRef) {
- let _icx = push_ctxt("foreign::foreign::wrap::build_args");
- tys.fn_ty.build_wrap_args(bcx,
- tys.llsig.llret_ty,
- llwrapfn,
- llargbundle);
- }
+fn foreign_signature(ccx: &mut CrateContext, fn_sig: &ty::FnSig)
+ -> LlvmSignature {
+ /*!
+ * The LlvmSignature is the LLVM types of the arguments/return type
+ * of a function. Note that these LLVM types are not quite the same
+ * as the LLVM types would be for a native Rust function because foreign
+ * functions just plain ignore modes. They also don't pass aggregate
+ * values by pointer like we do.
+ */
- fn build_ret(bcx: @mut Block, tys: &ShimTypes, llargbundle: ValueRef) {
- let _icx = push_ctxt("foreign::foreign::wrap::build_ret");
- tys.fn_ty.build_wrap_ret(bcx, tys.llsig.llarg_tys, llargbundle);
- }
+ let llarg_tys = fn_sig.inputs.map(|&arg| type_of(ccx, arg));
+ let llret_ty = type_of::type_of(ccx, fn_sig.output);
+ LlvmSignature {
+ llarg_tys: llarg_tys,
+ llret_ty: llret_ty,
+ sret: type_of::return_uses_outptr(ccx.tcx, fn_sig.output),
}
+}
- let tys = shim_types(ccx, id);
- // The internal Rust ABI function - runs on the Rust stack
- // XXX: Bad copy.
- let llrustfn = build_rust_fn(ccx, &path, decl, body, id);
- // The internal shim function - runs on the Rust stack
- let llshimfn = build_shim_fn(ccx, path, llrustfn, &tys);
- // The foreign C function - runs on the C stack
- build_wrap_fn(ccx, llshimfn, llwrapfn, &tys)
+fn foreign_types_for_id(ccx: &mut CrateContext,
+ id: ast::NodeId) -> ForeignTypes {
+ foreign_types_for_fn_ty(ccx, ty::node_id_to_type(ccx.tcx, id))
}
-pub fn register_foreign_fn(ccx: @mut CrateContext,
- sp: span,
- sym: ~str,
- node_id: ast::NodeId)
- -> ValueRef {
- let _icx = push_ctxt("foreign::register_foreign_fn");
+fn foreign_types_for_fn_ty(ccx: &mut CrateContext,
+ ty: ty::t) -> ForeignTypes {
+ let fn_sig = match ty::get(ty).sty {
+ ty::ty_bare_fn(ref fn_ty) => fn_ty.sig.clone(),
+ _ => ccx.sess.bug("foreign_types_for_fn_ty called on non-function type")
+ };
+ let llsig = foreign_signature(ccx, &fn_sig);
+ let ret_def = !ty::type_is_voidish(fn_sig.output);
+ let fn_ty = cabi::compute_abi_info(ccx,
+ llsig.llarg_tys,
+ llsig.llret_ty,
+ ret_def);
+ debug!("foreign_types_for_fn_ty(\
+ ty=%s, \
+ llsig=%s -> %s, \
+ fn_ty=%s -> %s, \
+ ret_def=%b)",
+ ty.repr(ccx.tcx),
+ ccx.tn.types_to_str(llsig.llarg_tys),
+ ccx.tn.type_to_str(llsig.llret_ty),
+ ccx.tn.types_to_str(fn_ty.arg_tys.map(|t| t.ty)),
+ ccx.tn.type_to_str(fn_ty.ret_ty.ty),
+ ret_def);
+
+ ForeignTypes {
+ fn_sig: fn_sig,
+ llsig: llsig,
+ ret_def: ret_def,
+ fn_ty: fn_ty
+ }
+}
- let sym = Cell::new(sym);
+fn lltype_for_fn_from_foreign_types(tys: &ForeignTypes) -> Type {
+ let llargument_tys: ~[Type] =
+ tys.fn_ty.arg_tys.iter().map(|t| t.ty).collect();
+ let llreturn_ty = tys.fn_ty.ret_ty.ty;
+ Type::func(llargument_tys, &llreturn_ty)
+}
+
+pub fn lltype_for_foreign_fn(ccx: &mut CrateContext, ty: ty::t) -> Type {
+ let fn_types = foreign_types_for_fn_ty(ccx, ty);
+ lltype_for_fn_from_foreign_types(&fn_types)
+}
- let tys = shim_types(ccx, node_id);
- do tys.fn_ty.decl_fn |fnty| {
- register_fn_llvmty(ccx, sp, sym.take(), node_id, lib::llvm::CCallConv, fnty)
+fn add_argument_attributes(tys: &ForeignTypes,
+ llfn: ValueRef) {
+ for (i, a) in tys.fn_ty.attrs.iter().enumerate() {
+ match *a {
+ Some(attr) => {
+ let llarg = get_param(llfn, i);
+ unsafe {
+ llvm::LLVMAddAttribute(llarg, attr as c_uint);
+ }
+ }
+ None => ()
+ }
}
}
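The attribute loop above has a simple shape: enumerate the per-argument slots and act only on the `Some` entries. The same shape in modern Rust (`apply_attrs` and its `u32` attribute codes are illustrative only):

```rust
// Collect (argument index, attribute) pairs, skipping arguments
// that carry no attribute.
fn apply_attrs(attrs: &[Option<u32>]) -> Vec<(usize, u32)> {
    attrs.iter()
        .enumerate()
        .filter_map(|(i, a)| a.map(|attr| (i, attr)))
        .collect()
}
```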
--- /dev/null
+// Copyright 2012-2013 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use back::{abi};
+use lib::llvm::{SequentiallyConsistent, Acquire, Release, Xchg};
+use lib::llvm::{ValueRef, Pointer};
+use lib;
+use middle::trans::base::*;
+use middle::trans::build::*;
+use middle::trans::callee::*;
+use middle::trans::common::*;
+use middle::trans::datum::*;
+use middle::trans::type_of::*;
+use middle::trans::type_of;
+use middle::trans::expr::Ignore;
+use middle::trans::machine;
+use middle::trans::glue;
+use middle::ty::FnSig;
+use middle::ty;
+use syntax::ast;
+use syntax::ast_map;
+use syntax::attr;
+use syntax::opt_vec;
+use util::ppaux::{ty_to_str};
+use middle::trans::machine::llsize_of;
+use middle::trans::type_::Type;
+
+pub fn trans_intrinsic(ccx: @mut CrateContext,
+ decl: ValueRef,
+ item: &ast::foreign_item,
+ path: ast_map::path,
+ substs: @param_substs,
+ attributes: &[ast::Attribute],
+ ref_id: Option<ast::NodeId>) {
+ debug!("trans_intrinsic(item.ident=%s)", ccx.sess.str_of(item.ident));
+
+ fn simple_llvm_intrinsic(bcx: @mut Block, name: &'static str, num_args: uint) {
+ assert!(num_args <= 4);
+ let mut args = [0 as ValueRef, ..4];
+ let first_real_arg = bcx.fcx.arg_pos(0u);
+ for i in range(0u, num_args) {
+ args[i] = get_param(bcx.fcx.llfn, first_real_arg + i);
+ }
+ let llfn = bcx.ccx().intrinsics.get_copy(&name);
+ Ret(bcx, Call(bcx, llfn, args.slice(0, num_args)));
+ }
+
+ fn with_overflow_instrinsic(bcx: @mut Block, name: &'static str) {
+ let first_real_arg = bcx.fcx.arg_pos(0u);
+ let a = get_param(bcx.fcx.llfn, first_real_arg);
+ let b = get_param(bcx.fcx.llfn, first_real_arg + 1);
+ let llfn = bcx.ccx().intrinsics.get_copy(&name);
+
+ // convert `i1` to a `bool`, and write to the out parameter
+ let val = Call(bcx, llfn, [a, b]);
+ let result = ExtractValue(bcx, val, 0);
+ let overflow = ZExt(bcx, ExtractValue(bcx, val, 1), Type::bool());
+ let retptr = get_param(bcx.fcx.llfn, bcx.fcx.out_arg_pos());
+ let ret = Load(bcx, retptr);
+ let ret = InsertValue(bcx, ret, result, 0);
+ let ret = InsertValue(bcx, ret, overflow, 1);
+ Store(bcx, ret, retptr);
+ RetVoid(bcx)
+ }
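The `llvm.*.with.overflow` intrinsics wrapped above return a `(result, overflowed)` pair, with the `i1` flag zero-extended to a `bool` before being written into the out parameter. Modern Rust exposes the same pair directly on the integer types; a minimal sketch:

```rust
// Same (result, overflowed) shape as llvm.sadd.with.overflow.i8.
fn add_with_overflow(a: i8, b: i8) -> (i8, bool) {
    a.overflowing_add(b)
}
```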
+
+ fn memcpy_intrinsic(bcx: @mut Block, name: &'static str, tp_ty: ty::t, sizebits: u8) {
+ let ccx = bcx.ccx();
+ let lltp_ty = type_of::type_of(ccx, tp_ty);
+ let align = C_i32(machine::llalign_of_min(ccx, lltp_ty) as i32);
+ let size = match sizebits {
+ 32 => C_i32(machine::llsize_of_real(ccx, lltp_ty) as i32),
+ 64 => C_i64(machine::llsize_of_real(ccx, lltp_ty) as i64),
+ _ => ccx.sess.fatal("Invalid value for sizebits")
+ };
+
+ let decl = bcx.fcx.llfn;
+ let first_real_arg = bcx.fcx.arg_pos(0u);
+ let dst_ptr = PointerCast(bcx, get_param(decl, first_real_arg), Type::i8p());
+ let src_ptr = PointerCast(bcx, get_param(decl, first_real_arg + 1), Type::i8p());
+ let count = get_param(decl, first_real_arg + 2);
+ let volatile = C_i1(false);
+ let llfn = bcx.ccx().intrinsics.get_copy(&name);
+ Call(bcx, llfn, [dst_ptr, src_ptr, Mul(bcx, size, count), align, volatile]);
+ RetVoid(bcx);
+ }
+
+ fn memset_intrinsic(bcx: @mut Block, name: &'static str, tp_ty: ty::t, sizebits: u8) {
+ let ccx = bcx.ccx();
+ let lltp_ty = type_of::type_of(ccx, tp_ty);
+ let align = C_i32(machine::llalign_of_min(ccx, lltp_ty) as i32);
+ let size = match sizebits {
+ 32 => C_i32(machine::llsize_of_real(ccx, lltp_ty) as i32),
+ 64 => C_i64(machine::llsize_of_real(ccx, lltp_ty) as i64),
+ _ => ccx.sess.fatal("Invalid value for sizebits")
+ };
+
+ let decl = bcx.fcx.llfn;
+ let first_real_arg = bcx.fcx.arg_pos(0u);
+ let dst_ptr = PointerCast(bcx, get_param(decl, first_real_arg), Type::i8p());
+ let val = get_param(decl, first_real_arg + 1);
+ let count = get_param(decl, first_real_arg + 2);
+ let volatile = C_i1(false);
+ let llfn = bcx.ccx().intrinsics.get_copy(&name);
+ Call(bcx, llfn, [dst_ptr, val, Mul(bcx, size, count), align, volatile]);
+ RetVoid(bcx);
+ }
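Both `memcpy_intrinsic` and `memset_intrinsic` above turn an element count into a byte count by multiplying by the element size before invoking the LLVM intrinsic (the `Mul(bcx, size, count)` operand). `std::ptr::copy_nonoverlapping` in modern Rust counts in elements and performs that scaling internally; a sketch:

```rust
// Copy src into the front of dst, element-wise.
fn copy_elems<T: Copy>(src: &[T], dst: &mut [T]) {
    assert!(dst.len() >= src.len());
    unsafe {
        // The count is in elements; the bytes copied are
        // src.len() * size_of::<T>(), matching Mul(size, count) above.
        std::ptr::copy_nonoverlapping(src.as_ptr(), dst.as_mut_ptr(), src.len());
    }
}
```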
+
+ fn count_zeros_intrinsic(bcx: @mut Block, name: &'static str) {
+ let x = get_param(bcx.fcx.llfn, bcx.fcx.arg_pos(0u));
+ let y = C_i1(false);
+ let llfn = bcx.ccx().intrinsics.get_copy(&name);
+ Ret(bcx, Call(bcx, llfn, [x, y]));
+ }
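`count_zeros_intrinsic` passes `C_i1(false)` as the second argument, asking `llvm.ctlz`/`llvm.cttz` to be fully defined on a zero input rather than producing undef. Modern Rust's integer methods have the same defined-on-zero behavior:

```rust
// Leading/trailing zero counts, defined even for x == 0.
fn ctlz32(x: u32) -> u32 { x.leading_zeros() }
fn cttz32(x: u32) -> u32 { x.trailing_zeros() }
```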
+
+ let output_type = ty::ty_fn_ret(ty::node_id_to_type(ccx.tcx, item.id));
+
+ let fcx = new_fn_ctxt_w_id(ccx,
+ path,
+ decl,
+ item.id,
+ output_type,
+ true,
+ Some(substs),
+ None,
+ Some(item.span));
+
+ set_always_inline(fcx.llfn);
+
+ // Set the fixed stack segment flag if necessary.
+ if attr::contains_name(attributes, "fixed_stack_segment") {
+ set_fixed_stack_segment(fcx.llfn);
+ }
+
+ let mut bcx = fcx.entry_bcx.unwrap();
+ let first_real_arg = fcx.arg_pos(0u);
+
+ let nm = ccx.sess.str_of(item.ident);
+ let name = nm.as_slice();
+
+ // This requires that atomic intrinsics follow a specific naming pattern:
+ // "atomic_<operation>[_<ordering>]", and no ordering means SeqCst
+ if name.starts_with("atomic_") {
+ let split : ~[&str] = name.split_iter('_').collect();
+ assert!(split.len() >= 2, "Atomic intrinsic not in correct format");
+ let order = if split.len() == 2 {
+ lib::llvm::SequentiallyConsistent
+ } else {
+ match split[2] {
+ "relaxed" => lib::llvm::Monotonic,
+ "acq" => lib::llvm::Acquire,
+ "rel" => lib::llvm::Release,
+ "acqrel" => lib::llvm::AcquireRelease,
+ _ => ccx.sess.fatal("Unknown ordering in atomic intrinsic")
+ }
+ };
+
+ match split[1] {
+ "cxchg" => {
+ let old = AtomicCmpXchg(bcx, get_param(decl, first_real_arg),
+ get_param(decl, first_real_arg + 1u),
+ get_param(decl, first_real_arg + 2u),
+ order);
+ Ret(bcx, old);
+ }
+ "load" => {
+ let old = AtomicLoad(bcx, get_param(decl, first_real_arg),
+ order);
+ Ret(bcx, old);
+ }
+ "store" => {
+ AtomicStore(bcx, get_param(decl, first_real_arg + 1u),
+ get_param(decl, first_real_arg),
+ order);
+ RetVoid(bcx);
+ }
+ "fence" => {
+ AtomicFence(bcx, order);
+ RetVoid(bcx);
+ }
+ op => {
+ // These are all AtomicRMW ops
+ let atom_op = match op {
+ "xchg" => lib::llvm::Xchg,
+ "xadd" => lib::llvm::Add,
+ "xsub" => lib::llvm::Sub,
+ "and" => lib::llvm::And,
+ "nand" => lib::llvm::Nand,
+ "or" => lib::llvm::Or,
+ "xor" => lib::llvm::Xor,
+ "max" => lib::llvm::Max,
+ "min" => lib::llvm::Min,
+ "umax" => lib::llvm::UMax,
+ "umin" => lib::llvm::UMin,
+ _ => ccx.sess.fatal("Unknown atomic operation")
+ };
+
+ let old = AtomicRMW(bcx, atom_op, get_param(decl, first_real_arg),
+ get_param(decl, first_real_arg + 1u),
+ order);
+ Ret(bcx, old);
+ }
+ }
+
+ fcx.cleanup();
+ return;
+ }
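The suffix parsing above defaults to sequential consistency when no ordering is named. A modern-Rust analogue of the same table (`parse_ordering` is illustrative, not the compiler's code):

```rust
use std::sync::atomic::Ordering;

// Map "atomic_<op>[_<ordering>]" to a memory ordering; no suffix means SeqCst.
fn parse_ordering(name: &str) -> Option<Ordering> {
    let parts: Vec<&str> = name.split('_').collect();
    if parts.len() < 2 || parts[0] != "atomic" {
        return None;
    }
    Some(match parts.get(2) {
        None => Ordering::SeqCst,
        Some(&"relaxed") => Ordering::Relaxed,
        Some(&"acq") => Ordering::Acquire,
        Some(&"rel") => Ordering::Release,
        Some(&"acqrel") => Ordering::AcqRel,
        Some(_) => return None, // unknown ordering suffix
    })
}
```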
+
+ match name {
+ "size_of" => {
+ let tp_ty = substs.tys[0];
+ let lltp_ty = type_of::type_of(ccx, tp_ty);
+ Ret(bcx, C_uint(ccx, machine::llsize_of_real(ccx, lltp_ty)));
+ }
+ "move_val" => {
+ // Create a datum reflecting the value being moved.
+ // Use `appropriate_mode` so that the datum is by ref
+ // if the value is non-immediate. Note that, with
+ // intrinsics, there are no argument cleanups to
+ // concern ourselves with.
+ let tp_ty = substs.tys[0];
+ let mode = appropriate_mode(ccx.tcx, tp_ty);
+ let src = Datum {val: get_param(decl, first_real_arg + 1u),
+ ty: tp_ty, mode: mode};
+ bcx = src.move_to(bcx, DROP_EXISTING,
+ get_param(decl, first_real_arg));
+ RetVoid(bcx);
+ }
+ "move_val_init" => {
+ // See comments for `"move_val"`.
+ let tp_ty = substs.tys[0];
+ let mode = appropriate_mode(ccx.tcx, tp_ty);
+ let src = Datum {val: get_param(decl, first_real_arg + 1u),
+ ty: tp_ty, mode: mode};
+ bcx = src.move_to(bcx, INIT, get_param(decl, first_real_arg));
+ RetVoid(bcx);
+ }
+ "min_align_of" => {
+ let tp_ty = substs.tys[0];
+ let lltp_ty = type_of::type_of(ccx, tp_ty);
+ Ret(bcx, C_uint(ccx, machine::llalign_of_min(ccx, lltp_ty)));
+ }
+ "pref_align_of" => {
+ let tp_ty = substs.tys[0];
+ let lltp_ty = type_of::type_of(ccx, tp_ty);
+ Ret(bcx, C_uint(ccx, machine::llalign_of_pref(ccx, lltp_ty)));
+ }
+ "get_tydesc" => {
+ let tp_ty = substs.tys[0];
+ let static_ti = get_tydesc(ccx, tp_ty);
+ glue::lazily_emit_all_tydesc_glue(ccx, static_ti);
+
+ // FIXME (#3730): ideally this shouldn't need a cast,
+ // but there's a circularity between translating rust types to llvm
+ // types and having a tydesc type available. So I can't directly access
+ // the llvm type of the intrinsic::TyDesc struct.
+ let userland_tydesc_ty = type_of::type_of(ccx, output_type);
+ let td = PointerCast(bcx, static_ti.tydesc, userland_tydesc_ty);
+ Ret(bcx, td);
+ }
+ "init" => {
+ let tp_ty = substs.tys[0];
+ let lltp_ty = type_of::type_of(ccx, tp_ty);
+ match bcx.fcx.llretptr {
+ Some(ptr) => { Store(bcx, C_null(lltp_ty), ptr); RetVoid(bcx); }
+ None if ty::type_is_nil(tp_ty) => RetVoid(bcx),
+ None => Ret(bcx, C_null(lltp_ty)),
+ }
+ }
+ "uninit" => {
+ // Do nothing; this is effectively a no-op
+ let retty = substs.tys[0];
+ if ty::type_is_immediate(ccx.tcx, retty) && !ty::type_is_nil(retty) {
+ unsafe {
+ Ret(bcx, lib::llvm::llvm::LLVMGetUndef(type_of(ccx, retty).to_ref()));
+ }
+ } else {
+ RetVoid(bcx)
+ }
+ }
+ "forget" => {
+ RetVoid(bcx);
+ }
+ "transmute" => {
+ let (in_type, out_type) = (substs.tys[0], substs.tys[1]);
+ let llintype = type_of::type_of(ccx, in_type);
+ let llouttype = type_of::type_of(ccx, out_type);
+
+ let in_type_size = machine::llbitsize_of_real(ccx, llintype);
+ let out_type_size = machine::llbitsize_of_real(ccx, llouttype);
+ if in_type_size != out_type_size {
+ let sp = match ccx.tcx.items.get_copy(&ref_id.unwrap()) {
+ ast_map::node_expr(e) => e.span,
+ _ => fail!("transmute has non-expr arg"),
+ };
+ let pluralize = |n| if 1u == n { "" } else { "s" };
+ ccx.sess.span_fatal(sp,
+ fmt!("transmute called on types with \
+ different sizes: %s (%u bit%s) to \
+ %s (%u bit%s)",
+ ty_to_str(ccx.tcx, in_type),
+ in_type_size,
+ pluralize(in_type_size),
+ ty_to_str(ccx.tcx, out_type),
+ out_type_size,
+ pluralize(out_type_size)));
+ }
+
+ if !ty::type_is_voidish(out_type) {
+ let llsrcval = get_param(decl, first_real_arg);
+ if ty::type_is_immediate(ccx.tcx, in_type) {
+ match fcx.llretptr {
+ Some(llretptr) => {
+ Store(bcx, llsrcval, PointerCast(bcx, llretptr, llintype.ptr_to()));
+ RetVoid(bcx);
+ }
+ None => match (llintype.kind(), llouttype.kind()) {
+ (Pointer, other) | (other, Pointer) if other != Pointer => {
+ let tmp = Alloca(bcx, llouttype, "");
+ Store(bcx, llsrcval, PointerCast(bcx, tmp, llintype.ptr_to()));
+ Ret(bcx, Load(bcx, tmp));
+ }
+ _ => Ret(bcx, BitCast(bcx, llsrcval, llouttype))
+ }
+ }
+ } else if ty::type_is_immediate(ccx.tcx, out_type) {
+ let llsrcptr = PointerCast(bcx, llsrcval, llouttype.ptr_to());
+ Ret(bcx, Load(bcx, llsrcptr));
+ } else {
+ // NB: Do not use a Load and Store here. This causes massive
+ // code bloat when `transmute` is used on large structural
+ // types.
+ let lldestptr = fcx.llretptr.unwrap();
+ let lldestptr = PointerCast(bcx, lldestptr, Type::i8p());
+ let llsrcptr = PointerCast(bcx, llsrcval, Type::i8p());
+
+ let llsize = llsize_of(ccx, llintype);
+ call_memcpy(bcx, lldestptr, llsrcptr, llsize, 1);
+ RetVoid(bcx);
+ };
+ } else {
+ RetVoid(bcx);
+ }
+ }
+ "needs_drop" => {
+ let tp_ty = substs.tys[0];
+ Ret(bcx, C_bool(ty::type_needs_drop(ccx.tcx, tp_ty)));
+ }
+ "contains_managed" => {
+ let tp_ty = substs.tys[0];
+ Ret(bcx, C_bool(ty::type_contents(ccx.tcx, tp_ty).contains_managed()));
+ }
+ "visit_tydesc" => {
+ let td = get_param(decl, first_real_arg);
+ let visitor = get_param(decl, first_real_arg + 1u);
+ let td = PointerCast(bcx, td, ccx.tydesc_type.ptr_to());
+ glue::call_tydesc_glue_full(bcx, visitor, td,
+ abi::tydesc_field_visit_glue, None);
+ RetVoid(bcx);
+ }
+ "frame_address" => {
+ let frameaddress = ccx.intrinsics.get_copy(& &"llvm.frameaddress");
+ let frameaddress_val = Call(bcx, frameaddress, [C_i32(0i32)]);
+ let star_u8 = ty::mk_imm_ptr(
+ bcx.tcx(),
+ ty::mk_mach_uint(ast::ty_u8));
+ let fty = ty::mk_closure(bcx.tcx(), ty::ClosureTy {
+ purity: ast::impure_fn,
+ sigil: ast::BorrowedSigil,
+ onceness: ast::Many,
+ region: ty::re_bound(ty::br_anon(0)),
+ bounds: ty::EmptyBuiltinBounds(),
+ sig: FnSig {
+ bound_lifetime_names: opt_vec::Empty,
+ inputs: ~[ star_u8 ],
+ output: ty::mk_nil()
+ }
+ });
+ let datum = Datum {val: get_param(decl, first_real_arg),
+ mode: ByRef(ZeroMem), ty: fty};
+ let arg_vals = ~[frameaddress_val];
+ bcx = trans_call_inner(
+ bcx, None, fty, ty::mk_nil(),
+ |bcx| Callee {bcx: bcx, data: Closure(datum)},
+ ArgVals(arg_vals), Some(Ignore), DontAutorefArg).bcx;
+ RetVoid(bcx);
+ }
+ "morestack_addr" => {
+ // XXX This is a hack to grab the address of this particular
+ // native function. There should be a general in-language
+ // way to do this
+ let llfty = type_of_rust_fn(bcx.ccx(), [], ty::mk_nil());
+ let morestack_addr = decl_cdecl_fn(
+ bcx.ccx().llmod, "__morestack", llfty);
+ let morestack_addr = PointerCast(bcx, morestack_addr, Type::nil().ptr_to());
+ Ret(bcx, morestack_addr);
+ }
+ "offset" => {
+ let ptr = get_param(decl, first_real_arg);
+ let offset = get_param(decl, first_real_arg + 1);
+ Ret(bcx, GEP(bcx, ptr, [offset]));
+ }
+ "offset_inbounds" => {
+ let ptr = get_param(decl, first_real_arg);
+ let offset = get_param(decl, first_real_arg + 1);
+ Ret(bcx, InBoundsGEP(bcx, ptr, [offset]));
+ }
+ "memcpy32" => memcpy_intrinsic(bcx, "llvm.memcpy.p0i8.p0i8.i32", substs.tys[0], 32),
+ "memcpy64" => memcpy_intrinsic(bcx, "llvm.memcpy.p0i8.p0i8.i64", substs.tys[0], 64),
+ "memmove32" => memcpy_intrinsic(bcx, "llvm.memmove.p0i8.p0i8.i32", substs.tys[0], 32),
+ "memmove64" => memcpy_intrinsic(bcx, "llvm.memmove.p0i8.p0i8.i64", substs.tys[0], 64),
+ "memset32" => memset_intrinsic(bcx, "llvm.memset.p0i8.i32", substs.tys[0], 32),
+ "memset64" => memset_intrinsic(bcx, "llvm.memset.p0i8.i64", substs.tys[0], 64),
+ "sqrtf32" => simple_llvm_intrinsic(bcx, "llvm.sqrt.f32", 1),
+ "sqrtf64" => simple_llvm_intrinsic(bcx, "llvm.sqrt.f64", 1),
+ "powif32" => simple_llvm_intrinsic(bcx, "llvm.powi.f32", 2),
+ "powif64" => simple_llvm_intrinsic(bcx, "llvm.powi.f64", 2),
+ "sinf32" => simple_llvm_intrinsic(bcx, "llvm.sin.f32", 1),
+ "sinf64" => simple_llvm_intrinsic(bcx, "llvm.sin.f64", 1),
+ "cosf32" => simple_llvm_intrinsic(bcx, "llvm.cos.f32", 1),
+ "cosf64" => simple_llvm_intrinsic(bcx, "llvm.cos.f64", 1),
+ "powf32" => simple_llvm_intrinsic(bcx, "llvm.pow.f32", 2),
+ "powf64" => simple_llvm_intrinsic(bcx, "llvm.pow.f64", 2),
+ "expf32" => simple_llvm_intrinsic(bcx, "llvm.exp.f32", 1),
+ "expf64" => simple_llvm_intrinsic(bcx, "llvm.exp.f64", 1),
+ "exp2f32" => simple_llvm_intrinsic(bcx, "llvm.exp2.f32", 1),
+ "exp2f64" => simple_llvm_intrinsic(bcx, "llvm.exp2.f64", 1),
+ "logf32" => simple_llvm_intrinsic(bcx, "llvm.log.f32", 1),
+ "logf64" => simple_llvm_intrinsic(bcx, "llvm.log.f64", 1),
+ "log10f32" => simple_llvm_intrinsic(bcx, "llvm.log10.f32", 1),
+ "log10f64" => simple_llvm_intrinsic(bcx, "llvm.log10.f64", 1),
+ "log2f32" => simple_llvm_intrinsic(bcx, "llvm.log2.f32", 1),
+ "log2f64" => simple_llvm_intrinsic(bcx, "llvm.log2.f64", 1),
+ "fmaf32" => simple_llvm_intrinsic(bcx, "llvm.fma.f32", 3),
+ "fmaf64" => simple_llvm_intrinsic(bcx, "llvm.fma.f64", 3),
+ "fabsf32" => simple_llvm_intrinsic(bcx, "llvm.fabs.f32", 1),
+ "fabsf64" => simple_llvm_intrinsic(bcx, "llvm.fabs.f64", 1),
+ "floorf32" => simple_llvm_intrinsic(bcx, "llvm.floor.f32", 1),
+ "floorf64" => simple_llvm_intrinsic(bcx, "llvm.floor.f64", 1),
+ "ceilf32" => simple_llvm_intrinsic(bcx, "llvm.ceil.f32", 1),
+ "ceilf64" => simple_llvm_intrinsic(bcx, "llvm.ceil.f64", 1),
+ "truncf32" => simple_llvm_intrinsic(bcx, "llvm.trunc.f32", 1),
+ "truncf64" => simple_llvm_intrinsic(bcx, "llvm.trunc.f64", 1),
+ "ctpop8" => simple_llvm_intrinsic(bcx, "llvm.ctpop.i8", 1),
+ "ctpop16" => simple_llvm_intrinsic(bcx, "llvm.ctpop.i16", 1),
+ "ctpop32" => simple_llvm_intrinsic(bcx, "llvm.ctpop.i32", 1),
+ "ctpop64" => simple_llvm_intrinsic(bcx, "llvm.ctpop.i64", 1),
+ "ctlz8" => count_zeros_intrinsic(bcx, "llvm.ctlz.i8"),
+ "ctlz16" => count_zeros_intrinsic(bcx, "llvm.ctlz.i16"),
+ "ctlz32" => count_zeros_intrinsic(bcx, "llvm.ctlz.i32"),
+ "ctlz64" => count_zeros_intrinsic(bcx, "llvm.ctlz.i64"),
+ "cttz8" => count_zeros_intrinsic(bcx, "llvm.cttz.i8"),
+ "cttz16" => count_zeros_intrinsic(bcx, "llvm.cttz.i16"),
+ "cttz32" => count_zeros_intrinsic(bcx, "llvm.cttz.i32"),
+ "cttz64" => count_zeros_intrinsic(bcx, "llvm.cttz.i64"),
+ "bswap16" => simple_llvm_intrinsic(bcx, "llvm.bswap.i16", 1),
+ "bswap32" => simple_llvm_intrinsic(bcx, "llvm.bswap.i32", 1),
+ "bswap64" => simple_llvm_intrinsic(bcx, "llvm.bswap.i64", 1),
+
+ "i8_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.sadd.with.overflow.i8"),
+ "i16_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.sadd.with.overflow.i16"),
+ "i32_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.sadd.with.overflow.i32"),
+ "i64_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.sadd.with.overflow.i64"),
+
+ "u8_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.uadd.with.overflow.i8"),
+ "u16_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.uadd.with.overflow.i16"),
+ "u32_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.uadd.with.overflow.i32"),
+ "u64_add_with_overflow" => with_overflow_instrinsic(bcx, "llvm.uadd.with.overflow.i64"),
+
+ "i8_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.ssub.with.overflow.i8"),
+ "i16_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.ssub.with.overflow.i16"),
+ "i32_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.ssub.with.overflow.i32"),
+ "i64_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.ssub.with.overflow.i64"),
+
+ "u8_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.usub.with.overflow.i8"),
+ "u16_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.usub.with.overflow.i16"),
+ "u32_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.usub.with.overflow.i32"),
+ "u64_sub_with_overflow" => with_overflow_instrinsic(bcx, "llvm.usub.with.overflow.i64"),
+
+ "i8_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.smul.with.overflow.i8"),
+ "i16_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.smul.with.overflow.i16"),
+ "i32_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.smul.with.overflow.i32"),
+ "i64_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.smul.with.overflow.i64"),
+
+ "u8_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.umul.with.overflow.i8"),
+ "u16_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.umul.with.overflow.i16"),
+ "u32_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.umul.with.overflow.i32"),
+ "u64_mul_with_overflow" => with_overflow_instrinsic(bcx, "llvm.umul.with.overflow.i64"),
+
+ _ => {
+ // Could we make this an enum rather than a string? Does it get
+ // checked earlier?
+ ccx.sess.span_bug(item.span, "unknown intrinsic");
+ }
+ }
+ fcx.cleanup();
+}
pub mod cabi_arm;
pub mod cabi_mips;
pub mod foreign;
+pub mod intrinsic;
pub mod reflect;
pub mod debuginfo;
pub mod type_use;
use middle::trans::base;
use middle::trans::common::*;
use middle::trans::datum;
-use middle::trans::foreign;
use middle::trans::machine;
use middle::trans::meth;
use middle::trans::type_of::type_of_fn_from_ty;
use middle::trans::type_of;
use middle::trans::type_use;
+use middle::trans::intrinsic;
use middle::ty;
use middle::ty::{FnSig};
use middle::typeck;
}
ast_map::node_foreign_item(i, _, _, _) => {
let d = mk_lldecl();
- foreign::trans_intrinsic(ccx, d, i, pt, psubsts, i.attrs,
- ref_id);
+ intrinsic::trans_intrinsic(ccx, d, i, pt, psubsts, i.attrs,
+ ref_id);
d
}
ast_map::node_variant(ref v, enum_item, _) => {
sub_path,
"get_disr");
- let llfty = type_of_fn(ccx, [opaqueptrty], ty::mk_int());
+ let llfty = type_of_rust_fn(ccx, [opaqueptrty], ty::mk_int());
let llfdecl = decl_internal_cdecl_fn(ccx.llmod, sym, llfty);
let fcx = new_fn_ctxt(ccx,
~[],
use middle::trans::adt;
use middle::trans::common::*;
+use middle::trans::foreign;
use middle::ty;
use util::ppaux;
use syntax::ast;
use syntax::opt_vec;
-pub fn arg_is_indirect(ccx: &CrateContext, arg_ty: &ty::t) -> bool {
- !ty::type_is_immediate(ccx.tcx, *arg_ty)
+pub fn arg_is_indirect(ccx: &CrateContext, arg_ty: ty::t) -> bool {
+ !ty::type_is_immediate(ccx.tcx, arg_ty)
}
-pub fn type_of_explicit_arg(ccx: &mut CrateContext, arg_ty: &ty::t) -> Type {
- let llty = type_of(ccx, *arg_ty);
+pub fn return_uses_outptr(tcx: ty::ctxt, ty: ty::t) -> bool {
+ !ty::type_is_immediate(tcx, ty)
+}
+
+pub fn type_of_explicit_arg(ccx: &mut CrateContext, arg_ty: ty::t) -> Type {
+ let llty = type_of(ccx, arg_ty);
if arg_is_indirect(ccx, arg_ty) {
llty.ptr_to()
} else {
pub fn type_of_explicit_args(ccx: &mut CrateContext,
inputs: &[ty::t]) -> ~[Type] {
- inputs.map(|arg_ty| type_of_explicit_arg(ccx, arg_ty))
+ inputs.map(|&arg_ty| type_of_explicit_arg(ccx, arg_ty))
}
-pub fn type_of_fn(cx: &mut CrateContext, inputs: &[ty::t], output: ty::t) -> Type {
+pub fn type_of_rust_fn(cx: &mut CrateContext,
+ inputs: &[ty::t],
+ output: ty::t) -> Type {
let mut atys: ~[Type] = ~[];
// Arg 0: Output pointer.
// (if the output type is non-immediate)
- let output_is_immediate = ty::type_is_immediate(cx.tcx, output);
+ let use_out_pointer = return_uses_outptr(cx.tcx, output);
let lloutputtype = type_of(cx, output);
- if !output_is_immediate {
+ if use_out_pointer {
atys.push(lloutputtype.ptr_to());
}
atys.push_all(type_of_explicit_args(cx, inputs));
// Use the output as the actual return value if it's immediate.
- if output_is_immediate && !ty::type_is_nil(output) {
+ if !use_out_pointer && !ty::type_is_voidish(output) {
Type::func(atys, &lloutputtype)
} else {
Type::func(atys, &Type::void())
// Given a function type and a count of ty params, construct an llvm type
pub fn type_of_fn_from_ty(cx: &mut CrateContext, fty: ty::t) -> Type {
- match ty::get(fty).sty {
- ty::ty_closure(ref f) => type_of_fn(cx, f.sig.inputs, f.sig.output),
- ty::ty_bare_fn(ref f) => type_of_fn(cx, f.sig.inputs, f.sig.output),
+ return match ty::get(fty).sty {
+ ty::ty_closure(ref f) => {
+ type_of_rust_fn(cx, f.sig.inputs, f.sig.output)
+ }
+ ty::ty_bare_fn(ref f) => {
+ if f.abis.is_rust() || f.abis.is_intrinsic() {
+ type_of_rust_fn(cx, f.sig.inputs, f.sig.output)
+ } else {
+ foreign::lltype_for_foreign_fn(cx, fty)
+ }
+ }
_ => {
cx.sess.bug("type_of_fn_from_ty given non-closure, non-bare-fn")
}
- }
+ };
}
// A "sizing type" is an LLVM type, the size and alignment of which are
Type::array(&type_of(cx, mt.ty), n as u64)
}
- ty::ty_bare_fn(_) => type_of_fn_from_ty(cx, t).ptr_to(),
+ ty::ty_bare_fn(_) => {
+ type_of_fn_from_ty(cx, t).ptr_to()
+ }
ty::ty_closure(_) => {
let ty = type_of_fn_from_ty(cx, t);
Type::func_pair(cx, &ty)
// Type utilities
+pub fn type_is_voidish(ty: t) -> bool {
+ //! "nil" and "bot" are void types in that they represent 0 bits of information
+ type_is_nil(ty) || type_is_bot(ty)
+}
+
pub fn type_is_nil(ty: t) -> bool { get(ty).sty == ty_nil }
pub fn type_is_bot(ty: t) -> bool {
#[license = "MIT/ASL2"];
#[crate_type = "lib"];
+// Rustc tasks always run on a fixed_stack_segment, so code in this
+// module can call C functions (in particular, LLVM functions) with
+// impunity.
+#[allow(cstack)];
+
extern mod extra;
extern mod syntax;
pub mod reachable;
pub mod graph;
pub mod cfg;
+ pub mod stack_check;
}
pub mod front {
ty_to_str(tcx, *self)
}
}
+
+impl Repr for AbiSet {
+ fn repr(&self, _tcx: ctxt) -> ~str {
+ self.to_str()
+ }
+}
+
+impl UserString for AbiSet {
+ fn user_string(&self, _tcx: ctxt) -> ~str {
+ self.to_str()
+ }
+}
}
pub fn main() {
+ #[fixed_stack_segment]; #[inline(never)];
+
let args = os::args();
let input = io::stdin();
let out = io::stdout();
#[cfg(windows)]
pub fn link_exe(_src: &Path, _dest: &Path) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
+
/* FIXME (#1768): Investigate how to do this on win32
Node wraps symlinks by having a .bat,
but that won't work with minGW. */
#[cfg(target_os = "freebsd")]
#[cfg(target_os = "macos")]
pub fn link_exe(src: &Path, dest: &Path) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
+
use std::c_str::ToCStr;
use std::libc;
///
/// Fails if the CString is null.
pub fn as_bytes<'a>(&'a self) -> &'a [u8] {
+ #[fixed_stack_segment]; #[inline(never)];
if self.buf.is_null() { fail!("CString is null!"); }
unsafe {
let len = libc::strlen(self.buf) as uint;
impl Drop for CString {
fn drop(&self) {
+ #[fixed_stack_segment]; #[inline(never)];
if self.owns_buffer_ {
unsafe {
libc::free(self.buf as *libc::c_void)
impl<'self> ToCStr for &'self [u8] {
fn to_c_str(&self) -> CString {
+ #[fixed_stack_segment]; #[inline(never)];
let mut cs = unsafe { self.to_c_str_unchecked() };
do cs.with_mut_ref |buf| {
for i in range(0, self.len()) {
}
unsafe fn to_c_str_unchecked(&self) -> CString {
+ #[fixed_stack_segment]; #[inline(never)];
do self.as_imm_buf |self_buf, self_len| {
let buf = libc::malloc(self_len as libc::size_t + 1) as *mut u8;
if buf.is_null() {
#[test]
fn test_unwrap() {
+ #[fixed_stack_segment]; #[inline(never)];
+
let c_str = "hello".to_c_str();
unsafe { libc::free(c_str.unwrap() as *libc::c_void) }
}
#[test]
fn test_with_ref() {
+ #[fixed_stack_segment]; #[inline(never)];
+
let c_str = "hello".to_c_str();
let len = unsafe { c_str.with_ref(|buf| libc::strlen(buf)) };
assert!(!c_str.is_null());
impl Reader for *libc::FILE {
fn read(&self, bytes: &mut [u8], len: uint) -> uint {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
do bytes.as_mut_buf |buf_p, buf_len| {
assert!(buf_len >= len);
}
}
fn read_byte(&self) -> int {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
libc::fgetc(*self) as int
}
}
fn eof(&self) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
return libc::feof(*self) != 0 as c_int;
}
}
fn seek(&self, offset: int, whence: SeekStyle) {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
assert!(libc::fseek(*self,
offset as c_long,
}
}
fn tell(&self) -> uint {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
return libc::ftell(*self) as uint;
}
impl Drop for FILERes {
fn drop(&self) {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
libc::fclose(self.f);
}
* ~~~
*/
pub fn stdin() -> @Reader {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
@rustrt::rust_get_stdin() as @Reader
}
}
pub fn file_reader(path: &Path) -> Result<@Reader, ~str> {
+ #[fixed_stack_segment]; #[inline(never)];
+
let f = do path.with_c_str |pathbuf| {
do "rb".with_c_str |modebuf| {
unsafe { libc::fopen(pathbuf, modebuf as *libc::c_char) }
impl Writer for *libc::FILE {
fn write(&self, v: &[u8]) {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
do v.as_imm_buf |vbuf, len| {
let nout = libc::fwrite(vbuf as *c_void,
}
}
fn seek(&self, offset: int, whence: SeekStyle) {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
assert!(libc::fseek(*self,
offset as c_long,
}
}
fn tell(&self) -> uint {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
libc::ftell(*self) as uint
}
}
fn flush(&self) -> int {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
libc::fflush(*self) as int
}
}
fn get_type(&self) -> WriterType {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let fd = libc::fileno(*self);
if libc::isatty(fd) == 0 { File }
impl Writer for fd_t {
fn write(&self, v: &[u8]) {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let mut count = 0u;
do v.as_imm_buf |vbuf, len| {
}
fn flush(&self) -> int { 0 }
fn get_type(&self) -> WriterType {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
if libc::isatty(*self) == 0 { File } else { Screen }
}
impl Drop for FdRes {
fn drop(&self) {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
libc::close(self.fd);
}
pub fn mk_file_writer(path: &Path, flags: &[FileFlag])
-> Result<@Writer, ~str> {
+ #[fixed_stack_segment]; #[inline(never)];
+
#[cfg(windows)]
fn wb() -> c_int {
(O_WRONLY | libc::consts::os::extra::O_BINARY) as c_int
// FIXME: fileflags // #2004
pub fn buffered_file_writer(path: &Path) -> Result<@Writer, ~str> {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let f = do path.with_c_str |pathbuf| {
do "w".with_c_str |modebuf| {
blk: &fn(v: Res<*libc::FILE>)) {
blk(Res::new(Arg {
val: file.f, opt_level: opt_level,
- fsync_fn: |file, l| {
- unsafe {
- os::fsync_fd(libc::fileno(*file), l) as int
- }
- }
+ fsync_fn: |file, l| fsync_fd(fileno(*file), l)
}));
+
+ fn fileno(stream: *libc::FILE) -> libc::c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+ unsafe { libc::fileno(stream) }
+ }
}
// fsync fd after executing blk
blk: &fn(v: Res<fd_t>)) {
blk(Res::new(Arg {
val: fd.fd, opt_level: opt_level,
- fsync_fn: |fd, l| os::fsync_fd(*fd, l) as int
+ fsync_fn: |fd, l| fsync_fd(*fd, l)
}));
}
+ fn fsync_fd(fd: libc::c_int, level: Level) -> int {
+ #[fixed_stack_segment]; #[inline(never)];
+
+ os::fsync_fd(fd, level) as int
+ }
+
// Type of objects that may want to fsync
pub trait FSyncable { fn fsync(&self, l: Level) -> int; }
// doesn't link it correctly on i686, so we're going
// through a C function that mysteriously does work.
pub unsafe fn opendir(dirname: *c_char) -> *DIR {
+ #[fixed_stack_segment]; #[inline(never)];
rust_opendir(dirname)
}
pub unsafe fn readdir(dirp: *DIR) -> *dirent_t {
+ #[fixed_stack_segment]; #[inline(never)];
rust_readdir(dirp)
}
use unstable::intrinsics;
$(
- #[inline]
+ #[inline] #[fixed_stack_segment] #[inline(never)]
pub fn $name($( $arg : $arg_ty ),*) -> $rv {
unsafe {
$bound_name($( $arg ),*)
use unstable::intrinsics;
$(
- #[inline]
+ #[inline] #[fixed_stack_segment] #[inline(never)]
pub fn $name($( $arg : $arg_ty ),*) -> $rv {
unsafe {
$bound_name($( $arg ),*)
/// Delegates to the libc close() function, returning the same return value.
pub fn close(fd: c_int) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
libc::close(fd)
}
static BUF_BYTES : uint = 2048u;
pub fn getcwd() -> Path {
+ #[fixed_stack_segment]; #[inline(never)];
let mut buf = [0 as libc::c_char, ..BUF_BYTES];
do buf.as_mut_buf |buf, len| {
unsafe {
pub fn fill_utf16_buf_and_decode(f: &fn(*mut u16, DWORD) -> DWORD)
-> Option<~str> {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let mut n = TMPBUF_SZ as DWORD;
let mut res = None;
}
}
+#[cfg(stage0)]
+mod macro_hack {
+#[macro_escape];
+macro_rules! externfn(
+ (fn $name:ident ()) => (
+ extern {
+ fn $name();
+ }
+ )
+)
+}
+
/*
Accessing environment variables is not generally threadsafe.
Serialize access through a global lock.
};
}
- extern {
- #[fast_ffi]
- fn rust_take_env_lock();
- #[fast_ffi]
- fn rust_drop_env_lock();
- }
+ externfn!(fn rust_take_env_lock());
+ externfn!(fn rust_drop_env_lock());
}
/// Returns a vector of (variable, value) pairs for all the environment
unsafe {
#[cfg(windows)]
unsafe fn get_env_pairs() -> ~[~str] {
+ #[fixed_stack_segment]; #[inline(never)];
+
use libc::funcs::extra::kernel32::{
GetEnvironmentStringsA,
FreeEnvironmentStringsA
}
#[cfg(unix)]
unsafe fn get_env_pairs() -> ~[~str] {
+ #[fixed_stack_segment]; #[inline(never)];
+
extern {
fn rust_env_pairs() -> **libc::c_char;
}
/// Fetches the environment variable `n` from the current process, returning
/// None if the variable isn't set.
pub fn getenv(n: &str) -> Option<~str> {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
do with_env_lock {
let s = do n.with_c_str |buf| {
/// Fetches the environment variable `n` from the current process, returning
/// None if the variable isn't set.
pub fn getenv(n: &str) -> Option<~str> {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
do with_env_lock {
use os::win32::{as_utf16_p, fill_utf16_buf_and_decode};
/// Sets the environment variable `n` to the value `v` for the currently running
/// process
pub fn setenv(n: &str, v: &str) {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
do with_env_lock {
do n.with_c_str |nbuf| {
/// Sets the environment variable `n` to the value `v` for the currently running
/// process
pub fn setenv(n: &str, v: &str) {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
do with_env_lock {
use os::win32::as_utf16_p;
pub fn unsetenv(n: &str) {
#[cfg(unix)]
fn _unsetenv(n: &str) {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
do with_env_lock {
do n.with_c_str |nbuf| {
}
#[cfg(windows)]
fn _unsetenv(n: &str) {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
do with_env_lock {
use os::win32::as_utf16_p;
}
pub fn fdopen(fd: c_int) -> *FILE {
+ #[fixed_stack_segment]; #[inline(never)];
do "r".with_c_str |modebuf| {
unsafe {
libc::fdopen(fd, modebuf)
#[cfg(windows)]
pub fn fsync_fd(fd: c_int, _level: io::fsync::Level) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
use libc::funcs::extra::msvcrt::*;
return commit(fd);
#[cfg(target_os = "linux")]
#[cfg(target_os = "android")]
pub fn fsync_fd(fd: c_int, level: io::fsync::Level) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
use libc::funcs::posix01::unistd::*;
match level {
#[cfg(target_os = "macos")]
pub fn fsync_fd(fd: c_int, level: io::fsync::Level) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
use libc::consts::os::extra::*;
use libc::funcs::posix88::fcntl::*;
#[cfg(target_os = "freebsd")]
pub fn fsync_fd(fd: c_int, _l: io::fsync::Level) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
use libc::funcs::posix01::unistd::*;
return fsync(fd);
#[cfg(unix)]
pub fn pipe() -> Pipe {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
let mut fds = Pipe {input: 0 as c_int,
out: 0 as c_int };
#[cfg(windows)]
pub fn pipe() -> Pipe {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
// Windows pipes work subtly differently than unix pipes, and their
// inheritance has to be handled in a different way that I do not
}
fn dup2(src: c_int, dst: c_int) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
libc::dup2(src, dst)
}
#[cfg(target_os = "freebsd")]
fn load_self() -> Option<~str> {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
use libc::funcs::bsd44::*;
use libc::consts::os::extra::*;
#[cfg(target_os = "linux")]
#[cfg(target_os = "android")]
fn load_self() -> Option<~str> {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
use libc::funcs::posix01::unistd::readlink;
#[cfg(target_os = "macos")]
fn load_self() -> Option<~str> {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
do fill_charp_buf() |buf, sz| {
let mut sz = sz as u32;
#[cfg(windows)]
fn load_self() -> Option<~str> {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
use os::win32::fill_utf16_buf_and_decode;
do fill_utf16_buf_and_decode() |buf, sz| {
/// Indicates whether a path represents a directory
pub fn path_is_dir(p: &Path) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
do p.with_c_str |buf| {
rustrt::rust_path_is_dir(buf) != 0 as c_int
/// Indicates whether a path exists
pub fn path_exists(p: &Path) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
do p.with_c_str |buf| {
rustrt::rust_path_exists(buf) != 0 as c_int
#[cfg(windows)]
fn mkdir(p: &Path, _mode: c_int) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
use os::win32::as_utf16_p;
// FIXME: turn mode into something useful? #2623
#[cfg(unix)]
fn mkdir(p: &Path, mode: c_int) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
do p.with_c_str |buf| {
unsafe {
libc::mkdir(buf, mode as libc::mode_t) == (0 as c_int)
#[cfg(target_os = "freebsd")]
#[cfg(target_os = "macos")]
unsafe fn get_list(p: &Path) -> ~[~str] {
+ #[fixed_stack_segment]; #[inline(never)];
use libc::{dirent_t};
use libc::{opendir, readdir, closedir};
extern {
}
#[cfg(windows)]
unsafe fn get_list(p: &Path) -> ~[~str] {
+ #[fixed_stack_segment]; #[inline(never)];
use libc::consts::os::extra::INVALID_HANDLE_VALUE;
use libc::{wcslen, free};
use libc::funcs::extra::kernel32::{
#[cfg(windows)]
fn rmdir(p: &Path) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
use os::win32::as_utf16_p;
return do as_utf16_p(p.to_str()) |buf| {
#[cfg(unix)]
fn rmdir(p: &Path) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
do p.with_c_str |buf| {
unsafe {
libc::rmdir(buf) == (0 as c_int)
#[cfg(windows)]
fn chdir(p: &Path) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
use os::win32::as_utf16_p;
return do as_utf16_p(p.to_str()) |buf| {
#[cfg(unix)]
fn chdir(p: &Path) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
do p.with_c_str |buf| {
unsafe {
libc::chdir(buf) == (0 as c_int)
#[cfg(windows)]
fn do_copy_file(from: &Path, to: &Path) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
use os::win32::as_utf16_p;
return do as_utf16_p(from.to_str()) |fromp| {
#[cfg(unix)]
fn do_copy_file(from: &Path, to: &Path) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
let istream = do from.with_c_str |fromp| {
do "rb".with_c_str |modebuf| {
#[cfg(windows)]
fn unlink(p: &Path) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
use os::win32::as_utf16_p;
return do as_utf16_p(p.to_str()) |buf| {
#[cfg(unix)]
fn unlink(p: &Path) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
do p.with_c_str |buf| {
libc::unlink(buf) == (0 as c_int)
#[cfg(target_os = "macos")]
#[cfg(target_os = "freebsd")]
fn errno_location() -> *c_int {
+ #[fixed_stack_segment]; #[inline(never)];
#[nolink]
extern {
fn __error() -> *c_int;
#[cfg(target_os = "linux")]
#[cfg(target_os = "android")]
fn errno_location() -> *c_int {
+ #[fixed_stack_segment]; #[inline(never)];
#[nolink]
extern {
fn __errno_location() -> *c_int;
#[cfg(windows)]
/// Returns the platform-specific value of errno
pub fn errno() -> uint {
+ #[fixed_stack_segment]; #[inline(never)];
use libc::types::os::arch::extra::DWORD;
#[link_name = "kernel32"]
#[cfg(target_os = "freebsd")]
fn strerror_r(errnum: c_int, buf: *mut c_char, buflen: size_t)
-> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
#[nolink]
extern {
fn strerror_r(errnum: c_int, buf: *mut c_char, buflen: size_t)
// So we just use __xpg_strerror_r which is always POSIX compliant
#[cfg(target_os = "linux")]
fn strerror_r(errnum: c_int, buf: *mut c_char, buflen: size_t) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
#[nolink]
extern {
fn __xpg_strerror_r(errnum: c_int,
#[cfg(windows)]
fn strerror() -> ~str {
+ #[fixed_stack_segment]; #[inline(never)];
+
use libc::types::os::arch::extra::DWORD;
use libc::types::os::arch::extra::LPSTR;
use libc::types::os::arch::extra::LPVOID;
*/
#[cfg(target_os = "macos")]
pub fn real_args() -> ~[~str] {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let (argc, argv) = (*_NSGetArgc() as c_int,
*_NSGetArgv() as **c_char);
#[cfg(windows)]
pub fn real_args() -> ~[~str] {
+ #[fixed_stack_segment]; #[inline(never)];
+
let mut nArgs: c_int = 0;
let lpArgCount: *mut c_int = &mut nArgs;
let lpCmdLine = unsafe { GetCommandLineW() };
#[cfg(target_os = "freebsd")]
#[cfg(target_os = "macos")]
pub fn glob(pattern: &str) -> ~[Path] {
+ #[fixed_stack_segment]; #[inline(never)];
+
#[cfg(target_os = "linux")]
#[cfg(target_os = "android")]
fn default_glob_t () -> libc::glob_t {
#[cfg(unix)]
pub fn page_size() -> uint {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
libc::sysconf(libc::_SC_PAGESIZE) as uint
}
#[cfg(windows)]
pub fn page_size() -> uint {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let mut info = libc::SYSTEM_INFO::new();
libc::GetSystemInfo(&mut info);
#[cfg(unix)]
impl MemoryMap {
pub fn new(min_len: uint, options: ~[MapOption]) -> Result<~MemoryMap, MapError> {
+ #[fixed_stack_segment]; #[inline(never)];
+
use libc::off_t;
let mut addr: *c_void = ptr::null();
#[cfg(unix)]
impl Drop for MemoryMap {
fn drop(&self) {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
match libc::munmap(self.data as *c_void, self.len) {
0 => (),
#[cfg(windows)]
impl MemoryMap {
pub fn new(min_len: uint, options: ~[MapOption]) -> Result<~MemoryMap, MapError> {
+ #[fixed_stack_segment]; #[inline(never)];
+
use libc::types::os::arch::extra::{LPVOID, DWORD, SIZE_T, HANDLE};
let mut lpAddress: LPVOID = ptr::mut_null();
#[cfg(windows)]
impl Drop for MemoryMap {
fn drop(&self) {
+ #[fixed_stack_segment]; #[inline(never)];
+
use libc::types::os::arch::extra::{LPCVOID, HANDLE};
unsafe {
#[test]
fn copy_file_ok() {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let tempdir = getcwd(); // would like to use $TMPDIR,
// doesn't seem to work on Linux
#[test]
fn memory_map_file() {
+ #[fixed_stack_segment]; #[inline(never)];
+
use result::{Ok, Err};
use os::*;
use libc::*;
#[cfg(unix)]
+ #[fixed_stack_segment]
+ #[inline(never)]
fn lseek_(fd: c_int, size: uint) {
unsafe {
assert!(lseek(fd, size as off_t, SEEK_SET) == size as off_t);
}
}
#[cfg(windows)]
+ #[fixed_stack_segment]
+ #[inline(never)]
fn lseek_(fd: c_int, size: uint) {
unsafe {
assert!(lseek(fd, size as c_long, SEEK_SET) == size as c_long);
#[cfg(target_os = "win32")]
impl WindowsPath {
pub fn stat(&self) -> Option<libc::stat> {
+ #[fixed_stack_segment]; #[inline(never)];
do self.with_c_str |buf| {
let mut st = stat::arch::default_stat();
match unsafe { libc::stat(buf, &mut st) } {
#[cfg(not(target_os = "win32"))]
impl PosixPath {
pub fn stat(&self) -> Option<libc::stat> {
+ #[fixed_stack_segment]; #[inline(never)];
do self.with_c_str |buf| {
let mut st = stat::arch::default_stat();
match unsafe { libc::stat(buf as *libc::c_char, &mut st) } {
#[cfg(unix)]
impl PosixPath {
pub fn lstat(&self) -> Option<libc::stat> {
+ #[fixed_stack_segment]; #[inline(never)];
do self.with_c_str |buf| {
let mut st = stat::arch::default_stat();
match unsafe { libc::lstat(buf, &mut st) } {
}
pub fn extract_drive_prefix(s: &str) -> Option<(~str,~str)> {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
if (s.len() > 1 &&
libc::isalpha(s[0] as libc::c_int) != 0 &&
impl XorShiftRng {
/// Create an xor shift random number generator with a random seed.
pub fn new() -> XorShiftRng {
+ #[fixed_stack_segment]; #[inline(never)];
+
+ // generate seeds the same way as seed(), except we have a specific size
let mut s = [0u8, ..16];
loop {
/// Create a new random seed.
pub fn seed() -> ~[u8] {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let n = rustrt::rand_seed_size() as uint;
let mut s = vec::from_elem(n, 0_u8);
#[test]
fn compare_isaac_implementation() {
+ #[fixed_stack_segment]; #[inline(never)];
+
// This is to verify that the implementation of the ISAAC rng is
// correct (i.e. matches the output of the upstream implementation,
// which is in the runtime)
args
}
- extern {
- fn rust_take_global_args_lock();
- fn rust_drop_global_args_lock();
- fn rust_get_global_args_ptr() -> *mut Option<~~[~str]>;
+ #[cfg(stage0)]
+ mod macro_hack {
+ #[macro_escape];
+ macro_rules! externfn(
+ (fn $name:ident () $(-> $ret_ty:ty),*) => (
+ extern {
+ fn $name() $(-> $ret_ty),*;
+ }
+ )
+ )
}
+ externfn!(fn rust_take_global_args_lock())
+ externfn!(fn rust_drop_global_args_lock())
+ externfn!(fn rust_get_global_args_ptr() -> *mut Option<~~[~str]>)
+
#[cfg(test)]
mod tests {
use option::{Some, None};
}
unsafe fn write_cstr(&self, p: *c_char) {
+ #[fixed_stack_segment]; #[inline(never)];
use libc::strlen;
use vec;
}
/// A wrapper around libc::malloc, aborting on out-of-memory
-#[inline]
pub unsafe fn malloc_raw(size: uint) -> *c_void {
+ #[fixed_stack_segment]; #[inline(never)];
+
let p = malloc(size as size_t);
if p.is_null() {
// we need a non-allocating way to print an error here
}
/// A wrapper around libc::realloc, aborting on out-of-memory
-#[inline]
pub unsafe fn realloc_raw(ptr: *mut c_void, size: uint) -> *mut c_void {
+ #[fixed_stack_segment]; #[inline(never)];
+
let p = realloc(ptr, size as size_t);
if p.is_null() {
// we need a non-allocating way to print an error here
exchange_free(ptr)
}
-#[inline]
pub unsafe fn exchange_free(ptr: *c_char) {
+ #[fixed_stack_segment]; #[inline(never)];
+
free(ptr as *c_void);
}
}
impl LocalHeap {
+ #[fixed_stack_segment] #[inline(never)]
pub fn new() -> LocalHeap {
unsafe {
// Don't need synchronization for the single-threaded local heap
}
}
+ #[fixed_stack_segment] #[inline(never)]
pub fn alloc(&mut self, td: *TypeDesc, size: uint) -> *OpaqueBox {
unsafe {
return rust_boxed_region_malloc(self.boxed_region, td, size as size_t);
}
}
+ #[fixed_stack_segment] #[inline(never)]
pub fn realloc(&mut self, ptr: *OpaqueBox, size: uint) -> *OpaqueBox {
unsafe {
return rust_boxed_region_realloc(self.boxed_region, ptr, size as size_t);
}
}
+ #[fixed_stack_segment] #[inline(never)]
pub fn free(&mut self, box: *OpaqueBox) {
unsafe {
return rust_boxed_region_free(self.boxed_region, box);
}
impl Drop for LocalHeap {
+ #[fixed_stack_segment] #[inline(never)]
fn drop(&self) {
unsafe {
rust_delete_boxed_region(self.boxed_region);
use tls = rt::thread_local_storage;
/// Initialize the TLS key. Other ops will fail if this isn't executed first.
+#[fixed_stack_segment]
+#[inline(never)]
pub fn init_tls_key() {
unsafe {
rust_initialize_rt_tls_key();
}
}
+#[fixed_stack_segment]
+#[inline(never)]
fn maybe_tls_key() -> Option<tls::Key> {
unsafe {
let key: *mut c_void = rust_get_rt_tls_key();
}
extern {
- #[fast_ffi]
fn rust_get_rt_tls_key() -> *mut c_void;
}
-
}
/// Configure logging by traversing the crate map and setting the
/// per-module global logging flags based on the logging spec
+#[fixed_stack_segment] #[inline(never)]
pub fn init(crate_map: *u8) {
use c_str::ToCStr;
use os;
}
}
+#[fixed_stack_segment] #[inline(never)]
pub fn console_on() { unsafe { rust_log_console_on() } }
+
+#[fixed_stack_segment] #[inline(never)]
pub fn console_off() { unsafe { rust_log_console_off() } }
+
+#[fixed_stack_segment] #[inline(never)]
fn should_log_console() -> bool { unsafe { rust_should_log_console() != 0 } }
extern {
return exit_code;
}
+#[cfg(stage0)]
+mod macro_hack {
+#[macro_escape];
+macro_rules! externfn(
+ (fn $name:ident ($($arg_name:ident : $arg_ty:ty),*) $(-> $ret_ty:ty),*) => (
+ extern {
+ fn $name($($arg_name : $arg_ty),*) $(-> $ret_ty),*;
+ }
+ )
+)
+}
+
/// One-time runtime initialization.
///
/// Initializes global state, including frobbing
rust_update_gc_metadata(crate_map);
}
- extern {
- fn rust_update_gc_metadata(crate_map: *u8);
- }
+ externfn!(fn rust_update_gc_metadata(crate_map: *u8));
}
/// One-time runtime cleanup.
impl StackSegment {
pub fn new(size: uint) -> StackSegment {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
// Create a block of uninitialized values
let mut stack = vec::with_capacity(size);
impl Drop for StackSegment {
fn drop(&self) {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
// XXX: Using the FFI to call a C macro. Slow
rust_valgrind_stack_deregister(self.valgrind_id);
}
pub fn begin_unwind(&mut self) -> ! {
+ #[fixed_stack_segment]; #[inline(never)];
+
self.unwinding = true;
unsafe {
rust_begin_unwind(UNWIND_TOKEN);
static RLIMIT_NOFILE: libc::c_int = 8;
pub unsafe fn raise_fd_limit() {
+ #[fixed_stack_segment]; #[inline(never)];
+
// The strategy here is to fetch the current resource limits, read the kern.maxfilesperproc
// sysctl value, and bump the soft resource limit for maxfiles up to the sysctl value.
use ptr::{to_unsafe_ptr, to_mut_unsafe_ptr, mut_null};
}
/// Get a port number, starting at 9600, for use in tests
+#[fixed_stack_segment] #[inline(never)]
pub fn next_test_port() -> u16 {
unsafe {
return rust_dbg_next_port(base_port() as libc::uintptr_t) as u16;
impl Thread {
pub fn start(main: ~fn()) -> Thread {
fn substart(main: &~fn()) -> *raw_thread {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe { rust_raw_thread_start(main) }
}
let raw = substart(&main);
}
pub fn join(self) {
+ #[fixed_stack_segment]; #[inline(never)];
+
assert!(!self.joined);
let mut this = self;
unsafe { rust_raw_thread_join(this.raw_thread); }
impl Drop for Thread {
fn drop(&self) {
+ #[fixed_stack_segment]; #[inline(never)];
+
assert!(self.joined);
unsafe { rust_raw_thread_delete(self.raw_thread) }
}
pub type Key = pthread_key_t;
#[cfg(unix)]
+#[fixed_stack_segment]
+#[inline(never)]
pub unsafe fn create(key: &mut Key) {
assert_eq!(0, pthread_key_create(key, null()));
}
#[cfg(unix)]
+#[fixed_stack_segment]
+#[inline(never)]
pub unsafe fn set(key: Key, value: *mut c_void) {
assert_eq!(0, pthread_setspecific(key, value));
}
#[cfg(unix)]
+#[fixed_stack_segment]
+#[inline(never)]
pub unsafe fn get(key: Key) -> *mut c_void {
pthread_getspecific(key)
}
pub type Key = DWORD;
#[cfg(windows)]
+#[fixed_stack_segment]
+#[inline(never)]
pub unsafe fn create(key: &mut Key) {
static TLS_OUT_OF_INDEXES: DWORD = 0xFFFFFFFF;
*key = TlsAlloc();
}
#[cfg(windows)]
+#[fixed_stack_segment]
+#[inline(never)]
pub unsafe fn set(key: Key, value: *mut c_void) {
assert!(0 != TlsSetValue(key, value))
}
#[cfg(windows)]
+#[fixed_stack_segment]
+#[inline(never)]
pub unsafe fn get(key: Key) -> *mut c_void {
TlsGetValue(key)
}
/// Get the number of cores available
pub fn num_cpus() -> uint {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
return rust_get_num_cpus();
}
rterrln!("%s", "");
rterrln!("fatal runtime error: %s", msg);
- unsafe { libc::abort(); }
+ abort();
+
+ fn abort() -> ! {
+ #[fixed_stack_segment]; #[inline(never)];
+ unsafe { libc::abort() }
+ }
}
pub fn set_exit_status(code: int) {
-
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
return rust_set_exit_status_newrt(code as libc::uintptr_t);
}
}
pub fn get_exit_status() -> int {
-
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
return rust_get_exit_status_newrt() as int;
}
/// Transmute an owned vector to a Buf
pub fn vec_to_uv_buf(v: ~[u8]) -> Buf {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let data = malloc(v.len() as size_t) as *u8;
assert!(data.is_not_null());
/// Transmute a Buf that was once a ~[u8] back to ~[u8]
pub fn vec_from_uv_buf(buf: Buf) -> Option<~[u8]> {
+ #[fixed_stack_segment]; #[inline(never)];
+
if !(buf.len == 0 && buf.base.is_null()) {
let v = unsafe { vec::from_buf(buf.base, buf.len as uint) };
unsafe { free(buf.base as *c_void) };
fn socket_name<T, U: Watcher + NativeHandle<*T>>(sk: SocketNameKind,
handle: U) -> Result<SocketAddr, IoError> {
+ #[fixed_stack_segment]; #[inline(never)];
let getsockname = match sk {
TcpPeer => uvll::rust_uv_tcp_getpeername,
}
fn accept_simultaneously(&mut self) -> Result<(), IoError> {
+ #[fixed_stack_segment]; #[inline(never)];
+
let r = unsafe {
uvll::rust_uv_tcp_simultaneous_accepts(self.watcher.native_handle(), 1 as c_int)
};
}
fn dont_accept_simultaneously(&mut self) -> Result<(), IoError> {
+ #[fixed_stack_segment]; #[inline(never)];
+
let r = unsafe {
uvll::rust_uv_tcp_simultaneous_accepts(self.watcher.native_handle(), 0 as c_int)
};
}
fn control_congestion(&mut self) -> Result<(), IoError> {
+ #[fixed_stack_segment]; #[inline(never)];
+
let r = unsafe {
uvll::rust_uv_tcp_nodelay(self.native_handle(), 0 as c_int)
};
}
fn nodelay(&mut self) -> Result<(), IoError> {
+ #[fixed_stack_segment]; #[inline(never)];
+
let r = unsafe {
uvll::rust_uv_tcp_nodelay(self.native_handle(), 1 as c_int)
};
}
fn keepalive(&mut self, delay_in_seconds: uint) -> Result<(), IoError> {
+ #[fixed_stack_segment]; #[inline(never)];
+
let r = unsafe {
uvll::rust_uv_tcp_keepalive(self.native_handle(), 1 as c_int,
delay_in_seconds as c_uint)
}
fn letdie(&mut self) -> Result<(), IoError> {
+ #[fixed_stack_segment]; #[inline(never)];
+
let r = unsafe {
uvll::rust_uv_tcp_keepalive(self.native_handle(), 0 as c_int, 0 as c_uint)
};
}
pub unsafe fn malloc_handle(handle: uv_handle_type) -> *c_void {
+ #[fixed_stack_segment]; #[inline(never)];
+
assert!(handle != UV_UNKNOWN_HANDLE && handle != UV_HANDLE_TYPE_MAX);
let size = rust_uv_handle_size(handle as uint);
let p = malloc(size);
}
pub unsafe fn free_handle(v: *c_void) {
+ #[fixed_stack_segment]; #[inline(never)];
+
free(v)
}
pub unsafe fn malloc_req(req: uv_req_type) -> *c_void {
+ #[fixed_stack_segment]; #[inline(never)];
+
assert!(req != UV_UNKNOWN_REQ && req != UV_REQ_TYPE_MAX);
let size = rust_uv_req_size(req as uint);
let p = malloc(size);
}
pub unsafe fn free_req(v: *c_void) {
+ #[fixed_stack_segment]; #[inline(never)];
+
free(v)
}
#[test]
fn handle_sanity_check() {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
assert_eq!(UV_HANDLE_TYPE_MAX as uint, rust_uv_handle_type_max());
}
}
#[test]
+#[fixed_stack_segment]
+#[inline(never)]
fn request_sanity_check() {
unsafe {
assert_eq!(UV_REQ_TYPE_MAX as uint, rust_uv_req_type_max());
}
pub unsafe fn loop_new() -> *c_void {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_loop_new();
}
pub unsafe fn loop_delete(loop_handle: *c_void) {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_loop_delete(loop_handle);
}
pub unsafe fn run(loop_handle: *c_void) {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_run(loop_handle);
}
pub unsafe fn close<T>(handle: *T, cb: *u8) {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_close(handle as *c_void, cb);
}
pub unsafe fn walk(loop_handle: *c_void, cb: *u8, arg: *c_void) {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_walk(loop_handle, cb, arg);
}
pub unsafe fn idle_new() -> *uv_idle_t {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_idle_new()
}
pub unsafe fn idle_delete(handle: *uv_idle_t) {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_idle_delete(handle)
}
pub unsafe fn idle_init(loop_handle: *uv_loop_t, handle: *uv_idle_t) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_idle_init(loop_handle, handle)
}
pub unsafe fn idle_start(handle: *uv_idle_t, cb: uv_idle_cb) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_idle_start(handle, cb)
}
pub unsafe fn idle_stop(handle: *uv_idle_t) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_idle_stop(handle)
}
pub unsafe fn udp_init(loop_handle: *uv_loop_t, handle: *uv_udp_t) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_udp_init(loop_handle, handle);
}
pub unsafe fn udp_bind(server: *uv_udp_t, addr: *sockaddr_in, flags: c_uint) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_udp_bind(server, addr, flags);
}
pub unsafe fn udp_bind6(server: *uv_udp_t, addr: *sockaddr_in6, flags: c_uint) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_udp_bind6(server, addr, flags);
}
pub unsafe fn udp_send<T>(req: *uv_udp_send_t, handle: *T, buf_in: &[uv_buf_t],
addr: *sockaddr_in, cb: uv_udp_send_cb) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
let buf_ptr = vec::raw::to_ptr(buf_in);
let buf_cnt = buf_in.len() as i32;
return rust_uv_udp_send(req, handle as *c_void, buf_ptr, buf_cnt, addr, cb);
pub unsafe fn udp_send6<T>(req: *uv_udp_send_t, handle: *T, buf_in: &[uv_buf_t],
addr: *sockaddr_in6, cb: uv_udp_send_cb) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
let buf_ptr = vec::raw::to_ptr(buf_in);
let buf_cnt = buf_in.len() as i32;
return rust_uv_udp_send6(req, handle as *c_void, buf_ptr, buf_cnt, addr, cb);
pub unsafe fn udp_recv_start(server: *uv_udp_t, on_alloc: uv_alloc_cb,
on_recv: uv_udp_recv_cb) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_udp_recv_start(server, on_alloc, on_recv);
}
pub unsafe fn udp_recv_stop(server: *uv_udp_t) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_udp_recv_stop(server);
}
pub unsafe fn get_udp_handle_from_send_req(send_req: *uv_udp_send_t) -> *uv_udp_t {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_get_udp_handle_from_send_req(send_req);
}
pub unsafe fn udp_get_sockname(handle: *uv_udp_t, name: *sockaddr_storage) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_udp_getsockname(handle, name);
}
pub unsafe fn udp_set_membership(handle: *uv_udp_t, multicast_addr: *c_char,
interface_addr: *c_char, membership: uv_membership) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_udp_set_membership(handle, multicast_addr, interface_addr, membership as c_int);
}
pub unsafe fn udp_set_multicast_loop(handle: *uv_udp_t, on: c_int) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_udp_set_multicast_loop(handle, on);
}
pub unsafe fn udp_set_multicast_ttl(handle: *uv_udp_t, ttl: c_int) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_udp_set_multicast_ttl(handle, ttl);
}
pub unsafe fn udp_set_ttl(handle: *uv_udp_t, ttl: c_int) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_udp_set_ttl(handle, ttl);
}
pub unsafe fn udp_set_broadcast(handle: *uv_udp_t, on: c_int) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_udp_set_broadcast(handle, on);
}
pub unsafe fn tcp_init(loop_handle: *c_void, handle: *uv_tcp_t) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_tcp_init(loop_handle, handle);
}
pub unsafe fn tcp_connect(connect_ptr: *uv_connect_t, tcp_handle_ptr: *uv_tcp_t,
addr_ptr: *sockaddr_in, after_connect_cb: *u8) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_tcp_connect(connect_ptr, tcp_handle_ptr, after_connect_cb, addr_ptr);
}
pub unsafe fn tcp_connect6(connect_ptr: *uv_connect_t, tcp_handle_ptr: *uv_tcp_t,
addr_ptr: *sockaddr_in6, after_connect_cb: *u8) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_tcp_connect6(connect_ptr, tcp_handle_ptr, after_connect_cb, addr_ptr);
}
pub unsafe fn tcp_bind(tcp_server_ptr: *uv_tcp_t, addr_ptr: *sockaddr_in) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_tcp_bind(tcp_server_ptr, addr_ptr);
}
pub unsafe fn tcp_bind6(tcp_server_ptr: *uv_tcp_t, addr_ptr: *sockaddr_in6) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_tcp_bind6(tcp_server_ptr, addr_ptr);
}
pub unsafe fn tcp_getpeername(tcp_handle_ptr: *uv_tcp_t, name: *sockaddr_storage) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_tcp_getpeername(tcp_handle_ptr, name);
}
pub unsafe fn tcp_getsockname(handle: *uv_tcp_t, name: *sockaddr_storage) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_tcp_getsockname(handle, name);
}
pub unsafe fn tcp_nodelay(handle: *uv_tcp_t, enable: c_int) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_tcp_nodelay(handle, enable);
}
pub unsafe fn tcp_keepalive(handle: *uv_tcp_t, enable: c_int, delay: c_uint) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_tcp_keepalive(handle, enable, delay);
}
pub unsafe fn tcp_simultaneous_accepts(handle: *uv_tcp_t, enable: c_int) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_tcp_simultaneous_accepts(handle, enable);
}
pub unsafe fn listen<T>(stream: *T, backlog: c_int, cb: *u8) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_listen(stream as *c_void, backlog, cb);
}
pub unsafe fn accept(server: *c_void, client: *c_void) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_accept(server as *c_void, client as *c_void);
}
pub unsafe fn write<T>(req: *uv_write_t, stream: *T, buf_in: &[uv_buf_t], cb: *u8) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
let buf_ptr = vec::raw::to_ptr(buf_in);
let buf_cnt = buf_in.len() as i32;
return rust_uv_write(req as *c_void, stream as *c_void, buf_ptr, buf_cnt, cb);
}
pub unsafe fn read_start(stream: *uv_stream_t, on_alloc: uv_alloc_cb, on_read: *u8) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_read_start(stream as *c_void, on_alloc, on_read);
}
pub unsafe fn read_stop(stream: *uv_stream_t) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_read_stop(stream as *c_void);
}
pub unsafe fn last_error(loop_handle: *c_void) -> uv_err_t {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_last_error(loop_handle);
}
pub unsafe fn strerror(err: *uv_err_t) -> *c_char {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_strerror(err);
}
pub unsafe fn err_name(err: *uv_err_t) -> *c_char {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_err_name(err);
}
pub unsafe fn async_init(loop_handle: *c_void, async_handle: *uv_async_t, cb: *u8) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_async_init(loop_handle, async_handle, cb);
}
pub unsafe fn async_send(async_handle: *uv_async_t) {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_async_send(async_handle);
}
pub unsafe fn buf_init(input: *u8, len: uint) -> uv_buf_t {
+ #[fixed_stack_segment]; #[inline(never)];
+
let out_buf = uv_buf_t { base: ptr::null(), len: 0 as size_t };
let out_buf_ptr = ptr::to_unsafe_ptr(&out_buf);
rust_uv_buf_init(out_buf_ptr, input, len as size_t);
}
pub unsafe fn timer_init(loop_ptr: *c_void, timer_ptr: *uv_timer_t) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_timer_init(loop_ptr, timer_ptr);
}
pub unsafe fn timer_start(timer_ptr: *uv_timer_t, cb: *u8, timeout: u64,
repeat: u64) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_timer_start(timer_ptr, cb, timeout, repeat);
}
pub unsafe fn timer_stop(timer_ptr: *uv_timer_t) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_timer_stop(timer_ptr);
}
pub unsafe fn is_ip4_addr(addr: *sockaddr) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
+
match rust_uv_is_ipv4_sockaddr(addr) { 0 => false, _ => true }
}
pub unsafe fn is_ip6_addr(addr: *sockaddr) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
+
match rust_uv_is_ipv6_sockaddr(addr) { 0 => false, _ => true }
}
pub unsafe fn malloc_ip4_addr(ip: &str, port: int) -> *sockaddr_in {
+ #[fixed_stack_segment]; #[inline(never)];
do ip.with_c_str |ip_buf| {
rust_uv_ip4_addrp(ip_buf as *u8, port as libc::c_int)
}
}
pub unsafe fn malloc_ip6_addr(ip: &str, port: int) -> *sockaddr_in6 {
+ #[fixed_stack_segment]; #[inline(never)];
do ip.with_c_str |ip_buf| {
rust_uv_ip6_addrp(ip_buf as *u8, port as libc::c_int)
}
}
pub unsafe fn malloc_sockaddr_storage() -> *sockaddr_storage {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_malloc_sockaddr_storage()
}
pub unsafe fn free_sockaddr_storage(ss: *sockaddr_storage) {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_free_sockaddr_storage(ss);
}
pub unsafe fn free_ip4_addr(addr: *sockaddr_in) {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_free_ip4_addr(addr);
}
pub unsafe fn free_ip6_addr(addr: *sockaddr_in6) {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_free_ip6_addr(addr);
}
pub unsafe fn ip4_name(addr: *sockaddr_in, dst: *u8, size: size_t) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_ip4_name(addr, dst, size);
}
pub unsafe fn ip6_name(addr: *sockaddr_in6, dst: *u8, size: size_t) -> c_int {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_ip6_name(addr, dst, size);
}
pub unsafe fn ip4_port(addr: *sockaddr_in) -> c_uint {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_ip4_port(addr);
}
pub unsafe fn ip6_port(addr: *sockaddr_in6) -> c_uint {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_ip6_port(addr);
}
// data access helpers
pub unsafe fn get_loop_for_uv_handle<T>(handle: *T) -> *c_void {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_get_loop_for_uv_handle(handle as *c_void);
}
pub unsafe fn get_stream_handle_from_connect_req(connect: *uv_connect_t) -> *uv_stream_t {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_get_stream_handle_from_connect_req(connect);
}
pub unsafe fn get_stream_handle_from_write_req(write_req: *uv_write_t) -> *uv_stream_t {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_get_stream_handle_from_write_req(write_req);
}
pub unsafe fn get_data_for_uv_loop(loop_ptr: *c_void) -> *c_void {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_get_data_for_uv_loop(loop_ptr)
}
pub unsafe fn set_data_for_uv_loop(loop_ptr: *c_void, data: *c_void) {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_set_data_for_uv_loop(loop_ptr, data);
}
pub unsafe fn get_data_for_uv_handle<T>(handle: *T) -> *c_void {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_get_data_for_uv_handle(handle as *c_void);
}
pub unsafe fn set_data_for_uv_handle<T, U>(handle: *T, data: *U) {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_set_data_for_uv_handle(handle as *c_void, data as *c_void);
}
pub unsafe fn get_data_for_req<T>(req: *T) -> *c_void {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_get_data_for_req(req as *c_void);
}
pub unsafe fn set_data_for_req<T, U>(req: *T, data: *U) {
+ #[fixed_stack_segment]; #[inline(never)];
+
rust_uv_set_data_for_req(req as *c_void, data as *c_void);
}
pub unsafe fn get_base_from_buf(buf: uv_buf_t) -> *u8 {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_get_base_from_buf(buf);
}
pub unsafe fn get_len_from_buf(buf: uv_buf_t) -> size_t {
+ #[fixed_stack_segment]; #[inline(never)];
+
return rust_uv_get_len_from_buf(buf);
}
pub unsafe fn get_last_err_info(uv_loop: *c_void) -> ~str {
* * options - Options to configure the environment of the process,
* the working directory and the standard IO streams.
*/
- pub fn new(prog: &str, args: &[~str], options: ProcessOptions)
+ pub fn new(prog: &str, args: &[~str],
+ options: ProcessOptions)
-> Process {
+ #[fixed_stack_segment]; #[inline(never)];
+
let (in_pipe, in_fd) = match options.in_fd {
None => {
let pipe = os::pipe();
* method does nothing.
*/
pub fn close_input(&mut self) {
+ #[fixed_stack_segment]; #[inline(never)];
match self.input {
Some(-1) | None => (),
Some(fd) => {
}
fn close_outputs(&mut self) {
+ #[fixed_stack_segment]; #[inline(never)];
fclose_and_null(&mut self.output);
fclose_and_null(&mut self.error);
fn fclose_and_null(f_opt: &mut Option<*libc::FILE>) {
+ #[allow(cstack)]; // fixed_stack_segment declared on enclosing fn
match *f_opt {
Some(f) if !f.is_null() => {
unsafe {
#[cfg(windows)]
fn killpid(pid: pid_t, _force: bool) {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
libc::funcs::extra::kernel32::TerminateProcess(
cast::transmute(pid), 1);
#[cfg(unix)]
fn killpid(pid: pid_t, force: bool) {
+ #[fixed_stack_segment]; #[inline(never)];
+
let signal = if force {
libc::consts::os::posix88::SIGKILL
} else {
env: Option<~[(~str, ~str)]>,
dir: Option<&Path>,
in_fd: c_int, out_fd: c_int, err_fd: c_int) -> SpawnProcessResult {
+ #[fixed_stack_segment]; #[inline(never)];
use libc::types::os::arch::extra::{DWORD, HANDLE, STARTUPINFO};
use libc::consts::os::extra::{
env: Option<~[(~str, ~str)]>,
dir: Option<&Path>,
in_fd: c_int, out_fd: c_int, err_fd: c_int) -> SpawnProcessResult {
+ #[fixed_stack_segment]; #[inline(never)];
use libc::funcs::posix88::unistd::{fork, dup2, close, chdir, execvp};
use libc::funcs::bsd44::getdtablesize;
#[cfg(windows)]
fn free_handle(handle: *()) {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
libc::funcs::extra::kernel32::CloseHandle(cast::transmute(handle));
}
#[cfg(windows)]
fn waitpid_os(pid: pid_t) -> int {
+ #[fixed_stack_segment]; #[inline(never)];
use libc::types::os::arch::extra::DWORD;
use libc::consts::os::extra::{
#[cfg(unix)]
fn waitpid_os(pid: pid_t) -> int {
+ #[fixed_stack_segment]; #[inline(never)];
use libc::funcs::posix01::wait::*;
}
fn readclose(fd: c_int) -> ~str {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
let file = os::fdopen(fd);
let reader = io::FILE_reader(file, false);
}
fn running_on_valgrind() -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe { rust_running_on_valgrind() != 0 }
}
pub use fmt;
pub use to_bytes;
}
+
#[test]
fn test_map() {
+ #[fixed_stack_segment]; #[inline(never)];
assert_eq!(~"", "".map_chars(|c| unsafe {libc::toupper(c as c_char)} as char));
assert_eq!(~"YMCA", "ymca".map_chars(|c| unsafe {libc::toupper(c as c_char)} as char));
}
// above.
let data = match util::replace(entry, None) {
Some((_, data, _)) => data,
- None => libc::abort(),
+ None => abort(),
};
// Move `data` into transmute to get out the memory that it
}
}
}
- _ => libc::abort()
+ _ => abort()
}
// n.b. 'data' and 'loans' are both invalid pointers at the point
if return_loan {
match map[i] {
Some((_, _, ref mut loan)) => { *loan = NoLoan; }
- None => { libc::abort(); }
+ None => { abort(); }
}
}
return ret;
}
}
+fn abort() -> ! {
+ #[fixed_stack_segment]; #[inline(never)];
+
+ unsafe { libc::abort() }
+}
+
pub unsafe fn local_set<T: 'static>(handle: Handle,
key: local_data::Key<T>,
data: T) {
mod testrt {
use libc;
- #[nolink]
- extern {
- pub fn rust_dbg_lock_create() -> *libc::c_void;
- pub fn rust_dbg_lock_destroy(lock: *libc::c_void);
- pub fn rust_dbg_lock_lock(lock: *libc::c_void);
- pub fn rust_dbg_lock_unlock(lock: *libc::c_void);
- pub fn rust_dbg_lock_wait(lock: *libc::c_void);
- pub fn rust_dbg_lock_signal(lock: *libc::c_void);
- }
+ externfn!(fn rust_dbg_lock_create() -> *libc::c_void)
+ externfn!(fn rust_dbg_lock_destroy(lock: *libc::c_void))
+ externfn!(fn rust_dbg_lock_lock(lock: *libc::c_void))
+ externfn!(fn rust_dbg_lock_unlock(lock: *libc::c_void))
+ externfn!(fn rust_dbg_lock_wait(lock: *libc::c_void))
+ externfn!(fn rust_dbg_lock_signal(lock: *libc::c_void))
}
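
The `externfn!` conversion above replaces a hand-written `extern` block with one macro invocation per function, each generating a wrapper that carries the stack-management attributes. As a rough illustration only (a sketch in the 2013-era dialect this patch targets, not compilable with a modern rustc), a single invocation expands along these lines:

```
// Sketch: approximate expansion of
//     externfn!(fn rust_dbg_lock_lock(lock: *libc::c_void))
pub unsafe fn rust_dbg_lock_lock(lock: *libc::c_void) {
    // attributes are kept internal to the fn to sidestep a macro bug
    #[fixed_stack_segment];
    #[inline(never)];

    // this call resolves to the nested extern declaration below,
    // which shadows the wrapper itself inside the fn body
    return rust_dbg_lock_lock(lock);

    extern {
        fn rust_dbg_lock_lock(lock: *libc::c_void);
    }
}
```

The nested `extern` item is what makes the inner call bind to the C symbol rather than recurse into the wrapper.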
#[test]
use result::*;
pub unsafe fn open_external(filename: &path::Path) -> *libc::c_void {
+ #[fixed_stack_segment]; #[inline(never)];
do filename.with_c_str |raw_name| {
dlopen(raw_name, Lazy as libc::c_int)
}
}
pub unsafe fn open_internal() -> *libc::c_void {
+ #[fixed_stack_segment]; #[inline(never)];
+
dlopen(ptr::null(), Lazy as libc::c_int)
}
pub fn check_for_errors_in<T>(f: &fn()->T) -> Result<T, ~str> {
+ #[fixed_stack_segment]; #[inline(never)];
+
unsafe {
do atomically {
let _old_error = dlerror();
}
pub unsafe fn symbol(handle: *libc::c_void, symbol: *libc::c_char) -> *libc::c_void {
+ #[fixed_stack_segment]; #[inline(never)];
+
dlsym(handle, symbol)
}
pub unsafe fn close(handle: *libc::c_void) {
+ #[fixed_stack_segment]; #[inline(never)];
+
dlclose(handle); ()
}
use result::*;
pub unsafe fn open_external(filename: &path::Path) -> *libc::c_void {
+ #[fixed_stack_segment]; #[inline(never)];
do os::win32::as_utf16_p(filename.to_str()) |raw_name| {
LoadLibraryW(raw_name)
}
}
pub unsafe fn open_internal() -> *libc::c_void {
+ #[fixed_stack_segment]; #[inline(never)];
let handle = ptr::null();
GetModuleHandleExW(0 as libc::DWORD, ptr::null(), &handle as **libc::c_void);
handle
}
pub fn check_for_errors_in<T>(f: &fn()->T) -> Result<T, ~str> {
+ #[fixed_stack_segment]; #[inline(never)];
unsafe {
do atomically {
SetLastError(0);
}
}
pub unsafe fn symbol(handle: *libc::c_void, symbol: *libc::c_char) -> *libc::c_void {
+ #[fixed_stack_segment]; #[inline(never)];
GetProcAddress(handle, symbol)
}
pub unsafe fn close(handle: *libc::c_void) {
+ #[fixed_stack_segment]; #[inline(never)];
FreeLibrary(handle); ()
}
/// can lead to deadlock. Calling change_dir_locked recursively will
/// also deadlock.
pub fn change_dir_locked(p: &Path, action: &fn()) -> bool {
+ #[fixed_stack_segment]; #[inline(never)];
+
use os;
use os::change_dir;
use unstable::sync::atomically;
}
}
- #[inline]
pub unsafe fn lock<T>(&self, f: &fn() -> T) -> T {
do atomically {
rust_lock_little_lock(self.l);
}
}
-extern {
- fn rust_create_little_lock() -> rust_little_lock;
- fn rust_destroy_little_lock(lock: rust_little_lock);
- fn rust_lock_little_lock(lock: rust_little_lock);
- fn rust_unlock_little_lock(lock: rust_little_lock);
+#[cfg(stage0)]
+mod macro_hack {
+#[macro_escape];
+macro_rules! externfn(
+ (fn $name:ident () $(-> $ret_ty:ty),*) => (
+ extern {
+ fn $name() $(-> $ret_ty),*;
+ }
+ );
+ (fn $name:ident ($($arg_name:ident : $arg_ty:ty),*) $(-> $ret_ty:ty),*) => (
+ extern {
+ fn $name($($arg_name : $arg_ty),*) $(-> $ret_ty),*;
+ }
+ )
+)
}
+externfn!(fn rust_create_little_lock() -> rust_little_lock)
+externfn!(fn rust_destroy_little_lock(lock: rust_little_lock))
+externfn!(fn rust_lock_little_lock(lock: rust_little_lock))
+externfn!(fn rust_unlock_little_lock(lock: rust_little_lock))
+
#[cfg(test)]
mod tests {
use cell::Cell;
pub static $name: ::std::local_data::Key<$ty> = &::std::local_data::Key;
)
)
+
+ // externfn! declares a wrapper for an external function.
+ // It is intended to be used like:
+ //
+ // externfn!(#[nolink]
+ // #[abi = \"cdecl\"]
+ // fn memcmp(cx: *u8, ct: *u8, n: u32) -> u32)
+ //
+ // Due to limitations in the macro parser, this pattern must be
+ // implemented with 4 distinct patterns (with attrs / without
+ // attrs CROSS with args / without args).
+ //
+ // Also, this macro grammar allows for any number of return types
+ // because I couldn't figure out the syntax to specify at most one.
+ macro_rules! externfn(
+ (fn $name:ident () $(-> $ret_ty:ty),*) => (
+ pub unsafe fn $name() $(-> $ret_ty),* {
+ // Note: to avoid obscure bug in macros, keep these
+ // attributes *internal* to the fn
+ #[fixed_stack_segment];
+ #[inline(never)];
+ #[allow(missing_doc)];
+
+ return $name();
+
+ extern {
+ fn $name() $(-> $ret_ty),*;
+ }
+ }
+ );
+ (fn $name:ident ($($arg_name:ident : $arg_ty:ty),*) $(-> $ret_ty:ty),*) => (
+ pub unsafe fn $name($($arg_name : $arg_ty),*) $(-> $ret_ty),* {
+ // Note: to avoid obscure bug in macros, keep these
+ // attributes *internal* to the fn
+ #[fixed_stack_segment];
+ #[inline(never)];
+ #[allow(missing_doc)];
+
+ return $name($($arg_name),*);
+
+ extern {
+ fn $name($($arg_name : $arg_ty),*) $(-> $ret_ty),*;
+ }
+ }
+ );
+ ($($attrs:attr)* fn $name:ident () $(-> $ret_ty:ty),*) => (
+ pub unsafe fn $name() $(-> $ret_ty),* {
+ // Note: to avoid obscure bug in macros, keep these
+ // attributes *internal* to the fn
+ #[fixed_stack_segment];
+ #[inline(never)];
+ #[allow(missing_doc)];
+
+ return $name();
+
+ $($attrs)*
+ extern {
+ fn $name() $(-> $ret_ty),*;
+ }
+ }
+ );
+ ($($attrs:attr)* fn $name:ident ($($arg_name:ident : $arg_ty:ty),*) $(-> $ret_ty:ty),*) => (
+ pub unsafe fn $name($($arg_name : $arg_ty),*) $(-> $ret_ty),* {
+ // Note: to avoid obscure bug in macros, keep these
+ // attributes *internal* to the fn
+ #[fixed_stack_segment];
+ #[inline(never)];
+ #[allow(missing_doc)];
+
+ return $name($($arg_name),*);
+
+ $($attrs)*
+ extern {
+ fn $name($($arg_name : $arg_ty),*) $(-> $ret_ty),*;
+ }
+ }
+ )
+ )
+
}";
}
}
}
+#[fixed_stack_segment] #[inline(never)]
pub fn fact(n: uint) -> uint {
unsafe {
info!("n = %?", n);
// Exercise the unused_unsafe attribute in some positive and negative cases
+#[allow(cstack)];
#[deny(unused_unsafe)];
mod foo {
}
}
}
+
unsafe fn good3() { foo::bar() }
fn good4() { unsafe { foo::bar() } }
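
The `#[allow(cstack)]` added above silences the new lint that detects missing `#[fixed_stack_segment]` annotations. A sketch of the situation the lint is meant to catch (2013-era dialect, for illustration only; the exact diagnostic wording is not part of this patch):

```
// Sketch: the cstack lint flags direct C calls made from functions
// that do not guarantee a large stack segment.
extern {
    fn rust_get_test_int() -> libc::intptr_t;
}

fn bad() {
    unsafe { rust_get_test_int(); }  // would trip the cstack lint
}

#[fixed_stack_segment]
fn good() {
    unsafe { rust_get_test_int(); }  // OK: large stack segment requested
}
```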
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-//error-pattern:libc::c_int or libc::c_long should be used
+#[forbid(ctypes)];
+
mod xx {
extern {
- pub fn strlen(str: *u8) -> uint;
- pub fn foo(x: int, y: uint);
+ pub fn strlen(str: *u8) -> uint; //~ ERROR found rust type `uint`
+ pub fn foo(x: int, y: uint); //~ ERROR found rust type `int`
+ //~^ ERROR found rust type `uint`
}
}
fn main() {
- // let it fail to verify warning message
- fail!()
}
use anonexternmod::*;
+#[fixed_stack_segment]
pub fn main() {
unsafe {
rust_get_test_int();
fn rust_get_test_int() -> libc::intptr_t;
}
+#[fixed_stack_segment]
pub fn main() {
unsafe {
let _ = rust_get_test_int();
}
}
+#[fixed_stack_segment]
fn atol(s: ~str) -> int {
s.with_c_str(|x| unsafe { libc::atol(x) as int })
}
+#[fixed_stack_segment]
fn atoll(s: ~str) -> i64 {
s.with_c_str(|x| unsafe { libc::atoll(x) as i64 })
}
}
}
+#[fixed_stack_segment]
fn count(n: uint) -> uint {
unsafe {
info!("n = %?", n);
}
}
+#[fixed_stack_segment] #[inline(never)]
fn count(n: uint) -> uint {
unsafe {
info!("n = %?", n);
}
}
+#[fixed_stack_segment] #[inline(never)]
fn count(n: uint) -> uint {
unsafe {
info!("n = %?", n);
}
}
+#[fixed_stack_segment] #[inline(never)]
fn fact(n: uint) -> uint {
unsafe {
info!("n = %?", n);
extern mod externcallback(vers = "0.1");
+#[fixed_stack_segment] #[inline(never)]
fn fact(n: uint) -> uint {
unsafe {
info!("n = %?", n);
pub fn rust_dbg_extern_identity_TwoU32s(v: TwoU32s) -> TwoU32s;
}
+#[fixed_stack_segment] #[inline(never)]
pub fn main() {
unsafe {
let x = TwoU32s {one: 22, two: 23};
pub fn rust_dbg_extern_identity_TwoU64s(u: TwoU64s) -> TwoU64s;
}
+#[fixed_stack_segment] #[inline(never)]
pub fn main() {
unsafe {
let x = TwoU64s {one: 22, two: 23};
pub fn rust_dbg_extern_identity_TwoU64s(v: TwoU64s) -> TwoU64s;
}
+#[fixed_stack_segment] #[inline(never)]
pub fn main() {
unsafe {
let x = TwoU64s {one: 22, two: 23};
pub fn rust_dbg_extern_identity_u8(v: u8) -> u8;
}
+#[fixed_stack_segment] #[inline(never)]
pub fn main() {
unsafe {
assert_eq!(22_u8, rust_dbg_extern_identity_u8(22_u8));
pub fn rust_dbg_extern_identity_double(v: f64) -> f64;
}
+#[fixed_stack_segment] #[inline(never)]
pub fn main() {
unsafe {
assert_eq!(22.0_f64, rust_dbg_extern_identity_double(22.0_f64));
pub fn rust_dbg_extern_identity_u32(v: u32) -> u32;
}
+#[fixed_stack_segment] #[inline(never)]
pub fn main() {
unsafe {
assert_eq!(22_u32, rust_dbg_extern_identity_u32(22_u32));
pub fn rust_dbg_extern_identity_u64(v: u64) -> u64;
}
+#[fixed_stack_segment] #[inline(never)]
pub fn main() {
unsafe {
assert_eq!(22_u64, rust_dbg_extern_identity_u64(22_u64));
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-// xfail-win32 #5745
-// xfail-macos Broken on mac i686
-
struct TwoU16s {
one: u16, two: u16
}
pub fn rust_dbg_extern_return_TwoU16s() -> TwoU16s;
}
+#[fixed_stack_segment] #[inline(never)]
pub fn main() {
unsafe {
let y = rust_dbg_extern_return_TwoU16s();
pub fn rust_dbg_extern_return_TwoU32s() -> TwoU32s;
}
+#[fixed_stack_segment] #[inline(never)]
pub fn main() {
unsafe {
let y = rust_dbg_extern_return_TwoU32s();
pub fn rust_dbg_extern_return_TwoU64s() -> TwoU64s;
}
+#[fixed_stack_segment] #[inline(never)]
pub fn main() {
unsafe {
let y = rust_dbg_extern_return_TwoU64s();
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-// xfail-win32 #5745
-// xfail-macos Broken on mac i686
-
struct TwoU8s {
one: u8, two: u8
}
pub fn rust_dbg_extern_return_TwoU8s() -> TwoU8s;
}
+#[fixed_stack_segment] #[inline(never)]
pub fn main() {
unsafe {
let y = rust_dbg_extern_return_TwoU8s();
}
}
+#[fixed_stack_segment] #[inline(never)]
fn count(n: uint) -> uint {
unsafe {
rustrt::rust_dbg_call(cb, n)
}
}
+#[fixed_stack_segment] #[inline(never)]
fn count(n: uint) -> uint {
unsafe {
task::deschedule();
use std::libc;
use std::unstable::run_in_bare_thread;
-extern {
- pub fn rust_dbg_call(cb: *u8, data: libc::uintptr_t) -> libc::uintptr_t;
-}
+externfn!(fn rust_dbg_call(cb: *u8, data: libc::uintptr_t) -> libc::uintptr_t)
pub fn main() {
unsafe {
}
}
+#[fixed_stack_segment] #[inline(never)]
pub fn main() {
unsafe {
rustrt1::rust_get_test_int();
}
}
+#[fixed_stack_segment] #[inline(never)]
fn strlen(str: ~str) -> uint {
// C string is terminated with a zero
do str.with_c_str |buf| {
}
}
+#[fixed_stack_segment] #[inline(never)]
pub fn main() {
unsafe {
rustrt::rust_get_test_int();
extern mod foreign_lib;
+#[fixed_stack_segment] #[inline(never)]
pub fn main() {
unsafe {
let _foo = foreign_lib::rustrt::rust_get_test_int();
}
}
+#[fixed_stack_segment] #[inline(never)]
fn lgamma(n: c_double, value: &mut int) -> c_double {
unsafe {
return m::lgamma(n, to_c_int(value));
pub struct Fd(c_int);
impl Drop for Fd {
+ #[fixed_stack_segment] #[inline(never)]
fn drop(&self) {
unsafe {
libc::close(**self);
}
}
+#[fixed_stack_segment] #[inline(never)]
fn main() {
unsafe {
a::free(transmute(0));
x: int
}
+#[fixed_stack_segment] #[inline(never)]
fn alloc<'a>(_bcx : &'a arena) -> &'a Bcx<'a> {
unsafe {
cast::transmute(libc::malloc(sys::size_of::<Bcx<'blk>>()
return alloc(bcx.fcx.arena);
}
+#[fixed_stack_segment] #[inline(never)]
fn g(fcx : &Fcx) {
let bcx = Bcx { fcx: fcx };
let bcx2 = h(&bcx);
// option. This file may not be copied, modified, or distributed
// except according to those terms.
+// xfail-test - FIXME(#8538): a linking problem induced by extern "C" fns, not yet understood
// xfail-fast - windows doesn't like this
// Smallest hello world with no runtime
*a = 3;
}
+#[fixed_stack_segment] #[inline(never)]
unsafe fn run() {
assert!(debug_static_mut == 3);
debug_static_mut = 4;
}
}
+#[fixed_stack_segment] #[inline(never)]
fn test1() {
unsafe {
let q = Quad { a: 0xaaaa_aaaa_aaaa_aaaa_u64,
}
#[cfg(target_arch = "x86_64")]
+#[fixed_stack_segment]
+#[inline(never)]
fn test2() {
unsafe {
let f = Floats { a: 1.234567890e-15_f64,
#[cfg(target_os = "win32")]
+#[fixed_stack_segment]
pub fn main() {
let heap = unsafe { kernel32::GetProcessHeap() };
let mem = unsafe { kernel32::HeapAlloc(heap, 0u32, 100u32) };