-language: rust
+language: minimal
sudo: required
dist: trusty
services:
- env: IMAGE=x86_64-gnu-cargotest
- env: IMAGE=x86_64-gnu-debug
- env: IMAGE=x86_64-gnu-nopt
- - env: IMAGE=x86_64-gnu-rustbuild
+ - env: IMAGE=x86_64-gnu-make
- env: IMAGE=x86_64-gnu-llvm-3.7 ALLOW_PR=1 RUST_BACKTRACE=1
- env: IMAGE=x86_64-musl
install: brew install ccache
- env: >
RUST_CHECK_TARGET=check
- RUST_CONFIGURE_ARGS=--target=x86_64-apple-darwin --enable-rustbuild
+ RUST_CONFIGURE_ARGS=--target=x86_64-apple-darwin --disable-rustbuild
SRC=.
os: osx
install: brew install ccache
script:
- - if [ -z "$ALLOW_PR" ] && [ "$TRAVIS_BRANCH" != "auto" ]; then
- echo skipping, not a full build;
- elif [ -z "$ENABLE_AUTO" ] then
- echo skipping, not quite ready yet
- elif [ "$TRAVIS_OS_NAME" = "osx" ]; then
- git submodule update --init;
- src/ci/run.sh;
- else
- git submodule update --init;
- src/ci/docker/run.sh $IMAGE;
- fi
+ - >
+ if [ "$ALLOW_PR" = "" ] && [ "$TRAVIS_BRANCH" != "auto" ]; then
+ echo skipping, not a full build;
+ elif [ "$TRAVIS_OS_NAME" = "osx" ]; then
+ git submodule update --init;
+ src/ci/run.sh;
+ else
+ git submodule update --init;
+ src/ci/docker/run.sh $IMAGE;
+ fi
# Save tagged docker images we created and load them if they're available
before_cache:
It's your best friend when working on Rust, allowing you to compile & test
your contributions before submission.
-All the configuration for the build system lives in [the `mk` directory][mkdir]
-in the project root. It can be hard to follow in places, as it uses some
-advanced Make features which make for some challenging reading. If you have
-questions on the build system internals, try asking in
-[`#rust-internals`][pound-rust-internals].
+The build system lives in [the `src/bootstrap` directory][bootstrap] in the
+project root. Our build system is itself written in Rust and is based on Cargo
+to actually build all the compiler's crates. If you have questions on the build
+system internals, try asking in [`#rust-internals`][pound-rust-internals].
-[mkdir]: https://github.com/rust-lang/rust/tree/master/mk/
+[bootstrap]: https://github.com/rust-lang/rust/tree/master/src/bootstrap/
+
+> **Note**: the build system was recently rewritten from a jungle of makefiles
+> to the current incarnation you'll see in `src/bootstrap`. If you experience
+> bugs you can temporarily revert back to the makefiles with
+> `--disable-rustbuild` passed to `./configure`.
### Configuration
To see a full list of options, run `./configure --help`.
-### Useful Targets
-
-Some common make targets are:
-
-- `make tips` - show useful targets, variables and other tips for working with
- the build system.
-- `make rustc-stage1` - build up to (and including) the first stage. For most
- cases we don't need to build the stage2 compiler, so we can save time by not
- building it. The stage1 compiler is a fully functioning compiler and
- (probably) will be enough to determine if your change works as expected.
-- `make $host/stage1/bin/rustc` - Where $host is a target triple like x86_64-unknown-linux-gnu.
- This will build just rustc, without libstd. This is the fastest way to recompile after
- you changed only rustc source code. Note however that the resulting rustc binary
- won't have a stdlib to link against by default. You can build libstd once with
- `make rustc-stage1`, rustc will pick it up afterwards. libstd is only guaranteed to
- work if recompiled, so if there are any issues recompile it.
-- `make check` - build the full compiler & run all tests (takes a while). This
+### Building
+
+Although the `./configure` script will generate a `Makefile`, this is actually
+just a thin veneer over the actual build system driver, `x.py`. This file, at
+the root of the repository, is used to build, test, and document various parts
+of the compiler. You can execute it as:
+
+```sh
+python x.py build
+```
+
+On some systems you can also use the shorter version:
+
+```sh
+./x.py build
+```
+
+To learn more about the driver and top-level targets, you can execute:
+
+```sh
+python x.py --help
+```
+
+The general format for the driver script is:
+
+```sh
+python x.py <command> [<directory>]
+```
+
+Some example commands are `build`, `test`, and `doc`. These will build, test,
+and document the specified directory. The second argument, `<directory>`, is
+optional and defaults to working over the entire compiler. If specified,
+however, only that specific directory will be built. For example:
+
+```sh
+# build the entire compiler
+python x.py build
+
+# build all documentation
+python x.py doc
+
+# run all test suites
+python x.py test
+
+# build only the standard library
+python x.py build src/libstd
+
+# test only one particular test suite
+python x.py test src/test/rustdoc
+
+# build only the stage0 libcore library
+python x.py build src/libcore --stage 0
+```
+
+You can explore the build system through the various `--help` pages for each
+subcommand. For example, to learn more about a command you can run:
+
+```
+python x.py build --help
+```
+
+To learn about all possible rules you can execute, run:
+
+```
+python x.py build --help --verbose
+```
+
+### Useful commands
+
+Some common invocations of `x.py` are:
+
+- `x.py build --help` - show the help message and explain the subcommand
+- `x.py build src/libtest --stage 1` - build up to (and including) the first
+ stage. For most cases we don't need to build the stage2 compiler, so we can
+ save time by not building it. The stage1 compiler is a fully functioning
+ compiler and (probably) will be enough to determine if your change works as
+ expected.
+- `x.py build src/rustc --stage 1` - This will build just rustc, without libstd.
+ This is the fastest way to recompile after you changed only rustc source code.
+ Note however that the resulting rustc binary won't have a stdlib to link
+ against by default. You can build libstd once with `x.py build src/libstd`,
+  but it is only guaranteed to work if recompiled, so if there are any issues
+ recompile it.
+- `x.py test` - build the full compiler & run all tests (takes a while). This
is what gets run by the continuous integration system against your pull
request. You should run this before submitting to make sure your tests pass
& everything builds in the correct manner.
-- `make check-stage1-std NO_REBUILD=1` - test the standard library without
- rebuilding the entire compiler
-- `make check TESTNAME=<substring-of-test-name>` - Run a matching set of tests.
+- `x.py test src/libstd --stage 1` - test the standard library without
+ recompiling stage 2.
+- `x.py test src/test/run-pass --filter TESTNAME` - Run a matching set of tests.
- `TESTNAME` should be a substring of the tests to match against e.g. it could
be the fully qualified test name, or just a part of it.
`TESTNAME=collections::hash::map::test_map::test_capacity_not_less_than_len`
or `TESTNAME=test_capacity_not_less_than_len`.
-- `make check-stage1-rpass TESTNAME=<substring-of-test-name>` - Run a single
- rpass test with the stage1 compiler (this will be quicker than running the
- command above as we only build the stage1 compiler, not the entire thing).
- You can also leave off the `-rpass` to run all stage1 test types.
-- `make check-stage1-coretest` - Run stage1 tests in `libcore`.
-- `make tidy` - Check that the source code is in compliance with Rust's style
- guidelines. There is no official document describing Rust's full guidelines
- as of yet, but basic rules like 4 spaces for indentation and no more than 99
- characters in a single line should be kept in mind when writing code.
+- `x.py test src/test/run-pass --stage 1 --filter <substring-of-test-name>` -
+ Run a single rpass test with the stage1 compiler (this will be quicker than
+ running the command above as we only build the stage1 compiler, not the entire
+ thing). You can also leave off the directory argument to run all stage1 test
+ types.
+- `x.py test src/libcore --stage 1` - Run stage1 tests in `libcore`.
+- `x.py test src/tools/tidy` - Check that the source code is in compliance with
+ Rust's style guidelines. There is no official document describing Rust's full
+ guidelines as of yet, but basic rules like 4 spaces for indentation and no
+ more than 99 characters in a single line should be kept in mind when writing
+ code.
## Pull Requests
once before running these will work, but that’s only one full build rather than
one each time.
- $ make -j8 rustc-stage1 && make check-stage1
+ $ python x.py test --stage 1
is one such example, which builds just `rustc`, and then runs the tests. If
you’re adding something to the standard library, try
- $ make -j8 check-stage1-std NO_REBUILD=1
-
-This will not rebuild the compiler, but will run the tests.
+ $ python x.py test src/libstd --stage 1
Please make sure your pull request is in compliance with Rust's style
guidelines by running
- $ make tidy
+ $ python x.py test src/tools/tidy
Make this check before every pull request (and every new commit in a pull
request); you can add [git hooks](https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks)
```sh
$ ./configure
- $ make && make install
+ $ make && sudo make install
```
- > ***Note:*** You may need to use `sudo make install` if you do not
- > normally have permission to modify the destination directory. The
- > install locations can be adjusted by passing a `--prefix` argument
- > to `configure`. Various other options are also supported – pass
+ > ***Note:*** Install locations can be adjusted by passing a `--prefix`
+ > argument to `configure`. Various other options are also supported – pass
> `--help` for more information on them.
- When complete, `make install` will place several programs into
+ When complete, `sudo make install` will place several programs into
`/usr/local/bin`: `rustc`, the Rust compiler, and `rustdoc`, the
API-documentation tool. This install does not include [Cargo],
Rust's package manager, which you may also want to build.
(or later) so `rustc` can use its linker. Make sure to check the “C++ tools”
option.
-With these dependencies installed, the build takes two steps:
+With these dependencies installed, you can build the compiler in a `cmd.exe`
+shell with:
```sh
-$ ./configure
-$ make && make install
+> python x.py build
```
-#### MSVC with rustbuild
-
-The old build system, based on makefiles, is currently being rewritten into a
-Rust-based build system called rustbuild. This can be used to bootstrap the
-compiler on MSVC without needing to install MSYS or MinGW. All you need are
-[Python 2](https://www.python.org/downloads/),
-[CMake](https://cmake.org/download/), and
-[Git](https://git-scm.com/downloads) in your PATH (make sure you do not use the
-ones from MSYS if you have it installed). You'll also need Visual Studio 2013 or
-newer with the C++ tools. Then all you need to do is to kick off rustbuild.
+If you're running inside an MSYS shell, however, you can run:
-```
-python x.py build
+```sh
+$ ./configure --build=x86_64-pc-windows-msvc
+$ make && make install
```
-Currently rustbuild only works with some known versions of Visual Studio. If you
-have a more recent version installed that a part of rustbuild doesn't understand
+Currently building Rust only works with some known versions of Visual Studio. If
+you have a more recent version installed that the build system doesn't understand,
then you may need to force rustbuild to use an older version. This can be done
by manually calling the appropriate vcvars file before running the bootstrap.
$ make docs
```
-Building the documentation requires building the compiler, so the above
-details will apply. Once you have the compiler built, you can
-
-```sh
-$ make docs NO_REBUILD=1
-```
-
-To make sure you don’t re-build the compiler because you made a change
-to some documentation.
-
The generated documentation will appear in a top-level `doc` directory,
created by the `make` rule.
matrix:
# 32/64 bit MSVC
- MSYS_BITS: 64
- TARGET: x86_64-pc-windows-msvc
- CHECK: check
- CONFIGURE_ARGS: --enable-llvm-assertions --enable-debug-assertions
+ RUST_CONFIGURE_ARGS: --build=x86_64-pc-windows-msvc
+ RUST_CHECK_TARGET: check
- MSYS_BITS: 32
- TARGET: i686-pc-windows-msvc
- CHECK: check
- CONFIGURE_ARGS: --enable-llvm-assertions --enable-debug-assertions
+ RUST_CONFIGURE_ARGS: --build=i686-pc-windows-msvc
+ RUST_CHECK_TARGET: check
- # MSVC rustbuild
+ # MSVC makefiles
- MSYS_BITS: 64
- CONFIGURE_ARGS: --enable-rustbuild --enable-llvm-assertions --enable-debug-assertions
- TARGET: x86_64-pc-windows-msvc
- CHECK: check
+ RUST_CONFIGURE_ARGS: --build=x86_64-pc-windows-msvc --disable-rustbuild
+ RUST_CHECK_TARGET: check
# MSVC cargotest
- MSYS_BITS: 64
- CONFIGURE_ARGS: --enable-rustbuild --enable-llvm-assertions --enable-debug-assertions
- TARGET: x86_64-pc-windows-msvc
- CHECK: check-cargotest
+ NO_VENDOR: 1
+ RUST_CHECK_TARGET: check-cargotest
+ RUST_CONFIGURE_ARGS: --build=x86_64-pc-windows-msvc
# 32/64-bit MinGW builds.
#
# *not* use debug assertions and llvm assertions. This is because they take
# too long on appveyor and this is tested by rustbuild below.
- MSYS_BITS: 32
- TARGET: i686-pc-windows-gnu
- CHECK: check
+ RUST_CONFIGURE_ARGS: --build=i686-pc-windows-gnu
+ RUST_CHECK_TARGET: check
MINGW_URL: https://s3.amazonaws.com/rust-lang-ci
MINGW_ARCHIVE: i686-4.9.2-release-win32-dwarf-rt_v4-rev4.7z
MINGW_DIR: mingw32
- MSYS_BITS: 32
- CONFIGURE_ARGS: --enable-rustbuild --enable-llvm-assertions --enable-debug-assertions
- TARGET: i686-pc-windows-gnu
- CHECK: check
+ RUST_CONFIGURE_ARGS: --build=i686-pc-windows-gnu --disable-rustbuild
+ RUST_CHECK_TARGET: check
MINGW_URL: https://s3.amazonaws.com/rust-lang-ci
MINGW_ARCHIVE: i686-4.9.2-release-win32-dwarf-rt_v4-rev4.7z
MINGW_DIR: mingw32
- MSYS_BITS: 64
- CONFIGURE_ARGS: --enable-llvm-assertions --enable-debug-assertions
- TARGET: x86_64-pc-windows-gnu
- CHECK: check
+ RUST_CHECK_TARGET: check
+ RUST_CONFIGURE_ARGS: --build=x86_64-pc-windows-gnu
MINGW_URL: https://s3.amazonaws.com/rust-lang-ci
MINGW_ARCHIVE: x86_64-4.9.2-release-win32-seh-rt_v4-rev4.7z
MINGW_DIR: mingw64
- if NOT defined MINGW_URL set PATH=C:\msys64\mingw%MSYS_BITS%\bin;C:\msys64\usr\bin;%PATH%
test_script:
- - sh ./configure
- %CONFIGURE_ARGS%
- --build=%TARGET%
- - bash -c "make -j$(nproc)"
- - bash -c "make %CHECK% -j$(nproc)"
+ - git submodule update --init
+ - set SRC=.
+ - set NO_CCACHE=1
+ - sh src/ci/run.sh
cache:
- - build/%TARGET%/llvm -> src/rustllvm/llvm-auto-clean-trigger
- - "%TARGET%/llvm -> src/rustllvm/llvm-auto-clean-trigger"
+ - "build/i686-pc-windows-gnu/llvm -> src/rustllvm/llvm-auto-clean-trigger"
+ - "build/x86_64-pc-windows-gnu/llvm -> src/rustllvm/llvm-auto-clean-trigger"
+ - "build/i686-pc-windows-msvc/llvm -> src/rustllvm/llvm-auto-clean-trigger"
+ - "build/x86_64-pc-windows-msvc/llvm -> src/rustllvm/llvm-auto-clean-trigger"
+ - "i686-pc-windows-gnu/llvm -> src/rustllvm/llvm-auto-clean-trigger"
+ - "x86_64-pc-windows-gnu/llvm -> src/rustllvm/llvm-auto-clean-trigger"
+ - "i686-pc-windows-msvc/llvm -> src/rustllvm/llvm-auto-clean-trigger"
+ - "x86_64-pc-windows-msvc/llvm -> src/rustllvm/llvm-auto-clean-trigger"
branches:
only:
opt dist-host-only 0 "only install bins for the host architecture"
opt inject-std-version 1 "inject the current compiler version of libstd into programs"
opt llvm-version-check 1 "check if the LLVM version is supported, build anyway"
-opt rustbuild 0 "use the rust and cargo based build system"
+opt rustbuild 1 "use the rust and cargo based build system"
opt codegen-tests 1 "run the src/test/codegen tests"
opt option-checking 1 "complain about unrecognized options in this configure script"
opt ninja 0 "build LLVM using the Ninja generator (for MSVC, requires building in the correct environment)"
valopt aarch64-linux-android-ndk "" "aarch64-linux-android NDK standalone path"
valopt nacl-cross-path "" "NaCl SDK path (Pepper Canary is recommended). Must be absolute!"
valopt musl-root "/usr/local" "MUSL root installation directory (deprecated)"
-valopt musl-root-x86_64 "/usr/local" "x86_64-unknown-linux-musl install directory"
-valopt musl-root-i686 "/usr/local" "i686-unknown-linux-musl install directory"
-valopt musl-root-arm "/usr/local" "arm-unknown-linux-musleabi install directory"
-valopt musl-root-armhf "/usr/local" "arm-unknown-linux-musleabihf install directory"
-valopt musl-root-armv7 "/usr/local" "armv7-unknown-linux-musleabihf install directory"
+valopt musl-root-x86_64 "" "x86_64-unknown-linux-musl install directory"
+valopt musl-root-i686 "" "i686-unknown-linux-musl install directory"
+valopt musl-root-arm "" "arm-unknown-linux-musleabi install directory"
+valopt musl-root-armhf "" "arm-unknown-linux-musleabihf install directory"
+valopt musl-root-armv7 "" "armv7-unknown-linux-musleabihf install directory"
valopt extra-filename "" "Additional data that is hashed and passed to the -C extra-filename flag"
if [ -e ${CFG_SRC_DIR}.git ]
fi
# For building LLVM
-probe_need CFG_CMAKE cmake
+if [ -z "$CFG_LLVM_ROOT" ]
+then
+ probe_need CFG_CMAKE cmake
+fi
# On MacOS X, invoking `javac` pops up a dialog if the JDK is not
# installed. Since `javac` is only used if `antlr4` is available,
fi
fi
-if [ -z "$CFG_ENABLE_RUSTBUILD" ]; then
+if [ -n "$CFG_DISABLE_RUSTBUILD" ]; then
step_msg "making directories"
step_msg "configuring submodules"
# Have to be in the top of src directory for this
-if [ -z $CFG_DISABLE_MANAGE_SUBMODULES ] && [ -z $CFG_ENABLE_RUSTBUILD ]
+if [ -z "$CFG_DISABLE_MANAGE_SUBMODULES" ] && [ -n "$CFG_DISABLE_RUSTBUILD" ]
then
cd ${CFG_SRC_DIR}
;;
esac
- if [ -n "$CFG_ENABLE_RUSTBUILD" ]
+ if [ -z "$CFG_DISABLE_RUSTBUILD" ]
then
msg "not configuring LLVM, rustbuild in use"
do_reconfigure=0
- elif [ -z $CFG_LLVM_ROOT ]
+ elif [ -z "$CFG_LLVM_ROOT" ]
then
LLVM_BUILD_DIR=${CFG_BUILD_DIR}$t/llvm
LLVM_INST_DIR=$LLVM_BUILD_DIR
putvar $CFG_LLVM_INST_DIR
done
-if [ -n "$CFG_ENABLE_RUSTBUILD" ]
+if [ -z "$CFG_DISABLE_RUSTBUILD" ]
then
INPUT_MAKEFILE=src/bootstrap/mk/Makefile.in
else
step_msg "complete"
fi
-msg "run \`make help\`"
+if [ -z "$CFG_DISABLE_RUSTBUILD" ]; then
+ msg "NOTE you have now configured rust to use a rewritten build system"
+ msg " called rustbuild, and as a result this may have bugs that "
+ msg " you did not see before. If you experience any issues you can"
+ msg " go back to the old build system with --disable-rustbuild and"
+ msg " please feel free to report any bugs!"
+ msg ""
+ msg "run \`python x.py --help\`"
+else
+ warn "the makefile-based build system is deprecated in favor of rustbuild"
+ msg ""
+  msg "It is recommended that you avoid passing --disable-rustbuild, as the"
+  msg "makefiles will be deleted on 2017-02-02. If you"
+ msg "encounter bugs with rustbuild please file issues against rust-lang/rust"
+ msg ""
+ msg "run \`make help\`"
+fi
+
msg
--- /dev/null
+# i686-unknown-openbsd configuration
+CC_i686-unknown-openbsd=$(CC)
+CXX_i686-unknown-openbsd=$(CXX)
+CPP_i686-unknown-openbsd=$(CPP)
+AR_i686-unknown-openbsd=$(AR)
+CFG_LIB_NAME_i686-unknown-openbsd=lib$(1).so
+CFG_STATIC_LIB_NAME_i686-unknown-openbsd=lib$(1).a
+CFG_LIB_GLOB_i686-unknown-openbsd=lib$(1)-*.so
+CFG_LIB_DSYM_GLOB_i686-unknown-openbsd=$(1)-*.dylib.dSYM
+CFG_JEMALLOC_CFLAGS_i686-unknown-openbsd := -m32 -I/usr/include $(CFLAGS)
+CFG_GCCISH_CFLAGS_i686-unknown-openbsd := -g -fPIC -m32 -I/usr/include $(CFLAGS)
+CFG_GCCISH_LINK_FLAGS_i686-unknown-openbsd := -shared -fPIC -g -pthread -m32
+CFG_GCCISH_DEF_FLAG_i686-unknown-openbsd := -Wl,--export-dynamic,--dynamic-list=
+CFG_LLC_FLAGS_i686-unknown-openbsd :=
+CFG_INSTALL_NAME_i686-unknown-openbsd =
+CFG_EXE_SUFFIX_i686-unknown-openbsd :=
+CFG_WINDOWSY_i686-unknown-openbsd :=
+CFG_UNIXY_i686-unknown-openbsd := 1
+CFG_LDPATH_i686-unknown-openbsd :=
+CFG_RUN_i686-unknown-openbsd=$(2)
+CFG_RUN_TARG_i686-unknown-openbsd=$(call CFG_RUN_i686-unknown-openbsd,,$(2))
+CFG_GNU_TRIPLE_i686-unknown-openbsd := i686-unknown-openbsd
+RUSTC_FLAGS_i686-unknown-openbsd=-C linker=$(call FIND_COMPILER,$(CC))
+CFG_DISABLE_JEMALLOC_i686-unknown-openbsd := 1
* `doc` - a command for building documentation. Like above, it can take
  arguments for what to document.
-If you're more used to `./configure` and `make`, however, then you can also
-configure the build system to use rustbuild instead of the old makefiles:
-
-```
-./configure --enable-rustbuild
-make
-```
-
-Afterwards the `Makefile` which is generated will have a few commands like
-`make check`, `make tidy`, etc.
-
## Configuring rustbuild
There are currently two primary methods for configuring the rustbuild build
can also be passed as `--config path/to/config.toml` if the build system is
being invoked manually (via the python script).
+Finally, rustbuild makes use of the [gcc-rs crate] which has [its own
+method][env-vars] of configuring C compilers and C flags via environment
+variables.
+
+[gcc-rs crate]: https://github.com/alexcrichton/gcc-rs
+[env-vars]: https://github.com/alexcrichton/gcc-rs#external-configuration-via-environment-variables
+
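For instance, per-target compiler overrides can be exported in the environment before invoking the build. The variable names below follow gcc-rs's documented convention of suffixing the target triple with dashes replaced by underscores; the compiler and flag values are illustrative only:

```shell
# Global default C compiler and flags (illustrative values)
export CC=clang
export CFLAGS="-g -O2"
# Override the C compiler for one target only
export CC_x86_64_unknown_linux_musl=musl-gcc
```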
## Build stages
The rustbuild build system goes through a few phases to actually build the
you up and running. Some general areas that you may be interested in modifying
are:
-* Adding a new build tool? Take a look at `build/step.rs` for examples of other
- tools, as well as `build/mod.rs`.
+* Adding a new build tool? Take a look at `bootstrap/step.rs` for examples of
+ other tools.
* Adding a new compiler crate? Look no further! Adding crates can be done by
adding a new directory with `Cargo.toml` followed by configuring all
`Cargo.toml` files accordingly.
* Adding a new dependency from crates.io? We're still working on that, so hold
off on that for now.
-* Adding a new configuration option? Take a look at `build/config.rs` or perhaps
- `build/flags.rs` and then modify the build elsewhere to read that option.
-* Adding a sanity check? Take a look at `build/sanity.rs`.
+* Adding a new configuration option? Take a look at `bootstrap/config.rs` or
+ perhaps `bootstrap/flags.rs` and then modify the build elsewhere to read that
+ option.
+* Adding a sanity check? Take a look at `bootstrap/sanity.rs`.
If you have any questions feel free to reach out on `#rust-internals` on IRC or
open an issue in the bug tracker!
sha_path = sha_file.name
try:
- download(sha_path, sha_url, verbose)
+ download(sha_path, sha_url, False, verbose)
if os.path.exists(path):
if verify(path, sha_path, False):
- print("using already-download file " + path)
+ if verbose:
+                print("using already-downloaded file " + path)
return
else:
- print("ignoring already-download file " + path + " due to failed verification")
+ if verbose:
+                print("ignoring already-downloaded file " + path + " due to failed verification")
os.unlink(path)
- download(temp_path, url, verbose)
- if not verify(temp_path, sha_path, True):
+ download(temp_path, url, True, verbose)
+ if not verify(temp_path, sha_path, verbose):
raise RuntimeError("failed verification")
- print("moving {} to {}".format(temp_path, path))
+ if verbose:
+ print("moving {} to {}".format(temp_path, path))
shutil.move(temp_path, path)
finally:
- delete_if_present(sha_path)
- delete_if_present(temp_path)
+ delete_if_present(sha_path, verbose)
+ delete_if_present(temp_path, verbose)
-def delete_if_present(path):
+def delete_if_present(path, verbose):
if os.path.isfile(path):
- print("removing " + path)
+ if verbose:
+ print("removing " + path)
os.unlink(path)
-def download(path, url, verbose):
- print("downloading {} to {}".format(url, path))
+def download(path, url, probably_big, verbose):
+ if probably_big or verbose:
+ print("downloading {}".format(url))
# see http://serverfault.com/questions/301128/how-to-download
if sys.platform == 'win32':
run(["PowerShell.exe", "/nologo", "-Command",
".DownloadFile('{}', '{}')".format(url, path)],
verbose=verbose)
else:
- run(["curl", "-o", path, url], verbose=verbose)
+ if probably_big or verbose:
+ option = "-#"
+ else:
+ option = "-s"
+ run(["curl", option, "-Sf", "-o", path, url], verbose=verbose)
def verify(path, sha_path, verbose):
- print("verifying " + path)
+ if verbose:
+ print("verifying " + path)
with open(path, "rb") as f:
found = hashlib.sha256(f.read()).hexdigest()
with open(sha_path, "r") as f:
expected, _ = f.readline().split()
verified = found == expected
- if not verified and verbose:
+ if not verified:
print("invalid checksum:\n"
" found: {}\n"
" expected: {}".format(found, expected))
if self.rustc().startswith(self.bin_root()) and \
(not os.path.exists(self.rustc()) or self.rustc_out_of_date()):
+ self.print_what_it_means_to_bootstrap()
if os.path.exists(self.bin_root()):
shutil.rmtree(self.bin_root())
channel = self.stage0_rustc_channel()
if self.cargo().startswith(self.bin_root()) and \
(not os.path.exists(self.cargo()) or self.cargo_out_of_date()):
+ self.print_what_it_means_to_bootstrap()
channel = self.stage0_cargo_channel()
filename = "cargo-{}-{}.tar.gz".format(channel, self.build)
url = "https://static.rust-lang.org/cargo-dist/" + self.stage0_cargo_date()
else:
return ''
+ def print_what_it_means_to_bootstrap(self):
+ if hasattr(self, 'printed'):
+ return
+ self.printed = True
+ if os.path.exists(self.bootstrap_binary()):
+ return
+        if '--help' not in sys.argv or len(sys.argv) == 1:
+ return
+
+ print('info: the build system for Rust is written in Rust, so this')
+ print(' script is now going to download a stage0 rust compiler')
+ print(' and then compile the build system itself')
+ print('')
+ print('info: in the meantime you can read more about rustbuild at')
+ print(' src/bootstrap/README.md before the download finishes')
+
+ def bootstrap_binary(self):
+ return os.path.join(self.build_dir, "bootstrap/debug/bootstrap")
+
def build_bootstrap(self):
+ self.print_what_it_means_to_bootstrap()
build_dir = os.path.join(self.build_dir, "bootstrap")
if self.clean and os.path.exists(build_dir):
shutil.rmtree(build_dir)
rb.use_vendored_sources = '\nvendor = true' in rb.config_toml or \
'CFG_ENABLE_VENDOR' in rb.config_mk
+ if 'SUDO_USER' in os.environ:
+ if os.environ['USER'] != os.environ['SUDO_USER']:
+ rb.use_vendored_sources = True
+ print('info: looks like you are running this command under `sudo`')
+ print(' and so in order to preserve your $HOME this will now')
+ print(' use vendored sources by default. Note that if this')
+ print(' does not work you should run a normal build first')
+        print('          before running a command like `sudo make install`')
+
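The sudo detection above relies on `sudo` recording the invoking user's name in `SUDO_USER` while `USER` becomes the effective user (typically root). A minimal sketch of the same predicate, with a hypothetical helper name:

```python
def should_default_to_vendoring(environ):
    # Under `sudo`, SUDO_USER names the invoking user while USER is the
    # effective user; when they differ, prefer vendored sources so the
    # build never writes to the invoking user's $HOME.
    return ('SUDO_USER' in environ
            and environ.get('USER') != environ['SUDO_USER'])

print(should_default_to_vendoring({'USER': 'root', 'SUDO_USER': 'alice'}))  # True
print(should_default_to_vendoring({'USER': 'alice'}))                       # False
```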
if rb.use_vendored_sources:
if not os.path.exists('.cargo'):
os.makedirs('.cargo')
- f = open('.cargo/config','w')
- f.write("""
- [source.crates-io]
- replace-with = 'vendored-sources'
- registry = 'https://example.com'
-
- [source.vendored-sources]
- directory = '{}/src/vendor'
- """.format(rb.rust_root))
- f.close()
+ with open('.cargo/config','w') as f:
+ f.write("""
+ [source.crates-io]
+ replace-with = 'vendored-sources'
+ registry = 'https://example.com'
+
+ [source.vendored-sources]
+ directory = '{}/src/vendor'
+ """.format(rb.rust_root))
else:
if os.path.exists('.cargo'):
shutil.rmtree('.cargo')
+
data = stage0_data(rb.rust_root)
rb._rustc_channel, rb._rustc_date = data['rustc'].split('-', 1)
rb._cargo_channel, rb._cargo_date = data['cargo'].split('-', 1)
sys.stdout.flush()
# Run the bootstrap
- args = [os.path.join(rb.build_dir, "bootstrap/debug/bootstrap")]
+ args = [rb.bootstrap_binary()]
args.extend(sys.argv[1:])
env = os.environ.copy()
env["BUILD"] = rb.build
if let Some(cc) = config.and_then(|c| c.cc.as_ref()) {
cfg.compiler(cc);
} else {
- set_compiler(&mut cfg, "gcc", target, config);
+ set_compiler(&mut cfg, "gcc", target, config, build);
}
let compiler = cfg.get_compiler();
if let Some(cxx) = config.and_then(|c| c.cxx.as_ref()) {
cfg.compiler(cxx);
} else {
- set_compiler(&mut cfg, "g++", host, config);
+ set_compiler(&mut cfg, "g++", host, config, build);
}
let compiler = cfg.get_compiler();
build.verbose(&format!("CXX_{} = {:?}", host, compiler.path()));
fn set_compiler(cfg: &mut gcc::Config,
gnu_compiler: &str,
target: &str,
- config: Option<&Target>) {
+ config: Option<&Target>,
+ build: &Build) {
match target {
// When compiling for android we may have the NDK configured in the
// config.toml in which case we look there. Otherwise the default
}
}
+ "mips-unknown-linux-musl" => {
+ cfg.compiler("mips-linux-musl-gcc");
+ }
+ "mipsel-unknown-linux-musl" => {
+ cfg.compiler("mipsel-linux-musl-gcc");
+ }
+
+ t if t.contains("musl") => {
+ if let Some(root) = build.musl_root(target) {
+ let guess = root.join("bin/musl-gcc");
+ if guess.exists() {
+ cfg.compiler(guess);
+ }
+ }
+ }
+
_ => {}
}
}
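The fallback chain above, a triple-specific gcc for the MIPS targets first, then `musl-gcc` under a configured musl root for other musl targets, can be sketched in isolation. The function name and root path below are hypothetical stand-ins, not the real `set_compiler`/`musl_root` API:

```rust
// Sketch of the musl compiler selection above: returns the C compiler to
// use for a target triple, if any musl-specific override applies.
fn guess_musl_cc(triple: &str, musl_root: Option<&str>) -> Option<String> {
    match triple {
        "mips-unknown-linux-musl" => Some("mips-linux-musl-gcc".to_string()),
        "mipsel-unknown-linux-musl" => Some("mipsel-linux-musl-gcc".to_string()),
        t if t.contains("musl") => {
            // For other musl targets, fall back to <root>/bin/musl-gcc.
            musl_root.map(|root| format!("{}/bin/musl-gcc", root))
        }
        _ => None,
    }
}

fn main() {
    assert_eq!(guess_musl_cc("mips-unknown-linux-musl", None).as_deref(),
               Some("mips-linux-musl-gcc"));
    assert_eq!(guess_musl_cc("x86_64-unknown-linux-musl", Some("/musl")).as_deref(),
               Some("/musl/bin/musl-gcc"));
    assert_eq!(guess_musl_cc("x86_64-unknown-linux-gnu", None), None);
    println!("ok");
}
```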
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-//! Implementation of the various `check-*` targets of the build system.
+//! Implementation of the test-related targets of the build system.
//!
//! This file implements the various regression test suites that we execute on
//! our CI.
pub fn linkcheck(build: &Build, stage: u32, host: &str) {
println!("Linkcheck stage{} ({})", stage, host);
let compiler = Compiler::new(stage, host);
+
+ let _time = util::timeit();
build.run(build.tool_cmd(&compiler, "linkchecker")
.arg(build.out.join(host).join("doc")));
}
let out_dir = build.out.join("ct");
t!(fs::create_dir_all(&out_dir));
+ let _time = util::timeit();
build.run(build.tool_cmd(compiler, "cargotest")
.env("PATH", newpath)
.arg(&build.cargo)
target: &str,
mode: &str,
suite: &str) {
- println!("Check compiletest {} ({} -> {})", suite, compiler.host, target);
+ println!("Check compiletest suite={} mode={} ({} -> {})",
+ suite, mode, compiler.host, target);
let mut cmd = build.tool_cmd(compiler, "compiletest");
// compiletest currently has... a lot of arguments, so let's just pass all
// Running a C compiler on MSVC requires a few env vars to be set, to be
// sure to set them here.
+ //
+ // Note that if we encounter `PATH` we make sure to append to our own `PATH`
+ // rather than stomp over it.
if target.contains("msvc") {
for &(ref k, ref v) in build.cc[target].0.env() {
if k != "PATH" {
}
}
cmd.env("RUSTC_BOOTSTRAP", "1");
+ build.add_rust_test_threads(&mut cmd);
cmd.arg("--adb-path").arg("adb");
cmd.arg("--adb-test-dir").arg(ADB_TEST_DIR);
cmd.arg("--android-cross-path").arg("");
}
+ let _time = util::timeit();
build.run(&mut cmd);
}
// Do a breadth-first traversal of the `src/doc` directory and just run
// tests for all files that end in `*.md`
let mut stack = vec![build.src.join("src/doc")];
+ let _time = util::timeit();
while let Some(p) = stack.pop() {
if p.is_dir() {
let dir = testdir(build, compiler.host);
t!(fs::create_dir_all(&dir));
let output = dir.join("error-index.md");
+
+ let _time = util::timeit();
build.run(build.tool_cmd(compiler, "error_index_generator")
.arg("markdown")
.arg(&output)
fn markdown_test(build: &Build, compiler: &Compiler, markdown: &Path) {
let mut cmd = Command::new(build.rustdoc(compiler));
build.add_rustc_lib_path(compiler, &mut cmd);
+ build.add_rust_test_threads(&mut cmd);
cmd.arg("--test");
cmd.arg(markdown);
dylib_path.insert(0, build.sysroot_libdir(compiler, target));
cargo.env(dylib_path_var(), env::join_paths(&dylib_path).unwrap());
+    if target.contains("android") || target.contains("emscripten") {
+        cargo.arg("--no-run");
+    }
+
+ cargo.arg("--");
+
if build.config.quiet_tests {
- cargo.arg("--");
cargo.arg("--quiet");
}
+ let _time = util::timeit();
+
if target.contains("android") {
- build.run(cargo.arg("--no-run"));
+ build.run(&mut cargo);
krate_android(build, compiler, target, mode);
} else if target.contains("emscripten") {
- build.run(cargo.arg("--no-run"));
+ build.run(&mut cargo);
krate_emscripten(build, compiler, target, mode);
} else {
cargo.args(&build.flags.cmd.test_args());
target,
compiler.host,
test_file_name);
+ let quiet = if build.config.quiet_tests { "--quiet" } else { "" };
let program = format!("(cd {dir}; \
LD_LIBRARY_PATH=./{target} ./{test} \
--logfile {log} \
+ {quiet} \
{args})",
dir = ADB_TEST_DIR,
target = target,
test = test_file_name,
log = log,
+ quiet = quiet,
args = build.flags.cmd.test_args().join(" "));
let output = output(Command::new("adb").arg("shell").arg(&program));
let test_file_name = test.to_string_lossy().into_owned();
println!("running {}", test_file_name);
let nodejs = build.config.nodejs.as_ref().expect("nodejs not configured");
- let status = Command::new(nodejs)
- .arg(&test_file_name)
- .stderr(::std::process::Stdio::inherit())
- .status();
- match status {
- Ok(status) => {
- if !status.success() {
- panic!("some tests failed");
- }
- }
- Err(e) => panic!(format!("failed to execute command: {}", e)),
- };
+ let mut cmd = Command::new(nodejs);
+ cmd.arg(&test_file_name)
+ .stderr(::std::process::Stdio::inherit());
+ if build.config.quiet_tests {
+ cmd.arg("--quiet");
+ }
+ build.run(&mut cmd);
}
}
if !path.exists() {
return
}
+ if path.is_file() {
+ return do_op(path, "remove file", |p| fs::remove_file(p));
+ }
for file in t!(fs::read_dir(path)) {
let file = t!(file).path();
install: m.opt_present("install"),
}
}
+ "--help" => usage(0, &opts),
cmd => {
println!("unknown command: {}", cmd);
usage(1, &opts);
//! This module, and its descendants, are the implementation of the Rust build
//! system. Most of this build system is backed by Cargo but the outer layer
//! here serves as the ability to orchestrate calling Cargo, sequencing Cargo
-//! builds, building artifacts like LLVM, etc.
+//! builds, building artifacts like LLVM, etc. The goals of rustbuild are:
//!
-//! More documentation can be found in each respective module below.
+//! * To be an easily understandable, easily extensible, and maintainable build
+//! system.
+//! * Leverage standard tools in the Rust ecosystem to build the compiler, aka
+//! crates.io and Cargo.
+//! * A standard interface to build across all platforms, including MSVC
+//!
+//! ## Architecture
+//!
+//! Although this build system defers most of the complicated logic to Cargo
+//! itself, it still needs to maintain a list of targets and dependencies which
+//! it can itself perform. Rustbuild is made up of a list of rules with
+//! dependencies amongst them (created in the `step` module) and then knows how
+//! to execute each in sequence. Each time rustbuild is invoked, it will simply
+//! iterate through this list of steps and execute each serially in turn. For
+//! each step rustbuild relies on the step internally being incremental and
+//! parallel. Note, though, that the `-j` parameter to rustbuild gets forwarded
+//! to appropriate test harnesses and such.
+//!
+//! Most of the "meaty" steps that matter are backed by Cargo, which does indeed
+//! have its own parallelism and incremental management. Later steps, like
+//! tests, aren't incremental and simply run the entire suite currently.
+//!
+//! When you execute `x.py build`, the steps which are executed are:
+//!
+//! * First, the python script is run. This will automatically download the
+//! stage0 rustc and cargo according to `src/stage0.txt`, or using the cached
+//! versions if they're available. These are then used to compile rustbuild
+//! itself (using Cargo). Finally, control is then transferred to rustbuild.
+//!
+//! * Rustbuild takes over, performs sanity checks, probes the environment,
+//! reads configuration, builds up a list of steps, and then starts executing
+//! them.
+//!
+//! * The stage0 libstd is compiled
+//! * The stage0 libtest is compiled
+//! * The stage0 librustc is compiled
+//! * The stage1 compiler is assembled
+//! * The stage1 libstd, libtest, librustc are compiled
+//! * The stage2 compiler is assembled
+//! * The stage2 libstd, libtest, librustc are compiled
+//!
+//! Each step is driven by a separate Cargo project and rustbuild orchestrates
+//! copying files between steps and otherwise preparing for Cargo to run.
+//!
+//! ## Further information
+//!
+//! More documentation can be found in each respective module below, and you can
+//! also check out the `src/bootstrap/README.md` file for more information.
extern crate build_helper;
extern crate cmake;
use std::collections::HashMap;
use std::env;
+use std::ffi::OsString;
use std::fs::{self, File};
use std::path::{Component, PathBuf, Path};
use std::process::Command;
cc: HashMap<String, (gcc::Tool, Option<PathBuf>)>,
cxx: HashMap<String, gcc::Tool>,
crates: HashMap<String, Crate>,
+ is_sudo: bool,
}
#[derive(Debug)]
};
let local_rebuild = config.local_rebuild;
+ let is_sudo = match env::var_os("SUDO_USER") {
+ Some(sudo_user) => {
+ match env::var_os("USER") {
+ Some(user) => user != sudo_user,
+ None => false,
+ }
+ }
+ None => false,
+ };
+
Build {
flags: flags,
config: config,
crates: HashMap::new(),
lldb_version: None,
lldb_python_dir: None,
+ is_sudo: is_sudo,
}
}
// how the actual compiler itself is called.
//
// These variables are primarily all read by
- // src/bootstrap/{rustc,rustdoc.rs}
+ // src/bootstrap/bin/{rustc.rs,rustdoc.rs}
cargo.env("RUSTC", self.out.join("bootstrap/debug/rustc"))
.env("RUSTC_REAL", self.compiler_path(compiler))
.env("RUSTC_STAGE", stage.to_string())
// Enable usage of unstable features
cargo.env("RUSTC_BOOTSTRAP", "1");
+ self.add_rust_test_threads(&mut cargo);
// Specify some various options for build scripts used throughout
// the build.
if self.config.rust_optimize && cmd != "bench" {
cargo.arg("--release");
}
- if self.config.vendor {
+ if self.config.vendor || self.is_sudo {
cargo.arg("--frozen");
}
return cargo
fn tool_cmd(&self, compiler: &Compiler, tool: &str) -> Command {
let mut cmd = Command::new(self.tool(&compiler, tool));
let host = compiler.host;
- let paths = vec![
+ let mut paths = vec![
self.cargo_out(compiler, Mode::Libstd, host).join("deps"),
self.cargo_out(compiler, Mode::Libtest, host).join("deps"),
self.cargo_out(compiler, Mode::Librustc, host).join("deps"),
self.cargo_out(compiler, Mode::Tool, host).join("deps"),
];
+
+ // On MSVC a tool may invoke a C compiler (e.g. compiletest in run-make
+ // mode) and that C compiler may need some extra PATH modification. Do
+ // so here.
+ if compiler.host.contains("msvc") {
+ let curpaths = env::var_os("PATH").unwrap_or(OsString::new());
+ let curpaths = env::split_paths(&curpaths).collect::<Vec<_>>();
+ for &(ref k, ref v) in self.cc[compiler.host].0.env() {
+ if k != "PATH" {
+ continue
+ }
+ for path in env::split_paths(v) {
+ if !curpaths.contains(&path) {
+ paths.push(path);
+ }
+ }
+ }
+ }
add_lib_path(paths, &mut cmd);
return cmd
}
add_lib_path(vec![self.rustc_libdir(compiler)], cmd);
}
+ /// Adds the `RUST_TEST_THREADS` env var if necessary
+ fn add_rust_test_threads(&self, cmd: &mut Command) {
+ if env::var_os("RUST_TEST_THREADS").is_none() {
+ cmd.env("RUST_TEST_THREADS", self.jobs().to_string());
+ }
+ }
+
/// Returns the compiler's libdir where it stores the dynamic libraries that
/// it itself links against.
///
$(Q)$(BOOTSTRAP) build $(BOOTSTRAP_ARGS)
$(Q)$(BOOTSTRAP) doc $(BOOTSTRAP_ARGS)
-# Don’t use $(Q) here, always show how to invoke the bootstrap script directly
help:
- $(BOOTSTRAP) --help
+ $(Q)echo 'Welcome to the rustbuild build system!'
+ $(Q)echo
+ $(Q)echo This makefile is a thin veneer over the ./x.py script located
+ $(Q)echo in this directory. To get the full power of the build system
+ $(Q)echo you can run x.py directly.
+ $(Q)echo
+ $(Q)echo To learn more run \`./x.py --help\`
clean:
$(Q)$(BOOTSTRAP) clean $(BOOTSTRAP_ARGS)
dist:
$(Q)$(BOOTSTRAP) dist $(BOOTSTRAP_ARGS)
install:
-ifeq (root user, $(USER) $(patsubst %,user,$(SUDO_USER)))
- $(Q)echo "'sudo make install' is not supported currently."
-else
$(Q)$(BOOTSTRAP) dist --install $(BOOTSTRAP_ARGS)
-endif
tidy:
$(Q)$(BOOTSTRAP) test src/tools/tidy $(BOOTSTRAP_ARGS) --stage 0
-check-stage2-android:
- $(Q)$(BOOTSTRAP) --step check-target --target arm-linux-androideabi
+check-stage2-T-arm-linux-androideabi-H-x86_64-unknown-linux-gnu:
+ $(Q)$(BOOTSTRAP) test --target arm-linux-androideabi
+check-stage2-T-x86_64-unknown-linux-musl-H-x86_64-unknown-linux-gnu:
	$(Q)$(BOOTSTRAP) test --target x86_64-unknown-linux-musl
+
.PHONY: dist
use gcc;
use Build;
-use util::up_to_date;
+use util::{self, up_to_date};
/// Compile LLVM for `target`.
pub fn llvm(build: &Build, target: &str) {
println!("Building LLVM for {}", target);
+ let _time = util::timeit();
let _ = fs::remove_dir_all(&dst.join("build"));
t!(fs::create_dir_all(&dst.join("build")));
let assertions = if build.config.llvm_assertions {"ON"} else {"OFF"};
println!("Building test helpers");
t!(fs::create_dir_all(&dst));
let mut cfg = gcc::Config::new();
+
+ // We may have found various cross-compilers a little differently due to our
+ // extra configuration, so inform gcc of these compilers. Note, though, that
+ // on MSVC we still need gcc's detection of env vars (ugh).
+ if !target.contains("msvc") {
+ if let Some(ar) = build.ar(target) {
+ cfg.archiver(ar);
+ }
+ cfg.compiler(build.cc(target));
+ }
+
cfg.cargo_metadata(false)
.out_dir(&dst)
.target(target)
}
}
let have_cmd = |cmd: &OsStr| {
- for path in env::split_paths(&path).map(|p| p.join(cmd)) {
- if fs::metadata(&path).is_ok() ||
- fs::metadata(path.with_extension("exe")).is_ok() {
- return Some(path);
+ for path in env::split_paths(&path) {
+ let target = path.join(cmd);
+ let mut cmd_alt = cmd.to_os_string();
+ cmd_alt.push(".exe");
+ if target.exists() ||
+ target.with_extension("exe").exists() ||
+ target.join(cmd_alt).exists() {
+ return Some(target);
}
}
return None;
// option. This file may not be copied, modified, or distributed
// except according to those terms.
+//! Definition of steps of the build system.
+//!
+//! This is where some of the real meat of rustbuild is located, in how we
+//! define targets and the dependencies amongst them. This file can sort of be
+//! viewed as just defining targets in a makefile, each of which shells out to
+//! a predefined function elsewhere that knows how to execute the target.
+//!
+//! The primary function here you're likely interested in is the `build_rules`
+//! function. This will create a `Rules` structure which basically just lists
+//! everything that rustbuild can do. Each rule has a human-readable name, a
+//! path associated with it, some dependencies, and then a closure of how to
+//! actually perform the rule.
+//!
+//! All steps below are defined in self-contained units, so adding a new target
+//! to the build system should just involve adding the meta information here
+//! along with the actual implementation elsewhere. You can find more comments
+//! about how to define rules themselves below.
+
use std::collections::{HashMap, HashSet};
use std::mem;
use native;
use {Compiler, Build, Mode};
-#[derive(PartialEq, Eq, Hash, Clone, Debug)]
-struct Step<'a> {
- name: &'a str,
- stage: u32,
- host: &'a str,
- target: &'a str,
-}
-
-impl<'a> Step<'a> {
- fn name(&self, name: &'a str) -> Step<'a> {
- Step { name: name, ..*self }
- }
-
- fn stage(&self, stage: u32) -> Step<'a> {
- Step { stage: stage, ..*self }
- }
-
- fn host(&self, host: &'a str) -> Step<'a> {
- Step { host: host, ..*self }
- }
-
- fn target(&self, target: &'a str) -> Step<'a> {
- Step { target: target, ..*self }
- }
-
- fn compiler(&self) -> Compiler<'a> {
- Compiler::new(self.stage, self.host)
- }
-}
-
pub fn run(build: &Build) {
let rules = build_rules(build);
let steps = rules.plan();
}
pub fn build_rules(build: &Build) -> Rules {
- let mut rules: Rules = Rules::new(build);
+ let mut rules = Rules::new(build);
+
+ // This is the first rule that we're going to define for rustbuild, which is
+ // used to compile LLVM itself. All rules are added through the `rules`
+ // structure created above and are configured through a builder-style
+ // interface.
+ //
+ // First up we see the `build` method. This represents a rule that's part of
+ // the top-level `build` subcommand. For example `./x.py build` is what this
+ // is associating with. Note that this is normally only relevant if you flag
+ // a rule as `default`, which we'll talk about later.
+ //
+ // Next up we'll see two arguments to this method:
+ //
+ // * `llvm` - this is the "human readable" name of this target. This name is
+ // not accessed anywhere outside this file itself (e.g. not in
+ // the CLI nor elsewhere in rustbuild). The purpose of this is to
+ // easily define dependencies between rules. That is, other rules
+ // will depend on this with the name "llvm".
+ // * `src/llvm` - this is the relevant path to the rule that we're working
+ // with. This path is the engine behind how commands like
+ // `./x.py build src/llvm` work. This should typically point
+ // to the relevant component, but if there's not really a
+ // path to be assigned here you can pass something like
+ // `path/to/nowhere` to ignore it.
+ //
+ // After we create the rule with the `build` method we can then configure
+ // various aspects of it. For example this LLVM rule uses `.host(true)` to
+ // flag that it's a rule only for host targets. In other words, LLVM isn't
+ // compiled for targets configured through `--target` (e.g. those we're just
+ // building a standard library for).
+ //
+ // Next up the `dep` method will add a dependency to this rule. The closure
+ // is yielded the step that represents executing the `llvm` rule itself
+ // (containing information like stage, host, target, ...) and then it must
+ // return a target that the step depends on. Here LLVM is actually
+ // interesting where a cross-compiled LLVM depends on the host LLVM, but
+ // otherwise it has no dependencies.
+ //
+ // To handle this we do a bit of dynamic dispatch to see what the dependency
+ // is. If we're building an LLVM for the build triple, then we don't actually
+ // have any dependencies! To express that we return a dependency on the
+ // "dummy" target which does nothing.
+ //
+ // If we're building a cross-compiled LLVM, however, we need to assemble the
+ // libraries from the previous compiler. This step has the same name as
+ // ours (llvm) but we want it for a different target, so we use the
+ // builder-style methods on `Step` to configure this target to the build
+ // triple.
+ //
+ // Finally, to finish off this rule, we define how to actually execute it.
+ // That logic is all defined in the `native` module so we just delegate to
+ // the relevant function there. The argument to the closure passed to `run`
+ // is a `Step` (defined below) which encapsulates information like the
+ // stage, target, host, etc.
+ rules.build("llvm", "src/llvm")
+ .host(true)
+ .dep(move |s| {
+ if s.target == build.config.build {
+ dummy(s, build)
+ } else {
+ s.target(&build.config.build)
+ }
+ })
+ .run(move |s| native::llvm(build, s.target));
+
+ // Ok! After that example rule that's hopefully enough to explain what's
+ // going on here. You can check out the API docs below and also see a bunch
+ // more examples of rules directly below as well.
+
// dummy rule to do nothing, useful when a dep maps to no deps
rules.build("dummy", "path/to/nowhere");
- fn dummy<'a>(s: &Step<'a>, build: &'a Build) -> Step<'a> {
- s.name("dummy").stage(0)
- .target(&build.config.build)
- .host(&build.config.build)
- }
+
+ // the compiler with no target libraries ready to go
+ rules.build("rustc", "src/rustc")
+ .dep(move |s| {
+ if s.stage == 0 {
+ dummy(s, build)
+ } else {
+ s.name("librustc")
+ .host(&build.config.build)
+ .stage(s.stage - 1)
+ }
+ })
+ .run(move |s| compile::assemble_rustc(build, s.stage, s.target));
// Helper for loading an entire DAG of crates, rooted at `name`
let krates = |name: &str| {
return ret
};
- rules.build("rustc", "path/to/nowhere")
- .dep(move |s| {
- if s.stage == 0 {
- dummy(s, build)
- } else {
- s.name("librustc")
- .host(&build.config.build)
- .stage(s.stage - 1)
- }
- })
- .run(move |s| compile::assemble_rustc(build, s.stage, s.target));
- rules.build("llvm", "src/llvm")
- .host(true)
- .dep(move |s| {
- if s.target == build.config.build {
- dummy(s, build)
- } else {
- s.target(&build.config.build)
- }
- })
- .run(move |s| native::llvm(build, s.target));
-
// ========================================================================
// Crate compilations
//
});
}
for (krate, path, default) in krates("rustc-main") {
+ // We hijacked the `src/rustc` path above for "build just the compiler"
+ // so let's not reinterpret it here as everything and redirect the
+ // `src/rustc` path to a nonexistent path.
+ let path = if path == "src/rustc" {
+ "path/to/nowhere"
+ } else {
+ path
+ };
rules.build(&krate.build_step, path)
.dep(|s| s.name("libtest"))
.dep(move |s| s.name("llvm").host(&build.config.build).stage(0))
.host(true)
.run(move |s| check::cargotest(build, s.stage, s.target));
rules.test("check-tidy", "src/tools/tidy")
- .dep(|s| s.name("tool-tidy"))
+ .dep(|s| s.name("tool-tidy").stage(0))
.default(true)
.host(true)
- .run(move |s| check::tidy(build, s.stage, s.target));
+ .run(move |s| check::tidy(build, 0, s.target));
rules.test("check-error-index", "src/tools/error_index_generator")
.dep(|s| s.name("libstd"))
.dep(|s| s.name("tool-error-index").host(s.host))
.run(move |s| install::install(build, s.stage, s.target));
rules.verify();
- return rules
+ return rules;
+
+ fn dummy<'a>(s: &Step<'a>, build: &'a Build) -> Step<'a> {
+ s.name("dummy").stage(0)
+ .target(&build.config.build)
+ .host(&build.config.build)
+ }
+}
+
+#[derive(PartialEq, Eq, Hash, Clone, Debug)]
+struct Step<'a> {
+ /// Human readable name of the rule this step is executing. Possible names
+ /// are all defined above in `build_rules`.
+ name: &'a str,
+
+ /// The stage this step is executing in. This is typically 0, 1, or 2.
+ stage: u32,
+
+ /// This step will likely involve a compiler, and the target that the
+ /// compiler itself is built for is called the host, which is what this
+ /// variable stores. Typically this is the target of the build machine itself.
+ host: &'a str,
+
+ /// The target that this step represents generating. If you're building a
+ /// standard library for a new suite of targets, for example, this'll be set
+ /// to those targets.
+ target: &'a str,
+}
+
+impl<'a> Step<'a> {
+ /// Creates a new step which is the same as this, except has a new name.
+ fn name(&self, name: &'a str) -> Step<'a> {
+ Step { name: name, ..*self }
+ }
+
+ /// Creates a new step which is the same as this, except has a new stage.
+ fn stage(&self, stage: u32) -> Step<'a> {
+ Step { stage: stage, ..*self }
+ }
+
+ /// Creates a new step which is the same as this, except has a new host.
+ fn host(&self, host: &'a str) -> Step<'a> {
+ Step { host: host, ..*self }
+ }
+
+ /// Creates a new step which is the same as this, except has a new target.
+ fn target(&self, target: &'a str) -> Step<'a> {
+ Step { target: target, ..*self }
+ }
+
+ /// Returns the `Compiler` structure that this step corresponds to.
+ fn compiler(&self) -> Compiler<'a> {
+ Compiler::new(self.stage, self.host)
+ }
}
struct Rule<'a> {
+ /// The human readable name of this target, defined in `build_rules`.
name: &'a str,
+
+ /// The path associated with this target, used in the `./x.py` driver for
+ /// easy and ergonomic specification of what to do.
path: &'a str,
+
+ /// The "kind" of top-level command that this rule is associated with, only
+ /// relevant if this is a default rule.
kind: Kind,
+
+ /// List of dependencies this rule has. Each dependency is a function from a
+ /// step that's being executed to another step that should be executed.
deps: Vec<Box<Fn(&Step<'a>) -> Step<'a> + 'a>>,
+
+ /// How to actually execute this rule. Takes a step with contextual
+ /// information and then executes it.
run: Box<Fn(&Step<'a>) + 'a>,
+
+ /// Whether or not this is a "default" rule. That basically means whether
+ /// it's included by default when you run, for example, a bare `./x.py test`.
default: bool,
+
+ /// Whether or not this is a "host" rule, or in other words whether this is
+ /// only intended for compiler hosts and not for targets that are being
+ /// generated.
host: bool,
}
}
}
+/// Builder pattern returned from the various methods on `Rules` which will add
+/// the rule to the internal list on `Drop`.
struct RuleBuilder<'a: 'b, 'b> {
rules: &'b mut Rules<'a>,
rule: Rule<'a>,
}
}
+ /// Creates a new rule of `Kind::Build` with the specified human readable
+ /// name and path associated with it.
+ ///
+ /// The builder returned should be configured further with information such
+ /// as how to actually run this rule.
fn build<'b>(&'b mut self, name: &'a str, path: &'a str)
-> RuleBuilder<'a, 'b> {
self.rule(name, path, Kind::Build)
}
+ /// Same as `build`, but for `Kind::Test`.
fn test<'b>(&'b mut self, name: &'a str, path: &'a str)
-> RuleBuilder<'a, 'b> {
self.rule(name, path, Kind::Test)
}
+ /// Same as `build`, but for `Kind::Bench`.
fn bench<'b>(&'b mut self, name: &'a str, path: &'a str)
-> RuleBuilder<'a, 'b> {
self.rule(name, path, Kind::Bench)
}
+ /// Same as `build`, but for `Kind::Doc`.
fn doc<'b>(&'b mut self, name: &'a str, path: &'a str)
-> RuleBuilder<'a, 'b> {
self.rule(name, path, Kind::Doc)
}
+ /// Same as `build`, but for `Kind::Dist`.
fn dist<'b>(&'b mut self, name: &'a str, path: &'a str)
-> RuleBuilder<'a, 'b> {
self.rule(name, path, Kind::Dist)
/// Construct the top-level build steps that we're going to be executing,
/// given the subcommand that our build is performing.
fn plan(&self) -> Vec<Step<'a>> {
+ // Ok, the logic here is pretty subtle, and involves quite a few
+ // conditionals. The basic idea here is to:
+ //
+ // 1. First, filter all our rules to the relevant ones. This means that
+ // the command specified corresponds to one of our `Kind` variants,
+ // and we filter all rules based on that.
+ //
+ // 2. Next, we determine which rules we're actually executing. If a
+ // number of path filters were specified on the command line we look
+ // for those, otherwise we look for anything tagged `default`.
+ //
+ // 3. Finally, we generate some steps with host and target information.
+ //
+ // The last step is by far the most complicated and subtle. The basic
+ // thinking here is that we want to take the cartesian product of
+ // specified hosts and targets and build rules with that. The list of
+ // hosts and targets, if not specified, come from how this build was
+ // configured. If the rule we're looking at is a host-only rule then we
+ // ignore the list of targets and instead treat the list of hosts as
+ // the list of targets as well.
+ //
+ // Once the host and target lists are generated we take the cartesian
+ // product of the two and then create a step based off them. Note that
+ // the stage each step is associated with was specified with the `--step`
+ // flag on the command line.
let (kind, paths) = match self.build.flags.cmd {
Subcommand::Build { ref paths } => (Kind::Build, &paths[..]),
Subcommand::Doc { ref paths } => (Kind::Doc, &paths[..]),
} else {
&self.build.config.target
};
- let arr = if rule.host {hosts} else {targets};
+ // If --target was specified but --host wasn't specified, don't run
+ // any host-only tests
+ let arr = if rule.host {
+ if self.build.flags.target.len() > 0 &&
+ self.build.flags.host.len() == 0 {
+ &hosts[..0]
+ } else {
+ hosts
+ }
+ } else {
+ targets
+ };
hosts.iter().flat_map(move |host| {
arr.iter().map(move |target| {
}
}
+ /// Performs topological sort of dependencies rooted at the `step`
+ /// specified, pushing all results onto the `order` vector provided.
+ ///
+ /// In other words, when this method returns, the `order` vector will
+ /// contain a list of steps which if executed in order will eventually
+ /// complete the `step` specified as well.
+ ///
+ /// The `added` set specified here is the set of steps that are already
+ /// present in `order` (and hence don't need to be added again).
fn fill(&self,
step: Step<'a>,
order: &mut Vec<Step<'a>>,
use std::fs;
use std::path::{Path, PathBuf};
use std::process::Command;
+use std::time::Instant;
use filetime::FileTime;
buf
}
+
+pub struct TimeIt(Instant);
+
+/// Returns an RAII structure that, when dropped, prints how long it was alive.
+pub fn timeit() -> TimeIt {
+ TimeIt(Instant::now())
+}
+
+impl Drop for TimeIt {
+ fn drop(&mut self) {
+ let time = self.0.elapsed();
+ println!("\tfinished in {}.{:03}",
+ time.as_secs(),
+ time.subsec_nanos() / 1_000_000);
+ }
+}
curl \
ca-certificates \
python2.7 \
- python-minimal \
git \
cmake \
ccache \
--arm-linux-androideabi-ndk=/android/ndk-arm-9 \
--armv7-linux-androideabi-ndk=/android/ndk-arm-9 \
--i686-linux-android-ndk=/android/ndk-x86-9 \
- --aarch64-linux-android-ndk=/android/ndk-aarch64 \
- --enable-rustbuild
-ENV RUST_CHECK_TARGET check-stage2-android
+ --aarch64-linux-android-ndk=/android/ndk-aarch64
+ENV XPY_CHECK test --target arm-linux-androideabi
RUN mkdir /tmp/obj
RUN chmod 777 /tmp/obj
curl \
ca-certificates \
python2.7 \
- python-minimal \
git \
cmake \
ccache \
src_dir="`dirname $ci_dir`"
root_dir="`dirname $src_dir`"
-docker build \
+docker \
+ build \
--rm \
-t rust-ci \
"`dirname "$script"`/$image"
mkdir -p $HOME/.ccache
mkdir -p $HOME/.cargo
+mkdir -p $root_dir/obj
-exec docker run \
+exec docker \
+ run \
--volume "$root_dir:/checkout:ro" \
- --workdir /tmp/obj \
+ --volume "$root_dir/obj:/checkout/obj" \
+ --workdir /checkout/obj \
--env SRC=/checkout \
--env CCACHE_DIR=/ccache \
--volume "$HOME/.ccache:/ccache" \
curl \
ca-certificates \
python2.7 \
- python-minimal \
git \
cmake \
ccache \
AR_x86_64_unknown_freebsd=x86_64-unknown-freebsd10-ar \
CC_x86_64_unknown_freebsd=x86_64-unknown-freebsd10-gcc
-ENV RUST_CONFIGURE_ARGS --target=x86_64-unknown-freebsd --enable-rustbuild
+ENV RUST_CONFIGURE_ARGS --target=x86_64-unknown-freebsd
ENV RUST_CHECK_TARGET ""
RUN mkdir /tmp/obj
RUN chmod 777 /tmp/obj
curl \
ca-certificates \
python2.7 \
- python-minimal \
git \
cmake \
ccache \
libssl-dev \
sudo
-ENV RUST_CONFIGURE_ARGS --build=x86_64-unknown-linux-gnu --enable-rustbuild
+ENV RUST_CONFIGURE_ARGS --build=x86_64-unknown-linux-gnu
ENV RUST_CHECK_TARGET check-cargotest
+ENV NO_VENDOR 1
RUN mkdir /tmp/obj
RUN chmod 777 /tmp/obj
curl \
ca-certificates \
python2.7 \
- python2.7-minimal \
git \
cmake \
ccache \
ENV RUST_CONFIGURE_ARGS \
--build=x86_64-unknown-linux-gnu \
- --enable-rustbuild \
--llvm-root=/usr/lib/llvm-3.7
ENV RUST_CHECK_TARGET check
RUN mkdir /tmp/obj
--- /dev/null
+FROM ubuntu:16.04
+
+RUN apt-get update && apt-get install -y --no-install-recommends \
+ g++ \
+ make \
+ file \
+ curl \
+ ca-certificates \
+ python2.7 \
+ git \
+ cmake \
+ ccache \
+ sudo \
+ gdb
+
+ENV RUST_CONFIGURE_ARGS --build=x86_64-unknown-linux-gnu --disable-rustbuild
+ENV RUST_CHECK_TARGET check
+RUN mkdir /tmp/obj
+RUN chmod 777 /tmp/obj
+++ /dev/null
-FROM ubuntu:16.04
-
-RUN apt-get update && apt-get install -y --no-install-recommends \
- g++ \
- make \
- file \
- curl \
- ca-certificates \
- python2.7 \
- python-minimal \
- git \
- cmake \
- ccache \
- sudo \
- gdb
-
-ENV RUST_CONFIGURE_ARGS --build=x86_64-unknown-linux-gnu --enable-rustbuild
-ENV RUST_CHECK_TARGET check
-RUN mkdir /tmp/obj
-RUN chmod 777 /tmp/obj
ENV RUST_CONFIGURE_ARGS \
--target=x86_64-unknown-linux-musl \
- --musl-root=/musl-x86_64
+ --musl-root-x86_64=/musl-x86_64
ENV RUST_CHECK_TARGET check-stage2-T-x86_64-unknown-linux-musl-H-x86_64-unknown-linux-gnu
+ENV PATH=$PATH:/musl-x86_64/bin
+ENV XPY_CHECK test --target x86_64-unknown-linux-musl
RUN mkdir /tmp/obj
RUN chmod 777 /tmp/obj
if [ "$LOCAL_USER_ID" != "" ]; then
useradd --shell /bin/bash -u $LOCAL_USER_ID -o -c "" -m user
export HOME=/home/user
- export LOCAL_USER_ID=
- exec sudo -E -u user env PATH=$PATH "$0"
+ unset LOCAL_USER_ID
+ exec su --preserve-environment -c "env PATH=$PATH \"$0\"" user
fi
if [ "$NO_LLVM_ASSERTIONS" = "" ]; then
- LLVM_ASSERTIONS=--enable-llvm-assertions
+ ENABLE_LLVM_ASSERTIONS=--enable-llvm-assertions
+fi
+
+if [ "$NO_VENDOR" = "" ]; then
+ ENABLE_VENDOR=--enable-vendor
+fi
+
+if [ "$NO_CCACHE" = "" ]; then
+ ENABLE_CCACHE=--enable-ccache
fi
set -ex
--disable-manage-submodules \
--enable-debug-assertions \
--enable-quiet-tests \
- --enable-ccache \
- --enable-vendor \
- $LLVM_ASSERTIONS \
+ $ENABLE_CCACHE \
+ $ENABLE_VENDOR \
+ $ENABLE_LLVM_ASSERTIONS \
$RUST_CONFIGURE_ARGS
if [ "$TRAVIS_OS_NAME" = "osx" ]; then
make -j $ncpus tidy
make -j $ncpus
-exec make $RUST_CHECK_TARGET -j $ncpus
+if [ ! -z "$XPY_CHECK" ]; then
+ exec python2.7 $SRC/x.py $XPY_CHECK
+else
+ exec make $RUST_CHECK_TARGET -j $ncpus
+fi
It’s important to be mindful of `panic!`s when working with FFI. A `panic!`
across an FFI boundary is undefined behavior. If you’re writing code that may
-panic, you should run it in another thread, so that the panic doesn’t bubble up
-to C:
+panic, you should run it in a closure with [`catch_unwind()`]:
```rust
-use std::thread;
+use std::panic::catch_unwind;
#[no_mangle]
pub extern fn oh_no() -> i32 {
- let h = thread::spawn(|| {
+ let result = catch_unwind(|| {
panic!("Oops!");
});
-
- match h.join() {
- Ok(_) => 1,
- Err(_) => 0,
+ match result {
+ Ok(_) => 0,
+ Err(_) => 1,
}
}
-# fn main() {}
+
+fn main() {}
```
+Please note that [`catch_unwind()`] will only catch unwinding panics, not
+those that abort the process. See the documentation of [`catch_unwind()`]
+for more information.
+
+[`catch_unwind()`]: https://doc.rust-lang.org/std/panic/fn.catch_unwind.html
+
# Representing opaque structs
Sometimes, a C library wants to provide a pointer to something, but not let you
* `ty`: a [type](#types)
* `ident`: an [identifier](#identifiers)
* `path`: a [path](#paths)
-* `tt`: either side of the `=>` in macro rules
+* `tt`: a token tree (a single [token](#tokens) or a sequence of token trees surrounded
+ by matching `()`, `[]`, or `{}`)
* `meta`: the contents of an [attribute](#attributes)
In the transcriber, the
* `unboxed_closures` - Rust's new closure design, which is currently a work in
progress feature with many known bugs.
-* `unmarked_api` - Allows use of items within a `#![staged_api]` crate
- which have not been marked with a stability marker.
- Such items should not be allowed by the compiler to exist,
- so if you need this there probably is a compiler bug.
-
* `allow_internal_unstable` - Allows `macro_rules!` macros to be tagged with the
`#[allow_internal_unstable]` attribute, designed
to allow `std` macros to call
fn size_hint(&self) -> (usize, Option<usize>) {
(**self).size_hint()
}
+ fn nth(&mut self, n: usize) -> Option<I::Item> {
+ (**self).nth(n)
+ }
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<I: DoubleEndedIterator + ?Sized> DoubleEndedIterator for Box<I> {
}
}
#[stable(feature = "rust1", since = "1.0.0")]
-impl<I: ExactSizeIterator + ?Sized> ExactSizeIterator for Box<I> {}
+impl<I: ExactSizeIterator + ?Sized> ExactSizeIterator for Box<I> {
+ fn len(&self) -> usize {
+ (**self).len()
+ }
+ fn is_empty(&self) -> bool {
+ (**self).is_empty()
+ }
+}
#[unstable(feature = "fused", issue = "35602")]
impl<I: FusedIterator + ?Sized> FusedIterator for Box<I> {}
#![feature(core_intrinsics)]
#![feature(custom_attribute)]
#![feature(dropck_parametricity)]
+#![cfg_attr(not(test), feature(exact_size_is_empty))]
#![feature(fundamental)]
#![feature(lang_items)]
#![feature(needs_allocator)]
//! Single-threaded reference-counting pointers.
//!
-//! The type [`Rc<T>`][rc] provides shared ownership of a value of type `T`,
-//! allocated in the heap. Invoking [`clone`][clone] on `Rc` produces a new
-//! pointer to the same value in the heap. When the last `Rc` pointer to a
+//! The type [`Rc<T>`][`Rc`] provides shared ownership of a value of type `T`,
+//! allocated in the heap. Invoking [`clone()`][clone] on [`Rc`] produces a new
+//! pointer to the same value in the heap. When the last [`Rc`] pointer to a
//! given value is destroyed, the pointed-to value is also destroyed.
//!
//! Shared references in Rust disallow mutation by default, and `Rc` is no
-//! exception. If you need to mutate through an `Rc`, use [`Cell`][cell] or
-//! [`RefCell`][refcell].
+//! exception. If you need to mutate through an [`Rc`], use [`Cell`] or
+//! [`RefCell`].
//!
-//! `Rc` uses non-atomic reference counting. This means that overhead is very
-//! low, but an `Rc` cannot be sent between threads, and consequently `Rc`
+//! [`Rc`] uses non-atomic reference counting. This means that overhead is very
+//! low, but an [`Rc`] cannot be sent between threads, and consequently [`Rc`]
//! does not implement [`Send`][send]. As a result, the Rust compiler
-//! will check *at compile time* that you are not sending `Rc`s between
+//! will check *at compile time* that you are not sending [`Rc`]s between
//! threads. If you need multi-threaded, atomic reference counting, use
//! [`sync::Arc`][arc].
//!
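Since `Rc` is not `Send`, sharing a value across threads means reaching for `Arc` instead; a minimal sketch:

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // Rc would be rejected at compile time here; Arc's atomic counts make
    // cross-thread sharing safe.
    let shared = Arc::new(vec![1, 2, 3]);
    let handle = {
        let shared = Arc::clone(&shared);
        thread::spawn(move || shared.len())
    };
    assert_eq!(handle.join().unwrap(), 3);
}
```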
-//! The [`downgrade`][downgrade] method can be used to create a non-owning
-//! [`Weak`][weak] pointer. A `Weak` pointer can be [`upgrade`][upgrade]d
-//! to an `Rc`, but this will return [`None`][option] if the value has
+//! The [`downgrade()`][downgrade] method can be used to create a non-owning
+//! [`Weak`] pointer. A [`Weak`] pointer can be [`upgrade`][upgrade]d
+//! to an [`Rc`], but this will return [`None`] if the value has
//! already been dropped.
//!
-//! A cycle between `Rc` pointers will never be deallocated. For this reason,
-//! `Weak` is used to break cycles. For example, a tree could have strong
-//! `Rc` pointers from parent nodes to children, and `Weak` pointers from
+//! A cycle between [`Rc`] pointers will never be deallocated. For this reason,
+//! [`Weak`] is used to break cycles. For example, a tree could have strong
+//! [`Rc`] pointers from parent nodes to children, and [`Weak`] pointers from
//! children back to their parents.
//!
-//! `Rc<T>` automatically dereferences to `T` (via the [`Deref`][deref] trait),
-//! so you can call `T`'s methods on a value of type `Rc<T>`. To avoid name
-//! clashes with `T`'s methods, the methods of `Rc<T>` itself are [associated
+//! `Rc<T>` automatically dereferences to `T` (via the [`Deref`] trait),
+//! so you can call `T`'s methods on a value of type [`Rc<T>`][`Rc`]. To avoid name
+//! clashes with `T`'s methods, the methods of [`Rc<T>`][`Rc`] itself are [associated
//! functions][assoc], called using function-like syntax:
//!
//! ```
//! Rc::downgrade(&my_rc);
//! ```
//!
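For instance, the associated-function syntax keeps `Rc`'s own operations out of `T`'s namespace while still being easy to call:

```rust
use std::rc::Rc;

fn main() {
    let my_rc = Rc::new(5);
    // Called as Rc::downgrade(&my_rc), never my_rc.downgrade(),
    // so a method named `downgrade` on the inner type would not clash.
    let weak = Rc::downgrade(&my_rc);
    assert_eq!(Rc::strong_count(&my_rc), 1);
    assert_eq!(Rc::weak_count(&my_rc), 1);
    assert_eq!(*weak.upgrade().unwrap(), 5);
}
```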
-//! `Weak<T>` does not auto-dereference to `T`, because the value may have
+//! [`Weak<T>`][`Weak`] does not auto-dereference to `T`, because the value may have
//! already been destroyed.
//!
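A short demonstration of why `Weak<T>` cannot auto-dereference: `upgrade` yields `None` once the last strong pointer is gone.

```rust
use std::rc::Rc;

fn main() {
    let strong = Rc::new(String::from("hello"));
    let weak = Rc::downgrade(&strong);
    assert!(weak.upgrade().is_some());
    drop(strong);
    // The value has been destroyed, so the Weak pointer is now dangling.
    assert!(weak.upgrade().is_none());
}
```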
-//! [rc]: struct.Rc.html
-//! [weak]: struct.Weak.html
-//! [clone]: ../../std/clone/trait.Clone.html#tymethod.clone
-//! [cell]: ../../std/cell/struct.Cell.html
-//! [refcell]: ../../std/cell/struct.RefCell.html
-//! [send]: ../../std/marker/trait.Send.html
-//! [arc]: ../../std/sync/struct.Arc.html
-//! [deref]: ../../std/ops/trait.Deref.html
-//! [downgrade]: struct.Rc.html#method.downgrade
-//! [upgrade]: struct.Weak.html#method.upgrade
-//! [option]: ../../std/option/enum.Option.html
-//! [assoc]: ../../book/method-syntax.html#associated-functions
-//!
//! # Examples
//!
//! Consider a scenario where a set of `Gadget`s are owned by a given `Owner`.
//! We want to have our `Gadget`s point to their `Owner`. We can't do this with
//! unique ownership, because more than one gadget may belong to the same
-//! `Owner`. `Rc` allows us to share an `Owner` between multiple `Gadget`s,
+//! `Owner`. [`Rc`] allows us to share an `Owner` between multiple `Gadget`s,
//! and have the `Owner` remain allocated as long as any `Gadget` points at it.
//!
//! ```
//! ```
//!
//! If our requirements change, and we also need to be able to traverse from
-//! `Owner` to `Gadget`, we will run into problems. An `Rc` pointer from `Owner`
+//! `Owner` to `Gadget`, we will run into problems. An [`Rc`] pointer from `Owner`
//! to `Gadget` introduces a cycle between the values. This means that their
//! reference counts can never reach 0, and the values will remain allocated
-//! forever: a memory leak. In order to get around this, we can use `Weak`
+//! forever: a memory leak. In order to get around this, we can use [`Weak`]
//! pointers.
//!
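The parent/child pattern described above can be sketched with strong pointers downward and weak pointers upward, so no cycle of strong counts forms:

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Node {
    parent: RefCell<Weak<Node>>,
    children: RefCell<Vec<Rc<Node>>>,
}

fn main() {
    let leaf = Rc::new(Node {
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(Vec::new()),
    });
    let branch = Rc::new(Node {
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(vec![Rc::clone(&leaf)]),
    });
    *leaf.parent.borrow_mut() = Rc::downgrade(&branch);

    // Only the local variable keeps `branch` alive; the child's back-pointer
    // is weak and does not count.
    assert_eq!(Rc::strong_count(&branch), 1);
    // `leaf` is kept alive by the local variable and by branch.children.
    assert_eq!(Rc::strong_count(&leaf), 2);
    assert!(leaf.parent.borrow().upgrade().is_some());
}
```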
//! Rust actually makes it somewhat difficult to produce this loop in the first
//! place. In order to end up with two values that point at each other, one of
-//! them needs to be mutable. This is difficult because `Rc` enforces
+//! them needs to be mutable. This is difficult because [`Rc`] enforces
//! memory safety by only giving out shared references to the value it wraps,
//! and these don't allow direct mutation. We need to wrap the part of the
-//! value we wish to mutate in a [`RefCell`][refcell], which provides *interior
+//! value we wish to mutate in a [`RefCell`], which provides *interior
//! mutability*: a method to achieve mutability through a shared reference.
-//! `RefCell` enforces Rust's borrowing rules at runtime.
+//! [`RefCell`] enforces Rust's borrowing rules at runtime.
//!
//! ```
//! use std::rc::Rc;
//! // Gadget Man, so he gets destroyed as well.
//! }
//! ```
+//!
+//! [`Rc`]: struct.Rc.html
+//! [`Weak`]: struct.Weak.html
+//! [clone]: ../../std/clone/trait.Clone.html#tymethod.clone
+//! [`Cell`]: ../../std/cell/struct.Cell.html
+//! [`RefCell`]: ../../std/cell/struct.RefCell.html
+//! [send]: ../../std/marker/trait.Send.html
+//! [arc]: ../../std/sync/struct.Arc.html
+//! [`Deref`]: ../../std/ops/trait.Deref.html
+//! [downgrade]: struct.Rc.html#method.downgrade
+//! [upgrade]: struct.Weak.html#method.upgrade
+//! [`None`]: ../../std/option/enum.Option.html#variant.None
+//! [assoc]: ../../book/method-syntax.html#associated-functions
#![stable(feature = "rust1", since = "1.0.0")]
/// See the [module-level documentation](./index.html) for more details.
///
/// The inherent methods of `Rc` are all associated functions, which means
-/// that you have to call them as e.g. `Rc::get_mut(&value)` instead of
-/// `value.get_mut()`. This avoids conflicts with methods of the inner
+/// that you have to call them as e.g. [`Rc::get_mut(&mut value)`][get_mut] instead of
+/// `value.get_mut()`. This avoids conflicts with methods of the inner
/// type `T`.
+///
+/// [get_mut]: #method.get_mut
#[stable(feature = "rust1", since = "1.0.0")]
pub struct Rc<T: ?Sized> {
ptr: Shared<RcBox<T>>,
}
/// Checks whether [`Rc::try_unwrap`][try_unwrap] would return
- /// [`Ok`][result].
+ /// [`Ok`].
///
/// [try_unwrap]: struct.Rc.html#method.try_unwrap
- /// [result]: ../../std/result/enum.Result.html
+ /// [`Ok`]: ../../std/result/enum.Result.html#variant.Ok
///
/// # Examples
///
/// Returns a mutable reference to the inner value, if there are
/// no other `Rc` or [`Weak`][weak] pointers to the same value.
///
- /// Returns [`None`][option] otherwise, because it is not safe to
+ /// Returns [`None`] otherwise, because it is not safe to
/// mutate a shared value.
///
/// See also [`make_mut`][make_mut], which will [`clone`][clone]
/// the inner value when it's shared.
///
/// [weak]: struct.Weak.html
- /// [option]: ../../std/option/enum.Option.html
+ /// [`None`]: ../../std/option/enum.Option.html#variant.None
/// [make_mut]: struct.Rc.html#method.make_mut
/// [clone]: ../../std/clone/trait.Clone.html#tymethod.clone
///
.read_dir()
.unwrap()
.map(|e| e.unwrap())
+ .filter(|e| &*e.file_name() != ".git")
.collect::<Vec<_>>();
while let Some(entry) = stack.pop() {
let path = entry.path();
cmd.arg(format!("--build={}", build_helper::gnu_target(&host)));
run(&mut cmd);
- run(Command::new("make")
- .current_dir(&build_dir)
- .arg("build_lib_static")
- .arg("-j")
- .arg(env::var("NUM_JOBS").expect("NUM_JOBS was not set")));
+ let mut make = Command::new("make");
+ make.current_dir(&build_dir)
+ .arg("build_lib_static");
+
+    // mingw's make appears to misbehave with parallel jobs, so build serially on Windows
+ if !host.contains("windows") {
+ make.arg("-j")
+ .arg(env::var("NUM_JOBS").expect("NUM_JOBS was not set"));
+ }
+
+ run(&mut make);
if target.contains("windows") {
println!("cargo:rustc-link-lib=static=jemalloc");
}
#[stable(feature = "rust1", since = "1.0.0")]
-impl<'a, T> ExactSizeIterator for Iter<'a, T> {}
+impl<'a, T> ExactSizeIterator for Iter<'a, T> {
+ fn is_empty(&self) -> bool {
+ self.iter.is_empty()
+ }
+}
#[unstable(feature = "fused", issue = "35602")]
impl<'a, T> FusedIterator for Iter<'a, T> {}
}
#[stable(feature = "rust1", since = "1.0.0")]
-impl<T> ExactSizeIterator for IntoIter<T> {}
+impl<T> ExactSizeIterator for IntoIter<T> {
+ fn is_empty(&self) -> bool {
+ self.iter.is_empty()
+ }
+}
#[unstable(feature = "fused", issue = "35602")]
impl<T> FusedIterator for IntoIter<T> {}
}
#[stable(feature = "drain", since = "1.6.0")]
-impl<'a, T: 'a> ExactSizeIterator for Drain<'a, T> {}
+impl<'a, T: 'a> ExactSizeIterator for Drain<'a, T> {
+ fn is_empty(&self) -> bool {
+ self.iter.is_empty()
+ }
+}
#[unstable(feature = "fused", issue = "35602")]
impl<'a, T: 'a> FusedIterator for Drain<'a, T> {}
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn insert(&mut self, idx: usize, ch: char) {
- let len = self.len();
- assert!(idx <= len);
assert!(self.is_char_boundary(idx));
let mut bits = [0; 4];
let bits = ch.encode_utf8(&mut bits).as_bytes();
reason = "recent addition",
issue = "35553")]
pub fn insert_str(&mut self, idx: usize, string: &str) {
- assert!(idx <= self.len());
assert!(self.is_char_boundary(idx));
unsafe {
#[unstable(feature = "string_split_off", issue = "38080")]
pub fn split_off(&mut self, mid: usize) -> String {
assert!(self.is_char_boundary(mid));
- assert!(mid <= self.len());
let other = self.vec.split_off(mid);
unsafe { String::from_utf8_unchecked(other) }
}
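The `idx <= self.len()` assertions removed in the hunks above are redundant because `is_char_boundary` already rejects out-of-range indices: it returns `true` at `0` and at `len()`, and `false` both past the end and inside a multi-byte character. A quick demonstration:

```rust
fn main() {
    let s = String::from("héllo"); // 'é' occupies bytes 1 and 2

    assert!(s.is_char_boundary(0));
    assert!(s.is_char_boundary(s.len()));
    // Inside the two-byte 'é': not a boundary.
    assert!(!s.is_char_boundary(2));
    // Past the end: also not a boundary, so a separate bounds assert adds nothing.
    assert!(!s.is_char_boundary(s.len() + 1));
}
```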
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn is_empty(&self) -> bool {
- self.len() == 0
+ self.tail == self.head
}
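The rewritten `is_empty` compares the ring buffer's head and tail directly instead of computing `len()`; the observable behavior is unchanged:

```rust
use std::collections::VecDeque;

fn main() {
    let mut q: VecDeque<i32> = VecDeque::new();
    assert!(q.is_empty());
    q.push_back(1);
    assert!(!q.is_empty());
    q.pop_front();
    // Head and tail meet again once the queue drains.
    assert!(q.is_empty());
}
```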
/// Create a draining iterator that removes the specified range in the
}
#[stable(feature = "rust1", since = "1.0.0")]
-impl<'a, T> ExactSizeIterator for Iter<'a, T> {}
+impl<'a, T> ExactSizeIterator for Iter<'a, T> {
+ fn is_empty(&self) -> bool {
+ self.head == self.tail
+ }
+}
#[unstable(feature = "fused", issue = "35602")]
impl<'a, T> FusedIterator for Iter<'a, T> {}
}
#[stable(feature = "rust1", since = "1.0.0")]
-impl<'a, T> ExactSizeIterator for IterMut<'a, T> {}
+impl<'a, T> ExactSizeIterator for IterMut<'a, T> {
+ fn is_empty(&self) -> bool {
+ self.head == self.tail
+ }
+}
#[unstable(feature = "fused", issue = "35602")]
impl<'a, T> FusedIterator for IterMut<'a, T> {}
}
#[stable(feature = "rust1", since = "1.0.0")]
-impl<T> ExactSizeIterator for IntoIter<T> {}
+impl<T> ExactSizeIterator for IntoIter<T> {
+ fn is_empty(&self) -> bool {
+ self.inner.is_empty()
+ }
+}
#[unstable(feature = "fused", issue = "35602")]
impl<T> FusedIterator for IntoIter<T> {}
d
}
}
+
+#[test]
+fn test_is_empty() {
+ let mut v = VecDeque::<i32>::new();
+ assert!(v.is_empty());
+ assert!(v.iter().is_empty());
+ assert!(v.iter_mut().is_empty());
+ v.extend(&[2, 3, 4]);
+ assert!(!v.is_empty());
+ assert!(!v.iter().is_empty());
+ assert!(!v.iter_mut().is_empty());
+ while let Some(_) = v.pop_front() {
+ assert_eq!(v.is_empty(), v.len() == 0);
+ assert_eq!(v.iter().is_empty(), v.iter().len() == 0);
+ assert_eq!(v.iter_mut().is_empty(), v.iter_mut().len() == 0);
+ }
+ assert!(v.is_empty());
+ assert!(v.iter().is_empty());
+ assert!(v.iter_mut().is_empty());
+ assert!(v.into_iter().is_empty());
+}
/// ```
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
- fn nth(&mut self, mut n: usize) -> Option<Self::Item> where Self: Sized {
+ fn nth(&mut self, mut n: usize) -> Option<Self::Item> {
for x in self {
if n == 0 { return Some(x) }
n -= 1;
type Item = I::Item;
fn next(&mut self) -> Option<I::Item> { (**self).next() }
fn size_hint(&self) -> (usize, Option<usize>) { (**self).size_hint() }
+ fn nth(&mut self, n: usize) -> Option<Self::Item> {
+ (**self).nth(n)
+ }
}
}
#[stable(feature = "rust1", since = "1.0.0")]
-impl<'a, I: ExactSizeIterator + ?Sized> ExactSizeIterator for &'a mut I {}
+impl<'a, I: ExactSizeIterator + ?Sized> ExactSizeIterator for &'a mut I {
+ fn len(&self) -> usize {
+ (**self).len()
+ }
+ fn is_empty(&self) -> bool {
+ (**self).is_empty()
+ }
+}
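Forwarding `len` (and `is_empty`) through `&'a mut I` means a mutable borrow of an exact-size iterator still satisfies `ExactSizeIterator` bounds. A sketch using `len` only, since `is_empty` is behind the unstable `exact_size_is_empty` feature:

```rust
// Generic helper whose bound is satisfied by &mut I via the forwarding impl.
fn remaining<I: ExactSizeIterator>(it: I) -> usize {
    it.len()
}

fn main() {
    let v = [1, 2, 3];
    let mut it = v.iter();
    // Hand out a mutable borrow without giving up the iterator.
    assert_eq!(remaining(&mut it), 3);
    it.next();
    assert_eq!(remaining(&mut it), 2);
}
```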
/// Trait to represent types that can be created by summing up an iterator.
///
fn len(&self) -> usize {
self.0.len()
}
+
+ #[inline]
+ fn is_empty(&self) -> bool {
+ self.0.is_empty()
+ }
}
#[unstable(feature = "fused", issue = "35602")]
-Subproject commit 6e8c1b490ccbe5e84d248bab883515bc85394b5f
+Subproject commit 0ac39c5ccf6a04395b7c40dd62321cb91f63f160
}
}
-impl<'a> Visitor for CheckAttrVisitor<'a> {
- fn visit_item(&mut self, item: &ast::Item) {
+impl<'a> Visitor<'a> for CheckAttrVisitor<'a> {
+ fn visit_item(&mut self, item: &'a ast::Item) {
let target = Target::from_item(item);
for attr in &item.attrs {
self.check_attribute(attr, target);
lctx: &'lcx mut LoweringContext<'interner>,
}
- impl<'lcx, 'interner> Visitor for ItemLowerer<'lcx, 'interner> {
- fn visit_item(&mut self, item: &Item) {
+ impl<'lcx, 'interner> Visitor<'lcx> for ItemLowerer<'lcx, 'interner> {
+ fn visit_item(&mut self, item: &'lcx Item) {
let hir_item = self.lctx.lower_item(item);
self.lctx.items.insert(item.id, hir_item);
visit::walk_item(self, item);
}
- fn visit_impl_item(&mut self, item: &ImplItem) {
+ fn visit_impl_item(&mut self, item: &'lcx ImplItem) {
let id = self.lctx.lower_impl_item_ref(item).id;
let hir_item = self.lctx.lower_impl_item(item);
self.lctx.impl_items.insert(id, hir_item);
}
}
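The `Visitor<'a>` change above threads the lifetime of the AST through the trait, so a visitor may retain references to the nodes it visits. A simplified, self-contained sketch of the pattern (not the actual rustc API; `Item`, `Visitor`, and `Collector` here are illustrative):

```rust
struct Item {
    name: String,
}

// The trait is parameterized over the AST's lifetime...
trait Visitor<'ast> {
    fn visit_item(&mut self, item: &'ast Item);
}

// ...so this visitor can store borrows that outlive each visit call.
struct Collector<'ast> {
    seen: Vec<&'ast Item>,
}

impl<'ast> Visitor<'ast> for Collector<'ast> {
    fn visit_item(&mut self, item: &'ast Item) {
        self.seen.push(item);
    }
}

fn main() {
    let items = vec![Item { name: "foo".into() }, Item { name: "bar".into() }];
    let mut c = Collector { seen: Vec::new() };
    for it in &items {
        c.visit_item(it);
    }
    assert_eq!(c.seen.len(), 2);
    assert_eq!(c.seen[0].name, "foo");
}
```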
-impl<'a> visit::Visitor for DefCollector<'a> {
- fn visit_item(&mut self, i: &Item) {
+impl<'a> visit::Visitor<'a> for DefCollector<'a> {
+ fn visit_item(&mut self, i: &'a Item) {
debug!("visit_item: {:?}", i);
// Pick the def data. This need not be unique, but the more
});
}
- fn visit_foreign_item(&mut self, foreign_item: &ForeignItem) {
+ fn visit_foreign_item(&mut self, foreign_item: &'a ForeignItem) {
let def = self.create_def(foreign_item.id,
DefPathData::ValueNs(foreign_item.ident.name.as_str()));
});
}
- fn visit_generics(&mut self, generics: &Generics) {
+ fn visit_generics(&mut self, generics: &'a Generics) {
for ty_param in generics.ty_params.iter() {
self.create_def(ty_param.id, DefPathData::TypeParam(ty_param.ident.name.as_str()));
}
visit::walk_generics(self, generics);
}
- fn visit_trait_item(&mut self, ti: &TraitItem) {
+ fn visit_trait_item(&mut self, ti: &'a TraitItem) {
let def_data = match ti.node {
TraitItemKind::Method(..) | TraitItemKind::Const(..) =>
DefPathData::ValueNs(ti.ident.name.as_str()),
});
}
- fn visit_impl_item(&mut self, ii: &ImplItem) {
+ fn visit_impl_item(&mut self, ii: &'a ImplItem) {
let def_data = match ii.node {
ImplItemKind::Method(..) | ImplItemKind::Const(..) =>
DefPathData::ValueNs(ii.ident.name.as_str()),
});
}
- fn visit_pat(&mut self, pat: &Pat) {
+ fn visit_pat(&mut self, pat: &'a Pat) {
let parent_def = self.parent_def;
match pat.node {
self.parent_def = parent_def;
}
- fn visit_expr(&mut self, expr: &Expr) {
+ fn visit_expr(&mut self, expr: &'a Expr) {
let parent_def = self.parent_def;
match expr.node {
self.parent_def = parent_def;
}
- fn visit_ty(&mut self, ty: &Ty) {
+ fn visit_ty(&mut self, ty: &'a Ty) {
match ty.node {
TyKind::Mac(..) => return self.visit_macro_invoc(ty.id, false),
TyKind::Array(_, ref length) => self.visit_ast_const_integer(length),
visit::walk_ty(self, ty);
}
- fn visit_lifetime_def(&mut self, def: &LifetimeDef) {
+ fn visit_lifetime_def(&mut self, def: &'a LifetimeDef) {
self.create_def(def.lifetime.id, DefPathData::LifetimeDef(def.lifetime.name.as_str()));
}
- fn visit_macro_def(&mut self, macro_def: &MacroDef) {
+ fn visit_macro_def(&mut self, macro_def: &'a MacroDef) {
self.create_def(macro_def.id, DefPathData::MacroDef(macro_def.ident.name.as_str()));
}
- fn visit_stmt(&mut self, stmt: &Stmt) {
+ fn visit_stmt(&mut self, stmt: &'a Stmt) {
match stmt.node {
StmtKind::Mac(..) => self.visit_macro_invoc(stmt.id, false),
_ => visit::walk_stmt(self, stmt),
}
}
-impl LateLintPass for HardwiredLints {}
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for HardwiredLints {}
err
}
-pub trait LintContext: Sized {
+pub trait LintContext<'tcx>: Sized {
fn sess(&self) -> &Session;
fn lints(&self) -> &LintStore;
fn mut_lints(&mut self) -> &mut LintStore;
fn level_stack(&mut self) -> &mut Vec<(LintId, LevelSource)>;
- fn enter_attrs(&mut self, attrs: &[ast::Attribute]);
- fn exit_attrs(&mut self, attrs: &[ast::Attribute]);
+ fn enter_attrs(&mut self, attrs: &'tcx [ast::Attribute]);
+ fn exit_attrs(&mut self, attrs: &'tcx [ast::Attribute]);
/// Get the level of `lint` at the current position of the lint
/// traversal.
/// current lint context, call the provided function, then reset the
/// lints in effect to their previous state.
fn with_lint_attrs<F>(&mut self,
- attrs: &[ast::Attribute],
+ attrs: &'tcx [ast::Attribute],
f: F)
where F: FnOnce(&mut Self),
{
}
}
-impl<'a, 'tcx> LintContext for LateContext<'a, 'tcx> {
+impl<'a, 'tcx> LintContext<'tcx> for LateContext<'a, 'tcx> {
/// Get the overall compiler `Session` object.
fn sess(&self) -> &Session {
&self.tcx.sess
&mut self.level_stack
}
- fn enter_attrs(&mut self, attrs: &[ast::Attribute]) {
+ fn enter_attrs(&mut self, attrs: &'tcx [ast::Attribute]) {
debug!("late context: enter_attrs({:?})", attrs);
run_lints!(self, enter_lint_attrs, late_passes, attrs);
}
- fn exit_attrs(&mut self, attrs: &[ast::Attribute]) {
+ fn exit_attrs(&mut self, attrs: &'tcx [ast::Attribute]) {
debug!("late context: exit_attrs({:?})", attrs);
run_lints!(self, exit_lint_attrs, late_passes, attrs);
}
}
-impl<'a> LintContext for EarlyContext<'a> {
+impl<'a> LintContext<'a> for EarlyContext<'a> {
/// Get the overall compiler `Session` object.
fn sess(&self) -> &Session {
&self.sess
&mut self.level_stack
}
- fn enter_attrs(&mut self, attrs: &[ast::Attribute]) {
+ fn enter_attrs(&mut self, attrs: &'a [ast::Attribute]) {
debug!("early context: enter_attrs({:?})", attrs);
run_lints!(self, enter_lint_attrs, early_passes, attrs);
}
- fn exit_attrs(&mut self, attrs: &[ast::Attribute]) {
+ fn exit_attrs(&mut self, attrs: &'a [ast::Attribute]) {
debug!("early context: exit_attrs({:?})", attrs);
run_lints!(self, exit_lint_attrs, early_passes, attrs);
}
hir_visit::walk_path(self, p);
}
- fn visit_attribute(&mut self, attr: &ast::Attribute) {
+ fn visit_attribute(&mut self, attr: &'tcx ast::Attribute) {
check_lint_name_attribute(self, attr);
run_lints!(self, check_attribute, late_passes, attr);
}
}
-impl<'a> ast_visit::Visitor for EarlyContext<'a> {
- fn visit_item(&mut self, it: &ast::Item) {
+impl<'a> ast_visit::Visitor<'a> for EarlyContext<'a> {
+ fn visit_item(&mut self, it: &'a ast::Item) {
self.with_lint_attrs(&it.attrs, |cx| {
run_lints!(cx, check_item, early_passes, it);
ast_visit::walk_item(cx, it);
})
}
- fn visit_foreign_item(&mut self, it: &ast::ForeignItem) {
+ fn visit_foreign_item(&mut self, it: &'a ast::ForeignItem) {
self.with_lint_attrs(&it.attrs, |cx| {
run_lints!(cx, check_foreign_item, early_passes, it);
ast_visit::walk_foreign_item(cx, it);
})
}
- fn visit_pat(&mut self, p: &ast::Pat) {
+ fn visit_pat(&mut self, p: &'a ast::Pat) {
run_lints!(self, check_pat, early_passes, p);
ast_visit::walk_pat(self, p);
}
- fn visit_expr(&mut self, e: &ast::Expr) {
+ fn visit_expr(&mut self, e: &'a ast::Expr) {
self.with_lint_attrs(&e.attrs, |cx| {
run_lints!(cx, check_expr, early_passes, e);
ast_visit::walk_expr(cx, e);
})
}
- fn visit_stmt(&mut self, s: &ast::Stmt) {
+ fn visit_stmt(&mut self, s: &'a ast::Stmt) {
run_lints!(self, check_stmt, early_passes, s);
ast_visit::walk_stmt(self, s);
}
- fn visit_fn(&mut self, fk: ast_visit::FnKind, decl: &ast::FnDecl,
+ fn visit_fn(&mut self, fk: ast_visit::FnKind<'a>, decl: &'a ast::FnDecl,
span: Span, id: ast::NodeId) {
run_lints!(self, check_fn, early_passes, fk, decl, span, id);
ast_visit::walk_fn(self, fk, decl, span);
}
fn visit_variant_data(&mut self,
- s: &ast::VariantData,
+ s: &'a ast::VariantData,
ident: ast::Ident,
- g: &ast::Generics,
+ g: &'a ast::Generics,
item_id: ast::NodeId,
_: Span) {
run_lints!(self, check_struct_def, early_passes, s, ident, g, item_id);
run_lints!(self, check_struct_def_post, early_passes, s, ident, g, item_id);
}
- fn visit_struct_field(&mut self, s: &ast::StructField) {
+ fn visit_struct_field(&mut self, s: &'a ast::StructField) {
self.with_lint_attrs(&s.attrs, |cx| {
run_lints!(cx, check_struct_field, early_passes, s);
ast_visit::walk_struct_field(cx, s);
})
}
- fn visit_variant(&mut self, v: &ast::Variant, g: &ast::Generics, item_id: ast::NodeId) {
+ fn visit_variant(&mut self, v: &'a ast::Variant, g: &'a ast::Generics, item_id: ast::NodeId) {
self.with_lint_attrs(&v.node.attrs, |cx| {
run_lints!(cx, check_variant, early_passes, v, g);
ast_visit::walk_variant(cx, v, g, item_id);
})
}
- fn visit_ty(&mut self, t: &ast::Ty) {
+ fn visit_ty(&mut self, t: &'a ast::Ty) {
run_lints!(self, check_ty, early_passes, t);
ast_visit::walk_ty(self, t);
}
run_lints!(self, check_ident, early_passes, sp, id);
}
- fn visit_mod(&mut self, m: &ast::Mod, s: Span, n: ast::NodeId) {
+ fn visit_mod(&mut self, m: &'a ast::Mod, s: Span, n: ast::NodeId) {
run_lints!(self, check_mod, early_passes, m, s, n);
ast_visit::walk_mod(self, m);
run_lints!(self, check_mod_post, early_passes, m, s, n);
}
- fn visit_local(&mut self, l: &ast::Local) {
+ fn visit_local(&mut self, l: &'a ast::Local) {
self.with_lint_attrs(&l.attrs, |cx| {
run_lints!(cx, check_local, early_passes, l);
ast_visit::walk_local(cx, l);
})
}
- fn visit_block(&mut self, b: &ast::Block) {
+ fn visit_block(&mut self, b: &'a ast::Block) {
run_lints!(self, check_block, early_passes, b);
ast_visit::walk_block(self, b);
run_lints!(self, check_block_post, early_passes, b);
}
- fn visit_arm(&mut self, a: &ast::Arm) {
+ fn visit_arm(&mut self, a: &'a ast::Arm) {
run_lints!(self, check_arm, early_passes, a);
ast_visit::walk_arm(self, a);
}
- fn visit_expr_post(&mut self, e: &ast::Expr) {
+ fn visit_expr_post(&mut self, e: &'a ast::Expr) {
run_lints!(self, check_expr_post, early_passes, e);
}
- fn visit_generics(&mut self, g: &ast::Generics) {
+ fn visit_generics(&mut self, g: &'a ast::Generics) {
run_lints!(self, check_generics, early_passes, g);
ast_visit::walk_generics(self, g);
}
- fn visit_trait_item(&mut self, trait_item: &ast::TraitItem) {
+ fn visit_trait_item(&mut self, trait_item: &'a ast::TraitItem) {
self.with_lint_attrs(&trait_item.attrs, |cx| {
run_lints!(cx, check_trait_item, early_passes, trait_item);
ast_visit::walk_trait_item(cx, trait_item);
});
}
- fn visit_impl_item(&mut self, impl_item: &ast::ImplItem) {
+ fn visit_impl_item(&mut self, impl_item: &'a ast::ImplItem) {
self.with_lint_attrs(&impl_item.attrs, |cx| {
run_lints!(cx, check_impl_item, early_passes, impl_item);
ast_visit::walk_impl_item(cx, impl_item);
});
}
- fn visit_lifetime(&mut self, lt: &ast::Lifetime) {
+ fn visit_lifetime(&mut self, lt: &'a ast::Lifetime) {
run_lints!(self, check_lifetime, early_passes, lt);
}
- fn visit_lifetime_def(&mut self, lt: &ast::LifetimeDef) {
+ fn visit_lifetime_def(&mut self, lt: &'a ast::LifetimeDef) {
run_lints!(self, check_lifetime_def, early_passes, lt);
}
- fn visit_path(&mut self, p: &ast::Path, id: ast::NodeId) {
+ fn visit_path(&mut self, p: &'a ast::Path, id: ast::NodeId) {
run_lints!(self, check_path, early_passes, p, id);
ast_visit::walk_path(self, p);
}
- fn visit_path_list_item(&mut self, prefix: &ast::Path, item: &ast::PathListItem) {
+ fn visit_path_list_item(&mut self, prefix: &'a ast::Path, item: &'a ast::PathListItem) {
run_lints!(self, check_path_list_item, early_passes, item);
ast_visit::walk_path_list_item(self, prefix, item);
}
- fn visit_attribute(&mut self, attr: &ast::Attribute) {
+ fn visit_attribute(&mut self, attr: &'a ast::Attribute) {
run_lints!(self, check_attribute, early_passes, attr);
}
}
//
// FIXME: eliminate the duplication with `Visitor`. But this also
// contains a few lint-specific methods with no equivalent in `Visitor`.
-pub trait LateLintPass: LintPass {
+pub trait LateLintPass<'a, 'tcx>: LintPass {
fn check_name(&mut self, _: &LateContext, _: Span, _: ast::Name) { }
- fn check_crate(&mut self, _: &LateContext, _: &hir::Crate) { }
- fn check_crate_post(&mut self, _: &LateContext, _: &hir::Crate) { }
- fn check_mod(&mut self, _: &LateContext, _: &hir::Mod, _: Span, _: ast::NodeId) { }
- fn check_mod_post(&mut self, _: &LateContext, _: &hir::Mod, _: Span, _: ast::NodeId) { }
- fn check_foreign_item(&mut self, _: &LateContext, _: &hir::ForeignItem) { }
- fn check_foreign_item_post(&mut self, _: &LateContext, _: &hir::ForeignItem) { }
- fn check_item(&mut self, _: &LateContext, _: &hir::Item) { }
- fn check_item_post(&mut self, _: &LateContext, _: &hir::Item) { }
- fn check_local(&mut self, _: &LateContext, _: &hir::Local) { }
- fn check_block(&mut self, _: &LateContext, _: &hir::Block) { }
- fn check_block_post(&mut self, _: &LateContext, _: &hir::Block) { }
- fn check_stmt(&mut self, _: &LateContext, _: &hir::Stmt) { }
- fn check_arm(&mut self, _: &LateContext, _: &hir::Arm) { }
- fn check_pat(&mut self, _: &LateContext, _: &hir::Pat) { }
- fn check_decl(&mut self, _: &LateContext, _: &hir::Decl) { }
- fn check_expr(&mut self, _: &LateContext, _: &hir::Expr) { }
- fn check_expr_post(&mut self, _: &LateContext, _: &hir::Expr) { }
- fn check_ty(&mut self, _: &LateContext, _: &hir::Ty) { }
- fn check_generics(&mut self, _: &LateContext, _: &hir::Generics) { }
- fn check_fn(&mut self, _: &LateContext,
- _: FnKind, _: &hir::FnDecl, _: &hir::Expr, _: Span, _: ast::NodeId) { }
- fn check_fn_post(&mut self, _: &LateContext,
- _: FnKind, _: &hir::FnDecl, _: &hir::Expr, _: Span, _: ast::NodeId) { }
- fn check_trait_item(&mut self, _: &LateContext, _: &hir::TraitItem) { }
- fn check_trait_item_post(&mut self, _: &LateContext, _: &hir::TraitItem) { }
- fn check_impl_item(&mut self, _: &LateContext, _: &hir::ImplItem) { }
- fn check_impl_item_post(&mut self, _: &LateContext, _: &hir::ImplItem) { }
- fn check_struct_def(&mut self, _: &LateContext,
- _: &hir::VariantData, _: ast::Name, _: &hir::Generics, _: ast::NodeId) { }
- fn check_struct_def_post(&mut self, _: &LateContext,
- _: &hir::VariantData, _: ast::Name, _: &hir::Generics, _: ast::NodeId) { }
- fn check_struct_field(&mut self, _: &LateContext, _: &hir::StructField) { }
- fn check_variant(&mut self, _: &LateContext, _: &hir::Variant, _: &hir::Generics) { }
- fn check_variant_post(&mut self, _: &LateContext, _: &hir::Variant, _: &hir::Generics) { }
- fn check_lifetime(&mut self, _: &LateContext, _: &hir::Lifetime) { }
- fn check_lifetime_def(&mut self, _: &LateContext, _: &hir::LifetimeDef) { }
- fn check_path(&mut self, _: &LateContext, _: &hir::Path, _: ast::NodeId) { }
- fn check_attribute(&mut self, _: &LateContext, _: &ast::Attribute) { }
+ fn check_crate(&mut self, _: &LateContext<'a, 'tcx>, _: &'tcx hir::Crate) { }
+ fn check_crate_post(&mut self, _: &LateContext<'a, 'tcx>, _: &'tcx hir::Crate) { }
+ fn check_mod(&mut self,
+ _: &LateContext<'a, 'tcx>,
+ _: &'tcx hir::Mod,
+ _: Span,
+ _: ast::NodeId) { }
+ fn check_mod_post(&mut self,
+ _: &LateContext<'a, 'tcx>,
+ _: &'tcx hir::Mod,
+ _: Span,
+ _: ast::NodeId) { }
+ fn check_foreign_item(&mut self, _: &LateContext<'a, 'tcx>, _: &'tcx hir::ForeignItem) { }
+ fn check_foreign_item_post(&mut self, _: &LateContext<'a, 'tcx>, _: &'tcx hir::ForeignItem) { }
+ fn check_item(&mut self, _: &LateContext<'a, 'tcx>, _: &'tcx hir::Item) { }
+ fn check_item_post(&mut self, _: &LateContext<'a, 'tcx>, _: &'tcx hir::Item) { }
+ fn check_local(&mut self, _: &LateContext<'a, 'tcx>, _: &'tcx hir::Local) { }
+ fn check_block(&mut self, _: &LateContext<'a, 'tcx>, _: &'tcx hir::Block) { }
+ fn check_block_post(&mut self, _: &LateContext<'a, 'tcx>, _: &'tcx hir::Block) { }
+ fn check_stmt(&mut self, _: &LateContext<'a, 'tcx>, _: &'tcx hir::Stmt) { }
+ fn check_arm(&mut self, _: &LateContext<'a, 'tcx>, _: &'tcx hir::Arm) { }
+ fn check_pat(&mut self, _: &LateContext<'a, 'tcx>, _: &'tcx hir::Pat) { }
+ fn check_decl(&mut self, _: &LateContext<'a, 'tcx>, _: &'tcx hir::Decl) { }
+ fn check_expr(&mut self, _: &LateContext<'a, 'tcx>, _: &'tcx hir::Expr) { }
+ fn check_expr_post(&mut self, _: &LateContext<'a, 'tcx>, _: &'tcx hir::Expr) { }
+ fn check_ty(&mut self, _: &LateContext<'a, 'tcx>, _: &'tcx hir::Ty) { }
+ fn check_generics(&mut self, _: &LateContext<'a, 'tcx>, _: &'tcx hir::Generics) { }
+ fn check_fn(&mut self,
+ _: &LateContext<'a, 'tcx>,
+ _: FnKind<'tcx>,
+ _: &'tcx hir::FnDecl,
+ _: &'tcx hir::Expr,
+ _: Span,
+ _: ast::NodeId) { }
+ fn check_fn_post(&mut self,
+ _: &LateContext<'a, 'tcx>,
+ _: FnKind<'tcx>,
+ _: &'tcx hir::FnDecl,
+ _: &'tcx hir::Expr,
+ _: Span,
+ _: ast::NodeId) { }
+ fn check_trait_item(&mut self, _: &LateContext<'a, 'tcx>, _: &'tcx hir::TraitItem) { }
+ fn check_trait_item_post(&mut self, _: &LateContext<'a, 'tcx>, _: &'tcx hir::TraitItem) { }
+ fn check_impl_item(&mut self, _: &LateContext<'a, 'tcx>, _: &'tcx hir::ImplItem) { }
+ fn check_impl_item_post(&mut self, _: &LateContext<'a, 'tcx>, _: &'tcx hir::ImplItem) { }
+ fn check_struct_def(&mut self,
+ _: &LateContext<'a, 'tcx>,
+ _: &'tcx hir::VariantData,
+ _: ast::Name,
+ _: &'tcx hir::Generics,
+ _: ast::NodeId) { }
+ fn check_struct_def_post(&mut self,
+ _: &LateContext<'a, 'tcx>,
+ _: &'tcx hir::VariantData,
+ _: ast::Name,
+ _: &'tcx hir::Generics,
+ _: ast::NodeId) { }
+ fn check_struct_field(&mut self, _: &LateContext<'a, 'tcx>, _: &'tcx hir::StructField) { }
+ fn check_variant(&mut self,
+ _: &LateContext<'a, 'tcx>,
+ _: &'tcx hir::Variant,
+ _: &'tcx hir::Generics) { }
+ fn check_variant_post(&mut self,
+ _: &LateContext<'a, 'tcx>,
+ _: &'tcx hir::Variant,
+ _: &'tcx hir::Generics) { }
+ fn check_lifetime(&mut self, _: &LateContext<'a, 'tcx>, _: &'tcx hir::Lifetime) { }
+ fn check_lifetime_def(&mut self, _: &LateContext<'a, 'tcx>, _: &'tcx hir::LifetimeDef) { }
+ fn check_path(&mut self, _: &LateContext<'a, 'tcx>, _: &'tcx hir::Path, _: ast::NodeId) { }
+ fn check_attribute(&mut self, _: &LateContext<'a, 'tcx>, _: &'tcx ast::Attribute) { }
/// Called when entering a syntax node that can have lint attributes such
/// as `#[allow(...)]`. Called with *all* the attributes of that node.
- fn enter_lint_attrs(&mut self, _: &LateContext, _: &[ast::Attribute]) { }
+ fn enter_lint_attrs(&mut self, _: &LateContext<'a, 'tcx>, _: &'tcx [ast::Attribute]) { }
/// Counterpart to `enter_lint_attrs`.
- fn exit_lint_attrs(&mut self, _: &LateContext, _: &[ast::Attribute]) { }
+ fn exit_lint_attrs(&mut self, _: &LateContext<'a, 'tcx>, _: &'tcx [ast::Attribute]) { }
}
pub trait EarlyLintPass: LintPass {
/// A lint pass boxed up as a trait object.
pub type EarlyLintPassObject = Box<EarlyLintPass + 'static>;
-pub type LateLintPassObject = Box<LateLintPass + 'static>;
+pub type LateLintPassObject = Box<for<'a, 'tcx> LateLintPass<'a, 'tcx> + 'static>;
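The new alias uses a higher-ranked trait bound: the boxed pass must implement `LateLintPass` for *every* pair of lifetimes, not one particular pair. A minimal illustration of the same `for<'a>` trait-object shape, using a hypothetical `Pass` trait:

```rust
trait Pass<'a> {
    fn run(&self, input: &'a str) -> usize;
}

struct CountPass;

impl<'a> Pass<'a> for CountPass {
    fn run(&self, input: &'a str) -> usize {
        input.len()
    }
}

// `for<'a>` means the object works no matter how long the input lives,
// mirroring `Box<for<'a, 'tcx> LateLintPass<'a, 'tcx>>`.
type PassObject = Box<dyn for<'a> Pass<'a>>;

fn main() {
    let p: PassObject = Box::new(CountPass);
    let s = String::from("lint");
    assert_eq!(p.run(&s), 4);
}
```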
/// Identifies a lint known to the compiler.
#[derive(Clone, Copy, Debug)]
pub kind: NativeLibraryKind,
pub name: Symbol,
pub cfg: Option<ast::MetaItem>,
+ pub foreign_items: Vec<DefIndex>,
}
/// The data we save and restore about an inlined item or method. This is not
fn is_defaulted_trait(&self, did: DefId) -> bool;
fn is_default_impl(&self, impl_did: DefId) -> bool;
fn is_foreign_item(&self, did: DefId) -> bool;
- fn is_statically_included_foreign_item(&self, id: ast::NodeId) -> bool;
+ fn is_dllimport_foreign_item(&self, def: DefId) -> bool;
+ fn is_statically_included_foreign_item(&self, def_id: DefId) -> bool;
// crate metadata
fn dylib_dependency_formats(&self, cnum: CrateNum)
fn crate_hash(&self, cnum: CrateNum) -> Svh;
fn crate_disambiguator(&self, cnum: CrateNum) -> Symbol;
fn plugin_registrar_fn(&self, cnum: CrateNum) -> Option<DefId>;
+ fn derive_registrar_fn(&self, cnum: CrateNum) -> Option<DefId>;
fn native_libraries(&self, cnum: CrateNum) -> Vec<NativeLibrary>;
- fn reachable_ids(&self, cnum: CrateNum) -> Vec<DefId>;
+ fn exported_symbols(&self, cnum: CrateNum) -> Vec<DefId>;
fn is_no_builtins(&self, cnum: CrateNum) -> bool;
// resolve
fn is_defaulted_trait(&self, did: DefId) -> bool { bug!("is_defaulted_trait") }
fn is_default_impl(&self, impl_did: DefId) -> bool { bug!("is_default_impl") }
fn is_foreign_item(&self, did: DefId) -> bool { bug!("is_foreign_item") }
- fn is_statically_included_foreign_item(&self, id: ast::NodeId) -> bool { false }
+ fn is_dllimport_foreign_item(&self, id: DefId) -> bool { false }
+ fn is_statically_included_foreign_item(&self, def_id: DefId) -> bool { false }
// crate metadata
fn dylib_dependency_formats(&self, cnum: CrateNum)
-> Symbol { bug!("crate_disambiguator") }
fn plugin_registrar_fn(&self, cnum: CrateNum) -> Option<DefId>
{ bug!("plugin_registrar_fn") }
+ fn derive_registrar_fn(&self, cnum: CrateNum) -> Option<DefId>
+ { bug!("derive_registrar_fn") }
fn native_libraries(&self, cnum: CrateNum) -> Vec<NativeLibrary>
{ bug!("native_libraries") }
- fn reachable_ids(&self, cnum: CrateNum) -> Vec<DefId> { bug!("reachable_ids") }
+ fn exported_symbols(&self, cnum: CrateNum) -> Vec<DefId> { bug!("exported_symbols") }
fn is_no_builtins(&self, cnum: CrateNum) -> bool { bug!("is_no_builtins") }
// resolve
// This is basically a 1-based range of ints, which is a little
// silly - I may fix that.
fn crates(&self) -> Vec<CrateNum> { vec![] }
- fn used_libraries(&self) -> Vec<NativeLibrary> {
- vec![]
- }
+ fn used_libraries(&self) -> Vec<NativeLibrary> { vec![] }
fn used_link_args(&self) -> Vec<String> { vec![] }
// utility functions
let typ = self.infcx.tcx.tables().node_id_to_type(expr.id);
match typ.sty {
ty::TyFnDef(.., ref bare_fn_ty) if bare_fn_ty.abi == RustIntrinsic => {
- let from = bare_fn_ty.sig.0.inputs[0];
- let to = bare_fn_ty.sig.0.output;
+ let from = bare_fn_ty.sig.skip_binder().inputs()[0];
+ let to = bare_fn_ty.sig.skip_binder().output();
self.check_transmute(expr.span, from, to, expr.id);
}
_ => {
// handled by the lint emitting logic above.
}
None => {
- // This is an 'unmarked' API, which should not exist
- // in the standard library.
- if self.sess.features.borrow().unmarked_api {
- self.sess.struct_span_warn(span, "use of unmarked library feature")
- .span_note(span, "this is either a bug in the library you are \
- using or a bug in the compiler - please \
- report it in both places")
- .emit()
- } else {
- self.sess.struct_span_err(span, "use of unmarked library feature")
- .span_note(span, "this is either a bug in the library you are \
- using or a bug in the compiler - please \
- report it in both places")
- .span_note(span, "use #![feature(unmarked_api)] in the \
- crate attributes to override this")
- .emit()
- }
+ span_bug!(span, "encountered unmarked API");
}
}
}
// much sense: The search path can stay the same while the
// things discovered there might have changed on disk.
search_paths: SearchPaths [TRACKED],
- libs: Vec<(String, cstore::NativeLibraryKind)> [TRACKED],
+ libs: Vec<(String, Option<String>, cstore::NativeLibraryKind)> [TRACKED],
maybe_sysroot: Option<PathBuf> [TRACKED],
target_triple: String [TRACKED],
"print some performance-related statistics"),
hir_stats: bool = (false, parse_bool, [UNTRACKED],
"print some statistics about AST and HIR"),
+ mir_stats: bool = (false, parse_bool, [UNTRACKED],
+ "print some statistics about MIR"),
}
pub fn default_lib_output() -> CrateType {
}
let libs = matches.opt_strs("l").into_iter().map(|s| {
+ // Parse a string of the form "[KIND=]lib[:new_name]",
+ // where KIND is one of "dylib", "framework", or "static".
let mut parts = s.splitn(2, '=');
let kind = parts.next().unwrap();
let (name, kind) = match (parts.next(), kind) {
s));
}
};
- (name.to_string(), kind)
+ let mut name_parts = name.splitn(2, ':');
+ let name = name_parts.next().unwrap();
+ let new_name = name_parts.next();
+ (name.to_string(), new_name.map(|n| n.to_string()), kind)
}).collect();
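The `[KIND=]lib[:new_name]` parsing above can be sketched as a standalone function. This is a minimal sketch with a hypothetical helper name; the real code maps KIND onto `cstore::NativeLibraryKind` and reports unknown kinds through the session instead of silently falling back:

```rust
// Standalone sketch (hypothetical name) of parsing a -l value of the
// form "[KIND=]lib[:new_name]". The real parser maps KIND onto
// cstore::NativeLibraryKind and errors out on unrecognized kinds.
fn parse_native_lib(s: &str) -> (String, Option<String>, &'static str) {
    let mut parts = s.splitn(2, '=');
    let first = parts.next().unwrap();
    let (kind, rest) = match (parts.next(), first) {
        (Some(name), "dylib") => ("dylib", name),
        (Some(name), "framework") => ("framework", name),
        (Some(name), "static") => ("static", name),
        (Some(name), _) => ("unknown", name), // real code reports an error here
        (None, name) => ("unknown", name),
    };
    // Optional ":new_name" suffix renames the library at link time.
    let mut name_parts = rest.splitn(2, ':');
    let name = name_parts.next().unwrap().to_string();
    let new_name = name_parts.next().map(|n| n.to_string());
    (name, new_name, kind)
}
```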
let cfg = parse_cfgspecs(matches.opt_strs("cfg"));
impl_dep_tracking_hash_for_sortable_vec_of!(String);
impl_dep_tracking_hash_for_sortable_vec_of!(CrateType);
impl_dep_tracking_hash_for_sortable_vec_of!((String, lint::Level));
- impl_dep_tracking_hash_for_sortable_vec_of!((String, cstore::NativeLibraryKind));
-
+ impl_dep_tracking_hash_for_sortable_vec_of!((String, Option<String>,
+ cstore::NativeLibraryKind));
impl DepTrackingHash for SearchPaths {
fn hash(&self, hasher: &mut DefaultHasher, _: ErrorOutputType) {
let mut elems: Vec<_> = self
}
}
+ impl<T1, T2, T3> DepTrackingHash for (T1, T2, T3)
+ where T1: DepTrackingHash,
+ T2: DepTrackingHash,
+ T3: DepTrackingHash
+ {
+ fn hash(&self, hasher: &mut DefaultHasher, error_format: ErrorOutputType) {
+ Hash::hash(&0, hasher);
+ DepTrackingHash::hash(&self.0, hasher, error_format);
+ Hash::hash(&1, hasher);
+ DepTrackingHash::hash(&self.1, hasher, error_format);
+ Hash::hash(&2, hasher);
+ DepTrackingHash::hash(&self.2, hasher, error_format);
+ }
+ }
+
// This is a stable hash because BTreeMap is a sorted container
pub fn stable_hash(sub_hashes: BTreeMap<&'static str, &DepTrackingHash>,
hasher: &mut DefaultHasher,
let mut v1 = super::basic_options();
let mut v2 = super::basic_options();
let mut v3 = super::basic_options();
+ let mut v4 = super::basic_options();
// Reference
- v1.libs = vec![(String::from("a"), cstore::NativeStatic),
- (String::from("b"), cstore::NativeFramework),
- (String::from("c"), cstore::NativeUnknown)];
+ v1.libs = vec![(String::from("a"), None, cstore::NativeStatic),
+ (String::from("b"), None, cstore::NativeFramework),
+ (String::from("c"), None, cstore::NativeUnknown)];
// Change label
- v2.libs = vec![(String::from("a"), cstore::NativeStatic),
- (String::from("X"), cstore::NativeFramework),
- (String::from("c"), cstore::NativeUnknown)];
+ v2.libs = vec![(String::from("a"), None, cstore::NativeStatic),
+ (String::from("X"), None, cstore::NativeFramework),
+ (String::from("c"), None, cstore::NativeUnknown)];
// Change kind
- v3.libs = vec![(String::from("a"), cstore::NativeStatic),
- (String::from("b"), cstore::NativeStatic),
- (String::from("c"), cstore::NativeUnknown)];
+ v3.libs = vec![(String::from("a"), None, cstore::NativeStatic),
+ (String::from("b"), None, cstore::NativeStatic),
+ (String::from("c"), None, cstore::NativeUnknown)];
+
+ // Change new-name
+ v4.libs = vec![(String::from("a"), None, cstore::NativeStatic),
+ (String::from("b"), Some(String::from("X")), cstore::NativeFramework),
+ (String::from("c"), None, cstore::NativeUnknown)];
assert!(v1.dep_tracking_hash() != v2.dep_tracking_hash());
assert!(v1.dep_tracking_hash() != v3.dep_tracking_hash());
+ assert!(v1.dep_tracking_hash() != v4.dep_tracking_hash());
// Check clone
assert_eq!(v1.dep_tracking_hash(), v1.clone().dep_tracking_hash());
assert_eq!(v2.dep_tracking_hash(), v2.clone().dep_tracking_hash());
assert_eq!(v3.dep_tracking_hash(), v3.clone().dep_tracking_hash());
+ assert_eq!(v4.dep_tracking_hash(), v4.clone().dep_tracking_hash());
}
#[test]
let mut v3 = super::basic_options();
// Reference
- v1.libs = vec![(String::from("a"), cstore::NativeStatic),
- (String::from("b"), cstore::NativeFramework),
- (String::from("c"), cstore::NativeUnknown)];
+ v1.libs = vec![(String::from("a"), None, cstore::NativeStatic),
+ (String::from("b"), None, cstore::NativeFramework),
+ (String::from("c"), None, cstore::NativeUnknown)];
- v2.libs = vec![(String::from("b"), cstore::NativeFramework),
- (String::from("a"), cstore::NativeStatic),
- (String::from("c"), cstore::NativeUnknown)];
+ v2.libs = vec![(String::from("b"), None, cstore::NativeFramework),
+ (String::from("a"), None, cstore::NativeStatic),
+ (String::from("c"), None, cstore::NativeUnknown)];
- v3.libs = vec![(String::from("c"), cstore::NativeUnknown),
- (String::from("a"), cstore::NativeStatic),
- (String::from("b"), cstore::NativeFramework)];
+ v3.libs = vec![(String::from("c"), None, cstore::NativeUnknown),
+ (String::from("a"), None, cstore::NativeStatic),
+ (String::from("b"), None, cstore::NativeFramework)];
assert!(v1.dep_tracking_hash() == v2.dep_tracking_hash());
assert!(v1.dep_tracking_hash() == v3.dep_tracking_hash());
// The `Self` type is erased, so it should not appear in list of
// arguments or return type apart from the receiver.
let ref sig = self.item_type(method.def_id).fn_sig();
- for &input_ty in &sig.0.inputs[1..] {
+ for input_ty in &sig.skip_binder().inputs()[1..] {
if self.contains_illegal_self_type_reference(trait_def_id, input_ty) {
return Some(MethodViolationCode::ReferencesSelf);
}
}
- if self.contains_illegal_self_type_reference(trait_def_id, sig.0.output) {
+ if self.contains_illegal_self_type_reference(trait_def_id, sig.output().skip_binder()) {
return Some(MethodViolationCode::ReferencesSelf);
}
ty::TyFnDef(.., &ty::BareFnTy {
unsafety: hir::Unsafety::Normal,
abi: Abi::Rust,
- sig: ty::Binder(ty::FnSig {
- inputs: _,
- output: _,
- variadic: false
- })
+ ref sig,
}) |
ty::TyFnPtr(&ty::BareFnTy {
unsafety: hir::Unsafety::Normal,
abi: Abi::Rust,
- sig: ty::Binder(ty::FnSig {
- inputs: _,
- output: _,
- variadic: false
- })
- }) => {
+ ref sig
+ }) if !sig.variadic() => {
candidates.vec.push(FnPointerCandidate);
}
-> ty::Binder<(ty::TraitRef<'tcx>, Ty<'tcx>)>
{
let arguments_tuple = match tuple_arguments {
- TupleArgumentsFlag::No => sig.0.inputs[0],
- TupleArgumentsFlag::Yes => self.intern_tup(&sig.0.inputs[..]),
+ TupleArgumentsFlag::No => sig.skip_binder().inputs()[0],
+ TupleArgumentsFlag::Yes =>
+ self.intern_tup(sig.skip_binder().inputs()),
};
let trait_ref = ty::TraitRef {
def_id: fn_trait_def_id,
substs: self.mk_substs_trait(self_ty, &[arguments_tuple]),
};
- ty::Binder((trait_ref, sig.0.output))
+ ty::Binder((trait_ref, sig.skip_binder().output()))
}
}
}
}
+ pub fn mk_fn_sig<I>(self, inputs: I, output: I::Item, variadic: bool)
+ -> <I::Item as InternIteratorElement<Ty<'tcx>, ty::FnSig<'tcx>>>::Output
+ where I: Iterator,
+ I::Item: InternIteratorElement<Ty<'tcx>, ty::FnSig<'tcx>>
+ {
+ inputs.chain(iter::once(output)).intern_with(|xs| ty::FnSig {
+ inputs_and_output: self.intern_type_list(xs),
+ variadic: variadic
+ })
+ }
+
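`mk_fn_sig` builds the combined list by chaining the input types with the output before interning. The iterator shape can be sketched with toy strings standing in for interned `Ty` values:

```rust
use std::iter;

// Toy version of mk_fn_sig's plumbing: append the output type to the
// inputs to form the single inputs_and_output list.
fn combine(inputs: &[&'static str], output: &'static str) -> Vec<&'static str> {
    inputs.iter().cloned().chain(iter::once(output)).collect()
}
```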
pub fn mk_existential_predicates<I: InternAs<[ExistentialPredicate<'tcx>],
&'tcx Slice<ExistentialPredicate<'tcx>>>>(self, iter: I)
-> I::Output {
Some(TupleSimplifiedType(tys.len()))
}
ty::TyFnDef(.., ref f) | ty::TyFnPtr(ref f) => {
- Some(FunctionSimplifiedType(f.sig.0.inputs.len()))
+ Some(FunctionSimplifiedType(f.sig.skip_binder().inputs().len()))
}
ty::TyProjection(_) | ty::TyParam(_) => {
if can_simplify_params {
fn add_fn_sig(&mut self, fn_sig: &ty::PolyFnSig) {
let mut computation = FlagComputation::new();
- computation.add_tys(&fn_sig.0.inputs);
- computation.add_ty(fn_sig.0.output);
+ computation.add_tys(fn_sig.skip_binder().inputs());
+ computation.add_ty(fn_sig.skip_binder().output());
self.add_bound_computation(&computation);
}
use ty::{self, Ty, TyCtxt, TypeFoldable};
use ty::error::{ExpectedFound, TypeError};
use std::rc::Rc;
+use std::iter;
use syntax::abi;
use hir as ast;
+use rustc_data_structures::accumulate_vec::AccumulateVec;
pub type RelateResult<'tcx, T> = Result<T, TypeError<'tcx>>;
expected_found(relation, &a.variadic, &b.variadic)));
}
- let inputs = relate_arg_vecs(relation,
- &a.inputs,
- &b.inputs)?;
- let output = relation.relate(&a.output, &b.output)?;
+ if a.inputs().len() != b.inputs().len() {
+ return Err(TypeError::ArgCount);
+ }
- Ok(ty::FnSig {inputs: inputs,
- output: output,
- variadic: a.variadic})
+ let inputs_and_output = a.inputs().iter().cloned()
+ .zip(b.inputs().iter().cloned())
+ .map(|x| (x, false))
+ .chain(iter::once(((a.output(), b.output()), true)))
+ .map(|((a, b), is_output)| {
+ if is_output {
+ relation.relate(&a, &b)
+ } else {
+ relation.relate_with_variance(ty::Contravariant, &a, &b)
+ }
+ }).collect::<Result<AccumulateVec<[_; 8]>, _>>()?;
+ Ok(ty::FnSig {
+ inputs_and_output: relation.tcx().intern_type_list(&inputs_and_output),
+ variadic: a.variadic
+ })
}
}
-fn relate_arg_vecs<'a, 'gcx, 'tcx, R>(relation: &mut R,
- a_args: &[Ty<'tcx>],
- b_args: &[Ty<'tcx>])
- -> RelateResult<'tcx, Vec<Ty<'tcx>>>
- where R: TypeRelation<'a, 'gcx, 'tcx>, 'gcx: 'a+'tcx, 'tcx: 'a
-{
- if a_args.len() != b_args.len() {
- return Err(TypeError::ArgCount);
- }
-
- a_args.iter().zip(b_args)
- .map(|(a, b)| relation.relate_with_variance(ty::Contravariant, a, b))
- .collect()
-}
-
impl<'tcx> Relate<'tcx> for ast::Unsafety {
fn relate<'a, 'gcx, R>(relation: &mut R,
a: &ast::Unsafety,
impl<'a, 'tcx> Lift<'tcx> for ty::FnSig<'a> {
type Lifted = ty::FnSig<'tcx>;
fn lift_to_tcx<'b, 'gcx>(&self, tcx: TyCtxt<'b, 'gcx, 'tcx>) -> Option<Self::Lifted> {
- tcx.lift(&self.inputs[..]).and_then(|inputs| {
- tcx.lift(&self.output).map(|output| {
- ty::FnSig {
- inputs: inputs,
- output: output,
- variadic: self.variadic
- }
- })
+ tcx.lift(&self.inputs_and_output).map(|x| {
+ ty::FnSig {
+ inputs_and_output: x,
+ variadic: self.variadic
+ }
})
}
}
impl<'tcx> TypeFoldable<'tcx> for ty::FnSig<'tcx> {
fn super_fold_with<'gcx: 'tcx, F: TypeFolder<'gcx, 'tcx>>(&self, folder: &mut F) -> Self {
- ty::FnSig { inputs: self.inputs.fold_with(folder),
- output: self.output.fold_with(folder),
- variadic: self.variadic }
+ let inputs_and_output = self.inputs_and_output.fold_with(folder);
+ ty::FnSig {
+ inputs_and_output: folder.tcx().intern_type_list(&inputs_and_output),
+ variadic: self.variadic,
+ }
}
fn fold_with<'gcx: 'tcx, F: TypeFolder<'gcx, 'tcx>>(&self, folder: &mut F) -> Self {
}
fn super_visit_with<V: TypeVisitor<'tcx>>(&self, visitor: &mut V) -> bool {
- self.inputs.visit_with(visitor) || self.output.visit_with(visitor)
+ self.inputs().iter().any(|i| i.visit_with(visitor)) ||
+ self.output().visit_with(visitor)
}
}
/// - `variadic` indicates whether this is a variadic function. (only true for foreign fns)
#[derive(Clone, PartialEq, Eq, Hash, RustcEncodable, RustcDecodable)]
pub struct FnSig<'tcx> {
- pub inputs: Vec<Ty<'tcx>>,
- pub output: Ty<'tcx>,
+ pub inputs_and_output: &'tcx Slice<Ty<'tcx>>,
pub variadic: bool
}
+impl<'tcx> FnSig<'tcx> {
+ pub fn inputs(&self) -> &[Ty<'tcx>] {
+ &self.inputs_and_output[..self.inputs_and_output.len() - 1]
+ }
+
+ pub fn output(&self) -> Ty<'tcx> {
+ self.inputs_and_output[self.inputs_and_output.len() - 1]
+ }
+}
+
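With the output stored as the final element of the interned list, the two accessors simply split the list. A toy model of that layout, using a plain `Vec` in place of the interned `Slice` and assumed type names:

```rust
// Toy model of the new FnSig layout: one list holding all input types
// followed by the output type as the final element.
struct SigModel {
    inputs_and_output: Vec<&'static str>,
}

impl SigModel {
    // Mirrors FnSig::inputs(): everything except the last element.
    fn inputs(&self) -> &[&'static str] {
        &self.inputs_and_output[..self.inputs_and_output.len() - 1]
    }
    // Mirrors FnSig::output(): the last element.
    fn output(&self) -> &'static str {
        self.inputs_and_output[self.inputs_and_output.len() - 1]
    }
}
```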
pub type PolyFnSig<'tcx> = Binder<FnSig<'tcx>>;
impl<'tcx> PolyFnSig<'tcx> {
- pub fn inputs(&self) -> ty::Binder<Vec<Ty<'tcx>>> {
- self.map_bound_ref(|fn_sig| fn_sig.inputs.clone())
+ pub fn inputs(&self) -> Binder<&[Ty<'tcx>]> {
+ Binder(self.skip_binder().inputs())
}
pub fn input(&self, index: usize) -> ty::Binder<Ty<'tcx>> {
- self.map_bound_ref(|fn_sig| fn_sig.inputs[index])
+ self.map_bound_ref(|fn_sig| fn_sig.inputs()[index])
}
pub fn output(&self) -> ty::Binder<Ty<'tcx>> {
- self.map_bound_ref(|fn_sig| fn_sig.output.clone())
+ self.map_bound_ref(|fn_sig| fn_sig.output())
}
pub fn variadic(&self) -> bool {
self.skip_binder().variadic
}
// Type accessors for substructures of types
- pub fn fn_args(&self) -> ty::Binder<Vec<Ty<'tcx>>> {
+ pub fn fn_args(&self) -> ty::Binder<&[Ty<'tcx>]> {
self.fn_sig().inputs()
}
self.hash(f.unsafety);
self.hash(f.abi);
self.hash(f.sig.variadic());
- self.hash(f.sig.inputs().skip_binder().len());
+ self.hash(f.sig.skip_binder().inputs().len());
}
TyDynamic(ref data, ..) => {
if let Some(p) = data.principal() {
}
fn push_sig_subtypes<'tcx>(stack: &mut TypeWalkerStack<'tcx>, sig: &ty::PolyFnSig<'tcx>) {
- stack.push(sig.0.output);
- stack.extend(sig.0.inputs.iter().cloned().rev());
+ stack.push(sig.skip_binder().output());
+ stack.extend(sig.skip_binder().inputs().iter().cloned().rev());
}
impl<'tcx> fmt::Display for ty::FnSig<'tcx> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "fn")?;
- fn_sig(f, &self.inputs, self.variadic, self.output)
+ fn_sig(f, self.inputs(), self.variadic, self.output())
}
}
impl<'tcx> fmt::Debug for ty::FnSig<'tcx> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
- write!(f, "({:?}; variadic: {})->{:?}", self.inputs, self.variadic, self.output)
+ write!(f, "({:?}; variadic: {})->{:?}", self.inputs(), self.variadic, self.output())
}
}
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use target::{Target, TargetResult};
+
+pub fn target() -> TargetResult {
+ let mut base = super::openbsd_base::opts();
+ base.cpu = "pentium4".to_string();
+ base.max_atomic_width = Some(64);
+ base.pre_link_args.push("-m32".to_string());
+
+ Ok(Target {
+ llvm_target: "i686-unknown-openbsd".to_string(),
+ target_endian: "little".to_string(),
+ target_pointer_width: "32".to_string(),
+ data_layout: "e-m:e-p:32:32-f64:32:64-f80:32-n8:16:32-S128".to_string(),
+ arch: "x86".to_string(),
+ target_os: "openbsd".to_string(),
+ target_env: "".to_string(),
+ target_vendor: "unknown".to_string(),
+ options: base,
+ })
+}
("x86_64-unknown-dragonfly", x86_64_unknown_dragonfly),
("x86_64-unknown-bitrig", x86_64_unknown_bitrig),
+
+ ("i686-unknown-openbsd", i686_unknown_openbsd),
("x86_64-unknown-openbsd", x86_64_unknown_openbsd),
+
("x86_64-unknown-netbsd", x86_64_unknown_netbsd),
("x86_64-rumprun-netbsd", x86_64_rumprun_netbsd),
pub staticlib_suffix: String,
/// OS family to use for conditional compilation. Valid options: "unix", "windows".
pub target_family: Option<String>,
+ /// Whether the target toolchain is like OpenBSD's.
+ /// Only useful for compiling against OpenBSD, for configuring the ABI when returning a struct.
+ pub is_like_openbsd: bool,
/// Whether the target toolchain is like OSX's. Only useful for compiling against iOS/OS X, in
/// particular running dsymutil and some other stuff like `-dead_strip`. Defaults to false.
pub is_like_osx: bool,
staticlib_prefix: "lib".to_string(),
staticlib_suffix: ".a".to_string(),
target_family: None,
+ is_like_openbsd: false,
is_like_osx: false,
is_like_solaris: false,
is_like_windows: false,
key!(staticlib_prefix);
key!(staticlib_suffix);
key!(target_family, optional);
+ key!(is_like_openbsd, bool);
key!(is_like_osx, bool);
key!(is_like_solaris, bool);
key!(is_like_windows, bool);
target_option_val!(staticlib_prefix);
target_option_val!(staticlib_suffix);
target_option_val!(target_family);
+ target_option_val!(is_like_openbsd);
target_option_val!(is_like_osx);
target_option_val!(is_like_solaris);
target_option_val!(is_like_windows);
executables: true,
linker_is_gnu: true,
has_rpath: true,
+ is_like_openbsd: true,
pre_link_args: vec![
// GNU-style linkers will use this to omit linking to libraries
// which don't actually fulfill any relocations, but only for
use rustc_plugin::registry::Registry;
use rustc_plugin as plugin;
use rustc_passes::{ast_validation, no_asm, loops, consts, rvalues,
- static_recursion, hir_stats};
+ static_recursion, hir_stats, mir_stats};
use rustc_const_eval::check_match;
use super::Compilation;
"MIR dump",
|| mir::mir_map::build_mir_for_crate(tcx));
+ if sess.opts.debugging_opts.mir_stats {
+ mir_stats::print_mir_stats(tcx, "PRE CLEANUP MIR STATS");
+ }
+
time(time_passes, "MIR cleanup and validation", || {
let mut passes = sess.mir_passes.borrow_mut();
// Push all the built-in validation passes.
"resolving dependency formats",
|| dependency_format::calculate(&tcx.sess));
+ if tcx.sess.opts.debugging_opts.mir_stats {
+ mir_stats::print_mir_stats(tcx, "PRE OPTIMISATION MIR STATS");
+ }
+
// Run the passes that transform the MIR into a more suitable form for translation to LLVM
// code.
time(time_passes, "MIR optimisations", || {
passes.run_passes(tcx);
});
+ if tcx.sess.opts.debugging_opts.mir_stats {
+ mir_stats::print_mir_stats(tcx, "POST OPTIMISATION MIR STATS");
+ }
+
let translation =
time(time_passes,
"translation",
}
pub fn t_fn(&self, input_tys: &[Ty<'tcx>], output_ty: Ty<'tcx>) -> Ty<'tcx> {
- let input_args = input_tys.iter().cloned().collect();
self.infcx.tcx.mk_fn_ptr(self.infcx.tcx.mk_bare_fn(ty::BareFnTy {
unsafety: hir::Unsafety::Normal,
abi: Abi::Rust,
- sig: ty::Binder(ty::FnSig {
- inputs: input_args,
- output: output_ty,
- variadic: false,
- }),
+ sig: ty::Binder(self.infcx.tcx.mk_fn_sig(input_tys.iter().cloned(), output_ty, false)),
}))
}
}
}
-impl LateLintPass for NonCamelCaseTypes {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for NonCamelCaseTypes {
fn check_item(&mut self, cx: &LateContext, it: &hir::Item) {
let extern_repr_count = it.attrs
.iter()
}
}
-impl LateLintPass for NonSnakeCase {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for NonSnakeCase {
fn check_crate(&mut self, cx: &LateContext, cr: &hir::Crate) {
let attr_crate_name = cr.attrs
.iter()
}
}
-impl LateLintPass for NonUpperCaseGlobals {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for NonUpperCaseGlobals {
fn check_item(&mut self, cx: &LateContext, it: &hir::Item) {
match it.node {
hir::ItemStatic(..) => {
}
}
-impl LateLintPass for WhileTrue {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for WhileTrue {
fn check_expr(&mut self, cx: &LateContext, e: &hir::Expr) {
if let hir::ExprWhile(ref cond, ..) = e.node {
if let hir::ExprLit(ref lit) = cond.node {
pub struct BoxPointers;
impl BoxPointers {
- fn check_heap_type<'a, 'tcx>(&self, cx: &LateContext<'a, 'tcx>, span: Span, ty: Ty<'tcx>) {
+ fn check_heap_type(&self, cx: &LateContext, span: Span, ty: Ty) {
for leaf_ty in ty.walk() {
if let ty::TyBox(_) = leaf_ty.sty {
let m = format!("type uses owned (Box type) pointers: {}", ty);
}
}
-impl LateLintPass for BoxPointers {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for BoxPointers {
fn check_item(&mut self, cx: &LateContext, it: &hir::Item) {
match it.node {
hir::ItemFn(..) |
}
}
-impl LateLintPass for NonShorthandFieldPatterns {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for NonShorthandFieldPatterns {
fn check_pat(&mut self, cx: &LateContext, pat: &hir::Pat) {
if let PatKind::Struct(_, ref field_pats, _) = pat.node {
for fieldpat in field_pats {
}
}
-impl LateLintPass for UnsafeCode {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for UnsafeCode {
fn check_expr(&mut self, cx: &LateContext, e: &hir::Expr) {
if let hir::ExprBlock(ref blk) = e.node {
// Don't warn about generated blocks, that'll just pollute the output.
fn check_fn(&mut self,
cx: &LateContext,
- fk: FnKind,
+ fk: FnKind<'tcx>,
_: &hir::FnDecl,
_: &hir::Expr,
span: Span,
}
}
-impl LateLintPass for MissingDoc {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for MissingDoc {
fn enter_lint_attrs(&mut self, _: &LateContext, attrs: &[ast::Attribute]) {
let doc_hidden = self.doc_hidden() ||
attrs.iter().any(|attr| {
self.doc_hidden_stack.push(doc_hidden);
}
- fn exit_lint_attrs(&mut self, _: &LateContext, _: &[ast::Attribute]) {
+ fn exit_lint_attrs(&mut self, _: &LateContext, _attrs: &[ast::Attribute]) {
self.doc_hidden_stack.pop().expect("empty doc_hidden_stack");
}
}
}
-impl LateLintPass for MissingCopyImplementations {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for MissingCopyImplementations {
fn check_item(&mut self, cx: &LateContext, item: &hir::Item) {
if !cx.access_levels.is_reachable(item.id) {
return;
}
}
-impl LateLintPass for MissingDebugImplementations {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for MissingDebugImplementations {
fn check_item(&mut self, cx: &LateContext, item: &hir::Item) {
if !cx.access_levels.is_reachable(item.id) {
return;
}
}
-impl LateLintPass for UnconditionalRecursion {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for UnconditionalRecursion {
fn check_fn(&mut self,
cx: &LateContext,
fn_kind: FnKind,
}
}
-impl LateLintPass for PluginAsLibrary {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for PluginAsLibrary {
fn check_item(&mut self, cx: &LateContext, it: &hir::Item) {
if cx.sess().plugin_registrar_fn.get().is_some() {
// We're compiling a plugin; it's fine to link other plugins.
}
}
-impl LateLintPass for InvalidNoMangleItems {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for InvalidNoMangleItems {
fn check_item(&mut self, cx: &LateContext, it: &hir::Item) {
match it.node {
hir::ItemFn(.., ref generics, _) => {
}
}
-impl LateLintPass for MutableTransmutes {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for MutableTransmutes {
fn check_expr(&mut self, cx: &LateContext, expr: &hir::Expr) {
use syntax::abi::Abi::RustIntrinsic;
let typ = cx.tcx.tables().node_id_to_type(expr.id);
match typ.sty {
ty::TyFnDef(.., ref bare_fn) if bare_fn.abi == RustIntrinsic => {
- let from = bare_fn.sig.0.inputs[0];
- let to = bare_fn.sig.0.output;
+ let from = bare_fn.sig.skip_binder().inputs()[0];
+ let to = bare_fn.sig.skip_binder().output();
return Some((&from.sty, &to.sty));
}
_ => (),
}
}
-impl LateLintPass for UnstableFeatures {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for UnstableFeatures {
fn check_attribute(&mut self, ctx: &LateContext, attr: &ast::Attribute) {
if attr.meta().check_name("feature") {
if let Some(items) = attr.meta().meta_item_list() {
}
}
-impl LateLintPass for UnionsWithDropFields {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for UnionsWithDropFields {
fn check_item(&mut self, ctx: &LateContext, item: &hir::Item) {
if let hir::ItemUnion(ref vdata, _) = item.node {
let param_env = &ty::ParameterEnvironment::for_item(ctx.tcx, item.id);
}
}
-impl LateLintPass for TypeLimits {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for TypeLimits {
fn check_expr(&mut self, cx: &LateContext, e: &hir::Expr) {
match e.node {
hir::ExprUnary(hir::UnNeg, ref expr) => {
}
let sig = cx.erase_late_bound_regions(&bare_fn.sig);
- if !sig.output.is_nil() {
- let r = self.check_type_for_ffi(cache, sig.output);
+ if !sig.output().is_nil() {
+ let r = self.check_type_for_ffi(cache, sig.output());
match r {
FfiSafe => {}
_ => {
}
}
}
- for arg in sig.inputs {
+ for arg in sig.inputs() {
let r = self.check_type_for_ffi(cache, arg);
match r {
FfiSafe => {}
let sig = self.cx.tcx.item_type(def_id).fn_sig();
let sig = self.cx.tcx.erase_late_bound_regions(&sig);
- for (&input_ty, input_hir) in sig.inputs.iter().zip(&decl.inputs) {
- self.check_type_for_ffi_and_report_errors(input_hir.ty.span, &input_ty);
+ for (input_ty, input_hir) in sig.inputs().iter().zip(&decl.inputs) {
+ self.check_type_for_ffi_and_report_errors(input_hir.ty.span, input_ty);
}
if let hir::Return(ref ret_hir) = decl.output {
- let ret_ty = sig.output;
+ let ret_ty = sig.output();
if !ret_ty.is_nil() {
self.check_type_for_ffi_and_report_errors(ret_hir.span, ret_ty);
}
}
}
-impl LateLintPass for ImproperCTypes {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for ImproperCTypes {
fn check_item(&mut self, cx: &LateContext, it: &hir::Item) {
let mut vis = ImproperCTypesVisitor { cx: cx };
if let hir::ItemForeignMod(ref nmod) = it.node {
}
}
-impl LateLintPass for VariantSizeDifferences {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for VariantSizeDifferences {
fn check_item(&mut self, cx: &LateContext, it: &hir::Item) {
if let hir::ItemEnum(ref enum_definition, ref gens) = it.node {
if gens.ty_params.is_empty() {
}
}
-impl LateLintPass for UnusedMut {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for UnusedMut {
fn check_expr(&mut self, cx: &LateContext, e: &hir::Expr) {
if let hir::ExprMatch(_, ref arms, _) = e.node {
for a in arms {
}
}
-impl LateLintPass for UnusedResults {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for UnusedResults {
fn check_stmt(&mut self, cx: &LateContext, s: &hir::Stmt) {
let expr = match s.node {
hir::StmtSemi(ref expr, _) => &**expr,
}
}
-impl LateLintPass for UnusedUnsafe {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for UnusedUnsafe {
fn check_expr(&mut self, cx: &LateContext, e: &hir::Expr) {
if let hir::ExprBlock(ref blk) = e.node {
// Don't warn about generated blocks, that'll just pollute the output.
}
}
-impl LateLintPass for PathStatements {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for PathStatements {
fn check_stmt(&mut self, cx: &LateContext, s: &hir::Stmt) {
if let hir::StmtSemi(ref expr, _) = s.node {
if let hir::ExprPath(_) = expr.node {
}
}
-impl LateLintPass for UnusedAttributes {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for UnusedAttributes {
fn check_attribute(&mut self, cx: &LateContext, attr: &ast::Attribute) {
debug!("checking attribute: {:?}", attr);
}
}
-impl LateLintPass for UnusedAllocation {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for UnusedAllocation {
fn check_expr(&mut self, cx: &LateContext, e: &hir::Expr) {
match e.node {
hir::ExprBox(_) => {}
CommonLinkage = 10,
}
+/// LLVMRustVisibility
+#[derive(Copy, Clone, PartialEq, Eq, Hash, Debug)]
+#[repr(C)]
+pub enum Visibility {
+ Default = 0,
+ Hidden = 1,
+ Protected = 2,
+}
+
/// LLVMDiagnosticSeverity
#[derive(Copy, Clone, Debug)]
#[repr(C)]
pub type DiagnosticHandler = unsafe extern "C" fn(DiagnosticInfoRef, *mut c_void);
pub type InlineAsmDiagHandler = unsafe extern "C" fn(SMDiagnosticRef, *const c_void, c_uint);
-/// LLVMVisibility
-#[repr(C)]
-pub enum Visibility {
- Default,
- Hidden,
- Protected,
-}
pub mod debuginfo {
use super::MetadataRef;
// generates an llvmdeps.rs file next to this one which will be
// automatically updated whenever LLVM is updated to include an up-to-date
// set of the libraries we need to link to LLVM for.
-#[link(name = "rustllvm", kind = "static")]
-#[cfg(not(cargobuild))]
-extern "C" {}
-
-#[linked_from = "rustllvm"] // not quite true but good enough
+#[cfg_attr(not(all(stage0, cargobuild)),
+           link(name = "rustllvm", kind = "static"))]
+#[cfg_attr(stage0, linked_from = "rustllvm")] // not quite true but good enough
extern "C" {
// Create and destroy contexts.
pub fn LLVMContextCreate() -> ContextRef;
pub fn LLVMRustSetLinkage(Global: ValueRef, RustLinkage: Linkage);
pub fn LLVMGetSection(Global: ValueRef) -> *const c_char;
pub fn LLVMSetSection(Global: ValueRef, Section: *const c_char);
- pub fn LLVMSetVisibility(Global: ValueRef, Viz: Visibility);
+ pub fn LLVMRustGetVisibility(Global: ValueRef) -> Visibility;
+ pub fn LLVMRustSetVisibility(Global: ValueRef, Viz: Visibility);
pub fn LLVMGetAlignment(Global: ValueRef) -> c_uint;
pub fn LLVMSetAlignment(Global: ValueRef, Bytes: c_uint);
pub fn LLVMSetDLLStorageClass(V: ValueRef, C: DLLStorageClass);
#![feature(concat_idents)]
#![feature(libc)]
#![feature(link_args)]
-#![feature(linked_from)]
+#![cfg_attr(stage0, feature(linked_from))]
#![feature(staged_api)]
#![cfg_attr(not(stage0), feature(rustc_private))]
use rustc::session::search_paths::PathKind;
use rustc::middle;
use rustc::middle::cstore::{CrateStore, validate_crate_name, ExternCrate};
-use rustc::util::nodemap::{FxHashMap, FxHashSet};
+use rustc::util::nodemap::FxHashSet;
use rustc::middle::cstore::NativeLibrary;
use rustc::hir::map::Definitions;
pub sess: &'a Session,
cstore: &'a CStore,
next_crate_num: CrateNum,
- foreign_item_map: FxHashMap<String, Vec<ast::NodeId>>,
local_crate_name: Symbol,
}
cstore.add_used_library(lib);
}
+fn relevant_lib(sess: &Session, lib: &NativeLibrary) -> bool {
+ match lib.cfg {
+ Some(ref cfg) => attr::cfg_matches(cfg, &sess.parse_sess, None),
+ None => true,
+ }
+}
+
// Extra info about a crate loaded for plugins or exported macros.
struct ExtensionCrate {
metadata: PMDSource,
sess: sess,
cstore: cstore,
next_crate_num: cstore.next_crate_num(),
- foreign_item_map: FxHashMap(),
local_crate_name: Symbol::intern(local_crate_name),
}
}
let cnum_map = self.resolve_crate_deps(root, &crate_root, &metadata, cnum, span, dep_kind);
- let cmeta = Rc::new(cstore::CrateMetadata {
+ let mut cmeta = cstore::CrateMetadata {
name: name,
extern_crate: Cell::new(None),
key_map: metadata.load_key_map(crate_root.index),
rlib: rlib,
rmeta: rmeta,
},
- });
+ dllimport_foreign_items: FxHashSet(),
+ };
+ let dllimports: Vec<_> = cmeta.get_native_libraries().iter()
+ .filter(|lib| relevant_lib(self.sess, lib) &&
+ lib.kind == cstore::NativeLibraryKind::NativeUnknown)
+ .flat_map(|lib| &lib.foreign_items)
+ .cloned()
+ .collect();
+ cmeta.dllimport_foreign_items.extend(dllimports);
+
+ let cmeta = Rc::new(cmeta);
self.cstore.set_crate_data(cnum, cmeta.clone());
(cnum, cmeta)
}
}
}
- fn register_statically_included_foreign_items(&mut self) {
+ fn get_foreign_items_of_kind(&self, kind: cstore::NativeLibraryKind) -> Vec<DefIndex> {
+ let mut items = vec![];
let libs = self.cstore.get_used_libraries();
- for (foreign_lib, list) in self.foreign_item_map.iter() {
- let is_static = libs.borrow().iter().any(|lib| {
- lib.name == &**foreign_lib && lib.kind == cstore::NativeStatic
- });
- if is_static {
- for id in list {
- self.cstore.add_statically_included_foreign_item(*id);
- }
+ for lib in libs.borrow().iter() {
+ if relevant_lib(self.sess, lib) && lib.kind == kind {
+ items.extend(&lib.foreign_items);
}
}
+ items
+ }
+
+ fn register_statically_included_foreign_items(&mut self) {
+ for id in self.get_foreign_items_of_kind(cstore::NativeStatic) {
+ self.cstore.add_statically_included_foreign_item(id);
+ }
+ }
+
+ fn register_dllimport_foreign_items(&mut self) {
+ let mut dllimports = self.cstore.dllimport_foreign_items.borrow_mut();
+ for id in self.get_foreign_items_of_kind(cstore::NativeUnknown) {
+ dllimports.insert(id);
+ }
}
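The two registration hunks above share the same shape: walk the used native libraries, keep only those of one `NativeLibraryKind`, and collect their foreign item ids. A minimal standalone sketch of that filtering (with illustrative `Kind`, `Lib`, and `items_of_kind` names, not rustc's actual types):

```rust
// Illustrative sketch only: `Kind` stands in for NativeLibraryKind and
// `u32` for DefIndex; the real code also filters with relevant_lib().
#[derive(Clone, Copy, PartialEq)]
enum Kind { Static, Unknown, Framework }

struct Lib {
    kind: Kind,
    foreign_items: Vec<u32>,
}

// Gather the foreign item ids of every library with the given kind.
fn items_of_kind(libs: &[Lib], kind: Kind) -> Vec<u32> {
    let mut items = vec![];
    for lib in libs {
        if lib.kind == kind {
            items.extend(&lib.foreign_items);
        }
    }
    items
}
```

Statically included items and dllimport items are then just the `Static` and `Unknown` slices of this collection.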
fn inject_panic_runtime(&mut self, krate: &ast::Crate) {
}
}
- fn process_foreign_mod(&mut self, i: &ast::Item, fm: &ast::ForeignMod) {
+ fn process_foreign_mod(&mut self, i: &ast::Item, fm: &ast::ForeignMod,
+ definitions: &Definitions) {
if fm.abi == Abi::Rust || fm.abi == Abi::RustIntrinsic || fm.abi == Abi::PlatformIntrinsic {
return;
}
let cfg = cfg.map(|list| {
list[0].meta_item().unwrap().clone()
});
+ let foreign_items = fm.items.iter()
+ .map(|it| definitions.opt_def_index(it.id).unwrap())
+ .collect();
let lib = NativeLibrary {
name: n,
kind: kind,
cfg: cfg,
+ foreign_items: foreign_items,
};
register_native_lib(self.sess, self.cstore, Some(m.span), lib);
}
-
- // Finally, process the #[linked_from = "..."] attribute
- for m in i.attrs.iter().filter(|a| a.check_name("linked_from")) {
- let lib_name = match m.value_str() {
- Some(name) => name,
- None => continue,
- };
- let list = self.foreign_item_map.entry(lib_name.to_string())
- .or_insert(Vec::new());
- list.extend(fm.items.iter().map(|it| it.id));
- }
}
}
dump_crates(&self.cstore);
}
- for &(ref name, kind) in &self.sess.opts.libs {
- let lib = NativeLibrary {
- name: Symbol::intern(name),
- kind: kind,
- cfg: None,
- };
- register_native_lib(self.sess, self.cstore, None, lib);
+ // Process libs passed on the command line
+ // First, check for errors
+ let mut renames = FxHashSet();
+ for &(ref name, ref new_name, _) in &self.sess.opts.libs {
+ if let &Some(ref new_name) = new_name {
+ if new_name.is_empty() {
+ self.sess.err(
&format!("an empty renaming target was specified for library `{}`", name));
+ } else if !self.cstore.get_used_libraries().borrow().iter()
+ .any(|lib| lib.name == name as &str) {
+ self.sess.err(&format!("renaming of the library `{}` was specified, \
+ however this crate contains no #[link(...)] \
+ attributes referencing this library.", name));
+ } else if renames.contains(name) {
+ self.sess.err(&format!("multiple renamings were specified for library `{}`.",
+ name));
+ } else {
+ renames.insert(name);
+ }
+ }
+ }
+ // Update kind and, optionally, the name of all native libraries
+ // (there may be more than one) with the specified name.
+ for &(ref name, ref new_name, kind) in &self.sess.opts.libs {
+ let mut found = false;
+ for lib in self.cstore.get_used_libraries().borrow_mut().iter_mut() {
+ if lib.name == name as &str {
+ lib.kind = kind;
+ if let &Some(ref new_name) = new_name {
+ lib.name = Symbol::intern(new_name);
+ }
+ found = true;
+ }
+ }
+ if !found {
+ // Add if not found
+ let new_name = new_name.as_ref().map(|s| &**s); // &Option<String> -> Option<&str>
+ let lib = NativeLibrary {
+ name: Symbol::intern(new_name.unwrap_or(name)),
+ kind: kind,
+ cfg: None,
+ foreign_items: Vec::new(),
+ };
+ register_native_lib(self.sess, self.cstore, None, lib);
+ }
}
self.register_statically_included_foreign_items();
+ self.register_dllimport_foreign_items();
}
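The command-line library hunk above first validates renames before applying them: an empty rename target, a rename for a library no `#[link(...)]` attribute mentions, and a duplicate rename for the same library are each errors. A simplified sketch of that validation order (illustrative `check_renames` helper, not rustc API; errors are returned instead of reported through `Session`):

```rust
use std::collections::HashSet;

// `known` plays the role of the #[link]-declared library names in cstore;
// each entry of `libs` is (name, optional rename) as parsed from -l flags.
fn check_renames(known: &[&str], libs: &[(&str, Option<&str>)]) -> Vec<String> {
    let mut errors = vec![];
    let mut renamed = HashSet::new();
    for &(name, new_name) in libs {
        if let Some(new_name) = new_name {
            if new_name.is_empty() {
                errors.push(format!("empty renaming target for `{}`", name));
            } else if !known.contains(&name) {
                errors.push(format!("no #[link] attribute references `{}`", name));
            } else if !renamed.insert(name) {
                errors.push(format!("multiple renamings for `{}`", name));
            }
        }
    }
    errors
}
```

Only after this pass does the real code mutate the used-library list, updating kind and name in place or registering a fresh entry when nothing matched.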
fn process_item(&mut self, item: &ast::Item, definitions: &Definitions) {
match item.node {
- ast::ItemKind::ExternCrate(_) => {}
- ast::ItemKind::ForeignMod(ref fm) => return self.process_foreign_mod(item, fm),
- _ => return,
- }
-
- let info = self.extract_crate_info(item).unwrap();
- let (cnum, ..) = self.resolve_crate(
- &None, info.ident, info.name, None, item.span, PathKind::Crate, info.dep_kind,
- );
+ ast::ItemKind::ForeignMod(ref fm) => {
+ self.process_foreign_mod(item, fm, definitions)
+ },
+ ast::ItemKind::ExternCrate(_) => {
+ let info = self.extract_crate_info(item).unwrap();
+ let (cnum, ..) = self.resolve_crate(
+ &None, info.ident, info.name, None, item.span, PathKind::Crate, info.dep_kind,
+ );
- let def_id = definitions.opt_local_def_id(item.id).unwrap();
- let len = definitions.def_path(def_id.index).data.len();
+ let def_id = definitions.opt_local_def_id(item.id).unwrap();
+ let len = definitions.def_path(def_id.index).data.len();
- let extern_crate =
- ExternCrate { def_id: def_id, span: item.span, direct: true, path_len: len };
- self.update_extern_crate(cnum, extern_crate, &mut FxHashSet());
- self.cstore.add_extern_mod_stmt_cnum(info.id, cnum);
+ let extern_crate =
+ ExternCrate { def_id: def_id, span: item.span, direct: true, path_len: len };
+ self.update_extern_crate(cnum, extern_crate, &mut FxHashSet());
+ self.cstore.add_extern_mod_stmt_cnum(info.id, cnum);
+ }
+ _ => {}
+ }
}
}
use schema;
use rustc::dep_graph::DepGraph;
-use rustc::hir::def_id::{CRATE_DEF_INDEX, CrateNum, DefIndex, DefId};
+use rustc::hir::def_id::{CRATE_DEF_INDEX, LOCAL_CRATE, CrateNum, DefIndex, DefId};
use rustc::hir::map::DefKey;
use rustc::hir::svh::Svh;
use rustc::middle::cstore::{DepKind, ExternCrate};
use rustc_back::PanicStrategy;
use rustc_data_structures::indexed_vec::IndexVec;
-use rustc::util::nodemap::{FxHashMap, NodeMap, NodeSet, DefIdMap};
+use rustc::util::nodemap::{FxHashMap, FxHashSet, NodeMap, DefIdMap};
use std::cell::{RefCell, Cell};
use std::rc::Rc;
use syntax::symbol::Symbol;
use syntax_pos;
-pub use rustc::middle::cstore::{NativeLibrary, LinkagePreference};
+pub use rustc::middle::cstore::{NativeLibrary, NativeLibraryKind, LinkagePreference};
pub use rustc::middle::cstore::{NativeStatic, NativeFramework, NativeUnknown};
pub use rustc::middle::cstore::{CrateSource, LinkMeta, LibSource};
pub source: CrateSource,
pub proc_macros: Option<Vec<(ast::Name, Rc<SyntaxExtension>)>>,
+ // Foreign items imported from a dylib (Windows only)
+ pub dllimport_foreign_items: FxHashSet<DefIndex>,
}
pub struct CachedInlinedItem {
extern_mod_crate_map: RefCell<NodeMap<CrateNum>>,
used_libraries: RefCell<Vec<NativeLibrary>>,
used_link_args: RefCell<Vec<String>>,
- statically_included_foreign_items: RefCell<NodeSet>,
+ statically_included_foreign_items: RefCell<FxHashSet<DefIndex>>,
+ pub dllimport_foreign_items: RefCell<FxHashSet<DefIndex>>,
pub inlined_item_cache: RefCell<DefIdMap<Option<CachedInlinedItem>>>,
pub defid_for_inlined_node: RefCell<NodeMap<DefId>>,
pub visible_parent_map: RefCell<DefIdMap<DefId>>,
extern_mod_crate_map: RefCell::new(FxHashMap()),
used_libraries: RefCell::new(Vec::new()),
used_link_args: RefCell::new(Vec::new()),
- statically_included_foreign_items: RefCell::new(NodeSet()),
+ statically_included_foreign_items: RefCell::new(FxHashSet()),
+ dllimport_foreign_items: RefCell::new(FxHashSet()),
visible_parent_map: RefCell::new(FxHashMap()),
inlined_item_cache: RefCell::new(FxHashMap()),
defid_for_inlined_node: RefCell::new(FxHashMap()),
self.extern_mod_crate_map.borrow_mut().insert(emod_id, cnum);
}
- pub fn add_statically_included_foreign_item(&self, id: ast::NodeId) {
+ pub fn add_statically_included_foreign_item(&self, id: DefIndex) {
self.statically_included_foreign_items.borrow_mut().insert(id);
}
- pub fn do_is_statically_included_foreign_item(&self, id: ast::NodeId) -> bool {
- self.statically_included_foreign_items.borrow().contains(&id)
+ pub fn do_is_statically_included_foreign_item(&self, def_id: DefId) -> bool {
+ assert!(def_id.krate == LOCAL_CRATE);
+ self.statically_included_foreign_items.borrow().contains(&def_id.index)
}
pub fn do_extern_mod_stmt_cnum(&self, emod_id: ast::NodeId) -> Option<CrateNum> {
use rustc::middle::lang_items;
use rustc::session::Session;
use rustc::ty::{self, Ty, TyCtxt};
-use rustc::hir::def_id::{CrateNum, DefId, DefIndex, CRATE_DEF_INDEX};
+use rustc::hir::def_id::{CrateNum, DefId, DefIndex, CRATE_DEF_INDEX, LOCAL_CRATE};
use rustc::dep_graph::DepNode;
use rustc::hir::map as hir_map;
self.get_crate_data(did.krate).is_foreign_item(did.index)
}
- fn is_statically_included_foreign_item(&self, id: ast::NodeId) -> bool
+ fn is_statically_included_foreign_item(&self, def_id: DefId) -> bool
{
- self.do_is_statically_included_foreign_item(id)
+ self.do_is_statically_included_foreign_item(def_id)
+ }
+
+ fn is_dllimport_foreign_item(&self, def_id: DefId) -> bool {
+ if def_id.krate == LOCAL_CRATE {
+ self.dllimport_foreign_items.borrow().contains(&def_id.index)
+ } else {
+ self.get_crate_data(def_id.krate).is_dllimport_foreign_item(def_id.index)
+ }
}
fn dylib_dependency_formats(&self, cnum: CrateNum)
})
}
+ fn derive_registrar_fn(&self, cnum: CrateNum) -> Option<DefId>
+ {
+ self.get_crate_data(cnum).root.macro_derive_registrar.map(|index| DefId {
+ krate: cnum,
+ index: index
+ })
+ }
+
fn native_libraries(&self, cnum: CrateNum) -> Vec<NativeLibrary>
{
self.get_crate_data(cnum).get_native_libraries()
}
- fn reachable_ids(&self, cnum: CrateNum) -> Vec<DefId>
+ fn exported_symbols(&self, cnum: CrateNum) -> Vec<DefId>
{
- self.get_crate_data(cnum).get_reachable_ids()
+ self.get_crate_data(cnum).get_exported_symbols()
}
fn is_no_builtins(&self, cnum: CrateNum) -> bool {
arg_names.decode(self).collect()
}
- pub fn get_reachable_ids(&self) -> Vec<DefId> {
- self.root.reachable_ids.decode(self).map(|index| self.local_def_id(index)).collect()
+ pub fn get_exported_symbols(&self) -> Vec<DefId> {
+ self.root.exported_symbols.decode(self).map(|index| self.local_def_id(index)).collect()
}
pub fn get_macro(&self, id: DefIndex) -> (ast::Name, MacroDef) {
}
}
+ pub fn is_dllimport_foreign_item(&self, id: DefIndex) -> bool {
+ self.dllimport_foreign_items.contains(&id)
+ }
+
pub fn is_defaulted_trait(&self, trait_id: DefIndex) -> bool {
match self.entry(trait_id).kind {
EntryKind::Trait(data) => data.decode(self).has_default_impl,
reexports: &'a def::ExportMap,
link_meta: &'a LinkMeta,
cstore: &'a cstore::CStore,
- reachable: &'a NodeSet,
+ exported_symbols: &'a NodeSet,
lazy_state: LazyState,
type_shorthands: FxHashMap<Ty<'tcx>, usize>,
self.lazy_seq(all_impls)
}
- // Encodes all reachable symbols in this crate into the metadata.
+ // Encodes all symbols exported from this crate into the metadata.
//
// This pass is seeded off the reachability list calculated in the
// middle::reachable module but filters out items that either don't have a
// symbol associated with them (they weren't translated) or if they're an FFI
// definition (as that's not defined in this crate).
- fn encode_reachable(&mut self) -> LazySeq<DefIndex> {
- let reachable = self.reachable;
+ fn encode_exported_symbols(&mut self) -> LazySeq<DefIndex> {
+ let exported_symbols = self.exported_symbols;
let tcx = self.tcx;
- self.lazy_seq(reachable.iter().map(|&id| tcx.map.local_def_id(id).index))
+ self.lazy_seq(exported_symbols.iter().map(|&id| tcx.map.local_def_id(id).index))
}
fn encode_dylib_dependency_formats(&mut self) -> LazySeq<Option<LinkagePreference>> {
let impls = self.encode_impls();
let impl_bytes = self.position() - i;
- // Encode reachability info.
+ // Encode exported symbols info.
i = self.position();
- let reachable_ids = self.encode_reachable();
- let reachable_bytes = self.position() - i;
+ let exported_symbols = self.encode_exported_symbols();
+ let exported_symbols_bytes = self.position() - i;
// Encode and index the items.
i = self.position();
native_libraries: native_libraries,
codemap: codemap,
impls: impls,
- reachable_ids: reachable_ids,
+ exported_symbols: exported_symbols,
index: index,
});
println!(" native bytes: {}", native_lib_bytes);
println!(" codemap bytes: {}", codemap_bytes);
println!(" impl bytes: {}", impl_bytes);
- println!(" reachable bytes: {}", reachable_bytes);
+ println!(" exp. symbols bytes: {}", exported_symbols_bytes);
println!(" item bytes: {}", item_bytes);
println!(" index bytes: {}", index_bytes);
println!(" zero bytes: {}", zero_bytes);
cstore: &cstore::CStore,
reexports: &def::ExportMap,
link_meta: &LinkMeta,
- reachable: &NodeSet)
+ exported_symbols: &NodeSet)
-> Vec<u8> {
let mut cursor = Cursor::new(vec![]);
cursor.write_all(METADATA_HEADER).unwrap();
reexports: reexports,
link_meta: link_meta,
cstore: cstore,
- reachable: reachable,
+ exported_symbols: exported_symbols,
lazy_state: LazyState::NoNode,
type_shorthands: Default::default(),
predicate_shorthands: Default::default(),
pub native_libraries: LazySeq<NativeLibrary>,
pub codemap: LazySeq<syntax_pos::FileMap>,
pub impls: LazySeq<TraitImpls>,
- pub reachable_ids: LazySeq<DefIndex>,
+ pub exported_symbols: LazySeq<DefIndex>,
pub index: LazySeq<index::Index>,
}
let diverges = match ty.sty {
ty::TyFnDef(_, _, ref f) | ty::TyFnPtr(ref f) => {
// FIXME(canndrew): This is_never should probably be an is_uninhabited
- f.sig.0.output.is_never()
+ f.sig.skip_binder().output().is_never()
}
_ => false
};
span_bug!(expr.span, "method call has late-bound regions")
});
- assert_eq!(sig.inputs.len(), 2);
+ assert_eq!(sig.inputs().len(), 2);
let tupled_args = Expr {
- ty: sig.inputs[1],
+ ty: sig.inputs()[1],
temp_lifetime: temp_lifetime,
span: expr.span,
kind: ExprKind::Tuple {
.iter()
.enumerate()
.map(|(index, arg)| {
- (fn_sig.inputs[index], Some(&*arg.pat))
+ (fn_sig.inputs()[index], Some(&*arg.pat))
});
let body = self.tcx.map.expr(body_id);
let arguments = implicit_argument.into_iter().chain(explicit_arguments);
self.cx(MirSource::Fn(id)).build(|cx| {
- build::construct_fn(cx, id, arguments, abi, fn_sig.output, body)
+ build::construct_fn(cx, id, arguments, abi, fn_sig.output(), body)
});
intravisit::walk_fn(self, fk, decl, body_id, span, id);
match *destination {
Some((ref dest, _)) => {
let dest_ty = dest.ty(mir, tcx).to_ty(tcx);
- if let Err(terr) = self.sub_types(sig.output, dest_ty) {
+ if let Err(terr) = self.sub_types(sig.output(), dest_ty) {
span_mirbug!(self, term,
"call dest mismatch ({:?} <- {:?}): {:?}",
- dest_ty, sig.output, terr);
+ dest_ty, sig.output(), terr);
}
},
None => {
// FIXME(canndrew): This is_never should probably be an is_uninhabited
- if !sig.output.is_never() {
+ if !sig.output().is_never() {
span_mirbug!(self, term, "call to converging function {:?} w/o dest", sig);
}
},
args: &[Operand<'tcx>])
{
debug!("check_call_inputs({:?}, {:?})", sig, args);
- if args.len() < sig.inputs.len() ||
- (args.len() > sig.inputs.len() && !sig.variadic) {
+ if args.len() < sig.inputs().len() ||
+ (args.len() > sig.inputs().len() && !sig.variadic) {
span_mirbug!(self, term, "call to {:?} with wrong # of args", sig);
}
- for (n, (fn_arg, op_arg)) in sig.inputs.iter().zip(args).enumerate() {
+ for (n, (fn_arg, op_arg)) in sig.inputs().iter().zip(args).enumerate() {
let op_arg_ty = op_arg.ty(mir, self.tcx());
if let Err(terr) = self.sub_types(op_arg_ty, fn_arg) {
span_mirbug!(self, term, "bad arg #{:?} ({:?} <- {:?}): {:?}",
// box_free takes a Box as a pointer. Allow for that.
- if sig.inputs.len() != 1 {
+ if sig.inputs().len() != 1 {
span_mirbug!(self, term, "box_free should take 1 argument");
return;
}
- let pointee_ty = match sig.inputs[0].sty {
+ let pointee_ty = match sig.inputs()[0].sty {
ty::TyRawPtr(mt) => mt.ty,
_ => {
span_mirbug!(self, term, "box_free should take a raw ptr");
}
}
-impl<'a> Visitor for AstValidator<'a> {
- fn visit_lifetime(&mut self, lt: &Lifetime) {
+impl<'a> Visitor<'a> for AstValidator<'a> {
+ fn visit_lifetime(&mut self, lt: &'a Lifetime) {
if lt.name == "'_" {
self.session.add_lint(lint::builtin::LIFETIME_UNDERSCORE,
lt.id,
visit::walk_lifetime(self, lt)
}
- fn visit_expr(&mut self, expr: &Expr) {
+ fn visit_expr(&mut self, expr: &'a Expr) {
match expr.node {
ExprKind::While(.., Some(ident)) |
ExprKind::Loop(_, Some(ident)) |
visit::walk_expr(self, expr)
}
- fn visit_ty(&mut self, ty: &Ty) {
+ fn visit_ty(&mut self, ty: &'a Ty) {
match ty.node {
TyKind::BareFn(ref bfty) => {
self.check_decl_no_pat(&bfty.decl, |span, _| {
visit::walk_ty(self, ty)
}
- fn visit_path(&mut self, path: &Path, id: NodeId) {
+ fn visit_path(&mut self, path: &'a Path, id: NodeId) {
if path.global && path.segments.len() > 0 {
let ident = path.segments[0].identifier;
if token::Ident(ident).is_path_segment_keyword() {
visit::walk_path(self, path)
}
- fn visit_item(&mut self, item: &Item) {
+ fn visit_item(&mut self, item: &'a Item) {
match item.node {
ItemKind::Use(ref view_path) => {
let path = view_path.node.path();
visit::walk_item(self, item)
}
- fn visit_foreign_item(&mut self, fi: &ForeignItem) {
+ fn visit_foreign_item(&mut self, fi: &'a ForeignItem) {
match fi.node {
ForeignItemKind::Fn(ref decl, _) => {
self.check_decl_no_pat(decl, |span, is_recent| {
visit::walk_foreign_item(self, fi)
}
- fn visit_vis(&mut self, vis: &Visibility) {
+ fn visit_vis(&mut self, vis: &'a Visibility) {
match *vis {
Visibility::Restricted { ref path, .. } => {
if !path.segments.iter().all(|segment| segment.parameters.is_empty()) {
collector.print("HIR STATS");
}
-pub fn print_ast_stats(krate: &ast::Crate, title: &str) {
+pub fn print_ast_stats<'v>(krate: &'v ast::Crate, title: &str) {
let mut collector = StatCollector {
krate: None,
data: FxHashMap(),
}
}
-impl<'v> ast_visit::Visitor for StatCollector<'v> {
+impl<'v> ast_visit::Visitor<'v> for StatCollector<'v> {
- fn visit_mod(&mut self, m: &ast::Mod, _s: Span, _n: NodeId) {
+ fn visit_mod(&mut self, m: &'v ast::Mod, _s: Span, _n: NodeId) {
self.record("Mod", Id::None, m);
ast_visit::walk_mod(self, m)
}
- fn visit_foreign_item(&mut self, i: &ast::ForeignItem) {
+ fn visit_foreign_item(&mut self, i: &'v ast::ForeignItem) {
self.record("ForeignItem", Id::None, i);
ast_visit::walk_foreign_item(self, i)
}
- fn visit_item(&mut self, i: &ast::Item) {
+ fn visit_item(&mut self, i: &'v ast::Item) {
self.record("Item", Id::None, i);
ast_visit::walk_item(self, i)
}
- fn visit_local(&mut self, l: &ast::Local) {
+ fn visit_local(&mut self, l: &'v ast::Local) {
self.record("Local", Id::None, l);
ast_visit::walk_local(self, l)
}
- fn visit_block(&mut self, b: &ast::Block) {
+ fn visit_block(&mut self, b: &'v ast::Block) {
self.record("Block", Id::None, b);
ast_visit::walk_block(self, b)
}
- fn visit_stmt(&mut self, s: &ast::Stmt) {
+ fn visit_stmt(&mut self, s: &'v ast::Stmt) {
self.record("Stmt", Id::None, s);
ast_visit::walk_stmt(self, s)
}
- fn visit_arm(&mut self, a: &ast::Arm) {
+ fn visit_arm(&mut self, a: &'v ast::Arm) {
self.record("Arm", Id::None, a);
ast_visit::walk_arm(self, a)
}
- fn visit_pat(&mut self, p: &ast::Pat) {
+ fn visit_pat(&mut self, p: &'v ast::Pat) {
self.record("Pat", Id::None, p);
ast_visit::walk_pat(self, p)
}
- fn visit_expr(&mut self, ex: &ast::Expr) {
+ fn visit_expr(&mut self, ex: &'v ast::Expr) {
self.record("Expr", Id::None, ex);
ast_visit::walk_expr(self, ex)
}
- fn visit_ty(&mut self, t: &ast::Ty) {
+ fn visit_ty(&mut self, t: &'v ast::Ty) {
self.record("Ty", Id::None, t);
ast_visit::walk_ty(self, t)
}
fn visit_fn(&mut self,
- fk: ast_visit::FnKind,
- fd: &ast::FnDecl,
+ fk: ast_visit::FnKind<'v>,
+ fd: &'v ast::FnDecl,
s: Span,
_: NodeId) {
self.record("FnDecl", Id::None, fd);
ast_visit::walk_fn(self, fk, fd, s)
}
- fn visit_trait_item(&mut self, ti: &ast::TraitItem) {
+ fn visit_trait_item(&mut self, ti: &'v ast::TraitItem) {
self.record("TraitItem", Id::None, ti);
ast_visit::walk_trait_item(self, ti)
}
- fn visit_impl_item(&mut self, ii: &ast::ImplItem) {
+ fn visit_impl_item(&mut self, ii: &'v ast::ImplItem) {
self.record("ImplItem", Id::None, ii);
ast_visit::walk_impl_item(self, ii)
}
- fn visit_ty_param_bound(&mut self, bounds: &ast::TyParamBound) {
+ fn visit_ty_param_bound(&mut self, bounds: &'v ast::TyParamBound) {
self.record("TyParamBound", Id::None, bounds);
ast_visit::walk_ty_param_bound(self, bounds)
}
- fn visit_struct_field(&mut self, s: &ast::StructField) {
+ fn visit_struct_field(&mut self, s: &'v ast::StructField) {
self.record("StructField", Id::None, s);
ast_visit::walk_struct_field(self, s)
}
fn visit_variant(&mut self,
- v: &ast::Variant,
- g: &ast::Generics,
+ v: &'v ast::Variant,
+ g: &'v ast::Generics,
item_id: NodeId) {
self.record("Variant", Id::None, v);
ast_visit::walk_variant(self, v, g, item_id)
}
- fn visit_lifetime(&mut self, lifetime: &ast::Lifetime) {
+ fn visit_lifetime(&mut self, lifetime: &'v ast::Lifetime) {
self.record("Lifetime", Id::None, lifetime);
ast_visit::walk_lifetime(self, lifetime)
}
- fn visit_lifetime_def(&mut self, lifetime: &ast::LifetimeDef) {
+ fn visit_lifetime_def(&mut self, lifetime: &'v ast::LifetimeDef) {
self.record("LifetimeDef", Id::None, lifetime);
ast_visit::walk_lifetime_def(self, lifetime)
}
- fn visit_mac(&mut self, mac: &ast::Mac) {
+ fn visit_mac(&mut self, mac: &'v ast::Mac) {
self.record("Mac", Id::None, mac);
}
fn visit_path_list_item(&mut self,
- prefix: &ast::Path,
- item: &ast::PathListItem) {
+ prefix: &'v ast::Path,
+ item: &'v ast::PathListItem) {
self.record("PathListItem", Id::None, item);
ast_visit::walk_path_list_item(self, prefix, item)
}
fn visit_path_segment(&mut self,
path_span: Span,
- path_segment: &ast::PathSegment) {
+ path_segment: &'v ast::PathSegment) {
self.record("PathSegment", Id::None, path_segment);
ast_visit::walk_path_segment(self, path_span, path_segment)
}
- fn visit_assoc_type_binding(&mut self, type_binding: &ast::TypeBinding) {
+ fn visit_assoc_type_binding(&mut self, type_binding: &'v ast::TypeBinding) {
self.record("TypeBinding", Id::None, type_binding);
ast_visit::walk_assoc_type_binding(self, type_binding)
}
- fn visit_attribute(&mut self, attr: &ast::Attribute) {
+ fn visit_attribute(&mut self, attr: &'v ast::Attribute) {
self.record("Attribute", Id::None, attr);
}
- fn visit_macro_def(&mut self, macro_def: &ast::MacroDef) {
+ fn visit_macro_def(&mut self, macro_def: &'v ast::MacroDef) {
self.record("MacroDef", Id::None, macro_def);
ast_visit::walk_macro_def(self, macro_def)
}
pub mod consts;
pub mod hir_stats;
pub mod loops;
+pub mod mir_stats;
pub mod no_asm;
pub mod rvalues;
pub mod static_recursion;
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// The visitors in this module collect sizes and counts of the most important
+// pieces of MIR. The resulting numbers are good approximations but not
+// completely accurate (some things might be counted twice, others missed).
+
+use rustc_const_math::ConstUsize;
+use rustc::middle::const_val::ConstVal;
+use rustc::mir::{AggregateKind, AssertMessage, BasicBlock, BasicBlockData};
+use rustc::mir::{Constant, Literal, Location, LocalDecl};
+use rustc::mir::{Lvalue, LvalueElem, LvalueProjection};
+use rustc::mir::{Mir, Operand, ProjectionElem};
+use rustc::mir::{Rvalue, SourceInfo, Statement, StatementKind};
+use rustc::mir::{Terminator, TerminatorKind, TypedConstVal, VisibilityScope, VisibilityScopeData};
+use rustc::mir::visit as mir_visit;
+use rustc::mir::visit::Visitor;
+use rustc::ty::{ClosureSubsts, TyCtxt};
+use rustc::util::common::to_readable_str;
+use rustc::util::nodemap::FxHashMap;
+
+struct NodeData {
+ count: usize,
+ size: usize,
+}
+
+struct StatCollector<'a, 'tcx: 'a> {
+ _tcx: TyCtxt<'a, 'tcx, 'tcx>,
+ data: FxHashMap<&'static str, NodeData>,
+}
+
+pub fn print_mir_stats<'tcx, 'a>(tcx: TyCtxt<'a, 'tcx, 'tcx>, title: &str) {
+ let mut collector = StatCollector {
+ _tcx: tcx,
+ data: FxHashMap(),
+ };
+ // For debugging instrumentation like this, we don't need to worry
+ // about maintaining the dep graph.
+ let _ignore = tcx.dep_graph.in_ignore();
+ let mir_map = tcx.mir_map.borrow();
+ for def_id in mir_map.keys() {
+ let mir = mir_map.get(&def_id).unwrap();
+ collector.visit_mir(&mir.borrow());
+ }
+ collector.print(title);
+}
+
+impl<'a, 'tcx> StatCollector<'a, 'tcx> {
+
+ fn record_with_size(&mut self, label: &'static str, node_size: usize) {
+ let entry = self.data.entry(label).or_insert(NodeData {
+ count: 0,
+ size: 0,
+ });
+
+ entry.count += 1;
+ entry.size = node_size;
+ }
+
+ fn record<T>(&mut self, label: &'static str, node: &T) {
+ self.record_with_size(label, ::std::mem::size_of_val(node));
+ }
+
+ fn print(&self, title: &str) {
+ let mut stats: Vec<_> = self.data.iter().collect();
+
+ stats.sort_by_key(|&(_, ref d)| d.count * d.size);
+
+ println!("\n{}\n", title);
+
+ println!("{:<32}{:>18}{:>14}{:>14}",
+ "Name", "Accumulated Size", "Count", "Item Size");
+ println!("------------------------------------------------------------------------------");
+
+ for (label, data) in stats {
+ println!("{:<32}{:>18}{:>14}{:>14}",
+ label,
+ to_readable_str(data.count * data.size),
+ to_readable_str(data.count),
+ to_readable_str(data.size));
+ }
+ println!("------------------------------------------------------------------------------");
+ }
+}
+
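The bookkeeping in `record_with_size` above is simple: each label keeps a count of occurrences and the per-item size, and the printed "Accumulated Size" is `count * size`. A minimal sketch of that accumulation (plain `HashMap` standing in for `FxHashMap`, illustrative names):

```rust
use std::collections::HashMap;

struct NodeData {
    count: usize,
    size: usize,
}

// Count one more occurrence of `label` and remember its per-item size.
// Note the size is overwritten, not summed: every node with the same
// label is assumed to have the same size.
fn record(data: &mut HashMap<&'static str, NodeData>, label: &'static str, size: usize) {
    let entry = data.entry(label).or_insert(NodeData { count: 0, size: 0 });
    entry.count += 1;
    entry.size = size;
}
```

This is why the module comment calls the numbers good approximations rather than exact: the per-label size is a single representative value, and some nodes may be visited twice or missed.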
+impl<'a, 'tcx> mir_visit::Visitor<'tcx> for StatCollector<'a, 'tcx> {
+ fn visit_mir(&mut self, mir: &Mir<'tcx>) {
+ self.record("Mir", mir);
+
+ // Since the `super_mir` method does not traverse the MIR of
+ // promoted rvalues (but we still want to gather statistics on
+ // the structures represented there), we manually traverse the
+ // promoted rvalues here.
+ for promoted_mir in &mir.promoted {
+ self.visit_mir(promoted_mir);
+ }
+
+ self.super_mir(mir);
+ }
+
+ fn visit_basic_block_data(&mut self,
+ block: BasicBlock,
+ data: &BasicBlockData<'tcx>) {
+ self.record("BasicBlockData", data);
+ self.super_basic_block_data(block, data);
+ }
+
+ fn visit_visibility_scope_data(&mut self,
+ scope_data: &VisibilityScopeData) {
+ self.record("VisibilityScopeData", scope_data);
+ self.super_visibility_scope_data(scope_data);
+ }
+
+ fn visit_statement(&mut self,
+ block: BasicBlock,
+ statement: &Statement<'tcx>,
+ location: Location) {
+ self.record("Statement", statement);
+ self.record(match statement.kind {
+ StatementKind::Assign(..) => "StatementKind::Assign",
+ StatementKind::SetDiscriminant { .. } => "StatementKind::SetDiscriminant",
+ StatementKind::StorageLive(..) => "StatementKind::StorageLive",
+ StatementKind::StorageDead(..) => "StatementKind::StorageDead",
+ StatementKind::Nop => "StatementKind::Nop",
+ }, &statement.kind);
+ self.super_statement(block, statement, location);
+ }
+
+ fn visit_terminator(&mut self,
+ block: BasicBlock,
+ terminator: &Terminator<'tcx>,
+ location: Location) {
+ self.record("Terminator", terminator);
+ self.super_terminator(block, terminator, location);
+ }
+
+ fn visit_terminator_kind(&mut self,
+ block: BasicBlock,
+ kind: &TerminatorKind<'tcx>,
+ location: Location) {
+ self.record("TerminatorKind", kind);
+ self.record(match *kind {
+ TerminatorKind::Goto { .. } => "TerminatorKind::Goto",
+ TerminatorKind::If { .. } => "TerminatorKind::If",
+ TerminatorKind::Switch { .. } => "TerminatorKind::Switch",
+ TerminatorKind::SwitchInt { .. } => "TerminatorKind::SwitchInt",
+ TerminatorKind::Resume => "TerminatorKind::Resume",
+ TerminatorKind::Return => "TerminatorKind::Return",
+ TerminatorKind::Unreachable => "TerminatorKind::Unreachable",
+ TerminatorKind::Drop { .. } => "TerminatorKind::Drop",
+ TerminatorKind::DropAndReplace { .. } => "TerminatorKind::DropAndReplace",
+ TerminatorKind::Call { .. } => "TerminatorKind::Call",
+ TerminatorKind::Assert { .. } => "TerminatorKind::Assert",
+ }, kind);
+ self.super_terminator_kind(block, kind, location);
+ }
+
+ fn visit_assert_message(&mut self,
+ msg: &AssertMessage<'tcx>,
+ location: Location) {
+ self.record("AssertMessage", msg);
+ self.record(match *msg {
+ AssertMessage::BoundsCheck { .. } => "AssertMessage::BoundsCheck",
+ AssertMessage::Math(..) => "AssertMessage::Math",
+ }, msg);
+ self.super_assert_message(msg, location);
+ }
+
+ fn visit_rvalue(&mut self,
+ rvalue: &Rvalue<'tcx>,
+ location: Location) {
+ self.record("Rvalue", rvalue);
+ let rvalue_kind = match *rvalue {
+ Rvalue::Use(..) => "Rvalue::Use",
+ Rvalue::Repeat(..) => "Rvalue::Repeat",
+ Rvalue::Ref(..) => "Rvalue::Ref",
+ Rvalue::Len(..) => "Rvalue::Len",
+ Rvalue::Cast(..) => "Rvalue::Cast",
+ Rvalue::BinaryOp(..) => "Rvalue::BinaryOp",
+ Rvalue::CheckedBinaryOp(..) => "Rvalue::CheckedBinaryOp",
+ Rvalue::UnaryOp(..) => "Rvalue::UnaryOp",
+ Rvalue::Box(..) => "Rvalue::Box",
+ Rvalue::Aggregate(ref kind, ref _operands) => {
+ // AggregateKind is not distinguished by the visit API, so
+ // record it here. (`super_rvalue` handles `_operands`.)
+ self.record(match *kind {
+ AggregateKind::Array => "AggregateKind::Array",
+ AggregateKind::Tuple => "AggregateKind::Tuple",
+ AggregateKind::Adt(..) => "AggregateKind::Adt",
+ AggregateKind::Closure(..) => "AggregateKind::Closure",
+ }, kind);
+
+ "Rvalue::Aggregate"
+ }
+ Rvalue::InlineAsm { .. } => "Rvalue::InlineAsm",
+ };
+ self.record(rvalue_kind, rvalue);
+ self.super_rvalue(rvalue, location);
+ }
+
+ fn visit_operand(&mut self,
+ operand: &Operand<'tcx>,
+ location: Location) {
+ self.record("Operand", operand);
+ self.record(match *operand {
+ Operand::Consume(..) => "Operand::Consume",
+ Operand::Constant(..) => "Operand::Constant",
+ }, operand);
+ self.super_operand(operand, location);
+ }
+
+ fn visit_lvalue(&mut self,
+ lvalue: &Lvalue<'tcx>,
+ context: mir_visit::LvalueContext<'tcx>,
+ location: Location) {
+ self.record("Lvalue", lvalue);
+ self.record(match *lvalue {
+ Lvalue::Local(..) => "Lvalue::Local",
+ Lvalue::Static(..) => "Lvalue::Static",
+ Lvalue::Projection(..) => "Lvalue::Projection",
+ }, lvalue);
+ self.super_lvalue(lvalue, context, location);
+ }
+
+ fn visit_projection(&mut self,
+ lvalue: &LvalueProjection<'tcx>,
+ context: mir_visit::LvalueContext<'tcx>,
+ location: Location) {
+ self.record("LvalueProjection", lvalue);
+ self.super_projection(lvalue, context, location);
+ }
+
+ fn visit_projection_elem(&mut self,
+ lvalue: &LvalueElem<'tcx>,
+ context: mir_visit::LvalueContext<'tcx>,
+ location: Location) {
+ self.record("LvalueElem", lvalue);
+ self.record(match *lvalue {
+ ProjectionElem::Deref => "LvalueElem::Deref",
+ ProjectionElem::Subslice { .. } => "LvalueElem::Subslice",
+ ProjectionElem::Field(..) => "LvalueElem::Field",
+ ProjectionElem::Index(..) => "LvalueElem::Index",
+ ProjectionElem::ConstantIndex { .. } => "LvalueElem::ConstantIndex",
+ ProjectionElem::Downcast(..) => "LvalueElem::Downcast",
+ }, lvalue);
+ self.super_projection_elem(lvalue, context, location);
+ }
+
+ fn visit_constant(&mut self,
+ constant: &Constant<'tcx>,
+ location: Location) {
+ self.record("Constant", constant);
+ self.super_constant(constant, location);
+ }
+
+ fn visit_literal(&mut self,
+ literal: &Literal<'tcx>,
+ location: Location) {
+ self.record("Literal", literal);
+ self.record(match *literal {
+ Literal::Item { .. } => "Literal::Item",
+ Literal::Value { .. } => "Literal::Value",
+ Literal::Promoted { .. } => "Literal::Promoted",
+ }, literal);
+ self.super_literal(literal, location);
+ }
+
+ fn visit_source_info(&mut self,
+ source_info: &SourceInfo) {
+ self.record("SourceInfo", source_info);
+ self.super_source_info(source_info);
+ }
+
+ fn visit_closure_substs(&mut self,
+ substs: &ClosureSubsts<'tcx>) {
+ self.record("ClosureSubsts", substs);
+ self.super_closure_substs(substs);
+ }
+
+ fn visit_const_val(&mut self,
+ const_val: &ConstVal,
+ _: Location) {
+ self.record("ConstVal", const_val);
+ self.super_const_val(const_val);
+ }
+
+ fn visit_const_usize(&mut self,
+ const_usize: &ConstUsize,
+ _: Location) {
+ self.record("ConstUsize", const_usize);
+ self.super_const_usize(const_usize);
+ }
+
+ fn visit_typed_const_val(&mut self,
+ val: &TypedConstVal<'tcx>,
+ location: Location) {
+ self.record("TypedConstVal", val);
+ self.super_typed_const_val(val, location);
+ }
+
+ fn visit_local_decl(&mut self,
+ local_decl: &LocalDecl<'tcx>) {
+ self.record("LocalDecl", local_decl);
+ self.super_local_decl(local_decl);
+ }
+
+ fn visit_visibility_scope(&mut self,
+ scope: &VisibilityScope) {
+ self.record("VisibilityScope", scope);
+ self.super_visibility_scope(scope);
+ }
+}
sess: &'a Session,
}
-impl<'a> Visitor for CheckNoAsm<'a> {
- fn visit_expr(&mut self, e: &ast::Expr) {
+impl<'a> Visitor<'a> for CheckNoAsm<'a> {
+ fn visit_expr(&mut self, e: &'a ast::Expr) {
match e.node {
ast::ExprKind::InlineAsm(_) => {
span_err!(self.sess,
use rustc::hir::map as ast_map;
use rustc::session::{CompileResult, Session};
use rustc::hir::def::{Def, CtorKind};
-use rustc::util::nodemap::NodeMap;
+use rustc::util::nodemap::{NodeMap, NodeSet};
use syntax::ast;
use syntax::feature_gate::{GateIssue, emit_feature_err};
use rustc::hir::intravisit::{self, Visitor, NestedVisitorMap};
use rustc::hir;
-use std::cell::RefCell;
-
struct CheckCrateVisitor<'a, 'ast: 'a> {
sess: &'a Session,
ast_map: &'a ast_map::Map<'ast>,
// variant definitions with the discriminant expression that applies to
// each one. If the variant uses the default values (starting from `0`),
// then `None` is stored.
- discriminant_map: RefCell<NodeMap<Option<&'ast hir::Expr>>>,
+ discriminant_map: NodeMap<Option<&'ast hir::Expr>>,
+ detected_recursive_ids: NodeSet,
}
impl<'a, 'ast: 'a> Visitor<'ast> for CheckCrateVisitor<'a, 'ast> {
}
}
-pub fn check_crate<'ast>(sess: &Session,
- ast_map: &ast_map::Map<'ast>)
- -> CompileResult {
+pub fn check_crate<'ast>(sess: &Session, ast_map: &ast_map::Map<'ast>) -> CompileResult {
let _task = ast_map.dep_graph.in_task(DepNode::CheckStaticRecursion);
let mut visitor = CheckCrateVisitor {
sess: sess,
ast_map: ast_map,
- discriminant_map: RefCell::new(NodeMap()),
+ discriminant_map: NodeMap(),
+ detected_recursive_ids: NodeSet(),
};
sess.track_errors(|| {
// FIXME(#37712) could use ItemLikeVisitor if trait items were item-like
})
}
-struct CheckItemRecursionVisitor<'a, 'ast: 'a> {
- root_span: &'a Span,
- sess: &'a Session,
- ast_map: &'a ast_map::Map<'ast>,
- discriminant_map: &'a RefCell<NodeMap<Option<&'ast hir::Expr>>>,
+struct CheckItemRecursionVisitor<'a, 'b: 'a, 'ast: 'b> {
+ root_span: &'b Span,
+ sess: &'b Session,
+ ast_map: &'b ast_map::Map<'ast>,
+ discriminant_map: &'a mut NodeMap<Option<&'ast hir::Expr>>,
idstack: Vec<ast::NodeId>,
+ detected_recursive_ids: &'a mut NodeSet,
}
-impl<'a, 'ast: 'a> CheckItemRecursionVisitor<'a, 'ast> {
- fn new(v: &'a CheckCrateVisitor<'a, 'ast>,
- span: &'a Span)
- -> CheckItemRecursionVisitor<'a, 'ast> {
+impl<'a, 'b: 'a, 'ast: 'b> CheckItemRecursionVisitor<'a, 'b, 'ast> {
+ fn new(v: &'a mut CheckCrateVisitor<'b, 'ast>, span: &'b Span) -> Self {
CheckItemRecursionVisitor {
root_span: span,
sess: v.sess,
ast_map: v.ast_map,
- discriminant_map: &v.discriminant_map,
+ discriminant_map: &mut v.discriminant_map,
idstack: Vec::new(),
+ detected_recursive_ids: &mut v.detected_recursive_ids,
}
}
fn with_item_id_pushed<F>(&mut self, id: ast::NodeId, f: F, span: Span)
where F: Fn(&mut Self)
{
if self.idstack.iter().any(|&x| x == id) {
+ if self.detected_recursive_ids.contains(&id) {
+ return;
+ }
+ self.detected_recursive_ids.insert(id);
let any_static = self.idstack.iter().any(|&x| {
if let ast_map::NodeItem(item) = self.ast_map.get(x) {
if let hir::ItemStatic(..) = item.node {
// So for every variant, we need to track whether there is an expression
// somewhere in the enum definition that controls its discriminant. We do
// this by starting from the end and searching backward.
- fn populate_enum_discriminants(&self, enum_definition: &'ast hir::EnumDef) {
+ fn populate_enum_discriminants(&mut self, enum_definition: &'ast hir::EnumDef) {
// Get the map, and return if we already processed this enum or if it
// has no variants.
- let mut discriminant_map = self.discriminant_map.borrow_mut();
match enum_definition.variants.first() {
None => {
return;
}
- Some(variant) if discriminant_map.contains_key(&variant.node.data.id()) => {
+ Some(variant) if self.discriminant_map.contains_key(&variant.node.data.id()) => {
return;
}
_ => {}
// is affected by that expression.
if let Some(ref expr) = variant.node.disr_expr {
for id in &variant_stack {
- discriminant_map.insert(*id, Some(expr));
+ self.discriminant_map.insert(*id, Some(expr));
}
variant_stack.clear()
}
// If we are at the top, that always starts at 0, so any variant on the
// stack has a default value and does not need to be checked.
for id in &variant_stack {
- discriminant_map.insert(*id, None);
+ self.discriminant_map.insert(*id, None);
}
}
}
-impl<'a, 'ast: 'a> Visitor<'ast> for CheckItemRecursionVisitor<'a, 'ast> {
+impl<'a, 'b: 'a, 'ast: 'b> Visitor<'ast> for CheckItemRecursionVisitor<'a, 'b, 'ast> {
fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, 'ast> {
NestedVisitorMap::OnlyBodies(&self.ast_map)
}
-
fn visit_item(&mut self, it: &'ast hir::Item) {
self.with_item_id_pushed(it.id, |v| intravisit::walk_item(v, it), it.span);
}
_: ast::NodeId) {
let variant_id = variant.node.data.id();
let maybe_expr;
- if let Some(get_expr) = self.discriminant_map.borrow().get(&variant_id) {
+ if let Some(get_expr) = self.discriminant_map.get(&variant_id) {
// This is necessary because we need to let the `discriminant_map`
// borrow fall out of scope, so that we can reborrow farther down.
maybe_expr = (*get_expr).clone();
self.with_item_id_pushed(ii.id, |v| intravisit::walk_impl_item(v, ii), ii.span);
}
- fn visit_expr(&mut self, e: &'ast hir::Expr) {
- match e.node {
- hir::ExprPath(hir::QPath::Resolved(_, ref path)) => {
- match path.def {
- Def::Static(def_id, _) |
- Def::AssociatedConst(def_id) |
- Def::Const(def_id) => {
- if let Some(node_id) = self.ast_map.as_local_node_id(def_id) {
- match self.ast_map.get(node_id) {
- ast_map::NodeItem(item) => self.visit_item(item),
- ast_map::NodeTraitItem(item) => self.visit_trait_item(item),
- ast_map::NodeImplItem(item) => self.visit_impl_item(item),
- ast_map::NodeForeignItem(_) => {}
- _ => {
- span_bug!(e.span,
- "expected item, found {}",
- self.ast_map.node_to_string(node_id));
- }
- }
+ fn visit_path(&mut self, path: &'ast hir::Path, _: ast::NodeId) {
+ match path.def {
+ Def::Static(def_id, _) |
+ Def::AssociatedConst(def_id) |
+ Def::Const(def_id) => {
+ if let Some(node_id) = self.ast_map.as_local_node_id(def_id) {
+ match self.ast_map.get(node_id) {
+ ast_map::NodeItem(item) => self.visit_item(item),
+ ast_map::NodeTraitItem(item) => self.visit_trait_item(item),
+ ast_map::NodeImplItem(item) => self.visit_impl_item(item),
+ ast_map::NodeForeignItem(_) => {}
+ _ => {
+ span_bug!(path.span,
+ "expected item, found {}",
+ self.ast_map.node_to_string(node_id));
}
}
- // For variants, we only want to check expressions that
- // affect the specific variant used, but we need to check
- // the whole enum definition to see what expression that
- // might be (if any).
- Def::VariantCtor(variant_id, CtorKind::Const) => {
- if let Some(variant_id) = self.ast_map.as_local_node_id(variant_id) {
- let variant = self.ast_map.expect_variant(variant_id);
- let enum_id = self.ast_map.get_parent(variant_id);
- let enum_item = self.ast_map.expect_item(enum_id);
- if let hir::ItemEnum(ref enum_def, ref generics) = enum_item.node {
- self.populate_enum_discriminants(enum_def);
- self.visit_variant(variant, generics, enum_id);
- } else {
- span_bug!(e.span,
- "`check_static_recursion` found \
- non-enum in Def::VariantCtor");
- }
- }
+ }
+ }
+ // For variants, we only want to check expressions that
+ // affect the specific variant used, but we need to check
+ // the whole enum definition to see what expression that
+ // might be (if any).
+ Def::VariantCtor(variant_id, CtorKind::Const) => {
+ if let Some(variant_id) = self.ast_map.as_local_node_id(variant_id) {
+ let variant = self.ast_map.expect_variant(variant_id);
+ let enum_id = self.ast_map.get_parent(variant_id);
+ let enum_item = self.ast_map.expect_item(enum_id);
+ if let hir::ItemEnum(ref enum_def, ref generics) = enum_item.node {
+ self.populate_enum_discriminants(enum_def);
+ self.visit_variant(variant, generics, enum_id);
+ } else {
+ span_bug!(path.span,
+ "`check_static_recursion` found \
+ non-enum in Def::VariantCtor");
}
- _ => (),
}
}
_ => (),
}
- intravisit::walk_expr(self, e);
+ intravisit::walk_path(self, path);
}
}
macro_rules! method {
($visit:ident: $ty:ty, $invoc:path, $walk:ident) => {
- fn $visit(&mut self, node: &$ty) {
+ fn $visit(&mut self, node: &'a $ty) {
if let $invoc(..) = node.node {
self.visit_invoc(node.id);
} else {
}
}
-impl<'a, 'b> Visitor for BuildReducedGraphVisitor<'a, 'b> {
+impl<'a, 'b> Visitor<'a> for BuildReducedGraphVisitor<'a, 'b> {
method!(visit_impl_item: ast::ImplItem, ast::ImplItemKind::Macro, walk_impl_item);
method!(visit_expr: ast::Expr, ast::ExprKind::Mac, walk_expr);
method!(visit_pat: ast::Pat, ast::PatKind::Mac, walk_pat);
method!(visit_ty: ast::Ty, ast::TyKind::Mac, walk_ty);
- fn visit_item(&mut self, item: &Item) {
+ fn visit_item(&mut self, item: &'a Item) {
let macro_use = match item.node {
ItemKind::Mac(..) if item.id == ast::DUMMY_NODE_ID => return, // Scope placeholder
ItemKind::Mac(..) => {
}
}
- fn visit_stmt(&mut self, stmt: &ast::Stmt) {
+ fn visit_stmt(&mut self, stmt: &'a ast::Stmt) {
if let ast::StmtKind::Mac(..) = stmt.node {
self.legacy_scope = LegacyScope::Expansion(self.visit_invoc(stmt.id));
} else {
}
}
- fn visit_foreign_item(&mut self, foreign_item: &ForeignItem) {
+ fn visit_foreign_item(&mut self, foreign_item: &'a ForeignItem) {
self.resolver.build_reduced_graph_for_foreign_item(foreign_item, self.expansion);
visit::walk_foreign_item(self, foreign_item);
}
- fn visit_block(&mut self, block: &Block) {
+ fn visit_block(&mut self, block: &'a Block) {
let (parent, legacy_scope) = (self.resolver.current_module, self.legacy_scope);
self.resolver.build_reduced_graph_for_block(block);
visit::walk_block(self, block);
self.legacy_scope = legacy_scope;
}
- fn visit_trait_item(&mut self, item: &TraitItem) {
+ fn visit_trait_item(&mut self, item: &'a TraitItem) {
let parent = self.resolver.current_module;
let def_id = parent.def_id().unwrap();
}
}
-impl<'a, 'b> Visitor for UnusedImportCheckVisitor<'a, 'b> {
- fn visit_item(&mut self, item: &ast::Item) {
+impl<'a, 'b> Visitor<'a> for UnusedImportCheckVisitor<'a, 'b> {
+ fn visit_item(&mut self, item: &'a ast::Item) {
visit::walk_item(self, item);
// Ignore is_public import statements because there's no way to be sure
// whether they're used or not. Also ignore imports with a dummy span
}
ViewPathList(_, ref list) => {
+ if list.is_empty() {
+ self.unused_imports
+ .entry(item.id)
+ .or_insert_with(NodeMap)
+ .insert(item.id, item.span);
+ }
for i in list {
self.check_import(item.id, i.node.id, i.span);
}
}
}
-impl<'a> Visitor for Resolver<'a> {
- fn visit_item(&mut self, item: &Item) {
+impl<'a, 'tcx> Visitor<'tcx> for Resolver<'a> {
+ fn visit_item(&mut self, item: &'tcx Item) {
self.resolve_item(item);
}
- fn visit_arm(&mut self, arm: &Arm) {
+ fn visit_arm(&mut self, arm: &'tcx Arm) {
self.resolve_arm(arm);
}
- fn visit_block(&mut self, block: &Block) {
+ fn visit_block(&mut self, block: &'tcx Block) {
self.resolve_block(block);
}
- fn visit_expr(&mut self, expr: &Expr) {
+ fn visit_expr(&mut self, expr: &'tcx Expr) {
self.resolve_expr(expr, None);
}
- fn visit_local(&mut self, local: &Local) {
+ fn visit_local(&mut self, local: &'tcx Local) {
self.resolve_local(local);
}
- fn visit_ty(&mut self, ty: &Ty) {
+ fn visit_ty(&mut self, ty: &'tcx Ty) {
self.resolve_type(ty);
}
- fn visit_poly_trait_ref(&mut self, tref: &ast::PolyTraitRef, m: &ast::TraitBoundModifier) {
+ fn visit_poly_trait_ref(&mut self,
+ tref: &'tcx ast::PolyTraitRef,
+ m: &'tcx ast::TraitBoundModifier) {
let ast::Path { ref segments, span, global } = tref.trait_ref.path;
let path: Vec<_> = segments.iter().map(|seg| seg.identifier).collect();
let def = self.resolve_trait_reference(&path, global, None, span);
visit::walk_poly_trait_ref(self, tref, m);
}
fn visit_variant(&mut self,
- variant: &ast::Variant,
- generics: &Generics,
+ variant: &'tcx ast::Variant,
+ generics: &'tcx Generics,
item_id: ast::NodeId) {
if let Some(ref dis_expr) = variant.node.disr_expr {
// resolve the discriminator expr as a constant
item_id,
variant.span);
}
- fn visit_foreign_item(&mut self, foreign_item: &ForeignItem) {
+ fn visit_foreign_item(&mut self, foreign_item: &'tcx ForeignItem) {
let type_parameters = match foreign_item.node {
ForeignItemKind::Fn(_, ref generics) => {
HasTypeParameters(generics, ItemRibKind)
});
}
fn visit_fn(&mut self,
- function_kind: FnKind,
- declaration: &FnDecl,
+ function_kind: FnKind<'tcx>,
+ declaration: &'tcx FnDecl,
_: Span,
node_id: NodeId) {
let rib_kind = match function_kind {
}
}
- fn process_formals(&mut self, formals: &Vec<ast::Arg>, qualname: &str) {
+ fn process_formals(&mut self, formals: &'l [ast::Arg], qualname: &str) {
for arg in formals {
self.visit_pat(&arg.pat);
let mut collector = PathCollector::new();
}
fn process_method(&mut self,
- sig: &ast::MethodSig,
- body: Option<&ast::Block>,
+ sig: &'l ast::MethodSig,
+ body: Option<&'l ast::Block>,
id: ast::NodeId,
name: ast::Name,
vis: Visibility,
- attrs: &[Attribute],
+ attrs: &'l [Attribute],
span: Span) {
debug!("process_method: {}:{}", id, name);
}
}
- fn process_trait_ref(&mut self, trait_ref: &ast::TraitRef) {
+ fn process_trait_ref(&mut self, trait_ref: &'l ast::TraitRef) {
let trait_ref_data = self.save_ctxt.get_trait_ref_data(trait_ref, self.cur_scope);
if let Some(trait_ref_data) = trait_ref_data {
if !self.span.filter_generated(Some(trait_ref_data.span), trait_ref.path.span) {
// Dump generic params bindings, then visit_generics
fn process_generic_params(&mut self,
- generics: &ast::Generics,
+ generics: &'l ast::Generics,
full_span: Span,
prefix: &str,
id: NodeId) {
}
fn process_fn(&mut self,
- item: &ast::Item,
- decl: &ast::FnDecl,
- ty_params: &ast::Generics,
- body: &ast::Block) {
+ item: &'l ast::Item,
+ decl: &'l ast::FnDecl,
+ ty_params: &'l ast::Generics,
+ body: &'l ast::Block) {
if let Some(fn_data) = self.save_ctxt.get_item_data(item) {
down_cast_data!(fn_data, FunctionData, item.span);
if !self.span.filter_generated(Some(fn_data.span), item.span) {
self.nest(item.id, |v| v.visit_block(&body));
}
- fn process_static_or_const_item(&mut self, item: &ast::Item, typ: &ast::Ty, expr: &ast::Expr) {
+ fn process_static_or_const_item(&mut self,
+ item: &'l ast::Item,
+ typ: &'l ast::Ty,
+ expr: &'l ast::Expr) {
if let Some(var_data) = self.save_ctxt.get_item_data(item) {
down_cast_data!(var_data, VariableData, item.span);
if !self.span.filter_generated(Some(var_data.span), item.span) {
id: ast::NodeId,
name: ast::Name,
span: Span,
- typ: &ast::Ty,
- expr: &ast::Expr,
+ typ: &'l ast::Ty,
+ expr: &'l ast::Expr,
parent_id: DefId,
vis: Visibility,
- attrs: &[Attribute]) {
+ attrs: &'l [Attribute]) {
let qualname = format!("::{}", self.tcx.node_path_str(id));
let sub_span = self.span.sub_span_after_keyword(span, keywords::Const);
// FIXME tuple structs should generate tuple-specific data.
fn process_struct(&mut self,
- item: &ast::Item,
- def: &ast::VariantData,
- ty_params: &ast::Generics) {
+ item: &'l ast::Item,
+ def: &'l ast::VariantData,
+ ty_params: &'l ast::Generics) {
let name = item.ident.to_string();
let qualname = format!("::{}", self.tcx.node_path_str(item.id));
}
fn process_enum(&mut self,
- item: &ast::Item,
- enum_definition: &ast::EnumDef,
- ty_params: &ast::Generics) {
+ item: &'l ast::Item,
+ enum_definition: &'l ast::EnumDef,
+ ty_params: &'l ast::Generics) {
let enum_data = self.save_ctxt.get_item_data(item);
let enum_data = match enum_data {
None => return,
}
fn process_impl(&mut self,
- item: &ast::Item,
- type_parameters: &ast::Generics,
- trait_ref: &Option<ast::TraitRef>,
- typ: &ast::Ty,
- impl_items: &[ast::ImplItem]) {
+ item: &'l ast::Item,
+ type_parameters: &'l ast::Generics,
+ trait_ref: &'l Option<ast::TraitRef>,
+ typ: &'l ast::Ty,
+ impl_items: &'l [ast::ImplItem]) {
let mut has_self_ref = false;
if let Some(impl_data) = self.save_ctxt.get_item_data(item) {
down_cast_data!(impl_data, ImplData, item.span);
}
fn process_trait(&mut self,
- item: &ast::Item,
- generics: &ast::Generics,
- trait_refs: &ast::TyParamBounds,
- methods: &[ast::TraitItem]) {
+ item: &'l ast::Item,
+ generics: &'l ast::Generics,
+ trait_refs: &'l ast::TyParamBounds,
+ methods: &'l [ast::TraitItem]) {
let name = item.ident.to_string();
let qualname = format!("::{}", self.tcx.node_path_str(item.id));
let mut val = name.clone();
}
fn process_struct_lit(&mut self,
- ex: &ast::Expr,
- path: &ast::Path,
- fields: &Vec<ast::Field>,
- variant: &ty::VariantDef,
- base: &Option<P<ast::Expr>>) {
+ ex: &'l ast::Expr,
+ path: &'l ast::Path,
+ fields: &'l [ast::Field],
+ variant: &'l ty::VariantDef,
+ base: &'l Option<P<ast::Expr>>) {
self.write_sub_paths_truncated(path, false);
if let Some(struct_lit_data) = self.save_ctxt.get_expr_data(ex) {
walk_list!(self, visit_expr, base);
}
- fn process_method_call(&mut self, ex: &ast::Expr, args: &Vec<P<ast::Expr>>) {
+ fn process_method_call(&mut self, ex: &'l ast::Expr, args: &'l [P<ast::Expr>]) {
if let Some(mcd) = self.save_ctxt.get_expr_data(ex) {
down_cast_data!(mcd, MethodCallData, ex.span);
if !self.span.filter_generated(Some(mcd.span), ex.span) {
walk_list!(self, visit_expr, args);
}
- fn process_pat(&mut self, p: &ast::Pat) {
+ fn process_pat(&mut self, p: &'l ast::Pat) {
match p.node {
PatKind::Struct(ref path, ref fields, _) => {
visit::walk_path(self, path);
}
- fn process_var_decl(&mut self, p: &ast::Pat, value: String) {
+ fn process_var_decl(&mut self, p: &'l ast::Pat, value: String) {
// The local could declare multiple new vars, we must walk the
// pattern and collect them all.
let mut collector = PathCollector::new();
}
}
- fn process_trait_item(&mut self, trait_item: &ast::TraitItem, trait_id: DefId) {
+ fn process_trait_item(&mut self, trait_item: &'l ast::TraitItem, trait_id: DefId) {
self.process_macro_use(trait_item.span, trait_item.id);
match trait_item.node {
ast::TraitItemKind::Const(ref ty, Some(ref expr)) => {
}
}
- fn process_impl_item(&mut self, impl_item: &ast::ImplItem, impl_id: DefId) {
+ fn process_impl_item(&mut self, impl_item: &'l ast::ImplItem, impl_id: DefId) {
self.process_macro_use(impl_item.span, impl_item.id);
match impl_item.node {
ast::ImplItemKind::Const(ref ty, ref expr) => {
}
}
-impl<'l, 'tcx: 'l, 'll, D: Dump +'ll> Visitor for DumpVisitor<'l, 'tcx, 'll, D> {
- fn visit_item(&mut self, item: &ast::Item) {
+impl<'l, 'tcx: 'l, 'll, D: Dump +'ll> Visitor<'l> for DumpVisitor<'l, 'tcx, 'll, D> {
+ fn visit_item(&mut self, item: &'l ast::Item) {
use syntax::ast::ItemKind::*;
self.process_macro_use(item.span, item.id);
match item.node {
}
}
- fn visit_generics(&mut self, generics: &ast::Generics) {
+ fn visit_generics(&mut self, generics: &'l ast::Generics) {
for param in generics.ty_params.iter() {
for bound in param.bounds.iter() {
if let ast::TraitTyParamBound(ref trait_ref, _) = *bound {
}
}
- fn visit_ty(&mut self, t: &ast::Ty) {
+ fn visit_ty(&mut self, t: &'l ast::Ty) {
self.process_macro_use(t.span, t.id);
match t.node {
ast::TyKind::Path(_, ref path) => {
}
}
- fn visit_expr(&mut self, ex: &ast::Expr) {
+ fn visit_expr(&mut self, ex: &'l ast::Expr) {
self.process_macro_use(ex.span, ex.id);
match ex.node {
ast::ExprKind::Call(ref _f, ref _args) => {
}
}
- fn visit_mac(&mut self, mac: &ast::Mac) {
+ fn visit_mac(&mut self, mac: &'l ast::Mac) {
// These shouldn't exist in the AST at this point, log a span bug.
span_bug!(mac.span, "macro invocation should have been expanded out of AST");
}
- fn visit_pat(&mut self, p: &ast::Pat) {
+ fn visit_pat(&mut self, p: &'l ast::Pat) {
self.process_macro_use(p.span, p.id);
self.process_pat(p);
}
- fn visit_arm(&mut self, arm: &ast::Arm) {
+ fn visit_arm(&mut self, arm: &'l ast::Arm) {
let mut collector = PathCollector::new();
for pattern in &arm.pats {
// collect paths from the arm's patterns
self.visit_expr(&arm.body);
}
- fn visit_stmt(&mut self, s: &ast::Stmt) {
+ fn visit_stmt(&mut self, s: &'l ast::Stmt) {
self.process_macro_use(s.span, s.id);
visit::walk_stmt(self, s)
}
- fn visit_local(&mut self, l: &ast::Local) {
+ fn visit_local(&mut self, l: &'l ast::Local) {
self.process_macro_use(l.span, l.id);
let value = l.init.as_ref().map(|i| self.span.snippet(i.span)).unwrap_or(String::new());
self.process_var_decl(&l.pat, value);
}
}
-impl Visitor for PathCollector {
+impl<'a> Visitor<'a> for PathCollector {
fn visit_pat(&mut self, p: &ast::Pat) {
match p.node {
PatKind::Struct(ref path, ..) => {
Cdecl => llvm::CCallConv,
};
- let mut inputs = &sig.inputs[..];
+ let mut inputs = sig.inputs();
let extra_args = if abi == RustCall {
assert!(!sig.variadic && extra_args.is_empty());
- match inputs[inputs.len() - 1].sty {
+ match sig.inputs().last().unwrap().sty {
ty::TyTuple(ref tupled_arguments) => {
- inputs = &inputs[..inputs.len() - 1];
+ inputs = &sig.inputs()[0..sig.inputs().len() - 1];
&tupled_arguments[..]
}
_ => {
}
};
- let ret_ty = sig.output;
+ let ret_ty = sig.output();
let mut ret = arg_of(ret_ty, true);
if !type_is_fat_ptr(ccx.tcx(), ret_ty) {
};
// Fat pointers are returned by-value.
if !self.ret.is_ignore() {
- if !type_is_fat_ptr(ccx.tcx(), sig.output) {
+ if !type_is_fat_ptr(ccx.tcx(), sig.output()) {
fixup(&mut self.ret);
}
}
use std::process::Command;
use context::SharedCrateContext;
-use monomorphize::Instance;
use back::archive;
+use back::symbol_export::{self, ExportedSymbols};
use middle::dependency_format::Linkage;
-use rustc::hir::def_id::CrateNum;
+use rustc::hir::def_id::{LOCAL_CRATE, CrateNum};
use session::Session;
use session::config::CrateType;
use session::config;
impl<'a, 'tcx> LinkerInfo {
pub fn new(scx: &SharedCrateContext<'a, 'tcx>,
- reachable: &[String]) -> LinkerInfo {
+ exports: &ExportedSymbols) -> LinkerInfo {
LinkerInfo {
exports: scx.sess().crate_types.borrow().iter().map(|&c| {
- (c, exported_symbols(scx, reachable, c))
+ (c, exported_symbols(scx, exports, c))
}).collect(),
}
}
let mut arg = OsString::new();
let path = tmpdir.join("list");
- if self.sess.target.target.options.is_like_solaris {
+ debug!("EXPORTED SYMBOLS:");
+
+ if self.sess.target.target.options.is_like_osx {
+ // Write a plain, newline-separated list of symbols
let res = (|| -> io::Result<()> {
let mut f = BufWriter::new(File::create(&path)?);
- writeln!(f, "{{\n global:")?;
for sym in self.info.exports[&crate_type].iter() {
- writeln!(f, " {};", sym)?;
+ debug!(" _{}", sym);
+ writeln!(f, "_{}", sym)?;
}
- writeln!(f, "\n local:\n *;\n}};")?;
Ok(())
})();
if let Err(e) = res {
- self.sess.fatal(&format!("failed to write version script: {}", e));
+ self.sess.fatal(&format!("failed to write lib.def file: {}", e));
}
-
- arg.push("-Wl,-M,");
- arg.push(&path);
} else {
- let prefix = if self.sess.target.target.options.is_like_osx {
- "_"
- } else {
- ""
- };
+ // Write an LD version script
let res = (|| -> io::Result<()> {
let mut f = BufWriter::new(File::create(&path)?);
+ writeln!(f, "{{\n global:")?;
for sym in self.info.exports[&crate_type].iter() {
- writeln!(f, "{}{}", prefix, sym)?;
+ debug!(" {};", sym);
+ writeln!(f, " {};", sym)?;
}
+ writeln!(f, "\n local:\n *;\n}};")?;
Ok(())
})();
if let Err(e) = res {
- self.sess.fatal(&format!("failed to write lib.def file: {}", e));
- }
- if self.sess.target.target.options.is_like_osx {
- arg.push("-Wl,-exported_symbols_list,");
- } else {
- arg.push("-Wl,--retain-symbols-file=");
+ self.sess.fatal(&format!("failed to write version script: {}", e));
}
- arg.push(&path);
}
+ if self.sess.target.target.options.is_like_osx {
+ arg.push("-Wl,-exported_symbols_list,");
+ } else if self.sess.target.target.options.is_like_solaris {
+ arg.push("-Wl,-M,");
+ } else {
+ arg.push("-Wl,--version-script=");
+ }
+
+ arg.push(&path);
self.cmd.arg(arg);
}
}
fn gc_sections(&mut self, _keep_metadata: bool) {
- self.cmd.arg("/OPT:REF,ICF");
+ // MSVC's ICF (Identical COMDAT Folding) link optimization is
+ // slow for Rust and thus we disable it by default when not
+ // doing an optimized build.
+ if self.sess.opts.optimize != config::OptLevel::No {
+ self.cmd.arg("/OPT:REF,ICF");
+ } else {
+ // It is necessary to specify NOICF here, because /OPT:REF
+ // implies ICF by default.
+ self.cmd.arg("/OPT:REF,NOICF");
+ }
}
fn link_dylib(&mut self, lib: &str) {
}
fn exported_symbols(scx: &SharedCrateContext,
- reachable: &[String],
+ exported_symbols: &ExportedSymbols,
crate_type: CrateType)
-> Vec<String> {
- // See explanation in GnuLinker::export_symbols, for
- // why we don't ever need dylib symbols on non-MSVC.
- if crate_type == CrateType::CrateTypeDylib ||
- crate_type == CrateType::CrateTypeProcMacro {
- if !scx.sess().target.target.options.is_like_msvc {
- return vec![];
- }
- }
-
- let mut symbols = reachable.to_vec();
+ let export_threshold = symbol_export::crate_export_threshold(crate_type);
- // If we're producing anything other than a dylib then the `reachable` array
- // above is the exhaustive set of symbols we should be exporting.
- //
- // For dylibs, however, we need to take a look at how all upstream crates
- // are linked into this dynamic library. For all statically linked
- // libraries we take all their reachable symbols and emit them as well.
- if crate_type != CrateType::CrateTypeDylib {
- return symbols
- }
+ let mut symbols = Vec::new();
+ exported_symbols.for_each_exported_symbol(LOCAL_CRATE, export_threshold, |name, _| {
+ symbols.push(name.to_owned());
+ });
- let cstore = &scx.sess().cstore;
let formats = scx.sess().dependency_formats.borrow();
let deps = formats[&crate_type].iter();
- symbols.extend(deps.enumerate().filter_map(|(i, f)| {
- if *f == Linkage::Static {
- Some(CrateNum::new(i + 1))
- } else {
- None
+
+ for (index, dep_format) in deps.enumerate() {
+ let cnum = CrateNum::new(index + 1);
+ // For each dependency that we are linking to statically ...
+ if *dep_format == Linkage::Static {
+ // ... we add its symbol list to our export list.
+ exported_symbols.for_each_exported_symbol(cnum, export_threshold, |name, _| {
+ symbols.push(name.to_owned());
+ })
}
- }).flat_map(|cnum| {
- cstore.reachable_ids(cnum)
- }).map(|did| -> String {
- Instance::mono(scx, did).symbol_name(scx)
- }));
+ }
+
symbols
}
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-use super::link;
-use super::write;
+use back::link;
+use back::write;
+use back::symbol_export::{self, ExportedSymbols};
use rustc::session::{self, config};
use llvm;
use llvm::archive_ro::ArchiveRO;
use llvm::{ModuleRef, TargetMachineRef, True, False};
use rustc::util::common::time;
use rustc::util::common::path2cstr;
+use rustc::hir::def_id::LOCAL_CRATE;
use back::write::{ModuleConfig, with_llvm_pmb};
use libc;
use std::ffi::CString;
use std::path::Path;
-pub fn run(sess: &session::Session, llmod: ModuleRef,
- tm: TargetMachineRef, reachable: &[String],
+pub fn crate_type_allows_lto(crate_type: config::CrateType) -> bool {
+ match crate_type {
+ config::CrateTypeExecutable |
+ config::CrateTypeStaticlib |
+ config::CrateTypeCdylib => true,
+
+ config::CrateTypeDylib |
+ config::CrateTypeRlib |
+ config::CrateTypeMetadata |
+ config::CrateTypeProcMacro => false,
+ }
+}
+
+pub fn run(sess: &session::Session,
+ llmod: ModuleRef,
+ tm: TargetMachineRef,
+ exported_symbols: &ExportedSymbols,
config: &ModuleConfig,
temp_no_opt_bc_filename: &Path) {
if sess.opts.cg.prefer_dynamic {
// Make sure we actually can run LTO
for crate_type in sess.crate_types.borrow().iter() {
- match *crate_type {
- config::CrateTypeExecutable |
- config::CrateTypeCdylib |
- config::CrateTypeStaticlib => {}
- _ => {
- sess.fatal("lto can only be run for executables and \
+ if !crate_type_allows_lto(*crate_type) {
+ sess.fatal("lto can only be run for executables, cdylibs and \
static library outputs");
- }
}
}
+ let export_threshold =
+ symbol_export::crates_export_threshold(&sess.crate_types.borrow()[..]);
+
+ let symbol_filter = &|&(ref name, level): &(String, _)| {
+ if symbol_export::is_below_threshold(level, export_threshold) {
+ let mut bytes = Vec::with_capacity(name.len() + 1);
+ bytes.extend(name.bytes());
+ Some(CString::new(bytes).unwrap())
+ } else {
+ None
+ }
+ };
+
+ let mut symbol_white_list: Vec<CString> = exported_symbols
+ .exported_symbols(LOCAL_CRATE)
+ .iter()
+ .filter_map(symbol_filter)
+ .collect();
+
// For each of our upstream dependencies, find the corresponding rlib and
// load the bitcode from the archive. Then merge it into the current LLVM
// module that we've got.
return;
}
+ symbol_white_list.extend(
+ exported_symbols.exported_symbols(cnum)
+ .iter()
+ .filter_map(symbol_filter));
+
let archive = ArchiveRO::open(&path).expect("wanted an rlib");
let bytecodes = archive.iter().filter_map(|child| {
child.ok().and_then(|c| c.name().map(|name| (name, c)))
}
});
- // Internalize everything but the reachable symbols of the current module
- let cstrs: Vec<CString> = reachable.iter().map(|s| {
- CString::new(s.clone()).unwrap()
- }).collect();
- let arr: Vec<*const libc::c_char> = cstrs.iter().map(|c| c.as_ptr()).collect();
+ // Internalize everything but the exported symbols of the current module
+ let arr: Vec<*const libc::c_char> = symbol_white_list.iter()
+ .map(|c| c.as_ptr())
+ .collect();
let ptr = arr.as_ptr();
unsafe {
llvm::LLVMRustRunRestrictionPass(llmod,
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+use context::SharedCrateContext;
+use monomorphize::Instance;
+use symbol_map::SymbolMap;
+use util::nodemap::FxHashMap;
+use rustc::hir::def_id::{DefId, CrateNum, LOCAL_CRATE};
+use rustc::session::config;
+use syntax::attr;
+use trans_item::TransItem;
+
+/// The `SymbolExportLevel` of a symbol specifies from which kinds of crates
+/// the symbol will be exported. `C` symbols will be exported from any
+/// kind of crate, including cdylibs which export very few things.
+/// `Rust` symbols will only be exported if the crate produced is a Rust
+/// dylib.
+#[derive(Eq, PartialEq, Debug, Copy, Clone)]
+pub enum SymbolExportLevel {
+ C,
+ Rust,
+}
+
+/// The set of symbols exported from each crate in the crate graph.
+pub struct ExportedSymbols {
+ exports: FxHashMap<CrateNum, Vec<(String, SymbolExportLevel)>>,
+}
+
+impl ExportedSymbols {
+
+ pub fn empty() -> ExportedSymbols {
+ ExportedSymbols {
+ exports: FxHashMap(),
+ }
+ }
+
+ pub fn compute_from<'a, 'tcx>(scx: &SharedCrateContext<'a, 'tcx>,
+ symbol_map: &SymbolMap<'tcx>)
+ -> ExportedSymbols {
+ let mut local_crate: Vec<_> = scx
+ .exported_symbols()
+ .iter()
+ .map(|&node_id| {
+ scx.tcx().map.local_def_id(node_id)
+ })
+ .map(|def_id| {
+ (symbol_for_def_id(scx, def_id, symbol_map),
+ export_level(scx, def_id))
+ })
+ .collect();
+
+ if scx.sess().entry_fn.borrow().is_some() {
+ local_crate.push(("main".to_string(), SymbolExportLevel::C));
+ }
+
+ if let Some(id) = scx.sess().derive_registrar_fn.get() {
+ let svh = &scx.link_meta().crate_hash;
+ let def_id = scx.tcx().map.local_def_id(id);
+ let idx = def_id.index;
+ let registrar = scx.sess().generate_derive_registrar_symbol(svh, idx);
+ local_crate.push((registrar, SymbolExportLevel::C));
+ }
+
+ if scx.sess().crate_types.borrow().contains(&config::CrateTypeDylib) {
+ local_crate.push((scx.metadata_symbol_name(),
+ SymbolExportLevel::Rust));
+ }
+
+ let mut exports = FxHashMap();
+ exports.insert(LOCAL_CRATE, local_crate);
+
+ for cnum in scx.sess().cstore.crates() {
+ debug_assert!(cnum != LOCAL_CRATE);
+
+ if scx.sess().cstore.plugin_registrar_fn(cnum).is_some() ||
+ scx.sess().cstore.derive_registrar_fn(cnum).is_some() {
+ continue;
+ }
+
+ let crate_exports = scx
+ .sess()
+ .cstore
+ .exported_symbols(cnum)
+ .iter()
+ .map(|&def_id| {
+ debug!("EXTERN-SYMBOL: {:?}", def_id);
+ let name = Instance::mono(scx, def_id).symbol_name(scx);
+ (name, export_level(scx, def_id))
+ })
+ .collect();
+
+ exports.insert(cnum, crate_exports);
+ }
+
+ return ExportedSymbols {
+ exports: exports
+ };
+
+ fn export_level(scx: &SharedCrateContext,
+ sym_def_id: DefId)
+ -> SymbolExportLevel {
+ let attrs = scx.tcx().get_attrs(sym_def_id);
+ if attr::contains_extern_indicator(scx.sess().diagnostic(), &attrs) {
+ SymbolExportLevel::C
+ } else {
+ SymbolExportLevel::Rust
+ }
+ }
+ }
+
+ pub fn exported_symbols(&self,
+ cnum: CrateNum)
+ -> &[(String, SymbolExportLevel)] {
+ match self.exports.get(&cnum) {
+ Some(exports) => &exports[..],
+ None => &[]
+ }
+ }
+
+ pub fn for_each_exported_symbol<F>(&self,
+ cnum: CrateNum,
+ export_threshold: SymbolExportLevel,
+ mut f: F)
+ where F: FnMut(&str, SymbolExportLevel)
+ {
+ for &(ref name, export_level) in self.exported_symbols(cnum) {
+ if is_below_threshold(export_level, export_threshold) {
+ f(&name[..], export_level)
+ }
+ }
+ }
+}
+
+pub fn crate_export_threshold(crate_type: config::CrateType)
+ -> SymbolExportLevel {
+ match crate_type {
+ config::CrateTypeExecutable |
+ config::CrateTypeStaticlib |
+ config::CrateTypeProcMacro |
+ config::CrateTypeCdylib => SymbolExportLevel::C,
+ config::CrateTypeRlib |
+ config::CrateTypeMetadata |
+ config::CrateTypeDylib => SymbolExportLevel::Rust,
+ }
+}
+
+pub fn crates_export_threshold(crate_types: &[config::CrateType])
+ -> SymbolExportLevel {
+ if crate_types.iter().any(|&crate_type| {
+ crate_export_threshold(crate_type) == SymbolExportLevel::Rust
+ }) {
+ SymbolExportLevel::Rust
+ } else {
+ SymbolExportLevel::C
+ }
+}
+
+pub fn is_below_threshold(level: SymbolExportLevel,
+ threshold: SymbolExportLevel)
+ -> bool {
+ if threshold == SymbolExportLevel::Rust {
+ // We export everything from Rust dylibs
+ true
+ } else {
+ level == SymbolExportLevel::C
+ }
+}
+
+fn symbol_for_def_id<'a, 'tcx>(scx: &SharedCrateContext<'a, 'tcx>,
+ def_id: DefId,
+ symbol_map: &SymbolMap<'tcx>)
+ -> String {
+ // Just try to look things up in the symbol map. If nothing's there, we
+ // recompute.
+ if let Some(node_id) = scx.tcx().map.as_local_node_id(def_id) {
+ if let Some(sym) = symbol_map.get(TransItem::Static(node_id)) {
+ return sym.to_owned();
+ }
+ }
+
+ let instance = Instance::mono(scx, def_id);
+
+ symbol_map.get(TransItem::Fn(instance))
+ .map(str::to_owned)
+ .unwrap_or_else(|| instance.symbol_name(scx))
+}
use back::lto;
use back::link::{get_linker, remove};
+use back::symbol_export::ExportedSymbols;
use rustc_incremental::{save_trans_partition, in_incr_comp_dir};
use session::config::{OutputFilenames, OutputTypes, Passes, SomePasses, AllPasses};
use session::Session;
struct CodegenContext<'a> {
// Extra resources used for LTO: (sess, reachable). This will be `None`
// when running in a worker thread.
- lto_ctxt: Option<(&'a Session, &'a [String])>,
+ lto_ctxt: Option<(&'a Session, &'a ExportedSymbols)>,
// Handler to use for diagnostics produced during codegen.
handler: &'a Handler,
// LLVM passes added by plugins.
}
impl<'a> CodegenContext<'a> {
- fn new_with_session(sess: &'a Session, reachable: &'a [String]) -> CodegenContext<'a> {
+ fn new_with_session(sess: &'a Session,
+ exported_symbols: &'a ExportedSymbols)
+ -> CodegenContext<'a> {
CodegenContext {
- lto_ctxt: Some((sess, reachable)),
+ lto_ctxt: Some((sess, exported_symbols)),
handler: sess.diagnostic(),
plugin_passes: sess.plugin_llvm_passes.borrow().clone(),
remark: sess.opts.cg.remark.clone(),
llvm::LLVMDisposePassManager(mpm);
match cgcx.lto_ctxt {
- Some((sess, reachable)) if sess.lto() => {
+ Some((sess, exported_symbols)) if sess.lto() => {
time(sess.time_passes(), "all lto passes", || {
let temp_no_opt_bc_filename =
output_names.temp_path_ext("no-opt.lto.bc", module_name);
lto::run(sess,
llmod,
tm,
- reachable,
+ exported_symbols,
&config,
&temp_no_opt_bc_filename);
});
// potentially create hundreds of them).
let num_workers = work_items.len() - 1;
if num_workers == 1 {
- run_work_singlethreaded(sess, &trans.reachable, work_items);
+ run_work_singlethreaded(sess, &trans.exported_symbols, work_items);
} else {
run_work_multithreaded(sess, work_items, num_workers);
}
}
fn run_work_singlethreaded(sess: &Session,
- reachable: &[String],
+ exported_symbols: &ExportedSymbols,
work_items: Vec<WorkItem>) {
- let cgcx = CodegenContext::new_with_session(sess, reachable);
+ let cgcx = CodegenContext::new_with_session(sess, exported_symbols);
// Since we're running single-threaded, we can pass the session to
// the proc, allowing `optimize_and_codegen` to perform LTO.
use assert_module_sources;
use back::link;
use back::linker::LinkerInfo;
+use back::symbol_export::{self, ExportedSymbols};
use llvm::{Linkage, ValueRef, Vector, get_param};
use llvm;
-use rustc::hir::def::Def;
-use rustc::hir::def_id::DefId;
+use rustc::hir::def_id::{DefId, LOCAL_CRATE};
use middle::lang_items::{LangItem, ExchangeMallocFnLangItem, StartFnLangItem};
use rustc::ty::subst::Substs;
use rustc::traits;
use arena::TypedArena;
use libc::c_uint;
use std::ffi::{CStr, CString};
-use std::borrow::Cow;
use std::cell::{Cell, RefCell};
use std::ptr;
use std::rc::Rc;
let dest_val = adt::MaybeSizedValue::sized(dest); // Can return unsized value
let mut llarg_idx = fcx.fn_ty.ret.is_indirect() as usize;
let mut arg_idx = 0;
- for (i, arg_ty) in sig.inputs.into_iter().enumerate() {
- let lldestptr = adt::trans_field_ptr(bcx, sig.output, dest_val, Disr::from(disr), i);
+ for (i, arg_ty) in sig.inputs().iter().enumerate() {
+ let lldestptr = adt::trans_field_ptr(bcx, sig.output(), dest_val, Disr::from(disr), i);
let arg = &fcx.fn_ty.args[arg_idx];
arg_idx += 1;
let b = &bcx.build();
arg.store_fn_arg(b, &mut llarg_idx, lldestptr);
}
}
- adt::trans_set_discr(bcx, sig.output, dest, disr);
+ adt::trans_set_discr(bcx, sig.output(), dest, disr);
}
fcx.finish(bcx, DebugLoc::None);
}
fn write_metadata(cx: &SharedCrateContext,
- reachable_ids: &NodeSet) -> Vec<u8> {
+ exported_symbols: &NodeSet) -> Vec<u8> {
use flate;
#[derive(PartialEq, Eq, PartialOrd, Ord)]
let metadata = cstore.encode_metadata(cx.tcx(),
cx.export_map(),
cx.link_meta(),
- reachable_ids);
+ exported_symbols);
if kind == MetadataKind::Uncompressed {
return metadata;
}
fn internalize_symbols<'a, 'tcx>(sess: &Session,
ccxs: &CrateContextList<'a, 'tcx>,
symbol_map: &SymbolMap<'tcx>,
- reachable: &FxHashSet<&str>) {
+ exported_symbols: &ExportedSymbols) {
+ let export_threshold =
+ symbol_export::crates_export_threshold(&sess.crate_types.borrow()[..]);
+
+ let exported_symbols = exported_symbols
+ .exported_symbols(LOCAL_CRATE)
+ .iter()
+ .filter(|&&(_, export_level)| {
+ symbol_export::is_below_threshold(export_level, export_threshold)
+ })
+ .map(|&(ref name, _)| &name[..])
+ .collect::<FxHashSet<&str>>();
+
let scx = ccxs.shared();
let tcx = scx.tcx();
- // In incr. comp. mode, we can't necessarily see all refs since we
- // don't generate LLVM IR for reused modules, so skip this
- // step. Later we should get smarter.
- if sess.opts.debugging_opts.incremental.is_some() {
- return;
- }
+ let incr_comp = sess.opts.debugging_opts.incremental.is_some();
// 'unsafe' because we are holding on to CStr's from the LLVM module within
// this block.
let mut referenced_somewhere = FxHashSet();
// Collect all symbols that need to stay externally visible because they
- // are referenced via a declaration in some other codegen unit.
- for ccx in ccxs.iter_need_trans() {
- for val in iter_globals(ccx.llmod()).chain(iter_functions(ccx.llmod())) {
- let linkage = llvm::LLVMRustGetLinkage(val);
- // We only care about external declarations (not definitions)
- // and available_externally definitions.
- let is_available_externally = linkage == llvm::Linkage::AvailableExternallyLinkage;
- let is_decl = llvm::LLVMIsDeclaration(val) != 0;
-
- if is_decl || is_available_externally {
- let symbol_name = CStr::from_ptr(llvm::LLVMGetValueName(val));
- referenced_somewhere.insert(symbol_name);
+ // are referenced via a declaration in some other codegen unit. In
+ // incremental compilation, we don't need to collect these references.
+ // See below for more information.
+ if !incr_comp {
+ for ccx in ccxs.iter_need_trans() {
+ for val in iter_globals(ccx.llmod()).chain(iter_functions(ccx.llmod())) {
+ let linkage = llvm::LLVMRustGetLinkage(val);
+ // We only care about external declarations (not definitions)
+ // and available_externally definitions.
+ let is_available_externally =
+ linkage == llvm::Linkage::AvailableExternallyLinkage;
+ let is_decl = llvm::LLVMIsDeclaration(val) == llvm::True;
+
+ if is_decl || is_available_externally {
+ let symbol_name = CStr::from_ptr(llvm::LLVMGetValueName(val));
+ referenced_somewhere.insert(symbol_name);
+ }
}
}
}
// Also collect all symbols for which we cannot adjust linkage, because
- // it is fixed by some directive in the source code (e.g. #[no_mangle]).
- let linkage_fixed_explicitly: FxHashSet<_> = scx
- .translation_items()
- .borrow()
- .iter()
- .cloned()
- .filter(|trans_item|{
- trans_item.explicit_linkage(tcx).is_some()
- })
- .map(|trans_item| symbol_map.get_or_compute(scx, trans_item))
- .collect();
+ // it is fixed by some directive in the source code.
+ let (locally_defined_symbols, linkage_fixed_explicitly) = {
+ let mut locally_defined_symbols = FxHashSet();
+ let mut linkage_fixed_explicitly = FxHashSet();
+
+ for trans_item in scx.translation_items().borrow().iter() {
+ let symbol_name = symbol_map.get_or_compute(scx, *trans_item);
+ if trans_item.explicit_linkage(tcx).is_some() {
+ linkage_fixed_explicitly.insert(symbol_name.clone());
+ }
+ locally_defined_symbols.insert(symbol_name);
+ }
+
+ (locally_defined_symbols, linkage_fixed_explicitly)
+ };
// Examine each external definition. If the definition is not used in
// any other compilation unit, and is not reachable from other crates,
let is_externally_visible = (linkage == llvm::Linkage::ExternalLinkage) ||
(linkage == llvm::Linkage::LinkOnceODRLinkage) ||
(linkage == llvm::Linkage::WeakODRLinkage);
- let is_definition = llvm::LLVMIsDeclaration(val) == 0;
-
- // If this is a definition (as opposed to just a declaration)
- // and externally visible, check if we can internalize it
- if is_definition && is_externally_visible {
- let name_cstr = CStr::from_ptr(llvm::LLVMGetValueName(val));
- let name_str = name_cstr.to_str().unwrap();
- let name_cow = Cow::Borrowed(name_str);
-
- let is_referenced_somewhere = referenced_somewhere.contains(&name_cstr);
- let is_reachable = reachable.contains(&name_str);
- let has_fixed_linkage = linkage_fixed_explicitly.contains(&name_cow);
-
- if !is_referenced_somewhere && !is_reachable && !has_fixed_linkage {
- llvm::LLVMRustSetLinkage(val, llvm::Linkage::InternalLinkage);
- llvm::LLVMSetDLLStorageClass(val,
- llvm::DLLStorageClass::Default);
+
+ if !is_externally_visible {
+ // This symbol is not visible outside of its codegen unit,
+ // so there is nothing to do for it.
+ continue;
+ }
+
+ let name_cstr = CStr::from_ptr(llvm::LLVMGetValueName(val));
+ let name_str = name_cstr.to_str().unwrap();
+
+ if exported_symbols.contains(&name_str) {
+ // This symbol is explicitly exported, so we can't
+ // mark it as internal or hidden.
+ continue;
+ }
+
+ let is_declaration = llvm::LLVMIsDeclaration(val) == llvm::True;
+
+ if is_declaration {
+ if locally_defined_symbols.contains(name_str) {
+ // Only mark declarations from the current crate as hidden.
+ // Otherwise we would mark things as hidden that are
+ // imported from other crates or native libraries.
+ llvm::LLVMRustSetVisibility(val, llvm::Visibility::Hidden);
+ }
+ } else {
+ let has_fixed_linkage = linkage_fixed_explicitly.contains(name_str);
+
+ if !has_fixed_linkage {
+ // In incremental compilation mode, we can't be sure that
+ // we saw all references because we don't know what's in
+ // cached compilation units, so we always assume that the
+ // given item has been referenced.
+ if incr_comp || referenced_somewhere.contains(&name_cstr) {
+ llvm::LLVMRustSetVisibility(val, llvm::Visibility::Hidden);
+ } else {
+ llvm::LLVMRustSetLinkage(val, llvm::Linkage::InternalLinkage);
+ }
+
+ llvm::LLVMSetDLLStorageClass(val, llvm::DLLStorageClass::Default);
llvm::UnsetComdat(val);
}
}
///
/// This list is later used by linkers to determine the set of symbols needed to
/// be exposed from a dynamic library and it's also encoded into the metadata.
-pub fn filter_reachable_ids(tcx: TyCtxt, reachable: NodeSet) -> NodeSet {
+pub fn find_exported_symbols(tcx: TyCtxt, reachable: NodeSet) -> NodeSet {
reachable.into_iter().filter(|&id| {
// Next, we want to ignore some FFI functions that are not exposed from
// this crate. Reachable FFI functions can be lumped into two
// let it through if it's included statically.
match tcx.map.get(id) {
hir_map::NodeForeignItem(..) => {
- tcx.sess.cstore.is_statically_included_foreign_item(id)
+ let def_id = tcx.map.local_def_id(id);
+ tcx.sess.cstore.is_statically_included_foreign_item(def_id)
}
// Only consider nodes that actually have exported symbols.
let krate = tcx.map.krate();
let ty::CrateAnalysis { export_map, reachable, name, .. } = analysis;
- let reachable = filter_reachable_ids(tcx, reachable);
+ let exported_symbols = find_exported_symbols(tcx, reachable);
let check_overflow = if let Some(v) = tcx.sess.opts.debugging_opts.force_overflow_checks {
v
let shared_ccx = SharedCrateContext::new(tcx,
export_map,
link_meta.clone(),
- reachable,
+ exported_symbols,
check_overflow);
// Translate the metadata.
let metadata = time(tcx.sess.time_passes(), "write metadata", || {
- write_metadata(&shared_ccx, shared_ccx.reachable())
+ write_metadata(&shared_ccx, shared_ccx.exported_symbols())
});
let metadata_module = ModuleTranslation {
// Skip crate items and just output metadata in -Z no-trans mode.
if tcx.sess.opts.debugging_opts.no_trans ||
tcx.sess.crate_types.borrow().iter().all(|ct| ct == &config::CrateTypeMetadata) {
- let linker_info = LinkerInfo::new(&shared_ccx, &[]);
+ let linker_info = LinkerInfo::new(&shared_ccx, &ExportedSymbols::empty());
return CrateTranslation {
modules: modules,
metadata_module: metadata_module,
link: link_meta,
metadata: metadata,
- reachable: vec![],
+ exported_symbols: ExportedSymbols::empty(),
no_builtins: no_builtins,
linker_info: linker_info,
windows_subsystem: None,
}
let sess = shared_ccx.sess();
- let mut reachable_symbols = shared_ccx.reachable().iter().map(|&id| {
- let def_id = shared_ccx.tcx().map.local_def_id(id);
- symbol_for_def_id(def_id, &shared_ccx, &symbol_map)
- }).collect::<Vec<_>>();
-
- if sess.entry_fn.borrow().is_some() {
- reachable_symbols.push("main".to_string());
- }
-
- if sess.crate_types.borrow().contains(&config::CrateTypeDylib) {
- reachable_symbols.push(shared_ccx.metadata_symbol_name());
- }
- // For the purposes of LTO or when creating a cdylib, we add to the
- // reachable set all of the upstream reachable extern fns. These functions
- // are all part of the public ABI of the final product, so we need to
- // preserve them.
- //
- // Note that this happens even if LTO isn't requested or we're not creating
- // a cdylib. In those cases, though, we're not even reading the
- // `reachable_symbols` list later on so it should be ok.
- for cnum in sess.cstore.crates() {
- let syms = sess.cstore.reachable_ids(cnum);
- reachable_symbols.extend(syms.into_iter().filter(|&def_id| {
- let applicable = match sess.cstore.describe_def(def_id) {
- Some(Def::Static(..)) => true,
- Some(Def::Fn(_)) => {
- shared_ccx.tcx().item_generics(def_id).types.is_empty()
- }
- _ => false
- };
-
- if applicable {
- let attrs = shared_ccx.tcx().get_attrs(def_id);
- attr::contains_extern_indicator(sess.diagnostic(), &attrs)
- } else {
- false
- }
- }).map(|did| {
- symbol_for_def_id(did, &shared_ccx, &symbol_map)
- }));
- }
+ let exported_symbols = ExportedSymbols::compute_from(&shared_ccx,
+ &symbol_map);
+ // Now that we have all symbols that are exported from the CGUs of this
+ // crate, we can run the `internalize_symbols` pass.
time(shared_ccx.sess().time_passes(), "internalize symbols", || {
internalize_symbols(sess,
&crate_context_list,
&symbol_map,
- &reachable_symbols.iter()
- .map(|s| &s[..])
- .collect())
+ &exported_symbols);
});
if tcx.sess.opts.debugging_opts.print_type_sizes {
create_imps(&crate_context_list);
}
- let linker_info = LinkerInfo::new(&shared_ccx, &reachable_symbols);
+ let linker_info = LinkerInfo::new(&shared_ccx, &exported_symbols);
let subsystem = attr::first_attr_value_str_by_name(&krate.attrs,
"windows_subsystem");
metadata_module: metadata_module,
link: link_meta,
metadata: metadata,
- reachable: reachable_symbols,
+ exported_symbols: exported_symbols,
no_builtins: no_builtins,
linker_info: linker_info,
windows_subsystem: windows_subsystem,
(codegen_units, symbol_map)
}
-
-fn symbol_for_def_id<'a, 'tcx>(def_id: DefId,
- scx: &SharedCrateContext<'a, 'tcx>,
- symbol_map: &SymbolMap<'tcx>)
- -> String {
- // Just try to look things up in the symbol map. If nothing's there, we
- // recompute.
- if let Some(node_id) = scx.tcx().map.as_local_node_id(def_id) {
- if let Some(sym) = symbol_map.get(TransItem::Static(node_id)) {
- return sym.to_owned();
- }
- }
-
- let instance = Instance::mono(scx, def_id);
-
- symbol_map.get(TransItem::Fn(instance))
- .map(str::to_owned)
- .unwrap_or_else(|| instance.symbol_name(scx))
-}
// http://www.angelcode.com/dev/callconv/callconv.html
// Clang's ABI handling is in lib/CodeGen/TargetInfo.cpp
let t = &ccx.sess().target.target;
- if t.options.is_like_osx || t.options.is_like_windows {
+ if t.options.is_like_osx || t.options.is_like_windows
+ || t.options.is_like_openbsd {
match llsize_of_alloc(ccx, fty.ret.ty) {
1 => fty.ret.cast = Some(Type::i8(ccx)),
2 => fty.ret.cast = Some(Type::i16(ccx)),
use Disr;
use rustc::ty::{self, Ty, TypeFoldable};
use rustc::hir;
+use std::iter;
use syntax_pos::DUMMY_SP;
// Make a version with the type of by-ref closure.
let ty::ClosureTy { unsafety, abi, mut sig } = tcx.closure_type(def_id, substs);
- sig.0.inputs.insert(0, ref_closure_ty); // sig has no self type as of yet
+ sig.0 = tcx.mk_fn_sig(
+ iter::once(ref_closure_ty).chain(sig.0.inputs().iter().cloned()),
+ sig.0.output(),
+ sig.0.variadic
+ );
let llref_fn_ty = tcx.mk_fn_ptr(tcx.mk_bare_fn(ty::BareFnTy {
unsafety: unsafety,
abi: abi,
// Make a version of the closure type with the same arguments, but
// with argument #0 being by value.
assert_eq!(abi, Abi::RustCall);
- sig.0.inputs[0] = closure_ty;
+ sig.0 = tcx.mk_fn_sig(
+ iter::once(closure_ty).chain(sig.0.inputs().iter().skip(1).cloned()),
+ sig.0.output(),
+ sig.0.variadic
+ );
let sig = tcx.erase_late_bound_regions_and_normalize(&sig);
let fn_ty = FnType::new(ccx, abi, &sig, &[]);
}
};
let sig = tcx.erase_late_bound_regions_and_normalize(sig);
- let tuple_input_ty = tcx.intern_tup(&sig.inputs[..]);
- let sig = ty::FnSig {
- inputs: vec![bare_fn_ty_maybe_ref,
- tuple_input_ty],
- output: sig.output,
- variadic: false
- };
+ let tuple_input_ty = tcx.intern_tup(sig.inputs());
+ let sig = tcx.mk_fn_sig(
+ [bare_fn_ty_maybe_ref, tuple_input_ty].iter().cloned(),
+ sig.output(),
+ false
+ );
let fn_ty = FnType::new(ccx, Abi::RustCall, &sig, &[]);
let tuple_fn_ty = tcx.mk_fn_ptr(tcx.mk_bare_fn(ty::BareFnTy {
unsafety: hir::Unsafety::Normal,
llvm::LLVMRustSetLinkage(llfn, llvm::Linkage::ExternalLinkage);
}
}
-
+ if ccx.use_dll_storage_attrs() && ccx.sess().cstore.is_dllimport_foreign_item(def_id) {
+ unsafe {
+ llvm::LLVMSetDLLStorageClass(llfn, llvm::DLLStorageClass::DllImport);
+ }
+ }
llfn
};
let ty = tcx.mk_fn_ptr(tcx.mk_bare_fn(ty::BareFnTy {
unsafety: hir::Unsafety::Unsafe,
abi: Abi::C,
- sig: ty::Binder(ty::FnSig {
- inputs: vec![tcx.mk_mut_ptr(tcx.types.u8)],
- output: tcx.types.never,
- variadic: false
- }),
+ sig: ty::Binder(tcx.mk_fn_sig(
+ iter::once(tcx.mk_mut_ptr(tcx.types.u8)),
+ tcx.types.never,
+ false
+ )),
}));
let unwresume = ccx.eh_unwind_resume();
ty::ClosureKind::FnOnce => ty,
};
- let sig = sig.map_bound(|sig| ty::FnSig {
- inputs: iter::once(env_ty).chain(sig.inputs).collect(),
- ..sig
- });
+ let sig = sig.map_bound(|sig| tcx.mk_fn_sig(
+ iter::once(env_ty).chain(sig.inputs().iter().cloned()),
+ sig.output(),
+ sig.variadic
+ ));
Cow::Owned(ty::BareFnTy { unsafety: unsafety, abi: abi, sig: sig })
}
_ => bug!("unexpected type {:?} to ty_fn_sig", ty)
llvm::set_thread_local(g, true);
}
}
- if ccx.use_dll_storage_attrs() {
+ if ccx.use_dll_storage_attrs() && !ccx.sess().cstore.is_foreign_item(def_id) {
+ // This item is external but not foreign, i.e. it originates from an external Rust
+ // crate. Since we don't know whether this crate will be linked dynamically or
+ // statically in the final application, we always mark such symbols as 'dllimport'.
+ // If final linkage happens to be static, we rely on compiler-emitted __imp_ stubs to
+ // make things work.
unsafe {
llvm::LLVMSetDLLStorageClass(g, llvm::DLLStorageClass::DllImport);
}
g
};
+ if ccx.use_dll_storage_attrs() && ccx.sess().cstore.is_dllimport_foreign_item(def_id) {
+ // For foreign (native) libs we know the exact storage type to use.
+ unsafe {
+ llvm::LLVMSetDLLStorageClass(g, llvm::DLLStorageClass::DllImport);
+ }
+ }
ccx.instances().borrow_mut().insert(instance, g);
ccx.statics().borrow_mut().insert(g, def_id);
g
metadata_llcx: ContextRef,
export_map: ExportMap,
- reachable: NodeSet,
+ exported_symbols: NodeSet,
link_meta: LinkMeta,
tcx: TyCtxt<'a, 'tcx, 'tcx>,
stats: Stats,
pub fn new(tcx: TyCtxt<'b, 'tcx, 'tcx>,
export_map: ExportMap,
link_meta: LinkMeta,
- reachable: NodeSet,
+ exported_symbols: NodeSet,
check_overflow: bool)
-> SharedCrateContext<'b, 'tcx> {
let (metadata_llcx, metadata_llmod) = unsafe {
// they're not available to be linked against. This poses a few problems
// for the compiler, some of which are somewhat fundamental, but we use
// the `use_dll_storage_attrs` variable below to attach the `dllexport`
- // attribute to all LLVM functions that are reachable (e.g. they're
+ // attribute to all LLVM functions that are exported (e.g. they're
// already tagged with external linkage). This is suboptimal for a few
// reasons:
//
metadata_llmod: metadata_llmod,
metadata_llcx: metadata_llcx,
export_map: export_map,
- reachable: reachable,
+ exported_symbols: exported_symbols,
link_meta: link_meta,
tcx: tcx,
stats: Stats {
&self.export_map
}
- pub fn reachable<'a>(&'a self) -> &'a NodeSet {
- &self.reachable
+ pub fn exported_symbols<'a>(&'a self) -> &'a NodeSet {
+ &self.exported_symbols
}
pub fn trait_cache(&self) -> &RefCell<DepTrackingMap<TraitSelectionCache<'tcx>>> {
&self.shared.export_map
}
- pub fn reachable<'a>(&'a self) -> &'a NodeSet {
- &self.shared.reachable
+ pub fn exported_symbols<'a>(&'a self) -> &'a NodeSet {
+ &self.shared.exported_symbols
}
pub fn link_meta<'a>(&'a self) -> &'a LinkMeta {
{
let signature = cx.tcx().erase_late_bound_regions(signature);
- let mut signature_metadata: Vec<DIType> = Vec::with_capacity(signature.inputs.len() + 1);
+ let mut signature_metadata: Vec<DIType> = Vec::with_capacity(signature.inputs().len() + 1);
// return type
- signature_metadata.push(match signature.output.sty {
+ signature_metadata.push(match signature.output().sty {
ty::TyTuple(ref tys) if tys.is_empty() => ptr::null_mut(),
- _ => type_metadata(cx, signature.output, span)
+ _ => type_metadata(cx, signature.output(), span)
});
// regular arguments
- for &argument_type in &signature.inputs {
+ for &argument_type in signature.inputs() {
signature_metadata.push(type_metadata(cx, argument_type, span));
}
return create_DIArray(DIB(cx), &[]);
}
- let mut signature = Vec::with_capacity(sig.inputs.len() + 1);
+ let mut signature = Vec::with_capacity(sig.inputs().len() + 1);
// Return type -- llvm::DIBuilder wants this at index 0
- signature.push(match sig.output.sty {
+ signature.push(match sig.output().sty {
ty::TyTuple(ref tys) if tys.is_empty() => ptr::null_mut(),
- _ => type_metadata(cx, sig.output, syntax_pos::DUMMY_SP)
+ _ => type_metadata(cx, sig.output(), syntax_pos::DUMMY_SP)
});
let inputs = if abi == Abi::RustCall {
- &sig.inputs[..sig.inputs.len()-1]
+ &sig.inputs()[..sig.inputs().len() - 1]
} else {
- &sig.inputs[..]
+ sig.inputs()
};
// Arguments types
signature.push(type_metadata(cx, argument_type, syntax_pos::DUMMY_SP));
}
- if abi == Abi::RustCall && !sig.inputs.is_empty() {
- if let ty::TyTuple(args) = sig.inputs[sig.inputs.len() - 1].sty {
+ if abi == Abi::RustCall && !sig.inputs().is_empty() {
+ if let ty::TyTuple(args) = sig.inputs()[sig.inputs().len() - 1].sty {
for &argument_type in args {
signature.push(type_metadata(cx, argument_type, syntax_pos::DUMMY_SP));
}
output.push_str("fn(");
let sig = cx.tcx().erase_late_bound_regions_and_normalize(sig);
- if !sig.inputs.is_empty() {
- for ¶meter_type in &sig.inputs {
+ if !sig.inputs().is_empty() {
+ for ¶meter_type in sig.inputs() {
push_debuginfo_type_name(cx, parameter_type, true, output);
output.push_str(", ");
}
}
if sig.variadic {
- if !sig.inputs.is_empty() {
+ if !sig.inputs().is_empty() {
output.push_str(", ...");
} else {
output.push_str("...");
output.push(')');
- if !sig.output.is_nil() {
+ if !sig.output().is_nil() {
output.push_str(" -> ");
- push_debuginfo_type_name(cx, sig.output, true, output);
+ push_debuginfo_type_name(cx, sig.output(), true, output);
}
},
ty::TyClosure(..) => {
// visible). It might better to use the `exported_items` set from
// `driver::CrateAnalysis` in the future, but (atm) this set is not
// available in the translation pass.
- !cx.reachable().contains(&node_id)
+ !cx.exported_symbols().contains(&node_id)
}
#[allow(non_snake_case)]
// don't want the symbols to get exported.
if attr::contains_name(ccx.tcx().map.krate_attrs(), "compiler_builtins") {
unsafe {
- llvm::LLVMSetVisibility(llfn, llvm::Visibility::Hidden);
+ llvm::LLVMRustSetVisibility(llfn, llvm::Visibility::Hidden);
}
}
let llfn = declare_raw_fn(ccx, name, fty.cconv, fty.llvm_type(ccx));
// FIXME(canndrew): This is_never should really be an is_uninhabited
- if sig.output.is_never() {
+ if sig.output().is_never() {
llvm::Attribute::NoReturn.apply_llfn(Function, llfn);
}
use syntax_pos::{Span, DUMMY_SP};
use std::cmp::Ordering;
+use std::iter;
fn get_simple_intrinsic(ccx: &CrateContext, name: &str) -> Option<ValueRef> {
let llvm_name = match name {
};
let sig = tcx.erase_late_bound_regions_and_normalize(&fty.sig);
- let arg_tys = sig.inputs;
- let ret_ty = sig.output;
+ let arg_tys = sig.inputs();
+ let ret_ty = sig.output();
let name = &*tcx.item_name(def_id).as_str();
let span = match call_debug_location {
// again to find them and extract the arguments
intr.inputs.iter()
.zip(llargs)
- .zip(&arg_tys)
+ .zip(arg_tys)
.flat_map(|((t, llarg), ty)| modify_as_needed(bcx, t, ty, *llarg))
.collect()
};
trans: &mut for<'b> FnMut(Block<'b, 'tcx>))
-> ValueRef {
let ccx = fcx.ccx;
- let sig = ty::FnSig {
- inputs: inputs,
- output: output,
- variadic: false,
- };
+ let sig = ccx.tcx().mk_fn_sig(inputs.into_iter(), output, false);
let fn_ty = FnType::new(ccx, Abi::Rust, &sig, &[]);
let rust_fn_ty = ccx.tcx().mk_fn_ptr(ccx.tcx().mk_bare_fn(ty::BareFnTy {
let fn_ty = tcx.mk_fn_ptr(tcx.mk_bare_fn(ty::BareFnTy {
unsafety: hir::Unsafety::Unsafe,
abi: Abi::Rust,
- sig: ty::Binder(ty::FnSig {
- inputs: vec![i8p],
- output: tcx.mk_nil(),
- variadic: false,
- }),
+ sig: ty::Binder(tcx.mk_fn_sig(iter::once(i8p), tcx.mk_nil(), false)),
}));
let output = tcx.types.i32;
let rust_try = gen_fn(fcx, "__rust_try", vec![fn_ty, i8p, i8p], output, trans);
let tcx = bcx.tcx();
let sig = tcx.erase_late_bound_regions_and_normalize(callee_ty.fn_sig());
- let arg_tys = sig.inputs;
+ let arg_tys = sig.inputs();
// every intrinsic takes a SIMD vector as its first argument
require_simd!(arg_tys[0], "input");
pub mod linker;
pub mod link;
pub mod lto;
+ pub mod symbol_export;
pub mod symbol_names;
pub mod write;
pub mod msvc;
pub metadata_module: ModuleTranslation,
pub link: middle::cstore::LinkMeta,
pub metadata: Vec<u8>,
- pub reachable: Vec<String>,
+ pub exported_symbols: back::symbol_export::ExportedSymbols,
pub no_builtins: bool,
pub windows_subsystem: Option<String>,
pub linker_info: back::linker::LinkerInfo
return;
}
- let extra_args = &args[sig.inputs.len()..];
+ let extra_args = &args[sig.inputs().len()..];
let extra_args = extra_args.iter().map(|op_arg| {
let op_ty = op_arg.ty(&self.mir, bcx.tcx());
bcx.monomorphize(&op_ty)
// Make a fake operand for store_return
let op = OperandRef {
val: Ref(dst),
- ty: sig.output,
+ ty: sig.output(),
};
self.store_return(&bcx, ret_dest, fn_ty.ret, op);
}
debug_loc.apply_to_bcx(ret_bcx);
let op = OperandRef {
val: Immediate(invokeret),
- ty: sig.output,
+ ty: sig.output(),
};
self.store_return(&ret_bcx, ret_dest, fn_ty.ret, op);
});
if let Some((_, target)) = *destination {
let op = OperandRef {
val: Immediate(llret),
- ty: sig.output,
+ ty: sig.output(),
};
self.store_return(&bcx, ret_dest, fn_ty.ret, op);
funclet_br(self, bcx, target);
assert_eq!(dg.ty(), glue::get_drop_glue_type(tcx, dg.ty()));
let t = dg.ty();
- let sig = ty::FnSig {
- inputs: vec![tcx.mk_mut_ptr(tcx.types.i8)],
- output: tcx.mk_nil(),
- variadic: false,
- };
+ let sig = tcx.mk_fn_sig(iter::once(tcx.mk_mut_ptr(tcx.types.i8)), tcx.mk_nil(), false);
// Create a FnType for fn(*mut i8) and substitute the real type in
// later - that prevents FnType from splitting fat pointers up.
output.push_str("fn(");
- let ty::FnSig {
- inputs: sig_inputs,
- output: sig_output,
- variadic: sig_variadic
- } = self.tcx.erase_late_bound_regions_and_normalize(sig);
+ let sig = self.tcx.erase_late_bound_regions_and_normalize(sig);
- if !sig_inputs.is_empty() {
- for ¶meter_type in &sig_inputs {
+ if !sig.inputs().is_empty() {
+ for ¶meter_type in sig.inputs() {
self.push_type_name(parameter_type, output);
output.push_str(", ");
}
output.pop();
}
- if sig_variadic {
- if !sig_inputs.is_empty() {
+ if sig.variadic {
+ if !sig.inputs().is_empty() {
output.push_str(", ...");
} else {
output.push_str("...");
output.push(')');
- if !sig_output.is_nil() {
+ if !sig.output().is_nil() {
output.push_str(" -> ");
- self.push_type_name(sig_output, output);
+ self.push_type_name(sig.output(), output);
}
},
ty::TyClosure(def_id, ref closure_substs) => {
/// Returns the appropriate lifetime to use for any output lifetimes
/// (if one exists) and a vector of the (pattern, number of lifetimes)
/// corresponding to each input type/pattern.
- fn find_implied_output_region<F>(&self,
+ fn find_implied_output_region<I>(&self,
input_tys: &[Ty<'tcx>],
- input_pats: F) -> ElidedLifetime
- where F: FnOnce() -> Vec<String>
+ input_pats: I) -> ElidedLifetime
+ where I: Iterator<Item=String>
{
let tcx = self.tcx();
- let mut lifetimes_for_params = Vec::new();
+ let mut lifetimes_for_params = Vec::with_capacity(input_tys.len());
let mut possible_implied_output_region = None;
+ let mut lifetimes = 0;
for input_type in input_tys.iter() {
let mut regions = FxHashSet();
debug!("find_implied_output_regions: collected {:?} from {:?} \
have_bound_regions={:?}", ®ions, input_type, have_bound_regions);
- if regions.len() == 1 {
+ lifetimes += regions.len();
+
+ if lifetimes == 1 && regions.len() == 1 {
// there's a chance that the unique lifetime of this
// iteration will be the appropriate lifetime for output
// parameters, so lets store it.
});
}
- if lifetimes_for_params.iter().map(|e| e.lifetime_count).sum::<usize>() == 1 {
+ if lifetimes == 1 {
Ok(*possible_implied_output_region.unwrap())
} else {
// Fill in the expensive `name` fields now that we know they're
// needed.
- for (info, input_pat) in lifetimes_for_params.iter_mut().zip(input_pats()) {
+ for (info, input_pat) in lifetimes_for_params.iter_mut().zip(input_pats) {
info.name = input_pat;
}
Err(Some(lifetimes_for_params))
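For context, `find_implied_output_region` implements the surface-level lifetime elision rule: when the inputs contribute exactly one lifetime, that lifetime is implied for the output. A minimal illustration of the rule as the language exposes it (the function name is ours, for illustration only):

```rust
// With exactly one input lifetime, the elided output lifetime is taken
// from it -- the case where find_implied_output_region returns Ok(..).
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    assert_eq!(first_word("hello world"), "hello");
    assert_eq!(first_word(""), "");
}
```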
let inputs = self.tcx().mk_type_list(data.inputs.iter().map(|a_t| {
self.ast_ty_arg_to_ty(&binding_rscope, None, region_substs, a_t)
}));
- let inputs_len = inputs.len();
- let input_params = || vec![String::new(); inputs_len];
+ let input_params = iter::repeat(String::new()).take(inputs.len());
let implied_output_region = self.find_implied_output_region(&inputs, input_params);
let (output, output_span) = match data.output {
// checking for here would be considered early bound
// anyway.)
let inputs = bare_fn_ty.sig.inputs();
- let late_bound_in_args = tcx.collect_constrained_late_bound_regions(&inputs);
+ let late_bound_in_args = tcx.collect_constrained_late_bound_regions(
+ &inputs.map_bound(|i| i.to_owned()));
let output = bare_fn_ty.sig.output();
let late_bound_in_ret = tcx.collect_referenced_late_bound_regions(&output);
for br in late_bound_in_ret.difference(&late_bound_in_args) {
let implied_output_region = match explicit_self {
Some(ExplicitSelf::ByReference(region, _)) => Ok(*region),
_ => {
- // `pat_to_string` is expensive and
- // `find_implied_output_region` only needs its result when
- // there's an error. So we wrap it in a closure to avoid
- // calling it until necessary.
- let arg_pats = || {
- arg_params.iter().map(|a| pprust::pat_to_string(&a.pat)).collect()
- };
- self.find_implied_output_region(&arg_tys, arg_pats)
+ self.find_implied_output_region(&arg_tys,
+ arg_params.iter()
+ .map(|a| pprust::pat_to_string(&a.pat)))
+
}
};
hir::DefaultReturn(..) => self.tcx().mk_nil(),
};
- let input_tys = self_ty.into_iter().chain(arg_tys).collect();
-
- debug!("ty_of_method_or_bare_fn: input_tys={:?}", input_tys);
debug!("ty_of_method_or_bare_fn: output_ty={:?}", output_ty);
self.tcx().mk_bare_fn(ty::BareFnTy {
unsafety: unsafety,
abi: abi,
- sig: ty::Binder(ty::FnSig {
- inputs: input_tys,
- output: output_ty,
- variadic: decl.variadic
- }),
+ sig: ty::Binder(self.tcx().mk_fn_sig(
+ self_ty.into_iter().chain(arg_tys),
+ output_ty,
+ decl.variadic
+ )),
})
}
// that function type
let rb = rscope::BindingRscope::new();
- let input_tys: Vec<_> = decl.inputs.iter().enumerate().map(|(i, a)| {
+ let input_tys = decl.inputs.iter().enumerate().map(|(i, a)| {
let expected_arg_ty = expected_sig.as_ref().and_then(|e| {
// no guarantee that the correct number of expected args
// were supplied
- if i < e.inputs.len() {
- Some(e.inputs[i])
+ if i < e.inputs().len() {
+ Some(e.inputs()[i])
} else {
None
}
});
self.ty_of_arg(&rb, a, expected_arg_ty)
- }).collect();
+ });
- let expected_ret_ty = expected_sig.map(|e| e.output);
+ let expected_ret_ty = expected_sig.as_ref().map(|e| e.output());
let is_infer = match decl.output {
hir::Return(ref output) if output.node == hir::TyInfer => true,
hir::DefaultReturn(..) => bug!(),
};
- debug!("ty_of_closure: input_tys={:?}", input_tys);
debug!("ty_of_closure: output_ty={:?}", output_ty);
ty::ClosureTy {
unsafety: unsafety,
abi: abi,
- sig: ty::Binder(ty::FnSig {inputs: input_tys,
- output: output_ty,
- variadic: decl.variadic}),
+ sig: ty::Binder(self.tcx().mk_fn_sig(input_tys, output_ty, decl.variadic)),
}
}
-> Ty<'tcx> {
let error_fn_sig;
- let fn_sig = match callee_ty.sty {
- ty::TyFnDef(.., &ty::BareFnTy {ref sig, ..}) |
- ty::TyFnPtr(&ty::BareFnTy {ref sig, ..}) => sig,
+ let (fn_sig, def_span) = match callee_ty.sty {
+ ty::TyFnDef(def_id, .., &ty::BareFnTy {ref sig, ..}) => {
+ (sig, self.tcx.map.span_if_local(def_id))
+ }
+ ty::TyFnPtr(&ty::BareFnTy {ref sig, ..}) => (sig, None),
ref t => {
let mut unit_variant = None;
if let &ty::TyAdt(adt_def, ..) = t {
// This is the "default" function signature, used in case of error.
// In that case, we check each argument against "error" in order to
// set up all the node type bindings.
- error_fn_sig = ty::Binder(ty::FnSig {
- inputs: self.err_args(arg_exprs.len()),
- output: self.tcx.types.err,
- variadic: false,
- });
+ error_fn_sig = ty::Binder(self.tcx.mk_fn_sig(
+ self.err_args(arg_exprs.len()).into_iter(),
+ self.tcx.types.err,
+ false,
+ ));
- &error_fn_sig
+ (&error_fn_sig, None)
}
};
let expected_arg_tys =
self.expected_types_for_fn_args(call_expr.span,
expected,
- fn_sig.output,
- &fn_sig.inputs);
+ fn_sig.output(),
+ fn_sig.inputs());
self.check_argument_types(call_expr.span,
- &fn_sig.inputs,
+ fn_sig.inputs(),
&expected_arg_tys[..],
arg_exprs,
fn_sig.variadic,
- TupleArgumentsFlag::DontTupleArguments);
+ TupleArgumentsFlag::DontTupleArguments,
+ def_span);
- fn_sig.output
+ fn_sig.output()
}
fn confirm_deferred_closure_call(&self,
let expected_arg_tys = self.expected_types_for_fn_args(call_expr.span,
expected,
- fn_sig.output.clone(),
- &fn_sig.inputs);
+ fn_sig.output().clone(),
+ fn_sig.inputs());
self.check_argument_types(call_expr.span,
- &fn_sig.inputs,
+ fn_sig.inputs(),
&expected_arg_tys,
arg_exprs,
fn_sig.variadic,
- TupleArgumentsFlag::TupleArguments);
+ TupleArgumentsFlag::TupleArguments,
+ None);
- fn_sig.output
+ fn_sig.output()
}
fn confirm_overloaded_call(&self,
debug!("attempt_resolution: method_callee={:?}", method_callee);
- for (&method_arg_ty, &self_arg_ty) in
- method_sig.inputs[1..].iter().zip(&self.fn_sig.inputs) {
- fcx.demand_eqtype(self.call_expr.span, self_arg_ty, method_arg_ty);
+ for (method_arg_ty, self_arg_ty) in
+ method_sig.inputs().iter().skip(1).zip(self.fn_sig.inputs()) {
+ fcx.demand_eqtype(self.call_expr.span, &self_arg_ty, &method_arg_ty);
}
- fcx.demand_eqtype(self.call_expr.span, method_sig.output, self.fn_sig.output);
+ fcx.demand_eqtype(self.call_expr.span, method_sig.output(), self.fn_sig.output());
fcx.write_overloaded_call_method_map(self.call_expr, method_callee);
}
use astconv::AstConv;
use rustc::ty::{self, ToPolyTraitRef, Ty};
use std::cmp;
+use std::iter;
use syntax::abi::Abi;
use rustc::hir;
// Tuple up the arguments and insert the resulting function type into
// the `closures` table.
- fn_ty.sig.0.inputs = vec![self.tcx.intern_tup(&fn_ty.sig.0.inputs[..])];
+ fn_ty.sig.0 = self.tcx.mk_fn_sig(
+ iter::once(self.tcx.intern_tup(fn_ty.sig.skip_binder().inputs())),
+ fn_ty.sig.skip_binder().output(),
+ fn_ty.sig.variadic()
+ );
debug!("closure for {:?} --> sig={:?} opt_kind={:?}",
expr_def_id,
arg_param_ty);
let input_tys = match arg_param_ty.sty {
- ty::TyTuple(tys) => tys.to_vec(),
+ ty::TyTuple(tys) => tys.into_iter(),
_ => {
return None;
}
};
- debug!("deduce_sig_from_projection: input_tys {:?}", input_tys);
let ret_param_ty = projection.0.ty;
let ret_param_ty = self.resolve_type_vars_if_possible(&ret_param_ty);
- debug!("deduce_sig_from_projection: ret_param_ty {:?}",
- ret_param_ty);
+ debug!("deduce_sig_from_projection: ret_param_ty {:?}", ret_param_ty);
- let fn_sig = ty::FnSig {
- inputs: input_tys,
- output: ret_param_ty,
- variadic: false,
- };
+ let fn_sig = self.tcx.mk_fn_sig(input_tys.cloned(), ret_param_ty, false);
debug!("deduce_sig_from_projection: fn_sig {:?}", fn_sig);
Some(fn_sig)
_ => bug!("{:?} is not a MethodTraitItem", trait_m),
};
- let impl_iter = impl_sig.inputs.iter();
- let trait_iter = trait_sig.inputs.iter();
+ let impl_iter = impl_sig.inputs().iter();
+ let trait_iter = trait_sig.inputs().iter();
impl_iter.zip(trait_iter)
.zip(impl_m_iter)
.zip(trait_m_iter)
})
.next()
.unwrap_or_else(|| {
- if infcx.sub_types(false, &cause, impl_sig.output, trait_sig.output)
+ if infcx.sub_types(false, &cause, impl_sig.output(),
+ trait_sig.output())
.is_err() {
(impl_m_output.span(), Some(trait_m_output.span()))
} else {
};
let impl_m_fty = m_fty(impl_m);
let trait_m_fty = m_fty(trait_m);
- if impl_m_fty.sig.0.inputs.len() != trait_m_fty.sig.0.inputs.len() {
- let trait_number_args = trait_m_fty.sig.0.inputs.len();
- let impl_number_args = impl_m_fty.sig.0.inputs.len();
+ let trait_number_args = trait_m_fty.sig.inputs().skip_binder().len();
+ let impl_number_args = impl_m_fty.sig.inputs().skip_binder().len();
+ if trait_number_args != impl_number_args {
let trait_m_node_id = tcx.map.as_local_node_id(trait_m.def_id);
let trait_span = if let Some(trait_id) = trait_m_node_id {
match tcx.map.expect_trait_item(trait_id).node {
use intrinsics;
use rustc::traits::{ObligationCause, ObligationCauseCode};
use rustc::ty::subst::Substs;
-use rustc::ty::FnSig;
use rustc::ty::{self, Ty};
use rustc::util::nodemap::FxHashMap;
use {CrateCtxt, require_same_types};
use rustc::hir;
+use std::iter;
+
fn equate_intrinsic_type<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>,
it: &hir::ForeignItem,
n_tps: usize,
let fty = tcx.mk_fn_def(def_id, substs, tcx.mk_bare_fn(ty::BareFnTy {
unsafety: hir::Unsafety::Unsafe,
abi: abi,
- sig: ty::Binder(FnSig {
- inputs: inputs,
- output: output,
- variadic: false,
- }),
+ sig: ty::Binder(tcx.mk_fn_sig(inputs.into_iter(), output, false)),
}));
let i_n_tps = tcx.item_generics(def_id).types.len();
if i_n_tps != n_tps {
let fn_ty = tcx.mk_bare_fn(ty::BareFnTy {
unsafety: hir::Unsafety::Normal,
abi: Abi::Rust,
- sig: ty::Binder(FnSig {
- inputs: vec![mut_u8],
- output: tcx.mk_nil(),
- variadic: false,
- }),
+ sig: ty::Binder(tcx.mk_fn_sig(iter::once(mut_u8), tcx.mk_nil(), false)),
});
(0, vec![tcx.mk_fn_ptr(fn_ty), mut_u8, mut_u8], tcx.types.i32)
}
let sig = tcx.item_type(def_id).fn_sig();
let sig = tcx.no_late_bound_regions(sig).unwrap();
- if intr.inputs.len() != sig.inputs.len() {
+ if intr.inputs.len() != sig.inputs().len() {
span_err!(tcx.sess, it.span, E0444,
"platform-specific intrinsic has invalid number of \
arguments: found {}, expected {}",
- sig.inputs.len(), intr.inputs.len());
+ sig.inputs().len(), intr.inputs.len());
return
}
- let input_pairs = intr.inputs.iter().zip(&sig.inputs);
+ let input_pairs = intr.inputs.iter().zip(sig.inputs());
for (i, (expected_arg, arg)) in input_pairs.enumerate() {
match_intrinsic_type_to_type(ccx, &format!("argument {}", i + 1), it.span,
&mut structural_to_nomimal, expected_arg, arg);
}
match_intrinsic_type_to_type(ccx, "return value", it.span,
&mut structural_to_nomimal,
- &intr.output, sig.output);
+ &intr.output, sig.output());
return
}
None => {
infer::FnCall,
&fty.sig).0;
let fn_sig = self.instantiate_type_scheme(span, trait_ref.substs, &fn_sig);
- let transformed_self_ty = fn_sig.inputs[0];
+ let transformed_self_ty = fn_sig.inputs()[0];
let method_ty = tcx.mk_fn_def(def_id, trait_ref.substs,
tcx.mk_bare_fn(ty::BareFnTy {
sig: ty::Binder(fn_sig),
// Create the function context. This is either derived from scratch or,
// in the case of function expressions, based on the outer context.
- let mut fcx = FnCtxt::new(inherited, fn_sig.output, body.id);
+ let mut fcx = FnCtxt::new(inherited, fn_sig.output(), body.id);
*fcx.ps.borrow_mut() = UnsafetyState::function(unsafety, unsafety_id);
fcx.require_type_is_sized(fcx.ret_ty, decl.output.span(), traits::ReturnType);
fcx.ret_ty = fcx.instantiate_anon_types(&fcx.ret_ty);
- fn_sig.output = fcx.ret_ty;
+ fn_sig = fcx.tcx.mk_fn_sig(fn_sig.inputs().iter().cloned(), fcx.ret_ty, fn_sig.variadic);
{
let mut visit = GatherLocalsVisitor { fcx: &fcx, };
// Add formal parameters.
- for (arg_ty, input) in fn_sig.inputs.iter().zip(&decl.inputs) {
+ for (arg_ty, input) in fn_sig.inputs().iter().zip(&decl.inputs) {
// The type of the argument must be well-formed.
//
// NB -- this is now checked in wfcheck, but that
};
self.check_argument_types(sp, &err_inputs[..], &[], args_no_rcvr,
- false, tuple_arguments);
+ false, tuple_arguments, None);
self.tcx.types.err
} else {
match method_fn_ty.sty {
- ty::TyFnDef(.., ref fty) => {
+ ty::TyFnDef(def_id, .., ref fty) => {
// HACK(eddyb) ignore self in the definition (see above).
- let expected_arg_tys = self.expected_types_for_fn_args(sp, expected,
- fty.sig.0.output,
- &fty.sig.0.inputs[1..]);
- self.check_argument_types(sp, &fty.sig.0.inputs[1..], &expected_arg_tys[..],
- args_no_rcvr, fty.sig.0.variadic, tuple_arguments);
- fty.sig.0.output
+ let expected_arg_tys = self.expected_types_for_fn_args(
+ sp,
+ expected,
+ fty.sig.0.output(),
+ &fty.sig.0.inputs()[1..]
+ );
+ self.check_argument_types(sp, &fty.sig.0.inputs()[1..], &expected_arg_tys[..],
+ args_no_rcvr, fty.sig.0.variadic, tuple_arguments,
+ self.tcx.map.span_if_local(def_id));
+ fty.sig.0.output()
}
_ => {
span_bug!(callee_expr.span, "method without bare fn type");
expected_arg_tys: &[Ty<'tcx>],
args: &'gcx [hir::Expr],
variadic: bool,
- tuple_arguments: TupleArgumentsFlag) {
+ tuple_arguments: TupleArgumentsFlag,
+ def_span: Option<Span>) {
let tcx = self.tcx;
// Grab the argument types, supplying fresh type variables
sp
};
- fn parameter_count_error<'tcx>(sess: &Session, sp: Span, fn_inputs: &[Ty<'tcx>],
- expected_count: usize, arg_count: usize, error_code: &str,
- variadic: bool) {
+ fn parameter_count_error<'tcx>(sess: &Session, sp: Span, expected_count: usize,
+ arg_count: usize, error_code: &str, variadic: bool,
+ def_span: Option<Span>) {
let mut err = sess.struct_span_err_with_code(sp,
&format!("this function takes {}{} parameter{} but {} parameter{} supplied",
if variadic {"at least "} else {""},
if arg_count == 1 {" was"} else {"s were"}),
error_code);
- let input_types = fn_inputs.iter().map(|i| format!("{:?}", i)).collect::<Vec<String>>();
- if input_types.len() > 1 {
- err.note("the following parameter types were expected:");
- err.note(&input_types.join(", "));
- } else if input_types.len() > 0 {
- err.note(&format!("the following parameter type was expected: {}",
- input_types[0]));
- } else {
- err.span_label(sp, &format!("expected {}{} parameter{}",
- if variadic {"at least "} else {""},
- expected_count,
- if expected_count == 1 {""} else {"s"}));
+ err.span_label(sp, &format!("expected {}{} parameter{}",
+ if variadic {"at least "} else {""},
+ expected_count,
+ if expected_count == 1 {""} else {"s"}));
+ if let Some(def_s) = def_span {
+ err.span_label(def_s, &format!("defined here"));
}
err.emit();
}
let tuple_type = self.structurally_resolved_type(sp, fn_inputs[0]);
match tuple_type.sty {
ty::TyTuple(arg_types) if arg_types.len() != args.len() => {
- parameter_count_error(tcx.sess, sp_args, fn_inputs, arg_types.len(), args.len(),
- "E0057", false);
+ parameter_count_error(tcx.sess, sp_args, arg_types.len(), args.len(),
+ "E0057", false, def_span);
expected_arg_tys = &[];
self.err_args(args.len())
}
if supplied_arg_count >= expected_arg_count {
fn_inputs.to_vec()
} else {
- parameter_count_error(tcx.sess, sp_args, fn_inputs, expected_arg_count,
- supplied_arg_count, "E0060", true);
+ parameter_count_error(tcx.sess, sp_args, expected_arg_count,
+ supplied_arg_count, "E0060", true, def_span);
expected_arg_tys = &[];
self.err_args(supplied_arg_count)
}
} else {
- parameter_count_error(tcx.sess, sp_args, fn_inputs, expected_arg_count,
- supplied_arg_count, "E0061", false);
+ parameter_count_error(tcx.sess, sp_args, expected_arg_count,
+ supplied_arg_count, "E0061", false, def_span);
expected_arg_tys = &[];
self.err_args(supplied_arg_count)
};
//
// FIXME(#27579) return types should not be implied bounds
let fn_sig_tys: Vec<_> =
- fn_sig.inputs.iter()
- .cloned()
- .chain(Some(fn_sig.output))
- .collect();
+ fn_sig.inputs().iter().cloned().chain(Some(fn_sig.output())).collect();
let old_body_id = self.set_body_id(body_id.node_id());
self.relate_free_regions(&fn_sig_tys[..], body_id.node_id(), span);
let fn_sig = method.ty.fn_sig();
let fn_sig = // late-bound regions should have been instantiated
self.tcx.no_late_bound_regions(fn_sig).unwrap();
- let self_ty = fn_sig.inputs[0];
+ let self_ty = fn_sig.inputs()[0];
let (m, r) = match self_ty.sty {
ty::TyRef(r, ref m) => (m.mutbl, r),
_ => {
self.type_must_outlive(infer::CallRcvr(deref_expr.span),
self_ty, r_deref_expr);
self.type_must_outlive(infer::CallReturn(deref_expr.span),
- fn_sig.output, r_deref_expr);
- fn_sig.output
+ fn_sig.output(), r_deref_expr);
+ fn_sig.output()
}
None => derefd_ty
};
let fty = fcx.instantiate_type_scheme(span, free_substs, &fty);
let sig = fcx.tcx.liberate_late_bound_regions(free_id_outlive, &fty.sig);
- for &input_ty in &sig.inputs {
- fcx.register_wf_obligation(input_ty, span, self.code.clone());
+ for input_ty in sig.inputs() {
+ fcx.register_wf_obligation(&input_ty, span, self.code.clone());
}
- implied_bounds.extend(sig.inputs);
+ implied_bounds.extend(sig.inputs());
- fcx.register_wf_obligation(sig.output, span, self.code.clone());
+ fcx.register_wf_obligation(sig.output(), span, self.code.clone());
// FIXME(#25759) return types should not be implied bounds
- implied_bounds.push(sig.output);
+ implied_bounds.push(sig.output());
self.check_where_clauses(fcx, span, predicates);
}
debug!("check_method_receiver: sig={:?}", sig);
- let self_arg_ty = sig.inputs[0];
+ let self_arg_ty = sig.inputs()[0];
let rcvr_ty = match ExplicitSelf::determine(self_ty, self_arg_ty) {
ExplicitSelf::ByValue => self_ty,
ExplicitSelf::ByReference(region, mutbl) => {
let ctor_ty = match variant.ctor_kind {
CtorKind::Fictive | CtorKind::Const => ty,
CtorKind::Fn => {
- let inputs: Vec<_> =
- variant.fields
- .iter()
- .map(|field| tcx.item_type(field.did))
- .collect();
- let substs = mk_item_substs(&ccx.icx(&predicates),
- ccx.tcx.map.span(ctor_id), def_id);
+ let inputs = variant.fields.iter().map(|field| tcx.item_type(field.did));
+ let substs = mk_item_substs(&ccx.icx(&predicates), ccx.tcx.map.span(ctor_id), def_id);
tcx.mk_fn_def(def_id, substs, tcx.mk_bare_fn(ty::BareFnTy {
unsafety: hir::Unsafety::Normal,
abi: abi::Abi::Rust,
- sig: ty::Binder(ty::FnSig {
- inputs: inputs,
- output: ty,
- variadic: false
- })
+ sig: ty::Binder(ccx.tcx.mk_fn_sig(inputs, ty, false))
}))
}
};
ccx.tcx.mk_fn_def(def_id, substs, ccx.tcx.mk_bare_fn(ty::BareFnTy {
abi: abi,
unsafety: hir::Unsafety::Unsafe,
- sig: ty::Binder(ty::FnSig {inputs: input_tys,
- output: output,
- variadic: decl.variadic}),
+ sig: ty::Binder(ccx.tcx.mk_fn_sig(input_tys.into_iter(), output, decl.variadic)),
}))
}
use syntax::abi::Abi;
use syntax_pos::Span;
+use std::iter;
use std::cell::RefCell;
use util::nodemap::NodeMap;
tcx.mk_bare_fn(ty::BareFnTy {
unsafety: hir::Unsafety::Normal,
abi: Abi::Rust,
- sig: ty::Binder(ty::FnSig {
- inputs: Vec::new(),
- output: tcx.mk_nil(),
- variadic: false
- })
+ sig: ty::Binder(tcx.mk_fn_sig(iter::empty(), tcx.mk_nil(), false))
}));
require_same_types(
tcx.mk_bare_fn(ty::BareFnTy {
unsafety: hir::Unsafety::Normal,
abi: Abi::Rust,
- sig: ty::Binder(ty::FnSig {
- inputs: vec![
+ sig: ty::Binder(tcx.mk_fn_sig(
+ [
tcx.types.isize,
tcx.mk_imm_ptr(tcx.mk_imm_ptr(tcx.types.u8))
- ],
- output: tcx.types.isize,
- variadic: false,
- }),
+ ].iter().cloned(),
+ tcx.types.isize,
+ false,
+ )),
}));
require_same_types(
sig: &ty::PolyFnSig<'tcx>,
variance: VarianceTermPtr<'a>) {
let contra = self.contravariant(variance);
- for &input in &sig.0.inputs {
+ for &input in sig.0.inputs() {
self.add_constraints_from_ty(generics, input, contra);
}
- self.add_constraints_from_ty(generics, sig.0.output, variance);
+ self.add_constraints_from_ty(generics, sig.0.output(), variance);
}
/// Adds constraints appropriate for a region appearing in a
cx.tcx.sess.cstore.fn_arg_names(did).into_iter()
}.peekable();
FnDecl {
- output: Return(sig.0.output.clean(cx)),
+ output: Return(sig.skip_binder().output().clean(cx)),
attrs: Attributes::default(),
- variadic: sig.0.variadic,
+ variadic: sig.skip_binder().variadic,
inputs: Arguments {
- values: sig.0.inputs.iter().map(|t| {
+ values: sig.skip_binder().inputs().iter().map(|t| {
Argument {
type_: t.clean(cx),
id: ast::CRATE_NODE_ID,
use std::collections::BTreeMap;
use std::default::Default;
use std::error;
-use std::fmt::{self, Display, Formatter};
+use std::fmt::{self, Display, Formatter, Write as FmtWrite};
use std::fs::{self, File, OpenOptions};
use std::io::prelude::*;
use std::io::{self, BufWriter, BufReader};
// Update the search index
let dst = cx.dst.join("search-index.js");
- let all_indexes = try_err!(collect(&dst, &krate.name, "searchIndex"), &dst);
+ let mut all_indexes = try_err!(collect(&dst, &krate.name, "searchIndex"), &dst);
+ all_indexes.push(search_index);
+ // Sort the indexes by crate so the file will be generated identically even
+ // with rustdoc running in parallel.
+ all_indexes.sort();
let mut w = try_err!(File::create(&dst), &dst);
try_err!(writeln!(&mut w, "var searchIndex = {{}};"), &dst);
- try_err!(writeln!(&mut w, "{}", search_index), &dst);
for index in &all_indexes {
try_err!(writeln!(&mut w, "{}", *index), &dst);
}
// Update the list of all implementors for traits
let dst = cx.dst.join("implementors");
- try_err!(mkdir(&dst), &dst);
for (&did, imps) in &cache.implementors {
// Private modules can leak through to this phase of rustdoc, which
// could contain implementations for otherwise private types. In some
}
};
+ let mut implementors = format!(r#"implementors["{}"] = ["#, krate.name);
+ for imp in imps {
+ // If the trait and implementation are in the same crate, then
+ // there's no need to emit information about it (there's inlining
+ // going on). If they're in different crates then the crate defining
+ // the trait will be interested in our implementation.
+ if imp.def_id.krate == did.krate { continue }
+ write!(implementors, r#""{}","#, imp.impl_).unwrap();
+ }
+ implementors.push_str("];");
+
let mut mydst = dst.clone();
for part in &remote_path[..remote_path.len() - 1] {
mydst.push(part);
- try_err!(mkdir(&mydst), &mydst);
}
+ try_err!(fs::create_dir_all(&mydst), &mydst);
mydst.push(&format!("{}.{}.js",
remote_item_type.css_class(),
remote_path[remote_path.len() - 1]));
- let all_implementors = try_err!(collect(&mydst, &krate.name,
- "implementors"),
- &mydst);
- try_err!(mkdir(mydst.parent().unwrap()),
- &mydst.parent().unwrap().to_path_buf());
- let mut f = BufWriter::new(try_err!(File::create(&mydst), &mydst));
- try_err!(writeln!(&mut f, "(function() {{var implementors = {{}};"), &mydst);
+ let mut all_implementors = try_err!(collect(&mydst, &krate.name, "implementors"), &mydst);
+ all_implementors.push(implementors);
+ // Sort the implementors by crate so the file will be generated
+ // identically even with rustdoc running in parallel.
+ all_implementors.sort();
+ let mut f = try_err!(File::create(&mydst), &mydst);
+ try_err!(writeln!(&mut f, "(function() {{var implementors = {{}};"), &mydst);
for implementor in &all_implementors {
- try_err!(write!(&mut f, "{}", *implementor), &mydst);
- }
-
- try_err!(write!(&mut f, r#"implementors["{}"] = ["#, krate.name), &mydst);
- for imp in imps {
- // If the trait and implementation are in the same crate, then
- // there's no need to emit information about it (there's inlining
- // going on). If they're in different crates then the crate defining
- // the trait will be interested in our implementation.
- if imp.def_id.krate == did.krate { continue }
- try_err!(write!(&mut f, r#""{}","#, imp.impl_), &mydst);
+ try_err!(writeln!(&mut f, "{}", *implementor), &mydst);
}
- try_err!(writeln!(&mut f, r"];"), &mydst);
try_err!(writeln!(&mut f, "{}", r"
if (window.register_implementors) {
window.register_implementors(implementors);
S: BuildHasher + Default
{
fn from_iter<T: IntoIterator<Item = (K, V)>>(iter: T) -> HashMap<K, V, S> {
- let iterator = iter.into_iter();
- let lower = iterator.size_hint().0;
- let mut map = HashMap::with_capacity_and_hasher(lower, Default::default());
- map.extend(iterator);
+ let mut map = HashMap::with_hasher(Default::default());
+ map.extend(iter);
map
}
}
S: BuildHasher
{
fn extend<T: IntoIterator<Item = (K, V)>>(&mut self, iter: T) {
+ // Keys may already be present or may appear multiple times in the iterator.
+ // Reserve the entire hint lower bound if the map is empty.
+ // Otherwise reserve half the hint (rounded up), so the map
+ // will only resize twice in the worst case.
+ let iter = iter.into_iter();
+ let reserve = if self.is_empty() {
+ iter.size_hint().0
+ } else {
+ (iter.size_hint().0 + 1) / 2
+ };
+ self.reserve(reserve);
for (k, v) in iter {
self.insert(k, v);
}
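The reserve heuristic described in the comment above can be sketched as a small standalone function (the name `reserve_hint` is ours, not part of std):

```rust
// Mirror of the heuristic in the hunk above: reserve the full size-hint
// lower bound into an empty map, otherwise half of it (rounded up), so
// the map resizes at most twice in the worst case.
fn reserve_hint(map_is_empty: bool, size_hint_lower: usize) -> usize {
    if map_is_empty {
        size_hint_lower
    } else {
        (size_hint_lower + 1) / 2
    }
}

fn main() {
    assert_eq!(reserve_hint(true, 8), 8);  // empty map: trust the hint fully
    assert_eq!(reserve_hint(false, 8), 4); // non-empty: half the hint
    assert_eq!(reserve_hint(false, 7), 4); // rounded up
}
```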
#[stable(feature = "hashmap_default_hasher", since = "1.13.0")]
impl Default for DefaultHasher {
+ /// Creates a new `DefaultHasher` using [`DefaultHasher::new`]. See its
+ /// documentation for more information.
+ ///
+ /// [`DefaultHasher::new`]: #method.new
fn default() -> DefaultHasher {
DefaultHasher::new()
}
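Since `default` simply delegates to `new`, hashers obtained either way behave identically: feeding both the same input yields the same hash. A quick check against the stable `DefaultHasher` API:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn main() {
    // `DefaultHasher::default()` just calls `DefaultHasher::new()`, so the
    // two hashers start in the same state and agree on every input.
    let mut a = DefaultHasher::new();
    let mut b = DefaultHasher::default();
    "hello".hash(&mut a);
    "hello".hash(&mut b);
    assert_eq!(a.finish(), b.finish());
}
```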
S: BuildHasher + Default
{
fn from_iter<I: IntoIterator<Item = T>>(iter: I) -> HashSet<T, S> {
- let iterator = iter.into_iter();
- let lower = iterator.size_hint().0;
- let mut set = HashSet::with_capacity_and_hasher(lower, Default::default());
- set.extend(iterator);
+ let mut set = HashSet::with_hasher(Default::default());
+ set.extend(iter);
set
}
}
S: BuildHasher
{
fn extend<I: IntoIterator<Item = T>>(&mut self, iter: I) {
- for k in iter {
- self.insert(k);
- }
+ self.map.extend(iter.into_iter().map(|k| (k, ())));
}
}
#[stable(feature = "env", since = "1.0.0")]
impl ExactSizeIterator for Args {
fn len(&self) -> usize { self.inner.len() }
+ fn is_empty(&self) -> bool { self.inner.is_empty() }
}
#[stable(feature = "env_iterators", since = "1.11.0")]
#[stable(feature = "env", since = "1.0.0")]
impl ExactSizeIterator for ArgsOs {
fn len(&self) -> usize { self.inner.len() }
+ fn is_empty(&self) -> bool { self.inner.is_empty() }
}
#[stable(feature = "env_iterators", since = "1.11.0")]
#![feature(core_float)]
#![feature(core_intrinsics)]
#![feature(dropck_parametricity)]
+#![feature(exact_size_is_empty)]
#![feature(float_extras)]
#![feature(float_from_str_radix)]
#![feature(fn_traits)]
}
/// A struct providing information about a panic.
+///
+/// A `PanicInfo` structure is passed to a panic hook set by the [`set_hook()`]
+/// function.
+///
+/// [`set_hook()`]: ../../std/panic/fn.set_hook.html
+///
+/// # Examples
+///
+/// ```should_panic
+/// use std::panic;
+///
+/// panic::set_hook(Box::new(|panic_info| {
+/// println!("panic occurred: {:?}", panic_info.payload().downcast_ref::<&str>().unwrap());
+/// }));
+///
+/// panic!("Normal panic");
+/// ```
#[stable(feature = "panic_hooks", since = "1.10.0")]
pub struct PanicInfo<'a> {
payload: &'a (Any + Send),
impl<'a> PanicInfo<'a> {
/// Returns the payload associated with the panic.
///
- /// This will commonly, but not always, be a `&'static str` or `String`.
+ /// This will commonly, but not always, be a `&'static str` or [`String`].
+ ///
+ /// [`String`]: ../../std/string/struct.String.html
+ ///
+ /// # Examples
+ ///
+ /// ```should_panic
+ /// use std::panic;
+ ///
+ /// panic::set_hook(Box::new(|panic_info| {
+ /// println!("panic occurred: {:?}", panic_info.payload().downcast_ref::<&str>().unwrap());
+ /// }));
+ ///
+ /// panic!("Normal panic");
+ /// ```
#[stable(feature = "panic_hooks", since = "1.10.0")]
pub fn payload(&self) -> &(Any + Send) {
self.payload
/// Returns information about the location from which the panic originated,
/// if available.
///
- /// This method will currently always return `Some`, but this may change
+ /// This method will currently always return [`Some`], but this may change
/// in future versions.
+ ///
+ /// [`Some`]: ../../std/option/enum.Option.html#variant.Some
+ ///
+ /// # Examples
+ ///
+ /// ```should_panic
+ /// use std::panic;
+ ///
+ /// panic::set_hook(Box::new(|panic_info| {
+ /// if let Some(location) = panic_info.location() {
+ /// println!("panic occurred in file '{}' at line {}", location.file(), location.line());
+ /// } else {
+ /// println!("panic occurred but can't get location information...");
+ /// }
+ /// }));
+ ///
+ /// panic!("Normal panic");
+ /// ```
#[stable(feature = "panic_hooks", since = "1.10.0")]
pub fn location(&self) -> Option<&Location> {
Some(&self.location)
}
/// A struct containing information about the location of a panic.
+///
+/// This structure is created by the [`location()`] method of [`PanicInfo`].
+///
+/// [`location()`]: ../../std/panic/struct.PanicInfo.html#method.location
+/// [`PanicInfo`]: ../../std/panic/struct.PanicInfo.html
+///
+/// # Examples
+///
+/// ```should_panic
+/// use std::panic;
+///
+/// panic::set_hook(Box::new(|panic_info| {
+/// if let Some(location) = panic_info.location() {
+/// println!("panic occurred in file '{}' at line {}", location.file(), location.line());
+/// } else {
+/// println!("panic occurred but can't get location information...");
+/// }
+/// }));
+///
+/// panic!("Normal panic");
+/// ```
#[stable(feature = "panic_hooks", since = "1.10.0")]
pub struct Location<'a> {
file: &'a str,
impl<'a> Location<'a> {
/// Returns the name of the source file from which the panic originated.
+ ///
+ /// # Examples
+ ///
+ /// ```should_panic
+ /// use std::panic;
+ ///
+ /// panic::set_hook(Box::new(|panic_info| {
+ /// if let Some(location) = panic_info.location() {
+ /// println!("panic occurred in file '{}'", location.file());
+ /// } else {
+ /// println!("panic occurred but can't get location information...");
+ /// }
+ /// }));
+ ///
+ /// panic!("Normal panic");
+ /// ```
#[stable(feature = "panic_hooks", since = "1.10.0")]
pub fn file(&self) -> &str {
self.file
}
/// Returns the line number from which the panic originated.
+ ///
+ /// # Examples
+ ///
+ /// ```should_panic
+ /// use std::panic;
+ ///
+ /// panic::set_hook(Box::new(|panic_info| {
+ /// if let Some(location) = panic_info.location() {
+ /// println!("panic occurred at line {}", location.line());
+ /// } else {
+ /// println!("panic occurred but can't get location information...");
+ /// }
+ /// }));
+ ///
+ /// panic!("Normal panic");
+ /// ```
#[stable(feature = "panic_hooks", since = "1.10.0")]
pub fn line(&self) -> u32 {
self.line
/// will be run. If a clean shutdown is needed it is recommended to only call
/// this function at a known point where there are no more destructors left
/// to run.
+///
+/// # Examples
+///
+/// ```
+/// use std::process;
+///
+/// process::exit(0);
+/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn exit(code: i32) -> ! {
::sys_common::cleanup();
Ok(_) => panic!(),
}
}
+
+ /// Test that process creation flags work by debugging a process.
+ /// Other creation flags make it hard or impossible to detect
+ /// behavioral changes in the process.
+ #[test]
+ #[cfg(windows)]
+ fn test_creation_flags() {
+ use os::windows::process::CommandExt;
+ use sys::c::{BOOL, DWORD, INFINITE};
+ #[repr(C, packed)]
+ struct DEBUG_EVENT {
+ pub event_code: DWORD,
+ pub process_id: DWORD,
+ pub thread_id: DWORD,
+ // This is a union in the real struct, but we don't
+ // need this data for the purposes of this test.
+ pub _junk: [u8; 164],
+ }
+
+ extern "system" {
+ fn WaitForDebugEvent(lpDebugEvent: *mut DEBUG_EVENT, dwMilliseconds: DWORD) -> BOOL;
+ fn ContinueDebugEvent(dwProcessId: DWORD, dwThreadId: DWORD,
+ dwContinueStatus: DWORD) -> BOOL;
+ }
+
+ const DEBUG_PROCESS: DWORD = 1;
+ const EXIT_PROCESS_DEBUG_EVENT: DWORD = 5;
+ const DBG_EXCEPTION_NOT_HANDLED: DWORD = 0x80010001;
+
+ let mut child = Command::new("cmd")
+ .creation_flags(DEBUG_PROCESS)
+ .stdin(Stdio::piped()).spawn().unwrap();
+ child.stdin.take().unwrap().write_all(b"exit\r\n").unwrap();
+ let mut events = 0;
+ let mut event = DEBUG_EVENT {
+ event_code: 0,
+ process_id: 0,
+ thread_id: 0,
+ _junk: [0; 164],
+ };
+ loop {
+ if unsafe { WaitForDebugEvent(&mut event as *mut DEBUG_EVENT, INFINITE) } == 0 {
+ panic!("WaitForDebugEvent failed!");
+ }
+ events += 1;
+
+ if event.event_code == EXIT_PROCESS_DEBUG_EVENT {
+ break;
+ }
+
+ if unsafe { ContinueDebugEvent(event.process_id,
+ event.thread_id,
+ DBG_EXCEPTION_NOT_HANDLED) } == 0 {
+ panic!("ContinueDebugEvent failed!");
+ }
+ }
+ assert!(events > 0);
+ }
}
extern {
fn sel_registerName(name: *const libc::c_uchar) -> Sel;
- fn objc_msgSend(obj: NsId, sel: Sel, ...) -> NsId;
fn objc_getClass(class_name: *const libc::c_uchar) -> NsId;
}
+ #[cfg(target_arch="aarch64")]
+ extern {
+ fn objc_msgSend(obj: NsId, sel: Sel) -> NsId;
+ #[link_name="objc_msgSend"]
+ fn objc_msgSend_ul(obj: NsId, sel: Sel, i: libc::c_ulong) -> NsId;
+ }
+
+ #[cfg(not(target_arch="aarch64"))]
+ extern {
+ fn objc_msgSend(obj: NsId, sel: Sel, ...) -> NsId;
+ #[link_name="objc_msgSend"]
+ fn objc_msgSend_ul(obj: NsId, sel: Sel, ...) -> NsId;
+ }
+
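The aarch64 split above is needed because on that target variadic and fixed-arity C functions use different calling conventions, so `objc_msgSend` cannot safely be called through a `...` declaration; instead, `#[link_name]` binds a second Rust name with a concrete signature to the same C symbol. A minimal runnable sketch of that aliasing trick, using libc's `strlen` rather than the Objective-C runtime:

```rust
use std::os::raw::c_char;

extern "C" {
    // The canonical declaration of the C symbol.
    fn strlen(s: *const c_char) -> usize;
    // A second Rust name bound to the *same* symbol via #[link_name],
    // mirroring how objc_msgSend_ul gets a fixed-arity signature.
    #[link_name = "strlen"]
    fn strlen_alias(s: *const c_char) -> usize;
}

fn main() {
    let s = b"hello\0";
    unsafe {
        assert_eq!(strlen(s.as_ptr() as *const c_char), 5);
        assert_eq!(strlen_alias(s.as_ptr() as *const c_char), 5);
    }
}
```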
#[link(name = "Foundation", kind = "framework")]
#[link(name = "objc")]
#[cfg(not(cargobuild))]
let cnt: usize = mem::transmute(objc_msgSend(args, count_sel));
for i in 0..cnt {
- let tmp = objc_msgSend(args, object_at_sel, i);
+ let tmp = objc_msgSend_ul(args, object_at_sel, i as libc::c_ulong);
let utf_c_str: *const libc::c_char =
mem::transmute(objc_msgSend(tmp, utf8_sel));
let bytes = CStr::from_ptr(utf_c_str).to_bytes();
/// descriptor.
#[stable(feature = "from_raw_os", since = "1.1.0")]
pub trait FromRawFd {
- /// Constructs a new instances of `Self` from the given raw file
+ /// Constructs a new instance of `Self` from the given raw file
/// descriptor.
///
/// This function **consumes ownership** of the specified file
use os::windows::io::{FromRawHandle, RawHandle, AsRawHandle, IntoRawHandle};
use process;
use sys;
-use sys_common::{AsInner, FromInner, IntoInner};
+use sys_common::{AsInnerMut, AsInner, FromInner, IntoInner};
#[stable(feature = "process_extensions", since = "1.2.0")]
impl FromRawHandle for process::Stdio {
process::ExitStatus::from_inner(From::from(raw))
}
}
+
+/// Windows-specific extensions to the `std::process::Command` builder.
+#[unstable(feature = "windows_process_extensions", issue = "37827")]
+pub trait CommandExt {
+ /// Sets the [process creation flags][1] to be passed to `CreateProcess`.
+ ///
+ /// These will always be ORed with `CREATE_UNICODE_ENVIRONMENT`.
+ ///
+ /// [1]: https://msdn.microsoft.com/en-us/library/windows/desktop/ms684863(v=vs.85).aspx
+ #[unstable(feature = "windows_process_extensions", issue = "37827")]
+ fn creation_flags(&mut self, flags: u32) -> &mut process::Command;
+}
+
+#[unstable(feature = "windows_process_extensions", issue = "37827")]
+impl CommandExt for process::Command {
+ fn creation_flags(&mut self, flags: u32) -> &mut process::Command {
+ self.as_inner_mut().creation_flags(flags);
+ self
+ }
+}
args: Vec<OsString>,
env: Option<HashMap<OsString, OsString>>,
cwd: Option<OsString>,
+ flags: u32,
detach: bool, // not currently exposed in std::process
stdin: Option<Stdio>,
stdout: Option<Stdio>,
args: Vec::new(),
env: None,
cwd: None,
+ flags: 0,
detach: false,
stdin: None,
stdout: None,
pub fn stderr(&mut self, stderr: Stdio) {
self.stderr = Some(stderr);
}
+ pub fn creation_flags(&mut self, flags: u32) {
+ self.flags = flags;
+ }
pub fn spawn(&mut self, default: Stdio, needs_stdin: bool)
-> io::Result<(Process, StdioPipes)> {
cmd_str.push(0); // add null terminator
// stolen from the libuv code.
- let mut flags = c::CREATE_UNICODE_ENVIRONMENT;
+ let mut flags = self.flags | c::CREATE_UNICODE_ENVIRONMENT;
if self.detach {
flags |= c::DETACHED_PROCESS | c::CREATE_NEW_PROCESS_GROUP;
}
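The flag computation above can be sketched cross-platform; the constants below use the standard `winbase.h` values, reproduced here only for illustration:

```rust
// Values mirror the Windows constants used by spawn (winbase.h).
const CREATE_UNICODE_ENVIRONMENT: u32 = 0x0000_0400;
const DEBUG_PROCESS: u32 = 0x0000_0001;
const DETACHED_PROCESS: u32 = 0x0000_0008;
const CREATE_NEW_PROCESS_GROUP: u32 = 0x0000_0200;

/// Caller-supplied creation flags are always ORed with
/// CREATE_UNICODE_ENVIRONMENT; detaching adds two more bits.
fn effective_flags(user_flags: u32, detach: bool) -> u32 {
    let mut flags = user_flags | CREATE_UNICODE_ENVIRONMENT;
    if detach {
        flags |= DETACHED_PROCESS | CREATE_NEW_PROCESS_GROUP;
    }
    flags
}

fn main() {
    // A caller passing DEBUG_PROCESS still gets the Unicode-environment bit.
    assert_eq!(effective_flags(DEBUG_PROCESS, false), 0x401);
    assert_eq!(effective_flags(0, true), 0x608);
}
```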
}
}
- pub fn visit_with<V: Visitor>(&self, visitor: &mut V) {
+ pub fn visit_with<'a, V: Visitor<'a>>(&'a self, visitor: &mut V) {
match *self {
Expansion::OptExpr(Some(ref expr)) => visitor.visit_expr(expr),
Expansion::OptExpr(None) => {}
(active, allocator, "1.0.0", Some(27389)),
(active, fundamental, "1.0.0", Some(29635)),
- (active, linked_from, "1.3.0", Some(29629)),
(active, main, "1.0.0", Some(29634)),
(active, needs_allocator, "1.4.0", Some(27389)),
(active, on_unimplemented, "1.0.0", Some(29628)),
// rustc internal
(active, staged_api, "1.0.0", None),
- // Allows using items which are missing stability attributes
- // rustc internal
- (active, unmarked_api, "1.0.0", None),
-
// Allows using #![no_core]
(active, no_core, "1.3.0", Some(29639)),
(removed, test_removed_feature, "1.0.0", None),
(removed, visible_private_types, "1.0.0", None),
(removed, unsafe_no_drop_flag, "1.0.0", None),
+ // Allows using items which are missing stability attributes
+ // rustc internal
+ (removed, unmarked_api, "1.0.0", None),
);
declare_features! (
is an experimental feature",
cfg_fn!(fundamental))),
- ("linked_from", Normal, Gated(Stability::Unstable,
- "linked_from",
- "the `#[linked_from]` attribute \
- is an experimental feature",
- cfg_fn!(linked_from))),
-
("proc_macro_derive", Normal, Gated(Stability::Unstable,
"proc_macro",
"the `#[proc_macro_derive]` attribute \
s.as_bytes().first().cloned().map_or(false, |b| b >= b'0' && b <= b'9')
}
-impl<'a> Visitor for PostExpansionVisitor<'a> {
+impl<'a> Visitor<'a> for PostExpansionVisitor<'a> {
fn visit_attribute(&mut self, attr: &ast::Attribute) {
if !self.context.cm.span_allows_unstable(attr.span) {
// check for gated attributes
}
}
- fn visit_item(&mut self, i: &ast::Item) {
+ fn visit_item(&mut self, i: &'a ast::Item) {
match i.node {
ast::ItemKind::ExternCrate(_) => {
if attr::contains_name(&i.attrs[..], "macro_reexport") {
visit::walk_item(self, i);
}
- fn visit_foreign_item(&mut self, i: &ast::ForeignItem) {
+ fn visit_foreign_item(&mut self, i: &'a ast::ForeignItem) {
let links_to_llvm = match attr::first_attr_value_str_by_name(&i.attrs, "link_name") {
Some(val) => val.as_str().starts_with("llvm."),
_ => false
visit::walk_foreign_item(self, i)
}
- fn visit_ty(&mut self, ty: &ast::Ty) {
+ fn visit_ty(&mut self, ty: &'a ast::Ty) {
match ty.node {
ast::TyKind::BareFn(ref bare_fn_ty) => {
self.check_abi(bare_fn_ty.abi, ty.span);
visit::walk_ty(self, ty)
}
- fn visit_fn_ret_ty(&mut self, ret_ty: &ast::FunctionRetTy) {
+ fn visit_fn_ret_ty(&mut self, ret_ty: &'a ast::FunctionRetTy) {
if let ast::FunctionRetTy::Ty(ref output_ty) = *ret_ty {
match output_ty.node {
ast::TyKind::Never => return,
}
}
- fn visit_expr(&mut self, e: &ast::Expr) {
+ fn visit_expr(&mut self, e: &'a ast::Expr) {
match e.node {
ast::ExprKind::Box(_) => {
gate_feature_post!(&self, box_syntax, e.span, EXPLAIN_BOX_SYNTAX);
visit::walk_expr(self, e);
}
- fn visit_pat(&mut self, pattern: &ast::Pat) {
+ fn visit_pat(&mut self, pattern: &'a ast::Pat) {
match pattern.node {
PatKind::Slice(_, Some(_), ref last) if !last.is_empty() => {
gate_feature_post!(&self, advanced_slice_patterns,
}
fn visit_fn(&mut self,
- fn_kind: FnKind,
- fn_decl: &ast::FnDecl,
+ fn_kind: FnKind<'a>,
+ fn_decl: &'a ast::FnDecl,
span: Span,
_node_id: NodeId) {
// check for const fn declarations
visit::walk_fn(self, fn_kind, fn_decl, span);
}
- fn visit_trait_item(&mut self, ti: &ast::TraitItem) {
+ fn visit_trait_item(&mut self, ti: &'a ast::TraitItem) {
match ti.node {
ast::TraitItemKind::Const(..) => {
gate_feature_post!(&self, associated_consts,
visit::walk_trait_item(self, ti);
}
- fn visit_impl_item(&mut self, ii: &ast::ImplItem) {
+ fn visit_impl_item(&mut self, ii: &'a ast::ImplItem) {
if ii.defaultness == ast::Defaultness::Default {
gate_feature_post!(&self, specialization,
ii.span,
visit::walk_impl_item(self, ii);
}
- fn visit_vis(&mut self, vis: &ast::Visibility) {
+ fn visit_vis(&mut self, vis: &'a ast::Visibility) {
let span = match *vis {
ast::Visibility::Crate(span) => span,
ast::Visibility::Restricted { ref path, .. } => path.span,
visit::walk_vis(self, vis)
}
- fn visit_generics(&mut self, g: &ast::Generics) {
+ fn visit_generics(&mut self, g: &'a ast::Generics) {
for t in &g.ty_params {
if !t.attrs.is_empty() {
gate_feature_post!(&self, generic_param_attrs, t.attrs[0].span,
visit::walk_generics(self, g)
}
- fn visit_lifetime_def(&mut self, lifetime_def: &ast::LifetimeDef) {
+ fn visit_lifetime_def(&mut self, lifetime_def: &'a ast::LifetimeDef) {
if !lifetime_def.attrs.is_empty() {
gate_feature_post!(&self, generic_param_attrs, lifetime_def.attrs[0].span,
"attributes on lifetime bindings are experimental");
struct PatIdentVisitor {
spans: Vec<Span>
}
- impl ::visit::Visitor for PatIdentVisitor {
- fn visit_pat(&mut self, p: &ast::Pat) {
+ impl<'a> ::visit::Visitor<'a> for PatIdentVisitor {
+ fn visit_pat(&mut self, p: &'a ast::Pat) {
match p.node {
PatKind::Ident(_ , ref spannedident, _) => {
self.spans.push(spannedident.span.clone());
mode: Mode,
}
-impl<'a> Visitor for ShowSpanVisitor<'a> {
- fn visit_expr(&mut self, e: &ast::Expr) {
+impl<'a> Visitor<'a> for ShowSpanVisitor<'a> {
+ fn visit_expr(&mut self, e: &'a ast::Expr) {
if let Mode::Expression = self.mode {
self.span_diagnostic.span_warn(e.span, "expression");
}
visit::walk_expr(self, e);
}
- fn visit_pat(&mut self, p: &ast::Pat) {
+ fn visit_pat(&mut self, p: &'a ast::Pat) {
if let Mode::Pattern = self.mode {
self.span_diagnostic.span_warn(p.span, "pattern");
}
visit::walk_pat(self, p);
}
- fn visit_ty(&mut self, t: &ast::Ty) {
+ fn visit_ty(&mut self, t: &'a ast::Ty) {
if let Mode::Type = self.mode {
self.span_diagnostic.span_warn(t.span, "type");
}
visit::walk_ty(self, t);
}
- fn visit_mac(&mut self, mac: &ast::Mac) {
+ fn visit_mac(&mut self, mac: &'a ast::Mac) {
visit::walk_mac(self, mac);
}
}
}
}
-impl Visitor for NodeCounter {
+impl<'ast> Visitor<'ast> for NodeCounter {
fn visit_ident(&mut self, span: Span, ident: Ident) {
self.count += 1;
walk_ident(self, span, ident);
/// explicitly, you need to override each method. (And you also need
/// to monitor future changes to `Visitor` in case a new method with a
/// new default implementation gets introduced.)
-pub trait Visitor: Sized {
+pub trait Visitor<'ast>: Sized {
fn visit_name(&mut self, _span: Span, _name: Name) {
// Nothing to do.
}
fn visit_ident(&mut self, span: Span, ident: Ident) {
walk_ident(self, span, ident);
}
- fn visit_mod(&mut self, m: &Mod, _s: Span, _n: NodeId) { walk_mod(self, m) }
- fn visit_foreign_item(&mut self, i: &ForeignItem) { walk_foreign_item(self, i) }
- fn visit_item(&mut self, i: &Item) { walk_item(self, i) }
- fn visit_local(&mut self, l: &Local) { walk_local(self, l) }
- fn visit_block(&mut self, b: &Block) { walk_block(self, b) }
- fn visit_stmt(&mut self, s: &Stmt) { walk_stmt(self, s) }
- fn visit_arm(&mut self, a: &Arm) { walk_arm(self, a) }
- fn visit_pat(&mut self, p: &Pat) { walk_pat(self, p) }
- fn visit_expr(&mut self, ex: &Expr) { walk_expr(self, ex) }
- fn visit_expr_post(&mut self, _ex: &Expr) { }
- fn visit_ty(&mut self, t: &Ty) { walk_ty(self, t) }
- fn visit_generics(&mut self, g: &Generics) { walk_generics(self, g) }
- fn visit_fn(&mut self, fk: FnKind, fd: &FnDecl, s: Span, _: NodeId) {
+ fn visit_mod(&mut self, m: &'ast Mod, _s: Span, _n: NodeId) { walk_mod(self, m) }
+ fn visit_foreign_item(&mut self, i: &'ast ForeignItem) { walk_foreign_item(self, i) }
+ fn visit_item(&mut self, i: &'ast Item) { walk_item(self, i) }
+ fn visit_local(&mut self, l: &'ast Local) { walk_local(self, l) }
+ fn visit_block(&mut self, b: &'ast Block) { walk_block(self, b) }
+ fn visit_stmt(&mut self, s: &'ast Stmt) { walk_stmt(self, s) }
+ fn visit_arm(&mut self, a: &'ast Arm) { walk_arm(self, a) }
+ fn visit_pat(&mut self, p: &'ast Pat) { walk_pat(self, p) }
+ fn visit_expr(&mut self, ex: &'ast Expr) { walk_expr(self, ex) }
+ fn visit_expr_post(&mut self, _ex: &'ast Expr) { }
+ fn visit_ty(&mut self, t: &'ast Ty) { walk_ty(self, t) }
+ fn visit_generics(&mut self, g: &'ast Generics) { walk_generics(self, g) }
+ fn visit_fn(&mut self, fk: FnKind<'ast>, fd: &'ast FnDecl, s: Span, _: NodeId) {
walk_fn(self, fk, fd, s)
}
- fn visit_trait_item(&mut self, ti: &TraitItem) { walk_trait_item(self, ti) }
- fn visit_impl_item(&mut self, ii: &ImplItem) { walk_impl_item(self, ii) }
- fn visit_trait_ref(&mut self, t: &TraitRef) { walk_trait_ref(self, t) }
- fn visit_ty_param_bound(&mut self, bounds: &TyParamBound) {
+ fn visit_trait_item(&mut self, ti: &'ast TraitItem) { walk_trait_item(self, ti) }
+ fn visit_impl_item(&mut self, ii: &'ast ImplItem) { walk_impl_item(self, ii) }
+ fn visit_trait_ref(&mut self, t: &'ast TraitRef) { walk_trait_ref(self, t) }
+ fn visit_ty_param_bound(&mut self, bounds: &'ast TyParamBound) {
walk_ty_param_bound(self, bounds)
}
- fn visit_poly_trait_ref(&mut self, t: &PolyTraitRef, m: &TraitBoundModifier) {
+ fn visit_poly_trait_ref(&mut self, t: &'ast PolyTraitRef, m: &'ast TraitBoundModifier) {
walk_poly_trait_ref(self, t, m)
}
- fn visit_variant_data(&mut self, s: &VariantData, _: Ident,
- _: &Generics, _: NodeId, _: Span) {
+ fn visit_variant_data(&mut self, s: &'ast VariantData, _: Ident,
+ _: &'ast Generics, _: NodeId, _: Span) {
walk_struct_def(self, s)
}
- fn visit_struct_field(&mut self, s: &StructField) { walk_struct_field(self, s) }
- fn visit_enum_def(&mut self, enum_definition: &EnumDef,
- generics: &Generics, item_id: NodeId, _: Span) {
+ fn visit_struct_field(&mut self, s: &'ast StructField) { walk_struct_field(self, s) }
+ fn visit_enum_def(&mut self, enum_definition: &'ast EnumDef,
+ generics: &'ast Generics, item_id: NodeId, _: Span) {
walk_enum_def(self, enum_definition, generics, item_id)
}
- fn visit_variant(&mut self, v: &Variant, g: &Generics, item_id: NodeId) {
+ fn visit_variant(&mut self, v: &'ast Variant, g: &'ast Generics, item_id: NodeId) {
walk_variant(self, v, g, item_id)
}
- fn visit_lifetime(&mut self, lifetime: &Lifetime) {
+ fn visit_lifetime(&mut self, lifetime: &'ast Lifetime) {
walk_lifetime(self, lifetime)
}
- fn visit_lifetime_def(&mut self, lifetime: &LifetimeDef) {
+ fn visit_lifetime_def(&mut self, lifetime: &'ast LifetimeDef) {
walk_lifetime_def(self, lifetime)
}
- fn visit_mac(&mut self, _mac: &Mac) {
+ fn visit_mac(&mut self, _mac: &'ast Mac) {
panic!("visit_mac disabled by default");
// NB: see note about macros above.
// if you really want a visitor that
// definition in your trait impl:
// visit::walk_mac(self, _mac)
}
- fn visit_path(&mut self, path: &Path, _id: NodeId) {
+ fn visit_path(&mut self, path: &'ast Path, _id: NodeId) {
walk_path(self, path)
}
- fn visit_path_list_item(&mut self, prefix: &Path, item: &PathListItem) {
+ fn visit_path_list_item(&mut self, prefix: &'ast Path, item: &'ast PathListItem) {
walk_path_list_item(self, prefix, item)
}
- fn visit_path_segment(&mut self, path_span: Span, path_segment: &PathSegment) {
+ fn visit_path_segment(&mut self, path_span: Span, path_segment: &'ast PathSegment) {
walk_path_segment(self, path_span, path_segment)
}
- fn visit_path_parameters(&mut self, path_span: Span, path_parameters: &PathParameters) {
+ fn visit_path_parameters(&mut self, path_span: Span, path_parameters: &'ast PathParameters) {
walk_path_parameters(self, path_span, path_parameters)
}
- fn visit_assoc_type_binding(&mut self, type_binding: &TypeBinding) {
+ fn visit_assoc_type_binding(&mut self, type_binding: &'ast TypeBinding) {
walk_assoc_type_binding(self, type_binding)
}
- fn visit_attribute(&mut self, _attr: &Attribute) {}
- fn visit_macro_def(&mut self, macro_def: &MacroDef) {
+ fn visit_attribute(&mut self, _attr: &'ast Attribute) {}
+ fn visit_macro_def(&mut self, macro_def: &'ast MacroDef) {
walk_macro_def(self, macro_def)
}
- fn visit_vis(&mut self, vis: &Visibility) {
+ fn visit_vis(&mut self, vis: &'ast Visibility) {
walk_vis(self, vis)
}
- fn visit_fn_ret_ty(&mut self, ret_ty: &FunctionRetTy) {
+ fn visit_fn_ret_ty(&mut self, ret_ty: &'ast FunctionRetTy) {
walk_fn_ret_ty(self, ret_ty)
}
}
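The point of threading `'ast` through the trait is that an implementor may now *retain* references into the tree it walks, not merely inspect them during each call. A self-contained sketch of the pattern (names are illustrative, not the real `syntax` types):

```rust
// A toy AST node standing in for the real ast::Expr.
struct Expr {
    name: String,
    children: Vec<Expr>,
}

trait Visitor<'ast>: Sized {
    fn visit_expr(&mut self, e: &'ast Expr) {
        walk_expr(self, e);
    }
}

fn walk_expr<'ast, V: Visitor<'ast>>(visitor: &mut V, e: &'ast Expr) {
    for child in &e.children {
        visitor.visit_expr(child);
    }
}

// Collecting borrowed &'ast str values is exactly what the
// un-parameterized `&Expr` signatures could not express.
struct NameCollector<'ast> {
    names: Vec<&'ast str>,
}

impl<'ast> Visitor<'ast> for NameCollector<'ast> {
    fn visit_expr(&mut self, e: &'ast Expr) {
        self.names.push(&e.name);
        walk_expr(self, e);
    }
}

fn main() {
    let tree = Expr {
        name: "root".to_string(),
        children: vec![
            Expr { name: "a".to_string(), children: vec![] },
            Expr { name: "b".to_string(), children: vec![] },
        ],
    };
    let mut v = NameCollector { names: vec![] };
    v.visit_expr(&tree);
    assert_eq!(v.names, ["root", "a", "b"]);
}
```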
}
}
-pub fn walk_opt_name<V: Visitor>(visitor: &mut V, span: Span, opt_name: Option<Name>) {
+pub fn walk_opt_name<'a, V: Visitor<'a>>(visitor: &mut V, span: Span, opt_name: Option<Name>) {
if let Some(name) = opt_name {
visitor.visit_name(span, name);
}
}
-pub fn walk_opt_ident<V: Visitor>(visitor: &mut V, span: Span, opt_ident: Option<Ident>) {
+pub fn walk_opt_ident<'a, V: Visitor<'a>>(visitor: &mut V, span: Span, opt_ident: Option<Ident>) {
if let Some(ident) = opt_ident {
visitor.visit_ident(span, ident);
}
}
-pub fn walk_opt_sp_ident<V: Visitor>(visitor: &mut V, opt_sp_ident: &Option<Spanned<Ident>>) {
+pub fn walk_opt_sp_ident<'a, V: Visitor<'a>>(visitor: &mut V,
+ opt_sp_ident: &Option<Spanned<Ident>>) {
if let Some(ref sp_ident) = *opt_sp_ident {
visitor.visit_ident(sp_ident.span, sp_ident.node);
}
}
-pub fn walk_ident<V: Visitor>(visitor: &mut V, span: Span, ident: Ident) {
+pub fn walk_ident<'a, V: Visitor<'a>>(visitor: &mut V, span: Span, ident: Ident) {
visitor.visit_name(span, ident.name);
}
-pub fn walk_crate<V: Visitor>(visitor: &mut V, krate: &Crate) {
+pub fn walk_crate<'a, V: Visitor<'a>>(visitor: &mut V, krate: &'a Crate) {
visitor.visit_mod(&krate.module, krate.span, CRATE_NODE_ID);
walk_list!(visitor, visit_attribute, &krate.attrs);
walk_list!(visitor, visit_macro_def, &krate.exported_macros);
}
-pub fn walk_macro_def<V: Visitor>(visitor: &mut V, macro_def: &MacroDef) {
+pub fn walk_macro_def<'a, V: Visitor<'a>>(visitor: &mut V, macro_def: &'a MacroDef) {
visitor.visit_ident(macro_def.span, macro_def.ident);
walk_opt_ident(visitor, macro_def.span, macro_def.imported_from);
walk_list!(visitor, visit_attribute, ¯o_def.attrs);
}
-pub fn walk_mod<V: Visitor>(visitor: &mut V, module: &Mod) {
+pub fn walk_mod<'a, V: Visitor<'a>>(visitor: &mut V, module: &'a Mod) {
walk_list!(visitor, visit_item, &module.items);
}
-pub fn walk_local<V: Visitor>(visitor: &mut V, local: &Local) {
+pub fn walk_local<'a, V: Visitor<'a>>(visitor: &mut V, local: &'a Local) {
for attr in local.attrs.iter() {
visitor.visit_attribute(attr);
}
walk_list!(visitor, visit_expr, &local.init);
}
-pub fn walk_lifetime<V: Visitor>(visitor: &mut V, lifetime: &Lifetime) {
+pub fn walk_lifetime<'a, V: Visitor<'a>>(visitor: &mut V, lifetime: &'a Lifetime) {
visitor.visit_name(lifetime.span, lifetime.name);
}
-pub fn walk_lifetime_def<V: Visitor>(visitor: &mut V, lifetime_def: &LifetimeDef) {
+pub fn walk_lifetime_def<'a, V: Visitor<'a>>(visitor: &mut V, lifetime_def: &'a LifetimeDef) {
visitor.visit_lifetime(&lifetime_def.lifetime);
walk_list!(visitor, visit_lifetime, &lifetime_def.bounds);
walk_list!(visitor, visit_attribute, &*lifetime_def.attrs);
}
-pub fn walk_poly_trait_ref<V>(visitor: &mut V, trait_ref: &PolyTraitRef, _: &TraitBoundModifier)
- where V: Visitor,
+pub fn walk_poly_trait_ref<'a, V>(visitor: &mut V,
+ trait_ref: &'a PolyTraitRef,
+ _: &TraitBoundModifier)
+ where V: Visitor<'a>,
{
walk_list!(visitor, visit_lifetime_def, &trait_ref.bound_lifetimes);
visitor.visit_trait_ref(&trait_ref.trait_ref);
}
-pub fn walk_trait_ref<V: Visitor>(visitor: &mut V, trait_ref: &TraitRef) {
+pub fn walk_trait_ref<'a, V: Visitor<'a>>(visitor: &mut V, trait_ref: &'a TraitRef) {
visitor.visit_path(&trait_ref.path, trait_ref.ref_id)
}
-pub fn walk_item<V: Visitor>(visitor: &mut V, item: &Item) {
+pub fn walk_item<'a, V: Visitor<'a>>(visitor: &mut V, item: &'a Item) {
visitor.visit_vis(&item.vis);
visitor.visit_ident(item.span, item.ident);
match item.node {
walk_list!(visitor, visit_attribute, &item.attrs);
}
-pub fn walk_enum_def<V: Visitor>(visitor: &mut V,
- enum_definition: &EnumDef,
- generics: &Generics,
+pub fn walk_enum_def<'a, V: Visitor<'a>>(visitor: &mut V,
+ enum_definition: &'a EnumDef,
+ generics: &'a Generics,
item_id: NodeId) {
walk_list!(visitor, visit_variant, &enum_definition.variants, generics, item_id);
}
-pub fn walk_variant<V>(visitor: &mut V, variant: &Variant, generics: &Generics, item_id: NodeId)
- where V: Visitor,
+pub fn walk_variant<'a, V>(visitor: &mut V,
+ variant: &'a Variant,
+ generics: &'a Generics,
+ item_id: NodeId)
+ where V: Visitor<'a>,
{
visitor.visit_ident(variant.span, variant.node.name);
visitor.visit_variant_data(&variant.node.data, variant.node.name,
walk_list!(visitor, visit_attribute, &variant.node.attrs);
}
-pub fn walk_ty<V: Visitor>(visitor: &mut V, typ: &Ty) {
+pub fn walk_ty<'a, V: Visitor<'a>>(visitor: &mut V, typ: &'a Ty) {
match typ.node {
TyKind::Slice(ref ty) | TyKind::Paren(ref ty) => {
visitor.visit_ty(ty)
}
}
-pub fn walk_path<V: Visitor>(visitor: &mut V, path: &Path) {
+pub fn walk_path<'a, V: Visitor<'a>>(visitor: &mut V, path: &'a Path) {
for segment in &path.segments {
visitor.visit_path_segment(path.span, segment);
}
}
-pub fn walk_path_list_item<V: Visitor>(visitor: &mut V, _prefix: &Path, item: &PathListItem) {
+pub fn walk_path_list_item<'a, V: Visitor<'a>>(visitor: &mut V,
+ _prefix: &Path,
+ item: &'a PathListItem) {
visitor.visit_ident(item.span, item.node.name);
walk_opt_ident(visitor, item.span, item.node.rename);
}
-pub fn walk_path_segment<V: Visitor>(visitor: &mut V, path_span: Span, segment: &PathSegment) {
+pub fn walk_path_segment<'a, V: Visitor<'a>>(visitor: &mut V,
+ path_span: Span,
+ segment: &'a PathSegment) {
visitor.visit_ident(path_span, segment.identifier);
visitor.visit_path_parameters(path_span, &segment.parameters);
}
-pub fn walk_path_parameters<V>(visitor: &mut V, _path_span: Span, path_parameters: &PathParameters)
- where V: Visitor,
+pub fn walk_path_parameters<'a, V>(visitor: &mut V,
+ _path_span: Span,
+ path_parameters: &'a PathParameters)
+ where V: Visitor<'a>,
{
match *path_parameters {
PathParameters::AngleBracketed(ref data) => {
}
}
-pub fn walk_assoc_type_binding<V: Visitor>(visitor: &mut V, type_binding: &TypeBinding) {
+pub fn walk_assoc_type_binding<'a, V: Visitor<'a>>(visitor: &mut V,
+ type_binding: &'a TypeBinding) {
visitor.visit_ident(type_binding.span, type_binding.ident);
visitor.visit_ty(&type_binding.ty);
}
-pub fn walk_pat<V: Visitor>(visitor: &mut V, pattern: &Pat) {
+pub fn walk_pat<'a, V: Visitor<'a>>(visitor: &mut V, pattern: &'a Pat) {
match pattern.node {
PatKind::TupleStruct(ref path, ref children, _) => {
visitor.visit_path(path, pattern.id);
}
}
-pub fn walk_foreign_item<V: Visitor>(visitor: &mut V, foreign_item: &ForeignItem) {
+pub fn walk_foreign_item<'a, V: Visitor<'a>>(visitor: &mut V, foreign_item: &'a ForeignItem) {
visitor.visit_vis(&foreign_item.vis);
visitor.visit_ident(foreign_item.span, foreign_item.ident);
walk_list!(visitor, visit_attribute, &foreign_item.attrs);
}
-pub fn walk_ty_param_bound<V: Visitor>(visitor: &mut V, bound: &TyParamBound) {
+pub fn walk_ty_param_bound<'a, V: Visitor<'a>>(visitor: &mut V, bound: &'a TyParamBound) {
match *bound {
TraitTyParamBound(ref typ, ref modifier) => {
visitor.visit_poly_trait_ref(typ, modifier);
}
}
-pub fn walk_generics<V: Visitor>(visitor: &mut V, generics: &Generics) {
+pub fn walk_generics<'a, V: Visitor<'a>>(visitor: &mut V, generics: &'a Generics) {
for param in &generics.ty_params {
visitor.visit_ident(param.span, param.ident);
walk_list!(visitor, visit_ty_param_bound, ¶m.bounds);
}
}
-pub fn walk_fn_ret_ty<V: Visitor>(visitor: &mut V, ret_ty: &FunctionRetTy) {
+pub fn walk_fn_ret_ty<'a, V: Visitor<'a>>(visitor: &mut V, ret_ty: &'a FunctionRetTy) {
if let FunctionRetTy::Ty(ref output_ty) = *ret_ty {
visitor.visit_ty(output_ty)
}
}
-pub fn walk_fn_decl<V: Visitor>(visitor: &mut V, function_declaration: &FnDecl) {
+pub fn walk_fn_decl<'a, V: Visitor<'a>>(visitor: &mut V, function_declaration: &'a FnDecl) {
for argument in &function_declaration.inputs {
visitor.visit_pat(&argument.pat);
visitor.visit_ty(&argument.ty)
visitor.visit_fn_ret_ty(&function_declaration.output)
}
-pub fn walk_fn<V>(visitor: &mut V, kind: FnKind, declaration: &FnDecl, _span: Span)
- where V: Visitor,
+pub fn walk_fn<'a, V>(visitor: &mut V, kind: FnKind<'a>, declaration: &'a FnDecl, _span: Span)
+ where V: Visitor<'a>,
{
match kind {
FnKind::ItemFn(_, generics, _, _, _, _, body) => {
}
}
-pub fn walk_trait_item<V: Visitor>(visitor: &mut V, trait_item: &TraitItem) {
+pub fn walk_trait_item<'a, V: Visitor<'a>>(visitor: &mut V, trait_item: &'a TraitItem) {
visitor.visit_ident(trait_item.span, trait_item.ident);
walk_list!(visitor, visit_attribute, &trait_item.attrs);
match trait_item.node {
}
}
-pub fn walk_impl_item<V: Visitor>(visitor: &mut V, impl_item: &ImplItem) {
+pub fn walk_impl_item<'a, V: Visitor<'a>>(visitor: &mut V, impl_item: &'a ImplItem) {
visitor.visit_vis(&impl_item.vis);
visitor.visit_ident(impl_item.span, impl_item.ident);
walk_list!(visitor, visit_attribute, &impl_item.attrs);
}
}
-pub fn walk_struct_def<V: Visitor>(visitor: &mut V, struct_definition: &VariantData) {
+pub fn walk_struct_def<'a, V: Visitor<'a>>(visitor: &mut V, struct_definition: &'a VariantData) {
walk_list!(visitor, visit_struct_field, struct_definition.fields());
}
-pub fn walk_struct_field<V: Visitor>(visitor: &mut V, struct_field: &StructField) {
+pub fn walk_struct_field<'a, V: Visitor<'a>>(visitor: &mut V, struct_field: &'a StructField) {
visitor.visit_vis(&struct_field.vis);
walk_opt_ident(visitor, struct_field.span, struct_field.ident);
visitor.visit_ty(&struct_field.ty);
walk_list!(visitor, visit_attribute, &struct_field.attrs);
}
-pub fn walk_block<V: Visitor>(visitor: &mut V, block: &Block) {
+pub fn walk_block<'a, V: Visitor<'a>>(visitor: &mut V, block: &'a Block) {
walk_list!(visitor, visit_stmt, &block.stmts);
}
-pub fn walk_stmt<V: Visitor>(visitor: &mut V, statement: &Stmt) {
+pub fn walk_stmt<'a, V: Visitor<'a>>(visitor: &mut V, statement: &'a Stmt) {
match statement.node {
StmtKind::Local(ref local) => visitor.visit_local(local),
StmtKind::Item(ref item) => visitor.visit_item(item),
}
}
-pub fn walk_mac<V: Visitor>(_: &mut V, _: &Mac) {
+pub fn walk_mac<'a, V: Visitor<'a>>(_: &mut V, _: &Mac) {
// Empty!
}
-pub fn walk_expr<V: Visitor>(visitor: &mut V, expression: &Expr) {
+pub fn walk_expr<'a, V: Visitor<'a>>(visitor: &mut V, expression: &'a Expr) {
for attr in expression.attrs.iter() {
visitor.visit_attribute(attr);
}
visitor.visit_expr_post(expression)
}
-pub fn walk_arm<V: Visitor>(visitor: &mut V, arm: &Arm) {
+pub fn walk_arm<'a, V: Visitor<'a>>(visitor: &mut V, arm: &'a Arm) {
walk_list!(visitor, visit_pat, &arm.pats);
walk_list!(visitor, visit_expr, &arm.guard);
visitor.visit_expr(&arm.body);
walk_list!(visitor, visit_attribute, &arm.attrs);
}
-pub fn walk_vis<V: Visitor>(visitor: &mut V, vis: &Visibility) {
+pub fn walk_vis<'a, V: Visitor<'a>>(visitor: &mut V, vis: &'a Visibility) {
if let Visibility::Restricted { ref path, id } = *vis {
visitor.visit_path(path, id);
}
struct MarkAttrs<'a>(&'a [ast::Name]);
-impl<'a> Visitor for MarkAttrs<'a> {
+impl<'a> Visitor<'a> for MarkAttrs<'a> {
fn visit_attribute(&mut self, attr: &Attribute) {
if self.0.contains(&attr.name()) {
mark_used(attr);
res
}
}
-
types: Vec<P<ast::Ty>>,
}
- impl<'a, 'b> visit::Visitor for Visitor<'a, 'b> {
- fn visit_ty(&mut self, ty: &ast::Ty) {
+ impl<'a, 'b> visit::Visitor<'a> for Visitor<'a, 'b> {
+ fn visit_ty(&mut self, ty: &'a ast::Ty) {
match ty.node {
ast::TyKind::Path(_, ref path) if !path.global => {
if let Some(segment) = path.segments.first() {
let ecfg = ExpansionConfig::default("proc_macro".to_string());
let mut cx = ExtCtxt::new(sess, ecfg, resolver);
- let mut collect = CollectCustomDerives {
- derives: Vec::new(),
- in_root: true,
- handler: handler,
- is_proc_macro_crate: is_proc_macro_crate,
- is_test_crate: is_test_crate,
+ let derives = {
+ let mut collect = CollectCustomDerives {
+ derives: Vec::new(),
+ in_root: true,
+ handler: handler,
+ is_proc_macro_crate: is_proc_macro_crate,
+ is_test_crate: is_test_crate,
+ };
+ visit::walk_crate(&mut collect, &krate);
+ collect.derives
};
- visit::walk_crate(&mut collect, &krate);
if !is_proc_macro_crate {
return krate
handler.err("cannot mix `proc-macro` crate type with others");
}
- krate.module.items.push(mk_registrar(&mut cx, &collect.derives));
+ if is_test_crate {
+ return krate;
+ }
+
+ krate.module.items.push(mk_registrar(&mut cx, &derives));
if krate.exported_macros.len() > 0 {
handler.err("cannot export macro_rules! macros from a `proc-macro` \
}
}
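The `let derives = { ... }` refactor above is the classic scoping pattern for ending a borrow early: the collector borrows `krate` only inside the block, and the owned result escapes, leaving `krate` free to be moved or mutated afterwards. A minimal sketch:

```rust
fn main() {
    let mut items = vec![1, 2, 3];
    let doubled = {
        let borrowed = &items; // borrow starts...
        borrowed.iter().map(|x| x * 2).collect::<Vec<_>>()
    }; // ...and ends with the block, while the owned Vec escapes
    items.push(4); // mutation is legal again
    assert_eq!(doubled, [2, 4, 6]);
    assert_eq!(items, [1, 2, 3, 4]);
}
```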
-impl<'a> Visitor for CollectCustomDerives<'a> {
- fn visit_item(&mut self, item: &ast::Item) {
+impl<'a> Visitor<'a> for CollectCustomDerives<'a> {
+ fn visit_item(&mut self, item: &'a ast::Item) {
// First up, make sure we're checking a bare function. If we're not then
// we're just not interested in this item.
//
}
if self.is_test_crate {
- self.handler.span_err(attr.span(),
- "`--test` cannot be used with proc-macro crates");
return;
}
visit::walk_item(self, item);
}
- fn visit_mod(&mut self, m: &ast::Mod, _s: Span, id: NodeId) {
+ fn visit_mod(&mut self, m: &'a ast::Mod, _s: Span, id: NodeId) {
let mut prev_in_root = self.in_root;
if id != ast::CRATE_NODE_ID {
prev_in_root = mem::replace(&mut self.in_root, false);
-Subproject commit c1d962263bf76a10bea0c761621fcd98d6214b2e
+Subproject commit 3ec14daffb4b8c0604df50b7fb0ab552f456e381
extern "C" LLVMContextRef LLVMRustGetValueContext(LLVMValueRef V) {
return wrap(&unwrap(V)->getContext());
}
+
+enum class LLVMRustVisibility {
+ Default = 0,
+ Hidden = 1,
+ Protected = 2,
+};
+
+static LLVMRustVisibility to_rust(LLVMVisibility vis) {
+ switch (vis) {
+ case LLVMDefaultVisibility:
+ return LLVMRustVisibility::Default;
+ case LLVMHiddenVisibility:
+ return LLVMRustVisibility::Hidden;
+ case LLVMProtectedVisibility:
+ return LLVMRustVisibility::Protected;
+
+ default:
+ llvm_unreachable("Invalid LLVMVisibility value!");
+ }
+}
+
+static LLVMVisibility from_rust(LLVMRustVisibility vis) {
+ switch (vis) {
+ case LLVMRustVisibility::Default:
+ return LLVMDefaultVisibility;
+ case LLVMRustVisibility::Hidden:
+ return LLVMHiddenVisibility;
+ case LLVMRustVisibility::Protected:
+ return LLVMProtectedVisibility;
+
+ default:
+ llvm_unreachable("Invalid LLVMRustVisibility value!");
+ }
+}
+
+extern "C" LLVMRustVisibility LLVMRustGetVisibility(LLVMValueRef V) {
+ return to_rust(LLVMGetVisibility(V));
+}
+
+extern "C" void LLVMRustSetVisibility(LLVMValueRef V, LLVMRustVisibility RustVisibility) {
+ LLVMSetVisibility(V, from_rust(RustVisibility));
+}
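`LLVMRustVisibility` pins explicit discriminants precisely so a Rust-side mirror with `#[repr(C)]` shares the same ABI values; LLVM's own `LLVMVisibility` enum is not guaranteed stable across versions, hence the translation layer. The Rust counterpart would look roughly like this (the name `Visibility` is assumed for illustration):

```rust
/// Rust-side mirror of the C++ LLVMRustVisibility enum; the explicit
/// discriminants must stay in sync with the C++ definition.
#[repr(C)]
#[derive(Copy, Clone, Debug, PartialEq)]
enum Visibility {
    Default = 0,
    Hidden = 1,
    Protected = 2,
}

fn main() {
    // With #[repr(C)], the discriminant is the FFI-visible value.
    assert_eq!(Visibility::Default as u32, 0);
    assert_eq!(Visibility::Hidden as u32, 1);
    assert_eq!(Visibility::Protected as u32, 2);
}
```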
# If this file is modified, then llvm will be forcibly cleaned and then rebuilt.
# The actual contents of this file do not matter, but to trigger a change on the
# build bots then the contents should be changed so git updates the mtime.
-2016-10-29
+2016-12-06
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// no-prefer-dynamic
+#![crate_type = "staticlib"]
+
+// Since codegen tests don't actually perform linking, this library doesn't need to export
+// any symbols. It's here just to satisfy the compiler looking for a .lib file when processing
+// #[link(...)] attributes in wrapper.rs.
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// no-prefer-dynamic
+#![crate_type = "rlib"]
+
+#[link(name = "dummy", kind="dylib")]
+extern "C" {
+ pub fn dylib_func2(x: i32) -> i32;
+ pub static dylib_global2: i32;
+}
+
+#[link(name = "dummy", kind="static")]
+extern "C" {
+ pub fn static_func2(x: i32) -> i32;
+ pub static static_global2: i32;
+}
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// This test is for *-windows-msvc only.
+// ignore-gnu
+// ignore-android
+// ignore-bitrig
+// ignore-macos
+// ignore-dragonfly
+// ignore-freebsd
+// ignore-haiku
+// ignore-ios
+// ignore-linux
+// ignore-netbsd
+// ignore-openbsd
+// ignore-solaris
+// ignore-emscripten
+
+// aux-build:dummy.rs
+// aux-build:wrapper.rs
+
+extern crate wrapper;
+
+// Check that external symbols coming from foreign dylibs are adorned with 'dllimport',
+// whereas symbols coming from foreign staticlibs are not. (RFC-1717)
+
+// CHECK: @dylib_global1 = external dllimport local_unnamed_addr global i32
+// CHECK: @dylib_global2 = external dllimport local_unnamed_addr global i32
+// CHECK: @static_global1 = external local_unnamed_addr global i32
+// CHECK: @static_global2 = external local_unnamed_addr global i32
+
+// CHECK: declare dllimport i32 @dylib_func1(i32)
+// CHECK: declare dllimport i32 @dylib_func2(i32)
+// CHECK: declare i32 @static_func1(i32)
+// CHECK: declare i32 @static_func2(i32)
+
+#[link(name = "dummy", kind="dylib")]
+extern "C" {
+ pub fn dylib_func1(x: i32) -> i32;
+ pub static dylib_global1: i32;
+}
+
+#[link(name = "dummy", kind="static")]
+extern "C" {
+ pub fn static_func1(x: i32) -> i32;
+ pub static static_global1: i32;
+}
+
+fn main() {
+ unsafe {
+ dylib_func1(dylib_global1);
+ wrapper::dylib_func2(wrapper::dylib_global2);
+
+ static_func1(static_global1);
+ wrapper::static_func2(wrapper::static_global2);
+ }
+}
}
}
-impl LateLintPass for Pass {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for Pass {
fn check_crate(&mut self, cx: &LateContext, krate: &hir::Crate) {
if !attr::contains_name(&krate.attrs, "crate_okay") {
cx.span_lint(CRATE_NOT_OKAY, krate.span,
#[plugin_registrar]
pub fn plugin_registrar(reg: &mut Registry) {
- reg.register_late_lint_pass(box Pass as LateLintPassObject);
+ reg.register_late_lint_pass(box Pass);
}
}
}
-impl LateLintPass for Pass {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for Pass {
fn check_item(&mut self, cx: &LateContext, it: &hir::Item) {
match &*it.name.as_str() {
"lintme" => cx.span_lint(TEST_LINT, it.span, "item is named 'lintme'"),
#[plugin_registrar]
pub fn plugin_registrar(reg: &mut Registry) {
- reg.register_late_lint_pass(box Pass as LateLintPassObject);
+ reg.register_late_lint_pass(box Pass);
reg.register_lint_group("lint_me", vec![TEST_LINT, PLEASE_LINT]);
}
+++ /dev/null
-// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-// compile-flags: --test
-
-#![crate_type = "proc-macro"]
-#![feature(proc_macro, proc_macro_lib)]
-
-extern crate proc_macro;
-
-#[proc_macro_derive(A)]
-//~^ ERROR: `--test` cannot be used with proc-macro crates
-pub fn foo1(input: proc_macro::TokenStream) -> proc_macro::TokenStream {
- "".parse().unwrap()
-}
extern "C" {
fn printf(_: *const u8, ...) -> u32;
+ //~^ NOTE defined here
}
fn main() {
unsafe { printf(); }
//~^ ERROR E0060
- //~| NOTE the following parameter type was expected: *const u8
+ //~| NOTE expected at least 1 parameter
}
// except according to those terms.
fn f(a: u16, b: &str) {}
+//~^ NOTE defined here
fn f2(a: u16) {}
+//~^ NOTE defined here
fn main() {
f(0);
//~^ ERROR E0061
- //~| NOTE the following parameter types were expected:
- //~| NOTE u16, &str
+ //~| NOTE expected 2 parameters
f2();
//~^ ERROR E0061
- //~| NOTE the following parameter type was expected: u16
+ //~| NOTE expected 1 parameter
}
+++ /dev/null
-// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-#[linked_from = "foo"] //~ ERROR experimental feature
-extern {
- fn foo();
-}
-
-fn main() {}
}
fn print_x(_: &Foo<Item=bool>, extra: &str) {
+ //~^ NOTE defined here
println!("{}", extra);
}
fn main() {
print_x(X);
- //~^ ERROR this function takes 2 parameters but 1 parameter was supplied
- //~| NOTE the following parameter types were expected:
- //~| NOTE &Foo<Item=bool>, &str
+ //~^ ERROR E0061
+ //~| NOTE expected 2 parameters
}
// Prefix in imports with empty braces should be resolved and checked privacy, stability, etc.
-use foo::{}; //~ ERROR failed to resolve. Maybe a missing `extern crate foo;`?
+use foo::{};
+//~^ ERROR failed to resolve. Maybe a missing `extern crate foo;`?
+//~| NOTE foo
fn main() {}
mod n {}
}
-use m::n::{}; //~ ERROR module `n` is private
+use m::n::{};
+//~^ ERROR module `n` is private
fn main() {}
extern crate lint_stability;
-use lint_stability::UnstableStruct::{}; //~ ERROR use of unstable library feature 'test_feature'
+use lint_stability::UnstableStruct::{};
+//~^ ERROR use of unstable library feature 'test_feature'
use lint_stability::StableStruct::{}; // OK
fn main() {}
needlesArr.iter().fold(|x, y| {
});
//~^^ ERROR this function takes 2 parameters but 1 parameter was supplied
- //~| NOTE the following parameter types were expected
- //~| NOTE _, _
- // the first error is, um, non-ideal.
+ //~| NOTE expected 2 parameters
}
--- /dev/null
+// Copyright 2012 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+const A: i32 = Foo::B; //~ ERROR E0265
+ //~^ NOTE recursion not allowed in constant
+
+enum Foo {
+ B = A, //~ ERROR E0265
+ //~^ NOTE recursion not allowed in constant
+}
+
+enum Bar {
+ C = Bar::C, //~ ERROR E0265
+ //~^ NOTE recursion not allowed in constant
+}
+
+const D: i32 = A;
+
+fn main() {}
// Regression test for issue #4935
fn foo(a: usize) {}
+//~^ NOTE defined here
fn main() { foo(5, 6) }
//~^ ERROR this function takes 1 parameter but 2 parameters were supplied
-//~| NOTE the following parameter type was expected
+//~| NOTE expected 1 parameter
use std::mem::*; // shouldn't get errors for not using
// everything imported
+use std::fmt::{};
+//~^ ERROR unused import: `use std::fmt::{};`
// Should get errors for both 'Some' and 'None'
use std::option::Option::{Some, None};
pub struct Foo;
impl Foo {
fn zero(self) -> Foo { self }
+ //~^ NOTE defined here
fn one(self, _: isize) -> Foo { self }
+ //~^ NOTE defined here
fn two(self, _: isize, _: isize) -> Foo { self }
+ //~^ NOTE defined here
}
fn main() {
x.zero(0) //~ ERROR this function takes 0 parameters but 1 parameter was supplied
//~^ NOTE expected 0 parameters
.one() //~ ERROR this function takes 1 parameter but 0 parameters were supplied
- //~^ NOTE the following parameter type was expected
+ //~^ NOTE expected 1 parameter
.two(0); //~ ERROR this function takes 2 parameters but 1 parameter was supplied
- //~^ NOTE the following parameter types were expected
- //~| NOTE isize, isize
+ //~^ NOTE expected 2 parameters
let y = Foo;
y.zero()
// unrelated errors.
fn foo(a: isize, b: isize, c: isize, d:isize) {
+ //~^ NOTE defined here
panic!();
}
fn main() {
foo(1, 2, 3);
//~^ ERROR this function takes 4 parameters but 3
- //~| NOTE the following parameter types were expected:
- //~| NOTE isize, isize, isize, isize
+ //~| NOTE expected 4 parameters
}
//~| NOTE found type
let ans = s();
//~^ ERROR this function takes 1 parameter but 0 parameters were supplied
- //~| NOTE the following parameter type was expected
+ //~| NOTE expected 1 parameter
let ans = s("burma", "shave");
//~^ ERROR this function takes 1 parameter but 2 parameters were supplied
- //~| NOTE the following parameter type was expected
+ //~| NOTE expected 1 parameter
}
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// compile-flags: -l foo:bar
+// error-pattern: renaming of the library `foo` was specified
+
+#![crate_type = "lib"]
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// compile-flags: -l foo:bar -l foo:baz
+// error-pattern: multiple renamings were specified for library
+
+#![crate_type = "lib"]
+
+#[link(name = "foo")]
+extern "C" {}
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// compile-flags: -l foo:
+// error-pattern: an empty renaming target was specified for library
+
+#![crate_type = "lib"]
+
+#[link(name = "foo")]
+extern "C" {}
extern {
fn foo(f: isize, x: u8, ...);
+ //~^ NOTE defined here
+ //~| NOTE defined here
}
extern "C" fn bar(f: isize, x: u8) {}
fn main() {
unsafe {
foo(); //~ ERROR: this function takes at least 2 parameters but 0 parameters were supplied
- //~^ NOTE the following parameter types were expected:
- //~| NOTE isize, u8
+ //~| NOTE expected at least 2 parameters
foo(1); //~ ERROR: this function takes at least 2 parameters but 1 parameter was supplied
- //~^ NOTE the following parameter types were expected:
- //~| NOTE isize, u8
+ //~| NOTE expected at least 2 parameters
let x: unsafe extern "C" fn(f: isize, x: u8) = foo;
//~^ ERROR: mismatched types
#![crate_type = "dylib"]
-#[link(name = "cfoo")]
+#[link(name = "cfoo", kind = "static")]
extern {
fn foo();
}
#![crate_type = "rlib"]
-#[link(name = "cfoo")]
+#[link(name = "cfoo", kind = "static")]
extern {
fn foo();
}
# Should not link dead code...
$(RUSTC) -Z print-link-args dummy.rs 2>&1 | \
- grep -e '--gc-sections' -e '-dead_strip' -e '/OPT:REF,ICF'
+ grep -e '--gc-sections' -e '-dead_strip' -e '/OPT:REF'
# ... unless you specifically ask to keep it
$(RUSTC) -Z print-link-args -C link-dead-code dummy.rs 2>&1 | \
- (! grep -e '--gc-sections' -e '-dead_strip' -e '/OPT:REF,ICF')
+ (! grep -e '--gc-sections' -e '-dead_strip' -e '/OPT:REF')
extern "C" fn bar<T>(ts: testcrate::TestStruct<T>) -> T { ts.y }
-#[link(name = "test")]
+#[link(name = "test", kind = "static")]
extern {
fn call(c: extern "C" fn(testcrate::TestStruct<i32>) -> i32) -> i32;
}
pub extern "C" fn foo<T>(ts: TestStruct<T>) -> T { ts.y }
-#[link(name = "test")]
+#[link(name = "test", kind = "static")]
extern {
pub fn call(c: extern "C" fn(TestStruct<i32>) -> i32) -> i32;
}
extern crate foo;
-#[link(name = "bar")]
+#[link(name = "bar", kind = "static")]
extern {
fn bar();
}
#![crate_type = "rlib"]
-#[link(name = "foo")]
+#[link(name = "foo", kind = "static")]
extern {
fn foo();
}
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-#![feature(linked_from)]
#![crate_type = "dylib"]
#[link(name = "foo", kind = "static")]
-#[linked_from = "foo"]
extern {
pub fn foo();
}
extern crate libc;
-#[link(name = "test")]
+#[link(name = "test", kind = "static")]
extern {
fn slice_len(s: &[u8]) -> libc::size_t;
fn slice_elem(s: &[u8], idx: libc::size_t) -> u8;
extern crate libc;
-#[link(name="foo")]
+#[link(name="foo", kind = "static")]
extern {
fn should_return_one() -> libc::c_int;
}
$(RUSTC) foo.rs --emit=llvm-ir -C codegen-units=3
[ "$$(cat "$(TMPDIR)"/foo.?.ll | grep -c define\ i32\ .*inlined)" -eq "0" ]
[ "$$(cat "$(TMPDIR)"/foo.?.ll | grep -c define\ internal\ i32\ .*inlined)" -eq "2" ]
- [ "$$(cat "$(TMPDIR)"/foo.?.ll | grep -c define\ i32\ .*normal)" -eq "1" ]
- [ "$$(cat "$(TMPDIR)"/foo.?.ll | grep -c declare\ i32\ .*normal)" -eq "2" ]
+ [ "$$(cat "$(TMPDIR)"/foo.?.ll | grep -c define\ hidden\ i32\ .*normal)" -eq "1" ]
+ [ "$$(cat "$(TMPDIR)"/foo.?.ll | grep -c declare\ hidden\ i32\ .*normal)" -eq "2" ]
--- /dev/null
+include ../tools.mk
+
+ifdef IS_WINDOWS
+# Do nothing on MSVC.
+# On MINGW the --version-script, --dynamic-list, and --retain-symbols-file
+# args don't seem to work reliably.
+all:
+ exit 0
+else
+
+NM=nm -D
+DYLIB_EXT=so
+CDYLIB_NAME=liba_cdylib.so
+RDYLIB_NAME=liba_rust_dylib.so
+EXE_NAME=an_executable
+
+ifeq ($(UNAME),Darwin)
+NM=nm -gU
+DYLIB_EXT=dylib
+CDYLIB_NAME=liba_cdylib.dylib
+RDYLIB_NAME=liba_rust_dylib.dylib
+EXE_NAME=an_executable
+endif
+
+all:
+ $(RUSTC) an_rlib.rs
+ $(RUSTC) a_cdylib.rs
+ $(RUSTC) a_rust_dylib.rs
+ $(RUSTC) an_executable.rs
+
+ # Check that a cdylib exports its public #[no_mangle] functions
+ [ "$$($(NM) $(TMPDIR)/$(CDYLIB_NAME) | grep -c public_c_function_from_cdylib)" -eq "1" ]
+ # Check that a cdylib exports the public #[no_mangle] functions of dependencies
+ [ "$$($(NM) $(TMPDIR)/$(CDYLIB_NAME) | grep -c public_c_function_from_rlib)" -eq "1" ]
+ # Check that a cdylib DOES NOT export any public Rust functions
+ [ "$$($(NM) $(TMPDIR)/$(CDYLIB_NAME) | grep -c _ZN.*h.*E)" -eq "0" ]
+
+ # Check that a Rust dylib exports its monomorphic functions
+ [ "$$($(NM) $(TMPDIR)/$(RDYLIB_NAME) | grep -c public_c_function_from_rust_dylib)" -eq "1" ]
+ [ "$$($(NM) $(TMPDIR)/$(RDYLIB_NAME) | grep -c _ZN.*public_rust_function_from_rust_dylib.*E)" -eq "1" ]
+
+ # Check that a Rust dylib exports the monomorphic functions from its dependencies
+ [ "$$($(NM) $(TMPDIR)/$(RDYLIB_NAME) | grep -c public_c_function_from_rlib)" -eq "1" ]
+ [ "$$($(NM) $(TMPDIR)/$(RDYLIB_NAME) | grep -c public_rust_function_from_rlib)" -eq "1" ]
+
+ # Check that an executable does not export any dynamic symbols
+ [ "$$($(NM) $(TMPDIR)/$(EXE_NAME) | grep -c public_c_function_from_rlib)" -eq "0" ]
+ [ "$$($(NM) $(TMPDIR)/$(EXE_NAME) | grep -c public_rust_function_from_exe)" -eq "0" ]
+
+endif
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#![crate_type="cdylib"]
+
+extern crate an_rlib;
+
+// This should not be exported
+pub fn public_rust_function_from_cdylib() {}
+
+// This should be exported
+#[no_mangle]
+pub extern "C" fn public_c_function_from_cdylib() {
+ an_rlib::public_c_function_from_rlib();
+}
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#![crate_type="dylib"]
+
+extern crate an_rlib;
+
+// This should be exported
+pub fn public_rust_function_from_rust_dylib() {}
+
+// This should be exported
+#[no_mangle]
+pub extern "C" fn public_c_function_from_rust_dylib() {}
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#![crate_type="bin"]
+
+extern crate an_rlib;
+
+pub fn public_rust_function_from_exe() {}
+
+fn main() {}
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+#![crate_type="rlib"]
+
+pub fn public_rust_function_from_rlib() {}
+
+#[no_mangle]
+pub extern "C" fn public_c_function_from_rlib() {}
}
}
-impl LateLintPass for Pass {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for Pass {
fn check_crate(&mut self, cx: &LateContext, krate: &hir::Crate) {
if !attr::contains_name(&krate.attrs, "crate_okay") {
cx.span_lint(CRATE_NOT_OKAY, krate.span,
#[plugin_registrar]
pub fn plugin_registrar(reg: &mut Registry) {
- reg.register_late_lint_pass(box Pass as LateLintPassObject);
+ reg.register_late_lint_pass(box Pass);
}
}
}
-impl LateLintPass for Pass {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for Pass {
fn check_item(&mut self, cx: &LateContext, it: &hir::Item) {
match &*it.name.as_str() {
"lintme" => cx.span_lint(TEST_LINT, it.span, "item is named 'lintme'"),
#[plugin_registrar]
pub fn plugin_registrar(reg: &mut Registry) {
- reg.register_late_lint_pass(box Pass as LateLintPassObject);
+ reg.register_late_lint_pass(box Pass);
reg.register_lint_group("lint_me", vec![TEST_LINT, PLEASE_LINT]);
}
fn get_lints(&self) -> LintArray { lint_array!(REGION_HIERARCHY) }
}
-impl LateLintPass for Pass {
+impl<'a, 'tcx> LateLintPass<'a, 'tcx> for Pass {
fn check_fn(&mut self, cx: &LateContext,
- fk: FnKind, _: &hir::FnDecl, expr: &hir::Expr,
- span: Span, node: ast::NodeId)
+ fk: FnKind, _: &hir::FnDecl, expr: &hir::Expr,
+ span: Span, node: ast::NodeId)
{
if let FnKind::Closure(..) = fk { return }
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// no-prefer-dynamic
+// compile-flags: --test
+
+#![crate_type = "proc-macro"]
+#![feature(proc_macro, proc_macro_lib)]
+
+extern crate proc_macro;
+
+use proc_macro::TokenStream;
+
+#[proc_macro_derive(Foo)]
+pub fn derive_foo(_input: TokenStream) -> TokenStream {
+ "".parse().unwrap()
+}
+
+#[test]
+pub fn test_derive() {
+ assert!(true);
+}
#[derive(Copy, Clone)]
pub struct Floats { a: f64, b: u8, c: f64 }
- #[link(name = "rust_test_helpers")]
+ #[link(name = "rust_test_helpers", kind = "static")]
extern "sysv64" {
pub fn rust_int8_to_int32(_: i8) -> i32;
pub fn rust_dbg_extern_identity_u8(v: u8) -> u8;
extern crate libc;
-#[link(name = "rust_test_helpers")]
+#[link(name = "rust_test_helpers", kind = "static")]
extern {
fn rust_get_test_int() -> libc::intptr_t;
}
extern crate libc;
-#[link(name="rust_test_helpers")]
+#[link(name = "rust_test_helpers", kind = "static")]
extern {
pub fn rust_get_test_int() -> libc::intptr_t;
}
pub mod rustrt {
extern crate libc;
- #[link(name = "rust_test_helpers")]
+ #[link(name = "rust_test_helpers", kind = "static")]
extern {
pub fn rust_dbg_call(cb: extern "C" fn(libc::uintptr_t) -> libc::uintptr_t,
data: libc::uintptr_t)
pub mod rustrt {
extern crate libc;
- #[link(name = "rust_test_helpers")]
+ #[link(name = "rust_test_helpers", kind = "static")]
extern {
pub fn rust_get_test_int() -> libc::intptr_t;
}
// no-prefer-dynamic
-#![feature(linked_from)]
-
#![crate_type = "rlib"]
#[link(name = "rust_test_helpers", kind = "static")]
-#[linked_from = "rust_test_helpers"]
extern {
pub fn rust_dbg_extern_identity_u32(u: u32) -> u32;
}
mod rustrt {
extern crate libc;
- #[link(name = "rust_test_helpers")]
+ #[link(name = "rust_test_helpers", kind = "static")]
extern {
pub fn rust_get_test_int() -> libc::intptr_t;
}
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-#[link(name = "rust_test_helpers")]
+#[link(name = "rust_test_helpers", kind = "static")]
extern {
fn rust_int8_to_int32(_: i8) -> i32;
}
mod rustrt {
extern crate libc;
- #[link(name = "rust_test_helpers")]
+ #[link(name = "rust_test_helpers", kind = "static")]
extern {
pub fn rust_dbg_call(cb: extern "C" fn(libc::uintptr_t) -> libc::uintptr_t,
data: libc::uintptr_t)
mod rustrt {
extern crate libc;
- #[link(name = "rust_test_helpers")]
+ #[link(name = "rust_test_helpers", kind = "static")]
extern {
pub fn rust_dbg_call(cb: extern "C" fn(libc::uintptr_t) -> libc::uintptr_t,
data: libc::uintptr_t)
mod rustrt {
extern crate libc;
- #[link(name = "rust_test_helpers")]
+ #[link(name = "rust_test_helpers", kind = "static")]
extern {
pub fn rust_dbg_call(cb: extern "C" fn(libc::uintptr_t) -> libc::uintptr_t,
data: libc::uintptr_t)
mod rustrt {
extern crate libc;
- #[link(name = "rust_test_helpers")]
+ #[link(name = "rust_test_helpers", kind = "static")]
extern {
pub fn rust_dbg_call(cb: extern "C" fn(libc::uintptr_t) -> libc::uintptr_t,
data: libc::uintptr_t)
one: u16, two: u16
}
-#[link(name = "rust_test_helpers")]
+#[link(name = "rust_test_helpers", kind = "static")]
extern {
pub fn rust_dbg_extern_identity_TwoU16s(v: TwoU16s) -> TwoU16s;
}
one: u32, two: u32
}
-#[link(name = "rust_test_helpers")]
+#[link(name = "rust_test_helpers", kind = "static")]
extern {
pub fn rust_dbg_extern_identity_TwoU32s(v: TwoU32s) -> TwoU32s;
}
one: u64, two: u64
}
-#[link(name = "rust_test_helpers")]
+#[link(name = "rust_test_helpers", kind = "static")]
extern {
pub fn rust_dbg_extern_identity_TwoU64s(v: TwoU64s) -> TwoU64s;
}
one: u8, two: u8
}
-#[link(name = "rust_test_helpers")]
+#[link(name = "rust_test_helpers", kind = "static")]
extern {
pub fn rust_dbg_extern_identity_TwoU8s(v: TwoU8s) -> TwoU8s;
}
// Test a function that takes/returns a u8.
-#[link(name = "rust_test_helpers")]
+#[link(name = "rust_test_helpers", kind = "static")]
extern {
pub fn rust_dbg_extern_identity_u8(v: u8) -> u8;
}
// except according to those terms.
-#[link(name = "rust_test_helpers")]
+#[link(name = "rust_test_helpers", kind = "static")]
extern {
pub fn rust_dbg_extern_identity_double(v: f64) -> f64;
}
struct Empty;
-#[link(name = "rust_test_helpers")]
+#[link(name = "rust_test_helpers", kind = "static")]
extern {
fn rust_dbg_extern_empty_struct(v1: ManyInts, e: Empty, v2: ManyInts);
}
// Test a function that takes/returns a u32.
-#[link(name = "rust_test_helpers")]
+#[link(name = "rust_test_helpers", kind = "static")]
extern {
pub fn rust_dbg_extern_identity_u32(v: u32) -> u32;
}
// Test a call to a function that takes/returns a u64.
-#[link(name = "rust_test_helpers")]
+#[link(name = "rust_test_helpers", kind = "static")]
extern {
pub fn rust_dbg_extern_identity_u64(v: u64) -> u64;
}
one: u16, two: u16
}
-#[link(name = "rust_test_helpers")]
+#[link(name = "rust_test_helpers", kind = "static")]
extern {
pub fn rust_dbg_extern_return_TwoU16s() -> TwoU16s;
}
one: u32, two: u32
}
-#[link(name = "rust_test_helpers")]
+#[link(name = "rust_test_helpers", kind = "static")]
extern {
pub fn rust_dbg_extern_return_TwoU32s() -> TwoU32s;
}
one: u64, two: u64
}
-#[link(name = "rust_test_helpers")]
+#[link(name = "rust_test_helpers", kind = "static")]
extern {
pub fn rust_dbg_extern_return_TwoU64s() -> TwoU64s;
}
one: u8, two: u8
}
-#[link(name = "rust_test_helpers")]
+#[link(name = "rust_test_helpers", kind = "static")]
extern {
pub fn rust_dbg_extern_return_TwoU8s() -> TwoU8s;
}
use std::mem;
use std::thread;
-#[link(name = "rust_test_helpers")]
+#[link(name = "rust_test_helpers", kind = "static")]
extern {
fn rust_dbg_call(cb: extern "C" fn(libc::uintptr_t),
data: libc::uintptr_t) -> libc::uintptr_t;
z: u64,
}
-#[link(name = "rust_test_helpers")]
+#[link(name = "rust_test_helpers", kind = "static")]
extern {
pub fn get_x(x: S) -> u64;
pub fn get_y(x: S) -> u64;
mod rustrt {
extern crate libc;
- #[link(name = "rust_test_helpers")]
+ #[link(name = "rust_test_helpers", kind = "static")]
extern {
pub fn rust_get_test_int() -> libc::intptr_t;
}
mod rustrt {
use super::Quad;
- #[link(name = "rust_test_helpers")]
+ #[link(name = "rust_test_helpers", kind = "static")]
extern {
pub fn get_c_many_params(_: *const (), _: *const (),
_: *const (), _: *const (), f: Quad) -> u64;
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-#[link(name = "rust_test_helpers")]
+#[link(name = "rust_test_helpers", kind = "static")]
extern {
fn rust_interesting_average(_: i64, ...) -> f64;
}
// except according to those terms.
// ignore-emscripten
+// ignore-android
#![feature(libc)]
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// no-prefer-dynamic
+#![crate_type = "staticlib"]
+
+#[no_mangle]
+pub extern "C" fn foo(x: i32) -> i32 { x }
--- /dev/null
+// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
+// file at the top-level directory of this distribution and at
+// http://rust-lang.org/COPYRIGHT.
+//
+// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
+// option. This file may not be copied, modified, or distributed
+// except according to those terms.
+
+// aux-build:clibrary.rs
+// compile-flags: -lstatic=wronglibrary:clibrary
+
+#[link(name = "wronglibrary", kind = "dylib")]
+extern "C" {
+ pub fn foo(x: i32) -> i32;
+}
+
+fn main() {
+ unsafe {
+ foo(42);
+ }
+}
use std::process::{Command, ExitStatus};
use std::env;
-#[link(name = "rust_test_helpers")]
+#[link(name = "rust_test_helpers", kind = "static")]
extern {
fn rust_get_null_ptr() -> *mut ::libc::c_char;
}
extern crate libc;
-#[link(name = "rust_test_helpers")]
+#[link(name = "rust_test_helpers", kind = "static")]
extern {
static mut rust_dbg_static_mut: libc::c_int;
pub fn rust_dbg_static_mut_check_four();
mod rustrt {
use super::{Floats, Quad};
- #[link(name = "rust_test_helpers")]
+ #[link(name = "rust_test_helpers", kind = "static")]
extern {
pub fn rust_dbg_abi_1(q: Quad) -> Quad;
pub fn rust_dbg_abi_2(f: Floats) -> Floats;
QuadPart: u64,
}
-#[link(name = "rust_test_helpers")]
+#[link(name = "rust_test_helpers", kind = "static")]
extern "C" {
fn increment_all_parts(_: LARGE_INTEGER) -> LARGE_INTEGER;
}
// option. This file may not be copied, modified, or distributed
// except according to those terms.
-#[link(name = "rust_test_helpers")]
+#[link(name = "rust_test_helpers", kind = "static")]
extern {
fn rust_interesting_average(_: u64, ...) -> f64;
}
--> $DIR/E0057.rs:13:13
|
13 | let a = f(); //~ ERROR E0057
- | ^^^
- |
- = note: the following parameter type was expected: (_,)
+ | ^^^ expected 1 parameter
error[E0057]: this function takes 1 parameter but 2 parameters were supplied
--> $DIR/E0057.rs:15:15
|
15 | let c = f(2, 3); //~ ERROR E0057
- | ^^^^
- |
- = note: the following parameter type was expected: (_,)
+ | ^^^^ expected 1 parameter
error: aborting due to 2 previous errors