- [ ] Implement the RFC (cc @rust-lang/XXX -- can anyone write up mentoring
instructions?)
-- [ ] Adjust documentation ([see instructions on rustc-guide][doc-guide])
-- [ ] Stabilization PR ([see instructions on rustc-guide][stabilization-guide])
+- [ ] Adjust documentation ([see instructions on rustc-dev-guide][doc-guide])
+- [ ] Stabilization PR ([see instructions on rustc-dev-guide][stabilization-guide])
-[stabilization-guide]: https://rust-lang.github.io/rustc-guide/stabilization_guide.html#stabilization-pr
-[doc-guide]: https://rust-lang.github.io/rustc-guide/stabilization_guide.html#documentation-prs
+[stabilization-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr
+[doc-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#documentation-prs
### Unresolved Questions
<!--
# email addresses.
#
-Aaron Power <theaaronepower@gmail.com> Erin Power <xampprocky@gmail.com>
Aaron Todd <github@opprobrio.us>
Abhishek Chanda <abhishek.becs@gmail.com> Abhishek Chanda <abhishek@cloudscaling.com>
Adolfo Ochagavía <aochagavia92@gmail.com>
Eric Holmes <eric@ejholmes.net>
Eric Reed <ecreed@cs.washington.edu> <ereed@mozilla.com>
Erick Tryzelaar <erick.tryzelaar@gmail.com> <etryzelaar@iqt.org>
+Erin Power <xampprocky@gmail.com> <theaaronepower@gmail.com>
+Erin Power <xampprocky@gmail.com> <Aaronepower@users.noreply.github.com>
Esteban Küber <esteban@kuber.com.ar> <esteban@commure.com>
Esteban Küber <esteban@kuber.com.ar> <estebank@users.noreply.github.com>
Esteban Küber <esteban@kuber.com.ar> <github@kuber.com.ar>
Mateusz Mikuła <matti@marinelayer.io> <mati865@users.noreply.github.com>
Matt Brubeck <mbrubeck@limpet.net> <mbrubeck@cs.hmc.edu>
Matthew Auld <matthew.auld@intel.com>
+Matthew Kraai <kraai@ftbfs.org>
+Matthew Kraai <kraai@ftbfs.org> <matt.kraai@abbott.com>
+Matthew Kraai <kraai@ftbfs.org> <mkraai@its.jnj.com>
Matthew McPherrin <matthew@mcpherrin.ca> <matt@mcpherrin.ca>
Matthijs Hofstra <thiezz@gmail.com>
Melody Horn <melody@boringcactus.com> <mathphreak@gmail.com>
As a reminder, all contributors are expected to follow our [Code of Conduct][coc].
-The [rustc-guide] is your friend! It describes how the compiler works and how
+The [rustc-dev-guide] is your friend! It describes how the compiler works and how
to contribute to it in more detail than this document.
If this is your first time contributing, the [walkthrough] chapter of the guide
[rust-discord]: https://discord.gg/rust-lang
[rust-zulip]: https://rust-lang.zulipchat.com
[coc]: https://www.rust-lang.org/conduct.html
-[rustc-guide]: https://rust-lang.github.io/rustc-guide/
-[walkthrough]: https://rust-lang.github.io/rustc-guide/walkthrough.html
+[rustc-dev-guide]: https://rustc-dev-guide.rust-lang.org/
+[walkthrough]: https://rustc-dev-guide.rust-lang.org/walkthrough.html
## Feature Requests
[feature-requests]: #feature-requests
## The Build System
For info on how to configure and build the compiler, please see [this
-chapter][rustcguidebuild] of the rustc-guide. This chapter contains info for
+chapter][rustcguidebuild] of the rustc-dev-guide. This chapter contains info for
contributing to the compiler and the standard library. It also lists some
really useful commands for the build system (`./x.py`), which could save you a
lot of time.
-[rustcguidebuild]: https://rust-lang.github.io/rustc-guide/building/how-to-build-and-run.html
+[rustcguidebuild]: https://rustc-dev-guide.rust-lang.org/building/how-to-build-and-run.html
## Pull Requests
[pull-requests]: #pull-requests
Please make sure your pull request is in compliance with Rust's style
guidelines by running
- $ python x.py test src/tools/tidy
+ $ python x.py test tidy
Make this check before every pull request (and every new commit in a pull
request); you can add [git hooks](https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks)
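As a minimal sketch, a `pre-push` hook that runs the tidy check before every push could look like this (this assumes the hook lives at the standard `.git/hooks/pre-push` path, is made executable, and that `python` is on your `PATH`):

```shell
#!/bin/sh
# .git/hooks/pre-push -- abort the push if the tidy check fails.
# Make the file executable with: chmod +x .git/hooks/pre-push
set -e
python x.py test tidy
```

Because `set -e` makes the script exit on the first failing command, a tidy failure returns a non-zero status and git cancels the push.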
reference to `doc/reference.html`. The CSS might be messed up, but you can
verify that the HTML is right.
-Additionally, contributions to the [rustc-guide] are always welcome. Contributions
+Additionally, contributions to the [rustc-dev-guide] are always welcome. Contributions
can be made directly at [the
-rust-lang/rustc-guide](https://github.com/rust-lang/rustc-guide) repo. The issue
+rust-lang/rustc-dev-guide](https://github.com/rust-lang/rustc-dev-guide) repo. The issue
tracker in that repo is also a great way to find things that need doing. There
are issues for beginners and advanced compiler devs alike!
more seasoned developers, some useful places to look for information
are:
-* The [rustc guide] contains information about how various parts of the compiler work and how to contribute to the compiler
+* The [rustc dev guide] contains information about how various parts of the compiler work and how to contribute to the compiler
* [Rust Forge][rustforge] contains additional documentation, including write-ups of how to achieve common tasks
* The [Rust Internals forum][rif], a place to ask questions and
discuss Rust's internals
* **Google!** ([search only in Rust Documentation][gsearchdocs] to find types, traits, etc. quickly)
* Don't be afraid to ask! The Rust community is friendly and helpful.
-[rustc guide]: https://rust-lang.github.io/rustc-guide/about-this-guide.html
+[rustc dev guide]: https://rustc-dev-guide.rust-lang.org/about-this-guide.html
[gdfrustc]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc/
[gsearchdocs]: https://www.google.com/search?q=site:doc.rust-lang.org+your+query+here
[rif]: http://internals.rust-lang.org
[rustforge]: https://forge.rust-lang.org/
[tlgba]: http://tomlee.co/2014/04/a-more-detailed-tour-of-the-rust-compiler/
[ro]: http://www.rustaceans.org/
-[rctd]: https://rust-lang.github.io/rustc-guide/tests/intro.html
+[rctd]: https://rustc-dev-guide.rust-lang.org/tests/intro.html
[cheatsheet]: https://buildbot2.rust-lang.org/homu/
[[package]]
name = "cargo"
-version = "0.44.0"
+version = "0.45.0"
dependencies = [
"anyhow",
"atty",
"clap",
"core-foundation 0.7.0",
"crates-io",
- "crossbeam-channel",
"crossbeam-utils 0.7.0",
"crypto-hash",
"curl",
"termcolor",
"toml",
"unicode-width",
+ "unicode-xid 0.2.0",
"url 2.1.0",
"walkdir",
"winapi 0.3.8",
[[package]]
name = "cargo_metadata"
-version = "0.9.0"
+version = "0.9.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "8d2d1617e838936c0d2323a65cc151e03ae19a7678dd24f72bccf27119b90a5d"
+checksum = "46e3374c604fb39d1a2f35ed5e4a4e30e60d01fab49446e08f1b3e9a90aef202"
dependencies = [
"semver",
"serde",
"rustc-std-workspace-core",
]
-[[package]]
-name = "chalk-engine"
-version = "0.9.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "17ec698a6f053a23bfbe646d9f2fde4b02abc19125595270a99e6f44ae0bdd1a"
-dependencies = [
- "chalk-macros",
- "rustc-hash",
-]
-
-[[package]]
-name = "chalk-macros"
-version = "0.1.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "295635afd6853aa9f20baeb7f0204862440c0fe994c5a253d5f479dac41d047e"
-dependencies = [
- "lazy_static 0.2.11",
-]
-
[[package]]
name = "chrono"
version = "0.4.6"
name = "clippy"
version = "0.0.212"
dependencies = [
- "cargo_metadata 0.9.0",
+ "cargo_metadata 0.9.1",
"clippy-mini-macro-test",
"clippy_lints",
"compiletest_rs",
"derive-new",
- "git2",
"lazy_static 1.4.0",
"regex",
"rustc-workspace-hack",
name = "clippy_lints"
version = "0.0.212"
dependencies = [
- "cargo_metadata 0.9.0",
+ "cargo_metadata 0.9.1",
"if_chain",
- "itertools 0.8.0",
+ "itertools 0.9.0",
"lazy_static 1.4.0",
"matches",
"pulldown-cmark 0.7.0",
[[package]]
name = "git2"
-version = "0.11.0"
+version = "0.13.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "77519ef7c5beee314d0804d4534f01e0f9e8d9acdee2b7a48627e590b27e0ec4"
+checksum = "b7da16ceafe24cedd9ba02c4463a2b506b6493baf4317c79c5acb553134a3c15"
dependencies = [
"bitflags",
"libc",
[[package]]
name = "git2-curl"
-version = "0.12.0"
+version = "0.14.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "d2559abb1d87d27668d31bd868a000f0e2e0065d10e78961b62da95d7a7f1cc7"
+checksum = "502d532a2d06184beb3bc869d4d90236e60934e3382c921b203fa3c33e212bd7"
dependencies = [
"curl",
"git2",
"either",
]
+[[package]]
+name = "itertools"
+version = "0.9.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "284f18f85651fe11e8a991b2adb42cb078325c996ed026d994719efcfca1d54b"
+dependencies = [
+ "either",
+]
+
[[package]]
name = "itoa"
version = "0.4.4"
[[package]]
name = "libgit2-sys"
-version = "0.10.0"
+version = "0.12.0+0.99.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "d9ec6bca50549d34a392611dde775123086acbd994e3fff64954777ce2dc2e51"
+checksum = "05dff41ac39e7b653f5f1550886cf00ba52f8e7f57210b633cdeedb3de5b236c"
dependencies = [
"cc",
"libc",
version = "0.1.0"
dependencies = [
"byteorder",
- "cargo_metadata 0.9.0",
+ "cargo_metadata 0.9.1",
"colored",
"compiletest_rs",
"directories",
"rustc-workspace-hack",
"rustc_version",
"serde",
+ "serde_json",
"shell-escape",
"vergen",
]
[[package]]
name = "polonius-engine"
-version = "0.11.0"
+version = "0.12.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "1e478d7c38eb785c6416cbe58df12aa55d7aefa3759b6d3e044b2ed03f423cec"
+checksum = "04d8ef65e3f89ecaec9ca7cb0e0911b4617352d4494018bcf934992f03f2024c"
dependencies = [
"datafrog",
"log",
"backtrace",
"bitflags",
"byteorder",
- "chalk-engine",
"jobserver",
"log",
"measureme",
"rustc_data_structures",
"rustc_hir",
"rustc_metadata",
+ "rustc_session",
"rustc_span",
"rustc_target",
]
"rustc_parse",
"rustc_plugin_impl",
"rustc_save_analysis",
+ "rustc_session",
"rustc_span",
"rustc_target",
"serialize",
name = "rustc_infer"
version = "0.0.0"
dependencies = [
- "fmt_macros",
"graphviz",
"log",
"rustc",
"rustc_ast",
- "rustc_attr",
"rustc_data_structures",
- "rustc_error_codes",
"rustc_errors",
"rustc_hir",
"rustc_index",
"rustc_session",
"rustc_span",
"rustc_target",
+ "rustc_trait_selection",
"rustc_traits",
"rustc_ty",
"rustc_typeck",
"rustc_session",
"rustc_span",
"rustc_target",
+ "rustc_trait_selection",
"unicode-security",
]
"memmap",
"rustc",
"rustc_ast",
- "rustc_ast_pretty",
"rustc_attr",
"rustc_data_structures",
"rustc_errors",
"rustc_expand",
"rustc_hir",
"rustc_index",
- "rustc_parse",
+ "rustc_session",
"rustc_span",
"rustc_target",
"serialize",
"rustc_infer",
"rustc_lexer",
"rustc_macros",
+ "rustc_session",
"rustc_span",
"rustc_target",
+ "rustc_trait_selection",
"serialize",
"smallvec 1.0.0",
]
"rustc_session",
"rustc_span",
"rustc_target",
+ "rustc_trait_selection",
"serialize",
"smallvec 1.0.0",
]
"rustc_session",
"rustc_span",
"rustc_target",
+ "rustc_trait_selection",
]
[[package]]
"rustc_hir",
"rustc_lint",
"rustc_metadata",
+ "rustc_session",
"rustc_span",
]
"rustc_data_structures",
"rustc_errors",
"rustc_hir",
+ "rustc_session",
"rustc_span",
"rustc_typeck",
]
"rustc_expand",
"rustc_feature",
"rustc_hir",
- "rustc_infer",
"rustc_metadata",
"rustc_session",
"rustc_span",
"rustc_data_structures",
"rustc_hir",
"rustc_parse",
+ "rustc_session",
"rustc_span",
"serde_json",
]
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b725dadae9fabc488df69a287f5a99c5eaf5d10853842a8a3dfac52476f544ee"
+[[package]]
+name = "rustc_trait_selection"
+version = "0.0.0"
+dependencies = [
+ "fmt_macros",
+ "log",
+ "rustc",
+ "rustc_ast",
+ "rustc_attr",
+ "rustc_data_structures",
+ "rustc_errors",
+ "rustc_hir",
+ "rustc_index",
+ "rustc_infer",
+ "rustc_macros",
+ "rustc_session",
+ "rustc_span",
+ "rustc_target",
+ "smallvec 1.0.0",
+]
+
[[package]]
name = "rustc_traits"
version = "0.0.0"
dependencies = [
- "chalk-engine",
"log",
"rustc",
"rustc_ast",
"rustc_macros",
"rustc_span",
"rustc_target",
+ "rustc_trait_selection",
"smallvec 1.0.0",
]
"rustc_data_structures",
"rustc_hir",
"rustc_infer",
+ "rustc_session",
"rustc_span",
"rustc_target",
+ "rustc_trait_selection",
]
[[package]]
"rustc_hir",
"rustc_index",
"rustc_infer",
+ "rustc_session",
"rustc_span",
"rustc_target",
+ "rustc_trait_selection",
"smallvec 1.0.0",
]
name = "tidy"
version = "0.1.0"
dependencies = [
+ "cargo_metadata 0.9.1",
"lazy_static 1.4.0",
"regex",
- "serde",
- "serde_json",
"walkdir",
]
## Installing from Source
_Note: If you wish to contribute to the compiler, you should read [this
-chapter][rustcguidebuild] of the rustc-guide instead of this section._
+chapter][rustcguidebuild] of the rustc-dev-guide instead of this section._
The Rust build system has a Python script called `x.py` to bootstrap building
the compiler. More information about it may be found by running `./x.py --help`
-or reading the [rustc guide][rustcguidebuild].
+or reading the [rustc dev guide][rustcguidebuild].
-[rustcguidebuild]: https://rust-lang.github.io/rustc-guide/building/how-to-build-and-run.html
+[rustcguidebuild]: https://rustc-dev-guide.rust-lang.org/building/how-to-build-and-run.html
### Building on *nix
1. Make sure you have installed the dependencies:
community, documentation, and all major contribution areas in the Rust ecosystem.
A good place to ask for help would be the #help channel.
-The [rustc guide] might be a good place to start if you want to find out how
+The [rustc dev guide] might be a good place to start if you want to find out how
various parts of the compiler work.
Also, you may find the [rustdocs for the compiler itself][rustdocs] useful.
[rust-discord]: https://discord.gg/rust-lang
-[rustc guide]: https://rust-lang.github.io/rustc-guide/about-this-guide.html
+[rustc dev guide]: https://rustc-dev-guide.rust-lang.org/about-this-guide.html
[rustdocs]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc/
## License
# `0` - no debug info
# `1` - line tables only
# `2` - full debug info with variable and type information
-# Can be overriden for specific subsets of Rust code (rustc, std or tools).
+# Can be overridden for specific subsets of Rust code (rustc, std or tools).
# Debuginfo for tests run with compiletest is not controlled by this option
# and needs to be enabled separately with `debuginfo-level-tests`.
#debuginfo-level = if debug { 2 } else { 0 }
- `libstd`
- Various submodules for tools, like rustdoc, rls, etc.
-For more information on how various parts of the compiler work, see the [rustc guide].
+For more information on how various parts of the compiler work, see the [rustc dev guide].
-[rustc guide]: https://rust-lang.github.io/rustc-guide/about-this-guide.html
+[rustc dev guide]: https://rustc-dev-guide.rust-lang.org/about-this-guide.html
# run all unit tests
./x.py test
+ # execute tool tests
+ ./x.py test tidy
+
# execute the UI test suite
./x.py test src/test/ui
self.clear_if_dirty(&my_out, &rustdoc);
}
- cargo.env("CARGO_TARGET_DIR", &out_dir).arg(cmd).arg("-Zconfig-profile");
+ cargo.env("CARGO_TARGET_DIR", &out_dir).arg(cmd);
let profile_var = |name: &str| {
let profile = if self.config.rust_optimize { "RELEASE" } else { "DEV" };
rustflags.arg("-Zforce-unstable-if-unmarked");
}
- // cfg(bootstrap): the flag was renamed from `-Zexternal-macro-backtrace`
- // to `-Zmacro-backtrace`, keep only the latter after beta promotion.
- if stage == 0 {
- rustflags.arg("-Zexternal-macro-backtrace");
- } else {
- rustflags.arg("-Zmacro-backtrace");
- }
+ rustflags.arg("-Zmacro-backtrace");
let want_rustdoc = self.doc_tests != DocTests::No;
use crate::Build;
// The version number
-pub const CFG_RELEASE_NUM: &str = "1.43.0";
+pub const CFG_RELEASE_NUM: &str = "1.44.0";
pub struct GitInfo {
inner: Option<Info>,
copy_and_stamp(&srcdir, "crt1.o");
}
- // Copies libunwind.a compiled to be linked wit x86_64-fortanix-unknown-sgx.
+ // Copies libunwind.a compiled to be linked with x86_64-fortanix-unknown-sgx.
//
// This target needs to be linked to Fortanix's port of llvm's libunwind.
// libunwind requires support for rwlock and printing to stderr,
"src/tools/rustc-std-workspace-core",
"src/tools/rustc-std-workspace-alloc",
"src/tools/rustc-std-workspace-std",
- "src/librustc",
- "src/librustc_ast",
];
copy_src_dirs(builder, &std_src_dirs[..], &[], &dst_src);
pub rustc_error_format: Option<String>,
pub dry_run: bool,
- // This overrides the deny-warnings configuation option,
+ // This overrides the deny-warnings configuration option,
// which passes -Dwarnings to the compiler invocations.
//
// true => deny, false => warn
subcommand_help.push_str(
"\n
Arguments:
- This subcommand accepts a number of paths to directories to tests that
+ This subcommand accepts a number of paths to test directories that
should be compiled and run. For example:
./x.py test src/test/ui
just like `build src/libstd --stage N`, it tests the compiler produced by the previous
stage.
+ Execute tool tests with a tool name argument:
+
+ ./x.py test tidy
+
If no arguments are passed then the complete artifacts for that stage are
compiled and tested.
}
pub fn format(build: &Build, check: bool) {
+ if build.config.dry_run {
+ return;
+ }
let mut builder = ignore::types::TypesBuilder::new();
builder.add_defaults();
builder.select("rust");
use std::env;
use std::ffi::OsString;
use std::fs::{self, File};
+use std::io;
use std::path::{Path, PathBuf};
use std::process::Command;
}
}
- let llvm_info = &builder.in_tree_llvm_info;
let root = "src/llvm-project/llvm";
let out_dir = builder.llvm_out(target);
let mut llvm_config_ret_dir = builder.llvm_out(builder.config.build);
let build_llvm_config =
llvm_config_ret_dir.join(exe("llvm-config", &*builder.config.build));
- let done_stamp = out_dir.join("llvm-finished-building");
- if done_stamp.exists() {
- if builder.config.llvm_skip_rebuild {
- builder.info(
- "Warning: \
- Using a potentially stale build of LLVM; \
- This may not behave well.",
- );
- return build_llvm_config;
- }
+ let stamp = out_dir.join("llvm-finished-building");
+ let stamp = HashStamp::new(stamp, builder.in_tree_llvm_info.sha());
- if let Some(llvm_commit) = llvm_info.sha() {
- let done_contents = t!(fs::read(&done_stamp));
+ if builder.config.llvm_skip_rebuild && stamp.path.exists() {
+ builder.info(
+ "Warning: \
+ Using a potentially stale build of LLVM; \
+ This may not behave well.",
+ );
+ return build_llvm_config;
+ }
- // If LLVM was already built previously and the submodule's commit didn't change
- // from the previous build, then no action is required.
- if done_contents == llvm_commit.as_bytes() {
- return build_llvm_config;
- }
- } else {
+ if stamp.is_done() {
+ if stamp.hash.is_none() {
builder.info(
"Could not determine the LLVM submodule commit hash. \
Assuming that an LLVM rebuild is not necessary.",
);
builder.info(&format!(
"To force LLVM to rebuild, remove the file `{}`",
- done_stamp.display()
+ stamp.path.display()
));
- return build_llvm_config;
}
+ return build_llvm_config;
}
builder.info(&format!("Building LLVM for {}", target));
+ t!(stamp.remove());
let _time = util::timeit(&builder);
t!(fs::create_dir_all(&out_dir));
cfg.build();
- t!(fs::write(&done_stamp, llvm_info.sha().unwrap_or("")));
+ t!(stamp.write());
build_llvm_config
}
return runtimes;
}
- let done_stamp = out_dir.join("sanitizers-finished-building");
- if done_stamp.exists() {
- builder.info(&format!(
- "Assuming that sanitizers rebuild is not necessary. \
- To force a rebuild, remove the file `{}`",
- done_stamp.display()
- ));
+ let stamp = out_dir.join("sanitizers-finished-building");
+ let stamp = HashStamp::new(stamp, builder.in_tree_llvm_info.sha());
+
+ if stamp.is_done() {
+ if stamp.hash.is_none() {
+ builder.info(&format!(
+ "Rebuild sanitizers by removing the file `{}`",
+ stamp.path.display()
+ ));
+ }
return runtimes;
}
builder.info(&format!("Building sanitizers for {}", self.target));
+ t!(stamp.remove());
let _time = util::timeit(&builder);
let mut cfg = cmake::Config::new(&compiler_rt_dir);
cfg.build_target(&runtime.cmake_target);
cfg.build();
}
-
- t!(fs::write(&done_stamp, b""));
+ t!(stamp.write());
runtimes
}
}
result
}
+
+struct HashStamp {
+ path: PathBuf,
+ hash: Option<Vec<u8>>,
+}
+
+impl HashStamp {
+ fn new(path: PathBuf, hash: Option<&str>) -> Self {
+ HashStamp { path, hash: hash.map(|s| s.as_bytes().to_owned()) }
+ }
+
+ fn is_done(&self) -> bool {
+ match fs::read(&self.path) {
+ Ok(h) => self.hash.as_deref().unwrap_or(b"") == h.as_slice(),
+ Err(e) if e.kind() == io::ErrorKind::NotFound => false,
+ Err(e) => {
+ panic!("failed to read stamp file `{}`: {}", self.path.display(), e);
+ }
+ }
+ }
+
+ fn remove(&self) -> io::Result<()> {
+ match fs::remove_file(&self.path) {
+ Ok(()) => Ok(()),
+ Err(e) => {
+ if e.kind() == io::ErrorKind::NotFound {
+ Ok(())
+ } else {
+ Err(e)
+ }
+ }
+ }
+ }
+
+ fn write(&self) -> io::Result<()> {
+ fs::write(&self.path, self.hash.as_deref().unwrap_or(b""))
+ }
+}
// Prevent bootstrap's `rustc` wrapper from overwriting our flags.
cargo.env("RUSTC_DEBUG_ASSERTIONS", "true");
// Let cargo-miri know where xargo ended up.
- cargo.env("XARGO", builder.out.join("bin").join("xargo"));
+ cargo.env("XARGO_CHECK", builder.out.join("bin").join("xargo-check"));
let mut cargo = Command::from(cargo);
if !try_run(builder, &mut cargo) {
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
pub struct RustdocJSStd {
- pub host: Interned<String>,
pub target: Interned<String>,
}
}
fn make_run(run: RunConfig<'_>) {
- run.builder.ensure(RustdocJSStd { host: run.host, target: run.target });
+ run.builder.ensure(RustdocJSStd { target: run.target });
}
fn run(self, builder: &Builder<'_>) {
if let Some(ref nodejs) = builder.config.nodejs {
let mut command = Command::new(nodejs);
- command.args(&["src/tools/rustdoc-js-std/tester.js", &*self.host]);
+ command
+ .arg(builder.src.join("src/tools/rustdoc-js-std/tester.js"))
+ .arg(builder.doc_out(self.target))
+ .arg(builder.src.join("src/test/rustdoc-js-std"));
builder.ensure(crate::doc::Std { target: self.target, stage: builder.top_stage });
builder.run(&mut command);
} else {
let mut cmd = builder.tool_cmd(Tool::Tidy);
cmd.arg(builder.src.join("src"));
cmd.arg(&builder.initial_cargo);
- if !builder.config.vendor {
- cmd.arg("--no-vendor");
- }
if builder.is_verbose() {
cmd.arg("--verbose");
}
use std::fmt;
use std::fs;
use std::io::{Seek, SeekFrom};
-use std::path::PathBuf;
+use std::path::{Path, PathBuf};
use std::process::Command;
use std::time;
type ToolstateData = HashMap<Box<str>, ToolState>;
-#[derive(Copy, Clone, Debug, Deserialize, Serialize, PartialEq, Eq)]
+#[derive(Copy, Clone, Debug, Deserialize, Serialize, PartialEq, Eq, PartialOrd)]
#[serde(rename_all = "kebab-case")]
/// Whether a tool can be compiled, tested or neither
pub enum ToolState {
impl Step for ToolStateCheck {
type Output = ();
- /// Runs the `linkchecker` tool as compiled in `stage` by the `host` compiler.
+ /// Checks tool state status.
///
- /// This tool in `src/tools` will verify the validity of all our links in the
- /// documentation to ensure we don't have a bunch of dead ones.
+ /// This is intended to be used in the `checktools.sh` script. To use
+ /// this, set `save-toolstates` in `config.toml` so that tool status will
+ /// be saved to a JSON file. Then, run `x.py test --no-fail-fast` for all
+ /// of the tools to populate the JSON file. After that is done, this
+ /// command can be run to check for any status failures, and exits with an
+ /// error if there are any.
+ ///
+ /// This also handles publishing the results to the `history` directory of
+ /// the toolstate repo https://github.com/rust-lang-nursery/rust-toolstate
+ /// if the env var `TOOLSTATE_PUBLISH` is set. Note that there is a
+ /// *separate* step of updating the `latest.json` file and creating GitHub
+ /// issues and comments in `src/ci/publish_toolstate.sh`, which is only
+ /// performed on master. (The shell/python code is intended to be migrated
+ /// here eventually.)
+ ///
+ /// The rules for failure are:
+ /// * If the PR modifies a tool, the status must be test-pass.
+ /// NOTE: There is intent to change this, see
+ /// https://github.com/rust-lang/rust/issues/65000.
+ /// * All "stable" tools must be test-pass on the stable or beta branches.
+ /// * During beta promotion week, a PR is not allowed to "regress" a
+ /// stable tool. That is, the status is not allowed to get worse
+ /// (test-pass to test-fail or build-fail).
fn run(self, builder: &Builder<'_>) {
if builder.config.dry_run {
return;
}
check_changed_files(&toolstates);
+ checkout_toolstate_repo();
+ let old_toolstate = read_old_toolstate();
for (tool, _) in STABLE_TOOLS.iter() {
let state = toolstates[*tool];
did_error = true;
eprintln!("error: Tool `{}` should be test-pass but is {}", tool, state);
} else if in_beta_week {
- did_error = true;
- eprintln!(
- "error: Tool `{}` should be test-pass but is {} during beta week.",
- tool, state
- );
+ let old_state = old_toolstate
+ .iter()
+ .find(|ts| ts.tool == *tool)
+ .expect("latest.json missing tool")
+ .state();
+ if state < old_state {
+ did_error = true;
+ eprintln!(
+ "error: Tool `{}` has regressed from {} to {} during beta week.",
+ tool, old_state, state
+ );
+ } else {
+ // This warning only appears in the logs, which most
+ // people won't read. It's mostly here for testing and
+ // debugging.
+ eprintln!(
+ "warning: Tool `{}` is not test-pass (is `{}`), \
+ this should be fixed before beta is branched.",
+ tool, state
+ );
+ }
}
+ // `publish_toolstate.py` is responsible for updating
+ // `latest.json` and creating comments/issues warning people
+ // if there is a regression. That all happens in a separate CI
+ // job on the master branch once the PR has passed all tests
+ // on the `auto` branch.
}
}
}
if builder.config.channel == "nightly" && env::var_os("TOOLSTATE_PUBLISH").is_some() {
- commit_toolstate_change(&toolstates, in_beta_week);
+ commit_toolstate_change(&toolstates);
}
}
}
}
+fn toolstate_repo() -> String {
+ env::var("TOOLSTATE_REPO")
+ .unwrap_or_else(|_| "https://github.com/rust-lang-nursery/rust-toolstate.git".to_string())
+}
+
+/// Directory where the toolstate repo is checked out.
+const TOOLSTATE_DIR: &str = "rust-toolstate";
+
+/// Checks out the toolstate repo into `TOOLSTATE_DIR`.
+fn checkout_toolstate_repo() {
+ if let Ok(token) = env::var("TOOLSTATE_REPO_ACCESS_TOKEN") {
+ prepare_toolstate_config(&token);
+ }
+ if Path::new(TOOLSTATE_DIR).exists() {
+ eprintln!("Cleaning old toolstate directory...");
+ t!(fs::remove_dir_all(TOOLSTATE_DIR));
+ }
+
+ let status = Command::new("git")
+ .arg("clone")
+ .arg("--depth=1")
+ .arg(toolstate_repo())
+ .arg(TOOLSTATE_DIR)
+ .status();
+ let success = match status {
+ Ok(s) => s.success(),
+ Err(_) => false,
+ };
+ if !success {
+ panic!("git clone unsuccessful (status: {:?})", status);
+ }
+}
+
+/// Sets up config and authentication for modifying the toolstate repo.
+fn prepare_toolstate_config(token: &str) {
+ fn git_config(key: &str, value: &str) {
+ let status = Command::new("git").arg("config").arg("--global").arg(key).arg(value).status();
+ let success = match status {
+ Ok(s) => s.success(),
+ Err(_) => false,
+ };
+ if !success {
+ panic!("git config key={} value={} failed (status: {:?})", key, value, status);
+ }
+ }
+
+ // If changing anything here, then please check that `src/ci/publish_toolstate.sh` is up to date
+ // as well.
+ git_config("user.email", "7378925+rust-toolstate-update@users.noreply.github.com");
+ git_config("user.name", "Rust Toolstate Update");
+ git_config("credential.helper", "store");
+
+    let credential = format!("https://{}:x-oauth-basic@github.com\n", token);
+ let git_credential_path = PathBuf::from(t!(env::var("HOME"))).join(".git-credentials");
+ t!(fs::write(&git_credential_path, credential));
+}
+
+/// Reads the latest toolstate from the toolstate repo.
+fn read_old_toolstate() -> Vec<RepoState> {
+ let latest_path = Path::new(TOOLSTATE_DIR).join("_data").join("latest.json");
+ let old_toolstate = t!(fs::read(latest_path));
+ t!(serde_json::from_slice(&old_toolstate))
+}
+
/// This function `commit_toolstate_change` provides functionality for pushing a change
/// to the `rust-toolstate` repository.
///
///
/// * See <https://help.github.com/articles/about-commit-email-addresses/>
/// if a private email by GitHub is wanted.
-fn commit_toolstate_change(current_toolstate: &ToolstateData, in_beta_week: bool) {
- fn git_config(key: &str, value: &str) {
- let status = Command::new("git").arg("config").arg("--global").arg(key).arg(value).status();
- let success = match status {
- Ok(s) => s.success(),
- Err(_) => false,
- };
- if !success {
- panic!("git config key={} value={} successful (status: {:?})", key, value, status);
- }
- }
-
- // If changing anything here, then please check that src/ci/publish_toolstate.sh is up to date
- // as well.
- git_config("user.email", "7378925+rust-toolstate-update@users.noreply.github.com");
- git_config("user.name", "Rust Toolstate Update");
- git_config("credential.helper", "store");
-
- let credential = format!(
- "https://{}:x-oauth-basic@github.com\n",
- t!(env::var("TOOLSTATE_REPO_ACCESS_TOKEN")),
- );
- let git_credential_path = PathBuf::from(t!(env::var("HOME"))).join(".git-credentials");
- t!(fs::write(&git_credential_path, credential));
-
- let status = Command::new("git")
- .arg("clone")
- .arg("--depth=1")
- .arg(t!(env::var("TOOLSTATE_REPO")))
- .status();
- let success = match status {
- Ok(s) => s.success(),
- Err(_) => false,
- };
- if !success {
- panic!("git clone successful (status: {:?})", status);
- }
-
- let old_toolstate = t!(fs::read("rust-toolstate/_data/latest.json"));
- let old_toolstate: Vec<RepoState> = t!(serde_json::from_slice(&old_toolstate));
-
+fn commit_toolstate_change(current_toolstate: &ToolstateData) {
let message = format!("({} CI update)", OS.expect("linux/windows only"));
let mut success = false;
for _ in 1..=5 {
- // Update the toolstate results (the new commit-to-toolstate mapping) in the toolstate repo.
- change_toolstate(¤t_toolstate, &old_toolstate, in_beta_week);
+ // Upload the test results (the new commit-to-toolstate mapping) to the toolstate repo.
+ // This does *not* change the "current toolstate"; that only happens post-landing
+ // via `src/ci/docker/publish_toolstate.sh`.
+ publish_test_results(¤t_toolstate);
// `git commit` failing means nothing to commit.
let status = t!(Command::new("git")
- .current_dir("rust-toolstate")
+ .current_dir(TOOLSTATE_DIR)
.arg("commit")
.arg("-a")
.arg("-m")
}
let status = t!(Command::new("git")
- .current_dir("rust-toolstate")
+ .current_dir(TOOLSTATE_DIR)
.arg("push")
.arg("origin")
.arg("master")
eprintln!("Sleeping for 3 seconds before retrying push");
std::thread::sleep(std::time::Duration::from_secs(3));
let status = t!(Command::new("git")
- .current_dir("rust-toolstate")
+ .current_dir(TOOLSTATE_DIR)
.arg("fetch")
.arg("origin")
.arg("master")
.status());
assert!(status.success());
let status = t!(Command::new("git")
- .current_dir("rust-toolstate")
+ .current_dir(TOOLSTATE_DIR)
.arg("reset")
.arg("--hard")
.arg("origin/master")
}
}
-fn change_toolstate(
- current_toolstate: &ToolstateData,
- old_toolstate: &[RepoState],
- in_beta_week: bool,
-) {
- let mut regressed = false;
- for repo_state in old_toolstate {
- let tool = &repo_state.tool;
- let state = if cfg!(target_os = "linux") {
- &repo_state.linux
- } else if cfg!(windows) {
- &repo_state.windows
- } else {
- unimplemented!()
- };
- let new_state = current_toolstate[tool.as_str()];
-
- if new_state != *state {
- eprintln!("The state of `{}` has changed from `{}` to `{}`", tool, state, new_state);
- if (new_state as u8) < (*state as u8) {
- if !["rustc-guide", "miri", "embedded-book"].contains(&tool.as_str()) {
- regressed = true;
- }
- }
- }
- }
-
- if regressed && in_beta_week {
- std::process::exit(1);
- }
-
+/// Updates the "history" files with the latest results.
+///
+/// These results will later be promoted to `latest.json` by the
+/// `publish_toolstate.py` script if the PR passes all tests and is merged to
+/// master.
+fn publish_test_results(current_toolstate: &ToolstateData) {
let commit = t!(std::process::Command::new("git").arg("rev-parse").arg("HEAD").output());
let commit = t!(String::from_utf8(commit.stdout));
let toolstate_serialized = t!(serde_json::to_string(¤t_toolstate));
- let history_path = format!("rust-toolstate/history/{}.tsv", OS.expect("linux/windows only"));
+ let history_path = Path::new(TOOLSTATE_DIR)
+ .join("history")
+ .join(format!("{}.tsv", OS.expect("linux/windows only")));
let mut file = t!(fs::read_to_string(&history_path));
let end_of_first_line = file.find('\n').unwrap();
file.insert_str(end_of_first_line, &format!("\n{}\t{}", commit.trim(), toolstate_serialized));
commit: String,
datetime: String,
}
+
+impl RepoState {
+ fn state(&self) -> ToolState {
+ if cfg!(target_os = "linux") {
+ self.linux
+ } else if cfg!(windows) {
+ self.windows
+ } else {
+ unimplemented!()
+ }
+ }
+}
# The macOS and Windows builds here are currently disabled due to them not being
# overly necessary on `try` builds. We also don't actually have anything that
-# consumes the artifacts currently. Perhaps one day we can reenable, but for now
+# consumes the artifacts currently. Perhaps one day we can re-enable, but for now
# it helps free up capacity on Azure.
# - job: macOS
# timeoutInMinutes: 600
ENV SCRIPT python2.7 ../x.py check --target=i686-pc-windows-gnu --host=i686-pc-windows-gnu && \
python2.7 ../x.py build --stage 0 src/tools/build-manifest && \
python2.7 ../x.py test --stage 0 src/tools/compiletest && \
+ python2.7 ../x.py test src/tools/tidy && \
/scripts/validate-toolstate.sh
libssl-dev \
pkg-config \
zlib1g-dev \
- xz-utils
+ xz-utils \
+ nodejs
COPY scripts/sccache.sh /scripts/
RUN sh /scripts/sccache.sh
--enable-llvm-link-shared \
--set rust.thin-lto-import-instr-limit=10
-ENV SCRIPT python2.7 ../x.py test src/tools/tidy && python2.7 ../x.py test
+ENV SCRIPT python2.7 ../x.py test --exclude src/tools/tidy && python2.7 ../x.py test src/tools/tidy
# The purpose of this container isn't to test with debug assertions and
# this is run on all PRs, so let's get speedier builds by disabling these extra
cd rust-toolstate
FAILURE=1
for RETRY_COUNT in 1 2 3 4 5; do
- # The purpose is to publish the new "current" toolstate in the toolstate repo.
+ # The purpose of this is to publish the new "current" toolstate in the toolstate repo.
+ # This happens post-landing, on master.
+ # (Publishing the per-commit test results happens pre-landing in src/bootstrap/toolstate.rs).
"$(ciCheckoutPath)/src/tools/publish_toolstate.py" "$GIT_COMMIT" \
"$GIT_COMMIT_MSG" \
"$MESSAGE_FILE" \
# one way or another. The msys interpreters seem to have weird path conversions
# baked in which break LLVM's build system one way or another, so let's use the
# native version which keeps everything as native as possible.
- cp C:/Python27amd64/python.exe C:/Python27amd64/python2.7.exe
- ciCommandAddPath "C:\\Python27amd64"
+ python_home="C:/hostedtoolcache/windows/Python/2.7.17/x64"
+ cp "${python_home}/python.exe" "${python_home}/python2.7.exe"
+ ciCommandAddPath "C:\\hostedtoolcache\\windows\\Python\\2.7.17\\x64"
fi
-Subproject commit b2e1092bf67bd4d7686c4553f186edbb7f5f92db
+Subproject commit d22a9c487c78095afc4584f1d9b4ec43529d713c
## The `rustc` Contribution Guide
-[The `rustc` Guide](https://rust-lang.github.io/rustc-guide/) documents how
+[The `rustc` Guide](https://rustc-dev-guide.rust-lang.org/) documents how
the compiler works and how to contribute to it. This is useful if you want to build
or modify the Rust compiler from source (e.g. to target something non-standard).
-Subproject commit 3e6e1001dc6e095dbd5c88005e80969f60e384e1
+Subproject commit 9f797e65e6bcc79419975b17aff8e21c9adc039f
-Subproject commit 64239df6d173562b9deb4f012e4c3e6e960c4754
+Subproject commit e2f11fe4d6a5ecb471c70323197da43c70cb96b6
User-agent: *
-Disallow: /0.3/
-Disallow: /0.4/
-Disallow: /0.5/
-Disallow: /0.6/
-Disallow: /0.7/
-Disallow: /0.8/
-Disallow: /0.9/
-Disallow: /0.10/
-Disallow: /0.11.0/
-Disallow: /0.12.0/
-Disallow: /1.0.0-alpha/
-Disallow: /1.0.0-alpha.2/
-Disallow: /1.0.0-beta/
-Disallow: /1.0.0-beta.2/
-Disallow: /1.0.0-beta.3/
-Disallow: /1.0.0-beta.4/
-Disallow: /1.0.0-beta.5/
+Disallow: /1.
+Disallow: /0.
Disallow: /book/first-edition/
Disallow: /book/second-edition/
Disallow: /stable/book/first-edition/
-Subproject commit 32facd5522ddbbf37baf01e4e4b6562bc55c071a
+Subproject commit cb369ae95ca36b841960182d26f6d5d9b2e3cc18
- `crate-name` — The name of the crate.
- `file-names` — The names of the files created by the `link` emit kind.
- `sysroot` — Path to the sysroot.
+- `target-libdir` - Path to the target libdir.
- `cfg` — List of cfg values. See [conditional compilation] for more
information about cfg values.
- `target-list` — List of known targets. The target may be selected with the
# Contributing to rustc
We'd love to have your help improving `rustc`! To that end, we've written [a
-whole book][rustc_guide] on its
+whole book][rustc_dev_guide] on its
internals, how it works, and how to get started working on it. To learn
more, you'll want to check that out.
If you would like to contribute to _this_ book, you can find its source in the
rustc source at [src/doc/rustc][rustc_book].
-[rustc_guide]: https://rust-lang.github.io/rustc-guide/
+[rustc_dev_guide]: https://rustc-dev-guide.rust-lang.org/
[rustc_book]: https://github.com/rust-lang/rust/tree/master/src/doc/rustc
## `#[cfg(doc)]`: Documenting platform-/feature-specific information
-For conditional compilation, Rustdoc treats your crate the same way the compiler does: Only things
+For conditional compilation, Rustdoc treats your crate the same way the compiler does. Only things
from the host target are available (or from the given `--target` if present), and everything else is
"filtered out" from the crate. This can cause problems if your crate is providing different things
on different targets and you want your documentation to reflect all the available items you
Using this flag looks like this:
```bash
-$ rustdoc src/lib.rs -o target\\doc
-$ rustdoc src/lib.rs --output target\\doc
+$ rustdoc src/lib.rs -o target/doc
+$ rustdoc src/lib.rs --output target/doc
```
By default, `rustdoc`'s output appears in a directory named `doc` in
`should_panic` tells `rustdoc` that the code should compile correctly, but
not actually pass as a test.
-```text
+```rust
/// ```no_run
/// loop {
/// println!("Hello, world");
[unstable-doc-cfg]: ../unstable-book/language-features/doc-cfg.html
[issue-doc-cfg]: https://github.com/rust-lang/rust/issues/43781
-### Adding your trait to the "Important Traits" dialog
-
-Rustdoc keeps a list of a few traits that are believed to be "fundamental" to a given type when
-implemented on it. These traits are intended to be the primary interface for their types, and are
-often the only thing available to be documented on their types. For this reason, Rustdoc will track
-when a given type implements one of these traits and call special attention to it when a function
-returns one of these types. This is the "Important Traits" dialog, visible as a circle-i button next
-to the function, which, when clicked, shows the dialog.
-
-In the standard library, the traits that qualify for inclusion are `Iterator`, `io::Read`, and
-`io::Write`. However, rather than being implemented as a hard-coded list, these traits have a
-special marker attribute on them: `#[doc(spotlight)]`. This means that you could apply this
-attribute to your own trait to include it in the "Important Traits" dialog in documentation.
-
-The `#[doc(spotlight)]` attribute currently requires the `#![feature(doc_spotlight)]` feature gate.
-For more information, see [its chapter in the Unstable Book][unstable-spotlight] and [its tracking
-issue][issue-spotlight].
-
-[unstable-spotlight]: ../unstable-book/language-features/doc-spotlight.html
-[issue-spotlight]: https://github.com/rust-lang/rust/issues/45040
-
### Exclude certain dependencies from documentation
The standard library uses several dependencies which, in turn, use several types and traits from the
### `--runtool`, `--runtool-arg`: program to run tests with; args to pass to it
-Using thses options looks like this:
+Using these options looks like this:
```bash
$ rustdoc src/lib.rs -Z unstable-options --runtool runner --runtool-arg --do-thing --runtool-arg --do-other-thing
Internally, this calls out to `rustdoc` like this:
```bash
-$ rustdoc --crate-name docs srclib.rs -o <path>\docs\target\doc -L
-dependency=<path>docs\target\debug\deps
+$ rustdoc --crate-name docs src/lib.rs -o <path>/docs/target/doc -L
+dependency=<path>/docs/target/debug/deps
```
You can see this with `cargo doc --verbose`.
## Summary
This covers the simplest use-cases of `rustdoc`. The rest of this book will
-explain all of the options that `rustdoc` has, and how to use them.
\ No newline at end of file
+explain all of the options that `rustdoc` has, and how to use them.
--- /dev/null
+# `const_eval_limit`
+
+The tracking issue for this feature is: [#67217]
+
+[#67217]: https://github.com/rust-lang/rust/issues/67217
+
+The `const_eval_limit` attribute allows a crate to limit the number of evaluation steps that compile-time function evaluation (CTFE) takes when evaluating a `const fn`.
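+
+For illustration, the limit is set with a pair of crate-level attributes. This is a sketch assuming the nightly syntax at the time the feature was introduced; the exact form may differ:
+
+```rust
+#![feature(const_eval_limit)]
+// Raise the CTFE step limit for this crate (value is illustrative).
+#![const_eval_limit = "1000000"]
+```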
The tracking issue for this feature is: [#49147]
-[#44109]: https://github.com/rust-lang/rust/issues/49147
+[#49147]: https://github.com/rust-lang/rust/issues/49147
------------------------
+++ /dev/null
-# `doc_spotlight`
-
-The tracking issue for this feature is: [#45040]
-
-The `doc_spotlight` feature allows the use of the `spotlight` parameter to the `#[doc]` attribute,
-to "spotlight" a specific trait on the return values of functions. Adding a `#[doc(spotlight)]`
-attribute to a trait definition will make rustdoc print extra information for functions which return
-a type that implements that trait. This attribute is applied to the `Iterator`, `io::Read`, and
-`io::Write` traits in the standard library.
-
-You can do this on your own traits, like this:
-
-```
-#![feature(doc_spotlight)]
-
-#[doc(spotlight)]
-pub trait MyTrait {}
-
-pub struct MyStruct;
-impl MyTrait for MyStruct {}
-
-/// The docs for this function will have an extra line about `MyStruct` implementing `MyTrait`,
-/// without having to write that yourself!
-pub fn my_fn() -> MyStruct { MyStruct }
-```
-
-This feature was originally implemented in PR [#45039].
-
-[#45040]: https://github.com/rust-lang/rust/issues/45040
-[#45039]: https://github.com/rust-lang/rust/pull/45039
# `impl_trait_in_bindings`
-The tracking issue for this feature is: [#34511]
+The tracking issue for this feature is: [#63065]
-[#34511]: https://github.com/rust-lang/rust/issues/34511
+[#63065]: https://github.com/rust-lang/rust/issues/63065
------------------------
#[lang = "eh_personality"] extern fn rust_eh_personality() {}
#[lang = "panic_impl"] extern fn rust_begin_panic(info: &PanicInfo) -> ! { unsafe { intrinsics::abort() } }
-#[lang = "eh_unwind_resume"] extern fn rust_eh_unwind_resume() {}
#[no_mangle] pub extern fn rust_eh_register_frames () {}
#[no_mangle] pub extern fn rust_eh_unregister_frames () {}
```
marked with lang items; those specific four are `eq`, `ord`,
`deref`, and `add` respectively.
- stack unwinding and general failure; the `eh_personality`,
- `eh_unwind_resume`, `fail` and `fail_bounds_checks` lang items.
+ `panic` and `panic_bounds_checks` lang items.
- the traits in `std::marker` used to indicate types of
various kinds; lang items `send`, `sync` and `copy`.
- the marker types and variance indicators found in
pub extern fn rust_eh_personality() {
}
-// This function may be needed based on the compilation target.
-#[lang = "eh_unwind_resume"]
-#[no_mangle]
-pub extern fn rust_eh_unwind_resume() {
-}
-
#[lang = "panic_impl"]
#[no_mangle]
pub extern fn rust_begin_panic(info: &PanicInfo) -> ! {
pub extern fn rust_eh_personality() {
}
-// This function may be needed based on the compilation target.
-#[lang = "eh_unwind_resume"]
-#[no_mangle]
-pub extern fn rust_eh_unwind_resume() {
-}
-
#[lang = "panic_impl"]
#[no_mangle]
pub extern fn rust_begin_panic(info: &PanicInfo) -> ! {
the screen. While the language item's name is `panic_impl`, the symbol name is
`rust_begin_panic`.
-A third function, `rust_eh_unwind_resume`, is also needed if the `custom_unwind_resume`
-flag is set in the options of the compilation target. It allows customizing the
-process of resuming unwind at the end of the landing pads. The language item's name
-is `eh_unwind_resume`.
+Finally, an `eh_catch_typeinfo` static is needed for certain targets which
+implement Rust panics on top of C++ exceptions.
## List of all language items
- `eh_personality`: `libpanic_unwind/emcc.rs` (EMCC)
- `eh_personality`: `libpanic_unwind/gcc.rs` (GNU)
- `eh_personality`: `libpanic_unwind/seh.rs` (SEH)
- - `eh_unwind_resume`: `libpanic_unwind/gcc.rs` (GCC)
- - `eh_catch_typeinfo`: `libpanic_unwind/seh.rs` (SEH)
- `eh_catch_typeinfo`: `libpanic_unwind/emcc.rs` (EMCC)
- `panic`: `libcore/panicking.rs`
- `panic_bounds_check`: `libcore/panicking.rs`
--- /dev/null
+# `link_cfg`
+
+This feature is internal to the Rust compiler and is not intended for general use.
+
+------------------------
impl<T: Copy> CheapToClone for T {}
-// These could potentally overlap with the blanket implementation above,
+// These could potentially overlap with the blanket implementation above,
// so are only allowed because CheapToClone is a marker trait.
impl<T: CheapToClone, U: CheapToClone> CheapToClone for (T, U) {}
impl<T: CheapToClone> CheapToClone for std::ops::Range<T> {}
The tracking issue for this feature is: [#41517]
-[#41417]: https://github.com/rust-lang/rust/issues/41517
+[#41517]: https://github.com/rust-lang/rust/issues/41517
------------------------
The tracking issue for this feature is [#60405]
-[60405]: https://github.com/rust-lang/rust/issues/60405
+[#60405]: https://github.com/rust-lang/rust/issues/60405
----
+++ /dev/null
-# `read_initializer`
-
-The tracking issue for this feature is: [#42788]
-
-[#0]: https://github.com/rust-lang/rust/issues/42788
-
-------------------------
--- /dev/null
+# `tidy_test_never_used_anywhere_else`
+
+This feature is internal to the Rust compiler and is not intended for general use.
+
+------------------------
#[unstable(feature = "allocator_api", issue = "32838")]
unsafe impl AllocRef for Global {
#[inline]
- unsafe fn alloc(&mut self, layout: Layout) -> Result<NonNull<u8>, AllocErr> {
- NonNull::new(alloc(layout)).ok_or(AllocErr)
+ fn alloc(&mut self, layout: Layout) -> Result<(NonNull<u8>, usize), AllocErr> {
+ if layout.size() == 0 {
+ Ok((layout.dangling(), 0))
+ } else {
+ unsafe { NonNull::new(alloc(layout)).ok_or(AllocErr).map(|p| (p, layout.size())) }
+ }
}
#[inline]
unsafe fn dealloc(&mut self, ptr: NonNull<u8>, layout: Layout) {
- dealloc(ptr.as_ptr(), layout)
+ if layout.size() != 0 {
+ dealloc(ptr.as_ptr(), layout)
+ }
}
#[inline]
ptr: NonNull<u8>,
layout: Layout,
new_size: usize,
- ) -> Result<NonNull<u8>, AllocErr> {
- NonNull::new(realloc(ptr.as_ptr(), layout, new_size)).ok_or(AllocErr)
+ ) -> Result<(NonNull<u8>, usize), AllocErr> {
+ match (layout.size(), new_size) {
+ (0, 0) => Ok((layout.dangling(), 0)),
+ (0, _) => self.alloc(Layout::from_size_align_unchecked(new_size, layout.align())),
+ (_, 0) => {
+ self.dealloc(ptr, layout);
+ Ok((layout.dangling(), 0))
+ }
+ (_, _) => NonNull::new(realloc(ptr.as_ptr(), layout, new_size))
+ .ok_or(AllocErr)
+ .map(|p| (p, new_size)),
+ }
}
#[inline]
- unsafe fn alloc_zeroed(&mut self, layout: Layout) -> Result<NonNull<u8>, AllocErr> {
- NonNull::new(alloc_zeroed(layout)).ok_or(AllocErr)
+ fn alloc_zeroed(&mut self, layout: Layout) -> Result<(NonNull<u8>, usize), AllocErr> {
+ if layout.size() == 0 {
+ Ok((layout.dangling(), 0))
+ } else {
+ unsafe {
+ NonNull::new(alloc_zeroed(layout)).ok_or(AllocErr).map(|p| (p, layout.size()))
+ }
+ }
}
}
} else {
let layout = Layout::from_size_align_unchecked(size, align);
match Global.alloc(layout) {
- Ok(ptr) => ptr.as_ptr(),
+ Ok((ptr, _)) => ptr.as_ptr(),
Err(_) => handle_alloc_error(layout),
}
}
fn allocate_zeroed() {
unsafe {
let layout = Layout::from_size_align(1024, 1).unwrap();
- let ptr =
+ let (ptr, _) =
Global.alloc_zeroed(layout.clone()).unwrap_or_else(|_| handle_alloc_error(layout));
let mut i = ptr.cast::<u8>().as_ptr();
let ptr = if layout.size() == 0 {
NonNull::dangling()
} else {
- Global.alloc(layout).unwrap_or_else(|_| alloc::handle_alloc_error(layout)).cast()
+ Global.alloc(layout).unwrap_or_else(|_| alloc::handle_alloc_error(layout)).0.cast()
};
Box::from_raw(ptr.as_ptr())
}
let ptr = if layout.size() == 0 {
NonNull::dangling()
} else {
- Global.alloc(layout).unwrap_or_else(|_| alloc::handle_alloc_error(layout)).cast()
+ Global.alloc(layout).unwrap_or_else(|_| alloc::handle_alloc_error(layout)).0.cast()
};
Box::from_raw(slice::from_raw_parts_mut(ptr.as_ptr(), len))
}
#[stable(feature = "pin", since = "1.33.0")]
impl<T: ?Sized> Unpin for Box<T> {}
-#[cfg(bootstrap)]
-#[unstable(feature = "generator_trait", issue = "43122")]
-impl<G: ?Sized + Generator + Unpin> Generator for Box<G> {
- type Yield = G::Yield;
- type Return = G::Return;
-
- fn resume(mut self: Pin<&mut Self>) -> GeneratorState<Self::Yield, Self::Return> {
- G::resume(Pin::new(&mut *self))
- }
-}
-
-#[cfg(bootstrap)]
-#[unstable(feature = "generator_trait", issue = "43122")]
-impl<G: ?Sized + Generator> Generator for Pin<Box<G>> {
- type Yield = G::Yield;
- type Return = G::Return;
-
- fn resume(mut self: Pin<&mut Self>) -> GeneratorState<Self::Yield, Self::Return> {
- G::resume((*self).as_mut())
- }
-}
-
-#[cfg(not(bootstrap))]
#[unstable(feature = "generator_trait", issue = "43122")]
impl<G: ?Sized + Generator<R> + Unpin, R> Generator<R> for Box<G> {
type Yield = G::Yield;
}
}
-#[cfg(not(bootstrap))]
#[unstable(feature = "generator_trait", issue = "43122")]
impl<G: ?Sized + Generator<R>, R> Generator<R> for Pin<Box<G>> {
type Yield = G::Yield;
while child < end {
let right = child + 1;
// compare with the greater of the two children
- if right < end && !(hole.get(child) > hole.get(right)) {
+ if right < end && hole.get(child) <= hole.get(right) {
child = right;
}
// if we are already in order, stop.
while child < end {
let right = child + 1;
// compare with the greater of the two children
- if right < end && !(hole.get(child) > hole.get(right)) {
+ if right < end && hole.get(child) <= hole.get(right) {
child = right;
}
hole.move_to(child);
// Continue the same loop we perform below. This only runs when unwinding, so we
// don't have to care about panics this time (they'll abort).
while let Some(_) = self.0.next() {}
+
+ // No need to avoid the shared root, because the tree was definitely not empty.
+ unsafe {
+ let mut node = ptr::read(&self.0.front).into_node().forget_type();
+ while let Some(parent) = node.deallocate_and_ascend() {
+ node = parent.into_node().forget_type();
+ }
+ }
}
}
if node.is_shared_root() {
return;
}
-
+ // Most of the nodes have been deallocated while traversing
+ // but one pile from a leaf up to the root is left standing.
while let Some(parent) = node.deallocate_and_ascend() {
node = parent.into_node().forget_type();
}
}
}
-/// An owned pointer to a node. This basically is either `Box<LeafNode<K, V>>` or
-/// `Box<InternalNode<K, V>>`. However, it contains no information as to which of the two types
-/// of nodes is actually behind the box, and, partially due to this lack of information, has no
-/// destructor.
+/// A managed, non-null pointer to a node. This is either an owned pointer to
+/// `LeafNode<K, V>`, an owned pointer to `InternalNode<K, V>`, or a (not owned)
+/// pointer to `NodeHeader<(), ()>` (more specifically, the pointer to `EMPTY_ROOT_NODE`).
+/// All of these types have a `NodeHeader<K, V>` prefix, meaning that they have at
+/// least the same size as `NodeHeader<K, V>` and store the same kinds of data at the same
+/// offsets; and they have a pointer alignment at least as large as `NodeHeader<K, V>`'s.
+/// However, `BoxedNode` contains no information as to which of the three types
+/// of nodes it actually contains, and, partially due to this lack of information,
+/// has no destructor.
struct BoxedNode<K, V> {
ptr: Unique<LeafNode<K, V>>,
}
}
fn from_internal(node: Box<InternalNode<K, V>>) -> Self {
- unsafe {
- BoxedNode { ptr: Unique::new_unchecked(Box::into_raw(node) as *mut LeafNode<K, V>) }
- }
+ BoxedNode { ptr: Box::into_unique(node).cast() }
}
unsafe fn from_ptr(ptr: NonNull<LeafNode<K, V>>) -> Self {
}
}
-/// An owned tree. Note that despite being owned, this does not have a destructor,
-/// and must be cleaned up manually.
+/// Either an owned tree or a shared, empty tree. Note that this does not have a destructor,
+/// and must be cleaned up manually if it is an owned tree.
pub struct Root<K, V> {
node: BoxedNode<K, V>,
+ /// The number of levels below the root node.
height: usize,
}
unsafe impl<K: Send, V: Send> Send for Root<K, V> {}
impl<K, V> Root<K, V> {
+ /// Whether the instance of `Root` wraps a shared, empty root node. If not,
+ /// the entire tree is uniquely owned by the owner of the `Root` instance.
pub fn is_shared_root(&self) -> bool {
self.as_ref().is_shared_root()
}
+ /// Returns a shared tree, wrapping a shared root node that is eternally empty.
pub fn shared_empty_root() -> Self {
Root {
- node: unsafe {
- BoxedNode::from_ptr(NonNull::new_unchecked(
- &EMPTY_ROOT_NODE as *const _ as *const LeafNode<K, V> as *mut _,
- ))
- },
+ node: unsafe { BoxedNode::from_ptr(NonNull::from(&EMPTY_ROOT_NODE).cast()) },
height: 0,
}
}
+ /// Returns a new owned tree, with its own root node that is initially empty.
pub fn new_leaf() -> Self {
Root { node: BoxedNode::from_leaf(Box::new(unsafe { LeafNode::new() })), height: 0 }
}
/// `NodeRef` could be pointing to either type of node.
/// Note that in case of a leaf node, this might still be the shared root!
/// Only turn this into a `LeafNode` reference if you know it is not the shared root!
-/// Shared references must be dereferencable *for the entire size of their pointee*,
+/// Shared references must be dereferenceable *for the entire size of their pointee*,
/// so '&LeafNode` or `&InternalNode` pointing to the shared root is undefined behavior.
/// Turning this into a `NodeHeader` reference is always safe.
pub struct NodeRef<BorrowType, K, V, Type> {
+ /// The number of levels below the node.
height: usize,
node: NonNull<LeafNode<K, V>>,
// `root` is null unless the borrow type is `Mut`
let right_len = right_node.len();
// necessary for correctness, but in a private module
- assert!(left_len + right_len + 1 <= CAPACITY);
+ assert!(left_len + right_len < CAPACITY);
unsafe {
ptr::write(
/// d.push_front(2);
/// d.push_front(3);
///
- /// let mut splitted = d.split_off(2);
+ /// let mut split = d.split_off(2);
///
- /// assert_eq!(splitted.pop_front(), Some(1));
- /// assert_eq!(splitted.pop_front(), None);
+ /// assert_eq!(split.pop_front(), Some(1));
+ /// assert_eq!(split.pop_front(), None);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn split_off(&mut self, at: usize) -> LinkedList<T> {
let it = self.head;
let old_len = self.len;
- DrainFilter { list: self, it: it, pred: filter, idx: 0, old_len: old_len }
+ DrainFilter { list: self, it, pred: filter, idx: 0, old_len }
}
}
/// `CursorMut`, which means it cannot outlive the `CursorMut` and that the
/// `CursorMut` is frozen for the lifetime of the `Cursor`.
#[unstable(feature = "linked_list_cursors", issue = "58533")]
- pub fn as_cursor<'cm>(&'cm self) -> Cursor<'cm, T> {
+ pub fn as_cursor(&self) -> Cursor<'_, T> {
Cursor { list: self.list, current: self.current, index: self.index }
}
}
pub use vec_deque::VecDeque;
use crate::alloc::{Layout, LayoutErr};
+use core::fmt::Display;
/// The error type for `try_reserve` methods.
#[derive(Clone, PartialEq, Eq, Debug)]
}
}
+#[unstable(feature = "try_reserve", reason = "new API", issue = "48043")]
+impl Display for TryReserveError {
+ fn fmt(
+ &self,
+ fmt: &mut core::fmt::Formatter<'_>,
+ ) -> core::result::Result<(), core::fmt::Error> {
+ fmt.write_str("memory allocation failed")?;
+ let reason = match &self {
+ TryReserveError::CapacityOverflow => {
+ " because the computed capacity exceeded the collection's maximum"
+ }
+        TryReserveError::AllocError { .. } => " because the memory allocator returned an error",
+ };
+ fmt.write_str(reason)
+ }
+}
+
/// An intermediate trait for specialization of `Extend`.
#[doc(hidden)]
trait SpecExtend<I: IntoIterator> {
// Safety: the following two methods require that the rotation amount
// be less than half the length of the deque.
//
- // `wrap_copy` requres that `min(x, cap() - x) + copy_len <= cap()`,
+ // `wrap_copy` requires that `min(x, cap() - x) + copy_len <= cap()`,
    // but then `min` is never more than half the capacity, regardless of `x`,
// so it's sound to call here because we're calling with something
// less than half the length, which is never above half the capacity.
//! Hello, ` 123` has 3 right-aligned characters
//! ```
//!
+//! ## Localization
+//!
+//! In some programming languages, the behavior of string formatting functions
+//! depends on the operating system's locale setting. The format functions
+//! provided by Rust's standard library do not have any concept of locale, and
+//! will produce the same results on all systems regardless of user
+//! configuration.
+//!
+//! For example, the following code will always print `1.5` even if the system
+//! locale uses a decimal separator other than a dot.
+//!
+//! ```
+//! println!("The value is {}", 1.5);
+//! ```
+//!
//! # Escaping
//!
//! The literal characters `{` and `}` may be included in a string by preceding
#![feature(box_into_raw_non_null)]
#![feature(box_patterns)]
#![feature(box_syntax)]
+#![feature(cfg_sanitize)]
#![feature(cfg_target_has_atomic)]
#![feature(coerce_unsized)]
#![feature(const_generic_impls_guard)]
RawVec::allocate_in(capacity, true, a)
}
- fn allocate_in(capacity: usize, zeroed: bool, mut a: A) -> Self {
- unsafe {
- let elem_size = mem::size_of::<T>();
+ fn allocate_in(mut capacity: usize, zeroed: bool, mut a: A) -> Self {
+ let elem_size = mem::size_of::<T>();
- let alloc_size = capacity.checked_mul(elem_size).unwrap_or_else(|| capacity_overflow());
- alloc_guard(alloc_size).unwrap_or_else(|_| capacity_overflow());
+ let alloc_size = capacity.checked_mul(elem_size).unwrap_or_else(|| capacity_overflow());
+ alloc_guard(alloc_size).unwrap_or_else(|_| capacity_overflow());
- // Handles ZSTs and `capacity == 0` alike.
- let ptr = if alloc_size == 0 {
- NonNull::<T>::dangling()
- } else {
- let align = mem::align_of::<T>();
- let layout = Layout::from_size_align(alloc_size, align).unwrap();
- let result = if zeroed { a.alloc_zeroed(layout) } else { a.alloc(layout) };
- match result {
- Ok(ptr) => ptr.cast(),
- Err(_) => handle_alloc_error(layout),
+ // Handles ZSTs and `capacity == 0` alike.
+ let ptr = if alloc_size == 0 {
+ NonNull::<T>::dangling()
+ } else {
+ let align = mem::align_of::<T>();
+ let layout = Layout::from_size_align(alloc_size, align).unwrap();
+ let result = if zeroed { a.alloc_zeroed(layout) } else { a.alloc(layout) };
+ match result {
+ Ok((ptr, size)) => {
+ capacity = size / elem_size;
+ ptr.cast()
}
- };
+ Err(_) => handle_alloc_error(layout),
+ }
+ };
- RawVec { ptr: ptr.into(), cap: capacity, a }
- }
+ RawVec { ptr: ptr.into(), cap: capacity, a }
}
}
// 0, getting to here necessarily means the `RawVec` is overfull.
assert!(elem_size != 0, "capacity overflow");
- let (new_cap, ptr) = match self.current_layout() {
+ let (ptr, new_cap) = match self.current_layout() {
Some(cur) => {
// Since we guarantee that we never allocate more than
// `isize::MAX` bytes, `elem_size * self.cap <= isize::MAX` as
alloc_guard(new_size).unwrap_or_else(|_| capacity_overflow());
let ptr_res = self.a.realloc(NonNull::from(self.ptr).cast(), cur, new_size);
match ptr_res {
- Ok(ptr) => (new_cap, ptr),
+ Ok((ptr, new_size)) => (ptr, new_size / elem_size),
Err(_) => handle_alloc_error(Layout::from_size_align_unchecked(
new_size,
cur.align(),
let new_cap = if elem_size > (!0) / 8 { 1 } else { 4 };
let layout = Layout::array::<T>(new_cap).unwrap();
match self.a.alloc(layout) {
- Ok(ptr) => (new_cap, ptr),
+ Ok((ptr, new_size)) => (ptr, new_size / elem_size),
Err(_) => handle_alloc_error(layout),
}
}
let align = mem::align_of::<T>();
let old_layout = Layout::from_size_align_unchecked(old_size, align);
match self.a.realloc(NonNull::from(self.ptr).cast(), old_layout, new_size) {
- Ok(p) => self.ptr = p.cast().into(),
+ Ok((ptr, _)) => self.ptr = ptr.cast().into(),
Err(_) => {
handle_alloc_error(Layout::from_size_align_unchecked(new_size, align))
}
fallibility: Fallibility,
strategy: ReserveStrategy,
) -> Result<(), TryReserveError> {
+ let elem_size = mem::size_of::<T>();
+
unsafe {
// NOTE: we don't early branch on ZSTs here because we want this
// to actually catch "asking for more than usize::MAX" in that case.
None => self.a.alloc(new_layout),
};
- let ptr = match (res, fallibility) {
+ let (ptr, new_cap) = match (res, fallibility) {
(Err(AllocErr), Infallible) => handle_alloc_error(new_layout),
(Err(AllocErr), Fallible) => {
return Err(TryReserveError::AllocError {
non_exhaustive: (),
});
}
- (Ok(ptr), _) => ptr,
+ (Ok((ptr, new_size)), _) => (ptr, new_size / elem_size),
};
self.ptr = ptr.cast().into();
fuel: usize,
}
unsafe impl AllocRef for BoundedAlloc {
- unsafe fn alloc(&mut self, layout: Layout) -> Result<NonNull<u8>, AllocErr> {
+ fn alloc(&mut self, layout: Layout) -> Result<(NonNull<u8>, usize), AllocErr> {
let size = layout.size();
if size > self.fuel {
return Err(AllocErr);
let layout = Layout::new::<RcBox<()>>().extend(value_layout).unwrap().0.pad_to_align();
// Allocate for the layout.
- let mem = Global.alloc(layout).unwrap_or_else(|_| handle_alloc_error(layout));
+ let (mem, _) = Global.alloc(layout).unwrap_or_else(|_| handle_alloc_error(layout));
// Initialize the RcBox
let inner = mem_to_rcbox(mem.as_ptr());
///
/// assert_eq!(s.capacity(), cap);
///
- /// // ...but this may make the vector reallocate
+ /// // ...but this may make the string reallocate
/// s.push('a');
/// ```
#[inline]
}
}
+#[stable(feature = "from_mut_str_for_string", since = "1.44.0")]
+impl From<&mut str> for String {
+ /// Converts a `&mut str` into a `String`.
+ ///
+ /// The result is allocated on the heap.
+ #[inline]
+ fn from(s: &mut str) -> String {
+ s.to_owned()
+ }
+}
+
#[stable(feature = "from_ref_string", since = "1.35.0")]
impl From<&String> for String {
#[inline]
/// necessarily) at _exactly_ `MAX_REFCOUNT + 1` references.
const MAX_REFCOUNT: usize = (isize::MAX) as usize;
+#[cfg(not(sanitize = "thread"))]
+macro_rules! acquire {
+ ($x:expr) => {
+ atomic::fence(Acquire)
+ };
+}
+
+// ThreadSanitizer does not support memory fences. To avoid false positive
+// reports in Arc / Weak implementation use atomic loads for synchronization
+// instead.
+#[cfg(sanitize = "thread")]
+macro_rules! acquire {
+ ($x:expr) => {
+ $x.load(Acquire)
+ };
+}
+
/// A thread-safe reference-counting pointer. 'Arc' stands for 'Atomically
/// Reference Counted'.
///
return Err(this);
}
- atomic::fence(Acquire);
+ acquire!(this.inner().strong);
unsafe {
let elem = ptr::read(&this.ptr.as_ref().data);
ptr::drop_in_place(&mut self.ptr.as_mut().data);
if self.inner().weak.fetch_sub(1, Release) == 1 {
- atomic::fence(Acquire);
+ acquire!(self.inner().weak);
Global.dealloc(self.ptr.cast(), Layout::for_value(self.ptr.as_ref()))
}
}
// reference (see #54908).
let layout = Layout::new::<ArcInner<()>>().extend(value_layout).unwrap().0.pad_to_align();
- let mem = Global.alloc(layout).unwrap_or_else(|_| handle_alloc_error(layout));
+ let (mem, _) = Global.alloc(layout).unwrap_or_else(|_| handle_alloc_error(layout));
// Initialize the ArcInner
let inner = mem_to_arcinner(mem.as_ptr());
//
// [1]: (www.boost.org/doc/libs/1_55_0/doc/html/atomic/usage_examples.html)
// [2]: (https://github.com/rust-lang/rust/pull/41714)
- atomic::fence(Acquire);
+ acquire!(self.inner().strong);
unsafe {
self.drop_slow();
let inner = if let Some(inner) = self.inner() { inner } else { return };
if inner.weak.fetch_sub(1, Release) == 1 {
- atomic::fence(Acquire);
+ acquire!(inner.weak);
unsafe { Global.dealloc(self.ptr.cast(), Layout::for_value(self.ptr.as_ref())) }
}
}
}
#[test]
-fn test_into_iter_drop_leak() {
+fn test_into_iter_drop_leak_1() {
static DROPS: AtomicU32 = AtomicU32::new(0);
struct D;
assert_eq!(DROPS.load(Ordering::SeqCst), 5);
}
+
+#[test]
+fn test_into_iter_drop_leak_2() {
+    let size = 12; // to obtain a tree with 2 levels (having edges to leaf nodes)
+ static DROPS: AtomicU32 = AtomicU32::new(0);
+ static PANIC_POINT: AtomicU32 = AtomicU32::new(0);
+
+ struct D;
+ impl Drop for D {
+ fn drop(&mut self) {
+ if DROPS.fetch_add(1, Ordering::SeqCst) == PANIC_POINT.load(Ordering::SeqCst) {
+ panic!("panic in `drop`");
+ }
+ }
+ }
+
+ for panic_point in vec![0, 1, size - 2, size - 1] {
+ DROPS.store(0, Ordering::SeqCst);
+ PANIC_POINT.store(panic_point, Ordering::SeqCst);
+ let map: BTreeMap<_, _> = (0..size).map(|i| (i, D)).collect();
+ catch_unwind(move || drop(map.into_iter())).ok();
+ assert_eq!(DROPS.load(Ordering::SeqCst), size);
+ }
+}
unsafe {
let pointers: Vec<_> = (0..iterations)
.map(|_| {
- allocator.alloc(Layout::from_size_align(size, align).unwrap()).unwrap()
+ allocator.alloc(Layout::from_size_align(size, align).unwrap()).unwrap().0
})
.collect();
for &ptr in &pointers {
let moduli = &[5, 20, 50];
#[cfg(miri)]
- let lens = 1..13;
+ let lens = 1..10;
#[cfg(miri)]
- let moduli = &[10];
+ let moduli = &[5];
for len in lens {
for &modulus in moduli {
/// let mut vec: Vec<i32> = Vec::new();
/// ```
#[inline]
- #[rustc_const_stable(feature = "const_vec_new", since = "1.32.0")]
+ #[rustc_const_stable(feature = "const_vec_new", since = "1.39.0")]
#[stable(feature = "rust1", since = "1.0.0")]
pub const fn new() -> Vec<T> {
Vec { buf: RawVec::NEW, len: 0 }
/// difference, with each additional slot filled with `value`.
/// If `new_len` is less than `len`, the `Vec` is simply truncated.
///
- /// This method requires [`Clone`] to be able clone the passed value. If
- /// you need more flexibility (or want to rely on [`Default`] instead of
+ /// This method requires `T` to implement [`Clone`],
+ /// in order to be able to clone the passed value.
+ /// If you need more flexibility (or want to rely on [`Default`] instead of
/// [`Clone`]), use [`resize_with`].
///
/// # Examples
impl<'a> SetLenOnDrop<'a> {
#[inline]
fn new(len: &'a mut usize) -> Self {
- SetLenOnDrop { local_len: *len, len: len }
+ SetLenOnDrop { local_len: *len, len }
}
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
unsafe impl<#[may_dangle] T> Drop for IntoIter<T> {
fn drop(&mut self) {
+ struct DropGuard<'a, T>(&'a mut IntoIter<T>);
+
+ impl<T> Drop for DropGuard<'_, T> {
+ fn drop(&mut self) {
+ // RawVec handles deallocation
+ let _ = unsafe { RawVec::from_raw_parts(self.0.buf.as_ptr(), self.0.cap) };
+ }
+ }
+
+ let guard = DropGuard(self);
// destroy the remaining elements
unsafe {
- ptr::drop_in_place(self.as_mut_slice());
+ ptr::drop_in_place(guard.0.as_mut_slice());
}
-
- // RawVec handles deallocation
- let _ = unsafe { RawVec::from_raw_parts(self.buf.as_ptr(), self.cap) };
+ // now `guard` will be dropped and do the rest
}
}
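The `DropGuard` above is an instance of the general panic-safety guard pattern: cleanup lives in a destructor so it runs even if the protected code unwinds. A minimal standalone sketch (names assumed, not from the PR):

```rust
use std::panic::{catch_unwind, AssertUnwindSafe};
use std::sync::atomic::{AtomicBool, Ordering};

static CLEANED: AtomicBool = AtomicBool::new(false);

// The guard's destructor runs during unwinding as well as on normal exit,
// so the cleanup step cannot be skipped by a panic in the guarded code.
struct Guard;
impl Drop for Guard {
    fn drop(&mut self) {
        CLEANED.store(true, Ordering::SeqCst);
    }
}

fn main() {
    let result = catch_unwind(AssertUnwindSafe(|| {
        let _guard = Guard;
        panic!("boom"); // Guard::drop still runs while unwinding
    }));
    assert!(result.is_err());
    assert!(CLEANED.load(Ordering::SeqCst));
}
```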
/// The filter test predicate.
pred: F,
-    /// A flag that indicates a panic has occurred in the filter test prodicate.
+    /// A flag that indicates a panic has occurred in the filter test predicate.
-    /// This is used as a hint in the drop implmentation to prevent consumption
+    /// This is used as a hint in the drop implementation to prevent consumption
/// of the remainder of the `DrainFilter`. Any unprocessed items will be
/// backshifted in the `vec`, but no further items will be dropped or
/// tested by the filter predicate.
use crate::ptr::{self, NonNull};
use crate::usize;
-/// Represents the combination of a starting address and
-/// a total capacity of the returned block.
-#[unstable(feature = "allocator_api", issue = "32838")]
-#[derive(Debug)]
-pub struct Excess(pub NonNull<u8>, pub usize);
-
const fn size_align<T>() -> (usize, usize) {
(mem::size_of::<T>(), mem::align_of::<T>())
}
unsafe { Layout::from_size_align_unchecked(size, align) }
}
+ /// Creates a `NonNull` that is dangling, but well-aligned for this Layout.
+ ///
+ /// Note that the pointer value may potentially represent a valid pointer,
+ /// which means this must not be used as a "not yet initialized"
+ /// sentinel value. Types that lazily allocate must track initialization by
+ /// some other means.
+ #[unstable(feature = "alloc_layout_extra", issue = "55724")]
+ pub const fn dangling(&self) -> NonNull<u8> {
+ // align is non-zero and a power of two
+ unsafe { NonNull::new_unchecked(self.align() as *mut u8) }
+ }
+
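`Layout::dangling` is unstable; a sketch of the same computation using stable APIs (hypothetical free function, not from the PR):

```rust
use std::alloc::Layout;
use std::ptr::NonNull;

// A dangling but well-aligned pointer: the alignment itself, reinterpreted
// as an address, is non-zero and trivially a multiple of the alignment.
fn dangling_for(layout: Layout) -> NonNull<u8> {
    unsafe { NonNull::new_unchecked(layout.align() as *mut u8) }
}

fn main() {
    let layout = Layout::new::<u64>();
    let p = dangling_for(layout);
    assert_eq!(p.as_ptr() as usize % layout.align(), 0);
}
```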
/// Creates a layout describing the record that can hold a value
/// of the same layout as `self`, but that also is aligned to
/// alignment `align` (measured in bytes).
#[unstable(feature = "alloc_layout_extra", issue = "55724")]
#[inline]
pub fn repeat(&self, n: usize) -> Result<(Self, usize), LayoutErr> {
- // Warning, removing the checked_add here led to segfaults in #67174. Further
- // analysis in #69225 seems to indicate that this is an LTO-related
- // miscompilation, so #67174 might be able to be reapplied in the future.
- let padded_size = self
- .size()
- .checked_add(self.padding_needed_for(self.align()))
- .ok_or(LayoutErr { private: () })?;
+ // This cannot overflow. Quoting from the invariant of Layout:
+ // > `size`, when rounded up to the nearest multiple of `align`,
+ // > must not overflow (i.e., the rounded value must be less than
+ // > `usize::MAX`)
+ let padded_size = self.size() + self.padding_needed_for(self.align());
let alloc_size = padded_size.checked_mul(n).ok_or(LayoutErr { private: () })?;
unsafe {
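The invariant quoted in the comment is what allows the unchecked addition: `size + padding_needed_for(align)` is exactly the rounded-up size, which the `Layout` invariant says cannot overflow. A simplified standalone sketch of the rounding arithmetic (not the std implementation):

```rust
// Round `size` up to the next multiple of `align` (a power of two) and
// return the padding added; this mirrors Layout::padding_needed_for.
fn padding_needed_for(size: usize, align: usize) -> usize {
    let rounded = size.wrapping_add(align - 1) & !(align - 1);
    rounded.wrapping_sub(size)
}

fn main() {
    assert_eq!(padding_needed_for(10, 4), 2);
    assert_eq!(padding_needed_for(8, 8), 0);
    assert_eq!(padding_needed_for(0, 16), 0);
}
```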
///
/// * the starting address for that memory block was previously
/// returned by a previous call to an allocation method (`alloc`,
-/// `alloc_zeroed`, `alloc_excess`) or reallocation method
-/// (`realloc`, `realloc_excess`), and
+/// `alloc_zeroed`) or reallocation method (`realloc`), and
///
/// * the memory block has not been subsequently deallocated, where
/// blocks are deallocated either by being passed to a deallocation
-/// method (`dealloc`, `dealloc_one`, `dealloc_array`) or by being
-/// passed to a reallocation method (see above) that returns `Ok`.
+/// method (`dealloc`) or by being passed to a reallocation method
+/// (see above) that returns `Ok`.
///
-/// A note regarding zero-sized types and zero-sized layouts: many
-/// methods in the `AllocRef` trait state that allocation requests
-/// must be non-zero size, or else undefined behavior can result.
-///
-/// * If an `AllocRef` implementation chooses to return `Ok` in this
-/// case (i.e., the pointer denotes a zero-sized inaccessible block)
-/// then that returned pointer must be considered "currently
-/// allocated". On such an allocator, *all* methods that take
-/// currently-allocated pointers as inputs must accept these
-/// zero-sized pointers, *without* causing undefined behavior.
-///
-/// * In other words, if a zero-sized pointer can flow out of an
-/// allocator, then that allocator must likewise accept that pointer
-/// flowing back into its deallocation and reallocation methods.
+/// Unlike [`GlobalAlloc`], zero-sized allocations are allowed in
+/// `AllocRef`. If an underlying allocator does not support this (like
+/// jemalloc) or returns a null pointer for them (such as `libc::malloc`),
+/// the case must be caught. [`Layout::dangling()`] can then be used to
+/// create a dangling, but aligned `NonNull<u8>`.
///
/// Some of the methods require that a layout *fit* a memory block.
/// What it means for a layout to "fit" a memory block means (or
///
/// 2. The block's size must fall in the range `[use_min, use_max]`, where:
///
-/// * `use_min` is `self.usable_size(layout).0`, and
+/// * `use_min` is `layout.size()`, and
///
-/// * `use_max` is the capacity that was (or would have been)
-/// returned when (if) the block was allocated via a call to
-/// `alloc_excess` or `realloc_excess`.
+/// * `use_max` is the capacity that was returned.
///
/// Note that:
///
/// currently allocated via an allocator `a`, then it is legal to
/// use that layout to deallocate it, i.e., `a.dealloc(ptr, k);`.
///
+/// * if an allocator does not support overallocating, it is fine to
+/// simply return `layout.size()` as the allocated size.
+///
+/// [`GlobalAlloc`]: self::GlobalAlloc
+/// [`Layout::dangling()`]: self::Layout::dangling
+///
/// # Safety
///
/// The `AllocRef` trait is an `unsafe` trait for a number of reasons, and
/// the future.
#[unstable(feature = "allocator_api", issue = "32838")]
pub unsafe trait AllocRef {
- // (Note: some existing allocators have unspecified but well-defined
- // behavior in response to a zero size allocation request ;
- // e.g., in C, `malloc` of 0 will either return a null pointer or a
- // unique pointer, but will not have arbitrary undefined
- // behavior.
- // However in jemalloc for example,
- // `mallocx(0)` is documented as undefined behavior.)
-
- /// Returns a pointer meeting the size and alignment guarantees of
- /// `layout`.
+ /// On success, returns a pointer meeting the size and alignment
+ /// guarantees of `layout` and the actual size of the allocated block,
+ /// which must be greater than or equal to `layout.size()`.
///
/// If this method returns an `Ok(addr)`, then the `addr` returned
/// will be non-null address pointing to a block of storage
/// behavior, e.g., to ensure initialization to particular sets of
/// bit patterns.)
///
- /// # Safety
- ///
- /// This function is unsafe because undefined behavior can result
- /// if the caller does not ensure that `layout` has non-zero size.
- ///
- /// (Extension subtraits might provide more specific bounds on
- /// behavior, e.g., guarantee a sentinel address or a null pointer
- /// in response to a zero-size allocation request.)
- ///
/// # Errors
///
/// Returning `Err` indicates that either memory is exhausted or
/// rather than directly invoking `panic!` or similar.
///
/// [`handle_alloc_error`]: ../../alloc/alloc/fn.handle_alloc_error.html
- unsafe fn alloc(&mut self, layout: Layout) -> Result<NonNull<u8>, AllocErr>;
+ fn alloc(&mut self, layout: Layout) -> Result<(NonNull<u8>, usize), AllocErr>;
/// Deallocate the memory referenced by `ptr`.
///
/// to allocate that block of memory.
unsafe fn dealloc(&mut self, ptr: NonNull<u8>, layout: Layout);
- // == ALLOCATOR-SPECIFIC QUANTITIES AND LIMITS ==
- // usable_size
-
- /// Returns bounds on the guaranteed usable size of a successful
- /// allocation created with the specified `layout`.
- ///
- /// In particular, if one has a memory block allocated via a given
- /// allocator `a` and layout `k` where `a.usable_size(k)` returns
- /// `(l, u)`, then one can pass that block to `a.dealloc()` with a
- /// layout in the size range [l, u].
- ///
- /// (All implementors of `usable_size` must ensure that
- /// `l <= k.size() <= u`)
- ///
- /// Both the lower- and upper-bounds (`l` and `u` respectively)
- /// are provided, because an allocator based on size classes could
- /// misbehave if one attempts to deallocate a block without
- /// providing a correct value for its size (i.e., one within the
- /// range `[l, u]`).
- ///
- /// Clients who wish to make use of excess capacity are encouraged
- /// to use the `alloc_excess` and `realloc_excess` instead, as
- /// this method is constrained to report conservative values that
- /// serve as valid bounds for *all possible* allocation method
- /// calls.
- ///
- /// However, for clients that do not wish to track the capacity
- /// returned by `alloc_excess` locally, this method is likely to
- /// produce useful results.
- #[inline]
- fn usable_size(&self, layout: &Layout) -> (usize, usize) {
- (layout.size(), layout.size())
+ /// Behaves like `alloc`, but also ensures that the contents
+ /// are set to zero before being returned.
+ ///
+ /// # Errors
+ ///
+ /// Returning `Err` indicates that either memory is exhausted or
+ /// `layout` does not meet allocator's size or alignment
+ /// constraints, just as in `alloc`.
+ ///
+ /// Clients wishing to abort computation in response to an
+ /// allocation error are encouraged to call the [`handle_alloc_error`] function,
+ /// rather than directly invoking `panic!` or similar.
+ ///
+ /// [`handle_alloc_error`]: ../../alloc/alloc/fn.handle_alloc_error.html
+ fn alloc_zeroed(&mut self, layout: Layout) -> Result<(NonNull<u8>, usize), AllocErr> {
+ let size = layout.size();
+ let result = self.alloc(layout);
+ if let Ok((p, _)) = result {
+ unsafe { ptr::write_bytes(p.as_ptr(), 0, size) }
+ }
+ result
}
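The zero-size convention described in the trait docs (hand back a dangling, aligned pointer rather than calling into an allocator that may not support size 0) can be sketched over the stable global allocator. This is a hypothetical helper, not the trait itself:

```rust
use std::alloc::Layout;
use std::ptr::NonNull;

// Returns a (pointer, actual size) pair; zero-sized requests never reach
// the underlying allocator and get a dangling, aligned pointer instead.
fn alloc_or_dangling(layout: Layout) -> Option<(NonNull<u8>, usize)> {
    if layout.size() == 0 {
        // Never dereferenced and never passed to dealloc.
        return Some((unsafe { NonNull::new_unchecked(layout.align() as *mut u8) }, 0));
    }
    let ptr = unsafe { std::alloc::alloc(layout) };
    NonNull::new(ptr).map(|p| (p, layout.size()))
}

fn main() {
    let zero = Layout::from_size_align(0, 8).unwrap();
    let (p, size) = alloc_or_dangling(zero).unwrap();
    assert_eq!(size, 0);
    assert_eq!(p.as_ptr() as usize, 8);

    let nonzero = Layout::from_size_align(16, 8).unwrap();
    let (p, size) = alloc_or_dangling(nonzero).unwrap();
    assert_eq!(size, 16);
    unsafe { std::alloc::dealloc(p.as_ptr(), nonzero) };
}
```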
// == METHODS FOR MEMORY REUSE ==
- // realloc. alloc_excess, realloc_excess
+ // realloc, realloc_zeroed, grow_in_place, grow_in_place_zeroed, shrink_in_place
/// Returns a pointer suitable for holding data described by
/// a new layout with `layout`’s alignment and a size given
- /// by `new_size`. To
- /// accomplish this, this may extend or shrink the allocation
- /// referenced by `ptr` to fit the new layout.
+    /// by `new_size`, and the actual size of the allocated block.
+    /// The latter is greater than or equal to `new_size`.
+ /// To accomplish this, the allocator may extend or shrink
+ /// the allocation referenced by `ptr` to fit the new layout.
///
/// If this returns `Ok`, then ownership of the memory block
/// referenced by `ptr` has been transferred to this
/// * `layout` must *fit* the `ptr` (see above). (The `new_size`
/// argument need not fit it.)
///
- /// * `new_size` must be greater than zero.
- ///
/// * `new_size`, when rounded up to the nearest multiple of `layout.align()`,
/// must not overflow (i.e., the rounded value must be less than `usize::MAX`).
///
ptr: NonNull<u8>,
layout: Layout,
new_size: usize,
- ) -> Result<NonNull<u8>, AllocErr> {
+ ) -> Result<(NonNull<u8>, usize), AllocErr> {
let old_size = layout.size();
- if new_size >= old_size {
- if let Ok(()) = self.grow_in_place(ptr, layout, new_size) {
- return Ok(ptr);
+ if new_size > old_size {
+ if let Ok(size) = self.grow_in_place(ptr, layout, new_size) {
+ return Ok((ptr, size));
}
} else if new_size < old_size {
- if let Ok(()) = self.shrink_in_place(ptr, layout, new_size) {
- return Ok(ptr);
+ if let Ok(size) = self.shrink_in_place(ptr, layout, new_size) {
+ return Ok((ptr, size));
}
+ } else {
+ return Ok((ptr, new_size));
}
// otherwise, fall back on alloc + copy + dealloc.
let new_layout = Layout::from_size_align_unchecked(new_size, layout.align());
let result = self.alloc(new_layout);
- if let Ok(new_ptr) = result {
+ if let Ok((new_ptr, _)) = result {
ptr::copy_nonoverlapping(ptr.as_ptr(), new_ptr.as_ptr(), cmp::min(old_size, new_size));
self.dealloc(ptr, layout);
}
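The fallback path above (alloc, copy the overlap, dealloc) is the portable realloc strategy used when in-place grow/shrink fails. A standalone sketch over the global allocator (hypothetical helper, not the default impl itself):

```rust
use std::alloc::{alloc, dealloc, Layout};
use std::ptr;

// Reallocate by allocating a fresh block, copying min(old, new) bytes, and
// freeing the old block -- used when in-place resizing is unavailable.
unsafe fn realloc_fallback(old: *mut u8, old_layout: Layout, new_size: usize) -> *mut u8 {
    let new_layout = Layout::from_size_align(new_size, old_layout.align()).unwrap();
    let new_ptr = alloc(new_layout);
    if !new_ptr.is_null() {
        ptr::copy_nonoverlapping(old, new_ptr, old_layout.size().min(new_size));
        dealloc(old, old_layout);
    }
    new_ptr
}

fn main() {
    unsafe {
        let layout = Layout::from_size_align(4, 1).unwrap();
        let p = alloc(layout);
        assert!(!p.is_null());
        p.copy_from(b"abcd".as_ptr(), 4);
        let q = realloc_fallback(p, layout, 8);
        assert!(!q.is_null());
        // The original 4 bytes survive the move to the larger block.
        assert_eq!(std::slice::from_raw_parts(q, 4), b"abcd");
        dealloc(q, Layout::from_size_align(8, 1).unwrap());
    }
}
```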
ptr: NonNull<u8>,
layout: Layout,
new_size: usize,
- ) -> Result<NonNull<u8>, AllocErr> {
+ ) -> Result<(NonNull<u8>, usize), AllocErr> {
let old_size = layout.size();
- if new_size >= old_size {
- if let Ok(()) = self.grow_in_place_zeroed(ptr, layout, new_size) {
- return Ok(ptr);
+ if new_size > old_size {
+ if let Ok(size) = self.grow_in_place_zeroed(ptr, layout, new_size) {
+ return Ok((ptr, size));
}
} else if new_size < old_size {
- if let Ok(()) = self.shrink_in_place(ptr, layout, new_size) {
- return Ok(ptr);
+ if let Ok(size) = self.shrink_in_place(ptr, layout, new_size) {
+ return Ok((ptr, size));
}
+ } else {
+ return Ok((ptr, new_size));
}
// otherwise, fall back on alloc + copy + dealloc.
let new_layout = Layout::from_size_align_unchecked(new_size, layout.align());
let result = self.alloc_zeroed(new_layout);
- if let Ok(new_ptr) = result {
+ if let Ok((new_ptr, _)) = result {
ptr::copy_nonoverlapping(ptr.as_ptr(), new_ptr.as_ptr(), cmp::min(old_size, new_size));
self.dealloc(ptr, layout);
}
result
}
- /// Behaves like `alloc`, but also ensures that the contents
- /// are set to zero before being returned.
- ///
- /// # Safety
- ///
- /// This function is unsafe for the same reasons that `alloc` is.
- ///
- /// # Errors
- ///
- /// Returning `Err` indicates that either memory is exhausted or
- /// `layout` does not meet allocator's size or alignment
- /// constraints, just as in `alloc`.
- ///
- /// Clients wishing to abort computation in response to an
- /// allocation error are encouraged to call the [`handle_alloc_error`] function,
- /// rather than directly invoking `panic!` or similar.
- ///
- /// [`handle_alloc_error`]: ../../alloc/alloc/fn.handle_alloc_error.html
- unsafe fn alloc_zeroed(&mut self, layout: Layout) -> Result<NonNull<u8>, AllocErr> {
- let size = layout.size();
- let p = self.alloc(layout);
- if let Ok(p) = p {
- ptr::write_bytes(p.as_ptr(), 0, size);
- }
- p
- }
-
- /// Behaves like `alloc`, but also returns the whole size of
- /// the returned block. For some `layout` inputs, like arrays, this
- /// may include extra storage usable for additional data.
- ///
- /// # Safety
- ///
- /// This function is unsafe for the same reasons that `alloc` is.
- ///
- /// # Errors
- ///
- /// Returning `Err` indicates that either memory is exhausted or
- /// `layout` does not meet allocator's size or alignment
- /// constraints, just as in `alloc`.
- ///
- /// Clients wishing to abort computation in response to an
- /// allocation error are encouraged to call the [`handle_alloc_error`] function,
- /// rather than directly invoking `panic!` or similar.
- ///
- /// [`handle_alloc_error`]: ../../alloc/alloc/fn.handle_alloc_error.html
- unsafe fn alloc_excess(&mut self, layout: Layout) -> Result<Excess, AllocErr> {
- let usable_size = self.usable_size(&layout);
- self.alloc(layout).map(|p| Excess(p, usable_size.1))
- }
-
- /// Behaves like `alloc`, but also returns the whole size of
- /// the returned block. For some `layout` inputs, like arrays, this
- /// may include extra storage usable for additional data.
- /// Also it ensures that the contents are set to zero before being returned.
- ///
- /// # Safety
- ///
- /// This function is unsafe for the same reasons that `alloc` is.
- ///
- /// # Errors
- ///
- /// Returning `Err` indicates that either memory is exhausted or
- /// `layout` does not meet allocator's size or alignment
- /// constraints, just as in `alloc`.
- ///
- /// Clients wishing to abort computation in response to an
- /// allocation error are encouraged to call the [`handle_alloc_error`] function,
- /// rather than directly invoking `panic!` or similar.
- ///
- /// [`handle_alloc_error`]: ../../alloc/alloc/fn.handle_alloc_error.html
- unsafe fn alloc_excess_zeroed(&mut self, layout: Layout) -> Result<Excess, AllocErr> {
- let usable_size = self.usable_size(&layout);
- self.alloc_zeroed(layout).map(|p| Excess(p, usable_size.1))
- }
-
- /// Behaves like `realloc`, but also returns the whole size of
- /// the returned block. For some `layout` inputs, like arrays, this
- /// may include extra storage usable for additional data.
- ///
- /// # Safety
- ///
- /// This function is unsafe for the same reasons that `realloc` is.
- ///
- /// # Errors
- ///
- /// Returning `Err` indicates that either memory is exhausted or
- /// `layout` does not meet allocator's size or alignment
- /// constraints, just as in `realloc`.
- ///
- /// Clients wishing to abort computation in response to a
- /// reallocation error are encouraged to call the [`handle_alloc_error`] function,
- /// rather than directly invoking `panic!` or similar.
- ///
- /// [`handle_alloc_error`]: ../../alloc/alloc/fn.handle_alloc_error.html
- unsafe fn realloc_excess(
- &mut self,
- ptr: NonNull<u8>,
- layout: Layout,
- new_size: usize,
- ) -> Result<Excess, AllocErr> {
- let new_layout = Layout::from_size_align_unchecked(new_size, layout.align());
- let usable_size = self.usable_size(&new_layout);
- self.realloc(ptr, layout, new_size).map(|p| Excess(p, usable_size.1))
- }
-
- /// Behaves like `realloc`, but also returns the whole size of
- /// the returned block. For some `layout` inputs, like arrays, this
- /// may include extra storage usable for additional data.
- /// Also it ensures that the contents are set to zero before being returned.
- ///
- /// # Safety
- ///
- /// This function is unsafe for the same reasons that `realloc` is.
- ///
- /// # Errors
- ///
- /// Returning `Err` indicates that either memory is exhausted or
- /// `layout` does not meet allocator's size or alignment
- /// constraints, just as in `realloc`.
- ///
- /// Clients wishing to abort computation in response to a
- /// reallocation error are encouraged to call the [`handle_alloc_error`] function,
- /// rather than directly invoking `panic!` or similar.
- ///
- /// [`handle_alloc_error`]: ../../alloc/alloc/fn.handle_alloc_error.html
- unsafe fn realloc_excess_zeroed(
- &mut self,
- ptr: NonNull<u8>,
- layout: Layout,
- new_size: usize,
- ) -> Result<Excess, AllocErr> {
- let new_layout = Layout::from_size_align_unchecked(new_size, layout.align());
- let usable_size = self.usable_size(&new_layout);
- self.realloc_zeroed(ptr, layout, new_size).map(|p| Excess(p, usable_size.1))
- }
-
/// Attempts to extend the allocation referenced by `ptr` to fit `new_size`.
///
/// If this returns `Ok`, then the allocator has asserted that the
/// memory block referenced by `ptr` now fits `new_size`, and thus can
/// be used to carry data of a layout of that size and same alignment as
- /// `layout`. (The allocator is allowed to
- /// expend effort to accomplish this, such as extending the memory block to
- /// include successor blocks, or virtual memory tricks.)
+ /// `layout`. The returned value is the new size of the allocated block.
+ /// (The allocator is allowed to expend effort to accomplish this, such
+ /// as extending the memory block to include successor blocks, or virtual
+ /// memory tricks.)
///
/// Regardless of what this method returns, ownership of the
/// memory block referenced by `ptr` has not been transferred, and
/// function; clients are expected either to be able to recover from
/// `grow_in_place` failures without aborting, or to fall back on
/// another reallocation method before resorting to an abort.
+ #[inline]
unsafe fn grow_in_place(
&mut self,
ptr: NonNull<u8>,
layout: Layout,
new_size: usize,
- ) -> Result<(), CannotReallocInPlace> {
- let _ = ptr; // this default implementation doesn't care about the actual address.
- debug_assert!(new_size >= layout.size());
- let (_l, u) = self.usable_size(&layout);
- // _l <= layout.size() [guaranteed by usable_size()]
- // layout.size() <= new_layout.size() [required by this method]
- if new_size <= u { Ok(()) } else { Err(CannotReallocInPlace) }
+ ) -> Result<usize, CannotReallocInPlace> {
+ let _ = ptr;
+ let _ = layout;
+ let _ = new_size;
+ Err(CannotReallocInPlace)
}
/// Behaves like `grow_in_place`, but also ensures that the new
ptr: NonNull<u8>,
layout: Layout,
new_size: usize,
- ) -> Result<(), CannotReallocInPlace> {
- self.grow_in_place(ptr, layout, new_size)?;
+ ) -> Result<usize, CannotReallocInPlace> {
+ let size = self.grow_in_place(ptr, layout, new_size)?;
ptr.as_ptr().add(layout.size()).write_bytes(0, new_size - layout.size());
- Ok(())
+ Ok(size)
}
/// Attempts to shrink the allocation referenced by `ptr` to fit `new_size`.
/// If this returns `Ok`, then the allocator has asserted that the
/// memory block referenced by `ptr` now fits `new_size`, and
/// thus can only be used to carry data of that smaller
- /// layout. (The allocator is allowed to take advantage of this,
+    /// layout. The returned value is the new size of the allocated block.
+ /// (The allocator is allowed to take advantage of this,
/// carving off portions of the block for reuse elsewhere.) The
/// truncated contents of the block within the smaller layout are
/// unaltered, and ownership of block has not been transferred.
/// * `layout` must *fit* the `ptr` (see above); note the
/// `new_size` argument need not fit it,
///
- /// * `new_size` must not be greater than `layout.size()`
- /// (and must be greater than zero),
+ /// * `new_size` must not be greater than `layout.size()`,
///
/// # Errors
///
/// function; clients are expected either to be able to recover from
/// `shrink_in_place` failures without aborting, or to fall back
/// on another reallocation method before resorting to an abort.
+ #[inline]
unsafe fn shrink_in_place(
&mut self,
ptr: NonNull<u8>,
layout: Layout,
new_size: usize,
- ) -> Result<(), CannotReallocInPlace> {
- let _ = ptr; // this default implementation doesn't care about the actual address.
- debug_assert!(new_size <= layout.size());
- let (l, _u) = self.usable_size(&layout);
- // layout.size() <= _u [guaranteed by usable_size()]
- // new_layout.size() <= layout.size() [required by this method]
- if l <= new_size { Ok(()) } else { Err(CannotReallocInPlace) }
+ ) -> Result<usize, CannotReallocInPlace> {
+ let _ = ptr;
+ let _ = layout;
+ let _ = new_size;
+ Err(CannotReallocInPlace)
}
}
unsafe { &mut *self.value.get() }
}
+ /// Undo the effect of leaked guards on the borrow state of the `RefCell`.
+ ///
+ /// This call is similar to [`get_mut`] but more specialized. It borrows `RefCell` mutably to
+ /// ensure no borrows exist and then resets the state tracking shared borrows. This is relevant
+ /// if some `Ref` or `RefMut` borrows have been leaked.
+ ///
+ /// [`get_mut`]: #method.get_mut
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// #![feature(cell_leak)]
+ /// use std::cell::RefCell;
+ ///
+ /// let mut c = RefCell::new(0);
+ /// std::mem::forget(c.borrow_mut());
+ ///
+ /// assert!(c.try_borrow().is_err());
+ /// c.undo_leak();
+ /// assert!(c.try_borrow().is_ok());
+ /// ```
+ #[unstable(feature = "cell_leak", issue = "69099")]
+ pub fn undo_leak(&mut self) -> &mut T {
+ *self.borrow.get_mut() = UNUSED;
+ self.get_mut()
+ }
+
/// Immutably borrows the wrapped value, returning an error if the value is
/// currently mutably borrowed.
///
/// ```
#[unstable(feature = "cell_leak", issue = "69099")]
pub fn leak(orig: Ref<'b, T>) -> &'b T {
- // By forgetting this Ref we ensure that the borrow counter in the RefCell never goes back
- // to UNUSED again. No further mutable references can be created from the original cell.
+ // By forgetting this Ref we ensure that the borrow counter in the RefCell can't go back to
+ // UNUSED within the lifetime `'b`. Resetting the reference tracking state would require a
+ // unique reference to the borrowed RefCell. No further mutable references can be created
+ // from the original cell.
mem::forget(orig.borrow);
orig.value
}
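The effect described in the comment can be observed on stable Rust by forgetting a borrow guard directly (a sketch of the same state change, without the unstable `leak`):

```rust
use std::cell::RefCell;
use std::mem;

fn main() {
    let c = RefCell::new(5);
    // Forgetting the guard leaves the shared-borrow counter incremented,
    // exactly as `Ref::leak` would.
    mem::forget(c.borrow());
    // Further shared borrows are still fine...
    assert!(c.try_borrow().is_ok());
    // ...but a mutable borrow now fails for the RefCell's remaining lifetime.
    assert!(c.try_borrow_mut().is_err());
}
```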
/// ```
#[unstable(feature = "cell_leak", issue = "69099")]
pub fn leak(orig: RefMut<'b, T>) -> &'b mut T {
- // By forgetting this BorrowRefMut we ensure that the borrow counter in the RefCell never
- // goes back to UNUSED again. No further references can be created from the original cell,
- // making the current borrow the only reference for the remaining lifetime.
+ // By forgetting this BorrowRefMut we ensure that the borrow counter in the RefCell can't
+ // go back to UNUSED within the lifetime `'b`. Resetting the reference tracking state would
+ // require a unique reference to the borrowed RefCell. No further references can be created
+ // from the original cell within that lifetime, making the current borrow the only
+ // reference for the remaining lifetime.
mem::forget(orig.borrow);
orig.value
}
#[lang = "unsafe_cell"]
#[stable(feature = "rust1", since = "1.0.0")]
#[repr(transparent)]
-#[cfg_attr(not(bootstrap), repr(no_niche))] // rust-lang/rust#68303.
+#[repr(no_niche)] // rust-lang/rust#68303.
pub struct UnsafeCell<T: ?Sized> {
value: T,
}
///
/// # Implementing [`Into`] for conversions to external types in old versions of Rust
///
-/// Prior to Rust 1.40, if the destination type was not part of the current crate
+/// Prior to Rust 1.41, if the destination type was not part of the current crate
/// then you couldn't implement [`From`] directly.
/// For example, take this code:
///
use crate::mem::MaybeUninit;
use crate::num::flt2dec;
-// ignore-tidy-undocumented-unsafe
-
// Don't inline this so callers don't use the stack space this function
// requires unless they have to.
#[inline(never)]
where
T: flt2dec::DecodableFloat,
{
+ // SAFETY: Possible undefined behavior, see FIXME(#53491)
unsafe {
let mut buf = MaybeUninit::<[u8; 1024]>::uninit(); // enough for f32 and f64
let mut parts = MaybeUninit::<[flt2dec::Part<'_>; 4]>::uninit();
where
T: flt2dec::DecodableFloat,
{
+ // SAFETY: Possible undefined behavior, see FIXME(#53491)
unsafe {
// enough for f32 and f64
let mut buf = MaybeUninit::<[u8; flt2dec::MAX_SIG_DIGITS]>::uninit();
where
T: flt2dec::DecodableFloat,
{
+ // SAFETY: Possible undefined behavior, see FIXME(#53491)
unsafe {
let mut buf = MaybeUninit::<[u8; 1024]>::uninit(); // enough for f32 and f64
let mut parts = MaybeUninit::<[flt2dec::Part<'_>; 6]>::uninit();
where
T: flt2dec::DecodableFloat,
{
+ // SAFETY: Possible undefined behavior, see FIXME(#53491)
unsafe {
// enough for f32 and f64
let mut buf = MaybeUninit::<[u8; flt2dec::MAX_SIG_DIGITS]>::uninit();
//! Utilities for formatting and printing strings.
-// ignore-tidy-undocumented-unsafe
-
#![stable(feature = "rust1", since = "1.0.0")]
use crate::cell::{Cell, Ref, RefCell, RefMut, UnsafeCell};
formatter: fn(&Opaque, &mut Formatter<'_>) -> Result,
}
-// This gurantees a single stable value for the function pointer associated with
+// This guarantees a single stable value for the function pointer associated with
// indices/counts in the formatting infrastructure.
//
// Note that a function defined as such would not be correct as functions are
// could have been miscompiled. In practice, we never call as_usize on non-usize
// containing data (as a matter of static generation of the formatting
// arguments), so this is merely an additional check.
+//
+// We primarily want to ensure that the function pointer at `USIZE_MARKER` has
+// an address corresponding *only* to functions that also take `&usize` as their
+// first argument. The read_volatile here ensures that we can safely read out a
+// usize from the passed reference and that this address does not point at a
+// non-usize-taking function.
#[unstable(feature = "fmt_internals", reason = "internal to format_args!", issue = "none")]
-static USIZE_MARKER: fn(&usize, &mut Formatter<'_>) -> Result = |_, _| loop {};
+static USIZE_MARKER: fn(&usize, &mut Formatter<'_>) -> Result = |ptr, _| {
+ // SAFETY: ptr is a reference
+ let _v: usize = unsafe { crate::ptr::read_volatile(ptr) };
+ loop {}
+};
impl<'a> ArgumentV1<'a> {
#[doc(hidden)]
#[unstable(feature = "fmt_internals", reason = "internal to format_args!", issue = "none")]
pub fn new<'b, T>(x: &'b T, f: fn(&T, &mut Formatter<'_>) -> Result) -> ArgumentV1<'b> {
+ // SAFETY: `mem::transmute(x)` is safe because
+ // 1. `&'b T` keeps the lifetime it originated with `'b`
+ // (so as to not have an unbounded lifetime)
+ // 2. `&'b T` and `&'b Void` have the same memory layout
+ // (when `T` is `Sized`, as it is here)
+ // `mem::transmute(f)` is safe since `fn(&T, &mut Formatter<'_>) -> Result`
+ // and `fn(&Void, &mut Formatter<'_>) -> Result` have the same ABI
+ // (as long as `T` is `Sized`)
unsafe { ArgumentV1 { formatter: mem::transmute(f), value: mem::transmute(x) } }
}
fn write_formatted_parts(&mut self, formatted: &flt2dec::Formatted<'_>) -> Result {
fn write_bytes(buf: &mut dyn Write, s: &[u8]) -> Result {
+ // SAFETY: This is used for `flt2dec::Part::Num` and `flt2dec::Part::Copy`.
+ // It's safe to use for `flt2dec::Part::Num` since every char `c` is between
+ // `b'0'` and `b'9'`, which means `s` is valid UTF-8.
+ // It's also probably safe in practice to use for `flt2dec::Part::Copy(buf)`
+ // since `buf` should be plain ASCII, but it's possible for someone to pass
+ // in a bad value for `buf` into `flt2dec::to_shortest_str` since it is a
+ // public function.
+ // FIXME: Determine whether this could result in UB.
buf.write_str(unsafe { str::from_utf8_unchecked(s) })
}
//! Integer and floating-point number formatting
-// ignore-tidy-undocumented-unsafe
-
use crate::fmt;
use crate::mem::MaybeUninit;
use crate::num::flt2dec;
}
}
let buf = &buf[curr..];
+        // SAFETY: The only chars in `buf` are created by `Self::digit`, which is
+        // assumed to produce only valid UTF-8.
let buf = unsafe {
str::from_utf8_unchecked(slice::from_raw_parts(MaybeUninit::first_ptr(buf), buf.len()))
};
macro_rules! impl_Display {
($($t:ident),* as $u:ident via $conv_fn:ident named $name:ident) => {
fn $name(mut n: $u, is_nonnegative: bool, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ // 2^128 is about 3*10^38, so 39 gives an extra byte of space
let mut buf = [MaybeUninit::<u8>::uninit(); 39];
let mut curr = buf.len() as isize;
let buf_ptr = MaybeUninit::first_ptr_mut(&mut buf);
let lut_ptr = DEC_DIGITS_LUT.as_ptr();
+            // SAFETY: Since `d1` and `d2` are always less than or equal to `198`, we
+            // can copy from `lut_ptr[d1..d1 + 2]` and `lut_ptr[d2..d2 + 2]`. To show
+            // that it's OK to copy into `buf_ptr`, notice that at the beginning
+            // `curr == buf.len() == 39 > log(n)` since `n < 2^128 < 10^39`, and at
+            // each step this is kept the same as `n` is divided. Since `n` is always
+            // non-negative, this means that `curr > 0`, so `buf_ptr[curr..curr + 2]`
+            // is safe to access.
unsafe {
// need at least 16 bits for the 4-characters-at-a-time to work.
assert!(crate::mem::size_of::<$u>() >= 2);
let d1 = (rem / 100) << 1;
let d2 = (rem % 100) << 1;
curr -= 4;
+
+                // We are allowed to copy to `buf_ptr[curr..curr + 4]` here: if
+                // `curr` were negative, `n` would originally have been at least
+                // `10000^10 = 10^40`, contradicting `n < 2^128 < 10^40`.
ptr::copy_nonoverlapping(lut_ptr.offset(d1), buf_ptr.offset(curr), 2);
ptr::copy_nonoverlapping(lut_ptr.offset(d2), buf_ptr.offset(curr + 2), 2);
}
}
}
+        // SAFETY: `curr > 0` (since we made `buf` large enough), and all the chars
+        // are valid UTF-8, since `DEC_DIGITS_LUT` contains only ASCII digits.
let buf_slice = unsafe {
str::from_utf8_unchecked(
slice::from_raw_parts(buf_ptr.offset(curr), buf.len() - curr as usize))
};
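The two-digits-at-a-time technique the SAFETY comments reason about can be sketched entirely in safe Rust: a 200-byte lookup table of the pairs "00".."99", consumed by repeated division by 100. `format_u64` is an illustrative reimplementation, not the macro-generated code above:

```rust
// The pairs "00" through "99" laid out back to back: entry `d` for the
// two-digit number `d / 2` starts at byte offset `d = (n % 100) * 2`.
const DEC_DIGITS_LUT: &[u8; 200] = b"0001020304050607080910111213141516171819\
2021222324252627282930313233343536373839\
4041424344454647484950515253545556575859\
6061626364656667686970717273747576777879\
8081828384858687888990919293949596979899";

fn format_u64(mut n: u64) -> String {
    let mut buf = [0u8; 20]; // u64::MAX has 20 decimal digits
    let mut curr = buf.len();
    // Peel off two digits per iteration, filling the buffer back to front.
    while n >= 100 {
        let d = ((n % 100) as usize) * 2;
        curr -= 2;
        buf[curr..curr + 2].copy_from_slice(&DEC_DIGITS_LUT[d..d + 2]);
        n /= 100;
    }
    if n >= 10 {
        let d = (n as usize) * 2;
        curr -= 2;
        buf[curr..curr + 2].copy_from_slice(&DEC_DIGITS_LUT[d..d + 2]);
    } else {
        curr -= 1;
        buf[curr] = b'0' + n as u8;
    }
    String::from_utf8(buf[curr..].to_vec()).unwrap()
}

fn main() {
    assert_eq!(format_u64(0), "0");
    assert_eq!(format_u64(12345), "12345");
    assert_eq!(format_u64(u64::MAX), u64::MAX.to_string());
}
```

The bounds checks the real code elides with raw pointers are exactly the slice-index checks performed here, which is why the SAFETY comments track `curr` and the LUT offsets so carefully.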
// 39 digits (worst case u128) + . = 40
+ // Since `curr` always decreases by the number of digits copied, this means
+ // that `curr >= 0`.
let mut buf = [MaybeUninit::<u8>::uninit(); 40];
let mut curr = buf.len() as isize; //index for buf
let buf_ptr = MaybeUninit::first_ptr_mut(&mut buf);
while n >= 100 {
let d1 = ((n % 100) as isize) << 1;
curr -= 2;
+ // SAFETY: `d1 <= 198`, so we can copy from `lut_ptr[d1..d1 + 2]` since
+ // `DEC_DIGITS_LUT` has a length of 200.
unsafe {
ptr::copy_nonoverlapping(lut_ptr.offset(d1), buf_ptr.offset(curr), 2);
}
// decode second-to-last character
if n >= 10 {
curr -= 1;
+ // SAFETY: Safe since `40 > curr >= 0` (see comment)
unsafe {
*buf_ptr.offset(curr) = (n as u8 % 10_u8) + b'0';
}
// add decimal point iff >1 mantissa digit will be printed
if exponent != trailing_zeros || added_precision != 0 {
curr -= 1;
+ // SAFETY: Safe since `40 > curr >= 0`
unsafe {
*buf_ptr.offset(curr) = b'.';
}
}
+ // SAFETY: Safe since `40 > curr >= 0`
let buf_slice = unsafe {
// decode last character
curr -= 1;
// stores 'e' (or 'E') and the up to 2-digit exponent
let mut exp_buf = [MaybeUninit::<u8>::uninit(); 3];
let exp_ptr = MaybeUninit::first_ptr_mut(&mut exp_buf);
+ // SAFETY: In either case, `exp_buf` is written within bounds and `exp_ptr[..len]`
+ // is contained within `exp_buf` since `len <= 3`.
let exp_slice = unsafe {
*exp_ptr.offset(0) = if upper {b'E'} else {b'e'};
let len = if exponent < 10 {
/// `.await` the value.
///
/// [`Waker`]: ../task/struct.Waker.html
-#[doc(spotlight)]
#[must_use = "futures do nothing unless you `.await` or poll them"]
#[stable(feature = "futures_api", since = "1.36.0")]
#[lang = "future_trait"]
//! Asynchronous values.
+#[cfg(not(bootstrap))]
+use crate::{
+ ops::{Generator, GeneratorState},
+ pin::Pin,
+ ptr::NonNull,
+ task::{Context, Poll},
+};
+
mod future;
#[stable(feature = "futures_api", since = "1.36.0")]
pub use self::future::Future;
+
+/// This type is needed because:
+///
+/// a) Generators cannot implement `for<'a, 'b> Generator<&'a mut Context<'b>>`, so we need to pass
+///    a raw pointer (see <https://github.com/rust-lang/rust/issues/68923>).
+/// b) Raw pointers and `NonNull` aren't `Send` or `Sync`, so that would make every single future
+/// non-Send/Sync as well, and we don't want that.
+///
+/// It also simplifies the HIR lowering of `.await`.
+#[doc(hidden)]
+#[unstable(feature = "gen_future", issue = "50547")]
+#[cfg(not(bootstrap))]
+#[derive(Debug, Copy, Clone)]
+pub struct ResumeTy(NonNull<Context<'static>>);
+
+#[unstable(feature = "gen_future", issue = "50547")]
+#[cfg(not(bootstrap))]
+unsafe impl Send for ResumeTy {}
+
+#[unstable(feature = "gen_future", issue = "50547")]
+#[cfg(not(bootstrap))]
+unsafe impl Sync for ResumeTy {}
+
+/// Wrap a generator in a future.
+///
+/// This function returns a `GenFuture` underneath, but hides it in `impl Trait` to give
+/// better error messages (`impl Future` rather than `GenFuture<[closure.....]>`).
+// This is `const` to avoid extra errors after we recover from `const async fn`
+#[doc(hidden)]
+#[unstable(feature = "gen_future", issue = "50547")]
+#[cfg(not(bootstrap))]
+#[inline]
+pub const fn from_generator<T>(gen: T) -> impl Future<Output = T::Return>
+where
+ T: Generator<ResumeTy, Yield = ()>,
+{
+ struct GenFuture<T: Generator<ResumeTy, Yield = ()>>(T);
+
+ // We rely on the fact that async/await futures are immovable in order to create
+ // self-referential borrows in the underlying generator.
+ impl<T: Generator<ResumeTy, Yield = ()>> !Unpin for GenFuture<T> {}
+
+ impl<T: Generator<ResumeTy, Yield = ()>> Future for GenFuture<T> {
+ type Output = T::Return;
+ fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
+ // Safety: Safe because we're !Unpin + !Drop, and this is just a field projection.
+ let gen = unsafe { Pin::map_unchecked_mut(self, |s| &mut s.0) };
+
+ // Resume the generator, turning the `&mut Context` into a `NonNull` raw pointer. The
+ // `.await` lowering will safely cast that back to a `&mut Context`.
+ match gen.resume(ResumeTy(NonNull::from(cx).cast::<Context<'static>>())) {
+ GeneratorState::Yielded(()) => Poll::Pending,
+ GeneratorState::Complete(x) => Poll::Ready(x),
+ }
+ }
+ }
+
+ GenFuture(gen)
+}
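Generators are nightly-only, but the shape of `from_generator`, hiding a concrete wrapper behind `impl Future` and translating each resume into `Poll::Pending` or `Poll::Ready`, can be sketched on stable Rust with a closure-based poll function. `poll_fn_future` and `noop_waker` are hypothetical helpers for this sketch, not the real API:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Hide a concrete wrapper type behind `impl Future`, mirroring how
// `from_generator` returns `impl Future` instead of `GenFuture<...>`.
fn poll_fn_future<T, F>(f: F) -> impl Future<Output = T>
where
    F: FnMut(&mut Context<'_>) -> Poll<T> + Unpin,
{
    struct PollFn<F>(F);
    impl<T, F: FnMut(&mut Context<'_>) -> Poll<T> + Unpin> Future for PollFn<F> {
        type Output = T;
        fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<T> {
            (self.get_mut().0)(cx)
        }
    }
    PollFn(f)
}

// A no-op waker so the future can be driven by hand.
fn noop_waker() -> Waker {
    fn raw() -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    static VTABLE: RawWakerVTable = RawWakerVTable::new(|_| raw(), |_| {}, |_| {}, |_| {});
    unsafe { Waker::from_raw(raw()) }
}

fn main() {
    let mut yields_left = 2;
    let fut = poll_fn_future(move |_cx| {
        if yields_left > 0 {
            yields_left -= 1;
            Poll::Pending // like `GeneratorState::Yielded(())`
        } else {
            Poll::Ready(42) // like `GeneratorState::Complete(x)`
        }
    });
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = Box::pin(fut);
    assert_eq!(fut.as_mut().poll(&mut cx), Poll::Pending);
    assert_eq!(fut.as_mut().poll(&mut cx), Poll::Pending);
    assert_eq!(fut.as_mut().poll(&mut cx), Poll::Ready(42));
}
```

What this sketch cannot show is precisely what `ResumeTy` exists for: the real generator holds self-referential borrows across yield points, which is why `GenFuture` is `!Unpin` and the context must be smuggled in as a raw pointer.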
+
+#[doc(hidden)]
+#[unstable(feature = "gen_future", issue = "50547")]
+#[cfg(not(bootstrap))]
+#[inline]
+pub unsafe fn poll_with_context<F>(f: Pin<&mut F>, mut cx: ResumeTy) -> Poll<F::Output>
+where
+ F: Future,
+{
+ F::poll(f, cx.0.as_mut())
+}
self.state.v3 = self.k1 ^ 0x7465646279746573;
self.ntail = 0;
}
-
- // Specialized write function that is only valid for buffers with len <= 8.
- // It's used to force inlining of write_u8 and write_usize, those would normally be inlined
- // except for composite types (that includes slices and str hashing because of delimiter).
- // Without this extra push the compiler is very reluctant to inline delimiter writes,
- // degrading performance substantially for the most common use cases.
- #[inline]
- fn short_write(&mut self, msg: &[u8]) {
- debug_assert!(msg.len() <= 8);
- let length = msg.len();
- self.length += length;
-
- let needed = 8 - self.ntail;
- let fill = cmp::min(length, needed);
- if fill == 8 {
- self.tail = unsafe { load_int_le!(msg, 0, u64) };
- } else {
- self.tail |= unsafe { u8to64_le(msg, 0, fill) } << (8 * self.ntail);
- if length < needed {
- self.ntail += length;
- return;
- }
- }
- self.state.v3 ^= self.tail;
- S::c_rounds(&mut self.state);
- self.state.v0 ^= self.tail;
-
- // Buffered tail is now flushed, process new input.
- self.ntail = length - needed;
- self.tail = unsafe { u8to64_le(msg, needed, self.ntail) };
- }
}
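The removed `short_write` leans on `load_int_le!`/`u8to64_le` to load up to eight tail bytes as a little-endian `u64`, zero-padding the high bytes. A safe sketch of that load using `u64::from_le_bytes` (the name `load_le_tail` is illustrative, not the actual macro):

```rust
// Load `len <= 8` bytes starting at `start` as a little-endian u64,
// zero-padding the unused high bytes, like the unsafe `u8to64_le` helper.
fn load_le_tail(msg: &[u8], start: usize, len: usize) -> u64 {
    assert!(len <= 8 && start + len <= msg.len());
    let mut bytes = [0u8; 8];
    bytes[..len].copy_from_slice(&msg[start..start + len]);
    u64::from_le_bytes(bytes)
}

fn main() {
    assert_eq!(load_le_tail(&[0x01, 0x02], 0, 2), 0x0201);
    assert_eq!(load_le_tail(&[0xff, 0x01, 0x02, 0x03], 1, 3), 0x030201);
    // A full 8-byte load matches a direct little-endian read.
    let buf = [1u8, 2, 3, 4, 5, 6, 7, 8];
    assert_eq!(load_le_tail(&buf, 0, 8), u64::from_le_bytes(buf));
}
```

The real helpers use unchecked raw-pointer reads for speed; the bounds assertion here makes explicit the `msg.len() <= 8` precondition that `short_write` only `debug_assert!`ed.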
#[stable(feature = "rust1", since = "1.0.0")]
}
impl<S: Sip> super::Hasher for Hasher<S> {
- // see short_write comment for explanation
- #[inline]
- fn write_usize(&mut self, i: usize) {
- let bytes = unsafe {
- crate::slice::from_raw_parts(&i as *const usize as *const u8, mem::size_of::<usize>())
- };
- self.short_write(bytes);
- }
-
- // see short_write comment for explanation
- #[inline]
- fn write_u8(&mut self, i: u8) {
- self.short_write(&[i]);
- }
-
+ // Note: no integer hashing methods (`write_u*`, `write_i*`) are defined
+ // for this type. We could add them, copy the `short_write` implementation
+ // in librustc_data_structures/sip128.rs, and add `write_u*`/`write_i*`
+ // methods to `SipHasher`, `SipHasher13`, and `DefaultHasher`. This would
+ // greatly speed up integer hashing by those hashers, at the cost of
+ // slightly slowing down compile speeds on some benchmarks. See #69152 for
+ // details.
#[inline]
fn write(&mut self, msg: &[u8]) {
let length = msg.len();
// memory, which is not valid for either `&` or `&mut`.
/// Stores a value if the current value is the same as the `old` value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `compare_exchange` method by passing
/// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html)
/// [compare_exchange]: ../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange
pub fn atomic_cxchg<T>(dst: *mut T, old: T, src: T) -> (T, bool);
/// Stores a value if the current value is the same as the `old` value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `compare_exchange` method by passing
/// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html)
/// [compare_exchange]: ../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange
pub fn atomic_cxchg_acq<T>(dst: *mut T, old: T, src: T) -> (T, bool);
/// Stores a value if the current value is the same as the `old` value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `compare_exchange` method by passing
/// [`Ordering::Release`](../../std/sync/atomic/enum.Ordering.html)
/// [compare_exchange]: ../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange
pub fn atomic_cxchg_rel<T>(dst: *mut T, old: T, src: T) -> (T, bool);
/// Stores a value if the current value is the same as the `old` value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `compare_exchange` method by passing
/// [`Ordering::AcqRel`](../../std/sync/atomic/enum.Ordering.html)
/// [compare_exchange]: ../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange
pub fn atomic_cxchg_acqrel<T>(dst: *mut T, old: T, src: T) -> (T, bool);
/// Stores a value if the current value is the same as the `old` value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `compare_exchange` method by passing
/// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html)
/// [compare_exchange]: ../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange
pub fn atomic_cxchg_relaxed<T>(dst: *mut T, old: T, src: T) -> (T, bool);
/// Stores a value if the current value is the same as the `old` value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `compare_exchange` method by passing
/// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html)
/// [compare_exchange]: ../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange
pub fn atomic_cxchg_failrelaxed<T>(dst: *mut T, old: T, src: T) -> (T, bool);
/// Stores a value if the current value is the same as the `old` value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `compare_exchange` method by passing
/// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html)
/// [compare_exchange]: ../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange
pub fn atomic_cxchg_failacq<T>(dst: *mut T, old: T, src: T) -> (T, bool);
/// Stores a value if the current value is the same as the `old` value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `compare_exchange` method by passing
/// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html)
/// [compare_exchange]: ../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange
pub fn atomic_cxchg_acq_failrelaxed<T>(dst: *mut T, old: T, src: T) -> (T, bool);
/// Stores a value if the current value is the same as the `old` value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `compare_exchange` method by passing
/// [`Ordering::AcqRel`](../../std/sync/atomic/enum.Ordering.html)
pub fn atomic_cxchg_acqrel_failrelaxed<T>(dst: *mut T, old: T, src: T) -> (T, bool);
/// Stores a value if the current value is the same as the `old` value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `compare_exchange_weak` method by passing
/// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html)
/// [cew]: ../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange_weak
pub fn atomic_cxchgweak<T>(dst: *mut T, old: T, src: T) -> (T, bool);
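The stabilized counterpart of the `atomic_cxchgweak*` intrinsics, `compare_exchange_weak`, may fail spuriously even when the comparison succeeds, so it is normally used in a retry loop. A minimal sketch (the `double` helper is illustrative):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Atomically double the value with a `compare_exchange_weak` retry loop.
fn double(v: &AtomicUsize) -> usize {
    let mut current = v.load(Ordering::Relaxed);
    loop {
        match v.compare_exchange_weak(current, current * 2, Ordering::SeqCst, Ordering::Relaxed) {
            Ok(prev) => return prev,
            // On (possibly spurious) failure, retry with the observed value.
            Err(observed) => current = observed,
        }
    }
}

fn main() {
    let v = AtomicUsize::new(21);
    assert_eq!(double(&v), 21); // returns the previous value
    assert_eq!(v.load(Ordering::SeqCst), 42);
}
```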
/// Stores a value if the current value is the same as the `old` value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `compare_exchange_weak` method by passing
/// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html)
/// [cew]: ../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange_weak
pub fn atomic_cxchgweak_acq<T>(dst: *mut T, old: T, src: T) -> (T, bool);
/// Stores a value if the current value is the same as the `old` value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `compare_exchange_weak` method by passing
/// [`Ordering::Release`](../../std/sync/atomic/enum.Ordering.html)
/// [cew]: ../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange_weak
pub fn atomic_cxchgweak_rel<T>(dst: *mut T, old: T, src: T) -> (T, bool);
/// Stores a value if the current value is the same as the `old` value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `compare_exchange_weak` method by passing
/// [`Ordering::AcqRel`](../../std/sync/atomic/enum.Ordering.html)
/// [cew]: ../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange_weak
pub fn atomic_cxchgweak_acqrel<T>(dst: *mut T, old: T, src: T) -> (T, bool);
/// Stores a value if the current value is the same as the `old` value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `compare_exchange_weak` method by passing
/// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html)
/// [cew]: ../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange_weak
pub fn atomic_cxchgweak_relaxed<T>(dst: *mut T, old: T, src: T) -> (T, bool);
/// Stores a value if the current value is the same as the `old` value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `compare_exchange_weak` method by passing
/// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html)
/// [cew]: ../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange_weak
pub fn atomic_cxchgweak_failrelaxed<T>(dst: *mut T, old: T, src: T) -> (T, bool);
/// Stores a value if the current value is the same as the `old` value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `compare_exchange_weak` method by passing
/// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html)
/// [cew]: ../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange_weak
pub fn atomic_cxchgweak_failacq<T>(dst: *mut T, old: T, src: T) -> (T, bool);
/// Stores a value if the current value is the same as the `old` value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `compare_exchange_weak` method by passing
/// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html)
/// [cew]: ../../std/sync/atomic/struct.AtomicBool.html#method.compare_exchange_weak
pub fn atomic_cxchgweak_acq_failrelaxed<T>(dst: *mut T, old: T, src: T) -> (T, bool);
/// Stores a value if the current value is the same as the `old` value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `compare_exchange_weak` method by passing
/// [`Ordering::AcqRel`](../../std/sync/atomic/enum.Ordering.html)
pub fn atomic_cxchgweak_acqrel_failrelaxed<T>(dst: *mut T, old: T, src: T) -> (T, bool);
/// Loads the current value of the pointer.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `load` method by passing
/// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicBool::load`](../../std/sync/atomic/struct.AtomicBool.html#method.load).
pub fn atomic_load<T>(src: *const T) -> T;
/// Loads the current value of the pointer.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `load` method by passing
/// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicBool::load`](../../std/sync/atomic/struct.AtomicBool.html#method.load).
pub fn atomic_load_acq<T>(src: *const T) -> T;
/// Loads the current value of the pointer.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `load` method by passing
/// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html)
pub fn atomic_load_unordered<T>(src: *const T) -> T;
/// Stores the value at the specified memory location.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `store` method by passing
/// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicBool::store`](../../std/sync/atomic/struct.AtomicBool.html#method.store).
pub fn atomic_store<T>(dst: *mut T, val: T);
/// Stores the value at the specified memory location.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `store` method by passing
/// [`Ordering::Release`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicBool::store`](../../std/sync/atomic/struct.AtomicBool.html#method.store).
pub fn atomic_store_rel<T>(dst: *mut T, val: T);
/// Stores the value at the specified memory location.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `store` method by passing
/// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html)
pub fn atomic_store_unordered<T>(dst: *mut T, val: T);
/// Stores the value at the specified memory location, returning the old value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `swap` method by passing
/// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicBool::swap`](../../std/sync/atomic/struct.AtomicBool.html#method.swap).
pub fn atomic_xchg<T>(dst: *mut T, src: T) -> T;
/// Stores the value at the specified memory location, returning the old value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `swap` method by passing
/// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicBool::swap`](../../std/sync/atomic/struct.AtomicBool.html#method.swap).
pub fn atomic_xchg_acq<T>(dst: *mut T, src: T) -> T;
/// Stores the value at the specified memory location, returning the old value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `swap` method by passing
/// [`Ordering::Release`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicBool::swap`](../../std/sync/atomic/struct.AtomicBool.html#method.swap).
pub fn atomic_xchg_rel<T>(dst: *mut T, src: T) -> T;
/// Stores the value at the specified memory location, returning the old value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `swap` method by passing
/// [`Ordering::AcqRel`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicBool::swap`](../../std/sync/atomic/struct.AtomicBool.html#method.swap).
pub fn atomic_xchg_acqrel<T>(dst: *mut T, src: T) -> T;
/// Stores the value at the specified memory location, returning the old value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `swap` method by passing
/// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html)
pub fn atomic_xchg_relaxed<T>(dst: *mut T, src: T) -> T;
/// Adds to the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `fetch_add` method by passing
/// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicIsize::fetch_add`](../../std/sync/atomic/struct.AtomicIsize.html#method.fetch_add).
pub fn atomic_xadd<T>(dst: *mut T, src: T) -> T;
/// Adds to the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `fetch_add` method by passing
/// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicIsize::fetch_add`](../../std/sync/atomic/struct.AtomicIsize.html#method.fetch_add).
pub fn atomic_xadd_acq<T>(dst: *mut T, src: T) -> T;
/// Adds to the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `fetch_add` method by passing
/// [`Ordering::Release`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicIsize::fetch_add`](../../std/sync/atomic/struct.AtomicIsize.html#method.fetch_add).
pub fn atomic_xadd_rel<T>(dst: *mut T, src: T) -> T;
/// Adds to the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `fetch_add` method by passing
/// [`Ordering::AcqRel`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicIsize::fetch_add`](../../std/sync/atomic/struct.AtomicIsize.html#method.fetch_add).
pub fn atomic_xadd_acqrel<T>(dst: *mut T, src: T) -> T;
/// Adds to the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `fetch_add` method by passing
/// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html)
pub fn atomic_xadd_relaxed<T>(dst: *mut T, src: T) -> T;
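The stabilized `fetch_add` surface of the `atomic_xadd*` intrinsics is the usual way to build a lock-free counter; a small sketch with `SeqCst` (matching the plain `atomic_xadd` variant):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let counter = Arc::new(AtomicUsize::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1000 {
                    // `fetch_add` returns the previous value; `SeqCst` here
                    // corresponds to the plain `atomic_xadd` intrinsic.
                    c.fetch_add(1, Ordering::SeqCst);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(counter.load(Ordering::SeqCst), 4000);
}
```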
/// Subtract from the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `fetch_sub` method by passing
/// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicIsize::fetch_sub`](../../std/sync/atomic/struct.AtomicIsize.html#method.fetch_sub).
pub fn atomic_xsub<T>(dst: *mut T, src: T) -> T;
/// Subtract from the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `fetch_sub` method by passing
/// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicIsize::fetch_sub`](../../std/sync/atomic/struct.AtomicIsize.html#method.fetch_sub).
pub fn atomic_xsub_acq<T>(dst: *mut T, src: T) -> T;
/// Subtract from the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `fetch_sub` method by passing
/// [`Ordering::Release`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicIsize::fetch_sub`](../../std/sync/atomic/struct.AtomicIsize.html#method.fetch_sub).
pub fn atomic_xsub_rel<T>(dst: *mut T, src: T) -> T;
/// Subtract from the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `fetch_sub` method by passing
/// [`Ordering::AcqRel`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicIsize::fetch_sub`](../../std/sync/atomic/struct.AtomicIsize.html#method.fetch_sub).
pub fn atomic_xsub_acqrel<T>(dst: *mut T, src: T) -> T;
/// Subtract from the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `fetch_sub` method by passing
/// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html)
pub fn atomic_xsub_relaxed<T>(dst: *mut T, src: T) -> T;
/// Bitwise and with the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `fetch_and` method by passing
/// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicBool::fetch_and`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_and).
pub fn atomic_and<T>(dst: *mut T, src: T) -> T;
/// Bitwise and with the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `fetch_and` method by passing
/// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicBool::fetch_and`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_and).
pub fn atomic_and_acq<T>(dst: *mut T, src: T) -> T;
/// Bitwise and with the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `fetch_and` method by passing
/// [`Ordering::Release`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicBool::fetch_and`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_and).
pub fn atomic_and_rel<T>(dst: *mut T, src: T) -> T;
/// Bitwise and with the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `fetch_and` method by passing
/// [`Ordering::AcqRel`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicBool::fetch_and`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_and).
pub fn atomic_and_acqrel<T>(dst: *mut T, src: T) -> T;
/// Bitwise and with the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `fetch_and` method by passing
/// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html)
pub fn atomic_and_relaxed<T>(dst: *mut T, src: T) -> T;
/// Bitwise nand with the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic::AtomicBool` type via the `fetch_nand` method by passing
/// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicBool::fetch_nand`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_nand).
pub fn atomic_nand<T>(dst: *mut T, src: T) -> T;
/// Bitwise nand with the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic::AtomicBool` type via the `fetch_nand` method by passing
/// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicBool::fetch_nand`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_nand).
pub fn atomic_nand_acq<T>(dst: *mut T, src: T) -> T;
/// Bitwise nand with the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic::AtomicBool` type via the `fetch_nand` method by passing
/// [`Ordering::Release`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicBool::fetch_nand`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_nand).
pub fn atomic_nand_rel<T>(dst: *mut T, src: T) -> T;
/// Bitwise nand with the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic::AtomicBool` type via the `fetch_nand` method by passing
/// [`Ordering::AcqRel`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicBool::fetch_nand`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_nand).
pub fn atomic_nand_acqrel<T>(dst: *mut T, src: T) -> T;
/// Bitwise nand with the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic::AtomicBool` type via the `fetch_nand` method by passing
/// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html)
pub fn atomic_nand_relaxed<T>(dst: *mut T, src: T) -> T;
/// Bitwise or with the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `fetch_or` method by passing
/// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicBool::fetch_or`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_or).
pub fn atomic_or<T>(dst: *mut T, src: T) -> T;
/// Bitwise or with the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `fetch_or` method by passing
/// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicBool::fetch_or`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_or).
pub fn atomic_or_acq<T>(dst: *mut T, src: T) -> T;
/// Bitwise or with the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `fetch_or` method by passing
/// [`Ordering::Release`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicBool::fetch_or`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_or).
pub fn atomic_or_rel<T>(dst: *mut T, src: T) -> T;
/// Bitwise or with the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `fetch_or` method by passing
/// [`Ordering::AcqRel`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicBool::fetch_or`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_or).
pub fn atomic_or_acqrel<T>(dst: *mut T, src: T) -> T;
/// Bitwise or with the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `fetch_or` method by passing
/// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html)
pub fn atomic_or_relaxed<T>(dst: *mut T, src: T) -> T;
/// Bitwise xor with the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `fetch_xor` method by passing
/// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicBool::fetch_xor`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_xor).
pub fn atomic_xor<T>(dst: *mut T, src: T) -> T;
/// Bitwise xor with the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `fetch_xor` method by passing
/// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicBool::fetch_xor`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_xor).
pub fn atomic_xor_acq<T>(dst: *mut T, src: T) -> T;
/// Bitwise xor with the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `fetch_xor` method by passing
/// [`Ordering::Release`](../../std/sync/atomic/enum.Ordering.html)
/// [`AtomicBool::fetch_xor`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_xor).
pub fn atomic_xor_rel<T>(dst: *mut T, src: T) -> T;
/// Bitwise xor with the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `fetch_xor` method by passing
/// [`Ordering::AcqRel`](../../std/sync/atomic/enum.Ordering.html)
/// as the `order`. For example,
/// [`AtomicBool::fetch_xor`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_xor).
pub fn atomic_xor_acqrel<T>(dst: *mut T, src: T) -> T;
/// Bitwise xor with the current value, returning the previous value.
+ ///
/// The stabilized version of this intrinsic is available on the
/// `std::sync::atomic` types via the `fetch_xor` method by passing
/// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html)
/// as the `order`. For example,
/// [`AtomicBool::fetch_xor`](../../std/sync/atomic/struct.AtomicBool.html#method.fetch_xor).
pub fn atomic_xor_relaxed<T>(dst: *mut T, src: T) -> T;
+ /// Maximum with the current value using a signed comparison.
+ ///
+ /// The stabilized version of this intrinsic is available on the
+ /// `std::sync::atomic` signed integer types via the `fetch_max` method by passing
+ /// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html#variant.SeqCst)
+ /// as the `order`. For example,
+ /// [`AtomicI32::fetch_max`](../../std/sync/atomic/struct.AtomicI32.html#method.fetch_max).
pub fn atomic_max<T>(dst: *mut T, src: T) -> T;
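As a cross-check of the stabilized surface named in the doc comment above, a minimal sketch using `AtomicI32::fetch_max` from stable `std::sync::atomic` (the stable wrapper, not the intrinsic itself):

```rust
use std::sync::atomic::{AtomicI32, Ordering};

fn main() {
    let v = AtomicI32::new(5);
    // `fetch_max` stores the signed maximum of the current and given value,
    // returning the previous value -- the same contract as the intrinsic.
    let prev = v.fetch_max(10, Ordering::SeqCst);
    assert_eq!(prev, 5);
    assert_eq!(v.load(Ordering::SeqCst), 10);
    // A smaller operand leaves the stored value unchanged.
    assert_eq!(v.fetch_max(3, Ordering::SeqCst), 10);
}
```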
+ /// Maximum with the current value using a signed comparison.
+ ///
+ /// The stabilized version of this intrinsic is available on the
+ /// `std::sync::atomic` signed integer types via the `fetch_max` method by passing
+ /// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html#variant.Acquire)
+ /// as the `order`. For example,
+ /// [`AtomicI32::fetch_max`](../../std/sync/atomic/struct.AtomicI32.html#method.fetch_max).
pub fn atomic_max_acq<T>(dst: *mut T, src: T) -> T;
+ /// Maximum with the current value using a signed comparison.
+ ///
+ /// The stabilized version of this intrinsic is available on the
+ /// `std::sync::atomic` signed integer types via the `fetch_max` method by passing
+ /// [`Ordering::Release`](../../std/sync/atomic/enum.Ordering.html#variant.Release)
+ /// as the `order`. For example,
+ /// [`AtomicI32::fetch_max`](../../std/sync/atomic/struct.AtomicI32.html#method.fetch_max).
pub fn atomic_max_rel<T>(dst: *mut T, src: T) -> T;
+ /// Maximum with the current value using a signed comparison.
+ ///
+ /// The stabilized version of this intrinsic is available on the
+ /// `std::sync::atomic` signed integer types via the `fetch_max` method by passing
+ /// [`Ordering::AcqRel`](../../std/sync/atomic/enum.Ordering.html#variant.AcqRel)
+ /// as the `order`. For example,
+ /// [`AtomicI32::fetch_max`](../../std/sync/atomic/struct.AtomicI32.html#method.fetch_max).
pub fn atomic_max_acqrel<T>(dst: *mut T, src: T) -> T;
+ /// Maximum with the current value using a signed comparison.
+ ///
+ /// The stabilized version of this intrinsic is available on the
+ /// `std::sync::atomic` signed integer types via the `fetch_max` method by passing
+ /// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html#variant.Relaxed)
+ /// as the `order`. For example,
+ /// [`AtomicI32::fetch_max`](../../std/sync/atomic/struct.AtomicI32.html#method.fetch_max).
pub fn atomic_max_relaxed<T>(dst: *mut T, src: T) -> T;
+ /// Minimum with the current value using a signed comparison.
+ ///
+ /// The stabilized version of this intrinsic is available on the
+ /// `std::sync::atomic` signed integer types via the `fetch_min` method by passing
+ /// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html#variant.SeqCst)
+ /// as the `order`. For example,
+ /// [`AtomicI32::fetch_min`](../../std/sync/atomic/struct.AtomicI32.html#method.fetch_min).
pub fn atomic_min<T>(dst: *mut T, src: T) -> T;
+ /// Minimum with the current value using a signed comparison.
+ ///
+ /// The stabilized version of this intrinsic is available on the
+ /// `std::sync::atomic` signed integer types via the `fetch_min` method by passing
+ /// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html#variant.Acquire)
+ /// as the `order`. For example,
+ /// [`AtomicI32::fetch_min`](../../std/sync/atomic/struct.AtomicI32.html#method.fetch_min).
pub fn atomic_min_acq<T>(dst: *mut T, src: T) -> T;
+ /// Minimum with the current value using a signed comparison.
+ ///
+ /// The stabilized version of this intrinsic is available on the
+ /// `std::sync::atomic` signed integer types via the `fetch_min` method by passing
+ /// [`Ordering::Release`](../../std/sync/atomic/enum.Ordering.html#variant.Release)
+ /// as the `order`. For example,
+ /// [`AtomicI32::fetch_min`](../../std/sync/atomic/struct.AtomicI32.html#method.fetch_min).
pub fn atomic_min_rel<T>(dst: *mut T, src: T) -> T;
+ /// Minimum with the current value using a signed comparison.
+ ///
+ /// The stabilized version of this intrinsic is available on the
+ /// `std::sync::atomic` signed integer types via the `fetch_min` method by passing
+ /// [`Ordering::AcqRel`](../../std/sync/atomic/enum.Ordering.html#variant.AcqRel)
+ /// as the `order`. For example,
+ /// [`AtomicI32::fetch_min`](../../std/sync/atomic/struct.AtomicI32.html#method.fetch_min).
pub fn atomic_min_acqrel<T>(dst: *mut T, src: T) -> T;
+ /// Minimum with the current value using a signed comparison.
+ ///
+ /// The stabilized version of this intrinsic is available on the
+ /// `std::sync::atomic` signed integer types via the `fetch_min` method by passing
+ /// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html#variant.Relaxed)
+ /// as the `order`. For example,
+ /// [`AtomicI32::fetch_min`](../../std/sync/atomic/struct.AtomicI32.html#method.fetch_min).
pub fn atomic_min_relaxed<T>(dst: *mut T, src: T) -> T;
+ /// Minimum with the current value using an unsigned comparison.
+ ///
+ /// The stabilized version of this intrinsic is available on the
+ /// `std::sync::atomic` unsigned integer types via the `fetch_min` method by passing
+ /// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html#variant.SeqCst)
+ /// as the `order`. For example,
+ /// [`AtomicU32::fetch_min`](../../std/sync/atomic/struct.AtomicU32.html#method.fetch_min).
pub fn atomic_umin<T>(dst: *mut T, src: T) -> T;
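The unsigned variants map to `fetch_min`/`fetch_max` on the unsigned atomic types; a minimal sketch of the stable API the comment above points to:

```rust
use std::sync::atomic::{AtomicU32, Ordering};

fn main() {
    let v = AtomicU32::new(5);
    // `fetch_min` on an unsigned atomic performs the unsigned comparison,
    // storing the minimum and returning the previous value.
    let prev = v.fetch_min(3, Ordering::SeqCst);
    assert_eq!(prev, 5);
    assert_eq!(v.load(Ordering::SeqCst), 3);
    // A larger operand leaves the stored value unchanged.
    assert_eq!(v.fetch_min(100, Ordering::SeqCst), 3);
}
```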
+ /// Minimum with the current value using an unsigned comparison.
+ ///
+ /// The stabilized version of this intrinsic is available on the
+ /// `std::sync::atomic` unsigned integer types via the `fetch_min` method by passing
+ /// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html#variant.Acquire)
+ /// as the `order`. For example,
+ /// [`AtomicU32::fetch_min`](../../std/sync/atomic/struct.AtomicU32.html#method.fetch_min).
pub fn atomic_umin_acq<T>(dst: *mut T, src: T) -> T;
+ /// Minimum with the current value using an unsigned comparison.
+ ///
+ /// The stabilized version of this intrinsic is available on the
+ /// `std::sync::atomic` unsigned integer types via the `fetch_min` method by passing
+ /// [`Ordering::Release`](../../std/sync/atomic/enum.Ordering.html#variant.Release)
+ /// as the `order`. For example,
+ /// [`AtomicU32::fetch_min`](../../std/sync/atomic/struct.AtomicU32.html#method.fetch_min).
pub fn atomic_umin_rel<T>(dst: *mut T, src: T) -> T;
+ /// Minimum with the current value using an unsigned comparison.
+ ///
+ /// The stabilized version of this intrinsic is available on the
+ /// `std::sync::atomic` unsigned integer types via the `fetch_min` method by passing
+ /// [`Ordering::AcqRel`](../../std/sync/atomic/enum.Ordering.html#variant.AcqRel)
+ /// as the `order`. For example,
+ /// [`AtomicU32::fetch_min`](../../std/sync/atomic/struct.AtomicU32.html#method.fetch_min).
pub fn atomic_umin_acqrel<T>(dst: *mut T, src: T) -> T;
+ /// Minimum with the current value using an unsigned comparison.
+ ///
+ /// The stabilized version of this intrinsic is available on the
+ /// `std::sync::atomic` unsigned integer types via the `fetch_min` method by passing
+ /// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html#variant.Relaxed)
+ /// as the `order`. For example,
+ /// [`AtomicU32::fetch_min`](../../std/sync/atomic/struct.AtomicU32.html#method.fetch_min).
pub fn atomic_umin_relaxed<T>(dst: *mut T, src: T) -> T;
+ /// Maximum with the current value using an unsigned comparison.
+ ///
+ /// The stabilized version of this intrinsic is available on the
+ /// `std::sync::atomic` unsigned integer types via the `fetch_max` method by passing
+ /// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html#variant.SeqCst)
+ /// as the `order`. For example,
+ /// [`AtomicU32::fetch_max`](../../std/sync/atomic/struct.AtomicU32.html#method.fetch_max).
pub fn atomic_umax<T>(dst: *mut T, src: T) -> T;
+ /// Maximum with the current value using an unsigned comparison.
+ ///
+ /// The stabilized version of this intrinsic is available on the
+ /// `std::sync::atomic` unsigned integer types via the `fetch_max` method by passing
+ /// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html#variant.Acquire)
+ /// as the `order`. For example,
+ /// [`AtomicU32::fetch_max`](../../std/sync/atomic/struct.AtomicU32.html#method.fetch_max).
pub fn atomic_umax_acq<T>(dst: *mut T, src: T) -> T;
+ /// Maximum with the current value using an unsigned comparison.
+ ///
+ /// The stabilized version of this intrinsic is available on the
+ /// `std::sync::atomic` unsigned integer types via the `fetch_max` method by passing
+ /// [`Ordering::Release`](../../std/sync/atomic/enum.Ordering.html#variant.Release)
+ /// as the `order`. For example,
+ /// [`AtomicU32::fetch_max`](../../std/sync/atomic/struct.AtomicU32.html#method.fetch_max).
pub fn atomic_umax_rel<T>(dst: *mut T, src: T) -> T;
+ /// Maximum with the current value using an unsigned comparison.
+ ///
+ /// The stabilized version of this intrinsic is available on the
+ /// `std::sync::atomic` unsigned integer types via the `fetch_max` method by passing
+ /// [`Ordering::AcqRel`](../../std/sync/atomic/enum.Ordering.html#variant.AcqRel)
+ /// as the `order`. For example,
+ /// [`AtomicU32::fetch_max`](../../std/sync/atomic/struct.AtomicU32.html#method.fetch_max).
pub fn atomic_umax_acqrel<T>(dst: *mut T, src: T) -> T;
+ /// Maximum with the current value using an unsigned comparison.
+ ///
+ /// The stabilized version of this intrinsic is available on the
+ /// `std::sync::atomic` unsigned integer types via the `fetch_max` method by passing
+ /// [`Ordering::Relaxed`](../../std/sync/atomic/enum.Ordering.html#variant.Relaxed)
+ /// as the `order`. For example,
+ /// [`AtomicU32::fetch_max`](../../std/sync/atomic/struct.AtomicU32.html#method.fetch_max).
pub fn atomic_umax_relaxed<T>(dst: *mut T, src: T) -> T;
/// The `prefetch` intrinsic is a hint to the code generator to insert a prefetch instruction
extern "rust-intrinsic" {
+ /// An atomic fence.
+ ///
+ /// The stabilized version of this intrinsic is available in
+ /// [`std::sync::atomic::fence`](../../std/sync/atomic/fn.fence.html)
+ /// by passing
+ /// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html#variant.SeqCst)
+ /// as the `order`.
pub fn atomic_fence();
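The release/acquire fence pattern that `std::sync::atomic::fence` enables looks like this; a single-threaded sketch (the assertion holds trivially here, but the same sequence is what makes the data store visible across threads):

```rust
use std::sync::atomic::{fence, AtomicUsize, Ordering};

static DATA: AtomicUsize = AtomicUsize::new(0);
static READY: AtomicUsize = AtomicUsize::new(0);

fn main() {
    DATA.store(42, Ordering::Relaxed);
    // A release fence: relaxed stores above it become visible to any thread
    // that performs an acquire fence after observing READY == 1.
    fence(Ordering::Release);
    READY.store(1, Ordering::Relaxed);

    // Reader side of the same pattern.
    if READY.load(Ordering::Relaxed) == 1 {
        fence(Ordering::Acquire);
        assert_eq!(DATA.load(Ordering::Relaxed), 42);
    }
}
```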
+ /// An atomic fence.
+ ///
+ /// The stabilized version of this intrinsic is available in
+ /// [`std::sync::atomic::fence`](../../std/sync/atomic/fn.fence.html)
+ /// by passing
+ /// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html#variant.Acquire)
+ /// as the `order`.
pub fn atomic_fence_acq();
+ /// An atomic fence.
+ ///
+ /// The stabilized version of this intrinsic is available in
+ /// [`std::sync::atomic::fence`](../../std/sync/atomic/fn.fence.html)
+ /// by passing
+ /// [`Ordering::Release`](../../std/sync/atomic/enum.Ordering.html#variant.Release)
+ /// as the `order`.
pub fn atomic_fence_rel();
+ /// An atomic fence.
+ ///
+ /// The stabilized version of this intrinsic is available in
+ /// [`std::sync::atomic::fence`](../../std/sync/atomic/fn.fence.html)
+ /// by passing
+ /// [`Ordering::AcqRel`](../../std/sync/atomic/enum.Ordering.html#variant.AcqRel)
+ /// as the `order`.
pub fn atomic_fence_acqrel();
/// A compiler-only memory barrier.
///
/// Memory accesses will never be reordered across this barrier by the
/// compiler, but no instructions will be emitted for it. This is
/// appropriate for operations on the same thread that may be preempted,
/// such as when interacting with signal handlers.
+ ///
+ /// The stabilized version of this intrinsic is available in
+ /// [`std::sync::atomic::compiler_fence`](../../std/sync/atomic/fn.compiler_fence.html)
+ /// by passing
+ /// [`Ordering::SeqCst`](../../std/sync/atomic/enum.Ordering.html#variant.SeqCst)
+ /// as the `order`.
pub fn atomic_singlethreadfence();
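The stable `compiler_fence` counterpart constrains only compiler reordering, as the doc comment describes; a minimal sketch of the signal-handler-style pattern:

```rust
use std::sync::atomic::{compiler_fence, AtomicBool, AtomicUsize, Ordering};

static DATA: AtomicUsize = AtomicUsize::new(0);
static READY: AtomicBool = AtomicBool::new(false);

fn main() {
    DATA.store(7, Ordering::Relaxed);
    // Stops the compiler from sinking the data store below the flag store;
    // no hardware fence instruction is emitted.
    compiler_fence(Ordering::Release);
    READY.store(true, Ordering::Relaxed);

    assert!(READY.load(Ordering::Relaxed));
    assert_eq!(DATA.load(Ordering::Relaxed), 7);
}
```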
+ /// A compiler-only memory barrier.
+ ///
+ /// Memory accesses will never be reordered across this barrier by the
+ /// compiler, but no instructions will be emitted for it. This is
+ /// appropriate for operations on the same thread that may be preempted,
+ /// such as when interacting with signal handlers.
+ ///
+ /// The stabilized version of this intrinsic is available in
+ /// [`std::sync::atomic::compiler_fence`](../../std/sync/atomic/fn.compiler_fence.html)
+ /// by passing
+ /// [`Ordering::Acquire`](../../std/sync/atomic/enum.Ordering.html#variant.Acquire)
+ /// as the `order`.
pub fn atomic_singlethreadfence_acq();
+ /// A compiler-only memory barrier.
+ ///
+ /// Memory accesses will never be reordered across this barrier by the
+ /// compiler, but no instructions will be emitted for it. This is
+ /// appropriate for operations on the same thread that may be preempted,
+ /// such as when interacting with signal handlers.
+ ///
+ /// The stabilized version of this intrinsic is available in
+ /// [`std::sync::atomic::compiler_fence`](../../std/sync/atomic/fn.compiler_fence.html)
+ /// by passing
+ /// [`Ordering::Release`](../../std/sync/atomic/enum.Ordering.html#variant.Release)
+ /// as the `order`.
pub fn atomic_singlethreadfence_rel();
+ /// A compiler-only memory barrier.
+ ///
+ /// Memory accesses will never be reordered across this barrier by the
+ /// compiler, but no instructions will be emitted for it. This is
+ /// appropriate for operations on the same thread that may be preempted,
+ /// such as when interacting with signal handlers.
+ ///
+ /// The stabilized version of this intrinsic is available in
+ /// [`std::sync::atomic::compiler_fence`](../../std/sync/atomic/fn.compiler_fence.html)
+ /// by passing
+ /// [`Ordering::AcqRel`](../../std/sync/atomic/enum.Ordering.html#variant.AcqRel)
+ /// as the `order`.
pub fn atomic_singlethreadfence_acqrel();
/// Magic intrinsic that derives its meaning from attributes
/// Moves a value to an uninitialized memory location.
///
/// Drop glue is not run on the destination.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::ptr::write`](../../std/ptr/fn.write.html).
pub fn move_val_init<T>(dst: *mut T, src: T);
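The stabilized `std::ptr::write` shows the same "no drop glue on the destination" behavior; a minimal sketch using `MaybeUninit` as the destination:

```rust
use std::mem::MaybeUninit;
use std::ptr;

fn main() {
    let mut slot = MaybeUninit::<String>::uninit();
    unsafe {
        // `ptr::write` moves the value in without reading or dropping the
        // (uninitialized) old contents of the destination.
        ptr::write(slot.as_mut_ptr(), String::from("initialized"));
        let s = slot.assume_init();
        assert_eq!(s, "initialized");
    }
}
```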
+ /// The minimum alignment of a type.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::mem::align_of`](../../std/mem/fn.align_of.html).
#[rustc_const_stable(feature = "const_min_align_of", since = "1.40.0")]
pub fn min_align_of<T>() -> usize;
#[rustc_const_unstable(feature = "const_pref_align_of", issue = "none")]
/// The stabilized version of this intrinsic is
/// [`std::mem::size_of_val`](../../std/mem/fn.size_of_val.html).
pub fn size_of_val<T: ?Sized>(_: &T) -> usize;
+ /// The minimum alignment of the type of the value that `val` points to.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::mem::min_align_of_val`](../../std/mem/fn.min_align_of_val.html).
pub fn min_align_of_val<T: ?Sized>(_: &T) -> usize;
/// Gets a static string slice containing the name of a type.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::any::type_name`](../../std/any/fn.type_name.html)
#[rustc_const_unstable(feature = "const_type_name", issue = "none")]
pub fn type_name<T: ?Sized>() -> &'static str;
/// Gets an identifier which is globally unique to the specified type. This
/// function will return the same value for a type regardless of whichever
/// crate it is invoked in.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::any::TypeId::of`](../../std/any/struct.TypeId.html#method.of)
#[rustc_const_unstable(feature = "const_type_id", issue = "none")]
pub fn type_id<T: ?Sized + 'static>() -> u64;
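The stable wrappers for the two intrinsics above can be exercised directly; a minimal sketch:

```rust
use std::any::{type_name, TypeId};

fn main() {
    // `std::any::type_name` is the stable wrapper for the `type_name` intrinsic.
    assert_eq!(type_name::<u32>(), "u32");
    // `TypeId::of` wraps the `type_id` intrinsic: the same type always yields
    // the same id regardless of the calling crate; distinct types differ.
    assert_eq!(TypeId::of::<u32>(), TypeId::of::<u32>());
    assert_ne!(TypeId::of::<u32>(), TypeId::of::<i32>());
}
```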
/// A guard for unsafe functions that cannot ever be executed if `T` is uninhabited:
/// This will statically either panic, or do nothing.
+ #[cfg(bootstrap)]
pub fn panic_if_uninhabited<T>();
+ /// A guard for unsafe functions that cannot ever be executed if `T` is uninhabited:
+ /// This will statically either panic, or do nothing.
+ #[cfg(not(bootstrap))]
+ pub fn assert_inhabited<T>();
+
+ /// A guard for unsafe functions that cannot ever be executed if `T` does not permit
+ /// zero-initialization: This will statically either panic, or do nothing.
+ #[cfg(not(bootstrap))]
+ pub fn assert_zero_valid<T>();
+
+ /// A guard for unsafe functions that cannot ever be executed if `T` has invalid
+ /// bit patterns: This will statically either panic, or do nothing.
+ #[cfg(not(bootstrap))]
+ pub fn assert_uninit_valid<T>();
+
/// Gets a reference to a static `Location` indicating where it was called.
#[rustc_const_unstable(feature = "const_caller_location", issue = "47809")]
pub fn caller_location() -> &'static crate::panic::Location<'static>;
- /// Creates a value initialized to zero.
- ///
- /// `init` is unsafe because it returns a zeroed-out datum,
- /// which is unsafe unless `T` is `Copy`. Also, even if T is
- /// `Copy`, an all-zero value may not correspond to any legitimate
- /// state for the type in question.
- #[unstable(
- feature = "core_intrinsics",
- reason = "intrinsics are unlikely to ever be stabilized, instead \
- they should be used through stabilized interfaces \
- in the rest of the standard library",
- issue = "none"
- )]
- #[rustc_deprecated(reason = "superseded by MaybeUninit, removal planned", since = "1.38.0")]
- pub fn init<T>() -> T;
-
- /// Creates an uninitialized value.
- ///
- /// `uninit` is unsafe because there is no guarantee of what its
- /// contents are. In particular its drop-flag may be set to any
- /// state, which means it may claim either dropped or
- /// undropped. In the general case one must use `ptr::write` to
- /// initialize memory previous set to the result of `uninit`.
- #[unstable(
- feature = "core_intrinsics",
- reason = "intrinsics are unlikely to ever be stabilized, instead \
- they should be used through stabilized interfaces \
- in the rest of the standard library",
- issue = "none"
- )]
- #[rustc_deprecated(reason = "superseded by MaybeUninit, removal planned", since = "1.38.0")]
- pub fn uninit<T>() -> T;
-
/// Moves a value out of scope without running drop glue.
+ /// This exists solely for `mem::forget_unsized`; normal `forget` uses `ManuallyDrop` instead.
pub fn forget<T: ?Sized>(_: T);
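Skipping drop glue is observable through the stable `mem::forget`; a minimal sketch using an `Rc` clone whose forgotten handle never decrements the count:

```rust
use std::mem;
use std::rc::Rc;

fn main() {
    let r = Rc::new(());
    let clone = r.clone();
    // Forgetting the clone skips its destructor, so the strong count is
    // never decremented (the allocation is leaked, which is safe).
    mem::forget(clone);
    assert_eq!(Rc::strong_count(&r), 2);
}
```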
/// Reinterprets the bits of a value of one type as another type.
/// byte past the end of an allocated object. If either pointer is out of
/// bounds or arithmetic overflow occurs then any further use of the
/// returned value will result in undefined behavior.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::pointer::offset`](../../std/primitive.pointer.html#method.offset).
pub fn offset<T>(dst: *const T, offset: isize) -> *const T;
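The stable `pointer::offset` method carries the same in-bounds requirement; a minimal sketch where the offset provably stays inside one array:

```rust
fn main() {
    let a = [10u8, 20, 30, 40];
    let p = a.as_ptr();
    // Safe because index 2 is within the same allocated object and no
    // arithmetic overflow can occur.
    unsafe {
        assert_eq!(*p.offset(2), 30);
    }
}
```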
/// Calculates the offset from a pointer, potentially wrapping.
/// resulting pointer to point into or one byte past the end of an allocated
/// object, and it wraps with two's complement arithmetic. The resulting
/// value is not necessarily valid to be used to actually access memory.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::pointer::wrapping_offset`](../../std/primitive.pointer.html#method.wrapping_offset).
pub fn arith_offset<T>(dst: *const T, offset: isize) -> *const T;
/// Equivalent to the appropriate `llvm.memcpy.p0i8.0i8.*` intrinsic, with
pub fn volatile_set_memory<T>(dst: *mut T, val: u8, count: usize);
/// Performs a volatile load from the `src` pointer.
+ ///
/// The stabilized version of this intrinsic is
/// [`std::ptr::read_volatile`](../../std/ptr/fn.read_volatile.html).
pub fn volatile_load<T>(src: *const T) -> T;
/// Performs a volatile store to the `dst` pointer.
+ ///
/// The stabilized version of this intrinsic is
/// [`std::ptr::write_volatile`](../../std/ptr/fn.write_volatile.html).
pub fn volatile_store<T>(dst: *mut T, val: T);
pub fn unaligned_volatile_store<T>(dst: *mut T, val: T);
/// Returns the square root of an `f32`
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f32::sqrt`](../../std/primitive.f32.html#method.sqrt)
pub fn sqrtf32(x: f32) -> f32;
/// Returns the square root of an `f64`
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f64::sqrt`](../../std/primitive.f64.html#method.sqrt)
pub fn sqrtf64(x: f64) -> f64;
/// Raises an `f32` to an integer power.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f32::powi`](../../std/primitive.f32.html#method.powi)
pub fn powif32(a: f32, x: i32) -> f32;
/// Raises an `f64` to an integer power.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f64::powi`](../../std/primitive.f64.html#method.powi)
pub fn powif64(a: f64, x: i32) -> f64;
/// Returns the sine of an `f32`.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f32::sin`](../../std/primitive.f32.html#method.sin)
pub fn sinf32(x: f32) -> f32;
/// Returns the sine of an `f64`.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f64::sin`](../../std/primitive.f64.html#method.sin)
pub fn sinf64(x: f64) -> f64;
/// Returns the cosine of an `f32`.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f32::cos`](../../std/primitive.f32.html#method.cos)
pub fn cosf32(x: f32) -> f32;
/// Returns the cosine of an `f64`.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f64::cos`](../../std/primitive.f64.html#method.cos)
pub fn cosf64(x: f64) -> f64;
/// Raises an `f32` to an `f32` power.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f32::powf`](../../std/primitive.f32.html#method.powf)
pub fn powf32(a: f32, x: f32) -> f32;
/// Raises an `f64` to an `f64` power.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f64::powf`](../../std/primitive.f64.html#method.powf)
pub fn powf64(a: f64, x: f64) -> f64;
/// Returns the exponential of an `f32`.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f32::exp`](../../std/primitive.f32.html#method.exp)
pub fn expf32(x: f32) -> f32;
/// Returns the exponential of an `f64`.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f64::exp`](../../std/primitive.f64.html#method.exp)
pub fn expf64(x: f64) -> f64;
/// Returns 2 raised to the power of an `f32`.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f32::exp2`](../../std/primitive.f32.html#method.exp2)
pub fn exp2f32(x: f32) -> f32;
/// Returns 2 raised to the power of an `f64`.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f64::exp2`](../../std/primitive.f64.html#method.exp2)
pub fn exp2f64(x: f64) -> f64;
/// Returns the natural logarithm of an `f32`.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f32::ln`](../../std/primitive.f32.html#method.ln)
pub fn logf32(x: f32) -> f32;
/// Returns the natural logarithm of an `f64`.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f64::ln`](../../std/primitive.f64.html#method.ln)
pub fn logf64(x: f64) -> f64;
/// Returns the base 10 logarithm of an `f32`.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f32::log10`](../../std/primitive.f32.html#method.log10)
pub fn log10f32(x: f32) -> f32;
/// Returns the base 10 logarithm of an `f64`.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f64::log10`](../../std/primitive.f64.html#method.log10)
pub fn log10f64(x: f64) -> f64;
/// Returns the base 2 logarithm of an `f32`.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f32::log2`](../../std/primitive.f32.html#method.log2)
pub fn log2f32(x: f32) -> f32;
/// Returns the base 2 logarithm of an `f64`.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f64::log2`](../../std/primitive.f64.html#method.log2)
pub fn log2f64(x: f64) -> f64;
/// Returns `a * b + c` for `f32` values.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f32::mul_add`](../../std/primitive.f32.html#method.mul_add)
pub fn fmaf32(a: f32, b: f32, c: f32) -> f32;
/// Returns `a * b + c` for `f64` values.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f64::mul_add`](../../std/primitive.f64.html#method.mul_add)
pub fn fmaf64(a: f64, b: f64, c: f64) -> f64;
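The stable `mul_add` methods expose these fused multiply-add intrinsics; a minimal sketch with exactly-representable operands:

```rust
fn main() {
    // `mul_add` computes a * b + c with a single rounding step
    // (fused multiply-add), wrapping fmaf32/fmaf64.
    assert_eq!(2.0f64.mul_add(3.0, 1.0), 7.0);
    assert_eq!(0.5f32.mul_add(4.0, -1.0), 1.0);
}
```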
/// Returns the absolute value of an `f32`.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f32::abs`](../../std/primitive.f32.html#method.abs)
pub fn fabsf32(x: f32) -> f32;
/// Returns the absolute value of an `f64`.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f64::abs`](../../std/primitive.f64.html#method.abs)
pub fn fabsf64(x: f64) -> f64;
/// Returns the minimum of two `f32` values.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f32::min`](../../std/primitive.f32.html#method.min)
pub fn minnumf32(x: f32, y: f32) -> f32;
/// Returns the minimum of two `f64` values.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f64::min`](../../std/primitive.f64.html#method.min)
pub fn minnumf64(x: f64, y: f64) -> f64;
/// Returns the maximum of two `f32` values.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f32::max`](../../std/primitive.f32.html#method.max)
pub fn maxnumf32(x: f32, y: f32) -> f32;
/// Returns the maximum of two `f64` values.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f64::max`](../../std/primitive.f64.html#method.max)
pub fn maxnumf64(x: f64, y: f64) -> f64;
/// Copies the sign from `y` to `x` for `f32` values.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f32::copysign`](../../std/primitive.f32.html#method.copysign)
pub fn copysignf32(x: f32, y: f32) -> f32;
/// Copies the sign from `y` to `x` for `f64` values.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f64::copysign`](../../std/primitive.f64.html#method.copysign)
pub fn copysignf64(x: f64, y: f64) -> f64;
/// Returns the largest integer less than or equal to an `f32`.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f32::floor`](../../std/primitive.f32.html#method.floor)
pub fn floorf32(x: f32) -> f32;
/// Returns the largest integer less than or equal to an `f64`.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f64::floor`](../../std/primitive.f64.html#method.floor)
pub fn floorf64(x: f64) -> f64;
/// Returns the smallest integer greater than or equal to an `f32`.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f32::ceil`](../../std/primitive.f32.html#method.ceil)
pub fn ceilf32(x: f32) -> f32;
/// Returns the smallest integer greater than or equal to an `f64`.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f64::ceil`](../../std/primitive.f64.html#method.ceil)
pub fn ceilf64(x: f64) -> f64;
/// Returns the integer part of an `f32`.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f32::trunc`](../../std/primitive.f32.html#method.trunc)
pub fn truncf32(x: f32) -> f32;
/// Returns the integer part of an `f64`.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f64::trunc`](../../std/primitive.f64.html#method.trunc)
pub fn truncf64(x: f64) -> f64;
/// Returns the nearest integer to an `f32`. May raise an inexact floating-point exception
pub fn nearbyintf64(x: f64) -> f64;
/// Returns the nearest integer to an `f32`. Rounds half-way cases away from zero.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f32::round`](../../std/primitive.f32.html#method.round)
pub fn roundf32(x: f32) -> f32;
/// Returns the nearest integer to an `f64`. Rounds half-way cases away from zero.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::f64::round`](../../std/primitive.f64.html#method.round)
pub fn roundf64(x: f64) -> f64;
/// Float addition that allows optimizations based on algebraic rules.
pub fn frem_fast<T>(a: T, b: T) -> T;
/// Convert with LLVM’s fptoui/fptosi, which may return undef for values out of range
- /// https://github.com/rust-lang/rust/issues/10184
+ /// (<https://github.com/rust-lang/rust/issues/10184>)
+ /// Stabilization of this intrinsic is tracked at
+ /// <https://github.com/rust-lang/rust/issues/67058>.
pub fn float_to_int_approx_unchecked<Float, Int>(value: Float) -> Int;
/// Returns the number of bits set in an integer type `T`
+ ///
+ /// The stabilized versions of this intrinsic are available on the integer
+ /// primitives via the `count_ones` method. For example,
+ /// [`std::u32::count_ones`](../../std/primitive.u32.html#method.count_ones)
#[rustc_const_stable(feature = "const_ctpop", since = "1.40.0")]
pub fn ctpop<T>(x: T) -> T;
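The stable `count_ones` method is the user-facing form of `ctpop`; a minimal sketch:

```rust
fn main() {
    // `count_ones` returns the population count (number of set bits).
    assert_eq!(0b1011_u32.count_ones(), 3);
    assert_eq!(u64::MAX.count_ones(), 64);
    assert_eq!(0u8.count_ones(), 0);
}
```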
/// Returns the number of leading unset bits (zeroes) in an integer type `T`.
///
+ /// The stabilized versions of this intrinsic are available on the integer
+ /// primitives via the `leading_zeros` method. For example,
+ /// [`std::u32::leading_zeros`](../../std/primitive.u32.html#method.leading_zeros)
+ ///
/// # Examples
///
/// ```
/// Returns the number of trailing unset bits (zeroes) in an integer type `T`.
///
+ /// The stabilized versions of this intrinsic are available on the integer
+ /// primitives via the `trailing_zeros` method. For example,
+ /// [`std::u32::trailing_zeros`](../../std/primitive.u32.html#method.trailing_zeros)
+ ///
/// # Examples
///
/// ```
pub fn cttz_nonzero<T>(x: T) -> T;
/// Reverses the bytes in an integer type `T`.
+ ///
+ /// The stabilized versions of this intrinsic are available on the integer
+ /// primitives via the `swap_bytes` method. For example,
+ /// [`std::u32::swap_bytes`](../../std/primitive.u32.html#method.swap_bytes)
#[rustc_const_stable(feature = "const_bswap", since = "1.40.0")]
pub fn bswap<T>(x: T) -> T;
/// Reverses the bits in an integer type `T`.
+ ///
+ /// The stabilized versions of this intrinsic are available on the integer
+ /// primitives via the `reverse_bits` method. For example,
+ /// [`std::u32::reverse_bits`](../../std/primitive.u32.html#method.reverse_bits)
#[rustc_const_stable(feature = "const_bitreverse", since = "1.40.0")]
pub fn bitreverse<T>(x: T) -> T;
/// Performs checked integer addition.
+ ///
/// The stabilized versions of this intrinsic are available on the integer
/// primitives via the `overflowing_add` method. For example,
/// [`std::u32::overflowing_add`](../../std/primitive.u32.html#method.overflowing_add)
pub fn add_with_overflow<T>(x: T, y: T) -> (T, bool);
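The stable `overflowing_add` returns the same `(value, overflowed)` pair the intrinsic produces; a minimal sketch:

```rust
fn main() {
    // No overflow: the flag is false and the sum is exact.
    assert_eq!(5u32.overflowing_add(7), (12, false));
    // Overflow: the result wraps with two's complement arithmetic
    // and the flag is true.
    assert_eq!(u32::MAX.overflowing_add(1), (0, true));
}
```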
/// Performs checked integer subtraction
+ ///
/// The stabilized versions of this intrinsic are available on the integer
/// primitives via the `overflowing_sub` method. For example,
/// [`std::u32::overflowing_sub`](../../std/primitive.u32.html#method.overflowing_sub)
pub fn sub_with_overflow<T>(x: T, y: T) -> (T, bool);
/// Performs checked integer multiplication
+ ///
/// The stabilized versions of this intrinsic are available on the integer
/// primitives via the `overflowing_mul` method. For example,
/// [`std::u32::overflowing_mul`](../../std/primitive.u32.html#method.overflowing_mul)
/// Performs an unchecked division, resulting in undefined behavior
/// where y = 0 or x = `T::min_value()` and y = -1
+ ///
+ /// The stabilized versions of this intrinsic are available on the integer
+ /// primitives via the `checked_div` method. For example,
+ /// [`std::u32::checked_div`](../../std/primitive.u32.html#method.checked_div)
#[rustc_const_unstable(feature = "const_int_unchecked_arith", issue = "none")]
pub fn unchecked_div<T>(x: T, y: T) -> T;
/// Returns the remainder of an unchecked division, resulting in
/// undefined behavior where y = 0 or x = `T::min_value()` and y = -1
+ ///
+ /// The stabilized versions of this intrinsic are available on the integer
+ /// primitives via the `checked_rem` method. For example,
+ /// [`std::u32::checked_rem`](../../std/primitive.u32.html#method.checked_rem)
#[rustc_const_unstable(feature = "const_int_unchecked_arith", issue = "none")]
pub fn unchecked_rem<T>(x: T, y: T) -> T;
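The checked counterparts on the primitives return `None` in exactly the cases that are undefined behavior for the unchecked intrinsics; a small demonstration:

```rust
fn main() {
    // Division by zero and MIN / -1 yield None instead of UB.
    assert_eq!(64u32.checked_div(2), Some(32));
    assert_eq!(1u32.checked_div(0), None);
    assert_eq!(i32::MIN.checked_div(-1), None);
    // Same story for the remainder.
    assert_eq!(5u32.checked_rem(2), Some(1));
    assert_eq!(5u32.checked_rem(0), None);
}
```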
/// Performs an unchecked left shift, resulting in undefined behavior when
/// y < 0 or y >= N, where N is the width of T in bits.
+ ///
+ /// The stabilized versions of this intrinsic are available on the integer
+ /// primitives via the `checked_shl` method. For example,
+ /// [`std::u32::checked_shl`](../../std/primitive.u32.html#method.checked_shl)
#[rustc_const_stable(feature = "const_int_unchecked", since = "1.40.0")]
pub fn unchecked_shl<T>(x: T, y: T) -> T;
/// Performs an unchecked right shift, resulting in undefined behavior when
/// y < 0 or y >= N, where N is the width of T in bits.
+ ///
+ /// The stabilized versions of this intrinsic are available on the integer
+ /// primitives via the `checked_shr` method. For example,
+ /// [`std::u32::checked_shr`](../../std/primitive.u32.html#method.checked_shr)
#[rustc_const_stable(feature = "const_int_unchecked", since = "1.40.0")]
pub fn unchecked_shr<T>(x: T, y: T) -> T;
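The stable `checked_shl`/`checked_shr` methods guard the shift amount the same way, returning `None` when it reaches the bit width:

```rust
fn main() {
    // In-range shifts succeed.
    assert_eq!(1u32.checked_shl(4), Some(16));
    assert_eq!(16u32.checked_shr(4), Some(1));
    // A shift of >= 32 bits on u32 is rejected rather than UB.
    assert_eq!(1u32.checked_shl(32), None);
    assert_eq!(16u32.checked_shr(33), None);
}
```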
pub fn unchecked_mul<T>(x: T, y: T) -> T;
/// Performs rotate left.
+ ///
/// The stabilized versions of this intrinsic are available on the integer
/// primitives via the `rotate_left` method. For example,
/// [`std::u32::rotate_left`](../../std/primitive.u32.html#method.rotate_left)
pub fn rotate_left<T>(x: T, y: T) -> T;
/// Performs rotate right.
+ ///
/// The stabilized versions of this intrinsic are available on the integer
/// primitives via the `rotate_right` method. For example,
/// [`std::u32::rotate_right`](../../std/primitive.u32.html#method.rotate_right)
pub fn rotate_right<T>(x: T, y: T) -> T;
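Unlike shifts, rotations carry the shifted-out bits around to the other end, which the stable methods make easy to see:

```rust
fn main() {
    // The high bit rotates around into the low position.
    assert_eq!(0b1000_0001u8.rotate_left(1), 0b0000_0011);
    // rotate_right is the inverse operation.
    assert_eq!(0b0000_0011u8.rotate_right(1), 0b1000_0001);
}
```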
/// Returns (a + b) mod 2<sup>N</sup>, where N is the width of T in bits.
+ ///
/// The stabilized versions of this intrinsic are available on the integer
/// primitives via the `wrapping_add` method. For example,
/// [`std::u32::wrapping_add`](../../std/primitive.u32.html#method.wrapping_add)
#[rustc_const_stable(feature = "const_int_wrapping", since = "1.40.0")]
pub fn wrapping_add<T>(a: T, b: T) -> T;
/// Returns (a - b) mod 2<sup>N</sup>, where N is the width of T in bits.
+ ///
/// The stabilized versions of this intrinsic are available on the integer
/// primitives via the `wrapping_sub` method. For example,
/// [`std::u32::wrapping_sub`](../../std/primitive.u32.html#method.wrapping_sub)
#[rustc_const_stable(feature = "const_int_wrapping", since = "1.40.0")]
pub fn wrapping_sub<T>(a: T, b: T) -> T;
/// Returns (a * b) mod 2<sup>N</sup>, where N is the width of T in bits.
+ ///
/// The stabilized versions of this intrinsic are available on the integer
/// primitives via the `wrapping_mul` method. For example,
/// [`std::u32::wrapping_mul`](../../std/primitive.u32.html#method.wrapping_mul)
#[rustc_const_stable(feature = "const_int_wrapping", since = "1.40.0")]
pub fn wrapping_mul<T>(a: T, b: T) -> T;
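The "mod 2^N" behavior described above is observable through the stable `wrapping_*` methods:

```rust
fn main() {
    // Plain modular arithmetic mod 2^8 for u8.
    assert_eq!(200u8.wrapping_add(100), 44); // 300 mod 256
    assert_eq!(0u8.wrapping_sub(1), 255);
    assert_eq!(100u8.wrapping_mul(3), 44);   // 300 mod 256
}
```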
/// Computes `a + b`, while saturating at numeric bounds.
+ ///
/// The stabilized versions of this intrinsic are available on the integer
/// primitives via the `saturating_add` method. For example,
/// [`std::u32::saturating_add`](../../std/primitive.u32.html#method.saturating_add)
#[rustc_const_stable(feature = "const_int_saturating", since = "1.40.0")]
pub fn saturating_add<T>(a: T, b: T) -> T;
/// Computes `a - b`, while saturating at numeric bounds.
+ ///
/// The stabilized versions of this intrinsic are available on the integer
/// primitives via the `saturating_sub` method. For example,
/// [`std::u32::saturating_sub`](../../std/primitive.u32.html#method.saturating_sub)
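Saturating arithmetic clamps at the numeric bounds instead of wrapping; the stable methods show the contract:

```rust
fn main() {
    // Results stick at the type's bounds rather than overflowing.
    assert_eq!(u8::MAX.saturating_add(1), u8::MAX);
    assert_eq!(0u8.saturating_sub(1), 0);
}
```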
/// Returns the value of the discriminant for the variant in 'v',
/// cast to a `u64`; if `T` has no discriminant, returns 0.
+ ///
+ /// The stabilized version of this intrinsic is
+ /// [`std::mem::discriminant`](../../std/mem/fn.discriminant.html)
+ #[rustc_const_unstable(feature = "const_discriminant", issue = "69821")]
pub fn discriminant_value<T>(v: &T) -> u64;
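The safe wrapper `std::mem::discriminant` compares which variant a value holds while ignoring the payload; a minimal sketch (the `Shape` enum is purely illustrative):

```rust
use std::mem;

enum Shape {
    Circle(f64),
    Square(f64),
}

fn main() {
    // Same variant, different data: discriminants compare equal.
    assert_eq!(
        mem::discriminant(&Shape::Circle(1.0)),
        mem::discriminant(&Shape::Circle(2.0)),
    );
    // Different variants: discriminants differ.
    assert_ne!(
        mem::discriminant(&Shape::Circle(1.0)),
        mem::discriminant(&Shape::Square(1.0)),
    );
}
```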
- /// Rust's "try catch" construct which invokes the function pointer `f` with
- /// the data pointer `data`.
+ /// Rust's "try catch" construct which invokes the function pointer `try_fn`
+ /// with the data pointer `data`.
///
- /// The third pointer is a target-specific data pointer which is filled in
- /// with the specifics of the exception that occurred. For examples on Unix
- /// platforms this is a `*mut *mut T` which is filled in by the compiler and
- /// on MSVC it's `*mut [usize; 2]`. For more information see the compiler's
+ /// The third argument is a function called if a panic occurs. This function
+ /// takes the data pointer and a pointer to the target-specific exception
+ /// object that was caught. For more information see the compiler's
/// source as well as std's catch implementation.
+ #[cfg(not(bootstrap))]
+ pub fn r#try(try_fn: fn(*mut u8), data: *mut u8, catch_fn: fn(*mut u8, *mut u8)) -> i32;
+ #[cfg(bootstrap)]
pub fn r#try(f: fn(*mut u8), data: *mut u8, local_ptr: *mut u8) -> i32;
/// Emits a `!nontemporal` store according to LLVM (see their docs).
pub fn ptr_offset_from<T>(ptr: *const T, base: *const T) -> isize;
/// Internal hook used by Miri to implement unwinding.
- /// Compiles to a NOP during non-Miri codegen.
+ /// ICEs when encountered during non-Miri codegen.
+ ///
+ /// The `payload` ptr here will be exactly the one `catch_fn` gets passed by `try`.
///
- /// Perma-unstable: do not use
- pub fn miri_start_panic(data: *mut (dyn crate::any::Any + crate::marker::Send)) -> ();
+ /// Perma-unstable: do not use.
+ pub fn miri_start_panic(payload: *mut u8) -> !;
}
// Some functions are defined here because they accidentally got made
use crate::fmt;
use crate::ops::Try;
-use super::super::{DoubleEndedIterator, FusedIterator, Iterator};
+use super::super::{DoubleEndedIterator, Fuse, FusedIterator, Iterator};
use super::Map;
/// An iterator that maps each element to an iterator, and yields the elements
/// this type.
#[derive(Clone, Debug)]
struct FlattenCompat<I, U> {
- iter: I,
+ iter: Fuse<I>,
frontiter: Option<U>,
backiter: Option<U>,
}
-impl<I, U> FlattenCompat<I, U> {
+impl<I, U> FlattenCompat<I, U>
+where
+ I: Iterator,
+{
/// Adapts an iterator by flattening it, for use in `flatten()` and `flat_map()`.
fn new(iter: I) -> FlattenCompat<I, U> {
- FlattenCompat { iter, frontiter: None, backiter: None }
+ FlattenCompat { iter: iter.fuse(), frontiter: None, backiter: None }
}
}
fn next(&mut self) -> Option<U::Item> {
loop {
if let Some(ref mut inner) = self.frontiter {
- if let elt @ Some(_) = inner.next() {
- return elt;
+ match inner.next() {
+ None => self.frontiter = None,
+ elt @ Some(_) => return elt,
}
}
match self.iter.next() {
fn next_back(&mut self) -> Option<U::Item> {
loop {
if let Some(ref mut inner) = self.backiter {
- if let elt @ Some(_) = inner.next_back() {
- return elt;
+ match inner.next_back() {
+ None => self.backiter = None,
+ elt @ Some(_) => return elt,
}
}
match self.iter.next_back() {
{
self.it.fold(init, copy_fold(f))
}
+
+ fn nth(&mut self, n: usize) -> Option<T> {
+ self.it.nth(n).copied()
+ }
+
+ fn last(self) -> Option<T> {
+ self.it.last().copied()
+ }
+
+ fn count(self) -> usize {
+ self.it.count()
+ }
}
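These forwarded `nth`/`last`/`count` overrides are observable through the stable `copied` adapter:

```rust
fn main() {
    let xs = [1, 2, 3, 4];
    // nth and last yield owned copies; count delegates to the inner iterator.
    assert_eq!(xs.iter().copied().nth(1), Some(2));
    assert_eq!(xs.iter().copied().last(), Some(4));
    assert_eq!(xs.iter().copied().count(), 4);
}
```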
#[stable(feature = "iter_copied", since = "1.36.0")]
{
#[inline]
fn next_back(&mut self) -> Option<Self::Item> {
- self.iter.next_back().or_else(|| self.peeked.take().and_then(|x| x))
+ match self.peeked.as_mut() {
+ Some(v @ Some(_)) => self.iter.next_back().or_else(|| v.take()),
+ Some(None) => None,
+ None => self.iter.next_back(),
+ }
}
#[inline]
let to_skip = self.n;
self.n = 0;
// nth(n) skips n+1
- if self.iter.nth(to_skip - 1).is_none() {
- return None;
- }
+ self.iter.nth(to_skip - 1)?;
}
self.iter.nth(n)
}
fn last(mut self) -> Option<I::Item> {
if self.n > 0 {
// nth(n) skips n+1
- if self.iter.nth(self.n - 1).is_none() {
- return None;
- }
+ self.iter.nth(self.n - 1)?;
}
self.iter.last()
}
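The `Skip` behavior these methods implement can be checked from safe code; note that `skip(n)` consumes `n` elements before anything is yielded:

```rust
fn main() {
    // skip(3) drops 0, 1, 2; nth(2) then skips 3, 4 and yields 5.
    assert_eq!((0..10).skip(3).nth(2), Some(5));
    // last walks the remainder of the iterator.
    assert_eq!((0..10).skip(3).last(), Some(9));
    // Skipping past the end exhausts the iterator immediately.
    assert_eq!((0..3).skip(5).last(), None);
}
```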
//! are elements, and once they've all been exhausted, will return `None` to
//! indicate that iteration is finished. Individual iterators may choose to
//! resume iteration, and so calling [`next`] again may or may not eventually
-//! start returning `Some(Item)` again at some point.
+//! start returning `Some(Item)` again at some point (for example, see [`TryIter`]).
//!
//! [`Iterator`]'s full definition includes a number of other methods as well,
//! but they are default methods, built on top of [`next`], and so you get
//! [`Iterator`]: trait.Iterator.html
//! [`next`]: trait.Iterator.html#tymethod.next
//! [`Option`]: ../../std/option/enum.Option.html
+//! [`TryIter`]: ../../std/sync/mpsc/struct.TryIter.html
//!
//! # The three forms of iteration
//!
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub trait ExactSizeIterator: Iterator {
- /// Returns the exact number of times the iterator will iterate.
+ /// Returns the exact length of the iterator.
///
+ /// The implementation ensures that the iterator will return exactly `len()`
+ /// more `Some(T)` values before returning `None`.
/// This method has a default implementation, so you usually should not
/// implement it directly. However, if you can provide a more efficient
/// implementation, you can do so. See the [trait-level] docs for an
label = "`{Self}` is not an iterator",
message = "`{Self}` is not an iterator"
)]
-#[doc(spotlight)]
#[must_use = "iterators are lazy and do nothing unless consumed"]
pub trait Iterator {
/// The type of the elements being iterated over.
#![feature(concat_idents)]
#![feature(const_ascii_ctype_on_intrinsics)]
#![feature(const_alloc_layout)]
+#![feature(const_discriminant)]
#![feature(const_if_match)]
#![feature(const_loop)]
#![feature(const_checked_int_methods)]
#![feature(custom_inner_attributes)]
#![feature(decl_macro)]
#![feature(doc_cfg)]
-#![feature(doc_spotlight)]
#![feature(extern_types)]
#![feature(fundamental)]
#![feature(intrinsics)]
#![feature(rtm_target_feature)]
#![feature(f16c_target_feature)]
#![feature(hexagon_target_feature)]
-#![feature(const_int_conversion)]
#![feature(const_transmute)]
#![feature(structural_match)]
#![feature(abi_unadjusted)]
#![feature(associated_type_bounds)]
#![feature(const_type_id)]
#![feature(const_caller_location)]
-#![feature(assoc_int_consts)]
-#![cfg_attr(not(bootstrap), feature(no_niche))] // rust-lang/rust#68303
+#![feature(option_zip)]
+#![feature(no_niche)] // rust-lang/rust#68303
#[prelude_import]
#[allow(unused)]
#[macro_use]
mod int_macros;
-#[path = "num/uint_macros.rs"]
-#[macro_use]
-mod uint_macros;
-
#[path = "num/i128.rs"]
pub mod i128;
#[path = "num/i16.rs"]
+#[cfg(bootstrap)]
#[doc(include = "panic.md")]
#[macro_export]
#[allow_internal_unstable(core_panic, track_caller)]
);
}
+#[cfg(not(bootstrap))]
+#[doc(include = "panic.md")]
+#[macro_export]
+#[allow_internal_unstable(core_panic, track_caller)]
+#[stable(feature = "core", since = "1.6.0")]
+macro_rules! panic {
+ () => (
+ $crate::panic!("explicit panic")
+ );
+ ($msg:expr) => (
+ $crate::panicking::panic($msg)
+ );
+ ($msg:expr,) => (
+ $crate::panic!($msg)
+ );
+ ($fmt:expr, $($arg:tt)+) => (
+ $crate::panicking::panic_fmt($crate::format_args!($fmt, $($arg)+))
+ );
+}
+
/// Asserts that two expressions are equal to each other (using [`PartialEq`]).
///
/// On panic, this macro will print the values of the expressions with their
/* compiler built-in */
}
+ /// Keeps the item it's applied to if the passed path is accessible, and removes it otherwise.
+ #[cfg(not(bootstrap))]
+ #[unstable(
+ feature = "cfg_accessible",
+ issue = "64797",
+ reason = "`cfg_accessible` is not fully implemented"
+ )]
+ #[rustc_builtin_macro]
+ pub macro cfg_accessible($item:item) {
+ /* compiler built-in */
+ }
+
/// Unstable implementation detail of the `rustc` compiler, do not use.
#[rustc_builtin_macro]
#[stable(feature = "rust1", since = "1.0.0")]
ch19-04-advanced-types.html#dynamically-sized-types-and-the-sized-trait>"
)]
#[fundamental] // for Default, for example, which requires that `[T]: !Default` be evaluatable
+#[cfg_attr(not(bootstrap), rustc_specialization_trait)]
pub trait Sized {
// Empty.
}
/// Instead of using [`ManuallyDrop::drop`] to manually drop the value,
/// you can use this method to take the value and use it however desired.
///
- /// Whenever possible, it is preferrable to use [`into_inner`][`ManuallyDrop::into_inner`]
+ /// Whenever possible, it is preferable to use [`into_inner`][`ManuallyDrop::into_inner`]
/// instead, which prevents duplicating the content of the `ManuallyDrop<T>`.
///
/// # Safety
#[inline(always)]
#[rustc_diagnostic_item = "assume_init"]
pub unsafe fn assume_init(self) -> T {
+ #[cfg(bootstrap)]
intrinsics::panic_if_uninhabited::<T>();
+ #[cfg(not(bootstrap))]
+ intrinsics::assert_inhabited::<T>();
ManuallyDrop::into_inner(self.value)
}
#[unstable(feature = "maybe_uninit_extra", issue = "63567")]
#[inline(always)]
pub unsafe fn read(&self) -> T {
+ #[cfg(bootstrap)]
intrinsics::panic_if_uninhabited::<T>();
+ #[cfg(not(bootstrap))]
+ intrinsics::assert_inhabited::<T>();
self.as_ptr().read()
}
#[unstable(feature = "maybe_uninit_ref", issue = "63568")]
#[inline(always)]
pub unsafe fn get_ref(&self) -> &T {
+ #[cfg(bootstrap)]
intrinsics::panic_if_uninhabited::<T>();
+ #[cfg(not(bootstrap))]
+ intrinsics::assert_inhabited::<T>();
&*self.value
}
#[unstable(feature = "maybe_uninit_ref", issue = "63568")]
#[inline(always)]
pub unsafe fn get_mut(&mut self) -> &mut T {
+ #[cfg(bootstrap)]
intrinsics::panic_if_uninhabited::<T>();
+ #[cfg(not(bootstrap))]
+ intrinsics::assert_inhabited::<T>();
&mut *self.value
}
///
/// # Examples
///
-/// Leak an I/O object, never closing the file:
+/// The canonical safe use of `mem::forget` is to circumvent a value's destructor
+/// implemented by the `Drop` trait. For example, this will leak a `File`, i.e. reclaim
+/// the space taken by the variable but never close the underlying system resource:
///
/// ```no_run
/// use std::mem;
/// mem::forget(file);
/// ```
///
-/// The practical use cases for `forget` are rather specialized and mainly come
-/// up in unsafe or FFI code. However, [`ManuallyDrop`] is usually preferred
-/// for such cases, e.g.:
+/// This is useful when the ownership of the underlying resource was previously
+/// transferred to code outside of Rust, for example by transmitting the raw
+/// file descriptor to C code.
+///
+/// # Relationship with `ManuallyDrop`
+///
+/// While `mem::forget` can also be used to transfer *memory* ownership, doing so is error-prone.
+/// [`ManuallyDrop`] should be used instead. Consider, for example, this code:
+///
+/// ```
+/// use std::mem;
+///
+/// let mut v = vec![65, 122];
+/// // Build a `String` using the contents of `v`
+/// let s = unsafe { String::from_raw_parts(v.as_mut_ptr(), v.len(), v.capacity()) };
+/// // leak `v` because its memory is now managed by `s`
+/// mem::forget(v); // ERROR - v is invalid and must not be passed to a function
+/// assert_eq!(s, "Az");
+/// // `s` is implicitly dropped and its memory deallocated.
+/// ```
+///
+/// There are two issues with the above example:
+///
+/// * If more code were added between the construction of `String` and the invocation of
+/// `mem::forget()`, a panic within it would cause a double free because the same memory
+/// is handled by both `v` and `s`.
+/// * After calling `v.as_mut_ptr()` and transmitting the ownership of the data to `s`,
+/// the `v` value is invalid. Even when a value is just moved to `mem::forget` (which won't
+/// inspect it), some types have strict requirements on their values that
+/// make them invalid when dangling or no longer owned. Using invalid values in any
+/// way, including passing them to or returning them from functions, constitutes
+/// undefined behavior and may break the assumptions made by the compiler.
+///
+/// Switching to `ManuallyDrop` avoids both issues:
///
/// ```
/// use std::mem::ManuallyDrop;
/// // does not get dropped!
/// let mut v = ManuallyDrop::new(v);
/// // Now disassemble `v`. These operations cannot panic, so there cannot be a leak.
-/// let ptr = v.as_mut_ptr();
-/// let cap = v.capacity();
+/// let (ptr, len, cap) = (v.as_mut_ptr(), v.len(), v.capacity());
/// // Finally, build a `String`.
-/// let s = unsafe { String::from_raw_parts(ptr, 2, cap) };
+/// let s = unsafe { String::from_raw_parts(ptr, len, cap) };
/// assert_eq!(s, "Az");
/// // `s` is implicitly dropped and its memory deallocated.
/// ```
///
-/// Using `ManuallyDrop` here has two advantages:
+/// `ManuallyDrop` robustly prevents double-free because we disable `v`'s destructor
+/// before doing anything else. `mem::forget()` doesn't allow this because it consumes its
+/// argument, forcing us to call it only after extracting anything we need from `v`. Even
+/// if a panic were introduced between construction of `ManuallyDrop` and building the
+/// string (which cannot happen in the code as shown), it would result in a leak and not a
+/// double free. In other words, `ManuallyDrop` errs on the side of leaking instead of
+/// erring on the side of (double-)dropping.
///
-/// * We do not "touch" `v` after disassembling it. For some types, operations
-/// such as passing ownership (to a funcion like `mem::forget`) requires them to actually
-/// be fully owned right now; that is a promise we do not want to make here as we are
-/// in the process of transferring ownership to the new `String` we are building.
-/// * In case of an unexpected panic, `ManuallyDrop` is not dropped, but if the panic
-/// occurs before `mem::forget` was called we might end up dropping invalid data,
-/// or double-dropping. In other words, `ManuallyDrop` errs on the side of leaking
-/// instead of erring on the side of dropping.
+/// Also, `ManuallyDrop` prevents us from having to "touch" `v` after transferring
+/// ownership of the data to `s`: the final step of interacting with `v` to dispose
+/// of it without running its destructor is entirely avoided.
///
/// [drop]: fn.drop.html
/// [uninit]: fn.uninitialized.html
///
/// let _x: &i32 = unsafe { mem::zeroed() }; // Undefined behavior!
/// ```
-#[inline]
+#[inline(always)]
#[stable(feature = "rust1", since = "1.0.0")]
#[allow(deprecated_in_future)]
#[allow(deprecated)]
#[rustc_diagnostic_item = "mem_zeroed"]
pub unsafe fn zeroed<T>() -> T {
+ #[cfg(not(bootstrap))]
+ intrinsics::assert_zero_valid::<T>();
+ #[cfg(bootstrap)]
intrinsics::panic_if_uninhabited::<T>();
- intrinsics::init()
+ MaybeUninit::zeroed().assume_init()
}
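For types where all-zero bytes are a valid value, the observable behavior of `mem::zeroed` is unchanged by this rewrite; a small sketch (zeroing a plain integer, which is always valid):

```rust
use std::mem;

fn main() {
    // All-zero bits are a valid u32, so this is sound.
    let x: u32 = unsafe { mem::zeroed() };
    assert_eq!(x, 0);
    // The MaybeUninit-based path used in the new implementation is equivalent.
    let y: u32 = unsafe { mem::MaybeUninit::<u32>::zeroed().assume_init() };
    assert_eq!(y, 0);
}
```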
/// Bypasses Rust's normal memory-initialization checks by pretending to
/// [uninit]: union.MaybeUninit.html#method.uninit
/// [assume_init]: union.MaybeUninit.html#method.assume_init
/// [inv]: union.MaybeUninit.html#initialization-invariant
-#[inline]
+#[inline(always)]
#[rustc_deprecated(since = "1.39.0", reason = "use `mem::MaybeUninit` instead")]
#[stable(feature = "rust1", since = "1.0.0")]
#[allow(deprecated_in_future)]
#[allow(deprecated)]
#[rustc_diagnostic_item = "mem_uninitialized"]
pub unsafe fn uninitialized<T>() -> T {
+ #[cfg(not(bootstrap))]
+ intrinsics::assert_uninit_valid::<T>();
+ #[cfg(bootstrap)]
intrinsics::panic_if_uninhabited::<T>();
- intrinsics::uninit()
+ MaybeUninit::uninit().assume_init()
}
/// Swaps the values at two mutable locations, without deinitializing either one.
/// assert_ne!(mem::discriminant(&Foo::B(3)), mem::discriminant(&Foo::C(3)));
/// ```
#[stable(feature = "discriminant_value", since = "1.21.0")]
-pub fn discriminant<T>(v: &T) -> Discriminant<T> {
+#[rustc_const_unstable(feature = "const_discriminant", issue = "69821")]
+pub const fn discriminant<T>(v: &T) -> Discriminant<T> {
Discriminant(intrinsics::discriminant_value(v), PhantomData)
}
//! *[See also the `f32` primitive type](../../std/primitive.f32.html).*
//!
//! Mathematically significant numbers are provided in the `consts` sub-module.
+//!
+//! Although using these constants won’t cause compilation warnings,
+//! new code should use the associated constants directly on the primitive type.
#![stable(feature = "rust1", since = "1.0.0")]
use crate::num::FpCategory;
/// The radix or base of the internal representation of `f32`.
+/// Use [`f32::RADIX`](../../std/primitive.f32.html#associatedconstant.RADIX) instead.
#[stable(feature = "rust1", since = "1.0.0")]
pub const RADIX: u32 = f32::RADIX;
/// Number of significant digits in base 2.
+/// Use [`f32::MANTISSA_DIGITS`](../../std/primitive.f32.html#associatedconstant.MANTISSA_DIGITS) instead.
#[stable(feature = "rust1", since = "1.0.0")]
pub const MANTISSA_DIGITS: u32 = f32::MANTISSA_DIGITS;
/// Approximate number of significant digits in base 10.
+/// Use [`f32::DIGITS`](../../std/primitive.f32.html#associatedconstant.DIGITS) instead.
#[stable(feature = "rust1", since = "1.0.0")]
pub const DIGITS: u32 = f32::DIGITS;
/// [Machine epsilon] value for `f32`.
+/// Use [`f32::EPSILON`](../../std/primitive.f32.html#associatedconstant.EPSILON) instead.
///
/// This is the difference between `1.0` and the next larger representable number.
///
pub const EPSILON: f32 = f32::EPSILON;
/// Smallest finite `f32` value.
+/// Use [`f32::MIN`](../../std/primitive.f32.html#associatedconstant.MIN) instead.
#[stable(feature = "rust1", since = "1.0.0")]
pub const MIN: f32 = f32::MIN;
/// Smallest positive normal `f32` value.
+/// Use [`f32::MIN_POSITIVE`](../../std/primitive.f32.html#associatedconstant.MIN_POSITIVE) instead.
#[stable(feature = "rust1", since = "1.0.0")]
pub const MIN_POSITIVE: f32 = f32::MIN_POSITIVE;
/// Largest finite `f32` value.
+/// Use [`f32::MAX`](../../std/primitive.f32.html#associatedconstant.MAX) instead.
#[stable(feature = "rust1", since = "1.0.0")]
pub const MAX: f32 = f32::MAX;
/// One greater than the minimum possible normal power of 2 exponent.
+/// Use [`f32::MIN_EXP`](../../std/primitive.f32.html#associatedconstant.MIN_EXP) instead.
#[stable(feature = "rust1", since = "1.0.0")]
pub const MIN_EXP: i32 = f32::MIN_EXP;
/// Maximum possible power of 2 exponent.
+/// Use [`f32::MAX_EXP`](../../std/primitive.f32.html#associatedconstant.MAX_EXP) instead.
#[stable(feature = "rust1", since = "1.0.0")]
pub const MAX_EXP: i32 = f32::MAX_EXP;
/// Minimum possible normal power of 10 exponent.
+/// Use [`f32::MIN_10_EXP`](../../std/primitive.f32.html#associatedconstant.MIN_10_EXP) instead.
#[stable(feature = "rust1", since = "1.0.0")]
pub const MIN_10_EXP: i32 = f32::MIN_10_EXP;
/// Maximum possible power of 10 exponent.
+/// Use [`f32::MAX_10_EXP`](../../std/primitive.f32.html#associatedconstant.MAX_10_EXP) instead.
#[stable(feature = "rust1", since = "1.0.0")]
pub const MAX_10_EXP: i32 = f32::MAX_10_EXP;
/// Not a Number (NaN).
+/// Use [`f32::NAN`](../../std/primitive.f32.html#associatedconstant.NAN) instead.
#[stable(feature = "rust1", since = "1.0.0")]
pub const NAN: f32 = f32::NAN;
/// Infinity (∞).
+/// Use [`f32::INFINITY`](../../std/primitive.f32.html#associatedconstant.INFINITY) instead.
#[stable(feature = "rust1", since = "1.0.0")]
pub const INFINITY: f32 = f32::INFINITY;
/// Negative infinity (−∞).
+/// Use [`f32::NEG_INFINITY`](../../std/primitive.f32.html#associatedconstant.NEG_INFINITY) instead.
#[stable(feature = "rust1", since = "1.0.0")]
pub const NEG_INFINITY: f32 = f32::NEG_INFINITY;
#[cfg(not(test))]
impl f32 {
/// The radix or base of the internal representation of `f32`.
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const RADIX: u32 = 2;
/// Number of significant digits in base 2.
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const MANTISSA_DIGITS: u32 = 24;
/// Approximate number of significant digits in base 10.
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const DIGITS: u32 = 6;
/// [Machine epsilon] value for `f32`.
/// This is the difference between `1.0` and the next larger representable number.
///
/// [Machine epsilon]: https://en.wikipedia.org/wiki/Machine_epsilon
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const EPSILON: f32 = 1.19209290e-07_f32;
/// Smallest finite `f32` value.
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const MIN: f32 = -3.40282347e+38_f32;
/// Smallest positive normal `f32` value.
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const MIN_POSITIVE: f32 = 1.17549435e-38_f32;
/// Largest finite `f32` value.
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const MAX: f32 = 3.40282347e+38_f32;
/// One greater than the minimum possible normal power of 2 exponent.
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const MIN_EXP: i32 = -125;
/// Maximum possible power of 2 exponent.
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const MAX_EXP: i32 = 128;
/// Minimum possible normal power of 10 exponent.
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const MIN_10_EXP: i32 = -37;
/// Maximum possible power of 10 exponent.
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const MAX_10_EXP: i32 = 38;
/// Not a Number (NaN).
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const NAN: f32 = 0.0_f32 / 0.0_f32;
/// Infinity (∞).
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const INFINITY: f32 = 1.0_f32 / 0.0_f32;
/// Negative infinity (-∞).
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const NEG_INFINITY: f32 = -1.0_f32 / 0.0_f32;
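Once stabilized, these associated constants are usable directly on the primitive type, with no `use std::f32;` needed:

```rust
fn main() {
    assert_eq!(f32::RADIX, 2);
    assert_eq!(f32::MANTISSA_DIGITS, 24);
    // NaN never compares equal to itself.
    assert!(f32::NAN != f32::NAN);
    assert!(f32::MIN_POSITIVE > 0.0);
}
```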
/// Returns `true` if this value is `NaN`.
///
/// ```
- /// use std::f32;
- ///
/// let nan = f32::NAN;
/// let f = 7.0_f32;
///
/// `false` otherwise.
///
/// ```
- /// use std::f32;
- ///
/// let f = 7.0f32;
/// let inf = f32::INFINITY;
/// let neg_inf = f32::NEG_INFINITY;
/// Returns `true` if this number is neither infinite nor `NaN`.
///
/// ```
- /// use std::f32;
- ///
/// let f = 7.0f32;
/// let inf = f32::INFINITY;
/// let neg_inf = f32::NEG_INFINITY;
/// [subnormal], or `NaN`.
///
/// ```
- /// use std::f32;
- ///
/// let min = f32::MIN_POSITIVE; // 1.17549435e-38f32
/// let max = f32::MAX;
/// let lower_than_min = 1.0e-40_f32;
///
/// ```
/// use std::num::FpCategory;
- /// use std::f32;
///
/// let num = 12.4_f32;
/// let inf = f32::INFINITY;
/// Takes the reciprocal (inverse) of a number, `1/x`.
///
/// ```
- /// use std::f32;
- ///
/// let x = 2.0_f32;
/// let abs_difference = (x.recip() - (1.0 / x)).abs();
///
/// Converts radians to degrees.
///
/// ```
- /// use std::f32::{self, consts};
+ /// use std::f32::consts;
///
/// let angle = consts::PI;
///
/// Converts degrees to radians.
///
/// ```
- /// use std::f32::{self, consts};
+ /// use std::f32::consts;
///
/// let angle = 180.0f32;
///
//! *[See also the `f64` primitive type](../../std/primitive.f64.html).*
//!
//! Mathematically significant numbers are provided in the `consts` sub-module.
+//!
+//! Although using these constants won’t cause compilation warnings,
+//! new code should use the associated constants directly on the primitive type.
#![stable(feature = "rust1", since = "1.0.0")]
use crate::num::FpCategory;
/// The radix or base of the internal representation of `f64`.
+/// Use [`f64::RADIX`](../../std/primitive.f64.html#associatedconstant.RADIX) instead.
#[stable(feature = "rust1", since = "1.0.0")]
pub const RADIX: u32 = f64::RADIX;
/// Number of significant digits in base 2.
+/// Use [`f64::MANTISSA_DIGITS`](../../std/primitive.f64.html#associatedconstant.MANTISSA_DIGITS) instead.
#[stable(feature = "rust1", since = "1.0.0")]
pub const MANTISSA_DIGITS: u32 = f64::MANTISSA_DIGITS;
/// Approximate number of significant digits in base 10.
+/// Use [`f64::DIGITS`](../../std/primitive.f64.html#associatedconstant.DIGITS) instead.
#[stable(feature = "rust1", since = "1.0.0")]
pub const DIGITS: u32 = f64::DIGITS;
/// [Machine epsilon] value for `f64`.
+/// Use [`f64::EPSILON`](../../std/primitive.f64.html#associatedconstant.EPSILON) instead.
///
/// This is the difference between `1.0` and the next larger representable number.
///
pub const EPSILON: f64 = f64::EPSILON;
/// Smallest finite `f64` value.
+/// Use [`f64::MIN`](../../std/primitive.f64.html#associatedconstant.MIN) instead.
#[stable(feature = "rust1", since = "1.0.0")]
pub const MIN: f64 = f64::MIN;
/// Smallest positive normal `f64` value.
+/// Use [`f64::MIN_POSITIVE`](../../std/primitive.f64.html#associatedconstant.MIN_POSITIVE) instead.
#[stable(feature = "rust1", since = "1.0.0")]
pub const MIN_POSITIVE: f64 = f64::MIN_POSITIVE;
/// Largest finite `f64` value.
+/// Use [`f64::MAX`](../../std/primitive.f64.html#associatedconstant.MAX) instead.
#[stable(feature = "rust1", since = "1.0.0")]
pub const MAX: f64 = f64::MAX;
/// One greater than the minimum possible normal power of 2 exponent.
+/// Use [`f64::MIN_EXP`](../../std/primitive.f64.html#associatedconstant.MIN_EXP) instead.
#[stable(feature = "rust1", since = "1.0.0")]
pub const MIN_EXP: i32 = f64::MIN_EXP;
/// Maximum possible power of 2 exponent.
+/// Use [`f64::MAX_EXP`](../../std/primitive.f64.html#associatedconstant.MAX_EXP) instead.
#[stable(feature = "rust1", since = "1.0.0")]
pub const MAX_EXP: i32 = f64::MAX_EXP;
/// Minimum possible normal power of 10 exponent.
+/// Use [`f64::MIN_10_EXP`](../../std/primitive.f64.html#associatedconstant.MIN_10_EXP) instead.
#[stable(feature = "rust1", since = "1.0.0")]
pub const MIN_10_EXP: i32 = f64::MIN_10_EXP;
/// Maximum possible power of 10 exponent.
+/// Use [`f64::MAX_10_EXP`](../../std/primitive.f64.html#associatedconstant.MAX_10_EXP) instead.
#[stable(feature = "rust1", since = "1.0.0")]
pub const MAX_10_EXP: i32 = f64::MAX_10_EXP;
/// Not a Number (NaN).
+/// Use [`f64::NAN`](../../std/primitive.f64.html#associatedconstant.NAN) instead.
#[stable(feature = "rust1", since = "1.0.0")]
pub const NAN: f64 = f64::NAN;
/// Infinity (∞).
+/// Use [`f64::INFINITY`](../../std/primitive.f64.html#associatedconstant.INFINITY) instead.
#[stable(feature = "rust1", since = "1.0.0")]
pub const INFINITY: f64 = f64::INFINITY;
/// Negative infinity (−∞).
+/// Use [`f64::NEG_INFINITY`](../../std/primitive.f64.html#associatedconstant.NEG_INFINITY) instead.
#[stable(feature = "rust1", since = "1.0.0")]
pub const NEG_INFINITY: f64 = f64::NEG_INFINITY;
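The module-level float constants above now simply forward to the associated constants on the primitive type. A minimal sketch of the intended migration, using `f64` as the example type (the old spellings keep compiling, possibly with a deprecation warning on newer toolchains):

```rust
fn main() {
    // Old style: module constant. New style: associated constant on the
    // primitive. They are defined to be identical values.
    assert_eq!(std::f64::MAX, f64::MAX);
    assert!(f64::NAN.is_nan());
    // EPSILON is the gap between 1.0 and the next representable f64,
    // so adding it to 1.0 produces a strictly larger value...
    assert!(1.0_f64 + f64::EPSILON > 1.0);
    // ...while adding half of it rounds back to 1.0 (round-to-even).
    assert_eq!(1.0_f64 + f64::EPSILON / 2.0, 1.0);
}
```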
#[cfg(not(test))]
impl f64 {
/// The radix or base of the internal representation of `f64`.
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const RADIX: u32 = 2;
/// Number of significant digits in base 2.
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const MANTISSA_DIGITS: u32 = 53;
/// Approximate number of significant digits in base 10.
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const DIGITS: u32 = 15;
/// [Machine epsilon] value for `f64`.
/// This is the difference between `1.0` and the next larger representable number.
///
/// [Machine epsilon]: https://en.wikipedia.org/wiki/Machine_epsilon
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const EPSILON: f64 = 2.2204460492503131e-16_f64;
/// Smallest finite `f64` value.
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const MIN: f64 = -1.7976931348623157e+308_f64;
/// Smallest positive normal `f64` value.
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const MIN_POSITIVE: f64 = 2.2250738585072014e-308_f64;
/// Largest finite `f64` value.
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const MAX: f64 = 1.7976931348623157e+308_f64;
/// One greater than the minimum possible normal power of 2 exponent.
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const MIN_EXP: i32 = -1021;
/// Maximum possible power of 2 exponent.
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const MAX_EXP: i32 = 1024;
/// Minimum possible normal power of 10 exponent.
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const MIN_10_EXP: i32 = -307;
/// Maximum possible power of 10 exponent.
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const MAX_10_EXP: i32 = 308;
/// Not a Number (NaN).
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const NAN: f64 = 0.0_f64 / 0.0_f64;
/// Infinity (∞).
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const INFINITY: f64 = 1.0_f64 / 0.0_f64;
/// Negative infinity (-∞).
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const NEG_INFINITY: f64 = -1.0_f64 / 0.0_f64;
/// Returns `true` if this value is `NaN`.
///
/// ```
- /// use std::f64;
- ///
/// let nan = f64::NAN;
/// let f = 7.0_f64;
///
/// `false` otherwise.
///
/// ```
- /// use std::f64;
- ///
/// let f = 7.0f64;
/// let inf = f64::INFINITY;
/// let neg_inf = f64::NEG_INFINITY;
/// Returns `true` if this number is neither infinite nor `NaN`.
///
/// ```
- /// use std::f64;
- ///
/// let f = 7.0f64;
/// let inf: f64 = f64::INFINITY;
/// let neg_inf: f64 = f64::NEG_INFINITY;
/// [subnormal], or `NaN`.
///
/// ```
- /// use std::f64;
- ///
/// let min = f64::MIN_POSITIVE; // 2.2250738585072014e-308f64
/// let max = f64::MAX;
/// let lower_than_min = 1.0e-308_f64;
///
/// ```
/// use std::num::FpCategory;
- /// use std::f64;
///
/// let num = 12.4_f64;
/// let inf = f64::INFINITY;
//! The 128-bit signed integer type.
//!
//! *[See also the `i128` primitive type](../../std/primitive.i128.html).*
+//!
+//! Although using these constants won’t cause compilation warnings,
+//! new code should use the associated constants directly on the primitive type.
#![stable(feature = "i128", since = "1.26.0")]
//! The 16-bit signed integer type.
//!
//! *[See also the `i16` primitive type](../../std/primitive.i16.html).*
+//!
+//! Although using these constants won’t cause compilation warnings,
+//! new code should use the associated constants directly on the primitive type.
#![stable(feature = "rust1", since = "1.0.0")]
//! The 32-bit signed integer type.
//!
//! *[See also the `i32` primitive type](../../std/primitive.i32.html).*
+//!
+//! Although using these constants won’t cause compilation warnings,
+//! new code should use the associated constants directly on the primitive type.
#![stable(feature = "rust1", since = "1.0.0")]
//! The 64-bit signed integer type.
//!
//! *[See also the `i64` primitive type](../../std/primitive.i64.html).*
+//!
+//! Although using these constants won’t cause compilation warnings,
+//! new code should use the associated constants directly on the primitive type.
#![stable(feature = "rust1", since = "1.0.0")]
//! The 8-bit signed integer type.
//!
//! *[See also the `i8` primitive type](../../std/primitive.i8.html).*
+//!
+//! Although using these constants won’t cause compilation warnings,
+//! new code should use the associated constants directly on the primitive type.
#![stable(feature = "rust1", since = "1.0.0")]
#![doc(hidden)]
+macro_rules! doc_comment {
+ ($x:expr, $($tt:tt)*) => {
+ #[doc = $x]
+ $($tt)*
+ };
+}
+
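The `doc_comment!` helper exists because a `concat!`-built string cannot (on the toolchains this patch targets) be written directly inside `#[doc = ...]`; routing it through an `expr` metavariable sidesteps that restriction. A standalone sketch of the same pattern, with an illustrative constant that is not part of libcore:

```rust
// Same shape as the helper in the patch: attach a computed doc string
// to whatever item follows.
macro_rules! doc_comment {
    ($x:expr, $($tt:tt)*) => {
        #[doc = $x]
        $($tt)*
    };
}

doc_comment! {
    concat!("An illustrative constant for `", stringify!(i32), "`."),
    pub const ANSWER: i32 = 42;
}

fn main() {
    // The attribute only affects rustdoc output; the item behaves normally.
    assert_eq!(ANSWER, 42);
}
```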
macro_rules! int_module {
($T:ident) => (int_module!($T, #[stable(feature = "rust1", since = "1.0.0")]););
($T:ident, #[$attr:meta]) => (
- /// The smallest value that can be represented by this integer type.
- #[$attr]
- pub const MIN: $T = $T::min_value();
- /// The largest value that can be represented by this integer type.
- #[$attr]
- pub const MAX: $T = $T::max_value();
+ doc_comment! {
+ concat!("The smallest value that can be represented by this integer type.
+Use [`", stringify!($T), "::MIN", "`](../../std/primitive.", stringify!($T), ".html#associatedconstant.MIN) instead."),
+ #[$attr]
+ pub const MIN: $T = $T::min_value();
+ }
+
+ doc_comment! {
+ concat!("The largest value that can be represented by this integer type.
+Use [`", stringify!($T), "::MAX", "`](../../std/primitive.", stringify!($T), ".html#associatedconstant.MAX) instead."),
+ #[$attr]
+ pub const MAX: $T = $T::max_value();
+ }
)
}
//! The pointer-sized signed integer type.
//!
//! *[See also the `isize` primitive type](../../std/primitive.isize.html).*
+//!
+//! Although using these constants won’t cause compilation warnings,
+//! new code should use the associated constants directly on the primitive type.
#![stable(feature = "rust1", since = "1.0.0")]
Basic usage:
```
-#![feature(assoc_int_consts)]
", $Feature, "assert_eq!(", stringify!($SelfT), "::MIN, ", stringify!($Min), ");",
$EndFeature, "
```"),
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const MIN: Self = !0 ^ ((!0 as $UnsignedT) >> 1) as Self;
}
Basic usage:
```
-#![feature(assoc_int_consts)]
", $Feature, "assert_eq!(", stringify!($SelfT), "::MAX, ", stringify!($Max), ");",
$EndFeature, "
```"),
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const MAX: Self = !Self::MIN;
}
- doc_comment! {
- "Returns the smallest value that can be represented by this integer type.",
- #[stable(feature = "rust1", since = "1.0.0")]
- #[inline(always)]
- #[rustc_promotable]
- #[rustc_const_stable(feature = "const_min_value", since = "1.32.0")]
- pub const fn min_value() -> Self {
- Self::MIN
- }
- }
-
- doc_comment! {
- "Returns the largest value that can be represented by this integer type.",
- #[stable(feature = "rust1", since = "1.0.0")]
- #[inline(always)]
- #[rustc_promotable]
- #[rustc_const_stable(feature = "const_max_value", since = "1.32.0")]
- pub const fn max_value() -> Self {
- Self::MAX
- }
- }
-
doc_comment! {
concat!("Converts a string slice in a given base to an integer.
Basic usage:
```
-", $Feature, "assert_eq!(", stringify!($SelfT), "::max_value().count_zeros(), 1);", $EndFeature, "
+", $Feature, "assert_eq!(", stringify!($SelfT), "::MAX.count_zeros(), 1);", $EndFeature, "
```"),
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_const_stable(feature = "const_int_methods", since = "1.32.0")]
```
", $Feature, "assert_eq!((", stringify!($SelfT),
-"::max_value() - 2).checked_add(1), Some(", stringify!($SelfT), "::max_value() - 1));
-assert_eq!((", stringify!($SelfT), "::max_value() - 2).checked_add(3), None);",
+"::MAX - 2).checked_add(1), Some(", stringify!($SelfT), "::MAX - 1));
+assert_eq!((", stringify!($SelfT), "::MAX - 2).checked_add(3), None);",
$EndFeature, "
```"),
#[stable(feature = "rust1", since = "1.0.0")]
```
", $Feature, "assert_eq!((", stringify!($SelfT),
-"::min_value() + 2).checked_sub(1), Some(", stringify!($SelfT), "::min_value() + 1));
-assert_eq!((", stringify!($SelfT), "::min_value() + 2).checked_sub(3), None);",
+"::MIN + 2).checked_sub(1), Some(", stringify!($SelfT), "::MIN + 1));
+assert_eq!((", stringify!($SelfT), "::MIN + 2).checked_sub(3), None);",
$EndFeature, "
```"),
#[stable(feature = "rust1", since = "1.0.0")]
```
", $Feature, "assert_eq!(", stringify!($SelfT),
-"::max_value().checked_mul(1), Some(", stringify!($SelfT), "::max_value()));
-assert_eq!(", stringify!($SelfT), "::max_value().checked_mul(2), None);",
+"::MAX.checked_mul(1), Some(", stringify!($SelfT), "::MAX));
+assert_eq!(", stringify!($SelfT), "::MAX.checked_mul(2), None);",
$EndFeature, "
```"),
#[stable(feature = "rust1", since = "1.0.0")]
```
", $Feature, "assert_eq!((", stringify!($SelfT),
-"::min_value() + 1).checked_div(-1), Some(", stringify!($Max), "));
-assert_eq!(", stringify!($SelfT), "::min_value().checked_div(-1), None);
+"::MIN + 1).checked_div(-1), Some(", stringify!($Max), "));
+assert_eq!(", stringify!($SelfT), "::MIN.checked_div(-1), None);
assert_eq!((1", stringify!($SelfT), ").checked_div(0), None);",
$EndFeature, "
```"),
```
assert_eq!((", stringify!($SelfT),
-"::min_value() + 1).checked_div_euclid(-1), Some(", stringify!($Max), "));
-assert_eq!(", stringify!($SelfT), "::min_value().checked_div_euclid(-1), None);
+"::MIN + 1).checked_div_euclid(-1), Some(", stringify!($Max), "));
+assert_eq!(", stringify!($SelfT), "::MIN.checked_div_euclid(-1), None);
assert_eq!((1", stringify!($SelfT), ").checked_div_euclid(0), None);
```"),
#[stable(feature = "euclidean_division", since = "1.38.0")]
Basic usage:
```
-", $Feature, "use std::", stringify!($SelfT), ";
-
+", $Feature, "
assert_eq!(5", stringify!($SelfT), ".checked_rem(2), Some(1));
assert_eq!(5", stringify!($SelfT), ".checked_rem(0), None);
assert_eq!(", stringify!($SelfT), "::MIN.checked_rem(-1), None);",
Basic usage:
```
-use std::", stringify!($SelfT), ";
-
assert_eq!(5", stringify!($SelfT), ".checked_rem_euclid(2), Some(1));
assert_eq!(5", stringify!($SelfT), ".checked_rem_euclid(0), None);
assert_eq!(", stringify!($SelfT), "::MIN.checked_rem_euclid(-1), None);
Basic usage:
```
-", $Feature, "use std::", stringify!($SelfT), ";
-
+", $Feature, "
assert_eq!(5", stringify!($SelfT), ".checked_neg(), Some(-5));
assert_eq!(", stringify!($SelfT), "::MIN.checked_neg(), None);",
$EndFeature, "
Basic usage:
```
-", $Feature, "use std::", stringify!($SelfT), ";
-
+", $Feature, "
assert_eq!((-5", stringify!($SelfT), ").checked_abs(), Some(5));
assert_eq!(", stringify!($SelfT), "::MIN.checked_abs(), None);",
$EndFeature, "
```
", $Feature, "assert_eq!(8", stringify!($SelfT), ".checked_pow(2), Some(64));
-assert_eq!(", stringify!($SelfT), "::max_value().checked_pow(2), None);",
+assert_eq!(", stringify!($SelfT), "::MAX.checked_pow(2), None);",
$EndFeature, "
```"),
```
", $Feature, "assert_eq!(100", stringify!($SelfT), ".saturating_add(1), 101);
-assert_eq!(", stringify!($SelfT), "::max_value().saturating_add(100), ", stringify!($SelfT),
-"::max_value());
-assert_eq!(", stringify!($SelfT), "::min_value().saturating_add(-1), ", stringify!($SelfT),
-"::min_value());",
+assert_eq!(", stringify!($SelfT), "::MAX.saturating_add(100), ", stringify!($SelfT),
+"::MAX);
+assert_eq!(", stringify!($SelfT), "::MIN.saturating_add(-1), ", stringify!($SelfT),
+"::MIN);",
$EndFeature, "
```"),
```
", $Feature, "assert_eq!(100", stringify!($SelfT), ".saturating_sub(127), -27);
-assert_eq!(", stringify!($SelfT), "::min_value().saturating_sub(100), ", stringify!($SelfT),
-"::min_value());
-assert_eq!(", stringify!($SelfT), "::max_value().saturating_sub(-1), ", stringify!($SelfT),
-"::max_value());",
+assert_eq!(", stringify!($SelfT), "::MIN.saturating_sub(100), ", stringify!($SelfT),
+"::MIN);
+assert_eq!(", stringify!($SelfT), "::MAX.saturating_sub(-1), ", stringify!($SelfT),
+"::MAX);",
$EndFeature, "
```"),
#[stable(feature = "rust1", since = "1.0.0")]
", $Feature, "#![feature(saturating_neg)]
assert_eq!(100", stringify!($SelfT), ".saturating_neg(), -100);
assert_eq!((-100", stringify!($SelfT), ").saturating_neg(), 100);
-assert_eq!(", stringify!($SelfT), "::min_value().saturating_neg(), ", stringify!($SelfT),
-"::max_value());
-assert_eq!(", stringify!($SelfT), "::max_value().saturating_neg(), ", stringify!($SelfT),
-"::min_value() + 1);",
+assert_eq!(", stringify!($SelfT), "::MIN.saturating_neg(), ", stringify!($SelfT),
+"::MAX);
+assert_eq!(", stringify!($SelfT), "::MAX.saturating_neg(), ", stringify!($SelfT),
+"::MIN + 1);",
$EndFeature, "
```"),
", $Feature, "#![feature(saturating_neg)]
assert_eq!(100", stringify!($SelfT), ".saturating_abs(), 100);
assert_eq!((-100", stringify!($SelfT), ").saturating_abs(), 100);
-assert_eq!(", stringify!($SelfT), "::min_value().saturating_abs(), ", stringify!($SelfT),
-"::max_value());
-assert_eq!((", stringify!($SelfT), "::min_value() + 1).saturating_abs(), ", stringify!($SelfT),
-"::max_value());",
+assert_eq!(", stringify!($SelfT), "::MIN.saturating_abs(), ", stringify!($SelfT),
+"::MAX);
+assert_eq!((", stringify!($SelfT), "::MIN + 1).saturating_abs(), ", stringify!($SelfT),
+"::MAX);",
$EndFeature, "
```"),
Basic usage:
```
-", $Feature, "use std::", stringify!($SelfT), ";
-
+", $Feature, "
assert_eq!(10", stringify!($SelfT), ".saturating_mul(12), 120);
assert_eq!(", stringify!($SelfT), "::MAX.saturating_mul(10), ", stringify!($SelfT), "::MAX);
assert_eq!(", stringify!($SelfT), "::MIN.saturating_mul(10), ", stringify!($SelfT), "::MIN);",
Basic usage:
```
-", $Feature, "use std::", stringify!($SelfT), ";
-
+", $Feature, "
assert_eq!((-4", stringify!($SelfT), ").saturating_pow(3), -64);
assert_eq!(", stringify!($SelfT), "::MIN.saturating_pow(2), ", stringify!($SelfT), "::MAX);
assert_eq!(", stringify!($SelfT), "::MIN.saturating_pow(3), ", stringify!($SelfT), "::MIN);",
```
", $Feature, "assert_eq!(100", stringify!($SelfT), ".wrapping_add(27), 127);
-assert_eq!(", stringify!($SelfT), "::max_value().wrapping_add(2), ", stringify!($SelfT),
-"::min_value() + 1);",
+assert_eq!(", stringify!($SelfT), "::MAX.wrapping_add(2), ", stringify!($SelfT),
+"::MIN + 1);",
$EndFeature, "
```"),
#[stable(feature = "rust1", since = "1.0.0")]
```
", $Feature, "assert_eq!(0", stringify!($SelfT), ".wrapping_sub(127), -127);
-assert_eq!((-2", stringify!($SelfT), ").wrapping_sub(", stringify!($SelfT), "::max_value()), ",
-stringify!($SelfT), "::max_value());",
+assert_eq!((-2", stringify!($SelfT), ").wrapping_sub(", stringify!($SelfT), "::MAX), ",
+stringify!($SelfT), "::MAX);",
$EndFeature, "
```"),
#[stable(feature = "rust1", since = "1.0.0")]
```
", $Feature, "assert_eq!(100", stringify!($SelfT), ".wrapping_neg(), -100);
-assert_eq!(", stringify!($SelfT), "::min_value().wrapping_neg(), ", stringify!($SelfT),
-"::min_value());",
+assert_eq!(", stringify!($SelfT), "::MIN.wrapping_neg(), ", stringify!($SelfT),
+"::MIN);",
$EndFeature, "
```"),
#[stable(feature = "num_wrapping", since = "1.2.0")]
```
", $Feature, "assert_eq!(100", stringify!($SelfT), ".wrapping_abs(), 100);
assert_eq!((-100", stringify!($SelfT), ").wrapping_abs(), 100);
-assert_eq!(", stringify!($SelfT), "::min_value().wrapping_abs(), ", stringify!($SelfT),
-"::min_value());
+assert_eq!(", stringify!($SelfT), "::MIN.wrapping_abs(), ", stringify!($SelfT),
+"::MIN);
assert_eq!((-128i8).wrapping_abs() as u8, 128);",
$EndFeature, "
```"),
Basic usage:
```
-", $Feature, "use std::", stringify!($SelfT), ";
-
+", $Feature, "
assert_eq!(5", stringify!($SelfT), ".overflowing_add(2), (7, false));
assert_eq!(", stringify!($SelfT), "::MAX.overflowing_add(1), (", stringify!($SelfT),
"::MIN, true));", $EndFeature, "
Basic usage:
```
-", $Feature, "use std::", stringify!($SelfT), ";
-
+", $Feature, "
assert_eq!(5", stringify!($SelfT), ".overflowing_sub(2), (3, false));
assert_eq!(", stringify!($SelfT), "::MIN.overflowing_sub(1), (", stringify!($SelfT),
"::MAX, true));", $EndFeature, "
Basic usage:
```
-", $Feature, "use std::", stringify!($SelfT), ";
-
+", $Feature, "
assert_eq!(5", stringify!($SelfT), ".overflowing_div(2), (2, false));
assert_eq!(", stringify!($SelfT), "::MIN.overflowing_div(-1), (", stringify!($SelfT),
"::MIN, true));",
Basic usage:
```
-use std::", stringify!($SelfT), ";
-
assert_eq!(5", stringify!($SelfT), ".overflowing_div_euclid(2), (2, false));
assert_eq!(", stringify!($SelfT), "::MIN.overflowing_div_euclid(-1), (", stringify!($SelfT),
"::MIN, true));
Basic usage:
```
-", $Feature, "use std::", stringify!($SelfT), ";
-
+", $Feature, "
assert_eq!(5", stringify!($SelfT), ".overflowing_rem(2), (1, false));
assert_eq!(", stringify!($SelfT), "::MIN.overflowing_rem(-1), (0, true));",
$EndFeature, "
Basic usage:
```
-use std::", stringify!($SelfT), ";
-
assert_eq!(5", stringify!($SelfT), ".overflowing_rem_euclid(2), (1, false));
assert_eq!(", stringify!($SelfT), "::MIN.overflowing_rem_euclid(-1), (0, true));
```"),
Basic usage:
```
-", $Feature, "use std::", stringify!($SelfT), ";
-
assert_eq!(2", stringify!($SelfT), ".overflowing_neg(), (-2, false));
assert_eq!(", stringify!($SelfT), "::MIN.overflowing_neg(), (", stringify!($SelfT),
"::MIN, true));", $EndFeature, "
```
", $Feature, "assert_eq!(10", stringify!($SelfT), ".overflowing_abs(), (10, false));
assert_eq!((-10", stringify!($SelfT), ").overflowing_abs(), (10, false));
-assert_eq!((", stringify!($SelfT), "::min_value()).overflowing_abs(), (", stringify!($SelfT),
-"::min_value(), true));",
+assert_eq!((", stringify!($SelfT), "::MIN).overflowing_abs(), (", stringify!($SelfT),
+"::MIN, true));",
$EndFeature, "
```"),
#[stable(feature = "no_panic_abs", since = "1.13.0")]
# Overflow behavior
-The absolute value of `", stringify!($SelfT), "::min_value()` cannot be represented as an
+The absolute value of `", stringify!($SelfT), "::MIN` cannot be represented as an
`", stringify!($SelfT), "`, and attempting to calculate it will cause an overflow. This means that
code in debug mode will trigger a panic on this case and optimized code will return `",
-stringify!($SelfT), "::min_value()` without a panic.
+stringify!($SelfT), "::MIN` without a panic.
# Examples
assert_eq!(bytes, ", $be_bytes, ");
```"),
#[stable(feature = "int_to_from_bytes", since = "1.32.0")]
- #[rustc_const_unstable(feature = "const_int_conversion", issue = "53718")]
+ #[rustc_const_stable(feature = "const_int_conversion", since = "1.44.0")]
#[inline]
pub const fn to_be_bytes(self) -> [u8; mem::size_of::<Self>()] {
self.to_be().to_ne_bytes()
assert_eq!(bytes, ", $le_bytes, ");
```"),
#[stable(feature = "int_to_from_bytes", since = "1.32.0")]
- #[rustc_const_unstable(feature = "const_int_conversion", issue = "53718")]
+ #[rustc_const_stable(feature = "const_int_conversion", since = "1.44.0")]
#[inline]
pub const fn to_le_bytes(self) -> [u8; mem::size_of::<Self>()] {
self.to_le().to_ne_bytes()
);
```"),
#[stable(feature = "int_to_from_bytes", since = "1.32.0")]
- #[rustc_const_unstable(feature = "const_int_conversion", issue = "53718")]
+ #[rustc_const_stable(feature = "const_int_conversion", since = "1.44.0")]
+ // SAFETY: const sound because integers are plain old datatypes so we can always
+ // transmute them to arrays of bytes
+ #[allow_internal_unstable(const_fn_union)]
#[inline]
pub const fn to_ne_bytes(self) -> [u8; mem::size_of::<Self>()] {
+ #[repr(C)]
+ union Bytes {
+ val: $SelfT,
+ bytes: [u8; mem::size_of::<$SelfT>()],
+ }
// SAFETY: integers are plain old datatypes so we can always transmute them to
// arrays of bytes
- unsafe { mem::transmute(self) }
+ unsafe { Bytes { val: self }.bytes }
}
}
}
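The union-based rewrite above makes the byte conversions callable in `const` contexts while preserving their behavior. A quick round-trip sketch (assuming a toolchain where `const_int_conversion` has landed, per the `since = "1.44.0"` attributes):

```rust
fn main() {
    // to_ne_bytes/from_ne_bytes are inverses, and both are const fns,
    // so the round trip can be evaluated at compile time.
    const N: u32 = 0x1234_5678;
    const BYTES: [u8; 4] = N.to_ne_bytes();
    const BACK: u32 = u32::from_ne_bytes(BYTES);
    assert_eq!(BACK, N);
    // The endian-specific forms produce the documented byte orders
    // regardless of the host's native endianness.
    assert_eq!(N.to_be_bytes(), [0x12, 0x34, 0x56, 0x78]);
    assert_eq!(N.to_le_bytes(), [0x78, 0x56, 0x34, 0x12]);
}
```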
```"),
#[stable(feature = "int_to_from_bytes", since = "1.32.0")]
- #[rustc_const_unstable(feature = "const_int_conversion", issue = "53718")]
+ #[rustc_const_stable(feature = "const_int_conversion", since = "1.44.0")]
#[inline]
pub const fn from_be_bytes(bytes: [u8; mem::size_of::<Self>()]) -> Self {
Self::from_be(Self::from_ne_bytes(bytes))
}
```"),
#[stable(feature = "int_to_from_bytes", since = "1.32.0")]
- #[rustc_const_unstable(feature = "const_int_conversion", issue = "53718")]
+ #[rustc_const_stable(feature = "const_int_conversion", since = "1.44.0")]
#[inline]
pub const fn from_le_bytes(bytes: [u8; mem::size_of::<Self>()]) -> Self {
Self::from_le(Self::from_ne_bytes(bytes))
}
```"),
#[stable(feature = "int_to_from_bytes", since = "1.32.0")]
- #[rustc_const_unstable(feature = "const_int_conversion", issue = "53718")]
+ #[rustc_const_stable(feature = "const_int_conversion", since = "1.44.0")]
+ // SAFETY: const sound because integers are plain old datatypes so we can always
+ // transmute to them
+ #[allow_internal_unstable(const_fn_union)]
#[inline]
pub const fn from_ne_bytes(bytes: [u8; mem::size_of::<Self>()]) -> Self {
+ #[repr(C)]
+ union Bytes {
+ val: $SelfT,
+ bytes: [u8; mem::size_of::<$SelfT>()],
+ }
// SAFETY: integers are plain old datatypes so we can always transmute to them
- unsafe { mem::transmute(bytes) }
+ unsafe { Bytes { bytes }.val }
+ }
+ }
+
+ doc_comment! {
+ concat!("**This method is soft-deprecated.**
+
+Although using it won’t cause a compilation warning,
+new code should use [`", stringify!($SelfT), "::MIN", "`](#associatedconstant.MIN) instead.
+
+Returns the smallest value that can be represented by this integer type."),
+ #[stable(feature = "rust1", since = "1.0.0")]
+ #[inline(always)]
+ #[rustc_promotable]
+ #[rustc_const_stable(feature = "const_min_value", since = "1.32.0")]
+ pub const fn min_value() -> Self {
+ Self::MIN
+ }
+ }
+
+ doc_comment! {
+ concat!("**This method is soft-deprecated.**
+
+Although using it won’t cause a compilation warning,
+new code should use [`", stringify!($SelfT), "::MAX", "`](#associatedconstant.MAX) instead.
+
+Returns the largest value that can be represented by this integer type."),
+ #[stable(feature = "rust1", since = "1.0.0")]
+ #[inline(always)]
+ #[rustc_promotable]
+ #[rustc_const_stable(feature = "const_max_value", since = "1.32.0")]
+ pub const fn max_value() -> Self {
+ Self::MAX
}
}
}
Basic usage:
```
-#![feature(assoc_int_consts)]
", $Feature, "assert_eq!(", stringify!($SelfT), "::MIN, 0);", $EndFeature, "
```"),
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const MIN: Self = 0;
}
Basic usage:
```
-#![feature(assoc_int_consts)]
", $Feature, "assert_eq!(", stringify!($SelfT), "::MAX, ", stringify!($MaxV), ");",
$EndFeature, "
```"),
- #[unstable(feature = "assoc_int_consts", reason = "recently added", issue = "68490")]
+ #[stable(feature = "assoc_int_consts", since = "1.43.0")]
pub const MAX: Self = !0;
}
- doc_comment! {
- "Returns the smallest value that can be represented by this integer type.",
- #[stable(feature = "rust1", since = "1.0.0")]
- #[rustc_promotable]
- #[inline(always)]
- #[rustc_const_stable(feature = "const_max_value", since = "1.32.0")]
- pub const fn min_value() -> Self { Self::MIN }
- }
-
- doc_comment! {
- "Returns the largest value that can be represented by this integer type.",
- #[stable(feature = "rust1", since = "1.0.0")]
- #[rustc_promotable]
- #[inline(always)]
- #[rustc_const_stable(feature = "const_max_value", since = "1.32.0")]
- pub const fn max_value() -> Self { Self::MAX }
- }
-
doc_comment! {
concat!("Converts a string slice in a given base to an integer.
Basic usage:
```
-", $Feature, "assert_eq!(", stringify!($SelfT), "::max_value().count_zeros(), 0);", $EndFeature, "
+", $Feature, "assert_eq!(", stringify!($SelfT), "::MAX.count_zeros(), 0);", $EndFeature, "
```"),
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_const_stable(feature = "const_math", since = "1.32.0")]
Basic usage:
```
-", $Feature, "let n = ", stringify!($SelfT), "::max_value() >> 2;
+", $Feature, "let n = ", stringify!($SelfT), "::MAX >> 2;
assert_eq!(n.leading_zeros(), 2);", $EndFeature, "
```"),
```
", $Feature, "#![feature(leading_trailing_ones)]
-let n = !(", stringify!($SelfT), "::max_value() >> 2);
+let n = !(", stringify!($SelfT), "::MAX >> 2);
assert_eq!(n.leading_ones(), 2);", $EndFeature, "
```"),
Basic usage:
```
-", $Feature, "assert_eq!((", stringify!($SelfT), "::max_value() - 2).checked_add(1), ",
-"Some(", stringify!($SelfT), "::max_value() - 1));
-assert_eq!((", stringify!($SelfT), "::max_value() - 2).checked_add(3), None);", $EndFeature, "
+", $Feature, "assert_eq!((", stringify!($SelfT), "::MAX - 2).checked_add(1), ",
+"Some(", stringify!($SelfT), "::MAX - 1));
+assert_eq!((", stringify!($SelfT), "::MAX - 2).checked_add(3), None);", $EndFeature, "
```"),
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_const_unstable(feature = "const_checked_int_methods", issue = "53718")]
```
", $Feature, "assert_eq!(5", stringify!($SelfT), ".checked_mul(1), Some(5));
-assert_eq!(", stringify!($SelfT), "::max_value().checked_mul(2), None);", $EndFeature, "
+assert_eq!(", stringify!($SelfT), "::MAX.checked_mul(2), None);", $EndFeature, "
```"),
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_const_unstable(feature = "const_checked_int_methods", issue = "53718")]
```
", $Feature, "assert_eq!(2", stringify!($SelfT), ".checked_pow(5), Some(32));
-assert_eq!(", stringify!($SelfT), "::max_value().checked_pow(2), None);", $EndFeature, "
+assert_eq!(", stringify!($SelfT), "::MAX.checked_pow(2), None);", $EndFeature, "
```"),
#[stable(feature = "no_panic_pow", since = "1.34.0")]
#[rustc_const_unstable(feature = "const_int_pow", issue = "53718")]
Basic usage:
```
-", $Feature, "use std::", stringify!($SelfT), ";
-
+", $Feature, "
assert_eq!(2", stringify!($SelfT), ".saturating_mul(10), 20);
assert_eq!((", stringify!($SelfT), "::MAX).saturating_mul(10), ", stringify!($SelfT),
"::MAX);", $EndFeature, "
Basic usage:
```
-", $Feature, "use std::", stringify!($SelfT), ";
-
+", $Feature, "
assert_eq!(4", stringify!($SelfT), ".saturating_pow(3), 64);
assert_eq!(", stringify!($SelfT), "::MAX.saturating_pow(2), ", stringify!($SelfT), "::MAX);",
$EndFeature, "
```
", $Feature, "assert_eq!(200", stringify!($SelfT), ".wrapping_add(55), 255);
-assert_eq!(200", stringify!($SelfT), ".wrapping_add(", stringify!($SelfT), "::max_value()), 199);",
+assert_eq!(200", stringify!($SelfT), ".wrapping_add(", stringify!($SelfT), "::MAX), 199);",
$EndFeature, "
```"),
#[stable(feature = "rust1", since = "1.0.0")]
```
", $Feature, "assert_eq!(100", stringify!($SelfT), ".wrapping_sub(100), 0);
-assert_eq!(100", stringify!($SelfT), ".wrapping_sub(", stringify!($SelfT), "::max_value()), 101);",
+assert_eq!(100", stringify!($SelfT), ".wrapping_sub(", stringify!($SelfT), "::MAX), 101);",
$EndFeature, "
```"),
#[stable(feature = "rust1", since = "1.0.0")]
Basic usage
```
-", $Feature, "use std::", stringify!($SelfT), ";
-
+", $Feature, "
assert_eq!(5", stringify!($SelfT), ".overflowing_add(2), (7, false));
assert_eq!(", stringify!($SelfT), "::MAX.overflowing_add(1), (0, true));", $EndFeature, "
```"),
Basic usage
```
-", $Feature, "use std::", stringify!($SelfT), ";
-
+", $Feature, "
assert_eq!(5", stringify!($SelfT), ".overflowing_sub(2), (3, false));
assert_eq!(0", stringify!($SelfT), ".overflowing_sub(1), (", stringify!($SelfT), "::MAX, true));",
$EndFeature, "
", $Feature, "assert_eq!(2", stringify!($SelfT),
".checked_next_power_of_two(), Some(2));
assert_eq!(3", stringify!($SelfT), ".checked_next_power_of_two(), Some(4));
-assert_eq!(", stringify!($SelfT), "::max_value().checked_next_power_of_two(), None);",
+assert_eq!(", stringify!($SelfT), "::MAX.checked_next_power_of_two(), None);",
$EndFeature, "
```"),
#[inline]
", $Feature, "
assert_eq!(2", stringify!($SelfT), ".wrapping_next_power_of_two(), 2);
assert_eq!(3", stringify!($SelfT), ".wrapping_next_power_of_two(), 4);
-assert_eq!(", stringify!($SelfT), "::max_value().wrapping_next_power_of_two(), 0);",
+assert_eq!(", stringify!($SelfT), "::MAX.wrapping_next_power_of_two(), 0);",
$EndFeature, "
```"),
#[unstable(feature = "wrapping_next_power_of_two", issue = "32463",
assert_eq!(bytes, ", $be_bytes, ");
```"),
#[stable(feature = "int_to_from_bytes", since = "1.32.0")]
- #[rustc_const_unstable(feature = "const_int_conversion", issue = "53718")]
+ #[rustc_const_stable(feature = "const_int_conversion", since = "1.44.0")]
#[inline]
pub const fn to_be_bytes(self) -> [u8; mem::size_of::<Self>()] {
self.to_be().to_ne_bytes()
assert_eq!(bytes, ", $le_bytes, ");
```"),
#[stable(feature = "int_to_from_bytes", since = "1.32.0")]
- #[rustc_const_unstable(feature = "const_int_conversion", issue = "53718")]
+ #[rustc_const_stable(feature = "const_int_conversion", since = "1.44.0")]
#[inline]
pub const fn to_le_bytes(self) -> [u8; mem::size_of::<Self>()] {
self.to_le().to_ne_bytes()
);
```"),
#[stable(feature = "int_to_from_bytes", since = "1.32.0")]
- #[rustc_const_unstable(feature = "const_int_conversion", issue = "53718")]
+ #[rustc_const_stable(feature = "const_int_conversion", since = "1.44.0")]
+ // SAFETY: const sound because integers are plain old datatypes so we can always
+ // transmute them to arrays of bytes
+ #[allow_internal_unstable(const_fn_union)]
#[inline]
pub const fn to_ne_bytes(self) -> [u8; mem::size_of::<Self>()] {
+ #[repr(C)]
+ union Bytes {
+ val: $SelfT,
+ bytes: [u8; mem::size_of::<$SelfT>()],
+ }
// SAFETY: integers are plain old datatypes so we can always transmute them to
// arrays of bytes
- unsafe { mem::transmute(self) }
+ unsafe { Bytes { val: self }.bytes }
}
}
}
```"),
#[stable(feature = "int_to_from_bytes", since = "1.32.0")]
- #[rustc_const_unstable(feature = "const_int_conversion", issue = "53718")]
+ #[rustc_const_stable(feature = "const_int_conversion", since = "1.44.0")]
#[inline]
pub const fn from_be_bytes(bytes: [u8; mem::size_of::<Self>()]) -> Self {
Self::from_be(Self::from_ne_bytes(bytes))
}
```"),
#[stable(feature = "int_to_from_bytes", since = "1.32.0")]
- #[rustc_const_unstable(feature = "const_int_conversion", issue = "53718")]
+ #[rustc_const_stable(feature = "const_int_conversion", since = "1.44.0")]
#[inline]
pub const fn from_le_bytes(bytes: [u8; mem::size_of::<Self>()]) -> Self {
Self::from_le(Self::from_ne_bytes(bytes))
}
```"),
#[stable(feature = "int_to_from_bytes", since = "1.32.0")]
- #[rustc_const_unstable(feature = "const_int_conversion", issue = "53718")]
+ #[rustc_const_stable(feature = "const_int_conversion", since = "1.44.0")]
+ // SAFETY: const sound because integers are plain old datatypes so we can always
+ // transmute to them
+ #[allow_internal_unstable(const_fn_union)]
#[inline]
pub const fn from_ne_bytes(bytes: [u8; mem::size_of::<Self>()]) -> Self {
+ #[repr(C)]
+ union Bytes {
+ val: $SelfT,
+ bytes: [u8; mem::size_of::<$SelfT>()],
+ }
// SAFETY: integers are plain old datatypes so we can always transmute to them
- unsafe { mem::transmute(bytes) }
+ unsafe { Bytes { bytes }.val }
}
}
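The union-based byte reinterpretation in the hunks above can be sketched outside the standard library (and without the const-eval machinery) on stable Rust. The `Bytes` union here is illustrative, fixed to `u32` rather than the macro's `$SelfT`:

```rust
// Reinterpret a u32 as its native-endian bytes via a #[repr(C)] union,
// mirroring the approach used by to_ne_bytes/from_ne_bytes above.
#[repr(C)]
union Bytes {
    val: u32,
    bytes: [u8; 4],
}

fn to_ne_bytes(val: u32) -> [u8; 4] {
    // SAFETY: u32 is plain old data, so every bit pattern is a valid [u8; 4].
    unsafe { Bytes { val }.bytes }
}

fn from_ne_bytes(bytes: [u8; 4]) -> u32 {
    // SAFETY: every [u8; 4] bit pattern is a valid u32.
    unsafe { Bytes { bytes }.val }
}

fn main() {
    let n = 0x1234_5678u32;
    assert_eq!(from_ne_bytes(to_ne_bytes(n)), n);
    // Agrees with the standard library's own conversion.
    assert_eq!(to_ne_bytes(n), n.to_ne_bytes());
}
```

The round trip is lossless regardless of the host's endianness, since both directions use the native byte order.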
+
+ doc_comment! {
+ concat!("**This method is soft-deprecated.**
+
+Although using it won’t cause a compilation warning,
+new code should use [`", stringify!($SelfT), "::MIN", "`](#associatedconstant.MIN) instead.
+
+Returns the smallest value that can be represented by this integer type."),
+ #[stable(feature = "rust1", since = "1.0.0")]
+ #[rustc_promotable]
+ #[inline(always)]
+ #[rustc_const_stable(feature = "const_max_value", since = "1.32.0")]
+ pub const fn min_value() -> Self { Self::MIN }
+ }
+
+ doc_comment! {
+ concat!("**This method is soft-deprecated.**
+
+Although using it won’t cause a compilation warning,
+new code should use [`", stringify!($SelfT), "::MAX", "`](#associatedconstant.MAX) instead.
+
+Returns the largest value that can be represented by this integer type."),
+ #[stable(feature = "rust1", since = "1.0.0")]
+ #[rustc_promotable]
+ #[inline(always)]
+ #[rustc_const_stable(feature = "const_max_value", since = "1.32.0")]
+ pub const fn max_value() -> Self { Self::MAX }
+ }
}
}
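As the soft-deprecation notes above say, both spellings stay available and agree exactly; a quick check:

```rust
fn main() {
    // The associated constants and the legacy methods return the same values.
    assert_eq!(i32::MIN, i32::min_value());
    assert_eq!(i32::MAX, i32::max_value());
    assert_eq!(u8::MAX, 255);
}
```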
///
/// ```
/// use std::num::FpCategory;
-/// use std::f32;
///
/// let num = 12.4_f32;
/// let inf = f32::INFINITY;
//! The 128-bit unsigned integer type.
//!
//! *[See also the `u128` primitive type](../../std/primitive.u128.html).*
+//!
+//! Although using these constants won’t cause compilation warnings,
+//! new code should use the associated constants directly on the primitive type.
#![stable(feature = "i128", since = "1.26.0")]
-uint_module! { u128, #[stable(feature = "i128", since="1.26.0")] }
+int_module! { u128, #[stable(feature = "i128", since="1.26.0")] }
//! The 16-bit unsigned integer type.
//!
//! *[See also the `u16` primitive type](../../std/primitive.u16.html).*
+//!
+//! Although using these constants won’t cause compilation warnings,
+//! new code should use the associated constants directly on the primitive type.
#![stable(feature = "rust1", since = "1.0.0")]
-uint_module! { u16 }
+int_module! { u16 }
//! The 32-bit unsigned integer type.
//!
//! *[See also the `u32` primitive type](../../std/primitive.u32.html).*
+//!
+//! Although using these constants won’t cause compilation warnings,
+//! new code should use the associated constants directly on the primitive type.
#![stable(feature = "rust1", since = "1.0.0")]
-uint_module! { u32 }
+int_module! { u32 }
//! The 64-bit unsigned integer type.
//!
//! *[See also the `u64` primitive type](../../std/primitive.u64.html).*
+//!
+//! Although using these constants won’t cause compilation warnings,
+//! new code should use the associated constants directly on the primitive type.
#![stable(feature = "rust1", since = "1.0.0")]
-uint_module! { u64 }
+int_module! { u64 }
//! The 8-bit unsigned integer type.
//!
//! *[See also the `u8` primitive type](../../std/primitive.u8.html).*
+//!
+//! Although using these constants won’t cause compilation warnings,
+//! new code should use the associated constants directly on the primitive type.
#![stable(feature = "rust1", since = "1.0.0")]
-uint_module! { u8 }
+int_module! { u8 }
+++ /dev/null
-#![doc(hidden)]
-
-macro_rules! uint_module {
- ($T:ident) => (uint_module!($T, #[stable(feature = "rust1", since = "1.0.0")]););
- ($T:ident, #[$attr:meta]) => (
- /// The smallest value that can be represented by this integer type.
- #[$attr]
- pub const MIN: $T = $T::min_value();
- /// The largest value that can be represented by this integer type.
- #[$attr]
- pub const MAX: $T = $T::max_value();
- )
-}
//! The pointer-sized unsigned integer type.
//!
//! *[See also the `usize` primitive type](../../std/primitive.usize.html).*
+//!
+//! Although using these constants won’t cause compilation warnings,
+//! new code should use the associated constants directly on the primitive type.
#![stable(feature = "rust1", since = "1.0.0")]
-uint_module! { usize }
+int_module! { usize }
/// ```
/// use std::ops::Add;
///
-/// #[derive(Debug, PartialEq)]
+/// #[derive(Debug, Copy, Clone, PartialEq)]
/// struct Point {
/// x: i32,
/// y: i32,
/// ```
/// use std::ops::Add;
///
-/// #[derive(Debug, PartialEq)]
+/// #[derive(Debug, Copy, Clone, PartialEq)]
/// struct Point<T> {
/// x: T,
/// y: T,
/// ```
/// use std::ops::Sub;
///
-/// #[derive(Debug, PartialEq)]
+/// #[derive(Debug, Copy, Clone, PartialEq)]
/// struct Point {
/// x: i32,
/// y: i32,
/// ```
/// use std::ops::AddAssign;
///
-/// #[derive(Debug, PartialEq)]
+/// #[derive(Debug, Copy, Clone, PartialEq)]
/// struct Point {
/// x: i32,
/// y: i32,
/// ```
/// use std::ops::SubAssign;
///
-/// #[derive(Debug, PartialEq)]
+/// #[derive(Debug, Copy, Clone, PartialEq)]
/// struct Point {
/// x: i32,
/// y: i32,
#[lang = "generator"]
#[unstable(feature = "generator_trait", issue = "43122")]
#[fundamental]
-pub trait Generator<#[cfg(not(bootstrap))] R = ()> {
+pub trait Generator<R = ()> {
/// The type of value this generator yields.
///
/// This associated type corresponds to the `yield` expression and the
/// been returned previously. While generator literals in the language are
/// guaranteed to panic on resuming after `Complete`, this is not guaranteed
/// for all implementations of the `Generator` trait.
- fn resume(
- self: Pin<&mut Self>,
- #[cfg(not(bootstrap))] arg: R,
- ) -> GeneratorState<Self::Yield, Self::Return>;
+ fn resume(self: Pin<&mut Self>, arg: R) -> GeneratorState<Self::Yield, Self::Return>;
}
-#[cfg(bootstrap)]
-#[unstable(feature = "generator_trait", issue = "43122")]
-impl<G: ?Sized + Generator> Generator for Pin<&mut G> {
- type Yield = G::Yield;
- type Return = G::Return;
-
- fn resume(mut self: Pin<&mut Self>) -> GeneratorState<Self::Yield, Self::Return> {
- G::resume((*self).as_mut())
- }
-}
-
-#[cfg(bootstrap)]
-#[unstable(feature = "generator_trait", issue = "43122")]
-impl<G: ?Sized + Generator + Unpin> Generator for &mut G {
- type Yield = G::Yield;
- type Return = G::Return;
-
- fn resume(mut self: Pin<&mut Self>) -> GeneratorState<Self::Yield, Self::Return> {
- G::resume(Pin::new(&mut *self))
- }
-}
-
-#[cfg(not(bootstrap))]
#[unstable(feature = "generator_trait", issue = "43122")]
impl<G: ?Sized + Generator<R>, R> Generator<R> for Pin<&mut G> {
type Yield = G::Yield;
}
}
-#[cfg(not(bootstrap))]
#[unstable(feature = "generator_trait", issue = "43122")]
impl<G: ?Sized + Generator<R> + Unpin, R> Generator<R> for &mut G {
type Yield = G::Yield;
//! ```rust
//! use std::ops::{Add, Sub};
//!
-//! #[derive(Debug, PartialEq)]
+//! #[derive(Debug, Copy, Clone, PartialEq)]
//! struct Point {
//! x: i32,
//! y: i32,
pub fn replace(&mut self, value: T) -> Option<T> {
mem::replace(self, Some(value))
}
+
+ /// Zips `self` with another `Option`.
+ ///
+ /// If `self` is `Some(s)` and `other` is `Some(o)`, this method returns `Some((s, o))`.
+ /// Otherwise, `None` is returned.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// #![feature(option_zip)]
+ /// let x = Some(1);
+ /// let y = Some("hi");
+ /// let z = None::<u8>;
+ ///
+ /// assert_eq!(x.zip(y), Some((1, "hi")));
+ /// assert_eq!(x.zip(z), None);
+ /// ```
+ #[unstable(feature = "option_zip", issue = "70086")]
+ pub fn zip<U>(self, other: Option<U>) -> Option<(T, U)> {
+ self.zip_with(other, |a, b| (a, b))
+ }
+
+ /// Zips `self` and another `Option` with function `f`.
+ ///
+ /// If `self` is `Some(s)` and `other` is `Some(o)`, this method returns `Some(f(s, o))`.
+ /// Otherwise, `None` is returned.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// #![feature(option_zip)]
+ ///
+ /// #[derive(Debug, PartialEq)]
+ /// struct Point {
+ /// x: f64,
+ /// y: f64,
+ /// }
+ ///
+ /// impl Point {
+ /// fn new(x: f64, y: f64) -> Self {
+ /// Self { x, y }
+ /// }
+ /// }
+ ///
+ /// let x = Some(17.5);
+ /// let y = Some(42.7);
+ ///
+ /// assert_eq!(x.zip_with(y, Point::new), Some(Point { x: 17.5, y: 42.7 }));
+ /// assert_eq!(x.zip_with(None, Point::new), None);
+ /// ```
+ #[unstable(feature = "option_zip", issue = "70086")]
+ pub fn zip_with<U, F, R>(self, other: Option<U>, f: F) -> Option<R>
+ where
+ F: FnOnce(T, U) -> R,
+ {
+ Some(f(self?, other?))
+ }
}
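The `?` operator does the heavy lifting in `zip_with` above: it returns `None` early from an `Option`-returning function. The same pattern works as a free function on stable Rust (the function name here is illustrative):

```rust
// Combine two Options: Some only when both are Some, as in Option::zip.
fn zip<T, U>(a: Option<T>, b: Option<U>) -> Option<(T, U)> {
    // `?` short-circuits with None if either operand is None.
    Some((a?, b?))
}

fn main() {
    assert_eq!(zip(Some(1), Some("hi")), Some((1, "hi")));
    assert_eq!(zip(Some(1), None::<u8>), None);
    assert_eq!(zip(None::<u8>, Some("hi")), None);
}
```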
impl<T: Copy> Option<&T> {
use crate::fmt;
use crate::panic::{Location, PanicInfo};
+/// The underlying implementation of libcore's `panic!` macro when no formatting is used.
#[cold]
// never inline unless panic_immediate_abort to avoid code
// bloat at the call sites as much as possible
// truncation and padding (even though none is used here). Using
// Arguments::new_v1 may allow the compiler to omit Formatter::pad from the
// output binary, saving up to a few kilobytes.
- panic_fmt(fmt::Arguments::new_v1(&[expr], &[]), Location::caller())
+ #[cfg(not(bootstrap))]
+ panic_fmt(fmt::Arguments::new_v1(&[expr], &[]));
+ #[cfg(bootstrap)]
+ panic_fmt(fmt::Arguments::new_v1(&[expr], &[]), Location::caller());
}
+#[cfg(not(bootstrap))]
+#[cold]
+#[cfg_attr(not(feature = "panic_immediate_abort"), inline(never))]
+#[track_caller]
+#[lang = "panic_bounds_check"] // needed by codegen for panic on OOB array/slice access
+fn panic_bounds_check(index: usize, len: usize) -> ! {
+ if cfg!(feature = "panic_immediate_abort") {
+ unsafe { super::intrinsics::abort() }
+ }
+
+ panic!("index out of bounds: the len is {} but the index is {}", len, index)
+}
+
+// For bootstrap, we need a variant with the old argument order, and a corresponding
+// `panic_fmt`.
+#[cfg(bootstrap)]
#[cold]
#[cfg_attr(not(feature = "panic_immediate_abort"), inline(never))]
#[lang = "panic_bounds_check"] // needed by codegen for panic on OOB array/slice access
)
}
+/// The underlying implementation of libcore's `panic!` macro when formatting is used.
#[cold]
#[cfg_attr(not(feature = "panic_immediate_abort"), inline(never))]
#[cfg_attr(feature = "panic_immediate_abort", inline)]
-pub fn panic_fmt(fmt: fmt::Arguments<'_>, location: &Location<'_>) -> ! {
+#[cfg_attr(not(bootstrap), track_caller)]
+pub fn panic_fmt(fmt: fmt::Arguments<'_>, #[cfg(bootstrap)] location: &Location<'_>) -> ! {
if cfg!(feature = "panic_immediate_abort") {
unsafe { super::intrinsics::abort() }
}
fn panic_impl(pi: &PanicInfo<'_>) -> !;
}
+ #[cfg(bootstrap)]
let pi = PanicInfo::internal_constructor(Some(&fmt), location);
+ #[cfg(not(bootstrap))]
+ let pi = PanicInfo::internal_constructor(Some(&fmt), Location::caller());
+
unsafe { panic_impl(&pi) }
}
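The `#[track_caller]` attribute threaded through `panic_fmt` above is what lets `Location::caller()` report the user's call site rather than a frame inside libcore. A minimal stable-Rust sketch of the mechanism (the `report` function is hypothetical):

```rust
use std::panic::Location;

// Because of #[track_caller], Location::caller() inside this function
// observes the location of the *call to report()*, not this body.
#[track_caller]
fn report() -> &'static Location<'static> {
    Location::caller()
}

fn main() {
    let loc = report();
    // The reported location is in this file, at the call site in main.
    assert!(loc.line() > 0);
    assert!(!loc.file().is_empty());
}
```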
//!
//! ## Examples
//!
-//! For a type like [`Vec<T>`], both possibilites (structural pinning or not) make sense.
+//! For a type like [`Vec<T>`], both possibilities (structural pinning or not) make sense.
//! A [`Vec<T>`] with structural pinning could have `get_pin`/`get_pin_mut` methods to get
//! pinned references to elements. However, it could *not* allow calling
//! [`pop`][Vec::pop] on a pinned [`Vec<T>`] because that would move the (structurally pinned)
/// ```
/// A value, once pinned, must remain pinned forever (unless its type implements `Unpin`).
///
- /// Similarily, calling `Pin::new_unchecked` on an `Rc<T>` is unsafe because there could be
+ /// Similarly, calling `Pin::new_unchecked` on an `Rc<T>` is unsafe because there could be
/// aliases to the same data that are not subject to the pinning restrictions:
/// ```
/// use std::rc::Rc;
pub use crate::macros::builtin::{
bench, global_allocator, test, test_case, RustcDecodable, RustcEncodable,
};
+
+#[cfg(not(bootstrap))]
+#[unstable(
+ feature = "cfg_accessible",
+ issue = "64797",
+ reason = "`cfg_accessible` is not fully implemented"
+)]
+#[doc(no_inline)]
+pub use crate::macros::builtin::cfg_accessible;
/// all of the following is true:
/// - it is properly aligned
/// - it must point to an initialized instance of T; in particular, the pointer must be
- /// "dereferencable" in the sense defined [here].
+ /// "dereferenceable" in the sense defined [here].
///
/// This applies even if the result of this method is unused!
/// (The part about being initialized is not yet fully decided, but until
/// within the same allocated object: [`offset`] is immediate Undefined Behavior when
/// crossing object boundaries; `wrapping_offset` produces a pointer but still leads
/// to Undefined Behavior if that pointer is dereferenced. [`offset`] can be optimized
- /// better and is thus preferrable in performance-sensitive code.
+ /// better and is thus preferable in performance-sensitive code.
///
/// If you need to cross object boundaries, cast the pointer to an integer and
/// do the arithmetic there.
/// within the same allocated object: [`add`] is immediate Undefined Behavior when
/// crossing object boundaries; `wrapping_add` produces a pointer but still leads
/// to Undefined Behavior if that pointer is dereferenced. [`add`] can be optimized
- /// better and is thus preferrable in performance-sensitive code.
+ /// better and is thus preferable in performance-sensitive code.
///
/// If you need to cross object boundaries, cast the pointer to an integer and
/// do the arithmetic there.
/// within the same allocated object: [`sub`] is immediate Undefined Behavior when
/// crossing object boundaries; `wrapping_sub` produces a pointer but still leads
/// to Undefined Behavior if that pointer is dereferenced. [`sub`] can be optimized
- /// better and is thus preferrable in performance-sensitive code.
+ /// better and is thus preferable in performance-sensitive code.
///
/// If you need to cross object boundaries, cast the pointer to an integer and
/// do the arithmetic there.
//! * All pointers (except for the null pointer) are valid for all operations of
//! [size zero][zst].
//! * For a pointer to be valid, it is necessary, but not always sufficient, that the pointer
-//! be *dereferencable*: the memory range of the given size starting at the pointer must all be
+//! be *dereferenceable*: the memory range of the given size starting at the pointer must all be
//! within the bounds of a single allocated object. Note that in Rust,
//! every (stack-allocated) variable is considered a separate allocated object.
//! * All accesses performed by functions in this module are *non-atomic* in the sense
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
pub unsafe fn write<T>(dst: *mut T, src: T) {
- // FIXME: the debug assertion here causes codegen test failures on some architectures.
- // See <https://github.com/rust-lang/rust/pull/69208#issuecomment-591326757>.
- // debug_assert!(is_aligned_and_not_null(dst), "attempt to write to unaligned or null pointer");
+ debug_assert!(is_aligned_and_not_null(dst), "attempt to write to unaligned or null pointer");
intrinsics::move_val_init(&mut *dst, src)
}
/// memory.
///
/// When calling this method, you have to ensure that if the pointer is
- /// non-NULL, then it is properly aligned, dereferencable (for the whole
+ /// non-NULL, then it is properly aligned, dereferenceable (for the whole
/// size of `T`) and points to an initialized instance of `T`. This applies
/// even if the result of this method is unused!
/// (The part about being initialized is not yet fully decided, but until
/// within the same allocated object: [`offset`] is immediate Undefined Behavior when
/// crossing object boundaries; `wrapping_offset` produces a pointer but still leads
/// to Undefined Behavior if that pointer is dereferenced. [`offset`] can be optimized
- /// better and is thus preferrable in performance-sensitive code.
+ /// better and is thus preferable in performance-sensitive code.
///
/// If you need to cross object boundaries, cast the pointer to an integer and
/// do the arithmetic there.
/// all of the following is true:
/// - it is properly aligned
/// - it must point to an initialized instance of T; in particular, the pointer must be
- /// "dereferencable" in the sense defined [here].
+ /// "dereferenceable" in the sense defined [here].
///
/// This applies even if the result of this method is unused!
/// (The part about being initialized is not yet fully decided, but until
/// within the same allocated object: [`add`] is immediate Undefined Behavior when
/// crossing object boundaries; `wrapping_add` produces a pointer but still leads
/// to Undefined Behavior if that pointer is dereferenced. [`add`] can be optimized
- /// better and is thus preferrable in performance-sensitive code.
+ /// better and is thus preferable in performance-sensitive code.
///
/// If you need to cross object boundaries, cast the pointer to an integer and
/// do the arithmetic there.
/// within the same allocated object: [`sub`] is immediate Undefined Behavior when
/// crossing object boundaries; `wrapping_sub` produces a pointer but still leads
/// to Undefined Behavior if that pointer is dereferenced. [`sub`] can be optimized
- /// better and is thus preferrable in performance-sensitive code.
+ /// better and is thus preferable in performance-sensitive code.
///
/// If you need to cross object boundaries, cast the pointer to an integer and
/// do the arithmetic there.
#[stable(feature = "rust1", since = "1.0.0")]
#[inline]
pub fn first(&self) -> Option<&T> {
- self.get(0)
+ if let [first, ..] = self { Some(first) } else { None }
}
/// Returns a mutable pointer to the first element of the slice, or `None` if it is empty.
#[stable(feature = "rust1", since = "1.0.0")]
#[inline]
pub fn first_mut(&mut self) -> Option<&mut T> {
- self.get_mut(0)
+ if let [first, ..] = self { Some(first) } else { None }
}
/// Returns the first and all the rest of the elements of the slice, or `None` if it is empty.
#[stable(feature = "slice_splits", since = "1.5.0")]
#[inline]
pub fn split_first(&self) -> Option<(&T, &[T])> {
- if self.is_empty() { None } else { Some((&self[0], &self[1..])) }
+ if let [first, tail @ ..] = self { Some((first, tail)) } else { None }
}
/// Returns the first and all the rest of the elements of the slice, or `None` if it is empty.
#[stable(feature = "slice_splits", since = "1.5.0")]
#[inline]
pub fn split_first_mut(&mut self) -> Option<(&mut T, &mut [T])> {
- if self.is_empty() {
- None
- } else {
- let split = self.split_at_mut(1);
- Some((&mut split.0[0], split.1))
- }
+ if let [first, tail @ ..] = self { Some((first, tail)) } else { None }
}
/// Returns the last and all the rest of the elements of the slice, or `None` if it is empty.
#[stable(feature = "slice_splits", since = "1.5.0")]
#[inline]
pub fn split_last(&self) -> Option<(&T, &[T])> {
- let len = self.len();
- if len == 0 { None } else { Some((&self[len - 1], &self[..(len - 1)])) }
+ if let [init @ .., last] = self { Some((last, init)) } else { None }
}
/// Returns the last and all the rest of the elements of the slice, or `None` if it is empty.
#[stable(feature = "slice_splits", since = "1.5.0")]
#[inline]
pub fn split_last_mut(&mut self) -> Option<(&mut T, &mut [T])> {
- let len = self.len();
- if len == 0 {
- None
- } else {
- let split = self.split_at_mut(len - 1);
- Some((&mut split.1[0], split.0))
- }
+ if let [init @ .., last] = self { Some((last, init)) } else { None }
}
/// Returns the last element of the slice, or `None` if it is empty.
#[stable(feature = "rust1", since = "1.0.0")]
#[inline]
pub fn last(&self) -> Option<&T> {
- let last_idx = self.len().checked_sub(1)?;
- self.get(last_idx)
+ if let [.., last] = self { Some(last) } else { None }
}
/// Returns a mutable pointer to the last item in the slice.
#[stable(feature = "rust1", since = "1.0.0")]
#[inline]
pub fn last_mut(&mut self) -> Option<&mut T> {
- let last_idx = self.len().checked_sub(1)?;
- self.get_mut(last_idx)
+ if let [.., last] = self { Some(last) } else { None }
}
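The subslice patterns used in the rewrites above (`[first, tail @ ..]`, `[init @ .., last]`) are stable and work anywhere; a sketch of the same idiom outside libcore:

```rust
// First/last extraction with subslice patterns, as in the rewritten
// slice methods above. Default binding modes make `first` a &T and
// `tail` a &[T] when matching against a &[T].
fn split_first<T>(s: &[T]) -> Option<(&T, &[T])> {
    if let [first, tail @ ..] = s { Some((first, tail)) } else { None }
}

fn split_last<T>(s: &[T]) -> Option<(&T, &[T])> {
    if let [init @ .., last] = s { Some((last, init)) } else { None }
}

fn main() {
    let xs = [1, 2, 3];
    assert_eq!(split_first(&xs), Some((&1, &[2, 3][..])));
    assert_eq!(split_last(&xs), Some((&3, &[1, 2][..])));
    assert_eq!(split_first::<i32>(&[]), None);
}
```

Compared with the old index-based bodies, the pattern form has no bounds checks to elide and handles the empty case in the same match.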
/// Returns a reference to an element or subslice depending on the type of
//! String manipulation.
//!
-//! For more details, see the `std::str` module.
+//! For more details, see the [`std::str`] module.
+//!
+//! [`std::str`]: ../../std/str/index.html
#![stable(feature = "rust1", since = "1.0.0")]
let haystack = self.haystack.as_bytes();
loop {
// get the haystack up to but not including the last character searched
- let bytes = if let Some(slice) = haystack.get(self.finger..self.finger_back) {
- slice
- } else {
- return None;
- };
+ let bytes = haystack.get(self.finger..self.finger_back)?;
// the last byte of the utf8 encoded needle
// SAFETY: we have an invariant that `utf8_size < 5`
let last_byte = unsafe { *self.utf8_encoded.get_unchecked(self.utf8_size - 1) };
#[inline]
fn is_suffix_of(self, haystack: &'a str) -> bool
- where $t: ReverseSearcher<'a>
+ where
+ $t: ReverseSearcher<'a>,
{
($pmap)(self).is_suffix_of(haystack)
}
- }
+ };
}
macro_rules! searcher_methods {
fn next_reject_back(&mut self) -> Option<(usize, usize)> {
self.0.next_reject_back()
}
- }
+ };
}
/////////////////////////////////////////////////////////////////////////////
}
#[inline]
-unsafe fn atomic_store<T>(dst: *mut T, val: T, order: Ordering) {
+unsafe fn atomic_store<T: Copy>(dst: *mut T, val: T, order: Ordering) {
match order {
Release => intrinsics::atomic_store_rel(dst, val),
Relaxed => intrinsics::atomic_store_relaxed(dst, val),
}
#[inline]
-unsafe fn atomic_load<T>(dst: *const T, order: Ordering) -> T {
+unsafe fn atomic_load<T: Copy>(dst: *const T, order: Ordering) -> T {
match order {
Acquire => intrinsics::atomic_load_acq(dst),
Relaxed => intrinsics::atomic_load_relaxed(dst),
#[inline]
#[cfg(target_has_atomic = "8")]
-unsafe fn atomic_swap<T>(dst: *mut T, val: T, order: Ordering) -> T {
+unsafe fn atomic_swap<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
match order {
Acquire => intrinsics::atomic_xchg_acq(dst, val),
Release => intrinsics::atomic_xchg_rel(dst, val),
/// Returns the previous value (like __sync_fetch_and_add).
#[inline]
#[cfg(target_has_atomic = "8")]
-unsafe fn atomic_add<T>(dst: *mut T, val: T, order: Ordering) -> T {
+unsafe fn atomic_add<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
match order {
Acquire => intrinsics::atomic_xadd_acq(dst, val),
Release => intrinsics::atomic_xadd_rel(dst, val),
/// Returns the previous value (like __sync_fetch_and_sub).
#[inline]
#[cfg(target_has_atomic = "8")]
-unsafe fn atomic_sub<T>(dst: *mut T, val: T, order: Ordering) -> T {
+unsafe fn atomic_sub<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
match order {
Acquire => intrinsics::atomic_xsub_acq(dst, val),
Release => intrinsics::atomic_xsub_rel(dst, val),
#[inline]
#[cfg(target_has_atomic = "8")]
-unsafe fn atomic_compare_exchange<T>(
+unsafe fn atomic_compare_exchange<T: Copy>(
dst: *mut T,
old: T,
new: T,
#[inline]
#[cfg(target_has_atomic = "8")]
-unsafe fn atomic_compare_exchange_weak<T>(
+unsafe fn atomic_compare_exchange_weak<T: Copy>(
dst: *mut T,
old: T,
new: T,
#[inline]
#[cfg(target_has_atomic = "8")]
-unsafe fn atomic_and<T>(dst: *mut T, val: T, order: Ordering) -> T {
+unsafe fn atomic_and<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
match order {
Acquire => intrinsics::atomic_and_acq(dst, val),
Release => intrinsics::atomic_and_rel(dst, val),
#[inline]
#[cfg(target_has_atomic = "8")]
-unsafe fn atomic_nand<T>(dst: *mut T, val: T, order: Ordering) -> T {
+unsafe fn atomic_nand<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
match order {
Acquire => intrinsics::atomic_nand_acq(dst, val),
Release => intrinsics::atomic_nand_rel(dst, val),
#[inline]
#[cfg(target_has_atomic = "8")]
-unsafe fn atomic_or<T>(dst: *mut T, val: T, order: Ordering) -> T {
+unsafe fn atomic_or<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
match order {
Acquire => intrinsics::atomic_or_acq(dst, val),
Release => intrinsics::atomic_or_rel(dst, val),
#[inline]
#[cfg(target_has_atomic = "8")]
-unsafe fn atomic_xor<T>(dst: *mut T, val: T, order: Ordering) -> T {
+unsafe fn atomic_xor<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
match order {
Acquire => intrinsics::atomic_xor_acq(dst, val),
Release => intrinsics::atomic_xor_rel(dst, val),
/// returns the max value (signed comparison)
#[inline]
#[cfg(target_has_atomic = "8")]
-unsafe fn atomic_max<T>(dst: *mut T, val: T, order: Ordering) -> T {
+unsafe fn atomic_max<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
match order {
Acquire => intrinsics::atomic_max_acq(dst, val),
Release => intrinsics::atomic_max_rel(dst, val),
/// returns the min value (signed comparison)
#[inline]
#[cfg(target_has_atomic = "8")]
-unsafe fn atomic_min<T>(dst: *mut T, val: T, order: Ordering) -> T {
+unsafe fn atomic_min<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
match order {
Acquire => intrinsics::atomic_min_acq(dst, val),
Release => intrinsics::atomic_min_rel(dst, val),
}
}
-/// returns the max value (signed comparison)
+/// returns the max value (unsigned comparison)
#[inline]
#[cfg(target_has_atomic = "8")]
-unsafe fn atomic_umax<T>(dst: *mut T, val: T, order: Ordering) -> T {
+unsafe fn atomic_umax<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
match order {
Acquire => intrinsics::atomic_umax_acq(dst, val),
Release => intrinsics::atomic_umax_rel(dst, val),
}
}
-/// returns the min value (signed comparison)
+/// returns the min value (unsigned comparison)
#[inline]
#[cfg(target_has_atomic = "8")]
-unsafe fn atomic_umin<T>(dst: *mut T, val: T, order: Ordering) -> T {
+unsafe fn atomic_umin<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
match order {
Acquire => intrinsics::atomic_umin_acq(dst, val),
Release => intrinsics::atomic_umin_rel(dst, val),
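The `T: Copy` bound added to these private helpers documents that the raw value is duplicated into the intrinsic; the public atomic API is unchanged by it. For reference, the fetch-style operations still return the previous value, as their doc comments note:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let counter = AtomicUsize::new(0);
    // fetch_add returns the previous value (like __sync_fetch_and_add).
    assert_eq!(counter.fetch_add(5, Ordering::Relaxed), 0);
    assert_eq!(counter.fetch_sub(2, Ordering::Relaxed), 5);
    assert_eq!(counter.load(Ordering::Relaxed), 3);
    // swap also returns the value that was replaced.
    assert_eq!(counter.swap(10, Ordering::Relaxed), 3);
    assert_eq!(counter.load(Ordering::Relaxed), 10);
}
```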
use core::alloc::Layout;
+use core::ptr::NonNull;
#[test]
fn const_unchecked_layout() {
const SIZE: usize = 0x2000;
const ALIGN: usize = 0x1000;
const LAYOUT: Layout = unsafe { Layout::from_size_align_unchecked(SIZE, ALIGN) };
+ const DANGLING: NonNull<u8> = LAYOUT.dangling();
assert_eq!(LAYOUT.size(), SIZE);
assert_eq!(LAYOUT.align(), ALIGN);
+ assert_eq!(Some(DANGLING), NonNull::new(ALIGN as *mut u8));
}
let t = ("aabb", "");
let u = ("a", "abb");
- assert!(s != t && t != u);
assert_ne!(s, t);
assert_ne!(t, u);
assert_ne!(hash(&s), hash(&t));
+// ignore-tidy-filelength
+
use core::cell::Cell;
use core::convert::TryFrom;
use core::iter::*;
check(xs, |&x| x < 3, 3); // small
check(xs, |&x| x > 6, 3); // large
}
+
+/// An iterator that panics whenever `next` or `next_back` is called
+/// after `None` has already been returned. This does not violate
+/// `Iterator`'s contract. Used to test that iterator adaptors don't
+/// poll their inner iterators after exhausting them.
+struct NonFused<I> {
+ iter: I,
+ done: bool,
+}
+
+impl<I> NonFused<I> {
+ fn new(iter: I) -> Self {
+ Self { iter, done: false }
+ }
+}
+
+impl<I> Iterator for NonFused<I>
+where
+ I: Iterator,
+{
+ type Item = I::Item;
+
+ fn next(&mut self) -> Option<Self::Item> {
+ assert!(!self.done, "this iterator has already returned None");
+ self.iter.next().or_else(|| {
+ self.done = true;
+ None
+ })
+ }
+}
+
+impl<I> DoubleEndedIterator for NonFused<I>
+where
+ I: DoubleEndedIterator,
+{
+ fn next_back(&mut self) -> Option<Self::Item> {
+ assert!(!self.done, "this iterator has already returned None");
+ self.iter.next_back().or_else(|| {
+ self.done = true;
+ None
+ })
+ }
+}
+
+#[test]
+fn test_peekable_non_fused() {
+ let mut iter = NonFused::new(empty::<i32>()).peekable();
+
+ assert_eq!(iter.peek(), None);
+ assert_eq!(iter.next_back(), None);
+}
+
+#[test]
+fn test_flatten_non_fused_outer() {
+ let mut iter = NonFused::new(once(0..2)).flatten();
+
+ assert_eq!(iter.next_back(), Some(1));
+ assert_eq!(iter.next(), Some(0));
+ assert_eq!(iter.next(), None);
+}
+
+#[test]
+fn test_flatten_non_fused_inner() {
+ let mut iter = once(0..1).chain(once(1..3)).flat_map(NonFused::new);
+
+ assert_eq!(iter.next_back(), Some(2));
+ assert_eq!(iter.next(), Some(0));
+ assert_eq!(iter.next(), Some(1));
+ assert_eq!(iter.next(), None);
+}
+#![feature(alloc_layout_extra)]
#![feature(bool_to_option)]
#![feature(bound_cloned)]
#![feature(box_syntax)]
} else if x < 0x20000 {
check(lower, SINGLETONS1U, SINGLETONS1L, NORMAL1)
} else {
- if 0x2a6d7 <= x && x < 0x2a700 {
+ if 0x2a6de <= x && x < 0x2a700 {
return false;
}
if 0x2b735 <= x && x < 0x2b740 {
if 0x2ebe1 <= x && x < 0x2f800 {
return false;
}
- if 0x2fa1e <= x && x < 0xe0100 {
+ if 0x2fa1e <= x && x < 0x30000 {
+ return false;
+ }
+ if 0x3134b <= x && x < 0xe0100 {
return false;
}
if 0xe01f0 <= x && x < 0x110000 {
(0x0a, 28),
(0x0b, 25),
(0x0c, 20),
- (0x0d, 18),
+ (0x0d, 16),
(0x0e, 13),
(0x0f, 4),
(0x10, 3),
(0x1d, 1),
(0x1f, 22),
(0x20, 3),
- (0x2b, 4),
+ (0x2b, 3),
(0x2c, 2),
(0x2d, 11),
(0x2e, 1),
0x4a, 0x5e, 0x64, 0x65, 0x84, 0x91, 0x9b, 0x9d,
0xc9, 0xce, 0xcf, 0x0d, 0x11, 0x29, 0x45, 0x49,
0x57, 0x64, 0x65, 0x8d, 0x91, 0xa9, 0xb4, 0xba,
- 0xbb, 0xc5, 0xc9, 0xdf, 0xe4, 0xe5, 0xf0, 0x04,
- 0x0d, 0x11, 0x45, 0x49, 0x64, 0x65, 0x80, 0x81,
- 0x84, 0xb2, 0xbc, 0xbe, 0xbf, 0xd5, 0xd7, 0xf0,
- 0xf1, 0x83, 0x85, 0x8b, 0xa4, 0xa6, 0xbe, 0xbf,
- 0xc5, 0xc7, 0xce, 0xcf, 0xda, 0xdb, 0x48, 0x98,
- 0xbd, 0xcd, 0xc6, 0xce, 0xcf, 0x49, 0x4e, 0x4f,
- 0x57, 0x59, 0x5e, 0x5f, 0x89, 0x8e, 0x8f, 0xb1,
- 0xb6, 0xb7, 0xbf, 0xc1, 0xc6, 0xc7, 0xd7, 0x11,
- 0x16, 0x17, 0x5b, 0x5c, 0xf6, 0xf7, 0xfe, 0xff,
- 0x80, 0x0d, 0x6d, 0x71, 0xde, 0xdf, 0x0e, 0x0f,
- 0x1f, 0x6e, 0x6f, 0x1c, 0x1d, 0x5f, 0x7d, 0x7e,
- 0xae, 0xaf, 0xbb, 0xbc, 0xfa, 0x16, 0x17, 0x1e,
- 0x1f, 0x46, 0x47, 0x4e, 0x4f, 0x58, 0x5a, 0x5c,
- 0x5e, 0x7e, 0x7f, 0xb5, 0xc5, 0xd4, 0xd5, 0xdc,
- 0xf0, 0xf1, 0xf5, 0x72, 0x73, 0x8f, 0x74, 0x75,
- 0x96, 0x97, 0x2f, 0x5f, 0x26, 0x2e, 0x2f, 0xa7,
- 0xaf, 0xb7, 0xbf, 0xc7, 0xcf, 0xd7, 0xdf, 0x9a,
- 0x40, 0x97, 0x98, 0x30, 0x8f, 0x1f, 0xc0, 0xc1,
- 0xce, 0xff, 0x4e, 0x4f, 0x5a, 0x5b, 0x07, 0x08,
- 0x0f, 0x10, 0x27, 0x2f, 0xee, 0xef, 0x6e, 0x6f,
- 0x37, 0x3d, 0x3f, 0x42, 0x45, 0x90, 0x91, 0xfe,
- 0xff, 0x53, 0x67, 0x75, 0xc8, 0xc9, 0xd0, 0xd1,
- 0xd8, 0xd9, 0xe7, 0xfe, 0xff,
+ 0xbb, 0xc5, 0xc9, 0xdf, 0xe4, 0xe5, 0xf0, 0x0d,
+ 0x11, 0x45, 0x49, 0x64, 0x65, 0x80, 0x84, 0xb2,
+ 0xbc, 0xbe, 0xbf, 0xd5, 0xd7, 0xf0, 0xf1, 0x83,
+ 0x85, 0x8b, 0xa4, 0xa6, 0xbe, 0xbf, 0xc5, 0xc7,
+ 0xce, 0xcf, 0xda, 0xdb, 0x48, 0x98, 0xbd, 0xcd,
+ 0xc6, 0xce, 0xcf, 0x49, 0x4e, 0x4f, 0x57, 0x59,
+ 0x5e, 0x5f, 0x89, 0x8e, 0x8f, 0xb1, 0xb6, 0xb7,
+ 0xbf, 0xc1, 0xc6, 0xc7, 0xd7, 0x11, 0x16, 0x17,
+ 0x5b, 0x5c, 0xf6, 0xf7, 0xfe, 0xff, 0x80, 0x0d,
+ 0x6d, 0x71, 0xde, 0xdf, 0x0e, 0x0f, 0x1f, 0x6e,
+ 0x6f, 0x1c, 0x1d, 0x5f, 0x7d, 0x7e, 0xae, 0xaf,
+ 0xbb, 0xbc, 0xfa, 0x16, 0x17, 0x1e, 0x1f, 0x46,
+ 0x47, 0x4e, 0x4f, 0x58, 0x5a, 0x5c, 0x5e, 0x7e,
+ 0x7f, 0xb5, 0xc5, 0xd4, 0xd5, 0xdc, 0xf0, 0xf1,
+ 0xf5, 0x72, 0x73, 0x8f, 0x74, 0x75, 0x96, 0x2f,
+ 0x5f, 0x26, 0x2e, 0x2f, 0xa7, 0xaf, 0xb7, 0xbf,
+ 0xc7, 0xcf, 0xd7, 0xdf, 0x9a, 0x40, 0x97, 0x98,
+ 0x30, 0x8f, 0x1f, 0xc0, 0xc1, 0xce, 0xff, 0x4e,
+ 0x4f, 0x5a, 0x5b, 0x07, 0x08, 0x0f, 0x10, 0x27,
+ 0x2f, 0xee, 0xef, 0x6e, 0x6f, 0x37, 0x3d, 0x3f,
+ 0x42, 0x45, 0x90, 0x91, 0xfe, 0xff, 0x53, 0x67,
+ 0x75, 0xc8, 0xc9, 0xd0, 0xd1, 0xd8, 0xd9, 0xe7,
+ 0xfe, 0xff,
];
#[rustfmt::skip]
const SINGLETONS1U: &[(u8, u8)] = &[
(0x09, 2),
(0x0a, 5),
(0x0b, 2),
+ (0x0e, 4),
(0x10, 1),
- (0x11, 4),
+ (0x11, 2),
(0x12, 5),
(0x13, 17),
- (0x14, 2),
+ (0x14, 1),
(0x15, 2),
(0x17, 2),
- (0x19, 4),
+ (0x19, 13),
(0x1c, 5),
(0x1d, 8),
(0x24, 1),
(0xe8, 2),
(0xee, 32),
(0xf0, 4),
- (0xf9, 6),
+ (0xf8, 2),
+ (0xf9, 2),
(0xfa, 2),
+ (0xfb, 1),
];
#[rustfmt::skip]
const SINGLETONS1L: &[u8] = &[
0x0c, 0x27, 0x3b, 0x3e, 0x4e, 0x4f, 0x8f, 0x9e,
0x9e, 0x9f, 0x06, 0x07, 0x09, 0x36, 0x3d, 0x3e,
0x56, 0xf3, 0xd0, 0xd1, 0x04, 0x14, 0x18, 0x36,
- 0x37, 0x56, 0x57, 0xbd, 0x35, 0xce, 0xcf, 0xe0,
- 0x12, 0x87, 0x89, 0x8e, 0x9e, 0x04, 0x0d, 0x0e,
- 0x11, 0x12, 0x29, 0x31, 0x34, 0x3a, 0x45, 0x46,
- 0x49, 0x4a, 0x4e, 0x4f, 0x64, 0x65, 0x5a, 0x5c,
- 0xb6, 0xb7, 0x1b, 0x1c, 0xa8, 0xa9, 0xd8, 0xd9,
- 0x09, 0x37, 0x90, 0x91, 0xa8, 0x07, 0x0a, 0x3b,
- 0x3e, 0x66, 0x69, 0x8f, 0x92, 0x6f, 0x5f, 0xee,
- 0xef, 0x5a, 0x62, 0x9a, 0x9b, 0x27, 0x28, 0x55,
- 0x9d, 0xa0, 0xa1, 0xa3, 0xa4, 0xa7, 0xa8, 0xad,
- 0xba, 0xbc, 0xc4, 0x06, 0x0b, 0x0c, 0x15, 0x1d,
- 0x3a, 0x3f, 0x45, 0x51, 0xa6, 0xa7, 0xcc, 0xcd,
- 0xa0, 0x07, 0x19, 0x1a, 0x22, 0x25, 0x3e, 0x3f,
- 0xc5, 0xc6, 0x04, 0x20, 0x23, 0x25, 0x26, 0x28,
- 0x33, 0x38, 0x3a, 0x48, 0x4a, 0x4c, 0x50, 0x53,
- 0x55, 0x56, 0x58, 0x5a, 0x5c, 0x5e, 0x60, 0x63,
- 0x65, 0x66, 0x6b, 0x73, 0x78, 0x7d, 0x7f, 0x8a,
- 0xa4, 0xaa, 0xaf, 0xb0, 0xc0, 0xd0, 0x0c, 0x72,
- 0xa3, 0xa4, 0xcb, 0xcc, 0x6e, 0x6f,
+ 0x37, 0x56, 0x57, 0x7f, 0xaa, 0xae, 0xaf, 0xbd,
+ 0x35, 0xe0, 0x12, 0x87, 0x89, 0x8e, 0x9e, 0x04,
+ 0x0d, 0x0e, 0x11, 0x12, 0x29, 0x31, 0x34, 0x3a,
+ 0x45, 0x46, 0x49, 0x4a, 0x4e, 0x4f, 0x64, 0x65,
+ 0x5c, 0xb6, 0xb7, 0x1b, 0x1c, 0x07, 0x08, 0x0a,
+ 0x0b, 0x14, 0x17, 0x36, 0x39, 0x3a, 0xa8, 0xa9,
+ 0xd8, 0xd9, 0x09, 0x37, 0x90, 0x91, 0xa8, 0x07,
+ 0x0a, 0x3b, 0x3e, 0x66, 0x69, 0x8f, 0x92, 0x6f,
+ 0x5f, 0xee, 0xef, 0x5a, 0x62, 0x9a, 0x9b, 0x27,
+ 0x28, 0x55, 0x9d, 0xa0, 0xa1, 0xa3, 0xa4, 0xa7,
+ 0xa8, 0xad, 0xba, 0xbc, 0xc4, 0x06, 0x0b, 0x0c,
+ 0x15, 0x1d, 0x3a, 0x3f, 0x45, 0x51, 0xa6, 0xa7,
+ 0xcc, 0xcd, 0xa0, 0x07, 0x19, 0x1a, 0x22, 0x25,
+ 0x3e, 0x3f, 0xc5, 0xc6, 0x04, 0x20, 0x23, 0x25,
+ 0x26, 0x28, 0x33, 0x38, 0x3a, 0x48, 0x4a, 0x4c,
+ 0x50, 0x53, 0x55, 0x56, 0x58, 0x5a, 0x5c, 0x5e,
+ 0x60, 0x63, 0x65, 0x66, 0x6b, 0x73, 0x78, 0x7d,
+ 0x7f, 0x8a, 0xa4, 0xaa, 0xaf, 0xb0, 0xc0, 0xd0,
+ 0xae, 0xaf, 0x79, 0xcc, 0x6e, 0x6f, 0x93,
];
#[rustfmt::skip]
const NORMAL0: &[u8] = &[
0x06, 0x11,
0x81, 0xac, 0x0e,
0x80, 0xab, 0x35,
- 0x1e, 0x15,
+ 0x28, 0x0b,
0x80, 0xe0, 0x03,
0x19, 0x08,
0x01, 0x04,
0x11, 0x0a,
0x50, 0x0f,
0x12, 0x07,
- 0x55, 0x08,
- 0x02, 0x04,
+ 0x55, 0x07,
+ 0x03, 0x04,
0x1c, 0x0a,
0x09, 0x03,
0x08, 0x03,
0x0b, 0x03,
0x80, 0xac, 0x06,
0x0a, 0x06,
- 0x1f, 0x41,
+ 0x21, 0x3f,
0x4c, 0x04,
0x2d, 0x03,
0x74, 0x08,
0x3b, 0x07,
0x02, 0x0e,
0x18, 0x09,
- 0x80, 0xb0, 0x30,
+ 0x80, 0xb3, 0x2d,
0x74, 0x0c,
0x80, 0xd6, 0x1a,
0x0c, 0x05,
0x80, 0xff, 0x05,
- 0x80, 0xb6, 0x05,
- 0x24, 0x0c,
- 0x9b, 0xc6, 0x0a,
- 0xd2, 0x30, 0x10,
+ 0x80, 0xdf, 0x0c,
+ 0xee, 0x0d, 0x03,
0x84, 0x8d, 0x03,
0x37, 0x09,
0x81, 0x5c, 0x14,
0x80, 0xb8, 0x08,
- 0x80, 0xc7, 0x30,
- 0x35, 0x04,
+ 0x80, 0xcb, 0x2a,
+ 0x38, 0x03,
0x0a, 0x06,
0x38, 0x08,
0x46, 0x08,
0x80, 0x83, 0x18,
0x1c, 0x0a,
0x16, 0x09,
- 0x48, 0x08,
+ 0x4c, 0x04,
0x80, 0x8a, 0x06,
0xab, 0xa4, 0x0c,
0x17, 0x04,
0x7b, 0x05,
0x03, 0x04,
0x2d, 0x03,
- 0x65, 0x04,
+ 0x66, 0x03,
0x01, 0x2f,
0x2e, 0x80, 0x82,
0x1d, 0x03,
0x33, 0x07,
0x2e, 0x08,
0x0a, 0x81, 0x26,
- 0x1f, 0x80, 0x81,
+ 0x52, 0x4e,
0x28, 0x08,
- 0x2a, 0x80, 0x86,
+ 0x2a, 0x56,
+ 0x1c, 0x14,
0x17, 0x09,
0x4e, 0x04,
0x1e, 0x0f,
0x43, 0x0e,
0x19, 0x07,
0x0a, 0x06,
- 0x47, 0x09,
+ 0x48, 0x08,
0x27, 0x09,
0x75, 0x0b,
0x3f, 0x41,
0x01, 0x05,
0x10, 0x03,
0x05, 0x80, 0x8b,
- 0x60, 0x20,
+ 0x62, 0x1e,
0x48, 0x08,
0x0a, 0x80, 0xa6,
0x5e, 0x22,
0x10, 0x80, 0xc0,
0x3c, 0x64,
0x53, 0x0c,
- 0x01, 0x80, 0xa0,
+ 0x48, 0x09,
+ 0x0a, 0x46,
0x45, 0x1b,
0x48, 0x08,
0x53, 0x1d,
0x0a, 0x06,
0x39, 0x07,
0x0a, 0x81, 0x36,
- 0x19, 0x80, 0xc7,
+ 0x19, 0x80, 0xb7,
+ 0x01, 0x0f,
0x32, 0x0d,
0x83, 0x9b, 0x66,
0x75, 0x0b,
0x4b, 0x04,
0x39, 0x07,
0x11, 0x40,
- 0x04, 0x1c,
+ 0x05, 0x0b,
+ 0x02, 0x0e,
0x97, 0xf8, 0x08,
- 0x82, 0xf3, 0xa5, 0x0d,
+ 0x84, 0xd6, 0x2a,
+ 0x09, 0xa2, 0xf7,
0x81, 0x1f, 0x31,
0x03, 0x11,
0x04, 0x08,
0x2c, 0x04,
0x64, 0x0c,
0x56, 0x0a,
- 0x0d, 0x03,
- 0x5d, 0x03,
- 0x3d, 0x39,
+ 0x80, 0xae, 0x38,
0x1d, 0x0d,
0x2c, 0x04,
0x09, 0x07,
0x02, 0x0e,
0x06, 0x80, 0x9a,
- 0x83, 0xd6, 0x0a,
+ 0x83, 0xd8, 0x08,
+ 0x0d, 0x03,
0x0d, 0x03,
- 0x0b, 0x05,
0x74, 0x0c,
0x59, 0x07,
0x0c, 0x14,
0x38, 0x08,
0x0a, 0x06,
0x28, 0x08,
- 0x1e, 0x52,
- 0x77, 0x03,
- 0x31, 0x03,
- 0x80, 0xa6, 0x0c,
- 0x14, 0x04,
+ 0x22, 0x4e,
+ 0x81, 0x54, 0x0c,
+ 0x15, 0x03,
0x03, 0x05,
+ 0x07, 0x09,
+ 0x19, 0x07,
+ 0x07, 0x09,
0x03, 0x0d,
- 0x06, 0x85, 0x6a,
+ 0x07, 0x29,
+ 0x80, 0xcb, 0x25,
+ 0x0a, 0x84, 0x06,
];
//! This file is generated by src/tools/unicode-table-generator; do not edit manually!
use super::range_search;
-pub const UNICODE_VERSION: (u32, u32, u32) = (12, 1, 0);
+pub const UNICODE_VERSION: (u32, u32, u32) = (13, 0, 0);
#[rustfmt::skip]
pub mod alphabetic {
- static BITSET_LAST_CHUNK_MAP: (u16, u8) = (190, 37);
- static BITSET_CHUNKS_MAP: [u8; 187] = [
- 6, 32, 10, 18, 19, 23, 21, 12, 7, 5, 0, 20, 14, 49, 49, 49, 49, 49, 49, 36, 49, 49, 49, 49,
- 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 47, 49, 30, 8, 49, 49, 49, 49,
- 49, 49, 49, 49, 49, 49, 45, 0, 0, 0, 0, 0, 0, 0, 0, 4, 35, 17, 31, 16, 25, 24, 26, 13, 15,
- 44, 27, 0, 0, 49, 11, 0, 0, 0, 39, 0, 0, 0, 0, 0, 0, 0, 0, 38, 1, 49, 49, 49, 49, 49, 48,
- 42, 0, 0, 0, 0, 0, 0, 0, 0, 0, 34, 0, 0, 28, 0, 0, 0, 0, 0, 29, 0, 0, 9, 0, 33, 2, 3, 0, 0,
- 0, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49,
- 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 41, 49, 49, 49,
- 43, 22, 49, 49, 49, 49, 40, 49, 49, 49, 49, 49, 49, 46,
+ static BITSET_LAST_CHUNK_MAP: (u16, u8) = (196, 44);
+ static BITSET_CHUNKS_MAP: [u8; 196] = [
+ 6, 32, 10, 18, 19, 23, 21, 12, 7, 5, 0, 20, 14, 50, 50, 50, 50, 50, 50, 37, 50, 50, 50, 50,
+ 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 49, 50, 30, 8, 50, 50, 50, 50,
+ 50, 50, 50, 50, 50, 50, 46, 0, 0, 0, 0, 0, 0, 0, 0, 4, 36, 17, 31, 16, 25, 24, 26, 13, 15,
+ 45, 27, 0, 0, 50, 11, 0, 0, 0, 40, 0, 0, 0, 0, 0, 0, 0, 0, 39, 1, 50, 50, 50, 50, 50, 48,
+ 50, 34, 0, 0, 0, 0, 0, 0, 0, 0, 35, 0, 0, 28, 0, 0, 0, 0, 0, 29, 0, 0, 9, 0, 33, 2, 3, 0, 0,
+ 0, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50,
+ 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 42, 50, 50, 50,
+ 43, 22, 50, 50, 50, 50, 41, 50, 50, 50, 50, 50, 50, 47, 0, 0, 0, 38, 0, 50, 50, 50, 50,
];
- static BITSET_INDEX_CHUNKS: [[u8; 16]; 50] = [
+ static BITSET_INDEX_CHUNKS: [[u8; 16]; 51] = [
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, 0, 248, 0, 0, 248, 241, 38, 40],
- [0, 0, 0, 0, 0, 0, 0, 0, 108, 133, 110, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 190, 200, 9, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 248, 248, 248, 248, 248, 205, 248, 23, 134, 245, 68, 237],
- [0, 0, 179, 52, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 103, 99, 176, 248, 248, 248, 248, 248, 248, 248, 61, 0, 151, 217, 178],
- [0, 145, 28, 0, 168, 221, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [48, 77, 248, 165, 201, 120, 184, 137, 91, 175, 143, 83, 206, 196, 248, 56],
- [53, 0, 0, 0, 126, 15, 0, 0, 0, 0, 0, 58, 0, 0, 0, 0],
- [59, 54, 127, 199, 167, 186, 157, 114, 154, 84, 160, 115, 158, 66, 155, 21],
+ [0, 0, 0, 0, 0, 0, 0, 0, 0, 254, 0, 0, 254, 247, 39, 68],
+ [0, 0, 0, 0, 0, 0, 0, 0, 111, 135, 113, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 195, 205, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 254, 254, 254, 254, 254, 210, 254, 25, 136, 251, 71, 243],
+ [0, 0, 182, 52, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 107, 103, 180, 254, 254, 254, 254, 254, 254, 254, 61, 0, 155, 222, 181],
+ [0, 148, 30, 0, 172, 226, 9, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [48, 80, 254, 169, 206, 123, 189, 139, 95, 179, 145, 86, 211, 204, 254, 56],
+ [53, 0, 0, 0, 129, 17, 0, 0, 0, 0, 0, 58, 0, 0, 0, 0],
+ [59, 54, 185, 203, 171, 191, 161, 117, 158, 87, 164, 118, 162, 67, 159, 23],
[62, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [91, 129, 164, 101, 248, 248, 248, 79, 248, 248, 248, 248, 230, 128, 135, 117],
- [97, 0, 220, 144, 0, 0, 212, 44, 142, 240, 30, 97, 0, 0, 0, 0],
- [116, 247, 219, 171, 188, 248, 104, 190, 0, 0, 0, 0, 0, 0, 0, 0],
- [141, 185, 88, 0, 149, 213, 22, 0, 0, 0, 0, 89, 0, 0, 0, 0],
- [147, 90, 35, 82, 98, 0, 153, 0, 85, 119, 29, 45, 86, 71, 18, 0],
- [150, 32, 248, 107, 0, 81, 0, 0, 0, 0, 227, 17, 211, 105, 231, 19],
- [162, 41, 161, 69, 163, 173, 123, 73, 106, 14, 124, 37, 1, 187, 121, 0],
- [172, 240, 228, 170, 248, 248, 248, 248, 248, 229, 138, 235, 234, 24, 222, 125],
- [208, 233, 248, 74, 204, 64, 140, 232, 63, 0, 0, 0, 0, 0, 0, 0],
- [220, 97, 202, 86, 94, 78, 203, 9, 226, 80, 46, 0, 183, 11, 174, 67],
- [231, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248],
- [247, 248, 248, 248, 248, 248, 248, 248, 248, 209, 225, 95, 76, 75, 180, 25],
- [248, 5, 96, 50, 72, 87, 248, 26, 132, 0, 198, 51, 159, 42, 0, 0],
- [248, 8, 72, 72, 49, 0, 0, 0, 0, 0, 0, 0, 194, 5, 0, 89],
- [248, 36, 248, 7, 0, 0, 139, 31, 143, 3, 93, 0, 55, 0, 0, 0],
- [248, 62, 248, 248, 248, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [248, 118, 34, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [248, 236, 166, 246, 136, 239, 248, 248, 248, 248, 215, 169, 182, 207, 214, 12],
- [248, 248, 13, 130, 248, 248, 248, 248, 57, 146, 248, 65, 218, 248, 243, 177],
- [248, 248, 191, 111, 197, 43, 0, 0, 248, 248, 248, 248, 91, 47, 0, 0],
- [248, 248, 244, 248, 189, 223, 152, 70, 224, 210, 248, 148, 240, 242, 68, 100],
- [248, 248, 248, 4, 248, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [248, 248, 248, 248, 35, 195, 248, 248, 248, 248, 248, 113, 0, 0, 0, 0],
- [248, 248, 248, 248, 131, 240, 238, 109, 0, 181, 248, 122, 102, 216, 143, 27],
- [248, 248, 248, 248, 248, 248, 86, 0, 248, 248, 248, 248, 248, 248, 248, 248],
- [248, 248, 248, 248, 248, 248, 248, 248, 33, 0, 0, 0, 0, 0, 0, 0],
- [248, 248, 248, 248, 248, 248, 248, 248, 97, 35, 0, 60, 65, 156, 16, 0],
- [248, 248, 248, 248, 248, 248, 248, 248, 248, 6, 0, 0, 0, 0, 0, 0],
- [248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 192, 248, 248, 248, 248, 248],
- [248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 20, 248, 248, 248, 248],
- [248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 72, 0, 0, 0, 0],
- [248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 81, 248, 248, 248],
- [248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 23, 0],
- [248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 193, 112],
- [248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 39],
- [248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 65],
- [248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 92],
- [248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248, 248],
+ [95, 131, 168, 105, 254, 254, 254, 82, 254, 254, 254, 254, 236, 130, 137, 120],
+ [101, 0, 225, 146, 151, 2, 217, 45, 144, 246, 32, 101, 0, 0, 0, 0],
+ [119, 253, 224, 175, 193, 254, 227, 195, 0, 0, 0, 0, 0, 0, 0, 0],
+ [143, 190, 91, 0, 153, 218, 24, 0, 0, 0, 0, 92, 0, 0, 66, 0],
+ [150, 94, 37, 85, 102, 0, 157, 0, 88, 122, 31, 46, 89, 74, 20, 0],
+ [154, 34, 254, 110, 0, 84, 0, 0, 0, 0, 233, 19, 216, 108, 237, 21],
+ [166, 42, 165, 72, 167, 177, 126, 76, 109, 16, 127, 38, 1, 192, 124, 0],
+ [176, 246, 234, 174, 254, 254, 254, 254, 254, 235, 140, 241, 240, 26, 228, 128],
+ [213, 239, 254, 77, 209, 64, 142, 238, 63, 0, 0, 0, 0, 0, 0, 0],
+ [225, 101, 207, 89, 98, 81, 208, 10, 232, 83, 147, 1, 188, 13, 178, 70],
+ [237, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254],
+ [253, 254, 254, 254, 254, 254, 254, 254, 254, 214, 231, 99, 79, 78, 183, 27],
+ [254, 6, 100, 50, 75, 90, 254, 28, 134, 0, 202, 51, 163, 43, 0, 0],
+ [254, 9, 75, 75, 49, 0, 0, 0, 0, 0, 69, 0, 199, 6, 195, 93],
+ [254, 41, 254, 8, 0, 0, 141, 33, 145, 4, 97, 0, 55, 0, 0, 0],
+ [254, 62, 254, 254, 254, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [254, 121, 36, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [254, 242, 170, 252, 138, 245, 254, 254, 254, 254, 220, 173, 186, 212, 219, 14],
+ [254, 254, 15, 132, 254, 254, 254, 254, 57, 149, 254, 65, 223, 254, 249, 187],
+ [254, 254, 196, 114, 201, 44, 0, 0, 254, 254, 254, 254, 95, 47, 0, 0],
+ [254, 254, 250, 254, 194, 229, 156, 73, 230, 215, 254, 152, 246, 248, 71, 104],
+ [254, 254, 254, 5, 254, 12, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [254, 254, 254, 22, 9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [254, 254, 254, 254, 37, 200, 254, 254, 254, 254, 254, 116, 0, 0, 0, 0],
+ [254, 254, 254, 254, 133, 246, 244, 112, 0, 184, 254, 125, 106, 221, 145, 29],
+ [254, 254, 254, 254, 254, 254, 254, 0, 254, 254, 254, 254, 254, 254, 254, 254],
+ [254, 254, 254, 254, 254, 254, 254, 254, 35, 0, 0, 0, 0, 0, 0, 0],
+ [254, 254, 254, 254, 254, 254, 254, 254, 101, 37, 0, 60, 65, 160, 18, 0],
+ [254, 254, 254, 254, 254, 254, 254, 254, 254, 7, 0, 0, 0, 0, 0, 0],
+ [254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 197, 254, 254, 254, 254, 254],
+ [254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 35, 254, 254, 254, 254],
+ [254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 84, 254, 254, 254],
+ [254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 11, 0, 0],
+ [254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 25, 0],
+ [254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 198, 115],
+ [254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 40],
+ [254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 96],
+ [254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 125],
+ [254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254, 254],
];
- static BITSET: [u64; 249] = [
- 0, 1, 15, 17, 31, 63, 127, 179, 511, 1023, 2191, 4079, 4087, 8191, 8319, 16384, 65535,
- 131071, 262143, 4128527, 8388607, 8461767, 24870911, 67108863, 134217727, 276824575,
- 335544350, 486341884, 536805376, 536870911, 553648127, 1056964608, 1073692671, 1073741823,
- 1140785663, 2147483647, 2147485627, 4026540127, 4294934783, 8589934591, 47244640256,
- 64548249055, 68191066527, 68719476735, 115913785343, 137438953215, 549755813888,
- 1095220854783, 1099511627711, 1099511627775, 2199023190016, 2199023255551, 4398046511103,
- 8641373536127, 8791831609343, 8795690369023, 8796093022207, 13198434443263, 17592186044415,
- 35184321757183, 70368744112128, 88094074470339, 140737488355327, 140737488355328,
- 141836999983103, 281474976710655, 563017343310239, 1125625028935679, 1125899906842623,
- 1688915364814303, 2119858418286774, 2251795522912255, 2251799813685247, 3377704004976767,
- 3509778554814463, 3905461007941631, 4503595333443583, 4503599627370495, 8796093022142464,
- 9006649498927104, 9007192812290047, 9007199254740991, 15762594400829440, 17169970223906821,
- 17732925109967239, 18014398491652207, 18014398509481983, 20266198323101808,
- 36027697507139583, 36028792723996672, 36028792728190975, 36028797018963967,
- 72057594037927935, 90071992547409919, 143851303137705983, 144053615424700415,
- 144115188075855868, 144115188075855871, 288230371860938751, 297241973452963840,
- 301749971126844416, 319718190147960832, 576460743713488896, 576460743847706622,
- 576460748008488959, 576460752303359999, 576460752303423486, 576460752303423487,
- 790380184120328175, 1152640029630136575, 1152917029519358975, 1152921504591118335,
- 1152921504606845055, 1152921504606846975, 1153765996922689951, 2161727885562420159,
- 2251241253188403424, 2295745090394464220, 2305570330330005503, 2305843004918726656,
- 2305843004919250943, 2305843009196916483, 2305843009213693951, 3457638613854978028,
- 4323455298678290390, 4557642822898941951, 4575692405780512767, 4602678814877679616,
- 4611686017001275199, 4611686018360336384, 4611686018427322368, 4611686018427387903,
- 4656722014700830719, 6843210385291930244, 6881498031078244479, 6908521828386340863,
- 8935141660164089791, 8935423131384840192, 9168765891372858879, 9169328841326329855,
- 9187201948305063935, 9187343239835811327, 9216616637413720063, 9223372036854775807,
- 9223372041149743103, 9223934986808197120, 10371930679322607615, 10502394331027995967,
- 11241233151490523135, 13006395723845991295, 13258596753222922239, 13609596598936928288,
- 13834776580305453567, 13907115649320091647, 14082190885810440174, 14123225865944680428,
- 16212958624174047247, 16412803692974677999, 16424062692043104238, 16424062692043104239,
- 16424062692043243502, 16424625641996804079, 16429129241624174575, 16717361816799141871,
- 16717361816799216127, 16788293510930366511, 17005555242810474495, 17293822569102704639,
- 17581979622616071300, 17870283321271910397, 17870283321406070975, 17870283321406128127,
- 17978369712463020031, 18158513764145585631, 18158781978395017215, 18194542490281852927,
- 18410715276682199039, 18410715276690587772, 18428729675200069631, 18428729675200069632,
- 18433233274827440127, 18437455399478099968, 18437736874452713471, 18442240474082181119,
+ static BITSET: [u64; 255] = [
+ 0, 1, 7, 15, 17, 31, 63, 127, 179, 511, 1023, 2047, 2191, 4079, 4087, 8191, 8319, 16384,
+ 65535, 131071, 262143, 4128527, 4194303, 8461767, 24870911, 67108863, 134217727, 276824575,
+ 335593502, 486341884, 536805376, 536870911, 553648127, 1056964608, 1073692671, 1073741823,
+ 1140785663, 2147483647, 4026540127, 4294934783, 8589934591, 15032387515, 64548249055,
+ 68191066527, 68719476735, 115913785343, 137438953215, 1095220854783, 1099511627711,
+ 1099511627775, 2199023190016, 2199023255551, 4398046511103, 8641373536127, 8791831609343,
+ 8795690369023, 8796093022207, 13198434443263, 17592186044415, 35184321757183,
+ 70368744112128, 88094074470339, 140737488355327, 140737488355328, 141836999983103,
+ 281474976710655, 281474976710656, 563017343310239, 844472174772224, 875211255709695,
+ 1125625028935679, 1125899906842623, 1688915364814303, 2119858418286774, 2251795522912255,
+ 2251799813685247, 3377704004976767, 3509778554814463, 3905461007941631, 4503595333443583,
+ 4503599627370495, 8796093022142464, 9006649498927104, 9007192812290047, 9007199254740991,
+ 15762594400829440, 17169970223906821, 17732925109967239, 18014398491652207,
+ 18014398509481983, 20266198323101936, 36027697507139583, 36028792723996672,
+ 36028792723996703, 36028792728190975, 36028797018963967, 72057594037927935,
+ 90071992547409919, 143851303137705983, 144053615424700415, 144115188075855868,
+ 144115188075855871, 288230371860938751, 297241973452963840, 301749971126844416,
+ 319718190147960832, 576460743713488896, 576460743847706622, 576460752303359999,
+ 576460752303423486, 576460752303423487, 790380184120328175, 1152640029630136575,
+ 1152917029519358975, 1152921504591118335, 1152921504606845055, 1152921504606846975,
+ 1153765996922689951, 2161727885562420159, 2251241253188403424, 2295745090394464220,
+ 2305570330330005503, 2305843004918726656, 2305843004919250943, 2305843009196916483,
+ 2305843009213693951, 3457638613854978030, 4323455298678290390, 4557642822898941951,
+ 4575692405780512767, 4611686017001275199, 4611686018360336384, 4611686018427322368,
+ 4611686018427387903, 4656722014700830719, 6843210385291930244, 6881498031078244479,
+ 6908521828386340863, 8935141660164089791, 8935423131384840192, 9168765891372858879,
+ 9169328841326329855, 9187201948305063935, 9187343239835811327, 9216616637413720063,
+ 9223372036854775807, 9223372041149743103, 9223372586610589696, 9223934986808197120,
+ 10371930679322607615, 10502394331027995967, 11078855083321979519, 11241233151490523135,
+ 13006395723845991295, 13258596753222922239, 13609596598936928288, 13834776580305453567,
+ 13907115649320091647, 14082190885810440174, 14123225865944680428, 16212958624174047247,
+ 16412803692974677999, 16424062692043104238, 16424062692043104239, 16424062692043243502,
+ 16424625641996804079, 16429129241624174575, 16717361816799141887, 16717361816799216127,
+ 16788293510930366511, 17005555242810474495, 17293822569102704639, 17581979622616071300,
+ 17870283321271910397, 17870283321406070975, 17870283321406128127, 17978369712463020031,
+ 18158513764145585631, 18158781978395017215, 18194542490281852927, 18410715276682199039,
+ 18428729675200069631, 18428729675200069632, 18433233274827440127, 18437455399478099968,
+ 18437736870159843328, 18437736874452713471, 18437736874454812668, 18442240474082181119,
18444492273895866367, 18445618173802708993, 18446181192473632767, 18446216308128218879,
18446462598732840928, 18446462598732840959, 18446462598732840960, 18446462599806582783,
18446462615912710143, 18446462667452317695, 18446463149025525759, 18446463629525450752,
- 18446463698110251007, 18446463698244468735, 18446464796682337663, 18446466966713532416,
+ 18446463698244468735, 18446464796682337663, 18446466966713532671, 18446466996645134335,
18446466996779287551, 18446471394825862144, 18446471394825863167, 18446480190918885375,
18446498607738650623, 18446532967477018623, 18446602782178705022, 18446603336221163519,
18446603336221196287, 18446638520593285119, 18446673709243564031, 18446708893632430079,
18446740770879700992, 18446741595513422027, 18446741874686295551, 18446743249075830783,
18446743798965862398, 18446744056529672000, 18446744060816261120, 18446744068886102015,
- 18446744069414584320, 18446744069414601696, 18446744069414649855, 18446744069456527359,
- 18446744069548736512, 18446744069548802046, 18446744069683019775, 18446744069951455231,
- 18446744070421282815, 18446744070446333439, 18446744070475743231, 18446744070488326143,
- 18446744071553646463, 18446744071562067967, 18446744073696837631, 18446744073701162813,
- 18446744073707454463, 18446744073709027328, 18446744073709355007, 18446744073709419615,
- 18446744073709486080, 18446744073709520895, 18446744073709543424, 18446744073709550079,
- 18446744073709550595, 18446744073709551579, 18446744073709551599, 18446744073709551614,
- 18446744073709551615,
+ 18446744069414584320, 18446744069414601696, 18446744069414617087, 18446744069414649855,
+ 18446744069456527359, 18446744069548736512, 18446744069548802046, 18446744069683019775,
+ 18446744069951455231, 18446744070421282815, 18446744070446333439, 18446744070475743231,
+ 18446744070488326143, 18446744071553646463, 18446744071562067967, 18446744073696837631,
+ 18446744073701162813, 18446744073707454463, 18446744073709027328, 18446744073709355007,
+ 18446744073709419615, 18446744073709486080, 18446744073709520895, 18446744073709543424,
+ 18446744073709550079, 18446744073709550595, 18446744073709551579, 18446744073709551599,
+ 18446744073709551614, 18446744073709551615,
];
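The `BITSET_CHUNKS_MAP` / `BITSET_INDEX_CHUNKS` / `BITSET` triple above is a three-level compression of a per-codepoint membership bitmap: the first map picks a 1024-codepoint chunk, the chunk picks one of sixteen deduplicated 64-bit words, and the word holds the membership bit. As a toy illustration only (the tables below are hypothetical data covering ASCII letters, not the generated Unicode tables, and the real code also consults `BITSET_LAST_CHUNK_MAP` for the tail of the codepoint space), the lookup works roughly like this:

```rust
// Toy sketch of the three-level bitset lookup used by these generated
// tables. All table data here is hypothetical (ASCII letters only).

// Level 1: codepoint / 1024 -> chunk index (one chunk covers 1024 codepoints).
static CHUNKS_MAP: [u8; 1] = [0];
// Level 2: per chunk, (codepoint % 1024) / 64 -> index of a deduplicated u64 word.
static INDEX_CHUNKS: [[u8; 16]; 1] = [[0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]];
// Level 3: the deduplicated 64-bit words themselves.
static BITSET: [u64; 2] = [
    0,
    0x07FF_FFFE_07FF_FFFE, // bits for 'A'..='Z' and 'a'..='z' within codepoints 64..=127
];

fn lookup(c: char) -> bool {
    let cp = c as usize;
    // Codepoints past the mapped range are not in the set.
    let Some(&chunk) = CHUNKS_MAP.get(cp / 1024) else { return false };
    let word = BITSET[INDEX_CHUNKS[chunk as usize][(cp % 1024) / 64] as usize];
    (word >> (cp % 64)) & 1 != 0
}

fn main() {
    assert!(lookup('a') && lookup('Z'));
    assert!(!lookup('0') && !lookup('é')); // U+00E9 is not in this toy set
    println!("toy bitset lookup ok");
}
```

The deduplication is why the word lists above shrink or grow so irregularly between Unicode versions: identical 64-bit words are stored once and referenced by index, so adding one codepoint can renumber most of the table.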
pub fn lookup(c: char) -> bool {
static BITSET_LAST_CHUNK_MAP: (u16, u8) = (896, 33);
static BITSET_CHUNKS_MAP: [u8; 125] = [
25, 14, 21, 30, 28, 4, 17, 23, 22, 0, 0, 16, 27, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 18, 13, 19, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 18, 13, 20, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 3, 6, 9, 0, 7, 11, 32, 31, 26, 29, 0, 0, 0, 0, 0, 24, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 5, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 15, 0, 0, 0, 0,
- 10, 0, 8, 0, 20, 0, 12, 0, 1,
+ 10, 0, 8, 0, 19, 0, 12, 0, 1,
];
static BITSET_INDEX_CHUNKS: [[u8; 16]; 34] = [
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 164],
- [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 22, 47, 52],
- [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 40, 0, 171, 2],
- [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 92, 88, 134, 38],
- [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 94, 102, 6, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, 76, 26, 0, 146, 136, 79, 43, 117],
- [0, 0, 0, 0, 0, 0, 0, 0, 152, 0, 0, 58, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, 165, 97, 75, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 128, 0, 0, 0, 48, 0, 114, 0, 0],
- [0, 0, 0, 0, 0, 170, 68, 0, 0, 7, 0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 61, 0, 0, 0, 0, 0, 0, 0, 0, 23, 0, 0],
- [0, 0, 0, 28, 0, 14, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 0, 0, 133, 0, 0, 0, 0, 15, 160, 45, 84, 51, 78, 12, 109],
- [0, 0, 11, 0, 0, 30, 161, 90, 35, 80, 0, 69, 173, 13, 81, 129],
- [0, 0, 57, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 131, 0, 85, 0, 148, 0, 175, 73, 0, 0, 0, 0, 0, 0, 0],
- [20, 4, 62, 0, 118, 0, 0, 0, 32, 154, 145, 0, 124, 89, 67, 86],
- [25, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [59, 0, 0, 150, 70, 24, 132, 60, 100, 122, 163, 99, 0, 46, 0, 66],
- [63, 0, 0, 0, 135, 0, 0, 0, 0, 0, 0, 74, 0, 0, 0, 0],
- [71, 33, 0, 178, 123, 83, 120, 137, 121, 98, 121, 167, 153, 55, 3, 18],
- [72, 149, 36, 82, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [104, 133, 0, 110, 174, 105, 177, 166, 0, 0, 0, 0, 0, 0, 155, 139],
- [107, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [111, 50, 106, 0, 0, 0, 0, 0, 0, 0, 172, 179, 179, 112, 9, 0],
- [113, 0, 0, 0, 0, 0, 0, 49, 142, 34, 31, 0, 0, 0, 0, 0],
- [116, 0, 42, 141, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [140, 93, 37, 119, 0, 0, 0, 0, 0, 0, 0, 0, 0, 44, 0, 0],
- [159, 0, 101, 0, 158, 10, 29, 0, 0, 0, 0, 91, 0, 0, 0, 0],
- [162, 56, 153, 54, 125, 53, 0, 27, 115, 21, 126, 19, 108, 144, 127, 8],
- [168, 41, 151, 5, 0, 0, 157, 39, 156, 1, 103, 0, 65, 0, 0, 0],
- [169, 147, 130, 17, 96, 87, 143, 16, 138, 0, 0, 64, 125, 95, 0, 0],
- [176, 179, 0, 0, 179, 179, 179, 77, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 166],
+ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 22, 47, 57],
+ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 40, 0, 173, 3],
+ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 94, 90, 136, 38],
+ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 96, 104, 7, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0, 0, 78, 27, 0, 148, 138, 81, 44, 119],
+ [0, 0, 0, 0, 0, 0, 0, 0, 154, 0, 0, 58, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0, 0, 167, 99, 77, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0, 130, 0, 0, 0, 48, 0, 116, 0, 0],
+ [0, 0, 0, 0, 0, 172, 70, 0, 0, 8, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 60, 0, 0, 0, 0, 0, 67, 0, 0, 24, 0, 0],
+ [0, 0, 0, 29, 0, 15, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 135, 0, 0, 0, 0, 16, 162, 46, 86, 51, 80, 13, 111],
+ [0, 0, 12, 0, 0, 43, 163, 92, 35, 82, 0, 71, 175, 14, 83, 131],
+ [0, 0, 56, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 133, 0, 87, 0, 150, 0, 178, 75, 0, 0, 0, 0, 0, 0, 0],
+ [20, 5, 61, 0, 120, 0, 0, 0, 32, 156, 176, 1, 126, 91, 69, 88],
+ [26, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [62, 0, 0, 0, 137, 0, 0, 0, 0, 0, 0, 76, 0, 0, 0, 0],
+ [66, 0, 0, 152, 72, 25, 134, 59, 102, 124, 165, 101, 0, 64, 0, 68],
+ [73, 33, 0, 181, 125, 85, 122, 139, 123, 100, 123, 169, 155, 54, 4, 18],
+ [74, 151, 36, 84, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [106, 135, 0, 112, 177, 107, 180, 168, 0, 0, 0, 0, 0, 0, 157, 142],
+ [109, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [113, 50, 108, 0, 0, 0, 0, 0, 0, 0, 174, 182, 182, 114, 10, 0],
+ [115, 0, 0, 0, 141, 5, 0, 49, 145, 34, 31, 0, 0, 0, 0, 0],
+ [118, 0, 42, 144, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [143, 95, 37, 121, 0, 0, 0, 0, 0, 0, 0, 0, 0, 45, 0, 0],
+ [161, 0, 103, 0, 160, 11, 30, 0, 0, 0, 0, 93, 0, 0, 0, 0],
+ [164, 55, 155, 53, 127, 52, 2, 28, 117, 21, 128, 19, 110, 147, 129, 9],
+ [170, 41, 153, 6, 0, 0, 159, 39, 158, 1, 105, 0, 65, 0, 0, 0],
+ [171, 149, 132, 17, 98, 89, 146, 23, 140, 0, 0, 63, 127, 97, 0, 0],
+ [179, 182, 0, 0, 182, 182, 182, 79, 0, 0, 0, 0, 0, 0, 0, 0],
];
- static BITSET: [u64; 180] = [
- 0, 1, 3, 4, 8, 13, 15, 28, 64, 176, 191, 1016, 1792, 2047, 4080, 4096, 7680, 8192, 8193,
- 16192, 30720, 32704, 32768, 131008, 262016, 2097152, 2359296, 6030336, 8323072, 10682368,
- 33554432, 58719232, 159383552, 234881024, 243138688, 402587711, 536805376, 536879204,
- 546307648, 805306369, 1073741824, 1073741916, 2113929216, 3221225472, 3758096384,
- 4026531840, 4160749568, 4294934528, 4294967296, 4512022528, 5368709120, 17179869183,
- 47244640256, 51539615774, 51539619904, 51543810078, 51545914817, 66035122176, 412316860416,
- 412316862532, 412316893184, 1030792151040, 2199023255648, 8641373536127, 8763880767488,
- 17303886364672, 36421322670080, 65128884076547, 65970697670631, 68168642985984,
- 70093866270720, 70368739983360, 136957967529984, 140737488355328, 263882790666240,
- 281470547525648, 281470682333183, 281474976710655, 281474976710656, 281474976710657,
- 281479271675905, 562675075514368, 562949953355776, 563001509683710, 844424930131968,
- 985162418487296, 1023920203366400, 2251799813685248, 3377699721314304, 4494803534348292,
- 4503599627370678, 6755399441055744, 7881299349733376, 8444256867844096, 8725724278030336,
- 8760633772212225, 8989057312882695, 9042383626829823, 9851624185018758, 24822575045541890,
- 28848986089586688, 30958948903026688, 35747322042253312, 53805701016846336,
- 58529202969772032, 72066390130950143, 112767012056334336, 143833713099145216,
- 189151184399892480, 216172782113783808, 220713756545974272, 288301294651703296,
- 302022650010533887, 504262420777140224, 558446353793941504, 572520102629474304,
- 593978171557150752, 1008806350890729472, 1009933895770046464, 1152921504606846976,
- 1152921504606846978, 1152921504606846982, 1153202979583561736, 1441151880758558727,
- 1715871458028158991, 1729382256910270467, 2301902359539744768, 2305843009196908767,
- 2305843009213693952, 2612078987781865472, 2771965570646540291, 3458764513820540928,
- 3731232291276455943, 4539628424389459968, 4589168020290535424, 4611404543450677248,
- 4611686018494513280, 4611686069967003678, 4671217976001691648, 6917775322003857411,
- 7421334051581067264, 8070450532247928832, 8788774672813524990, 9205357638345293827,
- 9222809086901354496, 9223091111633879040, 9223372036854775808, 9223372036854775935,
- 9223512774343131136, 9224216320050987008, 9224497932466651184, 9653465801268658176,
- 9727775195120332910, 10376293541461622786, 11526998316797657088, 11529215046068469760,
- 12103423998558208000, 12699025049277956096, 13005832773892571136, 13798747783286489088,
- 13832665517980123136, 13835058055282032640, 13835058055282163729, 13951307220663664640,
- 17870283321406128128, 17906312118425092095, 18158513697557839871, 18158513749097456062,
- 18374686479671623680, 18374686479671623682, 18444496122186563584, 18445618173802708992,
- 18446462598732840960, 18446462598733004800, 18446726481523507200, 18446744069414584320,
- 18446744069414584322, 18446744073575333888, 18446744073709027328, 18446744073709551615,
+ static BITSET: [u64; 183] = [
+ 0, 1, 2, 3, 4, 8, 13, 15, 28, 64, 176, 191, 1016, 1792, 2047, 4080, 4096, 8192, 8193,
+ 16192, 30720, 32704, 32768, 40448, 131008, 262016, 2097152, 2359296, 6030336, 8323072,
+ 10682368, 58719232, 159383552, 234881024, 243138688, 402587711, 536805376, 536879204,
+ 546307648, 805306369, 1073741824, 1073741916, 2113929216, 2181038080, 3221225472,
+ 3758096384, 4026531840, 4294934528, 4294967296, 4512022528, 5368709120, 17179869183,
+ 51539615774, 51539619904, 51545907230, 51545914817, 66035122176, 115964116992, 412316860416,
+ 412316893184, 1030792151040, 2199023255648, 8641373536127, 8763880767488, 15397323538432,
+ 17303886364672, 18004502906948, 26388279066624, 36421322670080, 65128884076547,
+ 65970697670631, 68168642985984, 70093866270720, 70368739983360, 136957967529984,
+ 140737488355328, 263882790666240, 281470547525648, 281470682333183, 281474976710655,
+ 281474976710656, 281474976710657, 281479271675905, 562675075514368, 562949953355776,
+ 563001509683710, 844424930131968, 985162418487296, 1023920203366400, 2251799813685248,
+ 3377699721314304, 4494803534348292, 4503599627370678, 6755399441055744, 7881299349733376,
+ 8444256867844096, 8725724278030336, 8760633772212225, 8989057312882695, 9042383626829823,
+ 9851624185018758, 24822575045541890, 28848986089586688, 30958948903026688,
+ 35747322042253312, 53805701016846336, 58529202969772032, 72066390130950143,
+ 112767012056334336, 143833713099145216, 189151184399892480, 216172782113783808,
+ 220713756545974272, 288301294651703296, 302022650010533887, 504262420777140224,
+ 558446353793941504, 572520102629474304, 593978171557150752, 1008806350890729472,
+ 1009933895770046464, 1152921504606846976, 1152921504606846978, 1152921504606846982,
+ 1153202979583561736, 1441151880758558727, 1715871458028158991, 1729382256910270467,
+ 2301902359539744768, 2305843009196908767, 2305843009213693952, 2612078987781865472,
+ 2771965570646540291, 3458764513820540928, 3731232291276455943, 4539628424389459968,
+ 4589168020290535424, 4611404543450677248, 4611686018494513280, 4611686069967003678,
+ 4671217976001691648, 6341068275337658368, 6917775322003857411, 7421334051581067264,
+ 8070450532247928832, 8788774672813524990, 9205357638345293827, 9222809086901354496,
+ 9223372036854775808, 9223372036854775935, 9223512774343131136, 9224216320050987008,
+ 9224497932466651184, 9653465801268658176, 9727775195120332910, 10376293541461622786,
+ 11526998316797657088, 11529215046068469760, 12103423998558208000, 12699025049277956096,
+ 13005832773892571136, 13798747783286489088, 13832665517980123136, 13835058055282032640,
+ 13835058055282163729, 13951307220663664640, 17870283321406128128, 17906312118425092095,
+ 18158513697557839871, 18158513749097456062, 18374686479671623680, 18374686479671623682,
+ 18444496122186563584, 18445618173802708992, 18446462598732840960, 18446462598733004800,
+ 18446463148488654848, 18446726481523507200, 18446744069414584320, 18446744069414584322,
+ 18446744073575333888, 18446744073709027328, 18446744073709551615,
];
pub fn lookup(c: char) -> bool {
static BITSET: [u64; 63] = [
0, 15, 24, 511, 1023, 4087, 65535, 16253055, 134217726, 536805376, 1073741823, 4294967295,
133143986179, 4398046511103, 36009005809663, 70368744177663, 2251799813685247,
- 3509778554814463, 144115188074807295, 297241973452963840, 504403158265495676,
+ 3509778554814463, 144115188074807295, 297241973452963840, 531424756029720572,
576460743713488896, 576460743847706622, 1152921504591118335, 2295745090394464220,
4557642822898941951, 4611686017001275199, 6908521828386340863, 8935141660164089791,
9223934986808197120, 13605092999309557792, 16717361816799216127, 16717361816799223999,
17005555242810474495, 17446871633794956420, 17870283321271910397, 17870283321406128127,
18410715276682199039, 18428729675200069631, 18428729675200069632, 18437736874452713471,
- 18446462598732840959, 18446462598732840960, 18446463698110251007, 18446466996779287551,
+ 18446462598732840959, 18446462598732840960, 18446464797621878783, 18446466996779287551,
18446603336221163519, 18446603336221196287, 18446741874686295551, 18446743249075830783,
18446744056529672000, 18446744056529682432, 18446744069414584320, 18446744069414601696,
18446744069422972927, 18446744070475743231, 18446744071562067967, 18446744073707454463,
static BITSET_LAST_CHUNK_MAP: (u16, u8) = (896, 30);
static BITSET_CHUNKS_MAP: [u8; 123] = [
4, 15, 21, 27, 25, 3, 18, 23, 17, 0, 0, 14, 22, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 6, 19, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 6, 20, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 2, 7, 10, 0, 8, 12, 29, 28, 24, 26, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 5, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16, 0, 0, 0, 0,
- 11, 0, 9, 0, 20, 0, 13,
+ 11, 0, 9, 0, 19, 0, 13,
];
static BITSET_INDEX_CHUNKS: [[u8; 16]; 31] = [
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 15, 18, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 31, 0, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 73, 70, 102, 29],
- [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 138, 62, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 75, 83, 0, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, 0, 103, 35, 66, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, 61, 0, 0, 0, 0, 0, 35, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, 117, 0, 0, 45, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, 130, 78, 60, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 99, 0, 0, 0, 37, 0, 90, 0, 0],
- [0, 0, 0, 0, 0, 129, 54, 0, 0, 3, 0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 47, 0, 0, 0, 0, 0, 0, 0, 0, 16, 0, 0],
- [0, 0, 0, 19, 0, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 0, 0, 67, 0, 114, 0, 137, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 0, 7, 0, 0, 0, 125, 5, 24, 63, 0, 55, 135, 9, 64, 100],
- [0, 0, 33, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [10, 0, 0, 65, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [12, 0, 48, 0, 92, 0, 0, 0, 25, 119, 113, 0, 96, 71, 53, 68],
- [46, 0, 0, 116, 57, 17, 101, 44, 81, 94, 127, 80, 0, 0, 0, 52],
- [49, 0, 0, 0, 83, 0, 0, 0, 0, 0, 0, 58, 0, 0, 0, 0],
- [56, 26, 0, 136, 95, 43, 107, 105, 93, 79, 93, 132, 128, 42, 104, 20],
- [59, 0, 23, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [85, 0, 0, 87, 0, 0, 0, 131, 0, 0, 0, 0, 0, 0, 0, 0],
- [89, 0, 0, 0, 0, 0, 0, 38, 110, 27, 22, 0, 0, 0, 0, 0],
- [109, 74, 28, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 36, 0, 0],
- [124, 0, 82, 0, 123, 6, 21, 0, 0, 0, 0, 72, 0, 0, 0, 0],
- [126, 40, 118, 39, 108, 41, 0, 34, 91, 14, 97, 13, 86, 112, 98, 4],
- [133, 32, 120, 2, 0, 0, 122, 30, 121, 1, 84, 0, 51, 0, 0, 0],
- [134, 115, 88, 0, 77, 69, 111, 11, 106, 0, 0, 50, 108, 76, 0, 0],
- [137, 138, 0, 0, 138, 138, 138, 62, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16, 20, 46],
+ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 33, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 77, 74, 106, 31],
+ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 143, 66, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 79, 87, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0, 0, 0, 107, 37, 70, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0, 0, 65, 0, 0, 0, 0, 0, 37, 0],
+ [0, 0, 0, 0, 0, 0, 0, 0, 121, 0, 0, 48, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0, 0, 134, 82, 64, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0, 103, 0, 0, 0, 39, 0, 94, 0, 0],
+ [0, 0, 0, 0, 0, 133, 58, 0, 0, 5, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 49, 0, 0, 0, 0, 0, 55, 0, 0, 18, 0, 0],
+ [0, 0, 0, 21, 0, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 71, 0, 118, 0, 142, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 9, 0, 0, 0, 129, 7, 26, 67, 0, 59, 140, 11, 68, 104],
+ [0, 0, 35, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [12, 0, 0, 69, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [13, 0, 50, 0, 96, 0, 0, 0, 27, 123, 139, 1, 100, 75, 57, 72],
+ [51, 0, 0, 0, 87, 0, 0, 0, 0, 0, 0, 62, 0, 0, 0, 0],
+ [54, 0, 0, 120, 61, 19, 105, 47, 85, 98, 131, 84, 0, 0, 0, 56],
+ [60, 28, 0, 141, 99, 45, 111, 109, 97, 83, 97, 136, 132, 44, 108, 22],
+ [63, 0, 25, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [89, 0, 0, 91, 0, 0, 0, 135, 0, 0, 0, 0, 0, 0, 0, 0],
+ [93, 0, 0, 0, 113, 3, 0, 40, 115, 29, 24, 0, 0, 0, 0, 0],
+ [114, 78, 30, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 38, 0, 0],
+ [128, 0, 86, 0, 127, 8, 23, 0, 0, 0, 0, 76, 0, 0, 0, 0],
+ [130, 42, 122, 41, 112, 43, 2, 36, 95, 15, 101, 14, 90, 117, 102, 6],
+ [137, 34, 124, 4, 0, 0, 126, 32, 125, 1, 88, 0, 53, 0, 0, 0],
+ [138, 119, 92, 0, 81, 73, 116, 17, 110, 0, 0, 52, 112, 80, 0, 0],
+ [142, 143, 0, 0, 143, 143, 143, 66, 0, 0, 0, 0, 0, 0, 0, 0],
];
- static BITSET: [u64; 139] = [
- 0, 1, 13, 28, 64, 182, 191, 1016, 2032, 2047, 4096, 7680, 14336, 16128, 32640, 32768,
- 131008, 262016, 491520, 8323072, 8396801, 10682368, 58719232, 100663296, 134152192,
+ static BITSET: [u64; 144] = [
+ 0, 1, 2, 8, 13, 28, 64, 182, 191, 1016, 2032, 2047, 4096, 14336, 16128, 32640, 32768,
+ 40448, 131008, 262016, 491520, 8323072, 8396801, 10682368, 58719232, 100663296, 134152192,
159383552, 234881024, 243138688, 536879204, 537919040, 805306369, 1073741824, 1073741916,
1610612736, 2153546752, 3221225472, 3758096384, 4294967296, 4512022528, 51545911364,
- 51545914817, 51548004382, 51552198686, 51556262398, 137438953472, 412316860416,
- 412316862532, 1030792151040, 2199023255648, 8641373536127, 8763880767488, 17303886364672,
- 36421322670080, 65128884076547, 65970697670631, 67755789254656, 69200441769984,
- 70093866270720, 263882790666240, 277076930199552, 281470547525648, 281470681808895,
- 281474976710655, 281479271675904, 562675075514368, 562949953355776, 844424930131968,
- 985162418487296, 1023920203366400, 2251799813685248, 3377699721314304, 4494803534348292,
- 6755399441055744, 7881299349733376, 8444256867844096, 8725724278030336, 8760633780600833,
- 8989057312882695, 9042383626829823, 9851624185018758, 18067175067615234, 28848986089586688,
- 30958948903026688, 35747322042253312, 53805701016846336, 58529202969772032,
- 189151184399892480, 220713756545974272, 466122561432846339, 504262420777140224,
- 558446353793941504, 572520102629474304, 1009933895770046464, 1152921504606846982,
- 1152921504606851080, 1441151880758558727, 1724878657282899983, 2301902359539744768,
- 2305843009196908767, 2305843009213693952, 2310337812748042240, 3731232291276455943,
- 4589168020290535424, 4609293481125347328, 4611686018427387908, 4611686069975392286,
- 4671217976001691648, 5764607523034234882, 6341068275337658371, 7421334051581067264,
- 8788774672813524990, 9205357638345293827, 9222809086901354496, 9223090561878065152,
- 9223372036854775808, 9223372036854775935, 9224497932466651184, 9727775195120332910,
- 10376293541461622786, 11526998316797657088, 11959590285459062784, 12103423998558208000,
- 12699165786766311424, 13005832773892571136, 13798747783286489088, 13835058055282032640,
- 13835058055282163729, 13951307220663664640, 14987979559889010690, 17872468738205286400,
- 17906312118425092095, 18158513697557839871, 18158513749097456062, 18374686479671623680,
- 18374686479671623682, 18446462598732972032, 18446744056529158144, 18446744069414584320,
- 18446744073709551615,
+ 51545914817, 51548004382, 51554295838, 51556262398, 68719476736, 137438953472, 412316860416,
+ 1030792151040, 2199023255648, 8641373536127, 8763880767488, 17303886364672, 18004502906948,
+ 26388279066624, 36421322670080, 65128884076547, 65970697670631, 67755789254656,
+ 69200441769984, 70093866270720, 263882790666240, 277076930199552, 281470547525648,
+ 281470681808895, 281474976710655, 281479271675904, 562675075514368, 562949953355776,
+ 844424930131968, 985162418487296, 1023920203366400, 2251799813685248, 3377699721314304,
+ 4494803534348292, 6755399441055744, 7881299349733376, 8444256867844096, 8725724278030336,
+ 8760633780600833, 8989057312882695, 9042383626829823, 9851624185018758, 18067175067615234,
+ 28848986089586688, 30958948903026688, 35747322042253312, 53805701016846336,
+ 58529202969772032, 189151184399892480, 220713756545974272, 466122561432846339,
+ 504262420777140224, 558446353793941504, 572520102629474304, 1009933895770046464,
+ 1152921504606846982, 1152921504606851080, 1441151880758558727, 1724878657282899983,
+ 2301902359539744768, 2305843009196908767, 2305843009213693952, 2310337812748042240,
+ 3731232291276455943, 4589168020290535424, 4609293481125347328, 4611686018427387908,
+ 4611686069975392286, 4671217976001691648, 5764607523034234882, 6341068275337658371,
+ 6341349750314369024, 7421334051581067264, 8788774672813524990, 9205357638345293827,
+ 9222809086901354496, 9223372036854775808, 9223372036854775935, 9224497932466651184,
+ 9727775195120332910, 10376293541461622786, 11526998316797657088, 11959590285459062784,
+ 12103423998558208000, 12699165786766311424, 13005832773892571136, 13798747783286489088,
+ 13835058055282032640, 13835058055282163729, 13951307220663664640, 14987979559889010690,
+ 17872468738205286400, 17906312118425092095, 18158513697557839871, 18158513749097456062,
+ 18374686479671623680, 18374686479671623682, 18446462598732840960, 18446462598732972032,
+ 18446744056529158144, 18446744069414584320, 18446744073709551615,
];
pub fn lookup(c: char) -> bool {
133143986179, 274877905920, 1099509514240, 4398046445568, 17592185782272, 36009005809663,
46912496118442, 187649984473770, 281474972516352, 2251799813685247, 2339875276368554,
4503599560261632, 61925590106570972, 71777214282006783, 72057592964186127,
- 144115188074807295, 297241973452963840, 504403158265495560, 576460743713488896,
+ 144115188074807295, 297241973452963840, 522417556774978824, 576460743713488896,
1152921487426978047, 1152921504590069760, 1814856824841797631, 3607524039012697088,
4362299189061746720, 4539628424389459968, 4601013482110844927, 4611405638684049471,
4674456033467236607, 6172933889249159850, 9223934986808197120, 10663022717737544362,
12298110845996498944, 15324248332066007893, 16596095761559859497, 16717361816799215616,
16987577794709946364, 17293822586148356092, 18158513701852807104, 18410715274543104000,
18428729675466407935, 18446462598732840960, 18446462598732858304, 18446462598737002495,
- 18446463698110251007, 18446673704966422527, 18446726481523572736, 18446739675663105535,
+ 18446464797621878783, 18446673704966422527, 18446726481523572736, 18446739675663105535,
18446739675663106031, 18446742974197923840, 18446744056529682432, 18446744069414584320,
18446744073709529733, 18446744073709551615,
];
#[rustfmt::skip]
pub mod n {
- static BITSET_LAST_CHUNK_MAP: (u16, u8) = (124, 11);
- static BITSET_CHUNKS_MAP: [u8; 124] = [
- 30, 7, 10, 24, 18, 3, 28, 20, 23, 27, 0, 15, 31, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 8, 29, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 2, 12, 17, 25, 16, 22, 19, 14, 21, 0, 32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 6, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
- 4, 1, 0, 0, 9, 0, 13, 26,
+ static BITSET_LAST_CHUNK_MAP: (u16, u8) = (127, 0);
+ static BITSET_CHUNKS_MAP: [u8; 127] = [
+ 31, 8, 11, 25, 19, 4, 29, 21, 24, 28, 0, 16, 32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 9, 30, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 3, 13, 18, 26, 17, 23, 20, 15, 22, 0, 33, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 7, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 5, 2, 0, 0, 10, 0, 14, 27, 12, 0, 1,
];
- static BITSET_INDEX_CHUNKS: [[u8; 16]; 33] = [
+ static BITSET_INDEX_CHUNKS: [[u8; 16]; 34] = [
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 71],
- [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 14, 0, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32, 0, 0, 0, 48],
- [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 35, 0, 42, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 13, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, 0, 24, 0, 0, 0, 21, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, 0, 24, 0, 46, 0, 0, 0, 2],
- [0, 0, 0, 0, 0, 0, 0, 0, 24, 0, 0, 30, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 46, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 30, 0, 44, 0, 30, 0, 30, 0, 40, 0, 33],
- [0, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 36, 43, 4, 0, 0, 0, 0, 51, 22, 3, 0, 12],
- [0, 0, 0, 6, 0, 14, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 0, 0, 34, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 0, 0, 53, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 0, 0, 61, 46, 0, 0, 0, 0, 59, 0, 0, 23, 9, 0, 0],
- [0, 0, 24, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 2, 14, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 52, 0, 0],
- [0, 14, 0, 14, 0, 0, 0, 0, 0, 14, 0, 2, 50, 0, 0, 0],
- [0, 15, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 25, 0, 0, 0, 14, 24, 0, 0, 0, 0, 0, 0, 0, 0, 10],
- [0, 31, 0, 46, 64, 0, 0, 38, 0, 0, 0, 46, 0, 0, 0, 0],
- [0, 45, 2, 0, 0, 70, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 58, 0, 30, 0, 41, 0, 30, 0, 14, 0, 14, 35, 0, 0, 0],
- [0, 62, 29, 60, 17, 0, 54, 69, 0, 56, 19, 27, 0, 63, 28, 0],
- [0, 65, 37, 0, 55, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 68, 18, 67, 0, 0, 0, 0, 0, 0, 0, 0, 0, 64, 8, 0],
- [14, 0, 0, 0, 0, 7, 0, 16, 0, 0, 15, 0, 0, 14, 46, 0],
- [39, 0, 0, 14, 2, 0, 0, 47, 0, 14, 0, 0, 0, 0, 0, 46],
- [46, 0, 57, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [49, 0, 0, 0, 0, 0, 11, 0, 24, 20, 66, 0, 0, 0, 0, 0],
- [72, 26, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 47],
+ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 72],
+ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 15, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 33, 0, 0, 0, 49],
+ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 36, 0, 43, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 14, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0, 0, 0, 25, 0, 0, 0, 22, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0, 0, 0, 25, 0, 47, 0, 0, 0, 2],
+ [0, 0, 0, 0, 0, 0, 0, 0, 25, 0, 0, 31, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 47, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 31, 0, 45, 0, 31, 0, 31, 0, 41, 0, 34],
+ [0, 0, 0, 0, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 37, 44, 4, 0, 0, 0, 0, 52, 23, 3, 0, 13],
+ [0, 0, 0, 7, 0, 15, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 35, 0, 15, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 54, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 62, 47, 0, 0, 0, 0, 60, 0, 0, 24, 10, 0, 5],
+ [0, 0, 25, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 2, 15, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 53, 0, 0],
+ [0, 15, 0, 15, 0, 0, 0, 0, 0, 15, 0, 2, 51, 0, 0, 0],
+ [0, 16, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 26, 0, 0, 0, 15, 25, 0, 0, 0, 0, 0, 0, 0, 0, 11],
+ [0, 32, 0, 47, 65, 0, 0, 39, 0, 0, 0, 47, 0, 0, 0, 0],
+ [0, 46, 2, 0, 0, 71, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 59, 0, 31, 0, 42, 0, 31, 0, 15, 0, 15, 36, 0, 0, 0],
+ [0, 63, 30, 61, 18, 0, 55, 70, 0, 57, 20, 28, 0, 64, 29, 0],
+ [0, 66, 38, 0, 56, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 69, 19, 68, 0, 0, 0, 0, 0, 0, 0, 0, 0, 65, 9, 0],
+ [15, 0, 0, 0, 0, 8, 0, 17, 0, 0, 16, 0, 0, 15, 47, 0],
+ [40, 0, 0, 15, 2, 0, 0, 48, 0, 15, 0, 0, 0, 0, 0, 47],
+ [47, 0, 58, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [50, 0, 0, 0, 0, 0, 12, 0, 25, 21, 67, 0, 0, 0, 0, 0],
+ [73, 27, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
];
- static BITSET: [u64; 73] = [
- 0, 999, 1023, 1026, 3072, 8191, 65408, 65472, 1048575, 1966080, 2097151, 3932160, 4063232,
- 8388607, 67043328, 67044351, 134152192, 264241152, 268435455, 3758096384, 4294901504,
- 17112694784, 64424509440, 549218942976, 4393751543808, 35184372023296, 140737488355327,
- 272678883688448, 279275953455104, 280925220896768, 281200098803712, 281474976448512,
- 492581209243648, 2251524935778304, 2251795518717952, 4503595332403200, 4503599627370368,
- 8708132091985919, 9007190731849728, 17732923532771328, 71212894229889024,
+ static BITSET: [u64; 74] = [
+ 0, 999, 1023, 1026, 3072, 4064, 8191, 65408, 65472, 1048575, 1966080, 2097151, 3932160,
+ 4063232, 8388607, 67043328, 67044351, 134152192, 264241152, 268435455, 3758096384,
+ 4294901504, 17112694784, 64424509440, 549218942976, 4393751543808, 35184372023296,
+ 140737488355327, 272678883688448, 279275953455104, 280925220896768, 281200098803712,
+ 281474976448512, 492581209243648, 2251524935778304, 2251795518717952, 4503595332403200,
+ 4503599627370368, 8708132091985919, 9007190731849728, 17732923532771328, 71212894229889024,
144114915328655360, 144115183780888576, 144115188075855871, 284007976623144960,
284008251501051904, 287948901175001088, 287948901242044416, 287953294926544896,
504407547722072192, 1152640029630136320, 1152921496016912384, 2305840810190438400,
static BITSET_INDEX_CHUNKS: [[u8; 16]; 17] = [
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 33, 0, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, 0, 13, 0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, 0, 19, 10, 0, 38, 46, 44, 2],
- [0, 0, 0, 0, 14, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 51, 24, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 60, 62, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 0, 27, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 0, 54, 0, 0, 0, 0, 0, 43, 43, 40, 43, 56, 23, 34, 35],
- [0, 0, 57, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0, 0, 0, 12, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0, 0, 0, 18, 9, 0, 38, 46, 44, 28],
+ [0, 0, 0, 0, 13, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 51, 23, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 60, 62, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 26, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 54, 0, 0, 0, 0, 0, 43, 43, 40, 43, 56, 22, 34, 35],
+ [0, 0, 57, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 66, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 0, 66, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 66, 30],
- [0, 11, 0, 12, 50, 37, 36, 45, 47, 6, 0, 0, 0, 49, 18, 53],
- [15, 0, 60, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [22, 52, 43, 26, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [25, 39, 42, 41, 59, 9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [58, 65, 29, 17, 48, 63, 31, 20, 55, 61, 64, 32, 28, 21, 16, 4],
+ [0, 0, 66, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 66, 30],
+ [0, 10, 0, 11, 50, 37, 36, 45, 47, 5, 0, 0, 0, 49, 17, 53],
+ [14, 0, 60, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [21, 52, 43, 25, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [24, 39, 42, 41, 59, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [58, 65, 29, 16, 48, 63, 31, 19, 55, 61, 64, 32, 27, 20, 15, 3],
];
static BITSET: [u64; 67] = [
- 0, 8, 116, 1023, 1024, 8383, 21882, 65535, 1048575, 8388607, 89478485, 134217726,
- 2139095039, 4294967295, 17179869183, 1099511627775, 2199023190016, 4398046445568,
- 17575006099264, 23456248059221, 70368743129088, 140737484161024, 140737488355327,
- 280378317225728, 281470681743392, 281474976710655, 1169903278445909, 2251799813685247,
- 9007198986305536, 17977448100528131, 18014398509481983, 288230371856744511,
+ 0, 8, 1023, 1024, 8383, 21882, 65535, 1048575, 8388607, 89478485, 134217726, 2139095039,
+ 4294967295, 17179869183, 1099511627775, 2199023190016, 4398046445568, 17575006099264,
+ 23456248059221, 70368743129088, 140737484161024, 140737488355327, 280378317225728,
+ 281470681743392, 281474976710655, 1169903278445909, 2251799813685247, 9007198986305536,
+ 9007199254741748, 17977448100528131, 18014398509481983, 288230371856744511,
576460735123554305, 576460743713488896, 1080863910568919040, 1080897995681042176,
1274187559846268630, 3122495741643543722, 6148633210533183488, 6148914689804861440,
6148914690880001365, 6148914691236506283, 6148914691236516865, 6148914691236517205,
('\u{a7ba}', ['\u{a7bb}', '\u{0}', '\u{0}']), ('\u{a7bc}', ['\u{a7bd}', '\u{0}', '\u{0}']),
('\u{a7be}', ['\u{a7bf}', '\u{0}', '\u{0}']), ('\u{a7c2}', ['\u{a7c3}', '\u{0}', '\u{0}']),
('\u{a7c4}', ['\u{a794}', '\u{0}', '\u{0}']), ('\u{a7c5}', ['\u{282}', '\u{0}', '\u{0}']),
- ('\u{a7c6}', ['\u{1d8e}', '\u{0}', '\u{0}']), ('\u{ff21}', ['\u{ff41}', '\u{0}', '\u{0}']),
- ('\u{ff22}', ['\u{ff42}', '\u{0}', '\u{0}']), ('\u{ff23}', ['\u{ff43}', '\u{0}', '\u{0}']),
- ('\u{ff24}', ['\u{ff44}', '\u{0}', '\u{0}']), ('\u{ff25}', ['\u{ff45}', '\u{0}', '\u{0}']),
- ('\u{ff26}', ['\u{ff46}', '\u{0}', '\u{0}']), ('\u{ff27}', ['\u{ff47}', '\u{0}', '\u{0}']),
- ('\u{ff28}', ['\u{ff48}', '\u{0}', '\u{0}']), ('\u{ff29}', ['\u{ff49}', '\u{0}', '\u{0}']),
- ('\u{ff2a}', ['\u{ff4a}', '\u{0}', '\u{0}']), ('\u{ff2b}', ['\u{ff4b}', '\u{0}', '\u{0}']),
- ('\u{ff2c}', ['\u{ff4c}', '\u{0}', '\u{0}']), ('\u{ff2d}', ['\u{ff4d}', '\u{0}', '\u{0}']),
- ('\u{ff2e}', ['\u{ff4e}', '\u{0}', '\u{0}']), ('\u{ff2f}', ['\u{ff4f}', '\u{0}', '\u{0}']),
- ('\u{ff30}', ['\u{ff50}', '\u{0}', '\u{0}']), ('\u{ff31}', ['\u{ff51}', '\u{0}', '\u{0}']),
- ('\u{ff32}', ['\u{ff52}', '\u{0}', '\u{0}']), ('\u{ff33}', ['\u{ff53}', '\u{0}', '\u{0}']),
- ('\u{ff34}', ['\u{ff54}', '\u{0}', '\u{0}']), ('\u{ff35}', ['\u{ff55}', '\u{0}', '\u{0}']),
- ('\u{ff36}', ['\u{ff56}', '\u{0}', '\u{0}']), ('\u{ff37}', ['\u{ff57}', '\u{0}', '\u{0}']),
- ('\u{ff38}', ['\u{ff58}', '\u{0}', '\u{0}']), ('\u{ff39}', ['\u{ff59}', '\u{0}', '\u{0}']),
- ('\u{ff3a}', ['\u{ff5a}', '\u{0}', '\u{0}']),
+ ('\u{a7c6}', ['\u{1d8e}', '\u{0}', '\u{0}']), ('\u{a7c7}', ['\u{a7c8}', '\u{0}', '\u{0}']),
+ ('\u{a7c9}', ['\u{a7ca}', '\u{0}', '\u{0}']), ('\u{a7f5}', ['\u{a7f6}', '\u{0}', '\u{0}']),
+ ('\u{ff21}', ['\u{ff41}', '\u{0}', '\u{0}']), ('\u{ff22}', ['\u{ff42}', '\u{0}', '\u{0}']),
+ ('\u{ff23}', ['\u{ff43}', '\u{0}', '\u{0}']), ('\u{ff24}', ['\u{ff44}', '\u{0}', '\u{0}']),
+ ('\u{ff25}', ['\u{ff45}', '\u{0}', '\u{0}']), ('\u{ff26}', ['\u{ff46}', '\u{0}', '\u{0}']),
+ ('\u{ff27}', ['\u{ff47}', '\u{0}', '\u{0}']), ('\u{ff28}', ['\u{ff48}', '\u{0}', '\u{0}']),
+ ('\u{ff29}', ['\u{ff49}', '\u{0}', '\u{0}']), ('\u{ff2a}', ['\u{ff4a}', '\u{0}', '\u{0}']),
+ ('\u{ff2b}', ['\u{ff4b}', '\u{0}', '\u{0}']), ('\u{ff2c}', ['\u{ff4c}', '\u{0}', '\u{0}']),
+ ('\u{ff2d}', ['\u{ff4d}', '\u{0}', '\u{0}']), ('\u{ff2e}', ['\u{ff4e}', '\u{0}', '\u{0}']),
+ ('\u{ff2f}', ['\u{ff4f}', '\u{0}', '\u{0}']), ('\u{ff30}', ['\u{ff50}', '\u{0}', '\u{0}']),
+ ('\u{ff31}', ['\u{ff51}', '\u{0}', '\u{0}']), ('\u{ff32}', ['\u{ff52}', '\u{0}', '\u{0}']),
+ ('\u{ff33}', ['\u{ff53}', '\u{0}', '\u{0}']), ('\u{ff34}', ['\u{ff54}', '\u{0}', '\u{0}']),
+ ('\u{ff35}', ['\u{ff55}', '\u{0}', '\u{0}']), ('\u{ff36}', ['\u{ff56}', '\u{0}', '\u{0}']),
+ ('\u{ff37}', ['\u{ff57}', '\u{0}', '\u{0}']), ('\u{ff38}', ['\u{ff58}', '\u{0}', '\u{0}']),
+ ('\u{ff39}', ['\u{ff59}', '\u{0}', '\u{0}']), ('\u{ff3a}', ['\u{ff5a}', '\u{0}', '\u{0}']),
('\u{10400}', ['\u{10428}', '\u{0}', '\u{0}']),
('\u{10401}', ['\u{10429}', '\u{0}', '\u{0}']),
('\u{10402}', ['\u{1042a}', '\u{0}', '\u{0}']),
('\u{a7b7}', ['\u{a7b6}', '\u{0}', '\u{0}']), ('\u{a7b9}', ['\u{a7b8}', '\u{0}', '\u{0}']),
('\u{a7bb}', ['\u{a7ba}', '\u{0}', '\u{0}']), ('\u{a7bd}', ['\u{a7bc}', '\u{0}', '\u{0}']),
('\u{a7bf}', ['\u{a7be}', '\u{0}', '\u{0}']), ('\u{a7c3}', ['\u{a7c2}', '\u{0}', '\u{0}']),
- ('\u{ab53}', ['\u{a7b3}', '\u{0}', '\u{0}']), ('\u{ab70}', ['\u{13a0}', '\u{0}', '\u{0}']),
- ('\u{ab71}', ['\u{13a1}', '\u{0}', '\u{0}']), ('\u{ab72}', ['\u{13a2}', '\u{0}', '\u{0}']),
- ('\u{ab73}', ['\u{13a3}', '\u{0}', '\u{0}']), ('\u{ab74}', ['\u{13a4}', '\u{0}', '\u{0}']),
- ('\u{ab75}', ['\u{13a5}', '\u{0}', '\u{0}']), ('\u{ab76}', ['\u{13a6}', '\u{0}', '\u{0}']),
- ('\u{ab77}', ['\u{13a7}', '\u{0}', '\u{0}']), ('\u{ab78}', ['\u{13a8}', '\u{0}', '\u{0}']),
- ('\u{ab79}', ['\u{13a9}', '\u{0}', '\u{0}']), ('\u{ab7a}', ['\u{13aa}', '\u{0}', '\u{0}']),
- ('\u{ab7b}', ['\u{13ab}', '\u{0}', '\u{0}']), ('\u{ab7c}', ['\u{13ac}', '\u{0}', '\u{0}']),
- ('\u{ab7d}', ['\u{13ad}', '\u{0}', '\u{0}']), ('\u{ab7e}', ['\u{13ae}', '\u{0}', '\u{0}']),
- ('\u{ab7f}', ['\u{13af}', '\u{0}', '\u{0}']), ('\u{ab80}', ['\u{13b0}', '\u{0}', '\u{0}']),
- ('\u{ab81}', ['\u{13b1}', '\u{0}', '\u{0}']), ('\u{ab82}', ['\u{13b2}', '\u{0}', '\u{0}']),
- ('\u{ab83}', ['\u{13b3}', '\u{0}', '\u{0}']), ('\u{ab84}', ['\u{13b4}', '\u{0}', '\u{0}']),
- ('\u{ab85}', ['\u{13b5}', '\u{0}', '\u{0}']), ('\u{ab86}', ['\u{13b6}', '\u{0}', '\u{0}']),
- ('\u{ab87}', ['\u{13b7}', '\u{0}', '\u{0}']), ('\u{ab88}', ['\u{13b8}', '\u{0}', '\u{0}']),
- ('\u{ab89}', ['\u{13b9}', '\u{0}', '\u{0}']), ('\u{ab8a}', ['\u{13ba}', '\u{0}', '\u{0}']),
- ('\u{ab8b}', ['\u{13bb}', '\u{0}', '\u{0}']), ('\u{ab8c}', ['\u{13bc}', '\u{0}', '\u{0}']),
- ('\u{ab8d}', ['\u{13bd}', '\u{0}', '\u{0}']), ('\u{ab8e}', ['\u{13be}', '\u{0}', '\u{0}']),
- ('\u{ab8f}', ['\u{13bf}', '\u{0}', '\u{0}']), ('\u{ab90}', ['\u{13c0}', '\u{0}', '\u{0}']),
- ('\u{ab91}', ['\u{13c1}', '\u{0}', '\u{0}']), ('\u{ab92}', ['\u{13c2}', '\u{0}', '\u{0}']),
- ('\u{ab93}', ['\u{13c3}', '\u{0}', '\u{0}']), ('\u{ab94}', ['\u{13c4}', '\u{0}', '\u{0}']),
- ('\u{ab95}', ['\u{13c5}', '\u{0}', '\u{0}']), ('\u{ab96}', ['\u{13c6}', '\u{0}', '\u{0}']),
- ('\u{ab97}', ['\u{13c7}', '\u{0}', '\u{0}']), ('\u{ab98}', ['\u{13c8}', '\u{0}', '\u{0}']),
- ('\u{ab99}', ['\u{13c9}', '\u{0}', '\u{0}']), ('\u{ab9a}', ['\u{13ca}', '\u{0}', '\u{0}']),
- ('\u{ab9b}', ['\u{13cb}', '\u{0}', '\u{0}']), ('\u{ab9c}', ['\u{13cc}', '\u{0}', '\u{0}']),
- ('\u{ab9d}', ['\u{13cd}', '\u{0}', '\u{0}']), ('\u{ab9e}', ['\u{13ce}', '\u{0}', '\u{0}']),
- ('\u{ab9f}', ['\u{13cf}', '\u{0}', '\u{0}']), ('\u{aba0}', ['\u{13d0}', '\u{0}', '\u{0}']),
- ('\u{aba1}', ['\u{13d1}', '\u{0}', '\u{0}']), ('\u{aba2}', ['\u{13d2}', '\u{0}', '\u{0}']),
- ('\u{aba3}', ['\u{13d3}', '\u{0}', '\u{0}']), ('\u{aba4}', ['\u{13d4}', '\u{0}', '\u{0}']),
- ('\u{aba5}', ['\u{13d5}', '\u{0}', '\u{0}']), ('\u{aba6}', ['\u{13d6}', '\u{0}', '\u{0}']),
- ('\u{aba7}', ['\u{13d7}', '\u{0}', '\u{0}']), ('\u{aba8}', ['\u{13d8}', '\u{0}', '\u{0}']),
- ('\u{aba9}', ['\u{13d9}', '\u{0}', '\u{0}']), ('\u{abaa}', ['\u{13da}', '\u{0}', '\u{0}']),
- ('\u{abab}', ['\u{13db}', '\u{0}', '\u{0}']), ('\u{abac}', ['\u{13dc}', '\u{0}', '\u{0}']),
- ('\u{abad}', ['\u{13dd}', '\u{0}', '\u{0}']), ('\u{abae}', ['\u{13de}', '\u{0}', '\u{0}']),
- ('\u{abaf}', ['\u{13df}', '\u{0}', '\u{0}']), ('\u{abb0}', ['\u{13e0}', '\u{0}', '\u{0}']),
- ('\u{abb1}', ['\u{13e1}', '\u{0}', '\u{0}']), ('\u{abb2}', ['\u{13e2}', '\u{0}', '\u{0}']),
- ('\u{abb3}', ['\u{13e3}', '\u{0}', '\u{0}']), ('\u{abb4}', ['\u{13e4}', '\u{0}', '\u{0}']),
- ('\u{abb5}', ['\u{13e5}', '\u{0}', '\u{0}']), ('\u{abb6}', ['\u{13e6}', '\u{0}', '\u{0}']),
- ('\u{abb7}', ['\u{13e7}', '\u{0}', '\u{0}']), ('\u{abb8}', ['\u{13e8}', '\u{0}', '\u{0}']),
- ('\u{abb9}', ['\u{13e9}', '\u{0}', '\u{0}']), ('\u{abba}', ['\u{13ea}', '\u{0}', '\u{0}']),
- ('\u{abbb}', ['\u{13eb}', '\u{0}', '\u{0}']), ('\u{abbc}', ['\u{13ec}', '\u{0}', '\u{0}']),
- ('\u{abbd}', ['\u{13ed}', '\u{0}', '\u{0}']), ('\u{abbe}', ['\u{13ee}', '\u{0}', '\u{0}']),
- ('\u{abbf}', ['\u{13ef}', '\u{0}', '\u{0}']), ('\u{fb00}', ['F', 'F', '\u{0}']),
- ('\u{fb01}', ['F', 'I', '\u{0}']), ('\u{fb02}', ['F', 'L', '\u{0}']),
- ('\u{fb03}', ['F', 'F', 'I']), ('\u{fb04}', ['F', 'F', 'L']),
- ('\u{fb05}', ['S', 'T', '\u{0}']), ('\u{fb06}', ['S', 'T', '\u{0}']),
- ('\u{fb13}', ['\u{544}', '\u{546}', '\u{0}']),
+ ('\u{a7c8}', ['\u{a7c7}', '\u{0}', '\u{0}']), ('\u{a7ca}', ['\u{a7c9}', '\u{0}', '\u{0}']),
+ ('\u{a7f6}', ['\u{a7f5}', '\u{0}', '\u{0}']), ('\u{ab53}', ['\u{a7b3}', '\u{0}', '\u{0}']),
+ ('\u{ab70}', ['\u{13a0}', '\u{0}', '\u{0}']), ('\u{ab71}', ['\u{13a1}', '\u{0}', '\u{0}']),
+ ('\u{ab72}', ['\u{13a2}', '\u{0}', '\u{0}']), ('\u{ab73}', ['\u{13a3}', '\u{0}', '\u{0}']),
+ ('\u{ab74}', ['\u{13a4}', '\u{0}', '\u{0}']), ('\u{ab75}', ['\u{13a5}', '\u{0}', '\u{0}']),
+ ('\u{ab76}', ['\u{13a6}', '\u{0}', '\u{0}']), ('\u{ab77}', ['\u{13a7}', '\u{0}', '\u{0}']),
+ ('\u{ab78}', ['\u{13a8}', '\u{0}', '\u{0}']), ('\u{ab79}', ['\u{13a9}', '\u{0}', '\u{0}']),
+ ('\u{ab7a}', ['\u{13aa}', '\u{0}', '\u{0}']), ('\u{ab7b}', ['\u{13ab}', '\u{0}', '\u{0}']),
+ ('\u{ab7c}', ['\u{13ac}', '\u{0}', '\u{0}']), ('\u{ab7d}', ['\u{13ad}', '\u{0}', '\u{0}']),
+ ('\u{ab7e}', ['\u{13ae}', '\u{0}', '\u{0}']), ('\u{ab7f}', ['\u{13af}', '\u{0}', '\u{0}']),
+ ('\u{ab80}', ['\u{13b0}', '\u{0}', '\u{0}']), ('\u{ab81}', ['\u{13b1}', '\u{0}', '\u{0}']),
+ ('\u{ab82}', ['\u{13b2}', '\u{0}', '\u{0}']), ('\u{ab83}', ['\u{13b3}', '\u{0}', '\u{0}']),
+ ('\u{ab84}', ['\u{13b4}', '\u{0}', '\u{0}']), ('\u{ab85}', ['\u{13b5}', '\u{0}', '\u{0}']),
+ ('\u{ab86}', ['\u{13b6}', '\u{0}', '\u{0}']), ('\u{ab87}', ['\u{13b7}', '\u{0}', '\u{0}']),
+ ('\u{ab88}', ['\u{13b8}', '\u{0}', '\u{0}']), ('\u{ab89}', ['\u{13b9}', '\u{0}', '\u{0}']),
+ ('\u{ab8a}', ['\u{13ba}', '\u{0}', '\u{0}']), ('\u{ab8b}', ['\u{13bb}', '\u{0}', '\u{0}']),
+ ('\u{ab8c}', ['\u{13bc}', '\u{0}', '\u{0}']), ('\u{ab8d}', ['\u{13bd}', '\u{0}', '\u{0}']),
+ ('\u{ab8e}', ['\u{13be}', '\u{0}', '\u{0}']), ('\u{ab8f}', ['\u{13bf}', '\u{0}', '\u{0}']),
+ ('\u{ab90}', ['\u{13c0}', '\u{0}', '\u{0}']), ('\u{ab91}', ['\u{13c1}', '\u{0}', '\u{0}']),
+ ('\u{ab92}', ['\u{13c2}', '\u{0}', '\u{0}']), ('\u{ab93}', ['\u{13c3}', '\u{0}', '\u{0}']),
+ ('\u{ab94}', ['\u{13c4}', '\u{0}', '\u{0}']), ('\u{ab95}', ['\u{13c5}', '\u{0}', '\u{0}']),
+ ('\u{ab96}', ['\u{13c6}', '\u{0}', '\u{0}']), ('\u{ab97}', ['\u{13c7}', '\u{0}', '\u{0}']),
+ ('\u{ab98}', ['\u{13c8}', '\u{0}', '\u{0}']), ('\u{ab99}', ['\u{13c9}', '\u{0}', '\u{0}']),
+ ('\u{ab9a}', ['\u{13ca}', '\u{0}', '\u{0}']), ('\u{ab9b}', ['\u{13cb}', '\u{0}', '\u{0}']),
+ ('\u{ab9c}', ['\u{13cc}', '\u{0}', '\u{0}']), ('\u{ab9d}', ['\u{13cd}', '\u{0}', '\u{0}']),
+ ('\u{ab9e}', ['\u{13ce}', '\u{0}', '\u{0}']), ('\u{ab9f}', ['\u{13cf}', '\u{0}', '\u{0}']),
+ ('\u{aba0}', ['\u{13d0}', '\u{0}', '\u{0}']), ('\u{aba1}', ['\u{13d1}', '\u{0}', '\u{0}']),
+ ('\u{aba2}', ['\u{13d2}', '\u{0}', '\u{0}']), ('\u{aba3}', ['\u{13d3}', '\u{0}', '\u{0}']),
+ ('\u{aba4}', ['\u{13d4}', '\u{0}', '\u{0}']), ('\u{aba5}', ['\u{13d5}', '\u{0}', '\u{0}']),
+ ('\u{aba6}', ['\u{13d6}', '\u{0}', '\u{0}']), ('\u{aba7}', ['\u{13d7}', '\u{0}', '\u{0}']),
+ ('\u{aba8}', ['\u{13d8}', '\u{0}', '\u{0}']), ('\u{aba9}', ['\u{13d9}', '\u{0}', '\u{0}']),
+ ('\u{abaa}', ['\u{13da}', '\u{0}', '\u{0}']), ('\u{abab}', ['\u{13db}', '\u{0}', '\u{0}']),
+ ('\u{abac}', ['\u{13dc}', '\u{0}', '\u{0}']), ('\u{abad}', ['\u{13dd}', '\u{0}', '\u{0}']),
+ ('\u{abae}', ['\u{13de}', '\u{0}', '\u{0}']), ('\u{abaf}', ['\u{13df}', '\u{0}', '\u{0}']),
+ ('\u{abb0}', ['\u{13e0}', '\u{0}', '\u{0}']), ('\u{abb1}', ['\u{13e1}', '\u{0}', '\u{0}']),
+ ('\u{abb2}', ['\u{13e2}', '\u{0}', '\u{0}']), ('\u{abb3}', ['\u{13e3}', '\u{0}', '\u{0}']),
+ ('\u{abb4}', ['\u{13e4}', '\u{0}', '\u{0}']), ('\u{abb5}', ['\u{13e5}', '\u{0}', '\u{0}']),
+ ('\u{abb6}', ['\u{13e6}', '\u{0}', '\u{0}']), ('\u{abb7}', ['\u{13e7}', '\u{0}', '\u{0}']),
+ ('\u{abb8}', ['\u{13e8}', '\u{0}', '\u{0}']), ('\u{abb9}', ['\u{13e9}', '\u{0}', '\u{0}']),
+ ('\u{abba}', ['\u{13ea}', '\u{0}', '\u{0}']), ('\u{abbb}', ['\u{13eb}', '\u{0}', '\u{0}']),
+ ('\u{abbc}', ['\u{13ec}', '\u{0}', '\u{0}']), ('\u{abbd}', ['\u{13ed}', '\u{0}', '\u{0}']),
+ ('\u{abbe}', ['\u{13ee}', '\u{0}', '\u{0}']), ('\u{abbf}', ['\u{13ef}', '\u{0}', '\u{0}']),
+ ('\u{fb00}', ['F', 'F', '\u{0}']), ('\u{fb01}', ['F', 'I', '\u{0}']),
+ ('\u{fb02}', ['F', 'L', '\u{0}']), ('\u{fb03}', ['F', 'F', 'I']),
+ ('\u{fb04}', ['F', 'F', 'L']), ('\u{fb05}', ['S', 'T', '\u{0}']),
+ ('\u{fb06}', ['S', 'T', '\u{0}']), ('\u{fb13}', ['\u{544}', '\u{546}', '\u{0}']),
('\u{fb14}', ['\u{544}', '\u{535}', '\u{0}']),
('\u{fb15}', ['\u{544}', '\u{53b}', '\u{0}']),
('\u{fb16}', ['\u{54e}', '\u{546}', '\u{0}']),
skips: Vec<usize>,
/// Span of the last opening brace seen, used for error reporting
last_opening_brace: Option<InnerSpan>,
- /// Wether the source string is comes from `println!` as opposed to `format!` or `print!`
+ /// Whether the source string comes from `println!` as opposed to `format!` or `print!`
append_newline: bool,
}
#![feature(staged_api)]
#![feature(rustc_attrs)]
-// Rust's "try" function, but if we're aborting on panics we just call the
-// function as there's nothing else we need to do here.
+use core::any::Any;
+
#[rustc_std_internal_symbol]
-pub unsafe extern "C" fn __rust_maybe_catch_panic(
- f: fn(*mut u8),
- data: *mut u8,
- _data_ptr: *mut usize,
- _vtable_ptr: *mut usize,
-) -> u32 {
- f(data);
- 0
+pub unsafe extern "C" fn __rust_panic_cleanup(_: *mut u8) -> *mut (dyn Any + Send + 'static) {
+ unreachable!()
}
// "Leak" the payload and shim to the relevant abort on the platform in
// binaries, but it should never be called as we don't link in an unwinding
// runtime at all.
pub mod personalities {
- #[no_mangle]
+ #[rustc_std_internal_symbol]
#[cfg(not(any(
all(target_arch = "wasm32", not(target_os = "emscripten"),),
all(target_os = "windows", target_env = "gnu", target_arch = "x86_64",),
// On x86_64-pc-windows-gnu we use our own personality function that needs
// to return `ExceptionContinueSearch` as we're passing on all our frames.
- #[no_mangle]
+ #[rustc_std_internal_symbol]
#[cfg(all(target_os = "windows", target_env = "gnu", target_arch = "x86_64"))]
pub extern "C" fn rust_eh_personality(
_record: usize,
//
// Note that we don't execute landing pads, so this is never called and its
// body is empty.
- #[no_mangle]
- #[cfg(all(target_os = "windows", target_env = "gnu"))]
+ #[rustc_std_internal_symbol]
+ #[cfg(all(bootstrap, target_os = "windows", target_env = "gnu"))]
pub extern "C" fn rust_eh_unwind_resume() {}
// These two are called by our startup objects on i686-pc-windows-gnu, but
// they don't need to do anything so the bodies are nops.
- #[no_mangle]
+ #[rustc_std_internal_symbol]
#[cfg(all(target_os = "windows", target_env = "gnu", target_arch = "x86"))]
pub extern "C" fn rust_eh_register_frames() {}
- #[no_mangle]
+ #[rustc_std_internal_symbol]
#[cfg(all(target_os = "windows", target_env = "gnu", target_arch = "x86"))]
pub extern "C" fn rust_eh_unregister_frames() {}
}
use core::any::Any;
use core::intrinsics;
-pub fn payload() -> *mut u8 {
- core::ptr::null_mut()
-}
-
pub unsafe fn cleanup(_ptr: *mut u8) -> Box<dyn Any + Send> {
intrinsics::abort()
}
//! Emscripten's runtime always implements those APIs and does not
//! implement libunwind.
-#![allow(private_no_mangle_fns)]
-
use alloc::boxed::Box;
use core::any::Any;
use core::mem;
name: b"rust_panic\0".as_ptr(),
};
-pub fn payload() -> *mut u8 {
- ptr::null_mut()
-}
-
struct Exception {
// This needs to be an Option because the object's lifetime follows C++
// semantics: when catch_unwind moves the Box out of the exception it must
// still leave the exception object in a valid state because its destructor
- // is still going to be called by __cxa_end_catch..
+ // is still going to be called by __cxa_end_catch.
data: Option<Box<dyn Any + Send>>,
}
}
#[lang = "eh_personality"]
-#[no_mangle]
unsafe extern "C" fn rust_eh_personality(
version: c_int,
actions: uw::_Unwind_Action,
//!
//! Once stack has been unwound down to the handler frame level, unwinding stops
//! and the last personality routine transfers control to the catch block.
-//!
-//! ## `eh_personality` and `eh_unwind_resume`
-//!
-//! These language items are used by the compiler when generating unwind info.
-//! The first one is the personality routine described above. The second one
-//! allows compilation target to customize the process of resuming unwind at the
-//! end of the landing pads. `eh_unwind_resume` is used only if
-//! `custom_unwind_resume` flag in the target options is set.
-
-#![allow(private_no_mangle_fns)]
use alloc::boxed::Box;
use core::any::Any;
-use core::ptr;
use crate::dwarf::eh::{self, EHAction, EHContext};
use libc::{c_int, uintptr_t};
}
}
-pub fn payload() -> *mut u8 {
- ptr::null_mut()
-}
-
pub unsafe fn cleanup(ptr: *mut u8) -> Box<dyn Any + Send> {
let exception = Box::from_raw(ptr as *mut Exception);
exception.cause
//
// iOS uses the default routine instead since it uses SjLj unwinding.
#[lang = "eh_personality"]
- #[no_mangle]
unsafe extern "C" fn rust_eh_personality(state: uw::_Unwind_State,
exception_object: *mut uw::_Unwind_Exception,
context: *mut uw::_Unwind_Context)
// On x86_64 MinGW targets, the unwinding mechanism is SEH however the unwind
// handler data (aka LSDA) uses GCC-compatible encoding.
#[lang = "eh_personality"]
- #[no_mangle]
#[allow(nonstandard_style)]
unsafe extern "C" fn rust_eh_personality(exceptionRecord: *mut uw::EXCEPTION_RECORD,
establisherFrame: uw::LPVOID,
} else {
// The personality routine for most of our targets.
#[lang = "eh_personality"]
- #[no_mangle]
unsafe extern "C" fn rust_eh_personality(version: c_int,
actions: uw::_Unwind_Action,
exception_class: uw::_Unwind_Exception_Class,
eh::find_eh_action(lsda, &eh_context, foreign_exception)
}
-// See docs in the `unwind` module.
#[cfg(all(
+ bootstrap,
target_os = "windows",
any(target_arch = "x86", target_arch = "x86_64"),
target_env = "gnu"
fn __deregister_frame_info(eh_frame_begin: *const u8, object: *mut u8);
}
- #[no_mangle]
+ #[rustc_std_internal_symbol]
pub unsafe extern "C" fn rust_eh_register_frames(eh_frame_begin: *const u8, object: *mut u8) {
__register_frame_info(eh_frame_begin, object);
}
- #[no_mangle]
+ #[rustc_std_internal_symbol]
pub unsafe extern "C" fn rust_eh_unregister_frames(eh_frame_begin: *const u8, object: *mut u8) {
__deregister_frame_info(eh_frame_begin, object);
}
use alloc::boxed::Box;
use core::any::Any;
-use core::ptr;
-
-pub fn payload() -> *mut u8 {
- ptr::null_mut()
-}
pub unsafe fn cleanup(_ptr: *mut u8) -> Box<dyn Any + Send> {
extern "C" {
#![feature(libc)]
#![feature(nll)]
#![feature(panic_unwind)]
-#![feature(raw)]
#![feature(staged_api)]
#![feature(std_internals)]
#![feature(unwind_attributes)]
#![feature(abi_thiscall)]
+#![feature(rustc_attrs)]
+#![feature(raw)]
#![panic_runtime]
#![feature(panic_runtime)]
+// `real_imp` is unused with Miri, so silence warnings.
+#![cfg_attr(miri, allow(dead_code))]
use alloc::boxed::Box;
-use core::intrinsics;
-use core::mem;
+use core::any::Any;
use core::panic::BoxMeUp;
-use core::raw;
cfg_if::cfg_if! {
if #[cfg(target_os = "emscripten")] {
#[path = "emcc.rs"]
- mod imp;
+ mod real_imp;
} else if #[cfg(target_arch = "wasm32")] {
#[path = "dummy.rs"]
- mod imp;
+ mod real_imp;
} else if #[cfg(target_os = "hermit")] {
#[path = "hermit.rs"]
- mod imp;
+ mod real_imp;
} else if #[cfg(all(target_env = "msvc", target_arch = "aarch64"))] {
#[path = "dummy.rs"]
- mod imp;
+ mod real_imp;
} else if #[cfg(target_env = "msvc")] {
#[path = "seh.rs"]
- mod imp;
+ mod real_imp;
} else {
// Rust runtime's startup objects depend on these symbols, so make them public.
#[cfg(all(target_os="windows", target_arch = "x86", target_env="gnu"))]
- pub use imp::eh_frame_registry::*;
+ pub use real_imp::eh_frame_registry::*;
#[path = "gcc.rs"]
+ mod real_imp;
+ }
+}
+
+cfg_if::cfg_if! {
+ if #[cfg(miri)] {
+ // Use the Miri runtime.
+ // We still need to also load the normal runtime above, as rustc expects certain lang
+ // items from there to be defined.
+ #[path = "miri.rs"]
mod imp;
+ } else {
+ // Use the real runtime.
+ use real_imp as imp;
}
}
mod dwarf;
-// Entry point for catching an exception, implemented using the `try` intrinsic
-// in the compiler.
-//
-// The interaction between the `payload` function and the compiler is pretty
-// hairy and tightly coupled, for more information see the compiler's
-// implementation of this.
-#[no_mangle]
-pub unsafe extern "C" fn __rust_maybe_catch_panic(
- f: fn(*mut u8),
- data: *mut u8,
- data_ptr: *mut usize,
- vtable_ptr: *mut usize,
-) -> u32 {
- let mut payload = imp::payload();
- if intrinsics::r#try(f, data, &mut payload as *mut _ as *mut _) == 0 {
- 0
- } else {
- let obj = mem::transmute::<_, raw::TraitObject>(imp::cleanup(payload));
- *data_ptr = obj.data as usize;
- *vtable_ptr = obj.vtable as usize;
- 1
- }
+#[rustc_std_internal_symbol]
+pub unsafe extern "C" fn __rust_panic_cleanup(payload: *mut u8) -> *mut (dyn Any + Send + 'static) {
+ Box::into_raw(imp::cleanup(payload))
}
// Entry point for raising an exception, just delegates to the platform-specific
// implementation.
-#[no_mangle]
+#[rustc_std_internal_symbol]
#[unwind(allowed)]
pub unsafe extern "C" fn __rust_start_panic(payload: usize) -> u32 {
let payload = payload as *mut &mut dyn BoxMeUp;
let payload = (*payload).take_box();
- // Miri panic support: cfg'd out of normal builds just to be sure.
- // When going through normal codegen, `miri_start_panic` is a NOP, so the
- // Miri-enabled sysroot still supports normal unwinding. But when executed in
- // Miri, this line initiates unwinding.
- #[cfg(miri)]
- core::intrinsics::miri_start_panic(payload);
-
imp::panic(Box::from_raw(payload))
}
--- /dev/null
+//! Unwinding panics for Miri.
+use alloc::boxed::Box;
+use core::any::Any;
+
+// The type of the payload that the Miri engine propagates through unwinding for us.
+// Must be pointer-sized.
+type Payload = Box<Box<dyn Any + Send>>;
+
+pub unsafe fn panic(payload: Box<dyn Any + Send>) -> u32 {
+ // The payload we pass to `miri_start_panic` will be exactly the argument we get
+ // in `cleanup` below. So we just box it up once, to get something pointer-sized.
+ let payload_box: Payload = Box::new(payload);
+ core::intrinsics::miri_start_panic(Box::into_raw(payload_box) as *mut u8)
+}
+
+pub unsafe fn cleanup(payload_box: *mut u8) -> Box<dyn Any + Send> {
+ // Recover the underlying `Box`.
+ let payload_box: Payload = Box::from_raw(payload_box as *mut _);
+ *payload_box
+}
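The double indirection in `Payload` is deliberate: a `Box<dyn Any + Send>` is a fat pointer (data pointer plus vtable pointer), so it is boxed once more to obtain the thin, pointer-sized value the Miri engine requires. A standalone sketch (not part of the patch) that checks this:

```rust
use std::any::Any;
use std::mem::size_of;

fn main() {
    let word = size_of::<usize>();
    // A boxed trait object is a fat pointer: data pointer plus vtable pointer.
    assert_eq!(size_of::<Box<dyn Any + Send>>(), 2 * word);
    // Boxing it again yields a thin, pointer-sized handle -- the shape the
    // Miri engine expects for the panic payload.
    assert_eq!(size_of::<Box<Box<dyn Any + Send>>>(), word);
    println!("ok");
}
```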
//! [llvm]: http://llvm.org/docs/ExceptionHandling.html#background-on-windows-exceptions
#![allow(nonstandard_style)]
-#![allow(private_no_mangle_fns)]
use alloc::boxed::Box;
use core::any::Any;
-use core::mem;
-use core::raw;
+use core::mem::{self, ManuallyDrop};
use libc::{c_int, c_uint, c_void};
+struct Exception {
+ // This needs to be an Option because we catch the exception by reference
+ // and its destructor is executed by the C++ runtime. When we take the Box
+ // out of the exception, we need to leave the exception in a valid state
+ // for its destructor to run without double-dropping the Box.
+ data: Option<Box<dyn Any + Send>>,
+}
+
// First up, a whole bunch of type definitions. There's a few platform-specific
// oddities here, and a lot that's just blatantly copied from LLVM. The purpose
// of all this is to implement the `panic` function below through a call to
// Note that we intentionally ignore name mangling rules here: we don't want C++
// to be able to catch Rust panics by simply declaring a `struct rust_panic`.
+//
+// When modifying, make sure that the type name string exactly matches
+// the one used in src/librustc_codegen_llvm/intrinsic.rs.
const TYPE_NAME: [u8; 11] = *b"rust_panic\0";
static mut THROW_INFO: _ThrowInfo = _ThrowInfo {
properties: 0,
pType: ptr!(0),
thisDisplacement: _PMD { mdisp: 0, pdisp: -1, vdisp: 0 },
- sizeOrOffset: mem::size_of::<[u64; 2]>() as c_int,
+ sizeOrOffset: mem::size_of::<Exception>() as c_int,
copyFunction: ptr!(0),
};
static TYPE_INFO_VTABLE: *const u8;
}
-// We use #[lang = "eh_catch_typeinfo"] here as this is the type descriptor which
-// we'll use in LLVM's `catchpad` instruction which ends up also being passed as
-// an argument to the C++ personality function.
+// This type descriptor is only used when throwing an exception. The catch part
+// is handled by the try intrinsic, which generates its own TypeDescriptor.
//
-// Again, I'm not entirely sure what this is describing, it just seems to work.
-#[cfg_attr(not(test), lang = "eh_catch_typeinfo")]
+// This is fine since the MSVC runtime uses string comparison on the type name
+// to match TypeDescriptors rather than pointer equality.
+#[cfg_attr(bootstrap, lang = "eh_catch_typeinfo")]
static mut TYPE_DESCRIPTOR: _TypeDescriptor = _TypeDescriptor {
pVFTable: unsafe { &TYPE_INFO_VTABLE } as *const _ as *const _,
spare: core::ptr::null_mut(),
// because Box<dyn Any> isn't clonable.
macro_rules! define_cleanup {
($abi:tt) => {
- unsafe extern $abi fn exception_cleanup(e: *mut [u64; 2]) {
- if (*e)[0] != 0 {
- cleanup(*e);
+ unsafe extern $abi fn exception_cleanup(e: *mut Exception) {
+ if let Exception { data: Some(b) } = e.read() {
+ drop(b);
super::__rust_drop_panic();
}
}
#[unwind(allowed)]
- unsafe extern $abi fn exception_copy(_dest: *mut [u64; 2],
- _src: *mut [u64; 2])
- -> *mut [u64; 2] {
+ unsafe extern $abi fn exception_copy(_dest: *mut Exception,
+ _src: *mut Exception)
+ -> *mut Exception {
panic!("Rust panics cannot be copied");
}
}
// need to otherwise transfer `data` to the heap. We just pass a stack
// pointer to this function.
//
- // The first argument is the payload being thrown (our two pointers), and
- // the second argument is the type information object describing the
- // exception (constructed above).
- let ptrs = mem::transmute::<_, raw::TraitObject>(data);
- let mut ptrs = [ptrs.data as u64, ptrs.vtable as u64];
- let throw_ptr = ptrs.as_mut_ptr() as *mut _;
+ // The ManuallyDrop is needed here since we don't want Exception to be
+ // dropped when unwinding. Instead it will be dropped by exception_cleanup
+ // which is invoked by the C++ runtime.
+ let mut exception = ManuallyDrop::new(Exception { data: Some(data) });
+ let throw_ptr = &mut exception as *mut _ as *mut _;
// This... may seem surprising, and justifiably so. On 32-bit MSVC the
// pointers between these structures are just that, pointers. On 64-bit MSVC,
_CxxThrowException(throw_ptr, &mut THROW_INFO as *mut _ as *mut _);
}
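The `ManuallyDrop` wrapper above is what keeps the stack-allocated `Exception` from being destroyed during unwinding: it suppresses the normal destructor so cleanup happens exactly once, in `exception_cleanup`. A minimal, self-contained illustration of that suppression (the `Noisy` type is hypothetical, not part of the patch):

```rust
use std::mem::ManuallyDrop;

struct Noisy;
impl Drop for Noisy {
    fn drop(&mut self) {
        println!("dropped");
    }
}

fn main() {
    {
        let _plain = Noisy; // destructor runs at end of this scope
    }
    {
        let _wrapped = ManuallyDrop::new(Noisy);
        // No "dropped" line for this one: ManuallyDrop inhibits the
        // destructor, handing responsibility for cleanup elsewhere
        // (or, as here, deliberately leaking it).
    }
    println!("done");
}
```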
-pub fn payload() -> [u64; 2] {
- [0; 2]
-}
-
-pub unsafe fn cleanup(payload: [u64; 2]) -> Box<dyn Any + Send> {
- mem::transmute(raw::TraitObject { data: payload[0] as *mut _, vtable: payload[1] as *mut _ })
+pub unsafe fn cleanup(payload: *mut u8) -> Box<dyn Any + Send> {
+ let exception = &mut *(payload as *mut Exception);
+ exception.data.take().unwrap()
}
// This is required by the compiler to exist (e.g., it's a lang item), but
}
impl HandleCounters {
- // FIXME(eddyb) use a reference to the `static COUNTERS`, intead of
+ // FIXME(eddyb) use a reference to the `static COUNTERS`, instead of
// a wrapper `fn` pointer, once `const fn` can reference `static`s.
extern "C" fn get() -> &'static Self {
static COUNTERS: HandleCounters = HandleCounters {
#[repr(C)]
#[derive(Copy, Clone)]
pub struct Client<F> {
- // FIXME(eddyb) use a reference to the `static COUNTERS`, intead of
+ // FIXME(eddyb) use a reference to the `static COUNTERS`, instead of
// a wrapper `fn` pointer, once `const fn` can reference `static`s.
pub(super) get_handle_counters: extern "C" fn() -> &'static HandleCounters,
pub(super) run: extern "C" fn(Bridge<'_>, F) -> Buffer<u8>,
}
macro_rules! diagnostic_child_methods {
- ($spanned:ident, $regular:ident, $level:expr) => (
+ ($spanned:ident, $regular:ident, $level:expr) => {
/// Adds a new child diagnostic message to `self` with the level
/// identified by this method's name with the given `spans` and
/// `message`.
#[unstable(feature = "proc_macro_diagnostic", issue = "54140")]
pub fn $spanned<S, T>(mut self, spans: S, message: T) -> Diagnostic
- where S: MultiSpan, T: Into<String>
+ where
+ S: MultiSpan,
+ T: Into<String>,
{
self.children.push(Diagnostic::spanned(spans, $level, message));
self
self.children.push(Diagnostic::new($level, message));
self
}
- )
+ };
}
/// Iterator over the child diagnostics of a `Diagnostic`.
/// Creates a new diagnostic with the given `level` and `message`.
#[unstable(feature = "proc_macro_diagnostic", issue = "54140")]
pub fn new<T: Into<String>>(level: Level, message: T) -> Diagnostic {
- Diagnostic { level: level, message: message.into(), spans: vec![], children: vec![] }
+ Diagnostic { level, message: message.into(), spans: vec![], children: vec![] }
}
/// Creates a new diagnostic with the given `level` and `message` pointing to
S: MultiSpan,
T: Into<String>,
{
- Diagnostic {
- level: level,
- message: message.into(),
- spans: spans.into_spans(),
- children: vec![],
- }
+ Diagnostic { level, message: message.into(), spans: spans.into_spans(), children: vec![] }
}
diagnostic_child_methods!(span_error, error, Level::Error);
#![feature(in_band_lifetimes)]
#![feature(optin_builtin_traits)]
#![feature(rustc_attrs)]
-#![feature(specialization)]
+#![cfg_attr(bootstrap, feature(specialization))]
+#![cfg_attr(not(bootstrap), feature(min_specialization))]
#![recursion_limit = "256"]
#[unstable(feature = "proc_macro_internals", issue = "27812")]
use std::ops::{Bound, RangeBounds};
use std::path::PathBuf;
use std::str::FromStr;
-use std::{fmt, iter, mem};
+use std::{error, fmt, iter, mem};
/// The main type provided by this crate, representing an abstract stream of
/// tokens, or, more specifically, a sequence of token trees.
_inner: (),
}
+#[stable(feature = "proc_macro_lexerror_impls", since = "1.44.0")]
+impl fmt::Display for LexError {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ f.write_str("cannot parse string into token stream")
+ }
+}
+
+#[stable(feature = "proc_macro_lexerror_impls", since = "1.44.0")]
+impl error::Error for LexError {}
+
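The pair of impls added here follows the standard error-type pattern: `Display` supplies the message and an empty `Error` impl makes the type usable with `?` and `Box<dyn Error>`. A sketch of the same pattern on a hypothetical stand-in type (`ParseFail` is illustrative, not rustc's):

```rust
use std::{error, fmt};

// Hypothetical error type mirroring the LexError impls above.
#[derive(Debug)]
struct ParseFail;

impl fmt::Display for ParseFail {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str("cannot parse string into token stream")
    }
}

// A blank Error impl is enough: Display and Debug provide the behavior.
impl error::Error for ParseFail {}

fn main() {
    let e: Box<dyn error::Error> = Box::new(ParseFail);
    assert_eq!(e.to_string(), "cannot parse string into token stream");
    println!("ok");
}
```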
#[stable(feature = "proc_macro_lib", since = "1.15.0")]
impl !Send for LexError {}
#[stable(feature = "proc_macro_lib", since = "1.15.0")]
log = { version = "0.4", features = ["release_max_level_info", "std"] }
rustc-rayon = "0.3.0"
rustc-rayon-core = "0.3.0"
-polonius-engine = "0.11.0"
+polonius-engine = "0.12.0"
rustc_apfloat = { path = "../librustc_apfloat" }
rustc_attr = { path = "../librustc_attr" }
rustc_feature = { path = "../librustc_feature" }
backtrace = "0.3.40"
parking_lot = "0.9"
byteorder = { version = "1.3" }
-chalk-engine = { version = "0.9.0", default-features=false }
smallvec = { version = "1.0", features = ["union", "may_dangle"] }
measureme = "0.7.1"
rustc_session = { path = "../librustc_session" }
-For more information about how rustc works, see the [rustc guide].
+For more information about how rustc works, see the [rustc dev guide].
-[rustc guide]: https://rust-lang.github.io/rustc-guide/
+[rustc dev guide]: https://rustc-dev-guide.rust-lang.org/
[] type_binding: rustc_hir::TypeBinding<$tcx>,
[] variant: rustc_hir::Variant<$tcx>,
[] where_predicate: rustc_hir::WherePredicate<$tcx>,
+
+ // HIR query types
+ [few] indexed_hir: rustc::hir::map::IndexedHir<$tcx>,
+ [few] hir_definitions: rustc::hir::map::definitions::Definitions,
+ [] hir_owner: rustc::hir::HirOwner<$tcx>,
+ [] hir_owner_items: rustc::hir::HirOwnerItems<$tcx>,
], $tcx);
)
}
-To learn more about how dependency tracking works in rustc, see the [rustc
-guide].
+To learn more about how dependency tracking works in rustc, see the [rustc
+dev guide].
-[rustc guide]: https://rust-lang.github.io/rustc-guide/query.html
+[rustc dev guide]: https://rustc-dev-guide.rust-lang.org/query.html
//! "infer" some properties for each kind of `DepNode`:
//!
//! * Whether a `DepNode` of a given kind has any parameters at all. Some
-//! `DepNode`s, like `AllLocalTraitImpls`, represent global concepts with only one value.
+//! `DepNode`s could represent global concepts with only one value.
//! * Whether it is possible, in principle, to reconstruct a query key from a
//! given `DepNode`. Many `DepKind`s only require a single `DefId` parameter,
//! in which case it is possible to map the node's fingerprint back to the
use crate::ty::{self, ParamEnvAnd, Ty, TyCtxt};
use rustc_data_structures::stable_hasher::{HashStable, StableHasher};
-use rustc_hir::def_id::{CrateNum, DefId, DefIndex, CRATE_DEF_INDEX};
+use rustc_hir::def_id::{CrateNum, DefId, LocalDefId, CRATE_DEF_INDEX};
use rustc_hir::HirId;
use rustc_span::symbol::Symbol;
use std::fmt;
) => (
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord, Hash,
RustcEncodable, RustcDecodable)]
+ #[allow(non_camel_case_types)]
pub enum DepKind {
$($variant),*
}
pub struct DepConstructor;
+ #[allow(non_camel_case_types)]
impl DepConstructor {
$(
#[inline(always)]
#[allow(unreachable_code, non_snake_case)]
- pub fn $variant<'tcx>(_tcx: TyCtxt<'tcx>, $(arg: $tuple_arg_ty)*) -> DepNode {
+ pub fn $variant(_tcx: TyCtxt<'_>, $(arg: $tuple_arg_ty)*) -> DepNode {
// tuple args
$({
erase!($tuple_arg_ty);
/// Construct a DepNode from the given DepKind and DefPathHash. This
/// method will assert that the given DepKind actually requires a
/// single DefId/DefPathHash parameter.
- pub fn from_def_path_hash(kind: DepKind,
- def_path_hash: DefPathHash)
+ pub fn from_def_path_hash(def_path_hash: DefPathHash,
+ kind: DepKind)
-> DepNode {
debug_assert!(kind.can_reconstruct_query_key() && kind.has_params());
DepNode {
}
if kind.has_params() {
- Ok(def_path_hash.to_dep_node(kind))
+ Ok(DepNode::from_def_path_hash(def_path_hash, kind))
} else {
Ok(DepNode::new_no_params(kind))
}
}
}
-impl DefPathHash {
- pub fn to_dep_node(self, kind: DepKind) -> DepNode {
- DepNode::from_def_path_hash(kind, self)
- }
-}
-
rustc_dep_node_append!([define_dep_nodes!][ <'tcx>
// We use this for most things when incr. comp. is turned off.
[] Null,
- // Represents the body of a function or method. The def-id is that of the
- // function/method.
- [eval_always] HirBody(DefId),
-
- // Represents the HIR node with the given node-id
- [eval_always] Hir(DefId),
-
// Represents metadata from an extern crate.
[eval_always] CrateMetadata(CrateNum),
- [eval_always] AllLocalTraitImpls,
-
[anon] TraitSelect,
[] CompileCodegenUnit(Symbol),
-
- [eval_always] Analysis(CrateNum),
]);
-pub trait RecoverKey<'tcx>: Sized {
- fn recover(tcx: TyCtxt<'tcx>, dep_node: &DepNode) -> Option<Self>;
-}
-
-impl RecoverKey<'tcx> for CrateNum {
- fn recover(tcx: TyCtxt<'tcx>, dep_node: &DepNode) -> Option<Self> {
- dep_node.extract_def_id(tcx).map(|id| id.krate)
- }
-}
-
-impl RecoverKey<'tcx> for DefId {
- fn recover(tcx: TyCtxt<'tcx>, dep_node: &DepNode) -> Option<Self> {
- dep_node.extract_def_id(tcx)
- }
-}
-
-impl RecoverKey<'tcx> for DefIndex {
- fn recover(tcx: TyCtxt<'tcx>, dep_node: &DepNode) -> Option<Self> {
- dep_node.extract_def_id(tcx).map(|id| id.index)
- }
-}
-
-trait DepNodeParams<'tcx>: fmt::Debug {
+pub(crate) trait DepNodeParams<'tcx>: fmt::Debug + Sized {
const CAN_RECONSTRUCT_QUERY_KEY: bool;
/// This method turns the parameters of a DepNodeConstructor into an opaque
fn to_debug_str(&self, _: TyCtxt<'tcx>) -> String {
format!("{:?}", self)
}
+
+ /// This method tries to recover the query key from the given `DepNode`,
+ /// something which is needed when forcing `DepNode`s during red-green
+ /// evaluation. The query system will only call this method if
+ /// `CAN_RECONSTRUCT_QUERY_KEY` is `true`.
+ /// It is always valid to return `None` here, in which case incremental
+ /// compilation will treat the query as having changed instead of forcing it.
+ fn recover(tcx: TyCtxt<'tcx>, dep_node: &DepNode) -> Option<Self>;
}
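The shape of this trait -- a compile-time capability flag plus a fallible `recover` -- can be sketched in isolation. All names below are illustrative stand-ins, not rustc's:

```rust
// Simplified analogue of the DepNodeParams::recover pattern: a key type
// declares whether it can be rebuilt from a node, and `recover` returns
// `None` whenever reconstruction is impossible.
trait NodeParams: Sized {
    const CAN_RECONSTRUCT: bool;
    fn recover(node: &str) -> Option<Self>;
}

struct Id(u32);

impl NodeParams for Id {
    const CAN_RECONSTRUCT: bool = true;
    fn recover(node: &str) -> Option<Self> {
        // Only nodes carrying an "id:" payload can be mapped back to a key.
        node.strip_prefix("id:")?.parse().ok().map(Id)
    }
}

fn main() {
    assert!(Id::CAN_RECONSTRUCT);
    assert_eq!(Id::recover("id:7").map(|i| i.0), Some(7));
    assert!(Id::recover("anon").is_none()); // treated as "changed" by the caller
    println!("ok");
}
```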
impl<'tcx, T> DepNodeParams<'tcx> for T
default fn to_debug_str(&self, _: TyCtxt<'tcx>) -> String {
format!("{:?}", *self)
}
+
+ default fn recover(_: TyCtxt<'tcx>, _: &DepNode) -> Option<Self> {
+ None
+ }
}
impl<'tcx> DepNodeParams<'tcx> for DefId {
fn to_debug_str(&self, tcx: TyCtxt<'tcx>) -> String {
tcx.def_path_str(*self)
}
+
+ fn recover(tcx: TyCtxt<'tcx>, dep_node: &DepNode) -> Option<Self> {
+ dep_node.extract_def_id(tcx)
+ }
}
-impl<'tcx> DepNodeParams<'tcx> for DefIndex {
+impl<'tcx> DepNodeParams<'tcx> for LocalDefId {
const CAN_RECONSTRUCT_QUERY_KEY: bool = true;
fn to_fingerprint(&self, tcx: TyCtxt<'_>) -> Fingerprint {
- tcx.hir().definitions().def_path_hash(*self).0
+ self.to_def_id().to_fingerprint(tcx)
}
fn to_debug_str(&self, tcx: TyCtxt<'tcx>) -> String {
- tcx.def_path_str(DefId::local(*self))
+ self.to_def_id().to_debug_str(tcx)
+ }
+
+ fn recover(tcx: TyCtxt<'tcx>, dep_node: &DepNode) -> Option<Self> {
+ dep_node.extract_def_id(tcx).map(|id| id.expect_local())
}
}
fn to_debug_str(&self, tcx: TyCtxt<'tcx>) -> String {
tcx.crate_name(*self).to_string()
}
+
+ fn recover(tcx: TyCtxt<'tcx>, dep_node: &DepNode) -> Option<Self> {
+ dep_node.extract_def_id(tcx).map(|id| id.krate)
+ }
}
impl<'tcx> DepNodeParams<'tcx> for (DefId, DefId) {
fn to_fingerprint(&self, tcx: TyCtxt<'_>) -> Fingerprint {
let HirId { owner, local_id } = *self;
- let def_path_hash = tcx.def_path_hash(DefId::local(owner));
+ let def_path_hash = tcx.def_path_hash(owner.to_def_id());
let local_id = Fingerprint::from_smaller_hash(local_id.as_u32().into());
def_path_hash.0.combine(local_id)
/// what state they have access to. In particular, we want to
/// prevent implicit 'leaks' of tracked state into the task (which
/// could then be read without generating correct edges in the
- /// dep-graph -- see the [rustc guide] for more details on
+ /// dep-graph -- see the [rustc dev guide] for more details on
/// the dep-graph). To this end, the task function gets exactly two
/// pieces of state: the context `cx` and an argument `arg`. Both
/// of these bits of state must be of some type that implements
/// - If you need 3+ arguments, use a tuple for the
/// `arg` parameter.
///
- /// [rustc guide]: https://rust-lang.github.io/rustc-guide/incremental-compilation.html
+ /// [rustc dev guide]: https://rustc-dev-guide.rust-lang.org/incremental-compilation.html
pub fn with_task<'a, C, A, R>(
&self,
key: DepNode,
)
}
- /// Creates a new dep-graph input with value `input`
- pub fn input_task<'a, C, R>(&self, key: DepNode, cx: C, input: R) -> (R, DepNodeIndex)
- where
- C: DepGraphSafe + StableHashingContextProvider<'a>,
- R: for<'b> HashStable<StableHashingContext<'b>>,
- {
- fn identity_fn<C, A>(_: C, arg: A) -> A {
- arg
- }
-
- self.with_task_impl(
- key,
- cx,
- input,
- true,
- identity_fn,
- |_| None,
- |data, key, fingerprint, _| data.alloc_node(key, SmallVec::new(), fingerprint),
- hash_result::<R>,
- )
- }
-
fn with_task_impl<'a, C, A, R>(
&self,
key: DepNode,
edge_list_indices.push((start, end));
}
- debug_assert!(edge_list_data.len() <= ::std::u32::MAX as usize);
+ debug_assert!(edge_list_data.len() <= u32::MAX as usize);
debug_assert_eq!(edge_list_data.len(), total_edge_count);
SerializedDepGraph { nodes, fingerprints, edge_list_indices, edge_list_data }
continue;
}
} else {
+ // FIXME: This match is just a workaround for incremental bugs and should
+ // be removed. https://github.com/rust-lang/rust/issues/62649 is one such
+ // bug that must be fixed before removing this.
match dep_dep_node.kind {
- DepKind::Hir | DepKind::HirBody | DepKind::CrateMetadata => {
+ DepKind::hir_owner
+ | DepKind::hir_owner_items
+ | DepKind::CrateMetadata => {
if let Some(def_id) = dep_dep_node.extract_def_id(tcx) {
if def_id_corresponds_to_hir_dep_node(tcx, def_id) {
- // The `DefPath` has corresponding node,
- // and that node should have been marked
- // either red or green in `data.colors`.
- bug!(
- "DepNode {:?} should have been \
+ if dep_dep_node.kind == DepKind::CrateMetadata {
+ // The `DefPath` has corresponding node,
+ // and that node should have been marked
+ // either red or green in `data.colors`.
+ bug!(
+ "DepNode {:?} should have been \
pre-marked as red or green but wasn't.",
- dep_dep_node
- );
+ dep_dep_node
+ );
+ }
} else {
// This `DefPath` does not have a
// corresponding `DepNode` (e.g. a
fn def_id_corresponds_to_hir_dep_node(tcx: TyCtxt<'_>, def_id: DefId) -> bool {
let hir_id = tcx.hir().as_local_hir_id(def_id).unwrap();
- def_id.index == hir_id.owner
+ def_id.index == hir_id.owner.local_def_index
}
/// A "work product" is an intermediate result that we save into the
mod safe;
mod serialized;
-pub use self::dep_node::{label_strs, DepConstructor, DepKind, DepNode, RecoverKey, WorkProductId};
+pub(crate) use self::dep_node::DepNodeParams;
+pub use self::dep_node::{label_strs, DepConstructor, DepKind, DepNode, WorkProductId};
pub use self::graph::WorkProductFileKind;
pub use self::graph::{hash_result, DepGraph, DepNodeColor, DepNodeIndex, TaskDeps, WorkProduct};
pub use self::prev::PreviousDepGraph;
impl MaybeFnLike for hir::ImplItem<'_> {
fn is_fn_like(&self) -> bool {
match self.kind {
- hir::ImplItemKind::Method(..) => true,
+ hir::ImplItemKind::Fn(..) => true,
_ => false,
}
}
impl MaybeFnLike for hir::TraitItem<'_> {
fn is_fn_like(&self) -> bool {
match self.kind {
- hir::TraitItemKind::Method(_, hir::TraitMethod::Provided(_)) => true,
+ hir::TraitItemKind::Fn(_, hir::TraitFn::Provided(_)) => true,
_ => false,
}
}
_ => bug!("item FnLikeNode that is not fn-like"),
},
Node::TraitItem(ti) => match ti.kind {
- hir::TraitItemKind::Method(ref sig, hir::TraitMethod::Provided(body)) => {
+ hir::TraitItemKind::Fn(ref sig, hir::TraitFn::Provided(body)) => {
method(ti.hir_id, ti.ident, sig, None, body, ti.span, &ti.attrs)
}
_ => bug!("trait method FnLikeNode that is not fn-like"),
},
Node::ImplItem(ii) => match ii.kind {
- hir::ImplItemKind::Method(ref sig, body) => {
+ hir::ImplItemKind::Fn(ref sig, body) => {
method(ii.hir_id, ii.ident, sig, Some(&ii.vis), body, ii.span, &ii.attrs)
}
_ => bug!("impl method FnLikeNode that is not fn-like"),
-use crate::dep_graph::{DepGraph, DepKind, DepNode, DepNodeIndex};
+use crate::arena::Arena;
use crate::hir::map::definitions::{self, DefPathHash};
-use crate::hir::map::{Entry, HirEntryMap, Map};
+use crate::hir::map::{Entry, HirOwnerData, Map};
+use crate::hir::{HirItem, HirOwner, HirOwnerItems};
use crate::ich::StableHashingContext;
use crate::middle::cstore::CrateStore;
-use rustc_ast::ast::NodeId;
use rustc_data_structures::fingerprint::Fingerprint;
use rustc_data_structures::fx::FxHashMap;
use rustc_data_structures::stable_hasher::{HashStable, StableHasher};
use rustc_data_structures::svh::Svh;
use rustc_hir as hir;
use rustc_hir::def_id::CRATE_DEF_INDEX;
-use rustc_hir::def_id::{CrateNum, DefIndex, LOCAL_CRATE};
+use rustc_hir::def_id::{LocalDefId, LOCAL_CRATE};
use rustc_hir::intravisit::{self, NestedVisitorMap, Visitor};
use rustc_hir::*;
-use rustc_index::vec::IndexVec;
+use rustc_index::vec::{Idx, IndexVec};
use rustc_session::{CrateDisambiguator, Session};
use rustc_span::source_map::SourceMap;
use rustc_span::{Span, Symbol, DUMMY_SP};
/// A visitor that walks over the HIR and collects `Node`s into a HIR map.
pub(super) struct NodeCollector<'a, 'hir> {
+ arena: &'hir Arena<'hir>,
+
/// The crate
krate: &'hir Crate<'hir>,
/// Source map
source_map: &'a SourceMap,
- /// The node map
- map: HirEntryMap<'hir>,
+ map: IndexVec<LocalDefId, HirOwnerData<'hir>>,
+
/// The parent of this node
parent_node: hir::HirId,
- // These fields keep track of the currently relevant DepNodes during
- // the visitor's traversal.
- current_dep_node_owner: DefIndex,
- current_signature_dep_index: DepNodeIndex,
- current_full_dep_index: DepNodeIndex,
- currently_in_body: bool,
+ current_dep_node_owner: LocalDefId,
- dep_graph: &'a DepGraph,
definitions: &'a definitions::Definitions,
- hir_to_node_id: &'a FxHashMap<HirId, NodeId>,
hcx: StableHashingContext<'a>,
- // We are collecting `DepNode::HirBody` hashes here so we can compute the
- // crate hash from then later on.
+ // We are collecting HIR hashes here so we can compute the
+ // crate hash from them later on.
hir_body_nodes: Vec<(DefPathHash, Fingerprint)>,
}
-fn input_dep_node_and_hash(
- dep_graph: &DepGraph,
+fn insert_vec_map<K: Idx, V: Clone>(map: &mut IndexVec<K, Option<V>>, k: K, v: V) {
+ let i = k.index();
+ let len = map.len();
+ if i >= len {
+ map.extend(repeat(None).take(i - len + 1));
+ }
+ map[k] = Some(v);
+}
+
+fn hash(
hcx: &mut StableHashingContext<'_>,
- dep_node: DepNode,
input: impl for<'a> HashStable<StableHashingContext<'a>>,
-) -> (DepNodeIndex, Fingerprint) {
- let dep_node_index = dep_graph.input_task(dep_node, &mut *hcx, &input).1;
-
- let hash = if dep_graph.is_fully_enabled() {
- dep_graph.fingerprint_of(dep_node_index)
- } else {
- let mut stable_hasher = StableHasher::new();
- input.hash_stable(hcx, &mut stable_hasher);
- stable_hasher.finish()
- };
-
- (dep_node_index, hash)
+) -> Fingerprint {
+ let mut stable_hasher = StableHasher::new();
+ input.hash_stable(hcx, &mut stable_hasher);
+ stable_hasher.finish()
}
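The new `hash` helper reduces to the usual feed-input-then-finish hashing idiom; a std-only sketch, with `DefaultHasher` standing in for `StableHasher` and `u64` for `Fingerprint`:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Std-only analogue of the `hash` helper above: hash an input and return
// the finished fingerprint (DefaultHasher stands in for StableHasher).
fn fingerprint_of(input: impl Hash) -> u64 {
    let mut hasher = DefaultHasher::new();
    input.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    // Within a process, equal inputs hash to equal fingerprints; that is
    // the property `hash_body` relies on when recording
    // (DefPathHash, Fingerprint) pairs for the crate hash.
    assert_eq!(fingerprint_of("item"), fingerprint_of("item"));
    assert_ne!(fingerprint_of("item"), fingerprint_of("other"));
    println!("ok");
}
```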
-fn alloc_hir_dep_nodes(
- dep_graph: &DepGraph,
+fn hash_body(
hcx: &mut StableHashingContext<'_>,
def_path_hash: DefPathHash,
item_like: impl for<'a> HashStable<StableHashingContext<'a>>,
hir_body_nodes: &mut Vec<(DefPathHash, Fingerprint)>,
-) -> (DepNodeIndex, DepNodeIndex) {
- let sig = dep_graph
- .input_task(
- def_path_hash.to_dep_node(DepKind::Hir),
- &mut *hcx,
- HirItemLike { item_like: &item_like, hash_bodies: false },
- )
- .1;
- let (full, hash) = input_dep_node_and_hash(
- dep_graph,
- hcx,
- def_path_hash.to_dep_node(DepKind::HirBody),
- HirItemLike { item_like: &item_like, hash_bodies: true },
- );
+) -> Fingerprint {
+ let hash = hash(hcx, HirItemLike { item_like: &item_like });
hir_body_nodes.push((def_path_hash, hash));
- (sig, full)
+ hash
}
fn upstream_crates(cstore: &dyn CrateStore) -> Vec<(Symbol, Fingerprint, Svh)> {
impl<'a, 'hir> NodeCollector<'a, 'hir> {
pub(super) fn root(
sess: &'a Session,
+ arena: &'hir Arena<'hir>,
krate: &'hir Crate<'hir>,
- dep_graph: &'a DepGraph,
definitions: &'a definitions::Definitions,
- hir_to_node_id: &'a FxHashMap<HirId, NodeId>,
mut hcx: StableHashingContext<'a>,
) -> NodeCollector<'a, 'hir> {
- let root_mod_def_path_hash = definitions.def_path_hash(CRATE_DEF_INDEX);
+ let root_mod_def_path_hash =
+ definitions.def_path_hash(LocalDefId { local_def_index: CRATE_DEF_INDEX });
let mut hir_body_nodes = Vec::new();
- // Allocate `DepNode`s for the root module.
- let (root_mod_sig_dep_index, root_mod_full_dep_index) = {
+ let hash = {
let Crate {
- ref module,
- // Crate attributes are not copied over to the root `Mod`, so hash
- // them explicitly here.
- ref attrs,
- span,
+ ref item,
// These fields are handled separately:
exported_macros: _,
non_exported_macro_attrs: _,
proc_macros: _,
} = *krate;
- alloc_hir_dep_nodes(
- dep_graph,
- &mut hcx,
- root_mod_def_path_hash,
- (module, attrs, span),
- &mut hir_body_nodes,
- )
+ hash_body(&mut hcx, root_mod_def_path_hash, item, &mut hir_body_nodes)
};
- {
- dep_graph.input_task(
- DepNode::new_no_params(DepKind::AllLocalTraitImpls),
- &mut hcx,
- &krate.trait_impls,
- );
- }
-
let mut collector = NodeCollector {
+ arena,
krate,
source_map: sess.source_map(),
- map: IndexVec::from_elem_n(IndexVec::new(), definitions.def_index_count()),
parent_node: hir::CRATE_HIR_ID,
- current_signature_dep_index: root_mod_sig_dep_index,
- current_full_dep_index: root_mod_full_dep_index,
- current_dep_node_owner: CRATE_DEF_INDEX,
- currently_in_body: false,
- dep_graph,
+ current_dep_node_owner: LocalDefId { local_def_index: CRATE_DEF_INDEX },
definitions,
- hir_to_node_id,
hcx,
hir_body_nodes,
+ map: (0..definitions.def_index_count())
+ .map(|_| HirOwnerData { signature: None, with_bodies: None })
+ .collect(),
};
collector.insert_entry(
hir::CRATE_HIR_ID,
- Entry {
- parent: hir::CRATE_HIR_ID,
- dep_node: root_mod_sig_dep_index,
- node: Node::Crate,
- },
+ Entry { parent: hir::CRATE_HIR_ID, node: Node::Crate(&krate.item) },
+ hash,
);
collector
crate_disambiguator: CrateDisambiguator,
cstore: &dyn CrateStore,
commandline_args_hash: u64,
- ) -> (HirEntryMap<'hir>, Svh) {
+ ) -> (IndexVec<LocalDefId, HirOwnerData<'hir>>, Svh) {
+ // Insert bodies into the map
+ for (id, body) in self.krate.bodies.iter() {
+ let bodies = &mut self.map[id.hir_id.owner].with_bodies.as_mut().unwrap().bodies;
+ assert!(bodies.insert(id.hir_id.local_id, body).is_none());
+ }
+
self.hir_body_nodes.sort_unstable_by_key(|bn| bn.0);
let node_hashes = self.hir_body_nodes.iter().fold(
.source_map
.files()
.iter()
- .filter(|source_file| CrateNum::from_u32(source_file.crate_of_origin) == LOCAL_CRATE)
+ .filter(|source_file| source_file.cnum == LOCAL_CRATE)
.map(|source_file| source_file.name_hash)
.collect();
(self.map, svh)
}
- fn insert_entry(&mut self, id: HirId, entry: Entry<'hir>) {
- debug!("hir_map: {:?} => {:?}", id, entry);
- let local_map = &mut self.map[id.owner];
+ fn insert_entry(&mut self, id: HirId, entry: Entry<'hir>, hash: Fingerprint) {
let i = id.local_id.as_u32() as usize;
- let len = local_map.len();
- if i >= len {
- local_map.extend(repeat(None).take(i - len + 1));
+
+ let arena = self.arena;
+
+ let data = &mut self.map[id.owner];
+
+ if data.with_bodies.is_none() {
+ data.with_bodies = Some(arena.alloc(HirOwnerItems {
+ hash,
+ items: IndexVec::new(),
+ bodies: FxHashMap::default(),
+ }));
+ }
+
+ let items = data.with_bodies.as_mut().unwrap();
+
+ if i == 0 {
+ // Overwrite the dummy hash with the real HIR owner hash.
+ items.hash = hash;
+
+ // FIXME: `feature(impl_trait_in_bindings)` is broken and triggers this assert
+ //assert!(data.signature.is_none());
+
+ data.signature =
+ Some(self.arena.alloc(HirOwner { parent: entry.parent, node: entry.node }));
+ } else {
+ assert_eq!(entry.parent.owner, id.owner);
+ insert_vec_map(
+ &mut items.items,
+ id.local_id,
+ HirItem { parent: entry.parent.local_id, node: entry.node },
+ );
}
- local_map[id.local_id] = Some(entry);
}
fn insert(&mut self, span: Span, hir_id: HirId, node: Node<'hir>) {
- let entry = Entry {
- parent: self.parent_node,
- dep_node: if self.currently_in_body {
- self.current_full_dep_index
- } else {
- self.current_signature_dep_index
- },
- node,
- };
+ self.insert_with_hash(span, hir_id, node, Fingerprint::ZERO)
+ }
+
+ fn insert_with_hash(&mut self, span: Span, hir_id: HirId, node: Node<'hir>, hash: Fingerprint) {
+ let entry = Entry { parent: self.parent_node, node };
// Make sure that the DepNode of some node coincides with the HirId
// owner of that node.
if cfg!(debug_assertions) {
- let node_id = self.hir_to_node_id[&hir_id];
+ let node_id = self.definitions.hir_to_node_id(hir_id);
assert_eq!(self.definitions.node_to_hir_id(node_id), hir_id);
if hir_id.owner != self.current_dep_node_owner {
- let node_str = match self.definitions.opt_def_index(node_id) {
- Some(def_index) => self.definitions.def_path(def_index).to_string_no_crate(),
+ let node_str = match self.definitions.opt_local_def_id(node_id) {
+ Some(def_id) => self.definitions.def_path(def_id).to_string_no_crate(),
None => format!("{:?}", node),
};
}
}
- self.insert_entry(hir_id, entry);
+ self.insert_entry(hir_id, entry, hash);
}
fn with_parent<F: FnOnce(&mut Self)>(&mut self, parent_node_id: HirId, f: F) {
fn with_dep_node_owner<
T: for<'b> HashStable<StableHashingContext<'b>>,
- F: FnOnce(&mut Self),
+ F: FnOnce(&mut Self, Fingerprint),
>(
&mut self,
- dep_node_owner: DefIndex,
+ dep_node_owner: LocalDefId,
item_like: &T,
f: F,
) {
let prev_owner = self.current_dep_node_owner;
- let prev_signature_dep_index = self.current_signature_dep_index;
- let prev_full_dep_index = self.current_full_dep_index;
- let prev_in_body = self.currently_in_body;
let def_path_hash = self.definitions.def_path_hash(dep_node_owner);
- let (signature_dep_index, full_dep_index) = alloc_hir_dep_nodes(
- self.dep_graph,
- &mut self.hcx,
- def_path_hash,
- item_like,
- &mut self.hir_body_nodes,
- );
- self.current_signature_dep_index = signature_dep_index;
- self.current_full_dep_index = full_dep_index;
+ let hash = hash_body(&mut self.hcx, def_path_hash, item_like, &mut self.hir_body_nodes);
self.current_dep_node_owner = dep_node_owner;
- self.currently_in_body = false;
- f(self);
- self.currently_in_body = prev_in_body;
+ f(self, hash);
self.current_dep_node_owner = prev_owner;
- self.current_full_dep_index = prev_full_dep_index;
- self.current_signature_dep_index = prev_signature_dep_index;
}
}
/// deep walking so that we walk nested items in the context of
/// their outer items.
- fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, Self::Map> {
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
panic!("`visit_nested_xxx` must be manually implemented in this visitor");
}
}
fn visit_nested_body(&mut self, id: BodyId) {
- let prev_in_body = self.currently_in_body;
- self.currently_in_body = true;
self.visit_body(self.krate.body(id));
- self.currently_in_body = prev_in_body;
}
fn visit_param(&mut self, param: &'hir Param<'hir>) {
debug!("visit_item: {:?}", i);
debug_assert_eq!(
i.hir_id.owner,
- self.definitions.opt_def_index(self.hir_to_node_id[&i.hir_id]).unwrap()
+ self.definitions.opt_local_def_id(self.definitions.hir_to_node_id(i.hir_id)).unwrap()
);
- self.with_dep_node_owner(i.hir_id.owner, i, |this| {
- this.insert(i.span, i.hir_id, Node::Item(i));
+ self.with_dep_node_owner(i.hir_id.owner, i, |this, hash| {
+ this.insert_with_hash(i.span, i.hir_id, Node::Item(i), hash);
this.with_parent(i.hir_id, |this| {
if let ItemKind::Struct(ref struct_def, _) = i.kind {
// If this is a tuple or unit-like struct, register the constructor.
fn visit_trait_item(&mut self, ti: &'hir TraitItem<'hir>) {
debug_assert_eq!(
ti.hir_id.owner,
- self.definitions.opt_def_index(self.hir_to_node_id[&ti.hir_id]).unwrap()
+ self.definitions.opt_local_def_id(self.definitions.hir_to_node_id(ti.hir_id)).unwrap()
);
- self.with_dep_node_owner(ti.hir_id.owner, ti, |this| {
- this.insert(ti.span, ti.hir_id, Node::TraitItem(ti));
+ self.with_dep_node_owner(ti.hir_id.owner, ti, |this, hash| {
+ this.insert_with_hash(ti.span, ti.hir_id, Node::TraitItem(ti), hash);
this.with_parent(ti.hir_id, |this| {
intravisit::walk_trait_item(this, ti);
fn visit_impl_item(&mut self, ii: &'hir ImplItem<'hir>) {
debug_assert_eq!(
ii.hir_id.owner,
- self.definitions.opt_def_index(self.hir_to_node_id[&ii.hir_id]).unwrap()
+ self.definitions.opt_local_def_id(self.definitions.hir_to_node_id(ii.hir_id)).unwrap()
);
- self.with_dep_node_owner(ii.hir_id.owner, ii, |this| {
- this.insert(ii.span, ii.hir_id, Node::ImplItem(ii));
+ self.with_dep_node_owner(ii.hir_id.owner, ii, |this, hash| {
+ this.insert_with_hash(ii.span, ii.hir_id, Node::ImplItem(ii), hash);
this.with_parent(ii.hir_id, |this| {
intravisit::walk_impl_item(this, ii);
}
fn visit_macro_def(&mut self, macro_def: &'hir MacroDef<'hir>) {
- let node_id = self.hir_to_node_id[&macro_def.hir_id];
- let def_index = self.definitions.opt_def_index(node_id).unwrap();
-
- self.with_dep_node_owner(def_index, macro_def, |this| {
- this.insert(macro_def.span, macro_def.hir_id, Node::MacroDef(macro_def));
+ self.with_dep_node_owner(macro_def.hir_id.owner, macro_def, |this, hash| {
+ this.insert_with_hash(
+ macro_def.span,
+ macro_def.hir_id,
+ Node::MacroDef(macro_def),
+ hash,
+ );
});
}
}
}
-// This is a wrapper structure that allows determining if span values within
-// the wrapped item should be hashed or not.
struct HirItemLike<T> {
item_like: T,
- hash_bodies: bool,
}
impl<'hir, T> HashStable<StableHashingContext<'hir>> for HirItemLike<T>
T: HashStable<StableHashingContext<'hir>>,
{
fn hash_stable(&self, hcx: &mut StableHashingContext<'hir>, hasher: &mut StableHasher) {
- hcx.while_hashing_hir_bodies(self.hash_bodies, |hcx| {
+ hcx.while_hashing_hir_bodies(true, |hcx| {
self.item_like.hash_stable(hcx, hasher);
});
}
//! expressions) that are mostly just leftovers.
use rustc_ast::ast;
-use rustc_ast::node_id::NodeMap;
-use rustc_data_structures::fingerprint::Fingerprint;
use rustc_data_structures::fx::FxHashMap;
use rustc_data_structures::stable_hasher::StableHasher;
use rustc_hir as hir;
-use rustc_hir::def_id::{CrateNum, DefId, DefIndex, CRATE_DEF_INDEX, LOCAL_CRATE};
+use rustc_hir::def_id::{CrateNum, DefId, DefIndex, LocalDefId, CRATE_DEF_INDEX, LOCAL_CRATE};
use rustc_index::vec::IndexVec;
use rustc_session::CrateDisambiguator;
use rustc_span::hygiene::ExpnId;
use rustc_span::symbol::{sym, Symbol};
use rustc_span::Span;
-use std::borrow::Borrow;
use std::fmt::Write;
use std::hash::Hash;
+pub use rustc_hir::def_id::DefPathHash;
+
/// The `DefPathTable` maps `DefIndex`es to `DefKey`s and vice versa.
/// Internally the `DefPathTable` holds a tree of `DefKey`s, where each `DefKey`
/// stores the `DefIndex` of its parent.
#[derive(Clone, Default)]
pub struct Definitions {
table: DefPathTable,
- node_to_def_index: NodeMap<DefIndex>,
- def_index_to_node: IndexVec<DefIndex, ast::NodeId>,
- pub(super) node_to_hir_id: IndexVec<ast::NodeId, hir::HirId>,
+
+ def_id_to_span: IndexVec<LocalDefId, Span>,
+
+ // FIXME(eddyb) don't go through `ast::NodeId` to convert between `HirId`
+ // and `LocalDefId` - ideally all `LocalDefId`s would be HIR owners.
+ node_id_to_def_id: FxHashMap<ast::NodeId, LocalDefId>,
+ def_id_to_node_id: IndexVec<LocalDefId, ast::NodeId>,
+
+ pub(super) node_id_to_hir_id: IndexVec<ast::NodeId, hir::HirId>,
+ /// The reverse mapping of `node_id_to_hir_id`.
+ pub(super) hir_id_to_node_id: FxHashMap<hir::HirId, ast::NodeId>,
+
/// If `ExpnId` is an ID of some macro expansion,
/// then `DefId` is the normal module (`mod`) in which the expanded macro was defined.
parent_modules_of_macro_defs: FxHashMap<ExpnId, DefId>,
- /// Item with a given `DefIndex` was defined during macro expansion with ID `ExpnId`.
- expansions_that_defined: FxHashMap<DefIndex, ExpnId>,
- next_disambiguator: FxHashMap<(DefIndex, DefPathData), u32>,
- def_index_to_span: FxHashMap<DefIndex, Span>,
+ /// Item with a given `LocalDefId` was defined during macro expansion with ID `ExpnId`.
+ expansions_that_defined: FxHashMap<LocalDefId, ExpnId>,
+ next_disambiguator: FxHashMap<(LocalDefId, DefPathData), u32>,
/// When collecting definitions from an AST fragment produced by a macro invocation `ExpnId`
/// we know what parent node that fragment should be attached to thanks to this table.
- invocation_parents: FxHashMap<ExpnId, DefIndex>,
+ invocation_parents: FxHashMap<ExpnId, LocalDefId>,
/// Indices of unnamed struct or variant fields with unresolved attributes.
- placeholder_field_indices: NodeMap<usize>,
+ placeholder_field_indices: FxHashMap<ast::NodeId, usize>,
}
/// A unique identifier that we can use to lookup a definition
}
}
data.reverse();
- DefPath { data: data, krate: krate }
+ DefPath { data, krate }
}
/// Returns a string representation of the `DefPath` without
ImplTrait,
}
-#[derive(
- Copy,
- Clone,
- Hash,
- PartialEq,
- Eq,
- PartialOrd,
- Ord,
- Debug,
- RustcEncodable,
- RustcDecodable,
- HashStable
-)]
-pub struct DefPathHash(pub Fingerprint);
-
-impl Borrow<Fingerprint> for DefPathHash {
- #[inline]
- fn borrow(&self) -> &Fingerprint {
- &self.0
- }
-}
-
impl Definitions {
pub fn def_path_table(&self) -> &DefPathTable {
&self.table
self.table.index_to_key.len()
}
- pub fn def_key(&self, index: DefIndex) -> DefKey {
- self.table.def_key(index)
+ pub fn def_key(&self, id: LocalDefId) -> DefKey {
+ self.table.def_key(id.local_def_index)
}
#[inline(always)]
- pub fn def_path_hash(&self, index: DefIndex) -> DefPathHash {
- self.table.def_path_hash(index)
+ pub fn def_path_hash(&self, id: LocalDefId) -> DefPathHash {
+ self.table.def_path_hash(id.local_def_index)
}
/// Returns the path from the crate root to `index`. The root
/// empty vector for the crate root). For an inlined item, this
/// will be the path of the item in the external crate (but the
/// path will begin with the path to the external crate).
- pub fn def_path(&self, index: DefIndex) -> DefPath {
- DefPath::make(LOCAL_CRATE, index, |p| self.def_key(p))
- }
-
- #[inline]
- pub fn opt_def_index(&self, node: ast::NodeId) -> Option<DefIndex> {
- self.node_to_def_index.get(&node).copied()
+ pub fn def_path(&self, id: LocalDefId) -> DefPath {
+ DefPath::make(LOCAL_CRATE, id.local_def_index, |index| {
+ self.def_key(LocalDefId { local_def_index: index })
+ })
}
#[inline]
- pub fn opt_local_def_id(&self, node: ast::NodeId) -> Option<DefId> {
- self.opt_def_index(node).map(DefId::local)
+ pub fn opt_local_def_id(&self, node: ast::NodeId) -> Option<LocalDefId> {
+ self.node_id_to_def_id.get(&node).copied()
}
+ // FIXME(eddyb) this function can and should return `LocalDefId`.
#[inline]
pub fn local_def_id(&self, node: ast::NodeId) -> DefId {
- self.opt_local_def_id(node).unwrap()
+ self.opt_local_def_id(node).unwrap().to_def_id()
}
#[inline]
pub fn as_local_node_id(&self, def_id: DefId) -> Option<ast::NodeId> {
- if def_id.krate == LOCAL_CRATE {
- let node_id = self.def_index_to_node[def_id.index];
+ if let Some(def_id) = def_id.as_local() {
+ let node_id = self.def_id_to_node_id[def_id];
if node_id != ast::DUMMY_NODE_ID {
return Some(node_id);
}
#[inline]
pub fn as_local_hir_id(&self, def_id: DefId) -> Option<hir::HirId> {
- if def_id.krate == LOCAL_CRATE {
- let hir_id = self.def_index_to_hir_id(def_id.index);
+ if let Some(def_id) = def_id.as_local() {
+ let hir_id = self.local_def_id_to_hir_id(def_id);
if hir_id != hir::DUMMY_HIR_ID { Some(hir_id) } else { None }
} else {
None
}
}
+ // FIXME(eddyb) rename to `hir_id_to_node_id`.
+ #[inline]
+ pub fn hir_to_node_id(&self, hir_id: hir::HirId) -> ast::NodeId {
+ self.hir_id_to_node_id[&hir_id]
+ }
+
+ // FIXME(eddyb) rename to `node_id_to_hir_id`.
#[inline]
pub fn node_to_hir_id(&self, node_id: ast::NodeId) -> hir::HirId {
- self.node_to_hir_id[node_id]
+ self.node_id_to_hir_id[node_id]
}
#[inline]
- pub fn def_index_to_hir_id(&self, def_index: DefIndex) -> hir::HirId {
- let node_id = self.def_index_to_node[def_index];
- self.node_to_hir_id[node_id]
+ pub fn local_def_id_to_hir_id(&self, id: LocalDefId) -> hir::HirId {
+ let node_id = self.def_id_to_node_id[id];
+ self.node_id_to_hir_id[node_id]
}
- /// Retrieves the span of the given `DefId` if `DefId` is in the local crate, the span exists
- /// and it's not `DUMMY_SP`.
+ /// Retrieves the span of the given `DefId` if `DefId` is in the local crate.
#[inline]
pub fn opt_span(&self, def_id: DefId) -> Option<Span> {
- if def_id.krate == LOCAL_CRATE {
- self.def_index_to_span.get(&def_id.index).copied()
- } else {
- None
- }
+ if let Some(def_id) = def_id.as_local() { Some(self.def_id_to_span[def_id]) } else { None }
}
/// Adds a root definition (no parent) and a few other reserved definitions.
&mut self,
crate_name: &str,
crate_disambiguator: CrateDisambiguator,
- ) -> DefIndex {
+ ) -> LocalDefId {
let key = DefKey {
parent: None,
disambiguated_data: DisambiguatedDefPathData {
let def_path_hash = key.compute_stable_hash(parent_hash);
// Create the definition.
- let root_index = self.table.allocate(key, def_path_hash);
- assert_eq!(root_index, CRATE_DEF_INDEX);
- assert!(self.def_index_to_node.is_empty());
- self.def_index_to_node.push(ast::CRATE_NODE_ID);
- self.node_to_def_index.insert(ast::CRATE_NODE_ID, root_index);
- self.set_invocation_parent(ExpnId::root(), root_index);
+ let root = LocalDefId { local_def_index: self.table.allocate(key, def_path_hash) };
+ assert_eq!(root.local_def_index, CRATE_DEF_INDEX);
+
+ assert_eq!(self.def_id_to_node_id.push(ast::CRATE_NODE_ID), root);
+ assert_eq!(self.def_id_to_span.push(rustc_span::DUMMY_SP), root);
- root_index
+ self.node_id_to_def_id.insert(ast::CRATE_NODE_ID, root);
+ self.set_invocation_parent(ExpnId::root(), root);
+
+ root
}
/// Adds a definition with a parent definition.
pub fn create_def_with_parent(
&mut self,
- parent: DefIndex,
+ parent: LocalDefId,
node_id: ast::NodeId,
data: DefPathData,
expn_id: ExpnId,
span: Span,
- ) -> DefIndex {
+ ) -> LocalDefId {
debug!(
"create_def_with_parent(parent={:?}, node_id={:?}, data={:?})",
parent, node_id, data
);
assert!(
- !self.node_to_def_index.contains_key(&node_id),
+ !self.node_id_to_def_id.contains_key(&node_id),
"adding a def'n for node-id {:?} and data {:?} but a previous def'n exists: {:?}",
node_id,
data,
- self.table.def_key(self.node_to_def_index[&node_id])
+ self.table.def_key(self.node_id_to_def_id[&node_id].local_def_index),
);
// The root node must be created with `create_root_def()`.
};
let key = DefKey {
- parent: Some(parent),
+ parent: Some(parent.local_def_index),
disambiguated_data: DisambiguatedDefPathData { data, disambiguator },
};
- let parent_hash = self.table.def_path_hash(parent);
+ let parent_hash = self.table.def_path_hash(parent.local_def_index);
let def_path_hash = key.compute_stable_hash(parent_hash);
debug!("create_def_with_parent: after disambiguation, key = {:?}", key);
// Create the definition.
- let index = self.table.allocate(key, def_path_hash);
- assert_eq!(index.index(), self.def_index_to_node.len());
- self.def_index_to_node.push(node_id);
+ let def_id = LocalDefId { local_def_index: self.table.allocate(key, def_path_hash) };
+
+ assert_eq!(self.def_id_to_node_id.push(node_id), def_id);
+ assert_eq!(self.def_id_to_span.push(span), def_id);
- // Some things for which we allocate `DefIndex`es don't correspond to
+ // Some things for which we allocate `LocalDefId`s don't correspond to
// anything in the AST, so they don't have a `NodeId`. For these cases
- // we don't need a mapping from `NodeId` to `DefIndex`.
+ // we don't need a mapping from `NodeId` to `LocalDefId`.
if node_id != ast::DUMMY_NODE_ID {
- debug!("create_def_with_parent: def_index_to_node[{:?} <-> {:?}", index, node_id);
- self.node_to_def_index.insert(node_id, index);
+ debug!("create_def_with_parent: def_id_to_node_id[{:?}] <-> {:?}", def_id, node_id);
+ self.node_id_to_def_id.insert(node_id, def_id);
}
if expn_id != ExpnId::root() {
- self.expansions_that_defined.insert(index, expn_id);
- }
-
- // The span is added if it isn't dummy.
- if !span.is_dummy() {
- self.def_index_to_span.insert(index, span);
+ self.expansions_that_defined.insert(def_id, expn_id);
}
- index
+ def_id
}
/// Initializes the `ast::NodeId` to `HirId` mapping once it has been generated during
/// AST to HIR lowering.
pub fn init_node_id_to_hir_id_mapping(&mut self, mapping: IndexVec<ast::NodeId, hir::HirId>) {
assert!(
- self.node_to_hir_id.is_empty(),
+ self.node_id_to_hir_id.is_empty(),
"trying to initialize `NodeId` -> `HirId` mapping twice"
);
- self.node_to_hir_id = mapping;
+ self.node_id_to_hir_id = mapping;
+
+ // Build the reverse mapping of `node_id_to_hir_id`.
+ self.hir_id_to_node_id = self
+ .node_id_to_hir_id
+ .iter_enumerated()
+ .map(|(node_id, &hir_id)| (hir_id, node_id))
+ .collect();
}
- pub fn expansion_that_defined(&self, index: DefIndex) -> ExpnId {
- self.expansions_that_defined.get(&index).copied().unwrap_or(ExpnId::root())
+ pub fn expansion_that_defined(&self, id: LocalDefId) -> ExpnId {
+ self.expansions_that_defined.get(&id).copied().unwrap_or(ExpnId::root())
}
pub fn parent_module_of_macro_def(&self, expn_id: ExpnId) -> DefId {
self.parent_modules_of_macro_defs.insert(expn_id, module);
}
- pub fn invocation_parent(&self, invoc_id: ExpnId) -> DefIndex {
+ pub fn invocation_parent(&self, invoc_id: ExpnId) -> LocalDefId {
self.invocation_parents[&invoc_id]
}
- pub fn set_invocation_parent(&mut self, invoc_id: ExpnId, parent: DefIndex) {
+ pub fn set_invocation_parent(&mut self, invoc_id: ExpnId, parent: LocalDefId) {
let old_parent = self.invocation_parents.insert(invoc_id, parent);
- assert!(old_parent.is_none(), "parent `DefIndex` is reset for an invocation");
+ assert!(old_parent.is_none(), "parent `LocalDefId` is reset for an invocation");
}
pub fn placeholder_field_index(&self, node_id: ast::NodeId) -> usize {
use crate::hir::map::Map;
+use crate::ty::TyCtxt;
use rustc_data_structures::fx::FxHashSet;
use rustc_data_structures::sync::{par_iter, Lock, ParallelIterator};
use rustc_hir as hir;
-use rustc_hir::def_id::{DefId, DefIndex, CRATE_DEF_INDEX};
+use rustc_hir::def_id::{LocalDefId, CRATE_DEF_INDEX};
use rustc_hir::intravisit;
use rustc_hir::itemlikevisit::ItemLikeVisitor;
use rustc_hir::{HirId, ItemLocalId};
-pub fn check_crate(hir_map: &Map<'_>, sess: &rustc_session::Session) {
- hir_map.dep_graph.assert_ignored();
+pub fn check_crate(tcx: TyCtxt<'_>) {
+ tcx.dep_graph.assert_ignored();
let errors = Lock::new(Vec::new());
+ let hir_map = tcx.hir();
- par_iter(&hir_map.krate.modules).for_each(|(module_id, _)| {
+ par_iter(&hir_map.krate().modules).for_each(|(module_id, _)| {
let local_def_id = hir_map.local_def_id(*module_id);
hir_map.visit_item_likes_in_module(
local_def_id,
if !errors.is_empty() {
let message = errors.iter().fold(String::new(), |s1, s2| s1 + "\n" + s2);
- sess.delay_span_bug(rustc_span::DUMMY_SP, &message);
+ tcx.sess.delay_span_bug(rustc_span::DUMMY_SP, &message);
}
}
struct HirIdValidator<'a, 'hir> {
- hir_map: &'a Map<'hir>,
- owner_def_index: Option<DefIndex>,
+ hir_map: Map<'hir>,
+ owner: Option<LocalDefId>,
hir_ids_seen: FxHashSet<ItemLocalId>,
errors: &'a Lock<Vec<String>>,
}
struct OuterVisitor<'a, 'hir> {
- hir_map: &'a Map<'hir>,
+ hir_map: Map<'hir>,
errors: &'a Lock<Vec<String>>,
}
impl<'a, 'hir> OuterVisitor<'a, 'hir> {
- fn new_inner_visitor(&self, hir_map: &'a Map<'hir>) -> HirIdValidator<'a, 'hir> {
+ fn new_inner_visitor(&self, hir_map: Map<'hir>) -> HirIdValidator<'a, 'hir> {
HirIdValidator {
hir_map,
- owner_def_index: None,
+ owner: None,
hir_ids_seen: Default::default(),
errors: self.errors,
}
}
fn check<F: FnOnce(&mut HirIdValidator<'a, 'hir>)>(&mut self, hir_id: HirId, walk: F) {
- assert!(self.owner_def_index.is_none());
- let owner_def_index = self.hir_map.local_def_id(hir_id).index;
- self.owner_def_index = Some(owner_def_index);
+ assert!(self.owner.is_none());
+ let owner = self.hir_map.local_def_id(hir_id).expect_local();
+ self.owner = Some(owner);
walk(self);
- if owner_def_index == CRATE_DEF_INDEX {
+ if owner.local_def_index == CRATE_DEF_INDEX {
return;
}
let mut missing_items = Vec::with_capacity(missing.len());
for local_id in missing {
- let hir_id =
- HirId { owner: owner_def_index, local_id: ItemLocalId::from_u32(local_id) };
+ let hir_id = HirId { owner, local_id: ItemLocalId::from_u32(local_id) };
trace!("missing hir id {:#?}", hir_id);
missing_items.push(format!(
- "[local_id: {}, node:{}]",
+ "[local_id: {}, owner: {}]",
local_id,
- self.hir_map.node_to_string(hir_id)
+ self.hir_map.def_path(owner).to_string_no_crate()
));
}
self.error(|| {
format!(
"ItemLocalIds not assigned densely in {}. \
Max ItemLocalId = {}, missing IDs = {:?}; seen IDs = {:?}",
- self.hir_map.def_path(DefId::local(owner_def_index)).to_string_no_crate(),
+ self.hir_map.def_path(owner).to_string_no_crate(),
max,
missing_items,
self.hir_ids_seen
.iter()
- .map(|&local_id| HirId { owner: owner_def_index, local_id })
+ .map(|&local_id| HirId { owner, local_id })
.map(|h| format!("({:?} {})", h, self.hir_map.node_to_string(h)))
.collect::<Vec<_>>()
)
impl<'a, 'hir> intravisit::Visitor<'hir> for HirIdValidator<'a, 'hir> {
type Map = Map<'hir>;
- fn nested_visit_map<'this>(&'this mut self) -> intravisit::NestedVisitorMap<'this, Self::Map> {
+ fn nested_visit_map(&mut self) -> intravisit::NestedVisitorMap<Self::Map> {
intravisit::NestedVisitorMap::OnlyBodies(self.hir_map)
}
fn visit_id(&mut self, hir_id: HirId) {
- let owner = self.owner_def_index.expect("no owner_def_index");
+ let owner = self.owner.expect("no owner");
if hir_id == hir::DUMMY_HIR_ID {
self.error(|| {
format!(
"HirIdValidator: The recorded owner of {} is {} instead of {}",
self.hir_map.node_to_string(hir_id),
- self.hir_map.def_path(DefId::local(hir_id.owner)).to_string_no_crate(),
- self.hir_map.def_path(DefId::local(owner)).to_string_no_crate()
+ self.hir_map.def_path(hir_id.owner).to_string_no_crate(),
+ self.hir_map.def_path(owner).to_string_no_crate()
)
});
}
DefKey, DefPath, DefPathData, DefPathHash, Definitions, DisambiguatedDefPathData,
};
-use crate::dep_graph::{DepGraph, DepKind, DepNode, DepNodeIndex};
-use crate::middle::cstore::CrateStoreDyn;
+use crate::hir::{HirOwner, HirOwnerItems};
use crate::ty::query::Providers;
+use crate::ty::TyCtxt;
use rustc_ast::ast::{self, Name, NodeId};
-use rustc_data_structures::fx::FxHashMap;
use rustc_data_structures::svh::Svh;
use rustc_hir::def::{DefKind, Res};
-use rustc_hir::def_id::{DefId, DefIndex, LocalDefId, CRATE_DEF_INDEX};
+use rustc_hir::def_id::{CrateNum, DefId, LocalDefId, LOCAL_CRATE};
use rustc_hir::intravisit;
use rustc_hir::itemlikevisit::ItemLikeVisitor;
use rustc_hir::print::Nested;
mod collector;
pub mod definitions;
mod hir_id_validator;
+pub use hir_id_validator::check_crate;
/// Represents an entry and its parent `HirId`.
#[derive(Copy, Clone, Debug)]
pub struct Entry<'hir> {
parent: HirId,
- dep_node: DepNodeIndex,
node: Node<'hir>,
}
impl<'hir> Entry<'hir> {
fn parent_node(self) -> Option<HirId> {
match self.node {
- Node::Crate | Node::MacroDef(_) => None,
+ Node::Crate(_) | Node::MacroDef(_) => None,
_ => Some(self.parent),
}
}
+}
- fn fn_decl(&self) -> Option<&'hir FnDecl<'hir>> {
- match self.node {
- Node::Item(ref item) => match item.kind {
- ItemKind::Fn(ref sig, _, _) => Some(&sig.decl),
- _ => None,
- },
-
- Node::TraitItem(ref item) => match item.kind {
- TraitItemKind::Method(ref sig, _) => Some(&sig.decl),
- _ => None,
- },
+fn fn_decl<'hir>(node: Node<'hir>) -> Option<&'hir FnDecl<'hir>> {
+ match node {
+ Node::Item(ref item) => match item.kind {
+ ItemKind::Fn(ref sig, _, _) => Some(&sig.decl),
+ _ => None,
+ },
- Node::ImplItem(ref item) => match item.kind {
- ImplItemKind::Method(ref sig, _) => Some(&sig.decl),
- _ => None,
- },
+ Node::TraitItem(ref item) => match item.kind {
+ TraitItemKind::Fn(ref sig, _) => Some(&sig.decl),
+ _ => None,
+ },
- Node::Expr(ref expr) => match expr.kind {
- ExprKind::Closure(_, ref fn_decl, ..) => Some(fn_decl),
- _ => None,
- },
+ Node::ImplItem(ref item) => match item.kind {
+ ImplItemKind::Fn(ref sig, _) => Some(&sig.decl),
+ _ => None,
+ },
+ Node::Expr(ref expr) => match expr.kind {
+ ExprKind::Closure(_, ref fn_decl, ..) => Some(fn_decl),
_ => None,
- }
- }
+ },
- fn fn_sig(&self) -> Option<&'hir FnSig<'hir>> {
- match &self.node {
- Node::Item(item) => match &item.kind {
- ItemKind::Fn(sig, _, _) => Some(sig),
- _ => None,
- },
+ _ => None,
+ }
+}
- Node::TraitItem(item) => match &item.kind {
- TraitItemKind::Method(sig, _) => Some(sig),
- _ => None,
- },
+fn fn_sig<'hir>(node: Node<'hir>) -> Option<&'hir FnSig<'hir>> {
+ match &node {
+ Node::Item(item) => match &item.kind {
+ ItemKind::Fn(sig, _, _) => Some(sig),
+ _ => None,
+ },
- Node::ImplItem(item) => match &item.kind {
- ImplItemKind::Method(sig, _) => Some(sig),
- _ => None,
- },
+ Node::TraitItem(item) => match &item.kind {
+ TraitItemKind::Fn(sig, _) => Some(sig),
+ _ => None,
+ },
+ Node::ImplItem(item) => match &item.kind {
+ ImplItemKind::Fn(sig, _) => Some(sig),
_ => None,
- }
- }
+ },
- fn associated_body(self) -> Option<BodyId> {
- match self.node {
- Node::Item(item) => match item.kind {
- ItemKind::Const(_, body) | ItemKind::Static(.., body) | ItemKind::Fn(.., body) => {
- Some(body)
- }
- _ => None,
- },
+ _ => None,
+ }
+}
- Node::TraitItem(item) => match item.kind {
- TraitItemKind::Const(_, Some(body))
- | TraitItemKind::Method(_, TraitMethod::Provided(body)) => Some(body),
- _ => None,
- },
+fn associated_body<'hir>(node: Node<'hir>) -> Option<BodyId> {
+ match node {
+ Node::Item(item) => match item.kind {
+ ItemKind::Const(_, body) | ItemKind::Static(.., body) | ItemKind::Fn(.., body) => {
+ Some(body)
+ }
+ _ => None,
+ },
- Node::ImplItem(item) => match item.kind {
- ImplItemKind::Const(_, body) | ImplItemKind::Method(_, body) => Some(body),
- _ => None,
- },
+ Node::TraitItem(item) => match item.kind {
+ TraitItemKind::Const(_, Some(body)) | TraitItemKind::Fn(_, TraitFn::Provided(body)) => {
+ Some(body)
+ }
+ _ => None,
+ },
- Node::AnonConst(constant) => Some(constant.body),
+ Node::ImplItem(item) => match item.kind {
+ ImplItemKind::Const(_, body) | ImplItemKind::Fn(_, body) => Some(body),
+ _ => None,
+ },
- Node::Expr(expr) => match expr.kind {
- ExprKind::Closure(.., body, _, _) => Some(body),
- _ => None,
- },
+ Node::AnonConst(constant) => Some(constant.body),
+ Node::Expr(expr) => match expr.kind {
+ ExprKind::Closure(.., body, _, _) => Some(body),
_ => None,
- }
- }
+ },
- fn is_body_owner(self, hir_id: HirId) -> bool {
- match self.associated_body() {
- Some(b) => b.hir_id == hir_id,
- None => false,
- }
+ _ => None,
}
}
-/// This type is effectively a `HashMap<HirId, Entry<'hir>>`,
-/// but it is implemented as 2 layers of arrays.
-/// - first we have `A = IndexVec<DefIndex, B>` mapping `DefIndex`s to an inner value
-/// - which is `B = IndexVec<ItemLocalId, Option<Entry<'hir>>` which gives you the `Entry`.
-pub(super) type HirEntryMap<'hir> = IndexVec<DefIndex, IndexVec<ItemLocalId, Option<Entry<'hir>>>>;
-
-/// Represents a mapping from `NodeId`s to AST elements and their parent `NodeId`s.
-#[derive(Clone)]
-pub struct Map<'hir> {
- krate: &'hir Crate<'hir>,
+fn is_body_owner<'hir>(node: Node<'hir>, hir_id: HirId) -> bool {
+ match associated_body(node) {
+ Some(b) => b.hir_id == hir_id,
+ None => false,
+ }
+}
- pub dep_graph: DepGraph,
+pub(super) struct HirOwnerData<'hir> {
+ pub(super) signature: Option<&'hir HirOwner<'hir>>,
+ pub(super) with_bodies: Option<&'hir mut HirOwnerItems<'hir>>,
+}
+pub struct IndexedHir<'hir> {
/// The SVH of the local crate.
pub crate_hash: Svh,
- map: HirEntryMap<'hir>,
-
- definitions: Definitions,
+ pub(super) map: IndexVec<LocalDefId, HirOwnerData<'hir>>,
+}
- /// The reverse mapping of `node_to_hir_id`.
- hir_to_node_id: FxHashMap<HirId, NodeId>,
+#[derive(Copy, Clone)]
+pub struct Map<'hir> {
+ pub(super) tcx: TyCtxt<'hir>,
}
/// An iterator that walks up the ancestor tree of a given `HirId`.
}
impl<'hir> Map<'hir> {
- /// This is used internally in the dependency tracking system.
- /// Use the `krate` method to ensure your dependency on the
- /// crate is tracked.
- pub fn untracked_krate(&self) -> &Crate<'hir> {
- &self.krate
+ pub fn krate(&self) -> &'hir Crate<'hir> {
+ self.tcx.hir_crate(LOCAL_CRATE)
}
#[inline]
- fn lookup(&self, id: HirId) -> Option<&Entry<'hir>> {
- let local_map = self.map.get(id.owner)?;
- local_map.get(id.local_id)?.as_ref()
- }
-
- /// Registers a read in the dependency graph of the AST node with
- /// the given `id`. This needs to be called each time a public
- /// function returns the HIR for a node -- in other words, when it
- /// "reveals" the content of a node to the caller (who might not
- /// otherwise have had access to those contents, and hence needs a
- /// read recorded). If the function just returns a DefId or
- /// HirId, no actual content was returned, so no read is needed.
- pub fn read(&self, hir_id: HirId) {
- if let Some(entry) = self.lookup(hir_id) {
- self.dep_graph.read_index(entry.dep_node);
- } else {
- bug!("called `HirMap::read()` with invalid `HirId`: {:?}", hir_id)
- }
+ pub fn definitions(&self) -> &'hir Definitions {
+ &self.tcx.definitions
}
- #[inline]
- pub fn definitions(&self) -> &Definitions {
- &self.definitions
- }
-
- pub fn def_key(&self, def_id: DefId) -> DefKey {
- assert!(def_id.is_local());
- self.definitions.def_key(def_id.index)
+ pub fn def_key(&self, def_id: LocalDefId) -> DefKey {
+ self.tcx.definitions.def_key(def_id)
}
pub fn def_path_from_hir_id(&self, id: HirId) -> Option<DefPath> {
- self.opt_local_def_id(id).map(|def_id| self.def_path(def_id))
+ self.opt_local_def_id(id).map(|def_id| self.def_path(def_id.expect_local()))
}
- pub fn def_path(&self, def_id: DefId) -> DefPath {
- assert!(def_id.is_local());
- self.definitions.def_path(def_id.index)
+ pub fn def_path(&self, def_id: LocalDefId) -> DefPath {
+ self.tcx.definitions.def_path(def_id)
}
+ // FIXME(eddyb) this function can and should return `LocalDefId`.
#[inline]
pub fn local_def_id_from_node_id(&self, node: NodeId) -> DefId {
self.opt_local_def_id_from_node_id(node).unwrap_or_else(|| {
})
}
+ // FIXME(eddyb) this function can and should return `LocalDefId`.
#[inline]
pub fn local_def_id(&self, hir_id: HirId) -> DefId {
self.opt_local_def_id(hir_id).unwrap_or_else(|| {
#[inline]
pub fn opt_local_def_id(&self, hir_id: HirId) -> Option<DefId> {
let node_id = self.hir_to_node_id(hir_id);
- self.definitions.opt_local_def_id(node_id)
+ self.opt_local_def_id_from_node_id(node_id)
}
#[inline]
pub fn opt_local_def_id_from_node_id(&self, node: NodeId) -> Option<DefId> {
- self.definitions.opt_local_def_id(node)
+ Some(self.tcx.definitions.opt_local_def_id(node)?.to_def_id())
}
#[inline]
pub fn as_local_node_id(&self, def_id: DefId) -> Option<NodeId> {
- self.definitions.as_local_node_id(def_id)
+ self.tcx.definitions.as_local_node_id(def_id)
}
#[inline]
pub fn as_local_hir_id(&self, def_id: DefId) -> Option<HirId> {
- self.definitions.as_local_hir_id(def_id)
+ self.tcx.definitions.as_local_hir_id(def_id)
}
#[inline]
pub fn hir_to_node_id(&self, hir_id: HirId) -> NodeId {
- self.hir_to_node_id[&hir_id]
+ self.tcx.definitions.hir_to_node_id(hir_id)
}
#[inline]
pub fn node_to_hir_id(&self, node_id: NodeId) -> HirId {
- self.definitions.node_to_hir_id(node_id)
- }
-
- #[inline]
- pub fn def_index_to_hir_id(&self, def_index: DefIndex) -> HirId {
- self.definitions.def_index_to_hir_id(def_index)
+ self.tcx.definitions.node_to_hir_id(node_id)
}
#[inline]
pub fn local_def_id_to_hir_id(&self, def_id: LocalDefId) -> HirId {
- self.definitions.def_index_to_hir_id(def_id.to_def_id().index)
+ self.tcx.definitions.local_def_id_to_hir_id(def_id)
}
pub fn def_kind(&self, hir_id: HirId) -> Option<DefKind> {
- let node = if let Some(node) = self.find(hir_id) { node } else { return None };
+ let node = self.find(hir_id)?;
Some(match node {
Node::Item(item) => match item.kind {
},
Node::TraitItem(item) => match item.kind {
TraitItemKind::Const(..) => DefKind::AssocConst,
- TraitItemKind::Method(..) => DefKind::Method,
+ TraitItemKind::Fn(..) => DefKind::AssocFn,
TraitItemKind::Type(..) => DefKind::AssocTy,
},
Node::ImplItem(item) => match item.kind {
ImplItemKind::Const(..) => DefKind::AssocConst,
- ImplItemKind::Method(..) => DefKind::Method,
+ ImplItemKind::Fn(..) => DefKind::AssocFn,
ImplItemKind::TyAlias(..) => DefKind::AssocTy,
ImplItemKind::OpaqueTy(..) => DefKind::AssocOpaqueTy,
},
Node::Variant(_) => DefKind::Variant,
Node::Ctor(variant_data) => {
// FIXME(eddyb) is this even possible, if we have a `Node::Ctor`?
- if variant_data.ctor_hir_id().is_none() {
- return None;
- }
+ variant_data.ctor_hir_id()?;
+
let ctor_of = match self.find(self.get_parent_node(hir_id)) {
Some(Node::Item(..)) => def::CtorOf::Struct,
Some(Node::Variant(..)) => def::CtorOf::Variant,
| Node::Lifetime(_)
| Node::Visibility(_)
| Node::Block(_)
- | Node::Crate => return None,
+ | Node::Crate(_) => return None,
Node::MacroDef(_) => DefKind::Macro(MacroKind::Bang),
Node::GenericParam(param) => match param.kind {
GenericParamKind::Lifetime { .. } => return None,
}
fn find_entry(&self, id: HirId) -> Option<Entry<'hir>> {
- self.lookup(id).cloned()
+ Some(self.get_entry(id))
}
- pub fn item(&self, id: HirId) -> &'hir Item<'hir> {
- self.read(id);
+ fn get_entry(&self, id: HirId) -> Entry<'hir> {
+ if id.local_id == ItemLocalId::from_u32(0) {
+ let owner = self.tcx.hir_owner(id.owner);
+ Entry { parent: owner.parent, node: owner.node }
+ } else {
+ let owner = self.tcx.hir_owner_items(id.owner);
+ let item = owner.items[id.local_id].as_ref().unwrap();
+ Entry { parent: HirId { owner: id.owner, local_id: item.parent }, node: item.node }
+ }
+ }
- // N.B., intentionally bypass `self.krate()` so that we
- // do not trigger a read of the whole krate here
- self.krate.item(id)
+ pub fn item(&self, id: HirId) -> &'hir Item<'hir> {
+ match self.find(id).unwrap() {
+ Node::Item(item) => item,
+ _ => bug!(),
+ }
}
pub fn trait_item(&self, id: TraitItemId) -> &'hir TraitItem<'hir> {
- self.read(id.hir_id);
-
- // N.B., intentionally bypass `self.krate()` so that we
- // do not trigger a read of the whole krate here
- self.krate.trait_item(id)
+ match self.find(id.hir_id).unwrap() {
+ Node::TraitItem(item) => item,
+ _ => bug!(),
+ }
}
pub fn impl_item(&self, id: ImplItemId) -> &'hir ImplItem<'hir> {
- self.read(id.hir_id);
-
- // N.B., intentionally bypass `self.krate()` so that we
- // do not trigger a read of the whole krate here
- self.krate.impl_item(id)
+ match self.find(id.hir_id).unwrap() {
+ Node::ImplItem(item) => item,
+ _ => bug!(),
+ }
}
pub fn body(&self, id: BodyId) -> &'hir Body<'hir> {
- self.read(id.hir_id);
-
- // N.B., intentionally bypass `self.krate()` so that we
- // do not trigger a read of the whole krate here
- self.krate.body(id)
+ self.tcx.hir_owner_items(id.hir_id.owner).bodies.get(&id.hir_id.local_id).unwrap()
}
pub fn fn_decl_by_hir_id(&self, hir_id: HirId) -> Option<&'hir FnDecl<'hir>> {
- if let Some(entry) = self.find_entry(hir_id) {
- entry.fn_decl()
+ if let Some(node) = self.find(hir_id) {
+ fn_decl(node)
} else {
- bug!("no entry for hir_id `{}`", hir_id)
+ bug!("no node for hir_id `{}`", hir_id)
}
}
pub fn fn_sig_by_hir_id(&self, hir_id: HirId) -> Option<&'hir FnSig<'hir>> {
- if let Some(entry) = self.find_entry(hir_id) {
- entry.fn_sig()
+ if let Some(node) = self.find(hir_id) {
+ fn_sig(node)
} else {
- bug!("no entry for hir_id `{}`", hir_id)
+ bug!("no node for hir_id `{}`", hir_id)
}
}
/// item (possibly associated), a closure, or a `hir::AnonConst`.
pub fn body_owner(&self, BodyId { hir_id }: BodyId) -> HirId {
let parent = self.get_parent_node(hir_id);
- assert!(self.lookup(parent).map_or(false, |e| e.is_body_owner(hir_id)));
+ assert!(self.find(parent).map_or(false, |n| is_body_owner(n, hir_id)));
parent
}
+ // FIXME(eddyb) this function can and should return `LocalDefId`.
pub fn body_owner_def_id(&self, id: BodyId) -> DefId {
self.local_def_id(self.body_owner(id))
}
/// Given a `HirId`, returns the `BodyId` associated with it,
/// if the node is a body owner, otherwise returns `None`.
pub fn maybe_body_owned_by(&self, hir_id: HirId) -> Option<BodyId> {
- if let Some(entry) = self.find_entry(hir_id) {
- if self.dep_graph.is_fully_enabled() {
- let hir_id_owner = hir_id.owner;
- let def_path_hash = self.definitions.def_path_hash(hir_id_owner);
- self.dep_graph.read(def_path_hash.to_dep_node(DepKind::HirBody));
- }
-
- entry.associated_body()
+ if let Some(node) = self.find(hir_id) {
+ associated_body(node)
} else {
- bug!("no entry for id `{}`", hir_id)
+ bug!("no node for id `{}`", hir_id)
}
| Node::AnonConst(_) => BodyOwnerKind::Const,
Node::Ctor(..)
| Node::Item(&Item { kind: ItemKind::Fn(..), .. })
- | Node::TraitItem(&TraitItem { kind: TraitItemKind::Method(..), .. })
- | Node::ImplItem(&ImplItem { kind: ImplItemKind::Method(..), .. }) => BodyOwnerKind::Fn,
+ | Node::TraitItem(&TraitItem { kind: TraitItemKind::Fn(..), .. })
+ | Node::ImplItem(&ImplItem { kind: ImplItemKind::Fn(..), .. }) => BodyOwnerKind::Fn,
Node::Item(&Item { kind: ItemKind::Static(_, m, _), .. }) => BodyOwnerKind::Static(m),
Node::Expr(&Expr { kind: ExprKind::Closure(..), .. }) => BodyOwnerKind::Closure,
node => bug!("{:#?} is not a body node", node),
}
pub fn trait_impls(&self, trait_did: DefId) -> &'hir [HirId] {
- self.dep_graph.read(DepNode::new_no_params(DepKind::AllLocalTraitImpls));
-
- // N.B., intentionally bypass `self.krate()` so that we
- // do not trigger a read of the whole krate here
- self.krate.trait_impls.get(&trait_did).map_or(&[], |xs| &xs[..])
+ self.tcx.all_local_trait_impls(LOCAL_CRATE).get(&trait_did).map_or(&[], |xs| &xs[..])
}
- /// Gets the attributes on the crate. This is preferable to
- /// invoking `krate.attrs` because it registers a tighter
- /// dep-graph access.
+ /// Gets the attributes on the crate.
pub fn krate_attrs(&self) -> &'hir [ast::Attribute] {
- let def_path_hash = self.definitions.def_path_hash(CRATE_DEF_INDEX);
-
- self.dep_graph.read(def_path_hash.to_dep_node(DepKind::Hir));
- &self.krate.attrs
+ match self.get_entry(CRATE_HIR_ID).node {
+ Node::Crate(item) => item.attrs,
+ _ => bug!(),
+ }
}
pub fn get_module(&self, module: DefId) -> (&'hir Mod<'hir>, Span, HirId) {
let hir_id = self.as_local_hir_id(module).unwrap();
- self.read(hir_id);
- match self.find_entry(hir_id).unwrap().node {
+ match self.get_entry(hir_id).node {
Node::Item(&Item { span, kind: ItemKind::Mod(ref m), .. }) => (m, span, hir_id),
- Node::Crate => (&self.krate.module, self.krate.span, hir_id),
+ Node::Crate(item) => (&item.module, item.span, hir_id),
node => panic!("not a module: {:?}", node),
}
}
where
V: ItemLikeVisitor<'hir>,
{
- let hir_id = self.as_local_hir_id(module).unwrap();
-
- // Read the module so we'll be re-executed if new items
- // appear immediately under in the module. If some new item appears
- // in some nested item in the module, we'll be re-executed due to reads
- // in the expect_* calls the loops below
- self.read(hir_id);
-
- let module = &self.krate.modules[&hir_id];
+ let module = self.tcx.hir_module_items(module.expect_local());
for id in &module.items {
visitor.visit_item(self.expect_item(*id));
/// Retrieves the `Node` corresponding to `id`, panicking if it cannot be found.
pub fn get(&self, id: HirId) -> Node<'hir> {
- // read recorded by `find`
self.find(id).unwrap_or_else(|| bug!("couldn't find hir id {} in the HIR map", id))
}
pub fn get_if_local(&self, id: DefId) -> Option<Node<'hir>> {
- self.as_local_hir_id(id).map(|id| self.get(id)) // read recorded by `get`
+ self.as_local_hir_id(id).map(|id| self.get(id))
}
pub fn get_generics(&self, id: DefId) -> Option<&'hir Generics<'hir>> {
- /// Retrieves the `Node` corresponding to `id`, returning `None` if cannot be found.
+ /// Retrieves the `Node` corresponding to `id`, returning `None` if it cannot be found.
pub fn find(&self, hir_id: HirId) -> Option<Node<'hir>> {
- let result = self
- .find_entry(hir_id)
- .and_then(|entry| if let Node::Crate = entry.node { None } else { Some(entry.node) });
- if result.is_some() {
- self.read(hir_id);
- }
- result
+ let node = self.get_entry(hir_id).node;
+ if let Node::Crate(..) = node { None } else { Some(node) }
}
/// Similar to `get_parent`; returns the parent HIR Id, or just `hir_id` if there
/// from a node to the root of the HIR (unless you get back the same ID here,
/// which can happen if the ID is not in the map itself or is just weird).
pub fn get_parent_node(&self, hir_id: HirId) -> HirId {
- if self.dep_graph.is_fully_enabled() {
- let hir_id_owner = hir_id.owner;
- let def_path_hash = self.definitions.def_path_hash(hir_id_owner);
- self.dep_graph.read(def_path_hash.to_dep_node(DepKind::HirBody));
- }
-
- self.find_entry(hir_id).and_then(|x| x.parent_node()).unwrap_or(hir_id)
+ self.get_entry(hir_id).parent_node().unwrap_or(hir_id)
}
/// Returns an iterator for the nodes in the ancestor tree of the `current_id`
}
}
- /// Wether `hir_id` corresponds to a `mod` or a crate.
+ /// Whether `hir_id` corresponds to a `mod` or a crate.
pub fn is_hir_id_module(&self, hir_id: HirId) -> bool {
- match self.lookup(hir_id) {
- Some(Entry { node: Node::Item(Item { kind: ItemKind::Mod(_), .. }), .. })
- | Some(Entry { node: Node::Crate, .. }) => true,
+ match self.get_entry(hir_id) {
+ Entry { node: Node::Item(Item { kind: ItemKind::Mod(_), .. }), .. }
+ | Entry { node: Node::Crate(..), .. } => true,
_ => false,
}
}
pub fn get_parent_item(&self, hir_id: HirId) -> HirId {
for (hir_id, node) in self.parent_iter(hir_id) {
match node {
- Node::Crate
+ Node::Crate(_)
| Node::Item(_)
| Node::ForeignItem(_)
| Node::TraitItem(_)
_ => false,
},
Node::TraitItem(ti) => match ti.kind {
- TraitItemKind::Method(..) => true,
+ TraitItemKind::Fn(..) => true,
_ => false,
},
Node::ImplItem(ii) => match ii.kind {
- ImplItemKind::Method(..) => true,
+ ImplItemKind::Fn(..) => true,
_ => false,
},
Node::Block(_) => true,
scope
}
+ // FIXME(eddyb) this function can and should return `LocalDefId`.
pub fn get_parent_did(&self, id: HirId) -> DefId {
self.local_def_id(self.get_parent_item(id))
}
node: Node::Item(Item { kind: ItemKind::ForeignMod(ref nm), .. }), ..
} = entry
{
- self.read(hir_id); // reveals some of the content of a node
return nm.abi;
}
}
pub fn expect_item(&self, id: HirId) -> &'hir Item<'hir> {
match self.find(id) {
- // read recorded by `find`
Some(Node::Item(item)) => item,
_ => bug!("expected item, found {}", self.node_to_string(id)),
}
pub fn expect_expr(&self, id: HirId) -> &'hir Expr<'hir> {
match self.find(id) {
- // read recorded by find
Some(Node::Expr(expr)) => expr,
_ => bug!("expected expr, found {}", self.node_to_string(id)),
}
- /// Given a node ID, gets a list of attributes associated with the AST
- /// corresponding to the node-ID.
+ /// Given a `HirId`, gets a list of attributes associated with the
+ /// corresponding HIR node.
pub fn attrs(&self, id: HirId) -> &'hir [ast::Attribute] {
- self.read(id); // reveals attributes on the node
let attrs = match self.find_entry(id).map(|entry| entry.node) {
Some(Node::Param(a)) => Some(&a.attrs[..]),
Some(Node::Local(l)) => Some(&l.attrs[..]),
// Unit/tuple structs/variants take the attributes straight from
// the struct/variant definition.
Some(Node::Ctor(..)) => return self.attrs(self.get_parent_item(id)),
- Some(Node::Crate) => Some(&self.krate.attrs[..]),
+ Some(Node::Crate(item)) => Some(&item.attrs[..]),
_ => None,
};
attrs.unwrap_or(&[])
}
- /// Returns an iterator that yields all the hir ids in the map.
- fn all_ids<'a>(&'a self) -> impl Iterator<Item = HirId> + 'a {
- // This code is a bit awkward because the map is implemented as 2 levels of arrays,
- // see the comment on `HirEntryMap`.
- // Iterate over all the indices and return a reference to
- // local maps and their index given that they exist.
- self.map.iter_enumerated().flat_map(move |(owner, local_map)| {
- // Iterate over each valid entry in the local map.
- local_map.iter_enumerated().filter_map(move |(i, entry)| {
- entry.map(move |_| {
- // Reconstruct the `HirId` based on the 3 indices we used to find it.
- HirId { owner, local_id: i }
- })
- })
- })
- }
-
- /// Returns an iterator that yields the node id's with paths that
- /// match `parts`. (Requires `parts` is non-empty.)
- ///
- /// For example, if given `parts` equal to `["bar", "quux"]`, then
- /// the iterator will produce node id's for items with paths
- /// such as `foo::bar::quux`, `bar::quux`, `other::bar::quux`, and
- /// any other such items it can find in the map.
- pub fn nodes_matching_suffix<'a>(
- &'a self,
- parts: &'a [String],
- ) -> impl Iterator<Item = NodeId> + 'a {
- let nodes = NodesMatchingSuffix {
- map: self,
- item_name: parts.last().unwrap(),
- in_which: &parts[..parts.len() - 1],
- };
-
- self.all_ids()
- .filter(move |hir| nodes.matches_suffix(*hir))
- .map(move |hir| self.hir_to_node_id(hir))
- }
-
pub fn span(&self, hir_id: HirId) -> Span {
- self.read(hir_id); // reveals span from node
match self.find_entry(hir_id).map(|entry| entry.node) {
Some(Node::Param(param)) => param.span,
Some(Node::Item(item)) => item.span,
Some(Node::Visibility(v)) => bug!("unexpected Visibility {:?}", v),
Some(Node::Local(local)) => local.span,
Some(Node::MacroDef(macro_def)) => macro_def.span,
- Some(Node::Crate) => self.krate.span,
+ Some(Node::Crate(item)) => item.span,
None => bug!("hir::map::Map::span: id not in map: {:?}", hir_id),
}
}
}
}
-pub struct NodesMatchingSuffix<'a> {
- map: &'a Map<'a>,
- item_name: &'a String,
- in_which: &'a [String],
-}
-
-impl<'a> NodesMatchingSuffix<'a> {
- /// Returns `true` only if some suffix of the module path for parent
- /// matches `self.in_which`.
- ///
- /// In other words: let `[x_0,x_1,...,x_k]` be `self.in_which`;
- /// returns true if parent's path ends with the suffix
- /// `x_0::x_1::...::x_k`.
- fn suffix_matches(&self, parent: HirId) -> bool {
- let mut cursor = parent;
- for part in self.in_which.iter().rev() {
- let (mod_id, mod_name) = match find_first_mod_parent(self.map, cursor) {
- None => return false,
- Some((node_id, name)) => (node_id, name),
- };
- if mod_name.as_str() != *part {
- return false;
- }
- cursor = self.map.get_parent_item(mod_id);
- }
- return true;
-
- // Finds the first mod in parent chain for `id`, along with
- // that mod's name.
- //
- // If `id` itself is a mod named `m` with parent `p`, then
- // returns `Some(id, m, p)`. If `id` has no mod in its parent
- // chain, then returns `None`.
- fn find_first_mod_parent(map: &Map<'_>, mut id: HirId) -> Option<(HirId, Name)> {
- loop {
- if let Node::Item(item) = map.find(id)? {
- if item_is_mod(&item) {
- return Some((id, item.ident.name));
- }
- }
- let parent = map.get_parent_item(id);
- if parent == id {
- return None;
- }
- id = parent;
- }
-
- fn item_is_mod(item: &Item<'_>) -> bool {
- match item.kind {
- ItemKind::Mod(_) => true,
- _ => false,
- }
- }
- }
- }
-
- // We are looking at some node `n` with a given name and parent
- // id; do their names match what I am seeking?
- fn matches_names(&self, parent_of_n: HirId, name: Name) -> bool {
- name.as_str() == *self.item_name && self.suffix_matches(parent_of_n)
- }
-
- fn matches_suffix(&self, hir: HirId) -> bool {
- let name = match self.map.find_entry(hir).map(|entry| entry.node) {
- Some(Node::Item(n)) => n.name(),
- Some(Node::ForeignItem(n)) => n.name(),
- Some(Node::TraitItem(n)) => n.name(),
- Some(Node::ImplItem(n)) => n.name(),
- Some(Node::Variant(n)) => n.name(),
- Some(Node::Field(n)) => n.name(),
- _ => return false,
- };
- self.matches_names(self.map.get_parent_item(hir), name)
- }
-}
-
trait Named {
fn name(&self) -> Name;
}
}
}
-pub fn map_crate<'hir>(
- sess: &rustc_session::Session,
- cstore: &CrateStoreDyn,
- krate: &'hir Crate<'hir>,
- dep_graph: DepGraph,
- definitions: Definitions,
-) -> Map<'hir> {
- let _prof_timer = sess.prof.generic_activity("build_hir_map");
-
- // Build the reverse mapping of `node_to_hir_id`.
- let hir_to_node_id = definitions
- .node_to_hir_id
- .iter_enumerated()
- .map(|(node_id, &hir_id)| (hir_id, node_id))
- .collect();
+pub(super) fn index_hir<'tcx>(tcx: TyCtxt<'tcx>, cnum: CrateNum) -> &'tcx IndexedHir<'tcx> {
+ assert_eq!(cnum, LOCAL_CRATE);
+
+ let _prof_timer = tcx.sess.prof.generic_activity("build_hir_map");
let (map, crate_hash) = {
- let hcx = crate::ich::StableHashingContext::new(sess, krate, &definitions, cstore);
+ let hcx = tcx.create_stable_hashing_context();
let mut collector =
- NodeCollector::root(sess, krate, &dep_graph, &definitions, &hir_to_node_id, hcx);
- intravisit::walk_crate(&mut collector, krate);
+ NodeCollector::root(tcx.sess, &**tcx.arena, tcx.untracked_crate, &tcx.definitions, hcx);
+ intravisit::walk_crate(&mut collector, tcx.untracked_crate);
- let crate_disambiguator = sess.local_crate_disambiguator();
- let cmdline_args = sess.opts.dep_tracking_hash();
- collector.finalize_and_compute_crate_hash(crate_disambiguator, cstore, cmdline_args)
+ let crate_disambiguator = tcx.sess.local_crate_disambiguator();
+ let cmdline_args = tcx.sess.opts.dep_tracking_hash();
+ collector.finalize_and_compute_crate_hash(crate_disambiguator, &*tcx.cstore, cmdline_args)
};
- let map = Map { krate, dep_graph, crate_hash, map, hir_to_node_id, definitions };
-
- sess.time("validate_HIR_map", || {
- hir_id_validator::check_crate(&map, sess);
- });
+ let map = tcx.arena.alloc(IndexedHir { crate_hash, map });
map
}
ImplItemKind::Const(..) => {
format!("assoc const {} in {}{}", ii.ident, path_str(), id_str)
}
- ImplItemKind::Method(..) => format!("method {} in {}{}", ii.ident, path_str(), id_str),
+ ImplItemKind::Fn(..) => format!("method {} in {}{}", ii.ident, path_str(), id_str),
ImplItemKind::TyAlias(_) => {
format!("assoc type {} in {}{}", ii.ident, path_str(), id_str)
}
Some(Node::TraitItem(ti)) => {
let kind = match ti.kind {
TraitItemKind::Const(..) => "assoc constant",
- TraitItemKind::Method(..) => "trait method",
+ TraitItemKind::Fn(..) => "trait method",
TraitItemKind::Type(..) => "assoc type",
};
Some(Node::GenericParam(ref param)) => format!("generic_param {:?}{}", param, id_str),
Some(Node::Visibility(ref vis)) => format!("visibility {:?}{}", vis, id_str),
Some(Node::MacroDef(_)) => format!("macro {}{}", path_str(), id_str),
- Some(Node::Crate) => String::from("root_crate"),
+ Some(Node::Crate(..)) => String::from("root_crate"),
None => format!("unknown node{}", id_str),
}
}
-//! HIR datatypes. See the [rustc guide] for more info.
+//! HIR datatypes. See the [rustc dev guide] for more info.
//!
-//! [rustc guide]: https://rust-lang.github.io/rustc-guide/hir.html
+//! [rustc dev guide]: https://rustc-dev-guide.rust-lang.org/hir.html
pub mod exports;
pub mod map;
+use crate::ich::StableHashingContext;
use crate::ty::query::Providers;
use crate::ty::TyCtxt;
-use rustc_hir::def_id::{DefId, LOCAL_CRATE};
-use rustc_hir::print;
-use rustc_hir::Crate;
+use rustc_data_structures::fingerprint::Fingerprint;
+use rustc_data_structures::fx::FxHashMap;
+use rustc_data_structures::stable_hasher::{HashStable, StableHasher};
+use rustc_hir::def_id::{LocalDefId, LOCAL_CRATE};
+use rustc_hir::Body;
use rustc_hir::HirId;
-use std::ops::Deref;
+use rustc_hir::ItemLocalId;
+use rustc_hir::Node;
+use rustc_index::vec::IndexVec;
-/// A wrapper type which allows you to access HIR.
-#[derive(Clone)]
-pub struct Hir<'tcx> {
- tcx: TyCtxt<'tcx>,
- map: &'tcx map::Map<'tcx>,
+pub struct HirOwner<'tcx> {
+ parent: HirId,
+ node: Node<'tcx>,
}
-impl<'tcx> Hir<'tcx> {
- pub fn krate(&self) -> &'tcx Crate<'tcx> {
- self.tcx.hir_crate(LOCAL_CRATE)
+impl<'a, 'tcx> HashStable<StableHashingContext<'a>> for HirOwner<'tcx> {
+ fn hash_stable(&self, hcx: &mut StableHashingContext<'a>, hasher: &mut StableHasher) {
+ let HirOwner { parent, node } = self;
+ hcx.while_hashing_hir_bodies(false, |hcx| {
+ parent.hash_stable(hcx, hasher);
+ node.hash_stable(hcx, hasher);
+ });
}
}
-impl<'tcx> Deref for Hir<'tcx> {
- type Target = &'tcx map::Map<'tcx>;
+#[derive(Clone)]
+pub struct HirItem<'tcx> {
+ parent: ItemLocalId,
+ node: Node<'tcx>,
+}
- #[inline(always)]
- fn deref(&self) -> &Self::Target {
- &self.map
- }
+pub struct HirOwnerItems<'tcx> {
+ hash: Fingerprint,
+ items: IndexVec<ItemLocalId, Option<HirItem<'tcx>>>,
+ bodies: FxHashMap<ItemLocalId, &'tcx Body<'tcx>>,
}
-impl<'hir> print::PpAnn for Hir<'hir> {
- fn nested(&self, state: &mut print::State<'_>, nested: print::Nested) {
- self.map.nested(state, nested)
+impl<'a, 'tcx> HashStable<StableHashingContext<'a>> for HirOwnerItems<'tcx> {
+ fn hash_stable(&self, hcx: &mut StableHashingContext<'a>, hasher: &mut StableHasher) {
+ // We ignore the `items` and `bodies` fields since these refer to information included in
+ // `hash` which is hashed in the collector and used for the crate hash.
+ let HirOwnerItems { hash, items: _, bodies: _ } = *self;
+ hash.hash_stable(hcx, hasher);
}
}
impl<'tcx> TyCtxt<'tcx> {
#[inline(always)]
- pub fn hir(self) -> Hir<'tcx> {
- Hir { tcx: self, map: &self.hir_map }
+ pub fn hir(self) -> map::Map<'tcx> {
+ map::Map { tcx: self }
}
- pub fn parent_module(self, id: HirId) -> DefId {
- self.parent_module_from_def_id(DefId::local(id.owner))
+ pub fn parent_module(self, id: HirId) -> LocalDefId {
+ self.parent_module_from_def_id(id.owner)
}
}
pub fn provide(providers: &mut Providers<'_>) {
providers.parent_module_from_def_id = |tcx, id| {
let hir = tcx.hir();
- hir.local_def_id(hir.get_module_parent_node(hir.as_local_hir_id(id).unwrap()))
+ hir.local_def_id(hir.get_module_parent_node(hir.as_local_hir_id(id.to_def_id()).unwrap()))
+ .expect_local()
+ };
+ providers.hir_crate = |tcx, _| tcx.untracked_crate;
+ providers.index_hir = map::index_hir;
+ providers.hir_module_items = |tcx, id| {
+ let hir = tcx.hir();
+ let module = hir.as_local_hir_id(id.to_def_id()).unwrap();
+ &tcx.untracked_crate.modules[&module]
+ };
+ providers.hir_owner = |tcx, id| tcx.index_hir(LOCAL_CRATE).map[id].signature.unwrap();
+ providers.hir_owner_items = |tcx, id| {
+ tcx.index_hir(LOCAL_CRATE).map[id].with_bodies.as_ref().map(|items| &**items).unwrap()
};
- providers.hir_crate = |tcx, _| tcx.hir_map.untracked_krate();
map::provide(providers);
}
-use crate::hir::map::definitions::Definitions;
-use crate::hir::map::DefPathHash;
+use crate::hir::map::definitions::{DefPathHash, Definitions};
use crate::ich::{self, CachingSourceMapView};
use crate::middle::cstore::CrateStore;
-use crate::session::Session;
use crate::ty::{fast_reject, TyCtxt};
use rustc_ast::ast;
use rustc_data_structures::fx::{FxHashMap, FxHashSet};
-use rustc_data_structures::stable_hasher::{HashStable, StableHasher, ToStableHashKey};
+use rustc_data_structures::stable_hasher::{HashStable, StableHasher};
use rustc_data_structures::sync::Lrc;
use rustc_hir as hir;
-use rustc_hir::def_id::{DefId, DefIndex};
+use rustc_hir::def_id::{DefId, LocalDefId};
+use rustc_session::Session;
use rustc_span::source_map::SourceMap;
use rustc_span::symbol::Symbol;
use rustc_span::{BytePos, SourceFile};
#[inline]
pub fn def_path_hash(&self, def_id: DefId) -> DefPathHash {
- if def_id.is_local() {
- self.definitions.def_path_hash(def_id.index)
+ if let Some(def_id) = def_id.as_local() {
+ self.local_def_path_hash(def_id)
} else {
self.cstore.def_path_hash(def_id)
}
}
#[inline]
- pub fn local_def_path_hash(&self, def_index: DefIndex) -> DefPathHash {
- self.definitions.def_path_hash(def_index)
+ pub fn local_def_path_hash(&self, def_id: LocalDefId) -> DefPathHash {
+ self.definitions.def_path_hash(def_id)
}
#[inline]
}
IGNORED_ATTRIBUTES.with(|attrs| attrs.contains(&name))
}
-
- pub fn hash_hir_item_like<F: FnOnce(&mut Self)>(&mut self, f: F) {
- let prev_hash_node_ids = self.node_id_hashing_mode;
- self.node_id_hashing_mode = NodeIdHashingMode::Ignore;
-
- f(self);
-
- self.node_id_hashing_mode = prev_hash_node_ids;
- }
}
/// Something that can provide a stable hashing context.
impl<'a> crate::dep_graph::DepGraphSafe for StableHashingContext<'a> {}
-impl<'a> ToStableHashKey<StableHashingContext<'a>> for hir::HirId {
- type KeyType = (DefPathHash, hir::ItemLocalId);
-
- #[inline]
- fn to_stable_hash_key(
- &self,
- hcx: &StableHashingContext<'a>,
- ) -> (DefPathHash, hir::ItemLocalId) {
- let def_path_hash = hcx.local_def_path_hash(self.owner);
- (def_path_hash, self.local_id)
- }
-}
-
impl<'a> HashStable<StableHashingContext<'a>> for ast::NodeId {
fn hash_stable(&self, _: &mut StableHashingContext<'a>, _: &mut StableHasher) {
panic!("Node IDs should not appear in incremental state");
}
}
}
+
+ fn hash_hir_item_like<F: FnOnce(&mut Self)>(&mut self, f: F) {
+ let prev_hash_node_ids = self.node_id_hashing_mode;
+ self.node_id_hashing_mode = NodeIdHashingMode::Ignore;
+
+ f(self);
+
+ self.node_id_hashing_mode = prev_hash_node_ids;
+ }
+
+ #[inline]
+ fn local_def_path_hash(&self, def_id: LocalDefId) -> DefPathHash {
+ self.local_def_path_hash(def_id)
+ }
}
impl<'a> ToStableHashKey<StableHashingContext<'a>> for DefId {
}
}
-impl<'a> HashStable<StableHashingContext<'a>> for hir::TraitItem<'_> {
- fn hash_stable(&self, hcx: &mut StableHashingContext<'a>, hasher: &mut StableHasher) {
- let hir::TraitItem { hir_id: _, ident, ref attrs, ref generics, ref kind, span } = *self;
-
- hcx.hash_hir_item_like(|hcx| {
- ident.name.hash_stable(hcx, hasher);
- attrs.hash_stable(hcx, hasher);
- generics.hash_stable(hcx, hasher);
- kind.hash_stable(hcx, hasher);
- span.hash_stable(hcx, hasher);
- });
- }
-}
-
-impl<'a> HashStable<StableHashingContext<'a>> for hir::ImplItem<'_> {
- fn hash_stable(&self, hcx: &mut StableHashingContext<'a>, hasher: &mut StableHasher) {
- let hir::ImplItem {
- hir_id: _,
- ident,
- ref vis,
- defaultness,
- ref attrs,
- ref generics,
- ref kind,
- span,
- } = *self;
-
- hcx.hash_hir_item_like(|hcx| {
- ident.name.hash_stable(hcx, hasher);
- vis.hash_stable(hcx, hasher);
- defaultness.hash_stable(hcx, hasher);
- attrs.hash_stable(hcx, hasher);
- generics.hash_stable(hcx, hasher);
- kind.hash_stable(hcx, hasher);
- span.hash_stable(hcx, hasher);
- });
- }
-}
-
-impl<'a> HashStable<StableHashingContext<'a>> for hir::Item<'_> {
- fn hash_stable(&self, hcx: &mut StableHashingContext<'a>, hasher: &mut StableHasher) {
- let hir::Item { ident, ref attrs, hir_id: _, ref kind, ref vis, span } = *self;
-
- hcx.hash_hir_item_like(|hcx| {
- ident.name.hash_stable(hcx, hasher);
- attrs.hash_stable(hcx, hasher);
- kind.hash_stable(hcx, hasher);
- vis.hash_stable(hcx, hasher);
- span.hash_stable(hcx, hasher);
- });
- }
-}
-
impl<'a> HashStable<StableHashingContext<'a>> for hir::Body<'_> {
fn hash_stable(&self, hcx: &mut StableHashingContext<'a>, hasher: &mut StableHasher) {
let hir::Body { params, value, generator_kind } = self;
}
}
-impl<'a> HashStable<StableHashingContext<'a>> for hir::def_id::DefIndex {
- fn hash_stable(&self, hcx: &mut StableHashingContext<'a>, hasher: &mut StableHasher) {
- hcx.local_def_path_hash(*self).hash_stable(hcx, hasher);
- }
-}
-
-impl<'a> ToStableHashKey<StableHashingContext<'a>> for hir::def_id::DefIndex {
- type KeyType = DefPathHash;
-
- #[inline]
- fn to_stable_hash_key(&self, hcx: &StableHashingContext<'a>) -> DefPathHash {
- hcx.local_def_path_hash(*self)
- }
-}
-
impl<'a> HashStable<StableHashingContext<'a>> for hir::TraitCandidate {
fn hash_stable(&self, hcx: &mut StableHashingContext<'a>, hasher: &mut StableHasher) {
hcx.with_node_id_hashing_mode(NodeIdHashingMode::HashDefPath, |hcx| {
use rustc_ast::ast;
use rustc_data_structures::stable_hasher::{HashStable, StableHasher};
-use rustc_hir::def_id::{CrateNum, DefId, CRATE_DEF_INDEX};
use rustc_span::SourceFile;
use smallvec::SmallVec;
name_hash,
name_was_remapped,
unmapped_path: _,
- crate_of_origin,
+ cnum,
// Do not hash the source as it is not encoded
src: _,
src_hash,
(name_hash as u64).hash_stable(hcx, hasher);
name_was_remapped.hash_stable(hcx, hasher);
- DefId { krate: CrateNum::from_u32(crate_of_origin), index: CRATE_DEF_INDEX }
- .hash_stable(hcx, hasher);
-
src_hash.hash_stable(hcx, hasher);
// We only hash the relative position within this source_file
for &char_pos in normalized_pos.iter() {
stable_normalized_pos(char_pos, start_pos).hash_stable(hcx, hasher);
}
+
+ cnum.hash_stable(hcx, hasher);
}
}
//! `instantiate_query_result` method.
//!
//! For a more detailed look at what is happening here, check
-//! out the [chapter in the rustc guide][c].
+//! out the [chapter in the rustc dev guide][c].
//!
-//! [c]: https://rust-lang.github.io/rustc-guide/traits/canonicalization.html
+//! [c]: https://rustc-dev-guide.rust-lang.org/traits/canonicalization.html
use crate::infer::MemberConstraint;
use crate::ty::subst::GenericArg;
//! (or `tcx`), which is the central context during most of
//! compilation, containing the interners and other things.
//!
-//! For more information about how rustc works, see the [rustc guide].
+//! For more information about how rustc works, see the [rustc dev guide].
//!
-//! [rustc guide]: https://rust-lang.github.io/rustc-guide/
+//! [rustc dev guide]: https://rustc-dev-guide.rust-lang.org/
//!
//! # Note
//!
#![feature(bool_to_option)]
#![feature(box_patterns)]
#![feature(box_syntax)]
+#![feature(const_if_match)]
+#![feature(const_fn)]
+#![feature(const_panic)]
#![feature(const_transmute)]
#![feature(core_intrinsics)]
#![feature(drain_filter)]
pub mod lint;
pub mod middle;
pub mod mir;
-pub use rustc_session as session;
pub mod traits;
pub mod ty;
use rustc_data_structures::stable_hasher::{HashStable, StableHasher};
use rustc_errors::{pluralize, Applicability, DiagnosticBuilder, DiagnosticId};
use rustc_hir::HirId;
-pub use rustc_session::lint::{builtin, Level, Lint, LintId, LintPass};
+use rustc_session::lint::{builtin, Level, Lint, LintId};
use rustc_session::{DiagnosticMessageId, Session};
use rustc_span::hygiene::MacroKind;
use rustc_span::source_map::{DesugaringKind, ExpnKind, MultiSpan};
ExpnKind::Root | ExpnKind::Desugaring(DesugaringKind::ForLoop) => false,
ExpnKind::AstPass(_) | ExpnKind::Desugaring(_) => true, // well, it's "external"
ExpnKind::Macro(MacroKind::Bang, _) => {
- if expn_data.def_site.is_dummy() {
- // Dummy span for the `def_site` means it's an external macro.
- return true;
- }
- match sess.source_map().span_to_snippet(expn_data.def_site) {
- Ok(code) => !code.starts_with("macro_rules"),
- // No snippet means external macro or compiler-builtin expansion.
- Err(_) => true,
- }
+ // Dummy span for the `def_site` means it's an external macro.
+ expn_data.def_site.is_dummy() || sess.source_map().is_imported(expn_data.def_site)
}
ExpnKind::Macro(..) => true, // definitely a plugin
}
/// "weird symbol" for the standard library in that it has slightly
/// different linkage, visibility, and reachability rules.
const RUSTC_STD_INTERNAL_SYMBOL = 1 << 6;
- /// `#[no_debug]`: an indicator that no debugging information should be
- /// generated for this function by LLVM.
- const NO_DEBUG = 1 << 7;
/// `#[thread_local]`: indicates a static is actually a thread local
/// piece of memory
const THREAD_LOCAL = 1 << 8;
//! are *mostly* used as a part of that interface, but these should
//! probably get a better home if someone can find one.
-use crate::hir::map as hir_map;
-use crate::hir::map::definitions::{DefKey, DefPathTable};
-use crate::session::search_paths::PathKind;
-use crate::session::CrateDisambiguator;
+pub use self::NativeLibraryKind::*;
+
+use crate::hir::map::definitions::{DefKey, DefPath, DefPathHash, DefPathTable};
use crate::ty::TyCtxt;
use rustc_ast::ast;
use rustc_data_structures::sync::{self, MetadataRef};
use rustc_hir::def_id::{CrateNum, DefId, LOCAL_CRATE};
use rustc_macros::HashStable;
+use rustc_session::search_paths::PathKind;
+pub use rustc_session::utils::NativeLibraryKind;
+use rustc_session::CrateDisambiguator;
use rustc_span::symbol::Symbol;
use rustc_span::Span;
use rustc_target::spec::Target;
+
use std::any::Any;
use std::path::{Path, PathBuf};
-pub use self::NativeLibraryKind::*;
-pub use rustc_session::utils::NativeLibraryKind;
-
// lonely orphan structs and enums looking for a better home
/// Where a crate came from on the local filesystem. One of these three options
HashStable
)]
pub enum DepKind {
- /// A dependency that is only used for its macros, none of which are visible from other crates.
- /// These are included in the metadata only as placeholders and are ignored when decoding.
- UnexportedMacrosOnly,
/// A dependency that is only used for its macros.
MacrosOnly,
/// A dependency that is always injected into the dependency list and so
impl DepKind {
pub fn macros_only(self) -> bool {
match self {
- DepKind::UnexportedMacrosOnly | DepKind::MacrosOnly => true,
+ DepKind::MacrosOnly => true,
DepKind::Implicit | DepKind::Explicit => false,
}
}
// resolve
fn def_key(&self, def: DefId) -> DefKey;
- fn def_path(&self, def: DefId) -> hir_map::DefPath;
- fn def_path_hash(&self, def: DefId) -> hir_map::DefPathHash;
+ fn def_path(&self, def: DefId) -> DefPath;
+ fn def_path_hash(&self, def: DefId) -> DefPathHash;
fn def_path_table(&self, cnum: CrateNum) -> &DefPathTable;
// "queries" used in resolve that aren't tracked for incremental compilation
//! For all the gory details, see the provider of the `dependency_formats`
//! query.
-use crate::session::config;
+use rustc_session::config;
/// A list of dependencies for a certain crate type.
///
// symbols. Other panic runtimes ensure that the relevant symbols are
// available to link things together, but they're never exercised.
if tcx.sess.panic_strategy() != PanicStrategy::Unwind {
- return lang_item == LangItem::EhPersonalityLangItem
- || lang_item == LangItem::EhUnwindResumeLangItem;
+ return lang_item == LangItem::EhPersonalityLangItem;
}
false
--- /dev/null
+//! Registering limits: `recursion_limit`, `type_length_limit`, and `const_eval_limit`
+//!
+//! There are various parts of the compiler that must impose arbitrary limits
+//! on how deeply they recurse to prevent stack overflow. Users can override
+//! this via an attribute on the crate, such as `#![recursion_limit = "22"]`. This pass
+//! simply looks for those attributes.
+
+use rustc::bug;
+use rustc_ast::ast;
+use rustc_data_structures::sync::Once;
+use rustc_session::Session;
+use rustc_span::symbol::{sym, Symbol};
+
+use std::num::IntErrorKind;
+
+pub fn update_limits(sess: &Session, krate: &ast::Crate) {
+ update_limit(sess, krate, &sess.recursion_limit, sym::recursion_limit, 128);
+ update_limit(sess, krate, &sess.type_length_limit, sym::type_length_limit, 1048576);
+ update_limit(sess, krate, &sess.const_eval_limit, sym::const_eval_limit, 1_000_000);
+}
+
+fn update_limit(
+ sess: &Session,
+ krate: &ast::Crate,
+ limit: &Once<usize>,
+ name: Symbol,
+ default: usize,
+) {
+ for attr in &krate.attrs {
+ if !attr.check_name(name) {
+ continue;
+ }
+
+ if let Some(s) = attr.value_str() {
+ match s.as_str().parse() {
+ Ok(n) => {
+ limit.set(n);
+ return;
+ }
+ Err(e) => {
+ let mut err =
+ sess.struct_span_err(attr.span, "`limit` must be a non-negative integer");
+
+ let value_span = attr
+ .meta()
+ .and_then(|meta| meta.name_value_literal().cloned())
+ .map(|lit| lit.span)
+ .unwrap_or(attr.span);
+
+ let error_str = match e.kind() {
+ IntErrorKind::Overflow => "`limit` is too large",
+ IntErrorKind::Empty => "`limit` must be a non-negative integer",
+ IntErrorKind::InvalidDigit => "not a valid integer",
+ IntErrorKind::Underflow => bug!("`limit` should never underflow"),
+ IntErrorKind::Zero => bug!("zero is a valid `limit`"),
+ kind => bug!("unimplemented IntErrorKind variant: {:?}", kind),
+ };
+
+ err.span_label(value_span, error_str);
+ err.emit();
+ }
+ }
+ }
+ }
+ limit.set(default);
+}
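As a standalone illustration (not compiler code), the accepted and rejected forms of a limit value fall directly out of `str::parse::<usize>()`, which is what the `s.as_str().parse()` call in `update_limit` above relies on; `parse_limit` here is a hypothetical stand-in:

```rust
// Minimal sketch of how `#![recursion_limit = "N"]` values parse.
// `parse_limit` is a hypothetical helper mirroring the parse call in
// `update_limit`: empty strings, negative values, and overflowing
// values are all rejected by `str::parse::<usize>`.
fn parse_limit(s: &str) -> Result<usize, std::num::ParseIntError> {
    s.parse::<usize>()
}

fn main() {
    assert_eq!(parse_limit("22").unwrap(), 22); // valid override
    assert!(parse_limit("").is_err()); // empty: rejected
    assert!(parse_limit("-1").is_err()); // negative: rejected
    assert!(parse_limit("99999999999999999999999").is_err()); // overflow: rejected
}
```

This is also why the error arms above can distinguish `IntErrorKind::Empty`, `InvalidDigit`, and `Overflow` while treating `Underflow` and `Zero` as unreachable for an unsigned parse.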
}
}
}
+pub mod limits;
pub mod privacy;
-pub mod recursion_limit;
pub mod region;
pub mod resolve_lifetime;
pub mod stability;
+++ /dev/null
-// Recursion limit.
-//
-// There are various parts of the compiler that must impose arbitrary limits
-// on how deeply they recurse to prevent stack overflow. Users can override
-// this via an attribute on the crate like `#![recursion_limit="22"]`. This pass
-// just peeks and looks for that attribute.
-
-use crate::session::Session;
-use core::num::IntErrorKind;
-use rustc::bug;
-use rustc_ast::ast;
-use rustc_span::symbol::{sym, Symbol};
-
-use rustc_data_structures::sync::Once;
-
-pub fn update_limits(sess: &Session, krate: &ast::Crate) {
- update_limit(sess, krate, &sess.recursion_limit, sym::recursion_limit, 128);
- update_limit(sess, krate, &sess.type_length_limit, sym::type_length_limit, 1048576);
-}
-
-fn update_limit(
- sess: &Session,
- krate: &ast::Crate,
- limit: &Once<usize>,
- name: Symbol,
- default: usize,
-) {
- for attr in &krate.attrs {
- if !attr.check_name(name) {
- continue;
- }
-
- if let Some(s) = attr.value_str() {
- match s.as_str().parse() {
- Ok(n) => {
- limit.set(n);
- return;
- }
- Err(e) => {
- let mut err = sess.struct_span_err(
- attr.span,
- "`recursion_limit` must be a non-negative integer",
- );
-
- let value_span = attr
- .meta()
- .and_then(|meta| meta.name_value_literal().cloned())
- .map(|lit| lit.span)
- .unwrap_or(attr.span);
-
- let error_str = match e.kind() {
- IntErrorKind::Overflow => "`recursion_limit` is too large",
- IntErrorKind::Empty => "`recursion_limit` must be a non-negative integer",
- IntErrorKind::InvalidDigit => "not a valid integer",
- IntErrorKind::Underflow => bug!("`recursion_limit` should never underflow"),
- IntErrorKind::Zero => bug!("zero is a valid `recursion_limit`"),
- kind => bug!("unimplemented IntErrorKind variant: {:?}", kind),
- };
-
- err.span_label(value_span, error_str);
- err.emit();
- }
- }
- }
- }
- limit.set(default);
-}
//! the parent links in the region hierarchy.
//!
//! For more information about how MIR-based region-checking works,
-//! see the [rustc guide].
+//! see the [rustc dev guide].
//!
-//! [rustc guide]: https://rust-lang.github.io/rustc-guide/mir/borrowck.html
+//! [rustc dev guide]: https://rustc-dev-guide.rust-lang.org/mir/borrowck.html
use crate::ich::{NodeIdHashingMode, StableHashingContext};
use crate::ty::{self, DefIdTree, TyCtxt};
use rustc_hir as hir;
-use rustc_hir::def_id::DefId;
use rustc_hir::Node;
use rustc_data_structures::fx::FxHashMap;
region scope tree for {:?} / {:?}",
param_owner,
self.root_parent.map(|id| tcx.hir().local_def_id(id)),
- self.root_body.map(|hir_id| DefId::local(hir_id.owner))
+ self.root_body.map(|hir_id| hir_id.owner)
),
);
}
pub use self::StabilityLevel::*;
-use crate::session::{DiagnosticMessageId, Session};
use crate::ty::{self, TyCtxt};
use rustc_ast::ast::CRATE_NODE_ID;
use rustc_attr::{self as attr, ConstStability, Deprecation, RustcDeprecation, Stability};
use rustc_session::lint::builtin::{DEPRECATED, DEPRECATED_IN_FUTURE, SOFT_UNSTABLE};
use rustc_session::lint::{BuiltinLintDiagnostics, Lint, LintBuffer};
use rustc_session::parse::feature_err_issue;
+use rustc_session::{DiagnosticMessageId, Session};
use rustc_span::symbol::{sym, Symbol};
use rustc_span::{MultiSpan, Span};
let span_key = msp.primary_span().and_then(|sp: Span| {
if !sp.is_dummy() {
let file = sm.lookup_char_pos(sp.lo()).file;
- if file.name.is_macros() { None } else { Some(span) }
+ if file.is_imported() { None } else { Some(span) }
} else {
None
}
fn skip_stability_check_due_to_privacy(tcx: TyCtxt<'_>, mut def_id: DefId) -> bool {
// Check if `def_id` is a trait method.
match tcx.def_kind(def_id) {
- Some(DefKind::Method) | Some(DefKind::AssocTy) | Some(DefKind::AssocConst) => {
+ Some(DefKind::AssocFn) | Some(DefKind::AssocTy) | Some(DefKind::AssocConst) => {
if let ty::TraitContainer(trait_def_id) = tcx.associated_item(def_id).container {
// Trait methods do not declare visibility (even
// for visibility info in cstore). Use containing
/// The size of the allocation. Currently, must always equal `bytes.len()`.
pub size: Size,
/// The alignment of the allocation to detect unaligned reads.
+ /// (`Align` guarantees that this is a power of two.)
pub align: Align,
/// `true` if the allocation is mutable.
/// Also used by codegen to determine if a static should be put into mutable memory,
&self.get_bytes(cx, ptr, size_with_null)?[..size]
}
// This includes the case where `offset` is out-of-bounds to begin with.
- None => throw_unsup!(UnterminatedCString(ptr.erase_tag())),
+ None => throw_ub!(UnterminatedCString(ptr.erase_tag())),
})
}
fn check_defined(&self, ptr: Pointer<Tag>, size: Size) -> InterpResult<'tcx> {
self.undef_mask
.is_range_defined(ptr.offset, ptr.offset + size)
- .or_else(|idx| throw_unsup!(ReadUndefBytes(idx)))
+ .or_else(|idx| throw_ub!(InvalidUndefBytes(Some(Pointer::new(ptr.alloc_id, idx)))))
}
pub fn mark_definedness(&mut self, ptr: Pointer<Tag>, size: Size, new_state: bool) {
// a naive undef mask copying algorithm would repeatedly have to read the undef mask from
// the source and write it to the destination. Even if we optimized the memory accesses,
// we'd be doing all of this `repeat` times.
- // Therefor we precompute a compressed version of the undef mask of the source value and
+ // Therefore we precompute a compressed version of the undef mask of the source value and
// then write it back `repeat` times without computing any more information from the source.
// A precomputed cache for ranges of defined/undefined bits
// First set all bits except the first `bita`,
// then unset the last `64 - bitb` bits.
let range = if bitb == 0 {
- u64::max_value() << bita
+ u64::MAX << bita
} else {
- (u64::max_value() << bita) & (u64::max_value() >> (64 - bitb))
+ (u64::MAX << bita) & (u64::MAX >> (64 - bitb))
};
if new_state {
self.blocks[blocka] |= range;
// across block boundaries
if new_state {
// Set `bita..64` to `1`.
- self.blocks[blocka] |= u64::max_value() << bita;
+ self.blocks[blocka] |= u64::MAX << bita;
// Set `0..bitb` to `1`.
if bitb != 0 {
- self.blocks[blockb] |= u64::max_value() >> (64 - bitb);
+ self.blocks[blockb] |= u64::MAX >> (64 - bitb);
}
// Fill in all the other blocks (much faster than one bit at a time).
for block in (blocka + 1)..blockb {
- self.blocks[block] = u64::max_value();
+ self.blocks[block] = u64::MAX;
}
} else {
// Set `bita..64` to `0`.
- self.blocks[blocka] &= !(u64::max_value() << bita);
+ self.blocks[blocka] &= !(u64::MAX << bita);
// Set `0..bitb` to `0`.
if bitb != 0 {
- self.blocks[blockb] &= !(u64::max_value() >> (64 - bitb));
+ self.blocks[blockb] &= !(u64::MAX >> (64 - bitb));
}
// Fill in all the other blocks (much faster than one bit at a time).
for block in (blocka + 1)..blockb {
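The mask arithmetic above can be checked in isolation. This sketch (assumed helper `range_mask`, not the actual `UndefMask` code) reproduces the formula for selecting bits `bita..bitb` within one 64-bit block, where `bitb == 0` means "through the end of the block":

```rust
// Sketch of the bit-range mask used above: `u64::MAX << bita` sets all
// bits from `bita` upward, and `u64::MAX >> (64 - bitb)` keeps only the
// bits below `bitb`. The `bitb == 0` branch avoids a shift by 64, which
// would be undefined for u64.
fn range_mask(bita: u32, bitb: u32) -> u64 {
    if bitb == 0 {
        u64::MAX << bita
    } else {
        (u64::MAX << bita) & (u64::MAX >> (64 - bitb))
    }
}

fn main() {
    assert_eq!(range_mask(0, 4), 0b1111); // bits 0..4
    assert_eq!(range_mask(2, 6), 0b11_1100); // bits 2..6
    assert_eq!(range_mask(60, 0), 0xF000_0000_0000_0000); // bits 60..64
}
```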
-use super::{CheckInAllocMsg, Pointer, RawConst, ScalarMaybeUndef};
+use super::{AllocId, CheckInAllocMsg, Pointer, RawConst, ScalarMaybeUndef};
use crate::hir::map::definitions::DefPathData;
-use crate::mir;
use crate::mir::interpret::ConstValue;
use crate::ty::layout::{Align, LayoutError, Size};
use crate::ty::query::TyCtxtAt;
+use crate::ty::tls;
use crate::ty::{self, layout, Ty};
use backtrace::Backtrace;
+use rustc_data_structures::sync::Lock;
use rustc_errors::{struct_span_err, DiagnosticBuilder};
use rustc_hir as hir;
use rustc_macros::HashStable;
-use rustc_span::{Pos, Span};
-use rustc_target::spec::abi::Abi;
-use std::{any::Any, env, fmt};
+use rustc_session::CtfeBacktrace;
+use rustc_span::{def_id::DefId, Pos, Span};
+use std::{any::Any, fmt};
#[derive(Debug, Copy, Clone, PartialEq, Eq, HashStable, RustcEncodable, RustcDecodable)]
pub enum ErrorHandled {
eprintln!("\n\nAn error occurred in miri:\n{:?}", backtrace);
}
-impl From<ErrorHandled> for InterpErrorInfo<'tcx> {
+impl From<ErrorHandled> for InterpErrorInfo<'_> {
fn from(err: ErrorHandled) -> Self {
match err {
ErrorHandled::Reported => err_inval!(ReferencedConstant),
impl<'tcx> From<InterpError<'tcx>> for InterpErrorInfo<'tcx> {
fn from(kind: InterpError<'tcx>) -> Self {
- let backtrace = match env::var("RUSTC_CTFE_BACKTRACE") {
- // Matching `RUST_BACKTRACE` -- we treat "0" the same as "not present".
- Ok(ref val) if val != "0" => {
- let mut backtrace = Backtrace::new_unresolved();
+ let capture_backtrace = tls::with_context_opt(|ctxt| {
+ if let Some(ctxt) = ctxt {
+ *Lock::borrow(&ctxt.tcx.sess.ctfe_backtrace)
+ } else {
+ CtfeBacktrace::Disabled
+ }
+ });
- if val == "immediate" {
- // Print it now.
- print_backtrace(&mut backtrace);
- None
- } else {
- Some(Box::new(backtrace))
- }
+ let backtrace = match capture_backtrace {
+ CtfeBacktrace::Disabled => None,
+ CtfeBacktrace::Capture => Some(Box::new(Backtrace::new_unresolved())),
+ CtfeBacktrace::Immediate => {
+ // Print it now.
+ let mut backtrace = Backtrace::new_unresolved();
+ print_backtrace(&mut backtrace);
+ None
}
- _ => None,
};
+
InterpErrorInfo { kind, backtrace }
}
}
TypeckError,
/// An error occurred during layout computation.
Layout(layout::LayoutError<'tcx>),
+ /// An invalid transmute happened.
+ TransmuteSizeDiff(Ty<'tcx>, Ty<'tcx>),
}
-impl fmt::Debug for InvalidProgramInfo<'tcx> {
+impl fmt::Debug for InvalidProgramInfo<'_> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
use InvalidProgramInfo::*;
match self {
ReferencedConstant => write!(f, "referenced constant has errors"),
TypeckError => write!(f, "encountered constants with type errors, stopping evaluation"),
Layout(ref err) => write!(f, "{}", err),
+ TransmuteSizeDiff(from_ty, to_ty) => write!(
+ f,
+ "tried to transmute from {:?} to {:?}, but their sizes differed",
+ from_ty, to_ty
+ ),
}
}
}
/// An enum discriminant was set to a value which was outside the range of valid values.
InvalidDiscriminant(ScalarMaybeUndef),
/// A slice/array index projection went out-of-bounds.
- BoundsCheckFailed { len: u64, index: u64 },
+ BoundsCheckFailed {
+ len: u64,
+ index: u64,
+ },
/// Something was divided by 0 (x / 0).
DivisionByZero,
/// Something was "remainded" by 0 (x % 0).
RemainderByZero,
/// Overflowing inbounds pointer arithmetic.
PointerArithOverflow,
+ /// Invalid metadata in a wide pointer (using `str` to avoid allocations).
+ InvalidMeta(&'static str),
+ /// Reading a C string that does not end within its allocation.
+ UnterminatedCString(Pointer),
+ /// Dereferencing a dangling pointer after it got freed.
+ PointerUseAfterFree(AllocId),
+ /// Used a pointer outside the bounds it is valid for.
+ PointerOutOfBounds {
+ ptr: Pointer,
+ msg: CheckInAllocMsg,
+ allocation_size: Size,
+ },
+ /// Used a pointer with bad alignment.
+ AlignmentCheckFailed {
+ required: Align,
+ has: Align,
+ },
+ /// Using an integer as a pointer in the wrong way.
+ InvalidIntPointerUsage(u64),
+ /// Writing to read-only memory.
+ WriteToReadOnly(AllocId),
+ /// Using a pointer-not-to-a-function as function pointer.
+ InvalidFunctionPointer(Pointer),
+    /// Trying to access the data behind a function pointer.
+ DerefFunctionPointer(AllocId),
+ /// The value validity check found a problem.
+ /// Should only be thrown by `validity.rs` and always point out which part of the value
+ /// is the problem.
+ ValidationFailure(String),
+ /// Using a non-boolean `u8` as bool.
+ InvalidBool(u8),
+ /// Using a non-character `u32` as character.
+ InvalidChar(u32),
+ /// Using uninitialized data where it is not allowed.
+ InvalidUndefBytes(Option<Pointer>),
+ /// Working with a local that is not currently live.
+ DeadLocal,
+ /// Trying to read from the return place of a function.
+ ReadFromReturnPlace,
}
impl fmt::Debug for UndefinedBehaviorInfo {
DivisionByZero => write!(f, "dividing by zero"),
RemainderByZero => write!(f, "calculating the remainder with a divisor of zero"),
PointerArithOverflow => write!(f, "overflowing in-bounds pointer arithmetic"),
+ InvalidMeta(msg) => write!(f, "invalid metadata in wide pointer: {}", msg),
+ UnterminatedCString(p) => write!(
+ f,
+ "reading a null-terminated string starting at {:?} with no null found before end of allocation",
+ p,
+ ),
+ PointerUseAfterFree(a) => {
+ write!(f, "pointer to {:?} was dereferenced after this allocation got freed", a)
+ }
+ PointerOutOfBounds { ptr, msg, allocation_size } => write!(
+ f,
+ "{} failed: pointer must be in-bounds at offset {}, \
+ but is outside bounds of {} which has size {}",
+ msg,
+ ptr.offset.bytes(),
+ ptr.alloc_id,
+ allocation_size.bytes()
+ ),
+ InvalidIntPointerUsage(0) => write!(f, "invalid use of NULL pointer"),
+ InvalidIntPointerUsage(i) => write!(f, "invalid use of {} as a pointer", i),
+ AlignmentCheckFailed { required, has } => write!(
+ f,
+ "accessing memory with alignment {}, but alignment {} is required",
+ has.bytes(),
+ required.bytes()
+ ),
+ WriteToReadOnly(a) => write!(f, "writing to {:?} which is read-only", a),
+ InvalidFunctionPointer(p) => {
+ write!(f, "using {:?} as function pointer but it does not point to a function", p)
+ }
+ DerefFunctionPointer(a) => write!(f, "accessing {:?} which contains a function", a),
+ ValidationFailure(ref err) => write!(f, "type validation failed: {}", err),
+ InvalidBool(b) => write!(f, "interpreting an invalid 8-bit value as a bool: {}", b),
+ InvalidChar(c) => write!(f, "interpreting an invalid 32-bit value as a char: {}", c),
+ InvalidUndefBytes(Some(p)) => write!(
+ f,
+ "reading uninitialized memory at {:?}, but this operation requires initialized memory",
+ p
+ ),
+ InvalidUndefBytes(None) => write!(
+ f,
+ "using uninitialized data, but this operation requires initialized memory"
+ ),
+ DeadLocal => write!(f, "accessing a dead local variable"),
+ ReadFromReturnPlace => write!(f, "tried to read from the return place"),
}
}
}
///
/// Currently, we also use this as fall-back error kind for errors that have not been
/// categorized yet.
-pub enum UnsupportedOpInfo<'tcx> {
+pub enum UnsupportedOpInfo {
/// Free-form case. Only for errors that are never caught!
Unsupported(String),
-
/// When const-prop encounters a situation it does not support, it raises this error.
- /// This must not allocate for performance reasons.
- ConstPropUnsupported(&'tcx str),
-
- // -- Everything below is not categorized yet --
- FunctionAbiMismatch(Abi, Abi),
- FunctionArgMismatch(Ty<'tcx>, Ty<'tcx>),
- FunctionRetMismatch(Ty<'tcx>, Ty<'tcx>),
- FunctionArgCountMismatch,
- UnterminatedCString(Pointer),
- DanglingPointerDeref,
- DoubleFree,
- InvalidMemoryAccess,
- InvalidFunctionPointer,
- InvalidBool,
- PointerOutOfBounds {
- ptr: Pointer,
- msg: CheckInAllocMsg,
- allocation_size: Size,
- },
- InvalidNullPointerUsage,
+ /// This must not allocate for performance reasons (hence `str`, not `String`).
+ ConstPropUnsupported(&'static str),
+ /// Accessing an unsupported foreign static.
+ ReadForeignStatic(DefId),
+ /// Could not find MIR for a function.
+ NoMirFor(DefId),
+ /// Modified a static during const-eval.
+ /// FIXME: move this to `ConstEvalErrKind` through a machine hook.
+ ModifiedStatic,
+ /// Encountered a pointer where we needed raw bytes.
ReadPointerAsBytes,
+ /// Encountered raw bytes where we needed a pointer.
ReadBytesAsPointer,
- ReadForeignStatic,
- InvalidPointerMath,
- ReadUndefBytes(Size),
- DeadLocal,
- InvalidBoolOp(mir::BinOp),
- UnimplementedTraitSelection,
- CalledClosureAsFunction,
- NoMirFor(String),
- DerefFunctionPointer,
- ExecuteMemory,
- InvalidChar(u128),
- OutOfTls,
- TlsOutOfBounds,
- AlignmentCheckFailed {
- required: Align,
- has: Align,
- },
- ValidationFailure(String),
- VtableForArgumentlessMethod,
- ModifiedConstantMemory,
- ModifiedStatic,
- TypeNotPrimitive(Ty<'tcx>),
- ReallocatedWrongMemoryKind(String, String),
- DeallocatedWrongMemoryKind(String, String),
- ReallocateNonBasePtr,
- DeallocateNonBasePtr,
- IncorrectAllocationInformation(Size, Size, Align, Align),
- HeapAllocZeroBytes,
- HeapAllocNonPowerOfTwoAlignment(u64),
- ReadFromReturnPointer,
- PathNotFound(Vec<String>),
- TransmuteSizeDiff(Ty<'tcx>, Ty<'tcx>),
}
-impl fmt::Debug for UnsupportedOpInfo<'tcx> {
+impl fmt::Debug for UnsupportedOpInfo {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
use UnsupportedOpInfo::*;
match self {
- PointerOutOfBounds { ptr, msg, allocation_size } => write!(
- f,
- "{} failed: pointer must be in-bounds at offset {}, \
- but is outside bounds of allocation {} which has size {}",
- msg,
- ptr.offset.bytes(),
- ptr.alloc_id,
- allocation_size.bytes()
- ),
- ValidationFailure(ref err) => write!(f, "type validation failed: {}", err),
- NoMirFor(ref func) => write!(f, "no MIR for `{}`", func),
- FunctionAbiMismatch(caller_abi, callee_abi) => write!(
- f,
- "tried to call a function with ABI {:?} using caller ABI {:?}",
- callee_abi, caller_abi
- ),
- FunctionArgMismatch(caller_ty, callee_ty) => write!(
- f,
- "tried to call a function with argument of type {:?} \
- passing data of type {:?}",
- callee_ty, caller_ty
- ),
- TransmuteSizeDiff(from_ty, to_ty) => write!(
- f,
- "tried to transmute from {:?} to {:?}, but their sizes differed",
- from_ty, to_ty
- ),
- FunctionRetMismatch(caller_ty, callee_ty) => write!(
- f,
- "tried to call a function with return type {:?} \
- passing return place of type {:?}",
- callee_ty, caller_ty
- ),
- FunctionArgCountMismatch => {
- write!(f, "tried to call a function with incorrect number of arguments")
- }
- ReallocatedWrongMemoryKind(ref old, ref new) => {
- write!(f, "tried to reallocate memory from `{}` to `{}`", old, new)
- }
- DeallocatedWrongMemoryKind(ref old, ref new) => {
- write!(f, "tried to deallocate `{}` memory but gave `{}` as the kind", old, new)
- }
- InvalidChar(c) => {
- write!(f, "tried to interpret an invalid 32-bit value as a char: {}", c)
- }
- AlignmentCheckFailed { required, has } => write!(
- f,
- "tried to access memory with alignment {}, but alignment {} is required",
- has.bytes(),
- required.bytes()
- ),
- TypeNotPrimitive(ty) => write!(f, "expected primitive type, got {}", ty),
- PathNotFound(ref path) => write!(f, "cannot find path {:?}", path),
- IncorrectAllocationInformation(size, size2, align, align2) => write!(
- f,
- "incorrect alloc info: expected size {} and align {}, \
- got size {} and align {}",
- size.bytes(),
- align.bytes(),
- size2.bytes(),
- align2.bytes()
- ),
- InvalidMemoryAccess => write!(f, "tried to access memory through an invalid pointer"),
- DanglingPointerDeref => write!(f, "dangling pointer was dereferenced"),
- DoubleFree => write!(f, "tried to deallocate dangling pointer"),
- InvalidFunctionPointer => {
- write!(f, "tried to use a function pointer after offsetting it")
- }
- InvalidBool => write!(f, "invalid boolean value read"),
- InvalidNullPointerUsage => write!(f, "invalid use of NULL pointer"),
- ReadPointerAsBytes => write!(
- f,
- "a raw memory access tried to access part of a pointer value as raw \
- bytes"
- ),
- ReadBytesAsPointer => {
- write!(f, "a memory access tried to interpret some bytes as a pointer")
- }
- ReadForeignStatic => write!(f, "tried to read from foreign (extern) static"),
- InvalidPointerMath => write!(
- f,
- "attempted to do invalid arithmetic on pointers that would leak base \
- addresses, e.g., comparing pointers into different allocations"
- ),
- DeadLocal => write!(f, "tried to access a dead local variable"),
- DerefFunctionPointer => write!(f, "tried to dereference a function pointer"),
- ExecuteMemory => write!(f, "tried to treat a memory pointer as a function pointer"),
- OutOfTls => write!(f, "reached the maximum number of representable TLS keys"),
- TlsOutOfBounds => write!(f, "accessed an invalid (unallocated) TLS key"),
- CalledClosureAsFunction => {
- write!(f, "tried to call a closure through a function pointer")
+ Unsupported(ref msg) => write!(f, "{}", msg),
+ ConstPropUnsupported(ref msg) => {
+ write!(f, "Constant propagation encountered an unsupported situation: {}", msg)
}
- VtableForArgumentlessMethod => {
- write!(f, "tried to call a vtable function without arguments")
+ ReadForeignStatic(did) => {
+ write!(f, "tried to read from foreign (extern) static {:?}", did)
}
- ModifiedConstantMemory => write!(f, "tried to modify constant memory"),
+ NoMirFor(did) => write!(f, "could not load MIR for {:?}", did),
ModifiedStatic => write!(
f,
"tried to modify a static's initial value from another static's \
initializer"
),
- ReallocateNonBasePtr => write!(
- f,
- "tried to reallocate with a pointer not to the beginning of an \
- existing object"
- ),
- DeallocateNonBasePtr => write!(
- f,
- "tried to deallocate with a pointer not to the beginning of an \
- existing object"
- ),
- HeapAllocZeroBytes => write!(f, "tried to re-, de- or allocate zero bytes on the heap"),
- ReadFromReturnPointer => write!(f, "tried to read from the return pointer"),
- UnimplementedTraitSelection => {
- write!(f, "there were unresolved type arguments during trait selection")
- }
- InvalidBoolOp(_) => write!(f, "invalid boolean operation"),
- UnterminatedCString(_) => write!(
- f,
- "attempted to get length of a null-terminated string, but no null \
- found before end of allocation"
- ),
- ReadUndefBytes(_) => write!(f, "attempted to read undefined bytes"),
- HeapAllocNonPowerOfTwoAlignment(_) => write!(
- f,
- "tried to re-, de-, or allocate heap memory with alignment that is \
- not a power of two"
- ),
- Unsupported(ref msg) => write!(f, "{}", msg),
- ConstPropUnsupported(ref msg) => {
- write!(f, "Constant propagation encountered an unsupported situation: {}", msg)
- }
+
+            ReadPointerAsBytes => write!(f, "unable to turn pointer into raw bytes"),
+ ReadBytesAsPointer => write!(f, "unable to turn bytes into a pointer"),
}
}
}
UndefinedBehavior(UndefinedBehaviorInfo),
/// The program did something the interpreter does not support (some of these *might* be UB
/// but the interpreter is not sure).
- Unsupported(UnsupportedOpInfo<'tcx>),
+ Unsupported(UnsupportedOpInfo),
/// The program was invalid (ill-typed, bad MIR, not sufficiently monomorphized, ...).
InvalidProgram(InvalidProgramInfo<'tcx>),
/// The program exhausted the interpreter's resources (stack/heap too big,
impl fmt::Display for InterpError<'_> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
// Forward `Display` to `Debug`.
- write!(f, "{:?}", self)
+ fmt::Debug::fmt(self, f)
}
}
}
}
}
+
+impl InterpError<'_> {
+    /// Some errors allocate when they are created because they contain free-form strings.
+    /// Sometimes we want to be sure that did not happen, as it is a waste of resources.
+ pub fn allocates(&self) -> bool {
+ match self {
+ InterpError::MachineStop(_)
+ | InterpError::Unsupported(UnsupportedOpInfo::Unsupported(_))
+ | InterpError::UndefinedBehavior(UndefinedBehaviorInfo::ValidationFailure(_))
+ | InterpError::UndefinedBehavior(UndefinedBehaviorInfo::Ub(_))
+ | InterpError::UndefinedBehavior(UndefinedBehaviorInfo::UbExperimental(_)) => true,
+ _ => false,
+ }
+ }
+}
impl fmt::Debug for AllocId {
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
- write!(fmt, "alloc{}", self.0)
+ fmt::Display::fmt(self, fmt)
+ }
+}
+
+impl fmt::Display for AllocId {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ write!(f, "alloc{}", self.0)
}
}
}
}
-impl fmt::Display for AllocId {
- fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
- write!(f, "{}", self.0)
- }
-}
-
/// An allocation in the global (tcx-managed) memory can be either a function pointer,
/// a static, or a "real" allocation with some data in it.
#[derive(Debug, Clone, Eq, PartialEq, Hash, RustcDecodable, RustcEncodable, HashStable)]
fn overflowing_signed_offset(&self, val: u64, i: i128) -> (u64, bool) {
// FIXME: is it possible to over/underflow here?
if i < 0 {
- // Trickery to ensure that `i64::min_value()` works fine: compute `n = -i`.
+ // Trickery to ensure that `i64::MIN` works fine: compute `n = -i`.
// This formula only works for true negative values; it overflows for zero!
- let n = u64::max_value() - (i as u64) + 1;
+ let n = u64::MAX - (i as u64) + 1;
let res = val.overflowing_sub(n);
self.truncate_to_ptr(res)
} else {
pub fn erase_tag(self) -> Pointer {
Pointer { alloc_id: self.alloc_id, offset: self.offset, tag: () }
}
-
- /// Test if the pointer is "inbounds" of an allocation of the given size.
- /// A pointer is "inbounds" even if its offset is equal to the size; this is
- /// a "one-past-the-end" pointer.
- #[inline(always)]
- pub fn check_inbounds_alloc(
- self,
- allocation_size: Size,
- msg: CheckInAllocMsg,
- ) -> InterpResult<'tcx, ()> {
- if self.offset > allocation_size {
- throw_unsup!(PointerOutOfBounds { ptr: self.erase_tag(), msg, allocation_size })
- } else {
- Ok(())
- }
- }
}
}
impl Scalar<()> {
+ /// Make sure the `data` fits in `size`.
+ /// This is guaranteed by all constructors here, but since the enum variants are public,
+ /// it could still be violated (even though no code outside this file should
+ /// construct `Scalar`s).
#[inline(always)]
fn check_data(data: u128, size: u8) {
debug_assert_eq!(
#[inline]
pub fn from_bool(b: bool) -> Self {
+ // Guaranteed to be truncated and does not need sign extension.
Scalar::Raw { data: b as u128, size: 1 }
}
#[inline]
pub fn from_char(c: char) -> Self {
+ // Guaranteed to be truncated and does not need sign extension.
Scalar::Raw { data: c as u128, size: 4 }
}
#[inline]
pub fn from_u8(i: u8) -> Self {
+ // Guaranteed to be truncated and does not need sign extension.
Scalar::Raw { data: i as u128, size: 1 }
}
#[inline]
pub fn from_u16(i: u16) -> Self {
+ // Guaranteed to be truncated and does not need sign extension.
Scalar::Raw { data: i as u128, size: 2 }
}
#[inline]
pub fn from_u32(i: u32) -> Self {
+ // Guaranteed to be truncated and does not need sign extension.
Scalar::Raw { data: i as u128, size: 4 }
}
#[inline]
pub fn from_u64(i: u64) -> Self {
+ // Guaranteed to be truncated and does not need sign extension.
Scalar::Raw { data: i as u128, size: 8 }
}
.unwrap_or_else(|| bug!("Signed value {:#x} does not fit in {} bits", i, size.bits()))
}
+ #[inline]
+ pub fn from_i8(i: i8) -> Self {
+ Self::from_int(i, Size::from_bits(8))
+ }
+
+ #[inline]
+ pub fn from_i16(i: i16) -> Self {
+ Self::from_int(i, Size::from_bits(16))
+ }
+
+ #[inline]
+ pub fn from_i32(i: i32) -> Self {
+ Self::from_int(i, Size::from_bits(32))
+ }
+
+ #[inline]
+ pub fn from_i64(i: i64) -> Self {
+ Self::from_int(i, Size::from_bits(64))
+ }
+
#[inline]
pub fn from_machine_isize(i: i64, cx: &impl HasDataLayout) -> Self {
Self::from_int(i, cx.data_layout().pointer_size)
target_size: Size,
cx: &impl HasDataLayout,
) -> Result<u128, Pointer<Tag>> {
+ assert_ne!(target_size.bytes(), 0, "you should never look at the bits of a ZST");
match self {
Scalar::Raw { data, size } => {
assert_eq!(target_size.bytes(), size as u64);
- assert_ne!(size, 0, "you should never look at the bits of a ZST");
Scalar::check_data(data, size);
Ok(data)
}
}
}
- #[inline(always)]
- pub fn check_raw(data: u128, size: u8, target_size: Size) {
- assert_eq!(target_size.bytes(), size as u64);
- assert_ne!(size, 0, "you should never look at the bits of a ZST");
- Scalar::check_data(data, size);
- }
-
- /// Do not call this method! Use either `assert_bits` or `force_bits`.
+ /// This method is intentionally private!
+ /// It is just a helper for other methods in this file.
#[inline]
- pub fn to_bits(self, target_size: Size) -> InterpResult<'tcx, u128> {
+ fn to_bits(self, target_size: Size) -> InterpResult<'tcx, u128> {
+ assert_ne!(target_size.bytes(), 0, "you should never look at the bits of a ZST");
match self {
Scalar::Raw { data, size } => {
- Self::check_raw(data, size, target_size);
+ assert_eq!(target_size.bytes(), size as u64);
+ Scalar::check_data(data, size);
Ok(data)
}
Scalar::Ptr(_) => throw_unsup!(ReadPointerAsBytes),
self.to_bits(target_size).expect("expected Raw bits but got a Pointer")
}
- /// Do not call this method! Use either `assert_ptr` or `force_ptr`.
- /// This method is intentionally private, do not make it public.
#[inline]
- fn to_ptr(self) -> InterpResult<'tcx, Pointer<Tag>> {
+ pub fn assert_ptr(self) -> Pointer<Tag> {
match self {
- Scalar::Raw { data: 0, .. } => throw_unsup!(InvalidNullPointerUsage),
- Scalar::Raw { .. } => throw_unsup!(ReadBytesAsPointer),
- Scalar::Ptr(p) => Ok(p),
+ Scalar::Ptr(p) => p,
+ Scalar::Raw { .. } => bug!("expected a Pointer but got Raw bits"),
}
}
- #[inline(always)]
- pub fn assert_ptr(self) -> Pointer<Tag> {
- self.to_ptr().expect("expected a Pointer but got Raw bits")
- }
-
/// Do not call this method! Dispatch based on the type instead.
#[inline]
pub fn is_bits(self) -> bool {
}
pub fn to_bool(self) -> InterpResult<'tcx, bool> {
- match self {
- Scalar::Raw { data: 0, size: 1 } => Ok(false),
- Scalar::Raw { data: 1, size: 1 } => Ok(true),
- _ => throw_unsup!(InvalidBool),
+ let val = self.to_u8()?;
+ match val {
+ 0 => Ok(false),
+ 1 => Ok(true),
+ _ => throw_ub!(InvalidBool(val)),
}
}
let val = self.to_u32()?;
match ::std::char::from_u32(val) {
Some(c) => Ok(c),
- None => throw_unsup!(InvalidChar(val as u128)),
+ None => throw_ub!(InvalidChar(val)),
}
}
pub fn not_undef(self) -> InterpResult<'static, Scalar<Tag>> {
match self {
ScalarMaybeUndef::Scalar(scalar) => Ok(scalar),
- ScalarMaybeUndef::Undef => throw_unsup!(ReadUndefBytes(Size::ZERO)),
+ ScalarMaybeUndef::Undef => throw_ub!(InvalidUndefBytes(None)),
}
}
- /// Do not call this method! Use either `assert_bits` or `force_bits`.
- #[inline(always)]
- pub fn to_bits(self, target_size: Size) -> InterpResult<'tcx, u128> {
- self.not_undef()?.to_bits(target_size)
- }
-
#[inline(always)]
pub fn to_bool(self) -> InterpResult<'tcx, bool> {
self.not_undef()?.to_bool()
self.not_undef()?.to_u8()
}
+ #[inline(always)]
+ pub fn to_u16(self) -> InterpResult<'tcx, u16> {
+ self.not_undef()?.to_u16()
+ }
+
#[inline(always)]
pub fn to_u32(self) -> InterpResult<'tcx, u32> {
self.not_undef()?.to_u32()
self.not_undef()?.to_i8()
}
+ #[inline(always)]
+ pub fn to_i16(self) -> InterpResult<'tcx, i16> {
+ self.not_undef()?.to_i16()
+ }
+
#[inline(always)]
pub fn to_i32(self) -> InterpResult<'tcx, i32> {
self.not_undef()?.to_i32()
-//! MIR datatypes and passes. See the [rustc guide] for more info.
+//! MIR datatypes and passes. See the [rustc dev guide] for more info.
//!
-//! [rustc guide]: https://rust-lang.github.io/rustc-guide/mir/index.html
+//! [rustc dev guide]: https://rustc-dev-guide.rust-lang.org/mir/index.html
use crate::mir::interpret::{GlobalAlloc, Scalar};
use crate::mir::visit::MirVisitable;
) -> Self {
// We need `arg_count` locals, and one for the return place.
assert!(
- local_decls.len() >= arg_count + 1,
+ local_decls.len() > arg_count,
"expected at least {} locals, got {}",
arg_count + 1,
local_decls.len()
}
impl<T> ClearCrossCrate<T> {
- pub fn as_ref(&'a self) -> ClearCrossCrate<&'a T> {
+ pub fn as_ref(&self) -> ClearCrossCrate<&T> {
match self {
ClearCrossCrate::Clear => ClearCrossCrate::Clear,
ClearCrossCrate::Set(v) => ClearCrossCrate::Set(v),
t: BasicBlock,
f: BasicBlock,
) -> TerminatorKind<'tcx> {
- static BOOL_SWITCH_FALSE: &'static [u128] = &[0];
+ static BOOL_SWITCH_FALSE: &[u128] = &[0];
TerminatorKind::SwitchInt {
discr: cond,
switch_ty: tcx.types.bool,
match successor_count {
0 => Ok(()),
- 1 => write!(fmt, " -> {:?}", self.successors().nth(0).unwrap()),
+ 1 => write!(fmt, " -> {:?}", self.successors().next().unwrap()),
_ => {
write!(fmt, " -> [")?;
}
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
-pub struct PlaceRef<'a, 'tcx> {
+pub struct PlaceRef<'tcx> {
pub local: Local,
- pub projection: &'a [PlaceElem<'tcx>],
+ pub projection: &'tcx [PlaceElem<'tcx>],
}
impl<'tcx> Place<'tcx> {
self.as_ref().as_local()
}
- pub fn as_ref(&self) -> PlaceRef<'_, 'tcx> {
+ pub fn as_ref(&self) -> PlaceRef<'tcx> {
PlaceRef { local: self.local, projection: &self.projection }
}
}
}
}
-impl<'a, 'tcx> PlaceRef<'a, 'tcx> {
+impl<'tcx> PlaceRef<'tcx> {
/// Finds the innermost `Local` from this `Place`, *if* it is either a local itself or
/// a single deref of a local.
//
pub(crate) fn variant(
mut self,
- adt_def: &'tcx AdtDef,
+ adt_def: &AdtDef,
variant_index: VariantIdx,
field: Field,
) -> Self {
impl<'tcx> Display for Constant<'tcx> {
fn fmt(&self, fmt: &mut Formatter<'_>) -> fmt::Result {
+ use crate::ty::print::PrettyPrinter;
write!(fmt, "const ")?;
- // FIXME make the default pretty printing of raw pointers more detailed. Here we output the
- // debug representation of raw pointers, so that the raw pointers in the mir dump output are
- // detailed and just not '{pointer}'.
- if let ty::RawPtr(_) = self.literal.ty.kind {
- write!(fmt, "{:?} : {}", self.literal.val, self.literal.ty)
- } else {
- write!(fmt, "{}", self.literal)
- }
+ ty::tls::with(|tcx| {
+ let literal = tcx.lift(&self.literal).unwrap();
+ let mut cx = FmtPrinter::new(tcx, fmt, Namespace::ValueNS);
+ cx.print_alloc_ids = true;
+ cx.pretty_print_const(literal, true)?;
+ Ok(())
+ })
}
}
use crate::dep_graph::{DepConstructor, DepNode, WorkProduct, WorkProductId};
use crate::ich::{Fingerprint, NodeIdHashingMode, StableHashingContext};
-use crate::session::config::OptLevel;
use crate::ty::print::obsolete::DefPathBasedNames;
use crate::ty::{subst::InternalSubsts, Instance, InstanceDef, SymbolName, TyCtxt};
use rustc_attr::InlineAttr;
use rustc_data_structures::stable_hasher::{HashStable, StableHasher};
use rustc_hir::def_id::{CrateNum, DefId, LOCAL_CRATE};
use rustc_hir::HirId;
+use rustc_session::config::OptLevel;
use rustc_span::source_map::Span;
use rustc_span::symbol::Symbol;
use std::fmt;
impl<'tcx> CodegenUnit<'tcx> {
pub fn new(name: Symbol) -> CodegenUnit<'tcx> {
- CodegenUnit { name: name, items: Default::default(), size_estimate: None }
+ CodegenUnit { name, items: Default::default(), size_estimate: None }
}
pub fn name(&self) -> Symbol {
() => (
fn visit_projection(
&mut self,
- local: &Local,
+ local: Local,
projection: &[PlaceElem<'tcx>],
context: PlaceContext,
location: Location,
fn visit_projection_elem(
&mut self,
- local: &Local,
+ local: Local,
proj_base: &[PlaceElem<'tcx>],
elem: &PlaceElem<'tcx>,
context: PlaceContext,
self.visit_place_base(&place.local, context, location);
- self.visit_projection(&place.local,
+ self.visit_projection(place.local,
&place.projection,
context,
location);
fn super_projection(
&mut self,
- local: &Local,
+ local: Local,
projection: &[PlaceElem<'tcx>],
context: PlaceContext,
location: Location,
fn super_projection_elem(
&mut self,
- _local: &Local,
+ _local: Local,
_proj_base: &[PlaceElem<'tcx>],
elem: &PlaceElem<'tcx>,
_context: PlaceContext,
-use crate::dep_graph::{DepKind, DepNode, RecoverKey, SerializedDepNodeIndex};
+use crate::dep_graph::SerializedDepNodeIndex;
use crate::mir;
use crate::mir::interpret::{GlobalId, LitToConstInput};
use crate::traits;
use crate::ty::query::QueryDescription;
use crate::ty::subst::SubstsRef;
use crate::ty::{self, ParamEnvAnd, Ty, TyCtxt};
-use rustc_hir::def_id::{CrateNum, DefId, DefIndex};
+use rustc_hir::def_id::{CrateNum, DefId, LocalDefId};
use rustc_span::symbol::Symbol;
use std::borrow::Cow;
desc { "get the crate HIR" }
}
+ // The indexed HIR. This can be conveniently accessed by `tcx.hir()`.
+ // Avoid calling this query directly.
+ query index_hir(_: CrateNum) -> &'tcx map::IndexedHir<'tcx> {
+ eval_always
+ no_hash
+ desc { "index HIR" }
+ }
+
+ // The items in a module.
+ // This can be conveniently accessed by `tcx.hir().visit_item_likes_in_module`.
+ // Avoid calling this query directly.
+ query hir_module_items(key: LocalDefId) -> &'tcx hir::ModuleItems {
+ eval_always
+ desc { |tcx| "HIR module items in `{}`", tcx.def_path_str(key.to_def_id()) }
+ }
+
+ // An HIR item with a `LocalDefId` that can own other HIR items which do
+ // not themselves have a `LocalDefId`.
+ // This can be conveniently accessed by methods on `tcx.hir()`.
+ // Avoid calling this query directly.
+ query hir_owner(key: LocalDefId) -> &'tcx HirOwner<'tcx> {
+ eval_always
+ desc { |tcx| "HIR owner of `{}`", tcx.def_path_str(key.to_def_id()) }
+ }
+
+ // The HIR items which do not themselves have a `LocalDefId` and are
+ // owned by another HIR item with a `LocalDefId`.
+ // This can be conveniently accessed by methods on `tcx.hir()`.
+ // Avoid calling this query directly.
+ query hir_owner_items(key: LocalDefId) -> &'tcx HirOwnerItems<'tcx> {
+ eval_always
+ desc { |tcx| "HIR owner items in `{}`", tcx.def_path_str(key.to_def_id()) }
+ }
+
/// Records the type of every item.
query type_of(key: DefId) -> Ty<'tcx> {
cache_on_disk_if { key.is_local() }
}
+ query analysis(key: CrateNum) -> Result<(), ErrorReported> {
+ eval_always
+ desc { "running analysis passes on this crate" }
+ }
+
/// Maps from the `DefId` of an item (trait/struct/enum/fn) to its
/// associated generics.
query generics_of(key: DefId) -> &'tcx ty::Generics {
desc { "computing the lint levels for items in this crate" }
}
- query parent_module_from_def_id(_: DefId) -> DefId {
+ query parent_module_from_def_id(key: LocalDefId) -> LocalDefId {
eval_always
+ desc { |tcx| "parent module of `{}`", tcx.def_path_str(key.to_def_id()) }
}
}
// queries). Making it anonymous avoids hashing the result, which
// may save a bit of time.
anon
- no_force
desc { "erasing regions from `{:?}`", ty }
}
}
query program_clauses_for_env(_: traits::Environment<'tcx>) -> Clauses<'tcx> {
- no_force
desc { "generating chalk-style clauses for environment" }
}
/// To avoid cycles within the predicates of a single item we compute
/// per-type-parameter predicates for resolving `T::AssocTy`.
query type_param_predicates(key: (DefId, DefId)) -> ty::GenericPredicates<'tcx> {
- no_force
desc { |tcx| "computing the bounds for type parameter `{}`", {
let id = tcx.hir().as_local_hir_id(key.1).unwrap();
tcx.hir().ty_param_name(id)
/// form to be used outside of const eval.
query const_eval_raw(key: ty::ParamEnvAnd<'tcx, GlobalId<'tcx>>)
-> ConstEvalRawResult<'tcx> {
- no_force
desc { |tcx|
"const-evaluating `{}`",
tcx.def_path_str(key.value.instance.def.def_id())
/// `tcx.const_eval_resolve`, `tcx.const_eval_instance`, or `tcx.const_eval_global_id`.
query const_eval_validated(key: ty::ParamEnvAnd<'tcx, GlobalId<'tcx>>)
-> ConstEvalResult<'tcx> {
- no_force
desc { |tcx|
"const-evaluating + checking `{}`",
tcx.def_path_str(key.value.instance.def.def_id())
query const_field(
key: ty::ParamEnvAnd<'tcx, (&'tcx ty::Const<'tcx>, mir::Field)>
) -> ConstValue<'tcx> {
- no_force
desc { "extract field of const" }
}
query destructure_const(
key: ty::ParamEnvAnd<'tcx, &'tcx ty::Const<'tcx>>
) -> mir::DestructuredConst<'tcx> {
- no_force
desc { "destructure constant" }
}
query const_caller_location(key: (rustc_span::Symbol, u32, u32)) -> ConstValue<'tcx> {
- no_force
desc { "get a &core::panic::Location referring to a span" }
}
query lit_to_const(
key: LitToConstInput<'tcx>
) -> Result<&'tcx ty::Const<'tcx>, LitToConstError> {
- no_force
desc { "converting literal to const" }
}
}
query region_scope_tree(_: DefId) -> &'tcx region::ScopeTree {}
query mir_shims(key: ty::InstanceDef<'tcx>) -> &'tcx mir::BodyAndCache<'tcx> {
- no_force
desc { |tcx| "generating MIR shim for `{}`", tcx.def_path_str(key.def_id()) }
}
/// given instance from the local crate. In particular, it will also
/// look up the correct symbol name of instances from upstream crates.
query symbol_name(key: ty::Instance<'tcx>) -> ty::SymbolName {
- no_force
desc { "computing the symbol for `{}`", key }
cache_on_disk_if { true }
}
Other {
query vtable_methods(key: ty::PolyTraitRef<'tcx>)
-> &'tcx [Option<(DefId, SubstsRef<'tcx>)>] {
- no_force
desc { |tcx| "finding all methods for trait {}", tcx.def_path_str(key.def_id()) }
}
}
Codegen {
query codegen_fulfill_obligation(
key: (ty::ParamEnv<'tcx>, ty::PolyTraitRef<'tcx>)
- ) -> Vtable<'tcx, ()> {
- no_force
+ ) -> Option<Vtable<'tcx, ()>> {
cache_on_disk_if { true }
desc { |tcx|
"checking if `{}` fulfills its obligations",
}
TypeChecking {
+ query all_local_trait_impls(key: CrateNum) -> &'tcx BTreeMap<DefId, Vec<hir::HirId>> {
+ desc { "local trait impls" }
+ }
query trait_impls_of(key: DefId) -> &'tcx ty::trait_def::TraitImpls {
desc { |tcx| "trait impls of `{}`", tcx.def_path_str(key) }
}
/// Trait selection queries. These are best used by invoking `ty.is_copy_modulo_regions()`,
/// `ty.is_copy()`, etc, since that will prune the environment where possible.
query is_copy_raw(env: ty::ParamEnvAnd<'tcx, Ty<'tcx>>) -> bool {
- no_force
desc { "computing whether `{}` is `Copy`", env.value }
}
/// Query backing `TyS::is_sized`.
query is_sized_raw(env: ty::ParamEnvAnd<'tcx, Ty<'tcx>>) -> bool {
- no_force
desc { "computing whether `{}` is `Sized`", env.value }
}
/// Query backing `TyS::is_freeze`.
query is_freeze_raw(env: ty::ParamEnvAnd<'tcx, Ty<'tcx>>) -> bool {
- no_force
desc { "computing whether `{}` is freeze", env.value }
}
/// Query backing `TyS::needs_drop`.
query needs_drop_raw(env: ty::ParamEnvAnd<'tcx, Ty<'tcx>>) -> bool {
- no_force
desc { "computing whether `{}` needs drop", env.value }
}
query layout_raw(
env: ty::ParamEnvAnd<'tcx, Ty<'tcx>>
) -> Result<&'tcx ty::layout::LayoutDetails, ty::layout::LayoutError<'tcx>> {
- no_force
desc { "computing layout of `{}`", env.value }
}
}
TypeChecking {
query specializes(_: (DefId, DefId)) -> bool {
- no_force
desc { "computing whether impls specialize one another" }
}
- query in_scope_traits_map(_: DefIndex)
+ query in_scope_traits_map(_: LocalDefId)
-> Option<&'tcx FxHashMap<ItemLocalId, StableVec<TraitCandidate>>> {
eval_always
desc { "traits in scope at a block" }
/// (like `Clone::clone` for example).
query upstream_drop_glue_for(substs: SubstsRef<'tcx>) -> Option<CrateNum> {
desc { "available upstream drop-glue for `{:?}`", substs }
- no_force
}
}
TypeChecking {
query implementations_of_trait(_: (CrateNum, DefId))
-> &'tcx [DefId] {
- no_force
desc { "looking up implementations of a trait in a crate" }
}
query all_trait_implementations(_: CrateNum)
query resolve_lifetimes(_: CrateNum) -> &'tcx ResolveLifetimes {
desc { "resolving lifetimes" }
}
- query named_region_map(_: DefIndex) ->
+ query named_region_map(_: LocalDefId) ->
Option<&'tcx FxHashMap<ItemLocalId, Region>> {
desc { "looking up a named region" }
}
- query is_late_bound_map(_: DefIndex) ->
+ query is_late_bound_map(_: LocalDefId) ->
Option<&'tcx FxHashSet<ItemLocalId>> {
desc { "testing if a region is late bound" }
}
- query object_lifetime_defaults_map(_: DefIndex)
+ query object_lifetime_defaults_map(_: LocalDefId)
-> Option<&'tcx FxHashMap<ItemLocalId, Vec<ObjectLifetimeDefault>>> {
desc { "looking up lifetime defaults for a region" }
}
}
query is_codegened_item(_: DefId) -> bool {}
query codegen_unit(_: Symbol) -> Arc<CodegenUnit<'tcx>> {
- no_force
desc { "codegen_unit" }
}
query backend_optimization_level(_: CrateNum) -> OptLevel {
&'tcx Canonical<'tcx, canonical::QueryResponse<'tcx, NormalizationResult<'tcx>>>,
NoSolution,
> {
- no_force
desc { "normalizing `{:?}`", goal }
}
query normalize_ty_after_erasing_regions(
goal: ParamEnvAnd<'tcx, Ty<'tcx>>
) -> Ty<'tcx> {
- no_force
desc { "normalizing `{:?}`", goal }
}
&'tcx Canonical<'tcx, canonical::QueryResponse<'tcx, Vec<OutlivesBound<'tcx>>>>,
NoSolution,
> {
- no_force
desc { "computing implied outlives bounds for `{:?}`", goal }
}
&'tcx Canonical<'tcx, canonical::QueryResponse<'tcx, DropckOutlivesResult<'tcx>>>,
NoSolution,
> {
- no_force
desc { "computing dropck types for `{:?}`", goal }
}
query evaluate_obligation(
goal: CanonicalPredicateGoal<'tcx>
) -> Result<traits::EvaluationResult, traits::OverflowError> {
- no_force
desc { "evaluating trait selection obligation `{}`", goal.value.value }
}
- query evaluate_goal(
- goal: traits::ChalkCanonicalGoal<'tcx>
- ) -> Result<
- &'tcx Canonical<'tcx, canonical::QueryResponse<'tcx, ()>>,
- NoSolution
- > {
- no_force
- desc { "evaluating trait selection obligation `{}`", goal.value.goal }
- }
-
/// Do not call this query directly: part of the `Eq` type-op
query type_op_ascribe_user_type(
goal: CanonicalTypeOpAscribeUserTypeGoal<'tcx>
&'tcx Canonical<'tcx, canonical::QueryResponse<'tcx, ()>>,
NoSolution,
> {
- no_force
desc { "evaluating `type_op_ascribe_user_type` `{:?}`", goal }
}
&'tcx Canonical<'tcx, canonical::QueryResponse<'tcx, ()>>,
NoSolution,
> {
- no_force
desc { "evaluating `type_op_eq` `{:?}`", goal }
}
&'tcx Canonical<'tcx, canonical::QueryResponse<'tcx, ()>>,
NoSolution,
> {
- no_force
desc { "evaluating `type_op_subtype` `{:?}`", goal }
}
&'tcx Canonical<'tcx, canonical::QueryResponse<'tcx, ()>>,
NoSolution,
> {
- no_force
desc { "evaluating `type_op_prove_predicate` `{:?}`", goal }
}
&'tcx Canonical<'tcx, canonical::QueryResponse<'tcx, Ty<'tcx>>>,
NoSolution,
> {
- no_force
desc { "normalizing `{:?}`", goal }
}
&'tcx Canonical<'tcx, canonical::QueryResponse<'tcx, ty::Predicate<'tcx>>>,
NoSolution,
> {
- no_force
desc { "normalizing `{:?}`", goal }
}
&'tcx Canonical<'tcx, canonical::QueryResponse<'tcx, ty::PolyFnSig<'tcx>>>,
NoSolution,
> {
- no_force
desc { "normalizing `{:?}`", goal }
}
&'tcx Canonical<'tcx, canonical::QueryResponse<'tcx, ty::FnSig<'tcx>>>,
NoSolution,
> {
- no_force
desc { "normalizing `{:?}`", goal }
}
query substitute_normalize_and_test_predicates(key: (DefId, SubstsRef<'tcx>)) -> bool {
- no_force
desc { |tcx|
"testing substituted normalized predicates:`{}`",
tcx.def_path_str(key.0)
query method_autoderef_steps(
goal: CanonicalTyGoal<'tcx>
) -> MethodAutoderefStepsResult<'tcx> {
- no_force
desc { "computing autoderef types for `{:?}`", goal }
}
}
// Get an estimate of the size of an InstanceDef based on its MIR for CGU partitioning.
query instance_def_size_estimate(def: ty::InstanceDef<'tcx>)
-> usize {
- no_force
desc { |tcx| "estimating size for `{}`", tcx.def_path_str(def.def_id()) }
}
-//! Trait Resolution. See the [rustc guide] for more information on how this works.
+//! Trait Resolution. See the [rustc dev guide] for more information on how this works.
//!
-//! [rustc guide]: https://rust-lang.github.io/rustc-guide/traits/resolution.html
+//! [rustc dev guide]: https://rustc-dev-guide.rust-lang.org/traits/resolution.html
pub mod query;
pub mod select;
pub mod specialization_graph;
mod structural_impls;
-use crate::infer::canonical::Canonical;
use crate::mir::interpret::ErrorHandled;
-use crate::ty::fold::{TypeFolder, TypeVisitor};
use crate::ty::subst::SubstsRef;
use crate::ty::{self, AdtKind, List, Ty, TyCtxt};
pub use self::select::{EvaluationCache, EvaluationResult, OverflowError, SelectionCache};
-pub type ChalkCanonicalGoal<'tcx> = Canonical<'tcx, InEnvironment<'tcx, ty::Predicate<'tcx>>>;
-
pub use self::ObligationCauseCode::*;
pub use self::SelectionError::*;
pub use self::Vtable::*;
pub nested: Vec<N>,
}
-pub trait ExClauseFold<'tcx>
-where
- Self: chalk_engine::context::Context + Clone,
-{
- fn fold_ex_clause_with<F: TypeFolder<'tcx>>(
- ex_clause: &chalk_engine::ExClause<Self>,
- folder: &mut F,
- ) -> chalk_engine::ExClause<Self>;
-
- fn visit_ex_clause_with<V: TypeVisitor<'tcx>>(
- ex_clause: &chalk_engine::ExClause<Self>,
- visitor: &mut V,
- ) -> bool;
-}
-
-pub trait ChalkContextLift<'tcx>
-where
- Self: chalk_engine::context::Context + Clone,
-{
- type LiftedExClause: Debug + 'tcx;
- type LiftedDelayedLiteral: Debug + 'tcx;
- type LiftedLiteral: Debug + 'tcx;
-
- fn lift_ex_clause_to_tcx(
- ex_clause: &chalk_engine::ExClause<Self>,
- tcx: TyCtxt<'tcx>,
- ) -> Option<Self::LiftedExClause>;
-
- fn lift_delayed_literal_to_tcx(
- ex_clause: &chalk_engine::DelayedLiteral<Self>,
- tcx: TyCtxt<'tcx>,
- ) -> Option<Self::LiftedDelayedLiteral>;
-
- fn lift_literal_to_tcx(
- ex_clause: &chalk_engine::Literal<Self>,
- tcx: TyCtxt<'tcx>,
- ) -> Option<Self::LiftedLiteral>;
-}
-
#[derive(Clone, Debug, PartialEq, Eq, Hash, HashStable)]
pub enum ObjectSafetyViolation {
/// `Self: Sized` declared on the trait.
-//! Candidate selection. See the [rustc guide] for more information on how this works.
+//! Candidate selection. See the [rustc dev guide] for more information on how this works.
//!
-//! [rustc guide]: https://rust-lang.github.io/rustc-guide/traits/resolution.html#selection
+//! [rustc dev guide]: https://rustc-dev-guide.rust-lang.org/traits/resolution.html#selection
use self::EvaluationResult::*;
self.cached_value.clone()
}
}
+
+#[derive(Clone, Debug)]
+pub enum IntercrateAmbiguityCause {
+ DownstreamCrate { trait_desc: String, self_desc: Option<String> },
+ UpstreamCrateUpdate { trait_desc: String, self_desc: Option<String> },
+ ReservationImpl { message: String },
+}
+
+impl IntercrateAmbiguityCause {
+ /// Emits notes when the overlap is caused by complex intercrate ambiguities.
+ /// See #23980 for details.
+ pub fn add_intercrate_ambiguity_hint(&self, err: &mut rustc_errors::DiagnosticBuilder<'_>) {
+ err.note(&self.intercrate_ambiguity_hint());
+ }
+
+ pub fn intercrate_ambiguity_hint(&self) -> String {
+ match self {
+ IntercrateAmbiguityCause::DownstreamCrate { trait_desc, self_desc } => {
+ let self_desc = if let Some(ty) = self_desc {
+ format!(" for type `{}`", ty)
+ } else {
+ String::new()
+ };
+ format!("downstream crates may implement trait `{}`{}", trait_desc, self_desc)
+ }
+ IntercrateAmbiguityCause::UpstreamCrateUpdate { trait_desc, self_desc } => {
+ let self_desc = if let Some(ty) = self_desc {
+ format!(" for type `{}`", ty)
+ } else {
+ String::new()
+ };
+ format!(
+ "upstream crates may add a new impl of trait `{}`{} \
+ in future versions",
+ trait_desc, self_desc
+ )
+ }
+ IntercrateAmbiguityCause::ReservationImpl { message } => message.clone(),
+ }
+ }
+}
use rustc_ast::ast::Ident;
use rustc_data_structures::fx::FxHashMap;
use rustc_data_structures::stable_hasher::{HashStable, StableHasher};
+use rustc_errors::ErrorReported;
use rustc_hir::def_id::{DefId, DefIdMap};
/// A per-trait graph of impls in specialization order. At the moment, this
/// has at most one parent.
#[derive(RustcEncodable, RustcDecodable, HashStable)]
pub struct Graph {
- // All impls have a parent; the "root" impls have as their parent the `def_id`
- // of the trait.
+ /// All impls have a parent; the "root" impls have as their parent the `def_id`
+ /// of the trait.
pub parent: DefIdMap<DefId>,
- // The "root" impls are found by looking up the trait's def_id.
+ /// The "root" impls are found by looking up the trait's def_id.
pub children: DefIdMap<Children>,
+
+ /// Whether an error was emitted while constructing the graph.
+ pub has_errored: bool,
}
impl Graph {
pub fn new() -> Graph {
- Graph { parent: Default::default(), children: Default::default() }
+ Graph { parent: Default::default(), children: Default::default(), has_errored: false }
}
/// The parent of a given impl, which is the `DefId` of the trait when the
}
/// Walk up the specialization ancestors of a given impl, starting with that
-/// impl itself.
+/// impl itself. Returns `None` if an error was reported while building the
+/// specialization graph.
pub fn ancestors(
tcx: TyCtxt<'tcx>,
trait_def_id: DefId,
start_from_impl: DefId,
-) -> Ancestors<'tcx> {
+) -> Result<Ancestors<'tcx>, ErrorReported> {
let specialization_graph = tcx.specialization_graph_of(trait_def_id);
- Ancestors {
- trait_def_id,
- specialization_graph,
- current_source: Some(Node::Impl(start_from_impl)),
+ if specialization_graph.has_errored {
+ Err(ErrorReported)
+ } else {
+ Ok(Ancestors {
+ trait_def_id,
+ specialization_graph,
+ current_source: Some(Node::Impl(start_from_impl)),
+ })
}
}
start = false;
write!(fmt, "{}", r)?;
}
- for (_, t) in &self.types {
+ for t in self.types.values() {
if !start {
write!(fmt, ", ")?;
}
super::ReferenceOutlivesReferent(ty) => {
tcx.lift(&ty).map(super::ReferenceOutlivesReferent)
}
- super::ObjectTypeBound(ty, r) => tcx
- .lift(&ty)
- .and_then(|ty| tcx.lift(&r).and_then(|r| Some(super::ObjectTypeBound(ty, r)))),
+ super::ObjectTypeBound(ty, r) => {
+ tcx.lift(&ty).and_then(|ty| tcx.lift(&r).map(|r| super::ObjectTypeBound(ty, r)))
+ }
super::ObjectCastObligation(ty) => tcx.lift(&ty).map(super::ObjectCastObligation),
super::Coercion { source, target } => {
Some(super::Coercion { source: tcx.lift(&source)?, target: tcx.lift(&target)? })
nested,
}) => tcx.lift(&substs).map(|substs| {
traits::VtableGenerator(traits::VtableGeneratorData {
- generator_def_id: generator_def_id,
- substs: substs,
- nested: nested,
+ generator_def_id,
+ substs,
+ nested,
})
}),
traits::VtableClosure(traits::VtableClosureData { closure_def_id, substs, nested }) => {
}
}
-impl<'tcx, C> Lift<'tcx> for chalk_engine::ExClause<C>
-where
- C: chalk_engine::context::Context + Clone,
- C: traits::ChalkContextLift<'tcx>,
-{
- type Lifted = C::LiftedExClause;
-
- fn lift_to_tcx(&self, tcx: TyCtxt<'tcx>) -> Option<Self::Lifted> {
- <C as traits::ChalkContextLift>::lift_ex_clause_to_tcx(self, tcx)
- }
-}
-
-impl<'tcx, C> Lift<'tcx> for chalk_engine::DelayedLiteral<C>
-where
- C: chalk_engine::context::Context + Clone,
- C: traits::ChalkContextLift<'tcx>,
-{
- type Lifted = C::LiftedDelayedLiteral;
-
- fn lift_to_tcx(&self, tcx: TyCtxt<'tcx>) -> Option<Self::Lifted> {
- <C as traits::ChalkContextLift>::lift_delayed_literal_to_tcx(self, tcx)
- }
-}
-
-impl<'tcx, C> Lift<'tcx> for chalk_engine::Literal<C>
-where
- C: chalk_engine::context::Context + Clone,
- C: traits::ChalkContextLift<'tcx>,
-{
- type Lifted = C::LiftedLiteral;
-
- fn lift_to_tcx(&self, tcx: TyCtxt<'tcx>) -> Option<Self::Lifted> {
- <C as traits::ChalkContextLift>::lift_literal_to_tcx(self, tcx)
- }
-}
-
///////////////////////////////////////////////////////////////////////////
// TypeFoldable implementations.
self.iter().any(|t| t.visit_with(visitor))
}
}
-
-impl<'tcx, C> TypeFoldable<'tcx> for chalk_engine::ExClause<C>
-where
- C: traits::ExClauseFold<'tcx>,
- C::Substitution: Clone,
- C::RegionConstraint: Clone,
-{
- fn super_fold_with<F: TypeFolder<'tcx>>(&self, folder: &mut F) -> Self {
- <C as traits::ExClauseFold>::fold_ex_clause_with(self, folder)
- }
-
- fn super_visit_with<V: TypeVisitor<'tcx>>(&self, visitor: &mut V) -> bool {
- <C as traits::ExClauseFold>::visit_ex_clause_with(self, visitor)
- }
-}
-
-EnumTypeFoldableImpl! {
- impl<'tcx, C> TypeFoldable<'tcx> for chalk_engine::DelayedLiteral<C> {
- (chalk_engine::DelayedLiteral::CannotProve)(a),
- (chalk_engine::DelayedLiteral::Negative)(a),
- (chalk_engine::DelayedLiteral::Positive)(a, b),
- } where
- C: chalk_engine::context::Context<CanonicalConstrainedSubst: TypeFoldable<'tcx>> + Clone,
-}
-
-EnumTypeFoldableImpl! {
- impl<'tcx, C> TypeFoldable<'tcx> for chalk_engine::Literal<C> {
- (chalk_engine::Literal::Negative)(a),
- (chalk_engine::Literal::Positive)(a),
- } where
- C: chalk_engine::context::Context<GoalInEnvironment: Clone + TypeFoldable<'tcx>> + Clone,
-}
-
-CloneTypeFoldableAndLiftImpls! {
- chalk_engine::TableIndex,
-}
use crate::dep_graph::{self, DepConstructor};
use crate::hir::exports::Export;
use crate::hir::map as hir_map;
+use crate::hir::map::definitions::Definitions;
use crate::hir::map::{DefPathData, DefPathHash};
use crate::ich::{NodeIdHashingMode, StableHashingContext};
use crate::infer::canonical::{Canonical, CanonicalVarInfo, CanonicalVarInfos};
};
use crate::traits;
use crate::traits::{Clause, Clauses, Goal, GoalKind, Goals};
-use crate::ty::free_region_map::FreeRegionMap;
use crate::ty::layout::{LayoutDetails, TargetDataLayout, VariantIdx};
use crate::ty::query;
use crate::ty::steal::Steal;
use rustc_data_structures::sync::{self, Lock, Lrc, WorkerLocal};
use rustc_hir as hir;
use rustc_hir::def::{DefKind, Res};
-use rustc_hir::def_id::{CrateNum, DefId, DefIdMap, DefIdSet, DefIndex, LOCAL_CRATE};
+use rustc_hir::def_id::{CrateNum, DefId, DefIdMap, DefIdSet, LocalDefId, LOCAL_CRATE};
use rustc_hir::{HirId, Node, TraitCandidate};
use rustc_hir::{ItemKind, ItemLocalId, ItemLocalMap, ItemLocalSet};
use rustc_index::vec::{Idx, IndexVec};
mut_access: bool,
) {
if let Some(local_id_root) = local_id_root {
- if hir_id.owner != local_id_root.index {
+ if hir_id.owner.to_def_id() != local_id_root {
ty::tls::with(|tcx| {
bug!(
"node {} with HirId::owner {:?} cannot be placed in \
TypeckTables with local_id_root {:?}",
tcx.hir().node_to_string(hir_id),
- DefId::local(hir_id.owner),
+ hir_id.owner,
local_id_root
)
});
/// this field will be set to `true`.
pub tainted_by_errors: bool,
- /// Stores the free-region relationships that were deduced from
- /// its where-clauses and parameter types. These are then
- /// read-again by borrowck.
- pub free_region_map: FreeRegionMap<'tcx>,
-
/// All the opaque types that are restricted to concrete types
/// by this function.
pub concrete_opaque_types: FxHashMap<DefId, ResolvedOpaqueTy<'tcx>>,
coercion_casts: Default::default(),
used_trait_imports: Lrc::new(Default::default()),
tainted_by_errors: false,
- free_region_map: Default::default(),
concrete_opaque_types: Default::default(),
upvar_list: Default::default(),
generator_interior_types: Default::default(),
}
match self.type_dependent_defs().get(expr.hir_id) {
- Some(Ok((DefKind::Method, _))) => true,
+ Some(Ok((DefKind::AssocFn, _))) => true,
_ => false,
}
}
ref used_trait_imports,
tainted_by_errors,
- ref free_region_map,
ref concrete_opaque_types,
ref upvar_list,
ref generator_interior_types,
let local_id_root = local_id_root.expect("trying to hash invalid TypeckTables");
- let var_owner_def_id =
- DefId { krate: local_id_root.krate, index: var_path.hir_id.owner };
+ let var_owner_def_id = DefId {
+ krate: local_id_root.krate,
+ index: var_path.hir_id.owner.local_def_index,
+ };
let closure_def_id =
- DefId { krate: local_id_root.krate, index: closure_expr_id.to_def_id().index };
+ DefId { krate: local_id_root.krate, index: closure_expr_id.local_def_index };
(
hcx.def_path_hash(var_owner_def_id),
var_path.hir_id.local_id,
coercion_casts.hash_stable(hcx, hasher);
used_trait_imports.hash_stable(hcx, hasher);
tainted_by_errors.hash_stable(hcx, hasher);
- free_region_map.hash_stable(hcx, hasher);
concrete_opaque_types.hash_stable(hcx, hasher);
upvar_list.hash_stable(hcx, hasher);
generator_interior_types.hash_stable(hcx, hasher);
/// The central data structure of the compiler. It stores references
/// to the various **arenas** and also houses the results of the
/// various **compiler queries** that have been performed. See the
-/// [rustc guide] for more details.
+/// [rustc dev guide] for more details.
///
-/// [rustc guide]: https://rust-lang.github.io/rustc-guide/ty.html
+/// [rustc dev guide]: https://rustc-dev-guide.rust-lang.org/ty.html
#[derive(Copy, Clone)]
#[rustc_diagnostic_item = "TyCtxt"]
pub struct TyCtxt<'tcx> {
interners: CtxtInterners<'tcx>,
- cstore: Box<CrateStoreDyn>,
+ pub(crate) cstore: Box<CrateStoreDyn>,
pub sess: &'tcx Session,
/// Map indicating what traits are in scope for places where this
/// is relevant; generated by resolve.
- trait_map: FxHashMap<DefIndex, FxHashMap<ItemLocalId, StableVec<TraitCandidate>>>,
+ trait_map: FxHashMap<LocalDefId, FxHashMap<ItemLocalId, StableVec<TraitCandidate>>>,
/// Export map produced by name resolution.
export_map: FxHashMap<DefId, Vec<Export<hir::HirId>>>,
- /// This should usually be accessed with the `tcx.hir()` method.
- pub(crate) hir_map: hir_map::Map<'tcx>,
+ pub(crate) untracked_crate: &'tcx hir::Crate<'tcx>,
+ pub(crate) definitions: &'tcx Definitions,
/// A map from `DefPathHash` -> `DefId`. Includes `DefId`s from the local crate
/// as well as all upstream crates. Only populated in incremental mode.
extern_providers: ty::query::Providers<'tcx>,
arena: &'tcx WorkerLocal<Arena<'tcx>>,
resolutions: ty::ResolverOutputs,
- hir: hir_map::Map<'tcx>,
+ krate: &'tcx hir::Crate<'tcx>,
+ definitions: &'tcx Definitions,
+ dep_graph: DepGraph,
on_disk_query_result_cache: query::OnDiskCache<'tcx>,
crate_name: &str,
output_filenames: &OutputFilenames,
let common_types = CommonTypes::new(&interners);
let common_lifetimes = CommonLifetimes::new(&interners);
let common_consts = CommonConsts::new(&interners, &common_types);
- let dep_graph = hir.dep_graph.clone();
let cstore = resolutions.cstore;
let crates = cstore.crates_untracked();
let max_cnum = crates.iter().map(|c| c.as_usize()).max().unwrap_or(0);
let def_path_tables = crates
.iter()
.map(|&cnum| (cnum, cstore.def_path_table(cnum)))
- .chain(iter::once((LOCAL_CRATE, hir.definitions().def_path_table())));
+ .chain(iter::once((LOCAL_CRATE, definitions.def_path_table())));
// Precompute the capacity of the hashmap so we don't have to
// re-allocate when populating it.
let mut trait_map: FxHashMap<_, FxHashMap<_, _>> = FxHashMap::default();
for (k, v) in resolutions.trait_map {
- let hir_id = hir.node_to_hir_id(k);
+ let hir_id = definitions.node_to_hir_id(k);
let map = trait_map.entry(hir_id.owner).or_default();
let v = v
.into_iter()
- .map(|tc| tc.map_import_ids(|id| hir.definitions().node_to_hir_id(id)))
+ .map(|tc| tc.map_import_ids(|id| definitions.node_to_hir_id(id)))
.collect();
map.insert(hir_id.local_id, StableVec::new(v));
}
.export_map
.into_iter()
.map(|(k, v)| {
- let exports: Vec<_> =
- v.into_iter().map(|e| e.map_id(|id| hir.node_to_hir_id(id))).collect();
+ let exports: Vec<_> = v
+ .into_iter()
+ .map(|e| e.map_id(|id| definitions.node_to_hir_id(id)))
+ .collect();
(k, exports)
})
.collect(),
maybe_unused_trait_imports: resolutions
.maybe_unused_trait_imports
.into_iter()
- .map(|id| hir.local_def_id_from_node_id(id))
+ .map(|id| definitions.local_def_id(id))
.collect(),
maybe_unused_extern_crates: resolutions
.maybe_unused_extern_crates
.into_iter()
- .map(|(id, sp)| (hir.local_def_id_from_node_id(id), sp))
+ .map(|(id, sp)| (definitions.local_def_id(id), sp))
.collect(),
glob_map: resolutions
.glob_map
.into_iter()
- .map(|(id, names)| (hir.local_def_id_from_node_id(id), names))
+ .map(|(id, names)| (definitions.local_def_id(id), names))
.collect(),
extern_prelude: resolutions.extern_prelude,
- hir_map: hir,
+ untracked_crate: krate,
+ definitions,
def_path_hash_to_def_id,
queries: query::Queries::new(providers, extern_providers, on_disk_query_result_cache),
rcache: Default::default(),
}
pub fn def_key(self, id: DefId) -> hir_map::DefKey {
- if id.is_local() { self.hir().def_key(id) } else { self.cstore.def_key(id) }
+ if let Some(id) = id.as_local() { self.hir().def_key(id) } else { self.cstore.def_key(id) }
}
/// Converts a `DefId` into its fully expanded `DefPath` (every
/// Note that if `id` is not local to this crate, the result will
/// be a non-local `DefPath`.
pub fn def_path(self, id: DefId) -> hir_map::DefPath {
- if id.is_local() { self.hir().def_path(id) } else { self.cstore.def_path(id) }
+ if let Some(id) = id.as_local() {
+ self.hir().def_path(id)
+ } else {
+ self.cstore.def_path(id)
+ }
}
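The `is_local()` checks above become `as_local()` calls that return a typed `LocalDefId` instead of a bare boolean plus untyped index. A minimal standalone sketch of that narrowing pattern (toy types and values, not the real rustc definitions):

```rust
// Toy model of the DefId -> LocalDefId narrowing; illustrative only.
const LOCAL_CRATE: u32 = 0;

#[derive(Clone, Copy, PartialEq, Debug)]
struct DefId {
    krate: u32,
    index: u32,
}

#[derive(Clone, Copy, PartialEq, Debug)]
struct LocalDefId {
    local_def_index: u32,
}

impl DefId {
    // Returns a typed proof of locality instead of a bool, so callers
    // can't forget the crate check before using the index.
    fn as_local(self) -> Option<LocalDefId> {
        if self.krate == LOCAL_CRATE {
            Some(LocalDefId { local_def_index: self.index })
        } else {
            None
        }
    }
}

fn describe(id: DefId) -> String {
    // `if let Some(local) = id.as_local()` replaces `if id.is_local()`.
    if let Some(local) = id.as_local() {
        format!("local def {}", local.local_def_index)
    } else {
        format!("foreign def {}:{}", id.krate, id.index)
    }
}

fn main() {
    assert_eq!(describe(DefId { krate: 0, index: 7 }), "local def 7");
    assert_eq!(describe(DefId { krate: 3, index: 7 }), "foreign def 3:7");
}
```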
/// Returns whether or not the crate with CrateNum 'cnum'
#[inline]
pub fn def_path_hash(self, def_id: DefId) -> hir_map::DefPathHash {
- if def_id.is_local() {
- self.hir().definitions().def_path_hash(def_id.index)
+ if let Some(def_id) = def_id.as_local() {
+ self.definitions.def_path_hash(def_id)
} else {
self.cstore.def_path_hash(def_id)
}
#[inline(always)]
pub fn create_stable_hashing_context(self) -> StableHashingContext<'tcx> {
- let krate = self.gcx.hir_map.untracked_krate();
+ let krate = self.gcx.untracked_crate;
- StableHashingContext::new(self.sess, krate, self.hir().definitions(), &*self.cstore)
+ StableHashingContext::new(self.sess, krate, self.definitions, &*self.cstore)
}
// This method makes sure that we have a DepNode and a Fingerprint for
/// Returns a displayable description and article for the given `def_id` (e.g. `("a", "struct")`).
pub fn article_and_description(&self, def_id: DefId) -> (&'static str, &'static str) {
- match self.def_key(def_id).disambiguated_data.data {
- DefPathData::TypeNs(..) | DefPathData::ValueNs(..) | DefPathData::MacroNs(..) => {
- let kind = self.def_kind(def_id).unwrap();
- (kind.article(), kind.descr(def_id))
- }
- DefPathData::ClosureExpr => match self.generator_kind(def_id) {
- None => ("a", "closure"),
- Some(rustc_hir::GeneratorKind::Async(..)) => ("an", "async closure"),
- Some(rustc_hir::GeneratorKind::Gen) => ("a", "generator"),
- },
- DefPathData::LifetimeNs(..) => ("a", "lifetime"),
- DefPathData::Impl => ("an", "implementation"),
- _ => bug!("article_and_description called on def_id {:?}", def_id),
- }
+ self.def_kind(def_id)
+ .map(|def_kind| (def_kind.article(), def_kind.descr(def_id)))
+ .unwrap_or_else(|| match self.def_key(def_id).disambiguated_data.data {
+ DefPathData::ClosureExpr => match self.generator_kind(def_id) {
+ None => ("a", "closure"),
+ Some(rustc_hir::GeneratorKind::Async(..)) => ("an", "async closure"),
+ Some(rustc_hir::GeneratorKind::Gen) => ("a", "generator"),
+ },
+ DefPathData::LifetimeNs(..) => ("a", "lifetime"),
+ DefPathData::Impl => ("an", "implementation"),
+ DefPathData::TypeNs(..) | DefPathData::ValueNs(..) | DefPathData::MacroNs(..) => {
+ unreachable!()
+ }
+ _ => bug!("article_and_description called on def_id {:?}", def_id),
+ })
}
}
#[inline]
pub fn mk_mut_ref(self, r: Region<'tcx>, ty: Ty<'tcx>) -> Ty<'tcx> {
- self.mk_ref(r, TypeAndMut { ty: ty, mutbl: hir::Mutability::Mut })
+ self.mk_ref(r, TypeAndMut { ty, mutbl: hir::Mutability::Mut })
}
#[inline]
pub fn mk_imm_ref(self, r: Region<'tcx>, ty: Ty<'tcx>) -> Ty<'tcx> {
- self.mk_ref(r, TypeAndMut { ty: ty, mutbl: hir::Mutability::Not })
+ self.mk_ref(r, TypeAndMut { ty, mutbl: hir::Mutability::Not })
}
#[inline]
pub fn mk_mut_ptr(self, ty: Ty<'tcx>) -> Ty<'tcx> {
- self.mk_ptr(TypeAndMut { ty: ty, mutbl: hir::Mutability::Mut })
+ self.mk_ptr(TypeAndMut { ty, mutbl: hir::Mutability::Mut })
}
#[inline]
pub fn mk_imm_ptr(self, ty: Ty<'tcx>) -> Ty<'tcx> {
- self.mk_ptr(TypeAndMut { ty: ty, mutbl: hir::Mutability::Not })
+ self.mk_ptr(TypeAndMut { ty, mutbl: hir::Mutability::Not })
}
#[inline]
#[inline]
pub fn mk_ty_param(self, index: u32, name: Symbol) -> Ty<'tcx> {
- self.mk_ty(Param(ParamTy { index, name: name }))
+ self.mk_ty(Param(ParamTy { index, name }))
}
#[inline]
};
providers.lookup_stability = |tcx, id| {
- assert_eq!(id.krate, LOCAL_CRATE);
- let id = tcx.hir().definitions().def_index_to_hir_id(id.index);
+ let id = tcx.hir().local_def_id_to_hir_id(id.expect_local());
tcx.stability().local_stability(id)
};
providers.lookup_const_stability = |tcx, id| {
- assert_eq!(id.krate, LOCAL_CRATE);
- let id = tcx.hir().definitions().def_index_to_hir_id(id.index);
+ let id = tcx.hir().local_def_id_to_hir_id(id.expect_local());
tcx.stability().local_const_stability(id)
};
providers.lookup_deprecation_entry = |tcx, id| {
- assert_eq!(id.krate, LOCAL_CRATE);
- let id = tcx.hir().definitions().def_index_to_hir_id(id.index);
+ let id = tcx.hir().local_def_id_to_hir_id(id.expect_local());
tcx.stability().local_deprecation_entry(id)
};
providers.extern_mod_stmt_cnum = |tcx, id| {
&ty::Error => self.add_flags(TypeFlags::HAS_TY_ERR),
&ty::Param(_) => {
- self.add_flags(TypeFlags::HAS_FREE_LOCAL_NAMES);
- self.add_flags(TypeFlags::HAS_PARAMS);
+ self.add_flags(TypeFlags::HAS_TY_PARAM);
}
&ty::Generator(_, ref substs, _) => {
- self.add_flags(TypeFlags::HAS_TY_CLOSURE);
- self.add_flags(TypeFlags::HAS_FREE_LOCAL_NAMES);
self.add_substs(substs);
}
}
&ty::Closure(_, ref substs) => {
- self.add_flags(TypeFlags::HAS_TY_CLOSURE);
- self.add_flags(TypeFlags::HAS_FREE_LOCAL_NAMES);
self.add_substs(substs);
}
}
&ty::Placeholder(..) => {
- self.add_flags(TypeFlags::HAS_FREE_LOCAL_NAMES);
self.add_flags(TypeFlags::HAS_TY_PLACEHOLDER);
}
&ty::Infer(infer) => {
- self.add_flags(TypeFlags::HAS_FREE_LOCAL_NAMES); // it might, right?
self.add_flags(TypeFlags::HAS_TY_INFER);
match infer {
ty::FreshTy(_) | ty::FreshIntTy(_) | ty::FreshFloatTy(_) => {}
}
&ty::Projection(ref data) => {
- self.add_flags(TypeFlags::HAS_PROJECTION);
+ self.add_flags(TypeFlags::HAS_TY_PROJECTION);
self.add_projection_ty(data);
}
&ty::UnnormalizedProjection(ref data) => {
- self.add_flags(TypeFlags::HAS_PROJECTION);
+ self.add_flags(TypeFlags::HAS_TY_PROJECTION);
self.add_projection_ty(data);
}
&ty::Opaque(_, substs) => {
- self.add_flags(TypeFlags::HAS_PROJECTION | TypeFlags::HAS_TY_OPAQUE);
+ self.add_flags(TypeFlags::HAS_TY_OPAQUE);
self.add_substs(substs);
}
match c.val {
ty::ConstKind::Unevaluated(_, substs, _) => {
self.add_substs(substs);
- self.add_flags(TypeFlags::HAS_PROJECTION);
+ self.add_flags(TypeFlags::HAS_CT_PROJECTION);
}
ty::ConstKind::Infer(infer) => {
- self.add_flags(TypeFlags::HAS_FREE_LOCAL_NAMES | TypeFlags::HAS_CT_INFER);
+ self.add_flags(TypeFlags::HAS_CT_INFER);
match infer {
InferConst::Fresh(_) => {}
InferConst::Var(_) => self.add_flags(TypeFlags::KEEP_IN_LOCAL_TCX),
}
ty::ConstKind::Bound(debruijn, _) => self.add_binder(debruijn),
ty::ConstKind::Param(_) => {
- self.add_flags(TypeFlags::HAS_FREE_LOCAL_NAMES);
- self.add_flags(TypeFlags::HAS_PARAMS);
+ self.add_flags(TypeFlags::HAS_CT_PARAM);
}
ty::ConstKind::Placeholder(_) => {
- self.add_flags(TypeFlags::HAS_FREE_LOCAL_NAMES);
self.add_flags(TypeFlags::HAS_CT_PLACEHOLDER);
}
ty::ConstKind::Value(_) => {}
self.has_type_flags(TypeFlags::HAS_TY_ERR)
}
fn has_param_types(&self) -> bool {
- self.has_type_flags(TypeFlags::HAS_PARAMS)
+ self.has_type_flags(TypeFlags::HAS_TY_PARAM | TypeFlags::HAS_CT_PARAM)
}
fn has_infer_types(&self) -> bool {
self.has_type_flags(TypeFlags::HAS_TY_INFER)
}
+ fn has_infer_types_or_consts(&self) -> bool {
+ self.has_type_flags(TypeFlags::HAS_TY_INFER | TypeFlags::HAS_CT_INFER)
+ }
fn has_infer_consts(&self) -> bool {
self.has_type_flags(TypeFlags::HAS_CT_INFER)
}
self.has_type_flags(TypeFlags::KEEP_IN_LOCAL_TCX)
}
fn needs_infer(&self) -> bool {
- self.has_type_flags(
- TypeFlags::HAS_TY_INFER | TypeFlags::HAS_RE_INFER | TypeFlags::HAS_CT_INFER,
- )
+ self.has_type_flags(TypeFlags::NEEDS_INFER)
}
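The `needs_infer` change relies on `NEEDS_INFER` being defined as the union of the three inference flags, so a single mask test replaces the hand-written `|` of individual flags. A self-contained sketch of that composed-flag idea, with plain `u32` masks standing in for the `bitflags!` type (bit positions are illustrative):

```rust
// Plain u32 masks standing in for TypeFlags.
const HAS_TY_INFER: u32 = 1 << 3;
const HAS_RE_INFER: u32 = 1 << 4;
const HAS_CT_INFER: u32 = 1 << 5;
// The composed flag: a value "needs inference" if any component is set.
const NEEDS_INFER: u32 = HAS_TY_INFER | HAS_RE_INFER | HAS_CT_INFER;

// Mirrors `has_type_flags`: true if the value carries any flag in `mask`.
fn has_type_flags(flags: u32, mask: u32) -> bool {
    flags & mask != 0
}

fn main() {
    // A type containing only a region inference variable:
    let flags = HAS_RE_INFER;
    assert!(has_type_flags(flags, NEEDS_INFER));
    assert!(!has_type_flags(flags, HAS_TY_INFER | HAS_CT_INFER));
}
```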
fn has_placeholders(&self) -> bool {
self.has_type_flags(
fn has_re_placeholders(&self) -> bool {
self.has_type_flags(TypeFlags::HAS_RE_PLACEHOLDER)
}
- fn has_closure_types(&self) -> bool {
- self.has_type_flags(TypeFlags::HAS_TY_CLOSURE)
- }
/// "Free" regions in this context means that it has any region
/// that is not (a) erased or (b) late-bound.
fn has_free_regions(&self) -> bool {
/// ```
/// This code should only compile in modules where the uninhabitedness of Foo is
/// visible.
- pub fn is_ty_uninhabited_from(self, module: DefId, ty: Ty<'tcx>) -> bool {
+ pub fn is_ty_uninhabited_from(
+ self,
+ module: DefId,
+ ty: Ty<'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ ) -> bool {
// To check whether this type is uninhabited at all (not just from the
// given node), you could check whether the forest is empty.
// ```
// forest.is_empty()
// ```
- ty.uninhabited_from(self).contains(self, module)
+ ty.uninhabited_from(self, param_env).contains(self, module)
}
- pub fn is_ty_uninhabited_from_any_module(self, ty: Ty<'tcx>) -> bool {
- !ty.uninhabited_from(self).is_empty()
+ pub fn is_ty_uninhabited_from_any_module(
+ self,
+ ty: Ty<'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ ) -> bool {
+ !ty.uninhabited_from(self, param_env).is_empty()
}
}
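Threading `param_env` down to `uninhabited_from` lets `len.try_eval_usize` evaluate array lengths in the caller's environment instead of always using `ParamEnv::empty()`. A standalone sketch of just the `Array` arm's logic (a `bool`/`Option` toy, not the real `DefIdForest` computation):

```rust
// `evaluated_len` models the result of `len.try_eval_usize(tcx, param_env)`:
// `None` means the length could not be evaluated in this environment.
fn array_uninhabited(elem_uninhabited: bool, evaluated_len: Option<u64>) -> bool {
    match evaluated_len {
        // Definitely non-empty: uninhabited iff the element type is.
        Some(n) if n != 0 => elem_uninhabited,
        // Zero-length or unknown: conservatively considered inhabited.
        _ => false,
    }
}

fn main() {
    assert!(array_uninhabited(true, Some(3)));   // like [!; 3]
    assert!(!array_uninhabited(true, Some(0)));  // like [!; 0]
    assert!(!array_uninhabited(true, None));     // like [!; N], N unknown
    assert!(!array_uninhabited(false, Some(3))); // like [u8; 3]
}
```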
impl<'tcx> AdtDef {
/// Calculates the forest of `DefId`s from which this ADT is visibly uninhabited.
- fn uninhabited_from(&self, tcx: TyCtxt<'tcx>, substs: SubstsRef<'tcx>) -> DefIdForest {
+ fn uninhabited_from(
+ &self,
+ tcx: TyCtxt<'tcx>,
+ substs: SubstsRef<'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ ) -> DefIdForest {
// Non-exhaustive ADTs from other crates are always considered inhabited.
if self.is_variant_list_non_exhaustive() && !self.did.is_local() {
DefIdForest::empty()
} else {
DefIdForest::intersection(
tcx,
- self.variants.iter().map(|v| v.uninhabited_from(tcx, substs, self.adt_kind())),
+ self.variants
+ .iter()
+ .map(|v| v.uninhabited_from(tcx, substs, self.adt_kind(), param_env)),
)
}
}
tcx: TyCtxt<'tcx>,
substs: SubstsRef<'tcx>,
adt_kind: AdtKind,
+ param_env: ty::ParamEnv<'tcx>,
) -> DefIdForest {
let is_enum = match adt_kind {
// For now, `union`s are never considered uninhabited.
} else {
DefIdForest::union(
tcx,
- self.fields.iter().map(|f| f.uninhabited_from(tcx, substs, is_enum)),
+ self.fields.iter().map(|f| f.uninhabited_from(tcx, substs, is_enum, param_env)),
)
}
}
tcx: TyCtxt<'tcx>,
substs: SubstsRef<'tcx>,
is_enum: bool,
+ param_env: ty::ParamEnv<'tcx>,
) -> DefIdForest {
- let data_uninhabitedness = move || self.ty(tcx, substs).uninhabited_from(tcx);
+ let data_uninhabitedness = move || self.ty(tcx, substs).uninhabited_from(tcx, param_env);
// FIXME(canndrew): Currently enum fields are (incorrectly) stored with
// `Visibility::Invisible` so we need to override `self.vis` if we're
// dealing with an enum.
impl<'tcx> TyS<'tcx> {
/// Calculates the forest of `DefId`s from which this type is visibly uninhabited.
- fn uninhabited_from(&self, tcx: TyCtxt<'tcx>) -> DefIdForest {
+ fn uninhabited_from(&self, tcx: TyCtxt<'tcx>, param_env: ty::ParamEnv<'tcx>) -> DefIdForest {
match self.kind {
- Adt(def, substs) => def.uninhabited_from(tcx, substs),
+ Adt(def, substs) => def.uninhabited_from(tcx, substs, param_env),
Never => DefIdForest::full(tcx),
- Tuple(ref tys) => {
- DefIdForest::union(tcx, tys.iter().map(|ty| ty.expect_ty().uninhabited_from(tcx)))
- }
+ Tuple(ref tys) => DefIdForest::union(
+ tcx,
+ tys.iter().map(|ty| ty.expect_ty().uninhabited_from(tcx, param_env)),
+ ),
- Array(ty, len) => match len.try_eval_usize(tcx, ty::ParamEnv::empty()) {
+ Array(ty, len) => match len.try_eval_usize(tcx, param_env) {
// If the array is definitely non-empty, it's uninhabited if
// the type of its elements is uninhabited.
- Some(n) if n != 0 => ty.uninhabited_from(tcx),
+ Some(n) if n != 0 => ty.uninhabited_from(tcx, param_env),
_ => DefIdForest::empty(),
},
/// `<fn() as FnTrait>::call_*`
/// `DefId` is `FnTrait::call_*`.
+ ///
+ /// NB: the (`fn` pointer) type must currently be monomorphic to avoid double substitution
+ /// problems with the MIR shim bodies. `Instance::resolve` enforces this.
+ // FIXME(#69925) support polymorphic MIR shim bodies properly instead.
FnPtrShim(DefId, Ty<'tcx>),
/// `<dyn Trait as Trait>::fn`, "direct calls" of which are implicitly
/// The `DefId` is for `core::ptr::drop_in_place`.
/// The `Option<Ty<'tcx>>` is either `Some(T)`, or `None` for empty drop
/// glue.
+ ///
+ /// NB: the type must currently be monomorphic to avoid double substitution
+ /// problems with the MIR shim bodies. `Instance::resolve` enforces this.
+ // FIXME(#69925) support polymorphic MIR shim bodies properly instead.
DropGlue(DefId, Option<Ty<'tcx>>),
///`<T as Clone>::clone` shim.
+ ///
+ /// NB: the type must currently be monomorphic to avoid double substitution
+ /// problems with the MIR shim bodies. `Instance::resolve` enforces this.
+ // FIXME(#69925) support polymorphic MIR shim bodies properly instead.
CloneShim(DefId, Ty<'tcx>),
}
}
// If this is a non-generic instance, it cannot be a shared monomorphization.
- if self.substs.non_erasable_generics().next().is_none() {
- return None;
- }
+ self.substs.non_erasable_generics().next()?;
match self.def {
InstanceDef::Item(def_id) => tcx
def_id,
substs
);
- Instance { def: InstanceDef::Item(def_id), substs: substs }
+ Instance { def: InstanceDef::Item(def_id), substs }
}
pub fn mono(tcx: TyCtxt<'tcx>, def_id: DefId) -> Instance<'tcx> {
Instance { def, substs }
}
+ /// FIXME(#69925) Depending on the kind of `InstanceDef`, the MIR body associated with an
+ /// instance is expressed in terms of the generic parameters of `self.def_id()`; in other
+ /// cases, the MIR body is expressed in terms of the types found in the substitution array.
+ /// In the former case, we want to substitute those generic parameters with the values from
+ /// the substs when monomorphizing the function body. In the latter case, we don't want to do
+ /// that substitution, since it has effectively already been done.
+ ///
+ /// This function returns `Some(substs)` in the former case and `None` otherwise -- i.e., if
+ /// this function returns `None`, then the MIR body does not require substitution during
+ /// monomorphization.
+ pub fn substs_for_mir_body(&self) -> Option<SubstsRef<'tcx>> {
+ match self.def {
+ InstanceDef::CloneShim(..) | InstanceDef::DropGlue(_, Some(_)) => None,
+ InstanceDef::ClosureOnceShim { .. }
+ | InstanceDef::DropGlue(..)
+ // FIXME(#69925): `FnPtrShim` should be in the other branch.
+ | InstanceDef::FnPtrShim(..)
+ | InstanceDef::Item(_)
+ | InstanceDef::Intrinsic(..)
+ | InstanceDef::ReifyShim(..)
+ | InstanceDef::Virtual(..)
+ | InstanceDef::VtableShim(..) => Some(self.substs),
+ }
+ }
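The distinction `substs_for_mir_body` draws can be sketched with a toy instance type (illustrative names, not the rustc definitions): shim-like bodies are already expressed in concrete types, so their substitutions must not be applied a second time.

```rust
#[derive(Clone, Copy)]
enum InstanceKind {
    Item,      // generic body: substitute when monomorphizing
    CloneShim, // body already expressed in concrete types
}

struct Instance {
    kind: InstanceKind,
    substs: &'static [&'static str],
}

impl Instance {
    fn substs_for_mir_body(&self) -> Option<&'static [&'static str]> {
        match self.kind {
            // Returning `None` tells monomorphization to skip substitution.
            InstanceKind::CloneShim => None,
            InstanceKind::Item => Some(self.substs),
        }
    }
}

fn main() {
    let item = Instance { kind: InstanceKind::Item, substs: &["u32"] };
    let shim = Instance { kind: InstanceKind::CloneShim, substs: &["u32"] };
    assert_eq!(item.substs_for_mir_body(), Some(&["u32"][..]));
    assert_eq!(shim.substs_for_mir_body(), None);
}
```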
+
pub fn is_vtable_shim(&self) -> bool {
if let InstanceDef::VtableShim(..) = self.def { true } else { false }
}
-use crate::session::{self, DataTypeKind};
+use crate::ich::StableHashingContext;
+use crate::mir::{GeneratorLayout, GeneratorSavedLocal};
+use crate::ty::subst::Subst;
use crate::ty::{self, subst::SubstsRef, ReprOptions, Ty, TyCtxt, TypeFoldable};
use rustc_ast::ast::{self, Ident, IntTy, UintTy};
use rustc_attr as attr;
-use rustc_span::DUMMY_SP;
-
-use std::cmp;
-use std::fmt;
-use std::i128;
-use std::iter;
-use std::mem;
-use std::ops::Bound;
-
-use crate::ich::StableHashingContext;
-use crate::mir::{GeneratorLayout, GeneratorSavedLocal};
-use crate::ty::subst::Subst;
use rustc_data_structures::stable_hasher::{HashStable, StableHasher};
use rustc_hir as hir;
use rustc_index::bit_set::BitSet;
use rustc_index::vec::{Idx, IndexVec};
-
+use rustc_session::{DataTypeKind, FieldInfo, SizeKind, VariantInfo};
+use rustc_span::DUMMY_SP;
use rustc_target::abi::call::{
ArgAbi, ArgAttribute, ArgAttributes, Conv, FnAbi, PassMode, Reg, RegKind,
};
pub use rustc_target::abi::*;
use rustc_target::spec::{abi::Abi as SpecAbi, HasTargetSpec};
+use std::cmp;
+use std::fmt;
+use std::iter;
+use std::mem;
+use std::ops::Bound;
+
pub trait IntegerExt {
fn to_ty<'tcx>(&self, tcx: TyCtxt<'tcx>, signed: bool) -> Ty<'tcx>;
fn from_attr<C: HasDataLayout>(cx: &C, ity: attr::IntType) -> Integer;
let univariant = |fields: &[TyLayout<'_>], repr: &ReprOptions, kind| {
Ok(tcx.intern_layout(self.univariant_uninterned(ty, fields, repr, kind)?))
};
- debug_assert!(!ty.has_infer_types());
+ debug_assert!(!ty.has_infer_types_or_consts());
Ok(match ty.kind {
// Basic scalars.
present_first @ Some(_) => present_first,
// Uninhabited because it has no variants, or only absent ones.
None if def.is_enum() => return tcx.layout_raw(param_env.and(tcx.types.never)),
- // if it's a struct, still compute a layout so that we can still compute the
- // field offsets
+ // If it's a struct, still compute a layout so that we can still compute the
+ // field offsets.
None => Some(VariantIdx::new(0)),
};
}
}
- let (mut min, mut max) = (i128::max_value(), i128::min_value());
+ let (mut min, mut max) = (i128::MAX, i128::MIN);
let discr_type = def.repr.discr_type();
let bits = Integer::from_attr(self, discr_type).size().bits();
for (i, discr) in def.discriminants(tcx) {
}
}
// We might have no inhabited variants, so pretend there's at least one.
- if (min, max) == (i128::max_value(), i128::min_value()) {
+ if (min, max) == (i128::MAX, i128::MIN) {
min = 0;
max = 0;
}
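The sentinel pattern above (start at `(i128::MAX, i128::MIN)`, shrink over the discriminants, then reset an untouched pair to `(0, 0)`) can be sketched standalone:

```rust
// Compute the (min, max) range of a slice of discriminant values, treating
// an empty slice as if there were a single 0 discriminant.
fn discr_range(discrs: &[i128]) -> (i128, i128) {
    let (mut min, mut max) = (i128::MAX, i128::MIN);
    for &d in discrs {
        min = min.min(d);
        max = max.max(d);
    }
    // We might have no inhabited variants, so pretend there's at least one.
    if (min, max) == (i128::MAX, i128::MIN) { (0, 0) } else { (min, max) }
}

fn main() {
    assert_eq!(discr_range(&[-1, 0, 5]), (-1, 5));
    assert_eq!(discr_range(&[]), (0, 0));
}
```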
// Write down the order of our locals that will be promoted to the prefix.
{
- let mut idx = 0u32;
- for local in ineligible_locals.iter() {
- assignments[local] = Ineligible(Some(idx));
- idx += 1;
+ for (idx, local) in ineligible_locals.iter().enumerate() {
+ assignments[local] = Ineligible(Some(idx as u32));
}
}
debug!("generator saved local assignments: {:?}", assignments);
// locals as part of the prefix. We compute the layout of all of
// these fields at once to get optimal packing.
let discr_index = substs.as_generator().prefix_tys(def_id, tcx).count();
- // FIXME(eddyb) set the correct vaidity range for the discriminant.
- let discr_layout = self.layout_of(substs.as_generator().discr_ty(tcx))?;
- let discr = match &discr_layout.abi {
- Abi::Scalar(s) => s.clone(),
- _ => bug!(),
- };
+
+ // `info.variant_fields` already accounts for the reserved variants, so no need to add them.
+ let max_discr = (info.variant_fields.len() - 1) as u128;
+ let discr_int = Integer::fit_unsigned(max_discr);
+ let discr_int_ty = discr_int.to_ty(tcx, false);
+ let discr = Scalar { value: Primitive::Int(discr_int, false), valid_range: 0..=max_discr };
+ let discr_layout = self.tcx.intern_layout(LayoutDetails::scalar(self, discr.clone()));
+ let discr_layout = TyLayout { ty: discr_int_ty, details: discr_layout };
+
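The discriminant width chosen above comes from `Integer::fit_unsigned(max_discr)`. A toy version of that fitting logic (illustrative, not the rustc implementation):

```rust
// Smallest power-of-two unsigned width (in bits) whose range covers `max`.
fn fit_unsigned_bits(max: u128) -> u32 {
    for bits in [8u32, 16, 32, 64, 128] {
        // Guard the shift: `1u128 << 128` would overflow.
        if bits == 128 || max < (1u128 << bits) {
            return bits;
        }
    }
    unreachable!()
}

fn main() {
    // A generator with 3 variants has max_discr = 2: one byte suffices.
    assert_eq!(fit_unsigned_bits(2), 8);
    assert_eq!(fit_unsigned_bits(255), 8);
    assert_eq!(fit_unsigned_bits(256), 16);
    assert_eq!(fit_unsigned_bits(u128::MAX), 128);
}
```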
let promoted_layouts = ineligible_locals
.iter()
.map(|local| subst_field(info.field_tys[local]))
if min_size < field_end {
min_size = field_end;
}
- session::FieldInfo {
+ FieldInfo {
name: name.to_string(),
offset: offset.bytes(),
size: field_layout.size.bytes(),
})
.collect();
- session::VariantInfo {
+ VariantInfo {
name: n.map(|n| n.to_string()),
- kind: if layout.is_unsized() {
- session::SizeKind::Min
- } else {
- session::SizeKind::Exact
- },
+ kind: if layout.is_unsized() { SizeKind::Min } else { SizeKind::Exact },
align: layout.align.abi.bytes(),
size: if min_size.bytes() == 0 { layout.size.bytes() } else { min_size.bytes() },
fields: field_info,
tcx: TyCtxt<'tcx>,
param_env: ty::ParamEnv<'tcx>,
) -> Result<SizeSkeleton<'tcx>, LayoutError<'tcx>> {
- debug_assert!(!ty.has_infer_types());
+ debug_assert!(!ty.has_infer_types_or_consts());
// First try computing a static layout.
let err = match tcx.layout_of(param_env.and(ty)) {
}
}
-pub trait MaybeResult<T> {
- type Error;
-
- fn from(x: Result<T, Self::Error>) -> Self;
- fn to_result(self) -> Result<T, Self::Error>;
-}
-
-impl<T> MaybeResult<T> for T {
- type Error = !;
-
- fn from(x: Result<T, Self::Error>) -> Self {
- let Ok(x) = x;
- x
- }
- fn to_result(self) -> Result<T, Self::Error> {
- Ok(self)
- }
-}
-
-impl<T, E> MaybeResult<T> for Result<T, E> {
- type Error = E;
-
- fn from(x: Result<T, Self::Error>) -> Self {
- x
- }
- fn to_result(self) -> Result<T, Self::Error> {
- self
- }
-}
-
pub type TyLayout<'tcx> = ::rustc_target::abi::TyLayout<'tcx, Ty<'tcx>>;
impl<'tcx> LayoutOf for LayoutCx<'tcx, TyCtxt<'tcx>> {
{
fn for_variant(this: TyLayout<'tcx>, cx: &C, variant_index: VariantIdx) -> TyLayout<'tcx> {
let details = match this.variants {
- Variants::Single { index } if index == variant_index => this.details,
+ Variants::Single { index }
+ // If all variants but one are uninhabited, the variant layout is the enum layout.
+ if index == variant_index &&
+ // Don't confuse variants of uninhabited enums with the enum itself.
+ // For more details see https://github.com/rust-lang/rust/issues/69763.
+ this.fields != FieldPlacement::Union(0) =>
+ {
+ this.details
+ }
Variants::Single { index } => {
// Deny calling for_variant more than once for non-Single enums.
if let Some(kind) = pointee.safe {
attrs.pointee_align = Some(pointee.align);
- // `Box` (`UniqueBorrowed`) are not necessarily dereferencable
+ // `Box` (`UniqueBorrowed`) are not necessarily dereferenceable
// for the entire duration of the function as they can be deallocated
// any time. Set their valid size to 0.
attrs.pointee_size = match kind {
use crate::arena::Arena;
use crate::hir::exports::ExportMap;
use crate::hir::map as hir_map;
-
use crate::ich::Fingerprint;
use crate::ich::StableHashingContext;
use crate::infer::canonical::Canonical;
use crate::mir::interpret::ErrorHandled;
use crate::mir::GeneratorLayout;
use crate::mir::ReadOnlyBodyAndCache;
-use crate::session::DataTypeKind;
use crate::traits::{self, Reveal};
use crate::ty;
use crate::ty::layout::VariantIdx;
use rustc_index::vec::{Idx, IndexVec};
use rustc_macros::HashStable;
use rustc_serialize::{self, Encodable, Encoder};
+use rustc_session::DataTypeKind;
use rustc_span::hygiene::ExpnId;
use rustc_span::symbol::{kw, sym, Symbol};
use rustc_span::Span;
pub fn def_kind(&self) -> DefKind {
match self.kind {
AssocKind::Const => DefKind::AssocConst,
- AssocKind::Method => DefKind::Method,
+ AssocKind::Method => DefKind::AssocFn,
AssocKind::Type => DefKind::AssocTy,
AssocKind::OpaqueTy => DefKind::AssocOpaqueTy,
}
impl<'tcx> DefIdTree for TyCtxt<'tcx> {
fn parent(self, id: DefId) -> Option<DefId> {
- self.def_key(id).parent.map(|index| DefId { index: index, ..id })
+ self.def_key(id).parent.map(|index| DefId { index, ..id })
}
}
Res::Err => Visibility::Public,
def => Visibility::Restricted(def.def_id()),
},
- hir::VisibilityKind::Inherited => Visibility::Restricted(tcx.parent_module(id)),
+ hir::VisibilityKind::Inherited => {
+ Visibility::Restricted(tcx.parent_module(id).to_def_id())
+ }
}
}
pub pos: usize,
}
-// Flags that we track on types. These flags are propagated upwards
-// through the type during type construction, so that we can quickly
-// check whether the type has various kinds of types in it without
-// recursing over the type itself.
bitflags! {
+ /// Flags that we track on types. These flags are propagated upwards
+ /// through the type during type construction, so that we can quickly check
+ /// whether the type has various kinds of types in it without recursing
+ /// over the type itself.
pub struct TypeFlags: u32 {
- const HAS_PARAMS = 1 << 0;
- const HAS_TY_INFER = 1 << 1;
- const HAS_RE_INFER = 1 << 2;
- const HAS_RE_PLACEHOLDER = 1 << 3;
-
- /// Does this have any `ReEarlyBound` regions? Used to
- /// determine whether substitition is required, since those
- /// represent regions that are bound in a `ty::Generics` and
- /// hence may be substituted.
- const HAS_RE_EARLY_BOUND = 1 << 4;
-
- /// Does this have any region that "appears free" in the type?
- /// Basically anything but `ReLateBound` and `ReErased`.
- const HAS_FREE_REGIONS = 1 << 5;
-
- /// Is an error type reachable?
- const HAS_TY_ERR = 1 << 6;
- const HAS_PROJECTION = 1 << 7;
-
- // FIXME: Rename this to the actual property since it's used for generators too
- const HAS_TY_CLOSURE = 1 << 8;
+ // Does this have parameters? Used to determine whether substitution is
+ // required.
+ /// Does this have [Param]?
+ const HAS_TY_PARAM = 1 << 0;
+ /// Does this have [ReEarlyBound]?
+ const HAS_RE_PARAM = 1 << 1;
+ /// Does this have [ConstKind::Param]?
+ const HAS_CT_PARAM = 1 << 2;
+
+ const NEEDS_SUBST = TypeFlags::HAS_TY_PARAM.bits
+ | TypeFlags::HAS_RE_PARAM.bits
+ | TypeFlags::HAS_CT_PARAM.bits;
+
+ /// Does this have [Infer]?
+ const HAS_TY_INFER = 1 << 3;
+ /// Does this have [ReVar]?
+ const HAS_RE_INFER = 1 << 4;
+ /// Does this have [ConstKind::Infer]?
+ const HAS_CT_INFER = 1 << 5;
+
+ /// Does this have inference variables? Used to determine whether
+ /// inference is required.
+ const NEEDS_INFER = TypeFlags::HAS_TY_INFER.bits
+ | TypeFlags::HAS_RE_INFER.bits
+ | TypeFlags::HAS_CT_INFER.bits;
+
+ /// Does this have [Placeholder]?
+ const HAS_TY_PLACEHOLDER = 1 << 6;
+ /// Does this have [RePlaceholder]?
+ const HAS_RE_PLACEHOLDER = 1 << 7;
+ /// Does this have [ConstKind::Placeholder]?
+ const HAS_CT_PLACEHOLDER = 1 << 8;
+
+ /// `true` if there are "names" of regions and so forth
+ /// that are local to a particular fn/inferctxt
+ const HAS_FREE_LOCAL_REGIONS = 1 << 9;
/// `true` if there are "names" of types and regions and so forth
/// that are local to a particular fn
- const HAS_FREE_LOCAL_NAMES = 1 << 9;
+ const HAS_FREE_LOCAL_NAMES = TypeFlags::HAS_TY_PARAM.bits
+ | TypeFlags::HAS_CT_PARAM.bits
+ | TypeFlags::HAS_TY_INFER.bits
+ | TypeFlags::HAS_CT_INFER.bits
+ | TypeFlags::HAS_TY_PLACEHOLDER.bits
+ | TypeFlags::HAS_CT_PLACEHOLDER.bits
+ | TypeFlags::HAS_FREE_LOCAL_REGIONS.bits;
+
+ /// Does this have [Projection] or [UnnormalizedProjection]?
+ const HAS_TY_PROJECTION = 1 << 10;
+ /// Does this have [Opaque]?
+ const HAS_TY_OPAQUE = 1 << 11;
+ /// Does this have [ConstKind::Unevaluated]?
+ const HAS_CT_PROJECTION = 1 << 12;
+
+ /// Could this type be normalized further?
+ const HAS_PROJECTION = TypeFlags::HAS_TY_PROJECTION.bits
+ | TypeFlags::HAS_TY_OPAQUE.bits
+ | TypeFlags::HAS_CT_PROJECTION.bits;
/// Present if the type belongs in a local type context.
- /// Only set for Infer other than Fresh.
- const KEEP_IN_LOCAL_TCX = 1 << 10;
+ /// Set for placeholders and inference variables that are not "Fresh".
+ const KEEP_IN_LOCAL_TCX = 1 << 13;
- /// Does this have any `ReLateBound` regions? Used to check
- /// if a global bound is safe to evaluate.
- const HAS_RE_LATE_BOUND = 1 << 11;
-
- /// Does this have any `ReErased` regions?
- const HAS_RE_ERASED = 1 << 12;
+ /// Is an error type reachable?
+ const HAS_TY_ERR = 1 << 14;
- const HAS_TY_PLACEHOLDER = 1 << 13;
+ /// Does this have any region that "appears free" in the type?
+ /// Basically anything but [ReLateBound] and [ReErased].
+ const HAS_FREE_REGIONS = 1 << 15;
- const HAS_CT_INFER = 1 << 14;
- const HAS_CT_PLACEHOLDER = 1 << 15;
- /// Does this have any [Opaque] types.
- const HAS_TY_OPAQUE = 1 << 16;
+ /// Does this have any [ReLateBound] regions? Used to check
+ /// if a global bound is safe to evaluate.
+ const HAS_RE_LATE_BOUND = 1 << 16;
- const NEEDS_SUBST = TypeFlags::HAS_PARAMS.bits |
- TypeFlags::HAS_RE_EARLY_BOUND.bits;
+ /// Does this have any [ReErased] regions?
+ const HAS_RE_ERASED = 1 << 17;
/// Flags representing the nominal content of a type,
/// computed by FlagsComputation. If you add a new nominal
/// flag, it should be added here too.
- const NOMINAL_FLAGS = TypeFlags::HAS_PARAMS.bits |
- TypeFlags::HAS_TY_INFER.bits |
- TypeFlags::HAS_RE_INFER.bits |
- TypeFlags::HAS_RE_PLACEHOLDER.bits |
- TypeFlags::HAS_RE_EARLY_BOUND.bits |
- TypeFlags::HAS_FREE_REGIONS.bits |
- TypeFlags::HAS_TY_ERR.bits |
- TypeFlags::HAS_PROJECTION.bits |
- TypeFlags::HAS_TY_CLOSURE.bits |
- TypeFlags::HAS_FREE_LOCAL_NAMES.bits |
- TypeFlags::KEEP_IN_LOCAL_TCX.bits |
- TypeFlags::HAS_RE_LATE_BOUND.bits |
- TypeFlags::HAS_RE_ERASED.bits |
- TypeFlags::HAS_TY_PLACEHOLDER.bits |
- TypeFlags::HAS_CT_INFER.bits |
- TypeFlags::HAS_CT_PLACEHOLDER.bits |
- TypeFlags::HAS_TY_OPAQUE.bits;
+ const NOMINAL_FLAGS = TypeFlags::HAS_TY_PARAM.bits
+ | TypeFlags::HAS_RE_PARAM.bits
+ | TypeFlags::HAS_CT_PARAM.bits
+ | TypeFlags::HAS_TY_INFER.bits
+ | TypeFlags::HAS_RE_INFER.bits
+ | TypeFlags::HAS_CT_INFER.bits
+ | TypeFlags::HAS_TY_PLACEHOLDER.bits
+ | TypeFlags::HAS_RE_PLACEHOLDER.bits
+ | TypeFlags::HAS_CT_PLACEHOLDER.bits
+ | TypeFlags::HAS_FREE_LOCAL_REGIONS.bits
+ | TypeFlags::HAS_TY_PROJECTION.bits
+ | TypeFlags::HAS_TY_OPAQUE.bits
+ | TypeFlags::HAS_CT_PROJECTION.bits
+ | TypeFlags::KEEP_IN_LOCAL_TCX.bits
+ | TypeFlags::HAS_TY_ERR.bits
+ | TypeFlags::HAS_FREE_REGIONS.bits
+ | TypeFlags::HAS_RE_LATE_BOUND.bits
+ | TypeFlags::HAS_RE_ERASED.bits;
}
}
}
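The composite flags introduced in the hunk above (`NEEDS_SUBST`, `NEEDS_INFER`, `NOMINAL_FLAGS`, ...) are plain bitwise unions of the single-bit flags, and membership tests are a bitwise AND. A minimal standalone sketch (illustrative constants, not the real `TypeFlags` values):

```rust
// Sketch of composite bitflags in the style of NEEDS_SUBST above; the
// constants here are hypothetical stand-ins, not rustc's actual TypeFlags.
const HAS_TY_PARAM: u32 = 1 << 0;
const HAS_RE_PARAM: u32 = 1 << 1;
const HAS_CT_PARAM: u32 = 1 << 2;
const HAS_TY_INFER: u32 = 1 << 3;
const NEEDS_SUBST: u32 = HAS_TY_PARAM | HAS_RE_PARAM | HAS_CT_PARAM;

fn needs_subst(flags: u32) -> bool {
    // A value needs substitution if it mentions any kind of parameter.
    flags & NEEDS_SUBST != 0
}

fn main() {
    assert!(needs_subst(HAS_RE_PARAM));
    assert!(needs_subst(HAS_TY_PARAM | HAS_TY_INFER));
    // An inference flag alone does not require substitution.
    assert!(!needs_subst(HAS_TY_INFER));
    println!("ok");
}
```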
impl UniverseIndex {
- pub const ROOT: UniverseIndex = UniverseIndex::from_u32_const(0);
+ pub const ROOT: UniverseIndex = UniverseIndex::from_u32(0);
/// Returns the "next" universe index in order -- this new index
/// is considered to extend all previous universes. This
Reveal::UserFacing => ParamEnvAnd { param_env: self, value },
Reveal::All => {
- if value.has_placeholders() || value.needs_infer() || value.has_param_types() {
- ParamEnvAnd { param_env: self, value }
- } else {
+ if value.is_global() {
ParamEnvAnd { param_env: self.without_caller_bounds(), value }
+ } else {
+ ParamEnvAnd { param_env: self, value }
}
}
}
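The `Reveal::All` change above replaces a spelled-out negative condition (`has_placeholders() || needs_infer() || has_param_types()`) with a single named query, `is_global()`, and swaps the branch order accordingly. A small sketch of the equivalence, with hypothetical stand-ins for the real flag queries:

```rust
// Hypothetical stand-in for the value being checked; the real code queries
// TypeFlags rather than storing booleans.
struct Value {
    has_placeholders: bool,
    needs_infer: bool,
    has_param_types: bool,
}

impl Value {
    // "Global" means: no placeholders, no inference variables, no parameters.
    fn is_global(&self) -> bool {
        !self.has_placeholders && !self.needs_infer && !self.has_param_types
    }
}

fn main() {
    let concrete = Value { has_placeholders: false, needs_infer: false, has_param_types: false };
    assert!(concrete.is_global());
    let inferring = Value { has_placeholders: false, needs_infer: true, has_param_types: false };
    assert!(!inferring.is_global());
    println!("ok");
}
```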
if !tcx.consider_optimizing(|| format!("Reorder fields of {:?}", tcx.def_path_str(did))) {
flags.insert(ReprFlags::IS_LINEAR);
}
- ReprOptions { int: size, align: max_align, pack: min_pack, flags: flags }
+ ReprOptions { int: size, align: max_align, pack: min_pack, flags }
}
#[inline]
}
} else {
match self.def_kind(def_id).expect("no def for `DefId`") {
- DefKind::AssocConst | DefKind::Method | DefKind::AssocTy => true,
+ DefKind::AssocConst | DefKind::AssocFn | DefKind::AssocTy => true,
_ => false,
}
};
/// `DefId` of the impl that the method belongs to; otherwise, returns `None`.
pub fn impl_of_method(self, def_id: DefId) -> Option<DefId> {
let item = if def_id.krate != LOCAL_CRATE {
- if let Some(DefKind::Method) = self.def_kind(def_id) {
+ if let Some(DefKind::AssocFn) = self.def_kind(def_id) {
Some(self.associated_item(def_id))
} else {
None
pub fn hygienic_eq(self, use_name: Ident, def_name: Ident, def_parent_def_id: DefId) -> bool {
// We could use `Ident::eq` here, but we deliberately don't. The name
// comparison fails frequently, and we want to avoid the expensive
- // `modern()` calls required for the span comparison whenever possible.
+ // `normalize_to_macros_2_0()` calls required for the span comparison whenever possible.
use_name.name == def_name.name
&& use_name
.span
}
fn expansion_that_defined(self, scope: DefId) -> ExpnId {
- match scope.krate {
- LOCAL_CRATE => self.hir().definitions().expansion_that_defined(scope.index),
- _ => ExpnId::root(),
+ match scope.as_local() {
+ Some(scope) => self.hir().definitions().expansion_that_defined(scope),
+ None => ExpnId::root(),
}
}
pub fn adjust_ident(self, mut ident: Ident, scope: DefId) -> Ident {
- ident.span.modernize_and_adjust(self.expansion_that_defined(scope));
+ ident.span.normalize_to_macros_2_0_and_adjust(self.expansion_that_defined(scope));
ident
}
scope: DefId,
block: hir::HirId,
) -> (Ident, DefId) {
- let scope = match ident.span.modernize_and_adjust(self.expansion_that_defined(scope)) {
- Some(actual_expansion) => {
- self.hir().definitions().parent_module_of_macro_def(actual_expansion)
- }
- None => self.parent_module(block),
- };
+ let scope =
+ match ident.span.normalize_to_macros_2_0_and_adjust(self.expansion_that_defined(scope))
+ {
+ Some(actual_expansion) => {
+ self.hir().definitions().parent_module_of_macro_def(actual_expansion)
+ }
+ None => self.parent_module(block).to_def_id(),
+ };
(ident, scope)
}
context::provide(providers);
erase_regions::provide(providers);
layout::provide(providers);
- *providers =
- ty::query::Providers { trait_impls_of: trait_def::trait_impls_of_provider, ..*providers };
+ *providers = ty::query::Providers {
+ trait_impls_of: trait_def::trait_impls_of_provider,
+ all_local_trait_impls: trait_def::all_local_trait_impls,
+ ..*providers
+ };
}
/// A map for the local crate mapping each type to a vector of its
if !value.has_projections() {
value
} else {
- value.fold_with(&mut NormalizeAfterErasingRegionsFolder {
- tcx: self,
- param_env: param_env,
- })
+ value.fold_with(&mut NormalizeAfterErasingRegionsFolder { tcx: self, param_env })
}
}
args: &[GenericArg<'tcx>],
) -> Result<Self::Path, Self::Error>;
- // Defaults (should not be overriden):
+ // Defaults (should not be overridden):
fn default_print_def_path(
self,
use crate::hir::map::{DefPathData, DisambiguatedDefPathData};
use crate::middle::cstore::{ExternCrate, ExternCrateSource};
use crate::middle::region;
-use crate::mir::interpret::{sign_extend, truncate, ConstValue, Scalar};
+use crate::mir::interpret::{sign_extend, truncate, AllocId, ConstValue, Pointer, Scalar};
use crate::ty::layout::{Integer, IntegerExt, Size};
use crate::ty::subst::{GenericArg, GenericArgKind, Subst};
use crate::ty::{self, DefIdTree, ParamConst, Ty, TyCtxt, TypeFoldable};
use rustc_target::spec::abi::Abi;
use std::cell::Cell;
+use std::char;
use std::collections::BTreeMap;
use std::fmt::{self, Write as _};
use std::ops::{Deref, DerefMut};
Ok(self)
}
+ /// Prints `{f: t}` or `{f as t}` depending on the `conversion` argument

+ fn typed_value(
+ mut self,
+ f: impl FnOnce(Self) -> Result<Self, Self::Error>,
+ t: impl FnOnce(Self) -> Result<Self, Self::Error>,
+ conversion: &str,
+ ) -> Result<Self::Const, Self::Error> {
+ self.write_str("{")?;
+ self = f(self)?;
+ self.write_str(conversion)?;
+ self = t(self)?;
+ self.write_str("}")?;
+ Ok(self)
+ }
+
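The new `typed_value` helper above brackets a value and its type as `{value: type}` (annotation) or `{value as type}` (cast), depending on the separator passed in. A simplified string-based sketch of the output shape (the real helper threads printer state instead of returning a `String`):

```rust
// Illustrative model of the `{f: t}` / `{f as t}` output produced by the
// printer's typed_value helper; the function name and signature here are
// simplified stand-ins.
fn typed_value(value: &str, ty: &str, conversion: &str) -> String {
    // `{{` and `}}` are fmt escapes for literal braces.
    format!("{{{}{}{}}}", value, conversion, ty)
}

fn main() {
    assert_eq!(typed_value("0x2a", "u8", ": "), "{0x2a: u8}");
    assert_eq!(typed_value("0x0", "*const u8", " as "), "{0x0 as *const u8}");
    println!("ok");
}
```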
/// Prints `<...>` around what `f` prints.
fn generic_delimiters(
self,
/// This is typically the case for all non-`'_` regions.
fn region_should_not_be_omitted(&self, region: ty::Region<'_>) -> bool;
- // Defaults (should not be overriden):
+ // Defaults (should not be overridden):
/// If possible, this returns a global path resolving to `def_id` that is visible
/// from at least one local module, and returns `true`. If the crate defining `def_id` is
/// post-process it into the valid and visible version that
/// accounts for re-exports.
///
- /// This method should only be callled by itself or
+ /// This method should only be called by itself or
/// `try_print_visible_def_path`.
///
/// `callers` is a chain of visible_parent's leading to `def_id`,
ty::Error => p!(write("[type error]")),
ty::Param(ref param_ty) => p!(write("{}", param_ty)),
ty::Bound(debruijn, bound_ty) => match bound_ty.kind {
- ty::BoundTyKind::Anon => {
- if debruijn == ty::INNERMOST {
- p!(write("^{}", bound_ty.var.index()))
- } else {
- p!(write("^{}_{}", debruijn.index(), bound_ty.var.index()))
- }
- }
-
+ ty::BoundTyKind::Anon => self.pretty_print_bound_var(debruijn, bound_ty.var)?,
ty::BoundTyKind::Param(p) => p!(write("{}", p)),
},
ty::Adt(def, substs) => {
if self.tcx().sess.verbose() {
p!(write("{:?}", sz));
} else if let ty::ConstKind::Unevaluated(..) = sz.val {
- // do not try to evalute unevaluated constants. If we are const evaluating an
+ // do not try to evaluate unevaluated constants. If we are const evaluating an
// array length anon const, rustc will (with debug assertions) print the
// constant's path. Which will end up here again.
p!(write("_"));
- } else if let Some(n) = sz.try_eval_usize(self.tcx(), ty::ParamEnv::empty()) {
+ } else if let Some(n) = sz.val.try_to_bits(self.tcx().data_layout.pointer_size) {
p!(write("{}", n));
} else {
p!(write("_"));
Ok(self)
}
+ fn pretty_print_bound_var(
+ &mut self,
+ debruijn: ty::DebruijnIndex,
+ var: ty::BoundVar,
+ ) -> Result<(), Self::Error> {
+ if debruijn == ty::INNERMOST {
+ write!(self, "^{}", var.index())
+ } else {
+ write!(self, "^{}_{}", debruijn.index(), var.index())
+ }
+ }
+
fn infer_ty_name(&self, _: ty::TyVid) -> Option<String> {
None
}
macro_rules! print_underscore {
() => {{
- p!(write("_"));
if print_ty {
- p!(write(": "), print(ct.ty));
+ self = self.typed_value(
+ |mut this| {
+ write!(this, "_")?;
+ Ok(this)
+ },
+ |this| this.print_type(ct.ty),
+ ": ",
+ )?;
+ } else {
+ write!(self, "_")?;
}
}};
}
- match (ct.val, &ct.ty.kind) {
- (_, ty::FnDef(did, substs)) => p!(print_value_path(*did, substs)),
- (ty::ConstKind::Unevaluated(did, substs, promoted), _) => {
+ match ct.val {
+ ty::ConstKind::Unevaluated(did, substs, promoted) => {
if let Some(promoted) = promoted {
p!(print_value_path(did, substs));
p!(write("::{:?}", promoted));
}
}
}
- (ty::ConstKind::Infer(..), _) => print_underscore!(),
- (ty::ConstKind::Param(ParamConst { name, .. }), _) => p!(write("{}", name)),
- (ty::ConstKind::Value(value), _) => {
+ ty::ConstKind::Infer(..) => print_underscore!(),
+ ty::ConstKind::Param(ParamConst { name, .. }) => p!(write("{}", name)),
+ ty::ConstKind::Value(value) => {
return self.pretty_print_const_value(value, ct.ty, print_ty);
}
- _ => {
- // fallback
- p!(write("{:?}", ct.val));
- if print_ty {
- p!(write(": "), print(ct.ty));
- }
+ ty::ConstKind::Bound(debruijn, bound_var) => {
+ self.pretty_print_bound_var(debruijn, bound_var)?
}
+ ty::ConstKind::Placeholder(placeholder) => p!(write("Placeholder({:?})", placeholder)),
};
Ok(self)
}
- fn pretty_print_const_value(
+ fn pretty_print_const_scalar(
mut self,
- ct: ConstValue<'tcx>,
+ scalar: Scalar,
ty: Ty<'tcx>,
print_ty: bool,
) -> Result<Self::Const, Self::Error> {
define_scoped_cx!(self);
- if self.tcx().sess.verbose() {
- p!(write("ConstValue({:?}: {:?})", ct, ty));
- return Ok(self);
- }
-
- let u8 = self.tcx().types.u8;
-
- match (ct, &ty.kind) {
- (ConstValue::Scalar(Scalar::Raw { data, .. }), ty::Bool) => {
- p!(write("{}", if data == 0 { "false" } else { "true" }))
- }
- (ConstValue::Scalar(Scalar::Raw { data, .. }), ty::Float(ast::FloatTy::F32)) => {
+ match (scalar, &ty.kind) {
+ // Byte strings (&[u8; N])
+ (
+ Scalar::Ptr(ptr),
+ ty::Ref(
+ _,
+ ty::TyS {
+ kind:
+ ty::Array(
+ ty::TyS { kind: ty::Uint(ast::UintTy::U8), .. },
+ ty::Const {
+ val:
+ ty::ConstKind::Value(ConstValue::Scalar(Scalar::Raw {
+ data,
+ ..
+ })),
+ ..
+ },
+ ),
+ ..
+ },
+ _,
+ ),
+ ) => {
+ let byte_str = self
+ .tcx()
+ .alloc_map
+ .lock()
+ .unwrap_memory(ptr.alloc_id)
+ .get_bytes(&self.tcx(), ptr, Size::from_bytes(*data as u64))
+ .unwrap();
+ p!(pretty_print_byte_str(byte_str));
+ }
+ // Bool
+ (Scalar::Raw { data: 0, .. }, ty::Bool) => p!(write("false")),
+ (Scalar::Raw { data: 1, .. }, ty::Bool) => p!(write("true")),
+ // Float
+ (Scalar::Raw { data, .. }, ty::Float(ast::FloatTy::F32)) => {
p!(write("{}f32", Single::from_bits(data)))
}
- (ConstValue::Scalar(Scalar::Raw { data, .. }), ty::Float(ast::FloatTy::F64)) => {
+ (Scalar::Raw { data, .. }, ty::Float(ast::FloatTy::F64)) => {
p!(write("{}f64", Double::from_bits(data)))
}
- (ConstValue::Scalar(Scalar::Raw { data, .. }), ty::Uint(ui)) => {
+ // Int
+ (Scalar::Raw { data, .. }, ty::Uint(ui)) => {
let bit_size = Integer::from_attr(&self.tcx(), UnsignedInt(*ui)).size();
- let max = truncate(u128::max_value(), bit_size);
+ let max = truncate(u128::MAX, bit_size);
let ui_str = ui.name_str();
if data == max {
p!(write("std::{}::MAX", ui_str))
} else {
- p!(write("{}{}", data, ui_str))
+ if print_ty { p!(write("{}{}", data, ui_str)) } else { p!(write("{}", data)) }
};
}
- (ConstValue::Scalar(Scalar::Raw { data, .. }), ty::Int(i)) => {
- let bit_size = Integer::from_attr(&self.tcx(), SignedInt(*i)).size().bits() as u128;
+ (Scalar::Raw { data, .. }, ty::Int(i)) => {
+ let size = Integer::from_attr(&self.tcx(), SignedInt(*i)).size();
+ let bit_size = size.bits() as u128;
let min = 1u128 << (bit_size - 1);
let max = min - 1;
- let ty = self.tcx().lift(&ty).unwrap();
- let size = self.tcx().layout_of(ty::ParamEnv::empty().and(ty)).unwrap().size;
let i_str = i.name_str();
match data {
d if d == min => p!(write("std::{}::MIN", i_str)),
d if d == max => p!(write("std::{}::MAX", i_str)),
- _ => p!(write("{}{}", sign_extend(data, size) as i128, i_str)),
+ _ => {
+ let data = sign_extend(data, size) as i128;
+ if print_ty {
+ p!(write("{}{}", data, i_str))
+ } else {
+ p!(write("{}", data))
+ }
+ }
}
}
- (ConstValue::Scalar(Scalar::Raw { data, .. }), ty::Char) => {
- p!(write("{:?}", ::std::char::from_u32(data as u32).unwrap()))
- }
- (ConstValue::Scalar(_), ty::RawPtr(_)) => p!(write("{{pointer}}")),
- (ConstValue::Scalar(Scalar::Ptr(ptr)), ty::FnPtr(_)) => {
+ // Char
+ (Scalar::Raw { data, .. }, ty::Char) if char::from_u32(data as u32).is_some() => {
+ p!(write("{:?}", char::from_u32(data as u32).unwrap()))
+ }
+ // Raw pointers
+ (Scalar::Raw { data, .. }, ty::RawPtr(_)) => {
+ self = self.typed_value(
+ |mut this| {
+ write!(this, "0x{:x}", data)?;
+ Ok(this)
+ },
+ |this| this.print_type(ty),
+ " as ",
+ )?;
+ }
+ (Scalar::Ptr(ptr), ty::FnPtr(_)) => {
let instance = {
let alloc_map = self.tcx().alloc_map.lock();
alloc_map.unwrap_fn(ptr.alloc_id)
};
- p!(print_value_path(instance.def_id(), instance.substs));
- }
- _ => {
- let printed = if let ty::Ref(_, ref_ty, _) = ty.kind {
- let byte_str = match (ct, &ref_ty.kind) {
- (ConstValue::Scalar(Scalar::Ptr(ptr)), ty::Array(t, n)) if *t == u8 => {
- let n = n.eval_usize(self.tcx(), ty::ParamEnv::empty());
- Some(
- self.tcx()
- .alloc_map
- .lock()
- .unwrap_memory(ptr.alloc_id)
- .get_bytes(&self.tcx(), ptr, Size::from_bytes(n))
- .unwrap(),
- )
- }
- (ConstValue::Slice { data, start, end }, ty::Slice(t)) if *t == u8 => {
- // The `inspect` here is okay since we checked the bounds, and there are
- // no relocations (we have an active slice reference here). We don't use
- // this result to affect interpreter execution.
- Some(data.inspect_with_undef_and_ptr_outside_interpreter(start..end))
- }
- _ => None,
- };
-
- if let Some(byte_str) = byte_str {
- p!(write("b\""));
- for &c in byte_str {
- for e in std::ascii::escape_default(c) {
- self.write_char(e as char)?;
- }
- }
- p!(write("\""));
- true
- } else if let (ConstValue::Slice { data, start, end }, ty::Str) =
- (ct, &ref_ty.kind)
- {
- // The `inspect` here is okay since we checked the bounds, and there are no
- // relocations (we have an active `str` reference here). We don't use this
- // result to affect interpreter execution.
- let slice = data.inspect_with_undef_and_ptr_outside_interpreter(start..end);
- let s = ::std::str::from_utf8(slice).expect("non utf8 str from miri");
- p!(write("{:?}", s));
- true
+ self = self.typed_value(
+ |this| this.print_value_path(instance.def_id(), instance.substs),
+ |this| this.print_type(ty),
+ " as ",
+ )?;
+ }
+ // For function-type ZSTs, just printing the path is enough
+ (Scalar::Raw { size: 0, .. }, ty::FnDef(d, s)) => p!(print_value_path(*d, s)),
+ // Empty tuples are frequently occurring, so don't print the fallback.
+ (Scalar::Raw { size: 0, .. }, ty::Tuple(ts)) if ts.is_empty() => p!(write("()")),
+ // Zero element arrays have a trivial representation.
+ (
+ Scalar::Raw { size: 0, .. },
+ ty::Array(
+ _,
+ ty::Const {
+ val: ty::ConstKind::Value(ConstValue::Scalar(Scalar::Raw { data: 0, .. })),
+ ..
+ },
+ ),
+ ) => p!(write("[]")),
+ // Nontrivial types with scalar bit representation
+ (Scalar::Raw { data, size }, _) => {
+ let print = |mut this: Self| {
+ if size == 0 {
+ write!(this, "transmute(())")?;
} else {
- false
+ write!(this, "transmute(0x{:01$x})", data, size as usize * 2)?;
}
+ Ok(this)
+ };
+ self = if print_ty {
+ self.typed_value(print, |this| this.print_type(ty), ": ")?
} else {
- false
+ print(self)?
};
- if !printed {
- // fallback
- p!(write("{:?}", ct));
- if print_ty {
- p!(write(": "), print(ty));
- }
- }
}
- };
+ // Any pointer values not covered by a branch above
+ (Scalar::Ptr(p), _) => {
+ self = self.pretty_print_const_pointer(p, ty, print_ty)?;
+ }
+ }
Ok(self)
}
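The signed-integer branch of `pretty_print_const_scalar` above relies on sign extension: the raw `u128` bits are reinterpreted as a two's-complement value of the type's actual width before printing. A self-contained sketch (a hypothetical `sign_extend` mirroring the helper used here, which takes a `Size` rather than a bit count):

```rust
// Hypothetical stand-in for the sign_extend helper: interpret the low
// `bits` bits of `data` as a two's-complement signed value.
fn sign_extend(data: u128, bits: u32) -> i128 {
    let shift = 128 - bits;
    // Move the value's sign bit into the i128 sign position, then shift
    // back arithmetically so the sign propagates.
    ((data << shift) as i128) >> shift
}

fn main() {
    // 0xFF read as an 8-bit signed value is -1; as a 16-bit value it is 255.
    assert_eq!(sign_extend(0xFF, 8), -1);
    assert_eq!(sign_extend(0xFF, 16), 255);
    // The i8::MIN bit pattern.
    assert_eq!(sign_extend(0x80, 8), -128);
    println!("ok");
}
```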
+
+ /// This is overridden for MIR printing because we only want to hide alloc ids from users, not
+ /// from MIR where it is actually useful.
+ fn pretty_print_const_pointer(
+ mut self,
+ _: Pointer,
+ ty: Ty<'tcx>,
+ print_ty: bool,
+ ) -> Result<Self::Const, Self::Error> {
+ if print_ty {
+ self.typed_value(
+ |mut this| {
+ this.write_str("&_")?;
+ Ok(this)
+ },
+ |this| this.print_type(ty),
+ ": ",
+ )
+ } else {
+ self.write_str("&_")?;
+ Ok(self)
+ }
+ }
+
+ fn pretty_print_byte_str(mut self, byte_str: &'tcx [u8]) -> Result<Self::Const, Self::Error> {
+ define_scoped_cx!(self);
+ p!(write("b\""));
+ for &c in byte_str {
+ for e in std::ascii::escape_default(c) {
+ self.write_char(e as char)?;
+ }
+ }
+ p!(write("\""));
+ Ok(self)
+ }
+
+ fn pretty_print_const_value(
+ mut self,
+ ct: ConstValue<'tcx>,
+ ty: Ty<'tcx>,
+ print_ty: bool,
+ ) -> Result<Self::Const, Self::Error> {
+ define_scoped_cx!(self);
+
+ if self.tcx().sess.verbose() {
+ p!(write("ConstValue({:?}: {:?})", ct, ty));
+ return Ok(self);
+ }
+
+ let u8_type = self.tcx().types.u8;
+
+ match (ct, &ty.kind) {
+ (ConstValue::Scalar(scalar), _) => self.pretty_print_const_scalar(scalar, ty, print_ty),
+ (
+ ConstValue::Slice { data, start, end },
+ ty::Ref(_, ty::TyS { kind: ty::Slice(t), .. }, _),
+ ) if *t == u8_type => {
+ // The `inspect` here is okay since we checked the bounds, and there are
+ // no relocations (we have an active slice reference here). We don't use
+ // this result to affect interpreter execution.
+ let byte_str = data.inspect_with_undef_and_ptr_outside_interpreter(start..end);
+ self.pretty_print_byte_str(byte_str)
+ }
+ (
+ ConstValue::Slice { data, start, end },
+ ty::Ref(_, ty::TyS { kind: ty::Str, .. }, _),
+ ) => {
+ // The `inspect` here is okay since we checked the bounds, and there are no
+ // relocations (we have an active `str` reference here). We don't use this
+ // result to affect interpreter execution.
+ let slice = data.inspect_with_undef_and_ptr_outside_interpreter(start..end);
+ let s = ::std::str::from_utf8(slice).expect("non utf8 str from miri");
+ p!(write("{:?}", s));
+ Ok(self)
+ }
+ (ConstValue::ByRef { alloc, offset }, ty::Array(t, n)) if *t == u8_type => {
+ let n = n.val.try_to_bits(self.tcx().data_layout.pointer_size).unwrap();
+ // cast is ok because we already checked for pointer size (32 or 64 bit) above
+ let n = Size::from_bytes(n as u64);
+ let ptr = Pointer::new(AllocId(0), offset);
+
+ let byte_str = alloc.get_bytes(&self.tcx(), ptr, n).unwrap();
+ p!(write("*"));
+ p!(pretty_print_byte_str(byte_str));
+ Ok(self)
+ }
+ // FIXME(oli-obk): also pretty print arrays and other aggregate constants by reading
+ // their fields instead of just dumping the memory.
+ _ => {
+ // fallback
+ p!(write("{:?}", ct));
+ if print_ty {
+ p!(write(": "), print(ty));
+ }
+ Ok(self)
+ }
+ }
+ }
}
// HACK(eddyb) boxed to avoid moving around a large struct by-value.
empty_path: bool,
in_value: bool,
+ pub print_alloc_ids: bool,
used_region_names: FxHashSet<Symbol>,
region_index: usize,
fmt,
empty_path: false,
in_value: ns == Namespace::ValueNS,
+ print_alloc_ids: false,
used_region_names: Default::default(),
region_index: 0,
binder_depth: 0,
self.pretty_in_binder(value)
}
+ fn typed_value(
+ mut self,
+ f: impl FnOnce(Self) -> Result<Self, Self::Error>,
+ t: impl FnOnce(Self) -> Result<Self, Self::Error>,
+ conversion: &str,
+ ) -> Result<Self::Const, Self::Error> {
+ self.write_str("{")?;
+ self = f(self)?;
+ self.write_str(conversion)?;
+ let was_in_value = std::mem::replace(&mut self.in_value, false);
+ self = t(self)?;
+ self.in_value = was_in_value;
+ self.write_str("}")?;
+ Ok(self)
+ }
+
fn generic_delimiters(
mut self,
f: impl FnOnce(Self) -> Result<Self, Self::Error>,
ty::ReStatic | ty::ReEmpty(_) | ty::ReClosureBound(_) => true,
}
}
+
+ fn pretty_print_const_pointer(
+ self,
+ p: Pointer,
+ ty: Ty<'tcx>,
+ print_ty: bool,
+ ) -> Result<Self::Const, Self::Error> {
+ let print = |mut this: Self| {
+ define_scoped_cx!(this);
+ if this.print_alloc_ids {
+ p!(write("{:?}", p));
+ } else {
+ p!(write("&_"));
+ }
+ Ok(this)
+ };
+ if print_ty {
+ self.typed_value(print, |this| this.print_type(ty), ": ")
+ } else {
+ print(self)
+ }
+ }
}
// HACK(eddyb) limited to `FmtPrinter` because of `region_highlight_mode`.
-For more information about how the query system works, see the [rustc guide].
+For more information about how the query system works, see the [rustc dev guide].
-[rustc guide]: https://rust-lang.github.io/rustc-guide/query.html
+[rustc dev guide]: https://rustc-dev-guide.rust-lang.org/query.html
use crate::dep_graph::{DepKind, DepNode};
use crate::ty::query::caches::QueryCache;
use crate::ty::query::plumbing::CycleError;
-use crate::ty::query::queries;
use crate::ty::query::{Query, QueryState};
use crate::ty::TyCtxt;
use rustc_data_structures::profiling::ProfileCategory;
-use rustc_hir::def_id::{CrateNum, DefId};
+use rustc_hir::def_id::DefId;
use crate::ich::StableHashingContext;
use rustc_data_structures::fingerprint::Fingerprint;
bug!("QueryDescription::load_from_disk() called for an unsupported query.")
}
}
-
-impl<'tcx> QueryDescription<'tcx> for queries::analysis<'tcx> {
- fn describe(_tcx: TyCtxt<'_>, _: CrateNum) -> Cow<'static, str> {
- "running analysis passes on this crate".into()
- }
-}
use crate::ty::query::caches::DefaultCacheSelector;
use crate::ty::subst::SubstsRef;
use crate::ty::{self, Ty, TyCtxt};
-use rustc_hir::def_id::{CrateNum, DefId, DefIndex, LOCAL_CRATE};
+use rustc_hir::def_id::{CrateNum, DefId, LocalDefId, LOCAL_CRATE};
use rustc_span::symbol::Symbol;
use rustc_span::{Span, DUMMY_SP};
}
}
-impl Key for DefIndex {
+impl Key for LocalDefId {
type CacheSelector = DefaultCacheSelector;
fn query_crate(&self) -> CrateNum {
- LOCAL_CRATE
+ self.to_def_id().query_crate()
}
- fn default_span(&self, _tcx: TyCtxt<'_>) -> Span {
- DUMMY_SP
+ fn default_span(&self, tcx: TyCtxt<'_>) -> Span {
+ self.to_def_id().default_span(tcx)
}
}
-use crate::dep_graph::{self, DepConstructor, DepNode};
+use crate::dep_graph::{self, DepConstructor, DepNode, DepNodeParams};
use crate::hir::exports::Export;
+use crate::hir::map;
+use crate::hir::{HirOwner, HirOwnerItems};
use crate::infer::canonical::{self, Canonical};
use crate::lint::LintLevelMap;
use crate::middle::codegen_fn_attrs::CodegenFnAttrs;
use crate::mir::interpret::{ConstEvalRawResult, ConstEvalResult, ConstValue};
use crate::mir::interpret::{LitToConstError, LitToConstInput};
use crate::mir::mono::CodegenUnit;
-use crate::session::config::{EntryFnType, OptLevel, OutputFilenames, SymbolManglingVersion};
-use crate::session::CrateDisambiguator;
use crate::traits::query::{
CanonicalPredicateGoal, CanonicalProjectionGoal, CanonicalTyGoal,
CanonicalTypeOpAscribeUserTypeGoal, CanonicalTypeOpEqGoal, CanonicalTypeOpNormalizeGoal,
use rustc_data_structures::sync::Lrc;
use rustc_hir as hir;
use rustc_hir::def::DefKind;
-use rustc_hir::def_id::{CrateNum, DefId, DefIdMap, DefIdSet, DefIndex};
+use rustc_hir::def_id::{CrateNum, DefId, DefIdMap, DefIdSet, LocalDefId};
use rustc_hir::{Crate, HirIdSet, ItemLocalId, TraitCandidate};
use rustc_index::vec::IndexVec;
+use rustc_session::config::{EntryFnType, OptLevel, OutputFilenames, SymbolManglingVersion};
+use rustc_session::CrateDisambiguator;
use rustc_target::spec::PanicStrategy;
use rustc_ast::ast;
use rustc_span::symbol::Symbol;
use rustc_span::{Span, DUMMY_SP};
use std::borrow::Cow;
+use std::collections::BTreeMap;
use std::convert::TryFrom;
use std::ops::Deref;
use std::sync::Arc;
#[macro_use]
mod plumbing;
+pub use self::plumbing::CycleError;
use self::plumbing::*;
-pub use self::plumbing::{force_from_dep_node, CycleError};
mod stats;
pub use self::stats::print_stats;
// Queries marked with `fatal_cycle` do not need the latter implementation,
// as they will raise a fatal error on query cycles instead.
-rustc_query_append! { [define_queries!][ <'tcx>
- Other {
- /// Runs analysis passes on the crate.
- [eval_always] fn analysis: Analysis(CrateNum) -> Result<(), ErrorReported>,
- },
-]}
+rustc_query_append! { [define_queries!][<'tcx>] }
+
+/// The red/green evaluation system will try to mark a specific DepNode in the
+/// dependency graph as green by recursively trying to mark the dependencies of
+/// that `DepNode` as green. While doing so, it will sometimes encounter a `DepNode`
+/// where we don't know if it is red or green and we therefore actually have
+/// to recompute its value in order to find out. Since the only piece of
+/// information that we have at that point is the `DepNode` we are trying to
+/// re-evaluate, we need some way to re-run a query from just that. This is what
+/// `force_from_dep_node()` implements.
+///
+/// In the general case, a `DepNode` consists of a `DepKind` and an opaque
+/// GUID/fingerprint that will uniquely identify the node. This GUID/fingerprint
+/// is usually constructed by computing a stable hash of the query-key that the
+/// `DepNode` corresponds to. Consequently, it is not in general possible to go
+/// back from hash to query-key (since hash functions are not reversible). For
+/// this reason `force_from_dep_node()` is expected to fail from time to time
+/// because we just cannot find out, from the `DepNode` alone, what the
+/// corresponding query-key is and therefore cannot re-run the query.
+///
+/// The system deals with this case by letting `try_mark_green` fail, which forces
+/// the root query to be re-evaluated.
+///
+/// Now, if `force_from_dep_node()` always failed, it would be pretty useless.
+/// Fortunately, we can use some contextual information that will allow us to
+/// reconstruct query-keys for certain kinds of `DepNode`s. In particular, we
+/// enforce by construction that the GUID/fingerprint of certain `DepNode`s is a
+/// valid `DefPathHash`. Since we also always build a huge table that maps every
+/// `DefPathHash` in the current codebase to the corresponding `DefId`, we have
+/// everything we need to re-run the query.
+///
+/// Take the `mir_validated` query as an example. Like many other queries, it
+/// just has a single parameter: the `DefId` of the item it will compute the
+/// validated MIR for. Now, when we call `force_from_dep_node()` on a `DepNode`
+/// with kind `MirValidated`, we know that the GUID/fingerprint of the `DepNode`
+/// is actually a `DefPathHash`, and can therefore just look up the corresponding
+/// `DefId` in `tcx.def_path_hash_to_def_id`.
+///
+/// When you implement a new query, it will likely have a corresponding new
+/// `DepKind`, and you'll have to support it here in `force_from_dep_node()`. As
+/// a rule of thumb, if your query takes a `DefId` or `LocalDefId` as sole parameter,
+/// then `force_from_dep_node()` should not fail for it. Otherwise, you can just
+/// add it to the "We don't have enough information to reconstruct..." group in
+/// the match below.
+pub fn force_from_dep_node<'tcx>(tcx: TyCtxt<'tcx>, dep_node: &DepNode) -> bool {
+ use crate::dep_graph::DepKind;
+
+ // We must avoid ever having to call `force_from_dep_node()` for a
+ // `DepNode::codegen_unit`:
+ // Since we cannot reconstruct the query key of a `DepNode::codegen_unit`, we
+ // would always end up having to evaluate the first caller of the
+ // `codegen_unit` query that *is* reconstructible. This might very well be
+ // the `compile_codegen_unit` query, thus re-codegenning the whole CGU just
+ // to re-trigger calling the `codegen_unit` query with the right key. At
+ // that point we would already have re-done all the work we are trying to
+ // avoid doing in the first place.
+ // The solution is simple: Just explicitly call the `codegen_unit` query for
+ // each CGU, right after partitioning. This way `try_mark_green` will always
+ // hit the cache instead of having to go through `force_from_dep_node`.
+ // This assertion makes sure we actually keep applying the solution above.
+ debug_assert!(
+ dep_node.kind != DepKind::codegen_unit,
+ "calling force_from_dep_node() on DepKind::codegen_unit"
+ );
+
+ if !dep_node.kind.can_reconstruct_query_key() {
+ return false;
+ }
+
+ rustc_dep_node_force!([dep_node, tcx]
+ // These are inputs that are expected to be pre-allocated and that
+ // should therefore always be red or green already.
+ DepKind::CrateMetadata |
+
+ // These are anonymous nodes.
+ DepKind::TraitSelect |
+
+ // We don't have enough information to reconstruct the query key of
+ // these.
+ DepKind::CompileCodegenUnit => {
+ bug!("force_from_dep_node: encountered {:?}", dep_node)
+ }
+ );
+
+ false
+}
+
+impl DepNode {
+ /// Check whether the query invocation corresponding to the given
+ /// DepNode is eligible for on-disk-caching. If so, this method
+ /// will execute the query corresponding to the given DepNode.
+ /// Also, as a sanity check, it expects that the corresponding query
+ /// invocation has been marked as green already.
+ pub fn try_load_from_on_disk_cache<'tcx>(&self, tcx: TyCtxt<'tcx>) {
+ use crate::dep_graph::DepKind;
+
+ rustc_dep_node_try_load_from_on_disk_cache!(self, tcx)
+ }
+}
use crate::ich::{CachingSourceMapView, Fingerprint};
use crate::mir::interpret::{AllocDecodingSession, AllocDecodingState};
use crate::mir::{self, interpret};
-use crate::session::{CrateDisambiguator, Session};
use crate::ty::codec::{self as ty_codec, TyDecoder, TyEncoder};
use crate::ty::context::TyCtxt;
use crate::ty::{self, Ty};
use rustc_data_structures::sync::{HashMapExt, Lock, Lrc, Once};
use rustc_data_structures::thin_vec::ThinVec;
use rustc_errors::Diagnostic;
-use rustc_hir as hir;
use rustc_hir::def_id::{CrateNum, DefId, DefIndex, LocalDefId, LOCAL_CRATE};
use rustc_index::vec::{Idx, IndexVec};
use rustc_serialize::{
opaque, Decodable, Decoder, Encodable, Encoder, SpecializedDecoder, SpecializedEncoder,
UseSpecializedDecodable, UseSpecializedEncodable,
};
+use rustc_session::{CrateDisambiguator, Session};
use rustc_span::hygiene::{ExpnId, SyntaxContext};
use rustc_span::source_map::{SourceMap, StableSourceFileId};
use rustc_span::{BytePos, SourceFile, Span, DUMMY_SP};
impl AbsoluteBytePos {
fn new(pos: usize) -> AbsoluteBytePos {
- debug_assert!(pos <= ::std::u32::MAX as usize);
+ debug_assert!(pos <= u32::MAX as usize);
AbsoluteBytePos(pos as u32)
}
//- DECODING -------------------------------------------------------------------
-/// A decoder that can read fro the incr. comp. cache. It is similar to the one
+/// A decoder that can read from the incr. comp. cache. It is similar to the one
/// we use for crate metadata decoding in that it can rebase spans and eventually
/// will also handle things that contain `Ty` instances.
struct CacheDecoder<'a, 'tcx> {
impl<'a, 'tcx> SpecializedDecoder<LocalDefId> for CacheDecoder<'a, 'tcx> {
#[inline]
fn specialized_decode(&mut self) -> Result<LocalDefId, Self::Error> {
- Ok(LocalDefId::from_def_id(DefId::decode(self)?))
- }
-}
-
-impl<'a, 'tcx> SpecializedDecoder<hir::HirId> for CacheDecoder<'a, 'tcx> {
- fn specialized_decode(&mut self) -> Result<hir::HirId, Self::Error> {
- // Load the `DefPathHash` which is what we encoded the `DefIndex` as.
- let def_path_hash = DefPathHash::decode(self)?;
-
- // Use the `DefPathHash` to map to the current `DefId`.
- let def_id = self.tcx().def_path_hash_to_def_id.as_ref().unwrap()[&def_path_hash];
-
- debug_assert!(def_id.is_local());
-
- // The `ItemLocalId` needs no remapping.
- let local_id = hir::ItemLocalId::decode(self)?;
-
- // Reconstruct the `HirId` and look up the corresponding `NodeId` in the
- // context of the current session.
- Ok(hir::HirId { owner: def_id.index, local_id })
+ Ok(DefId::decode(self)?.expect_local())
}
}
}
}
-impl<'a, 'tcx, E> SpecializedEncoder<hir::HirId> for CacheEncoder<'a, 'tcx, E>
-where
- E: 'a + TyEncoder,
-{
- #[inline]
- fn specialized_encode(&mut self, id: &hir::HirId) -> Result<(), Self::Error> {
- let hir::HirId { owner, local_id } = *id;
-
- let def_path_hash = self.tcx.hir().definitions().def_path_hash(owner);
-
- def_path_hash.encode(self)?;
- local_id.encode(self)
- }
-}
-
impl<'a, 'tcx, E> SpecializedEncoder<DefId> for CacheEncoder<'a, 'tcx, E>
where
E: 'a + TyEncoder,
//! generate the actual methods on tcx which find and execute the provider,
//! manage the caches, and so forth.
-use crate::dep_graph::{DepKind, DepNode, DepNodeIndex, SerializedDepNodeIndex};
+use crate::dep_graph::{DepNode, DepNodeIndex, SerializedDepNodeIndex};
use crate::ty::query::caches::QueryCache;
use crate::ty::query::config::{QueryAccessors, QueryDescription};
use crate::ty::query::job::{QueryInfo, QueryJob, QueryJobId, QueryShardJobId};
}
#[allow(dead_code)]
- fn force_query<Q: QueryDescription<'tcx> + 'tcx>(
+ pub(super) fn force_query<Q: QueryDescription<'tcx> + 'tcx>(
self,
key: Q::Key,
span: Span,
impl TyCtxt<$tcx> {
/// Returns a transparent wrapper for `TyCtxt`, which ensures queries
- /// are executed instead of just returing their results.
+ /// are executed instead of just returning their results.
#[inline(always)]
pub fn ensure(self) -> TyCtxtEnsure<$tcx> {
TyCtxtEnsure {
}
};
}
-
-/// The red/green evaluation system will try to mark a specific DepNode in the
-/// dependency graph as green by recursively trying to mark the dependencies of
-/// that `DepNode` as green. While doing so, it will sometimes encounter a `DepNode`
-/// where we don't know if it is red or green and we therefore actually have
-/// to recompute its value in order to find out. Since the only piece of
-/// information that we have at that point is the `DepNode` we are trying to
-/// re-evaluate, we need some way to re-run a query from just that. This is what
-/// `force_from_dep_node()` implements.
-///
-/// In the general case, a `DepNode` consists of a `DepKind` and an opaque
-/// GUID/fingerprint that will uniquely identify the node. This GUID/fingerprint
-/// is usually constructed by computing a stable hash of the query-key that the
-/// `DepNode` corresponds to. Consequently, it is not in general possible to go
-/// back from hash to query-key (since hash functions are not reversible). For
-/// this reason `force_from_dep_node()` is expected to fail from time to time
-/// because we just cannot find out, from the `DepNode` alone, what the
-/// corresponding query-key is and therefore cannot re-run the query.
-///
-/// The system deals with this case letting `try_mark_green` fail which forces
-/// the root query to be re-evaluated.
-///
-/// Now, if `force_from_dep_node()` would always fail, it would be pretty useless.
-/// Fortunately, we can use some contextual information that will allow us to
-/// reconstruct query-keys for certain kinds of `DepNode`s. In particular, we
-/// enforce by construction that the GUID/fingerprint of certain `DepNode`s is a
-/// valid `DefPathHash`. Since we also always build a huge table that maps every
-/// `DefPathHash` in the current codebase to the corresponding `DefId`, we have
-/// everything we need to re-run the query.
-///
-/// Take the `mir_validated` query as an example. Like many other queries, it
-/// just has a single parameter: the `DefId` of the item it will compute the
-/// validated MIR for. Now, when we call `force_from_dep_node()` on a `DepNode`
-/// with kind `MirValidated`, we know that the GUID/fingerprint of the `DepNode`
-/// is actually a `DefPathHash`, and can therefore just look up the corresponding
-/// `DefId` in `tcx.def_path_hash_to_def_id`.
-///
-/// When you implement a new query, it will likely have a corresponding new
-/// `DepKind`, and you'll have to support it here in `force_from_dep_node()`. As
-/// a rule of thumb, if your query takes a `DefId` or `DefIndex` as sole parameter,
-/// then `force_from_dep_node()` should not fail for it. Otherwise, you can just
-/// add it to the "We don't have enough information to reconstruct..." group in
-/// the match below.
-pub fn force_from_dep_node(tcx: TyCtxt<'_>, dep_node: &DepNode) -> bool {
- use crate::dep_graph::RecoverKey;
-
- // We must avoid ever having to call `force_from_dep_node()` for a
- // `DepNode::codegen_unit`:
- // Since we cannot reconstruct the query key of a `DepNode::codegen_unit`, we
- // would always end up having to evaluate the first caller of the
- // `codegen_unit` query that *is* reconstructible. This might very well be
- // the `compile_codegen_unit` query, thus re-codegenning the whole CGU just
- // to re-trigger calling the `codegen_unit` query with the right key. At
- // that point we would already have re-done all the work we are trying to
- // avoid doing in the first place.
- // The solution is simple: Just explicitly call the `codegen_unit` query for
- // each CGU, right after partitioning. This way `try_mark_green` will always
- // hit the cache instead of having to go through `force_from_dep_node`.
- // This assertion makes sure, we actually keep applying the solution above.
- debug_assert!(
- dep_node.kind != DepKind::codegen_unit,
- "calling force_from_dep_node() on DepKind::codegen_unit"
- );
-
- if !dep_node.kind.can_reconstruct_query_key() {
- return false;
- }
-
- rustc_dep_node_force!([dep_node, tcx]
- // These are inputs that are expected to be pre-allocated and that
- // should therefore always be red or green already.
- DepKind::AllLocalTraitImpls |
- DepKind::CrateMetadata |
- DepKind::HirBody |
- DepKind::Hir |
-
- // These are anonymous nodes.
- DepKind::TraitSelect |
-
- // We don't have enough information to reconstruct the query key of
- // these.
- DepKind::CompileCodegenUnit => {
- bug!("force_from_dep_node: encountered {:?}", dep_node)
- }
-
- DepKind::Analysis => {
- let def_id = if let Some(def_id) = dep_node.extract_def_id(tcx) {
- def_id
- } else {
- // Return from the whole function.
- return false
- };
- tcx.force_query::<crate::ty::query::queries::analysis<'_>>(
- def_id.krate,
- DUMMY_SP,
- *dep_node
- );
- }
- );
-
- true
-}
b: &T,
) -> RelateResult<'tcx, T>;
- // Overrideable relations. You shouldn't typically call these
+ // Overridable relations. You shouldn't typically call these
// directly, instead call `relate()`, which in turn calls
// these. This is both more uniform but also allows us to add
// additional hooks for other types in the future if needed
Err(TypeError::Traits(expected_found(relation, &a.def_id, &b.def_id)))
} else {
let substs = relate_substs(relation, None, a.substs, b.substs)?;
- Ok(ty::TraitRef { def_id: a.def_id, substs: substs })
+ Ok(ty::TraitRef { def_id: a.def_id, substs })
}
}
}
Err(TypeError::Traits(expected_found(relation, &a.def_id, &b.def_id)))
} else {
let substs = relate_substs(relation, None, a.substs, b.substs)?;
- Ok(ty::ExistentialTraitRef { def_id: a.def_id, substs: substs })
+ Ok(ty::ExistentialTraitRef { def_id: a.def_id, substs })
}
}
}
impl<'tcx> ParamTy {
pub fn new(index: u32, name: Symbol) -> ParamTy {
- ParamTy { index, name: name }
+ ParamTy { index, name }
}
pub fn for_self() -> ParamTy {
/// the inference variable is supposed to satisfy the relation
/// *for every value of the placeholder region*. To ensure that doesn't
/// happen, you can use `leak_check`. This is more clearly explained
-/// by the [rustc guide].
+/// by the [rustc dev guide].
///
/// [1]: http://smallcultfollowing.com/babysteps/blog/2013/10/29/intermingled-parameter-lists/
/// [2]: http://smallcultfollowing.com/babysteps/blog/2013/11/04/intermingled-parameter-lists/
-/// [rustc guide]: https://rust-lang.github.io/rustc-guide/traits/hrtb.html
+/// [rustc dev guide]: https://rustc-dev-guide.rust-lang.org/traits/hrtb.html
#[derive(Clone, PartialEq, Eq, Hash, Copy, RustcEncodable, RustcDecodable, PartialOrd, Ord)]
pub enum RegionKind {
/// Region bound in a type or fn declaration which will be
}
}
- pub fn keep_in_local_tcx(&self) -> bool {
- if let ty::ReVar(..) = self { true } else { false }
- }
-
pub fn type_flags(&self) -> TypeFlags {
let mut flags = TypeFlags::empty();
- if self.keep_in_local_tcx() {
- flags = flags | TypeFlags::KEEP_IN_LOCAL_TCX;
- }
-
match *self {
ty::ReVar(..) => {
flags = flags | TypeFlags::HAS_FREE_REGIONS;
+ flags = flags | TypeFlags::HAS_FREE_LOCAL_REGIONS;
flags = flags | TypeFlags::HAS_RE_INFER;
+ flags = flags | TypeFlags::KEEP_IN_LOCAL_TCX;
}
ty::RePlaceholder(..) => {
flags = flags | TypeFlags::HAS_FREE_REGIONS;
+ flags = flags | TypeFlags::HAS_FREE_LOCAL_REGIONS;
flags = flags | TypeFlags::HAS_RE_PLACEHOLDER;
}
- ty::ReLateBound(..) => {
- flags = flags | TypeFlags::HAS_RE_LATE_BOUND;
- }
ty::ReEarlyBound(..) => {
flags = flags | TypeFlags::HAS_FREE_REGIONS;
- flags = flags | TypeFlags::HAS_RE_EARLY_BOUND;
+ flags = flags | TypeFlags::HAS_FREE_LOCAL_REGIONS;
+ flags = flags | TypeFlags::HAS_RE_PARAM;
}
- ty::ReEmpty(_) | ty::ReStatic | ty::ReFree { .. } | ty::ReScope { .. } => {
+ ty::ReFree { .. } | ty::ReScope { .. } => {
flags = flags | TypeFlags::HAS_FREE_REGIONS;
+ flags = flags | TypeFlags::HAS_FREE_LOCAL_REGIONS;
}
- ty::ReErased => {
- flags = flags | TypeFlags::HAS_RE_ERASED;
+ ty::ReEmpty(_) | ty::ReStatic => {
+ flags = flags | TypeFlags::HAS_FREE_REGIONS;
}
ty::ReClosureBound(..) => {
flags = flags | TypeFlags::HAS_FREE_REGIONS;
}
- }
-
- match *self {
- ty::ReStatic | ty::ReEmpty(_) | ty::ReErased | ty::ReLateBound(..) => (),
- _ => flags = flags | TypeFlags::HAS_FREE_LOCAL_NAMES,
+ ty::ReLateBound(..) => {
+ flags = flags | TypeFlags::HAS_RE_LATE_BOUND;
+ }
+ ty::ReErased => {
+ flags = flags | TypeFlags::HAS_RE_ERASED;
+ }
}
debug!("type_flags({:?}) = {:?}", self, flags);
#[inline]
pub fn try_to_bits(&self, size: ty::layout::Size) -> Option<u128> {
- self.try_to_scalar()?.to_bits(size).ok()
+ if let ConstKind::Value(val) = self { val.try_to_bits(size) } else { None }
}
}
use crate::ty::fold::TypeFoldable;
use crate::ty::{Ty, TyCtxt};
use rustc_hir as hir;
-use rustc_hir::def_id::DefId;
+use rustc_hir::def_id::{CrateNum, DefId};
+use rustc_hir::HirId;
use rustc_data_structures::fx::FxHashMap;
use rustc_data_structures::stable_hasher::{HashStable, StableHasher};
+use rustc_errors::ErrorReported;
use rustc_macros::HashStable;
+use std::collections::BTreeMap;
/// A trait's definition with type information.
#[derive(HashStable)]
/// and thus `impl`s of it are allowed to overlap.
pub is_marker: bool,
+ /// Used to determine whether the standard library is allowed to specialize
+ /// on this trait.
+ pub specialization_kind: TraitSpecializationKind,
+
/// The ICH of this trait's DefPath, cached here so it doesn't have to be
/// recomputed all the time.
pub def_path_hash: DefPathHash,
}
+/// Whether this trait is treated specially by the standard library
+/// specialization lint.
+#[derive(HashStable, PartialEq, Clone, Copy, RustcEncodable, RustcDecodable)]
+pub enum TraitSpecializationKind {
+ /// The default. Specializing on this trait is not allowed.
+ None,
+ /// Specializing on this trait is allowed because it doesn't have any
+ /// methods. For example `Sized` or `FusedIterator`.
+ /// Applies to traits with the `rustc_unsafe_specialization_marker`
+ /// attribute.
+ Marker,
+ /// Specializing on this trait is allowed because all of the impls of this
+ /// trait are "always applicable". Always applicable means that if
+ /// `X<'x>: T<'y>` for any lifetimes, then `for<'a, 'b> X<'a>: T<'b>`.
+ /// Applies to traits with the `rustc_specialization_trait` attribute.
+ AlwaysApplicable,
+}
+
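The variants documented above classify why specializing on a trait may be allowed. A hypothetical sketch (simplified names, not rustc's real API) of how a specialization lint might consult this classification:

```rust
// Stand-in for the enum above: only the Marker and AlwaysApplicable
// kinds are exempt from a specialization lint.
#[derive(Clone, Copy, PartialEq, Debug)]
enum TraitSpecializationKind {
    None,
    Marker,
    AlwaysApplicable,
}

// A trait may be specialized on only when its kind is not `None`.
fn may_specialize(kind: TraitSpecializationKind) -> bool {
    kind != TraitSpecializationKind::None
}

fn main() {
    assert!(!may_specialize(TraitSpecializationKind::None));
    assert!(may_specialize(TraitSpecializationKind::Marker));
    assert!(may_specialize(TraitSpecializationKind::AlwaysApplicable));
}
```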
#[derive(Default)]
pub struct TraitImpls {
blanket_impls: Vec<DefId>,
paren_sugar: bool,
has_auto_impl: bool,
is_marker: bool,
+ specialization_kind: TraitSpecializationKind,
def_path_hash: DefPathHash,
) -> TraitDef {
- TraitDef { def_id, unsafety, paren_sugar, has_auto_impl, is_marker, def_path_hash }
+ TraitDef {
+ def_id,
+ unsafety,
+ paren_sugar,
+ has_auto_impl,
+ is_marker,
+ specialization_kind,
+ def_path_hash,
+ }
}
pub fn ancestors(
&self,
tcx: TyCtxt<'tcx>,
of_impl: DefId,
- ) -> specialization_graph::Ancestors<'tcx> {
+ ) -> Result<specialization_graph::Ancestors<'tcx>, ErrorReported> {
specialization_graph::ancestors(tcx, self.def_id, of_impl)
}
}
}
}
+// Query provider for `all_local_trait_impls`.
+pub(super) fn all_local_trait_impls<'tcx>(
+ tcx: TyCtxt<'tcx>,
+ krate: CrateNum,
+) -> &'tcx BTreeMap<DefId, Vec<HirId>> {
+ &tcx.hir_crate(krate).trait_impls
+}
+
// Query provider for `trait_impls_of`.
pub(super) fn trait_impls_of_provider(tcx: TyCtxt<'_>, trait_id: DefId) -> &TraitImpls {
let mut impls = TraitImpls::default();
}
fn signed_max(size: Size) -> i128 {
- i128::max_value() >> (128 - size.bits())
+ i128::MAX >> (128 - size.bits())
}
fn unsigned_max(size: Size) -> u128 {
- u128::max_value() >> (128 - size.bits())
+ u128::MAX >> (128 - size.bits())
}
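The shifted-`MAX` trick used above generalizes: for a `k`-bit integer, the largest unsigned value is `u128::MAX >> (128 - k)` and the largest signed value is `i128::MAX >> (128 - k)`. A standalone sketch (taking the bit width directly rather than rustc's `Size` type):

```rust
// Largest value representable in `bits` unsigned bits.
fn unsigned_max(bits: u32) -> u128 {
    u128::MAX >> (128 - bits)
}

// Largest value representable in `bits` signed (two's complement) bits.
fn signed_max(bits: u32) -> i128 {
    i128::MAX >> (128 - bits)
}

fn main() {
    assert_eq!(unsigned_max(8), u8::MAX as u128); // 255
    assert_eq!(signed_max(8), i8::MAX as i128); // 127
    assert_eq!(unsigned_max(32), u32::MAX as u128);
    assert_eq!(signed_max(64), i64::MAX as i128);
}
```

The shift works because `MAX` is all ones (unsigned) or a zero sign bit followed by ones (signed), so a logical right shift leaves exactly the low `bits` pattern of the corresponding narrower type.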
fn int_size_and_signed<'tcx>(tcx: TyCtxt<'tcx>, ty: Ty<'tcx>) -> (Size, bool) {
let min = signed_min(size);
let max = signed_max(size);
let val = sign_extend(self.val, size) as i128;
- assert!(n < (i128::max_value() as u128));
+ assert!(n < (i128::MAX as u128));
let n = n as i128;
let oflo = val > max - n;
let val = if oflo { min + (n - (max - val) - 1) } else { val + n };
adt_did: DefId,
validate: &mut dyn FnMut(Self, DefId) -> Result<(), ErrorReported>,
) -> Option<ty::Destructor> {
- let drop_trait = if let Some(def_id) = self.lang_items().drop_trait() {
- def_id
- } else {
- return None;
- };
-
+ let drop_trait = self.lang_items().drop_trait()?;
self.ensure().coherent_trait(drop_trait);
let mut dtor_did = None;
let ty = self.type_of(adt_did);
self.for_each_relevant_impl(drop_trait, ty, |impl_did| {
- if let Some(item) = self.associated_items(impl_did).in_definition_order().nth(0) {
+ if let Some(item) = self.associated_items(impl_did).in_definition_order().next() {
if validate(self, impl_did).is_ok() {
dtor_did = Some(item.def_id);
}
-The `syntax` crate contains those things concerned purely with syntax
+The `rustc_ast` crate contains those things concerned purely with syntax
– that is, the AST ("abstract syntax tree"), parser, pretty-printer,
lexer, macro expander, and utilities for traversing ASTs.
For more information about how these things work in rustc, see the
-rustc guide:
+rustc dev guide:
-- [Parsing](https://rust-lang.github.io/rustc-guide/the-parser.html)
-- [Macro Expansion](https://rust-lang.github.io/rustc-guide/macro-expansion.html)
+- [Parsing](https://rustc-dev-guide.rust-lang.org/the-parser.html)
+- [Macro Expansion](https://rustc-dev-guide.rust-lang.org/macro-expansion.html)
//! - [`Generics`], [`GenericParam`], [`WhereClause`]: Metadata associated with generic parameters.
//! - [`EnumDef`] and [`Variant`]: Enum declaration.
//! - [`Lit`] and [`LitKind`]: Literal expressions.
-//! - [`MacroDef`], [`MacStmtStyle`], [`Mac`], [`MacDelimeter`]: Macro definition and invocation.
+//! - [`MacroDef`], [`MacStmtStyle`], [`MacCall`], [`MacDelimiter`]: Macro definition and invocation.
//! - [`Attribute`]: Metadata associated with item.
//! - [`UnOp`], [`UnOpKind`], [`BinOp`], [`BinOpKind`]: Unary and binary operators.
use rustc_span::symbol::{kw, sym, Symbol};
use rustc_span::{Span, DUMMY_SP};
+use std::convert::TryFrom;
use std::fmt;
use std::iter;
TyKind::Path(None, Path::from_ident(*ident))
}
PatKind::Path(qself, path) => TyKind::Path(qself.clone(), path.clone()),
- PatKind::Mac(mac) => TyKind::Mac(mac.clone()),
+ PatKind::MacCall(mac) => TyKind::MacCall(mac.clone()),
// `&mut? P` can be reinterpreted as `&mut? T` where `T` is `P` reparsed as a type.
PatKind::Ref(pat, mutbl) => {
pat.to_ty().map(|ty| TyKind::Rptr(None, MutTy { ty, mutbl: *mutbl }))?
| PatKind::Range(..)
| PatKind::Ident(..)
| PatKind::Path(..)
- | PatKind::Mac(_) => {}
+ | PatKind::MacCall(_) => {}
}
}
Paren(P<Pat>),
/// A macro pattern; pre-expansion.
- Mac(Mac),
+ MacCall(MacCall),
}
#[derive(
pub fn add_trailing_semicolon(mut self) -> Self {
self.kind = match self.kind {
StmtKind::Expr(expr) => StmtKind::Semi(expr),
- StmtKind::Mac(mac) => {
- StmtKind::Mac(mac.map(|(mac, _style, attrs)| (mac, MacStmtStyle::Semicolon, attrs)))
- }
+ StmtKind::MacCall(mac) => StmtKind::MacCall(
+ mac.map(|(mac, _style, attrs)| (mac, MacStmtStyle::Semicolon, attrs)),
+ ),
kind => kind,
};
self
Expr(P<Expr>),
/// Expr with a trailing semi-colon.
Semi(P<Expr>),
+ /// Just a trailing semi-colon.
+ Empty,
/// Macro.
- Mac(P<(Mac, MacStmtStyle, AttrVec)>),
+ MacCall(P<(MacCall, MacStmtStyle, AttrVec)>),
}
#[derive(Clone, Copy, PartialEq, RustcEncodable, RustcDecodable, Debug)]
let kind = match &self.kind {
// Trivial conversions.
ExprKind::Path(qself, path) => TyKind::Path(qself.clone(), path.clone()),
- ExprKind::Mac(mac) => TyKind::Mac(mac.clone()),
+ ExprKind::MacCall(mac) => TyKind::MacCall(mac.clone()),
ExprKind::Paren(expr) => expr.to_ty().map(TyKind::Paren)?,
// If binary operator is `Add` and both `lhs` and `rhs` are trait bounds,
// then type of result is trait object.
- // Othewise we don't assume the result type.
+ // Otherwise we don't assume the result type.
ExprKind::Binary(binop, lhs, rhs) if binop.node == BinOpKind::Add => {
if let (Some(lhs), Some(rhs)) = (lhs.to_bound(), rhs.to_bound()) {
TyKind::TraitObject(vec![lhs, rhs], TraitObjectSyntax::None)
ExprKind::Continue(..) => ExprPrecedence::Continue,
ExprKind::Ret(..) => ExprPrecedence::Ret,
ExprKind::InlineAsm(..) => ExprPrecedence::InlineAsm,
- ExprKind::Mac(..) => ExprPrecedence::Mac,
+ ExprKind::MacCall(..) => ExprPrecedence::Mac,
ExprKind::Struct(..) => ExprPrecedence::Struct,
ExprKind::Repeat(..) => ExprPrecedence::Repeat,
ExprKind::Paren(..) => ExprPrecedence::Paren,
InlineAsm(P<InlineAsm>),
/// A macro invocation; pre-expansion.
- Mac(Mac),
+ MacCall(MacCall),
/// A struct literal expression.
///
/// Represents a macro invocation. The `path` indicates which macro
/// is being invoked, and the `args` are arguments passed to it.
#[derive(Clone, RustcEncodable, RustcDecodable, Debug)]
-pub struct Mac {
+pub struct MacCall {
pub path: Path,
pub args: P<MacArgs>,
pub prior_type_ascription: Option<(Span, bool)>,
}
-impl Mac {
+impl MacCall {
pub fn span(&self) -> Span {
self.path.span.to(self.args.span().unwrap_or(self.path.span))
}
}
/// Represents a macro definition.
-#[derive(Clone, RustcEncodable, RustcDecodable, Debug)]
+#[derive(Clone, RustcEncodable, RustcDecodable, Debug, HashStable_Generic)]
pub struct MacroDef {
pub body: P<MacArgs>,
/// `true` if macro was defined with `macro_rules`.
- pub legacy: bool,
+ pub macro_rules: bool,
}
#[derive(Clone, RustcEncodable, RustcDecodable, Debug, Copy, Hash, Eq, PartialEq)]
/// Inferred type of a `self` or `&self` argument in a method.
ImplicitSelf,
/// A macro in the type position.
- Mac(Mac),
+ MacCall(MacCall),
/// Placeholder for a kind that has failed to be defined.
Err,
/// Placeholder for a `va_list`.
if let Async::Yes { .. } = self { true } else { false }
}
- /// In ths case this is an `async` return, the `NodeId` for the generated `impl Trait` item.
+ /// If this is an `async` return, the `NodeId` for the generated `impl Trait` item.
pub fn opt_return_id(self) -> Option<NodeId> {
match self {
Async::Yes { return_impl_trait_id, .. } => Some(return_impl_trait_id),
/// `impl Trait for Type`
Positive,
/// `impl !Trait for Type`
- Negative,
+ Negative(Span),
}
impl fmt::Debug for ImplPolarity {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match *self {
ImplPolarity::Positive => "positive".fmt(f),
- ImplPolarity::Negative => "negative".fmt(f),
+ ImplPolarity::Negative(_) => "negative".fmt(f),
}
}
}
/// Module declaration.
///
/// E.g., `mod foo;` or `mod foo { .. }`.
-#[derive(Clone, RustcEncodable, RustcDecodable, Debug)]
+#[derive(Clone, RustcEncodable, RustcDecodable, Debug, Default)]
pub struct Mod {
/// A span from the first token past `{` to the last token before `}`.
/// For `mod foo;`, the inner span ranges from the first token
}
}
-impl<K: IntoItemKind> Item<K> {
+impl<K: Into<ItemKind>> Item<K> {
pub fn into_item(self) -> Item {
let Item { attrs, id, span, vis, ident, kind, tokens } = self;
- Item { attrs, id, span, vis, ident, kind: kind.into_item_kind(), tokens }
+ Item { attrs, id, span, vis, ident, kind: kind.into(), tokens }
}
}
/// A macro invocation.
///
/// E.g., `foo!(..)`.
- Mac(Mac),
+ MacCall(MacCall),
/// A macro definition.
MacroDef(MacroDef),
match self {
Use(..) | Static(..) | Const(..) | Fn(..) | Mod(..) | GlobalAsm(..) | TyAlias(..)
| Struct(..) | Union(..) | Trait(..) | TraitAlias(..) | MacroDef(..) => "a",
- ExternCrate(..) | ForeignMod(..) | Mac(..) | Enum(..) | Impl { .. } => "an",
+ ExternCrate(..) | ForeignMod(..) | MacCall(..) | Enum(..) | Impl { .. } => "an",
}
}
ItemKind::Union(..) => "union",
ItemKind::Trait(..) => "trait",
ItemKind::TraitAlias(..) => "trait alias",
- ItemKind::Mac(..) => "item macro invocation",
+ ItemKind::MacCall(..) => "item macro invocation",
ItemKind::MacroDef(..) => "macro definition",
ItemKind::Impl { .. } => "implementation",
}
}
}
-pub trait IntoItemKind {
- fn into_item_kind(self) -> ItemKind;
-}
-
-// FIXME(Centril): These definitions should be unmerged;
-// see https://github.com/rust-lang/rust/pull/69194#discussion_r379899975
-pub type ForeignItem = Item<AssocItemKind>;
-pub type ForeignItemKind = AssocItemKind;
-
/// Represents associated items.
/// These include items in `impl` and `trait` definitions.
pub type AssocItem = Item<AssocItemKind>;
-/// Represents non-free item kinds.
+/// Represents associated item kinds.
///
/// The term "provided" in the variants below refers to the item having a default
/// definition / body. Meanwhile, a "required" item lacks a definition / body.
/// means "provided" and conversely `None` means "required".
#[derive(Clone, RustcEncodable, RustcDecodable, Debug)]
pub enum AssocItemKind {
- /// A constant, `const $ident: $ty $def?;` where `def ::= "=" $expr? ;`.
+ /// An associated constant, `const $ident: $ty $def?;` where `def ::= "=" $expr? ;`.
/// If `def` is parsed, then the constant is provided, and otherwise required.
Const(Defaultness, P<Ty>, Option<P<Expr>>),
- /// A static item (`static FOO: u8`).
- Static(P<Ty>, Mutability, Option<P<Expr>>),
- /// A function.
+ /// An associated function.
Fn(Defaultness, FnSig, Generics, Option<P<Block>>),
- /// A type.
+ /// An associated type.
TyAlias(Defaultness, Generics, GenericBounds, Option<P<Ty>>),
- /// A macro expanding to items.
- Macro(Mac),
+ /// A macro expanding to associated items.
+ MacCall(MacCall),
}
impl AssocItemKind {
pub fn defaultness(&self) -> Defaultness {
match *self {
Self::Const(def, ..) | Self::Fn(def, ..) | Self::TyAlias(def, ..) => def,
- Self::Macro(..) | Self::Static(..) => Defaultness::Final,
+ Self::MacCall(..) => Defaultness::Final,
}
}
}
-impl IntoItemKind for AssocItemKind {
- fn into_item_kind(self) -> ItemKind {
- match self {
+impl From<AssocItemKind> for ItemKind {
+ fn from(assoc_item_kind: AssocItemKind) -> ItemKind {
+ match assoc_item_kind {
AssocItemKind::Const(a, b, c) => ItemKind::Const(a, b, c),
- AssocItemKind::Static(a, b, c) => ItemKind::Static(a, b, c),
AssocItemKind::Fn(a, b, c, d) => ItemKind::Fn(a, b, c, d),
AssocItemKind::TyAlias(a, b, c, d) => ItemKind::TyAlias(a, b, c, d),
- AssocItemKind::Macro(a) => ItemKind::Mac(a),
+ AssocItemKind::MacCall(a) => ItemKind::MacCall(a),
}
}
}
+
+impl TryFrom<ItemKind> for AssocItemKind {
+ type Error = ItemKind;
+
+ fn try_from(item_kind: ItemKind) -> Result<AssocItemKind, ItemKind> {
+ Ok(match item_kind {
+ ItemKind::Const(a, b, c) => AssocItemKind::Const(a, b, c),
+ ItemKind::Fn(a, b, c, d) => AssocItemKind::Fn(a, b, c, d),
+ ItemKind::TyAlias(a, b, c, d) => AssocItemKind::TyAlias(a, b, c, d),
+ ItemKind::MacCall(a) => AssocItemKind::MacCall(a),
+ _ => return Err(item_kind),
+ })
+ }
+}
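The paired `From`/`TryFrom` impls above follow a common pattern: every variant of the narrower enum maps into the wider one infallibly, while the reverse conversion is fallible and hands back the unconvertible value as the error. A standalone sketch with simplified stand-in enums (not rustc's real variants):

```rust
use std::convert::TryFrom;

// Wider enum: includes a variant (`Static`) with no associated-item form.
#[derive(Clone, Debug, PartialEq)]
enum ItemKind {
    Const(String),
    Fn(String),
    Static(String),
}

// Narrower enum: only the kinds that can appear as associated items.
#[derive(Clone, Debug, PartialEq)]
enum AssocItemKind {
    Const(String),
    Fn(String),
}

impl From<AssocItemKind> for ItemKind {
    fn from(k: AssocItemKind) -> ItemKind {
        match k {
            AssocItemKind::Const(a) => ItemKind::Const(a),
            AssocItemKind::Fn(a) => ItemKind::Fn(a),
        }
    }
}

impl TryFrom<ItemKind> for AssocItemKind {
    // Returning the original value lets the caller recover it on failure.
    type Error = ItemKind;

    fn try_from(k: ItemKind) -> Result<AssocItemKind, ItemKind> {
        Ok(match k {
            ItemKind::Const(a) => AssocItemKind::Const(a),
            ItemKind::Fn(a) => AssocItemKind::Fn(a),
            other => return Err(other), // e.g. `Static` is not an assoc item
        })
    }
}

fn main() {
    let item: ItemKind = AssocItemKind::Fn("f".into()).into();
    assert_eq!(AssocItemKind::try_from(item), Ok(AssocItemKind::Fn("f".into())));
    assert!(AssocItemKind::try_from(ItemKind::Static("s".into())).is_err());
}
```

Using `type Error = ItemKind` instead of a unit error means a failed downcast loses nothing: the caller still owns the original value.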
+
+/// An item within an `extern` block.
+#[derive(Clone, RustcEncodable, RustcDecodable, Debug)]
+pub enum ForeignItemKind {
+ /// A foreign static item (`static FOO: u8`).
+ Static(P<Ty>, Mutability, Option<P<Expr>>),
+ /// A foreign function.
+ Fn(Defaultness, FnSig, Generics, Option<P<Block>>),
+ /// A foreign type.
+ TyAlias(Defaultness, Generics, GenericBounds, Option<P<Ty>>),
+ /// A macro expanding to foreign items.
+ MacCall(MacCall),
+}
+
+impl From<ForeignItemKind> for ItemKind {
+ fn from(foreign_item_kind: ForeignItemKind) -> ItemKind {
+ match foreign_item_kind {
+ ForeignItemKind::Static(a, b, c) => ItemKind::Static(a, b, c),
+ ForeignItemKind::Fn(a, b, c, d) => ItemKind::Fn(a, b, c, d),
+ ForeignItemKind::TyAlias(a, b, c, d) => ItemKind::TyAlias(a, b, c, d),
+ ForeignItemKind::MacCall(a) => ItemKind::MacCall(a),
+ }
+ }
+}
+
+impl TryFrom<ItemKind> for ForeignItemKind {
+ type Error = ItemKind;
+
+ fn try_from(item_kind: ItemKind) -> Result<ForeignItemKind, ItemKind> {
+ Ok(match item_kind {
+ ItemKind::Static(a, b, c) => ForeignItemKind::Static(a, b, c),
+ ItemKind::Fn(a, b, c, d) => ForeignItemKind::Fn(a, b, c, d),
+ ItemKind::TyAlias(a, b, c, d) => ForeignItemKind::TyAlias(a, b, c, d),
+ ItemKind::MacCall(a) => ForeignItemKind::MacCall(a),
+ _ => return Err(item_kind),
+ })
+ }
+}
+
+pub type ForeignItem = Item<ForeignItemKind>;
}
impl AttrItem {
+ pub fn span(&self) -> Span {
+ self.args.span().map_or(self.path.span, |args_span| self.path.span.to(args_span))
+ }
+
pub fn meta(&self, span: Span) -> Option<MetaItem> {
Some(MetaItem {
path: self.path.clone(),
I: Iterator<Item = TokenTree>,
{
// FIXME: Share code with `parse_path`.
- let path = match tokens.next() {
+ let path = match tokens.next().map(TokenTree::uninterpolate) {
Some(TokenTree::Token(Token { kind: kind @ token::Ident(..), span }))
| Some(TokenTree::Token(Token { kind: kind @ token::ModSep, span })) => 'arm: {
let mut segments = if let token::Ident(name, _) = kind {
};
loop {
if let Some(TokenTree::Token(Token { kind: token::Ident(name, _), span })) =
- tokens.next()
+ tokens.next().map(TokenTree::uninterpolate)
{
segments.push(PathSegment::from_ident(Ident::new(name, span)));
} else {
Path { span, segments }
}
Some(TokenTree::Token(Token { kind: token::Interpolated(nt), .. })) => match *nt {
- token::Nonterminal::NtIdent(ident, _) => Path::from_ident(ident),
token::Nonterminal::NtMeta(ref item) => return item.meta(item.path.span),
token::Nonterminal::NtPath(ref path) => path.clone(),
_ => return None,
fn attrs(&self) -> &[Attribute] {
match *self {
StmtKind::Local(ref local) => local.attrs(),
- StmtKind::Item(..) => &[],
StmtKind::Expr(ref expr) | StmtKind::Semi(ref expr) => expr.attrs(),
- StmtKind::Mac(ref mac) => {
+ StmtKind::Empty | StmtKind::Item(..) => &[],
+ StmtKind::MacCall(ref mac) => {
let (_, _, ref attrs) = **mac;
attrs.attrs()
}
fn visit_attrs(&mut self, f: impl FnOnce(&mut Vec<Attribute>)) {
match self {
StmtKind::Local(local) => local.visit_attrs(f),
- StmtKind::Item(..) => {}
- StmtKind::Expr(expr) => expr.visit_attrs(f),
- StmtKind::Semi(expr) => expr.visit_attrs(f),
- StmtKind::Mac(mac) => {
+ StmtKind::Expr(expr) | StmtKind::Semi(expr) => expr.visit_attrs(f),
+ StmtKind::Empty | StmtKind::Item(..) => {}
+ StmtKind::MacCall(mac) => {
let (_mac, _style, attrs) = mac.deref_mut();
attrs.visit_attrs(f);
}
}
derive_has_attrs! {
- Item, Expr, Local, ast::ForeignItem, ast::StructField, ast::Arm,
+ Item, Expr, Local, ast::AssocItem, ast::ForeignItem, ast::StructField, ast::Arm,
ast::Field, ast::FieldPat, ast::Variant, ast::Param, GenericParam
}
#![doc(html_root_url = "https://doc.rust-lang.org/nightly/", test(attr(deny(warnings))))]
#![feature(bool_to_option)]
#![feature(box_syntax)]
+#![feature(const_if_match)]
#![feature(const_fn)] // For the `transmute` in `P::new`
+#![feature(const_panic)]
#![feature(const_transmute)]
#![feature(crate_visibility_modifier)]
#![feature(label_break_value)]
noop_visit_local(l, self);
}
- fn visit_mac(&mut self, _mac: &mut Mac) {
+ fn visit_mac(&mut self, _mac: &mut MacCall) {
panic!("visit_mac disabled by default");
// N.B., see note about macros above. If you really want a visitor that
// works on macros, use this definition in your trait impl:
vis.visit_id(id);
visit_vec(bounds, |bound| vis.visit_param_bound(bound));
}
- TyKind::Mac(mac) => vis.visit_mac(mac),
+ TyKind::MacCall(mac) => vis.visit_mac(mac),
}
vis.visit_span(span);
}
vis.visit_span(span);
}
-pub fn noop_visit_mac<T: MutVisitor>(mac: &mut Mac, vis: &mut T) {
- let Mac { path, args, prior_type_ascription: _ } = mac;
+pub fn noop_visit_mac<T: MutVisitor>(mac: &mut MacCall, vis: &mut T) {
+ let MacCall { path, args, prior_type_ascription: _ } = mac;
vis.visit_path(path);
visit_mac_args(args, vis);
}
pub fn noop_visit_macro_def<T: MutVisitor>(macro_def: &mut MacroDef, vis: &mut T) {
- let MacroDef { body, legacy: _ } = macro_def;
+ let MacroDef { body, macro_rules: _ } = macro_def;
visit_mac_args(body, vis);
}
vis.visit_generics(generics);
visit_bounds(bounds, vis);
}
- ItemKind::Mac(m) => vis.visit_mac(m),
+ ItemKind::MacCall(m) => vis.visit_mac(m),
ItemKind::MacroDef(def) => vis.visit_macro_def(def),
}
}
visitor: &mut T,
) -> SmallVec<[P<AssocItem>; 1]> {
let Item { id, ident, vis, attrs, kind, span, tokens: _ } = item.deref_mut();
- walk_nested_item(visitor, id, span, ident, vis, attrs, kind);
- smallvec![item]
-}
-
-pub fn walk_nested_item(
- visitor: &mut impl MutVisitor,
- id: &mut NodeId,
- span: &mut Span,
- ident: &mut Ident,
- vis: &mut Visibility,
- attrs: &mut Vec<Attribute>,
- kind: &mut AssocItemKind,
-) {
visitor.visit_id(id);
visitor.visit_ident(ident);
visitor.visit_vis(vis);
visit_attrs(attrs, visitor);
match kind {
- AssocItemKind::Const(_, ty, expr) | AssocItemKind::Static(ty, _, expr) => {
+ AssocItemKind::Const(_, ty, expr) => {
visitor.visit_ty(ty);
visit_opt(expr, |expr| visitor.visit_expr(expr));
}
visit_bounds(bounds, visitor);
visit_opt(ty, |ty| visitor.visit_ty(ty));
}
- AssocItemKind::Macro(mac) => visitor.visit_mac(mac),
+ AssocItemKind::MacCall(mac) => visitor.visit_mac(mac),
}
visitor.visit_span(span);
+ smallvec![item]
}
pub fn noop_visit_fn_header<T: MutVisitor>(header: &mut FnHeader, vis: &mut T) {
visitor: &mut T,
) -> SmallVec<[P<ForeignItem>; 1]> {
let Item { ident, attrs, id, kind, vis, span, tokens: _ } = item.deref_mut();
- walk_nested_item(visitor, id, span, ident, vis, attrs, kind);
+ visitor.visit_id(id);
+ visitor.visit_ident(ident);
+ visitor.visit_vis(vis);
+ visit_attrs(attrs, visitor);
+ match kind {
+ ForeignItemKind::Static(ty, _, expr) => {
+ visitor.visit_ty(ty);
+ visit_opt(expr, |expr| visitor.visit_expr(expr));
+ }
+ ForeignItemKind::Fn(_, sig, generics, body) => {
+ visitor.visit_generics(generics);
+ visit_fn_sig(sig, visitor);
+ visit_opt(body, |body| visitor.visit_block(body));
+ }
+ ForeignItemKind::TyAlias(_, generics, bounds, ty) => {
+ visitor.visit_generics(generics);
+ visit_bounds(bounds, visitor);
+ visit_opt(ty, |ty| visitor.visit_ty(ty));
+ }
+ ForeignItemKind::MacCall(mac) => visitor.visit_mac(mac),
+ }
+ visitor.visit_span(span);
smallvec![item]
}
visit_vec(elems, |elem| vis.visit_pat(elem))
}
PatKind::Paren(inner) => vis.visit_pat(inner),
- PatKind::Mac(mac) => vis.visit_mac(mac),
+ PatKind::MacCall(mac) => vis.visit_mac(mac),
}
vis.visit_span(span);
}
}
visit_vec(inputs, |(_c, expr)| vis.visit_expr(expr));
}
- ExprKind::Mac(mac) => vis.visit_mac(mac),
+ ExprKind::MacCall(mac) => vis.visit_mac(mac),
ExprKind::Struct(path, fields, expr) => {
vis.visit_path(path);
fields.flat_map_in_place(|field| vis.flat_map_field(field));
StmtKind::Item(item) => vis.flat_map_item(item).into_iter().map(StmtKind::Item).collect(),
StmtKind::Expr(expr) => vis.filter_map_expr(expr).into_iter().map(StmtKind::Expr).collect(),
StmtKind::Semi(expr) => vis.filter_map_expr(expr).into_iter().map(StmtKind::Semi).collect(),
- StmtKind::Mac(mut mac) => {
+ StmtKind::Empty => smallvec![StmtKind::Empty],
+ StmtKind::MacCall(mut mac) => {
let (mac_, _semi, attrs) = mac.deref_mut();
vis.visit_mac(mac_);
visit_thin_attrs(attrs, vis);
- smallvec![StmtKind::Mac(mac)]
+ smallvec![StmtKind::MacCall(mac)]
}
}
}
rustc_data_structures::define_id_collections!(NodeMap, NodeSet, NodeId);
/// `NodeId` used to represent the root of the crate.
-pub const CRATE_NODE_ID: NodeId = NodeId::from_u32_const(0);
+pub const CRATE_NODE_ID: NodeId = NodeId::from_u32(0);
/// When parsing and doing expansions, we initially give all AST nodes this AST
/// node value. Then later, in the renumber pass, we renumber them to have
use rustc_span::symbol::kw;
use rustc_span::symbol::Symbol;
use rustc_span::{self, Span, DUMMY_SP};
-use std::fmt;
-use std::mem;
+use std::borrow::Cow;
+use std::{fmt, mem};
#[derive(Clone, PartialEq, RustcEncodable, RustcDecodable, Hash, Debug, Copy)]
#[derive(HashStable_Generic)]
/* Literals */
Literal(Lit),
- /* Name components */
+ /// Identifier token.
+ /// Do not forget about `NtIdent` when you want to match on identifiers.
+ /// It's recommended to use `Token::(ident,uninterpolate,uninterpolated_span)` to
+ /// treat regular and interpolated identifiers in the same way.
Ident(ast::Name, /* is_raw */ bool),
+ /// Lifetime identifier token.
+ /// Do not forget about `NtLifetime` when you want to match on lifetime identifiers.
+ /// It's recommended to use `Token::(lifetime,uninterpolate,uninterpolated_span)` to
+ /// treat regular and interpolated lifetime identifiers in the same way.
Lifetime(ast::Name),
Interpolated(Lrc<Nonterminal>),
mem::replace(self, Token::dummy())
}
+ /// For interpolated tokens, returns a span of the fragment to which the interpolated
+ /// token refers. For all other tokens this is just a regular span.
+ /// It is particularly important to use this for identifiers and lifetimes
+ /// for which spans affect name resolution and edition checks.
+ /// Note that keywords are also identifiers, so they should use this
+ /// if they keep spans or perform edition checks.
+ pub fn uninterpolated_span(&self) -> Span {
+ match &self.kind {
+ Interpolated(nt) => nt.span(),
+ _ => self.span,
+ }
+ }
+
pub fn is_op(&self) -> bool {
match self.kind {
OpenDelim(..) | CloseDelim(..) | Literal(..) | DocComment(..) | Ident(..)
/// Returns `true` if the token can appear at the start of an expression.
pub fn can_begin_expr(&self) -> bool {
- match self.kind {
+ match self.uninterpolate().kind {
Ident(name, is_raw) =>
ident_can_begin_expr(name, self.span, is_raw), // value name or keyword
OpenDelim(..) | // tuple, array or block
Lifetime(..) | // labeled loop
Pound => true, // expression attributes
Interpolated(ref nt) => match **nt {
- NtIdent(ident, is_raw) => ident_can_begin_expr(ident.name, ident.span, is_raw),
NtLiteral(..) |
NtExpr(..) |
NtBlock(..) |
- NtPath(..) |
- NtLifetime(..) => true,
+ NtPath(..) => true,
_ => false,
},
_ => false,
/// Returns `true` if the token can appear at the start of a type.
pub fn can_begin_type(&self) -> bool {
- match self.kind {
+ match self.uninterpolate().kind {
Ident(name, is_raw) =>
ident_can_begin_type(name, self.span, is_raw), // type name or keyword
OpenDelim(Paren) | // tuple
Lt | BinOp(Shl) | // associated path
ModSep => true, // global path
Interpolated(ref nt) => match **nt {
- NtIdent(ident, is_raw) => ident_can_begin_type(ident.name, ident.span, is_raw),
- NtTy(..) | NtPath(..) | NtLifetime(..) => true,
+ NtTy(..) | NtPath(..) => true,
_ => false,
},
_ => false,
///
/// Keep this in sync with `Lit::from_token`.
pub fn can_begin_literal_or_bool(&self) -> bool {
- match self.kind {
+ match self.uninterpolate().kind {
Literal(..) | BinOp(Minus) => true,
Ident(name, false) if name.is_bool_lit() => true,
Interpolated(ref nt) => match &**nt {
- NtIdent(ident, false) if ident.name.is_bool_lit() => true,
NtExpr(e) | NtLiteral(e) => matches!(e.kind, ast::ExprKind::Lit(_)),
_ => false,
},
}
}
+ // A convenience function for matching on identifiers during parsing.
+ // Turns interpolated identifier (`$i: ident`) or lifetime (`$l: lifetime`) token
+ // into the regular identifier or lifetime token it refers to,
+ // otherwise returns the original token.
+ pub fn uninterpolate(&self) -> Cow<'_, Token> {
+ match &self.kind {
+ Interpolated(nt) => match **nt {
+ NtIdent(ident, is_raw) => {
+ Cow::Owned(Token::new(Ident(ident.name, is_raw), ident.span))
+ }
+ NtLifetime(ident) => Cow::Owned(Token::new(Lifetime(ident.name), ident.span)),
+ _ => Cow::Borrowed(self),
+ },
+ _ => Cow::Borrowed(self),
+ }
+ }
+
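The `uninterpolate` helper above uses `Cow` so that the common case (the token is already a plain identifier or lifetime) borrows, and only interpolated tokens allocate a normalized copy. The same pattern can be shown outside the compiler with a minimal, self-contained sketch; `Tok` and `uninterpolate` here are illustrative stand-ins, not the compiler's types:

```rust
use std::borrow::Cow;

// Illustrative stand-in for the compiler's token type.
#[derive(Clone, Debug, PartialEq)]
enum Tok {
    Ident(String),
    // An "interpolated" token wrapping an identifier fragment,
    // analogous to `Interpolated(NtIdent(..))`.
    InterpolatedIdent(String),
}

// Borrow when the token is already plain; build an owned, unwrapped
// token only when it is interpolated. No allocation in the common case.
fn uninterpolate(tok: &Tok) -> Cow<'_, Tok> {
    match tok {
        Tok::InterpolatedIdent(name) => Cow::Owned(Tok::Ident(name.clone())),
        _ => Cow::Borrowed(tok),
    }
}

fn main() {
    let plain = Tok::Ident("x".to_string());
    let interp = Tok::InterpolatedIdent("x".to_string());
    // Plain tokens come back borrowed; interpolated ones are unwrapped.
    assert!(matches!(uninterpolate(&plain), Cow::Borrowed(_)));
    assert_eq!(*uninterpolate(&interp), plain);
}
```

Callers such as `ident()` and `can_begin_expr()` can then match on one normalized shape instead of duplicating an `Interpolated(..)` arm, which is exactly the simplification the diff applies.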
/// Returns an identifier if this token is an identifier.
pub fn ident(&self) -> Option<(ast::Ident, /* is_raw */ bool)> {
- match self.kind {
- Ident(name, is_raw) => Some((ast::Ident::new(name, self.span), is_raw)),
- Interpolated(ref nt) => match **nt {
- NtIdent(ident, is_raw) => Some((ident, is_raw)),
- _ => None,
- },
+ let token = self.uninterpolate();
+ match token.kind {
+ Ident(name, is_raw) => Some((ast::Ident::new(name, token.span), is_raw)),
_ => None,
}
}
/// Returns a lifetime identifier if this token is a lifetime.
pub fn lifetime(&self) -> Option<ast::Ident> {
- match self.kind {
- Lifetime(name) => Some(ast::Ident::new(name, self.span)),
- Interpolated(ref nt) => match **nt {
- NtLifetime(ident) => Some(ident),
- _ => None,
- },
+ let token = self.uninterpolate();
+ match token.kind {
+ Lifetime(name) => Some(ast::Ident::new(name, token.span)),
_ => None,
}
}
false
}
+ // Is the token an interpolated block (`$b:block`)?
+ pub fn is_whole_block(&self) -> bool {
+ if let Interpolated(ref nt) = self.kind {
+ if let NtBlock(..) = **nt {
+ return true;
+ }
+ }
+ false
+ }
+
/// Returns `true` if the token is either the `mut` or `const` keyword.
pub fn is_mutability(&self) -> bool {
self.is_keyword(kw::Mut) || self.is_keyword(kw::Const)
#[cfg(target_arch = "x86_64")]
rustc_data_structures::static_assert_size!(Nonterminal, 40);
+impl Nonterminal {
+ fn span(&self) -> Span {
+ match self {
+ NtItem(item) => item.span,
+ NtBlock(block) => block.span,
+ NtStmt(stmt) => stmt.span,
+ NtPat(pat) => pat.span,
+ NtExpr(expr) | NtLiteral(expr) => expr.span,
+ NtTy(ty) => ty.span,
+ NtIdent(ident, _) | NtLifetime(ident) => ident.span,
+ NtMeta(attr_item) => attr_item.span(),
+ NtPath(path) => path.span,
+ NtVis(vis) => vis.span,
+ NtTT(tt) => tt.span(),
+ }
+ }
+}
+
impl PartialEq for Nonterminal {
fn eq(&self, rhs: &Self) -> bool {
match (self, rhs) {
pub fn close_tt(span: DelimSpan, delim: DelimToken) -> TokenTree {
TokenTree::token(token::CloseDelim(delim), span.close)
}
+
+ pub fn uninterpolate(self) -> TokenTree {
+ match self {
+ TokenTree::Token(token) => TokenTree::Token(token.uninterpolate().into_owned()),
+ tt => tt,
+ }
+ }
}
impl<CTX> HashStable<CTX> for TokenStream
///
/// Keep this in sync with `Token::can_begin_literal_or_bool`.
pub fn from_token(token: &Token) -> Result<Lit, LitError> {
- let lit = match token.kind {
+ let lit = match token.uninterpolate().kind {
token::Ident(name, false) if name.is_bool_lit() => {
token::Lit::new(token::Bool, name, None)
}
token::Literal(lit) => lit,
token::Interpolated(ref nt) => {
- match &**nt {
- token::NtIdent(ident, false) if ident.name.is_bool_lit() => {
- let lit = token::Lit::new(token::Bool, ident.name, None);
- return Lit::from_lit_token(lit, ident.span);
+ if let token::NtExpr(expr) | token::NtLiteral(expr) = &**nt {
+ if let ast::ExprKind::Lit(lit) = &expr.kind {
+ return Ok(lit.clone());
}
- token::NtExpr(expr) | token::NtLiteral(expr) => {
- if let ast::ExprKind::Lit(lit) = &expr.kind {
- return Ok(lit.clone());
- }
- }
- _ => {}
}
return Err(LitError::NotLiteral);
}
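The "Keep this in sync with `Token::can_begin_literal_or_bool`" comment describes a common parser invariant: a cheap lookahead predicate should accept (roughly) the tokens the fallible conversion can actually handle. A toy illustration of that pairing, with made-up token names rather than the compiler's:

```rust
// Illustrative tokens; not the compiler's `TokenKind`.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Tok {
    Int(i64),
    True,
    False,
    Plus,
    Ident,
}

// Lookahead predicate: can this token start a literal or bool?
fn can_begin_literal_or_bool(t: Tok) -> bool {
    matches!(t, Tok::Int(_) | Tok::True | Tok::False)
}

// The fallible conversion the predicate guards.
fn from_token(t: Tok) -> Result<String, ()> {
    match t {
        Tok::Int(n) => Ok(n.to_string()),
        Tok::True => Ok("true".into()),
        Tok::False => Ok("false".into()),
        _ => Err(()),
    }
}

fn main() {
    // In this toy version the two agree exactly; the real pair is only
    // approximately in sync (e.g. a leading `-` is accepted by the
    // predicate but handled elsewhere).
    for t in [Tok::Int(7), Tok::True, Tok::False, Tok::Plus, Tok::Ident] {
        assert_eq!(can_begin_literal_or_bool(t), from_token(t).is_ok());
    }
}
```

When the two drift apart, the parser either rejects input it advertised it could parse or commits to a parse it cannot finish, which is why both functions carry the cross-reference comment.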
fn visit_lifetime(&mut self, lifetime: &'ast Lifetime) {
walk_lifetime(self, lifetime)
}
- fn visit_mac(&mut self, _mac: &'ast Mac) {
+ fn visit_mac(&mut self, _mac: &'ast MacCall) {
panic!("visit_mac disabled by default");
// N.B., see note about macros above.
// if you really want a visitor that
visitor.visit_generics(generics);
walk_list!(visitor, visit_param_bound, bounds);
}
- ItemKind::Mac(ref mac) => visitor.visit_mac(mac),
+ ItemKind::MacCall(ref mac) => visitor.visit_mac(mac),
ItemKind::MacroDef(ref ts) => visitor.visit_mac_def(ts, item.id),
}
walk_list!(visitor, visit_attribute, &item.attrs);
}
TyKind::Typeof(ref expression) => visitor.visit_anon_const(expression),
TyKind::Infer | TyKind::ImplicitSelf | TyKind::Err => {}
- TyKind::Mac(ref mac) => visitor.visit_mac(mac),
+ TyKind::MacCall(ref mac) => visitor.visit_mac(mac),
TyKind::Never | TyKind::CVarArgs => {}
}
}
PatKind::Tuple(ref elems) | PatKind::Slice(ref elems) | PatKind::Or(ref elems) => {
walk_list!(visitor, visit_pat, elems);
}
- PatKind::Mac(ref mac) => visitor.visit_mac(mac),
+ PatKind::MacCall(ref mac) => visitor.visit_mac(mac),
}
}
pub fn walk_foreign_item<'a, V: Visitor<'a>>(visitor: &mut V, item: &'a ForeignItem) {
- let Item { id, span, ident, vis, attrs, kind, tokens: _ } = item;
- walk_nested_item(visitor, *id, *span, *ident, vis, attrs, kind, FnCtxt::Foreign);
+ let Item { id, span, ident, ref vis, ref attrs, ref kind, tokens: _ } = *item;
+ visitor.visit_vis(vis);
+ visitor.visit_ident(ident);
+ walk_list!(visitor, visit_attribute, attrs);
+ match kind {
+ ForeignItemKind::Static(ty, _, expr) => {
+ visitor.visit_ty(ty);
+ walk_list!(visitor, visit_expr, expr);
+ }
+ ForeignItemKind::Fn(_, sig, generics, body) => {
+ visitor.visit_generics(generics);
+ let kind = FnKind::Fn(FnCtxt::Foreign, ident, sig, vis, body.as_deref());
+ visitor.visit_fn(kind, span, id);
+ }
+ ForeignItemKind::TyAlias(_, generics, bounds, ty) => {
+ visitor.visit_generics(generics);
+ walk_list!(visitor, visit_param_bound, bounds);
+ walk_list!(visitor, visit_ty, ty);
+ }
+ ForeignItemKind::MacCall(mac) => {
+ visitor.visit_mac(mac);
+ }
+ }
}
pub fn walk_global_asm<'a, V: Visitor<'a>>(_: &mut V, _: &'a GlobalAsm) {
}
pub fn walk_assoc_item<'a, V: Visitor<'a>>(visitor: &mut V, item: &'a AssocItem, ctxt: AssocCtxt) {
- let Item { id, span, ident, vis, attrs, kind, tokens: _ } = item;
- walk_nested_item(visitor, *id, *span, *ident, vis, attrs, kind, FnCtxt::Assoc(ctxt));
-}
-
-fn walk_nested_item<'a, V: Visitor<'a>>(
- visitor: &mut V,
- id: NodeId,
- span: Span,
- ident: Ident,
- vis: &'a Visibility,
- attrs: &'a [Attribute],
- kind: &'a AssocItemKind,
- ctxt: FnCtxt,
-) {
+ let Item { id, span, ident, ref vis, ref attrs, ref kind, tokens: _ } = *item;
visitor.visit_vis(vis);
visitor.visit_ident(ident);
walk_list!(visitor, visit_attribute, attrs);
match kind {
- AssocItemKind::Const(_, ty, expr) | AssocItemKind::Static(ty, _, expr) => {
+ AssocItemKind::Const(_, ty, expr) => {
visitor.visit_ty(ty);
walk_list!(visitor, visit_expr, expr);
}
AssocItemKind::Fn(_, sig, generics, body) => {
visitor.visit_generics(generics);
- let kind = FnKind::Fn(ctxt, ident, sig, vis, body.as_deref());
+ let kind = FnKind::Fn(FnCtxt::Assoc(ctxt), ident, sig, vis, body.as_deref());
visitor.visit_fn(kind, span, id);
}
AssocItemKind::TyAlias(_, generics, bounds, ty) => {
walk_list!(visitor, visit_param_bound, bounds);
walk_list!(visitor, visit_ty, ty);
}
- AssocItemKind::Macro(mac) => {
+ AssocItemKind::MacCall(mac) => {
visitor.visit_mac(mac);
}
}
match statement.kind {
StmtKind::Local(ref local) => visitor.visit_local(local),
StmtKind::Item(ref item) => visitor.visit_item(item),
- StmtKind::Expr(ref expression) | StmtKind::Semi(ref expression) => {
- visitor.visit_expr(expression)
- }
- StmtKind::Mac(ref mac) => {
+ StmtKind::Expr(ref expr) | StmtKind::Semi(ref expr) => visitor.visit_expr(expr),
+ StmtKind::Empty => {}
+ StmtKind::MacCall(ref mac) => {
let (ref mac, _, ref attrs) = **mac;
visitor.visit_mac(mac);
for attr in attrs.iter() {
}
}
-pub fn walk_mac<'a, V: Visitor<'a>>(visitor: &mut V, mac: &'a Mac) {
+pub fn walk_mac<'a, V: Visitor<'a>>(visitor: &mut V, mac: &'a MacCall) {
visitor.visit_path(&mac.path, DUMMY_NODE_ID);
}
ExprKind::Ret(ref optional_expression) => {
walk_list!(visitor, visit_expr, optional_expression);
}
- ExprKind::Mac(ref mac) => visitor.visit_mac(mac),
+ ExprKind::MacCall(ref mac) => visitor.visit_mac(mac),
ExprKind::Paren(ref subexpression) => visitor.visit_expr(subexpression),
ExprKind::InlineAsm(ref ia) => {
for &(_, ref input) in &ia.inputs {
return self.lower_expr_for(e, pat, head, body, opt_label);
}
ExprKind::Try(ref sub_expr) => self.lower_expr_try(e.span, sub_expr),
- ExprKind::Mac(_) => panic!("Shouldn't exist here"),
+ ExprKind::MacCall(_) => panic!("Shouldn't exist here"),
};
hir::Expr {
}
}
+ /// Lower an `async` construct to a generator that is then wrapped so it implements `Future`.
+ ///
+ /// This results in:
+ ///
+ /// ```text
+ /// std::future::from_generator(static move? |_task_context| -> <ret_ty> {
+ /// <body>
+ /// })
+ /// ```
pub(super) fn make_async_expr(
&mut self,
capture_clause: CaptureBy,
body: impl FnOnce(&mut Self) -> hir::Expr<'hir>,
) -> hir::ExprKind<'hir> {
let output = match ret_ty {
- Some(ty) => FnRetTy::Ty(ty),
- None => FnRetTy::Default(span),
+ Some(ty) => hir::FnRetTy::Return(self.lower_ty(&ty, ImplTraitContext::disallowed())),
+ None => hir::FnRetTy::DefaultReturn(span),
};
- let ast_decl = FnDecl { inputs: vec![], output };
- let decl = self.lower_fn_decl(&ast_decl, None, /* impl trait allowed */ false, None);
- let body_id = self.lower_fn_body(&ast_decl, |this| {
+
+ // Resume argument type. We let the compiler infer this to simplify the lowering. It is
+ // fully constrained by `future::from_generator`.
+ let input_ty = hir::Ty { hir_id: self.next_id(), kind: hir::TyKind::Infer, span };
+
+ // The closure/generator `FnDecl` takes a single (resume) argument of type `input_ty`.
+ let decl = self.arena.alloc(hir::FnDecl {
+ inputs: arena_vec![self; input_ty],
+ output,
+ c_variadic: false,
+ implicit_self: hir::ImplicitSelfKind::None,
+ });
+
+ // Lower the argument pattern/ident. The ident is used again in the `.await` lowering.
+ let (pat, task_context_hid) = self.pat_ident_binding_mode(
+ span,
+ Ident::with_dummy_span(sym::_task_context),
+ hir::BindingAnnotation::Mutable,
+ );
+ let param = hir::Param { attrs: &[], hir_id: self.next_id(), pat, span };
+ let params = arena_vec![self; param];
+
+ let body_id = self.lower_body(move |this| {
this.generator_kind = Some(hir::GeneratorKind::Async(async_gen_kind));
- body(this)
+
+ let old_ctx = this.task_context;
+ this.task_context = Some(task_context_hid);
+ let res = body(this);
+ this.task_context = old_ctx;
+ (params, res)
});
- // `static || -> <ret_ty> { body }`:
+ // `static |_task_context| -> <ret_ty> { body }`:
let generator_kind = hir::ExprKind::Closure(
capture_clause,
decl,
/// ```rust
/// match <expr> {
/// mut pinned => loop {
- /// match ::std::future::poll_with_tls_context(unsafe {
- /// <::std::pin::Pin>::new_unchecked(&mut pinned)
- /// }) {
+ /// match unsafe { ::std::future::poll_with_context(
+ /// <::std::pin::Pin>::new_unchecked(&mut pinned),
+ /// task_context,
+ /// ) } {
/// ::std::task::Poll::Ready(result) => break result,
/// ::std::task::Poll::Pending => {}
/// }
- /// yield ();
+ /// task_context = yield ();
/// }
/// }
/// ```
let (pinned_pat, pinned_pat_hid) =
self.pat_ident_binding_mode(span, pinned_ident, hir::BindingAnnotation::Mutable);
- // ::std::future::poll_with_tls_context(unsafe {
- // ::std::pin::Pin::new_unchecked(&mut pinned)
- // })`
+ let task_context_ident = Ident::with_dummy_span(sym::_task_context);
+
+ // unsafe {
+ // ::std::future::poll_with_context(
+ // ::std::pin::Pin::new_unchecked(&mut pinned),
+ // task_context,
+ // )
+ // }
let poll_expr = {
let pinned = self.expr_ident(span, pinned_ident, pinned_pat_hid);
let ref_mut_pinned = self.expr_mut_addr_of(span, pinned);
+ let task_context = if let Some(task_context_hid) = self.task_context {
+ self.expr_ident_mut(span, task_context_ident, task_context_hid)
+ } else {
+            // Use of `await` outside of an async context; we cannot use `task_context` here.
+ self.expr_err(span)
+ };
let pin_ty_id = self.next_id();
let new_unchecked_expr_kind = self.expr_call_std_assoc_fn(
pin_ty_id,
"new_unchecked",
arena_vec![self; ref_mut_pinned],
);
- let new_unchecked =
- self.arena.alloc(self.expr(span, new_unchecked_expr_kind, ThinVec::new()));
- let unsafe_expr = self.expr_unsafe(new_unchecked);
- self.expr_call_std_path(
+ let new_unchecked = self.expr(span, new_unchecked_expr_kind, ThinVec::new());
+ let call = self.expr_call_std_path(
gen_future_span,
- &[sym::future, sym::poll_with_tls_context],
- arena_vec![self; unsafe_expr],
- )
+ &[sym::future, sym::poll_with_context],
+ arena_vec![self; new_unchecked, task_context],
+ );
+ self.arena.alloc(self.expr_unsafe(call))
};
// `::std::task::Poll::Ready(result) => break result`
self.stmt_expr(span, match_expr)
};
+ // task_context = yield ();
let yield_stmt = {
let unit = self.expr_unit(span);
let yield_expr = self.expr(
hir::ExprKind::Yield(unit, hir::YieldSource::Await),
ThinVec::new(),
);
- self.stmt_expr(span, yield_expr)
+ let yield_expr = self.arena.alloc(yield_expr);
+
+ if let Some(task_context_hid) = self.task_context {
+ let lhs = self.expr_ident(span, task_context_ident, task_context_hid);
+ let assign =
+ self.expr(span, hir::ExprKind::Assign(lhs, yield_expr, span), AttrVec::new());
+ self.stmt_expr(span, assign)
+ } else {
+ // Use of `await` outside of an async context. Return `yield_expr` so that we can
+ // proceed with type checking.
+ self.stmt(span, hir::StmtKind::Semi(yield_expr))
+ }
};
let loop_block = self.block_all(span, arena_vec![self; inner_match_stmt, yield_stmt], None);
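The desugaring above loops on a poll, breaking out on `Poll::Ready(result)` and yielding on `Poll::Pending`, with the task context threaded through each poll. The same Ready/Pending loop can be demonstrated on stable Rust with a minimal hand-rolled executor; the no-op waker and `block_on` below are an illustrative sketch of that shape, not part of the compiler's lowering:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A no-op waker: the loop below polls in a hot loop, so wakeups are
// unnecessary. A real executor would park and wait to be woken.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// The desugared `.await` shape: pin the future, then loop matching on
// `poll(cx)` until `Poll::Ready`, passing the task context on each poll.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(result) => break result,
            Poll::Pending => {} // a real executor would yield/park here
        }
    }
}

fn main() {
    let value = block_on(async { 1 + 2 });
    assert_eq!(value, 3);
    println!("{}", value);
}
```

The diff's change is that the generator no longer fetches the context from thread-local storage (`poll_with_tls_context`); instead `task_context = yield ()` receives a fresh context as the generator's resume argument, mirroring how `cx` is passed explicitly on every `poll` call above.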
use rustc_ast::ast::*;
use rustc_ast::attr;
use rustc_ast::node_id::NodeMap;
+use rustc_ast::ptr::P;
use rustc_ast::visit::{self, AssocCtxt, Visitor};
use rustc_errors::struct_span_err;
use rustc_hir as hir;
use rustc_hir::def::{DefKind, Res};
-use rustc_hir::def_id::DefId;
+use rustc_hir::def_id::LocalDefId;
use rustc_span::source_map::{respan, DesugaringKind};
use rustc_span::symbol::{kw, sym};
use rustc_span::Span;
_ => &[],
};
let lt_def_names = parent_generics.iter().filter_map(|param| match param.kind {
- hir::GenericParamKind::Lifetime { .. } => Some(param.name.modern()),
+ hir::GenericParamKind::Lifetime { .. } => Some(param.name.normalize_to_macros_2_0()),
_ => None,
});
self.in_scope_lifetimes.extend(lt_def_names);
let mut vis = self.lower_visibility(&i.vis, None);
let attrs = self.lower_attrs(&i.attrs);
- if let ItemKind::MacroDef(ref def) = i.kind {
- if !def.legacy || attr::contains_name(&i.attrs, sym::macro_export) {
- let body = self.lower_token_stream(def.body.inner_tokens());
+ if let ItemKind::MacroDef(MacroDef { ref body, macro_rules }) = i.kind {
+ if !macro_rules || attr::contains_name(&i.attrs, sym::macro_export) {
let hir_id = self.lower_node_id(i.id);
+ let body = P(self.lower_mac_args(body));
self.exported_macros.push(hir::MacroDef {
- name: ident.name,
+ ident,
vis,
attrs,
hir_id,
span: i.span,
- body,
- legacy: def.legacy,
+ ast: MacroDef { body, macro_rules },
});
} else {
self.non_exported_macro_attrs.extend(attrs.iter().cloned());
hir::ItemKind::Const(ty, body_id)
}
ItemKind::Fn(_, FnSig { ref decl, header }, ref generics, ref body) => {
- let fn_def_id = self.resolver.definitions().local_def_id(id);
+ let fn_def_id = self.resolver.definitions().local_def_id(id).expect_local();
self.with_new_scopes(|this| {
this.current_item = Some(ident.span);
AnonymousLifetimeMode::PassThrough,
|this, idty| {
let ret_id = asyncness.opt_return_id();
- this.lower_fn_decl(&decl, Some((fn_def_id, idty)), true, ret_id)
+ this.lower_fn_decl(
+ &decl,
+ Some((fn_def_id.to_def_id(), idty)),
+ true,
+ ret_id,
+ )
},
);
let sig = hir::FnSig { decl, header: this.lower_fn_header(header) };
self_ty: ref ty,
items: ref impl_items,
} => {
- let def_id = self.resolver.definitions().local_def_id(id);
+ let def_id = self.resolver.definitions().local_def_id(id).expect_local();
// Lower the "impl header" first. This ordering is important
// for in-band lifetimes! Consider `'a` here:
self.lower_generics(generics, ImplTraitContext::disallowed()),
self.lower_param_bounds(bounds, ImplTraitContext::disallowed()),
),
- ItemKind::MacroDef(..) | ItemKind::Mac(..) => {
+ ItemKind::MacroDef(..) | ItemKind::MacCall(..) => {
bug!("`TyMac` should have been expanded by now")
}
}
}
fn lower_foreign_item(&mut self, i: &ForeignItem) -> hir::ForeignItem<'hir> {
- let def_id = self.resolver.definitions().local_def_id(i.id);
+ let def_id = self.resolver.definitions().local_def_id(i.id).expect_local();
hir::ForeignItem {
hir_id: self.lower_node_id(i.id),
ident: i.ident,
let ty = self.lower_ty(t, ImplTraitContext::disallowed());
hir::ForeignItemKind::Static(ty, m)
}
- ForeignItemKind::Const(_, ref t, _) => {
- // For recovery purposes.
- let ty = self.lower_ty(t, ImplTraitContext::disallowed());
- hir::ForeignItemKind::Static(ty, Mutability::Not)
- }
ForeignItemKind::TyAlias(..) => hir::ForeignItemKind::Type,
- ForeignItemKind::Macro(_) => panic!("macro shouldn't exist here"),
+ ForeignItemKind::MacCall(_) => panic!("macro shouldn't exist here"),
},
vis: self.lower_visibility(&i.vis, None),
span: i.span,
}
fn lower_trait_item(&mut self, i: &AssocItem) -> hir::TraitItem<'hir> {
- let trait_item_def_id = self.resolver.definitions().local_def_id(i.id);
+ let trait_item_def_id = self.resolver.definitions().local_def_id(i.id).expect_local();
let (generics, kind) = match i.kind {
- AssocItemKind::Static(ref ty, _, ref default) // Let's pretend this is a `const`.
- | AssocItemKind::Const(_, ref ty, ref default) => {
+ AssocItemKind::Const(_, ref ty, ref default) => {
let ty = self.lower_ty(ty, ImplTraitContext::disallowed());
let body = default.as_ref().map(|x| self.lower_const_body(i.span, Some(x)));
(hir::Generics::empty(), hir::TraitItemKind::Const(ty, body))
let names = self.lower_fn_params_to_names(&sig.decl);
let (generics, sig) =
self.lower_method_sig(generics, sig, trait_item_def_id, false, None);
- (generics, hir::TraitItemKind::Method(sig, hir::TraitMethod::Required(names)))
+ (generics, hir::TraitItemKind::Fn(sig, hir::TraitFn::Required(names)))
}
AssocItemKind::Fn(_, ref sig, ref generics, Some(ref body)) => {
let body_id = self.lower_fn_body_block(i.span, &sig.decl, Some(body));
let (generics, sig) =
self.lower_method_sig(generics, sig, trait_item_def_id, false, None);
- (generics, hir::TraitItemKind::Method(sig, hir::TraitMethod::Provided(body_id)))
+ (generics, hir::TraitItemKind::Fn(sig, hir::TraitFn::Provided(body_id)))
}
AssocItemKind::TyAlias(_, ref generics, ref bounds, ref default) => {
let ty = default.as_ref().map(|x| self.lower_ty(x, ImplTraitContext::disallowed()));
(generics, kind)
}
- AssocItemKind::Macro(..) => bug!("macro item shouldn't exist at this point"),
+ AssocItemKind::MacCall(..) => bug!("macro item shouldn't exist at this point"),
};
hir::TraitItem {
fn lower_trait_item_ref(&mut self, i: &AssocItem) -> hir::TraitItemRef {
let (kind, has_default) = match &i.kind {
- AssocItemKind::Static(_, _, default) // Let's pretend this is a `const` for recovery.
- | AssocItemKind::Const(_, _, default) => {
- (hir::AssocItemKind::Const, default.is_some())
+ AssocItemKind::Const(_, _, default) => (hir::AssocItemKind::Const, default.is_some()),
+ AssocItemKind::TyAlias(_, _, _, default) => {
+ (hir::AssocItemKind::Type, default.is_some())
}
- AssocItemKind::TyAlias(_, _, _, default) => (hir::AssocItemKind::Type, default.is_some()),
AssocItemKind::Fn(_, sig, _, default) => {
(hir::AssocItemKind::Method { has_self: sig.decl.has_self() }, default.is_some())
}
- AssocItemKind::Macro(..) => unimplemented!(),
+ AssocItemKind::MacCall(..) => unimplemented!(),
};
let id = hir::TraitItemId { hir_id: self.lower_node_id(i.id) };
let defaultness = hir::Defaultness::Default { has_value: has_default };
}
/// Construct `ExprKind::Err` for the given `span`.
- fn expr_err(&mut self, span: Span) -> hir::Expr<'hir> {
+ crate fn expr_err(&mut self, span: Span) -> hir::Expr<'hir> {
self.expr(span, hir::ExprKind::Err, AttrVec::new())
}
fn lower_impl_item(&mut self, i: &AssocItem) -> hir::ImplItem<'hir> {
- let impl_item_def_id = self.resolver.definitions().local_def_id(i.id);
+ let impl_item_def_id = self.resolver.definitions().local_def_id(i.id).expect_local();
let (generics, kind) = match &i.kind {
- AssocItemKind::Static(ty, _, expr) | AssocItemKind::Const(_, ty, expr) => {
+ AssocItemKind::Const(_, ty, expr) => {
let ty = self.lower_ty(ty, ImplTraitContext::disallowed());
(
hir::Generics::empty(),
asyncness.opt_return_id(),
);
- (generics, hir::ImplItemKind::Method(sig, body_id))
+ (generics, hir::ImplItemKind::Fn(sig, body_id))
}
AssocItemKind::TyAlias(_, generics, _, ty) => {
let generics = self.lower_generics(generics, ImplTraitContext::disallowed());
};
(generics, kind)
}
- AssocItemKind::Macro(..) => bug!("`TyMac` should have been expanded by now"),
+ AssocItemKind::MacCall(..) => bug!("`TyMac` should have been expanded by now"),
};
hir::ImplItem {
vis: self.lower_visibility(&i.vis, Some(i.id)),
defaultness: self.lower_defaultness(i.kind.defaultness(), true /* [1] */),
kind: match &i.kind {
- AssocItemKind::Static(..) // Let's pretend this is a `const` for recovery.
- | AssocItemKind::Const(..) => hir::AssocItemKind::Const,
+ AssocItemKind::Const(..) => hir::AssocItemKind::Const,
AssocItemKind::TyAlias(.., ty) => {
match ty.as_deref().and_then(|ty| ty.kind.opaque_top_hack()) {
None => hir::AssocItemKind::Type,
AssocItemKind::Fn(_, sig, ..) => {
hir::AssocItemKind::Method { has_self: sig.decl.has_self() }
}
- AssocItemKind::Macro(..) => unimplemented!(),
+ AssocItemKind::MacCall(..) => unimplemented!(),
},
}
id
}
- fn lower_body(
+ pub(super) fn lower_body(
&mut self,
f: impl FnOnce(&mut Self) -> (&'hir [hir::Param<'hir>], hir::Expr<'hir>),
) -> hir::BodyId {
&mut self,
generics: &Generics,
sig: &FnSig,
- fn_def_id: DefId,
+ fn_def_id: LocalDefId,
impl_trait_return_allow: bool,
is_async: Option<NodeId>,
) -> (hir::Generics<'hir>, hir::FnSig<'hir>) {
|this, idty| {
this.lower_fn_decl(
&sig.decl,
- Some((fn_def_id, idty)),
+ Some((fn_def_id.to_def_id(), idty)),
impl_trait_return_allow,
is_async,
)
use rustc::arena::Arena;
use rustc::dep_graph::DepGraph;
use rustc::hir::map::definitions::{DefKey, DefPathData, Definitions};
-use rustc::hir::map::Map;
use rustc::{bug, span_bug};
use rustc_ast::ast;
use rustc_ast::ast::*;
use rustc_errors::struct_span_err;
use rustc_hir as hir;
use rustc_hir::def::{DefKind, Namespace, PartialRes, PerNS, Res};
-use rustc_hir::def_id::{DefId, DefIdMap, DefIndex, CRATE_DEF_INDEX};
+use rustc_hir::def_id::{DefId, DefIdMap, LocalDefId, CRATE_DEF_INDEX};
use rustc_hir::intravisit;
use rustc_hir::{ConstArg, GenericArg, ParamName};
use rustc_index::vec::IndexVec;
generator_kind: Option<hir::GeneratorKind>,
+ /// When inside an `async` context, this is the `HirId` of the
+ /// `task_context` local bound to the resume argument of the generator.
+ task_context: Option<hir::HirId>,
+
/// Used to get the current `fn`'s def span to point to when using `await`
/// outside of an `async fn`.
current_item: Option<Span>,
/// against this list to see if it is already in-scope, or if a definition
/// needs to be created for it.
///
- /// We always store a `modern()` version of the param-name in this
+ /// We always store a `normalize_to_macros_2_0()` version of the param-name in this
/// vector.
in_scope_lifetimes: Vec<ParamName>,
type_def_lifetime_params: DefIdMap<usize>,
- current_hir_id_owner: Vec<(DefIndex, u32)>,
+ current_hir_id_owner: Vec<(LocalDefId, u32)>,
item_local_id_counters: NodeMap<u32>,
node_id_to_hir_id: IndexVec<NodeId, hir::HirId>,
anonymous_lifetime_mode: AnonymousLifetimeMode::PassThrough,
type_def_lifetime_params: Default::default(),
current_module: hir::CRATE_HIR_ID,
- current_hir_id_owner: vec![(CRATE_DEF_INDEX, 0)],
+ current_hir_id_owner: vec![(LocalDefId { local_def_index: CRATE_DEF_INDEX }, 0)],
item_local_id_counters: Default::default(),
node_id_to_hir_id: IndexVec::new(),
generator_kind: None,
+ task_context: None,
current_item: None,
lifetimes_to_define: Vec::new(),
is_collecting_in_band_lifetimes: false,
}
impl MiscCollector<'_, '_, '_> {
- fn allocate_use_tree_hir_id_counters(&mut self, tree: &UseTree, owner: DefIndex) {
+ fn allocate_use_tree_hir_id_counters(&mut self, tree: &UseTree, owner: LocalDefId) {
match tree.kind {
UseTreeKind::Simple(_, id1, id2) => {
for &id in &[id1, id2] {
| ItemKind::Enum(_, ref generics)
| ItemKind::TyAlias(_, ref generics, ..)
| ItemKind::Trait(_, _, ref generics, ..) => {
- let def_id = self.lctx.resolver.definitions().local_def_id(item.id);
+ let def_id =
+ self.lctx.resolver.definitions().local_def_id(item.id).expect_local();
let count = generics
.params
.iter()
_ => false,
})
.count();
- self.lctx.type_def_lifetime_params.insert(def_id, count);
+ self.lctx.type_def_lifetime_params.insert(def_id.to_def_id(), count);
}
ItemKind::Use(ref use_tree) => {
self.allocate_use_tree_hir_id_counters(use_tree, hir_id.owner);
self.resolver.definitions().init_node_id_to_hir_id_mapping(self.node_id_to_hir_id);
hir::Crate {
- module,
- attrs,
- span: c.span,
+ item: hir::CrateItem { module, attrs, span: c.span },
exported_macros: self.arena.alloc_from_iter(self.exported_macros),
non_exported_macro_attrs: self.arena.alloc_from_iter(self.non_exported_macro_attrs),
items: self.items,
.item_local_id_counters
.insert(owner, HIR_ID_COUNTER_LOCKED)
.unwrap_or_else(|| panic!("no `item_local_id_counters` entry for {:?}", owner));
- let def_index = self.resolver.definitions().opt_def_index(owner).unwrap();
- self.current_hir_id_owner.push((def_index, counter));
+ let def_id = self.resolver.definitions().local_def_id(owner).expect_local();
+ self.current_hir_id_owner.push((def_id, counter));
let ret = f(self);
- let (new_def_index, new_counter) = self.current_hir_id_owner.pop().unwrap();
+ let (new_def_id, new_counter) = self.current_hir_id_owner.pop().unwrap();
- debug_assert!(def_index == new_def_index);
+ debug_assert!(def_id == new_def_id);
debug_assert!(new_counter >= counter);
let prev = self.item_local_id_counters.insert(owner, new_counter).unwrap();
/// properly. Calling the method twice with the same `NodeId` is fine though.
fn lower_node_id(&mut self, ast_node_id: NodeId) -> hir::HirId {
self.lower_node_id_generic(ast_node_id, |this| {
- let &mut (def_index, ref mut local_id_counter) =
+ let &mut (owner, ref mut local_id_counter) =
this.current_hir_id_owner.last_mut().unwrap();
let local_id = *local_id_counter;
*local_id_counter += 1;
- hir::HirId { owner: def_index, local_id: hir::ItemLocalId::from_u32(local_id) }
+ hir::HirId { owner, local_id: hir::ItemLocalId::from_u32(local_id) }
})
}
debug_assert!(local_id != HIR_ID_COUNTER_LOCKED);
*local_id_counter += 1;
- let def_index = this.resolver.definitions().opt_def_index(owner).expect(
+ let owner = this.resolver.definitions().opt_local_def_id(owner).expect(
"you forgot to call `create_def_with_parent` or are lowering node-IDs \
- that do not belong to the current owner",
+ that do not belong to the current owner",
);
- hir::HirId { owner: def_index, local_id: hir::ItemLocalId::from_u32(local_id) }
+ hir::HirId { owner, local_id: hir::ItemLocalId::from_u32(local_id) }
})
}
/// parameter while `f` is running (and restored afterwards).
fn collect_in_band_defs<T>(
&mut self,
- parent_id: DefId,
+ parent_def_id: LocalDefId,
anonymous_lifetime_mode: AnonymousLifetimeMode,
f: impl FnOnce(&mut Self) -> (Vec<hir::GenericParam<'hir>>, T),
) -> (Vec<hir::GenericParam<'hir>>, T) {
let params = lifetimes_to_define
.into_iter()
- .map(|(span, hir_name)| self.lifetime_to_generic_param(span, hir_name, parent_id.index))
+ .map(|(span, hir_name)| self.lifetime_to_generic_param(span, hir_name, parent_def_id))
.chain(in_band_ty_params.into_iter())
.collect();
&mut self,
span: Span,
hir_name: ParamName,
- parent_index: DefIndex,
+ parent_def_id: LocalDefId,
) -> hir::GenericParam<'hir> {
let node_id = self.resolver.next_node_id();
// Add a definition for the in-band lifetime def.
self.resolver.definitions().create_def_with_parent(
- parent_index,
+ parent_def_id,
node_id,
DefPathData::LifetimeNs(str_name),
ExpnId::root(),
return;
}
- if self.in_scope_lifetimes.contains(&ParamName::Plain(ident.modern())) {
+ if self.in_scope_lifetimes.contains(&ParamName::Plain(ident.normalize_to_macros_2_0())) {
return;
}
let hir_name = ParamName::Plain(ident);
- if self.lifetimes_to_define.iter().any(|(_, lt_name)| lt_name.modern() == hir_name.modern())
- {
+ if self.lifetimes_to_define.iter().any(|(_, lt_name)| {
+ lt_name.normalize_to_macros_2_0() == hir_name.normalize_to_macros_2_0()
+ }) {
return;
}
) -> T {
let old_len = self.in_scope_lifetimes.len();
let lt_def_names = params.iter().filter_map(|param| match param.kind {
- GenericParamKind::Lifetime { .. } => Some(ParamName::Plain(param.ident.modern())),
+ GenericParamKind::Lifetime { .. } => {
+ Some(ParamName::Plain(param.ident.normalize_to_macros_2_0()))
+ }
_ => None,
});
self.in_scope_lifetimes.extend(lt_def_names);
fn add_in_band_defs<T>(
&mut self,
generics: &Generics,
- parent_id: DefId,
+ parent_def_id: LocalDefId,
anonymous_lifetime_mode: AnonymousLifetimeMode,
f: impl FnOnce(&mut Self, &mut Vec<hir::GenericParam<'hir>>) -> T,
) -> (hir::Generics<'hir>, T) {
let (in_band_defs, (mut lowered_generics, res)) =
self.with_in_scope_lifetime_defs(&generics.params, |this| {
- this.collect_in_band_defs(parent_id, anonymous_lifetime_mode, |this| {
+ this.collect_in_band_defs(parent_def_id, anonymous_lifetime_mode, |this| {
let mut params = Vec::new();
// Note: it is necessary to lower generics *before* calling `f`.
// When lowering `async fn`, there's a final step when lowering
// constructing the HIR for `impl bounds...` and then lowering that.
let impl_trait_node_id = self.resolver.next_node_id();
- let parent_def_index = self.current_hir_id_owner.last().unwrap().0;
+ let parent_def_id = self.current_hir_id_owner.last().unwrap().0;
self.resolver.definitions().create_def_with_parent(
- parent_def_index,
+ parent_def_id,
impl_trait_node_id,
DefPathData::ImplTrait,
ExpnId::root(),
match arg {
            ast::GenericArg::Lifetime(lt) => GenericArg::Lifetime(self.lower_lifetime(&lt)),
ast::GenericArg::Type(ty) => {
- // We parse const arguments as path types as we cannot distiguish them durring
+ // We parse const arguments as path types as we cannot distinguish them during
// parsing. We try to resolve that ambiguity by attempting resolution in both the
// type and value namespaces. If we resolved the path in the value namespace, we
// transform it into a generic const argument.
// Construct a AnonConst where the expr is the "ty"'s path.
- let parent_def_index = self.current_hir_id_owner.last().unwrap().0;
+ let parent_def_id = self.current_hir_id_owner.last().unwrap().0;
let node_id = self.resolver.next_node_id();
// Add a definition for the in-band const def.
self.resolver.definitions().create_def_with_parent(
- parent_def_index,
+ parent_def_id,
node_id,
DefPathData::AnonConst,
ExpnId::root(),
}
ImplTraitContext::Universal(in_band_ty_params) => {
// Add a definition for the in-band `Param`.
- let def_index =
- self.resolver.definitions().opt_def_index(def_node_id).unwrap();
+ let def_id =
+ self.resolver.definitions().local_def_id(def_node_id).expect_local();
let hir_bounds = self.lower_param_bounds(
bounds,
None,
self.arena.alloc(hir::Path {
span,
- res: Res::Def(DefKind::TyParam, DefId::local(def_index)),
+ res: Res::Def(DefKind::TyParam, def_id.to_def_id()),
segments: arena_vec![self; hir::PathSegment::from_ident(ident)],
}),
))
}
}
}
- TyKind::Mac(_) => bug!("`TyKind::Mac` should have been expanded by now"),
+ TyKind::MacCall(_) => bug!("`TyKind::MacCall` should have been expanded by now"),
TyKind::CVarArgs => {
self.sess.delay_span_bug(
t.span,
// frequently opened issues show.
let opaque_ty_span = self.mark_span_with_reason(DesugaringKind::OpaqueTy, span, None);
- let opaque_ty_def_index =
- self.resolver.definitions().opt_def_index(opaque_ty_node_id).unwrap();
+ let opaque_ty_def_id =
+ self.resolver.definitions().local_def_id(opaque_ty_node_id).expect_local();
self.allocate_hir_id_counter(opaque_ty_node_id);
let hir_bounds = self.with_hir_id_owner(opaque_ty_node_id, lower_bounds);
- let (lifetimes, lifetime_defs) = self.lifetimes_from_impl_trait_bounds(
- opaque_ty_node_id,
- opaque_ty_def_index,
- &hir_bounds,
- );
+ let (lifetimes, lifetime_defs) =
+ self.lifetimes_from_impl_trait_bounds(opaque_ty_node_id, opaque_ty_def_id, &hir_bounds);
debug!("lower_opaque_impl_trait: lifetimes={:#?}", lifetimes,);
origin,
};
- trace!("lower_opaque_impl_trait: {:#?}", opaque_ty_def_index);
+ trace!("lower_opaque_impl_trait: {:#?}", opaque_ty_def_id);
let opaque_ty_id =
lctx.generate_opaque_type(opaque_ty_node_id, opaque_ty_item, span, opaque_ty_span);
fn lifetimes_from_impl_trait_bounds(
&mut self,
opaque_ty_id: NodeId,
- parent_index: DefIndex,
+ parent_def_id: LocalDefId,
bounds: hir::GenericBounds<'hir>,
) -> (&'hir [hir::GenericArg<'hir>], &'hir [hir::GenericParam<'hir>]) {
debug!(
"lifetimes_from_impl_trait_bounds(opaque_ty_id={:?}, \
- parent_index={:?}, \
+ parent_def_id={:?}, \
bounds={:#?})",
- opaque_ty_id, parent_index, bounds,
+ opaque_ty_id, parent_def_id, bounds,
);
// This visitor walks over `impl Trait` bounds and creates defs for all lifetimes that
// E.g., `'a`, `'b`, but not `'c` in `impl for<'c> SomeTrait<'a, 'b, 'c>`.
struct ImplTraitLifetimeCollector<'r, 'a, 'hir> {
context: &'r mut LoweringContext<'a, 'hir>,
- parent: DefIndex,
+ parent: LocalDefId,
opaque_ty_id: NodeId,
collect_elided_lifetimes: bool,
currently_bound_lifetimes: Vec<hir::LifetimeName>,
}
impl<'r, 'a, 'v, 'hir> intravisit::Visitor<'v> for ImplTraitLifetimeCollector<'r, 'a, 'hir> {
- type Map = Map<'v>;
+ type Map = intravisit::ErasedMap<'v>;
- fn nested_visit_map(&mut self) -> intravisit::NestedVisitorMap<'_, Self::Map> {
+ fn nested_visit_map(&mut self) -> intravisit::NestedVisitorMap<Self::Map> {
intravisit::NestedVisitorMap::None
}
let mut lifetime_collector = ImplTraitLifetimeCollector {
context: self,
- parent: parent_index,
+ parent: parent_def_id,
opaque_ty_id,
collect_elided_lifetimes: true,
currently_bound_lifetimes: Vec::new(),
visitor.visit_ty(ty);
}
}
- let parent_def_id = DefId::local(self.current_hir_id_owner.last().unwrap().0);
+ let parent_def_id = self.current_hir_id_owner.last().unwrap().0;
let ty = l.ty.as_ref().map(|t| {
self.lower_ty(
t,
if self.sess.features_untracked().impl_trait_in_bindings {
- ImplTraitContext::OpaqueTy(Some(parent_def_id), hir::OpaqueTyOrigin::Misc)
+ ImplTraitContext::OpaqueTy(
+ Some(parent_def_id.to_def_id()),
+ hir::OpaqueTyOrigin::Misc,
+ )
} else {
ImplTraitContext::Disallowed(ImplTraitPosition::Binding)
},
let opaque_ty_span = self.mark_span_with_reason(DesugaringKind::Async, span, None);
- let opaque_ty_def_index =
- self.resolver.definitions().opt_def_index(opaque_ty_node_id).unwrap();
+ let opaque_ty_def_id =
+ self.resolver.definitions().local_def_id(opaque_ty_node_id).expect_local();
self.allocate_hir_id_counter(opaque_ty_node_id);
let generic_params =
this.arena.alloc_from_iter(lifetime_params.iter().map(|(span, hir_name)| {
- this.lifetime_to_generic_param(*span, *hir_name, opaque_ty_def_index)
+ this.lifetime_to_generic_param(*span, *hir_name, opaque_ty_def_id)
}));
let opaque_ty_item = hir::OpaqueTy {
origin: hir::OpaqueTyOrigin::AsyncFn,
};
- trace!("exist ty from async fn def index: {:#?}", opaque_ty_def_index);
+ trace!("exist ty from async fn def id: {:#?}", opaque_ty_def_id);
let opaque_ty_id =
this.generate_opaque_type(opaque_ty_node_id, opaque_ty_item, span, opaque_ty_span);
}
StmtKind::Expr(ref e) => hir::StmtKind::Expr(self.lower_expr(e)),
StmtKind::Semi(ref e) => hir::StmtKind::Semi(self.lower_expr(e)),
- StmtKind::Mac(..) => panic!("shouldn't exist here"),
+ StmtKind::Empty => return smallvec![],
+ StmtKind::MacCall(..) => panic!("shouldn't exist here"),
};
smallvec![hir::Stmt { hir_id: self.lower_node_id(s.id), kind, span: s.span }]
}
self.ban_illegal_rest_pat(p.span)
}
PatKind::Paren(ref inner) => return self.lower_pat(inner),
- PatKind::Mac(_) => panic!("Shouldn't exist here"),
+ PatKind::MacCall(_) => panic!("Shouldn't exist here"),
};
self.pat_with_node_id_of(p, node)
use super::{AnonymousLifetimeMode, ImplTraitContext, LoweringContext, ParamMode};
use super::{GenericArgsCtor, ParenthesizedGenericArgs};
-use rustc::lint::builtin::ELIDED_LIFETIMES_IN_PATHS;
use rustc::span_bug;
use rustc_ast::ast::{self, *};
use rustc_errors::{struct_span_err, Applicability};
use rustc_hir::def::{DefKind, PartialRes, Res};
use rustc_hir::def_id::DefId;
use rustc_hir::GenericArg;
+use rustc_session::lint::builtin::ELIDED_LIFETIMES_IN_PATHS;
use rustc_session::lint::BuiltinLintDiagnostics;
use rustc_span::Span;
ParenthesizedGenericArgs::Ok
}
// `a::b::Trait(Args)::TraitItem`
- Res::Def(DefKind::Method, _)
+ Res::Def(DefKind::AssocFn, _)
| Res::Def(DefKind::AssocConst, _)
| Res::Def(DefKind::AssocTy, _)
if i + 2 == proj_start =>
use rustc_ast::ast::*;
use rustc_ast::attr;
use rustc_ast::expand::is_proc_macro_attr;
+use rustc_ast::ptr::P;
use rustc_ast::visit::{self, AssocCtxt, FnCtxt, FnKind, Visitor};
use rustc_ast::walk_list;
use rustc_ast_pretty::pprust;
.span_label(ident.span, format!("`_` is not a valid name for this `{}` item", kind))
.emit();
}
+
+ fn deny_generic_params(&self, generics: &Generics, ident_span: Span) {
+ if !generics.params.is_empty() {
+ struct_span_err!(
+ self.session,
+ generics.span,
+ E0567,
+ "auto traits cannot have generic parameters"
+ )
+ .span_label(ident_span, "auto trait cannot have generic parameters")
+ .span_suggestion(
+ generics.span,
+ "remove the parameters",
+ String::new(),
+ Applicability::MachineApplicable,
+ )
+ .emit();
+ }
+ }
+
+ fn deny_super_traits(&self, bounds: &GenericBounds, ident_span: Span) {
+ if let [first @ last] | [first, .., last] = &bounds[..] {
+ let span = first.span().to(last.span());
+ struct_span_err!(self.session, span, E0568, "auto traits cannot have super traits")
+ .span_label(ident_span, "auto trait cannot have super traits")
+ .span_suggestion(
+ span,
+ "remove the super traits",
+ String::new(),
+ Applicability::MachineApplicable,
+ )
+ .emit();
+ }
+ }
+
+ fn deny_items(&self, trait_items: &[P<AssocItem>], ident_span: Span) {
+ if !trait_items.is_empty() {
+ let spans: Vec<_> = trait_items.iter().map(|i| i.ident.span).collect();
+ struct_span_err!(
+ self.session,
+ spans,
+ E0380,
+ "auto traits cannot have methods or associated items"
+ )
+ .span_label(ident_span, "auto trait cannot have items")
+ .emit();
+ }
+ }
}
fn validate_generic_param_order<'a>(
defaultness: _,
constness: _,
generics: _,
- of_trait: Some(_),
+ of_trait: Some(ref t),
ref self_ty,
items: _,
} => {
.help("use `auto trait Trait {}` instead")
.emit();
}
- if let (Unsafe::Yes(span), ImplPolarity::Negative) = (unsafety, polarity) {
+ if let (Unsafe::Yes(span), ImplPolarity::Negative(sp)) = (unsafety, polarity) {
struct_span_err!(
this.session,
- item.span,
+ sp.to(t.path.span),
E0198,
"negative impls cannot be unsafe"
)
+ .span_label(sp, "negative because of this")
.span_label(span, "unsafe because of this")
.emit();
}
constness,
generics: _,
of_trait: None,
- self_ty: _,
+ ref self_ty,
items: _,
} => {
+ let error = |annotation_span, annotation| {
+ let mut err = self.err_handler().struct_span_err(
+ self_ty.span,
+ &format!("inherent impls cannot be {}", annotation),
+ );
+ err.span_label(annotation_span, &format!("{} because of this", annotation));
+ err.span_label(self_ty.span, "inherent impl for this type");
+ err
+ };
+
self.invalid_visibility(
&item.vis,
Some("place qualifiers on individual impl items instead"),
);
if let Unsafe::Yes(span) = unsafety {
- struct_span_err!(
- self.session,
- item.span,
- E0197,
- "inherent impls cannot be unsafe"
- )
- .span_label(span, "unsafe because of this")
- .emit();
+ error(span, "unsafe").code(error_code!(E0197)).emit();
}
- if polarity == ImplPolarity::Negative {
- self.err_handler().span_err(item.span, "inherent impls cannot be negative");
+ if let ImplPolarity::Negative(span) = polarity {
+ error(span, "negative").emit();
}
if let Defaultness::Default(def_span) = defaultness {
- let span = self.session.source_map().def_span(item.span);
- self.err_handler()
- .struct_span_err(span, "inherent impls cannot be `default`")
- .span_label(def_span, "`default` because of this")
+ error(def_span, "`default`")
.note("only trait implementations may be annotated with `default`")
.emit();
}
if let Const::Yes(span) = constness {
- self.err_handler()
- .struct_span_err(item.span, "inherent impls cannot be `const`")
- .span_label(span, "`const` because of this")
+ error(span, "`const`")
.note("only trait implementations may be annotated with `const`")
.emit();
}
ItemKind::Trait(is_auto, _, ref generics, ref bounds, ref trait_items) => {
if is_auto == IsAuto::Yes {
// Auto traits cannot have generics, super traits nor contain items.
- if !generics.params.is_empty() {
- struct_span_err!(
- self.session,
- item.span,
- E0567,
- "auto traits cannot have generic parameters"
- )
- .emit();
- }
- if !bounds.is_empty() {
- struct_span_err!(
- self.session,
- item.span,
- E0568,
- "auto traits cannot have super traits"
- )
- .emit();
- }
- if !trait_items.is_empty() {
- struct_span_err!(
- self.session,
- item.span,
- E0380,
- "auto traits cannot have methods or associated items"
- )
- .emit();
- }
+ self.deny_generic_params(generics, item.ident.span);
+ self.deny_super_traits(bounds, item.ident.span);
+ self.deny_items(trait_items, item.ident.span);
}
self.no_questions_in_bounds(bounds, "supertraits", true);
ForeignItemKind::Static(_, _, body) => {
self.check_foreign_kind_bodyless(fi.ident, "static", body.as_ref().map(|b| b.span));
}
- ForeignItemKind::Const(..) | ForeignItemKind::Macro(..) => {}
+ ForeignItemKind::MacCall(..) => {}
}
visit::walk_foreign_item(self, fi)
}) = fk.header()
{
self.err_handler()
- .struct_span_err(span, "functions cannot be both `const` and `async`")
+ .struct_span_err(
+ vec![*cspan, *aspan],
+ "functions cannot be both `const` and `async`",
+ )
.span_label(*cspan, "`const` because of this")
.span_label(*aspan, "`async` because of this")
+ .span_label(span, "") // Point at the fn header.
.emit();
}
include => external_doc
cfg => doc_cfg
masked => doc_masked
- spotlight => doc_spotlight
alias => doc_alias
keyword => doc_keyword
);
}
}
- ast::ItemKind::Impl { polarity, defaultness, .. } => {
- if polarity == ast::ImplPolarity::Negative {
+ ast::ItemKind::Impl { polarity, defaultness, ref of_trait, .. } => {
+ if let ast::ImplPolarity::Negative(span) = polarity {
gate_feature_post!(
&self,
optin_builtin_traits,
- i.span,
+ span.to(of_trait.as_ref().map(|t| t.path.span).unwrap_or(span)),
"negative trait bounds are not yet fully implemented; \
- use marker types for now"
+ use marker types for now"
);
}
gate_feature_post!(&self, trait_alias, i.span, "trait aliases are experimental");
}
- ast::ItemKind::MacroDef(ast::MacroDef { legacy: false, .. }) => {
+ ast::ItemKind::MacroDef(ast::MacroDef { macro_rules: false, .. }) => {
let msg = "`macro` is experimental";
gate_feature_post!(&self, decl_macro, i.span, msg);
}
ast::ForeignItemKind::TyAlias(..) => {
gate_feature_post!(&self, extern_types, i.span, "extern types are experimental");
}
- ast::ForeignItemKind::Macro(..) | ast::ForeignItemKind::Const(..) => {}
+ ast::ForeignItemKind::MacCall(..) => {}
}
visit::walk_foreign_item(self, i)
}
fn visit_assoc_item(&mut self, i: &'a ast::AssocItem, ctxt: AssocCtxt) {
- if let ast::Defaultness::Default(_) = i.kind.defaultness() {
- gate_feature_post!(&self, specialization, i.span, "specialization is unstable");
- }
-
- match i.kind {
+ let is_fn = match i.kind {
ast::AssocItemKind::Fn(_, ref sig, _, _) => {
if let (ast::Const::Yes(_), AssocCtxt::Trait) = (sig.header.constness, ctxt) {
gate_feature_post!(&self, const_fn, i.span, "const fn is unstable");
}
+ true
}
ast::AssocItemKind::TyAlias(_, ref generics, _, ref ty) => {
if let (Some(_), AssocCtxt::Trait) = (ty, ctxt) {
self.check_impl_trait(ty);
}
self.check_gat(generics, i.span);
+ false
}
- _ => {}
+ _ => false,
+ };
+ if let ast::Defaultness::Default(_) = i.kind.defaultness() {
+ // Limit `min_specialization` to only specializing functions.
+ gate_feature_fn!(
+ &self,
+ |x: &Features| x.specialization || (is_fn && x.min_specialization),
+ i.span,
+ sym::specialization,
+ "specialization is unstable"
+ );
}
visit::walk_assoc_item(self, i, ctxt)
}
+#![feature(bindings_after_at)]
//! The `rustc_ast_passes` crate contains passes which validate the AST in `syntax`
//! parsed by `rustc_parse` and then lowered, after the passes in this crate,
//! by `rustc_ast_lowering`.
self.count += 1;
walk_lifetime(self, lifetime)
}
- fn visit_mac(&mut self, _mac: &Mac) {
+ fn visit_mac(&mut self, _mac: &MacCall) {
self.count += 1;
walk_mac(self, _mac)
}
visit::walk_ty(self, t);
}
- fn visit_mac(&mut self, mac: &'a ast::Mac) {
+ fn visit_mac(&mut self, mac: &'a ast::MacCall) {
visit::walk_mac(self, mac);
}
}
// This makes comma-separated lists look slightly nicer,
// and also addresses a specific regression described in issue #63896.
-fn tt_prepend_space(tt: &TokenTree) -> bool {
+fn tt_prepend_space(tt: &TokenTree, prev: &TokenTree) -> bool {
match tt {
TokenTree::Token(token) => match token.kind {
token::Comma => false,
_ => true,
},
+ TokenTree::Delimited(_, DelimToken::Paren, _) => match prev {
+ TokenTree::Token(token) => match token.kind {
+ token::Ident(_, _) => false,
+ _ => true,
+ },
+ _ => true,
+ },
_ => true,
}
}
}
fn print_tts(&mut self, tts: tokenstream::TokenStream, convert_dollar_crate: bool) {
- for (i, tt) in tts.into_trees().enumerate() {
- if i != 0 && tt_prepend_space(&tt) {
+ let mut iter = tts.into_trees().peekable();
+ while let Some(tt) = iter.next() {
+ let show_space =
+ if let Some(next) = iter.peek() { tt_prepend_space(next, &tt) } else { false };
+ self.print_tt(tt, convert_dollar_crate);
+ if show_space {
self.space();
}
- self.print_tt(tt, convert_dollar_crate);
}
}
ast::TyKind::ImplicitSelf => {
self.s.word("Self");
}
- ast::TyKind::Mac(ref m) => {
+ ast::TyKind::MacCall(ref m) => {
self.print_mac(m);
}
ast::TyKind::CVarArgs => {
}
crate fn print_foreign_item(&mut self, item: &ast::ForeignItem) {
- let ast::Item { id, span, ident, attrs, kind, vis, tokens: _ } = item;
- self.print_nested_item_kind(*id, *span, *ident, attrs, kind, vis);
- }
-
- fn print_nested_item_kind(
- &mut self,
- id: ast::NodeId,
- span: Span,
- ident: ast::Ident,
- attrs: &[Attribute],
- kind: &ast::AssocItemKind,
- vis: &ast::Visibility,
- ) {
+ let ast::Item { id, span, ident, ref attrs, ref kind, ref vis, tokens: _ } = *item;
self.ann.pre(self, AnnNode::SubItem(id));
self.hardbreak_if_not_bol();
self.maybe_print_comment(span.lo());
ast::ForeignItemKind::Fn(def, sig, gen, body) => {
self.print_fn_full(sig, ident, gen, vis, *def, body.as_deref(), attrs);
}
- ast::ForeignItemKind::Const(def, ty, body) => {
- self.print_item_const(ident, None, ty, body.as_deref(), vis, *def);
- }
ast::ForeignItemKind::Static(ty, mutbl, body) => {
let def = ast::Defaultness::Final;
self.print_item_const(ident, Some(*mutbl), ty, body.as_deref(), vis, def);
ast::ForeignItemKind::TyAlias(def, generics, bounds, ty) => {
self.print_associated_type(ident, generics, bounds, ty.as_deref(), vis, *def);
}
- ast::ForeignItemKind::Macro(m) => {
+ ast::ForeignItemKind::MacCall(m) => {
self.print_mac(m);
if m.args.need_semicolon() {
self.s.word(";");
self.s.space();
}
- if polarity == ast::ImplPolarity::Negative {
+ if let ast::ImplPolarity::Negative(_) = polarity {
self.s.word("!");
}
self.print_where_clause(&generics.where_clause);
self.s.word(";");
}
- ast::ItemKind::Mac(ref mac) => {
+ ast::ItemKind::MacCall(ref mac) => {
self.print_mac(mac);
if mac.args.need_semicolon() {
self.s.word(";");
}
}
ast::ItemKind::MacroDef(ref macro_def) => {
- let (kw, has_bang) = if macro_def.legacy {
+ let (kw, has_bang) = if macro_def.macro_rules {
("macro_rules", true)
} else {
self.print_visibility(&item.vis);
}
crate fn print_assoc_item(&mut self, item: &ast::AssocItem) {
- let ast::Item { id, span, ident, attrs, kind, vis, tokens: _ } = item;
- self.print_nested_item_kind(*id, *span, *ident, attrs, kind, vis);
+ let ast::Item { id, span, ident, ref attrs, ref kind, ref vis, tokens: _ } = *item;
+ self.ann.pre(self, AnnNode::SubItem(id));
+ self.hardbreak_if_not_bol();
+ self.maybe_print_comment(span.lo());
+ self.print_outer_attributes(attrs);
+ match kind {
+ ast::AssocItemKind::Fn(def, sig, gen, body) => {
+ self.print_fn_full(sig, ident, gen, vis, *def, body.as_deref(), attrs);
+ }
+ ast::AssocItemKind::Const(def, ty, body) => {
+ self.print_item_const(ident, None, ty, body.as_deref(), vis, *def);
+ }
+ ast::AssocItemKind::TyAlias(def, generics, bounds, ty) => {
+ self.print_associated_type(ident, generics, bounds, ty.as_deref(), vis, *def);
+ }
+ ast::AssocItemKind::MacCall(m) => {
+ self.print_mac(m);
+ if m.args.need_semicolon() {
+ self.s.word(";");
+ }
+ }
+ }
+ self.ann.post(self, AnnNode::SubItem(id))
}
crate fn print_stmt(&mut self, st: &ast::Stmt) {
}
}
ast::StmtKind::Semi(ref expr) => {
- match expr.kind {
- // Filter out empty `Tup` exprs created for the `redundant_semicolon`
- // lint, as they shouldn't be visible and interact poorly
- // with proc macros.
- ast::ExprKind::Tup(ref exprs) if exprs.is_empty() && expr.attrs.is_empty() => {
- ()
- }
- _ => {
- self.space_if_not_bol();
- self.print_expr_outer_attr_style(expr, false);
- self.s.word(";");
- }
- }
+ self.space_if_not_bol();
+ self.print_expr_outer_attr_style(expr, false);
+ self.s.word(";");
+ }
+ ast::StmtKind::Empty => {
+ self.space_if_not_bol();
+ self.s.word(";");
}
- ast::StmtKind::Mac(ref mac) => {
+ ast::StmtKind::MacCall(ref mac) => {
let (ref mac, style, ref attrs) = **mac;
self.space_if_not_bol();
self.print_outer_attributes(attrs);
self.print_else(elseopt)
}
- crate fn print_mac(&mut self, m: &ast::Mac) {
+ crate fn print_mac(&mut self, m: &ast::MacCall) {
self.print_mac_common(
Some(MacHeader::Path(&m.path)),
true,
self.pclose();
}
- ast::ExprKind::Mac(ref m) => self.print_mac(m),
+ ast::ExprKind::MacCall(ref m) => self.print_mac(m),
ast::ExprKind::Paren(ref e) => {
self.popen();
self.print_inner_attributes_inline(attrs);
self.print_pat(inner);
self.pclose();
}
- PatKind::Mac(ref m) => self.print_mac(m),
+ PatKind::MacCall(ref m) => self.print_mac(m),
}
self.ann.post(self, AnnNode::Pat(pat))
}
pub fn find_transparency(
attrs: &[Attribute],
- is_legacy: bool,
+ macro_rules: bool,
) -> (Transparency, Option<TransparencyError>) {
let mut transparency = None;
let mut error = None;
}
}
}
- let fallback = if is_legacy { Transparency::SemiTransparent } else { Transparency::Opaque };
+ let fallback = if macro_rules { Transparency::SemiTransparent } else { Transparency::Opaque };
(transparency.map_or(fallback, |t| t.0), error)
}
))
});
let args = P(MacArgs::Delimited(DelimSpan::from_single(sp), MacDelimiter::Parenthesis, tokens));
- let panic_call = Mac {
+ let panic_call = MacCall {
path: Path::from_ident(Ident::new(sym::panic, sp)),
args,
prior_type_ascription: None,
let if_expr = cx.expr_if(
sp,
cx.expr(sp, ExprKind::Unary(UnOp::Not, cond_expr)),
- cx.expr(sp, ExprKind::Mac(panic_call)),
+ cx.expr(sp, ExprKind::MacCall(panic_call)),
None,
);
MacEager::expr(if_expr)
// my_function();
// );
//
- // Warn about semicolon and suggest removing it. Eventually, this should be turned into an
- // error.
+ // Emit an error about semicolon and suggest removing it.
if parser.token == token::Semi {
- let mut err = cx.struct_span_warn(sp, "macro requires an expression as an argument");
+ let mut err = cx.struct_span_err(sp, "macro requires an expression as an argument");
err.span_suggestion(
parser.token.span,
"try removing semicolon",
String::new(),
Applicability::MaybeIncorrect,
);
- err.note("this is going to be an error in the future");
err.emit();
parser.bump();
//
// assert!(true "error message");
//
- // Parse this as an actual message, and suggest inserting a comma. Eventually, this should be
- // turned into an error.
+ // Emit an error and suggest inserting a comma.
let custom_message =
if let token::Literal(token::Lit { kind: token::Str, .. }) = parser.token.kind {
- let mut err = cx.struct_span_warn(parser.token.span, "unexpected string literal");
+ let mut err = cx.struct_span_err(parser.token.span, "unexpected string literal");
let comma_span = parser.prev_token.span.shrink_to_hi();
err.span_suggestion_short(
comma_span,
", ".to_string(),
Applicability::MaybeIncorrect,
);
- err.note("this is going to be an error in the future");
err.emit();
parse_custom_message(&mut parser)
--- /dev/null
+//! Implementation of the `#[cfg_accessible(path)]` attribute macro.
+
+use rustc_ast::ast;
+use rustc_expand::base::{Annotatable, ExpandResult, ExtCtxt, MultiItemModifier};
+use rustc_feature::AttributeTemplate;
+use rustc_parse::validate_attr;
+use rustc_span::symbol::sym;
+use rustc_span::Span;
+
+crate struct Expander;
+
+fn validate_input<'a>(ecx: &mut ExtCtxt<'_>, mi: &'a ast::MetaItem) -> Option<&'a ast::Path> {
+ match mi.meta_item_list() {
+ None => {}
+ Some([]) => ecx.span_err(mi.span, "`cfg_accessible` path is not specified"),
+ Some([_, .., l]) => ecx.span_err(l.span(), "multiple `cfg_accessible` paths are specified"),
+ Some([nmi]) => match nmi.meta_item() {
+ None => ecx.span_err(nmi.span(), "`cfg_accessible` path cannot be a literal"),
+ Some(mi) => {
+ if !mi.is_word() {
+ ecx.span_err(mi.span, "`cfg_accessible` path cannot accept arguments");
+ }
+ return Some(&mi.path);
+ }
+ },
+ }
+ None
+}
+
+impl MultiItemModifier for Expander {
+ fn expand(
+ &self,
+ ecx: &mut ExtCtxt<'_>,
+ _span: Span,
+ meta_item: &ast::MetaItem,
+ item: Annotatable,
+ ) -> ExpandResult<Vec<Annotatable>, Annotatable> {
+ let template = AttributeTemplate { list: Some("path"), ..Default::default() };
+ let attr = &ecx.attribute(meta_item.clone());
+ validate_attr::check_builtin_attribute(ecx.parse_sess, attr, sym::cfg_accessible, template);
+
+ let path = match validate_input(ecx, meta_item) {
+ Some(path) => path,
+ None => return ExpandResult::Ready(Vec::new()),
+ };
+
+ let failure_msg = "cannot determine whether the path is accessible or not";
+ match ecx.resolver.cfg_accessible(ecx.current_expansion.id, path) {
+ Ok(true) => ExpandResult::Ready(vec![item]),
+ Ok(false) => ExpandResult::Ready(Vec::new()),
+ Err(_) => ExpandResult::Retry(item, failure_msg.into()),
+ }
+ }
+}
visit::walk_ty(self, ty)
}
- fn visit_mac(&mut self, mac: &ast::Mac) {
+ fn visit_mac(&mut self, mac: &ast::MacCall) {
self.cx.span_err(mac.span(), "`derive` cannot be used on items with type macros");
}
}
})
.cloned(),
);
- push(Annotatable::Item(P(ast::Item { attrs: attrs, ..(*newitem).clone() })))
+ push(Annotatable::Item(P(ast::Item { attrs, ..(*newitem).clone() })))
}
_ => {
// Non-Item derive is an error, but it should have been
use rustc_ast::ast::{self, ItemKind, MetaItem};
use rustc_ast::ptr::P;
-use rustc_expand::base::{Annotatable, ExtCtxt, MultiItemModifier};
+use rustc_expand::base::{Annotatable, ExpandResult, ExtCtxt, MultiItemModifier};
use rustc_span::symbol::{sym, Symbol};
use rustc_span::Span;
span: Span,
meta_item: &MetaItem,
item: Annotatable,
- ) -> Vec<Annotatable> {
+ ) -> ExpandResult<Vec<Annotatable>, Annotatable> {
// FIXME: Built-in derives often forget to give spans contexts,
// so we are doing it here in a centralized way.
let span = ecx.with_def_site_ctxt(span);
let mut items = Vec::new();
(self.0)(ecx, span, meta_item, &item, &mut |a| items.push(a));
- items
+ ExpandResult::Ready(items)
}
}
if p.token == token::Eof {
break;
} // accept trailing commas
- if p.token.is_ident() && p.look_ahead(1, |t| *t == token::Eq) {
- named = true;
- let name = if let token::Ident(name, _) = p.normalized_token.kind {
+ match p.token.ident() {
+ Some((ident, _)) if p.look_ahead(1, |t| *t == token::Eq) => {
+ named = true;
p.bump();
- name
- } else {
- unreachable!();
- };
+ p.expect(&token::Eq)?;
+ let e = p.parse_expr()?;
+ if let Some(prev) = names.get(&ident.name) {
+ ecx.struct_span_err(e.span, &format!("duplicate argument named `{}`", ident))
+ .span_label(args[*prev].span, "previously here")
+ .span_label(e.span, "duplicate argument")
+ .emit();
+ continue;
+ }
- p.expect(&token::Eq)?;
- let e = p.parse_expr()?;
- if let Some(prev) = names.get(&name) {
- ecx.struct_span_err(e.span, &format!("duplicate argument named `{}`", name))
- .span_label(args[*prev].span, "previously here")
- .span_label(e.span, "duplicate argument")
- .emit();
- continue;
+ // Resolve names into slots early.
+ // Since all the positional args are already seen at this point
+ // if the input is valid, we can simply append to the positional
+ // args. And remember the names.
+ let slot = args.len();
+ names.insert(ident.name, slot);
+ args.push(e);
}
-
- // Resolve names into slots early.
- // Since all the positional args are already seen at this point
- // if the input is valid, we can simply append to the positional
- // args. And remember the names.
- let slot = args.len();
- names.insert(name, slot);
- args.push(e);
- } else {
- let e = p.parse_expr()?;
- if named {
- let mut err = ecx
- .struct_span_err(e.span, "positional arguments cannot follow named arguments");
- err.span_label(e.span, "positional arguments must be before named arguments");
- for (_, pos) in &names {
- err.span_label(args[*pos].span, "named argument");
+ _ => {
+ let e = p.parse_expr()?;
+ if named {
+ let mut err = ecx.struct_span_err(
+ e.span,
+ "positional arguments cannot follow named arguments",
+ );
+ err.span_label(e.span, "positional arguments must be before named arguments");
+ for pos in names.values() {
+ err.span_label(args[*pos].span, "named argument");
+ }
+ err.emit();
}
- err.emit();
+ args.push(e);
}
- args.push(e);
}
}
Ok((fmtstr, args, names))
err.tool_only_span_suggestion(
sp,
&format!("use the `{}` trait", name),
- fmt.to_string(),
+ (*fmt).to_string(),
Applicability::MaybeIncorrect,
);
}
match ty {
Placeholder(_) => {
// record every (position, type) combination only once
- let ref mut seen_ty = self.arg_unique_types[arg];
+ let seen_ty = &mut self.arg_unique_types[arg];
let i = seen_ty.iter().position(|x| *x == ty).unwrap_or_else(|| {
let i = seen_ty.len();
seen_ty.push(ty);
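The `seen_ty` update above is the usual "find-or-insert index" idiom: look up the position of an element, appending it when absent. A standalone sketch (names invented for illustration):

```rust
// Return the index of `val` in `seen`, appending it first if it has
// not been recorded yet -- the same idiom the placeholder code uses to
// record every (position, type) combination only once.
fn find_or_insert<T: PartialEq>(seen: &mut Vec<T>, val: T) -> usize {
    seen.iter().position(|x| *x == val).unwrap_or_else(|| {
        seen.push(val);
        seen.len() - 1
    })
}

fn main() {
    let mut seen = Vec::new();
    assert_eq!(find_or_insert(&mut seen, "Display"), 0);
    assert_eq!(find_or_insert(&mut seen, "Debug"), 1);
    // A repeated value returns its existing index without growing the vec.
    assert_eq!(find_or_insert(&mut seen, "Display"), 0);
    assert_eq!(seen.len(), 2);
}
```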
// Map the arguments
for i in 0..args_len {
- let ref arg_types = self.arg_types[i];
+ let arg_types = &self.arg_types[i];
let arg_offsets = arg_types.iter().map(|offset| sofar + *offset).collect::<Vec<_>>();
self.arg_index_map.push(arg_offsets);
sofar += self.arg_unique_types[i].len();
let arg_idx = match arg_index_consumed.get_mut(i) {
None => 0, // error already emitted elsewhere
Some(offset) => {
- let ref idx_map = self.arg_index_map[i];
+ let idx_map = &self.arg_index_map[i];
// unwrap_or branch: error already emitted elsewhere
let arg_idx = *idx_map.get(*offset).unwrap_or(&0);
*offset += 1;
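The index-mapping loop above flattens each argument's per-argument type offsets into global positions by carrying a running prefix sum (`sofar`) of how many unique types earlier arguments contributed. A minimal sketch with invented names:

```rust
// `per_arg_offsets[i]` lists offsets local to argument `i`;
// `unique_counts[i]` is how many unique entries that argument owns.
// Flatten local offsets into global indices with a running total.
fn flatten_offsets(
    per_arg_offsets: &[Vec<usize>],
    unique_counts: &[usize],
) -> Vec<Vec<usize>> {
    let mut sofar = 0;
    let mut map: Vec<Vec<usize>> = Vec::new();
    for (offsets, &count) in per_arg_offsets.iter().zip(unique_counts) {
        map.push(offsets.iter().map(|o| sofar + o).collect());
        sofar += count;
    }
    map
}

fn main() {
    // Arg 0 has one unique type, arg 1 has two: the second argument's
    // local offsets [0, 1] become global indices [1, 2].
    let map = flatten_offsets(&[vec![0], vec![0, 1]], &[1, 2]);
    assert_eq!(map, vec![vec![0], vec![1, 2]]);
}
```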
let name = names_pos[i];
let span = self.ecx.with_def_site_ctxt(e.span);
pats.push(self.ecx.pat_ident(span, name));
- for ref arg_ty in self.arg_unique_types[i].iter() {
+ for arg_ty in self.arg_unique_types[i].iter() {
locals.push(Context::format_arg(self.ecx, self.macsp, e.span, arg_ty, name));
}
heads.push(self.ecx.expr_addr_of(e.span, e));
};
/// Finds the indices of all characters that have been processed and differ between the actual
- /// written code (code snippet) and the `InternedString` that get's processed in the `Parser`
+ /// written code (code snippet) and the `InternedString` that gets processed in the `Parser`
/// in order to properly synthesise the intra-string `Span`s for error diagnostics.
fn find_skips(snippet: &str, is_raw: bool) -> Vec<usize> {
let mut eat_ws = false;
fn allocator_fn(&self, method: &AllocatorMethod) -> Stmt {
let mut abi_args = Vec::new();
let mut i = 0;
- let ref mut mk = || {
+ let mut mk = || {
let name = self.cx.ident_of(&format!("arg{}", i), self.span);
i += 1;
name
};
- let args = method.inputs.iter().map(|ty| self.arg_ty(ty, &mut abi_args, mk)).collect();
+ let args = method.inputs.iter().map(|ty| self.arg_ty(ty, &mut abi_args, &mut mk)).collect();
let result = self.call_allocator(method.name, args);
let (output_ty, output_expr) = self.ret_ty(&method.output, result);
let decl = self.cx.fn_decl(abi_args, ast::FnRetTy::Ty(output_ty));
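The change from `let ref mut mk = || …` to `let mut mk = || …` with `&mut mk` at the call site is the idiomatic way to hand a stateful closure to a helper while keeping ownership in the caller. A standalone sketch (all names invented):

```rust
// A stateful name generator passed by `&mut` so the caller keeps
// ownership and can reuse the closure across calls.
fn take_names(n: usize, mk: &mut impl FnMut() -> String) -> Vec<String> {
    (0..n).map(|_| mk()).collect()
}

fn main() {
    let mut i = 0;
    let mut mk = || {
        let name = format!("arg{}", i);
        i += 1;
        name
    };
    assert_eq!(take_names(2, &mut mk), vec!["arg0", "arg1"]);
    // The closure's counter persists across calls.
    assert_eq!(take_names(1, &mut mk), vec!["arg2"]);
}
```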
mod asm;
mod assert;
mod cfg;
+mod cfg_accessible;
mod compile_error;
mod concat;
mod concat_idents;
register_attr! {
bench: test::expand_bench,
+ cfg_accessible: cfg_accessible::Expander,
global_allocator: global_allocator::expand,
test: test::expand_test,
test_case: test::expand_test_case,
handler: &rustc_errors::Handler,
) -> ast::Crate {
let ecfg = ExpansionConfig::default("proc_macro".to_string());
- let mut cx = ExtCtxt::new(sess, ecfg, resolver);
+ let mut cx = ExtCtxt::new(sess, ecfg, resolver, None);
let mut collect = CollectProcMacros {
macros: Vec::new(),
self.in_root = prev_in_root;
}
- fn visit_mac(&mut self, mac: &'a ast::Mac) {
+ fn visit_mac(&mut self, mac: &'a ast::MacCall) {
visit::walk_mac(self, mac)
}
}
use rustc_ast_pretty::pprust;
use rustc_expand::base::{self, *};
use rustc_expand::panictry;
-use rustc_parse::{self, new_sub_parser_from_file, parser::Parser, DirectoryOwnership};
+use rustc_parse::{self, new_sub_parser_from_file, parser::Parser};
use rustc_session::lint::builtin::INCOMPLETE_INCLUDE;
use rustc_span::symbol::Symbol;
use rustc_span::{self, Pos, Span};
return DummyResult::any(sp);
}
};
- let directory_ownership = DirectoryOwnership::Owned { relative: None };
- let p = new_sub_parser_from_file(cx.parse_sess(), &file, directory_ownership, None, sp);
+ let p = new_sub_parser_from_file(cx.parse_sess(), &file, None, sp);
struct ExpandResult<'a> {
p: Parser<'a>,
let call_site = DUMMY_SP.with_call_site_ctxt(expn_id);
let ecfg = ExpansionConfig::default("std_lib_injection".to_string());
- let cx = ExtCtxt::new(sess, ecfg, resolver);
+ let cx = ExtCtxt::new(sess, ecfg, resolver, None);
// .rev() to preserve ordering above in combination with insert(0, ...)
for &name in names.iter().rev() {
.raise();
};
- if let ast::ItemKind::Mac(_) = item.kind {
+ if let ast::ItemKind::MacCall(_) = item.kind {
cx.parse_sess.span_diagnostic.span_warn(
item.span,
"`#[test]` attribute should not be used on macros. Use `#[cfg(test)]` instead.",
fn should_panic(cx: &ExtCtxt<'_>, i: &ast::Item) -> ShouldPanic {
match attr::find_by_name(&i.attrs, sym::should_panic) {
Some(attr) => {
- let ref sd = cx.parse_sess.span_diagnostic;
+ let sd = &cx.parse_sess.span_diagnostic;
match attr.meta_item_list() {
// Handle #[should_panic(expected = "foo")]
fn has_test_signature(cx: &ExtCtxt<'_>, i: &ast::Item) -> bool {
let has_should_panic_attr = attr::contains_name(&i.attrs, sym::should_panic);
- let ref sd = cx.parse_sess.span_diagnostic;
+ let sd = &cx.parse_sess.span_diagnostic;
if let ast::ItemKind::Fn(_, ref sig, ref generics, _) = i.kind {
if let ast::Unsafe::Yes(span) = sig.header.unsafety {
sd.struct_span_err(i.span, "unsafe functions cannot be used for tests")
smallvec![P(item)]
}
- fn visit_mac(&mut self, _mac: &mut ast::Mac) {
+ fn visit_mac(&mut self, _mac: &mut ast::MacCall) {
// Do nothing.
}
}
smallvec![item]
}
- fn visit_mac(&mut self, _mac: &mut ast::Mac) {
+ fn visit_mac(&mut self, _mac: &mut ast::MacCall) {
// Do nothing.
}
}
let mut econfig = ExpansionConfig::default("test".to_string());
econfig.features = Some(features);
- let ext_cx = ExtCtxt::new(sess, econfig, resolver);
+ let ext_cx = ExtCtxt::new(sess, econfig, resolver, None);
let expn_id = ext_cx.resolver.expansion_for_ast_pass(
DUMMY_SP,
/// &[&test1, &test2]
fn mk_tests_slice(cx: &TestCtxt<'_>, sp: Span) -> P<ast::Expr> {
debug!("building test vector from {} tests", cx.test_cases.len());
- let ref ecx = cx.ext_cx;
+ let ecx = &cx.ext_cx;
ecx.expr_vec_slice(
sp,
pub fn check_builtin_macro_attribute(ecx: &ExtCtxt<'_>, meta_item: &MetaItem, name: Symbol) {
// All the built-in macro attributes are "words" at the moment.
- let template = AttributeTemplate::only_word();
+ let template = AttributeTemplate { word: true, ..Default::default() };
let attr = ecx.attribute(meta_item.clone());
validate_attr::check_builtin_attribute(ecx.parse_sess, &attr, name, template);
}
and then from LLVM IR into machine code. In general it contains code
that runs towards the end of the compilation process.
-For more information about how codegen works, see the [rustc guide].
+For more information about how codegen works, see the [rustc dev guide].
-[rustc guide]: https://rust-lang.github.io/rustc-guide/codegen.html
+[rustc dev guide]: https://rustc-dev-guide.rust-lang.org/codegen.html
.prefix
.iter()
.flat_map(|option_kind| {
- option_kind.map(|kind| Reg { kind: kind, size: self.prefix_chunk }.llvm_type(cx))
+ option_kind.map(|kind| Reg { kind, size: self.prefix_chunk }.llvm_type(cx))
})
.chain((0..rest_count).map(|_| rest_ll_unit))
.collect();
-use std::ffi::CString;
-
use crate::attributes;
use libc::c_uint;
use rustc::bug;
args.len() as c_uint,
False,
);
- let name = CString::new(format!("__rust_{}", method.name)).unwrap();
- let llfn = llvm::LLVMRustGetOrInsertFunction(llmod, name.as_ptr(), ty);
+ let name = format!("__rust_{}", method.name);
+ let llfn = llvm::LLVMRustGetOrInsertFunction(llmod, name.as_ptr().cast(), name.len(), ty);
if tcx.sess.target.target.options.default_hidden_visibility {
llvm::LLVMRustSetVisibility(llfn, llvm::Visibility::Hidden);
attributes::emit_uwtable(llfn, true);
}
- let callee = CString::new(kind.fn_name(method.name)).unwrap();
- let callee = llvm::LLVMRustGetOrInsertFunction(llmod, callee.as_ptr(), ty);
+ let callee = kind.fn_name(method.name);
+ let callee =
+ llvm::LLVMRustGetOrInsertFunction(llmod, callee.as_ptr().cast(), callee.len(), ty);
llvm::LLVMRustSetVisibility(callee, llvm::Visibility::Hidden);
let llbb = llvm::LLVMAppendBasicBlockInContext(llcx, llfn, "entry\0".as_ptr().cast());
.enumerate()
.map(|(i, _)| llvm::LLVMGetParam(llfn, i as c_uint))
.collect::<Vec<_>>();
- let ret = llvm::LLVMRustBuildCall(
- llbuilder,
- callee,
- args.as_ptr(),
- args.len() as c_uint,
- None,
- "\0".as_ptr().cast(),
- );
+ let ret =
+ llvm::LLVMRustBuildCall(llbuilder, callee, args.as_ptr(), args.len() as c_uint, None);
llvm::LLVMSetTailCall(ret, True);
if output.is_some() {
llvm::LLVMBuildRet(llbuilder, ret);
use libc::{c_char, c_uint};
use log::debug;
-use std::ffi::{CStr, CString};
impl AsmBuilderMethods<'tcx> for Builder<'a, 'll, 'tcx> {
fn codegen_inline_asm(
let mut indirect_outputs = vec![];
for (i, (out, &place)) in ia.outputs.iter().zip(&outputs).enumerate() {
if out.is_rw {
- inputs.push(self.load_operand(place).immediate());
+ let operand = self.load_operand(place);
+ if let OperandValue::Immediate(_) = operand.val {
+ inputs.push(operand.immediate());
+ }
ext_constraints.push(i.to_string());
}
if out.is_indirect {
- indirect_outputs.push(self.load_operand(place).immediate());
+ let operand = self.load_operand(place);
+ if let OperandValue::Immediate(_) = operand.val {
+ indirect_outputs.push(operand.immediate());
+ }
} else {
output_types.push(place.layout.llvm_type(self.cx()));
}
.chain(ia.inputs.iter().map(|s| s.to_string()))
.chain(ext_constraints)
.chain(clobbers)
- .chain(arch_clobbers.iter().map(|s| s.to_string()))
+ .chain(arch_clobbers.iter().map(|s| (*s).to_string()))
.collect::<Vec<String>>()
.join(",");
_ => self.type_struct(&output_types, false),
};
- let asm = CString::new(ia.asm.as_str().as_bytes()).unwrap();
- let constraint_cstr = CString::new(all_constraints).unwrap();
+ let asm = ia.asm.as_str();
let r = inline_asm_call(
self,
&asm,
- &constraint_cstr,
+ &all_constraints,
&inputs,
output_type,
ia.volatile,
impl AsmMethods for CodegenCx<'ll, 'tcx> {
fn codegen_global_asm(&self, ga: &hir::GlobalAsm) {
- let asm = CString::new(ga.asm.as_str().as_bytes()).unwrap();
+ let asm = ga.asm.as_str();
unsafe {
- llvm::LLVMRustAppendModuleInlineAsm(self.llmod, asm.as_ptr());
+ llvm::LLVMRustAppendModuleInlineAsm(self.llmod, asm.as_ptr().cast(), asm.len());
}
}
}
fn inline_asm_call(
bx: &mut Builder<'a, 'll, 'tcx>,
- asm: &CStr,
- cons: &CStr,
+ asm: &str,
+ cons: &str,
inputs: &[&'ll Value],
output: &'ll llvm::Type,
volatile: bool,
let fty = bx.cx.type_func(&argtys[..], output);
unsafe {
// Ask LLVM to verify that the constraints are well-formed.
- let constraints_ok = llvm::LLVMRustInlineAsmVerify(fty, cons.as_ptr());
+ let constraints_ok = llvm::LLVMRustInlineAsmVerify(fty, cons.as_ptr().cast(), cons.len());
debug!("constraint verification result: {:?}", constraints_ok);
if constraints_ok {
let v = llvm::LLVMRustInlineAsm(
fty,
- asm.as_ptr(),
- cons.as_ptr(),
+ asm.as_ptr().cast(),
+ asm.len(),
+ cons.as_ptr().cast(),
+ cons.len(),
volatile,
alignstack,
llvm::AsmDialect::from_generic(dia),
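Several hunks above replace `CString` round-trips with passing the string's raw pointer plus an explicit length across the FFI boundary, which avoids an allocation and the failure mode of interior NUL bytes. A minimal illustration of the pattern; `pretend_c_api` is a made-up stand-in for the C side, not a real LLVM binding:

```rust
// Stand-in for a C function that takes (ptr, len) instead of a
// NUL-terminated string; here it just reconstructs the bytes. The
// LLVMRust* wrappers in the diff take bytes and length the same way,
// so strings containing '\0' no longer need CString::new (which would
// return an error for them).
unsafe fn pretend_c_api(ptr: *const u8, len: usize) -> String {
    let bytes = std::slice::from_raw_parts(ptr, len);
    String::from_utf8_lossy(bytes).into_owned()
}

fn main() {
    let asm = "nop\0nop"; // interior NUL: CString::new would fail here
    let roundtrip = unsafe { pretend_c_api(asm.as_ptr(), asm.len()) };
    assert_eq!(roundtrip, "nop\0nop");
}
```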
use std::ffi::CString;
use rustc::middle::codegen_fn_attrs::CodegenFnAttrFlags;
-use rustc::session::config::{OptLevel, Sanitizer};
-use rustc::session::Session;
use rustc::ty::layout::HasTyCtxt;
use rustc::ty::query::Providers;
use rustc::ty::{self, Ty, TyCtxt};
use rustc_data_structures::fx::FxHashMap;
use rustc_data_structures::small_c_str::SmallCStr;
use rustc_hir::def_id::{DefId, LOCAL_CRATE};
+use rustc_session::config::{OptLevel, Sanitizer};
+use rustc_session::Session;
use rustc_target::abi::call::Conv;
use rustc_target::spec::PanicStrategy;
use crate::llvm::archive_ro::{ArchiveRO, Child};
use crate::llvm::{self, ArchiveKind};
-use rustc::session::Session;
use rustc_codegen_ssa::back::archive::{find_library, ArchiveBuilder};
use rustc_codegen_ssa::{looks_like_rust_object_file, METADATA_FILENAME, RLIB_BYTECODE_EXTENSION};
+use rustc_session::Session;
use rustc_span::symbol::Symbol;
struct ArchiveConfig<'a> {
use rustc::bug;
use rustc::dep_graph::WorkProduct;
use rustc::middle::exported_symbols::SymbolExportLevel;
-use rustc::session::config::{self, Lto};
use rustc_codegen_ssa::back::lto::{LtoModuleCodegen, SerializedModule, ThinModule, ThinShared};
use rustc_codegen_ssa::back::symbol_export;
use rustc_codegen_ssa::back::write::{CodegenContext, FatLTOInput, ModuleConfig};
use rustc_errors::{FatalError, Handler};
use rustc_hir::def_id::LOCAL_CRATE;
use rustc_session::cgu_reuse_tracker::CguReuse;
+use rustc_session::config::{self, Lto};
use std::ffi::{CStr, CString};
use std::fs::File;
//
// This strategy means we can always save the computed imports as
// canon: when we reuse the post-ThinLTO version, condition (3.)
- // ensures that the curent import set is the same as the previous
+ // ensures that the current import set is the same as the previous
// one. (And of course, when we don't reuse the post-ThinLTO
// version, the current import set *is* the correct one, since we
// are doing the ThinLTO in this current compilation cycle.)
}));
}
- // Save the curent ThinLTO import information for the next compilation
+ // Save the current ThinLTO import information for the next compilation
// session, overwriting the previous serialized imports (if any).
if let Some(path) = import_map_path {
if let Err(err) = curr_import_map.save_to_file(&path) {
let mut components = vec![StringComponent::Ref(pass_name)];
// handle that LazyCallGraph::SCC is a comma separated list within parentheses
let parentheses: &[_] = &['(', ')'];
- let trimed = ir_name.trim_matches(parentheses);
- for part in trimed.split(", ") {
+ let trimmed = ir_name.trim_matches(parentheses);
+ for part in trimmed.split(", ") {
let demangled_ir_name = rustc_demangle::demangle(part).to_string();
let ir_name = profiler.get_or_alloc_cached_string(demangled_ir_name);
components.push(StringComponent::Value(SEPARATOR_BYTE));
use crate::ModuleLlvm;
use log::debug;
use rustc::bug;
-use rustc::session::config::{self, Lto, OutputType, Passes, Sanitizer, SwitchWithOptPath};
-use rustc::session::Session;
use rustc::ty::TyCtxt;
use rustc_codegen_ssa::back::write::{run_assembler, CodegenContext, ModuleConfig};
use rustc_codegen_ssa::traits::*;
use rustc_errors::{FatalError, Handler};
use rustc_fs_util::{link_or_copy, path_to_c_string};
use rustc_hir::def_id::LOCAL_CRATE;
+use rustc_session::config::{self, Lto, OutputType, Passes, Sanitizer, SwitchWithOptPath};
+use rustc_session::Session;
use libc::{c_char, c_int, c_uint, c_void, size_t};
use std::ffi::CString;
Err(_) => return 0,
};
- if let Err(_) = write!(cursor, "{:#}", demangled) {
+ if write!(cursor, "{:#}", demangled).is_err() {
// Possible only if provided buffer is not big enough
return 0;
}
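The `write!(…).is_err()` form above replaces `if let Err(_) = …`, which reads more directly when only success or failure matters. A small self-contained demonstration using a fixed-size buffer (the helper name is invented):

```rust
use std::io::{Cursor, Write};

// Returns whether `s` fits into a buffer of `buf_len` bytes. Writing
// past the end of a Cursor over a fixed slice makes `write!` return
// Err, and `.is_ok()`/`.is_err()` expresses the check without an
// `if let Err(_)` dance.
fn fits(buf_len: usize, s: &str) -> bool {
    let mut buf = vec![0u8; buf_len];
    let mut cursor = Cursor::new(&mut buf[..]);
    write!(cursor, "{}", s).is_ok()
}

fn main() {
    assert!(fits(16, "ok"));
    assert!(!fits(4, "toolong")); // buffer too small: write! returns Err
}
```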
use rustc::middle::cstore::EncodedMetadata;
use rustc::middle::exported_symbols;
use rustc::mir::mono::{Linkage, Visibility};
-use rustc::session::config::DebugInfo;
use rustc::ty::TyCtxt;
use rustc_codegen_ssa::base::maybe_create_entry_wrapper;
use rustc_codegen_ssa::mono_item::MonoItemExt;
use rustc_codegen_ssa::traits::*;
use rustc_codegen_ssa::{ModuleCodegen, ModuleKind};
use rustc_data_structures::small_c_str::SmallCStr;
+use rustc_session::config::DebugInfo;
use rustc_span::symbol::Symbol;
use std::ffi::CString;
use crate::value::Value;
use libc::{c_char, c_uint};
use log::debug;
-use rustc::session::config::{self, Sanitizer};
use rustc::ty::layout::{self, Align, Size, TyLayout};
use rustc::ty::{self, Ty, TyCtxt};
use rustc_codegen_ssa::base::to_immediate;
use rustc_data_structures::const_cstr;
use rustc_data_structures::small_c_str::SmallCStr;
use rustc_hir::def_id::DefId;
+use rustc_session::config::{self, Sanitizer};
use rustc_target::spec::{HasTargetSpec, Target};
use std::borrow::Cow;
use std::ffi::CStr;
args.as_ptr() as *const &llvm::Value,
args.len() as c_uint,
bundle,
- UNNAMED,
)
}
}
let base_addr = match alloc_kind {
Some(GlobalAlloc::Memory(alloc)) => {
let init = const_alloc_to_llvm(self, alloc);
- if alloc.mutability == Mutability::Mut {
- self.static_addr_of_mut(init, alloc.align, None)
- } else {
- self.static_addr_of(init, alloc.align, None)
+ let value = match alloc.mutability {
+ Mutability::Mut => self.static_addr_of_mut(init, alloc.align, None),
+ _ => self.static_addr_of(init, alloc.align, None),
+ };
+ if !self.sess().fewer_names() {
+ llvm::set_value_name(value, format!("{:?}", ptr.alloc_id).as_bytes());
}
+ value
}
Some(GlobalAlloc::Function(fn_instance)) => self.get_fn_addr(fn_instance),
Some(GlobalAlloc::Static(def_id)) => {
}
}
-pub fn val_ty(v: &'ll Value) -> &'ll Type {
+pub fn val_ty(v: &Value) -> &Type {
unsafe { llvm::LLVMTypeOf(v) }
}
((hi as u128) << 64) | (lo as u128)
}
-fn try_as_const_integral(v: &'ll Value) -> Option<&'ll ConstantInt> {
+fn try_as_const_integral(v: &Value) -> Option<&ConstantInt> {
unsafe { llvm::LLVMIsAConstantInt(v) }
}
-use crate::abi::FnAbi;
use crate::attributes;
+use crate::callee::get_fn;
use crate::debuginfo;
use crate::llvm;
use crate::llvm_util;
-use crate::value::Value;
-use rustc::dep_graph::DepGraphSafe;
-
use crate::type_::Type;
-use rustc_codegen_ssa::traits::*;
+use crate::value::Value;
-use crate::callee::get_fn;
use rustc::bug;
+use rustc::dep_graph::DepGraphSafe;
use rustc::mir::mono::CodegenUnit;
-use rustc::session::config::{self, CFGuard, DebugInfo};
-use rustc::session::Session;
use rustc::ty::layout::{
- FnAbiExt, HasParamEnv, LayoutError, LayoutOf, PointeeInfo, Size, TyLayout, VariantIdx,
+ HasParamEnv, LayoutError, LayoutOf, PointeeInfo, Size, TyLayout, VariantIdx,
};
use rustc::ty::{self, Instance, Ty, TyCtxt};
use rustc_codegen_ssa::base::wants_msvc_seh;
+use rustc_codegen_ssa::traits::*;
use rustc_data_structures::base_n;
use rustc_data_structures::const_cstr;
use rustc_data_structures::fx::FxHashMap;
use rustc_data_structures::small_c_str::SmallCStr;
-use rustc_hir::Unsafety;
-use rustc_target::spec::{HasTargetSpec, Target};
-
-use crate::abi::Abi;
+use rustc_session::config::{self, CFGuard, DebugInfo};
+use rustc_session::Session;
use rustc_span::source_map::{Span, DUMMY_SP};
use rustc_span::symbol::Symbol;
+use rustc_target::spec::{HasTargetSpec, Target};
+
use std::cell::{Cell, RefCell};
use std::ffi::CStr;
-use std::iter;
use std::str;
use std::sync::Arc;
pub dbg_cx: Option<debuginfo::CrateDebugContext<'ll, 'tcx>>,
eh_personality: Cell<Option<&'ll Value>>,
- eh_unwind_resume: Cell<Option<&'ll Value>>,
pub rust_try_fn: Cell<Option<&'ll Value>>,
intrinsics: RefCell<FxHashMap<&'static str, &'ll Value>>,
let llvm_data_layout = llvm::LLVMGetDataLayout(llmod);
let llvm_data_layout = str::from_utf8(CStr::from_ptr(llvm_data_layout).to_bytes())
- .ok()
.expect("got a non-UTF8 data-layout from LLVM");
// Unfortunately LLVM target specs change over time, and right now we
isize_ty,
dbg_cx,
eh_personality: Cell::new(None),
- eh_unwind_resume: Cell::new(None),
rust_try_fn: Cell::new(None),
intrinsics: Default::default(),
local_gen_sym_counter: Cell::new(0),
llfn
}
- // Returns a Value of the "eh_unwind_resume" lang item if one is defined,
- // otherwise declares it as an external function.
- fn eh_unwind_resume(&self) -> &'ll Value {
- let unwresume = &self.eh_unwind_resume;
- if let Some(llfn) = unwresume.get() {
- return llfn;
- }
-
- let tcx = self.tcx;
- assert!(self.sess().target.target.options.custom_unwind_resume);
- if let Some(def_id) = tcx.lang_items().eh_unwind_resume() {
- let llfn = self.get_fn_addr(
- ty::Instance::resolve(
- tcx,
- ty::ParamEnv::reveal_all(),
- def_id,
- tcx.intern_substs(&[]),
- )
- .unwrap(),
- );
- unwresume.set(Some(llfn));
- return llfn;
- }
-
- let sig = ty::Binder::bind(tcx.mk_fn_sig(
- iter::once(tcx.mk_mut_ptr(tcx.types.u8)),
- tcx.types.never,
- false,
- Unsafety::Unsafe,
- Abi::C,
- ));
-
- let fn_abi = FnAbi::of_fn_ptr(self, sig, &[]);
- let llfn = self.declare_fn("rust_eh_unwind_resume", &fn_abi);
- attributes::apply_target_cpu_attr(self, llfn);
- unwresume.set(Some(llfn));
- llfn
- }
-
fn sess(&self) -> &Session {
&self.tcx.sess
}
-use super::metadata::file_metadata;
-use super::utils::{span_start, DIB};
+use super::metadata::{file_metadata, UNKNOWN_COLUMN_NUMBER, UNKNOWN_LINE_NUMBER};
+use super::utils::DIB;
use rustc_codegen_ssa::mir::debuginfo::{DebugScope, FunctionDebugContext};
use crate::common::CodegenCx;
use crate::llvm::debuginfo::{DIScope, DISubprogram};
use rustc::mir::{Body, SourceScope};
-use libc::c_uint;
-
-use rustc_span::Pos;
-
use rustc_index::bit_set::BitSet;
use rustc_index::vec::Idx;
debug_context.scopes[parent]
} else {
// The root is the function itself.
- let loc = span_start(cx, mir.span);
+ let loc = cx.lookup_debug_loc(mir.span.lo());
debug_context.scopes[scope] = DebugScope {
scope_metadata: Some(fn_metadata),
file_start_pos: loc.file.start_pos,
return;
}
- let loc = span_start(cx, scope_data.span);
+ let loc = cx.lookup_debug_loc(scope_data.span.lo());
let file_metadata = file_metadata(cx, &loc.file.name, debug_context.defining_crate);
let scope_metadata = unsafe {
DIB(cx),
parent_scope.scope_metadata.unwrap(),
file_metadata,
- loc.line as c_uint,
- loc.col.to_usize() as c_uint,
+ loc.line.unwrap_or(UNKNOWN_LINE_NUMBER),
+ loc.col.unwrap_or(UNKNOWN_COLUMN_NUMBER),
))
};
debug_context.scopes[scope] = DebugScope {
use crate::common::CodegenCx;
use crate::value::Value;
use rustc::bug;
-use rustc::session::config::DebugInfo;
use rustc_codegen_ssa::traits::*;
+use rustc_session::config::DebugInfo;
use rustc_ast::attr;
use rustc_span::symbol::sym;
use super::namespace::mangled_name_of_instance;
use super::type_names::compute_debuginfo_type_name;
use super::utils::{
- create_DIArray, debug_context, get_namespace_for_item, is_node_local_to_unit, span_start, DIB,
+ create_DIArray, debug_context, get_namespace_for_item, is_node_local_to_unit, DIB,
};
use super::CrateDebugContext;
use rustc::middle::codegen_fn_attrs::CodegenFnAttrFlags;
use rustc::mir::interpret::truncate;
use rustc::mir::{self, Field, GeneratorLayout};
-use rustc::session::config::{self, DebugInfo};
use rustc::ty::layout::{
self, Align, Integer, IntegerExt, LayoutOf, PrimitiveExt, Size, TyLayout, VariantIdx,
};
use rustc_data_structures::const_cstr;
use rustc_data_structures::fingerprint::Fingerprint;
use rustc_data_structures::fx::FxHashMap;
-use rustc_data_structures::small_c_str::SmallCStr;
use rustc_data_structures::stable_hasher::{HashStable, StableHasher};
use rustc_fs_util::path_to_c_string;
use rustc_hir::def::CtorKind;
use rustc_hir::def_id::{CrateNum, DefId, LOCAL_CRATE};
use rustc_index::vec::{Idx, IndexVec};
+use rustc_session::config::{self, DebugInfo};
use rustc_span::symbol::{Interner, Symbol};
use rustc_span::{self, FileName, Span};
use rustc_target::abi::HasDataLayout;
use libc::{c_longlong, c_uint};
use std::collections::hash_map::Entry;
-use std::ffi::CString;
use std::fmt::{self, Write};
use std::hash::{Hash, Hasher};
use std::iter;
/// Gets the unique type ID string for an enum variant part.
/// Variant parts are not types and shouldn't really have their own ID,
/// but it makes `set_members_of_composite_type()` simpler.
- fn get_unique_type_id_str_of_enum_variant_part(&mut self, enum_type_id: UniqueTypeId) -> &str {
- let variant_part_type_id =
- format!("{}_variant_part", self.get_unique_type_id_as_string(enum_type_id));
- let interner_key = self.unique_id_interner.intern(&variant_part_type_id);
- self.unique_id_interner.get(interner_key)
+ fn get_unique_type_id_str_of_enum_variant_part(
+ &mut self,
+ enum_type_id: UniqueTypeId,
+ ) -> String {
+ format!("{}_variant_part", self.get_unique_type_id_as_string(enum_type_id))
}
}
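Returning an owned `String` from `get_unique_type_id_str_of_enum_variant_part`, rather than a `&str` borrowed out of the interner, means callers no longer hold a borrow of the type map across later mutating calls. A toy sketch of the owned-return pattern (the `TypeMap` here is invented, not the real debuginfo type):

```rust
// Toy type map: because the helper returns an owned String rather
// than a reference into `ids`, the caller can keep the result alive
// while continuing to mutate the map -- the reason the debuginfo code
// switched the return type from `&str` to `String`.
struct TypeMap {
    ids: Vec<String>,
}

impl TypeMap {
    fn variant_part_id(&mut self, idx: usize) -> String {
        format!("{}_variant_part", self.ids[idx])
    }
}

fn main() {
    let mut map = TypeMap { ids: vec!["Enum1".to_string()] };
    let id = map.variant_part_id(0);
    // `map` is free to be mutated again while `id` is still alive.
    map.ids.push("Enum2".to_string());
    assert_eq!(id, "Enum1_variant_part");
}
```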
unsafe {
// The choice of type here is pretty arbitrary -
// anything reading the debuginfo for a recursive
- // type is going to see *somthing* weird - the only
+ // type is going to see *something* weird - the only
// question is what exactly it will see.
let (size, align) = cx.size_and_align_of(t);
+ let name = "<recur_type>";
llvm::LLVMRustDIBuilderCreateBasicType(
DIB(cx),
- SmallCStr::new("<recur_type>").as_ptr(),
+ name.as_ptr().cast(),
+ name.len(),
size.bits(),
align.bits() as u32,
DW_ATE_unsigned,
let (file_name, directory) = v.key();
debug!("file_metadata: file_name: {:?}, directory: {:?}", file_name, directory);
- let file_name = SmallCStr::new(if let Some(file_name) = file_name {
- &file_name
- } else {
- "<unknown>"
- });
- let directory =
- SmallCStr::new(if let Some(directory) = directory { &directory } else { "" });
+ let file_name = file_name.as_deref().unwrap_or("<unknown>");
+ let directory = directory.as_deref().unwrap_or("");
let file_metadata = unsafe {
- llvm::LLVMRustDIBuilderCreateFile(DIB(cx), file_name.as_ptr(), directory.as_ptr())
+ llvm::LLVMRustDIBuilderCreateFile(
+ DIB(cx),
+ file_name.as_ptr().cast(),
+ file_name.len(),
+ directory.as_ptr().cast(),
+ directory.len(),
+ )
};
v.insert(file_metadata);
};
let (size, align) = cx.size_and_align_of(t);
- let name = SmallCStr::new(name);
let ty_metadata = unsafe {
llvm::LLVMRustDIBuilderCreateBasicType(
DIB(cx),
- name.as_ptr(),
+ name.as_ptr().cast(),
+ name.len(),
size.bits(),
align.bits() as u32,
encoding,
) -> &'ll DIType {
let (pointer_size, pointer_align) = cx.size_and_align_of(pointer_type);
let name = compute_debuginfo_type_name(cx.tcx, pointer_type, false);
- let name = SmallCStr::new(&name);
unsafe {
llvm::LLVMRustDIBuilderCreatePointerType(
DIB(cx),
pointee_type_metadata,
pointer_size.bits(),
pointer_align.bits() as u32,
- name.as_ptr(),
+ 0, // Ignore DWARF address space.
+ name.as_ptr().cast(),
+ name.len(),
)
}
}
let producer = format!("clang LLVM ({})", rustc_producer);
let name_in_debuginfo = name_in_debuginfo.to_string_lossy();
- let name_in_debuginfo = SmallCStr::new(&name_in_debuginfo);
- let work_dir = SmallCStr::new(&tcx.sess.working_dir.0.to_string_lossy());
- let producer = CString::new(producer).unwrap();
+ let work_dir = tcx.sess.working_dir.0.to_string_lossy();
let flags = "\0";
- let split_name = "\0";
+ let split_name = "";
// FIXME(#60020):
//
unsafe {
let file_metadata = llvm::LLVMRustDIBuilderCreateFile(
debug_context.builder,
- name_in_debuginfo.as_ptr(),
- work_dir.as_ptr(),
+ name_in_debuginfo.as_ptr().cast(),
+ name_in_debuginfo.len(),
+ work_dir.as_ptr().cast(),
+ work_dir.len(),
);
let unit_metadata = llvm::LLVMRustDIBuilderCreateCompileUnit(
debug_context.builder,
DW_LANG_RUST,
file_metadata,
- producer.as_ptr(),
+ producer.as_ptr().cast(),
+ producer.len(),
tcx.sess.opts.optimize != config::OptLevel::No,
flags.as_ptr().cast(),
0,
split_name.as_ptr().cast(),
+ split_name.len(),
kind,
);
cx: &CodegenCx<'ll, '_>,
composite_type_metadata: &'ll DIScope,
) -> &'ll DIType {
- let member_name = CString::new(self.name).unwrap();
unsafe {
llvm::LLVMRustDIBuilderCreateVariantMemberType(
DIB(cx),
composite_type_metadata,
- member_name.as_ptr(),
+ self.name.as_ptr().cast(),
+ self.name.len(),
unknown_file_metadata(cx),
UNKNOWN_LINE_NUMBER,
self.size.bits(),
.discriminants(cx.tcx)
.zip(&def.variants)
.map(|((_, discr), v)| {
- let name = SmallCStr::new(&v.ident.as_str());
+ let name = v.ident.as_str();
+ let is_unsigned = match discr.ty.kind {
+ ty::Int(_) => false,
+ ty::Uint(_) => true,
+ _ => bug!("non integer discriminant"),
+ };
unsafe {
Some(llvm::LLVMRustDIBuilderCreateEnumerator(
DIB(cx),
- name.as_ptr(),
+ name.as_ptr().cast(),
+ name.len(),
// FIXME: what if enumeration has i128 discriminant?
- discr.val as u64,
+ discr.val as i64,
+ is_unsigned,
))
}
})
.as_generator()
.variant_range(enum_def_id, cx.tcx)
.map(|variant_index| {
- let name = SmallCStr::new(&substs.as_generator().variant_name(variant_index));
+ let name = substs.as_generator().variant_name(variant_index);
unsafe {
Some(llvm::LLVMRustDIBuilderCreateEnumerator(
DIB(cx),
- name.as_ptr(),
- // FIXME: what if enumeration has i128 discriminant?
- variant_index.as_usize() as u64,
+ name.as_ptr().cast(),
+ name.len(),
+ // Generators use u32 as discriminant type.
+ variant_index.as_u32().into(),
+ true, // IsUnsigned
))
}
})
let discriminant_base_type_metadata =
type_metadata(cx, discr.to_ty(cx.tcx), rustc_span::DUMMY_SP);
+ let item_name;
let discriminant_name = match enum_type.kind {
- ty::Adt(..) => SmallCStr::new(&cx.tcx.item_name(enum_def_id).as_str()),
- ty::Generator(..) => SmallCStr::new(&enum_name),
+ ty::Adt(..) => {
+ item_name = cx.tcx.item_name(enum_def_id).as_str();
+ &*item_name
+ }
+ ty::Generator(..) => enum_name.as_str(),
_ => bug!(),
};
llvm::LLVMRustDIBuilderCreateEnumerationType(
DIB(cx),
containing_scope,
- discriminant_name.as_ptr(),
+ discriminant_name.as_ptr().cast(),
+ discriminant_name.len(),
file_metadata,
UNKNOWN_LINE_NUMBER,
discriminant_size.bits(),
_ => {}
}
- let enum_name = SmallCStr::new(&enum_name);
- let unique_type_id_str = SmallCStr::new(
- debug_context(cx).type_map.borrow().get_unique_type_id_as_string(unique_type_id),
- );
-
if use_enum_fallback(cx) {
let discriminant_type_metadata = match layout.variants {
layout::Variants::Single { .. }
} => Some(discriminant_type_metadata(discr.value)),
};
- let enum_metadata = unsafe {
- llvm::LLVMRustDIBuilderCreateUnionType(
- DIB(cx),
- containing_scope,
- enum_name.as_ptr(),
- file_metadata,
- UNKNOWN_LINE_NUMBER,
- layout.size.bits(),
- layout.align.abi.bits() as u32,
- DIFlags::FlagZero,
- None,
- 0, // RuntimeLang
- unique_type_id_str.as_ptr(),
- )
+ let enum_metadata = {
+ let type_map = debug_context(cx).type_map.borrow();
+ let unique_type_id_str = type_map.get_unique_type_id_as_string(unique_type_id);
+
+ unsafe {
+ llvm::LLVMRustDIBuilderCreateUnionType(
+ DIB(cx),
+ containing_scope,
+ enum_name.as_ptr().cast(),
+ enum_name.len(),
+ file_metadata,
+ UNKNOWN_LINE_NUMBER,
+ layout.size.bits(),
+ layout.align.abi.bits() as u32,
+ DIFlags::FlagZero,
+ None,
+ 0, // RuntimeLang
+ unique_type_id_str.as_ptr().cast(),
+ unique_type_id_str.len(),
+ )
+ }
};
return create_and_register_recursive_type_forward_declaration(
}
let discriminator_name = match &enum_type.kind {
- ty::Generator(..) => Some(SmallCStr::new(&"__state")),
- _ => None,
+ ty::Generator(..) => "__state",
+ _ => "",
};
- let discriminator_name = discriminator_name.map(|n| n.as_ptr()).unwrap_or(ptr::null_mut());
let discriminator_metadata = match layout.variants {
// A single-variant enum has no discriminant.
layout::Variants::Single { .. } => None,
Some(llvm::LLVMRustDIBuilderCreateMemberType(
DIB(cx),
containing_scope,
- discriminator_name,
+ discriminator_name.as_ptr().cast(),
+ discriminator_name.len(),
file_metadata,
UNKNOWN_LINE_NUMBER,
size.bits(),
Some(llvm::LLVMRustDIBuilderCreateMemberType(
DIB(cx),
containing_scope,
- discriminator_name,
+ discriminator_name.as_ptr().cast(),
+ discriminator_name.len(),
file_metadata,
UNKNOWN_LINE_NUMBER,
size.bits(),
}
};
- let variant_part_unique_type_id_str = SmallCStr::new(
- debug_context(cx)
- .type_map
- .borrow_mut()
- .get_unique_type_id_str_of_enum_variant_part(unique_type_id),
- );
+ let variant_part_unique_type_id_str = debug_context(cx)
+ .type_map
+ .borrow_mut()
+ .get_unique_type_id_str_of_enum_variant_part(unique_type_id);
let empty_array = create_DIArray(DIB(cx), &[]);
+ let name = "";
let variant_part = unsafe {
llvm::LLVMRustDIBuilderCreateVariantPart(
DIB(cx),
containing_scope,
- ptr::null_mut(),
+ name.as_ptr().cast(),
+ name.len(),
file_metadata,
UNKNOWN_LINE_NUMBER,
layout.size.bits(),
DIFlags::FlagZero,
discriminator_metadata,
empty_array,
- variant_part_unique_type_id_str.as_ptr(),
+ variant_part_unique_type_id_str.as_ptr().cast(),
+ variant_part_unique_type_id_str.len(),
)
};
outer_fields.push(Some(variant_part));
- // The variant part must be wrapped in a struct according to DWARF.
- let type_array = create_DIArray(DIB(cx), &outer_fields);
- let struct_wrapper = unsafe {
- llvm::LLVMRustDIBuilderCreateStructType(
- DIB(cx),
- Some(containing_scope),
- enum_name.as_ptr(),
- file_metadata,
- UNKNOWN_LINE_NUMBER,
- layout.size.bits(),
- layout.align.abi.bits() as u32,
- DIFlags::FlagZero,
- None,
- type_array,
- 0,
- None,
- unique_type_id_str.as_ptr(),
- )
+ let struct_wrapper = {
+ // The variant part must be wrapped in a struct according to DWARF.
+ let type_array = create_DIArray(DIB(cx), &outer_fields);
+
+ let type_map = debug_context(cx).type_map.borrow();
+ let unique_type_id_str = type_map.get_unique_type_id_as_string(unique_type_id);
+
+ unsafe {
+ llvm::LLVMRustDIBuilderCreateStructType(
+ DIB(cx),
+ Some(containing_scope),
+ enum_name.as_ptr().cast(),
+ enum_name.len(),
+ file_metadata,
+ UNKNOWN_LINE_NUMBER,
+ layout.size.bits(),
+ layout.align.abi.bits() as u32,
+ DIFlags::FlagZero,
+ None,
+ type_array,
+ 0,
+ None,
+ unique_type_id_str.as_ptr().cast(),
+ unique_type_id_str.len(),
+ )
+ }
};
return create_and_register_recursive_type_forward_declaration(
/// Computes the type parameters for a type, if any, for the given metadata.
fn compute_type_parameters(cx: &CodegenCx<'ll, 'tcx>, ty: Ty<'tcx>) -> Option<&'ll DIArray> {
if let ty::Adt(def, substs) = ty.kind {
- if !substs.types().next().is_none() {
+ if substs.types().next().is_some() {
let generics = cx.tcx.generics_of(def.did);
let names = get_parameter_names(cx, generics);
let template_params: Vec<_> = substs
cx.tcx.normalize_erasing_regions(ParamEnv::reveal_all(), ty);
let actual_type_metadata =
type_metadata(cx, actual_type, rustc_span::DUMMY_SP);
- let name = SmallCStr::new(&name.as_str());
+ let name = &name.as_str();
Some(unsafe {
Some(llvm::LLVMRustDIBuilderCreateTemplateTypeParameter(
DIB(cx),
None,
- name.as_ptr(),
+ name.as_ptr().cast(),
+ name.len(),
actual_type_metadata,
unknown_file_metadata(cx),
0,
) -> &'ll DICompositeType {
let (struct_size, struct_align) = cx.size_and_align_of(struct_type);
- let name = SmallCStr::new(struct_type_name);
- let unique_type_id = SmallCStr::new(
- debug_context(cx).type_map.borrow().get_unique_type_id_as_string(unique_type_id),
- );
+ let type_map = debug_context(cx).type_map.borrow();
+ let unique_type_id = type_map.get_unique_type_id_as_string(unique_type_id);
+
let metadata_stub = unsafe {
// `LLVMRustDIBuilderCreateStructType()` wants an empty array. A null
// pointer will lead to hard to trace and debug LLVM assertions
llvm::LLVMRustDIBuilderCreateStructType(
DIB(cx),
containing_scope,
- name.as_ptr(),
+ struct_type_name.as_ptr().cast(),
+ struct_type_name.len(),
unknown_file_metadata(cx),
UNKNOWN_LINE_NUMBER,
struct_size.bits(),
empty_array,
0,
None,
- unique_type_id.as_ptr(),
+ unique_type_id.as_ptr().cast(),
+ unique_type_id.len(),
)
};
) -> &'ll DICompositeType {
let (union_size, union_align) = cx.size_and_align_of(union_type);
- let name = SmallCStr::new(union_type_name);
- let unique_type_id = SmallCStr::new(
- debug_context(cx).type_map.borrow().get_unique_type_id_as_string(unique_type_id),
- );
+ let type_map = debug_context(cx).type_map.borrow();
+ let unique_type_id = type_map.get_unique_type_id_as_string(unique_type_id);
+
let metadata_stub = unsafe {
// `LLVMRustDIBuilderCreateUnionType()` wants an empty array. A null
// pointer will lead to hard to trace and debug LLVM assertions
llvm::LLVMRustDIBuilderCreateUnionType(
DIB(cx),
containing_scope,
- name.as_ptr(),
+ union_type_name.as_ptr().cast(),
+ union_type_name.len(),
unknown_file_metadata(cx),
UNKNOWN_LINE_NUMBER,
union_size.bits(),
DIFlags::FlagZero,
Some(empty_array),
0, // RuntimeLang
- unique_type_id.as_ptr(),
+ unique_type_id.as_ptr().cast(),
+ unique_type_id.len(),
)
};
let tcx = cx.tcx;
let attrs = tcx.codegen_fn_attrs(def_id);
- if attrs.flags.contains(CodegenFnAttrFlags::NO_DEBUG) {
- return;
- }
-
let no_mangle = attrs.flags.contains(CodegenFnAttrFlags::NO_MANGLE);
// We may want to remove the namespace scope if we're in an extern block (see
// https://github.com/rust-lang/rust/pull/46457#issuecomment-351750952).
let span = tcx.def_span(def_id);
let (file_metadata, line_number) = if !span.is_dummy() {
- let loc = span_start(cx, span);
- (file_metadata(cx, &loc.file.name, LOCAL_CRATE), loc.line as c_uint)
+ let loc = cx.lookup_debug_loc(span.lo());
+ (file_metadata(cx, &loc.file.name, LOCAL_CRATE), loc.line)
} else {
- (unknown_file_metadata(cx), UNKNOWN_LINE_NUMBER)
+ (unknown_file_metadata(cx), None)
};
let is_local_to_unit = is_node_local_to_unit(cx, def_id);
let variable_type = Instance::mono(cx.tcx, def_id).monomorphic_ty(cx.tcx);
let type_metadata = type_metadata(cx, variable_type, span);
- let var_name = SmallCStr::new(&tcx.item_name(def_id).as_str());
+ let var_name = tcx.item_name(def_id).as_str();
let linkage_name = if no_mangle {
None
} else {
- let linkage_name = mangled_name_of_instance(cx, Instance::mono(tcx, def_id));
- Some(SmallCStr::new(&linkage_name.name.as_str()))
+ Some(mangled_name_of_instance(cx, Instance::mono(tcx, def_id)).name.as_str())
};
+    // When empty, the linkage_name field is omitted,
+    // which is what we want for `no_mangle` statics.
+ let linkage_name = linkage_name.as_deref().unwrap_or("");
let global_align = cx.align_of(variable_type);
llvm::LLVMRustDIBuilderCreateStaticVariable(
DIB(cx),
Some(var_scope),
- var_name.as_ptr(),
- // If null, linkage_name field is omitted,
- // which is what we want for no_mangle statics
- linkage_name.as_ref().map_or(ptr::null(), |name| name.as_ptr()),
+ var_name.as_ptr().cast(),
+ var_name.len(),
+ linkage_name.as_ptr().cast(),
+ linkage_name.len(),
file_metadata,
- line_number,
+ line_number.unwrap_or(UNKNOWN_LINE_NUMBER),
type_metadata,
is_local_to_unit,
global,
// pointer will lead to hard to trace and debug LLVM assertions
// later on in `llvm/lib/IR/Value.cpp`.
let empty_array = create_DIArray(DIB(cx), &[]);
-
- let name = const_cstr!("vtable");
+ let name = "vtable";
// Create a new one each time. We don't want metadata caching
// here, because each vtable will refer to a unique containing
let vtable_type = llvm::LLVMRustDIBuilderCreateStructType(
DIB(cx),
NO_SCOPE_METADATA,
- name.as_ptr(),
+ name.as_ptr().cast(),
+ name.len(),
unknown_file_metadata(cx),
UNKNOWN_LINE_NUMBER,
Size::ZERO.bits(),
empty_array,
0,
Some(type_metadata),
- name.as_ptr(),
+ name.as_ptr().cast(),
+ name.len(),
);
+ let linkage_name = "";
llvm::LLVMRustDIBuilderCreateStaticVariable(
DIB(cx),
NO_SCOPE_METADATA,
- name.as_ptr(),
- ptr::null(),
+ name.as_ptr().cast(),
+ name.len(),
+ linkage_name.as_ptr().cast(),
+ linkage_name.len(),
unknown_file_metadata(cx),
UNKNOWN_LINE_NUMBER,
vtable_type,
use rustc_codegen_ssa::mir::debuginfo::VariableKind::*;
-use self::metadata::{file_metadata, type_metadata, TypeMap};
+use self::metadata::{file_metadata, type_metadata, TypeMap, UNKNOWN_LINE_NUMBER};
use self::namespace::mangled_name_of_instance;
use self::type_names::compute_debuginfo_type_name;
-use self::utils::{create_DIArray, is_node_local_to_unit, span_start, DIB};
+use self::utils::{create_DIArray, is_node_local_to_unit, DIB};
use crate::llvm;
use crate::llvm::debuginfo::{
DIArray, DIBuilder, DIFile, DIFlags, DILexicalBlock, DISPFlags, DIScope, DIType, DIVariable,
};
-use rustc::middle::codegen_fn_attrs::CodegenFnAttrFlags;
use rustc::ty::subst::{GenericArgKind, SubstsRef};
use rustc_hir::def_id::{CrateNum, DefId, DefIdMap, LOCAL_CRATE};
use crate::common::CodegenCx;
use crate::value::Value;
use rustc::mir;
-use rustc::session::config::{self, DebugInfo};
-use rustc::ty::{self, Instance, InstanceDef, ParamEnv, Ty};
+use rustc::ty::{self, Instance, ParamEnv, Ty};
use rustc_codegen_ssa::debuginfo::type_names;
use rustc_codegen_ssa::mir::debuginfo::{DebugScope, FunctionDebugContext, VariableKind};
use rustc_data_structures::fx::{FxHashMap, FxHashSet};
-use rustc_data_structures::small_c_str::SmallCStr;
use rustc_index::vec::IndexVec;
+use rustc_session::config::{self, DebugInfo};
use libc::c_uint;
use log::debug;
use std::cell::RefCell;
-use std::ffi::CString;
use rustc::ty::layout::{self, HasTyCtxt, LayoutOf, Size};
use rustc_ast::ast;
return None;
}
- if let InstanceDef::Item(def_id) = instance.def {
- if self.tcx().codegen_fn_attrs(def_id).flags.contains(CodegenFnAttrFlags::NO_DEBUG) {
- return None;
- }
- }
-
let span = mir.span;
// This can be the case for functions inlined from another crate
let def_id = instance.def_id();
let containing_scope = get_containing_scope(self, instance);
- let loc = span_start(self, span);
+ let loc = self.lookup_debug_loc(span.lo());
let file_metadata = file_metadata(self, &loc.file.name, def_id.krate);
let function_type_metadata = unsafe {
// Get the linkage_name, which is just the symbol name
let linkage_name = mangled_name_of_instance(self, instance);
+ let linkage_name = linkage_name.name.as_str();
// FIXME(eddyb) does this need to be separate from `loc.line` for some reason?
let scope_line = loc.line;
- let function_name = CString::new(name).unwrap();
- let linkage_name = SmallCStr::new(&linkage_name.name.as_str());
-
let mut flags = DIFlags::FlagPrototyped;
if fn_abi.ret.layout.abi.is_uninhabited() {
llvm::LLVMRustDIBuilderCreateFunction(
DIB(self),
containing_scope,
- function_name.as_ptr(),
- linkage_name.as_ptr(),
+ name.as_ptr().cast(),
+ name.len(),
+ linkage_name.as_ptr().cast(),
+ linkage_name.len(),
file_metadata,
- loc.line as c_uint,
+ loc.line.unwrap_or(UNKNOWN_LINE_NUMBER),
function_type_metadata,
- scope_line as c_uint,
+ scope_line.unwrap_or(UNKNOWN_LINE_NUMBER),
flags,
spflags,
llfn,
cx.tcx.normalize_erasing_regions(ParamEnv::reveal_all(), ty);
let actual_type_metadata =
type_metadata(cx, actual_type, rustc_span::DUMMY_SP);
- let name = SmallCStr::new(&name.as_str());
+ let name = name.as_str();
Some(unsafe {
Some(llvm::LLVMRustDIBuilderCreateTemplateTypeParameter(
DIB(cx),
None,
- name.as_ptr(),
+ name.as_ptr().cast(),
+ name.len(),
actual_type_metadata,
file_metadata,
0,
variable_kind: VariableKind,
span: Span,
) -> &'ll DIVariable {
- let loc = span_start(self, span);
+ let loc = self.lookup_debug_loc(span.lo());
let file_metadata = file_metadata(self, &loc.file.name, dbg_context.defining_crate);
let type_metadata = type_metadata(self, variable_type, span);
};
let align = self.align_of(variable_type);
- let name = SmallCStr::new(&variable_name.as_str());
+ let name = variable_name.as_str();
unsafe {
llvm::LLVMRustDIBuilderCreateVariable(
DIB(self),
dwarf_tag,
scope_metadata,
- name.as_ptr(),
+ name.as_ptr().cast(),
+ name.len(),
file_metadata,
- loc.line as c_uint,
+ loc.line.unwrap_or(UNKNOWN_LINE_NUMBER),
type_metadata,
true,
DIFlags::FlagZero,
// Namespace Handling.
-use super::metadata::{unknown_file_metadata, UNKNOWN_LINE_NUMBER};
use super::utils::{debug_context, DIB};
use rustc::ty::{self, Instance};
use rustc::hir::map::DefPathData;
use rustc_hir::def_id::DefId;
-use rustc_data_structures::small_c_str::SmallCStr;
-
pub fn mangled_name_of_instance<'a, 'tcx>(
cx: &CodegenCx<'a, 'tcx>,
instance: Instance<'tcx>,
DefPathData::CrateRoot => cx.tcx.crate_name(def_id.krate),
data => data.as_symbol(),
};
-
- let namespace_name = SmallCStr::new(&namespace_name.as_str());
+ let namespace_name = namespace_name.as_str();
let scope = unsafe {
llvm::LLVMRustDIBuilderCreateNameSpace(
DIB(cx),
parent_scope,
- namespace_name.as_ptr(),
- unknown_file_metadata(cx),
- UNKNOWN_LINE_NUMBER,
+ namespace_name.as_ptr().cast(),
+ namespace_name.len(),
+ false, // ExportSymbols (only relevant for C++ anonymous namespaces)
)
};
-use super::metadata::UNKNOWN_COLUMN_NUMBER;
-use super::utils::{debug_context, span_start};
+use super::metadata::{UNKNOWN_COLUMN_NUMBER, UNKNOWN_LINE_NUMBER};
+use super::utils::debug_context;
use crate::common::CodegenCx;
use crate::llvm::debuginfo::DIScope;
use crate::llvm::{self, Value};
use rustc_codegen_ssa::traits::*;
-use libc::c_uint;
-use rustc_span::{Pos, Span};
+use rustc_data_structures::sync::Lrc;
+use rustc_span::{BytePos, Pos, SourceFile, SourceFileAndLine, Span};
+
+/// A source code location used to generate debug information.
+pub struct DebugLoc {
+ /// Information about the original source file.
+ pub file: Lrc<SourceFile>,
+ /// The (1-based) line number.
+ pub line: Option<u32>,
+ /// The (1-based) column number.
+ pub col: Option<u32>,
+}
impl CodegenCx<'ll, '_> {
- pub fn create_debug_loc(&self, scope: &'ll DIScope, span: Span) -> &'ll Value {
- let loc = span_start(self, span);
+ /// Looks up debug source information about a `BytePos`.
+ pub fn lookup_debug_loc(&self, pos: BytePos) -> DebugLoc {
+ let (file, line, col) = match self.sess().source_map().lookup_line(pos) {
+ Ok(SourceFileAndLine { sf: file, line }) => {
+ let line_pos = file.line_begin_pos(pos);
+
+ // Use 1-based indexing.
+ let line = (line + 1) as u32;
+ let col = (pos - line_pos).to_u32() + 1;
+
+ (file, Some(line), Some(col))
+ }
+ Err(file) => (file, None, None),
+ };
- // For MSVC, set the column number to zero.
+ // For MSVC, omit the column number.
// Otherwise, emit it. This mimics clang behaviour.
// See discussion in https://github.com/rust-lang/rust/issues/42921
- let col_used = if self.sess().target.target.options.is_like_msvc {
- UNKNOWN_COLUMN_NUMBER
+ if self.sess().target.target.options.is_like_msvc {
+ DebugLoc { file, line, col: None }
} else {
- loc.col.to_usize() as c_uint
- };
+ DebugLoc { file, line, col }
+ }
+ }
+
+ pub fn create_debug_loc(&self, scope: &'ll DIScope, span: Span) -> &'ll Value {
+ let DebugLoc { line, col, .. } = self.lookup_debug_loc(span.lo());
unsafe {
llvm::LLVMRustDIBuilderCreateDebugLocation(
debug_context(self).llcontext,
- loc.line as c_uint,
- col_used,
+ line.unwrap_or(UNKNOWN_LINE_NUMBER),
+ col.unwrap_or(UNKNOWN_COLUMN_NUMBER),
scope,
None,
)
use crate::common::CodegenCx;
use crate::llvm;
use crate::llvm::debuginfo::{DIArray, DIBuilder, DIDescriptor, DIScope};
-use rustc_codegen_ssa::traits::*;
-
-use rustc_span::Span;
pub fn is_node_local_to_unit(cx: &CodegenCx<'_, '_>, def_id: DefId) -> bool {
// The is_local_to_unit flag indicates whether a function is local to the
};
}
-/// Returns rustc_span::Loc corresponding to the beginning of the span
-pub fn span_start(cx: &CodegenCx<'_, '_>, span: Span) -> rustc_span::Loc {
- cx.sess().source_map().lookup_char_pos(span.lo())
-}
-
#[inline]
pub fn debug_context(cx: &'a CodegenCx<'ll, 'tcx>) -> &'a CrateDebugContext<'ll, 'tcx> {
cx.dbg_cx.as_ref().unwrap()
use log::debug;
use rustc::ty::Ty;
use rustc_codegen_ssa::traits::*;
-use rustc_data_structures::small_c_str::SmallCStr;
/// Declare a function.
///
ty: &'ll Type,
) -> &'ll Value {
debug!("declare_raw_fn(name={:?}, ty={:?})", name, ty);
- let namebuf = SmallCStr::new(name);
- let llfn = unsafe { llvm::LLVMRustGetOrInsertFunction(cx.llmod, namebuf.as_ptr(), ty) };
+ let llfn = unsafe {
+ llvm::LLVMRustGetOrInsertFunction(cx.llmod, name.as_ptr().cast(), name.len(), ty)
+ };
llvm::SetFunctionCallConv(llfn, callconv);
// Function addresses in Rust are never significant, allowing functions to
fn get_declared_value(&self, name: &str) -> Option<&'ll Value> {
debug!("get_declared_value(name={:?})", name);
- let namebuf = SmallCStr::new(name);
- unsafe { llvm::LLVMRustGetNamedValue(self.llmod, namebuf.as_ptr()) }
+ unsafe { llvm::LLVMRustGetNamedValue(self.llmod, name.as_ptr().cast(), name.len()) }
}
fn get_defined_value(&self, name: &str) -> Option<&'ll Value> {
.unwrap();
OperandRef::from_const(self, ty_name, ret_ty).immediate_or_packed_pair(self)
}
- "init" => {
- let ty = substs.type_at(0);
- if !self.layout_of(ty).is_zst() {
- // Just zero out the stack slot.
- // If we store a zero constant, LLVM will drown in vreg allocation for large
- // data structures, and the generated code will be awful. (A telltale sign of
- // this is large quantities of `mov [byte ptr foo],0` in the generated code.)
- memset_intrinsic(
- self,
- false,
- ty,
- llresult,
- self.const_u8(0),
- self.const_usize(1),
- );
- }
- return;
- }
- // Effectively no-ops
- "uninit" | "forget" => {
+ // Effectively no-op
+ "forget" => {
return;
}
"offset" => {
fn try_intrinsic(
bx: &mut Builder<'a, 'll, 'tcx>,
- func: &'ll Value,
+ try_func: &'ll Value,
data: &'ll Value,
- local_ptr: &'ll Value,
+ catch_func: &'ll Value,
dest: &'ll Value,
) {
if bx.sess().no_landing_pads() {
- bx.call(func, &[data], None);
- let ptr_align = bx.tcx().data_layout.pointer_align.abi;
- bx.store(bx.const_null(bx.type_i8p()), dest, ptr_align);
+ bx.call(try_func, &[data], None);
+ // Return 0 unconditionally from the intrinsic call;
+ // we can never unwind.
+ let ret_align = bx.tcx().data_layout.i32_align.abi;
+ bx.store(bx.const_i32(0), dest, ret_align);
} else if wants_msvc_seh(bx.sess()) {
- codegen_msvc_try(bx, func, data, local_ptr, dest);
+ codegen_msvc_try(bx, try_func, data, catch_func, dest);
} else {
- codegen_gnu_try(bx, func, data, local_ptr, dest);
+ codegen_gnu_try(bx, try_func, data, catch_func, dest);
}
}
// as the old ones are still more optimized.
fn codegen_msvc_try(
bx: &mut Builder<'a, 'll, 'tcx>,
- func: &'ll Value,
+ try_func: &'ll Value,
data: &'ll Value,
- local_ptr: &'ll Value,
+ catch_func: &'ll Value,
dest: &'ll Value,
) {
let llfn = get_rust_try_fn(bx, &mut |mut bx| {
let mut catchpad = bx.build_sibling_block("catchpad");
let mut caught = bx.build_sibling_block("caught");
- let func = llvm::get_param(bx.llfn(), 0);
+ let try_func = llvm::get_param(bx.llfn(), 0);
let data = llvm::get_param(bx.llfn(), 1);
- let local_ptr = llvm::get_param(bx.llfn(), 2);
+ let catch_func = llvm::get_param(bx.llfn(), 2);
// We're generating an IR snippet that looks like:
//
- // declare i32 @rust_try(%func, %data, %ptr) {
- // %slot = alloca [2 x i64]
- // invoke %func(%data) to label %normal unwind label %catchswitch
+ // declare i32 @rust_try(%try_func, %data, %catch_func) {
+ // %slot = alloca u8*
+ // invoke %try_func(%data) to label %normal unwind label %catchswitch
//
// normal:
// ret i32 0
//
// catchpad:
// %tok = catchpad within %cs [%type_descriptor, 0, %slot]
- // %ptr[0] = %slot[0]
- // %ptr[1] = %slot[1]
+ // %ptr = load %slot
+ // call %catch_func(%data, %ptr)
// catchret from %tok to label %caught
//
// caught:
// ~rust_panic();
//
// uint64_t x[2];
- // }
+ // };
//
- // int bar(void (*foo)(void), uint64_t *ret) {
+ // int __rust_try(
+ // void (*try_func)(void*),
+ // void *data,
+ // void (*catch_func)(void*, void*) noexcept
+ // ) {
// try {
- // foo();
+ // try_func(data);
// return 0;
// } catch(rust_panic& a) {
- // ret[0] = a.x[0];
- // ret[1] = a.x[1];
- // a.x[0] = 0;
+ // catch_func(data, &a);
// return 1;
// }
// }
//
// More information can be found in libstd's seh.rs implementation.
- let i64_2 = bx.type_array(bx.type_i64(), 2);
- let i64_2_ptr = bx.type_ptr_to(i64_2);
let ptr_align = bx.tcx().data_layout.pointer_align.abi;
- let slot = bx.alloca(i64_2_ptr, ptr_align);
- bx.invoke(func, &[data], normal.llbb(), catchswitch.llbb(), None);
+ let slot = bx.alloca(bx.type_i8p(), ptr_align);
+ bx.invoke(try_func, &[data], normal.llbb(), catchswitch.llbb(), None);
normal.ret(bx.const_i32(0));
let cs = catchswitch.catch_switch(None, None, 1);
catchswitch.add_handler(cs, catchpad.llbb());
+ // We can't use the TypeDescriptor defined in libpanic_unwind because it
+ // might be in another DLL and the SEH encoding only supports specifying
+ // a TypeDescriptor from the current module.
+ //
+ // However this isn't an issue since the MSVC runtime uses string
+ // comparison on the type name to match TypeDescriptors rather than
+ // pointer equality.
+ //
+ // So instead we generate a new TypeDescriptor in each module that uses
+ // `try` and let the linker merge duplicate definitions in the same
+    // binary.
+ //
+ // When modifying, make sure that the type_name string exactly matches
+ // the one used in src/libpanic_unwind/seh.rs.
+ let type_info_vtable = bx.declare_global("??_7type_info@@6B@", bx.type_i8p());
+ let type_name = bx.const_bytes(b"rust_panic\0");
+ let type_info =
+ bx.const_struct(&[type_info_vtable, bx.const_null(bx.type_i8p()), type_name], false);
+ let tydesc = bx.declare_global("__rust_panic_type_info", bx.val_ty(type_info));
+ unsafe {
+ llvm::LLVMRustSetLinkage(tydesc, llvm::Linkage::LinkOnceODRLinkage);
+ llvm::SetUniqueComdat(bx.llmod, tydesc);
+ llvm::LLVMSetInitializer(tydesc, type_info);
+ }
+
// The flag value of 8 indicates that we are catching the exception by
// reference instead of by value. We can't use catch by value because
// that requires copying the exception object, which we don't support
//
// Source: MicrosoftCXXABI::getAddrOfCXXCatchHandlerType in clang
let flags = bx.const_i32(8);
- let tydesc = match bx.tcx().lang_items().eh_catch_typeinfo() {
- Some(did) => bx.get_static(did),
- None => bug!("eh_catch_typeinfo not defined, but needed for SEH unwinding"),
- };
let funclet = catchpad.catch_pad(cs, &[tydesc, flags, slot]);
-
- let i64_align = bx.tcx().data_layout.i64_align.abi;
- let payload_ptr = catchpad.load(slot, ptr_align);
- let payload = catchpad.load(payload_ptr, i64_align);
- let local_ptr = catchpad.bitcast(local_ptr, bx.type_ptr_to(i64_2));
- catchpad.store(payload, local_ptr, i64_align);
-
- // Clear the first word of the exception so avoid double-dropping it.
- // This will be read by the destructor which is implicitly called at the
- // end of the catch block by the runtime.
- let payload_0_ptr = catchpad.inbounds_gep(payload_ptr, &[bx.const_i32(0), bx.const_i32(0)]);
- catchpad.store(bx.const_u64(0), payload_0_ptr, i64_align);
+ let ptr = catchpad.load(slot, ptr_align);
+ catchpad.call(catch_func, &[data, ptr], Some(&funclet));
catchpad.catch_ret(&funclet, caught.llbb());
// Note that no invoke is used here because by definition this function
// can't panic (that's what it's catching).
- let ret = bx.call(llfn, &[func, data, local_ptr], None);
+ let ret = bx.call(llfn, &[try_func, data, catch_func], None);
let i32_align = bx.tcx().data_layout.i32_align.abi;
bx.store(ret, dest, i32_align);
}
// the right personality function.
fn codegen_gnu_try(
bx: &mut Builder<'a, 'll, 'tcx>,
- func: &'ll Value,
+ try_func: &'ll Value,
data: &'ll Value,
- local_ptr: &'ll Value,
+ catch_func: &'ll Value,
dest: &'ll Value,
) {
let llfn = get_rust_try_fn(bx, &mut |mut bx| {
// Codegens the shims described above:
//
// bx:
- // invoke %func(%args...) normal %normal unwind %catch
+ // invoke %try_func(%data) normal %normal unwind %catch
//
// normal:
// ret 0
//
// catch:
- // (ptr, _) = landingpad
- // store ptr, %local_ptr
+ // (%ptr, _) = landingpad
+ // call %catch_func(%data, %ptr)
// ret 1
- //
- // Note that the `local_ptr` data passed into the `try` intrinsic is
- // expected to be `*mut *mut u8` for this to actually work, but that's
- // managed by the standard library.
bx.sideeffect();
let mut then = bx.build_sibling_block("then");
let mut catch = bx.build_sibling_block("catch");
- let func = llvm::get_param(bx.llfn(), 0);
+ let try_func = llvm::get_param(bx.llfn(), 0);
let data = llvm::get_param(bx.llfn(), 1);
- let local_ptr = llvm::get_param(bx.llfn(), 2);
- bx.invoke(func, &[data], then.llbb(), catch.llbb(), None);
+ let catch_func = llvm::get_param(bx.llfn(), 2);
+ bx.invoke(try_func, &[data], then.llbb(), catch.llbb(), None);
then.ret(bx.const_i32(0));
// Type indicator for the exception being thrown.
};
catch.add_clause(vals, tydesc);
let ptr = catch.extract_value(vals, 0);
- let ptr_align = bx.tcx().data_layout.pointer_align.abi;
- let bitcast = catch.bitcast(local_ptr, bx.type_ptr_to(bx.type_i8p()));
- catch.store(ptr, bitcast, ptr_align);
+ catch.call(catch_func, &[data, ptr], None);
catch.ret(bx.const_i32(1));
});
// Note that no invoke is used here because by definition this function
// can't panic (that's what it's catching).
- let ret = bx.call(llfn, &[func, data, local_ptr], None);
+ let ret = bx.call(llfn, &[try_func, data, catch_func], None);
let i32_align = bx.tcx().data_layout.i32_align.abi;
bx.store(ret, dest, i32_align);
}
));
let fn_abi = FnAbi::of_fn_ptr(cx, rust_fn_sig, &[]);
let llfn = cx.declare_fn(name, &fn_abi);
+ cx.set_frame_pointer_elimination(llfn);
+ cx.apply_target_cpu_attr(llfn);
// FIXME(eddyb) find a nicer way to do this.
unsafe { llvm::LLVMRustSetLinkage(llfn, llvm::Linkage::InternalLinkage) };
let bx = Builder::new_block(cx, llfn, "entry-block");
// Define the type up front for the signature of the rust_try function.
let tcx = cx.tcx;
let i8p = tcx.mk_mut_ptr(tcx.types.i8);
- let fn_ty = tcx.mk_fn_ptr(ty::Binder::bind(tcx.mk_fn_sig(
+ let try_fn_ty = tcx.mk_fn_ptr(ty::Binder::bind(tcx.mk_fn_sig(
iter::once(i8p),
tcx.mk_unit(),
false,
hir::Unsafety::Unsafe,
Abi::Rust,
)));
+ let catch_fn_ty = tcx.mk_fn_ptr(ty::Binder::bind(tcx.mk_fn_sig(
+ [i8p, i8p].iter().cloned(),
+ tcx.mk_unit(),
+ false,
+ hir::Unsafety::Unsafe,
+ Abi::Rust,
+ )));
let output = tcx.types.i32;
- let rust_try = gen_fn(cx, "__rust_try", vec![fn_ty, i8p, i8p], output, codegen);
+ let rust_try = gen_fn(cx, "__rust_try", vec![try_fn_ty, i8p, catch_fn_ty], output, codegen);
cx.rust_try_fn.set(Some(rust_try));
rust_try
}
#![recursion_limit = "256"]
use back::write::{create_informational_target_machine, create_target_machine};
-use rustc_span::symbol::Symbol;
pub use llvm_util::target_features;
-use rustc::dep_graph::WorkProduct;
+use rustc::dep_graph::{DepGraph, WorkProduct};
+use rustc::middle::cstore::{EncodedMetadata, MetadataLoaderDyn};
+use rustc::ty::{self, TyCtxt};
+use rustc::util::common::ErrorReported;
use rustc_ast::expand::allocator::AllocatorKind;
use rustc_codegen_ssa::back::lto::{LtoModuleCodegen, SerializedModule, ThinModule};
use rustc_codegen_ssa::back::write::{CodegenContext, FatLTOInput, ModuleConfig};
use rustc_codegen_ssa::traits::*;
+use rustc_codegen_ssa::ModuleCodegen;
use rustc_codegen_ssa::{CodegenResults, CompiledModule};
+use rustc_codegen_utils::codegen_backend::CodegenBackend;
use rustc_errors::{FatalError, Handler};
+use rustc_serialize::json;
+use rustc_session::config::{self, OptLevel, OutputFilenames, PrintRequest};
+use rustc_session::Session;
+use rustc_span::symbol::Symbol;
+
use std::any::Any;
use std::ffi::CStr;
use std::fs;
use std::sync::Arc;
-use rustc::dep_graph::DepGraph;
-use rustc::middle::cstore::{EncodedMetadata, MetadataLoaderDyn};
-use rustc::session::config::{self, OptLevel, OutputFilenames, PrintRequest};
-use rustc::session::Session;
-use rustc::ty::{self, TyCtxt};
-use rustc::util::common::ErrorReported;
-use rustc_codegen_ssa::ModuleCodegen;
-use rustc_codegen_utils::codegen_backend::CodegenBackend;
-use rustc_serialize::json;
-
mod back {
pub mod archive;
pub mod bytecode;
}
impl DebugEmissionKind {
- pub fn from_generic(kind: rustc::session::config::DebugInfo) -> Self {
- use rustc::session::config::DebugInfo;
+ pub fn from_generic(kind: rustc_session::config::DebugInfo) -> Self {
+ use rustc_session::config::DebugInfo;
match kind {
DebugInfo::None => DebugEmissionKind::NoDebug,
DebugInfo::Limited => DebugEmissionKind::LineTablesOnly,
/// See Module::setModuleInlineAsm.
pub fn LLVMSetModuleInlineAsm(M: &Module, Asm: *const c_char);
- pub fn LLVMRustAppendModuleInlineAsm(M: &Module, Asm: *const c_char);
+ pub fn LLVMRustAppendModuleInlineAsm(M: &Module, Asm: *const c_char, AsmLen: size_t);
/// See llvm::LLVMTypeKind::getTypeID.
pub fn LLVMRustGetTypeKind(Ty: &Type) -> TypeKind;
pub fn LLVMSetThreadLocalMode(GlobalVar: &Value, Mode: ThreadLocalMode);
pub fn LLVMIsGlobalConstant(GlobalVar: &Value) -> Bool;
pub fn LLVMSetGlobalConstant(GlobalVar: &Value, IsConstant: Bool);
- pub fn LLVMRustGetNamedValue(M: &Module, Name: *const c_char) -> Option<&Value>;
+ pub fn LLVMRustGetNamedValue(
+ M: &Module,
+ Name: *const c_char,
+ NameLen: size_t,
+ ) -> Option<&Value>;
pub fn LLVMSetTailCall(CallInst: &Value, IsTailCall: Bool);
// Operations on functions
pub fn LLVMRustGetOrInsertFunction(
M: &'a Module,
Name: *const c_char,
+ NameLen: size_t,
FunctionTy: &'a Type,
) -> &'a Value;
pub fn LLVMSetFunctionCallConv(Fn: &Value, CC: c_uint);
Args: *const &'a Value,
NumArgs: c_uint,
Bundle: Option<&OperandBundleDef<'a>>,
- Name: *const c_char,
) -> &'a Value;
pub fn LLVMRustBuildMemCpy(
B: &Builder<'a>,
pub fn LLVMRustInlineAsm(
Ty: &Type,
AsmString: *const c_char,
+ AsmStringLen: size_t,
Constraints: *const c_char,
+ ConstraintsLen: size_t,
SideEffects: Bool,
AlignStack: Bool,
Dialect: AsmDialect,
) -> &Value;
- pub fn LLVMRustInlineAsmVerify(Ty: &Type, Constraints: *const c_char) -> bool;
+ pub fn LLVMRustInlineAsmVerify(
+ Ty: &Type,
+ Constraints: *const c_char,
+ ConstraintsLen: size_t,
+ ) -> bool;
pub fn LLVMRustDebugMetadataVersion() -> u32;
pub fn LLVMRustVersionMajor() -> u32;
Lang: c_uint,
File: &'a DIFile,
Producer: *const c_char,
+ ProducerLen: size_t,
isOptimized: bool,
Flags: *const c_char,
RuntimeVer: c_uint,
SplitName: *const c_char,
+ SplitNameLen: size_t,
kind: DebugEmissionKind,
) -> &'a DIDescriptor;
pub fn LLVMRustDIBuilderCreateFile(
Builder: &DIBuilder<'a>,
Filename: *const c_char,
+ FilenameLen: size_t,
Directory: *const c_char,
+ DirectoryLen: size_t,
) -> &'a DIFile;
pub fn LLVMRustDIBuilderCreateSubroutineType(
Builder: &DIBuilder<'a>,
Scope: &'a DIDescriptor,
Name: *const c_char,
+ NameLen: size_t,
LinkageName: *const c_char,
+ LinkageNameLen: size_t,
File: &'a DIFile,
LineNo: c_uint,
Ty: &'a DIType,
pub fn LLVMRustDIBuilderCreateBasicType(
Builder: &DIBuilder<'a>,
Name: *const c_char,
+ NameLen: size_t,
SizeInBits: u64,
AlignInBits: u32,
Encoding: c_uint,
PointeeTy: &'a DIType,
SizeInBits: u64,
AlignInBits: u32,
+ AddressSpace: c_uint,
Name: *const c_char,
+ NameLen: size_t,
) -> &'a DIDerivedType;
pub fn LLVMRustDIBuilderCreateStructType(
Builder: &DIBuilder<'a>,
Scope: Option<&'a DIDescriptor>,
Name: *const c_char,
+ NameLen: size_t,
File: &'a DIFile,
LineNumber: c_uint,
SizeInBits: u64,
RunTimeLang: c_uint,
VTableHolder: Option<&'a DIType>,
UniqueId: *const c_char,
+ UniqueIdLen: size_t,
) -> &'a DICompositeType;
pub fn LLVMRustDIBuilderCreateMemberType(
Builder: &DIBuilder<'a>,
Scope: &'a DIDescriptor,
Name: *const c_char,
+ NameLen: size_t,
File: &'a DIFile,
LineNo: c_uint,
SizeInBits: u64,
Builder: &DIBuilder<'a>,
Scope: &'a DIScope,
Name: *const c_char,
+ NameLen: size_t,
File: &'a DIFile,
LineNumber: c_uint,
SizeInBits: u64,
Builder: &DIBuilder<'a>,
Context: Option<&'a DIScope>,
Name: *const c_char,
+ NameLen: size_t,
LinkageName: *const c_char,
+ LinkageNameLen: size_t,
File: &'a DIFile,
LineNo: c_uint,
Ty: &'a DIType,
Tag: c_uint,
Scope: &'a DIDescriptor,
Name: *const c_char,
+ NameLen: size_t,
File: &'a DIFile,
LineNo: c_uint,
Ty: &'a DIType,
pub fn LLVMRustDIBuilderCreateEnumerator(
Builder: &DIBuilder<'a>,
Name: *const c_char,
- Val: u64,
+ NameLen: size_t,
+ Value: i64,
+ IsUnsigned: bool,
) -> &'a DIEnumerator;
pub fn LLVMRustDIBuilderCreateEnumerationType(
Builder: &DIBuilder<'a>,
Scope: &'a DIScope,
Name: *const c_char,
+ NameLen: size_t,
File: &'a DIFile,
LineNumber: c_uint,
SizeInBits: u64,
Builder: &DIBuilder<'a>,
Scope: &'a DIScope,
Name: *const c_char,
+ NameLen: size_t,
File: &'a DIFile,
LineNumber: c_uint,
SizeInBits: u64,
Elements: Option<&'a DIArray>,
RunTimeLang: c_uint,
UniqueId: *const c_char,
+ UniqueIdLen: size_t,
) -> &'a DIType;
pub fn LLVMRustDIBuilderCreateVariantPart(
Builder: &DIBuilder<'a>,
Scope: &'a DIScope,
Name: *const c_char,
+ NameLen: size_t,
File: &'a DIFile,
LineNo: c_uint,
SizeInBits: u64,
Discriminator: Option<&'a DIDerivedType>,
Elements: &'a DIArray,
UniqueId: *const c_char,
+ UniqueIdLen: size_t,
) -> &'a DIDerivedType;
pub fn LLVMSetUnnamedAddr(GlobalVar: &Value, UnnamedAddr: Bool);
Builder: &DIBuilder<'a>,
Scope: Option<&'a DIScope>,
Name: *const c_char,
+ NameLen: size_t,
Ty: &'a DIType,
File: &'a DIFile,
LineNo: c_uint,
Builder: &DIBuilder<'a>,
Scope: Option<&'a DIScope>,
Name: *const c_char,
- File: &'a DIFile,
- LineNo: c_uint,
+ NameLen: size_t,
+ ExportSymbols: bool,
) -> &'a DINameSpace;
pub fn LLVMRustDICompositeTypeReplaceArrays(
}
}
-pub fn mk_section_iter(llof: &'a ffi::ObjectFile) -> SectionIter<'a> {
+pub fn mk_section_iter(llof: &ffi::ObjectFile) -> SectionIter<'_> {
unsafe { SectionIter { llsi: LLVMGetSections(llof) } }
}
/// Safe wrapper around `LLVMGetParam`, because segfaults are no fun.
-pub fn get_param(llfn: &'a Value, index: c_uint) -> &'a Value {
+pub fn get_param(llfn: &Value, index: c_uint) -> &Value {
unsafe {
assert!(
index < LLVMCountParams(llfn),
}
/// Safe wrapper for `LLVMGetValueName2` into a byte slice
-pub fn get_value_name(value: &'a Value) -> &'a [u8] {
+pub fn get_value_name(value: &Value) -> &[u8] {
unsafe {
let mut len = 0;
let data = LLVMGetValueName2(value, &mut len);
use crate::llvm;
use libc::c_int;
use rustc::bug;
-use rustc::session::config::PrintRequest;
-use rustc::session::Session;
use rustc_data_structures::fx::FxHashSet;
use rustc_feature::UnstableFeatures;
+use rustc_session::config::PrintRequest;
+use rustc_session::Session;
use rustc_span::symbol::sym;
use rustc_span::symbol::Symbol;
use rustc_target::spec::{MergeFunctions, PanicStrategy};
unsafe { llvm::LLVMIntTypeInContext(llcx, num_bits as c_uint) }
}
- pub fn i8p_llcx(llcx: &'ll llvm::Context) -> &'ll Type {
+ pub fn i8p_llcx(llcx: &llvm::Context) -> &Type {
Type::i8_llcx(llcx).ptr_to()
}
// Windows x86_64
("x86_64", true) => {
let target_ty_size = bx.cx.size_of(target_ty).bytes();
- let indirect =
- if target_ty_size > 8 || !target_ty_size.is_power_of_two() { true } else { false };
+ let indirect: bool = target_ty_size > 8 || !target_ty_size.is_power_of_two();
emit_ptr_va_arg(bx, addr, target_ty, indirect, Align::from_bytes(8).unwrap(), false)
}
// For all other architecture/OS combinations fall back to using
-Please read the rustc-guide chapter on [Backend Agnostic Codegen][bac].
+Please read the rustc-dev-guide chapter on [Backend Agnostic Codegen][bac].
-[bac]: https://rust-lang.github.io/rustc-guide/codegen/backend-agnostic.html
+[bac]: https://rustc-dev-guide.rust-lang.org/codegen/backend-agnostic.html
-use rustc::session::Session;
+use rustc_session::Session;
use rustc_span::symbol::Symbol;
use std::io;
use rustc::middle::cstore::{EncodedMetadata, LibSource, NativeLibrary, NativeLibraryKind};
use rustc::middle::dependency_format::Linkage;
-use rustc::session::config::{
+use rustc_data_structures::fx::FxHashSet;
+use rustc_fs_util::fix_windows_verbatim_for_gcc;
+use rustc_hir::def_id::CrateNum;
+use rustc_session::config::{
self, CFGuard, DebugInfo, OutputFilenames, OutputType, PrintRequest, Sanitizer,
};
-use rustc::session::search_paths::PathKind;
+use rustc_session::search_paths::PathKind;
/// For all the linkers we support, and the information they might
/// need out of the shared crate context before we get rid of it.
-use rustc::session::{filesearch, Session};
-use rustc_data_structures::fx::FxHashSet;
-use rustc_fs_util::fix_windows_verbatim_for_gcc;
-use rustc_hir::def_id::CrateNum;
+use rustc_session::{filesearch, Session};
use rustc_span::symbol::Symbol;
use rustc_target::spec::{LinkerFlavor, PanicStrategy, RelroLevel};
if flavor == LinkerFlavor::Msvc && t.target_vendor == "uwp" {
if let Some(ref tool) = msvc_tool {
let original_path = tool.path();
- if let Some(ref root_lib_path) = original_path.ancestors().skip(4).next() {
+ if let Some(ref root_lib_path) = original_path.ancestors().nth(4) {
let arch = match t.arch.as_str() {
"x86_64" => Some("x64".to_string()),
"x86" => Some("x86".to_string()),
info!("preparing {:?} to {:?}", crate_type, out_filename);
let (linker, flavor) = linker_and_flavor(sess);
+ let any_dynamic_crate = crate_type == config::CrateType::Dylib
+ || codegen_results.crate_info.dependency_formats.iter().any(|(ty, list)| {
+ *ty == crate_type && list.iter().any(|&linkage| linkage == Linkage::Dynamic)
+ });
+
// The invocations of cc share some flags across platforms
let (pname, mut cmd) = get_linker(sess, &linker, flavor);
cmd.args(args);
}
if let Some(args) = sess.target.target.options.pre_link_args_crt.get(&flavor) {
- if sess.crt_static() {
+ if sess.crt_static(Some(crate_type)) {
cmd.args(args);
}
}
cmd.arg(get_file_path(sess, obj));
}
- if crate_type == config::CrateType::Executable && sess.crt_static() {
+ if crate_type == config::CrateType::Executable && sess.crt_static(Some(crate_type)) {
for obj in &sess.target.target.options.pre_link_objects_exe_crt {
cmd.arg(get_file_path(sess, obj));
}
if let Some(args) = sess.target.target.options.late_link_args.get(&flavor) {
cmd.args(args);
}
+ if any_dynamic_crate {
+ if let Some(args) = sess.target.target.options.late_link_args_dynamic.get(&flavor) {
+ cmd.args(args);
+ }
+ } else {
+ if let Some(args) = sess.target.target.options.late_link_args_static.get(&flavor) {
+ cmd.args(args);
+ }
+ }
for obj in &sess.target.target.options.post_link_objects {
cmd.arg(get_file_path(sess, obj));
}
- if sess.crt_static() {
+ if sess.crt_static(Some(crate_type)) {
for obj in &sess.target.target.options.post_link_objects_crt {
cmd.arg(get_file_path(sess, obj));
}
let more_args = &sess.opts.cg.link_arg;
let mut args = args.iter().chain(more_args.iter()).chain(used_link_args.iter());
- if is_pic(sess) && !sess.crt_static() && !args.any(|x| *x == "-static") {
+ if is_pic(sess) && !sess.crt_static(Some(crate_type)) && !args.any(|x| *x == "-static")
+ {
position_independent_executable = true;
}
}
if crate_type != config::CrateType::Executable {
cmd.build_dylib(out_filename);
}
- if crate_type == config::CrateType::Executable && sess.crt_static() {
+ if crate_type == config::CrateType::Executable && sess.crt_static(Some(crate_type)) {
cmd.build_static_executable();
}
// for the current implementation of the standard library.
let mut group_end = None;
let mut group_start = None;
- let mut end_with = FxHashSet::default();
+ // Crates available for linking thus far.
+ let mut available = FxHashSet::default();
+ // Crates required to satisfy dependencies discovered so far.
+ let mut required = FxHashSet::default();
+
let info = &codegen_results.crate_info;
for &(cnum, _) in deps.iter().rev() {
if let Some(missing) = info.missing_lang_items.get(&cnum) {
- end_with.extend(missing.iter().cloned());
- if !end_with.is_empty() && group_end.is_none() {
- group_end = Some(cnum);
- }
+ let missing_crates = missing.iter().map(|i| info.lang_item_to_crate.get(i).copied());
+ required.extend(missing_crates);
+ }
+
+ required.insert(Some(cnum));
+ available.insert(Some(cnum));
+
+ if required.len() > available.len() && group_end.is_none() {
+ group_end = Some(cnum);
}
- end_with.retain(|item| info.lang_item_to_crate.get(item) != Some(&cnum));
- if end_with.is_empty() && group_end.is_some() {
+ if required.len() == available.len() && group_end.is_some() {
group_start = Some(cnum);
break;
}
use super::command::Command;
use super::symbol_export;
-use rustc_data_structures::fx::FxHashMap;
use std::ffi::{OsStr, OsString};
use std::fs::{self, File};
use std::io::prelude::*;
use std::path::{Path, PathBuf};
use rustc::middle::dependency_format::Linkage;
-use rustc::session::config::{self, CrateType, DebugInfo, LinkerPluginLto, Lto, OptLevel};
-use rustc::session::Session;
use rustc::ty::TyCtxt;
+use rustc_data_structures::fx::FxHashMap;
use rustc_hir::def_id::{CrateNum, LOCAL_CRATE};
use rustc_serialize::{json, Encoder};
+use rustc_session::config::{self, CrateType, DebugInfo, LinkerPluginLto, Lto, OptLevel};
+use rustc_session::Session;
use rustc_span::symbol::Symbol;
use rustc_target::spec::{LinkerFlavor, LldFlavor};
use rustc::middle::codegen_fn_attrs::CodegenFnAttrFlags;
use rustc::middle::exported_symbols::{metadata_symbol_name, ExportedSymbol, SymbolExportLevel};
-use rustc::session::config::{self, Sanitizer};
use rustc::ty::query::Providers;
use rustc::ty::subst::{GenericArgKind, SubstsRef};
use rustc::ty::Instance;
use rustc_hir::def_id::{CrateNum, DefId, DefIdMap, CRATE_DEF_INDEX, LOCAL_CRATE};
use rustc_hir::Node;
use rustc_index::vec::IndexVec;
+use rustc_session::config::{self, Sanitizer};
pub fn threshold(tcx: TyCtxt<'_>) -> SymbolExportLevel {
crates_export_threshold(&tcx.sess.crate_types.borrow())
// Only consider nodes that actually have exported symbols.
Node::Item(&hir::Item { kind: hir::ItemKind::Static(..), .. })
| Node::Item(&hir::Item { kind: hir::ItemKind::Fn(..), .. })
- | Node::ImplItem(&hir::ImplItem { kind: hir::ImplItemKind::Method(..), .. }) => {
+ | Node::ImplItem(&hir::ImplItem { kind: hir::ImplItemKind::Fn(..), .. }) => {
let def_id = tcx.hir().local_def_id(hir_id);
let generics = tcx.generics_of(def_id);
if !generics.requires_monomorphization(tcx) &&
use rustc::dep_graph::{WorkProduct, WorkProductFileKind, WorkProductId};
use rustc::middle::cstore::EncodedMetadata;
use rustc::middle::exported_symbols::SymbolExportLevel;
-use rustc::session::config::{
- self, Lto, OutputFilenames, OutputType, Passes, Sanitizer, SwitchWithOptPath,
-};
-use rustc::session::Session;
use rustc::ty::TyCtxt;
use rustc_ast::attr;
use rustc_data_structures::fx::FxHashMap;
copy_cgu_workproducts_to_incr_comp_cache_dir, in_incr_comp_dir, in_incr_comp_dir_sess,
};
use rustc_session::cgu_reuse_tracker::CguReuseTracker;
+use rustc_session::config::{
+ self, Lto, OutputFilenames, OutputType, Passes, Sanitizer, SwitchWithOptPath,
+};
+use rustc_session::Session;
use rustc_span::hygiene::ExpnId;
use rustc_span::source_map::SourceMap;
use rustc_span::symbol::{sym, Symbol};
let crate_name = tcx.crate_name(LOCAL_CRATE);
let crate_hash = tcx.crate_hash(LOCAL_CRATE);
- let no_builtins = attr::contains_name(&tcx.hir().krate().attrs, sym::no_builtins);
+ let no_builtins = attr::contains_name(&tcx.hir().krate().item.attrs, sym::no_builtins);
let subsystem =
- attr::first_attr_value_str_by_name(&tcx.hir().krate().attrs, sym::windows_subsystem);
+ attr::first_attr_value_str_by_name(&tcx.hir().krate().item.attrs, sym::windows_subsystem);
let windows_subsystem = subsystem.map(|subsystem| {
if subsystem != sym::windows && subsystem != sym::console {
tcx.sess.fatal(&format!(
}
}
-// Actual LTO type we end up chosing based on multiple factors.
+// Actual LTO type we end up choosing based on multiple factors.
enum ComputedLtoType {
No,
Thin,
if main_thread_worker_state == MainThreadWorkerState::Idle {
if !queue_full_enough(work_items.len(), running, max_workers) {
// The queue is not full enough, codegen more items:
- if let Err(_) = codegen_worker_send.send(Message::CodegenItem) {
+ if codegen_worker_send.send(Message::CodegenItem).is_err() {
panic!("Could not send Message::CodegenItem to main thread")
}
main_thread_worker_state = MainThreadWorkerState::Codegenning;
use rustc::middle::lang_items;
use rustc::middle::lang_items::StartFnLangItem;
use rustc::mir::mono::{CodegenUnit, CodegenUnitNameBuilder, MonoItem};
-use rustc::session::config::{self, EntryFnType, Lto};
-use rustc::session::Session;
use rustc::ty::layout::{self, Align, HasTyCtxt, LayoutOf, TyLayout, VariantIdx};
use rustc::ty::layout::{FAT_PTR_ADDR, FAT_PTR_EXTRA};
use rustc::ty::query::Providers;
use rustc_hir::def_id::{DefId, LOCAL_CRATE};
use rustc_index::vec::Idx;
use rustc_session::cgu_reuse_tracker::CguReuse;
+use rustc_session::config::{self, EntryFnType, Lto};
+use rustc_session::Session;
use rustc_span::Span;
use std::cmp;
#![allow(non_camel_case_types, non_snake_case)]
-use rustc::session::Session;
use rustc::ty::{Ty, TyCtxt};
use rustc_errors::struct_span_err;
+use rustc_session::Session;
use rustc_span::Span;
use crate::base;
use rustc::middle::cstore::{CrateSource, LibSource, NativeLibrary};
use rustc::middle::dependency_format::Dependencies;
use rustc::middle::lang_items::LangItem;
-use rustc::session::config::{OutputFilenames, OutputType, RUST_CGU_EXT};
use rustc::ty::query::Providers;
use rustc_data_structures::fx::{FxHashMap, FxHashSet};
use rustc_data_structures::svh::Svh;
use rustc_data_structures::sync::Lrc;
use rustc_hir::def_id::CrateNum;
+use rustc_session::config::{OutputFilenames, OutputType, RUST_CGU_EXT};
use rustc_span::symbol::Symbol;
use std::path::{Path, PathBuf};
fn process_place(
&mut self,
- place_ref: &mir::PlaceRef<'_, 'tcx>,
+ place_ref: &mir::PlaceRef<'tcx>,
context: PlaceContext,
location: Location,
) {
}
self.visit_place_base(&place_ref.local, context, location);
- self.visit_projection(&place_ref.local, place_ref.projection, context, location);
+ self.visit_projection(place_ref.local, place_ref.projection, context, location);
}
}
}
let lp1 = bx.load_operand(lp1).immediate();
slot.storage_dead(&mut bx);
- if !bx.sess().target.target.options.custom_unwind_resume {
- let mut lp = bx.const_undef(self.landing_pad_type());
- lp = bx.insert_value(lp, lp0, 0);
- lp = bx.insert_value(lp, lp1, 1);
- bx.resume(lp);
- } else {
- bx.call(bx.eh_unwind_resume(), &[lp0], helper.funclet(self));
- bx.unreachable();
- }
+ let mut lp = bx.const_undef(self.landing_pad_type());
+ lp = bx.insert_value(lp, lp0, 0);
+ lp = bx.insert_value(lp, lp1, 1);
+ bx.resume(lp);
}
}
AssertKind::BoundsCheck { ref len, ref index } => {
let len = self.codegen_operand(&mut bx, len).immediate();
let index = self.codegen_operand(&mut bx, index).immediate();
- (lang_items::PanicBoundsCheckFnLangItem, vec![location, index, len])
+ // It's `fn panic_bounds_check(index: usize, len: usize)`,
+ // and `#[track_caller]` adds an implicit third argument.
+ (lang_items::PanicBoundsCheckFnLangItem, vec![index, len, location])
}
_ => {
let msg_str = Symbol::intern(msg.description());
let msg = bx.const_str(msg_str);
+ // It's `pub fn panic(expr: &str)`, with the wide reference being passed
+ // as two arguments, and `#[track_caller]` adds an implicit third argument.
(lang_items::PanicFnLangItem, vec![msg.0, msg.1, location])
}
};
helper.do_call(self, &mut bx, fn_abi, llfn, &args, None, cleanup);
}
+ /// Returns `true` if this is indeed a panic intrinsic and codegen is done.
+ fn codegen_panic_intrinsic(
+ &mut self,
+ helper: &TerminatorCodegenHelper<'tcx>,
+ bx: &mut Bx,
+ intrinsic: Option<&str>,
+ instance: Option<Instance<'tcx>>,
+ span: Span,
+ destination: &Option<(mir::Place<'tcx>, mir::BasicBlock)>,
+ cleanup: Option<mir::BasicBlock>,
+ ) -> bool {
+ // Emit a panic or a no-op for `assert_*` intrinsics.
+ // These are intrinsics that compile to panics so that we can get a message
+ // which mentions the offending type, even from a const context.
+ #[derive(Debug, PartialEq)]
+ enum AssertIntrinsic {
+ Inhabited,
+ ZeroValid,
+ UninitValid,
+ };
+ let panic_intrinsic = intrinsic.and_then(|i| match i {
+ // FIXME: Move to symbols instead of strings.
+ "assert_inhabited" => Some(AssertIntrinsic::Inhabited),
+ "assert_zero_valid" => Some(AssertIntrinsic::ZeroValid),
+ "assert_uninit_valid" => Some(AssertIntrinsic::UninitValid),
+ _ => None,
+ });
+ if let Some(intrinsic) = panic_intrinsic {
+ use AssertIntrinsic::*;
+ let ty = instance.unwrap().substs.type_at(0);
+ let layout = bx.layout_of(ty);
+ let do_panic = match intrinsic {
+ Inhabited => layout.abi.is_uninhabited(),
+ // We unwrap as the error type is `!`.
+ ZeroValid => !layout.might_permit_raw_init(bx, /*zero:*/ true).unwrap(),
+ // We unwrap as the error type is `!`.
+ UninitValid => !layout.might_permit_raw_init(bx, /*zero:*/ false).unwrap(),
+ };
+ if do_panic {
+ let msg_str = if layout.abi.is_uninhabited() {
+ // Use this error even for the other intrinsics as it is more precise.
+ format!("attempted to instantiate uninhabited type `{}`", ty)
+ } else if intrinsic == ZeroValid {
+ format!("attempted to zero-initialize type `{}`, which is invalid", ty)
+ } else {
+ format!("attempted to leave type `{}` uninitialized, which is invalid", ty)
+ };
+ let msg = bx.const_str(Symbol::intern(&msg_str));
+ let location = self.get_caller_location(bx, span).immediate();
+
+ // Obtain the panic entry point.
+ // FIXME: dedup this with `codegen_assert_terminator` above.
+ let def_id =
+ common::langcall(bx.tcx(), Some(span), "", lang_items::PanicFnLangItem);
+ let instance = ty::Instance::mono(bx.tcx(), def_id);
+ let fn_abi = FnAbi::of_instance(bx, instance, &[]);
+ let llfn = bx.get_fn_addr(instance);
+
+ if let Some((_, target)) = destination.as_ref() {
+ helper.maybe_sideeffect(self.mir, bx, &[*target]);
+ }
+ // Codegen the actual panic invoke/call.
+ helper.do_call(
+ self,
+ bx,
+ fn_abi,
+ llfn,
+ &[msg.0, msg.1, location],
+ destination.as_ref().map(|(_, bb)| (ReturnDest::Nothing, *bb)),
+ cleanup,
+ );
+ } else {
+ // a NOP
+ let target = destination.as_ref().unwrap().1;
+ helper.maybe_sideeffect(self.mir, bx, &[target]);
+ helper.funclet_br(self, bx, target)
+ }
+ true
+ } else {
+ false
+ }
+ }
+
fn codegen_call_terminator(
&mut self,
helper: TerminatorCodegenHelper<'tcx>,
bug!("`miri_start_panic` should never end up in compiled code");
}
- // Emit a panic or a no-op for `panic_if_uninhabited`.
- if intrinsic == Some("panic_if_uninhabited") {
- let ty = instance.unwrap().substs.type_at(0);
- let layout = bx.layout_of(ty);
- if layout.abi.is_uninhabited() {
- let msg_str = format!("Attempted to instantiate uninhabited type {}", ty);
- let msg = bx.const_str(Symbol::intern(&msg_str));
- let location = self.get_caller_location(&mut bx, span).immediate();
-
- // Obtain the panic entry point.
- let def_id =
- common::langcall(bx.tcx(), Some(span), "", lang_items::PanicFnLangItem);
- let instance = ty::Instance::mono(bx.tcx(), def_id);
- let fn_abi = FnAbi::of_instance(&bx, instance, &[]);
- let llfn = bx.get_fn_addr(instance);
-
- if let Some((_, target)) = destination.as_ref() {
- helper.maybe_sideeffect(self.mir, &mut bx, &[*target]);
- }
- // Codegen the actual panic invoke/call.
- helper.do_call(
- self,
- &mut bx,
- fn_abi,
- llfn,
- &[msg.0, msg.1, location],
- destination.as_ref().map(|(_, bb)| (ReturnDest::Nothing, *bb)),
- cleanup,
- );
- } else {
- // a NOP
- let target = destination.as_ref().unwrap().1;
- helper.maybe_sideeffect(self.mir, &mut bx, &[target]);
- helper.funclet_br(self, &mut bx, target)
- }
+ if self.codegen_panic_intrinsic(
+ &helper,
+ &mut bx,
+ intrinsic,
+ instance,
+ span,
+ destination,
+ cleanup,
+ ) {
return;
}
use crate::traits::*;
use rustc::mir;
-use rustc::session::config::DebugInfo;
use rustc::ty;
use rustc::ty::layout::{LayoutOf, Size};
use rustc_hir::def_id::CrateNum;
use rustc_index::vec::IndexVec;
+use rustc_session::config::DebugInfo;
use rustc_span::symbol::{kw, Symbol};
use rustc_span::{BytePos, Span};
impl<D> DebugScope<D> {
pub fn is_valid(&self) -> bool {
- !self.scope_metadata.is_none()
+ self.scope_metadata.is_some()
}
}
) -> Option<IndexVec<mir::Local, Vec<PerLocalVarDebugInfo<'tcx, Bx::DIVariable>>>> {
let full_debug_info = self.cx.sess().opts.debuginfo == DebugInfo::Full;
- if !(full_debug_info || !self.cx.sess().fewer_names()) {
+ if !full_debug_info && self.cx.sess().fewer_names() {
return None;
}
impl<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>> FunctionCx<'a, 'tcx, Bx> {
pub fn monomorphize<T>(&self, value: &T) -> T
where
- T: TypeFoldable<'tcx>,
+ T: Copy + TypeFoldable<'tcx>,
{
- self.cx.tcx().subst_and_normalize_erasing_regions(
- self.instance.substs,
- ty::ParamEnv::reveal_all(),
- value,
- )
+ debug!("monomorphize: self.instance={:?}", self.instance);
+ if let Some(substs) = self.instance.substs_for_mir_body() {
+ self.cx.tcx().subst_and_normalize_erasing_regions(
+ substs,
+ ty::ParamEnv::reveal_all(),
+ &value,
+ )
+ } else {
+ self.cx.tcx().normalize_erasing_regions(ty::ParamEnv::reveal_all(), *value)
+ }
}
}
fn maybe_codegen_consume_direct(
&mut self,
bx: &mut Bx,
- place_ref: mir::PlaceRef<'_, 'tcx>,
+ place_ref: mir::PlaceRef<'tcx>,
) -> Option<OperandRef<'tcx, Bx::Value>> {
debug!("maybe_codegen_consume_direct(place_ref={:?})", place_ref);
pub fn codegen_consume(
&mut self,
bx: &mut Bx,
- place_ref: mir::PlaceRef<'_, 'tcx>,
+ place_ref: mir::PlaceRef<'tcx>,
) -> OperandRef<'tcx, Bx::Value> {
debug!("codegen_consume(place_ref={:?})", place_ref);
pub fn codegen_place(
&mut self,
bx: &mut Bx,
- place_ref: mir::PlaceRef<'_, 'tcx>,
+ place_ref: mir::PlaceRef<'tcx>,
) -> PlaceRef<'tcx, Bx::Value> {
debug!("codegen_place(place_ref={:?})", place_ref);
let cx = self.cx;
result
}
- pub fn monomorphized_place_ty(&self, place_ref: mir::PlaceRef<'_, 'tcx>) -> Ty<'tcx> {
+ pub fn monomorphized_place_ty(&self, place_ref: mir::PlaceRef<'tcx>) -> Ty<'tcx> {
let tcx = self.cx.tcx();
let place_ty = mir::Place::ty_from(place_ref.local, place_ref.projection, *self.mir, tcx);
self.monomorphize(&place_ty.ty)
use crate::ModuleCodegen;
use rustc::middle::cstore::EncodedMetadata;
-use rustc::session::{config, Session};
use rustc::ty::layout::{HasTyCtxt, LayoutOf, TyLayout};
use rustc::ty::Ty;
use rustc::ty::TyCtxt;
use rustc_ast::expand::allocator::AllocatorKind;
use rustc_codegen_utils::codegen_backend::CodegenBackend;
+use rustc_session::{config, Session};
use rustc_span::symbol::Symbol;
use std::sync::Arc;
use super::BackendTypes;
use rustc::mir::mono::CodegenUnit;
-use rustc::session::Session;
use rustc::ty::{self, Instance, Ty};
use rustc_data_structures::fx::FxHashMap;
+use rustc_session::Session;
use std::cell::RefCell;
use std::sync::Arc;
fn get_fn(&self, instance: Instance<'tcx>) -> Self::Function;
fn get_fn_addr(&self, instance: Instance<'tcx>) -> Self::Value;
fn eh_personality(&self) -> Self::Value;
- fn eh_unwind_resume(&self) -> Self::Value;
fn sess(&self) -> &Session;
fn codegen_unit(&self) -> &Arc<CodegenUnit<'tcx>>;
fn used_statics(&self) -> &RefCell<Vec<Self::Value>>;
//! actual codegen, while the builder stores the information about the function during codegen and
//! is used to produce the instructions of the backend IR.
//!
-//! Finaly, a third `Backend` structure has to implement methods related to how codegen information
+//! Finally, a third `Backend` structure has to implement methods related to how codegen information
//! is passed to the backend, especially for asynchronous compilation.
//!
//! The traits contain associated types that are backend-specific, such as the backend's value or
rustc_target = { path = "../librustc_target" }
rustc_data_structures = { path = "../librustc_data_structures" }
rustc_metadata = { path = "../librustc_metadata" }
+rustc_session = { path = "../librustc_session" }
use rustc::dep_graph::DepGraph;
use rustc::middle::cstore::{EncodedMetadata, MetadataLoaderDyn};
-use rustc::session::config::{OutputFilenames, PrintRequest};
-use rustc::session::Session;
use rustc::ty::query::Providers;
use rustc::ty::TyCtxt;
use rustc::util::common::ErrorReported;
+use rustc_session::config::{OutputFilenames, PrintRequest};
+use rustc_session::Session;
use rustc_span::symbol::Symbol;
pub use rustc_data_structures::sync::MetadataRef;
-use rustc::session::config::{self, Input, OutputFilenames, OutputType};
-use rustc::session::Session;
use rustc_ast::{ast, attr};
+use rustc_session::config::{self, Input, OutputFilenames, OutputType};
+use rustc_session::Session;
use rustc_span::symbol::sym;
use rustc_span::Span;
use std::path::{Path, PathBuf};
if !sess.target.target.options.dynamic_linking {
return true;
}
- if sess.crt_static() && !sess.target.target.options.crt_static_allows_dylibs {
+ if sess.crt_static(Some(crate_type))
+ && !sess.target.target.options.crt_static_allows_dylibs
+ {
return true;
}
}
use rustc::middle::codegen_fn_attrs::CodegenFnAttrFlags;
use rustc::mir::mono::{InstantiationMode, MonoItem};
-use rustc::session::config::SymbolManglingVersion;
use rustc::ty::query::Providers;
use rustc::ty::subst::SubstsRef;
use rustc::ty::{self, Instance, TyCtxt};
use rustc_hir::def_id::{CrateNum, LOCAL_CRATE};
use rustc_hir::Node;
+use rustc_session::config::SymbolManglingVersion;
use rustc_span::symbol::Symbol;
//
// * On the wasm32 targets there is a bug (or feature) in LLD [1] where the
// same-named symbol when imported from different wasm modules will get
- // hooked up incorectly. As a result foreign symbols, on the wasm target,
+ // hooked up incorrectly. As a result foreign symbols, on the wasm target,
// with a wasm import module, get mangled. Additionally our codegen will
// deduplicate symbols based purely on the symbol name, but for wasm this
// isn't quite right because the same-named symbol on wasm can come from
}
impl<I, A, R> PinnedGenerator<I, A, R> {
- #[cfg(bootstrap)]
- pub fn new<T: Generator<Yield = YieldType<I, A>, Return = R> + 'static>(
- generator: T,
- ) -> (I, Self) {
- let mut result = PinnedGenerator { generator: Box::pin(generator) };
-
- // Run it to the first yield to set it up
- let init = match Pin::new(&mut result.generator).resume() {
- GeneratorState::Yielded(YieldType::Initial(y)) => y,
- _ => panic!(),
- };
-
- (init, result)
- }
-
- #[cfg(not(bootstrap))]
pub fn new<T: Generator<Yield = YieldType<I, A>, Return = R> + 'static>(
generator: T,
) -> (I, Self) {
(init, result)
}
- #[cfg(bootstrap)]
- pub unsafe fn access(&mut self, closure: *mut dyn FnMut()) {
- BOX_REGION_ARG.with(|i| {
- i.set(Action::Access(AccessAction(closure)));
- });
-
- // Call the generator, which in turn will call the closure in BOX_REGION_ARG
- if let GeneratorState::Complete(_) = Pin::new(&mut self.generator).resume() {
- panic!()
- }
- }
-
- #[cfg(not(bootstrap))]
pub unsafe fn access(&mut self, closure: *mut dyn FnMut()) {
BOX_REGION_ARG.with(|i| {
i.set(Action::Access(AccessAction(closure)));
}
}
- #[cfg(bootstrap)]
- pub fn complete(&mut self) -> R {
- // Tell the generator we want it to complete, consuming it and yielding a result
- BOX_REGION_ARG.with(|i| i.set(Action::Complete));
-
- let result = Pin::new(&mut self.generator).resume();
- if let GeneratorState::Complete(r) = result { r } else { panic!() }
- }
-
- #[cfg(not(bootstrap))]
pub fn complete(&mut self) -> R {
// Tell the generator we want it to complete, consuming it and yielding a result
BOX_REGION_ARG.with(|i| i.set(Action::Complete));
--- /dev/null
+//! An immutable, owned value (except for interior mutability).
+//!
+//! The purpose of `Frozen` is to make a value immutable for the sake of defensive programming. For example,
+//! suppose we have the following:
+//!
+//! ```rust
+//! struct Bar { /* some data */ }
+//!
+//! struct Foo {
+//! /// Some computed data that should never change after construction.
+//! pub computed: Bar,
+//!
+//! /* some other fields */
+//! }
+//!
+//! impl Bar {
+//! /// Mutate the `Bar`.
+//! pub fn mutate(&mut self) { }
+//! }
+//! ```
+//!
+//! Now suppose we want to pass around a mutable `Foo` instance, but we want to make sure that
+//! `computed` does not change accidentally (e.g. somebody might accidentally call
+//! `foo.computed.mutate()`). This is what `Frozen` is for. We can do the following:
+//!
+//! ```rust
+//! use rustc_data_structures::frozen::Frozen;
+//! # struct Bar;
+//!
+//! struct Foo {
+//! /// Some computed data that should never change after construction.
+//! pub computed: Frozen<Bar>,
+//!
+//! /* some other fields */
+//! }
+//! ```
+//!
+//! `Frozen` impls `Deref`, so we can ergonomically call methods on `Bar`, but it doesn't `impl
+//! DerefMut`. Now calling `foo.computed.mutate()` will result in a compile-time error stating that
+//! `mutate` requires a mutable reference but we don't have one.
+//!
+//! # Caveats
+//!
+//! - `Frozen` doesn't try to defend against interior mutability (e.g. `Frozen<RefCell<Bar>>`).
+//! - `Frozen` doesn't pin its contents (e.g. one could still do `foo.computed =
+//! Frozen::freeze(new_bar)`).
+
+/// An owned immutable value.
+#[derive(Debug)]
+pub struct Frozen<T>(T);
+
+impl<T> Frozen<T> {
+ pub fn freeze(val: T) -> Self {
+ Frozen(val)
+ }
+}
+
+impl<T> std::ops::Deref for Frozen<T> {
+ type Target = T;
+
+ fn deref(&self) -> &T {
+ &self.0
+ }
+}
}
impl<N: Idx, S: Idx> WithSuccessors for Sccs<N, S> {
- fn successors<'graph>(&'graph self, node: S) -> <Self as GraphSuccessors<'graph>>::Iter {
+ fn successors(&self, node: S) -> <Self as GraphSuccessors<'_>>::Iter {
self.successors(node).iter().cloned()
}
}
}
impl<N: Idx> WithSuccessors for VecGraph<N> {
- fn successors<'graph>(&'graph self, node: N) -> <Self as GraphSuccessors<'graph>>::Iter {
+ fn successors(&self, node: N) -> <Self as GraphSuccessors<'_>>::Iter {
self.successors(node).iter().cloned()
}
}
}
#[test]
-fn succesors() {
+fn successors() {
let graph = create_graph();
assert_eq!(graph.successors(0), &[1]);
assert_eq!(graph.successors(1), &[2, 3]);
pub mod vec_linked_list;
pub mod work_queue;
pub use atomic_ref::AtomicRef;
+pub mod frozen;
pub struct OnDrop<F: Fn()>(pub F);
#[cfg(parallel_compiler)]
// 32 shards is sufficient to reduce contention on an 8-core Ryzen 7 1700,
// but this should be tested on higher core count CPUs. How the `Sharded` type gets used
-// may also affect the ideal nunber of shards.
+// may also affect the ideal number of shards.
const SHARD_BITS: usize = 5;
#[cfg(not(parallel_compiler))]
let mut values: SmallVec<[_; SHARDS]> =
(0..SHARDS).map(|_| CacheAligned(Lock::new(value()))).collect();
- // Create an unintialized array
+ // Create an uninitialized array
let mut shards: mem::MaybeUninit<[CacheAligned<Lock<T>>; SHARDS]> =
mem::MaybeUninit::uninit();
rustc_serialize = { path = "../libserialize", package = "serialize" }
rustc_ast = { path = "../librustc_ast" }
rustc_span = { path = "../librustc_span" }
+rustc_session = { path = "../librustc_session" }
[target.'cfg(windows)'.dependencies]
winapi = { version = "0.3", features = ["consoleapi", "debugapi", "processenv"] }
have some code related to pretty printing or other minor compiler
options).
-For more information about how the driver works, see the [rustc guide].
+For more information about how the driver works, see the [rustc dev guide].
-[rustc guide]: https://rust-lang.github.io/rustc-guide/rustc-driver.html
+[rustc dev guide]: https://rustc-dev-guide.rust-lang.org/rustc-driver.html
pub extern crate rustc_plugin_impl as plugin;
-use rustc::lint::{Lint, LintId};
use rustc::middle::cstore::MetadataLoader;
-use rustc::session::config::nightly_options;
-use rustc::session::config::{ErrorOutputType, Input, OutputType, PrintRequest};
-use rustc::session::{config, DiagnosticOutput, Session};
-use rustc::session::{early_error, early_warn};
use rustc::ty::TyCtxt;
use rustc::util::common::ErrorReported;
use rustc_codegen_ssa::CodegenResults;
use rustc_codegen_utils::codegen_backend::CodegenBackend;
use rustc_data_structures::profiling::print_time_passes_entry;
use rustc_data_structures::sync::SeqCst;
-use rustc_errors::{registry::Registry, PResult};
+use rustc_errors::{
+ registry::{InvalidErrorCode, Registry},
+ PResult,
+};
use rustc_feature::{find_gated_cfg, UnstableFeatures};
use rustc_hir::def_id::LOCAL_CRATE;
use rustc_interface::util::{collect_crate_types, get_builtin_codegen_backend};
use rustc_save_analysis as save;
use rustc_save_analysis::DumpHandler;
use rustc_serialize::json::{self, ToJson};
+use rustc_session::config::nightly_options;
+use rustc_session::config::{ErrorOutputType, Input, OutputType, PrintRequest};
+use rustc_session::lint::{Lint, LintId};
+use rustc_session::{config, DiagnosticOutput, Session};
+use rustc_session::{early_error, early_warn};
use std::borrow::Cow;
use std::cmp::max;
fn handle_explain(registry: Registry, code: &str, output: ErrorOutputType) {
let normalised =
if code.starts_with('E') { code.to_string() } else { format!("E{0:0>4}", code) };
- match registry.find_description(&normalised) {
- Some(ref description) => {
+ match registry.try_find_description(&normalised) {
+ Ok(Some(description)) => {
let mut is_in_code_block = false;
let mut text = String::new();
-
// Slice off the leading newline and print.
for line in description.lines() {
let indent_level =
}
text.push('\n');
}
-
if stdout_isatty() {
show_content_with_pager(&text);
} else {
print!("{}", text);
}
}
- None => {
+ Ok(None) => {
early_error(output, &format!("no extended information for {}", code));
}
+ Err(InvalidErrorCode) => {
+ early_error(output, &format!("{} is not a valid error code", code));
+ }
}
}
odir: &Option<PathBuf>,
ofile: &Option<PathBuf>,
) -> Compilation {
- use rustc::session::config::PrintRequest::*;
+ use rustc_session::config::PrintRequest::*;
// PrintRequest::NativeStaticLibs is special - printed during linking
// (empty iterator returns true)
if sess.opts.prints.iter().all(|&p| p == PrintRequest::NativeStaticLibs) {
println!("{}", targets.join("\n"));
}
Sysroot => println!("{}", sess.sysroot.display()),
+ TargetLibdir => println!(
+ "{}",
+ sess.target_tlib_path.as_ref().unwrap_or(&sess.host_tlib_path).dir.display()
+ ),
TargetSpec => println!("{}", sess.target.target.to_json().pretty()),
FileNames | CrateName => {
let input = input.unwrap_or_else(|| {
return None;
}
- let matches = if let Some(matches) = handle_options(&args) {
- matches
- } else {
- return None;
- };
-
+ let matches = handle_options(&args)?;
let mut result = Vec::new();
let mut excluded_cargo_defaults = false;
for flag in ICE_REPORT_COMPILER_FLAGS {
//! The various pretty-printing routines.
use rustc::hir::map as hir_map;
-use rustc::session::config::{Input, PpMode, PpSourceMode};
-use rustc::session::Session;
use rustc::ty::{self, TyCtxt};
use rustc::util::common::ErrorReported;
use rustc_ast::ast;
use rustc_hir::def_id::LOCAL_CRATE;
use rustc_hir::print as pprust_hir;
use rustc_mir::util::{write_mir_graphviz, write_mir_pretty};
+use rustc_session::config::{Input, PpMode, PpSourceMode};
+use rustc_session::Session;
use rustc_span::FileName;
use std::cell::Cell;
///
/// (Rust does not yet support upcasting from a trait object to
/// an object for one of its super-traits.)
- fn pp_ann<'a>(&'a self) -> &'a dyn pprust::PpAnn;
+ fn pp_ann(&self) -> &dyn pprust::PpAnn;
}
trait HirPrinterSupport<'hir>: pprust_hir::PpAnn {
/// Provides a uniform interface for re-extracting a reference to an
/// `hir_map::Map` from a value that now owns it.
- fn hir_map<'a>(&'a self) -> Option<&'a hir_map::Map<'hir>>;
+ fn hir_map(&self) -> Option<hir_map::Map<'hir>>;
/// Produces the pretty-print annotation object.
///
/// (Rust does not yet support upcasting from a trait object to
/// an object for one of its super-traits.)
- fn pp_ann<'a>(&'a self) -> &'a dyn pprust_hir::PpAnn;
+ fn pp_ann(&self) -> &dyn pprust_hir::PpAnn;
/// Computes a user-readable representation of a path, if possible.
fn node_path(&self, id: hir::HirId) -> Option<String> {
self.sess
}
- fn pp_ann<'a>(&'a self) -> &'a dyn pprust::PpAnn {
+ fn pp_ann(&self) -> &dyn pprust::PpAnn {
self
}
}
self.sess
}
- fn hir_map<'a>(&'a self) -> Option<&'a hir_map::Map<'hir>> {
- self.tcx.map(|tcx| *tcx.hir())
+ fn hir_map(&self) -> Option<hir_map::Map<'hir>> {
+ self.tcx.map(|tcx| tcx.hir())
}
- fn pp_ann<'a>(&'a self) -> &'a dyn pprust_hir::PpAnn {
+ fn pp_ann(&self) -> &dyn pprust_hir::PpAnn {
self
}
}
impl<'hir> pprust_hir::PpAnn for NoAnn<'hir> {
fn nested(&self, state: &mut pprust_hir::State<'_>, nested: pprust_hir::Nested) {
if let Some(tcx) = self.tcx {
- pprust_hir::PpAnn::nested(*tcx.hir(), state, nested)
+ pprust_hir::PpAnn::nested(&tcx.hir(), state, nested)
}
}
}
self.sess
}
- fn pp_ann<'a>(&'a self) -> &'a dyn pprust::PpAnn {
+ fn pp_ann(&self) -> &dyn pprust::PpAnn {
self
}
}
self.sess
}
- fn hir_map<'a>(&'a self) -> Option<&'a hir_map::Map<'hir>> {
- self.tcx.map(|tcx| *tcx.hir())
+ fn hir_map(&self) -> Option<hir_map::Map<'hir>> {
+ self.tcx.map(|tcx| tcx.hir())
}
- fn pp_ann<'a>(&'a self) -> &'a dyn pprust_hir::PpAnn {
+ fn pp_ann(&self) -> &dyn pprust_hir::PpAnn {
self
}
}
impl<'hir> pprust_hir::PpAnn for IdentifiedAnnotation<'hir> {
fn nested(&self, state: &mut pprust_hir::State<'_>, nested: pprust_hir::Nested) {
if let Some(ref tcx) = self.tcx {
- pprust_hir::PpAnn::nested(*tcx.hir(), state, nested)
+ pprust_hir::PpAnn::nested(&tcx.hir(), state, nested)
}
}
fn pre(&self, s: &mut pprust_hir::State<'_>, node: pprust_hir::AnnNode<'_>) {
&self.tcx.sess
}
- fn hir_map<'a>(&'a self) -> Option<&'a hir_map::Map<'tcx>> {
- Some(&self.tcx.hir())
+ fn hir_map(&self) -> Option<hir_map::Map<'tcx>> {
+ Some(self.tcx.hir())
}
- fn pp_ann<'a>(&'a self) -> &'a dyn pprust_hir::PpAnn {
+ fn pp_ann(&self) -> &dyn pprust_hir::PpAnn {
self
}
if let pprust_hir::Nested::Body(id) = nested {
self.tables.set(self.tcx.body_tables(id));
}
- pprust_hir::PpAnn::nested(*self.tcx.hir(), state, nested);
+ pprust_hir::PpAnn::nested(&self.tcx.hir(), state, nested);
self.tables.set(old_tables);
}
fn pre(&self, s: &mut pprust_hir::State<'_>, node: pprust_hir::AnnNode<'_>) {
E0624: include_str!("./error_codes/E0624.md"),
E0626: include_str!("./error_codes/E0626.md"),
E0627: include_str!("./error_codes/E0627.md"),
+E0628: include_str!("./error_codes/E0628.md"),
E0631: include_str!("./error_codes/E0631.md"),
E0633: include_str!("./error_codes/E0633.md"),
+E0634: include_str!("./error_codes/E0634.md"),
E0635: include_str!("./error_codes/E0635.md"),
E0636: include_str!("./error_codes/E0636.md"),
E0637: include_str!("./error_codes/E0637.md"),
E0690: include_str!("./error_codes/E0690.md"),
E0691: include_str!("./error_codes/E0691.md"),
E0692: include_str!("./error_codes/E0692.md"),
+E0693: include_str!("./error_codes/E0693.md"),
E0695: include_str!("./error_codes/E0695.md"),
E0697: include_str!("./error_codes/E0697.md"),
E0698: include_str!("./error_codes/E0698.md"),
E0715: include_str!("./error_codes/E0715.md"),
E0716: include_str!("./error_codes/E0716.md"),
E0718: include_str!("./error_codes/E0718.md"),
+E0719: include_str!("./error_codes/E0719.md"),
E0720: include_str!("./error_codes/E0720.md"),
E0723: include_str!("./error_codes/E0723.md"),
E0725: include_str!("./error_codes/E0725.md"),
E0736: include_str!("./error_codes/E0736.md"),
E0737: include_str!("./error_codes/E0737.md"),
E0738: include_str!("./error_codes/E0738.md"),
+E0739: include_str!("./error_codes/E0739.md"),
E0740: include_str!("./error_codes/E0740.md"),
E0741: include_str!("./error_codes/E0741.md"),
E0742: include_str!("./error_codes/E0742.md"),
// E0612, // merged into E0609
// E0613, // Removed (merged with E0609)
E0625, // thread-local statics cannot be accessed at compile-time
- E0628, // generators cannot have explicit parameters
E0629, // missing 'feature' (rustc_const_unstable)
// rustc_const_unstable attribute must be paired with stable/unstable
// attribute
E0630,
E0632, // cannot provide explicit generic arguments when `impl Trait` is
// used in argument position
- E0634, // type has conflicting packed representaton hints
E0640, // infer outlives requirements
// E0645, // trait aliases not finished
E0657, // `impl Trait` can only capture lifetimes bound at the fn level
E0667, // `impl Trait` in projections
E0687, // in-band lifetimes cannot be used in `fn`/`Fn` syntax
E0688, // in-band lifetimes cannot be mixed with explicit lifetime binders
- E0693, // incorrect `repr(align)` attribute format
// E0694, // an unknown tool name found in scoped attributes
E0696, // `continue` pointing to a labeled block
// E0702, // replaced with a generic attribute input check
E0710, // an unknown tool name found in scoped lint
E0711, // a feature has been declared with conflicting stability attributes
E0717, // rustc_promotable without stability attribute
- E0719, // duplicate values for associated type binding
// E0721, // `await` keyword
E0722, // Malformed `#[optimize]` attribute
E0724, // `#[ffi_returns_twice]` is only allowed in foreign functions
E0726, // non-explicit (not `'_`) elided lifetime in unsupported position
- E0739, // invalid track_caller application/syntax
}
-The `Drop` trait was implemented on a non-struct type.
+Only traits defined in the current crate can be implemented for arbitrary types.
Erroneous code example:
-This error occurs when the compiler was unable to infer the concrete type of a
-variable. It can occur for several cases, the most common of which is a
-mismatch in the expected type that the compiler inferred for a variable's
-initializing expression, and the actual type explicitly assigned to the
-variable.
+Expected type did not match the received type.
-For example:
+Erroneous code example:
```compile_fail,E0308
let x: i32 = "I am not a number!";
// |
// type `i32` assigned to variable `x`
```
+
+This error occurs when the compiler was unable to infer the concrete type of a
+variable. It can occur for several cases, the most common of which is a
+mismatch in the expected type that the compiler inferred for a variable's
+initializing expression, and the actual type explicitly assigned to the
+variable.
-An implementation of a trait doesn't match the type contraint.
+An implementation of a trait doesn't match the type constraint.
Erroneous code example:
-A struct without a field containing an unsized type cannot implement
-`CoerceUnsized`. An [unsized type][1] is any type that the compiler
-doesn't know the length or alignment of at compile time. Any struct
-containing an unsized type is also unsized.
-
-[1]: https://doc.rust-lang.org/book/ch19-04-advanced-types.html#dynamically-sized-types-and-the-sized-trait
+`CoerceUnsized` was implemented on a struct which does not contain a field with
+an unsized type.
Example of erroneous code:
where T: CoerceUnsized<U> {}
```
+An [unsized type][1] is any type that the compiler does not know the length or
+alignment of at compile time. Any struct containing an unsized type is also
+unsized.
+
+[1]: https://doc.rust-lang.org/book/ch19-04-advanced-types.html#dynamically-sized-types-and-the-sized-trait
+
`CoerceUnsized` is used to coerce one struct containing an unsized type
into another struct containing a different unsized type. If the struct
doesn't have any fields of unsized types then you don't need explicit
-A struct with more than one field containing an unsized type cannot implement
-`CoerceUnsized`. This only occurs when you are trying to coerce one of the
-types in your struct to another type in the struct. In this case we try to
-impl `CoerceUnsized` from `T` to `U` which are both types that the struct
-takes. An [unsized type][1] is any type that the compiler doesn't know the
-length or alignment of at compile time. Any struct containing an unsized type
-is also unsized.
+`CoerceUnsized` was implemented on a struct which contains more than one field
+with an unsized type.
-Example of erroneous code:
+Erroneous code example:
```compile_fail,E0375
#![feature(coerce_unsized)]
impl<T, U> CoerceUnsized<Foo<U, T>> for Foo<T, U> {}
```
+A struct with more than one field containing an unsized type cannot implement
+`CoerceUnsized`. This only occurs when you are trying to coerce one of the
+types in your struct to another type in the struct. In this case we try to
+impl `CoerceUnsized` from `T` to `U` which are both types that the struct
+takes. An [unsized type][1] is any type that the compiler doesn't know the
+length or alignment of at compile time. Any struct containing an unsized type
+is also unsized.
+
`CoerceUnsized` only allows for coercion from a structure with a single
unsized type field to another struct with a single unsized type field.
In fact Rust only allows for a struct to have one unsized type in a struct
-The type you are trying to impl `CoerceUnsized` for is not a struct.
-`CoerceUnsized` can only be implemented for a struct. Unsized types are
-already able to be coerced without an implementation of `CoerceUnsized`
-whereas a struct containing an unsized type needs to know the unsized type
-field it's containing is able to be coerced. An [unsized type][1]
-is any type that the compiler doesn't know the length or alignment of at
-compile time. Any struct containing an unsized type is also unsized.
-
-[1]: https://doc.rust-lang.org/book/ch19-04-advanced-types.html#dynamically-sized-types-and-the-sized-trait
+`CoerceUnsized` was implemented on something that isn't a struct.
-Example of erroneous code:
+Erroneous code example:
```compile_fail,E0376
#![feature(coerce_unsized)]
impl<T, U> CoerceUnsized<U> for Foo<T> {}
```
+`CoerceUnsized` can only be implemented for a struct. Unsized types are
+already able to be coerced without an implementation of `CoerceUnsized`
+whereas a struct containing an unsized type needs to know that the unsized
+field it contains can be coerced. An [unsized type][1]
+is any type that the compiler doesn't know the length or alignment of at
+compile time. Any struct containing an unsized type is also unsized.
+
+[1]: https://doc.rust-lang.org/book/ch19-04-advanced-types.html#dynamically-sized-types-and-the-sized-trait
+
The `CoerceUnsized` trait takes a struct type. Make sure the type you are
providing to `CoerceUnsized` is a struct with only the last field containing an
unsized type.
+The `DispatchFromDyn` trait was implemented on something which is not a pointer
+or a newtype wrapper around a pointer.
+
+Erroneous code example:
+
+```compile-fail,E0378
+#![feature(dispatch_from_dyn)]
+use std::ops::DispatchFromDyn;
+
+struct WrapperExtraField<T> {
+ ptr: T,
+ extra_stuff: i32,
+}
+
+impl<T, U> DispatchFromDyn<WrapperExtraField<U>> for WrapperExtraField<T>
+where
+ T: DispatchFromDyn<U>,
+{}
+```
+
The `DispatchFromDyn` trait currently can only be implemented for
builtin pointer types and structs that are newtype wrappers around them
— that is, the struct must have only one field (except for `PhantomData`),
and that field must itself implement `DispatchFromDyn`.
-Examples:
-
```
#![feature(dispatch_from_dyn, unsize)]
use std::{
{}
```
+Another example:
+
```
#![feature(dispatch_from_dyn)]
use std::{
T: DispatchFromDyn<U>,
{}
```
-
-Example of illegal `DispatchFromDyn` implementation
-(illegal because of extra field)
-
-```compile-fail,E0378
-#![feature(dispatch_from_dyn)]
-use std::ops::DispatchFromDyn;
-
-struct WrapperExtraField<T> {
- ptr: T,
- extra_stuff: i32,
-}
-
-impl<T, U> DispatchFromDyn<WrapperExtraField<U>> for WrapperExtraField<T>
-where
- T: DispatchFromDyn<U>,
-{}
-```
+A trait method was declared const.
+
+Erroneous code example:
+
+```compile_fail,E0379
+#![feature(const_fn)]
+
+trait Foo {
+ const fn bar() -> u32; // error!
+}
+```
+
Trait methods cannot be declared `const` by design. For more information, see
[RFC 911].
-Auto traits cannot have methods or associated items.
-For more information see the [opt-in builtin traits RFC][RFC 19].
+An auto trait was declared with a method or an associated item.
+
+Erroneous code example:
+
+```compile_fail,E0380
+unsafe auto trait Trait {
+ type Output; // error!
+}
+```
+
+Auto traits cannot have methods or associated items. For more information see
+the [opt-in builtin traits RFC][RFC 19].
[RFC 19]: https://github.com/rust-lang/rfcs/blob/master/text/0019-opt-in-builtin-traits.md
-This error occurs when an attempt is made to use a variable after its contents
-have been moved elsewhere.
+A variable was used after its contents have been moved elsewhere.
Erroneous code example:
-This error occurs when an attempt is made to reassign an immutable variable.
+An immutable variable was reassigned.
Erroneous code example:
-You tried to implement methods for a primitive type. Erroneous code example:
+A method was implemented on a primitive type.
+
+Erroneous code example:
```compile_fail,E0390
struct Foo {
-This error indicates that some types or traits depend on each other
-and therefore cannot be constructed.
+A type dependency cycle has been encountered.
-The following example contains a circular dependency between two traits:
+Erroneous code example:
```compile_fail,E0391
trait FirstTrait : SecondTrait {
}
```
+
+The previous example contains a circular dependency between two traits:
+`FirstTrait` depends on `SecondTrait` which itself depends on `FirstTrait`.
-This error indicates that a type or lifetime parameter has been declared
-but not actually used. Here is an example that demonstrates the error:
+A type or lifetime parameter has been declared but is not actually used.
+
+Erroneous code example:
```compile_fail,E0392
enum Foo<T> {
A type parameter which references `Self` in its default value was not specified.
-Example of erroneous code:
+
+Erroneous code example:
```compile_fail,E0393
trait A<T=Self> {}
-The type name used is not in scope.
+A used type name is not in scope.
Erroneous code examples:
-You are trying to use an identifier that is either undefined or not a struct.
+An identifier that is neither defined nor a struct was used.
+
Erroneous code example:
```compile_fail,E0422
An identifier was used like a function name or a value was expected and the
identifier exists but it belongs to a different namespace.
-For (an erroneous) example, here a `struct` variant name were used as a
-function:
+Erroneous code example:
```compile_fail,E0423
struct Foo { a: bool };
-This error indicates that a variable usage inside an inner function is invalid
-because the variable comes from a dynamic environment. Inner functions do not
-have access to their containing environment.
+A variable used inside an inner function comes from a dynamic environment.
Erroneous code example:
}
```
-Functions do not capture local variables. To fix this error, you can replace the
-function with a closure:
+Inner functions do not have access to their containing environment. To fix this
+error, you can replace the function with a closure:
```
fn foo() {
}
```
-or replace the captured variable with a constant or a static item:
+Or replace the captured variable with a constant or a static item:
```
fn foo() {
-Trait implementations can only implement associated types that are members of
-the trait in question. This error indicates that you attempted to implement
-an associated type whose name does not match the name of any associated type
-in the trait.
+An associated type whose name does not match any of the associated types
+in the trait was used when implementing the trait.
Erroneous code example:
}
```
+Trait implementations can only implement associated types that are members of
+the trait in question.
+
The solution to this problem is to remove the extraneous associated type:
```
-Trait implementations can only implement associated constants that are
-members of the trait in question. This error indicates that you
-attempted to implement an associated constant whose name does not
-match the name of any associated constant in the trait.
+An associated constant whose name does not match any of the associated constants
+in the trait was used when implementing the trait.
Erroneous code example:
}
```
+Trait implementations can only implement associated constants that are
+members of the trait in question.
+
The solution to this problem is to remove the extraneous associated constant:
```
- change the original fn declaration to match the expected signature,
and do the cast in the fn body (the preferred option)
-- cast the fn item fo a fn pointer before calling transmute, as shown here:
+- cast the fn item to a fn pointer before calling transmute, as shown here:
```
# extern "C" fn foo(_: Box<i32>) {}
--- /dev/null
+More than one parameter was used for a generator.
+
+Erroneous code example:
+
+```compile_fail,E0628
+#![feature(generators, generator_trait)]
+
+fn main() {
+ let generator = |a: i32, b: i32| {
+ // error: too many parameters for a generator
+ // Allowed only 0 or 1 parameter
+ yield a;
+ };
+}
+```
+
+At present, it is not permitted to pass more than one explicit parameter to a
+generator. This can be fixed by using at most one parameter for the generator.
+For example, we might resolve the previous example by passing only one
+parameter.
+
+```
+#![feature(generators, generator_trait)]
+
+fn main() {
+ let generator = |a: i32| {
+ yield a;
+ };
+}
+```
--- /dev/null
+A type has conflicting `packed` representation hints.
+
+Erroneous code examples:
+
+```compile_fail,E0634
+#[repr(packed, packed(2))] // error!
+struct Company(i32);
+
+#[repr(packed(2))] // error!
+#[repr(packed)]
+struct Company2(i32);
+```
+
+You cannot use conflicting `packed` hints on the same type. If you want to pack
+a type to a given size, you should provide a size to `packed`:
+
+```
+#[repr(packed)] // ok!
+struct Company(i32);
+```
--- /dev/null
+An `align` representation hint was incorrectly declared.
+
+Erroneous code examples:
+
+```compile_fail,E0693
+#[repr(align=8)] // error!
+struct Align8(i8);
+
+#[repr(align="8")] // error!
+struct Align8(i8);
+```
+
+This is a syntax error at the level of attribute declarations. The proper
+syntax for the `align` representation hint is the following:
+
+```
+#[repr(align(8))] // ok!
+struct Align8(i8);
+```
--- /dev/null
+The value for an associated type has already been specified.
+
+Erroneous code example:
+
+```compile_fail,E0719
+#![feature(associated_type_bounds)]
+
+trait FooTrait {}
+trait BarTrait {}
+
+// error: associated type `Item` in trait `Iterator` is specified twice
+struct Foo<T: Iterator<Item: FooTrait, Item: BarTrait>> { f: T }
+```
+
+`Item` in trait `Iterator` cannot be specified multiple times for struct `Foo`.
+To fix this, create a new trait that is a combination of the desired traits and
+specify the associated type with the new trait.
+
+Corrected example:
+
+```
+#![feature(associated_type_bounds)]
+
+trait FooTrait {}
+trait BarTrait {}
+trait FooBarTrait: FooTrait + BarTrait {}
+
+struct Foo<T: Iterator<Item: FooBarTrait>> { f: T }
+```
+
+For more information about associated types, see [the book][bk-at]. For more
+information on associated type bounds, see [RFC 2289][rfc-2289].
+
+[bk-at]: https://doc.rust-lang.org/book/ch19-03-advanced-traits.html#specifying-placeholder-types-in-trait-definitions-with-associated-types
+[rfc-2289]: https://rust-lang.github.io/rfcs/2289-associated-type-bounds.html
--- /dev/null
+`#[track_caller]` cannot be applied to a struct.
+
+Erroneous code example:
+
+```compile_fail,E0739
+#![feature(track_caller)]
+#[track_caller]
+struct Bar {
+ a: u8,
+}
+```
+
+See [RFC 2091] for details on this and other limitations.
+
+[RFC 2091]: https://github.com/rust-lang/rfcs/blob/master/text/2091-inline-semantic.md
macro_rules! register_diagnostics {
($($ecode:ident: $message:expr,)* ; $($code:ident,)*) => (
- pub static DIAGNOSTICS: &[(&str, &str)] = &[
- $( (stringify!($ecode), $message), )*
+ pub static DIAGNOSTICS: &[(&str, Option<&str>)] = &[
+ $( (stringify!($ecode), Some($message)), )*
+ $( (stringify!($code), None), )*
];
)
}
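Expanded on a toy input, the revised macro produces `Some`-documented entries from the first list and `None` entries from the second. The codes below are invented purely for illustration:

```rust
// Same macro shape as the diff above: documented codes before the `;`,
// undocumented codes after it.
macro_rules! register_diagnostics {
    ($($ecode:ident: $message:expr,)* ; $($code:ident,)*) => (
        pub static DIAGNOSTICS: &[(&str, Option<&str>)] = &[
            $( (stringify!($ecode), Some($message)), )*
            $( (stringify!($code), None), )*
        ];
    )
}

// Hypothetical error codes, for demonstration only.
register_diagnostics! {
    E9001: "has extended documentation",
    ;
    E9002,
}

fn main() {
    assert_eq!(DIAGNOSTICS[0], ("E9001", Some("has extended documentation")));
    assert_eq!(DIAGNOSTICS[1], ("E9002", None));
}
```

Folding both lists into one table of `Option<&str>` is what lets the registry distinguish "valid code without docs" from "unknown code".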
let handler = self.0.handler;
- // We need to use `ptr::read` because `DiagnosticBuilder` implements `Drop`.
- let diagnostic;
- unsafe {
- diagnostic = std::ptr::read(&self.0.diagnostic);
- std::mem::forget(self);
- };
+ // We must use `Level::Cancelled` for `dummy` to avoid an ICE about an
+ // unused diagnostic.
+ let dummy = Diagnostic::new(Level::Cancelled, "");
+ let diagnostic = std::mem::replace(&mut self.0.diagnostic, dummy);
+
// Logging here is useful to help track down where in logs an error was
// actually emitted.
debug!("buffer: diagnostic={:?}", diagnostic);
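The `mem::replace` swap above is the standard safe pattern for moving a field out of a value whose type implements `Drop` (where a plain field move is rejected). A minimal standalone sketch, with the `Buffered`/`Payload` names invented for illustration:

```rust
use std::mem;

struct Payload(String);

struct Buffered {
    payload: Payload,
}

impl Drop for Buffered {
    fn drop(&mut self) {
        // The destructor still runs, but after `into_payload` it only ever
        // sees the cheap dummy that was swapped in.
    }
}

impl Buffered {
    // Move the payload out by swapping in a dummy value, mirroring how the
    // diff swaps in a `Level::Cancelled` diagnostic instead of `ptr::read`.
    fn into_payload(mut self) -> Payload {
        mem::replace(&mut self.payload, Payload(String::new()))
    }
}

fn main() {
    let b = Buffered { payload: Payload("real".to_string()) };
    let p = b.into_payload(); // `b` is dropped here, holding only the dummy
    assert_eq!(p.0, "real");
}
```

Compared with the old `ptr::read` + `mem::forget` dance, this needs no `unsafe` and cannot double-drop.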
//! There are various `Emitter` implementations that generate different output formats such as
//! JSON and human readable output.
//!
-//! The output types are defined in `librustc::session::config::ErrorOutputType`.
+//! The output types are defined in `rustc_session::config::ErrorOutputType`.
use Destination::*;
}
// This does a small "fix" for multispans by looking to see if it can find any that
- // point directly at <*macros>. Since these are often difficult to read, this
- // will change the span to point at the use site.
+ // point directly at external macros. Since these are often difficult to read,
+ // this will change the span to point at the use site.
fn fix_multispans_in_extern_macros(
&self,
source_map: &Option<Lrc<SourceMap>>,
span: &mut MultiSpan,
children: &mut Vec<SubDiagnostic>,
) {
- for span in iter::once(span).chain(children.iter_mut().map(|child| &mut child.span)) {
+ debug!("fix_multispans_in_extern_macros: before: span={:?} children={:?}", span, children);
+ for span in iter::once(&mut *span).chain(children.iter_mut().map(|child| &mut child.span)) {
self.fix_multispan_in_extern_macros(source_map, span);
}
+ debug!("fix_multispans_in_extern_macros: after: span={:?} children={:?}", span, children);
}
- // This "fixes" MultiSpans that contain Spans that are pointing to locations inside of
- // <*macros>. Since these locations are often difficult to read, we move these Spans from
- // <*macros> to their corresponding use site.
+ // This "fixes" MultiSpans that contain `Span`s pointing to locations inside of external macros.
+ // Since these locations are often difficult to read,
+ // we move these spans from the external macros to their corresponding use site.
fn fix_multispan_in_extern_macros(
&self,
source_map: &Option<Lrc<SourceMap>>,
None => return,
};
- // First, find all the spans in <*macros> and point instead at their use site
+ // First, find all the spans in external macros and point instead at their use site.
let replacements: Vec<(Span, Span)> = span
.primary_spans()
.iter()
.copied()
.chain(span.span_labels().iter().map(|sp_label| sp_label.span))
.filter_map(|sp| {
- if !sp.is_dummy() && sm.span_to_filename(sp).is_macros() {
+ if !sp.is_dummy() && sm.is_imported(sp) {
let maybe_callsite = sp.source_callsite();
if sp != maybe_callsite {
return Some((sp, maybe_callsite));
})
.collect();
- // After we have them, make sure we replace these 'bad' def sites with their use sites
+ // After we have them, make sure we replace these 'bad' def sites with their use sites.
for (from, to) in replacements {
span.replace(from, to);
}
fn emit_diagnostic(&mut self, diag: &Diagnostic) {
let mut children = diag.children.clone();
let (mut primary_span, suggestions) = self.primary_span_formatted(&diag);
+ debug!("emit_diagnostic: suggestions={:?}", suggestions);
self.fix_multispans_in_extern_macros_and_render_macro_backtrace(
&self.sm,
// Render the replacements for each suggestion
let suggestions = suggestion.splice_lines(&**sm);
+ debug!("emit_suggestion_default: suggestions={:?}", suggestions);
if suggestions.is_empty() {
// Suggestions coming from macros can have malformed spans. This is a heavy handed
let line_start = sm.lookup_char_pos(parts[0].span.lo()).line;
draw_col_separator_no_space(&mut buffer, 1, max_line_num_len + 1);
- let mut line_pos = 0;
let mut lines = complete.lines();
- for line in lines.by_ref().take(MAX_SUGGESTION_HIGHLIGHT_LINES) {
+ for (line_pos, line) in lines.by_ref().take(MAX_SUGGESTION_HIGHLIGHT_LINES).enumerate()
+ {
// Print the span column to avoid confusion
buffer.puts(
row_num,
// print the suggestion
draw_col_separator(&mut buffer, row_num, max_line_num_len + 1);
buffer.append(row_num, line, Style::NoStyle);
- line_pos += 1;
row_num += 1;
}
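The refactor above replaces a hand-rolled `line_pos` counter with `Iterator::enumerate`, while `by_ref` keeps the underlying iterator usable after `take` stops. A small sketch of the same idiom (the `numbered_prefix` helper is invented for illustration):

```rust
/// Pair each of the first `max` lines with its index, the way the diff's
/// `lines.by_ref().take(MAX).enumerate()` loop does.
fn numbered_prefix(text: &str, max: usize) -> Vec<(usize, String)> {
    let mut out = Vec::new();
    let mut lines = text.lines();
    for (line_pos, line) in lines.by_ref().take(max).enumerate() {
        out.push((line_pos, line.to_string()));
    }
    // `by_ref` means `lines` is still usable here, resuming where `take`
    // stopped, just as the compiler keeps printing any remaining lines.
    out
}

fn main() {
    let got = numbered_prefix("a\nb\nc", 2);
    assert_eq!(got, vec![(0, "a".to_string()), (1, "b".to_string())]);
}
```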
je.sm
.span_to_lines(span)
.map(|lines| {
+ // We can't get any lines if the source is unavailable.
+ if !je.sm.ensure_source_file_source_present(lines.file.clone()) {
+ return vec![];
+ }
+
let sf = &*lines.file;
lines
.lines
DiagnosticId::Error(s) => s,
DiagnosticId::Lint(s) => s,
};
- let explanation =
- je.registry.as_ref().and_then(|registry| registry.find_description(&s));
+ let je_result =
+ je.registry.as_ref().map(|registry| registry.try_find_description(&s)).unwrap();
- DiagnosticCode { code: s, explanation }
+ DiagnosticCode { code: s, explanation: je_result.unwrap_or(None) }
})
}
}
None => buf.push_str(&line[lo..]),
}
}
- if let None = hi_opt {
+ if hi_opt.is_none() {
buf.push('\n');
}
}
let lines = sm.span_to_lines(bounding_span).ok()?;
assert!(!lines.lines.is_empty());
+ // We can't splice anything if the source is unavailable.
+ if !sm.ensure_source_file_source_present(lines.file.clone()) {
+ return None;
+ }
+
// To build up the result, we do this for each span:
// - push the line segment trailing the previous span
// (at the beginning a "phantom" span pointing at the start of the line)
}
/// Stash a given diagnostic with the given `Span` and `StashKey` as the key for later stealing.
- /// If the diagnostic with this `(span, key)` already exists, this will result in an ICE.
pub fn stash_diagnostic(&self, span: Span, key: StashKey, diag: Diagnostic) {
let mut inner = self.inner.borrow_mut();
- if let Some(mut old_diag) = inner.stashed_diagnostics.insert((span, key), diag) {
- // We are removing a previously stashed diagnostic which should not happen.
- old_diag.level = Bug;
- old_diag.note(&format!(
- "{}:{}: already existing stashed diagnostic with (span = {:?}, key = {:?})",
- file!(),
- line!(),
- span,
- key
- ));
- inner.emit_diag_at_span(old_diag, span);
- panic!(ExplicitBug);
- }
+ // FIXME(Centril, #69537): Consider reintroducing panic on overwriting a stashed diagnostic
+ // if/when we have a more robust macro-friendly replacement for `(span, key)` as a key.
+ // See the PR for a discussion.
+ inner.stashed_diagnostics.insert((span, key), diag);
}
/// Steal a previously stashed diagnostic with the given `Span` and `StashKey` as the key.
.emitted_diagnostic_codes
.iter()
.filter_map(|x| match &x {
- DiagnosticId::Error(s) if registry.find_description(s).is_some() => {
- Some(s.clone())
+ DiagnosticId::Error(s) => {
+ if let Ok(Some(_explanation)) = registry.try_find_description(s) {
+ Some(s.clone())
+ } else {
+ None
+ }
}
_ => None,
})
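The `filter_map` above keeps only codes whose lookup returns `Ok(Some(_))`, dropping both invalid codes (`Err`) and valid codes without extended docs (`Ok(None)`). The same filtering shape in isolation, over toy data with made-up codes:

```rust
/// Keep only the codes whose lookup result is `Ok(Some(_))`, mirroring the
/// diff's filter over `try_find_description` results.
fn keep_documented(results: &[(&str, Result<Option<&str>, ()>)]) -> Vec<String> {
    results
        .iter()
        .filter_map(|(code, res)| {
            if let Ok(Some(_explanation)) = res { Some(code.to_string()) } else { None }
        })
        .collect()
}

fn main() {
    let data = [
        ("E9001", Ok(Some("docs"))), // documented: kept
        ("E9002", Ok(None)),         // valid code, no extended docs: dropped
        ("E9999", Err(())),          // invalid code: dropped
    ];
    assert_eq!(keep_documented(&data), vec!["E9001".to_string()]);
}
```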
use rustc_data_structures::fx::FxHashMap;
+#[derive(Debug)]
+pub struct InvalidErrorCode;
+
#[derive(Clone)]
pub struct Registry {
- descriptions: FxHashMap<&'static str, &'static str>,
+ long_descriptions: FxHashMap<&'static str, Option<&'static str>>,
}
impl Registry {
- pub fn new(descriptions: &[(&'static str, &'static str)]) -> Registry {
- Registry { descriptions: descriptions.iter().cloned().collect() }
+ pub fn new(long_descriptions: &[(&'static str, Option<&'static str>)]) -> Registry {
+ Registry { long_descriptions: long_descriptions.iter().cloned().collect() }
}
+ /// This will panic if an invalid error code is passed in.
pub fn find_description(&self, code: &str) -> Option<&'static str> {
- self.descriptions.get(code).cloned()
+ self.try_find_description(code).unwrap()
+ }
+ /// Returns `InvalidErrorCode` if the code requested does not exist in the
+ /// registry. Otherwise, returns an `Option` where `None` means the error
+ /// code is valid but has no extended information.
+ pub fn try_find_description(
+ &self,
+ code: &str,
+ ) -> Result<Option<&'static str>, InvalidErrorCode> {
+ if !self.long_descriptions.contains_key(code) {
+ return Err(InvalidErrorCode);
+ }
+ Ok(*self.long_descriptions.get(code).unwrap())
}
}
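A standalone sketch of the new two-level lookup, using `std::collections::HashMap` in place of `FxHashMap` and hypothetical error codes; the real registry is populated from the `register_diagnostics!` table:

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
pub struct InvalidErrorCode;

pub struct Registry {
    long_descriptions: HashMap<&'static str, Option<&'static str>>,
}

impl Registry {
    pub fn new(long_descriptions: &[(&'static str, Option<&'static str>)]) -> Registry {
        Registry { long_descriptions: long_descriptions.iter().cloned().collect() }
    }

    /// `Err(InvalidErrorCode)` for unknown codes; `Ok(None)` for known codes
    /// that simply lack extended documentation.
    pub fn try_find_description(
        &self,
        code: &str,
    ) -> Result<Option<&'static str>, InvalidErrorCode> {
        self.long_descriptions.get(code).copied().ok_or(InvalidErrorCode)
    }
}

fn main() {
    let registry = Registry::new(&[("E9001", Some("long description")), ("E9002", None)]);
    assert_eq!(registry.try_find_description("E9001"), Ok(Some("long description")));
    assert_eq!(registry.try_find_description("E9002"), Ok(None));
    assert_eq!(registry.try_find_description("E9999"), Err(InvalidErrorCode));
}
```

Splitting "unknown code" from "no extended docs" is exactly what lets `handle_explain` print the two distinct messages shown earlier in the diff.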
use crate::expand::{self, AstFragment, Invocation};
+use crate::module::DirectoryOwnership;
use rustc_ast::ast::{self, Attribute, Name, NodeId, PatKind};
use rustc_ast::mut_visit::{self, MutVisitor};
use rustc_ast::ptr::P;
use rustc_ast::token;
-use rustc_ast::tokenstream::{self, TokenStream};
+use rustc_ast::tokenstream::{self, TokenStream, TokenTree};
use rustc_ast::visit::{AssocCtxt, Visitor};
use rustc_attr::{self as attr, Deprecation, HasAttrs, Stability};
use rustc_data_structures::fx::FxHashMap;
use rustc_data_structures::sync::{self, Lrc};
use rustc_errors::{DiagnosticBuilder, DiagnosticId};
-use rustc_parse::{self, parser, DirectoryOwnership, MACRO_ARGUMENTS};
+use rustc_parse::{self, parser, MACRO_ARGUMENTS};
use rustc_session::parse::ParseSess;
use rustc_span::edition::Edition;
use rustc_span::hygiene::{AstPass, ExpnData, ExpnId, ExpnKind};
}
}
+ crate fn into_tokens(self) -> TokenStream {
+ // `Annotatable` can be converted into tokens directly, but we
+ // are packing it into a nonterminal as a piece of AST to make
+ // the produced token stream look nicer in pretty-printed form.
+ let nt = match self {
+ Annotatable::Item(item) => token::NtItem(item),
+ Annotatable::TraitItem(item) | Annotatable::ImplItem(item) => {
+ token::NtItem(P(item.and_then(ast::AssocItem::into_item)))
+ }
+ Annotatable::ForeignItem(item) => {
+ token::NtItem(P(item.and_then(ast::ForeignItem::into_item)))
+ }
+ Annotatable::Stmt(stmt) => token::NtStmt(stmt.into_inner()),
+ Annotatable::Expr(expr) => token::NtExpr(expr),
+ Annotatable::Arm(..)
+ | Annotatable::Field(..)
+ | Annotatable::FieldPat(..)
+ | Annotatable::GenericParam(..)
+ | Annotatable::Param(..)
+ | Annotatable::StructField(..)
+ | Annotatable::Variant(..) => panic!("unexpected annotatable"),
+ };
+ TokenTree::token(token::Interpolated(Lrc::new(nt)), DUMMY_SP).into()
+ }
+
pub fn expect_item(self) -> P<ast::Item> {
match self {
Annotatable::Item(i) => i,
}
}
-// `meta_item` is the annotation, and `item` is the item being modified.
-// FIXME Decorators should follow the same pattern too.
+/// Result of an expansion that may need to be retried.
+/// Consider using this for non-`MultiItemModifier` expanders as well.
+pub enum ExpandResult<T, U> {
+ /// Expansion produced a result (possibly dummy).
+ Ready(T),
+ /// Expansion could not produce a result and needs to be retried.
+ /// The string is an explanation that will be printed if we are stuck in an infinite retry loop.
+ Retry(U, String),
+}
+
+// `meta_item` is the attribute, and `item` is the item being modified.
pub trait MultiItemModifier {
fn expand(
&self,
span: Span,
meta_item: &ast::MetaItem,
item: Annotatable,
- ) -> Vec<Annotatable>;
+ ) -> ExpandResult<Vec<Annotatable>, Annotatable>;
}
-impl<F, T> MultiItemModifier for F
+impl<F> MultiItemModifier for F
where
- F: Fn(&mut ExtCtxt<'_>, Span, &ast::MetaItem, Annotatable) -> T,
- T: Into<Vec<Annotatable>>,
+ F: Fn(&mut ExtCtxt<'_>, Span, &ast::MetaItem, Annotatable) -> Vec<Annotatable>,
{
fn expand(
&self,
span: Span,
meta_item: &ast::MetaItem,
item: Annotatable,
- ) -> Vec<Annotatable> {
- (*self)(ecx, span, meta_item, item).into()
- }
-}
-
-impl Into<Vec<Annotatable>> for Annotatable {
- fn into(self) -> Vec<Annotatable> {
- vec![self]
+ ) -> ExpandResult<Vec<Annotatable>, Annotatable> {
+ ExpandResult::Ready(self(ecx, span, meta_item, item))
}
}
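The `Ready`/`Retry` split introduced above can be sketched outside rustc as a small driver loop (a hedged toy model with invented names — `drain`, `dummy` — not rustc's actual `fully_expand_fragment` machinery): retried items go back on the queue, and once a full pass makes no progress the next pass is forced, at which point a still-stuck item prints its explanation and yields a dummy result instead of looping forever.

```rust
// Toy model of the retry loop: `Retry` hands the input back so it can be
// re-queued; `force` breaks a would-be infinite retry cycle with a dummy.
enum ExpandResult<T, U> {
    Ready(T),
    Retry(U, String),
}

fn drain<T, U>(
    mut queue: Vec<U>,
    mut expand: impl FnMut(U, bool) -> ExpandResult<T, U>,
    dummy: impl Fn(&U) -> T,
) -> Vec<T> {
    let mut out = Vec::new();
    let mut force = false;
    loop {
        let before = queue.len();
        let mut stuck = Vec::new();
        for item in queue.drain(..) {
            match expand(item, force) {
                ExpandResult::Ready(t) => out.push(t),
                ExpandResult::Retry(item, explanation) => {
                    if force {
                        // Stuck even when forced: report and produce a dummy.
                        eprintln!("{}", explanation);
                        out.push(dummy(&item));
                    } else {
                        // Cannot expand yet; retry on a later pass.
                        stuck.push(item);
                    }
                }
            }
        }
        if stuck.is_empty() {
            return out;
        }
        // Force the next pass only if this one made no progress at all.
        force = stuck.len() == before;
        queue = stuck;
    }
}
```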
mut_visit::noop_visit_tt(tt, self)
}
- fn visit_mac(&mut self, mac: &mut ast::Mac) {
+ fn visit_mac(&mut self, mac: &mut ast::MacCall) {
mut_visit::noop_visit_mac(mac, self)
}
}
fn has_derive_copy(&self, expn_id: ExpnId) -> bool;
fn add_derive_copy(&mut self, expn_id: ExpnId);
+ fn cfg_accessible(&mut self, expn_id: ExpnId, path: &ast::Path) -> Result<bool, Indeterminate>;
}
#[derive(Clone)]
pub resolver: &'a mut dyn Resolver,
pub current_expansion: ExpansionData,
pub expansions: FxHashMap<Span, Vec<String>>,
+ /// Called directly after having parsed an external `mod foo;` in expansion.
+ pub(super) extern_mod_loaded: Option<&'a dyn Fn(&ast::Crate)>,
}
impl<'a> ExtCtxt<'a> {
parse_sess: &'a ParseSess,
ecfg: expand::ExpansionConfig<'a>,
resolver: &'a mut dyn Resolver,
+ extern_mod_loaded: Option<&'a dyn Fn(&ast::Crate)>,
) -> ExtCtxt<'a> {
ExtCtxt {
parse_sess,
ecfg,
- root_path: PathBuf::new(),
resolver,
+ extern_mod_loaded,
+ root_path: PathBuf::new(),
current_expansion: ExpansionData {
id: ExpnId::root(),
depth: 0,
--- /dev/null
+//! Conditional compilation stripping.
+
+use rustc_ast::ast::{self, AttrItem, Attribute, MetaItem};
+use rustc_ast::attr::HasAttrs;
+use rustc_ast::mut_visit::*;
+use rustc_ast::ptr::P;
+use rustc_ast::util::map_in_place::MapInPlace;
+use rustc_attr as attr;
+use rustc_data_structures::fx::FxHashMap;
+use rustc_errors::{error_code, struct_span_err, Applicability, Handler};
+use rustc_feature::{Feature, Features, State as FeatureState};
+use rustc_feature::{
+ ACCEPTED_FEATURES, ACTIVE_FEATURES, REMOVED_FEATURES, STABLE_REMOVED_FEATURES,
+};
+use rustc_parse::{parse_in, validate_attr};
+use rustc_session::parse::{feature_err, ParseSess};
+use rustc_span::edition::{Edition, ALL_EDITIONS};
+use rustc_span::symbol::{sym, Symbol};
+use rustc_span::{Span, DUMMY_SP};
+
+use smallvec::SmallVec;
+
+/// A folder that strips out items that do not belong in the current configuration.
+pub struct StripUnconfigured<'a> {
+ pub sess: &'a ParseSess,
+ pub features: Option<&'a Features>,
+}
+
+fn get_features(
+ span_handler: &Handler,
+ krate_attrs: &[ast::Attribute],
+ crate_edition: Edition,
+ allow_features: &Option<Vec<String>>,
+) -> Features {
+ fn feature_removed(span_handler: &Handler, span: Span, reason: Option<&str>) {
+ let mut err = struct_span_err!(span_handler, span, E0557, "feature has been removed");
+ err.span_label(span, "feature has been removed");
+ if let Some(reason) = reason {
+ err.note(reason);
+ }
+ err.emit();
+ }
+
+ fn active_features_up_to(edition: Edition) -> impl Iterator<Item = &'static Feature> {
+ ACTIVE_FEATURES.iter().filter(move |feature| {
+ if let Some(feature_edition) = feature.edition {
+ feature_edition <= edition
+ } else {
+ false
+ }
+ })
+ }
+
+ let mut features = Features::default();
+ let mut edition_enabled_features = FxHashMap::default();
+
+ for &edition in ALL_EDITIONS {
+ if edition <= crate_edition {
+ // The `crate_edition` implies its respective umbrella feature-gate
+ // (i.e., `#![feature(rust_20XX_preview)]` isn't needed on edition 20XX).
+ edition_enabled_features.insert(edition.feature_name(), edition);
+ }
+ }
+
+ for feature in active_features_up_to(crate_edition) {
+ feature.set(&mut features, DUMMY_SP);
+ edition_enabled_features.insert(feature.name, crate_edition);
+ }
+
+ // Process the edition umbrella feature-gates first, to ensure
+ // `edition_enabled_features` is completed before it's queried.
+ for attr in krate_attrs {
+ if !attr.check_name(sym::feature) {
+ continue;
+ }
+
+ let list = match attr.meta_item_list() {
+ Some(list) => list,
+ None => continue,
+ };
+
+ for mi in list {
+ if !mi.is_word() {
+ continue;
+ }
+
+ let name = mi.name_or_empty();
+
+ let edition = ALL_EDITIONS.iter().find(|e| name == e.feature_name()).copied();
+ if let Some(edition) = edition {
+ if edition <= crate_edition {
+ continue;
+ }
+
+ for feature in active_features_up_to(edition) {
+ // FIXME(Manishearth) there is currently no way to set
+ // lib features by edition
+ feature.set(&mut features, DUMMY_SP);
+ edition_enabled_features.insert(feature.name, edition);
+ }
+ }
+ }
+ }
+
+ for attr in krate_attrs {
+ if !attr.check_name(sym::feature) {
+ continue;
+ }
+
+ let list = match attr.meta_item_list() {
+ Some(list) => list,
+ None => continue,
+ };
+
+ let bad_input = |span| {
+ struct_span_err!(span_handler, span, E0556, "malformed `feature` attribute input")
+ };
+
+ for mi in list {
+ let name = match mi.ident() {
+ Some(ident) if mi.is_word() => ident.name,
+ Some(ident) => {
+ bad_input(mi.span())
+ .span_suggestion(
+ mi.span(),
+ "expected just one word",
+ format!("{}", ident.name),
+ Applicability::MaybeIncorrect,
+ )
+ .emit();
+ continue;
+ }
+ None => {
+ bad_input(mi.span()).span_label(mi.span(), "expected just one word").emit();
+ continue;
+ }
+ };
+
+ if let Some(edition) = edition_enabled_features.get(&name) {
+ let msg =
+ &format!("the feature `{}` is included in the Rust {} edition", name, edition);
+ span_handler.struct_span_warn_with_code(mi.span(), msg, error_code!(E0705)).emit();
+ continue;
+ }
+
+ if ALL_EDITIONS.iter().any(|e| name == e.feature_name()) {
+ // Handled in the separate loop above.
+ continue;
+ }
+
+ let removed = REMOVED_FEATURES.iter().find(|f| name == f.name);
+ let stable_removed = STABLE_REMOVED_FEATURES.iter().find(|f| name == f.name);
+ if let Some(Feature { state, .. }) = removed.or(stable_removed) {
+ if let FeatureState::Removed { reason } | FeatureState::Stabilized { reason } =
+ state
+ {
+ feature_removed(span_handler, mi.span(), *reason);
+ continue;
+ }
+ }
+
+ if let Some(Feature { since, .. }) = ACCEPTED_FEATURES.iter().find(|f| name == f.name) {
+ let since = Some(Symbol::intern(since));
+ features.declared_lang_features.push((name, mi.span(), since));
+ continue;
+ }
+
+ if let Some(allowed) = allow_features.as_ref() {
+ if !allowed.iter().any(|f| name.as_str() == *f) {
+ struct_span_err!(
+ span_handler,
+ mi.span(),
+ E0725,
+ "the feature `{}` is not in the list of allowed features",
+ name
+ )
+ .emit();
+ continue;
+ }
+ }
+
+ if let Some(f) = ACTIVE_FEATURES.iter().find(|f| name == f.name) {
+ f.set(&mut features, mi.span());
+ features.declared_lang_features.push((name, mi.span(), None));
+ continue;
+ }
+
+ features.declared_lib_features.push((name, mi.span()));
+ }
+ }
+
+ features
+}
+
+// `cfg_attr`-process the crate's attributes and compute the crate's features.
+pub fn features(
+ mut krate: ast::Crate,
+ sess: &ParseSess,
+ edition: Edition,
+ allow_features: &Option<Vec<String>>,
+) -> (ast::Crate, Features) {
+ let mut strip_unconfigured = StripUnconfigured { sess, features: None };
+
+ let unconfigured_attrs = krate.attrs.clone();
+ let diag = &sess.span_diagnostic;
+ let err_count = diag.err_count();
+ let features = match strip_unconfigured.configure(krate.attrs) {
+ None => {
+ // The entire crate is unconfigured.
+ krate.attrs = Vec::new();
+ krate.module.items = Vec::new();
+ Features::default()
+ }
+ Some(attrs) => {
+ krate.attrs = attrs;
+ let features = get_features(diag, &krate.attrs, edition, allow_features);
+ if err_count == diag.err_count() {
+ // Avoid reconfiguring malformed `cfg_attr`s.
+ strip_unconfigured.features = Some(&features);
+ strip_unconfigured.configure(unconfigured_attrs);
+ }
+ features
+ }
+ };
+ (krate, features)
+}
+
+#[macro_export]
+macro_rules! configure {
+ ($this:ident, $node:ident) => {
+ match $this.configure($node) {
+ Some(node) => node,
+ None => return Default::default(),
+ }
+ };
+}
+
+const CFG_ATTR_GRAMMAR_HELP: &str = "#[cfg_attr(condition, attribute, other_attribute, ...)]";
+const CFG_ATTR_NOTE_REF: &str = "for more information, visit \
+ <https://doc.rust-lang.org/reference/conditional-compilation.html\
+ #the-cfg_attr-attribute>";
+
+impl<'a> StripUnconfigured<'a> {
+ pub fn configure<T: HasAttrs>(&mut self, mut node: T) -> Option<T> {
+ self.process_cfg_attrs(&mut node);
+ self.in_cfg(node.attrs()).then_some(node)
+ }
+
+ /// Parse and expand all `cfg_attr` attributes into a list of attributes
+ /// that are within each `cfg_attr` that has a true configuration predicate.
+ ///
+ /// Gives compiler warnings if any `cfg_attr` does not contain any
+ /// attributes and is in the original source code. Gives compiler errors if
+ /// the syntax of any `cfg_attr` is incorrect.
+ pub fn process_cfg_attrs<T: HasAttrs>(&mut self, node: &mut T) {
+ node.visit_attrs(|attrs| {
+ attrs.flat_map_in_place(|attr| self.process_cfg_attr(attr));
+ });
+ }
+
+ /// Parse and expand a single `cfg_attr` attribute into a list of attributes
+ /// when the configuration predicate is true, or otherwise expand into an
+ /// empty list of attributes.
+ ///
+ /// Gives a compiler warning when the `cfg_attr` contains no attributes and
+ /// is in the original source file. Gives a compiler error if the syntax of
+ /// the attribute is incorrect.
+ fn process_cfg_attr(&mut self, attr: Attribute) -> Vec<Attribute> {
+ if !attr.has_name(sym::cfg_attr) {
+ return vec![attr];
+ }
+
+ let (cfg_predicate, expanded_attrs) = match self.parse_cfg_attr(&attr) {
+ None => return vec![],
+ Some(r) => r,
+ };
+
+ // Lint on zero attributes in source.
+ if expanded_attrs.is_empty() {
+ return vec![attr];
+ }
+
+ // At this point we know the attribute is considered used.
+ attr::mark_used(&attr);
+
+ if !attr::cfg_matches(&cfg_predicate, self.sess, self.features) {
+ return vec![];
+ }
+
+ // We call `process_cfg_attr` recursively in case there's a
+ // `cfg_attr` inside another `cfg_attr`. E.g.
+ // `#[cfg_attr(false, cfg_attr(true, some_attr))]`.
+ expanded_attrs
+ .into_iter()
+ .flat_map(|(item, span)| {
+ let attr = attr::mk_attr_from_item(attr.style, item, span);
+ self.process_cfg_attr(attr)
+ })
+ .collect()
+ }
+
+ fn parse_cfg_attr(&self, attr: &Attribute) -> Option<(MetaItem, Vec<(AttrItem, Span)>)> {
+ match attr.get_normal_item().args {
+ ast::MacArgs::Delimited(dspan, delim, ref tts) if !tts.is_empty() => {
+ let msg = "wrong `cfg_attr` delimiters";
+ validate_attr::check_meta_bad_delim(self.sess, dspan, delim, msg);
+ match parse_in(self.sess, tts.clone(), "`cfg_attr` input", |p| p.parse_cfg_attr()) {
+ Ok(r) => return Some(r),
+ Err(mut e) => {
+ e.help(&format!("the valid syntax is `{}`", CFG_ATTR_GRAMMAR_HELP))
+ .note(CFG_ATTR_NOTE_REF)
+ .emit();
+ }
+ }
+ }
+ _ => self.error_malformed_cfg_attr_missing(attr.span),
+ }
+ None
+ }
+
+ fn error_malformed_cfg_attr_missing(&self, span: Span) {
+ self.sess
+ .span_diagnostic
+ .struct_span_err(span, "malformed `cfg_attr` attribute input")
+ .span_suggestion(
+ span,
+ "missing condition and attribute",
+ CFG_ATTR_GRAMMAR_HELP.to_string(),
+ Applicability::HasPlaceholders,
+ )
+ .note(CFG_ATTR_NOTE_REF)
+ .emit();
+ }
+
+ /// Determines if a node with the given attributes should be included in this configuration.
+ pub fn in_cfg(&self, attrs: &[Attribute]) -> bool {
+ attrs.iter().all(|attr| {
+ if !is_cfg(attr) {
+ return true;
+ }
+ let meta_item = match validate_attr::parse_meta(self.sess, attr) {
+ Ok(meta_item) => meta_item,
+ Err(mut err) => {
+ err.emit();
+ return true;
+ }
+ };
+ let error = |span, msg, suggestion: &str| {
+ let mut err = self.sess.span_diagnostic.struct_span_err(span, msg);
+ if !suggestion.is_empty() {
+ err.span_suggestion(
+ span,
+ "expected syntax is",
+ suggestion.into(),
+ Applicability::MaybeIncorrect,
+ );
+ }
+ err.emit();
+ true
+ };
+ let span = meta_item.span;
+ match meta_item.meta_item_list() {
+ None => error(span, "`cfg` is not followed by parentheses", "cfg(/* predicate */)"),
+ Some([]) => error(span, "`cfg` predicate is not specified", ""),
+ Some([_, .., l]) => error(l.span(), "multiple `cfg` predicates are specified", ""),
+ Some([single]) => match single.meta_item() {
+ Some(meta_item) => attr::cfg_matches(meta_item, self.sess, self.features),
+ None => error(single.span(), "`cfg` predicate key cannot be a literal", ""),
+ },
+ }
+ })
+ }
+
+ /// Visit attributes on expression and statements (but not attributes on items in blocks).
+ fn visit_expr_attrs(&mut self, attrs: &[Attribute]) {
+ // flag the offending attributes
+ for attr in attrs.iter() {
+ self.maybe_emit_expr_attr_err(attr);
+ }
+ }
+
+ /// If attributes are not allowed on expressions, emit an error for `attr`.
+ pub fn maybe_emit_expr_attr_err(&self, attr: &Attribute) {
+ if !self.features.map(|features| features.stmt_expr_attributes).unwrap_or(true) {
+ let mut err = feature_err(
+ self.sess,
+ sym::stmt_expr_attributes,
+ attr.span,
+ "attributes on expressions are experimental",
+ );
+
+ if attr.is_doc_comment() {
+ err.help("`///` is for documentation comments. For a plain comment, use `//`.");
+ }
+
+ err.emit();
+ }
+ }
+
+ pub fn configure_foreign_mod(&mut self, foreign_mod: &mut ast::ForeignMod) {
+ let ast::ForeignMod { abi: _, items } = foreign_mod;
+ items.flat_map_in_place(|item| self.configure(item));
+ }
+
+ pub fn configure_generic_params(&mut self, params: &mut Vec<ast::GenericParam>) {
+ params.flat_map_in_place(|param| self.configure(param));
+ }
+
+ fn configure_variant_data(&mut self, vdata: &mut ast::VariantData) {
+ match vdata {
+ ast::VariantData::Struct(fields, ..) | ast::VariantData::Tuple(fields, _) => {
+ fields.flat_map_in_place(|field| self.configure(field))
+ }
+ ast::VariantData::Unit(_) => {}
+ }
+ }
+
+ pub fn configure_item_kind(&mut self, item: &mut ast::ItemKind) {
+ match item {
+ ast::ItemKind::Struct(def, _generics) | ast::ItemKind::Union(def, _generics) => {
+ self.configure_variant_data(def)
+ }
+ ast::ItemKind::Enum(ast::EnumDef { variants }, _generics) => {
+ variants.flat_map_in_place(|variant| self.configure(variant));
+ for variant in variants {
+ self.configure_variant_data(&mut variant.data);
+ }
+ }
+ _ => {}
+ }
+ }
+
+ pub fn configure_expr_kind(&mut self, expr_kind: &mut ast::ExprKind) {
+ match expr_kind {
+ ast::ExprKind::Match(_m, arms) => {
+ arms.flat_map_in_place(|arm| self.configure(arm));
+ }
+ ast::ExprKind::Struct(_path, fields, _base) => {
+ fields.flat_map_in_place(|field| self.configure(field));
+ }
+ _ => {}
+ }
+ }
+
+ pub fn configure_expr(&mut self, expr: &mut P<ast::Expr>) {
+ self.visit_expr_attrs(expr.attrs());
+
+ // If an expr is valid to cfg away, it will have been removed by the
+ // outer stmt or expression folder before we descend into it here.
+ // Anything else is always required, and thus has to error out
+ // in case of a `cfg` attr.
+ //
+ // N.B., this check is intentionally not part of visit_expr() so that
+ // filter_map_expr() can avoid it.
+ if let Some(attr) = expr.attrs().iter().find(|a| is_cfg(a)) {
+ let msg = "removing an expression is not supported in this position";
+ self.sess.span_diagnostic.span_err(attr.span, msg);
+ }
+
+ self.process_cfg_attrs(expr)
+ }
+
+ pub fn configure_pat(&mut self, pat: &mut P<ast::Pat>) {
+ if let ast::PatKind::Struct(_path, fields, _etc) = &mut pat.kind {
+ fields.flat_map_in_place(|field| self.configure(field));
+ }
+ }
+
+ pub fn configure_fn_decl(&mut self, fn_decl: &mut ast::FnDecl) {
+ fn_decl.inputs.flat_map_in_place(|arg| self.configure(arg));
+ }
+}
+
+impl<'a> MutVisitor for StripUnconfigured<'a> {
+ fn visit_foreign_mod(&mut self, foreign_mod: &mut ast::ForeignMod) {
+ self.configure_foreign_mod(foreign_mod);
+ noop_visit_foreign_mod(foreign_mod, self);
+ }
+
+ fn visit_item_kind(&mut self, item: &mut ast::ItemKind) {
+ self.configure_item_kind(item);
+ noop_visit_item_kind(item, self);
+ }
+
+ fn visit_expr(&mut self, expr: &mut P<ast::Expr>) {
+ self.configure_expr(expr);
+ self.configure_expr_kind(&mut expr.kind);
+ noop_visit_expr(expr, self);
+ }
+
+ fn filter_map_expr(&mut self, expr: P<ast::Expr>) -> Option<P<ast::Expr>> {
+ let mut expr = configure!(self, expr);
+ self.configure_expr_kind(&mut expr.kind);
+ noop_visit_expr(&mut expr, self);
+ Some(expr)
+ }
+
+ fn flat_map_stmt(&mut self, stmt: ast::Stmt) -> SmallVec<[ast::Stmt; 1]> {
+ noop_flat_map_stmt(configure!(self, stmt), self)
+ }
+
+ fn flat_map_item(&mut self, item: P<ast::Item>) -> SmallVec<[P<ast::Item>; 1]> {
+ noop_flat_map_item(configure!(self, item), self)
+ }
+
+ fn flat_map_impl_item(&mut self, item: P<ast::AssocItem>) -> SmallVec<[P<ast::AssocItem>; 1]> {
+ noop_flat_map_assoc_item(configure!(self, item), self)
+ }
+
+ fn flat_map_trait_item(&mut self, item: P<ast::AssocItem>) -> SmallVec<[P<ast::AssocItem>; 1]> {
+ noop_flat_map_assoc_item(configure!(self, item), self)
+ }
+
+ fn visit_mac(&mut self, _mac: &mut ast::MacCall) {
+ // Don't configure interpolated AST (cf. issue #34171).
+ // Interpolated AST will get configured once the surrounding tokens are parsed.
+ }
+
+ fn visit_pat(&mut self, pat: &mut P<ast::Pat>) {
+ self.configure_pat(pat);
+ noop_visit_pat(pat, self)
+ }
+
+ fn visit_fn_decl(&mut self, mut fn_decl: &mut P<ast::FnDecl>) {
+ self.configure_fn_decl(&mut fn_decl);
+ noop_visit_fn_decl(fn_decl, self);
+ }
+}
+
+fn is_cfg(attr: &Attribute) -> bool {
+ attr.check_name(sym::cfg)
+}
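The recursion in `process_cfg_attr` above can be illustrated with a toy model (a hypothetical `Attr` enum, not rustc's AST; the pre-evaluated booleans stand in for `attr::cfg_matches`): a `cfg_attr` with a false predicate contributes nothing, a true one contributes its payload, and nested `cfg_attr`s are flattened by recursing on each expanded attribute.

```rust
// Toy model of recursive `cfg_attr` expansion (invented types).
#[derive(Clone, Debug, PartialEq)]
enum Attr {
    Plain(&'static str),
    CfgAttr(bool, Vec<Attr>),
}

fn process(attr: Attr) -> Vec<Attr> {
    match attr {
        // Ordinary attributes pass through unchanged.
        plain @ Attr::Plain(_) => vec![plain],
        // False predicate: the whole payload is stripped.
        Attr::CfgAttr(false, _) => vec![],
        // True predicate: expand the payload, recursing for nested `cfg_attr`s.
        Attr::CfgAttr(true, inner) => inner.into_iter().flat_map(process).collect(),
    }
}
```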
use crate::base::*;
use crate::config::StripUnconfigured;
+use crate::configure;
use crate::hygiene::{ExpnData, ExpnId, ExpnKind, SyntaxContext};
use crate::mbe::macro_rules::annotate_err_with_kind;
+use crate::module::{parse_external_mod, push_directory, Directory, DirectoryOwnership};
use crate::placeholders::{placeholder, PlaceholderExpander};
use crate::proc_macro::collect_derives;
use rustc_ast::mut_visit::*;
use rustc_ast::ptr::P;
use rustc_ast::token;
-use rustc_ast::tokenstream::{TokenStream, TokenTree};
+use rustc_ast::tokenstream::TokenStream;
use rustc_ast::util::map_in_place::MapInPlace;
use rustc_ast::visit::{self, AssocCtxt, Visitor};
use rustc_ast_pretty::pprust;
use rustc_attr::{self as attr, is_builtin_attr, HasAttrs};
-use rustc_data_structures::sync::Lrc;
use rustc_errors::{Applicability, FatalError, PResult};
use rustc_feature::Features;
-use rustc_parse::configure;
use rustc_parse::parser::Parser;
use rustc_parse::validate_attr;
-use rustc_parse::DirectoryOwnership;
use rustc_session::lint::builtin::UNUSED_DOC_COMMENTS;
use rustc_session::lint::BuiltinLintDiagnostics;
use rustc_session::parse::{feature_err, ParseSess};
pub enum InvocationKind {
Bang {
- mac: ast::Mac,
+ mac: ast::MacCall,
span: Span,
},
Attr {
let mut undetermined_invocations = Vec::new();
let (mut progress, mut force) = (false, !self.monotonic);
loop {
- let invoc = if let Some(invoc) = invocations.pop() {
+ let (invoc, res) = if let Some(invoc) = invocations.pop() {
invoc
} else {
self.resolve_imports();
continue;
};
- let eager_expansion_root =
- if self.monotonic { invoc.expansion_data.id } else { orig_expansion_data.id };
- let res = match self.cx.resolver.resolve_macro_invocation(
- &invoc,
- eager_expansion_root,
- force,
- ) {
- Ok(res) => res,
- Err(Indeterminate) => {
- undetermined_invocations.push(invoc);
- continue;
+ let res = match res {
+ Some(res) => res,
+ None => {
+ let eager_expansion_root = if self.monotonic {
+ invoc.expansion_data.id
+ } else {
+ orig_expansion_data.id
+ };
+ match self.cx.resolver.resolve_macro_invocation(
+ &invoc,
+ eager_expansion_root,
+ force,
+ ) {
+ Ok(res) => res,
+ Err(Indeterminate) => {
+ // Cannot resolve, will retry this invocation later.
+ undetermined_invocations.push((invoc, None));
+ continue;
+ }
+ }
}
};
- progress = true;
let ExpansionData { depth, id: expn_id, .. } = invoc.expansion_data;
self.cx.current_expansion = invoc.expansion_data.clone();
// FIXME(jseyfried): Refactor out the following logic
let (expanded_fragment, new_invocations) = match res {
- InvocationRes::Single(ext) => {
- let fragment = self.expand_invoc(invoc, &ext.kind);
- self.collect_invocations(fragment, &[])
- }
+ InvocationRes::Single(ext) => match self.expand_invoc(invoc, &ext.kind) {
+ ExpandResult::Ready(fragment) => self.collect_invocations(fragment, &[]),
+ ExpandResult::Retry(invoc, explanation) => {
+ if force {
+ // We are stuck, stop retrying and produce a dummy fragment.
+ let span = invoc.span();
+ self.cx.span_err(span, &explanation);
+ let fragment = invoc.fragment_kind.dummy(span);
+ self.collect_invocations(fragment, &[])
+ } else {
+ // Cannot expand, will retry this invocation later.
+ undetermined_invocations
+ .push((invoc, Some(InvocationRes::Single(ext))));
+ continue;
+ }
+ }
+ },
InvocationRes::DeriveContainer(_exts) => {
// FIXME: Consider using the derive resolutions (`_exts`) immediately,
// instead of enqueuing the derives to be resolved again later.
for path in derives {
let expn_id = ExpnId::fresh(None);
derive_placeholders.push(NodeId::placeholder_from_expn_id(expn_id));
- invocations.push(Invocation {
- kind: InvocationKind::Derive { path, item: item.clone() },
- fragment_kind: invoc.fragment_kind,
- expansion_data: ExpansionData {
- id: expn_id,
- ..invoc.expansion_data.clone()
+ invocations.push((
+ Invocation {
+ kind: InvocationKind::Derive { path, item: item.clone() },
+ fragment_kind: invoc.fragment_kind,
+ expansion_data: ExpansionData {
+ id: expn_id,
+ ..invoc.expansion_data.clone()
+ },
},
- });
+ None,
+ ));
}
let fragment =
invoc.fragment_kind.expect_from_annotatables(::std::iter::once(item));
}
};
+ progress = true;
if expanded_fragments.len() < depth {
expanded_fragments.push(Vec::new());
}
&mut self,
mut fragment: AstFragment,
extra_placeholders: &[NodeId],
- ) -> (AstFragment, Vec<Invocation>) {
+ ) -> (AstFragment, Vec<(Invocation, Option<InvocationRes>)>) {
// Resolve `$crate`s in the fragment for pretty-printing.
self.cx.resolver.resolve_dollar_crates();
/// A macro's expansion does not fit in this fragment kind.
/// For example, a non-type macro in a type position.
- fn error_wrong_fragment_kind(&mut self, kind: AstFragmentKind, mac: &ast::Mac, span: Span) {
+ fn error_wrong_fragment_kind(&mut self, kind: AstFragmentKind, mac: &ast::MacCall, span: Span) {
let msg = format!(
"non-{kind} macro in {kind} position: {path}",
kind = kind.name(),
self.cx.trace_macros_diag();
}
- fn expand_invoc(&mut self, invoc: Invocation, ext: &SyntaxExtensionKind) -> AstFragment {
+ fn expand_invoc(
+ &mut self,
+ invoc: Invocation,
+ ext: &SyntaxExtensionKind,
+ ) -> ExpandResult<AstFragment, Invocation> {
if self.cx.current_expansion.depth > self.cx.ecfg.recursion_limit {
self.error_recursion_limit_reached();
}
let (fragment_kind, span) = (invoc.fragment_kind, invoc.span());
- match invoc.kind {
+ ExpandResult::Ready(match invoc.kind {
InvocationKind::Bang { mac, .. } => match ext {
SyntaxExtensionKind::Bang(expander) => {
self.gate_proc_macro_expansion_kind(span, fragment_kind);
}
_ => unreachable!(),
},
- InvocationKind::Attr { attr, mut item, .. } => match ext {
+ InvocationKind::Attr { attr, mut item, derives, after_derive } => match ext {
SyntaxExtensionKind::Attr(expander) => {
self.gate_proc_macro_input(&item);
self.gate_proc_macro_attr_item(span, &item);
- // `Annotatable` can be converted into tokens directly, but we are packing it
- // into a nonterminal as a piece of AST to make the produced token stream
- // look nicer in pretty-printed form. This may be no longer necessary.
- let item_tok = TokenTree::token(
- token::Interpolated(Lrc::new(match item {
- Annotatable::Item(item) => token::NtItem(item),
- Annotatable::TraitItem(item)
- | Annotatable::ImplItem(item)
- | Annotatable::ForeignItem(item) => {
- token::NtItem(P(item.and_then(ast::AssocItem::into_item)))
- }
- Annotatable::Stmt(stmt) => token::NtStmt(stmt.into_inner()),
- Annotatable::Expr(expr) => token::NtExpr(expr),
- Annotatable::Arm(..)
- | Annotatable::Field(..)
- | Annotatable::FieldPat(..)
- | Annotatable::GenericParam(..)
- | Annotatable::Param(..)
- | Annotatable::StructField(..)
- | Annotatable::Variant(..) => panic!("unexpected annotatable"),
- })),
- DUMMY_SP,
- )
- .into();
- let item = attr.unwrap_normal_item();
- if let MacArgs::Eq(..) = item.args {
+ let tokens = item.into_tokens();
+ let attr_item = attr.unwrap_normal_item();
+ if let MacArgs::Eq(..) = attr_item.args {
self.cx.span_err(span, "key-value macro attributes are not supported");
}
let tok_result =
- expander.expand(self.cx, span, item.args.inner_tokens(), item_tok);
- self.parse_ast_fragment(tok_result, fragment_kind, &item.path, span)
+ expander.expand(self.cx, span, attr_item.args.inner_tokens(), tokens);
+ self.parse_ast_fragment(tok_result, fragment_kind, &attr_item.path, span)
}
SyntaxExtensionKind::LegacyAttr(expander) => {
match validate_attr::parse_meta(self.cx.parse_sess, &attr) {
Ok(meta) => {
- let item = expander.expand(self.cx, span, &meta, item);
- fragment_kind.expect_from_annotatables(item)
+ let items = match expander.expand(self.cx, span, &meta, item) {
+ ExpandResult::Ready(items) => items,
+ ExpandResult::Retry(item, explanation) => {
+ // Reassemble the original invocation for retrying.
+ return ExpandResult::Retry(
+ Invocation {
+ kind: InvocationKind::Attr {
+ attr,
+ item,
+ derives,
+ after_derive,
+ },
+ ..invoc
+ },
+ explanation,
+ );
+ }
+ };
+ fragment_kind.expect_from_annotatables(items)
}
Err(mut err) => {
err.emit();
SyntaxExtensionKind::Derive(expander)
| SyntaxExtensionKind::LegacyDerive(expander) => {
if !item.derive_allowed() {
- return fragment_kind.dummy(span);
+ return ExpandResult::Ready(fragment_kind.dummy(span));
}
if let SyntaxExtensionKind::Derive(..) = ext {
self.gate_proc_macro_input(&item);
}
let meta = ast::MetaItem { kind: ast::MetaItemKind::Word, span, path };
- let items = expander.expand(self.cx, span, &meta, item);
+ let items = match expander.expand(self.cx, span, &meta, item) {
+ ExpandResult::Ready(items) => items,
+ ExpandResult::Retry(item, explanation) => {
+ // Reassemble the original invocation for retrying.
+ return ExpandResult::Retry(
+ Invocation {
+ kind: InvocationKind::Derive { path: meta.path, item },
+ ..invoc
+ },
+ explanation,
+ );
+ }
+ };
fragment_kind.expect_from_annotatables(items)
}
_ => unreachable!(),
},
InvocationKind::DeriveContainer { .. } => unreachable!(),
- }
+ })
}
fn gate_proc_macro_attr_item(&self, span: Span, item: &Annotatable) {
visit::walk_item(self, item);
}
- fn visit_mac(&mut self, _: &'ast ast::Mac) {}
+ fn visit_mac(&mut self, _: &'ast ast::MacCall) {}
}
if !self.cx.ecfg.proc_macro_hygiene() {
struct InvocationCollector<'a, 'b> {
cx: &'a mut ExtCtxt<'b>,
cfg: StripUnconfigured<'a>,
- invocations: Vec<Invocation>,
+ invocations: Vec<(Invocation, Option<InvocationRes>)>,
monotonic: bool,
}
};
let expn_id = ExpnId::fresh(expn_data);
let vis = kind.placeholder_visibility();
- self.invocations.push(Invocation {
- kind,
- fragment_kind,
- expansion_data: ExpansionData {
- id: expn_id,
- depth: self.cx.current_expansion.depth + 1,
- ..self.cx.current_expansion.clone()
+ self.invocations.push((
+ Invocation {
+ kind,
+ fragment_kind,
+ expansion_data: ExpansionData {
+ id: expn_id,
+ depth: self.cx.current_expansion.depth + 1,
+ ..self.cx.current_expansion.clone()
+ },
},
- });
+ None,
+ ));
placeholder(fragment_kind, NodeId::placeholder_from_expn_id(expn_id), vis)
}
- fn collect_bang(&mut self, mac: ast::Mac, span: Span, kind: AstFragmentKind) -> AstFragment {
+ fn collect_bang(
+ &mut self,
+ mac: ast::MacCall,
+ span: Span,
+ kind: AstFragmentKind,
+ ) -> AstFragment {
self.collect(kind, InvocationKind::Bang { mac, span })
}
.into_inner();
}
- if let ast::ExprKind::Mac(mac) = expr.kind {
+ if let ast::ExprKind::MacCall(mac) = expr.kind {
self.check_attributes(&expr.attrs);
self.collect_bang(mac, expr.span, AstFragmentKind::Expr).make_expr().into_inner()
} else {
.map(|expr| expr.into_inner());
}
- if let ast::ExprKind::Mac(mac) = expr.kind {
+ if let ast::ExprKind::MacCall(mac) = expr.kind {
self.check_attributes(&expr.attrs);
self.collect_bang(mac, expr.span, AstFragmentKind::OptExpr)
.make_opt_expr()
fn visit_pat(&mut self, pat: &mut P<ast::Pat>) {
self.cfg.configure_pat(pat);
match pat.kind {
- PatKind::Mac(_) => {}
+ PatKind::MacCall(_) => {}
_ => return noop_visit_pat(pat, self),
}
visit_clobber(pat, |mut pat| match mem::replace(&mut pat.kind, PatKind::Wild) {
- PatKind::Mac(mac) => self.collect_bang(mac, pat.span, AstFragmentKind::Pat).make_pat(),
+ PatKind::MacCall(mac) => {
+ self.collect_bang(mac, pat.span, AstFragmentKind::Pat).make_pat()
+ }
_ => unreachable!(),
});
}
}
}
- if let StmtKind::Mac(mac) = stmt.kind {
+ if let StmtKind::MacCall(mac) = stmt.kind {
let (mac, style, attrs) = mac.into_inner();
self.check_attributes(&attrs);
let mut placeholder =
.make_items();
}
+ let mut attrs = mem::take(&mut item.attrs); // We do this to please borrowck.
+ let ident = item.ident;
+ let span = item.span;
+
match item.kind {
- ast::ItemKind::Mac(..) => {
+ ast::ItemKind::MacCall(..) => {
+ item.attrs = attrs;
self.check_attributes(&item.attrs);
item.and_then(|item| match item.kind {
- ItemKind::Mac(mac) => self
- .collect(
- AstFragmentKind::Items,
- InvocationKind::Bang { mac, span: item.span },
- )
+ ItemKind::MacCall(mac) => self
+ .collect(AstFragmentKind::Items, InvocationKind::Bang { mac, span })
.make_items(),
_ => unreachable!(),
})
}
- ast::ItemKind::Mod(ast::Mod { inner, inline, .. })
- if item.ident != Ident::invalid() =>
- {
- let orig_directory_ownership = self.cx.current_expansion.directory_ownership;
+ ast::ItemKind::Mod(ref mut old_mod @ ast::Mod { .. }) if ident != Ident::invalid() => {
+ let sess = self.cx.parse_sess;
+ let orig_ownership = self.cx.current_expansion.directory_ownership;
let mut module = (*self.cx.current_expansion.module).clone();
- module.mod_path.push(item.ident);
- if inline {
- if let Some(path) = attr::first_attr_value_str_by_name(&item.attrs, sym::path) {
- self.cx.current_expansion.directory_ownership =
- DirectoryOwnership::Owned { relative: None };
- module.directory.push(&*path.as_str());
- } else {
- module.directory.push(&*item.ident.as_str());
- }
+ let pushed = &mut false; // Record `parse_external_mod` pushing so we can pop.
+ let dir = Directory { ownership: orig_ownership, path: module.directory };
+ let Directory { ownership, path } = if old_mod.inline {
+ // Inline `mod foo { ... }`, but we still need to push directories.
+ item.attrs = attrs;
+ push_directory(ident, &item.attrs, dir)
} else {
- let path = self.cx.parse_sess.source_map().span_to_unmapped_path(inner);
- let mut path = match path {
- FileName::Real(path) => path,
- other => PathBuf::from(other.to_string()),
+ // We have an outline `mod foo;` so we need to parse the file.
+ let (new_mod, dir) =
+ parse_external_mod(sess, ident, span, dir, &mut attrs, pushed);
+
+ let krate = ast::Crate {
+ span: new_mod.inner,
+ module: new_mod,
+ attrs,
+ proc_macros: vec![],
};
- let directory_ownership = match path.file_name().unwrap().to_str() {
- Some("mod.rs") => DirectoryOwnership::Owned { relative: None },
- Some(_) => DirectoryOwnership::Owned { relative: Some(item.ident) },
- None => DirectoryOwnership::UnownedViaMod,
+ if let Some(extern_mod_loaded) = self.cx.extern_mod_loaded {
+ extern_mod_loaded(&krate);
+ }
+
+ *old_mod = krate.module;
+ item.attrs = krate.attrs;
+ // File can have inline attributes, e.g., `#![cfg(...)]` & co. => Reconfigure.
+ item = match self.configure(item) {
+ Some(node) => node,
+ None => {
+ if *pushed {
+ sess.included_mod_stack.borrow_mut().pop();
+ }
+ return Default::default();
+ }
};
- path.pop();
- module.directory = path;
- self.cx.current_expansion.directory_ownership = directory_ownership;
- }
+ dir
+ };
+ // Set the module info before we flat map.
+ self.cx.current_expansion.directory_ownership = ownership;
+ module.directory = path;
+ module.mod_path.push(ident);
let orig_module =
mem::replace(&mut self.cx.current_expansion.module, Rc::new(module));
+
let result = noop_flat_map_item(item, self);
+
+ // Restore the module info.
self.cx.current_expansion.module = orig_module;
- self.cx.current_expansion.directory_ownership = orig_directory_ownership;
+ self.cx.current_expansion.directory_ownership = orig_ownership;
+ if *pushed {
+ sess.included_mod_stack.borrow_mut().pop();
+ }
result
}
-
- _ => noop_flat_map_item(item, self),
+ _ => {
+ item.attrs = attrs;
+ noop_flat_map_item(item, self)
+ }
}
}
}
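The hunk above has to repeat `if *pushed { sess.included_mod_stack.borrow_mut().pop(); }` at every exit path (the early `return Default::default()` and the normal fall-through). As a sketch of the invariant being maintained, not the actual rustc types, the same push/pop pairing can be expressed with a drop guard; `INCLUDED_MOD_STACK`, `ModStackGuard`, and `stack_depth` below are hypothetical stand-ins:

```rust
use std::cell::RefCell;
use std::path::PathBuf;

// Hypothetical stand-in for `ParseSess::included_mod_stack`.
thread_local! {
    static INCLUDED_MOD_STACK: RefCell<Vec<PathBuf>> = RefCell::new(Vec::new());
}

/// Pops the pushed path when dropped, so every early return restores the
/// stack without repeating the pop by hand.
struct ModStackGuard;

impl ModStackGuard {
    fn push(path: PathBuf) -> Self {
        INCLUDED_MOD_STACK.with(|s| s.borrow_mut().push(path));
        ModStackGuard
    }
}

impl Drop for ModStackGuard {
    fn drop(&mut self) {
        INCLUDED_MOD_STACK.with(|s| {
            s.borrow_mut().pop();
        });
    }
}

fn stack_depth() -> usize {
    INCLUDED_MOD_STACK.with(|s| s.borrow().len())
}
```

The diff instead threads a `&mut bool` (`pushed`) through `parse_external_mod`, presumably because the pop must happen on the session's stack after the flat map completes, not at an arbitrary scope end.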
match item.kind {
- ast::AssocItemKind::Macro(..) => {
+ ast::AssocItemKind::MacCall(..) => {
self.check_attributes(&item.attrs);
item.and_then(|item| match item.kind {
- ast::AssocItemKind::Macro(mac) => self
+ ast::AssocItemKind::MacCall(mac) => self
.collect_bang(mac, item.span, AstFragmentKind::TraitItems)
.make_trait_items(),
_ => unreachable!(),
}
match item.kind {
- ast::AssocItemKind::Macro(..) => {
+ ast::AssocItemKind::MacCall(..) => {
self.check_attributes(&item.attrs);
item.and_then(|item| match item.kind {
- ast::AssocItemKind::Macro(mac) => self
+ ast::AssocItemKind::MacCall(mac) => self
.collect_bang(mac, item.span, AstFragmentKind::ImplItems)
.make_impl_items(),
_ => unreachable!(),
fn visit_ty(&mut self, ty: &mut P<ast::Ty>) {
match ty.kind {
- ast::TyKind::Mac(_) => {}
+ ast::TyKind::MacCall(_) => {}
_ => return noop_visit_ty(ty, self),
};
visit_clobber(ty, |mut ty| match mem::replace(&mut ty.kind, ast::TyKind::Err) {
- ast::TyKind::Mac(mac) => self.collect_bang(mac, ty.span, AstFragmentKind::Ty).make_ty(),
+ ast::TyKind::MacCall(mac) => {
+ self.collect_bang(mac, ty.span, AstFragmentKind::Ty).make_ty()
+ }
_ => unreachable!(),
});
}
}
match foreign_item.kind {
- ast::ForeignItemKind::Macro(..) => {
+ ast::ForeignItemKind::MacCall(..) => {
self.check_attributes(&foreign_item.attrs);
foreign_item.and_then(|item| match item.kind {
- ast::ForeignItemKind::Macro(mac) => self
+ ast::ForeignItemKind::MacCall(mac) => self
.collect_bang(mac, item.span, AstFragmentKind::ForeignItems)
.make_foreign_items(),
_ => unreachable!(),
+#![feature(bool_to_option)]
#![feature(cow_is_borrowed)]
#![feature(crate_visibility_modifier)]
#![feature(decl_macro)]
#![feature(proc_macro_diagnostic)]
#![feature(proc_macro_internals)]
#![feature(proc_macro_span)]
+#![feature(try_blocks)]
extern crate proc_macro as pm;
crate use rustc_span::hygiene;
pub mod base;
pub mod build;
+#[macro_use]
+pub mod config;
pub mod expand;
-pub use rustc_parse::config;
+pub mod module;
pub mod proc_macro;
crate mod mbe;
use rustc_session::lint::builtin::META_VARIABLE_MISUSE;
use rustc_session::parse::ParseSess;
use rustc_span::symbol::kw;
-use rustc_span::{symbol::Ident, MultiSpan, Span};
+use rustc_span::{symbol::MacroRulesNormalizedIdent, MultiSpan, Span};
use smallvec::SmallVec;
}
/// An environment of meta-variables to their binder information.
-type Binders = FxHashMap<Ident, BinderInfo>;
+type Binders = FxHashMap<MacroRulesNormalizedIdent, BinderInfo>;
/// The state at which we entered a macro definition in the RHS of another macro definition.
struct MacroState<'a> {
if macros.is_empty() {
sess.span_diagnostic.span_bug(span, "unexpected MetaVar in lhs");
}
+ let name = MacroRulesNormalizedIdent::new(name);
// There are 3 possibilities:
if let Some(prev_info) = binders.get(&name) {
// 1. The meta-variable is already bound in the current LHS: This is an error.
if !macros.is_empty() {
sess.span_diagnostic.span_bug(span, "unexpected MetaVarDecl in nested lhs");
}
+ let name = MacroRulesNormalizedIdent::new(name);
if let Some(prev_info) = get_binder_info(macros, binders, name) {
// Duplicate binders at the top-level macro definition are errors. The lint is only
// for nested macro definitions.
fn get_binder_info<'a>(
mut macros: &'a Stack<'a, MacroState<'a>>,
binders: &'a Binders,
- name: Ident,
+ name: MacroRulesNormalizedIdent,
) -> Option<&'a BinderInfo> {
binders.get(&name).or_else(|| macros.find_map(|state| state.binders.get(&name)))
}
sess.span_diagnostic.span_bug(span, "unexpected MetaVarDecl in rhs")
}
TokenTree::MetaVar(span, name) => {
+ let name = MacroRulesNormalizedIdent::new(name);
check_ops_is_prefix(sess, node_id, macros, binders, ops, span, name);
}
TokenTree::Delimited(_, ref del) => {
| (NestedMacroState::MacroName, &TokenTree::Delimited(_, ref del))
if del.delim == DelimToken::Brace =>
{
- let legacy = state == NestedMacroState::MacroRulesNotName;
+ let macro_rules = state == NestedMacroState::MacroRulesNotName;
state = NestedMacroState::Empty;
let rest =
- check_nested_macro(sess, node_id, legacy, &del.tts, &nested_macros, valid);
+ check_nested_macro(sess, node_id, macro_rules, &del.tts, &nested_macros, valid);
// If we did not check the whole macro definition, then check the rest as if outside
// the macro definition.
check_nested_occurrences(
/// Arguments:
/// - `sess` is used to emit diagnostics and lints
/// - `node_id` is used to emit lints
-/// - `legacy` specifies whether the macro is legacy
+/// - `macro_rules` specifies whether the macro is `macro_rules`
/// - `tts` is checked as a list of (LHS) => {RHS}
/// - `macros` is the stack of outer macros
/// - `valid` is set in case of errors
fn check_nested_macro(
sess: &ParseSess,
node_id: NodeId,
- legacy: bool,
+ macro_rules: bool,
tts: &[TokenTree],
macros: &Stack<'_, MacroState<'_>>,
valid: &mut bool,
) -> usize {
let n = tts.len();
let mut i = 0;
- let separator = if legacy { TokenKind::Semi } else { TokenKind::Comma };
+ let separator = if macro_rules { TokenKind::Semi } else { TokenKind::Comma };
loop {
// We expect 3 token trees: `(LHS) => {RHS}`. The separator is checked after.
if i + 2 >= n
let mut binders = Binders::default();
check_binders(sess, node_id, lhs, macros, &mut binders, &Stack::Empty, valid);
check_occurrences(sess, node_id, rhs, macros, &binders, &Stack::Empty, valid);
- // Since the last semicolon is optional for legacy macros and decl_macro are not terminated,
+ // Since the last semicolon is optional for `macro_rules` macros and `decl_macro`s are not terminated,
// we increment our checked position by how many token trees we already checked (the 3
// above) before checking for the separator.
i += 3;
binders: &Binders,
ops: &Stack<'_, KleeneToken>,
span: Span,
- name: Ident,
+ name: MacroRulesNormalizedIdent,
) {
let macros = macros.push(MacroState { binders, ops: ops.into() });
// Accumulates the stacks the operators of each state until (and including when) the
sess: &ParseSess,
node_id: NodeId,
span: Span,
- name: Ident,
+ name: MacroRulesNormalizedIdent,
binder_ops: &[KleeneToken],
occurrence_ops: &[KleeneToken],
) {
use crate::mbe::{self, TokenTree};
-use rustc_ast::ast::{Ident, Name};
+use rustc_ast::ast::Name;
use rustc_ast::ptr::P;
use rustc_ast::token::{self, DocComment, Nonterminal, Token};
use rustc_ast_pretty::pprust;
use rustc_parse::parser::{FollowedByType, Parser, PathStyle};
use rustc_session::parse::ParseSess;
-use rustc_span::symbol::{kw, sym, Symbol};
+use rustc_span::symbol::{kw, sym, Ident, MacroRulesNormalizedIdent, Symbol};
use rustc_errors::{FatalError, PResult};
use rustc_span::Span;
Error(rustc_span::Span, String),
}
-/// A `ParseResult` where the `Success` variant contains a mapping of `Ident`s to `NamedMatch`es.
-/// This represents the mapping of metavars to the token trees they bind to.
-crate type NamedParseResult = ParseResult<FxHashMap<Ident, NamedMatch>>;
+/// A `ParseResult` where the `Success` variant contains a mapping of
+/// `MacroRulesNormalizedIdent`s to `NamedMatch`es. This represents the mapping
+/// of metavars to the token trees they bind to.
+crate type NamedParseResult = ParseResult<FxHashMap<MacroRulesNormalizedIdent, NamedMatch>>;
/// Count how many metavars are named in the given matcher `ms`.
pub(super) fn count_names(ms: &[TokenTree]) -> usize {
sess: &ParseSess,
m: &TokenTree,
res: &mut I,
- ret_val: &mut FxHashMap<Ident, NamedMatch>,
+ ret_val: &mut FxHashMap<MacroRulesNormalizedIdent, NamedMatch>,
) -> Result<(), (rustc_span::Span, String)> {
match *m {
TokenTree::Sequence(_, ref seq) => {
return Err((span, "missing fragment specifier".to_string()));
}
}
- TokenTree::MetaVarDecl(sp, bind_name, _) => match ret_val.entry(bind_name) {
+ TokenTree::MetaVarDecl(sp, bind_name, _) => match ret_val
+ .entry(MacroRulesNormalizedIdent::new(bind_name))
+ {
Vacant(spot) => {
spot.insert(res.next().unwrap());
}
/// The token is an identifier, but not `_`.
/// We prohibit passing `_` to macros expecting `ident` for now.
-fn get_macro_name(token: &Token) -> Option<(Name, bool)> {
- match token.kind {
- token::Ident(name, is_raw) if name != kw::Underscore => Some((name, is_raw)),
- token::Interpolated(ref nt) => match **nt {
- token::NtIdent(ident, is_raw) if ident.name != kw::Underscore => {
- Some((ident.name, is_raw))
- }
- _ => None,
- },
- _ => None,
- }
+fn get_macro_ident(token: &Token) -> Option<(Ident, bool)> {
+ token.ident().filter(|(ident, _)| ident.name != kw::Underscore)
}
/// Checks whether a non-terminal may begin with a particular token.
&& !token.is_keyword(kw::Let)
}
sym::ty => token.can_begin_type(),
- sym::ident => get_macro_name(token).is_some(),
+ sym::ident => get_macro_ident(token).is_some(),
sym::literal => token.can_begin_literal_or_bool(),
sym::vis => match token.kind {
// The follow-set of :vis + "priv" keyword + interpolated
sym::ty => token::NtTy(p.parse_ty()?),
// this could be handled like a token, since it is one
sym::ident => {
- if let Some((name, is_raw)) = get_macro_name(&p.token) {
+ if let Some((ident, is_raw)) = get_macro_ident(&p.token) {
p.bump();
- token::NtIdent(Ident::new(name, p.normalized_prev_token.span), is_raw)
+ token::NtIdent(ident, is_raw)
} else {
let token_str = pprust::token_to_string(&p.token);
let msg = &format!("expected ident, found {}", &token_str);
-use crate::base::{DummyResult, ExpansionData, ExtCtxt, MacResult, TTMacroExpander};
+use crate::base::{DummyResult, ExtCtxt, MacResult, TTMacroExpander};
use crate::base::{SyntaxExtension, SyntaxExtensionKind};
use crate::expand::{ensure_complete_parse, parse_ast_fragment, AstFragment, AstFragmentKind};
use crate::mbe;
use rustc_errors::{Applicability, DiagnosticBuilder, FatalError};
use rustc_feature::Features;
use rustc_parse::parser::Parser;
-use rustc_parse::Directory;
use rustc_session::parse::ParseSess;
use rustc_span::edition::Edition;
use rustc_span::hygiene::Transparency;
-use rustc_span::symbol::{kw, sym, Symbol};
+use rustc_span::symbol::{kw, sym, MacroRulesNormalizedIdent, Symbol};
use rustc_span::Span;
use log::debug;
if e.span.is_dummy() {
// Get around lack of span in error (#30128)
e.replace_span_with(site_span);
- if parser.sess.source_map().span_to_filename(arm_span).is_real() {
+ if !parser.sess.source_map().is_imported(arm_span) {
e.span_label(arm_span, "in this macro arm");
}
- } else if !parser.sess.source_map().span_to_filename(parser.token.span).is_real() {
+ } else if parser.sess.source_map().is_imported(parser.token.span) {
e.span_label(site_span, "in this macro invocation");
}
match kind {
lhses: &[mbe::TokenTree],
rhses: &[mbe::TokenTree],
) -> Box<dyn MacResult + 'cx> {
+ let sess = cx.parse_sess;
+
if cx.trace_macros() {
let msg = format!("expanding `{}! {{ {} }}`", name, pprust::tts_to_string(arg.clone()));
trace_macros_note(&mut cx.expansions, sp, msg);
// hacky, but speeds up the `html5ever` benchmark significantly. (Issue
// 68836 suggests a more comprehensive but more complex change to deal with
// this situation.)
- let parser = parser_from_cx(&cx.current_expansion, &cx.parse_sess, arg.clone());
+ let parser = parser_from_cx(sess, arg.clone());
for (i, lhs) in lhses.iter().enumerate() {
// try each arm's matchers
// This is used so that if a matcher is not `Success(..)`ful,
// then the spans which became gated when parsing the unsuccessful matcher
// are not recorded. On the first `Success(..)`ful matcher, the spans are merged.
- let mut gated_spans_snapshot =
- mem::take(&mut *cx.parse_sess.gated_spans.spans.borrow_mut());
+ let mut gated_spans_snapshot = mem::take(&mut *sess.gated_spans.spans.borrow_mut());
match parse_tt(&mut Cow::Borrowed(&parser), lhs_tt) {
Success(named_matches) => {
// The matcher was `Success(..)`ful.
// Merge the gated spans from parsing the matcher with the pre-existing ones.
- cx.parse_sess.gated_spans.merge(gated_spans_snapshot);
+ sess.gated_spans.merge(gated_spans_snapshot);
let rhs = match rhses[i] {
// ignore delimiters
trace_macros_note(&mut cx.expansions, sp, msg);
}
- let directory = Directory {
- path: cx.current_expansion.module.directory.clone(),
- ownership: cx.current_expansion.directory_ownership,
- };
- let mut p = Parser::new(cx.parse_sess(), tts, Some(directory), true, false, None);
+ let mut p = Parser::new(sess, tts, false, None);
p.root_module_name =
cx.current_expansion.module.mod_path.last().map(|id| id.to_string());
p.last_type_ascription = cx.current_expansion.prior_type_ascription;
// The matcher was not `Success(..)`ful.
// Restore to the state before snapshotting and maybe try again.
- mem::swap(&mut gated_spans_snapshot, &mut cx.parse_sess.gated_spans.spans.borrow_mut());
+ mem::swap(&mut gated_spans_snapshot, &mut sess.gated_spans.spans.borrow_mut());
}
drop(parser);
let span = token.span.substitute_dummy(sp);
let mut err = cx.struct_span_err(span, &parse_failure_msg(&token));
err.span_label(span, label);
- if !def_span.is_dummy() && cx.source_map().span_to_filename(def_span).is_real() {
+ if !def_span.is_dummy() && !cx.source_map().is_imported(def_span) {
err.span_label(cx.source_map().def_span(def_span), "when calling this macro");
}
mbe::TokenTree::Delimited(_, ref delim) => &delim.tts[..],
_ => continue,
};
- let parser = parser_from_cx(&cx.current_expansion, &cx.parse_sess, arg.clone());
- match parse_tt(&mut Cow::Borrowed(&parser), lhs_tt) {
+ match parse_tt(&mut Cow::Borrowed(&parser_from_cx(sess, arg.clone())), lhs_tt) {
Success(_) => {
if comma_span.is_dummy() {
err.note("you might be missing a comma");
let tt_spec = ast::Ident::new(sym::tt, def.span);
// Parse the macro_rules! invocation
- let (is_legacy, body) = match &def.kind {
- ast::ItemKind::MacroDef(macro_def) => (macro_def.legacy, macro_def.body.inner_tokens()),
+ let (macro_rules, body) = match &def.kind {
+ ast::ItemKind::MacroDef(def) => (def.macro_rules, def.body.inner_tokens()),
_ => unreachable!(),
};
mbe::TokenTree::MetaVarDecl(def.span, rhs_nm, tt_spec),
],
separator: Some(Token::new(
- if is_legacy { token::Semi } else { token::Comma },
+ if macro_rules { token::Semi } else { token::Comma },
def.span,
)),
kleene: mbe::KleeneToken::new(mbe::KleeneOp::OneOrMore, def.span),
DelimSpan::dummy(),
Lrc::new(mbe::SequenceRepetition {
tts: vec![mbe::TokenTree::token(
- if is_legacy { token::Semi } else { token::Comma },
+ if macro_rules { token::Semi } else { token::Comma },
def.span,
)],
separator: None,
),
];
- let parser = Parser::new(sess, body, None, true, true, rustc_parse::MACRO_ARGUMENTS);
+ let parser = Parser::new(sess, body, true, rustc_parse::MACRO_ARGUMENTS);
let argument_map = match parse_tt(&mut Cow::Borrowed(&parser), &argument_gram) {
Success(m) => m,
Failure(token, msg) => {
let mut valid = true;
// Extract the arguments:
- let lhses = match argument_map[&lhs_nm] {
+ let lhses = match argument_map[&MacroRulesNormalizedIdent::new(lhs_nm)] {
MatchedSeq(ref s) => s
.iter()
.map(|m| {
_ => sess.span_diagnostic.span_bug(def.span, "wrong-structured lhs"),
};
- let rhses = match argument_map[&rhs_nm] {
+ let rhses = match argument_map[&MacroRulesNormalizedIdent::new(rhs_nm)] {
MatchedSeq(ref s) => s
.iter()
.map(|m| {
// that is not lint-checked and trigger the "failed to process buffered lint here" bug.
valid &= macro_check::check_meta_variables(sess, ast::CRATE_NODE_ID, def.span, &lhses, &rhses);
- let (transparency, transparency_error) = attr::find_transparency(&def.attrs, is_legacy);
+ let (transparency, transparency_error) = attr::find_transparency(&def.attrs, macro_rules);
match transparency_error {
Some(TransparencyError::UnknownTransparency(value, span)) => {
diag.span_err(span, &format!("unknown macro transparency: `{}`", value))
if let mbe::TokenTree::MetaVarDecl(_, _, frag_spec) = *tok {
frag_can_be_followed_by_any(frag_spec.name)
} else {
- // (Non NT's can always be followed by anthing in matchers.)
+ // (Non NT's can always be followed by anything in matchers.)
true
}
}
}
}
-fn parser_from_cx<'cx>(
- current_expansion: &'cx ExpansionData,
- sess: &'cx ParseSess,
- tts: TokenStream,
-) -> Parser<'cx> {
- let directory = Directory {
- path: current_expansion.module.directory.clone(),
- ownership: current_expansion.directory_ownership,
- };
- Parser::new(sess, tts, Some(directory), true, true, rustc_parse::MACRO_ARGUMENTS)
+fn parser_from_cx(sess: &ParseSess, tts: TokenStream) -> Parser<'_> {
+ Parser::new(sess, tts, true, rustc_parse::MACRO_ARGUMENTS)
}
/// Generates an appropriate parsing failure message. For EOF, this is "unexpected end...". For
use crate::mbe;
use crate::mbe::macro_parser::{MatchedNonterminal, MatchedSeq, NamedMatch};
-use rustc_ast::ast::{Ident, Mac};
+use rustc_ast::ast::MacCall;
use rustc_ast::mut_visit::{self, MutVisitor};
use rustc_ast::token::{self, NtTT, Token};
use rustc_ast::tokenstream::{DelimSpan, TokenStream, TokenTree, TreeAndJoint};
use rustc_data_structures::sync::Lrc;
use rustc_errors::pluralize;
use rustc_span::hygiene::{ExpnId, Transparency};
+use rustc_span::symbol::MacroRulesNormalizedIdent;
use rustc_span::Span;
use smallvec::{smallvec, SmallVec};
*span = span.apply_mark(self.0, self.1)
}
- fn visit_mac(&mut self, mac: &mut Mac) {
+ fn visit_mac(&mut self, mac: &mut MacCall) {
mut_visit::noop_visit_mac(mac, self)
}
}
/// Along the way, we do some additional error checking.
pub(super) fn transcribe(
cx: &ExtCtxt<'_>,
- interp: &FxHashMap<Ident, NamedMatch>,
+ interp: &FxHashMap<MacroRulesNormalizedIdent, NamedMatch>,
src: Vec<mbe::TokenTree>,
transparency: Transparency,
) -> TokenStream {
let tree = if let Some(tree) = stack.last_mut().unwrap().next() {
// If it still has a TokenTree we have not looked at yet, use that tree.
tree
- }
- // The else-case never produces a value for `tree` (it `continue`s or `return`s).
- else {
+ } else {
+ // This else-case never produces a value for `tree` (it `continue`s or `return`s).
+
// Otherwise, if we have just reached the end of a sequence and we can keep repeating,
// go back to the beginning of the sequence.
if let Frame::Sequence { idx, sep, .. } = stack.last_mut().unwrap() {
}
// Replace the meta-var with the matched token tree from the invocation.
- mbe::TokenTree::MetaVar(mut sp, mut ident) => {
+ mbe::TokenTree::MetaVar(mut sp, mut original_ident) => {
// Find the matched nonterminal from the macro invocation, and use it to replace
// the meta-var.
+ let ident = MacroRulesNormalizedIdent::new(original_ident);
if let Some(cur_matched) = lookup_cur_matched(ident, interp, &repeats) {
if let MatchedNonterminal(ref nt) = cur_matched {
// FIXME #2887: why do we apply a mark when matching a token tree meta-var
// If we aren't able to match the meta-var, we push it back into the result but
// with modified syntax context. (I believe this supports nested macros).
marker.visit_span(&mut sp);
- marker.visit_ident(&mut ident);
+ marker.visit_ident(&mut original_ident);
result.push(TokenTree::token(token::Dollar, sp).into());
- result.push(TokenTree::Token(Token::from_ast_ident(ident)).into());
+ result.push(TokenTree::Token(Token::from_ast_ident(original_ident)).into());
}
}
/// into the right place in nested matchers. If we attempt to descend too far, the macro writer has
/// made a mistake, and we return `None`.
fn lookup_cur_matched<'a>(
- ident: Ident,
- interpolations: &'a FxHashMap<Ident, NamedMatch>,
+ ident: MacroRulesNormalizedIdent,
+ interpolations: &'a FxHashMap<MacroRulesNormalizedIdent, NamedMatch>,
repeats: &[(usize, usize)],
) -> Option<&'a NamedMatch> {
interpolations.get(&ident).map(|matched| {
/// A `MetaVar` with an actual `MatchedSeq`. The length of the match and the name of the
/// meta-var are returned.
- Constraint(usize, Ident),
+ Constraint(usize, MacroRulesNormalizedIdent),
/// Two `Constraint`s on the same sequence had different lengths. This is an error.
Contradiction(String),
/// multiple nested matcher sequences.
fn lockstep_iter_size(
tree: &mbe::TokenTree,
- interpolations: &FxHashMap<Ident, NamedMatch>,
+ interpolations: &FxHashMap<MacroRulesNormalizedIdent, NamedMatch>,
repeats: &[(usize, usize)],
) -> LockstepIterSize {
use mbe::TokenTree;
})
}
TokenTree::MetaVar(_, name) | TokenTree::MetaVarDecl(_, name, _) => {
+ let name = MacroRulesNormalizedIdent::new(name);
match lookup_cur_matched(name, interpolations, repeats) {
Some(matched) => match matched {
MatchedNonterminal(_) => LockstepIterSize::Unconstrained,
--- /dev/null
+use rustc_ast::ast::{self, Attribute, Ident, Mod};
+use rustc_ast::{attr, token};
+use rustc_errors::{struct_span_err, PResult};
+use rustc_parse::new_sub_parser_from_file;
+use rustc_session::parse::ParseSess;
+use rustc_span::source_map::{FileName, Span};
+use rustc_span::symbol::sym;
+
+use std::path::{self, Path, PathBuf};
+
+#[derive(Clone)]
+pub struct Directory {
+ pub path: PathBuf,
+ pub ownership: DirectoryOwnership,
+}
+
+#[derive(Copy, Clone)]
+pub enum DirectoryOwnership {
+ Owned {
+ // `None` if `mod.rs`, `Some("foo")` if we're in `foo.rs`.
+ relative: Option<ast::Ident>,
+ },
+ UnownedViaBlock,
+ UnownedViaMod,
+}
+
+/// Information about the path to a module.
+// Public for rustfmt usage.
+pub struct ModulePath<'a> {
+ name: String,
+ path_exists: bool,
+ pub result: PResult<'a, ModulePathSuccess>,
+}
+
+// Public for rustfmt usage.
+pub struct ModulePathSuccess {
+ pub path: PathBuf,
+ pub ownership: DirectoryOwnership,
+}
+
+crate fn parse_external_mod(
+ sess: &ParseSess,
+ id: ast::Ident,
+ span: Span, // The span to blame on errors.
+ Directory { mut ownership, path }: Directory,
+ attrs: &mut Vec<Attribute>,
+ pop_mod_stack: &mut bool,
+) -> (Mod, Directory) {
+ // We bail on the first error, but that error does not cause a fatal error... (1)
+ let result: PResult<'_, _> = try {
+ // Extract the file path and the new ownership.
+ let mp = submod_path(sess, id, span, &attrs, ownership, &path)?;
+ ownership = mp.ownership;
+
+ // Ensure file paths are acyclic.
+ let mut included_mod_stack = sess.included_mod_stack.borrow_mut();
+ error_on_circular_module(sess, span, &mp.path, &included_mod_stack)?;
+ included_mod_stack.push(mp.path.clone());
+ *pop_mod_stack = true; // We have pushed, so notify caller.
+ drop(included_mod_stack);
+
+ // Actually parse the external file as a module.
+ let mut p0 = new_sub_parser_from_file(sess, &mp.path, Some(id.to_string()), span);
+ let mut module = p0.parse_mod(&token::Eof)?;
+ module.0.inline = false;
+ module
+ };
+ // (1) ...instead, we return a dummy module.
+ let (module, mut new_attrs) = result.map_err(|mut err| err.emit()).unwrap_or_default();
+ attrs.append(&mut new_attrs);
+
+ // Extract the directory path for submodules of `module`.
+ let path = sess.source_map().span_to_unmapped_path(module.inner);
+ let mut path = match path {
+ FileName::Real(path) => path,
+ other => PathBuf::from(other.to_string()),
+ };
+ path.pop();
+
+ (module, Directory { ownership, path })
+}
+
+fn error_on_circular_module<'a>(
+ sess: &'a ParseSess,
+ span: Span,
+ path: &Path,
+ included_mod_stack: &[PathBuf],
+) -> PResult<'a, ()> {
+ if let Some(i) = included_mod_stack.iter().position(|p| *p == path) {
+ let mut err = String::from("circular modules: ");
+ for p in &included_mod_stack[i..] {
+ err.push_str(&p.to_string_lossy());
+ err.push_str(" -> ");
+ }
+ err.push_str(&path.to_string_lossy());
+ return Err(sess.span_diagnostic.struct_span_err(span, &err[..]));
+ }
+ Ok(())
+}
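The cycle check above renders the offending chain as `a -> b -> a`. A self-contained sketch of the same logic over plain paths (names here are illustrative, not the rustc API):

```rust
use std::path::{Path, PathBuf};

/// If `path` is already on the inclusion stack, report the cycle as an
/// error string of the form `circular modules: a.rs -> b.rs -> a.rs`.
fn check_circular(path: &Path, included_mod_stack: &[PathBuf]) -> Result<(), String> {
    if let Some(i) = included_mod_stack.iter().position(|p| p == path) {
        let mut msg = String::from("circular modules: ");
        // Only the part of the stack from the first occurrence onward is
        // part of the cycle; earlier entries are innocent ancestors.
        for p in &included_mod_stack[i..] {
            msg.push_str(&p.to_string_lossy());
            msg.push_str(" -> ");
        }
        msg.push_str(&path.to_string_lossy());
        return Err(msg);
    }
    Ok(())
}
```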
+
+crate fn push_directory(
+ id: Ident,
+ attrs: &[Attribute],
+ Directory { mut ownership, mut path }: Directory,
+) -> Directory {
+ if let Some(filename) = attr::first_attr_value_str_by_name(attrs, sym::path) {
+ path.push(&*filename.as_str());
+ ownership = DirectoryOwnership::Owned { relative: None };
+ } else {
+ // We have to push on the current module name in the case of relative
+ // paths in order to ensure that any additional module paths from inline
+ // `mod x { ... }` come after the relative extension.
+ //
+ // For example, a `mod z { ... }` inside `x/y.rs` should set the current
+ // directory path to `/x/y/z`, not `/x/z` with a relative offset of `y`.
+ if let DirectoryOwnership::Owned { relative } = &mut ownership {
+ if let Some(ident) = relative.take() {
+ // Remove the relative offset.
+ path.push(&*ident.as_str());
+ }
+ }
+ path.push(&*id.as_str());
+ }
+ Directory { ownership, path }
+}
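The comment in `push_directory` gives the key example: a `mod z { ... }` inside `x/y.rs` must resolve submodules under `x/y/z`, not `x/z`. A minimal sketch of just that path bookkeeping, with attributes and ownership stripped out (`push_dir` is a hypothetical helper, not the rustc function):

```rust
use std::path::PathBuf;

/// Apply a pending relative offset (`y.rs` owning `y/`), if any, before
/// pushing the new module's own name onto the directory path.
fn push_dir(mut path: PathBuf, relative: Option<&str>, mod_name: &str) -> PathBuf {
    if let Some(rel) = relative {
        // We're inside `rel.rs`, so its submodules live under `rel/`.
        path.push(rel);
    }
    path.push(mod_name);
    path
}
```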
+
+fn submod_path<'a>(
+ sess: &'a ParseSess,
+ id: ast::Ident,
+ span: Span,
+ attrs: &[Attribute],
+ ownership: DirectoryOwnership,
+ dir_path: &Path,
+) -> PResult<'a, ModulePathSuccess> {
+ if let Some(path) = submod_path_from_attr(attrs, dir_path) {
+ let ownership = match path.file_name().and_then(|s| s.to_str()) {
+ // All `#[path]` files are treated as though they are a `mod.rs` file.
+ // This means that `mod foo;` declarations inside `#[path]`-included
+ // files are siblings of the `#[path]`-included file.
+ //
+ // Note that this will produce weirdness when a file named `foo.rs` is
+ // `#[path]` included and contains a `mod foo;` declaration.
+ // If you encounter this, it's your own darn fault :P
+ Some(_) => DirectoryOwnership::Owned { relative: None },
+ _ => DirectoryOwnership::UnownedViaMod,
+ };
+ return Ok(ModulePathSuccess { ownership, path });
+ }
+
+ let relative = match ownership {
+ DirectoryOwnership::Owned { relative } => relative,
+ DirectoryOwnership::UnownedViaBlock | DirectoryOwnership::UnownedViaMod => None,
+ };
+ let ModulePath { path_exists, name, result } =
+ default_submod_path(sess, id, span, relative, dir_path);
+ match ownership {
+ DirectoryOwnership::Owned { .. } => Ok(result?),
+ DirectoryOwnership::UnownedViaBlock => {
+ let _ = result.map_err(|mut err| err.cancel());
+ error_decl_mod_in_block(sess, span, path_exists, &name)
+ }
+ DirectoryOwnership::UnownedViaMod => {
+ let _ = result.map_err(|mut err| err.cancel());
+ error_cannot_declare_mod_here(sess, span, path_exists, &name)
+ }
+ }
+}
+
+fn error_decl_mod_in_block<'a, T>(
+ sess: &'a ParseSess,
+ span: Span,
+ path_exists: bool,
+ name: &str,
+) -> PResult<'a, T> {
+ let msg = "Cannot declare a non-inline module inside a block unless it has a path attribute";
+ let mut err = sess.span_diagnostic.struct_span_err(span, msg);
+ if path_exists {
+ let msg = format!("Maybe `use` the module `{}` instead of redeclaring it", name);
+ err.span_note(span, &msg);
+ }
+ Err(err)
+}
+
+fn error_cannot_declare_mod_here<'a, T>(
+ sess: &'a ParseSess,
+ span: Span,
+ path_exists: bool,
+ name: &str,
+) -> PResult<'a, T> {
+ let mut err =
+ sess.span_diagnostic.struct_span_err(span, "cannot declare a new module at this location");
+ if !span.is_dummy() {
+ if let FileName::Real(src_path) = sess.source_map().span_to_filename(span) {
+ if let Some(stem) = src_path.file_stem() {
+ let mut dest_path = src_path.clone();
+ dest_path.set_file_name(stem);
+ dest_path.push("mod.rs");
+ err.span_note(
+ span,
+ &format!(
+ "maybe move this module `{}` to its own directory via `{}`",
+ src_path.display(),
+ dest_path.display()
+ ),
+ );
+ }
+ }
+ }
+ if path_exists {
+ err.span_note(
+ span,
+ &format!("... or maybe `use` the module `{}` instead of possibly redeclaring it", name),
+ );
+ }
+ Err(err)
+}
+
+/// Derive a submodule path from the first found `#[path = "path_string"]`.
+/// The provided `dir_path` is joined with the `path_string`.
+// Public for rustfmt usage.
+pub fn submod_path_from_attr(attrs: &[Attribute], dir_path: &Path) -> Option<PathBuf> {
+ // Extract path string from first `#[path = "path_string"]` attribute.
+ let path_string = attr::first_attr_value_str_by_name(attrs, sym::path)?;
+ let path_string = path_string.as_str();
+
+ // On windows, the base path might have the form
+ // `\\?\foo\bar` in which case it does not tolerate
+ // mixed `/` and `\` separators, so canonicalize
+ // `/` to `\`.
+ #[cfg(windows)]
+ let path_string = path_string.replace("/", "\\");
+
+ Some(dir_path.join(&*path_string))
+}
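The essence of `submod_path_from_attr` is a join of the attribute value onto the enclosing directory, with `/` canonicalized to `\` on Windows because verbatim prefixes like `\\?\` reject mixed separators. A stripped-down sketch (the attribute lookup is omitted; `join_path_attr` is a hypothetical name):

```rust
use std::path::{Path, PathBuf};

/// Join a `#[path = "..."]` string onto the enclosing directory,
/// canonicalizing separators on Windows.
fn join_path_attr(dir_path: &Path, path_string: &str) -> PathBuf {
    #[cfg(windows)]
    let path_string = path_string.replace("/", "\\");
    dir_path.join(&*path_string)
}
```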
+
+/// Returns a path to a module.
+// Public for rustfmt usage.
+pub fn default_submod_path<'a>(
+ sess: &'a ParseSess,
+ id: ast::Ident,
+ span: Span,
+ relative: Option<ast::Ident>,
+ dir_path: &Path,
+) -> ModulePath<'a> {
+ // If we're in a foo.rs file instead of a mod.rs file,
+ // we need to look for submodules in
+ // `./foo/<id>.rs` and `./foo/<id>/mod.rs` rather than
+ // `./<id>.rs` and `./<id>/mod.rs`.
+ let relative_prefix_string;
+ let relative_prefix = if let Some(ident) = relative {
+ relative_prefix_string = format!("{}{}", ident.name, path::MAIN_SEPARATOR);
+ &relative_prefix_string
+ } else {
+ ""
+ };
+
+ let mod_name = id.name.to_string();
+ let default_path_str = format!("{}{}.rs", relative_prefix, mod_name);
+ let secondary_path_str =
+ format!("{}{}{}mod.rs", relative_prefix, mod_name, path::MAIN_SEPARATOR);
+ let default_path = dir_path.join(&default_path_str);
+ let secondary_path = dir_path.join(&secondary_path_str);
+ let default_exists = sess.source_map().file_exists(&default_path);
+ let secondary_exists = sess.source_map().file_exists(&secondary_path);
+
+ let result = match (default_exists, secondary_exists) {
+ (true, false) => Ok(ModulePathSuccess {
+ path: default_path,
+ ownership: DirectoryOwnership::Owned { relative: Some(id) },
+ }),
+ (false, true) => Ok(ModulePathSuccess {
+ path: secondary_path,
+ ownership: DirectoryOwnership::Owned { relative: None },
+ }),
+ (false, false) => {
+ let mut err = struct_span_err!(
+ sess.span_diagnostic,
+ span,
+ E0583,
+ "file not found for module `{}`",
+ mod_name,
+ );
+ err.help(&format!(
+ "to create the module `{}`, create file \"{}\"",
+ mod_name,
+ default_path.display(),
+ ));
+ Err(err)
+ }
+ (true, true) => {
+ let mut err = struct_span_err!(
+ sess.span_diagnostic,
+ span,
+ E0584,
+ "file for module `{}` found at both {} and {}",
+ mod_name,
+ default_path_str,
+ secondary_path_str,
+ );
+ err.help("delete or rename one of them to remove the ambiguity");
+ Err(err)
+ }
+ };
+
+ ModulePath { name: mod_name, path_exists: default_exists || secondary_exists, result }
+}
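`default_submod_path` boils down to a four-way decision on which of the two candidate files exists. That decision table can be sketched in isolation (existence checks replaced by booleans, diagnostics by strings; `resolve_submodule` is an illustrative name, not the rustc API):

```rust
use std::path::PathBuf;

/// `mod foo;` may live at `foo.rs` or `foo/mod.rs` relative to the owning
/// directory; exactly one of the two must exist.
fn resolve_submodule(
    mod_name: &str,
    default_exists: bool,   // does `<relative>/<mod_name>.rs` exist?
    secondary_exists: bool, // does `<relative>/<mod_name>/mod.rs` exist?
) -> Result<PathBuf, String> {
    match (default_exists, secondary_exists) {
        (true, false) => Ok(PathBuf::from(format!("{}.rs", mod_name))),
        (false, true) => Ok(PathBuf::from(format!("{}/mod.rs", mod_name))),
        (false, false) => Err(format!("file not found for module `{}`", mod_name)),
        (true, true) => Err(format!(
            "file for module `{}` found at both {}.rs and {}/mod.rs",
            mod_name, mod_name, mod_name
        )),
    }
}
```

Note how the two success arms also pick different ownership in the real code: `foo.rs` keeps `relative: Some(id)` so its submodules nest under `foo/`, while `foo/mod.rs` resets `relative` to `None`.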
fn visit_ident(&mut self, ident: &mut ast::Ident) {
*ident = Ident::from_str("zz");
}
- fn visit_mac(&mut self, mac: &mut ast::Mac) {
+ fn visit_mac(&mut self, mac: &mut ast::MacCall) {
mut_visit::noop_visit_mac(mac, self)
}
}
.unwrap();
let tts: Vec<_> = match expr.kind {
- ast::ExprKind::Mac(ref mac) => mac.args.inner_tokens().trees().collect(),
+ ast::ExprKind::MacCall(ref mac) => mac.args.inner_tokens().trees().collect(),
_ => panic!("not a macro"),
};
id: ast::NodeId,
vis: Option<ast::Visibility>,
) -> AstFragment {
- fn mac_placeholder() -> ast::Mac {
- ast::Mac {
+ fn mac_placeholder() -> ast::MacCall {
+ ast::MacCall {
path: ast::Path { span: DUMMY_SP, segments: Vec::new() },
args: P(ast::MacArgs::Empty),
prior_type_ascription: None,
id,
span,
attrs: ast::AttrVec::new(),
- kind: ast::ExprKind::Mac(mac_placeholder()),
+ kind: ast::ExprKind::MacCall(mac_placeholder()),
})
};
- let ty = || P(ast::Ty { id, kind: ast::TyKind::Mac(mac_placeholder()), span });
- let pat = || P(ast::Pat { id, kind: ast::PatKind::Mac(mac_placeholder()), span });
+ let ty = || P(ast::Ty { id, kind: ast::TyKind::MacCall(mac_placeholder()), span });
+ let pat = || P(ast::Pat { id, kind: ast::PatKind::MacCall(mac_placeholder()), span });
match kind {
AstFragmentKind::Expr => AstFragment::Expr(expr_placeholder()),
ident,
vis,
attrs,
- kind: ast::ItemKind::Mac(mac_placeholder()),
+ kind: ast::ItemKind::MacCall(mac_placeholder()),
tokens: None,
})]),
AstFragmentKind::TraitItems => AstFragment::TraitItems(smallvec![P(ast::AssocItem {
ident,
vis,
attrs,
- kind: ast::AssocItemKind::Macro(mac_placeholder()),
+ kind: ast::AssocItemKind::MacCall(mac_placeholder()),
tokens: None,
})]),
AstFragmentKind::ImplItems => AstFragment::ImplItems(smallvec![P(ast::AssocItem {
ident,
vis,
attrs,
- kind: ast::AssocItemKind::Macro(mac_placeholder()),
+ kind: ast::AssocItemKind::MacCall(mac_placeholder()),
tokens: None,
})]),
AstFragmentKind::ForeignItems => {
ident,
vis,
attrs,
- kind: ast::ForeignItemKind::Macro(mac_placeholder()),
+ kind: ast::ForeignItemKind::MacCall(mac_placeholder()),
tokens: None,
})])
}
- AstFragmentKind::Pat => {
- AstFragment::Pat(P(ast::Pat { id, span, kind: ast::PatKind::Mac(mac_placeholder()) }))
- }
+ AstFragmentKind::Pat => AstFragment::Pat(P(ast::Pat {
+ id,
+ span,
+ kind: ast::PatKind::MacCall(mac_placeholder()),
+ })),
AstFragmentKind::Ty => {
- AstFragment::Ty(P(ast::Ty { id, span, kind: ast::TyKind::Mac(mac_placeholder()) }))
+ AstFragment::Ty(P(ast::Ty { id, span, kind: ast::TyKind::MacCall(mac_placeholder()) }))
}
AstFragmentKind::Stmts => AstFragment::Stmts(smallvec![{
let mac = P((mac_placeholder(), ast::MacStmtStyle::Braces, ast::AttrVec::new()));
- ast::Stmt { id, span, kind: ast::StmtKind::Mac(mac) }
+ ast::Stmt { id, span, kind: ast::StmtKind::MacCall(mac) }
}]),
AstFragmentKind::Arms => AstFragment::Arms(smallvec![ast::Arm {
attrs: Default::default(),
fn flat_map_item(&mut self, item: P<ast::Item>) -> SmallVec<[P<ast::Item>; 1]> {
match item.kind {
- ast::ItemKind::Mac(_) => return self.remove(item.id).make_items(),
+ ast::ItemKind::MacCall(_) => return self.remove(item.id).make_items(),
ast::ItemKind::MacroDef(_) => return smallvec![item],
_ => {}
}
fn flat_map_trait_item(&mut self, item: P<ast::AssocItem>) -> SmallVec<[P<ast::AssocItem>; 1]> {
match item.kind {
- ast::AssocItemKind::Macro(_) => self.remove(item.id).make_trait_items(),
+ ast::AssocItemKind::MacCall(_) => self.remove(item.id).make_trait_items(),
_ => noop_flat_map_assoc_item(item, self),
}
}
fn flat_map_impl_item(&mut self, item: P<ast::AssocItem>) -> SmallVec<[P<ast::AssocItem>; 1]> {
match item.kind {
- ast::AssocItemKind::Macro(_) => self.remove(item.id).make_impl_items(),
+ ast::AssocItemKind::MacCall(_) => self.remove(item.id).make_impl_items(),
_ => noop_flat_map_assoc_item(item, self),
}
}
item: P<ast::ForeignItem>,
) -> SmallVec<[P<ast::ForeignItem>; 1]> {
match item.kind {
- ast::ForeignItemKind::Macro(_) => self.remove(item.id).make_foreign_items(),
+ ast::ForeignItemKind::MacCall(_) => self.remove(item.id).make_foreign_items(),
_ => noop_flat_map_foreign_item(item, self),
}
}
fn visit_expr(&mut self, expr: &mut P<ast::Expr>) {
match expr.kind {
- ast::ExprKind::Mac(_) => *expr = self.remove(expr.id).make_expr(),
+ ast::ExprKind::MacCall(_) => *expr = self.remove(expr.id).make_expr(),
_ => noop_visit_expr(expr, self),
}
}
fn filter_map_expr(&mut self, expr: P<ast::Expr>) -> Option<P<ast::Expr>> {
match expr.kind {
- ast::ExprKind::Mac(_) => self.remove(expr.id).make_opt_expr(),
+ ast::ExprKind::MacCall(_) => self.remove(expr.id).make_opt_expr(),
_ => noop_filter_map_expr(expr, self),
}
}
fn flat_map_stmt(&mut self, stmt: ast::Stmt) -> SmallVec<[ast::Stmt; 1]> {
let (style, mut stmts) = match stmt.kind {
- ast::StmtKind::Mac(mac) => (mac.1, self.remove(stmt.id).make_stmts()),
+ ast::StmtKind::MacCall(mac) => (mac.1, self.remove(stmt.id).make_stmts()),
_ => return noop_flat_map_stmt(stmt, self),
};
fn visit_pat(&mut self, pat: &mut P<ast::Pat>) {
match pat.kind {
- ast::PatKind::Mac(_) => *pat = self.remove(pat.id).make_pat(),
+ ast::PatKind::MacCall(_) => *pat = self.remove(pat.id).make_pat(),
_ => noop_visit_pat(pat, self),
}
}
fn visit_ty(&mut self, ty: &mut P<ast::Ty>) {
match ty.kind {
- ast::TyKind::Mac(_) => *ty = self.remove(ty.id).make_ty(),
+ ast::TyKind::MacCall(_) => *ty = self.remove(ty.id).make_ty(),
_ => noop_visit_ty(ty, self),
}
}
fn visit_mod(&mut self, module: &mut ast::Mod) {
noop_visit_mod(module, self);
module.items.retain(|item| match item.kind {
- ast::ItemKind::Mac(_) if !self.cx.ecfg.keep_macs => false, // remove macro definitions
+ ast::ItemKind::MacCall(_) if !self.cx.ecfg.keep_macs => false, // remove macro definitions
_ => true,
});
}
- fn visit_mac(&mut self, _mac: &mut ast::Mac) {
+ fn visit_mac(&mut self, _mac: &mut ast::MacCall) {
// Do nothing.
}
}
span: Span,
_meta_item: &ast::MetaItem,
item: Annotatable,
- ) -> Vec<Annotatable> {
+ ) -> ExpandResult<Vec<Annotatable>, Annotatable> {
let item = match item {
Annotatable::Arm(..)
| Annotatable::Field(..)
"proc-macro derives may only be \
applied to a struct, enum, or union",
);
- return Vec::new();
+ return ExpandResult::Ready(Vec::new());
}
};
match item.kind {
"proc-macro derives may only be \
applied to a struct, enum, or union",
);
- return Vec::new();
+ return ExpandResult::Ready(Vec::new());
}
}
FatalError.raise();
}
- items
+ ExpandResult::Ready(items)
}
}
// no-tracking-issue-start
+ /// Allows using `rustc_*` attributes (RFC 572).
+ (active, rustc_attrs, "1.0.0", None, None),
+
/// Allows using compiler's own crates.
(active, rustc_private, "1.0.0", Some(27812), None),
/// Allows using `#[link_name="llvm.*"]`.
(active, link_llvm_intrinsics, "1.0.0", Some(29602), None),
- /// Allows using `rustc_*` attributes (RFC 572).
- (active, rustc_attrs, "1.0.0", Some(29642), None),
-
/// Allows using the `box $expr` syntax.
(active, box_syntax, "1.0.0", Some(49733), None),
/// Permits specifying whether a function should permit unwinding or abort on unwind.
(active, unwind_attributes, "1.4.0", Some(58760), None),
- /// Allows `#[no_debug]`.
- (active, no_debug, "1.5.0", Some(29721), None),
-
/// Allows attributes on expressions and non-item statements.
(active, stmt_expr_attributes, "1.6.0", Some(15701), None),
/// Allows specialization of implementations (RFC 1210).
(active, specialization, "1.7.0", Some(31844), None),
+ /// A minimal, sound subset of specialization intended to be used by the
+ /// standard library until the soundness issues with specialization
+ /// are fixed.
+ (active, min_specialization, "1.7.0", Some(31844), None),
+
/// Allows using `#[naked]` on functions.
(active, naked_functions, "1.9.0", Some(32408), None),
/// Allows `#[doc(masked)]`.
(active, doc_masked, "1.21.0", Some(44027), None),
- /// Allows `#[doc(spotlight)]`.
- (active, doc_spotlight, "1.22.0", Some(45040), None),
-
/// Allows `#[doc(include = "some-file")]`.
(active, external_doc, "1.22.0", Some(44732), None),
/// Allows defining `trait X = A + B;` alias items.
(active, trait_alias, "1.24.0", Some(41517), None),
- /// Allows infering `'static` outlives requirements (RFC 2093).
+ /// Allows inferring `'static` outlives requirements (RFC 2093).
(active, infer_static_outlives_requirements, "1.26.0", Some(54185), None),
/// Allows accessing fields of unions inside `const` functions.
/// Allows the use of `no_sanitize` attribute.
(active, no_sanitize, "1.42.0", Some(39699), None),
+ /// Allows limiting the evaluation steps of const expressions.
+ (active, const_eval_limit, "1.43.0", Some(67217), None),
+
// -------------------------------------------------------------------------
// feature-group-end: actual feature gates
// -------------------------------------------------------------------------
/// A template that the attribute input must match.
/// Only top-level shape (`#[attr]` vs `#[attr(...)]` vs `#[attr = ...]`) is considered now.
-#[derive(Clone, Copy)]
+#[derive(Clone, Copy, Default)]
pub struct AttributeTemplate {
pub word: bool,
pub list: Option<&'static str>,
pub name_value_str: Option<&'static str>,
}
-impl AttributeTemplate {
- pub fn only_word() -> Self {
- Self { word: true, list: None, name_value_str: None }
- }
-}
-
/// A convenience macro for constructing attribute templates.
/// E.g., `template!(Word, List: "description")` means that the attribute
/// supports forms `#[attr]` and `#[attr(description)]`.
// Stable attributes:
// ==========================================================================
- // Condtional compilation:
+ // Conditional compilation:
ungated!(cfg, Normal, template!(List: "predicate")),
ungated!(cfg_attr, Normal, template!(List: "predicate, attr1, attr2, ...")),
// Limits:
ungated!(recursion_limit, CrateLevel, template!(NameValueStr: "N")),
ungated!(type_length_limit, CrateLevel, template!(NameValueStr: "N")),
+ gated!(
+ const_eval_limit, CrateLevel, template!(NameValueStr: "N"), const_eval_limit,
+ experimental!(const_eval_limit)
+ ),
// Entry point:
ungated!(main, Normal, template!(Word)),
cfg_fn!(rustc_attrs),
),
),
- (
- sym::no_debug, Whitelisted, template!(Word),
- Gated(
- Stability::Deprecated("https://github.com/rust-lang/rust/issues/29721", None),
- sym::no_debug,
- "the `#[no_debug]` attribute was an experimental feature that has been \
- deprecated due to lack of demand",
- cfg_fn!(no_debug)
- )
- ),
gated!(
// Used in resolve:
prelude_import, Whitelisted, template!(Word),
rustc_test_marker, Normal, template!(Word),
"the `#[rustc_test_marker]` attribute is used internally to track tests",
),
+ rustc_attr!(
+ rustc_unsafe_specialization_marker, Normal, template!(Word),
+ "the `#[rustc_unsafe_specialization_marker]` attribute is used to check specializations"
+ ),
+ rustc_attr!(
+ rustc_specialization_trait, Normal, template!(Word),
+ "the `#[rustc_specialization_trait]` attribute is used to check specializations"
+ ),
// ==========================================================================
// Internal attributes, Testing:
/// Allows overlapping impls of marker traits.
(removed, overlapping_marker_traits, "1.42.0", Some(29864), None,
Some("removed in favor of `#![feature(marker_trait_attr)]`")),
+ /// Allows `#[no_debug]`.
+ (removed, no_debug, "1.43.0", Some(29721), None, Some("removed due to lack of demand")),
// -------------------------------------------------------------------------
// feature-group-end: removed features
// -------------------------------------------------------------------------
Static,
/// Refers to the struct or enum variant's constructor.
Ctor(CtorOf, CtorKind),
- Method,
+ AssocFn,
AssocConst,
// Macro namespace
DefKind::Union => "union",
DefKind::Trait => "trait",
DefKind::ForeignTy => "foreign type",
- DefKind::Method => "method",
+ DefKind::AssocFn => "associated function",
DefKind::Const => "constant",
DefKind::AssocConst => "associated constant",
DefKind::TyParam => "type parameter",
DefKind::AssocTy
| DefKind::AssocConst
| DefKind::AssocOpaqueTy
+ | DefKind::AssocFn
| DefKind::Enum
| DefKind::OpaqueTy => "an",
DefKind::Macro(macro_kind) => macro_kind.article(),
| DefKind::ConstParam
| DefKind::Static
| DefKind::Ctor(..)
- | DefKind::Method
+ | DefKind::AssocFn
| DefKind::AssocConst => ns == Namespace::ValueNS,
DefKind::Macro(..) => ns == Namespace::MacroNS,
pub use rustc_ast::ast::{BorrowKind, ImplPolarity, IsAuto};
pub use rustc_ast::ast::{CaptureBy, Movability, Mutability};
use rustc_ast::node_id::NodeMap;
-use rustc_ast::tokenstream::TokenStream;
use rustc_ast::util::parser::ExprPrecedence;
use rustc_data_structures::fx::FxHashSet;
use rustc_data_structures::sync::{par_for_each_in, Send, Sync};
}
}
- pub fn modern(&self) -> ParamName {
+ pub fn normalize_to_macros_2_0(&self) -> ParamName {
match *self {
- ParamName::Plain(ident) => ParamName::Plain(ident.modern()),
+ ParamName::Plain(ident) => ParamName::Plain(ident.normalize_to_macros_2_0()),
param_name => param_name,
}
}
self == &LifetimeName::Static
}
- pub fn modern(&self) -> LifetimeName {
+ pub fn normalize_to_macros_2_0(&self) -> LifetimeName {
match *self {
- LifetimeName::Param(param_name) => LifetimeName::Param(param_name.modern()),
+ LifetimeName::Param(param_name) => {
+ LifetimeName::Param(param_name.normalize_to_macros_2_0())
+ }
lifetime_name => lifetime_name,
}
}
pub rhs_ty: &'hir Ty<'hir>,
}
-#[derive(RustcEncodable, RustcDecodable, Debug)]
+#[derive(RustcEncodable, RustcDecodable, Debug, HashStable_Generic)]
pub struct ModuleItems {
// Use BTreeSets here so items are in the same order as in the
// list of all items in Crate
pub impl_items: BTreeSet<ImplItemId>,
}
+/// A type representing only the top-level module.
+#[derive(RustcEncodable, RustcDecodable, Debug, HashStable_Generic)]
+pub struct CrateItem<'hir> {
+ pub module: Mod<'hir>,
+ pub attrs: &'hir [Attribute],
+ pub span: Span,
+}
+
/// The top-level data structure that stores the entire contents of
/// the crate currently being compiled.
///
-/// For more details, see the [rustc guide].
+/// For more details, see the [rustc dev guide].
///
-/// [rustc guide]: https://rust-lang.github.io/rustc-guide/hir.html
+/// [rustc dev guide]: https://rustc-dev-guide.rust-lang.org/hir.html
#[derive(RustcEncodable, RustcDecodable, Debug)]
pub struct Crate<'hir> {
- pub module: Mod<'hir>,
- pub attrs: &'hir [Attribute],
- pub span: Span,
+ pub item: CrateItem<'hir>,
pub exported_macros: &'hir [MacroDef<'hir>],
// Attributes from non-exported macros, kept only for collecting the library feature list.
pub non_exported_macro_attrs: &'hir [Attribute],
where
V: itemlikevisit::ItemLikeVisitor<'hir>,
{
- for (_, item) in &self.items {
+ for item in self.items.values() {
visitor.visit_item(item);
}
- for (_, trait_item) in &self.trait_items {
+ for trait_item in self.trait_items.values() {
visitor.visit_trait_item(trait_item);
}
- for (_, impl_item) in &self.impl_items {
+ for impl_item in self.impl_items.values() {
visitor.visit_impl_item(impl_item);
}
}
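The loops above drop the unused-key destructuring (`for (_, item) in &map`) in favor of iterating `values()` directly. A minimal standalone sketch of the same change, using a plain `BTreeMap` as a stand-in for the HIR item maps (the rustc types are not reproduced here):

```rust
use std::collections::BTreeMap;

/// Collect the values of a map in key order, ignoring the keys —
/// the same shape as the `visit_all_item_likes` loops above.
fn collect_values(entries: &[(u32, &'static str)]) -> Vec<&'static str> {
    let map: BTreeMap<u32, &'static str> = entries.iter().cloned().collect();
    // `values()` replaces `for (_, v) in &map`: same iteration order,
    // no throwaway tuple pattern to maintain.
    map.values().cloned().collect()
}
```

Same behavior either way; the `.values()` form just states the intent (keys are irrelevant) in the type of the iterator.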
/// Not parsed directly, but created on macro import or `macro_rules!` expansion.
#[derive(RustcEncodable, RustcDecodable, Debug, HashStable_Generic)]
pub struct MacroDef<'hir> {
- pub name: Name,
+ pub ident: Ident,
pub vis: Visibility<'hir>,
pub attrs: &'hir [Attribute],
pub hir_id: HirId,
pub span: Span,
- pub body: TokenStream,
- pub legacy: bool,
+ pub ast: ast::MacroDef,
}
/// A block of statements `{ .. }`, which may have a label (in this case the
/// Represents a trait method's body (or just argument names).
#[derive(RustcEncodable, RustcDecodable, Debug, HashStable_Generic)]
-pub enum TraitMethod<'hir> {
+pub enum TraitFn<'hir> {
/// No default body in the trait, just a signature.
Required(&'hir [Ident]),
pub enum TraitItemKind<'hir> {
/// An associated constant with an optional value (otherwise `impl`s must contain a value).
Const(&'hir Ty<'hir>, Option<BodyId>),
- /// A method with an optional body.
- Method(FnSig<'hir>, TraitMethod<'hir>),
+ /// An associated function with an optional body.
+ Fn(FnSig<'hir>, TraitFn<'hir>),
/// An associated type with (possibly empty) bounds and optional concrete
/// type.
Type(GenericBounds<'hir>, Option<&'hir Ty<'hir>>),
/// An associated constant of the given type, set to the constant result
/// of the expression.
Const(&'hir Ty<'hir>, BodyId),
- /// A method implementation with the given signature and body.
- Method(FnSig<'hir>, BodyId),
+ /// An associated function implementation with the given signature and body.
+ Fn(FnSig<'hir>, BodyId),
/// An associated type.
TyAlias(&'hir Ty<'hir>),
/// An associated `type = impl Trait`.
pub fn namespace(&self) -> Namespace {
match self {
ImplItemKind::OpaqueTy(..) | ImplItemKind::TyAlias(..) => Namespace::TypeNS,
- ImplItemKind::Const(..) | ImplItemKind::Method(..) => Namespace::ValueNS,
+ ImplItemKind::Const(..) | ImplItemKind::Fn(..) => Namespace::ValueNS,
}
}
}
// imported.
pub type GlobMap = NodeMap<FxHashSet<Name>>;
-#[derive(Copy, Clone, Debug)]
+#[derive(Copy, Clone, Debug, HashStable_Generic)]
pub enum Node<'hir> {
Param(&'hir Param<'hir>),
Item(&'hir Item<'hir>),
GenericParam(&'hir GenericParam<'hir>),
Visibility(&'hir Visibility<'hir>),
- Crate,
+ Crate(&'hir CrateItem<'hir>),
}
impl Node<'_> {
pub fn fn_decl(&self) -> Option<&FnDecl<'_>> {
match self {
- Node::TraitItem(TraitItem { kind: TraitItemKind::Method(fn_sig, _), .. })
- | Node::ImplItem(ImplItem { kind: ImplItemKind::Method(fn_sig, _), .. })
+ Node::TraitItem(TraitItem { kind: TraitItemKind::Fn(fn_sig, _), .. })
+ | Node::ImplItem(ImplItem { kind: ImplItemKind::Fn(fn_sig, _), .. })
| Node::Item(Item { kind: ItemKind::Fn(fn_sig, _, _), .. }) => Some(fn_sig.decl),
Node::ForeignItem(ForeignItem { kind: ForeignItemKind::Fn(fn_decl, _, _), .. }) => {
Some(fn_decl)
-use crate::def_id::{DefId, DefIndex, LocalDefId, CRATE_DEF_INDEX};
-use rustc_serialize::{self, Decodable, Decoder, Encodable, Encoder};
+use crate::def_id::{LocalDefId, CRATE_DEF_INDEX};
use std::fmt;
/// Uniquely identifies a node in the HIR of the current crate. It is
-/// composed of the `owner`, which is the `DefIndex` of the directly enclosing
+/// composed of the `owner`, which is the `LocalDefId` of the directly enclosing
/// `hir::Item`, `hir::TraitItem`, or `hir::ImplItem` (i.e., the closest "item-like"),
/// and the `local_id` which is unique within the given owner.
///
/// the `local_id` part of the `HirId` changing, which is a very useful property in
/// incremental compilation where we have to persist things through changes to
/// the code base.
-#[derive(Copy, Clone, PartialEq, Eq, Hash, Debug, PartialOrd, Ord)]
+#[derive(Copy, Clone, PartialEq, Eq, Hash, Debug, PartialOrd, Ord, RustcEncodable, RustcDecodable)]
pub struct HirId {
- pub owner: DefIndex,
+ pub owner: LocalDefId,
pub local_id: ItemLocalId,
}
-impl HirId {
- pub fn owner_def_id(self) -> DefId {
- DefId::local(self.owner)
- }
-
- pub fn owner_local_def_id(self) -> LocalDefId {
- LocalDefId::from_def_id(DefId::local(self.owner))
- }
-}
-
-impl rustc_serialize::UseSpecializedEncodable for HirId {
- fn default_encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
- let HirId { owner, local_id } = *self;
-
- owner.encode(s)?;
- local_id.encode(s)?;
- Ok(())
- }
-}
-
-impl rustc_serialize::UseSpecializedDecodable for HirId {
- fn default_decode<D: Decoder>(d: &mut D) -> Result<HirId, D::Error> {
- let owner = DefIndex::decode(d)?;
- let local_id = ItemLocalId::decode(d)?;
-
- Ok(HirId { owner, local_id })
- }
-}
-
impl fmt::Display for HirId {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{:?}", self)
rustc_data_structures::impl_stable_hash_via_hash!(ItemLocalId);
/// The `HirId` corresponding to `CRATE_NODE_ID` and `CRATE_DEF_INDEX`.
-pub const CRATE_HIR_ID: HirId =
- HirId { owner: CRATE_DEF_INDEX, local_id: ItemLocalId::from_u32_const(0) };
+pub const CRATE_HIR_ID: HirId = HirId {
+ owner: LocalDefId { local_def_index: CRATE_DEF_INDEX },
+ local_id: ItemLocalId::from_u32(0),
+};
-pub const DUMMY_HIR_ID: HirId = HirId { owner: CRATE_DEF_INDEX, local_id: DUMMY_ITEM_LOCAL_ID };
+pub const DUMMY_HIR_ID: HirId =
+ HirId { owner: LocalDefId { local_def_index: CRATE_DEF_INDEX }, local_id: DUMMY_ITEM_LOCAL_ID };
pub const DUMMY_ITEM_LOCAL_ID: ItemLocalId = ItemLocalId::MAX;
fn impl_item(&self, id: ImplItemId) -> &'hir ImplItem<'hir>;
}
+/// An erased version of `Map<'hir>`, using dynamic dispatch.
+/// NOTE: This type is effectively only usable with `NestedVisitorMap::None`.
+pub struct ErasedMap<'hir>(&'hir dyn Map<'hir>);
+
+impl<'hir> Map<'hir> for ErasedMap<'hir> {
+ fn body(&self, id: BodyId) -> &'hir Body<'hir> {
+ self.0.body(id)
+ }
+ fn item(&self, id: HirId) -> &'hir Item<'hir> {
+ self.0.item(id)
+ }
+ fn trait_item(&self, id: TraitItemId) -> &'hir TraitItem<'hir> {
+ self.0.trait_item(id)
+ }
+ fn impl_item(&self, id: ImplItemId) -> &'hir ImplItem<'hir> {
+ self.0.impl_item(id)
+ }
+}
+
/// Specifies what nested things a visitor wants to visit. The most
/// common choice is `OnlyBodies`, which will cause the visitor to
/// visit fn bodies for fns that it encounters, but skip over nested
///
/// See the comments on `ItemLikeVisitor` for more details on the overall
/// visit strategy.
-pub enum NestedVisitorMap<'this, M> {
+pub enum NestedVisitorMap<M> {
/// Do not visit any nested things. When you add a new
/// "non-nested" thing, you will want to audit such uses to see if
/// they remain valid.
/// to use `visit_all_item_likes()` as an outer loop,
/// and to have the visitor that visits the contents of each item
/// using this setting.
- OnlyBodies(&'this M),
+ OnlyBodies(M),
/// Visits all nested things, including item-likes.
///
/// **This is an unusual choice.** It is used when you want to
/// process everything within their lexical context. Typically you
/// kick off the visit by doing `walk_krate()`.
- All(&'this M),
+ All(M),
}
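The change from `OnlyBodies(&'this M)` to `OnlyBodies(M)` works because the map handle returned by `tcx.hir()` can now be stored by value instead of behind a reference that must outlive the enum. A standalone analogue of that signature change (`FakeMap` is hypothetical, standing in for `hir::map::Map`):

```rust
/// Hypothetical stand-in for the HIR map: a small `Copy` handle,
/// so it can be stored by value inside the enum.
#[derive(Copy, Clone)]
struct FakeMap {
    item_count: usize,
}

/// By-value version of the enum: no `'this` lifetime parameter needed,
/// because `M` itself is a cheaply copyable handle.
enum NestedVisitorMap<M> {
    None,
    OnlyBodies(M),
    All(M),
}

impl<M> NestedVisitorMap<M> {
    /// Map used for intra-item-like things (e.g. function bodies);
    /// both `OnlyBodies` and `All` provide one.
    fn intra(self) -> Option<M> {
        match self {
            NestedVisitorMap::None => None,
            NestedVisitorMap::OnlyBodies(map) | NestedVisitorMap::All(map) => Some(map),
        }
    }
}

fn intra_count(choice: NestedVisitorMap<FakeMap>) -> Option<usize> {
    choice.intra().map(|m| m.item_count)
}
```

Dropping the lifetime also simplifies every implementor's signature, as seen in the `nested_visit_map` changes further down in this diff.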
-impl<'this, M> NestedVisitorMap<'this, M> {
+impl<M> NestedVisitorMap<M> {
/// Returns the map to use for an "intra item-like" thing (if any).
/// E.g., function body.
- fn intra(self) -> Option<&'this M> {
+ fn intra(self) -> Option<M> {
match self {
NestedVisitorMap::None => None,
NestedVisitorMap::OnlyBodies(map) => Some(map),
/// Returns the map to use for an "item-like" thing (if any).
/// E.g., item, impl-item.
- fn inter(self) -> Option<&'this M> {
+ fn inter(self) -> Option<M> {
match self {
NestedVisitorMap::None => None,
NestedVisitorMap::OnlyBodies(_) => None,
/// `panic!()`. This way, if a new `visit_nested_XXX` variant is
/// added in the future, we will see the panic in your code and
/// fix it appropriately.
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map>;
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map>;
/// Invoked when a nested item is encountered. By default does
/// nothing unless you override `nested_visit_map` to return other than
/// If you use this, you probably don't want to process the
/// contents of nested item-like things, since the outer loop will
/// visit them as well.
- fn as_deep_visitor<'s>(&'s mut self) -> DeepVisitor<'s, Self> {
+ fn as_deep_visitor(&mut self) -> DeepVisitor<'_, Self> {
DeepVisitor::new(self)
}
/// Walks the contents of a crate. See also `Crate::visit_all_items`.
pub fn walk_crate<'v, V: Visitor<'v>>(visitor: &mut V, krate: &'v Crate<'v>) {
- visitor.visit_mod(&krate.module, krate.span, CRATE_HIR_ID);
- walk_list!(visitor, visit_attribute, krate.attrs);
+ visitor.visit_mod(&krate.item.module, krate.item.span, CRATE_HIR_ID);
+ walk_list!(visitor, visit_attribute, krate.item.attrs);
walk_list!(visitor, visit_macro_def, krate.exported_macros);
}
pub fn walk_macro_def<'v, V: Visitor<'v>>(visitor: &mut V, macro_def: &'v MacroDef<'v>) {
visitor.visit_id(macro_def.hir_id);
- visitor.visit_name(macro_def.span, macro_def.name);
+ visitor.visit_ident(macro_def.ident);
walk_list!(visitor, visit_attribute, macro_def.attrs);
}
visitor.visit_ty(ty);
walk_list!(visitor, visit_nested_body, default);
}
- TraitItemKind::Method(ref sig, TraitMethod::Required(param_names)) => {
+ TraitItemKind::Fn(ref sig, TraitFn::Required(param_names)) => {
visitor.visit_id(trait_item.hir_id);
visitor.visit_fn_decl(&sig.decl);
for &param_name in param_names {
visitor.visit_ident(param_name);
}
}
- TraitItemKind::Method(ref sig, TraitMethod::Provided(body_id)) => {
+ TraitItemKind::Fn(ref sig, TraitFn::Provided(body_id)) => {
visitor.visit_fn(
FnKind::Method(trait_item.ident, sig, None, &trait_item.attrs),
&sig.decl,
visitor.visit_ty(ty);
visitor.visit_nested_body(body);
}
- ImplItemKind::Method(ref sig, body_id) => {
+ ImplItemKind::Fn(ref sig, body_id) => {
visitor.visit_fn(
FnKind::Method(impl_item.ident, sig, Some(&impl_item.vis), &impl_item.attrs),
&sig.decl,
StartFnLangItem, "start", start_fn, Target::Fn;
EhPersonalityLangItem, "eh_personality", eh_personality, Target::Fn;
- EhUnwindResumeLangItem, "eh_unwind_resume", eh_unwind_resume, Target::Fn;
EhCatchTypeinfoLangItem, "eh_catch_typeinfo", eh_catch_typeinfo, Target::Static;
OwnedBoxLangItem, "owned_box", owned_box, Target::Struct;
-//! HIR datatypes. See the [rustc guide] for more info.
+//! HIR datatypes. See the [rustc dev guide] for more info.
//!
-//! [rustc guide]: https://rust-lang.github.io/rustc-guide/hir.html
+//! [rustc dev guide]: https://rustc-dev-guide.rust-lang.org/hir.html
#![feature(crate_visibility_modifier)]
+#![feature(const_if_match)]
#![feature(const_fn)] // For the unsizing cast on `&[]`
+#![feature(const_panic)]
#![feature(in_band_lifetimes)]
#![feature(specialization)]
#![recursion_limit = "256"]
Node::Ctor(..) => panic!("cannot print isolated Ctor"),
Node::Local(a) => self.print_local_decl(&a),
Node::MacroDef(_) => panic!("cannot print MacroDef"),
- Node::Crate => panic!("cannot print Crate"),
+ Node::Crate(..) => panic!("cannot print Crate"),
}
}
}
// When printing the AST, we sometimes need to inject `#[no_std]` here.
// Since you can't compile the HIR, it's not necessary.
- s.print_mod(&krate.module, &krate.attrs);
+ s.print_mod(&krate.item.module, &krate.item.attrs);
s.print_remaining_comments();
s.s.eof()
}
self.word_nbsp("const");
}
- if let hir::ImplPolarity::Negative = polarity {
+ if let hir::ImplPolarity::Negative(_) = polarity {
self.s.word("!");
}
Spanned { span: rustc_span::DUMMY_SP, node: hir::VisibilityKind::Inherited };
self.print_associated_const(ti.ident, &ty, default, &vis);
}
- hir::TraitItemKind::Method(ref sig, hir::TraitMethod::Required(ref arg_names)) => {
+ hir::TraitItemKind::Fn(ref sig, hir::TraitFn::Required(ref arg_names)) => {
let vis =
Spanned { span: rustc_span::DUMMY_SP, node: hir::VisibilityKind::Inherited };
self.print_method_sig(ti.ident, sig, &ti.generics, &vis, arg_names, None);
self.s.word(";");
}
- hir::TraitItemKind::Method(ref sig, hir::TraitMethod::Provided(body)) => {
+ hir::TraitItemKind::Fn(ref sig, hir::TraitFn::Provided(body)) => {
let vis =
Spanned { span: rustc_span::DUMMY_SP, node: hir::VisibilityKind::Inherited };
self.head("");
hir::ImplItemKind::Const(ref ty, expr) => {
self.print_associated_const(ii.ident, &ty, Some(expr), &ii.vis);
}
- hir::ImplItemKind::Method(ref sig, body) => {
+ hir::ImplItemKind::Fn(ref sig, body) => {
self.head("");
self.print_method_sig(ii.ident, sig, &ii.generics, &ii.vis, &[], Some(body));
self.nbsp();
-use rustc_data_structures::stable_hasher::{HashStable, StableHasher};
+use rustc_data_structures::stable_hasher::{HashStable, StableHasher, ToStableHashKey};
-use crate::hir::{BodyId, Expr, ImplItemId, ItemId, Mod, TraitItemId, Ty, VisibilityKind};
-use crate::hir_id::HirId;
+use crate::hir::{
+ BodyId, Expr, ImplItem, ImplItemId, Item, ItemId, Mod, TraitItem, TraitItemId, Ty,
+ VisibilityKind,
+};
+use crate::hir_id::{HirId, ItemLocalId};
+use rustc_span::def_id::{DefPathHash, LocalDefId};
/// Requirements for a `StableHashingContext` to be used in this crate.
/// This is a hack to allow using the `HashStable_Generic` derive macro
fn hash_hir_expr(&mut self, _: &Expr<'_>, hasher: &mut StableHasher);
fn hash_hir_ty(&mut self, _: &Ty<'_>, hasher: &mut StableHasher);
fn hash_hir_visibility_kind(&mut self, _: &VisibilityKind<'_>, hasher: &mut StableHasher);
+ fn hash_hir_item_like<F: FnOnce(&mut Self)>(&mut self, f: F);
+ fn local_def_path_hash(&self, def_id: LocalDefId) -> DefPathHash;
+}
+
+impl<HirCtx: crate::HashStableContext> ToStableHashKey<HirCtx> for HirId {
+ type KeyType = (DefPathHash, ItemLocalId);
+
+ #[inline]
+ fn to_stable_hash_key(&self, hcx: &HirCtx) -> (DefPathHash, ItemLocalId) {
+ let def_path_hash = hcx.local_def_path_hash(self.owner);
+ (def_path_hash, self.local_id)
+ }
+}
+
+impl<HirCtx: crate::HashStableContext> ToStableHashKey<HirCtx> for TraitItemId {
+ type KeyType = (DefPathHash, ItemLocalId);
+
+ #[inline]
+ fn to_stable_hash_key(&self, hcx: &HirCtx) -> (DefPathHash, ItemLocalId) {
+ self.hir_id.to_stable_hash_key(hcx)
+ }
+}
+
+impl<HirCtx: crate::HashStableContext> ToStableHashKey<HirCtx> for ImplItemId {
+ type KeyType = (DefPathHash, ItemLocalId);
+
+ #[inline]
+ fn to_stable_hash_key(&self, hcx: &HirCtx) -> (DefPathHash, ItemLocalId) {
+ self.hir_id.to_stable_hash_key(hcx)
+ }
}
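The `TraitItemId` and `ImplItemId` impls above simply delegate to the inner `hir_id`: the stable key of a wrapper ID *is* the key of the `HirId` it wraps. The pattern in isolation, with a hypothetical `StableKey` trait and toy ID types (not the rustc API):

```rust
/// Hypothetical stand-in for `ToStableHashKey`: map an ID to a key
/// that stays stable across compilation sessions.
trait StableKey {
    type KeyType;
    fn to_stable_key(&self) -> Self::KeyType;
}

#[derive(Copy, Clone)]
struct HirId {
    owner_hash: u64, // stands in for DefPathHash
    local_id: u32,   // stands in for ItemLocalId
}

impl StableKey for HirId {
    type KeyType = (u64, u32);
    fn to_stable_key(&self) -> (u64, u32) {
        (self.owner_hash, self.local_id)
    }
}

/// Wrapper ID: its stable key is just its inner `HirId`'s key,
/// so the impl is pure delegation.
struct TraitItemId {
    hir_id: HirId,
}

impl StableKey for TraitItemId {
    type KeyType = (u64, u32);
    fn to_stable_key(&self) -> (u64, u32) {
        self.hir_id.to_stable_key()
    }
}
```

Delegation keeps all three ID types hashing to the same `(DefPathHash, ItemLocalId)` shape, which is what the stable-hashing machinery sorts on.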
impl<HirCtx: crate::HashStableContext> HashStable<HirCtx> for HirId {
hcx.hash_hir_visibility_kind(self, hasher)
}
}
+
+impl<HirCtx: crate::HashStableContext> HashStable<HirCtx> for TraitItem<'_> {
+ fn hash_stable(&self, hcx: &mut HirCtx, hasher: &mut StableHasher) {
+ let TraitItem { hir_id: _, ident, ref attrs, ref generics, ref kind, span } = *self;
+
+ hcx.hash_hir_item_like(|hcx| {
+ ident.name.hash_stable(hcx, hasher);
+ attrs.hash_stable(hcx, hasher);
+ generics.hash_stable(hcx, hasher);
+ kind.hash_stable(hcx, hasher);
+ span.hash_stable(hcx, hasher);
+ });
+ }
+}
+
+impl<HirCtx: crate::HashStableContext> HashStable<HirCtx> for ImplItem<'_> {
+ fn hash_stable(&self, hcx: &mut HirCtx, hasher: &mut StableHasher) {
+ let ImplItem {
+ hir_id: _,
+ ident,
+ ref vis,
+ defaultness,
+ ref attrs,
+ ref generics,
+ ref kind,
+ span,
+ } = *self;
+
+ hcx.hash_hir_item_like(|hcx| {
+ ident.name.hash_stable(hcx, hasher);
+ vis.hash_stable(hcx, hasher);
+ defaultness.hash_stable(hcx, hasher);
+ attrs.hash_stable(hcx, hasher);
+ generics.hash_stable(hcx, hasher);
+ kind.hash_stable(hcx, hasher);
+ span.hash_stable(hcx, hasher);
+ });
+ }
+}
+
+impl<HirCtx: crate::HashStableContext> HashStable<HirCtx> for Item<'_> {
+ fn hash_stable(&self, hcx: &mut HirCtx, hasher: &mut StableHasher) {
+ let Item { ident, ref attrs, hir_id: _, ref kind, ref vis, span } = *self;
+
+ hcx.hash_hir_item_like(|hcx| {
+ ident.name.hash_stable(hcx, hasher);
+ attrs.hash_stable(hcx, hasher);
+ kind.hash_stable(hcx, hasher);
+ vis.hash_stable(hcx, hasher);
+ span.hash_stable(hcx, hasher);
+ });
+ }
+}
pub fn from_trait_item(trait_item: &TraitItem<'_>) -> Target {
match trait_item.kind {
TraitItemKind::Const(..) => Target::AssocConst,
- TraitItemKind::Method(_, hir::TraitMethod::Required(_)) => {
+ TraitItemKind::Fn(_, hir::TraitFn::Required(_)) => {
Target::Method(MethodKind::Trait { body: false })
}
- TraitItemKind::Method(_, hir::TraitMethod::Provided(_)) => {
+ TraitItemKind::Fn(_, hir::TraitFn::Provided(_)) => {
Target::Method(MethodKind::Trait { body: true })
}
TraitItemKind::Type(..) => Target::AssocTy,
weak_lang_items! {
panic_impl, PanicImplLangItem, rust_begin_unwind;
eh_personality, EhPersonalityLangItem, rust_eh_personality;
- eh_unwind_resume, EhUnwindResumeLangItem, rust_eh_unwind_resume;
oom, OomLangItem, rust_oom;
}
let (if_this_changed, then_this_would_need) = {
let mut visitor =
IfThisChanged { tcx, if_this_changed: vec![], then_this_would_need: vec![] };
- visitor.process_attrs(hir::CRATE_HIR_ID, &tcx.hir().krate().attrs);
+ visitor.process_attrs(hir::CRATE_HIR_ID, &tcx.hir().krate().item.attrs);
tcx.hir().krate().visit_all_item_likes(&mut visitor.as_deep_visitor());
(visitor.if_this_changed, visitor.then_this_would_need)
};
if attr.check_name(sym::rustc_if_this_changed) {
let dep_node_interned = self.argument(attr);
let dep_node = match dep_node_interned {
- None => def_path_hash.to_dep_node(DepKind::Hir),
+ None => DepNode::from_def_path_hash(def_path_hash, DepKind::hir_owner),
Some(n) => match DepNode::from_label_string(&n.as_str(), def_path_hash) {
Ok(n) => n,
Err(()) => {
impl Visitor<'tcx> for IfThisChanged<'tcx> {
type Map = Map<'tcx>;
- fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, Self::Map> {
- NestedVisitorMap::OnlyBodies(&self.tcx.hir())
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
+ NestedVisitorMap::OnlyBodies(self.tcx.hir())
}
fn visit_item(&mut self, item: &'tcx hir::Item<'tcx>) {
let ams = AssertModuleSource { tcx, available_cgus };
- for attr in tcx.hir().krate().attrs {
+ for attr in tcx.hir().krate().item.attrs {
ams.check_attr(attr);
}
})
-For info on how the incremental compilation works, see the [rustc guide].
+For info on how incremental compilation works, see the [rustc dev guide].
-[rustc guide]: https://rust-lang.github.io/rustc-guide/query.html
+[rustc dev guide]: https://rustc-dev-guide.rust-lang.org/query.html
/// DepNodes for Hir, which is pretty much everything
const BASE_HIR: &[&str] = &[
- // Hir and HirBody should be computed for all nodes
- label_strs::Hir,
- label_strs::HirBody,
+ // hir_owner and hir_owner_items should be computed for all nodes
+ label_strs::hir_owner,
+ label_strs::hir_owner_items,
];
/// `impl` implementation of struct/trait
}
}
HirNode::TraitItem(item) => match item.kind {
- TraitItemKind::Method(..) => ("Node::TraitItem", LABELS_FN_IN_TRAIT),
+ TraitItemKind::Fn(..) => ("Node::TraitItem", LABELS_FN_IN_TRAIT),
TraitItemKind::Const(..) => ("NodeTraitConst", LABELS_CONST_IN_TRAIT),
TraitItemKind::Type(..) => ("NodeTraitType", LABELS_CONST_IN_TRAIT),
},
HirNode::ImplItem(item) => match item.kind {
- ImplItemKind::Method(..) => ("Node::ImplItem", LABELS_FN_IN_IMPL),
+ ImplItemKind::Fn(..) => ("Node::ImplItem", LABELS_FN_IN_IMPL),
ImplItemKind::Const(..) => ("NodeImplConst", LABELS_CONST_IN_IMPL),
ImplItemKind::TyAlias(..) => ("NodeImplType", LABELS_CONST_IN_IMPL),
ImplItemKind::OpaqueTy(..) => ("NodeImplType", LABELS_CONST_IN_IMPL),
&format!("clean/dirty auto-assertions not yet defined for {:?}", node),
),
};
- let labels = Labels::from_iter(labels.iter().flat_map(|s| s.iter().map(|l| l.to_string())));
+ let labels =
+ Labels::from_iter(labels.iter().flat_map(|s| s.iter().map(|l| (*l).to_string())));
(name, labels)
}
impl intravisit::Visitor<'tcx> for FindAllAttrs<'tcx> {
type Map = Map<'tcx>;
- fn nested_visit_map<'this>(&'this mut self) -> intravisit::NestedVisitorMap<'this, Self::Map> {
- intravisit::NestedVisitorMap::All(&self.tcx.hir())
+ fn nested_visit_map(&mut self) -> intravisit::NestedVisitorMap<Self::Map> {
+ intravisit::NestedVisitorMap::All(self.tcx.hir())
}
fn visit_attribute(&mut self, attr: &'tcx Attribute) {
use std::io::{self, Read};
use std::path::Path;
-use rustc::session::config::nightly_options;
use rustc_serialize::opaque::Encoder;
+use rustc_session::config::nightly_options;
/// The first few bytes of files generated by incremental compilation.
const FILE_MAGIC: &[u8] = b"RSIC";
//! unsupported file system and emit a warning in that case. This is not yet
//! implemented.
-use rustc::session::{CrateDisambiguator, Session};
use rustc_data_structures::fx::{FxHashMap, FxHashSet};
use rustc_data_structures::svh::Svh;
use rustc_data_structures::{base_n, flock};
use rustc_fs_util::{link_or_copy, LinkOrCopy};
+use rustc_session::{CrateDisambiguator, Session};
use std::fs as std_fs;
use std::io;
//! Code to save/load the dep-graph from files.
use rustc::dep_graph::{PreviousDepGraph, SerializedDepGraph, WorkProduct, WorkProductId};
-use rustc::session::Session;
use rustc::ty::query::OnDiskCache;
use rustc::ty::TyCtxt;
use rustc_data_structures::fx::FxHashMap;
use rustc_serialize::opaque::Decoder;
use rustc_serialize::Decodable as RustcDecodable;
+use rustc_session::Session;
use std::path::Path;
use super::data::*;
use rustc::dep_graph::{DepGraph, DepKind, WorkProduct, WorkProductId};
-use rustc::session::Session;
use rustc::ty::TyCtxt;
use rustc_data_structures::fx::FxHashMap;
use rustc_data_structures::sync::join;
use rustc_serialize::opaque::Encoder;
use rustc_serialize::Encodable as RustcEncodable;
+use rustc_session::Session;
use std::fs;
use std::path::PathBuf;
use crate::persist::fs::*;
use rustc::dep_graph::{WorkProduct, WorkProductFileKind, WorkProductId};
-use rustc::session::Session;
use rustc_fs_util::link_or_copy;
+use rustc_session::Session;
use std::fs as std_fs;
use std::path::PathBuf;
files: &[(WorkProductFileKind, PathBuf)],
) -> Option<(WorkProductId, WorkProduct)> {
debug!("copy_cgu_workproducts_to_incr_comp_cache_dir({:?},{:?})", cgu_name, files);
- if sess.opts.incremental.is_none() {
- return None;
- }
+ sess.opts.incremental.as_ref()?;
let saved_files = files
.iter()
#![feature(allow_internal_unstable)]
+#![feature(const_if_match)]
+#![feature(const_fn)]
+#![feature(const_panic)]
#![feature(unboxed_closures)]
#![feature(test)]
#![feature(fn_traits)]
impl $type {
$v const MAX_AS_U32: u32 = $max;
- $v const MAX: Self = Self::from_u32_const($max);
+ $v const MAX: Self = Self::from_u32($max);
#[inline]
- $v fn from_usize(value: usize) -> Self {
+ $v const fn from_usize(value: usize) -> Self {
assert!(value <= ($max as usize));
unsafe {
Self::from_u32_unchecked(value as u32)
}
#[inline]
- $v fn from_u32(value: u32) -> Self {
+ $v const fn from_u32(value: u32) -> Self {
assert!(value <= $max);
unsafe {
Self::from_u32_unchecked(value)
}
}
- /// Hacky variant of `from_u32` for use in constants.
- /// This version checks the "max" constraint by using an
- /// invalid array dereference.
- #[inline]
- $v const fn from_u32_const(value: u32) -> Self {
- // This will fail at const eval time unless `value <=
- // max` is true (in which case we get the index 0).
- // It will also fail at runtime, of course, but in a
- // kind of wacky way.
- let _ = ["out of range value used"][
- !(value <= $max) as usize
- ];
-
- unsafe {
- Self { private: value }
- }
- }
-
#[inline]
$v const unsafe fn from_u32_unchecked(value: u32) -> Self {
Self { private: value }
/// Extracts the value of this index as an integer.
#[inline]
- $v fn index(self) -> usize {
+ $v const fn index(self) -> usize {
self.as_usize()
}
/// Extracts the value of this index as a `u32`.
#[inline]
- $v fn as_u32(self) -> u32 {
+ $v const fn as_u32(self) -> u32 {
self.private
}
/// Extracts the value of this index as a `usize`.
#[inline]
- $v fn as_usize(self) -> usize {
+ $v const fn as_usize(self) -> usize {
self.as_u32() as usize
}
}
#[inline]
fn index(self) -> usize {
- usize::from(self)
+ self.as_usize()
}
}
const $name:ident = $constant:expr,
$($tokens:tt)*) => (
$(#[doc = $doc])*
- $v const $name: $type = $type::from_u32_const($constant);
+ $v const $name: $type = $type::from_u32($constant);
$crate::newtype_index!(
@derives [$($derives,)*]
@attrs [$(#[$attrs])*]
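The const-ification above (dropping the `from_u32_const` array-indexing hack in favor of a plain `const fn from_u32` guarded by `assert!`) can be sketched outside the macro. `ItemIndex` and its bound are hypothetical stand-ins for illustration, not the real macro expansion:

```rust
// Simplified, hand-written version of what `newtype_index!` now generates.
// On the nightly this diff targets, `assert!` in a `const fn` needs the
// `const_panic`/`const_if_match` features added above; on current stable
// Rust it compiles as-is.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct ItemIndex {
    private: u32,
}

impl ItemIndex {
    const MAX_AS_U32: u32 = 0xFFFF_FF00;
    // `MAX` can now be built from the ordinary checked constructor,
    // because `from_u32` is usable in const contexts.
    const MAX: Self = Self::from_u32(Self::MAX_AS_U32);

    const fn from_u32(value: u32) -> Self {
        // Fails const evaluation (or panics at runtime) when out of range,
        // replacing the old "invalid array dereference" trick.
        assert!(value <= Self::MAX_AS_U32);
        ItemIndex { private: value }
    }

    const fn as_u32(self) -> u32 {
        self.private
    }
}

fn main() {
    const FIRST: ItemIndex = ItemIndex::from_u32(0);
    assert_eq!(FIRST.as_u32(), 0);
    assert_eq!(ItemIndex::MAX.as_u32(), ItemIndex::MAX_AS_U32);
    println!("ok");
}
```

The design point is that one constructor now serves both const and runtime callers, which is why the diff can delete `from_u32_const` and redirect `MAX` and the `const $name` arm to `from_u32`.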
doctest = false
[dependencies]
-fmt_macros = { path = "../libfmt_macros" }
graphviz = { path = "../libgraphviz" }
log = { version = "0.4", features = ["release_max_level_info", "std"] }
-rustc_attr = { path = "../librustc_attr" }
rustc = { path = "../librustc" }
rustc_data_structures = { path = "../librustc_data_structures" }
rustc_errors = { path = "../librustc_errors" }
-rustc_error_codes = { path = "../librustc_error_codes" }
rustc_hir = { path = "../librustc_hir" }
rustc_index = { path = "../librustc_index" }
rustc_macros = { path = "../librustc_macros" }
T: ToTrace<'tcx>,
{
let trace = ToTrace::to_trace(self.cause, a_is_expected, a, b);
- Trace { at: self, trace: trace, a_is_expected }
+ Trace { at: self, trace, a_is_expected }
}
}
//! This module contains the "canonicalizer" itself.
//!
//! For an overview of what canonicalization is and how it fits into
-//! rustc, check out the [chapter in the rustc guide][c].
+//! rustc, check out the [chapter in the rustc dev guide][c].
//!
-//! [c]: https://rust-lang.github.io/rustc-guide/traits/canonicalization.html
+//! [c]: https://rustc-dev-guide.rust-lang.org/traits/canonicalization.html
use crate::infer::canonical::{
Canonical, CanonicalTyVarKind, CanonicalVarInfo, CanonicalVarKind, Canonicalized,
/// with a mapping M that maps `'?0` to `'static`.
///
/// To get a good understanding of what is happening here, check
- /// out the [chapter in the rustc guide][c].
+ /// out the [chapter in the rustc dev guide][c].
///
- /// [c]: https://rust-lang.github.io/rustc-guide/traits/canonicalization.html#canonicalizing-the-query
+ /// [c]: https://rustc-dev-guide.rust-lang.org/traits/canonicalization.html#canonicalizing-the-query
pub fn canonicalize_query<V>(
&self,
value: &V,
/// reference to `'static` alone.
///
/// To get a good understanding of what is happening here, check
- /// out the [chapter in the rustc guide][c].
+ /// out the [chapter in the rustc dev guide][c].
///
- /// [c]: https://rust-lang.github.io/rustc-guide/traits/canonicalization.html#canonicalizing-the-query-result
+ /// [c]: https://rustc-dev-guide.rust-lang.org/traits/canonicalization.html#canonicalizing-the-query-result
pub fn canonicalize_response<V>(&self, value: &V) -> Canonicalized<'tcx, V>
where
V: TypeFoldable<'tcx>,
// `TyVar(vid)` is unresolved, track its universe index in the canonicalized
// result.
Err(mut ui) => {
- if !self.infcx.unwrap().tcx.sess.opts.debugging_opts.chalk {
- // FIXME: perf problem described in #55921.
- ui = ty::UniverseIndex::ROOT;
- }
+ // FIXME: perf problem described in #55921.
+ ui = ty::UniverseIndex::ROOT;
self.canonicalize_ty_var(
CanonicalVarInfo {
kind: CanonicalVarKind::Ty(CanonicalTyVarKind::General(ui)),
// `ConstVar(vid)` is unresolved, track its universe index in the
// canonicalized result
Err(mut ui) => {
- if !self.infcx.unwrap().tcx.sess.opts.debugging_opts.chalk {
- // FIXME: perf problem described in #55921.
- ui = ty::UniverseIndex::ROOT;
- }
+ // FIXME: perf problem described in #55921.
+ ui = ty::UniverseIndex::ROOT;
return self.canonicalize_const_var(
CanonicalVarInfo { kind: CanonicalVarKind::Const(ui) },
ct,
//! `instantiate_query_result` method.
//!
//! For a more detailed look at what is happening here, check
-//! out the [chapter in the rustc guide][c].
+//! out the [chapter in the rustc dev guide][c].
//!
-//! [c]: https://rust-lang.github.io/rustc-guide/traits/canonicalization.html
+//! [c]: https://rustc-dev-guide.rust-lang.org/traits/canonicalization.html
use crate::infer::{ConstVariableOrigin, ConstVariableOriginKind};
use crate::infer::{InferCtxt, RegionVariableOrigin, TypeVariableOrigin, TypeVariableOriginKind};
//! encode them therein.
//!
//! For an overview of what canonicalization is and how it fits into
-//! rustc, check out the [chapter in the rustc guide][c].
+//! rustc, check out the [chapter in the rustc dev guide][c].
//!
-//! [c]: https://rust-lang.github.io/rustc-guide/traits/canonicalization.html
+//! [c]: https://rustc-dev-guide.rust-lang.org/traits/canonicalization.html
use crate::infer::canonical::substitute::{substitute_value, CanonicalExt};
use crate::infer::canonical::{
Canonical, CanonicalVarValues, CanonicalizedQueryResponse, Certainty, OriginalQueryValues,
QueryOutlivesConstraint, QueryRegionConstraints, QueryResponse,
};
+use crate::infer::nll_relate::{NormalizationStrategy, TypeRelating, TypeRelatingDelegate};
use crate::infer::region_constraints::{Constraint, RegionConstraintData};
-use crate::infer::InferCtxtBuilder;
-use crate::infer::{InferCtxt, InferOk, InferResult};
+use crate::infer::{InferCtxt, InferOk, InferResult, NLLRegionVariableOrigin};
use crate::traits::query::{Fallible, NoSolution};
-use crate::traits::TraitEngine;
+use crate::traits::{DomainGoal, TraitEngine};
use crate::traits::{Obligation, ObligationCause, PredicateObligation};
use rustc::arena::ArenaAllocatable;
use rustc::ty::fold::TypeFoldable;
+use rustc::ty::relate::TypeRelation;
use rustc::ty::subst::{GenericArg, GenericArgKind};
use rustc::ty::{self, BoundVar, Ty, TyCtxt};
use rustc_data_structures::captures::Captures;
use rustc_index::vec::Idx;
use rustc_index::vec::IndexVec;
-use rustc_span::DUMMY_SP;
use std::fmt::Debug;
-impl<'tcx> InferCtxtBuilder<'tcx> {
- /// The "main method" for a canonicalized trait query. Given the
- /// canonical key `canonical_key`, this method will create a new
- /// inference context, instantiate the key, and run your operation
- /// `op`. The operation should yield up a result (of type `R`) as
- /// well as a set of trait obligations that must be fully
- /// satisfied. These obligations will be processed and the
- /// canonical result created.
- ///
- /// Returns `NoSolution` in the event of any error.
- ///
- /// (It might be mildly nicer to implement this on `TyCtxt`, and
- /// not `InferCtxtBuilder`, but that is a bit tricky right now.
- /// In part because we would need a `for<'tcx>` sort of
- /// bound for the closure and in part because it is convenient to
- /// have `'tcx` be free on this function so that we can talk about
- /// `K: TypeFoldable<'tcx>`.)
- pub fn enter_canonical_trait_query<K, R>(
- &mut self,
- canonical_key: &Canonical<'tcx, K>,
- operation: impl FnOnce(&InferCtxt<'_, 'tcx>, &mut dyn TraitEngine<'tcx>, K) -> Fallible<R>,
- ) -> Fallible<CanonicalizedQueryResponse<'tcx, R>>
- where
- K: TypeFoldable<'tcx>,
- R: Debug + TypeFoldable<'tcx>,
- Canonical<'tcx, QueryResponse<'tcx, R>>: ArenaAllocatable,
- {
- self.enter_with_canonical(
- DUMMY_SP,
- canonical_key,
- |ref infcx, key, canonical_inference_vars| {
- let mut fulfill_cx = TraitEngine::new(infcx.tcx);
- let value = operation(infcx, &mut *fulfill_cx, key)?;
- infcx.make_canonicalized_query_response(
- canonical_inference_vars,
- value,
- &mut *fulfill_cx,
- )
- },
- )
- }
-}
-
impl<'cx, 'tcx> InferCtxt<'cx, 'tcx> {
/// This method is meant to be invoked as the final step of a canonical query
/// implementation. It is given:
/// the query before applying this function.)
///
/// To get a good understanding of what is happening here, check
- /// out the [chapter in the rustc guide][c].
+ /// out the [chapter in the rustc dev guide][c].
///
- /// [c]: https://rust-lang.github.io/rustc-guide/traits/canonicalization.html#processing-the-canonicalized-query-result
+ /// [c]: https://rustc-dev-guide.rust-lang.org/traits/canonicalization.html#processing-the-canonicalized-query-result
pub fn instantiate_query_response_and_region_obligations<R>(
&self,
cause: &ObligationCause<'tcx>,
}
(GenericArgKind::Type(v1), GenericArgKind::Type(v2)) => {
- let ok = self.at(cause, param_env).eq(v1, v2)?;
- obligations.extend(ok.into_obligations());
+ TypeRelating::new(
+ self,
+ QueryTypeRelatingDelegate {
+ infcx: self,
+ param_env,
+ cause,
+ obligations: &mut obligations,
+ },
+ ty::Variance::Invariant,
+ )
+ .relate(&v1, &v2)?;
}
(GenericArgKind::Const(v1), GenericArgKind::Const(v2)) => {
- let ok = self.at(cause, param_env).eq(v1, v2)?;
- obligations.extend(ok.into_obligations());
+ TypeRelating::new(
+ self,
+ QueryTypeRelatingDelegate {
+ infcx: self,
+ param_env,
+ cause,
+ obligations: &mut obligations,
+ },
+ ty::Variance::Invariant,
+ )
+ .relate(&v1, &v2)?;
}
_ => {
QueryRegionConstraints { outlives, member_constraints: member_constraints.clone() }
}
+
+struct QueryTypeRelatingDelegate<'a, 'tcx> {
+ infcx: &'a InferCtxt<'a, 'tcx>,
+ obligations: &'a mut Vec<PredicateObligation<'tcx>>,
+ param_env: ty::ParamEnv<'tcx>,
+ cause: &'a ObligationCause<'tcx>,
+}
+
+impl<'tcx> TypeRelatingDelegate<'tcx> for QueryTypeRelatingDelegate<'_, 'tcx> {
+ fn create_next_universe(&mut self) -> ty::UniverseIndex {
+ self.infcx.create_next_universe()
+ }
+
+ fn next_existential_region_var(&mut self, from_forall: bool) -> ty::Region<'tcx> {
+ let origin = NLLRegionVariableOrigin::Existential { from_forall };
+ self.infcx.next_nll_region_var(origin)
+ }
+
+ fn next_placeholder_region(&mut self, placeholder: ty::PlaceholderRegion) -> ty::Region<'tcx> {
+ self.infcx.tcx.mk_region(ty::RePlaceholder(placeholder))
+ }
+
+ fn generalize_existential(&mut self, universe: ty::UniverseIndex) -> ty::Region<'tcx> {
+ self.infcx.next_nll_region_var_in_universe(
+ NLLRegionVariableOrigin::Existential { from_forall: false },
+ universe,
+ )
+ }
+
+ fn push_outlives(&mut self, sup: ty::Region<'tcx>, sub: ty::Region<'tcx>) {
+ self.obligations.push(Obligation {
+ cause: self.cause.clone(),
+ param_env: self.param_env,
+ predicate: ty::Predicate::RegionOutlives(ty::Binder::dummy(ty::OutlivesPredicate(
+ sup, sub,
+ ))),
+ recursion_depth: 0,
+ });
+ }
+
+ fn push_domain_goal(&mut self, _: DomainGoal<'tcx>) {
+ bug!("should never be invoked with eager normalization")
+ }
+
+ fn normalization() -> NormalizationStrategy {
+ NormalizationStrategy::Eager
+ }
+
+ fn forbid_inference_vars() -> bool {
+ true
+ }
+}
//! `Canonical<'tcx, T>`.
//!
//! For an overview of what canonicalization is and how it fits into
-//! rustc, check out the [chapter in the rustc guide][c].
+//! rustc, check out the [chapter in the rustc dev guide][c].
//!
-//! [c]: https://rust-lang.github.io/rustc-guide/traits/canonicalization.html
+//! [c]: https://rustc-dev-guide.rust-lang.org/traits/canonicalization.html
use crate::infer::canonical::{Canonical, CanonicalVarValues};
use rustc::ty::fold::TypeFoldable;
fields: &'combine mut CombineFields<'infcx, 'tcx>,
a_is_expected: bool,
) -> Equate<'combine, 'infcx, 'tcx> {
- Equate { fields: fields, a_is_expected: a_is_expected }
+ Equate { fields, a_is_expected }
}
}
use super::region_constraints::GenericKind;
use super::{InferCtxt, RegionVariableOrigin, SubregionOrigin, TypeTrace, ValuePairs};
-use crate::infer::opaque_types;
-use crate::infer::{self, SuppressRegionErrors};
+use crate::infer;
use crate::traits::error_reporting::report_object_safety_error;
use crate::traits::{
IfExpressionCause, MatchExpressionArmCause, ObligationCause, ObligationCauseCode,
fn trait_item_scope_tag(item: &hir::TraitItem<'_>) -> &'static str {
match item.kind {
- hir::TraitItemKind::Method(..) => "method body",
+ hir::TraitItemKind::Fn(..) => "method body",
hir::TraitItemKind::Const(..) | hir::TraitItemKind::Type(..) => "associated item",
}
}
fn impl_item_scope_tag(item: &hir::ImplItem<'_>) -> &'static str {
match item.kind {
- hir::ImplItemKind::Method(..) => "method body",
+ hir::ImplItemKind::Fn(..) => "method body",
hir::ImplItemKind::Const(..)
| hir::ImplItemKind::OpaqueTy(..)
| hir::ImplItemKind::TyAlias(..) => "associated item",
(format!("the {} at {}:{}", heading, lo.line, lo.col.to_usize() + 1), Some(span))
}
+pub fn unexpected_hidden_region_diagnostic(
+ tcx: TyCtxt<'tcx>,
+ region_scope_tree: Option<&region::ScopeTree>,
+ span: Span,
+ hidden_ty: Ty<'tcx>,
+ hidden_region: ty::Region<'tcx>,
+) -> DiagnosticBuilder<'tcx> {
+ let mut err = struct_span_err!(
+ tcx.sess,
+ span,
+ E0700,
+ "hidden type for `impl Trait` captures lifetime that does not appear in bounds",
+ );
+
+ // Explain the region we are capturing.
+ if let ty::ReEarlyBound(_) | ty::ReFree(_) | ty::ReStatic | ty::ReEmpty(_) = hidden_region {
+ // Assuming regionck succeeded (*), we ought to always be
+ // capturing *some* region from the fn header, and hence it
+ // ought to be free. So under normal circumstances, we will go
+ // down this path which gives a decent human readable
+ // explanation.
+ //
+ // (*) if not, the `tainted_by_errors` flag would be set to
+ // true in any case, so we wouldn't be here at all.
+ note_and_explain_free_region(
+ tcx,
+ &mut err,
+ &format!("hidden type `{}` captures ", hidden_ty),
+ hidden_region,
+ "",
+ );
+ } else {
+ // Ugh. This is a painful case: the hidden region is not one
+ // that we can easily summarize or explain. This can happen
+ // in a case like
+ // `src/test/ui/multiple-lifetimes/ordinary-bounds-unsuited.rs`:
+ //
+ // ```
+ // fn upper_bounds<'a, 'b>(a: Ordinary<'a>, b: Ordinary<'b>) -> impl Trait<'a, 'b> {
+ // if condition() { a } else { b }
+ // }
+ // ```
+ //
+ // Here the captured lifetime is the intersection of `'a` and
+ // `'b`, which we can't quite express.
+
+ if let Some(region_scope_tree) = region_scope_tree {
+ // If the `region_scope_tree` is available, this is being
+ // invoked from the "region inferencer error". We can at
+ // least report a really cryptic error for now.
+ note_and_explain_region(
+ tcx,
+ region_scope_tree,
+ &mut err,
+ &format!("hidden type `{}` captures ", hidden_ty),
+ hidden_region,
+ "",
+ );
+ } else {
+ // If the `region_scope_tree` is *unavailable*, this is
+ // being invoked by the code that comes *after* region
+ // inferencing. This is a bug, as the region inferencer
+ // ought to have noticed the failed constraint and invoked
+ // error reporting, which in turn should have prevented us
+ // from trying to infer the hidden type at all.
+ tcx.sess.delay_span_bug(
+ span,
+ &format!(
+ "hidden type captures unexpected lifetime `{:?}` \
+ but no region inference failure",
+ hidden_region,
+ ),
+ );
+ }
+ }
+
+ err
+}
+
impl<'a, 'tcx> InferCtxt<'a, 'tcx> {
pub fn report_region_errors(
&self,
region_scope_tree: &region::ScopeTree,
errors: &Vec<RegionResolutionError<'tcx>>,
- suppress: SuppressRegionErrors,
) {
- debug!(
- "report_region_errors(): {} errors to start, suppress = {:?}",
- errors.len(),
- suppress
- );
-
- if suppress.suppressed() {
- return;
- }
+ debug!("report_region_errors(): {} errors to start", errors.len());
// try to pre-process the errors, which will group some of them
// together into a `ProcessedErrors` group:
span,
} => {
let hidden_ty = self.resolve_vars_if_possible(&hidden_ty);
- opaque_types::unexpected_hidden_region_diagnostic(
+ unexpected_hidden_region_diagnostic(
self.tcx,
Some(region_scope_tree),
span,
"{} may not live long enough",
labeled_user_string
);
- // Explicitely use the name instead of `sub`'s `Display` impl. The `Display` impl
+ // Explicitly use the name instead of `sub`'s `Display` impl. The `Display` impl
// for the bound is not suitable for suggestions when `-Zverbose` is set because it
// uses `Debug` output, so we handle it specially here so that suggestions are
// always correct.
/// This is a bare signal of what kind of type we're dealing with. `ty::TyKind` tracks
/// extra information about each type, but we only care about the category.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
-crate enum TyCategory {
+pub enum TyCategory {
Closure,
Opaque,
Generator,
struct FindLocalByTypeVisitor<'a, 'tcx> {
infcx: &'a InferCtxt<'a, 'tcx>,
target_ty: Ty<'tcx>,
- hir_map: &'a Map<'tcx>,
+ hir_map: Map<'tcx>,
found_local_pattern: Option<&'tcx Pat<'tcx>>,
found_arg_pattern: Option<&'tcx Pat<'tcx>>,
found_ty: Option<Ty<'tcx>>,
}
impl<'a, 'tcx> FindLocalByTypeVisitor<'a, 'tcx> {
- fn new(infcx: &'a InferCtxt<'a, 'tcx>, target_ty: Ty<'tcx>, hir_map: &'a Map<'tcx>) -> Self {
+ fn new(infcx: &'a InferCtxt<'a, 'tcx>, target_ty: Ty<'tcx>, hir_map: Map<'tcx>) -> Self {
Self {
infcx,
target_ty,
impl<'a, 'tcx> Visitor<'tcx> for FindLocalByTypeVisitor<'a, 'tcx> {
type Map = Map<'tcx>;
- fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, Self::Map> {
- NestedVisitorMap::OnlyBodies(&self.hir_map)
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
+ NestedVisitorMap::OnlyBodies(self.hir_map)
}
fn visit_local(&mut self, local: &'tcx Local<'tcx>) {
let ty = self.resolve_vars_if_possible(&ty);
let (name, name_sp, descr, parent_name, parent_descr) = self.extract_type_name(&ty, None);
- let mut local_visitor = FindLocalByTypeVisitor::new(&self, ty, &self.tcx.hir());
+ let mut local_visitor = FindLocalByTypeVisitor::new(&self, ty, self.tcx.hir());
let ty_to_string = |ty: Ty<'tcx>| -> String {
let mut s = String::new();
let mut printer = ty::print::FmtPrinter::new(self.tcx, &mut s, Namespace::TypeNS);
err.span_label(pattern.span, msg);
} else if let Some(e) = local_visitor.found_method_call {
if let ExprKind::MethodCall(segment, ..) = &e.kind {
- // Suggest specifiying type params or point out the return type of the call:
+ // Suggest specifying type params or point out the return type of the call:
//
// error[E0282]: type annotations needed
// --> $DIR/type-annotations-needed-expr.rs:2:39
&segment.args,
) {
let borrow = tables.borrow();
- if let Some((DefKind::Method, did)) = borrow.type_dependent_def(e.hir_id) {
+ if let Some((DefKind::AssocFn, did)) = borrow.type_dependent_def(e.hir_id) {
let generics = self.tcx.generics_of(did);
if !generics.params.is_empty() {
err.span_suggestion(
use crate::infer::error_reporting::nice_region_error::util::AnonymousParamInfo;
use crate::infer::error_reporting::nice_region_error::NiceRegionError;
+use crate::infer::lexical_region_resolve::RegionResolutionError;
+use crate::infer::SubregionOrigin;
use rustc::util::common::ErrorReported;
use rustc_errors::struct_span_err;
pub(super) fn try_report_anon_anon_conflict(&self) -> Option<ErrorReported> {
let (span, sub, sup) = self.regions()?;
+ if let Some(RegionResolutionError::ConcreteFailure(
+ SubregionOrigin::ReferenceOutlivesReferent(..),
+ ..,
+ )) = self.error
+ {
+ // This error doesn't make much sense in this case.
+ return None;
+ }
+
// Determine whether both the sub and sup regions are anonymous (elided).
let anon_reg_sup = self.tcx().is_suitable_region(sup)?;
let fndecl = match self.tcx().hir().get(hir_id) {
Node::Item(&hir::Item { kind: hir::ItemKind::Fn(ref m, ..), .. })
| Node::TraitItem(&hir::TraitItem {
- kind: hir::TraitItemKind::Method(ref m, ..),
+ kind: hir::TraitItemKind::Fn(ref m, ..),
..
})
| Node::ImplItem(&hir::ImplItem {
- kind: hir::ImplItemKind::Method(ref m, ..),
+ kind: hir::ImplItemKind::Fn(ref m, ..),
..
}) => &m.decl,
_ => return None,
impl Visitor<'tcx> for FindNestedTypeVisitor<'tcx> {
type Map = Map<'tcx>;
- fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, Self::Map> {
- NestedVisitorMap::OnlyBodies(&self.tcx.hir())
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
+ NestedVisitorMap::OnlyBodies(self.tcx.hir())
}
fn visit_ty(&mut self, arg: &'tcx hir::Ty<'tcx>) {
impl Visitor<'tcx> for TyPathVisitor<'tcx> {
type Map = Map<'tcx>;
- fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, Map<'tcx>> {
- NestedVisitorMap::OnlyBodies(&self.tcx.hir())
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Map<'tcx>> {
+ NestedVisitorMap::OnlyBodies(self.tcx.hir())
}
fn visit_lifetime(&mut self, lifetime: &hir::Lifetime) {
impl<'cx, 'tcx> InferCtxt<'cx, 'tcx> {
pub fn try_report_nice_region_error(&self, error: &RegionResolutionError<'tcx>) -> bool {
- if let Some(tables) = self.in_progress_tables {
- let tables = tables.borrow();
- NiceRegionError::new(self, error.clone(), Some(&tables)).try_report().is_some()
- } else {
- NiceRegionError::new(self, error.clone(), None).try_report().is_some()
- }
+ NiceRegionError::new(self, error.clone()).try_report().is_some()
}
}
infcx: &'cx InferCtxt<'cx, 'tcx>,
error: Option<RegionResolutionError<'tcx>>,
regions: Option<(Span, ty::Region<'tcx>, ty::Region<'tcx>)>,
- tables: Option<&'cx ty::TypeckTables<'tcx>>,
}
impl<'cx, 'tcx> NiceRegionError<'cx, 'tcx> {
- pub fn new(
- infcx: &'cx InferCtxt<'cx, 'tcx>,
- error: RegionResolutionError<'tcx>,
- tables: Option<&'cx ty::TypeckTables<'tcx>>,
- ) -> Self {
- Self { infcx, error: Some(error), regions: None, tables }
+ pub fn new(infcx: &'cx InferCtxt<'cx, 'tcx>, error: RegionResolutionError<'tcx>) -> Self {
+ Self { infcx, error: Some(error), regions: None }
}
pub fn new_from_span(
span: Span,
sub: ty::Region<'tcx>,
sup: ty::Region<'tcx>,
- tables: Option<&'cx ty::TypeckTables<'tcx>>,
) -> Self {
- Self { infcx, error: None, regions: Some((span, sub, sup)), tables }
+ Self { infcx, error: None, regions: Some((span, sub, sup)) }
}
fn tcx(&self) -> TyCtxt<'tcx> {
};
let hir = &self.tcx().hir();
- if let Some(hir_id) = hir.as_local_hir_id(id) {
- if let Some(body_id) = hir.maybe_body_owned_by(hir_id) {
- let body = hir.body(body_id);
- let owner_id = hir.body_owner(body_id);
- let fn_decl = hir.fn_decl_by_hir_id(owner_id).unwrap();
- if let Some(tables) = self.tables {
- body.params
- .iter()
- .enumerate()
- .filter_map(|(index, param)| {
- // May return None; sometimes the tables are not yet populated.
- let ty_hir_id = fn_decl.inputs[index].hir_id;
- let param_ty_span = hir.span(ty_hir_id);
- let ty = tables.node_type_opt(param.hir_id)?;
- let mut found_anon_region = false;
- let new_param_ty = self.tcx().fold_regions(&ty, &mut false, |r, _| {
- if *r == *anon_region {
- found_anon_region = true;
- replace_region
- } else {
- r
- }
- });
- if found_anon_region {
- let is_first = index == 0;
- Some(AnonymousParamInfo {
- param: param,
- param_ty: new_param_ty,
- param_ty_span: param_ty_span,
- bound_region: bound_region,
- is_first: is_first,
- })
- } else {
- None
- }
- })
- .next()
+ let hir_id = hir.as_local_hir_id(id)?;
+ let body_id = hir.maybe_body_owned_by(hir_id)?;
+ let body = hir.body(body_id);
+ let owner_id = hir.body_owner(body_id);
+ let fn_decl = hir.fn_decl_by_hir_id(owner_id).unwrap();
+ let poly_fn_sig = self.tcx().fn_sig(id);
+ let fn_sig = self.tcx().liberate_late_bound_regions(id, &poly_fn_sig);
+ body.params
+ .iter()
+ .enumerate()
+ .filter_map(|(index, param)| {
+ // May return None; sometimes the tables are not yet populated.
+ let ty = fn_sig.inputs()[index];
+ let mut found_anon_region = false;
+ let new_param_ty = self.tcx().fold_regions(&ty, &mut false, |r, _| {
+ if *r == *anon_region {
+ found_anon_region = true;
+ replace_region
+ } else {
+ r
+ }
+ });
+ if found_anon_region {
+ let ty_hir_id = fn_decl.inputs[index].hir_id;
+ let param_ty_span = hir.span(ty_hir_id);
+ let is_first = index == 0;
+ Some(AnonymousParamInfo {
+ param,
+ param_ty: new_param_ty,
+ param_ty_span,
+ bound_region,
+ is_first,
+ })
} else {
None
}
- } else {
- None
- }
- } else {
- None
- }
+ })
+ .next()
}
// Here, we check for the case where the anonymous region
}
fn fold_ty(&mut self, t: Ty<'tcx>) -> Ty<'tcx> {
- if !t.needs_infer()
- && !t.has_erasable_regions()
- && !(t.has_closure_types() && self.infcx.in_progress_tables.is_some())
- {
+ if !t.needs_infer() && !t.has_erasable_regions() {
return t;
}
fields: &'combine mut CombineFields<'infcx, 'tcx>,
a_is_expected: bool,
) -> Glb<'combine, 'infcx, 'tcx> {
- Glb { fields: fields, a_is_expected: a_is_expected }
+ Glb { fields, a_is_expected }
}
}
To learn more about how Higher-ranked trait bounds work in the _old_ trait
-solver, see [this chapter][oldhrtb] of the rustc-guide.
+solver, see [this chapter][oldhrtb] of the rustc-dev-guide.
To learn more about how they work in the _new_ trait solver, see [this
chapter][newhrtb].
-[oldhrtb]: https://rust-lang.github.io/rustc-guide/traits/hrtb.html
-[newhrtb]: https://rust-lang.github.io/rustc-guide/borrow_check/region_inference.html#placeholders-and-universes
+[oldhrtb]: https://rustc-dev-guide.rust-lang.org/traits/hrtb.html
+[newhrtb]: https://rustc-dev-guide.rust-lang.org/borrow_check/region_inference.html#placeholders-and-universes
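The higher-ranked trait bounds discussed above can be shown in a few lines; `apply_to_all` is an illustrative name, not part of the compiler:

```rust
// A higher-ranked trait bound (HRTB): `F` must accept a `&str` of *every*
// lifetime, not some single caller-chosen lifetime. This is the kind of
// bound the old and new trait solvers must both handle.
fn apply_to_all<F>(f: F) -> usize
where
    F: for<'a> Fn(&'a str) -> usize,
{
    // `f` is called with two borrows of different (short) lifetimes.
    f("hello") + f("world")
}

fn main() {
    let total = apply_to_all(|s| s.len());
    assert_eq!(total, 10);
    println!("{}", total);
}
```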
/// needed (but is also permitted).
///
/// For more information about how placeholders and HRTBs work, see
- /// the [rustc guide].
+ /// the [rustc dev guide].
///
- /// [rustc guide]: https://rust-lang.github.io/rustc-guide/traits/hrtb.html
+ /// [rustc dev guide]: https://rustc-dev-guide.rust-lang.org/traits/hrtb.html
pub fn replace_bound_vars_with_placeholders<T>(
&self,
binder: &ty::Binder<T>,
Lexical Region Resolution was removed in https://github.com/rust-lang/rust/pull/64790.
Rust now uses Non-lexical lifetimes. For more info, please see the [borrowck
-chapter][bc] in the rustc-guide.
+chapter][bc] in the rustc-dev-guide.
-[bc]: https://rust-lang.github.io/rustc-guide/borrow_check/region_inference.html
+[bc]: https://rustc-dev-guide.rust-lang.org/borrow_check/region_inference.html
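The move to non-lexical lifetimes noted above is easiest to see in a small program (the function name is illustrative): under the old lexical rules the `push` below would have been rejected, because the shared borrow lasted to the end of the enclosing block.

```rust
// Under non-lexical lifetimes (NLL), a shared borrow ends at its last
// use rather than at the end of its lexical scope, so mutating `v`
// afterwards is legal.
fn last_use_then_mutate() -> usize {
    let mut v = vec![1, 2, 3];
    let first = &v[0];   // shared borrow of `v` starts here...
    let copied = *first; // ...and ends here, at its last use
    v.push(copied + 3);  // mutable borrow is now permitted
    v.len()
}

fn main() {
    assert_eq!(last_use_then_mutate(), 4);
    println!("len = {}", last_use_then_mutate());
}
```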
use crate::infer::region_constraints::VarInfos;
use crate::infer::region_constraints::VerifyBound;
use crate::infer::RegionVariableOrigin;
+use crate::infer::RegionckMode;
use crate::infer::SubregionOrigin;
use rustc::middle::free_region::RegionRelations;
use rustc::ty::fold::TypeFoldable;
region_rels: &RegionRelations<'_, 'tcx>,
var_infos: VarInfos,
data: RegionConstraintData<'tcx>,
+ mode: RegionckMode,
) -> (LexicalRegionResolutions<'tcx>, Vec<RegionResolutionError<'tcx>>) {
debug!("RegionConstraintData: resolve_regions()");
let mut errors = vec![];
let mut resolver = LexicalResolver { region_rels, var_infos, data };
- let values = resolver.infer_variable_values(&mut errors);
- (values, errors)
+ match mode {
+ RegionckMode::Solve => {
+ let values = resolver.infer_variable_values(&mut errors);
+ (values, errors)
+ }
+ RegionckMode::Erase { suppress_errors: false } => {
+ // Do real inference to get errors, then erase the results.
+ let mut values = resolver.infer_variable_values(&mut errors);
+ let re_erased = region_rels.tcx.lifetimes.re_erased;
+
+ values.values.iter_mut().for_each(|v| *v = VarValue::Value(re_erased));
+ (values, errors)
+ }
+ RegionckMode::Erase { suppress_errors: true } => {
+ // Skip region inference entirely.
+ (resolver.erased_data(region_rels.tcx), Vec::new())
+ }
+ }
}
/// Contains the result of lexical region resolution. Offers methods
}
}
+ /// An erased version of the lexical region resolutions. Used when we're
+ /// erasing regions and suppressing errors: in item bodies with
+ /// `-Zborrowck=mir`.
+ fn erased_data(&self, tcx: TyCtxt<'tcx>) -> LexicalRegionResolutions<'tcx> {
+ LexicalRegionResolutions {
+ error_region: tcx.lifetimes.re_static,
+ values: IndexVec::from_elem_n(
+ VarValue::Value(tcx.lifetimes.re_erased),
+ self.num_vars(),
+ ),
+ }
+ }
+
fn dump_constraints(&self, free_regions: &RegionRelations<'_, 'tcx>) {
debug!("----() Start constraint listing (context={:?}) ()----", free_regions.context);
for (idx, (constraint, _)) in self.data.constraints.iter().enumerate() {
let dummy_source = graph.add_node(());
let dummy_sink = graph.add_node(());
- for (constraint, _) in &self.data.constraints {
+ for constraint in self.data.constraints.keys() {
match *constraint {
Constraint::VarSubVar(a_id, b_id) => {
graph.add_edge(
fields: &'combine mut CombineFields<'infcx, 'tcx>,
a_is_expected: bool,
) -> Lub<'combine, 'infcx, 'tcx> {
- Lub { fields: fields, a_is_expected: a_is_expected }
+ Lub { fields, a_is_expected }
}
}
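The `Lub` change above is an application of Rust's struct field init shorthand: when a local variable and the struct field share a name, the `field: field` repetition can be dropped. A standalone illustration with a hypothetical struct:

```rust
struct Combine {
    a_is_expected: bool,
    depth: u32,
}

fn new_combine(a_is_expected: bool, depth: u32) -> Combine {
    // Field init shorthand: `Combine { a_is_expected, depth }` is equivalent
    // to `Combine { a_is_expected: a_is_expected, depth: depth }`.
    Combine { a_is_expected, depth }
}

fn main() {
    let c = new_combine(true, 3);
    assert!(c.a_is_expected);
    assert_eq!(c.depth, 3);
}
```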
pub use self::RegionVariableOrigin::*;
pub use self::SubregionOrigin::*;
pub use self::ValuePairs::*;
-pub use rustc::ty::IntVarValue;
use crate::traits::{self, ObligationCause, PredicateObligations, TraitEngine};
use rustc::infer::unify_key::{ConstVarValue, ConstVariableValue};
use rustc::infer::unify_key::{ConstVariableOrigin, ConstVariableOriginKind, ToType};
use rustc::middle::free_region::RegionRelations;
-use rustc::middle::lang_items;
use rustc::middle::region;
use rustc::mir;
use rustc::mir::interpret::ConstEvalResult;
-use rustc::session::config::BorrowckMode;
+use rustc::traits::select;
use rustc::ty::error::{ExpectedFound, TypeError, UnconstrainedNumeric};
use rustc::ty::fold::{TypeFoldable, TypeFolder};
use rustc::ty::relate::RelateResult;
use rustc::ty::subst::{GenericArg, InternalSubsts, SubstsRef};
+pub use rustc::ty::IntVarValue;
use rustc::ty::{self, GenericParamDefKind, InferConst, Ty, TyCtxt};
use rustc::ty::{ConstVid, FloatVid, IntVid, TyVid};
-
use rustc_ast::ast;
use rustc_data_structures::fx::{FxHashMap, FxHashSet};
use rustc_data_structures::sync::Lrc;
use rustc_errors::DiagnosticBuilder;
use rustc_hir as hir;
use rustc_hir::def_id::DefId;
+use rustc_session::config::BorrowckMode;
use rustc_span::symbol::Symbol;
use rustc_span::Span;
+
use std::cell::{Cell, Ref, RefCell};
use std::collections::BTreeMap;
use std::fmt;
mod lexical_region_resolve;
mod lub;
pub mod nll_relate;
-pub mod opaque_types;
pub mod outlives;
pub mod region_constraints;
pub mod resolve;
pub type UnitResult<'tcx> = RelateResult<'tcx, ()>; // "unify result"
pub type FixupResult<'tcx, T> = Result<T, FixupError<'tcx>>; // "fixup result"
-/// A flag that is used to suppress region errors. This is normally
-/// false, but sometimes -- when we are doing region checks that the
-/// NLL borrow checker will also do -- it might be set to true.
-#[derive(Copy, Clone, Default, Debug)]
-pub struct SuppressRegionErrors {
- suppressed: bool,
+/// How we should handle region solving.
+///
+/// This is used so that the region values inferred by HIR region solving are
+/// not exposed, and so that we can avoid doing work in HIR typeck that MIR
+/// typeck will also do.
+#[derive(Copy, Clone, Debug)]
+pub enum RegionckMode {
+ /// The default mode: report region errors, don't erase regions.
+ Solve,
+    /// Erase the results of region inference after solving.
+ Erase {
+        /// Whether to suppress region errors. Set to `true` when we are doing
+        /// region checks that the NLL borrow checker will also perform, so
+        /// reporting the errors here would be redundant.
+ suppress_errors: bool,
+ },
+}
+
+impl Default for RegionckMode {
+ fn default() -> Self {
+ RegionckMode::Solve
+ }
}
-impl SuppressRegionErrors {
+impl RegionckMode {
pub fn suppressed(self) -> bool {
- self.suppressed
+ match self {
+ Self::Solve => false,
+ Self::Erase { suppress_errors } => suppress_errors,
+ }
}
/// Indicates that the MIR borrowck will repeat these region
/// checks, so we should ignore errors if NLL is (unconditionally)
/// enabled.
- pub fn when_nll_is_enabled(tcx: TyCtxt<'_>) -> Self {
+ pub fn for_item_body(tcx: TyCtxt<'_>) -> Self {
// FIXME(Centril): Once we actually remove `::Migrate` also make
// this always `true` and then proceed to eliminate the dead code.
match tcx.borrowck_mode() {
// If we're on Migrate mode, report AST region errors
- BorrowckMode::Migrate => SuppressRegionErrors { suppressed: false },
+ BorrowckMode::Migrate => RegionckMode::Erase { suppress_errors: false },
// If we're on MIR, don't report AST region errors as they should be reported by NLL
- BorrowckMode::Mir => SuppressRegionErrors { suppressed: true },
+ BorrowckMode::Mir => RegionckMode::Erase { suppress_errors: true },
}
}
}
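The replacement of `SuppressRegionErrors` by `RegionckMode` can be exercised on its own. A standalone model (hypothetical free-standing enums mirroring the shapes above, not the rustc items, which take a `TyCtxt`) of how the two borrowck modes map to a `RegionckMode` for item bodies:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum BorrowckMode {
    Migrate,
    Mir,
}

#[derive(Clone, Copy, Debug, PartialEq)]
enum RegionckMode {
    Solve,
    Erase { suppress_errors: bool },
}

impl RegionckMode {
    fn suppressed(self) -> bool {
        match self {
            Self::Solve => false,
            Self::Erase { suppress_errors } => suppress_errors,
        }
    }

    fn for_item_body(mode: BorrowckMode) -> Self {
        match mode {
            // Migrate mode still reports HIR region errors.
            BorrowckMode::Migrate => RegionckMode::Erase { suppress_errors: false },
            // Pure MIR borrowck reports the errors itself, so suppress here.
            BorrowckMode::Mir => RegionckMode::Erase { suppress_errors: true },
        }
    }
}

fn main() {
    assert!(!RegionckMode::Solve.suppressed());
    assert!(!RegionckMode::for_item_body(BorrowckMode::Migrate).suppressed());
    assert!(RegionckMode::for_item_body(BorrowckMode::Mir).suppressed());
}
```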
/// Caches the results of trait selection. This cache is used
/// for things that have to do with the parameters in scope.
- pub selection_cache: traits::SelectionCache<'tcx>,
+ pub selection_cache: select::SelectionCache<'tcx>,
/// Caches the results of trait evaluation.
- pub evaluation_cache: traits::EvaluationCache<'tcx>,
+ pub evaluation_cache: select::EvaluationCache<'tcx>,
/// the set of predicates on which errors have been reported, to
/// avoid reporting the same error twice.
region_context: DefId,
region_map: ®ion::ScopeTree,
outlives_env: &OutlivesEnvironment<'tcx>,
- suppress: SuppressRegionErrors,
+ mode: RegionckMode,
) {
assert!(
self.is_tainted_by_errors() || self.inner.borrow().region_obligations.is_empty(),
"region_obligations not empty: {:#?}",
self.inner.borrow().region_obligations
);
-
- let region_rels = &RegionRelations::new(
- self.tcx,
- region_context,
- region_map,
- outlives_env.free_region_map(),
- );
let (var_infos, data) = self
.inner
.borrow_mut()
.take()
.expect("regions already resolved")
.into_infos_and_data();
+
+ let region_rels = &RegionRelations::new(
+ self.tcx,
+ region_context,
+ region_map,
+ outlives_env.free_region_map(),
+ );
+
let (lexical_region_resolutions, errors) =
- lexical_region_resolve::resolve(region_rels, var_infos, data);
+ lexical_region_resolve::resolve(region_rels, var_infos, data, mode);
let old_value = self.lexical_region_resolutions.replace(Some(lexical_region_resolutions));
assert!(old_value.is_none());
// this infcx was in use. This is totally hokey but
// otherwise we have a hard time separating legit region
// errors from silly ones.
- self.report_region_errors(region_map, &errors, suppress);
+ self.report_region_errors(region_map, &errors);
}
}
.verify_generic_bound(origin, kind, a, bound);
}
- pub fn type_is_copy_modulo_regions(
- &self,
- param_env: ty::ParamEnv<'tcx>,
- ty: Ty<'tcx>,
- span: Span,
- ) -> bool {
- let ty = self.resolve_vars_if_possible(&ty);
-
- // Even if the type may have no inference variables, during
- // type-checking closure types are in local tables only.
- if !self.in_progress_tables.is_some() || !ty.has_closure_types() {
- if !(param_env, ty).has_local_value() {
- return ty.is_copy_modulo_regions(self.tcx, param_env, span);
- }
- }
-
- let copy_def_id = self.tcx.require_lang_item(lang_items::CopyTraitLangItem, None);
-
- // This can get called from typeck (by euv), and `moves_by_default`
- // rightly refuses to work with inference variables, but
- // moves_by_default has a cache, which we want to use in other
- // cases.
- traits::type_known_to_meet_bound_modulo_regions(self, param_env, ty, copy_def_id, span)
- }
-
/// Obtains the latest type of the given closure; this may be a
/// closure in the current function, in which case its
/// `ClosureKind` may not yet be known.
closure_sig_ty.fn_sig(self.tcx)
}
- /// Normalizes associated types in `value`, potentially returning
- /// new obligations that must further be processed.
- pub fn partially_normalize_associated_types_in<T>(
- &self,
- span: Span,
- body_id: hir::HirId,
- param_env: ty::ParamEnv<'tcx>,
- value: &T,
- ) -> InferOk<'tcx, T>
- where
- T: TypeFoldable<'tcx>,
- {
- debug!("partially_normalize_associated_types_in(value={:?})", value);
- let mut selcx = traits::SelectionContext::new(self);
- let cause = ObligationCause::misc(span, body_id);
- let traits::Normalized { value, obligations } =
- traits::normalize(&mut selcx, param_env, cause, value);
- debug!(
- "partially_normalize_associated_types_in: result={:?} predicates={:?}",
- value, obligations
- );
- InferOk { value, obligations }
- }
-
/// Clears the selection, evaluation, and projection caches. This is useful when
/// repeatedly attempting to select an `Obligation` while changing only
/// its `ParamEnv`, since `FulfillmentContext` doesn't use probing.
ty::IntVar(v) => {
// If inlined_probe_value returns a value it's always a
- // `ty::Int(_)` or `ty::UInt(_)`, which nevers matches a
+ // `ty::Int(_)` or `ty::UInt(_)`, which never matches a
// `ty::Infer(_)`.
self.infcx.inner.borrow_mut().int_unification_table.inlined_probe_value(v).is_some()
}
ty::FloatVar(v) => {
// If inlined_probe_value returns a value it's always a
- // `ty::Float(_)`, which nevers matches a `ty::Infer(_)`.
+ // `ty::Float(_)`, which never matches a `ty::Infer(_)`.
//
// Not `inlined_probe_value(v)` because this call site is colder.
self.infcx.inner.borrow_mut().float_unification_table.probe_value(v).is_some()
// In NLL, we don't have type inference variables
// floating around, so we can do this rather imprecise
// variant of the occurs-check.
- assert!(!generalized_ty.has_infer_types());
+ assert!(!generalized_ty.has_infer_types_or_consts());
}
self.infcx.inner.borrow_mut().type_variables.instantiate(vid, generalized_ty);
+++ /dev/null
-use crate::infer::error_reporting::{note_and_explain_free_region, note_and_explain_region};
-use crate::infer::{self, InferCtxt, InferOk, TypeVariableOrigin, TypeVariableOriginKind};
-use crate::traits::{self, PredicateObligation};
-use rustc::middle::region;
-use rustc::session::config::nightly_options;
-use rustc::ty::fold::{BottomUpFolder, TypeFoldable, TypeFolder, TypeVisitor};
-use rustc::ty::free_region_map::FreeRegionRelations;
-use rustc::ty::subst::{GenericArg, GenericArgKind, InternalSubsts, SubstsRef};
-use rustc::ty::{self, GenericParamDefKind, Ty, TyCtxt};
-use rustc_data_structures::fx::FxHashMap;
-use rustc_data_structures::sync::Lrc;
-use rustc_errors::{struct_span_err, DiagnosticBuilder};
-use rustc_hir as hir;
-use rustc_hir::def_id::{DefId, DefIdMap};
-use rustc_hir::Node;
-use rustc_span::Span;
-
-pub type OpaqueTypeMap<'tcx> = DefIdMap<OpaqueTypeDecl<'tcx>>;
-
-/// Information about the opaque types whose values we
-/// are inferring in this function (these are the `impl Trait` that
-/// appear in the return type).
-#[derive(Copy, Clone, Debug)]
-pub struct OpaqueTypeDecl<'tcx> {
- /// The opaque type (`ty::Opaque`) for this declaration.
- pub opaque_type: Ty<'tcx>,
-
- /// The substitutions that we apply to the opaque type that this
- /// `impl Trait` desugars to. e.g., if:
- ///
- /// fn foo<'a, 'b, T>() -> impl Trait<'a>
- ///
- /// winds up desugared to:
- ///
- /// type Foo<'x, X> = impl Trait<'x>
- /// fn foo<'a, 'b, T>() -> Foo<'a, T>
- ///
- /// then `substs` would be `['a, T]`.
- pub substs: SubstsRef<'tcx>,
-
- /// The span of this particular definition of the opaque type. So
- /// for example:
- ///
- /// ```
- /// type Foo = impl Baz;
- /// fn bar() -> Foo {
- /// ^^^ This is the span we are looking for!
- /// ```
- ///
- /// In cases where the fn returns `(impl Trait, impl Trait)` or
- /// other such combinations, the result is currently
- /// over-approximated, but better than nothing.
- pub definition_span: Span,
-
- /// The type variable that represents the value of the opaque type
- /// that we require. In other words, after we compile this function,
- /// we will be created a constraint like:
- ///
- /// Foo<'a, T> = ?C
- ///
- /// where `?C` is the value of this type variable. =) It may
- /// naturally refer to the type and lifetime parameters in scope
- /// in this function, though ultimately it should only reference
- /// those that are arguments to `Foo` in the constraint above. (In
- /// other words, `?C` should not include `'b`, even though it's a
- /// lifetime parameter on `foo`.)
- pub concrete_ty: Ty<'tcx>,
-
- /// Returns `true` if the `impl Trait` bounds include region bounds.
- /// For example, this would be true for:
- ///
- /// fn foo<'a, 'b, 'c>() -> impl Trait<'c> + 'a + 'b
- ///
- /// but false for:
- ///
- /// fn foo<'c>() -> impl Trait<'c>
- ///
- /// unless `Trait` was declared like:
- ///
- /// trait Trait<'c>: 'c
- ///
- /// in which case it would be true.
- ///
- /// This is used during regionck to decide whether we need to
- /// impose any additional constraints to ensure that region
- /// variables in `concrete_ty` wind up being constrained to
- /// something from `substs` (or, at minimum, things that outlive
- /// the fn body). (Ultimately, writeback is responsible for this
- /// check.)
- pub has_required_region_bounds: bool,
-
- /// The origin of the opaque type.
- pub origin: hir::OpaqueTyOrigin,
-}
-
-/// Whether member constraints should be generated for all opaque types
-pub enum GenerateMemberConstraints {
- /// The default, used by typeck
- WhenRequired,
- /// The borrow checker needs member constraints in any case where we don't
- /// have a `'static` bound. This is because the borrow checker has more
- /// flexibility in the values of regions. For example, given `f<'a, 'b>`
- /// the borrow checker can have an inference variable outlive `'a` and `'b`,
- /// but not be equal to `'static`.
- IfNoStaticBound,
-}
-
-impl<'a, 'tcx> InferCtxt<'a, 'tcx> {
- /// Replaces all opaque types in `value` with fresh inference variables
- /// and creates appropriate obligations. For example, given the input:
- ///
- /// impl Iterator<Item = impl Debug>
- ///
- /// this method would create two type variables, `?0` and `?1`. It would
- /// return the type `?0` but also the obligations:
- ///
- /// ?0: Iterator<Item = ?1>
- /// ?1: Debug
- ///
- /// Moreover, it returns a `OpaqueTypeMap` that would map `?0` to
- /// info about the `impl Iterator<..>` type and `?1` to info about
- /// the `impl Debug` type.
- ///
- /// # Parameters
- ///
- /// - `parent_def_id` -- the `DefId` of the function in which the opaque type
- /// is defined
- /// - `body_id` -- the body-id with which the resulting obligations should
- /// be associated
- /// - `param_env` -- the in-scope parameter environment to be used for
- /// obligations
- /// - `value` -- the value within which we are instantiating opaque types
- /// - `value_span` -- the span where the value came from, used in error reporting
- pub fn instantiate_opaque_types<T: TypeFoldable<'tcx>>(
- &self,
- parent_def_id: DefId,
- body_id: hir::HirId,
- param_env: ty::ParamEnv<'tcx>,
- value: &T,
- value_span: Span,
- ) -> InferOk<'tcx, (T, OpaqueTypeMap<'tcx>)> {
- debug!(
- "instantiate_opaque_types(value={:?}, parent_def_id={:?}, body_id={:?}, \
- param_env={:?}, value_span={:?})",
- value, parent_def_id, body_id, param_env, value_span,
- );
- let mut instantiator = Instantiator {
- infcx: self,
- parent_def_id,
- body_id,
- param_env,
- value_span,
- opaque_types: Default::default(),
- obligations: vec![],
- };
- let value = instantiator.instantiate_opaque_types_in_map(value);
- InferOk { value: (value, instantiator.opaque_types), obligations: instantiator.obligations }
- }
-
- /// Given the map `opaque_types` containing the opaque
- /// `impl Trait` types whose underlying, hidden types are being
- /// inferred, this method adds constraints to the regions
- /// appearing in those underlying hidden types to ensure that they
- /// at least do not refer to random scopes within the current
- /// function. These constraints are not (quite) sufficient to
- /// guarantee that the regions are actually legal values; that
- /// final condition is imposed after region inference is done.
- ///
- /// # The Problem
- ///
- /// Let's work through an example to explain how it works. Assume
- /// the current function is as follows:
- ///
- /// ```text
- /// fn foo<'a, 'b>(..) -> (impl Bar<'a>, impl Bar<'b>)
- /// ```
- ///
- /// Here, we have two `impl Trait` types whose values are being
- /// inferred (the `impl Bar<'a>` and the `impl
- /// Bar<'b>`). Conceptually, this is sugar for a setup where we
- /// define underlying opaque types (`Foo1`, `Foo2`) and then, in
- /// the return type of `foo`, we *reference* those definitions:
- ///
- /// ```text
- /// type Foo1<'x> = impl Bar<'x>;
- /// type Foo2<'x> = impl Bar<'x>;
- /// fn foo<'a, 'b>(..) -> (Foo1<'a>, Foo2<'b>) { .. }
- /// // ^^^^ ^^
- /// // | |
- /// // | substs
- /// // def_id
- /// ```
- ///
- /// As indicating in the comments above, each of those references
- /// is (in the compiler) basically a substitution (`substs`)
- /// applied to the type of a suitable `def_id` (which identifies
- /// `Foo1` or `Foo2`).
- ///
- /// Now, at this point in compilation, what we have done is to
- /// replace each of the references (`Foo1<'a>`, `Foo2<'b>`) with
- /// fresh inference variables C1 and C2. We wish to use the values
- /// of these variables to infer the underlying types of `Foo1` and
- /// `Foo2`. That is, this gives rise to higher-order (pattern) unification
- /// constraints like:
- ///
- /// ```text
- /// for<'a> (Foo1<'a> = C1)
- /// for<'b> (Foo1<'b> = C2)
- /// ```
- ///
- /// For these equation to be satisfiable, the types `C1` and `C2`
- /// can only refer to a limited set of regions. For example, `C1`
- /// can only refer to `'static` and `'a`, and `C2` can only refer
- /// to `'static` and `'b`. The job of this function is to impose that
- /// constraint.
- ///
- /// Up to this point, C1 and C2 are basically just random type
- /// inference variables, and hence they may contain arbitrary
- /// regions. In fact, it is fairly likely that they do! Consider
- /// this possible definition of `foo`:
- ///
- /// ```text
- /// fn foo<'a, 'b>(x: &'a i32, y: &'b i32) -> (impl Bar<'a>, impl Bar<'b>) {
- /// (&*x, &*y)
- /// }
- /// ```
- ///
- /// Here, the values for the concrete types of the two impl
- /// traits will include inference variables:
- ///
- /// ```text
- /// &'0 i32
- /// &'1 i32
- /// ```
- ///
- /// Ordinarily, the subtyping rules would ensure that these are
- /// sufficiently large. But since `impl Bar<'a>` isn't a specific
- /// type per se, we don't get such constraints by default. This
- /// is where this function comes into play. It adds extra
- /// constraints to ensure that all the regions which appear in the
- /// inferred type are regions that could validly appear.
- ///
- /// This is actually a bit of a tricky constraint in general. We
- /// want to say that each variable (e.g., `'0`) can only take on
- /// values that were supplied as arguments to the opaque type
- /// (e.g., `'a` for `Foo1<'a>`) or `'static`, which is always in
- /// scope. We don't have a constraint quite of this kind in the current
- /// region checker.
- ///
- /// # The Solution
- ///
- /// We generally prefer to make `<=` constraints, since they
- /// integrate best into the region solver. To do that, we find the
- /// "minimum" of all the arguments that appear in the substs: that
- /// is, some region which is less than all the others. In the case
- /// of `Foo1<'a>`, that would be `'a` (it's the only choice, after
- /// all). Then we apply that as a least bound to the variables
- /// (e.g., `'a <= '0`).
- ///
- /// In some cases, there is no minimum. Consider this example:
- ///
- /// ```text
- /// fn baz<'a, 'b>() -> impl Trait<'a, 'b> { ... }
- /// ```
- ///
- /// Here we would report a more complex "in constraint", like `'r
- /// in ['a, 'b, 'static]` (where `'r` is some regon appearing in
- /// the hidden type).
- ///
- /// # Constrain regions, not the hidden concrete type
- ///
- /// Note that generating constraints on each region `Rc` is *not*
- /// the same as generating an outlives constraint on `Tc` iself.
- /// For example, if we had a function like this:
- ///
- /// ```rust
- /// fn foo<'a, T>(x: &'a u32, y: T) -> impl Foo<'a> {
- /// (x, y)
- /// }
- ///
- /// // Equivalent to:
- /// type FooReturn<'a, T> = impl Foo<'a>;
- /// fn foo<'a, T>(..) -> FooReturn<'a, T> { .. }
- /// ```
- ///
- /// then the hidden type `Tc` would be `(&'0 u32, T)` (where `'0`
- /// is an inference variable). If we generated a constraint that
- /// `Tc: 'a`, then this would incorrectly require that `T: 'a` --
- /// but this is not necessary, because the opaque type we
- /// create will be allowed to reference `T`. So we only generate a
- /// constraint that `'0: 'a`.
- ///
- /// # The `free_region_relations` parameter
- ///
- /// The `free_region_relations` argument is used to find the
- /// "minimum" of the regions supplied to a given opaque type.
- /// It must be a relation that can answer whether `'a <= 'b`,
- /// where `'a` and `'b` are regions that appear in the "substs"
- /// for the opaque type references (the `<'a>` in `Foo1<'a>`).
- ///
- /// Note that we do not impose the constraints based on the
- /// generic regions from the `Foo1` definition (e.g., `'x`). This
- /// is because the constraints we are imposing here is basically
- /// the concern of the one generating the constraining type C1,
- /// which is the current function. It also means that we can
- /// take "implied bounds" into account in some cases:
- ///
- /// ```text
- /// trait SomeTrait<'a, 'b> { }
- /// fn foo<'a, 'b>(_: &'a &'b u32) -> impl SomeTrait<'a, 'b> { .. }
- /// ```
- ///
- /// Here, the fact that `'b: 'a` is known only because of the
- /// implied bounds from the `&'a &'b u32` parameter, and is not
- /// "inherent" to the opaque type definition.
- ///
- /// # Parameters
- ///
- /// - `opaque_types` -- the map produced by `instantiate_opaque_types`
- /// - `free_region_relations` -- something that can be used to relate
- /// the free regions (`'a`) that appear in the impl trait.
- pub fn constrain_opaque_types<FRR: FreeRegionRelations<'tcx>>(
- &self,
- opaque_types: &OpaqueTypeMap<'tcx>,
- free_region_relations: &FRR,
- ) {
- debug!("constrain_opaque_types()");
-
- for (&def_id, opaque_defn) in opaque_types {
- self.constrain_opaque_type(
- def_id,
- opaque_defn,
- GenerateMemberConstraints::WhenRequired,
- free_region_relations,
- );
- }
- }
-
- /// See `constrain_opaque_types` for documentation.
- pub fn constrain_opaque_type<FRR: FreeRegionRelations<'tcx>>(
- &self,
- def_id: DefId,
- opaque_defn: &OpaqueTypeDecl<'tcx>,
- mode: GenerateMemberConstraints,
- free_region_relations: &FRR,
- ) {
- debug!("constrain_opaque_type()");
- debug!("constrain_opaque_type: def_id={:?}", def_id);
- debug!("constrain_opaque_type: opaque_defn={:#?}", opaque_defn);
-
- let tcx = self.tcx;
-
- let concrete_ty = self.resolve_vars_if_possible(&opaque_defn.concrete_ty);
-
- debug!("constrain_opaque_type: concrete_ty={:?}", concrete_ty);
-
- let opaque_type_generics = tcx.generics_of(def_id);
-
- let span = tcx.def_span(def_id);
-
- // If there are required region bounds, we can use them.
- if opaque_defn.has_required_region_bounds {
- let predicates_of = tcx.predicates_of(def_id);
- debug!("constrain_opaque_type: predicates: {:#?}", predicates_of,);
- let bounds = predicates_of.instantiate(tcx, opaque_defn.substs);
- debug!("constrain_opaque_type: bounds={:#?}", bounds);
- let opaque_type = tcx.mk_opaque(def_id, opaque_defn.substs);
-
- let required_region_bounds =
- required_region_bounds(tcx, opaque_type, bounds.predicates);
- debug_assert!(!required_region_bounds.is_empty());
-
- for required_region in required_region_bounds {
- concrete_ty.visit_with(&mut ConstrainOpaqueTypeRegionVisitor {
- tcx: self.tcx,
- op: |r| self.sub_regions(infer::CallReturn(span), required_region, r),
- });
- }
- if let GenerateMemberConstraints::IfNoStaticBound = mode {
- self.generate_member_constraint(
- concrete_ty,
- opaque_type_generics,
- opaque_defn,
- def_id,
- );
- }
- return;
- }
-
- // There were no `required_region_bounds`,
- // so we have to search for a `least_region`.
- // Go through all the regions used as arguments to the
- // opaque type. These are the parameters to the opaque
- // type; so in our example above, `substs` would contain
- // `['a]` for the first impl trait and `'b` for the
- // second.
- let mut least_region = None;
- for param in &opaque_type_generics.params {
- match param.kind {
- GenericParamDefKind::Lifetime => {}
- _ => continue,
- }
-
- // Get the value supplied for this region from the substs.
- let subst_arg = opaque_defn.substs.region_at(param.index as usize);
-
- // Compute the least upper bound of it with the other regions.
- debug!("constrain_opaque_types: least_region={:?}", least_region);
- debug!("constrain_opaque_types: subst_arg={:?}", subst_arg);
- match least_region {
- None => least_region = Some(subst_arg),
- Some(lr) => {
- if free_region_relations.sub_free_regions(self.tcx, lr, subst_arg) {
- // keep the current least region
- } else if free_region_relations.sub_free_regions(self.tcx, subst_arg, lr) {
- // switch to `subst_arg`
- least_region = Some(subst_arg);
- } else {
- // There are two regions (`lr` and
- // `subst_arg`) which are not relatable. We
- // can't find a best choice. Therefore,
- // instead of creating a single bound like
- // `'r: 'a` (which is our preferred choice),
- // we will create a "in bound" like `'r in
- // ['a, 'b, 'c]`, where `'a..'c` are the
- // regions that appear in the impl trait.
-
- // For now, enforce a feature gate outside of async functions.
- self.member_constraint_feature_gate(opaque_defn, def_id, lr, subst_arg);
-
- return self.generate_member_constraint(
- concrete_ty,
- opaque_type_generics,
- opaque_defn,
- def_id,
- );
- }
- }
- }
- }
-
- let least_region = least_region.unwrap_or(tcx.lifetimes.re_static);
- debug!("constrain_opaque_types: least_region={:?}", least_region);
-
- if let GenerateMemberConstraints::IfNoStaticBound = mode {
- if least_region != tcx.lifetimes.re_static {
- self.generate_member_constraint(
- concrete_ty,
- opaque_type_generics,
- opaque_defn,
- def_id,
- );
- }
- }
- concrete_ty.visit_with(&mut ConstrainOpaqueTypeRegionVisitor {
- tcx: self.tcx,
- op: |r| self.sub_regions(infer::CallReturn(span), least_region, r),
- });
- }
-
- /// As a fallback, we sometimes generate an "in constraint". For
- /// a case like `impl Foo<'a, 'b>`, where `'a` and `'b` cannot be
- /// related, we would generate a constraint `'r in ['a, 'b,
- /// 'static]` for each region `'r` that appears in the hidden type
- /// (i.e., it must be equal to `'a`, `'b`, or `'static`).
- ///
- /// `conflict1` and `conflict2` are the two region bounds that we
- /// detected which were unrelated. They are used for diagnostics.
- fn generate_member_constraint(
- &self,
- concrete_ty: Ty<'tcx>,
- opaque_type_generics: &ty::Generics,
- opaque_defn: &OpaqueTypeDecl<'tcx>,
- opaque_type_def_id: DefId,
- ) {
- // Create the set of choice regions: each region in the hidden
- // type can be equal to any of the region parameters of the
- // opaque type definition.
- let choice_regions: Lrc<Vec<ty::Region<'tcx>>> = Lrc::new(
- opaque_type_generics
- .params
- .iter()
- .filter(|param| match param.kind {
- GenericParamDefKind::Lifetime => true,
- GenericParamDefKind::Type { .. } | GenericParamDefKind::Const => false,
- })
- .map(|param| opaque_defn.substs.region_at(param.index as usize))
- .chain(std::iter::once(self.tcx.lifetimes.re_static))
- .collect(),
- );
-
- concrete_ty.visit_with(&mut ConstrainOpaqueTypeRegionVisitor {
- tcx: self.tcx,
- op: |r| {
- self.member_constraint(
- opaque_type_def_id,
- opaque_defn.definition_span,
- concrete_ty,
- r,
- &choice_regions,
- )
- },
- });
- }
-
- /// Member constraints are presently feature-gated except for
- /// async-await. We expect to lift this once we've had a bit more
- /// time.
- fn member_constraint_feature_gate(
- &self,
- opaque_defn: &OpaqueTypeDecl<'tcx>,
- opaque_type_def_id: DefId,
- conflict1: ty::Region<'tcx>,
- conflict2: ty::Region<'tcx>,
- ) -> bool {
- // If we have `#![feature(member_constraints)]`, no problems.
- if self.tcx.features().member_constraints {
- return false;
- }
-
- let span = self.tcx.def_span(opaque_type_def_id);
-
- // Without a feature-gate, we only generate member-constraints for async-await.
- let context_name = match opaque_defn.origin {
- // No feature-gate required for `async fn`.
- hir::OpaqueTyOrigin::AsyncFn => return false,
-
- // Otherwise, generate the label we'll use in the error message.
- hir::OpaqueTyOrigin::TypeAlias
- | hir::OpaqueTyOrigin::FnReturn
- | hir::OpaqueTyOrigin::Misc => "impl Trait",
- };
- let msg = format!("ambiguous lifetime bound in `{}`", context_name);
- let mut err = self.tcx.sess.struct_span_err(span, &msg);
-
- let conflict1_name = conflict1.to_string();
- let conflict2_name = conflict2.to_string();
- let label_owned;
- let label = match (&*conflict1_name, &*conflict2_name) {
- ("'_", "'_") => "the elided lifetimes here do not outlive one another",
- _ => {
- label_owned = format!(
- "neither `{}` nor `{}` outlives the other",
- conflict1_name, conflict2_name,
- );
- &label_owned
- }
- };
- err.span_label(span, label);
-
- if nightly_options::is_nightly_build() {
- err.help("add #![feature(member_constraints)] to the crate attributes to enable");
- }
-
- err.emit();
- true
- }
-
- /// Given the fully resolved, instantiated type for an opaque
- /// type, i.e., the value of an inference variable like C1 or C2
- /// (*), computes the "definition type" for an opaque type
- /// definition -- that is, the inferred value of `Foo1<'x>` or
- /// `Foo2<'x>` that we would conceptually use in its definition:
- ///
- /// type Foo1<'x> = impl Bar<'x> = AAA; <-- this type AAA
- /// type Foo2<'x> = impl Bar<'x> = BBB; <-- or this type BBB
- /// fn foo<'a, 'b>(..) -> (Foo1<'a>, Foo2<'b>) { .. }
- ///
- /// Note that these values are defined in terms of a distinct set of
- /// generic parameters (`'x` instead of `'a`) from C1 or C2. The main
- /// purpose of this function is to do that translation.
- ///
- /// (*) C1 and C2 were introduced in the comments on
- /// `constrain_opaque_types`. Read that comment for more context.
- ///
- /// # Parameters
- ///
- /// - `def_id`, the `impl Trait` type
- /// - `substs`, the substs used to instantiate this opaque type
- /// - `instantiated_ty`, the inferred type C1 -- fully resolved, lifted version of
- /// `opaque_defn.concrete_ty`
- pub fn infer_opaque_definition_from_instantiation(
- &self,
- def_id: DefId,
- substs: SubstsRef<'tcx>,
- instantiated_ty: Ty<'tcx>,
- span: Span,
- ) -> Ty<'tcx> {
- debug!(
- "infer_opaque_definition_from_instantiation(def_id={:?}, instantiated_ty={:?})",
- def_id, instantiated_ty
- );
-
- // Use substs to build up a reverse map from regions to their
- // identity mappings. This is necessary because of `impl
- // Trait` lifetimes are computed by replacing existing
- // lifetimes with 'static and remapping only those used in the
- // `impl Trait` return type, resulting in the parameters
- // shifting.
- let id_substs = InternalSubsts::identity_for_item(self.tcx, def_id);
- let map: FxHashMap<GenericArg<'tcx>, GenericArg<'tcx>> =
- substs.iter().enumerate().map(|(index, subst)| (*subst, id_substs[index])).collect();
-
- // Convert the type from the function into a type valid outside
- // the function, by replacing invalid regions with 'static,
- // after producing an error for each of them.
- let definition_ty = instantiated_ty.fold_with(&mut ReverseMapper::new(
- self.tcx,
- self.is_tainted_by_errors(),
- def_id,
- map,
- instantiated_ty,
- span,
- ));
- debug!("infer_opaque_definition_from_instantiation: definition_ty={:?}", definition_ty);
-
- definition_ty
- }
-}
-
-pub fn unexpected_hidden_region_diagnostic(
- tcx: TyCtxt<'tcx>,
- region_scope_tree: Option<®ion::ScopeTree>,
- span: Span,
- hidden_ty: Ty<'tcx>,
- hidden_region: ty::Region<'tcx>,
-) -> DiagnosticBuilder<'tcx> {
- let mut err = struct_span_err!(
- tcx.sess,
- span,
- E0700,
- "hidden type for `impl Trait` captures lifetime that does not appear in bounds",
- );
-
- // Explain the region we are capturing.
- if let ty::ReEarlyBound(_) | ty::ReFree(_) | ty::ReStatic | ty::ReEmpty(_) = hidden_region {
- // Assuming regionck succeeded (*), we ought to always be
- // capturing *some* region from the fn header, and hence it
- // ought to be free. So under normal circumstances, we will go
- // down this path which gives a decent human readable
- // explanation.
- //
- // (*) if not, the `tainted_by_errors` flag would be set to
- // true in any case, so we wouldn't be here at all.
- note_and_explain_free_region(
- tcx,
- &mut err,
- &format!("hidden type `{}` captures ", hidden_ty),
- hidden_region,
- "",
- );
- } else {
- // Ugh. This is a painful case: the hidden region is not one
- // that we can easily summarize or explain. This can happen
- // in a case like
- // `src/test/ui/multiple-lifetimes/ordinary-bounds-unsuited.rs`:
- //
- // ```
- // fn upper_bounds<'a, 'b>(a: Ordinary<'a>, b: Ordinary<'b>) -> impl Trait<'a, 'b> {
- // if condition() { a } else { b }
- // }
- // ```
- //
- // Here the captured lifetime is the intersection of `'a` and
- // `'b`, which we can't quite express.
-
- if let Some(region_scope_tree) = region_scope_tree {
- // If the `region_scope_tree` is available, this is being
- // invoked from the "region inferencer error". We can at
- // least report a really cryptic error for now.
- note_and_explain_region(
- tcx,
- region_scope_tree,
- &mut err,
- &format!("hidden type `{}` captures ", hidden_ty),
- hidden_region,
- "",
- );
- } else {
- // If the `region_scope_tree` is *unavailable*, this is
- // being invoked by the code that comes *after* region
- // inferencing. This is a bug, as the region inferencer
- // ought to have noticed the failed constraint and invoked
- // error reporting, which in turn should have prevented us
-            // from trying to infer the hidden type
- // completely.
- tcx.sess.delay_span_bug(
- span,
- &format!(
- "hidden type captures unexpected lifetime `{:?}` \
- but no region inference failure",
- hidden_region,
- ),
- );
- }
- }
-
- err
-}
-
-// Visitor that requires that (almost) all regions in the type visited outlive
-// `least_region`. We cannot use `push_outlives_components` because regions in
-// closure signatures are not included in their outlives components. We need to
-// ensure all regions outlive the given bound so that we don't end up with,
-// say, `ReScope` appearing in a return type and causing ICEs when other
-// functions end up with region constraints involving regions from other
-// functions.
-//
-// We also cannot use `for_each_free_region` because for closures it includes
-// the regions parameters from the enclosing item.
-//
-// We ignore any type parameters because impl trait values are assumed to
-// capture all the in-scope type parameters.
-struct ConstrainOpaqueTypeRegionVisitor<'tcx, OP>
-where
- OP: FnMut(ty::Region<'tcx>),
-{
- tcx: TyCtxt<'tcx>,
- op: OP,
-}
-
-impl<'tcx, OP> TypeVisitor<'tcx> for ConstrainOpaqueTypeRegionVisitor<'tcx, OP>
-where
- OP: FnMut(ty::Region<'tcx>),
-{
- fn visit_binder<T: TypeFoldable<'tcx>>(&mut self, t: &ty::Binder<T>) -> bool {
- t.skip_binder().visit_with(self);
- false // keep visiting
- }
-
- fn visit_region(&mut self, r: ty::Region<'tcx>) -> bool {
- match *r {
- // ignore bound regions, keep visiting
- ty::ReLateBound(_, _) => false,
- _ => {
- (self.op)(r);
- false
- }
- }
- }
-
- fn visit_ty(&mut self, ty: Ty<'tcx>) -> bool {
- // We're only interested in types involving regions
- if !ty.flags.intersects(ty::TypeFlags::HAS_FREE_REGIONS) {
- return false; // keep visiting
- }
-
- match ty.kind {
- ty::Closure(def_id, ref substs) => {
- // Skip lifetime parameters of the enclosing item(s)
-
- for upvar_ty in substs.as_closure().upvar_tys(def_id, self.tcx) {
- upvar_ty.visit_with(self);
- }
-
- substs.as_closure().sig_ty(def_id, self.tcx).visit_with(self);
- }
-
- ty::Generator(def_id, ref substs, _) => {
- // Skip lifetime parameters of the enclosing item(s)
- // Also skip the witness type, because that has no free regions.
-
- for upvar_ty in substs.as_generator().upvar_tys(def_id, self.tcx) {
- upvar_ty.visit_with(self);
- }
-
- substs.as_generator().return_ty(def_id, self.tcx).visit_with(self);
- substs.as_generator().yield_ty(def_id, self.tcx).visit_with(self);
- substs.as_generator().resume_ty(def_id, self.tcx).visit_with(self);
- }
- _ => {
- ty.super_visit_with(self);
- }
- }
-
- false
- }
-}
-
-struct ReverseMapper<'tcx> {
- tcx: TyCtxt<'tcx>,
-
- /// If errors have already been reported in this fn, we suppress
- /// our own errors because they are sometimes derivative.
- tainted_by_errors: bool,
-
- opaque_type_def_id: DefId,
- map: FxHashMap<GenericArg<'tcx>, GenericArg<'tcx>>,
- map_missing_regions_to_empty: bool,
-
- /// initially `Some`, set to `None` once error has been reported
- hidden_ty: Option<Ty<'tcx>>,
-
- /// Span of function being checked.
- span: Span,
-}
-
-impl ReverseMapper<'tcx> {
- fn new(
- tcx: TyCtxt<'tcx>,
- tainted_by_errors: bool,
- opaque_type_def_id: DefId,
- map: FxHashMap<GenericArg<'tcx>, GenericArg<'tcx>>,
- hidden_ty: Ty<'tcx>,
- span: Span,
- ) -> Self {
- Self {
- tcx,
- tainted_by_errors,
- opaque_type_def_id,
- map,
- map_missing_regions_to_empty: false,
- hidden_ty: Some(hidden_ty),
- span,
- }
- }
-
- fn fold_kind_mapping_missing_regions_to_empty(
- &mut self,
- kind: GenericArg<'tcx>,
- ) -> GenericArg<'tcx> {
- assert!(!self.map_missing_regions_to_empty);
- self.map_missing_regions_to_empty = true;
- let kind = kind.fold_with(self);
- self.map_missing_regions_to_empty = false;
- kind
- }
-
- fn fold_kind_normally(&mut self, kind: GenericArg<'tcx>) -> GenericArg<'tcx> {
- assert!(!self.map_missing_regions_to_empty);
- kind.fold_with(self)
- }
-}
-
-impl TypeFolder<'tcx> for ReverseMapper<'tcx> {
- fn tcx(&self) -> TyCtxt<'tcx> {
- self.tcx
- }
-
- fn fold_region(&mut self, r: ty::Region<'tcx>) -> ty::Region<'tcx> {
- match r {
- // Ignore bound regions and `'static` regions that appear in the
- // type, we only need to remap regions that reference lifetimes
-            // from the function declaration.
- // This would ignore `'r` in a type like `for<'r> fn(&'r u32)`.
- ty::ReLateBound(..) | ty::ReStatic => return r,
-
- // If regions have been erased (by writeback), don't try to unerase
- // them.
- ty::ReErased => return r,
-
- // The regions that we expect from borrow checking.
- ty::ReEarlyBound(_) | ty::ReFree(_) | ty::ReEmpty(ty::UniverseIndex::ROOT) => {}
-
- ty::ReEmpty(_)
- | ty::RePlaceholder(_)
- | ty::ReVar(_)
- | ty::ReScope(_)
- | ty::ReClosureBound(_) => {
- // All of the regions in the type should either have been
- // erased by writeback, or mapped back to named regions by
- // borrow checking.
- bug!("unexpected region kind in opaque type: {:?}", r);
- }
- }
-
- let generics = self.tcx().generics_of(self.opaque_type_def_id);
- match self.map.get(&r.into()).map(|k| k.unpack()) {
- Some(GenericArgKind::Lifetime(r1)) => r1,
- Some(u) => panic!("region mapped to unexpected kind: {:?}", u),
- None if self.map_missing_regions_to_empty || self.tainted_by_errors => {
- self.tcx.lifetimes.re_root_empty
- }
- None if generics.parent.is_some() => {
- if let Some(hidden_ty) = self.hidden_ty.take() {
- unexpected_hidden_region_diagnostic(
- self.tcx,
- None,
- self.tcx.def_span(self.opaque_type_def_id),
- hidden_ty,
- r,
- )
- .emit();
- }
- self.tcx.lifetimes.re_root_empty
- }
- None => {
- self.tcx
- .sess
- .struct_span_err(self.span, "non-defining opaque type use in defining scope")
- .span_label(
- self.span,
- format!(
- "lifetime `{}` is part of concrete type but not used in \
- parameter list of the `impl Trait` type alias",
- r
- ),
- )
- .emit();
-
- self.tcx().lifetimes.re_static
- }
- }
- }
-
- fn fold_ty(&mut self, ty: Ty<'tcx>) -> Ty<'tcx> {
- match ty.kind {
- ty::Closure(def_id, substs) => {
- // I am a horrible monster and I pray for death. When
- // we encounter a closure here, it is always a closure
- // from within the function that we are currently
- // type-checking -- one that is now being encapsulated
- // in an opaque type. Ideally, we would
- // go through the types/lifetimes that it references
- // and treat them just like we would any other type,
- // which means we would error out if we find any
- // reference to a type/region that is not in the
- // "reverse map".
- //
- // **However,** in the case of closures, there is a
- // somewhat subtle (read: hacky) consideration. The
- // problem is that our closure types currently include
- // all the lifetime parameters declared on the
- // enclosing function, even if they are unused by the
- // closure itself. We can't readily filter them out,
- // so here we replace those values with `'empty`. This
- // can't really make a difference to the rest of the
- // compiler; those regions are ignored for the
- // outlives relation, and hence don't affect trait
- // selection or auto traits, and they are erased
- // during codegen.
-
- let generics = self.tcx.generics_of(def_id);
- let substs = self.tcx.mk_substs(substs.iter().enumerate().map(|(index, &kind)| {
- if index < generics.parent_count {
- // Accommodate missing regions in the parent kinds...
- self.fold_kind_mapping_missing_regions_to_empty(kind)
- } else {
- // ...but not elsewhere.
- self.fold_kind_normally(kind)
- }
- }));
-
- self.tcx.mk_closure(def_id, substs)
- }
-
- ty::Generator(def_id, substs, movability) => {
- let generics = self.tcx.generics_of(def_id);
- let substs = self.tcx.mk_substs(substs.iter().enumerate().map(|(index, &kind)| {
- if index < generics.parent_count {
- // Accommodate missing regions in the parent kinds...
- self.fold_kind_mapping_missing_regions_to_empty(kind)
- } else {
- // ...but not elsewhere.
- self.fold_kind_normally(kind)
- }
- }));
-
- self.tcx.mk_generator(def_id, substs, movability)
- }
-
- ty::Param(..) => {
- // Look it up in the substitution list.
- match self.map.get(&ty.into()).map(|k| k.unpack()) {
- // Found it in the substitution list; replace with the parameter from the
- // opaque type.
- Some(GenericArgKind::Type(t1)) => t1,
- Some(u) => panic!("type mapped to unexpected kind: {:?}", u),
- None => {
- self.tcx
- .sess
- .struct_span_err(
- self.span,
- &format!(
- "type parameter `{}` is part of concrete type but not \
- used in parameter list for the `impl Trait` type alias",
- ty
- ),
- )
- .emit();
-
- self.tcx().types.err
- }
- }
- }
-
- _ => ty.super_fold_with(self),
- }
- }
-
- fn fold_const(&mut self, ct: &'tcx ty::Const<'tcx>) -> &'tcx ty::Const<'tcx> {
- trace!("checking const {:?}", ct);
- // Find a const parameter
- match ct.val {
- ty::ConstKind::Param(..) => {
- // Look it up in the substitution list.
- match self.map.get(&ct.into()).map(|k| k.unpack()) {
- // Found it in the substitution list, replace with the parameter from the
- // opaque type.
- Some(GenericArgKind::Const(c1)) => c1,
- Some(u) => panic!("const mapped to unexpected kind: {:?}", u),
- None => {
- self.tcx
- .sess
- .struct_span_err(
- self.span,
- &format!(
- "const parameter `{}` is part of concrete type but not \
- used in parameter list for the `impl Trait` type alias",
- ct
- ),
- )
- .emit();
-
- self.tcx().consts.err
- }
- }
- }
-
- _ => ct,
- }
- }
-}
-
-struct Instantiator<'a, 'tcx> {
- infcx: &'a InferCtxt<'a, 'tcx>,
- parent_def_id: DefId,
- body_id: hir::HirId,
- param_env: ty::ParamEnv<'tcx>,
- value_span: Span,
- opaque_types: OpaqueTypeMap<'tcx>,
- obligations: Vec<PredicateObligation<'tcx>>,
-}
-
-impl<'a, 'tcx> Instantiator<'a, 'tcx> {
- fn instantiate_opaque_types_in_map<T: TypeFoldable<'tcx>>(&mut self, value: &T) -> T {
- debug!("instantiate_opaque_types_in_map(value={:?})", value);
- let tcx = self.infcx.tcx;
- value.fold_with(&mut BottomUpFolder {
- tcx,
- ty_op: |ty| {
- if ty.references_error() {
- return tcx.types.err;
- } else if let ty::Opaque(def_id, substs) = ty.kind {
-                    // Check that this `impl Trait` type is
- // declared by `parent_def_id` -- i.e., one whose
- // value we are inferring. At present, this is
- // always true during the first phase of
- // type-check, but not always true later on during
- // NLL. Once we support named opaque types more fully,
- // this same scenario will be able to arise during all phases.
- //
- // Here is an example using type alias `impl Trait`
- // that indicates the distinction we are checking for:
- //
- // ```rust
- // mod a {
- // pub type Foo = impl Iterator;
- // pub fn make_foo() -> Foo { .. }
- // }
- //
- // mod b {
- // fn foo() -> a::Foo { a::make_foo() }
- // }
- // ```
- //
- // Here, the return type of `foo` references a
- // `Opaque` indeed, but not one whose value is
- // presently being inferred. You can get into a
- // similar situation with closure return types
- // today:
- //
- // ```rust
- // fn foo() -> impl Iterator { .. }
- // fn bar() {
- // let x = || foo(); // returns the Opaque assoc with `foo`
- // }
- // ```
- if let Some(opaque_hir_id) = tcx.hir().as_local_hir_id(def_id) {
- let parent_def_id = self.parent_def_id;
- let def_scope_default = || {
- let opaque_parent_hir_id = tcx.hir().get_parent_item(opaque_hir_id);
- parent_def_id == tcx.hir().local_def_id(opaque_parent_hir_id)
- };
- let (in_definition_scope, origin) = match tcx.hir().find(opaque_hir_id) {
- Some(Node::Item(item)) => match item.kind {
- // Anonymous `impl Trait`
- hir::ItemKind::OpaqueTy(hir::OpaqueTy {
- impl_trait_fn: Some(parent),
- origin,
- ..
- }) => (parent == self.parent_def_id, origin),
- // Named `type Foo = impl Bar;`
- hir::ItemKind::OpaqueTy(hir::OpaqueTy {
- impl_trait_fn: None,
- origin,
- ..
- }) => (
- may_define_opaque_type(tcx, self.parent_def_id, opaque_hir_id),
- origin,
- ),
- _ => (def_scope_default(), hir::OpaqueTyOrigin::TypeAlias),
- },
- Some(Node::ImplItem(item)) => match item.kind {
- hir::ImplItemKind::OpaqueTy(_) => (
- may_define_opaque_type(tcx, self.parent_def_id, opaque_hir_id),
- hir::OpaqueTyOrigin::TypeAlias,
- ),
- _ => (def_scope_default(), hir::OpaqueTyOrigin::TypeAlias),
- },
- _ => bug!(
- "expected (impl) item, found {}",
- tcx.hir().node_to_string(opaque_hir_id),
- ),
- };
- if in_definition_scope {
- return self.fold_opaque_ty(ty, def_id, substs, origin);
- }
-
- debug!(
- "instantiate_opaque_types_in_map: \
- encountered opaque outside its definition scope \
- def_id={:?}",
- def_id,
- );
- }
- }
-
- ty
- },
- lt_op: |lt| lt,
- ct_op: |ct| ct,
- })
- }
-
- fn fold_opaque_ty(
- &mut self,
- ty: Ty<'tcx>,
- def_id: DefId,
- substs: SubstsRef<'tcx>,
- origin: hir::OpaqueTyOrigin,
- ) -> Ty<'tcx> {
- let infcx = self.infcx;
- let tcx = infcx.tcx;
-
- debug!("instantiate_opaque_types: Opaque(def_id={:?}, substs={:?})", def_id, substs);
-
- // Use the same type variable if the exact same opaque type appears more
- // than once in the return type (e.g., if it's passed to a type alias).
- if let Some(opaque_defn) = self.opaque_types.get(&def_id) {
- debug!("instantiate_opaque_types: returning concrete ty {:?}", opaque_defn.concrete_ty);
- return opaque_defn.concrete_ty;
- }
- let span = tcx.def_span(def_id);
- debug!("fold_opaque_ty {:?} {:?}", self.value_span, span);
- let ty_var = infcx
- .next_ty_var(TypeVariableOrigin { kind: TypeVariableOriginKind::TypeInference, span });
-
- let predicates_of = tcx.predicates_of(def_id);
- debug!("instantiate_opaque_types: predicates={:#?}", predicates_of,);
- let bounds = predicates_of.instantiate(tcx, substs);
-
- let param_env = tcx.param_env(def_id);
- let InferOk { value: bounds, obligations } =
- infcx.partially_normalize_associated_types_in(span, self.body_id, param_env, &bounds);
- self.obligations.extend(obligations);
-
- debug!("instantiate_opaque_types: bounds={:?}", bounds);
-
- let required_region_bounds = required_region_bounds(tcx, ty, bounds.predicates.clone());
- debug!("instantiate_opaque_types: required_region_bounds={:?}", required_region_bounds);
-
- // Make sure that we are in fact defining the *entire* type
- // (e.g., `type Foo<T: Bound> = impl Bar;` needs to be
- // defined by a function like `fn foo<T: Bound>() -> Foo<T>`).
- debug!("instantiate_opaque_types: param_env={:#?}", self.param_env,);
- debug!("instantiate_opaque_types: generics={:#?}", tcx.generics_of(def_id),);
-
- // Ideally, we'd get the span where *this specific `ty` came
- // from*, but right now we just use the span from the overall
- // value being folded. In simple cases like `-> impl Foo`,
- // these are the same span, but not in cases like `-> (impl
- // Foo, impl Bar)`.
- let definition_span = self.value_span;
-
- self.opaque_types.insert(
- def_id,
- OpaqueTypeDecl {
- opaque_type: ty,
- substs,
- definition_span,
- concrete_ty: ty_var,
- has_required_region_bounds: !required_region_bounds.is_empty(),
- origin,
- },
- );
- debug!("instantiate_opaque_types: ty_var={:?}", ty_var);
-
- for predicate in &bounds.predicates {
- if let ty::Predicate::Projection(projection) = &predicate {
- if projection.skip_binder().ty.references_error() {
-                    // No point in adding these obligations since there's a type error involved.
- return ty_var;
- }
- }
- }
-
- self.obligations.reserve(bounds.predicates.len());
- for predicate in bounds.predicates {
- // Change the predicate to refer to the type variable,
- // which will be the concrete type instead of the opaque type.
- // This also instantiates nested instances of `impl Trait`.
- let predicate = self.instantiate_opaque_types_in_map(&predicate);
-
- let cause = traits::ObligationCause::new(span, self.body_id, traits::SizedReturnType);
-
- // Require that the predicate holds for the concrete type.
- debug!("instantiate_opaque_types: predicate={:?}", predicate);
- self.obligations.push(traits::Obligation::new(cause, self.param_env, predicate));
- }
-
- ty_var
- }
-}
-
-/// Returns `true` if `opaque_hir_id` is a sibling or a child of a sibling of `def_id`.
-///
-/// Example:
-/// ```rust
-/// pub mod foo {
-/// pub mod bar {
-/// pub trait Bar { .. }
-///
-/// pub type Baz = impl Bar;
-///
-/// fn f1() -> Baz { .. }
-/// }
-///
-/// fn f2() -> bar::Baz { .. }
-/// }
-/// ```
-///
-/// Here, `def_id` is the `DefId` of the defining use of the opaque type (e.g., `f1` or `f2`),
-/// and `opaque_hir_id` is the `HirId` of the definition of the opaque type `Baz`.
-/// For the above example, this function returns `true` for `f1` and `false` for `f2`.
-pub fn may_define_opaque_type(tcx: TyCtxt<'_>, def_id: DefId, opaque_hir_id: hir::HirId) -> bool {
- let mut hir_id = tcx.hir().as_local_hir_id(def_id).unwrap();
-
- // Named opaque types can be defined by any siblings or children of siblings.
- let scope = tcx.hir().get_defining_scope(opaque_hir_id);
- // We walk up the node tree until we hit the root or the scope of the opaque type.
- while hir_id != scope && hir_id != hir::CRATE_HIR_ID {
- hir_id = tcx.hir().get_parent_item(hir_id);
- }
- // Syntactically, we are allowed to define the concrete type if:
- let res = hir_id == scope;
- trace!(
- "may_define_opaque_type(def={:?}, opaque_node={:?}) = {}",
- tcx.hir().find(hir_id),
- tcx.hir().get(opaque_hir_id),
- res
- );
- res
-}
-
-/// Given a set of predicates that apply to an object type, returns
-/// the region bounds that the (erased) `Self` type must
-/// outlive. Precisely *because* the `Self` type is erased, the
-/// parameter `erased_self_ty` must be supplied to indicate what type
-/// has been used to represent `Self` in the predicates
-/// themselves. This should really be a unique type; `FreshTy(0)` is a
-/// popular choice.
-///
-/// N.B., in some cases, particularly around higher-ranked bounds,
-/// this function returns a kind of conservative approximation.
-/// That is, all regions returned by this function are definitely
-/// required, but there may be other region bounds that are not
-/// returned, as well as requirements like `for<'a> T: 'a`.
-///
-/// Requires that trait definitions have been processed so that we can
-/// elaborate predicates and walk supertraits.
-//
-// FIXME: callers may only have a `&[Predicate]`, not a `Vec`, so that's
-// what this code should accept.
-crate fn required_region_bounds(
- tcx: TyCtxt<'tcx>,
- erased_self_ty: Ty<'tcx>,
- predicates: Vec<ty::Predicate<'tcx>>,
-) -> Vec<ty::Region<'tcx>> {
- debug!(
- "required_region_bounds(erased_self_ty={:?}, predicates={:?})",
- erased_self_ty, predicates
- );
-
- assert!(!erased_self_ty.has_escaping_bound_vars());
-
- traits::elaborate_predicates(tcx, predicates)
- .filter_map(|predicate| {
- match predicate {
- ty::Predicate::Projection(..)
- | ty::Predicate::Trait(..)
- | ty::Predicate::Subtype(..)
- | ty::Predicate::WellFormed(..)
- | ty::Predicate::ObjectSafe(..)
- | ty::Predicate::ClosureKind(..)
- | ty::Predicate::RegionOutlives(..)
- | ty::Predicate::ConstEvaluatable(..) => None,
- ty::Predicate::TypeOutlives(predicate) => {
- // Search for a bound of the form `erased_self_ty
- // : 'a`, but be wary of something like `for<'a>
- // erased_self_ty : 'a` (we interpret a
- // higher-ranked bound like that as 'static,
- // though at present the code in `fulfill.rs`
- // considers such bounds to be unsatisfiable, so
- // it's kind of a moot point since you could never
- // construct such an object, but this seems
- // correct even if that code changes).
- let ty::OutlivesPredicate(ref t, ref r) = predicate.skip_binder();
- if t == &erased_self_ty && !r.has_escaping_bound_vars() {
- Some(*r)
- } else {
- None
- }
- }
- }
- })
- .collect()
-}
use crate::infer::{GenericKind, InferCtxt};
-use crate::traits::query::outlives_bounds::{self, OutlivesBound};
+use crate::traits::query::OutlivesBound;
+use rustc::ty;
use rustc::ty::free_region_map::FreeRegionMap;
-use rustc::ty::{self, Ty};
use rustc_data_structures::fx::FxHashMap;
use rustc_hir as hir;
-use rustc_span::Span;
+
+use super::explicit_outlives_bounds;
/// The `OutlivesEnvironment` collects information about what outlives
/// what in a given type-checking setting. For example, if we have a
region_bound_pairs_accum: vec![],
};
- env.add_outlives_bounds(None, outlives_bounds::explicit_outlives_bounds(param_env));
+ env.add_outlives_bounds(None, explicit_outlives_bounds(param_env));
env
}
self.region_bound_pairs_accum.truncate(len);
}
- /// This method adds "implied bounds" into the outlives environment.
- /// Implied bounds are outlives relationships that we can deduce
- /// on the basis that certain types must be well-formed -- these are
- /// either the types that appear in the function signature or else
- /// the input types to an impl. For example, if you have a function
- /// like
- ///
- /// ```
- /// fn foo<'a, 'b, T>(x: &'a &'b [T]) { }
- /// ```
- ///
- /// we can assume in the caller's body that `'b: 'a` and that `T:
- /// 'b` (and hence, transitively, that `T: 'a`). This method would
- /// add those assumptions into the outlives-environment.
- ///
- /// Tests: `src/test/compile-fail/regions-free-region-ordering-*.rs`
- pub fn add_implied_bounds(
- &mut self,
- infcx: &InferCtxt<'a, 'tcx>,
- fn_sig_tys: &[Ty<'tcx>],
- body_id: hir::HirId,
- span: Span,
- ) {
- debug!("add_implied_bounds()");
-
- for &ty in fn_sig_tys {
- let ty = infcx.resolve_vars_if_possible(&ty);
- debug!("add_implied_bounds: ty = {}", ty);
- let implied_bounds = infcx.implied_outlives_bounds(self.param_env, body_id, ty, span);
- self.add_outlives_bounds(Some(infcx), implied_bounds)
- }
- }
-
/// Save the current set of region-bound pairs under the given `body_id`.
pub fn save_implied_bounds(&mut self, body_id: hir::HirId) {
let old =
/// contain inference variables, it must be supplied, in which
/// case we will register "givens" on the inference context. (See
/// `RegionConstraintData`.)
- fn add_outlives_bounds<I>(&mut self, infcx: Option<&InferCtxt<'a, 'tcx>>, outlives_bounds: I)
- where
+ pub fn add_outlives_bounds<I>(
+ &mut self,
+ infcx: Option<&InferCtxt<'a, 'tcx>>,
+ outlives_bounds: I,
+ ) where
I: IntoIterator<Item = OutlivesBound<'tcx>>,
{
// Record relationships such as `T:'x` that don't go into the
pub mod env;
pub mod obligations;
pub mod verify;
+
+use rustc::traits::query::OutlivesBound;
+use rustc::ty;
+
+pub fn explicit_outlives_bounds<'tcx>(
+ param_env: ty::ParamEnv<'tcx>,
+) -> impl Iterator<Item = OutlivesBound<'tcx>> + 'tcx {
+ debug!("explicit_outlives_bounds()");
+ param_env.caller_bounds.into_iter().filter_map(move |predicate| match predicate {
+ ty::Predicate::Projection(..)
+ | ty::Predicate::Trait(..)
+ | ty::Predicate::Subtype(..)
+ | ty::Predicate::WellFormed(..)
+ | ty::Predicate::ObjectSafe(..)
+ | ty::Predicate::ClosureKind(..)
+ | ty::Predicate::TypeOutlives(..)
+ | ty::Predicate::ConstEvaluatable(..) => None,
+ ty::Predicate::RegionOutlives(ref data) => data
+ .no_bound_vars()
+ .map(|ty::OutlivesPredicate(r_a, r_b)| OutlivesBound::RegionSubRegion(r_b, r_a)),
+ })
+}
-For info on how the current borrowck works, see the [rustc guide].
+For info on how the current borrowck works, see the [rustc dev guide].
-[rustc guide]: https://rust-lang.github.io/rustc-guide/borrow_check.html
+[rustc dev guide]: https://rustc-dev-guide.rust-lang.org/borrow_check.html
assert!(self.in_snapshot());
// Go through each placeholder that we created.
- for (_, &placeholder_region) in placeholder_map {
+ for &placeholder_region in placeholder_map.values() {
// Find the universe this placeholder inhabits.
let placeholder = match placeholder_region {
ty::RePlaceholder(p) => p,
fn new(directions: TaintDirections, initial_region: ty::Region<'tcx>) -> Self {
let mut regions = FxHashSet::default();
regions.insert(initial_region);
- TaintSet { directions: directions, regions: regions }
+ TaintSet { directions, regions }
}
fn fixed_point(
b: Region<'tcx>,
origin: SubregionOrigin<'tcx>,
) -> Region<'tcx> {
- let vars = TwoRegions { a: a, b: b };
+ let vars = TwoRegions { a, b };
if let Some(&c) = self.combine_map(t).get(&vars) {
return tcx.mk_region(ReVar(c));
}
}
fn fold_ty(&mut self, t: Ty<'tcx>) -> Ty<'tcx> {
- if !t.has_infer_types() && !t.has_infer_consts() {
+ if !t.has_infer_types_or_consts() {
t // micro-optimize -- if there is nothing in this type that this fold affects...
} else {
let t = self.infcx.shallow_resolve(t);
}
fn fold_const(&mut self, ct: &'tcx Const<'tcx>) -> &'tcx Const<'tcx> {
- if !ct.has_infer_consts() {
+ if !ct.has_infer_types_or_consts() {
ct // micro-optimize -- if there is nothing in this const that this fold affects...
} else {
let ct = self.infcx.shallow_resolve(ct);
where
T: TypeFoldable<'tcx>,
{
- let mut full_resolver = FullTypeResolver { infcx: infcx, err: None };
+ let mut full_resolver = FullTypeResolver { infcx, err: None };
let result = value.fold_with(&mut full_resolver);
match full_resolver.err {
None => Ok(result),
}
// N.B. This type is not public because the protocol around checking the
-// `err` field is not enforcable otherwise.
+// `err` field is not enforceable otherwise.
struct FullTypeResolver<'a, 'tcx> {
infcx: &'a InferCtxt<'a, 'tcx>,
err: Option<FixupError<'tcx>>,
f: &'combine mut CombineFields<'infcx, 'tcx>,
a_is_expected: bool,
) -> Sub<'combine, 'infcx, 'tcx> {
- Sub { fields: f, a_is_expected: a_is_expected }
+ Sub { fields: f, a_is_expected }
}
fn with_expected_switched<R, F: FnOnce(&mut Self) -> R>(&mut self, f: F) -> R {
-//! This crates defines the trait resolution method and the type inference engine.
+//! This crate defines the type inference engine.
//!
-//! - **Traits.** Trait resolution is implemented in the `traits` module.
//! - **Type inference.** The type inference code can be found in the `infer` module;
//! this code handles low-level equality and subtyping operations. The
//! type check pass in the compiler is found in the `librustc_typeck` crate.
//!
-//! For more information about how rustc works, see the [rustc guide].
+//! For more information about how rustc works, see the [rustc dev guide].
//!
-//! [rustc guide]: https://rust-lang.github.io/rustc-guide/
+//! [rustc dev guide]: https://rustc-dev-guide.rust-lang.org/
//!
//! # Note
//!
#![feature(bool_to_option)]
#![feature(box_patterns)]
#![feature(box_syntax)]
-#![feature(drain_filter)]
#![feature(never_type)]
#![feature(range_is_empty)]
#![feature(in_band_lifetimes)]
#![feature(crate_visibility_modifier)]
-#![recursion_limit = "512"]
+#![recursion_limit = "512"] // For rustdoc
#[macro_use]
extern crate rustc_macros;
+++ /dev/null
-//! Support code for rustdoc and external tools.
-//! You really don't want to be using this unless you need to.
-
-use super::*;
-
-use crate::infer::region_constraints::{Constraint, RegionConstraintData};
-use crate::infer::InferCtxt;
-use rustc::ty::fold::TypeFolder;
-use rustc::ty::{Region, RegionVid};
-
-use rustc_data_structures::fx::{FxHashMap, FxHashSet};
-
-use std::collections::hash_map::Entry;
-use std::collections::VecDeque;
-
-// FIXME(twk): this is obviously not nice to duplicate like that
-#[derive(Eq, PartialEq, Hash, Copy, Clone, Debug)]
-pub enum RegionTarget<'tcx> {
- Region(Region<'tcx>),
- RegionVid(RegionVid),
-}
-
-#[derive(Default, Debug, Clone)]
-pub struct RegionDeps<'tcx> {
- larger: FxHashSet<RegionTarget<'tcx>>,
- smaller: FxHashSet<RegionTarget<'tcx>>,
-}
-
-pub enum AutoTraitResult<A> {
- ExplicitImpl,
- PositiveImpl(A),
- NegativeImpl,
-}
-
-impl<A> AutoTraitResult<A> {
- fn is_auto(&self) -> bool {
- match *self {
- AutoTraitResult::PositiveImpl(_) | AutoTraitResult::NegativeImpl => true,
- _ => false,
- }
- }
-}
-
-pub struct AutoTraitInfo<'cx> {
- pub full_user_env: ty::ParamEnv<'cx>,
- pub region_data: RegionConstraintData<'cx>,
- pub vid_to_region: FxHashMap<ty::RegionVid, ty::Region<'cx>>,
-}
-
-pub struct AutoTraitFinder<'tcx> {
- tcx: TyCtxt<'tcx>,
-}
-
-impl<'tcx> AutoTraitFinder<'tcx> {
- pub fn new(tcx: TyCtxt<'tcx>) -> Self {
- AutoTraitFinder { tcx }
- }
-
- /// Makes a best effort to determine whether and under which conditions an auto trait is
- /// implemented for a type. For example, if you have
- ///
- /// ```
- /// struct Foo<T> { data: Box<T> }
- /// ```
- ///
- /// then this might return that Foo<T>: Send if T: Send (encoded in the AutoTraitResult type).
- /// The analysis attempts to account for custom impls as well as other complex cases. This
- /// result is intended for use by rustdoc and other such consumers.
- ///
- /// (Note that due to the coinductive nature of Send, the full and correct result is actually
- /// quite simple to generate. That is, when a type has no custom impl, it is Send iff its field
- /// types are all Send. So, in our example, we might have that Foo<T>: Send if Box<T>: Send.
- /// But this is often not the best way to present to the user.)
- ///
- /// Warning: The API should be considered highly unstable, and it may be refactored or removed
- /// in the future.
- pub fn find_auto_trait_generics<A>(
- &self,
- ty: Ty<'tcx>,
- orig_env: ty::ParamEnv<'tcx>,
- trait_did: DefId,
- auto_trait_callback: impl Fn(&InferCtxt<'_, 'tcx>, AutoTraitInfo<'tcx>) -> A,
- ) -> AutoTraitResult<A> {
- let tcx = self.tcx;
-
- let trait_ref = ty::TraitRef { def_id: trait_did, substs: tcx.mk_substs_trait(ty, &[]) };
-
- let trait_pred = ty::Binder::bind(trait_ref);
-
- let bail_out = tcx.infer_ctxt().enter(|infcx| {
- let mut selcx = SelectionContext::with_negative(&infcx, true);
- let result = selcx.select(&Obligation::new(
- ObligationCause::dummy(),
- orig_env,
- trait_pred.to_poly_trait_predicate(),
- ));
-
- match result {
- Ok(Some(Vtable::VtableImpl(_))) => {
- debug!(
- "find_auto_trait_generics({:?}): \
- manual impl found, bailing out",
- trait_ref
- );
- true
- }
- _ => false,
- }
- });
-
- // If an explicit impl exists, it always takes priority over an auto impl
- if bail_out {
- return AutoTraitResult::ExplicitImpl;
- }
-
- return tcx.infer_ctxt().enter(|mut infcx| {
- let mut fresh_preds = FxHashSet::default();
-
- // Due to the way projections are handled by SelectionContext, we need to run
- // evaluate_predicates twice: once on the original param env, and once on the result of
- // the first evaluate_predicates call.
- //
- // The problem is this: most of rustc, including SelectionContext and traits::project,
-            // are designed to work with a concrete usage of a type (e.g., Vec<T> in
-            // fn<T>() { Vec<T> }). This information will generally never change - given
- // the 'T' in fn<T>() { ... }, we'll never know anything else about 'T'.
- // If we're unable to prove that 'T' implements a particular trait, we're done -
- // there's nothing left to do but error out.
- //
- // However, synthesizing an auto trait impl works differently. Here, we start out with
- // a set of initial conditions - the ParamEnv of the struct/enum/union we're dealing
- // with - and progressively discover the conditions we need to fulfill for it to
- // implement a certain auto trait. This ends up breaking two assumptions made by trait
- // selection and projection:
- //
- // * We can always cache the result of a particular trait selection for the lifetime of
- // an InfCtxt
- // * Given a projection bound such as '<T as SomeTrait>::SomeItem = K', if 'T:
- // SomeTrait' doesn't hold, then we don't need to care about the 'SomeItem = K'
- //
- // We fix the first assumption by manually clearing out all of the InferCtxt's caches
- // in between calls to SelectionContext.select. This allows us to keep all of the
- // intermediate types we create bound to the 'tcx lifetime, rather than needing to lift
- // them between calls.
- //
- // We fix the second assumption by reprocessing the result of our first call to
- // evaluate_predicates. Using the example of '<T as SomeTrait>::SomeItem = K', our first
- // pass will pick up 'T: SomeTrait', but not 'SomeItem = K'. On our second pass,
- // traits::project will see that 'T: SomeTrait' is in our ParamEnv, allowing
- // SelectionContext to return it back to us.
-
- let (new_env, user_env) = match self.evaluate_predicates(
- &mut infcx,
- trait_did,
- ty,
- orig_env,
- orig_env,
- &mut fresh_preds,
- false,
- ) {
- Some(e) => e,
- None => return AutoTraitResult::NegativeImpl,
- };
-
- let (full_env, full_user_env) = self
- .evaluate_predicates(
- &mut infcx,
- trait_did,
- ty,
- new_env,
- user_env,
- &mut fresh_preds,
- true,
- )
- .unwrap_or_else(|| {
- panic!("Failed to fully process: {:?} {:?} {:?}", ty, trait_did, orig_env)
- });
-
- debug!(
- "find_auto_trait_generics({:?}): fulfilling \
- with {:?}",
- trait_ref, full_env
- );
- infcx.clear_caches();
-
- // At this point, we already have all of the bounds we need. FulfillmentContext is used
- // to store all of the necessary region/lifetime bounds in the InferContext, as well as
- // an additional sanity check.
- let mut fulfill = FulfillmentContext::new();
- fulfill.register_bound(
- &infcx,
- full_env,
- ty,
- trait_did,
- ObligationCause::misc(DUMMY_SP, hir::DUMMY_HIR_ID),
- );
- fulfill.select_all_or_error(&infcx).unwrap_or_else(|e| {
- panic!("Unable to fulfill trait {:?} for '{:?}': {:?}", trait_did, ty, e)
- });
-
- let body_id_map: FxHashMap<_, _> = infcx
- .inner
- .borrow()
- .region_obligations
- .iter()
- .map(|&(id, _)| (id, vec![]))
- .collect();
-
- infcx.process_registered_region_obligations(&body_id_map, None, full_env);
-
- let region_data = infcx
- .inner
- .borrow_mut()
- .unwrap_region_constraints()
- .region_constraint_data()
- .clone();
-
- let vid_to_region = self.map_vid_to_region(&region_data);
-
- let info = AutoTraitInfo { full_user_env, region_data, vid_to_region };
-
- return AutoTraitResult::PositiveImpl(auto_trait_callback(&infcx, info));
- });
- }
-}
-
-impl AutoTraitFinder<'tcx> {
- /// The core logic responsible for computing the bounds for our synthesized impl.
- ///
- /// To calculate the bounds, we call `SelectionContext.select` in a loop. Like
- /// `FulfillmentContext`, we recursively select the nested obligations of predicates we
- /// encounter. However, whenever we encounter an `UnimplementedError` involving a type
- /// parameter, we add it to our `ParamEnv`. Since our goal is to determine when a particular
- /// type implements an auto trait, Unimplemented errors tell us what conditions need to be met.
- ///
- /// This method ends up working somewhat similarly to `FulfillmentContext`, but with a few key
- /// differences. `FulfillmentContext` works under the assumption that it's dealing with concrete
- /// user code. Accordingly, it considers all possible ways that a `Predicate` could be met, which
- /// isn't always what we want for a synthesized impl. For example, given the predicate `T:
- /// Iterator`, `FulfillmentContext` can end up reporting an Unimplemented error for `T:
- /// IntoIterator` -- since there's an implementation of `Iterator` where `T: IntoIterator`,
- /// `FulfillmentContext` will drive `SelectionContext` to consider that impl before giving up.
- /// If we were to rely on `FulfillmentContext`'s decision, we might end up synthesizing an impl
- /// like this:
- ///
- /// impl<T> Send for Foo<T> where T: IntoIterator
- ///
- /// While it might be technically true that Foo implements Send where `T: IntoIterator`,
- /// the bound is overly restrictive - it's really only necessary that `T: Iterator`.
- ///
- /// For this reason, `evaluate_predicates` handles predicates with type variables specially.
- /// When we encounter an `Unimplemented` error for a bound such as `T: Iterator`, we immediately
- /// add it to our `ParamEnv`, and add it to our stack for recursive evaluation. When we later
- /// select it, we'll pick up any nested bounds, without ever inferring that `T: IntoIterator`
- /// needs to hold.
- ///
- /// One additional consideration is supertrait bounds. Normally, a `ParamEnv` is only ever
- /// constructed once for a given type. As part of the construction process, the `ParamEnv` will
- /// have any supertrait bounds normalized -- e.g., if we have a type `struct Foo<T: Copy>`, the
- /// `ParamEnv` will contain `T: Copy` and `T: Clone`, since `Copy: Clone`. When we construct our
- /// own `ParamEnv`, we need to do this ourselves, through `traits::elaborate_predicates`, or
- /// else `SelectionContext` will choke on the missing predicates. However, this should never
- /// show up in the final synthesized generics: we don't want our generated docs page to contain
- /// something like `T: Copy + Clone`, as that's redundant. Therefore, we keep track of a
- /// separate `user_env`, which only holds the predicates that will actually be displayed to the
- /// user.
- fn evaluate_predicates(
- &self,
- infcx: &InferCtxt<'_, 'tcx>,
- trait_did: DefId,
- ty: Ty<'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- user_env: ty::ParamEnv<'tcx>,
- fresh_preds: &mut FxHashSet<ty::Predicate<'tcx>>,
- only_projections: bool,
- ) -> Option<(ty::ParamEnv<'tcx>, ty::ParamEnv<'tcx>)> {
- let tcx = infcx.tcx;
-
- let mut select = SelectionContext::with_negative(&infcx, true);
-
- let mut already_visited = FxHashSet::default();
- let mut predicates = VecDeque::new();
- predicates.push_back(ty::Binder::bind(ty::TraitPredicate {
- trait_ref: ty::TraitRef {
- def_id: trait_did,
- substs: infcx.tcx.mk_substs_trait(ty, &[]),
- },
- }));
-
- let mut computed_preds: FxHashSet<_> = param_env.caller_bounds.iter().cloned().collect();
- let mut user_computed_preds: FxHashSet<_> =
- user_env.caller_bounds.iter().cloned().collect();
-
- let mut new_env = param_env;
- let dummy_cause = ObligationCause::misc(DUMMY_SP, hir::DUMMY_HIR_ID);
-
- while let Some(pred) = predicates.pop_front() {
- infcx.clear_caches();
-
- if !already_visited.insert(pred) {
- continue;
- }
-
- // Call `infcx.resolve_vars_if_possible` to see if we can
- // get rid of any inference variables.
- let obligation = infcx.resolve_vars_if_possible(&Obligation::new(
- dummy_cause.clone(),
- new_env,
- pred,
- ));
- let result = select.select(&obligation);
-
- match &result {
- &Ok(Some(ref vtable)) => {
- // If we see an explicit negative impl (e.g., `impl !Send for MyStruct`),
- // we immediately bail out, since it's impossible for us to continue.
- match vtable {
- Vtable::VtableImpl(VtableImplData { impl_def_id, .. }) => {
- // Blame 'tidy' for the weird bracket placement.
- if infcx.tcx.impl_polarity(*impl_def_id) == ty::ImplPolarity::Negative {
- debug!(
- "evaluate_predicates: found explicit negative impl\
- {:?}, bailing out",
- impl_def_id
- );
- return None;
- }
- }
- _ => {}
- }
-
- let obligations = vtable.clone().nested_obligations().into_iter();
-
- if !self.evaluate_nested_obligations(
- ty,
- obligations,
- &mut user_computed_preds,
- fresh_preds,
- &mut predicates,
- &mut select,
- only_projections,
- ) {
- return None;
- }
- }
- &Ok(None) => {}
- &Err(SelectionError::Unimplemented) => {
- if self.is_param_no_infer(pred.skip_binder().trait_ref.substs) {
- already_visited.remove(&pred);
- self.add_user_pred(
- &mut user_computed_preds,
- ty::Predicate::Trait(pred, hir::Constness::NotConst),
- );
- predicates.push_back(pred);
- } else {
- debug!(
- "evaluate_predicates: `Unimplemented` found, bailing: \
- {:?} {:?} {:?}",
- ty,
- pred,
- pred.skip_binder().trait_ref.substs
- );
- return None;
- }
- }
- _ => panic!("Unexpected error for '{:?}': {:?}", ty, result),
- };
-
- computed_preds.extend(user_computed_preds.iter().cloned());
- let normalized_preds =
- elaborate_predicates(tcx, computed_preds.iter().cloned().collect());
- new_env =
- ty::ParamEnv::new(tcx.mk_predicates(normalized_preds), param_env.reveal, None);
- }
-
- let final_user_env = ty::ParamEnv::new(
- tcx.mk_predicates(user_computed_preds.into_iter()),
- user_env.reveal,
- None,
- );
- debug!(
- "evaluate_predicates(ty={:?}, trait_did={:?}): succeeded with '{:?}' \
- '{:?}'",
- ty, trait_did, new_env, final_user_env
- );
-
- return Some((new_env, final_user_env));
- }
-
- /// This method is designed to work around the following issue:
- /// When we compute auto trait bounds, we repeatedly call `SelectionContext.select`,
- /// progressively building a `ParamEnv` based on the results we get.
- /// However, our usage of `SelectionContext` differs from its normal use within the compiler,
- /// in that we capture and reprocess predicates from `Unimplemented` errors.
- ///
- /// This can lead to a corner case when dealing with region parameters.
- /// During our selection loop in `evaluate_predicates`, we might end up with
- /// two trait predicates that differ only in their region parameters:
- /// one containing a HRTB lifetime parameter, and one containing a 'normal'
- /// lifetime parameter. For example:
- ///
- /// T as MyTrait<'a>
- /// T as MyTrait<'static>
- ///
- /// If we put both of these predicates in our computed `ParamEnv`, we'll
- /// confuse `SelectionContext`, since it will (correctly) view both as being applicable.
- ///
- /// To solve this, we pick the 'more strict' lifetime bound -- i.e., the HRTB.
- /// Our end goal is to generate a user-visible description of the conditions
- /// under which a type implements an auto trait. A trait predicate involving
- /// a HRTB means that the type needs to work with any choice of lifetime,
- /// not just one specific lifetime (e.g., `'static`).
- fn add_user_pred<'c>(
- &self,
- user_computed_preds: &mut FxHashSet<ty::Predicate<'c>>,
- new_pred: ty::Predicate<'c>,
- ) {
- let mut should_add_new = true;
- user_computed_preds.retain(|&old_pred| {
- match (&new_pred, old_pred) {
- (&ty::Predicate::Trait(new_trait, _), ty::Predicate::Trait(old_trait, _)) => {
- if new_trait.def_id() == old_trait.def_id() {
- let new_substs = new_trait.skip_binder().trait_ref.substs;
- let old_substs = old_trait.skip_binder().trait_ref.substs;
-
- if !new_substs.types().eq(old_substs.types()) {
- // We can't compare lifetimes if the types are different,
- // so skip checking `old_pred`.
- return true;
- }
-
- for (new_region, old_region) in
- new_substs.regions().zip(old_substs.regions())
- {
- match (new_region, old_region) {
- // If both predicates have an `ReLateBound` (a HRTB) in the
- // same spot, we do nothing.
- (
- ty::RegionKind::ReLateBound(_, _),
- ty::RegionKind::ReLateBound(_, _),
- ) => {}
-
- (ty::RegionKind::ReLateBound(_, _), _)
- | (_, ty::RegionKind::ReVar(_)) => {
- // One of these is true:
- // The new predicate has a HRTB in a spot where the old
- // predicate does not (if they both had a HRTB, the previous
- // match arm would have executed). A HRTB is a 'stricter'
- // bound than anything else, so we want to keep the newer
- // predicate (with the HRTB) in place of the old predicate.
- //
- // OR
- //
- // The old predicate has a region variable where the new
- // predicate has some other kind of region. A region
- // variable isn't something we can actually display to a user,
- // so we choose the new predicate (which doesn't have a region
- // variable).
- //
- // In both cases, we want to remove the old predicate
- // from `user_computed_preds`, and replace it with the new
- // one. Having both the old and the new
- // predicate in a `ParamEnv` would confuse `SelectionContext`.
- //
- // We're currently in the closure passed to 'retain',
- // so we return `false` to remove the old predicate from
- // `user_computed_preds`.
- return false;
- }
- (_, ty::RegionKind::ReLateBound(_, _))
- | (ty::RegionKind::ReVar(_), _) => {
- // This is the opposite situation as the previous arm.
- // One of these is true:
- //
- // The old predicate has a HRTB lifetime in a place where the
- // new predicate does not.
- //
- // OR
- //
- // The new predicate has a region variable where the old
- // predicate has some other type of region.
- //
- // We want to leave the old
- // predicate in `user_computed_preds`, and skip adding
- // new_pred to `user_computed_preds`.
- should_add_new = false
- }
- _ => {}
- }
- }
- }
- }
- _ => {}
- }
- return true;
- });
-
- if should_add_new {
- user_computed_preds.insert(new_pred);
- }
- }
-
- /// This is very similar to `handle_lifetimes`. However, instead of matching `ty::Region`s
- /// to each other, we match `ty::RegionVid`s to `ty::Region`s.
- fn map_vid_to_region<'cx>(
- &self,
- regions: &RegionConstraintData<'cx>,
- ) -> FxHashMap<ty::RegionVid, ty::Region<'cx>> {
- let mut vid_map: FxHashMap<RegionTarget<'cx>, RegionDeps<'cx>> = FxHashMap::default();
- let mut finished_map = FxHashMap::default();
-
- for constraint in regions.constraints.keys() {
- match constraint {
- &Constraint::VarSubVar(r1, r2) => {
- {
- let deps1 = vid_map.entry(RegionTarget::RegionVid(r1)).or_default();
- deps1.larger.insert(RegionTarget::RegionVid(r2));
- }
-
- let deps2 = vid_map.entry(RegionTarget::RegionVid(r2)).or_default();
- deps2.smaller.insert(RegionTarget::RegionVid(r1));
- }
- &Constraint::RegSubVar(region, vid) => {
- {
- let deps1 = vid_map.entry(RegionTarget::Region(region)).or_default();
- deps1.larger.insert(RegionTarget::RegionVid(vid));
- }
-
- let deps2 = vid_map.entry(RegionTarget::RegionVid(vid)).or_default();
- deps2.smaller.insert(RegionTarget::Region(region));
- }
- &Constraint::VarSubReg(vid, region) => {
- finished_map.insert(vid, region);
- }
- &Constraint::RegSubReg(r1, r2) => {
- {
- let deps1 = vid_map.entry(RegionTarget::Region(r1)).or_default();
- deps1.larger.insert(RegionTarget::Region(r2));
- }
-
- let deps2 = vid_map.entry(RegionTarget::Region(r2)).or_default();
- deps2.smaller.insert(RegionTarget::Region(r1));
- }
- }
- }
-
- while !vid_map.is_empty() {
- let target = *vid_map.keys().next().expect("Keys somehow empty");
- let deps = vid_map.remove(&target).expect("Entry somehow missing");
-
- for smaller in deps.smaller.iter() {
- for larger in deps.larger.iter() {
- match (smaller, larger) {
- (&RegionTarget::Region(_), &RegionTarget::Region(_)) => {
- if let Entry::Occupied(v) = vid_map.entry(*smaller) {
- let smaller_deps = v.into_mut();
- smaller_deps.larger.insert(*larger);
- smaller_deps.larger.remove(&target);
- }
-
- if let Entry::Occupied(v) = vid_map.entry(*larger) {
- let larger_deps = v.into_mut();
- larger_deps.smaller.insert(*smaller);
- larger_deps.smaller.remove(&target);
- }
- }
- (&RegionTarget::RegionVid(v1), &RegionTarget::Region(r1)) => {
- finished_map.insert(v1, r1);
- }
- (&RegionTarget::Region(_), &RegionTarget::RegionVid(_)) => {
- // Do nothing; we don't care about regions that are smaller than vids.
- }
- (&RegionTarget::RegionVid(_), &RegionTarget::RegionVid(_)) => {
- if let Entry::Occupied(v) = vid_map.entry(*smaller) {
- let smaller_deps = v.into_mut();
- smaller_deps.larger.insert(*larger);
- smaller_deps.larger.remove(&target);
- }
-
- if let Entry::Occupied(v) = vid_map.entry(*larger) {
- let larger_deps = v.into_mut();
- larger_deps.smaller.insert(*smaller);
- larger_deps.smaller.remove(&target);
- }
- }
- }
- }
- }
- }
- finished_map
- }
-
- fn is_param_no_infer(&self, substs: SubstsRef<'_>) -> bool {
- return self.is_of_param(substs.type_at(0)) && !substs.types().any(|t| t.has_infer_types());
- }
-
- pub fn is_of_param(&self, ty: Ty<'_>) -> bool {
- return match ty.kind {
- ty::Param(_) => true,
- ty::Projection(p) => self.is_of_param(p.self_ty()),
- _ => false,
- };
- }
-
- fn is_self_referential_projection(&self, p: ty::PolyProjectionPredicate<'_>) -> bool {
- match p.ty().skip_binder().kind {
- ty::Projection(proj) if proj == p.skip_binder().projection_ty => true,
- _ => false,
- }
- }
-
- fn evaluate_nested_obligations(
- &self,
- ty: Ty<'_>,
- nested: impl Iterator<Item = Obligation<'tcx, ty::Predicate<'tcx>>>,
- computed_preds: &mut FxHashSet<ty::Predicate<'tcx>>,
- fresh_preds: &mut FxHashSet<ty::Predicate<'tcx>>,
- predicates: &mut VecDeque<ty::PolyTraitPredicate<'tcx>>,
- select: &mut SelectionContext<'_, 'tcx>,
- only_projections: bool,
- ) -> bool {
- let dummy_cause = ObligationCause::misc(DUMMY_SP, hir::DUMMY_HIR_ID);
-
- for (obligation, mut predicate) in nested.map(|o| (o.clone(), o.predicate)) {
- let is_new_pred = fresh_preds.insert(self.clean_pred(select.infcx(), predicate));
-
- // Resolve any inference variables that we can, to help selection succeed
- predicate = select.infcx().resolve_vars_if_possible(&predicate);
-
- // We only add a predicate as a user-displayable bound if
- // it involves a generic parameter, and doesn't contain
- // any inference variables.
- //
- // Displaying a bound involving a concrete type (instead of a generic
- // parameter) would be pointless, since it's always true
- // (e.g. u8: Copy)
- // Displaying an inference variable is impossible, since they're
- // an internal compiler detail without a defined visual representation
- //
- // We check this by calling is_of_param on the relevant types
- // from the various possible predicates
- match &predicate {
- &ty::Predicate::Trait(p, _) => {
- if self.is_param_no_infer(p.skip_binder().trait_ref.substs)
- && !only_projections
- && is_new_pred
- {
- self.add_user_pred(computed_preds, predicate);
- }
- predicates.push_back(p);
- }
- &ty::Predicate::Projection(p) => {
- debug!(
- "evaluate_nested_obligations: examining projection predicate {:?}",
- predicate
- );
-
- // As described above, we only want to display
- // bounds which include a generic parameter but don't include
- // an inference variable.
- // Additionally, we check if we've seen this predicate before,
- // to avoid rendering duplicate bounds to the user.
- if self.is_param_no_infer(p.skip_binder().projection_ty.substs)
- && !p.ty().skip_binder().has_infer_types()
- && is_new_pred
- {
- debug!(
- "evaluate_nested_obligations: adding projection predicate\
- to computed_preds: {:?}",
- predicate
- );
-
- // Under unusual circumstances, we can end up with a self-referential
- // projection predicate. For example:
- // <T as MyType>::Value == <T as MyType>::Value
- // Not only is displaying this to the user pointless,
- // having it in the ParamEnv will cause an issue if we try to call
- // poly_project_and_unify_type on the predicate, since this kind of
- // predicate will normally never end up in a ParamEnv.
- //
- // For these reasons, we ignore these weird predicates,
- // ensuring that we're able to properly synthesize an auto trait impl
- if self.is_self_referential_projection(p) {
- debug!(
- "evaluate_nested_obligations: encountered a projection
- predicate equating a type with itself! Skipping"
- );
- } else {
- self.add_user_pred(computed_preds, predicate);
- }
- }
-
- // There are three possible cases when we project a predicate:
- //
- // 1. We encounter an error. This means that it's impossible for
- // our current type to implement the auto trait - there's no bound
- // that we could add to our ParamEnv that would 'fix' this kind
- // of error, as it's not caused by an unimplemented type.
- //
- // 2. We successfully project the predicate (Ok(Some(_))), generating
- // some subobligations. We then process these subobligations
- // like any other generated sub-obligations.
- //
- // 3. We receive an 'ambiguous' result (Ok(None)).
- // If we were actually trying to compile a crate,
- // we would need to re-process this obligation later.
- // However, all we care about is finding out what bounds
- // are needed for our type to implement a particular auto trait.
- // We've already added this obligation to our computed ParamEnv
- // above (if it was necessary). Therefore, we don't need
- // to do any further processing of the obligation.
- //
- // Note that we *must* try to project *all* projection predicates
- // we encounter, even ones without inference variables.
- // This ensures that we detect any projection errors,
- // which indicate that our type can *never* implement the given
- // auto trait. In that case, we will generate an explicit negative
- // impl (e.g. 'impl !Send for MyType'). However, we don't
- // try to process any of the generated subobligations -
- // they contain no new information, since we already know
- // that our type implements the projected-through trait,
- // and can lead to weird region issues.
- //
- // Normally, we'll generate a negative impl as a result of encountering
- // a type with an explicit negative impl of an auto trait
- // (for example, raw pointers have !Send and !Sync impls)
- // However, through some **interesting** manipulations of the type
- // system, it's actually possible to write a type that never
- // implements an auto trait due to a projection error, not a normal
- // negative impl error. To properly handle this case, we need
- // to ensure that we catch any potential projection errors,
- // and turn them into an explicit negative impl for our type.
- debug!("Projecting and unifying projection predicate {:?}", predicate);
-
- match poly_project_and_unify_type(select, &obligation.with(p)) {
- Err(e) => {
- debug!(
- "evaluate_nested_obligations: Unable to unify predicate \
- '{:?}' '{:?}', bailing out",
- ty, e
- );
- return false;
- }
- Ok(Some(v)) => {
- // We only care about sub-obligations
- // when we started out trying to unify
- // some inference variables. See the comment above
- // for more information.
- if p.ty().skip_binder().has_infer_types() {
- if !self.evaluate_nested_obligations(
- ty,
- v.clone().iter().cloned(),
- computed_preds,
- fresh_preds,
- predicates,
- select,
- only_projections,
- ) {
- return false;
- }
- }
- }
- Ok(None) => {
- // It's ok not to make progress when we have no inference variables -
- // in that case, we were only performing unification to check if an
- // error occurred (which would indicate that it's impossible for our
- // type to implement the auto trait).
- // However, we should always make progress (either by generating
- // subobligations or getting an error) when we started off with
- // inference variables
- if p.ty().skip_binder().has_infer_types() {
- panic!("Unexpected result when selecting {:?} {:?}", ty, obligation)
- }
- }
- }
- }
- &ty::Predicate::RegionOutlives(ref binder) => {
- if select.infcx().region_outlives_predicate(&dummy_cause, binder).is_err() {
- return false;
- }
- }
- &ty::Predicate::TypeOutlives(ref binder) => {
- match (
- binder.no_bound_vars(),
- binder.map_bound_ref(|pred| pred.0).no_bound_vars(),
- ) {
- (None, Some(t_a)) => {
- select.infcx().register_region_obligation_with_cause(
- t_a,
- select.infcx().tcx.lifetimes.re_static,
- &dummy_cause,
- );
- }
- (Some(ty::OutlivesPredicate(t_a, r_b)), _) => {
- select.infcx().register_region_obligation_with_cause(
- t_a,
- r_b,
- &dummy_cause,
- );
- }
- _ => {}
- };
- }
- _ => panic!("Unexpected predicate {:?} {:?}", ty, predicate),
- };
- }
- return true;
- }
-
- pub fn clean_pred(
- &self,
- infcx: &InferCtxt<'_, 'tcx>,
- p: ty::Predicate<'tcx>,
- ) -> ty::Predicate<'tcx> {
- infcx.freshen(p)
- }
-}
-
- // Replaces all `ReVar`s in a type with `ty::Region`s, using the provided map.
-pub struct RegionReplacer<'a, 'tcx> {
- vid_to_region: &'a FxHashMap<ty::RegionVid, ty::Region<'tcx>>,
- tcx: TyCtxt<'tcx>,
-}
-
-impl<'a, 'tcx> TypeFolder<'tcx> for RegionReplacer<'a, 'tcx> {
- fn tcx<'b>(&'b self) -> TyCtxt<'tcx> {
- self.tcx
- }
-
- fn fold_region(&mut self, r: ty::Region<'tcx>) -> ty::Region<'tcx> {
- (match r {
- &ty::ReVar(vid) => self.vid_to_region.get(&vid).cloned(),
- _ => None,
- })
- .unwrap_or_else(|| r.super_fold_with(self))
- }
-}
+++ /dev/null
-use crate::infer::canonical::OriginalQueryValues;
-use crate::infer::InferCtxt;
-use crate::traits::query::NoSolution;
-use crate::traits::{
- Environment, FulfillmentError, FulfillmentErrorCode, InEnvironment, ObligationCause,
- PredicateObligation, SelectionError, TraitEngine,
-};
-use rustc::ty::{self, Ty};
-use rustc_data_structures::fx::FxHashSet;
-
-pub use rustc::traits::ChalkCanonicalGoal as CanonicalGoal;
-
-pub struct FulfillmentContext<'tcx> {
- obligations: FxHashSet<InEnvironment<'tcx, PredicateObligation<'tcx>>>,
-}
-
-impl FulfillmentContext<'tcx> {
- crate fn new() -> Self {
- FulfillmentContext { obligations: FxHashSet::default() }
- }
-}
-
-fn in_environment(
- infcx: &InferCtxt<'_, 'tcx>,
- obligation: PredicateObligation<'tcx>,
-) -> InEnvironment<'tcx, PredicateObligation<'tcx>> {
- assert!(!infcx.is_in_snapshot());
- let obligation = infcx.resolve_vars_if_possible(&obligation);
-
- let environment = match obligation.param_env.def_id {
- Some(def_id) => infcx.tcx.environment(def_id),
- None if obligation.param_env.caller_bounds.is_empty() => {
- Environment { clauses: ty::List::empty() }
- }
- _ => bug!("non-empty `ParamEnv` with no def-id"),
- };
-
- InEnvironment { environment, goal: obligation }
-}
-
-impl TraitEngine<'tcx> for FulfillmentContext<'tcx> {
- fn normalize_projection_type(
- &mut self,
- infcx: &InferCtxt<'_, 'tcx>,
- _param_env: ty::ParamEnv<'tcx>,
- projection_ty: ty::ProjectionTy<'tcx>,
- _cause: ObligationCause<'tcx>,
- ) -> Ty<'tcx> {
- infcx.tcx.mk_ty(ty::Projection(projection_ty))
- }
-
- fn register_predicate_obligation(
- &mut self,
- infcx: &InferCtxt<'_, 'tcx>,
- obligation: PredicateObligation<'tcx>,
- ) {
- self.obligations.insert(in_environment(infcx, obligation));
- }
-
- fn select_all_or_error(
- &mut self,
- infcx: &InferCtxt<'_, 'tcx>,
- ) -> Result<(), Vec<FulfillmentError<'tcx>>> {
- self.select_where_possible(infcx)?;
-
- if self.obligations.is_empty() {
- Ok(())
- } else {
- let errors = self
- .obligations
- .iter()
- .map(|obligation| FulfillmentError {
- obligation: obligation.goal.clone(),
- code: FulfillmentErrorCode::CodeAmbiguity,
- points_at_arg_span: false,
- })
- .collect();
- Err(errors)
- }
- }
-
- fn select_where_possible(
- &mut self,
- infcx: &InferCtxt<'_, 'tcx>,
- ) -> Result<(), Vec<FulfillmentError<'tcx>>> {
- let mut errors = Vec::new();
- let mut next_round = FxHashSet::default();
- let mut making_progress;
-
- loop {
- making_progress = false;
-
- // We iterate over all obligations, and record if we are able
- // to unambiguously prove at least one obligation.
- for obligation in self.obligations.drain() {
- let mut orig_values = OriginalQueryValues::default();
- let canonical_goal = infcx.canonicalize_query(
- &InEnvironment {
- environment: obligation.environment,
- goal: obligation.goal.predicate,
- },
- &mut orig_values,
- );
-
- match infcx.tcx.evaluate_goal(canonical_goal) {
- Ok(response) => {
- if response.is_proven() {
- making_progress = true;
-
- match infcx.instantiate_query_response_and_region_obligations(
- &obligation.goal.cause,
- obligation.goal.param_env,
- &orig_values,
- &response,
- ) {
- Ok(infer_ok) => next_round.extend(
- infer_ok
- .obligations
- .into_iter()
- .map(|obligation| in_environment(infcx, obligation)),
- ),
-
- Err(_err) => errors.push(FulfillmentError {
- obligation: obligation.goal,
- code: FulfillmentErrorCode::CodeSelectionError(
- SelectionError::Unimplemented,
- ),
- points_at_arg_span: false,
- }),
- }
- } else {
- // Ambiguous: retry at next round.
- next_round.insert(obligation);
- }
- }
-
- Err(NoSolution) => errors.push(FulfillmentError {
- obligation: obligation.goal,
- code: FulfillmentErrorCode::CodeSelectionError(
- SelectionError::Unimplemented,
- ),
- points_at_arg_span: false,
- }),
- }
- }
- next_round = std::mem::replace(&mut self.obligations, next_round);
-
- if !making_progress {
- break;
- }
- }
-
- if errors.is_empty() { Ok(()) } else { Err(errors) }
- }
-
- fn pending_obligations(&self) -> Vec<PredicateObligation<'tcx>> {
- self.obligations.iter().map(|obligation| obligation.goal.clone()).collect()
- }
-}
+++ /dev/null
-// This file contains various trait resolution methods used by codegen.
-// They all assume regions can be erased and monomorphic types. It
-// seems likely that they should eventually be merged into more
-// general routines.
-
-use crate::infer::{InferCtxt, TyCtxtInferExt};
-use crate::traits::{
- FulfillmentContext, Obligation, ObligationCause, SelectionContext, TraitEngine, Vtable,
-};
-use rustc::ty::fold::TypeFoldable;
-use rustc::ty::{self, TyCtxt};
-
-/// Attempts to resolve an obligation to a vtable. The result is
-/// a shallow vtable resolution, meaning that we do not
-/// (necessarily) resolve all nested obligations on the impl. Note
-/// that type check should guarantee to us that all nested
-/// obligations *could be* resolved if we wanted to.
-/// Assumes that this is run after the entire crate has been successfully type-checked.
-pub fn codegen_fulfill_obligation<'tcx>(
- ty: TyCtxt<'tcx>,
- (param_env, trait_ref): (ty::ParamEnv<'tcx>, ty::PolyTraitRef<'tcx>),
-) -> Vtable<'tcx, ()> {
- // Remove any references to regions; this helps improve caching.
- let trait_ref = ty.erase_regions(&trait_ref);
-
- debug!(
- "codegen_fulfill_obligation(trait_ref={:?}, def_id={:?})",
- (param_env, trait_ref),
- trait_ref.def_id()
- );
-
- // Do the initial selection for the obligation. This yields the
- // shallow result we are looking for -- that is, what specific impl.
- ty.infer_ctxt().enter(|infcx| {
- let mut selcx = SelectionContext::new(&infcx);
-
- let obligation_cause = ObligationCause::dummy();
- let obligation =
- Obligation::new(obligation_cause, param_env, trait_ref.to_poly_trait_predicate());
-
- let selection = match selcx.select(&obligation) {
- Ok(Some(selection)) => selection,
- Ok(None) => {
- // Ambiguity can happen when monomorphizing during trans
- // expands to some humongo type that never occurred
- // statically -- this humongo type can then overflow,
- // leading to an ambiguous result. So report this as an
- // overflow bug, since I believe this is the only case
- // where ambiguity can result.
- bug!(
- "Encountered ambiguity selecting `{:?}` during codegen, \
- presuming due to overflow",
- trait_ref
- )
- }
- Err(e) => {
- bug!("Encountered error `{:?}` selecting `{:?}` during codegen", e, trait_ref)
- }
- };
-
- debug!("fulfill_obligation: selection={:?}", selection);
-
- // Currently, we use a fulfillment context to completely resolve
- // all nested obligations. This is because they can inform the
- // inference of the impl's type parameters.
- let mut fulfill_cx = FulfillmentContext::new();
- let vtable = selection.map(|predicate| {
- debug!("fulfill_obligation: register_predicate_obligation {:?}", predicate);
- fulfill_cx.register_predicate_obligation(&infcx, predicate);
- });
- let vtable = infcx.drain_fulfillment_cx_or_panic(&mut fulfill_cx, &vtable);
-
- info!("Cache miss: {:?} => {:?}", trait_ref, vtable);
- vtable
- })
-}
-
-// # Global Cache
-
-impl<'a, 'tcx> InferCtxt<'a, 'tcx> {
- /// Finishes processing any obligations that remain in the
- /// fulfillment context, and then returns the result with all type
- /// variables removed and regions erased. Because this is intended
- /// for use after type-check has completed, if any errors occur,
- /// it will panic. It is used during normalization and other cases
- /// where processing the obligations in `fulfill_cx` may cause
- /// type inference variables that appear in `result` to be
- /// unified, and hence we need to process those obligations to get
- /// the complete picture of the type.
- fn drain_fulfillment_cx_or_panic<T>(
- &self,
- fulfill_cx: &mut FulfillmentContext<'tcx>,
- result: &T,
- ) -> T
- where
- T: TypeFoldable<'tcx>,
- {
- debug!("drain_fulfillment_cx_or_panic()");
-
- // In principle, we only need to do this so long as `result`
- // contains unbound type parameters. It could be a slight
- // optimization to stop iterating early.
- if let Err(errors) = fulfill_cx.select_all_or_error(self) {
- bug!("Encountered errors `{:?}` resolving bounds after type-checking", errors);
- }
-
- let result = self.resolve_vars_if_possible(result);
- self.tcx.erase_regions(&result)
- }
-}
+++ /dev/null
-//! See Rustc Guide chapters on [trait-resolution] and [trait-specialization] for more info on how
-//! this works.
-//!
-//! [trait-resolution]: https://rust-lang.github.io/rustc-guide/traits/resolution.html
-//! [trait-specialization]: https://rust-lang.github.io/rustc-guide/traits/specialization.html
-
-use crate::infer::{CombinedSnapshot, InferOk, TyCtxtInferExt};
-use crate::traits::select::IntercrateAmbiguityCause;
-use crate::traits::SkipLeakCheck;
-use crate::traits::{self, Normalized, Obligation, ObligationCause, SelectionContext};
-use rustc::ty::fold::TypeFoldable;
-use rustc::ty::subst::Subst;
-use rustc::ty::{self, Ty, TyCtxt};
-use rustc_hir::def_id::{DefId, LOCAL_CRATE};
-use rustc_span::symbol::sym;
-use rustc_span::DUMMY_SP;
-
-/// Whether we do the orphan check relative to this crate or
-/// to some remote crate.
-#[derive(Copy, Clone, Debug)]
-enum InCrate {
- Local,
- Remote,
-}
-
-#[derive(Debug, Copy, Clone)]
-pub enum Conflict {
- Upstream,
- Downstream,
-}
-
-pub struct OverlapResult<'tcx> {
- pub impl_header: ty::ImplHeader<'tcx>,
- pub intercrate_ambiguity_causes: Vec<IntercrateAmbiguityCause>,
-
- /// `true` if the overlap might've been permitted before the shift
- /// to universes.
- pub involves_placeholder: bool,
-}
-
-pub fn add_placeholder_note(err: &mut rustc_errors::DiagnosticBuilder<'_>) {
- err.note(
- "this behavior recently changed as a result of a bug fix; \
- see rust-lang/rust#56105 for details",
- );
-}
-
-/// If there are types that satisfy both impls, invokes `on_overlap`
-/// with a suitably-freshened `ImplHeader` with those types
-/// substituted. Otherwise, invokes `no_overlap`.
-pub fn overlapping_impls<F1, F2, R>(
- tcx: TyCtxt<'_>,
- impl1_def_id: DefId,
- impl2_def_id: DefId,
- skip_leak_check: SkipLeakCheck,
- on_overlap: F1,
- no_overlap: F2,
-) -> R
-where
- F1: FnOnce(OverlapResult<'_>) -> R,
- F2: FnOnce() -> R,
-{
- debug!(
- "overlapping_impls(\
- impl1_def_id={:?}, \
- impl2_def_id={:?})",
- impl1_def_id, impl2_def_id,
- );
-
- let overlaps = tcx.infer_ctxt().enter(|infcx| {
- let selcx = &mut SelectionContext::intercrate(&infcx);
- overlap(selcx, skip_leak_check, impl1_def_id, impl2_def_id).is_some()
- });
-
- if !overlaps {
- return no_overlap();
- }
-
- // In the case where we detect an error, run the check again, but
-    // this time tracking intercrate ambiguity causes for better
- // diagnostics. (These take time and can lead to false errors.)
- tcx.infer_ctxt().enter(|infcx| {
- let selcx = &mut SelectionContext::intercrate(&infcx);
- selcx.enable_tracking_intercrate_ambiguity_causes();
- on_overlap(overlap(selcx, skip_leak_check, impl1_def_id, impl2_def_id).unwrap())
- })
-}
-
-fn with_fresh_ty_vars<'cx, 'tcx>(
- selcx: &mut SelectionContext<'cx, 'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- impl_def_id: DefId,
-) -> ty::ImplHeader<'tcx> {
- let tcx = selcx.tcx();
- let impl_substs = selcx.infcx().fresh_substs_for_item(DUMMY_SP, impl_def_id);
-
- let header = ty::ImplHeader {
- impl_def_id,
- self_ty: tcx.type_of(impl_def_id).subst(tcx, impl_substs),
- trait_ref: tcx.impl_trait_ref(impl_def_id).subst(tcx, impl_substs),
- predicates: tcx.predicates_of(impl_def_id).instantiate(tcx, impl_substs).predicates,
- };
-
- let Normalized { value: mut header, obligations } =
- traits::normalize(selcx, param_env, ObligationCause::dummy(), &header);
-
- header.predicates.extend(obligations.into_iter().map(|o| o.predicate));
- header
-}
-
-/// Can both impl `a` and impl `b` be satisfied by a common type (including
-/// where-clauses)? If so, returns an `ImplHeader` that unifies the two impls.
-fn overlap<'cx, 'tcx>(
- selcx: &mut SelectionContext<'cx, 'tcx>,
- skip_leak_check: SkipLeakCheck,
- a_def_id: DefId,
- b_def_id: DefId,
-) -> Option<OverlapResult<'tcx>> {
- debug!("overlap(a_def_id={:?}, b_def_id={:?})", a_def_id, b_def_id);
-
- selcx.infcx().probe_maybe_skip_leak_check(skip_leak_check.is_yes(), |snapshot| {
- overlap_within_probe(selcx, a_def_id, b_def_id, snapshot)
- })
-}
-
-fn overlap_within_probe(
- selcx: &mut SelectionContext<'cx, 'tcx>,
- a_def_id: DefId,
- b_def_id: DefId,
- snapshot: &CombinedSnapshot<'_, 'tcx>,
-) -> Option<OverlapResult<'tcx>> {
- // For the purposes of this check, we don't bring any placeholder
- // types into scope; instead, we replace the generic types with
- // fresh type variables, and hence we do our evaluations in an
- // empty environment.
- let param_env = ty::ParamEnv::empty();
-
- let a_impl_header = with_fresh_ty_vars(selcx, param_env, a_def_id);
- let b_impl_header = with_fresh_ty_vars(selcx, param_env, b_def_id);
-
- debug!("overlap: a_impl_header={:?}", a_impl_header);
- debug!("overlap: b_impl_header={:?}", b_impl_header);
-
- // Do `a` and `b` unify? If not, no overlap.
- let obligations = match selcx
- .infcx()
- .at(&ObligationCause::dummy(), param_env)
- .eq_impl_headers(&a_impl_header, &b_impl_header)
- {
- Ok(InferOk { obligations, value: () }) => obligations,
- Err(_) => {
- return None;
- }
- };
-
- debug!("overlap: unification check succeeded");
-
- // Are any of the obligations unsatisfiable? If so, no overlap.
- let infcx = selcx.infcx();
- let opt_failing_obligation = a_impl_header
- .predicates
- .iter()
- .chain(&b_impl_header.predicates)
- .map(|p| infcx.resolve_vars_if_possible(p))
- .map(|p| Obligation {
- cause: ObligationCause::dummy(),
- param_env,
- recursion_depth: 0,
- predicate: p,
- })
- .chain(obligations)
- .find(|o| !selcx.predicate_may_hold_fatal(o));
- // FIXME: the call to `selcx.predicate_may_hold_fatal` above should be ported
- // to the canonical trait query form, `infcx.predicate_may_hold`, once
- // the new system supports intercrate mode (which coherence needs).
-
- if let Some(failing_obligation) = opt_failing_obligation {
- debug!("overlap: obligation unsatisfiable {:?}", failing_obligation);
- return None;
- }
-
- let impl_header = selcx.infcx().resolve_vars_if_possible(&a_impl_header);
- let intercrate_ambiguity_causes = selcx.take_intercrate_ambiguity_causes();
- debug!("overlap: intercrate_ambiguity_causes={:#?}", intercrate_ambiguity_causes);
-
- let involves_placeholder = match selcx.infcx().region_constraints_added_in_snapshot(snapshot) {
- Some(true) => true,
- _ => false,
- };
-
- Some(OverlapResult { impl_header, intercrate_ambiguity_causes, involves_placeholder })
-}
-
-pub fn trait_ref_is_knowable<'tcx>(
- tcx: TyCtxt<'tcx>,
- trait_ref: ty::TraitRef<'tcx>,
-) -> Option<Conflict> {
- debug!("trait_ref_is_knowable(trait_ref={:?})", trait_ref);
- if orphan_check_trait_ref(tcx, trait_ref, InCrate::Remote).is_ok() {
- // A downstream or cousin crate is allowed to implement some
- // substitution of this trait-ref.
- return Some(Conflict::Downstream);
- }
-
- if trait_ref_is_local_or_fundamental(tcx, trait_ref) {
- // This is a local or fundamental trait, so future-compatibility
- // is no concern. We know that downstream/cousin crates are not
- // allowed to implement a substitution of this trait ref, which
- // means impls could only come from dependencies of this crate,
- // which we already know about.
- return None;
- }
-
- // This is a remote non-fundamental trait, so if another crate
- // can be the "final owner" of a substitution of this trait-ref,
- // they are allowed to implement it future-compatibly.
- //
- // However, if we are a final owner, then nobody else can be,
- // and if we are an intermediate owner, then we don't care
- // about future-compatibility, which means that we're OK if
- // we are an owner.
- if orphan_check_trait_ref(tcx, trait_ref, InCrate::Local).is_ok() {
- debug!("trait_ref_is_knowable: orphan check passed");
- return None;
- } else {
- debug!("trait_ref_is_knowable: nonlocal, nonfundamental, unowned");
- return Some(Conflict::Upstream);
- }
-}
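The three-way decision above can be condensed into a tiny stand-alone sketch. The helper name `knowable` and its boolean inputs are hypothetical, not rustc's API: the real function derives them from `orphan_check_trait_ref` and the trait-ref itself, and this model only captures the control flow.

```rust
// A minimal model of the `trait_ref_is_knowable` decision (hypothetical names).
#[derive(Debug, PartialEq)]
enum Conflict {
    Upstream,
    Downstream,
}

/// `remote_ok`: a downstream/cousin crate could pass the orphan check.
/// `local_or_fundamental`: the trait is local or `#[fundamental]`.
/// `local_ok`: this crate passes the orphan check (it is an "owner").
fn knowable(remote_ok: bool, local_or_fundamental: bool, local_ok: bool) -> Option<Conflict> {
    if remote_ok {
        // Someone downstream may implement a substitution of this trait-ref.
        return Some(Conflict::Downstream);
    }
    if local_or_fundamental {
        // Future-compatibility is no concern for local/fundamental traits.
        return None;
    }
    // Remote, non-fundamental: fine only if we are an owner ourselves.
    if local_ok { None } else { Some(Conflict::Upstream) }
}

fn main() {
    assert_eq!(knowable(true, false, false), Some(Conflict::Downstream));
    assert_eq!(knowable(false, true, false), None);
    assert_eq!(knowable(false, false, false), Some(Conflict::Upstream));
}
```

Note that the downstream check comes first: even a locally-owned trait-ref reports `Downstream` if a remote crate could also own a substitution of it.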
-
-pub fn trait_ref_is_local_or_fundamental<'tcx>(
- tcx: TyCtxt<'tcx>,
- trait_ref: ty::TraitRef<'tcx>,
-) -> bool {
- trait_ref.def_id.krate == LOCAL_CRATE || tcx.has_attr(trait_ref.def_id, sym::fundamental)
-}
-
-pub enum OrphanCheckErr<'tcx> {
- NonLocalInputType(Vec<(Ty<'tcx>, bool /* Is this the first input type? */)>),
- UncoveredTy(Ty<'tcx>, Option<Ty<'tcx>>),
-}
-
-/// Checks the coherence orphan rules. `impl_def_id` should be the
-/// `DefId` of a trait impl. To pass, either the trait must be local, or else
-/// two conditions must be satisfied:
-///
-/// 1. All type parameters in `Self` must be "covered" by some local type constructor.
-/// 2. Some local type must appear in `Self`.
-pub fn orphan_check(tcx: TyCtxt<'_>, impl_def_id: DefId) -> Result<(), OrphanCheckErr<'_>> {
- debug!("orphan_check({:?})", impl_def_id);
-
-    // We only expect this routine to be invoked on implementations
- // of a trait, not inherent implementations.
- let trait_ref = tcx.impl_trait_ref(impl_def_id).unwrap();
- debug!("orphan_check: trait_ref={:?}", trait_ref);
-
- // If the *trait* is local to the crate, ok.
- if trait_ref.def_id.is_local() {
- debug!("trait {:?} is local to current crate", trait_ref.def_id);
- return Ok(());
- }
-
- orphan_check_trait_ref(tcx, trait_ref, InCrate::Local)
-}
-
-/// Checks whether a trait-ref is potentially implementable by a crate.
-///
-/// The current rule is that a trait-ref orphan checks in a crate C:
-///
-/// 1. Order the parameters in the trait-ref in subst order - Self first,
-/// others linearly (e.g., `<U as Foo<V, W>>` is U < V < W).
-/// 2. Of these type parameters, there is at least one type parameter
-/// in which, walking the type as a tree, you can reach a type local
-/// to C where all types in-between are fundamental types. Call the
-/// first such parameter the "local key parameter".
-/// - e.g., `Box<LocalType>` is OK, because you can visit LocalType
-/// going through `Box`, which is fundamental.
-/// - similarly, `FundamentalPair<Vec<()>, Box<LocalType>>` is OK for
-/// the same reason.
-/// - but (knowing that `Vec<T>` is non-fundamental, and assuming it's
-/// not local), `Vec<LocalType>` is bad, because `Vec<->` is between
-/// the local type and the type parameter.
-/// 3. Every type parameter before the local key parameter is fully known in C.
-/// - e.g., `impl<T> T: Trait<LocalType>` is bad, because `T` might be
-/// an unknown type.
-/// - but `impl<T> LocalType: Trait<T>` is OK, because `LocalType`
-/// occurs before `T`.
-/// 4. Every type in the local key parameter not known in C, going
-/// through the parameter's type tree, must appear only as a subtree of
-/// a type local to C, with only fundamental types between the type
-/// local to C and the local key parameter.
-/// - e.g., `Vec<LocalType<T>>>` (or equivalently `Box<Vec<LocalType<T>>>`)
-/// is bad, because the only local type with `T` as a subtree is
-/// `LocalType<T>`, and `Vec<->` is between it and the type parameter.
-/// - similarly, `FundamentalPair<LocalType<T>, T>` is bad, because
-/// the second occurrence of `T` is not a subtree of *any* local type.
-/// - however, `LocalType<Vec<T>>` is OK, because `T` is a subtree of
-/// `LocalType<Vec<T>>`, which is local and has no types between it and
-/// the type parameter.
-///
-/// The orphan rules actually serve several different purposes:
-///
-/// 1. They enable link-safety - i.e., 2 mutually-unknowing crates (where
-/// every type local to one crate is unknown in the other) can't implement
-/// the same trait-ref. This follows because it can be seen that no such
-/// type can orphan-check in 2 such crates.
-///
-/// To check that a local impl follows the orphan rules, we check it in
-/// InCrate::Local mode, using type parameters for the "generic" types.
-///
-/// 2. They ground negative reasoning for coherence. If a user wants to
-/// write both a conditional blanket impl and a specific impl, we need to
-/// make sure they do not overlap. For example, if we write
-/// ```
-/// impl<T> IntoIterator for Vec<T>
-/// impl<T: Iterator> IntoIterator for T
-/// ```
-/// We need to be able to prove that `Vec<$0>: !Iterator` for every type $0.
-/// We can observe that this holds in the current crate, but we need to make
-/// sure this will also hold in all unknown crates (both "independent" crates,
-/// which we need for link-safety, and also child crates, because we don't want
-/// child crates to get errors for impl conflicts in a *dependency*).
-///
-/// For that, we only allow negative reasoning if, for every assignment to the
-/// inference variables, every unknown crate would get an orphan error if they
-/// try to implement this trait-ref. To check for this, we use InCrate::Remote
-/// mode. That is sound because we already know all the impls from known crates.
-///
-/// 3. For non-#[fundamental] traits, they guarantee that parent crates can
-/// add "non-blanket" impls without breaking negative reasoning in dependent
-/// crates. This is the "rebalancing coherence" (RFC 1023) restriction.
-///
-/// For that, we allow a crate to perform negative reasoning on
-/// non-local, non-#[fundamental] trait-refs only if there's a local key parameter as per (2).
-///
-/// Because we never perform negative reasoning generically (coherence does
-/// not involve type parameters), this can be interpreted as doing the full
-/// orphan check (using InCrate::Local mode), substituting non-local known
-/// types for all inference variables.
-///
-/// This allows for crates to future-compatibly add impls as long as they
-/// can't apply to types with a key parameter in a child crate - applying
-/// the rules, this basically means that every type parameter in the impl
-/// must appear behind a non-fundamental type (because this is not a
-/// type-system requirement, crate owners might also go for "semantic
-/// future-compatibility" involving things such as sealed traits, but
-/// the above requirement is sufficient, and is necessary in "open world"
-/// cases).
-///
-/// Note that this function is never called for types that have both type
-/// parameters and inference variables.
-fn orphan_check_trait_ref<'tcx>(
- tcx: TyCtxt<'tcx>,
- trait_ref: ty::TraitRef<'tcx>,
- in_crate: InCrate,
-) -> Result<(), OrphanCheckErr<'tcx>> {
- debug!("orphan_check_trait_ref(trait_ref={:?}, in_crate={:?})", trait_ref, in_crate);
-
- if trait_ref.needs_infer() && trait_ref.needs_subst() {
- bug!(
- "can't orphan check a trait ref with both params and inference variables {:?}",
- trait_ref
- );
- }
-
- // Given impl<P1..=Pn> Trait<T1..=Tn> for T0, an impl is valid only
- // if at least one of the following is true:
- //
- // - Trait is a local trait
- // (already checked in orphan_check prior to calling this function)
- // - All of
- // - At least one of the types T0..=Tn must be a local type.
- // Let Ti be the first such type.
- // - No uncovered type parameters P1..=Pn may appear in T0..Ti (excluding Ti)
- //
- fn uncover_fundamental_ty<'tcx>(
- tcx: TyCtxt<'tcx>,
- ty: Ty<'tcx>,
- in_crate: InCrate,
- ) -> Vec<Ty<'tcx>> {
- if fundamental_ty(ty) && ty_is_non_local(ty, in_crate).is_some() {
- ty.walk_shallow().flat_map(|ty| uncover_fundamental_ty(tcx, ty, in_crate)).collect()
- } else {
- vec![ty]
- }
- }
-
- let mut non_local_spans = vec![];
- for (i, input_ty) in
- trait_ref.input_types().flat_map(|ty| uncover_fundamental_ty(tcx, ty, in_crate)).enumerate()
- {
- debug!("orphan_check_trait_ref: check ty `{:?}`", input_ty);
- let non_local_tys = ty_is_non_local(input_ty, in_crate);
- if non_local_tys.is_none() {
- debug!("orphan_check_trait_ref: ty_is_local `{:?}`", input_ty);
- return Ok(());
- } else if let ty::Param(_) = input_ty.kind {
- debug!("orphan_check_trait_ref: uncovered ty: `{:?}`", input_ty);
- let local_type = trait_ref
- .input_types()
- .flat_map(|ty| uncover_fundamental_ty(tcx, ty, in_crate))
- .find(|ty| ty_is_non_local_constructor(ty, in_crate).is_none());
-
- debug!("orphan_check_trait_ref: uncovered ty local_type: `{:?}`", local_type);
-
- return Err(OrphanCheckErr::UncoveredTy(input_ty, local_type));
- }
- if let Some(non_local_tys) = non_local_tys {
- for input_ty in non_local_tys {
- non_local_spans.push((input_ty, i == 0));
- }
- }
- }
-    // If we exit the above loop, we never found a local type.
- debug!("orphan_check_trait_ref: no local type");
- Err(OrphanCheckErr::NonLocalInputType(non_local_spans))
-}
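The doc comment above explains the rule through `Box`/`Vec` examples; here is a minimal toy model of the same walk. The `Ty` enum and helper names are invented for illustration and are not the compiler's types: fundamental wrappers are flattened first, then the first local type or uncovered parameter decides the result.

```rust
// A toy model of the orphan-check walk (hypothetical types, not rustc's).
#[derive(Debug)]
enum Ty {
    Local(&'static str),           // a type defined in the current crate
    Remote(&'static str, Vec<Ty>), // a non-fundamental upstream constructor
    Fundamental(Box<Ty>),          // e.g. `Box<T>` or `&T`
    Param(&'static str),           // an uncovered type parameter
}

// Mirror `uncover_fundamental_ty`: look through fundamental wrappers,
// so `Box<Vec<T>>` is checked as `Vec<T>`, but `Vec<T>` is left alone.
fn uncover(ty: &Ty) -> Vec<&Ty> {
    match ty {
        Ty::Fundamental(inner) => uncover(inner),
        other => vec![other],
    }
}

// Accept if some flattened input type is local before any uncovered
// type parameter is reached; otherwise report why the check failed.
fn orphan_check(inputs: &[Ty]) -> Result<(), String> {
    for input in inputs.iter().flat_map(uncover) {
        match input {
            Ty::Local(_) => return Ok(()),
            Ty::Param(p) => return Err(format!("uncovered type parameter `{}`", p)),
            _ => {}
        }
    }
    Err("no local type".to_string())
}

fn main() {
    // `Box<LocalType>` passes: the local type sits under a fundamental wrapper.
    assert!(orphan_check(&[Ty::Fundamental(Box::new(Ty::Local("LocalType")))]).is_ok());
    // `Vec<LocalType>` fails: `Vec` is a non-fundamental remote constructor.
    assert!(orphan_check(&[Ty::Remote("Vec", vec![Ty::Local("LocalType")])]).is_err());
    // A parameter appearing before any local type is uncovered (rule 3).
    assert!(orphan_check(&[Ty::Param("T"), Ty::Local("LocalType")]).is_err());
}
```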
-
-fn ty_is_non_local<'t>(ty: Ty<'t>, in_crate: InCrate) -> Option<Vec<Ty<'t>>> {
- match ty_is_non_local_constructor(ty, in_crate) {
- Some(ty) => {
- if !fundamental_ty(ty) {
- Some(vec![ty])
- } else {
- let tys: Vec<_> = ty
- .walk_shallow()
- .filter_map(|t| ty_is_non_local(t, in_crate))
- .flat_map(|i| i)
- .collect();
- if tys.is_empty() { None } else { Some(tys) }
- }
- }
- None => None,
- }
-}
-
-fn fundamental_ty(ty: Ty<'_>) -> bool {
- match ty.kind {
- ty::Ref(..) => true,
- ty::Adt(def, _) => def.is_fundamental(),
- _ => false,
- }
-}
-
-fn def_id_is_local(def_id: DefId, in_crate: InCrate) -> bool {
- match in_crate {
- // The type is local to *this* crate - it will not be
- // local in any other crate.
- InCrate::Remote => false,
- InCrate::Local => def_id.is_local(),
- }
-}
-
-fn ty_is_non_local_constructor<'tcx>(ty: Ty<'tcx>, in_crate: InCrate) -> Option<Ty<'tcx>> {
- debug!("ty_is_non_local_constructor({:?})", ty);
-
- match ty.kind {
- ty::Bool
- | ty::Char
- | ty::Int(..)
- | ty::Uint(..)
- | ty::Float(..)
- | ty::Str
- | ty::FnDef(..)
- | ty::FnPtr(_)
- | ty::Array(..)
- | ty::Slice(..)
- | ty::RawPtr(..)
- | ty::Ref(..)
- | ty::Never
- | ty::Tuple(..)
- | ty::Param(..)
- | ty::Projection(..) => Some(ty),
-
- ty::Placeholder(..) | ty::Bound(..) | ty::Infer(..) => match in_crate {
- InCrate::Local => Some(ty),
- // The inference variable might be unified with a local
- // type in that remote crate.
- InCrate::Remote => None,
- },
-
- ty::Adt(def, _) => {
- if def_id_is_local(def.did, in_crate) {
- None
- } else {
- Some(ty)
- }
- }
- ty::Foreign(did) => {
- if def_id_is_local(did, in_crate) {
- None
- } else {
- Some(ty)
- }
- }
- ty::Opaque(..) => {
- // This merits some explanation.
-            // Normally, opaque types are not involved when performing
- // coherence checking, since it is illegal to directly
- // implement a trait on an opaque type. However, we might
- // end up looking at an opaque type during coherence checking
- // if an opaque type gets used within another type (e.g. as
- // a type parameter). This requires us to decide whether or
- // not an opaque type should be considered 'local' or not.
- //
- // We choose to treat all opaque types as non-local, even
- // those that appear within the same crate. This seems
-            // somewhat surprising at first, but makes sense when
- // you consider that opaque types are supposed to hide
- // the underlying type *within the same crate*. When an
- // opaque type is used from outside the module
- // where it is declared, it should be impossible to observe
-            // anything about it other than the traits that it implements.
- //
- // The alternative would be to look at the underlying type
- // to determine whether or not the opaque type itself should
- // be considered local. However, this could make it a breaking change
- // to switch the underlying ('defining') type from a local type
- // to a remote type. This would violate the rule that opaque
- // types should be completely opaque apart from the traits
- // that they implement, so we don't use this behavior.
- Some(ty)
- }
-
- ty::Dynamic(ref tt, ..) => {
- if let Some(principal) = tt.principal() {
- if def_id_is_local(principal.def_id(), in_crate) { None } else { Some(ty) }
- } else {
- Some(ty)
- }
- }
-
- ty::Error => None,
-
- ty::UnnormalizedProjection(..)
- | ty::Closure(..)
- | ty::Generator(..)
- | ty::GeneratorWitness(..) => bug!("ty_is_local invoked on unexpected type: {:?}", ty),
- }
-}
use crate::infer::InferCtxt;
use crate::traits::Obligation;
-use rustc::ty::{self, ToPredicate, Ty, TyCtxt, WithConstness};
+use rustc::ty::{self, ToPredicate, Ty, WithConstness};
use rustc_hir::def_id::DefId;
-use super::{ChalkFulfillmentContext, FulfillmentContext, FulfillmentError};
+use super::FulfillmentError;
use super::{ObligationCause, PredicateObligation};
pub trait TraitEngine<'tcx>: 'tcx {
}
}
}
-
-impl dyn TraitEngine<'tcx> {
- pub fn new(tcx: TyCtxt<'tcx>) -> Box<Self> {
- if tcx.sess.opts.debugging_opts.chalk {
- Box::new(ChalkFulfillmentContext::new())
- } else {
- Box::new(FulfillmentContext::new())
- }
- }
-}
-pub mod on_unimplemented;
-pub mod suggestions;
+use super::ObjectSafetyViolation;
-use super::{
- ConstEvalFailure, EvaluationResult, FulfillmentError, FulfillmentErrorCode,
- MismatchedProjectionTypes, ObjectSafetyViolation, Obligation, ObligationCause,
- ObligationCauseCode, OnUnimplementedDirective, OnUnimplementedNote,
- OutputTypeParameterMismatch, Overflow, PredicateObligation, SelectionContext, SelectionError,
- TraitNotObjectSafe,
-};
-
-use crate::infer::error_reporting::{TyCategory, TypeAnnotationNeeded as ErrorCode};
-use crate::infer::type_variable::{TypeVariableOrigin, TypeVariableOriginKind};
-use crate::infer::{self, InferCtxt, TyCtxtInferExt};
-use rustc::mir::interpret::ErrorHandled;
-use rustc::session::DiagnosticMessageId;
-use rustc::ty::error::ExpectedFound;
-use rustc::ty::fast_reject;
-use rustc::ty::fold::TypeFolder;
-use rustc::ty::SubtypePredicate;
-use rustc::ty::{
- self, AdtKind, ToPolyTraitRef, ToPredicate, Ty, TyCtxt, TypeFoldable, WithConstness,
-};
+use crate::infer::InferCtxt;
+use rustc::ty::TyCtxt;
use rustc_ast::ast;
-use rustc_data_structures::fx::{FxHashMap, FxHashSet};
+use rustc_data_structures::fx::FxHashSet;
use rustc_errors::{struct_span_err, Applicability, DiagnosticBuilder};
use rustc_hir as hir;
-use rustc_hir::def_id::{DefId, LOCAL_CRATE};
-use rustc_hir::{QPath, TyKind, WhereBoundPredicate, WherePredicate};
-use rustc_span::source_map::SourceMap;
-use rustc_span::{ExpnKind, Span, DUMMY_SP};
+use rustc_hir::def_id::DefId;
+use rustc_span::Span;
use std::fmt;
impl<'a, 'tcx> InferCtxt<'a, 'tcx> {
- pub fn report_fulfillment_errors(
- &self,
- errors: &[FulfillmentError<'tcx>],
- body_id: Option<hir::BodyId>,
- fallback_has_occurred: bool,
- ) {
- #[derive(Debug)]
- struct ErrorDescriptor<'tcx> {
- predicate: ty::Predicate<'tcx>,
- index: Option<usize>, // None if this is an old error
- }
-
- let mut error_map: FxHashMap<_, Vec<_>> = self
- .reported_trait_errors
- .borrow()
- .iter()
- .map(|(&span, predicates)| {
- (
- span,
- predicates
- .iter()
- .map(|&predicate| ErrorDescriptor { predicate, index: None })
- .collect(),
- )
- })
- .collect();
-
- for (index, error) in errors.iter().enumerate() {
- // We want to ignore desugarings here: spans are equivalent even
- // if one is the result of a desugaring and the other is not.
- let mut span = error.obligation.cause.span;
- let expn_data = span.ctxt().outer_expn_data();
- if let ExpnKind::Desugaring(_) = expn_data.kind {
- span = expn_data.call_site;
- }
-
- error_map.entry(span).or_default().push(ErrorDescriptor {
- predicate: error.obligation.predicate,
- index: Some(index),
- });
-
- self.reported_trait_errors
- .borrow_mut()
- .entry(span)
- .or_default()
- .push(error.obligation.predicate.clone());
- }
-
- // We do this in 2 passes because we want to display errors in order, though
- // maybe it *is* better to sort errors by span or something.
- let mut is_suppressed = vec![false; errors.len()];
- for (_, error_set) in error_map.iter() {
- // We want to suppress "duplicate" errors with the same span.
- for error in error_set {
- if let Some(index) = error.index {
- // Suppress errors that are either:
- // 1) strictly implied by another error.
- // 2) implied by an error with a smaller index.
- for error2 in error_set {
- if error2.index.map_or(false, |index2| is_suppressed[index2]) {
- // Avoid errors being suppressed by already-suppressed
- // errors, to prevent all errors from being suppressed
- // at once.
- continue;
- }
-
- if self.error_implies(&error2.predicate, &error.predicate)
- && !(error2.index >= error.index
- && self.error_implies(&error.predicate, &error2.predicate))
- {
- info!("skipping {:?} (implied by {:?})", error, error2);
- is_suppressed[index] = true;
- break;
- }
- }
- }
- }
- }
-
- for (error, suppressed) in errors.iter().zip(is_suppressed) {
- if !suppressed {
- self.report_fulfillment_error(error, body_id, fallback_has_occurred);
- }
- }
- }
-
-    // Returns whether `cond` not occurring implies that `error` does not
-    // occur, i.e., that `error` occurring implies that `cond` occurs.
- fn error_implies(&self, cond: &ty::Predicate<'tcx>, error: &ty::Predicate<'tcx>) -> bool {
- if cond == error {
- return true;
- }
-
- let (cond, error) = match (cond, error) {
- (&ty::Predicate::Trait(..), &ty::Predicate::Trait(ref error, _)) => (cond, error),
- _ => {
- // FIXME: make this work in other cases too.
- return false;
- }
- };
-
- for implication in super::elaborate_predicates(self.tcx, vec![*cond]) {
- if let ty::Predicate::Trait(implication, _) = implication {
- let error = error.to_poly_trait_ref();
- let implication = implication.to_poly_trait_ref();
-                // FIXME: I'm not taking associated types into account at all here.
- // Eventually I'll need to implement param-env-aware
- // `Γ₁ ⊦ φ₁ => Γ₂ ⊦ φ₂` logic.
- let param_env = ty::ParamEnv::empty();
- if self.can_sub(param_env, error, implication).is_ok() {
- debug!("error_implies: {:?} -> {:?} -> {:?}", cond, error, implication);
- return true;
- }
- }
- }
-
- false
- }
-
- fn report_fulfillment_error(
- &self,
- error: &FulfillmentError<'tcx>,
- body_id: Option<hir::BodyId>,
- fallback_has_occurred: bool,
- ) {
- debug!("report_fulfillment_error({:?})", error);
- match error.code {
- FulfillmentErrorCode::CodeSelectionError(ref selection_error) => {
- self.report_selection_error(
- &error.obligation,
- selection_error,
- fallback_has_occurred,
- error.points_at_arg_span,
- );
- }
- FulfillmentErrorCode::CodeProjectionError(ref e) => {
- self.report_projection_error(&error.obligation, e);
- }
- FulfillmentErrorCode::CodeAmbiguity => {
- self.maybe_report_ambiguity(&error.obligation, body_id);
- }
- FulfillmentErrorCode::CodeSubtypeError(ref expected_found, ref err) => {
- self.report_mismatched_types(
- &error.obligation.cause,
- expected_found.expected,
- expected_found.found,
- err.clone(),
- )
- .emit();
- }
- }
- }
-
- fn report_projection_error(
- &self,
- obligation: &PredicateObligation<'tcx>,
- error: &MismatchedProjectionTypes<'tcx>,
- ) {
- let predicate = self.resolve_vars_if_possible(&obligation.predicate);
-
- if predicate.references_error() {
- return;
- }
-
- self.probe(|_| {
- let err_buf;
- let mut err = &error.err;
- let mut values = None;
-
-            // Try to find the mismatched types to report the error with.
-            //
-            // This can fail if the problem was higher-ranked, in which
-            // case I have no idea for a good error message.
- if let ty::Predicate::Projection(ref data) = predicate {
- let mut selcx = SelectionContext::new(self);
- let (data, _) = self.replace_bound_vars_with_fresh_vars(
- obligation.cause.span,
- infer::LateBoundRegionConversionTime::HigherRankedType,
- data,
- );
- let mut obligations = vec![];
- let normalized_ty = super::normalize_projection_type(
- &mut selcx,
- obligation.param_env,
- data.projection_ty,
- obligation.cause.clone(),
- 0,
- &mut obligations,
- );
-
- debug!(
- "report_projection_error obligation.cause={:?} obligation.param_env={:?}",
- obligation.cause, obligation.param_env
- );
-
- debug!(
- "report_projection_error normalized_ty={:?} data.ty={:?}",
- normalized_ty, data.ty
- );
-
- let is_normalized_ty_expected = match &obligation.cause.code {
- ObligationCauseCode::ItemObligation(_)
- | ObligationCauseCode::BindingObligation(_, _)
- | ObligationCauseCode::ObjectCastObligation(_) => false,
- _ => true,
- };
-
- if let Err(error) = self.at(&obligation.cause, obligation.param_env).eq_exp(
- is_normalized_ty_expected,
- normalized_ty,
- data.ty,
- ) {
- values = Some(infer::ValuePairs::Types(ExpectedFound::new(
- is_normalized_ty_expected,
- normalized_ty,
- data.ty,
- )));
-
- err_buf = error;
- err = &err_buf;
- }
- }
-
- let msg = format!("type mismatch resolving `{}`", predicate);
- let error_id = (DiagnosticMessageId::ErrorId(271), Some(obligation.cause.span), msg);
- let fresh = self.tcx.sess.one_time_diagnostics.borrow_mut().insert(error_id);
- if fresh {
- let mut diag = struct_span_err!(
- self.tcx.sess,
- obligation.cause.span,
- E0271,
- "type mismatch resolving `{}`",
- predicate
- );
- self.note_type_err(&mut diag, &obligation.cause, None, values, err);
- self.note_obligation_cause(&mut diag, obligation);
- diag.emit();
- }
- });
- }
-
- fn fuzzy_match_tys(&self, a: Ty<'tcx>, b: Ty<'tcx>) -> bool {
-        /// Returns the fuzzy category of a given type, or `None`
-        /// if the type can be equated to any type.
- fn type_category(t: Ty<'_>) -> Option<u32> {
- match t.kind {
- ty::Bool => Some(0),
- ty::Char => Some(1),
- ty::Str => Some(2),
- ty::Int(..) | ty::Uint(..) | ty::Infer(ty::IntVar(..)) => Some(3),
- ty::Float(..) | ty::Infer(ty::FloatVar(..)) => Some(4),
- ty::Ref(..) | ty::RawPtr(..) => Some(5),
- ty::Array(..) | ty::Slice(..) => Some(6),
- ty::FnDef(..) | ty::FnPtr(..) => Some(7),
- ty::Dynamic(..) => Some(8),
- ty::Closure(..) => Some(9),
- ty::Tuple(..) => Some(10),
- ty::Projection(..) => Some(11),
- ty::Param(..) => Some(12),
- ty::Opaque(..) => Some(13),
- ty::Never => Some(14),
- ty::Adt(adt, ..) => match adt.adt_kind() {
- AdtKind::Struct => Some(15),
- AdtKind::Union => Some(16),
- AdtKind::Enum => Some(17),
- },
- ty::Generator(..) => Some(18),
- ty::Foreign(..) => Some(19),
- ty::GeneratorWitness(..) => Some(20),
- ty::Placeholder(..) | ty::Bound(..) | ty::Infer(..) | ty::Error => None,
- ty::UnnormalizedProjection(..) => bug!("only used with chalk-engine"),
- }
- }
-
- match (type_category(a), type_category(b)) {
- (Some(cat_a), Some(cat_b)) => match (&a.kind, &b.kind) {
- (&ty::Adt(def_a, _), &ty::Adt(def_b, _)) => def_a == def_b,
- _ => cat_a == cat_b,
- },
- // infer and error can be equated to all types
- _ => true,
- }
- }
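As a sketch of the category comparison, with invented string-keyed categories standing in for `TyKind` (the `Adt` def-id special case is omitted for brevity):

```rust
// A stand-in for `type_category`: unknown names model inference
// variables and errors, which can be equated to anything.
fn category(name: &str) -> Option<u32> {
    match name {
        "bool" => Some(0),
        "char" => Some(1),
        "i32" | "u8" | "usize" => Some(3), // all integers share a category
        "f32" | "f64" => Some(4),
        _ => None,
    }
}

// Two types fuzzily match if their categories agree, or if either
// has no category at all.
fn fuzzy_match(a: &str, b: &str) -> bool {
    match (category(a), category(b)) {
        (Some(ca), Some(cb)) => ca == cb,
        _ => true,
    }
}

fn main() {
    assert!(fuzzy_match("i32", "u8")); // both integers
    assert!(!fuzzy_match("bool", "f64")); // different categories
    assert!(fuzzy_match("_", "bool")); // unknown matches anything
}
```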
-
- fn describe_generator(&self, body_id: hir::BodyId) -> Option<&'static str> {
- self.tcx.hir().body(body_id).generator_kind.map(|gen_kind| match gen_kind {
- hir::GeneratorKind::Gen => "a generator",
- hir::GeneratorKind::Async(hir::AsyncGeneratorKind::Block) => "an async block",
- hir::GeneratorKind::Async(hir::AsyncGeneratorKind::Fn) => "an async function",
- hir::GeneratorKind::Async(hir::AsyncGeneratorKind::Closure) => "an async closure",
- })
- }
-
- fn find_similar_impl_candidates(
- &self,
- trait_ref: ty::PolyTraitRef<'tcx>,
- ) -> Vec<ty::TraitRef<'tcx>> {
- let simp = fast_reject::simplify_type(self.tcx, trait_ref.skip_binder().self_ty(), true);
- let all_impls = self.tcx.all_impls(trait_ref.def_id());
-
- match simp {
- Some(simp) => all_impls
- .iter()
- .filter_map(|&def_id| {
- let imp = self.tcx.impl_trait_ref(def_id).unwrap();
- let imp_simp = fast_reject::simplify_type(self.tcx, imp.self_ty(), true);
- if let Some(imp_simp) = imp_simp {
- if simp != imp_simp {
- return None;
- }
- }
-
- Some(imp)
- })
- .collect(),
- None => {
- all_impls.iter().map(|&def_id| self.tcx.impl_trait_ref(def_id).unwrap()).collect()
- }
- }
- }
-
- fn report_similar_impl_candidates(
- &self,
- impl_candidates: Vec<ty::TraitRef<'tcx>>,
- err: &mut DiagnosticBuilder<'_>,
- ) {
- if impl_candidates.is_empty() {
- return;
- }
-
- let len = impl_candidates.len();
- let end = if impl_candidates.len() <= 5 { impl_candidates.len() } else { 4 };
-
- let normalize = |candidate| {
- self.tcx.infer_ctxt().enter(|ref infcx| {
- let normalized = infcx
- .at(&ObligationCause::dummy(), ty::ParamEnv::empty())
- .normalize(candidate)
- .ok();
- match normalized {
- Some(normalized) => format!("\n {:?}", normalized.value),
- None => format!("\n {:?}", candidate),
- }
- })
- };
-
- // Sort impl candidates so that ordering is consistent for UI tests.
- let mut normalized_impl_candidates =
- impl_candidates.iter().map(normalize).collect::<Vec<String>>();
-
- // Sort before taking the `..end` range,
- // because the ordering of `impl_candidates` may not be deterministic:
- // https://github.com/rust-lang/rust/pull/57475#issuecomment-455519507
- normalized_impl_candidates.sort();
-
- err.help(&format!(
- "the following implementations were found:{}{}",
- normalized_impl_candidates[..end].join(""),
- if len > 5 { format!("\nand {} others", len - 4) } else { String::new() }
- ));
- }
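The truncation rule in `report_similar_impl_candidates` above (print every candidate when there are at most five, otherwise print four plus an "and N others" line, after sorting for determinism) can be sketched as a standalone function. The name `summarize_candidates` and the plain-`String` representation are illustrative assumptions, not the compiler's API:

```rust
// Illustrative sketch of the candidate-list truncation above: at most
// five entries are printed in full; with more than five, four are shown
// followed by an "and N others" line. Candidates are sorted first so
// the output is deterministic.
fn summarize_candidates(mut candidates: Vec<String>) -> String {
    candidates.sort();
    let len = candidates.len();
    let end = if len <= 5 { len } else { 4 };
    let mut out = format!(
        "the following implementations were found:{}",
        candidates[..end].join("")
    );
    if len > 5 {
        out.push_str(&format!("\nand {} others", len - 4));
    }
    out
}

fn main() {
    let many: Vec<String> = (0..7).map(|i| format!("\n  Impl{}", i)).collect();
    assert!(summarize_candidates(many).ends_with("and 3 others"));

    let few: Vec<String> = (0..3).map(|i| format!("\n  Impl{}", i)).collect();
    assert!(!summarize_candidates(few).contains("others"));
}
```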
-
- /// Reports that an overflow has occurred and halts compilation. We
- /// halt compilation unconditionally because it is important that
- /// overflows never be masked -- they basically represent computations
- /// whose result could not be truly determined and thus we can't say
- /// if the program type checks or not -- and they are unusual
- /// occurrences in any case.
- pub fn report_overflow_error<T>(
- &self,
- obligation: &Obligation<'tcx, T>,
- suggest_increasing_limit: bool,
- ) -> !
- where
- T: fmt::Display + TypeFoldable<'tcx>,
- {
- let predicate = self.resolve_vars_if_possible(&obligation.predicate);
- let mut err = struct_span_err!(
- self.tcx.sess,
- obligation.cause.span,
- E0275,
- "overflow evaluating the requirement `{}`",
- predicate
- );
-
- if suggest_increasing_limit {
- self.suggest_new_overflow_limit(&mut err);
- }
-
- self.note_obligation_cause_code(
- &mut err,
- &obligation.predicate,
- &obligation.cause.code,
- &mut vec![],
- );
-
- err.emit();
- self.tcx.sess.abort_if_errors();
- bug!();
- }
-
- /// Reports that a cycle was detected which led to overflow and halts
- /// compilation. This is equivalent to `report_overflow_error` except
- /// that we can give a more helpful error message (and, in particular,
- /// we do not suggest increasing the overflow limit, which is not
- /// going to help).
- pub fn report_overflow_error_cycle(&self, cycle: &[PredicateObligation<'tcx>]) -> ! {
- let cycle = self.resolve_vars_if_possible(&cycle.to_owned());
- assert!(!cycle.is_empty());
-
- debug!("report_overflow_error_cycle: cycle={:?}", cycle);
-
- self.report_overflow_error(&cycle[0], false);
- }
-
pub fn report_extra_impl_obligation(
&self,
error_span: Span,
err
}
-
- /// Gets the start of the parent trait chain.
- fn get_parent_trait_ref(
- &self,
- code: &ObligationCauseCode<'tcx>,
- ) -> Option<(String, Option<Span>)> {
- match code {
- &ObligationCauseCode::BuiltinDerivedObligation(ref data) => {
- let parent_trait_ref = self.resolve_vars_if_possible(&data.parent_trait_ref);
- match self.get_parent_trait_ref(&data.parent_code) {
- Some(t) => Some(t),
- None => {
- let ty = parent_trait_ref.skip_binder().self_ty();
- let span =
- TyCategory::from_ty(ty).map(|(_, def_id)| self.tcx.def_span(def_id));
- Some((ty.to_string(), span))
- }
- }
- }
- _ => None,
- }
- }
-
- pub fn report_selection_error(
- &self,
- obligation: &PredicateObligation<'tcx>,
- error: &SelectionError<'tcx>,
- fallback_has_occurred: bool,
- points_at_arg: bool,
- ) {
- let tcx = self.tcx;
- let span = obligation.cause.span;
-
- let mut err = match *error {
- SelectionError::Unimplemented => {
- if let ObligationCauseCode::CompareImplMethodObligation {
- item_name,
- impl_item_def_id,
- trait_item_def_id,
- }
- | ObligationCauseCode::CompareImplTypeObligation {
- item_name,
- impl_item_def_id,
- trait_item_def_id,
- } = obligation.cause.code
- {
- self.report_extra_impl_obligation(
- span,
- item_name,
- impl_item_def_id,
- trait_item_def_id,
- &format!("`{}`", obligation.predicate),
- )
- .emit();
- return;
- }
- match obligation.predicate {
- ty::Predicate::Trait(ref trait_predicate, _) => {
- let trait_predicate = self.resolve_vars_if_possible(trait_predicate);
-
- if self.tcx.sess.has_errors() && trait_predicate.references_error() {
- return;
- }
- let trait_ref = trait_predicate.to_poly_trait_ref();
- let (post_message, pre_message, type_def) = self
- .get_parent_trait_ref(&obligation.cause.code)
- .map(|(t, s)| {
- (
- format!(" in `{}`", t),
- format!("within `{}`, ", t),
- s.map(|s| (format!("within this `{}`", t), s)),
- )
- })
- .unwrap_or_default();
-
- let OnUnimplementedNote { message, label, note, enclosing_scope } =
- self.on_unimplemented_note(trait_ref, obligation);
- let have_alt_message = message.is_some() || label.is_some();
- let is_try = self
- .tcx
- .sess
- .source_map()
- .span_to_snippet(span)
- .map(|s| &s == "?")
- .unwrap_or(false);
- let is_from = format!("{}", trait_ref.print_only_trait_path())
- .starts_with("std::convert::From<");
- let (message, note) = if is_try && is_from {
- (
- Some(format!(
- "`?` couldn't convert the error to `{}`",
- trait_ref.self_ty(),
- )),
- Some(
- "the question mark operation (`?`) implicitly performs a \
- conversion on the error value using the `From` trait"
- .to_owned(),
- ),
- )
- } else {
- (message, note)
- };
-
- let mut err = struct_span_err!(
- self.tcx.sess,
- span,
- E0277,
- "{}",
- message.unwrap_or_else(|| format!(
- "the trait bound `{}` is not satisfied{}",
- trait_ref.without_const().to_predicate(),
- post_message,
- ))
- );
-
- let explanation =
- if obligation.cause.code == ObligationCauseCode::MainFunctionType {
- "consider using `()`, or a `Result`".to_owned()
- } else {
- format!(
- "{}the trait `{}` is not implemented for `{}`",
- pre_message,
- trait_ref.print_only_trait_path(),
- trait_ref.self_ty(),
- )
- };
-
- if self.suggest_add_reference_to_arg(
- &obligation,
- &mut err,
- &trait_ref,
- points_at_arg,
- have_alt_message,
- ) {
- self.note_obligation_cause(&mut err, obligation);
- err.emit();
- return;
- }
- if let Some(ref s) = label {
- // If it has a custom `#[rustc_on_unimplemented]`
- // error message, let's display it as the label!
- err.span_label(span, s.as_str());
- err.help(&explanation);
- } else {
- err.span_label(span, explanation);
- }
- if let Some((msg, span)) = type_def {
- err.span_label(span, &msg);
- }
- if let Some(ref s) = note {
- // If it has a custom `#[rustc_on_unimplemented]` note, let's display it
- err.note(s.as_str());
- }
- if let Some(ref s) = enclosing_scope {
- let enclosing_scope_span = tcx.def_span(
- tcx.hir()
- .opt_local_def_id(obligation.cause.body_id)
- .unwrap_or_else(|| {
- tcx.hir().body_owner_def_id(hir::BodyId {
- hir_id: obligation.cause.body_id,
- })
- }),
- );
-
- err.span_label(enclosing_scope_span, s.as_str());
- }
-
- self.suggest_borrow_on_unsized_slice(&obligation.cause.code, &mut err);
- self.suggest_fn_call(&obligation, &mut err, &trait_ref, points_at_arg);
- self.suggest_remove_reference(&obligation, &mut err, &trait_ref);
- self.suggest_semicolon_removal(&obligation, &mut err, span, &trait_ref);
- self.note_version_mismatch(&mut err, &trait_ref);
- if self.suggest_impl_trait(&mut err, span, &obligation, &trait_ref) {
- err.emit();
- return;
- }
-
- // Try to report a help message
- if !trait_ref.has_infer_types()
- && self.predicate_can_apply(obligation.param_env, trait_ref)
- {
- // If a where-clause may be useful, remind the
- // user that they can add it.
- //
- // don't display an on-unimplemented note, as
- // these notes will often be of the form
- // "the type `T` can't be frobnicated"
- // which is somewhat confusing.
- self.suggest_restricting_param_bound(
- &mut err,
- &trait_ref,
- obligation.cause.body_id,
- );
- } else {
- if !have_alt_message {
- // Can't show anything else useful, try to find similar impls.
- let impl_candidates = self.find_similar_impl_candidates(trait_ref);
- self.report_similar_impl_candidates(impl_candidates, &mut err);
- }
- self.suggest_change_mut(
- &obligation,
- &mut err,
- &trait_ref,
- points_at_arg,
- );
- }
-
- // If this error is due to `!: Trait` not implemented but `(): Trait` is
- // implemented, and fallback has occurred, then it could be due to a
- // variable that used to fallback to `()` now falling back to `!`. Issue a
- // note informing about the change in behaviour.
- if trait_predicate.skip_binder().self_ty().is_never()
- && fallback_has_occurred
- {
- let predicate = trait_predicate.map_bound(|mut trait_pred| {
- trait_pred.trait_ref.substs = self.tcx.mk_substs_trait(
- self.tcx.mk_unit(),
- &trait_pred.trait_ref.substs[1..],
- );
- trait_pred
- });
- let unit_obligation = Obligation {
- predicate: ty::Predicate::Trait(
- predicate,
- hir::Constness::NotConst,
- ),
- ..obligation.clone()
- };
- if self.predicate_may_hold(&unit_obligation) {
- err.note(
- "the trait is implemented for `()`. \
- Possibly this error has been caused by changes to \
- Rust's type-inference algorithm (see issue #48950 \
- <https://github.com/rust-lang/rust/issues/48950> \
- for more information). Consider whether you meant to use \
- the type `()` here instead.",
- );
- }
- }
-
- err
- }
-
- ty::Predicate::Subtype(ref predicate) => {
- // Errors for Subtype predicates show up as
- // `FulfillmentErrorCode::CodeSubtypeError`,
- // not selection error.
- span_bug!(span, "subtype requirement gave wrong error: `{:?}`", predicate)
- }
-
- ty::Predicate::RegionOutlives(ref predicate) => {
- let predicate = self.resolve_vars_if_possible(predicate);
- let err = self
- .region_outlives_predicate(&obligation.cause, &predicate)
- .err()
- .unwrap();
- struct_span_err!(
- self.tcx.sess,
- span,
- E0279,
- "the requirement `{}` is not satisfied (`{}`)",
- predicate,
- err,
- )
- }
-
- ty::Predicate::Projection(..) | ty::Predicate::TypeOutlives(..) => {
- let predicate = self.resolve_vars_if_possible(&obligation.predicate);
- struct_span_err!(
- self.tcx.sess,
- span,
- E0280,
- "the requirement `{}` is not satisfied",
- predicate
- )
- }
-
- ty::Predicate::ObjectSafe(trait_def_id) => {
- let violations = self.tcx.object_safety_violations(trait_def_id);
- report_object_safety_error(self.tcx, span, trait_def_id, violations)
- }
-
- ty::Predicate::ClosureKind(closure_def_id, closure_substs, kind) => {
- let found_kind = self.closure_kind(closure_def_id, closure_substs).unwrap();
- let closure_span = self
- .tcx
- .sess
- .source_map()
- .def_span(self.tcx.hir().span_if_local(closure_def_id).unwrap());
- let hir_id = self.tcx.hir().as_local_hir_id(closure_def_id).unwrap();
- let mut err = struct_span_err!(
- self.tcx.sess,
- closure_span,
- E0525,
- "expected a closure that implements the `{}` trait, \
- but this closure only implements `{}`",
- kind,
- found_kind
- );
-
- err.span_label(
- closure_span,
- format!("this closure implements `{}`, not `{}`", found_kind, kind),
- );
- err.span_label(
- obligation.cause.span,
- format!("the requirement to implement `{}` derives from here", kind),
- );
-
- // Additional context information explaining why the closure only implements
- // a particular trait.
- if let Some(tables) = self.in_progress_tables {
- let tables = tables.borrow();
- match (found_kind, tables.closure_kind_origins().get(hir_id)) {
- (ty::ClosureKind::FnOnce, Some((span, name))) => {
- err.span_label(
- *span,
- format!(
- "closure is `FnOnce` because it moves the \
- variable `{}` out of its environment",
- name
- ),
- );
- }
- (ty::ClosureKind::FnMut, Some((span, name))) => {
- err.span_label(
- *span,
- format!(
- "closure is `FnMut` because it mutates the \
- variable `{}` here",
- name
- ),
- );
- }
- _ => {}
- }
- }
-
- err.emit();
- return;
- }
-
- ty::Predicate::WellFormed(ty) => {
- if !self.tcx.sess.opts.debugging_opts.chalk {
- // WF predicates cannot themselves make
- // errors. They can only block due to
- // ambiguity; otherwise, they always
- // degenerate into other obligations
- // (which may fail).
- span_bug!(span, "WF predicate not satisfied for {:?}", ty);
- } else {
- // FIXME: we'll need a better message which takes into account
- // which bounds actually failed to hold.
- self.tcx.sess.struct_span_err(
- span,
- &format!("the type `{}` is not well-formed (chalk)", ty),
- )
- }
- }
-
- ty::Predicate::ConstEvaluatable(..) => {
- // Errors for `ConstEvaluatable` predicates show up as
- // `SelectionError::ConstEvalFailure`,
- // not `Unimplemented`.
- span_bug!(
- span,
- "const-evaluatable requirement gave wrong error: `{:?}`",
- obligation
- )
- }
- }
- }
-
- OutputTypeParameterMismatch(ref found_trait_ref, ref expected_trait_ref, _) => {
- let found_trait_ref = self.resolve_vars_if_possible(&*found_trait_ref);
- let expected_trait_ref = self.resolve_vars_if_possible(&*expected_trait_ref);
-
- if expected_trait_ref.self_ty().references_error() {
- return;
- }
-
- let found_trait_ty = found_trait_ref.self_ty();
-
- let found_did = match found_trait_ty.kind {
- ty::Closure(did, _) | ty::Foreign(did) | ty::FnDef(did, _) => Some(did),
- ty::Adt(def, _) => Some(def.did),
- _ => None,
- };
-
- let found_span = found_did
- .and_then(|did| self.tcx.hir().span_if_local(did))
- .map(|sp| self.tcx.sess.source_map().def_span(sp)); // the sp could be an fn def
-
- if self.reported_closure_mismatch.borrow().contains(&(span, found_span)) {
- // We check closures twice, with obligations flowing in different directions,
- // but we want to complain about them only once.
- return;
- }
-
- self.reported_closure_mismatch.borrow_mut().insert((span, found_span));
-
- let found = match found_trait_ref.skip_binder().substs.type_at(1).kind {
- ty::Tuple(ref tys) => vec![ArgKind::empty(); tys.len()],
- _ => vec![ArgKind::empty()],
- };
-
- let expected_ty = expected_trait_ref.skip_binder().substs.type_at(1);
- let expected = match expected_ty.kind {
- ty::Tuple(ref tys) => tys
- .iter()
- .map(|t| ArgKind::from_expected_ty(t.expect_ty(), Some(span)))
- .collect(),
- _ => vec![ArgKind::Arg("_".to_owned(), expected_ty.to_string())],
- };
-
- if found.len() == expected.len() {
- self.report_closure_arg_mismatch(
- span,
- found_span,
- found_trait_ref,
- expected_trait_ref,
- )
- } else {
- let (closure_span, found) = found_did
- .and_then(|did| self.tcx.hir().get_if_local(did))
- .map(|node| {
- let (found_span, found) = self.get_fn_like_arguments(node);
- (Some(found_span), found)
- })
- .unwrap_or((found_span, found));
-
- self.report_arg_count_mismatch(
- span,
- closure_span,
- expected,
- found,
- found_trait_ty.is_closure(),
- )
- }
- }
-
- TraitNotObjectSafe(did) => {
- let violations = self.tcx.object_safety_violations(did);
- report_object_safety_error(self.tcx, span, did, violations)
- }
-
- ConstEvalFailure(ErrorHandled::TooGeneric) => {
- // In this instance, we have a const expression containing an unevaluated
- // generic parameter. We have no idea whether this expression is valid or
- // not (e.g. it might result in an error), but we don't want to just assume
- // that it's okay, because that might result in post-monomorphisation time
- // errors. The onus is really on the caller to provide values that it can
- // prove are well-formed.
- let mut err = self
- .tcx
- .sess
- .struct_span_err(span, "constant expression depends on a generic parameter");
- // FIXME(const_generics): we should suggest to the user how they can resolve this
- // issue. However, this is currently not actually possible
- // (see https://github.com/rust-lang/rust/issues/66962#issuecomment-575907083).
- err.note("this may fail depending on what value the parameter takes");
- err
- }
-
- // Already reported in the query.
- ConstEvalFailure(ErrorHandled::Reported) => {
- self.tcx.sess.delay_span_bug(span, "constant in type had an ignored error");
- return;
- }
-
- Overflow => {
- bug!("overflow should be handled before the `report_selection_error` path");
- }
- };
-
- self.note_obligation_cause(&mut err, obligation);
- self.point_at_returns_when_relevant(&mut err, &obligation);
-
- err.emit();
- }
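The `is_try && is_from` special case in `report_selection_error` above rewrites the E0277 message when `?` fails to convert an error type. A runnable sketch of the conversion the note describes, with made-up error types (not compiler code):

```rust
#[derive(Debug)]
struct ParseError;

#[derive(Debug)]
struct AppError(&'static str);

// `?` invokes this `From` impl implicitly when the error types differ.
impl From<ParseError> for AppError {
    fn from(_: ParseError) -> Self {
        AppError("parse failed")
    }
}

fn parse() -> Result<i32, ParseError> {
    Err(ParseError)
}

fn run() -> Result<i32, AppError> {
    // Without the `From` impl above, this `?` would trigger the
    // "`?` couldn't convert the error" diagnostic built in this module.
    let n = parse()?;
    Ok(n + 1)
}

fn main() {
    assert!(matches!(run(), Err(AppError("parse failed"))));
}
```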
-
- /// If the `Self` type of the unsatisfied trait `trait_ref` implements a trait
- /// with the same path as `trait_ref`, a help message about
- /// a probable version mismatch is added to `err`.
- fn note_version_mismatch(
- &self,
- err: &mut DiagnosticBuilder<'_>,
- trait_ref: &ty::PolyTraitRef<'tcx>,
- ) {
- let get_trait_impl = |trait_def_id| {
- let mut trait_impl = None;
- self.tcx.for_each_relevant_impl(trait_def_id, trait_ref.self_ty(), |impl_def_id| {
- if trait_impl.is_none() {
- trait_impl = Some(impl_def_id);
- }
- });
- trait_impl
- };
- let required_trait_path = self.tcx.def_path_str(trait_ref.def_id());
- let all_traits = self.tcx.all_traits(LOCAL_CRATE);
- let traits_with_same_path: std::collections::BTreeSet<_> = all_traits
- .iter()
- .filter(|trait_def_id| **trait_def_id != trait_ref.def_id())
- .filter(|trait_def_id| self.tcx.def_path_str(**trait_def_id) == required_trait_path)
- .collect();
- for trait_with_same_path in traits_with_same_path {
- if let Some(impl_def_id) = get_trait_impl(*trait_with_same_path) {
- let impl_span = self.tcx.def_span(impl_def_id);
- err.span_help(impl_span, "trait impl with same name found");
- let trait_crate = self.tcx.crate_name(trait_with_same_path.krate);
- let crate_msg = format!(
- "perhaps two different versions of crate `{}` are being used?",
- trait_crate
- );
- err.note(&crate_msg);
- }
- }
- }
-
- fn mk_obligation_for_def_id(
- &self,
- def_id: DefId,
- output_ty: Ty<'tcx>,
- cause: ObligationCause<'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- ) -> PredicateObligation<'tcx> {
- let new_trait_ref =
- ty::TraitRef { def_id, substs: self.tcx.mk_substs_trait(output_ty, &[]) };
- Obligation::new(cause, param_env, new_trait_ref.without_const().to_predicate())
- }
-}
-
-pub fn recursive_type_with_infinite_size_error(
- tcx: TyCtxt<'tcx>,
- type_def_id: DefId,
-) -> DiagnosticBuilder<'tcx> {
- assert!(type_def_id.is_local());
- let span = tcx.hir().span_if_local(type_def_id).unwrap();
- let span = tcx.sess.source_map().def_span(span);
- let mut err = struct_span_err!(
- tcx.sess,
- span,
- E0072,
- "recursive type `{}` has infinite size",
- tcx.def_path_str(type_def_id)
- );
- err.span_label(span, "recursive type has infinite size");
- err.help(&format!(
- "insert indirection (e.g., a `Box`, `Rc`, or `&`) \
- at some point to make `{}` representable",
- tcx.def_path_str(type_def_id)
- ));
- err
}
pub fn report_object_safety_error(
err
}
-
-impl<'a, 'tcx> InferCtxt<'a, 'tcx> {
- fn maybe_report_ambiguity(
- &self,
- obligation: &PredicateObligation<'tcx>,
- body_id: Option<hir::BodyId>,
- ) {
- // Unable to successfully determine, probably means
- // insufficient type information, but could mean
- // ambiguous impls. The latter *ought* to be a
- // coherence violation, so we don't report it here.
-
- let predicate = self.resolve_vars_if_possible(&obligation.predicate);
- let span = obligation.cause.span;
-
- debug!(
- "maybe_report_ambiguity(predicate={:?}, obligation={:?} body_id={:?}, code={:?})",
- predicate, obligation, body_id, obligation.cause.code,
- );
-
- // Ambiguity errors are often caused as fallout from earlier
- // errors. So just ignore them if this infcx is tainted.
- if self.is_tainted_by_errors() {
- return;
- }
-
- let mut err = match predicate {
- ty::Predicate::Trait(ref data, _) => {
- let trait_ref = data.to_poly_trait_ref();
- let self_ty = trait_ref.self_ty();
- debug!("self_ty {:?} {:?} trait_ref {:?}", self_ty, self_ty.kind, trait_ref);
-
- if predicate.references_error() {
- return;
- }
- // Typically, this ambiguity should only happen if
- // there are unresolved type inference variables
- // (otherwise it would suggest a coherence
- // failure). But given #21974 that is not necessarily
- // the case -- we can have multiple where clauses that
- // are only distinguished by a region, which results
- // in an ambiguity even when all types are fully
- // known, since we don't dispatch based on region
- // relationships.
-
- // This is kind of a hack: it frequently happens that some earlier
- // error prevents types from being fully inferred, and then we get
- // a bunch of uninteresting errors saying something like "<generic
- // #0> doesn't implement Sized". It may even be true that we
- // could just skip over all checks where the self-ty is an
- // inference variable, but I was afraid that there might be an
- // inference variable created, registered as an obligation, and
- // then never forced by writeback, and hence by skipping here we'd
- // be ignoring the fact that we don't KNOW the type works
- // out. Though even that would probably be harmless, given that
- // we're only talking about builtin traits, which are known to be
- // inhabited. We used to check for `self.tcx.sess.has_errors()` to
- // avoid inundating the user with unnecessary errors, but we now
- // check upstream for type errors and don't add the obligations to
- // begin with in those cases.
- if self
- .tcx
- .lang_items()
- .sized_trait()
- .map_or(false, |sized_id| sized_id == trait_ref.def_id())
- {
- self.need_type_info_err(body_id, span, self_ty, ErrorCode::E0282).emit();
- return;
- }
- let mut err = self.need_type_info_err(body_id, span, self_ty, ErrorCode::E0283);
- err.note(&format!("cannot resolve `{}`", predicate));
- if let ObligationCauseCode::ItemObligation(def_id) = obligation.cause.code {
- self.suggest_fully_qualified_path(&mut err, def_id, span, trait_ref.def_id());
- } else if let (
- Ok(ref snippet),
- ObligationCauseCode::BindingObligation(ref def_id, _),
- ) =
- (self.tcx.sess.source_map().span_to_snippet(span), &obligation.cause.code)
- {
- let generics = self.tcx.generics_of(*def_id);
- if !generics.params.is_empty() && !snippet.ends_with('>') {
- // FIXME: To avoid spurious suggestions in functions where type arguments
- // were already supplied, we check the snippet to make sure it doesn't
- // end with a turbofish. Ideally we would have access to a `PathSegment`
- // instead. Otherwise we would produce the following output:
- //
- // error[E0283]: type annotations needed
- // --> $DIR/issue-54954.rs:3:24
- // |
- // LL | const ARR_LEN: usize = Tt::const_val::<[i8; 123]>();
- // | ^^^^^^^^^^^^^^^^^^^^^^^^^^
- // | |
- // | cannot infer type
- // | help: consider specifying the type argument
- // | in the function call:
- // | `Tt::const_val::<[i8; 123]>::<T>`
- // ...
- // LL | const fn const_val<T: Sized>() -> usize {
- // | --------- - required by this bound in `Tt::const_val`
- // |
- // = note: cannot resolve `_: Tt`
-
- err.span_suggestion(
- span,
- &format!(
- "consider specifying the type argument{} in the function call",
- if generics.params.len() > 1 { "s" } else { "" },
- ),
- format!(
- "{}::<{}>",
- snippet,
- generics
- .params
- .iter()
- .map(|p| p.name.to_string())
- .collect::<Vec<String>>()
- .join(", ")
- ),
- Applicability::HasPlaceholders,
- );
- }
- }
- err
- }
-
- ty::Predicate::WellFormed(ty) => {
- // Same hacky approach as above to avoid deluging user
- // with error messages.
- if ty.references_error() || self.tcx.sess.has_errors() {
- return;
- }
- self.need_type_info_err(body_id, span, ty, ErrorCode::E0282)
- }
-
- ty::Predicate::Subtype(ref data) => {
- if data.references_error() || self.tcx.sess.has_errors() {
- // no need to overwhelm the user in such cases
- return;
- }
- let &SubtypePredicate { a_is_expected: _, a, b } = data.skip_binder();
- // both must be type variables, or the other would've been instantiated
- assert!(a.is_ty_var() && b.is_ty_var());
- self.need_type_info_err(body_id, span, a, ErrorCode::E0282)
- }
- ty::Predicate::Projection(ref data) => {
- let trait_ref = data.to_poly_trait_ref(self.tcx);
- let self_ty = trait_ref.self_ty();
- if predicate.references_error() {
- return;
- }
- let mut err = self.need_type_info_err(body_id, span, self_ty, ErrorCode::E0284);
- err.note(&format!("cannot resolve `{}`", predicate));
- err
- }
-
- _ => {
- if self.tcx.sess.has_errors() {
- return;
- }
- let mut err = struct_span_err!(
- self.tcx.sess,
- span,
- E0284,
- "type annotations needed: cannot resolve `{}`",
- predicate,
- );
- err.span_label(span, &format!("cannot resolve `{}`", predicate));
- err
- }
- };
- self.note_obligation_cause(&mut err, obligation);
- err.emit();
- }
-
- /// Returns `true` if the trait predicate may apply for *some* assignment
- /// to the type parameters.
- fn predicate_can_apply(
- &self,
- param_env: ty::ParamEnv<'tcx>,
- pred: ty::PolyTraitRef<'tcx>,
- ) -> bool {
- struct ParamToVarFolder<'a, 'tcx> {
- infcx: &'a InferCtxt<'a, 'tcx>,
- var_map: FxHashMap<Ty<'tcx>, Ty<'tcx>>,
- }
-
- impl<'a, 'tcx> TypeFolder<'tcx> for ParamToVarFolder<'a, 'tcx> {
- fn tcx<'b>(&'b self) -> TyCtxt<'tcx> {
- self.infcx.tcx
- }
-
- fn fold_ty(&mut self, ty: Ty<'tcx>) -> Ty<'tcx> {
- if let ty::Param(ty::ParamTy { name, .. }) = ty.kind {
- let infcx = self.infcx;
- self.var_map.entry(ty).or_insert_with(|| {
- infcx.next_ty_var(TypeVariableOrigin {
- kind: TypeVariableOriginKind::TypeParameterDefinition(name, None),
- span: DUMMY_SP,
- })
- })
- } else {
- ty.super_fold_with(self)
- }
- }
- }
-
- self.probe(|_| {
- let mut selcx = SelectionContext::new(self);
-
- let cleaned_pred =
- pred.fold_with(&mut ParamToVarFolder { infcx: self, var_map: Default::default() });
-
- let cleaned_pred = super::project::normalize(
- &mut selcx,
- param_env,
- ObligationCause::dummy(),
- &cleaned_pred,
- )
- .value;
-
- let obligation = Obligation::new(
- ObligationCause::dummy(),
- param_env,
- cleaned_pred.without_const().to_predicate(),
- );
-
- self.predicate_may_hold(&obligation)
- })
- }
-
- fn note_obligation_cause(
- &self,
- err: &mut DiagnosticBuilder<'_>,
- obligation: &PredicateObligation<'tcx>,
- ) {
- // First, attempt to add note to this error with an async-await-specific
- // message, and fall back to regular note otherwise.
- if !self.maybe_note_obligation_cause_for_async_await(err, obligation) {
- self.note_obligation_cause_code(
- err,
- &obligation.predicate,
- &obligation.cause.code,
- &mut vec![],
- );
- self.suggest_unsized_bound_if_applicable(err, obligation);
- }
- }
-
- fn suggest_unsized_bound_if_applicable(
- &self,
- err: &mut DiagnosticBuilder<'_>,
- obligation: &PredicateObligation<'tcx>,
- ) {
- if let (
- ty::Predicate::Trait(pred, _),
- ObligationCauseCode::BindingObligation(item_def_id, span),
- ) = (&obligation.predicate, &obligation.cause.code)
- {
- if let (Some(generics), true) = (
- self.tcx.hir().get_if_local(*item_def_id).as_ref().and_then(|n| n.generics()),
- Some(pred.def_id()) == self.tcx.lang_items().sized_trait(),
- ) {
- for param in generics.params {
- if param.span == *span
- && !param.bounds.iter().any(|bound| {
- bound.trait_def_id() == self.tcx.lang_items().sized_trait()
- })
- {
- let (span, separator) = match param.bounds {
- [] => (span.shrink_to_hi(), ":"),
- [.., bound] => (bound.span().shrink_to_hi(), " + "),
- };
- err.span_suggestion(
- span,
- "consider relaxing the implicit `Sized` restriction",
- format!("{} ?Sized", separator),
- Applicability::MachineApplicable,
- );
- return;
- }
- }
- }
- }
- }
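`suggest_unsized_bound_if_applicable` above emits the `?Sized` relaxation when the implicit `Sized` bound on a type parameter is what fails. A minimal sketch of what accepting that suggestion enables (the function and names here are illustrative, not from the compiler):

```rust
// With the relaxed `T: ?Sized` bound, unsized types such as `str`
// can be used behind a reference; under the default implicit `Sized`
// bound, `T = str` would be rejected.
fn len_of<T: ?Sized + AsRef<str>>(s: &T) -> usize {
    s.as_ref().len()
}

fn main() {
    let boxed: Box<str> = "hello".into();
    assert_eq!(len_of(&*boxed), 5); // T = str, an unsized type
    assert_eq!(len_of("world"), 5); // T = str again, via &str
}
```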
-
- fn is_recursive_obligation(
- &self,
- obligated_types: &mut Vec<&ty::TyS<'tcx>>,
- cause_code: &ObligationCauseCode<'tcx>,
- ) -> bool {
- if let ObligationCauseCode::BuiltinDerivedObligation(ref data) = cause_code {
- let parent_trait_ref = self.resolve_vars_if_possible(&data.parent_trait_ref);
-
- if obligated_types.iter().any(|ot| ot == &parent_trait_ref.skip_binder().self_ty()) {
- return true;
- }
- }
- false
- }
-}
-
- /// Summarizes information about an argument of a function or closure,
- /// for use in error reporting.
-#[derive(Clone)]
-pub enum ArgKind {
- /// An argument of non-tuple type. Parameters are (name, ty)
- Arg(String, String),
-
- /// An argument of tuple type. For a "found" argument, the span is
- /// the location in the source of the pattern. For an "expected"
- /// argument, it will be None. The vector is a list of (name, ty)
- /// strings for the components of the tuple.
- Tuple(Option<Span>, Vec<(String, String)>),
-}
-
-impl ArgKind {
- fn empty() -> ArgKind {
- ArgKind::Arg("_".to_owned(), "_".to_owned())
- }
-
- /// Creates an `ArgKind` from the expected type of an
- /// argument. It has no name (`_`) and an optional source span.
- pub fn from_expected_ty(t: Ty<'_>, span: Option<Span>) -> ArgKind {
- match t.kind {
- ty::Tuple(ref tys) => ArgKind::Tuple(
- span,
- tys.iter().map(|ty| ("_".to_owned(), ty.to_string())).collect::<Vec<_>>(),
- ),
- _ => ArgKind::Arg("_".to_owned(), t.to_string()),
- }
- }
-}
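`ArgKind::from_expected_ty` above flattens a tuple-typed argument into per-component `(name, ty)` pairs and leaves any other type as a single pair. A rough standalone analogue, using strings in place of `Ty` (purely illustrative; the real code matches on `ty.kind`, not on rendered type text):

```rust
#[derive(Debug, PartialEq)]
enum ArgKind {
    /// A non-tuple argument: (name, type) as strings.
    Arg(String, String),
    /// A tuple argument, split into per-component (name, type) pairs.
    Tuple(Vec<(String, String)>),
}

// Textual stand-in for matching on `ty.kind`: treat "(..)"-shaped
// type text as a tuple and split it on commas.
fn classify(ty: &str) -> ArgKind {
    if ty.starts_with('(') && ty.ends_with(')') {
        let inner = &ty[1..ty.len() - 1];
        ArgKind::Tuple(
            inner
                .split(',')
                .map(|t| ("_".to_owned(), t.trim().to_owned()))
                .collect(),
        )
    } else {
        ArgKind::Arg("_".to_owned(), ty.to_owned())
    }
}

fn main() {
    assert_eq!(classify("u32"), ArgKind::Arg("_".into(), "u32".into()));
    assert_eq!(
        classify("(i32, bool)"),
        ArgKind::Tuple(vec![
            ("_".into(), "i32".into()),
            ("_".into(), "bool".into()),
        ])
    );
}
```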
-
-/// Suggest restricting a type param with a new bound.
-pub fn suggest_constraining_type_param(
- tcx: TyCtxt<'_>,
- generics: &hir::Generics<'_>,
- err: &mut DiagnosticBuilder<'_>,
- param_name: &str,
- constraint: &str,
- source_map: &SourceMap,
- span: Span,
- def_id: Option<DefId>,
-) -> bool {
- const MSG_RESTRICT_BOUND_FURTHER: &str = "consider further restricting this bound with";
- const MSG_RESTRICT_TYPE: &str = "consider restricting this type parameter with";
- const MSG_RESTRICT_TYPE_FURTHER: &str = "consider further restricting this type parameter with";
-
- let param = generics.params.iter().find(|p| p.name.ident().as_str() == param_name);
-
- let param = if let Some(param) = param {
- param
- } else {
- return false;
- };
-
- if def_id == tcx.lang_items().sized_trait() {
- // Type parameters are already `Sized` by default.
- err.span_label(param.span, &format!("this type parameter needs to be `{}`", constraint));
- return true;
- }
-
- if param_name.starts_with("impl ") {
- // If there's an `impl Trait` used in argument position, suggest
- // restricting it:
- //
- // fn foo(t: impl Foo) { ... }
- // --------
- // |
- // help: consider further restricting this bound with `+ Bar`
- //
- // Suggestion for tools in this case is:
- //
- // fn foo(t: impl Foo) { ... }
- // --------
- // |
- // replace with: `impl Foo + Bar`
-
- err.span_help(param.span, &format!("{} `+ {}`", MSG_RESTRICT_BOUND_FURTHER, constraint));
-
- err.tool_only_span_suggestion(
- param.span,
- MSG_RESTRICT_BOUND_FURTHER,
- format!("{} + {}", param_name, constraint),
- Applicability::MachineApplicable,
- );
-
- return true;
- }
-
- if generics.where_clause.predicates.is_empty() {
- if let Some(bounds_span) = param.bounds_span() {
- // If user has provided some bounds, suggest restricting them:
- //
- // fn foo<T: Foo>(t: T) { ... }
- // ---
- // |
- // help: consider further restricting this bound with `+ Bar`
- //
- // Suggestion for tools in this case is:
- //
- // fn foo<T: Foo>(t: T) { ... }
- // --
- // |
- // replace with: `T: Bar +`
-
- err.span_help(
- bounds_span,
- &format!("{} `+ {}`", MSG_RESTRICT_BOUND_FURTHER, constraint),
- );
-
- let span_hi = param.span.with_hi(span.hi());
- let span_with_colon = source_map.span_through_char(span_hi, ':');
-
- if span_hi != param.span && span_with_colon != span_hi {
- err.tool_only_span_suggestion(
- span_with_colon,
- MSG_RESTRICT_BOUND_FURTHER,
- format!("{}: {} + ", param_name, constraint),
- Applicability::MachineApplicable,
- );
- }
- } else {
- // If user hasn't provided any bounds, suggest adding a new one:
- //
- // fn foo<T>(t: T) { ... }
- // - help: consider restricting this type parameter with `T: Foo`
-
- err.span_help(
- param.span,
- &format!("{} `{}: {}`", MSG_RESTRICT_TYPE, param_name, constraint),
- );
-
- err.tool_only_span_suggestion(
- param.span,
- MSG_RESTRICT_TYPE,
- format!("{}: {}", param_name, constraint),
- Applicability::MachineApplicable,
- );
- }
-
- true
- } else {
- // This part is a bit tricky, because using the `where` clause the user can
- // provide zero, one or many bounds for the same type parameter, so we
- // have following cases to consider:
- //
- // 1) When the type parameter has been provided zero bounds
- //
- // Message:
- // fn foo<X, Y>(x: X, y: Y) where Y: Foo { ... }
- // - help: consider restricting this type parameter with `where X: Bar`
- //
- // Suggestion:
- // fn foo<X, Y>(x: X, y: Y) where Y: Foo { ... }
- // - insert: `, X: Bar`
- //
- //
- // 2) When the type parameter has been provided one bound
- //
- // Message:
- // fn foo<T>(t: T) where T: Foo { ... }
- // ^^^^^^
- // |
- // help: consider further restricting this bound with `+ Bar`
- //
- // Suggestion:
- // fn foo<T>(t: T) where T: Foo { ... }
- // ^^
- // |
- // replace with: `T: Bar +`
- //
- //
- // 3) When the type parameter has been provided many bounds
- //
- // Message:
- // fn foo<T>(t: T) where T: Foo, T: Bar {... }
- // - help: consider further restricting this type parameter with `where T: Zar`
- //
- // Suggestion:
- // fn foo<T>(t: T) where T: Foo, T: Bar {... }
- // - insert: `, T: Zar`
-
- let mut param_spans = Vec::new();
-
- for predicate in generics.where_clause.predicates {
- if let WherePredicate::BoundPredicate(WhereBoundPredicate {
- span, bounded_ty, ..
- }) = predicate
- {
- if let TyKind::Path(QPath::Resolved(_, path)) = &bounded_ty.kind {
- if let Some(segment) = path.segments.first() {
- if segment.ident.to_string() == param_name {
- param_spans.push(span);
- }
- }
- }
- }
- }
-
- let where_clause_span =
- generics.where_clause.span_for_predicates_or_empty_place().shrink_to_hi();
-
- match &param_spans[..] {
- &[] => {
- err.span_help(
- param.span,
- &format!("{} `where {}: {}`", MSG_RESTRICT_TYPE, param_name, constraint),
- );
-
- err.tool_only_span_suggestion(
- where_clause_span,
- MSG_RESTRICT_TYPE,
- format!(", {}: {}", param_name, constraint),
- Applicability::MachineApplicable,
- );
- }
-
- &[&param_span] => {
- err.span_help(
- param_span,
- &format!("{} `+ {}`", MSG_RESTRICT_BOUND_FURTHER, constraint),
- );
-
- let span_hi = param_span.with_hi(span.hi());
- let span_with_colon = source_map.span_through_char(span_hi, ':');
-
- if span_hi != param_span && span_with_colon != span_hi {
- err.tool_only_span_suggestion(
- span_with_colon,
- MSG_RESTRICT_BOUND_FURTHER,
- format!("{}: {} +", param_name, constraint),
- Applicability::MachineApplicable,
- );
- }
- }
-
- _ => {
- err.span_help(
- param.span,
- &format!(
- "{} `where {}: {}`",
- MSG_RESTRICT_TYPE_FURTHER, param_name, constraint,
- ),
- );
-
- err.tool_only_span_suggestion(
- where_clause_span,
- MSG_RESTRICT_BOUND_FURTHER,
- format!(", {}: {}", param_name, constraint),
- Applicability::MachineApplicable,
- );
- }
- }
-
- true
- }
-}
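The comments in the deleted `suggest_constraining_type_param` helper above enumerate the `where`-clause shapes it handles. As a minimal sketch, here are user-level source patterns after the suggested fixes have been applied; `Foo` and `Bar` are placeholder traits, not compiler items:

```rust
// Placeholder traits standing in for whatever bound the diagnostic suggests.
trait Foo { fn foo(&self) -> i32; }
trait Bar { fn bar(&self) -> i32; }

impl Foo for i32 { fn foo(&self) -> i32 { *self } }
impl Bar for i32 { fn bar(&self) -> i32 { *self * 2 } }

// Case 1 (zero bounds on `X` in the `where` clause): the diagnostic
// suggests inserting `, X: Bar`; this is the code after that fix.
fn zero_bounds<X, Y>(x: X, y: Y) -> i32
where
    Y: Foo,
    X: Bar,
{
    x.bar() + y.foo()
}

// Case 2 (one existing bound): the suggestion rewrites `T: Foo`
// into `T: Foo + Bar`; this is the code after that fix.
fn one_bound<T>(t: T) -> i32
where
    T: Foo + Bar,
{
    t.foo() + t.bar()
}

fn main() {
    println!("{}", zero_bounds(1i32, 2i32)); // 1*2 + 2 = 4
    println!("{}", one_bound(3i32)); // 3 + 6 = 9
}
```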
+++ /dev/null
-use super::{
- ObligationCauseCode, OnUnimplementedDirective, OnUnimplementedNote, PredicateObligation,
-};
-use crate::infer::InferCtxt;
-use rustc::ty::subst::Subst;
-use rustc::ty::{self, GenericParamDefKind};
-use rustc_hir as hir;
-use rustc_hir::def_id::DefId;
-use rustc_span::symbol::sym;
-
-impl<'a, 'tcx> InferCtxt<'a, 'tcx> {
- fn impl_similar_to(
- &self,
- trait_ref: ty::PolyTraitRef<'tcx>,
- obligation: &PredicateObligation<'tcx>,
- ) -> Option<DefId> {
- let tcx = self.tcx;
- let param_env = obligation.param_env;
- let trait_ref = tcx.erase_late_bound_regions(&trait_ref);
- let trait_self_ty = trait_ref.self_ty();
-
- let mut self_match_impls = vec![];
- let mut fuzzy_match_impls = vec![];
-
- self.tcx.for_each_relevant_impl(trait_ref.def_id, trait_self_ty, |def_id| {
- let impl_substs = self.fresh_substs_for_item(obligation.cause.span, def_id);
- let impl_trait_ref = tcx.impl_trait_ref(def_id).unwrap().subst(tcx, impl_substs);
-
- let impl_self_ty = impl_trait_ref.self_ty();
-
- if let Ok(..) = self.can_eq(param_env, trait_self_ty, impl_self_ty) {
- self_match_impls.push(def_id);
-
- if trait_ref
- .substs
- .types()
- .skip(1)
- .zip(impl_trait_ref.substs.types().skip(1))
- .all(|(u, v)| self.fuzzy_match_tys(u, v))
- {
- fuzzy_match_impls.push(def_id);
- }
- }
- });
-
- let impl_def_id = if self_match_impls.len() == 1 {
- self_match_impls[0]
- } else if fuzzy_match_impls.len() == 1 {
- fuzzy_match_impls[0]
- } else {
- return None;
- };
-
- tcx.has_attr(impl_def_id, sym::rustc_on_unimplemented).then_some(impl_def_id)
- }
-
- /// Used to set on_unimplemented's `ItemContext`
- /// to be the enclosing (async) block/function/closure
- fn describe_enclosure(&self, hir_id: hir::HirId) -> Option<&'static str> {
- let hir = &self.tcx.hir();
- let node = hir.find(hir_id)?;
- match &node {
- hir::Node::Item(hir::Item { kind: hir::ItemKind::Fn(sig, _, body_id), .. }) => {
- self.describe_generator(*body_id).or_else(|| {
- Some(if let hir::FnHeader { asyncness: hir::IsAsync::Async, .. } = sig.header {
- "an async function"
- } else {
- "a function"
- })
- })
- }
- hir::Node::TraitItem(hir::TraitItem {
- kind: hir::TraitItemKind::Method(_, hir::TraitMethod::Provided(body_id)),
- ..
- }) => self.describe_generator(*body_id).or_else(|| Some("a trait method")),
- hir::Node::ImplItem(hir::ImplItem {
- kind: hir::ImplItemKind::Method(sig, body_id),
- ..
- }) => self.describe_generator(*body_id).or_else(|| {
- Some(if let hir::FnHeader { asyncness: hir::IsAsync::Async, .. } = sig.header {
- "an async method"
- } else {
- "a method"
- })
- }),
- hir::Node::Expr(hir::Expr {
- kind: hir::ExprKind::Closure(_is_move, _, body_id, _, gen_movability),
- ..
- }) => self.describe_generator(*body_id).or_else(|| {
- Some(if gen_movability.is_some() { "an async closure" } else { "a closure" })
- }),
- hir::Node::Expr(hir::Expr { .. }) => {
- let parent_hid = hir.get_parent_node(hir_id);
- if parent_hid != hir_id {
- return self.describe_enclosure(parent_hid);
- } else {
- None
- }
- }
- _ => None,
- }
- }
-
- crate fn on_unimplemented_note(
- &self,
- trait_ref: ty::PolyTraitRef<'tcx>,
- obligation: &PredicateObligation<'tcx>,
- ) -> OnUnimplementedNote {
- let def_id =
- self.impl_similar_to(trait_ref, obligation).unwrap_or_else(|| trait_ref.def_id());
- let trait_ref = *trait_ref.skip_binder();
-
- let mut flags = vec![];
- flags.push((
- sym::item_context,
- self.describe_enclosure(obligation.cause.body_id).map(|s| s.to_owned()),
- ));
-
- match obligation.cause.code {
- ObligationCauseCode::BuiltinDerivedObligation(..)
- | ObligationCauseCode::ImplDerivedObligation(..) => {}
- _ => {
- // this is a "direct", user-specified, rather than derived,
- // obligation.
- flags.push((sym::direct, None));
- }
- }
-
- if let ObligationCauseCode::ItemObligation(item) = obligation.cause.code {
- // FIXME: maybe also have some way of handling methods
- // from other traits? That would require name resolution,
- // which we might want to make hygienic in some way.
- //
- // Currently I'm leaving it for what I need for `try`.
- if self.tcx.trait_of_item(item) == Some(trait_ref.def_id) {
- let method = self.tcx.item_name(item);
- flags.push((sym::from_method, None));
- flags.push((sym::from_method, Some(method.to_string())));
- }
- }
- if let Some((t, _)) = self.get_parent_trait_ref(&obligation.cause.code) {
- flags.push((sym::parent_trait, Some(t)));
- }
-
- if let Some(k) = obligation.cause.span.desugaring_kind() {
- flags.push((sym::from_desugaring, None));
- flags.push((sym::from_desugaring, Some(format!("{:?}", k))));
- }
- let generics = self.tcx.generics_of(def_id);
- let self_ty = trait_ref.self_ty();
- // This is also included through the generics list as `Self`,
- // but the parser won't allow you to use it
- flags.push((sym::_Self, Some(self_ty.to_string())));
- if let Some(def) = self_ty.ty_adt_def() {
- // We also want to be able to select self's original
- // signature with no type arguments resolved
- flags.push((sym::_Self, Some(self.tcx.type_of(def.did).to_string())));
- }
-
- for param in generics.params.iter() {
- let value = match param.kind {
- GenericParamDefKind::Type { .. } | GenericParamDefKind::Const => {
- trait_ref.substs[param.index as usize].to_string()
- }
- GenericParamDefKind::Lifetime => continue,
- };
- let name = param.name;
- flags.push((name, Some(value)));
- }
-
- if let Some(true) = self_ty.ty_adt_def().map(|def| def.did.is_local()) {
- flags.push((sym::crate_local, None));
- }
-
- // Allow targeting all integers using `{integral}`, even if the exact type was resolved
- if self_ty.is_integral() {
- flags.push((sym::_Self, Some("{integral}".to_owned())));
- }
-
- if let ty::Array(aty, len) = self_ty.kind {
- flags.push((sym::_Self, Some("[]".to_owned())));
- flags.push((sym::_Self, Some(format!("[{}]", aty))));
- if let Some(def) = aty.ty_adt_def() {
- // We also want to be able to select the array's type's original
- // signature with no type arguments resolved
- flags.push((
- sym::_Self,
- Some(format!("[{}]", self.tcx.type_of(def.did).to_string())),
- ));
- let tcx = self.tcx;
- if let Some(len) = len.try_eval_usize(tcx, ty::ParamEnv::empty()) {
- flags.push((
- sym::_Self,
- Some(format!("[{}; {}]", self.tcx.type_of(def.did).to_string(), len)),
- ));
- } else {
- flags.push((
- sym::_Self,
- Some(format!("[{}; _]", self.tcx.type_of(def.did).to_string())),
- ));
- }
- }
- }
- if let ty::Dynamic(traits, _) = self_ty.kind {
- for t in *traits.skip_binder() {
- match t {
- ty::ExistentialPredicate::Trait(trait_ref) => {
- flags.push((sym::_Self, Some(self.tcx.def_path_str(trait_ref.def_id))))
- }
- _ => {}
- }
- }
- }
-
- if let Ok(Some(command)) =
- OnUnimplementedDirective::of_item(self.tcx, trait_ref.def_id, def_id)
- {
- command.evaluate(self.tcx, trait_ref, &flags[..])
- } else {
- OnUnimplementedNote::default()
- }
- }
-}
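The deleted `on_unimplemented_note` above builds a list of `(key, value)` flags, including a synthetic `_Self = "{integral}"` entry so `#[rustc_on_unimplemented]` filters can target all integer types at once. A simplified, self-contained sketch of that matching idea (not compiler code; the function names here are illustrative):

```rust
// Build a flag list the way `on_unimplemented_note` does, in miniature:
// `_Self` carries the concrete type, plus a `{integral}` alias for integers.
fn flags_for(self_ty: &str, is_integral: bool) -> Vec<(&'static str, Option<String>)> {
    let mut flags = vec![("_Self", Some(self_ty.to_string()))];
    if is_integral {
        // Allows an `on(_Self = "{integral}")` filter to match any integer.
        flags.push(("_Self", Some("{integral}".to_owned())));
    }
    flags
}

// A filter matches when any flag has the requested key and value.
fn matches(flags: &[(&'static str, Option<String>)], key: &str, value: &str) -> bool {
    flags.iter().any(|(k, v)| *k == key && v.as_deref() == Some(value))
}

fn main() {
    let flags = flags_for("u32", true);
    assert!(matches(&flags, "_Self", "u32"));
    assert!(matches(&flags, "_Self", "{integral}"));
    assert!(!matches(&flags, "_Self", "String"));
    println!("ok");
}
```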
+++ /dev/null
-use super::{
- ArgKind, EvaluationResult, Obligation, ObligationCause, ObligationCauseCode,
- PredicateObligation,
-};
-
-use crate::infer::InferCtxt;
-use crate::traits::error_reporting::suggest_constraining_type_param;
-
-use rustc::ty::TypeckTables;
-use rustc::ty::{self, AdtKind, DefIdTree, ToPredicate, Ty, TyCtxt, TypeFoldable, WithConstness};
-use rustc_errors::{
- error_code, pluralize, struct_span_err, Applicability, DiagnosticBuilder, Style,
-};
-use rustc_hir as hir;
-use rustc_hir::def::DefKind;
-use rustc_hir::def_id::DefId;
-use rustc_hir::intravisit::Visitor;
-use rustc_hir::Node;
-use rustc_span::symbol::{kw, sym};
-use rustc_span::{MultiSpan, Span, DUMMY_SP};
-use std::fmt;
-
-impl<'a, 'tcx> InferCtxt<'a, 'tcx> {
- crate fn suggest_restricting_param_bound(
- &self,
- mut err: &mut DiagnosticBuilder<'_>,
- trait_ref: &ty::PolyTraitRef<'_>,
- body_id: hir::HirId,
- ) {
- let self_ty = trait_ref.self_ty();
- let (param_ty, projection) = match &self_ty.kind {
- ty::Param(_) => (true, None),
- ty::Projection(projection) => (false, Some(projection)),
- _ => return,
- };
-
- let suggest_restriction =
- |generics: &hir::Generics<'_>, msg, err: &mut DiagnosticBuilder<'_>| {
- let span = generics.where_clause.span_for_predicates_or_empty_place();
- if !span.from_expansion() && span.desugaring_kind().is_none() {
- err.span_suggestion(
- generics.where_clause.span_for_predicates_or_empty_place().shrink_to_hi(),
- &format!("consider further restricting {}", msg),
- format!(
- "{} {} ",
- if !generics.where_clause.predicates.is_empty() {
- ","
- } else {
- " where"
- },
- trait_ref.without_const().to_predicate(),
- ),
- Applicability::MachineApplicable,
- );
- }
- };
-
- // FIXME: Add check for trait bound that is already present, particularly `?Sized` so we
- // don't suggest `T: Sized + ?Sized`.
- let mut hir_id = body_id;
- while let Some(node) = self.tcx.hir().find(hir_id) {
- match node {
- hir::Node::TraitItem(hir::TraitItem {
- generics,
- kind: hir::TraitItemKind::Method(..),
- ..
- }) if param_ty && self_ty == self.tcx.types.self_param => {
- // Restricting `Self` for a single method.
- suggest_restriction(&generics, "`Self`", err);
- return;
- }
-
- hir::Node::Item(hir::Item { kind: hir::ItemKind::Fn(_, generics, _), .. })
- | hir::Node::TraitItem(hir::TraitItem {
- generics,
- kind: hir::TraitItemKind::Method(..),
- ..
- })
- | hir::Node::ImplItem(hir::ImplItem {
- generics,
- kind: hir::ImplItemKind::Method(..),
- ..
- })
- | hir::Node::Item(hir::Item {
- kind: hir::ItemKind::Trait(_, _, generics, _, _),
- ..
- })
- | hir::Node::Item(hir::Item {
- kind: hir::ItemKind::Impl { generics, .. }, ..
- }) if projection.is_some() => {
- // Missing associated type bound.
- suggest_restriction(&generics, "the associated type", err);
- return;
- }
-
- hir::Node::Item(hir::Item {
- kind: hir::ItemKind::Struct(_, generics),
- span,
- ..
- })
- | hir::Node::Item(hir::Item {
- kind: hir::ItemKind::Enum(_, generics), span, ..
- })
- | hir::Node::Item(hir::Item {
- kind: hir::ItemKind::Union(_, generics),
- span,
- ..
- })
- | hir::Node::Item(hir::Item {
- kind: hir::ItemKind::Trait(_, _, generics, ..),
- span,
- ..
- })
- | hir::Node::Item(hir::Item {
- kind: hir::ItemKind::Impl { generics, .. },
- span,
- ..
- })
- | hir::Node::Item(hir::Item {
- kind: hir::ItemKind::Fn(_, generics, _),
- span,
- ..
- })
- | hir::Node::Item(hir::Item {
- kind: hir::ItemKind::TyAlias(_, generics),
- span,
- ..
- })
- | hir::Node::Item(hir::Item {
- kind: hir::ItemKind::TraitAlias(generics, _),
- span,
- ..
- })
- | hir::Node::Item(hir::Item {
- kind: hir::ItemKind::OpaqueTy(hir::OpaqueTy { generics, .. }),
- span,
- ..
- })
- | hir::Node::TraitItem(hir::TraitItem { generics, span, .. })
- | hir::Node::ImplItem(hir::ImplItem { generics, span, .. })
- if param_ty =>
- {
- // Missing generic type parameter bound.
- let param_name = self_ty.to_string();
- let constraint = trait_ref.print_only_trait_path().to_string();
- if suggest_constraining_type_param(
- self.tcx,
- generics,
- &mut err,
- &param_name,
- &constraint,
- self.tcx.sess.source_map(),
- *span,
- Some(trait_ref.def_id()),
- ) {
- return;
- }
- }
-
- hir::Node::Crate => return,
-
- _ => {}
- }
-
- hir_id = self.tcx.hir().get_parent_item(hir_id);
- }
- }
-
- /// When encountering an assignment of an unsized trait, like `let x = ""[..];`, provide a
- /// suggestion to borrow the initializer in order to have a slice instead.
- crate fn suggest_borrow_on_unsized_slice(
- &self,
- code: &ObligationCauseCode<'tcx>,
- err: &mut DiagnosticBuilder<'tcx>,
- ) {
- if let &ObligationCauseCode::VariableType(hir_id) = code {
- let parent_node = self.tcx.hir().get_parent_node(hir_id);
- if let Some(Node::Local(ref local)) = self.tcx.hir().find(parent_node) {
- if let Some(ref expr) = local.init {
- if let hir::ExprKind::Index(_, _) = expr.kind {
- if let Ok(snippet) = self.tcx.sess.source_map().span_to_snippet(expr.span) {
- err.span_suggestion(
- expr.span,
- "consider borrowing here",
- format!("&{}", snippet),
- Applicability::MachineApplicable,
- );
- }
- }
- }
- }
- }
- }
-
- /// Given a closure's `DefId`, return the given name of the closure.
- ///
- /// This doesn't account for reassignments, but it's only used for suggestions.
- crate fn get_closure_name(
- &self,
- def_id: DefId,
- err: &mut DiagnosticBuilder<'_>,
- msg: &str,
- ) -> Option<String> {
- let get_name =
- |err: &mut DiagnosticBuilder<'_>, kind: &hir::PatKind<'_>| -> Option<String> {
- // Get the local name of this closure. This can be inaccurate because
- // of the possibility of reassignment, but this should be good enough.
- match &kind {
- hir::PatKind::Binding(hir::BindingAnnotation::Unannotated, _, name, None) => {
- Some(format!("{}", name))
- }
- _ => {
- err.note(&msg);
- None
- }
- }
- };
-
- let hir = self.tcx.hir();
- let hir_id = hir.as_local_hir_id(def_id)?;
- let parent_node = hir.get_parent_node(hir_id);
- match hir.find(parent_node) {
- Some(hir::Node::Stmt(hir::Stmt { kind: hir::StmtKind::Local(local), .. })) => {
- get_name(err, &local.pat.kind)
- }
- // Different to previous arm because one is `&hir::Local` and the other
- // is `P<hir::Local>`.
- Some(hir::Node::Local(local)) => get_name(err, &local.pat.kind),
- _ => return None,
- }
- }
-
- /// We tried to apply the bound to an `fn` or closure. Check whether calling it would
- /// evaluate to a type that *would* satisfy the trait binding. If it would, suggest calling
- /// it: `bar(foo)` → `bar(foo())`. This case is *very* likely to be hit if `foo` is `async`.
- crate fn suggest_fn_call(
- &self,
- obligation: &PredicateObligation<'tcx>,
- err: &mut DiagnosticBuilder<'_>,
- trait_ref: &ty::Binder<ty::TraitRef<'tcx>>,
- points_at_arg: bool,
- ) {
- let self_ty = trait_ref.self_ty();
- let (def_id, output_ty, callable) = match self_ty.kind {
- ty::Closure(def_id, substs) => {
- (def_id, self.closure_sig(def_id, substs).output(), "closure")
- }
- ty::FnDef(def_id, _) => (def_id, self_ty.fn_sig(self.tcx).output(), "function"),
- _ => return,
- };
- let msg = format!("use parentheses to call the {}", callable);
-
- let obligation = self.mk_obligation_for_def_id(
- trait_ref.def_id(),
- output_ty.skip_binder(),
- obligation.cause.clone(),
- obligation.param_env,
- );
-
- match self.evaluate_obligation(&obligation) {
- Ok(EvaluationResult::EvaluatedToOk)
- | Ok(EvaluationResult::EvaluatedToOkModuloRegions)
- | Ok(EvaluationResult::EvaluatedToAmbig) => {}
- _ => return,
- }
- let hir = self.tcx.hir();
- // Get the name of the callable and the arguments to be used in the suggestion.
- let snippet = match hir.get_if_local(def_id) {
- Some(hir::Node::Expr(hir::Expr {
- kind: hir::ExprKind::Closure(_, decl, _, span, ..),
- ..
- })) => {
- err.span_label(*span, "consider calling this closure");
- let name = match self.get_closure_name(def_id, err, &msg) {
- Some(name) => name,
- None => return,
- };
- let args = decl.inputs.iter().map(|_| "_").collect::<Vec<_>>().join(", ");
- format!("{}({})", name, args)
- }
- Some(hir::Node::Item(hir::Item {
- ident,
- kind: hir::ItemKind::Fn(.., body_id),
- ..
- })) => {
- err.span_label(ident.span, "consider calling this function");
- let body = hir.body(*body_id);
- let args = body
- .params
- .iter()
- .map(|arg| match &arg.pat.kind {
- hir::PatKind::Binding(_, _, ident, None)
- // FIXME: provide a better suggestion when encountering `SelfLower`, it
- // should suggest a method call.
- if ident.name != kw::SelfLower => ident.to_string(),
- _ => "_".to_string(),
- })
- .collect::<Vec<_>>()
- .join(", ");
- format!("{}({})", ident, args)
- }
- _ => return,
- };
- if points_at_arg {
- // When the obligation error has been ensured to have been caused by
- // an argument, the `obligation.cause.span` points at the expression
- // of the argument, so we can provide a suggestion. This is signaled
- // by `points_at_arg`. Otherwise, we give a more general note.
- err.span_suggestion(
- obligation.cause.span,
- &msg,
- snippet,
- Applicability::HasPlaceholders,
- );
- } else {
- err.help(&format!("{}: `{}`", msg, snippet));
- }
- }
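`suggest_fn_call` above covers the `bar(foo)` → `bar(foo())` case: a function (or closure) is passed where its return value would satisfy the bound. A minimal illustration of the user error and the suggested fix, with made-up function names:

```rust
fn foo() -> i32 { 41 }
fn bar(x: i32) -> i32 { x + 1 }

fn main() {
    // `bar(foo)` fails to compile: `fn() -> i32` is not `i32`.
    // The diagnostic suggests adding parentheses to call `foo`:
    assert_eq!(bar(foo()), 42);
    println!("{}", bar(foo()));
}
```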
-
- crate fn suggest_add_reference_to_arg(
- &self,
- obligation: &PredicateObligation<'tcx>,
- err: &mut DiagnosticBuilder<'tcx>,
- trait_ref: &ty::Binder<ty::TraitRef<'tcx>>,
- points_at_arg: bool,
- has_custom_message: bool,
- ) -> bool {
- if !points_at_arg {
- return false;
- }
-
- let span = obligation.cause.span;
- let param_env = obligation.param_env;
- let trait_ref = trait_ref.skip_binder();
-
- if let ObligationCauseCode::ImplDerivedObligation(obligation) = &obligation.cause.code {
- // Try to apply the original trait binding obligation by borrowing.
- let self_ty = trait_ref.self_ty();
- let found = self_ty.to_string();
- let new_self_ty = self.tcx.mk_imm_ref(self.tcx.lifetimes.re_static, self_ty);
- let substs = self.tcx.mk_substs_trait(new_self_ty, &[]);
- let new_trait_ref = ty::TraitRef::new(obligation.parent_trait_ref.def_id(), substs);
- let new_obligation = Obligation::new(
- ObligationCause::dummy(),
- param_env,
- new_trait_ref.without_const().to_predicate(),
- );
- if self.predicate_must_hold_modulo_regions(&new_obligation) {
- if let Ok(snippet) = self.tcx.sess.source_map().span_to_snippet(span) {
- // We have a very specific type of error, where just borrowing this argument
- // might solve the problem. In cases like this, the important part is the
- // original type obligation, not the last one that failed, which is arbitrary.
- // Because of this, we modify the error to refer to the original obligation and
- // return early in the caller.
- let msg = format!(
- "the trait bound `{}: {}` is not satisfied",
- found,
- obligation.parent_trait_ref.skip_binder().print_only_trait_path(),
- );
- if has_custom_message {
- err.note(&msg);
- } else {
- err.message = vec![(msg, Style::NoStyle)];
- }
- if snippet.starts_with('&') {
- // This is already a literal borrow and the obligation is failing
- // somewhere else in the obligation chain. Do not suggest non-sense.
- return false;
- }
- err.span_label(
- span,
- &format!(
- "expected an implementor of trait `{}`",
- obligation.parent_trait_ref.skip_binder().print_only_trait_path(),
- ),
- );
- err.span_suggestion(
- span,
- "consider borrowing here",
- format!("&{}", snippet),
- Applicability::MaybeIncorrect,
- );
- return true;
- }
- }
- }
- false
- }
-
- /// Whenever references are used by mistake, like `for (i, e) in &vec.iter().enumerate()`,
- /// suggest removing these references until we reach a type that implements the trait.
- crate fn suggest_remove_reference(
- &self,
- obligation: &PredicateObligation<'tcx>,
- err: &mut DiagnosticBuilder<'tcx>,
- trait_ref: &ty::Binder<ty::TraitRef<'tcx>>,
- ) {
- let trait_ref = trait_ref.skip_binder();
- let span = obligation.cause.span;
-
- if let Ok(snippet) = self.tcx.sess.source_map().span_to_snippet(span) {
- let refs_number =
- snippet.chars().filter(|c| !c.is_whitespace()).take_while(|c| *c == '&').count();
- if let Some('\'') =
- snippet.chars().filter(|c| !c.is_whitespace()).skip(refs_number).next()
- {
- // Do not suggest removal of borrow from type arguments.
- return;
- }
-
- let mut trait_type = trait_ref.self_ty();
-
- for refs_remaining in 0..refs_number {
- if let ty::Ref(_, t_type, _) = trait_type.kind {
- trait_type = t_type;
-
- let new_obligation = self.mk_obligation_for_def_id(
- trait_ref.def_id,
- trait_type,
- ObligationCause::dummy(),
- obligation.param_env,
- );
-
- if self.predicate_may_hold(&new_obligation) {
- let sp = self
- .tcx
- .sess
- .source_map()
- .span_take_while(span, |c| c.is_whitespace() || *c == '&');
-
- let remove_refs = refs_remaining + 1;
-
- let msg = if remove_refs == 1 {
- "consider removing the leading `&`-reference".to_string()
- } else {
- format!("consider removing {} leading `&`-references", remove_refs)
- };
-
- err.span_suggestion_short(
- sp,
- &msg,
- String::new(),
- Applicability::MachineApplicable,
- );
- break;
- }
- } else {
- break;
- }
- }
- }
- }
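The helper above counts the leading `&`s in the snippet and peels `ty::Ref` layers until the obligation may hold, so it can tell the user how many references to remove. A user-level illustration of the mistake it targets, with the fix already applied (the doc comment's own `for (i, e) in &vec.iter().enumerate()` example):

```rust
fn main() {
    let v = vec![10, 20, 30];
    // Mistake the deleted helper targets: `&v.iter().enumerate()` is a
    // `&Enumerate<..>`, which does not implement `Iterator`, so `for`
    // fails; the suggestion is to remove the leading `&`, as here.
    let mut sum = 0;
    for (i, e) in v.iter().enumerate() {
        sum += i + *e as usize;
    }
    assert_eq!(sum, 63);
    println!("{}", sum);
}
```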
-
- /// Check if the trait bound is implemented for a different mutability and note it in the
- /// final error.
- crate fn suggest_change_mut(
- &self,
- obligation: &PredicateObligation<'tcx>,
- err: &mut DiagnosticBuilder<'tcx>,
- trait_ref: &ty::Binder<ty::TraitRef<'tcx>>,
- points_at_arg: bool,
- ) {
- let span = obligation.cause.span;
- if let Ok(snippet) = self.tcx.sess.source_map().span_to_snippet(span) {
- let refs_number =
- snippet.chars().filter(|c| !c.is_whitespace()).take_while(|c| *c == '&').count();
- if let Some('\'') =
- snippet.chars().filter(|c| !c.is_whitespace()).skip(refs_number).next()
- {
- // Do not suggest removal of borrow from type arguments.
- return;
- }
- let trait_ref = self.resolve_vars_if_possible(trait_ref);
- if trait_ref.has_infer_types() {
- // Do not ICE while trying to find if a reborrow would succeed on a trait with
- // unresolved bindings.
- return;
- }
-
- if let ty::Ref(region, t_type, mutability) = trait_ref.skip_binder().self_ty().kind {
- let trait_type = match mutability {
- hir::Mutability::Mut => self.tcx.mk_imm_ref(region, t_type),
- hir::Mutability::Not => self.tcx.mk_mut_ref(region, t_type),
- };
-
- let new_obligation = self.mk_obligation_for_def_id(
- trait_ref.skip_binder().def_id,
- trait_type,
- ObligationCause::dummy(),
- obligation.param_env,
- );
-
- if self.evaluate_obligation_no_overflow(&new_obligation).must_apply_modulo_regions()
- {
- let sp = self
- .tcx
- .sess
- .source_map()
- .span_take_while(span, |c| c.is_whitespace() || *c == '&');
- if points_at_arg && mutability == hir::Mutability::Not && refs_number > 0 {
- err.span_suggestion(
- sp,
- "consider changing this borrow's mutability",
- "&mut ".to_string(),
- Applicability::MachineApplicable,
- );
- } else {
- err.note(&format!(
- "`{}` is implemented for `{:?}`, but not for `{:?}`",
- trait_ref.print_only_trait_path(),
- trait_type,
- trait_ref.skip_binder().self_ty(),
- ));
- }
- }
- }
- }
- }
-
- crate fn suggest_semicolon_removal(
- &self,
- obligation: &PredicateObligation<'tcx>,
- err: &mut DiagnosticBuilder<'tcx>,
- span: Span,
- trait_ref: &ty::Binder<ty::TraitRef<'tcx>>,
- ) {
- let hir = self.tcx.hir();
- let parent_node = hir.get_parent_node(obligation.cause.body_id);
- let node = hir.find(parent_node);
- if let Some(hir::Node::Item(hir::Item {
- kind: hir::ItemKind::Fn(sig, _, body_id), ..
- })) = node
- {
- let body = hir.body(*body_id);
- if let hir::ExprKind::Block(blk, _) = &body.value.kind {
- if sig.decl.output.span().overlaps(span)
- && blk.expr.is_none()
- && "()" == &trait_ref.self_ty().to_string()
- {
- // FIXME(estebank): When encountering a method with a trait
- // bound not satisfied in the return type with a body that has
- // no return, suggest removal of semicolon on last statement.
- // Once that is added, close #54771.
- if let Some(ref stmt) = blk.stmts.last() {
- let sp = self.tcx.sess.source_map().end_point(stmt.span);
- err.span_label(sp, "consider removing this semicolon");
- }
- }
- }
- }
- }
-
- /// If all conditions are met to identify a returned `dyn Trait`, suggest using `impl Trait` if
- /// applicable and signal that the error has been expanded appropriately and needs to be
- /// emitted.
- crate fn suggest_impl_trait(
- &self,
- err: &mut DiagnosticBuilder<'tcx>,
- span: Span,
- obligation: &PredicateObligation<'tcx>,
- trait_ref: &ty::Binder<ty::TraitRef<'tcx>>,
- ) -> bool {
- match obligation.cause.code.peel_derives() {
- // Only suggest `impl Trait` if the return type is unsized because it is `dyn Trait`.
- ObligationCauseCode::SizedReturnType => {}
- _ => return false,
- }
-
- let hir = self.tcx.hir();
- let parent_node = hir.get_parent_node(obligation.cause.body_id);
- let node = hir.find(parent_node);
- let (sig, body_id) = if let Some(hir::Node::Item(hir::Item {
- kind: hir::ItemKind::Fn(sig, _, body_id),
- ..
- })) = node
- {
- (sig, body_id)
- } else {
- return false;
- };
- let body = hir.body(*body_id);
- let trait_ref = self.resolve_vars_if_possible(trait_ref);
- let ty = trait_ref.skip_binder().self_ty();
- let is_object_safe = match ty.kind {
- ty::Dynamic(predicates, _) => {
- // If the `dyn Trait` is not object safe, do not suggest `Box<dyn Trait>`.
- predicates
- .principal_def_id()
- .map_or(true, |def_id| self.tcx.object_safety_violations(def_id).is_empty())
- }
- // We only want to suggest `impl Trait` to `dyn Trait`s.
- // For example, `fn foo() -> str` needs to be filtered out.
- _ => return false,
- };
-
- let ret_ty = if let hir::FnRetTy::Return(ret_ty) = sig.decl.output {
- ret_ty
- } else {
- return false;
- };
-
- // Use `TypeVisitor` instead of the output type directly to find the span of `ty` for
- // cases like `fn foo() -> (dyn Trait, i32) {}`.
- // Recursively look for `TraitObject` types and if there's only one, use that span to
- // suggest `impl Trait`.
-
- // Visit to make sure there's a single `return` type to suggest `impl Trait`,
- // otherwise suggest using `Box<dyn Trait>` or an enum.
- let mut visitor = ReturnsVisitor::default();
- visitor.visit_body(&body);
-
- let tables = self.in_progress_tables.map(|t| t.borrow()).unwrap();
-
- let mut ret_types = visitor
- .returns
- .iter()
- .filter_map(|expr| tables.node_type_opt(expr.hir_id))
- .map(|ty| self.resolve_vars_if_possible(&ty));
- let (last_ty, all_returns_have_same_type) = ret_types.clone().fold(
- (None, true),
- |(last_ty, mut same): (std::option::Option<Ty<'_>>, bool), ty| {
- let ty = self.resolve_vars_if_possible(&ty);
- same &= last_ty.map_or(true, |last_ty| last_ty == ty) && ty.kind != ty::Error;
- (Some(ty), same)
- },
- );
- let all_returns_conform_to_trait =
- if let Some(ty_ret_ty) = tables.node_type_opt(ret_ty.hir_id) {
- match ty_ret_ty.kind {
- ty::Dynamic(predicates, _) => {
- let cause = ObligationCause::misc(ret_ty.span, ret_ty.hir_id);
- let param_env = ty::ParamEnv::empty();
- ret_types.all(|returned_ty| {
- predicates.iter().all(|predicate| {
- let pred = predicate.with_self_ty(self.tcx, returned_ty);
- let obl = Obligation::new(cause.clone(), param_env, pred);
- self.predicate_may_hold(&obl)
- })
- })
- }
- _ => false,
- }
- } else {
- true
- };
-
- let (snippet, last_ty) =
- if let (true, hir::TyKind::TraitObject(..), Ok(snippet), true, Some(last_ty)) = (
- // Verify that we're dealing with a return `dyn Trait`
- ret_ty.span.overlaps(span),
- &ret_ty.kind,
- self.tcx.sess.source_map().span_to_snippet(ret_ty.span),
- // If any of the return types does not conform to the trait, then we can't
- // suggest `impl Trait` nor trait objects, it is a type mismatch error.
- all_returns_conform_to_trait,
- last_ty,
- ) {
- (snippet, last_ty)
- } else {
- return false;
- };
- err.code(error_code!(E0746));
- err.set_primary_message("return type cannot have an unboxed trait object");
- err.children.clear();
- let impl_trait_msg = "for information on `impl Trait`, see \
- <https://doc.rust-lang.org/book/ch10-02-traits.html\
- #returning-types-that-implement-traits>";
- let trait_obj_msg = "for information on trait objects, see \
- <https://doc.rust-lang.org/book/ch17-02-trait-objects.html\
- #using-trait-objects-that-allow-for-values-of-different-types>";
- let has_dyn = snippet.split_whitespace().next().map_or(false, |s| s == "dyn");
- let trait_obj = if has_dyn { &snippet[4..] } else { &snippet[..] };
- if all_returns_have_same_type {
- // Suggest `-> impl Trait`.
- err.span_suggestion(
- ret_ty.span,
- &format!(
- "return `impl {1}` instead, as all return paths are of type `{}`, \
- which implements `{1}`",
- last_ty, trait_obj,
- ),
- format!("impl {}", trait_obj),
- Applicability::MachineApplicable,
- );
- err.note(impl_trait_msg);
- } else {
- if is_object_safe {
- // Suggest `-> Box<dyn Trait>` and `Box::new(returned_value)`.
- // Get all the return values and collect their span and suggestion.
- let mut suggestions = visitor
- .returns
- .iter()
- .map(|expr| {
- (
- expr.span,
- format!(
- "Box::new({})",
- self.tcx.sess.source_map().span_to_snippet(expr.span).unwrap()
- ),
- )
- })
- .collect::<Vec<_>>();
- // Add the suggestion for the return type.
- suggestions.push((ret_ty.span, format!("Box<dyn {}>", trait_obj)));
- err.multipart_suggestion(
- "return a boxed trait object instead",
- suggestions,
- Applicability::MaybeIncorrect,
- );
- } else {
- // This is currently not possible to trigger because E0038 takes precedence, but
- // leave it in for completeness in case anything changes in an earlier stage.
- err.note(&format!(
- "if trait `{}` was object safe, you could return a trait object",
- trait_obj,
- ));
- }
- err.note(trait_obj_msg);
- err.note(&format!(
- "if all the returned values were of the same type you could use \
- `impl {}` as the return type",
- trait_obj,
- ));
- err.note(impl_trait_msg);
- err.note("you can create a new `enum` with a variant for each returned type");
- }
- true
- }
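`suggest_impl_trait` above picks between two rewrites for a bare `dyn Trait` return type: `-> impl Trait` when every return path has the same type, and `-> Box<dyn Trait>` (boxing each returned value) otherwise. A compilable sketch of both outcomes, using `std::fmt::Display` as the stand-in trait:

```rust
use std::fmt::Display;

// All return paths are `i32`, so `-> impl Display` is suggested.
fn same_type(flag: bool) -> impl Display {
    if flag { 1 } else { 2 }
}

// Return paths differ (`i32` vs `&str`), so a boxed trait object is
// suggested instead, with each value wrapped in `Box::new(..)`.
fn mixed(flag: bool) -> Box<dyn Display> {
    if flag { Box::new(1) } else { Box::new("two") }
}

fn main() {
    assert_eq!(same_type(true).to_string(), "1");
    assert_eq!(mixed(false).to_string(), "two");
    println!("ok");
}
```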
-
- crate fn point_at_returns_when_relevant(
- &self,
- err: &mut DiagnosticBuilder<'tcx>,
- obligation: &PredicateObligation<'tcx>,
- ) {
- match obligation.cause.code.peel_derives() {
- ObligationCauseCode::SizedReturnType => {}
- _ => return,
- }
-
- let hir = self.tcx.hir();
- let parent_node = hir.get_parent_node(obligation.cause.body_id);
- let node = hir.find(parent_node);
- if let Some(hir::Node::Item(hir::Item { kind: hir::ItemKind::Fn(_, _, body_id), .. })) =
- node
- {
- let body = hir.body(*body_id);
- // Point at all the `return`s in the function as they have failed trait bounds.
- let mut visitor = ReturnsVisitor::default();
- visitor.visit_body(&body);
- let tables = self.in_progress_tables.map(|t| t.borrow()).unwrap();
- for expr in &visitor.returns {
- if let Some(returned_ty) = tables.node_type_opt(expr.hir_id) {
- let ty = self.resolve_vars_if_possible(&returned_ty);
- err.span_label(expr.span, &format!("this returned value is of type `{}`", ty));
- }
- }
- }
- }
-
- /// Given some node representing a fn-like thing in the HIR map,
- /// returns a span and `ArgKind` information that describes the
- /// arguments it expects. This can be supplied to
- /// `report_arg_count_mismatch`.
- pub fn get_fn_like_arguments(&self, node: Node<'_>) -> (Span, Vec<ArgKind>) {
- match node {
- Node::Expr(&hir::Expr {
- kind: hir::ExprKind::Closure(_, ref _decl, id, span, _),
- ..
- }) => (
- self.tcx.sess.source_map().def_span(span),
- self.tcx
- .hir()
- .body(id)
- .params
- .iter()
- .map(|arg| {
- if let hir::Pat { kind: hir::PatKind::Tuple(ref args, _), span, .. } =
- *arg.pat
- {
- ArgKind::Tuple(
- Some(span),
- args.iter()
- .map(|pat| {
- let snippet = self
- .tcx
- .sess
- .source_map()
- .span_to_snippet(pat.span)
- .unwrap();
- (snippet, "_".to_owned())
- })
- .collect::<Vec<_>>(),
- )
- } else {
- let name =
- self.tcx.sess.source_map().span_to_snippet(arg.pat.span).unwrap();
- ArgKind::Arg(name, "_".to_owned())
- }
- })
- .collect::<Vec<ArgKind>>(),
- ),
- Node::Item(&hir::Item { span, kind: hir::ItemKind::Fn(ref sig, ..), .. })
- | Node::ImplItem(&hir::ImplItem {
- span,
- kind: hir::ImplItemKind::Method(ref sig, _),
- ..
- })
- | Node::TraitItem(&hir::TraitItem {
- span,
- kind: hir::TraitItemKind::Method(ref sig, _),
- ..
- }) => (
- self.tcx.sess.source_map().def_span(span),
- sig.decl
- .inputs
- .iter()
- .map(|arg| match arg.clone().kind {
- hir::TyKind::Tup(ref tys) => ArgKind::Tuple(
- Some(arg.span),
- vec![("_".to_owned(), "_".to_owned()); tys.len()],
- ),
- _ => ArgKind::empty(),
- })
- .collect::<Vec<ArgKind>>(),
- ),
- Node::Ctor(ref variant_data) => {
- let span = variant_data
- .ctor_hir_id()
- .map(|hir_id| self.tcx.hir().span(hir_id))
- .unwrap_or(DUMMY_SP);
- let span = self.tcx.sess.source_map().def_span(span);
-
- (span, vec![ArgKind::empty(); variant_data.fields().len()])
- }
- _ => panic!("non-FnLike node found: {:?}", node),
- }
- }
-
- /// Reports an error when the number of arguments needed by a
- /// trait match doesn't match the number that the expression
- /// provides.
- pub fn report_arg_count_mismatch(
- &self,
- span: Span,
- found_span: Option<Span>,
- expected_args: Vec<ArgKind>,
- found_args: Vec<ArgKind>,
- is_closure: bool,
- ) -> DiagnosticBuilder<'tcx> {
- let kind = if is_closure { "closure" } else { "function" };
-
- let args_str = |arguments: &[ArgKind], other: &[ArgKind]| {
- let arg_length = arguments.len();
- let distinct = match &other[..] {
- &[ArgKind::Tuple(..)] => true,
- _ => false,
- };
- match (arg_length, arguments.get(0)) {
- (1, Some(&ArgKind::Tuple(_, ref fields))) => {
- format!("a single {}-tuple as argument", fields.len())
- }
- _ => format!(
- "{} {}argument{}",
- arg_length,
- if distinct && arg_length > 1 { "distinct " } else { "" },
- pluralize!(arg_length)
- ),
- }
- };
-
- let expected_str = args_str(&expected_args, &found_args);
- let found_str = args_str(&found_args, &expected_args);
-
- let mut err = struct_span_err!(
- self.tcx.sess,
- span,
- E0593,
- "{} is expected to take {}, but it takes {}",
- kind,
- expected_str,
- found_str,
- );
-
- err.span_label(span, format!("expected {} that takes {}", kind, expected_str));
-
- if let Some(found_span) = found_span {
- err.span_label(found_span, format!("takes {}", found_str));
-
- // move |_| { ... }
- // ^^^^^^^^-- def_span
- //
- // move |_| { ... }
- // ^^^^^-- prefix
- let prefix_span = self.tcx.sess.source_map().span_until_non_whitespace(found_span);
- // move |_| { ... }
- // ^^^-- pipe_span
- let pipe_span =
- if let Some(span) = found_span.trim_start(prefix_span) { span } else { found_span };
-
- // Suggest to take and ignore the arguments with expected_args_length `_`s if
- // found arguments is empty (assume the user just wants to ignore args in this case).
- // For example, if `expected_args_length` is 2, suggest `|_, _|`.
- if found_args.is_empty() && is_closure {
- let underscores = vec!["_"; expected_args.len()].join(", ");
- err.span_suggestion(
- pipe_span,
- &format!(
- "consider changing the closure to take and ignore the expected argument{}",
- if expected_args.len() < 2 { "" } else { "s" }
- ),
- format!("|{}|", underscores),
- Applicability::MachineApplicable,
- );
- }
-
- if let &[ArgKind::Tuple(_, ref fields)] = &found_args[..] {
- if fields.len() == expected_args.len() {
- let sugg = fields
- .iter()
- .map(|(name, _)| name.to_owned())
- .collect::<Vec<String>>()
- .join(", ");
- err.span_suggestion(
- found_span,
- "change the closure to take multiple arguments instead of a single tuple",
- format!("|{}|", sugg),
- Applicability::MachineApplicable,
- );
- }
- }
- if let &[ArgKind::Tuple(_, ref fields)] = &expected_args[..] {
- if fields.len() == found_args.len() && is_closure {
- let sugg = format!(
- "|({}){}|",
- found_args
- .iter()
- .map(|arg| match arg {
- ArgKind::Arg(name, _) => name.to_owned(),
- _ => "_".to_owned(),
- })
- .collect::<Vec<String>>()
- .join(", "),
- // add type annotations if available
- if found_args.iter().any(|arg| match arg {
- ArgKind::Arg(_, ty) => ty != "_",
- _ => false,
- }) {
- format!(
- ": ({})",
- fields
- .iter()
- .map(|(_, ty)| ty.to_owned())
- .collect::<Vec<String>>()
- .join(", ")
- )
- } else {
- String::new()
- },
- );
- err.span_suggestion(
- found_span,
- "change the closure to accept a tuple instead of individual arguments",
- sugg,
- Applicability::MachineApplicable,
- );
- }
- }
- }
-
- err
- }
-
- crate fn report_closure_arg_mismatch(
- &self,
- span: Span,
- found_span: Option<Span>,
- expected_ref: ty::PolyTraitRef<'tcx>,
- found: ty::PolyTraitRef<'tcx>,
- ) -> DiagnosticBuilder<'tcx> {
- crate fn build_fn_sig_string<'tcx>(
- tcx: TyCtxt<'tcx>,
- trait_ref: &ty::TraitRef<'tcx>,
- ) -> String {
- let inputs = trait_ref.substs.type_at(1);
- let sig = if let ty::Tuple(inputs) = inputs.kind {
- tcx.mk_fn_sig(
- inputs.iter().map(|k| k.expect_ty()),
- tcx.mk_ty_infer(ty::TyVar(ty::TyVid { index: 0 })),
- false,
- hir::Unsafety::Normal,
- ::rustc_target::spec::abi::Abi::Rust,
- )
- } else {
- tcx.mk_fn_sig(
- ::std::iter::once(inputs),
- tcx.mk_ty_infer(ty::TyVar(ty::TyVid { index: 0 })),
- false,
- hir::Unsafety::Normal,
- ::rustc_target::spec::abi::Abi::Rust,
- )
- };
- ty::Binder::bind(sig).to_string()
- }
-
- let argument_is_closure = expected_ref.skip_binder().substs.type_at(0).is_closure();
- let mut err = struct_span_err!(
- self.tcx.sess,
- span,
- E0631,
- "type mismatch in {} arguments",
- if argument_is_closure { "closure" } else { "function" }
- );
-
- let found_str = format!(
- "expected signature of `{}`",
- build_fn_sig_string(self.tcx, found.skip_binder())
- );
- err.span_label(span, found_str);
-
- let found_span = found_span.unwrap_or(span);
- let expected_str = format!(
- "found signature of `{}`",
- build_fn_sig_string(self.tcx, expected_ref.skip_binder())
- );
- err.span_label(found_span, expected_str);
-
- err
- }
-}
-
-impl<'a, 'tcx> InferCtxt<'a, 'tcx> {
- crate fn suggest_fully_qualified_path(
- &self,
- err: &mut DiagnosticBuilder<'_>,
- def_id: DefId,
- span: Span,
- trait_ref: DefId,
- ) {
- if let Some(assoc_item) = self.tcx.opt_associated_item(def_id) {
- if let ty::AssocKind::Const | ty::AssocKind::Type = assoc_item.kind {
- err.note(&format!(
- "{}s cannot be accessed directly on a `trait`, they can only be \
- accessed through a specific `impl`",
- assoc_item.kind.suggestion_descr(),
- ));
- err.span_suggestion(
- span,
- "use the fully qualified path to an implementation",
- format!("<Type as {}>::{}", self.tcx.def_path_str(trait_ref), assoc_item.ident),
- Applicability::HasPlaceholders,
- );
- }
- }
- }
-
- /// Adds an async-await specific note to the diagnostic when the future does not implement
- /// an auto trait because of a captured type.
- ///
- /// ```ignore (diagnostic)
- /// note: future does not implement `Qux` as this value is used across an await
- /// --> $DIR/issue-64130-3-other.rs:17:5
- /// |
- /// LL | let x = Foo;
- /// | - has type `Foo`
- /// LL | baz().await;
- /// | ^^^^^^^^^^^ await occurs here, with `x` maybe used later
- /// LL | }
- /// | - `x` is later dropped here
- /// ```
- ///
- /// When the diagnostic does not implement `Send` or `Sync` specifically, then the diagnostic
- /// is "replaced" with a different message and a more specific error.
- ///
- /// ```ignore (diagnostic)
- /// error: future cannot be sent between threads safely
- /// --> $DIR/issue-64130-2-send.rs:21:5
- /// |
- /// LL | fn is_send<T: Send>(t: T) { }
- /// | ------- ---- required by this bound in `is_send`
- /// ...
- /// LL | is_send(bar());
- /// | ^^^^^^^ future returned by `bar` is not send
- /// |
- /// = help: within `impl std::future::Future`, the trait `std::marker::Send` is not
- /// implemented for `Foo`
- /// note: future is not send as this value is used across an await
- /// --> $DIR/issue-64130-2-send.rs:15:5
- /// |
- /// LL | let x = Foo;
- /// | - has type `Foo`
- /// LL | baz().await;
- /// | ^^^^^^^^^^^ await occurs here, with `x` maybe used later
- /// LL | }
- /// | - `x` is later dropped here
- /// ```
- ///
- /// Returns `true` if an async-await specific note was added to the diagnostic.
- crate fn maybe_note_obligation_cause_for_async_await(
- &self,
- err: &mut DiagnosticBuilder<'_>,
- obligation: &PredicateObligation<'tcx>,
- ) -> bool {
- debug!(
- "maybe_note_obligation_cause_for_async_await: obligation.predicate={:?} \
- obligation.cause.span={:?}",
- obligation.predicate, obligation.cause.span
- );
- let source_map = self.tcx.sess.source_map();
-
- // Attempt to detect an async-await error by looking at the obligation causes, looking
- // for a generator to be present.
- //
- // When a future does not implement a trait because of a captured type in one of the
- // generators somewhere in the call stack, then the result is a chain of obligations.
- //
- // Given a `async fn` A that calls a `async fn` B which captures a non-send type and that
- // future is passed as an argument to a function C which requires a `Send` type, then the
- // chain looks something like this:
- //
- // - `BuiltinDerivedObligation` with a generator witness (B)
- // - `BuiltinDerivedObligation` with a generator (B)
- // - `BuiltinDerivedObligation` with `std::future::GenFuture` (B)
- // - `BuiltinDerivedObligation` with `impl std::future::Future` (B)
- // - `BuiltinDerivedObligation` with `impl std::future::Future` (B)
- // - `BuiltinDerivedObligation` with a generator witness (A)
- // - `BuiltinDerivedObligation` with a generator (A)
- // - `BuiltinDerivedObligation` with `std::future::GenFuture` (A)
- // - `BuiltinDerivedObligation` with `impl std::future::Future` (A)
- // - `BuiltinDerivedObligation` with `impl std::future::Future` (A)
- // - `BindingObligation` with `impl_send (Send requirement)
- //
- // The first obligation in the chain is the most useful and has the generator that captured
- // the type. The last generator has information about where the bound was introduced. At
- // least one generator should be present for this diagnostic to be modified.
- let (mut trait_ref, mut target_ty) = match obligation.predicate {
- ty::Predicate::Trait(p, _) => {
- (Some(p.skip_binder().trait_ref), Some(p.skip_binder().self_ty()))
- }
- _ => (None, None),
- };
- let mut generator = None;
- let mut last_generator = None;
- let mut next_code = Some(&obligation.cause.code);
- while let Some(code) = next_code {
- debug!("maybe_note_obligation_cause_for_async_await: code={:?}", code);
- match code {
- ObligationCauseCode::BuiltinDerivedObligation(derived_obligation)
- | ObligationCauseCode::ImplDerivedObligation(derived_obligation) => {
- let ty = derived_obligation.parent_trait_ref.self_ty();
- debug!(
- "maybe_note_obligation_cause_for_async_await: \
- parent_trait_ref={:?} self_ty.kind={:?}",
- derived_obligation.parent_trait_ref, ty.kind
- );
-
- match ty.kind {
- ty::Generator(did, ..) => {
- generator = generator.or(Some(did));
- last_generator = Some(did);
- }
- ty::GeneratorWitness(..) => {}
- _ if generator.is_none() => {
- trait_ref = Some(*derived_obligation.parent_trait_ref.skip_binder());
- target_ty = Some(ty);
- }
- _ => {}
- }
-
- next_code = Some(derived_obligation.parent_code.as_ref());
- }
- _ => break,
- }
- }
-
- // Only continue if a generator was found.
- debug!(
- "maybe_note_obligation_cause_for_async_await: generator={:?} trait_ref={:?} \
- target_ty={:?}",
- generator, trait_ref, target_ty
- );
- let (generator_did, trait_ref, target_ty) = match (generator, trait_ref, target_ty) {
- (Some(generator_did), Some(trait_ref), Some(target_ty)) => {
- (generator_did, trait_ref, target_ty)
- }
- _ => return false,
- };
-
- let span = self.tcx.def_span(generator_did);
-
- // Do not ICE on closure typeck (#66868).
- if self.tcx.hir().as_local_hir_id(generator_did).is_none() {
- return false;
- }
-
- // Get the tables from the infcx if the generator is the function we are
- // currently type-checking; otherwise, get them by performing a query.
- // This is needed to avoid cycles.
- let in_progress_tables = self.in_progress_tables.map(|t| t.borrow());
- let generator_did_root = self.tcx.closure_base_def_id(generator_did);
- debug!(
- "maybe_note_obligation_cause_for_async_await: generator_did={:?} \
- generator_did_root={:?} in_progress_tables.local_id_root={:?} span={:?}",
- generator_did,
- generator_did_root,
- in_progress_tables.as_ref().map(|t| t.local_id_root),
- span
- );
- let query_tables;
- let tables: &TypeckTables<'tcx> = match &in_progress_tables {
- Some(t) if t.local_id_root == Some(generator_did_root) => t,
- _ => {
- query_tables = self.tcx.typeck_tables_of(generator_did);
- &query_tables
- }
- };
-
- // Look for a type inside the generator interior that matches the target type to get
- // a span.
- let target_ty_erased = self.tcx.erase_regions(&target_ty);
- let target_span = tables
- .generator_interior_types
- .iter()
- .find(|ty::GeneratorInteriorTypeCause { ty, .. }| {
- // Careful: the regions for types that appear in the
- // generator interior are not generally known, so we
- // want to erase them when comparing (and anyway,
- // `Send` and other bounds are generally unaffected by
- // the choice of region). When erasing regions, we
- // also have to erase late-bound regions. This is
- // because the types that appear in the generator
- // interior generally contain "bound regions" to
- // represent regions that are part of the suspended
- // generator frame. Bound regions are preserved by
- // `erase_regions` and so we must also call
- // `erase_late_bound_regions`.
- let ty_erased = self.tcx.erase_late_bound_regions(&ty::Binder::bind(*ty));
- let ty_erased = self.tcx.erase_regions(&ty_erased);
- let eq = ty::TyS::same_type(ty_erased, target_ty_erased);
- debug!(
- "maybe_note_obligation_cause_for_async_await: ty_erased={:?} \
- target_ty_erased={:?} eq={:?}",
- ty_erased, target_ty_erased, eq
- );
- eq
- })
- .map(|ty::GeneratorInteriorTypeCause { span, scope_span, expr, .. }| {
- (span, source_map.span_to_snippet(*span), scope_span, expr)
- });
-
- debug!(
- "maybe_note_obligation_cause_for_async_await: target_ty={:?} \
- generator_interior_types={:?} target_span={:?}",
- target_ty, tables.generator_interior_types, target_span
- );
- if let Some((target_span, Ok(snippet), scope_span, expr)) = target_span {
- self.note_obligation_cause_for_async_await(
- err,
- *target_span,
- scope_span,
- *expr,
- snippet,
- generator_did,
- last_generator,
- trait_ref,
- target_ty,
- tables,
- obligation,
- next_code,
- );
- true
- } else {
- false
- }
- }
-
- /// Unconditionally adds the diagnostic note described in
- /// `maybe_note_obligation_cause_for_async_await`'s documentation comment.
- crate fn note_obligation_cause_for_async_await(
- &self,
- err: &mut DiagnosticBuilder<'_>,
- target_span: Span,
- scope_span: &Option<Span>,
- expr: Option<hir::HirId>,
- snippet: String,
- first_generator: DefId,
- last_generator: Option<DefId>,
- trait_ref: ty::TraitRef<'_>,
- target_ty: Ty<'tcx>,
- tables: &ty::TypeckTables<'_>,
- obligation: &PredicateObligation<'tcx>,
- next_code: Option<&ObligationCauseCode<'tcx>>,
- ) {
- let source_map = self.tcx.sess.source_map();
-
- let is_async_fn = self
- .tcx
- .parent(first_generator)
- .map(|parent_did| self.tcx.asyncness(parent_did))
- .map(|parent_asyncness| parent_asyncness == hir::IsAsync::Async)
- .unwrap_or(false);
- let is_async_move = self
- .tcx
- .hir()
- .as_local_hir_id(first_generator)
- .and_then(|hir_id| self.tcx.hir().maybe_body_owned_by(hir_id))
- .map(|body_id| self.tcx.hir().body(body_id))
- .and_then(|body| body.generator_kind())
- .map(|generator_kind| match generator_kind {
- hir::GeneratorKind::Async(..) => true,
- _ => false,
- })
- .unwrap_or(false);
- let await_or_yield = if is_async_fn || is_async_move { "await" } else { "yield" };
-
- // Special case the primary error message when send or sync is the trait that was
- // not implemented.
- let is_send = self.tcx.is_diagnostic_item(sym::send_trait, trait_ref.def_id);
- let is_sync = self.tcx.is_diagnostic_item(sym::sync_trait, trait_ref.def_id);
- let hir = self.tcx.hir();
- let trait_explanation = if is_send || is_sync {
- let (trait_name, trait_verb) =
- if is_send { ("`Send`", "sent") } else { ("`Sync`", "shared") };
-
- err.clear_code();
- err.set_primary_message(format!(
- "future cannot be {} between threads safely",
- trait_verb
- ));
-
- let original_span = err.span.primary_span().unwrap();
- let mut span = MultiSpan::from_span(original_span);
-
- let message = if let Some(name) = last_generator
- .and_then(|generator_did| self.tcx.parent(generator_did))
- .and_then(|parent_did| hir.as_local_hir_id(parent_did))
- .and_then(|parent_hir_id| hir.opt_name(parent_hir_id))
- {
- format!("future returned by `{}` is not {}", name, trait_name)
- } else {
- format!("future is not {}", trait_name)
- };
-
- span.push_span_label(original_span, message);
- err.set_span(span);
-
- format!("is not {}", trait_name)
- } else {
- format!("does not implement `{}`", trait_ref.print_only_trait_path())
- };
-
- // Look at the last interior type to get a span for the `.await`.
- let await_span = tables.generator_interior_types.iter().map(|t| t.span).last().unwrap();
- let mut span = MultiSpan::from_span(await_span);
- span.push_span_label(
- await_span,
- format!("{} occurs here, with `{}` maybe used later", await_or_yield, snippet),
- );
-
- span.push_span_label(target_span, format!("has type `{}`", target_ty));
-
- // If available, use the scope span to annotate the drop location.
- if let Some(scope_span) = scope_span {
- span.push_span_label(
- source_map.end_point(*scope_span),
- format!("`{}` is later dropped here", snippet),
- );
- }
-
- err.span_note(
- span,
- &format!(
- "future {} as this value is used across an {}",
- trait_explanation, await_or_yield,
- ),
- );
-
- if let Some(expr_id) = expr {
- let expr = hir.expect_expr(expr_id);
- debug!("target_ty evaluated from {:?}", expr);
-
- let parent = hir.get_parent_node(expr_id);
- if let Some(hir::Node::Expr(e)) = hir.find(parent) {
- let parent_span = hir.span(parent);
- let parent_did = parent.owner_def_id();
- // ```rust
- // impl T {
- // fn foo(&self) -> i32 {}
- // }
- // T.foo();
- // ^^^^^^^ a temporary `&T` created inside this method call due to `&self`
- // ```
- //
- let is_region_borrow =
- tables.expr_adjustments(expr).iter().any(|adj| adj.is_region_borrow());
-
- // ```rust
- // struct Foo(*const u8);
- // bar(Foo(std::ptr::null())).await;
- // ^^^^^^^^^^^^^^^^^^^^^ raw-ptr `*T` created inside this struct ctor.
- // ```
- debug!("parent_def_kind: {:?}", self.tcx.def_kind(parent_did));
- let is_raw_borrow_inside_fn_like_call = match self.tcx.def_kind(parent_did) {
- Some(DefKind::Fn) | Some(DefKind::Ctor(..)) => target_ty.is_unsafe_ptr(),
- _ => false,
- };
-
- if (tables.is_method_call(e) && is_region_borrow)
- || is_raw_borrow_inside_fn_like_call
- {
- err.span_help(
- parent_span,
- "consider moving this into a `let` \
- binding to create a shorter lived borrow",
- );
- }
- }
- }
-
- // Add a note for the item obligation that remains - normally a note pointing to the
- // bound that introduced the obligation (e.g. `T: Send`).
- debug!("note_obligation_cause_for_async_await: next_code={:?}", next_code);
- self.note_obligation_cause_code(
- err,
- &obligation.predicate,
- next_code.unwrap(),
- &mut Vec::new(),
- );
- }
-
- crate fn note_obligation_cause_code<T>(
- &self,
- err: &mut DiagnosticBuilder<'_>,
- predicate: &T,
- cause_code: &ObligationCauseCode<'tcx>,
- obligated_types: &mut Vec<&ty::TyS<'tcx>>,
- ) where
- T: fmt::Display,
- {
- let tcx = self.tcx;
- match *cause_code {
- ObligationCauseCode::ExprAssignable
- | ObligationCauseCode::MatchExpressionArm { .. }
- | ObligationCauseCode::Pattern { .. }
- | ObligationCauseCode::IfExpression { .. }
- | ObligationCauseCode::IfExpressionWithNoElse
- | ObligationCauseCode::MainFunctionType
- | ObligationCauseCode::StartFunctionType
- | ObligationCauseCode::IntrinsicType
- | ObligationCauseCode::MethodReceiver
- | ObligationCauseCode::ReturnNoExpression
- | ObligationCauseCode::MiscObligation => {}
- ObligationCauseCode::SliceOrArrayElem => {
- err.note("slice and array elements must have `Sized` type");
- }
- ObligationCauseCode::TupleElem => {
- err.note("only the last element of a tuple may have a dynamically sized type");
- }
- ObligationCauseCode::ProjectionWf(data) => {
- err.note(&format!("required so that the projection `{}` is well-formed", data,));
- }
- ObligationCauseCode::ReferenceOutlivesReferent(ref_ty) => {
- err.note(&format!(
- "required so that reference `{}` does not outlive its referent",
- ref_ty,
- ));
- }
- ObligationCauseCode::ObjectTypeBound(object_ty, region) => {
- err.note(&format!(
- "required so that the lifetime bound of `{}` for `{}` is satisfied",
- region, object_ty,
- ));
- }
- ObligationCauseCode::ItemObligation(item_def_id) => {
- let item_name = tcx.def_path_str(item_def_id);
- let msg = format!("required by `{}`", item_name);
-
- if let Some(sp) = tcx.hir().span_if_local(item_def_id) {
- let sp = tcx.sess.source_map().def_span(sp);
- err.span_label(sp, &msg);
- } else {
- err.note(&msg);
- }
- }
- ObligationCauseCode::BindingObligation(item_def_id, span) => {
- let item_name = tcx.def_path_str(item_def_id);
- let msg = format!("required by this bound in `{}`", item_name);
- if let Some(ident) = tcx.opt_item_name(item_def_id) {
- err.span_label(ident.span, "");
- }
- if span != DUMMY_SP {
- err.span_label(span, &msg);
- } else {
- err.note(&msg);
- }
- }
- ObligationCauseCode::ObjectCastObligation(object_ty) => {
- err.note(&format!(
- "required for the cast to the object type `{}`",
- self.ty_to_string(object_ty)
- ));
- }
- ObligationCauseCode::Coercion { source: _, target } => {
- err.note(&format!("required by cast to type `{}`", self.ty_to_string(target)));
- }
- ObligationCauseCode::RepeatVec(suggest_const_in_array_repeat_expressions) => {
- err.note(
- "the `Copy` trait is required because the repeated element will be copied",
- );
- if suggest_const_in_array_repeat_expressions {
- err.note(
- "this array initializer can be evaluated at compile-time, see issue \
- #48147 <https://github.com/rust-lang/rust/issues/49147> \
- for more information",
- );
- if tcx.sess.opts.unstable_features.is_nightly_build() {
- err.help(
- "add `#![feature(const_in_array_repeat_expressions)]` to the \
- crate attributes to enable",
- );
- }
- }
- }
- ObligationCauseCode::VariableType(_) => {
- err.note("all local variables must have a statically known size");
- if !self.tcx.features().unsized_locals {
- err.help("unsized locals are gated as an unstable feature");
- }
- }
- ObligationCauseCode::SizedArgumentType => {
- err.note("all function arguments must have a statically known size");
- if !self.tcx.features().unsized_locals {
- err.help("unsized locals are gated as an unstable feature");
- }
- }
- ObligationCauseCode::SizedReturnType => {
- err.note("the return type of a function must have a statically known size");
- }
- ObligationCauseCode::SizedYieldType => {
- err.note("the yield type of a generator must have a statically known size");
- }
- ObligationCauseCode::AssignmentLhsSized => {
- err.note("the left-hand-side of an assignment must have a statically known size");
- }
- ObligationCauseCode::TupleInitializerSized => {
- err.note("tuples must have a statically known size to be initialized");
- }
- ObligationCauseCode::StructInitializerSized => {
- err.note("structs must have a statically known size to be initialized");
- }
- ObligationCauseCode::FieldSized { adt_kind: ref item, last } => match *item {
- AdtKind::Struct => {
- if last {
- err.note(
- "the last field of a packed struct may only have a \
- dynamically sized type if it does not need drop to be run",
- );
- } else {
- err.note(
- "only the last field of a struct may have a dynamically sized type",
- );
- }
- }
- AdtKind::Union => {
- err.note("no field of a union may have a dynamically sized type");
- }
- AdtKind::Enum => {
- err.note("no field of an enum variant may have a dynamically sized type");
- }
- },
- ObligationCauseCode::ConstSized => {
- err.note("constant expressions must have a statically known size");
- }
- ObligationCauseCode::ConstPatternStructural => {
- err.note("constants used for pattern-matching must derive `PartialEq` and `Eq`");
- }
- ObligationCauseCode::SharedStatic => {
- err.note("shared static variables must have a type that implements `Sync`");
- }
- ObligationCauseCode::BuiltinDerivedObligation(ref data) => {
- let parent_trait_ref = self.resolve_vars_if_possible(&data.parent_trait_ref);
- let ty = parent_trait_ref.skip_binder().self_ty();
- err.note(&format!("required because it appears within the type `{}`", ty));
- obligated_types.push(ty);
-
- let parent_predicate = parent_trait_ref.without_const().to_predicate();
- if !self.is_recursive_obligation(obligated_types, &data.parent_code) {
- self.note_obligation_cause_code(
- err,
- &parent_predicate,
- &data.parent_code,
- obligated_types,
- );
- }
- }
- ObligationCauseCode::ImplDerivedObligation(ref data) => {
- let parent_trait_ref = self.resolve_vars_if_possible(&data.parent_trait_ref);
- err.note(&format!(
- "required because of the requirements on the impl of `{}` for `{}`",
- parent_trait_ref.print_only_trait_path(),
- parent_trait_ref.skip_binder().self_ty()
- ));
- let parent_predicate = parent_trait_ref.without_const().to_predicate();
- self.note_obligation_cause_code(
- err,
- &parent_predicate,
- &data.parent_code,
- obligated_types,
- );
- }
- ObligationCauseCode::CompareImplMethodObligation { .. } => {
- err.note(&format!(
- "the requirement `{}` appears on the impl method \
- but not on the corresponding trait method",
- predicate
- ));
- }
- ObligationCauseCode::CompareImplTypeObligation { .. } => {
- err.note(&format!(
- "the requirement `{}` appears on the associated impl type \
- but not on the corresponding associated trait type",
- predicate
- ));
- }
- ObligationCauseCode::ReturnType
- | ObligationCauseCode::ReturnValue(_)
- | ObligationCauseCode::BlockTailExpression(_) => (),
- ObligationCauseCode::TrivialBound => {
- err.help("see issue #48214");
- if tcx.sess.opts.unstable_features.is_nightly_build() {
- err.help("add `#![feature(trivial_bounds)]` to the crate attributes to enable");
- }
- }
- ObligationCauseCode::AssocTypeBound(ref data) => {
- err.span_label(data.original, "associated type defined here");
- if let Some(sp) = data.impl_span {
- err.span_label(sp, "in this `impl` item");
- }
- for sp in &data.bounds {
- err.span_label(*sp, "restricted in this bound");
- }
- }
- }
- }
-
- crate fn suggest_new_overflow_limit(&self, err: &mut DiagnosticBuilder<'_>) {
- let current_limit = self.tcx.sess.recursion_limit.get();
- let suggested_limit = current_limit * 2;
- err.help(&format!(
- "consider adding a `#![recursion_limit=\"{}\"]` attribute to your crate (`{}`)",
- suggested_limit, self.tcx.crate_name,
- ));
- }
-}
-
-/// Collect all the returned expressions within the input expression.
-/// Used to point at the return spans when we want to suggest some change to them.
-#[derive(Default)]
-struct ReturnsVisitor<'v> {
- returns: Vec<&'v hir::Expr<'v>>,
- in_block_tail: bool,
-}
-
-impl<'v> Visitor<'v> for ReturnsVisitor<'v> {
- type Map = rustc::hir::map::Map<'v>;
-
- fn nested_visit_map(&mut self) -> hir::intravisit::NestedVisitorMap<'_, Self::Map> {
- hir::intravisit::NestedVisitorMap::None
- }
-
- fn visit_expr(&mut self, ex: &'v hir::Expr<'v>) {
- // Visit every expression to detect `return` paths, either through the function's tail
- // expression or `return` statements. We walk all nodes to find `return` statements, but
- // we only care about tail expressions when `in_block_tail` is `true`, which means that
- // they're in the return path of the function body.
- match ex.kind {
- hir::ExprKind::Ret(Some(ex)) => {
- self.returns.push(ex);
- }
- hir::ExprKind::Block(block, _) if self.in_block_tail => {
- self.in_block_tail = false;
- for stmt in block.stmts {
- hir::intravisit::walk_stmt(self, stmt);
- }
- self.in_block_tail = true;
- if let Some(expr) = block.expr {
- self.visit_expr(expr);
- }
- }
- hir::ExprKind::Match(_, arms, _) if self.in_block_tail => {
- for arm in arms {
- self.visit_expr(arm.body);
- }
- }
- // We need to walk to find `return`s in the entire body.
- _ if !self.in_block_tail => hir::intravisit::walk_expr(self, ex),
- _ => self.returns.push(ex),
- }
- }
-
- fn visit_body(&mut self, body: &'v hir::Body<'v>) {
- assert!(!self.in_block_tail);
- if body.generator_kind().is_none() {
- if let hir::ExprKind::Block(block, None) = body.value.kind {
- if block.expr.is_some() {
- self.in_block_tail = true;
- }
- }
- }
- hir::intravisit::walk_body(self, body);
- }
-}
+++ /dev/null
-use crate::infer::{InferCtxt, ShallowResolver};
-use rustc::ty::error::ExpectedFound;
-use rustc::ty::{self, ToPolyTraitRef, Ty, TypeFoldable};
-use rustc_data_structures::obligation_forest::ProcessResult;
-use rustc_data_structures::obligation_forest::{DoCompleted, Error, ForestObligation};
-use rustc_data_structures::obligation_forest::{ObligationForest, ObligationProcessor};
-use std::marker::PhantomData;
-
-use super::engine::{TraitEngine, TraitEngineExt};
-use super::project;
-use super::select::SelectionContext;
-use super::wf;
-use super::CodeAmbiguity;
-use super::CodeProjectionError;
-use super::CodeSelectionError;
-use super::{ConstEvalFailure, Unimplemented};
-use super::{FulfillmentError, FulfillmentErrorCode};
-use super::{ObligationCause, PredicateObligation};
-
-impl<'tcx> ForestObligation for PendingPredicateObligation<'tcx> {
- /// Note that we include both the `ParamEnv` and the `Predicate`,
- /// as the `ParamEnv` can influence whether fulfillment succeeds
- /// or fails.
- type CacheKey = ty::ParamEnvAnd<'tcx, ty::Predicate<'tcx>>;
-
- fn as_cache_key(&self) -> Self::CacheKey {
- self.obligation.param_env.and(self.obligation.predicate)
- }
-}
-
-/// The fulfillment context is used to drive trait resolution. It
-/// consists of a list of obligations that must be (eventually)
-/// satisfied. The job is to track which are satisfied, which yielded
-/// errors, and which are still pending. At any point, users can call
-/// `select_where_possible`, and the fulfillment context will try to do
-/// selection, retaining only those obligations that remain
-/// ambiguous. This may be helpful in pushing type inference
-/// along. Once all type inference constraints have been generated, the
-/// method `select_all_or_error` can be used to report any remaining
-/// ambiguous cases as errors.
-pub struct FulfillmentContext<'tcx> {
- // A list of all obligations that have been registered with this
- // fulfillment context.
- predicates: ObligationForest<PendingPredicateObligation<'tcx>>,
- // Should this fulfillment context register type-lives-for-region
- // obligations on its parent infcx? In some cases, region
- // obligations are either already known to hold (normalization) or
- // hopefully verifed elsewhere (type-impls-bound), and therefore
- // should not be checked.
- //
- // Note that if we are normalizing a type that we already
- // know is well-formed, there should be no harm setting this
- // to true - all the region variables should be determinable
- // using the RFC 447 rules, which don't depend on
- // type-lives-for-region constraints, and because the type
- // is well-formed, the constraints should hold.
- register_region_obligations: bool,
- // Is it OK to register obligations into this infcx inside
- // an infcx snapshot?
- //
- // The "primary fulfillment" in many cases in typeck lives
- // outside of any snapshot, so any use of it inside a snapshot
- // will lead to trouble and therefore is checked against, but
- // other fulfillment contexts sometimes do live inside of
- // a snapshot (they don't *straddle* a snapshot, so there
- // is no trouble there).
- usable_in_snapshot: bool,
-}
-
-#[derive(Clone, Debug)]
-pub struct PendingPredicateObligation<'tcx> {
- pub obligation: PredicateObligation<'tcx>,
- pub stalled_on: Vec<ty::InferTy>,
-}
-
-// `PendingPredicateObligation` is used a lot. Make sure it doesn't unintentionally get bigger.
-#[cfg(target_arch = "x86_64")]
-static_assert_size!(PendingPredicateObligation<'_>, 136);
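Editor's note: the `static_assert_size!` guard above pins the layout of a hot-path type so accidental growth fails the build rather than silently regressing performance. A minimal stand-alone sketch of the same trick in plain Rust follows; the `Pending` struct and the 88-byte figure are illustrative stand-ins, not the real type.

```rust
use std::mem::size_of;

// Compile-time size pin: if the type grows, the two array lengths disagree
// and compilation fails, rather than silently regressing a hot path.
macro_rules! static_assert_size {
    ($ty:ty, $size:expr) => {
        const _: [(); $size] = [(); size_of::<$ty>()];
    };
}

// Illustrative stand-in for a size-sensitive type.
pub struct Pending {
    payload: [u64; 8],
    stalled_on: Vec<u32>,
}

// The size is pointer-width dependent, so gate the exact number the same way
// the real code gates on `target_arch`.
#[cfg(target_pointer_width = "64")]
static_assert_size!(Pending, 88);

pub fn pending_size() -> usize {
    size_of::<Pending>()
}
```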
-
-impl<'a, 'tcx> FulfillmentContext<'tcx> {
- /// Creates a new fulfillment context.
- pub fn new() -> FulfillmentContext<'tcx> {
- FulfillmentContext {
- predicates: ObligationForest::new(),
- register_region_obligations: true,
- usable_in_snapshot: false,
- }
- }
-
- pub fn new_in_snapshot() -> FulfillmentContext<'tcx> {
- FulfillmentContext {
- predicates: ObligationForest::new(),
- register_region_obligations: true,
- usable_in_snapshot: true,
- }
- }
-
- pub fn new_ignoring_regions() -> FulfillmentContext<'tcx> {
- FulfillmentContext {
- predicates: ObligationForest::new(),
- register_region_obligations: false,
- usable_in_snapshot: false,
- }
- }
-
- /// Attempts to select obligations using `selcx`.
- fn select(
- &mut self,
- selcx: &mut SelectionContext<'a, 'tcx>,
- ) -> Result<(), Vec<FulfillmentError<'tcx>>> {
- debug!("select(obligation-forest-size={})", self.predicates.len());
-
- let mut errors = Vec::new();
-
- loop {
- debug!("select: starting another iteration");
-
- // Process pending obligations.
- let outcome = self.predicates.process_obligations(
- &mut FulfillProcessor {
- selcx,
- register_region_obligations: self.register_region_obligations,
- },
- DoCompleted::No,
- );
- debug!("select: outcome={:#?}", outcome);
-
- // FIXME: if we kept the original cache key, we could mark projection
- // obligations as complete for the projection cache here.
-
- errors.extend(outcome.errors.into_iter().map(|e| to_fulfillment_error(e)));
-
- // If nothing new was added, no need to keep looping.
- if outcome.stalled {
- break;
- }
- }
-
- debug!(
- "select({} predicates remaining, {} errors) done",
- self.predicates.len(),
- errors.len()
- );
-
- if errors.is_empty() { Ok(()) } else { Err(errors) }
- }
-}
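Editor's note: the `select` loop above drives `process_obligations` to a fixed point, exiting only when a full pass reports `stalled`. A toy stand-alone analogue of that control flow follows; the string-keyed `rules` map standing in for trait selection is invented for illustration.

```rust
use std::collections::HashMap;

/// Toy analogue of the `select` loop: each pass processes every pending
/// item; a "selected" item is replaced by its nested children, so we iterate
/// to a fixed point and exit on the first pass that changes nothing.
pub fn drive(
    mut pending: Vec<&'static str>,
    rules: &HashMap<&'static str, Vec<&'static str>>,
) -> Vec<&'static str> {
    loop {
        let mut next = Vec::new();
        let mut changed = false;
        for item in pending {
            match rules.get(item) {
                // "Selected": replace the obligation with its nested ones.
                Some(children) => {
                    changed = true;
                    next.extend(children.iter().copied());
                }
                // Still ambiguous: keep it pending for the next pass.
                None => next.push(item),
            }
        }
        pending = next;
        if !changed {
            return pending; // stalled: no progress, stop looping
        }
    }
}
```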
-
-impl<'tcx> TraitEngine<'tcx> for FulfillmentContext<'tcx> {
- /// "Normalize" a projection type `<SomeType as SomeTrait>::X` by
- /// creating a fresh type variable `$0` as well as a projection
- /// predicate `<SomeType as SomeTrait>::X == $0`. When the
- /// inference engine runs, it will attempt to find an impl of
- /// `SomeTrait` or a where-clause that lets us unify `$0` with
- /// something concrete. If this fails, we'll unify `$0` with
- /// `projection_ty` again.
- fn normalize_projection_type(
- &mut self,
- infcx: &InferCtxt<'_, 'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- projection_ty: ty::ProjectionTy<'tcx>,
- cause: ObligationCause<'tcx>,
- ) -> Ty<'tcx> {
- debug!("normalize_projection_type(projection_ty={:?})", projection_ty);
-
- debug_assert!(!projection_ty.has_escaping_bound_vars());
-
- // FIXME(#20304) -- cache
-
- let mut selcx = SelectionContext::new(infcx);
- let mut obligations = vec![];
- let normalized_ty = project::normalize_projection_type(
- &mut selcx,
- param_env,
- projection_ty,
- cause,
- 0,
- &mut obligations,
- );
- self.register_predicate_obligations(infcx, obligations);
-
- debug!("normalize_projection_type: result={:?}", normalized_ty);
-
- normalized_ty
- }
-
- fn register_predicate_obligation(
- &mut self,
- infcx: &InferCtxt<'_, 'tcx>,
- obligation: PredicateObligation<'tcx>,
- ) {
- // this helps to reduce duplicate errors, as well as making
- // debug output much nicer to read and so on.
- let obligation = infcx.resolve_vars_if_possible(&obligation);
-
- debug!("register_predicate_obligation(obligation={:?})", obligation);
-
- assert!(!infcx.is_in_snapshot() || self.usable_in_snapshot);
-
- self.predicates
- .register_obligation(PendingPredicateObligation { obligation, stalled_on: vec![] });
- }
-
- fn select_all_or_error(
- &mut self,
- infcx: &InferCtxt<'_, 'tcx>,
- ) -> Result<(), Vec<FulfillmentError<'tcx>>> {
- self.select_where_possible(infcx)?;
-
- let errors: Vec<_> = self
- .predicates
- .to_errors(CodeAmbiguity)
- .into_iter()
- .map(|e| to_fulfillment_error(e))
- .collect();
- if errors.is_empty() { Ok(()) } else { Err(errors) }
- }
-
- fn select_where_possible(
- &mut self,
- infcx: &InferCtxt<'_, 'tcx>,
- ) -> Result<(), Vec<FulfillmentError<'tcx>>> {
- let mut selcx = SelectionContext::new(infcx);
- self.select(&mut selcx)
- }
-
- fn pending_obligations(&self) -> Vec<PredicateObligation<'tcx>> {
- self.predicates.map_pending_obligations(|o| o.obligation.clone())
- }
-}
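Editor's note: `normalize_projection_type` above replaces a projection `<SomeType as SomeTrait>::X` with a fresh variable `$0` plus a deferred equality obligation. A toy sketch of that shape follows; the strings and integer variables are stand-ins for real types and inference variables.

```rust
/// Toy analogue of projection normalization: hand back a fresh variable for
/// each projection and record an equality obligation to be solved later.
pub struct Normalizer {
    next_var: u32,
    /// Deferred `projection == $var` obligations.
    pub obligations: Vec<(String, u32)>,
}

impl Normalizer {
    pub fn new() -> Self {
        Normalizer { next_var: 0, obligations: Vec::new() }
    }

    /// Returns the fresh variable `$n` standing in for `projection`.
    pub fn normalize(&mut self, projection: &str) -> u32 {
        let var = self.next_var;
        self.next_var += 1;
        // Defer the unification to the fulfillment loop.
        self.obligations.push((projection.to_string(), var));
        var
    }
}
```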
-
-struct FulfillProcessor<'a, 'b, 'tcx> {
- selcx: &'a mut SelectionContext<'b, 'tcx>,
- register_region_obligations: bool,
-}
-
-fn mk_pending(os: Vec<PredicateObligation<'tcx>>) -> Vec<PendingPredicateObligation<'tcx>> {
- os.into_iter()
- .map(|o| PendingPredicateObligation { obligation: o, stalled_on: vec![] })
- .collect()
-}
-
-impl<'a, 'b, 'tcx> ObligationProcessor for FulfillProcessor<'a, 'b, 'tcx> {
- type Obligation = PendingPredicateObligation<'tcx>;
- type Error = FulfillmentErrorCode<'tcx>;
-
- /// Processes a predicate obligation and returns either:
- /// - `Changed(v)` if the predicate is true, presuming that `v` are also true
- /// - `Unchanged` if we don't have enough info to be sure
- /// - `Error(e)` if the predicate does not hold
- ///
- /// This is always inlined, despite its size, because it has a single
- /// callsite and it is called *very* frequently.
- #[inline(always)]
- fn process_obligation(
- &mut self,
- pending_obligation: &mut Self::Obligation,
- ) -> ProcessResult<Self::Obligation, Self::Error> {
- // If we were stalled on some unresolved variables, first check whether
- // any of them have been resolved; if not, don't bother doing more work
- // yet.
- let change = match pending_obligation.stalled_on.len() {
- // Match arms are in order of frequency, which matters because this
- // code is so hot. 1 and 0 dominate; 2+ is fairly rare.
- 1 => {
- let infer = pending_obligation.stalled_on[0];
- ShallowResolver::new(self.selcx.infcx()).shallow_resolve_changed(infer)
- }
- 0 => {
- // In this case we haven't changed, but wish to make a change.
- true
- }
- _ => {
- // This `for` loop was once a call to `all()`, but this lower-level
- // form was a perf win. See #64545 for details.
- (|| {
- for &infer in &pending_obligation.stalled_on {
- if ShallowResolver::new(self.selcx.infcx()).shallow_resolve_changed(infer) {
- return true;
- }
- }
- false
- })()
- }
- };
-
- if !change {
- debug!(
- "process_predicate: pending obligation {:?} still stalled on {:?}",
- self.selcx.infcx().resolve_vars_if_possible(&pending_obligation.obligation),
- pending_obligation.stalled_on
- );
- return ProcessResult::Unchanged;
- }
-
- // This part of the code is much colder.
-
- pending_obligation.stalled_on.truncate(0);
-
- let obligation = &mut pending_obligation.obligation;
-
- if obligation.predicate.has_infer_types() {
- obligation.predicate =
- self.selcx.infcx().resolve_vars_if_possible(&obligation.predicate);
- }
-
- debug!("process_obligation: obligation = {:?} cause = {:?}", obligation, obligation.cause);
-
- fn infer_ty(ty: Ty<'tcx>) -> ty::InferTy {
- match ty.kind {
- ty::Infer(infer) => infer,
- _ => panic!(),
- }
- }
-
- match obligation.predicate {
- ty::Predicate::Trait(ref data, _) => {
- let trait_obligation = obligation.with(data.clone());
-
- if data.is_global() {
- // no type variables present, can use evaluation for better caching.
- // FIXME: consider caching errors too.
- if self.selcx.infcx().predicate_must_hold_considering_regions(&obligation) {
- debug!(
- "selecting trait `{:?}` at depth {} evaluated to holds",
- data, obligation.recursion_depth
- );
- return ProcessResult::Changed(vec![]);
- }
- }
-
- match self.selcx.select(&trait_obligation) {
- Ok(Some(vtable)) => {
- debug!(
- "selecting trait `{:?}` at depth {} yielded Ok(Some)",
- data, obligation.recursion_depth
- );
- ProcessResult::Changed(mk_pending(vtable.nested_obligations()))
- }
- Ok(None) => {
- debug!(
- "selecting trait `{:?}` at depth {} yielded Ok(None)",
- data, obligation.recursion_depth
- );
-
- // This is a bit subtle: for the most part, the
- // only reason we can fail to make progress on
- // trait selection is because we don't have enough
- // information about the types in the trait. One
- // exception is that we sometimes haven't decided
- // what kind of closure a closure is. *But*, in
- // that case, it turns out, the type of the
- // closure will also change, because the closure
- // also includes references to its upvars as part
- // of its type, and those types are resolved at
- // the same time.
- //
- // FIXME(#32286) logic seems false if no upvars
- pending_obligation.stalled_on =
- trait_ref_type_vars(self.selcx, data.to_poly_trait_ref());
-
- debug!(
- "process_predicate: pending obligation {:?} now stalled on {:?}",
- self.selcx.infcx().resolve_vars_if_possible(obligation),
- pending_obligation.stalled_on
- );
-
- ProcessResult::Unchanged
- }
- Err(selection_err) => {
- info!(
- "selecting trait `{:?}` at depth {} yielded Err",
- data, obligation.recursion_depth
- );
-
- ProcessResult::Error(CodeSelectionError(selection_err))
- }
- }
- }
-
- ty::Predicate::RegionOutlives(ref binder) => {
- match self.selcx.infcx().region_outlives_predicate(&obligation.cause, binder) {
- Ok(()) => ProcessResult::Changed(vec![]),
- Err(_) => ProcessResult::Error(CodeSelectionError(Unimplemented)),
- }
- }
-
- ty::Predicate::TypeOutlives(ref binder) => {
- // Check if there are higher-ranked vars.
- match binder.no_bound_vars() {
- // If there are, inspect the underlying type further.
- None => {
- // Convert from `Binder<OutlivesPredicate<Ty, Region>>` to `Binder<Ty>`.
- let binder = binder.map_bound_ref(|pred| pred.0);
-
- // Check if the type has any bound vars.
- match binder.no_bound_vars() {
- // If so, this obligation is an error (for now). Eventually we should be
- // able to support additional cases here, like `for<'a> &'a str: 'a`.
- // NOTE: this is duplicate-implemented between here and fulfillment.
- None => ProcessResult::Error(CodeSelectionError(Unimplemented)),
- // Otherwise, we have something of the form
- // `for<'a> T: 'a where 'a not in T`, which we can treat as
- // `T: 'static`.
- Some(t_a) => {
- let r_static = self.selcx.tcx().lifetimes.re_static;
- if self.register_region_obligations {
- self.selcx.infcx().register_region_obligation_with_cause(
- t_a,
- r_static,
- &obligation.cause,
- );
- }
- ProcessResult::Changed(vec![])
- }
- }
- }
- // If there aren't, register the obligation.
- Some(ty::OutlivesPredicate(t_a, r_b)) => {
- if self.register_region_obligations {
- self.selcx.infcx().register_region_obligation_with_cause(
- t_a,
- r_b,
- &obligation.cause,
- );
- }
- ProcessResult::Changed(vec![])
- }
- }
- }
-
- ty::Predicate::Projection(ref data) => {
- let project_obligation = obligation.with(data.clone());
- match project::poly_project_and_unify_type(self.selcx, &project_obligation) {
- Ok(None) => {
- let tcx = self.selcx.tcx();
- pending_obligation.stalled_on =
- trait_ref_type_vars(self.selcx, data.to_poly_trait_ref(tcx));
- ProcessResult::Unchanged
- }
- Ok(Some(os)) => ProcessResult::Changed(mk_pending(os)),
- Err(e) => ProcessResult::Error(CodeProjectionError(e)),
- }
- }
-
- ty::Predicate::ObjectSafe(trait_def_id) => {
- if !self.selcx.tcx().is_object_safe(trait_def_id) {
- ProcessResult::Error(CodeSelectionError(Unimplemented))
- } else {
- ProcessResult::Changed(vec![])
- }
- }
-
- ty::Predicate::ClosureKind(closure_def_id, closure_substs, kind) => {
- match self.selcx.infcx().closure_kind(closure_def_id, closure_substs) {
- Some(closure_kind) => {
- if closure_kind.extends(kind) {
- ProcessResult::Changed(vec![])
- } else {
- ProcessResult::Error(CodeSelectionError(Unimplemented))
- }
- }
- None => ProcessResult::Unchanged,
- }
- }
-
- ty::Predicate::WellFormed(ty) => {
- match wf::obligations(
- self.selcx.infcx(),
- obligation.param_env,
- obligation.cause.body_id,
- ty,
- obligation.cause.span,
- ) {
- None => {
- pending_obligation.stalled_on = vec![infer_ty(ty)];
- ProcessResult::Unchanged
- }
- Some(os) => ProcessResult::Changed(mk_pending(os)),
- }
- }
-
- ty::Predicate::Subtype(ref subtype) => {
- match self.selcx.infcx().subtype_predicate(
- &obligation.cause,
- obligation.param_env,
- subtype,
- ) {
- None => {
- // None means that both are unresolved.
- pending_obligation.stalled_on = vec![
- infer_ty(subtype.skip_binder().a),
- infer_ty(subtype.skip_binder().b),
- ];
- ProcessResult::Unchanged
- }
- Some(Ok(ok)) => ProcessResult::Changed(mk_pending(ok.obligations)),
- Some(Err(err)) => {
- let expected_found = ExpectedFound::new(
- subtype.skip_binder().a_is_expected,
- subtype.skip_binder().a,
- subtype.skip_binder().b,
- );
- ProcessResult::Error(FulfillmentErrorCode::CodeSubtypeError(
- expected_found,
- err,
- ))
- }
- }
- }
-
- ty::Predicate::ConstEvaluatable(def_id, substs) => {
- match self.selcx.infcx().const_eval_resolve(
- obligation.param_env,
- def_id,
- substs,
- None,
- Some(obligation.cause.span),
- ) {
- Ok(_) => ProcessResult::Changed(vec![]),
- Err(err) => ProcessResult::Error(CodeSelectionError(ConstEvalFailure(err))),
- }
- }
- }
- }
-
- fn process_backedge<'c, I>(
- &mut self,
- cycle: I,
- _marker: PhantomData<&'c PendingPredicateObligation<'tcx>>,
- ) where
- I: Clone + Iterator<Item = &'c PendingPredicateObligation<'tcx>>,
- {
- if self.selcx.coinductive_match(cycle.clone().map(|s| s.obligation.predicate)) {
- debug!("process_child_obligations: coinductive match");
- } else {
- let cycle: Vec<_> = cycle.map(|c| c.obligation.clone()).collect();
- self.selcx.infcx().report_overflow_error_cycle(&cycle);
- }
- }
-}
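Editor's note: `process_obligation` above opens with a cheap staleness check, with match arms ordered by case frequency (1, then 0, then 2+). A stand-alone sketch of that fast path follows, with plain integers standing in for inference variables.

```rust
/// Fast path mirroring the entry check of `process_obligation`: an item
/// stalled on some variables is only re-processed once at least one of them
/// has been resolved since the last attempt.
pub fn should_reprocess(stalled_on: &[u32], resolved: &[u32]) -> bool {
    match stalled_on.len() {
        // Nothing recorded yet: we have not tried, so do the work.
        0 => true,
        // The common single-variable case avoids iterator overhead.
        1 => resolved.contains(&stalled_on[0]),
        // Rare: several variables; any one resolving is enough.
        _ => stalled_on.iter().any(|v| resolved.contains(v)),
    }
}
```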
-
-/// Returns the set of type variables contained in a trait ref
-fn trait_ref_type_vars<'a, 'tcx>(
- selcx: &mut SelectionContext<'a, 'tcx>,
- t: ty::PolyTraitRef<'tcx>,
-) -> Vec<ty::InferTy> {
- t.skip_binder() // ok b/c this check doesn't care about regions
- .input_types()
- .map(|t| selcx.infcx().resolve_vars_if_possible(&t))
- .filter(|t| t.has_infer_types())
- .flat_map(|t| t.walk())
- .filter_map(|t| match t.kind {
- ty::Infer(infer) => Some(infer),
- _ => None,
- })
- .collect()
-}
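Editor's note: `trait_ref_type_vars` above walks the (resolved) input types and keeps only the `ty::Infer` leaves. The same walk over a toy type tree looks like this; the `Ty` enum here is invented for illustration.

```rust
/// Toy type tree with inference variables as leaves.
pub enum Ty {
    Int,
    Infer(u32), // an unresolved inference variable, like `_#0t`
    Ref(Box<Ty>),
    Tuple(Vec<Ty>),
}

/// Collect every inference variable contained in `ty`, in walk order.
pub fn infer_vars(ty: &Ty) -> Vec<u32> {
    match ty {
        Ty::Int => vec![],
        Ty::Infer(v) => vec![*v],
        Ty::Ref(inner) => infer_vars(inner),
        Ty::Tuple(parts) => parts.iter().flat_map(infer_vars).collect(),
    }
}
```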
-
-fn to_fulfillment_error<'tcx>(
- error: Error<PendingPredicateObligation<'tcx>, FulfillmentErrorCode<'tcx>>,
-) -> FulfillmentError<'tcx> {
- let obligation = error.backtrace.into_iter().next().unwrap().obligation;
- FulfillmentError::new(obligation, error.error)
-}
+++ /dev/null
-//! Miscellaneous type-system utilities that are too small to deserve their own modules.
-
-use crate::infer::TyCtxtInferExt;
-use crate::traits::{self, ObligationCause};
-
-use rustc::ty::{self, Ty, TyCtxt, TypeFoldable};
-use rustc_hir as hir;
-
-#[derive(Clone)]
-pub enum CopyImplementationError<'tcx> {
- InfrigingFields(Vec<&'tcx ty::FieldDef>),
- NotAnAdt,
- HasDestructor,
-}
-
-pub fn can_type_implement_copy(
- tcx: TyCtxt<'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- self_type: Ty<'tcx>,
-) -> Result<(), CopyImplementationError<'tcx>> {
- // FIXME: (@jroesch) float this code up
- tcx.infer_ctxt().enter(|infcx| {
- let (adt, substs) = match self_type.kind {
- // These types used to have a builtin impl.
- // Now libcore provides that impl.
- ty::Uint(_)
- | ty::Int(_)
- | ty::Bool
- | ty::Float(_)
- | ty::Char
- | ty::RawPtr(..)
- | ty::Never
- | ty::Ref(_, _, hir::Mutability::Not) => return Ok(()),
-
- ty::Adt(adt, substs) => (adt, substs),
-
- _ => return Err(CopyImplementationError::NotAnAdt),
- };
-
- let mut infringing = Vec::new();
- for variant in &adt.variants {
- for field in &variant.fields {
- let ty = field.ty(tcx, substs);
- if ty.references_error() {
- continue;
- }
- let span = tcx.def_span(field.did);
- let cause = ObligationCause { span, ..ObligationCause::dummy() };
- let ctx = traits::FulfillmentContext::new();
- match traits::fully_normalize(&infcx, ctx, cause, param_env, &ty) {
- Ok(ty) => {
- if !infcx.type_is_copy_modulo_regions(param_env, ty, span) {
- infringing.push(field);
- }
- }
- Err(errors) => {
- infcx.report_fulfillment_errors(&errors, None, false);
- }
- };
- }
- }
- if !infringing.is_empty() {
- return Err(CopyImplementationError::InfrigingFields(infringing));
- }
- if adt.has_dtor(tcx) {
- return Err(CopyImplementationError::HasDestructor);
- }
-
- Ok(())
- })
-}
//!
//! [rustc guide]: https://rust-lang.github.io/rustc-guide/traits/resolution.html
-#[allow(dead_code)]
-pub mod auto_trait;
-mod chalk_fulfill;
-pub mod codegen;
-mod coherence;
mod engine;
pub mod error_reporting;
-mod fulfill;
-pub mod misc;
-mod object_safety;
-mod on_unimplemented;
mod project;
-pub mod query;
-mod select;
-mod specialize;
mod structural_impls;
-mod structural_match;
mod util;
-pub mod wf;
-use crate::infer::outlives::env::OutlivesEnvironment;
-use crate::infer::{InferCtxt, SuppressRegionErrors, TyCtxtInferExt};
-use rustc::middle::region;
use rustc::ty::error::{ExpectedFound, TypeError};
-use rustc::ty::fold::TypeFoldable;
-use rustc::ty::subst::{InternalSubsts, SubstsRef};
-use rustc::ty::{self, GenericParamDefKind, ToPredicate, Ty, TyCtxt, WithConstness};
-use rustc::util::common::ErrorReported;
+use rustc::ty::{self, Ty};
use rustc_hir as hir;
-use rustc_hir::def_id::DefId;
-use rustc_span::{Span, DUMMY_SP};
-
-use std::fmt::Debug;
+use rustc_span::Span;
pub use self::FulfillmentErrorCode::*;
pub use self::ObligationCauseCode::*;
pub use self::SelectionError::*;
pub use self::Vtable::*;
-pub use self::coherence::{add_placeholder_note, orphan_check, overlapping_impls};
-pub use self::coherence::{OrphanCheckErr, OverlapResult};
pub use self::engine::{TraitEngine, TraitEngineExt};
-pub use self::fulfill::{FulfillmentContext, PendingPredicateObligation};
-pub use self::object_safety::astconv_object_safety_violations;
-pub use self::object_safety::is_vtable_safe_method;
-pub use self::object_safety::MethodViolationCode;
-pub use self::object_safety::ObjectSafetyViolation;
-pub use self::on_unimplemented::{OnUnimplementedDirective, OnUnimplementedNote};
pub use self::project::MismatchedProjectionTypes;
pub use self::project::{
- normalize, normalize_projection_type, normalize_to, poly_project_and_unify_type,
-};
-pub use self::project::{Normalized, ProjectionCache, ProjectionCacheSnapshot, Reveal};
-pub use self::select::{EvaluationCache, SelectionCache, SelectionContext};
-pub use self::select::{EvaluationResult, IntercrateAmbiguityCause, OverflowError};
-pub use self::specialize::find_associated_item;
-pub use self::specialize::specialization_graph::FutureCompatOverlapError;
-pub use self::specialize::specialization_graph::FutureCompatOverlapErrorKind;
-pub use self::specialize::{specialization_graph, translate_substs, OverlapError};
-pub use self::structural_match::search_for_structural_match_violation;
-pub use self::structural_match::type_marked_structural;
-pub use self::structural_match::NonStructuralMatchTy;
-pub use self::util::{elaborate_predicates, elaborate_trait_ref, elaborate_trait_refs};
-pub use self::util::{expand_trait_aliases, TraitAliasExpander};
-pub use self::util::{
- get_vtable_index_of_object_method, impl_is_default, impl_item_is_final,
- predicate_for_trait_def, upcast_choices,
-};
-pub use self::util::{
- supertrait_def_ids, supertraits, transitive_bounds, SupertraitDefIds, Supertraits,
-};
-
-pub use self::chalk_fulfill::{
- CanonicalGoal as ChalkCanonicalGoal, FulfillmentContext as ChalkFulfillmentContext,
+ Normalized, NormalizedTy, ProjectionCache, ProjectionCacheEntry, ProjectionCacheKey,
+ ProjectionCacheSnapshot, Reveal,
};
+crate use self::util::elaborate_predicates;
pub use rustc::traits::*;
-/// Whether to skip the leak check, as part of a future compatibility warning step.
-#[derive(Copy, Clone, PartialEq, Eq, Debug)]
-pub enum SkipLeakCheck {
- Yes,
- No,
-}
-
-impl SkipLeakCheck {
- fn is_yes(self) -> bool {
- self == SkipLeakCheck::Yes
- }
-}
-
-/// The "default" for skip-leak-check corresponds to the current
-/// behavior (do not skip the leak check) -- not the behavior we are
-/// transitioning into.
-impl Default for SkipLeakCheck {
- fn default() -> Self {
- SkipLeakCheck::No
- }
-}
-
-/// The mode that trait queries run in.
-#[derive(Copy, Clone, PartialEq, Eq, Debug)]
-pub enum TraitQueryMode {
- // Standard/un-canonicalized queries get accurate
- // spans etc. passed in and hence can do reasonable
- // error reporting on their own.
- Standard,
- // Canonicalized queries get dummy spans and hence
- // must generally propagate errors to
- // pre-canonicalization callsites.
- Canonical,
-}
-
/// An `Obligation` represents some trait reference (e.g., `int: Eq`) for
/// which the vtable must be found. The process of finding a vtable is
/// called "resolving" the `Obligation`. This process consists of
CodeAmbiguity,
}
-/// Creates predicate obligations from the generic bounds.
-pub fn predicates_for_generics<'tcx>(
- cause: ObligationCause<'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- generic_bounds: &ty::InstantiatedPredicates<'tcx>,
-) -> PredicateObligations<'tcx> {
- util::predicates_for_generics(cause, 0, param_env, generic_bounds)
-}
-
-/// Determines whether the type `ty` is known to meet `bound` and
-/// returns true if so. Returns false if `ty` either does not meet
-/// `bound` or is not known to meet bound (note that this is
-/// conservative towards *no impl*, which is the opposite of the
-/// `evaluate` methods).
-pub fn type_known_to_meet_bound_modulo_regions<'a, 'tcx>(
- infcx: &InferCtxt<'a, 'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- ty: Ty<'tcx>,
- def_id: DefId,
- span: Span,
-) -> bool {
- debug!(
- "type_known_to_meet_bound_modulo_regions(ty={:?}, bound={:?})",
- ty,
- infcx.tcx.def_path_str(def_id)
- );
-
- let trait_ref = ty::TraitRef { def_id, substs: infcx.tcx.mk_substs_trait(ty, &[]) };
- let obligation = Obligation {
- param_env,
- cause: ObligationCause::misc(span, hir::DUMMY_HIR_ID),
- recursion_depth: 0,
- predicate: trait_ref.without_const().to_predicate(),
- };
-
- let result = infcx.predicate_must_hold_modulo_regions(&obligation);
- debug!(
- "type_known_to_meet_ty={:?} bound={} => {:?}",
- ty,
- infcx.tcx.def_path_str(def_id),
- result
- );
-
- if result && (ty.has_infer_types() || ty.has_closure_types()) {
- // Because of inference "guessing", selection can sometimes claim
- // to succeed while the success requires a guess. To ensure
- // this function's result remains infallible, we must confirm
- // that guess. While imperfect, I believe this is sound.
-
- // The handling of regions in this area of the code is terrible,
- // see issue #29149. We should be able to improve on this with
- // NLL.
- let mut fulfill_cx = FulfillmentContext::new_ignoring_regions();
-
- // We can use a dummy node-id here because we won't pay any mind
- // to region obligations that arise (there shouldn't really be any
- // anyhow).
- let cause = ObligationCause::misc(span, hir::DUMMY_HIR_ID);
-
- fulfill_cx.register_bound(infcx, param_env, ty, def_id, cause);
-
- // Note: we only assume something is `Copy` if we can
- // *definitively* show that it implements `Copy`. Otherwise,
- // assume it is move; linear is always ok.
- match fulfill_cx.select_all_or_error(infcx) {
- Ok(()) => {
- debug!(
- "type_known_to_meet_bound_modulo_regions: ty={:?} bound={} success",
- ty,
- infcx.tcx.def_path_str(def_id)
- );
- true
- }
- Err(e) => {
- debug!(
- "type_known_to_meet_bound_modulo_regions: ty={:?} bound={} errors={:?}",
- ty,
- infcx.tcx.def_path_str(def_id),
- e
- );
- false
- }
- }
- } else {
- result
- }
-}
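Editor's note: as the doc comment above says, `type_known_to_meet_bound_modulo_regions` is conservative towards "no impl": it answers `true` only when the bound definitely holds, folding both "definitely not" and "unknown" into `false`, the opposite bias from the `evaluate` methods. A sketch of that collapse:

```rust
/// Possible outcomes of evaluating a bound, simplified to three cases.
#[derive(Clone, Copy, PartialEq)]
pub enum Eval {
    Holds,
    DoesNotHold,
    Ambiguous,
}

/// Conservative-towards-"no" collapse: only a definite success counts, and
/// ambiguity is treated the same as failure.
pub fn known_to_hold(result: Eval) -> bool {
    result == Eval::Holds
}
```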
-
-fn do_normalize_predicates<'tcx>(
- tcx: TyCtxt<'tcx>,
- region_context: DefId,
- cause: ObligationCause<'tcx>,
- elaborated_env: ty::ParamEnv<'tcx>,
- predicates: Vec<ty::Predicate<'tcx>>,
-) -> Result<Vec<ty::Predicate<'tcx>>, ErrorReported> {
- debug!(
- "do_normalize_predicates(predicates={:?}, region_context={:?}, cause={:?})",
- predicates, region_context, cause,
- );
- let span = cause.span;
- tcx.infer_ctxt().enter(|infcx| {
- // FIXME. We should really... do something with these region
- // obligations. But this call just continues the older
- // behavior (i.e., doesn't cause any new bugs), and it would
- // take some further refactoring to actually solve them. In
- // particular, we would have to handle implied bounds
- // properly, and that code is currently largely confined to
- // regionck (though I made some efforts to extract it
- // out). -nmatsakis
- //
- // @arielby: In any case, these obligations are checked
- // by wfcheck anyway, so I'm not sure we have to check
- // them here too, and we will remove this function when
- // we move over to lazy normalization *anyway*.
- let fulfill_cx = FulfillmentContext::new_ignoring_regions();
- let predicates =
- match fully_normalize(&infcx, fulfill_cx, cause, elaborated_env, &predicates) {
- Ok(predicates) => predicates,
- Err(errors) => {
- infcx.report_fulfillment_errors(&errors, None, false);
- return Err(ErrorReported);
- }
- };
-
- debug!("do_normalize_predicates: normalized predicates = {:?}", predicates);
-
- let region_scope_tree = region::ScopeTree::default();
-
- // We can use the `elaborated_env` here; the region code only
- // cares about declarations like `'a: 'b`.
- let outlives_env = OutlivesEnvironment::new(elaborated_env);
-
- infcx.resolve_regions_and_report_errors(
- region_context,
- ®ion_scope_tree,
- &outlives_env,
- SuppressRegionErrors::default(),
- );
-
- let predicates = match infcx.fully_resolve(&predicates) {
- Ok(predicates) => predicates,
- Err(fixup_err) => {
- // If we encounter a fixup error, it means that some type
- // variable wound up unconstrained. I actually don't know
- // if this can happen, and I certainly don't expect it to
- // happen often, but if it did happen it probably
- // represents a legitimate failure due to some kind of
- // unconstrained variable, and it seems better not to ICE,
- // all things considered.
- tcx.sess.span_err(span, &fixup_err.to_string());
- return Err(ErrorReported);
- }
- };
- if predicates.has_local_value() {
- // FIXME: shouldn't we, you know, actually report an error here? or an ICE?
- Err(ErrorReported)
- } else {
- Ok(predicates)
- }
- })
-}
-
-// FIXME: this is gonna need to be removed ...
-/// Normalizes the parameter environment, reporting errors if they occur.
-pub fn normalize_param_env_or_error<'tcx>(
- tcx: TyCtxt<'tcx>,
- region_context: DefId,
- unnormalized_env: ty::ParamEnv<'tcx>,
- cause: ObligationCause<'tcx>,
-) -> ty::ParamEnv<'tcx> {
- // I'm not wild about reporting errors here; I'd prefer to
- // have the errors get reported at a defined place (e.g.,
- // during typeck). Instead I have all parameter
- // environments, in effect, going through this function
- // and hence potentially reporting errors. This ensures of
- // course that we never forget to normalize (the
- // alternative seemed like it would involve a lot of
- // manual invocations of this fn -- and then we'd have to
- // deal with the errors at each of those sites).
- //
- // In any case, in practice, typeck constructs all the
- // parameter environments once for every fn as it goes,
- // and errors will get reported then; so after typeck we
- // can be sure that no errors should occur.
-
- debug!(
- "normalize_param_env_or_error(region_context={:?}, unnormalized_env={:?}, cause={:?})",
- region_context, unnormalized_env, cause
- );
-
- let mut predicates: Vec<_> =
- util::elaborate_predicates(tcx, unnormalized_env.caller_bounds.to_vec()).collect();
-
- debug!("normalize_param_env_or_error: elaborated-predicates={:?}", predicates);
-
- let elaborated_env = ty::ParamEnv::new(
- tcx.intern_predicates(&predicates),
- unnormalized_env.reveal,
- unnormalized_env.def_id,
- );
-
- // HACK: we are trying to normalize the param-env inside *itself*. The problem is that
- // normalization expects its param-env to be already normalized, which means we have
- // a circularity.
- //
- // The way we handle this is by normalizing the param-env inside an unnormalized version
- // of the param-env, which means that if the param-env contains unnormalized projections,
- // we'll have some normalization failures. This is unfortunate.
- //
- // Lazy normalization would basically handle this by treating just the
- // normalizing-a-trait-ref-requires-itself cycles as evaluation failures.
- //
- // Inferred outlives bounds can create a lot of `TypeOutlives` predicates for associated
- // types, so to make the situation less bad, we normalize all the predicates *but*
- // the `TypeOutlives` predicates first inside the unnormalized parameter environment, and
- // then we normalize the `TypeOutlives` bounds inside the normalized parameter environment.
- //
- // This works fairly well because trait matching does not actually care about param-env
- // TypeOutlives predicates - these are normally used by regionck.
- let outlives_predicates: Vec<_> = predicates
- .drain_filter(|predicate| match predicate {
- ty::Predicate::TypeOutlives(..) => true,
- _ => false,
- })
- .collect();
-
- debug!(
- "normalize_param_env_or_error: predicates=(non-outlives={:?}, outlives={:?})",
- predicates, outlives_predicates
- );
- let non_outlives_predicates = match do_normalize_predicates(
- tcx,
- region_context,
- cause.clone(),
- elaborated_env,
- predicates,
- ) {
- Ok(predicates) => predicates,
- // An unnormalized env is better than nothing.
- Err(ErrorReported) => {
- debug!("normalize_param_env_or_error: errored resolving non-outlives predicates");
- return elaborated_env;
- }
- };
-
- debug!("normalize_param_env_or_error: non-outlives predicates={:?}", non_outlives_predicates);
-
- // Not sure whether it is better to include the unnormalized TypeOutlives predicates
- // here. I believe they should not matter, because we are ignoring TypeOutlives param-env
- // predicates here anyway. Keeping them here anyway because it seems safer.
- let outlives_env: Vec<_> =
- non_outlives_predicates.iter().chain(&outlives_predicates).cloned().collect();
- let outlives_env =
- ty::ParamEnv::new(tcx.intern_predicates(&outlives_env), unnormalized_env.reveal, None);
- let outlives_predicates = match do_normalize_predicates(
- tcx,
- region_context,
- cause,
- outlives_env,
- outlives_predicates,
- ) {
- Ok(predicates) => predicates,
- // An unnormalized env is better than nothing.
- Err(ErrorReported) => {
- debug!("normalize_param_env_or_error: errored resolving outlives predicates");
- return elaborated_env;
- }
- };
- debug!("normalize_param_env_or_error: outlives predicates={:?}", outlives_predicates);
-
- let mut predicates = non_outlives_predicates;
- predicates.extend(outlives_predicates);
- debug!("normalize_param_env_or_error: final predicates={:?}", predicates);
- ty::ParamEnv::new(
- tcx.intern_predicates(&predicates),
- unnormalized_env.reveal,
- unnormalized_env.def_id,
- )
-}
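Editor's note: `normalize_param_env_or_error` above first drains the `TypeOutlives` predicates out of the list, normalizes the remainder, and only then normalizes the outlives set in the resulting environment. A stable-Rust sketch of the split step follows; strings stand in for predicates, and the textual `": '"` test is a deliberately naive stand-in for matching `ty::Predicate::TypeOutlives(..)`.

```rust
/// Split a predicate list into (non-outlives, outlives), mirroring the
/// `drain_filter` step; `partition` is the stable-Rust equivalent here.
pub fn split_outlives(
    preds: Vec<&'static str>,
) -> (Vec<&'static str>, Vec<&'static str>) {
    // Anything shaped like `T: 'a` is treated as an outlives predicate.
    preds.into_iter().partition(|p| !p.contains(": '"))
}
```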
-
-pub fn fully_normalize<'a, 'tcx, T>(
- infcx: &InferCtxt<'a, 'tcx>,
- mut fulfill_cx: FulfillmentContext<'tcx>,
- cause: ObligationCause<'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- value: &T,
-) -> Result<T, Vec<FulfillmentError<'tcx>>>
-where
- T: TypeFoldable<'tcx>,
-{
- debug!("fully_normalize(value={:?})", value);
- let selcx = &mut SelectionContext::new(infcx);
- let Normalized { value: normalized_value, obligations } =
- project::normalize(selcx, param_env, cause, value);
- debug!(
- "fully_normalize: normalized_value={:?} obligations={:?}",
- normalized_value, obligations
- );
- for obligation in obligations {
- fulfill_cx.register_predicate_obligation(selcx.infcx(), obligation);
- }
-
- debug!("fully_normalize: select_all_or_error start");
- fulfill_cx.select_all_or_error(infcx)?;
- debug!("fully_normalize: select_all_or_error complete");
- let resolved_value = infcx.resolve_vars_if_possible(&normalized_value);
- debug!("fully_normalize: resolved_value={:?}", resolved_value);
- Ok(resolved_value)
-}
-
-/// Normalizes the predicates and checks whether they hold in an empty
-/// environment. If this returns false, then either normalize
-/// encountered an error or one of the predicates did not hold. Used
-/// when creating vtables to check for unsatisfiable methods.
-pub fn normalize_and_test_predicates<'tcx>(
- tcx: TyCtxt<'tcx>,
- predicates: Vec<ty::Predicate<'tcx>>,
-) -> bool {
- debug!("normalize_and_test_predicates(predicates={:?})", predicates);
-
- let result = tcx.infer_ctxt().enter(|infcx| {
- let param_env = ty::ParamEnv::reveal_all();
- let mut selcx = SelectionContext::new(&infcx);
- let mut fulfill_cx = FulfillmentContext::new();
- let cause = ObligationCause::dummy();
- let Normalized { value: predicates, obligations } =
- normalize(&mut selcx, param_env, cause.clone(), &predicates);
- for obligation in obligations {
- fulfill_cx.register_predicate_obligation(&infcx, obligation);
- }
- for predicate in predicates {
- let obligation = Obligation::new(cause.clone(), param_env, predicate);
- fulfill_cx.register_predicate_obligation(&infcx, obligation);
- }
-
- fulfill_cx.select_all_or_error(&infcx).is_ok()
- });
- debug!("normalize_and_test_predicates(predicates={:?}) = {:?}", predicates, result);
- result
-}
-
-fn substitute_normalize_and_test_predicates<'tcx>(
- tcx: TyCtxt<'tcx>,
- key: (DefId, SubstsRef<'tcx>),
-) -> bool {
- debug!("substitute_normalize_and_test_predicates(key={:?})", key);
-
- let predicates = tcx.predicates_of(key.0).instantiate(tcx, key.1).predicates;
- let result = normalize_and_test_predicates(tcx, predicates);
-
- debug!("substitute_normalize_and_test_predicates(key={:?}) = {:?}", key, result);
- result
-}
-
-/// Given a trait `trait_ref`, iterates the vtable entries
-/// that come from `trait_ref`, including its supertraits.
-#[inline] // FIXME(#35870): avoid closures being unexported due to `impl Trait`.
-fn vtable_methods<'tcx>(
- tcx: TyCtxt<'tcx>,
- trait_ref: ty::PolyTraitRef<'tcx>,
-) -> &'tcx [Option<(DefId, SubstsRef<'tcx>)>] {
- debug!("vtable_methods({:?})", trait_ref);
-
- tcx.arena.alloc_from_iter(supertraits(tcx, trait_ref).flat_map(move |trait_ref| {
- let trait_methods = tcx
- .associated_items(trait_ref.def_id())
- .in_definition_order()
- .filter(|item| item.kind == ty::AssocKind::Method);
-
- // Now list each method's DefId and InternalSubsts (for within its trait).
- // If the method can never be called from this object, produce None.
- trait_methods.map(move |trait_method| {
- debug!("vtable_methods: trait_method={:?}", trait_method);
- let def_id = trait_method.def_id;
-
- // Some methods cannot be called on an object; skip those.
- if !is_vtable_safe_method(tcx, trait_ref.def_id(), &trait_method) {
- debug!("vtable_methods: not vtable safe");
- return None;
- }
-
- // The method may have some early-bound lifetimes; add regions for those.
- let substs = trait_ref.map_bound(|trait_ref| {
- InternalSubsts::for_item(tcx, def_id, |param, _| match param.kind {
- GenericParamDefKind::Lifetime => tcx.lifetimes.re_erased.into(),
- GenericParamDefKind::Type { .. } | GenericParamDefKind::Const => {
- trait_ref.substs[param.index as usize]
- }
- })
- });
-
- // The trait type may have higher-ranked lifetimes in it;
- // erase them if they appear, so that we get the type
- // at some particular call site.
- let substs =
- tcx.normalize_erasing_late_bound_regions(ty::ParamEnv::reveal_all(), &substs);
-
- // It's possible that the method relies on where-clauses that
- // do not hold for this particular set of type parameters.
- // Note that this method could then never be called, so we
- // do not want to try and codegen it, in that case (see #23435).
- let predicates = tcx.predicates_of(def_id).instantiate_own(tcx, substs);
- if !normalize_and_test_predicates(tcx, predicates.predicates) {
- debug!("vtable_methods: predicates do not hold");
- return None;
- }
-
- Some((def_id, substs))
- })
- }))
-}
-
impl<'tcx, O> Obligation<'tcx, O> {
pub fn new(
cause: ObligationCause<'tcx>,
Obligation { cause, param_env, recursion_depth: 0, predicate }
}
- fn with_depth(
+ pub fn with_depth(
cause: ObligationCause<'tcx>,
recursion_depth: usize,
param_env: ty::ParamEnv<'tcx>,
}
impl<'tcx> FulfillmentError<'tcx> {
- fn new(
+ pub fn new(
obligation: PredicateObligation<'tcx>,
code: FulfillmentErrorCode<'tcx>,
) -> FulfillmentError<'tcx> {
- FulfillmentError { obligation: obligation, code: code, points_at_arg_span: false }
+ FulfillmentError { obligation, code, points_at_arg_span: false }
}
}
impl<'tcx> TraitObligation<'tcx> {
- fn self_ty(&self) -> ty::Binder<Ty<'tcx>> {
+ pub fn self_ty(&self) -> ty::Binder<Ty<'tcx>> {
self.predicate.map_bound(|p| p.self_ty())
}
}
-
-pub fn provide(providers: &mut ty::query::Providers<'_>) {
- object_safety::provide(providers);
- *providers = ty::query::Providers {
- specialization_graph_of: specialize::specialization_graph_provider,
- specializes: specialize::specializes,
- codegen_fulfill_obligation: codegen::codegen_fulfill_obligation,
- vtable_methods,
- substitute_normalize_and_test_predicates,
- ..*providers
- };
-}
+++ /dev/null
-//! "Object safety" refers to the ability for a trait to be converted
-//! to an object. In general, traits may only be converted to an
-//! object if all of their methods meet certain criteria. In particular,
-//! they must:
-//!
-//! - have a suitable receiver from which we can extract a vtable and coerce to a "thin" version
-//! that doesn't contain the vtable;
-//! - not reference the erased type `Self` except for in this receiver;
-//! - not have generic type parameters.
-
-use super::elaborate_predicates;
-
-use crate::infer::TyCtxtInferExt;
-use crate::traits::{self, Obligation, ObligationCause};
-use rustc::ty::subst::{InternalSubsts, Subst};
-use rustc::ty::{self, Predicate, ToPredicate, Ty, TyCtxt, TypeFoldable, WithConstness};
-use rustc_errors::Applicability;
-use rustc_hir as hir;
-use rustc_hir::def_id::DefId;
-use rustc_session::lint::builtin::WHERE_CLAUSES_OBJECT_SAFETY;
-use rustc_span::symbol::Symbol;
-use rustc_span::Span;
-use smallvec::SmallVec;
-
-use std::iter;
-
-pub use crate::traits::{MethodViolationCode, ObjectSafetyViolation};
-
-/// Returns the object safety violations that affect
-/// astconv -- currently, `Self` in supertraits. This is needed
-/// because `object_safety_violations` can't be used during
-/// type collection.
-pub fn astconv_object_safety_violations(
- tcx: TyCtxt<'_>,
- trait_def_id: DefId,
-) -> Vec<ObjectSafetyViolation> {
- debug_assert!(tcx.generics_of(trait_def_id).has_self);
- let violations = traits::supertrait_def_ids(tcx, trait_def_id)
- .map(|def_id| predicates_reference_self(tcx, def_id, true))
- .filter(|spans| !spans.is_empty())
- .map(|spans| ObjectSafetyViolation::SupertraitSelf(spans))
- .collect();
-
- debug!("astconv_object_safety_violations(trait_def_id={:?}) = {:?}", trait_def_id, violations);
-
- violations
-}
-
-fn object_safety_violations(tcx: TyCtxt<'_>, trait_def_id: DefId) -> Vec<ObjectSafetyViolation> {
- debug_assert!(tcx.generics_of(trait_def_id).has_self);
- debug!("object_safety_violations: {:?}", trait_def_id);
-
- traits::supertrait_def_ids(tcx, trait_def_id)
- .flat_map(|def_id| object_safety_violations_for_trait(tcx, def_id))
- .collect()
-}
-
-/// We say a method is *vtable safe* if it can be invoked on a trait
-/// object. Note that object-safe traits can have some
-/// non-vtable-safe methods, so long as they require `Self: Sized` or
-/// otherwise ensure that they cannot be used when `Self = Trait`.
-pub fn is_vtable_safe_method(tcx: TyCtxt<'_>, trait_def_id: DefId, method: &ty::AssocItem) -> bool {
- debug_assert!(tcx.generics_of(trait_def_id).has_self);
- debug!("is_vtable_safe_method({:?}, {:?})", trait_def_id, method);
- // Any method that has a `Self: Sized` bound cannot be called.
- if generics_require_sized_self(tcx, method.def_id) {
- return false;
- }
-
- match virtual_call_violation_for_method(tcx, trait_def_id, method) {
- None | Some(MethodViolationCode::WhereClauseReferencesSelf) => true,
- Some(_) => false,
- }
-}
-
-fn object_safety_violations_for_trait(
- tcx: TyCtxt<'_>,
- trait_def_id: DefId,
-) -> Vec<ObjectSafetyViolation> {
- // Check methods for violations.
- let mut violations: Vec<_> = tcx
- .associated_items(trait_def_id)
- .in_definition_order()
- .filter(|item| item.kind == ty::AssocKind::Method)
- .filter_map(|item| {
- object_safety_violation_for_method(tcx, trait_def_id, &item)
- .map(|(code, span)| ObjectSafetyViolation::Method(item.ident.name, code, span))
- })
- .filter(|violation| {
- if let ObjectSafetyViolation::Method(
- _,
- MethodViolationCode::WhereClauseReferencesSelf,
- span,
- ) = violation
- {
- // Using `CRATE_NODE_ID` is wrong, but it's hard to get a more precise id.
- // It's also hard to get a use site span, so we use the method definition span.
- tcx.struct_span_lint_hir(
- WHERE_CLAUSES_OBJECT_SAFETY,
- hir::CRATE_HIR_ID,
- *span,
- |lint| {
- let mut err = lint.build(&format!(
- "the trait `{}` cannot be made into an object",
- tcx.def_path_str(trait_def_id)
- ));
- let node = tcx.hir().get_if_local(trait_def_id);
- let msg = if let Some(hir::Node::Item(item)) = node {
- err.span_label(
- item.ident.span,
- "this trait cannot be made into an object...",
- );
- format!("...because {}", violation.error_msg())
- } else {
- format!(
- "the trait cannot be made into an object because {}",
- violation.error_msg()
- )
- };
- err.span_label(*span, &msg);
- match (node, violation.solution()) {
- (Some(_), Some((note, None))) => {
- err.help(¬e);
- }
- (Some(_), Some((note, Some((sugg, span))))) => {
- err.span_suggestion(
- span,
- ¬e,
- sugg,
- Applicability::MachineApplicable,
- );
- }
- // Only provide the help if it's a local trait, otherwise it's not actionable.
- _ => {}
- }
- err.emit();
- },
- );
- false
- } else {
- true
- }
- })
- .collect();
-
- // Check the trait itself.
- if trait_has_sized_self(tcx, trait_def_id) {
- // We don't want to include the requirement from `Sized` itself to be `Sized` in the list.
- let spans = get_sized_bounds(tcx, trait_def_id);
- violations.push(ObjectSafetyViolation::SizedSelf(spans));
- }
- let spans = predicates_reference_self(tcx, trait_def_id, false);
- if !spans.is_empty() {
- violations.push(ObjectSafetyViolation::SupertraitSelf(spans));
- }
-
- violations.extend(
- tcx.associated_items(trait_def_id)
- .in_definition_order()
- .filter(|item| item.kind == ty::AssocKind::Const)
- .map(|item| ObjectSafetyViolation::AssocConst(item.ident.name, item.ident.span)),
- );
-
- debug!(
- "object_safety_violations_for_trait(trait_def_id={:?}) = {:?}",
- trait_def_id, violations
- );
-
- violations
-}
-
-fn get_sized_bounds(tcx: TyCtxt<'_>, trait_def_id: DefId) -> SmallVec<[Span; 1]> {
- tcx.hir()
- .get_if_local(trait_def_id)
- .and_then(|node| match node {
- hir::Node::Item(hir::Item {
- kind: hir::ItemKind::Trait(.., generics, bounds, _),
- ..
- }) => Some(
- generics
- .where_clause
- .predicates
- .iter()
- .filter_map(|pred| {
- match pred {
- hir::WherePredicate::BoundPredicate(pred)
- if pred.bounded_ty.hir_id.owner_def_id() == trait_def_id =>
- {
- // Fetch spans for trait bounds that are Sized:
- // `trait T where Self: Pred`
- Some(pred.bounds.iter().filter_map(|b| match b {
- hir::GenericBound::Trait(
- trait_ref,
- hir::TraitBoundModifier::None,
- ) if trait_has_sized_self(
- tcx,
- trait_ref.trait_ref.trait_def_id(),
- ) =>
- {
- Some(trait_ref.span)
- }
- _ => None,
- }))
- }
- _ => None,
- }
- })
- .flatten()
- .chain(bounds.iter().filter_map(|b| match b {
- hir::GenericBound::Trait(trait_ref, hir::TraitBoundModifier::None)
- if trait_has_sized_self(tcx, trait_ref.trait_ref.trait_def_id()) =>
- {
- // Fetch spans for supertraits that are `Sized`: `trait T: Super`
- Some(trait_ref.span)
- }
- _ => None,
- }))
- .collect::<SmallVec<[Span; 1]>>(),
- ),
- _ => None,
- })
- .unwrap_or_else(SmallVec::new)
-}
-
-fn predicates_reference_self(
- tcx: TyCtxt<'_>,
- trait_def_id: DefId,
- supertraits_only: bool,
-) -> SmallVec<[Span; 1]> {
- let trait_ref = ty::Binder::dummy(ty::TraitRef::identity(tcx, trait_def_id));
- let predicates = if supertraits_only {
- tcx.super_predicates_of(trait_def_id)
- } else {
- tcx.predicates_of(trait_def_id)
- };
- let self_ty = tcx.types.self_param;
- let has_self_ty = |t: Ty<'_>| t.walk().any(|t| t == self_ty);
- predicates
- .predicates
- .iter()
- .map(|(predicate, sp)| (predicate.subst_supertrait(tcx, &trait_ref), sp))
- .filter_map(|(predicate, &sp)| {
- match predicate {
- ty::Predicate::Trait(ref data, _) => {
- // In the case of a trait predicate, we can skip the "self" type.
- if data.skip_binder().input_types().skip(1).any(has_self_ty) {
- Some(sp)
- } else {
- None
- }
- }
- ty::Predicate::Projection(ref data) => {
- // And similarly for projections. This should be redundant with
- // the previous check because any projection should have a
- // matching `Trait` predicate with the same inputs, but we do
- // the check to be safe.
- //
- // Note that we *do* allow projection *outputs* to contain
- // `self` (i.e., `trait Foo: Bar<Output=Self::Result> { type Result; }`),
- // we just require the user to specify *both* outputs
- // in the object type (i.e., `dyn Foo<Output=(), Result=()>`).
- //
- // This is ALT2 in issue #56288, see that for discussion of the
- // possible alternatives.
- if data
- .skip_binder()
- .projection_ty
- .trait_ref(tcx)
- .input_types()
- .skip(1)
- .any(has_self_ty)
- {
- Some(sp)
- } else {
- None
- }
- }
- ty::Predicate::WellFormed(..)
- | ty::Predicate::ObjectSafe(..)
- | ty::Predicate::TypeOutlives(..)
- | ty::Predicate::RegionOutlives(..)
- | ty::Predicate::ClosureKind(..)
- | ty::Predicate::Subtype(..)
- | ty::Predicate::ConstEvaluatable(..) => None,
- }
- })
- .collect()
-}
-
-fn trait_has_sized_self(tcx: TyCtxt<'_>, trait_def_id: DefId) -> bool {
- generics_require_sized_self(tcx, trait_def_id)
-}
-
-fn generics_require_sized_self(tcx: TyCtxt<'_>, def_id: DefId) -> bool {
- let sized_def_id = match tcx.lang_items().sized_trait() {
- Some(def_id) => def_id,
- None => {
- return false; /* No Sized trait, can't require it! */
- }
- };
-
- // Search for a predicate like `Self : Sized` amongst the trait bounds.
- let predicates = tcx.predicates_of(def_id);
- let predicates = predicates.instantiate_identity(tcx).predicates;
- elaborate_predicates(tcx, predicates).any(|predicate| match predicate {
- ty::Predicate::Trait(ref trait_pred, _) => {
- trait_pred.def_id() == sized_def_id && trait_pred.skip_binder().self_ty().is_param(0)
- }
- ty::Predicate::Projection(..)
- | ty::Predicate::Subtype(..)
- | ty::Predicate::RegionOutlives(..)
- | ty::Predicate::WellFormed(..)
- | ty::Predicate::ObjectSafe(..)
- | ty::Predicate::ClosureKind(..)
- | ty::Predicate::TypeOutlives(..)
- | ty::Predicate::ConstEvaluatable(..) => false,
- })
-}
-
-/// Returns `Some(_)` if this method makes the containing trait not object safe.
-fn object_safety_violation_for_method(
- tcx: TyCtxt<'_>,
- trait_def_id: DefId,
- method: &ty::AssocItem,
-) -> Option<(MethodViolationCode, Span)> {
- debug!("object_safety_violation_for_method({:?}, {:?})", trait_def_id, method);
- // Any method that has a `Self : Sized` requisite is otherwise
- // exempt from the regulations.
- if generics_require_sized_self(tcx, method.def_id) {
- return None;
- }
-
- let violation = virtual_call_violation_for_method(tcx, trait_def_id, method);
- // Get an accurate span depending on the violation.
- violation.map(|v| {
- let node = tcx.hir().get_if_local(method.def_id);
- let span = match (v, node) {
- (MethodViolationCode::ReferencesSelfInput(arg), Some(node)) => node
- .fn_decl()
- .and_then(|decl| decl.inputs.get(arg + 1))
- .map_or(method.ident.span, |arg| arg.span),
- (MethodViolationCode::UndispatchableReceiver, Some(node)) => node
- .fn_decl()
- .and_then(|decl| decl.inputs.get(0))
- .map_or(method.ident.span, |arg| arg.span),
- (MethodViolationCode::ReferencesSelfOutput, Some(node)) => {
- node.fn_decl().map_or(method.ident.span, |decl| decl.output.span())
- }
- _ => method.ident.span,
- };
- (v, span)
- })
-}
-
-/// Returns `Some(_)` if this method cannot be called on a trait
-/// object; this does not necessarily imply that the enclosing trait
-/// is not object safe, because the method might have a where clause
-/// `Self:Sized`.
-fn virtual_call_violation_for_method<'tcx>(
- tcx: TyCtxt<'tcx>,
- trait_def_id: DefId,
- method: &ty::AssocItem,
-) -> Option<MethodViolationCode> {
- // The method's first parameter must be named `self`
- if !method.method_has_self_argument {
- // We'll attempt to provide a structured suggestion for `Self: Sized`.
- let sugg =
- tcx.hir().get_if_local(method.def_id).as_ref().and_then(|node| node.generics()).map(
- |generics| match generics.where_clause.predicates {
- [] => (" where Self: Sized", generics.where_clause.span),
- [.., pred] => (", Self: Sized", pred.span().shrink_to_hi()),
- },
- );
- return Some(MethodViolationCode::StaticMethod(sugg));
- }
-
- let sig = tcx.fn_sig(method.def_id);
-
- for (i, input_ty) in sig.skip_binder().inputs()[1..].iter().enumerate() {
- if contains_illegal_self_type_reference(tcx, trait_def_id, input_ty) {
- return Some(MethodViolationCode::ReferencesSelfInput(i));
- }
- }
- if contains_illegal_self_type_reference(tcx, trait_def_id, sig.output().skip_binder()) {
- return Some(MethodViolationCode::ReferencesSelfOutput);
- }
-
- // We can't monomorphize things like `fn foo<A>(...)`.
- let own_counts = tcx.generics_of(method.def_id).own_counts();
- if own_counts.types + own_counts.consts != 0 {
- return Some(MethodViolationCode::Generic);
- }
-
- if tcx
- .predicates_of(method.def_id)
- .predicates
- .iter()
- // A trait object can't claim to live more than the concrete type,
- // so outlives predicates will always hold.
- .cloned()
- .filter(|(p, _)| p.to_opt_type_outlives().is_none())
- .collect::<Vec<_>>()
- // Do a shallow visit so that `contains_illegal_self_type_reference`
- // may apply its custom visiting.
- .visit_tys_shallow(|t| contains_illegal_self_type_reference(tcx, trait_def_id, t))
- {
- return Some(MethodViolationCode::WhereClauseReferencesSelf);
- }
-
- let receiver_ty =
- tcx.liberate_late_bound_regions(method.def_id, &sig.map_bound(|sig| sig.inputs()[0]));
-
- // Until `unsized_locals` is fully implemented, `self: Self` can't be dispatched on.
- // However, this is already considered object-safe. We allow it as a special case here.
- // FIXME(mikeyhew) get rid of this `if` statement once `receiver_is_dispatchable` allows
- // `Receiver: Unsize<Receiver[Self => dyn Trait]>`.
- if receiver_ty != tcx.types.self_param {
- if !receiver_is_dispatchable(tcx, method, receiver_ty) {
- return Some(MethodViolationCode::UndispatchableReceiver);
- } else {
- // Do sanity check to make sure the receiver actually has the layout of a pointer.
-
- use rustc::ty::layout::Abi;
-
- let param_env = tcx.param_env(method.def_id);
-
- let abi_of_ty = |ty: Ty<'tcx>| -> &Abi {
- match tcx.layout_of(param_env.and(ty)) {
- Ok(layout) => &layout.abi,
- Err(err) => bug!("error: {}\n while computing layout for type {:?}", err, ty),
- }
- };
-
- // e.g., `Rc<()>`
- let unit_receiver_ty =
- receiver_for_self_ty(tcx, receiver_ty, tcx.mk_unit(), method.def_id);
-
- match abi_of_ty(unit_receiver_ty) {
- &Abi::Scalar(..) => (),
- abi => {
- tcx.sess.delay_span_bug(
- tcx.def_span(method.def_id),
- &format!(
- "receiver when `Self = ()` should have a Scalar ABI; found {:?}",
- abi
- ),
- );
- }
- }
-
- let trait_object_ty =
- object_ty_for_trait(tcx, trait_def_id, tcx.mk_region(ty::ReStatic));
-
- // e.g., `Rc<dyn Trait>`
- let trait_object_receiver =
- receiver_for_self_ty(tcx, receiver_ty, trait_object_ty, method.def_id);
-
- match abi_of_ty(trait_object_receiver) {
- &Abi::ScalarPair(..) => (),
- abi => {
- tcx.sess.delay_span_bug(
- tcx.def_span(method.def_id),
- &format!(
- "receiver when `Self = {}` should have a ScalarPair ABI; \
- found {:?}",
- trait_object_ty, abi
- ),
- );
- }
- }
- }
- }
-
- None
-}
-
-/// Performs a type substitution to produce the version of `receiver_ty` when `Self = self_ty`.
-/// For example, for `receiver_ty = Rc<Self>` and `self_ty = Foo`, returns `Rc<Foo>`.
-fn receiver_for_self_ty<'tcx>(
- tcx: TyCtxt<'tcx>,
- receiver_ty: Ty<'tcx>,
- self_ty: Ty<'tcx>,
- method_def_id: DefId,
-) -> Ty<'tcx> {
- debug!("receiver_for_self_ty({:?}, {:?}, {:?})", receiver_ty, self_ty, method_def_id);
- let substs = InternalSubsts::for_item(tcx, method_def_id, |param, _| {
- if param.index == 0 { self_ty.into() } else { tcx.mk_param_from_def(param) }
- });
-
- let result = receiver_ty.subst(tcx, substs);
- debug!(
- "receiver_for_self_ty({:?}, {:?}, {:?}) = {:?}",
- receiver_ty, self_ty, method_def_id, result
- );
- result
-}
-
-/// Creates the object type for the current trait. For example,
-/// if the current trait is `Deref`, then this will be
-/// `dyn Deref<Target = Self::Target> + 'static`.
-fn object_ty_for_trait<'tcx>(
- tcx: TyCtxt<'tcx>,
- trait_def_id: DefId,
- lifetime: ty::Region<'tcx>,
-) -> Ty<'tcx> {
- debug!("object_ty_for_trait: trait_def_id={:?}", trait_def_id);
-
- let trait_ref = ty::TraitRef::identity(tcx, trait_def_id);
-
- let trait_predicate =
- ty::ExistentialPredicate::Trait(ty::ExistentialTraitRef::erase_self_ty(tcx, trait_ref));
-
- let mut associated_types = traits::supertraits(tcx, ty::Binder::dummy(trait_ref))
- .flat_map(|super_trait_ref| {
- tcx.associated_items(super_trait_ref.def_id())
- .in_definition_order()
- .map(move |item| (super_trait_ref, item))
- })
- .filter(|(_, item)| item.kind == ty::AssocKind::Type)
- .collect::<Vec<_>>();
-
- // existential predicates need to be in a specific order
- associated_types.sort_by_cached_key(|(_, item)| tcx.def_path_hash(item.def_id));
-
- let projection_predicates = associated_types.into_iter().map(|(super_trait_ref, item)| {
- // We *can* get bound lifetimes here in cases like
- // `trait MyTrait: for<'s> OtherTrait<&'s T, Output=bool>`.
- //
- // binder moved to (*)...
- let super_trait_ref = super_trait_ref.skip_binder();
- ty::ExistentialPredicate::Projection(ty::ExistentialProjection {
- ty: tcx.mk_projection(item.def_id, super_trait_ref.substs),
- item_def_id: item.def_id,
- substs: super_trait_ref.substs,
- })
- });
-
- let existential_predicates =
- tcx.mk_existential_predicates(iter::once(trait_predicate).chain(projection_predicates));
-
- let object_ty = tcx.mk_dynamic(
- // (*) ... binder re-introduced here
- ty::Binder::bind(existential_predicates),
- lifetime,
- );
-
- debug!("object_ty_for_trait: object_ty=`{}`", object_ty);
-
- object_ty
-}
-
-/// Checks the method's receiver (the `self` argument) can be dispatched on when `Self` is a
-/// trait object. We require that `DispatchableFromDyn` be implemented for the receiver type
-/// in the following way:
-/// - let `Receiver` be the type of the `self` argument, i.e `Self`, `&Self`, `Rc<Self>`,
-/// - require the following bound:
-///
-/// ```
-/// Receiver[Self => T]: DispatchFromDyn<Receiver[Self => dyn Trait]>
-/// ```
-///
-/// where `Foo[X => Y]` means "the same type as `Foo`, but with `X` replaced with `Y`"
-/// (substitution notation).
-///
-/// Some examples of receiver types and their required obligation:
-/// - `&'a mut self` requires `&'a mut Self: DispatchFromDyn<&'a mut dyn Trait>`,
-/// - `self: Rc<Self>` requires `Rc<Self>: DispatchFromDyn<Rc<dyn Trait>>`,
-/// - `self: Pin<Box<Self>>` requires `Pin<Box<Self>>: DispatchFromDyn<Pin<Box<dyn Trait>>>`.
-///
-/// The only case where the receiver is not dispatchable, but is still a valid receiver
-/// type (just not object-safe), is when there is more than one level of pointer indirection.
-/// E.g., `self: &&Self`, `self: &Rc<Self>`, `self: Box<Box<Self>>`. In these cases, there
-/// is no way, or at least no inexpensive way, to coerce the receiver from the version where
-/// `Self = dyn Trait` to the version where `Self = T`, where `T` is the unknown erased type
-/// contained by the trait object, because the object that needs to be coerced is behind
-/// a pointer.
-///
-/// In practice, we cannot use `dyn Trait` explicitly in the obligation because it would result
-/// in a new check that `Trait` is object safe, creating a cycle (until object_safe_for_dispatch
-/// is stabilized, see tracking issue https://github.com/rust-lang/rust/issues/43561).
-/// Instead, we fudge a little by introducing a new type parameter `U` such that
-/// `Self: Unsize<U>` and `U: Trait + ?Sized`, and use `U` in place of `dyn Trait`.
-/// Written as a chalk-style query:
-///
-/// forall (U: Trait + ?Sized) {
-/// if (Self: Unsize<U>) {
-/// Receiver: DispatchFromDyn<Receiver[Self => U]>
-/// }
-/// }
-///
-/// for `self: &'a mut Self`, this means `&'a mut Self: DispatchFromDyn<&'a mut U>`
-/// for `self: Rc<Self>`, this means `Rc<Self>: DispatchFromDyn<Rc<U>>`
-/// for `self: Pin<Box<Self>>`, this means `Pin<Box<Self>>: DispatchFromDyn<Pin<Box<U>>>`
-//
-// FIXME(mikeyhew) when unsized receivers are implemented as part of unsized rvalues, add this
-// fallback query: `Receiver: Unsize<Receiver[Self => U]>` to support receivers like
-// `self: Wrapper<Self>`.
-#[allow(dead_code)]
-fn receiver_is_dispatchable<'tcx>(
- tcx: TyCtxt<'tcx>,
- method: &ty::AssocItem,
- receiver_ty: Ty<'tcx>,
-) -> bool {
- debug!("receiver_is_dispatchable: method = {:?}, receiver_ty = {:?}", method, receiver_ty);
-
- let traits = (tcx.lang_items().unsize_trait(), tcx.lang_items().dispatch_from_dyn_trait());
- let (unsize_did, dispatch_from_dyn_did) = if let (Some(u), Some(cu)) = traits {
- (u, cu)
- } else {
- debug!("receiver_is_dispatchable: Missing Unsize or DispatchFromDyn traits");
- return false;
- };
-
- // the type `U` in the query
- // use a bogus type parameter to mimic a forall(U) query using u32::MAX for now.
- // FIXME(mikeyhew) this is a total hack. Once object_safe_for_dispatch is stabilized, we can
- // replace this with `dyn Trait`
- let unsized_self_ty: Ty<'tcx> =
- tcx.mk_ty_param(::std::u32::MAX, Symbol::intern("RustaceansAreAwesome"));
-
- // `Receiver[Self => U]`
- let unsized_receiver_ty =
- receiver_for_self_ty(tcx, receiver_ty, unsized_self_ty, method.def_id);
-
- // create a modified param env, with `Self: Unsize<U>` and `U: Trait` added to caller bounds
- // `U: ?Sized` is already implied here
- let param_env = {
- let mut param_env = tcx.param_env(method.def_id);
-
- // Self: Unsize<U>
- let unsize_predicate = ty::TraitRef {
- def_id: unsize_did,
- substs: tcx.mk_substs_trait(tcx.types.self_param, &[unsized_self_ty.into()]),
- }
- .without_const()
- .to_predicate();
-
- // U: Trait<Arg1, ..., ArgN>
- let trait_predicate = {
- let substs =
- InternalSubsts::for_item(tcx, method.container.assert_trait(), |param, _| {
- if param.index == 0 {
- unsized_self_ty.into()
- } else {
- tcx.mk_param_from_def(param)
- }
- });
-
- ty::TraitRef { def_id: unsize_did, substs }.without_const().to_predicate()
- };
-
- let caller_bounds: Vec<Predicate<'tcx>> = param_env
- .caller_bounds
- .iter()
- .cloned()
- .chain(iter::once(unsize_predicate))
- .chain(iter::once(trait_predicate))
- .collect();
-
- param_env.caller_bounds = tcx.intern_predicates(&caller_bounds);
-
- param_env
- };
-
- // Receiver: DispatchFromDyn<Receiver[Self => U]>
- let obligation = {
- let predicate = ty::TraitRef {
- def_id: dispatch_from_dyn_did,
- substs: tcx.mk_substs_trait(receiver_ty, &[unsized_receiver_ty.into()]),
- }
- .without_const()
- .to_predicate();
-
- Obligation::new(ObligationCause::dummy(), param_env, predicate)
- };
-
- tcx.infer_ctxt().enter(|ref infcx| {
- // the receiver is dispatchable iff the obligation holds
- infcx.predicate_must_hold_modulo_regions(&obligation)
- })
-}
-
-fn contains_illegal_self_type_reference<'tcx>(
- tcx: TyCtxt<'tcx>,
- trait_def_id: DefId,
- ty: Ty<'tcx>,
-) -> bool {
- // This is somewhat subtle. In general, we want to forbid
- // references to `Self` in the argument and return types,
- // since the value of `Self` is erased. However, there is one
- // exception: it is ok to reference `Self` in order to access
- // an associated type of the current trait, since we retain
- // the value of those associated types in the object type
- // itself.
- //
- // ```rust
- // trait SuperTrait {
- // type X;
- // }
- //
- // trait Trait : SuperTrait {
- // type Y;
- // fn foo(&self, x: Self) // bad
- // fn foo(&self) -> Self // bad
- // fn foo(&self) -> Option<Self> // bad
- // fn foo(&self) -> Self::Y // OK, desugars to next example
- // fn foo(&self) -> <Self as Trait>::Y // OK
- // fn foo(&self) -> Self::X // OK, desugars to next example
- // fn foo(&self) -> <Self as SuperTrait>::X // OK
- // }
- // ```
- //
- // However, it is not as simple as allowing `Self` in a projected
- // type, because there are illegal ways to use `Self` as well:
- //
- // ```rust
- // trait Trait : SuperTrait {
- // ...
- // fn foo(&self) -> <Self as SomeOtherTrait>::X;
- // }
- // ```
- //
- // Here we will not have the type of `X` recorded in the
- // object type, and we cannot resolve `Self as SomeOtherTrait`
- // without knowing what `Self` is.
-
- let mut supertraits: Option<Vec<ty::PolyTraitRef<'tcx>>> = None;
- let mut error = false;
- let self_ty = tcx.types.self_param;
- ty.maybe_walk(|ty| {
- match ty.kind {
- ty::Param(_) => {
- if ty == self_ty {
- error = true;
- }
-
- false // no contained types to walk
- }
-
- ty::Projection(ref data) => {
- // This is a projected type `<Foo as SomeTrait>::X`.
-
- // Compute supertraits of current trait lazily.
- if supertraits.is_none() {
- let trait_ref = ty::Binder::bind(ty::TraitRef::identity(tcx, trait_def_id));
- supertraits = Some(traits::supertraits(tcx, trait_ref).collect());
- }
-
- // Determine whether the trait reference `Foo as
- // SomeTrait` is in fact a supertrait of the
- // current trait. In that case, this type is
- // legal, because the type `X` will be specified
- // in the object type. Note that we can just use
- // direct equality here because all of these types
- // are part of the formal parameter listing, and
- // hence there should be no inference variables.
- let projection_trait_ref = ty::Binder::bind(data.trait_ref(tcx));
- let is_supertrait_of_current_trait =
- supertraits.as_ref().unwrap().contains(&projection_trait_ref);
-
- if is_supertrait_of_current_trait {
- false // do not walk contained types, do not report error, do collect $200
- } else {
- true // DO walk contained types, POSSIBLY reporting an error
- }
- }
-
- _ => true, // walk contained types, if any
- }
- });
-
- error
-}
-
-pub fn provide(providers: &mut ty::query::Providers<'_>) {
- *providers = ty::query::Providers { object_safety_violations, ..*providers };
-}
+++ /dev/null
-use fmt_macros::{Parser, Piece, Position};
-
-use rustc::ty::{self, GenericParamDefKind, TyCtxt};
-use rustc::util::common::ErrorReported;
-
-use rustc_ast::ast::{MetaItem, NestedMetaItem};
-use rustc_attr as attr;
-use rustc_data_structures::fx::FxHashMap;
-use rustc_errors::struct_span_err;
-use rustc_hir::def_id::DefId;
-use rustc_span::symbol::{kw, sym, Symbol};
-use rustc_span::Span;
-
-#[derive(Clone, Debug)]
-pub struct OnUnimplementedFormatString(Symbol);
-
-#[derive(Debug)]
-pub struct OnUnimplementedDirective {
- pub condition: Option<MetaItem>,
- pub subcommands: Vec<OnUnimplementedDirective>,
- pub message: Option<OnUnimplementedFormatString>,
- pub label: Option<OnUnimplementedFormatString>,
- pub note: Option<OnUnimplementedFormatString>,
- pub enclosing_scope: Option<OnUnimplementedFormatString>,
-}
-
-#[derive(Default)]
-pub struct OnUnimplementedNote {
- pub message: Option<String>,
- pub label: Option<String>,
- pub note: Option<String>,
- pub enclosing_scope: Option<String>,
-}
-
-fn parse_error(
- tcx: TyCtxt<'_>,
- span: Span,
- message: &str,
- label: &str,
- note: Option<&str>,
-) -> ErrorReported {
- let mut diag = struct_span_err!(tcx.sess, span, E0232, "{}", message);
- diag.span_label(span, label);
- if let Some(note) = note {
- diag.note(note);
- }
- diag.emit();
- ErrorReported
-}
-
-impl<'tcx> OnUnimplementedDirective {
- fn parse(
- tcx: TyCtxt<'tcx>,
- trait_def_id: DefId,
- items: &[NestedMetaItem],
- span: Span,
- is_root: bool,
- ) -> Result<Self, ErrorReported> {
- let mut errored = false;
- let mut item_iter = items.iter();
-
- let condition = if is_root {
- None
- } else {
- let cond = item_iter
- .next()
- .ok_or_else(|| {
- parse_error(
- tcx,
- span,
- "empty `on`-clause in `#[rustc_on_unimplemented]`",
- "empty on-clause here",
- None,
- )
- })?
- .meta_item()
- .ok_or_else(|| {
- parse_error(
- tcx,
- span,
- "invalid `on`-clause in `#[rustc_on_unimplemented]`",
- "invalid on-clause here",
- None,
- )
- })?;
- attr::eval_condition(cond, &tcx.sess.parse_sess, &mut |_| true);
- Some(cond.clone())
- };
-
- let mut message = None;
- let mut label = None;
- let mut note = None;
- let mut enclosing_scope = None;
- let mut subcommands = vec![];
-
- let parse_value = |value_str| {
- OnUnimplementedFormatString::try_parse(tcx, trait_def_id, value_str, span).map(Some)
- };
-
- for item in item_iter {
- if item.check_name(sym::message) && message.is_none() {
- if let Some(message_) = item.value_str() {
- message = parse_value(message_)?;
- continue;
- }
- } else if item.check_name(sym::label) && label.is_none() {
- if let Some(label_) = item.value_str() {
- label = parse_value(label_)?;
- continue;
- }
- } else if item.check_name(sym::note) && note.is_none() {
- if let Some(note_) = item.value_str() {
- note = parse_value(note_)?;
- continue;
- }
- } else if item.check_name(sym::enclosing_scope) && enclosing_scope.is_none() {
- if let Some(enclosing_scope_) = item.value_str() {
- enclosing_scope = parse_value(enclosing_scope_)?;
- continue;
- }
- } else if item.check_name(sym::on)
- && is_root
- && message.is_none()
- && label.is_none()
- && note.is_none()
- {
- if let Some(items) = item.meta_item_list() {
- if let Ok(subcommand) =
- Self::parse(tcx, trait_def_id, &items, item.span(), false)
- {
- subcommands.push(subcommand);
- } else {
- errored = true;
- }
- continue;
- }
- }
-
- // nothing found
- parse_error(
- tcx,
- item.span(),
- "this attribute must have a valid value",
- "expected value here",
- Some(r#"eg `#[rustc_on_unimplemented(message="foo")]`"#),
- );
- }
-
- if errored {
- Err(ErrorReported)
- } else {
- Ok(OnUnimplementedDirective {
- condition,
- subcommands,
- message,
- label,
- note,
- enclosing_scope,
- })
- }
- }
-
- pub fn of_item(
- tcx: TyCtxt<'tcx>,
- trait_def_id: DefId,
- impl_def_id: DefId,
- ) -> Result<Option<Self>, ErrorReported> {
- let attrs = tcx.get_attrs(impl_def_id);
-
- let attr = if let Some(item) = attr::find_by_name(&attrs, sym::rustc_on_unimplemented) {
- item
- } else {
- return Ok(None);
- };
-
- let result = if let Some(items) = attr.meta_item_list() {
- Self::parse(tcx, trait_def_id, &items, attr.span, true).map(Some)
- } else if let Some(value) = attr.value_str() {
- Ok(Some(OnUnimplementedDirective {
- condition: None,
- message: None,
- subcommands: vec![],
- label: Some(OnUnimplementedFormatString::try_parse(
- tcx,
- trait_def_id,
- value,
- attr.span,
- )?),
- note: None,
- enclosing_scope: None,
- }))
- } else {
- return Err(ErrorReported);
- };
- debug!("of_item({:?}/{:?}) = {:?}", trait_def_id, impl_def_id, result);
- result
- }
-
- pub fn evaluate(
- &self,
- tcx: TyCtxt<'tcx>,
- trait_ref: ty::TraitRef<'tcx>,
- options: &[(Symbol, Option<String>)],
- ) -> OnUnimplementedNote {
- let mut message = None;
- let mut label = None;
- let mut note = None;
- let mut enclosing_scope = None;
- info!("evaluate({:?}, trait_ref={:?}, options={:?})", self, trait_ref, options);
-
- for command in self.subcommands.iter().chain(Some(self)).rev() {
- if let Some(ref condition) = command.condition {
- if !attr::eval_condition(condition, &tcx.sess.parse_sess, &mut |c| {
- c.ident().map_or(false, |ident| {
- options.contains(&(ident.name, c.value_str().map(|s| s.to_string())))
- })
- }) {
- debug!("evaluate: skipping {:?} due to condition", command);
- continue;
- }
- }
- debug!("evaluate: {:?} succeeded", command);
- if let Some(ref message_) = command.message {
- message = Some(message_.clone());
- }
-
- if let Some(ref label_) = command.label {
- label = Some(label_.clone());
- }
-
- if let Some(ref note_) = command.note {
- note = Some(note_.clone());
- }
-
- if let Some(ref enclosing_scope_) = command.enclosing_scope {
- enclosing_scope = Some(enclosing_scope_.clone());
- }
- }
-
- let options: FxHashMap<Symbol, String> =
- options.iter().filter_map(|(k, v)| v.as_ref().map(|v| (*k, v.to_owned()))).collect();
- OnUnimplementedNote {
- label: label.map(|l| l.format(tcx, trait_ref, &options)),
- message: message.map(|m| m.format(tcx, trait_ref, &options)),
- note: note.map(|n| n.format(tcx, trait_ref, &options)),
- enclosing_scope: enclosing_scope.map(|e_s| e_s.format(tcx, trait_ref, &options)),
- }
- }
-}
-
-impl<'tcx> OnUnimplementedFormatString {
- fn try_parse(
- tcx: TyCtxt<'tcx>,
- trait_def_id: DefId,
- from: Symbol,
- err_sp: Span,
- ) -> Result<Self, ErrorReported> {
- let result = OnUnimplementedFormatString(from);
- result.verify(tcx, trait_def_id, err_sp)?;
- Ok(result)
- }
-
- fn verify(
- &self,
- tcx: TyCtxt<'tcx>,
- trait_def_id: DefId,
- span: Span,
- ) -> Result<(), ErrorReported> {
- let name = tcx.item_name(trait_def_id);
- let generics = tcx.generics_of(trait_def_id);
- let s = self.0.as_str();
- let parser = Parser::new(&s, None, vec![], false);
- let mut result = Ok(());
- for token in parser {
- match token {
- Piece::String(_) => (), // Normal string, no need to check it
- Piece::NextArgument(a) => match a.position {
- // `{Self}` is allowed
- Position::ArgumentNamed(s) if s == kw::SelfUpper => (),
- // `{ThisTraitsName}` is allowed
- Position::ArgumentNamed(s) if s == name => (),
- // `{from_method}` is allowed
- Position::ArgumentNamed(s) if s == sym::from_method => (),
- // `{from_desugaring}` is allowed
- Position::ArgumentNamed(s) if s == sym::from_desugaring => (),
- // `{ItemContext}` is allowed
- Position::ArgumentNamed(s) if s == sym::item_context => (),
- // So is `{A}` if A is a type parameter
- Position::ArgumentNamed(s) => {
- match generics.params.iter().find(|param| param.name == s) {
- Some(_) => (),
- None => {
- struct_span_err!(
- tcx.sess,
- span,
- E0230,
- "there is no parameter `{}` on trait `{}`",
- s,
- name
- )
- .emit();
- result = Err(ErrorReported);
- }
- }
- }
- // `{:1}` and `{}` are not to be used
- Position::ArgumentIs(_) | Position::ArgumentImplicitlyIs(_) => {
- struct_span_err!(
- tcx.sess,
- span,
- E0231,
- "only named substitution parameters are allowed"
- )
- .emit();
- result = Err(ErrorReported);
- }
- },
- }
- }
-
- result
- }
-
- pub fn format(
- &self,
- tcx: TyCtxt<'tcx>,
- trait_ref: ty::TraitRef<'tcx>,
- options: &FxHashMap<Symbol, String>,
- ) -> String {
- let name = tcx.item_name(trait_ref.def_id);
- let trait_str = tcx.def_path_str(trait_ref.def_id);
- let generics = tcx.generics_of(trait_ref.def_id);
- let generic_map = generics
- .params
- .iter()
- .filter_map(|param| {
- let value = match param.kind {
- GenericParamDefKind::Type { .. } | GenericParamDefKind::Const => {
- trait_ref.substs[param.index as usize].to_string()
- }
- GenericParamDefKind::Lifetime => return None,
- };
- let name = param.name;
- Some((name, value))
- })
- .collect::<FxHashMap<Symbol, String>>();
- let empty_string = String::new();
-
- let s = self.0.as_str();
- let parser = Parser::new(&s, None, vec![], false);
- let item_context = (options.get(&sym::item_context)).unwrap_or(&empty_string);
- parser
- .map(|p| match p {
- Piece::String(s) => s,
- Piece::NextArgument(a) => match a.position {
- Position::ArgumentNamed(s) => match generic_map.get(&s) {
- Some(val) => val,
- None if s == name => &trait_str,
- None => {
- if let Some(val) = options.get(&s) {
- val
- } else if s == sym::from_desugaring || s == sym::from_method {
- // don't break messages using these two arguments incorrectly
- &empty_string
- } else if s == sym::item_context {
- &item_context
- } else {
- bug!(
- "broken on_unimplemented {:?} for {:?}: \
- no argument matching {:?}",
- self.0,
- trait_ref,
- s
- )
- }
- }
- },
- _ => bug!("broken on_unimplemented {:?} - bad format arg", self.0),
- },
- })
- .collect()
- }
-}
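The `format` method above walks `fmt_macros` pieces and substitutes named arguments from the trait's generics and the options map. A self-contained sketch of that named-placeholder expansion follows; the `expand` function and its hand-rolled `{Name}` scanner are illustrative stand-ins for (not replicas of) `fmt_macros::Parser`, and unknown names are kept verbatim here rather than hitting `bug!`:

```rust
use std::collections::HashMap;

// Minimal stand-in for OnUnimplementedFormatString::format: expand
// `{Name}` placeholders from a substitution map. Unknown placeholders
// are left in place instead of aborting, unlike the compiler.
fn expand(template: &str, subs: &HashMap<&str, &str>) -> String {
    let mut out = String::new();
    let mut chars = template.chars().peekable();
    while let Some(c) = chars.next() {
        if c == '{' {
            // Collect the placeholder name up to the closing brace.
            let mut name = String::new();
            while let Some(&n) = chars.peek() {
                chars.next();
                if n == '}' {
                    break;
                }
                name.push(n);
            }
            match subs.get(name.as_str()) {
                Some(val) => out.push_str(val),
                None => {
                    // Unknown placeholder: keep it verbatim.
                    out.push('{');
                    out.push_str(&name);
                    out.push('}');
                }
            }
        } else {
            out.push(c);
        }
    }
    out
}

fn main() {
    let mut subs = HashMap::new();
    subs.insert("Self", "Vec<u8>");
    subs.insert("Rhs", "u32");
    assert_eq!(
        expand("cannot add `{Rhs}` to `{Self}`", &subs),
        "cannot add `u32` to `Vec<u8>`"
    );
    assert_eq!(expand("no `{Missing}` here", &subs), "no `{Missing}` here");
}
```

In the real attribute, the map is populated from `{Self}`, the trait's own name, its type parameters, and option keys such as `item_context`.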
//! Code for projecting associated types out of trait references.
-use super::elaborate_predicates;
-use super::specialization_graph;
-use super::translate_substs;
-use super::util;
-use super::Obligation;
-use super::ObligationCause;
use super::PredicateObligation;
-use super::Selection;
-use super::SelectionContext;
-use super::SelectionError;
-use super::{VtableClosureData, VtableFnPointerData, VtableGeneratorData, VtableImplData};
-use crate::infer::type_variable::{TypeVariableOrigin, TypeVariableOriginKind};
-use crate::infer::{InferCtxt, InferOk, LateBoundRegionConversionTime};
-use rustc::ty::fold::{TypeFoldable, TypeFolder};
-use rustc::ty::subst::{InternalSubsts, Subst};
-use rustc::ty::{self, ToPolyTraitRef, ToPredicate, Ty, TyCtxt, WithConstness};
-use rustc_ast::ast::Ident;
+use rustc::ty::fold::TypeFoldable;
+use rustc::ty::{self, Ty};
use rustc_data_structures::snapshot_map::{Snapshot, SnapshotMap};
-use rustc_hir::def_id::DefId;
-use rustc_span::symbol::sym;
-use rustc_span::DUMMY_SP;
pub use rustc::traits::Reveal;
-pub type PolyProjectionObligation<'tcx> = Obligation<'tcx, ty::PolyProjectionPredicate<'tcx>>;
-
-pub type ProjectionObligation<'tcx> = Obligation<'tcx, ty::ProjectionPredicate<'tcx>>;
-
-pub type ProjectionTyObligation<'tcx> = Obligation<'tcx, ty::ProjectionTy<'tcx>>;
-
-/// When attempting to resolve `<T as TraitRef>::Name` ...
-#[derive(Debug)]
-pub enum ProjectionTyError<'tcx> {
- /// ...we found multiple sources of information and couldn't resolve the ambiguity.
- TooManyCandidates,
-
- /// ...an error occurred matching `T : TraitRef`
- TraitSelectionError(SelectionError<'tcx>),
-}
-
#[derive(Clone)]
pub struct MismatchedProjectionTypes<'tcx> {
pub err: ty::error::TypeError<'tcx>,
}
-#[derive(PartialEq, Eq, Debug)]
-enum ProjectionTyCandidate<'tcx> {
- // from a where-clause in the env or object type
- ParamEnv(ty::PolyProjectionPredicate<'tcx>),
-
- // from the definition of `Trait` when you have something like <<A as Trait>::B as Trait2>::C
- TraitDef(ty::PolyProjectionPredicate<'tcx>),
-
- // from an "impl" (or a "pseudo-impl" returned by select)
- Select(Selection<'tcx>),
-}
-
-enum ProjectionTyCandidateSet<'tcx> {
- None,
- Single(ProjectionTyCandidate<'tcx>),
- Ambiguous,
- Error(SelectionError<'tcx>),
-}
-
-impl<'tcx> ProjectionTyCandidateSet<'tcx> {
- fn mark_ambiguous(&mut self) {
- *self = ProjectionTyCandidateSet::Ambiguous;
- }
-
- fn mark_error(&mut self, err: SelectionError<'tcx>) {
- *self = ProjectionTyCandidateSet::Error(err);
- }
-
- // Returns true if the push was successful, or false if the candidate
- // was discarded -- this could be because of ambiguity, or because
- // a higher-priority candidate is already there.
- fn push_candidate(&mut self, candidate: ProjectionTyCandidate<'tcx>) -> bool {
- use self::ProjectionTyCandidate::*;
- use self::ProjectionTyCandidateSet::*;
-
- // This wacky variable is just used to try and
- // make code readable and avoid confusing paths.
- // It is assigned a "value" of `()` only on those
- // paths in which we wish to convert `*self` to
- // ambiguous (and return false, because the candidate
- // was not used). On other paths, it is not assigned,
- // and hence if those paths *could* reach the code that
- // comes after the match, this fn would not compile.
- let convert_to_ambiguous;
-
- match self {
- None => {
- *self = Single(candidate);
- return true;
- }
-
- Single(current) => {
- // Duplicates can happen inside ParamEnv. In that case, we
- // perform lazy deduplication.
- if current == &candidate {
- return false;
- }
-
- // Prefer where-clauses. As in select, if there are multiple
- // candidates, we prefer where-clause candidates over impls. This
- // may seem a bit surprising, since impls are the source of
- // "truth" in some sense, but in fact some of the impls that SEEM
- // applicable are not, because of nested obligations. Where
- // clauses are the safer choice. See the comment on
- // `select::SelectionCandidate` and #21974 for more details.
- match (current, candidate) {
- (ParamEnv(..), ParamEnv(..)) => convert_to_ambiguous = (),
- (ParamEnv(..), _) => return false,
- (_, ParamEnv(..)) => unreachable!(),
- (_, _) => convert_to_ambiguous = (),
- }
- }
-
- Ambiguous | Error(..) => {
- return false;
- }
- }
-
- // We only ever get here when we moved from a single candidate
- // to ambiguous.
- let () = convert_to_ambiguous;
- *self = Ambiguous;
- false
- }
-}
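`push_candidate` above encodes a priority rule: where-clause (`ParamEnv`) candidates beat `Select` candidates, exact duplicates are deduplicated lazily, and any other clash marks the set ambiguous. A minimal sketch of that rule outside the compiler; the `Candidate` and `CandidateSet` types here are simplified stand-ins carrying strings instead of predicates, and the real code's `unreachable!` ordering invariant is folded into the ambiguous case:

```rust
// Simplified model of the candidate-set logic: a where-clause
// (`ParamEnv`) candidate always wins over a `Select` candidate.
#[derive(Clone, PartialEq, Eq, Debug)]
enum Candidate {
    ParamEnv(&'static str),
    Select(&'static str),
}

#[derive(Debug, PartialEq)]
enum CandidateSet {
    None,
    Single(Candidate),
    Ambiguous,
}

impl CandidateSet {
    // Returns true if the candidate was kept, false if it was
    // discarded (duplicate, lower priority, or ambiguity).
    fn push(&mut self, cand: Candidate) -> bool {
        use Candidate::*;
        match self {
            CandidateSet::None => {
                *self = CandidateSet::Single(cand);
                true
            }
            CandidateSet::Single(current) => {
                if *current == cand {
                    return false; // lazy deduplication
                }
                if let (ParamEnv(..), Select(..)) = (&*current, &cand) {
                    return false; // keep the where-clause candidate
                }
                *self = CandidateSet::Ambiguous;
                false
            }
            CandidateSet::Ambiguous => false,
        }
    }
}

fn main() {
    let mut set = CandidateSet::None;
    assert!(set.push(Candidate::ParamEnv("T::Item == u32")));
    // An impl candidate does not displace the where-clause.
    assert!(!set.push(Candidate::Select("impl candidate")));
    assert_eq!(set, CandidateSet::Single(Candidate::ParamEnv("T::Item == u32")));
    // Two distinct where-clauses make the set ambiguous.
    assert!(!set.push(Candidate::ParamEnv("T::Item == i64")));
    assert_eq!(set, CandidateSet::Ambiguous);
}
```

As the deleted comment notes, preferring where-clauses over seemingly applicable impls is the safer choice because impl candidates can fail on nested obligations (see #21974).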
-
-/// Evaluates constraints of the form:
-///
-/// for<...> <T as Trait>::U == V
-///
-/// If successful, this may result in additional obligations. Also returns
-/// the projection cache key used to track these additional obligations.
-pub fn poly_project_and_unify_type<'cx, 'tcx>(
- selcx: &mut SelectionContext<'cx, 'tcx>,
- obligation: &PolyProjectionObligation<'tcx>,
-) -> Result<Option<Vec<PredicateObligation<'tcx>>>, MismatchedProjectionTypes<'tcx>> {
- debug!("poly_project_and_unify_type(obligation={:?})", obligation);
-
- let infcx = selcx.infcx();
- infcx.commit_if_ok(|snapshot| {
- let (placeholder_predicate, placeholder_map) =
- infcx.replace_bound_vars_with_placeholders(&obligation.predicate);
-
- let placeholder_obligation = obligation.with(placeholder_predicate);
- let result = project_and_unify_type(selcx, &placeholder_obligation)?;
- infcx
- .leak_check(false, &placeholder_map, snapshot)
- .map_err(|err| MismatchedProjectionTypes { err })?;
- Ok(result)
- })
-}
-
-/// Evaluates constraints of the form:
-///
-/// <T as Trait>::U == V
-///
-/// If successful, this may result in additional obligations.
-fn project_and_unify_type<'cx, 'tcx>(
- selcx: &mut SelectionContext<'cx, 'tcx>,
- obligation: &ProjectionObligation<'tcx>,
-) -> Result<Option<Vec<PredicateObligation<'tcx>>>, MismatchedProjectionTypes<'tcx>> {
- debug!("project_and_unify_type(obligation={:?})", obligation);
-
- let mut obligations = vec![];
- let normalized_ty = match opt_normalize_projection_type(
- selcx,
- obligation.param_env,
- obligation.predicate.projection_ty,
- obligation.cause.clone(),
- obligation.recursion_depth,
- &mut obligations,
- ) {
- Some(n) => n,
- None => return Ok(None),
- };
-
- debug!(
- "project_and_unify_type: normalized_ty={:?} obligations={:?}",
- normalized_ty, obligations
- );
-
- let infcx = selcx.infcx();
- match infcx
- .at(&obligation.cause, obligation.param_env)
- .eq(normalized_ty, obligation.predicate.ty)
- {
- Ok(InferOk { obligations: inferred_obligations, value: () }) => {
- obligations.extend(inferred_obligations);
- Ok(Some(obligations))
- }
- Err(err) => {
- debug!("project_and_unify_type: equating types encountered error {:?}", err);
- Err(MismatchedProjectionTypes { err })
- }
- }
-}
-
-/// Normalizes any associated type projections in `value`, replacing
-/// them with a fully resolved type where possible. The return value
-/// combines the normalized result and any additional obligations that
-/// were incurred as result.
-pub fn normalize<'a, 'b, 'tcx, T>(
- selcx: &'a mut SelectionContext<'b, 'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- cause: ObligationCause<'tcx>,
- value: &T,
-) -> Normalized<'tcx, T>
-where
- T: TypeFoldable<'tcx>,
-{
- let mut obligations = Vec::new();
- let value = normalize_to(selcx, param_env, cause, value, &mut obligations);
- Normalized { value, obligations }
-}
-
-pub fn normalize_to<'a, 'b, 'tcx, T>(
- selcx: &'a mut SelectionContext<'b, 'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- cause: ObligationCause<'tcx>,
- value: &T,
- obligations: &mut Vec<PredicateObligation<'tcx>>,
-) -> T
-where
- T: TypeFoldable<'tcx>,
-{
- normalize_with_depth_to(selcx, param_env, cause, 0, value, obligations)
-}
-
-/// As `normalize`, but with a custom depth.
-pub fn normalize_with_depth<'a, 'b, 'tcx, T>(
- selcx: &'a mut SelectionContext<'b, 'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- cause: ObligationCause<'tcx>,
- depth: usize,
- value: &T,
-) -> Normalized<'tcx, T>
-where
- T: TypeFoldable<'tcx>,
-{
- let mut obligations = Vec::new();
- let value = normalize_with_depth_to(selcx, param_env, cause, depth, value, &mut obligations);
- Normalized { value, obligations }
-}
-
-pub fn normalize_with_depth_to<'a, 'b, 'tcx, T>(
- selcx: &'a mut SelectionContext<'b, 'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- cause: ObligationCause<'tcx>,
- depth: usize,
- value: &T,
- obligations: &mut Vec<PredicateObligation<'tcx>>,
-) -> T
-where
- T: TypeFoldable<'tcx>,
-{
- debug!("normalize_with_depth(depth={}, value={:?})", depth, value);
- let mut normalizer = AssocTypeNormalizer::new(selcx, param_env, cause, depth, obligations);
- let result = normalizer.fold(value);
- debug!(
- "normalize_with_depth: depth={} result={:?} with {} obligations",
- depth,
- result,
- normalizer.obligations.len()
- );
- debug!("normalize_with_depth: depth={} obligations={:?}", depth, normalizer.obligations);
- result
-}
-
-struct AssocTypeNormalizer<'a, 'b, 'tcx> {
- selcx: &'a mut SelectionContext<'b, 'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- cause: ObligationCause<'tcx>,
- obligations: &'a mut Vec<PredicateObligation<'tcx>>,
- depth: usize,
-}
-
-impl<'a, 'b, 'tcx> AssocTypeNormalizer<'a, 'b, 'tcx> {
- fn new(
- selcx: &'a mut SelectionContext<'b, 'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- cause: ObligationCause<'tcx>,
- depth: usize,
- obligations: &'a mut Vec<PredicateObligation<'tcx>>,
- ) -> AssocTypeNormalizer<'a, 'b, 'tcx> {
- AssocTypeNormalizer { selcx, param_env, cause, obligations, depth }
- }
-
- fn fold<T: TypeFoldable<'tcx>>(&mut self, value: &T) -> T {
- let value = self.selcx.infcx().resolve_vars_if_possible(value);
-
- if !value.has_projections() { value } else { value.fold_with(self) }
- }
-}
-
-impl<'a, 'b, 'tcx> TypeFolder<'tcx> for AssocTypeNormalizer<'a, 'b, 'tcx> {
- fn tcx<'c>(&'c self) -> TyCtxt<'tcx> {
- self.selcx.tcx()
- }
-
- fn fold_ty(&mut self, ty: Ty<'tcx>) -> Ty<'tcx> {
- if !ty.has_projections() {
- return ty;
- }
- // We don't want to normalize associated types that occur inside of region
- // binders, because they may contain bound regions, and we can't cope with that.
- //
- // Example:
- //
- // for<'a> fn(<T as Foo<&'a>>::A)
- //
- // Instead of normalizing `<T as Foo<&'a>>::A` here, we'll
- // normalize it when we instantiate those bound regions (which
- // should occur eventually).
-
- let ty = ty.super_fold_with(self);
- match ty.kind {
- ty::Opaque(def_id, substs) if !substs.has_escaping_bound_vars() => {
- // (*)
- // Only normalize `impl Trait` after type-checking, usually in codegen.
- match self.param_env.reveal {
- Reveal::UserFacing => ty,
-
- Reveal::All => {
- let recursion_limit = *self.tcx().sess.recursion_limit.get();
- if self.depth >= recursion_limit {
- let obligation = Obligation::with_depth(
- self.cause.clone(),
- recursion_limit,
- self.param_env,
- ty,
- );
- self.selcx.infcx().report_overflow_error(&obligation, true);
- }
-
- let generic_ty = self.tcx().type_of(def_id);
- let concrete_ty = generic_ty.subst(self.tcx(), substs);
- self.depth += 1;
- let folded_ty = self.fold_ty(concrete_ty);
- self.depth -= 1;
- folded_ty
- }
- }
- }
-
- ty::Projection(ref data) if !data.has_escaping_bound_vars() => {
- // (*)
-
- // (*) This is kind of hacky -- we need to be able to
- // handle normalization within binders because
- // otherwise we wind up needing to normalize when doing
- // trait matching (since you can have a trait
- // obligation like `for<'a> T::B : Fn(&'a isize)`), but
- // we can't normalize with bound regions in scope. For
- // now we just ignore binders and only normalize
- // if all bound regions are gone (and then we still
- // have to renormalize whenever we instantiate a
- // binder). It would be better to normalize in a
- // binding-aware fashion.
-
- let normalized_ty = normalize_projection_type(
- self.selcx,
- self.param_env,
- *data,
- self.cause.clone(),
- self.depth,
- &mut self.obligations,
- );
- debug!(
- "AssocTypeNormalizer: depth={} normalized {:?} to {:?}, \
- now with {} obligations",
- self.depth,
- ty,
- normalized_ty,
- self.obligations.len()
- );
- normalized_ty
- }
-
- _ => ty,
- }
- }
-
- fn fold_const(&mut self, constant: &'tcx ty::Const<'tcx>) -> &'tcx ty::Const<'tcx> {
- constant.eval(self.selcx.tcx(), self.param_env)
- }
-}
-
#[derive(Clone, TypeFoldable)]
pub struct Normalized<'tcx, T> {
pub value: T,
}
}
-/// The guts of `normalize`: normalize a specific projection like `<T
-/// as Trait>::Item`. The result is always a type (and possibly
-/// additional obligations). If ambiguity arises, which implies that
-/// there are unresolved type variables in the projection, we will
-/// substitute a fresh type variable `$X` and generate a new
-/// obligation `<T as Trait>::Item == $X` for later.
-pub fn normalize_projection_type<'a, 'b, 'tcx>(
- selcx: &'a mut SelectionContext<'b, 'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- projection_ty: ty::ProjectionTy<'tcx>,
- cause: ObligationCause<'tcx>,
- depth: usize,
- obligations: &mut Vec<PredicateObligation<'tcx>>,
-) -> Ty<'tcx> {
- opt_normalize_projection_type(
- selcx,
- param_env,
- projection_ty,
- cause.clone(),
- depth,
- obligations,
- )
- .unwrap_or_else(move || {
- // if we bottom out in ambiguity, create a type variable
- // and a deferred predicate to resolve this when more type
- // information is available.
-
- let tcx = selcx.infcx().tcx;
- let def_id = projection_ty.item_def_id;
- let ty_var = selcx.infcx().next_ty_var(TypeVariableOrigin {
- kind: TypeVariableOriginKind::NormalizeProjectionType,
- span: tcx.def_span(def_id),
- });
- let projection = ty::Binder::dummy(ty::ProjectionPredicate { projection_ty, ty: ty_var });
- let obligation =
- Obligation::with_depth(cause, depth + 1, param_env, projection.to_predicate());
- obligations.push(obligation);
- ty_var
- })
-}
-
-/// The guts of `normalize`: normalize a specific projection like `<T
-/// as Trait>::Item`. The result is always a type (and possibly
-/// additional obligations). Returns `None` in the case of ambiguity,
-/// which indicates that there are unbound type variables.
-///
-/// This function used to return `Option<NormalizedTy<'tcx>>`, which contains a
-/// `Ty<'tcx>` and an obligations vector. But that obligation vector was very
-/// often immediately appended to another obligations vector. So now this
-/// function takes an obligations vector and appends to it directly, which is
-/// slightly uglier but avoids the need for an extra short-lived allocation.
-fn opt_normalize_projection_type<'a, 'b, 'tcx>(
- selcx: &'a mut SelectionContext<'b, 'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- projection_ty: ty::ProjectionTy<'tcx>,
- cause: ObligationCause<'tcx>,
- depth: usize,
- obligations: &mut Vec<PredicateObligation<'tcx>>,
-) -> Option<Ty<'tcx>> {
- let infcx = selcx.infcx();
-
- let projection_ty = infcx.resolve_vars_if_possible(&projection_ty);
- let cache_key = ProjectionCacheKey { ty: projection_ty };
-
- debug!(
- "opt_normalize_projection_type(\
- projection_ty={:?}, \
- depth={})",
- projection_ty, depth
- );
-
- // FIXME(#20304) For now, I am caching here, which is good, but it
- // means we don't capture the type variables that are created in
- // the case of ambiguity. Which means we may create a large stream
- // of such variables. OTOH, if we move the caching up a level, we
- // would not benefit from caching when proving `T: Trait<U=Foo>`
- // bounds. It might be the case that we want two distinct caches,
- // or else another kind of cache entry.
-
- let cache_result = infcx.inner.borrow_mut().projection_cache.try_start(cache_key);
- match cache_result {
- Ok(()) => {}
- Err(ProjectionCacheEntry::Ambiguous) => {
- // If we found ambiguity the last time, that generally
- // means we will continue to do so until some type in the
- // key changes (and we know it hasn't, because we just
- // fully resolved it). One exception though is closure
- // types, which can transition from having a fixed kind to
- // no kind with no visible change in the key.
- //
- // FIXME(#32286) refactor this so that closure type
- // changes
- debug!(
- "opt_normalize_projection_type: \
- found cache entry: ambiguous"
- );
- if !projection_ty.has_closure_types() {
- return None;
- }
- }
- Err(ProjectionCacheEntry::InProgress) => {
- // If while normalized A::B, we are asked to normalize
- // A::B, just return A::B itself. This is a conservative
- // answer, in the sense that A::B *is* clearly equivalent
- // to A::B, though there may be a better value we can
- // find.
-
- // Under lazy normalization, this can arise when
- // bootstrapping. That is, imagine an environment with a
- // where-clause like `A::B == u32`. Now, if we are asked
- // to normalize `A::B`, we will want to check the
- // where-clauses in scope. So we will try to unify `A::B`
- // with `A::B`, which can trigger a recursive
- // normalization. In that case, I think we will want this code:
- //
- // ```
- // let ty = selcx.tcx().mk_projection(projection_ty.item_def_id,
- // projection_ty.substs);
- // return Some(NormalizedTy { value: ty, obligations: vec![] });
- // ```
-
- debug!(
- "opt_normalize_projection_type: \
- found cache entry: in-progress"
- );
-
- // But for now, let's classify this as an overflow:
- let recursion_limit = *selcx.tcx().sess.recursion_limit.get();
- let obligation =
- Obligation::with_depth(cause, recursion_limit, param_env, projection_ty);
- selcx.infcx().report_overflow_error(&obligation, false);
- }
- Err(ProjectionCacheEntry::NormalizedTy(ty)) => {
- // This is the hottest path in this function.
- //
- // If we find the value in the cache, then return it along
- // with the obligations that went along with it. Note
- // that, when using a fulfillment context, these
- // obligations could in principle be ignored: they have
- // already been registered when the cache entry was
- // created (and hence the new ones will quickly be
- // discarded as duplicates). But when doing trait
- // evaluation this is not the case, and dropping the trait
- // evaluations can cause ICEs (e.g., #43132).
- debug!(
- "opt_normalize_projection_type: \
- found normalized ty `{:?}`",
- ty
- );
-
- // Once we have inferred everything we need to know, we
- // can ignore the `obligations` from that point on.
- if infcx.unresolved_type_vars(&ty.value).is_none() {
- infcx.inner.borrow_mut().projection_cache.complete_normalized(cache_key, &ty);
- // No need to extend `obligations`.
- } else {
- obligations.extend(ty.obligations);
- }
-
- obligations.push(get_paranoid_cache_value_obligation(
- infcx,
- param_env,
- projection_ty,
- cause,
- depth,
- ));
- return Some(ty.value);
- }
- Err(ProjectionCacheEntry::Error) => {
- debug!(
- "opt_normalize_projection_type: \
- found error"
- );
- let result = normalize_to_error(selcx, param_env, projection_ty, cause, depth);
- obligations.extend(result.obligations);
- return Some(result.value);
- }
- }
-
- let obligation = Obligation::with_depth(cause.clone(), depth, param_env, projection_ty);
- match project_type(selcx, &obligation) {
- Ok(ProjectedTy::Progress(Progress {
- ty: projected_ty,
- obligations: mut projected_obligations,
- })) => {
- // if projection succeeded, then what we get out of this
- // is also non-normalized (consider: it was derived from
- // an impl, where-clause etc) and hence we must
- // re-normalize it
-
- debug!(
- "opt_normalize_projection_type: \
- projected_ty={:?} \
- depth={} \
- projected_obligations={:?}",
- projected_ty, depth, projected_obligations
- );
-
- let result = if projected_ty.has_projections() {
- let mut normalizer = AssocTypeNormalizer::new(
- selcx,
- param_env,
- cause,
- depth + 1,
- &mut projected_obligations,
- );
- let normalized_ty = normalizer.fold(&projected_ty);
-
- debug!(
- "opt_normalize_projection_type: \
- normalized_ty={:?} depth={}",
- normalized_ty, depth
- );
-
- Normalized { value: normalized_ty, obligations: projected_obligations }
- } else {
- Normalized { value: projected_ty, obligations: projected_obligations }
- };
-
- let cache_value = prune_cache_value_obligations(infcx, &result);
- infcx.inner.borrow_mut().projection_cache.insert_ty(cache_key, cache_value);
- obligations.extend(result.obligations);
- Some(result.value)
- }
- Ok(ProjectedTy::NoProgress(projected_ty)) => {
- debug!(
- "opt_normalize_projection_type: \
- projected_ty={:?} no progress",
- projected_ty
- );
- let result = Normalized { value: projected_ty, obligations: vec![] };
- infcx.inner.borrow_mut().projection_cache.insert_ty(cache_key, result.clone());
- // No need to extend `obligations`.
- Some(result.value)
- }
- Err(ProjectionTyError::TooManyCandidates) => {
- debug!(
- "opt_normalize_projection_type: \
- too many candidates"
- );
- infcx.inner.borrow_mut().projection_cache.ambiguous(cache_key);
- None
- }
- Err(ProjectionTyError::TraitSelectionError(_)) => {
- debug!("opt_normalize_projection_type: ERROR");
- // if we got an error processing the `T as Trait` part,
- // just return `ty::err` but add the obligation `T :
- // Trait`, which when processed will cause the error to be
- // reported later
-
- infcx.inner.borrow_mut().projection_cache.error(cache_key);
- let result = normalize_to_error(selcx, param_env, projection_ty, cause, depth);
- obligations.extend(result.obligations);
- Some(result.value)
- }
- }
-}
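`opt_normalize_projection_type` above is driven by the projection cache's `try_start` protocol: a fresh key is marked in-progress, a re-entrant lookup of the same key is caught (and reported as overflow), and completed entries are handed back on later lookups. A toy model of that state machine, assuming a plain `HashMap` keyed by strings in place of the compiler's interned projection types:

```rust
use std::collections::HashMap;

// Toy version of the projection cache's entry states.
#[derive(Clone, Debug, PartialEq)]
enum CacheEntry {
    InProgress,
    Ambiguous,
    NormalizedTy(String),
    Error,
}

#[derive(Default)]
struct ProjectionCache {
    map: HashMap<String, CacheEntry>,
}

impl ProjectionCache {
    // Ok(()) means the caller should go compute the projection;
    // Err returns whatever was previously recorded for this key.
    fn try_start(&mut self, key: &str) -> Result<(), CacheEntry> {
        if let Some(entry) = self.map.get(key) {
            return Err(entry.clone());
        }
        self.map.insert(key.to_string(), CacheEntry::InProgress);
        Ok(())
    }

    // Record the outcome of a normalization that was started.
    fn complete(&mut self, key: &str, entry: CacheEntry) {
        self.map.insert(key.to_string(), entry);
    }
}

fn main() {
    let mut cache = ProjectionCache::default();
    let key = "<T as Iterator>::Item";

    assert_eq!(cache.try_start(key), Ok(()));
    // Re-entrant normalization of the same key is detected; the real
    // code classifies this as overflow rather than recursing.
    assert_eq!(cache.try_start(key), Err(CacheEntry::InProgress));

    cache.complete(key, CacheEntry::NormalizedTy("u32".into()));
    assert_eq!(
        cache.try_start(key),
        Err(CacheEntry::NormalizedTy("u32".into()))
    );
}
```

The `Ambiguous` and `Error` states mirror the two failure arms above: ambiguity is sticky until some type in the key changes, and errors are converted to `normalize_to_error` on every later lookup.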
-
-/// If there are unresolved type variables, then we need to include
-/// any subobligations that bind them, at least until those type
-/// variables are fully resolved.
-fn prune_cache_value_obligations<'a, 'tcx>(
- infcx: &'a InferCtxt<'a, 'tcx>,
- result: &NormalizedTy<'tcx>,
-) -> NormalizedTy<'tcx> {
- if infcx.unresolved_type_vars(&result.value).is_none() {
- return NormalizedTy { value: result.value, obligations: vec![] };
- }
-
- let mut obligations: Vec<_> = result
- .obligations
- .iter()
- .filter(|obligation| match obligation.predicate {
- // We found a `T: Foo<X = U>` predicate, let's check
- // if `U` references any unresolved type
- // variables. In principle, we only care if this
- // projection can help resolve any of the type
- // variables found in `result.value` -- but we just
- // check for any type variables here, for fear of
- // indirect obligations (e.g., we project to `?0`,
- // but we have `T: Foo<X = ?1>` and `?1: Bar<X =
- // ?0>`).
- ty::Predicate::Projection(ref data) => infcx.unresolved_type_vars(&data.ty()).is_some(),
-
- // We are only interested in `T: Foo<X = U>` predicates, where
- // `U` references one of `unresolved_type_vars`. =)
- _ => false,
- })
- .cloned()
- .collect();
-
- obligations.shrink_to_fit();
-
- NormalizedTy { value: result.value, obligations }
-}
-
-/// Whenever we give back a cache result for a projection like `<T as
-/// Trait>::Item ==> X`, we *always* include the obligation to prove
-/// that `T: Trait` (we may also include some other obligations). This
-/// may or may not be necessary -- in principle, all the obligations
-/// that must be proven to show that `T: Trait` were also returned
-/// when the cache was first populated. But there are some vague concerns,
-/// and so we take the precautionary measure of including `T: Trait` in
-/// the result:
-///
-/// Concern #1. The current setup is fragile. Perhaps someone could
-/// have failed to prove the concerns from when the cache was
-/// populated, but also not have used a snapshot, in which case the
-/// cache could remain populated even though `T: Trait` has not been
-/// shown. In this case, the "other code" is at fault -- when you
-/// project something, you are supposed to either have a snapshot or
-/// else prove all the resulting obligations -- but it's still easy to
-/// get wrong.
-///
-/// Concern #2. Even within the snapshot, if those original
-/// obligations are not yet proven, then we are able to do projections
-/// that may yet turn out to be wrong. This *may* lead to some sort
-/// of trouble, though we don't have a concrete example of how that
-/// can occur yet. But it seems risky at best.
-fn get_paranoid_cache_value_obligation<'a, 'tcx>(
- infcx: &'a InferCtxt<'a, 'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- projection_ty: ty::ProjectionTy<'tcx>,
- cause: ObligationCause<'tcx>,
- depth: usize,
-) -> PredicateObligation<'tcx> {
- let trait_ref = projection_ty.trait_ref(infcx.tcx).to_poly_trait_ref();
- Obligation {
- cause,
- recursion_depth: depth,
- param_env,
- predicate: trait_ref.without_const().to_predicate(),
- }
-}
-
-/// Handles the case where we are projecting `<T as Trait>::Item`, but
-/// `T: Trait` does not hold. In various error cases, we cannot generate
-/// a valid normalized projection. Therefore, we create an inference
-/// variable and return an associated obligation that, when fulfilled,
-/// will lead to an error.
-///
-/// Note that we used to return `Error` here, but that was quite
-/// dubious -- the premise was that an error would *eventually* be
-/// reported, when the obligation was processed. But in general once
-/// you see an `Error` you are supposed to be able to assume that an
-/// error *has been* reported, so that you can take whatever heuristic
-/// paths you want to take. To make things worse, it was possible for
-/// cycles to arise, where you basically had a setup like `<MyType<$0>
-/// as Trait>::Foo == $0`. Here, normalizing `<MyType<$0> as
-/// Trait>::Foo` to `[type error]` would lead to an obligation of
-/// `<MyType<[type error]> as Trait>::Foo`. We are supposed to report
-/// an error for this obligation, but we legitimately should not,
-/// because it contains `[type error]`. Yuck! (See issue #29857 for
-/// one case where this arose.)
-fn normalize_to_error<'a, 'tcx>(
- selcx: &mut SelectionContext<'a, 'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- projection_ty: ty::ProjectionTy<'tcx>,
- cause: ObligationCause<'tcx>,
- depth: usize,
-) -> NormalizedTy<'tcx> {
- let trait_ref = projection_ty.trait_ref(selcx.tcx()).to_poly_trait_ref();
- let trait_obligation = Obligation {
- cause,
- recursion_depth: depth,
- param_env,
- predicate: trait_ref.without_const().to_predicate(),
- };
- let tcx = selcx.infcx().tcx;
- let def_id = projection_ty.item_def_id;
- let new_value = selcx.infcx().next_ty_var(TypeVariableOrigin {
- kind: TypeVariableOriginKind::NormalizeProjectionType,
- span: tcx.def_span(def_id),
- });
- Normalized { value: new_value, obligations: vec![trait_obligation] }
-}
-
-enum ProjectedTy<'tcx> {
- Progress(Progress<'tcx>),
- NoProgress(Ty<'tcx>),
-}
-
-struct Progress<'tcx> {
- ty: Ty<'tcx>,
- obligations: Vec<PredicateObligation<'tcx>>,
-}
-
-impl<'tcx> Progress<'tcx> {
- fn error(tcx: TyCtxt<'tcx>) -> Self {
- Progress { ty: tcx.types.err, obligations: vec![] }
- }
-
- fn with_addl_obligations(mut self, mut obligations: Vec<PredicateObligation<'tcx>>) -> Self {
- debug!(
- "with_addl_obligations: self.obligations.len={} obligations.len={}",
- self.obligations.len(),
- obligations.len()
- );
-
- debug!(
- "with_addl_obligations: self.obligations={:?} obligations={:?}",
- self.obligations, obligations
- );
-
- self.obligations.append(&mut obligations);
- self
- }
-}
-
-/// Computes the result of a projection type (if we can).
-///
-/// IMPORTANT:
-/// - `obligation` must be fully normalized
-fn project_type<'cx, 'tcx>(
- selcx: &mut SelectionContext<'cx, 'tcx>,
- obligation: &ProjectionTyObligation<'tcx>,
-) -> Result<ProjectedTy<'tcx>, ProjectionTyError<'tcx>> {
- debug!("project(obligation={:?})", obligation);
-
- let recursion_limit = *selcx.tcx().sess.recursion_limit.get();
- if obligation.recursion_depth >= recursion_limit {
- debug!("project: overflow!");
- return Err(ProjectionTyError::TraitSelectionError(SelectionError::Overflow));
- }
-
- let obligation_trait_ref = &obligation.predicate.trait_ref(selcx.tcx());
-
- debug!("project: obligation_trait_ref={:?}", obligation_trait_ref);
-
- if obligation_trait_ref.references_error() {
- return Ok(ProjectedTy::Progress(Progress::error(selcx.tcx())));
- }
-
- let mut candidates = ProjectionTyCandidateSet::None;
-
- // Make sure that the following procedures are kept in order. ParamEnv
- // needs to come first because it has the highest priority, and Select
- // checks the return value of push_candidate, which assumes it runs last.
- assemble_candidates_from_param_env(selcx, obligation, &obligation_trait_ref, &mut candidates);
-
- assemble_candidates_from_trait_def(selcx, obligation, &obligation_trait_ref, &mut candidates);
-
- assemble_candidates_from_impls(selcx, obligation, &obligation_trait_ref, &mut candidates);
-
- match candidates {
- ProjectionTyCandidateSet::Single(candidate) => Ok(ProjectedTy::Progress(
- confirm_candidate(selcx, obligation, &obligation_trait_ref, candidate),
- )),
- ProjectionTyCandidateSet::None => Ok(ProjectedTy::NoProgress(
- selcx
- .tcx()
- .mk_projection(obligation.predicate.item_def_id, obligation.predicate.substs),
- )),
- // An error occurred while trying to process the impls.
- ProjectionTyCandidateSet::Error(e) => Err(ProjectionTyError::TraitSelectionError(e)),
- // Inherent ambiguity that prevents us from even enumerating the
- // candidates.
- ProjectionTyCandidateSet::Ambiguous => Err(ProjectionTyError::TooManyCandidates),
- }
-}
-
-/// The first thing we have to do is scan through the parameter
-/// environment to see whether there are any projection predicates
-/// there that can answer this question.
-fn assemble_candidates_from_param_env<'cx, 'tcx>(
- selcx: &mut SelectionContext<'cx, 'tcx>,
- obligation: &ProjectionTyObligation<'tcx>,
- obligation_trait_ref: &ty::TraitRef<'tcx>,
- candidate_set: &mut ProjectionTyCandidateSet<'tcx>,
-) {
- debug!("assemble_candidates_from_param_env(..)");
- assemble_candidates_from_predicates(
- selcx,
- obligation,
- obligation_trait_ref,
- candidate_set,
- ProjectionTyCandidate::ParamEnv,
- obligation.param_env.caller_bounds.iter().cloned(),
- );
-}
-
-/// In the case of a nested projection like <<A as Foo>::FooT as Bar>::BarT, we may find
-/// that the definition of `Foo` has some clues:
-///
-/// ```
-/// trait Foo {
-/// type FooT : Bar<BarT=i32>
-/// }
-/// ```
-///
-/// Here, for example, we could conclude that the result is `i32`.
-fn assemble_candidates_from_trait_def<'cx, 'tcx>(
- selcx: &mut SelectionContext<'cx, 'tcx>,
- obligation: &ProjectionTyObligation<'tcx>,
- obligation_trait_ref: &ty::TraitRef<'tcx>,
- candidate_set: &mut ProjectionTyCandidateSet<'tcx>,
-) {
- debug!("assemble_candidates_from_trait_def(..)");
-
- let tcx = selcx.tcx();
- // Check whether the self-type is itself a projection.
- let (def_id, substs) = match obligation_trait_ref.self_ty().kind {
- ty::Projection(ref data) => (data.trait_ref(tcx).def_id, data.substs),
- ty::Opaque(def_id, substs) => (def_id, substs),
- ty::Infer(ty::TyVar(_)) => {
- // If the self-type is an inference variable, then it MAY wind up
- // being a projected type, so induce an ambiguity.
- candidate_set.mark_ambiguous();
- return;
- }
- _ => return,
- };
-
- // If so, extract what we know from the trait and try to come up with a good answer.
- let trait_predicates = tcx.predicates_of(def_id);
- let bounds = trait_predicates.instantiate(tcx, substs);
- let bounds = elaborate_predicates(tcx, bounds.predicates);
- assemble_candidates_from_predicates(
- selcx,
- obligation,
- obligation_trait_ref,
- candidate_set,
- ProjectionTyCandidate::TraitDef,
- bounds,
- )
-}
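As a standalone illustration of the doc comment above (ordinary Rust, not compiler-internal code), a bound on an associated type in the trait definition is enough for the compiler to resolve a nested projection such as `<<A as Foo>::FooT as Bar>::BarT`:

```rust
trait Bar {
    type BarT;
}

trait Foo {
    // The bound on the associated type is the "clue" that candidate
    // assembly reads out of the trait definition.
    type FooT: Bar<BarT = i32>;
}

// Generic code can rely on the nested projection being `i32`
// without knowing the concrete `A`.
fn takes_nested<A: Foo>(x: <<A as Foo>::FooT as Bar>::BarT) -> i32 {
    x
}

struct MyBar;
impl Bar for MyBar {
    type BarT = i32;
}

struct MyFoo;
impl Foo for MyFoo {
    type FooT = MyBar;
}

fn main() {
    assert_eq!(takes_nested::<MyFoo>(7), 7);
}
```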
-
-fn assemble_candidates_from_predicates<'cx, 'tcx, I>(
- selcx: &mut SelectionContext<'cx, 'tcx>,
- obligation: &ProjectionTyObligation<'tcx>,
- obligation_trait_ref: &ty::TraitRef<'tcx>,
- candidate_set: &mut ProjectionTyCandidateSet<'tcx>,
- ctor: fn(ty::PolyProjectionPredicate<'tcx>) -> ProjectionTyCandidate<'tcx>,
- env_predicates: I,
-) where
- I: IntoIterator<Item = ty::Predicate<'tcx>>,
-{
- debug!("assemble_candidates_from_predicates(obligation={:?})", obligation);
- let infcx = selcx.infcx();
- for predicate in env_predicates {
- debug!("assemble_candidates_from_predicates: predicate={:?}", predicate);
- if let ty::Predicate::Projection(data) = predicate {
- let same_def_id = data.projection_def_id() == obligation.predicate.item_def_id;
-
- let is_match = same_def_id
- && infcx.probe(|_| {
- let data_poly_trait_ref = data.to_poly_trait_ref(infcx.tcx);
- let obligation_poly_trait_ref = obligation_trait_ref.to_poly_trait_ref();
- infcx
- .at(&obligation.cause, obligation.param_env)
- .sup(obligation_poly_trait_ref, data_poly_trait_ref)
- .map(|InferOk { obligations: _, value: () }| {
- // FIXME(#32730) -- do we need to take obligations
- // into account in any way? At the moment, no.
- })
- .is_ok()
- });
-
- debug!(
- "assemble_candidates_from_predicates: candidate={:?} \
- is_match={} same_def_id={}",
- data, is_match, same_def_id
- );
-
- if is_match {
- candidate_set.push_candidate(ctor(data));
- }
- }
- }
-}
-
-fn assemble_candidates_from_impls<'cx, 'tcx>(
- selcx: &mut SelectionContext<'cx, 'tcx>,
- obligation: &ProjectionTyObligation<'tcx>,
- obligation_trait_ref: &ty::TraitRef<'tcx>,
- candidate_set: &mut ProjectionTyCandidateSet<'tcx>,
-) {
- // If we are resolving `<T as TraitRef<...>>::Item == Type`,
- // start out by selecting the predicate `T as TraitRef<...>`:
- let poly_trait_ref = obligation_trait_ref.to_poly_trait_ref();
- let trait_obligation = obligation.with(poly_trait_ref.to_poly_trait_predicate());
- let _ = selcx.infcx().commit_if_ok(|_| {
- let vtable = match selcx.select(&trait_obligation) {
- Ok(Some(vtable)) => vtable,
- Ok(None) => {
- candidate_set.mark_ambiguous();
- return Err(());
- }
- Err(e) => {
- debug!("assemble_candidates_from_impls: selection error {:?}", e);
- candidate_set.mark_error(e);
- return Err(());
- }
- };
-
- let eligible = match &vtable {
- super::VtableClosure(_)
- | super::VtableGenerator(_)
- | super::VtableFnPointer(_)
- | super::VtableObject(_)
- | super::VtableTraitAlias(_) => {
- debug!("assemble_candidates_from_impls: vtable={:?}", vtable);
- true
- }
- super::VtableImpl(impl_data) => {
- // We have to be careful when projecting out of an
- // impl because of specialization. If we are not in
- // codegen (i.e., projection mode is not "any"), and the
- // impl's type is declared as default, then we disable
- // projection (even if the trait ref is fully
- // monomorphic). In the case where trait ref is not
- // fully monomorphic (i.e., includes type parameters),
- // this is because those type parameters may
- // ultimately be bound to types from other crates that
- // may have specialized impls we can't see. In the
- // case where the trait ref IS fully monomorphic, this
- // is a policy decision that we made in the RFC in
- // order to preserve flexibility for the crate that
- // defined the specializable impl to specialize later
- // for existing types.
- //
- // In either case, we handle this by not adding a
- // candidate for an impl if it contains a `default`
- // type.
- //
- // NOTE: This should be kept in sync with the similar code in
- // `rustc::ty::instance::resolve_associated_item()`.
- let node_item =
- assoc_ty_def(selcx, impl_data.impl_def_id, obligation.predicate.item_def_id);
-
- let is_default = if node_item.node.is_from_trait() {
- // If true, the impl inherited a `type Foo = Bar`
- // given in the trait, which is implicitly default.
- // Otherwise, the impl did not specify `type` and
- // neither did the trait:
- //
- // ```rust
- // trait Foo { type T; }
- // impl Foo for Bar { }
- // ```
- //
- // This is an error, but it will be
- // reported in `check_impl_items_against_trait`.
- // We accept it here but will flag it as
- // an error when we confirm the candidate
- // (which will ultimately lead to `normalize_to_error`
- // being invoked).
- false
- } else {
- // If we're looking at a trait *impl*, the item is
- // specializable if the impl or the item are marked
- // `default`.
- node_item.item.defaultness.is_default()
- || super::util::impl_is_default(selcx.tcx(), node_item.node.def_id())
- };
-
- match is_default {
- // Non-specializable items are always projectable
- false => true,
-
- // Only reveal a specializable default if we're past type-checking
- // and the obligation is monomorphic, otherwise passes such as
- // transmute checking and polymorphic MIR optimizations could
- // get a result which isn't correct for all monomorphizations.
- true if obligation.param_env.reveal == Reveal::All => {
- // NOTE(eddyb) inference variables can resolve to parameters, so
- // assume `poly_trait_ref` isn't monomorphic, if it contains any.
- let poly_trait_ref =
- selcx.infcx().resolve_vars_if_possible(&poly_trait_ref);
- !poly_trait_ref.needs_infer() && !poly_trait_ref.needs_subst()
- }
-
- true => {
- debug!(
- "assemble_candidates_from_impls: not eligible due to default: \
- assoc_ty={} predicate={}",
- selcx.tcx().def_path_str(node_item.item.def_id),
- obligation.predicate,
- );
- false
- }
- }
- }
- super::VtableParam(..) => {
- // This case tells us nothing about the value of an
- // associated type. Consider:
- //
- // ```
- // trait SomeTrait { type Foo; }
- // fn foo<T:SomeTrait>(...) { }
- // ```
- //
- // If the user writes `<T as SomeTrait>::Foo`, then the `T
- // : SomeTrait` binding does not help us decide what the
- // type `Foo` is (at least, not more specifically than
- // what we already knew).
- //
- // But wait, you say! What about an example like this:
- //
- // ```
- // fn bar<T:SomeTrait<Foo=usize>>(...) { ... }
- // ```
- //
- // Doesn't the `T : SomeTrait<Foo=usize>` predicate help
- // resolve `T::Foo`? And of course it does, but in fact
- // that single predicate is desugared into two predicates
- // in the compiler: a trait predicate (`T : SomeTrait`) and a
- // projection. And the projection where clause is handled
- // in `assemble_candidates_from_param_env`.
- false
- }
- super::VtableAutoImpl(..) | super::VtableBuiltin(..) => {
- // These traits have no associated types.
- span_bug!(
- obligation.cause.span,
- "Cannot project an associated type from `{:?}`",
- vtable
- );
- }
- };
-
- if eligible {
- if candidate_set.push_candidate(ProjectionTyCandidate::Select(vtable)) {
- Ok(())
- } else {
- Err(())
- }
- } else {
- Err(())
- }
- });
-}
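The desugaring described in the `VtableParam` comment above can be observed from the surface language: a single bound `T: SomeTrait<Foo = usize>` becomes a trait predicate plus a projection predicate, and the projection half is what lets the function body treat `T::Foo` as `usize`. A minimal sketch:

```rust
trait SomeTrait {
    type Foo;
}

struct Unit;
impl SomeTrait for Unit {
    type Foo = usize;
}

// `T: SomeTrait<Foo = usize>` desugars into the trait predicate
// `T: SomeTrait` and the projection predicate
// `<T as SomeTrait>::Foo == usize`; the latter is handled by the
// param-env candidate path, not by `VtableParam`.
fn bar<T: SomeTrait<Foo = usize>>(x: T::Foo) -> usize {
    x + 1
}

fn main() {
    assert_eq!(bar::<Unit>(41), 42);
}
```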
-
-fn confirm_candidate<'cx, 'tcx>(
- selcx: &mut SelectionContext<'cx, 'tcx>,
- obligation: &ProjectionTyObligation<'tcx>,
- obligation_trait_ref: &ty::TraitRef<'tcx>,
- candidate: ProjectionTyCandidate<'tcx>,
-) -> Progress<'tcx> {
- debug!("confirm_candidate(candidate={:?}, obligation={:?})", candidate, obligation);
-
- match candidate {
- ProjectionTyCandidate::ParamEnv(poly_projection)
- | ProjectionTyCandidate::TraitDef(poly_projection) => {
- confirm_param_env_candidate(selcx, obligation, poly_projection)
- }
-
- ProjectionTyCandidate::Select(vtable) => {
- confirm_select_candidate(selcx, obligation, obligation_trait_ref, vtable)
- }
- }
-}
-
-fn confirm_select_candidate<'cx, 'tcx>(
- selcx: &mut SelectionContext<'cx, 'tcx>,
- obligation: &ProjectionTyObligation<'tcx>,
- obligation_trait_ref: &ty::TraitRef<'tcx>,
- vtable: Selection<'tcx>,
-) -> Progress<'tcx> {
- match vtable {
- super::VtableImpl(data) => confirm_impl_candidate(selcx, obligation, data),
- super::VtableGenerator(data) => confirm_generator_candidate(selcx, obligation, data),
- super::VtableClosure(data) => confirm_closure_candidate(selcx, obligation, data),
- super::VtableFnPointer(data) => confirm_fn_pointer_candidate(selcx, obligation, data),
- super::VtableObject(_) => confirm_object_candidate(selcx, obligation, obligation_trait_ref),
- super::VtableAutoImpl(..)
- | super::VtableParam(..)
- | super::VtableBuiltin(..)
- | super::VtableTraitAlias(..) =>
- // we don't create Select candidates with this kind of resolution
- {
- span_bug!(
- obligation.cause.span,
- "Cannot project an associated type from `{:?}`",
- vtable
- )
- }
- }
-}
-
-fn confirm_object_candidate<'cx, 'tcx>(
- selcx: &mut SelectionContext<'cx, 'tcx>,
- obligation: &ProjectionTyObligation<'tcx>,
- obligation_trait_ref: &ty::TraitRef<'tcx>,
-) -> Progress<'tcx> {
- let self_ty = obligation_trait_ref.self_ty();
- let object_ty = selcx.infcx().shallow_resolve(self_ty);
- debug!("confirm_object_candidate(object_ty={:?})", object_ty);
- let data = match object_ty.kind {
- ty::Dynamic(ref data, ..) => data,
- _ => span_bug!(
- obligation.cause.span,
- "confirm_object_candidate called with non-object: {:?}",
- object_ty
- ),
- };
- let env_predicates = data
- .projection_bounds()
- .map(|p| p.with_self_ty(selcx.tcx(), object_ty).to_predicate())
- .collect();
- let env_predicate = {
- let env_predicates = elaborate_predicates(selcx.tcx(), env_predicates);
-
- // select only those projections that are actually projecting an
- // item with the correct name
- let env_predicates = env_predicates.filter_map(|p| match p {
- ty::Predicate::Projection(data) => {
- if data.projection_def_id() == obligation.predicate.item_def_id {
- Some(data)
- } else {
- None
- }
- }
- _ => None,
- });
-
- // select those with a relevant trait-ref
- let mut env_predicates = env_predicates.filter(|data| {
- let data_poly_trait_ref = data.to_poly_trait_ref(selcx.tcx());
- let obligation_poly_trait_ref = obligation_trait_ref.to_poly_trait_ref();
- selcx.infcx().probe(|_| {
- selcx
- .infcx()
- .at(&obligation.cause, obligation.param_env)
- .sup(obligation_poly_trait_ref, data_poly_trait_ref)
- .is_ok()
- })
- });
-
- // select the first matching one; there really ought to be one or
- // else the object type is not WF, since an object type should
- // include all of its projections explicitly
- match env_predicates.next() {
- Some(env_predicate) => env_predicate,
- None => {
- debug!(
- "confirm_object_candidate: no env-predicate \
- found in object type `{:?}`; ill-formed",
- object_ty
- );
- return Progress::error(selcx.tcx());
- }
- }
- };
-
- confirm_param_env_candidate(selcx, obligation, env_predicate)
-}
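The well-formedness assumption in `confirm_object_candidate` is visible in ordinary code: an object type must spell out its projections explicitly (the `Item = u32` below), and that binding is exactly the env-predicate the confirmation step looks up. A small illustration:

```rust
// The object type `dyn Iterator<Item = u32>` carries the projection
// bound `Item = u32` explicitly, so `<dyn Iterator<Item = u32> as
// Iterator>::Item` normalizes to `u32`.
fn sum(it: Box<dyn Iterator<Item = u32>>) -> u32 {
    it.sum()
}

fn main() {
    let v = vec![1u32, 2, 3];
    assert_eq!(sum(Box::new(v.into_iter())), 6);
}
```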
-
-fn confirm_generator_candidate<'cx, 'tcx>(
- selcx: &mut SelectionContext<'cx, 'tcx>,
- obligation: &ProjectionTyObligation<'tcx>,
- vtable: VtableGeneratorData<'tcx, PredicateObligation<'tcx>>,
-) -> Progress<'tcx> {
- let gen_sig = vtable.substs.as_generator().poly_sig(vtable.generator_def_id, selcx.tcx());
- let Normalized { value: gen_sig, obligations } = normalize_with_depth(
- selcx,
- obligation.param_env,
- obligation.cause.clone(),
- obligation.recursion_depth + 1,
- &gen_sig,
- );
-
- debug!(
- "confirm_generator_candidate: obligation={:?},gen_sig={:?},obligations={:?}",
- obligation, gen_sig, obligations
- );
-
- let tcx = selcx.tcx();
-
- let gen_def_id = tcx.lang_items().gen_trait().unwrap();
-
- let predicate = super::util::generator_trait_ref_and_outputs(
- tcx,
- gen_def_id,
- obligation.predicate.self_ty(),
- gen_sig,
- )
- .map_bound(|(trait_ref, yield_ty, return_ty)| {
- let name = tcx.associated_item(obligation.predicate.item_def_id).ident.name;
- let ty = if name == sym::Return {
- return_ty
- } else if name == sym::Yield {
- yield_ty
- } else {
- bug!()
- };
-
- ty::ProjectionPredicate {
- projection_ty: ty::ProjectionTy {
- substs: trait_ref.substs,
- item_def_id: obligation.predicate.item_def_id,
- },
- ty: ty,
- }
- });
-
- confirm_param_env_candidate(selcx, obligation, predicate)
- .with_addl_obligations(vtable.nested)
- .with_addl_obligations(obligations)
-}
-
-fn confirm_fn_pointer_candidate<'cx, 'tcx>(
- selcx: &mut SelectionContext<'cx, 'tcx>,
- obligation: &ProjectionTyObligation<'tcx>,
- fn_pointer_vtable: VtableFnPointerData<'tcx, PredicateObligation<'tcx>>,
-) -> Progress<'tcx> {
- let fn_type = selcx.infcx().shallow_resolve(fn_pointer_vtable.fn_ty);
- let sig = fn_type.fn_sig(selcx.tcx());
- let Normalized { value: sig, obligations } = normalize_with_depth(
- selcx,
- obligation.param_env,
- obligation.cause.clone(),
- obligation.recursion_depth + 1,
- &sig,
- );
-
- confirm_callable_candidate(selcx, obligation, sig, util::TupleArgumentsFlag::Yes)
- .with_addl_obligations(fn_pointer_vtable.nested)
- .with_addl_obligations(obligations)
-}
-
-fn confirm_closure_candidate<'cx, 'tcx>(
- selcx: &mut SelectionContext<'cx, 'tcx>,
- obligation: &ProjectionTyObligation<'tcx>,
- vtable: VtableClosureData<'tcx, PredicateObligation<'tcx>>,
-) -> Progress<'tcx> {
- let tcx = selcx.tcx();
- let infcx = selcx.infcx();
- let closure_sig_ty = vtable.substs.as_closure().sig_ty(vtable.closure_def_id, tcx);
- let closure_sig = infcx.shallow_resolve(closure_sig_ty).fn_sig(tcx);
- let Normalized { value: closure_sig, obligations } = normalize_with_depth(
- selcx,
- obligation.param_env,
- obligation.cause.clone(),
- obligation.recursion_depth + 1,
- &closure_sig,
- );
-
- debug!(
- "confirm_closure_candidate: obligation={:?},closure_sig={:?},obligations={:?}",
- obligation, closure_sig, obligations
- );
-
- confirm_callable_candidate(selcx, obligation, closure_sig, util::TupleArgumentsFlag::No)
- .with_addl_obligations(vtable.nested)
- .with_addl_obligations(obligations)
-}
-
-fn confirm_callable_candidate<'cx, 'tcx>(
- selcx: &mut SelectionContext<'cx, 'tcx>,
- obligation: &ProjectionTyObligation<'tcx>,
- fn_sig: ty::PolyFnSig<'tcx>,
- flag: util::TupleArgumentsFlag,
-) -> Progress<'tcx> {
- let tcx = selcx.tcx();
-
- debug!("confirm_callable_candidate({:?},{:?})", obligation, fn_sig);
-
- // the `Output` associated type is declared on `FnOnce`
- let fn_once_def_id = tcx.lang_items().fn_once_trait().unwrap();
-
- let predicate = super::util::closure_trait_ref_and_return_type(
- tcx,
- fn_once_def_id,
- obligation.predicate.self_ty(),
- fn_sig,
- flag,
- )
- .map_bound(|(trait_ref, ret_type)| ty::ProjectionPredicate {
- projection_ty: ty::ProjectionTy::from_ref_and_name(
- tcx,
- trait_ref,
- Ident::with_dummy_span(rustc_hir::FN_OUTPUT_NAME),
- ),
- ty: ret_type,
- });
-
- confirm_param_env_candidate(selcx, obligation, predicate)
-}
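The fact that `Output` lives on `FnOnce` is why closures, fn pointers, and generators all funnel through `confirm_callable_candidate`. In surface Rust, the familiar `FnOnce() -> R` sugar is a projection bound on that one associated type (a sketch; the desugared `FnOnce<()>` syntax itself is unstable and shown only in the comment):

```rust
// `F: FnOnce() -> R` is sugar for `F: FnOnce<()>` together with the
// projection predicate `<F as FnOnce<()>>::Output == R`.
fn call<F, R>(f: F) -> R
where
    F: FnOnce() -> R,
{
    f()
}

fn main() {
    assert_eq!(call(|| 40 + 2), 42);
}
```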
-
-fn confirm_param_env_candidate<'cx, 'tcx>(
- selcx: &mut SelectionContext<'cx, 'tcx>,
- obligation: &ProjectionTyObligation<'tcx>,
- poly_cache_entry: ty::PolyProjectionPredicate<'tcx>,
-) -> Progress<'tcx> {
- let infcx = selcx.infcx();
- let cause = &obligation.cause;
- let param_env = obligation.param_env;
-
- let (cache_entry, _) = infcx.replace_bound_vars_with_fresh_vars(
- cause.span,
- LateBoundRegionConversionTime::HigherRankedType,
- &poly_cache_entry,
- );
-
- let cache_trait_ref = cache_entry.projection_ty.trait_ref(infcx.tcx);
- let obligation_trait_ref = obligation.predicate.trait_ref(infcx.tcx);
- match infcx.at(cause, param_env).eq(cache_trait_ref, obligation_trait_ref) {
- Ok(InferOk { value: _, obligations }) => Progress { ty: cache_entry.ty, obligations },
- Err(e) => {
- let msg = format!(
- "Failed to unify obligation `{:?}` with poly_projection `{:?}`: {:?}",
- obligation, poly_cache_entry, e,
- );
- debug!("confirm_param_env_candidate: {}", msg);
- infcx.tcx.sess.delay_span_bug(obligation.cause.span, &msg);
- Progress { ty: infcx.tcx.types.err, obligations: vec![] }
- }
- }
-}
-
-fn confirm_impl_candidate<'cx, 'tcx>(
- selcx: &mut SelectionContext<'cx, 'tcx>,
- obligation: &ProjectionTyObligation<'tcx>,
- impl_vtable: VtableImplData<'tcx, PredicateObligation<'tcx>>,
-) -> Progress<'tcx> {
- let tcx = selcx.tcx();
-
- let VtableImplData { impl_def_id, substs, nested } = impl_vtable;
- let assoc_item_id = obligation.predicate.item_def_id;
- let trait_def_id = tcx.trait_id_of_impl(impl_def_id).unwrap();
-
- let param_env = obligation.param_env;
- let assoc_ty = assoc_ty_def(selcx, impl_def_id, assoc_item_id);
-
- if !assoc_ty.item.defaultness.has_value() {
- // This means that the impl is missing a definition for the
- // associated type. This error will be reported by the type
- // checker method `check_impl_items_against_trait`, so here we
- // just return Error.
- debug!(
- "confirm_impl_candidate: no associated type {:?} for {:?}",
- assoc_ty.item.ident, obligation.predicate
- );
- return Progress { ty: tcx.types.err, obligations: nested };
- }
- let substs = obligation.predicate.substs.rebase_onto(tcx, trait_def_id, substs);
- let substs = translate_substs(selcx.infcx(), param_env, impl_def_id, substs, assoc_ty.node);
- let ty = if let ty::AssocKind::OpaqueTy = assoc_ty.item.kind {
- let item_substs = InternalSubsts::identity_for_item(tcx, assoc_ty.item.def_id);
- tcx.mk_opaque(assoc_ty.item.def_id, item_substs)
- } else {
- tcx.type_of(assoc_ty.item.def_id)
- };
- if substs.len() != tcx.generics_of(assoc_ty.item.def_id).count() {
- tcx.sess
- .delay_span_bug(DUMMY_SP, "impl item and trait item have different parameter counts");
- Progress { ty: tcx.types.err, obligations: nested }
- } else {
- Progress { ty: ty.subst(tcx, substs), obligations: nested }
- }
-}
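A minimal sketch of the ordinary case `confirm_impl_candidate` resolves: the impl supplies `type Item = ...`, and the projection `<Wrapper as Container>::Item` normalizes to that type (plain Rust, not compiler code):

```rust
trait Container {
    type Item;
    fn first(&self) -> Self::Item;
}

struct Wrapper(Vec<u8>);

impl Container for Wrapper {
    // The impl's associated type definition is what the projection
    // `<Wrapper as Container>::Item` resolves to.
    type Item = u8;
    fn first(&self) -> u8 {
        self.0[0]
    }
}

fn main() {
    let w = Wrapper(vec![5, 6]);
    let x: <Wrapper as Container>::Item = w.first();
    assert_eq!(x, 5);
}
```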
-
-/// Locate the definition of an associated type in the specialization hierarchy,
-/// starting from the given impl.
-///
-/// Based on the "projection mode", this lookup may in fact only examine the
-/// topmost impl. See the comments for `Reveal` for more details.
-fn assoc_ty_def(
- selcx: &SelectionContext<'_, '_>,
- impl_def_id: DefId,
- assoc_ty_def_id: DefId,
-) -> specialization_graph::NodeItem<ty::AssocItem> {
- let tcx = selcx.tcx();
- let assoc_ty_name = tcx.associated_item(assoc_ty_def_id).ident;
- let trait_def_id = tcx.impl_trait_ref(impl_def_id).unwrap().def_id;
- let trait_def = tcx.trait_def(trait_def_id);
-
- // This function may be called while we are still building the
- // specialization graph that is queried below (via TraitDef::ancestors()),
- // so, in order to avoid unnecessary infinite recursion, we manually look
- // for the associated item at the given impl.
- // If there is no such item in that impl, this function will fail with a
- // cycle error if the specialization graph is currently being built.
- let impl_node = specialization_graph::Node::Impl(impl_def_id);
- for item in impl_node.items(tcx) {
- if matches!(item.kind, ty::AssocKind::Type | ty::AssocKind::OpaqueTy)
- && tcx.hygienic_eq(item.ident, assoc_ty_name, trait_def_id)
- {
- return specialization_graph::NodeItem {
- node: specialization_graph::Node::Impl(impl_def_id),
- item: *item,
- };
- }
- }
-
- if let Some(assoc_item) =
- trait_def.ancestors(tcx, impl_def_id).leaf_def(tcx, assoc_ty_name, ty::AssocKind::Type)
- {
- assoc_item
- } else {
- // This is saying that neither the trait nor
- // the impl contain a definition for this
- // associated type. Normally this situation
- // could only arise through a compiler bug --
- // if the user wrote a bad item name, it
- // should have failed in astconv.
- bug!("No associated type `{}` for {}", assoc_ty_name, tcx.def_path_str(impl_def_id))
- }
-}
-
// # Cache
/// The projection cache. Unlike the standard caches, this can include
ty: ty::ProjectionTy<'tcx>,
}
-impl<'cx, 'tcx> ProjectionCacheKey<'tcx> {
- pub fn from_poly_projection_predicate(
- selcx: &mut SelectionContext<'cx, 'tcx>,
- predicate: &ty::PolyProjectionPredicate<'tcx>,
- ) -> Option<Self> {
- let infcx = selcx.infcx();
- // We don't do cross-snapshot caching of obligations with escaping regions,
- // so there's no cache key to use
- predicate.no_bound_vars().map(|predicate| ProjectionCacheKey {
- // We don't attempt to match up with a specific type-variable state
- // from a specific call to `opt_normalize_projection_type` - if
- // there's no precise match, the original cache entry is "stranded"
- // anyway.
- ty: infcx.resolve_vars_if_possible(&predicate.projection_ty),
- })
+impl ProjectionCacheKey<'tcx> {
+ pub fn new(ty: ty::ProjectionTy<'tcx>) -> Self {
+ Self { ty }
}
}
#[derive(Clone, Debug)]
-enum ProjectionCacheEntry<'tcx> {
+pub enum ProjectionCacheEntry<'tcx> {
InProgress,
Ambiguous,
Error,
/// Try to start normalizing `key`; returns an error if
/// normalization already occurred (this error corresponds to a
/// cache hit, so it's actually a good thing).
- fn try_start(
+ pub fn try_start(
&mut self,
key: ProjectionCacheKey<'tcx>,
) -> Result<(), ProjectionCacheEntry<'tcx>> {
}
/// Indicates that `key` was normalized to `value`.
- fn insert_ty(&mut self, key: ProjectionCacheKey<'tcx>, value: NormalizedTy<'tcx>) {
+ pub fn insert_ty(&mut self, key: ProjectionCacheKey<'tcx>, value: NormalizedTy<'tcx>) {
debug!(
"ProjectionCacheEntry::insert_ty: adding cache entry: key={:?}, value={:?}",
key, value
/// ambiguity. No point in trying it again then until we gain more
/// type information (in which case, the "fully resolved" key will
/// be different).
- fn ambiguous(&mut self, key: ProjectionCacheKey<'tcx>) {
+ pub fn ambiguous(&mut self, key: ProjectionCacheKey<'tcx>) {
let fresh = self.map.insert(key, ProjectionCacheEntry::Ambiguous);
assert!(!fresh, "never started projecting `{:?}`", key);
}
/// Indicates that trying to normalize `key` resulted in
/// an error.
- fn error(&mut self, key: ProjectionCacheKey<'tcx>) {
+ pub fn error(&mut self, key: ProjectionCacheKey<'tcx>) {
let fresh = self.map.insert(key, ProjectionCacheEntry::Error);
assert!(!fresh, "never started projecting `{:?}`", key);
}
+++ /dev/null
-use crate::infer::at::At;
-use crate::infer::canonical::OriginalQueryValues;
-use crate::infer::InferOk;
-
-use rustc::ty::subst::GenericArg;
-use rustc::ty::{self, Ty, TyCtxt};
-
-pub use rustc::traits::query::{DropckOutlivesResult, DtorckConstraint};
-
-impl<'cx, 'tcx> At<'cx, 'tcx> {
- /// Given a type `ty` of some value being dropped, computes a set
- /// of "kinds" (types, regions) that must outlive the execution
- /// of the destructor. These basically correspond to data that the
- /// destructor might access. This is used during regionck to
- /// impose "outlives" constraints on any lifetimes referenced
- /// within.
- ///
- /// The rules here are given by the "dropck" RFCs, notably [#1238]
- /// and [#1327]. This is a fixed-point computation, where we
- /// explore all the data that will be dropped (transitively) when
- /// a value of type `ty` is dropped. For each type T that will be
- /// dropped and which has a destructor, we must assume that all
- /// the types/regions of T are live during the destructor, unless
- /// they are marked with a special attribute (`#[may_dangle]`).
- ///
- /// [#1238]: https://github.com/rust-lang/rfcs/blob/master/text/1238-nonparametric-dropck.md
- /// [#1327]: https://github.com/rust-lang/rfcs/blob/master/text/1327-dropck-param-eyepatch.md
- pub fn dropck_outlives(&self, ty: Ty<'tcx>) -> InferOk<'tcx, Vec<GenericArg<'tcx>>> {
- debug!("dropck_outlives(ty={:?}, param_env={:?})", ty, self.param_env,);
-
- // Quick check: there are a number of cases that we know do not require
- // any destructor.
- let tcx = self.infcx.tcx;
- if trivial_dropck_outlives(tcx, ty) {
- return InferOk { value: vec![], obligations: vec![] };
- }
-
- let mut orig_values = OriginalQueryValues::default();
- let c_ty = self.infcx.canonicalize_query(&self.param_env.and(ty), &mut orig_values);
- let span = self.cause.span;
- debug!("c_ty = {:?}", c_ty);
- if let Ok(result) = &tcx.dropck_outlives(c_ty) {
- if result.is_proven() {
- if let Ok(InferOk { value, obligations }) =
- self.infcx.instantiate_query_response_and_region_obligations(
- self.cause,
- self.param_env,
- &orig_values,
- result,
- )
- {
- let ty = self.infcx.resolve_vars_if_possible(&ty);
- let kinds = value.into_kinds_reporting_overflows(tcx, span, ty);
- return InferOk { value: kinds, obligations };
- }
- }
- }
-
- // Errors and ambiguity in dropck occur in two cases:
- // - unresolved inference variables at the end of typeck
- // - non well-formed types where projections cannot be resolved
- // Either of these should have created an error before.
- tcx.sess.delay_span_bug(span, "dtorck encountered internal error");
-
- InferOk { value: vec![], obligations: vec![] }
- }
-}
-
-/// This returns true if the type `ty` is "trivial" for
-/// dropck-outlives -- that is, if it doesn't require any types to
-/// outlive. This is similar but not *quite* the same as the
-/// `needs_drop` test in the compiler already -- that is, for every
- /// type T for which this function returns true, needs-drop would
-/// return `false`. But the reverse does not hold: in particular,
-/// `needs_drop` returns false for `PhantomData`, but it is not
-/// trivial for dropck-outlives.
-///
-/// Note also that `needs_drop` requires a "global" type (i.e., one
-/// with erased regions), but this function does not.
-pub fn trivial_dropck_outlives<'tcx>(tcx: TyCtxt<'tcx>, ty: Ty<'tcx>) -> bool {
- match ty.kind {
- // None of these types have a destructor and hence they do not
- // require anything in particular to outlive the dtor's
- // execution.
- ty::Infer(ty::FreshIntTy(_))
- | ty::Infer(ty::FreshFloatTy(_))
- | ty::Bool
- | ty::Int(_)
- | ty::Uint(_)
- | ty::Float(_)
- | ty::Never
- | ty::FnDef(..)
- | ty::FnPtr(_)
- | ty::Char
- | ty::GeneratorWitness(..)
- | ty::RawPtr(_)
- | ty::Ref(..)
- | ty::Str
- | ty::Foreign(..)
- | ty::Error => true,
-
- // [T; N] and [T] have the same properties as T.
- ty::Array(ty, _) | ty::Slice(ty) => trivial_dropck_outlives(tcx, ty),
-
- // (T1..Tn) and closures have the same properties as T1..Tn --
- // check that *all* of those are trivial.
- ty::Tuple(ref tys) => tys.iter().all(|t| trivial_dropck_outlives(tcx, t.expect_ty())),
- ty::Closure(def_id, ref substs) => {
- substs.as_closure().upvar_tys(def_id, tcx).all(|t| trivial_dropck_outlives(tcx, t))
- }
-
- ty::Adt(def, _) => {
- if Some(def.did) == tcx.lang_items().manually_drop() {
- // `ManuallyDrop` never has a dtor.
- true
- } else {
- // Other types might. Moreover, PhantomData doesn't
- // have a dtor, but it is considered to own its
- // content, so it is non-trivial. Unions can have `impl Drop`,
- // and hence are non-trivial as well.
- false
- }
- }
-
- // The following *might* require a destructor: needs deeper inspection.
- ty::Dynamic(..)
- | ty::Projection(..)
- | ty::Param(_)
- | ty::Opaque(..)
- | ty::Placeholder(..)
- | ty::Infer(_)
- | ty::Bound(..)
- | ty::Generator(..) => false,
-
- ty::UnnormalizedProjection(..) => bug!("only used with chalk-engine"),
- }
-}
+++ /dev/null
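The doc comments in the deleted file above describe the user-visible dropck rule: a type with a destructor forces the data it borrows to strictly outlive its drop. A minimal, self-contained sketch of that rule as it appears to user code (not compiler internals; all names here are illustrative):

```rust
struct Inspector<'a>(&'a String);

impl<'a> Drop for Inspector<'a> {
    fn drop(&mut self) {
        // The destructor reads the borrow, so dropck requires the borrowed
        // `String` to still be live when this runs.
        assert!(!self.0.is_empty());
    }
}

// Returns the length observed after the inspector has been dropped,
// demonstrating that `data` outlives the destructor.
fn observe() -> usize {
    let data = String::from("still alive at drop time");
    let inspector = Inspector(&data);
    // Locals drop in reverse declaration order: `inspector` drops first,
    // while `data` is still live, so dropck accepts this. Declaring
    // `inspector` before `data` would be rejected with
    // "`data` does not live long enough".
    drop(inspector);
    data.len()
}

fn main() {
    println!("{}", observe());
}
```

Marking the destructor's lifetime parameter with `#[may_dangle]` (behind `dropck_eyepatch`) is how a type opts out of this requirement, which is what the `#[may_dangle]` escape hatch mentioned above refers to.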
-use crate::infer::canonical::OriginalQueryValues;
-use crate::infer::InferCtxt;
-use crate::traits::{
- EvaluationResult, OverflowError, PredicateObligation, SelectionContext, TraitQueryMode,
-};
-
-impl<'cx, 'tcx> InferCtxt<'cx, 'tcx> {
- /// Evaluates whether the predicate can be satisfied (by any means)
- /// in the given `ParamEnv`.
- pub fn predicate_may_hold(&self, obligation: &PredicateObligation<'tcx>) -> bool {
- self.evaluate_obligation_no_overflow(obligation).may_apply()
- }
-
- /// Evaluates whether the predicate can be satisfied in the given
- /// `ParamEnv`, and returns `false` if not certain. However, this is
- /// not entirely accurate if inference variables are involved.
- ///
- /// This version may conservatively fail when outlives obligations
- /// are required.
- pub fn predicate_must_hold_considering_regions(
- &self,
- obligation: &PredicateObligation<'tcx>,
- ) -> bool {
- self.evaluate_obligation_no_overflow(obligation).must_apply_considering_regions()
- }
-
- /// Evaluates whether the predicate can be satisfied in the given
- /// `ParamEnv`, and returns `false` if not certain. However, this is
- /// not entirely accurate if inference variables are involved.
- ///
- /// This version ignores all outlives constraints.
- pub fn predicate_must_hold_modulo_regions(
- &self,
- obligation: &PredicateObligation<'tcx>,
- ) -> bool {
- self.evaluate_obligation_no_overflow(obligation).must_apply_modulo_regions()
- }
-
- /// Evaluate a given predicate, capturing overflow and propagating it back.
- pub fn evaluate_obligation(
- &self,
- obligation: &PredicateObligation<'tcx>,
- ) -> Result<EvaluationResult, OverflowError> {
- let mut _orig_values = OriginalQueryValues::default();
- let c_pred = self
- .canonicalize_query(&obligation.param_env.and(obligation.predicate), &mut _orig_values);
- // Run canonical query. If overflow occurs, rerun from scratch but this time
- // in standard trait query mode so that overflow is handled appropriately
- // within `SelectionContext`.
- self.tcx.evaluate_obligation(c_pred)
- }
-
- // Helper function that canonicalizes and runs the query. If an
- // overflow results, we re-run it in the local context so we can
- // report a nice error.
- crate fn evaluate_obligation_no_overflow(
- &self,
- obligation: &PredicateObligation<'tcx>,
- ) -> EvaluationResult {
- match self.evaluate_obligation(obligation) {
- Ok(result) => result,
- Err(OverflowError) => {
- let mut selcx = SelectionContext::with_query_mode(&self, TraitQueryMode::Standard);
- selcx.evaluate_root_obligation(obligation).unwrap_or_else(|r| {
- span_bug!(
- obligation.cause.span,
- "Overflow should be caught earlier in standard query mode: {:?}, {:?}",
- obligation,
- r,
- )
- })
- }
- }
- }
-}
+++ /dev/null
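The `evaluate_obligation_no_overflow` helper above follows a retry pattern: attempt a depth-limited evaluation first, and rerun in a slower, diagnostic-friendly mode only when the limit is hit. A toy sketch of that pattern, using a recursion-limited computation as a stand-in for trait-obligation evaluation (names and limits are illustrative, not compiler APIs):

```rust
#[derive(Debug, PartialEq)]
struct Overflow;

// Toy recursive "evaluation": Collatz step count with a recursion limit,
// standing in for obligation evaluation in canonical query mode.
fn eval_limited(n: u64, depth: u32, limit: u32) -> Result<u64, Overflow> {
    if n == 1 {
        return Ok(0);
    }
    if depth >= limit {
        return Err(Overflow);
    }
    let next = if n % 2 == 0 { n / 2 } else { 3 * n + 1 };
    eval_limited(next, depth + 1, limit).map(|steps| steps + 1)
}

// Fast path with a small limit; on overflow, retry with a generous one
// (the analogue of re-running in `TraitQueryMode::Standard` so the error
// can be reported properly).
fn eval_no_overflow(n: u64) -> u64 {
    match eval_limited(n, 0, 8) {
        Ok(steps) => steps,
        Err(Overflow) => eval_limited(n, 0, 1_000).expect("limit generous enough"),
    }
}

fn main() {
    println!("{}", eval_no_overflow(27));
}
```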
-pub use rustc::traits::query::{CandidateStep, MethodAutoderefBadTy, MethodAutoderefStepsResult};
+++ /dev/null
-//! Experimental types for the trait query interface. The methods
-//! defined in this module are all based on **canonicalization**,
-//! which makes a canonical query by replacing unbound inference
-//! variables and regions, so that results can be reused more broadly.
-//! The providers for the queries defined here can be found in
-//! `librustc_traits`.
-
-pub mod dropck_outlives;
-pub mod evaluate_obligation;
-pub mod method_autoderef;
-pub mod normalize;
-pub mod outlives_bounds;
-pub mod type_op;
-
-pub use rustc::traits::query::*;
+++ /dev/null
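The module docs above describe canonicalization: unbound inference variables are replaced so that structurally identical queries share one canonical key, letting results be cached and reused. A toy sketch of the idea, renumbering free variables in order of first appearance (illustrative only; the real encoding also handles regions and universes):

```rust
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq, Eq, Hash)]
enum Ty {
    Int,
    Var(u32),             // an inference variable like `?7`
    Ref(Box<Ty>),         // `&T`
    Pair(Box<Ty>, Box<Ty>),
}

// Replace each distinct variable with a fresh index 0, 1, 2, ... in order
// of first appearance, so `(?7, &?7)` and `(?42, &?42)` coincide.
fn canonicalize(ty: &Ty, map: &mut HashMap<u32, u32>) -> Ty {
    match ty {
        Ty::Int => Ty::Int,
        Ty::Var(v) => {
            let next = map.len() as u32;
            Ty::Var(*map.entry(*v).or_insert(next))
        }
        Ty::Ref(t) => Ty::Ref(Box::new(canonicalize(t, map))),
        Ty::Pair(a, b) => {
            let a = canonicalize(a, map);
            let b = canonicalize(b, map);
            Ty::Pair(Box::new(a), Box::new(b))
        }
    }
}

fn canonical(ty: &Ty) -> Ty {
    canonicalize(ty, &mut HashMap::new())
}

fn main() {
    let a = Ty::Pair(Box::new(Ty::Var(7)), Box::new(Ty::Ref(Box::new(Ty::Var(7)))));
    let b = Ty::Pair(Box::new(Ty::Var(42)), Box::new(Ty::Ref(Box::new(Ty::Var(42)))));
    // Both queries map to the same canonical key `(?0, &?0)`.
    println!("{}", canonical(&a) == canonical(&b));
}
```

The `OriginalQueryValues` seen throughout these files plays the role of the `map` here in reverse: it remembers which original variables the canonical indices came from, so a cached result can be instantiated back into the caller's inference context.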
-//! Code for the 'normalization' query. This consists of a wrapper
-//! which folds deeply, invoking the underlying
-//! `normalize_projection_ty` query when it encounters projections.
-
-use crate::infer::at::At;
-use crate::infer::canonical::OriginalQueryValues;
-use crate::infer::{InferCtxt, InferOk};
-use crate::traits::project::Normalized;
-use crate::traits::{Obligation, ObligationCause, PredicateObligation, Reveal};
-use rustc::ty::fold::{TypeFoldable, TypeFolder};
-use rustc::ty::subst::Subst;
-use rustc::ty::{self, Ty, TyCtxt};
-
-use super::NoSolution;
-
-pub use rustc::traits::query::NormalizationResult;
-
-impl<'cx, 'tcx> At<'cx, 'tcx> {
- /// Normalize `value` in the context of the inference context,
- /// yielding a resulting type, or an error if `value` cannot be
- /// normalized. If you don't care about regions, you should prefer
- /// `normalize_erasing_regions`, which is more efficient.
- ///
- /// If the normalization succeeds and is unambiguous, returns back
- /// the normalized value along with various outlives relations (in
- /// the form of obligations that must be discharged).
- ///
- /// N.B., this will *eventually* be the main means of
- /// normalizing, but for now should be used only when we actually
- /// know that normalization will succeed, since error reporting
- /// and other details are still "under development".
- pub fn normalize<T>(&self, value: &T) -> Result<Normalized<'tcx, T>, NoSolution>
- where
- T: TypeFoldable<'tcx>,
- {
- debug!(
- "normalize::<{}>(value={:?}, param_env={:?})",
- ::std::any::type_name::<T>(),
- value,
- self.param_env,
- );
- if !value.has_projections() {
- return Ok(Normalized { value: value.clone(), obligations: vec![] });
- }
-
- let mut normalizer = QueryNormalizer {
- infcx: self.infcx,
- cause: self.cause,
- param_env: self.param_env,
- obligations: vec![],
- error: false,
- anon_depth: 0,
- };
-
- let value1 = value.fold_with(&mut normalizer);
- if normalizer.error {
- Err(NoSolution)
- } else {
- Ok(Normalized { value: value1, obligations: normalizer.obligations })
- }
- }
-}
-
-struct QueryNormalizer<'cx, 'tcx> {
- infcx: &'cx InferCtxt<'cx, 'tcx>,
- cause: &'cx ObligationCause<'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- obligations: Vec<PredicateObligation<'tcx>>,
- error: bool,
- anon_depth: usize,
-}
-
-impl<'cx, 'tcx> TypeFolder<'tcx> for QueryNormalizer<'cx, 'tcx> {
- fn tcx<'c>(&'c self) -> TyCtxt<'tcx> {
- self.infcx.tcx
- }
-
- fn fold_ty(&mut self, ty: Ty<'tcx>) -> Ty<'tcx> {
- if !ty.has_projections() {
- return ty;
- }
-
- let ty = ty.super_fold_with(self);
- match ty.kind {
- ty::Opaque(def_id, substs) if !substs.has_escaping_bound_vars() => {
- // (*)
- // Only normalize `impl Trait` after type-checking, usually in codegen.
- match self.param_env.reveal {
- Reveal::UserFacing => ty,
-
- Reveal::All => {
- let recursion_limit = *self.tcx().sess.recursion_limit.get();
- if self.anon_depth >= recursion_limit {
- let obligation = Obligation::with_depth(
- self.cause.clone(),
- recursion_limit,
- self.param_env,
- ty,
- );
- self.infcx.report_overflow_error(&obligation, true);
- }
-
- let generic_ty = self.tcx().type_of(def_id);
- let concrete_ty = generic_ty.subst(self.tcx(), substs);
- self.anon_depth += 1;
- if concrete_ty == ty {
- bug!(
- "infinite recursion generic_ty: {:#?}, substs: {:#?}, \
- concrete_ty: {:#?}, ty: {:#?}",
- generic_ty,
- substs,
- concrete_ty,
- ty
- );
- }
- let folded_ty = self.fold_ty(concrete_ty);
- self.anon_depth -= 1;
- folded_ty
- }
- }
- }
-
- ty::Projection(ref data) if !data.has_escaping_bound_vars() => {
- // (*)
- // (*) This is kind of hacky -- we need to be able to
- // handle normalization within binders because
- // otherwise we wind up needing to normalize when doing
- // trait matching (since you can have a trait
- // obligation like `for<'a> T::B : Fn(&'a int)`), but
- // we can't normalize with bound regions in scope. So
- // for now we just ignore binders but only normalize
- // if all bound regions are gone (and then we still
- // have to renormalize whenever we instantiate a
- // binder). It would be better to normalize in a
- // binding-aware fashion.
-
- let tcx = self.infcx.tcx;
-
- let mut orig_values = OriginalQueryValues::default();
- // HACK(matthewjasper) `'static` is special-cased in selection,
- // so we cannot canonicalize it.
- let c_data = self
- .infcx
- .canonicalize_hr_query_hack(&self.param_env.and(*data), &mut orig_values);
- debug!("QueryNormalizer: c_data = {:#?}", c_data);
- debug!("QueryNormalizer: orig_values = {:#?}", orig_values);
- match tcx.normalize_projection_ty(c_data) {
- Ok(result) => {
- // We don't expect ambiguity.
- if result.is_ambiguous() {
- self.error = true;
- return ty;
- }
-
- match self.infcx.instantiate_query_response_and_region_obligations(
- self.cause,
- self.param_env,
- &orig_values,
- &result,
- ) {
- Ok(InferOk { value: result, obligations }) => {
- debug!("QueryNormalizer: result = {:#?}", result);
- debug!("QueryNormalizer: obligations = {:#?}", obligations);
- self.obligations.extend(obligations);
- return result.normalized_ty;
- }
-
- Err(_) => {
- self.error = true;
- return ty;
- }
- }
- }
-
- Err(NoSolution) => {
- self.error = true;
- ty
- }
- }
- }
-
- _ => ty,
- }
- }
-
- fn fold_const(&mut self, constant: &'tcx ty::Const<'tcx>) -> &'tcx ty::Const<'tcx> {
- constant.eval(self.infcx.tcx, self.param_env)
- }
-}
+++ /dev/null
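The normalization file above is a deep fold: it walks the whole value and, at each projection it encounters, invokes the `normalize_projection_ty` query, recursing so that freshly exposed projections are resolved too. A minimal sketch of that shape on a toy type tree, with a lookup table standing in for the canonical query (illustrative only; the real code also canonicalizes, caches, and tracks obligations):

```rust
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
enum Ty {
    Int,
    Vec(Box<Ty>),
    Projection(String), // stands in for an unresolved `<T as Trait>::Assoc`
}

// Deeply fold `ty`, resolving projections via `table`. Resolving one
// projection may expose another, so resolved results are re-normalized.
fn normalize(ty: &Ty, table: &HashMap<String, Ty>) -> Ty {
    match ty {
        Ty::Int => Ty::Int,
        Ty::Vec(inner) => Ty::Vec(Box::new(normalize(inner, table))),
        Ty::Projection(name) => {
            // Unknown projections are left in place (the real code sets an
            // error flag instead).
            let resolved = table.get(name).cloned().unwrap_or_else(|| ty.clone());
            if resolved == *ty { resolved } else { normalize(&resolved, table) }
        }
    }
}

fn main() {
    let mut table = HashMap::new();
    table.insert("IterItem".to_string(), Ty::Projection("Elem".to_string()));
    table.insert("Elem".to_string(), Ty::Int);
    let ty = Ty::Vec(Box::new(Ty::Projection("IterItem".to_string())));
    println!("{:?}", normalize(&ty, &table));
}
```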
-use crate::infer::canonical::OriginalQueryValues;
-use crate::infer::InferCtxt;
-use crate::traits::query::NoSolution;
-use crate::traits::{FulfillmentContext, ObligationCause, TraitEngine, TraitEngineExt};
-use rustc::ty::{self, Ty};
-use rustc_hir as hir;
-use rustc_span::source_map::Span;
-
-pub use rustc::traits::query::OutlivesBound;
-
-impl<'cx, 'tcx> InferCtxt<'cx, 'tcx> {
- /// Implied bounds are region relationships that we deduce
- /// automatically. The idea is that (e.g.) a caller must check that a
- /// function's argument types are well-formed immediately before
- /// calling that fn, and hence the *callee* can assume that its
- /// argument types are well-formed. This may imply certain relationships
- /// between generic parameters. For example:
- ///
- /// fn foo<'a,T>(x: &'a T)
- ///
- /// can only be called with a `'a` and `T` such that `&'a T` is WF.
- /// For `&'a T` to be WF, `T: 'a` must hold. So we can assume `T: 'a`.
- ///
- /// # Parameters
- ///
- /// - `param_env`, the where-clauses in scope
- /// - `body_id`, the body-id to use when normalizing assoc types.
- /// Note that this may cause outlives obligations to be injected
- /// into the inference context with this body-id.
- /// - `ty`, the type that we are supposed to assume is WF.
- /// - `span`, a span to use when normalizing, hopefully not important,
- /// might be useful if a `bug!` occurs.
- pub fn implied_outlives_bounds(
- &self,
- param_env: ty::ParamEnv<'tcx>,
- body_id: hir::HirId,
- ty: Ty<'tcx>,
- span: Span,
- ) -> Vec<OutlivesBound<'tcx>> {
- debug!("implied_outlives_bounds(ty = {:?})", ty);
-
- let mut orig_values = OriginalQueryValues::default();
- let key = self.canonicalize_query(&param_env.and(ty), &mut orig_values);
- let result = match self.tcx.implied_outlives_bounds(key) {
- Ok(r) => r,
- Err(NoSolution) => {
- self.tcx.sess.delay_span_bug(
- span,
- "implied_outlives_bounds failed to solve all obligations",
- );
- return vec![];
- }
- };
- assert!(result.value.is_proven());
-
- let result = self.instantiate_query_response_and_region_obligations(
- &ObligationCause::misc(span, body_id),
- param_env,
- &orig_values,
- &result,
- );
- debug!("implied_outlives_bounds for {:?}: {:#?}", ty, result);
- let result = match result {
- Ok(v) => v,
- Err(_) => {
- self.tcx.sess.delay_span_bug(span, "implied_outlives_bounds failed to instantiate");
- return vec![];
- }
- };
-
- // Instantiation may have produced new inference variables and constraints on those
- // variables. Process these constraints.
- let mut fulfill_cx = FulfillmentContext::new();
- fulfill_cx.register_predicate_obligations(self, result.obligations);
- if fulfill_cx.select_all_or_error(self).is_err() {
- self.tcx.sess.delay_span_bug(
- span,
- "implied_outlives_bounds failed to solve obligations from instantiation",
- );
- }
-
- result.value
- }
-}
-
-pub fn explicit_outlives_bounds<'tcx>(
- param_env: ty::ParamEnv<'tcx>,
-) -> impl Iterator<Item = OutlivesBound<'tcx>> + 'tcx {
- debug!("explicit_outlives_bounds()");
- param_env.caller_bounds.into_iter().filter_map(move |predicate| match predicate {
- ty::Predicate::Projection(..)
- | ty::Predicate::Trait(..)
- | ty::Predicate::Subtype(..)
- | ty::Predicate::WellFormed(..)
- | ty::Predicate::ObjectSafe(..)
- | ty::Predicate::ClosureKind(..)
- | ty::Predicate::TypeOutlives(..)
- | ty::Predicate::ConstEvaluatable(..) => None,
- ty::Predicate::RegionOutlives(ref data) => data
- .no_bound_vars()
- .map(|ty::OutlivesPredicate(r_a, r_b)| OutlivesBound::RegionSubRegion(r_b, r_a)),
- })
-}
+++ /dev/null
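The doc comment on `implied_outlives_bounds` above uses `fn foo<'a, T>(x: &'a T)` as its example: the caller must prove `&'a T` well-formed, so the callee may assume `T: 'a`. That rule is directly observable from user code, as in this small sketch (the helper name is illustrative):

```rust
// `requires_outlives` spells out the bound that `foo` gets for free.
fn requires_outlives<'a, T: 'a>(x: &'a T) -> &'a T {
    x
}

fn foo<'a, T>(x: &'a T) -> &'a T {
    // Compiles with no explicit `T: 'a` bound: `&'a T` being well-formed
    // at every call site implies `T: 'a` inside the body.
    requires_outlives(x)
}

fn main() {
    let n = 17u32;
    println!("{}", foo(&n));
}
```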
-use crate::infer::canonical::{Canonicalized, CanonicalizedQueryResponse};
-use crate::traits::query::Fallible;
-use rustc::ty::{ParamEnvAnd, TyCtxt};
-
-pub use rustc::traits::query::type_op::AscribeUserType;
-
-impl<'tcx> super::QueryTypeOp<'tcx> for AscribeUserType<'tcx> {
- type QueryResponse = ();
-
- fn try_fast_path(
- _tcx: TyCtxt<'tcx>,
- _key: &ParamEnvAnd<'tcx, Self>,
- ) -> Option<Self::QueryResponse> {
- None
- }
-
- fn perform_query(
- tcx: TyCtxt<'tcx>,
- canonicalized: Canonicalized<'tcx, ParamEnvAnd<'tcx, Self>>,
- ) -> Fallible<CanonicalizedQueryResponse<'tcx, ()>> {
- tcx.type_op_ascribe_user_type(canonicalized)
- }
-}
+++ /dev/null
-use crate::infer::{InferCtxt, InferOk};
-use crate::traits::query::Fallible;
-use std::fmt;
-
-use crate::infer::canonical::query_response;
-use crate::infer::canonical::QueryRegionConstraints;
-use crate::traits::{ObligationCause, TraitEngine, TraitEngineExt};
-use rustc_span::source_map::DUMMY_SP;
-use std::rc::Rc;
-
-pub struct CustomTypeOp<F, G> {
- closure: F,
- description: G,
-}
-
-impl<F, G> CustomTypeOp<F, G> {
- pub fn new<'tcx, R>(closure: F, description: G) -> Self
- where
- F: FnOnce(&InferCtxt<'_, 'tcx>) -> Fallible<InferOk<'tcx, R>>,
- G: Fn() -> String,
- {
- CustomTypeOp { closure, description }
- }
-}
-
-impl<'tcx, F, R, G> super::TypeOp<'tcx> for CustomTypeOp<F, G>
-where
- F: for<'a, 'cx> FnOnce(&'a InferCtxt<'cx, 'tcx>) -> Fallible<InferOk<'tcx, R>>,
- G: Fn() -> String,
-{
- type Output = R;
-
- /// Processes the operation and all resulting obligations,
- /// returning the final result along with any region constraints
- /// (they will be given over to the NLL region solver).
- fn fully_perform(
- self,
- infcx: &InferCtxt<'_, 'tcx>,
- ) -> Fallible<(Self::Output, Option<Rc<QueryRegionConstraints<'tcx>>>)> {
- if cfg!(debug_assertions) {
- info!("fully_perform({:?})", self);
- }
-
- scrape_region_constraints(infcx, || Ok((self.closure)(infcx)?))
- }
-}
-
-impl<F, G> fmt::Debug for CustomTypeOp<F, G>
-where
- G: Fn() -> String,
-{
- fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
- write!(f, "{}", (self.description)())
- }
-}
-
-/// Executes `op` and then scrapes out all the "old style" region
-/// constraints that result, creating query-region-constraints.
-fn scrape_region_constraints<'tcx, R>(
- infcx: &InferCtxt<'_, 'tcx>,
- op: impl FnOnce() -> Fallible<InferOk<'tcx, R>>,
-) -> Fallible<(R, Option<Rc<QueryRegionConstraints<'tcx>>>)> {
- let mut fulfill_cx = TraitEngine::new(infcx.tcx);
- let dummy_body_id = ObligationCause::dummy().body_id;
-
- // During NLL, we expect that nobody will register region
- // obligations **except** as part of a custom type op (and, at the
- // end of each custom type op, we scrape out the region
- // obligations that resulted). So this vector should be empty on
- // entry.
- let pre_obligations = infcx.take_registered_region_obligations();
- assert!(
- pre_obligations.is_empty(),
- "scrape_region_constraints: incoming region obligations = {:#?}",
- pre_obligations,
- );
-
- let InferOk { value, obligations } = infcx.commit_if_ok(|_| op())?;
- debug_assert!(obligations.iter().all(|o| o.cause.body_id == dummy_body_id));
- fulfill_cx.register_predicate_obligations(infcx, obligations);
- if let Err(e) = fulfill_cx.select_all_or_error(infcx) {
- infcx.tcx.sess.diagnostic().delay_span_bug(
- DUMMY_SP,
- &format!("errors selecting obligation during MIR typeck: {:?}", e),
- );
- }
-
- let region_obligations = infcx.take_registered_region_obligations();
-
- let region_constraint_data = infcx.take_and_reset_region_constraints();
-
- let region_constraints = query_response::make_query_region_constraints(
- infcx.tcx,
- region_obligations
- .iter()
- .map(|(_, r_o)| (r_o.sup_type, r_o.sub_region))
- .map(|(ty, r)| (infcx.resolve_vars_if_possible(&ty), r)),
- &region_constraint_data,
- );
-
- if region_constraints.is_empty() {
- Ok((value, None))
- } else {
- Ok((value, Some(Rc::new(region_constraints))))
- }
-}
+++ /dev/null
-use crate::infer::canonical::{Canonicalized, CanonicalizedQueryResponse};
-use crate::traits::query::Fallible;
-use rustc::ty::{ParamEnvAnd, TyCtxt};
-
-pub use rustc::traits::query::type_op::Eq;
-
-impl<'tcx> super::QueryTypeOp<'tcx> for Eq<'tcx> {
- type QueryResponse = ();
-
- fn try_fast_path(
- _tcx: TyCtxt<'tcx>,
- key: &ParamEnvAnd<'tcx, Eq<'tcx>>,
- ) -> Option<Self::QueryResponse> {
- if key.value.a == key.value.b { Some(()) } else { None }
- }
-
- fn perform_query(
- tcx: TyCtxt<'tcx>,
- canonicalized: Canonicalized<'tcx, ParamEnvAnd<'tcx, Self>>,
- ) -> Fallible<CanonicalizedQueryResponse<'tcx, ()>> {
- tcx.type_op_eq(canonicalized)
- }
-}
+++ /dev/null
-use crate::infer::canonical::{Canonicalized, CanonicalizedQueryResponse};
-use crate::traits::query::outlives_bounds::OutlivesBound;
-use crate::traits::query::Fallible;
-use rustc::ty::{ParamEnvAnd, Ty, TyCtxt};
-
-#[derive(Clone, Debug, HashStable, TypeFoldable, Lift)]
-pub struct ImpliedOutlivesBounds<'tcx> {
- pub ty: Ty<'tcx>,
-}
-
-impl<'tcx> ImpliedOutlivesBounds<'tcx> {
- pub fn new(ty: Ty<'tcx>) -> Self {
- ImpliedOutlivesBounds { ty }
- }
-}
-
-impl<'tcx> super::QueryTypeOp<'tcx> for ImpliedOutlivesBounds<'tcx> {
- type QueryResponse = Vec<OutlivesBound<'tcx>>;
-
- fn try_fast_path(
- _tcx: TyCtxt<'tcx>,
- _key: &ParamEnvAnd<'tcx, Self>,
- ) -> Option<Self::QueryResponse> {
- None
- }
-
- fn perform_query(
- tcx: TyCtxt<'tcx>,
- canonicalized: Canonicalized<'tcx, ParamEnvAnd<'tcx, Self>>,
- ) -> Fallible<CanonicalizedQueryResponse<'tcx, Self::QueryResponse>> {
- // FIXME this `unchecked_map` is only necessary because the
- // query is defined as taking a `ParamEnvAnd<Ty>`; it should
- // take an `ImpliedOutlivesBounds` instead
- let canonicalized = canonicalized.unchecked_map(|ParamEnvAnd { param_env, value }| {
- let ImpliedOutlivesBounds { ty } = value;
- param_env.and(ty)
- });
-
- tcx.implied_outlives_bounds(canonicalized)
- }
-}
+++ /dev/null
-use crate::infer::canonical::{
- Canonicalized, CanonicalizedQueryResponse, OriginalQueryValues, QueryRegionConstraints,
-};
-use crate::infer::{InferCtxt, InferOk};
-use crate::traits::query::Fallible;
-use crate::traits::ObligationCause;
-use rustc::ty::fold::TypeFoldable;
-use rustc::ty::{ParamEnvAnd, TyCtxt};
-use std::fmt;
-use std::rc::Rc;
-
-pub mod ascribe_user_type;
-pub mod custom;
-pub mod eq;
-pub mod implied_outlives_bounds;
-pub mod normalize;
-pub mod outlives;
-pub mod prove_predicate;
-use self::prove_predicate::ProvePredicate;
-pub mod subtype;
-
-pub use rustc::traits::query::type_op::*;
-
-/// "Type ops" are used in NLL to perform some particular action and
-/// extract out the resulting region constraints (or an error if it
-/// cannot be completed).
-pub trait TypeOp<'tcx>: Sized + fmt::Debug {
- type Output;
-
- /// Processes the operation and all resulting obligations,
- /// returning the final result along with any region constraints
- /// (they will be given over to the NLL region solver).
- fn fully_perform(
- self,
- infcx: &InferCtxt<'_, 'tcx>,
- ) -> Fallible<(Self::Output, Option<Rc<QueryRegionConstraints<'tcx>>>)>;
-}
-
-/// "Query type ops" are type ops that are implemented using a
-/// [canonical query][c]. The `Self` type here contains the kernel of
-/// information needed to do the operation -- `TypeOp` is actually
-/// implemented for `ParamEnvAnd<Self>`, since we always need to bring
-/// along a parameter environment as well. For query type-ops, we will
-/// first canonicalize the key and then invoke the query on the tcx,
-/// which produces the resulting query region constraints.
-///
- /// [c]: https://rustc-dev-guide.rust-lang.org/traits/canonicalization.html
-pub trait QueryTypeOp<'tcx>: fmt::Debug + Sized + TypeFoldable<'tcx> + 'tcx {
- type QueryResponse: TypeFoldable<'tcx>;
-
- /// Gives the query the option of a simple fast path that never
- /// actually hits the tcx cache lookup etc. Return `Some(r)` with
- /// a final result or `None` to do the full path.
- fn try_fast_path(
- tcx: TyCtxt<'tcx>,
- key: &ParamEnvAnd<'tcx, Self>,
- ) -> Option<Self::QueryResponse>;
-
- /// Performs the actual query with the canonicalized key -- the
- /// real work happens here. This method is not given an `infcx`
- /// because it shouldn't need one -- and if it had access to one,
- /// it might do things like invoke `sub_regions`, which would be
- /// bad, because it would create subregion relationships that are
- /// not captured in the return value.
- fn perform_query(
- tcx: TyCtxt<'tcx>,
- canonicalized: Canonicalized<'tcx, ParamEnvAnd<'tcx, Self>>,
- ) -> Fallible<CanonicalizedQueryResponse<'tcx, Self::QueryResponse>>;
-
- fn fully_perform_into(
- query_key: ParamEnvAnd<'tcx, Self>,
- infcx: &InferCtxt<'_, 'tcx>,
- output_query_region_constraints: &mut QueryRegionConstraints<'tcx>,
- ) -> Fallible<Self::QueryResponse> {
- if let Some(result) = QueryTypeOp::try_fast_path(infcx.tcx, &query_key) {
- return Ok(result);
- }
-
- // FIXME(#33684) -- We need to use
- // `canonicalize_hr_query_hack` here because of things
- // like the subtype query, which go awry around
- // `'static` otherwise.
- let mut canonical_var_values = OriginalQueryValues::default();
- let canonical_self =
- infcx.canonicalize_hr_query_hack(&query_key, &mut canonical_var_values);
- let canonical_result = Self::perform_query(infcx.tcx, canonical_self)?;
-
- let param_env = query_key.param_env;
-
- let InferOk { value, obligations } = infcx
- .instantiate_nll_query_response_and_region_obligations(
- &ObligationCause::dummy(),
- param_env,
- &canonical_var_values,
- canonical_result,
- output_query_region_constraints,
- )?;
-
- // Typically, instantiating NLL query results does not
- // create obligations. However, in some cases there
- // are unresolved type variables, and unifying them *can*
- // create obligations. In that case, we have to go
- // fulfill them. We do this via a (recursive) query.
- for obligation in obligations {
- let () = ProvePredicate::fully_perform_into(
- obligation.param_env.and(ProvePredicate::new(obligation.predicate)),
- infcx,
- output_query_region_constraints,
- )?;
- }
-
- Ok(value)
- }
-}
-
-impl<'tcx, Q> TypeOp<'tcx> for ParamEnvAnd<'tcx, Q>
-where
- Q: QueryTypeOp<'tcx>,
-{
- type Output = Q::QueryResponse;
-
- fn fully_perform(
- self,
- infcx: &InferCtxt<'_, 'tcx>,
- ) -> Fallible<(Self::Output, Option<Rc<QueryRegionConstraints<'tcx>>>)> {
- let mut region_constraints = QueryRegionConstraints::default();
- let r = Q::fully_perform_into(self, infcx, &mut region_constraints)?;
-
- // Promote the final query-region-constraints into a
- // (optional) ref-counted vector:
- let opt_qrc =
- if region_constraints.is_empty() { None } else { Some(Rc::new(region_constraints)) };
-
- Ok((r, opt_qrc))
- }
-}
+++ /dev/null
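The `QueryTypeOp` trait above pairs an optional fast path with an expensive canonicalize-and-query path: `try_fast_path` answers syntactically obvious cases without touching the cache at all. A hedged sketch of that shape, with a case modeled on the `Eq` type op (all names are illustrative, not compiler APIs):

```rust
trait QueryOp {
    type Response;
    /// Cheap syntactic check; `Some` short-circuits the full query.
    fn try_fast_path(&self) -> Option<Self::Response>;
    /// The expensive path, standing in for canonicalization + a tcx query.
    fn perform_query(&self) -> Self::Response;

    /// Returns the response plus whether the fast path was taken.
    fn fully_perform(&self) -> (Self::Response, bool) {
        if let Some(r) = self.try_fast_path() {
            return (r, true);
        }
        (self.perform_query(), false)
    }
}

// Analogue of the `Eq` type op: trivially succeeds when both sides are
// already identical, otherwise "runs the query".
struct EqOp {
    a: String,
    b: String,
}

impl QueryOp for EqOp {
    type Response = bool;
    fn try_fast_path(&self) -> Option<bool> {
        if self.a == self.b { Some(true) } else { None }
    }
    fn perform_query(&self) -> bool {
        // Pretend-expensive comparison after "normalization"; here just a
        // case-insensitive match for illustration.
        self.a.eq_ignore_ascii_case(&self.b)
    }
}

fn main() {
    let same = EqOp { a: "u32".into(), b: "u32".into() };
    let diff = EqOp { a: "U32".into(), b: "u32".into() };
    println!("{:?} {:?}", same.fully_perform(), diff.fully_perform());
}
```

The same split motivates `DropckOutlives::try_fast_path` deferring to `trivial_dropck_outlives` and `ProvePredicate` short-circuiting obviously-`Sized` types in the files that follow.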
-use crate::infer::canonical::{Canonicalized, CanonicalizedQueryResponse};
-use crate::traits::query::Fallible;
-use rustc::ty::fold::TypeFoldable;
-use rustc::ty::{self, Lift, ParamEnvAnd, Ty, TyCtxt};
-use std::fmt;
-
-pub use rustc::traits::query::type_op::Normalize;
-
-impl<'tcx, T> super::QueryTypeOp<'tcx> for Normalize<T>
-where
- T: Normalizable<'tcx> + 'tcx,
-{
- type QueryResponse = T;
-
- fn try_fast_path(_tcx: TyCtxt<'tcx>, key: &ParamEnvAnd<'tcx, Self>) -> Option<T> {
- if !key.value.value.has_projections() { Some(key.value.value) } else { None }
- }
-
- fn perform_query(
- tcx: TyCtxt<'tcx>,
- canonicalized: Canonicalized<'tcx, ParamEnvAnd<'tcx, Self>>,
- ) -> Fallible<CanonicalizedQueryResponse<'tcx, Self::QueryResponse>> {
- T::type_op_method(tcx, canonicalized)
- }
-}
-
-pub trait Normalizable<'tcx>: fmt::Debug + TypeFoldable<'tcx> + Lift<'tcx> + Copy {
- fn type_op_method(
- tcx: TyCtxt<'tcx>,
- canonicalized: Canonicalized<'tcx, ParamEnvAnd<'tcx, Normalize<Self>>>,
- ) -> Fallible<CanonicalizedQueryResponse<'tcx, Self>>;
-}
-
-impl Normalizable<'tcx> for Ty<'tcx> {
- fn type_op_method(
- tcx: TyCtxt<'tcx>,
- canonicalized: Canonicalized<'tcx, ParamEnvAnd<'tcx, Normalize<Self>>>,
- ) -> Fallible<CanonicalizedQueryResponse<'tcx, Self>> {
- tcx.type_op_normalize_ty(canonicalized)
- }
-}
-
-impl Normalizable<'tcx> for ty::Predicate<'tcx> {
- fn type_op_method(
- tcx: TyCtxt<'tcx>,
- canonicalized: Canonicalized<'tcx, ParamEnvAnd<'tcx, Normalize<Self>>>,
- ) -> Fallible<CanonicalizedQueryResponse<'tcx, Self>> {
- tcx.type_op_normalize_predicate(canonicalized)
- }
-}
-
-impl Normalizable<'tcx> for ty::PolyFnSig<'tcx> {
- fn type_op_method(
- tcx: TyCtxt<'tcx>,
- canonicalized: Canonicalized<'tcx, ParamEnvAnd<'tcx, Normalize<Self>>>,
- ) -> Fallible<CanonicalizedQueryResponse<'tcx, Self>> {
- tcx.type_op_normalize_poly_fn_sig(canonicalized)
- }
-}
-
-impl Normalizable<'tcx> for ty::FnSig<'tcx> {
- fn type_op_method(
- tcx: TyCtxt<'tcx>,
- canonicalized: Canonicalized<'tcx, ParamEnvAnd<'tcx, Normalize<Self>>>,
- ) -> Fallible<CanonicalizedQueryResponse<'tcx, Self>> {
- tcx.type_op_normalize_fn_sig(canonicalized)
- }
-}
+++ /dev/null
-use crate::infer::canonical::{Canonicalized, CanonicalizedQueryResponse};
-use crate::traits::query::dropck_outlives::{trivial_dropck_outlives, DropckOutlivesResult};
-use crate::traits::query::Fallible;
-use rustc::ty::{ParamEnvAnd, Ty, TyCtxt};
-
-#[derive(Copy, Clone, Debug, HashStable, TypeFoldable, Lift)]
-pub struct DropckOutlives<'tcx> {
- dropped_ty: Ty<'tcx>,
-}
-
-impl<'tcx> DropckOutlives<'tcx> {
- pub fn new(dropped_ty: Ty<'tcx>) -> Self {
- DropckOutlives { dropped_ty }
- }
-}
-
-impl super::QueryTypeOp<'tcx> for DropckOutlives<'tcx> {
- type QueryResponse = DropckOutlivesResult<'tcx>;
-
- fn try_fast_path(
- tcx: TyCtxt<'tcx>,
- key: &ParamEnvAnd<'tcx, Self>,
- ) -> Option<Self::QueryResponse> {
- if trivial_dropck_outlives(tcx, key.value.dropped_ty) {
- Some(DropckOutlivesResult::default())
- } else {
- None
- }
- }
-
- fn perform_query(
- tcx: TyCtxt<'tcx>,
- canonicalized: Canonicalized<'tcx, ParamEnvAnd<'tcx, Self>>,
- ) -> Fallible<CanonicalizedQueryResponse<'tcx, Self::QueryResponse>> {
- // Subtle: note that we are not invoking
- // `infcx.at(...).dropck_outlives(...)` here, but rather the
- // underlying `dropck_outlives` query. This same underlying
- // query is also used by the
- // `infcx.at(...).dropck_outlives(...)` fn. Avoiding the
- // wrapper means we don't need an infcx in this code, which is
- // good because the interface doesn't give us one (so that we
- // know we are not registering any subregion relations or
- // other things).
-
- // FIXME convert to the type expected by the `dropck_outlives`
- // query. This should eventually be fixed by changing the
- // *underlying query*.
- let canonicalized = canonicalized.unchecked_map(|ParamEnvAnd { param_env, value }| {
- let DropckOutlives { dropped_ty } = value;
- param_env.and(dropped_ty)
- });
-
- tcx.dropck_outlives(canonicalized)
- }
-}
+++ /dev/null
-use crate::infer::canonical::{Canonicalized, CanonicalizedQueryResponse};
-use crate::traits::query::Fallible;
-use rustc::ty::{ParamEnvAnd, Predicate, TyCtxt};
-
-pub use rustc::traits::query::type_op::ProvePredicate;
-
-impl<'tcx> super::QueryTypeOp<'tcx> for ProvePredicate<'tcx> {
- type QueryResponse = ();
-
- fn try_fast_path(
- tcx: TyCtxt<'tcx>,
- key: &ParamEnvAnd<'tcx, Self>,
- ) -> Option<Self::QueryResponse> {
- // Proving Sized, very often on "obviously sized" types like
- // `&T`, accounts for about 60% of the predicates
- // we have to prove. No need to canonicalize and all that for
- // such cases.
- if let Predicate::Trait(trait_ref, _) = key.value.predicate {
- if let Some(sized_def_id) = tcx.lang_items().sized_trait() {
- if trait_ref.def_id() == sized_def_id {
- if trait_ref.skip_binder().self_ty().is_trivially_sized(tcx) {
- return Some(());
- }
- }
- }
- }
-
- None
- }
-
- fn perform_query(
- tcx: TyCtxt<'tcx>,
- canonicalized: Canonicalized<'tcx, ParamEnvAnd<'tcx, Self>>,
- ) -> Fallible<CanonicalizedQueryResponse<'tcx, ()>> {
- tcx.type_op_prove_predicate(canonicalized)
- }
-}
+++ /dev/null
-use crate::infer::canonical::{Canonicalized, CanonicalizedQueryResponse};
-use crate::traits::query::Fallible;
-use rustc::ty::{ParamEnvAnd, TyCtxt};
-
-pub use rustc::traits::query::type_op::Subtype;
-
-impl<'tcx> super::QueryTypeOp<'tcx> for Subtype<'tcx> {
- type QueryResponse = ();
-
- fn try_fast_path(_tcx: TyCtxt<'tcx>, key: &ParamEnvAnd<'tcx, Self>) -> Option<()> {
- if key.value.sub == key.value.sup { Some(()) } else { None }
- }
-
- fn perform_query(
- tcx: TyCtxt<'tcx>,
- canonicalized: Canonicalized<'tcx, ParamEnvAnd<'tcx, Self>>,
- ) -> Fallible<CanonicalizedQueryResponse<'tcx, ()>> {
- tcx.type_op_subtype(canonicalized)
- }
-}
+++ /dev/null
-// ignore-tidy-filelength
-
-//! Candidate selection. See the [rustc guide] for more information on how this works.
-//!
-//! [rustc guide]: https://rustc-dev-guide.rust-lang.org/traits/resolution.html#selection
-
-use self::EvaluationResult::*;
-use self::SelectionCandidate::*;
-
-use super::coherence::{self, Conflict};
-use super::project;
-use super::project::{
- normalize_with_depth, normalize_with_depth_to, Normalized, ProjectionCacheKey,
-};
-use super::util;
-use super::util::{closure_trait_ref_and_return_type, predicate_for_trait_def};
-use super::wf;
-use super::DerivedObligationCause;
-use super::Selection;
-use super::SelectionResult;
-use super::TraitNotObjectSafe;
-use super::TraitQueryMode;
-use super::{BuiltinDerivedObligation, ImplDerivedObligation, ObligationCauseCode};
-use super::{ObjectCastObligation, Obligation};
-use super::{ObligationCause, PredicateObligation, TraitObligation};
-use super::{OutputTypeParameterMismatch, Overflow, SelectionError, Unimplemented};
-use super::{
- VtableAutoImpl, VtableBuiltin, VtableClosure, VtableFnPointer, VtableGenerator, VtableImpl,
- VtableObject, VtableParam, VtableTraitAlias,
-};
-use super::{
- VtableAutoImplData, VtableBuiltinData, VtableClosureData, VtableFnPointerData,
- VtableGeneratorData, VtableImplData, VtableObjectData, VtableTraitAliasData,
-};
-
-use crate::infer::{CombinedSnapshot, InferCtxt, InferOk, PlaceholderMap, TypeFreshener};
-use rustc::dep_graph::{DepKind, DepNodeIndex};
-use rustc::middle::lang_items;
-use rustc::ty::fast_reject;
-use rustc::ty::relate::TypeRelation;
-use rustc::ty::subst::{Subst, SubstsRef};
-use rustc::ty::{self, ToPolyTraitRef, ToPredicate, Ty, TyCtxt, TypeFoldable, WithConstness};
-use rustc_ast::attr;
-use rustc_data_structures::fx::{FxHashMap, FxHashSet};
-use rustc_hir as hir;
-use rustc_hir::def_id::DefId;
-use rustc_index::bit_set::GrowableBitSet;
-use rustc_span::symbol::sym;
-use rustc_target::spec::abi::Abi;
-
-use std::cell::{Cell, RefCell};
-use std::cmp;
-use std::fmt::{self, Display};
-use std::iter;
-use std::rc::Rc;
-
-pub use rustc::traits::select::*;
-
-pub struct SelectionContext<'cx, 'tcx> {
- infcx: &'cx InferCtxt<'cx, 'tcx>,
-
- /// Freshener used specifically for entries on the obligation
- /// stack. This ensures that all entries on the stack at one time
- /// will have the same set of placeholder entries, which is
- /// important for checking for trait bounds that recursively
- /// require themselves.
- freshener: TypeFreshener<'cx, 'tcx>,
-
- /// If `true`, indicates that the evaluation should be conservative
- /// and consider the possibility of types outside this crate.
- /// This comes up primarily when resolving ambiguity. Imagine
- /// there is some trait reference `$0: Bar` where `$0` is an
- /// inference variable. If `intercrate` is true, then we can never
- /// say for sure that this reference is not implemented, even if
- /// there are *no impls at all for `Bar`*, because `$0` could be
- /// bound to some type that in a downstream crate that implements
-    /// bound to some type in a downstream crate that implements
- /// though, we set this to false, because we are only interested
- /// in types that the user could actually have written --- in
- /// other words, we consider `$0: Bar` to be unimplemented if
- /// there is no type that the user could *actually name* that
- /// would satisfy it. This avoids crippling inference, basically.
- intercrate: bool,
-
- intercrate_ambiguity_causes: Option<Vec<IntercrateAmbiguityCause>>,
-
- /// Controls whether or not to filter out negative impls when selecting.
- /// This is used in librustdoc to distinguish between the lack of an impl
-    /// and a negative impl.
- allow_negative_impls: bool,
-
- /// The mode that trait queries run in, which informs our error handling
- /// policy. In essence, canonicalized queries need their errors propagated
- /// rather than immediately reported because we do not have accurate spans.
- query_mode: TraitQueryMode,
-}
-
-#[derive(Clone, Debug)]
-pub enum IntercrateAmbiguityCause {
- DownstreamCrate { trait_desc: String, self_desc: Option<String> },
- UpstreamCrateUpdate { trait_desc: String, self_desc: Option<String> },
- ReservationImpl { message: String },
-}
-
-impl IntercrateAmbiguityCause {
- /// Emits notes when the overlap is caused by complex intercrate ambiguities.
- /// See #23980 for details.
- pub fn add_intercrate_ambiguity_hint(&self, err: &mut rustc_errors::DiagnosticBuilder<'_>) {
- err.note(&self.intercrate_ambiguity_hint());
- }
-
- pub fn intercrate_ambiguity_hint(&self) -> String {
- match self {
- &IntercrateAmbiguityCause::DownstreamCrate { ref trait_desc, ref self_desc } => {
- let self_desc = if let &Some(ref ty) = self_desc {
- format!(" for type `{}`", ty)
- } else {
- String::new()
- };
- format!("downstream crates may implement trait `{}`{}", trait_desc, self_desc)
- }
- &IntercrateAmbiguityCause::UpstreamCrateUpdate { ref trait_desc, ref self_desc } => {
- let self_desc = if let &Some(ref ty) = self_desc {
- format!(" for type `{}`", ty)
- } else {
- String::new()
- };
- format!(
- "upstream crates may add a new impl of trait `{}`{} \
- in future versions",
- trait_desc, self_desc
- )
- }
- &IntercrateAmbiguityCause::ReservationImpl { ref message } => message.clone(),
- }
- }
-}
-
-// A stack of trait obligations, each frame linked back to the previous one.
-struct TraitObligationStack<'prev, 'tcx> {
- obligation: &'prev TraitObligation<'tcx>,
-
- /// The trait ref from `obligation` but "freshened" with the
- /// selection-context's freshener. Used to check for recursion.
- fresh_trait_ref: ty::PolyTraitRef<'tcx>,
-
- /// Starts out equal to `depth` -- if, during evaluation, we
- /// encounter a cycle, then we will set this flag to the minimum
- /// depth of that cycle for all participants in the cycle. These
- /// participants will then forego caching their results. This is
- /// not the most efficient solution, but it addresses #60010. The
- /// problem we are trying to prevent:
- ///
- /// - If you have `A: AutoTrait` requires `B: AutoTrait` and `C: NonAutoTrait`
- /// - `B: AutoTrait` requires `A: AutoTrait` (coinductive cycle, ok)
- /// - `C: NonAutoTrait` requires `A: AutoTrait` (non-coinductive cycle, not ok)
- ///
- /// you don't want to cache that `B: AutoTrait` or `A: AutoTrait`
-    /// is `EvaluatedToOk`; this is because they were only considered
-    /// ok on the premise that `A: AutoTrait` held, but we later
-    /// encountered a problem with `A: AutoTrait`. So we
- /// currently set a flag on the stack node for `B: AutoTrait` (as
- /// well as the second instance of `A: AutoTrait`) to suppress
- /// caching.
- ///
- /// This is a simple, targeted fix. A more-performant fix requires
- /// deeper changes, but would permit more caching: we could
- /// basically defer caching until we have fully evaluated the
- /// tree, and then cache the entire tree at once. In any case, the
- /// performance impact here shouldn't be so horrible: every time
- /// this is hit, we do cache at least one trait, so we only
- /// evaluate each member of a cycle up to N times, where N is the
- /// length of the cycle. This means the performance impact is
- /// bounded and we shouldn't have any terrible worst-cases.
- reached_depth: Cell<usize>,
-
- previous: TraitObligationStackList<'prev, 'tcx>,
-
- /// The number of parent frames plus one (thus, the topmost frame has depth 1).
- depth: usize,
-
- /// The depth-first number of this node in the search graph -- a
- /// pre-order index. Basically, a freshly incremented counter.
- dfn: usize,
-}
-
-struct SelectionCandidateSet<'tcx> {
- // A list of candidates that definitely apply to the current
- // obligation (meaning: types unify).
- vec: Vec<SelectionCandidate<'tcx>>,
-
- // If `true`, then there were candidates that might or might
- // not have applied, but we couldn't tell. This occurs when some
- // of the input types are type variables, in which case there are
- // various "builtin" rules that might or might not trigger.
- ambiguous: bool,
-}
-
-#[derive(PartialEq, Eq, Debug, Clone)]
-struct EvaluatedCandidate<'tcx> {
- candidate: SelectionCandidate<'tcx>,
- evaluation: EvaluationResult,
-}
-
-/// When does the builtin impl for `T: Trait` apply?
-enum BuiltinImplConditions<'tcx> {
- /// The impl is conditional on `T1, T2, ...: Trait`.
- Where(ty::Binder<Vec<Ty<'tcx>>>),
- /// There is no built-in impl. There may be some other
- /// candidate (a where-clause or user-defined impl).
- None,
- /// It is unknown whether there is an impl.
- Ambiguous,
-}
-
-impl<'cx, 'tcx> SelectionContext<'cx, 'tcx> {
- pub fn new(infcx: &'cx InferCtxt<'cx, 'tcx>) -> SelectionContext<'cx, 'tcx> {
- SelectionContext {
- infcx,
- freshener: infcx.freshener(),
- intercrate: false,
- intercrate_ambiguity_causes: None,
- allow_negative_impls: false,
- query_mode: TraitQueryMode::Standard,
- }
- }
-
- pub fn intercrate(infcx: &'cx InferCtxt<'cx, 'tcx>) -> SelectionContext<'cx, 'tcx> {
- SelectionContext {
- infcx,
- freshener: infcx.freshener(),
- intercrate: true,
- intercrate_ambiguity_causes: None,
- allow_negative_impls: false,
- query_mode: TraitQueryMode::Standard,
- }
- }
-
- pub fn with_negative(
- infcx: &'cx InferCtxt<'cx, 'tcx>,
- allow_negative_impls: bool,
- ) -> SelectionContext<'cx, 'tcx> {
- debug!("with_negative({:?})", allow_negative_impls);
- SelectionContext {
- infcx,
- freshener: infcx.freshener(),
- intercrate: false,
- intercrate_ambiguity_causes: None,
- allow_negative_impls,
- query_mode: TraitQueryMode::Standard,
- }
- }
-
- pub fn with_query_mode(
- infcx: &'cx InferCtxt<'cx, 'tcx>,
- query_mode: TraitQueryMode,
- ) -> SelectionContext<'cx, 'tcx> {
- debug!("with_query_mode({:?})", query_mode);
- SelectionContext {
- infcx,
- freshener: infcx.freshener(),
- intercrate: false,
- intercrate_ambiguity_causes: None,
- allow_negative_impls: false,
- query_mode,
- }
- }
-
- /// Enables tracking of intercrate ambiguity causes. These are
- /// used in coherence to give improved diagnostics. We don't do
- /// this until we detect a coherence error because it can lead to
- /// false overflow results (#47139) and because it costs
- /// computation time.
- pub fn enable_tracking_intercrate_ambiguity_causes(&mut self) {
- assert!(self.intercrate);
- assert!(self.intercrate_ambiguity_causes.is_none());
- self.intercrate_ambiguity_causes = Some(vec![]);
- debug!("selcx: enable_tracking_intercrate_ambiguity_causes");
- }
-
- /// Gets the intercrate ambiguity causes collected since tracking
- /// was enabled and disables tracking at the same time. If
- /// tracking is not enabled, just returns an empty vector.
- pub fn take_intercrate_ambiguity_causes(&mut self) -> Vec<IntercrateAmbiguityCause> {
- assert!(self.intercrate);
- self.intercrate_ambiguity_causes.take().unwrap_or(vec![])
- }
-
- pub fn infcx(&self) -> &'cx InferCtxt<'cx, 'tcx> {
- self.infcx
- }
-
- pub fn tcx(&self) -> TyCtxt<'tcx> {
- self.infcx.tcx
- }
-
- pub fn closure_typer(&self) -> &'cx InferCtxt<'cx, 'tcx> {
- self.infcx
- }
-
- ///////////////////////////////////////////////////////////////////////////
- // Selection
- //
- // The selection phase tries to identify *how* an obligation will
- // be resolved. For example, it will identify which impl or
- // parameter bound is to be used. The process can be inconclusive
- // if the self type in the obligation is not fully inferred. Selection
- // can result in an error in one of two ways:
- //
- // 1. If no applicable impl or parameter bound can be found.
- // 2. If the output type parameters in the obligation do not match
- // those specified by the impl/bound. For example, if the obligation
- // is `Vec<Foo>: Iterable<Bar>`, but the impl specifies
-    //    `impl<T> Iterable<T> for Vec<T>`, then an error would result.
-
- /// Attempts to satisfy the obligation. If successful, this will affect the surrounding
- /// type environment by performing unification.
- pub fn select(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- ) -> SelectionResult<'tcx, Selection<'tcx>> {
- debug!("select({:?})", obligation);
- debug_assert!(!obligation.predicate.has_escaping_bound_vars());
-
- let pec = &ProvisionalEvaluationCache::default();
- let stack = self.push_stack(TraitObligationStackList::empty(pec), obligation);
-
- let candidate = match self.candidate_from_obligation(&stack) {
- Err(SelectionError::Overflow) => {
- // In standard mode, overflow must have been caught and reported
- // earlier.
- assert!(self.query_mode == TraitQueryMode::Canonical);
- return Err(SelectionError::Overflow);
- }
- Err(e) => {
- return Err(e);
- }
- Ok(None) => {
- return Ok(None);
- }
- Ok(Some(candidate)) => candidate,
- };
-
- match self.confirm_candidate(obligation, candidate) {
- Err(SelectionError::Overflow) => {
- assert!(self.query_mode == TraitQueryMode::Canonical);
- Err(SelectionError::Overflow)
- }
- Err(e) => Err(e),
- Ok(candidate) => Ok(Some(candidate)),
- }
- }
-
- ///////////////////////////////////////////////////////////////////////////
- // EVALUATION
- //
- // Tests whether an obligation can be selected or whether an impl
- // can be applied to particular types. It skips the "confirmation"
- // step and hence completely ignores output type parameters.
- //
- // The result is "true" if the obligation *may* hold and "false" if
- // we can be sure it does not.
-
- /// Evaluates whether the obligation `obligation` can be satisfied (by any means).
- pub fn predicate_may_hold_fatal(&mut self, obligation: &PredicateObligation<'tcx>) -> bool {
- debug!("predicate_may_hold_fatal({:?})", obligation);
-
- // This fatal query is a stopgap that should only be used in standard mode,
- // where we do not expect overflow to be propagated.
- assert!(self.query_mode == TraitQueryMode::Standard);
-
- self.evaluate_root_obligation(obligation)
- .expect("Overflow should be caught earlier in standard query mode")
- .may_apply()
- }
-
- /// Evaluates whether the obligation `obligation` can be satisfied
- /// and returns an `EvaluationResult`. This is meant for the
- /// *initial* call.
- pub fn evaluate_root_obligation(
- &mut self,
- obligation: &PredicateObligation<'tcx>,
- ) -> Result<EvaluationResult, OverflowError> {
- self.evaluation_probe(|this| {
- this.evaluate_predicate_recursively(
- TraitObligationStackList::empty(&ProvisionalEvaluationCache::default()),
- obligation.clone(),
- )
- })
- }
-
- fn evaluation_probe(
- &mut self,
- op: impl FnOnce(&mut Self) -> Result<EvaluationResult, OverflowError>,
- ) -> Result<EvaluationResult, OverflowError> {
- self.infcx.probe(|snapshot| -> Result<EvaluationResult, OverflowError> {
- let result = op(self)?;
- match self.infcx.region_constraints_added_in_snapshot(snapshot) {
- None => Ok(result),
- Some(_) => Ok(result.max(EvaluatedToOkModuloRegions)),
- }
- })
- }
-
- /// Evaluates the predicates in `predicates` recursively. Note that
- /// this applies projections in the predicates, and therefore
- /// is run within an inference probe.
- fn evaluate_predicates_recursively<'o, I>(
- &mut self,
- stack: TraitObligationStackList<'o, 'tcx>,
- predicates: I,
- ) -> Result<EvaluationResult, OverflowError>
- where
- I: IntoIterator<Item = PredicateObligation<'tcx>>,
- {
- let mut result = EvaluatedToOk;
- for obligation in predicates {
- let eval = self.evaluate_predicate_recursively(stack, obligation.clone())?;
- debug!("evaluate_predicate_recursively({:?}) = {:?}", obligation, eval);
- if let EvaluatedToErr = eval {
-                // Fast path: `EvaluatedToErr` is the top of the lattice,
-                // so we don't need to look at the other predicates.
- return Ok(EvaluatedToErr);
- } else {
- result = cmp::max(result, eval);
- }
- }
- Ok(result)
- }
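The fold above combines results with `cmp::max` and short-circuits as soon as it sees `EvaluatedToErr`, the top of the lattice. A self-contained sketch of that pattern, using a hypothetical three-element lattice in place of the real `EvaluationResult`:

```rust
use std::cmp;

// Hypothetical lattice; derived `Ord` gives Ok < Ambig < Err, so `Err`
// is the top element, as in the comment above.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
enum Eval {
    Ok,
    Ambig,
    Err,
}

// Fold results with `max`, returning early once the top element appears,
// since no later result can change the answer.
fn combine(results: impl IntoIterator<Item = Eval>) -> Eval {
    let mut acc = Eval::Ok;
    for r in results {
        if r == Eval::Err {
            return Eval::Err; // fast path: top of the lattice
        }
        acc = cmp::max(acc, r);
    }
    acc
}

fn main() {
    assert_eq!(combine(vec![Eval::Ok, Eval::Ambig, Eval::Ok]), Eval::Ambig);
    assert_eq!(combine(vec![Eval::Ambig, Eval::Err, Eval::Ok]), Eval::Err);
    assert_eq!(combine(Vec::<Eval>::new()), Eval::Ok);
    println!("ok");
}
```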
-
- fn evaluate_predicate_recursively<'o>(
- &mut self,
- previous_stack: TraitObligationStackList<'o, 'tcx>,
- obligation: PredicateObligation<'tcx>,
- ) -> Result<EvaluationResult, OverflowError> {
- debug!(
- "evaluate_predicate_recursively(previous_stack={:?}, obligation={:?})",
- previous_stack.head(),
- obligation
- );
-
-        // `previous_stack` stores a `TraitObligation`, while `obligation` is
- // a `PredicateObligation`. These are distinct types, so we can't
- // use any `Option` combinator method that would force them to be
- // the same.
- match previous_stack.head() {
- Some(h) => self.check_recursion_limit(&obligation, h.obligation)?,
- None => self.check_recursion_limit(&obligation, &obligation)?,
- }
-
- match obligation.predicate {
- ty::Predicate::Trait(ref t, _) => {
- debug_assert!(!t.has_escaping_bound_vars());
- let obligation = obligation.with(t.clone());
- self.evaluate_trait_predicate_recursively(previous_stack, obligation)
- }
-
- ty::Predicate::Subtype(ref p) => {
- // Does this code ever run?
- match self.infcx.subtype_predicate(&obligation.cause, obligation.param_env, p) {
- Some(Ok(InferOk { mut obligations, .. })) => {
- self.add_depth(obligations.iter_mut(), obligation.recursion_depth);
- self.evaluate_predicates_recursively(
- previous_stack,
- obligations.into_iter(),
- )
- }
- Some(Err(_)) => Ok(EvaluatedToErr),
- None => Ok(EvaluatedToAmbig),
- }
- }
-
- ty::Predicate::WellFormed(ty) => match wf::obligations(
- self.infcx,
- obligation.param_env,
- obligation.cause.body_id,
- ty,
- obligation.cause.span,
- ) {
- Some(mut obligations) => {
- self.add_depth(obligations.iter_mut(), obligation.recursion_depth);
- self.evaluate_predicates_recursively(previous_stack, obligations.into_iter())
- }
- None => Ok(EvaluatedToAmbig),
- },
-
- ty::Predicate::TypeOutlives(..) | ty::Predicate::RegionOutlives(..) => {
- // We do not consider region relationships when evaluating trait matches.
- Ok(EvaluatedToOkModuloRegions)
- }
-
- ty::Predicate::ObjectSafe(trait_def_id) => {
- if self.tcx().is_object_safe(trait_def_id) {
- Ok(EvaluatedToOk)
- } else {
- Ok(EvaluatedToErr)
- }
- }
-
- ty::Predicate::Projection(ref data) => {
- let project_obligation = obligation.with(data.clone());
- match project::poly_project_and_unify_type(self, &project_obligation) {
- Ok(Some(mut subobligations)) => {
- self.add_depth(subobligations.iter_mut(), obligation.recursion_depth);
- let result = self.evaluate_predicates_recursively(
- previous_stack,
- subobligations.into_iter(),
- );
- if let Some(key) =
- ProjectionCacheKey::from_poly_projection_predicate(self, data)
- {
- self.infcx.inner.borrow_mut().projection_cache.complete(key);
- }
- result
- }
- Ok(None) => Ok(EvaluatedToAmbig),
- Err(_) => Ok(EvaluatedToErr),
- }
- }
-
- ty::Predicate::ClosureKind(closure_def_id, closure_substs, kind) => {
- match self.infcx.closure_kind(closure_def_id, closure_substs) {
- Some(closure_kind) => {
- if closure_kind.extends(kind) {
- Ok(EvaluatedToOk)
- } else {
- Ok(EvaluatedToErr)
- }
- }
- None => Ok(EvaluatedToAmbig),
- }
- }
-
- ty::Predicate::ConstEvaluatable(def_id, substs) => {
- match self.tcx().const_eval_resolve(
- obligation.param_env,
- def_id,
- substs,
- None,
- None,
- ) {
- Ok(_) => Ok(EvaluatedToOk),
- Err(_) => Ok(EvaluatedToErr),
- }
- }
- }
- }
-
- fn evaluate_trait_predicate_recursively<'o>(
- &mut self,
- previous_stack: TraitObligationStackList<'o, 'tcx>,
- mut obligation: TraitObligation<'tcx>,
- ) -> Result<EvaluationResult, OverflowError> {
- debug!("evaluate_trait_predicate_recursively({:?})", obligation);
-
- if !self.intercrate
- && obligation.is_global()
- && obligation.param_env.caller_bounds.iter().all(|bound| bound.needs_subst())
- {
- // If a param env has no global bounds, global obligations do not
- // depend on its particular value in order to work, so we can clear
- // out the param env and get better caching.
- debug!("evaluate_trait_predicate_recursively({:?}) - in global", obligation);
- obligation.param_env = obligation.param_env.without_caller_bounds();
- }
-
- let stack = self.push_stack(previous_stack, &obligation);
- let fresh_trait_ref = stack.fresh_trait_ref;
- if let Some(result) = self.check_evaluation_cache(obligation.param_env, fresh_trait_ref) {
- debug!("CACHE HIT: EVAL({:?})={:?}", fresh_trait_ref, result);
- return Ok(result);
- }
-
- if let Some(result) = stack.cache().get_provisional(fresh_trait_ref) {
- debug!("PROVISIONAL CACHE HIT: EVAL({:?})={:?}", fresh_trait_ref, result);
- stack.update_reached_depth(stack.cache().current_reached_depth());
- return Ok(result);
- }
-
- // Check if this is a match for something already on the
- // stack. If so, we don't want to insert the result into the
- // main cache (it is cycle dependent) nor the provisional
- // cache (which is meant for things that have completed but
- // for a "backedge" -- this result *is* the backedge).
- if let Some(cycle_result) = self.check_evaluation_cycle(&stack) {
- return Ok(cycle_result);
- }
-
- let (result, dep_node) = self.in_task(|this| this.evaluate_stack(&stack));
- let result = result?;
-
- if !result.must_apply_modulo_regions() {
- stack.cache().on_failure(stack.dfn);
- }
-
- let reached_depth = stack.reached_depth.get();
- if reached_depth >= stack.depth {
- debug!("CACHE MISS: EVAL({:?})={:?}", fresh_trait_ref, result);
- self.insert_evaluation_cache(obligation.param_env, fresh_trait_ref, dep_node, result);
-
- stack.cache().on_completion(stack.depth, |fresh_trait_ref, provisional_result| {
- self.insert_evaluation_cache(
- obligation.param_env,
- fresh_trait_ref,
- dep_node,
- provisional_result.max(result),
- );
- });
- } else {
- debug!("PROVISIONAL: {:?}={:?}", fresh_trait_ref, result);
- debug!(
- "evaluate_trait_predicate_recursively: caching provisionally because {:?} \
- is a cycle participant (at depth {}, reached depth {})",
- fresh_trait_ref, stack.depth, reached_depth,
- );
-
- stack.cache().insert_provisional(stack.dfn, reached_depth, fresh_trait_ref, result);
- }
-
- Ok(result)
- }
-
- /// If there is any previous entry on the stack that precisely
- /// matches this obligation, then we can assume that the
- /// obligation is satisfied for now (still all other conditions
- /// must be met of course). One obvious case this comes up is
- /// marker traits like `Send`. Think of a linked list:
- ///
- /// struct List<T> { data: T, next: Option<Box<List<T>>> }
- ///
- /// `Box<List<T>>` will be `Send` if `T` is `Send` and
- /// `Option<Box<List<T>>>` is `Send`, and in turn
- /// `Option<Box<List<T>>>` is `Send` if `Box<List<T>>` is
- /// `Send`.
- ///
- /// Note that we do this comparison using the `fresh_trait_ref`
- /// fields. Because these have all been freshened using
- /// `self.freshener`, we can be sure that (a) this will not
- /// affect the inferencer state and (b) that if we see two
- /// fresh regions with the same index, they refer to the same
- /// unbound type variable.
- fn check_evaluation_cycle(
- &mut self,
- stack: &TraitObligationStack<'_, 'tcx>,
- ) -> Option<EvaluationResult> {
- if let Some(cycle_depth) = stack
- .iter()
- .skip(1) // Skip top-most frame.
- .find(|prev| {
- stack.obligation.param_env == prev.obligation.param_env
- && stack.fresh_trait_ref == prev.fresh_trait_ref
- })
- .map(|stack| stack.depth)
- {
- debug!(
- "evaluate_stack({:?}) --> recursive at depth {}",
- stack.fresh_trait_ref, cycle_depth,
- );
-
- // If we have a stack like `A B C D E A`, where the top of
- // the stack is the final `A`, then this will iterate over
- // `A, E, D, C, B` -- i.e., all the participants apart
- // from the cycle head. We mark them as participating in a
- // cycle. This suppresses caching for those nodes. See
- // `in_cycle` field for more details.
- stack.update_reached_depth(cycle_depth);
-
- // Subtle: when checking for a coinductive cycle, we do
- // not compare using the "freshened trait refs" (which
- // have erased regions) but rather the fully explicit
- // trait refs. This is important because it's only a cycle
- // if the regions match exactly.
- let cycle = stack.iter().skip(1).take_while(|s| s.depth >= cycle_depth);
- let cycle = cycle.map(|stack| {
- ty::Predicate::Trait(stack.obligation.predicate, hir::Constness::NotConst)
- });
- if self.coinductive_match(cycle) {
- debug!("evaluate_stack({:?}) --> recursive, coinductive", stack.fresh_trait_ref);
- Some(EvaluatedToOk)
- } else {
- debug!("evaluate_stack({:?}) --> recursive, inductive", stack.fresh_trait_ref);
- Some(EvaluatedToRecur)
- }
- } else {
- None
- }
- }
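`check_evaluation_cycle` scans outward for an earlier stack frame with the same `param_env` and freshened trait ref. A minimal sketch of that scan, using a hypothetical frame type in place of `TraitObligationStack`:

```rust
// Hypothetical stand-in for a stack frame; `key` plays the role of the
// (param_env, fresh_trait_ref) pair compared above.
struct Frame {
    key: &'static str,
    depth: usize,
}

// Skip the top-most frame, then walk outward looking for an exact repeat.
// Returns the depth of the earlier occurrence, i.e. the cycle head.
fn find_cycle_depth(stack: &[Frame]) -> Option<usize> {
    let (top, rest) = stack.split_last()?;
    rest.iter().rev().find(|prev| prev.key == top.key).map(|prev| prev.depth)
}

fn main() {
    let stack = [
        Frame { key: "A: AutoTrait", depth: 1 },
        Frame { key: "B: AutoTrait", depth: 2 },
        Frame { key: "A: AutoTrait", depth: 3 }, // repeats the head: a cycle
    ];
    assert_eq!(find_cycle_depth(&stack), Some(1));
    assert_eq!(find_cycle_depth(&stack[..2]), None);
    println!("ok");
}
```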
-
- fn evaluate_stack<'o>(
- &mut self,
- stack: &TraitObligationStack<'o, 'tcx>,
- ) -> Result<EvaluationResult, OverflowError> {
- // In intercrate mode, whenever any of the types are unbound,
- // there can always be an impl. Even if there are no impls in
- // this crate, perhaps the type would be unified with
- // something from another crate that does provide an impl.
- //
- // In intra mode, we must still be conservative. The reason is
- // that we want to avoid cycles. Imagine an impl like:
- //
- // impl<T:Eq> Eq for Vec<T>
- //
- // and a trait reference like `$0 : Eq` where `$0` is an
- // unbound variable. When we evaluate this trait-reference, we
- // will unify `$0` with `Vec<$1>` (for some fresh variable
- // `$1`), on the condition that `$1 : Eq`. We will then wind
-        // up with many candidates (since there are other `Eq` impls
-        // that apply) and try to winnow things down. This results in
-        // a recursive evaluation of `$1 : Eq` -- as you can
- // imagine, this is just where we started. To avoid that, we
- // check for unbound variables and return an ambiguous (hence possible)
- // match if we've seen this trait before.
- //
- // This suffices to allow chains like `FnMut` implemented in
- // terms of `Fn` etc, but we could probably make this more
- // precise still.
- let unbound_input_types =
- stack.fresh_trait_ref.skip_binder().input_types().any(|ty| ty.is_fresh());
- // This check was an imperfect workaround for a bug in the old
- // intercrate mode; it should be removed when that goes away.
- if unbound_input_types && self.intercrate {
- debug!(
- "evaluate_stack({:?}) --> unbound argument, intercrate --> ambiguous",
- stack.fresh_trait_ref
- );
-            // Heuristic: show the diagnostics when there are no candidates in the crate.
- if self.intercrate_ambiguity_causes.is_some() {
- debug!("evaluate_stack: intercrate_ambiguity_causes is some");
- if let Ok(candidate_set) = self.assemble_candidates(stack) {
- if !candidate_set.ambiguous && candidate_set.vec.is_empty() {
- let trait_ref = stack.obligation.predicate.skip_binder().trait_ref;
- let self_ty = trait_ref.self_ty();
- let cause = IntercrateAmbiguityCause::DownstreamCrate {
- trait_desc: trait_ref.print_only_trait_path().to_string(),
- self_desc: if self_ty.has_concrete_skeleton() {
- Some(self_ty.to_string())
- } else {
- None
- },
- };
- debug!("evaluate_stack: pushing cause = {:?}", cause);
- self.intercrate_ambiguity_causes.as_mut().unwrap().push(cause);
- }
- }
- }
- return Ok(EvaluatedToAmbig);
- }
- if unbound_input_types
- && stack.iter().skip(1).any(|prev| {
- stack.obligation.param_env == prev.obligation.param_env
- && self.match_fresh_trait_refs(
- &stack.fresh_trait_ref,
- &prev.fresh_trait_ref,
- prev.obligation.param_env,
- )
- })
- {
- debug!(
- "evaluate_stack({:?}) --> unbound argument, recursive --> giving up",
- stack.fresh_trait_ref
- );
- return Ok(EvaluatedToUnknown);
- }
-
- match self.candidate_from_obligation(stack) {
- Ok(Some(c)) => self.evaluate_candidate(stack, &c),
- Ok(None) => Ok(EvaluatedToAmbig),
- Err(Overflow) => Err(OverflowError),
- Err(..) => Ok(EvaluatedToErr),
- }
- }
-
- /// For defaulted traits, we use a co-inductive strategy to solve, so
- /// that recursion is ok. This routine returns `true` if the top of the
- /// stack (`cycle[0]`):
- ///
- /// - is a defaulted trait,
- /// - it also appears in the backtrace at some position `X`,
- /// - all the predicates at positions `X..` between `X` and the top are
- /// also defaulted traits.
- pub fn coinductive_match<I>(&mut self, cycle: I) -> bool
- where
- I: Iterator<Item = ty::Predicate<'tcx>>,
- {
- let mut cycle = cycle;
- cycle.all(|predicate| self.coinductive_predicate(predicate))
- }
-
- fn coinductive_predicate(&self, predicate: ty::Predicate<'tcx>) -> bool {
- let result = match predicate {
- ty::Predicate::Trait(ref data, _) => self.tcx().trait_is_auto(data.def_id()),
- _ => false,
- };
- debug!("coinductive_predicate({:?}) = {:?}", predicate, result);
- result
- }
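`coinductive_match` accepts a cycle only when every participating predicate passes the coinductive check (for trait predicates, `trait_is_auto`). The rule itself is a one-liner over any iterator, sketched here with a hypothetical predicate type:

```rust
// Hypothetical predicate; in the compiler the per-predicate check is
// `tcx.trait_is_auto(def_id)` for `Predicate::Trait`, false otherwise.
struct Pred {
    is_auto_trait: bool,
}

// A cycle is coinductive (and therefore acceptable) only if *every*
// participant is coinductive.
fn coinductive_match(mut cycle: impl Iterator<Item = Pred>) -> bool {
    cycle.all(|p| p.is_auto_trait)
}

fn main() {
    let ok = vec![Pred { is_auto_trait: true }, Pred { is_auto_trait: true }];
    let bad = vec![Pred { is_auto_trait: true }, Pred { is_auto_trait: false }];
    assert!(coinductive_match(ok.into_iter()));
    assert!(!coinductive_match(bad.into_iter()));
    println!("ok");
}
```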
-
- /// Further evaluates `candidate` to decide whether all type parameters match and whether nested
- /// obligations are met. Returns whether `candidate` remains viable after this further
- /// scrutiny.
- fn evaluate_candidate<'o>(
- &mut self,
- stack: &TraitObligationStack<'o, 'tcx>,
- candidate: &SelectionCandidate<'tcx>,
- ) -> Result<EvaluationResult, OverflowError> {
- debug!(
- "evaluate_candidate: depth={} candidate={:?}",
- stack.obligation.recursion_depth, candidate
- );
- let result = self.evaluation_probe(|this| {
- let candidate = (*candidate).clone();
- match this.confirm_candidate(stack.obligation, candidate) {
- Ok(selection) => this.evaluate_predicates_recursively(
- stack.list(),
- selection.nested_obligations().into_iter(),
- ),
- Err(..) => Ok(EvaluatedToErr),
- }
- })?;
- debug!(
- "evaluate_candidate: depth={} result={:?}",
- stack.obligation.recursion_depth, result
- );
- Ok(result)
- }
-
- fn check_evaluation_cache(
- &self,
- param_env: ty::ParamEnv<'tcx>,
- trait_ref: ty::PolyTraitRef<'tcx>,
- ) -> Option<EvaluationResult> {
- let tcx = self.tcx();
- if self.can_use_global_caches(param_env) {
- let cache = tcx.evaluation_cache.hashmap.borrow();
-            if let Some(cached) = cache.get(&param_env.and(trait_ref)) {
- return Some(cached.get(tcx));
- }
- }
- self.infcx
- .evaluation_cache
- .hashmap
- .borrow()
-            .get(&param_env.and(trait_ref))
- .map(|v| v.get(tcx))
- }
-
- fn insert_evaluation_cache(
- &mut self,
- param_env: ty::ParamEnv<'tcx>,
- trait_ref: ty::PolyTraitRef<'tcx>,
- dep_node: DepNodeIndex,
- result: EvaluationResult,
- ) {
- // Avoid caching results that depend on more than just the trait-ref
- // - the stack can create recursion.
- if result.is_stack_dependent() {
- return;
- }
-
- if self.can_use_global_caches(param_env) {
- if !trait_ref.has_local_value() {
- debug!(
- "insert_evaluation_cache(trait_ref={:?}, candidate={:?}) global",
- trait_ref, result,
- );
- // This may overwrite the cache with the same value
- // FIXME: Due to #50507 this overwrites the different values
- // This should be changed to use HashMapExt::insert_same
- // when that is fixed
- self.tcx()
- .evaluation_cache
- .hashmap
- .borrow_mut()
- .insert(param_env.and(trait_ref), WithDepNode::new(dep_node, result));
- return;
- }
- }
-
- debug!("insert_evaluation_cache(trait_ref={:?}, candidate={:?})", trait_ref, result,);
- self.infcx
- .evaluation_cache
- .hashmap
- .borrow_mut()
- .insert(param_env.and(trait_ref), WithDepNode::new(dep_node, result));
- }
-
- /// For various reasons, it's possible for a subobligation
- /// to have a *lower* recursion_depth than the obligation used to create it.
- /// Projection sub-obligations may be returned from the projection cache,
- /// which results in obligations with an 'old' `recursion_depth`.
- /// Additionally, methods like `wf::obligations` and
- /// `InferCtxt.subtype_predicate` produce subobligations without
- /// taking in a 'parent' depth, causing the generated subobligations
- /// to have a `recursion_depth` of `0`.
- ///
-    /// To ensure that `recursion_depth` never decreases, we force all subobligations
- /// to have at least the depth of the original obligation.
- fn add_depth<T: 'cx, I: Iterator<Item = &'cx mut Obligation<'tcx, T>>>(
- &self,
- it: I,
- min_depth: usize,
- ) {
- it.for_each(|o| o.recursion_depth = cmp::max(min_depth, o.recursion_depth) + 1);
- }
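The clamping rule in `add_depth` can be checked in isolation: every child depth becomes at least the parent's depth, plus one. A sketch with plain integers standing in for obligations:

```rust
// Force every child depth to be at least the parent's depth, plus one,
// so recursion depth never decreases down the obligation tree.
fn add_depth(depths: &mut [usize], min_depth: usize) {
    for d in depths.iter_mut() {
        *d = min_depth.max(*d) + 1;
    }
}

fn main() {
    // Children from the projection cache may carry stale, smaller depths.
    let mut depths = vec![0, 3, 7];
    add_depth(&mut depths, 5);
    assert_eq!(depths, vec![6, 6, 8]);
    println!("ok");
}
```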
-
- /// Checks that the recursion limit has not been exceeded.
- ///
- /// The weird return type of this function allows it to be used with the `try` (`?`)
- /// operator within certain functions.
- fn check_recursion_limit<T: Display + TypeFoldable<'tcx>, V: Display + TypeFoldable<'tcx>>(
- &self,
- obligation: &Obligation<'tcx, T>,
- error_obligation: &Obligation<'tcx, V>,
- ) -> Result<(), OverflowError> {
- let recursion_limit = *self.infcx.tcx.sess.recursion_limit.get();
- if obligation.recursion_depth >= recursion_limit {
- match self.query_mode {
- TraitQueryMode::Standard => {
- self.infcx().report_overflow_error(error_obligation, true);
- }
- TraitQueryMode::Canonical => {
- return Err(OverflowError);
- }
- }
- }
- Ok(())
- }
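The "weird return type" mentioned above is the usual `Result<(), E>` trick: a check that either succeeds with `()` or fails composes with `?` in its callers. A simplified sketch (the limit of 64 and the function names are illustrative, not the compiler's):

```rust
// Hypothetical error type standing in for the compiler's `OverflowError`.
#[derive(Debug, PartialEq)]
struct OverflowError;

// Succeeds with `()` below the limit, so callers can write `check(..)?;`.
fn check_recursion_limit(depth: usize, limit: usize) -> Result<(), OverflowError> {
    if depth >= limit {
        return Err(OverflowError);
    }
    Ok(())
}

// A caller using `?`: on overflow it returns early with the error.
fn select(depth: usize) -> Result<&'static str, OverflowError> {
    check_recursion_limit(depth, 64)?;
    Ok("candidate")
}

fn main() {
    assert_eq!(select(3), Ok("candidate"));
    assert_eq!(select(64), Err(OverflowError));
    println!("ok");
}
```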
-
- ///////////////////////////////////////////////////////////////////////////
- // CANDIDATE ASSEMBLY
- //
- // The selection process begins by examining all in-scope impls,
- // caller obligations, and so forth and assembling a list of
- // candidates. See the [rustc guide] for more details.
- //
- // [rustc guide]:
- // https://rust-lang.github.io/rustc-guide/traits/resolution.html#candidate-assembly
-
- fn candidate_from_obligation<'o>(
- &mut self,
- stack: &TraitObligationStack<'o, 'tcx>,
- ) -> SelectionResult<'tcx, SelectionCandidate<'tcx>> {
- // Watch out for overflow. This intentionally bypasses (and does
- // not update) the cache.
- self.check_recursion_limit(&stack.obligation, &stack.obligation)?;
-
- // Check the cache. Note that we freshen the trait-ref
- // separately rather than using `stack.fresh_trait_ref` --
- // this is because we want the unbound variables to be
- // replaced with fresh types starting from index 0.
- let cache_fresh_trait_pred = self.infcx.freshen(stack.obligation.predicate.clone());
- debug!(
- "candidate_from_obligation(cache_fresh_trait_pred={:?}, obligation={:?})",
- cache_fresh_trait_pred, stack
- );
- debug_assert!(!stack.obligation.predicate.has_escaping_bound_vars());
-
- if let Some(c) =
- self.check_candidate_cache(stack.obligation.param_env, &cache_fresh_trait_pred)
- {
- debug!("CACHE HIT: SELECT({:?})={:?}", cache_fresh_trait_pred, c);
- return c;
- }
-
- // If no match, compute result and insert into cache.
- //
- // FIXME(nikomatsakis) -- this cache is not taking into
- // account cycles that may have occurred in forming the
- // candidate. I don't know of any specific problems that
- // result but it seems awfully suspicious.
- let (candidate, dep_node) =
- self.in_task(|this| this.candidate_from_obligation_no_cache(stack));
-
- debug!("CACHE MISS: SELECT({:?})={:?}", cache_fresh_trait_pred, candidate);
- self.insert_candidate_cache(
- stack.obligation.param_env,
- cache_fresh_trait_pred,
- dep_node,
- candidate.clone(),
- );
- candidate
- }
-
- fn in_task<OP, R>(&mut self, op: OP) -> (R, DepNodeIndex)
- where
- OP: FnOnce(&mut Self) -> R,
- {
- let (result, dep_node) =
- self.tcx().dep_graph.with_anon_task(DepKind::TraitSelect, || op(self));
- self.tcx().dep_graph.read_index(dep_node);
- (result, dep_node)
- }
-
- // Treat negative impls as unimplemented, and reservation impls as ambiguity.
- fn filter_negative_and_reservation_impls(
- &mut self,
- candidate: SelectionCandidate<'tcx>,
- ) -> SelectionResult<'tcx, SelectionCandidate<'tcx>> {
- if let ImplCandidate(def_id) = candidate {
- let tcx = self.tcx();
- match tcx.impl_polarity(def_id) {
- ty::ImplPolarity::Negative if !self.allow_negative_impls => {
- return Err(Unimplemented);
- }
- ty::ImplPolarity::Reservation => {
- if let Some(intercrate_ambiguity_clauses) =
- &mut self.intercrate_ambiguity_causes
- {
- let attrs = tcx.get_attrs(def_id);
- let attr = attr::find_by_name(&attrs, sym::rustc_reservation_impl);
- let value = attr.and_then(|a| a.value_str());
- if let Some(value) = value {
- debug!(
- "filter_negative_and_reservation_impls: \
- reservation impl ambiguity on {:?}",
- def_id
- );
- intercrate_ambiguity_clauses.push(
- IntercrateAmbiguityCause::ReservationImpl {
- message: value.to_string(),
- },
- );
- }
- }
- return Ok(None);
- }
- _ => {}
- };
- }
- Ok(Some(candidate))
- }
-
- fn candidate_from_obligation_no_cache<'o>(
- &mut self,
- stack: &TraitObligationStack<'o, 'tcx>,
- ) -> SelectionResult<'tcx, SelectionCandidate<'tcx>> {
- if stack.obligation.predicate.references_error() {
- // If we encounter a `Error`, we generally prefer the
- // most "optimistic" result in response -- that is, the
- // one least likely to report downstream errors. But
- // because this routine is shared by coherence and by
- // trait selection, there isn't an obvious "right" choice
- // here in that respect, so we opt to just return
- // ambiguity and let the upstream clients sort it out.
- return Ok(None);
- }
-
- if let Some(conflict) = self.is_knowable(stack) {
- debug!("coherence stage: not knowable");
- if self.intercrate_ambiguity_causes.is_some() {
- debug!("evaluate_stack: intercrate_ambiguity_causes is some");
- // Heuristics: show the diagnostics when there are no candidates in crate.
- if let Ok(candidate_set) = self.assemble_candidates(stack) {
- let mut no_candidates_apply = true;
- {
- let evaluated_candidates =
- candidate_set.vec.iter().map(|c| self.evaluate_candidate(stack, &c));
-
- for ec in evaluated_candidates {
- match ec {
- Ok(c) => {
- if c.may_apply() {
- no_candidates_apply = false;
- break;
- }
- }
- Err(e) => return Err(e.into()),
- }
- }
- }
-
- if !candidate_set.ambiguous && no_candidates_apply {
- let trait_ref = stack.obligation.predicate.skip_binder().trait_ref;
- let self_ty = trait_ref.self_ty();
- let trait_desc = trait_ref.print_only_trait_path().to_string();
- let self_desc = if self_ty.has_concrete_skeleton() {
- Some(self_ty.to_string())
- } else {
- None
- };
- let cause = if let Conflict::Upstream = conflict {
- IntercrateAmbiguityCause::UpstreamCrateUpdate { trait_desc, self_desc }
- } else {
- IntercrateAmbiguityCause::DownstreamCrate { trait_desc, self_desc }
- };
- debug!("evaluate_stack: pushing cause = {:?}", cause);
- self.intercrate_ambiguity_causes.as_mut().unwrap().push(cause);
- }
- }
- }
- return Ok(None);
- }
-
- let candidate_set = self.assemble_candidates(stack)?;
-
- if candidate_set.ambiguous {
- debug!("candidate set contains ambig");
- return Ok(None);
- }
-
- let mut candidates = candidate_set.vec;
-
- debug!("assembled {} candidates for {:?}: {:?}", candidates.len(), stack, candidates);
-
- // At this point, we know that each of the entries in the
- // candidate set is *individually* applicable. Now we have to
- // figure out if they contain mutual incompatibilities. This
- // frequently arises if we have an unconstrained input type --
- // for example, we are looking for `$0: Eq` where `$0` is some
- // unconstrained type variable. In that case, we'll get a
- // candidate which assumes $0 == int, one that assumes `$0 ==
- // usize`, etc. This spells an ambiguity.
-
- // If there is more than one candidate, first winnow them down
- // by considering extra conditions (nested obligations and so
- // forth). We don't winnow if there is exactly one
- // candidate. This is a relatively minor distinction but it
- // can lead to better inference and error-reporting. An
- // example would be if there was an impl:
- //
- // impl<T:Clone> Vec<T> { fn push_clone(...) { ... } }
- //
- // and we were to see some code `foo.push_clone()` where `foo`
- // is a `Vec<Bar>` and `Bar` does not implement `Clone`. If
- // we were to winnow, we'd wind up with zero candidates.
- // Instead, we select the right impl now but report "`Bar` does
- // not implement `Clone`".
- if candidates.len() == 1 {
- return self.filter_negative_and_reservation_impls(candidates.pop().unwrap());
- }
-
- // Winnow, but record the exact outcome of evaluation, which
- // is needed for specialization. Propagate overflow if it occurs.
- let mut candidates = candidates
- .into_iter()
- .map(|c| match self.evaluate_candidate(stack, &c) {
- Ok(eval) if eval.may_apply() => {
- Ok(Some(EvaluatedCandidate { candidate: c, evaluation: eval }))
- }
- Ok(_) => Ok(None),
- Err(OverflowError) => Err(Overflow),
- })
- .flat_map(Result::transpose)
- .collect::<Result<Vec<_>, _>>()?;
-
- debug!("winnowed to {} candidates for {:?}: {:?}", candidates.len(), stack, candidates);
-
- let needs_infer = stack.obligation.predicate.needs_infer();
-
- // If there are STILL multiple candidates, we can further
- // reduce the list by dropping duplicates -- including
- // resolving specializations.
- if candidates.len() > 1 {
- let mut i = 0;
- while i < candidates.len() {
- let is_dup = (0..candidates.len()).filter(|&j| i != j).any(|j| {
- self.candidate_should_be_dropped_in_favor_of(
- &candidates[i],
- &candidates[j],
- needs_infer,
- )
- });
- if is_dup {
- debug!("Dropping candidate #{}/{}: {:?}", i, candidates.len(), candidates[i]);
- candidates.swap_remove(i);
- } else {
- debug!("Retaining candidate #{}/{}: {:?}", i, candidates.len(), candidates[i]);
- i += 1;
-
- // If there are *STILL* multiple candidates, give up
- // and report ambiguity.
- if i > 1 {
- debug!("multiple matches, ambig");
- return Ok(None);
- }
- }
- }
- }
-
- // If there are *NO* candidates, then there are no impls --
- // that we know of, anyway. Note that in the case where there
- // are unbound type variables within the obligation, it might
- // be the case that you could still satisfy the obligation
- // from another crate by instantiating the type variables with
- // a type from another crate that does have an impl. This case
- // is checked for in `evaluate_stack` (and hence users
- // who might care about this case, like coherence, should use
- // that function).
- if candidates.is_empty() {
- return Err(Unimplemented);
- }
-
- // Just one candidate left.
- self.filter_negative_and_reservation_impls(candidates.pop().unwrap().candidate)
- }
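The duplicate-dropping loop above can be isolated into a small sketch: a candidate is removed when any *other* remaining candidate should be preferred over it, using `swap_remove` since order does not matter. The `u32` candidates and the "dominated" predicate are hypothetical simplifications of `candidate_should_be_dropped_in_favor_of`:

```rust
// Drop every candidate that some other candidate dominates, in place.
fn drop_dominated(
    mut candidates: Vec<u32>,
    dominated_by: impl Fn(u32, u32) -> bool,
) -> Vec<u32> {
    let mut i = 0;
    while i < candidates.len() {
        // Is candidate `i` dominated by any *other* candidate `j`?
        let is_dup = (0..candidates.len())
            .filter(|&j| i != j)
            .any(|j| dominated_by(candidates[i], candidates[j]));
        if is_dup {
            // O(1) removal; the element at `i` is replaced by the last one,
            // so `i` is deliberately not incremented.
            candidates.swap_remove(i);
        } else {
            i += 1;
        }
    }
    candidates
}

fn main() {
    // Here "dominated" simply means strictly smaller: only the maximum survives.
    let out = drop_dominated(vec![3, 1, 4, 1, 5], |a, b| a < b);
    assert_eq!(out, vec![5]);
    println!("ok");
}
```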
-
- fn is_knowable<'o>(&mut self, stack: &TraitObligationStack<'o, 'tcx>) -> Option<Conflict> {
- debug!("is_knowable(intercrate={:?})", self.intercrate);
-
- if !self.intercrate {
- return None;
- }
-
- let obligation = &stack.obligation;
- let predicate = self.infcx().resolve_vars_if_possible(&obligation.predicate);
-
- // Okay to skip binder because of the nature of the
- // trait-ref-is-knowable check, which does not care about
- // bound regions.
- let trait_ref = predicate.skip_binder().trait_ref;
-
- coherence::trait_ref_is_knowable(self.tcx(), trait_ref)
- }
-
- /// Returns `true` if the global caches can be used.
- /// Do note that if the type itself is not in the
- /// global tcx, the local caches will be used.
- fn can_use_global_caches(&self, param_env: ty::ParamEnv<'tcx>) -> bool {
- // If there are any e.g. inference variables in the `ParamEnv`, then we
- // always use a cache local to this particular scope. Otherwise, we
- // switch to a global cache.
- if param_env.has_local_value() {
- return false;
- }
-
- // Avoid using the master cache during coherence and just rely
- // on the local cache. This effectively disables caching
- // during coherence. It is really just a simplification to
- // avoid us having to fear that coherence results "pollute"
- // the master cache. Since coherence executes pretty quickly,
- // it's not worth going to more trouble to increase the
- // hit-rate, I don't think.
- if self.intercrate {
- return false;
- }
-
- // Otherwise, we can use the global cache.
- true
- }
-
- fn check_candidate_cache(
- &mut self,
- param_env: ty::ParamEnv<'tcx>,
- cache_fresh_trait_pred: &ty::PolyTraitPredicate<'tcx>,
- ) -> Option<SelectionResult<'tcx, SelectionCandidate<'tcx>>> {
- let tcx = self.tcx();
- let trait_ref = &cache_fresh_trait_pred.skip_binder().trait_ref;
- if self.can_use_global_caches(param_env) {
- let cache = tcx.selection_cache.hashmap.borrow();
- if let Some(cached) = cache.get(&param_env.and(*trait_ref)) {
- return Some(cached.get(tcx));
- }
- }
- self.infcx
- .selection_cache
- .hashmap
- .borrow()
- .get(&param_env.and(*trait_ref))
- .map(|v| v.get(tcx))
- }
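The two-level lookup in `check_candidate_cache` can be sketched with plain `HashMap`s: consult the shared (global) cache only when the key is context-free, then fall back to the per-inference-context (local) cache. The string keys below are hypothetical stand-ins for freshened trait predicates:

```rust
use std::collections::HashMap;

// Global first (when allowed), then local -- mirroring the cache probe order.
fn lookup<'a>(
    key: &str,
    can_use_global: bool, // stand-in for `can_use_global_caches(param_env)`
    global: &'a HashMap<String, String>,
    local: &'a HashMap<String, String>,
) -> Option<&'a String> {
    if can_use_global {
        if let Some(v) = global.get(key) {
            return Some(v);
        }
    }
    local.get(key)
}

fn main() {
    let mut global = HashMap::new();
    let mut local = HashMap::new();
    global.insert("Vec<i32>: Clone".to_string(), "impl".to_string());
    local.insert("Vec<_#0t>: Clone".to_string(), "ambig".to_string());

    // A fully-resolved key may hit the global cache...
    assert_eq!(lookup("Vec<i32>: Clone", true, &global, &local).unwrap(), "impl");
    // ...while a key containing inference variables only consults the local one.
    assert_eq!(lookup("Vec<_#0t>: Clone", false, &global, &local).unwrap(), "ambig");
    println!("ok");
}
```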
-
- /// Determines whether we can safely cache the result
- /// of selecting an obligation. This is almost always `true`,
- /// except when dealing with certain `ParamCandidate`s.
- ///
- /// Ordinarily, a `ParamCandidate` will contain no inference variables,
- /// since it was usually produced directly from a `DefId`. However,
- /// in certain cases (currently only librustdoc's blanket impl finder),
- /// a `ParamEnv` may be explicitly constructed with inference types.
- /// When this is the case, we do *not* want to cache the resulting selection
- /// candidate. This is due to the fact that it might not always be possible
- /// to equate the obligation's trait ref and the candidate's trait ref,
- /// if more constraints end up getting added to an inference variable.
- ///
- /// Because of this, we always want to re-run the full selection
- /// process for our obligation the next time we see it, since
- /// we might end up picking a different `SelectionCandidate` (or none at all).
- fn can_cache_candidate(
- &self,
- result: &SelectionResult<'tcx, SelectionCandidate<'tcx>>,
- ) -> bool {
- match result {
- Ok(Some(SelectionCandidate::ParamCandidate(trait_ref))) => {
- !trait_ref.skip_binder().input_types().any(|t| t.walk().any(|t_| t_.is_ty_infer()))
- }
- _ => true,
- }
- }
-
- fn insert_candidate_cache(
- &mut self,
- param_env: ty::ParamEnv<'tcx>,
- cache_fresh_trait_pred: ty::PolyTraitPredicate<'tcx>,
- dep_node: DepNodeIndex,
- candidate: SelectionResult<'tcx, SelectionCandidate<'tcx>>,
- ) {
- let tcx = self.tcx();
- let trait_ref = cache_fresh_trait_pred.skip_binder().trait_ref;
-
- if !self.can_cache_candidate(&candidate) {
- debug!(
- "insert_candidate_cache(trait_ref={:?}, candidate={:?} -\
- candidate is not cacheable",
- trait_ref, candidate
- );
- return;
- }
-
- if self.can_use_global_caches(param_env) {
- if let Err(Overflow) = candidate {
- // Don't cache overflow globally; we only produce this in certain modes.
- } else if !trait_ref.has_local_value() {
- if !candidate.has_local_value() {
- debug!(
- "insert_candidate_cache(trait_ref={:?}, candidate={:?}) global",
- trait_ref, candidate,
- );
- // This may overwrite the cache with the same value.
- tcx.selection_cache
- .hashmap
- .borrow_mut()
- .insert(param_env.and(trait_ref), WithDepNode::new(dep_node, candidate));
- return;
- }
- }
- }
-
- debug!(
- "insert_candidate_cache(trait_ref={:?}, candidate={:?}) local",
- trait_ref, candidate,
- );
- self.infcx
- .selection_cache
- .hashmap
- .borrow_mut()
- .insert(param_env.and(trait_ref), WithDepNode::new(dep_node, candidate));
- }
-
- fn assemble_candidates<'o>(
- &mut self,
- stack: &TraitObligationStack<'o, 'tcx>,
- ) -> Result<SelectionCandidateSet<'tcx>, SelectionError<'tcx>> {
- let TraitObligationStack { obligation, .. } = *stack;
- let ref obligation = Obligation {
- param_env: obligation.param_env,
- cause: obligation.cause.clone(),
- recursion_depth: obligation.recursion_depth,
- predicate: self.infcx().resolve_vars_if_possible(&obligation.predicate),
- };
-
- if obligation.predicate.skip_binder().self_ty().is_ty_var() {
- // Self is a type variable (e.g., `_: AsRef<str>`).
- //
- // This is somewhat problematic, as the current scheme can't really
- // handle it turning out to be a projection. This does end up as truly
- // ambiguous in most cases anyway.
- //
- // Take the fast path out - this also improves
- // performance by preventing assemble_candidates_from_impls from
- // matching every impl for this trait.
- return Ok(SelectionCandidateSet { vec: vec![], ambiguous: true });
- }
-
- let mut candidates = SelectionCandidateSet { vec: Vec::new(), ambiguous: false };
-
- self.assemble_candidates_for_trait_alias(obligation, &mut candidates)?;
-
- // Other bounds. Consider both in-scope bounds from fn decl
- // and applicable impls. There is a certain set of precedence rules here.
- let def_id = obligation.predicate.def_id();
- let lang_items = self.tcx().lang_items();
-
- if lang_items.copy_trait() == Some(def_id) {
- debug!("obligation self ty is {:?}", obligation.predicate.skip_binder().self_ty());
-
- // User-defined copy impls are permitted, but only for
- // structs and enums.
- self.assemble_candidates_from_impls(obligation, &mut candidates)?;
-
- // For other types, we'll use the builtin rules.
- let copy_conditions = self.copy_clone_conditions(obligation);
- self.assemble_builtin_bound_candidates(copy_conditions, &mut candidates)?;
- } else if lang_items.sized_trait() == Some(def_id) {
- // Sized is never implementable by end-users, it is
- // always automatically computed.
- let sized_conditions = self.sized_conditions(obligation);
- self.assemble_builtin_bound_candidates(sized_conditions, &mut candidates)?;
- } else if lang_items.unsize_trait() == Some(def_id) {
- self.assemble_candidates_for_unsizing(obligation, &mut candidates);
- } else {
- if lang_items.clone_trait() == Some(def_id) {
- // Same builtin conditions as `Copy`, i.e., every type which has builtin support
- // for `Copy` also has builtin support for `Clone`, and tuples/arrays of `Clone`
- // types have builtin support for `Clone`.
- let clone_conditions = self.copy_clone_conditions(obligation);
- self.assemble_builtin_bound_candidates(clone_conditions, &mut candidates)?;
- }
-
- self.assemble_generator_candidates(obligation, &mut candidates)?;
- self.assemble_closure_candidates(obligation, &mut candidates)?;
- self.assemble_fn_pointer_candidates(obligation, &mut candidates)?;
- self.assemble_candidates_from_impls(obligation, &mut candidates)?;
- self.assemble_candidates_from_object_ty(obligation, &mut candidates);
- }
-
- self.assemble_candidates_from_projected_tys(obligation, &mut candidates);
- self.assemble_candidates_from_caller_bounds(stack, &mut candidates)?;
- // Auto implementations have lower priority, so we only
- // consider triggering a default if there is no other impl that can apply.
- if candidates.vec.is_empty() {
- self.assemble_candidates_from_auto_impls(obligation, &mut candidates)?;
- }
- debug!("candidate list size: {}", candidates.vec.len());
- Ok(candidates)
- }
-
- fn assemble_candidates_from_projected_tys(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- candidates: &mut SelectionCandidateSet<'tcx>,
- ) {
- debug!("assemble_candidates_for_projected_tys({:?})", obligation);
-
- // Before we go into the whole placeholder thing, just
- // quickly check if the self-type is a projection at all.
- match obligation.predicate.skip_binder().trait_ref.self_ty().kind {
- ty::Projection(_) | ty::Opaque(..) => {}
- ty::Infer(ty::TyVar(_)) => {
- span_bug!(
- obligation.cause.span,
- "Self=_ should have been handled by assemble_candidates"
- );
- }
- _ => return,
- }
-
- let result = self.infcx.probe(|snapshot| {
- self.match_projection_obligation_against_definition_bounds(obligation, snapshot)
- });
-
- if result {
- candidates.vec.push(ProjectionCandidate);
- }
- }
-
- fn match_projection_obligation_against_definition_bounds(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- snapshot: &CombinedSnapshot<'_, 'tcx>,
- ) -> bool {
- let poly_trait_predicate = self.infcx().resolve_vars_if_possible(&obligation.predicate);
- let (placeholder_trait_predicate, placeholder_map) =
- self.infcx().replace_bound_vars_with_placeholders(&poly_trait_predicate);
- debug!(
- "match_projection_obligation_against_definition_bounds: \
- placeholder_trait_predicate={:?}",
- placeholder_trait_predicate,
- );
-
- let (def_id, substs) = match placeholder_trait_predicate.trait_ref.self_ty().kind {
- ty::Projection(ref data) => (data.trait_ref(self.tcx()).def_id, data.substs),
- ty::Opaque(def_id, substs) => (def_id, substs),
- _ => {
- span_bug!(
- obligation.cause.span,
- "match_projection_obligation_against_definition_bounds() called \
- but self-ty is not a projection: {:?}",
- placeholder_trait_predicate.trait_ref.self_ty()
- );
- }
- };
- debug!(
- "match_projection_obligation_against_definition_bounds: \
- def_id={:?}, substs={:?}",
- def_id, substs
- );
-
- let predicates_of = self.tcx().predicates_of(def_id);
- let bounds = predicates_of.instantiate(self.tcx(), substs);
- debug!(
- "match_projection_obligation_against_definition_bounds: \
- bounds={:?}",
- bounds
- );
-
- let elaborated_predicates = util::elaborate_predicates(self.tcx(), bounds.predicates);
- let matching_bound = elaborated_predicates.filter_to_traits().find(|bound| {
- self.infcx.probe(|_| {
- self.match_projection(
- obligation,
- bound.clone(),
- placeholder_trait_predicate.trait_ref.clone(),
- &placeholder_map,
- snapshot,
- )
- })
- });
-
- debug!(
- "match_projection_obligation_against_definition_bounds: \
- matching_bound={:?}",
- matching_bound
- );
- match matching_bound {
- None => false,
- Some(bound) => {
- // Repeat the successful match, if any, this time outside of a probe.
- let result = self.match_projection(
- obligation,
- bound,
- placeholder_trait_predicate.trait_ref.clone(),
- &placeholder_map,
- snapshot,
- );
-
- assert!(result);
- true
- }
- }
- }
-
- fn match_projection(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- trait_bound: ty::PolyTraitRef<'tcx>,
- placeholder_trait_ref: ty::TraitRef<'tcx>,
- placeholder_map: &PlaceholderMap<'tcx>,
- snapshot: &CombinedSnapshot<'_, 'tcx>,
- ) -> bool {
- debug_assert!(!placeholder_trait_ref.has_escaping_bound_vars());
- self.infcx
- .at(&obligation.cause, obligation.param_env)
- .sup(ty::Binder::dummy(placeholder_trait_ref), trait_bound)
- .is_ok()
- && self.infcx.leak_check(false, placeholder_map, snapshot).is_ok()
- }
-
- /// Given an obligation like `<SomeTrait for T>`, searches the obligations that the caller
- /// supplied to find out whether it is listed among them.
- ///
- /// Never affects the inference environment.
- fn assemble_candidates_from_caller_bounds<'o>(
- &mut self,
- stack: &TraitObligationStack<'o, 'tcx>,
- candidates: &mut SelectionCandidateSet<'tcx>,
- ) -> Result<(), SelectionError<'tcx>> {
- debug!("assemble_candidates_from_caller_bounds({:?})", stack.obligation);
-
- let all_bounds = stack
- .obligation
- .param_env
- .caller_bounds
- .iter()
- .filter_map(|o| o.to_opt_poly_trait_ref());
-
- // Micro-optimization: filter out predicates relating to different traits.
- let matching_bounds =
- all_bounds.filter(|p| p.def_id() == stack.obligation.predicate.def_id());
-
- // Keep only those bounds which may apply, and propagate overflow if it occurs.
- let mut param_candidates = vec![];
- for bound in matching_bounds {
- let wc = self.evaluate_where_clause(stack, bound.clone())?;
- if wc.may_apply() {
- param_candidates.push(ParamCandidate(bound));
- }
- }
-
- candidates.vec.extend(param_candidates);
-
- Ok(())
- }
-
- fn evaluate_where_clause<'o>(
- &mut self,
- stack: &TraitObligationStack<'o, 'tcx>,
- where_clause_trait_ref: ty::PolyTraitRef<'tcx>,
- ) -> Result<EvaluationResult, OverflowError> {
- self.evaluation_probe(|this| {
- match this.match_where_clause_trait_ref(stack.obligation, where_clause_trait_ref) {
- Ok(obligations) => {
- this.evaluate_predicates_recursively(stack.list(), obligations.into_iter())
- }
- Err(()) => Ok(EvaluatedToErr),
- }
- })
- }
-
- fn assemble_generator_candidates(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- candidates: &mut SelectionCandidateSet<'tcx>,
- ) -> Result<(), SelectionError<'tcx>> {
- if self.tcx().lang_items().gen_trait() != Some(obligation.predicate.def_id()) {
- return Ok(());
- }
-
- // Okay to skip binder because the substs on generator types never
- // touch bound regions, they just capture the in-scope
- // type/region parameters.
- let self_ty = *obligation.self_ty().skip_binder();
- match self_ty.kind {
- ty::Generator(..) => {
- debug!(
- "assemble_generator_candidates: self_ty={:?} obligation={:?}",
- self_ty, obligation
- );
-
- candidates.vec.push(GeneratorCandidate);
- }
- ty::Infer(ty::TyVar(_)) => {
- debug!("assemble_generator_candidates: ambiguous self-type");
- candidates.ambiguous = true;
- }
- _ => {}
- }
-
- Ok(())
- }
-
- /// Checks for the artificial impl that the compiler will create for an obligation like `X :
- /// FnMut<..>` where `X` is a closure type.
- ///
- /// Note: the type parameters on a closure candidate are modeled as *output* type
- /// parameters and hence do not affect whether this trait is a match or not. They will be
- /// unified during the confirmation step.
- fn assemble_closure_candidates(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- candidates: &mut SelectionCandidateSet<'tcx>,
- ) -> Result<(), SelectionError<'tcx>> {
- let kind = match self.tcx().fn_trait_kind_from_lang_item(obligation.predicate.def_id()) {
- Some(k) => k,
- None => {
- return Ok(());
- }
- };
-
- // Okay to skip binder because the substs on closure types never
- // touch bound regions, they just capture the in-scope
- // type/region parameters
- match obligation.self_ty().skip_binder().kind {
- ty::Closure(closure_def_id, closure_substs) => {
- debug!("assemble_unboxed_candidates: kind={:?} obligation={:?}", kind, obligation);
- match self.infcx.closure_kind(closure_def_id, closure_substs) {
- Some(closure_kind) => {
- debug!("assemble_unboxed_candidates: closure_kind = {:?}", closure_kind);
- if closure_kind.extends(kind) {
- candidates.vec.push(ClosureCandidate);
- }
- }
- None => {
- debug!("assemble_unboxed_candidates: closure_kind not yet known");
- candidates.vec.push(ClosureCandidate);
- }
- }
- }
- ty::Infer(ty::TyVar(_)) => {
- debug!("assemble_unboxed_closure_candidates: ambiguous self-type");
- candidates.ambiguous = true;
- }
- _ => {}
- }
-
- Ok(())
- }
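The `closure_kind.extends(kind)` check above relies on the `Fn` trait hierarchy: a `Fn` closure also satisfies `FnMut` and `FnOnce`, and a `FnMut` closure also satisfies `FnOnce`. A hypothetical model (the real `ty::ClosureKind` differs in detail):

```rust
// Declaration order encodes the hierarchy, so derived `PartialOrd`
// gives Fn < FnMut < FnOnce.
#[derive(Clone, Copy, PartialEq, PartialOrd)]
enum ClosureKind {
    Fn,
    FnMut,
    FnOnce,
}

impl ClosureKind {
    // `self` extends `other` if a closure of kind `self` can be used
    // where `other` is required.
    fn extends(self, other: ClosureKind) -> bool {
        self <= other
    }
}

fn main() {
    use ClosureKind::*;
    assert!(Fn.extends(FnMut)); // a by-ref closure can be called mutably
    assert!(Fn.extends(FnOnce));
    assert!(FnMut.extends(FnOnce));
    assert!(!FnOnce.extends(FnMut)); // a by-value closure cannot be called repeatedly
    println!("ok");
}
```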
-
- /// Implements one of the `Fn()` family for a fn pointer.
- fn assemble_fn_pointer_candidates(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- candidates: &mut SelectionCandidateSet<'tcx>,
- ) -> Result<(), SelectionError<'tcx>> {
- // We provide impl of all fn traits for fn pointers.
- if self.tcx().fn_trait_kind_from_lang_item(obligation.predicate.def_id()).is_none() {
- return Ok(());
- }
-
- // Okay to skip binder because what we are inspecting doesn't involve bound regions.
- let self_ty = *obligation.self_ty().skip_binder();
- match self_ty.kind {
- ty::Infer(ty::TyVar(_)) => {
- debug!("assemble_fn_pointer_candidates: ambiguous self-type");
- candidates.ambiguous = true; // Could wind up being a fn() type.
- }
- // Provide an impl, but only for suitable `fn` pointers.
- ty::FnDef(..) | ty::FnPtr(_) => {
- if let ty::FnSig {
- unsafety: hir::Unsafety::Normal,
- abi: Abi::Rust,
- c_variadic: false,
- ..
- } = self_ty.fn_sig(self.tcx()).skip_binder()
- {
- candidates.vec.push(FnPointerCandidate);
- }
- }
- _ => {}
- }
-
- Ok(())
- }
-
- /// Searches for impls that might apply to `obligation`.
- fn assemble_candidates_from_impls(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- candidates: &mut SelectionCandidateSet<'tcx>,
- ) -> Result<(), SelectionError<'tcx>> {
- debug!("assemble_candidates_from_impls(obligation={:?})", obligation);
-
- self.tcx().for_each_relevant_impl(
- obligation.predicate.def_id(),
- obligation.predicate.skip_binder().trait_ref.self_ty(),
- |impl_def_id| {
- self.infcx.probe(|snapshot| {
- if let Ok(_substs) = self.match_impl(impl_def_id, obligation, snapshot) {
- candidates.vec.push(ImplCandidate(impl_def_id));
- }
- });
- },
- );
-
- Ok(())
- }
-
- fn assemble_candidates_from_auto_impls(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- candidates: &mut SelectionCandidateSet<'tcx>,
- ) -> Result<(), SelectionError<'tcx>> {
- // Okay to skip binder here because the tests we do below do not involve bound regions.
- let self_ty = *obligation.self_ty().skip_binder();
- debug!("assemble_candidates_from_auto_impls(self_ty={:?})", self_ty);
-
- let def_id = obligation.predicate.def_id();
-
- if self.tcx().trait_is_auto(def_id) {
- match self_ty.kind {
- ty::Dynamic(..) => {
- // For object types, we don't know what the closed
- // over types are. This means we conservatively
- // say nothing; a candidate may be added by
- // `assemble_candidates_from_object_ty`.
- }
- ty::Foreign(..) => {
- // Since the contents of foreign types are unknown,
- // we don't add any `..` impl. Default traits could
- // still be provided by a manual implementation for
- // this trait and type.
- }
- ty::Param(..) | ty::Projection(..) => {
- // In these cases, we don't know what the actual
- // type is. Therefore, we cannot break it down
- // into its constituent types. So we don't
- // consider the `..` impl but instead just add no
- // candidates: this means that typeck will only
- // succeed if there is another reason to believe
- // that this obligation holds. That could be a
- // where-clause or, in the case of an object type,
- // it could be that the object type lists the
- // trait (e.g., `Foo+Send : Send`). See
- // `compile-fail/typeck-default-trait-impl-send-param.rs`
- // for an example of a test case that exercises
- // this path.
- }
- ty::Infer(ty::TyVar(_)) => {
- // The auto impl might apply; we don't know.
- candidates.ambiguous = true;
- }
- ty::Generator(_, _, movability)
- if self.tcx().lang_items().unpin_trait() == Some(def_id) =>
- {
- match movability {
- hir::Movability::Static => {
- // Immovable generators are never `Unpin`, so
- // suppress the normal auto-impl candidate for it.
- }
- hir::Movability::Movable => {
- // Movable generators are always `Unpin`, so add an
- // unconditional builtin candidate.
- candidates.vec.push(BuiltinCandidate { has_nested: false });
- }
- }
- }
-
- _ => candidates.vec.push(AutoImplCandidate(def_id)),
- }
- }
-
- Ok(())
- }
-
- /// Searches for impls that might apply to `obligation`.
- fn assemble_candidates_from_object_ty(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- candidates: &mut SelectionCandidateSet<'tcx>,
- ) {
- debug!(
- "assemble_candidates_from_object_ty(self_ty={:?})",
- obligation.self_ty().skip_binder()
- );
-
- self.infcx.probe(|_snapshot| {
- // The code below doesn't care about regions, and the
- // self-ty here doesn't escape this probe, so just erase
- // any LBR.
- let self_ty = self.tcx().erase_late_bound_regions(&obligation.self_ty());
- let poly_trait_ref = match self_ty.kind {
- ty::Dynamic(ref data, ..) => {
- if data.auto_traits().any(|did| did == obligation.predicate.def_id()) {
- debug!(
- "assemble_candidates_from_object_ty: matched builtin bound, \
- pushing candidate"
- );
- candidates.vec.push(BuiltinObjectCandidate);
- return;
- }
-
- if let Some(principal) = data.principal() {
- if !self.infcx.tcx.features().object_safe_for_dispatch {
- principal.with_self_ty(self.tcx(), self_ty)
- } else if self.tcx().is_object_safe(principal.def_id()) {
- principal.with_self_ty(self.tcx(), self_ty)
- } else {
- return;
- }
- } else {
- // Only auto trait bounds exist.
- return;
- }
- }
- ty::Infer(ty::TyVar(_)) => {
- debug!("assemble_candidates_from_object_ty: ambiguous");
- candidates.ambiguous = true; // could wind up being an object type
- return;
- }
- _ => return,
- };
-
- debug!("assemble_candidates_from_object_ty: poly_trait_ref={:?}", poly_trait_ref);
-
- // Count only those upcast versions that match the trait-ref
- // we are looking for. Specifically, check not only for the
- // correct trait but also for the correct type parameters.
- // For example, we may be trying to upcast `Foo` to `Bar<i32>`,
- // but `Foo` is declared as `trait Foo: Bar<u32>`.
- let upcast_trait_refs = util::supertraits(self.tcx(), poly_trait_ref)
- .filter(|upcast_trait_ref| {
- self.infcx
- .probe(|_| self.match_poly_trait_ref(obligation, *upcast_trait_ref).is_ok())
- })
- .count();
-
- if upcast_trait_refs > 1 {
- // Can be upcast in many ways; need more type information.
- candidates.ambiguous = true;
- } else if upcast_trait_refs == 1 {
- candidates.vec.push(ObjectCandidate);
- }
- })
- }
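The counting rule at the end of `assemble_candidates_from_object_ty` can be sketched on its own: among the matching upcast trait-refs, more than one means ambiguity, exactly one yields an object candidate, and zero yields nothing. The string-based "trait refs" here are illustrative, not the compiler's representation:

```rust
#[derive(Debug, PartialEq)]
enum Outcome {
    NoCandidate,
    ObjectCandidate,
    Ambiguous,
}

// Count the supertraits that match the wanted trait-ref, then classify.
fn classify(supertraits: &[&str], wanted: &str) -> Outcome {
    let matches = supertraits.iter().filter(|s| **s == wanted).count();
    if matches > 1 {
        Outcome::Ambiguous // can be upcast in many ways; need more type information
    } else if matches == 1 {
        Outcome::ObjectCandidate
    } else {
        Outcome::NoCandidate
    }
}

fn main() {
    // `Bar<u32>` appears once among the supertraits: a single viable upcast.
    assert_eq!(classify(&["Foo", "Bar<u32>"], "Bar<u32>"), Outcome::ObjectCandidate);
    // Two upcast paths both matching: ambiguous.
    assert_eq!(classify(&["Baz<_>", "Baz<_>"], "Baz<_>"), Outcome::Ambiguous);
    assert_eq!(classify(&["Foo"], "Quux"), Outcome::NoCandidate);
    println!("ok");
}
```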
-
- /// Searches for unsizing that might apply to `obligation`.
- fn assemble_candidates_for_unsizing(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- candidates: &mut SelectionCandidateSet<'tcx>,
- ) {
- // We currently never consider higher-ranked obligations e.g.
- // `for<'a> &'a T: Unsize<Trait+'a>` to be implemented. This is not
- // because they are a priori invalid (we could potentially add support
- // for them later); it's just that there isn't really a strong need for it.
- // A `T: Unsize<U>` obligation is always used as part of a `T: CoerceUnsized<U>`
- // impl, and those are generally applied to concrete types.
- //
- // That said, one might try to write a fn with a where clause like
- // for<'a> Foo<'a, T>: Unsize<Foo<'a, Trait>>
- // where the `'a` is kind of orthogonal to the relevant part of the `Unsize`.
- // Still, you'd be more likely to write that where clause as
- // T: Trait
- // so it seems ok if we (conservatively) fail to accept that `Unsize`
- // obligation above. Should be possible to extend this in the future.
- let source = match obligation.self_ty().no_bound_vars() {
- Some(t) => t,
- None => {
- // Don't add any candidates if there are bound regions.
- return;
- }
- };
- let target = obligation.predicate.skip_binder().trait_ref.substs.type_at(1);
-
- debug!("assemble_candidates_for_unsizing(source={:?}, target={:?})", source, target);
-
- let may_apply = match (&source.kind, &target.kind) {
- // Trait+Kx+'a -> Trait+Ky+'b (upcasts).
- (&ty::Dynamic(ref data_a, ..), &ty::Dynamic(ref data_b, ..)) => {
- // Upcasts permit two things:
- //
- // 1. Dropping auto traits, e.g., `Foo + Send` to `Foo`
- // 2. Tightening the region bound, e.g., `Foo + 'a` to `Foo + 'b` if `'a: 'b`
- //
- // Note that neither of these changes requires any
- // change at runtime. Eventually this will be
- // generalized.
- //
- // We always upcast when we can because of reason
- // #2 (region bounds).
- data_a.principal_def_id() == data_b.principal_def_id()
- && data_b
- .auto_traits()
- // All of b's auto traits need to be in a's auto traits.
- .all(|b| data_a.auto_traits().any(|a| a == b))
- }
-
- // `T` -> `Trait`
- (_, &ty::Dynamic(..)) => true,
-
- // Ambiguous handling is below `T` -> `Trait`, because inference
- // variables can still implement `Unsize<Trait>` and nested
- // obligations will have the final say (likely deferred).
- (&ty::Infer(ty::TyVar(_)), _) | (_, &ty::Infer(ty::TyVar(_))) => {
- debug!("assemble_candidates_for_unsizing: ambiguous");
- candidates.ambiguous = true;
- false
- }
-
- // `[T; n]` -> `[T]`
- (&ty::Array(..), &ty::Slice(_)) => true,
-
- // `Struct<T>` -> `Struct<U>`
- (&ty::Adt(def_id_a, _), &ty::Adt(def_id_b, _)) if def_id_a.is_struct() => {
- def_id_a == def_id_b
- }
-
- // `(.., T)` -> `(.., U)`
- (&ty::Tuple(tys_a), &ty::Tuple(tys_b)) => tys_a.len() == tys_b.len(),
-
- _ => false,
- };
-
- if may_apply {
- candidates.vec.push(BuiltinUnsizeCandidate);
- }
- }
-
- fn assemble_candidates_for_trait_alias(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- candidates: &mut SelectionCandidateSet<'tcx>,
- ) -> Result<(), SelectionError<'tcx>> {
- // Okay to skip binder here because the tests we do below do not involve bound regions.
- let self_ty = *obligation.self_ty().skip_binder();
- debug!("assemble_candidates_for_trait_alias(self_ty={:?})", self_ty);
-
- let def_id = obligation.predicate.def_id();
-
- if self.tcx().is_trait_alias(def_id) {
- candidates.vec.push(TraitAliasCandidate(def_id));
- }
-
- Ok(())
- }
-
- ///////////////////////////////////////////////////////////////////////////
- // WINNOW
- //
- // Winnowing is the process of attempting to resolve ambiguity by
- // probing further. During the winnowing process, we unify all
- // type variables and then we also attempt to evaluate recursive
- // bounds to see if they are satisfied.
-
- /// Returns `true` if `victim` should be dropped in favor of
- /// `other`. Generally speaking we will drop duplicate
- /// candidates and prefer where-clause candidates.
- ///
- /// See the comment for "SelectionCandidate" for more details.
- fn candidate_should_be_dropped_in_favor_of(
- &mut self,
- victim: &EvaluatedCandidate<'tcx>,
- other: &EvaluatedCandidate<'tcx>,
- needs_infer: bool,
- ) -> bool {
- if victim.candidate == other.candidate {
- return true;
- }
-
- // Check if a bound would previously have been removed when normalizing
- // the param_env so that it can be given the lowest priority. See
- // #50825 for the motivation for this.
- let is_global =
- |cand: &ty::PolyTraitRef<'_>| cand.is_global() && !cand.has_late_bound_regions();
-
- match other.candidate {
- // Prefer `BuiltinCandidate { has_nested: false }` to anything else.
- // This is a fix for #53123 and prevents winnowing from accidentally extending the
- // lifetime of a variable.
- BuiltinCandidate { has_nested: false } => true,
- ParamCandidate(ref cand) => match victim.candidate {
- AutoImplCandidate(..) => {
- bug!(
- "default implementations shouldn't be recorded \
- when there are other valid candidates"
- );
- }
- // Prefer `BuiltinCandidate { has_nested: false }` to anything else.
- // This is a fix for #53123 and prevents winnowing from accidentally extending the
- // lifetime of a variable.
- BuiltinCandidate { has_nested: false } => false,
- ImplCandidate(..)
- | ClosureCandidate
- | GeneratorCandidate
- | FnPointerCandidate
- | BuiltinObjectCandidate
- | BuiltinUnsizeCandidate
- | BuiltinCandidate { .. }
- | TraitAliasCandidate(..) => {
- // Global bounds from the where clause should be ignored
- // here (see issue #50825). Otherwise, we have a where
- // clause so don't go around looking for impls.
- !is_global(cand)
- }
- ObjectCandidate | ProjectionCandidate => {
- // Arbitrarily give param candidates priority
- // over projection and object candidates.
- !is_global(cand)
- }
- ParamCandidate(..) => false,
- },
- ObjectCandidate | ProjectionCandidate => match victim.candidate {
- AutoImplCandidate(..) => {
- bug!(
- "default implementations shouldn't be recorded \
- when there are other valid candidates"
- );
- }
- // Prefer `BuiltinCandidate { has_nested: false }` to anything else.
- // This is a fix for #53123 and prevents winnowing from accidentally extending the
- // lifetime of a variable.
- BuiltinCandidate { has_nested: false } => false,
- ImplCandidate(..)
- | ClosureCandidate
- | GeneratorCandidate
- | FnPointerCandidate
- | BuiltinObjectCandidate
- | BuiltinUnsizeCandidate
- | BuiltinCandidate { .. }
- | TraitAliasCandidate(..) => true,
- ObjectCandidate | ProjectionCandidate => {
- // Arbitrarily prefer `other` when both candidates
- // are object or projection candidates.
- true
- }
- ParamCandidate(ref cand) => is_global(cand),
- },
- ImplCandidate(other_def) => {
- // See if we can toss out `victim` based on specialization.
- // This requires us to know *for sure* that the `other` impl applies
- // i.e., `EvaluatedToOk`.
- if other.evaluation.must_apply_modulo_regions() {
- match victim.candidate {
- ImplCandidate(victim_def) => {
- let tcx = self.tcx();
- if tcx.specializes((other_def, victim_def)) {
- return true;
- }
- return match tcx.impls_are_allowed_to_overlap(other_def, victim_def) {
- Some(ty::ImplOverlapKind::Permitted { marker: true }) => {
- // Subtle: If the predicate we are evaluating has inference
- // variables, do *not* allow discarding candidates due to
- // marker trait impls.
- //
- // Without this restriction, we could end up accidentally
- // constraining inference variables based on an arbitrarily
- // chosen trait impl.
- //
- // Imagine we have the following code:
- //
- // ```rust
- // #[marker] trait MyTrait {}
- // impl MyTrait for u8 {}
- // impl MyTrait for bool {}
- // ```
- //
- // And we are evaluating the predicate `<_#0t as MyTrait>`.
- //
- // During selection, we will end up with one candidate for each
- // impl of `MyTrait`. If we were to discard one impl in favor
- // of the other, we would be left with one candidate, causing
- // us to "successfully" select the predicate, unifying
- // _#0t with (for example) `u8`.
- //
- // However, we have no reason to believe that this unification
- // is correct - we've essentially just picked an arbitrary
- // *possibility* for _#0t, and required that this be the *only*
- // possibility.
- //
- // Eventually, we will either:
- // 1) Unify all inference variables in the predicate through
- // some other means (e.g. type-checking of a function). We will
- // then be in a position to drop marker trait candidates
- // without constraining inference variables (since there are
- // none left to constrain).
- // 2) Be left with some unconstrained inference variables. We
- // will then correctly report an inference error, since the
- // existence of multiple marker trait impls tells us nothing
- // about which one should actually apply.
- !needs_infer
- }
- Some(_) => true,
- None => false,
- };
- }
- ParamCandidate(ref cand) => {
- // Prefer the impl to a global where clause candidate.
- return is_global(cand);
- }
- _ => (),
- }
- }
-
- false
- }
- ClosureCandidate
- | GeneratorCandidate
- | FnPointerCandidate
- | BuiltinObjectCandidate
- | BuiltinUnsizeCandidate
- | BuiltinCandidate { has_nested: true } => {
- match victim.candidate {
- ParamCandidate(ref cand) => {
- // Prefer these to a global where-clause bound
- // (see issue #50825).
- is_global(cand) && other.evaluation.must_apply_modulo_regions()
- }
- _ => false,
- }
- }
- _ => false,
- }
- }
-
- ///////////////////////////////////////////////////////////////////////////
- // BUILTIN BOUNDS
- //
- // These cover the traits that are built-in to the language
- // itself: `Copy`, `Clone` and `Sized`.
-
- fn assemble_builtin_bound_candidates(
- &mut self,
- conditions: BuiltinImplConditions<'tcx>,
- candidates: &mut SelectionCandidateSet<'tcx>,
- ) -> Result<(), SelectionError<'tcx>> {
- match conditions {
- BuiltinImplConditions::Where(nested) => {
- debug!("builtin_bound: nested={:?}", nested);
- candidates
- .vec
- .push(BuiltinCandidate { has_nested: !nested.skip_binder().is_empty() });
- }
- BuiltinImplConditions::None => {}
- BuiltinImplConditions::Ambiguous => {
- debug!("assemble_builtin_bound_candidates: ambiguous builtin");
- candidates.ambiguous = true;
- }
- }
-
- Ok(())
- }
-
- fn sized_conditions(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- ) -> BuiltinImplConditions<'tcx> {
- use self::BuiltinImplConditions::{Ambiguous, None, Where};
-
- // NOTE: binder moved to (*)
- let self_ty = self.infcx.shallow_resolve(obligation.predicate.skip_binder().self_ty());
-
- match self_ty.kind {
- ty::Infer(ty::IntVar(_))
- | ty::Infer(ty::FloatVar(_))
- | ty::Uint(_)
- | ty::Int(_)
- | ty::Bool
- | ty::Float(_)
- | ty::FnDef(..)
- | ty::FnPtr(_)
- | ty::RawPtr(..)
- | ty::Char
- | ty::Ref(..)
- | ty::Generator(..)
- | ty::GeneratorWitness(..)
- | ty::Array(..)
- | ty::Closure(..)
- | ty::Never
- | ty::Error => {
- // safe for everything
- Where(ty::Binder::dummy(Vec::new()))
- }
-
- ty::Str | ty::Slice(_) | ty::Dynamic(..) | ty::Foreign(..) => None,
-
- ty::Tuple(tys) => {
- Where(ty::Binder::bind(tys.last().into_iter().map(|k| k.expect_ty()).collect()))
- }
-
- ty::Adt(def, substs) => {
- let sized_crit = def.sized_constraint(self.tcx());
- // (*) binder moved here
- Where(ty::Binder::bind(
- sized_crit.iter().map(|ty| ty.subst(self.tcx(), substs)).collect(),
- ))
- }
-
- ty::Projection(_) | ty::Param(_) | ty::Opaque(..) => None,
- ty::Infer(ty::TyVar(_)) => Ambiguous,
-
- ty::UnnormalizedProjection(..)
- | ty::Placeholder(..)
- | ty::Bound(..)
- | ty::Infer(ty::FreshTy(_))
- | ty::Infer(ty::FreshIntTy(_))
- | ty::Infer(ty::FreshFloatTy(_)) => {
- bug!("asked to assemble builtin bounds of unexpected type: {:?}", self_ty);
- }
- }
- }
-
- fn copy_clone_conditions(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- ) -> BuiltinImplConditions<'tcx> {
- // NOTE: binder moved to (*)
- let self_ty = self.infcx.shallow_resolve(obligation.predicate.skip_binder().self_ty());
-
- use self::BuiltinImplConditions::{Ambiguous, None, Where};
-
- match self_ty.kind {
- ty::Infer(ty::IntVar(_))
- | ty::Infer(ty::FloatVar(_))
- | ty::FnDef(..)
- | ty::FnPtr(_)
- | ty::Error => Where(ty::Binder::dummy(Vec::new())),
-
- ty::Uint(_)
- | ty::Int(_)
- | ty::Bool
- | ty::Float(_)
- | ty::Char
- | ty::RawPtr(..)
- | ty::Never
- | ty::Ref(_, _, hir::Mutability::Not) => {
- // Implementations provided in libcore
- None
- }
-
- ty::Dynamic(..)
- | ty::Str
- | ty::Slice(..)
- | ty::Generator(..)
- | ty::GeneratorWitness(..)
- | ty::Foreign(..)
- | ty::Ref(_, _, hir::Mutability::Mut) => None,
-
- ty::Array(element_ty, _) => {
- // (*) binder moved here
- Where(ty::Binder::bind(vec![element_ty]))
- }
-
- ty::Tuple(tys) => {
- // (*) binder moved here
- Where(ty::Binder::bind(tys.iter().map(|k| k.expect_ty()).collect()))
- }
-
- ty::Closure(def_id, substs) => {
- // (*) binder moved here
- Where(ty::Binder::bind(substs.as_closure().upvar_tys(def_id, self.tcx()).collect()))
- }
-
- ty::Adt(..) | ty::Projection(..) | ty::Param(..) | ty::Opaque(..) => {
- // Fallback to whatever user-defined impls exist in this case.
- None
- }
-
- ty::Infer(ty::TyVar(_)) => {
- // Unbound type variable. Might or might not have
- // applicable impls and so forth, depending on what
- // those type variables wind up being bound to.
- Ambiguous
- }
-
- ty::UnnormalizedProjection(..)
- | ty::Placeholder(..)
- | ty::Bound(..)
- | ty::Infer(ty::FreshTy(_))
- | ty::Infer(ty::FreshIntTy(_))
- | ty::Infer(ty::FreshFloatTy(_)) => {
- bug!("asked to assemble builtin bounds of unexpected type: {:?}", self_ty);
- }
- }
- }
-
- /// For default impls, we need to break apart a type into its
- /// "constituent types" -- meaning, the types that it contains.
- ///
- /// Here are some (simple) examples:
- ///
- /// ```
- /// (i32, u32) -> [i32, u32]
- /// Foo where struct Foo { x: i32, y: u32 } -> [i32, u32]
- /// Bar<i32> where struct Bar<T> { x: T, y: u32 } -> [i32, u32]
- /// Zed<i32> where enum Zed<T> { A(T), B(u32) } -> [i32, u32]
- /// ```
- fn constituent_types_for_ty(&self, t: Ty<'tcx>) -> Vec<Ty<'tcx>> {
- match t.kind {
- ty::Uint(_)
- | ty::Int(_)
- | ty::Bool
- | ty::Float(_)
- | ty::FnDef(..)
- | ty::FnPtr(_)
- | ty::Str
- | ty::Error
- | ty::Infer(ty::IntVar(_))
- | ty::Infer(ty::FloatVar(_))
- | ty::Never
- | ty::Char => Vec::new(),
-
- ty::UnnormalizedProjection(..)
- | ty::Placeholder(..)
- | ty::Dynamic(..)
- | ty::Param(..)
- | ty::Foreign(..)
- | ty::Projection(..)
- | ty::Bound(..)
- | ty::Infer(ty::TyVar(_))
- | ty::Infer(ty::FreshTy(_))
- | ty::Infer(ty::FreshIntTy(_))
- | ty::Infer(ty::FreshFloatTy(_)) => {
- bug!("asked to assemble constituent types of unexpected type: {:?}", t);
- }
-
- ty::RawPtr(ty::TypeAndMut { ty: element_ty, .. }) | ty::Ref(_, element_ty, _) => {
- vec![element_ty]
- }
-
- ty::Array(element_ty, _) | ty::Slice(element_ty) => vec![element_ty],
-
- ty::Tuple(ref tys) => {
- // (T1, ..., Tn) -- meets any bound that all of T1...Tn meet
- tys.iter().map(|k| k.expect_ty()).collect()
- }
-
- ty::Closure(def_id, ref substs) => {
- substs.as_closure().upvar_tys(def_id, self.tcx()).collect()
- }
-
- ty::Generator(def_id, ref substs, _) => {
- let witness = substs.as_generator().witness(def_id, self.tcx());
- substs
- .as_generator()
- .upvar_tys(def_id, self.tcx())
- .chain(iter::once(witness))
- .collect()
- }
-
- ty::GeneratorWitness(types) => {
- // This is sound because no regions in the witness can refer to
- // the binder outside the witness. So we'll effectively reuse
- // the implicit binder around the witness.
- types.skip_binder().to_vec()
- }
-
- // For `PhantomData<T>`, we pass `T`.
- ty::Adt(def, substs) if def.is_phantom_data() => substs.types().collect(),
-
- ty::Adt(def, substs) => def.all_fields().map(|f| f.ty(self.tcx(), substs)).collect(),
-
- ty::Opaque(def_id, substs) => {
- // We can resolve the `impl Trait` to its concrete type,
- // which enforces a DAG between the functions requiring
- // the auto trait bounds in question.
- vec![self.tcx().type_of(def_id).subst(self.tcx(), substs)]
- }
- }
- }
-
- fn collect_predicates_for_types(
- &mut self,
- param_env: ty::ParamEnv<'tcx>,
- cause: ObligationCause<'tcx>,
- recursion_depth: usize,
- trait_def_id: DefId,
- types: ty::Binder<Vec<Ty<'tcx>>>,
- ) -> Vec<PredicateObligation<'tcx>> {
- // Because the types were potentially derived from
- // higher-ranked obligations they may reference late-bound
- // regions. For example, `for<'a> Foo<&'a int> : Copy` would
- // yield a type like `for<'a> &'a int`. In general, we
- // maintain the invariant that we never manipulate bound
- // regions, so we have to process these bound regions somehow.
- //
- // The strategy is to:
- //
- // 1. Instantiate those regions to placeholder regions (e.g.,
- // `for<'a> &'a int` becomes `&'0 int`).
- // 2. Produce something like `&'0 int : Copy`
- // 3. Re-bind the regions back to `for<'a> &'a int : Copy`
-
- types
- .skip_binder()
- .iter()
- .flat_map(|ty| {
- // binder moved -\
- let ty: ty::Binder<Ty<'tcx>> = ty::Binder::bind(ty); // <----/
-
- self.infcx.commit_unconditionally(|_| {
- let (skol_ty, _) = self.infcx.replace_bound_vars_with_placeholders(&ty);
- let Normalized { value: normalized_ty, mut obligations } =
- project::normalize_with_depth(
- self,
- param_env,
- cause.clone(),
- recursion_depth,
- &skol_ty,
- );
- let skol_obligation = predicate_for_trait_def(
- self.tcx(),
- param_env,
- cause.clone(),
- trait_def_id,
- recursion_depth,
- normalized_ty,
- &[],
- );
- obligations.push(skol_obligation);
- obligations
- })
- })
- .collect()
- }
-
- ///////////////////////////////////////////////////////////////////////////
- // CONFIRMATION
- //
- // Confirmation unifies the output type parameters of the trait
- // with the values found in the obligation, possibly yielding a
- // type error. See the [rustc dev guide] for more details.
- //
- // [rustc dev guide]:
- // https://rustc-dev-guide.rust-lang.org/traits/resolution.html#confirmation
-
- fn confirm_candidate(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- candidate: SelectionCandidate<'tcx>,
- ) -> Result<Selection<'tcx>, SelectionError<'tcx>> {
- debug!("confirm_candidate({:?}, {:?})", obligation, candidate);
-
- match candidate {
- BuiltinCandidate { has_nested } => {
- let data = self.confirm_builtin_candidate(obligation, has_nested);
- Ok(VtableBuiltin(data))
- }
-
- ParamCandidate(param) => {
- let obligations = self.confirm_param_candidate(obligation, param);
- Ok(VtableParam(obligations))
- }
-
- ImplCandidate(impl_def_id) => {
- Ok(VtableImpl(self.confirm_impl_candidate(obligation, impl_def_id)))
- }
-
- AutoImplCandidate(trait_def_id) => {
- let data = self.confirm_auto_impl_candidate(obligation, trait_def_id);
- Ok(VtableAutoImpl(data))
- }
-
- ProjectionCandidate => {
- self.confirm_projection_candidate(obligation);
- Ok(VtableParam(Vec::new()))
- }
-
- ClosureCandidate => {
- let vtable_closure = self.confirm_closure_candidate(obligation)?;
- Ok(VtableClosure(vtable_closure))
- }
-
- GeneratorCandidate => {
- let vtable_generator = self.confirm_generator_candidate(obligation)?;
- Ok(VtableGenerator(vtable_generator))
- }
-
- FnPointerCandidate => {
- let data = self.confirm_fn_pointer_candidate(obligation)?;
- Ok(VtableFnPointer(data))
- }
-
- TraitAliasCandidate(alias_def_id) => {
- let data = self.confirm_trait_alias_candidate(obligation, alias_def_id);
- Ok(VtableTraitAlias(data))
- }
-
- ObjectCandidate => {
- let data = self.confirm_object_candidate(obligation);
- Ok(VtableObject(data))
- }
-
- BuiltinObjectCandidate => {
- // This indicates something like `Trait + Send: Send`. In this case, we know that
- // this holds because that's what the object type is telling us, and there are
- // really no additional obligations to prove, no types to unify, etc.
- Ok(VtableParam(Vec::new()))
- }
-
- BuiltinUnsizeCandidate => {
- let data = self.confirm_builtin_unsize_candidate(obligation)?;
- Ok(VtableBuiltin(data))
- }
- }
- }
-
- fn confirm_projection_candidate(&mut self, obligation: &TraitObligation<'tcx>) {
- self.infcx.commit_unconditionally(|snapshot| {
- let result =
- self.match_projection_obligation_against_definition_bounds(obligation, snapshot);
- assert!(result);
- })
- }
-
- fn confirm_param_candidate(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- param: ty::PolyTraitRef<'tcx>,
- ) -> Vec<PredicateObligation<'tcx>> {
- debug!("confirm_param_candidate({:?},{:?})", obligation, param);
-
- // During evaluation, we already checked that this
- // where-clause trait-ref could be unified with the obligation
- // trait-ref. Repeat that unification now without any
- // transactional boundary; it should not fail.
- match self.match_where_clause_trait_ref(obligation, param.clone()) {
- Ok(obligations) => obligations,
- Err(()) => {
- bug!(
- "Where clause `{:?}` was applicable to `{:?}` but now is not",
- param,
- obligation
- );
- }
- }
- }
-
- fn confirm_builtin_candidate(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- has_nested: bool,
- ) -> VtableBuiltinData<PredicateObligation<'tcx>> {
- debug!("confirm_builtin_candidate({:?}, {:?})", obligation, has_nested);
-
- let lang_items = self.tcx().lang_items();
- let obligations = if has_nested {
- let trait_def = obligation.predicate.def_id();
- let conditions = if Some(trait_def) == lang_items.sized_trait() {
- self.sized_conditions(obligation)
- } else if Some(trait_def) == lang_items.copy_trait() {
- self.copy_clone_conditions(obligation)
- } else if Some(trait_def) == lang_items.clone_trait() {
- self.copy_clone_conditions(obligation)
- } else {
- bug!("unexpected builtin trait {:?}", trait_def)
- };
- let nested = match conditions {
- BuiltinImplConditions::Where(nested) => nested,
- _ => bug!("obligation {:?} had matched a builtin impl but now doesn't", obligation),
- };
-
- let cause = obligation.derived_cause(BuiltinDerivedObligation);
- self.collect_predicates_for_types(
- obligation.param_env,
- cause,
- obligation.recursion_depth + 1,
- trait_def,
- nested,
- )
- } else {
- vec![]
- };
-
- debug!("confirm_builtin_candidate: obligations={:?}", obligations);
-
- VtableBuiltinData { nested: obligations }
- }
-
- /// This handles the case where an `auto trait Foo` impl is being used.
- /// The idea is that the impl applies to `X : Foo` if the following conditions are met:
- ///
- /// 1. For each constituent type `Y` in `X`, `Y : Foo` holds
- /// 2. For each where-clause `C` declared on `Foo`, `[Self => X] C` holds.
- fn confirm_auto_impl_candidate(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- trait_def_id: DefId,
- ) -> VtableAutoImplData<PredicateObligation<'tcx>> {
- debug!("confirm_auto_impl_candidate({:?}, {:?})", obligation, trait_def_id);
-
- let types = obligation.predicate.map_bound(|inner| {
- let self_ty = self.infcx.shallow_resolve(inner.self_ty());
- self.constituent_types_for_ty(self_ty)
- });
- self.vtable_auto_impl(obligation, trait_def_id, types)
- }
-
- /// See `confirm_auto_impl_candidate`.
- fn vtable_auto_impl(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- trait_def_id: DefId,
- nested: ty::Binder<Vec<Ty<'tcx>>>,
- ) -> VtableAutoImplData<PredicateObligation<'tcx>> {
- debug!("vtable_auto_impl: nested={:?}", nested);
-
- let cause = obligation.derived_cause(BuiltinDerivedObligation);
- let mut obligations = self.collect_predicates_for_types(
- obligation.param_env,
- cause,
- obligation.recursion_depth + 1,
- trait_def_id,
- nested,
- );
-
- let trait_obligations: Vec<PredicateObligation<'_>> =
- self.infcx.commit_unconditionally(|_| {
- let poly_trait_ref = obligation.predicate.to_poly_trait_ref();
- let (trait_ref, _) =
- self.infcx.replace_bound_vars_with_placeholders(&poly_trait_ref);
- let cause = obligation.derived_cause(ImplDerivedObligation);
- self.impl_or_trait_obligations(
- cause,
- obligation.recursion_depth + 1,
- obligation.param_env,
- trait_def_id,
- &trait_ref.substs,
- )
- });
-
- // Adds the predicates from the trait. Note that this contains a `Self: Trait`
- // predicate as usual. It won't have any effect since auto traits are coinductive.
- obligations.extend(trait_obligations);
-
- debug!("vtable_auto_impl: obligations={:?}", obligations);
-
- VtableAutoImplData { trait_def_id, nested: obligations }
- }
-
- fn confirm_impl_candidate(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- impl_def_id: DefId,
- ) -> VtableImplData<'tcx, PredicateObligation<'tcx>> {
- debug!("confirm_impl_candidate({:?},{:?})", obligation, impl_def_id);
-
- // First, create the substitutions by matching the impl again,
- // this time not in a probe.
- self.infcx.commit_unconditionally(|snapshot| {
- let substs = self.rematch_impl(impl_def_id, obligation, snapshot);
- debug!("confirm_impl_candidate: substs={:?}", substs);
- let cause = obligation.derived_cause(ImplDerivedObligation);
- self.vtable_impl(
- impl_def_id,
- substs,
- cause,
- obligation.recursion_depth + 1,
- obligation.param_env,
- )
- })
- }
-
- fn vtable_impl(
- &mut self,
- impl_def_id: DefId,
- mut substs: Normalized<'tcx, SubstsRef<'tcx>>,
- cause: ObligationCause<'tcx>,
- recursion_depth: usize,
- param_env: ty::ParamEnv<'tcx>,
- ) -> VtableImplData<'tcx, PredicateObligation<'tcx>> {
- debug!(
- "vtable_impl(impl_def_id={:?}, substs={:?}, recursion_depth={})",
- impl_def_id, substs, recursion_depth,
- );
-
- let mut impl_obligations = self.impl_or_trait_obligations(
- cause,
- recursion_depth,
- param_env,
- impl_def_id,
- &substs.value,
- );
-
- debug!(
- "vtable_impl: impl_def_id={:?} impl_obligations={:?}",
- impl_def_id, impl_obligations
- );
-
- // Because of RFC447, the impl-trait-ref and obligations
- // are sufficient to determine the impl substs, without
- // relying on projections in the impl-trait-ref.
- //
- // e.g., `impl<U: Tr, V: Iterator<Item=U>> Foo<<U as Tr>::T> for V`
- impl_obligations.append(&mut substs.obligations);
-
- VtableImplData { impl_def_id, substs: substs.value, nested: impl_obligations }
- }
-
- fn confirm_object_candidate(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- ) -> VtableObjectData<'tcx, PredicateObligation<'tcx>> {
- debug!("confirm_object_candidate({:?})", obligation);
-
- // FIXME(nmatsakis) skipping binder here seems wrong -- we should
- // probably flatten the binder from the obligation and the binder
- // from the object. We should try to construct a test case that
- // breaks as a result.
- let self_ty = self.infcx.shallow_resolve(*obligation.self_ty().skip_binder());
- let poly_trait_ref = match self_ty.kind {
- ty::Dynamic(ref data, ..) => data
- .principal()
- .unwrap_or_else(|| {
- span_bug!(obligation.cause.span, "object candidate with no principal")
- })
- .with_self_ty(self.tcx(), self_ty),
- _ => span_bug!(obligation.cause.span, "object candidate with non-object"),
- };
-
- let mut upcast_trait_ref = None;
- let mut nested = vec![];
- let vtable_base;
-
- {
- let tcx = self.tcx();
-
- // We want to find the first supertrait in the list of
- // supertraits that we can unify with, and do that
- // unification. We know that there is exactly one in the list
- // where we can unify, because otherwise select would have
- // reported an ambiguity. (When we do find a match, also
- // record it for later.)
- let nonmatching = util::supertraits(tcx, poly_trait_ref).take_while(|&t| {
- match self.infcx.commit_if_ok(|_| self.match_poly_trait_ref(obligation, t)) {
- Ok(obligations) => {
- upcast_trait_ref = Some(t);
- nested.extend(obligations);
- false
- }
- Err(_) => true,
- }
- });
-
- // Additionally, for each of the non-matching predicates that
- // we pass over, we sum up the number of vtable entries, so
- // that we can compute the offset for the selected trait.
- vtable_base = nonmatching.map(|t| super::util::count_own_vtable_entries(tcx, t)).sum();
- }
-
- VtableObjectData { upcast_trait_ref: upcast_trait_ref.unwrap(), vtable_base, nested }
- }
-
- fn confirm_fn_pointer_candidate(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- ) -> Result<VtableFnPointerData<'tcx, PredicateObligation<'tcx>>, SelectionError<'tcx>> {
- debug!("confirm_fn_pointer_candidate({:?})", obligation);
-
- // Okay to skip binder; it is reintroduced below.
- let self_ty = self.infcx.shallow_resolve(*obligation.self_ty().skip_binder());
- let sig = self_ty.fn_sig(self.tcx());
- let trait_ref = closure_trait_ref_and_return_type(
- self.tcx(),
- obligation.predicate.def_id(),
- self_ty,
- sig,
- util::TupleArgumentsFlag::Yes,
- )
- .map_bound(|(trait_ref, _)| trait_ref);
-
- let Normalized { value: trait_ref, obligations } = project::normalize_with_depth(
- self,
- obligation.param_env,
- obligation.cause.clone(),
- obligation.recursion_depth + 1,
- &trait_ref,
- );
-
- self.confirm_poly_trait_refs(
- obligation.cause.clone(),
- obligation.param_env,
- obligation.predicate.to_poly_trait_ref(),
- trait_ref,
- )?;
- Ok(VtableFnPointerData { fn_ty: self_ty, nested: obligations })
- }
-
- fn confirm_trait_alias_candidate(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- alias_def_id: DefId,
- ) -> VtableTraitAliasData<'tcx, PredicateObligation<'tcx>> {
- debug!("confirm_trait_alias_candidate({:?}, {:?})", obligation, alias_def_id);
-
- self.infcx.commit_unconditionally(|_| {
- let (predicate, _) =
- self.infcx().replace_bound_vars_with_placeholders(&obligation.predicate);
- let trait_ref = predicate.trait_ref;
- let trait_def_id = trait_ref.def_id;
- let substs = trait_ref.substs;
-
- let trait_obligations = self.impl_or_trait_obligations(
- obligation.cause.clone(),
- obligation.recursion_depth,
- obligation.param_env,
- trait_def_id,
- &substs,
- );
-
- debug!(
- "confirm_trait_alias_candidate: trait_def_id={:?} trait_obligations={:?}",
- trait_def_id, trait_obligations
- );
-
- VtableTraitAliasData { alias_def_id, substs, nested: trait_obligations }
- })
- }
-
- fn confirm_generator_candidate(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- ) -> Result<VtableGeneratorData<'tcx, PredicateObligation<'tcx>>, SelectionError<'tcx>> {
- // Okay to skip binder because the substs on generator types never
- // touch bound regions, they just capture the in-scope
- // type/region parameters.
- let self_ty = self.infcx.shallow_resolve(*obligation.self_ty().skip_binder());
- let (generator_def_id, substs) = match self_ty.kind {
- ty::Generator(id, substs, _) => (id, substs),
- _ => bug!("generator candidate for non-generator {:?}", obligation),
- };
-
- debug!("confirm_generator_candidate({:?},{:?},{:?})", obligation, generator_def_id, substs);
-
- let trait_ref = self.generator_trait_ref_unnormalized(obligation, generator_def_id, substs);
- let Normalized { value: trait_ref, mut obligations } = normalize_with_depth(
- self,
- obligation.param_env,
- obligation.cause.clone(),
- obligation.recursion_depth + 1,
- &trait_ref,
- );
-
- debug!(
- "confirm_generator_candidate(generator_def_id={:?}, \
- trait_ref={:?}, obligations={:?})",
- generator_def_id, trait_ref, obligations
- );
-
- obligations.extend(self.confirm_poly_trait_refs(
- obligation.cause.clone(),
- obligation.param_env,
- obligation.predicate.to_poly_trait_ref(),
- trait_ref,
- )?);
-
- Ok(VtableGeneratorData { generator_def_id, substs, nested: obligations })
- }
-
- fn confirm_closure_candidate(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- ) -> Result<VtableClosureData<'tcx, PredicateObligation<'tcx>>, SelectionError<'tcx>> {
- debug!("confirm_closure_candidate({:?})", obligation);
-
- let kind = self
- .tcx()
- .fn_trait_kind_from_lang_item(obligation.predicate.def_id())
- .unwrap_or_else(|| bug!("closure candidate for non-fn trait {:?}", obligation));
-
- // Okay to skip binder because the substs on closure types never
- // touch bound regions, they just capture the in-scope
- // type/region parameters.
- let self_ty = self.infcx.shallow_resolve(*obligation.self_ty().skip_binder());
- let (closure_def_id, substs) = match self_ty.kind {
- ty::Closure(id, substs) => (id, substs),
- _ => bug!("closure candidate for non-closure {:?}", obligation),
- };
-
- let trait_ref = self.closure_trait_ref_unnormalized(obligation, closure_def_id, substs);
- let Normalized { value: trait_ref, mut obligations } = normalize_with_depth(
- self,
- obligation.param_env,
- obligation.cause.clone(),
- obligation.recursion_depth + 1,
- &trait_ref,
- );
-
- debug!(
- "confirm_closure_candidate(closure_def_id={:?}, trait_ref={:?}, obligations={:?})",
- closure_def_id, trait_ref, obligations
- );
-
- obligations.extend(self.confirm_poly_trait_refs(
- obligation.cause.clone(),
- obligation.param_env,
- obligation.predicate.to_poly_trait_ref(),
- trait_ref,
- )?);
-
- // FIXME: Chalk
-
- if !self.tcx().sess.opts.debugging_opts.chalk {
- obligations.push(Obligation::new(
- obligation.cause.clone(),
- obligation.param_env,
- ty::Predicate::ClosureKind(closure_def_id, substs, kind),
- ));
- }
-
- Ok(VtableClosureData { closure_def_id, substs, nested: obligations })
- }
-
- /// In the case of closure types and fn pointers,
- /// we currently treat the input type parameters on the trait as
- /// outputs. This means that when we have a match we have only
- /// considered the self type, so we have to go back and make sure
- /// to relate the argument types too. This is kind of wrong, but
- /// since we control the full set of impls, also not that wrong,
- /// and it DOES yield better error messages (since we don't report
- /// errors as if there is no applicable impl, but rather report
- /// errors about mismatched argument types).
- ///
- /// Here is an example. Imagine we have a closure expression
- /// and we desugared it so that the type of the expression is
- /// `Closure`, and `Closure` expects an int as argument. Then it
- /// is "as if" the compiler generated this impl:
- ///
- /// impl Fn(int) for Closure { ... }
- ///
- /// Now imagine our obligation is `Fn(usize) for Closure`. So far
- /// we have matched the self type `Closure`. At this point we'll
- /// compare the `int` to `usize` and generate an error.
- ///
- /// Note that this checking occurs *after* the impl has selected,
- /// because these output type parameters should not affect the
- /// selection of the impl. Therefore, if there is a mismatch, we
- /// report an error to the user.
- fn confirm_poly_trait_refs(
- &mut self,
- obligation_cause: ObligationCause<'tcx>,
- obligation_param_env: ty::ParamEnv<'tcx>,
- obligation_trait_ref: ty::PolyTraitRef<'tcx>,
- expected_trait_ref: ty::PolyTraitRef<'tcx>,
- ) -> Result<Vec<PredicateObligation<'tcx>>, SelectionError<'tcx>> {
- self.infcx
- .at(&obligation_cause, obligation_param_env)
- .sup(obligation_trait_ref, expected_trait_ref)
- .map(|InferOk { obligations, .. }| obligations)
- .map_err(|e| OutputTypeParameterMismatch(expected_trait_ref, obligation_trait_ref, e))
- }
-
- fn confirm_builtin_unsize_candidate(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- ) -> Result<VtableBuiltinData<PredicateObligation<'tcx>>, SelectionError<'tcx>> {
- let tcx = self.tcx();
-
- // `assemble_candidates_for_unsizing` should ensure there are no late-bound
- // regions here. See the comment there for more details.
- let source = self.infcx.shallow_resolve(obligation.self_ty().no_bound_vars().unwrap());
- let target = obligation.predicate.skip_binder().trait_ref.substs.type_at(1);
- let target = self.infcx.shallow_resolve(target);
-
- debug!("confirm_builtin_unsize_candidate(source={:?}, target={:?})", source, target);
-
- let mut nested = vec![];
- match (&source.kind, &target.kind) {
- // Trait+Kx+'a -> Trait+Ky+'b (upcasts).
- (&ty::Dynamic(ref data_a, r_a), &ty::Dynamic(ref data_b, r_b)) => {
- // See `assemble_candidates_for_unsizing` for more info.
- let existential_predicates = data_a.map_bound(|data_a| {
- let iter = data_a
- .principal()
- .map(|x| ty::ExistentialPredicate::Trait(x))
- .into_iter()
- .chain(
- data_a
- .projection_bounds()
- .map(|x| ty::ExistentialPredicate::Projection(x)),
- )
- .chain(data_b.auto_traits().map(ty::ExistentialPredicate::AutoTrait));
- tcx.mk_existential_predicates(iter)
- });
- let source_trait = tcx.mk_dynamic(existential_predicates, r_b);
-
- // Require that the traits involved in this upcast are **equal**;
- // only the **lifetime bound** is changed.
- //
- // FIXME: This condition is arguably too strong -- it would
- // suffice for the source trait to be a *subtype* of the target
- // trait. In particular, changing from something like
- // `for<'a, 'b> Foo<'a, 'b>` to `for<'a> Foo<'a, 'a>` should be
- // permitted. And, indeed, in commit
- // 904a0bde93f0348f69914ee90b1f8b6e4e0d7cbc, this
- // condition was loosened. However, when the leak check was
- // added back, using subtype here actually guides the coercion
- // code in such a way that it accepts `old-lub-glb-object.rs`.
- // This is probably a good thing, but I've modified this to `.eq`
- // because I want to continue rejecting that test (as we have
- // done for quite some time) before we are firmly comfortable
- // with what our behavior should be there. -nikomatsakis
- let InferOk { obligations, .. } = self
- .infcx
- .at(&obligation.cause, obligation.param_env)
- .eq(target, source_trait) // FIXME -- see below
- .map_err(|_| Unimplemented)?;
- nested.extend(obligations);
-
- // Register one obligation for 'a: 'b.
- let cause = ObligationCause::new(
- obligation.cause.span,
- obligation.cause.body_id,
- ObjectCastObligation(target),
- );
- let outlives = ty::OutlivesPredicate(r_a, r_b);
- nested.push(Obligation::with_depth(
- cause,
- obligation.recursion_depth + 1,
- obligation.param_env,
- ty::Binder::bind(outlives).to_predicate(),
- ));
- }
-
- // `T` -> `Trait`
- (_, &ty::Dynamic(ref data, r)) => {
- let mut object_dids = data.auto_traits().chain(data.principal_def_id());
- if let Some(did) = object_dids.find(|did| !tcx.is_object_safe(*did)) {
- return Err(TraitNotObjectSafe(did));
- }
-
- let cause = ObligationCause::new(
- obligation.cause.span,
- obligation.cause.body_id,
- ObjectCastObligation(target),
- );
-
- let predicate_to_obligation = |predicate| {
- Obligation::with_depth(
- cause.clone(),
- obligation.recursion_depth + 1,
- obligation.param_env,
- predicate,
- )
- };
-
- // Create obligations:
- // - Casting `T` to `Trait`
- // - For all the various builtin bounds attached to the object cast. (In other
- // words, if the object type is `Foo + Send`, this would create an obligation for
- // the `Send` check.)
- // - Projection predicates
- nested.extend(
- data.iter().map(|predicate| {
- predicate_to_obligation(predicate.with_self_ty(tcx, source))
- }),
- );
-
- // We can only make objects from sized types.
- let tr = ty::TraitRef::new(
- tcx.require_lang_item(lang_items::SizedTraitLangItem, None),
- tcx.mk_substs_trait(source, &[]),
- );
- nested.push(predicate_to_obligation(tr.without_const().to_predicate()));
-
- // If the type is `Foo + 'a`, ensure that the type
- // being cast to `Foo + 'a` outlives `'a`:
- let outlives = ty::OutlivesPredicate(source, r);
- nested.push(predicate_to_obligation(ty::Binder::dummy(outlives).to_predicate()));
- }
-
- // `[T; n]` -> `[T]`
- (&ty::Array(a, _), &ty::Slice(b)) => {
- let InferOk { obligations, .. } = self
- .infcx
- .at(&obligation.cause, obligation.param_env)
- .eq(b, a)
- .map_err(|_| Unimplemented)?;
- nested.extend(obligations);
- }
-
- // `Struct<T>` -> `Struct<U>`
- (&ty::Adt(def, substs_a), &ty::Adt(_, substs_b)) => {
- let fields =
- def.all_fields().map(|field| tcx.type_of(field.did)).collect::<Vec<_>>();
-
- // The last field of the structure has to exist and contain type parameters.
- let field = if let Some(&field) = fields.last() {
- field
- } else {
- return Err(Unimplemented);
- };
- let mut ty_params = GrowableBitSet::new_empty();
- let mut found = false;
- for ty in field.walk() {
- if let ty::Param(p) = ty.kind {
- ty_params.insert(p.index as usize);
- found = true;
- }
- }
- if !found {
- return Err(Unimplemented);
- }
-
- // Replace type parameters used in unsizing with
- // Error and ensure they do not affect any other fields.
- // This could be checked after type collection for any struct
- // with a potentially unsized trailing field.
- let params = substs_a
- .iter()
- .enumerate()
- .map(|(i, &k)| if ty_params.contains(i) { tcx.types.err.into() } else { k });
- let substs = tcx.mk_substs(params);
- for &ty in fields.split_last().unwrap().1 {
- if ty.subst(tcx, substs).references_error() {
- return Err(Unimplemented);
- }
- }
-
- // Extract `Field<T>` and `Field<U>` from `Struct<T>` and `Struct<U>`.
- let inner_source = field.subst(tcx, substs_a);
- let inner_target = field.subst(tcx, substs_b);
-
- // Check that the source struct with the target's
- // unsized parameters is equal to the target.
- let params = substs_a.iter().enumerate().map(|(i, &k)| {
- if ty_params.contains(i) { substs_b.type_at(i).into() } else { k }
- });
- let new_struct = tcx.mk_adt(def, tcx.mk_substs(params));
- let InferOk { obligations, .. } = self
- .infcx
- .at(&obligation.cause, obligation.param_env)
- .eq(target, new_struct)
- .map_err(|_| Unimplemented)?;
- nested.extend(obligations);
-
- // Construct the nested `Field<T>: Unsize<Field<U>>` predicate.
- nested.push(predicate_for_trait_def(
- tcx,
- obligation.param_env,
- obligation.cause.clone(),
- obligation.predicate.def_id(),
- obligation.recursion_depth + 1,
- inner_source,
- &[inner_target.into()],
- ));
- }
-
- // `(.., T)` -> `(.., U)`
- (&ty::Tuple(tys_a), &ty::Tuple(tys_b)) => {
- assert_eq!(tys_a.len(), tys_b.len());
-
- // The last field of the tuple has to exist.
- let (&a_last, a_mid) = if let Some(x) = tys_a.split_last() {
- x
- } else {
- return Err(Unimplemented);
- };
- let &b_last = tys_b.last().unwrap();
-
- // Check that the source tuple with the target's
- // last element is equal to the target.
- let new_tuple = tcx.mk_tup(
- a_mid.iter().map(|k| k.expect_ty()).chain(iter::once(b_last.expect_ty())),
- );
- let InferOk { obligations, .. } = self
- .infcx
- .at(&obligation.cause, obligation.param_env)
- .eq(target, new_tuple)
- .map_err(|_| Unimplemented)?;
- nested.extend(obligations);
-
- // Construct the nested `T: Unsize<U>` predicate.
- nested.push(predicate_for_trait_def(
- tcx,
- obligation.param_env,
- obligation.cause.clone(),
- obligation.predicate.def_id(),
- obligation.recursion_depth + 1,
- a_last.expect_ty(),
- &[b_last],
- ));
- }
-
- _ => bug!(),
- };
-
- Ok(VtableBuiltinData { nested })
- }
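The unsizing coercions that `confirm_builtin_unsize_candidate` handles (`[T; n] -> [T]`, `T -> dyn Trait`, and a struct whose trailing field unsizes) can be exercised directly in surface Rust. This is a standalone demonstration of the coercions themselves, not of the compiler internals; the `Wrapper` type and function names are ours:

```rust
use std::fmt::Debug;

// `[u8; 4]` unsizes to `[u8]`.
fn array_to_slice(arr: &[u8; 4]) -> &[u8] {
    arr
}

// `u32` unsizes to `dyn Debug` (`u32: Debug`, and `Debug` is object safe).
fn to_object(x: &u32) -> &dyn Debug {
    x
}

// `Wrapper<[u8]>` is valid because only the *last* field may be unsized;
// this mirrors the "last field of the structure" check in the Adt arm above.
struct Wrapper<T: ?Sized> {
    len_hint: usize,
    data: T,
}

// `Wrapper<[u8; 2]>` unsizes to `Wrapper<[u8]>` because the type parameter
// appears only in the trailing field.
fn wrapper_unsize(w: &Wrapper<[u8; 2]>) -> &Wrapper<[u8]> {
    w
}
```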
-
- ///////////////////////////////////////////////////////////////////////////
- // Matching
- //
- // Matching is a common path used for both evaluation and
- // confirmation. It basically unifies types that appear in impls
- // and traits. This does affect the surrounding environment;
- // therefore, when used during evaluation, match routines must be
- // run inside of a `probe()` so that their side-effects are
- // contained.
-
- fn rematch_impl(
- &mut self,
- impl_def_id: DefId,
- obligation: &TraitObligation<'tcx>,
- snapshot: &CombinedSnapshot<'_, 'tcx>,
- ) -> Normalized<'tcx, SubstsRef<'tcx>> {
- match self.match_impl(impl_def_id, obligation, snapshot) {
- Ok(substs) => substs,
- Err(()) => {
- bug!(
- "Impl {:?} was matchable against {:?} but now is not",
- impl_def_id,
- obligation
- );
- }
- }
- }
-
- fn match_impl(
- &mut self,
- impl_def_id: DefId,
- obligation: &TraitObligation<'tcx>,
- snapshot: &CombinedSnapshot<'_, 'tcx>,
- ) -> Result<Normalized<'tcx, SubstsRef<'tcx>>, ()> {
- let impl_trait_ref = self.tcx().impl_trait_ref(impl_def_id).unwrap();
-
- // Before we create the substitutions and everything, first
- // consider a "quick reject". This avoids creating more types
- // and so forth that we need to.
- if self.fast_reject_trait_refs(obligation, &impl_trait_ref) {
- return Err(());
- }
-
- let (skol_obligation, placeholder_map) =
- self.infcx().replace_bound_vars_with_placeholders(&obligation.predicate);
- let skol_obligation_trait_ref = skol_obligation.trait_ref;
-
- let impl_substs = self.infcx.fresh_substs_for_item(obligation.cause.span, impl_def_id);
-
- let impl_trait_ref = impl_trait_ref.subst(self.tcx(), impl_substs);
-
- let Normalized { value: impl_trait_ref, obligations: mut nested_obligations } =
- project::normalize_with_depth(
- self,
- obligation.param_env,
- obligation.cause.clone(),
- obligation.recursion_depth + 1,
- &impl_trait_ref,
- );
-
- debug!(
- "match_impl(impl_def_id={:?}, obligation={:?}, \
- impl_trait_ref={:?}, skol_obligation_trait_ref={:?})",
- impl_def_id, obligation, impl_trait_ref, skol_obligation_trait_ref
- );
-
- let InferOk { obligations, .. } = self
- .infcx
- .at(&obligation.cause, obligation.param_env)
- .eq(skol_obligation_trait_ref, impl_trait_ref)
- .map_err(|e| debug!("match_impl: failed eq_trait_refs due to `{}`", e))?;
- nested_obligations.extend(obligations);
-
- if let Err(e) = self.infcx.leak_check(false, &placeholder_map, snapshot) {
- debug!("match_impl: failed leak check due to `{}`", e);
- return Err(());
- }
-
- if !self.intercrate
- && self.tcx().impl_polarity(impl_def_id) == ty::ImplPolarity::Reservation
- {
- debug!("match_impl: reservation impls only apply in intercrate mode");
- return Err(());
- }
-
- debug!("match_impl: success impl_substs={:?}", impl_substs);
- Ok(Normalized { value: impl_substs, obligations: nested_obligations })
- }
-
- fn fast_reject_trait_refs(
- &mut self,
- obligation: &TraitObligation<'_>,
- impl_trait_ref: &ty::TraitRef<'_>,
- ) -> bool {
- // We can avoid creating type variables and doing the full
- // substitution if we find that any of the input types, when
- // simplified, do not match.
-
- obligation.predicate.skip_binder().input_types().zip(impl_trait_ref.input_types()).any(
- |(obligation_ty, impl_ty)| {
- let simplified_obligation_ty =
- fast_reject::simplify_type(self.tcx(), obligation_ty, true);
- let simplified_impl_ty = fast_reject::simplify_type(self.tcx(), impl_ty, false);
-
- simplified_obligation_ty.is_some()
- && simplified_impl_ty.is_some()
- && simplified_obligation_ty != simplified_impl_ty
- },
- )
- }
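The idea behind `fast_reject_trait_refs` can be sketched outside the compiler: simplify each type to a coarse fingerprint, and reject the impl only when *both* sides produce a fingerprint and the fingerprints differ (an inference variable yields no fingerprint, so it can never justify a rejection). The toy `Ty`/`SimplifiedType` types and function names below are ours, not rustc's:

```rust
// A coarse "simplified type": just enough structure to rule out obviously
// mismatched types without performing full unification.
#[derive(PartialEq, Debug)]
enum SimplifiedType {
    Int,
    Str,
    Adt(&'static str), // a nominal type, identified by name
}

enum Ty {
    Int,
    Str,
    Adt(&'static str),
    Infer, // an inference variable: could still become anything
}

// Returns `None` when the type is not yet known, so we cannot reject.
fn simplify(ty: &Ty) -> Option<SimplifiedType> {
    match ty {
        Ty::Int => Some(SimplifiedType::Int),
        Ty::Str => Some(SimplifiedType::Str),
        Ty::Adt(name) => Some(SimplifiedType::Adt(*name)),
        Ty::Infer => None,
    }
}

// Reject only when both sides simplify and the fingerprints differ.
fn fast_reject(obligation_ty: &Ty, impl_ty: &Ty) -> bool {
    match (simplify(obligation_ty), simplify(impl_ty)) {
        (Some(a), Some(b)) => a != b,
        _ => false,
    }
}
```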
-
- /// Normalize `where_clause_trait_ref` and try to match it against
- /// `obligation`. If successful, return any predicates that
- /// result from the normalization. Normalization is necessary
- /// because where-clauses are stored in the parameter environment
- /// unnormalized.
- fn match_where_clause_trait_ref(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- where_clause_trait_ref: ty::PolyTraitRef<'tcx>,
- ) -> Result<Vec<PredicateObligation<'tcx>>, ()> {
- self.match_poly_trait_ref(obligation, where_clause_trait_ref)
- }
-
- /// Returns `Ok` if `poly_trait_ref` being true implies that the
- /// obligation is satisfied.
- fn match_poly_trait_ref(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- poly_trait_ref: ty::PolyTraitRef<'tcx>,
- ) -> Result<Vec<PredicateObligation<'tcx>>, ()> {
- debug!(
- "match_poly_trait_ref: obligation={:?} poly_trait_ref={:?}",
- obligation, poly_trait_ref
- );
-
- self.infcx
- .at(&obligation.cause, obligation.param_env)
- .sup(obligation.predicate.to_poly_trait_ref(), poly_trait_ref)
- .map(|InferOk { obligations, .. }| obligations)
- .map_err(|_| ())
- }
-
- ///////////////////////////////////////////////////////////////////////////
- // Miscellany
-
- fn match_fresh_trait_refs(
- &self,
- previous: &ty::PolyTraitRef<'tcx>,
- current: &ty::PolyTraitRef<'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- ) -> bool {
- let mut matcher = ty::_match::Match::new(self.tcx(), param_env);
- matcher.relate(previous, current).is_ok()
- }
-
- fn push_stack<'o>(
- &mut self,
- previous_stack: TraitObligationStackList<'o, 'tcx>,
- obligation: &'o TraitObligation<'tcx>,
- ) -> TraitObligationStack<'o, 'tcx> {
- let fresh_trait_ref =
- obligation.predicate.to_poly_trait_ref().fold_with(&mut self.freshener);
-
- let dfn = previous_stack.cache.next_dfn();
- let depth = previous_stack.depth() + 1;
- TraitObligationStack {
- obligation,
- fresh_trait_ref,
- reached_depth: Cell::new(depth),
- previous: previous_stack,
- dfn,
- depth,
- }
- }
-
- fn closure_trait_ref_unnormalized(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- closure_def_id: DefId,
- substs: SubstsRef<'tcx>,
- ) -> ty::PolyTraitRef<'tcx> {
- debug!(
- "closure_trait_ref_unnormalized(obligation={:?}, closure_def_id={:?}, substs={:?})",
- obligation, closure_def_id, substs,
- );
- let closure_type = self.infcx.closure_sig(closure_def_id, substs);
-
- debug!("closure_trait_ref_unnormalized: closure_type = {:?}", closure_type);
-
- // (1) Feels icky to skip the binder here, but OTOH we know
- // that the self-type is an unboxed closure type and hence is
- // in fact unparameterized (or at least does not reference any
- // regions bound in the obligation). Still probably some
- // refactoring could make this nicer.
- closure_trait_ref_and_return_type(
- self.tcx(),
- obligation.predicate.def_id(),
- obligation.predicate.skip_binder().self_ty(), // (1)
- closure_type,
- util::TupleArgumentsFlag::No,
- )
- .map_bound(|(trait_ref, _)| trait_ref)
- }
-
- fn generator_trait_ref_unnormalized(
- &mut self,
- obligation: &TraitObligation<'tcx>,
- closure_def_id: DefId,
- substs: SubstsRef<'tcx>,
- ) -> ty::PolyTraitRef<'tcx> {
- let gen_sig = substs.as_generator().poly_sig(closure_def_id, self.tcx());
-
- // (1) Feels icky to skip the binder here, but OTOH we know
- // that the self-type is a generator type and hence is
- // in fact unparameterized (or at least does not reference any
- // regions bound in the obligation). Still probably some
- // refactoring could make this nicer.
-
- super::util::generator_trait_ref_and_outputs(
- self.tcx(),
- obligation.predicate.def_id(),
- obligation.predicate.skip_binder().self_ty(), // (1)
- gen_sig,
- )
- .map_bound(|(trait_ref, ..)| trait_ref)
- }
-
- /// Returns the obligations that are implied by instantiating an
- /// impl or trait. The obligations are substituted and fully
- /// normalized. This is used when confirming an impl or default
- /// impl.
- fn impl_or_trait_obligations(
- &mut self,
- cause: ObligationCause<'tcx>,
- recursion_depth: usize,
- param_env: ty::ParamEnv<'tcx>,
- def_id: DefId, // of impl or trait
- substs: SubstsRef<'tcx>, // for impl or trait
- ) -> Vec<PredicateObligation<'tcx>> {
- debug!("impl_or_trait_obligations(def_id={:?})", def_id);
- let tcx = self.tcx();
-
- // To allow for one-pass evaluation of the nested obligation,
- // each predicate must be preceded by the obligations required
- // to normalize it.
- // For example, if we have:
- // impl<U: Iterator<Item: Copy>, V: Iterator<Item = U>> Foo for V
- // the impl will have the following predicates:
- // <V as Iterator>::Item = U,
- // U: Iterator, U: Sized,
- // V: Iterator, V: Sized,
- // <U as Iterator>::Item: Copy
- // When we substitute, say, `V => IntoIter<u32>, U => $0`, the last
- // obligation will normalize to `<$0 as Iterator>::Item = $1` and
- // `$1: Copy`, so we must ensure the obligations are emitted in
- // that order.
- let predicates = tcx.predicates_of(def_id);
- assert_eq!(predicates.parent, None);
- let mut obligations = Vec::with_capacity(predicates.predicates.len());
- for (predicate, _) in predicates.predicates {
- let predicate = normalize_with_depth_to(
- self,
- param_env,
- cause.clone(),
- recursion_depth,
- &predicate.subst(tcx, substs),
- &mut obligations,
- );
- obligations.push(Obligation {
- cause: cause.clone(),
- recursion_depth,
- param_env,
- predicate,
- });
- }
-
- // We are performing deduplication here to avoid exponential blowups
- // (#38528) from happening, but the real cause of the duplication is
- // unknown. What we know is that the deduplication avoids exponential
- // amount of predicates being propagated when processing deeply nested
- // types.
- //
- // This code is hot enough that it's worth avoiding the allocation
- // required for the FxHashSet when possible. Special-casing lengths 0,
- // 1 and 2 covers roughly 75-80% of the cases.
- if obligations.len() <= 1 {
- // No possibility of duplicates.
- } else if obligations.len() == 2 {
- // Only two elements. Drop the second if they are equal.
- if obligations[0] == obligations[1] {
- obligations.truncate(1);
- }
- } else {
- // Three or more elements. Use a general deduplication process.
- let mut seen = FxHashSet::default();
- obligations.retain(|i| seen.insert(i.clone()));
- }
-
- obligations
- }
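The length-specialized deduplication at the end of `impl_or_trait_obligations` is self-contained enough to sketch on its own; this is a minimal standalone version over any hashable element type (the `dedup_preserving_order` name is ours):

```rust
use std::collections::HashSet;
use std::hash::Hash;

// Deduplicate while preserving order, special-casing the common small
// lengths to avoid allocating a HashSet at all.
fn dedup_preserving_order<T: Clone + Eq + Hash>(items: &mut Vec<T>) {
    if items.len() <= 1 {
        // No possibility of duplicates.
    } else if items.len() == 2 {
        // Only two elements: drop the second if they are equal.
        if items[0] == items[1] {
            items.truncate(1);
        }
    } else {
        // Three or more elements: keep the first occurrence of each.
        let mut seen = HashSet::new();
        items.retain(|i| seen.insert(i.clone()));
    }
}
```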
-}
-
-impl<'tcx> TraitObligation<'tcx> {
- #[allow(unused_comparisons)]
- pub fn derived_cause(
- &self,
- variant: fn(DerivedObligationCause<'tcx>) -> ObligationCauseCode<'tcx>,
- ) -> ObligationCause<'tcx> {
- /*!
- * Creates a cause for obligations that are derived from
- * `obligation` by a recursive search (e.g., for a builtin
- * bound, or eventually an `auto trait Foo`). If `obligation`
- * is itself a derived obligation, this is just a clone, but
- * otherwise we create a "derived obligation" cause so as to
- * keep track of the original root obligation for error
- * reporting.
- */
-
- let obligation = self;
-
- // NOTE(flaper87): As of now, it keeps track of the whole error
- // chain. Ideally, we should have a way to configure this either
- // by using -Z verbose or just a CLI argument.
- let derived_cause = DerivedObligationCause {
- parent_trait_ref: obligation.predicate.to_poly_trait_ref(),
- parent_code: Rc::new(obligation.cause.code.clone()),
- };
- let derived_code = variant(derived_cause);
- ObligationCause::new(obligation.cause.span, obligation.cause.body_id, derived_code)
- }
-}
-
-impl<'o, 'tcx> TraitObligationStack<'o, 'tcx> {
- fn list(&'o self) -> TraitObligationStackList<'o, 'tcx> {
- TraitObligationStackList::with(self)
- }
-
- fn cache(&self) -> &'o ProvisionalEvaluationCache<'tcx> {
- self.previous.cache
- }
-
- fn iter(&'o self) -> TraitObligationStackList<'o, 'tcx> {
- self.list()
- }
-
- /// Indicates that attempting to evaluate this stack entry
- /// required accessing something from the stack at depth `reached_depth`.
- fn update_reached_depth(&self, reached_depth: usize) {
- assert!(
- self.depth > reached_depth,
- "invoked `update_reached_depth` with something under this stack: \
- self.depth={} reached_depth={}",
- self.depth,
- reached_depth,
- );
- debug!("update_reached_depth(reached_depth={})", reached_depth);
- let mut p = self;
- while reached_depth < p.depth {
- debug!("update_reached_depth: marking {:?} as cycle participant", p.fresh_trait_ref);
- p.reached_depth.set(p.reached_depth.get().min(reached_depth));
- p = p.previous.head.unwrap();
- }
- }
-}
-
-/// The "provisional evaluation cache" is used to store intermediate cache results
-/// when solving auto traits. Auto traits are unusual in that they can support
-/// cycles. So, for example, a "proof tree" like this would be ok:
-///
-/// - `Foo<T>: Send` :-
-/// - `Bar<T>: Send` :-
-/// - `Foo<T>: Send` -- cycle, but ok
-/// - `Baz<T>: Send`
-///
-/// Here, to prove `Foo<T>: Send`, we have to prove `Bar<T>: Send` and
-/// `Baz<T>: Send`. Proving `Bar<T>: Send` in turn required `Foo<T>: Send`.
-/// For non-auto traits, this cycle would be an error, but for auto traits (because
-/// they are coinductive) it is considered ok.
-///
-/// However, there is a complication: at the point where we have
-/// "proven" `Bar<T>: Send`, we have in fact only proven it
-/// *provisionally*. In particular, we proved that `Bar<T>: Send`
-/// *under the assumption* that `Foo<T>: Send`. But what if we later
-/// find out this assumption is wrong? Specifically, we could
-/// encounter some kind of error proving `Baz<T>: Send`. In that case,
-/// `Bar<T>: Send` didn't turn out to be true.
-///
-/// In Issue #60010, we found a bug in rustc where it would cache
-/// these intermediate results. This was fixed in #60444 by disabling
-/// *all* caching for things involved in a cycle -- in our example,
-/// that would mean we don't cache that `Bar<T>: Send`. But this led
-/// to large slowdowns.
-///
-/// Specifically, imagine this scenario, where proving `Baz<T>: Send`
-/// first requires proving `Bar<T>: Send` (which is true:
-///
-/// - `Foo<T>: Send` :-
-/// - `Bar<T>: Send` :-
-/// - `Foo<T>: Send` -- cycle, but ok
-/// - `Baz<T>: Send`
-/// - `Bar<T>: Send` -- would be nice for this to be a cache hit!
-/// - `*const T: Send` -- but what if we later encounter an error?
-///
-/// The *provisional evaluation cache* resolves this issue. It stores
-/// cache results that we've proven but which were involved in a cycle
-/// in some way. We track the minimal stack depth (i.e., the
-/// farthest from the top of the stack) that we are dependent on.
-/// The idea is that the cache results within are all valid -- so long as
-/// none of the nodes in between the current node and the node at that minimum
-/// depth result in an error (in which case the cached results are just thrown away).
-///
-/// During evaluation, we consult this provisional cache and rely on
-/// it. Accessing a cached value is considered equivalent to accessing
-/// a result at `reached_depth`, so it marks the *current* solution as
-/// provisional as well. If an error is encountered, we toss out any
-/// provisional results added from the subtree that encountered the
-/// error. When we pop the node at `reached_depth` from the stack, we
-/// can commit all the things that remain in the provisional cache.
-struct ProvisionalEvaluationCache<'tcx> {
- /// next "depth first number" to issue -- just a counter
- dfn: Cell<usize>,
-
- /// Stores the "coldest" depth (bottom of stack) reached by any of
- /// the evaluation entries. The idea here is that all things in the provisional
- /// cache are always dependent on *something* that is colder in the stack:
- /// therefore, if we add a new entry that is dependent on something *colder still*,
- /// we have to modify the depth for all entries at once.
- ///
- /// Example:
- ///
- /// Imagine we have a stack `A B C D E` (with `E` being the top of
- /// the stack). We cache something with depth 2, which means that
- /// it was dependent on C. Then we pop E but go on and process a
- /// new node F: A B C D F. Now F adds something to the cache with
- /// depth 1, meaning it is dependent on B. Our original cache
- /// entry is also dependent on B, because there is a path from E
- /// to C and then from C to F and from F to B.
- reached_depth: Cell<usize>,
-
- /// Map from cache key to the provisionally evaluated thing.
- /// The cache entries contain the result but also the DFN in which they
- /// were added. The DFN is used to clear out values on failure.
- ///
- /// Imagine we have a stack like:
- ///
- /// - `A B C` and we add a cache for the result of C (DFN 2)
- /// - Then we have a stack `A B D` where `D` has DFN 3
- /// - We try to solve D by evaluating E: `A B D E` (DFN 4)
- /// - `E` generates various cache entries which have cyclic dependencies on `B`
- /// - `A B D E F` and so forth
- /// - the DFN of `F` for example would be 5
- /// - then we determine that `E` is in error -- we will then clear
- /// all cache values whose DFN is >= 4 -- in this case, that
- /// means the cached value for `F`.
- map: RefCell<FxHashMap<ty::PolyTraitRef<'tcx>, ProvisionalEvaluation>>,
-}
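The DFN bookkeeping described above — each entry remembers the depth-first number of the node that created it, so a failure at DFN `d` can evict exactly the entries added at or after `d` — can be modeled in a few lines. This is a stripped-down sketch with string keys and boolean results, not the real cache:

```rust
use std::cell::{Cell, RefCell};
use std::collections::HashMap;

// Minimal model of the provisional cache: key -> (from_dfn, result).
struct ProvisionalCache {
    dfn: Cell<usize>,
    map: RefCell<HashMap<&'static str, (usize, bool)>>,
}

impl ProvisionalCache {
    fn new() -> Self {
        ProvisionalCache { dfn: Cell::new(0), map: RefCell::new(HashMap::new()) }
    }

    // Issue the next DFN in sequence (just a counter).
    fn next_dfn(&self) -> usize {
        let d = self.dfn.get();
        self.dfn.set(d + 1);
        d
    }

    fn insert(&self, from_dfn: usize, key: &'static str, result: bool) {
        self.map.borrow_mut().insert(key, (from_dfn, result));
    }

    // On failure of the node with DFN `dfn`, drop every entry created
    // at or after it: those entries assumed that node would succeed.
    fn on_failure(&self, dfn: usize) {
        self.map.borrow_mut().retain(|_, &mut (from, _)| from < dfn);
    }

    fn get(&self, key: &str) -> Option<bool> {
        self.map.borrow().get(key).map(|&(_, result)| result)
    }
}
```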
-
-/// A cache value for the provisional cache: contains the depth-first
-/// number (DFN) and result.
-#[derive(Copy, Clone, Debug)]
-struct ProvisionalEvaluation {
- from_dfn: usize,
- result: EvaluationResult,
-}
-
-impl<'tcx> Default for ProvisionalEvaluationCache<'tcx> {
- fn default() -> Self {
- Self {
- dfn: Cell::new(0),
- reached_depth: Cell::new(std::usize::MAX),
- map: Default::default(),
- }
- }
-}
-
-impl<'tcx> ProvisionalEvaluationCache<'tcx> {
- /// Get the next DFN in sequence (basically a counter).
- fn next_dfn(&self) -> usize {
- let result = self.dfn.get();
- self.dfn.set(result + 1);
- result
- }
-
- /// Check the provisional cache for any result for
- /// `fresh_trait_ref`. If there is a hit, then you must consider
- /// it an access to the stack slots at depth
- /// `self.current_reached_depth()` and above.
- fn get_provisional(&self, fresh_trait_ref: ty::PolyTraitRef<'tcx>) -> Option<EvaluationResult> {
- debug!(
- "get_provisional(fresh_trait_ref={:?}) = {:#?} with reached-depth {}",
- fresh_trait_ref,
- self.map.borrow().get(&fresh_trait_ref),
- self.reached_depth.get(),
- );
- Some(self.map.borrow().get(&fresh_trait_ref)?.result)
- }
-
- /// Current value of the `reached_depth` counter -- all the
- /// provisional cache entries are dependent on the item at this
- /// depth.
- fn current_reached_depth(&self) -> usize {
- self.reached_depth.get()
- }
-
- /// Insert a provisional result into the cache. The result came
- /// from the node with the given DFN. It accessed a minimum depth
- /// of `reached_depth` to compute. It evaluated `fresh_trait_ref`
- /// and resulted in `result`.
- fn insert_provisional(
- &self,
- from_dfn: usize,
- reached_depth: usize,
- fresh_trait_ref: ty::PolyTraitRef<'tcx>,
- result: EvaluationResult,
- ) {
- debug!(
- "insert_provisional(from_dfn={}, reached_depth={}, fresh_trait_ref={:?}, result={:?})",
- from_dfn, reached_depth, fresh_trait_ref, result,
- );
- let r_d = self.reached_depth.get();
- self.reached_depth.set(r_d.min(reached_depth));
-
- debug!("insert_provisional: reached_depth={:?}", self.reached_depth.get());
-
- self.map.borrow_mut().insert(fresh_trait_ref, ProvisionalEvaluation { from_dfn, result });
- }
-
- /// Invoked when the node with dfn `dfn` does not get a successful
- /// result. This will clear out any provisional cache entries
- /// that were added since `dfn` was created. This is because the
- /// provisional entries are things which must assume that the
- /// things on the stack at the time of their creation succeeded --
- /// since the failing node is presently at the top of the stack,
- /// these provisional entries must either depend on it or some
- /// ancestor of it.
- fn on_failure(&self, dfn: usize) {
- debug!("on_failure(dfn={:?})", dfn,);
- self.map.borrow_mut().retain(|key, eval| {
- if eval.from_dfn >= dfn {
- debug!("on_failure: removing {:?}", key);
- false
- } else {
- true
- }
- });
- }
-
- /// Invoked when the node at depth `depth` completed without
- /// depending on anything higher in the stack (if that completion
- /// was a failure, then `on_failure` should have been invoked
- /// already). The callback `op` will be invoked for each
- /// provisional entry that we can now confirm.
- fn on_completion(
- &self,
- depth: usize,
- mut op: impl FnMut(ty::PolyTraitRef<'tcx>, EvaluationResult),
- ) {
- debug!("on_completion(depth={}, reached_depth={})", depth, self.reached_depth.get(),);
-
- if self.reached_depth.get() < depth {
- debug!("on_completion: did not yet reach depth to complete");
- return;
- }
-
- for (fresh_trait_ref, eval) in self.map.borrow_mut().drain() {
- debug!("on_completion: fresh_trait_ref={:?} eval={:?}", fresh_trait_ref, eval,);
-
- op(fresh_trait_ref, eval.result);
- }
-
- self.reached_depth.set(std::usize::MAX);
- }
-}
-
-#[derive(Copy, Clone)]
-struct TraitObligationStackList<'o, 'tcx> {
- cache: &'o ProvisionalEvaluationCache<'tcx>,
- head: Option<&'o TraitObligationStack<'o, 'tcx>>,
-}
-
-impl<'o, 'tcx> TraitObligationStackList<'o, 'tcx> {
- fn empty(cache: &'o ProvisionalEvaluationCache<'tcx>) -> TraitObligationStackList<'o, 'tcx> {
- TraitObligationStackList { cache, head: None }
- }
-
- fn with(r: &'o TraitObligationStack<'o, 'tcx>) -> TraitObligationStackList<'o, 'tcx> {
- TraitObligationStackList { cache: r.cache(), head: Some(r) }
- }
-
- fn head(&self) -> Option<&'o TraitObligationStack<'o, 'tcx>> {
- self.head
- }
-
- fn depth(&self) -> usize {
- if let Some(head) = self.head { head.depth } else { 0 }
- }
-}
-
-impl<'o, 'tcx> Iterator for TraitObligationStackList<'o, 'tcx> {
- type Item = &'o TraitObligationStack<'o, 'tcx>;
-
- fn next(&mut self) -> Option<&'o TraitObligationStack<'o, 'tcx>> {
- match self.head {
- Some(o) => {
- *self = o.previous;
- Some(o)
- }
- None => None,
- }
- }
-}
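The `Iterator` impl above walks parent pointers: each step yields the current head and moves to its predecessor. A minimal standalone analogue of this parent-pointer stack (with our own `Frame`/`FrameIter` names) looks like:

```rust
// Each frame borrows its predecessor; iteration walks from the top of
// the stack toward the root, like TraitObligationStackList.
struct Frame<'a> {
    value: u32,
    previous: Option<&'a Frame<'a>>,
}

struct FrameIter<'a> {
    head: Option<&'a Frame<'a>>,
}

impl<'a> Iterator for FrameIter<'a> {
    type Item = &'a Frame<'a>;

    fn next(&mut self) -> Option<&'a Frame<'a>> {
        // Yield the current head, then step to its predecessor.
        let frame = self.head?;
        self.head = frame.previous;
        Some(frame)
    }
}
```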
-
-impl<'o, 'tcx> fmt::Debug for TraitObligationStack<'o, 'tcx> {
- fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
- write!(f, "TraitObligationStack({:?})", self.obligation)
- }
-}
+++ /dev/null
-//! Logic and data structures related to impl specialization, explained in
-//! greater detail below.
-//!
-//! At the moment, this implementation supports only the simple "chain" rule:
-//! If any two impls overlap, one must be a strict subset of the other.
-//!
-//! See the [rustc guide] for a bit more detail on how specialization
-//! fits together with the rest of the trait machinery.
-//!
-//! [rustc guide]: https://rust-lang.github.io/rustc-guide/traits/specialization.html
-
-pub mod specialization_graph;
-use specialization_graph::GraphExt;
-
-use crate::infer::{InferCtxt, InferOk, TyCtxtInferExt};
-use crate::traits::select::IntercrateAmbiguityCause;
-use crate::traits::{self, coherence, FutureCompatOverlapErrorKind, ObligationCause, TraitEngine};
-use rustc::lint::LintDiagnosticBuilder;
-use rustc::ty::subst::{InternalSubsts, Subst, SubstsRef};
-use rustc::ty::{self, TyCtxt, TypeFoldable};
-use rustc_data_structures::fx::FxHashSet;
-use rustc_errors::struct_span_err;
-use rustc_hir::def_id::DefId;
-use rustc_session::lint::builtin::COHERENCE_LEAK_CHECK;
-use rustc_session::lint::builtin::ORDER_DEPENDENT_TRAIT_OBJECTS;
-use rustc_span::DUMMY_SP;
-
-use super::util::impl_trait_ref_and_oblig;
-use super::{FulfillmentContext, SelectionContext};
-
-/// Information pertinent to an overlapping impl error.
-#[derive(Debug)]
-pub struct OverlapError {
- pub with_impl: DefId,
- pub trait_desc: String,
- pub self_desc: Option<String>,
- pub intercrate_ambiguity_causes: Vec<IntercrateAmbiguityCause>,
- pub involves_placeholder: bool,
-}
-
-/// Given a subst for the requested impl, translate it to a subst
-/// appropriate for the actual item definition (whether it be in that impl,
-/// a parent impl, or the trait).
-///
-/// When we have selected one impl, but are actually using item definitions from
-/// a parent impl providing a default, we need a way to translate between the
-/// type parameters of the two impls. Here the `source_impl` is the one we've
-/// selected, and `source_substs` is a substitution of its generics.
-/// And `target_node` is the impl/trait we're actually going to get the
-/// definition from. The resulting substitution will map from `target_node`'s
-/// generics to `source_impl`'s generics as instantiated by `source_subst`.
-///
-/// For example, consider the following scenario:
-///
-/// ```rust
-/// trait Foo { ... }
-/// impl<T, U> Foo for (T, U) { ... } // target impl
-/// impl<V> Foo for (V, V) { ... } // source impl
-/// ```
-///
-/// Suppose we have selected "source impl" with `V` instantiated with `u32`.
-/// This function will produce a substitution with `T` and `U` both mapping to `u32`.
-///
-/// where-clauses add some trickiness here, because they can be used to "define"
-/// an argument indirectly:
-///
-/// ```rust
-/// impl<'a, I, T: 'a> Iterator for Cloned<I>
-/// where I: Iterator<Item = &'a T>, T: Clone
-/// ```
-///
-/// In a case like this, the substitution for `T` is determined indirectly,
-/// through associated type projection. We deal with such cases by using
-/// *fulfillment* to relate the two impls, requiring that all projections are
-/// resolved.
-pub fn translate_substs<'a, 'tcx>(
- infcx: &InferCtxt<'a, 'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- source_impl: DefId,
- source_substs: SubstsRef<'tcx>,
- target_node: specialization_graph::Node,
-) -> SubstsRef<'tcx> {
- debug!(
- "translate_substs({:?}, {:?}, {:?}, {:?})",
- param_env, source_impl, source_substs, target_node
- );
- let source_trait_ref =
- infcx.tcx.impl_trait_ref(source_impl).unwrap().subst(infcx.tcx, &source_substs);
-
- // translate the Self and Param parts of the substitution, since those
- // vary across impls
- let target_substs = match target_node {
- specialization_graph::Node::Impl(target_impl) => {
- // no need to translate if we're targeting the impl we started with
- if source_impl == target_impl {
- return source_substs;
- }
-
- fulfill_implication(infcx, param_env, source_trait_ref, target_impl).unwrap_or_else(
- |_| {
- bug!(
- "When translating substitutions for specialization, the expected \
- specialization failed to hold"
- )
- },
- )
- }
- specialization_graph::Node::Trait(..) => source_trait_ref.substs,
- };
-
- // directly inherit the method generics, since those do not vary across impls
- source_substs.rebase_onto(infcx.tcx, source_impl, target_substs)
-}
-
-/// Given a selected impl described by `impl_data`, returns the
- /// definition and substitutions for the method with the name `name`,
- /// the kind `kind`, and trait method substitutions `substs`, in
-/// that impl, a less specialized impl, or the trait default,
-/// whichever applies.
-pub fn find_associated_item<'tcx>(
- tcx: TyCtxt<'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- item: &ty::AssocItem,
- substs: SubstsRef<'tcx>,
- impl_data: &super::VtableImplData<'tcx, ()>,
-) -> (DefId, SubstsRef<'tcx>) {
- debug!("find_associated_item({:?}, {:?}, {:?}, {:?})", param_env, item, substs, impl_data);
- assert!(!substs.needs_infer());
-
- let trait_def_id = tcx.trait_id_of_impl(impl_data.impl_def_id).unwrap();
- let trait_def = tcx.trait_def(trait_def_id);
-
- let ancestors = trait_def.ancestors(tcx, impl_data.impl_def_id);
- match ancestors.leaf_def(tcx, item.ident, item.kind) {
- Some(node_item) => {
- let substs = tcx.infer_ctxt().enter(|infcx| {
- let param_env = param_env.with_reveal_all();
- let substs = substs.rebase_onto(tcx, trait_def_id, impl_data.substs);
- let substs = translate_substs(
- &infcx,
- param_env,
- impl_data.impl_def_id,
- substs,
- node_item.node,
- );
- infcx.tcx.erase_regions(&substs)
- });
- (node_item.item.def_id, substs)
- }
- None => bug!("{:?} not found in {:?}", item, impl_data.impl_def_id),
- }
-}
-
-/// Is `impl1` a specialization of `impl2`?
-///
-/// Specialization is determined by the sets of types to which the impls apply;
-/// `impl1` specializes `impl2` if it applies to a subset of the types `impl2` applies
-/// to.
-pub(super) fn specializes(tcx: TyCtxt<'_>, (impl1_def_id, impl2_def_id): (DefId, DefId)) -> bool {
- debug!("specializes({:?}, {:?})", impl1_def_id, impl2_def_id);
-
- // The feature gate should prevent introducing new specializations, but not
- // taking advantage of upstream ones.
- if !tcx.features().specialization && (impl1_def_id.is_local() || impl2_def_id.is_local()) {
- return false;
- }
-
- // We determine whether there's a subset relationship by:
- //
- // - skolemizing impl1,
- // - assuming the where clauses for impl1,
- // - instantiating impl2 with fresh inference variables,
- // - unifying,
- // - attempting to prove the where clauses for impl2
- //
- // The last three steps are encapsulated in `fulfill_implication`.
- //
- // See RFC 1210 for more details and justification.
-
- // Currently we do not allow e.g., a negative impl to specialize a positive one
- if tcx.impl_polarity(impl1_def_id) != tcx.impl_polarity(impl2_def_id) {
- return false;
- }
-
- // create a parameter environment corresponding to a (placeholder) instantiation of impl1
- let penv = tcx.param_env(impl1_def_id);
- let impl1_trait_ref = tcx.impl_trait_ref(impl1_def_id).unwrap();
-
- // Create an infcx, taking the predicates of impl1 as assumptions:
- tcx.infer_ctxt().enter(|infcx| {
- // Normalize the trait reference. The WF rules ought to ensure
- // that this always succeeds.
- let impl1_trait_ref = match traits::fully_normalize(
- &infcx,
- FulfillmentContext::new(),
- ObligationCause::dummy(),
- penv,
- &impl1_trait_ref,
- ) {
- Ok(impl1_trait_ref) => impl1_trait_ref,
- Err(err) => {
- bug!("failed to fully normalize {:?}: {:?}", impl1_trait_ref, err);
- }
- };
-
- // Attempt to prove that impl2 applies, given all of the above.
- fulfill_implication(&infcx, penv, impl1_trait_ref, impl2_def_id).is_ok()
- })
-}
-
-/// Attempt to fulfill all obligations of `target_impl` after unification with
-/// `source_trait_ref`. If successful, returns a substitution for *all* the
-/// generics of `target_impl`, including both those needed to unify with
-/// `source_trait_ref` and those whose identity is determined via a where
-/// clause in the impl.
-fn fulfill_implication<'a, 'tcx>(
- infcx: &InferCtxt<'a, 'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- source_trait_ref: ty::TraitRef<'tcx>,
- target_impl: DefId,
-) -> Result<SubstsRef<'tcx>, ()> {
- debug!(
- "fulfill_implication({:?}, trait_ref={:?} |- {:?} applies)",
- param_env, source_trait_ref, target_impl
- );
-
- let selcx = &mut SelectionContext::new(&infcx);
- let target_substs = infcx.fresh_substs_for_item(DUMMY_SP, target_impl);
- let (target_trait_ref, mut obligations) =
- impl_trait_ref_and_oblig(selcx, param_env, target_impl, target_substs);
- debug!(
- "fulfill_implication: target_trait_ref={:?}, obligations={:?}",
- target_trait_ref, obligations
- );
-
- // do the impls unify? If not, no specialization.
- match infcx.at(&ObligationCause::dummy(), param_env).eq(source_trait_ref, target_trait_ref) {
- Ok(InferOk { obligations: o, .. }) => {
- obligations.extend(o);
- }
- Err(_) => {
- debug!(
- "fulfill_implication: {:?} does not unify with {:?}",
- source_trait_ref, target_trait_ref
- );
- return Err(());
- }
- }
-
- // attempt to prove all of the predicates for impl2 given those for impl1
- // (which are packed up in penv)
-
- infcx.save_and_restore_in_snapshot_flag(|infcx| {
- // If we came from `translate_substs`, we already know that the
- // predicates for our impl hold (after all, we know that a more
- // specialized impl holds, so our impl must hold too), and
- // we only want to process the projections to determine the
- // types in our substs using RFC 447, so we can safely
- // ignore region obligations, which allows us to avoid threading
- // a node-id to assign them with.
- //
- // If we came from specialization graph construction, then
- // we already make a mockery out of the region system, so
- // why not ignore them a bit earlier?
- let mut fulfill_cx = FulfillmentContext::new_ignoring_regions();
- for oblig in obligations.into_iter() {
- fulfill_cx.register_predicate_obligation(&infcx, oblig);
- }
- match fulfill_cx.select_all_or_error(infcx) {
- Err(errors) => {
- // no dice!
- debug!(
- "fulfill_implication: for impls on {:?} and {:?}, \
- could not fulfill: {:?} given {:?}",
- source_trait_ref, target_trait_ref, errors, param_env.caller_bounds
- );
- Err(())
- }
-
- Ok(()) => {
- debug!(
- "fulfill_implication: an impl for {:?} specializes {:?}",
- source_trait_ref, target_trait_ref
- );
-
- // Now resolve the *substitution* we built for the target earlier, replacing
- // the inference variables inside with whatever we got from fulfillment.
- Ok(infcx.resolve_vars_if_possible(&target_substs))
- }
- }
- })
-}
-
-// Query provider for `specialization_graph_of`.
-pub(super) fn specialization_graph_provider(
- tcx: TyCtxt<'_>,
- trait_id: DefId,
-) -> &specialization_graph::Graph {
- let mut sg = specialization_graph::Graph::new();
-
- let mut trait_impls = tcx.all_impls(trait_id);
-
- // The coherence checking implementation seems to rely on impls being
- // iterated over (roughly) in definition order, so we are sorting by
- // negated `CrateNum` (so remote definitions are visited first) and then
- // by a flattened version of the `DefIndex`.
- trait_impls
- .sort_unstable_by_key(|def_id| (-(def_id.krate.as_u32() as i64), def_id.index.index()));
-
- for impl_def_id in trait_impls {
- if impl_def_id.is_local() {
- // This is where impl overlap checking happens:
- let insert_result = sg.insert(tcx, impl_def_id);
- // Report error if there was one.
- let (overlap, used_to_be_allowed) = match insert_result {
- Err(overlap) => (Some(overlap), None),
- Ok(Some(overlap)) => (Some(overlap.error), Some(overlap.kind)),
- Ok(None) => (None, None),
- };
-
- if let Some(overlap) = overlap {
- let impl_span =
- tcx.sess.source_map().def_span(tcx.span_of_impl(impl_def_id).unwrap());
-
- // Work to be done after we've built the DiagnosticBuilder. We have to define it
- // now because the struct_lint methods don't return back the DiagnosticBuilder
- // that's passed in.
- let decorate = |err: LintDiagnosticBuilder<'_>| {
- let msg = format!(
- "conflicting implementations of trait `{}`{}:{}",
- overlap.trait_desc,
- overlap
- .self_desc
- .clone()
- .map_or(String::new(), |ty| { format!(" for type `{}`", ty) }),
- match used_to_be_allowed {
- Some(FutureCompatOverlapErrorKind::Issue33140) => " (E0119)",
- _ => "",
- }
- );
- let mut err = err.build(&msg);
- match tcx.span_of_impl(overlap.with_impl) {
- Ok(span) => {
- err.span_label(
- tcx.sess.source_map().def_span(span),
- "first implementation here".to_string(),
- );
-
- err.span_label(
- impl_span,
- format!(
- "conflicting implementation{}",
- overlap
- .self_desc
- .map_or(String::new(), |ty| format!(" for `{}`", ty))
- ),
- );
- }
- Err(cname) => {
- let msg = match to_pretty_impl_header(tcx, overlap.with_impl) {
- Some(s) => format!(
- "conflicting implementation in crate `{}`:\n- {}",
- cname, s
- ),
- None => format!("conflicting implementation in crate `{}`", cname),
- };
- err.note(&msg);
- }
- }
-
- for cause in &overlap.intercrate_ambiguity_causes {
- cause.add_intercrate_ambiguity_hint(&mut err);
- }
-
- if overlap.involves_placeholder {
- coherence::add_placeholder_note(&mut err);
- }
- err.emit()
- };
-
- match used_to_be_allowed {
- None => {
- let err = struct_span_err!(tcx.sess, impl_span, E0119, "");
- decorate(LintDiagnosticBuilder::new(err));
- }
- Some(kind) => {
- let lint = match kind {
- FutureCompatOverlapErrorKind::Issue33140 => {
- ORDER_DEPENDENT_TRAIT_OBJECTS
- }
- FutureCompatOverlapErrorKind::LeakCheck => COHERENCE_LEAK_CHECK,
- };
- tcx.struct_span_lint_hir(
- lint,
- tcx.hir().as_local_hir_id(impl_def_id).unwrap(),
- impl_span,
- decorate,
- )
- }
- };
- }
- } else {
- let parent = tcx.impl_parent(impl_def_id).unwrap_or(trait_id);
- sg.record_impl_from_cstore(tcx, parent, impl_def_id)
- }
- }
-
- tcx.arena.alloc(sg)
-}
-
-/// Recovers the "impl X for Y" signature from `impl_def_id` and returns it as a
-/// string.
-fn to_pretty_impl_header(tcx: TyCtxt<'_>, impl_def_id: DefId) -> Option<String> {
- use std::fmt::Write;
-
- let trait_ref = if let Some(tr) = tcx.impl_trait_ref(impl_def_id) {
- tr
- } else {
- return None;
- };
-
- let mut w = "impl".to_owned();
-
- let substs = InternalSubsts::identity_for_item(tcx, impl_def_id);
-
- // FIXME: Currently only handles ?Sized.
- // Needs to support ?Move and ?DynSized when they are implemented.
- let mut types_without_default_bounds = FxHashSet::default();
- let sized_trait = tcx.lang_items().sized_trait();
-
- if !substs.is_noop() {
- types_without_default_bounds.extend(substs.types());
- w.push('<');
- w.push_str(
- &substs
- .iter()
- .map(|k| k.to_string())
- .filter(|k| k != "'_")
- .collect::<Vec<_>>()
- .join(", "),
- );
- w.push('>');
- }
-
- write!(w, " {} for {}", trait_ref.print_only_trait_path(), tcx.type_of(impl_def_id)).unwrap();
-
- // The predicates will contain default bounds like `T: Sized`. We need to
- // remove these bounds, and add `T: ?Sized` to any untouched type parameters.
- let predicates = tcx.predicates_of(impl_def_id).predicates;
- let mut pretty_predicates =
- Vec::with_capacity(predicates.len() + types_without_default_bounds.len());
-
- for (p, _) in predicates {
- if let Some(poly_trait_ref) = p.to_opt_poly_trait_ref() {
- if Some(poly_trait_ref.def_id()) == sized_trait {
- types_without_default_bounds.remove(poly_trait_ref.self_ty());
- continue;
- }
- }
- pretty_predicates.push(p.to_string());
- }
-
- pretty_predicates
- .extend(types_without_default_bounds.iter().map(|ty| format!("{}: ?Sized", ty)));
-
- if !pretty_predicates.is_empty() {
- write!(w, "\n where {}", pretty_predicates.join(", ")).unwrap();
- }
-
- w.push(';');
- Some(w)
-}
+++ /dev/null
-use super::OverlapError;
-
-use crate::traits;
-use rustc::ty::fast_reject::{self, SimplifiedType};
-use rustc::ty::{self, TyCtxt, TypeFoldable};
-use rustc_hir::def_id::DefId;
-
-pub use rustc::traits::specialization_graph::*;
-
-#[derive(Copy, Clone, Debug)]
-pub enum FutureCompatOverlapErrorKind {
- Issue33140,
- LeakCheck,
-}
-
-#[derive(Debug)]
-pub struct FutureCompatOverlapError {
- pub error: OverlapError,
- pub kind: FutureCompatOverlapErrorKind,
-}
-
-/// The result of attempting to insert an impl into a group of children.
-enum Inserted {
- /// The impl was inserted as a new child in this group of children.
- BecameNewSibling(Option<FutureCompatOverlapError>),
-
- /// The impl should replace existing impls [X1, ..], because the impl specializes X1, X2, etc.
- ReplaceChildren(Vec<DefId>),
-
- /// The impl is a specialization of an existing child.
- ShouldRecurseOn(DefId),
-}
-
-trait ChildrenExt {
- fn insert_blindly(&mut self, tcx: TyCtxt<'tcx>, impl_def_id: DefId);
- fn remove_existing(&mut self, tcx: TyCtxt<'tcx>, impl_def_id: DefId);
-
- fn insert(
- &mut self,
- tcx: TyCtxt<'tcx>,
- impl_def_id: DefId,
- simplified_self: Option<SimplifiedType>,
- ) -> Result<Inserted, OverlapError>;
-}
-
-impl ChildrenExt for Children {
- /// Insert an impl into this set of children without comparing to any existing impls.
- fn insert_blindly(&mut self, tcx: TyCtxt<'tcx>, impl_def_id: DefId) {
- let trait_ref = tcx.impl_trait_ref(impl_def_id).unwrap();
- if let Some(st) = fast_reject::simplify_type(tcx, trait_ref.self_ty(), false) {
- debug!("insert_blindly: impl_def_id={:?} st={:?}", impl_def_id, st);
- self.nonblanket_impls.entry(st).or_default().push(impl_def_id)
- } else {
- debug!("insert_blindly: impl_def_id={:?} st=None", impl_def_id);
- self.blanket_impls.push(impl_def_id)
- }
- }
-
- /// Removes an impl from this set of children. Used when replacing
- /// an impl with a parent. The impl must be present in the list of
- /// children already.
- fn remove_existing(&mut self, tcx: TyCtxt<'tcx>, impl_def_id: DefId) {
- let trait_ref = tcx.impl_trait_ref(impl_def_id).unwrap();
- let vec: &mut Vec<DefId>;
- if let Some(st) = fast_reject::simplify_type(tcx, trait_ref.self_ty(), false) {
- debug!("remove_existing: impl_def_id={:?} st={:?}", impl_def_id, st);
- vec = self.nonblanket_impls.get_mut(&st).unwrap();
- } else {
- debug!("remove_existing: impl_def_id={:?} st=None", impl_def_id);
- vec = &mut self.blanket_impls;
- }
-
- let index = vec.iter().position(|d| *d == impl_def_id).unwrap();
- vec.remove(index);
- }
-
- /// Attempt to insert an impl into this set of children, while comparing for
- /// specialization relationships.
- fn insert(
- &mut self,
- tcx: TyCtxt<'tcx>,
- impl_def_id: DefId,
- simplified_self: Option<SimplifiedType>,
- ) -> Result<Inserted, OverlapError> {
- let mut last_lint = None;
- let mut replace_children = Vec::new();
-
- debug!("insert(impl_def_id={:?}, simplified_self={:?})", impl_def_id, simplified_self,);
-
- let possible_siblings = match simplified_self {
- Some(st) => PotentialSiblings::Filtered(filtered_children(self, st)),
- None => PotentialSiblings::Unfiltered(iter_children(self)),
- };
-
- for possible_sibling in possible_siblings {
- debug!(
- "insert: impl_def_id={:?}, simplified_self={:?}, possible_sibling={:?}",
- impl_def_id, simplified_self, possible_sibling,
- );
-
- let create_overlap_error = |overlap: traits::coherence::OverlapResult<'_>| {
- let trait_ref = overlap.impl_header.trait_ref.unwrap();
- let self_ty = trait_ref.self_ty();
-
- OverlapError {
- with_impl: possible_sibling,
- trait_desc: trait_ref.print_only_trait_path().to_string(),
- // Only report the `Self` type if it has at least
- // some outer concrete shell; otherwise, it's
- // not adding much information.
- self_desc: if self_ty.has_concrete_skeleton() {
- Some(self_ty.to_string())
- } else {
- None
- },
- intercrate_ambiguity_causes: overlap.intercrate_ambiguity_causes,
- involves_placeholder: overlap.involves_placeholder,
- }
- };
-
- let report_overlap_error = |overlap: traits::coherence::OverlapResult<'_>,
- last_lint: &mut _| {
- // Found overlap, but no specialization; error out or report future-compat warning.
-
- // Do we *still* get overlap if we disable the future-incompatible modes?
- let should_err = traits::overlapping_impls(
- tcx,
- possible_sibling,
- impl_def_id,
- traits::SkipLeakCheck::default(),
- |_| true,
- || false,
- );
-
- let error = create_overlap_error(overlap);
-
- if should_err {
- Err(error)
- } else {
- *last_lint = Some(FutureCompatOverlapError {
- error,
- kind: FutureCompatOverlapErrorKind::LeakCheck,
- });
-
- Ok((false, false))
- }
- };
-
- let last_lint_mut = &mut last_lint;
- let (le, ge) = traits::overlapping_impls(
- tcx,
- possible_sibling,
- impl_def_id,
- traits::SkipLeakCheck::Yes,
- |overlap| {
- if let Some(overlap_kind) =
- tcx.impls_are_allowed_to_overlap(impl_def_id, possible_sibling)
- {
- match overlap_kind {
- ty::ImplOverlapKind::Permitted { marker: _ } => {}
- ty::ImplOverlapKind::Issue33140 => {
- *last_lint_mut = Some(FutureCompatOverlapError {
- error: create_overlap_error(overlap),
- kind: FutureCompatOverlapErrorKind::Issue33140,
- });
- }
- }
-
- return Ok((false, false));
- }
-
- let le = tcx.specializes((impl_def_id, possible_sibling));
- let ge = tcx.specializes((possible_sibling, impl_def_id));
-
- if le == ge {
- report_overlap_error(overlap, last_lint_mut)
- } else {
- Ok((le, ge))
- }
- },
- || Ok((false, false)),
- )?;
-
- if le && !ge {
- debug!(
- "descending as child of TraitRef {:?}",
- tcx.impl_trait_ref(possible_sibling).unwrap()
- );
-
- // The impl specializes `possible_sibling`.
- return Ok(Inserted::ShouldRecurseOn(possible_sibling));
- } else if ge && !le {
- debug!(
- "placing as parent of TraitRef {:?}",
- tcx.impl_trait_ref(possible_sibling).unwrap()
- );
-
- replace_children.push(possible_sibling);
- } else {
- // Either there's no overlap, or the overlap was already reported by
- // `report_overlap_error`.
- }
- }
-
- if !replace_children.is_empty() {
- return Ok(Inserted::ReplaceChildren(replace_children));
- }
-
- // No overlap with any potential siblings, so add as a new sibling.
- debug!("placing as new sibling");
- self.insert_blindly(tcx, impl_def_id);
- Ok(Inserted::BecameNewSibling(last_lint))
- }
-}
-
-fn iter_children(children: &mut Children) -> impl Iterator<Item = DefId> + '_ {
- let nonblanket = children.nonblanket_impls.iter_mut().flat_map(|(_, v)| v.iter());
- children.blanket_impls.iter().chain(nonblanket).cloned()
-}
-
-fn filtered_children(
- children: &mut Children,
- st: SimplifiedType,
-) -> impl Iterator<Item = DefId> + '_ {
- let nonblanket = children.nonblanket_impls.entry(st).or_default().iter();
- children.blanket_impls.iter().chain(nonblanket).cloned()
-}
-
-// A custom iterator used by Children::insert
-enum PotentialSiblings<I, J>
-where
- I: Iterator<Item = DefId>,
- J: Iterator<Item = DefId>,
-{
- Unfiltered(I),
- Filtered(J),
-}
-
-impl<I, J> Iterator for PotentialSiblings<I, J>
-where
- I: Iterator<Item = DefId>,
- J: Iterator<Item = DefId>,
-{
- type Item = DefId;
-
- fn next(&mut self) -> Option<Self::Item> {
- match *self {
- PotentialSiblings::Unfiltered(ref mut iter) => iter.next(),
- PotentialSiblings::Filtered(ref mut iter) => iter.next(),
- }
- }
-}
-
-pub trait GraphExt {
- /// Insert a local impl into the specialization graph. If an existing impl
- /// conflicts with it (has overlap, but neither specializes the other),
- /// information about the area of overlap is returned in the `Err`.
- fn insert(
- &mut self,
- tcx: TyCtxt<'tcx>,
- impl_def_id: DefId,
- ) -> Result<Option<FutureCompatOverlapError>, OverlapError>;
-
- /// Insert cached metadata mapping from a child impl back to its parent.
- fn record_impl_from_cstore(&mut self, tcx: TyCtxt<'tcx>, parent: DefId, child: DefId);
-}
-
-impl GraphExt for Graph {
- /// Insert a local impl into the specialization graph. If an existing impl
- /// conflicts with it (has overlap, but neither specializes the other),
- /// information about the area of overlap is returned in the `Err`.
- fn insert(
- &mut self,
- tcx: TyCtxt<'tcx>,
- impl_def_id: DefId,
- ) -> Result<Option<FutureCompatOverlapError>, OverlapError> {
- assert!(impl_def_id.is_local());
-
- let trait_ref = tcx.impl_trait_ref(impl_def_id).unwrap();
- let trait_def_id = trait_ref.def_id;
-
- debug!(
- "insert({:?}): inserting TraitRef {:?} into specialization graph",
- impl_def_id, trait_ref
- );
-
- // If the reference itself contains an earlier error (e.g., due to a
- // resolution failure), then we just insert the impl at the top level of
- // the graph and claim that there's no overlap (in order to suppress
- // bogus errors).
- if trait_ref.references_error() {
- debug!(
- "insert: inserting dummy node for erroneous TraitRef {:?}, \
- impl_def_id={:?}, trait_def_id={:?}",
- trait_ref, impl_def_id, trait_def_id
- );
-
- self.parent.insert(impl_def_id, trait_def_id);
- self.children.entry(trait_def_id).or_default().insert_blindly(tcx, impl_def_id);
- return Ok(None);
- }
-
- let mut parent = trait_def_id;
- let mut last_lint = None;
- let simplified = fast_reject::simplify_type(tcx, trait_ref.self_ty(), false);
-
- // Descend the specialization tree, where `parent` is the current parent node.
- loop {
- use self::Inserted::*;
-
- let insert_result =
- self.children.entry(parent).or_default().insert(tcx, impl_def_id, simplified)?;
-
- match insert_result {
- BecameNewSibling(opt_lint) => {
- last_lint = opt_lint;
- break;
- }
- ReplaceChildren(grand_children_to_be) => {
- // We currently have
- //
- // P
- // |
- // G
- //
- // and we are inserting the impl N. We want to make it:
- //
- // P
- // |
- // N
- // |
- // G
-
- // Adjust P's list of children: remove G and then add N.
- {
- let siblings = self.children.get_mut(&parent).unwrap();
- for &grand_child_to_be in &grand_children_to_be {
- siblings.remove_existing(tcx, grand_child_to_be);
- }
- siblings.insert_blindly(tcx, impl_def_id);
- }
-
- // Set G's parent to N and N's parent to P.
- for &grand_child_to_be in &grand_children_to_be {
- self.parent.insert(grand_child_to_be, impl_def_id);
- }
- self.parent.insert(impl_def_id, parent);
-
- // Add G as N's child.
- for &grand_child_to_be in &grand_children_to_be {
- self.children
- .entry(impl_def_id)
- .or_default()
- .insert_blindly(tcx, grand_child_to_be);
- }
- break;
- }
- ShouldRecurseOn(new_parent) => {
- parent = new_parent;
- }
- }
- }
-
- self.parent.insert(impl_def_id, parent);
- Ok(last_lint)
- }
-
- /// Insert cached metadata mapping from a child impl back to its parent.
- fn record_impl_from_cstore(&mut self, tcx: TyCtxt<'tcx>, parent: DefId, child: DefId) {
- if self.parent.insert(child, parent).is_some() {
- bug!(
- "When recording an impl from the crate store, information about its parent \
- was already present."
- );
- }
-
- self.children.entry(parent).or_default().insert_blindly(tcx, child);
- }
-}
+++ /dev/null
-use crate::infer::{InferCtxt, TyCtxtInferExt};
-use crate::traits::ObligationCause;
-use crate::traits::{self, ConstPatternStructural, TraitEngine};
-
-use rustc::ty::{self, AdtDef, Ty, TyCtxt, TypeFoldable, TypeVisitor};
-use rustc_data_structures::fx::FxHashSet;
-use rustc_hir as hir;
-use rustc_span::Span;
-
-#[derive(Debug)]
-pub enum NonStructuralMatchTy<'tcx> {
- Adt(&'tcx AdtDef),
- Param,
-}
-
-/// This method traverses the structure of `ty`, trying to find an
-/// instance of an ADT (i.e. struct or enum) that was declared without
-/// the `#[structural_match]` attribute, or a generic type parameter
-/// (which cannot be determined to be `structural_match`).
-///
-/// The "structure of a type" includes all components that would be
-/// considered when doing a pattern match on a constant of that
-/// type.
-///
-/// * This means this method descends into fields of structs/enums,
-/// and also descends into the inner type `T` of `&T` and `&mut T`
-///
-/// * The traversal doesn't dereference unsafe pointers (`*const T`,
-/// `*mut T`), and it does not visit the type arguments of an
-/// instantiated generic like `PhantomData<T>`.
-///
-/// The reason we do this search is that Rust currently requires all ADTs
-/// reachable from a constant's type to be annotated with
-/// `#[structural_match]`, an attribute which essentially says that
-/// the implementation of `PartialEq::eq` behaves *equivalently* to a
-/// comparison against the unfolded structure.
-///
-/// For more background on why Rust has this requirement, and issues
-/// that arose when the requirement was not enforced completely, see
-/// Rust RFC 1445, rust-lang/rust#61188, and rust-lang/rust#62307.
-pub fn search_for_structural_match_violation<'tcx>(
- id: hir::HirId,
- span: Span,
- tcx: TyCtxt<'tcx>,
- ty: Ty<'tcx>,
-) -> Option<NonStructuralMatchTy<'tcx>> {
- // FIXME: we should instead pass in an `infcx` from the outside.
- tcx.infer_ctxt().enter(|infcx| {
- let mut search = Search { id, span, infcx, found: None, seen: FxHashSet::default() };
- ty.visit_with(&mut search);
- search.found
- })
-}
-
-/// This method returns true if and only if `adt_ty` itself has been marked as
-/// eligible for structural-match: namely, if it implements both
-/// `StructuralPartialEq` and `StructuralEq` (which are respectively injected by
-/// `#[derive(PartialEq)]` and `#[derive(Eq)]`).
-///
-/// Note that this does *not* recursively check if the substructure of `adt_ty`
-/// implements the traits.
-pub fn type_marked_structural(
- id: hir::HirId,
- span: Span,
- infcx: &InferCtxt<'_, 'tcx>,
- adt_ty: Ty<'tcx>,
-) -> bool {
- let mut fulfillment_cx = traits::FulfillmentContext::new();
- let cause = ObligationCause::new(span, id, ConstPatternStructural);
- // require `#[derive(PartialEq)]`
- let structural_peq_def_id = infcx.tcx.lang_items().structural_peq_trait().unwrap();
- fulfillment_cx.register_bound(
- infcx,
- ty::ParamEnv::empty(),
- adt_ty,
- structural_peq_def_id,
- cause,
- );
- // for now, require `#[derive(Eq)]`. (Doing so is a hack to work around
- // the type `for<'a> fn(&'a ())` failing to implement `Eq` itself.)
- let cause = ObligationCause::new(span, id, ConstPatternStructural);
- let structural_teq_def_id = infcx.tcx.lang_items().structural_teq_trait().unwrap();
- fulfillment_cx.register_bound(
- infcx,
- ty::ParamEnv::empty(),
- adt_ty,
- structural_teq_def_id,
- cause,
- );
-
- // We deliberately skip *reporting* fulfillment errors (via
- // `report_fulfillment_errors`), for two reasons:
- //
- // 1. The error messages would mention `std::marker::StructuralPartialEq`
- // (a trait which is solely meant as an implementation detail
- // for now), and
- //
- // 2. We are sometimes doing future-incompatibility lints for
- // now, so we do not want unconditional errors here.
- fulfillment_cx.select_all_or_error(infcx).is_ok()
-}
-
-/// This implements the traversal over the structure of a given type to try to
-/// find instances of ADTs (specifically structs or enums) that do not implement
-/// the structural-match traits (`StructuralPartialEq` and `StructuralEq`).
-struct Search<'a, 'tcx> {
- id: hir::HirId,
- span: Span,
-
- infcx: InferCtxt<'a, 'tcx>,
-
- /// Records first ADT that does not implement a structural-match trait.
- found: Option<NonStructuralMatchTy<'tcx>>,
-
- /// Tracks ADTs previously encountered during search, so that
- /// we will not recur on them again.
- seen: FxHashSet<hir::def_id::DefId>,
-}
-
-impl Search<'a, 'tcx> {
- fn tcx(&self) -> TyCtxt<'tcx> {
- self.infcx.tcx
- }
-
- fn type_marked_structural(&self, adt_ty: Ty<'tcx>) -> bool {
- type_marked_structural(self.id, self.span, &self.infcx, adt_ty)
- }
-}
-
-impl<'a, 'tcx> TypeVisitor<'tcx> for Search<'a, 'tcx> {
- fn visit_ty(&mut self, ty: Ty<'tcx>) -> bool {
- debug!("Search visiting ty: {:?}", ty);
-
- let (adt_def, substs) = match ty.kind {
- ty::Adt(adt_def, substs) => (adt_def, substs),
- ty::Param(_) => {
- self.found = Some(NonStructuralMatchTy::Param);
- return true; // Stop visiting.
- }
- ty::RawPtr(..) => {
- // structural-match ignores substructure of
- // `*const _`/`*mut _`, so skip `super_visit_with`.
- //
- // For example, if you have:
- // ```
- // struct NonStructural;
- // #[derive(PartialEq, Eq)]
- // struct T(*const NonStructural);
- // const C: T = T(std::ptr::null());
- // ```
- //
- // Even though `NonStructural` does not implement `PartialEq`,
- // structural equality on `T` does not recur into the raw
- // pointer. Therefore, one can still use `C` in a pattern.
-
- // (But still tell caller to continue search.)
- return false;
- }
- ty::FnDef(..) | ty::FnPtr(..) => {
- // types of formals and return in `fn(_) -> _` are also irrelevant;
- // so we do not recur into them via `super_visit_with`
- //
- // (But still tell caller to continue search.)
- return false;
- }
- ty::Array(_, n)
- if { n.try_eval_usize(self.tcx(), ty::ParamEnv::reveal_all()) == Some(0) } =>
- {
- // rust-lang/rust#62336: ignore type of contents
- // for empty array.
- return false;
- }
- _ => {
- ty.super_visit_with(self);
- return false;
- }
- };
-
- if !self.seen.insert(adt_def.did) {
- debug!("Search already seen adt_def: {:?}", adt_def);
- // let caller continue its search
- return false;
- }
-
- if !self.type_marked_structural(ty) {
- debug!("Search found ty: {:?}", ty);
- self.found = Some(NonStructuralMatchTy::Adt(&adt_def));
- return true; // Halt visiting!
- }
-
- // structural-match does not care about the
- // instantiation of the generics in an ADT (it
- // instead looks directly at its fields outside
- // this match), so we skip super_visit_with.
- //
- // (Must not recur on substs for `PhantomData<T>` cf
- // rust-lang/rust#55028 and rust-lang/rust#55837; but also
- // want to skip substs when only uses of generic are
- // behind unsafe pointers `*const T`/`*mut T`.)
-
- // even though we skip super_visit_with, we must recur on
- // fields of ADT.
- let tcx = self.tcx();
- for field_ty in adt_def.all_fields().map(|field| field.ty(tcx, substs)) {
- if field_ty.visit_with(self) {
- // found an ADT without structural-match; halt visiting!
- assert!(self.found.is_some());
- return true;
- }
- }
-
- // Even though we do not want to recur on substs, we do
- // want our caller to continue its own search.
- false
- }
-}
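Stripped of rustc internals, the removed `Search` visitor is a depth-first walk with a `seen` set. A minimal standalone sketch (the toy `Ty` enum and all names here are hypothetical, not rustc's types) that keeps the three key rules — ignore raw-pointer substructure, skip already-seen ADTs, recur into the fields of structural ADTs:

```rust
use std::collections::HashSet;

// Toy stand-in for rustc's `Ty`: ADTs carry a name, a "structural" flag
// (standing in for implementing the structural-match traits), and fields.
enum Ty {
    Scalar,
    RawPtr(Box<Ty>),
    Adt { name: &'static str, structural: bool, fields: Vec<Ty> },
}

// Returns the first non-structural ADT found, if any. Mirrors the shape of
// `Search::visit_ty`: raw-pointer substructure is ignored, previously seen
// ADTs are skipped, and fields of structural ADTs are visited recursively.
fn find_non_structural(ty: &Ty, seen: &mut HashSet<&'static str>) -> Option<&'static str> {
    match ty {
        Ty::Scalar => None,
        // Structural match ignores the contents of `*const _`/`*mut _`.
        Ty::RawPtr(_) => None,
        Ty::Adt { name, structural, fields } => {
            if !seen.insert(*name) {
                return None; // already visited: avoid recursing forever
            }
            if !*structural {
                return Some(*name);
            }
            fields.iter().find_map(|f| find_non_structural(f, seen))
        }
    }
}
```

This also reproduces the raw-pointer example from the comments above: a structural ADT whose only path to a non-structural type goes through a raw pointer is accepted.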
-use rustc_errors::DiagnosticBuilder;
-use rustc_span::Span;
use smallvec::smallvec;
-use smallvec::SmallVec;
use rustc::ty::outlives::Component;
-use rustc::ty::subst::{GenericArg, Subst, SubstsRef};
-use rustc::ty::{self, ToPolyTraitRef, ToPredicate, Ty, TyCtxt, WithConstness};
+use rustc::ty::{self, ToPolyTraitRef, TyCtxt};
use rustc_data_structures::fx::FxHashSet;
-use rustc_hir as hir;
-use rustc_hir::def_id::DefId;
-
-use super::{Normalized, Obligation, ObligationCause, PredicateObligation, SelectionContext};
fn anonymize_predicate<'tcx>(tcx: TyCtxt<'tcx>, pred: &ty::Predicate<'tcx>) -> ty::Predicate<'tcx> {
match *pred {
visited: PredicateSet<'tcx>,
}
-pub fn elaborate_trait_ref<'tcx>(
- tcx: TyCtxt<'tcx>,
- trait_ref: ty::PolyTraitRef<'tcx>,
-) -> Elaborator<'tcx> {
- elaborate_predicates(tcx, vec![trait_ref.without_const().to_predicate()])
-}
-
-pub fn elaborate_trait_refs<'tcx>(
- tcx: TyCtxt<'tcx>,
- trait_refs: impl Iterator<Item = ty::PolyTraitRef<'tcx>>,
-) -> Elaborator<'tcx> {
- let predicates = trait_refs.map(|trait_ref| trait_ref.without_const().to_predicate()).collect();
- elaborate_predicates(tcx, predicates)
-}
-
pub fn elaborate_predicates<'tcx>(
tcx: TyCtxt<'tcx>,
mut predicates: Vec<ty::Predicate<'tcx>>,
}
impl Elaborator<'tcx> {
- pub fn filter_to_traits(self) -> FilterToTraits<Self> {
- FilterToTraits::new(self)
- }
-
fn elaborate(&mut self, predicate: &ty::Predicate<'tcx>) {
let tcx = self.visited.tcx;
match *predicate {
}
}
}
-
-///////////////////////////////////////////////////////////////////////////
-// Supertrait iterator
-///////////////////////////////////////////////////////////////////////////
-
-pub type Supertraits<'tcx> = FilterToTraits<Elaborator<'tcx>>;
-
-pub fn supertraits<'tcx>(
- tcx: TyCtxt<'tcx>,
- trait_ref: ty::PolyTraitRef<'tcx>,
-) -> Supertraits<'tcx> {
- elaborate_trait_ref(tcx, trait_ref).filter_to_traits()
-}
-
-pub fn transitive_bounds<'tcx>(
- tcx: TyCtxt<'tcx>,
- bounds: impl Iterator<Item = ty::PolyTraitRef<'tcx>>,
-) -> Supertraits<'tcx> {
- elaborate_trait_refs(tcx, bounds).filter_to_traits()
-}
-
-///////////////////////////////////////////////////////////////////////////
-// `TraitAliasExpander` iterator
-///////////////////////////////////////////////////////////////////////////
-
-/// "Trait alias expansion" is the process of expanding a sequence of trait
-/// references into another sequence by transitively following all trait
- /// aliases. For example, given bounds like `Foo + Send`, a trait alias
- /// `trait Foo = Bar + Sync;`, and another trait alias
- /// `trait Bar = Read + Write;`, the bounds would expand to
- /// `Read + Write + Sync + Send`.
- /// Expansion is done via a DFS (depth-first search); the expansion path
- /// recorded in each `TraitAliasExpansionInfo` is used to detect cycles.
-pub struct TraitAliasExpander<'tcx> {
- tcx: TyCtxt<'tcx>,
- stack: Vec<TraitAliasExpansionInfo<'tcx>>,
-}
-
-/// Stores information about the expansion of a trait via a path of zero or more trait aliases.
-#[derive(Debug, Clone)]
-pub struct TraitAliasExpansionInfo<'tcx> {
- pub path: SmallVec<[(ty::PolyTraitRef<'tcx>, Span); 4]>,
-}
-
-impl<'tcx> TraitAliasExpansionInfo<'tcx> {
- fn new(trait_ref: ty::PolyTraitRef<'tcx>, span: Span) -> Self {
- Self { path: smallvec![(trait_ref, span)] }
- }
-
- /// Adds diagnostic labels to `diag` for the expansion path of a trait through all intermediate
- /// trait aliases.
- pub fn label_with_exp_info(
- &self,
- diag: &mut DiagnosticBuilder<'_>,
- top_label: &str,
- use_desc: &str,
- ) {
- diag.span_label(self.top().1, top_label);
- if self.path.len() > 1 {
- for (_, sp) in self.path.iter().rev().skip(1).take(self.path.len() - 2) {
- diag.span_label(*sp, format!("referenced here ({})", use_desc));
- }
- }
- diag.span_label(
- self.bottom().1,
- format!("trait alias used in trait object type ({})", use_desc),
- );
- }
-
- pub fn trait_ref(&self) -> &ty::PolyTraitRef<'tcx> {
- &self.top().0
- }
-
- pub fn top(&self) -> &(ty::PolyTraitRef<'tcx>, Span) {
- self.path.last().unwrap()
- }
-
- pub fn bottom(&self) -> &(ty::PolyTraitRef<'tcx>, Span) {
- self.path.first().unwrap()
- }
-
- fn clone_and_push(&self, trait_ref: ty::PolyTraitRef<'tcx>, span: Span) -> Self {
- let mut path = self.path.clone();
- path.push((trait_ref, span));
-
- Self { path }
- }
-}
-
-pub fn expand_trait_aliases<'tcx>(
- tcx: TyCtxt<'tcx>,
- trait_refs: impl IntoIterator<Item = (ty::PolyTraitRef<'tcx>, Span)>,
-) -> TraitAliasExpander<'tcx> {
- let items: Vec<_> = trait_refs
- .into_iter()
- .map(|(trait_ref, span)| TraitAliasExpansionInfo::new(trait_ref, span))
- .collect();
- TraitAliasExpander { tcx, stack: items }
-}
-
-impl<'tcx> TraitAliasExpander<'tcx> {
- /// If `item` is a trait alias and its predicate has not yet been visited, then expands `item`
- /// to the definition, pushes the resulting expansion onto `self.stack`, and returns `false`.
- /// Otherwise, immediately returns `true` if `item` is a regular trait, or `false` if it is a
- /// trait alias whose expansion is already on the current path (i.e., a cycle was detected).
- /// The return value indicates whether `item` should be yielded to the user.
- fn expand(&mut self, item: &TraitAliasExpansionInfo<'tcx>) -> bool {
- let tcx = self.tcx;
- let trait_ref = item.trait_ref();
- let pred = trait_ref.without_const().to_predicate();
-
- debug!("expand_trait_aliases: trait_ref={:?}", trait_ref);
-
- // Don't recurse if this bound is not a trait alias.
- let is_alias = tcx.is_trait_alias(trait_ref.def_id());
- if !is_alias {
- return true;
- }
-
- // Don't recurse if this trait alias is already on the stack for the DFS search.
- let anon_pred = anonymize_predicate(tcx, &pred);
- if item.path.iter().rev().skip(1).any(|(tr, _)| {
- anonymize_predicate(tcx, &tr.without_const().to_predicate()) == anon_pred
- }) {
- return false;
- }
-
- // Get components of trait alias.
- let predicates = tcx.super_predicates_of(trait_ref.def_id());
-
- let items = predicates.predicates.iter().rev().filter_map(|(pred, span)| {
- pred.subst_supertrait(tcx, &trait_ref)
- .to_opt_poly_trait_ref()
- .map(|trait_ref| item.clone_and_push(trait_ref, *span))
- });
- debug!("expand_trait_aliases: items={:?}", items.clone());
-
- self.stack.extend(items);
-
- false
- }
-}
-
-impl<'tcx> Iterator for TraitAliasExpander<'tcx> {
- type Item = TraitAliasExpansionInfo<'tcx>;
-
- fn size_hint(&self) -> (usize, Option<usize>) {
- (self.stack.len(), None)
- }
-
- fn next(&mut self) -> Option<TraitAliasExpansionInfo<'tcx>> {
- while let Some(item) = self.stack.pop() {
- if self.expand(&item) {
- return Some(item);
- }
- }
- None
- }
-}
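The expansion logic removed here can be sketched against a toy alias table. The string-keyed map and `expand_aliases` below are a hypothetical simplification of `TraitAliasExpander`, not its real API; what they preserve is the work stack, yielding regular traits, and using the expansion path to break cycles:

```rust
use std::collections::HashMap;

// Toy model: a bound is just a trait name; `aliases` maps an alias name to
// the bounds on its right-hand side (e.g. "Foo" => ["Bar", "Sync"]).
fn expand_aliases(
    aliases: &HashMap<&str, Vec<&'static str>>,
    bounds: &[&'static str],
) -> Vec<&'static str> {
    let mut out = Vec::new();
    // Each stack entry carries its expansion path for cycle detection,
    // like `TraitAliasExpansionInfo::path`.
    let mut stack: Vec<Vec<&'static str>> =
        bounds.iter().map(|b| vec![*b]).collect();
    while let Some(path) = stack.pop() {
        let name = *path.last().unwrap();
        match aliases.get(name) {
            None => out.push(name), // a regular trait: yield it
            Some(components) => {
                for &c in components.iter().rev() {
                    // Skip components already on this path (a cycle).
                    if !path.contains(&c) {
                        let mut p = path.clone();
                        p.push(c);
                        stack.push(p);
                    }
                }
            }
        }
    }
    out
}
```

Running this on the doc comment's example (`Foo = Bar + Sync`, `Bar = Read + Write`) flattens `Foo + Send` to the four underlying traits, and a self-referential alias does not loop.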
-
-///////////////////////////////////////////////////////////////////////////
-// Iterator over def-IDs of supertraits
-///////////////////////////////////////////////////////////////////////////
-
-pub struct SupertraitDefIds<'tcx> {
- tcx: TyCtxt<'tcx>,
- stack: Vec<DefId>,
- visited: FxHashSet<DefId>,
-}
-
-pub fn supertrait_def_ids(tcx: TyCtxt<'_>, trait_def_id: DefId) -> SupertraitDefIds<'_> {
- SupertraitDefIds {
- tcx,
- stack: vec![trait_def_id],
- visited: Some(trait_def_id).into_iter().collect(),
- }
-}
-
-impl Iterator for SupertraitDefIds<'tcx> {
- type Item = DefId;
-
- fn next(&mut self) -> Option<DefId> {
- let def_id = self.stack.pop()?;
- let predicates = self.tcx.super_predicates_of(def_id);
- let visited = &mut self.visited;
- self.stack.extend(
- predicates
- .predicates
- .iter()
- .filter_map(|(pred, _)| pred.to_opt_poly_trait_ref())
- .map(|trait_ref| trait_ref.def_id())
- .filter(|&super_def_id| visited.insert(super_def_id)),
- );
- Some(def_id)
- }
-}
-
-///////////////////////////////////////////////////////////////////////////
-// Other
-///////////////////////////////////////////////////////////////////////////
-
-/// A filter around an iterator of predicates that makes it yield up
-/// just trait references.
-pub struct FilterToTraits<I> {
- base_iterator: I,
-}
-
-impl<I> FilterToTraits<I> {
- fn new(base: I) -> FilterToTraits<I> {
- FilterToTraits { base_iterator: base }
- }
-}
-
-impl<'tcx, I: Iterator<Item = ty::Predicate<'tcx>>> Iterator for FilterToTraits<I> {
- type Item = ty::PolyTraitRef<'tcx>;
-
- fn next(&mut self) -> Option<ty::PolyTraitRef<'tcx>> {
- while let Some(pred) = self.base_iterator.next() {
- if let ty::Predicate::Trait(data, _) = pred {
- return Some(data.to_poly_trait_ref());
- }
- }
- None
- }
-
- fn size_hint(&self) -> (usize, Option<usize>) {
- let (_, upper) = self.base_iterator.size_hint();
- (0, upper)
- }
-}
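The same filter-and-map shape, including the `(0, upper)` size hint (every element may be filtered out, so only the upper bound survives), can be shown with a self-contained toy predicate enum; the names here are illustrative only:

```rust
// Toy predicate stream; only `Trait` predicates yield items.
enum Predicate {
    Trait(&'static str),
    RegionOutlives,
    WellFormed,
}

struct FilterToTraits<I> {
    base: I,
}

impl<I: Iterator<Item = Predicate>> Iterator for FilterToTraits<I> {
    type Item = &'static str;

    fn next(&mut self) -> Option<&'static str> {
        // Skip non-trait predicates, yield the trait name otherwise.
        while let Some(p) = self.base.next() {
            if let Predicate::Trait(name) = p {
                return Some(name);
            }
        }
        None
    }

    fn size_hint(&self) -> (usize, Option<usize>) {
        let (_, upper) = self.base.size_hint();
        (0, upper) // lower bound is 0: everything might be filtered out
    }
}
```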
-
-///////////////////////////////////////////////////////////////////////////
-// Other
-///////////////////////////////////////////////////////////////////////////
-
-/// Instantiate all bound parameters of the impl with the given substs,
-/// returning the resulting trait ref and all obligations that arise.
-/// The obligations are closed under normalization.
-pub fn impl_trait_ref_and_oblig<'a, 'tcx>(
- selcx: &mut SelectionContext<'a, 'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- impl_def_id: DefId,
- impl_substs: SubstsRef<'tcx>,
-) -> (ty::TraitRef<'tcx>, Vec<PredicateObligation<'tcx>>) {
- let impl_trait_ref = selcx.tcx().impl_trait_ref(impl_def_id).unwrap();
- let impl_trait_ref = impl_trait_ref.subst(selcx.tcx(), impl_substs);
- let Normalized { value: impl_trait_ref, obligations: normalization_obligations1 } =
- super::normalize(selcx, param_env, ObligationCause::dummy(), &impl_trait_ref);
-
- let predicates = selcx.tcx().predicates_of(impl_def_id);
- let predicates = predicates.instantiate(selcx.tcx(), impl_substs);
- let Normalized { value: predicates, obligations: normalization_obligations2 } =
- super::normalize(selcx, param_env, ObligationCause::dummy(), &predicates);
- let impl_obligations =
- predicates_for_generics(ObligationCause::dummy(), 0, param_env, &predicates);
-
- let impl_obligations: Vec<_> = impl_obligations
- .into_iter()
- .chain(normalization_obligations1)
- .chain(normalization_obligations2)
- .collect();
-
- (impl_trait_ref, impl_obligations)
-}
-
-/// See [`super::obligations_for_generics`].
-pub fn predicates_for_generics<'tcx>(
- cause: ObligationCause<'tcx>,
- recursion_depth: usize,
- param_env: ty::ParamEnv<'tcx>,
- generic_bounds: &ty::InstantiatedPredicates<'tcx>,
-) -> Vec<PredicateObligation<'tcx>> {
- debug!("predicates_for_generics(generic_bounds={:?})", generic_bounds);
-
- generic_bounds
- .predicates
- .iter()
- .map(|&predicate| Obligation {
- cause: cause.clone(),
- recursion_depth,
- param_env,
- predicate,
- })
- .collect()
-}
-
-pub fn predicate_for_trait_ref<'tcx>(
- cause: ObligationCause<'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- trait_ref: ty::TraitRef<'tcx>,
- recursion_depth: usize,
-) -> PredicateObligation<'tcx> {
- Obligation {
- cause,
- param_env,
- recursion_depth,
- predicate: trait_ref.without_const().to_predicate(),
- }
-}
-
-pub fn predicate_for_trait_def(
- tcx: TyCtxt<'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- cause: ObligationCause<'tcx>,
- trait_def_id: DefId,
- recursion_depth: usize,
- self_ty: Ty<'tcx>,
- params: &[GenericArg<'tcx>],
-) -> PredicateObligation<'tcx> {
- let trait_ref =
- ty::TraitRef { def_id: trait_def_id, substs: tcx.mk_substs_trait(self_ty, params) };
- predicate_for_trait_ref(cause, param_env, trait_ref, recursion_depth)
-}
-
-/// Casts a trait reference into a reference to one of its super
-/// traits; returns `None` if `target_trait_def_id` is not a
-/// supertrait.
-pub fn upcast_choices(
- tcx: TyCtxt<'tcx>,
- source_trait_ref: ty::PolyTraitRef<'tcx>,
- target_trait_def_id: DefId,
-) -> Vec<ty::PolyTraitRef<'tcx>> {
- if source_trait_ref.def_id() == target_trait_def_id {
- return vec![source_trait_ref]; // Shortcut the most common case.
- }
-
- supertraits(tcx, source_trait_ref).filter(|r| r.def_id() == target_trait_def_id).collect()
-}
-
-/// Given a trait `trait_ref`, returns the number of vtable entries
-/// that come from `trait_ref`, excluding its supertraits. Used in
-/// computing the vtable base for an upcast trait of a trait object.
-pub fn count_own_vtable_entries(tcx: TyCtxt<'tcx>, trait_ref: ty::PolyTraitRef<'tcx>) -> usize {
- let mut entries = 0;
- // Count number of methods and add them to the total offset.
- // Skip over associated types and constants.
- for trait_item in tcx.associated_items(trait_ref.def_id()).in_definition_order() {
- if trait_item.kind == ty::AssocKind::Method {
- entries += 1;
- }
- }
- entries
-}
-
-/// Given an upcast trait object described by `object`, returns the
-/// index of the method `method_def_id` (which should be part of
-/// `object.upcast_trait_ref`) within the vtable for `object`.
-pub fn get_vtable_index_of_object_method<N>(
- tcx: TyCtxt<'tcx>,
- object: &super::VtableObjectData<'tcx, N>,
- method_def_id: DefId,
-) -> usize {
- // Count number of methods preceding the one we are selecting and
- // add them to the total offset.
- // Skip over associated types and constants.
- let mut entries = object.vtable_base;
- for trait_item in tcx.associated_items(object.upcast_trait_ref.def_id()).in_definition_order() {
- if trait_item.def_id == method_def_id {
- // The item with the ID we were given really ought to be a method.
- assert_eq!(trait_item.kind, ty::AssocKind::Method);
- return entries;
- }
- if trait_item.kind == ty::AssocKind::Method {
- entries += 1;
- }
- }
-
- bug!("get_vtable_index_of_object_method: {:?} was not found", method_def_id);
-}
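Both removed helpers count only methods while skipping associated types and constants, and the object-method lookup additionally offsets by `vtable_base`. A toy model of that counting, using hypothetical `(AssocKind, name)` slices in place of `tcx.associated_items`:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum AssocKind {
    Method,
    Type,
    Const,
}

// Number of vtable slots a trait's own items occupy: methods only.
fn count_own_vtable_entries(items: &[(AssocKind, &str)]) -> usize {
    items.iter().filter(|(k, _)| *k == AssocKind::Method).count()
}

// Slot of `method` within a vtable that starts at `vtable_base`:
// the base plus the number of methods declared before it.
fn vtable_index(vtable_base: usize, items: &[(AssocKind, &str)], method: &str) -> Option<usize> {
    let mut idx = vtable_base;
    for (kind, name) in items {
        if *name == method {
            // The item we were asked about really ought to be a method.
            assert_eq!(*kind, AssocKind::Method);
            return Some(idx);
        }
        if *kind == AssocKind::Method {
            idx += 1;
        }
    }
    None // the real helper calls `bug!` here instead
}
```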
-
-pub fn closure_trait_ref_and_return_type(
- tcx: TyCtxt<'tcx>,
- fn_trait_def_id: DefId,
- self_ty: Ty<'tcx>,
- sig: ty::PolyFnSig<'tcx>,
- tuple_arguments: TupleArgumentsFlag,
-) -> ty::Binder<(ty::TraitRef<'tcx>, Ty<'tcx>)> {
- let arguments_tuple = match tuple_arguments {
- TupleArgumentsFlag::No => sig.skip_binder().inputs()[0],
- TupleArgumentsFlag::Yes => tcx.intern_tup(sig.skip_binder().inputs()),
- };
- let trait_ref = ty::TraitRef {
- def_id: fn_trait_def_id,
- substs: tcx.mk_substs_trait(self_ty, &[arguments_tuple.into()]),
- };
- ty::Binder::bind((trait_ref, sig.skip_binder().output()))
-}
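As a surface-level illustration of why `TupleArgumentsFlag::Yes` tuples the formals: `Fn(A, B) -> R` is sugar for a trait whose single type argument is the tuple `(A, B)`, matching the `tcx.intern_tup(sig.skip_binder().inputs())` above. A small stable-Rust demonstration (the function names are ours):

```rust
// A two-argument closure is selected as implementing `Fn<(u8, u8)>`:
// from the trait system's point of view its formals are one tuple.
fn apply<F: Fn(u8, u8) -> u16>(f: F, a: u8, b: u8) -> u16 {
    f(a, b)
}

fn demo() -> u16 {
    apply(|x, y| (x as u16) * 256 + y as u16, 1, 2)
}
```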
-
-pub fn generator_trait_ref_and_outputs(
- tcx: TyCtxt<'tcx>,
- fn_trait_def_id: DefId,
- self_ty: Ty<'tcx>,
- sig: ty::PolyGenSig<'tcx>,
-) -> ty::Binder<(ty::TraitRef<'tcx>, Ty<'tcx>, Ty<'tcx>)> {
- let trait_ref = ty::TraitRef {
- def_id: fn_trait_def_id,
- substs: tcx.mk_substs_trait(self_ty, &[sig.skip_binder().resume_ty.into()]),
- };
- ty::Binder::bind((trait_ref, sig.skip_binder().yield_ty, sig.skip_binder().return_ty))
-}
-
-pub fn impl_is_default(tcx: TyCtxt<'_>, node_item_def_id: DefId) -> bool {
- match tcx.hir().as_local_hir_id(node_item_def_id) {
- Some(hir_id) => {
- let item = tcx.hir().expect_item(hir_id);
- if let hir::ItemKind::Impl { defaultness, .. } = item.kind {
- defaultness.is_default()
- } else {
- false
- }
- }
- None => tcx.impl_defaultness(node_item_def_id).is_default(),
- }
-}
-
-pub fn impl_item_is_final(tcx: TyCtxt<'_>, assoc_item: &ty::AssocItem) -> bool {
- assoc_item.defaultness.is_final() && !impl_is_default(tcx, assoc_item.container.id())
-}
-
-pub enum TupleArgumentsFlag {
- Yes,
- No,
-}
+++ /dev/null
-use crate::infer::opaque_types::required_region_bounds;
-use crate::infer::InferCtxt;
-use crate::traits::{self, AssocTypeBoundData};
-use rustc::middle::lang_items;
-use rustc::ty::subst::SubstsRef;
-use rustc::ty::{self, ToPredicate, Ty, TyCtxt, TypeFoldable, WithConstness};
-use rustc_hir as hir;
-use rustc_hir::def_id::DefId;
-use rustc_span::symbol::{kw, Ident};
-use rustc_span::Span;
-
-/// Returns the set of obligations needed to make `ty` well-formed.
-/// If `ty` contains unresolved inference variables, this may include
-/// further WF obligations. However, if `ty` IS an unresolved
-/// inference variable, returns `None`, because we are not able to
-/// make any progress at all. This is to prevent "livelock" where we
-/// say "$0 is WF if $0 is WF".
-pub fn obligations<'a, 'tcx>(
- infcx: &InferCtxt<'a, 'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- body_id: hir::HirId,
- ty: Ty<'tcx>,
- span: Span,
-) -> Option<Vec<traits::PredicateObligation<'tcx>>> {
- let mut wf = WfPredicates { infcx, param_env, body_id, span, out: vec![], item: None };
- if wf.compute(ty) {
- debug!("wf::obligations({:?}, body_id={:?}) = {:?}", ty, body_id, wf.out);
-
- let result = wf.normalize();
- debug!("wf::obligations({:?}, body_id={:?}) ~~> {:?}", ty, body_id, result);
- Some(result)
- } else {
- None // no progress made, return None
- }
-}
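The `None`-on-inference-variable contract described above can be modeled with a toy type enum (all names hypothetical): a top-level unresolved variable blocks progress entirely, while a variable nested inside another type merely becomes a deferred `WellFormed` obligation:

```rust
// Toy model of `wf::obligations`.
enum Ty {
    Infer(u32),
    Int,
    Ref(Box<Ty>),
}

fn wf_obligations(ty: &Ty) -> Option<Vec<String>> {
    if let Ty::Infer(_) = ty {
        // The type IS an inference variable: saying "$0 is WF if $0 is WF"
        // would be a livelock, so report that no progress was made.
        return None;
    }
    let mut out = Vec::new();
    compute(ty, &mut out);
    Some(out)
}

// Nested inference variables are fine: they become deferred
// `WellFormed` obligations rather than blocking the caller.
fn compute(ty: &Ty, out: &mut Vec<String>) {
    match ty {
        Ty::Infer(v) => out.push(format!("WellFormed(${})", v)),
        Ty::Int => {}
        Ty::Ref(inner) => compute(inner, out),
    }
}
```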
-
-/// Returns the obligations that make this trait reference
-/// well-formed. For example, if there is a trait `Set` defined like
-/// `trait Set<K:Eq>`, then the trait reference `Foo: Set<Bar>` is WF
-/// if `Bar: Eq`.
-pub fn trait_obligations<'a, 'tcx>(
- infcx: &InferCtxt<'a, 'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- body_id: hir::HirId,
- trait_ref: &ty::TraitRef<'tcx>,
- span: Span,
- item: Option<&'tcx hir::Item<'tcx>>,
-) -> Vec<traits::PredicateObligation<'tcx>> {
- let mut wf = WfPredicates { infcx, param_env, body_id, span, out: vec![], item };
- wf.compute_trait_ref(trait_ref, Elaborate::All);
- wf.normalize()
-}
-
-pub fn predicate_obligations<'a, 'tcx>(
- infcx: &InferCtxt<'a, 'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- body_id: hir::HirId,
- predicate: &ty::Predicate<'tcx>,
- span: Span,
-) -> Vec<traits::PredicateObligation<'tcx>> {
- let mut wf = WfPredicates { infcx, param_env, body_id, span, out: vec![], item: None };
-
- // (*) ok to skip binders, because wf code is prepared for it
- match *predicate {
- ty::Predicate::Trait(ref t, _) => {
- wf.compute_trait_ref(&t.skip_binder().trait_ref, Elaborate::None); // (*)
- }
- ty::Predicate::RegionOutlives(..) => {}
- ty::Predicate::TypeOutlives(ref t) => {
- wf.compute(t.skip_binder().0);
- }
- ty::Predicate::Projection(ref t) => {
- let t = t.skip_binder(); // (*)
- wf.compute_projection(t.projection_ty);
- wf.compute(t.ty);
- }
- ty::Predicate::WellFormed(t) => {
- wf.compute(t);
- }
- ty::Predicate::ObjectSafe(_) => {}
- ty::Predicate::ClosureKind(..) => {}
- ty::Predicate::Subtype(ref data) => {
- wf.compute(data.skip_binder().a); // (*)
- wf.compute(data.skip_binder().b); // (*)
- }
- ty::Predicate::ConstEvaluatable(def_id, substs) => {
- let obligations = wf.nominal_obligations(def_id, substs);
- wf.out.extend(obligations);
-
- for ty in substs.types() {
- wf.compute(ty);
- }
- }
- }
-
- wf.normalize()
-}
-
-struct WfPredicates<'a, 'tcx> {
- infcx: &'a InferCtxt<'a, 'tcx>,
- param_env: ty::ParamEnv<'tcx>,
- body_id: hir::HirId,
- span: Span,
- out: Vec<traits::PredicateObligation<'tcx>>,
- item: Option<&'tcx hir::Item<'tcx>>,
-}
-
-/// Controls whether we "elaborate" supertraits and so forth on the WF
-/// predicates. This is a kind of hack to address #43784. The
-/// underlying problem in that issue was a trait structure like:
-///
-/// ```
-/// trait Foo: Copy { }
-/// trait Bar: Foo { }
-/// impl<T: Bar> Foo for T { }
-/// impl<T> Bar for T { }
-/// ```
-///
-/// Here, in the `Foo` impl, we will check that `T: Copy` holds -- but
-/// we decide that this is true because `T: Bar` is in the
-/// where-clauses (and we can elaborate that to include `T:
-/// Copy`). This wouldn't be a problem, except that when we check the
-/// `Bar` impl, we decide that `T: Foo` must hold because of the `Foo`
-/// impl. And so nowhere did we check that `T: Copy` holds!
-///
-/// To resolve this, we elaborate the WF requirements that must be
-/// proven when checking impls. This means that (e.g.) the `impl Bar
-/// for T` will be forced to prove not only that `T: Foo` but also `T:
-/// Copy` (which it won't be able to do, because there is no `Copy`
-/// impl for `T`).
-#[derive(Debug, PartialEq, Eq, Copy, Clone)]
-enum Elaborate {
- All,
- None,
-}
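The effect of `Elaborate::All` on the #43784 shape (`trait Foo: Copy`, `trait Bar: Foo`) can be illustrated as a transitive closure over direct supertrait bounds; the map-based `elaborate` below is a sketch of that idea, not rustc's `elaborate_predicates`:

```rust
use std::collections::{HashMap, HashSet};

// Toy elaboration: given each trait's direct supertrait requirements,
// proving `T: Bar` is expanded to also require `T: Foo` and `T: Copy`,
// which is exactly what closes the loophole described above.
fn elaborate(
    supers: &HashMap<&str, Vec<&'static str>>,
    start: &'static str,
) -> HashSet<&'static str> {
    let mut out = HashSet::new();
    let mut stack = vec![start];
    while let Some(t) = stack.pop() {
        if out.insert(t) {
            if let Some(ss) = supers.get(t) {
                stack.extend(ss.iter().copied());
            }
        }
    }
    out
}
```

With this closure in hand, an `impl<T> Bar for T` must discharge `T: Copy` directly, rather than each impl deferring to the other.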
-
-impl<'a, 'tcx> WfPredicates<'a, 'tcx> {
- fn cause(&mut self, code: traits::ObligationCauseCode<'tcx>) -> traits::ObligationCause<'tcx> {
- traits::ObligationCause::new(self.span, self.body_id, code)
- }
-
- fn normalize(&mut self) -> Vec<traits::PredicateObligation<'tcx>> {
- let cause = self.cause(traits::MiscObligation);
- let infcx = &mut self.infcx;
- let param_env = self.param_env;
- let mut obligations = Vec::with_capacity(self.out.len());
- for pred in &self.out {
- assert!(!pred.has_escaping_bound_vars());
- let mut selcx = traits::SelectionContext::new(infcx);
- let i = obligations.len();
- let value =
- traits::normalize_to(&mut selcx, param_env, cause.clone(), pred, &mut obligations);
- obligations.insert(i, value);
- }
- obligations
- }
-
- /// Pushes the obligations required for `trait_ref` to be WF into `self.out`.
- fn compute_trait_ref(&mut self, trait_ref: &ty::TraitRef<'tcx>, elaborate: Elaborate) {
- let tcx = self.infcx.tcx;
- let obligations = self.nominal_obligations(trait_ref.def_id, trait_ref.substs);
-
- let cause = self.cause(traits::MiscObligation);
- let param_env = self.param_env;
-
- let item = &self.item;
- let extend_cause_with_original_assoc_item_obligation =
- |cause: &mut traits::ObligationCause<'_>,
- pred: &ty::Predicate<'_>,
- trait_assoc_items: &[ty::AssocItem]| {
- let trait_item = tcx
- .hir()
- .as_local_hir_id(trait_ref.def_id)
- .and_then(|trait_id| tcx.hir().find(trait_id));
- let (trait_name, trait_generics) = match trait_item {
- Some(hir::Node::Item(hir::Item {
- ident,
- kind: hir::ItemKind::Trait(.., generics, _, _),
- ..
- }))
- | Some(hir::Node::Item(hir::Item {
- ident,
- kind: hir::ItemKind::TraitAlias(generics, _),
- ..
- })) => (Some(ident), Some(generics)),
- _ => (None, None),
- };
-
- let item_span = item.map(|i| tcx.sess.source_map().def_span(i.span));
- match pred {
- ty::Predicate::Projection(proj) => {
- // The obligation comes neither from the current `impl` nor from the `trait` being
- // implemented, but rather from a "second order" obligation, like in
- // `src/test/ui/associated-types/point-at-type-on-obligation-failure.rs`:
- //
- // error[E0271]: type mismatch resolving `<Foo2 as Bar2>::Ok == ()`
- // --> $DIR/point-at-type-on-obligation-failure.rs:13:5
- // |
- // LL | type Ok;
- // | -- associated type defined here
- // ...
- // LL | impl Bar for Foo {
- // | ---------------- in this `impl` item
- // LL | type Ok = ();
- // | ^^^^^^^^^^^^^ expected `u32`, found `()`
- // |
- // = note: expected type `u32`
- // found type `()`
- //
- // FIXME: we would want to point a span to all places that contributed to this
- // obligation. In the case above, it should be closer to:
- //
- // error[E0271]: type mismatch resolving `<Foo2 as Bar2>::Ok == ()`
- // --> $DIR/point-at-type-on-obligation-failure.rs:13:5
- // |
- // LL | type Ok;
- // | -- associated type defined here
- // LL | type Sibling: Bar2<Ok=Self::Ok>;
- // | -------------------------------- obligation set here
- // ...
- // LL | impl Bar for Foo {
- // | ---------------- in this `impl` item
- // LL | type Ok = ();
- // | ^^^^^^^^^^^^^ expected `u32`, found `()`
- // ...
- // LL | impl Bar2 for Foo2 {
- // | ---------------- in this `impl` item
- // LL | type Ok = u32;
- // | -------------- obligation set here
- // |
- // = note: expected type `u32`
- // found type `()`
- if let Some(hir::ItemKind::Impl { items, .. }) = item.map(|i| &i.kind) {
- let trait_assoc_item = tcx.associated_item(proj.projection_def_id());
- if let Some(impl_item) =
- items.iter().find(|item| item.ident == trait_assoc_item.ident)
- {
- cause.span = impl_item.span;
- cause.code = traits::AssocTypeBound(Box::new(AssocTypeBoundData {
- impl_span: item_span,
- original: trait_assoc_item.ident.span,
- bounds: vec![],
- }));
- }
- }
- }
- ty::Predicate::Trait(proj, _) => {
- // An associated item obligation born out of the `trait` failed to be met.
- // Point at the `impl` that failed the obligation, the associated item that
- // needed to meet the obligation, and the definition of that associated item,
- // which should hold the obligation in most cases. An example can be seen in
- // `src/test/ui/associated-types/point-at-type-on-obligation-failure-2.rs`:
- //
- // error[E0277]: the trait bound `bool: Bar` is not satisfied
- // --> $DIR/point-at-type-on-obligation-failure-2.rs:8:5
- // |
- // LL | type Assoc: Bar;
- // | ----- associated type defined here
- // ...
- // LL | impl Foo for () {
- // | --------------- in this `impl` item
- // LL | type Assoc = bool;
- // | ^^^^^^^^^^^^^^^^^^ the trait `Bar` is not implemented for `bool`
- //
- // If the obligation comes from the where clause in the `trait`, we point at it:
- //
- // error[E0277]: the trait bound `bool: Bar` is not satisfied
- // --> $DIR/point-at-type-on-obligation-failure-2.rs:8:5
- // |
- // | trait Foo where <Self as Foo>::Assoc: Bar {
- // | -------------------------- restricted in this bound
- // LL | type Assoc;
- // | ----- associated type defined here
- // ...
- // LL | impl Foo for () {
- // | --------------- in this `impl` item
- // LL | type Assoc = bool;
- // | ^^^^^^^^^^^^^^^^^^ the trait `Bar` is not implemented for `bool`
- if let (
- ty::Projection(ty::ProjectionTy { item_def_id, .. }),
- Some(hir::ItemKind::Impl { items, .. }),
- ) = (&proj.skip_binder().self_ty().kind, item.map(|i| &i.kind))
- {
- if let Some((impl_item, trait_assoc_item)) = trait_assoc_items
- .iter()
- .find(|i| i.def_id == *item_def_id)
- .and_then(|trait_assoc_item| {
- items
- .iter()
- .find(|i| i.ident == trait_assoc_item.ident)
- .map(|impl_item| (impl_item, trait_assoc_item))
- })
- {
- let bounds = trait_generics
- .map(|generics| {
- get_generic_bound_spans(
- &generics,
- trait_name,
- trait_assoc_item.ident,
- )
- })
- .unwrap_or_else(Vec::new);
- cause.span = impl_item.span;
- cause.code = traits::AssocTypeBound(Box::new(AssocTypeBoundData {
- impl_span: item_span,
- original: trait_assoc_item.ident.span,
- bounds,
- }));
- }
- }
- }
- _ => {}
- }
- };
-
- if let Elaborate::All = elaborate {
- // FIXME: Make `extend_cause_with_original_assoc_item_obligation` take an iterator
- // instead of a slice.
- let trait_assoc_items: Vec<_> =
- tcx.associated_items(trait_ref.def_id).in_definition_order().copied().collect();
-
- let predicates = obligations.iter().map(|obligation| obligation.predicate).collect();
- let implied_obligations = traits::elaborate_predicates(tcx, predicates);
- let implied_obligations = implied_obligations.map(|pred| {
- let mut cause = cause.clone();
- extend_cause_with_original_assoc_item_obligation(
- &mut cause,
- &pred,
- &*trait_assoc_items,
- );
- traits::Obligation::new(cause, param_env, pred)
- });
- self.out.extend(implied_obligations);
- }
-
- self.out.extend(obligations);
-
- self.out.extend(trait_ref.substs.types().filter(|ty| !ty.has_escaping_bound_vars()).map(
- |ty| traits::Obligation::new(cause.clone(), param_env, ty::Predicate::WellFormed(ty)),
- ));
- }
-
- /// Pushes the obligations required for `trait_ref::Item` to be WF
- /// into `self.out`.
- fn compute_projection(&mut self, data: ty::ProjectionTy<'tcx>) {
- // A projection is well-formed if (a) the trait ref itself is
- // WF and (b) the trait-ref holds. (It may also be
- // normalizable and be WF that way.)
- let trait_ref = data.trait_ref(self.infcx.tcx);
- self.compute_trait_ref(&trait_ref, Elaborate::None);
-
- if !data.has_escaping_bound_vars() {
- let predicate = trait_ref.without_const().to_predicate();
- let cause = self.cause(traits::ProjectionWf(data));
- self.out.push(traits::Obligation::new(cause, self.param_env, predicate));
- }
- }
-
- /// Pushes the obligations required for an array length to be WF
- /// into `self.out`.
- fn compute_array_len(&mut self, constant: ty::Const<'tcx>) {
- if let ty::ConstKind::Unevaluated(def_id, substs, promoted) = constant.val {
- assert!(promoted.is_none());
-
- let obligations = self.nominal_obligations(def_id, substs);
- self.out.extend(obligations);
-
- let predicate = ty::Predicate::ConstEvaluatable(def_id, substs);
- let cause = self.cause(traits::MiscObligation);
- self.out.push(traits::Obligation::new(cause, self.param_env, predicate));
- }
- }
-
- fn require_sized(&mut self, subty: Ty<'tcx>, cause: traits::ObligationCauseCode<'tcx>) {
- if !subty.has_escaping_bound_vars() {
- let cause = self.cause(cause);
- let trait_ref = ty::TraitRef {
- def_id: self.infcx.tcx.require_lang_item(lang_items::SizedTraitLangItem, None),
- substs: self.infcx.tcx.mk_substs_trait(subty, &[]),
- };
- self.out.push(traits::Obligation::new(
- cause,
- self.param_env,
- trait_ref.without_const().to_predicate(),
- ));
- }
- }
-
- /// Pushes new obligations into `out`. Returns `true` if it was able
- /// to generate all the predicates needed to validate that `ty0`
- /// is WF. Returns false if `ty0` is an unresolved type variable,
- /// in which case we are not able to simplify at all.
- fn compute(&mut self, ty0: Ty<'tcx>) -> bool {
- let mut subtys = ty0.walk();
- let param_env = self.param_env;
- while let Some(ty) = subtys.next() {
- match ty.kind {
- ty::Bool
- | ty::Char
- | ty::Int(..)
- | ty::Uint(..)
- | ty::Float(..)
- | ty::Error
- | ty::Str
- | ty::GeneratorWitness(..)
- | ty::Never
- | ty::Param(_)
- | ty::Bound(..)
- | ty::Placeholder(..)
- | ty::Foreign(..) => {
- // WfScalar, WfParameter, etc
- }
-
- ty::Slice(subty) => {
- self.require_sized(subty, traits::SliceOrArrayElem);
- }
-
- ty::Array(subty, len) => {
- self.require_sized(subty, traits::SliceOrArrayElem);
- self.compute_array_len(*len);
- }
-
- ty::Tuple(ref tys) => {
- if let Some((_last, rest)) = tys.split_last() {
- for elem in rest {
- self.require_sized(elem.expect_ty(), traits::TupleElem);
- }
- }
- }
-
- ty::RawPtr(_) => {
- // simple cases that are WF if their type args are WF
- }
-
- ty::Projection(data) => {
- subtys.skip_current_subtree(); // subtree handled by compute_projection
- self.compute_projection(data);
- }
-
- ty::UnnormalizedProjection(..) => bug!("only used with chalk-engine"),
-
- ty::Adt(def, substs) => {
- // WfNominalType
- let obligations = self.nominal_obligations(def.did, substs);
- self.out.extend(obligations);
- }
-
- ty::FnDef(did, substs) => {
- let obligations = self.nominal_obligations(did, substs);
- self.out.extend(obligations);
- }
-
- ty::Ref(r, rty, _) => {
- // WfReference
- if !r.has_escaping_bound_vars() && !rty.has_escaping_bound_vars() {
- let cause = self.cause(traits::ReferenceOutlivesReferent(ty));
- self.out.push(traits::Obligation::new(
- cause,
- param_env,
- ty::Predicate::TypeOutlives(ty::Binder::dummy(ty::OutlivesPredicate(
- rty, r,
- ))),
- ));
- }
- }
-
- ty::Generator(..) => {
- // Walk ALL the types in the generator: this will
- // include the upvar types as well as the yield
- // type. Note that this is mildly distinct from
- // the closure case, where we have to be careful
- // about the signature of the closure. We don't
- // have the problem of implied bounds here since
- // generators don't take arguments.
- }
-
- ty::Closure(def_id, substs) => {
- // Only check the upvar types for WF, not the rest
- // of the types within. This is needed because we
- // capture the signature and it may not be WF
- // without the implied bounds. Consider a closure
- // like `|x: &'a T|` -- it may be that `T: 'a` is
- // not known to hold in the creator's context (and
- // indeed the closure may not be invoked by its
- // creator, but rather turned to someone who *can*
- // verify that).
- //
- // The special treatment of closures here really
- // ought not to be necessary either; the problem
- // is related to #25860 -- there is no way for us
- // to express a fn type complete with the implied
- // bounds that it is assuming. I think in reality
- // the WF rules around fn are a bit messed up, and
- // that is the rot problem: `fn(&'a T)` should
- // probably always be WF, because it should be
- // shorthand for something like `where(T: 'a) {
- // fn(&'a T) }`, as discussed in #25860.
- //
- // Note that we are also skipping the generic
- // types. This is consistent with the `outlives`
- // code, but anyway doesn't matter: within the fn
- // body where they are created, the generics will
- // always be WF, and outside of that fn body we
- // are not directly inspecting closure types
- // anyway, except via auto trait matching (which
- // only inspects the upvar types).
- subtys.skip_current_subtree(); // subtree handled by compute_projection
- for upvar_ty in substs.as_closure().upvar_tys(def_id, self.infcx.tcx) {
- self.compute(upvar_ty);
- }
- }
-
- ty::FnPtr(_) => {
- // let the loop iterate into the argument/return
- // types appearing in the fn signature
- }
-
- ty::Opaque(did, substs) => {
- // all of the requirements on type parameters
- // should've been checked by the instantiation
- // of whatever returned this exact `impl Trait`.
-
- // for named opaque `impl Trait` types we still need to check them
- if ty::is_impl_trait_defn(self.infcx.tcx, did).is_none() {
- let obligations = self.nominal_obligations(did, substs);
- self.out.extend(obligations);
- }
- }
-
- ty::Dynamic(data, r) => {
- // WfObject
- //
- // Here, we defer WF checking due to higher-ranked
- // regions. This is perhaps not ideal.
- self.from_object_ty(ty, data, r);
-
- // FIXME(#27579) RFC also considers adding trait
- // obligations that don't refer to Self and
- // checking those
-
- let defer_to_coercion = self.infcx.tcx.features().object_safe_for_dispatch;
-
- if !defer_to_coercion {
- let cause = self.cause(traits::MiscObligation);
- let component_traits = data.auto_traits().chain(data.principal_def_id());
- self.out.extend(component_traits.map(|did| {
- traits::Obligation::new(
- cause.clone(),
- param_env,
- ty::Predicate::ObjectSafe(did),
- )
- }));
- }
- }
-
- // Inference variables are the complicated case, since we don't
- // know what type they are. We do two things:
- //
- // 1. Check if they have been resolved, and if so proceed with
- // THAT type.
- // 2. If not, check whether this is the type that we
- // started with (ty0). In that case, we've made no
- // progress at all, so return false. Otherwise,
- // we've at least simplified things (i.e., we went
- // from `Vec<$0>: WF` to `$0: WF`, so we can
- // register a pending obligation and keep
- // moving. (Goal is that an "inductive hypothesis"
- // is satisfied to ensure termination.)
- ty::Infer(_) => {
- let ty = self.infcx.shallow_resolve(ty);
- if let ty::Infer(_) = ty.kind {
- // not yet resolved...
- if ty == ty0 {
- // ...this is the type we started from! no progress.
- return false;
- }
-
- let cause = self.cause(traits::MiscObligation);
- self.out.push(
- // ...not the type we started from, so we made progress.
- traits::Obligation::new(
- cause,
- self.param_env,
- ty::Predicate::WellFormed(ty),
- ),
- );
- } else {
- // Yes, resolved, proceed with the
- // result. Should never return false because
- // `ty` is not a Infer.
- assert!(self.compute(ty));
- }
- }
- }
- }
-
- // if we made it through that loop above, we made progress!
- return true;
- }
-
- fn nominal_obligations(
- &mut self,
- def_id: DefId,
- substs: SubstsRef<'tcx>,
- ) -> Vec<traits::PredicateObligation<'tcx>> {
- let predicates = self.infcx.tcx.predicates_of(def_id).instantiate(self.infcx.tcx, substs);
- let cause = self.cause(traits::ItemObligation(def_id));
- predicates
- .predicates
- .into_iter()
- .map(|pred| traits::Obligation::new(cause.clone(), self.param_env, pred))
- .filter(|pred| !pred.has_escaping_bound_vars())
- .collect()
- }
-
- fn from_object_ty(
- &mut self,
- ty: Ty<'tcx>,
- data: ty::Binder<&'tcx ty::List<ty::ExistentialPredicate<'tcx>>>,
- region: ty::Region<'tcx>,
- ) {
- // Imagine a type like this:
- //
- // trait Foo { }
- // trait Bar<'c> : 'c { }
- //
- // &'b (Foo+'c+Bar<'d>)
- // ^
- //
- // In this case, the following relationships must hold:
- //
- // 'b <= 'c
- // 'd <= 'c
- //
- // The first conditions is due to the normal region pointer
- // rules, which say that a reference cannot outlive its
- // referent.
- //
- // The final condition may be a bit surprising. In particular,
- // you may expect that it would have been `'c <= 'd`, since
- // usually lifetimes of outer things are conservative
- // approximations for inner things. However, it works somewhat
- // differently with trait objects: here the idea is that if the
- // user specifies a region bound (`'c`, in this case) it is the
- // "master bound" that *implies* that bounds from other traits are
- // all met. (Remember that *all bounds* in a type like
- // `Foo+Bar+Zed` must be met, not just one, hence if we write
- // `Foo<'x>+Bar<'y>`, we know that the type outlives *both* 'x and
- // 'y.)
- //
- // Note: in fact we only permit builtin traits, not `Bar<'d>`, I
- // am looking forward to the future here.
- if !data.has_escaping_bound_vars() && !region.has_escaping_bound_vars() {
- let implicit_bounds = object_region_bounds(self.infcx.tcx, data);
-
- let explicit_bound = region;
-
- self.out.reserve(implicit_bounds.len());
- for implicit_bound in implicit_bounds {
- let cause = self.cause(traits::ObjectTypeBound(ty, explicit_bound));
- let outlives =
- ty::Binder::dummy(ty::OutlivesPredicate(explicit_bound, implicit_bound));
- self.out.push(traits::Obligation::new(
- cause,
- self.param_env,
- outlives.to_predicate(),
- ));
- }
- }
- }
-}
-
-/// Given an object type like `SomeTrait + Send`, computes the lifetime
-/// bounds that must hold on the elided self type. These are derived
-/// from the declarations of `SomeTrait`, `Send`, and friends -- if
-/// they declare `trait SomeTrait : 'static`, for example, then
-/// `'static` would appear in the list. The hard work is done by
-/// `infer::required_region_bounds`, see that for more information.
-pub fn object_region_bounds<'tcx>(
- tcx: TyCtxt<'tcx>,
- existential_predicates: ty::Binder<&'tcx ty::List<ty::ExistentialPredicate<'tcx>>>,
-) -> Vec<ty::Region<'tcx>> {
- // Since we don't actually *know* the self type for an object,
- // this "open(err)" serves as a kind of dummy standin -- basically
- // a placeholder type.
- let open_ty = tcx.mk_ty_infer(ty::FreshTy(0));
-
- let predicates = existential_predicates
- .iter()
- .filter_map(|predicate| {
- if let ty::ExistentialPredicate::Projection(_) = *predicate.skip_binder() {
- None
- } else {
- Some(predicate.with_self_ty(tcx, open_ty))
- }
- })
- .collect();
-
- required_region_bounds(tcx, open_ty, predicates)
-}
-
-/// Find the span of a generic bound affecting an associated type.
-fn get_generic_bound_spans(
- generics: &hir::Generics<'_>,
- trait_name: Option<&Ident>,
- assoc_item_name: Ident,
-) -> Vec<Span> {
- let mut bounds = vec![];
- for clause in generics.where_clause.predicates.iter() {
- if let hir::WherePredicate::BoundPredicate(pred) = clause {
- match &pred.bounded_ty.kind {
- hir::TyKind::Path(hir::QPath::Resolved(Some(ty), path)) => {
- let mut s = path.segments.iter();
- if let (a, Some(b), None) = (s.next(), s.next(), s.next()) {
- if a.map(|s| &s.ident) == trait_name
- && b.ident == assoc_item_name
- && is_self_path(&ty.kind)
- {
- // `<Self as Foo>::Bar`
- bounds.push(pred.span);
- }
- }
- }
- hir::TyKind::Path(hir::QPath::TypeRelative(ty, segment)) => {
- if segment.ident == assoc_item_name {
- if is_self_path(&ty.kind) {
- // `Self::Bar`
- bounds.push(pred.span);
- }
- }
- }
- _ => {}
- }
- }
- }
- bounds
-}
-
-fn is_self_path(kind: &hir::TyKind<'_>) -> bool {
- match kind {
- hir::TyKind::Path(hir::QPath::Resolved(None, path)) => {
- let mut s = path.segments.iter();
- if let (Some(segment), None) = (s.next(), s.next()) {
- if segment.ident.name == kw::SelfUpper {
- // `type(Self)`
- return true;
- }
- }
- }
- _ => {}
- }
- false
-}
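The `ty::Infer(_)` arm of `compute` above encodes a termination argument: if shallow resolution still yields the very type we started from, no progress was made and the walk returns `false`; otherwise a strictly smaller `WellFormed` obligation (e.g. `Vec<$0>: WF` simplified to `$0: WF`) is registered and the walk continues. A minimal standalone sketch of that progress rule, using a toy `Ty` enum rather than rustc's actual types (all names here are illustrative, not rustc API):

```rust
// Toy model (NOT rustc's real types) of the progress rule in `compute`:
// an unresolved inference variable at the top level means no progress,
// while peeling one type constructor off is progress even if what remains
// is still an inference variable.
#[derive(Clone, Debug, PartialEq)]
enum Ty {
    Int,
    Infer(u32),      // an unresolved inference variable, like `$0`
    Vec(Box<Ty>),    // stands in for any type constructor
}

/// Returns the sub-obligations implied by `ty0: WF`, or `None` if no
/// progress is possible (i.e. `ty0` is itself an unresolved variable).
fn wf_obligations(ty0: &Ty) -> Option<Vec<Ty>> {
    match ty0 {
        // Scalars are trivially well-formed: no further obligations.
        Ty::Int => Some(vec![]),
        // The type we started from is an unresolved variable: bail out,
        // mirroring the `return false` in the rustc code above.
        Ty::Infer(_) => None,
        // `Vec<T>: WF` simplifies to `T: WF` -- a smaller obligation, so
        // we made progress and can keep going.
        Ty::Vec(inner) => Some(vec![(**inner).clone()]),
    }
}
```

The real implementation registers the smaller obligation as pending rather than returning it, but the induction is the same: each step either finishes or strictly shrinks the goal.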
rustc_plugin_impl = { path = "../librustc_plugin_impl" }
rustc_privacy = { path = "../librustc_privacy" }
rustc_resolve = { path = "../librustc_resolve" }
+rustc_trait_selection = { path = "../librustc_trait_selection" }
rustc_ty = { path = "../librustc_ty" }
tempfile = "3.0.5"
once_cell = "1"
pub use crate::passes::BoxedResolver;
use crate::util;
-use rustc::lint;
-use rustc::session::config::{self, ErrorOutputType, Input, OutputFilenames};
-use rustc::session::early_error;
-use rustc::session::{DiagnosticOutput, Session};
use rustc::ty;
use rustc::util::common::ErrorReported;
use rustc_ast::ast::{self, MetaItemKind};
use rustc_errors::registry::Registry;
use rustc_lint::LintStore;
use rustc_parse::new_parser_from_source_str;
+use rustc_session::config::{self, ErrorOutputType, Input, OutputFilenames};
+use rustc_session::early_error;
+use rustc_session::lint;
use rustc_session::parse::{CrateConfig, ParseSess};
+use rustc_session::{DiagnosticOutput, Session};
use rustc_span::edition;
use rustc_span::source_map::{FileLoader, FileName, SourceMap};
use std::path::PathBuf;
use log::{info, log_enabled, warn};
use rustc::arena::Arena;
use rustc::dep_graph::DepGraph;
-use rustc::hir::map;
-use rustc::lint;
+use rustc::hir::map::Definitions;
use rustc::middle;
use rustc::middle::cstore::{CrateStore, MetadataLoader, MetadataLoaderDyn};
-use rustc::session::config::{self, CrateType, Input, OutputFilenames, OutputType};
-use rustc::session::config::{PpMode, PpSourceMode};
-use rustc::session::search_paths::PathKind;
-use rustc::session::Session;
use rustc::ty::steal::Steal;
use rustc::ty::{self, GlobalCtxt, ResolverOutputs, TyCtxt};
use rustc::util::common::ErrorReported;
use rustc_expand::base::ExtCtxt;
use rustc_hir::def_id::{CrateNum, LOCAL_CRATE};
use rustc_hir::Crate;
-use rustc_infer::traits;
use rustc_lint::LintStore;
use rustc_mir as mir;
use rustc_mir_build as mir_build;
use rustc_passes::{self, hir_stats, layout_test};
use rustc_plugin_impl as plugin;
use rustc_resolve::{Resolver, ResolverArenas};
+use rustc_session::config::{self, CrateType, Input, OutputFilenames, OutputType};
+use rustc_session::config::{PpMode, PpSourceMode};
+use rustc_session::lint;
+use rustc_session::search_paths::PathKind;
+use rustc_session::Session;
use rustc_span::symbol::Symbol;
use rustc_span::FileName;
+use rustc_trait_selection::traits;
use rustc_typeck as typeck;
use rustc_serialize::json;
}
sess.time("recursion_limit", || {
- middle::recursion_limit::update_limits(sess, &krate);
+ middle::limits::update_limits(sess, &krate);
});
let mut lint_store = rustc_lint::new_lint_store(
Ok((krate, Lrc::new(lint_store)))
}
-fn configure_and_expand_inner<'a>(
- sess: &'a Session,
- lint_store: &'a LintStore,
- mut krate: ast::Crate,
- crate_name: &str,
- resolver_arenas: &'a ResolverArenas<'a>,
- metadata_loader: &'a MetadataLoaderDyn,
-) -> Result<(ast::Crate, Resolver<'a>)> {
+fn pre_expansion_lint(sess: &Session, lint_store: &LintStore, krate: &ast::Crate) {
sess.time("pre_AST_expansion_lint_checks", || {
rustc_lint::check_ast_crate(
sess,
rustc_lint::BuiltinCombinedPreExpansionLintPass::new(),
);
});
+}
+
+fn configure_and_expand_inner<'a>(
+ sess: &'a Session,
+ lint_store: &'a LintStore,
+ mut krate: ast::Crate,
+ crate_name: &str,
+ resolver_arenas: &'a ResolverArenas<'a>,
+ metadata_loader: &'a MetadataLoaderDyn,
+) -> Result<(ast::Crate, Resolver<'a>)> {
+ pre_expansion_lint(sess, lint_store, &krate);
let mut resolver = Resolver::new(sess, &krate, crate_name, metadata_loader, &resolver_arenas);
rustc_builtin_macros::register_builtin_macros(&mut resolver, sess.edition());
..rustc_expand::expand::ExpansionConfig::default(crate_name.to_string())
};
- let mut ecx = ExtCtxt::new(&sess.parse_sess, cfg, &mut resolver);
+ let extern_mod_loaded = |k: &ast::Crate| pre_expansion_lint(sess, lint_store, k);
+ let mut ecx = ExtCtxt::new(&sess.parse_sess, cfg, &mut resolver, Some(&extern_mod_loaded));
// Expand macros now!
let krate = sess.time("expand_crate", || ecx.monotonic_expander().expand_crate(krate));
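The refactor above extracts the pre-expansion lint pass into a free function so it can run both once up front and again from the `extern_mod_loaded` callback whenever macro expansion pulls in an out-of-line module. A hedged sketch of that callback-extraction shape, with purely illustrative names (no rustc types involved):

```rust
// Illustrative sketch of the refactor: one shared check, invoked up front
// and then re-invoked via a callback for each externally loaded module.
fn pre_check(log: &mut Vec<String>, name: &str) {
    // Stands in for `pre_expansion_lint(sess, lint_store, krate)`.
    log.push(format!("pre-expansion lint on {}", name));
}

fn expand_with_callback(log: &mut Vec<String>, loaded_modules: &[&str]) {
    // Run the check on the root crate first, as
    // `configure_and_expand_inner` now does before building the resolver.
    pre_check(log, "root");
    // Then re-run it for every module loaded during expansion, mirroring
    // the `extern_mod_loaded` closure handed to `ExtCtxt::new`.
    for m in loaded_modules {
        pre_check(log, m);
    }
}
```

The benefit is that out-of-line modules discovered mid-expansion get the same pre-expansion lints as code that was present from the start.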
arena: &'tcx WorkerLocal<Arena<'tcx>>,
) -> QueryContext<'tcx> {
let sess = &compiler.session();
- let defs = mem::take(&mut resolver_outputs.definitions);
-
- // Construct the HIR map.
- let hir_map = map::map_crate(sess, &*resolver_outputs.cstore, krate, dep_graph, defs);
+ let defs: &'tcx Definitions = arena.alloc(mem::take(&mut resolver_outputs.definitions));
let query_result_on_disk_cache = rustc_incremental::load_query_result_cache(sess);
extern_providers,
arena,
resolver_outputs,
- hir_map,
+ krate,
+ defs,
+ dep_graph,
query_result_on_disk_cache,
&crate_name,
&outputs,
fn analysis(tcx: TyCtxt<'_>, cnum: CrateNum) -> Result<()> {
assert_eq!(cnum, LOCAL_CRATE);
+ rustc::hir::map::check_crate(tcx);
+
let sess = tcx.sess;
let mut entry_point = None;
use rustc::arena::Arena;
use rustc::dep_graph::DepGraph;
-use rustc::session::config::{OutputFilenames, OutputType};
-use rustc::session::Session;
use rustc::ty::steal::Steal;
use rustc::ty::{GlobalCtxt, ResolverOutputs};
use rustc::util::common::ErrorReported;
use rustc_hir::Crate;
use rustc_incremental::DepGraphFuture;
use rustc_lint::LintStore;
+use rustc_session::config::{OutputFilenames, OutputType};
+use rustc_session::Session;
use std::any::Any;
use std::cell::{Ref, RefCell, RefMut};
use std::mem;
use crate::interface::parse_cfgspecs;
-use rustc::lint::Level;
use rustc::middle::cstore;
-use rustc::session::config::{build_configuration, build_session_options, to_crate_config};
-use rustc::session::config::{rustc_optgroups, ErrorOutputType, ExternLocation, Options, Passes};
-use rustc::session::config::{ExternEntry, LinkerPluginLto, LtoCli, SwitchWithOptPath};
-use rustc::session::config::{Externs, OutputType, OutputTypes, SymbolManglingVersion};
-use rustc::session::search_paths::SearchPath;
-use rustc::session::{build_session, Session};
use rustc_data_structures::fx::FxHashSet;
use rustc_errors::{emitter::HumanReadableErrorType, registry, ColorConfig};
+use rustc_session::config::{build_configuration, build_session_options, to_crate_config};
+use rustc_session::config::{rustc_optgroups, ErrorOutputType, ExternLocation, Options, Passes};
+use rustc_session::config::{ExternEntry, LinkerPluginLto, LtoCli, SwitchWithOptPath};
+use rustc_session::config::{Externs, OutputType, OutputTypes, SymbolManglingVersion};
+use rustc_session::lint::Level;
+use rustc_session::search_paths::SearchPath;
+use rustc_session::{build_session, Session};
use rustc_span::edition::{Edition, DEFAULT_EDITION};
use rustc_span::symbol::sym;
use rustc_target::spec::{MergeFunctions, PanicStrategy, RelroLevel};
use log::info;
-use rustc::lint;
use rustc::ty;
use rustc_ast::ast::{AttrVec, BlockCheckMode};
use rustc_ast::mut_visit::{visit_clobber, MutVisitor, *};
use rustc_resolve::{self, Resolver};
use rustc_session as session;
use rustc_session::config::{ErrorOutputType, Input, OutputFilenames};
-use rustc_session::lint::{BuiltinLintDiagnostics, LintBuffer};
+use rustc_session::lint::{self, BuiltinLintDiagnostics, LintBuffer};
use rustc_session::parse::CrateConfig;
use rustc_session::CrateDisambiguator;
use rustc_session::{config, early_error, filesearch, DiagnosticOutput, Session};
cfg.extend(codegen_backend.target_features(sess).into_iter().map(|feat| (tf, Some(feat))));
- if sess.crt_static_feature() {
+ if sess.crt_static_feature(None) {
cfg.insert((tf, Some(Symbol::intern("crt-static"))));
}
}
for a in attrs.iter() {
if a.check_name(sym::crate_type) {
if let Some(n) = a.value_str() {
- if let Some(_) = categorize_crate_type(n) {
+ if categorize_crate_type(n).is_some() {
return;
}
// in general the pretty printer processes unexpanded code, so
// we override the default `visit_mac` method which panics.
- fn visit_mac(&mut self, mac: &mut ast::Mac) {
+ fn visit_mac(&mut self, mac: &mut ast::MacCall) {
noop_visit_mac(mac, self)
}
}
/// Escaped '\' character without continuation.
LoneSlash,
- /// Invalid escape characted (e.g. '\z').
+ /// Invalid escape character (e.g. '\z').
InvalidEscape,
/// Raw '\r' encountered.
BareCarriageReturn,
UnclosedUnicodeEscape,
/// '\u{_12}'
LeadingUnderscoreUnicodeEscape,
- /// More than 6 charactes in '\u{..}', e.g. '\u{10FFFF_FF}'
+ /// More than 6 characters in '\u{..}', e.g. '\u{10FFFF_FF}'
OverlongUnicodeEscape,
/// Invalid in-bound unicode character code, e.g. '\u{DFFF}'.
LoneSurrogateUnicodeEscape,
rustc_index = { path = "../librustc_index" }
rustc_session = { path = "../librustc_session" }
rustc_infer = { path = "../librustc_infer" }
+rustc_trait_selection = { path = "../librustc_trait_selection" }
//! `late_lint_methods!` invocation in `lib.rs`.
use crate::{EarlyContext, EarlyLintPass, LateContext, LateLintPass, LintContext};
-use rustc::hir::map::Map;
use rustc::lint::LintDiagnosticBuilder;
use rustc::ty::{self, layout::VariantIdx, Ty, TyCtxt};
use rustc_ast::ast::{self, Expr};
use rustc_hir::def_id::DefId;
use rustc_hir::{GenericParamKind, PatKind};
use rustc_hir::{HirIdSet, Node};
-use rustc_infer::traits::misc::can_type_implement_copy;
use rustc_session::lint::FutureIncompatibleInfo;
use rustc_span::edition::Edition;
use rustc_span::source_map::Spanned;
use rustc_span::symbol::{kw, sym, Symbol};
use rustc_span::{BytePos, Span};
+use rustc_trait_selection::traits::misc::can_type_implement_copy;
use crate::nonstandard_style::{method_context, MethodLateContext};
}
fn check_crate(&mut self, cx: &LateContext<'_, '_>, krate: &hir::Crate<'_>) {
- self.check_missing_docs_attrs(cx, None, &krate.attrs, krate.span, "crate");
+ self.check_missing_docs_attrs(cx, None, &krate.item.attrs, krate.item.span, "crate");
for macro_def in krate.exported_macros {
let has_doc = macro_def.attrs.iter().any(|a| has_doc(a));
let desc = match trait_item.kind {
hir::TraitItemKind::Const(..) => "an associated constant",
- hir::TraitItemKind::Method(..) => "a trait method",
+ hir::TraitItemKind::Fn(..) => "a trait method",
hir::TraitItemKind::Type(..) => "an associated type",
};
let desc = match impl_item.kind {
hir::ImplItemKind::Const(..) => "an associated constant",
- hir::ImplItemKind::Method(..) => "a method",
+ hir::ImplItemKind::Fn(..) => "a method",
hir::ImplItemKind::TyAlias(_) => "an associated type",
hir::ImplItemKind::OpaqueTy(_) => "an associated `impl Trait` type",
};
ast::StmtKind::Local(..) => "statements",
ast::StmtKind::Item(..) => "inner items",
// expressions will be reported by `check_expr`.
- ast::StmtKind::Semi(..) | ast::StmtKind::Expr(..) | ast::StmtKind::Mac(..) => return,
+ ast::StmtKind::Empty
+ | ast::StmtKind::Semi(_)
+ | ast::StmtKind::Expr(_)
+ | ast::StmtKind::MacCall(_) => return,
};
warn_if_doc(cx, stmt.span, kind, stmt.kind.attrs());
err: &'a mut DiagnosticBuilder<'db>,
}
impl<'a, 'db, 'v> Visitor<'v> for WalkAssocTypes<'a, 'db> {
- type Map = Map<'v>;
+ type Map = intravisit::ErasedMap<'v>;
- fn nested_visit_map(&mut self) -> intravisit::NestedVisitorMap<'_, Self::Map> {
+ fn nested_visit_map(&mut self) -> intravisit::NestedVisitorMap<Self::Map> {
intravisit::NestedVisitorMap::None
}
fn check_mac_def(&mut self, cx: &EarlyContext<'_>, mac_def: &ast::MacroDef, _id: ast::NodeId) {
self.check_tokens(cx, mac_def.body.inner_tokens());
}
- fn check_mac(&mut self, cx: &EarlyContext<'_>, mac: &ast::Mac) {
+ fn check_mac(&mut self, cx: &EarlyContext<'_>, mac: &ast::MacCall) {
self.check_tokens(cx, mac.args.inner_tokens());
}
fn check_ident(&mut self, cx: &EarlyContext<'_>, ident: ast::Ident) {
lint_name.to_string()
};
// If the lint was scoped with `tool::` check if the tool lint exists
- if let Some(_) = tool_name {
+ if tool_name.is_some() {
match self.by_name.get(&complete_name) {
None => match self.lint_groups.get(&*complete_name) {
None => return CheckLintNameResult::Tool(Err((None, String::new()))),
return if *silent {
CheckLintNameResult::Ok(&lint_ids)
} else {
- CheckLintNameResult::Tool(Err((Some(&lint_ids), name.to_string())))
+ CheckLintNameResult::Tool(Err((Some(&lint_ids), (*name).to_string())))
};
}
CheckLintNameResult::Ok(&lint_ids)
return if *silent {
CheckLintNameResult::Tool(Err((Some(&lint_ids), complete_name)))
} else {
- CheckLintNameResult::Tool(Err((Some(&lint_ids), name.to_string())))
+ CheckLintNameResult::Tool(Err((Some(&lint_ids), (*name).to_string())))
};
}
CheckLintNameResult::Tool(Err((Some(&lint_ids), complete_name)))
use crate::passes::{EarlyLintPass, EarlyLintPassObject};
use rustc_ast::ast;
use rustc_ast::visit as ast_visit;
-use rustc_session::lint::{LintBuffer, LintPass};
+use rustc_session::lint::{BufferedEarlyLint, LintBuffer, LintPass};
use rustc_session::Session;
use rustc_span::Span;
impl<'a, T: EarlyLintPass> EarlyContextAndPass<'a, T> {
fn check_id(&mut self, id: ast::NodeId) {
for early_lint in self.context.buffered.take(id) {
- let rustc_session::lint::BufferedEarlyLint {
- span,
- msg,
- node_id: _,
- lint_id,
- diagnostic,
- } = early_lint;
+ let BufferedEarlyLint { span, msg, node_id: _, lint_id, diagnostic } = early_lint;
self.context.lookup_with_diagnostics(
lint_id.lint,
Some(span),
self.check_id(id);
}
- fn visit_mac(&mut self, mac: &'a ast::Mac) {
+ fn visit_mac(&mut self, mac: &'a ast::MacCall) {
// FIXME(#54110): So, this setup isn't really right. I think
// that (a) the librustc_ast visitor ought to be doing this as
// part of `walk_mac`, and (b) we should be calling
lint_buffer: Option<LintBuffer>,
builtin_lints: T,
) {
- let mut passes: Vec<_> = if pre_expansion {
- lint_store.pre_expansion_passes.iter().map(|p| (p)()).collect()
- } else {
- lint_store.early_passes.iter().map(|p| (p)()).collect()
- };
+ let passes =
+ if pre_expansion { &lint_store.pre_expansion_passes } else { &lint_store.early_passes };
+ let mut passes: Vec<_> = passes.iter().map(|p| (p)()).collect();
let mut buffered = lint_buffer.unwrap_or_default();
if !sess.opts.debugging_opts.no_interleave_lints {
}
}
TyKind::Rptr(_, MutTy { ty: inner_ty, mutbl: Mutability::Not }) => {
- if let Some(impl_did) = cx.tcx.impl_of_method(ty.hir_id.owner_def_id()) {
+ if let Some(impl_did) = cx.tcx.impl_of_method(ty.hir_id.owner.to_def_id()) {
if cx.tcx.impl_trait_ref(impl_did).is_some() {
return;
}
/// Extract the `LintStore` from the query context.
/// This function exists because we've erased `LintStore` as `dyn Any` in the context.
-crate fn unerased_lint_store<'tcx>(tcx: TyCtxt<'tcx>) -> &'tcx LintStore {
+crate fn unerased_lint_store(tcx: TyCtxt<'_>) -> &LintStore {
let store: &dyn Any = &*tcx.lint_store;
store.downcast_ref().unwrap()
}
/// Because lints are scoped lexically, we want to walk nested
/// items in the context of the outer item, so enable
/// deep-walking.
- fn nested_visit_map<'this>(&'this mut self) -> hir_visit::NestedVisitorMap<'this, Self::Map> {
- hir_visit::NestedVisitorMap::All(&self.context.tcx.hir())
+ fn nested_visit_map(&mut self) -> hir_visit::NestedVisitorMap<Self::Map> {
+ hir_visit::NestedVisitorMap::All(self.context.tcx.hir())
}
fn visit_nested_body(&mut self, body: hir::BodyId) {
let mut cx = LateContextAndPass { context, pass };
// Visit the whole crate.
- cx.with_lint_attrs(hir::CRATE_HIR_ID, &krate.attrs, |cx| {
+ cx.with_lint_attrs(hir::CRATE_HIR_ID, &krate.item.attrs, |cx| {
// since the root module isn't visited as an item (because it isn't an
// item), warn for it here.
lint_callback!(cx, check_crate, krate);
let mut builder = LintLevelMapBuilder { levels, tcx, store };
let krate = tcx.hir().krate();
- let push = builder.levels.push(&krate.attrs, &store);
+ let push = builder.levels.push(&krate.item.attrs, &store);
builder.levels.register_id(hir::CRATE_HIR_ID);
for macro_def in krate.exported_macros {
builder.levels.register_id(macro_def.hir_id);
let prev = self.cur;
if !specs.is_empty() {
self.cur = self.sets.list.len() as u32;
- self.sets.list.push(LintSet::Node { specs: specs, parent: prev });
+ self.sets.list.push(LintSet::Node { specs, parent: prev });
}
- BuilderPush { prev: prev, changed: prev != self.cur }
+ BuilderPush { prev, changed: prev != self.cur }
}
/// Called after `push` when the scope of a set of attributes are exited.
impl<'tcx> intravisit::Visitor<'tcx> for LintLevelMapBuilder<'_, 'tcx> {
type Map = Map<'tcx>;
- fn nested_visit_map<'this>(&'this mut self) -> intravisit::NestedVisitorMap<'this, Self::Map> {
- intravisit::NestedVisitorMap::All(&self.tcx.hir())
+ fn nested_visit_map(&mut self) -> intravisit::NestedVisitorMap<Self::Map> {
+ intravisit::NestedVisitorMap::All(self.tcx.hir())
}
fn visit_param(&mut self, param: &'tcx hir::Param<'tcx>) {
WhileTrue: WhileTrue,
NonAsciiIdents: NonAsciiIdents,
IncompleteFeatures: IncompleteFeatures,
- RedundantSemicolon: RedundantSemicolon,
+ RedundantSemicolons: RedundantSemicolons,
UnusedDocComment: UnusedDocComment,
]
);
UNUSED_EXTERN_CRATES,
UNUSED_FEATURES,
UNUSED_LABELS,
- UNUSED_PARENS
+ UNUSED_PARENS,
+ REDUNDANT_SEMICOLONS
);
add_lint_group!(
store.register_renamed("unused_doc_comment", "unused_doc_comments");
store.register_renamed("async_idents", "keyword_idents");
store.register_renamed("exceeding_bitshifts", "arithmetic_overflow");
+ store.register_renamed("redundant_semicolon", "redundant_semicolons");
store.register_removed("unknown_features", "replaced by an error");
store.register_removed("unsigned_negation", "replaced by negate_unsigned feature gate");
store.register_removed("negate_unsigned", "cast a signed value instead");
}
fn check_trait_item(&mut self, cx: &LateContext<'_, '_>, item: &hir::TraitItem<'_>) {
- if let hir::TraitItemKind::Method(_, hir::TraitMethod::Required(pnames)) = item.kind {
+ if let hir::TraitItemKind::Fn(_, hir::TraitFn::Required(pnames)) = item.kind {
self.check_snake_case(cx, "trait method", &item.ident);
for param_name in pnames {
self.check_snake_case(cx, "variable", param_name);
fn check_path(a: &ast::Path, b: ast::NodeId);
fn check_attribute(a: &ast::Attribute);
fn check_mac_def(a: &ast::MacroDef, b: ast::NodeId);
- fn check_mac(a: &ast::Mac);
+ fn check_mac(a: &ast::MacCall);
/// Called when entering a syntax node that can have lint attributes such
/// as `#[allow(...)]`. Called with *all* the attributes of that node.
use crate::{EarlyContext, EarlyLintPass, LintContext};
-use rustc_ast::ast::{ExprKind, Stmt, StmtKind};
+use rustc_ast::ast::{Block, StmtKind};
use rustc_errors::Applicability;
+use rustc_span::Span;
declare_lint! {
- pub REDUNDANT_SEMICOLON,
+ pub REDUNDANT_SEMICOLONS,
Warn,
"detects unnecessary trailing semicolons"
}
-declare_lint_pass!(RedundantSemicolon => [REDUNDANT_SEMICOLON]);
+declare_lint_pass!(RedundantSemicolons => [REDUNDANT_SEMICOLONS]);
-impl EarlyLintPass for RedundantSemicolon {
- fn check_stmt(&mut self, cx: &EarlyContext<'_>, stmt: &Stmt) {
- if let StmtKind::Semi(expr) = &stmt.kind {
- if let ExprKind::Tup(ref v) = &expr.kind {
- if v.is_empty() {
- // Strings of excess semicolons are encoded as empty tuple expressions
- // during the parsing stage, so we check for empty tuple expressions
- // which span only semicolons
- if let Ok(source_str) = cx.sess().source_map().span_to_snippet(stmt.span) {
- if source_str.chars().all(|c| c == ';') {
- let multiple = (stmt.span.hi() - stmt.span.lo()).0 > 1;
- let msg = if multiple {
- "unnecessary trailing semicolons"
- } else {
- "unnecessary trailing semicolon"
- };
- cx.struct_span_lint(REDUNDANT_SEMICOLON, stmt.span, |lint| {
- let mut err = lint.build(&msg);
- let suggest_msg = if multiple {
- "remove these semicolons"
- } else {
- "remove this semicolon"
- };
- err.span_suggestion(
- stmt.span,
- &suggest_msg,
- String::new(),
- Applicability::MaybeIncorrect,
- );
- err.emit();
- });
- }
- }
- }
+impl EarlyLintPass for RedundantSemicolons {
+ fn check_block(&mut self, cx: &EarlyContext<'_>, block: &Block) {
+ let mut seq = None;
+ for stmt in block.stmts.iter() {
+ match (&stmt.kind, &mut seq) {
+ (StmtKind::Empty, None) => seq = Some((stmt.span, false)),
+ (StmtKind::Empty, Some(seq)) => *seq = (seq.0.to(stmt.span), true),
+ (_, seq) => maybe_lint_redundant_semis(cx, seq),
}
}
+ maybe_lint_redundant_semis(cx, &mut seq);
+ }
+}
+
+fn maybe_lint_redundant_semis(cx: &EarlyContext<'_>, seq: &mut Option<(Span, bool)>) {
+ if let Some((span, multiple)) = seq.take() {
+ cx.struct_span_lint(REDUNDANT_SEMICOLONS, span, |lint| {
+ let (msg, rem) = if multiple {
+ ("unnecessary trailing semicolons", "remove these semicolons")
+ } else {
+ ("unnecessary trailing semicolon", "remove this semicolon")
+ };
+ lint.build(msg)
+ .span_suggestion(span, rem, String::new(), Applicability::MaybeIncorrect)
+ .emit();
+ });
}
}
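The rewritten `check_block` above replaces per-statement snippet inspection with a single scan that merges consecutive `StmtKind::Empty` statements into one run, so a single diagnostic covers the whole span of semicolons. The scanning strategy can be sketched in isolation (using a plain `bool` per statement in place of `StmtKind`, purely for illustration):

```rust
/// Sketch of the run-merging scan in `check_block`: `true` marks an empty
/// statement (a bare `;`), and each maximal run of them is reported as
/// `(start_index, count)` -- one entry per would-be diagnostic.
fn empty_stmt_runs(stmts: &[bool]) -> Vec<(usize, usize)> {
    let mut runs = Vec::new();
    let mut current: Option<(usize, usize)> = None;
    for (i, &is_empty) in stmts.iter().enumerate() {
        match (is_empty, &mut current) {
            // First `;` after real code starts a new run.
            (true, None) => current = Some((i, 1)),
            // Another `;` extends the current run, like `seq.0.to(stmt.span)`.
            (true, Some(run)) => run.1 += 1,
            // A non-empty statement flushes the pending run, mirroring
            // the `maybe_lint_redundant_semis` call in the `(_, seq)` arm.
            (false, cur) => {
                if let Some(run) = cur.take() {
                    runs.push(run);
                }
            }
        }
    }
    // Flush a run that reaches the end of the block.
    if let Some(run) = current.take() {
        runs.push(run);
    }
    runs
}
```

The `count > 1` case corresponds to the `multiple` flag that switches the lint between singular and plural wording.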
let mut err = lint.build(&format!("literal out of range for {}", t));
err.note(&format!(
"the literal `{}` (decimal `{}`) does not fit into \
- an `{}` and will become `{}{}`",
+ the type `{}` and will become `{}{}`",
repr_str, val, t, actually, t
));
if let Some(sugg_ty) = get_type_suggestion(&cx.tables.node_type(expr.hir_id), val, negative)
v: u128,
) {
let int_type = t.normalize(cx.sess().target.ptr_width);
- let (_, max) = int_ty_range(int_type);
+ let (min, max) = int_ty_range(int_type);
let max = max as u128;
let negative = type_limits.negated_expr_id == e.hir_id;
}
cx.struct_span_lint(OVERFLOWING_LITERALS, e.span, |lint| {
- lint.build(&format!("literal out of range for `{}`", t.name_str())).emit()
+ lint.build(&format!("literal out of range for `{}`", t.name_str()))
+ .note(&format!(
+ "the literal `{}` does not fit into the type `{}` whose range is `{}..={}`",
+ cx.sess()
+ .source_map()
+ .span_to_snippet(lit.span)
+ .ok()
+ .expect("must get snippet from literal"),
+ t.name_str(),
+ min,
+ max,
+ ))
+ .emit();
});
}
}
return;
}
cx.struct_span_lint(OVERFLOWING_LITERALS, e.span, |lint| {
- lint.build(&format!("literal out of range for `{}`", t.name_str())).emit()
+ lint.build(&format!("literal out of range for `{}`", t.name_str()))
+ .note(&format!(
+ "the literal `{}` does not fit into the type `{}` whose range is `{}..={}`",
+ cx.sess()
+ .source_map()
+ .span_to_snippet(lit.span)
+ .ok()
+ .expect("must get snippet from literal"),
+ t.name_str(),
+ min,
+ max,
+ ))
+ .emit()
});
}
}
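The new diagnostic notes above quote the `min..=max` range of the target type, obtained from `int_ty_range` after width normalization. The arithmetic behind such a range lookup can be sketched as follows; this is a simplified stand-in, not rustc's actual `int_ty_range`, and works from a bit width rather than a `ty::IntTy`:

```rust
/// Inclusive range of an N-bit signed integer: -(2^(N-1)) ..= 2^(N-1) - 1.
/// Computed in i128 so every width up to 64 bits fits.
fn signed_ty_range(bits: u32) -> (i128, i128) {
    let min = -(1i128 << (bits - 1));
    let max = (1i128 << (bits - 1)) - 1;
    (min, max)
}

/// Inclusive range of an N-bit unsigned integer: 0 ..= 2^N - 1.
/// The shift on u128::MAX avoids overflow for bits == 128.
fn unsigned_ty_range(bits: u32) -> (u128, u128) {
    (0, u128::MAX >> (128 - bits))
}
```

A literal lint then only needs to compare the parsed value against `max` (or `min`, for a negated expression) before emitting the note.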
};
if is_infinite == Ok(true) {
cx.struct_span_lint(OVERFLOWING_LITERALS, e.span, |lint| {
- lint.build(&format!("literal out of range for `{}`", t.name_str())).emit()
+ lint.build(&format!("literal out of range for `{}`", t.name_str()))
+ .note(&format!(
+ "the literal `{}` does not fit into the type `{}` and will be converted to `std::{}::INFINITY`",
+ cx.sess()
+ .source_map()
+ .span_to_snippet(lit.span)
+ .expect("must get snippet from literal"),
+ t.name_str(),
+ t.name_str(),
+ ))
+ .emit();
});
}
}
let ty = cx.tcx.erase_regions(&t);
let layout = match cx.layout_of(ty) {
Ok(layout) => layout,
- Err(ty::layout::LayoutError::Unknown(_)) => return,
- Err(err @ ty::layout::LayoutError::SizeOverflow(_)) => {
- bug!("failed to get layout for `{}`: {}", t, err);
- }
+ Err(ty::layout::LayoutError::Unknown(_))
+ | Err(ty::layout::LayoutError::SizeOverflow(_)) => return,
};
let (variants, tag) = match layout.variants {
layout::Variants::Multiple {
match callee.kind {
hir::ExprKind::Path(ref qpath) => {
match cx.tables.qpath_res(qpath, callee.hir_id) {
- Res::Def(DefKind::Fn, def_id) | Res::Def(DefKind::Method, def_id) => {
+ Res::Def(DefKind::Fn, def_id) | Res::Def(DefKind::AssocFn, def_id) => {
Some(def_id)
}
// `Res::Local` if it was a closure, for which we
descr_post: &str,
plural_len: usize,
) -> bool {
- if ty.is_unit() || cx.tcx.is_ty_uninhabited_from(cx.tcx.parent_module(expr.hir_id), ty)
+ if ty.is_unit()
+ || cx.tcx.is_ty_uninhabited_from(
+ cx.tcx.parent_module(expr.hir_id).to_def_id(),
+ ty,
+ cx.param_env,
+ )
{
return true;
}
// Do not lint on `(..)` as that will result in the other arms being useless.
Paren(_)
// The other cases do not contain sub-patterns.
- | Wild | Rest | Lit(..) | Mac(..) | Range(..) | Ident(.., None) | Path(..) => return,
+ | Wild | Rest | Lit(..) | MacCall(..) | Range(..) | Ident(.., None) | Path(..) => return,
// These are list-like patterns; parens can always be removed.
TupleStruct(_, ps) | Tuple(ps) | Slice(ps) | Or(ps) => for p in ps {
self.check_unused_parens_pat(cx, p, false, false);
/// Don't hash the result, instead just mark a query red if it runs
NoHash,
- /// Don't force the query
- NoForce,
-
/// Generate a dep node based on the dependencies of the query
Anon,
- /// Always evaluate the query, ignoring its depdendencies
+ /// Always evaluate the query, ignoring its dependencies
EvalAlways,
}
Ok(QueryModifier::CycleDelayBug)
} else if modifier == "no_hash" {
Ok(QueryModifier::NoHash)
- } else if modifier == "no_force" {
- Ok(QueryModifier::NoForce)
} else if modifier == "anon" {
Ok(QueryModifier::Anon)
} else if modifier == "eval_always" {
/// Don't hash the result, instead just mark a query red if it runs
no_hash: bool,
- /// Don't force the query
- no_force: bool,
-
/// Generate a dep node based on the dependencies of the query
anon: bool,
- // Always evaluate the query, ignoring its depdendencies
+ // Always evaluate the query, ignoring its dependencies
eval_always: bool,
}
let mut fatal_cycle = false;
let mut cycle_delay_bug = false;
let mut no_hash = false;
- let mut no_force = false;
let mut anon = false;
let mut eval_always = false;
for modifier in query.modifiers.0.drain(..) {
}
no_hash = true;
}
- QueryModifier::NoForce => {
- if no_force {
- panic!("duplicate modifier `no_force` for query `{}`", query.name);
- }
- no_force = true;
- }
QueryModifier::Anon => {
if anon {
panic!("duplicate modifier `anon` for query `{}`", query.name);
fatal_cycle,
cycle_delay_bug,
no_hash,
- no_force,
anon,
eval_always,
}
let mut dep_node_def_stream = quote! {};
let mut dep_node_force_stream = quote! {};
let mut try_load_from_on_disk_cache_stream = quote! {};
- let mut no_force_queries = Vec::new();
let mut cached_queries = quote! {};
for group in groups.0 {
cached_queries.extend(quote! {
#name,
});
- }
- if modifiers.cache.is_some() && !modifiers.no_force {
try_load_from_on_disk_cache_stream.extend(quote! {
DepKind::#name => {
- debug_assert!(tcx.dep_graph
- .node_color(self)
- .map(|c| c.is_green())
- .unwrap_or(false));
-
- let key = RecoverKey::recover(tcx, self).unwrap();
- if queries::#name::cache_on_disk(tcx, key, None) {
- let _ = tcx.#name(key);
+ if <#arg as DepNodeParams>::CAN_RECONSTRUCT_QUERY_KEY {
+ debug_assert!($tcx.dep_graph
+ .node_color($dep_node)
+ .map(|c| c.is_green())
+ .unwrap_or(false));
+
+ let key = <#arg as DepNodeParams>::recover($tcx, $dep_node).unwrap();
+ if queries::#name::cache_on_disk($tcx, key, None) {
+ let _ = $tcx.#name(key);
+ }
}
}
});
[#attribute_stream] #name(#arg),
});
- if modifiers.no_force {
- no_force_queries.push(name.clone());
- } else {
- // Add a match arm to force the query given the dep node
- dep_node_force_stream.extend(quote! {
- DepKind::#name => {
- if let Some(key) = RecoverKey::recover($tcx, $dep_node) {
+ // Add a match arm to force the query given the dep node
+ dep_node_force_stream.extend(quote! {
+ DepKind::#name => {
+ if <#arg as DepNodeParams>::CAN_RECONSTRUCT_QUERY_KEY {
+ if let Some(key) = <#arg as DepNodeParams>::recover($tcx, $dep_node) {
$tcx.force_query::<crate::ty::query::queries::#name<'_>>(
key,
DUMMY_SP,
*$dep_node
);
- } else {
- return false;
+ return true;
}
}
- });
- }
+ }
+ });
add_query_description_impl(&query, modifiers, &mut query_description_stream);
}
});
}
- // Add an arm for the no force queries to panic when trying to force them
- for query in no_force_queries {
- dep_node_force_stream.extend(quote! {
- DepKind::#query |
- });
- }
dep_node_force_stream.extend(quote! {
DepKind::Null => {
bug!("Cannot force dep node: {:?}", $dep_node)
#query_description_stream
- impl DepNode {
- /// Check whether the query invocation corresponding to the given
- /// DepNode is eligible for on-disk-caching. If so, this is method
- /// will execute the query corresponding to the given DepNode.
- /// Also, as a sanity check, it expects that the corresponding query
- /// invocation has been marked as green already.
- pub fn try_load_from_on_disk_cache(&self, tcx: TyCtxt<'_>) {
- match self.kind {
+ macro_rules! rustc_dep_node_try_load_from_on_disk_cache {
+ ($dep_node:expr, $tcx:expr) => {
+ match $dep_node.kind {
#try_load_from_on_disk_cache_stream
_ => (),
}
#value,
});
keyword_stream.extend(quote! {
+ #[allow(non_upper_case_globals)]
pub const #name: Symbol = Symbol::new(#counter);
});
counter += 1;
#value,
});
symbols_stream.extend(quote! {
+ #[allow(rustc::default_hash_types)]
+ #[allow(non_upper_case_globals)]
pub const #name: Symbol = Symbol::new(#counter);
});
counter += 1;
() => {
#symbols_stream
+ #[allow(non_upper_case_globals)]
pub const digits_array: &[Symbol; 10] = &[
#digits_stream
];
memmap = "0.7"
smallvec = { version = "1.0", features = ["union", "may_dangle"] }
rustc = { path = "../librustc" }
-rustc_ast_pretty = { path = "../librustc_ast_pretty" }
rustc_attr = { path = "../librustc_attr" }
rustc_data_structures = { path = "../librustc_data_structures" }
rustc_errors = { path = "../librustc_errors" }
stable_deref_trait = "1.0.0"
rustc_ast = { path = "../librustc_ast" }
rustc_expand = { path = "../librustc_expand" }
-rustc_parse = { path = "../librustc_parse" }
rustc_span = { path = "../librustc_span" }
+rustc_session = { path = "../librustc_session" }
[target.'cfg(windows)'.dependencies]
winapi = { version = "0.3", features = ["errhandlingapi", "libloaderapi"] }
use rustc::hir::map::Definitions;
use rustc::middle::cstore::DepKind;
use rustc::middle::cstore::{CrateSource, ExternCrate, ExternCrateSource, MetadataLoaderDyn};
-use rustc::session::config;
-use rustc::session::search_paths::PathKind;
-use rustc::session::{CrateDisambiguator, Session};
use rustc::ty::TyCtxt;
-use rustc_ast::ast;
-use rustc_ast::attr;
use rustc_ast::expand::allocator::{global_allocator_spans, AllocatorKind};
+use rustc_ast::{ast, attr};
use rustc_data_structures::svh::Svh;
use rustc_data_structures::sync::Lrc;
use rustc_errors::struct_span_err;
use rustc_expand::base::SyntaxExtension;
use rustc_hir::def_id::{CrateNum, LOCAL_CRATE};
use rustc_index::vec::IndexVec;
+use rustc_session::config;
+use rustc_session::search_paths::PathKind;
+use rustc_session::{CrateDisambiguator, Session};
use rustc_span::edition::Edition;
use rustc_span::symbol::{sym, Symbol};
use rustc_span::{Span, DUMMY_SP};
}
}
+/// A reference to `CrateMetadata` that can also give access to whole crate store when necessary.
+#[derive(Clone, Copy)]
+crate struct CrateMetadataRef<'a> {
+ pub cdata: &'a CrateMetadata,
+ pub cstore: &'a CStore,
+}
+
+impl std::ops::Deref for CrateMetadataRef<'_> {
+ type Target = CrateMetadata;
+
+ fn deref(&self) -> &Self::Target {
+ self.cdata
+ }
+}
+
fn dump_crates(cstore: &CStore) {
info!("resolved crates:");
cstore.iter_crate_data(|cnum, data| {
CrateNum::new(self.metas.len() - 1)
}
- crate fn get_crate_data(&self, cnum: CrateNum) -> &CrateMetadata {
- self.metas[cnum]
+ crate fn get_crate_data(&self, cnum: CrateNum) -> CrateMetadataRef<'_> {
+ let cdata = self.metas[cnum]
.as_ref()
- .unwrap_or_else(|| panic!("Failed to get crate data for {:?}", cnum))
+ .unwrap_or_else(|| panic!("Failed to get crate data for {:?}", cnum));
+ CrateMetadataRef { cdata, cstore: self }
}
fn set_crate_data(&mut self, cnum: CrateNum, data: CrateMetadata) {
// We're also sure to compare *paths*, not actual byte slices. The
// `source` stores paths which are normalized which may be different
// from the strings on the command line.
- let source = self.cstore.get_crate_data(cnum).source();
+ let source = self.cstore.get_crate_data(cnum).cdata.source();
if let Some(entry) = self.sess.opts.externs.get(&name.as_str()) {
// Only use `--extern crate_name=path` here, not `--extern crate_name`.
if let Some(mut files) = entry.files() {
self.load(&mut locator)
.map(|r| (r, None))
.or_else(|| {
- dep_kind = DepKind::UnexportedMacrosOnly;
+ dep_kind = DepKind::MacrosOnly;
self.load_proc_macro(&mut locator, path_kind)
})
.ok_or_else(move || LoadError::LocatorError(locator))?
(LoadResult::Previous(cnum), None) => {
let data = self.cstore.get_crate_data(cnum);
if data.is_proc_macro_crate() {
- dep_kind = DepKind::UnexportedMacrosOnly;
+ dep_kind = DepKind::MacrosOnly;
}
data.update_dep_kind(|data_dep_kind| cmp::max(data_dep_kind, dep_kind));
Ok(cnum)
"resolving dep crate {} hash: `{}` extra filename: `{}`",
dep.name, dep.hash, dep.extra_filename
);
- if dep.kind == DepKind::UnexportedMacrosOnly {
- return krate;
- }
let dep_kind = match dep_kind {
DepKind::MacrosOnly => DepKind::MacrosOnly,
_ => dep.kind,
None => item.ident.name,
};
let dep_kind = if attr::contains_name(&item.attrs, sym::no_link) {
- DepKind::UnexportedMacrosOnly
+ DepKind::MacrosOnly
} else {
DepKind::Explicit
};
let cnum = self.resolve_crate(name, item.span, dep_kind, None);
let def_id = definitions.opt_local_def_id(item.id).unwrap();
- let path_len = definitions.def_path(def_id.index).data.len();
+ let path_len = definitions.def_path(def_id).data.len();
self.update_extern_crate(
cnum,
ExternCrate {
- src: ExternCrateSource::Extern(def_id),
+ src: ExternCrateSource::Extern(def_id.to_def_id()),
span: item.span,
path_len,
dependency_of: LOCAL_CRATE,
use rustc::middle::cstore::LinkagePreference::{self, RequireDynamic, RequireStatic};
use rustc::middle::cstore::{self, DepKind};
use rustc::middle::dependency_format::{Dependencies, DependencyList, Linkage};
-use rustc::session::config;
use rustc::ty::TyCtxt;
use rustc_data_structures::fx::FxHashMap;
use rustc_hir::def_id::CrateNum;
+use rustc_session::config;
use rustc_target::spec::PanicStrategy;
crate fn calculate(tcx: TyCtxt<'_>) -> Dependencies {
// If the global prefer_dynamic switch is turned off, or the final
// executable will be statically linked, prefer static crate linkage.
- config::CrateType::Executable if !sess.opts.cg.prefer_dynamic || sess.crt_static() => {
+ config::CrateType::Executable
+ if !sess.opts.cg.prefer_dynamic || sess.crt_static(Some(ty)) =>
+ {
Linkage::Static
}
config::CrateType::Executable => Linkage::Dynamic,
// If any are not found, generate some nice pretty errors.
if ty == config::CrateType::Staticlib
|| (ty == config::CrateType::Executable
- && sess.crt_static()
+ && sess.crt_static(Some(ty))
&& !sess.target.target.options.crt_static_allows_dylibs)
{
for &cnum in tcx.crates().iter() {
pub mod locator;
pub fn validate_crate_name(
- sess: Option<&rustc::session::Session>,
+ sess: Option<&rustc_session::Session>,
s: &str,
sp: Option<rustc_span::Span>,
) {
let mut collector = Collector { args: Vec::new() };
tcx.hir().krate().visit_all_item_likes(&mut collector);
- for attr in tcx.hir().krate().attrs.iter() {
+ for attr in tcx.hir().krate().item.attrs.iter() {
if attr.has_name(sym::link_args) {
if let Some(linkarg) = attr.value_str() {
collector.add_link_args(&linkarg.as_str());
use crate::rmeta::{rustc_version, MetadataBlob, METADATA_HEADER};
use rustc::middle::cstore::{CrateSource, MetadataLoader};
-use rustc::session::filesearch::{FileDoesntMatch, FileMatches, FileSearch};
-use rustc::session::search_paths::PathKind;
-use rustc::session::{config, CrateDisambiguator, Session};
use rustc_data_structures::fx::{FxHashMap, FxHashSet};
use rustc_data_structures::svh::Svh;
use rustc_data_structures::sync::MetadataRef;
use rustc_errors::{struct_span_err, DiagnosticBuilder};
+use rustc_session::filesearch::{FileDoesntMatch, FileMatches, FileSearch};
+use rustc_session::search_paths::PathKind;
+use rustc_session::{config, CrateDisambiguator, Session};
use rustc_span::symbol::{sym, Symbol};
use rustc_span::Span;
use rustc_target::spec::{Target, TargetTriple};
// See also #68149 which provides more detail on why emitting the
// dependency on the rlib is a bad thing.
//
- // We currenty do not verify that these other sources are even in sync,
+ // We currently do not verify that these other sources are even in sync,
// and this is arguably a bug (see #10786), but because reading metadata
// is quite slow (especially from dylibs) we currently do not read it
// from the other crate sources.
use rustc::middle::cstore::{self, NativeLibrary};
-use rustc::session::parse::feature_err;
-use rustc::session::Session;
use rustc::ty::TyCtxt;
use rustc_attr as attr;
use rustc_data_structures::fx::FxHashSet;
use rustc_errors::struct_span_err;
use rustc_hir as hir;
use rustc_hir::itemlikevisit::ItemLikeVisitor;
+use rustc_session::parse::feature_err;
+use rustc_session::Session;
use rustc_span::source_map::Span;
use rustc_span::symbol::{kw, sym, Symbol};
use rustc_target::spec::abi::Abi;
// Decoding metadata from a single crate's metadata
+use crate::creader::CrateMetadataRef;
use crate::rmeta::table::{FixedSizeEncoding, Table};
use crate::rmeta::*;
-use rustc::dep_graph::{self, DepNodeIndex};
+use rustc::dep_graph::{self, DepNode, DepNodeIndex};
use rustc::hir::exports::Export;
use rustc::hir::map::definitions::DefPathTable;
use rustc::hir::map::{DefKey, DefPath, DefPathData, DefPathHash};
use rustc::middle::lang_items;
use rustc::mir::interpret::{AllocDecodingSession, AllocDecodingState};
use rustc::mir::{self, interpret, BodyAndCache, Promoted};
-use rustc::session::Session;
use rustc::ty::codec::TyDecoder;
use rustc::ty::{self, Ty, TyCtxt};
use rustc::util::common::record_time;
+use rustc_ast::ast::{self, Ident};
+use rustc_attr as attr;
use rustc_data_structures::captures::Captures;
use rustc_data_structures::fingerprint::Fingerprint;
use rustc_data_structures::fx::FxHashMap;
use rustc_data_structures::svh::Svh;
use rustc_data_structures::sync::{AtomicCell, Lock, LockGuard, Lrc, Once};
+use rustc_expand::base::{SyntaxExtension, SyntaxExtensionKind};
+use rustc_expand::proc_macro::{AttrProcMacro, BangProcMacro, ProcMacroDerive};
use rustc_hir as hir;
use rustc_hir::def::{CtorKind, CtorOf, DefKind, Res};
use rustc_hir::def_id::{CrateNum, DefId, DefIndex, LocalDefId, CRATE_DEF_INDEX, LOCAL_CRATE};
use rustc_index::vec::{Idx, IndexVec};
+use rustc_serialize::{opaque, Decodable, Decoder, SpecializedDecoder};
+use rustc_session::Session;
+use rustc_span::source_map::{self, respan, Spanned};
+use rustc_span::symbol::{sym, Symbol};
+use rustc_span::{self, hygiene::MacroKind, BytePos, Pos, Span, DUMMY_SP};
+use log::debug;
+use proc_macro::bridge::client::ProcMacro;
use std::io;
use std::mem;
use std::num::NonZeroUsize;
use std::u32;
-use log::debug;
-use proc_macro::bridge::client::ProcMacro;
-use rustc_ast::ast::{self, Ident};
-use rustc_attr as attr;
-use rustc_expand::base::{SyntaxExtension, SyntaxExtensionKind};
-use rustc_expand::proc_macro::{AttrProcMacro, BangProcMacro, ProcMacroDerive};
-use rustc_serialize::{opaque, Decodable, Decoder, SpecializedDecoder};
-use rustc_span::source_map::{self, respan, Spanned};
-use rustc_span::symbol::{sym, Symbol};
-use rustc_span::{self, hygiene::MacroKind, BytePos, Pos, Span, DUMMY_SP};
-
pub use cstore_impl::{provide, provide_extern};
mod cstore_impl;
pub(super) struct DecodeContext<'a, 'tcx> {
opaque: opaque::Decoder<'a>,
- cdata: Option<&'a CrateMetadata>,
+ cdata: Option<CrateMetadataRef<'a>>,
sess: Option<&'tcx Session>,
tcx: Option<TyCtxt<'tcx>>,
/// Abstract over the various ways one can create metadata decoders.
pub(super) trait Metadata<'a, 'tcx>: Copy {
fn raw_bytes(self) -> &'a [u8];
- fn cdata(self) -> Option<&'a CrateMetadata> {
+ fn cdata(self) -> Option<CrateMetadataRef<'a>> {
None
}
fn sess(self) -> Option<&'tcx Session> {
lazy_state: LazyState::NoNode,
alloc_decoding_session: self
.cdata()
- .map(|cdata| cdata.alloc_decoding_state.new_decoding_session()),
+ .map(|cdata| cdata.cdata.alloc_decoding_state.new_decoding_session()),
}
}
}
}
}
-impl<'a, 'tcx> Metadata<'a, 'tcx> for &'a CrateMetadata {
+impl<'a, 'tcx> Metadata<'a, 'tcx> for &'a CrateMetadataRef<'a> {
fn raw_bytes(self) -> &'a [u8] {
self.blob.raw_bytes()
}
- fn cdata(self) -> Option<&'a CrateMetadata> {
- Some(self)
+ fn cdata(self) -> Option<CrateMetadataRef<'a>> {
+ Some(*self)
}
}
-impl<'a, 'tcx> Metadata<'a, 'tcx> for (&'a CrateMetadata, &'tcx Session) {
+impl<'a, 'tcx> Metadata<'a, 'tcx> for (&'a CrateMetadataRef<'a>, &'tcx Session) {
fn raw_bytes(self) -> &'a [u8] {
self.0.raw_bytes()
}
- fn cdata(self) -> Option<&'a CrateMetadata> {
- Some(self.0)
+ fn cdata(self) -> Option<CrateMetadataRef<'a>> {
+ Some(*self.0)
}
fn sess(self) -> Option<&'tcx Session> {
Some(&self.1)
}
}
-impl<'a, 'tcx> Metadata<'a, 'tcx> for (&'a CrateMetadata, TyCtxt<'tcx>) {
+impl<'a, 'tcx> Metadata<'a, 'tcx> for (&'a CrateMetadataRef<'a>, TyCtxt<'tcx>) {
fn raw_bytes(self) -> &'a [u8] {
self.0.raw_bytes()
}
- fn cdata(self) -> Option<&'a CrateMetadata> {
- Some(self.0)
+ fn cdata(self) -> Option<CrateMetadataRef<'a>> {
+ Some(*self.0)
}
fn tcx(self) -> Option<TyCtxt<'tcx>> {
Some(self.1)
self.tcx.expect("missing TyCtxt in DecodeContext")
}
- fn cdata(&self) -> &'a CrateMetadata {
+ fn cdata(&self) -> CrateMetadataRef<'a> {
self.cdata.expect("missing CrateMetadata in DecodeContext")
}
impl<'a, 'tcx> SpecializedDecoder<LocalDefId> for DecodeContext<'a, 'tcx> {
#[inline]
fn specialized_decode(&mut self) -> Result<LocalDefId, Self::Error> {
- self.specialized_decode().map(|i| LocalDefId::from_def_id(i))
+ Ok(DefId::decode(self)?.expect_local())
}
}
return Ok(DUMMY_SP);
}
- debug_assert_eq!(tag, TAG_VALID_SPAN);
+ debug_assert!(tag == TAG_VALID_SPAN_LOCAL || tag == TAG_VALID_SPAN_FOREIGN);
let lo = BytePos::decode(self)?;
let len = BytePos::decode(self)?;
bug!("Cannot decode Span without Session.")
};
- let imported_source_files = self.cdata().imported_source_files(&sess.source_map());
+ // There are two possibilities here:
+ // 1. This is a 'local span', which is located inside a `SourceFile`
+ // that came from this crate. In this case, we use the source map data
+ // encoded in this crate. This branch should be taken nearly all of the time.
+ // 2. This is a 'foreign span', which is located inside a `SourceFile`
+ // that came from a *different* crate (some crate upstream of the one
+ // whose metadata we're looking at). For example, consider this dependency graph:
+ //
+ // A -> B -> C
+ //
+ // Suppose that we're currently compiling crate A, and start deserializing
+ // metadata from crate B. When we deserialize a Span from crate B's metadata,
+ // there are two possibilities:
+ //
+ // 1. The span references a file from crate B. This makes it a 'local' span,
+ // which means that we can use crate B's serialized source map information.
+ // 2. The span references a file from crate C. This makes it a 'foreign' span,
+ // which means we need to use Crate *C* (not crate B) to determine the source
+ // map information. We only record source map information for a file in the
+ // crate that 'owns' it, so deserializing a Span may require us to look at
+ // a transitive dependency.
+ //
+ // When we encode a foreign span, we adjust its 'lo' and 'hi' values
+ // to be based on the *foreign* crate (e.g. crate C), not the crate
+ // we are writing metadata for (e.g. crate B). This allows us to
+ // treat the 'local' and 'foreign' cases almost identically during deserialization:
+ // we can call `imported_source_files` for the proper crate, and binary search
+ // through the returned slice using our span.
+ let imported_source_files = if tag == TAG_VALID_SPAN_LOCAL {
+ self.cdata().imported_source_files(sess.source_map())
+ } else {
+ // FIXME: We don't decode dependencies of proc-macros.
+ // Remove this once #69976 is merged
+ if self.cdata().root.is_proc_macro_crate() {
+ debug!(
+ "SpecializedDecoder<Span>::specialized_decode: skipping span for proc-macro crate {:?}",
+ self.cdata().cnum
+ );
+ // Decode `CrateNum` as u32 - using `CrateNum::decode` will ICE
+ // since we don't have `cnum_map` populated.
+ // This advances the decoder position so that we can continue
+ // to read metadata.
+ let _ = u32::decode(self)?;
+ return Ok(DUMMY_SP);
+ }
+ // tag is TAG_VALID_SPAN_FOREIGN, checked by `debug_assert` above
+ let cnum = CrateNum::decode(self)?;
+ debug!(
+ "SpecializedDecoder<Span>::specialized_decode: loading source files from cnum {:?}",
+ cnum
+ );
+
+ // Decoding 'foreign' spans should be rare enough that it's
+ // not worth it to maintain a per-CrateNum cache for `last_source_file_index`.
+ // We just set it to 0, to ensure that we don't try to access something out
+ // of bounds for our initial 'guess'
+ self.last_source_file_index = 0;
+
+ let foreign_data = self.cdata().cstore.get_crate_data(cnum);
+ foreign_data.imported_source_files(sess.source_map())
+ };
+
let source_file = {
// Optimize for the case that most spans within a translated item
// originate from the same source_file.
.binary_search_by_key(&lo, |source_file| source_file.original_start_pos)
.unwrap_or_else(|index| index - 1);
- self.last_source_file_index = index;
+ // Don't try to cache the index for foreign spans,
+ // as this would require a map from CrateNums to indices
+ if tag == TAG_VALID_SPAN_LOCAL {
+ self.last_source_file_index = index;
+ }
&imported_source_files[index]
}
};
// Make sure our binary search above is correct.
- debug_assert!(lo >= source_file.original_start_pos && lo <= source_file.original_end_pos);
+ debug_assert!(
+ lo >= source_file.original_start_pos && lo <= source_file.original_end_pos,
+ "Bad binary search: lo={:?} source_file.original_start_pos={:?} source_file.original_end_pos={:?}",
+ lo,
+ source_file.original_start_pos,
+ source_file.original_end_pos
+ );
// Make sure we correctly filtered out invalid spans during encoding
- debug_assert!(hi >= source_file.original_start_pos && hi <= source_file.original_end_pos);
+ debug_assert!(
+ hi >= source_file.original_start_pos && hi <= source_file.original_end_pos,
+ "Bad binary search: hi={:?} source_file.original_start_pos={:?} source_file.original_end_pos={:?}",
+ hi,
+ source_file.original_start_pos,
+ source_file.original_end_pos
+ );
let lo =
(lo + source_file.translated_source_file.start_pos) - source_file.original_start_pos;
EntryKind::Struct(_, _) => DefKind::Struct,
EntryKind::Union(_, _) => DefKind::Union,
EntryKind::Fn(_) | EntryKind::ForeignFn(_) => DefKind::Fn,
- EntryKind::Method(_) => DefKind::Method,
+ EntryKind::AssocFn(_) => DefKind::AssocFn,
EntryKind::Type => DefKind::TyAlias,
EntryKind::TypeParam => DefKind::TyParam,
EntryKind::ConstParam => DefKind::ConstParam,
}
}
-impl<'a, 'tcx> CrateMetadata {
- crate fn new(
- sess: &Session,
- blob: MetadataBlob,
- root: CrateRoot<'static>,
- raw_proc_macros: Option<&'static [ProcMacro]>,
- cnum: CrateNum,
- cnum_map: CrateNumMap,
- dep_kind: DepKind,
- source: CrateSource,
- private_dep: bool,
- host_hash: Option<Svh>,
- ) -> CrateMetadata {
- let def_path_table = record_time(&sess.perf_stats.decode_def_path_tables_time, || {
- root.def_path_table.decode((&blob, sess))
- });
- let trait_impls = root
- .impls
- .decode((&blob, sess))
- .map(|trait_impls| (trait_impls.trait_id, trait_impls.impls))
- .collect();
- let alloc_decoding_state =
- AllocDecodingState::new(root.interpret_alloc_index.decode(&blob).collect());
- let dependencies = Lock::new(cnum_map.iter().cloned().collect());
- CrateMetadata {
- blob,
- root,
- def_path_table,
- trait_impls,
- raw_proc_macros,
- source_map_import_info: Once::new(),
- alloc_decoding_state,
- dep_node_index: AtomicCell::new(DepNodeIndex::INVALID),
- cnum,
- cnum_map,
- dependencies,
- dep_kind: Lock::new(dep_kind),
- source,
- private_dep,
- host_hash,
- extern_crate: Lock::new(None),
- }
- }
-
+impl<'a, 'tcx> CrateMetadataRef<'a> {
fn is_proc_macro(&self, id: DefIndex) -> bool {
self.root.proc_macro_data.and_then(|data| data.decode(self).find(|x| *x == id)).is_some()
}
})
}
- fn local_def_id(&self, index: DefIndex) -> DefId {
- DefId { krate: self.cnum, index }
- }
-
fn raw_proc_macro(&self, id: DefIndex) -> &ProcMacro {
// DefIndex's in root.proc_macro_data have a one-to-one correspondence
// with items in 'raw_proc_macros'.
data.paren_sugar,
data.has_auto_impl,
data.is_marker,
+ data.specialization_kind,
self.def_path_table.def_path_hash(item_id),
)
}
false,
false,
false,
+ ty::trait_def::TraitSpecializationKind::None,
self.def_path_table.def_path_hash(item_id),
),
_ => bug!("def-index does not refer to trait or trait alias"),
let (kind, container, has_self) = match self.kind(id) {
EntryKind::AssocConst(container, _, _) => (ty::AssocKind::Const, container, false),
- EntryKind::Method(data) => {
+ EntryKind::AssocFn(data) => {
let data = data.decode(self);
(ty::AssocKind::Method, data.container, data.has_self)
}
.collect()
}
- // Translate a DefId from the current compilation environment to a DefId
- // for an external crate.
- fn reverse_translate_def_id(&self, did: DefId) -> Option<DefId> {
- for (local, &global) in self.cnum_map.iter_enumerated() {
- if global == did.krate {
- return Some(DefId { krate: local, index: did.index });
- }
- }
-
- None
- }
-
fn get_inherent_implementations_for_type(
&self,
tcx: TyCtxt<'tcx>,
fn get_fn_param_names(&self, id: DefIndex) -> Vec<ast::Name> {
let param_names = match self.kind(id) {
EntryKind::Fn(data) | EntryKind::ForeignFn(data) => data.decode(self).param_names,
- EntryKind::Method(data) => data.decode(self).fn_data.param_names,
+ EntryKind::AssocFn(data) => data.decode(self).fn_data.param_names,
_ => Lazy::empty(),
};
param_names.decode(self).collect()
}
}
- fn get_macro(&self, id: DefIndex) -> MacroDef {
+ fn get_macro(&self, id: DefIndex, sess: &Session) -> MacroDef {
match self.kind(id) {
- EntryKind::MacroDef(macro_def) => macro_def.decode(self),
+ EntryKind::MacroDef(macro_def) => macro_def.decode((self, sess)),
_ => bug!(),
}
}
// don't serialize constness for tuple variant and tuple struct constructors.
fn is_const_fn_raw(&self, id: DefIndex) -> bool {
let constness = match self.kind(id) {
- EntryKind::Method(data) => data.decode(self).fn_data.constness,
+ EntryKind::AssocFn(data) => data.decode(self).fn_data.constness,
EntryKind::Fn(data) => data.decode(self).constness,
// Some intrinsics can be const fn. While we could recompute this (at least until we
// stop having hardcoded whitelists and move to stability attributes), it seems cleaner
fn asyncness(&self, id: DefIndex) -> hir::IsAsync {
match self.kind(id) {
EntryKind::Fn(data) => data.decode(self).asyncness,
- EntryKind::Method(data) => data.decode(self).fn_data.asyncness,
+ EntryKind::AssocFn(data) => data.decode(self).fn_data.asyncness,
EntryKind::ForeignFn(data) => data.decode(self).asyncness,
_ => bug!("asyncness: expected function kind"),
}
DefPath::make(self.cnum, id, |parent| self.def_key(parent))
}
- #[inline]
- fn def_path_hash(&self, index: DefIndex) -> DefPathHash {
- self.def_path_table.def_path_hash(index)
- }
-
/// Imports the source_map from an external crate into the source_map of the crate
/// currently being compiled (the "local crate").
///
/// Proc macro crates don't currently export spans, so this function does not have
/// to work for them.
fn imported_source_files(
- &'a self,
+ &self,
local_source_map: &source_map::SourceMap,
- ) -> &[ImportedSourceFile] {
- self.source_map_import_info.init_locking(|| {
+ ) -> &'a [ImportedSourceFile] {
+ self.cdata.source_map_import_info.init_locking(|| {
let external_source_map = self.root.source_map.decode(self);
external_source_map
let local_version = local_source_map.new_imported_source_file(
name,
name_was_remapped,
- self.cnum.as_u32(),
src_hash,
name_hash,
source_length,
+ self.cnum,
lines,
multibyte_chars,
non_narrow_chars,
normalized_pos,
+ start_pos,
+ end_pos,
);
debug!(
"CrateMetaData::imported_source_files alloc \
.collect()
})
}
+}
- /// Get the `DepNodeIndex` corresponding this crate. The result of this
- /// method is cached in the `dep_node_index` field.
- fn get_crate_dep_node_index(&self, tcx: TyCtxt<'tcx>) -> DepNodeIndex {
- let mut dep_node_index = self.dep_node_index.load();
-
- if unlikely!(dep_node_index == DepNodeIndex::INVALID) {
- // We have not cached the DepNodeIndex for this upstream crate yet,
- // so use the dep-graph to find it out and cache it.
- // Note that multiple threads can enter this block concurrently.
- // That is fine because the DepNodeIndex remains constant
- // throughout the whole compilation session, and multiple stores
- // would always write the same value.
-
- let def_path_hash = self.def_path_hash(CRATE_DEF_INDEX);
- let dep_node = def_path_hash.to_dep_node(dep_graph::DepKind::CrateMetadata);
-
- dep_node_index = tcx.dep_graph.dep_node_index_of(&dep_node);
- assert!(dep_node_index != DepNodeIndex::INVALID);
- self.dep_node_index.store(dep_node_index);
+impl CrateMetadata {
+ crate fn new(
+ sess: &Session,
+ blob: MetadataBlob,
+ root: CrateRoot<'static>,
+ raw_proc_macros: Option<&'static [ProcMacro]>,
+ cnum: CrateNum,
+ cnum_map: CrateNumMap,
+ dep_kind: DepKind,
+ source: CrateSource,
+ private_dep: bool,
+ host_hash: Option<Svh>,
+ ) -> CrateMetadata {
+ let def_path_table = record_time(&sess.perf_stats.decode_def_path_tables_time, || {
+ root.def_path_table.decode((&blob, sess))
+ });
+ let trait_impls = root
+ .impls
+ .decode((&blob, sess))
+ .map(|trait_impls| (trait_impls.trait_id, trait_impls.impls))
+ .collect();
+ let alloc_decoding_state =
+ AllocDecodingState::new(root.interpret_alloc_index.decode(&blob).collect());
+ let dependencies = Lock::new(cnum_map.iter().cloned().collect());
+ CrateMetadata {
+ blob,
+ root,
+ def_path_table,
+ trait_impls,
+ raw_proc_macros,
+ source_map_import_info: Once::new(),
+ alloc_decoding_state,
+ dep_node_index: AtomicCell::new(DepNodeIndex::INVALID),
+ cnum,
+ cnum_map,
+ dependencies,
+ dep_kind: Lock::new(dep_kind),
+ source,
+ private_dep,
+ host_hash,
+ extern_crate: Lock::new(None),
}
-
- dep_node_index
}
crate fn dependencies(&self) -> LockGuard<'_, Vec<CrateNum>> {
crate fn hash(&self) -> Svh {
self.root.hash
}
+
+ fn local_def_id(&self, index: DefIndex) -> DefId {
+ DefId { krate: self.cnum, index }
+ }
+
+ // Translate a DefId from the current compilation environment to a DefId
+ // for an external crate.
+ fn reverse_translate_def_id(&self, did: DefId) -> Option<DefId> {
+ for (local, &global) in self.cnum_map.iter_enumerated() {
+ if global == did.krate {
+ return Some(DefId { krate: local, index: did.index });
+ }
+ }
+
+ None
+ }
+
+ #[inline]
+ fn def_path_hash(&self, index: DefIndex) -> DefPathHash {
+ self.def_path_table.def_path_hash(index)
+ }
+
+ /// Get the `DepNodeIndex` corresponding to this crate. The result of this
+ /// method is cached in the `dep_node_index` field.
+ fn get_crate_dep_node_index(&self, tcx: TyCtxt<'tcx>) -> DepNodeIndex {
+ let mut dep_node_index = self.dep_node_index.load();
+
+ if unlikely!(dep_node_index == DepNodeIndex::INVALID) {
+ // We have not cached the DepNodeIndex for this upstream crate yet,
+ // so use the dep-graph to find it out and cache it.
+ // Note that multiple threads can enter this block concurrently.
+ // That is fine because the DepNodeIndex remains constant
+ // throughout the whole compilation session, and multiple stores
+ // would always write the same value.
+
+ let def_path_hash = self.def_path_hash(CRATE_DEF_INDEX);
+ let dep_node =
+ DepNode::from_def_path_hash(def_path_hash, dep_graph::DepKind::CrateMetadata);
+
+ dep_node_index = tcx.dep_graph.dep_node_index_of(&dep_node);
+ assert!(dep_node_index != DepNodeIndex::INVALID);
+ self.dep_node_index.store(dep_node_index);
+ }
+
+ dep_node_index
+ }
}
// Cannot be implemented on 'ProcMacro', as libproc_macro
use rustc::hir::exports::Export;
use rustc::hir::map::definitions::DefPathTable;
use rustc::hir::map::{DefKey, DefPath, DefPathHash};
-use rustc::middle::cstore::{CrateSource, CrateStore, DepKind, EncodedMetadata, NativeLibraryKind};
+use rustc::middle::cstore::{CrateSource, CrateStore, EncodedMetadata, NativeLibraryKind};
use rustc::middle::exported_symbols::ExportedSymbol;
use rustc::middle::stability::DeprecationEntry;
-use rustc::session::{CrateDisambiguator, Session};
use rustc::ty::query::Providers;
use rustc::ty::query::QueryConfig;
use rustc::ty::{self, TyCtxt};
+use rustc_ast::ast;
+use rustc_ast::attr;
+use rustc_ast::expand::allocator::AllocatorKind;
use rustc_data_structures::svh::Svh;
use rustc_hir as hir;
use rustc_hir::def_id::{CrateNum, DefId, DefIdMap, CRATE_DEF_INDEX, LOCAL_CRATE};
-use rustc_parse::parser::emit_unclosed_delims;
-use rustc_parse::source_file_to_stream;
+use rustc_session::{CrateDisambiguator, Session};
+use rustc_span::source_map::{self, Span, Spanned};
+use rustc_span::symbol::Symbol;
use rustc_data_structures::sync::Lrc;
use smallvec::SmallVec;
use std::any::Any;
use std::sync::Arc;
-use rustc_ast::ast;
-use rustc_ast::attr;
-use rustc_ast::expand::allocator::AllocatorKind;
-use rustc_ast::ptr::P;
-use rustc_ast::tokenstream::DelimSpan;
-use rustc_span::source_map;
-use rustc_span::source_map::Spanned;
-use rustc_span::symbol::Symbol;
-use rustc_span::{FileName, Span};
-
macro_rules! provide {
(<$lt:tt> $tcx:ident, $def_id:ident, $other:ident, $cdata:ident,
$($name:ident => $compute:block)*) => {
cdata.get_deprecation(def_id.index).map(DeprecationEntry::external)
}
item_attrs => { cdata.get_item_attrs(def_id.index, tcx.sess) }
- // FIXME(#38501) We've skipped a `read` on the `HirBody` of
+ // FIXME(#38501) We've skipped a `read` on the `hir_owner_items` of
// a `fn` when encoding, so the dep-tracking wouldn't work.
// This is only used by rustdoc anyway, which shouldn't have
// incremental recompilation ever enabled.
}
impl CStore {
- pub fn export_macros_untracked(&self, cnum: CrateNum) {
- let data = self.get_crate_data(cnum);
- let mut dep_kind = data.dep_kind.lock();
- if *dep_kind == DepKind::UnexportedMacrosOnly {
- *dep_kind = DepKind::MacrosOnly;
- }
- }
-
pub fn struct_field_names_untracked(&self, def: DefId, sess: &Session) -> Vec<Spanned<Symbol>> {
self.get_crate_data(def.krate).get_struct_field_names(def.index, sess)
}
return LoadedMacro::ProcMacro(data.load_proc_macro(id.index, sess));
}
- let def = data.get_macro(id.index);
- let macro_full_name = data.def_path(id.index).to_string_friendly(|_| data.root.name);
- let source_name = FileName::Macros(macro_full_name);
-
- let source_file = sess.parse_sess.source_map().new_source_file(source_name, def.body);
- let local_span = Span::with_root_ctxt(source_file.start_pos, source_file.end_pos);
- let dspan = DelimSpan::from_single(local_span);
- let (body, mut errors) = source_file_to_stream(&sess.parse_sess, source_file, None);
- emit_unclosed_delims(&mut errors, &sess.parse_sess);
+ let span = data.get_span(id.index, sess);
// Mark the attrs as used
let attrs = data.get_item_attrs(id.index, sess);
attr::mark_used(attr);
}
- let name = data
+ let ident = data
.def_key(id.index)
.disambiguated_data
.data
.get_opt_name()
+ .map(ast::Ident::with_dummy_span) // FIXME: cross-crate hygiene
.expect("no name in load_macro");
- sess.imported_macro_spans
- .borrow_mut()
- .insert(local_span, (name.to_string(), data.get_span(id.index, sess)));
LoadedMacro::MacroDef(
ast::Item {
- // FIXME: cross-crate hygiene
- ident: ast::Ident::with_dummy_span(name),
+ ident,
id: ast::DUMMY_NODE_ID,
- span: local_span,
+ span,
attrs: attrs.iter().cloned().collect(),
- kind: ast::ItemKind::MacroDef(ast::MacroDef {
- body: P(ast::MacArgs::Delimited(dspan, ast::MacDelimiter::Brace, body)),
- legacy: def.legacy,
- }),
- vis: source_map::respan(local_span.shrink_to_lo(), ast::VisibilityKind::Inherited),
+ kind: ast::ItemKind::MacroDef(data.get_macro(id.index, sess)),
+ vis: source_map::respan(span.shrink_to_lo(), ast::VisibilityKind::Inherited),
tokens: None,
},
data.root.edition,
}
fn def_path_table(&self, cnum: CrateNum) -> &DefPathTable {
- &self.get_crate_data(cnum).def_path_table
+ &self.get_crate_data(cnum).cdata.def_path_table
}
fn crates_untracked(&self) -> Vec<CrateNum> {
use crate::rmeta::table::FixedSizeEncoding;
use crate::rmeta::*;
+use log::{debug, trace};
use rustc::hir::map::definitions::DefPathTable;
use rustc::hir::map::Map;
use rustc::middle::cstore::{EncodedMetadata, ForeignModule, LinkagePreference, NativeLibrary};
use rustc::ty::codec::{self as ty_codec, TyEncoder};
use rustc::ty::layout::VariantIdx;
use rustc::ty::{self, SymbolName, Ty, TyCtxt};
+use rustc_ast::ast;
+use rustc_ast::attr;
use rustc_data_structures::fingerprint::Fingerprint;
+use rustc_data_structures::fx::FxHashMap;
+use rustc_data_structures::stable_hasher::StableHasher;
+use rustc_data_structures::sync::Lrc;
+use rustc_hir as hir;
use rustc_hir::def::CtorKind;
use rustc_hir::def_id::{CrateNum, DefId, DefIndex, LocalDefId, CRATE_DEF_INDEX, LOCAL_CRATE};
+use rustc_hir::intravisit::{self, NestedVisitorMap, Visitor};
+use rustc_hir::itemlikevisit::ItemLikeVisitor;
use rustc_hir::{AnonConst, GenericParamKind};
use rustc_index::vec::Idx;
-
-use rustc::session::config::{self, CrateType};
-use rustc_data_structures::fx::FxHashMap;
-use rustc_data_structures::stable_hasher::StableHasher;
-use rustc_data_structures::sync::Lrc;
use rustc_serialize::{opaque, Encodable, Encoder, SpecializedEncoder};
-
-use log::{debug, trace};
-use rustc_ast::ast;
-use rustc_ast::attr;
+use rustc_session::config::{self, CrateType};
use rustc_span::source_map::Spanned;
use rustc_span::symbol::{kw, sym, Ident, Symbol};
-use rustc_span::{self, FileName, SourceFile, Span};
+use rustc_span::{self, ExternalSource, FileName, SourceFile, Span};
use std::hash::Hash;
use std::num::NonZeroUsize;
use std::path::Path;
use std::u32;
-use rustc_hir as hir;
-use rustc_hir::intravisit::{self, NestedVisitorMap, Visitor};
-use rustc_hir::itemlikevisit::ItemLikeVisitor;
-
struct EncodeContext<'tcx> {
opaque: opaque::Encoder,
tcx: TyCtxt<'tcx>,
return TAG_INVALID_SPAN.encode(self);
}
- // HACK(eddyb) there's no way to indicate which crate a Span is coming
- // from right now, so decoding would fail to find the SourceFile if
- // it's not local to the crate the Span is found in.
- if self.source_file_cache.is_imported() {
- return TAG_INVALID_SPAN.encode(self);
- }
+ // There are two possible cases here:
+ // 1. This span comes from a 'foreign' crate - e.g. some crate upstream of the
+ // crate we are writing metadata for. When the metadata for *this* crate gets
+ // deserialized, the deserializer will need to know which crate it originally came
+ // from. We use `TAG_VALID_SPAN_FOREIGN` to indicate that a `CrateNum` should
+ // be deserialized after the rest of the span data, which tells the deserializer
+ // which crate contains the source map information.
+ // 2. This span comes from our own crate. No special handling is needed - we just
+ // write `TAG_VALID_SPAN_LOCAL` to let the deserializer know that it should use
+ // our own source map information.
+ let (tag, lo, hi) = if self.source_file_cache.is_imported() {
+ // To simplify deserialization, we 'rebase' this span onto the crate it originally came from
+ // (the crate that 'owns' the file it references). These rebased 'lo' and 'hi' values
+ // are relative to the source map information for the 'foreign' crate whose CrateNum
+ // we write into the metadata. This allows `imported_source_files` to binary
+ // search through the 'foreign' crate's source map information, using the
+ // deserialized 'lo' and 'hi' values directly.
+ //
+ // All of this logic ensures that the final result of deserialization is a 'normal'
+ // Span that can be used without any additional trouble.
+ let external_start_pos = {
+ // Introduce a new scope so that we drop the 'lock()' temporary
+ match &*self.source_file_cache.external_src.lock() {
+ ExternalSource::Foreign { original_start_pos, .. } => *original_start_pos,
+ src => panic!("Unexpected external source {:?}", src),
+ }
+ };
+ let lo = (span.lo - self.source_file_cache.start_pos) + external_start_pos;
+ let hi = (span.hi - self.source_file_cache.start_pos) + external_start_pos;
- TAG_VALID_SPAN.encode(self)?;
- span.lo.encode(self)?;
+ (TAG_VALID_SPAN_FOREIGN, lo, hi)
+ } else {
+ (TAG_VALID_SPAN_LOCAL, span.lo, span.hi)
+ };
+
+ tag.encode(self)?;
+ lo.encode(self)?;
// Encode length which is usually less than span.hi and profits more
// from the variable-length integer encoding that we use.
- let len = span.hi - span.lo;
- len.encode(self)
+ let len = hi - lo;
+ len.encode(self)?;
+
+ if tag == TAG_VALID_SPAN_FOREIGN {
+ // This needs to be two lines to avoid holding a borrow of
+ // `self.source_file_cache` across the call to `cnum.encode(self)`
+ let cnum = self.source_file_cache.cnum;
+ cnum.encode(self)?;
+ }
+ Ok(())
// Don't encode the expansion context.
}
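The rebasing arithmetic in the foreign-span branch above reduces to two subtract-and-add translations. A minimal standalone sketch, with plain `u32` offsets standing in for `BytePos` (the names `local_file_start` and `original_start` are illustrative, playing the roles of `source_file_cache.start_pos` and the foreign `original_start_pos`):

```rust
// Translate a (lo, hi) span from this crate's source map coordinates into
// the owning crate's coordinates, mirroring the encoder logic above:
// subtract the position at which the imported file begins locally, then
// add the position at which it begins in the crate that owns it.
fn rebase_span(lo: u32, hi: u32, local_file_start: u32, original_start: u32) -> (u32, u32) {
    ((lo - local_file_start) + original_start, (hi - local_file_start) + original_start)
}

fn main() {
    // A file imported at offset 1_000 in the local source map, but
    // starting at offset 200 in the crate that owns it: a local span
    // [1_010, 1_025) rebases to [210, 225).
    assert_eq!(rebase_span(1_010, 1_025, 1_000, 200), (210, 225));
}
```

With the rebased offsets, the deserializer can binary-search the owning crate's source files directly, which is what the comment about `imported_source_files` relies on.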
fn encode_info_for_items(&mut self) {
let krate = self.tcx.hir().krate();
let vis = Spanned { span: rustc_span::DUMMY_SP, node: hir::VisibilityKind::Public };
- self.encode_info_for_mod(hir::CRATE_HIR_ID, &krate.module, &krate.attrs, &vis);
+ self.encode_info_for_mod(hir::CRATE_HIR_ID, &krate.item.module, &krate.item.attrs, &vis);
krate.visit_all_item_likes(&mut self.as_deep_visitor());
for macro_def in krate.exported_macros {
self.visit_macro_def(macro_def);
edition: tcx.sess.edition(),
has_global_allocator: tcx.has_global_allocator(LOCAL_CRATE),
has_panic_handler: tcx.has_panic_handler(LOCAL_CRATE),
- has_default_lib_allocator: has_default_lib_allocator,
+ has_default_lib_allocator,
plugin_registrar_fn: tcx.plugin_registrar_fn(LOCAL_CRATE).map(|id| id.index),
proc_macro_decls_static: if is_proc_macro {
let id = tcx.proc_macro_decls_static(LOCAL_CRATE).unwrap();
)
}
ty::AssocKind::Method => {
- let fn_data = if let hir::TraitItemKind::Method(m_sig, m) = &ast_item.kind {
+ let fn_data = if let hir::TraitItemKind::Fn(m_sig, m) = &ast_item.kind {
let param_names = match *m {
- hir::TraitMethod::Required(ref names) => {
+ hir::TraitFn::Required(ref names) => {
self.encode_fn_param_names(names)
}
- hir::TraitMethod::Provided(body) => {
+ hir::TraitFn::Provided(body) => {
self.encode_fn_param_names_for_body(body)
}
};
} else {
bug!()
};
- EntryKind::Method(self.lazy(MethodData {
+ EntryKind::AssocFn(self.lazy(AssocFnData {
fn_data,
container,
has_self: trait_item.method_has_self_argument,
}
}
ty::AssocKind::Method => {
- let fn_data = if let hir::ImplItemKind::Method(ref sig, body) = ast_item.kind {
+ let fn_data = if let hir::ImplItemKind::Fn(ref sig, body) = ast_item.kind {
FnData {
asyncness: sig.header.asyncness,
constness: sig.header.constness,
} else {
bug!()
};
- EntryKind::Method(self.lazy(MethodData {
+ EntryKind::AssocFn(self.lazy(AssocFnData {
fn_data,
container,
has_self: impl_item.method_has_self_argument,
self.encode_inferred_outlives(def_id);
let mir = match ast_item.kind {
hir::ImplItemKind::Const(..) => true,
- hir::ImplItemKind::Method(ref sig, _) => {
+ hir::ImplItemKind::Fn(ref sig, _) => {
let generics = self.tcx.generics_of(def_id);
let needs_inline = (generics.requires_monomorphization(self.tcx)
|| tcx.codegen_fn_attrs(def_id).requests_inline())
let polarity = self.tcx.impl_polarity(def_id);
let parent = if let Some(trait_ref) = trait_ref {
let trait_def = self.tcx.trait_def(trait_ref.def_id);
- trait_def.ancestors(self.tcx, def_id).nth(1).and_then(|node| {
- match node {
- specialization_graph::Node::Impl(parent) => Some(parent),
- _ => None,
- }
- })
+ trait_def.ancestors(self.tcx, def_id).ok()
+ .and_then(|mut an| an.nth(1).and_then(|node| {
+ match node {
+ specialization_graph::Node::Impl(parent) => Some(parent),
+ _ => None,
+ }
+ }))
} else {
None
};
paren_sugar: trait_def.paren_sugar,
has_auto_impl: self.tcx.trait_is_auto(def_id),
is_marker: trait_def.is_marker,
+ specialization_kind: trait_def.specialization_kind,
};
EntryKind::Trait(self.lazy(data))
/// Serialize the text of exported macros
fn encode_info_for_macro_def(&mut self, macro_def: &hir::MacroDef<'_>) {
- use rustc_ast_pretty::pprust;
let def_id = self.tcx.hir().local_def_id(macro_def.hir_id);
- record!(self.per_def.kind[def_id] <- EntryKind::MacroDef(self.lazy(MacroDef {
- body: pprust::tts_to_string(macro_def.body.clone()),
- legacy: macro_def.legacy,
- })));
+ record!(self.per_def.kind[def_id] <- EntryKind::MacroDef(self.lazy(macro_def.ast.clone())));
record!(self.per_def.visibility[def_id] <- ty::Visibility::Public);
record!(self.per_def.span[def_id] <- macro_def.span);
record!(self.per_def.attributes[def_id] <- macro_def.attrs);
let is_proc_macro = self.tcx.sess.crate_types.borrow().contains(&CrateType::ProcMacro);
if is_proc_macro {
let tcx = self.tcx;
- Some(self.lazy(tcx.hir().krate().proc_macros.iter().map(|p| p.owner)))
+ Some(self.lazy(tcx.hir().krate().proc_macros.iter().map(|p| p.owner.local_def_index)))
} else {
None
}
.into_iter()
.map(|(trait_def_id, mut impls)| {
// Bring everything into deterministic order for hashing
- impls.sort_by_cached_key(|&def_index| {
- tcx.hir().definitions().def_path_hash(def_index)
+ impls.sort_by_cached_key(|&index| {
+ tcx.hir().definitions().def_path_hash(LocalDefId { local_def_index: index })
});
TraitImpls {
impl Visitor<'tcx> for EncodeContext<'tcx> {
type Map = Map<'tcx>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
- NestedVisitorMap::OnlyBodies(&self.tcx.hir())
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
+ NestedVisitorMap::OnlyBodies(self.tcx.hir())
}
fn visit_expr(&mut self, ex: &'tcx hir::Expr<'tcx>) {
intravisit::walk_expr(self, ex);
use rustc::middle::exported_symbols::{ExportedSymbol, SymbolExportLevel};
use rustc::middle::lang_items;
use rustc::mir;
-use rustc::session::config::SymbolManglingVersion;
-use rustc::session::CrateDisambiguator;
use rustc::ty::{self, ReprOptions, Ty};
-use rustc_ast::ast;
+use rustc_ast::ast::{self, MacroDef};
use rustc_attr as attr;
use rustc_data_structures::svh::Svh;
use rustc_data_structures::sync::MetadataRef;
use rustc_hir::def_id::{DefId, DefIndex};
use rustc_index::vec::IndexVec;
use rustc_serialize::opaque::Encoder;
+use rustc_session::config::SymbolManglingVersion;
+use rustc_session::CrateDisambiguator;
use rustc_span::edition::Edition;
use rustc_span::symbol::Symbol;
use rustc_span::{self, Span};
per_def: LazyPerDefTables<'tcx>,
- /// The DefIndex's of any proc macros delcared by this crate.
+ /// The DefIndex's of any proc macros declared by this crate.
proc_macro_data: Option<Lazy<[DefIndex]>>,
compiler_builtins: bool,
Generator(hir::GeneratorKind),
Trait(Lazy<TraitData>),
Impl(Lazy<ImplData>),
- Method(Lazy<MethodData>),
+ AssocFn(Lazy<AssocFnData>),
AssocType(AssocContainer),
AssocOpaqueTy(AssocContainer),
AssocConst(AssocContainer, mir::ConstQualifs, Lazy<RenderedConst>),
reexports: Lazy<[Export<hir::HirId>]>,
}
-#[derive(RustcEncodable, RustcDecodable)]
-struct MacroDef {
- body: String,
- legacy: bool,
-}
-
#[derive(RustcEncodable, RustcDecodable)]
struct FnData {
asyncness: hir::IsAsync,
paren_sugar: bool,
has_auto_impl: bool,
is_marker: bool,
+ specialization_kind: ty::trait_def::TraitSpecializationKind,
}
#[derive(RustcEncodable, RustcDecodable)]
}
#[derive(RustcEncodable, RustcDecodable)]
-struct MethodData {
+struct AssocFnData {
fn_data: FnData,
container: AssocContainer,
has_self: bool,
}
// Tags used for encoding Spans:
-const TAG_VALID_SPAN: u8 = 0;
-const TAG_INVALID_SPAN: u8 = 1;
+const TAG_VALID_SPAN_LOCAL: u8 = 0;
+const TAG_VALID_SPAN_FOREIGN: u8 = 1;
+const TAG_INVALID_SPAN: u8 = 2;
itertools = "0.8"
log = "0.4"
log_settings = "0.1.1"
-polonius-engine = "0.11.0"
+polonius-engine = "0.12.0"
rustc = { path = "../librustc" }
rustc_ast_pretty = { path = "../librustc_ast_pretty" }
rustc_attr = { path = "../librustc_attr" }
rustc_lexer = { path = "../librustc_lexer" }
rustc_macros = { path = "../librustc_macros" }
rustc_serialize = { path = "../libserialize", package = "serialize" }
+rustc_session = { path = "../librustc_session" }
rustc_target = { path = "../librustc_target" }
+rustc_trait_selection = { path = "../librustc_trait_selection" }
rustc_ast = { path = "../librustc_ast" }
rustc_span = { path = "../librustc_span" }
rustc_apfloat = { path = "../librustc_apfloat" }
use rustc_hir::def_id::DefId;
use rustc_hir::{AsyncGeneratorKind, GeneratorKind};
use rustc_index::vec::Idx;
-use rustc_infer::traits::error_reporting::suggest_constraining_type_param;
use rustc_span::source_map::DesugaringKind;
use rustc_span::Span;
+use rustc_trait_selection::traits::error_reporting::suggest_constraining_type_param;
use crate::dataflow::drop_flag_effects;
use crate::dataflow::indexes::{MoveOutIndex, MovePathIndex};
&mut self,
location: Location,
desired_action: InitializationRequiringAction,
- (moved_place, used_place, span): (PlaceRef<'cx, 'tcx>, PlaceRef<'cx, 'tcx>, Span),
+ (moved_place, used_place, span): (PlaceRef<'tcx>, PlaceRef<'tcx>, Span),
mpi: MovePathIndex,
) {
debug!(
// borrowed place and look for an access to a different field of the same union.
let Place { local, projection } = second_borrowed_place;
- let mut cursor = projection.as_ref();
+ let mut cursor = &projection[..];
while let [proj_base @ .., elem] = cursor {
cursor = proj_base;
GeneratorKind::Async(async_kind) => match async_kind {
AsyncGeneratorKind::Block => "async block",
AsyncGeneratorKind::Closure => "async closure",
- _ => bug!("async block/closure expected, but async funtion found."),
+ _ => bug!("async block/closure expected, but async function found."),
},
GeneratorKind::Gen => "generator",
},
err.buffer(&mut self.errors_buffer);
}
- fn classify_drop_access_kind(&self, place: PlaceRef<'cx, 'tcx>) -> StorageDeadOrDrop<'tcx> {
+ fn classify_drop_access_kind(&self, place: PlaceRef<'tcx>) -> StorageDeadOrDrop<'tcx> {
let tcx = self.infcx.tcx;
match place.projection {
[] => StorageDeadOrDrop::LocalStorageDead,
pub(super) fn add_moved_or_invoked_closure_note(
&self,
location: Location,
- place: PlaceRef<'cx, 'tcx>,
+ place: PlaceRef<'tcx>,
diag: &mut DiagnosticBuilder<'_>,
) {
debug!("add_moved_or_invoked_closure_note: location={:?} place={:?}", location, place);
/// End-user visible description of `place` if one can be found. If the
/// place is a temporary for instance, None will be returned.
- pub(super) fn describe_place(&self, place_ref: PlaceRef<'cx, 'tcx>) -> Option<String> {
+ pub(super) fn describe_place(&self, place_ref: PlaceRef<'tcx>) -> Option<String> {
self.describe_place_with_options(place_ref, IncludingDowncast(false))
}
/// `Downcast` and `IncludingDowncast` is true
pub(super) fn describe_place_with_options(
&self,
- place: PlaceRef<'cx, 'tcx>,
+ place: PlaceRef<'tcx>,
including_downcast: IncludingDowncast,
) -> Option<String> {
let mut buf = String::new();
/// Appends end-user visible description of `place` to `buf`.
fn append_place_to_string(
&self,
- place: PlaceRef<'cx, 'tcx>,
+ place: PlaceRef<'tcx>,
buf: &mut String,
mut autoderef: bool,
including_downcast: &IncludingDowncast,
if self.body.local_decls[local].is_ref_for_guard() =>
{
self.append_place_to_string(
- PlaceRef { local: local, projection: &[] },
+ PlaceRef { local, projection: &[] },
buf,
autoderef,
&including_downcast,
}
/// End-user visible description of the `field`th field of `base`
- fn describe_field(&self, place: PlaceRef<'cx, 'tcx>, field: Field) -> String {
+ fn describe_field(&self, place: PlaceRef<'tcx>, field: Field) -> String {
// FIXME Place2 Make this work iteratively
match place {
PlaceRef { local, projection: [] } => {
pub(super) fn borrowed_content_source(
&self,
- deref_base: PlaceRef<'cx, 'tcx>,
+ deref_base: PlaceRef<'tcx>,
) -> BorrowedContentSource<'tcx> {
let tcx = self.infcx.tcx;
/// Finds the spans associated to a move or copy of move_place at location.
pub(super) fn move_spans(
&self,
- moved_place: PlaceRef<'cx, 'tcx>, // Could also be an upvar.
+ moved_place: PlaceRef<'tcx>, // Could also be an upvar.
location: Location,
) -> UseSpans {
use self::UseSpans::*;
fn closure_span(
&self,
def_id: DefId,
- target_place: PlaceRef<'cx, 'tcx>,
+ target_place: PlaceRef<'tcx>,
places: &Vec<Operand<'tcx>>,
) -> Option<(Span, Option<GeneratorKind>, Span)> {
debug!(
&mut self,
access_place: &Place<'tcx>,
span: Span,
- the_place_err: PlaceRef<'cx, 'tcx>,
+ the_place_err: PlaceRef<'tcx>,
error_access: AccessKind,
location: Location,
) {
if self.body.local_decls[local].is_user_variable() =>
{
let local_decl = &self.body.local_decls[local];
- let suggestion = match local_decl.local_info {
- LocalInfo::User(ClearCrossCrate::Set(mir::BindingForm::ImplicitSelf(_))) => {
- Some(suggest_ampmut_self(self.infcx.tcx, local_decl))
- }
-
- LocalInfo::User(ClearCrossCrate::Set(mir::BindingForm::Var(
- mir::VarBindingForm {
- binding_mode: ty::BindingMode::BindByValue(_),
- opt_ty_info,
- ..
- },
- ))) => Some(suggest_ampmut(
- self.infcx.tcx,
- self.body,
- local,
- local_decl,
- opt_ty_info,
- )),
-
- LocalInfo::User(ClearCrossCrate::Set(mir::BindingForm::Var(
- mir::VarBindingForm {
- binding_mode: ty::BindingMode::BindByReference(_),
- ..
- },
- ))) => {
- let pattern_span = local_decl.source_info.span;
- suggest_ref_mut(self.infcx.tcx, pattern_span)
- .map(|replacement| (pattern_span, replacement))
- }
-
- LocalInfo::User(ClearCrossCrate::Clear) => bug!("saw cleared local state"),
-
- _ => unreachable!(),
- };
let (pointer_sigil, pointer_desc) = if local_decl.ty.is_region_ptr() {
("&", "reference")
("*const", "pointer")
};
- if let Some((err_help_span, suggested_code)) = suggestion {
- err.span_suggestion(
- err_help_span,
- &format!("consider changing this to be a mutable {}", pointer_desc),
- suggested_code,
- Applicability::MachineApplicable,
- );
- }
-
match self.local_names[local] {
Some(name) if !local_decl.from_compiler_desugaring() => {
+ let suggestion = match local_decl.local_info {
+ LocalInfo::User(ClearCrossCrate::Set(
+ mir::BindingForm::ImplicitSelf(_),
+ )) => Some(suggest_ampmut_self(self.infcx.tcx, local_decl)),
+
+ LocalInfo::User(ClearCrossCrate::Set(mir::BindingForm::Var(
+ mir::VarBindingForm {
+ binding_mode: ty::BindingMode::BindByValue(_),
+ opt_ty_info,
+ ..
+ },
+ ))) => Some(suggest_ampmut(
+ self.infcx.tcx,
+ self.body,
+ local,
+ local_decl,
+ opt_ty_info,
+ )),
+
+ LocalInfo::User(ClearCrossCrate::Set(mir::BindingForm::Var(
+ mir::VarBindingForm {
+ binding_mode: ty::BindingMode::BindByReference(_),
+ ..
+ },
+ ))) => {
+ let pattern_span = local_decl.source_info.span;
+ suggest_ref_mut(self.infcx.tcx, pattern_span)
+ .map(|replacement| (pattern_span, replacement))
+ }
+
+ LocalInfo::User(ClearCrossCrate::Clear) => {
+ bug!("saw cleared local state")
+ }
+
+ _ => unreachable!(),
+ };
+
+ if let Some((err_help_span, suggested_code)) = suggestion {
+ err.span_suggestion(
+ err_help_span,
+ &format!("consider changing this to be a mutable {}", pointer_desc),
+ suggested_code,
+ Applicability::MachineApplicable,
+ );
+ }
err.span_label(
span,
format!(
err.buffer(&mut self.errors_buffer);
}
- /// Targetted error when encountering an `FnMut` closure where an `Fn` closure was expected.
+ /// Targeted error when encountering an `FnMut` closure where an `Fn` closure was expected.
fn expected_fn_found_fn_mut_call(&self, err: &mut DiagnosticBuilder<'_>, sp: Span, act: &str) {
err.span_label(sp, format!("cannot {}", act));
}))
| Some(hir::Node::TraitItem(hir::TraitItem {
ident,
- kind: hir::TraitItemKind::Method(sig, _),
+ kind: hir::TraitItemKind::Fn(sig, _),
..
}))
| Some(hir::Node::ImplItem(hir::ImplItem {
ident,
- kind: hir::ImplItemKind::Method(sig, _),
+ kind: hir::ImplItemKind::Fn(sig, _),
..
})) => Some(
arg_pos
hir::Node::Item(hir::Item { ident, kind: hir::ItemKind::Fn(sig, ..), .. })
| hir::Node::TraitItem(hir::TraitItem {
ident,
- kind: hir::TraitItemKind::Method(sig, _),
+ kind: hir::TraitItemKind::Fn(sig, _),
..
})
| hir::Node::ImplItem(hir::ImplItem {
ident,
- kind: hir::ImplItemKind::Method(sig, _),
+ kind: hir::ImplItemKind::Fn(sig, _),
..
}) => {
err.span_label(ident.span, "");
use rustc::ty::{self, RegionVid, Ty};
use rustc_errors::{Applicability, DiagnosticBuilder};
use rustc_infer::infer::{
- error_reporting::nice_region_error::NiceRegionError, opaque_types, NLLRegionVariableOrigin,
+ error_reporting::nice_region_error::NiceRegionError,
+ error_reporting::unexpected_hidden_region_diagnostic, NLLRegionVariableOrigin,
};
use rustc_span::symbol::kw;
use rustc_span::Span;
let region_scope_tree = &self.infcx.tcx.region_scope_tree(self.mir_def_id);
let named_ty = self.regioncx.name_regions(self.infcx.tcx, hidden_ty);
let named_region = self.regioncx.name_regions(self.infcx.tcx, member_region);
- opaque_types::unexpected_hidden_region_diagnostic(
+ unexpected_hidden_region_diagnostic(
self.infcx.tcx,
Some(region_scope_tree),
span,
debug!("report_region_error: category={:?} {:?}", category, span);
// Check if we can use one of the "nice region errors".
if let (Some(f), Some(o)) = (self.to_error_region(fr), self.to_error_region(outlived_fr)) {
- let tables = self.infcx.tcx.typeck_tables_of(self.mir_def_id);
- let nice = NiceRegionError::new_from_span(self.infcx, span, o, f, Some(tables));
+ let nice = NiceRegionError::new_from_span(self.infcx, span, o, f);
if let Some(diag) = nice.try_report_from_nll() {
diag.buffer(&mut self.errors_buffer);
return;
if gen_move.is_some() { " of generator" } else { " of closure" },
),
hir::Node::ImplItem(hir::ImplItem {
- kind: hir::ImplItemKind::Method(method_sig, _),
+ kind: hir::ImplItemKind::Fn(method_sig, _),
..
}) => (method_sig.decl.output.span(), ""),
_ => (self.body.span, ""),
killed,
outlives,
invalidates,
- var_used,
- var_defined,
- var_drop_used,
- var_uses_region,
- var_drops_region,
- child,
- path_belongs_to_var,
- initialized_at,
- moved_out_at,
- path_accessed_at,
+ var_used_at,
+ var_defined_at,
+ var_dropped_at,
+ use_of_var_derefs_origin,
+ drop_of_var_derefs_origin,
+ child_path,
+ path_is_var,
+ path_assigned_at_base,
+ path_moved_at_base,
+ path_accessed_at_base,
known_subset,
])
}
self.mutate_place(location, lhs, Shallow(None), JustWrite);
}
StatementKind::FakeRead(_, _) => {
- // Only relavent for initialized/liveness/safety checks.
+ // Only relevant for initialized/liveness/safety checks.
}
StatementKind::SetDiscriminant { ref place, variant_index: _ } => {
self.mutate_place(location, place, Shallow(None), JustWrite);
// wind up mapped to the same key `S`, we would append the
// linked list for `Ra` onto the end of the linked list for
// `Rb` (or vice versa) -- this basically just requires
- // rewriting the final link from one list to point at the othe
+ // rewriting the final link from one list to point at the other
// (see `append_list`).
let MemberConstraintSet { first_constraints, mut constraints, choice_regions } = self;
//! This query borrow-checks the MIR to (further) ensure it is not broken.
-use rustc::lint::builtin::MUTABLE_BORROW_RESERVATION_CONFLICT;
-use rustc::lint::builtin::UNUSED_MUT;
use rustc::mir::{
read_only, traversal, Body, BodyAndCache, ClearCrossCrate, Local, Location, Mutability,
Operand, Place, PlaceElem, PlaceRef, ReadOnlyBodyAndCache,
use rustc::mir::{Terminator, TerminatorKind};
use rustc::ty::query::Providers;
use rustc::ty::{self, RegionVid, TyCtxt};
-
+use rustc_ast::ast::Name;
use rustc_data_structures::fx::{FxHashMap, FxHashSet};
use rustc_data_structures::graph::dominators::Dominators;
use rustc_errors::{Applicability, Diagnostic, DiagnosticBuilder};
use rustc_index::bit_set::BitSet;
use rustc_index::vec::IndexVec;
use rustc_infer::infer::{InferCtxt, TyCtxtInferExt};
+use rustc_session::lint::builtin::{MUTABLE_BORROW_RESERVATION_CONFLICT, UNUSED_MUT};
+use rustc_span::{Span, DUMMY_SP};
use either::Either;
use smallvec::SmallVec;
use std::mem;
use std::rc::Rc;
-use rustc_ast::ast::Name;
-use rustc_span::{Span, DUMMY_SP};
-
use crate::dataflow;
use crate::dataflow::generic::{Analysis, BorrowckFlowState as Flows, BorrowckResults};
use crate::dataflow::indexes::{BorrowIndex, InitIndex, MoveOutIndex, MovePathIndex};
mutability: Mutability,
}
+const DEREF_PROJECTION: &[PlaceElem<'_>; 1] = &[ProjectionElem::Deref];
+
pub fn provide(providers: &mut Providers<'_>) {
*providers = Providers { mir_borrowck, ..*providers };
}
/// for the activation of the borrow.
reservation_warnings:
FxHashMap<BorrowIndex, (Place<'tcx>, Span, Location, BorrowKind, BorrowData<'tcx>)>,
- /// This field keeps track of move errors that are to be reported for given move indicies.
+ /// This field keeps track of move errors that are to be reported for given move indices.
///
/// There are situations where many errors can be reported for a single move out (see #53807)
/// and we want only the best of those errors.
/// `BTreeMap` is used to preserve the order of insertions when iterating. This is necessary
/// when errors in the map are being re-added to the error buffer so that errors with the
/// same primary span come out in a consistent order.
- move_error_reported: BTreeMap<Vec<MoveOutIndex>, (PlaceRef<'cx, 'tcx>, DiagnosticBuilder<'cx>)>,
+ move_error_reported: BTreeMap<Vec<MoveOutIndex>, (PlaceRef<'tcx>, DiagnosticBuilder<'cx>)>,
/// This field keeps track of errors reported in the checking of uninitialized variables,
/// so that we don't report seemingly duplicate errors.
- uninitialized_error_reported: FxHashSet<PlaceRef<'cx, 'tcx>>,
+ uninitialized_error_reported: FxHashSet<PlaceRef<'tcx>>,
/// Errors to be reported buffer
errors_buffer: Vec<Diagnostic>,
/// This field keeps track of all the local variables that are declared mut and are mutated.
PartialAssignment,
}
-struct RootPlace<'d, 'tcx> {
+struct RootPlace<'tcx> {
place_local: Local,
- place_projection: &'d [PlaceElem<'tcx>],
+ place_projection: &'tcx [PlaceElem<'tcx>],
is_local_mutation_allowed: LocalMutationIsAllowed,
}
) {
debug!("check_for_invalidation_at_exit({:?})", borrow);
let place = &borrow.borrowed_place;
- let deref = [ProjectionElem::Deref];
let mut root_place = PlaceRef { local: place.local, projection: &[] };
// FIXME(nll-rfc#40): do more precise destructor tracking here. For now
// Thread-locals might be dropped after the function exits
// We have to dereference the outer reference because
// borrows don't conflict behind shared references.
- root_place.projection = &deref;
+ root_place.projection = DEREF_PROJECTION;
(true, true)
} else {
(false, self.locals_are_invalidated_at_exit)
&mut self,
location: Location,
desired_action: InitializationRequiringAction,
- place_span: (PlaceRef<'cx, 'tcx>, Span),
+ place_span: (PlaceRef<'tcx>, Span),
flow_state: &Flows<'cx, 'tcx>,
) {
let maybe_uninits = &flow_state.uninits;
&mut self,
location: Location,
desired_action: InitializationRequiringAction,
- place_span: (PlaceRef<'cx, 'tcx>, Span),
+ place_span: (PlaceRef<'tcx>, Span),
maybe_uninits: &BitSet<MovePathIndex>,
from: u32,
to: u32,
&mut self,
location: Location,
desired_action: InitializationRequiringAction,
- place_span: (PlaceRef<'cx, 'tcx>, Span),
+ place_span: (PlaceRef<'tcx>, Span),
flow_state: &Flows<'cx, 'tcx>,
) {
let maybe_uninits = &flow_state.uninits;
/// An Err result includes a tag indicating why the search failed.
/// Currently this can only occur if the place is built off of a
/// static variable, as we do not track those in the MoveData.
- fn move_path_closest_to(
- &mut self,
- place: PlaceRef<'_, 'tcx>,
- ) -> (PlaceRef<'cx, 'tcx>, MovePathIndex) {
+ fn move_path_closest_to(&mut self, place: PlaceRef<'tcx>) -> (PlaceRef<'tcx>, MovePathIndex) {
match self.move_data.rev_lookup.find(place) {
LookupResult::Parent(Some(mpi)) | LookupResult::Exact(mpi) => {
(self.move_data.move_paths[mpi].place.as_ref(), mpi)
}
}
- fn move_path_for_place(&mut self, place: PlaceRef<'_, 'tcx>) -> Option<MovePathIndex> {
+ fn move_path_for_place(&mut self, place: PlaceRef<'tcx>) -> Option<MovePathIndex> {
// If returns None, then there is no move path corresponding
// to a direct owner of `place` (which means there is nothing
// that borrowck tracks for its analysis).
fn check_parent_of_field<'cx, 'tcx>(
this: &mut MirBorrowckCtxt<'cx, 'tcx>,
location: Location,
- base: PlaceRef<'cx, 'tcx>,
+ base: PlaceRef<'tcx>,
span: Span,
flow_state: &Flows<'cx, 'tcx>,
) {
}
/// Adds the place into the used mutable variables set
- fn add_used_mut<'d>(&mut self, root_place: RootPlace<'d, 'tcx>, flow_state: &Flows<'cx, 'tcx>) {
+ fn add_used_mut(&mut self, root_place: RootPlace<'tcx>, flow_state: &Flows<'cx, 'tcx>) {
match root_place {
RootPlace { place_local: local, place_projection: [], is_local_mutation_allowed } => {
// If the local may have been initialized, and it is now currently being
/// Whether this value can be written or borrowed mutably.
/// Returns the root place if the place passed in is a projection.
- fn is_mutable<'d>(
+ fn is_mutable(
&self,
- place: PlaceRef<'d, 'tcx>,
+ place: PlaceRef<'tcx>,
is_local_mutation_allowed: LocalMutationIsAllowed,
- ) -> Result<RootPlace<'d, 'tcx>, PlaceRef<'d, 'tcx>> {
+ ) -> Result<RootPlace<'tcx>, PlaceRef<'tcx>> {
match place {
PlaceRef { local, projection: [] } => {
let local = &self.body.local_decls[local];
/// then returns the index of the field being projected. Note that this closure will always
/// be `self` in the current MIR, because that is the only time we directly access the fields
/// of a closure type.
- pub fn is_upvar_field_projection(&self, place_ref: PlaceRef<'cx, 'tcx>) -> Option<Field> {
+ pub fn is_upvar_field_projection(&self, place_ref: PlaceRef<'tcx>) -> Option<Field> {
let mut place_projection = place_ref.projection;
let mut by_ref = false;
body: &Body<'_>,
) {
all_facts
- .path_belongs_to_var
+ .path_is_var
.extend(move_data.rev_lookup.iter_locals_enumerated().map(|(v, &m)| (m, v)));
for (child, move_path) in move_data.move_paths.iter_enumerated() {
- all_facts
- .child
- .extend(move_path.parents(&move_data.move_paths).map(|(parent, _)| (child, parent)));
+ if let Some(parent) = move_path.parent {
+ all_facts.child_path.push((child, parent));
+ }
}
+ let fn_entry_start = location_table
+ .start_index(Location { block: BasicBlock::from_u32(0u32), statement_index: 0 });
+
// initialized_at
for init in move_data.inits.iter() {
match init.location {
// the successors, but not in the unwind block.
let first_statement = Location { block: successor, statement_index: 0 };
all_facts
- .initialized_at
+ .path_assigned_at_base
.push((init.path, location_table.start_index(first_statement)));
}
} else {
// In all other cases, the initialization just happens at the
// midpoint, like any other effect.
- all_facts.initialized_at.push((init.path, location_table.mid_index(location)));
+ all_facts
+ .path_assigned_at_base
+ .push((init.path, location_table.mid_index(location)));
}
}
// Arguments are initialized on function entry
InitLocation::Argument(local) => {
assert!(body.local_kind(local) == LocalKind::Arg);
- let fn_entry = Location { block: BasicBlock::from_u32(0u32), statement_index: 0 };
- all_facts.initialized_at.push((init.path, location_table.start_index(fn_entry)));
+ all_facts.path_assigned_at_base.push((init.path, fn_entry_start));
}
}
}
+ for (local, &path) in move_data.rev_lookup.iter_locals_enumerated() {
+ if body.local_kind(local) != LocalKind::Arg {
+ // Non-arguments start out deinitialised; we simulate this with an
+ // initial move:
+ all_facts.path_moved_at_base.push((path, fn_entry_start));
+ }
+ }
+
// moved_out_at
// deinitialisation is assumed to always happen!
all_facts
- .moved_out_at
+ .path_moved_at_base
.extend(move_data.moves.iter().map(|mo| (mo.path, location_table.mid_index(mo.source))));
}
// Dump facts if requested.
let polonius_output = all_facts.and_then(|all_facts| {
if infcx.tcx.sess.opts.debugging_opts.nll_facts {
- let def_path = infcx.tcx.hir().def_path(def_id);
+ let def_path = infcx.tcx.def_path(def_id);
let dir_path =
PathBuf::from("nll-facts").join(def_path.to_filename_friendly_no_crate());
all_facts.write_to_dir(dir_path, location_table).unwrap();
body: &Body<'tcx>,
borrow_place: &Place<'tcx>,
borrow_kind: BorrowKind,
- access_place: PlaceRef<'_, 'tcx>,
+ access_place: PlaceRef<'tcx>,
access: AccessDepth,
bias: PlaceConflictBias,
) -> bool {
body: &Body<'tcx>,
borrow_place: &Place<'tcx>,
borrow_kind: BorrowKind,
- access_place: PlaceRef<'_, 'tcx>,
+ access_place: PlaceRef<'tcx>,
access: AccessDepth,
bias: PlaceConflictBias,
) -> bool {
use rustc::ty::{self, TyCtxt};
use rustc_hir as hir;
-pub trait IsPrefixOf<'cx, 'tcx> {
- fn is_prefix_of(&self, other: PlaceRef<'cx, 'tcx>) -> bool;
+pub trait IsPrefixOf<'tcx> {
+ fn is_prefix_of(&self, other: PlaceRef<'tcx>) -> bool;
}
-impl<'cx, 'tcx> IsPrefixOf<'cx, 'tcx> for PlaceRef<'cx, 'tcx> {
- fn is_prefix_of(&self, other: PlaceRef<'cx, 'tcx>) -> bool {
+impl<'tcx> IsPrefixOf<'tcx> for PlaceRef<'tcx> {
+ fn is_prefix_of(&self, other: PlaceRef<'tcx>) -> bool {
self.local == other.local
&& self.projection.len() <= other.projection.len()
&& self.projection == &other.projection[..self.projection.len()]
body: ReadOnlyBodyAndCache<'cx, 'tcx>,
tcx: TyCtxt<'tcx>,
kind: PrefixSet,
- next: Option<PlaceRef<'cx, 'tcx>>,
+ next: Option<PlaceRef<'tcx>>,
}
#[derive(Copy, Clone, PartialEq, Eq, Debug)]
/// terminating the iteration early based on `kind`.
pub(super) fn prefixes(
&self,
- place_ref: PlaceRef<'cx, 'tcx>,
+ place_ref: PlaceRef<'tcx>,
kind: PrefixSet,
) -> Prefixes<'cx, 'tcx> {
Prefixes { next: Some(place_ref), kind, body: self.body, tcx: self.infcx.tcx }
}
impl<'cx, 'tcx> Iterator for Prefixes<'cx, 'tcx> {
- type Item = PlaceRef<'cx, 'tcx>;
+ type Item = PlaceRef<'tcx>;
fn next(&mut self) -> Option<Self::Item> {
let mut cursor = self.next?;
};
use rustc::ty::{self, subst::SubstsRef, RegionVid, Ty, TyCtxt, TypeFoldable};
use rustc_data_structures::binary_search_util;
+use rustc_data_structures::frozen::Frozen;
use rustc_data_structures::fx::{FxHashMap, FxHashSet};
use rustc_data_structures::graph::scc::Sccs;
use rustc_hir::def_id::DefId;
liveness_constraints: LivenessValues<RegionVid>,
/// The outlives constraints computed by the type-check.
- constraints: Rc<OutlivesConstraintSet>,
+ constraints: Frozen<OutlivesConstraintSet>,
/// The constraint-set, but in graph form, making it easy to traverse
/// the constraints adjacent to a particular region. Used to construct
/// the SCC (see `constraint_sccs`) and for error reporting.
- constraint_graph: Rc<NormalConstraintGraph>,
+ constraint_graph: Frozen<NormalConstraintGraph>,
/// The SCC computed from `constraints` and the constraint
/// graph. We have an edge from SCC A to SCC B if `A: B`. Used to
/// Information about how the universally quantified regions in
/// scope on this function relate to one another.
- universal_region_relations: Rc<UniversalRegionRelations<'tcx>>,
+ universal_region_relations: Frozen<UniversalRegionRelations<'tcx>>,
}
/// Each time that `apply_member_constraint` is successful, it appends
///
/// The `outlives_constraints` and `type_tests` are an initial set
/// of constraints produced by the MIR type check.
- pub(crate) fn new(
+ pub(in crate::borrow_check) fn new(
var_infos: VarInfos,
universal_regions: Rc<UniversalRegions<'tcx>>,
placeholder_indices: Rc<PlaceholderIndices>,
- universal_region_relations: Rc<UniversalRegionRelations<'tcx>>,
+ universal_region_relations: Frozen<UniversalRegionRelations<'tcx>>,
outlives_constraints: OutlivesConstraintSet,
member_constraints_in: MemberConstraintSet<'tcx, RegionVid>,
closure_bounds_mapping: FxHashMap<
.map(|info| RegionDefinition::new(info.universe, info.origin))
.collect();
- let constraints = Rc::new(outlives_constraints); // freeze constraints
- let constraint_graph = Rc::new(constraints.graph(definitions.len()));
+ let constraints = Frozen::freeze(outlives_constraints);
+ let constraint_graph = Frozen::freeze(constraints.graph(definitions.len()));
let fr_static = universal_regions.fr_static;
let constraint_sccs = Rc::new(constraints.compute_sccs(&constraint_graph, fr_static));
use rustc_hir::def_id::DefId;
use rustc_infer::infer::InferCtxt;
use rustc_span::Span;
+use rustc_trait_selection::opaque_types::InferCtxtExt;
use super::RegionInferenceContext;
/// Each of the regions in num_region_variables will be initialized with an
/// empty set of points and no causal information.
crate fn new(elements: Rc<RegionValueElements>) -> Self {
- Self { points: SparseBitMatrix::new(elements.num_points), elements: elements }
+ Self { points: SparseBitMatrix::new(elements.num_points), elements }
}
/// Iterate through each region that has a value in this set.
use rustc::mir::ConstraintCategory;
+use rustc::traits::query::OutlivesBound;
use rustc::ty::free_region_map::FreeRegionRelations;
use rustc::ty::{self, RegionVid, Ty, TyCtxt};
+use rustc_data_structures::frozen::Frozen;
use rustc_data_structures::transitive_relation::TransitiveRelation;
use rustc_infer::infer::canonical::QueryRegionConstraints;
+use rustc_infer::infer::outlives;
use rustc_infer::infer::region_constraints::GenericKind;
use rustc_infer::infer::InferCtxt;
-use rustc_infer::traits::query::outlives_bounds::{self, OutlivesBound};
-use rustc_infer::traits::query::type_op::{self, TypeOp};
use rustc_span::DUMMY_SP;
+use rustc_trait_selection::traits::query::type_op::{self, TypeOp};
use std::rc::Rc;
use crate::borrow_check::{
type NormalizedInputsAndOutput<'tcx> = Vec<Ty<'tcx>>;
crate struct CreateResult<'tcx> {
- crate universal_region_relations: Rc<UniversalRegionRelations<'tcx>>,
+ pub(in crate::borrow_check) universal_region_relations: Frozen<UniversalRegionRelations<'tcx>>,
crate region_bound_pairs: RegionBoundPairs<'tcx>,
crate normalized_inputs_and_output: NormalizedInputsAndOutput<'tcx>,
}
// Insert the facts we know from the predicates. Why? Why not.
let param_env = self.param_env;
- self.add_outlives_bounds(outlives_bounds::explicit_outlives_bounds(param_env));
+ self.add_outlives_bounds(outlives::explicit_outlives_bounds(param_env));
// Finally:
// - outlives is reflexive, so `'r: 'r` for every region `'r`
}
CreateResult {
- universal_region_relations: Rc::new(self.relations),
+ universal_region_relations: Frozen::freeze(self.relations),
region_bound_pairs: self.region_bound_pairs,
normalized_inputs_and_output,
}
}
};
+ debug!(
+ "equate_inputs_and_outputs: normalized_input_tys = {:?}, local_decls = {:?}",
+ normalized_input_tys, body.local_decls
+ );
+
// Equate expected input tys with those in the MIR.
for (&normalized_input_ty, argument_index) in normalized_input_tys.iter().zip(0..) {
// In MIR, argument N is stored in local N+1.
let local = Local::new(argument_index + 1);
- debug!("equate_inputs_and_outputs: normalized_input_ty = {:?}", normalized_input_ty);
-
let mir_input_ty = body.local_decls[local].ty;
let mir_input_span = body.local_decls[local].source_info.span;
self.equate_normalized_input_or_output(
type PathPointRelation = Vec<(MovePathIndex, LocationIndex)>;
struct UseFactsExtractor<'me> {
- var_defined: &'me mut VarPointRelation,
- var_used: &'me mut VarPointRelation,
+ var_defined_at: &'me mut VarPointRelation,
+ var_used_at: &'me mut VarPointRelation,
location_table: &'me LocationTable,
- var_drop_used: &'me mut Vec<(Local, Location)>,
+ var_dropped_at: &'me mut VarPointRelation,
move_data: &'me MoveData<'me>,
- path_accessed_at: &'me mut PathPointRelation,
+ path_accessed_at_base: &'me mut PathPointRelation,
}
// A Visitor to walk through the MIR and extract point-wise facts
fn insert_def(&mut self, local: Local, location: Location) {
debug!("UseFactsExtractor::insert_def()");
- self.var_defined.push((local, self.location_to_index(location)));
+ self.var_defined_at.push((local, self.location_to_index(location)));
}
fn insert_use(&mut self, local: Local, location: Location) {
debug!("UseFactsExtractor::insert_use()");
- self.var_used.push((local, self.location_to_index(location)));
+ self.var_used_at.push((local, self.location_to_index(location)));
}
fn insert_drop_use(&mut self, local: Local, location: Location) {
debug!("UseFactsExtractor::insert_drop_use()");
- self.var_drop_used.push((local, location));
+ self.var_dropped_at.push((local, self.location_to_index(location)));
}
fn insert_path_access(&mut self, path: MovePathIndex, location: Location) {
debug!("UseFactsExtractor::insert_path_access({:?}, {:?})", path, location);
- self.path_accessed_at.push((path, self.location_to_index(location)));
+ self.path_accessed_at_base.push((path, self.location_table.start_index(location)));
}
fn place_to_mpi(&self, place: &Place<'_>) -> Option<MovePathIndex> {
body: ReadOnlyBodyAndCache<'_, 'tcx>,
location_table: &LocationTable,
move_data: &MoveData<'_>,
- drop_used: &mut Vec<(Local, Location)>,
+ dropped_at: &mut Vec<(Local, Location)>,
) {
debug!("populate_access_facts()");
if let Some(facts) = typeck.borrowck_context.all_facts.as_mut() {
let mut extractor = UseFactsExtractor {
- var_defined: &mut facts.var_defined,
- var_used: &mut facts.var_used,
- var_drop_used: drop_used,
- path_accessed_at: &mut facts.path_accessed_at,
+ var_defined_at: &mut facts.var_defined_at,
+ var_used_at: &mut facts.var_used_at,
+ var_dropped_at: &mut facts.var_dropped_at,
+ path_accessed_at_base: &mut facts.path_accessed_at_base,
location_table,
move_data,
};
extractor.visit_body(body);
- facts.var_drop_used.extend(
- drop_used.iter().map(|&(local, location)| (local, location_table.mid_index(location))),
+ facts.var_dropped_at.extend(
+ dropped_at.iter().map(|&(local, location)| (local, location_table.mid_index(location))),
);
for (local, local_decl) in body.local_decls.iter_enumerated() {
- debug!("add var_uses_regions facts - local={:?}, type={:?}", local, local_decl.ty);
+ debug!(
+ "add use_of_var_derefs_origin facts - local={:?}, type={:?}",
+ local, local_decl.ty
+ );
let _prof_timer = typeck.infcx.tcx.prof.generic_activity("polonius_fact_generation");
let universal_regions = &typeck.borrowck_context.universal_regions;
typeck.infcx.tcx.for_each_free_region(&local_decl.ty, |region| {
let region_vid = universal_regions.to_region_vid(region);
- facts.var_uses_region.push((local, region_vid));
+ facts.use_of_var_derefs_origin.push((local, region_vid));
});
}
}
}
// For every potentially drop()-touched region `region` in `local`'s type
-// (`kind`), emit a Polonius `var_drops_region(local, region)` fact.
-pub(super) fn add_var_drops_regions(
+// (`kind`), emit a Polonius `drop_of_var_derefs_origin(local, origin)` fact.
+pub(super) fn add_drop_of_var_derefs_origin(
typeck: &mut TypeChecker<'_, 'tcx>,
local: Local,
kind: &GenericArg<'tcx>,
) {
- debug!("add_var_drops_region(local={:?}, kind={:?}", local, kind);
+ debug!("add_drop_of_var_derefs_origin(local={:?}, kind={:?})", local, kind);
if let Some(facts) = typeck.borrowck_context.all_facts.as_mut() {
let _prof_timer = typeck.infcx.tcx.prof.generic_activity("polonius_fact_generation");
let universal_regions = &typeck.borrowck_context.universal_regions;
typeck.infcx.tcx.for_each_free_region(kind, |drop_live_region| {
let region_vid = universal_regions.to_region_vid(drop_live_region);
- facts.var_drops_region.push((local, region_vid));
+ facts.drop_of_var_derefs_origin.push((local, region_vid));
});
}
}
use rustc_data_structures::fx::{FxHashMap, FxHashSet};
use rustc_index::bit_set::HybridBitSet;
use rustc_infer::infer::canonical::QueryRegionConstraints;
-use rustc_infer::traits::query::dropck_outlives::DropckOutlivesResult;
-use rustc_infer::traits::query::type_op::outlives::DropckOutlives;
-use rustc_infer::traits::query::type_op::TypeOp;
+use rustc_trait_selection::traits::query::dropck_outlives::DropckOutlivesResult;
+use rustc_trait_selection::traits::query::type_op::outlives::DropckOutlives;
+use rustc_trait_selection::traits::query::type_op::TypeOp;
use std::rc::Rc;
use crate::dataflow::generic::ResultsCursor;
for &kind in &drop_data.dropck_result.kinds {
Self::make_all_regions_live(self.elements, &mut self.typeck, kind, live_at);
- polonius::add_var_drops_regions(&mut self.typeck, dropped_local, &kind);
+ polonius::add_drop_of_var_derefs_origin(&mut self.typeck, dropped_local, &kind);
}
}
self, CanonicalUserTypeAnnotation, CanonicalUserTypeAnnotations, RegionVid, ToPolyTraitRef, Ty,
TyCtxt, UserType, UserTypeAnnotationIndex,
};
+use rustc_data_structures::frozen::Frozen;
use rustc_data_structures::fx::{FxHashMap, FxHashSet};
use rustc_errors::struct_span_err;
use rustc_hir as hir;
use rustc_hir::def_id::DefId;
use rustc_index::vec::{Idx, IndexVec};
use rustc_infer::infer::canonical::QueryRegionConstraints;
-use rustc_infer::infer::opaque_types::GenerateMemberConstraints;
use rustc_infer::infer::outlives::env::RegionBoundPairs;
use rustc_infer::infer::type_variable::{TypeVariableOrigin, TypeVariableOriginKind};
use rustc_infer::infer::{
InferCtxt, InferOk, LateBoundRegionConversionTime, NLLRegionVariableOrigin,
};
-use rustc_infer::traits::query::type_op;
-use rustc_infer::traits::query::type_op::custom::CustomTypeOp;
-use rustc_infer::traits::query::{Fallible, NoSolution};
-use rustc_infer::traits::{self, ObligationCause, PredicateObligations};
use rustc_span::{Span, DUMMY_SP};
+use rustc_trait_selection::infer::InferCtxtExt as _;
+use rustc_trait_selection::opaque_types::{GenerateMemberConstraints, InferCtxtExt};
+use rustc_trait_selection::traits::error_reporting::InferCtxtExt as _;
+use rustc_trait_selection::traits::query::type_op;
+use rustc_trait_selection::traits::query::type_op::custom::CustomTypeOp;
+use rustc_trait_selection::traits::query::{Fallible, NoSolution};
+use rustc_trait_selection::traits::{self, ObligationCause, PredicateObligations};
use crate::dataflow::generic::ResultsCursor;
use crate::dataflow::move_paths::MoveData;
| ConstraintCategory::UseAsConst
| ConstraintCategory::UseAsStatic = constraint.category
{
- // "Returning" from a promoted is an assigment to a
+ // "Returning" from a promoted is an assignment to a
// temporary from the user's point of view.
constraint.category = ConstraintCategory::Boring;
}
crate struct MirTypeckResults<'tcx> {
crate constraints: MirTypeckRegionConstraints<'tcx>,
- crate universal_region_relations: Rc<UniversalRegionRelations<'tcx>>,
+ pub(in crate::borrow_check) universal_region_relations: Frozen<UniversalRegionRelations<'tcx>>,
crate opaque_type_values: FxHashMap<DefId, ty::ResolvedOpaqueTy<'tcx>>,
}
// expressions evaluate through `as_temp` or `into` a return
// slot or local, so to find all unsized rvalues it is enough
// to check all temps, return slots and locals.
- if let None = self.reported_errors.replace((ty, span)) {
+ if self.reported_errors.replace((ty, span)).is_none() {
let mut diag = struct_span_err!(
self.tcx().sess,
span,
&traits::Obligation::new(
ObligationCause::new(
span,
- self.tcx().hir().def_index_to_hir_id(self.mir_def_id.index),
+ self.tcx()
+ .hir()
+ .local_def_id_to_hir_id(self.mir_def_id.expect_local()),
traits::ObligationCauseCode::RepeatVec(should_suggest),
),
self.param_env,
use rustc::ty::{self, Ty};
use rustc_infer::infer::nll_relate::{NormalizationStrategy, TypeRelating, TypeRelatingDelegate};
use rustc_infer::infer::{InferCtxt, NLLRegionVariableOrigin};
-use rustc_infer::traits::query::Fallible;
-use rustc_infer::traits::DomainGoal;
+use rustc_trait_selection::traits::query::Fallible;
+use rustc_trait_selection::traits::DomainGoal;
use crate::borrow_check::constraints::OutlivesConstraint;
use crate::borrow_check::type_check::{BorrowCheckContext, Locations};
}
fn next_existential_region_var(&mut self, from_forall: bool) -> ty::Region<'tcx> {
- if let Some(_) = &mut self.borrowck_context {
+ if self.borrowck_context.is_some() {
let origin = NLLRegionVariableOrigin::Existential { from_forall };
self.infcx.next_nll_region_var(origin)
} else {
defining_ty,
unnormalized_output_ty,
unnormalized_input_tys,
- yield_ty: yield_ty,
+ yield_ty,
}
}
fn_def_id: DefId,
mut f: impl FnMut(ty::Region<'tcx>),
) {
- if let Some(late_bounds) = tcx.is_late_bound_map(fn_def_id.index) {
+ if let Some(late_bounds) = tcx.is_late_bound_map(fn_def_id.expect_local()) {
for late_bound in late_bounds.iter() {
- let hir_id = HirId { owner: fn_def_id.index, local_id: *late_bound };
+ let hir_id = HirId { owner: fn_def_id.expect_local(), local_id: *late_bound };
let name = tcx.hir().name(hir_id);
let region_def_id = tcx.hir().local_def_id(hir_id);
let liberated_region = tcx.mk_region(ty::ReFree(ty::FreeRegion {
InterpCx::new(
tcx.at(span),
param_env,
- CompileTimeInterpreter::new(),
+ CompileTimeInterpreter::new(*tcx.sess.const_eval_limit.get()),
MemoryExtra { can_access_statics },
)
}
if cid.promoted.is_none() {
let mut ref_tracking = RefTracking::new(mplace);
while let Some((mplace, path)) = ref_tracking.todo.pop() {
- ecx.validate_operand(mplace.into(), path, Some(&mut ref_tracking))?;
+ ecx.const_validate_operand(
+ mplace.into(),
+ path,
+ &mut ref_tracking,
+ /*may_ref_to_static*/ is_static,
+ )?;
}
}
// Now that we validated, turn this into a proper constant.
match tcx.const_eval_validated(key) {
// try again with reveal all as requested
Err(ErrorHandled::TooGeneric) => {}
- // dedupliate calls
+ // deduplicate calls
other => return other,
}
}
match tcx.const_eval_raw(key) {
// try again with reveal all as requested
Err(ErrorHandled::TooGeneric) => {}
- // dedupliate calls
+ // deduplicate calls
other => return other,
}
}
let mut ecx = InterpCx::new(
tcx.at(span),
key.param_env,
- CompileTimeInterpreter::new(),
+ CompileTimeInterpreter::new(*tcx.sess.const_eval_limit.get()),
MemoryExtra { can_access_statics: is_static },
);
pub fn is_parent_const_impl_raw(tcx: TyCtxt<'_>, hir_id: hir::HirId) -> bool {
let parent_id = tcx.hir().get_parent_did(hir_id);
if !parent_id.is_top_level_module() {
- is_const_impl_raw(tcx, LocalDefId::from_def_id(parent_id))
+ is_const_impl_raw(tcx, parent_id.expect_local())
} else {
false
}
pub fn provide(providers: &mut Providers<'_>) {
*providers = Providers {
is_const_fn_raw,
- is_const_impl_raw: |tcx, def_id| is_const_impl_raw(tcx, LocalDefId::from_def_id(def_id)),
+ is_const_impl_raw: |tcx, def_id| is_const_impl_raw(tcx, def_id.expect_local()),
is_promotable_const_fn,
const_fn_is_allowed_fn_ptr,
..*providers
use rustc::mir;
use rustc::ty::layout::HasTyCtxt;
-use rustc::ty::{self, Ty, TyCtxt};
-use rustc_hir::def_id::DefId;
+use rustc::ty::{self, Ty};
use std::borrow::{Borrow, Cow};
use std::collections::hash_map::Entry;
+use std::convert::TryFrom;
use std::hash::Hash;
use rustc_data_structures::fx::FxHashMap;
}
}
-/// Number of steps until the detector even starts doing anything.
-/// Also, a warning is shown to the user when this number is reached.
-const STEPS_UNTIL_DETECTOR_ENABLED: isize = 1_000_000;
/// The number of steps between loop detector snapshots.
/// Should be a power of two for performance reasons.
const DETECTOR_SNAPSHOT_PERIOD: isize = 256;
/// detector period.
pub(super) steps_since_detector_enabled: isize,
+ pub(super) is_detector_enabled: bool,
+
/// Extra state to detect loops.
pub(super) loop_detector: snapshot::InfiniteLoopDetector<'mir, 'tcx>,
}
}
impl<'mir, 'tcx> CompileTimeInterpreter<'mir, 'tcx> {
- pub(super) fn new() -> Self {
+ pub(super) fn new(const_eval_limit: usize) -> Self {
+ let steps_until_detector_enabled =
+ isize::try_from(const_eval_limit).unwrap_or(std::isize::MAX);
+
CompileTimeInterpreter {
loop_detector: Default::default(),
- steps_since_detector_enabled: -STEPS_UNTIL_DETECTOR_ENABLED,
+ steps_since_detector_enabled: -steps_until_detector_enabled,
+ is_detector_enabled: const_eval_limit != 0,
}
}
}
Ok(Some(match ecx.load_mir(instance.def, None) {
Ok(body) => *body,
Err(err) => {
- if let err_unsup!(NoMirFor(ref path)) = err.kind {
+ if let err_unsup!(NoMirFor(did)) = err.kind {
+ let path = ecx.tcx.def_path_str(did);
return Err(ConstEvalErrKind::NeedsRfc(format!(
"calling extern function `{}`",
path
fn assert_panic(
ecx: &mut InterpCx<'mir, 'tcx, Self>,
- _span: Span,
msg: &AssertMessage<'tcx>,
_unwind: Option<mir::BasicBlock>,
) -> InterpResult<'tcx> {
Err(ConstEvalErrKind::NeedsRfc("pointer arithmetic or comparison".to_string()).into())
}
- fn find_foreign_static(
- _tcx: TyCtxt<'tcx>,
- _def_id: DefId,
- ) -> InterpResult<'tcx, Cow<'tcx, Allocation<Self::PointerTag>>> {
- throw_unsup!(ReadForeignStatic)
- }
-
#[inline(always)]
fn init_allocation_extra<'b>(
_memory_extra: &MemoryExtra,
}
fn before_terminator(ecx: &mut InterpCx<'mir, 'tcx, Self>) -> InterpResult<'tcx> {
+ if !ecx.machine.is_detector_enabled {
+ return Ok(());
+ }
+
{
let steps = &mut ecx.machine.steps_since_detector_enabled;
op_to_const(&ecx, field)
}
-pub(crate) fn const_caller_location<'tcx>(
+pub(crate) fn const_caller_location(
tcx: TyCtxt<'tcx>,
(file, line, col): (Symbol, u32, u32),
) -> ConstValue<'tcx> {
use std::borrow::Borrow;
-use rustc::mir::{self, BasicBlock, Location};
+use rustc::mir::{self, BasicBlock, Location, TerminatorKind};
use rustc_index::bit_set::BitSet;
use super::{Analysis, Results};
pos: CursorPosition,
- /// When this flag is set, the cursor is pointing at a `Call` terminator whose call return
- /// effect has been applied to `state`.
+ /// When this flag is set, the cursor is pointing at a `Call` or `Yield` terminator whose call
+ /// return or resume effect has been applied to `state`.
///
- /// This flag helps to ensure that multiple calls to `seek_after_assume_call_returns` with the
+ /// This flag helps to ensure that multiple calls to `seek_after_assume_success` with the
/// same target will result in exactly one invocation of `apply_call_return_effect`. It is
/// sufficient to clear this only in `seek_to_block_start`, since seeking away from a
/// terminator will always require a cursor reset.
- call_return_effect_applied: bool,
+ success_effect_applied: bool,
}
impl<'mir, 'tcx, A, R> ResultsCursor<'mir, 'tcx, A, R>
body,
pos: CursorPosition::BlockStart(mir::START_BLOCK),
state: results.borrow().entry_sets[mir::START_BLOCK].clone(),
- call_return_effect_applied: false,
+ success_effect_applied: false,
results,
}
}
pub fn seek_to_block_start(&mut self, block: BasicBlock) {
self.state.overwrite(&self.results.borrow().entry_sets[block]);
self.pos = CursorPosition::BlockStart(block);
- self.call_return_effect_applied = false;
+ self.success_effect_applied = false;
}
/// Advances the cursor to hold all effects up to and including to the "before" effect of the
/// statement (or terminator) at the given location.
///
/// If you wish to observe the full effect of a statement or terminator, not just the "before"
- /// effect, use `seek_after` or `seek_after_assume_call_returns`.
+ /// effect, use `seek_after` or `seek_after_assume_success`.
pub fn seek_before(&mut self, target: Location) {
assert!(target <= self.body.terminator_loc(target.block));
self.seek_(target, false);
/// terminators) up to and including the `target`.
///
/// If the `target` is a `Call` terminator, any call return effect for that terminator will
- /// **not** be observed. Use `seek_after_assume_call_returns` if you wish to observe the call
+ /// **not** be observed. Use `seek_after_assume_success` if you wish to observe the call
/// return effect.
pub fn seek_after(&mut self, target: Location) {
assert!(target <= self.body.terminator_loc(target.block));
// If we have already applied the call return effect, we are currently pointing at a `Call`
// terminator. Unconditionally reset the dataflow cursor, since there is no way to "undo"
// the call return effect.
- if self.call_return_effect_applied {
+ if self.success_effect_applied {
self.seek_to_block_start(target.block);
}
/// Advances the cursor to hold all effects up to and including of the statement (or
/// terminator) at the given location.
///
- /// If the `target` is a `Call` terminator, any call return effect for that terminator will
- /// be observed. Use `seek_after` if you do **not** wish to observe the call return effect.
- pub fn seek_after_assume_call_returns(&mut self, target: Location) {
+ /// If the `target` is a `Call` or `Yield` terminator, any call return or resume effect for that
+ /// terminator will be observed. Use `seek_after` if you do **not** wish to observe the
+ /// "success" effect.
+ pub fn seek_after_assume_success(&mut self, target: Location) {
let terminator_loc = self.body.terminator_loc(target.block);
assert!(target.statement_index <= terminator_loc.statement_index);
self.seek_(target, true);
- if target != terminator_loc {
+ if target != terminator_loc || self.success_effect_applied {
return;
}
+ // Apply the effect of the "success" path of the terminator.
+
+ self.success_effect_applied = true;
let terminator = self.body.basic_blocks()[target.block].terminator();
- if let mir::TerminatorKind::Call {
- destination: Some((return_place, _)), func, args, ..
- } = &terminator.kind
- {
- if !self.call_return_effect_applied {
- self.call_return_effect_applied = true;
+ match &terminator.kind {
+ TerminatorKind::Call { destination: Some((return_place, _)), func, args, .. } => {
self.results.borrow().analysis.apply_call_return_effect(
&mut self.state,
target.block,
return_place,
);
}
+ TerminatorKind::Yield { resume, resume_arg, .. } => {
+ self.results.borrow().analysis.apply_yield_resume_effect(
+ &mut self.state,
+ *resume,
+ resume_arg,
+ );
+ }
+ _ => {}
}
}
self.seek_to_block_start(target.block)
}
- // N.B., `call_return_effect_applied` is checked in `seek_after`, not here.
+ // N.B., `success_effect_applied` is checked in `seek_after`, not here.
_ => (),
}
Goto { target }
| Assert { target, cleanup: None, .. }
- | Yield { resume: target, drop: None, .. }
| Drop { target, location: _, unwind: None }
| DropAndReplace { target, value: _, location: _, unwind: None } => {
self.propagate_bits_into_entry_set_for(in_out, target, dirty_list)
}
- Yield { resume: target, drop: Some(drop), .. } => {
+ Yield { resume: target, drop, resume_arg, .. } => {
+ if let Some(drop) = drop {
+ self.propagate_bits_into_entry_set_for(in_out, drop, dirty_list);
+ }
+
+ self.analysis.apply_yield_resume_effect(in_out, target, &resume_arg);
self.propagate_bits_into_entry_set_for(in_out, target, dirty_list);
- self.propagate_bits_into_entry_set_for(in_out, drop, dirty_list);
}
Assert { target, cleanup: Some(unwind), .. }
}
SwitchInt { ref targets, ref values, ref discr, .. } => {
- // If this is a switch on an enum discriminant, a custom effect may be applied
- // along each outgoing edge.
- if let Some(place) = discr.place() {
- let enum_def = switch_on_enum_discriminant(self.tcx, self.body, bb_data, place);
- if let Some(enum_def) = enum_def {
+ let Engine { tcx, body, .. } = *self;
+ let enum_ = discr
+ .place()
+ .and_then(|discr| switch_on_enum_discriminant(tcx, body, bb_data, discr));
+ match enum_ {
+ // If this is a switch on an enum discriminant, a custom effect may be applied
+ // along each outgoing edge.
+ Some((enum_place, enum_def)) => {
self.propagate_bits_into_enum_discriminant_switch_successors(
- in_out, bb, enum_def, place, dirty_list, &*values, &*targets,
+ in_out, bb, enum_def, enum_place, dirty_list, &*values, &*targets,
);
-
- return;
}
- }
- // Otherwise, it's just a normal `SwitchInt`, and every successor sees the same
- // exit state.
- for target in targets.iter().copied() {
- self.propagate_bits_into_entry_set_for(&in_out, target, dirty_list);
+ // Otherwise, it's just a normal `SwitchInt`, and every successor sees the same
+ // exit state.
+ None => {
+ for target in targets.iter().copied() {
+ self.propagate_bits_into_entry_set_for(&in_out, target, dirty_list);
+ }
+ }
}
}
}
}
-/// Look at the last statement of a block that ends with to see if it is an assignment of an enum
-/// discriminant to the local that determines the target of a `SwitchInt` like so:
-/// _42 = discriminant(..)
+/// Inspect a `SwitchInt`-terminated basic block to see if the condition of that `SwitchInt` is
+/// an enum discriminant.
+///
+/// We expect such blocks to have a call to `discriminant` as their last statement like so:
+/// _42 = discriminant(_1)
/// SwitchInt(_42, ..)
+///
+/// If the basic block matches this pattern, this function returns the place corresponding to the
+/// enum (`_1` in the example above) as well as the `AdtDef` of that enum.
fn switch_on_enum_discriminant(
tcx: TyCtxt<'tcx>,
- body: &mir::Body<'tcx>,
- block: &mir::BasicBlockData<'tcx>,
+ body: &'mir mir::Body<'tcx>,
+ block: &'mir mir::BasicBlockData<'tcx>,
switch_on: &mir::Place<'tcx>,
-) -> Option<&'tcx ty::AdtDef> {
+) -> Option<(&'mir mir::Place<'tcx>, &'tcx ty::AdtDef)> {
match block.statements.last().map(|stmt| &stmt.kind) {
Some(mir::StatementKind::Assign(box (lhs, mir::Rvalue::Discriminant(discriminated))))
if lhs == switch_on =>
{
match &discriminated.ty(body, tcx).ty.kind {
- ty::Adt(def, _) => Some(def),
+ ty::Adt(def, _) => Some((discriminated, def)),
// `Rvalue::Discriminant` is also used to get the active yield point for a
// generator, but we do not need edge-specific effects in that case. This may
)?;
let state_on_unwind = this.results.get().clone();
- this.results.seek_after_assume_call_returns(terminator_loc);
+ this.results.seek_after_assume_success(terminator_loc);
write_diff(w, this.results.analysis(), &state_on_unwind, this.results.get())?;
write!(w, "</td>")
Ok(())
}
-const BR_LEFT: &'static str = r#"<br align="left"/>"#;
-const BR_LEFT_SPACE: &'static str = r#"<br align="left"/> "#;
+const BR_LEFT: &str = r#"<br align="left"/>"#;
+const BR_LEFT_SPACE: &str = r#"<br align="left"/> "#;
/// Line break policy that breaks at 30 characters and starts the next line with a single space.
const LIMIT_30_ALIGN_1: Option<LineBreak> = Some(LineBreak { sequence: BR_LEFT_SPACE, limit: 30 });
return_place: &mir::Place<'tcx>,
);
+ /// Updates the current dataflow state with the effect of resuming from a `Yield` terminator.
+ ///
+ /// This is similar to `apply_call_return_effect` in that it only takes place after the
+ /// generator is resumed, not when it is dropped.
+ ///
+ /// By default, no effects happen.
+ fn apply_yield_resume_effect(
+ &self,
+ _state: &mut BitSet<Self::Idx>,
+ _resume_block: BasicBlock,
+ _resume_place: &mir::Place<'tcx>,
+ ) {
+ }
+
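The new hook follows the pattern used throughout this trait family: an optional callback with an empty default body, so existing analyses compile unchanged and only generator-aware ones override it. A minimal self-contained sketch of that pattern (the trait, `Liveness`, and the `Vec<bool>` state are illustrative stand-ins, not the rustc API):

```rust
// Sketch of a dataflow-style trait where most hooks are optional.
// `apply_yield_resume_effect` in the patch works the same way: analyses
// that don't care about generators simply inherit the empty default.
trait Analysis {
    // Required: every analysis must define its per-statement effect.
    fn apply_statement_effect(&self, state: &mut Vec<bool>, idx: usize);

    // Optional: default is a no-op, so implementors may ignore it.
    fn apply_yield_resume_effect(&self, _state: &mut Vec<bool>, _resume_local: usize) {}
}

struct Liveness;

impl Analysis for Liveness {
    fn apply_statement_effect(&self, state: &mut Vec<bool>, idx: usize) {
        state[idx] = true;
    }
    // `apply_yield_resume_effect` deliberately not overridden.
}

fn main() {
    let a = Liveness;
    let mut state = vec![false; 3];
    a.apply_statement_effect(&mut state, 1);
    a.apply_yield_resume_effect(&mut state, 2); // inherited no-op
    assert_eq!(state, vec![false, true, false]);
}
```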
/// Updates the current dataflow state with the effect of taking a particular branch in a
/// `SwitchInt` terminator.
///
return_place: &mir::Place<'tcx>,
);
+ /// See `Analysis::apply_yield_resume_effect`.
+ fn yield_resume_effect(
+ &self,
+ _trans: &mut BitSet<Self::Idx>,
+ _resume_block: BasicBlock,
+ _resume_place: &mir::Place<'tcx>,
+ ) {
+ }
+
/// See `Analysis::apply_discriminant_switch_effect`.
fn discriminant_switch_effect(
&self,
self.call_return_effect(state, block, func, args, return_place);
}
+ fn apply_yield_resume_effect(
+ &self,
+ state: &mut BitSet<Self::Idx>,
+ resume_block: BasicBlock,
+ resume_place: &mir::Place<'tcx>,
+ ) {
+ self.yield_resume_effect(state, resume_block, resume_place);
+ }
+
fn apply_discriminant_switch_effect(
&self,
state: &mut BitSet<Self::Idx>,
cursor.seek_after(call_terminator_loc);
assert!(!cursor.get().contains(call_return_effect));
- cursor.seek_after_assume_call_returns(call_terminator_loc);
+ cursor.seek_after_assume_success(call_terminator_loc);
assert!(cursor.get().contains(call_return_effect));
let every_target = || {
BlockStart(block) => cursor.seek_to_block_start(block),
Before(loc) => cursor.seek_before(loc),
After(loc) => cursor.seek_after(loc),
- AfterAssumeCallReturns(loc) => cursor.seek_after_assume_call_returns(loc),
+ AfterAssumeCallReturns(loc) => cursor.seek_after_assume_success(loc),
}
assert_eq!(cursor.get(), &cursor.analysis().expected_state_at_target(targ));
let loc = Location { block, statement_index };
results.reconstruct_before_statement_effect(&mut state, stmt, loc);
- vis.visit_statement(&mut state, stmt, loc);
+ vis.visit_statement(&state, stmt, loc);
results.reconstruct_statement_effect(&mut state, stmt, loc);
- vis.visit_statement_exit(&mut state, stmt, loc);
+ vis.visit_statement_exit(&state, stmt, loc);
}
let loc = body.terminator_loc(block);
let term = block_data.terminator();
results.reconstruct_before_terminator_effect(&mut state, term, loc);
- vis.visit_terminator(&mut state, term, loc);
+ vis.visit_terminator(&state, term, loc);
results.reconstruct_terminator_effect(&mut state, term, loc);
- vis.visit_terminator_exit(&mut state, term, loc);
+ vis.visit_terminator_exit(&state, term, loc);
}
}
fn outgoing(body: &Body<'_>, bb: BasicBlock) -> Vec<Edge> {
(0..body[bb].terminator().successors().count())
- .map(|index| Edge { source: bb, index: index })
+ .map(|index| Edge { source: bb, index })
.collect()
}
self.borrowed_locals.borrow().analysis().terminator_effect(trans, terminator, loc);
match &terminator.kind {
- TerminatorKind::Call { destination: Some((place, _)), .. }
- | TerminatorKind::Yield { resume_arg: place, .. } => {
+ TerminatorKind::Call { destination: Some((place, _)), .. } => {
trans.gen(place.local);
}
+ // Note that we do *not* gen the `resume_arg` of `Yield` terminators: a `yield`
+ // returns from the function, and `resume_arg` is written only when the generator
+ // is later resumed. Unlike `Call`, this doesn't require the place to have
+ // storage *before* the yield, only after.
+ TerminatorKind::Yield { .. } => {}
+
// Nothing to do for these. Match exhaustively so this fails to compile when new
// variants are added.
TerminatorKind::Call { destination: None, .. }
) {
trans.gen(return_place.local);
}
+
+ fn yield_resume_effect(
+ &self,
+ trans: &mut BitSet<Self::Idx>,
+ _resume_block: BasicBlock,
+ resume_place: &mir::Place<'tcx>,
+ ) {
+ trans.gen(resume_place.local);
+ }
}
impl<'mir, 'tcx> MaybeRequiresStorage<'mir, 'tcx> {
+use rustc::mir::traversal;
+use rustc::mir::{self, BasicBlock, BasicBlockData, Body, Location, Statement, Terminator};
+use rustc::ty::{self, TyCtxt};
use rustc_ast::ast::{self, MetaItem};
use rustc_ast_pretty::pprust;
-use rustc_span::symbol::{sym, Symbol};
-
use rustc_data_structures::work_queue::WorkQueue;
+use rustc_hir::def_id::DefId;
use rustc_index::bit_set::{BitSet, HybridBitSet};
use rustc_index::vec::Idx;
-
-use rustc::mir::traversal;
-use rustc::mir::{self, BasicBlock, BasicBlockData, Body, Location, Statement, Terminator};
-use rustc::session::Session;
-use rustc::ty::{self, TyCtxt};
-use rustc_hir::def_id::DefId;
+use rustc_session::Session;
+use rustc_span::symbol::{sym, Symbol};
use std::borrow::Borrow;
use std::fmt;
use std::io;
use std::path::PathBuf;
-use std::usize;
pub use self::at_location::{FlowAtLocation, FlowsAtLocation};
pub(crate) use self::drop_flag_effects::*;
}
fn record_move(&mut self, place: &Place<'tcx>, path: MovePathIndex) {
- let move_out = self.builder.data.moves.push(MoveOut { path: path, source: self.loc });
+ let move_out = self.builder.data.moves.push(MoveOut { path, source: self.loc });
debug!(
"gather_move({:?}, {:?}): adding move {:?} of {:?}",
self.loc, place, move_out, path
self.builder.data.loc_map[self.loc].push(move_out);
}
- fn gather_init(&mut self, place: PlaceRef<'cx, 'tcx>, kind: InitKind) {
+ fn gather_init(&mut self, place: PlaceRef<'tcx>, kind: InitKind) {
debug!("gather_init({:?}, {:?})", self.loc, place);
let mut place = place;
// alternative will *not* create a MovePath on the fly for an
// unknown place, but will rather return the nearest available
// parent.
- pub fn find(&self, place: PlaceRef<'_, '_>) -> LookupResult {
+ pub fn find(&self, place: PlaceRef<'_>) -> LookupResult {
let mut result = self.locals[place.local];
for elem in place.projection.iter() {
Char => {
// `u8` to `char` cast
- debug_assert_eq!(v as u8 as u128, v);
+ assert_eq!(v as u8 as u128, v);
Ok(Scalar::from_uint(v, Size::from_bytes(4)))
}
use super::{
Immediate, MPlaceTy, Machine, MemPlace, MemPlaceMeta, Memory, OpTy, Operand, Place, PlaceTy,
- ScalarMaybeUndef, StackPopInfo,
+ ScalarMaybeUndef, StackPopJump,
};
pub struct InterpCx<'mir, 'tcx, M: Machine<'mir, 'tcx>> {
/// Jump to the next block in the caller, or cause UB if None (that's a function
/// that must never return). Also store layout of return place so
/// we can validate it at that layout.
- /// `ret` stores the block we jump to on a normal return, while 'unwind'
- /// stores the block used for cleanup during unwinding
+ /// `ret` stores the block we jump to on a normal return, while `unwind`
+ /// stores the block used for cleanup during unwinding.
Goto { ret: Option<mir::BasicBlock>, unwind: Option<mir::BasicBlock> },
- /// Just do nohing: Used by Main and for the box_alloc hook in miri.
+ /// Just do nothing: Used by Main and for the `box_alloc` hook in miri.
/// `cleanup` says whether locals are deallocated. Static computation
/// wants them leaked to intern what they need (and just throw away
/// the entire `ecx` when it is done).
impl<'tcx, Tag: Copy + 'static> LocalState<'tcx, Tag> {
pub fn access(&self) -> InterpResult<'tcx, Operand<Tag>> {
match self.value {
- LocalValue::Dead => throw_unsup!(DeadLocal),
+ LocalValue::Dead => throw_ub!(DeadLocal),
LocalValue::Uninitialized => {
bug!("The type checker should prevent reading from a never-written local")
}
&mut self,
) -> InterpResult<'tcx, Result<&mut LocalValue<Tag>, MemPlace<Tag>>> {
match self.value {
- LocalValue::Dead => throw_unsup!(DeadLocal),
+ LocalValue::Dead => throw_ub!(DeadLocal),
LocalValue::Live(Operand::Indirect(mplace)) => Ok(Err(mplace)),
ref mut local @ LocalValue::Live(Operand::Immediate(_))
| ref mut local @ LocalValue::Uninitialized => Ok(Ok(local)),
if self.tcx.is_mir_available(did) {
Ok(self.tcx.optimized_mir(did).unwrap_read_only())
} else {
- throw_unsup!(NoMirFor(self.tcx.def_path_str(def_id)))
+ throw_unsup!(NoMirFor(def_id))
}
}
_ => Ok(self.tcx.instance_mir(instance)),
/// Call this on things you got out of the MIR (so it is as generic as the current
/// stack frame), to bring it into the proper environment for this interpreter.
+ pub(super) fn subst_from_current_frame_and_normalize_erasing_regions<T: TypeFoldable<'tcx>>(
+ &self,
+ value: T,
+ ) -> T {
+ self.subst_from_frame_and_normalize_erasing_regions(self.frame(), value)
+ }
+
+ /// Call this on things you got out of the MIR (so it is as generic as the provided
+ /// stack frame), to bring it into the proper environment for this interpreter.
pub(super) fn subst_from_frame_and_normalize_erasing_regions<T: TypeFoldable<'tcx>>(
&self,
+ frame: &Frame<'mir, 'tcx, M::PointerTag, M::FrameExtra>,
value: T,
) -> T {
- self.tcx.subst_and_normalize_erasing_regions(
- self.frame().instance.substs,
- self.param_env,
- &value,
- )
+ if let Some(substs) = frame.instance.substs_for_mir_body() {
+ self.tcx.subst_and_normalize_erasing_regions(substs, self.param_env, &value)
+ } else {
+ self.tcx.normalize_erasing_regions(self.param_env, value)
+ }
}
/// The `substs` are assumed to already be in our interpreter "universe" (param_env).
None => {
let layout = crate::interpret::operand::from_known_layout(layout, || {
let local_ty = frame.body.local_decls[local].ty;
- let local_ty = self.tcx.subst_and_normalize_erasing_regions(
- frame.instance.substs,
- self.param_env,
- &local_ty,
- );
+ let local_ty =
+ self.subst_from_frame_and_normalize_erasing_regions(frame, local_ty);
self.layout_of(local_ty)
})?;
if let Some(state) = frame.locals.get(local) {
// Check if this brought us over the size limit.
if size.bytes() >= self.tcx.data_layout().obj_size_bound() {
- throw_ub_format!(
- "wide pointer metadata contains invalid information: \
- total size is bigger than largest supported object"
- );
+ throw_ub!(InvalidMeta("total size is bigger than largest supported object"));
}
Ok(Some((size, align)))
}
// Make sure the slice is not too big.
let size = elem.size.checked_mul(len, &*self.tcx).ok_or_else(|| {
- err_ub_format!(
- "invalid slice: \
- total size is bigger than largest supported object"
- )
+ err_ub!(InvalidMeta("slice is bigger than largest supported object"))
})?;
Ok(Some((size, elem.align.abi)))
}
::log_settings::settings().indentation -= 1;
let frame = self.stack.pop().expect("tried to pop a stack frame, but there were none");
- let stack_pop_info = M::stack_pop(self, frame.extra, unwinding)?;
- if let (false, StackPopInfo::StopUnwinding) = (unwinding, stack_pop_info) {
- bug!("Attempted to stop unwinding while there is no unwinding!");
- }
// Now where do we jump next?
- // Determine if we leave this function normally or via unwinding.
- let cur_unwinding =
- if let StackPopInfo::StopUnwinding = stack_pop_info { false } else { unwinding };
-
// Usually we want to clean up (deallocate locals), but in a few rare cases we don't.
// In that case, we return early. We also avoid validation in that case,
// because this is CTFE and the final value will be thoroughly validated anyway.
let (cleanup, next_block) = match frame.return_to_block {
StackPopCleanup::Goto { ret, unwind } => {
- (true, Some(if cur_unwinding { unwind } else { ret }))
+ (true, Some(if unwinding { unwind } else { ret }))
}
StackPopCleanup::None { cleanup, .. } => (cleanup, None),
};
if !cleanup {
assert!(self.stack.is_empty(), "only the topmost frame should ever be leaked");
assert!(next_block.is_none(), "tried to skip cleanup when we have a next block!");
- // Leak the locals, skip validation.
+ assert!(!unwinding, "tried to skip cleanup during unwinding");
+ // Leak the locals, skip validation, skip machine hook.
return Ok(());
}
self.deallocate_local(local.value)?;
}
- trace!(
- "StackPopCleanup: {:?} StackPopInfo: {:?} cur_unwinding = {:?}",
- frame.return_to_block,
- stack_pop_info,
- cur_unwinding
- );
- if cur_unwinding {
+ if M::stack_pop(self, frame.extra, unwinding)? == StackPopJump::NoJump {
+ // The hook already did everything.
+ // We want to skip the `info!` below, hence early return.
+ return Ok(());
+ }
+ // Normal return.
+ if unwinding {
// Follow the unwind edge.
- let unwind = next_block.expect("Encounted StackPopCleanup::None when unwinding!");
+ let unwind = next_block.expect("Encountered StackPopCleanup::None when unwinding!");
self.unwind_to_block(unwind);
} else {
// Follow the normal return edge.
// invariant -- that is, unless a function somehow has a ptr to
// its return place... but the way MIR is currently generated, the
// return place is always a local and then this cannot happen.
- self.validate_operand(self.place_to_op(return_place)?, vec![], None)?;
+ self.validate_operand(self.place_to_op(return_place)?)?;
}
} else {
// Uh, that shouldn't happen... the function did not intend to return
"CONTINUING({}) {} (unwinding = {})",
self.cur_frame(),
self.frame().instance,
- cur_unwinding
+ unwinding
);
}
self.walk_aggregate(mplace, fields)
}
- fn visit_primitive(&mut self, mplace: MPlaceTy<'tcx>) -> InterpResult<'tcx> {
+ fn visit_value(&mut self, mplace: MPlaceTy<'tcx>) -> InterpResult<'tcx> {
// Handle Reference types, as these are the only relocations supported by const eval.
// Raw pointers (and boxes) are handled by the `leftover_relocations` logic.
let ty = mplace.layout.ty;
None => self.ref_tracking.track((mplace, mutability, mode), || ()),
}
}
+ Ok(())
+ } else {
+ // Not a reference -- proceed recursively.
+ self.walk_value(mplace)
}
- Ok(())
}
}
if let Err(error) = interned {
// This can happen when e.g. the tag of an enum is not a valid discriminant. We do have
// to read enum discriminants in order to find references in enum variant fields.
- if let err_unsup!(ValidationFailure(_)) = error.kind {
+ if let err_ub!(ValidationFailure(_)) = error.kind {
let err = crate::const_eval::error_to_const_error(&ecx, error);
match err.struct_error(
ecx.tcx,
}
} else if ecx.memory.dead_alloc_map.contains_key(&alloc_id) {
// dangling pointer
- throw_unsup!(ValidationFailure("encountered dangling pointer in final constant".into()))
+ throw_ub_format!("encountered dangling pointer in final constant")
} else if ecx.tcx.alloc_map.lock().get(alloc_id).is_none() {
// We have hit an `AllocId` that is neither in local or global memory and isn't marked
// as dangling by local memory.
let substs = instance.substs;
let intrinsic_name = self.tcx.item_name(instance.def_id());
- // We currently do not handle any intrinsics that are *allowed* to diverge,
- // but `transmute` could lack a return place in case of UB.
+ // First handle intrinsics without return place.
let (dest, ret) = match ret {
- Some(p) => p,
None => match intrinsic_name {
- sym::transmute => throw_ub!(Unreachable),
+ sym::transmute => throw_ub_format!("transmuting to uninhabited type"),
+ sym::abort => M::abort(self)?,
+ // Unsupported diverging intrinsic.
_ => return Ok(false),
},
+ Some(p) => p,
};
// Keep the patterns in this match ordered the same as the list in
let bits = self.force_bits(val, layout_of.size)?;
let kind = match layout_of.abi {
ty::layout::Abi::Scalar(ref scalar) => scalar.value,
- _ => throw_unsup!(TypeNotPrimitive(ty)),
+ _ => bug!("{} called on invalid type {:?}", intrinsic_name, ty),
};
let (nonzero, intrinsic_name) = match intrinsic_name {
sym::cttz_nonzero => (true, sym::cttz),
if is_add {
// max unsigned
Scalar::from_uint(
- u128::max_value() >> (128 - num_bits),
+ u128::MAX >> (128 - num_bits),
Size::from_bits(num_bits),
)
} else {
};
self.write_scalar(val, dest)?;
}
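The saturating-add arm above builds the all-ones bit pattern for an `n`-bit unsigned integer with `u128::MAX >> (128 - n)`. A quick standalone check of that identity (`max_unsigned` is my own helper, not interpreter code):

```rust
// Max unsigned value representable in `bits` bits, computed exactly as
// the interpreter does for the saturating-add overflow case.
fn max_unsigned(bits: u32) -> u128 {
    assert!(bits >= 1 && bits <= 128);
    u128::MAX >> (128 - bits)
}

fn main() {
    assert_eq!(max_unsigned(8), u8::MAX as u128);
    assert_eq!(max_unsigned(16), u16::MAX as u128);
    assert_eq!(max_unsigned(128), u128::MAX);
    // Saturating add clamps to exactly this value on overflow:
    assert_eq!(200u8.saturating_add(100), max_unsigned(8) as u8);
}
```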
+ sym::discriminant_value => {
+ let place = self.deref_operand(args[0])?;
+ let discr_val = self.read_discriminant(place.into())?.0;
+ self.write_scalar(Scalar::from_uint(discr_val, dest.layout.size), dest)?;
+ }
sym::unchecked_shl
| sym::unchecked_shr
| sym::unchecked_add
let layout = self.layout_of(substs.type_at(0))?;
let r_val = self.force_bits(r.to_scalar()?, layout.size)?;
if let sym::unchecked_shl | sym::unchecked_shr = intrinsic_name {
- throw_ub_format!("Overflowing shift by {} in `{}`", r_val, intrinsic_name);
+ throw_ub_format!("overflowing shift by {} in `{}`", r_val, intrinsic_name);
} else {
- throw_ub_format!("Overflow executing `{}`", intrinsic_name);
+ throw_ub_format!("overflow executing `{}`", intrinsic_name);
}
}
self.write_scalar(val, dest)?;
dest: PlaceTy<'tcx, M::PointerTag>,
) -> InterpResult<'tcx> {
// Performs an exact division, resulting in undefined behavior where
- // `x % y != 0` or `y == 0` or `x == T::min_value() && y == -1`.
+ // `x % y != 0` or `y == 0` or `x == T::MIN && y == -1`.
// First, check x % y != 0 (or if that computation overflows).
let (res, overflow, _ty) = self.overflowing_binary_op(BinOp::Rem, a, b)?;
- if overflow || res.to_bits(a.layout.size)? != 0 {
- // Then, check if `b` is -1, which is the "min_value / -1" case.
+ if overflow || res.assert_bits(a.layout.size) != 0 {
+ // Then, check if `b` is -1, which is the "MIN / -1" case.
let minus1 = Scalar::from_int(-1, dest.layout.size);
let b_scalar = b.to_scalar().unwrap();
if b_scalar == minus1 {
}
}
}
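`exact_div` is UB exactly when `x % y != 0`, `y == 0`, or `x == T::MIN && y == -1` — and the remainder computation itself overflows in the last two cases, which is why the code above inspects the `overflow` flag of the `Rem`. The same three-way check can be sketched in plain Rust, where `checked_rem` returns `None` for precisely those two overflow cases (this helper is an illustration, not the interpreter's implementation):

```rust
// Returns Some(x / y) only when the division is exact and well defined,
// mirroring the three UB conditions checked for `exact_div`.
fn exact_div(x: i32, y: i32) -> Option<i32> {
    match x.checked_rem(y) {
        // None for y == 0 and for i32::MIN % -1 — the `overflow` case.
        None => None,
        Some(r) if r != 0 => None, // remainder nonzero: not exact
        Some(_) => Some(x / y),    // safe: y != 0 and not MIN / -1
    }
}

fn main() {
    assert_eq!(exact_div(10, 5), Some(2));
    assert_eq!(exact_div(10, 3), None);        // x % y != 0
    assert_eq!(exact_div(10, 0), None);        // y == 0
    assert_eq!(exact_div(i32::MIN, -1), None); // MIN / -1 overflows
}
```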
+
impl PrettyPrinter<'tcx> for AbsolutePathPrinter<'tcx> {
fn region_should_not_be_omitted(&self, _region: ty::Region<'_>) -> bool {
false
use std::hash::Hash;
use rustc::mir;
-use rustc::ty::{self, Ty, TyCtxt};
-use rustc_hir::def_id::DefId;
+use rustc::ty::{self, Ty};
use rustc_span::Span;
use super::{
/// Data returned by `Machine::stack_pop`,
/// to provide further control over the popping of the stack frame.
#[derive(Eq, PartialEq, Debug, Copy, Clone)]
-pub enum StackPopInfo {
+pub enum StackPopJump {
/// Indicates that no special handling should be
/// done - we'll either return normally or unwind
/// based on the terminator for the function
/// we're leaving.
Normal,
- /// Indicates that we should stop unwinding,
- /// as we've reached a catch frame
- StopUnwinding,
+ /// Indicates that we should *not* jump to the return/unwind address, as the callback already
+ /// took care of everything.
+ NoJump,
}
/// Whether this kind of memory is allowed to leak
/// Whether to enforce the validity invariant
fn enforce_validity(ecx: &InterpCx<'mir, 'tcx, Self>) -> bool;
- /// Called before a basic block terminator is executed.
- /// You can use this to detect endlessly running programs.
- fn before_terminator(ecx: &mut InterpCx<'mir, 'tcx, Self>) -> InterpResult<'tcx>;
-
/// Entry point to all function calls.
///
/// Returns either the mir to use for the call, or `None` if execution should
/// Called to evaluate `Assert` MIR terminators that trigger a panic.
fn assert_panic(
ecx: &mut InterpCx<'mir, 'tcx, Self>,
- span: Span,
msg: &mir::AssertMessage<'tcx>,
unwind: Option<mir::BasicBlock>,
) -> InterpResult<'tcx>;
- /// Called for read access to a foreign static item.
- ///
- /// This will only be called once per static and machine; the result is cached in
- /// the machine memory. (This relies on `AllocMap::get_or` being able to add the
- /// owned allocation to the map even when the map is shared.)
- ///
- /// This allocation will then be fed to `tag_allocation` to initialize the "extra" state.
- fn find_foreign_static(
- tcx: TyCtxt<'tcx>,
- def_id: DefId,
- ) -> InterpResult<'tcx, Cow<'tcx, Allocation>>;
+ /// Called to evaluate `Abort` MIR terminator.
+ fn abort(_ecx: &mut InterpCx<'mir, 'tcx, Self>) -> InterpResult<'tcx, !> {
+ throw_unsup_format!("aborting execution is not supported");
+ }
/// Called for all binary operations where the LHS has pointer type.
///
) -> InterpResult<'tcx>;
/// Called to read the specified `local` from the `frame`.
+ #[inline]
fn access_local(
_ecx: &InterpCx<'mir, 'tcx, Self>,
frame: &Frame<'mir, 'tcx, Self::PointerTag, Self::FrameExtra>,
frame.locals[local].access()
}
+ /// Called before a basic block terminator is executed.
+ /// You can use this to detect endlessly running programs.
+ #[inline]
+ fn before_terminator(_ecx: &mut InterpCx<'mir, 'tcx, Self>) -> InterpResult<'tcx> {
+ Ok(())
+ }
+
/// Called before a `Static` value is accessed.
+ #[inline]
fn before_access_static(
_memory_extra: &Self::MemoryExtra,
_allocation: &Allocation,
Ok(())
}
+ /// Called for *every* memory access to determine the real ID of the given allocation.
+ /// This provides a way for the machine to "redirect" certain allocations as it sees fit.
+ ///
+ /// This is used by Miri to redirect extern statics to real allocations.
+ ///
+ /// This function must be idempotent.
+ #[inline]
+ fn canonical_alloc_id(_mem: &Memory<'mir, 'tcx, Self>, id: AllocId) -> AllocId {
+ id
+ }
+
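The new `canonical_alloc_id` hook must be idempotent: canonicalizing an already-canonical ID has to return it unchanged, since the hook runs on *every* memory access. A toy model of such a redirect map (the `HashMap`, the `u64` IDs, and the redirect entries are all made up for illustration):

```rust
use std::collections::HashMap;

// Redirect some allocation IDs to others, the way Miri redirects
// extern statics to real allocations. IDs here are plain u64s.
fn canonical_alloc_id(redirects: &HashMap<u64, u64>, id: u64) -> u64 {
    *redirects.get(&id).unwrap_or(&id)
}

fn main() {
    let mut redirects = HashMap::new();
    redirects.insert(7, 42); // 7 stands in for an "extern static" ID
    let c = canonical_alloc_id(&redirects, 7);
    assert_eq!(c, 42);
    // Idempotence: canonicalizing twice changes nothing, provided no
    // redirect target is itself redirected.
    assert_eq!(canonical_alloc_id(&redirects, c), c);
    // Non-redirected IDs map to themselves (the default impl's behavior).
    assert_eq!(canonical_alloc_id(&redirects, 3), 3);
}
```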
/// Called to initialize the "extra" state of an allocation and make the pointers
/// it contains (in relocations) tagged. The way we construct allocations is
/// to always first construct it without extra and then add the extra.
/// This keeps uniform code paths for handling both allocations created by CTFE
- /// for statics, and allocations ceated by Miri during evaluation.
+ /// for statics, and allocations created by Miri during evaluation.
///
/// `kind` is the kind of the allocation being tagged; it can be `None` when
/// it's a static and `STATIC_KIND` is `None`.
/// Return the "base" tag for the given *static* allocation: the one that is used for direct
/// accesses to this static/const/fn allocation. If `id` is not a static allocation,
/// this will return an unusable tag (i.e., accesses will be UB)!
+ ///
+ /// Expects `id` to be already canonical, if needed.
fn tag_static_base_pointer(memory_extra: &Self::MemoryExtra, id: AllocId) -> Self::PointerTag;
/// Executes a retagging operation
Ok(())
}
- /// Called immediately before a new stack frame got pushed
+ /// Called immediately before a new stack frame got pushed.
fn stack_push(ecx: &mut InterpCx<'mir, 'tcx, Self>) -> InterpResult<'tcx, Self::FrameExtra>;
/// Called immediately after a stack frame gets popped
_ecx: &mut InterpCx<'mir, 'tcx, Self>,
_extra: Self::FrameExtra,
_unwinding: bool,
- ) -> InterpResult<'tcx, StackPopInfo> {
+ ) -> InterpResult<'tcx, StackPopJump> {
// By default, we do not support unwinding from panics
- Ok(StackPopInfo::Normal)
+ Ok(StackPopJump::Normal)
}
fn int_to_ptr(
int: u64,
) -> InterpResult<'tcx, Pointer<Self::PointerTag>> {
Err((if int == 0 {
- err_unsup!(InvalidNullPointerUsage)
+ // This is UB, seriously.
+ err_ub!(InvalidIntPointerUsage(0))
} else {
+ // This is just something we cannot support during const-eval.
err_unsup!(ReadBytesAsPointer)
})
.into())
/// through a pointer that was created by the program.
#[inline]
pub fn tag_static_base_pointer(&self, ptr: Pointer) -> Pointer<M::PointerTag> {
- ptr.with_tag(M::tag_static_base_pointer(&self.extra, ptr.alloc_id))
+ let id = M::canonical_alloc_id(self, ptr.alloc_id);
+ ptr.with_tag(M::tag_static_base_pointer(&self.extra, id))
}
pub fn create_fn_alloc(
kind: MemoryKind<M::MemoryKinds>,
) -> InterpResult<'tcx, Pointer<M::PointerTag>> {
if ptr.offset.bytes() != 0 {
- throw_unsup!(ReallocateNonBasePtr)
+ throw_ub_format!(
+ "reallocating {:?} which does not point to the beginning of an object",
+ ptr
+ );
}
// For simplicity's sake, we implement reallocate as "alloc, copy, dealloc".
trace!("deallocating: {}", ptr.alloc_id);
if ptr.offset.bytes() != 0 {
- throw_unsup!(DeallocateNonBasePtr)
+ throw_ub_format!(
+ "deallocating {:?} which does not point to the beginning of an object",
+ ptr
+ );
}
let (alloc_kind, mut alloc) = match self.alloc_map.remove(&ptr.alloc_id) {
None => {
// Deallocating static memory -- always an error
return Err(match self.tcx.alloc_map.lock().get(ptr.alloc_id) {
- Some(GlobalAlloc::Function(..)) => err_unsup!(DeallocatedWrongMemoryKind(
- "function".to_string(),
- format!("{:?}", kind),
- )),
- Some(GlobalAlloc::Static(..)) | Some(GlobalAlloc::Memory(..)) => err_unsup!(
- DeallocatedWrongMemoryKind("static".to_string(), format!("{:?}", kind))
- ),
- None => err_unsup!(DoubleFree),
+ Some(GlobalAlloc::Function(..)) => err_ub_format!("deallocating a function"),
+ Some(GlobalAlloc::Static(..)) | Some(GlobalAlloc::Memory(..)) => {
+ err_ub_format!("deallocating static memory")
+ }
+ None => err_ub!(PointerUseAfterFree(ptr.alloc_id)),
}
.into());
}
};
if alloc_kind != kind {
- throw_unsup!(DeallocatedWrongMemoryKind(
- format!("{:?}", alloc_kind),
- format!("{:?}", kind),
- ))
+ throw_ub_format!(
+ "deallocating `{:?}` memory using `{:?}` deallocation operation",
+ alloc_kind,
+ kind
+ );
}
if let Some((size, align)) = old_size_and_align {
if size != alloc.size || align != alloc.align {
- let bytes = alloc.size;
- throw_unsup!(IncorrectAllocationInformation(size, bytes, align, alloc.align))
+ throw_ub_format!(
+ "incorrect layout on deallocation: allocation has size {} and alignment {}, but gave size {} and alignment {}",
+ alloc.size.bytes(),
+ alloc.align.bytes(),
+ size.bytes(),
+ align.bytes(),
+ )
}
}
} else {
// The biggest power of two through which `offset` is divisible.
let offset_pow2 = 1 << offset.trailing_zeros();
- throw_unsup!(AlignmentCheckFailed {
+ throw_ub!(AlignmentCheckFailed {
has: Align::from_bytes(offset_pow2).unwrap(),
required: align,
})
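The `has` alignment reported on failure is the largest power of two dividing the offset, computed as `1 << offset.trailing_zeros()`. A standalone check of that arithmetic (the offsets and the 8-byte requirement below are hypothetical, not interpreter state):

```rust
// Largest power of two that divides `offset` (offset != 0), i.e. the
// strongest alignment an access at this offset can actually rely on.
fn offset_alignment(offset: u64) -> u64 {
    assert!(offset != 0);
    1 << offset.trailing_zeros()
}

fn main() {
    assert_eq!(offset_alignment(12), 4); // 12 = 4 * 3
    assert_eq!(offset_alignment(8), 8);  // already 8-aligned
    assert_eq!(offset_alignment(5), 1);  // odd => only 1-aligned
    // An access requiring 8-byte alignment fails at offset 12,
    // and the error reports has = 4, required = 8:
    assert!(offset_alignment(12) < 8);
}
```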
assert!(size.bytes() == 0);
// Must be non-NULL.
if bits == 0 {
- throw_unsup!(InvalidNullPointerUsage)
+ throw_ub!(InvalidIntPointerUsage(0))
}
// Must be aligned.
if let Some(align) = align {
// It is sufficient to check this for the end pointer. The addition
// checks for overflow.
let end_ptr = ptr.offset(size, self)?;
- end_ptr.check_inbounds_alloc(allocation_size, msg)?;
+ if end_ptr.offset > allocation_size {
+ // equal is okay!
+ throw_ub!(PointerOutOfBounds { ptr: end_ptr.erase_tag(), msg, allocation_size })
+ }
// Test align. Check this last; if both bounds and alignment are violated
// we want the error to be about the bounds.
if let Some(align) = align {
// got picked we might be aligned even if this check fails.
// We instead have to fall back to converting to an integer and checking
// the "real" alignment.
- throw_unsup!(AlignmentCheckFailed { has: alloc_align, required: align });
+ throw_ub!(AlignmentCheckFailed { has: alloc_align, required: align });
}
check_offset_align(ptr.offset.bytes(), align)?;
}
let (size, _align) = self
.get_size_and_align(ptr.alloc_id, AllocCheck::MaybeDead)
.expect("alloc info with MaybeDead cannot fail");
- ptr.check_inbounds_alloc(size, CheckInAllocMsg::NullPointerTest).is_err()
+ // If the pointer is out-of-bounds, it may be null.
+ // Note that one-past-the-end (offset == size) is still inbounds, and never null.
+ ptr.offset > size
}
}
/// The `GlobalAlloc::Memory` branch here is still reachable though; when a static
/// contains a reference to memory that was created during its evaluation (i.e., not to
/// another static), those inner references only exist in "resolved" form.
+ ///
+ /// Assumes `id` is already canonical.
fn get_static_alloc(
memory_extra: &M::MemoryExtra,
tcx: TyCtxtAt<'tcx>,
let alloc = tcx.alloc_map.lock().get(id);
let alloc = match alloc {
Some(GlobalAlloc::Memory(mem)) => Cow::Borrowed(mem),
- Some(GlobalAlloc::Function(..)) => throw_unsup!(DerefFunctionPointer),
- None => throw_unsup!(DanglingPointerDeref),
+ Some(GlobalAlloc::Function(..)) => throw_ub!(DerefFunctionPointer(id)),
+ None => throw_ub!(PointerUseAfterFree(id)),
Some(GlobalAlloc::Static(def_id)) => {
// We got a "lazy" static that has not been computed yet.
if tcx.is_foreign_item(def_id) {
- trace!("static_alloc: foreign item {:?}", def_id);
- M::find_foreign_static(tcx.tcx, def_id)?
- } else {
- trace!("static_alloc: Need to compute {:?}", def_id);
- let instance = Instance::mono(tcx.tcx, def_id);
- let gid = GlobalId { instance, promoted: None };
- // use the raw query here to break validation cycles. Later uses of the static
- // will call the full query anyway
- let raw_const =
- tcx.const_eval_raw(ty::ParamEnv::reveal_all().and(gid)).map_err(|err| {
- // no need to report anything, the const_eval call takes care of that
- // for statics
- assert!(tcx.is_static(def_id));
- match err {
- ErrorHandled::Reported => err_inval!(ReferencedConstant),
- ErrorHandled::TooGeneric => err_inval!(TooGeneric),
- }
- })?;
- // Make sure we use the ID of the resolved memory, not the lazy one!
- let id = raw_const.alloc_id;
- let allocation = tcx.alloc_map.lock().unwrap_memory(id);
-
- M::before_access_static(memory_extra, allocation)?;
- Cow::Borrowed(allocation)
+ trace!("get_static_alloc: foreign item {:?}", def_id);
+ throw_unsup!(ReadForeignStatic(def_id))
}
+ trace!("get_static_alloc: Need to compute {:?}", def_id);
+ let instance = Instance::mono(tcx.tcx, def_id);
+ let gid = GlobalId { instance, promoted: None };
+ // use the raw query here to break validation cycles. Later uses of the static
+ // will call the full query anyway
+ let raw_const =
+ tcx.const_eval_raw(ty::ParamEnv::reveal_all().and(gid)).map_err(|err| {
+ // no need to report anything, the const_eval call takes care of that
+ // for statics
+ assert!(tcx.is_static(def_id));
+ match err {
+ ErrorHandled::Reported => err_inval!(ReferencedConstant),
+ ErrorHandled::TooGeneric => err_inval!(TooGeneric),
+ }
+ })?;
+ // Make sure we use the ID of the resolved memory, not the lazy one!
+ let id = raw_const.alloc_id;
+ let allocation = tcx.alloc_map.lock().unwrap_memory(id);
+
+ M::before_access_static(memory_extra, allocation)?;
+ Cow::Borrowed(allocation)
}
};
// We got tcx memory. Let the machine initialize its "extra" stuff.
&self,
id: AllocId,
) -> InterpResult<'tcx, &Allocation<M::PointerTag, M::AllocExtra>> {
+ let id = M::canonical_alloc_id(self, id);
// The error type of the inner closure here is somewhat funny. We have two
// ways of "erroring": An actual error, or because we got a reference from
// `get_static_alloc` that we can actually use directly without inserting anything anywhere.
&mut self,
id: AllocId,
) -> InterpResult<'tcx, &mut Allocation<M::PointerTag, M::AllocExtra>> {
+ let id = M::canonical_alloc_id(self, id);
let tcx = self.tcx;
let memory_extra = &self.extra;
let a = self.alloc_map.get_mut_or(id, || {
// to give us a cheap reference.
let alloc = Self::get_static_alloc(memory_extra, tcx, id)?;
if alloc.mutability == Mutability::Not {
- throw_unsup!(ModifiedConstantMemory)
+ throw_ub!(WriteToReadOnly(id))
}
match M::STATIC_KIND {
Some(kind) => Ok((MemoryKind::Machine(kind), alloc.into_owned())),
Ok(a) => {
let a = &mut a.1;
if a.mutability == Mutability::Not {
- throw_unsup!(ModifiedConstantMemory)
+ throw_ub!(WriteToReadOnly(id))
}
Ok(a)
}
id: AllocId,
liveness: AllocCheck,
) -> InterpResult<'static, (Size, Align)> {
+ let id = M::canonical_alloc_id(self, id);
// # Regular allocations
// Don't use `self.get_raw` here as that will
// a) cause cycles in case `id` refers to a static
// # Function pointers
// (both global from `alloc_map` and local from `extra_fn_ptr_map`)
- if let Some(_) = self.get_fn_alloc(id) {
+ if self.get_fn_alloc(id).is_some() {
return if let AllocCheck::Dereferenceable = liveness {
// The caller requested no function pointers.
- throw_unsup!(DerefFunctionPointer)
+ throw_ub!(DerefFunctionPointer(id))
} else {
Ok((Size::ZERO, Align::from_bytes(1).unwrap()))
};
if let AllocCheck::MaybeDead = liveness {
// Deallocated pointers are allowed, we should be able to find
// them in the map.
- Ok(*self.dead_alloc_map.get(&id).expect(
- "deallocated pointers should all be recorded in \
- `dead_alloc_map`",
- ))
+ Ok(*self
+ .dead_alloc_map
+ .get(&id)
+ .expect("deallocated pointers should all be recorded in `dead_alloc_map`"))
} else {
- throw_unsup!(DanglingPointerDeref)
+ throw_ub!(PointerUseAfterFree(id))
}
}
}
}
+ /// Assumes `id` is already canonical.
fn get_fn_alloc(&self, id: AllocId) -> Option<FnVal<'tcx, M::ExtraFnVal>> {
trace!("reading fn ptr: {}", id);
if let Some(extra) = self.extra_fn_ptr_map.get(&id) {
) -> InterpResult<'tcx, FnVal<'tcx, M::ExtraFnVal>> {
let ptr = self.force_ptr(ptr)?; // We definitely need a pointer value.
if ptr.offset.bytes() != 0 {
- throw_unsup!(InvalidFunctionPointer)
+ throw_ub!(InvalidFunctionPointer(ptr.erase_tag()))
}
- self.get_fn_alloc(ptr.alloc_id).ok_or_else(|| err_unsup!(ExecuteMemory).into())
+ let id = M::canonical_alloc_id(self, ptr.alloc_id);
+ self.get_fn_alloc(id).ok_or_else(|| err_ub!(InvalidFunctionPointer(ptr.erase_tag())).into())
}
pub fn mark_immutable(&mut self, id: AllocId) -> InterpResult<'tcx> {
self.get_raw(ptr.alloc_id)?.read_c_str(self, ptr)
}
+    /// Reads a 0x0000-terminated u16-sequence from memory. Returns it as a `Vec<u16>`.
+    /// The terminator 0x0000 is not included in the returned `Vec<u16>`.
+ ///
+ /// Performs appropriate bounds checks.
+ pub fn read_wide_str(&self, ptr: Scalar<M::PointerTag>) -> InterpResult<'tcx, Vec<u16>> {
+ let size_2bytes = Size::from_bytes(2);
+ let align_2bytes = Align::from_bytes(2).unwrap();
+ // We need to read at least 2 bytes, so we *need* a ptr.
+ let mut ptr = self.force_ptr(ptr)?;
+ let allocation = self.get_raw(ptr.alloc_id)?;
+ let mut u16_seq = Vec::new();
+
+ loop {
+ ptr = self
+ .check_ptr_access(ptr.into(), size_2bytes, align_2bytes)?
+ .expect("cannot be a ZST");
+ let single_u16 = allocation.read_scalar(self, ptr, size_2bytes)?.to_u16()?;
+ if single_u16 != 0x0000 {
+ u16_seq.push(single_u16);
+ ptr = ptr.offset(size_2bytes, self)?;
+ } else {
+ break;
+ }
+ }
+ Ok(u16_seq)
+ }
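The new `read_wide_str` walks memory two code units at a time until it hits the 0x0000 terminator, which is excluded from the result. Outside the interpreter the same convention can be sketched over a plain slice (the buffer contents below are made up for illustration):

```rust
// Minimal sketch of the 0x0000-terminated u16 convention used by
// `read_wide_str`: collect code units until the terminator, which is
// not included in the result. Assumes a terminator is present.
fn read_wide_str(buf: &[u16]) -> Vec<u16> {
    buf.iter().copied().take_while(|&u| u != 0x0000).collect()
}

fn main() {
    // "Hi" as UTF-16 code units, followed by the terminator and junk.
    let buf = [0x0048, 0x0069, 0x0000, 0xDEAD];
    let units = read_wide_str(&buf);
    assert_eq!(units, vec![0x0048, 0x0069]);
    // The units can then be decoded, e.g. with `String::from_utf16`.
    assert_eq!(String::from_utf16(&units).unwrap(), "Hi");
}
```

The interpreter version additionally bounds- and alignment-checks each two-byte read before trusting the memory.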
+
/// Writes the given stream of bytes into memory.
///
/// Performs appropriate bounds checks.
pub use self::memory::{AllocCheck, FnVal, Memory, MemoryKind};
-pub use self::machine::{AllocMap, Machine, MayLeak, StackPopInfo};
+pub use self::machine::{AllocMap, Machine, MayLeak, StackPopJump};
pub use self::operand::{ImmTy, Immediate, OpTy, Operand, ScalarMaybeUndef};
use std::convert::{TryFrom, TryInto};
-use rustc::ty::layout::{
- self, HasDataLayout, IntegerExt, LayoutOf, PrimitiveExt, Size, TyLayout, VariantIdx,
-};
-use rustc::{mir, ty};
-
use super::{InterpCx, MPlaceTy, Machine, MemPlace, Place, PlaceTy};
pub use rustc::mir::interpret::ScalarMaybeUndef;
use rustc::mir::interpret::{
sign_extend, truncate, AllocId, ConstValue, GlobalId, InterpResult, Pointer, Scalar,
};
-use rustc_ast::ast;
+use rustc::ty::layout::{
+ self, HasDataLayout, IntegerExt, LayoutOf, PrimitiveExt, Size, TyLayout, VariantIdx,
+};
+use rustc::ty::print::{FmtPrinter, PrettyPrinter, Printer};
+use rustc::ty::Ty;
+use rustc::{mir, ty};
+use rustc_hir::def::Namespace;
use rustc_macros::HashStable;
+use std::fmt::Write;
/// An `Immediate` represents a single immediate self-contained Rust value.
///
pub layout: TyLayout<'tcx>,
}
-// `Tag: Copy` because some methods on `Scalar` consume them by value
impl<Tag: Copy> std::fmt::Display for ImmTy<'tcx, Tag> {
- fn fmt(&self, fmt: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
- match &self.imm {
- Immediate::Scalar(ScalarMaybeUndef::Scalar(s)) => match s.to_bits(self.layout.size) {
- Ok(s) => {
- match self.layout.ty.kind {
- ty::Int(_) => {
- return write!(
- fmt,
- "{}",
- super::sign_extend(s, self.layout.size) as i128,
- );
- }
- ty::Uint(_) => return write!(fmt, "{}", s),
- ty::Bool if s == 0 => return fmt.write_str("false"),
- ty::Bool if s == 1 => return fmt.write_str("true"),
- ty::Char => {
- if let Some(c) = u32::try_from(s).ok().and_then(std::char::from_u32) {
- return write!(fmt, "{}", c);
- }
- }
- ty::Float(ast::FloatTy::F32) => {
- if let Ok(u) = u32::try_from(s) {
- return write!(fmt, "{}", f32::from_bits(u));
- }
- }
- ty::Float(ast::FloatTy::F64) => {
- if let Ok(u) = u64::try_from(s) {
- return write!(fmt, "{}", f64::from_bits(u));
- }
- }
- _ => {}
- }
- write!(fmt, "{:x}", s)
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        /// Helper function for printing a scalar to a `FmtPrinter`.
+ fn p<'a, 'tcx, F: std::fmt::Write, Tag>(
+ cx: FmtPrinter<'a, 'tcx, F>,
+ s: ScalarMaybeUndef<Tag>,
+ ty: Ty<'tcx>,
+ ) -> Result<FmtPrinter<'a, 'tcx, F>, std::fmt::Error> {
+ match s {
+ ScalarMaybeUndef::Scalar(s) => {
+ cx.pretty_print_const_scalar(s.erase_tag(), ty, true)
}
- Err(_) => fmt.write_str("{pointer}"),
- },
- Immediate::Scalar(ScalarMaybeUndef::Undef) => fmt.write_str("{undef}"),
- Immediate::ScalarPair(..) => fmt.write_str("{wide pointer or tuple}"),
+ ScalarMaybeUndef::Undef => cx.typed_value(
+ |mut this| {
+ this.write_str("{undef ")?;
+ Ok(this)
+ },
+ |this| this.print_type(ty),
+ " ",
+ ),
+ }
}
+ ty::tls::with(|tcx| {
+ match self.imm {
+ Immediate::Scalar(s) => {
+ if let Some(ty) = tcx.lift(&self.layout.ty) {
+ let cx = FmtPrinter::new(tcx, f, Namespace::ValueNS);
+ p(cx, s, ty)?;
+ return Ok(());
+ }
+ write!(f, "{:?}: {}", s.erase_tag(), self.layout.ty)
+ }
+ Immediate::ScalarPair(a, b) => {
+ // FIXME(oli-obk): at least print tuples and slices nicely
+                    write!(f, "({:?}, {:?}): {}", a.erase_tag(), b.erase_tag(), self.layout.ty)
+ }
+ }
+ })
}
}
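The rewritten `Display` impl routes known scalars through rustc's pretty-printer and renders undef values as a braced, typed placeholder. Stripped of the `FmtPrinter`/`ty::tls` machinery, the fallback formatting shape is roughly this (the `MaybeUndef`/`Imm` types here are simplified stand-ins, not rustc's):

```rust
use std::fmt;

// Simplified stand-in for `ScalarMaybeUndef`: either known bits or undef.
enum MaybeUndef {
    Scalar(u128),
    Undef,
}

// A value paired with a type name, as `ImmTy` pairs an immediate
// with its layout.
struct Imm<'a> {
    val: MaybeUndef,
    ty: &'a str,
}

impl fmt::Display for Imm<'_> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self.val {
            MaybeUndef::Scalar(s) => write!(f, "{}: {}", s, self.ty),
            // Undef prints as a braced, typed placeholder, e.g. `{undef u8}`.
            MaybeUndef::Undef => write!(f, "{{undef {}}}", self.ty),
        }
    }
}

fn main() {
    assert_eq!(Imm { val: MaybeUndef::Scalar(42), ty: "u8" }.to_string(), "42: u8");
    assert_eq!(Imm { val: MaybeUndef::Undef, ty: "u8" }.to_string(), "{undef u8}");
}
```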
pub fn from_int(i: impl Into<i128>, layout: TyLayout<'tcx>) -> Self {
Self::from_scalar(Scalar::from_int(i, layout.size), layout)
}
-
- #[inline]
- pub fn to_bits(self) -> InterpResult<'tcx, u128> {
- self.to_scalar()?.to_bits(self.layout.size)
- }
}
// Use the existing layout if given (but sanity check in debug mode),
let len = mplace.len(self)?;
let bytes = self.memory.read_bytes(mplace.ptr, Size::from_bytes(len as u64))?;
let str = ::std::str::from_utf8(bytes)
- .map_err(|err| err_unsup!(ValidationFailure(err.to_string())))?;
+ .map_err(|err| err_ub_format!("this string is not valid UTF-8: {}", err))?;
Ok(str)
}
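The UTF-8 failure is now reported as undefined behaviour with the decoder's own message attached, instead of a generic validation failure. A minimal sketch of that error mapping, outside the interpreter's error machinery:

```rust
// Sketch of the new UTF-8 check: surface `std::str::from_utf8`'s own
// error message rather than a generic `ValidationFailure`.
fn read_str(bytes: &[u8]) -> Result<&str, String> {
    std::str::from_utf8(bytes)
        .map_err(|err| format!("this string is not valid UTF-8: {}", err))
}

fn main() {
    assert_eq!(read_str(b"hello"), Ok("hello"));
    // 0xFF can never appear in well-formed UTF-8.
    let err = read_str(&[0x68, 0xFF]).unwrap_err();
    assert!(err.starts_with("this string is not valid UTF-8"));
}
```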
) -> InterpResult<'tcx, OpTy<'tcx, M::PointerTag>> {
let base = match op.try_as_mplace(self) {
Ok(mplace) => {
- // The easy case
+ // We can reuse the mplace field computation logic for indirect operands.
let field = self.mplace_field(mplace, field)?;
return Ok(field.into());
}
layout: Option<TyLayout<'tcx>>,
) -> InterpResult<'tcx, OpTy<'tcx, M::PointerTag>> {
let base_op = match place.local {
- mir::RETURN_PLACE => throw_unsup!(ReadFromReturnPointer),
+ mir::RETURN_PLACE => throw_ub!(ReadFromReturnPlace),
local => {
// Do not use the layout passed in as argument if the base we are looking at
// here is not the entire place.
Copy(ref place) | Move(ref place) => self.eval_place_to_op(place, layout)?,
Constant(ref constant) => {
- let val = self.subst_from_frame_and_normalize_erasing_regions(constant.literal);
+ let val =
+ self.subst_from_current_frame_and_normalize_erasing_regions(constant.literal);
self.eval_const_to_op(val, layout)?
}
};
// Then computing the absolute variant idx should not overflow any more.
let variant_index = variants_start
.checked_add(variant_index_relative)
- .expect("oveflow computing absolute variant idx");
+ .expect("overflow computing absolute variant idx");
let variants_len = rval
.layout
.ty
BitXor => (Scalar::from_uint(l ^ r, size), left_layout.ty),
Add | Sub | Mul | Rem | Div => {
- debug_assert!(!left_layout.abi.is_signed());
+ assert!(!left_layout.abi.is_signed());
let op: fn(u128, u128) -> (u128, bool) = match bin_op {
Add => u128::overflowing_add,
Sub => u128::overflowing_sub,
// bail out.
None => Place::null(&*self),
},
- layout: self.layout_of(self.subst_from_frame_and_normalize_erasing_regions(
- self.frame().body.return_ty(),
- ))?,
+ layout: self.layout_of(
+ self.subst_from_current_frame_and_normalize_erasing_regions(
+ self.frame().body.return_ty(),
+ ),
+ )?,
}
}
local => PlaceTy {
// This works even for dead/uninitialized locals; we check further when writing
- place: Place::Local { frame: self.cur_frame(), local: local },
+ place: Place::Local { frame: self.cur_frame(), local },
layout: self.layout_of_local(self.frame(), local, None)?,
},
};
if M::enforce_validity(self) {
// Data got changed, better make sure it matches the type!
- self.validate_operand(self.place_to_op(dest)?, vec![], None)?;
+ self.validate_operand(self.place_to_op(dest)?)?;
}
Ok(())
if M::enforce_validity(self) {
// Data got changed, better make sure it matches the type!
- self.validate_operand(dest.into(), vec![], None)?;
+ self.validate_operand(dest.into())?;
}
Ok(())
if M::enforce_validity(self) {
// Data got changed, better make sure it matches the type!
- self.validate_operand(self.place_to_op(dest)?, vec![], None)?;
+ self.validate_operand(self.place_to_op(dest)?)?;
}
Ok(())
        // most likely we *are* running `typeck` right now. Investigate whether we can bail out
// on `typeck_tables().has_errors` at all const eval entry points.
debug!("Size mismatch when transmuting!\nsrc: {:#?}\ndest: {:#?}", src, dest);
- throw_unsup!(TransmuteSizeDiff(src.layout.ty, dest.layout.ty));
+ throw_inval!(TransmuteSizeDiff(src.layout.ty, dest.layout.ty));
}
        // Unsized copies rely on interpreting `src.meta` with `dest.layout`; we want
// to avoid that here.
if M::enforce_validity(self) {
// Data got changed, better make sure it matches the type!
- self.validate_operand(dest.into(), vec![], None)?;
+ self.validate_operand(dest.into())?;
}
Ok(())
let all_bytes = 0..self.len();
// This 'inspect' is okay since following access respects undef and relocations. This does
- // influence interpreter exeuction, but only to detect the error of cycles in evalution
+        // influence interpreter execution, but only to detect the error of cycles in evaluation
// dependencies.
let bytes = self.inspect_with_undef_and_ptr_outside_interpreter(all_bytes);
}
NullaryOp(mir::NullOp::SizeOf, ty) => {
- let ty = self.subst_from_frame_and_normalize_erasing_regions(ty);
+ let ty = self.subst_from_current_frame_and_normalize_erasing_regions(ty);
let layout = self.layout_of(ty)?;
assert!(
!layout.is_unsized(),
self.eval_terminator(terminator)?;
if !self.stack.is_empty() {
// This should change *something*
- debug_assert!(self.cur_frame() != old_stack || self.frame().block != old_bb);
+ assert!(self.cur_frame() != old_stack || self.frame().block != old_bb);
if let Some(block) = self.frame().block {
info!("// executing {:?}", block);
}
if expected == cond_val {
self.go_to_block(target);
} else {
- M::assert_panic(self, terminator.source_info.span, msg, cleanup)?;
+ M::assert_panic(self, msg, cleanup)?;
}
}
+ Abort => {
+ M::abort(self)?;
+ }
+
// When we encounter Resume, we've finished unwinding
// cleanup for the current stack frame. We pop it in order
// to continue unwinding the next frame
Unreachable => throw_ub!(Unreachable),
// These should never occur for MIR we actually run.
- DropAndReplace { .. } | FalseEdges { .. } | FalseUnwind { .. } => {
+ DropAndReplace { .. }
+ | FalseEdges { .. }
+ | FalseUnwind { .. }
+ | Yield { .. }
+ | GeneratorDrop => {
bug!("{:#?} should have been eliminated by MIR pass", terminator.kind)
}
-
- // These are not (yet) supported. It is unclear if they even can occur in
- // MIR that we actually run.
- Yield { .. } | GeneratorDrop | Abort => {
- throw_unsup_format!("Unsupported terminator kind: {:#?}", terminator.kind)
- }
}
Ok(())
trace!("Skipping callee ZST");
return Ok(());
}
- let caller_arg = caller_arg.next().ok_or_else(|| err_unsup!(FunctionArgCountMismatch))?;
+ let caller_arg = caller_arg.next().ok_or_else(|| {
+ err_ub_format!("calling a function with fewer arguments than it requires")
+ })?;
if rust_abi {
- debug_assert!(!caller_arg.layout.is_zst(), "ZSTs must have been already filtered out");
+ assert!(!caller_arg.layout.is_zst(), "ZSTs must have been already filtered out");
}
// Now, check
if !Self::check_argument_compat(rust_abi, caller_arg.layout, callee_arg.layout) {
- throw_unsup!(FunctionArgMismatch(caller_arg.layout.ty, callee_arg.layout.ty))
+ throw_ub_format!(
+ "calling a function with argument of type {:?} passing data of type {:?}",
+ callee_arg.layout.ty,
+ caller_arg.layout.ty
+ )
}
// We allow some transmutes here
self.copy_op_transmute(caller_arg, callee_arg)
abi => abi,
};
if normalize_abi(caller_abi) != normalize_abi(callee_abi) {
- throw_unsup!(FunctionAbiMismatch(caller_abi, callee_abi))
+ throw_ub_format!(
+ "calling a function with ABI {:?} using caller ABI {:?}",
+ callee_abi,
+ caller_abi
+ )
}
}
StackPopCleanup::Goto { ret: ret.map(|p| p.1), unwind },
)?;
- // We want to pop this frame again in case there was an error, to put
- // the blame in the right location. Until the 2018 edition is used in
- // the compiler, we have to do this with an immediately invoked function.
- let res =
- (|| {
- trace!(
- "caller ABI: {:?}, args: {:#?}",
- caller_abi,
- args.iter()
- .map(|arg| (arg.layout.ty, format!("{:?}", **arg)))
- .collect::<Vec<_>>()
- );
- trace!(
- "spread_arg: {:?}, locals: {:#?}",
- body.spread_arg,
- body.args_iter()
- .map(|local| (
- local,
- self.layout_of_local(self.frame(), local, None).unwrap().ty
- ))
- .collect::<Vec<_>>()
- );
-
- // Figure out how to pass which arguments.
- // The Rust ABI is special: ZST get skipped.
- let rust_abi = match caller_abi {
- Abi::Rust | Abi::RustCall => true,
- _ => false,
+ // If an error is raised here, pop the frame again to get an accurate backtrace.
+ // To this end, we wrap it all in a `try` block.
+ let res: InterpResult<'tcx> = try {
+ trace!(
+ "caller ABI: {:?}, args: {:#?}",
+ caller_abi,
+ args.iter()
+ .map(|arg| (arg.layout.ty, format!("{:?}", **arg)))
+ .collect::<Vec<_>>()
+ );
+ trace!(
+ "spread_arg: {:?}, locals: {:#?}",
+ body.spread_arg,
+ body.args_iter()
+ .map(|local| (
+ local,
+ self.layout_of_local(self.frame(), local, None).unwrap().ty
+ ))
+ .collect::<Vec<_>>()
+ );
+
+ // Figure out how to pass which arguments.
+            // The Rust ABI is special: ZSTs get skipped.
+ let rust_abi = match caller_abi {
+ Abi::Rust | Abi::RustCall => true,
+ _ => false,
+ };
+ // We have two iterators: Where the arguments come from,
+ // and where they go to.
+
+ // For where they come from: If the ABI is RustCall, we untuple the
+ // last incoming argument. These two iterators do not have the same type,
+ // so to keep the code paths uniform we accept an allocation
+ // (for RustCall ABI only).
+ let caller_args: Cow<'_, [OpTy<'tcx, M::PointerTag>]> =
+ if caller_abi == Abi::RustCall && !args.is_empty() {
+ // Untuple
+ let (&untuple_arg, args) = args.split_last().unwrap();
+ trace!("eval_fn_call: Will pass last argument by untupling");
+ Cow::from(
+ args.iter()
+ .map(|&a| Ok(a))
+ .chain(
+ (0..untuple_arg.layout.fields.count())
+ .map(|i| self.operand_field(untuple_arg, i as u64)),
+ )
+ .collect::<InterpResult<'_, Vec<OpTy<'tcx, M::PointerTag>>>>(
+ )?,
+ )
+ } else {
+ // Plain arg passing
+ Cow::from(args)
};
- // We have two iterators: Where the arguments come from,
- // and where they go to.
-
- // For where they come from: If the ABI is RustCall, we untuple the
- // last incoming argument. These two iterators do not have the same type,
- // so to keep the code paths uniform we accept an allocation
- // (for RustCall ABI only).
- let caller_args: Cow<'_, [OpTy<'tcx, M::PointerTag>]> =
- if caller_abi == Abi::RustCall && !args.is_empty() {
- // Untuple
- let (&untuple_arg, args) = args.split_last().unwrap();
- trace!("eval_fn_call: Will pass last argument by untupling");
- Cow::from(args.iter().map(|&a| Ok(a))
- .chain((0..untuple_arg.layout.fields.count())
- .map(|i| self.operand_field(untuple_arg, i as u64))
- )
- .collect::<InterpResult<'_, Vec<OpTy<'tcx, M::PointerTag>>>>()?)
- } else {
- // Plain arg passing
- Cow::from(args)
- };
- // Skip ZSTs
- let mut caller_iter = caller_args
- .iter()
- .filter(|op| !rust_abi || !op.layout.is_zst())
- .copied();
-
- // Now we have to spread them out across the callee's locals,
- // taking into account the `spread_arg`. If we could write
- // this is a single iterator (that handles `spread_arg`), then
- // `pass_argument` would be the loop body. It takes care to
- // not advance `caller_iter` for ZSTs.
- let mut locals_iter = body.args_iter();
- while let Some(local) = locals_iter.next() {
- let dest = self.eval_place(&mir::Place::from(local))?;
- if Some(local) == body.spread_arg {
- // Must be a tuple
- for i in 0..dest.layout.fields.count() {
- let dest = self.place_field(dest, i as u64)?;
- self.pass_argument(rust_abi, &mut caller_iter, dest)?;
- }
- } else {
- // Normal argument
+ // Skip ZSTs
+ let mut caller_iter =
+ caller_args.iter().filter(|op| !rust_abi || !op.layout.is_zst()).copied();
+
+ // Now we have to spread them out across the callee's locals,
+ // taking into account the `spread_arg`. If we could write
+            // this as a single iterator (that handles `spread_arg`), then
+ // `pass_argument` would be the loop body. It takes care to
+ // not advance `caller_iter` for ZSTs.
+ for local in body.args_iter() {
+ let dest = self.eval_place(&mir::Place::from(local))?;
+ if Some(local) == body.spread_arg {
+ // Must be a tuple
+ for i in 0..dest.layout.fields.count() {
+ let dest = self.place_field(dest, i as u64)?;
self.pass_argument(rust_abi, &mut caller_iter, dest)?;
}
+ } else {
+ // Normal argument
+ self.pass_argument(rust_abi, &mut caller_iter, dest)?;
}
- // Now we should have no more caller args
- if caller_iter.next().is_some() {
- trace!("Caller has passed too many args");
- throw_unsup!(FunctionArgCountMismatch)
+ }
+ // Now we should have no more caller args
+ if caller_iter.next().is_some() {
+ throw_ub_format!("calling a function with more arguments than it expected")
+ }
+ // Don't forget to check the return type!
+ if let Some((caller_ret, _)) = ret {
+ let callee_ret = self.eval_place(&mir::Place::return_place())?;
+ if !Self::check_argument_compat(
+ rust_abi,
+ caller_ret.layout,
+ callee_ret.layout,
+ ) {
+ throw_ub_format!(
+ "calling a function with return type {:?} passing \
+ return place of type {:?}",
+ callee_ret.layout.ty,
+ caller_ret.layout.ty
+ )
}
- // Don't forget to check the return type!
- if let Some((caller_ret, _)) = ret {
- let callee_ret = self.eval_place(&mir::Place::return_place())?;
- if !Self::check_argument_compat(
- rust_abi,
- caller_ret.layout,
- callee_ret.layout,
- ) {
- throw_unsup!(FunctionRetMismatch(
- caller_ret.layout.ty,
- callee_ret.layout.ty
- ))
- }
- } else {
- let local = mir::RETURN_PLACE;
- let callee_layout = self.layout_of_local(self.frame(), local, None)?;
- if !callee_layout.abi.is_uninhabited() {
- throw_unsup!(FunctionRetMismatch(
- self.tcx.types.never,
- callee_layout.ty
- ))
- }
+ } else {
+ let local = mir::RETURN_PLACE;
+ let callee_layout = self.layout_of_local(self.frame(), local, None)?;
+ if !callee_layout.abi.is_uninhabited() {
+ throw_ub_format!("calling a returning function without a return place")
}
- Ok(())
- })();
+ }
+ };
match res {
Err(err) => {
self.stack.pop();
write_path(&mut msg, where_);
}
write!(&mut msg, ", but expected {}", $details).unwrap();
- throw_unsup!(ValidationFailure(msg))
+ throw_ub!(ValidationFailure(msg))
}};
($what:expr, $where:expr) => {{
let mut msg = format!("encountered {}", $what);
msg.push_str(" at ");
write_path(&mut msg, where_);
}
- throw_unsup!(ValidationFailure(msg))
+ throw_ub!(ValidationFailure(msg))
}};
}
($e:expr, $what:expr, $where:expr, $details:expr) => {{
match $e {
Ok(x) => x,
+ // We re-throw the error, so we are okay with allocation:
+ // this can only slow down builds that fail anyway.
Err(_) => throw_validation_failure!($what, $where, $details),
}
}};
($e:expr, $what:expr, $where:expr) => {{
match $e {
Ok(x) => x,
+ // We re-throw the error, so we are okay with allocation:
+ // this can only slow down builds that fail anyway.
Err(_) => throw_validation_failure!($what, $where),
}
}};
Field(Symbol),
Variant(Symbol),
GeneratorState(VariantIdx),
- ClosureVar(Symbol),
+ CapturedVar(Symbol),
ArrayElem(usize),
TupleElem(usize),
Deref,
- Tag,
+ EnumTag,
+ GeneratorTag,
DynDowncast,
}
for elem in path.iter() {
match elem {
Field(name) => write!(out, ".{}", name),
- Variant(name) => write!(out, ".<downcast-variant({})>", name),
+ EnumTag => write!(out, ".<enum-tag>"),
+ Variant(name) => write!(out, ".<enum-variant({})>", name),
+ GeneratorTag => write!(out, ".<generator-tag>"),
GeneratorState(idx) => write!(out, ".<generator-state({})>", idx.index()),
- ClosureVar(name) => write!(out, ".<closure-var({})>", name),
+ CapturedVar(name) => write!(out, ".<captured-var({})>", name),
TupleElem(idx) => write!(out, ".{}", idx),
ArrayElem(idx) => write!(out, "[{}]", idx),
// `.<deref>` does not match Rust syntax, but it is more readable for long paths -- and
// even use the usual syntax because we are just showing the projections,
// not the root.
Deref => write!(out, ".<deref>"),
- Tag => write!(out, ".<enum-tag>"),
DynDowncast => write!(out, ".<dyn-downcast>"),
}
.unwrap()
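The renamed variants render with the conventions above: `.<enum-tag>`, `.<enum-variant(..)>`, `.<captured-var(..)>`, and so on. A trimmed-down version of the path printer, with `String` standing in for rustc's `Symbol` and only a subset of the variants:

```rust
use std::fmt::Write;

// Trimmed-down `PathElem`, with `String` standing in for `Symbol`.
enum PathElem {
    Field(String),
    EnumTag,
    Variant(String),
    CapturedVar(String),
    ArrayElem(usize),
    Deref,
}

fn write_path(out: &mut String, path: &[PathElem]) {
    use PathElem::*;
    for elem in path {
        match elem {
            Field(name) => write!(out, ".{}", name),
            EnumTag => write!(out, ".<enum-tag>"),
            Variant(name) => write!(out, ".<enum-variant({})>", name),
            CapturedVar(name) => write!(out, ".<captured-var({})>", name),
            ArrayElem(idx) => write!(out, "[{}]", idx),
            // `.<deref>` is not Rust syntax, but reads better in long paths.
            Deref => write!(out, ".<deref>"),
        }
        .unwrap()
    }
}

fn main() {
    let mut out = String::new();
    write_path(
        &mut out,
        &[PathElem::Field("inner".into()), PathElem::Deref, PathElem::EnumTag],
    );
    assert_eq!(out, ".inner.<deref>.<enum-tag>");
}
```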
// "expected something <in the given range>" makes sense.
fn wrapping_range_format(r: &RangeInclusive<u128>, max_hi: u128) -> String {
let (lo, hi) = r.clone().into_inner();
- debug_assert!(hi <= max_hi);
+ assert!(hi <= max_hi);
if lo > hi {
format!("less or equal to {}, or greater or equal to {}", hi, lo)
} else if lo == hi {
format!("equal to {}", lo)
} else if lo == 0 {
- debug_assert!(hi < max_hi, "should not be printing if the range covers everything");
+ assert!(hi < max_hi, "should not be printing if the range covers everything");
format!("less or equal to {}", hi)
} else if hi == max_hi {
- debug_assert!(lo > 0, "should not be printing if the range covers everything");
+ assert!(lo > 0, "should not be printing if the range covers everything");
format!("greater or equal to {}", lo)
} else {
format!("in the range {:?}", r)
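Validity ranges can wrap: `lo > hi` means the valid values are everything *outside* `hi+1..lo`, i.e. `..=hi` together with `lo..`. A standalone mirror of the same branching, runnable outside rustc:

```rust
use std::ops::RangeInclusive;

// Standalone mirror of `wrapping_range_format`: a range with `lo > hi`
// wraps around, covering `..=hi` and `lo..`.
fn wrapping_range_format(r: &RangeInclusive<u128>, max_hi: u128) -> String {
    let (lo, hi) = r.clone().into_inner();
    assert!(hi <= max_hi);
    if lo > hi {
        format!("less or equal to {}, or greater or equal to {}", hi, lo)
    } else if lo == hi {
        format!("equal to {}", lo)
    } else if lo == 0 {
        assert!(hi < max_hi, "should not be printing if the range covers everything");
        format!("less or equal to {}", hi)
    } else if hi == max_hi {
        assert!(lo > 0, "should not be printing if the range covers everything");
        format!("greater or equal to {}", lo)
    } else {
        format!("in the range {:?}", r)
    }
}

fn main() {
    // A niche range like the one `NonNull<T>` relies on: everything
    // except 0 is valid.
    assert_eq!(
        wrapping_range_format(&(1..=u128::MAX), u128::MAX),
        "greater or equal to 1"
    );
    // A wrapping range: valid values are 0..=2 and 5.. .
    assert_eq!(
        wrapping_range_format(&(5..=2), u128::MAX),
        "less or equal to 2, or greater or equal to 5"
    );
}
```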
path: Vec<PathElem>,
ref_tracking_for_consts:
Option<&'rt mut RefTracking<MPlaceTy<'tcx, M::PointerTag>, Vec<PathElem>>>,
+ may_ref_to_static: bool,
ecx: &'rt InterpCx<'mir, 'tcx, M>,
}
impl<'rt, 'mir, 'tcx, M: Machine<'mir, 'tcx>> ValidityVisitor<'rt, 'mir, 'tcx, M> {
fn aggregate_field_path_elem(&mut self, layout: TyLayout<'tcx>, field: usize) -> PathElem {
+ // First, check if we are projecting to a variant.
+ match layout.variants {
+ layout::Variants::Multiple { discr_index, .. } => {
+ if discr_index == field {
+ return match layout.ty.kind {
+ ty::Adt(def, ..) if def.is_enum() => PathElem::EnumTag,
+ ty::Generator(..) => PathElem::GeneratorTag,
+ _ => bug!("non-variant type {:?}", layout.ty),
+ };
+ }
+ }
+ layout::Variants::Single { .. } => {}
+ }
+
+ // Now we know we are projecting to a field, so figure out which one.
match layout.ty.kind {
// generators and closures.
ty::Closure(def_id, _) | ty::Generator(def_id, _, _) => {
}
}
- PathElem::ClosureVar(name.unwrap_or_else(|| {
+ PathElem::CapturedVar(name.unwrap_or_else(|| {
// Fall back to showing the field index.
sym::integer(field)
}))
// enums
ty::Adt(def, ..) if def.is_enum() => {
- // we might be projecting *to* a variant, or to a field *in*a variant.
+ // we might be projecting *to* a variant, or to a field *in* a variant.
match layout.variants {
layout::Variants::Single { index } => {
// Inside a variant
PathElem::Field(def.variants[index].fields[field].ident.name)
}
- _ => bug!(),
+ layout::Variants::Multiple { .. } => bug!("we handled variants above"),
}
}
Ok(())
}
-}
-impl<'rt, 'mir, 'tcx, M: Machine<'mir, 'tcx>> ValueVisitor<'mir, 'tcx, M>
- for ValidityVisitor<'rt, 'mir, 'tcx, M>
-{
- type V = OpTy<'tcx, M::PointerTag>;
-
- #[inline(always)]
- fn ecx(&self) -> &InterpCx<'mir, 'tcx, M> {
- &self.ecx
- }
-
- #[inline]
- fn visit_field(
+ /// Check a reference or `Box`.
+ fn check_safe_pointer(
&mut self,
- old_op: OpTy<'tcx, M::PointerTag>,
- field: usize,
- new_op: OpTy<'tcx, M::PointerTag>,
+ value: OpTy<'tcx, M::PointerTag>,
+ kind: &str,
) -> InterpResult<'tcx> {
- let elem = self.aggregate_field_path_elem(old_op.layout, field);
- self.visit_elem(new_op, elem)
- }
-
- #[inline]
- fn visit_variant(
- &mut self,
- old_op: OpTy<'tcx, M::PointerTag>,
- variant_id: VariantIdx,
- new_op: OpTy<'tcx, M::PointerTag>,
- ) -> InterpResult<'tcx> {
- let name = match old_op.layout.ty.kind {
- ty::Adt(adt, _) => PathElem::Variant(adt.variants[variant_id].ident.name),
- // Generators also have variants
- ty::Generator(..) => PathElem::GeneratorState(variant_id),
- _ => bug!("Unexpected type with variant: {:?}", old_op.layout.ty),
- };
- self.visit_elem(new_op, name)
- }
-
- #[inline]
- fn visit_value(&mut self, op: OpTy<'tcx, M::PointerTag>) -> InterpResult<'tcx> {
- trace!("visit_value: {:?}, {:?}", *op, op.layout);
- // Translate some possible errors to something nicer.
- match self.walk_value(op) {
- Ok(()) => Ok(()),
+ let value = self.ecx.read_immediate(value)?;
+ // Handle wide pointers.
+ // Check metadata early, for better diagnostics
+ let place = try_validation!(self.ecx.ref_to_mplace(value), "undefined pointer", self.path);
+ if place.layout.is_unsized() {
+ self.check_wide_ptr_meta(place.meta, place.layout)?;
+ }
+ // Make sure this is dereferenceable and all.
+ let size_and_align = match self.ecx.size_and_align_of(place.meta, place.layout) {
+ Ok(res) => res,
Err(err) => match err.kind {
- err_ub!(InvalidDiscriminant(val)) => {
- throw_validation_failure!(val, self.path, "a valid enum discriminant")
+ err_ub!(InvalidMeta(msg)) => throw_validation_failure!(
+ format_args!("invalid {} metadata: {}", kind, msg),
+ self.path
+ ),
+ _ => bug!("Unexpected error during ptr size_and_align_of: {}", err),
+ },
+ };
+ let (size, align) = size_and_align
+ // for the purpose of validity, consider foreign types to have
+ // alignment and size determined by the layout (size will be 0,
+ // alignment should take attributes into account).
+ .unwrap_or_else(|| (place.layout.size, place.layout.align.abi));
+ let ptr: Option<_> = match self.ecx.memory.check_ptr_access_align(
+ place.ptr,
+ size,
+ Some(align),
+ CheckInAllocMsg::InboundsTest,
+ ) {
+ Ok(ptr) => ptr,
+ Err(err) => {
+ info!(
+ "{:?} did not pass access check for size {:?}, align {:?}",
+ place.ptr, size, align
+ );
+ match err.kind {
+ err_ub!(InvalidIntPointerUsage(0)) => {
+ throw_validation_failure!(format_args!("a NULL {}", kind), self.path)
+ }
+ err_ub!(InvalidIntPointerUsage(i)) => throw_validation_failure!(
+ format_args!("a {} to unallocated address {}", kind, i),
+ self.path
+ ),
+ err_ub!(AlignmentCheckFailed { required, has }) => throw_validation_failure!(
+ format_args!(
+ "an unaligned {} (required {} byte alignment but found {})",
+ kind,
+ required.bytes(),
+ has.bytes()
+ ),
+ self.path
+ ),
+ err_unsup!(ReadBytesAsPointer) => throw_validation_failure!(
+ format_args!("a dangling {} (created from integer)", kind),
+ self.path
+ ),
+ err_ub!(PointerOutOfBounds { .. }) => throw_validation_failure!(
+ format_args!(
+ "a dangling {} (going beyond the bounds of its allocation)",
+ kind
+ ),
+ self.path
+ ),
+ // This cannot happen during const-eval (because interning already detects
+ // dangling pointers), but it can happen in Miri.
+ err_ub!(PointerUseAfterFree(_)) => throw_validation_failure!(
+ format_args!("a dangling {} (use-after-free)", kind),
+ self.path
+ ),
+ _ => bug!("Unexpected error during ptr inbounds test: {}", err),
}
- err_unsup!(ReadPointerAsBytes) => {
- throw_validation_failure!("a pointer", self.path, "plain (non-pointer) bytes")
+ }
+ };
+ // Recursive checking
+ if let Some(ref mut ref_tracking) = self.ref_tracking_for_consts {
+ if let Some(ptr) = ptr {
+ // not a ZST
+ // Skip validation entirely for some external statics
+ let alloc_kind = self.ecx.tcx.alloc_map.lock().get(ptr.alloc_id);
+ if let Some(GlobalAlloc::Static(did)) = alloc_kind {
+ // `extern static` cannot be validated as they have no body.
+ // FIXME: Statics from other crates are also skipped.
+ // They might be checked at a different type, but for now we
+ // want to avoid recursing too deeply. This is not sound!
+ if !did.is_local() || self.ecx.tcx.is_foreign_item(did) {
+ return Ok(());
+ }
+ if !self.may_ref_to_static && self.ecx.tcx.is_static(did) {
+ throw_validation_failure!(
+ format_args!("a {} pointing to a static variable", kind),
+ self.path
+ );
+ }
}
- _ => Err(err),
- },
+ }
+ // Proceed recursively even for ZST, no reason to skip them!
+ // `!` is a ZST and we want to validate it.
+ // Normalize before handing `place` to tracking because that will
+ // check for duplicates.
+ let place = if size.bytes() > 0 {
+ self.ecx.force_mplace_ptr(place).expect("we already bounds-checked")
+ } else {
+ place
+ };
+ let path = &self.path;
+ ref_tracking.track(place, || {
+ // We need to clone the path anyway, make sure it gets created
+ // with enough space for the additional `Deref`.
+ let mut new_path = Vec::with_capacity(path.len() + 1);
+ new_path.clone_from(path);
+ new_path.push(PathElem::Deref);
+ new_path
+ });
}
+ Ok(())
}
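The new `check_safe_pointer` distinguishes NULL pointers, integer addresses with no allocation behind them, misaligned pointers, and out-of-bounds pointers. The purely arithmetic part of such a check can be sketched as follows (the allocation bounds and error shapes here are hypothetical, not the interpreter's real `AllocId`-based machinery):

```rust
// Sketch of the arithmetic part of a pointer access check: reject NULL
// and misaligned addresses, then check that `addr..addr+size` stays
// inside the (hypothetical) allocation bounds.
#[derive(Debug, PartialEq)]
enum PtrError {
    Null,
    Unaligned { required: u64, has: u64 },
    OutOfBounds,
}

fn check_ptr(
    addr: u64,
    size: u64,
    align: u64,
    alloc_start: u64,
    alloc_size: u64,
) -> Result<(), PtrError> {
    if addr == 0 {
        return Err(PtrError::Null);
    }
    if addr % align != 0 {
        // Report the largest power-of-two alignment the address has.
        return Err(PtrError::Unaligned { required: align, has: 1 << addr.trailing_zeros() });
    }
    let alloc_end = alloc_start + alloc_size;
    if addr < alloc_start || addr.checked_add(size).map_or(true, |end| end > alloc_end) {
        return Err(PtrError::OutOfBounds);
    }
    Ok(())
}

fn main() {
    assert_eq!(check_ptr(0, 4, 4, 0x1000, 64), Err(PtrError::Null));
    assert_eq!(
        check_ptr(0x1002, 4, 4, 0x1000, 64),
        Err(PtrError::Unaligned { required: 4, has: 2 })
    );
    assert_eq!(check_ptr(0x1000, 128, 4, 0x1000, 64), Err(PtrError::OutOfBounds));
    assert_eq!(check_ptr(0x1000, 4, 4, 0x1000, 64), Ok(()));
}
```

Each failure class maps onto one of the `throw_validation_failure!` arms above, which attach the access path for a readable diagnostic.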
- fn visit_primitive(&mut self, value: OpTy<'tcx, M::PointerTag>) -> InterpResult<'tcx> {
- let value = self.ecx.read_immediate(value)?;
+ /// Check if this is a value of primitive type, and if yes check the validity of the value
+ /// at that type. Return `true` if the type is indeed primitive.
+ fn try_visit_primitive(
+ &mut self,
+ value: OpTy<'tcx, M::PointerTag>,
+ ) -> InterpResult<'tcx, bool> {
// Go over all the primitive types
let ty = value.layout.ty;
match ty.kind {
ty::Bool => {
- let value = value.to_scalar_or_undef();
+ let value = self.ecx.read_scalar(value)?;
try_validation!(value.to_bool(), value, self.path, "a boolean");
+ Ok(true)
}
ty::Char => {
- let value = value.to_scalar_or_undef();
+ let value = self.ecx.read_scalar(value)?;
try_validation!(value.to_char(), value, self.path, "a valid unicode codepoint");
+ Ok(true)
}
ty::Float(_) | ty::Int(_) | ty::Uint(_) => {
+ let value = self.ecx.read_scalar(value)?;
// NOTE: Keep this in sync with the array optimization for int/float
// types below!
- let size = value.layout.size;
- let value = value.to_scalar_or_undef();
if self.ref_tracking_for_consts.is_some() {
// Integers/floats in CTFE: Must be scalar bits, pointers are dangerous
- try_validation!(
- value.to_bits(size),
- value,
- self.path,
- "initialized plain (non-pointer) bytes"
- );
+ let is_bits = value.not_undef().map_or(false, |v| v.is_bits());
+ if !is_bits {
+ throw_validation_failure!(
+ value,
+ self.path,
+ "initialized plain (non-pointer) bytes"
+ )
+ }
} else {
// At run-time, for now, we accept *anything* for these types, including
// undef. We should fix that, but let's start low.
}
+ Ok(true)
}
ty::RawPtr(..) => {
// We are conservative with undef for integers, but try to
// actually enforce our current rules for raw pointers.
- let place =
- try_validation!(self.ecx.ref_to_mplace(value), "undefined pointer", self.path);
+ let place = try_validation!(
+ self.ecx.ref_to_mplace(self.ecx.read_immediate(value)?),
+ "undefined pointer",
+ self.path
+ );
if place.layout.is_unsized() {
self.check_wide_ptr_meta(place.meta, place.layout)?;
}
+ Ok(true)
}
- _ if ty.is_box() || ty.is_region_ptr() => {
- // Handle wide pointers.
- // Check metadata early, for better diagnostics
- let place =
- try_validation!(self.ecx.ref_to_mplace(value), "undefined pointer", self.path);
- if place.layout.is_unsized() {
- self.check_wide_ptr_meta(place.meta, place.layout)?;
- }
- // Make sure this is dereferenceable and all.
- let (size, align) = self
- .ecx
- .size_and_align_of(place.meta, place.layout)?
- // for the purpose of validity, consider foreign types to have
- // alignment and size determined by the layout (size will be 0,
- // alignment should take attributes into account).
- .unwrap_or_else(|| (place.layout.size, place.layout.align.abi));
- let ptr: Option<_> = match self.ecx.memory.check_ptr_access_align(
- place.ptr,
- size,
- Some(align),
- CheckInAllocMsg::InboundsTest,
- ) {
- Ok(ptr) => ptr,
- Err(err) => {
- info!(
- "{:?} did not pass access check for size {:?}, align {:?}",
- place.ptr, size, align
- );
- match err.kind {
- err_unsup!(InvalidNullPointerUsage) => {
- throw_validation_failure!("NULL reference", self.path)
- }
- err_unsup!(AlignmentCheckFailed { required, has }) => {
- throw_validation_failure!(
- format_args!(
- "unaligned reference \
- (required {} byte alignment but found {})",
- required.bytes(),
- has.bytes()
- ),
- self.path
- )
- }
- err_unsup!(ReadBytesAsPointer) => throw_validation_failure!(
- "dangling reference (created from integer)",
- self.path
- ),
- _ => throw_validation_failure!(
- "dangling reference (not entirely in bounds)",
- self.path
- ),
- }
- }
- };
- // Recursive checking
- if let Some(ref mut ref_tracking) = self.ref_tracking_for_consts {
- if let Some(ptr) = ptr {
- // not a ZST
- // Skip validation entirely for some external statics
- let alloc_kind = self.ecx.tcx.alloc_map.lock().get(ptr.alloc_id);
- if let Some(GlobalAlloc::Static(did)) = alloc_kind {
- // `extern static` cannot be validated as they have no body.
- // FIXME: Statics from other crates are also skipped.
- // They might be checked at a different type, but for now we
- // want to avoid recursing too deeply. This is not sound!
- if !did.is_local() || self.ecx.tcx.is_foreign_item(did) {
- return Ok(());
- }
- }
- }
- // Proceed recursively even for ZST, no reason to skip them!
- // `!` is a ZST and we want to validate it.
- // Normalize before handing `place` to tracking because that will
- // check for duplicates.
- let place = if size.bytes() > 0 {
- self.ecx.force_mplace_ptr(place).expect("we already bounds-checked")
- } else {
- place
- };
- let path = &self.path;
- ref_tracking.track(place, || {
- // We need to clone the path anyway, make sure it gets created
- // with enough space for the additional `Deref`.
- let mut new_path = Vec::with_capacity(path.len() + 1);
- new_path.clone_from(path);
- new_path.push(PathElem::Deref);
- new_path
- });
- }
+ ty::Ref(..) => {
+ self.check_safe_pointer(value, "reference")?;
+ Ok(true)
+ }
+ ty::Adt(def, ..) if def.is_box() => {
+ self.check_safe_pointer(value, "box")?;
+ Ok(true)
}
ty::FnPtr(_sig) => {
- let value = value.to_scalar_or_undef();
+ let value = self.ecx.read_scalar(value)?;
let _fn = try_validation!(
value.not_undef().and_then(|ptr| self.ecx.memory.get_fn(ptr)),
value,
"a function pointer"
);
// FIXME: Check if the signature matches
+ Ok(true)
+ }
+ ty::Never => throw_validation_failure!("a value of the never type `!`", self.path),
+ ty::Foreign(..) | ty::FnDef(..) => {
+ // Nothing to check.
+ Ok(true)
}
- // This should be all the primitive types
- _ => bug!("Unexpected primitive type {}", value.layout.ty),
+ // The above should be all the (inhabited) primitive types. The rest is compound, we
+ // check them by visiting their fields/variants.
+ // (`Str` UTF-8 check happens in `visit_aggregate`, too.)
+ ty::Adt(..)
+ | ty::Tuple(..)
+ | ty::Array(..)
+ | ty::Slice(..)
+ | ty::Str
+ | ty::Dynamic(..)
+ | ty::Closure(..)
+ | ty::Generator(..) => Ok(false),
+ // Some types only occur during typechecking, they have no layout.
+ // We should not see them here and we could not check them anyway.
+ ty::Error
+ | ty::Infer(..)
+ | ty::Placeholder(..)
+ | ty::Bound(..)
+ | ty::Param(..)
+ | ty::Opaque(..)
+ | ty::UnnormalizedProjection(..)
+ | ty::Projection(..)
+ | ty::GeneratorWitness(..) => bug!("Encountered invalid type {:?}", ty),
}
- Ok(())
- }
-
- fn visit_uninhabited(&mut self) -> InterpResult<'tcx> {
- throw_validation_failure!("a value of an uninhabited type", self.path)
}
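The `Ok(true)` / `Ok(false)` protocol used by `try_visit_primitive` above can be illustrated with a toy checker; the `Value` type and its validity rules below are invented for illustration and are not the rustc internals:

```rust
// Toy model of the leaf/compound protocol: a checker returns Ok(true) when
// it fully validated a leaf value, Ok(false) when the value is compound and
// the caller must recurse into its fields, and Err(_) on invalid data.
enum Value {
    Bool(u8),          // raw byte; only 0 and 1 are valid
    Tuple(Vec<Value>), // compound: the caller recurses
}

fn try_visit_leaf(v: &Value) -> Result<bool, String> {
    match v {
        Value::Bool(b) if *b <= 1 => Ok(true),
        Value::Bool(b) => Err(format!("{} is not a boolean", b)),
        Value::Tuple(_) => Ok(false),
    }
}

fn visit(v: &Value) -> Result<(), String> {
    if try_visit_leaf(v)? {
        return Ok(()); // leaf fully handled
    }
    if let Value::Tuple(fields) = v {
        for f in fields {
            visit(f)?; // recurse into each field
        }
    }
    Ok(())
}
```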
fn visit_scalar(
&mut self,
op: OpTy<'tcx, M::PointerTag>,
- layout: &layout::Scalar,
+ scalar_layout: &layout::Scalar,
) -> InterpResult<'tcx> {
let value = self.ecx.read_scalar(op)?;
+ let valid_range = &scalar_layout.valid_range;
+ let (lo, hi) = valid_range.clone().into_inner();
// Determine the allowed range
- let (lo, hi) = layout.valid_range.clone().into_inner();
// `max_hi` is as big as the size fits
- let max_hi = u128::max_value() >> (128 - op.layout.size.bits());
+ let max_hi = u128::MAX >> (128 - op.layout.size.bits());
assert!(hi <= max_hi);
// We could also write `(hi + 1) % (max_hi + 1) == lo` but `max_hi + 1` overflows for `u128`
if (lo == 0 && hi == max_hi) || (hi + 1 == lo) {
value.not_undef(),
value,
self.path,
- format_args!("something {}", wrapping_range_format(&layout.valid_range, max_hi),)
+ format_args!("something {}", wrapping_range_format(valid_range, max_hi),)
);
let bits = match value.to_bits_or_ptr(op.layout.size, self.ecx) {
Err(ptr) => {
self.path,
format_args!(
"something that cannot possibly fail to be {}",
- wrapping_range_format(&layout.valid_range, max_hi)
+ wrapping_range_format(valid_range, max_hi)
)
)
}
self.path,
format_args!(
"something that cannot possibly fail to be {}",
- wrapping_range_format(&layout.valid_range, max_hi)
+ wrapping_range_format(valid_range, max_hi)
)
)
}
Ok(data) => data,
};
// Now compare. This is slightly subtle because this is a special "wrap-around" range.
- if wrapping_range_contains(&layout.valid_range, bits) {
+ if wrapping_range_contains(valid_range, bits) {
Ok(())
} else {
throw_validation_failure!(
bits,
self.path,
- format_args!("something {}", wrapping_range_format(&layout.valid_range, max_hi))
+ format_args!("something {}", wrapping_range_format(valid_range, max_hi))
)
}
}
+}
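The wrap-around comparison in `visit_scalar` is the subtle part: a valid range with `lo > hi` wraps past the end of the type, so membership cannot be tested with a plain `lo <= bits && bits <= hi`. A minimal standalone sketch of the containment logic, with `max_hi` computed the same way as above (function names here are illustrative, not the rustc-internal API):

```rust
// Largest value representable in `size_bits` bits (size_bits in 1..=128),
// mirroring `u128::MAX >> (128 - op.layout.size.bits())` above.
fn max_hi(size_bits: u32) -> u128 {
    u128::MAX >> (128 - size_bits)
}

// A "wrapping" inclusive range: `lo > hi` means the range wraps around
// the end of the type, e.g. 254..=2 on a u8 covers {254, 255, 0, 1, 2}.
fn wrapping_range_contains(lo: u128, hi: u128, bits: u128) -> bool {
    if lo <= hi {
        lo <= bits && bits <= hi
    } else {
        bits >= lo || bits <= hi
    }
}
```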
+
+impl<'rt, 'mir, 'tcx, M: Machine<'mir, 'tcx>> ValueVisitor<'mir, 'tcx, M>
+ for ValidityVisitor<'rt, 'mir, 'tcx, M>
+{
+ type V = OpTy<'tcx, M::PointerTag>;
+
+ #[inline(always)]
+ fn ecx(&self) -> &InterpCx<'mir, 'tcx, M> {
+ &self.ecx
+ }
+
+ #[inline]
+ fn visit_field(
+ &mut self,
+ old_op: OpTy<'tcx, M::PointerTag>,
+ field: usize,
+ new_op: OpTy<'tcx, M::PointerTag>,
+ ) -> InterpResult<'tcx> {
+ let elem = self.aggregate_field_path_elem(old_op.layout, field);
+ self.visit_elem(new_op, elem)
+ }
+
+ #[inline]
+ fn visit_variant(
+ &mut self,
+ old_op: OpTy<'tcx, M::PointerTag>,
+ variant_id: VariantIdx,
+ new_op: OpTy<'tcx, M::PointerTag>,
+ ) -> InterpResult<'tcx> {
+ let name = match old_op.layout.ty.kind {
+ ty::Adt(adt, _) => PathElem::Variant(adt.variants[variant_id].ident.name),
+ // Generators also have variants
+ ty::Generator(..) => PathElem::GeneratorState(variant_id),
+ _ => bug!("Unexpected type with variant: {:?}", old_op.layout.ty),
+ };
+ self.visit_elem(new_op, name)
+ }
+
+ #[inline(always)]
+ fn visit_union(&mut self, op: OpTy<'tcx, M::PointerTag>, fields: usize) -> InterpResult<'tcx> {
+ // Empty unions are not accepted by rustc. But uninhabited enums
+ // claim to be unions, so allow them, too.
+ assert!(op.layout.abi.is_uninhabited() || fields > 0);
+ Ok(())
+ }
+
+ #[inline]
+ fn visit_value(&mut self, op: OpTy<'tcx, M::PointerTag>) -> InterpResult<'tcx> {
+ trace!("visit_value: {:?}, {:?}", *op, op.layout);
+
+ // Check primitive types -- the leaves of our recursive descent.
+ if self.try_visit_primitive(op)? {
+ return Ok(());
+ }
+ // Sanity check: `builtin_deref` does not know any pointers that are not primitive.
+ assert!(op.layout.ty.builtin_deref(true).is_none());
+
+ // Recursively walk the type. Translate some possible errors to something nicer.
+ match self.walk_value(op) {
+ Ok(()) => {}
+ Err(err) => match err.kind {
+ err_ub!(InvalidDiscriminant(val)) => {
+ throw_validation_failure!(val, self.path, "a valid enum discriminant")
+ }
+ err_unsup!(ReadPointerAsBytes) => {
+ throw_validation_failure!("a pointer", self.path, "plain (non-pointer) bytes")
+ }
+ // Propagate upwards (that will also check for unexpected errors).
+ _ => return Err(err),
+ },
+ }
+
+ // *After* all of this, check the ABI. We need to check the ABI to handle
+ // types like `NonNull` where the `Scalar` info is more restrictive than what
+ // the fields say (`rustc_layout_scalar_valid_range_start`).
+ // But in most cases, this will just propagate what the fields say,
+ // and then we want the error to point at the field -- so, first recurse,
+ // then check ABI.
+ //
+ // FIXME: We could avoid some redundant checks here. For newtypes wrapping
+ // scalars, we do the same check on every "level" (e.g., first we check
+ // MyNewtype and then the scalar in there).
+ match op.layout.abi {
+ layout::Abi::Uninhabited => {
+ throw_validation_failure!(
+ format_args!("a value of uninhabited type {:?}", op.layout.ty),
+ self.path
+ );
+ }
+ layout::Abi::Scalar(ref scalar_layout) => {
+ self.visit_scalar(op, scalar_layout)?;
+ }
+ layout::Abi::ScalarPair { .. } | layout::Abi::Vector { .. } => {
+ // These have fields that we already visited above, so we already checked
+ // all their scalar-level restrictions.
+ // There is also no equivalent to `rustc_layout_scalar_valid_range_start`
+ // that would make skipping them here an issue.
+ }
+ layout::Abi::Aggregate { .. } => {
+ // Nothing to do.
+ }
+ }
+
+ Ok(())
+ }
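The `NonNull`-style situation described in the comment above is observable from stable code: types restricted via `rustc_layout_scalar_valid_range_start`, such as `std::num::NonZeroU8`, carry a scalar validity range stricter than their single integer field, which is why the ABI-level check must run in addition to the field walk. A small demonstration using the stable API:

```rust
// NonZeroU8 wraps a plain u8, but its scalar layout excludes 0, so
// validity must be enforced at the wrapper level, not just on the field.
use std::num::NonZeroU8;

fn classify(byte: u8) -> Option<NonZeroU8> {
    NonZeroU8::new(byte) // None exactly when the excluded value 0 is hit
}
```

The excluded value also serves as a niche: `Option<NonZeroU8>` fits in a single byte.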
fn visit_aggregate(
&mut self,
Err(err) => {
// For some errors we might be able to provide extra information
match err.kind {
- err_unsup!(ReadUndefBytes(offset)) => {
+ err_ub!(InvalidUndefBytes(Some(ptr))) => {
// Some byte was undefined, determine which
// element that byte belongs to so we can
// provide an index.
- let i = (offset.bytes() / layout.size.bytes()) as usize;
+ let i = (ptr.offset.bytes() / layout.size.bytes()) as usize;
self.path.push(PathElem::ArrayElem(i));
throw_validation_failure!("undefined bytes", self.path)
}
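The diagnostic arm above maps the byte offset of the first undefined byte back to an array index by integer division with the element size; a standalone sketch (the function name is invented for illustration):

```rust
// Which array element does the byte at `offset_bytes` fall into, given
// elements of `elem_size_bytes` each? Mirrors the index computation in
// the "undefined bytes" error path above.
fn undef_element_index(offset_bytes: u64, elem_size_bytes: u64) -> usize {
    (offset_bytes / elem_size_bytes) as usize
}
```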
impl<'mir, 'tcx, M: Machine<'mir, 'tcx>> InterpCx<'mir, 'tcx, M> {
- /// This function checks the data at `op`. `op` is assumed to cover valid memory if it
- /// is an indirect operand.
- /// It will error if the bits at the destination do not match the ones described by the layout.
- ///
- /// `ref_tracking_for_consts` can be `None` to avoid recursive checking below references.
- /// This also toggles between "run-time" (no recursion) and "compile-time" (with recursion)
- /// validation (e.g., pointer values are fine in integers at runtime) and various other const
- /// specific validation checks.
- pub fn validate_operand(
+ fn validate_operand_internal(
&self,
op: OpTy<'tcx, M::PointerTag>,
path: Vec<PathElem>,
ref_tracking_for_consts: Option<
&mut RefTracking<MPlaceTy<'tcx, M::PointerTag>, Vec<PathElem>>,
>,
+ may_ref_to_static: bool,
) -> InterpResult<'tcx> {
- trace!("validate_operand: {:?}, {:?}", *op, op.layout.ty);
+ trace!("validate_operand_internal: {:?}, {:?}", *op, op.layout.ty);
// Construct a visitor
- let mut visitor = ValidityVisitor { path, ref_tracking_for_consts, ecx: self };
+ let mut visitor =
+ ValidityVisitor { path, ref_tracking_for_consts, may_ref_to_static, ecx: self };
// Try to cast to ptr *once* instead of all the time.
let op = self.force_op_ptr(op).unwrap_or(op);
- // Run it
- visitor.visit_value(op)
+ // Run it.
+ match visitor.visit_value(op) {
+ Ok(()) => Ok(()),
+ Err(err) if matches!(err.kind, err_ub!(ValidationFailure { .. })) => Err(err),
+ Err(err) if cfg!(debug_assertions) => {
+ bug!("Unexpected error during validation: {}", err)
+ }
+ Err(err) => Err(err),
+ }
+ }
+
+ /// This function checks the data at `op` to be const-valid.
+ /// `op` is assumed to cover valid memory if it is an indirect operand.
+ /// It will error if the bits at the destination do not match the ones described by the layout.
+ ///
+ /// `ref_tracking` is used to record references that we encounter so that they
+ /// can be checked recursively by an outside driving loop.
+ ///
+ /// `may_ref_to_static` controls whether references are allowed to point to statics.
+ #[inline(always)]
+ pub fn const_validate_operand(
+ &self,
+ op: OpTy<'tcx, M::PointerTag>,
+ path: Vec<PathElem>,
+ ref_tracking: &mut RefTracking<MPlaceTy<'tcx, M::PointerTag>, Vec<PathElem>>,
+ may_ref_to_static: bool,
+ ) -> InterpResult<'tcx> {
+ self.validate_operand_internal(op, path, Some(ref_tracking), may_ref_to_static)
+ }
+
+ /// This function checks the data at `op` to be runtime-valid.
+ /// `op` is assumed to cover valid memory if it is an indirect operand.
+ /// It will error if the bits at the destination do not match the ones described by the layout.
+ #[inline(always)]
+ pub fn validate_operand(&self, op: OpTy<'tcx, M::PointerTag>) -> InterpResult<'tcx> {
+ self.validate_operand_internal(op, vec![], None, false)
}
}
}
/// Visits the given value as a union. No automatic recursion can happen here.
#[inline(always)]
- fn visit_union(&mut self, _v: Self::V) -> InterpResult<'tcx>
+ fn visit_union(&mut self, _v: Self::V, _fields: usize) -> InterpResult<'tcx>
{
Ok(())
}
) -> InterpResult<'tcx> {
self.visit_value(new_val)
}
-
/// Called when recursing into an enum variant.
+ /// This gives the visitor the chance to track the stack of nested fields that
+ /// we are descending through.
#[inline(always)]
fn visit_variant(
&mut self,
self.visit_value(new_val)
}
- /// Called whenever we reach a value with uninhabited layout.
- /// Recursing to fields will *always* continue after this! This is not meant to control
- /// whether and how we descend recursively/ into the scalar's fields if there are any,
- /// it is meant to provide the chance for additional checks when a value of uninhabited
- /// layout is detected.
- #[inline(always)]
- fn visit_uninhabited(&mut self) -> InterpResult<'tcx>
- { Ok(()) }
- /// Called whenever we reach a value with scalar layout.
- /// We do NOT provide a `ScalarMaybeUndef` here to avoid accessing memory if the
- /// visitor is not even interested in scalars.
- /// Recursing to fields will *always* continue after this! This is not meant to control
- /// whether and how we descend recursively/ into the scalar's fields if there are any,
- /// it is meant to provide the chance for additional checks when a value of scalar
- /// layout is detected.
- #[inline(always)]
- fn visit_scalar(&mut self, _v: Self::V, _layout: &layout::Scalar) -> InterpResult<'tcx>
- { Ok(()) }
-
- /// Called whenever we reach a value of primitive type. There can be no recursion
- /// below such a value. This is the leaf function.
- /// We do *not* provide an `ImmTy` here because some implementations might want
- /// to write to the place this primitive lives in.
- #[inline(always)]
- fn visit_primitive(&mut self, _v: Self::V) -> InterpResult<'tcx>
- { Ok(()) }
-
// Default recursors. Not meant to be overloaded.
fn walk_aggregate(
&mut self,
fn walk_value(&mut self, v: Self::V) -> InterpResult<'tcx>
{
trace!("walk_value: type: {}", v.layout().ty);
- // If this is a multi-variant layout, we have to find the right one and proceed with
- // that.
- match v.layout().variants {
- layout::Variants::Multiple { .. } => {
- let op = v.to_op(self.ecx())?;
- let idx = self.ecx().read_discriminant(op)?.1;
- let inner = v.project_downcast(self.ecx(), idx)?;
- trace!("walk_value: variant layout: {:#?}", inner.layout());
- // recurse with the inner type
- return self.visit_variant(v, idx, inner);
- }
- layout::Variants::Single { .. } => {}
- }
- // Even for single variants, we might be able to get a more refined type:
- // If it is a trait object, switch to the actual type that was used to create it.
+ // Special treatment for special types, where the (static) layout is not sufficient.
match v.layout().ty.kind {
+ // If it is a trait object, switch to the real type that was used to create it.
ty::Dynamic(..) => {
// immediate trait objects are not a thing
let dest = v.to_op(self.ecx())?.assert_mem_place(self.ecx());
// recurse with the inner type
return self.visit_field(v, 0, Value::from_mem_place(inner));
},
- ty::Generator(..) => {
- // FIXME: Generator layout is lying: it claims a whole bunch of fields exist
- // when really many of them can be uninitialized.
- // Just treat them as a union for now, until hopefully the layout
- // computation is fixed.
- return self.visit_union(v);
- }
+ // Slices do not need special handling here: they have `Array` field
+ // placement with length 0, so we enter the `Array` case below which
+ // indirectly uses the metadata to determine the actual length.
_ => {},
};
- // If this is a scalar, visit it as such.
- // Things can be aggregates and have scalar layout at the same time, and that
- // is very relevant for `NonNull` and similar structs: We need to visit them
- // at their scalar layout *before* descending into their fields.
- // FIXME: We could avoid some redundant checks here. For newtypes wrapping
- // scalars, we do the same check on every "level" (e.g., first we check
- // MyNewtype and then the scalar in there).
- match v.layout().abi {
- layout::Abi::Uninhabited => {
- self.visit_uninhabited()?;
- }
- layout::Abi::Scalar(ref layout) => {
- self.visit_scalar(v, layout)?;
- }
- // FIXME: Should we do something for ScalarPair? Vector?
- _ => {}
- }
-
- // Check primitive types. We do this after checking the scalar layout,
- // just to have that done as well. Primitives can have varying layout,
- // so we check them separately and before aggregate handling.
- // It is CRITICAL that we get this check right, or we might be
- // validating the wrong thing!
- let primitive = match v.layout().fields {
- // Primitives appear as Union with 0 fields - except for Boxes and fat pointers.
- layout::FieldPlacement::Union(0) => true,
- _ => v.layout().ty.builtin_deref(true).is_some(),
- };
- if primitive {
- return self.visit_primitive(v);
- }
-
- // Proceed into the fields.
+ // Visit the fields of this value.
match v.layout().fields {
layout::FieldPlacement::Union(fields) => {
- // Empty unions are not accepted by rustc. That's great, it means we can
- // use that as an unambiguous signal for detecting primitives. Make sure
- // we did not miss any primitive.
- assert!(fields > 0);
- self.visit_union(v)
+ self.visit_union(v, fields)?;
},
layout::FieldPlacement::Arbitrary { ref offsets, .. } => {
// FIXME: We collect in a vec because otherwise there are lifetime
v.project_field(self.ecx(), i as u64)
})
.collect();
- self.visit_aggregate(v, fields.into_iter())
+ self.visit_aggregate(v, fields.into_iter())?;
},
layout::FieldPlacement::Array { .. } => {
// Let's get an mplace first.
let mplace = v.to_op(self.ecx())?.assert_mem_place(self.ecx());
// Now we can go over all the fields.
+ // This uses the *run-time length*, i.e., if we are a slice,
+ // the dynamic info from the metadata is used.
let iter = self.ecx().mplace_array_fields(mplace)?
.map(|f| f.and_then(|f| {
Ok(Value::from_mem_place(f))
}));
- self.visit_aggregate(v, iter)
+ self.visit_aggregate(v, iter)?;
+ }
+ }
+
+ match v.layout().variants {
+ // If this is a multi-variant layout, find the right variant and proceed
+ // with *its* fields.
+ layout::Variants::Multiple { .. } => {
+ let op = v.to_op(self.ecx())?;
+ let idx = self.ecx().read_discriminant(op)?.1;
+ let inner = v.project_downcast(self.ecx(), idx)?;
+ trace!("walk_value: variant layout: {:#?}", inner.layout());
+ // recurse with the inner type
+ self.visit_variant(v, idx, inner)
}
+ // For single-variant layouts, we already did everything there is to do.
+ layout::Variants::Single { .. } => Ok(())
}
}
}
#![feature(bool_to_option)]
#![feature(box_patterns)]
#![feature(box_syntax)]
+#![feature(const_if_match)]
+#![feature(const_fn)]
+#![feature(const_panic)]
#![feature(crate_visibility_modifier)]
#![feature(drain_filter)]
#![feature(exhaustive_patterns)]
use rustc::mir::mono::{InstantiationMode, MonoItem};
use rustc::mir::visit::Visitor as MirVisitor;
use rustc::mir::{self, Local, Location};
-use rustc::session::config::EntryFnType;
use rustc::ty::adjustment::{CustomCoerceUnsized, PointerCast};
use rustc::ty::print::obsolete::DefPathBasedNames;
-use rustc::ty::subst::{InternalSubsts, SubstsRef};
+use rustc::ty::subst::InternalSubsts;
use rustc::ty::{self, GenericParamDefKind, Instance, Ty, TyCtxt, TypeFoldable};
use rustc_data_structures::fx::{FxHashMap, FxHashSet};
use rustc_data_structures::sync::{par_iter, MTLock, MTRef, ParallelIterator};
use rustc_hir::def_id::{DefId, DefIdMap, LOCAL_CRATE};
use rustc_hir::itemlikevisit::ItemLikeVisitor;
use rustc_index::bit_set::GrowableBitSet;
+use rustc_session::config::EntryFnType;
use smallvec::SmallVec;
use std::iter;
tcx: TyCtxt<'tcx>,
body: &'a mir::Body<'tcx>,
output: &'a mut Vec<MonoItem<'tcx>>,
- param_substs: SubstsRef<'tcx>,
+ instance: Instance<'tcx>,
+}
+
+impl<'a, 'tcx> MirNeighborCollector<'a, 'tcx> {
+ pub fn monomorphize<T>(&self, value: T) -> T
+ where
+ T: TypeFoldable<'tcx>,
+ {
+ debug!("monomorphize: self.instance={:?}", self.instance);
+ if let Some(substs) = self.instance.substs_for_mir_body() {
+ self.tcx.subst_and_normalize_erasing_regions(substs, ty::ParamEnv::reveal_all(), &value)
+ } else {
+ self.tcx.normalize_erasing_regions(ty::ParamEnv::reveal_all(), value)
+ }
+ }
}
impl<'a, 'tcx> MirVisitor<'tcx> for MirNeighborCollector<'a, 'tcx> {
ref operand,
target_ty,
) => {
- let target_ty = self.tcx.subst_and_normalize_erasing_regions(
- self.param_substs,
- ty::ParamEnv::reveal_all(),
- &target_ty,
- );
+ let target_ty = self.monomorphize(target_ty);
let source_ty = operand.ty(self.body, self.tcx);
- let source_ty = self.tcx.subst_and_normalize_erasing_regions(
- self.param_substs,
- ty::ParamEnv::reveal_all(),
- &source_ty,
- );
+ let source_ty = self.monomorphize(source_ty);
let (source_ty, target_ty) =
find_vtable_types_for_unsizing(self.tcx, source_ty, target_ty);
// This could also be a different Unsize instruction, like
_,
) => {
let fn_ty = operand.ty(self.body, self.tcx);
- let fn_ty = self.tcx.subst_and_normalize_erasing_regions(
- self.param_substs,
- ty::ParamEnv::reveal_all(),
- &fn_ty,
- );
+ let fn_ty = self.monomorphize(fn_ty);
visit_fn_use(self.tcx, fn_ty, false, &mut self.output);
}
mir::Rvalue::Cast(
_,
) => {
let source_ty = operand.ty(self.body, self.tcx);
- let source_ty = self.tcx.subst_and_normalize_erasing_regions(
- self.param_substs,
- ty::ParamEnv::reveal_all(),
- &source_ty,
- );
+ let source_ty = self.monomorphize(source_ty);
match source_ty.kind {
ty::Closure(def_id, substs) => {
let instance = Instance::resolve_closure(
fn visit_const(&mut self, constant: &&'tcx ty::Const<'tcx>, location: Location) {
debug!("visiting const {:?} @ {:?}", *constant, location);
- collect_const(self.tcx, *constant, self.param_substs, self.output);
+ let substituted_constant = self.monomorphize(*constant);
+ let param_env = ty::ParamEnv::reveal_all();
+
+ match substituted_constant.val {
+ ty::ConstKind::Value(val) => collect_const_value(self.tcx, val, self.output),
+ ty::ConstKind::Unevaluated(def_id, substs, promoted) => {
+ match self.tcx.const_eval_resolve(param_env, def_id, substs, promoted, None) {
+ Ok(val) => collect_const_value(self.tcx, val, self.output),
+ Err(ErrorHandled::Reported) => {}
+ Err(ErrorHandled::TooGeneric) => span_bug!(
+ self.tcx.def_span(def_id),
+ "collection encountered polymorphic constant",
+ ),
+ }
+ }
+ _ => {}
+ }
self.super_const(constant);
}
match *kind {
mir::TerminatorKind::Call { ref func, .. } => {
let callee_ty = func.ty(self.body, tcx);
- let callee_ty = tcx.subst_and_normalize_erasing_regions(
- self.param_substs,
- ty::ParamEnv::reveal_all(),
- &callee_ty,
- );
+ let callee_ty = self.monomorphize(callee_ty);
visit_fn_use(self.tcx, callee_ty, true, &mut self.output);
}
mir::TerminatorKind::Drop { ref location, .. }
| mir::TerminatorKind::DropAndReplace { ref location, .. } => {
let ty = location.ty(self.body, self.tcx).ty;
- let ty = tcx.subst_and_normalize_erasing_regions(
- self.param_substs,
- ty::ParamEnv::reveal_all(),
- &ty,
- );
+ let ty = self.monomorphize(ty);
visit_drop_use(self.tcx, ty, true, self.output);
}
mir::TerminatorKind::Goto { .. }
(&ty::Adt(source_adt_def, source_substs), &ty::Adt(target_adt_def, target_substs)) => {
assert_eq!(source_adt_def, target_adt_def);
- let kind = monomorphize::custom_coerce_unsize_info(tcx, source_ty, target_ty);
-
- let coerce_index = match kind {
- CustomCoerceUnsized::Struct(i) => i,
- };
+ let CustomCoerceUnsized::Struct(coerce_index) =
+ monomorphize::custom_coerce_unsize_info(tcx, source_ty, target_ty);
let source_fields = &source_adt_def.non_enum_variant().fields;
let target_fields = &target_adt_def.non_enum_variant().fields;
fn visit_impl_item(&mut self, ii: &'v hir::ImplItem<'v>) {
match ii.kind {
- hir::ImplItemKind::Method(hir::FnSig { .. }, _) => {
+ hir::ImplItemKind::Fn(hir::FnSig { .. }, _) => {
let def_id = self.tcx.hir().local_def_id(ii.hir_id);
self.push_if_root(def_id);
}
let param_env = ty::ParamEnv::reveal_all();
let trait_ref = tcx.normalize_erasing_regions(param_env, trait_ref);
let overridden_methods: FxHashSet<_> =
- items.iter().map(|iiref| iiref.ident.modern()).collect();
+ items.iter().map(|iiref| iiref.ident.normalize_to_macros_2_0()).collect();
for method in tcx.provided_trait_methods(trait_ref.def_id) {
- if overridden_methods.contains(&method.ident.modern()) {
+ if overridden_methods.contains(&method.ident.normalize_to_macros_2_0()) {
continue;
}
debug!("collect_neighbours: {:?}", instance.def_id());
let body = tcx.instance_mir(instance.def);
- MirNeighborCollector { tcx, body: &body, output, param_substs: instance.substs }
- .visit_body(body);
+ MirNeighborCollector { tcx, body: &body, output, instance }.visit_body(body);
}
fn def_id_to_string(tcx: TyCtxt<'_>, def_id: DefId) -> String {
output
}
-fn collect_const<'tcx>(
- tcx: TyCtxt<'tcx>,
- constant: &'tcx ty::Const<'tcx>,
- param_substs: SubstsRef<'tcx>,
- output: &mut Vec<MonoItem<'tcx>>,
-) {
- debug!("visiting const {:?}", constant);
-
- let param_env = ty::ParamEnv::reveal_all();
- let substituted_constant =
- tcx.subst_and_normalize_erasing_regions(param_substs, param_env, &constant);
-
- match substituted_constant.val {
- ty::ConstKind::Value(val) => collect_const_value(tcx, val, output),
- ty::ConstKind::Unevaluated(def_id, substs, promoted) => {
- match tcx.const_eval_resolve(param_env, def_id, substs, promoted, None) {
- Ok(val) => collect_const_value(tcx, val, output),
- Err(ErrorHandled::Reported) => {}
- Err(ErrorHandled::TooGeneric) => {
- span_bug!(tcx.def_span(def_id), "collection encountered polymorphic constant",)
- }
- }
- }
- _ => {}
- }
-}
-
fn collect_const_value<'tcx>(
tcx: TyCtxt<'tcx>,
value: ConstValue<'tcx>,
let def_id = tcx.lang_items().coerce_unsized_trait().unwrap();
let trait_ref = ty::Binder::bind(ty::TraitRef {
- def_id: def_id,
+ def_id,
substs: tcx.mk_substs_trait(source_ty, &[target_ty.into()]),
});
match tcx.codegen_fulfill_obligation((ty::ParamEnv::reveal_all(), trait_ref)) {
- traits::VtableImpl(traits::VtableImplData { impl_def_id, .. }) => {
+ Some(traits::VtableImpl(traits::VtableImplData { impl_def_id, .. })) => {
tcx.coerce_unsized_info(impl_def_id).custom_kind.unwrap()
}
vtable => {
use rustc::ty::layout::VariantIdx;
use rustc::ty::query::Providers;
use rustc::ty::subst::{InternalSubsts, Subst};
-use rustc::ty::{self, Ty, TyCtxt};
+use rustc::ty::{self, Ty, TyCtxt, TypeFoldable};
use rustc_hir as hir;
use rustc_hir::def_id::DefId;
use rustc_index::vec::{Idx, IndexVec};
-use rustc_span::{sym, Span};
+use rustc_span::Span;
use rustc_target::spec::abi::Abi;
use std::fmt;
None,
),
ty::InstanceDef::FnPtrShim(def_id, ty) => {
+ // FIXME(eddyb) support generating shims for a "shallow type",
+ // e.g. `Foo<_>` or `[_]` instead of requiring a fully monomorphic
+ // `Foo<Bar>` or `[String]` etc.
+ assert!(!ty.needs_subst());
+
let trait_ = tcx.trait_of_item(def_id).unwrap();
let adjustment = match tcx.fn_trait_kind_from_lang_item(trait_) {
Some(ty::ClosureKind::FnOnce) => Adjustment::Identity,
None,
)
}
- ty::InstanceDef::DropGlue(def_id, ty) => build_drop_shim(tcx, def_id, ty),
+ ty::InstanceDef::DropGlue(def_id, ty) => {
+ // FIXME(eddyb) support generating shims for a "shallow type",
+ // e.g. `Foo<_>` or `[_]` instead of requiring a fully monomorphic
+ // `Foo<Bar>` or `[String]` etc.
+ assert!(!ty.needs_subst());
+
+ build_drop_shim(tcx, def_id, ty)
+ }
ty::InstanceDef::CloneShim(def_id, ty) => {
- let name = tcx.item_name(def_id);
- if name == sym::clone {
- build_clone_shim(tcx, def_id, ty)
- } else if name == sym::clone_from {
- debug!("make_shim({:?}: using default trait implementation", instance);
- return tcx.optimized_mir(def_id);
- } else {
- bug!("builtin clone shim {:?} not supported", instance)
- }
+ // FIXME(eddyb) support generating shims for a "shallow type",
+ // e.g. `Foo<_>` or `[_]` instead of requiring a fully monomorphic
+ // `Foo<Bar>` or `[String]` etc.
+ assert!(!ty.needs_subst());
+
+ build_clone_shim(tcx, def_id, ty)
}
ty::InstanceDef::Virtual(..) => {
bug!("InstanceDef::Virtual ({:?}) is for direct calls only", instance)
/// after the assignment, we can be sure to obtain the same place value.
/// (Concurrent accesses by other threads are no problem as these are anyway non-atomic
/// copies. Data races are UB.)
-fn is_stable(place: PlaceRef<'_, '_>) -> bool {
+fn is_stable(place: PlaceRef<'_>) -> bool {
place.projection.iter().all(|elem| {
match elem {
// Which place this evaluates to can change with any memory write,
// so cannot assume this to be stable.
ProjectionElem::Deref => false,
- // Array indices are intersting, but MIR building generates a *fresh*
+ // Array indices are interesting, but MIR building generates a *fresh*
// temporary for every array access, so the index cannot be changed as
// a side-effect.
ProjectionElem::Index { .. } |
}
/// Determine whether this type may be a reference (or box), and thus needs retagging.
-fn may_be_reference<'tcx>(ty: Ty<'tcx>) -> bool {
+fn may_be_reference(ty: Ty<'tcx>) -> bool {
match ty.kind {
// Primitive types that are not references
ty::Bool
{
let source_info = SourceInfo {
scope: OUTERMOST_SOURCE_SCOPE,
- span: span, // FIXME: Consider using just the span covering the function
- // argument declaration.
+ span, // FIXME: Consider using just the span covering the function
+ // argument declaration.
};
// Gather all arguments, skip return value.
let places = local_decls
//! Concrete error types for all operations which may be invalid in a certain const context.
-use rustc::session::config::nightly_options;
-use rustc::session::parse::feature_err;
-use rustc::ty::TyCtxt;
use rustc_errors::struct_span_err;
use rustc_hir::def_id::DefId;
+use rustc_session::config::nightly_options;
+use rustc_session::parse::feature_err;
use rustc_span::symbol::sym;
use rustc_span::{Span, Symbol};
/// Whether this operation can be evaluated by miri.
const IS_SUPPORTED_IN_MIRI: bool = true;
- /// Returns a boolean indicating whether the feature gate that would allow this operation is
- /// enabled, or `None` if such a feature gate does not exist.
- fn feature_gate(_tcx: TyCtxt<'tcx>) -> Option<bool> {
+ /// Returns the `Symbol` corresponding to the feature gate that would enable this operation,
+ /// or `None` if such a feature gate does not exist.
+ fn feature_gate() -> Option<Symbol> {
None
}
///
/// This check should assume that we are not in a non-const `fn`, where all operations are
/// legal.
+ ///
+ /// By default, it returns `true` if and only if this operation has a corresponding feature
+ /// gate and that gate is enabled.
fn is_allowed_in_item(&self, item: &Item<'_, '_>) -> bool {
- Self::feature_gate(item.tcx).unwrap_or(false)
+ Self::feature_gate().map_or(false, |gate| item.tcx.features().enabled(gate))
}
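The default `is_allowed_in_item` above allows an operation if and only if it names a feature gate and that gate is enabled, via `map_or`. The pattern in isolation, with string-based gate names standing in for rustc's `Features` struct (an illustrative sketch, not the real API):

```rust
// Allowed iff a gate exists *and* it is enabled; operations with no gate
// (gate == None) are never allowed by default.
fn is_allowed(gate: Option<&str>, enabled_gates: &[&str]) -> bool {
    gate.map_or(false, |g| enabled_gates.contains(&g))
}
```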
fn emit_error(&self, item: &Item<'_, '_>, span: Span) {
#[derive(Debug)]
pub struct Downcast;
impl NonConstOp for Downcast {
- fn feature_gate(tcx: TyCtxt<'_>) -> Option<bool> {
- Some(tcx.features().const_if_match)
+ fn feature_gate() -> Option<Symbol> {
+ Some(sym::const_if_match)
}
}
#[derive(Debug)]
pub struct IfOrMatch;
impl NonConstOp for IfOrMatch {
- fn feature_gate(tcx: TyCtxt<'_>) -> Option<bool> {
- Some(tcx.features().const_if_match)
+ fn feature_gate() -> Option<Symbol> {
+ Some(sym::const_if_match)
}
fn emit_error(&self, item: &Item<'_, '_>, span: Span) {
#[derive(Debug)]
pub struct Loop;
impl NonConstOp for Loop {
- fn feature_gate(tcx: TyCtxt<'_>) -> Option<bool> {
- Some(tcx.features().const_loop)
+ fn feature_gate() -> Option<Symbol> {
+ Some(sym::const_loop)
}
fn emit_error(&self, item: &Item<'_, '_>, span: Span) {
#[derive(Debug)]
pub struct MutBorrow;
impl NonConstOp for MutBorrow {
- fn feature_gate(tcx: TyCtxt<'_>) -> Option<bool> {
- Some(tcx.features().const_mut_refs)
+ fn feature_gate() -> Option<Symbol> {
+ Some(sym::const_mut_refs)
}
fn emit_error(&self, item: &Item<'_, '_>, span: Span) {
#[derive(Debug)]
pub struct MutAddressOf;
impl NonConstOp for MutAddressOf {
- fn feature_gate(tcx: TyCtxt<'_>) -> Option<bool> {
- Some(tcx.features().const_mut_refs)
+ fn feature_gate() -> Option<Symbol> {
+ Some(sym::const_mut_refs)
}
fn emit_error(&self, item: &Item<'_, '_>, span: Span) {
#[derive(Debug)]
pub struct MutDeref;
impl NonConstOp for MutDeref {
- fn feature_gate(tcx: TyCtxt<'_>) -> Option<bool> {
- Some(tcx.features().const_mut_refs)
+ fn feature_gate() -> Option<Symbol> {
+ Some(sym::const_mut_refs)
}
}
#[derive(Debug)]
pub struct Panic;
impl NonConstOp for Panic {
- fn feature_gate(tcx: TyCtxt<'_>) -> Option<bool> {
- Some(tcx.features().const_panic)
+ fn feature_gate() -> Option<Symbol> {
+ Some(sym::const_panic)
}
fn emit_error(&self, item: &Item<'_, '_>, span: Span) {
#[derive(Debug)]
pub struct RawPtrComparison;
impl NonConstOp for RawPtrComparison {
- fn feature_gate(tcx: TyCtxt<'_>) -> Option<bool> {
- Some(tcx.features().const_compare_raw_pointers)
+ fn feature_gate() -> Option<Symbol> {
+ Some(sym::const_compare_raw_pointers)
}
fn emit_error(&self, item: &Item<'_, '_>, span: Span) {
#[derive(Debug)]
pub struct RawPtrDeref;
impl NonConstOp for RawPtrDeref {
- fn feature_gate(tcx: TyCtxt<'_>) -> Option<bool> {
- Some(tcx.features().const_raw_ptr_deref)
+ fn feature_gate() -> Option<Symbol> {
+ Some(sym::const_raw_ptr_deref)
}
fn emit_error(&self, item: &Item<'_, '_>, span: Span) {
#[derive(Debug)]
pub struct RawPtrToIntCast;
impl NonConstOp for RawPtrToIntCast {
- fn feature_gate(tcx: TyCtxt<'_>) -> Option<bool> {
- Some(tcx.features().const_raw_ptr_to_usize_cast)
+ fn feature_gate() -> Option<Symbol> {
+ Some(sym::const_raw_ptr_to_usize_cast)
}
fn emit_error(&self, item: &Item<'_, '_>, span: Span) {
impl NonConstOp for UnionAccess {
fn is_allowed_in_item(&self, item: &Item<'_, '_>) -> bool {
// Union accesses are stable in all contexts except `const fn`.
- item.const_kind() != ConstKind::ConstFn || Self::feature_gate(item.tcx).unwrap()
+ item.const_kind() != ConstKind::ConstFn
+ || item.tcx.features().enabled(Self::feature_gate().unwrap())
}
- fn feature_gate(tcx: TyCtxt<'_>) -> Option<bool> {
- Some(tcx.features().const_fn_union)
+ fn feature_gate() -> Option<Symbol> {
+ Some(sym::const_fn_union)
}
fn emit_error(&self, item: &Item<'_, '_>, span: Span) {
-//! A copy of the `Qualif` trait in `qualify_consts.rs` that is suitable for the new validator.
+//! Structural const qualification.
+//!
+//! See the `Qualif` trait for more info.
use rustc::mir::*;
-use rustc::ty::{self, Ty};
+use rustc::ty::{self, AdtDef, Ty};
use rustc_span::DUMMY_SP;
use super::Item as ConstCx;
}
/// A "qualif"(-ication) is a way to look for something "bad" in the MIR that would disqualify some
-/// code for promotion or prevent it from evaluating at compile time. So `return true` means
-/// "I found something bad, no reason to go on searching". `false` is only returned if we
-/// definitely cannot find anything bad anywhere.
+/// code for promotion or prevent it from evaluating at compile time.
///
-/// The default implementations proceed structurally.
+/// Normally, we would determine what qualifications apply to each type and error when an illegal
+/// operation is performed on such a type. However, this was found to be too imprecise, especially
+/// in the presence of `enum`s. If only a single variant of an enum has a certain qualification, we
+/// needn't reject code unless it actually constructs and operates on the qualified variant.
+///
+/// To accomplish this, const-checking and promotion use a value-based analysis (as opposed to a
+/// type-based one). Qualifications propagate structurally across variables: If a local (or a
+/// projection of a local) is assigned a qualified value, that local itself becomes qualified.
pub trait Qualif {
/// The name of the file used to debug the dataflow analysis that computes this qualif.
const ANALYSIS_NAME: &'static str;
/// Whether this `Qualif` is cleared when a local is moved from.
const IS_CLEARED_ON_MOVE: bool = false;
+ /// Extracts the field of `ConstQualifs` that corresponds to this `Qualif`.
fn in_qualifs(qualifs: &ConstQualifs) -> bool;
- /// Return the qualification that is (conservatively) correct for any value
- /// of the type.
- fn in_any_value_of_ty(_cx: &ConstCx<'_, 'tcx>, _ty: Ty<'tcx>) -> bool;
-
- fn in_projection_structurally(
- cx: &ConstCx<'_, 'tcx>,
- per_local: &mut impl FnMut(Local) -> bool,
- place: PlaceRef<'_, 'tcx>,
- ) -> bool {
- if let [proj_base @ .., elem] = place.projection {
- let base_qualif = Self::in_place(
- cx,
- per_local,
- PlaceRef { local: place.local, projection: proj_base },
- );
- let qualif = base_qualif
- && Self::in_any_value_of_ty(
- cx,
- Place::ty_from(place.local, proj_base, *cx.body, cx.tcx)
- .projection_ty(cx.tcx, elem)
- .ty,
- );
- match elem {
- ProjectionElem::Deref
- | ProjectionElem::Subslice { .. }
- | ProjectionElem::Field(..)
- | ProjectionElem::ConstantIndex { .. }
- | ProjectionElem::Downcast(..) => qualif,
-
- ProjectionElem::Index(local) => qualif || per_local(*local),
- }
- } else {
- bug!("This should be called if projection is not empty");
- }
- }
-
- fn in_projection(
- cx: &ConstCx<'_, 'tcx>,
- per_local: &mut impl FnMut(Local) -> bool,
- place: PlaceRef<'_, 'tcx>,
- ) -> bool {
- Self::in_projection_structurally(cx, per_local, place)
- }
-
- fn in_place(
- cx: &ConstCx<'_, 'tcx>,
- per_local: &mut impl FnMut(Local) -> bool,
- place: PlaceRef<'_, 'tcx>,
- ) -> bool {
- match place {
- PlaceRef { local, projection: [] } => per_local(local),
- PlaceRef { local: _, projection: [.., _] } => Self::in_projection(cx, per_local, place),
- }
- }
-
- fn in_operand(
- cx: &ConstCx<'_, 'tcx>,
- per_local: &mut impl FnMut(Local) -> bool,
- operand: &Operand<'tcx>,
- ) -> bool {
- match *operand {
- Operand::Copy(ref place) | Operand::Move(ref place) => {
- Self::in_place(cx, per_local, place.as_ref())
- }
-
- Operand::Constant(ref constant) => {
- // Check the qualifs of the value of `const` items.
- if let ty::ConstKind::Unevaluated(def_id, _, promoted) = constant.literal.val {
- assert!(promoted.is_none());
- // Don't peek inside trait associated constants.
- if cx.tcx.trait_of_item(def_id).is_none() {
- let qualifs = cx.tcx.at(constant.span).mir_const_qualif(def_id);
- if !Self::in_qualifs(&qualifs) {
- return false;
- }
-
- // Just in case the type is more specific than
- // the definition, e.g., impl associated const
- // with type parameters, take it into account.
- }
- }
- // Otherwise use the qualifs of the type.
- Self::in_any_value_of_ty(cx, constant.literal.ty)
- }
- }
- }
-
- fn in_rvalue_structurally(
- cx: &ConstCx<'_, 'tcx>,
- per_local: &mut impl FnMut(Local) -> bool,
- rvalue: &Rvalue<'tcx>,
- ) -> bool {
- match *rvalue {
- Rvalue::NullaryOp(..) => false,
-
- Rvalue::Discriminant(ref place) | Rvalue::Len(ref place) => {
- Self::in_place(cx, per_local, place.as_ref())
- }
-
- Rvalue::Use(ref operand)
- | Rvalue::Repeat(ref operand, _)
- | Rvalue::UnaryOp(_, ref operand)
- | Rvalue::Cast(_, ref operand, _) => Self::in_operand(cx, per_local, operand),
-
- Rvalue::BinaryOp(_, ref lhs, ref rhs)
- | Rvalue::CheckedBinaryOp(_, ref lhs, ref rhs) => {
- Self::in_operand(cx, per_local, lhs) || Self::in_operand(cx, per_local, rhs)
- }
-
- Rvalue::Ref(_, _, ref place) | Rvalue::AddressOf(_, ref place) => {
- // Special-case reborrows to be more like a copy of the reference.
- if let [proj_base @ .., ProjectionElem::Deref] = place.projection.as_ref() {
- let base_ty = Place::ty_from(place.local, proj_base, *cx.body, cx.tcx).ty;
- if let ty::Ref(..) = base_ty.kind {
- return Self::in_place(
- cx,
- per_local,
- PlaceRef { local: place.local, projection: proj_base },
- );
- }
- }
-
- Self::in_place(cx, per_local, place.as_ref())
- }
-
- Rvalue::Aggregate(_, ref operands) => {
- operands.iter().any(|o| Self::in_operand(cx, per_local, o))
- }
- }
- }
-
- fn in_rvalue(
- cx: &ConstCx<'_, 'tcx>,
- per_local: &mut impl FnMut(Local) -> bool,
- rvalue: &Rvalue<'tcx>,
- ) -> bool {
- Self::in_rvalue_structurally(cx, per_local, rvalue)
- }
-
- fn in_call(
- cx: &ConstCx<'_, 'tcx>,
- _per_local: &mut impl FnMut(Local) -> bool,
- _callee: &Operand<'tcx>,
- _args: &[Operand<'tcx>],
- return_ty: Ty<'tcx>,
- ) -> bool {
- // Be conservative about the returned value of a const fn.
- Self::in_any_value_of_ty(cx, return_ty)
- }
+ /// Returns `true` if *any* value of the given type could possibly have this `Qualif`.
+ ///
+ /// This function determines `Qualif`s when we cannot do a value-based analysis. Since qualif
+    /// propagation is context-insensitive, this includes function arguments and values returned
+ /// from a call to another function.
+ ///
+ /// It also determines the `Qualif`s for primitive types.
+ fn in_any_value_of_ty(cx: &ConstCx<'_, 'tcx>, ty: Ty<'tcx>) -> bool;
+
+ /// Returns `true` if this `Qualif` is inherent to the given struct or enum.
+ ///
+ /// By default, `Qualif`s propagate into ADTs in a structural way: An ADT only becomes
+ /// qualified if part of it is assigned a value with that `Qualif`. However, some ADTs *always*
+ /// have a certain `Qualif`, regardless of whether their fields have it. For example, a type
+ /// with a custom `Drop` impl is inherently `NeedsDrop`.
+ ///
+ /// Returning `true` for `in_adt_inherently` but `false` for `in_any_value_of_ty` is unsound.
+ fn in_adt_inherently(cx: &ConstCx<'_, 'tcx>, adt: &AdtDef) -> bool;
}
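The value-based propagation described in the `Qualif` doc comment can be sketched in isolation. This is a toy model (plain indices for locals, a flat assignment list), not rustc's dataflow machinery: qualification flows from the assigned value to the destination local, and an assignment of an unqualified value clears it.

```rust
use std::collections::HashSet;

/// Toy value-based qualif propagation: `assignments` is a sequence of
/// `(dest, src)` copies between locals; a local is qualified only if the
/// last value written to it was qualified.
fn propagate(assignments: &[(usize, usize)], mut qualified: HashSet<usize>) -> HashSet<usize> {
    for &(dest, src) in assignments {
        if qualified.contains(&src) {
            qualified.insert(dest);
        } else {
            // Overwritten with an unqualified value: the qualif is cleared.
            qualified.remove(&dest);
        }
    }
    qualified
}

fn main() {
    // Local 0 starts qualified; `1 = 0` and `2 = 1` propagate the qualif,
    // then `1 = 3` (local 3 is unqualified) clears local 1 again.
    let init: HashSet<usize> = [0].iter().copied().collect();
    let out = propagate(&[(1, 0), (2, 1), (1, 3)], init);
    assert!(out.contains(&0));
    assert!(out.contains(&2));
    assert!(!out.contains(&1));
}
```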
/// Constant containing interior mutability (`UnsafeCell<T>`).
!ty.is_freeze(cx.tcx, cx.param_env, DUMMY_SP)
}
- fn in_rvalue(
- cx: &ConstCx<'_, 'tcx>,
- per_local: &mut impl FnMut(Local) -> bool,
- rvalue: &Rvalue<'tcx>,
- ) -> bool {
- match *rvalue {
- Rvalue::Aggregate(ref kind, _) => {
- if let AggregateKind::Adt(def, ..) = **kind {
- if Some(def.did) == cx.tcx.lang_items().unsafe_cell_type() {
- let ty = rvalue.ty(*cx.body, cx.tcx);
- assert_eq!(Self::in_any_value_of_ty(cx, ty), true);
- return true;
- }
- }
- }
-
- _ => {}
- }
-
- Self::in_rvalue_structurally(cx, per_local, rvalue)
+ fn in_adt_inherently(cx: &ConstCx<'_, 'tcx>, adt: &AdtDef) -> bool {
+ // Exactly one type, `UnsafeCell`, has the `HasMutInterior` qualif inherently.
+ // It arises structurally for all other types.
+ Some(adt.did) == cx.tcx.lang_items().unsafe_cell_type()
}
}
ty.needs_drop(cx.tcx, cx.param_env)
}
- fn in_rvalue(
- cx: &ConstCx<'_, 'tcx>,
- per_local: &mut impl FnMut(Local) -> bool,
- rvalue: &Rvalue<'tcx>,
- ) -> bool {
- if let Rvalue::Aggregate(ref kind, _) = *rvalue {
+ fn in_adt_inherently(cx: &ConstCx<'_, 'tcx>, adt: &AdtDef) -> bool {
+ adt.has_dtor(cx.tcx)
+ }
+}
+
+// FIXME: Use `mir::visit::Visitor` for the `in_*` functions if/when it supports early return.
+
+/// Returns `true` if this `Rvalue` contains qualif `Q`.
+pub fn in_rvalue<Q, F>(cx: &ConstCx<'_, 'tcx>, in_local: &mut F, rvalue: &Rvalue<'tcx>) -> bool
+where
+ Q: Qualif,
+ F: FnMut(Local) -> bool,
+{
+ match rvalue {
+ Rvalue::NullaryOp(..) => Q::in_any_value_of_ty(cx, rvalue.ty(*cx.body, cx.tcx)),
+
+ Rvalue::Discriminant(place) | Rvalue::Len(place) => {
+ in_place::<Q, _>(cx, in_local, place.as_ref())
+ }
+
+ Rvalue::Use(operand)
+ | Rvalue::Repeat(operand, _)
+ | Rvalue::UnaryOp(_, operand)
+ | Rvalue::Cast(_, operand, _) => in_operand::<Q, _>(cx, in_local, operand),
+
+ Rvalue::BinaryOp(_, lhs, rhs) | Rvalue::CheckedBinaryOp(_, lhs, rhs) => {
+ in_operand::<Q, _>(cx, in_local, lhs) || in_operand::<Q, _>(cx, in_local, rhs)
+ }
+
+ Rvalue::Ref(_, _, place) | Rvalue::AddressOf(_, place) => {
+ // Special-case reborrows to be more like a copy of the reference.
+ if let &[ref proj_base @ .., ProjectionElem::Deref] = place.projection.as_ref() {
+ let base_ty = Place::ty_from(place.local, proj_base, *cx.body, cx.tcx).ty;
+ if let ty::Ref(..) = base_ty.kind {
+ return in_place::<Q, _>(
+ cx,
+ in_local,
+ PlaceRef { local: place.local, projection: proj_base },
+ );
+ }
+ }
+
+ in_place::<Q, _>(cx, in_local, place.as_ref())
+ }
+
+ Rvalue::Aggregate(kind, operands) => {
+ // Return early if we know that the struct or enum being constructed is always
+ // qualified.
if let AggregateKind::Adt(def, ..) = **kind {
- if def.has_dtor(cx.tcx) {
+ if Q::in_adt_inherently(cx, def) {
return true;
}
}
+
+ // Otherwise, proceed structurally...
+ operands.iter().any(|o| in_operand::<Q, _>(cx, in_local, o))
}
+ }
+}
- Self::in_rvalue_structurally(cx, per_local, rvalue)
+/// Returns `true` if this `Place` contains qualif `Q`.
+pub fn in_place<Q, F>(cx: &ConstCx<'_, 'tcx>, in_local: &mut F, place: PlaceRef<'tcx>) -> bool
+where
+ Q: Qualif,
+ F: FnMut(Local) -> bool,
+{
+ let mut projection = place.projection;
+ while let [ref proj_base @ .., proj_elem] = projection {
+ match *proj_elem {
+ ProjectionElem::Index(index) if in_local(index) => return true,
+
+ ProjectionElem::Deref
+ | ProjectionElem::Field(_, _)
+ | ProjectionElem::ConstantIndex { .. }
+ | ProjectionElem::Subslice { .. }
+ | ProjectionElem::Downcast(_, _)
+ | ProjectionElem::Index(_) => {}
+ }
+
+ let base_ty = Place::ty_from(place.local, proj_base, *cx.body, cx.tcx);
+ let proj_ty = base_ty.projection_ty(cx.tcx, proj_elem).ty;
+ if !Q::in_any_value_of_ty(cx, proj_ty) {
+ return false;
+ }
+
+ projection = proj_base;
+ }
+
+ assert!(projection.is_empty());
+ in_local(place.local)
+}
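The `in_place` loop above peels projection elements off the end of the slice with a `while let` slice pattern. The idiom, separated from the MIR types, is just:

```rust
/// Peel elements from the end of a slice using slice patterns, as the
/// `in_place` projection walk does; returns elements outermost-first.
fn peel(projection: &[i32]) -> Vec<i32> {
    let mut rest = projection;
    let mut peeled = Vec::new();
    // The pattern requires at least one element, so the loop stops
    // once `rest` is empty.
    while let [base @ .., last] = rest {
        peeled.push(*last);
        rest = base;
    }
    peeled
}

fn main() {
    assert_eq!(peel(&[1, 2, 3, 4]), vec![4, 3, 2, 1]);
    assert!(peel(&[]).is_empty());
}
```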
+
+/// Returns `true` if this `Operand` contains qualif `Q`.
+pub fn in_operand<Q, F>(cx: &ConstCx<'_, 'tcx>, in_local: &mut F, operand: &Operand<'tcx>) -> bool
+where
+ Q: Qualif,
+ F: FnMut(Local) -> bool,
+{
+ let constant = match operand {
+ Operand::Copy(place) | Operand::Move(place) => {
+ return in_place::<Q, _>(cx, in_local, place.as_ref());
+ }
+
+ Operand::Constant(c) => c,
+ };
+
+ // Check the qualifs of the value of `const` items.
+ if let ty::ConstKind::Unevaluated(def_id, _, promoted) = constant.literal.val {
+ assert!(promoted.is_none());
+ // Don't peek inside trait associated constants.
+ if cx.tcx.trait_of_item(def_id).is_none() {
+ let qualifs = cx.tcx.at(constant.span).mir_const_qualif(def_id);
+ if !Q::in_qualifs(&qualifs) {
+ return false;
+ }
+
+ // Just in case the type is more specific than
+ // the definition, e.g., impl associated const
+ // with type parameters, take it into account.
+ }
}
+ // Otherwise use the qualifs of the type.
+ Q::in_any_value_of_ty(cx, constant.literal.ty)
}
use std::marker::PhantomData;
-use super::{Item, Qualif};
+use super::{qualifs, Item, Qualif};
use crate::dataflow::{self as old_dataflow, generic as dataflow};
/// A `Visitor` that propagates qualifs between locals. This defines the transfer function of
fn apply_call_return_effect(
&mut self,
_block: BasicBlock,
- func: &mir::Operand<'tcx>,
- args: &[mir::Operand<'tcx>],
+ _func: &mir::Operand<'tcx>,
+ _args: &[mir::Operand<'tcx>],
return_place: &mir::Place<'tcx>,
) {
+ // We cannot reason about another function's internals, so use conservative type-based
+ // qualification for the result of a function call.
let return_ty = return_place.ty(*self.item.body, self.item.tcx).ty;
- let qualif = Q::in_call(
- self.item,
- &mut |l| self.qualifs_per_local.contains(l),
- func,
- args,
- return_ty,
- );
+ let qualif = Q::in_any_value_of_ty(self.item, return_ty);
+
if !return_place.is_indirect() {
self.assign_qualif_direct(return_place, qualif);
}
rvalue: &mir::Rvalue<'tcx>,
location: Location,
) {
- let qualif = Q::in_rvalue(self.item, &mut |l| self.qualifs_per_local.contains(l), rvalue);
+ let qualif = qualifs::in_rvalue::<Q, _>(
+ self.item,
+ &mut |l| self.qualifs_per_local.contains(l),
+ rvalue,
+ );
if !place.is_indirect() {
self.assign_qualif_direct(place, qualif);
}
// here; that occurs in `apply_call_return_effect`.
if let mir::TerminatorKind::DropAndReplace { value, location: dest, .. } = kind {
- let qualif =
- Q::in_operand(self.item, &mut |l| self.qualifs_per_local.contains(l), value);
+ let qualif = qualifs::in_operand::<Q, _>(
+ self.item,
+ &mut |l| self.qualifs_per_local.contains(l),
+ value,
+ );
+
if !dest.is_indirect() {
self.assign_qualif_direct(dest, qualif);
}
use rustc_hir::{def_id::DefId, HirId};
use rustc_index::bit_set::BitSet;
use rustc_infer::infer::TyCtxtInferExt;
-use rustc_infer::traits::{self, TraitEngine};
use rustc_span::symbol::sym;
use rustc_span::Span;
+use rustc_trait_selection::traits::error_reporting::InferCtxtExt;
+use rustc_trait_selection::traits::{self, TraitEngine};
use std::borrow::Cow;
use std::ops::Deref;
// If an operation is supported in miri (and is not already controlled by a feature gate) it
// can be turned on with `-Zunleash-the-miri-inside-of-you`.
- let is_unleashable = O::IS_SUPPORTED_IN_MIRI && O::feature_gate(self.tcx).is_none();
+ let is_unleashable = O::IS_SUPPORTED_IN_MIRI && O::feature_gate().is_none();
if is_unleashable && self.tcx.sess.opts.debugging_opts.unleash_the_miri_inside_of_you {
self.tcx.sess.span_warn(span, "skipping const checks");
}
};
self.visit_place_base(&place.local, ctx, location);
- self.visit_projection(&place.local, reborrowed_proj, ctx, location);
+ self.visit_projection(place.local, reborrowed_proj, ctx, location);
return;
}
}
Mutability::Mut => PlaceContext::MutatingUse(MutatingUseContext::AddressOf),
};
self.visit_place_base(&place.local, ctx, location);
- self.visit_projection(&place.local, reborrowed_proj, ctx, location);
+ self.visit_projection(place.local, reborrowed_proj, ctx, location);
return;
}
}
Rvalue::Ref(_, BorrowKind::Shared, ref place)
| Rvalue::Ref(_, BorrowKind::Shallow, ref place)
| Rvalue::AddressOf(Mutability::Not, ref place) => {
- let borrowed_place_has_mut_interior = HasMutInterior::in_place(
+ let borrowed_place_has_mut_interior = qualifs::in_place::<HasMutInterior, _>(
&self.item,
&mut |local| self.qualifs.has_mut_interior(local, location),
place.as_ref(),
}
fn visit_projection_elem(
&mut self,
- place_local: &Local,
+ place_local: Local,
proj_base: &[PlaceElem<'tcx>],
elem: &PlaceElem<'tcx>,
context: PlaceContext,
match elem {
ProjectionElem::Deref => {
- let base_ty = Place::ty_from(*place_local, proj_base, *self.body, self.tcx).ty;
+ let base_ty = Place::ty_from(place_local, proj_base, *self.body, self.tcx).ty;
if let ty::RawPtr(_) = base_ty.kind {
if proj_base.is_empty() {
if let (local, []) = (place_local, proj_base) {
- let decl = &self.body.local_decls[*local];
+ let decl = &self.body.local_decls[local];
if let LocalInfo::StaticRef { def_id, .. } = decl.local_info {
let span = decl.source_info.span;
self.check_static(def_id, span);
| ProjectionElem::Subslice { .. }
| ProjectionElem::Field(..)
| ProjectionElem::Index(_) => {
- let base_ty = Place::ty_from(*place_local, proj_base, *self.body, self.tcx).ty;
+ let base_ty = Place::ty_from(place_local, proj_base, *self.body, self.tcx).ty;
match base_ty.ty_adt_def() {
Some(def) if def.is_union() => {
self.check_op(ops::UnionAccess);
-use rustc::hir::map::Map;
-use rustc::lint::builtin::{SAFE_PACKED_BORROWS, UNUSED_UNSAFE};
use rustc::mir::visit::{MutatingUseContext, PlaceContext, Visitor};
use rustc::mir::*;
use rustc::ty::cast::CastTy;
use rustc_hir::def_id::DefId;
use rustc_hir::intravisit;
use rustc_hir::Node;
+use rustc_session::lint::builtin::{SAFE_PACKED_BORROWS, UNUSED_UNSAFE};
use rustc_span::symbol::{sym, Symbol};
use std::ops::Bound;
}
impl<'a, 'tcx> intravisit::Visitor<'tcx> for UnusedUnsafeVisitor<'a> {
- type Map = Map<'tcx>;
+ type Map = intravisit::ErasedMap<'tcx>;
- fn nested_visit_map<'this>(&'this mut self) -> intravisit::NestedVisitorMap<'this, Self::Map> {
+ fn nested_visit_map(&mut self) -> intravisit::NestedVisitorMap<Self::Map> {
intravisit::NestedVisitorMap::None
}
use std::borrow::Cow;
use std::cell::Cell;
-use rustc::lint;
use rustc::mir::interpret::{InterpResult, Scalar};
use rustc::mir::visit::{
MutVisitor, MutatingUseContext, NonMutatingUseContext, PlaceContext, Visitor,
use rustc_ast::ast::Mutability;
use rustc_data_structures::fx::FxHashMap;
use rustc_hir::def::DefKind;
-use rustc_hir::def_id::DefId;
use rustc_hir::HirId;
use rustc_index::vec::IndexVec;
-use rustc_infer::traits;
+use rustc_session::lint;
use rustc_span::Span;
+use rustc_trait_selection::traits;
use crate::const_eval::error_to_const_error;
use crate::interpret::{
fn assert_panic(
_ecx: &mut InterpCx<'mir, 'tcx, Self>,
- _span: Span,
_msg: &rustc::mir::AssertMessage<'tcx>,
_unwind: Option<rustc::mir::BasicBlock>,
) -> InterpResult<'tcx> {
));
}
- fn find_foreign_static(
- _tcx: TyCtxt<'tcx>,
- _def_id: DefId,
- ) -> InterpResult<'tcx, Cow<'tcx, Allocation<Self::PointerTag>>> {
- throw_unsup!(ReadForeignStatic)
- }
-
#[inline(always)]
fn init_allocation_extra<'b>(
_memory_extra: &(),
Ok(())
}
- fn before_terminator(_ecx: &mut InterpCx<'mir, 'tcx, Self>) -> InterpResult<'tcx> {
- Ok(())
- }
-
#[inline(always)]
fn stack_push(_ecx: &mut InterpCx<'mir, 'tcx, Self>) -> InterpResult<'tcx> {
Ok(())
let r = match f(self) {
Ok(val) => Some(val),
Err(error) => {
- use rustc::mir::interpret::{
- InterpError::*, UndefinedBehaviorInfo, UnsupportedOpInfo,
- };
- match error.kind {
- MachineStop(_) => bug!("ConstProp does not stop"),
-
- // Some error shouldn't come up because creating them causes
- // an allocation, which we should avoid. When that happens,
- // dedicated error variants should be introduced instead.
- // Only test this in debug builds though to avoid disruptions.
- Unsupported(UnsupportedOpInfo::Unsupported(_))
- | Unsupported(UnsupportedOpInfo::ValidationFailure(_))
- | UndefinedBehavior(UndefinedBehaviorInfo::Ub(_))
- | UndefinedBehavior(UndefinedBehaviorInfo::UbExperimental(_))
- if cfg!(debug_assertions) =>
- {
- bug!("const-prop encountered allocating error: {:?}", error.kind);
- }
-
- Unsupported(_)
- | UndefinedBehavior(_)
- | InvalidProgram(_)
- | ResourceExhaustion(_) => {
- // Ignore these errors.
- }
- }
+ // Some errors shouldn't come up because creating them causes
+ // an allocation, which we should avoid. When that happens,
+ // dedicated error variants should be introduced instead.
+ // Only test this in debug builds though to avoid disruptions.
+ debug_assert!(
+ !error.kind.allocates(),
+ "const-prop encountered allocating error: {}",
+ error
+ );
None
}
};
let left_ty = left.ty(&self.local_decls, self.tcx);
let left_size_bits = self.ecx.layout_of(left_ty).ok()?.size.bits();
let right_size = r.layout.size;
- let r_bits = r.to_scalar().and_then(|r| r.to_bits(right_size));
+ let r_bits = r.to_scalar().ok();
+ // This is basically `force_bits`.
+ let r_bits = r_bits.and_then(|r| r.to_bits_or_ptr(right_size, &self.tcx).ok());
if r_bits.map_or(false, |b| b >= left_size_bits as u128) {
self.report_assert_as_lint(
lint::builtin::ARITHMETIC_OVERFLOW,
source_info: SourceInfo,
) {
        trace!("attempting to replace {:?} with {:?}", rval, value);
- if let Err(e) = self.ecx.validate_operand(
+ if let Err(e) = self.ecx.const_validate_operand(
value,
vec![],
// FIXME: is ref tracking too expensive?
- Some(&mut interpret::RefTracking::empty()),
+ &mut interpret::RefTracking::empty(),
+ /*may_ref_to_static*/ true,
) {
trace!("validation error, attempt failed: {:?}", e);
return;
}
- // FIXME> figure out what tho do when try_read_immediate fails
+ // FIXME> figure out what to do when try_read_immediate fails
let imm = self.use_ecx(|this| this.ecx.try_read_immediate(value));
if let Some(Ok(imm)) = imm {
use crate::transform::{MirPass, MirSource};
use crate::util as mir_util;
use rustc::mir::{Body, BodyAndCache};
-use rustc::session::config::{OutputFilenames, OutputType};
use rustc::ty::TyCtxt;
+use rustc_session::config::{OutputFilenames, OutputType};
pub struct Marker(pub &'static str);
}
fn visit_local(&mut self, local: &mut Local, _: PlaceContext, _: Location) {
- assert_ne!(*local, self_arg());
+ assert_ne!(*local, SELF_ARG);
}
fn visit_place(&mut self, place: &mut Place<'tcx>, context: PlaceContext, location: Location) {
- if place.local == self_arg() {
+ if place.local == SELF_ARG {
replace_base(
place,
Place {
- local: self_arg(),
+ local: SELF_ARG,
projection: self.tcx().intern_place_elems(&[ProjectionElem::Deref]),
},
self.tcx,
for elem in place.projection.iter() {
if let PlaceElem::Index(local) = elem {
- assert_ne!(*local, self_arg());
+ assert_ne!(*local, SELF_ARG);
}
}
}
}
fn visit_local(&mut self, local: &mut Local, _: PlaceContext, _: Location) {
- assert_ne!(*local, self_arg());
+ assert_ne!(*local, SELF_ARG);
}
fn visit_place(&mut self, place: &mut Place<'tcx>, context: PlaceContext, location: Location) {
- if place.local == self_arg() {
+ if place.local == SELF_ARG {
replace_base(
place,
Place {
- local: self_arg(),
+ local: SELF_ARG,
projection: self.tcx().intern_place_elems(&[ProjectionElem::Field(
Field::new(0),
self.ref_gen_ty,
for elem in place.projection.iter() {
if let PlaceElem::Index(local) = elem {
- assert_ne!(*local, self_arg());
+ assert_ne!(*local, SELF_ARG);
}
}
}
place.projection = tcx.intern_place_elems(&new_projection);
}
-fn self_arg() -> Local {
- Local::new(1)
-}
+const SELF_ARG: Local = Local::from_u32(1);
/// Generator has not been resumed yet.
const UNRESUMED: usize = GeneratorSubsts::UNRESUMED;
// Create a Place referencing a generator struct field
fn make_field(&self, variant_index: VariantIdx, idx: usize, ty: Ty<'tcx>) -> Place<'tcx> {
- let self_place = Place::from(self_arg());
+ let self_place = Place::from(SELF_ARG);
let base = self.tcx.mk_place_downcast_unnamed(self_place, variant_index);
let mut projection = base.projection.to_vec();
projection.push(ProjectionElem::Field(Field::new(idx), ty));
// Create a statement which changes the discriminant
fn set_discr(&self, state_disc: VariantIdx, source_info: SourceInfo) -> Statement<'tcx> {
- let self_place = Place::from(self_arg());
+ let self_place = Place::from(SELF_ARG);
Statement {
source_info,
kind: StatementKind::SetDiscriminant {
let local_decls_len = body.local_decls.push(temp_decl);
let temp = Place::from(local_decls_len);
- let self_place = Place::from(self_arg());
+ let self_place = Place::from(SELF_ARG);
let assign = Statement {
source_info: source_info(body),
kind: StatementKind::Assign(box (temp, Rvalue::Discriminant(self_place))),
for (block, data) in body.basic_blocks().iter_enumerated() {
if let TerminatorKind::Yield { .. } = data.terminator().kind {
- let loc = Location { block: block, statement_index: data.statements.len() };
+ let loc = Location { block, statement_index: data.statements.len() };
if !movable {
// The `liveness` variable contains the liveness of MIR locals ignoring borrows.
let mut live_locals_here = storage_required;
live_locals_here.intersect(&liveness.outs[block]);
- // The generator argument is ignored
- live_locals_here.remove(self_arg());
+ // The generator argument is ignored.
+ live_locals_here.remove(SELF_ARG);
debug!("loc = {:?}, live_locals_here = {:?}", loc, live_locals_here);
// generator's resume function.
let param_env = tcx.param_env(def_id);
- let gen = self_arg();
let mut elaborator = DropShimElaborator { body, patch: MirPatch::new(body), tcx, param_env };
let (target, unwind, source_info) = match block_data.terminator() {
Terminator { source_info, kind: TerminatorKind::Drop { location, target, unwind } } => {
if let Some(local) = location.as_local() {
- if local == gen {
+ if local == SELF_ARG {
(target, unwind, source_info)
} else {
continue;
elaborate_drop(
&mut elaborator,
*source_info,
- &Place::from(gen),
+ &Place::from(SELF_ARG),
(),
*target,
unwind,
make_generator_state_argument_indirect(tcx, def_id, &mut body);
// Change the generator argument from &mut to *mut
- body.local_decls[self_arg()] = LocalDecl {
+ body.local_decls[SELF_ARG] = LocalDecl {
mutability: Mutability::Mut,
ty: tcx.mk_ptr(ty::TypeAndMut { ty: gen_ty, mutbl: hir::Mutability::Mut }),
user_ty: UserTypeProjections::none(),
0,
Statement {
source_info,
- kind: StatementKind::Retag(RetagKind::Raw, box Place::from(self_arg())),
+ kind: StatementKind::Retag(RetagKind::Raw, box Place::from(SELF_ARG)),
},
)
}
// unrelated code from the resume part of the function
simplify::remove_dead_blocks(&mut body);
- dump_mir(tcx, None, "generator_drop", &0, source, &mut body, |_, _| Ok(()));
+ dump_mir(tcx, None, "generator_drop", &0, source, &body, |_, _| Ok(()));
body
}
assert_block
}
+fn can_return<'tcx>(tcx: TyCtxt<'tcx>, body: &Body<'tcx>) -> bool {
+ // Returning from a function with an uninhabited return type is undefined behavior.
+ if body.return_ty().conservative_is_privately_uninhabited(tcx) {
+ return false;
+ }
+
+ // If there's a return terminator the function may return.
+ for block in body.basic_blocks() {
+ if let TerminatorKind::Return = block.terminator().kind {
+ return true;
+ }
+ }
+
+ // Otherwise the function can't return.
+ false
+}
+
+fn can_unwind<'tcx>(tcx: TyCtxt<'tcx>, body: &Body<'tcx>) -> bool {
+ // Nothing can unwind when landing pads are off.
+ if tcx.sess.no_landing_pads() {
+ return false;
+ }
+
+ // Unwinds can only start at certain terminators.
+ for block in body.basic_blocks() {
+ match block.terminator().kind {
+ // These never unwind.
+ TerminatorKind::Goto { .. }
+ | TerminatorKind::SwitchInt { .. }
+ | TerminatorKind::Abort
+ | TerminatorKind::Return
+ | TerminatorKind::Unreachable
+ | TerminatorKind::GeneratorDrop
+ | TerminatorKind::FalseEdges { .. }
+ | TerminatorKind::FalseUnwind { .. } => {}
+
+ // Resume will *continue* unwinding, but if there's no other unwinding terminator it
+ // will never be reached.
+ TerminatorKind::Resume => {}
+
+ TerminatorKind::Yield { .. } => {
+ unreachable!("`can_unwind` called before generator transform")
+ }
+
+ // These may unwind.
+ TerminatorKind::Drop { .. }
+ | TerminatorKind::DropAndReplace { .. }
+ | TerminatorKind::Call { .. }
+ | TerminatorKind::Assert { .. } => return true,
+ }
+ }
+
+ // If we didn't find an unwinding terminator, the function cannot unwind.
+ false
+}
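Both `can_return` and `can_unwind` are a single scan over the body's terminators with an early exit. A toy version with a hypothetical, heavily pruned `Terminator` enum (not rustc's `TerminatorKind`) captures the classification:

```rust
/// Hypothetical, simplified terminator kinds for illustration only.
#[derive(Clone, Copy)]
enum Terminator {
    Goto,
    Return,
    Call,   // may unwind
    Assert, // may unwind
    Resume, // continues an unwind, but never starts one
}

/// The function may return iff some block ends in `Return`.
fn can_return(terminators: &[Terminator]) -> bool {
    terminators.iter().any(|t| matches!(t, Terminator::Return))
}

/// An unwind can only *start* at certain terminators; `Resume` alone
/// is unreachable without one of them.
fn can_unwind(terminators: &[Terminator]) -> bool {
    terminators.iter().any(|t| matches!(t, Terminator::Call | Terminator::Assert))
}

fn main() {
    let body = [Terminator::Goto, Terminator::Return];
    assert!(can_return(&body));
    assert!(!can_unwind(&body));

    let body2 = [Terminator::Call, Terminator::Resume];
    assert!(can_unwind(&body2));
    assert!(!can_return(&body2));
}
```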
+
fn create_generator_resume_function<'tcx>(
tcx: TyCtxt<'tcx>,
transform: TransformVisitor<'tcx>,
def_id: DefId,
source: MirSource<'tcx>,
body: &mut BodyAndCache<'tcx>,
+ can_return: bool,
) {
+ let can_unwind = can_unwind(tcx, body);
+
// Poison the generator when it unwinds
- for block in body.basic_blocks_mut() {
- let source_info = block.terminator().source_info;
- if let &TerminatorKind::Resume = &block.terminator().kind {
- block.statements.push(transform.set_discr(VariantIdx::new(POISONED), source_info));
+ if can_unwind {
+ let poison_block = BasicBlock::new(body.basic_blocks().len());
+ let source_info = source_info(body);
+ body.basic_blocks_mut().push(BasicBlockData {
+ statements: vec![transform.set_discr(VariantIdx::new(POISONED), source_info)],
+ terminator: Some(Terminator { source_info, kind: TerminatorKind::Resume }),
+ is_cleanup: true,
+ });
+
+ for (idx, block) in body.basic_blocks_mut().iter_enumerated_mut() {
+ let source_info = block.terminator().source_info;
+
+ if let TerminatorKind::Resume = block.terminator().kind {
+ // An existing `Resume` terminator is redirected to jump to our dedicated
+ // "poisoning block" above.
+ if idx != poison_block {
+ *block.terminator_mut() = Terminator {
+ source_info,
+ kind: TerminatorKind::Goto { target: poison_block },
+ };
+ }
+ } else if !block.is_cleanup {
+ // Any terminators that *can* unwind but don't have an unwind target set are also
+ // pointed at our poisoning block (unless they're part of the cleanup path).
+ if let Some(unwind @ None) = block.terminator_mut().unwind_mut() {
+ *unwind = Some(poison_block);
+ }
+ }
}
}
// Panic when resumed on the returned or poisoned state
let generator_kind = body.generator_kind.unwrap();
- cases.insert(1, (RETURNED, insert_panic_block(tcx, body, ResumedAfterReturn(generator_kind))));
- cases.insert(2, (POISONED, insert_panic_block(tcx, body, ResumedAfterPanic(generator_kind))));
+
+ if can_unwind {
+ cases.insert(
+ 1,
+ (POISONED, insert_panic_block(tcx, body, ResumedAfterPanic(generator_kind))),
+ );
+ }
+
+ if can_return {
+ cases.insert(
+ 1,
+ (RETURNED, insert_panic_block(tcx, body, ResumedAfterReturn(generator_kind))),
+ );
+ }
insert_switch(body, cases, &transform, TerminatorKind::Unreachable);
    // Create a block to destroy an unresumed generator. This can only destroy upvars.
let drop_clean = BasicBlock::new(body.basic_blocks().len());
let term = TerminatorKind::Drop {
- location: Place::from(self_arg()),
+ location: Place::from(SELF_ARG),
target: return_block,
unwind: None,
};
let (remap, layout, storage_liveness) =
compute_layout(tcx, source, &upvars, interior, movable, body);
+ let can_return = can_return(tcx, body);
+
// Run the transformation which converts Places from Local to generator struct
// accesses for locals in `remap`.
// It also rewrites `return x` and `yield y` as writing a new generator state and returning
body.generator_drop = Some(box drop_shim);
// Create the Generator::resume function
- create_generator_resume_function(tcx, transform, def_id, source, body);
+ create_generator_resume_function(tcx, transform, def_id, source, body, can_return);
}
}
//! Inlining pass for MIR functions
-use rustc_hir::def_id::DefId;
-
-use rustc_index::bit_set::BitSet;
-use rustc_index::vec::{Idx, IndexVec};
-
use rustc::middle::codegen_fn_attrs::CodegenFnAttrFlags;
use rustc::mir::visit::*;
use rustc::mir::*;
-use rustc::session::config::Sanitizer;
use rustc::ty::subst::{InternalSubsts, Subst, SubstsRef};
use rustc::ty::{self, Instance, InstanceDef, ParamEnv, Ty, TyCtxt, TypeFoldable};
+use rustc_attr as attr;
+use rustc_hir::def_id::DefId;
+use rustc_index::bit_set::BitSet;
+use rustc_index::vec::{Idx, IndexVec};
+use rustc_session::config::Sanitizer;
+use rustc_target::spec::abi::Abi;
use super::simplify::{remove_dead_blocks, CfgSimplifier};
use crate::transform::{MirPass, MirSource};
use std::collections::VecDeque;
use std::iter;
-use rustc_attr as attr;
-use rustc_target::spec::abi::Abi;
-
const DEFAULT_THRESHOLD: usize = 50;
const HINT_THRESHOLD: usize = 100;
use crate::{shim, util};
-use rustc::hir::map::Map;
use rustc::mir::{BodyAndCache, ConstQualifs, MirPhase, Promoted};
use rustc::ty::query::Providers;
use rustc::ty::steal::Steal;
}
intravisit::walk_struct_def(self, v)
}
- type Map = Map<'tcx>;
- fn nested_visit_map<'b>(&'b mut self) -> NestedVisitorMap<'b, Self::Map> {
+ type Map = intravisit::ErasedMap<'tcx>;
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
NestedVisitorMap::None
}
}
// FIXME(eddyb) maybe cache this?
fn qualif_local<Q: qualifs::Qualif>(&self, local: Local) -> bool {
- let per_local = &mut |l| self.qualif_local::<Q>(l);
-
if let TempState::Defined { location: loc, .. } = self.temps[local] {
let num_stmts = self.body[loc.block].statements.len();
if loc.statement_index < num_stmts {
let statement = &self.body[loc.block].statements[loc.statement_index];
match &statement.kind {
- StatementKind::Assign(box (_, rhs)) => Q::in_rvalue(&self.item, per_local, rhs),
+ StatementKind::Assign(box (_, rhs)) => qualifs::in_rvalue::<Q, _>(
+ &self.item,
+ &mut |l| self.qualif_local::<Q>(l),
+ rhs,
+ ),
_ => {
span_bug!(
statement.source_info.span,
} else {
let terminator = self.body[loc.block].terminator();
match &terminator.kind {
- TerminatorKind::Call { func, args, .. } => {
+ TerminatorKind::Call { .. } => {
let return_ty = self.body.local_decls[local].ty;
- Q::in_call(&self.item, per_local, func, args, return_ty)
+ Q::in_any_value_of_ty(&self.item, return_ty)
}
kind => {
span_bug!(terminator.source_info.span, "{:?} not promotable", kind);
}
}
- fn validate_place(&self, place: PlaceRef<'_, 'tcx>) -> Result<(), Unpromotable> {
+ fn validate_place(&self, place: PlaceRef<'tcx>) -> Result<(), Unpromotable> {
match place {
PlaceRef { local, projection: [] } => self.validate_local(local),
PlaceRef { local: _, projection: [proj_base @ .., elem] } => {
let (blocks, local_decls) = self.source.basic_blocks_and_local_decls_mut();
match candidate {
Candidate::Ref(loc) => {
- let ref mut statement = blocks[loc.block].statements[loc.statement_index];
+ let statement = &mut blocks[loc.block].statements[loc.statement_index];
match statement.kind {
StatementKind::Assign(box (
_,
}
}
Candidate::Repeat(loc) => {
- let ref mut statement = blocks[loc.block].statements[loc.statement_index];
+ let statement = &mut blocks[loc.block].statements[loc.statement_index];
match statement.kind {
StatementKind::Assign(box (_, Rvalue::Repeat(ref mut operand, _))) => {
let ty = operand.ty(local_decls, self.tcx);
let attributes = tcx.get_attrs(def_id);
let param_env = tcx.param_env(def_id);
let move_data = MoveData::gather_moves(body, tcx, param_env).unwrap();
- let mdpe = MoveDataParamEnv { move_data: move_data, param_env: param_env };
+ let mdpe = MoveDataParamEnv { move_data, param_env };
let flow_inits = MaybeInitializedPlaces::new(tcx, body, &mdpe)
.into_engine(tcx, body, def_id)
let mut start = START_BLOCK;
+        // Vec of the blocks that should be merged. We store the indices here, instead of the
+        // statements themselves, to avoid moving the (relatively) large statements twice.
+        // We do not push the statements directly into the target block (`bb`), as that is
+        // slower due to additional reallocations.
+ let mut merged_blocks = Vec::new();
loop {
let mut changed = false;
self.collapse_goto_chain(successor, &mut changed);
}
- let mut new_stmts = vec![];
let mut inner_changed = true;
+ merged_blocks.clear();
while inner_changed {
inner_changed = false;
inner_changed |= self.simplify_branch(&mut terminator);
- inner_changed |= self.merge_successor(&mut new_stmts, &mut terminator);
+ inner_changed |= self.merge_successor(&mut merged_blocks, &mut terminator);
changed |= inner_changed;
}
- let data = &mut self.basic_blocks[bb];
- data.statements.extend(new_stmts);
- data.terminator = Some(terminator);
+ let statements_to_merge =
+ merged_blocks.iter().map(|&i| self.basic_blocks[i].statements.len()).sum();
+
+ if statements_to_merge > 0 {
+ let mut statements = std::mem::take(&mut self.basic_blocks[bb].statements);
+ statements.reserve(statements_to_merge);
+ for &from in &merged_blocks {
+ statements.append(&mut self.basic_blocks[from].statements);
+ }
+ self.basic_blocks[bb].statements = statements;
+ }
+
+ self.basic_blocks[bb].terminator = Some(terminator);
changed |= inner_changed;
}
// merge a block with 1 `goto` predecessor to its parent
fn merge_successor(
&mut self,
- new_stmts: &mut Vec<Statement<'tcx>>,
+ merged_blocks: &mut Vec<BasicBlock>,
terminator: &mut Terminator<'tcx>,
) -> bool {
let target = match terminator.kind {
return false;
}
};
- new_stmts.extend(self.basic_blocks[target].statements.drain(..));
+
+ merged_blocks.push(target);
self.pred_count[target] = 0;
true
};
let first_succ = {
- if let Some(&first_succ) = terminator.successors().nth(0) {
+ if let Some(&first_succ) = terminator.successors().next() {
if terminator.successors().all(|s| *s == first_succ) {
let count = terminator.successors().count();
self.pred_count[first_succ] -= (count - 1) as u32;
debug!("destructor_call_block({:?}, {:?})", self, succ);
let tcx = self.tcx();
let drop_trait = tcx.lang_items().drop_trait().unwrap();
- let drop_fn = tcx.associated_items(drop_trait).in_definition_order().nth(0).unwrap();
+ let drop_fn = tcx.associated_items(drop_trait).in_definition_order().next().unwrap();
let ty = self.place_ty(self.place);
let substs = tcx.mk_substs_trait(ty, &[]);
self.elaborator.patch().new_block(base_block)
}
- /// Ceates a pair of drop-loops of `place`, which drops its contents, even
+ /// Creates a pair of drop-loops of `place`, which drops its contents, even
/// in the case of 1 panic. If `ptr_based`, creates a pointer loop,
/// otherwise create an index loop.
fn drop_loop_pair(
debug!("drop_flag_reset_block({:?},{:?})", self, mode);
let block = self.new_block(unwind, TerminatorKind::Goto { target: succ });
- let block_start = Location { block: block, statement_index: 0 };
+ let block_start = Location { block, statement_index: 0 };
self.elaborator.clear_drop_flag(block_start, self.path, mode);
block
}
let call = TerminatorKind::Call {
func: Operand::function_handle(tcx, free_func, substs, self.source_info.span),
- args: args,
+ args,
destination: Some((unit_temp, target)),
cleanup: None,
from_hir_call: false,
writeln!(file, "// MIR local liveness analysis for `{}`", node_path)?;
writeln!(file, "// source = {:?}", source)?;
writeln!(file, "// pass_name = {}", pass_name)?;
- writeln!(file, "")?;
+ writeln!(file)?;
write_mir_fn(tcx, source, body, &mut file, result)?;
Ok(())
});
write_basic_block(tcx, block, body, &mut |_, _| Ok(()), w)?;
print(w, " ", &result.outs)?;
if block.index() + 1 != body.basic_blocks().len() {
- writeln!(w, "")?;
+ writeln!(w)?;
}
}
if let Some(ref layout) = body.generator_layout {
writeln!(file, "// generator_layout = {:?}", layout)?;
}
- writeln!(file, "")?;
+ writeln!(file)?;
extra_data(PassWhere::BeforeCFG, &mut file)?;
write_user_type_annotations(body, &mut file)?;
write_mir_fn(tcx, source, body, &mut extra_data, &mut file)?;
first = false;
} else {
// Put empty lines between all items
- writeln!(w, "")?;
+ writeln!(w)?;
}
write_mir_fn(tcx, MirSource::item(def_id), body, &mut |_, _| Ok(()), w)?;
for (i, body) in tcx.promoted_mir(def_id).iter_enumerated() {
- writeln!(w, "")?;
+ writeln!(w)?;
let src = MirSource { instance: ty::InstanceDef::Item(def_id), promoted: Some(i) };
write_mir_fn(tcx, src, body, &mut |_, _| Ok(()), w)?;
}
extra_data(PassWhere::BeforeBlock(block), w)?;
write_basic_block(tcx, block, body, extra_data, w)?;
if block.index() + 1 != body.basic_blocks().len() {
- writeln!(w, "")?;
+ writeln!(w)?;
}
}
writeln!(w, "{}{:?}{}: {{", INDENT, block, cleanup_text)?;
// List of statements in the middle.
- let mut current_location = Location { block: block, statement_index: 0 };
+ let mut current_location = Location { block, statement_index: 0 };
for statement in &data.statements {
extra_data(PassWhere::BeforeLocation(current_location), w)?;
let indented_body = format!("{0}{0}{1:?};", INDENT, statement);
}
let children = match scope_tree.get(&parent) {
- Some(childs) => childs,
+ Some(children) => children,
None => return Ok(()),
};
write_scope_tree(tcx, body, &scope_tree, w, OUTERMOST_SOURCE_SCOPE, 1)?;
// Add an empty line before the first block is printed.
- writeln!(w, "")?;
+ writeln!(w)?;
Ok(())
}
trace!("write_mir_sig: {:?}", src.instance);
let kind = tcx.def_kind(src.def_id());
let is_function = match kind {
- Some(DefKind::Fn) | Some(DefKind::Method) | Some(DefKind::Ctor(..)) => true,
+ Some(DefKind::Fn) | Some(DefKind::AssocFn) | Some(DefKind::Ctor(..)) => true,
_ => tcx.is_closure(src.def_id()),
};
match (kind, src.promoted) {
rustc_session = { path = "../librustc_session" }
rustc_span = { path = "../librustc_span" }
rustc_target = { path = "../librustc_target" }
+rustc_trait_selection = { path = "../librustc_trait_selection" }
rustc_ast = { path = "../librustc_ast" }
smallvec = { version = "1.0", features = ["union", "may_dangle"] }
// value being matched, taking the variance field into account.
candidate.ascriptions.push(Ascription {
span: user_ty_span,
- user_ty: user_ty,
+ user_ty,
source: match_pair.place,
variance,
});
i == variant_index || {
self.hir.tcx().features().exhaustive_patterns
&& !v
- .uninhabited_from(self.hir.tcx(), substs, adt_def.adt_kind())
+ .uninhabited_from(
+ self.hir.tcx(),
+ substs,
+ adt_def.adt_kind(),
+ self.hir.param_env,
+ )
.is_empty()
}
}) && (adt_def.did.is_local()
PatKind::Slice { ref prefix, ref slice, ref suffix } => {
let len = prefix.len() + suffix.len();
let op = if slice.is_some() { BinOp::Ge } else { BinOp::Eq };
- Test {
- span: match_pair.pattern.span,
- kind: TestKind::Len { len: len as u64, op: op },
- }
+ Test { span: match_pair.pattern.span, kind: TestKind::Len { len: len as u64, op } }
}
PatKind::Or { .. } => bug!("or-patterns should have already been handled"),
..
})
| Node::ImplItem(hir::ImplItem {
- kind: hir::ImplItemKind::Method(hir::FnSig { decl, .. }, body_id),
+ kind: hir::ImplItemKind::Fn(hir::FnSig { decl, .. }, body_id),
..
})
| Node::TraitItem(hir::TraitItem {
kind:
- hir::TraitItemKind::Method(hir::FnSig { decl, .. }, hir::TraitMethod::Provided(body_id)),
+ hir::TraitItemKind::Fn(hir::FnSig { decl, .. }, hir::TraitFn::Provided(body_id)),
..
}) => (*body_id, decl.output.span()),
Node::Item(hir::Item { kind: hir::ItemKind::Static(ty, _, body_id), .. })
impl GuardFrameLocal {
fn new(id: hir::HirId, _binding_mode: BindingMode) -> Self {
- GuardFrameLocal { id: id }
+ GuardFrameLocal { id }
}
}
result.push(StmtRef::Mirror(Box::new(Stmt {
kind: StmtKind::Let {
- remainder_scope: remainder_scope,
+ remainder_scope,
init_scope: region::Scope {
id: hir_id.local_id,
data: region::ScopeData::Node,
use rustc::ty::{self, AdtKind, Ty};
use rustc_hir as hir;
use rustc_hir::def::{CtorKind, CtorOf, DefKind, Res};
-use rustc_hir::def_id::LocalDefId;
use rustc_index::vec::Idx;
use rustc_span::Span;
// a tuple-struct or tuple-variant. This has the type of a
// `Fn` but with the user-given substitutions.
Res::Def(DefKind::Fn, _)
- | Res::Def(DefKind::Method, _)
+ | Res::Def(DefKind::AssocFn, _)
| Res::Def(DefKind::Ctor(_, CtorKind::Fn), _)
| Res::Def(DefKind::Const, _)
| Res::Def(DefKind::AssocConst, _) => {
match res {
// A regular function, constructor function or a constant.
Res::Def(DefKind::Fn, _)
- | Res::Def(DefKind::Method, _)
+ | Res::Def(DefKind::AssocFn, _)
| Res::Def(DefKind::Ctor(_, CtorKind::Fn), _)
| Res::SelfCtor(..) => {
let user_ty = user_substs_applied_to_res(cx, expr.hir_id, res);
let closure_def_id = cx.body_owner;
let upvar_id = ty::UpvarId {
var_path: ty::UpvarPath { hir_id: var_hir_id },
- closure_expr_id: LocalDefId::from_def_id(closure_def_id),
+ closure_expr_id: closure_def_id.expect_local(),
};
let var_ty = cx.tables().node_type(var_hir_id);
) -> ExprRef<'tcx> {
let upvar_id = ty::UpvarId {
var_path: ty::UpvarPath { hir_id: var_hir_id },
- closure_expr_id: cx.tcx.hir().local_def_id(closure_expr.hir_id).to_local(),
+ closure_expr_id: cx.tcx.hir().local_def_id(closure_expr.hir_id).expect_local(),
};
let upvar_capture = cx.tables().upvar_capture(upvar_id);
let temp_lifetime = cx.region_scope_tree.temporary_scope(closure_expr.hir_id.local_id);
use rustc_index::vec::Idx;
use rustc_infer::infer::InferCtxt;
use rustc_span::symbol::{sym, Symbol};
+use rustc_trait_selection::infer::InferCtxtExt;
#[derive(Clone)]
crate struct Cx<'a, 'tcx> {
use super::{compare_const_vals, PatternFoldable, PatternFolder};
use super::{FieldPat, Pat, PatKind, PatRange};
-use rustc::ty::layout::{Integer, IntegerExt, Size, VariantIdx};
-use rustc::ty::{self, Const, Ty, TyCtxt, TypeFoldable, VariantDef};
-use rustc_hir::def_id::DefId;
-use rustc_hir::{HirId, RangeEnd};
-
-use rustc::lint;
use rustc::mir::interpret::{truncate, AllocId, ConstValue, Pointer, Scalar};
use rustc::mir::Field;
+use rustc::ty::layout::{Integer, IntegerExt, Size, VariantIdx};
+use rustc::ty::{self, Const, Ty, TyCtxt, TypeFoldable, VariantDef};
use rustc::util::common::ErrorReported;
-
use rustc_attr::{SignedInt, UnsignedInt};
+use rustc_hir::def_id::DefId;
+use rustc_hir::{HirId, RangeEnd};
+use rustc_session::lint;
use rustc_span::{Span, DUMMY_SP};
use arena::TypedArena;
/// Pushes a new row to the matrix. If the row starts with an or-pattern, this expands it.
crate fn push(&mut self, row: PatStack<'p, 'tcx>) {
if let Some(rows) = row.expand_or_pat() {
- self.0.extend(rows);
+ for row in rows {
+ // We recursively expand the or-patterns of the new rows.
+ // This is necessary as we might have `0 | (1 | 2)` or e.g., `x @ 0 | x @ (1 | 2)`.
+ self.push(row)
+ }
} else {
self.0.push(row);
}
fn is_uninhabited(&self, ty: Ty<'tcx>) -> bool {
if self.tcx.features().exhaustive_patterns {
- self.tcx.is_ty_uninhabited_from(self.module, ty)
+ self.tcx.is_ty_uninhabited_from(self.module, ty, self.param_env)
} else {
false
}
// eliminate it straight away.
remaining_ranges = vec![];
} else {
- // Otherwise explicitely compute the remaining ranges.
+ // Otherwise explicitly compute the remaining ranges.
remaining_ranges = other_range.subtract_from(remaining_ranges);
}
PatKind::Leaf { subpatterns }
}
}
- ty::Ref(..) => PatKind::Deref { subpattern: subpatterns.nth(0).unwrap() },
+ ty::Ref(..) => PatKind::Deref { subpattern: subpatterns.next().unwrap() },
ty::Slice(_) | ty::Array(..) => bug!("bad slice pattern {:?} {:?}", self, ty),
_ => PatKind::Wild,
},
def.variants
.iter()
.filter(|v| {
- !v.uninhabited_from(cx.tcx, substs, def.adt_kind())
+ !v.uninhabited_from(cx.tcx, substs, def.adt_kind(), cx.param_env)
.contains(cx.tcx, cx.module)
})
.map(|v| Variant(v.def_id))
) -> Option<IntRange<'tcx>> {
if let Some((target_size, bias)) = Self::integral_size_and_signed_bias(tcx, value.ty) {
let ty = value.ty;
- let val = if let ty::ConstKind::Value(ConstValue::Scalar(Scalar::Raw { data, size })) =
- value.val
- {
- // For this specific pattern we can skip a lot of effort and go
- // straight to the result, after doing a bit of checking. (We
- // could remove this branch and just use the next branch, which
- // is more general but much slower.)
- Scalar::<()>::check_raw(data, size, target_size);
- data
- } else if let Some(val) = value.try_eval_bits(tcx, param_env, ty) {
- // This is a more general form of the previous branch.
- val
- } else {
- return None;
- };
+ let val = (|| {
+ if let ty::ConstKind::Value(ConstValue::Scalar(scalar)) = value.val {
+ // For this specific pattern we can skip a lot of effort and go
+ // straight to the result, after doing a bit of checking. (We
+ // could remove this branch and just fall through, which
+ // is more general but much slower.)
+ if let Ok(bits) = scalar.to_bits_or_ptr(target_size, &tcx) {
+ return Some(bits);
+ }
+ }
+ // This is a more general form of the previous case.
+ value.try_eval_bits(tcx, param_env, ty)
+ })()?;
let val = val ^ bias;
Some(IntRange { range: val..=val, ty, span })
} else {
PatKind::Binding { .. } | PatKind::Wild => Some(ctor_wild_subpatterns.iter().collect()),
PatKind::Variant { adt_def, variant_index, ref subpatterns, .. } => {
- let ref variant = adt_def.variants[variant_index];
+ let variant = &adt_def.variants[variant_index];
let is_non_exhaustive = cx.is_foreign_non_exhaustive_variant(pat.ty, variant);
Some(Variant(variant.def_id))
.filter(|variant_constructor| variant_constructor == constructor)
use super::{PatCtxt, PatKind, PatternError};
-use rustc::hir::map::Map;
use rustc::ty::{self, Ty, TyCtxt};
use rustc_ast::ast::Mutability;
use rustc_errors::{error_code, struct_span_err, Applicability, DiagnosticBuilder};
}
impl<'tcx> Visitor<'tcx> for MatchVisitor<'_, 'tcx> {
- type Map = Map<'tcx>;
+ type Map = intravisit::ErasedMap<'tcx>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
NestedVisitorMap::None
}
PatternError::AssocConstInPattern(span) => {
self.span_e0158(span, "associated consts cannot be referenced in patterns")
}
+ PatternError::ConstParamInPattern(span) => {
+ self.span_e0158(span, "const parameters cannot be referenced in patterns")
+ }
PatternError::FloatBug => {
// FIXME(#31407) this is only necessary because float parsing is buggy
::rustc::mir::interpret::struct_error(
fn check_in_cx(&self, hir_id: HirId, f: impl FnOnce(MatchCheckCtxt<'_, 'tcx>)) {
let module = self.tcx.parent_module(hir_id);
- MatchCheckCtxt::create_and_enter(self.tcx, self.param_env, module, |cx| f(cx));
+ MatchCheckCtxt::create_and_enter(self.tcx, self.param_env, module.to_def_id(), |cx| f(cx));
}
fn check_match(
}
impl<'v> Visitor<'v> for AtBindingPatternVisitor<'_, '_, '_> {
- type Map = Map<'v>;
+ type Map = intravisit::ErasedMap<'v>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
NestedVisitorMap::None
}
-use rustc::lint;
use rustc::mir::Field;
use rustc::ty::{self, Ty, TyCtxt};
use rustc_hir as hir;
-use rustc_infer::infer::{InferCtxt, TyCtxtInferExt};
-use rustc_infer::traits::predicate_for_trait_def;
-use rustc_infer::traits::{self, ObligationCause, PredicateObligation};
-
use rustc_index::vec::Idx;
-
+use rustc_infer::infer::{InferCtxt, TyCtxtInferExt};
+use rustc_session::lint;
use rustc_span::Span;
+use rustc_trait_selection::traits::predicate_for_trait_def;
+use rustc_trait_selection::traits::query::evaluate_obligation::InferCtxtExt;
+use rustc_trait_selection::traits::{self, ObligationCause, PredicateObligation};
use std::cell::Cell;
let kind = match cv.ty.kind {
ty::Float(_) => {
tcx.struct_span_lint_hir(
- ::rustc::lint::builtin::ILLEGAL_FLOATING_POINT_LITERAL_PATTERN,
+ lint::builtin::ILLEGAL_FLOATING_POINT_LITERAL_PATTERN,
id,
span,
|lint| lint.build("floating-point types cannot be used in patterns").emit(),
#[derive(Clone, Debug)]
crate enum PatternError {
AssocConstInPattern(Span),
+ ConstParamInPattern(Span),
StaticInPattern(Span),
FloatBug,
NonConstPath(Span),
| Res::SelfCtor(..) => PatKind::Leaf { subpatterns },
_ => {
- self.errors.push(PatternError::NonConstPath(span));
+ let pattern_error = match res {
+ Res::Def(DefKind::ConstParam, _) => PatternError::ConstParamInPattern(span),
+ _ => PatternError::NonConstPath(span),
+ };
+ self.errors.push(pattern_error);
PatKind::Wild
}
};
#![feature(box_patterns)]
#![feature(box_syntax)]
+#![feature(const_if_match)]
+#![feature(const_fn)]
+#![feature(const_panic)]
#![feature(crate_visibility_modifier)]
#![feature(bool_to_option)]
#![recursion_limit = "256"]
use rustc::hir::map::blocks::FnLikeNode;
-use rustc::lint::builtin::UNCONDITIONAL_RECURSION;
use rustc::mir::{self, Body, TerminatorKind};
use rustc::ty::subst::InternalSubsts;
use rustc::ty::{self, AssocItem, AssocItemContainer, Instance, TyCtxt};
use rustc_hir::def_id::DefId;
use rustc_hir::intravisit::FnKind;
use rustc_index::bit_set::BitSet;
+use rustc_session::lint::builtin::UNCONDITIONAL_RECURSION;
crate fn check<'tcx>(tcx: TyCtxt<'tcx>, body: &Body<'tcx>, def_id: DefId) {
let hir_id = tcx.hir().as_local_hir_id(def_id).unwrap();
+++ /dev/null
-//! Process the potential `cfg` attributes on a module.
-//! Also determine if the module should be included in this configuration.
-//!
-//! This module properly belongs in rustc_expand, but for now it's tied into
-//! parsing, so we leave it here to avoid complicated out-of-line dependencies.
-//!
-//! A principled solution to this wrong location would be to implement [#64197].
-//!
-//! [#64197]: https://github.com/rust-lang/rust/issues/64197
-
-use crate::{parse_in, validate_attr};
-use rustc_ast::ast::{self, AttrItem, Attribute, MetaItem};
-use rustc_ast::attr::HasAttrs;
-use rustc_ast::mut_visit::*;
-use rustc_ast::ptr::P;
-use rustc_ast::util::map_in_place::MapInPlace;
-use rustc_attr as attr;
-use rustc_data_structures::fx::FxHashMap;
-use rustc_errors::{error_code, struct_span_err, Applicability, Handler};
-use rustc_feature::{Feature, Features, State as FeatureState};
-use rustc_feature::{
- ACCEPTED_FEATURES, ACTIVE_FEATURES, REMOVED_FEATURES, STABLE_REMOVED_FEATURES,
-};
-use rustc_session::parse::{feature_err, ParseSess};
-use rustc_span::edition::{Edition, ALL_EDITIONS};
-use rustc_span::symbol::{sym, Symbol};
-use rustc_span::{Span, DUMMY_SP};
-
-use smallvec::SmallVec;
-
-/// A folder that strips out items that do not belong in the current configuration.
-pub struct StripUnconfigured<'a> {
- pub sess: &'a ParseSess,
- pub features: Option<&'a Features>,
-}
-
-fn get_features(
- span_handler: &Handler,
- krate_attrs: &[ast::Attribute],
- crate_edition: Edition,
- allow_features: &Option<Vec<String>>,
-) -> Features {
- fn feature_removed(span_handler: &Handler, span: Span, reason: Option<&str>) {
- let mut err = struct_span_err!(span_handler, span, E0557, "feature has been removed");
- err.span_label(span, "feature has been removed");
- if let Some(reason) = reason {
- err.note(reason);
- }
- err.emit();
- }
-
- fn active_features_up_to(edition: Edition) -> impl Iterator<Item = &'static Feature> {
- ACTIVE_FEATURES.iter().filter(move |feature| {
- if let Some(feature_edition) = feature.edition {
- feature_edition <= edition
- } else {
- false
- }
- })
- }
-
- let mut features = Features::default();
- let mut edition_enabled_features = FxHashMap::default();
-
- for &edition in ALL_EDITIONS {
- if edition <= crate_edition {
- // The `crate_edition` implies its respective umbrella feature-gate
- // (i.e., `#![feature(rust_20XX_preview)]` isn't needed on edition 20XX).
- edition_enabled_features.insert(edition.feature_name(), edition);
- }
- }
-
- for feature in active_features_up_to(crate_edition) {
- feature.set(&mut features, DUMMY_SP);
- edition_enabled_features.insert(feature.name, crate_edition);
- }
-
- // Process the edition umbrella feature-gates first, to ensure
- // `edition_enabled_features` is completed before it's queried.
- for attr in krate_attrs {
- if !attr.check_name(sym::feature) {
- continue;
- }
-
- let list = match attr.meta_item_list() {
- Some(list) => list,
- None => continue,
- };
-
- for mi in list {
- if !mi.is_word() {
- continue;
- }
-
- let name = mi.name_or_empty();
-
- let edition = ALL_EDITIONS.iter().find(|e| name == e.feature_name()).copied();
- if let Some(edition) = edition {
- if edition <= crate_edition {
- continue;
- }
-
- for feature in active_features_up_to(edition) {
- // FIXME(Manishearth) there is currently no way to set
- // lib features by edition
- feature.set(&mut features, DUMMY_SP);
- edition_enabled_features.insert(feature.name, edition);
- }
- }
- }
- }
-
- for attr in krate_attrs {
- if !attr.check_name(sym::feature) {
- continue;
- }
-
- let list = match attr.meta_item_list() {
- Some(list) => list,
- None => continue,
- };
-
- let bad_input = |span| {
- struct_span_err!(span_handler, span, E0556, "malformed `feature` attribute input")
- };
-
- for mi in list {
- let name = match mi.ident() {
- Some(ident) if mi.is_word() => ident.name,
- Some(ident) => {
- bad_input(mi.span())
- .span_suggestion(
- mi.span(),
- "expected just one word",
- format!("{}", ident.name),
- Applicability::MaybeIncorrect,
- )
- .emit();
- continue;
- }
- None => {
- bad_input(mi.span()).span_label(mi.span(), "expected just one word").emit();
- continue;
- }
- };
-
- if let Some(edition) = edition_enabled_features.get(&name) {
- let msg =
- &format!("the feature `{}` is included in the Rust {} edition", name, edition);
- span_handler.struct_span_warn_with_code(mi.span(), msg, error_code!(E0705)).emit();
- continue;
- }
-
- if ALL_EDITIONS.iter().any(|e| name == e.feature_name()) {
- // Handled in the separate loop above.
- continue;
- }
-
- let removed = REMOVED_FEATURES.iter().find(|f| name == f.name);
- let stable_removed = STABLE_REMOVED_FEATURES.iter().find(|f| name == f.name);
- if let Some(Feature { state, .. }) = removed.or(stable_removed) {
- if let FeatureState::Removed { reason } | FeatureState::Stabilized { reason } =
- state
- {
- feature_removed(span_handler, mi.span(), *reason);
- continue;
- }
- }
-
- if let Some(Feature { since, .. }) = ACCEPTED_FEATURES.iter().find(|f| name == f.name) {
- let since = Some(Symbol::intern(since));
- features.declared_lang_features.push((name, mi.span(), since));
- continue;
- }
-
- if let Some(allowed) = allow_features.as_ref() {
- if allowed.iter().find(|&f| name.as_str() == *f).is_none() {
- struct_span_err!(
- span_handler,
- mi.span(),
- E0725,
- "the feature `{}` is not in the list of allowed features",
- name
- )
- .emit();
- continue;
- }
- }
-
- if let Some(f) = ACTIVE_FEATURES.iter().find(|f| name == f.name) {
- f.set(&mut features, mi.span());
- features.declared_lang_features.push((name, mi.span(), None));
- continue;
- }
-
- features.declared_lib_features.push((name, mi.span()));
- }
- }
-
- features
-}
-
-// `cfg_attr`-process the crate's attributes and compute the crate's features.
-pub fn features(
- mut krate: ast::Crate,
- sess: &ParseSess,
- edition: Edition,
- allow_features: &Option<Vec<String>>,
-) -> (ast::Crate, Features) {
- let mut strip_unconfigured = StripUnconfigured { sess, features: None };
-
- let unconfigured_attrs = krate.attrs.clone();
- let diag = &sess.span_diagnostic;
- let err_count = diag.err_count();
- let features = match strip_unconfigured.configure(krate.attrs) {
- None => {
- // The entire crate is unconfigured.
- krate.attrs = Vec::new();
- krate.module.items = Vec::new();
- Features::default()
- }
- Some(attrs) => {
- krate.attrs = attrs;
- let features = get_features(diag, &krate.attrs, edition, allow_features);
- if err_count == diag.err_count() {
- // Avoid reconfiguring malformed `cfg_attr`s.
- strip_unconfigured.features = Some(&features);
- strip_unconfigured.configure(unconfigured_attrs);
- }
- features
- }
- };
- (krate, features)
-}
-
-#[macro_export]
-macro_rules! configure {
- ($this:ident, $node:ident) => {
- match $this.configure($node) {
- Some(node) => node,
- None => return Default::default(),
- }
- };
-}
-
-const CFG_ATTR_GRAMMAR_HELP: &str = "#[cfg_attr(condition, attribute, other_attribute, ...)]";
-const CFG_ATTR_NOTE_REF: &str = "for more information, visit \
- <https://doc.rust-lang.org/reference/conditional-compilation.html\
- #the-cfg_attr-attribute>";
-
-impl<'a> StripUnconfigured<'a> {
- pub fn configure<T: HasAttrs>(&mut self, mut node: T) -> Option<T> {
- self.process_cfg_attrs(&mut node);
- self.in_cfg(node.attrs()).then_some(node)
- }
-
- /// Parse and expand all `cfg_attr` attributes into a list of attributes
- /// that are within each `cfg_attr` that has a true configuration predicate.
- ///
- /// Gives compiler warnigns if any `cfg_attr` does not contain any
- /// attributes and is in the original source code. Gives compiler errors if
- /// the syntax of any `cfg_attr` is incorrect.
- pub fn process_cfg_attrs<T: HasAttrs>(&mut self, node: &mut T) {
- node.visit_attrs(|attrs| {
- attrs.flat_map_in_place(|attr| self.process_cfg_attr(attr));
- });
- }
-
- /// Parse and expand a single `cfg_attr` attribute into a list of attributes
- /// when the configuration predicate is true, or otherwise expand into an
- /// empty list of attributes.
- ///
- /// Gives a compiler warning when the `cfg_attr` contains no attributes and
- /// is in the original source file. Gives a compiler error if the syntax of
- /// the attribute is incorrect.
- fn process_cfg_attr(&mut self, attr: Attribute) -> Vec<Attribute> {
- if !attr.has_name(sym::cfg_attr) {
- return vec![attr];
- }
-
- let (cfg_predicate, expanded_attrs) = match self.parse_cfg_attr(&attr) {
- None => return vec![],
- Some(r) => r,
- };
-
- // Lint on zero attributes in source.
- if expanded_attrs.is_empty() {
- return vec![attr];
- }
-
- // At this point we know the attribute is considered used.
- attr::mark_used(&attr);
-
- if !attr::cfg_matches(&cfg_predicate, self.sess, self.features) {
- return vec![];
- }
-
- // We call `process_cfg_attr` recursively in case there's a
- // `cfg_attr` inside of another `cfg_attr`. E.g.
- // `#[cfg_attr(false, cfg_attr(true, some_attr))]`.
- expanded_attrs
- .into_iter()
- .flat_map(|(item, span)| {
- let attr = attr::mk_attr_from_item(attr.style, item, span);
- self.process_cfg_attr(attr)
- })
- .collect()
- }
-
- fn parse_cfg_attr(&self, attr: &Attribute) -> Option<(MetaItem, Vec<(AttrItem, Span)>)> {
- match attr.get_normal_item().args {
- ast::MacArgs::Delimited(dspan, delim, ref tts) if !tts.is_empty() => {
- let msg = "wrong `cfg_attr` delimiters";
- validate_attr::check_meta_bad_delim(self.sess, dspan, delim, msg);
- match parse_in(self.sess, tts.clone(), "`cfg_attr` input", |p| p.parse_cfg_attr()) {
- Ok(r) => return Some(r),
- Err(mut e) => {
- e.help(&format!("the valid syntax is `{}`", CFG_ATTR_GRAMMAR_HELP))
- .note(CFG_ATTR_NOTE_REF)
- .emit();
- }
- }
- }
- _ => self.error_malformed_cfg_attr_missing(attr.span),
- }
- None
- }
-
- fn error_malformed_cfg_attr_missing(&self, span: Span) {
- self.sess
- .span_diagnostic
- .struct_span_err(span, "malformed `cfg_attr` attribute input")
- .span_suggestion(
- span,
- "missing condition and attribute",
- CFG_ATTR_GRAMMAR_HELP.to_string(),
- Applicability::HasPlaceholders,
- )
- .note(CFG_ATTR_NOTE_REF)
- .emit();
- }
-
- /// Determines if a node with the given attributes should be included in this configuration.
- pub fn in_cfg(&self, attrs: &[Attribute]) -> bool {
- attrs.iter().all(|attr| {
- if !is_cfg(attr) {
- return true;
- }
- let meta_item = match validate_attr::parse_meta(self.sess, attr) {
- Ok(meta_item) => meta_item,
- Err(mut err) => {
- err.emit();
- return true;
- }
- };
- let error = |span, msg, suggestion: &str| {
- let mut err = self.sess.span_diagnostic.struct_span_err(span, msg);
- if !suggestion.is_empty() {
- err.span_suggestion(
- span,
- "expected syntax is",
- suggestion.into(),
- Applicability::MaybeIncorrect,
- );
- }
- err.emit();
- true
- };
- let span = meta_item.span;
- match meta_item.meta_item_list() {
- None => error(span, "`cfg` is not followed by parentheses", "cfg(/* predicate */)"),
- Some([]) => error(span, "`cfg` predicate is not specified", ""),
- Some([_, .., l]) => error(l.span(), "multiple `cfg` predicates are specified", ""),
- Some([single]) => match single.meta_item() {
- Some(meta_item) => attr::cfg_matches(meta_item, self.sess, self.features),
- None => error(single.span(), "`cfg` predicate key cannot be a literal", ""),
- },
- }
- })
- }
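`in_cfg` enforces that `cfg` takes exactly one predicate; several conditions must be grouped with combinators. A small user-side sketch using the `cfg!` macro (illustrative only):

```rust
fn main() {
    // `cfg` takes exactly one predicate; `all(...)` and `any(...)` combine
    // several. With no arguments, `all()` is true and `any()` is false on
    // every target, which makes them handy for deterministic examples.
    assert!(cfg!(all()));
    assert!(!cfg!(any()));
    println!("predicates ok");
}
```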
-
- /// Visit attributes on expression and statements (but not attributes on items in blocks).
- fn visit_expr_attrs(&mut self, attrs: &[Attribute]) {
- // flag the offending attributes
- for attr in attrs.iter() {
- self.maybe_emit_expr_attr_err(attr);
- }
- }
-
- /// If attributes are not allowed on expressions, emit an error for `attr`
- pub fn maybe_emit_expr_attr_err(&self, attr: &Attribute) {
- if !self.features.map(|features| features.stmt_expr_attributes).unwrap_or(true) {
- let mut err = feature_err(
- self.sess,
- sym::stmt_expr_attributes,
- attr.span,
- "attributes on expressions are experimental",
- );
-
- if attr.is_doc_comment() {
- err.help("`///` is for documentation comments. For a plain comment, use `//`.");
- }
-
- err.emit();
- }
- }
-
- pub fn configure_foreign_mod(&mut self, foreign_mod: &mut ast::ForeignMod) {
- let ast::ForeignMod { abi: _, items } = foreign_mod;
- items.flat_map_in_place(|item| self.configure(item));
- }
-
- pub fn configure_generic_params(&mut self, params: &mut Vec<ast::GenericParam>) {
- params.flat_map_in_place(|param| self.configure(param));
- }
-
- fn configure_variant_data(&mut self, vdata: &mut ast::VariantData) {
- match vdata {
- ast::VariantData::Struct(fields, ..) | ast::VariantData::Tuple(fields, _) => {
- fields.flat_map_in_place(|field| self.configure(field))
- }
- ast::VariantData::Unit(_) => {}
- }
- }
-
- pub fn configure_item_kind(&mut self, item: &mut ast::ItemKind) {
- match item {
- ast::ItemKind::Struct(def, _generics) | ast::ItemKind::Union(def, _generics) => {
- self.configure_variant_data(def)
- }
- ast::ItemKind::Enum(ast::EnumDef { variants }, _generics) => {
- variants.flat_map_in_place(|variant| self.configure(variant));
- for variant in variants {
- self.configure_variant_data(&mut variant.data);
- }
- }
- _ => {}
- }
- }
-
- pub fn configure_expr_kind(&mut self, expr_kind: &mut ast::ExprKind) {
- match expr_kind {
- ast::ExprKind::Match(_m, arms) => {
- arms.flat_map_in_place(|arm| self.configure(arm));
- }
- ast::ExprKind::Struct(_path, fields, _base) => {
- fields.flat_map_in_place(|field| self.configure(field));
- }
- _ => {}
- }
- }
-
- pub fn configure_expr(&mut self, expr: &mut P<ast::Expr>) {
- self.visit_expr_attrs(expr.attrs());
-
- // If an expr is valid to cfg away it will have been removed by the
- // outer stmt or expression folder before descending in here.
- // Anything else is always required, and thus has to error out
- // in case of a cfg attr.
- //
- // N.B., this is intentionally not part of the visit_expr() function
- // in order for filter_map_expr() to be able to avoid this check
- if let Some(attr) = expr.attrs().iter().find(|a| is_cfg(a)) {
- let msg = "removing an expression is not supported in this position";
- self.sess.span_diagnostic.span_err(attr.span, msg);
- }
-
- self.process_cfg_attrs(expr)
- }
-
- pub fn configure_pat(&mut self, pat: &mut P<ast::Pat>) {
- if let ast::PatKind::Struct(_path, fields, _etc) = &mut pat.kind {
- fields.flat_map_in_place(|field| self.configure(field));
- }
- }
-
- pub fn configure_fn_decl(&mut self, fn_decl: &mut ast::FnDecl) {
- fn_decl.inputs.flat_map_in_place(|arg| self.configure(arg));
- }
-}
-
-impl<'a> MutVisitor for StripUnconfigured<'a> {
- fn visit_foreign_mod(&mut self, foreign_mod: &mut ast::ForeignMod) {
- self.configure_foreign_mod(foreign_mod);
- noop_visit_foreign_mod(foreign_mod, self);
- }
-
- fn visit_item_kind(&mut self, item: &mut ast::ItemKind) {
- self.configure_item_kind(item);
- noop_visit_item_kind(item, self);
- }
-
- fn visit_expr(&mut self, expr: &mut P<ast::Expr>) {
- self.configure_expr(expr);
- self.configure_expr_kind(&mut expr.kind);
- noop_visit_expr(expr, self);
- }
-
- fn filter_map_expr(&mut self, expr: P<ast::Expr>) -> Option<P<ast::Expr>> {
- let mut expr = configure!(self, expr);
- self.configure_expr_kind(&mut expr.kind);
- noop_visit_expr(&mut expr, self);
- Some(expr)
- }
-
- fn flat_map_stmt(&mut self, stmt: ast::Stmt) -> SmallVec<[ast::Stmt; 1]> {
- noop_flat_map_stmt(configure!(self, stmt), self)
- }
-
- fn flat_map_item(&mut self, item: P<ast::Item>) -> SmallVec<[P<ast::Item>; 1]> {
- noop_flat_map_item(configure!(self, item), self)
- }
-
- fn flat_map_impl_item(&mut self, item: P<ast::AssocItem>) -> SmallVec<[P<ast::AssocItem>; 1]> {
- noop_flat_map_assoc_item(configure!(self, item), self)
- }
-
- fn flat_map_trait_item(&mut self, item: P<ast::AssocItem>) -> SmallVec<[P<ast::AssocItem>; 1]> {
- noop_flat_map_assoc_item(configure!(self, item), self)
- }
-
- fn visit_mac(&mut self, _mac: &mut ast::Mac) {
- // Don't configure interpolated AST (cf. issue #34171).
- // Interpolated AST will get configured once the surrounding tokens are parsed.
- }
-
- fn visit_pat(&mut self, pat: &mut P<ast::Pat>) {
- self.configure_pat(pat);
- noop_visit_pat(pat, self)
- }
-
- fn visit_fn_decl(&mut self, mut fn_decl: &mut P<ast::FnDecl>) {
- self.configure_fn_decl(&mut fn_decl);
- noop_visit_fn_decl(fn_decl, self);
- }
-}
-
-fn is_cfg(attr: &Attribute) -> bool {
- attr.check_name(sym::cfg)
-}
-
-/// Process the potential `cfg` attributes on a module.
-/// Also determine if the module should be included in this configuration.
-pub fn process_configure_mod(sess: &ParseSess, cfg_mods: bool, attrs: &mut Vec<Attribute>) -> bool {
- // Don't perform gated feature checking.
- let mut strip_unconfigured = StripUnconfigured { sess, features: None };
- strip_unconfigured.process_cfg_attrs(attrs);
- !cfg_mods || strip_unconfigured.in_cfg(&attrs)
-}
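`process_configure_mod` combines the two halves above: `cfg_attr` processing plus the `in_cfg` inclusion test. A user-side sketch of what stripping means in practice (illustrative; the function names are made up, and the stripped item only has to parse, not resolve):

```rust
// Items whose `cfg` predicate is false are removed from the AST entirely,
// before name resolution, so their bodies never need to typecheck.
#[cfg(any())] // `any()` with no arguments is never true: stripped
fn never_compiled() -> u32 {
    this_function_does_not_exist()
}

#[cfg(all())] // always true: kept
fn always_compiled() -> u32 {
    42
}

fn main() {
    assert_eq!(always_compiled(), 42);
}
```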
/// Used only for error recovery when arriving to EOF with mismatched braces.
matching_delim_spans: Vec<(token::DelimToken, Span, Span)>,
last_unclosed_found_span: Option<Span>,
+ /// Collect empty block spans that might have been auto-inserted by editors.
last_delim_empty_block_spans: FxHashMap<token::DelimToken, Span>,
}
if tts.is_empty() {
let empty_block_span = open_brace_span.to(close_brace_span);
- self.last_delim_empty_block_spans.insert(delim, empty_block_span);
+ if !sm.is_multiline(empty_block_span) {
+ // Only track if the block is in the form of `{}`, otherwise it is
+ // likely that it was written on purpose.
+ self.last_delim_empty_block_spans.insert(delim, empty_block_span);
+ }
}
if self.open_braces.is_empty() {
#![feature(bool_to_option)]
#![feature(crate_visibility_modifier)]
+#![feature(bindings_after_at)]
+#![feature(try_blocks)]
use rustc_ast::ast;
-use rustc_ast::token::{self, Nonterminal, Token};
+use rustc_ast::token::{self, Nonterminal};
use rustc_ast::tokenstream::{self, TokenStream, TokenTree};
use rustc_ast_pretty::pprust;
use rustc_data_structures::sync::Lrc;
use rustc_session::parse::ParseSess;
use rustc_span::{FileName, SourceFile, Span};
-use std::path::{Path, PathBuf};
+use std::path::Path;
use std::str;
use log::info;
use parser::{emit_unclosed_delims, make_unclosed_delims_error, Parser};
pub mod lexer;
pub mod validate_attr;
-#[macro_use]
-pub mod config;
-
-#[derive(Clone)]
-pub struct Directory {
- pub path: PathBuf,
- pub ownership: DirectoryOwnership,
-}
-
-#[derive(Copy, Clone)]
-pub enum DirectoryOwnership {
- Owned {
- // None if `mod.rs`, `Some("foo")` if we're in `foo.rs`.
- relative: Option<ast::Ident>,
- },
- UnownedViaBlock,
- UnownedViaMod,
-}
// A bunch of utility functions of the form `parse_<thing>_from_<source>`
// where <thing> includes crate, expr, item, stmt, tts, and one that
name: FileName,
source: String,
) -> Result<Parser<'_>, Vec<Diagnostic>> {
- let mut parser =
- maybe_source_file_to_parser(sess, sess.source_map().new_source_file(name, source))?;
- parser.recurse_into_file_modules = false;
- Ok(parser)
+ maybe_source_file_to_parser(sess, sess.source_map().new_source_file(name, source))
}
/// Creates a new parser, handling errors as appropriate if the file doesn't exist.
pub fn new_sub_parser_from_file<'a>(
sess: &'a ParseSess,
path: &Path,
- directory_ownership: DirectoryOwnership,
module_name: Option<String>,
sp: Span,
) -> Parser<'a> {
let mut p = source_file_to_parser(sess, file_to_source_file(sess, path, Some(sp)));
- p.directory.ownership = directory_ownership;
p.root_module_name = module_name;
p
}
let mut parser = stream_to_parser(sess, stream, None);
parser.unclosed_delims = unclosed_delims;
if parser.token == token::Eof {
- let span = Span::new(end_pos, end_pos, parser.token.span.ctxt());
- parser.set_token(Token::new(token::Eof, span));
+ parser.token.span = Span::new(end_pos, end_pos, parser.token.span.ctxt());
}
Ok(parser)
stream: TokenStream,
subparser_name: Option<&'static str>,
) -> Parser<'a> {
- Parser::new(sess, stream, None, true, false, subparser_name)
-}
-
-/// Given a stream, the `ParseSess` and the base directory, produces a parser.
-///
-/// Use this function when you are creating a parser from the token stream
-/// and also care about the current working directory of the parser (e.g.,
-/// you are trying to resolve modules defined inside a macro invocation).
-///
-/// # Note
-///
-/// The main usage of this function is outside of rustc, for those who uses
-/// librustc_ast as a library. Please do not remove this function while refactoring
-/// just because it is not used in rustc codebase!
-pub fn stream_to_parser_with_base_dir<'a>(
- sess: &'a ParseSess,
- stream: TokenStream,
- base_dir: Directory,
-) -> Parser<'a> {
- Parser::new(sess, stream, Some(base_dir), true, false, None)
+ Parser::new(sess, stream, false, subparser_name)
}
/// Runs the given subparser `f` on the tokens of the given `attr`'s item.
name: &'static str,
mut f: impl FnMut(&mut Parser<'a>) -> PResult<'a, T>,
) -> PResult<'a, T> {
- let mut parser = Parser::new(sess, tts, None, false, false, Some(name));
+ let mut parser = Parser::new(sess, tts, false, Some(name));
let result = f(&mut parser)?;
if parser.token != token::Eof {
parser.unexpected()?;
-use super::{Parser, PathStyle, TokenType};
+use super::{Parser, PathStyle};
use rustc_ast::ast;
use rustc_ast::attr;
use rustc_ast::token::{self, Nonterminal};
use log::debug;
#[derive(Debug)]
-enum InnerAttributeParsePolicy<'a> {
+pub(super) enum InnerAttrPolicy<'a> {
Permitted,
- NotPermitted { reason: &'a str, saw_doc_comment: bool, prev_attr_sp: Option<Span> },
+ Forbidden { reason: &'a str, saw_doc_comment: bool, prev_attr_sp: Option<Span> },
}
const DEFAULT_UNEXPECTED_INNER_ATTR_ERR_MSG: &str = "an inner attribute is not \
permitted in this context";
+pub(super) const DEFAULT_INNER_ATTR_FORBIDDEN: InnerAttrPolicy<'_> = InnerAttrPolicy::Forbidden {
+ reason: DEFAULT_UNEXPECTED_INNER_ATTR_ERR_MSG,
+ saw_doc_comment: false,
+ prev_attr_sp: None,
+};
+
impl<'a> Parser<'a> {
/// Parses attributes that appear before an item.
pub(super) fn parse_outer_attributes(&mut self) -> PResult<'a, Vec<ast::Attribute>> {
let mut just_parsed_doc_comment = false;
loop {
debug!("parse_outer_attributes: self.token={:?}", self.token);
- match self.token.kind {
- token::Pound => {
- let inner_error_reason = if just_parsed_doc_comment {
- "an inner attribute is not permitted following an outer doc comment"
- } else if !attrs.is_empty() {
- "an inner attribute is not permitted following an outer attribute"
- } else {
- DEFAULT_UNEXPECTED_INNER_ATTR_ERR_MSG
- };
- let inner_parse_policy = InnerAttributeParsePolicy::NotPermitted {
- reason: inner_error_reason,
- saw_doc_comment: just_parsed_doc_comment,
- prev_attr_sp: attrs.last().and_then(|a| Some(a.span)),
- };
- let attr = self.parse_attribute_with_inner_parse_policy(inner_parse_policy)?;
- attrs.push(attr);
- just_parsed_doc_comment = false;
- }
- token::DocComment(s) => {
- let attr = self.mk_doc_comment(s);
- if attr.style != ast::AttrStyle::Outer {
- let span = self.token.span;
- let mut err = self.struct_span_err(span, "expected outer doc comment");
- err.note(
+ if self.check(&token::Pound) {
+ let inner_error_reason = if just_parsed_doc_comment {
+ "an inner attribute is not permitted following an outer doc comment"
+ } else if !attrs.is_empty() {
+ "an inner attribute is not permitted following an outer attribute"
+ } else {
+ DEFAULT_UNEXPECTED_INNER_ATTR_ERR_MSG
+ };
+ let inner_parse_policy = InnerAttrPolicy::Forbidden {
+ reason: inner_error_reason,
+ saw_doc_comment: just_parsed_doc_comment,
+ prev_attr_sp: attrs.last().map(|a| a.span),
+ };
+ let attr = self.parse_attribute_with_inner_parse_policy(inner_parse_policy)?;
+ attrs.push(attr);
+ just_parsed_doc_comment = false;
+ } else if let token::DocComment(s) = self.token.kind {
+ let attr = self.mk_doc_comment(s);
+ if attr.style != ast::AttrStyle::Outer {
+ self.struct_span_err(self.token.span, "expected outer doc comment")
+ .note(
"inner doc comments like this (starting with \
- `//!` or `/*!`) can only appear before items",
- );
- return Err(err);
- }
- attrs.push(attr);
- self.bump();
- just_parsed_doc_comment = true;
+ `//!` or `/*!`) can only appear before items",
+ )
+ .emit();
}
- _ => break,
+ attrs.push(attr);
+ self.bump();
+ just_parsed_doc_comment = true;
+ } else {
+ break;
}
}
Ok(attrs)
}
fn mk_doc_comment(&self, s: Symbol) -> ast::Attribute {
- let style = comments::doc_comment_style(&s.as_str());
- attr::mk_doc_comment(style, s, self.token.span)
+ attr::mk_doc_comment(comments::doc_comment_style(&s.as_str()), s, self.token.span)
}
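`doc_comment_style` classifies `///`/`/**` comments as outer and `//!`/`/*!` as inner. A user-side sketch of the two styles (illustrative only; `answer` is a made-up name):

```rust
//! Inner doc comment (`//!`): documents the enclosing scope, so at the top
//! of a file it documents the crate or module itself.

/// Outer doc comment (`///`): documents the item that follows it.
fn answer() -> i32 {
    42
}

fn main() {
    assert_eq!(answer(), 42);
}
```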
/// Matches `attribute = # ! [ meta_item ]`.
/// attribute.
pub fn parse_attribute(&mut self, permit_inner: bool) -> PResult<'a, ast::Attribute> {
debug!("parse_attribute: permit_inner={:?} self.token={:?}", permit_inner, self.token);
- let inner_parse_policy = if permit_inner {
- InnerAttributeParsePolicy::Permitted
- } else {
- InnerAttributeParsePolicy::NotPermitted {
- reason: DEFAULT_UNEXPECTED_INNER_ATTR_ERR_MSG,
- saw_doc_comment: false,
- prev_attr_sp: None,
- }
- };
+ let inner_parse_policy =
+ if permit_inner { InnerAttrPolicy::Permitted } else { DEFAULT_INNER_ATTR_FORBIDDEN };
self.parse_attribute_with_inner_parse_policy(inner_parse_policy)
}
- /// The same as `parse_attribute`, except it takes in an `InnerAttributeParsePolicy`
+ /// The same as `parse_attribute`, except it takes in an `InnerAttrPolicy`
/// that prescribes how to handle inner attributes.
fn parse_attribute_with_inner_parse_policy(
&mut self,
- inner_parse_policy: InnerAttributeParsePolicy<'_>,
+ inner_parse_policy: InnerAttrPolicy<'_>,
) -> PResult<'a, ast::Attribute> {
debug!(
"parse_attribute_with_inner_parse_policy: inner_parse_policy={:?} self.token={:?}",
inner_parse_policy, self.token
);
- let (span, item, style) = match self.token.kind {
- token::Pound => {
- let lo = self.token.span;
- self.bump();
-
- if let InnerAttributeParsePolicy::Permitted = inner_parse_policy {
- self.expected_tokens.push(TokenType::Token(token::Not));
- }
-
- let style = if self.token == token::Not {
- self.bump();
- ast::AttrStyle::Inner
- } else {
- ast::AttrStyle::Outer
- };
+ let lo = self.token.span;
+ let (span, item, style) = if self.eat(&token::Pound) {
+ let style =
+ if self.eat(&token::Not) { ast::AttrStyle::Inner } else { ast::AttrStyle::Outer };
- self.expect(&token::OpenDelim(token::Bracket))?;
- let item = self.parse_attr_item()?;
- self.expect(&token::CloseDelim(token::Bracket))?;
- let hi = self.prev_token.span;
-
- let attr_sp = lo.to(hi);
-
- // Emit error if inner attribute is encountered and not permitted
- if style == ast::AttrStyle::Inner {
- if let InnerAttributeParsePolicy::NotPermitted {
- reason,
- saw_doc_comment,
- prev_attr_sp,
- } = inner_parse_policy
- {
- let prev_attr_note = if saw_doc_comment {
- "previous doc comment"
- } else {
- "previous outer attribute"
- };
-
- let mut diagnostic = self.struct_span_err(attr_sp, reason);
-
- if let Some(prev_attr_sp) = prev_attr_sp {
- diagnostic
- .span_label(attr_sp, "not permitted following an outer attibute")
- .span_label(prev_attr_sp, prev_attr_note);
- }
-
- diagnostic
- .note(
- "inner attributes, like `#![no_std]`, annotate the item \
- enclosing them, and are usually found at the beginning of \
- source files. Outer attributes, like `#[test]`, annotate the \
- item following them.",
- )
- .emit();
- }
- }
+ self.expect(&token::OpenDelim(token::Bracket))?;
+ let item = self.parse_attr_item()?;
+ self.expect(&token::CloseDelim(token::Bracket))?;
+ let attr_sp = lo.to(self.prev_token.span);
- (attr_sp, item, style)
- }
- _ => {
- let token_str = pprust::token_to_string(&self.token);
- let msg = &format!("expected `#`, found `{}`", token_str);
- return Err(self.struct_span_err(self.token.span, msg));
+ // Emit error if inner attribute is encountered and forbidden.
+ if style == ast::AttrStyle::Inner {
+ self.error_on_forbidden_inner_attr(attr_sp, inner_parse_policy);
}
+
+ (attr_sp, item, style)
+ } else {
+ let token_str = pprust::token_to_string(&self.token);
+ let msg = &format!("expected `#`, found `{}`", token_str);
+ return Err(self.struct_span_err(self.token.span, msg));
};
Ok(attr::mk_attr_from_item(style, item, span))
}
+ pub(super) fn error_on_forbidden_inner_attr(&self, attr_sp: Span, policy: InnerAttrPolicy<'_>) {
+ if let InnerAttrPolicy::Forbidden { reason, saw_doc_comment, prev_attr_sp } = policy {
+ let prev_attr_note =
+ if saw_doc_comment { "previous doc comment" } else { "previous outer attribute" };
+
+ let mut diag = self.struct_span_err(attr_sp, reason);
+
+ if let Some(prev_attr_sp) = prev_attr_sp {
+ diag.span_label(attr_sp, "not permitted following an outer attribute")
+ .span_label(prev_attr_sp, prev_attr_note);
+ }
+
+ diag.note(
+ "inner attributes, like `#![no_std]`, annotate the item enclosing them, \
+ and are usually found at the beginning of source files. \
+ Outer attributes, like `#[test]`, annotate the item following them.",
+ )
+ .emit();
+ }
+ }
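The note in the diagnostic above summarizes the placement rule it enforces. A user-side sketch contrasting the two attribute styles (illustrative only; `config`/`get` are made-up names):

```rust
// Outer attributes annotate the item that follows them; inner attributes
// (`#![...]`) annotate the enclosing item, so they must appear before any
// other content inside it.
mod config {
    #![allow(dead_code)] // inner: applies to the whole `config` module

    #[inline] // outer: applies only to `get`
    pub fn get() -> u32 {
        7
    }
}

fn main() {
    assert_eq!(config::get(), 7);
}
```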
+
/// Parses an inner part of an attribute (the path and following tokens).
/// The tokens must be either a delimited token stream, or empty token stream,
/// or the "legacy" key-value form.
crate fn parse_inner_attributes(&mut self) -> PResult<'a, Vec<ast::Attribute>> {
let mut attrs: Vec<ast::Attribute> = vec![];
loop {
- match self.token.kind {
- token::Pound => {
- // Don't even try to parse if it's not an inner attribute.
- if !self.look_ahead(1, |t| t == &token::Not) {
- break;
- }
-
- let attr = self.parse_attribute(true)?;
- assert_eq!(attr.style, ast::AttrStyle::Inner);
+ // Only try to parse if it is an inner attribute (has `!`).
+ if self.check(&token::Pound) && self.look_ahead(1, |t| t == &token::Not) {
+ let attr = self.parse_attribute(true)?;
+ assert_eq!(attr.style, ast::AttrStyle::Inner);
+ attrs.push(attr);
+ } else if let token::DocComment(s) = self.token.kind {
+ // We need to get the position of this token before we bump.
+ let attr = self.mk_doc_comment(s);
+ if attr.style == ast::AttrStyle::Inner {
attrs.push(attr);
+ self.bump();
+ } else {
+ break;
}
- token::DocComment(s) => {
- // We need to get the position of this token before we bump.
- let attr = self.mk_doc_comment(s);
- if attr.style == ast::AttrStyle::Inner {
- attrs.push(attr);
- self.bump();
- } else {
- break;
- }
- }
- _ => break,
+ } else {
+ break;
}
}
Ok(attrs)
debug!("checking if {:?} is unusuffixed", lit);
if !lit.kind.is_unsuffixed() {
- let msg = "suffixed literals are not allowed in attributes";
- self.struct_span_err(lit.span, msg)
+ self.struct_span_err(lit.span, "suffixed literals are not allowed in attributes")
.help(
- "instead of using a suffixed literal \
- (`1u8`, `1.0f32`, etc.), use an unsuffixed version \
- (`1`, `1.0`, etc.)",
+ "instead of using a suffixed literal (`1u8`, `1.0f32`, etc.), \
+ use an unsuffixed version (`1`, `1.0`, etc.)",
)
.emit();
}
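The suffix check above rejects literals like `1u8` inside attributes while accepting their unsuffixed forms. A user-side sketch of an accepted, unsuffixed attribute literal (illustrative; `old_api`/`new_api` are made-up names):

```rust
// `note = "..."` is an unsuffixed string literal, so it passes the check;
// something like `#[my_attr = 1u8]` would be rejected with the help message
// suggesting the unsuffixed `1` instead.
#[allow(dead_code)]
#[deprecated(note = "use `new_api` instead")]
fn old_api() -> i32 {
    1
}

fn new_api() -> i32 {
    1
}

fn main() {
    assert_eq!(new_api(), 1);
}
```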
use log::{debug, trace};
use std::mem;
-const TURBOFISH: &'static str = "use `::<...>` instead of `<...>` to specify type arguments";
+const TURBOFISH: &str = "use `::<...>` instead of `<...>` to specify type arguments";
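The `TURBOFISH` help text refers to the usual fix for generic arguments in expression position. A sketch of the suggested syntax (illustrative only):

```rust
fn main() {
    // In expression position, `Vec<i32>::new()` would parse `<` as a
    // comparison operator; the turbofish `::<...>` removes the ambiguity,
    // exactly as the diagnostic suggests.
    let empty = Vec::<i32>::new();
    let parsed = "42".parse::<i32>().unwrap();
    assert_eq!(empty.len(), 0);
    assert_eq!(parsed, 42);
}
```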
/// Creates a placeholder argument.
pub(super) fn dummy_arg(ident: Ident) -> Param {
}
pub enum Error {
- FileNotFoundForModule {
- mod_name: String,
- default_path: String,
- secondary_path: String,
- dir_path: String,
- },
- DuplicatePaths {
- mod_name: String,
- default_path: String,
- secondary_path: String,
- },
UselessDocComment,
}
impl Error {
fn span_err(self, sp: impl Into<MultiSpan>, handler: &Handler) -> DiagnosticBuilder<'_> {
match self {
- Error::FileNotFoundForModule {
- ref mod_name,
- ref default_path,
- ref secondary_path,
- ref dir_path,
- } => {
- let mut err = struct_span_err!(
- handler,
- sp,
- E0583,
- "file not found for module `{}`",
- mod_name,
- );
- err.help(&format!(
- "name the file either {} or {} inside the directory \"{}\"",
- default_path, secondary_path, dir_path,
- ));
- err
- }
- Error::DuplicatePaths { ref mod_name, ref default_path, ref secondary_path } => {
- let mut err = struct_span_err!(
- handler,
- sp,
- E0584,
- "file for module `{}` found at both {} and {}",
- mod_name,
- default_path,
- secondary_path,
- );
- err.help("delete or rename one of them to remove the ambiguity");
- err
- }
Error::UselessDocComment => {
let mut err = struct_span_err!(
handler,
TokenKind::CloseDelim(token::DelimToken::Brace),
TokenKind::CloseDelim(token::DelimToken::Paren),
];
- if let token::Ident(name, false) = self.normalized_token.kind {
- if Ident::new(name, self.normalized_token.span).is_raw_guess()
- && self.look_ahead(1, |t| valid_follow.contains(&t.kind))
+ match self.token.ident() {
+ Some((ident, false))
+ if ident.is_raw_guess()
+ && self.look_ahead(1, |t| valid_follow.contains(&t.kind)) =>
{
err.span_suggestion(
- self.normalized_token.span,
+ ident.span,
"you can escape reserved keywords to use them as identifiers",
- format!("r#{}", name),
+ format!("r#{}", ident.name),
Applicability::MaybeIncorrect,
);
}
+ _ => {}
}
if let Some(token_descr) = super::token_descr_opt(&self.token) {
err.span_label(self.token.span, format!("expected identifier, found {}", token_descr));
let msg = format!("expected `;`, found `{}`", super::token_descr(&self.token));
let appl = Applicability::MachineApplicable;
if self.token.span == DUMMY_SP || self.prev_token.span == DUMMY_SP {
- // Likely inside a macro, can't provide meaninful suggestions.
+ // Likely inside a macro, can't provide meaningful suggestions.
return self.expect(&token::Semi).map(drop);
} else if !sm.is_multiline(self.prev_token.span.until(self.token.span)) {
// The current token is in the same line as the prior token, not recoverable.
use super::{SemiColonMode, SeqSep, TokenExpectType};
use crate::maybe_recover_from_interpolated_ty_qpath;
-use rustc_ast::ast::{self, AttrStyle, AttrVec, CaptureBy, Field, Ident, Lit, DUMMY_NODE_ID};
-use rustc_ast::ast::{AnonConst, BinOp, BinOpKind, FnDecl, FnRetTy, Mac, Param, Ty, TyKind, UnOp};
+use rustc_ast::ast::{self, AttrStyle, AttrVec, CaptureBy, Field, Ident, Lit, UnOp, DUMMY_NODE_ID};
+use rustc_ast::ast::{AnonConst, BinOp, BinOpKind, FnDecl, FnRetTy, MacCall, Param, Ty, TyKind};
use rustc_ast::ast::{Arm, Async, BlockCheckMode, Expr, ExprKind, Label, Movability, RangeLimits};
use rustc_ast::ptr::P;
use rustc_ast::token::{self, Token, TokenKind};
AttrVec::new(),
));
}
- // N.B., `NtIdent(ident)` is normalized to `Ident` in `fn bump`.
_ => {}
};
}
fn parse_expr_catch_underscore(&mut self) -> PResult<'a, P<Expr>> {
match self.parse_expr() {
Ok(expr) => Ok(expr),
- Err(mut err) => match self.normalized_token.kind {
- token::Ident(name, false)
- if name == kw::Underscore && self.look_ahead(1, |t| t == &token::Comma) =>
+ Err(mut err) => match self.token.ident() {
+ Some((Ident { name: kw::Underscore, .. }, false))
+ if self.look_ahead(1, |t| t == &token::Comma) =>
{
// Special-case handling of `foo(_, _, _)`
err.emit();
///
/// Also performs recovery for `and` / `or` which are mistaken for `&&` and `||` respectively.
fn check_assoc_op(&self) -> Option<Spanned<AssocOp>> {
- Some(Spanned {
- node: match (AssocOp::from_token(&self.token), &self.normalized_token.kind) {
- (Some(op), _) => op,
- (None, token::Ident(sym::and, false)) => {
- self.error_bad_logical_op("and", "&&", "conjunction");
- AssocOp::LAnd
- }
- (None, token::Ident(sym::or, false)) => {
- self.error_bad_logical_op("or", "||", "disjunction");
- AssocOp::LOr
- }
- _ => return None,
- },
- span: self.normalized_token.span,
- })
+ let (op, span) = match (AssocOp::from_token(&self.token), self.token.ident()) {
+ (Some(op), _) => (op, self.token.span),
+ (None, Some((Ident { name: sym::and, span }, false))) => {
+ self.error_bad_logical_op("and", "&&", "conjunction");
+ (AssocOp::LAnd, span)
+ }
+ (None, Some((Ident { name: sym::or, span }, false))) => {
+ self.error_bad_logical_op("or", "||", "disjunction");
+ (AssocOp::LOr, span)
+ }
+ _ => return None,
+ };
+ Some(source_map::respan(span, op))
}
/// Error on `and` and `or` suggesting `&&` and `||` respectively.
let attrs = self.parse_or_use_outer_attributes(attrs)?;
let lo = self.token.span;
// Note: when adding new unary operators, don't forget to adjust TokenKind::can_begin_expr()
- let (hi, ex) = match self.normalized_token.kind {
+ let (hi, ex) = match self.token.uninterpolate().kind {
token::Not => self.parse_unary_expr(lo, UnOp::Not), // `!expr`
token::Tilde => self.recover_tilde_expr(lo), // `~expr`
token::BinOp(token::Minus) => self.parse_unary_expr(lo, UnOp::Neg), // `-expr`
}
fn is_mistaken_not_ident_negation(&self) -> bool {
- let token_cannot_continue_expr = |t: &Token| match t.kind {
+ let token_cannot_continue_expr = |t: &Token| match t.uninterpolate().kind {
// These tokens can start an expression after `!`, but
// can't continue an expression after an ident
token::Ident(name, is_raw) => token::ident_can_begin_expr(name, t.span, is_raw),
// Save the state of the parser before parsing type normally, in case there is a
// LessThan comparison after this cast.
let parser_snapshot_before_type = self.clone();
- match self.parse_ty_no_plus() {
- Ok(rhs) => Ok(mk_expr(self, rhs)),
+ let cast_expr = match self.parse_ty_no_plus() {
+ Ok(rhs) => mk_expr(self, rhs),
Err(mut type_err) => {
// Rewind to before attempting to parse the type with generics, to recover
// from situations like `x as usize < y` in which we first tried to parse
)
.emit();
- Ok(expr)
+ expr
}
Err(mut path_err) => {
// Couldn't parse as a path, return original error and parser state.
path_err.cancel();
mem::replace(self, parser_snapshot_after_type);
- Err(type_err)
+ return Err(type_err);
}
}
}
- }
+ };
+
+ self.parse_and_disallow_postfix_after_cast(cast_expr)
+ }
+
+ /// Parses postfix operators such as `.`, `?`, or index (`[]`) after a cast,
+ /// emitting an error for any it finds and returning the newly parsed tree.
+ /// The resulting parse tree for `&x as T[0]` has a precedence of `((&x) as T)[0]`.
+ fn parse_and_disallow_postfix_after_cast(
+ &mut self,
+ cast_expr: P<Expr>,
+ ) -> PResult<'a, P<Expr>> {
+ // Save the memory location of expr before parsing any following postfix operators.
+ // This will be compared with the memory location of the output expression.
+ // If they differ, we can assume we parsed another expression, because the existing expression is not reallocated.
+ let addr_before = &*cast_expr as *const _ as usize;
+ let span = cast_expr.span;
+ let with_postfix = self.parse_dot_or_call_expr_with_(cast_expr, span)?;
+ let changed = addr_before != &*with_postfix as *const _ as usize;
+
+ // Check if an illegal postfix operator has been added after the cast.
+ // If the resulting expression is not a cast, or has a different memory location, it is an illegal postfix operator.
+ if !matches!(with_postfix.kind, ExprKind::Cast(_, _) | ExprKind::Type(_, _)) || changed {
+ let msg = format!(
+ "casts cannot be followed by {}",
+ match with_postfix.kind {
+ ExprKind::Index(_, _) => "indexing",
+ ExprKind::Try(_) => "?",
+ ExprKind::Field(_, _) => "a field access",
+ ExprKind::MethodCall(_, _) => "a method call",
+ ExprKind::Call(_, _) => "a function call",
+ ExprKind::Await(_) => "`.await`",
+ _ => unreachable!("parse_dot_or_call_expr_with_ shouldn't produce this"),
+ }
+ );
+ let mut err = self.struct_span_err(span, &msg);
+ // If type ascription is "likely an error", the user will already be getting a useful
+ // help message, and doesn't need a second.
+ if self.last_type_ascription.map_or(false, |last_ascription| last_ascription.1) {
+ self.maybe_annotate_with_ascription(&mut err, false);
+ } else {
+ let suggestions = vec![
+ (span.shrink_to_lo(), "(".to_string()),
+ (span.shrink_to_hi(), ")".to_string()),
+ ];
+ err.multipart_suggestion(
+ "try surrounding the expression in parentheses",
+ suggestions,
+ Applicability::MachineApplicable,
+ );
+ }
+ err.emit();
+ };
+ Ok(with_postfix)
}
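The recovery above still parses the illegal postfix operator, then suggests parenthesizing the cast. A user-side sketch of the machine-applicable fix (illustrative only):

```rust
fn main() {
    let nums = [1i64, 2, 3];
    // `&nums as &[i64][0]` is rejected ("casts cannot be followed by
    // indexing"); the suggested fix surrounds the cast in parentheses:
    let first = (&nums as &[i64])[0];
    assert_eq!(first, 1);
}
```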
fn parse_assoc_op_ascribe(&mut self, lhs: P<Expr>, lhs_span: Span) -> PResult<'a, P<Expr>> {
/// Parse `& mut? <expr>` or `& raw [ const | mut ] <expr>`.
fn parse_borrow_expr(&mut self, lo: Span) -> PResult<'a, (Span, ExprKind)> {
self.expect_and()?;
+ let has_lifetime = self.token.is_lifetime() && self.look_ahead(1, |t| t != &token::Colon);
+ let lifetime = has_lifetime.then(|| self.expect_lifetime()); // For recovery, see below.
let (borrow_kind, mutbl) = self.parse_borrow_modifiers(lo);
let expr = self.parse_prefix_expr(None);
- let (span, expr) = self.interpolated_or_expr_span(expr)?;
- Ok((lo.to(span), ExprKind::AddrOf(borrow_kind, mutbl, expr)))
+ let (hi, expr) = self.interpolated_or_expr_span(expr)?;
+ let span = lo.to(hi);
+ if let Some(lt) = lifetime {
+ self.error_remove_borrow_lifetime(span, lt.ident.span);
+ }
+ Ok((span, ExprKind::AddrOf(borrow_kind, mutbl, expr)))
+ }
+
+ fn error_remove_borrow_lifetime(&self, span: Span, lt_span: Span) {
+ self.struct_span_err(span, "borrow expressions cannot be annotated with lifetimes")
+ .span_label(lt_span, "annotated with lifetime here")
+ .span_suggestion(
+ lt_span,
+ "remove the lifetime annotation",
+ String::new(),
+ Applicability::MachineApplicable,
+ )
+ .emit();
}
/// Parse `mut?` or `raw [ const | mut ]`.
expr.map(|mut expr| {
attrs.extend::<Vec<_>>(expr.attrs.into());
expr.attrs = attrs;
- self.error_attr_on_if_expr(&expr);
expr
})
})
}
- fn error_attr_on_if_expr(&self, expr: &Expr) {
- if let (ExprKind::If(..), [a0, ..]) = (&expr.kind, &*expr.attrs) {
- // Just point to the first attribute in there...
- self.struct_span_err(a0.span, "attributes are not yet allowed on `if` expressions")
- .emit();
- }
- }
-
fn parse_dot_or_call_expr_with_(&mut self, mut e: P<Expr>, lo: Span) -> PResult<'a, P<Expr>> {
loop {
if self.eat(&token::Question) {
}
fn parse_dot_suffix_expr(&mut self, lo: Span, base: P<Expr>) -> PResult<'a, P<Expr>> {
- match self.normalized_token.kind {
+ match self.token.uninterpolate().kind {
token::Ident(..) => self.parse_dot_suffix(base, lo),
token::Literal(token::Lit { kind: token::Integer, symbol, suffix }) => {
Ok(self.parse_tuple_field_access_expr(lo, base, symbol, suffix))
/// Assuming we have just parsed `.`, continue parsing into an expression.
fn parse_dot_suffix(&mut self, self_arg: P<Expr>, lo: Span) -> PResult<'a, P<Expr>> {
- if self.normalized_token.span.rust_2018() && self.eat_keyword(kw::Await) {
+ if self.token.uninterpolated_span().rust_2018() && self.eat_keyword(kw::Await) {
return self.mk_await_expr(self_arg, lo);
}
} else if self.eat_lt() {
let (qself, path) = self.parse_qpath(PathStyle::Expr)?;
Ok(self.mk_expr(lo.to(path.span), ExprKind::Path(Some(qself), path), attrs))
- } else if self.token.is_path_start() {
+ } else if self.check_path() {
self.parse_path_start_expr(attrs)
} else if self.check_keyword(kw::Move) || self.check_keyword(kw::Static) {
self.parse_closure_expr(attrs)
// | ^ expected expression
self.bump();
Ok(self.mk_expr_err(self.token.span))
- } else if self.normalized_token.span.rust_2018() {
+ } else if self.token.uninterpolated_span().rust_2018() {
// `Span::rust_2018()` is somewhat expensive; don't get it repeatedly.
if self.check_keyword(kw::Async) {
if self.is_async_block() {
};
let kind = if es.len() == 1 && !trailing_comma {
// `(e)` is parenthesized `e`.
- ExprKind::Paren(es.into_iter().nth(0).unwrap())
+ ExprKind::Paren(es.into_iter().next().unwrap())
} else {
// `(e,)` is a tuple with only one field, `e`.
ExprKind::Tup(es)
// `!`, as an operator, is prefix, so we know this isn't that.
let (hi, kind) = if self.eat(&token::Not) {
// MACRO INVOCATION expression
- let mac = Mac {
+ let mac = MacCall {
path,
args: self.parse_mac_args()?,
prior_type_ascription: self.last_type_ascription,
};
- (self.prev_token.span, ExprKind::Mac(mac))
+ (self.prev_token.span, ExprKind::MacCall(mac))
} else if self.check(&token::OpenDelim(token::Brace)) {
if let Some(expr) = self.maybe_parse_struct_expr(lo, &path, &attrs) {
return expr;
self.maybe_recover_from_bad_qpath(expr, true)
}
+ /// Parse `'label: $expr`. The label is already parsed.
fn parse_labeled_expr(&mut self, label: Label, attrs: AttrVec) -> PResult<'a, P<Expr>> {
let lo = label.ident.span;
- self.expect(&token::Colon)?;
- if self.eat_keyword(kw::While) {
- return self.parse_while_expr(Some(label), lo, attrs);
- }
- if self.eat_keyword(kw::For) {
- return self.parse_for_expr(Some(label), lo, attrs);
- }
- if self.eat_keyword(kw::Loop) {
- return self.parse_loop_expr(Some(label), lo, attrs);
- }
- if self.token == token::OpenDelim(token::Brace) {
- return self.parse_block_expr(Some(label), lo, BlockCheckMode::Default, attrs);
+ let label = Some(label);
+ let ate_colon = self.eat(&token::Colon);
+ let expr = if self.eat_keyword(kw::While) {
+ self.parse_while_expr(label, lo, attrs)
+ } else if self.eat_keyword(kw::For) {
+ self.parse_for_expr(label, lo, attrs)
+ } else if self.eat_keyword(kw::Loop) {
+ self.parse_loop_expr(label, lo, attrs)
+ } else if self.check(&token::OpenDelim(token::Brace)) || self.token.is_whole_block() {
+ self.parse_block_expr(label, lo, BlockCheckMode::Default, attrs)
+ } else {
+ let msg = "expected `while`, `for`, `loop` or `{` after a label";
+ self.struct_span_err(self.token.span, msg).span_label(self.token.span, msg).emit();
+ // Continue as an expression in an effort to recover on `'label: non_block_expr`.
+ self.parse_expr()
+ }?;
+
+ if !ate_colon {
+ self.error_labeled_expr_must_be_followed_by_colon(lo, expr.span);
}
- let msg = "expected `while`, `for`, `loop` or `{` after a label";
- self.struct_span_err(self.token.span, msg).span_label(self.token.span, msg).emit();
- // Continue as an expression in an effort to recover on `'label: non_block_expr`.
- self.parse_expr()
+ Ok(expr)
+ }
+
+ fn error_labeled_expr_must_be_followed_by_colon(&self, lo: Span, span: Span) {
+ self.struct_span_err(span, "labeled expression must be followed by `:`")
+ .span_label(lo, "the label")
+ .span_suggestion_short(
+ lo.shrink_to_hi(),
+ "add `:` after the label",
+ ": ".to_string(),
+ Applicability::MachineApplicable,
+ )
+ .note("labels are used before loops and blocks, allowing e.g., `break 'label` to them")
+ .emit();
}
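As a standalone sketch of the surface syntax this branch of the parser handles: a label must be followed by `:` and then `while`, `for`, `loop`, or a block, and `break 'label value` may target a labeled `loop`. The function name below is illustrative, not from the patch:

```rust
// `'label:` must prefix `while`, `for`, `loop`, or a block; `break 'label
// value` exits the labeled `loop` with that value, even from a nested loop.
fn find_first_ge(limit: u32, xs: &[u32]) -> u32 {
    'outer: loop {
        for &x in xs {
            if x >= limit {
                break 'outer x; // value-break targets the labeled `loop`
            }
        }
        break 0; // nothing found
    }
}

fn main() {
    assert_eq!(find_first_ge(3, &[1, 2, 5, 7]), 5);
    assert_eq!(find_first_ge(9, &[1, 2]), 0);
}
```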
/// Recover on the syntax `do catch { ... }` suggesting `try { ... }` instead.
opt_label: Option<Label>,
lo: Span,
blk_mode: BlockCheckMode,
- outer_attrs: AttrVec,
+ mut attrs: AttrVec,
) -> PResult<'a, P<Expr>> {
if let Some(label) = opt_label {
self.sess.gated_spans.gate(sym::label_break_value, label.ident.span);
}
- self.expect(&token::OpenDelim(token::Brace))?;
-
- let mut attrs = outer_attrs;
- attrs.extend(self.parse_inner_attributes()?);
+ if self.token.is_whole_block() {
+ self.struct_span_err(self.token.span, "cannot use a `block` macro fragment here")
+ .span_label(lo.to(self.token.span), "the `block` fragment is within this context")
+ .emit();
+ }
- let blk = self.parse_block_tail(lo, blk_mode)?;
+ let (inner_attrs, blk) = self.parse_block_common(lo, blk_mode)?;
+ attrs.extend(inner_attrs);
Ok(self.mk_expr(blk.span, ExprKind::Block(blk, opt_label), attrs))
}
let movability =
if self.eat_keyword(kw::Static) { Movability::Static } else { Movability::Movable };
- let asyncness =
- if self.normalized_token.span.rust_2018() { self.parse_asyncness() } else { Async::No };
- if asyncness.is_async() {
+ let asyncness = if self.token.uninterpolated_span().rust_2018() {
+ self.parse_asyncness()
+ } else {
+ Async::No
+ };
+ if let Async::Yes { span, .. } = asyncness {
// Feature-gate `async ||` closures.
- self.sess.gated_spans.gate(sym::async_closure, self.normalized_prev_token.span);
+ self.sess.gated_spans.gate(sym::async_closure, span);
}
let capture_clause = self.parse_capture_clause();
))
}
- /// Parses an optional `move` prefix to a closure lke construct.
+ /// Parses an optional `move` prefix to a closure-like construct.
fn parse_capture_clause(&mut self) -> CaptureBy {
if self.eat_keyword(kw::Move) { CaptureBy::Value } else { CaptureBy::Ref }
}
let thn = if self.eat_keyword(kw::Else) || !cond.returns() {
self.error_missing_if_cond(lo, cond.span)
} else {
+ let attrs = self.parse_outer_attributes()?; // For recovery.
let not_block = self.token != token::OpenDelim(token::Brace);
- self.parse_block().map_err(|mut err| {
+ let block = self.parse_block().map_err(|mut err| {
if not_block {
err.span_label(lo, "this `if` expression has a condition, but no block");
}
err
- })?
+ })?;
+ self.error_on_if_block_attrs(lo, false, block.span, &attrs);
+ block
};
let els = if self.eat_keyword(kw::Else) { Some(self.parse_else_expr()?) } else { None };
Ok(self.mk_expr(lo.to(self.prev_token.span), ExprKind::If(cond, thn, els), attrs))
/// Parses an `else { ... }` expression (`else` token already eaten).
fn parse_else_expr(&mut self) -> PResult<'a, P<Expr>> {
- if self.eat_keyword(kw::If) {
- self.parse_if_expr(AttrVec::new())
+ let ctx_span = self.prev_token.span; // `else`
+ let attrs = self.parse_outer_attributes()?; // For recovery.
+ let expr = if self.eat_keyword(kw::If) {
+ self.parse_if_expr(AttrVec::new())?
} else {
let blk = self.parse_block()?;
- Ok(self.mk_expr(blk.span, ExprKind::Block(blk, None), AttrVec::new()))
- }
+ self.mk_expr(blk.span, ExprKind::Block(blk, None), AttrVec::new())
+ };
+ self.error_on_if_block_attrs(ctx_span, true, expr.span, &attrs);
+ Ok(expr)
+ }
+
+ fn error_on_if_block_attrs(
+ &self,
+ ctx_span: Span,
+ is_ctx_else: bool,
+ branch_span: Span,
+ attrs: &[ast::Attribute],
+ ) {
+ let (span, last) = match attrs {
+ [] => return,
+ [x0 @ xn] | [x0, .., xn] => (x0.span.to(xn.span), xn.span),
+ };
+ let ctx = if is_ctx_else { "else" } else { "if" };
+ self.struct_span_err(last, "outer attributes are not allowed on `if` and `else` branches")
+ .span_label(branch_span, "the attributes are attached to this branch")
+ .span_label(ctx_span, format!("the branch belongs to this `{}`", ctx))
+ .span_suggestion(
+ span,
+ "remove the attributes",
+ String::new(),
+ Applicability::MachineApplicable,
+ )
+ .emit();
}
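The `[x0 @ xn] | [x0, .., xn]` arm above is a compact way to take the first and last element of a non-empty slice in one pattern: `x0 @ xn` matches a one-element slice and binds *both* names to that single element. A minimal self-contained sketch of the trick (function name illustrative; requires bindings-after-`@`, stable since Rust 1.56):

```rust
// `[x0 @ xn]` binds both names to the lone element of a one-element slice,
// so the one-element and many-element cases share a single match arm.
fn first_and_last(xs: &[i32]) -> Option<(i32, i32)> {
    match xs {
        [] => None,
        [x0 @ xn] | [x0, .., xn] => Some((*x0, *xn)),
    }
}

fn main() {
    assert_eq!(first_and_last(&[]), None);
    assert_eq!(first_and_last(&[7]), Some((7, 7)));
    assert_eq!(first_and_last(&[1, 2, 3]), Some((1, 3)));
}
```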
/// Parses `for <src_pat> in <src_expr> <src_loop_block>` (`for` token already eaten).
fn is_try_block(&self) -> bool {
self.token.is_keyword(kw::Try) &&
self.look_ahead(1, |t| *t == token::OpenDelim(token::Brace)) &&
- self.normalized_token.span.rust_2018() &&
+ self.token.uninterpolated_span().rust_2018() &&
// Prevent `while try {} {}`, `if try {} {} else {}`, etc.
!self.restrictions.contains(Restrictions::NO_STRUCT_LITERAL)
}
/// Use in case of error after field-looking code: `S { foo: () with a }`.
fn find_struct_error_after_field_looking_code(&self) -> Option<Field> {
- if let token::Ident(name, _) = self.normalized_token.kind {
- if !self.token.is_reserved_ident() && self.look_ahead(1, |t| *t == token::Colon) {
- return Some(ast::Field {
- ident: Ident::new(name, self.normalized_token.span),
+ match self.token.ident() {
+ Some((ident, is_raw))
+ if (is_raw || !ident.is_reserved())
+ && self.look_ahead(1, |t| *t == token::Colon) =>
+ {
+ Some(ast::Field {
+ ident,
span: self.token.span,
expr: self.mk_expr_err(self.token.span),
is_shorthand: false,
attrs: AttrVec::new(),
id: DUMMY_NODE_ID,
is_placeholder: false,
- });
+ })
}
+ _ => None,
}
- None
}
fn recover_struct_comma_after_dotdot(&mut self, span: Span) {
use crate::maybe_whole;
use rustc_ast::ast::{self, AttrStyle, AttrVec, Attribute, Ident, DUMMY_NODE_ID};
-use rustc_ast::ast::{AssocItem, AssocItemKind, ForeignItemKind, Item, ItemKind};
-use rustc_ast::ast::{
- Async, Const, Defaultness, IsAuto, PathSegment, Unsafe, UseTree, UseTreeKind,
-};
-use rustc_ast::ast::{
- BindingMode, Block, FnDecl, FnSig, Mac, MacArgs, MacDelimiter, Param, SelfKind,
-};
+use rustc_ast::ast::{AssocItem, AssocItemKind, ForeignItemKind, Item, ItemKind, Mod};
+use rustc_ast::ast::{Async, Const, Defaultness, IsAuto, Mutability, Unsafe, UseTree, UseTreeKind};
+use rustc_ast::ast::{BindingMode, Block, FnDecl, FnSig, Param, SelfKind};
use rustc_ast::ast::{EnumDef, Generics, StructField, TraitRef, Ty, TyKind, Variant, VariantData};
-use rustc_ast::ast::{FnHeader, ForeignItem, Mutability, Visibility, VisibilityKind};
+use rustc_ast::ast::{FnHeader, ForeignItem, PathSegment, Visibility, VisibilityKind};
+use rustc_ast::ast::{MacArgs, MacCall, MacDelimiter};
use rustc_ast::ptr::P;
-use rustc_ast::token;
+use rustc_ast::token::{self, TokenKind};
use rustc_ast::tokenstream::{DelimSpan, TokenStream, TokenTree};
use rustc_ast_pretty::pprust;
use rustc_errors::{struct_span_err, Applicability, PResult, StashKey};
use rustc_span::symbol::{kw, sym, Symbol};
use log::debug;
+use std::convert::TryFrom;
use std::mem;
+impl<'a> Parser<'a> {
+ /// Parses a source module as a crate. This is the main entry point for the parser.
+ pub fn parse_crate_mod(&mut self) -> PResult<'a, ast::Crate> {
+ let lo = self.token.span;
+ let (module, attrs) = self.parse_mod(&token::Eof)?;
+ let span = lo.to(self.token.span);
+ let proc_macros = Vec::new(); // Filled in by `proc_macro_harness::inject()`.
+ Ok(ast::Crate { attrs, module, span, proc_macros })
+ }
+
+ /// Parses a `mod <foo> { ... }` or `mod <foo>;` item.
+ fn parse_item_mod(&mut self, attrs: &mut Vec<Attribute>) -> PResult<'a, ItemInfo> {
+ let id = self.parse_ident()?;
+ let (module, mut inner_attrs) = if self.eat(&token::Semi) {
+ Default::default()
+ } else {
+ self.expect(&token::OpenDelim(token::Brace))?;
+ self.parse_mod(&token::CloseDelim(token::Brace))?
+ };
+ attrs.append(&mut inner_attrs);
+ Ok((id, ItemKind::Mod(module)))
+ }
+
+ /// Parses the contents of a module (inner attributes followed by module items).
+ pub fn parse_mod(&mut self, term: &TokenKind) -> PResult<'a, (Mod, Vec<Attribute>)> {
+ let lo = self.token.span;
+ let attrs = self.parse_inner_attributes()?;
+ let module = self.parse_mod_items(term, lo)?;
+ Ok((module, attrs))
+ }
+
+ /// Given a termination token, parses all of the items in a module.
+ fn parse_mod_items(&mut self, term: &TokenKind, inner_lo: Span) -> PResult<'a, Mod> {
+ let mut items = vec![];
+ while let Some(item) = self.parse_item()? {
+ items.push(item);
+ self.maybe_consume_incorrect_semicolon(&items);
+ }
+
+ if !self.eat(term) {
+ let token_str = super::token_descr(&self.token);
+ if !self.maybe_consume_incorrect_semicolon(&items) {
+ let msg = &format!("expected item, found {}", token_str);
+ let mut err = self.struct_span_err(self.token.span, msg);
+ err.span_label(self.token.span, "expected item");
+ return Err(err);
+ }
+ }
+
+ let hi = if self.token.span.is_dummy() { inner_lo } else { self.prev_token.span };
+
+ Ok(Mod { inner: inner_lo.to(hi), items, inline: true })
+ }
+}
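After this move, `parse_item_mod` only distinguishes the two source forms of a module item; for the out-of-line form it produces an empty `Default::default()` and leaves file loading to a later expansion step. The two forms, as plain Rust (module name illustrative):

```rust
// The two module forms `parse_item_mod` distinguishes:
//   `mod name { ... }` -- inline body, parsed immediately;
//   `mod name;`        -- out-of-line, body loaded from a file elsewhere.
mod inline_example {
    pub fn answer() -> u32 {
        40 + 2
    }
}

fn main() {
    assert_eq!(inline_example::answer(), 42);
}
```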
+
pub(super) type ItemInfo = (Ident, ItemKind);
impl<'a> Parser<'a> {
} else if vis.node.is_pub() && self.isnt_macro_invocation() {
self.recover_missing_kw_before_item()?;
return Ok(None);
- } else if macros_allowed && self.token.is_path_start() {
+ } else if macros_allowed && self.check_path() {
// MACRO INVOCATION ITEM
- (Ident::invalid(), ItemKind::Mac(self.parse_item_macro(vis)?))
+ (Ident::invalid(), ItemKind::MacCall(self.parse_item_macro(vis)?))
} else {
return Ok(None);
};
}
/// Parses an item macro, e.g., `item!();`.
- fn parse_item_macro(&mut self, vis: &Visibility) -> PResult<'a, Mac> {
+ fn parse_item_macro(&mut self, vis: &Visibility) -> PResult<'a, MacCall> {
let path = self.parse_path(PathStyle::Mod)?; // `foo::bar`
self.expect(&token::Not)?; // `!`
let args = self.parse_mac_args()?; // `( .. )` or `[ .. ]` (followed by `;`), or `{ .. }`.
self.eat_semi_for_macro_if_needed(&args);
self.complain_if_pub_macro(vis, false);
- Ok(Mac { path, args, prior_type_ascription: self.last_type_ascription })
+ Ok(MacCall { path, args, prior_type_ascription: self.last_type_ascription })
}
/// Recover if we parsed attributes and expected an item but there was none.
fn recover_attrs_no_item(&mut self, attrs: &[Attribute]) -> PResult<'a, ()> {
let (start, end) = match attrs {
[] => return Ok(()),
- [x0] => (x0, x0),
- [x0, .., xn] => (x0, xn),
+ [x0 @ xn] | [x0, .., xn] => (x0, xn),
};
let msg = if end.is_doc_comment() {
"expected item after doc comment"
self.token.is_keyword(kw::Async) && self.is_keyword_ahead(1, &[kw::Fn])
}
+ fn parse_polarity(&mut self) -> ast::ImplPolarity {
+ // Disambiguate `impl !Trait for Type { ... }` and `impl ! { ... }` for the never type.
+ if self.check(&token::Not) && self.look_ahead(1, |t| t.can_begin_type()) {
+ self.bump(); // `!`
+ ast::ImplPolarity::Negative(self.prev_token.span)
+ } else {
+ ast::ImplPolarity::Positive
+ }
+ }
+
/// Parses an implementation item.
///
/// ```
self.sess.gated_spans.gate(sym::const_trait_impl, span);
}
- // Disambiguate `impl !Trait for Type { ... }` and `impl ! { ... }` for the never type.
- let polarity = if self.check(&token::Not) && self.look_ahead(1, |t| t.can_begin_type()) {
- self.bump(); // `!`
- ast::ImplPolarity::Negative
- } else {
- ast::ImplPolarity::Positive
- };
+ let polarity = self.parse_polarity();
// Parse both types and traits as a type, then reinterpret if necessary.
let err_path = |span| ast::Path::from_ident(Ident::new(kw::Invalid, span));
&& self.look_ahead(1, |t| t.is_non_raw_ident_where(|i| i.name != kw::As))
{
self.bump(); // `default`
- Defaultness::Default(self.normalized_prev_token.span)
+ Defaultness::Default(self.prev_token.uninterpolated_span())
} else {
Defaultness::Final
}
/// Parses associated items.
fn parse_assoc_item(&mut self, req_name: ReqName) -> PResult<'a, Option<Option<P<AssocItem>>>> {
Ok(self.parse_item_(req_name)?.map(|Item { attrs, id, span, vis, ident, kind, tokens }| {
- let kind = match kind {
- ItemKind::Mac(a) => AssocItemKind::Macro(a),
- ItemKind::Fn(a, b, c, d) => AssocItemKind::Fn(a, b, c, d),
- ItemKind::TyAlias(a, b, c, d) => AssocItemKind::TyAlias(a, b, c, d),
- ItemKind::Const(a, b, c) => AssocItemKind::Const(a, b, c),
- ItemKind::Static(a, _, b) => {
- self.struct_span_err(span, "associated `static` items are not allowed").emit();
- AssocItemKind::Const(Defaultness::Final, a, b)
- }
- _ => return self.error_bad_item_kind(span, &kind, "`trait`s or `impl`s"),
+ let kind = match AssocItemKind::try_from(kind) {
+ Ok(kind) => kind,
+ Err(kind) => match kind {
+ ItemKind::Static(a, _, b) => {
+ self.struct_span_err(span, "associated `static` items are not allowed")
+ .emit();
+ AssocItemKind::Const(Defaultness::Final, a, b)
+ }
+ _ => return self.error_bad_item_kind(span, &kind, "`trait`s or `impl`s"),
+ },
};
Some(P(Item { attrs, id, span, vis, ident, kind, tokens }))
}))
}
fn parse_ident_or_underscore(&mut self) -> PResult<'a, ast::Ident> {
- match self.normalized_token.kind {
- token::Ident(name @ kw::Underscore, false) => {
+ match self.token.ident() {
+ Some((ident @ Ident { name: kw::Underscore, .. }, false)) => {
self.bump();
- Ok(Ident::new(name, self.normalized_prev_token.span))
+ Ok(ident)
}
_ => self.parse_ident(),
}
/// Parses a foreign item (one in an `extern { ... }` block).
pub fn parse_foreign_item(&mut self) -> PResult<'a, Option<Option<P<ForeignItem>>>> {
Ok(self.parse_item_(|_| true)?.map(|Item { attrs, id, span, vis, ident, kind, tokens }| {
- let kind = match kind {
- ItemKind::Mac(a) => ForeignItemKind::Macro(a),
- ItemKind::Fn(a, b, c, d) => ForeignItemKind::Fn(a, b, c, d),
- ItemKind::TyAlias(a, b, c, d) => ForeignItemKind::TyAlias(a, b, c, d),
- ItemKind::Static(a, b, c) => ForeignItemKind::Static(a, b, c),
- ItemKind::Const(_, a, b) => {
- self.error_on_foreign_const(span, ident);
- ForeignItemKind::Static(a, Mutability::Not, b)
- }
- _ => return self.error_bad_item_kind(span, &kind, "`extern` blocks"),
+ let kind = match ForeignItemKind::try_from(kind) {
+ Ok(kind) => kind,
+ Err(kind) => match kind {
+ ItemKind::Const(_, a, b) => {
+ self.error_on_foreign_const(span, ident);
+ ForeignItemKind::Static(a, Mutability::Not, b)
+ }
+ _ => return self.error_bad_item_kind(span, &kind, "`extern` blocks"),
+ },
};
Some(P(Item { attrs, id, span, vis, ident, kind, tokens }))
}))
};
self.sess.gated_spans.gate(sym::decl_macro, lo.to(self.prev_token.span));
- Ok((ident, ItemKind::MacroDef(ast::MacroDef { body, legacy: false })))
+ Ok((ident, ItemKind::MacroDef(ast::MacroDef { body, macro_rules: false })))
}
/// Is this unambiguously the start of a `macro_rules! foo` item definition?
&& self.look_ahead(2, |t| t.is_ident())
}
- /// Parses a legacy `macro_rules! foo { ... }` declarative macro.
+ /// Parses a `macro_rules! foo { ... }` declarative macro.
fn parse_item_macro_rules(&mut self, vis: &Visibility) -> PResult<'a, ItemInfo> {
self.expect_keyword(kw::MacroRules)?; // `macro_rules`
self.expect(&token::Not)?; // `!`
self.eat_semi_for_macro_if_needed(&body);
self.complain_if_pub_macro(vis, true);
- Ok((ident, ItemKind::MacroDef(ast::MacroDef { body, legacy: true })))
+ Ok((ident, ItemKind::MacroDef(ast::MacroDef { body, macro_rules: true })))
}
/// Item macro invocations or `macro_rules!` definitions need inherited visibility.
/// This can either be `;` when there's no body,
/// or e.g. a block when the function is a provided one.
fn parse_fn_body(&mut self, attrs: &mut Vec<Attribute>) -> PResult<'a, Option<P<Block>>> {
- let (inner_attrs, body) = match self.token.kind {
- token::Semi => {
- self.bump();
- (Vec::new(), None)
- }
- token::OpenDelim(token::Brace) => {
- let (attrs, body) = self.parse_inner_attrs_and_block()?;
- (attrs, Some(body))
- }
- token::Interpolated(ref nt) => match **nt {
- token::NtBlock(..) => {
- let (attrs, body) = self.parse_inner_attrs_and_block()?;
- (attrs, Some(body))
- }
- _ => return self.expected_semi_or_open_brace(),
- },
- _ => return self.expected_semi_or_open_brace(),
+ let (inner_attrs, body) = if self.check(&token::Semi) {
+ self.bump(); // `;`
+ (Vec::new(), None)
+ } else if self.check(&token::OpenDelim(token::Brace)) || self.token.is_whole_block() {
+ self.parse_inner_attrs_and_block().map(|(attrs, body)| (attrs, Some(body)))?
+ } else if self.token.kind == token::Eq {
+ // Recover `fn foo() = $expr;`.
+ self.bump(); // `=`
+ let eq_sp = self.prev_token.span;
+ let _ = self.parse_expr()?;
+ self.expect_semi()?; // `;`
+ let span = eq_sp.to(self.prev_token.span);
+ self.struct_span_err(span, "function body cannot be `= expression;`")
+ .multipart_suggestion(
+ "surround the expression with `{` and `}` instead of `=` and `;`",
+ vec![(eq_sp, "{".to_string()), (self.prev_token.span, " }".to_string())],
+ Applicability::MachineApplicable,
+ )
+ .emit();
+ (Vec::new(), Some(self.mk_block_err(span)))
+ } else {
+ return self.expected_semi_or_open_brace();
};
attrs.extend(inner_attrs);
Ok(body)
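The `fn foo() = $expr;` recovery above emits a machine-applicable suggestion replacing `=` and `;` with braces; the accepted form it steers users toward looks like this (names illustrative):

```rust
// Accepted form produced by the recovery's suggestion: a block body in place
// of the rejected `fn foo() = 2 + 2;`.
fn foo() -> i32 {
    2 + 2
}

fn main() {
    assert_eq!(foo(), 4);
}
```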
let is_name_required = match self.token.kind {
token::DotDotDot => false,
- _ => req_name(self.normalized_token.span.edition()),
+ _ => req_name(self.token.span.edition()),
};
let (pat, ty) = if is_name_required || self.is_named_param() {
debug!("parse_param_general parse_pat (is_name_required:{})", is_name_required);
/// Returns the parsed optional self parameter and whether a self shortcut was used.
fn parse_self_param(&mut self) -> PResult<'a, Option<Param>> {
// Extract an identifier *after* having confirmed that the token is one.
- let expect_self_ident = |this: &mut Self| {
- match this.normalized_token.kind {
- // Preserve hygienic context.
- token::Ident(name, _) => {
- this.bump();
- Ident::new(name, this.normalized_prev_token.span)
- }
- _ => unreachable!(),
+ let expect_self_ident = |this: &mut Self| match this.token.ident() {
+ Some((ident, false)) => {
+ this.bump();
+ ident
}
+ _ => unreachable!(),
};
// Is `self` `n` tokens ahead?
let is_isolated_self = |this: &Self, n| {
// Only a limited set of initial token sequences is considered `self` parameters; anything
// else is parsed as a normal function parameter list, so some lookahead is required.
let eself_lo = self.token.span;
- let (eself, eself_ident, eself_hi) = match self.normalized_token.kind {
+ let (eself, eself_ident, eself_hi) = match self.token.uninterpolate().kind {
token::BinOp(token::And) => {
let eself = if is_isolated_self(self, 1) {
// `&self`
pub mod attr;
mod expr;
mod item;
-mod module;
-pub use module::{ModulePath, ModulePathSuccess};
mod pat;
mod path;
mod ty;
use diagnostics::Error;
use crate::lexer::UnmatchedBrace;
-use crate::{Directory, DirectoryOwnership};
use log::debug;
use rustc_ast::ast::DUMMY_NODE_ID;
use rustc_ast_pretty::pprust;
use rustc_errors::{struct_span_err, Applicability, DiagnosticBuilder, FatalError, PResult};
use rustc_session::parse::ParseSess;
-use rustc_span::source_map::respan;
+use rustc_span::source_map::{respan, Span, DUMMY_SP};
use rustc_span::symbol::{kw, sym, Symbol};
-use rustc_span::{FileName, Span, DUMMY_SP};
-use std::path::PathBuf;
use std::{cmp, mem, slice};
bitflags::bitflags! {
#[derive(Clone)]
pub struct Parser<'a> {
pub sess: &'a ParseSess,
- /// The current non-normalized token.
+ /// The current token.
pub token: Token,
- /// The current normalized token.
- /// "Normalized" means that some interpolated tokens
- /// (`$i: ident` and `$l: lifetime` meta-variables) are replaced
- /// with non-interpolated identifier and lifetime tokens they refer to.
- /// Use this if you need to check for `token::Ident` or `token::Lifetime` specifically,
- /// this also includes edition checks for edition-specific keyword identifiers.
- pub normalized_token: Token,
- /// The previous non-normalized token.
+ /// The previous token.
pub prev_token: Token,
- /// The previous normalized token.
- /// Use this if you need to check for `token::Ident` or `token::Lifetime` specifically,
- /// this also includes edition checks for edition-specific keyword identifiers.
- pub normalized_prev_token: Token,
restrictions: Restrictions,
- /// Used to determine the path to externally loaded source files.
- pub(super) directory: Directory,
- /// `true` to parse sub-modules in other files.
- // Public for rustfmt usage.
- pub recurse_into_file_modules: bool,
/// Name of the root module this parser originated from. If `None`, then the
/// name is not known. This does not change while the parser is descending
/// into modules, and sub-parsers have new values for this name.
expected_tokens: Vec<TokenType>,
token_cursor: TokenCursor,
desugar_doc_comments: bool,
- /// `true` we should configure out of line modules as we parse.
- // Public for rustfmt usage.
- pub cfg_mods: bool,
/// This field is used to keep track of how many left angle brackets we have seen. This is
/// required in order to detect extra leading left angle brackets (`<` characters) and error
/// appropriately.
pub fn new(
sess: &'a ParseSess,
tokens: TokenStream,
- directory: Option<Directory>,
- recurse_into_file_modules: bool,
desugar_doc_comments: bool,
subparser_name: Option<&'static str>,
) -> Self {
let mut parser = Parser {
sess,
token: Token::dummy(),
- normalized_token: Token::dummy(),
prev_token: Token::dummy(),
- normalized_prev_token: Token::dummy(),
restrictions: Restrictions::empty(),
- recurse_into_file_modules,
- directory: Directory {
- path: PathBuf::new(),
- ownership: DirectoryOwnership::Owned { relative: None },
- },
root_module_name: None,
expected_tokens: Vec::new(),
token_cursor: TokenCursor {
stack: Vec::new(),
},
desugar_doc_comments,
- cfg_mods: true,
unmatched_angle_bracket_count: 0,
max_angle_bracket_count: 0,
unclosed_delims: Vec::new(),
// Make parser point to the first token.
parser.bump();
- if let Some(directory) = directory {
- parser.directory = directory;
- } else if !parser.token.span.is_dummy() {
- if let Some(FileName::Real(path)) =
- &sess.source_map().lookup_char_pos(parser.token.span.lo()).file.unmapped_path
- {
- if let Some(directory_path) = path.parent() {
- parser.directory.path = directory_path.to_path_buf();
- }
- }
- }
-
parser
}
}
fn parse_ident_common(&mut self, recover: bool) -> PResult<'a, ast::Ident> {
- match self.normalized_token.kind {
- token::Ident(name, _) => {
- if self.token.is_reserved_ident() {
+ match self.token.ident() {
+ Some((ident, is_raw)) => {
+ if !is_raw && ident.is_reserved() {
let mut err = self.expected_ident_found();
if recover {
err.emit();
}
}
self.bump();
- Ok(Ident::new(name, self.normalized_prev_token.span))
+ Ok(ident)
}
_ => Err(match self.prev_token.kind {
TokenKind::DocComment(..) => {
Some((first, second)) if first == expected => {
let first_span = self.sess.source_map().start_point(self.token.span);
let second_span = self.token.span.with_lo(first_span.hi());
- self.set_token(Token::new(first, first_span));
+ self.token = Token::new(first, first_span);
self.bump_with(Token::new(second, second_span));
true
}
self.parse_delim_comma_seq(token::Paren, f)
}
- // Interpolated identifier (`$i: ident`) and lifetime (`$l: lifetime`)
- // tokens are replaced with usual identifier and lifetime tokens,
- // so the former are never encountered during normal parsing.
- crate fn set_token(&mut self, token: Token) {
- self.token = token;
- self.normalized_token = match &self.token.kind {
- token::Interpolated(nt) => match **nt {
- token::NtIdent(ident, is_raw) => {
- Token::new(token::Ident(ident.name, is_raw), ident.span)
- }
- token::NtLifetime(ident) => Token::new(token::Lifetime(ident.name), ident.span),
- _ => self.token.clone(),
- },
- _ => self.token.clone(),
- }
- }
-
/// Advance the parser by one token using the provided token as the next one.
fn bump_with(&mut self, next_token: Token) {
// Bumping after EOF is a bad sign, usually an infinite loop.
}
// Update the current and previous tokens.
- self.prev_token = self.token.take();
- self.normalized_prev_token = self.normalized_token.take();
- self.set_token(next_token);
+ self.prev_token = mem::replace(&mut self.token, next_token);
// Diagnostics.
self.expected_tokens.clear();
/// Parses asyncness: `async` or nothing.
fn parse_asyncness(&mut self) -> Async {
if self.eat_keyword(kw::Async) {
- let span = self.normalized_prev_token.span;
+ let span = self.prev_token.uninterpolated_span();
Async::Yes { span, closure_id: DUMMY_NODE_ID, return_impl_trait_id: DUMMY_NODE_ID }
} else {
Async::No
/// Parses unsafety: `unsafe` or nothing.
fn parse_unsafety(&mut self) -> Unsafe {
if self.eat_keyword(kw::Unsafe) {
- Unsafe::Yes(self.normalized_prev_token.span)
+ Unsafe::Yes(self.prev_token.uninterpolated_span())
} else {
Unsafe::No
}
/// Parses constness: `const` or nothing.
fn parse_constness(&mut self) -> Const {
if self.eat_keyword(kw::Const) {
- Const::Yes(self.normalized_prev_token.span)
+ Const::Yes(self.prev_token.uninterpolated_span())
} else {
Const::No
}
&mut self.token_cursor.frame,
self.token_cursor.stack.pop().unwrap(),
);
- self.set_token(Token::new(TokenKind::CloseDelim(frame.delim), frame.span.close));
+ self.token = Token::new(TokenKind::CloseDelim(frame.delim), frame.span.close);
self.bump();
TokenTree::Delimited(frame.span, frame.delim, frame.tree_cursor.stream)
}
+++ /dev/null
-use super::diagnostics::Error;
-use super::item::ItemInfo;
-use super::Parser;
-
-use crate::{new_sub_parser_from_file, DirectoryOwnership};
-
-use rustc_ast::ast::{self, Attribute, Crate, Ident, ItemKind, Mod};
-use rustc_ast::attr;
-use rustc_ast::token::{self, TokenKind};
-use rustc_errors::PResult;
-use rustc_span::source_map::{FileName, SourceMap, Span, DUMMY_SP};
-use rustc_span::symbol::sym;
-
-use std::path::{self, Path, PathBuf};
-
-/// Information about the path to a module.
-// Public for rustfmt usage.
-pub struct ModulePath {
- name: String,
- path_exists: bool,
- pub result: Result<ModulePathSuccess, Error>,
-}
-
-// Public for rustfmt usage.
-pub struct ModulePathSuccess {
- pub path: PathBuf,
- pub directory_ownership: DirectoryOwnership,
-}
-
-impl<'a> Parser<'a> {
- /// Parses a source module as a crate. This is the main entry point for the parser.
- pub fn parse_crate_mod(&mut self) -> PResult<'a, Crate> {
- let lo = self.token.span;
- let krate = Ok(ast::Crate {
- attrs: self.parse_inner_attributes()?,
- module: self.parse_mod_items(&token::Eof, lo)?,
- span: lo.to(self.token.span),
- // Filled in by proc_macro_harness::inject()
- proc_macros: Vec::new(),
- });
- krate
- }
-
- /// Parses a `mod <foo> { ... }` or `mod <foo>;` item.
- pub(super) fn parse_item_mod(&mut self, attrs: &mut Vec<Attribute>) -> PResult<'a, ItemInfo> {
- let in_cfg = crate::config::process_configure_mod(self.sess, self.cfg_mods, attrs);
-
- let id_span = self.token.span;
- let id = self.parse_ident()?;
- let (module, mut inner_attrs) = if self.eat(&token::Semi) {
- if in_cfg && self.recurse_into_file_modules {
- // This mod is in an external file. Let's go get it!
- let ModulePathSuccess { path, directory_ownership } =
- self.submod_path(id, &attrs, id_span)?;
- self.eval_src_mod(path, directory_ownership, id.to_string(), id_span)?
- } else {
- (ast::Mod { inner: DUMMY_SP, items: Vec::new(), inline: false }, Vec::new())
- }
- } else {
- let old_directory = self.directory.clone();
- self.push_directory(id, &attrs);
-
- self.expect(&token::OpenDelim(token::Brace))?;
- let mod_inner_lo = self.token.span;
- let inner_attrs = self.parse_inner_attributes()?;
- let module = self.parse_mod_items(&token::CloseDelim(token::Brace), mod_inner_lo)?;
-
- self.directory = old_directory;
- (module, inner_attrs)
- };
- attrs.append(&mut inner_attrs);
- Ok((id, ItemKind::Mod(module)))
- }
-
- /// Given a termination token, parses all of the items in a module.
- fn parse_mod_items(&mut self, term: &TokenKind, inner_lo: Span) -> PResult<'a, Mod> {
- let mut items = vec![];
- while let Some(item) = self.parse_item()? {
- items.push(item);
- self.maybe_consume_incorrect_semicolon(&items);
- }
-
- if !self.eat(term) {
- let token_str = super::token_descr(&self.token);
- if !self.maybe_consume_incorrect_semicolon(&items) {
- let msg = &format!("expected item, found {}", token_str);
- let mut err = self.struct_span_err(self.token.span, msg);
- err.span_label(self.token.span, "expected item");
- return Err(err);
- }
- }
-
- let hi = if self.token.span.is_dummy() { inner_lo } else { self.prev_token.span };
-
- Ok(Mod { inner: inner_lo.to(hi), items, inline: true })
- }
-
- fn submod_path(
- &mut self,
- id: ast::Ident,
- outer_attrs: &[Attribute],
- id_sp: Span,
- ) -> PResult<'a, ModulePathSuccess> {
- if let Some(path) = Parser::submod_path_from_attr(outer_attrs, &self.directory.path) {
- return Ok(ModulePathSuccess {
- directory_ownership: match path.file_name().and_then(|s| s.to_str()) {
- // All `#[path]` files are treated as though they are a `mod.rs` file.
- // This means that `mod foo;` declarations inside `#[path]`-included
- // files are siblings,
- //
- // Note that this will produce weirdness when a file named `foo.rs` is
- // `#[path]` included and contains a `mod foo;` declaration.
- // If you encounter this, it's your own darn fault :P
- Some(_) => DirectoryOwnership::Owned { relative: None },
- _ => DirectoryOwnership::UnownedViaMod,
- },
- path,
- });
- }
-
- let relative = match self.directory.ownership {
- DirectoryOwnership::Owned { relative } => relative,
- DirectoryOwnership::UnownedViaBlock | DirectoryOwnership::UnownedViaMod => None,
- };
- let paths =
- Parser::default_submod_path(id, relative, &self.directory.path, self.sess.source_map());
-
- match self.directory.ownership {
- DirectoryOwnership::Owned { .. } => {
- paths.result.map_err(|err| self.span_fatal_err(id_sp, err))
- }
- DirectoryOwnership::UnownedViaBlock => {
- let msg = "Cannot declare a non-inline module inside a block \
- unless it has a path attribute";
- let mut err = self.struct_span_err(id_sp, msg);
- if paths.path_exists {
- let msg = format!(
- "Maybe `use` the module `{}` instead of redeclaring it",
- paths.name
- );
- err.span_note(id_sp, &msg);
- }
- Err(err)
- }
- DirectoryOwnership::UnownedViaMod => {
- let mut err =
- self.struct_span_err(id_sp, "cannot declare a new module at this location");
- if !id_sp.is_dummy() {
- let src_path = self.sess.source_map().span_to_filename(id_sp);
- if let FileName::Real(src_path) = src_path {
- if let Some(stem) = src_path.file_stem() {
- let mut dest_path = src_path.clone();
- dest_path.set_file_name(stem);
- dest_path.push("mod.rs");
- err.span_note(
- id_sp,
- &format!(
- "maybe move this module `{}` to its own \
- directory via `{}`",
- src_path.display(),
- dest_path.display()
- ),
- );
- }
- }
- }
- if paths.path_exists {
- err.span_note(
- id_sp,
- &format!(
- "... or maybe `use` the module `{}` instead \
- of possibly redeclaring it",
- paths.name
- ),
- );
- }
- Err(err)
- }
- }
- }
-
- // Public for rustfmt usage.
- pub fn submod_path_from_attr(attrs: &[Attribute], dir_path: &Path) -> Option<PathBuf> {
- if let Some(s) = attr::first_attr_value_str_by_name(attrs, sym::path) {
- let s = s.as_str();
-
- // On windows, the base path might have the form
- // `\\?\foo\bar` in which case it does not tolerate
- // mixed `/` and `\` separators, so canonicalize
- // `/` to `\`.
- #[cfg(windows)]
- let s = s.replace("/", "\\");
- Some(dir_path.join(&*s))
- } else {
- None
- }
- }
-
- /// Returns a path to a module.
- // Public for rustfmt usage.
- pub fn default_submod_path(
- id: ast::Ident,
- relative: Option<ast::Ident>,
- dir_path: &Path,
- source_map: &SourceMap,
- ) -> ModulePath {
- // If we're in a foo.rs file instead of a mod.rs file,
- // we need to look for submodules in
- // `./foo/<id>.rs` and `./foo/<id>/mod.rs` rather than
- // `./<id>.rs` and `./<id>/mod.rs`.
- let relative_prefix_string;
- let relative_prefix = if let Some(ident) = relative {
- relative_prefix_string = format!("{}{}", ident.name, path::MAIN_SEPARATOR);
- &relative_prefix_string
- } else {
- ""
- };
-
- let mod_name = id.name.to_string();
- let default_path_str = format!("{}{}.rs", relative_prefix, mod_name);
- let secondary_path_str =
- format!("{}{}{}mod.rs", relative_prefix, mod_name, path::MAIN_SEPARATOR);
- let default_path = dir_path.join(&default_path_str);
- let secondary_path = dir_path.join(&secondary_path_str);
- let default_exists = source_map.file_exists(&default_path);
- let secondary_exists = source_map.file_exists(&secondary_path);
-
- let result = match (default_exists, secondary_exists) {
- (true, false) => Ok(ModulePathSuccess {
- path: default_path,
- directory_ownership: DirectoryOwnership::Owned { relative: Some(id) },
- }),
- (false, true) => Ok(ModulePathSuccess {
- path: secondary_path,
- directory_ownership: DirectoryOwnership::Owned { relative: None },
- }),
- (false, false) => Err(Error::FileNotFoundForModule {
- mod_name: mod_name.clone(),
- default_path: default_path_str,
- secondary_path: secondary_path_str,
- dir_path: dir_path.display().to_string(),
- }),
- (true, true) => Err(Error::DuplicatePaths {
- mod_name: mod_name.clone(),
- default_path: default_path_str,
- secondary_path: secondary_path_str,
- }),
- };
-
- ModulePath { name: mod_name, path_exists: default_exists || secondary_exists, result }
- }
-
- /// Reads a module from a source file.
- fn eval_src_mod(
- &mut self,
- path: PathBuf,
- directory_ownership: DirectoryOwnership,
- name: String,
- id_sp: Span,
- ) -> PResult<'a, (Mod, Vec<Attribute>)> {
- let mut included_mod_stack = self.sess.included_mod_stack.borrow_mut();
- if let Some(i) = included_mod_stack.iter().position(|p| *p == path) {
- let mut err = String::from("circular modules: ");
- let len = included_mod_stack.len();
- for p in &included_mod_stack[i..len] {
- err.push_str(&p.to_string_lossy());
- err.push_str(" -> ");
- }
- err.push_str(&path.to_string_lossy());
- return Err(self.struct_span_err(id_sp, &err[..]));
- }
- included_mod_stack.push(path.clone());
- drop(included_mod_stack);
-
- let mut p0 =
- new_sub_parser_from_file(self.sess, &path, directory_ownership, Some(name), id_sp);
- p0.cfg_mods = self.cfg_mods;
- let mod_inner_lo = p0.token.span;
- let mod_attrs = p0.parse_inner_attributes()?;
- let mut m0 = p0.parse_mod_items(&token::Eof, mod_inner_lo)?;
- m0.inline = false;
- self.sess.included_mod_stack.borrow_mut().pop();
- Ok((m0, mod_attrs))
- }
-
- fn push_directory(&mut self, id: Ident, attrs: &[Attribute]) {
- if let Some(path) = attr::first_attr_value_str_by_name(attrs, sym::path) {
- self.directory.path.push(&*path.as_str());
- self.directory.ownership = DirectoryOwnership::Owned { relative: None };
- } else {
- // We have to push on the current module name in the case of relative
- // paths in order to ensure that any additional module paths from inline
- // `mod x { ... }` come after the relative extension.
- //
- // For example, a `mod z { ... }` inside `x/y.rs` should set the current
- // directory path to `/x/y/z`, not `/x/z` with a relative offset of `y`.
- if let DirectoryOwnership::Owned { relative } = &mut self.directory.ownership {
- if let Some(ident) = relative.take() {
- // remove the relative offset
- self.directory.path.push(&*ident.as_str());
- }
- }
- self.directory.path.push(&*id.as_str());
- }
- }
-}
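The block removed above includes `default_submod_path`, whose `(default_exists, secondary_exists)` match decides where `mod foo;` loads its file from. As a rough, self-contained sketch of that lookup order (the helper name and result strings here are invented for illustration and are not rustc's API): `mod foo;` in a directory prefers `foo.rs`, falls back to `foo/mod.rs`, and errors if both or neither exist.

```rust
/// Sketch of the lookup order implemented by `default_submod_path` above:
/// `mod foo;` resolves to `<dir>/foo.rs` first, then `<dir>/foo/mod.rs`.
/// Both existing is ambiguous; neither existing is "file not found".
fn resolve_submodule(
    default_exists: bool,   // does `<dir>/foo.rs` exist?
    secondary_exists: bool, // does `<dir>/foo/mod.rs` exist?
) -> Result<&'static str, &'static str> {
    match (default_exists, secondary_exists) {
        (true, false) => Ok("foo.rs"),
        (false, true) => Ok("foo/mod.rs"),
        (false, false) => Err("file not found for module `foo`"),
        (true, true) => Err("file for module `foo` found at both foo.rs and foo/mod.rs"),
    }
}

fn main() {
    // Mirrors the four arms of the match in `default_submod_path`.
    assert_eq!(resolve_submodule(true, false), Ok("foo.rs"));
    assert_eq!(resolve_submodule(false, true), Ok("foo/mod.rs"));
    assert!(resolve_submodule(false, false).is_err());
    assert!(resolve_submodule(true, true).is_err());
}
```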
use super::{Parser, PathStyle};
use crate::{maybe_recover_from_interpolated_ty_qpath, maybe_whole};
-use rustc_ast::ast::{
- self, AttrVec, Attribute, FieldPat, Mac, Pat, PatKind, RangeEnd, RangeSyntax,
-};
-use rustc_ast::ast::{BindingMode, Expr, ExprKind, Ident, Mutability, Path, QSelf};
+use rustc_ast::ast::{self, AttrVec, Attribute, FieldPat, MacCall, Pat, PatKind, RangeEnd};
+use rustc_ast::ast::{BindingMode, Expr, ExprKind, Ident, Mutability, Path, QSelf, RangeSyntax};
use rustc_ast::mut_visit::{noop_visit_mac, noop_visit_pat, MutVisitor};
use rustc_ast::ptr::P;
use rustc_ast::token;
/// Note that there are more tokens such as `@` for which we know that the `|`
/// is an illegal parse. However, the user's intent is less clear in that case.
fn recover_trailing_vert(&mut self, lo: Option<Span>) -> bool {
- let is_end_ahead = self.look_ahead(1, |token| match &token.kind {
+ let is_end_ahead = self.look_ahead(1, |token| match &token.uninterpolate().kind {
token::FatArrow // e.g. `a | => 0,`.
| token::Ident(kw::If, false) // e.g. `a | if expr`.
| token::Eq // e.g. `let a | = 0`.
// Here, `(pat,)` is a tuple pattern.
// For backward compatibility, `(..)` is a tuple pattern as well.
Ok(if fields.len() == 1 && !(trailing_comma || fields[0].is_rest()) {
- PatKind::Paren(fields.into_iter().nth(0).unwrap())
+ PatKind::Paren(fields.into_iter().next().unwrap())
} else {
PatKind::Tuple(fields)
})
fn make_all_value_bindings_mutable(pat: &mut P<Pat>) -> bool {
struct AddMut(bool);
impl MutVisitor for AddMut {
- fn visit_mac(&mut self, mac: &mut Mac) {
+ fn visit_mac(&mut self, mac: &mut MacCall) {
noop_visit_mac(mac, self);
}
fn parse_pat_mac_invoc(&mut self, path: Path) -> PResult<'a, PatKind> {
self.bump();
let args = self.parse_mac_args()?;
- let mac = Mac { path, args, prior_type_ascription: self.last_type_ascription };
- Ok(PatKind::Mac(mac))
+ let mac = MacCall { path, args, prior_type_ascription: self.last_type_ascription };
+ Ok(PatKind::MacCall(mac))
}
fn fatal_unexpected_non_pat(
}
fn parse_pat_range_end(&mut self) -> PResult<'a, P<Expr>> {
- if self.token.is_path_start() {
+ if self.check_path() {
let lo = self.token.span;
let (qself, path) = if self.eat_lt() {
// Parse a qualified path
}
pub(super) fn parse_path_segment_ident(&mut self) -> PResult<'a, Ident> {
- match self.normalized_token.kind {
- token::Ident(name, _) if name.is_path_segment_keyword() => {
+ match self.token.ident() {
+ Some((ident, false)) if ident.is_path_segment_keyword() => {
self.bump();
- Ok(Ident::new(name, self.normalized_prev_token.span))
+ Ok(ident)
}
_ => self.parse_ident(),
}
+use super::attr::DEFAULT_INNER_ATTR_FORBIDDEN;
use super::diagnostics::Error;
use super::expr::LhsExpr;
use super::pat::GateOr;
use super::path::PathStyle;
use super::{BlockMode, Parser, Restrictions, SemiColonMode};
use crate::maybe_whole;
-use crate::DirectoryOwnership;
use rustc_ast::ast;
-use rustc_ast::ast::{AttrStyle, AttrVec, Attribute, Mac, MacStmtStyle};
+use rustc_ast::ast::{AttrStyle, AttrVec, Attribute, MacCall, MacStmtStyle};
use rustc_ast::ast::{Block, BlockCheckMode, Expr, ExprKind, Local, Stmt, StmtKind, DUMMY_NODE_ID};
use rustc_ast::ptr::P;
use rustc_ast::token::{self, TokenKind};
self.bump(); // `var`
let msg = "write `let` instead of `var` to introduce a new variable";
self.recover_stmt_local(lo, attrs.into(), msg, "let")?
- } else if self.token.is_path_start()
- && !self.token.is_qpath_start()
- && !self.is_path_start_item()
- {
+ } else if self.check_path() && !self.token.is_qpath_start() && !self.is_path_start_item() {
// We have avoided contextual keywords like `union`, items with `crate` visibility,
// or `auto trait` items. We aim to parse an arbitrary path `a::b` but not something
// that starts like a path (1 token), but is in fact not a path.
// Also, we avoid stealing syntax from `parse_item_`.
self.parse_stmt_path_start(lo, attrs)?
- } else if let Some(item) = self.parse_stmt_item(attrs.clone())? {
+ } else if let Some(item) = self.parse_item_common(attrs.clone(), false, true, |_| true)? {
// FIXME: Bad copy of attrs
self.mk_stmt(lo.to(item.span), StmtKind::Item(P(item)))
- } else if self.token == token::Semi {
+ } else if self.eat(&token::Semi) {
// Do not attempt to parse an expression if we're done here.
self.error_outer_attrs(&attrs);
- self.bump();
- let mut last_semi = lo;
- while self.token == token::Semi {
- last_semi = self.token.span;
- self.bump();
- }
- // We are encoding a string of semicolons as an an empty tuple that spans
- // the excess semicolons to preserve this info until the lint stage.
- let kind = StmtKind::Semi(self.mk_expr(
- lo.to(last_semi),
- ExprKind::Tup(Vec::new()),
- AttrVec::new(),
- ));
- self.mk_stmt(lo.to(last_semi), kind)
+ self.mk_stmt(lo, StmtKind::Empty)
} else if self.token != token::CloseDelim(token::Brace) {
// Remainder are line-expr stmts.
let e = self.parse_expr_res(Restrictions::STMT_EXPR, Some(attrs.into()))?;
Ok(Some(stmt))
}
- fn parse_stmt_item(&mut self, attrs: Vec<Attribute>) -> PResult<'a, Option<ast::Item>> {
- let old = mem::replace(&mut self.directory.ownership, DirectoryOwnership::UnownedViaBlock);
- let item = self.parse_item_common(attrs, false, true, |_| true)?;
- self.directory.ownership = old;
- Ok(item)
- }
-
fn parse_stmt_path_start(&mut self, lo: Span, attrs: Vec<Attribute>) -> PResult<'a, Stmt> {
let path = self.parse_path(PathStyle::Expr)?;
let style =
if delim == token::Brace { MacStmtStyle::Braces } else { MacStmtStyle::NoBraces };
- let mac = Mac { path, args, prior_type_ascription: self.last_type_ascription };
+ let mac = MacCall { path, args, prior_type_ascription: self.last_type_ascription };
let kind = if delim == token::Brace || self.token == token::Semi || self.token == token::Eof
{
- StmtKind::Mac(P((mac, style, attrs)))
+ StmtKind::MacCall(P((mac, style, attrs)))
} else {
// Since none of the above applied, this is an expression statement macro.
- let e = self.mk_expr(lo.to(hi), ExprKind::Mac(mac), AttrVec::new());
+ let e = self.mk_expr(lo.to(hi), ExprKind::MacCall(mac), AttrVec::new());
let e = self.maybe_recover_from_bad_qpath(e, true)?;
let e = self.parse_dot_or_call_expr_with(e, lo, attrs)?;
let e = self.parse_assoc_expr_with(0, LhsExpr::AlreadyParsed(e))?;
/// Error on outer attributes in this context.
/// Also error if the previous token was a doc comment.
fn error_outer_attrs(&self, attrs: &[Attribute]) {
- if !attrs.is_empty() {
- if matches!(self.prev_token.kind, TokenKind::DocComment(..)) {
- self.span_fatal_err(self.prev_token.span, Error::UselessDocComment).emit();
+ if let [.., last] = attrs {
+ if last.is_doc_comment() {
+ self.span_fatal_err(last.span, Error::UselessDocComment).emit();
} else if attrs.iter().any(|a| a.style == AttrStyle::Outer) {
- self.struct_span_err(self.token.span, "expected statement after outer attribute")
- .emit();
+ self.struct_span_err(last.span, "expected statement after outer attribute").emit();
}
}
}
/// Parses a block. No inner attributes are allowed.
pub fn parse_block(&mut self) -> PResult<'a, P<Block>> {
- maybe_whole!(self, NtBlock, |x| x);
-
- let lo = self.token.span;
-
- if !self.eat(&token::OpenDelim(token::Brace)) {
- return self.error_block_no_opening_brace();
+ let (attrs, block) = self.parse_inner_attrs_and_block()?;
+ if let [.., last] = &*attrs {
+ self.error_on_forbidden_inner_attr(last.span, DEFAULT_INNER_ATTR_FORBIDDEN);
}
-
- self.parse_block_tail(lo, BlockCheckMode::Default)
+ Ok(block)
}
fn error_block_no_opening_brace<T>(&mut self) -> PResult<'a, T> {
//
// which is valid in other languages, but not Rust.
match self.parse_stmt_without_recovery() {
- Ok(Some(stmt)) => {
+ // If the next token is an open brace (e.g., `if a b {`), the place-
+ // inside-a-block suggestion would be more likely wrong than right.
+ Ok(Some(_))
if self.look_ahead(1, |t| t == &token::OpenDelim(token::Brace))
- || do_not_suggest_help
- {
- // If the next token is an open brace (e.g., `if a b {`), the place-
- // inside-a-block suggestion would be more likely wrong than right.
- e.span_label(sp, "expected `{`");
- return Err(e);
- }
- let stmt_span = if self.eat(&token::Semi) {
+ || do_not_suggest_help => {}
+ Ok(Some(stmt)) => {
+ let stmt_own_line = self.sess.source_map().is_line_before_span_empty(sp);
+ let stmt_span = if stmt_own_line && self.eat(&token::Semi) {
// Expand the span to include the semicolon.
stmt.span.with_hi(self.prev_token.span.hi())
} else {
/// Parses a block. Inner attributes are allowed.
pub(super) fn parse_inner_attrs_and_block(
&mut self,
+ ) -> PResult<'a, (Vec<Attribute>, P<Block>)> {
+ self.parse_block_common(self.token.span, BlockCheckMode::Default)
+ }
+
+ /// Parses a block. Inner attributes are allowed.
+ pub(super) fn parse_block_common(
+ &mut self,
+ lo: Span,
+ blk_mode: BlockCheckMode,
) -> PResult<'a, (Vec<Attribute>, P<Block>)> {
maybe_whole!(self, NtBlock, |x| (Vec::new(), x));
- let lo = self.token.span;
- self.expect(&token::OpenDelim(token::Brace))?;
- Ok((self.parse_inner_attributes()?, self.parse_block_tail(lo, BlockCheckMode::Default)?))
+ if !self.eat(&token::OpenDelim(token::Brace)) {
+ return self.error_block_no_opening_brace();
+ }
+
+ Ok((self.parse_inner_attributes()?, self.parse_block_tail(lo, blk_mode)?))
}
/// Parses the rest of a block expression or function body.
/// Precondition: already parsed the '{'.
- pub(super) fn parse_block_tail(
- &mut self,
- lo: Span,
- s: BlockCheckMode,
- ) -> PResult<'a, P<Block>> {
+ fn parse_block_tail(&mut self, lo: Span, s: BlockCheckMode) -> PResult<'a, P<Block>> {
let mut stmts = vec![];
while !self.eat(&token::CloseDelim(token::Brace)) {
if self.token == token::Eof {
self.expect_semi()?;
eat_semi = false;
}
+ StmtKind::Empty => eat_semi = false,
_ => {}
}
use crate::{maybe_recover_from_interpolated_ty_qpath, maybe_whole};
use rustc_ast::ast::{self, BareFnTy, FnRetTy, GenericParam, Lifetime, MutTy, Ty, TyKind};
-use rustc_ast::ast::{
- GenericBound, GenericBounds, PolyTraitRef, TraitBoundModifier, TraitObjectSyntax,
-};
-use rustc_ast::ast::{Mac, Mutability};
+use rustc_ast::ast::{GenericBound, GenericBounds, MacCall, Mutability};
+use rustc_ast::ast::{PolyTraitRef, TraitBoundModifier, TraitObjectSyntax};
use rustc_ast::ptr::P;
use rustc_ast::token::{self, Token, TokenKind};
use rustc_errors::{pluralize, struct_span_err, Applicability, PResult};
} else {
let path = self.parse_path(PathStyle::Type)?;
let parse_plus = allow_plus == AllowPlus::Yes && self.check_plus();
- self.parse_remaining_bounds(lifetime_defs, path, lo, parse_plus)?
+ self.parse_remaining_bounds_path(lifetime_defs, path, lo, parse_plus)?
}
} else if self.eat_keyword(kw::Impl) {
self.parse_impl_ty(&mut impl_dyn_multi)?
} else if self.is_explicit_dyn_type() {
self.parse_dyn_ty(&mut impl_dyn_multi)?
- } else if self.check(&token::Question)
- || self.check_lifetime() && self.look_ahead(1, |t| t.is_like_plus())
- {
- // Bound list (trait object type)
- let bounds = self.parse_generic_bounds_common(allow_plus, None)?;
- TyKind::TraitObject(bounds, TraitObjectSyntax::None)
} else if self.eat_lt() {
// Qualified path
let (qself, path) = self.parse_qpath(PathStyle::Type)?;
TyKind::Path(Some(qself), path)
- } else if self.token.is_path_start() {
+ } else if self.check_path() {
self.parse_path_start_ty(lo, allow_plus)?
+ } else if self.can_begin_bound() {
+ self.parse_bare_trait_object(lo, allow_plus)?
} else if self.eat(&token::DotDotDot) {
if allow_c_variadic == AllowCVariadic::Yes {
TyKind::CVarArgs
})?;
if ts.len() == 1 && !trailing {
- let ty = ts.into_iter().nth(0).unwrap().into_inner();
+ let ty = ts.into_iter().next().unwrap().into_inner();
let maybe_bounds = allow_plus == AllowPlus::Yes && self.token.is_like_plus();
match ty.kind {
// `(TY_BOUND_NOPAREN) + BOUND + ...`.
TyKind::Path(None, path) if maybe_bounds => {
- self.parse_remaining_bounds(Vec::new(), path, lo, true)
+ self.parse_remaining_bounds_path(Vec::new(), path, lo, true)
}
- TyKind::TraitObject(mut bounds, TraitObjectSyntax::None)
+ TyKind::TraitObject(bounds, TraitObjectSyntax::None)
if maybe_bounds && bounds.len() == 1 && !trailing_plus =>
{
- let path = match bounds.remove(0) {
- GenericBound::Trait(pt, ..) => pt.trait_ref.path,
- GenericBound::Outlives(..) => {
- return Err(self.struct_span_err(
- ty.span,
- "expected trait bound, not lifetime bound",
- ));
- }
- };
- self.parse_remaining_bounds(Vec::new(), path, lo, true)
+ self.parse_remaining_bounds(bounds, true)
}
// `(TYPE)`
_ => Ok(TyKind::Paren(P(ty))),
}
}
- fn parse_remaining_bounds(
+ fn parse_bare_trait_object(&mut self, lo: Span, allow_plus: AllowPlus) -> PResult<'a, TyKind> {
+ let lt_no_plus = self.check_lifetime() && !self.look_ahead(1, |t| t.is_like_plus());
+ let bounds = self.parse_generic_bounds_common(allow_plus, None)?;
+ if lt_no_plus {
+ self.struct_span_err(lo, "lifetime in trait object type must be followed by `+`").emit()
+ }
+ Ok(TyKind::TraitObject(bounds, TraitObjectSyntax::None))
+ }
+
+ fn parse_remaining_bounds_path(
&mut self,
generic_params: Vec<GenericParam>,
path: ast::Path,
lo: Span,
parse_plus: bool,
) -> PResult<'a, TyKind> {
- assert_ne!(self.token, token::Question);
-
let poly_trait_ref = PolyTraitRef::new(generic_params, path, lo.to(self.prev_token.span));
- let mut bounds = vec![GenericBound::Trait(poly_trait_ref, TraitBoundModifier::None)];
- if parse_plus {
+ let bounds = vec![GenericBound::Trait(poly_trait_ref, TraitBoundModifier::None)];
+ self.parse_remaining_bounds(bounds, parse_plus)
+ }
+
+ /// Parse the remainder of a bare trait object type given an already parsed list.
+ fn parse_remaining_bounds(
+ &mut self,
+ mut bounds: GenericBounds,
+ plus: bool,
+ ) -> PResult<'a, TyKind> {
+ assert_ne!(self.token, token::Question);
+ if plus {
self.eat_plus(); // `+`, or `+=` gets split and `+` is discarded
bounds.append(&mut self.parse_generic_bounds(Some(self.prev_token.span))?);
}
/// Is a `dyn B0 + ... + Bn` type allowed here?
fn is_explicit_dyn_type(&mut self) -> bool {
self.check_keyword(kw::Dyn)
- && (self.normalized_token.span.rust_2018()
+ && (self.token.uninterpolated_span().rust_2018()
|| self.look_ahead(1, |t| {
t.can_begin_bound() && !can_continue_type_after_non_fn_ident(t)
}))
let path = self.parse_path(PathStyle::Type)?;
if self.eat(&token::Not) {
// Macro invocation in type position
- Ok(TyKind::Mac(Mac {
+ Ok(TyKind::MacCall(MacCall {
path,
args: self.parse_mac_args()?,
prior_type_ascription: self.last_type_ascription,
}))
} else if allow_plus == AllowPlus::Yes && self.check_plus() {
// `Trait1 + Trait2 + 'a`
- self.parse_remaining_bounds(Vec::new(), path, lo, true)
+ self.parse_remaining_bounds_path(Vec::new(), path, lo, true)
} else {
// Just a type path.
Ok(TyKind::Path(None, path))
})
}
-crate fn check_meta_bad_delim(sess: &ParseSess, span: DelimSpan, delim: MacDelimiter, msg: &str) {
+pub fn check_meta_bad_delim(sess: &ParseSess, span: DelimSpan, delim: MacDelimiter, msg: &str) {
if let ast::MacDelimiter::Parenthesis = delim {
return;
}
rustc_target = { path = "../librustc_target" }
rustc_ast = { path = "../librustc_ast" }
rustc_span = { path = "../librustc_span" }
+rustc_trait_selection = { path = "../librustc_trait_selection" }
fn target_from_impl_item<'tcx>(tcx: TyCtxt<'tcx>, impl_item: &hir::ImplItem<'_>) -> Target {
match impl_item.kind {
hir::ImplItemKind::Const(..) => Target::AssocConst,
- hir::ImplItemKind::Method(..) => {
+ hir::ImplItemKind::Fn(..) => {
let parent_hir_id = tcx.hir().get_parent_item(impl_item.hir_id);
let containing_item = tcx.hir().expect_item(parent_hir_id);
let containing_impl_is_for_trait = match &containing_item.kind {
impl Visitor<'tcx> for CheckAttrVisitor<'tcx> {
type Map = Map<'tcx>;
- fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, Self::Map> {
- NestedVisitorMap::OnlyBodies(&self.tcx.hir())
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
+ NestedVisitorMap::OnlyBodies(self.tcx.hir())
}
fn visit_item(&mut self, item: &'tcx Item<'tcx>) {
//! through, but errors for structured control flow in a `const` should be emitted here.
use rustc::hir::map::Map;
-use rustc::hir::Hir;
-use rustc::session::config::nightly_options;
-use rustc::session::parse::feature_err;
use rustc::ty::query::Providers;
use rustc::ty::TyCtxt;
use rustc_ast::ast::Mutability;
use rustc_hir as hir;
use rustc_hir::def_id::DefId;
use rustc_hir::intravisit::{self, NestedVisitorMap, Visitor};
+use rustc_session::config::nightly_options;
+use rustc_session::parse::feature_err;
use rustc_span::{sym, Span, Symbol};
use std::fmt;
}
impl ConstKind {
- fn for_body(body: &hir::Body<'_>, hir_map: Hir<'_>) -> Option<Self> {
+ fn for_body(body: &hir::Body<'_>, hir_map: Map<'_>) -> Option<Self> {
let is_const_fn = |id| hir_map.fn_sig_by_hir_id(id).unwrap().header.is_const();
let owner = hir_map.body_owner(body.id());
impl<'tcx> Visitor<'tcx> for CheckConstVisitor<'tcx> {
type Map = Map<'tcx>;
- fn nested_visit_map(&mut self) -> intravisit::NestedVisitorMap<'_, Self::Map> {
- NestedVisitorMap::OnlyBodies(&self.tcx.hir())
+ fn nested_visit_map(&mut self) -> intravisit::NestedVisitorMap<Self::Map> {
+ NestedVisitorMap::OnlyBodies(self.tcx.hir())
}
fn visit_anon_const(&mut self, anon: &'tcx hir::AnonConst) {
}
impl<'a, 'tcx> Visitor<'tcx> for MarkSymbolVisitor<'a, 'tcx> {
- type Map = Map<'tcx>;
+ type Map = intravisit::ErasedMap<'tcx>;
- fn nested_visit_map(&mut self) -> intravisit::NestedVisitorMap<'_, Self::Map> {
+ fn nested_visit_map(&mut self) -> intravisit::NestedVisitorMap<Self::Map> {
NestedVisitorMap::None
}
let trait_item = self.krate.trait_item(trait_item_ref.id);
match trait_item.kind {
hir::TraitItemKind::Const(_, Some(_))
- | hir::TraitItemKind::Method(_, hir::TraitMethod::Provided(_)) => {
+ | hir::TraitItemKind::Fn(_, hir::TraitFn::Provided(_)) => {
if has_allow_dead_code_or_lang_attr(
self.tcx,
trait_item.hir_id,
/// on inner functions when the outer function is already getting
/// an error. We could do this also by checking the parents, but
/// this is how the code is setup and it seems harmless enough.
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
- NestedVisitorMap::All(&self.tcx.hir())
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
+ NestedVisitorMap::All(self.tcx.hir())
}
fn visit_item(&mut self, item: &'tcx hir::Item<'tcx>) {
}
self.visit_nested_body(body_id)
}
- hir::ImplItemKind::Method(_, body_id) => {
+ hir::ImplItemKind::Fn(_, body_id) => {
if !self.symbol_is_live(impl_item.hir_id) {
let span = self.tcx.sess.source_map().def_span(impl_item.span);
self.warn_dead_code(
fn visit_trait_item(&mut self, trait_item: &'tcx hir::TraitItem<'tcx>) {
match trait_item.kind {
hir::TraitItemKind::Const(_, Some(body_id))
- | hir::TraitItemKind::Method(_, hir::TraitMethod::Provided(body_id)) => {
+ | hir::TraitItemKind::Fn(_, hir::TraitFn::Provided(body_id)) => {
self.visit_nested_body(body_id)
}
hir::TraitItemKind::Const(_, None)
- | hir::TraitItemKind::Method(_, hir::TraitMethod::Required(_))
+ | hir::TraitItemKind::Fn(_, hir::TraitFn::Required(_))
| hir::TraitItemKind::Type(..) => {}
}
}
-use rustc::hir::Hir;
-use rustc::session::config::EntryFnType;
-use rustc::session::{config, Session};
+use rustc::hir::map::Map;
use rustc::ty::query::Providers;
use rustc::ty::TyCtxt;
use rustc_ast::attr;
use rustc_hir::def_id::{CrateNum, DefId, CRATE_DEF_INDEX, LOCAL_CRATE};
use rustc_hir::itemlikevisit::ItemLikeVisitor;
use rustc_hir::{HirId, ImplItem, Item, ItemKind, TraitItem};
+use rustc_session::config::EntryFnType;
+use rustc_session::{config, Session};
use rustc_span::symbol::sym;
use rustc_span::{Span, DUMMY_SP};
struct EntryContext<'a, 'tcx> {
session: &'a Session,
- map: Hir<'tcx>,
+ map: Map<'tcx>,
/// The top-level function called `main`.
main_fn: Option<(HirId, Span)>,
impl<'a, 'tcx> ItemLikeVisitor<'tcx> for EntryContext<'a, 'tcx> {
fn visit_item(&mut self, item: &'tcx Item<'tcx>) {
let def_id = self.map.local_def_id(item.hir_id);
- let def_key = self.map.def_key(def_id);
+ let def_key = self.map.def_key(def_id.expect_local());
let at_root = def_key.parent == Some(CRATE_DEF_INDEX);
find_item(item, self, at_root);
}
}
// If the user wants no main function at all, then stop here.
- if attr::contains_name(&tcx.hir().krate().attrs, sym::no_main) {
+ if attr::contains_name(&tcx.hir().krate().item.attrs, sym::no_main) {
return None;
}
}
fn no_main_err(tcx: TyCtxt<'_>, visitor: &EntryContext<'_, '_>) {
- let sp = tcx.hir().krate().span;
+ let sp = tcx.hir().krate().item.span;
if *tcx.sess.parse_sess.reached_eof.borrow() {
// There's an unclosed brace that made the parser reach `Eof`, we shouldn't complain about
// the missing `fn main()` then as it might have been hidden inside an unclosed block.
// The file may be empty, which leads to the diagnostic machinery not emitting this
// note. This is a relatively simple way to detect that case and emit a span-less
// note instead.
- if let Ok(_) = tcx.sess.source_map().lookup_line(sp.lo()) {
+ if tcx.sess.source_map().lookup_line(sp.lo()).is_ok() {
err.set_span(sp);
err.span_label(sp, &note);
} else {
type Map = Map<'v>;
- fn nested_visit_map(&mut self) -> hir_visit::NestedVisitorMap<'_, Self::Map> {
+ fn nested_visit_map(&mut self) -> hir_visit::NestedVisitorMap<Self::Map> {
panic!("visit_nested_xxx must be manually implemented in this visitor")
}
ast_visit::walk_lifetime(self, lifetime)
}
- fn visit_mac(&mut self, mac: &'v ast::Mac) {
- self.record("Mac", Id::None, mac);
+ fn visit_mac(&mut self, mac: &'v ast::MacCall) {
+ self.record("MacCall", Id::None, mac);
}
fn visit_path_segment(&mut self, path_span: Span, path_segment: &'v ast::PathSegment) {
-use rustc::hir::map::Map;
use rustc::ty::layout::{LayoutError, Pointer, SizeSkeleton, VariantIdx};
use rustc::ty::query::Providers;
use rustc::ty::{self, Ty, TyCtxt};
}
impl Visitor<'tcx> for ItemVisitor<'tcx> {
- type Map = Map<'tcx>;
+ type Map = intravisit::ErasedMap<'tcx>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
NestedVisitorMap::None
}
}
impl Visitor<'tcx> for ExprVisitor<'tcx> {
- type Map = Map<'tcx>;
+ type Map = intravisit::ErasedMap<'tcx>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
NestedVisitorMap::None
}
}
/// Traverses and collects all the lang items in all crates.
-fn collect<'tcx>(tcx: TyCtxt<'tcx>) -> LanguageItems {
+fn collect(tcx: TyCtxt<'_>) -> LanguageItems {
// Initialize the collector.
let mut collector = LanguageItemCollector::new(tcx);
impl Visitor<'tcx> for LibFeatureCollector<'tcx> {
type Map = Map<'tcx>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
- NestedVisitorMap::All(&self.tcx.hir())
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
+ NestedVisitorMap::All(self.tcx.hir())
}
fn visit_attribute(&mut self, attr: &'tcx Attribute) {
use self::VarKind::*;
use rustc::hir::map::Map;
-use rustc::lint;
use rustc::ty::query::Providers;
use rustc::ty::{self, TyCtxt};
use rustc_ast::ast;
use rustc_hir::def_id::DefId;
use rustc_hir::intravisit::{self, FnKind, NestedVisitorMap, Visitor};
use rustc_hir::{Expr, HirId, HirIdMap, HirIdSet, Node};
+use rustc_session::lint;
use rustc_span::symbol::sym;
use rustc_span::Span;
impl<'tcx> Visitor<'tcx> for IrMaps<'tcx> {
type Map = Map<'tcx>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
- NestedVisitorMap::OnlyBodies(&self.tcx.hir())
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
+ NestedVisitorMap::OnlyBodies(self.tcx.hir())
}
fn visit_fn(
intravisit::walk_fn(&mut fn_maps, fk, decl, body_id, sp, id);
// compute liveness
- let mut lsets = Liveness::new(&mut fn_maps, body_id);
+ let mut lsets = Liveness::new(&mut fn_maps, def_id);
let entry_ln = lsets.compute(&body.value);
// check for various error conditions
struct Liveness<'a, 'tcx> {
ir: &'a mut IrMaps<'tcx>,
tables: &'a ty::TypeckTables<'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
s: Specials,
successors: Vec<LiveNode>,
rwu_table: RWUTable,
}
impl<'a, 'tcx> Liveness<'a, 'tcx> {
- fn new(ir: &'a mut IrMaps<'tcx>, body: hir::BodyId) -> Liveness<'a, 'tcx> {
+ fn new(ir: &'a mut IrMaps<'tcx>, def_id: DefId) -> Liveness<'a, 'tcx> {
// Special nodes and variables:
// - exit_ln represents the end of the fn, either by return or panic
// - implicit_ret_var is a pseudo-variable that represents
clean_exit_var: ir.add_variable(CleanExit),
};
- let tables = ir.tcx.body_tables(body);
+ let tables = ir.tcx.typeck_tables_of(def_id);
+ let param_env = ir.tcx.param_env(def_id);
let num_live_nodes = ir.num_live_nodes;
let num_vars = ir.num_vars;
Liveness {
ir,
tables,
+ param_env,
s: specials,
successors: vec![invalid_node(); num_live_nodes],
rwu_table: RWUTable::new(num_live_nodes * num_vars),
}
hir::ExprKind::Call(ref f, ref args) => {
- let m = self.ir.tcx.parent_module(expr.hir_id);
- let succ = if self.ir.tcx.is_ty_uninhabited_from(m, self.tables.expr_ty(expr)) {
+ let m = self.ir.tcx.parent_module(expr.hir_id).to_def_id();
+ let succ = if self.ir.tcx.is_ty_uninhabited_from(
+ m,
+ self.tables.expr_ty(expr),
+ self.param_env,
+ ) {
self.s.exit_ln
} else {
succ
}
hir::ExprKind::MethodCall(.., ref args) => {
- let m = self.ir.tcx.parent_module(expr.hir_id);
- let succ = if self.ir.tcx.is_ty_uninhabited_from(m, self.tables.expr_ty(expr)) {
+ let m = self.ir.tcx.parent_module(expr.hir_id).to_def_id();
+ let succ = if self.ir.tcx.is_ty_uninhabited_from(
+ m,
+ self.tables.expr_ty(expr),
+ self.param_env,
+ ) {
self.s.exit_ln
} else {
succ
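The two hunks above route the successor of a call with an uninhabited result type straight to `exit_ln`. A minimal sketch of the language-level behavior this models, using hypothetical names (`diverges`, `x`) and ordinary Rust rather than rustc internals:

```rust
// Hypothetical illustration (not rustc-internal code): a call whose result
// type is uninhabited (here the never type `!`) can never return, so
// liveness treats everything after it as dead and wires the call's
// successor to the function's exit node.
fn diverges() -> ! {
    panic!("this function never returns")
}

fn main() {
    let x;
    if false {
        // Any code placed after this call inside the block would be
        // unreachable, because `diverges()` cannot produce a value.
        diverges();
    }
    x = 42;
    assert_eq!(x, 42);
}
```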
// Checking for error conditions
impl<'a, 'tcx> Visitor<'tcx> for Liveness<'a, 'tcx> {
- type Map = Map<'tcx>;
+ type Map = intravisit::ErasedMap<'tcx>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
NestedVisitorMap::None
}
#[derive(Copy, Clone)]
struct CheckLoopVisitor<'a, 'hir> {
sess: &'a Session,
- hir_map: &'a Map<'hir>,
+ hir_map: Map<'hir>,
cx: Context,
}
fn check_mod_loops(tcx: TyCtxt<'_>, module_def_id: DefId) {
tcx.hir().visit_item_likes_in_module(
module_def_id,
- &mut CheckLoopVisitor { sess: &tcx.sess, hir_map: &tcx.hir(), cx: Normal }
- .as_deep_visitor(),
+ &mut CheckLoopVisitor { sess: &tcx.sess, hir_map: tcx.hir(), cx: Normal }.as_deep_visitor(),
);
}
impl<'a, 'hir> Visitor<'hir> for CheckLoopVisitor<'a, 'hir> {
type Map = Map<'hir>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
- NestedVisitorMap::OnlyBodies(&self.hir_map)
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
+ NestedVisitorMap::OnlyBodies(self.hir_map)
}
fn visit_anon_const(&mut self, c: &'hir hir::AnonConst) {
// makes all other generics or inline functions that it references
// reachable as well.
-use rustc::hir::map::Map;
use rustc::middle::codegen_fn_attrs::{CodegenFnAttrFlags, CodegenFnAttrs};
use rustc::middle::privacy;
-use rustc::session::config;
use rustc::ty::query::Providers;
use rustc::ty::{self, TyCtxt};
use rustc_data_structures::fx::FxHashSet;
use rustc_hir::def::{DefKind, Res};
use rustc_hir::def_id::LOCAL_CRATE;
use rustc_hir::def_id::{CrateNum, DefId};
-use rustc_hir::intravisit;
-use rustc_hir::intravisit::{NestedVisitorMap, Visitor};
+use rustc_hir::intravisit::{self, NestedVisitorMap, Visitor};
use rustc_hir::itemlikevisit::ItemLikeVisitor;
use rustc_hir::{HirIdSet, Node};
+use rustc_session::config;
use rustc_target::spec::abi::Abi;
// Returns true if the given item must be inlined because it may be
impl_item: &hir::ImplItem<'_>,
impl_src: DefId,
) -> bool {
- let codegen_fn_attrs = tcx.codegen_fn_attrs(impl_item.hir_id.owner_def_id());
+ let codegen_fn_attrs = tcx.codegen_fn_attrs(impl_item.hir_id.owner.to_def_id());
let generics = tcx.generics_of(tcx.hir().local_def_id(impl_item.hir_id));
if codegen_fn_attrs.requests_inline() || generics.requires_monomorphization(tcx) {
return true;
}
- if let hir::ImplItemKind::Method(method_sig, _) = &impl_item.kind {
+ if let hir::ImplItemKind::Fn(method_sig, _) = &impl_item.kind {
if method_sig.header.is_const() {
return true;
}
}
impl<'a, 'tcx> Visitor<'tcx> for ReachableContext<'a, 'tcx> {
- type Map = Map<'tcx>;
+ type Map = intravisit::ErasedMap<'tcx>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
NestedVisitorMap::None
}
},
Some(Node::TraitItem(trait_method)) => match trait_method.kind {
hir::TraitItemKind::Const(_, ref default) => default.is_some(),
- hir::TraitItemKind::Method(_, hir::TraitMethod::Provided(_)) => true,
- hir::TraitItemKind::Method(_, hir::TraitMethod::Required(_))
+ hir::TraitItemKind::Fn(_, hir::TraitFn::Provided(_)) => true,
+ hir::TraitItemKind::Fn(_, hir::TraitFn::Required(_))
| hir::TraitItemKind::Type(..) => false,
},
Some(Node::ImplItem(impl_item)) => {
match impl_item.kind {
hir::ImplItemKind::Const(..) => true,
- hir::ImplItemKind::Method(..) => {
+ hir::ImplItemKind::Fn(..) => {
let attrs = self.tcx.codegen_fn_attrs(def_id);
let generics = self.tcx.generics_of(def_id);
if generics.requires_monomorphization(self.tcx) || attrs.requests_inline() {
Node::TraitItem(trait_method) => {
match trait_method.kind {
hir::TraitItemKind::Const(_, None)
- | hir::TraitItemKind::Method(_, hir::TraitMethod::Required(_)) => {
+ | hir::TraitItemKind::Fn(_, hir::TraitFn::Required(_)) => {
// Keep going, nothing gets exported here
}
hir::TraitItemKind::Const(_, Some(body_id))
- | hir::TraitItemKind::Method(_, hir::TraitMethod::Provided(body_id)) => {
+ | hir::TraitItemKind::Fn(_, hir::TraitFn::Provided(body_id)) => {
self.visit_nested_body(body_id);
}
hir::TraitItemKind::Type(..) => {}
hir::ImplItemKind::Const(_, body) => {
self.visit_nested_body(body);
}
- hir::ImplItemKind::Method(_, body) => {
+ hir::ImplItemKind::Fn(_, body) => {
let did = self.tcx.hir().get_parent_did(search_item);
if method_might_be_inlined(self.tcx, impl_item, did) {
self.visit_nested_body(body)
//! the parent links in the region hierarchy.
//!
//! For more information about how MIR-based region-checking works,
-//! see the [rustc guide].
+//! see the [rustc dev guide].
//!
-//! [rustc guide]: https://rust-lang.github.io/rustc-guide/borrow_check.html
+//! [rustc dev guide]: https://rustc-dev-guide.rust-lang.org/borrow_check.html
-use rustc::hir::map::Map;
use rustc::middle::region::*;
use rustc::ty::query::Providers;
use rustc::ty::TyCtxt;
// Ordinarily, we can rely on the visit order of HIR intravisit
// to correspond to the actual execution order of statements.
- // However, there's a weird corner case with compund assignment
+ // However, there's a weird corner case with compound assignment
// operators (e.g. `a += b`). The evaluation order depends on whether
// or not the operator is overloaded (e.g. whether or not a trait
// like AddAssign is implemented).
}
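The corner case described in the comment above can be observed directly from ordinary Rust. A small sketch (hypothetical names `order`, `xs`; not rustc-internal code) showing that for the built-in `+=` on primitives the right-hand side is evaluated before the left-hand place expression:

```rust
// Hypothetical demonstration: for non-overloaded compound assignment on
// primitives, the modifying (right) operand is evaluated first, then the
// assigned (left) place expression. An overloaded `+=` would instead call
// `AddAssign::add_assign`, which evaluates the receiver first.
fn main() {
    let mut order = Vec::new();
    let mut xs = [0i32; 1];
    xs[{
        order.push("lhs-index");
        0
    }] += {
        order.push("rhs");
        1
    };
    // Built-in compound assignment: RHS first, then the LHS place.
    assert_eq!(order, ["rhs", "lhs-index"]);
    assert_eq!(xs[0], 1);
}
```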
impl<'tcx> Visitor<'tcx> for RegionResolutionVisitor<'tcx> {
- type Map = Map<'tcx>;
+ type Map = intravisit::ErasedMap<'tcx>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
NestedVisitorMap::None
}
//! propagating default levels lexically from parent to children ast nodes.
use rustc::hir::map::Map;
-use rustc::lint;
use rustc::middle::privacy::AccessLevels;
use rustc::middle::stability::{DeprecationEntry, Index};
-use rustc::session::parse::feature_err;
-use rustc::session::Session;
use rustc::ty::query::Providers;
use rustc::ty::TyCtxt;
use rustc_ast::ast::Attribute;
use rustc_hir::def_id::{DefId, CRATE_DEF_INDEX, LOCAL_CRATE};
use rustc_hir::intravisit::{self, NestedVisitorMap, Visitor};
use rustc_hir::{Generics, HirId, Item, StructField, Variant};
-use rustc_infer::traits::misc::can_type_implement_copy;
+use rustc_session::lint;
+use rustc_session::parse::feature_err;
+use rustc_session::Session;
use rustc_span::symbol::{sym, Symbol};
use rustc_span::Span;
+use rustc_trait_selection::traits::misc::can_type_implement_copy;
use std::cmp::Ordering;
use std::mem::replace;
/// deep-walking.
type Map = Map<'tcx>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
- NestedVisitorMap::All(&self.tcx.hir())
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
+ NestedVisitorMap::All(self.tcx.hir())
}
fn visit_item(&mut self, i: &'tcx Item<'tcx>) {
impl<'a, 'tcx> Visitor<'tcx> for MissingStabilityAnnotations<'a, 'tcx> {
type Map = Map<'tcx>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
- NestedVisitorMap::OnlyBodies(&self.tcx.hir())
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
+ NestedVisitorMap::OnlyBodies(self.tcx.hir())
}
fn visit_item(&mut self, i: &'tcx Item<'tcx>) {
annotator.annotate(
hir::CRATE_HIR_ID,
- &krate.attrs,
- krate.span,
+ &krate.item.attrs,
+ krate.item.span,
AnnotationKind::Required,
|v| intravisit::walk_crate(v, krate),
);
/// Because stability levels are scoped lexically, we want to walk
/// nested items in the context of the outer item, so enable
/// deep-walking.
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
- NestedVisitorMap::OnlyBodies(&self.tcx.hir())
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
+ NestedVisitorMap::OnlyBodies(self.tcx.hir())
}
fn visit_item(&mut self, item: &'tcx hir::Item<'tcx>) {
.emit();
} else {
let param_env = self.tcx.param_env(def_id);
- if !can_type_implement_copy(self.tcx, param_env, ty).is_ok() {
+ if can_type_implement_copy(self.tcx, param_env, ty).is_err() {
feature_err(
&self.tcx.sess.parse_sess,
sym::untagged_unions,
if tcx.stability().staged_api[&LOCAL_CRATE] {
let krate = tcx.hir().krate();
let mut missing = MissingStabilityAnnotations { tcx, access_levels };
- missing.check_missing_stability(hir::CRATE_HIR_ID, krate.span, "crate");
+ missing.check_missing_stability(hir::CRATE_HIR_ID, krate.item.span, "crate");
intravisit::walk_crate(&mut missing, krate);
krate.visit_all_item_likes(&mut missing.as_deep_visitor());
}
}
// FIXME(#44232): the `used_features` table no longer exists, so we
- // don't lint about unused features. We should reenable this one day!
+ // don't lint about unused features. We should re-enable this one day!
}
fn unnecessary_stable_feature_lint(tcx: TyCtxt<'_>, span: Span, feature: Symbol, since: Symbol) {
//! Upvar (closure capture) collection from cross-body HIR uses of `Res::Local`s.
-use rustc::hir::map::Map;
use rustc::ty::query::Providers;
use rustc::ty::TyCtxt;
use rustc_data_structures::fx::{FxHashSet, FxIndexMap};
}
impl Visitor<'tcx> for LocalCollector {
- type Map = Map<'tcx>;
+ type Map = intravisit::ErasedMap<'tcx>;
- fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, Self::Map> {
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
NestedVisitorMap::None
}
}
impl Visitor<'tcx> for CaptureCollector<'a, 'tcx> {
- type Map = Map<'tcx>;
+ type Map = intravisit::ErasedMap<'tcx>;
- fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, Self::Map> {
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
NestedVisitorMap::None
}
use rustc::middle::lang_items;
use rustc::middle::lang_items::whitelisted;
-use rustc::session::config;
-
-use rustc::hir::map::Map;
use rustc::ty::TyCtxt;
use rustc_data_structures::fx::FxHashSet;
use rustc_errors::struct_span_err;
use rustc_hir as hir;
use rustc_hir::intravisit::{self, NestedVisitorMap, Visitor};
use rustc_hir::weak_lang_items::WEAK_ITEMS_REFS;
+use rustc_session::config;
use rustc_span::symbol::Symbol;
use rustc_span::Span;
if items.eh_personality().is_none() {
items.missing.push(lang_items::EhPersonalityLangItem);
}
- if tcx.sess.target.target.options.custom_unwind_resume & items.eh_unwind_resume().is_none() {
- items.missing.push(lang_items::EhUnwindResumeLangItem);
- }
{
let mut cx = Context { tcx, items };
}
impl<'a, 'tcx, 'v> Visitor<'v> for Context<'a, 'tcx> {
- type Map = Map<'v>;
+ type Map = intravisit::ErasedMap<'v>;
- fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, Map<'v>> {
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
NestedVisitorMap::None
}
rustc_lint = { path = "../librustc_lint" }
rustc_metadata = { path = "../librustc_metadata" }
rustc_ast = { path = "../librustc_ast" }
+rustc_session = { path = "../librustc_session" }
rustc_span = { path = "../librustc_span" }
use crate::Registry;
use rustc::middle::cstore::MetadataLoader;
-use rustc::session::Session;
use rustc_ast::ast::{Crate, Ident};
use rustc_errors::struct_span_err;
use rustc_metadata::locator;
+use rustc_session::Session;
use rustc_span::symbol::sym;
use rustc_span::Span;
rustc_errors = { path = "../librustc_errors" }
rustc_hir = { path = "../librustc_hir" }
rustc_typeck = { path = "../librustc_typeck" }
+rustc_session = { path = "../librustc_session" }
rustc_ast = { path = "../librustc_ast" }
rustc_span = { path = "../librustc_span" }
rustc_data_structures = { path = "../librustc_data_structures" }
use rustc::bug;
use rustc::hir::map::Map;
-use rustc::lint;
use rustc::middle::privacy::{AccessLevel, AccessLevels};
use rustc::ty::fold::TypeVisitor;
use rustc::ty::query::Providers;
use rustc_hir::def_id::{CrateNum, DefId, CRATE_DEF_INDEX, LOCAL_CRATE};
use rustc_hir::intravisit::{self, DeepVisitor, NestedVisitorMap, Visitor};
use rustc_hir::{AssocItemKind, HirIdSet, Node, PatKind};
+use rustc_session::lint;
use rustc_span::hygiene::Transparency;
use rustc_span::symbol::{kw, sym};
use rustc_span::Span;
}
Node::Expr(expr) => {
return (
- ty::Visibility::Restricted(tcx.parent_module(expr.hir_id)),
+ ty::Visibility::Restricted(tcx.parent_module(expr.hir_id).to_def_id()),
expr.span,
"private",
);
impl Visitor<'tcx> for PubRestrictedVisitor<'tcx> {
type Map = Map<'tcx>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
- NestedVisitorMap::All(&self.tcx.hir())
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
+ NestedVisitorMap::All(self.tcx.hir())
}
fn visit_vis(&mut self, vis: &'tcx hir::Visibility<'tcx>) {
self.has_pub_restricted = self.has_pub_restricted || vis.node.is_pub_restricted();
| DefKind::ForeignTy
| DefKind::Fn
| DefKind::OpaqueTy
- | DefKind::Method
+ | DefKind::AssocFn
| DefKind::Trait
| DefKind::TyParam
| DefKind::Variant => (),
/// We want to visit items in the context of their containing
/// module and so forth, so supply a crate for doing a deep walk.
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
- NestedVisitorMap::All(&self.tcx.hir())
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
+ NestedVisitorMap::All(self.tcx.hir())
}
fn visit_item(&mut self, item: &'tcx hir::Item<'tcx>) {
}
fn visit_macro_def(&mut self, md: &'tcx hir::MacroDef<'tcx>) {
- if attr::find_transparency(&md.attrs, md.legacy).0 != Transparency::Opaque {
+ if attr::find_transparency(&md.attrs, md.ast.macro_rules).0 != Transparency::Opaque {
self.update(md.hir_id, Some(AccessLevel::Public));
return;
}
/// We want to visit items in the context of their containing
/// module and so forth, so supply a crate for doing a deep walk.
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
- NestedVisitorMap::All(&self.tcx.hir())
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
+ NestedVisitorMap::All(self.tcx.hir())
}
fn visit_mod(&mut self, _m: &'tcx hir::Mod<'tcx>, _s: Span, _n: hir::HirId) {
/// We want to visit items in the context of their containing
/// module and so forth, so supply a crate for doing a deep walk.
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
- NestedVisitorMap::All(&self.tcx.hir())
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
+ NestedVisitorMap::All(self.tcx.hir())
}
fn visit_mod(&mut self, _m: &'tcx hir::Mod<'tcx>, _s: Span, _n: hir::HirId) {
_ => None,
};
let def = def.filter(|(kind, _)| match kind {
- DefKind::Method
+ DefKind::AssocFn
| DefKind::AssocConst
| DefKind::AssocTy
| DefKind::AssocOpaqueTy
}
impl<'a, 'b, 'tcx, 'v> Visitor<'v> for ObsoleteCheckTypeForPrivatenessVisitor<'a, 'b, 'tcx> {
- type Map = Map<'v>;
+ type Map = intravisit::ErasedMap<'v>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
NestedVisitorMap::None
}
/// We want to visit items in the context of their containing
/// module and so forth, so supply a crate for doing a deep walk.
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
- NestedVisitorMap::All(&self.tcx.hir())
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
+ NestedVisitorMap::All(self.tcx.hir())
}
fn visit_item(&mut self, item: &'tcx hir::Item<'tcx>) {
|| items.iter().any(|impl_item_ref| {
let impl_item = self.tcx.hir().impl_item(impl_item_ref.id);
match impl_item.kind {
- hir::ImplItemKind::Const(..) | hir::ImplItemKind::Method(..) => {
+ hir::ImplItemKind::Const(..) | hir::ImplItemKind::Fn(..) => {
self.access_levels.is_reachable(impl_item_ref.id.hir_id)
}
hir::ImplItemKind::OpaqueTy(..) | hir::ImplItemKind::TyAlias(_) => {
// types in private items.
let impl_item = self.tcx.hir().impl_item(impl_item_ref.id);
match impl_item.kind {
- hir::ImplItemKind::Const(..)
- | hir::ImplItemKind::Method(..)
+ hir::ImplItemKind::Const(..) | hir::ImplItemKind::Fn(..)
if self
.item_is_public(&impl_item.hir_id, &impl_item.vis) =>
{
impl<'a, 'tcx> Visitor<'tcx> for PrivateItemsInPublicInterfacesVisitor<'a, 'tcx> {
type Map = Map<'tcx>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
- NestedVisitorMap::OnlyBodies(&self.tcx.hir())
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
+ NestedVisitorMap::OnlyBodies(self.tcx.hir())
}
fn visit_item(&mut self, item: &'tcx hir::Item<'tcx>) {
rustc_expand = { path = "../librustc_expand" }
rustc_feature = { path = "../librustc_feature" }
rustc_hir = { path = "../librustc_hir" }
-rustc_infer = { path = "../librustc_infer" }
rustc_metadata = { path = "../librustc_metadata" }
rustc_session = { path = "../librustc_session" }
rustc_span = { path = "../librustc_span" }
//! Imports are also considered items and placed into modules here, but not resolved yet.
use crate::def_collector::collect_definitions;
-use crate::imports::ImportDirective;
-use crate::imports::ImportDirectiveSubclass::{self, GlobImport, SingleImport};
-use crate::macros::{LegacyBinding, LegacyScope};
+use crate::imports::{Import, ImportKind};
+use crate::macros::{MacroRulesBinding, MacroRulesScope};
use crate::Namespace::{self, MacroNS, TypeNS, ValueNS};
use crate::{CrateLint, Determinacy, PathResult, ResolutionError, VisResolutionError};
use crate::{
&mut self,
fragment: &AstFragment,
parent_scope: ParentScope<'a>,
- ) -> LegacyScope<'a> {
+ ) -> MacroRulesScope<'a> {
collect_definitions(&mut self.definitions, fragment, parent_scope.expansion);
let mut visitor = BuildReducedGraphVisitor { r: self, parent_scope };
fragment.visit_with(&mut visitor);
- visitor.parent_scope.legacy
+ visitor.parent_scope.macro_rules
}
crate fn build_reduced_graph_external(&mut self, module: Module<'a>) {
fn block_needs_anonymous_module(&mut self, block: &Block) -> bool {
// If any statements are items, we need to create an anonymous module
block.stmts.iter().any(|statement| match statement.kind {
- StmtKind::Item(_) | StmtKind::Mac(_) => true,
+ StmtKind::Item(_) | StmtKind::MacCall(_) => true,
_ => false,
})
}
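The anonymous-module rule that `block_needs_anonymous_module` implements is visible in ordinary Rust. A minimal sketch (hypothetical name `helper`; not rustc-internal code):

```rust
// Hypothetical illustration: items declared inside a block are placed in
// an anonymous module scoped to that block, so they resolve within the
// block but not outside it.
fn main() {
    let v = {
        // `helper` lives in the block's anonymous module.
        fn helper() -> i32 {
            41
        }
        helper() + 1
    };
    // Calling `helper()` here would fail to resolve: its anonymous module
    // ends with the block above.
    assert_eq!(v, 42);
}
```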
- // Add an import directive to the current module.
- fn add_import_directive(
+ // Add an import to the current module.
+ fn add_import(
&mut self,
module_path: Vec<Segment>,
- subclass: ImportDirectiveSubclass<'a>,
+ kind: ImportKind<'a>,
span: Span,
id: NodeId,
item: &ast::Item,
vis: ty::Visibility,
) {
let current_module = self.parent_scope.module;
- let directive = self.r.arenas.alloc_import_directive(ImportDirective {
+ let import = self.r.arenas.alloc_import(Import {
+ kind,
parent_scope: self.parent_scope,
module_path,
imported_module: Cell::new(None),
- subclass,
span,
id,
use_span: item.span,
used: Cell::new(false),
});
- debug!("add_import_directive({:?})", directive);
+ debug!("add_import({:?})", import);
- self.r.indeterminate_imports.push(directive);
- match directive.subclass {
+ self.r.indeterminate_imports.push(import);
+ match import.kind {
// Don't add unresolved underscore imports to modules
- SingleImport { target: Ident { name: kw::Underscore, .. }, .. } => {}
- SingleImport { target, type_ns_only, .. } => {
+ ImportKind::Single { target: Ident { name: kw::Underscore, .. }, .. } => {}
+ ImportKind::Single { target, type_ns_only, .. } => {
self.r.per_ns(|this, ns| {
if !type_ns_only || ns == TypeNS {
let key = this.new_key(target, ns);
let mut resolution = this.resolution(current_module, key).borrow_mut();
- resolution.add_single_import(directive);
+ resolution.add_single_import(import);
}
});
}
// We don't add prelude imports to the globs since they only affect lexical scopes,
// which are not relevant to import resolution.
- GlobImport { is_prelude: true, .. } => {}
- GlobImport { .. } => current_module.globs.borrow_mut().push(directive),
+ ImportKind::Glob { is_prelude: true, .. } => {}
+ ImportKind::Glob { .. } => current_module.globs.borrow_mut().push(import),
_ => unreachable!(),
}
}
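The underscore-import arm above skips binding a name in the module, which matches the surface behavior of `use ... as _`. A small sketch (hypothetical names `traits`, `Doubler`; not rustc-internal code):

```rust
// Hypothetical illustration: an underscore import brings a trait's methods
// into scope without adding a usable name binding to the module, which is
// why the resolver does not record it as a module-level name.
mod traits {
    pub trait Doubler {
        fn double(&self) -> i32;
    }
    impl Doubler for i32 {
        fn double(&self) -> i32 {
            self * 2
        }
    }
}

// No name `Doubler` is introduced here, but its methods become callable.
use traits::Doubler as _;

fn main() {
    assert_eq!(21i32.double(), 42);
}
```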
);
}
- let subclass = SingleImport {
+ let kind = ImportKind::Single {
source: source.ident,
target: ident,
source_bindings: PerNS {
type_ns_only,
nested,
};
- self.add_import_directive(
+ self.add_import(
module_path,
- subclass,
+ kind,
use_tree.span,
id,
item,
);
}
ast::UseTreeKind::Glob => {
- let subclass = GlobImport {
+ let kind = ImportKind::Glob {
is_prelude: attr::contains_name(&item.attrs, sym::prelude_import),
max_vis: Cell::new(ty::Visibility::Invisible),
};
- self.add_import_directive(
- prefix,
- subclass,
- use_tree.span,
- id,
- item,
- root_span,
- item.id,
- vis,
- );
+ self.add_import(prefix, kind, use_tree.span, id, item, root_span, item.id, vis);
}
ast::UseTreeKind::Nested(ref items) => {
// Ensure there is at most one `self` in the list
self.r.get_module(DefId { krate: crate_id, index: CRATE_DEF_INDEX })
};
- let used = self.process_legacy_macro_imports(item, module);
+ let used = self.process_macro_use_imports(item, module);
let binding =
(module, ty::Visibility::Public, sp, expansion).to_name_binding(self.r.arenas);
- let directive = self.r.arenas.alloc_import_directive(ImportDirective {
+ let import = self.r.arenas.alloc_import(Import {
+ kind: ImportKind::ExternCrate { source: orig_name, target: ident },
root_id: item.id,
id: item.id,
parent_scope: self.parent_scope,
imported_module: Cell::new(Some(ModuleOrUniformRoot::Module(module))),
- subclass: ImportDirectiveSubclass::ExternCrate {
- source: orig_name,
- target: ident,
- },
has_attributes: !item.attrs.is_empty(),
use_span_with_attributes: item.span_with_attributes(),
use_span: item.span,
vis: Cell::new(vis),
used: Cell::new(used),
});
- self.r.potentially_unused_imports.push(directive);
- let imported_binding = self.r.import(binding, directive);
+ self.r.potentially_unused_imports.push(import);
+ let imported_binding = self.r.import(binding, import);
if ptr::eq(parent, self.r.graph_root) {
- if let Some(entry) = self.r.extern_prelude.get(&ident.modern()) {
+ if let Some(entry) = self.r.extern_prelude.get(&ident.normalize_to_macros_2_0())
+ {
if expansion != ExpnId::root()
&& orig_name.is_some()
&& entry.extern_crate_item.is_none()
}
}
let entry =
- self.r.extern_prelude.entry(ident.modern()).or_insert(ExternPreludeEntry {
- extern_crate_item: None,
- introduced_by_item: true,
- });
+ self.r.extern_prelude.entry(ident.normalize_to_macros_2_0()).or_insert(
+ ExternPreludeEntry {
+ extern_crate_item: None,
+ introduced_by_item: true,
+ },
+ );
entry.extern_crate_item = Some(imported_binding);
if orig_name.is_some() {
entry.introduced_by_item = true;
// These items do not add names to modules.
ItemKind::Impl { .. } | ItemKind::ForeignMod(..) | ItemKind::GlobalAsm(..) => {}
- ItemKind::MacroDef(..) | ItemKind::Mac(_) => unreachable!(),
+ ItemKind::MacroDef(..) | ItemKind::MacCall(_) => unreachable!(),
}
}
ForeignItemKind::Fn(..) => {
(Res::Def(DefKind::Fn, self.r.definitions.local_def_id(item.id)), ValueNS)
}
- ForeignItemKind::Static(..) | ForeignItemKind::Const(..) => {
+ ForeignItemKind::Static(..) => {
(Res::Def(DefKind::Static, self.r.definitions.local_def_id(item.id)), ValueNS)
}
ForeignItemKind::TyAlias(..) => {
(Res::Def(DefKind::ForeignTy, self.r.definitions.local_def_id(item.id)), TypeNS)
}
- ForeignItemKind::Macro(_) => unreachable!(),
+ ForeignItemKind::MacCall(_) => unreachable!(),
};
let parent = self.parent_scope.module;
let expansion = self.parent_scope.expansion;
| Res::PrimTy(..)
| Res::ToolMod => self.r.define(parent, ident, TypeNS, (res, vis, span, expansion)),
Res::Def(DefKind::Fn, _)
- | Res::Def(DefKind::Method, _)
+ | Res::Def(DefKind::AssocFn, _)
| Res::Def(DefKind::Static, _)
| Res::Def(DefKind::Const, _)
| Res::Def(DefKind::AssocConst, _)
let field_names = cstore.struct_field_names_untracked(def_id, self.r.session);
self.insert_field_names(def_id, field_names);
}
- Res::Def(DefKind::Method, def_id) => {
+ Res::Def(DefKind::AssocFn, def_id) => {
if cstore.associated_item_cloned_untracked(def_id).method_has_self_argument {
self.r.has_self.insert(def_id);
}
}
}
- fn legacy_import_macro(
+ fn add_macro_use_binding(
&mut self,
name: ast::Name,
binding: &'a NameBinding<'a>,
}
/// Returns `true` if we should consider the underlying `extern crate` to be used.
- fn process_legacy_macro_imports(&mut self, item: &Item, module: Module<'a>) -> bool {
+ fn process_macro_use_imports(&mut self, item: &Item, module: Module<'a>) -> bool {
let mut import_all = None;
let mut single_imports = Vec::new();
for attr in &item.attrs {
}
}
- let macro_use_directive = |this: &Self, span| {
- this.r.arenas.alloc_import_directive(ImportDirective {
+ let macro_use_import = |this: &Self, span| {
+ this.r.arenas.alloc_import(Import {
+ kind: ImportKind::MacroUse,
root_id: item.id,
id: item.id,
parent_scope: this.parent_scope,
imported_module: Cell::new(Some(ModuleOrUniformRoot::Module(module))),
- subclass: ImportDirectiveSubclass::MacroUse,
use_span_with_attributes: item.span_with_attributes(),
has_attributes: !item.attrs.is_empty(),
use_span: item.span,
let allow_shadowing = self.parent_scope.expansion == ExpnId::root();
if let Some(span) = import_all {
- let directive = macro_use_directive(self, span);
- self.r.potentially_unused_imports.push(directive);
+ let import = macro_use_import(self, span);
+ self.r.potentially_unused_imports.push(import);
module.for_each_child(self, |this, ident, ns, binding| {
if ns == MacroNS {
- let imported_binding = this.r.import(binding, directive);
- this.legacy_import_macro(ident.name, imported_binding, span, allow_shadowing);
+ let imported_binding = this.r.import(binding, import);
+ this.add_macro_use_binding(ident.name, imported_binding, span, allow_shadowing);
}
});
} else {
ident.span,
);
if let Ok(binding) = result {
- let directive = macro_use_directive(self, ident.span);
- self.r.potentially_unused_imports.push(directive);
- let imported_binding = self.r.import(binding, directive);
- self.legacy_import_macro(
+ let import = macro_use_import(self, ident.span);
+ self.r.potentially_unused_imports.push(import);
+ let imported_binding = self.r.import(binding, import);
+ self.add_macro_use_binding(
ident.name,
imported_binding,
ident.span,
false
}
- fn visit_invoc(&mut self, id: NodeId) -> LegacyScope<'a> {
+ fn visit_invoc(&mut self, id: NodeId) -> MacroRulesScope<'a> {
let invoc_id = id.placeholder_to_expn_id();
self.parent_scope.module.unexpanded_invocations.borrow_mut().insert(invoc_id);
let old_parent_scope = self.r.invocation_parent_scopes.insert(invoc_id, self.parent_scope);
assert!(old_parent_scope.is_none(), "invocation data is reset for an invocation");
- LegacyScope::Invocation(invoc_id)
+ MacroRulesScope::Invocation(invoc_id)
}
fn proc_macro_stub(item: &ast::Item) -> Option<(MacroKind, Ident, Span)> {
}
}
- fn define_macro(&mut self, item: &ast::Item) -> LegacyScope<'a> {
+ fn define_macro(&mut self, item: &ast::Item) -> MacroRulesScope<'a> {
let parent_scope = self.parent_scope;
let expansion = parent_scope.expansion;
- let (ext, ident, span, is_legacy) = match &item.kind {
+ let (ext, ident, span, macro_rules) = match &item.kind {
ItemKind::MacroDef(def) => {
let ext = Lrc::new(self.r.compile_macro(item, self.r.session.edition()));
- (ext, item.ident, item.span, def.legacy)
+ (ext, item.ident, item.span, def.macro_rules)
}
ItemKind::Fn(..) => match Self::proc_macro_stub(item) {
Some((macro_kind, ident, span)) => {
self.r.proc_macro_stubs.insert(item.id);
(self.r.dummy_ext(macro_kind), ident, span, false)
}
- None => return parent_scope.legacy,
+ None => return parent_scope.macro_rules,
},
_ => unreachable!(),
};
self.r.macro_map.insert(def_id, ext);
self.r.local_macro_def_scopes.insert(item.id, parent_scope.module);
- if is_legacy {
- let ident = ident.modern();
+ if macro_rules {
+ let ident = ident.normalize_to_macros_2_0();
self.r.macro_names.insert(ident);
let is_macro_export = attr::contains_name(&item.attrs, sym::macro_export);
let vis = if is_macro_export {
self.r.check_reserved_macro_name(ident, res);
self.insert_unused_macro(ident, item.id, span);
}
- LegacyScope::Binding(self.r.arenas.alloc_legacy_binding(LegacyBinding {
- parent_legacy_scope: parent_scope.legacy,
+ MacroRulesScope::Binding(self.r.arenas.alloc_macro_rules_binding(MacroRulesBinding {
+ parent_macro_rules_scope: parent_scope.macro_rules,
binding,
ident,
}))
self.insert_unused_macro(ident, item.id, span);
}
self.r.define(module, ident, MacroNS, (res, vis, span, expansion));
- self.parent_scope.legacy
+ self.parent_scope.macro_rules
}
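The `MacroRulesScope::Binding` chain built in `define_macro` models the textual scoping of `macro_rules!` definitions. A minimal sketch of that surface behavior (hypothetical macro name `answer`; not rustc-internal code):

```rust
// Hypothetical illustration: `macro_rules!` bindings are scoped textually.
// Each definition shadows earlier ones from its point of definition
// onward, which is the chain of parent scopes the resolver tracks.
macro_rules! answer {
    () => {
        41
    };
}

// This later definition shadows the earlier binding for all uses below it.
macro_rules! answer {
    () => {
        42
    };
}

fn main() {
    assert_eq!(answer!(), 42);
}
```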
}
}
}
impl<'a, 'b> Visitor<'b> for BuildReducedGraphVisitor<'a, 'b> {
- method!(visit_expr: ast::Expr, ast::ExprKind::Mac, walk_expr);
- method!(visit_pat: ast::Pat, ast::PatKind::Mac, walk_pat);
- method!(visit_ty: ast::Ty, ast::TyKind::Mac, walk_ty);
+ method!(visit_expr: ast::Expr, ast::ExprKind::MacCall, walk_expr);
+ method!(visit_pat: ast::Pat, ast::PatKind::MacCall, walk_pat);
+ method!(visit_ty: ast::Ty, ast::TyKind::MacCall, walk_ty);
fn visit_item(&mut self, item: &'b Item) {
let macro_use = match item.kind {
ItemKind::MacroDef(..) => {
- self.parent_scope.legacy = self.define_macro(item);
+ self.parent_scope.macro_rules = self.define_macro(item);
return;
}
- ItemKind::Mac(..) => {
- self.parent_scope.legacy = self.visit_invoc(item.id);
+ ItemKind::MacCall(..) => {
+ self.parent_scope.macro_rules = self.visit_invoc(item.id);
return;
}
ItemKind::Mod(..) => self.contains_macro_use(&item.attrs),
_ => false,
};
let orig_current_module = self.parent_scope.module;
- let orig_current_legacy_scope = self.parent_scope.legacy;
+ let orig_current_macro_rules_scope = self.parent_scope.macro_rules;
self.build_reduced_graph_for_item(item);
visit::walk_item(self, item);
self.parent_scope.module = orig_current_module;
if !macro_use {
- self.parent_scope.legacy = orig_current_legacy_scope;
+ self.parent_scope.macro_rules = orig_current_macro_rules_scope;
}
}
fn visit_stmt(&mut self, stmt: &'b ast::Stmt) {
- if let ast::StmtKind::Mac(..) = stmt.kind {
- self.parent_scope.legacy = self.visit_invoc(stmt.id);
+ if let ast::StmtKind::MacCall(..) = stmt.kind {
+ self.parent_scope.macro_rules = self.visit_invoc(stmt.id);
} else {
visit::walk_stmt(self, stmt);
}
}
fn visit_foreign_item(&mut self, foreign_item: &'b ForeignItem) {
- if let ForeignItemKind::Macro(_) = foreign_item.kind {
+ if let ForeignItemKind::MacCall(_) = foreign_item.kind {
self.visit_invoc(foreign_item.id);
return;
}
fn visit_block(&mut self, block: &'b Block) {
let orig_current_module = self.parent_scope.module;
- let orig_current_legacy_scope = self.parent_scope.legacy;
+ let orig_current_macro_rules_scope = self.parent_scope.macro_rules;
self.build_reduced_graph_for_block(block);
visit::walk_block(self, block);
self.parent_scope.module = orig_current_module;
- self.parent_scope.legacy = orig_current_legacy_scope;
+ self.parent_scope.macro_rules = orig_current_macro_rules_scope;
}
fn visit_assoc_item(&mut self, item: &'b AssocItem, ctxt: AssocCtxt) {
let parent = self.parent_scope.module;
- if let AssocItemKind::Macro(_) = item.kind {
+ if let AssocItemKind::MacCall(_) = item.kind {
self.visit_invoc(item.id);
return;
}
// Add the item to the trait info.
let item_def_id = self.r.definitions.local_def_id(item.id);
let (res, ns) = match item.kind {
- AssocItemKind::Static(..) // Let's pretend it's a `const` for recovery.
- | AssocItemKind::Const(..) => (Res::Def(DefKind::AssocConst, item_def_id), ValueNS),
+ AssocItemKind::Const(..) => (Res::Def(DefKind::AssocConst, item_def_id), ValueNS),
AssocItemKind::Fn(_, ref sig, _, _) => {
if sig.decl.has_self() {
self.r.has_self.insert(item_def_id);
}
- (Res::Def(DefKind::Method, item_def_id), ValueNS)
+ (Res::Def(DefKind::AssocFn, item_def_id), ValueNS)
}
AssocItemKind::TyAlias(..) => (Res::Def(DefKind::AssocTy, item_def_id), TypeNS),
- AssocItemKind::Macro(_) => bug!(), // handled above
+ AssocItemKind::MacCall(_) => bug!(), // handled above
};
let vis = ty::Visibility::Public;
fn visit_token(&mut self, t: Token) {
if let token::Interpolated(nt) = t.kind {
if let token::NtExpr(ref expr) = *nt {
- if let ast::ExprKind::Mac(..) = expr.kind {
+ if let ast::ExprKind::MacCall(..) = expr.kind {
self.visit_invoc(expr.id);
}
}
//
// Although this is mostly a lint pass, it lives in here because it depends on
// resolve data structures and because it finalises the privacy information for
-// `use` directives.
+// `use` items.
//
// Unused trait imports can't be checked until the method resolution. We save
// candidates here, and do the actual check in librustc_typeck/check_unused.rs.
// - `check_crate` finally emits the diagnostics based on the data generated
// in the last step
-use crate::imports::ImportDirectiveSubclass;
+use crate::imports::ImportKind;
use crate::Resolver;
-use rustc::{lint, ty};
+use rustc::ty;
use rustc_ast::ast;
use rustc_ast::node_id::NodeMap;
use rustc_ast::visit::{self, Visitor};
use rustc_data_structures::fx::FxHashSet;
use rustc_errors::pluralize;
+use rustc_session::lint::builtin::{MACRO_USE_EXTERN_CRATE, UNUSED_IMPORTS};
use rustc_session::lint::BuiltinLintDiagnostics;
use rustc_span::{MultiSpan, Span, DUMMY_SP};
}
impl<'a, 'b> UnusedImportCheckVisitor<'a, 'b> {
- // We have information about whether `use` (import) directives are actually
+ // We have information about whether `use` (import) items are actually
// used now. If an import is not used at all, we signal a lint error.
fn check_import(&mut self, id: ast::NodeId) {
let mut used = false;
impl Resolver<'_> {
crate fn check_unused(&mut self, krate: &ast::Crate) {
- for directive in self.potentially_unused_imports.iter() {
- match directive.subclass {
- _ if directive.used.get()
- || directive.vis.get() == ty::Visibility::Public
- || directive.span.is_dummy() =>
+ for import in self.potentially_unused_imports.iter() {
+ match import.kind {
+ _ if import.used.get()
+ || import.vis.get() == ty::Visibility::Public
+ || import.span.is_dummy() =>
{
- if let ImportDirectiveSubclass::MacroUse = directive.subclass {
- if !directive.span.is_dummy() {
+ if let ImportKind::MacroUse = import.kind {
+ if !import.span.is_dummy() {
self.lint_buffer.buffer_lint(
- lint::builtin::MACRO_USE_EXTERN_CRATE,
- directive.id,
- directive.span,
- "deprecated `#[macro_use]` directive used to \
+ MACRO_USE_EXTERN_CRATE,
+ import.id,
+ import.span,
+ "deprecated `#[macro_use]` attribute used to \
import macros should be replaced at use sites \
- with a `use` statement to import the macro \
+ with a `use` item to import the macro \
instead",
);
}
}
}
- ImportDirectiveSubclass::ExternCrate { .. } => {
- self.maybe_unused_extern_crates.push((directive.id, directive.span));
+ ImportKind::ExternCrate { .. } => {
+ self.maybe_unused_extern_crates.push((import.id, import.span));
}
- ImportDirectiveSubclass::MacroUse => {
- let lint = lint::builtin::UNUSED_IMPORTS;
+ ImportKind::MacroUse => {
let msg = "unused `#[macro_use]` import";
- self.lint_buffer.buffer_lint(lint, directive.id, directive.span, msg);
+ self.lint_buffer.buffer_lint(UNUSED_IMPORTS, import.id, import.span, msg);
}
_ => {}
}
};
visitor.r.lint_buffer.buffer_lint_with_diagnostic(
- lint::builtin::UNUSED_IMPORTS,
+ UNUSED_IMPORTS,
unused.use_tree_id,
ms,
&msg,
use rustc::hir::map::definitions::*;
use rustc_ast::ast::*;
use rustc_ast::token::{self, Token};
-use rustc_ast::visit;
+use rustc_ast::visit::{self, FnKind};
use rustc_expand::expand::AstFragment;
-use rustc_hir::def_id::DefIndex;
+use rustc_hir::def_id::LocalDefId;
use rustc_span::hygiene::ExpnId;
use rustc_span::symbol::{kw, sym};
use rustc_span::Span;
/// Creates `DefId`s for nodes in the AST.
struct DefCollector<'a> {
definitions: &'a mut Definitions,
- parent_def: DefIndex,
+ parent_def: LocalDefId,
expansion: ExpnId,
}
impl<'a> DefCollector<'a> {
- fn create_def(&mut self, node_id: NodeId, data: DefPathData, span: Span) -> DefIndex {
+ fn create_def(&mut self, node_id: NodeId, data: DefPathData, span: Span) -> LocalDefId {
let parent_def = self.parent_def;
debug!("create_def(node_id={:?}, data={:?}, parent_def={:?})", node_id, data, parent_def);
self.definitions.create_def_with_parent(parent_def, node_id, data, self.expansion, span)
}
- fn with_parent<F: FnOnce(&mut Self)>(&mut self, parent_def: DefIndex, f: F) {
+ fn with_parent<F: FnOnce(&mut Self)>(&mut self, parent_def: LocalDefId, f: F) {
let orig_parent_def = std::mem::replace(&mut self.parent_def, parent_def);
f(self);
self.parent_def = orig_parent_def;
}
- fn visit_async_fn(
- &mut self,
- id: NodeId,
- name: Name,
- span: Span,
- header: &FnHeader,
- generics: &'a Generics,
- decl: &'a FnDecl,
- body: Option<&'a Block>,
- ) {
- let (closure_id, return_impl_trait_id) = match header.asyncness {
- Async::Yes { span: _, closure_id, return_impl_trait_id } => {
- (closure_id, return_impl_trait_id)
- }
- _ => unreachable!(),
- };
-
- // For async functions, we need to create their inner defs inside of a
- // closure to match their desugared representation.
- let fn_def_data = DefPathData::ValueNs(name);
- let fn_def = self.create_def(id, fn_def_data, span);
- return self.with_parent(fn_def, |this| {
- this.create_def(return_impl_trait_id, DefPathData::ImplTrait, span);
-
- visit::walk_generics(this, generics);
- visit::walk_fn_decl(this, decl);
-
- let closure_def = this.create_def(closure_id, DefPathData::ClosureExpr, span);
- this.with_parent(closure_def, |this| {
- if let Some(body) = body {
- visit::walk_block(this, body);
- }
- })
- });
- }
-
fn collect_field(&mut self, field: &'a StructField, index: Option<usize>) {
let index = |this: &Self| {
index.unwrap_or_else(|| {
| ItemKind::ExternCrate(..)
| ItemKind::ForeignMod(..)
| ItemKind::TyAlias(..) => DefPathData::TypeNs(i.ident.name),
- ItemKind::Fn(_, sig, generics, body) if sig.header.asyncness.is_async() => {
- return self.visit_async_fn(
- i.id,
- i.ident.name,
- i.span,
- &sig.header,
- generics,
- &sig.decl,
- body.as_deref(),
- );
- }
ItemKind::Static(..) | ItemKind::Const(..) | ItemKind::Fn(..) => {
DefPathData::ValueNs(i.ident.name)
}
ItemKind::MacroDef(..) => DefPathData::MacroNs(i.ident.name),
- ItemKind::Mac(..) => return self.visit_macro_invoc(i.id),
+ ItemKind::MacCall(..) => return self.visit_macro_invoc(i.id),
ItemKind::GlobalAsm(..) => DefPathData::Misc,
ItemKind::Use(..) => {
return visit::walk_item(self, i);
});
}
+ fn visit_fn(&mut self, fn_kind: FnKind<'a>, span: Span, _: NodeId) {
+ if let FnKind::Fn(_, _, sig, _, body) = fn_kind {
+ if let Async::Yes { closure_id, return_impl_trait_id, .. } = sig.header.asyncness {
+ self.create_def(return_impl_trait_id, DefPathData::ImplTrait, span);
+
+ // For async functions, we need to create their inner defs inside of a
+ // closure to match their desugared representation. Besides that,
+ // we must mirror everything that `visit::walk_fn` below does.
+ self.visit_fn_header(&sig.header);
+ visit::walk_fn_decl(self, &sig.decl);
+ if let Some(body) = body {
+ let closure_def = self.create_def(closure_id, DefPathData::ClosureExpr, span);
+ self.with_parent(closure_def, |this| this.visit_block(body));
+ }
+ return;
+ }
+ }
+
+ visit::walk_fn(self, fn_kind, span);
+ }
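The comment above notes that an async fn's inner defs are created inside a closure def to match the desugaring: the body becomes a generator (closure-like) value behind an opaque `impl Future` return type. A rough non-async analogue of that nesting, using plain `impl Trait` and a closure (this is a sketch for intuition, not the compiler's desugaring):

```rust
// Analogue: the returned opaque type corresponds to `return_impl_trait_id`,
// and the closure value corresponds to `closure_id` holding the body.
fn make_adder(n: u32) -> impl Fn(u32) -> u32 {
    // The closure gets its own def, nested under the function's def,
    // much like an async body's generator does.
    move |x| x + n
}

fn main() {
    let add2 = make_adder(2);
    assert_eq!(add2(40), 42);
}
```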
+
fn visit_use_tree(&mut self, use_tree: &'a UseTree, id: NodeId, _nested: bool) {
self.create_def(id, DefPathData::Misc, use_tree.span);
visit::walk_use_tree(self, use_tree, id);
}
fn visit_foreign_item(&mut self, foreign_item: &'a ForeignItem) {
- if let ForeignItemKind::Macro(_) = foreign_item.kind {
+ if let ForeignItemKind::MacCall(_) = foreign_item.kind {
return self.visit_macro_invoc(foreign_item.id);
}
fn visit_assoc_item(&mut self, i: &'a AssocItem, ctxt: visit::AssocCtxt) {
let def_data = match &i.kind {
- AssocItemKind::Fn(_, FnSig { header, decl }, generics, body)
- if header.asyncness.is_async() =>
- {
- return self.visit_async_fn(
- i.id,
- i.ident.name,
- i.span,
- header,
- generics,
- decl,
- body.as_deref(),
- );
- }
- AssocItemKind::Fn(..) | AssocItemKind::Const(..) | AssocItemKind::Static(..) => {
- DefPathData::ValueNs(i.ident.name)
- }
+ AssocItemKind::Fn(..) | AssocItemKind::Const(..) => DefPathData::ValueNs(i.ident.name),
AssocItemKind::TyAlias(..) => DefPathData::TypeNs(i.ident.name),
- AssocItemKind::Macro(..) => return self.visit_macro_invoc(i.id),
+ AssocItemKind::MacCall(..) => return self.visit_macro_invoc(i.id),
};
let def = self.create_def(i.id, def_data, i.span);
fn visit_pat(&mut self, pat: &'a Pat) {
match pat.kind {
- PatKind::Mac(..) => return self.visit_macro_invoc(pat.id),
+ PatKind::MacCall(..) => return self.visit_macro_invoc(pat.id),
_ => visit::walk_pat(self, pat),
}
}
fn visit_expr(&mut self, expr: &'a Expr) {
let parent_def = match expr.kind {
- ExprKind::Mac(..) => return self.visit_macro_invoc(expr.id),
+ ExprKind::MacCall(..) => return self.visit_macro_invoc(expr.id),
ExprKind::Closure(_, asyncness, ..) => {
// Async closures desugar to closures inside of closures, so
// we must create two defs.
fn visit_ty(&mut self, ty: &'a Ty) {
match ty.kind {
- TyKind::Mac(..) => return self.visit_macro_invoc(ty.id),
+ TyKind::MacCall(..) => return self.visit_macro_invoc(ty.id),
TyKind::ImplTrait(node_id, _) => {
self.create_def(node_id, DefPathData::ImplTrait, ty.span);
}
fn visit_stmt(&mut self, stmt: &'a Stmt) {
match stmt.kind {
- StmtKind::Mac(..) => self.visit_macro_invoc(stmt.id),
+ StmtKind::MacCall(..) => self.visit_macro_invoc(stmt.id),
_ => visit::walk_stmt(self, stmt),
}
}
fn visit_token(&mut self, t: Token) {
if let token::Interpolated(nt) = t.kind {
if let token::NtExpr(ref expr) = *nt {
- if let ExprKind::Mac(..) = expr.kind {
+ if let ExprKind::MacCall(..) = expr.kind {
self.visit_macro_invoc(expr.id);
}
}
use std::cmp::Reverse;
+use std::ptr;
use log::debug;
use rustc::bug;
-use rustc::session::Session;
use rustc::ty::{self, DefIdTree};
use rustc_ast::ast::{self, Ident, Path};
use rustc_ast::util::lev_distance::find_best_match_for_name;
use rustc_hir::def::Namespace::{self, *};
use rustc_hir::def::{self, CtorKind, CtorOf, DefKind, NonMacroAttrKind};
use rustc_hir::def_id::{DefId, CRATE_DEF_INDEX, LOCAL_CRATE};
+use rustc_session::Session;
use rustc_span::hygiene::MacroKind;
use rustc_span::source_map::SourceMap;
use rustc_span::symbol::{kw, Symbol};
use rustc_span::{BytePos, MultiSpan, Span};
-use crate::imports::{ImportDirective, ImportDirectiveSubclass, ImportResolver};
+use crate::imports::{Import, ImportKind, ImportResolver};
use crate::path_names_to_string;
use crate::{AmbiguityError, AmbiguityErrorMisc, AmbiguityKind};
-use crate::{BindingError, CrateLint, HasGenericParams, LegacyScope, Module, ModuleOrUniformRoot};
+use crate::{
+ BindingError, CrateLint, HasGenericParams, MacroRulesScope, Module, ModuleOrUniformRoot,
+};
use crate::{NameBinding, NameBindingKind, PrivacyError, VisResolutionError};
use crate::{ParentScope, PathResult, ResolutionError, Resolver, Scope, ScopeSet, Segment};
self.session,
span,
E0409,
- "variable `{}` is bound in inconsistent \
- ways within the same match arm",
+ "variable `{}` is bound inconsistently across alternatives separated by `|`",
variable_name
);
err.span_label(span, "bound in different ways");
}
}
}
- Scope::MacroRules(legacy_scope) => {
- if let LegacyScope::Binding(legacy_binding) = legacy_scope {
- let res = legacy_binding.binding.res();
+ Scope::MacroRules(macro_rules_scope) => {
+ if let MacroRulesScope::Binding(macro_rules_binding) = macro_rules_scope {
+ let res = macro_rules_binding.binding.res();
if filter_fn(res) {
suggestions
- .push(TypoSuggestion::from_res(legacy_binding.ident.name, res))
+ .push(TypoSuggestion::from_res(macro_rules_binding.ident.name, res))
}
}
}
let msg = format!("unsafe traits like `{}` should be implemented explicitly", ident);
err.span_note(ident.span, &msg);
}
- if self.macro_names.contains(&ident.modern()) {
+ if self.macro_names.contains(&ident.normalize_to_macros_2_0()) {
err.help("have you added the `#[macro_use]` on the module/import?");
}
}
err.emit();
}
- crate fn report_privacy_error(&self, privacy_error: &PrivacyError<'_>) {
- let PrivacyError { ident, binding, .. } = *privacy_error;
- let session = &self.session;
- let mk_struct_span_error = |is_constructor| {
- let mut descr = binding.res().descr().to_string();
- if is_constructor {
- descr += " constructor";
- }
- if binding.is_import() {
- descr += " import";
- }
-
- let mut err =
- struct_span_err!(session, ident.span, E0603, "{} `{}` is private", descr, ident);
-
- err.span_label(ident.span, &format!("this {} is private", descr));
- err.span_note(
- session.source_map().def_span(binding.span),
- &format!("the {} `{}` is defined here", descr, ident),
- );
-
- err
- };
-
- let mut err = if let NameBindingKind::Res(
+ /// If the binding refers to a tuple struct constructor with fields,
+ /// returns the span of its fields.
+ fn ctor_fields_span(&self, binding: &NameBinding<'_>) -> Option<Span> {
+ if let NameBindingKind::Res(
Res::Def(DefKind::Ctor(CtorOf::Struct, CtorKind::Fn), ctor_def_id),
_,
) = binding.kind
{
let def_id = (&*self).parent(ctor_def_id).expect("no parent for a constructor");
if let Some(fields) = self.field_names.get(&def_id) {
- let mut err = mk_struct_span_error(true);
let first_field = fields.first().expect("empty field list in the map");
- err.span_label(
- fields.iter().fold(first_field.span, |acc, field| acc.to(field.span)),
- "a constructor is private if any of the fields is private",
- );
- err
- } else {
- mk_struct_span_error(false)
+ return Some(fields.iter().fold(first_field.span, |acc, field| acc.to(field.span)));
}
- } else {
- mk_struct_span_error(false)
- };
+ }
+ None
+ }
+
+ crate fn report_privacy_error(&self, privacy_error: &PrivacyError<'_>) {
+ let PrivacyError { ident, binding, .. } = *privacy_error;
+
+ let res = binding.res();
+ let ctor_fields_span = self.ctor_fields_span(binding);
+ let plain_descr = res.descr().to_string();
+ let nonimport_descr =
+ if ctor_fields_span.is_some() { plain_descr + " constructor" } else { plain_descr };
+ let import_descr = nonimport_descr.clone() + " import";
+ let get_descr =
+ |b: &NameBinding<'_>| if b.is_import() { &import_descr } else { &nonimport_descr };
+
+ // Print the primary message.
+ let descr = get_descr(binding);
+ let mut err =
+ struct_span_err!(self.session, ident.span, E0603, "{} `{}` is private", descr, ident);
+ err.span_label(ident.span, &format!("this {} is private", descr));
+ if let Some(span) = ctor_fields_span {
+ err.span_label(span, "a constructor is private if any of the fields is private");
+ }
+
+ // Print the whole import chain to make it easier to see what happens.
+ let first_binding = binding;
+ let mut next_binding = Some(binding);
+ let mut next_ident = ident;
+ while let Some(binding) = next_binding {
+ let name = next_ident;
+ next_binding = match binding.kind {
+ _ if res == Res::Err => None,
+ NameBindingKind::Import { binding, import, .. } => match import.kind {
+ _ if binding.span.is_dummy() => None,
+ ImportKind::Single { source, .. } => {
+ next_ident = source;
+ Some(binding)
+ }
+ ImportKind::Glob { .. } | ImportKind::MacroUse => Some(binding),
+ ImportKind::ExternCrate { .. } => None,
+ },
+ _ => None,
+ };
+
+ let first = ptr::eq(binding, first_binding);
+ let descr = get_descr(binding);
+ let msg = format!(
+ "{and_refers_to}the {item} `{name}`{which} is defined here{dots}",
+ and_refers_to = if first { "" } else { "...and refers to " },
+ item = descr,
+ name = name,
+ which = if first { "" } else { " which" },
+ dots = if next_binding.is_some() { "..." } else { "" },
+ );
+ let def_span = self.session.source_map().def_span(binding.span);
+ let mut note_span = MultiSpan::from_span(def_span);
+ if !first && binding.vis == ty::Visibility::Public {
+ note_span.push_span_label(def_span, "consider importing it directly".into());
+ }
+ err.span_note(note_span, &msg);
+ }
err.emit();
}
}
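The new diagnostic code above walks the whole chain of re-exports behind a private name, following `ImportKind::Single` links (and updating the ident across renames) until it reaches the definition. A minimal sketch of the kind of chain it walks, with all names invented for illustration:

```rust
mod a {
    pub const X: u32 = 1; // the underlying definition
}
mod b {
    pub use super::a::X; // first link in the chain
}
// Second link, with a rename: this is the `ImportKind::Single { source, .. }`
// case where the diagnostic switches to tracking `source` (`X`) instead of
// the user-visible name (`Y`).
pub use b::X as Y;

fn main() {
    assert_eq!(Y, 1);
}
```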
// Sort extern crate names in reverse order to get
- // 1) some consistent ordering for emitted dignostics, and
+ // 1) some consistent ordering for emitted diagnostics, and
// 2) `std` suggestions before `core` suggestions.
let mut extern_crate_names =
self.r.extern_prelude.iter().map(|(ident, _)| ident.name).collect::<Vec<_>>();
/// ```
pub(crate) fn check_for_module_export_macro(
&mut self,
- directive: &'b ImportDirective<'b>,
+ import: &'b Import<'b>,
module: ModuleOrUniformRoot<'b>,
ident: Ident,
) -> Option<(Option<Suggestion>, Vec<String>)> {
let binding = resolution.borrow().binding()?;
if let Res::Def(DefKind::Macro(MacroKind::Bang), _) = binding.res() {
let module_name = crate_module.kind.name().unwrap();
- let import = match directive.subclass {
- ImportDirectiveSubclass::SingleImport { source, target, .. }
- if source != target =>
- {
+ let import_snippet = match import.kind {
+ ImportKind::Single { source, target, .. } if source != target => {
format!("{} as {}", source, target)
}
_ => format!("{}", ident),
};
let mut corrections: Vec<(Span, String)> = Vec::new();
- if !directive.is_nested() {
+ if !import.is_nested() {
// Assume this is the easy case of `use issue_59764::foo::makro;` and just remove
// intermediate segments.
- corrections.push((directive.span, format!("{}::{}", module_name, import)));
+ corrections.push((import.span, format!("{}::{}", module_name, import_snippet)));
} else {
// Find the binding span (and any trailing commas and spaces).
// ie. `use a::b::{c, d, e};`
// ^^^
let (found_closing_brace, binding_span) = find_span_of_binding_until_next_binding(
self.r.session,
- directive.span,
- directive.use_span,
+ import.span,
+ import.use_span,
);
debug!(
"check_for_module_export_macro: found_closing_brace={:?} binding_span={:?}",
let (has_nested, after_crate_name) = find_span_immediately_after_crate_name(
self.r.session,
module_name,
- directive.use_span,
+ import.use_span,
);
debug!(
"check_for_module_export_macro: has_nested={:?} after_crate_name={:?}",
start_point,
if has_nested {
// In this case, `start_snippet` must equal '{'.
- format!("{}{}, ", start_snippet, import)
+ format!("{}{}, ", start_snippet, import_snippet)
} else {
// In this case, add a `{`, then the moved import, then whatever
// was there before.
- format!("{{{}, {}", import, start_snippet)
+ format!("{{{}, {}", import_snippet, start_snippet)
},
));
}
//! A bunch of methods and structures more or less related to resolving imports.
-use ImportDirectiveSubclass::*;
-
use crate::diagnostics::Suggestion;
use crate::Determinacy::{self, *};
use crate::Namespace::{self, MacroNS, TypeNS};
use crate::{NameBinding, NameBindingKind, PathResult, PrivacyError, ToNameBinding};
use rustc::hir::exports::Export;
-use rustc::lint::builtin::{PUB_USE_OF_PRIVATE_EXTERN_CRATE, UNUSED_IMPORTS};
use rustc::ty;
use rustc::{bug, span_bug};
use rustc_ast::ast::{Ident, Name, NodeId};
use rustc_errors::{pluralize, struct_span_err, Applicability};
use rustc_hir::def::{self, PartialRes};
use rustc_hir::def_id::DefId;
+use rustc_session::lint::builtin::{PUB_USE_OF_PRIVATE_EXTERN_CRATE, UNUSED_IMPORTS};
use rustc_session::lint::BuiltinLintDiagnostics;
use rustc_session::DiagnosticMessageId;
use rustc_span::hygiene::ExpnId;
type Res = def::Res<NodeId>;
-/// Contains data for specific types of import directives.
+/// Contains data for specific kinds of imports.
#[derive(Clone, Debug)]
-pub enum ImportDirectiveSubclass<'a> {
- SingleImport {
+pub enum ImportKind<'a> {
+ Single {
/// `source` in `use prefix::source as target`.
source: Ident,
/// `target` in `use prefix::source as target`.
/// Did this import result from a nested import? ie. `use foo::{bar, baz};`
nested: bool,
},
- GlobImport {
+ Glob {
is_prelude: bool,
max_vis: Cell<ty::Visibility>, // The visibility of the greatest re-export.
// n.b. `max_vis` is only used in `finalize_import` to check for re-export errors.
MacroUse,
}
-/// One import directive.
+/// One import.
#[derive(Debug, Clone)]
-crate struct ImportDirective<'a> {
- /// The ID of the `extern crate`, `UseTree` etc that imported this `ImportDirective`.
+crate struct Import<'a> {
+ pub kind: ImportKind<'a>,
+
+ /// The ID of the `extern crate`, `UseTree` etc that imported this `Import`.
///
- /// In the case where the `ImportDirective` was expanded from a "nested" use tree,
+ /// In the case where the `Import` was expanded from a "nested" use tree,
/// this id is the ID of the leaf tree. For example:
///
    /// ```ignore (pacify the merciless tidy)
/// use foo::bar::{a, b}
/// ```
///
- /// If this is the import directive for `foo::bar::a`, we would have the ID of the `UseTree`
+ /// If this is the import for `foo::bar::a`, we would have the ID of the `UseTree`
/// for `a` in this field.
pub id: NodeId,
pub module_path: Vec<Segment>,
/// The resolution of `module_path`.
pub imported_module: Cell<Option<ModuleOrUniformRoot<'a>>>,
- pub subclass: ImportDirectiveSubclass<'a>,
pub vis: Cell<ty::Visibility>,
pub used: Cell<bool>,
}
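The doc comment on `id` above says a nested use tree produces one `Import` per leaf, each carrying the `NodeId` of its own leaf `UseTree`. A concrete example of such a tree (module and item names are illustrative):

```rust
mod foo {
    pub mod bar {
        pub const A: u32 = 1;
        pub const B: u32 = 2;
    }
}
// One use item, two leaves: this expands to two separate imports,
// one for `A` and one for `B`, each with its own leaf-tree id.
use foo::bar::{A, B};

fn main() {
    assert_eq!(A + B, 3);
}
```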
-impl<'a> ImportDirective<'a> {
+impl<'a> Import<'a> {
pub fn is_glob(&self) -> bool {
- match self.subclass {
- ImportDirectiveSubclass::GlobImport { .. } => true,
+ match self.kind {
+ ImportKind::Glob { .. } => true,
_ => false,
}
}
pub fn is_nested(&self) -> bool {
- match self.subclass {
- ImportDirectiveSubclass::SingleImport { nested, .. } => nested,
+ match self.kind {
+ ImportKind::Single { nested, .. } => nested,
_ => false,
}
}
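For reference, the surface forms behind the two `ImportKind` variants matched above (a sketch with invented names; the resolver itself never sees source this simple):

```rust
mod prefix {
    pub fn source() -> u32 { 7 }
    pub const K: u32 = 1;
}
// `ImportKind::Single { source, target, .. }` with `source != target`:
use prefix::source as target;
// `ImportKind::Glob { .. }` (`is_glob()` returns true for this one):
use prefix::*;

fn main() {
    assert_eq!(target(), 7);
    assert_eq!(K, 1);
}
```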
/// Records information about the resolution of a name in a namespace of a module.
pub struct NameResolution<'a> {
/// Single imports that may define the name in the namespace.
- /// Import directives are arena-allocated, so it's ok to use pointers as keys.
- single_imports: FxHashSet<PtrKey<'a, ImportDirective<'a>>>,
+ /// Imports are arena-allocated, so it's ok to use pointers as keys.
+ single_imports: FxHashSet<PtrKey<'a, Import<'a>>>,
/// The least shadowable known binding for this name, or None if there are no known bindings.
pub binding: Option<&'a NameBinding<'a>>,
shadowed_glob: Option<&'a NameBinding<'a>>,
})
}
- crate fn add_single_import(&mut self, directive: &'a ImportDirective<'a>) {
- self.single_imports.insert(PtrKey(directive));
+ crate fn add_single_import(&mut self, import: &'a Import<'a>) {
+ self.single_imports.insert(PtrKey(import));
}
}
}
}
- if !self.is_accessible_from(binding.vis, parent_scope.module) &&
+ if !(self.is_accessible_from(binding.vis, parent_scope.module) ||
// Remove this together with `PUB_USE_OF_PRIVATE_EXTERN_CRATE`
- !(self.last_import_segment && binding.is_extern_crate())
+ (self.last_import_segment && binding.is_extern_crate()))
{
self.privacy_errors.push(PrivacyError {
ident,
single_import.imported_module.get(),
return Err((Undetermined, Weak::No))
);
- let ident = match single_import.subclass {
- SingleImport { source, .. } => source,
+ let ident = match single_import.kind {
+ ImportKind::Single { source, .. } => source,
_ => unreachable!(),
};
match self.resolve_ident_in_module(
None => return Err((Undetermined, Weak::Yes)),
};
let tmp_parent_scope;
- let (mut adjusted_parent_scope, mut ident) = (parent_scope, ident.modern());
+ let (mut adjusted_parent_scope, mut ident) =
+ (parent_scope, ident.normalize_to_macros_2_0());
match ident.span.glob_adjust(module.expansion, glob_import.span) {
Some(Some(def)) => {
tmp_parent_scope =
Err((Determined, Weak::No))
}
- // Given a binding and an import directive that resolves to it,
- // return the corresponding binding defined by the import directive.
+ // Given a binding and an import that resolves to it,
+ // return the corresponding binding defined by the import.
crate fn import(
&self,
binding: &'a NameBinding<'a>,
- directive: &'a ImportDirective<'a>,
+ import: &'a Import<'a>,
) -> &'a NameBinding<'a> {
- let vis = if binding.pseudo_vis().is_at_least(directive.vis.get(), self) ||
+ let vis = if binding.pseudo_vis().is_at_least(import.vis.get(), self) ||
// cf. `PUB_USE_OF_PRIVATE_EXTERN_CRATE`
- !directive.is_glob() && binding.is_extern_crate()
+ !import.is_glob() && binding.is_extern_crate()
{
- directive.vis.get()
+ import.vis.get()
} else {
binding.pseudo_vis()
};
- if let GlobImport { ref max_vis, .. } = directive.subclass {
- if vis == directive.vis.get() || vis.is_at_least(max_vis.get(), self) {
+ if let ImportKind::Glob { ref max_vis, .. } = import.kind {
+ if vis == import.vis.get() || vis.is_at_least(max_vis.get(), self) {
max_vis.set(vis)
}
}
self.arenas.alloc_name_binding(NameBinding {
- kind: NameBindingKind::Import { binding, directive, used: Cell::new(false) },
+ kind: NameBindingKind::Import { binding, import, used: Cell::new(false) },
ambiguity: None,
- span: directive.span,
+ span: import.span,
vis,
- expansion: directive.parent_scope.expansion,
+ expansion: import.parent_scope.expansion,
})
}
};
// Define `binding` in `module`s glob importers.
- for directive in module.glob_importers.borrow_mut().iter() {
+ for import in module.glob_importers.borrow_mut().iter() {
let mut ident = key.ident;
- let scope = match ident.span.reverse_glob_adjust(module.expansion, directive.span) {
+ let scope = match ident.span.reverse_glob_adjust(module.expansion, import.span) {
Some(Some(def)) => self.macro_def_scope(def),
- Some(None) => directive.parent_scope.module,
+ Some(None) => import.parent_scope.module,
None => continue,
};
if self.is_accessible_from(binding.vis, scope) {
- let imported_binding = self.import(binding, directive);
+ let imported_binding = self.import(binding, import);
let key = BindingKey { ident, ..key };
- let _ = self.try_define(directive.parent_scope.module, key, imported_binding);
+ let _ = self.try_define(import.parent_scope.module, key, imported_binding);
}
}
// Define a "dummy" resolution containing a Res::Err as a placeholder for a
// failed resolution
- fn import_dummy_binding(&mut self, directive: &'a ImportDirective<'a>) {
- if let SingleImport { target, .. } = directive.subclass {
+ fn import_dummy_binding(&mut self, import: &'a Import<'a>) {
+ if let ImportKind::Single { target, .. } = import.kind {
let dummy_binding = self.dummy_binding;
- let dummy_binding = self.import(dummy_binding, directive);
+ let dummy_binding = self.import(dummy_binding, import);
self.per_ns(|this, ns| {
let key = this.new_key(target, ns);
- let _ = this.try_define(directive.parent_scope.module, key, dummy_binding);
+ let _ = this.try_define(import.parent_scope.module, key, dummy_binding);
// Consider erroneous imports used to avoid duplicate diagnostics.
this.record_use(target, ns, dummy_binding, false);
});
.chain(indeterminate_imports.into_iter().map(|i| (true, i)))
{
if let Some(err) = self.finalize_import(import) {
- if let SingleImport { source, ref source_bindings, .. } = import.subclass {
+ if let ImportKind::Single { source, ref source_bindings, .. } = import.kind {
if source.name == kw::SelfLower {
// Silence `unresolved import` error if E0429 is already emitted
if let Err(Determined) = source_bindings.value_ns.get() {
if seen_spans.insert(err.span) {
let path = import_path_to_string(
&import.module_path.iter().map(|seg| seg.ident).collect::<Vec<_>>(),
- &import.subclass,
+ &import.kind,
err.span,
);
errors.push((path, err));
self.r.used_imports.insert((import.id, TypeNS));
let path = import_path_to_string(
&import.module_path.iter().map(|seg| seg.ident).collect::<Vec<_>>(),
- &import.subclass,
+ &import.kind,
import.span,
);
let err = UnresolvedImportError {
/// Attempts to resolve the given import, returning true if its resolution is determined.
/// If successful, the resolved bindings are written into the module.
- fn resolve_import(&mut self, directive: &'b ImportDirective<'b>) -> bool {
+ fn resolve_import(&mut self, import: &'b Import<'b>) -> bool {
debug!(
"(resolving import for module) resolving import `{}::...` in `{}`",
- Segment::names_to_string(&directive.module_path),
- module_to_string(directive.parent_scope.module).unwrap_or_else(|| "???".to_string()),
+ Segment::names_to_string(&import.module_path),
+ module_to_string(import.parent_scope.module).unwrap_or_else(|| "???".to_string()),
);
- let module = if let Some(module) = directive.imported_module.get() {
+ let module = if let Some(module) = import.imported_module.get() {
module
} else {
// For better failure detection, pretend that the import will
// not define any names while resolving its module path.
- let orig_vis = directive.vis.replace(ty::Visibility::Invisible);
+ let orig_vis = import.vis.replace(ty::Visibility::Invisible);
let path_res = self.r.resolve_path(
- &directive.module_path,
+ &import.module_path,
None,
- &directive.parent_scope,
+ &import.parent_scope,
false,
- directive.span,
- directive.crate_lint(),
+ import.span,
+ import.crate_lint(),
);
- directive.vis.set(orig_vis);
+ import.vis.set(orig_vis);
match path_res {
PathResult::Module(module) => module,
}
};
- directive.imported_module.set(Some(module));
- let (source, target, source_bindings, target_bindings, type_ns_only) =
- match directive.subclass {
- SingleImport {
- source,
- target,
- ref source_bindings,
- ref target_bindings,
- type_ns_only,
- ..
- } => (source, target, source_bindings, target_bindings, type_ns_only),
- GlobImport { .. } => {
- self.resolve_glob_import(directive);
- return true;
- }
- _ => unreachable!(),
- };
+ import.imported_module.set(Some(module));
+ let (source, target, source_bindings, target_bindings, type_ns_only) = match import.kind {
+ ImportKind::Single {
+ source,
+ target,
+ ref source_bindings,
+ ref target_bindings,
+ type_ns_only,
+ ..
+ } => (source, target, source_bindings, target_bindings, type_ns_only),
+ ImportKind::Glob { .. } => {
+ self.resolve_glob_import(import);
+ return true;
+ }
+ _ => unreachable!(),
+ };
let mut indeterminate = false;
self.r.per_ns(|this, ns| {
if let Err(Undetermined) = source_bindings[ns].get() {
// For better failure detection, pretend that the import will
// not define any names while resolving its module path.
- let orig_vis = directive.vis.replace(ty::Visibility::Invisible);
+ let orig_vis = import.vis.replace(ty::Visibility::Invisible);
let binding = this.resolve_ident_in_module(
module,
source,
ns,
- &directive.parent_scope,
+ &import.parent_scope,
false,
- directive.span,
+ import.span,
);
- directive.vis.set(orig_vis);
+ import.vis.set(orig_vis);
source_bindings[ns].set(binding);
} else {
return;
};
- let parent = directive.parent_scope.module;
+ let parent = import.parent_scope.module;
match source_bindings[ns].get() {
Err(Undetermined) => indeterminate = true,
// Don't update the resolution, because it was never added.
Err(Determined) => {
let key = this.new_key(target, ns);
this.update_resolution(parent, key, |_, resolution| {
- resolution.single_imports.remove(&PtrKey(directive));
+ resolution.single_imports.remove(&PtrKey(import));
});
}
Ok(binding) if !binding.is_importable() => {
let msg = format!("`{}` is not directly importable", target);
- struct_span_err!(this.session, directive.span, E0253, "{}", &msg)
- .span_label(directive.span, "cannot be imported directly")
+ struct_span_err!(this.session, import.span, E0253, "{}", &msg)
+ .span_label(import.span, "cannot be imported directly")
.emit();
// Do not import this illegal binding. Import a dummy binding and pretend
// everything is fine
- this.import_dummy_binding(directive);
+ this.import_dummy_binding(import);
}
Ok(binding) => {
- let imported_binding = this.import(binding, directive);
+ let imported_binding = this.import(binding, import);
target_bindings[ns].set(Some(imported_binding));
this.define(parent, target, ns, imported_binding);
}
///
/// Optionally returns an unresolved import error. This error is buffered and used to
/// consolidate multiple unresolved import errors into a single diagnostic.
- fn finalize_import(
- &mut self,
- directive: &'b ImportDirective<'b>,
- ) -> Option<UnresolvedImportError> {
- let orig_vis = directive.vis.replace(ty::Visibility::Invisible);
+ fn finalize_import(&mut self, import: &'b Import<'b>) -> Option<UnresolvedImportError> {
+ let orig_vis = import.vis.replace(ty::Visibility::Invisible);
let prev_ambiguity_errors_len = self.r.ambiguity_errors.len();
let path_res = self.r.resolve_path(
- &directive.module_path,
+ &import.module_path,
None,
- &directive.parent_scope,
+ &import.parent_scope,
true,
- directive.span,
- directive.crate_lint(),
+ import.span,
+ import.crate_lint(),
);
let no_ambiguity = self.r.ambiguity_errors.len() == prev_ambiguity_errors_len;
- directive.vis.set(orig_vis);
+ import.vis.set(orig_vis);
if let PathResult::Failed { .. } | PathResult::NonModule(..) = path_res {
// Consider erroneous imports used to avoid duplicate diagnostics.
- self.r.used_imports.insert((directive.id, TypeNS));
+ self.r.used_imports.insert((import.id, TypeNS));
}
let module = match path_res {
PathResult::Module(module) => {
// Consistency checks, analogous to `finalize_macro_resolutions`.
- if let Some(initial_module) = directive.imported_module.get() {
+ if let Some(initial_module) = import.imported_module.get() {
if !ModuleOrUniformRoot::same_def(module, initial_module) && no_ambiguity {
- span_bug!(directive.span, "inconsistent resolution for an import");
+ span_bug!(import.span, "inconsistent resolution for an import");
}
} else {
if self.r.privacy_errors.is_empty() {
let msg = "cannot determine resolution for the import";
let msg_note = "import resolution is stuck, try simplifying other imports";
- self.r.session.struct_span_err(directive.span, msg).note(msg_note).emit();
+ self.r.session.struct_span_err(import.span, msg).note(msg_note).emit();
}
}
}
PathResult::Failed { is_error_from_last_segment: false, span, label, suggestion } => {
if no_ambiguity {
- assert!(directive.imported_module.get().is_none());
+ assert!(import.imported_module.get().is_none());
self.r
.report_error(span, ResolutionError::FailedToResolve { label, suggestion });
}
}
PathResult::Failed { is_error_from_last_segment: true, span, label, suggestion } => {
if no_ambiguity {
- assert!(directive.imported_module.get().is_none());
+ assert!(import.imported_module.get().is_none());
let err = match self.make_path_suggestion(
span,
- directive.module_path.clone(),
- &directive.parent_scope,
+ import.module_path.clone(),
+ &import.parent_scope,
) {
Some((suggestion, note)) => UnresolvedImportError {
span,
}
PathResult::NonModule(path_res) if path_res.base_res() == Res::Err => {
if no_ambiguity {
- assert!(directive.imported_module.get().is_none());
+ assert!(import.imported_module.get().is_none());
}
// The error was already reported earlier.
return None;
PathResult::Indeterminate | PathResult::NonModule(..) => unreachable!(),
};
- let (ident, target, source_bindings, target_bindings, type_ns_only) = match directive
- .subclass
- {
- SingleImport {
+ let (ident, target, source_bindings, target_bindings, type_ns_only) = match import.kind {
+ ImportKind::Single {
source,
target,
ref source_bindings,
type_ns_only,
..
} => (source, target, source_bindings, target_bindings, type_ns_only),
- GlobImport { is_prelude, ref max_vis } => {
- if directive.module_path.len() <= 1 {
+ ImportKind::Glob { is_prelude, ref max_vis } => {
+ if import.module_path.len() <= 1 {
// HACK(eddyb) `lint_if_path_starts_with_module` needs at least
// 2 segments, so the `resolve_path` above won't trigger it.
- let mut full_path = directive.module_path.clone();
+ let mut full_path = import.module_path.clone();
full_path.push(Segment::from_ident(Ident::invalid()));
self.r.lint_if_path_starts_with_module(
- directive.crate_lint(),
+ import.crate_lint(),
&full_path,
- directive.span,
+ import.span,
None,
);
}
if let ModuleOrUniformRoot::Module(module) = module {
- if module.def_id() == directive.parent_scope.module.def_id() {
+ if module.def_id() == import.parent_scope.module.def_id() {
// Importing a module into itself is not allowed.
return Some(UnresolvedImportError {
- span: directive.span,
+ span: import.span,
label: Some(String::from("cannot glob-import a module into itself")),
note: Vec::new(),
suggestion: None,
}
if !is_prelude &&
max_vis.get() != ty::Visibility::Invisible && // Allow empty globs.
- !max_vis.get().is_at_least(directive.vis.get(), &*self)
+ !max_vis.get().is_at_least(import.vis.get(), &*self)
{
let msg = "glob import doesn't reexport anything because no candidate is public enough";
- self.r.lint_buffer.buffer_lint(
- UNUSED_IMPORTS,
- directive.id,
- directive.span,
- msg,
- );
+ self.r.lint_buffer.buffer_lint(UNUSED_IMPORTS, import.id, import.span, msg);
}
return None;
}
let mut all_ns_err = true;
self.r.per_ns(|this, ns| {
if !type_ns_only || ns == TypeNS {
- let orig_vis = directive.vis.replace(ty::Visibility::Invisible);
+ let orig_vis = import.vis.replace(ty::Visibility::Invisible);
let orig_blacklisted_binding =
mem::replace(&mut this.blacklisted_binding, target_bindings[ns].get());
let orig_last_import_segment = mem::replace(&mut this.last_import_segment, true);
module,
ident,
ns,
- &directive.parent_scope,
+ &import.parent_scope,
true,
- directive.span,
+ import.span,
);
this.last_import_segment = orig_last_import_segment;
this.blacklisted_binding = orig_blacklisted_binding;
- directive.vis.set(orig_vis);
+ import.vis.set(orig_vis);
match binding {
Ok(binding) => {
ident,
ns,
target_binding,
- directive.module_path.is_empty(),
+ import.module_path.is_empty(),
);
}
}
let res = binding.res();
if let Ok(initial_res) = initial_res {
if res != initial_res && this.ambiguity_errors.is_empty() {
- span_bug!(directive.span, "inconsistent resolution for an import");
+ span_bug!(import.span, "inconsistent resolution for an import");
}
} else {
if res != Res::Err
let msg_note =
"import resolution is stuck, try simplifying other imports";
this.session
- .struct_span_err(directive.span, msg)
+ .struct_span_err(import.span, msg)
.note(msg_note)
.emit();
}
// single import (see test `issue-55884-2.rs`). In theory single imports should
// always block globs, even if they are not yet resolved, so that this kind of
// self-inconsistent resolution never happens.
- // Reenable the assert when the issue is fixed.
+ // Re-enable the assert when the issue is fixed.
// assert!(result[ns].get().is_err());
}
}
module,
ident,
ns,
- &directive.parent_scope,
+ &import.parent_scope,
true,
- directive.span,
+ import.span,
);
if binding.is_ok() {
all_ns_failed = false;
});
let (suggestion, note) =
- match self.check_for_module_export_macro(directive, module, ident) {
+ match self.check_for_module_export_macro(import, module, ident) {
Some((suggestion, note)) => (suggestion.or(lev_suggestion), note),
_ => (lev_suggestion, Vec::new()),
};
};
Some(UnresolvedImportError {
- span: directive.span,
+ span: import.span,
label: Some(label),
note,
suggestion,
})
} else {
// `resolve_ident_in_module` reported a privacy error.
- self.r.import_dummy_binding(directive);
+ self.r.import_dummy_binding(import);
None
};
}
let mut any_successful_reexport = false;
self.r.per_ns(|this, ns| {
if let Ok(binding) = source_bindings[ns].get() {
- let vis = directive.vis.get();
+ let vis = import.vis.get();
if !binding.pseudo_vis().is_at_least(vis, &*this) {
reexport_error = Some((ns, binding));
} else {
);
self.r.lint_buffer.buffer_lint(
PUB_USE_OF_PRIVATE_EXTERN_CRATE,
- directive.id,
- directive.span,
+ import.id,
+ import.span,
&msg,
);
} else if ns == TypeNS {
struct_span_err!(
self.r.session,
- directive.span,
+ import.span,
E0365,
"`{}` is private, and cannot be re-exported",
ident
)
- .span_label(directive.span, format!("re-export of private `{}`", ident))
+ .span_label(import.span, format!("re-export of private `{}`", ident))
.note(&format!("consider declaring type or module `{}` with `pub`", ident))
.emit();
} else {
let msg = format!("`{}` is private, and cannot be re-exported", ident);
let note_msg =
format!("consider marking `{}` as `pub` in the imported module", ident,);
- struct_span_err!(self.r.session, directive.span, E0364, "{}", &msg)
- .span_note(directive.span, ¬e_msg)
+ struct_span_err!(self.r.session, import.span, E0364, "{}", &msg)
+ .span_note(import.span, ¬e_msg)
.emit();
}
}
- if directive.module_path.len() <= 1 {
+ if import.module_path.len() <= 1 {
// HACK(eddyb) `lint_if_path_starts_with_module` needs at least
// 2 segments, so the `resolve_path` above won't trigger it.
- let mut full_path = directive.module_path.clone();
+ let mut full_path = import.module_path.clone();
full_path.push(Segment::from_ident(ident));
self.r.per_ns(|this, ns| {
if let Ok(binding) = source_bindings[ns].get() {
this.lint_if_path_starts_with_module(
- directive.crate_lint(),
+ import.crate_lint(),
&full_path,
- directive.span,
+ import.span,
Some(binding),
);
}
// this may resolve to either a value or a type, but for documentation
// purposes it's good enough to just favor one over the other.
self.r.per_ns(|this, ns| {
- if let Some(binding) = source_bindings[ns].get().ok() {
- this.import_res_map.entry(directive.id).or_default()[ns] = Some(binding.res());
+ if let Ok(binding) = source_bindings[ns].get() {
+ this.import_res_map.entry(import.id).or_default()[ns] = Some(binding.res());
}
});
- self.check_for_redundant_imports(
- ident,
- directive,
- source_bindings,
- target_bindings,
- target,
- );
+ self.check_for_redundant_imports(ident, import, source_bindings, target_bindings, target);
debug!("(resolving single import) successfully resolved import");
None
fn check_for_redundant_imports(
&mut self,
ident: Ident,
- directive: &'b ImportDirective<'b>,
+ import: &'b Import<'b>,
source_bindings: &PerNS<Cell<Result<&'b NameBinding<'b>, Determinacy>>>,
target_bindings: &PerNS<Cell<Option<&'b NameBinding<'b>>>>,
target: Ident,
) {
// Skip if the import was produced by a macro.
- if directive.parent_scope.expansion != ExpnId::root() {
+ if import.parent_scope.expansion != ExpnId::root() {
return;
}
// Skip if we are inside a named module (in contrast to an anonymous
// module defined by a block).
- if let ModuleKind::Def(..) = directive.parent_scope.module.kind {
+ if let ModuleKind::Def(..) = import.parent_scope.module.kind {
return;
}
let mut redundant_span = PerNS { value_ns: None, type_ns: None, macro_ns: None };
self.r.per_ns(|this, ns| {
- if let Some(binding) = source_bindings[ns].get().ok() {
+ if let Ok(binding) = source_bindings[ns].get() {
if binding.res() == Res::Err {
return;
}
match this.early_resolve_ident_in_lexical_scope(
target,
ScopeSet::All(ns, false),
- &directive.parent_scope,
+ &import.parent_scope,
false,
false,
- directive.span,
+ import.span,
) {
Ok(other_binding) => {
is_redundant[ns] = Some(
redundant_spans.dedup();
self.r.lint_buffer.buffer_lint_with_diagnostic(
UNUSED_IMPORTS,
- directive.id,
- directive.span,
+ import.id,
+ import.span,
&format!("the item `{}` is imported redundantly", ident),
BuiltinLintDiagnostics::RedundantImport(redundant_spans, ident),
);
}
}
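The lint this helper buffers fires, for example, when an item already reachable (here, through the standard prelude) is imported again. A sketch of the user-facing behavior; the program still compiles, the import merely warns:

```rust
// `Vec` is already in the standard prelude, so this `use` is redundant;
// rustc may report it through the `unused_imports` lint machinery above.
use std::vec::Vec;

fn main() {
    let v: Vec<i32> = Vec::new();
    assert!(v.is_empty());
}
```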
- fn resolve_glob_import(&mut self, directive: &'b ImportDirective<'b>) {
- let module = match directive.imported_module.get().unwrap() {
+ fn resolve_glob_import(&mut self, import: &'b Import<'b>) {
+ let module = match import.imported_module.get().unwrap() {
ModuleOrUniformRoot::Module(module) => module,
_ => {
- self.r.session.span_err(directive.span, "cannot glob-import all possible crates");
+ self.r.session.span_err(import.span, "cannot glob-import all possible crates");
return;
}
};
if module.is_trait() {
- self.r.session.span_err(directive.span, "items in traits are not importable.");
+ self.r.session.span_err(import.span, "items in traits are not importable.");
return;
- } else if module.def_id() == directive.parent_scope.module.def_id() {
+ } else if module.def_id() == import.parent_scope.module.def_id() {
return;
- } else if let GlobImport { is_prelude: true, .. } = directive.subclass {
+ } else if let ImportKind::Glob { is_prelude: true, .. } = import.kind {
self.r.prelude = Some(module);
return;
}
// Add to module's glob_importers
- module.glob_importers.borrow_mut().push(directive);
+ module.glob_importers.borrow_mut().push(import);
// Ensure that `resolutions` isn't borrowed during `try_define`,
// since it might get updated via a glob cycle.
})
.collect::<Vec<_>>();
for (mut key, binding) in bindings {
- let scope = match key.ident.span.reverse_glob_adjust(module.expansion, directive.span) {
+ let scope = match key.ident.span.reverse_glob_adjust(module.expansion, import.span) {
Some(Some(def)) => self.r.macro_def_scope(def),
- Some(None) => directive.parent_scope.module,
+ Some(None) => import.parent_scope.module,
None => continue,
};
if self.r.is_accessible_from(binding.pseudo_vis(), scope) {
- let imported_binding = self.r.import(binding, directive);
- let _ = self.r.try_define(directive.parent_scope.module, key, imported_binding);
+ let imported_binding = self.r.import(binding, import);
+ let _ = self.r.try_define(import.parent_scope.module, key, imported_binding);
}
}
// Record the destination of this import
- self.r.record_partial_res(directive.id, PartialRes::new(module.res().unwrap()));
+ self.r.record_partial_res(import.id, PartialRes::new(module.res().unwrap()));
}
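The `glob_importers` list pushed to above is what lets resolution re-walk importers as a module gains names, so even mutually recursive glob re-exports reach a fixed point. A minimal sketch in surface Rust:

```rust
mod a {
    pub use crate::b::*; // glob cycle: `a` re-exports everything from `b` ...
    pub fn from_a() -> &'static str { "a" }
}
mod b {
    pub use crate::a::*; // ... and `b` re-exports everything from `a`
    pub fn from_b() -> &'static str { "b" }
}

fn main() {
    // Each module sees the other's items through the cyclic globs.
    assert_eq!(a::from_b(), "b");
    assert_eq!(b::from_a(), "a");
}
```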
// Miscellaneous post-processing, including recording re-exports,
if is_good_import || binding.is_macro_def() {
let res = binding.res();
if res != Res::Err {
- if let Some(def_id) = res.opt_def_id() {
- if !def_id.is_local() {
- this.cstore().export_macros_untracked(def_id.krate);
- }
- }
reexports.push(Export { ident, res, span: binding.span, vis: binding.vis });
}
}
- if let NameBindingKind::Import { binding: orig_binding, directive, .. } = binding.kind {
+ if let NameBindingKind::Import { binding: orig_binding, import, .. } = binding.kind {
if ns == TypeNS
&& orig_binding.is_variant()
&& !orig_binding.vis.is_at_least(binding.vis, &*this)
{
- let msg = match directive.subclass {
- ImportDirectiveSubclass::SingleImport { .. } => {
+ let msg = match import.kind {
+ ImportKind::Single { .. } => {
format!("variant `{}` is private and cannot be re-exported", ident)
}
- ImportDirectiveSubclass::GlobImport { .. } => {
+ ImportKind::Glob { .. } => {
let msg = "enum is private and its variants \
cannot be re-exported"
.to_owned();
}
msg
}
- ref s @ _ => bug!("unexpected import subclass {:?}", s),
+ ref s => bug!("unexpected import kind {:?}", s),
};
let mut err = this.session.struct_span_err(binding.span, &msg);
- let imported_module = match directive.imported_module.get() {
+ let imported_module = match import.imported_module.get() {
Some(ModuleOrUniformRoot::Module(module)) => module,
_ => bug!("module should exist"),
};
let parent_module = imported_module.parent.expect("parent should exist");
let resolutions = this.resolutions(parent_module).borrow();
- let enum_path_segment_index = directive.module_path.len() - 1;
- let enum_ident = directive.module_path[enum_path_segment_index].ident;
+ let enum_path_segment_index = import.module_path.len() - 1;
+ let enum_ident = import.module_path[enum_path_segment_index].ident;
let key = this.new_key(enum_ident, TypeNS);
let enum_resolution = resolutions.get(&key).expect("resolution should exist");
}
}
-fn import_path_to_string(
- names: &[Ident],
- subclass: &ImportDirectiveSubclass<'_>,
- span: Span,
-) -> String {
+fn import_path_to_string(names: &[Ident], import_kind: &ImportKind<'_>, span: Span) -> String {
let pos = names.iter().position(|p| span == p.span && p.name != kw::PathRoot);
let global = !names.is_empty() && names[0].name == kw::PathRoot;
if let Some(pos) = pos {
} else {
let names = if global { &names[1..] } else { names };
if names.is_empty() {
- import_directive_subclass_to_string(subclass)
+ import_kind_to_string(import_kind)
} else {
format!(
"{}::{}",
names_to_string(&names.iter().map(|ident| ident.name).collect::<Vec<_>>()),
- import_directive_subclass_to_string(subclass),
+ import_kind_to_string(import_kind),
)
}
}
}
-fn import_directive_subclass_to_string(subclass: &ImportDirectiveSubclass<'_>) -> String {
- match *subclass {
- SingleImport { source, .. } => source.to_string(),
- GlobImport { .. } => "*".to_string(),
- ExternCrate { .. } => "<extern crate>".to_string(),
- MacroUse => "#[macro_use]".to_string(),
+fn import_kind_to_string(import_kind: &ImportKind<'_>) -> String {
+ match import_kind {
+ ImportKind::Single { source, .. } => source.to_string(),
+ ImportKind::Glob { .. } => "*".to_string(),
+ ImportKind::ExternCrate { .. } => "<extern crate>".to_string(),
+ ImportKind::MacroUse => "#[macro_use]".to_string(),
}
}
use crate::{Module, ModuleOrUniformRoot, NameBindingKind, ParentScope, PathResult};
use crate::{ResolutionError, Resolver, Segment, UseError};
-use rustc::{bug, lint, span_bug};
+use rustc::{bug, span_bug};
use rustc_ast::ast::*;
use rustc_ast::ptr::P;
use rustc_ast::util::lev_distance::find_best_match_for_name;
use rustc_hir::def::{self, CtorKind, DefKind, PartialRes, PerNS};
use rustc_hir::def_id::{DefId, CRATE_DEF_INDEX};
use rustc_hir::TraitCandidate;
+use rustc_session::lint;
use rustc_span::symbol::{kw, sym};
use rustc_span::Span;
use smallvec::{smallvec, SmallVec};
| Res::Def(DefKind::Static, _)
| Res::Local(..)
| Res::Def(DefKind::Fn, _)
- | Res::Def(DefKind::Method, _)
+ | Res::Def(DefKind::AssocFn, _)
| Res::Def(DefKind::AssocConst, _)
| Res::SelfCtor(..)
| Res::Def(DefKind::ConstParam, _) => true,
_ => false,
},
PathSource::TraitItem(ns) => match res {
- Res::Def(DefKind::AssocConst, _) | Res::Def(DefKind::Method, _)
+ Res::Def(DefKind::AssocConst, _) | Res::Def(DefKind::AssocFn, _)
if ns == ValueNS =>
{
true
visit::walk_foreign_item(this, foreign_item);
});
}
- ForeignItemKind::Const(..) | ForeignItemKind::Static(..) => {
+ ForeignItemKind::Static(..) => {
self.with_item_rib(HasGenericParams::No, |this| {
visit::walk_foreign_item(this, foreign_item);
});
}
- ForeignItemKind::Macro(..) => {
+ ForeignItemKind::MacCall(..) => {
visit::walk_foreign_item(self, foreign_item);
}
}
let prev = replace(&mut self.diagnostic_metadata.currently_processing_generics, true);
match arg {
GenericArg::Type(ref ty) => {
- // We parse const arguments as path types as we cannot distiguish them during
+ // We parse const arguments as path types as we cannot distinguish them during
// parsing. We try to resolve that ambiguity by attempting resolution in the type
// namespace first, and if that fails we try again in the value namespace. If
// resolution in the value namespace succeeds, we have a generic const argument on
for item in trait_items {
this.with_trait_items(trait_items, |this| {
match &item.kind {
- AssocItemKind::Static(ty, _, default)
- | AssocItemKind::Const(_, ty, default) => {
+ AssocItemKind::Const(_, ty, default) => {
this.visit_ty(ty);
// Only impose the restrictions of `ConstRibKind` for an
// actual constant expression in a provided default.
AssocItemKind::TyAlias(_, generics, _, _) => {
walk_assoc_item(this, generics, item);
}
- AssocItemKind::Macro(_) => {
+ AssocItemKind::MacCall(_) => {
panic!("unexpanded macro in resolve!")
}
};
// do nothing, these are just around to be encoded
}
- ItemKind::Mac(_) => panic!("unexpanded macro in resolve!"),
+ ItemKind::MacCall(_) => panic!("unexpanded macro in resolve!"),
}
}
_ => unreachable!(),
};
- let ident = param.ident.modern();
+ let ident = param.ident.normalize_to_macros_2_0();
debug!("with_generic_param_rib: {}", param.id);
if seen_bindings.contains_key(&ident) {
for item in impl_items {
use crate::ResolutionError::*;
match &item.kind {
- AssocItemKind::Static(..) | AssocItemKind::Const(..) => {
+ AssocItemKind::Const(..) => {
debug!("resolve_implementation AssocItemKind::Const",);
// If this is a trait impl, ensure the const
// exists in trait
},
);
}
- AssocItemKind::Macro(_) => {
+ AssocItemKind::MacCall(_) => {
panic!("unexpanded macro in resolve!")
}
}
// Add the binding to the local ribs, if it doesn't already exist in the bindings map.
// (We must not add it if it's in the bindings map because that breaks the assumptions
// later passes make about or-patterns.)
- let ident = ident.modern_and_legacy();
+ let ident = ident.normalize_to_macro_rules();
let mut bound_iter = bindings.iter().filter(|(_, set)| set.contains(&ident));
// Already bound in a product pattern? e.g. `(a, a)` which is not allowed.
ident: Ident,
has_sub: bool,
) -> Option<Res> {
- let binding =
- self.resolve_ident_in_lexical_scope(ident, ValueNS, None, pat.span)?.item()?;
- let res = binding.res();
+ let ls_binding = self.resolve_ident_in_lexical_scope(ident, ValueNS, None, pat.span)?;
+ let (res, binding) = match ls_binding {
+ LexicalScopeBinding::Item(binding) if binding.is_ambiguity() => {
+ // For ambiguous bindings we don't know all their definitions and cannot check
+ // whether they can be shadowed by fresh bindings or not, so force an error.
+ self.r.record_use(ident, ValueNS, binding, false);
+ return None;
+ }
+ LexicalScopeBinding::Item(binding) => (binding.res(), Some(binding)),
+ LexicalScopeBinding::Res(res) => (res, None),
+ };
// An immutable (no `mut`) by-value (no `ref`) binding pattern without
// a sub pattern (no `@ $pat`) is syntactically ambiguous as it could
let is_syntactic_ambiguity = !has_sub && bm == BindingMode::ByValue(Mutability::Not);
match res {
- Res::Def(DefKind::Ctor(_, CtorKind::Const), _) | Res::Def(DefKind::Const, _)
+ Res::Def(DefKind::Ctor(_, CtorKind::Const), _)
+ | Res::Def(DefKind::Const, _)
+ | Res::Def(DefKind::ConstParam, _)
if is_syntactic_ambiguity =>
{
// Disambiguate in favor of a unit struct/variant or constant pattern.
- self.r.record_use(ident, ValueNS, binding, false);
+ if let Some(binding) = binding {
+ self.r.record_use(ident, ValueNS, binding, false);
+ }
Some(res)
}
Res::Def(DefKind::Ctor(..), _)
ResolutionError::BindingShadowsSomethingUnacceptable(
pat_src.descr(),
ident.name,
- binding,
+ binding.expect("no binding for a ctor or static"),
),
);
None
}
- Res::Def(DefKind::Fn, _) | Res::Err => {
+ Res::Def(DefKind::Fn, _) | Res::Local(..) | Res::Err => {
// These entities are explicitly allowed to be shadowed by fresh bindings.
None
}
- res => {
- span_bug!(
- ident.span,
- "unexpected resolution for an \
- identifier in pattern: {:?}",
- res
- );
- }
+ _ => span_bug!(
+ ident.span,
+ "unexpected resolution for an identifier in pattern: {:?}",
+ res
+ ),
}
}
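The disambiguation above mirrors a surface-language rule: an identifier pattern that resolves to a constant (or unit struct/variant) matches against that item instead of creating a fresh binding. A sketch:

```rust
const ZERO: i32 = 0;

fn classify(n: i32) -> &'static str {
    match n {
        // `ZERO` resolves to the constant above, not a fresh binding --
        // the case resolved "in favor of a unit struct/variant or constant".
        ZERO => "zero",
        _ => "nonzero",
    }
}

fn main() {
    assert_eq!(classify(0), "zero");
    assert_eq!(classify(5), "nonzero");
}
```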
self.diagnostic_metadata.unused_labels.insert(id, label.ident.span);
}
self.with_label_rib(NormalRibKind, |this| {
- let ident = label.ident.modern_and_legacy();
+ let ident = label.ident.normalize_to_macro_rules();
this.label_ribs.last_mut().unwrap().bindings.insert(ident, id);
f(this);
});
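The label ribs populated here are what `break`/`continue` target resolution searches. In surface Rust (a standalone sketch):

```rust
fn count_until_product_exceeds(limit: i32) -> i32 {
    let mut count = 0;
    'outer: for i in 0..10 {
        for j in 0..10 {
            if i * j > limit {
                // `'outer` is resolved through the label ribs to the outer loop.
                break 'outer;
            }
            count += 1;
        }
    }
    count
}

fn main() {
    // 10 iterations at i == 0 (product always 0), then 7 at i == 1.
    assert_eq!(count_until_product_exceeds(6), 17);
}
```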
ExprKind::Break(Some(label), _) | ExprKind::Continue(Some(label)) => {
let node_id = self.search_label(label.ident, |rib, ident| {
- rib.bindings.get(&ident.modern_and_legacy()).cloned()
+ rib.bindings.get(&ident.normalize_to_macro_rules()).cloned()
});
match node_id {
None => {
.is_ok()
{
let def_id = module.def_id().unwrap();
- found_traits.push(TraitCandidate { def_id: def_id, import_ids: smallvec![] });
+ found_traits.push(TraitCandidate { def_id, import_ids: smallvec![] });
}
}
- ident.span = ident.span.modern();
+ ident.span = ident.span.normalize_to_macros_2_0();
let mut search_module = self.parent_scope.module;
loop {
self.get_traits_in_module_containing_item(ident, ns, search_module, &mut found_traits);
trait_name: Ident,
) -> SmallVec<[NodeId; 1]> {
let mut import_ids = smallvec![];
- while let NameBindingKind::Import { directive, binding, .. } = kind {
- self.r.maybe_unused_trait_imports.insert(directive.id);
- self.r.add_to_glob_map(&directive, trait_name);
- import_ids.push(directive.id);
+ while let NameBindingKind::Import { import, binding, .. } = kind {
+ self.r.maybe_unused_trait_imports.insert(import.id);
+ self.r.add_to_glob_map(&import, trait_name);
+ import_ids.push(import.id);
kind = &binding.kind;
}
import_ids
use crate::{CrateLint, Module, ModuleKind, ModuleOrUniformRoot};
use crate::{PathResult, PathSource, Segment};
-use rustc::session::config::nightly_options;
use rustc_ast::ast::{self, Expr, ExprKind, Ident, Item, ItemKind, NodeId, Path, Ty, TyKind};
use rustc_ast::util::lev_distance::find_best_match_for_name;
use rustc_data_structures::fx::FxHashSet;
use rustc_hir::def::{self, CtorKind, DefKind};
use rustc_hir::def_id::{DefId, CRATE_DEF_INDEX};
use rustc_hir::PrimTy;
+use rustc_session::config::nightly_options;
use rustc_span::hygiene::MacroKind;
use rustc_span::symbol::{kw, sym};
use rustc_span::Span;
.unwrap_or(false)
}
Res::Def(DefKind::Ctor(..), _)
- | Res::Def(DefKind::Method, _)
+ | Res::Def(DefKind::AssocFn, _)
| Res::Def(DefKind::Const, _)
| Res::Def(DefKind::AssocConst, _)
| Res::SelfCtor(_)
for param in params {
if let Ok(snippet) = self.tcx.sess.source_map().span_to_snippet(param.span)
{
- if snippet.starts_with("&") && !snippet.starts_with("&'") {
+ if snippet.starts_with('&') && !snippet.starts_with("&'") {
introduce_suggestion
.push((param.span, format!("&'a {}", &snippet[1..])));
} else if snippet.starts_with("&'_ ") {
}
};
- match (
- lifetime_names.len(),
- lifetime_names.iter().next(),
- snippet.as_ref().map(|s| s.as_str()),
- ) {
+ match (lifetime_names.len(), lifetime_names.iter().next(), snippet.as_deref()) {
(1, Some(name), Some("&")) => {
suggest_existing(err, format!("&{} ", name));
}
(1, Some(name), Some("'_")) => {
suggest_existing(err, name.to_string());
}
- (1, Some(name), Some(snippet)) if !snippet.ends_with(">") => {
+ (1, Some(name), Some(snippet)) if !snippet.ends_with('>') => {
suggest_existing(err, format!("{}<{}>", snippet, name));
}
(0, _, Some("&")) => {
(0, _, Some("'_")) => {
suggest_new(err, "'a");
}
- (0, _, Some(snippet)) if !snippet.ends_with(">") => {
+ (0, _, Some(snippet)) if !snippet.ends_with('>') => {
suggest_new(err, &format!("{}<'a>", snippet));
}
_ => {
use crate::late::diagnostics::{ForLifetimeSpanType, MissingLifetimeSpot};
use rustc::hir::map::Map;
-use rustc::lint;
use rustc::middle::resolve_lifetime::*;
use rustc::ty::{self, DefIdTree, GenericParamDefKind, TyCtxt};
use rustc::{bug, span_bug};
use rustc_errors::{struct_span_err, Applicability, DiagnosticBuilder};
use rustc_hir as hir;
use rustc_hir::def::{DefKind, Res};
-use rustc_hir::def_id::{CrateNum, DefId, DefIdMap, LocalDefId, LOCAL_CRATE};
+use rustc_hir::def_id::{CrateNum, DefId, DefIdMap, LOCAL_CRATE};
use rustc_hir::intravisit::{self, NestedVisitorMap, Visitor};
use rustc_hir::{GenericArg, GenericParam, LifetimeName, Node, ParamName, QPath};
use rustc_hir::{GenericParamKind, HirIdMap, HirIdSet, LifetimeParamKind};
+use rustc_session::lint;
use rustc_span::symbol::{kw, sym};
use rustc_span::Span;
use std::borrow::Cow;
let def_id = hir_map.local_def_id(param.hir_id);
let origin = LifetimeDefOrigin::from_param(param);
debug!("Region::early: index={} def_id={:?}", i, def_id);
- (param.name.modern(), Region::EarlyBound(i, def_id, origin))
+ (param.name.normalize_to_macros_2_0(), Region::EarlyBound(i, def_id, origin))
}
fn late(hir_map: &Map<'_>, param: &GenericParam<'_>) -> (ParamName, Region) {
"Region::late: param={:?} depth={:?} def_id={:?} origin={:?}",
param, depth, def_id, origin,
);
- (param.name.modern(), Region::LateBound(depth, def_id, origin))
+ (param.name.normalize_to_macros_2_0(), Region::LateBound(depth, def_id, origin))
}
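The early/late split computed above is observable in surface Rust: a lifetime parameter constrained by a bound is early-bound (substituted when the function item is named), while one that appears only in the signature stays late-bound; the trivial bound `'a: 'a` is the usual trick to force early binding. A sketch:

```rust
// `'a` is late-bound here: no bound constrains it, so it is
// instantiated only when the function is actually called.
fn late<'a>(x: &'a u32) -> &'a u32 { x }

// The trivial bound `'a: 'a` forces `'a` to be early-bound:
// it is substituted as soon as the function item is named.
fn early<'a: 'a>(x: &'a u32) -> &'a u32 { x }

fn main() {
    // Both behave identically at runtime; the split only affects
    // when the compiler substitutes the lifetime.
    let n = 1u32;
    assert_eq!(*late(&n), 1);
    assert_eq!(*early(&n), 1);
}
```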
fn late_anon(index: &Cell<u32>) -> Region {
*providers = ty::query::Providers {
resolve_lifetimes,
- named_region_map: |tcx, id| {
- let id = LocalDefId::from_def_id(DefId::local(id)); // (*)
- tcx.resolve_lifetimes(LOCAL_CRATE).defs.get(&id)
- },
-
- is_late_bound_map: |tcx, id| {
- let id = LocalDefId::from_def_id(DefId::local(id)); // (*)
- tcx.resolve_lifetimes(LOCAL_CRATE).late_bound.get(&id)
- },
-
+ named_region_map: |tcx, id| tcx.resolve_lifetimes(LOCAL_CRATE).defs.get(&id),
+ is_late_bound_map: |tcx, id| tcx.resolve_lifetimes(LOCAL_CRATE).late_bound.get(&id),
object_lifetime_defaults_map: |tcx, id| {
- let id = LocalDefId::from_def_id(DefId::local(id)); // (*)
tcx.resolve_lifetimes(LOCAL_CRATE).object_lifetime_defaults.get(&id)
},
..*providers
};
-
- // (*) FIXME the query should be defined to take a LocalDefId
}
/// Computes the `ResolveLifetimes` map that contains data for the
let mut rl = ResolveLifetimes::default();
for (hir_id, v) in named_region_map.defs {
- let map = rl.defs.entry(hir_id.owner_local_def_id()).or_default();
+ let map = rl.defs.entry(hir_id.owner).or_default();
map.insert(hir_id.local_id, v);
}
for hir_id in named_region_map.late_bound {
- let map = rl.late_bound.entry(hir_id.owner_local_def_id()).or_default();
+ let map = rl.late_bound.entry(hir_id.owner).or_default();
map.insert(hir_id.local_id);
}
for (hir_id, v) in named_region_map.object_lifetime_defaults {
- let map = rl.object_lifetime_defaults.entry(hir_id.owner_local_def_id()).or_default();
+ let map = rl.object_lifetime_defaults.entry(hir_id.owner).or_default();
map.insert(hir_id.local_id, v);
}
lifetime_uses: &mut Default::default(),
missing_named_lifetime_spots: vec![],
};
- for (_, item) in &krate.items {
+ for item in krate.items.values() {
visitor.visit_item(item);
}
}
impl<'a, 'tcx> Visitor<'tcx> for LifetimeContext<'a, 'tcx> {
type Map = Map<'tcx>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
- NestedVisitorMap::All(&self.tcx.hir())
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
+ NestedVisitorMap::All(self.tcx.hir())
}
// We want to nest trait/impl items in their parent, but nothing else.
LifetimeName::Implicit => {
// For types like `dyn Foo`, we should
// generate a special form of elided.
- span_bug!(ty.span, "object-lifetime-default expected, not implict",);
+ span_bug!(ty.span, "object-lifetime-default expected, not implicit",);
}
LifetimeName::ImplicitObjectLifetimeDefault => {
// If the user does not write *anything*, we
use self::hir::TraitItemKind::*;
self.missing_named_lifetime_spots.push((&trait_item.generics).into());
match trait_item.kind {
- Method(ref sig, _) => {
+ Fn(ref sig, _) => {
let tcx = self.tcx;
self.visit_early_late(
Some(tcx.hir().get_parent_item(trait_item.hir_id)),
use self::hir::ImplItemKind::*;
self.missing_named_lifetime_spots.push((&impl_item.generics).into());
match impl_item.kind {
- Method(ref sig, _) => {
+ Fn(ref sig, _) => {
let tcx = self.tcx;
self.visit_early_late(
Some(tcx.hir().get_parent_item(impl_item.hir_id)),
}
fn original_label(span: Span) -> Original {
- Original { kind: ShadowKind::Label, span: span }
+ Original { kind: ShadowKind::Label, span }
}
fn shadower_label(span: Span) -> Shadower {
- Shadower { kind: ShadowKind::Label, span: span }
+ Shadower { kind: ShadowKind::Label, span }
}
fn original_lifetime(span: Span) -> Original {
- Original { kind: ShadowKind::Lifetime, span: span }
+ Original { kind: ShadowKind::Lifetime, span }
}
fn shadower_lifetime(param: &hir::GenericParam<'_>) -> Shadower {
Shadower { kind: ShadowKind::Lifetime, span: param.span }
gather.visit_body(body);
impl<'v, 'a, 'tcx> Visitor<'v> for GatherLabels<'a, 'tcx> {
- type Map = Map<'v>;
+ type Map = intravisit::ErasedMap<'v>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
NestedVisitorMap::None
}
Scope::Binder { ref lifetimes, s, .. } => {
// FIXME (#24278): non-hygienic comparison
- if let Some(def) = lifetimes.get(&hir::ParamName::Plain(label.modern())) {
+ if let Some(def) =
+ lifetimes.get(&hir::ParamName::Plain(label.normalize_to_macros_2_0()))
+ {
let hir_id = tcx.hir().as_local_hir_id(def.id().unwrap()).unwrap();
signal_shadowing_problem(
fn add_bounds(set: &mut Set1<hir::LifetimeName>, bounds: &[hir::GenericBound<'_>]) {
for bound in bounds {
if let hir::GenericBound::Outlives(ref lifetime) = *bound {
- set.insert(lifetime.name.modern());
+ set.insert(lifetime.name.normalize_to_macros_2_0());
}
}
}
let missing_named_lifetime_spots = take(&mut self.missing_named_lifetime_spots);
let mut this = LifetimeContext {
tcx: *tcx,
- map: map,
+ map,
scope: &wrap_scope,
trait_ref_hack: self.trait_ref_hack,
is_in_fn_syntax: self.is_in_fn_syntax,
}
}
Node::ImplItem(impl_item) => {
- if let hir::ImplItemKind::Method(sig, _) = &impl_item.kind {
+ if let hir::ImplItemKind::Fn(sig, _) = &impl_item.kind {
find_arg_use_span(sig.decl.inputs);
}
}
Scope::Binder { ref lifetimes, s, .. } => {
match lifetime_ref.name {
LifetimeName::Param(param_name) => {
- if let Some(&def) = lifetimes.get(&param_name.modern()) {
+ if let Some(&def) = lifetimes.get(&param_name.normalize_to_macros_2_0())
+ {
break Some(def.shifted(late_depth));
}
}
match self.tcx.hir().get(fn_id) {
Node::Item(&hir::Item { kind: hir::ItemKind::Fn(..), .. })
| Node::TraitItem(&hir::TraitItem {
- kind: hir::TraitItemKind::Method(..),
- ..
+ kind: hir::TraitItemKind::Fn(..), ..
})
- | Node::ImplItem(&hir::ImplItem {
- kind: hir::ImplItemKind::Method(..), ..
- }) => {
+ | Node::ImplItem(&hir::ImplItem { kind: hir::ImplItemKind::Fn(..), .. }) => {
let scope = self.tcx.hir().local_def_id(fn_id);
def = Region::Free(scope, def.id().unwrap());
}
// `fn` definitions and methods.
Node::Item(&hir::Item { kind: hir::ItemKind::Fn(.., body), .. }) => Some(body),
- Node::TraitItem(&hir::TraitItem {
- kind: hir::TraitItemKind::Method(_, ref m), ..
- }) => {
+ Node::TraitItem(&hir::TraitItem { kind: hir::TraitItemKind::Fn(_, ref m), .. }) => {
if let hir::ItemKind::Trait(.., ref trait_items) =
self.tcx.hir().expect_item(self.tcx.hir().get_parent_item(parent)).kind
{
trait_items.iter().find(|ti| ti.id.hir_id == parent).map(|ti| ti.kind);
}
match *m {
- hir::TraitMethod::Required(_) => None,
- hir::TraitMethod::Provided(body) => Some(body),
+ hir::TraitFn::Required(_) => None,
+ hir::TraitFn::Provided(body) => Some(body),
}
}
- Node::ImplItem(&hir::ImplItem { kind: hir::ImplItemKind::Method(_, body), .. }) => {
+ Node::ImplItem(&hir::ImplItem { kind: hir::ImplItemKind::Fn(_, body), .. }) => {
if let hir::ItemKind::Impl { ref self_ty, ref items, .. } =
self.tcx.hir().expect_item(self.tcx.hir().get_parent_item(parent)).kind
{
}
impl<'a> Visitor<'a> for SelfVisitor<'a> {
- type Map = Map<'a>;
+ type Map = intravisit::ErasedMap<'a>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
NestedVisitorMap::None
}
}
impl<'v, 'a> Visitor<'v> for GatherLifetimes<'a> {
- type Map = Map<'v>;
+ type Map = intravisit::ErasedMap<'v>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
NestedVisitorMap::None
}
let lifetimes: Vec<_> = params
.iter()
.filter_map(|param| match param.kind {
- GenericParamKind::Lifetime { .. } => Some((param, param.name.modern())),
+ GenericParamKind::Lifetime { .. } => {
+ Some((param, param.name.normalize_to_macros_2_0()))
+ }
_ => None,
})
.collect();
}
Scope::Binder { ref lifetimes, s, .. } => {
- if let Some(&def) = lifetimes.get(&param.name.modern()) {
+ if let Some(&def) = lifetimes.get(&param.name.normalize_to_macros_2_0()) {
let hir_id = self.tcx.hir().as_local_hir_id(def.id().unwrap()).unwrap();
signal_shadowing_problem(
// `'a: 'b` means both `'a` and `'b` are referenced
appears_in_where_clause
.regions
- .insert(hir::LifetimeName::Param(param.name.modern()));
+ .insert(hir::LifetimeName::Param(param.name.normalize_to_macros_2_0()));
}
}
}
hir::GenericParamKind::Type { .. } | hir::GenericParamKind::Const { .. } => continue,
}
- let lt_name = hir::LifetimeName::Param(param.name.modern());
+ let lt_name = hir::LifetimeName::Param(param.name.normalize_to_macros_2_0());
// appears in the where clauses? early-bound.
if appears_in_where_clause.regions.contains(&lt_name) {
continue;
}
impl<'v> Visitor<'v> for ConstrainedCollector {
- type Map = Map<'v>;
+ type Map = intravisit::ErasedMap<'v>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
NestedVisitorMap::None
}
}
fn visit_lifetime(&mut self, lifetime_ref: &'v hir::Lifetime) {
- self.regions.insert(lifetime_ref.name.modern());
+ self.regions.insert(lifetime_ref.name.normalize_to_macros_2_0());
}
}
}
impl<'v> Visitor<'v> for AllCollector {
- type Map = Map<'v>;
+ type Map = intravisit::ErasedMap<'v>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
NestedVisitorMap::None
}
fn visit_lifetime(&mut self, lifetime_ref: &'v hir::Lifetime) {
- self.regions.insert(lifetime_ref.name.modern());
+ self.regions.insert(lifetime_ref.name.normalize_to_macros_2_0());
}
}
}
use rustc::hir::exports::ExportMap;
use rustc::hir::map::{DefKey, Definitions};
-use rustc::lint;
use rustc::middle::cstore::{CrateStore, MetadataLoaderDyn};
use rustc::span_bug;
use rustc::ty::query::Providers;
use rustc_expand::base::SyntaxExtension;
use rustc_hir::def::Namespace::*;
use rustc_hir::def::{self, CtorOf, DefKind, NonMacroAttrKind, PartialRes};
-use rustc_hir::def_id::{CrateNum, DefId, DefIdMap, CRATE_DEF_INDEX, LOCAL_CRATE};
+use rustc_hir::def_id::{CrateNum, DefId, DefIdMap, CRATE_DEF_INDEX};
use rustc_hir::PrimTy::{self, Bool, Char, Float, Int, Str, Uint};
use rustc_hir::{GlobMap, TraitMap};
use rustc_metadata::creader::{CStore, CrateLoader};
+use rustc_session::lint;
use rustc_session::lint::{BuiltinLintDiagnostics, LintBuffer};
use rustc_session::Session;
use rustc_span::hygiene::{ExpnId, ExpnKind, MacroKind, SyntaxContext, Transparency};
use diagnostics::{extend_span_to_previous_binding, find_span_of_binding_until_next_binding};
use diagnostics::{ImportSuggestion, Suggestion};
-use imports::{ImportDirective, ImportDirectiveSubclass, ImportResolver, NameResolution};
+use imports::{Import, ImportKind, ImportResolver, NameResolution};
use late::{HasGenericParams, PathSource, Rib, RibKind::*};
-use macros::{LegacyBinding, LegacyScope};
+use macros::{MacroRulesBinding, MacroRulesScope};
type Res = def::Res<NodeId>;
enum Scope<'a> {
DeriveHelpers(ExpnId),
DeriveHelpersCompat,
- MacroRules(LegacyScope<'a>),
+ MacroRules(MacroRulesScope<'a>),
CrateRoot,
Module(Module<'a>),
RegisteredAttrs,
pub struct ParentScope<'a> {
module: Module<'a>,
expansion: ExpnId,
- legacy: LegacyScope<'a>,
+ macro_rules: MacroRulesScope<'a>,
derives: &'a [ast::Path],
}
/// Creates a parent scope with the passed argument used as the module scope component,
/// and other scope components set to default empty values.
pub fn module(module: Module<'a>) -> ParentScope<'a> {
- ParentScope { module, expansion: ExpnId::root(), legacy: LegacyScope::Empty, derives: &[] }
+ ParentScope {
+ module,
+ expansion: ExpnId::root(),
+ macro_rules: MacroRulesScope::Empty,
+ derives: &[],
+ }
}
}
}
impl<'a> LexicalScopeBinding<'a> {
- fn item(self) -> Option<&'a NameBinding<'a>> {
- match self {
- LexicalScopeBinding::Item(binding) => Some(binding),
- _ => None,
- }
- }
-
fn res(self) -> Res {
match self {
LexicalScopeBinding::Item(binding) => binding.res(),
/// program) if all but one of them come from glob imports.
#[derive(Copy, Clone, PartialEq, Eq, Hash)]
struct BindingKey {
- /// The identifier for the binding, aways the `modern` version of the
+ /// The identifier for the binding, always the `normalize_to_macros_2_0` version of the
/// identifier.
ident: Ident,
ns: Namespace,
no_implicit_prelude: bool,
- glob_importers: RefCell<Vec<&'a ImportDirective<'a>>>,
- globs: RefCell<Vec<&'a ImportDirective<'a>>>,
+ glob_importers: RefCell<Vec<&'a Import<'a>>>,
+ globs: RefCell<Vec<&'a Import<'a>>>,
// Used to memoize the traits in this module for faster searches through all traits in scope.
traits: RefCell<Option<Box<[(Ident, &'a NameBinding<'a>)]>>>,
enum NameBindingKind<'a> {
Res(Res, /* is_macro_export */ bool),
Module(Module<'a>),
- Import { binding: &'a NameBinding<'a>, directive: &'a ImportDirective<'a>, used: Cell<bool> },
+ Import { binding: &'a NameBinding<'a>, import: &'a Import<'a>, used: Cell<bool> },
}
impl<'a> NameBindingKind<'a> {
Import,
BuiltinAttr,
DeriveHelper,
- LegacyVsModern,
+ MacroRulesVsModularized,
GlobVsOuter,
GlobVsGlob,
GlobVsExpanded,
AmbiguityKind::Import => "name vs any other name during import resolution",
AmbiguityKind::BuiltinAttr => "built-in attribute vs any other name",
AmbiguityKind::DeriveHelper => "derive helper attribute vs any other name",
- AmbiguityKind::LegacyVsModern => "`macro_rules` vs non-`macro_rules` from other module",
+ AmbiguityKind::MacroRulesVsModularized => {
+ "`macro_rules` vs non-`macro_rules` from other module"
+ }
AmbiguityKind::GlobVsOuter => {
"glob import vs any other name from outer scope during import/macro resolution"
}
fn is_extern_crate(&self) -> bool {
match self.kind {
NameBindingKind::Import {
- directive:
- &ImportDirective { subclass: ImportDirectiveSubclass::ExternCrate { .. }, .. },
+ import: &Import { kind: ImportKind::ExternCrate { .. }, .. },
..
} => true,
NameBindingKind::Module(&ModuleData {
fn is_glob_import(&self) -> bool {
match self.kind {
- NameBindingKind::Import { directive, .. } => directive.is_glob(),
+ NameBindingKind::Import { import, .. } => import.is_glob(),
_ => false,
}
}
fn is_importable(&self) -> bool {
match self.res() {
Res::Def(DefKind::AssocConst, _)
- | Res::Def(DefKind::Method, _)
+ | Res::Def(DefKind::AssocFn, _)
| Res::Def(DefKind::AssocTy, _) => false,
_ => true,
}
field_names: FxHashMap<DefId, Vec<Spanned<Name>>>,
/// All imports known to succeed or fail.
- determined_imports: Vec<&'a ImportDirective<'a>>,
+ determined_imports: Vec<&'a Import<'a>>,
/// All non-determined imports.
- indeterminate_imports: Vec<&'a ImportDirective<'a>>,
+ indeterminate_imports: Vec<&'a Import<'a>>,
/// FIXME: Refactor things so that these fields are passed through arguments and not resolver.
/// We are resolving a last import segment during import validation.
/// Parent scopes in which the macros were invoked.
/// FIXME: `derives` are missing in these parent scopes and need to be taken from elsewhere.
invocation_parent_scopes: FxHashMap<ExpnId, ParentScope<'a>>,
- /// Legacy scopes *produced* by expanding the macro invocations,
+ /// `macro_rules` scopes *produced* by expanding the macro invocations,
/// include all the `macro_rules` items and other invocations generated by them.
- output_legacy_scopes: FxHashMap<ExpnId, LegacyScope<'a>>,
+ output_macro_rules_scopes: FxHashMap<ExpnId, MacroRulesScope<'a>>,
/// Helper attributes that are in scope for the given expansion.
helper_attrs: FxHashMap<ExpnId, Vec<Ident>>,
/// Avoid duplicated errors for "name already defined".
name_already_seen: FxHashMap<Name, Span>,
- potentially_unused_imports: Vec<&'a ImportDirective<'a>>,
+ potentially_unused_imports: Vec<&'a Import<'a>>,
/// Table for mapping struct IDs into struct constructor IDs,
/// it's not used during normal resolution, only for better error reporting.
modules: arena::TypedArena<ModuleData<'a>>,
local_modules: RefCell<Vec<Module<'a>>>,
name_bindings: arena::TypedArena<NameBinding<'a>>,
- import_directives: arena::TypedArena<ImportDirective<'a>>,
+ imports: arena::TypedArena<Import<'a>>,
name_resolutions: arena::TypedArena<RefCell<NameResolution<'a>>>,
- legacy_bindings: arena::TypedArena<LegacyBinding<'a>>,
+ macro_rules_bindings: arena::TypedArena<MacroRulesBinding<'a>>,
ast_paths: arena::TypedArena<ast::Path>,
}
fn alloc_name_binding(&'a self, name_binding: NameBinding<'a>) -> &'a NameBinding<'a> {
self.name_bindings.alloc(name_binding)
}
- fn alloc_import_directive(
- &'a self,
- import_directive: ImportDirective<'a>,
- ) -> &'a ImportDirective<'_> {
- self.import_directives.alloc(import_directive)
+ fn alloc_import(&'a self, import: Import<'a>) -> &'a Import<'_> {
+ self.imports.alloc(import)
}
fn alloc_name_resolution(&'a self) -> &'a RefCell<NameResolution<'a>> {
self.name_resolutions.alloc(Default::default())
}
- fn alloc_legacy_binding(&'a self, binding: LegacyBinding<'a>) -> &'a LegacyBinding<'a> {
- self.legacy_bindings.alloc(binding)
+ fn alloc_macro_rules_binding(
+ &'a self,
+ binding: MacroRulesBinding<'a>,
+ ) -> &'a MacroRulesBinding<'a> {
+ self.macro_rules_bindings.alloc(binding)
}
fn alloc_ast_paths(&'a self, paths: &[ast::Path]) -> &'a [ast::Path] {
self.ast_paths.alloc_from_iter(paths.iter().cloned())
impl<'a, 'b> DefIdTree for &'a Resolver<'b> {
fn parent(self, id: DefId) -> Option<DefId> {
- match id.krate {
- LOCAL_CRATE => self.definitions.def_key(id.index).parent,
- _ => self.cstore().def_key(id).parent,
+ match id.as_local() {
+ Some(id) => self.definitions.def_key(id).parent,
+ None => self.cstore().def_key(id).parent,
}
.map(|index| DefId { index, ..id })
}
/// the resolver is no longer needed as all the relevant information is inline.
impl rustc_ast_lowering::Resolver for Resolver<'_> {
fn def_key(&mut self, id: DefId) -> DefKey {
- if id.is_local() { self.definitions().def_key(id.index) } else { self.cstore().def_key(id) }
+ if let Some(id) = id.as_local() {
+ self.definitions().def_key(id)
+ } else {
+ self.cstore().def_key(id)
+ }
}
fn item_generics_num_lifetimes(&self, def_id: DefId, sess: &Session) -> usize {
dummy_ext_derive: Lrc::new(SyntaxExtension::dummy_derive(session.edition())),
non_macro_attrs: [non_macro_attr(false), non_macro_attr(true)],
invocation_parent_scopes,
- output_legacy_scopes: Default::default(),
+ output_macro_rules_scopes: Default::default(),
helper_attrs: Default::default(),
macro_defs,
local_macro_def_scopes: FxHashMap::default(),
}
fn new_key(&mut self, ident: Ident, ns: Namespace) -> BindingKey {
- let ident = ident.modern();
+ let ident = ident.normalize_to_macros_2_0();
let disambiguator = if ident.name == kw::Underscore {
self.underscore_disambiguator += 1;
self.underscore_disambiguator
misc2: AmbiguityErrorMisc::None,
});
}
- if let NameBindingKind::Import { directive, binding, ref used } = used_binding.kind {
+ if let NameBindingKind::Import { import, binding, ref used } = used_binding.kind {
// Avoid marking `extern crate` items that refer to a name from extern prelude,
// but not introduce it, as used if they are accessed from lexical scope.
if is_lexical_scope {
- if let Some(entry) = self.extern_prelude.get(&ident.modern()) {
+ if let Some(entry) = self.extern_prelude.get(&ident.normalize_to_macros_2_0()) {
if let Some(crate_item) = entry.extern_crate_item {
if ptr::eq(used_binding, crate_item) && !entry.introduced_by_item {
return;
}
}
used.set(true);
- directive.used.set(true);
- self.used_imports.insert((directive.id, ns));
- self.add_to_glob_map(&directive, ident);
+ import.used.set(true);
+ self.used_imports.insert((import.id, ns));
+ self.add_to_glob_map(&import, ident);
self.record_use(ident, ns, binding, false);
}
}
#[inline]
- fn add_to_glob_map(&mut self, directive: &ImportDirective<'_>, ident: Ident) {
- if directive.is_glob() {
- self.glob_map.entry(directive.id).or_default().insert(ident.name);
+ fn add_to_glob_map(&mut self, import: &Import<'_>, ident: Ident) {
+ if import.is_glob() {
+ self.glob_map.entry(import.id).or_default().insert(ident.name);
}
}
// derives (you need to resolve the derive first to add helpers into scope), but they
// should be available before the derive is expanded for compatibility.
// It's a mess in general, so we are being conservative for now.
- // 1-3. `macro_rules` (open, not controlled), loop through legacy scopes. Have higher
+ // 1-3. `macro_rules` (open, not controlled), loop through `macro_rules` scopes. Have higher
// priority than prelude macros, but create ambiguities with macros in modules.
// 1-3. Names in modules (both normal `mod`ules and blocks), loop through hygienic parents
// (open, not controlled). Have higher priority than prelude macros, but create
TypeNS | ValueNS => Scope::Module(module),
MacroNS => Scope::DeriveHelpers(parent_scope.expansion),
};
- let mut ident = ident.modern();
+ let mut ident = ident.normalize_to_macros_2_0();
let mut use_prelude = !module.no_implicit_prelude;
loop {
}
}
Scope::DeriveHelpers(..) => Scope::DeriveHelpersCompat,
- Scope::DeriveHelpersCompat => Scope::MacroRules(parent_scope.legacy),
- Scope::MacroRules(legacy_scope) => match legacy_scope {
- LegacyScope::Binding(binding) => Scope::MacroRules(binding.parent_legacy_scope),
- LegacyScope::Invocation(invoc_id) => Scope::MacroRules(
- self.output_legacy_scopes
+ Scope::DeriveHelpersCompat => Scope::MacroRules(parent_scope.macro_rules),
+ Scope::MacroRules(macro_rules_scope) => match macro_rules_scope {
+ MacroRulesScope::Binding(binding) => {
+ Scope::MacroRules(binding.parent_macro_rules_scope)
+ }
+ MacroRulesScope::Invocation(invoc_id) => Scope::MacroRules(
+ self.output_macro_rules_scopes
.get(&invoc_id)
.cloned()
- .unwrap_or(self.invocation_parent_scopes[&invoc_id].legacy),
+ .unwrap_or(self.invocation_parent_scopes[&invoc_id].macro_rules),
),
- LegacyScope::Empty => Scope::Module(module),
+ MacroRulesScope::Empty => Scope::Module(module),
},
Scope::CrateRoot => match ns {
TypeNS => {
if ident.name == kw::Invalid {
return Some(LexicalScopeBinding::Res(Res::Err));
}
- let (general_span, modern_span) = if ident.name == kw::SelfUpper {
+ let (general_span, normalized_span) = if ident.name == kw::SelfUpper {
// FIXME(jseyfried) improve `Self` hygiene
let empty_span = ident.span.with_ctxt(SyntaxContext::root());
(empty_span, empty_span)
} else if ns == TypeNS {
- let modern_span = ident.span.modern();
- (modern_span, modern_span)
+ let normalized_span = ident.span.normalize_to_macros_2_0();
+ (normalized_span, normalized_span)
} else {
- (ident.span.modern_and_legacy(), ident.span.modern())
+ (ident.span.normalize_to_macro_rules(), ident.span.normalize_to_macros_2_0())
};
ident.span = general_span;
- let modern_ident = Ident { span: modern_span, ..ident };
+ let normalized_ident = Ident { span: normalized_span, ..ident };
// Walk backwards up the ribs in scope.
let record_used = record_used_id.is_some();
for i in (0..ribs.len()).rev() {
debug!("walk rib\n{:?}", ribs[i].bindings);
// Use the rib kind to determine whether we are resolving parameters
- // (modern hygiene) or local variables (legacy hygiene).
- let rib_ident = if ribs[i].kind.contains_params() { modern_ident } else { ident };
+ // (macro 2.0 hygiene) or local variables (`macro_rules` hygiene).
+ let rib_ident = if ribs[i].kind.contains_params() { normalized_ident } else { ident };
if let Some(res) = ribs[i].bindings.get(&rib_ident).cloned() {
// The ident resolves to a type parameter or local variable.
return Some(LexicalScopeBinding::Res(self.validate_res_from_ribs(
}
}
- ident = modern_ident;
+ ident = normalized_ident;
let mut poisoned = None;
loop {
let opt_module = if let Some(node_id) = record_used_id {
let mut adjusted_parent_scope = parent_scope;
match module {
ModuleOrUniformRoot::Module(m) => {
- if let Some(def) = ident.span.modernize_and_adjust(m.expansion) {
+ if let Some(def) = ident.span.normalize_to_macros_2_0_and_adjust(m.expansion) {
tmp_parent_scope =
ParentScope { module: self.macro_def_scope(def), ..*parent_scope };
adjusted_parent_scope = &tmp_parent_scope;
}
}
ModuleOrUniformRoot::ExternPrelude => {
- ident.span.modernize_and_adjust(ExpnId::root());
+ ident.span.normalize_to_macros_2_0_and_adjust(ExpnId::root());
}
ModuleOrUniformRoot::CrateRootAndExternPrelude | ModuleOrUniformRoot::CurrentScope => {
// No adjustments
let mark = if ident.name == kw::DollarCrate {
// When resolving `$crate` from a `macro_rules!` invoked in a `macro`,
// we don't want to pretend that the `macro_rules!` definition is in the `macro`
- // as described in `SyntaxContext::apply_mark`, so we ignore prepended modern marks.
+ // as described in `SyntaxContext::apply_mark`, so we ignore prepended opaque marks.
// FIXME: This is only a guess and it doesn't work correctly for `macro_rules!`
// definitions actually produced by `macro` and `macro` definitions produced by
// `macro_rules!`, but at least such configurations are not stable yet.
- ctxt = ctxt.modern_and_legacy();
+ ctxt = ctxt.normalize_to_macro_rules();
let mut iter = ctxt.marks().into_iter().rev().peekable();
let mut result = None;
- // Find the last modern mark from the end if it exists.
+ // Find the last opaque mark from the end if it exists.
while let Some(&(mark, transparency)) = iter.peek() {
if transparency == Transparency::Opaque {
result = Some(mark);
break;
}
}
- // Then find the last legacy mark from the end if it exists.
+ // Then find the last semi-transparent mark from the end if it exists.
for (mark, transparency) in iter {
if transparency == Transparency::SemiTransparent {
result = Some(mark);
}
result
} else {
- ctxt = ctxt.modern();
+ ctxt = ctxt.normalize_to_macros_2_0();
ctxt.adjust(ExpnId::root())
};
let module = match mark {
fn resolve_self(&mut self, ctxt: &mut SyntaxContext, module: Module<'a>) -> Module<'a> {
let mut module = self.get_module(module.normal_ancestor_id);
- while module.span.ctxt().modern() != *ctxt {
+ while module.span.ctxt().normalize_to_macros_2_0() != *ctxt {
let parent = module.parent.unwrap_or_else(|| self.macro_def_scope(ctxt.remove_mark()));
module = self.get_module(parent.normal_ancestor_id);
}
if ns == TypeNS {
if allow_super && name == kw::Super {
- let mut ctxt = ident.span.ctxt().modern();
+ let mut ctxt = ident.span.ctxt().normalize_to_macros_2_0();
let self_module = match i {
0 => Some(self.resolve_self(&mut ctxt, parent_scope.module)),
_ => match module {
}
if i == 0 {
if name == kw::SelfLower {
- let mut ctxt = ident.span.ctxt().modern();
+ let mut ctxt = ident.span.ctxt().normalize_to_macros_2_0();
module = Some(ModuleOrUniformRoot::Module(
self.resolve_self(&mut ctxt, parent_scope.module),
));
Applicability::MaybeIncorrect,
)),
)
- } else if !ident.is_reserved() {
- (format!("maybe a missing crate `{}`?", ident), None)
} else {
- // the parser will already have complained about the keyword being used
- return PathResult::NonModule(PartialRes::new(Res::Err));
+ (format!("maybe a missing crate `{}`?", ident), None)
}
} else if i == 0 {
(format!("use of undeclared type or module `{}`", ident), None)
// `ExternCrate` (also used for `crate::...`) then no need to issue a
// warning, this looks all good!
if let Some(binding) = second_binding {
- if let NameBindingKind::Import { directive: d, .. } = binding.kind {
- // Careful: we still want to rewrite paths from
- // renamed extern crates.
- if let ImportDirectiveSubclass::ExternCrate { source: None, .. } = d.subclass {
+ if let NameBindingKind::Import { import, .. } = binding.kind {
+ // Careful: we still want to rewrite paths from renamed extern crates.
+ if let ImportKind::ExternCrate { source: None, .. } = import.kind {
return;
}
}
}
}
- fn disambiguate_legacy_vs_modern(
+ fn disambiguate_macro_rules_vs_modularized(
&self,
- legacy: &'a NameBinding<'a>,
- modern: &'a NameBinding<'a>,
+ macro_rules: &'a NameBinding<'a>,
+ modularized: &'a NameBinding<'a>,
) -> bool {
- // Some non-controversial subset of ambiguities "modern macro name" vs "macro_rules"
+ // Some non-controversial subset of ambiguities "modularized macro name" vs "macro_rules"
// is disambiguated to mitigate regressions from macro modularization.
// Scoping for `macro_rules` behaves like scoping for `let` at module level, in general.
match (
- self.binding_parent_modules.get(&PtrKey(legacy)),
- self.binding_parent_modules.get(&PtrKey(modern)),
+ self.binding_parent_modules.get(&PtrKey(macro_rules)),
+ self.binding_parent_modules.get(&PtrKey(modularized)),
) {
- (Some(legacy), Some(modern)) => {
- legacy.normal_ancestor_id == modern.normal_ancestor_id
- && modern.is_ancestor_of(legacy)
+ (Some(macro_rules), Some(modularized)) => {
+ macro_rules.normal_ancestor_id == modularized.normal_ancestor_id
+ && modularized.is_ancestor_of(macro_rules)
}
_ => false,
}
// See https://github.com/rust-lang/rust/issues/32354
use NameBindingKind::Import;
- let directive = match (&new_binding.kind, &old_binding.kind) {
+ let import = match (&new_binding.kind, &old_binding.kind) {
// If there are two imports where one or both have attributes then prefer removing the
// import without attributes.
- (Import { directive: new, .. }, Import { directive: old, .. })
+ (Import { import: new, .. }, Import { import: old, .. })
if {
!new_binding.span.is_dummy()
&& !old_binding.span.is_dummy()
}
}
// Otherwise prioritize the new binding.
- (Import { directive, .. }, other) if !new_binding.span.is_dummy() => {
- Some((directive, new_binding.span, other.is_import()))
+ (Import { import, .. }, other) if !new_binding.span.is_dummy() => {
+ Some((import, new_binding.span, other.is_import()))
}
- (other, Import { directive, .. }) if !old_binding.span.is_dummy() => {
- Some((directive, old_binding.span, other.is_import()))
+ (other, Import { import, .. }) if !old_binding.span.is_dummy() => {
+ Some((import, old_binding.span, other.is_import()))
}
_ => None,
};
&& !has_dummy_span
&& ((new_binding.is_extern_crate() || old_binding.is_extern_crate()) || from_item);
- match directive {
- Some((directive, span, true)) if should_remove_import && directive.is_nested() => {
- self.add_suggestion_for_duplicate_nested_use(&mut err, directive, span)
+ match import {
+ Some((import, span, true)) if should_remove_import && import.is_nested() => {
+ self.add_suggestion_for_duplicate_nested_use(&mut err, import, span)
}
- Some((directive, _, true)) if should_remove_import && !directive.is_glob() => {
+ Some((import, _, true)) if should_remove_import && !import.is_glob() => {
// Simple case - remove the entire import. Due to the above match arm, this can
// only be a single use so just remove it entirely.
err.tool_only_span_suggestion(
- directive.use_span_with_attributes,
+ import.use_span_with_attributes,
"remove unnecessary import",
String::new(),
Applicability::MaybeIncorrect,
);
}
- Some((directive, span, _)) => {
- self.add_suggestion_for_rename_of_use(&mut err, name, directive, span)
+ Some((import, span, _)) => {
+ self.add_suggestion_for_rename_of_use(&mut err, name, import, span)
}
_ => {}
}
&self,
err: &mut DiagnosticBuilder<'_>,
name: Name,
- directive: &ImportDirective<'_>,
+ import: &Import<'_>,
binding_span: Span,
) {
let suggested_name = if name.as_str().chars().next().unwrap().is_uppercase() {
};
let mut suggestion = None;
- match directive.subclass {
- ImportDirectiveSubclass::SingleImport { type_ns_only: true, .. } => {
+ match import.kind {
+ ImportKind::Single { type_ns_only: true, .. } => {
suggestion = Some(format!("self as {}", suggested_name))
}
- ImportDirectiveSubclass::SingleImport { source, .. } => {
+ ImportKind::Single { source, .. } => {
if let Some(pos) =
source.span.hi().0.checked_sub(binding_span.lo().0).map(|pos| pos as usize)
{
}
}
}
- ImportDirectiveSubclass::ExternCrate { source, target, .. } => {
+ ImportKind::ExternCrate { source, target, .. } => {
suggestion = Some(format!(
"extern crate {} as {};",
source.unwrap_or(target.name),
/// If the nested use contains only one import then the suggestion will remove the entire
/// line.
///
- /// It is expected that the directive provided is a nested import - this isn't checked by the
+ /// It is expected that the provided import is nested - this isn't checked by the
/// function. If this invariant is not upheld, this function's behaviour will be unexpected
/// as characters expected by span manipulations won't be present.
fn add_suggestion_for_duplicate_nested_use(
&self,
err: &mut DiagnosticBuilder<'_>,
- directive: &ImportDirective<'_>,
+ import: &Import<'_>,
binding_span: Span,
) {
- assert!(directive.is_nested());
+ assert!(import.is_nested());
let message = "remove unnecessary import";
// Two examples will be used to illustrate the span manipulations we're doing:
//
// - Given `use issue_52891::{d, a, e};` where `a` is a duplicate then `binding_span` is
- // `a` and `directive.use_span` is `issue_52891::{d, a, e};`.
+ // `a` and `import.use_span` is `issue_52891::{d, a, e};`.
// - Given `use issue_52891::{d, e, a};` where `a` is a duplicate then `binding_span` is
- // `a` and `directive.use_span` is `issue_52891::{d, e, a};`.
+ // `a` and `import.use_span` is `issue_52891::{d, e, a};`.
let (found_closing_brace, span) =
- find_span_of_binding_until_next_binding(self.session, binding_span, directive.use_span);
+ find_span_of_binding_until_next_binding(self.session, binding_span, import.use_span);
// If there was a closing brace then identify the span to remove any trailing commas from
// previous imports.
// Remove the entire line if we cannot extend the span back, this indicates a
// `issue_52891::{self}` case.
err.span_suggestion(
- directive.use_span_with_attributes,
+ import.use_span_with_attributes,
message,
String::new(),
Applicability::MaybeIncorrect,
// Make sure `self`, `super` etc produce an error when passed to here.
return None;
}
- self.extern_prelude.get(&ident.modern()).cloned().and_then(|entry| {
+ self.extern_prelude.get(&ident.normalize_to_macros_2_0()).cloned().and_then(|entry| {
if let Some(binding) = entry.extern_crate_item {
if !speculative && entry.introduced_by_item {
self.record_use(ident, TypeNS, binding, false);
} else {
let crate_id = if !speculative {
self.crate_loader.process_path_extern(ident.name, ident.span)
- } else if let Some(crate_id) =
- self.crate_loader.maybe_process_path_extern(ident.name, ident.span)
- {
- crate_id
} else {
- return None;
+ self.crate_loader.maybe_process_path_extern(ident.name, ident.span)?
};
let crate_root = self.get_module(DefId { krate: crate_id, index: CRATE_DEF_INDEX });
Some(
use crate::{CrateLint, ParentScope, ResolutionError, Resolver, Scope, ScopeSet, Weak};
use crate::{ModuleKind, ModuleOrUniformRoot, NameBinding, PathResult, Segment, ToNameBinding};
use rustc::middle::stability;
-use rustc::session::parse::feature_err;
-use rustc::session::Session;
-use rustc::{lint, span_bug, ty};
+use rustc::{span_bug, ty};
use rustc_ast::ast::{self, Ident, NodeId};
use rustc_ast_pretty::pprust;
use rustc_attr::{self as attr, StabilityLevel};
use rustc_feature::is_builtin_attr_name;
use rustc_hir::def::{self, DefKind, NonMacroAttrKind};
use rustc_hir::def_id;
+use rustc_session::lint::builtin::UNUSED_MACROS;
+use rustc_session::parse::feature_err;
+use rustc_session::Session;
use rustc_span::edition::Edition;
use rustc_span::hygiene::{self, ExpnData, ExpnId, ExpnKind};
use rustc_span::symbol::{kw, sym, Symbol};
type Res = def::Res<NodeId>;
/// Binding produced by a `macro_rules` item.
-/// Not modularized, can shadow previous legacy bindings, etc.
+/// Not modularized, can shadow previous `macro_rules` bindings, etc.
#[derive(Debug)]
-pub struct LegacyBinding<'a> {
+pub struct MacroRulesBinding<'a> {
crate binding: &'a NameBinding<'a>,
- /// Legacy scope into which the `macro_rules` item was planted.
- crate parent_legacy_scope: LegacyScope<'a>,
+ /// `macro_rules` scope into which the `macro_rules` item was planted.
+ crate parent_macro_rules_scope: MacroRulesScope<'a>,
crate ident: Ident,
}
/// The scope introduced by a `macro_rules!` macro.
/// This starts at the macro's definition and ends at the end of the macro's parent
/// module (named or unnamed), or even further if it escapes with `#[macro_use]`.
-/// Some macro invocations need to introduce legacy scopes too because they
+/// Some macro invocations need to introduce `macro_rules` scopes too because they
/// can potentially expand into macro definitions.
#[derive(Copy, Clone, Debug)]
-pub enum LegacyScope<'a> {
+pub enum MacroRulesScope<'a> {
/// Empty "root" scope at the crate start containing no names.
Empty,
/// The scope introduced by a `macro_rules!` macro definition.
- Binding(&'a LegacyBinding<'a>),
+ Binding(&'a MacroRulesBinding<'a>),
/// The scope introduced by a macro invocation that can potentially
/// create a `macro_rules!` macro definition.
Invocation(ExpnId),
// Integrate the new AST fragment into all the definition and module structures.
// We are inside the `expansion` now, but other parent scope components are still the same.
let parent_scope = ParentScope { expansion, ..self.invocation_parent_scopes[&expansion] };
- let output_legacy_scope = self.build_reduced_graph(fragment, parent_scope);
- self.output_legacy_scopes.insert(expansion, output_legacy_scope);
+ let output_macro_rules_scope = self.build_reduced_graph(fragment, parent_scope);
+ self.output_macro_rules_scopes.insert(expansion, output_macro_rules_scope);
parent_scope.module.unexpanded_invocations.borrow_mut().remove(&expansion);
}
force,
) {
Ok((Some(ext), _)) => {
- let span = path.segments.last().unwrap().ident.span.modern();
+ let span = path
+ .segments
+ .last()
+ .unwrap()
+ .ident
+ .span
+ .normalize_to_macros_2_0();
helper_attrs.extend(
ext.helper_attrs.iter().map(|name| Ident::new(*name, span)),
);
fn check_unused_macros(&mut self) {
for (&node_id, &span) in self.unused_macros.iter() {
- self.lint_buffer.buffer_lint(
- lint::builtin::UNUSED_MACROS,
- node_id,
- span,
- "unused macro definition",
- );
+ self.lint_buffer.buffer_lint(UNUSED_MACROS, node_id, span, "unused macro definition");
}
}
fn add_derive_copy(&mut self, expn_id: ExpnId) {
self.containers_deriving_copy.insert(expn_id);
}
+
+ // The function that implements the resolution logic of `#[cfg_accessible(path)]`.
+ // Returns `Ok(true)` if the path can certainly be resolved in one of the three namespaces,
+ // `Ok(false)` if the path certainly cannot be resolved in any of the three namespaces,
+ // and `Err(Indeterminate)` if we cannot give a certain answer yet.
+ fn cfg_accessible(&mut self, expn_id: ExpnId, path: &ast::Path) -> Result<bool, Indeterminate> {
+ let span = path.span;
+ let path = &Segment::from_path(path);
+ let parent_scope = self.invocation_parent_scopes[&expn_id];
+
+ let mut indeterminate = false;
+ for ns in [TypeNS, ValueNS, MacroNS].iter().copied() {
+ match self.resolve_path(path, Some(ns), &parent_scope, false, span, CrateLint::No) {
+ PathResult::Module(ModuleOrUniformRoot::Module(_)) => return Ok(true),
+ PathResult::NonModule(partial_res) if partial_res.unresolved_segments() == 0 => {
+ return Ok(true);
+ }
+ PathResult::Indeterminate => indeterminate = true,
+ // FIXME: `resolve_path` is not ready to report partially resolved paths
+ // correctly, so we just report an error if the path was reported as unresolved.
+ // This needs to be fixed for `cfg_accessible` to be useful.
+ PathResult::NonModule(..) | PathResult::Failed { .. } => {}
+ PathResult::Module(_) => panic!("unexpected path resolution"),
+ }
+ }
+
+ if indeterminate {
+ return Err(Indeterminate);
+ }
+
+ self.session
+ .struct_span_err(span, "not sure whether the path is accessible or not")
+ .span_note(span, "`cfg_accessible` is not fully implemented")
+ .emit();
+ Ok(false)
+ }
}
impl<'a> Resolver<'a> {
}
result
}
- Scope::MacroRules(legacy_scope) => match legacy_scope {
- LegacyScope::Binding(legacy_binding) if ident == legacy_binding.ident => {
- Ok((legacy_binding.binding, Flags::MACRO_RULES))
+ Scope::MacroRules(macro_rules_scope) => match macro_rules_scope {
+ MacroRulesScope::Binding(macro_rules_binding)
+ if ident == macro_rules_binding.ident =>
+ {
+ Ok((macro_rules_binding.binding, Flags::MACRO_RULES))
}
- LegacyScope::Invocation(invoc_id)
- if !this.output_legacy_scopes.contains_key(&invoc_id) =>
+ MacroRulesScope::Invocation(invoc_id)
+ if !this.output_macro_rules_scopes.contains_key(&invoc_id) =>
{
Err(Determinacy::Undetermined)
}
Some(AmbiguityKind::DeriveHelper)
} else if innermost_flags.contains(Flags::MACRO_RULES)
&& flags.contains(Flags::MODULE)
- && !this
- .disambiguate_legacy_vs_modern(innermost_binding, binding)
+ && !this.disambiguate_macro_rules_vs_modularized(
+ innermost_binding,
+ binding,
+ )
|| flags.contains(Flags::MACRO_RULES)
&& innermost_flags.contains(Flags::MODULE)
- && !this.disambiguate_legacy_vs_modern(
+ && !this.disambiguate_macro_rules_vs_modularized(
binding,
innermost_binding,
)
{
- Some(AmbiguityKind::LegacyVsModern)
+ Some(AmbiguityKind::MacroRulesVsModularized)
} else if innermost_binding.is_glob_import() {
Some(AmbiguityKind::GlobVsOuter)
} else if innermost_binding
rustc_parse = { path = "../librustc_parse" }
serde_json = "1"
rustc_ast = { path = "../librustc_ast" }
+rustc_session = { path = "../librustc_session" }
rustc_span = { path = "../librustc_span" }
rls-data = "0.19"
rls-span = "0.5"
//! DumpVisitor walks the AST and processes it, and Dumper is used for
//! recording the output.
-use rustc::session::config::Input;
use rustc::span_bug;
use rustc::ty::{self, DefIdTree, TyCtxt};
use rustc_ast::ast::{self, Attribute, NodeId, PatKind};
use rustc_data_structures::fx::FxHashSet;
use rustc_hir::def::{DefKind as HirDefKind, Res};
use rustc_hir::def_id::DefId;
+use rustc_session::config::Input;
use rustc_span::source_map::{respan, DUMMY_SP};
use rustc_span::*;
self.process_macro_use(trait_item.span);
let vis_span = trait_item.span.shrink_to_lo();
match trait_item.kind {
- ast::AssocItemKind::Static(ref ty, _, ref expr)
- | ast::AssocItemKind::Const(_, ref ty, ref expr) => {
+ ast::AssocItemKind::Const(_, ref ty, ref expr) => {
self.process_assoc_const(
trait_item.id,
trait_item.ident,
self.visit_ty(default_ty)
}
}
- ast::AssocItemKind::Macro(_) => {}
+ ast::AssocItemKind::MacCall(_) => {}
}
}
fn process_impl_item(&mut self, impl_item: &'l ast::AssocItem, impl_id: DefId) {
self.process_macro_use(impl_item.span);
match impl_item.kind {
- ast::AssocItemKind::Static(ref ty, _, ref expr)
- | ast::AssocItemKind::Const(_, ref ty, ref expr) => {
+ ast::AssocItemKind::Const(_, ref ty, ref expr) => {
self.process_assoc_const(
impl_item.id,
impl_item.ident,
// trait.
self.visit_ty(ty)
}
- ast::AssocItemKind::Macro(_) => {}
+ ast::AssocItemKind::MacCall(_) => {}
}
}
walk_list!(self, visit_ty, ty);
self.process_generic_params(ty_params, &qualname, item.id);
}
- Mac(_) => (),
+ MacCall(_) => (),
_ => visit::walk_item(self, item),
}
}
self.visit_ty(&ret_ty);
}
}
- ast::ForeignItemKind::Const(_, ref ty, _)
- | ast::ForeignItemKind::Static(ref ty, _, _) => {
+ ast::ForeignItemKind::Static(ref ty, _, _) => {
if let Some(var_data) = self.save_ctxt.get_extern_item_data(item) {
down_cast_data!(var_data, DefData, item.span);
self.dumper.dump_def(&access, var_data);
self.dumper.dump_def(&access, var_data);
}
}
- ast::ForeignItemKind::Macro(..) => {}
+ ast::ForeignItemKind::MacCall(..) => {}
}
}
}
use rustc::middle::cstore::ExternCrate;
use rustc::middle::privacy::AccessLevels;
-use rustc::session::config::{CrateType, Input, OutputType};
use rustc::ty::{self, DefIdTree, TyCtxt};
use rustc::{bug, span_bug};
use rustc_ast::ast::{self, Attribute, NodeId, PatKind, DUMMY_NODE_ID};
use rustc_hir::def::{CtorOf, DefKind as HirDefKind, Res};
use rustc_hir::def_id::{DefId, LOCAL_CRATE};
use rustc_hir::Node;
+use rustc_session::config::{CrateType, Input, OutputType};
use rustc_span::source_map::Spanned;
use rustc_span::*;
attributes: lower_attributes(item.attrs.clone(), self),
}))
}
- ast::ForeignItemKind::Const(_, ref ty, _)
- | ast::ForeignItemKind::Static(ref ty, _, _) => {
+ ast::ForeignItemKind::Static(ref ty, _, _) => {
filter!(self.span_utils, item.ident.span);
let id = id_from_node_id(item.id, self);
}
// FIXME(plietar): needs a new DefKind in rls-data
ast::ForeignItemKind::TyAlias(..) => None,
- ast::ForeignItemKind::Macro(..) => None,
+ ast::ForeignItemKind::MacCall(..) => None,
}
}
Some(_) => ImplKind::Direct,
None => ImplKind::Inherent,
},
- span: span,
+ span,
value: String::new(),
parent: None,
children: items
match self.tables.expr_ty_adjusted(&hir_node).kind {
ty::Adt(def, _) if !def.is_enum() => {
let variant = &def.non_enum_variant();
- let index = self.tcx.find_field_index(ident, variant).unwrap();
filter!(self.span_utils, ident.span);
let span = self.span_from_span(ident.span);
return Some(Data::RefData(Ref {
kind: RefKind::Variable,
span,
- ref_id: id_from_def_id(variant.fields[index].did),
+ ref_id: self
+ .tcx
+ .find_field_index(ident, variant)
+ .map(|index| id_from_def_id(variant.fields[index].did))
+ .unwrap_or_else(|| null_id()),
}));
}
ty::Tuple(..) => None,
| Res::Def(HirDefKind::Ctor(..), _) => {
Some(Ref { kind: RefKind::Variable, span, ref_id: id_from_def_id(res.def_id()) })
}
- Res::Def(HirDefKind::Method, decl_id) => {
+ Res::Def(HirDefKind::AssocFn, decl_id) => {
let def_id = if decl_id.is_local() {
let ti = self.tcx.associated_item(decl_id);
ExpnKind::Root | ExpnKind::AstPass(_) | ExpnKind::Desugaring(_) => return None,
};
- // If the callee is an imported macro from an external crate, need to get
- // the source span and name from the session, as their spans are localized
- // when read in, and no longer correspond to the source.
- if let Some(mac) = self.tcx.sess.imported_macro_spans.borrow().get(&callee.def_site) {
- let &(ref mac_name, mac_span) = mac;
- let mac_span = self.span_from_span(mac_span);
- return Some(MacroRef {
- span: callsite_span,
- qualname: mac_name.clone(), // FIXME: generate the real qualname
- callee_span: mac_span,
- });
- }
-
let callee_span = self.span_from_span(callee.def_site);
Some(MacroRef {
span: callsite_span,
| ast::TyKind::Infer
| ast::TyKind::Err
| ast::TyKind::ImplicitSelf
- | ast::TyKind::Mac(_) => Err("Ty"),
+ | ast::TyKind::MacCall(_) => Err("Ty"),
}
}
}
text.push(' ');
let trait_sig = if let Some(ref t) = *of_trait {
- if polarity == ast::ImplPolarity::Negative {
+ if let ast::ImplPolarity::Negative(_) = polarity {
text.push('!');
}
let trait_sig = t.path.make(offset + text.len(), id, scx)?;
ast::ItemKind::ExternCrate(_) => Err("extern crate"),
// FIXME should implement this (e.g., pub use).
ast::ItemKind::Use(_) => Err("import"),
- ast::ItemKind::Mac(..) | ast::ItemKind::MacroDef(_) => Err("Macro"),
+ ast::ItemKind::MacCall(..) | ast::ItemKind::MacroDef(_) => Err("Macro"),
}
}
}
text.push_str(&name);
text.push(';');
- Ok(Signature { text: text, defs: defs, refs: vec![] })
+ Ok(Signature { text, defs, refs: vec![] })
}
- ast::ForeignItemKind::Const(..) => Err("foreign const"),
- ast::ForeignItemKind::Macro(..) => Err("macro"),
+ ast::ForeignItemKind::MacCall(..) => Err("macro"),
}
}
}
use crate::generated_code;
-use rustc::session::Session;
use rustc_ast::token::{self, TokenKind};
use rustc_parse::lexer::{self, StringReader};
+use rustc_session::Session;
use rustc_span::*;
#[derive(Clone)]
type_description: type_desc.to_string(),
align: align.bytes(),
overall_size: overall_size.bytes(),
- packed: packed,
+ packed,
opt_discr_size: opt_discr_size.map(|s| s.bytes()),
variants,
};
pub enum PrintRequest {
FileNames,
Sysroot,
+ TargetLibdir,
CrateName,
Cfg,
TargetList,
"",
"print",
"Compiler information to print on stdout",
- "[crate-name|file-names|sysroot|cfg|target-list|\
+ "[crate-name|file-names|sysroot|target-libdir|cfg|target-list|\
target-cpus|target-features|relocation-models|\
code-models|tls-models|target-spec-json|native-static-libs]",
),
"crate-name" => PrintRequest::CrateName,
"file-names" => PrintRequest::FileNames,
"sysroot" => PrintRequest::Sysroot,
+ "target-libdir" => PrintRequest::TargetLibdir,
"cfg" => PrintRequest::Cfg,
"target-list" => PrintRequest::TargetList,
"target-cpus" => PrintRequest::TargetCPUs,
"tell the linker to strip debuginfo when building without debuginfo enabled."),
share_generics: Option<bool> = (None, parse_opt_bool, [TRACKED],
"make the current crate share its generic instantiations"),
- chalk: bool = (false, parse_bool, [TRACKED],
- "enable the experimental Chalk-based trait solving engine"),
no_parallel_llvm: bool = (false, parse_bool, [UNTRACKED],
"don't run LLVM in parallel (while keeping codegen-units and ThinLTO)"),
no_leak_check: bool = (false, parse_bool, [UNTRACKED],
out_of_fuel: bool,
}
+/// The behavior of the CTFE engine when an error occurs, with regard to backtraces.
+#[derive(Clone, Copy)]
+pub enum CtfeBacktrace {
+ /// Do nothing special, return the error as usual without a backtrace.
+ Disabled,
+ /// Capture a backtrace at the point the error is created and return it in the error
+ /// (to be printed later if/when the error ever actually gets shown to the user).
+ Capture,
+ /// Capture a backtrace at the point the error is created and immediately print it out.
+ Immediate,
+}
+
/// Represents the data associated with a compilation
/// session for a single crate.
pub struct Session {
/// The maximum length of types during monomorphization.
pub type_length_limit: Once<usize>,
- /// Map from imported macro spans (which consist of
- /// the localized span for the macro body) to the
- /// macro name and definition span in the source crate.
- pub imported_macro_spans: OneThread<RefCell<FxHashMap<Span, (String, Span)>>>,
+ /// The maximum number of blocks a const expression can evaluate.
+ pub const_eval_limit: Once<usize>,
incr_comp_session: OneThread<RefCell<IncrCompSession>>,
/// Used for incremental compilation tests. Will only be populated if
/// Path for libraries that will take preference over libraries shipped by Rust.
/// Used by windows-gnu targets to prioritize system mingw-w64 libraries.
pub system_library_path: OneThread<RefCell<Option<Option<PathBuf>>>>,
+
+ /// Tracks the current behavior of the CTFE engine when an error occurs.
+ /// Options range from returning the error without a backtrace to returning an error
+ /// and immediately printing the backtrace to stderr.
+ pub ctfe_backtrace: Lock<CtfeBacktrace>,
}
pub struct PerfStats {
.unwrap_or(self.opts.debug_assertions)
}
- pub fn crt_static(&self) -> bool {
+ /// Checks whether this compile session and crate type use a static crt.
+ pub fn crt_static(&self, crate_type: Option<config::CrateType>) -> bool {
// If the target does not opt in to crt-static support, use its default.
if self.target.target.options.crt_static_respected {
- self.crt_static_feature()
+ self.crt_static_feature(crate_type)
} else {
self.target.target.options.crt_static_default
}
}
- pub fn crt_static_feature(&self) -> bool {
+ /// Checks whether this compile session and crate type use the `crt-static` feature.
+ pub fn crt_static_feature(&self, crate_type: Option<config::CrateType>) -> bool {
let requested_features = self.opts.cg.target_feature.split(',');
let found_negative = requested_features.clone().any(|r| r == "-crt-static");
let found_positive = requested_features.clone().any(|r| r == "+crt-static");
- // If the target we're compiling for requests a static crt by default,
- // then see if the `-crt-static` feature was passed to disable that.
- // Otherwise if we don't have a static crt by default then see if the
- // `+crt-static` feature was passed.
- if self.target.target.options.crt_static_default { !found_negative } else { found_positive }
+ if found_positive || found_negative {
+ found_positive
+ } else if crate_type == Some(config::CrateType::ProcMacro)
+ || crate_type == None && self.opts.crate_types.contains(&config::CrateType::ProcMacro)
+ {
+ // FIXME: When crate_type is not available,
+ // we use compiler options to determine the crate_type.
+ // We can't check `#![crate_type = "proc-macro"]` here.
+ false
+ } else {
+ self.target.target.options.crt_static_default
+ }
}
pub fn must_not_eliminate_frame_pointers(&self) -> bool {
sopts.debugging_opts.time_passes,
);
+ let ctfe_backtrace = Lock::new(match env::var("RUSTC_CTFE_BACKTRACE") {
+ Ok(ref val) if val == "immediate" => CtfeBacktrace::Immediate,
+ Ok(ref val) if val != "0" => CtfeBacktrace::Capture,
+ _ => CtfeBacktrace::Disabled,
+ });
+
let sess = Session {
target: target_cfg,
host,
features: Once::new(),
recursion_limit: Once::new(),
type_length_limit: Once::new(),
- imported_macro_spans: OneThread::new(RefCell::new(FxHashMap::default())),
+ const_eval_limit: Once::new(),
incr_comp_session: OneThread::new(RefCell::new(IncrCompSession::NotInitialized)),
cgu_reuse_tracker,
prof,
trait_methods_not_found: Lock::new(Default::default()),
confused_type_with_std_module: Lock::new(Default::default()),
system_library_path: OneThread::new(RefCell::new(Default::default())),
+ ctfe_backtrace,
};
validate_commandline_args_with_session_available(&sess);
use crate::HashStableContext;
+use rustc_data_structures::fingerprint::Fingerprint;
use rustc_data_structures::stable_hasher::{HashStable, StableHasher};
use rustc_data_structures::AtomicRef;
use rustc_index::vec::Idx;
+use rustc_macros::HashStable_Generic;
use rustc_serialize::{Decoder, Encoder};
+use std::borrow::Borrow;
use std::fmt;
use std::{u32, u64};
/// Item definitions in the currently-compiled crate would have the `CrateNum`
/// `LOCAL_CRATE` in their `DefId`.
-pub const LOCAL_CRATE: CrateNum = CrateNum::Index(CrateId::from_u32_const(0));
+pub const LOCAL_CRATE: CrateNum = CrateNum::Index(CrateId::from_u32(0));
impl Idx for CrateNum {
#[inline]
}
}
+#[derive(
+ Copy,
+ Clone,
+ Hash,
+ PartialEq,
+ Eq,
+ PartialOrd,
+ Ord,
+ Debug,
+ RustcEncodable,
+ RustcDecodable,
+ HashStable_Generic
+)]
+pub struct DefPathHash(pub Fingerprint);
+
+impl Borrow<Fingerprint> for DefPathHash {
+ #[inline]
+ fn borrow(&self) -> &Fingerprint {
+ &self.0
+ }
+}
+
rustc_index::newtype_index! {
/// A DefIndex is an index into the hir-map for a crate, identifying a
/// particular definition. It should really be considered an interned
/// Makes a local `DefId` from the given `DefIndex`.
#[inline]
pub fn local(index: DefIndex) -> DefId {
- DefId { krate: LOCAL_CRATE, index: index }
+ DefId { krate: LOCAL_CRATE, index }
}
#[inline]
}
#[inline]
- pub fn to_local(self) -> LocalDefId {
- LocalDefId::from_def_id(self)
+ pub fn as_local(self) -> Option<LocalDefId> {
+ if self.is_local() { Some(LocalDefId { local_def_index: self.index }) } else { None }
+ }
+
+ #[inline]
+ pub fn expect_local(self) -> LocalDefId {
+ self.as_local().unwrap_or_else(|| panic!("DefId::expect_local: `{:?}` isn't local", self))
}
pub fn is_top_level_module(self) -> bool {
/// few cases where we know that only DefIds from the local crate are expected
/// and a DefId from a different crate would signify a bug somewhere. This
/// is when LocalDefId comes in handy.
-#[derive(Clone, Copy, PartialEq, Eq, Hash)]
-pub struct LocalDefId(DefIndex);
+#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]
+pub struct LocalDefId {
+ pub local_def_index: DefIndex,
+}
-impl LocalDefId {
+impl Idx for LocalDefId {
#[inline]
- pub fn from_def_id(def_id: DefId) -> LocalDefId {
- assert!(def_id.is_local());
- LocalDefId(def_id.index)
+ fn new(idx: usize) -> Self {
+ LocalDefId { local_def_index: Idx::new(idx) }
}
+ #[inline]
+ fn index(self) -> usize {
+ self.local_def_index.index()
+ }
+}
+impl LocalDefId {
#[inline]
pub fn to_def_id(self) -> DefId {
- DefId { krate: LOCAL_CRATE, index: self.0 }
+ DefId { krate: LOCAL_CRATE, index: self.local_def_index }
}
}
true
}
- fn modern(&self, ctxt: SyntaxContext) -> SyntaxContext {
+ fn normalize_to_macros_2_0(&self, ctxt: SyntaxContext) -> SyntaxContext {
self.syntax_context_data[ctxt.0 as usize].opaque
}
- fn modern_and_legacy(&self, ctxt: SyntaxContext) -> SyntaxContext {
+ fn normalize_to_macro_rules(&self, ctxt: SyntaxContext) -> SyntaxContext {
self.syntax_context_data[ctxt.0 as usize].opaque_and_semitransparent
}
let call_site_ctxt = self.expn_data(expn_id).call_site.ctxt();
let mut call_site_ctxt = if transparency == Transparency::SemiTransparent {
- self.modern(call_site_ctxt)
+ self.normalize_to_macros_2_0(call_site_ctxt)
} else {
- self.modern_and_legacy(call_site_ctxt)
+ self.normalize_to_macro_rules(call_site_ctxt)
};
if call_site_ctxt == SyntaxContext::root() {
HygieneData::with(|data| data.adjust(self, expn_id))
}
- /// Like `SyntaxContext::adjust`, but also modernizes `self`.
- pub fn modernize_and_adjust(&mut self, expn_id: ExpnId) -> Option<ExpnId> {
+ /// Like `SyntaxContext::adjust`, but also normalizes `self` to macros 2.0.
+ pub fn normalize_to_macros_2_0_and_adjust(&mut self, expn_id: ExpnId) -> Option<ExpnId> {
HygieneData::with(|data| {
- *self = data.modern(*self);
+ *self = data.normalize_to_macros_2_0(*self);
data.adjust(self, expn_id)
})
}
pub fn glob_adjust(&mut self, expn_id: ExpnId, glob_span: Span) -> Option<Option<ExpnId>> {
HygieneData::with(|data| {
let mut scope = None;
- let mut glob_ctxt = data.modern(glob_span.ctxt());
+ let mut glob_ctxt = data.normalize_to_macros_2_0(glob_span.ctxt());
while !data.is_descendant_of(expn_id, data.outer_expn(glob_ctxt)) {
scope = Some(data.remove_mark(&mut glob_ctxt).0);
if data.remove_mark(self).0 != scope.unwrap() {
return None;
}
- let mut glob_ctxt = data.modern(glob_span.ctxt());
+ let mut glob_ctxt = data.normalize_to_macros_2_0(glob_span.ctxt());
let mut marks = Vec::new();
while !data.is_descendant_of(expn_id, data.outer_expn(glob_ctxt)) {
marks.push(data.remove_mark(&mut glob_ctxt));
pub fn hygienic_eq(self, other: SyntaxContext, expn_id: ExpnId) -> bool {
HygieneData::with(|data| {
- let mut self_modern = data.modern(self);
- data.adjust(&mut self_modern, expn_id);
- self_modern == data.modern(other)
+ let mut self_normalized = data.normalize_to_macros_2_0(self);
+ data.adjust(&mut self_normalized, expn_id);
+ self_normalized == data.normalize_to_macros_2_0(other)
})
}
#[inline]
- pub fn modern(self) -> SyntaxContext {
- HygieneData::with(|data| data.modern(self))
+ pub fn normalize_to_macros_2_0(self) -> SyntaxContext {
+ HygieneData::with(|data| data.normalize_to_macros_2_0(self))
}
#[inline]
- pub fn modern_and_legacy(self) -> SyntaxContext {
- HygieneData::with(|data| data.modern_and_legacy(self))
+ pub fn normalize_to_macro_rules(self) -> SyntaxContext {
+ HygieneData::with(|data| data.normalize_to_macro_rules(self))
}
#[inline]
#![doc(html_root_url = "https://doc.rust-lang.org/nightly/")]
#![feature(crate_visibility_modifier)]
+#![feature(const_if_match)]
+#![feature(const_fn)]
+#![feature(const_panic)]
#![feature(nll)]
#![feature(optin_builtin_traits)]
#![feature(specialization)]
use hygiene::Transparency;
pub use hygiene::{DesugaringKind, ExpnData, ExpnId, ExpnKind, MacroKind, SyntaxContext};
pub mod def_id;
-use def_id::DefId;
+use def_id::{CrateNum, DefId, LOCAL_CRATE};
mod span_encoding;
pub use span_encoding::{Span, DUMMY_SP};
)]
pub enum FileName {
Real(PathBuf),
- /// A macro. This includes the full name of the macro, so that there are no clashes.
- Macros(String),
/// Call to `quote!`.
QuoteExpansion(u64),
/// Command line.
use FileName::*;
match *self {
Real(ref path) => write!(fmt, "{}", path.display()),
- Macros(ref name) => write!(fmt, "<{} macros>", name),
QuoteExpansion(_) => write!(fmt, "<quote expansion>"),
MacroExpansion(_) => write!(fmt, "<macro expansion>"),
Anon(_) => write!(fmt, "<anon>"),
use FileName::*;
match *self {
Real(_) => true,
- Macros(_)
- | Anon(_)
+ Anon(_)
| MacroExpansion(_)
| ProcMacroSourceCode(_)
| CfgSpec(_)
}
}
- pub fn is_macros(&self) -> bool {
- use FileName::*;
- match *self {
- Real(_)
- | Anon(_)
- | MacroExpansion(_)
- | ProcMacroSourceCode(_)
- | CfgSpec(_)
- | CliCrateAttr(_)
- | Custom(_)
- | QuoteExpansion(_)
- | DocTest(_, _) => false,
- Macros(_) => true,
- }
- }
-
pub fn quote_expansion_source_code(src: &str) -> FileName {
let mut hasher = StableHasher::new();
src.hash(&mut hasher);
}
#[inline]
- pub fn modernize_and_adjust(&mut self, expn_id: ExpnId) -> Option<ExpnId> {
+ pub fn normalize_to_macros_2_0_and_adjust(&mut self, expn_id: ExpnId) -> Option<ExpnId> {
let mut span = self.data();
- let mark = span.ctxt.modernize_and_adjust(expn_id);
+ let mark = span.ctxt.normalize_to_macros_2_0_and_adjust(expn_id);
*self = Span::new(span.lo, span.hi, span.ctxt);
mark
}
}
#[inline]
- pub fn modern(self) -> Span {
+ pub fn normalize_to_macros_2_0(self) -> Span {
let span = self.data();
- span.with_ctxt(span.ctxt.modern())
+ span.with_ctxt(span.ctxt.normalize_to_macros_2_0())
}
#[inline]
- pub fn modern_and_legacy(self) -> Span {
+ pub fn normalize_to_macro_rules(self) -> Span {
let span = self.data();
- span.with_ctxt(span.ctxt.modern_and_legacy())
+ span.with_ctxt(span.ctxt.normalize_to_macro_rules())
}
}
pub diff: u32,
}
-/// The state of the lazy external source loading mechanism of a `SourceFile`.
-#[derive(PartialEq, Eq, Clone)]
+#[derive(PartialEq, Eq, Clone, Debug)]
pub enum ExternalSource {
+ /// No external source has to be loaded, since the `SourceFile` represents a local crate.
+ Unneeded,
+ Foreign {
+ kind: ExternalSourceKind,
+ /// This `SourceFile`'s byte offset within the `source_map` of its original crate.
+ original_start_pos: BytePos,
+ /// The end of this `SourceFile` within the `source_map` of its original crate.
+ original_end_pos: BytePos,
+ },
+}
+
+/// The state of the lazy external source loading mechanism of a `SourceFile`.
+#[derive(PartialEq, Eq, Clone, Debug)]
+pub enum ExternalSourceKind {
/// The external source has been loaded already.
Present(String),
/// No attempt has been made to load the external source.
AbsentOk,
/// A failed attempt has been made to load the external source.
AbsentErr,
- /// No external source has to be loaded, since the `SourceFile` represents a local crate.
Unneeded,
}
impl ExternalSource {
pub fn is_absent(&self) -> bool {
- match *self {
- ExternalSource::Present(_) => false,
+ match self {
+ ExternalSource::Foreign { kind: ExternalSourceKind::Present(_), .. } => false,
_ => true,
}
}
pub fn get_source(&self) -> Option<&str> {
- match *self {
- ExternalSource::Present(ref src) => Some(src),
+ match self {
+ ExternalSource::Foreign { kind: ExternalSourceKind::Present(ref src), .. } => Some(src),
_ => None,
}
}
/// The unmapped path of the file that the source came from.
/// Set to `None` if the `SourceFile` was imported from an external crate.
pub unmapped_path: Option<FileName>,
- /// Indicates which crate this `SourceFile` was imported from.
- pub crate_of_origin: u32,
/// The complete source code.
pub src: Option<Lrc<String>>,
/// The source code's hash.
pub normalized_pos: Vec<NormalizedPos>,
/// A hash of the filename, used for speeding up hashing in incremental compilation.
pub name_hash: u128,
+ /// Indicates which crate this `SourceFile` was imported from.
+ pub cnum: CrateNum,
}
impl Encodable for SourceFile {
s.emit_struct_field("multibyte_chars", 6, |s| self.multibyte_chars.encode(s))?;
s.emit_struct_field("non_narrow_chars", 7, |s| self.non_narrow_chars.encode(s))?;
s.emit_struct_field("name_hash", 8, |s| self.name_hash.encode(s))?;
- s.emit_struct_field("normalized_pos", 9, |s| self.normalized_pos.encode(s))
+ s.emit_struct_field("normalized_pos", 9, |s| self.normalized_pos.encode(s))?;
+ s.emit_struct_field("cnum", 10, |s| self.cnum.encode(s))
})
}
}
let name_hash: u128 = d.read_struct_field("name_hash", 8, |d| Decodable::decode(d))?;
let normalized_pos: Vec<NormalizedPos> =
d.read_struct_field("normalized_pos", 9, |d| Decodable::decode(d))?;
+ let cnum: CrateNum = d.read_struct_field("cnum", 10, |d| Decodable::decode(d))?;
Ok(SourceFile {
name,
name_was_remapped,
unmapped_path: None,
- // `crate_of_origin` has to be set by the importer.
- // This value matches up with `rustc_hir::def_id::INVALID_CRATE`.
- // That constant is not available here, unfortunately.
- crate_of_origin: std::u32::MAX - 1,
start_pos,
end_pos,
src: None,
src_hash,
- external_src: Lock::new(ExternalSource::AbsentOk),
+ // Unused - the metadata decoder will construct
+ // a new SourceFile, filling in `external_src` properly
+ external_src: Lock::new(ExternalSource::Unneeded),
lines,
multibyte_chars,
non_narrow_chars,
normalized_pos,
name_hash,
+ cnum,
})
})
}
name,
name_was_remapped,
unmapped_path: Some(unmapped_path),
- crate_of_origin: 0,
src: Some(Lrc::new(src)),
src_hash,
external_src: Lock::new(ExternalSource::Unneeded),
non_narrow_chars,
normalized_pos,
name_hash,
+ cnum: LOCAL_CRATE,
}
}
where
F: FnOnce() -> Option<String>,
{
- if *self.external_src.borrow() == ExternalSource::AbsentOk {
+ if matches!(
+ *self.external_src.borrow(),
+ ExternalSource::Foreign { kind: ExternalSourceKind::AbsentOk, .. }
+ ) {
let src = get_src();
let mut external_src = self.external_src.borrow_mut();
// Check that no one else has provided the source while we were getting it
- if *external_src == ExternalSource::AbsentOk {
+ if let ExternalSource::Foreign {
+ kind: src_kind @ ExternalSourceKind::AbsentOk, ..
+ } = &mut *external_src
+ {
if let Some(src) = src {
let mut hasher: StableHasher = StableHasher::new();
hasher.write(src.as_bytes());
if hasher.finish::<u128>() == self.src_hash {
- *external_src = ExternalSource::Present(src);
+ *src_kind = ExternalSourceKind::Present(src);
return true;
}
} else {
- *external_src = ExternalSource::AbsentErr;
+ *src_kind = ExternalSourceKind::AbsentErr;
}
false
}
let begin = {
- let line = if let Some(line) = self.lines.get(line_number) {
- line
- } else {
- return None;
- };
+ let line = self.lines.get(line_number)?;
let begin: BytePos = *line - self.start_pos;
begin.to_usize()
};
&self,
filename: FileName,
name_was_remapped: bool,
- crate_of_origin: u32,
src_hash: u128,
name_hash: u128,
source_len: usize,
+ cnum: CrateNum,
mut file_local_lines: Vec<BytePos>,
mut file_local_multibyte_chars: Vec<MultiByteChar>,
mut file_local_non_narrow_chars: Vec<NonNarrowChar>,
mut file_local_normalized_pos: Vec<NormalizedPos>,
+ original_start_pos: BytePos,
+ original_end_pos: BytePos,
) -> Lrc<SourceFile> {
let start_pos = self
.allocate_address_space(source_len)
name: filename,
name_was_remapped,
unmapped_path: None,
- crate_of_origin,
src: None,
src_hash,
- external_src: Lock::new(ExternalSource::AbsentOk),
+ external_src: Lock::new(ExternalSource::Foreign {
+ kind: ExternalSourceKind::AbsentOk,
+ original_start_pos,
+ original_end_pos,
+ }),
start_pos,
end_pos,
lines: file_local_lines,
non_narrow_chars: file_local_non_narrow_chars,
normalized_pos: file_local_normalized_pos,
name_hash,
+ cnum,
});
let mut files = self.files.borrow_mut();
Ok((lo, hi))
}
+ pub fn is_line_before_span_empty(&self, sp: Span) -> bool {
+ match self.span_to_prev_source(sp) {
+ Ok(s) => s.split('\n').last().map(|l| l.trim_start().is_empty()).unwrap_or(false),
+ Err(_) => false,
+ }
+ }
+
pub fn span_to_lines(&self, sp: Span) -> FileLinesResult {
debug!("span_to_lines(sp={:?})", sp);
let (lo, hi) = self.is_valid_span(sp)?;
/// if no character could be found or if an error occurred while retrieving the code snippet.
pub fn span_extend_to_prev_char(&self, sp: Span, c: char) -> Span {
if let Ok(prev_source) = self.span_to_prev_source(sp) {
- let prev_source = prev_source.rsplit(c).nth(0).unwrap_or("").trim_start();
+ let prev_source = prev_source.rsplit(c).next().unwrap_or("").trim_start();
if !prev_source.is_empty() && !prev_source.contains('\n') {
return sp.with_lo(BytePos(sp.lo().0 - prev_source.len() as u32));
}
for ws in &[" ", "\t", "\n"] {
let pat = pat.to_owned() + ws;
if let Ok(prev_source) = self.span_to_prev_source(sp) {
- let prev_source = prev_source.rsplit(&pat).nth(0).unwrap_or("").trim_start();
+ let prev_source = prev_source.rsplit(&pat).next().unwrap_or("").trim_start();
if !prev_source.is_empty() && (!prev_source.contains('\n') || accept_newlines) {
return sp.with_lo(BytePos(sp.lo().0 - prev_source.len() as u32));
}
pub fn span_until_char(&self, sp: Span, c: char) -> Span {
match self.span_to_snippet(sp) {
Ok(snippet) => {
- let snippet = snippet.split(c).nth(0).unwrap_or("").trim_end();
+ let snippet = snippet.split(c).next().unwrap_or("").trim_end();
if !snippet.is_empty() && !snippet.contains('\n') {
sp.with_hi(BytePos(sp.lo().0 + snippet.len() as u32))
} else {
whitespace_found = true;
}
- if whitespace_found && !c.is_whitespace() { false } else { true }
+ !whitespace_found || c.is_whitespace()
})
}
_ => None,
})
}
+
+ pub fn is_imported(&self, sp: Span) -> bool {
+ let source_file_index = self.lookup_source_file_idx(sp.lo());
+ let source_file = &self.files()[source_file_index];
+ source_file.is_imported()
+ }
}
#[derive(Clone)]
abi_unadjusted,
abi_vectorcall,
abi_x86_interrupt,
+ abort,
aborts,
address,
add_with_overflow,
caller_location,
cdylib,
cfg,
+ cfg_accessible,
cfg_attr,
cfg_attr_multi,
cfg_doctest,
console,
const_compare_raw_pointers,
const_constructor,
+ const_eval_limit,
const_extern_fn,
const_fn,
const_fn_union,
derive,
diagnostic,
direct,
+ discriminant_value,
doc,
doc_alias,
doc_cfg,
doc_keyword,
doc_masked,
- doc_spotlight,
doctest,
document_private_items,
dotdoteq_in_patterns,
dylib,
dyn_trait,
eh_personality,
- eh_unwind_resume,
enable,
Encodable,
env,
min_align_of,
min_const_fn,
min_const_unsafe_fn,
+ min_specialization,
mips_target_feature,
mmx_target_feature,
module,
plugin_registrar,
plugins,
Poll,
- poll_with_tls_context,
+ poll_with_context,
powerpc_target_feature,
precise_pointer_size_matching,
pref_align_of,
rustc_proc_macro_decls,
rustc_promotable,
rustc_regions,
+ rustc_unsafe_specialization_marker,
+ rustc_specialization_trait,
rustc_stable,
rustc_std_internal_symbol,
rustc_symbol_name,
rustc_variance,
rustfmt,
rust_eh_personality,
- rust_eh_unwind_resume,
rust_oom,
rvalue_static_promotion,
sanitize,
Some,
specialization,
speed,
- spotlight,
sse4a_target_feature,
stable,
staged_api,
target_has_atomic_load_store,
target_thread_local,
task,
+ _task_context,
tbm_target_feature,
termination_trait,
termination_trait_test,
}
/// "Normalize" ident for use in comparisons using "item hygiene".
- /// Identifiers with same string value become same if they came from the same "modern" macro
+ /// Identifiers with the same string value become identical if they came from the same macro 2.0 macro
/// (e.g., `macro` item, but not `macro_rules` item) and stay different if they came from
- /// different "modern" macros.
+ /// different macro 2.0 macros.
/// Technically, this operation strips all non-opaque marks from ident's syntactic context.
- pub fn modern(self) -> Ident {
- Ident::new(self.name, self.span.modern())
+ pub fn normalize_to_macros_2_0(self) -> Ident {
+ Ident::new(self.name, self.span.normalize_to_macros_2_0())
}
/// "Normalize" ident for use in comparisons using "local variable hygiene".
/// macro (e.g., `macro` or `macro_rules!` items) and stay different if they came from different
/// non-transparent macros.
/// Technically, this operation strips all transparent marks from ident's syntactic context.
- pub fn modern_and_legacy(self) -> Ident {
- Ident::new(self.name, self.span.modern_and_legacy())
+ pub fn normalize_to_macro_rules(self) -> Ident {
+ Ident::new(self.name, self.span.normalize_to_macro_rules())
}
/// Convert the name to a `SymbolStr`. This is a slowish operation because
}
}
+/// A newtype around `Ident` that calls [Ident::normalize_to_macro_rules] on
+/// construction.
+// FIXME(matthewj, petrochenkov) Use this more often, add a similar
+// `ModernIdent` struct and use that as well.
+#[derive(Copy, Clone, Eq, PartialEq, Hash)]
+pub struct MacroRulesNormalizedIdent(Ident);
+
+impl MacroRulesNormalizedIdent {
+ pub fn new(ident: Ident) -> Self {
+ Self(ident.normalize_to_macro_rules())
+ }
+}
+
+impl fmt::Debug for MacroRulesNormalizedIdent {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ fmt::Debug::fmt(&self.0, f)
+ }
+}
+
+impl fmt::Display for MacroRulesNormalizedIdent {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ fmt::Display::fmt(&self.0, f)
+ }
+}
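The normalize-on-construction pattern behind `MacroRulesNormalizedIdent` can be sketched outside the compiler. Here hygiene data is replaced by a plain `String` and lowercasing stands in for `normalize_to_macro_rules`; both substitutions are purely illustrative:

```rust
use std::fmt;

// Stand-in for `Ident`: hygiene context is modeled as a plain String.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
struct Ident(String);

impl Ident {
    // Stand-in for `normalize_to_macro_rules`; any idempotent map works.
    fn normalize(self) -> Ident {
        Ident(self.0.to_lowercase())
    }
}

// Normalizing in the constructor makes "already normalized" a type-level
// fact: code holding a `NormalizedIdent` can never compare an
// unnormalized ident against a normalized one by accident.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
struct NormalizedIdent(Ident);

impl NormalizedIdent {
    fn new(ident: Ident) -> Self {
        Self(ident.normalize())
    }
}

impl fmt::Display for NormalizedIdent {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str(&(self.0).0)
    }
}

fn main() {
    let a = NormalizedIdent::new(Ident("Foo".into()));
    let b = NormalizedIdent::new(Ident("foo".into()));
    // Distinct raw idents compare equal once both are normalized.
    assert_eq!(a, b);
    assert_eq!(a.to_string(), "foo");
}
```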
+
/// An interned string.
///
/// Internally, a `Symbol` is implemented as an index, and all operations
impl Symbol {
const fn new(n: u32) -> Self {
- Symbol(SymbolIndex::from_u32_const(n))
+ Symbol(SymbolIndex::from_u32(n))
}
/// Maps a string to its interned representation.
`librustc_target` contains some very low-level details that are
specific to different compilation targets and so forth.
-For more information about how rustc works, see the [rustc guide].
+For more information about how rustc works, see the [rustc dev guide].
-[rustc guide]: https://rust-lang.github.io/rustc-guide/
+[rustc dev guide]: https://rustc-dev-guide.rust-lang.org/
//
// NB: for all tagged `enum`s (which include all non-C-like
// `enum`s with defined FFI representation), this will
- // match the homogenous computation on the equivalent
+ // match the homogeneous computation on the equivalent
// `struct { tag; union { variant1; ... } }` and/or
// `union { struct { tag; variant1; } ... }`
// (the offsets of variant fields should be identical
- // between the two for either to be a homogenous aggregate).
+ // between the two for either to be a homogeneous aggregate).
let variant_start = total;
for variant_idx in variants.indices() {
let (variant_result, variant_total) =
/// The count of non-variadic arguments.
///
/// Should only be different from args.len() when c_variadic is true.
- /// This can be used to know wether an argument is variadic or not.
+ /// This can be used to know whether an argument is variadic or not.
pub fixed_count: usize,
pub conv: Conv,
}
}
-/// A pair of aligments, ABI-mandated and preferred.
+/// A pair of alignments, ABI-mandated and preferred.
#[derive(Copy, Clone, PartialEq, Eq, Hash, Debug, RustcEncodable, RustcDecodable)]
#[derive(HashStable_Generic)]
pub struct AbiAndPrefAlign {
pub fn offset(&self, i: usize) -> Size {
match *self {
- FieldPlacement::Union(_) => Size::ZERO,
+ FieldPlacement::Union(count) => {
+ assert!(i < count, "tried to access field {} of union with {} fields", i, count);
+ Size::ZERO
+ }
FieldPlacement::Array { stride, count } => {
let i = i as u64;
assert!(i < count);
#[derive(PartialEq, Eq, Hash, Debug, HashStable_Generic)]
pub struct LayoutDetails {
- pub variants: Variants,
+ /// Says where the fields are located within the layout.
+ /// Primitives and uninhabited enums appear as unions without fields.
pub fields: FieldPlacement,
+
+ /// Encodes information about multi-variant layouts.
+ /// Even with `Multiple` variants, a layout still has its own fields! Those are then
+ /// shared between all variants. One of them will be the discriminant,
+ /// but e.g. generators can have more.
+ ///
+ /// To access all fields of this layout, both `fields` and the fields of the active variant
+ /// must be taken into account.
+ pub variants: Variants,
+
+ /// The `abi` defines how this data is passed between functions, and it defines
+ /// value restrictions via `valid_range`.
+ ///
+ /// Note that this is entirely orthogonal to the recursive structure defined by
+ /// `variants` and `fields`; for example, `ManuallyDrop<Result<isize, isize>>` has
+ /// `Abi::ScalarPair`! So, even with non-`Aggregate` `abi`, `fields` and `variants`
+ /// have to be taken into account to find all fields of this layout.
pub abi: Abi,
/// The leaf scalar with the largest number of invalid values
}
}
+/// Trait for context types that can compute layouts of things.
pub trait LayoutOf {
type Ty;
type TyLayout;
}
}
+/// The `TyLayout` above will always be a `MaybeResult<TyLayout<'_, Self>>`.
+/// We can't add the bound due to the lifetime, but this trait is still useful when
+/// writing code that's generic over the `LayoutOf` impl.
+pub trait MaybeResult<T> {
+ type Error;
+
+ fn from(x: Result<T, Self::Error>) -> Self;
+ fn to_result(self) -> Result<T, Self::Error>;
+}
+
+impl<T> MaybeResult<T> for T {
+ type Error = !;
+
+ fn from(Ok(x): Result<T, Self::Error>) -> Self {
+ x
+ }
+ fn to_result(self) -> Result<T, Self::Error> {
+ Ok(self)
+ }
+}
+
+impl<T, E> MaybeResult<T> for Result<T, E> {
+ type Error = E;
+
+ fn from(x: Result<T, Self::Error>) -> Self {
+ x
+ }
+ fn to_result(self) -> Result<T, Self::Error> {
+ self
+ }
+}
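The `MaybeResult` trick (one trait covering both the infallible `T` and the fallible `Result<T, E>` cases) can be reproduced on stable Rust. In this sketch `std::convert::Infallible` stands in for the unstable never type `!` used above, and `from` is renamed `from_result` to avoid confusion with `From::from`:

```rust
use std::convert::Infallible;

trait MaybeResult<T> {
    type Error;
    fn from_result(x: Result<T, Self::Error>) -> Self;
    fn to_result(self) -> Result<T, Self::Error>;
}

// Infallible case: a plain `T` behaves as a `Result` that cannot fail.
impl<T> MaybeResult<T> for T {
    type Error = Infallible;
    fn from_result(x: Result<T, Infallible>) -> Self {
        match x {
            Ok(v) => v,
            Err(e) => match e {}, // uninhabited, so statically unreachable
        }
    }
    fn to_result(self) -> Result<T, Infallible> {
        Ok(self)
    }
}

// Fallible case: a `Result` passes through unchanged.
impl<T, E> MaybeResult<T> for Result<T, E> {
    type Error = E;
    fn from_result(x: Result<T, E>) -> Self {
        x
    }
    fn to_result(self) -> Result<T, E> {
        self
    }
}

fn main() {
    let plain: u32 = <u32 as MaybeResult<u32>>::from_result(Ok(5));
    assert_eq!(plain, 5);
    let fallible = <Result<u32, String> as MaybeResult<u32>>::from_result(Ok(5));
    assert_eq!(fallible, Ok(5));
    assert_eq!(5u32.to_result(), Ok::<u32, Infallible>(5));
}
```

The two impls do not overlap: the blanket impl implements `MaybeResult<Result<T, E>>` for `Result<T, E>`, while the second implements `MaybeResult<T>` for it, so coherence is satisfied.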
+
#[derive(Copy, Clone, PartialEq, Eq)]
pub enum PointerKind {
/// Most general case, we know no restrictions to tell LLVM.
{
Ty::for_variant(self, cx, variant_index)
}
+
+ /// Callers might want to use `C: LayoutOf<Ty=Ty, TyLayout: MaybeResult<Self>>`
+ /// to allow recursion (see `might_permit_raw_init` below for an example).
pub fn field<C>(self, cx: &C, i: usize) -> C::TyLayout
where
Ty: TyLayoutMethods<'a, C>,
{
Ty::field(self, cx, i)
}
+
pub fn pointee_info_at<C>(self, cx: &C, offset: Size) -> Option<PointeeInfo>
where
Ty: TyLayoutMethods<'a, C>,
Abi::Aggregate { sized } => sized && self.size.bytes() == 0,
}
}
+
+ /// Determines if this type permits "raw" initialization by just transmuting some
+ /// memory into an instance of `T`.
+ /// `zero` indicates if the memory is zero-initialized, or alternatively
+ /// left entirely uninitialized.
+ /// This is conservative: if in doubt, it will answer `true`.
+ ///
+ /// FIXME: Once we removed all the conservatism, we could alternatively
+ /// create an all-0/all-undef constant and run the const value validator to see if
+ /// this is a valid value for the given type.
+ pub fn might_permit_raw_init<C, E>(self, cx: &C, zero: bool) -> Result<bool, E>
+ where
+ Self: Copy,
+ Ty: TyLayoutMethods<'a, C>,
+ C: LayoutOf<Ty = Ty, TyLayout: MaybeResult<Self, Error = E>> + HasDataLayout,
+ {
+ let scalar_allows_raw_init = move |s: &Scalar| -> bool {
+ if zero {
+ let range = &s.valid_range;
+ // The range must contain 0.
+ range.contains(&0) || (*range.start() > *range.end()) // wrap-around allows 0
+ } else {
+ // The range must include all values. `valid_range_exclusive` handles
+ // the wrap-around using target arithmetic; with wrap-around, the full
+ // range is the one where `start == end`.
+ let range = s.valid_range_exclusive(cx);
+ range.start == range.end
+ }
+ };
+
+ // Check the ABI.
+ let valid = match &self.abi {
+ Abi::Uninhabited => false, // definitely UB
+ Abi::Scalar(s) => scalar_allows_raw_init(s),
+ Abi::ScalarPair(s1, s2) => scalar_allows_raw_init(s1) && scalar_allows_raw_init(s2),
+ Abi::Vector { element: s, count } => *count == 0 || scalar_allows_raw_init(s),
+ Abi::Aggregate { .. } => true, // Cannot be excluded *right now*.
+ };
+ if !valid {
+ // This is definitely not okay.
+ trace!("might_permit_raw_init({:?}, zero={}): not valid", self.details, zero);
+ return Ok(false);
+ }
+
+ // If we have not found an error yet, we need to recursively descend.
+ // FIXME(#66151): For now, we are conservative and do not do this.
+ Ok(true)
+ }
}
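The zero-init branch of `scalar_allows_raw_init` can be illustrated standalone. A scalar's `valid_range` is an inclusive range over its bit pattern, and a start greater than the end encodes a wrapping range; the all-zero pattern is allowed iff that (possibly wrapping) range hits 0:

```rust
use std::ops::RangeInclusive;

// Mirrors the `zero` branch above: a wrapping range (start > end)
// always covers 0..=end, so 0 is necessarily included.
fn zero_allowed(range: &RangeInclusive<u128>) -> bool {
    range.contains(&0) || (*range.start() > *range.end())
}

fn main() {
    assert!(zero_allowed(&(0..=255)));   // e.g. u8: every pattern is valid
    assert!(!zero_allowed(&(1..=255)));  // e.g. NonZeroU8: 0 is invalid
    assert!(zero_allowed(&(200..=100))); // wrapping: covers 200..=MAX and 0..=100
}
```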
#![doc(html_root_url = "https://doc.rust-lang.org/nightly/")]
#![feature(bool_to_option)]
+#![feature(const_if_match)]
+#![feature(const_fn)]
+#![feature(const_panic)]
#![feature(nll)]
+#![feature(never_type)]
+#![feature(associated_type_bounds)]
+#![feature(exhaustive_patterns)]
#[macro_use]
extern crate log;
-use super::apple_ios_base::{opts, Arch};
+use super::apple_sdk_base::{opts, AppleOS, Arch};
use crate::spec::{LinkerFlavor, Target, TargetOptions, TargetResult};
pub fn target() -> TargetResult {
- let base = opts(Arch::Arm64)?;
+ let base = opts(Arch::Arm64, AppleOS::iOS)?;
Ok(Target {
llvm_target: "arm64-apple-ios".to_string(),
target_endian: "little".to_string(),
--- /dev/null
+use super::apple_sdk_base::{opts, AppleOS, Arch};
+use crate::spec::{LinkerFlavor, Target, TargetOptions, TargetResult};
+
+pub fn target() -> TargetResult {
+ let base = opts(Arch::Arm64, AppleOS::tvOS)?;
+ Ok(Target {
+ llvm_target: "arm64-apple-tvos".to_string(),
+ target_endian: "little".to_string(),
+ target_pointer_width: "64".to_string(),
+ target_c_int_width: "32".to_string(),
+ data_layout: "e-m:o-i64:64-i128:128-n32:64-S128".to_string(),
+ arch: "aarch64".to_string(),
+ target_os: "tvos".to_string(),
+ target_env: String::new(),
+ target_vendor: "apple".to_string(),
+ linker_flavor: LinkerFlavor::Gcc,
+ options: TargetOptions {
+ features: "+neon,+fp-armv8,+cyclone".to_string(),
+ eliminate_frame_pointer: false,
+ max_atomic_width: Some(128),
+ abi_blacklist: super::arm_base::abi_blacklist(),
+ ..base
+ },
+ })
+}
+++ /dev/null
-use crate::spec::{LinkArgs, LinkerFlavor, TargetOptions};
-use std::env;
-use std::io;
-use std::path::Path;
-use std::process::Command;
-
-use Arch::*;
-
-#[allow(non_camel_case_types)]
-#[derive(Copy, Clone)]
-pub enum Arch {
- Armv7,
- Armv7s,
- Arm64,
- I386,
- X86_64,
- X86_64_macabi,
-}
-
-impl Arch {
- pub fn to_string(self) -> &'static str {
- match self {
- Armv7 => "armv7",
- Armv7s => "armv7s",
- Arm64 => "arm64",
- I386 => "i386",
- X86_64 => "x86_64",
- X86_64_macabi => "x86_64",
- }
- }
-}
-
-pub fn get_sdk_root(sdk_name: &str) -> Result<String, String> {
- // Following what clang does
- // (https://github.com/llvm/llvm-project/blob/
- // 296a80102a9b72c3eda80558fb78a3ed8849b341/clang/lib/Driver/ToolChains/Darwin.cpp#L1661-L1678)
- // to allow the SDK path to be set. (For clang, xcrun sets
- // SDKROOT; for rustc, the user or build system can set it, or we
- // can fall back to checking for xcrun on PATH.)
- if let Some(sdkroot) = env::var("SDKROOT").ok() {
- let p = Path::new(&sdkroot);
- match sdk_name {
- // Ignore `SDKROOT` if it's clearly set for the wrong platform.
- "iphoneos"
- if sdkroot.contains("iPhoneSimulator.platform")
- || sdkroot.contains("MacOSX.platform") =>
- {
- ()
- }
- "iphonesimulator"
- if sdkroot.contains("iPhoneOS.platform") || sdkroot.contains("MacOSX.platform") =>
- {
- ()
- }
- "macosx10.15"
- if sdkroot.contains("iPhoneOS.platform")
- || sdkroot.contains("iPhoneSimulator.platform") =>
- {
- ()
- }
- // Ignore `SDKROOT` if it's not a valid path.
- _ if !p.is_absolute() || p == Path::new("/") || !p.exists() => (),
- _ => return Ok(sdkroot),
- }
- }
- let res =
- Command::new("xcrun").arg("--show-sdk-path").arg("-sdk").arg(sdk_name).output().and_then(
- |output| {
- if output.status.success() {
- Ok(String::from_utf8(output.stdout).unwrap())
- } else {
- let error = String::from_utf8(output.stderr);
- let error = format!("process exit with error: {}", error.unwrap());
- Err(io::Error::new(io::ErrorKind::Other, &error[..]))
- }
- },
- );
-
- match res {
- Ok(output) => Ok(output.trim().to_string()),
- Err(e) => Err(format!("failed to get {} SDK path: {}", sdk_name, e)),
- }
-}
-
-fn build_pre_link_args(arch: Arch) -> Result<LinkArgs, String> {
- let sdk_name = match arch {
- Armv7 | Armv7s | Arm64 => "iphoneos",
- I386 | X86_64 => "iphonesimulator",
- X86_64_macabi => "macosx10.15",
- };
-
- let arch_name = arch.to_string();
-
- let sdk_root = get_sdk_root(sdk_name)?;
-
- let mut args = LinkArgs::new();
- args.insert(
- LinkerFlavor::Gcc,
- vec![
- "-arch".to_string(),
- arch_name.to_string(),
- "-isysroot".to_string(),
- sdk_root.clone(),
- "-Wl,-syslibroot".to_string(),
- sdk_root,
- ],
- );
-
- Ok(args)
-}
-
-fn target_cpu(arch: Arch) -> String {
- match arch {
- Armv7 => "cortex-a8", // iOS7 is supported on iPhone 4 and higher
- Armv7s => "cortex-a9",
- Arm64 => "cyclone",
- I386 => "yonah",
- X86_64 => "core2",
- X86_64_macabi => "core2",
- }
- .to_string()
-}
-
-fn link_env_remove(arch: Arch) -> Vec<String> {
- match arch {
- Armv7 | Armv7s | Arm64 | I386 | X86_64 => vec!["MACOSX_DEPLOYMENT_TARGET".to_string()],
- X86_64_macabi => vec!["IPHONEOS_DEPLOYMENT_TARGET".to_string()],
- }
-}
-
-pub fn opts(arch: Arch) -> Result<TargetOptions, String> {
- let pre_link_args = build_pre_link_args(arch)?;
- Ok(TargetOptions {
- cpu: target_cpu(arch),
- dynamic_linking: false,
- executables: true,
- pre_link_args,
- link_env_remove: link_env_remove(arch),
- has_elf_tls: false,
- eliminate_frame_pointer: false,
- ..super::apple_base::opts()
- })
-}
--- /dev/null
+use crate::spec::{LinkArgs, LinkerFlavor, TargetOptions};
+use std::env;
+use std::io;
+use std::path::Path;
+use std::process::Command;
+
+use Arch::*;
+#[allow(non_camel_case_types)]
+#[derive(Copy, Clone)]
+pub enum Arch {
+ Armv7,
+ Armv7s,
+ Arm64,
+ I386,
+ X86_64,
+ X86_64_macabi,
+}
+
+#[allow(non_camel_case_types)]
+#[derive(Copy, Clone)]
+pub enum AppleOS {
+ tvOS,
+ iOS,
+}
+
+impl Arch {
+ pub fn to_string(self) -> &'static str {
+ match self {
+ Armv7 => "armv7",
+ Armv7s => "armv7s",
+ Arm64 => "arm64",
+ I386 => "i386",
+ X86_64 => "x86_64",
+ X86_64_macabi => "x86_64",
+ }
+ }
+}
+
+pub fn get_sdk_root(sdk_name: &str) -> Result<String, String> {
+ // Following what clang does
+ // (https://github.com/llvm/llvm-project/blob/
+ // 296a80102a9b72c3eda80558fb78a3ed8849b341/clang/lib/Driver/ToolChains/Darwin.cpp#L1661-L1678)
+ // to allow the SDK path to be set. (For clang, xcrun sets
+ // SDKROOT; for rustc, the user or build system can set it, or we
+ // can fall back to checking for xcrun on PATH.)
+ if let Some(sdkroot) = env::var("SDKROOT").ok() {
+ let p = Path::new(&sdkroot);
+ match sdk_name {
+ // Ignore `SDKROOT` if it's clearly set for the wrong platform.
+ "appletvos"
+ if sdkroot.contains("TVSimulator.platform")
+ || sdkroot.contains("MacOSX.platform") =>
+ {
+ ()
+ }
+ "appletvsimulator"
+ if sdkroot.contains("TVOS.platform") || sdkroot.contains("MacOSX.platform") =>
+ {
+ ()
+ }
+ "iphoneos"
+ if sdkroot.contains("iPhoneSimulator.platform")
+ || sdkroot.contains("MacOSX.platform") =>
+ {
+ ()
+ }
+ "iphonesimulator"
+ if sdkroot.contains("iPhoneOS.platform") || sdkroot.contains("MacOSX.platform") =>
+ {
+ ()
+ }
+ "macosx10.15"
+ if sdkroot.contains("iPhoneOS.platform")
+ || sdkroot.contains("iPhoneSimulator.platform") =>
+ {
+ ()
+ }
+ // Ignore `SDKROOT` if it's not a valid path.
+ _ if !p.is_absolute() || p == Path::new("/") || !p.exists() => (),
+ _ => return Ok(sdkroot),
+ }
+ }
+ let res =
+ Command::new("xcrun").arg("--show-sdk-path").arg("-sdk").arg(sdk_name).output().and_then(
+ |output| {
+ if output.status.success() {
+ Ok(String::from_utf8(output.stdout).unwrap())
+ } else {
+ let error = String::from_utf8(output.stderr);
+ let error = format!("process exited with error: {}", error.unwrap());
+ Err(io::Error::new(io::ErrorKind::Other, &error[..]))
+ }
+ },
+ );
+
+ match res {
+ Ok(output) => Ok(output.trim().to_string()),
+ Err(e) => Err(format!("failed to get {} SDK path: {}", sdk_name, e)),
+ }
+}
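The `SDKROOT` sanity check above (absolute, not bare `/`, and existing on disk) is separable from the `xcrun` fallback. A sketch of just that validation step, with a hypothetical helper name:

```rust
use std::path::Path;

// Hypothetical helper mirroring the `_ if !p.is_absolute() || ...` arm
// above: accept an SDKROOT value only if it looks like a real SDK path.
fn sdkroot_is_plausible(sdkroot: &str) -> bool {
    let p = Path::new(sdkroot);
    p.is_absolute() && p != Path::new("/") && p.exists()
}

fn main() {
    assert!(!sdkroot_is_plausible("relative/sdk"));      // not absolute
    assert!(!sdkroot_is_plausible("/"));                 // bare root is rejected
    assert!(!sdkroot_is_plausible("/no/such/sdk/path")); // must exist on disk
    let tmp = std::env::temp_dir();
    assert!(sdkroot_is_plausible(tmp.to_str().unwrap()));
}
```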
+
+fn build_pre_link_args(arch: Arch, os: AppleOS) -> Result<LinkArgs, String> {
+ let sdk_name = match (arch, os) {
+ (Arm64, AppleOS::tvOS) => "appletvos",
+ (X86_64, AppleOS::tvOS) => "appletvsimulator",
+ (Armv7, AppleOS::iOS) => "iphoneos",
+ (Armv7s, AppleOS::iOS) => "iphoneos",
+ (Arm64, AppleOS::iOS) => "iphoneos",
+ (I386, AppleOS::iOS) => "iphonesimulator",
+ (X86_64, AppleOS::iOS) => "iphonesimulator",
+ (X86_64_macabi, AppleOS::iOS) => "macosx10.15",
+ _ => unreachable!(),
+ };
+
+ let arch_name = arch.to_string();
+
+ let sdk_root = get_sdk_root(sdk_name)?;
+
+ let mut args = LinkArgs::new();
+ args.insert(
+ LinkerFlavor::Gcc,
+ vec![
+ "-arch".to_string(),
+ arch_name.to_string(),
+ "-isysroot".to_string(),
+ sdk_root.clone(),
+ "-Wl,-syslibroot".to_string(),
+ sdk_root,
+ ],
+ );
+
+ Ok(args)
+}
+
+fn target_cpu(arch: Arch) -> String {
+ match arch {
+ Armv7 => "cortex-a8", // iOS7 is supported on iPhone 4 and higher
+ Armv7s => "cortex-a9",
+ Arm64 => "cyclone",
+ I386 => "yonah",
+ X86_64 => "core2",
+ X86_64_macabi => "core2",
+ }
+ .to_string()
+}
+
+fn link_env_remove(arch: Arch) -> Vec<String> {
+ match arch {
+ Armv7 | Armv7s | Arm64 | I386 | X86_64 => vec!["MACOSX_DEPLOYMENT_TARGET".to_string()],
+ X86_64_macabi => vec!["IPHONEOS_DEPLOYMENT_TARGET".to_string()],
+ }
+}
+
+pub fn opts(arch: Arch, os: AppleOS) -> Result<TargetOptions, String> {
+ let pre_link_args = build_pre_link_args(arch, os)?;
+ Ok(TargetOptions {
+ cpu: target_cpu(arch),
+ dynamic_linking: false,
+ executables: true,
+ pre_link_args,
+ link_env_remove: link_env_remove(arch),
+ has_elf_tls: false,
+ eliminate_frame_pointer: false,
+ ..super::apple_base::opts()
+ })
+}
-use super::apple_ios_base::{opts, Arch};
+use super::apple_sdk_base::{opts, AppleOS, Arch};
use crate::spec::{LinkerFlavor, Target, TargetOptions, TargetResult};
pub fn target() -> TargetResult {
- let base = opts(Arch::Armv7)?;
+ let base = opts(Arch::Armv7, AppleOS::iOS)?;
Ok(Target {
llvm_target: "armv7-apple-ios".to_string(),
target_endian: "little".to_string(),
-use super::apple_ios_base::{opts, Arch};
+use super::apple_sdk_base::{opts, AppleOS, Arch};
use crate::spec::{LinkerFlavor, Target, TargetOptions, TargetResult};
pub fn target() -> TargetResult {
- let base = opts(Arch::Armv7s)?;
+ let base = opts(Arch::Armv7s, AppleOS::iOS)?;
Ok(Target {
llvm_target: "armv7s-apple-ios".to_string(),
target_endian: "little".to_string(),
-use super::apple_ios_base::{opts, Arch};
+use super::apple_sdk_base::{opts, AppleOS, Arch};
use crate::spec::{LinkerFlavor, Target, TargetOptions, TargetResult};
pub fn target() -> TargetResult {
- let base = opts(Arch::I386)?;
+ let base = opts(Arch::I386, AppleOS::iOS)?;
Ok(Target {
llvm_target: "i386-apple-ios".to_string(),
target_endian: "little".to_string(),
base.abi_return_struct_as_int = true;
// Use -GNU here, because of the reason below:
- // Backgound and Problem:
+ // Background and Problem:
// If we use i686-unknown-windows, the LLVM IA32 MSVC generates compiler intrinsic
// _alldiv, _aulldiv, _allrem, _aullrem, _allmul, which will cause undefined symbol.
// A real issue is __aulldiv() is referred by __udivdi3() - udivmod_inner!(), from
// i386/umoddi3.S
// Possible solution:
// 1. Eliminate Intrinsics generation.
- // 1.1 Choose differnt target to bypass isTargetKnownWindowsMSVC().
+ // 1.1 Choose different target to bypass isTargetKnownWindowsMSVC().
// 1.2 Remove the "Setup Windows compiler runtime calls" in LLVM
// 2. Implement Intrinsics.
// We evaluated all options.
pub mod abi;
mod android_base;
mod apple_base;
-mod apple_ios_base;
+mod apple_sdk_base;
mod arm_base;
mod cloudabi_base;
mod dragonfly_base;
("armv7-apple-ios", armv7_apple_ios),
("armv7s-apple-ios", armv7s_apple_ios),
("x86_64-apple-ios-macabi", x86_64_apple_ios_macabi),
+ ("aarch64-apple-tvos", aarch64_apple_tvos),
+ ("x86_64-apple-tvos", x86_64_apple_tvos),
("armebv7r-none-eabi", armebv7r_none_eabi),
("armebv7r-none-eabihf", armebv7r_none_eabihf),
/// user-defined but before post_link_objects. Standard platform
libraries that should always be linked to, usually go here.
pub late_link_args: LinkArgs,
+ /// Linker arguments used in addition to `late_link_args` if at least one
+ /// Rust dependency is dynamically linked.
+ pub late_link_args_dynamic: LinkArgs,
+ /// Linker arguments used in addition to `late_link_args` if all Rust
+ /// dependencies are statically linked.
+ pub late_link_args_static: LinkArgs,
/// Objects to link after all others, always found within the
/// sysroot folder.
pub post_link_objects: Vec<String>, // ... unconditionally
pub archive_format: String,
/// Is asm!() allowed? Defaults to true.
pub allow_asm: bool,
- /// Whether the target uses a custom unwind resumption routine.
- /// By default LLVM lowers `resume` instructions into calls to `_Unwind_Resume`
- /// defined in libgcc. If this option is enabled, the target must provide
- /// `eh_unwind_resume` lang item.
- pub custom_unwind_resume: bool,
/// Whether the runtime startup code requires the `main` function be passed
/// `argc` and `argv` values.
pub main_needs_argc_argv: bool,
post_link_objects: Vec::new(),
post_link_objects_crt: Vec::new(),
late_link_args: LinkArgs::new(),
+ late_link_args_dynamic: LinkArgs::new(),
+ late_link_args_static: LinkArgs::new(),
link_env: Vec::new(),
link_env_remove: Vec::new(),
archive_format: "gnu".to_string(),
- custom_unwind_resume: false,
main_needs_argc_argv: true,
allow_asm: true,
has_elf_tls: false,
key!(pre_link_objects_exe_crt, list);
key!(pre_link_objects_dll, list);
key!(late_link_args, link_args);
+ key!(late_link_args_dynamic, link_args);
+ key!(late_link_args_static, link_args);
key!(post_link_objects, list);
key!(post_link_objects_crt, list);
key!(post_link_args, link_args);
key!(relro_level, RelroLevel)?;
key!(archive_format);
key!(allow_asm, bool);
- key!(custom_unwind_resume, bool);
key!(main_needs_argc_argv, bool);
key!(has_elf_tls, bool);
key!(obj_is_bitcode, bool);
target_option_val!(pre_link_objects_exe_crt);
target_option_val!(pre_link_objects_dll);
target_option_val!(link_args - late_link_args);
+ target_option_val!(link_args - late_link_args_dynamic);
+ target_option_val!(link_args - late_link_args_static);
target_option_val!(post_link_objects);
target_option_val!(post_link_objects_crt);
target_option_val!(link_args - post_link_args);
target_option_val!(relro_level);
target_option_val!(archive_format);
target_option_val!(allow_asm);
- target_option_val!(custom_unwind_resume);
target_option_val!(main_needs_argc_argv);
target_option_val!(has_elf_tls);
target_option_val!(obj_is_bitcode);
// The linker can be installed from `crates.io`.
linker: Some("rust-ptx-linker".to_string()),
- // With `ptx-linker` approach, it can be later overriden via link flags.
+ // With `ptx-linker` approach, it can be later overridden via link flags.
cpu: "sm_30".to_string(),
// FIXME: create tests for the atomics.
// Let the `ptx-linker` to handle LLVM lowering into MC / assembly.
obj_is_bitcode: true,
- // Convinient and predicable naming scheme.
+ // Convenient and predictable naming scheme.
dll_prefix: "".to_string(),
dll_suffix: ".ptx".to_string(),
exe_suffix: ".ptx".to_string(),
);
let mut late_link_args = LinkArgs::new();
+ let mut late_link_args_dynamic = LinkArgs::new();
+ let mut late_link_args_static = LinkArgs::new();
late_link_args.insert(
LinkerFlavor::Gcc,
vec![
"-lmingwex".to_string(),
"-lmingw32".to_string(),
- "-lgcc".to_string(), // alas, mingw* libraries above depend on libgcc
"-lmsvcrt".to_string(),
// mingw's msvcrt is a weird hybrid import library and static library.
// And it seems that the linker fails to use import symbols from msvcrt
"-lkernel32".to_string(),
],
);
+ late_link_args_dynamic.insert(
+ LinkerFlavor::Gcc,
+ vec![
+ // If any of our crates are dynamically linked then we need to use
+ // the shared libgcc_s-dw2-1.dll. This is required to support
+ // unwinding across DLL boundaries.
+ "-lgcc_s".to_string(),
+ "-lgcc".to_string(),
+ "-lkernel32".to_string(),
+ ],
+ );
+ late_link_args_static.insert(
+ LinkerFlavor::Gcc,
+ vec![
+ // If all of our crates are statically linked then we can get away
+ // with statically linking the libgcc unwinding code. This allows
+ // binaries to be redistributed without the libgcc_s-dw2-1.dll
+ // dependency, but unfortunately breaks unwinding across DLL
+ // boundaries when unwinding across FFI boundaries.
+ "-lgcc".to_string(),
+ "-lgcc_eh".to_string(),
+ "-lpthread".to_string(),
+ "-lkernel32".to_string(),
+ ],
+ );
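The split into `late_link_args_dynamic` and `late_link_args_static` means the linker invocation depends on how Rust dependencies are linked. How a driver could combine the three sets is sketched below; `LinkArgs`/`LinkerFlavor` are simplified away to plain string slices, and the function name is made up for illustration:

```rust
// Hypothetical driver step: start with the unconditional late args, then
// append the set matching the crate graph's linkage.
fn linker_args(
    late: &[&str],
    late_dynamic: &[&str],
    late_static: &[&str],
    any_rust_dylib: bool,
) -> Vec<String> {
    let mut args: Vec<String> = late.iter().map(|s| s.to_string()).collect();
    // `late_link_args_dynamic` applies when at least one Rust dependency
    // is dynamically linked; `late_link_args_static` when all are static.
    let extra = if any_rust_dylib { late_dynamic } else { late_static };
    args.extend(extra.iter().map(|s| s.to_string()));
    args
}

fn main() {
    let dynamic = linker_args(&["-lmsvcrt"], &["-lgcc_s"], &["-lgcc_eh", "-lpthread"], true);
    assert_eq!(dynamic, vec!["-lmsvcrt", "-lgcc_s"]);
    let fully_static = linker_args(&["-lmsvcrt"], &["-lgcc_s"], &["-lgcc_eh", "-lpthread"], false);
    assert_eq!(fully_static, vec!["-lmsvcrt", "-lgcc_eh", "-lpthread"]);
}
```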
TargetOptions {
// FIXME(#13846) this should be enabled for windows
"rsbegin.o".to_string(),
],
late_link_args,
+ late_link_args_dynamic,
+ late_link_args_static,
post_link_objects: vec!["rsend.o".to_string()],
- custom_unwind_resume: true,
abi_return_struct_as_int: true,
emit_debug_gdb_scripts: false,
requires_uwtable: true,
pre_link_objects_dll: vec!["rsbegin.o".to_string()],
late_link_args,
post_link_objects: vec!["rsend.o".to_string()],
- custom_unwind_resume: true,
abi_return_struct_as_int: true,
emit_debug_gdb_scripts: false,
requires_uwtable: true,
-use super::apple_ios_base::{opts, Arch};
+use super::apple_sdk_base::{opts, AppleOS, Arch};
use crate::spec::{LinkerFlavor, Target, TargetOptions, TargetResult};
pub fn target() -> TargetResult {
- let base = opts(Arch::X86_64)?;
+ let base = opts(Arch::X86_64, AppleOS::iOS)?;
Ok(Target {
llvm_target: "x86_64-apple-ios".to_string(),
target_endian: "little".to_string(),
-use super::apple_ios_base::{opts, Arch};
+use super::apple_sdk_base::{opts, AppleOS, Arch};
use crate::spec::{LinkerFlavor, Target, TargetOptions, TargetResult};
pub fn target() -> TargetResult {
- let base = opts(Arch::X86_64_macabi)?;
+ let base = opts(Arch::X86_64_macabi, AppleOS::iOS)?;
Ok(Target {
llvm_target: "x86_64-apple-ios13.0-macabi".to_string(),
target_endian: "little".to_string(),
--- /dev/null
+use super::apple_sdk_base::{opts, AppleOS, Arch};
+use crate::spec::{LinkerFlavor, Target, TargetOptions, TargetResult};
+
+pub fn target() -> TargetResult {
+ let base = opts(Arch::X86_64, AppleOS::tvOS)?;
+ Ok(Target {
+ llvm_target: "x86_64-apple-tvos".to_string(),
+ target_endian: "little".to_string(),
+ target_pointer_width: "64".to_string(),
+ target_c_int_width: "32".to_string(),
+ data_layout: "e-m:o-i64:64-f80:128-n8:16:32:64-S128".to_string(),
+ arch: "x86_64".to_string(),
+ target_os: "tvos".to_string(),
+ target_env: String::new(),
+ target_vendor: "apple".to_string(),
+ linker_flavor: LinkerFlavor::Gcc,
+ options: TargetOptions { max_atomic_width: Some(64), stack_probes: true, ..base },
+ })
+}
--- /dev/null
+[package]
+authors = ["The Rust Project Developers"]
+name = "rustc_trait_selection"
+version = "0.0.0"
+edition = "2018"
+
+[lib]
+name = "rustc_trait_selection"
+path = "lib.rs"
+doctest = false
+
+[dependencies]
+fmt_macros = { path = "../libfmt_macros" }
+log = { version = "0.4", features = ["release_max_level_info", "std"] }
+rustc_attr = { path = "../librustc_attr" }
+rustc = { path = "../librustc" }
+rustc_ast = { path = "../librustc_ast" }
+rustc_data_structures = { path = "../librustc_data_structures" }
+rustc_errors = { path = "../librustc_errors" }
+rustc_hir = { path = "../librustc_hir" }
+rustc_index = { path = "../librustc_index" }
+rustc_infer = { path = "../librustc_infer" }
+rustc_macros = { path = "../librustc_macros" }
+rustc_session = { path = "../librustc_session" }
+rustc_span = { path = "../librustc_span" }
+rustc_target = { path = "../librustc_target" }
+smallvec = { version = "1.0", features = ["union", "may_dangle"] }
--- /dev/null
+use crate::traits::query::outlives_bounds::InferCtxtExt as _;
+use crate::traits::{self, TraitEngine, TraitEngineExt};
+
+use rustc::arena::ArenaAllocatable;
+use rustc::infer::canonical::{Canonical, CanonicalizedQueryResponse, QueryResponse};
+use rustc::middle::lang_items;
+use rustc::traits::query::Fallible;
+use rustc::ty::{self, Ty, TypeFoldable};
+use rustc_hir as hir;
+use rustc_infer::infer::outlives::env::OutlivesEnvironment;
+use rustc_infer::traits::ObligationCause;
+use rustc_span::{Span, DUMMY_SP};
+
+use std::fmt::Debug;
+
+pub use rustc_infer::infer::*;
+
+pub trait InferCtxtExt<'tcx> {
+ fn type_is_copy_modulo_regions(
+ &self,
+ param_env: ty::ParamEnv<'tcx>,
+ ty: Ty<'tcx>,
+ span: Span,
+ ) -> bool;
+
+ fn partially_normalize_associated_types_in<T>(
+ &self,
+ span: Span,
+ body_id: hir::HirId,
+ param_env: ty::ParamEnv<'tcx>,
+ value: &T,
+ ) -> InferOk<'tcx, T>
+ where
+ T: TypeFoldable<'tcx>;
+}
+
+impl<'cx, 'tcx> InferCtxtExt<'tcx> for InferCtxt<'cx, 'tcx> {
+ fn type_is_copy_modulo_regions(
+ &self,
+ param_env: ty::ParamEnv<'tcx>,
+ ty: Ty<'tcx>,
+ span: Span,
+ ) -> bool {
+ let ty = self.resolve_vars_if_possible(&ty);
+
+ if !(param_env, ty).has_local_value() {
+ return ty.is_copy_modulo_regions(self.tcx, param_env, span);
+ }
+
+ let copy_def_id = self.tcx.require_lang_item(lang_items::CopyTraitLangItem, None);
+
+ // This can get called from typeck (by euv), and `moves_by_default`
+ // rightly refuses to work with inference variables, but
+ // moves_by_default has a cache, which we want to use in other
+ // cases.
+ traits::type_known_to_meet_bound_modulo_regions(self, param_env, ty, copy_def_id, span)
+ }
+
+ /// Normalizes associated types in `value`, potentially returning
+ /// new obligations that must further be processed.
+ fn partially_normalize_associated_types_in<T>(
+ &self,
+ span: Span,
+ body_id: hir::HirId,
+ param_env: ty::ParamEnv<'tcx>,
+ value: &T,
+ ) -> InferOk<'tcx, T>
+ where
+ T: TypeFoldable<'tcx>,
+ {
+ debug!("partially_normalize_associated_types_in(value={:?})", value);
+ let mut selcx = traits::SelectionContext::new(self);
+ let cause = ObligationCause::misc(span, body_id);
+ let traits::Normalized { value, obligations } =
+ traits::normalize(&mut selcx, param_env, cause, value);
+ debug!(
+ "partially_normalize_associated_types_in: result={:?} predicates={:?}",
+ value, obligations
+ );
+ InferOk { value, obligations }
+ }
+}
+
+pub trait InferCtxtBuilderExt<'tcx> {
+ fn enter_canonical_trait_query<K, R>(
+ &mut self,
+ canonical_key: &Canonical<'tcx, K>,
+ operation: impl FnOnce(&InferCtxt<'_, 'tcx>, &mut dyn TraitEngine<'tcx>, K) -> Fallible<R>,
+ ) -> Fallible<CanonicalizedQueryResponse<'tcx, R>>
+ where
+ K: TypeFoldable<'tcx>,
+ R: Debug + TypeFoldable<'tcx>,
+ Canonical<'tcx, QueryResponse<'tcx, R>>: ArenaAllocatable;
+}
+
+impl<'tcx> InferCtxtBuilderExt<'tcx> for InferCtxtBuilder<'tcx> {
+ /// The "main method" for a canonicalized trait query. Given the
+ /// canonical key `canonical_key`, this method will create a new
+ /// inference context, instantiate the key, and run your operation
+ /// `op`. The operation should yield up a result (of type `R`) as
+ /// well as a set of trait obligations that must be fully
+ /// satisfied. These obligations will be processed and the
+ /// canonical result created.
+ ///
+ /// Returns `NoSolution` in the event of any error.
+ ///
+ /// (It might be mildly nicer to implement this on `TyCtxt`, and
+ /// not `InferCtxtBuilder`, but that is a bit tricky right now.
+ /// In part because we would need a `for<'tcx>` sort of
+ /// bound for the closure and in part because it is convenient to
+ /// have `'tcx` be free on this function so that we can talk about
+ /// `K: TypeFoldable<'tcx>`.)
+ fn enter_canonical_trait_query<K, R>(
+ &mut self,
+ canonical_key: &Canonical<'tcx, K>,
+ operation: impl FnOnce(&InferCtxt<'_, 'tcx>, &mut dyn TraitEngine<'tcx>, K) -> Fallible<R>,
+ ) -> Fallible<CanonicalizedQueryResponse<'tcx, R>>
+ where
+ K: TypeFoldable<'tcx>,
+ R: Debug + TypeFoldable<'tcx>,
+ Canonical<'tcx, QueryResponse<'tcx, R>>: ArenaAllocatable,
+ {
+ self.enter_with_canonical(
+ DUMMY_SP,
+ canonical_key,
+ |ref infcx, key, canonical_inference_vars| {
+ let mut fulfill_cx = TraitEngine::new(infcx.tcx);
+ let value = operation(infcx, &mut *fulfill_cx, key)?;
+ infcx.make_canonicalized_query_response(
+ canonical_inference_vars,
+ value,
+ &mut *fulfill_cx,
+ )
+ },
+ )
+ }
+}
+
+pub trait OutlivesEnvironmentExt<'tcx> {
+ fn add_implied_bounds(
+ &mut self,
+ infcx: &InferCtxt<'a, 'tcx>,
+ fn_sig_tys: &[Ty<'tcx>],
+ body_id: hir::HirId,
+ span: Span,
+ );
+}
+
+impl<'tcx> OutlivesEnvironmentExt<'tcx> for OutlivesEnvironment<'tcx> {
+ /// This method adds "implied bounds" into the outlives environment.
+ /// Implied bounds are outlives relationships that we can deduce
+ /// on the basis that certain types must be well-formed -- these are
+ /// either the types that appear in the function signature or else
+ /// the input types to an impl. For example, if you have a function
+ /// like
+ ///
+ /// ```
+ /// fn foo<'a, 'b, T>(x: &'a &'b [T]) { }
+ /// ```
+ ///
+ /// we can assume in the caller's body that `'b: 'a` and that `T:
+ /// 'b` (and hence, transitively, that `T: 'a`). This method would
+ /// add those assumptions into the outlives-environment.
+ ///
+ /// Tests: `src/test/compile-fail/regions-free-region-ordering-*.rs`
+ fn add_implied_bounds(
+ &mut self,
+ infcx: &InferCtxt<'a, 'tcx>,
+ fn_sig_tys: &[Ty<'tcx>],
+ body_id: hir::HirId,
+ span: Span,
+ ) {
+ debug!("add_implied_bounds()");
+
+ for &ty in fn_sig_tys {
+ let ty = infcx.resolve_vars_if_possible(&ty);
+ debug!("add_implied_bounds: ty = {}", ty);
+ let implied_bounds = infcx.implied_outlives_bounds(self.param_env, body_id, ty, span);
+ self.add_outlives_bounds(Some(infcx), implied_bounds)
+ }
+ }
+}
--- /dev/null
+//! This crate defines the trait resolution method.
+//!
+//! - **Traits.** Trait resolution is implemented in the `traits` module.
+//!
+//! For more information about how rustc works, see the [rustc dev guide].
+//!
+//! [rustc dev guide]: https://rustc-dev-guide.rust-lang.org/
+//!
+//! # Note
+//!
+//! This API is completely unstable and subject to change.
+
+#![doc(html_root_url = "https://doc.rust-lang.org/nightly/")]
+#![feature(bool_to_option)]
+#![feature(drain_filter)]
+#![feature(in_band_lifetimes)]
+#![feature(crate_visibility_modifier)]
+#![recursion_limit = "512"] // For rustdoc
+
+#[macro_use]
+extern crate rustc_macros;
+#[cfg(target_arch = "x86_64")]
+#[macro_use]
+extern crate rustc_data_structures;
+#[macro_use]
+extern crate log;
+#[macro_use]
+extern crate rustc;
+
+pub mod infer;
+pub mod opaque_types;
+pub mod traits;
--- /dev/null
+use crate::infer::InferCtxtExt as _;
+use crate::traits::{self, PredicateObligation};
+use rustc::ty::fold::{BottomUpFolder, TypeFoldable, TypeFolder, TypeVisitor};
+use rustc::ty::free_region_map::FreeRegionRelations;
+use rustc::ty::subst::{GenericArg, GenericArgKind, InternalSubsts, SubstsRef};
+use rustc::ty::{self, GenericParamDefKind, Ty, TyCtxt};
+use rustc_data_structures::fx::FxHashMap;
+use rustc_data_structures::sync::Lrc;
+use rustc_hir as hir;
+use rustc_hir::def_id::{DefId, DefIdMap};
+use rustc_hir::Node;
+use rustc_infer::infer::error_reporting::unexpected_hidden_region_diagnostic;
+use rustc_infer::infer::type_variable::{TypeVariableOrigin, TypeVariableOriginKind};
+use rustc_infer::infer::{self, InferCtxt, InferOk};
+use rustc_session::config::nightly_options;
+use rustc_span::Span;
+
+pub type OpaqueTypeMap<'tcx> = DefIdMap<OpaqueTypeDecl<'tcx>>;
+
+/// Information about the opaque types whose values we
+/// are inferring in this function (these are the `impl Trait` that
+/// appear in the return type).
+#[derive(Copy, Clone, Debug)]
+pub struct OpaqueTypeDecl<'tcx> {
+ /// The opaque type (`ty::Opaque`) for this declaration.
+ pub opaque_type: Ty<'tcx>,
+
+ /// The substitutions that we apply to the opaque type that this
+ /// `impl Trait` desugars to. e.g., if:
+ ///
+ /// fn foo<'a, 'b, T>() -> impl Trait<'a>
+ ///
+ /// winds up desugared to:
+ ///
+ /// type Foo<'x, X> = impl Trait<'x>
+ /// fn foo<'a, 'b, T>() -> Foo<'a, T>
+ ///
+ /// then `substs` would be `['a, T]`.
+ pub substs: SubstsRef<'tcx>,
+
+ /// The span of this particular definition of the opaque type. So
+ /// for example:
+ ///
+ /// ```
+ /// type Foo = impl Baz;
+ /// fn bar() -> Foo {
+ /// ^^^ This is the span we are looking for!
+ /// ```
+ ///
+ /// In cases where the fn returns `(impl Trait, impl Trait)` or
+ /// other such combinations, the result is currently
+ /// over-approximated, but better than nothing.
+ pub definition_span: Span,
+
+ /// The type variable that represents the value of the opaque type
+ /// that we require. In other words, after we compile this function,
+    /// we will create a constraint like:
+ ///
+ /// Foo<'a, T> = ?C
+ ///
+ /// where `?C` is the value of this type variable. =) It may
+ /// naturally refer to the type and lifetime parameters in scope
+ /// in this function, though ultimately it should only reference
+ /// those that are arguments to `Foo` in the constraint above. (In
+ /// other words, `?C` should not include `'b`, even though it's a
+ /// lifetime parameter on `foo`.)
+ pub concrete_ty: Ty<'tcx>,
+
+ /// Returns `true` if the `impl Trait` bounds include region bounds.
+ /// For example, this would be true for:
+ ///
+ /// fn foo<'a, 'b, 'c>() -> impl Trait<'c> + 'a + 'b
+ ///
+ /// but false for:
+ ///
+ /// fn foo<'c>() -> impl Trait<'c>
+ ///
+ /// unless `Trait` was declared like:
+ ///
+ /// trait Trait<'c>: 'c
+ ///
+ /// in which case it would be true.
+ ///
+ /// This is used during regionck to decide whether we need to
+ /// impose any additional constraints to ensure that region
+ /// variables in `concrete_ty` wind up being constrained to
+ /// something from `substs` (or, at minimum, things that outlive
+ /// the fn body). (Ultimately, writeback is responsible for this
+ /// check.)
+ pub has_required_region_bounds: bool,
+
+ /// The origin of the opaque type.
+ pub origin: hir::OpaqueTyOrigin,
+}
+
+/// Whether member constraints should be generated for all opaque types
+pub enum GenerateMemberConstraints {
+ /// The default, used by typeck
+ WhenRequired,
+ /// The borrow checker needs member constraints in any case where we don't
+ /// have a `'static` bound. This is because the borrow checker has more
+ /// flexibility in the values of regions. For example, given `f<'a, 'b>`
+ /// the borrow checker can have an inference variable outlive `'a` and `'b`,
+ /// but not be equal to `'static`.
+ IfNoStaticBound,
+}
+
+pub trait InferCtxtExt<'tcx> {
+ fn instantiate_opaque_types<T: TypeFoldable<'tcx>>(
+ &self,
+ parent_def_id: DefId,
+ body_id: hir::HirId,
+ param_env: ty::ParamEnv<'tcx>,
+ value: &T,
+ value_span: Span,
+ ) -> InferOk<'tcx, (T, OpaqueTypeMap<'tcx>)>;
+
+ fn constrain_opaque_types<FRR: FreeRegionRelations<'tcx>>(
+ &self,
+ opaque_types: &OpaqueTypeMap<'tcx>,
+ free_region_relations: &FRR,
+ );
+
+ fn constrain_opaque_type<FRR: FreeRegionRelations<'tcx>>(
+ &self,
+ def_id: DefId,
+ opaque_defn: &OpaqueTypeDecl<'tcx>,
+ mode: GenerateMemberConstraints,
+ free_region_relations: &FRR,
+ );
+
+ /*private*/
+ fn generate_member_constraint(
+ &self,
+ concrete_ty: Ty<'tcx>,
+ opaque_type_generics: &ty::Generics,
+ opaque_defn: &OpaqueTypeDecl<'tcx>,
+ opaque_type_def_id: DefId,
+ );
+
+ /*private*/
+ fn member_constraint_feature_gate(
+ &self,
+ opaque_defn: &OpaqueTypeDecl<'tcx>,
+ opaque_type_def_id: DefId,
+ conflict1: ty::Region<'tcx>,
+ conflict2: ty::Region<'tcx>,
+ ) -> bool;
+
+ fn infer_opaque_definition_from_instantiation(
+ &self,
+ def_id: DefId,
+ substs: SubstsRef<'tcx>,
+ instantiated_ty: Ty<'tcx>,
+ span: Span,
+ ) -> Ty<'tcx>;
+}
+
+impl<'a, 'tcx> InferCtxtExt<'tcx> for InferCtxt<'a, 'tcx> {
+ /// Replaces all opaque types in `value` with fresh inference variables
+ /// and creates appropriate obligations. For example, given the input:
+ ///
+ /// impl Iterator<Item = impl Debug>
+ ///
+ /// this method would create two type variables, `?0` and `?1`. It would
+ /// return the type `?0` but also the obligations:
+ ///
+ /// ?0: Iterator<Item = ?1>
+ /// ?1: Debug
+ ///
+ /// Moreover, it returns a `OpaqueTypeMap` that would map `?0` to
+ /// info about the `impl Iterator<..>` type and `?1` to info about
+ /// the `impl Debug` type.
+ ///
+ /// # Parameters
+ ///
+ /// - `parent_def_id` -- the `DefId` of the function in which the opaque type
+ /// is defined
+ /// - `body_id` -- the body-id with which the resulting obligations should
+ /// be associated
+ /// - `param_env` -- the in-scope parameter environment to be used for
+ /// obligations
+ /// - `value` -- the value within which we are instantiating opaque types
+ /// - `value_span` -- the span where the value came from, used in error reporting
+ fn instantiate_opaque_types<T: TypeFoldable<'tcx>>(
+ &self,
+ parent_def_id: DefId,
+ body_id: hir::HirId,
+ param_env: ty::ParamEnv<'tcx>,
+ value: &T,
+ value_span: Span,
+ ) -> InferOk<'tcx, (T, OpaqueTypeMap<'tcx>)> {
+ debug!(
+ "instantiate_opaque_types(value={:?}, parent_def_id={:?}, body_id={:?}, \
+ param_env={:?}, value_span={:?})",
+ value, parent_def_id, body_id, param_env, value_span,
+ );
+ let mut instantiator = Instantiator {
+ infcx: self,
+ parent_def_id,
+ body_id,
+ param_env,
+ value_span,
+ opaque_types: Default::default(),
+ obligations: vec![],
+ };
+ let value = instantiator.instantiate_opaque_types_in_map(value);
+ InferOk { value: (value, instantiator.opaque_types), obligations: instantiator.obligations }
+ }
+
+ /// Given the map `opaque_types` containing the opaque
+ /// `impl Trait` types whose underlying, hidden types are being
+ /// inferred, this method adds constraints to the regions
+ /// appearing in those underlying hidden types to ensure that they
+ /// at least do not refer to random scopes within the current
+ /// function. These constraints are not (quite) sufficient to
+ /// guarantee that the regions are actually legal values; that
+ /// final condition is imposed after region inference is done.
+ ///
+ /// # The Problem
+ ///
+ /// Let's work through an example to explain how it works. Assume
+ /// the current function is as follows:
+ ///
+ /// ```text
+ /// fn foo<'a, 'b>(..) -> (impl Bar<'a>, impl Bar<'b>)
+ /// ```
+ ///
+ /// Here, we have two `impl Trait` types whose values are being
+ /// inferred (the `impl Bar<'a>` and the `impl
+ /// Bar<'b>`). Conceptually, this is sugar for a setup where we
+ /// define underlying opaque types (`Foo1`, `Foo2`) and then, in
+ /// the return type of `foo`, we *reference* those definitions:
+ ///
+ /// ```text
+ /// type Foo1<'x> = impl Bar<'x>;
+ /// type Foo2<'x> = impl Bar<'x>;
+ /// fn foo<'a, 'b>(..) -> (Foo1<'a>, Foo2<'b>) { .. }
+ /// // ^^^^ ^^
+ /// // | |
+ /// // | substs
+ /// // def_id
+ /// ```
+ ///
+    /// As indicated in the comments above, each of those references
+ /// is (in the compiler) basically a substitution (`substs`)
+ /// applied to the type of a suitable `def_id` (which identifies
+ /// `Foo1` or `Foo2`).
+ ///
+ /// Now, at this point in compilation, what we have done is to
+ /// replace each of the references (`Foo1<'a>`, `Foo2<'b>`) with
+ /// fresh inference variables C1 and C2. We wish to use the values
+ /// of these variables to infer the underlying types of `Foo1` and
+ /// `Foo2`. That is, this gives rise to higher-order (pattern) unification
+ /// constraints like:
+ ///
+ /// ```text
+ /// for<'a> (Foo1<'a> = C1)
+    /// for<'b> (Foo2<'b> = C2)
+ /// ```
+ ///
+    /// For these equations to be satisfiable, the types `C1` and `C2`
+ /// can only refer to a limited set of regions. For example, `C1`
+ /// can only refer to `'static` and `'a`, and `C2` can only refer
+ /// to `'static` and `'b`. The job of this function is to impose that
+ /// constraint.
+ ///
+ /// Up to this point, C1 and C2 are basically just random type
+ /// inference variables, and hence they may contain arbitrary
+ /// regions. In fact, it is fairly likely that they do! Consider
+ /// this possible definition of `foo`:
+ ///
+ /// ```text
+ /// fn foo<'a, 'b>(x: &'a i32, y: &'b i32) -> (impl Bar<'a>, impl Bar<'b>) {
+ /// (&*x, &*y)
+ /// }
+ /// ```
+ ///
+ /// Here, the values for the concrete types of the two impl
+ /// traits will include inference variables:
+ ///
+ /// ```text
+ /// &'0 i32
+ /// &'1 i32
+ /// ```
+ ///
+ /// Ordinarily, the subtyping rules would ensure that these are
+ /// sufficiently large. But since `impl Bar<'a>` isn't a specific
+ /// type per se, we don't get such constraints by default. This
+ /// is where this function comes into play. It adds extra
+ /// constraints to ensure that all the regions which appear in the
+ /// inferred type are regions that could validly appear.
+ ///
+ /// This is actually a bit of a tricky constraint in general. We
+ /// want to say that each variable (e.g., `'0`) can only take on
+ /// values that were supplied as arguments to the opaque type
+ /// (e.g., `'a` for `Foo1<'a>`) or `'static`, which is always in
+ /// scope. We don't have a constraint quite of this kind in the current
+ /// region checker.
+ ///
+ /// # The Solution
+ ///
+ /// We generally prefer to make `<=` constraints, since they
+ /// integrate best into the region solver. To do that, we find the
+ /// "minimum" of all the arguments that appear in the substs: that
+ /// is, some region which is less than all the others. In the case
+ /// of `Foo1<'a>`, that would be `'a` (it's the only choice, after
+ /// all). Then we apply that as a least bound to the variables
+ /// (e.g., `'a <= '0`).
+ ///
+ /// In some cases, there is no minimum. Consider this example:
+ ///
+ /// ```text
+ /// fn baz<'a, 'b>() -> impl Trait<'a, 'b> { ... }
+ /// ```
+ ///
+ /// Here we would report a more complex "in constraint", like `'r
+ /// in ['a, 'b, 'static]` (where `'r` is some region appearing in
+ /// the hidden type).
+ ///
+ /// # Constrain regions, not the hidden concrete type
+ ///
+ /// Note that generating constraints on each region `Rc` is *not*
+    /// the same as generating an outlives constraint on `Tc` itself.
+ /// For example, if we had a function like this:
+ ///
+ /// ```rust
+ /// fn foo<'a, T>(x: &'a u32, y: T) -> impl Foo<'a> {
+ /// (x, y)
+ /// }
+ ///
+ /// // Equivalent to:
+ /// type FooReturn<'a, T> = impl Foo<'a>;
+ /// fn foo<'a, T>(..) -> FooReturn<'a, T> { .. }
+ /// ```
+ ///
+ /// then the hidden type `Tc` would be `(&'0 u32, T)` (where `'0`
+ /// is an inference variable). If we generated a constraint that
+ /// `Tc: 'a`, then this would incorrectly require that `T: 'a` --
+ /// but this is not necessary, because the opaque type we
+ /// create will be allowed to reference `T`. So we only generate a
+ /// constraint that `'0: 'a`.
+ ///
+ /// # The `free_region_relations` parameter
+ ///
+ /// The `free_region_relations` argument is used to find the
+ /// "minimum" of the regions supplied to a given opaque type.
+ /// It must be a relation that can answer whether `'a <= 'b`,
+ /// where `'a` and `'b` are regions that appear in the "substs"
+ /// for the opaque type references (the `<'a>` in `Foo1<'a>`).
+ ///
+ /// Note that we do not impose the constraints based on the
+ /// generic regions from the `Foo1` definition (e.g., `'x`). This
+    /// is because the constraints we are imposing here are basically
+ /// the concern of the one generating the constraining type C1,
+ /// which is the current function. It also means that we can
+ /// take "implied bounds" into account in some cases:
+ ///
+ /// ```text
+ /// trait SomeTrait<'a, 'b> { }
+ /// fn foo<'a, 'b>(_: &'a &'b u32) -> impl SomeTrait<'a, 'b> { .. }
+ /// ```
+ ///
+ /// Here, the fact that `'b: 'a` is known only because of the
+ /// implied bounds from the `&'a &'b u32` parameter, and is not
+ /// "inherent" to the opaque type definition.
+ ///
+ /// # Parameters
+ ///
+ /// - `opaque_types` -- the map produced by `instantiate_opaque_types`
+ /// - `free_region_relations` -- something that can be used to relate
+ /// the free regions (`'a`) that appear in the impl trait.
+ fn constrain_opaque_types<FRR: FreeRegionRelations<'tcx>>(
+ &self,
+ opaque_types: &OpaqueTypeMap<'tcx>,
+ free_region_relations: &FRR,
+ ) {
+ debug!("constrain_opaque_types()");
+
+ for (&def_id, opaque_defn) in opaque_types {
+ self.constrain_opaque_type(
+ def_id,
+ opaque_defn,
+ GenerateMemberConstraints::WhenRequired,
+ free_region_relations,
+ );
+ }
+ }
+
+ /// See `constrain_opaque_types` for documentation.
+ fn constrain_opaque_type<FRR: FreeRegionRelations<'tcx>>(
+ &self,
+ def_id: DefId,
+ opaque_defn: &OpaqueTypeDecl<'tcx>,
+ mode: GenerateMemberConstraints,
+ free_region_relations: &FRR,
+ ) {
+ debug!("constrain_opaque_type()");
+ debug!("constrain_opaque_type: def_id={:?}", def_id);
+ debug!("constrain_opaque_type: opaque_defn={:#?}", opaque_defn);
+
+ let tcx = self.tcx;
+
+ let concrete_ty = self.resolve_vars_if_possible(&opaque_defn.concrete_ty);
+
+ debug!("constrain_opaque_type: concrete_ty={:?}", concrete_ty);
+
+ let opaque_type_generics = tcx.generics_of(def_id);
+
+ let span = tcx.def_span(def_id);
+
+ // If there are required region bounds, we can use them.
+ if opaque_defn.has_required_region_bounds {
+ let predicates_of = tcx.predicates_of(def_id);
+ debug!("constrain_opaque_type: predicates: {:#?}", predicates_of,);
+ let bounds = predicates_of.instantiate(tcx, opaque_defn.substs);
+ debug!("constrain_opaque_type: bounds={:#?}", bounds);
+ let opaque_type = tcx.mk_opaque(def_id, opaque_defn.substs);
+
+ let required_region_bounds =
+ required_region_bounds(tcx, opaque_type, bounds.predicates);
+ debug_assert!(!required_region_bounds.is_empty());
+
+ for required_region in required_region_bounds {
+ concrete_ty.visit_with(&mut ConstrainOpaqueTypeRegionVisitor {
+ tcx: self.tcx,
+ op: |r| self.sub_regions(infer::CallReturn(span), required_region, r),
+ });
+ }
+ if let GenerateMemberConstraints::IfNoStaticBound = mode {
+ self.generate_member_constraint(
+ concrete_ty,
+ opaque_type_generics,
+ opaque_defn,
+ def_id,
+ );
+ }
+ return;
+ }
+
+ // There were no `required_region_bounds`,
+ // so we have to search for a `least_region`.
+ // Go through all the regions used as arguments to the
+ // opaque type. These are the parameters to the opaque
+ // type; so in our example above, `substs` would contain
+        // `['a]` for the first impl trait and `['b]` for the
+ // second.
+ let mut least_region = None;
+ for param in &opaque_type_generics.params {
+ match param.kind {
+ GenericParamDefKind::Lifetime => {}
+ _ => continue,
+ }
+
+ // Get the value supplied for this region from the substs.
+ let subst_arg = opaque_defn.substs.region_at(param.index as usize);
+
+ // Compute the least upper bound of it with the other regions.
+ debug!("constrain_opaque_types: least_region={:?}", least_region);
+ debug!("constrain_opaque_types: subst_arg={:?}", subst_arg);
+ match least_region {
+ None => least_region = Some(subst_arg),
+ Some(lr) => {
+ if free_region_relations.sub_free_regions(self.tcx, lr, subst_arg) {
+ // keep the current least region
+ } else if free_region_relations.sub_free_regions(self.tcx, subst_arg, lr) {
+ // switch to `subst_arg`
+ least_region = Some(subst_arg);
+ } else {
+ // There are two regions (`lr` and
+ // `subst_arg`) which are not relatable. We
+ // can't find a best choice. Therefore,
+ // instead of creating a single bound like
+ // `'r: 'a` (which is our preferred choice),
+ // we will create a "in bound" like `'r in
+ // ['a, 'b, 'c]`, where `'a..'c` are the
+ // regions that appear in the impl trait.
+
+ // For now, enforce a feature gate outside of async functions.
+ self.member_constraint_feature_gate(opaque_defn, def_id, lr, subst_arg);
+
+ return self.generate_member_constraint(
+ concrete_ty,
+ opaque_type_generics,
+ opaque_defn,
+ def_id,
+ );
+ }
+ }
+ }
+ }
+
+ let least_region = least_region.unwrap_or(tcx.lifetimes.re_static);
+ debug!("constrain_opaque_types: least_region={:?}", least_region);
+
+ if let GenerateMemberConstraints::IfNoStaticBound = mode {
+ if least_region != tcx.lifetimes.re_static {
+ self.generate_member_constraint(
+ concrete_ty,
+ opaque_type_generics,
+ opaque_defn,
+ def_id,
+ );
+ }
+ }
+ concrete_ty.visit_with(&mut ConstrainOpaqueTypeRegionVisitor {
+ tcx: self.tcx,
+ op: |r| self.sub_regions(infer::CallReturn(span), least_region, r),
+ });
+ }
+
+ /// As a fallback, we sometimes generate an "in constraint". For
+ /// a case like `impl Foo<'a, 'b>`, where `'a` and `'b` cannot be
+ /// related, we would generate a constraint `'r in ['a, 'b,
+ /// 'static]` for each region `'r` that appears in the hidden type
+ /// (i.e., it must be equal to `'a`, `'b`, or `'static`).
+ ///
+ /// `conflict1` and `conflict2` are the two region bounds that we
+ /// detected which were unrelated. They are used for diagnostics.
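+    ///
+    /// For illustration (hypothetical function):
+    ///
+    /// ```text
+    /// fn f<'a, 'b>(x: &'a u32, y: &'b u32) -> impl Foo<'a, 'b> { .. }
+    /// // hidden type: (&'0 u32, &'1 u32)
+    /// // member constraints: '0 in ['a, 'b, 'static]
+    /// //                     '1 in ['a, 'b, 'static]
+    /// ```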
+ fn generate_member_constraint(
+ &self,
+ concrete_ty: Ty<'tcx>,
+ opaque_type_generics: &ty::Generics,
+ opaque_defn: &OpaqueTypeDecl<'tcx>,
+ opaque_type_def_id: DefId,
+ ) {
+ // Create the set of choice regions: each region in the hidden
+ // type can be equal to any of the region parameters of the
+ // opaque type definition.
+ let choice_regions: Lrc<Vec<ty::Region<'tcx>>> = Lrc::new(
+ opaque_type_generics
+ .params
+ .iter()
+ .filter(|param| match param.kind {
+ GenericParamDefKind::Lifetime => true,
+ GenericParamDefKind::Type { .. } | GenericParamDefKind::Const => false,
+ })
+ .map(|param| opaque_defn.substs.region_at(param.index as usize))
+ .chain(std::iter::once(self.tcx.lifetimes.re_static))
+ .collect(),
+ );
+
+ concrete_ty.visit_with(&mut ConstrainOpaqueTypeRegionVisitor {
+ tcx: self.tcx,
+ op: |r| {
+ self.member_constraint(
+ opaque_type_def_id,
+ opaque_defn.definition_span,
+ concrete_ty,
+ r,
+ &choice_regions,
+ )
+ },
+ });
+ }
+
+ /// Member constraints are presently feature-gated except for
+ /// async-await. We expect to lift this once we've had a bit more
+ /// time.
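+    ///
+    /// For illustration (hypothetical, non-async code that hits the gate):
+    ///
+    /// ```text
+    /// fn f<'a, 'b>() -> impl Trait<'a, 'b> { .. }
+    /// // error: ambiguous lifetime bound in `impl Trait`
+    /// ```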
+ fn member_constraint_feature_gate(
+ &self,
+ opaque_defn: &OpaqueTypeDecl<'tcx>,
+ opaque_type_def_id: DefId,
+ conflict1: ty::Region<'tcx>,
+ conflict2: ty::Region<'tcx>,
+ ) -> bool {
+ // If we have `#![feature(member_constraints)]`, no problems.
+ if self.tcx.features().member_constraints {
+ return false;
+ }
+
+ let span = self.tcx.def_span(opaque_type_def_id);
+
+ // Without a feature-gate, we only generate member-constraints for async-await.
+ let context_name = match opaque_defn.origin {
+ // No feature-gate required for `async fn`.
+ hir::OpaqueTyOrigin::AsyncFn => return false,
+
+ // Otherwise, generate the label we'll use in the error message.
+ hir::OpaqueTyOrigin::TypeAlias
+ | hir::OpaqueTyOrigin::FnReturn
+ | hir::OpaqueTyOrigin::Misc => "impl Trait",
+ };
+ let msg = format!("ambiguous lifetime bound in `{}`", context_name);
+ let mut err = self.tcx.sess.struct_span_err(span, &msg);
+
+ let conflict1_name = conflict1.to_string();
+ let conflict2_name = conflict2.to_string();
+ let label_owned;
+ let label = match (&*conflict1_name, &*conflict2_name) {
+ ("'_", "'_") => "the elided lifetimes here do not outlive one another",
+ _ => {
+ label_owned = format!(
+ "neither `{}` nor `{}` outlives the other",
+ conflict1_name, conflict2_name,
+ );
+ &label_owned
+ }
+ };
+ err.span_label(span, label);
+
+ if nightly_options::is_nightly_build() {
+ err.help("add #![feature(member_constraints)] to the crate attributes to enable");
+ }
+
+ err.emit();
+ true
+ }
+
+ /// Given the fully resolved, instantiated type for an opaque
+ /// type, i.e., the value of an inference variable like C1 or C2
+ /// (*), computes the "definition type" for an opaque type
+ /// definition -- that is, the inferred value of `Foo1<'x>` or
+ /// `Foo2<'x>` that we would conceptually use in its definition:
+ ///
+ /// type Foo1<'x> = impl Bar<'x> = AAA; <-- this type AAA
+ /// type Foo2<'x> = impl Bar<'x> = BBB; <-- or this type BBB
+ /// fn foo<'a, 'b>(..) -> (Foo1<'a>, Foo2<'b>) { .. }
+ ///
+ /// Note that these values are defined in terms of a distinct set of
+ /// generic parameters (`'x` instead of `'a`) from C1 or C2. The main
+ /// purpose of this function is to do that translation.
+ ///
+ /// (*) C1 and C2 were introduced in the comments on
+ /// `constrain_opaque_types`. Read that comment for more context.
+ ///
+ /// # Parameters
+ ///
+ /// - `def_id`, the `impl Trait` type
+ /// - `substs`, the substs used to instantiate this opaque type
+ /// - `instantiated_ty`, the inferred type C1 -- fully resolved, lifted version of
+ /// `opaque_defn.concrete_ty`
+ fn infer_opaque_definition_from_instantiation(
+ &self,
+ def_id: DefId,
+ substs: SubstsRef<'tcx>,
+ instantiated_ty: Ty<'tcx>,
+ span: Span,
+ ) -> Ty<'tcx> {
+ debug!(
+ "infer_opaque_definition_from_instantiation(def_id={:?}, instantiated_ty={:?})",
+ def_id, instantiated_ty
+ );
+
+ // Use substs to build up a reverse map from regions to their
+        // identity mappings. This is necessary because `impl
+        // Trait` lifetimes are computed by replacing existing
+ // lifetimes with 'static and remapping only those used in the
+ // `impl Trait` return type, resulting in the parameters
+ // shifting.
+ let id_substs = InternalSubsts::identity_for_item(self.tcx, def_id);
+ let map: FxHashMap<GenericArg<'tcx>, GenericArg<'tcx>> =
+ substs.iter().enumerate().map(|(index, subst)| (*subst, id_substs[index])).collect();
+
+ // Convert the type from the function into a type valid outside
+ // the function, by replacing invalid regions with 'static,
+ // after producing an error for each of them.
+ let definition_ty = instantiated_ty.fold_with(&mut ReverseMapper::new(
+ self.tcx,
+ self.is_tainted_by_errors(),
+ def_id,
+ map,
+ instantiated_ty,
+ span,
+ ));
+ debug!("infer_opaque_definition_from_instantiation: definition_ty={:?}", definition_ty);
+
+ definition_ty
+ }
+}
+
+// Visitor that requires that (almost) all regions in the type visited outlive
+// `least_region`. We cannot use `push_outlives_components` because regions in
+// closure signatures are not included in their outlives components. We need to
+// ensure all regions outlive the given bound so that we don't end up with,
+// say, `ReScope` appearing in a return type and causing ICEs when other
+// functions end up with region constraints involving regions from other
+// functions.
+//
+// We also cannot use `for_each_free_region` because for closures it includes
+// the regions parameters from the enclosing item.
+//
+// We ignore any type parameters because impl trait values are assumed to
+// capture all the in-scope type parameters.
+struct ConstrainOpaqueTypeRegionVisitor<'tcx, OP>
+where
+ OP: FnMut(ty::Region<'tcx>),
+{
+ tcx: TyCtxt<'tcx>,
+ op: OP,
+}
+
+impl<'tcx, OP> TypeVisitor<'tcx> for ConstrainOpaqueTypeRegionVisitor<'tcx, OP>
+where
+ OP: FnMut(ty::Region<'tcx>),
+{
+ fn visit_binder<T: TypeFoldable<'tcx>>(&mut self, t: &ty::Binder<T>) -> bool {
+ t.skip_binder().visit_with(self);
+ false // keep visiting
+ }
+
+ fn visit_region(&mut self, r: ty::Region<'tcx>) -> bool {
+ match *r {
+ // ignore bound regions, keep visiting
+ ty::ReLateBound(_, _) => false,
+ _ => {
+ (self.op)(r);
+ false
+ }
+ }
+ }
+
+ fn visit_ty(&mut self, ty: Ty<'tcx>) -> bool {
+ // We're only interested in types involving regions
+ if !ty.flags.intersects(ty::TypeFlags::HAS_FREE_REGIONS) {
+ return false; // keep visiting
+ }
+
+ match ty.kind {
+ ty::Closure(def_id, ref substs) => {
+ // Skip lifetime parameters of the enclosing item(s)
+
+ for upvar_ty in substs.as_closure().upvar_tys(def_id, self.tcx) {
+ upvar_ty.visit_with(self);
+ }
+
+ substs.as_closure().sig_ty(def_id, self.tcx).visit_with(self);
+ }
+
+ ty::Generator(def_id, ref substs, _) => {
+ // Skip lifetime parameters of the enclosing item(s)
+ // Also skip the witness type, because that has no free regions.
+
+ for upvar_ty in substs.as_generator().upvar_tys(def_id, self.tcx) {
+ upvar_ty.visit_with(self);
+ }
+
+ substs.as_generator().return_ty(def_id, self.tcx).visit_with(self);
+ substs.as_generator().yield_ty(def_id, self.tcx).visit_with(self);
+ substs.as_generator().resume_ty(def_id, self.tcx).visit_with(self);
+ }
+ _ => {
+ ty.super_visit_with(self);
+ }
+ }
+
+ false
+ }
+}
+
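+/// Folds the hidden type of an opaque type, mapping the generic arguments
+/// supplied by the enclosing function back to the opaque type's own generic
+/// parameters. For illustration (hypothetical):
+///
+/// ```text
+/// type Foo<'x> = impl Bar<'x>;
+/// fn f<'a>() -> Foo<'a> { .. }
+/// // map = { 'a => 'x }, so a hidden type `&'a u32` is
+/// // rewritten to `&'x u32` for use in `Foo`'s definition.
+/// ```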
+struct ReverseMapper<'tcx> {
+ tcx: TyCtxt<'tcx>,
+
+ /// If errors have already been reported in this fn, we suppress
+ /// our own errors because they are sometimes derivative.
+ tainted_by_errors: bool,
+
+ opaque_type_def_id: DefId,
+ map: FxHashMap<GenericArg<'tcx>, GenericArg<'tcx>>,
+ map_missing_regions_to_empty: bool,
+
+ /// initially `Some`, set to `None` once error has been reported
+ hidden_ty: Option<Ty<'tcx>>,
+
+ /// Span of function being checked.
+ span: Span,
+}
+
+impl ReverseMapper<'tcx> {
+ fn new(
+ tcx: TyCtxt<'tcx>,
+ tainted_by_errors: bool,
+ opaque_type_def_id: DefId,
+ map: FxHashMap<GenericArg<'tcx>, GenericArg<'tcx>>,
+ hidden_ty: Ty<'tcx>,
+ span: Span,
+ ) -> Self {
+ Self {
+ tcx,
+ tainted_by_errors,
+ opaque_type_def_id,
+ map,
+ map_missing_regions_to_empty: false,
+ hidden_ty: Some(hidden_ty),
+ span,
+ }
+ }
+
+ fn fold_kind_mapping_missing_regions_to_empty(
+ &mut self,
+ kind: GenericArg<'tcx>,
+ ) -> GenericArg<'tcx> {
+ assert!(!self.map_missing_regions_to_empty);
+ self.map_missing_regions_to_empty = true;
+ let kind = kind.fold_with(self);
+ self.map_missing_regions_to_empty = false;
+ kind
+ }
+
+ fn fold_kind_normally(&mut self, kind: GenericArg<'tcx>) -> GenericArg<'tcx> {
+ assert!(!self.map_missing_regions_to_empty);
+ kind.fold_with(self)
+ }
+}
+
+impl TypeFolder<'tcx> for ReverseMapper<'tcx> {
+ fn tcx(&self) -> TyCtxt<'tcx> {
+ self.tcx
+ }
+
+ fn fold_region(&mut self, r: ty::Region<'tcx>) -> ty::Region<'tcx> {
+ match r {
+ // Ignore bound regions and `'static` regions that appear in the
+ // type, we only need to remap regions that reference lifetimes
+        // from the function declaration.
+ // This would ignore `'r` in a type like `for<'r> fn(&'r u32)`.
+ ty::ReLateBound(..) | ty::ReStatic => return r,
+
+ // If regions have been erased (by writeback), don't try to unerase
+ // them.
+ ty::ReErased => return r,
+
+ // The regions that we expect from borrow checking.
+ ty::ReEarlyBound(_) | ty::ReFree(_) | ty::ReEmpty(ty::UniverseIndex::ROOT) => {}
+
+ ty::ReEmpty(_)
+ | ty::RePlaceholder(_)
+ | ty::ReVar(_)
+ | ty::ReScope(_)
+ | ty::ReClosureBound(_) => {
+ // All of the regions in the type should either have been
+ // erased by writeback, or mapped back to named regions by
+ // borrow checking.
+ bug!("unexpected region kind in opaque type: {:?}", r);
+ }
+ }
+
+ let generics = self.tcx().generics_of(self.opaque_type_def_id);
+ match self.map.get(&r.into()).map(|k| k.unpack()) {
+ Some(GenericArgKind::Lifetime(r1)) => r1,
+ Some(u) => panic!("region mapped to unexpected kind: {:?}", u),
+ None if self.map_missing_regions_to_empty || self.tainted_by_errors => {
+ self.tcx.lifetimes.re_root_empty
+ }
+ None if generics.parent.is_some() => {
+ if let Some(hidden_ty) = self.hidden_ty.take() {
+ unexpected_hidden_region_diagnostic(
+ self.tcx,
+ None,
+ self.tcx.def_span(self.opaque_type_def_id),
+ hidden_ty,
+ r,
+ )
+ .emit();
+ }
+ self.tcx.lifetimes.re_root_empty
+ }
+ None => {
+ self.tcx
+ .sess
+ .struct_span_err(self.span, "non-defining opaque type use in defining scope")
+ .span_label(
+ self.span,
+ format!(
+ "lifetime `{}` is part of concrete type but not used in \
+ parameter list of the `impl Trait` type alias",
+ r
+ ),
+ )
+ .emit();
+
+ self.tcx().lifetimes.re_static
+ }
+ }
+ }
+
+ fn fold_ty(&mut self, ty: Ty<'tcx>) -> Ty<'tcx> {
+ match ty.kind {
+ ty::Closure(def_id, substs) => {
+ // I am a horrible monster and I pray for death. When
+ // we encounter a closure here, it is always a closure
+ // from within the function that we are currently
+ // type-checking -- one that is now being encapsulated
+ // in an opaque type. Ideally, we would
+ // go through the types/lifetimes that it references
+ // and treat them just like we would any other type,
+ // which means we would error out if we find any
+ // reference to a type/region that is not in the
+ // "reverse map".
+ //
+ // **However,** in the case of closures, there is a
+ // somewhat subtle (read: hacky) consideration. The
+ // problem is that our closure types currently include
+ // all the lifetime parameters declared on the
+ // enclosing function, even if they are unused by the
+ // closure itself. We can't readily filter them out,
+ // so here we replace those values with `'empty`. This
+ // can't really make a difference to the rest of the
+ // compiler; those regions are ignored for the
+ // outlives relation, and hence don't affect trait
+ // selection or auto traits, and they are erased
+ // during codegen.
+
+ let generics = self.tcx.generics_of(def_id);
+ let substs = self.tcx.mk_substs(substs.iter().enumerate().map(|(index, &kind)| {
+ if index < generics.parent_count {
+ // Accommodate missing regions in the parent kinds...
+ self.fold_kind_mapping_missing_regions_to_empty(kind)
+ } else {
+ // ...but not elsewhere.
+ self.fold_kind_normally(kind)
+ }
+ }));
+
+ self.tcx.mk_closure(def_id, substs)
+ }
+
+ ty::Generator(def_id, substs, movability) => {
+ let generics = self.tcx.generics_of(def_id);
+ let substs = self.tcx.mk_substs(substs.iter().enumerate().map(|(index, &kind)| {
+ if index < generics.parent_count {
+ // Accommodate missing regions in the parent kinds...
+ self.fold_kind_mapping_missing_regions_to_empty(kind)
+ } else {
+ // ...but not elsewhere.
+ self.fold_kind_normally(kind)
+ }
+ }));
+
+ self.tcx.mk_generator(def_id, substs, movability)
+ }
+
+ ty::Param(..) => {
+ // Look it up in the substitution list.
+ match self.map.get(&ty.into()).map(|k| k.unpack()) {
+ // Found it in the substitution list; replace with the parameter from the
+ // opaque type.
+ Some(GenericArgKind::Type(t1)) => t1,
+ Some(u) => panic!("type mapped to unexpected kind: {:?}", u),
+ None => {
+ self.tcx
+ .sess
+ .struct_span_err(
+ self.span,
+ &format!(
+ "type parameter `{}` is part of concrete type but not \
+ used in parameter list for the `impl Trait` type alias",
+ ty
+ ),
+ )
+ .emit();
+
+ self.tcx().types.err
+ }
+ }
+ }
+
+ _ => ty.super_fold_with(self),
+ }
+ }
+
+ fn fold_const(&mut self, ct: &'tcx ty::Const<'tcx>) -> &'tcx ty::Const<'tcx> {
+ trace!("checking const {:?}", ct);
+ // Find a const parameter
+ match ct.val {
+ ty::ConstKind::Param(..) => {
+ // Look it up in the substitution list.
+ match self.map.get(&ct.into()).map(|k| k.unpack()) {
+ // Found it in the substitution list, replace with the parameter from the
+ // opaque type.
+ Some(GenericArgKind::Const(c1)) => c1,
+ Some(u) => panic!("const mapped to unexpected kind: {:?}", u),
+ None => {
+ self.tcx
+ .sess
+ .struct_span_err(
+ self.span,
+ &format!(
+ "const parameter `{}` is part of concrete type but not \
+ used in parameter list for the `impl Trait` type alias",
+ ct
+ ),
+ )
+ .emit();
+
+ self.tcx().consts.err
+ }
+ }
+ }
+
+ _ => ct,
+ }
+ }
+}
+
+struct Instantiator<'a, 'tcx> {
+ infcx: &'a InferCtxt<'a, 'tcx>,
+ parent_def_id: DefId,
+ body_id: hir::HirId,
+ param_env: ty::ParamEnv<'tcx>,
+ value_span: Span,
+ opaque_types: OpaqueTypeMap<'tcx>,
+ obligations: Vec<PredicateObligation<'tcx>>,
+}
+
+impl<'a, 'tcx> Instantiator<'a, 'tcx> {
+ fn instantiate_opaque_types_in_map<T: TypeFoldable<'tcx>>(&mut self, value: &T) -> T {
+ debug!("instantiate_opaque_types_in_map(value={:?})", value);
+ let tcx = self.infcx.tcx;
+ value.fold_with(&mut BottomUpFolder {
+ tcx,
+ ty_op: |ty| {
+ if ty.references_error() {
+ return tcx.types.err;
+ } else if let ty::Opaque(def_id, substs) = ty.kind {
+                    // Check that this `impl Trait` type is
+                    // declared by `parent_def_id` -- i.e., one whose
+ // value we are inferring. At present, this is
+ // always true during the first phase of
+ // type-check, but not always true later on during
+ // NLL. Once we support named opaque types more fully,
+ // this same scenario will be able to arise during all phases.
+ //
+ // Here is an example using type alias `impl Trait`
+ // that indicates the distinction we are checking for:
+ //
+ // ```rust
+ // mod a {
+ // pub type Foo = impl Iterator;
+ // pub fn make_foo() -> Foo { .. }
+ // }
+ //
+ // mod b {
+ // fn foo() -> a::Foo { a::make_foo() }
+ // }
+ // ```
+ //
+                    // Here, the return type of `foo` does reference an
+                    // `Opaque`, but not one whose value is
+ // presently being inferred. You can get into a
+ // similar situation with closure return types
+ // today:
+ //
+ // ```rust
+ // fn foo() -> impl Iterator { .. }
+ // fn bar() {
+ // let x = || foo(); // returns the Opaque assoc with `foo`
+ // }
+ // ```
+ if let Some(opaque_hir_id) = tcx.hir().as_local_hir_id(def_id) {
+ let parent_def_id = self.parent_def_id;
+ let def_scope_default = || {
+ let opaque_parent_hir_id = tcx.hir().get_parent_item(opaque_hir_id);
+ parent_def_id == tcx.hir().local_def_id(opaque_parent_hir_id)
+ };
+ let (in_definition_scope, origin) = match tcx.hir().find(opaque_hir_id) {
+ Some(Node::Item(item)) => match item.kind {
+ // Anonymous `impl Trait`
+ hir::ItemKind::OpaqueTy(hir::OpaqueTy {
+ impl_trait_fn: Some(parent),
+ origin,
+ ..
+ }) => (parent == self.parent_def_id, origin),
+ // Named `type Foo = impl Bar;`
+ hir::ItemKind::OpaqueTy(hir::OpaqueTy {
+ impl_trait_fn: None,
+ origin,
+ ..
+ }) => (
+ may_define_opaque_type(tcx, self.parent_def_id, opaque_hir_id),
+ origin,
+ ),
+ _ => (def_scope_default(), hir::OpaqueTyOrigin::TypeAlias),
+ },
+ Some(Node::ImplItem(item)) => match item.kind {
+ hir::ImplItemKind::OpaqueTy(_) => (
+ may_define_opaque_type(tcx, self.parent_def_id, opaque_hir_id),
+ hir::OpaqueTyOrigin::TypeAlias,
+ ),
+ _ => (def_scope_default(), hir::OpaqueTyOrigin::TypeAlias),
+ },
+ _ => bug!(
+ "expected (impl) item, found {}",
+ tcx.hir().node_to_string(opaque_hir_id),
+ ),
+ };
+ if in_definition_scope {
+ return self.fold_opaque_ty(ty, def_id, substs, origin);
+ }
+
+ debug!(
+ "instantiate_opaque_types_in_map: \
+ encountered opaque outside its definition scope \
+ def_id={:?}",
+ def_id,
+ );
+ }
+ }
+
+ ty
+ },
+ lt_op: |lt| lt,
+ ct_op: |ct| ct,
+ })
+ }
+
+ fn fold_opaque_ty(
+ &mut self,
+ ty: Ty<'tcx>,
+ def_id: DefId,
+ substs: SubstsRef<'tcx>,
+ origin: hir::OpaqueTyOrigin,
+ ) -> Ty<'tcx> {
+ let infcx = self.infcx;
+ let tcx = infcx.tcx;
+
+ debug!("instantiate_opaque_types: Opaque(def_id={:?}, substs={:?})", def_id, substs);
+
+ // Use the same type variable if the exact same opaque type appears more
+ // than once in the return type (e.g., if it's passed to a type alias).
+ if let Some(opaque_defn) = self.opaque_types.get(&def_id) {
+ debug!("instantiate_opaque_types: returning concrete ty {:?}", opaque_defn.concrete_ty);
+ return opaque_defn.concrete_ty;
+ }
+ let span = tcx.def_span(def_id);
+ debug!("fold_opaque_ty {:?} {:?}", self.value_span, span);
+ let ty_var = infcx
+ .next_ty_var(TypeVariableOrigin { kind: TypeVariableOriginKind::TypeInference, span });
+
+ let predicates_of = tcx.predicates_of(def_id);
+ debug!("instantiate_opaque_types: predicates={:#?}", predicates_of,);
+ let bounds = predicates_of.instantiate(tcx, substs);
+
+ let param_env = tcx.param_env(def_id);
+ let InferOk { value: bounds, obligations } =
+ infcx.partially_normalize_associated_types_in(span, self.body_id, param_env, &bounds);
+ self.obligations.extend(obligations);
+
+ debug!("instantiate_opaque_types: bounds={:?}", bounds);
+
+ let required_region_bounds = required_region_bounds(tcx, ty, bounds.predicates.clone());
+ debug!("instantiate_opaque_types: required_region_bounds={:?}", required_region_bounds);
+
+ // Make sure that we are in fact defining the *entire* type
+ // (e.g., `type Foo<T: Bound> = impl Bar;` needs to be
+ // defined by a function like `fn foo<T: Bound>() -> Foo<T>`).
+ debug!("instantiate_opaque_types: param_env={:#?}", self.param_env,);
+ debug!("instantiate_opaque_types: generics={:#?}", tcx.generics_of(def_id),);
+
+ // Ideally, we'd get the span where *this specific `ty` came
+ // from*, but right now we just use the span from the overall
+ // value being folded. In simple cases like `-> impl Foo`,
+ // these are the same span, but not in cases like `-> (impl
+ // Foo, impl Bar)`.
+ let definition_span = self.value_span;
+
+ self.opaque_types.insert(
+ def_id,
+ OpaqueTypeDecl {
+ opaque_type: ty,
+ substs,
+ definition_span,
+ concrete_ty: ty_var,
+ has_required_region_bounds: !required_region_bounds.is_empty(),
+ origin,
+ },
+ );
+ debug!("instantiate_opaque_types: ty_var={:?}", ty_var);
+
+ for predicate in &bounds.predicates {
+ if let ty::Predicate::Projection(projection) = &predicate {
+ if projection.skip_binder().ty.references_error() {
+                // No point in adding these obligations since there's a type error involved.
+ return ty_var;
+ }
+ }
+ }
+
+ self.obligations.reserve(bounds.predicates.len());
+ for predicate in bounds.predicates {
+ // Change the predicate to refer to the type variable,
+ // which will be the concrete type instead of the opaque type.
+ // This also instantiates nested instances of `impl Trait`.
+ let predicate = self.instantiate_opaque_types_in_map(&predicate);
+
+ let cause = traits::ObligationCause::new(span, self.body_id, traits::SizedReturnType);
+
+ // Require that the predicate holds for the concrete type.
+ debug!("instantiate_opaque_types: predicate={:?}", predicate);
+ self.obligations.push(traits::Obligation::new(cause, self.param_env, predicate));
+ }
+
+ ty_var
+ }
+}
+
+/// Returns `true` if `opaque_hir_id` is a sibling or a child of a sibling of `def_id`.
+///
+/// Example:
+/// ```rust
+/// pub mod foo {
+/// pub mod bar {
+/// pub trait Bar { .. }
+///
+/// pub type Baz = impl Bar;
+///
+/// fn f1() -> Baz { .. }
+/// }
+///
+/// fn f2() -> bar::Baz { .. }
+/// }
+/// ```
+///
+/// Here, `def_id` is the `DefId` of the defining use of the opaque type (e.g., `f1` or `f2`),
+/// and `opaque_hir_id` is the `HirId` of the definition of the opaque type `Baz`.
+/// For the above example, this function returns `true` for `f1` and `false` for `f2`.
+pub fn may_define_opaque_type(tcx: TyCtxt<'_>, def_id: DefId, opaque_hir_id: hir::HirId) -> bool {
+ let mut hir_id = tcx.hir().as_local_hir_id(def_id).unwrap();
+
+ // Named opaque types can be defined by any siblings or children of siblings.
+ let scope = tcx.hir().get_defining_scope(opaque_hir_id);
+ // We walk up the node tree until we hit the root or the scope of the opaque type.
+ while hir_id != scope && hir_id != hir::CRATE_HIR_ID {
+ hir_id = tcx.hir().get_parent_item(hir_id);
+ }
+ // Syntactically, we are allowed to define the concrete type if:
+ let res = hir_id == scope;
+ trace!(
+ "may_define_opaque_type(def={:?}, opaque_node={:?}) = {}",
+ tcx.hir().find(hir_id),
+ tcx.hir().get(opaque_hir_id),
+ res
+ );
+ res
+}
+
+/// Given a set of predicates that apply to an object type, returns
+/// the region bounds that the (erased) `Self` type must
+/// outlive. Precisely *because* the `Self` type is erased, the
+/// parameter `erased_self_ty` must be supplied to indicate what type
+/// has been used to represent `Self` in the predicates
+/// themselves. This should really be a unique type; `FreshTy(0)` is a
+/// popular choice.
+///
+/// N.B., in some cases, particularly around higher-ranked bounds,
+/// this function returns a kind of conservative approximation.
+/// That is, all regions returned by this function are definitely
+/// required, but there may be other region bounds that are not
+/// returned, as well as requirements like `for<'a> T: 'a`.
+///
+/// Requires that trait definitions have been processed so that we can
+/// elaborate predicates and walk supertraits.
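+///
+/// For example (a sketch): with `erased_self_ty` standing in for `FreshTy(0)`
+/// and the predicates
+///
+/// ```text
+/// FreshTy(0): 'a
+/// for<'b> FreshTy(0): 'b
+/// ```
+///
+/// only `'a` is returned; the higher-ranked bound's region has escaping bound
+/// vars and is skipped.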
+//
+// FIXME: callers may only have a `&[Predicate]`, not a `Vec`, so that's
+// what this code should accept.
+crate fn required_region_bounds(
+ tcx: TyCtxt<'tcx>,
+ erased_self_ty: Ty<'tcx>,
+ predicates: Vec<ty::Predicate<'tcx>>,
+) -> Vec<ty::Region<'tcx>> {
+ debug!(
+ "required_region_bounds(erased_self_ty={:?}, predicates={:?})",
+ erased_self_ty, predicates
+ );
+
+ assert!(!erased_self_ty.has_escaping_bound_vars());
+
+ traits::elaborate_predicates(tcx, predicates)
+ .filter_map(|predicate| {
+ match predicate {
+ ty::Predicate::Projection(..)
+ | ty::Predicate::Trait(..)
+ | ty::Predicate::Subtype(..)
+ | ty::Predicate::WellFormed(..)
+ | ty::Predicate::ObjectSafe(..)
+ | ty::Predicate::ClosureKind(..)
+ | ty::Predicate::RegionOutlives(..)
+ | ty::Predicate::ConstEvaluatable(..) => None,
+ ty::Predicate::TypeOutlives(predicate) => {
+ // Search for a bound of the form `erased_self_ty
+ // : 'a`, but be wary of something like `for<'a>
+ // erased_self_ty : 'a` (we interpret a
+ // higher-ranked bound like that as 'static,
+ // though at present the code in `fulfill.rs`
+ // considers such bounds to be unsatisfiable, so
+ // it's kind of a moot point since you could never
+ // construct such an object, but this seems
+ // correct even if that code changes).
+ let ty::OutlivesPredicate(ref t, ref r) = predicate.skip_binder();
+ if t == &erased_self_ty && !r.has_escaping_bound_vars() {
+ Some(*r)
+ } else {
+ None
+ }
+ }
+ }
+ })
+ .collect()
+}
--- /dev/null
+//! Support code for rustdoc and external tools.
+//! You really don't want to be using this unless you need to.
+
+use super::*;
+
+use crate::infer::region_constraints::{Constraint, RegionConstraintData};
+use crate::infer::InferCtxt;
+use rustc::ty::fold::TypeFolder;
+use rustc::ty::{Region, RegionVid};
+
+use rustc_data_structures::fx::{FxHashMap, FxHashSet};
+
+use std::collections::hash_map::Entry;
+use std::collections::VecDeque;
+
+// FIXME(twk): this is obviously not nice to duplicate like that
+#[derive(Eq, PartialEq, Hash, Copy, Clone, Debug)]
+pub enum RegionTarget<'tcx> {
+ Region(Region<'tcx>),
+ RegionVid(RegionVid),
+}
+
+#[derive(Default, Debug, Clone)]
+pub struct RegionDeps<'tcx> {
+ larger: FxHashSet<RegionTarget<'tcx>>,
+ smaller: FxHashSet<RegionTarget<'tcx>>,
+}
+
+pub enum AutoTraitResult<A> {
+ ExplicitImpl,
+ PositiveImpl(A),
+ NegativeImpl,
+}
+
+impl<A> AutoTraitResult<A> {
+ fn is_auto(&self) -> bool {
+ match *self {
+ AutoTraitResult::PositiveImpl(_) | AutoTraitResult::NegativeImpl => true,
+ _ => false,
+ }
+ }
+}
+
+pub struct AutoTraitInfo<'cx> {
+ pub full_user_env: ty::ParamEnv<'cx>,
+ pub region_data: RegionConstraintData<'cx>,
+ pub vid_to_region: FxHashMap<ty::RegionVid, ty::Region<'cx>>,
+}
+
+pub struct AutoTraitFinder<'tcx> {
+ tcx: TyCtxt<'tcx>,
+}
+
+impl<'tcx> AutoTraitFinder<'tcx> {
+ pub fn new(tcx: TyCtxt<'tcx>) -> Self {
+ AutoTraitFinder { tcx }
+ }
+
+ /// Makes a best effort to determine whether and under which conditions an auto trait is
+ /// implemented for a type. For example, if you have
+ ///
+ /// ```
+ /// struct Foo<T> { data: Box<T> }
+ /// ```
+ ///
+ /// then this might return that Foo<T>: Send if T: Send (encoded in the AutoTraitResult type).
+ /// The analysis attempts to account for custom impls as well as other complex cases. This
+ /// result is intended for use by rustdoc and other such consumers.
+ ///
+ /// (Note that due to the coinductive nature of Send, the full and correct result is actually
+ /// quite simple to generate. That is, when a type has no custom impl, it is Send iff its field
+ /// types are all Send. So, in our example, we might have that Foo<T>: Send if Box<T>: Send.
+ /// But this is often not the best way to present to the user.)
+ ///
+ /// Warning: The API should be considered highly unstable, and it may be refactored or removed
+ /// in the future.
+ pub fn find_auto_trait_generics<A>(
+ &self,
+ ty: Ty<'tcx>,
+ orig_env: ty::ParamEnv<'tcx>,
+ trait_did: DefId,
+ auto_trait_callback: impl Fn(&InferCtxt<'_, 'tcx>, AutoTraitInfo<'tcx>) -> A,
+ ) -> AutoTraitResult<A> {
+ let tcx = self.tcx;
+
+ let trait_ref = ty::TraitRef { def_id: trait_did, substs: tcx.mk_substs_trait(ty, &[]) };
+
+ let trait_pred = ty::Binder::bind(trait_ref);
+
+ let bail_out = tcx.infer_ctxt().enter(|infcx| {
+ let mut selcx = SelectionContext::with_negative(&infcx, true);
+ let result = selcx.select(&Obligation::new(
+ ObligationCause::dummy(),
+ orig_env,
+ trait_pred.to_poly_trait_predicate(),
+ ));
+
+ match result {
+ Ok(Some(Vtable::VtableImpl(_))) => {
+ debug!(
+ "find_auto_trait_generics({:?}): \
+ manual impl found, bailing out",
+ trait_ref
+ );
+ true
+ }
+ _ => false,
+ }
+ });
+
+ // If an explicit impl exists, it always takes priority over an auto impl
+ if bail_out {
+ return AutoTraitResult::ExplicitImpl;
+ }
+
+ return tcx.infer_ctxt().enter(|infcx| {
+ let mut fresh_preds = FxHashSet::default();
+
+ // Due to the way projections are handled by SelectionContext, we need to run
+ // evaluate_predicates twice: once on the original param env, and once on the result of
+ // the first evaluate_predicates call.
+ //
+ // The problem is this: most of rustc, including SelectionContext and traits::project,
+            // are designed to work with a concrete usage of a type (e.g., `Vec<u8>`, or
+            // `Vec<T>` inside `fn<T>() { ... }`). This information will generally never change - given
+ // the 'T' in fn<T>() { ... }, we'll never know anything else about 'T'.
+ // If we're unable to prove that 'T' implements a particular trait, we're done -
+ // there's nothing left to do but error out.
+ //
+ // However, synthesizing an auto trait impl works differently. Here, we start out with
+ // a set of initial conditions - the ParamEnv of the struct/enum/union we're dealing
+ // with - and progressively discover the conditions we need to fulfill for it to
+ // implement a certain auto trait. This ends up breaking two assumptions made by trait
+ // selection and projection:
+ //
+ // * We can always cache the result of a particular trait selection for the lifetime of
+ // an InfCtxt
+ // * Given a projection bound such as '<T as SomeTrait>::SomeItem = K', if 'T:
+ // SomeTrait' doesn't hold, then we don't need to care about the 'SomeItem = K'
+ //
+ // We fix the first assumption by manually clearing out all of the InferCtxt's caches
+ // in between calls to SelectionContext.select. This allows us to keep all of the
+ // intermediate types we create bound to the 'tcx lifetime, rather than needing to lift
+ // them between calls.
+ //
+ // We fix the second assumption by reprocessing the result of our first call to
+ // evaluate_predicates. Using the example of '<T as SomeTrait>::SomeItem = K', our first
+ // pass will pick up 'T: SomeTrait', but not 'SomeItem = K'. On our second pass,
+ // traits::project will see that 'T: SomeTrait' is in our ParamEnv, allowing
+ // SelectionContext to return it back to us.
+
+ let (new_env, user_env) = match self.evaluate_predicates(
+ &infcx,
+ trait_did,
+ ty,
+ orig_env,
+ orig_env,
+ &mut fresh_preds,
+ false,
+ ) {
+ Some(e) => e,
+ None => return AutoTraitResult::NegativeImpl,
+ };
+
+ let (full_env, full_user_env) = self
+ .evaluate_predicates(
+ &infcx,
+ trait_did,
+ ty,
+ new_env,
+ user_env,
+ &mut fresh_preds,
+ true,
+ )
+ .unwrap_or_else(|| {
+ panic!("Failed to fully process: {:?} {:?} {:?}", ty, trait_did, orig_env)
+ });
+
+ debug!(
+ "find_auto_trait_generics({:?}): fulfilling \
+ with {:?}",
+ trait_ref, full_env
+ );
+ infcx.clear_caches();
+
+ // At this point, we already have all of the bounds we need. FulfillmentContext is used
+ // to store all of the necessary region/lifetime bounds in the InferContext, as well as
+ // an additional sanity check.
+ let mut fulfill = FulfillmentContext::new();
+ fulfill.register_bound(
+ &infcx,
+ full_env,
+ ty,
+ trait_did,
+ ObligationCause::misc(DUMMY_SP, hir::DUMMY_HIR_ID),
+ );
+ fulfill.select_all_or_error(&infcx).unwrap_or_else(|e| {
+ panic!("Unable to fulfill trait {:?} for '{:?}': {:?}", trait_did, ty, e)
+ });
+
+ let body_id_map: FxHashMap<_, _> = infcx
+ .inner
+ .borrow()
+ .region_obligations
+ .iter()
+ .map(|&(id, _)| (id, vec![]))
+ .collect();
+
+ infcx.process_registered_region_obligations(&body_id_map, None, full_env);
+
+ let region_data = infcx
+ .inner
+ .borrow_mut()
+ .unwrap_region_constraints()
+ .region_constraint_data()
+ .clone();
+
+        let vid_to_region = self.map_vid_to_region(&region_data);
+
+ let info = AutoTraitInfo { full_user_env, region_data, vid_to_region };
+
+ return AutoTraitResult::PositiveImpl(auto_trait_callback(&infcx, info));
+ });
+ }
+}
+
+impl AutoTraitFinder<'tcx> {
+ /// The core logic responsible for computing the bounds for our synthesized impl.
+ ///
+ /// To calculate the bounds, we call `SelectionContext.select` in a loop. Like
+ /// `FulfillmentContext`, we recursively select the nested obligations of predicates we
+ /// encounter. However, whenever we encounter an `UnimplementedError` involving a type
+ /// parameter, we add it to our `ParamEnv`. Since our goal is to determine when a particular
+ /// type implements an auto trait, Unimplemented errors tell us what conditions need to be met.
+ ///
+ /// This method ends up working somewhat similarly to `FulfillmentContext`, but with a few key
+ /// differences. `FulfillmentContext` works under the assumption that it's dealing with concrete
+    /// user code. Accordingly, it considers all possible ways that a `Predicate` could be met, which
+ /// isn't always what we want for a synthesized impl. For example, given the predicate `T:
+ /// Iterator`, `FulfillmentContext` can end up reporting an Unimplemented error for `T:
+ /// IntoIterator` -- since there's an implementation of `Iterator` where `T: IntoIterator`,
+ /// `FulfillmentContext` will drive `SelectionContext` to consider that impl before giving up.
+    /// If we were to rely on `FulfillmentContext`'s decision, we might end up synthesizing an impl
+ /// like this:
+ ///
+ /// impl<T> Send for Foo<T> where T: IntoIterator
+ ///
+ /// While it might be technically true that Foo implements Send where `T: IntoIterator`,
+ /// the bound is overly restrictive - it's really only necessary that `T: Iterator`.
+ ///
+ /// For this reason, `evaluate_predicates` handles predicates with type variables specially.
+ /// When we encounter an `Unimplemented` error for a bound such as `T: Iterator`, we immediately
+ /// add it to our `ParamEnv`, and add it to our stack for recursive evaluation. When we later
+ /// select it, we'll pick up any nested bounds, without ever inferring that `T: IntoIterator`
+ /// needs to hold.
+ ///
+ /// One additional consideration is supertrait bounds. Normally, a `ParamEnv` is only ever
+ /// constructed once for a given type. As part of the construction process, the `ParamEnv` will
+ /// have any supertrait bounds normalized -- e.g., if we have a type `struct Foo<T: Copy>`, the
+ /// `ParamEnv` will contain `T: Copy` and `T: Clone`, since `Copy: Clone`. When we construct our
+ /// own `ParamEnv`, we need to do this ourselves, through `traits::elaborate_predicates`, or
+ /// else `SelectionContext` will choke on the missing predicates. However, this should never
+ /// show up in the final synthesized generics: we don't want our generated docs page to contain
+ /// something like `T: Copy + Clone`, as that's redundant. Therefore, we keep track of a
+ /// separate `user_env`, which only holds the predicates that will actually be displayed to the
+ /// user.
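+    ///
+    /// For example (a sketch; the exact bounds depend on the impls in scope), for
+    ///
+    /// ```text
+    /// struct Wrapper<T>(T);
+    /// ```
+    ///
+    /// asking whether `Wrapper<T>: Send` holds would hit an `Unimplemented`
+    /// error for the nested bound `T: Send`, add `T: Send` to the `ParamEnv`,
+    /// and ultimately yield the synthesized impl
+    /// `impl<T> Send for Wrapper<T> where T: Send`.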
+ fn evaluate_predicates(
+ &self,
+ infcx: &InferCtxt<'_, 'tcx>,
+ trait_did: DefId,
+ ty: Ty<'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ user_env: ty::ParamEnv<'tcx>,
+ fresh_preds: &mut FxHashSet<ty::Predicate<'tcx>>,
+ only_projections: bool,
+ ) -> Option<(ty::ParamEnv<'tcx>, ty::ParamEnv<'tcx>)> {
+ let tcx = infcx.tcx;
+
+ let mut select = SelectionContext::with_negative(&infcx, true);
+
+ let mut already_visited = FxHashSet::default();
+ let mut predicates = VecDeque::new();
+ predicates.push_back(ty::Binder::bind(ty::TraitPredicate {
+ trait_ref: ty::TraitRef {
+ def_id: trait_did,
+ substs: infcx.tcx.mk_substs_trait(ty, &[]),
+ },
+ }));
+
+ let mut computed_preds: FxHashSet<_> = param_env.caller_bounds.iter().cloned().collect();
+ let mut user_computed_preds: FxHashSet<_> =
+ user_env.caller_bounds.iter().cloned().collect();
+
+ let mut new_env = param_env;
+ let dummy_cause = ObligationCause::misc(DUMMY_SP, hir::DUMMY_HIR_ID);
+
+ while let Some(pred) = predicates.pop_front() {
+ infcx.clear_caches();
+
+ if !already_visited.insert(pred) {
+ continue;
+ }
+
+ // Call `infcx.resolve_vars_if_possible` to see if we can
+ // get rid of any inference variables.
+ let obligation = infcx.resolve_vars_if_possible(&Obligation::new(
+ dummy_cause.clone(),
+ new_env,
+ pred,
+ ));
+ let result = select.select(&obligation);
+
+ match &result {
+ &Ok(Some(ref vtable)) => {
+ // If we see an explicit negative impl (e.g., `impl !Send for MyStruct`),
+ // we immediately bail out, since it's impossible for us to continue.
+ match vtable {
+ Vtable::VtableImpl(VtableImplData { impl_def_id, .. }) => {
+ // Blame 'tidy' for the weird bracket placement.
+ if infcx.tcx.impl_polarity(*impl_def_id) == ty::ImplPolarity::Negative {
+ debug!(
+ "evaluate_nested_obligations: found explicit negative impl\
+ {:?}, bailing out",
+ impl_def_id
+ );
+ return None;
+ }
+ }
+ _ => {}
+ }
+
+ let obligations = vtable.clone().nested_obligations().into_iter();
+
+ if !self.evaluate_nested_obligations(
+ ty,
+ obligations,
+ &mut user_computed_preds,
+ fresh_preds,
+ &mut predicates,
+ &mut select,
+ only_projections,
+ ) {
+ return None;
+ }
+ }
+ &Ok(None) => {}
+ &Err(SelectionError::Unimplemented) => {
+ if self.is_param_no_infer(pred.skip_binder().trait_ref.substs) {
+ already_visited.remove(&pred);
+ self.add_user_pred(
+ &mut user_computed_preds,
+ ty::Predicate::Trait(pred, hir::Constness::NotConst),
+ );
+ predicates.push_back(pred);
+ } else {
+ debug!(
+ "evaluate_nested_obligations: `Unimplemented` found, bailing: \
+ {:?} {:?} {:?}",
+ ty,
+ pred,
+ pred.skip_binder().trait_ref.substs
+ );
+ return None;
+ }
+ }
+ _ => panic!("Unexpected error for '{:?}': {:?}", ty, result),
+ };
+
+ computed_preds.extend(user_computed_preds.iter().cloned());
+ let normalized_preds =
+ elaborate_predicates(tcx, computed_preds.iter().cloned().collect());
+ new_env =
+ ty::ParamEnv::new(tcx.mk_predicates(normalized_preds), param_env.reveal, None);
+ }
+
+ let final_user_env = ty::ParamEnv::new(
+ tcx.mk_predicates(user_computed_preds.into_iter()),
+ user_env.reveal,
+ None,
+ );
+ debug!(
+ "evaluate_nested_obligations(ty={:?}, trait_did={:?}): succeeded with '{:?}' \
+ '{:?}'",
+ ty, trait_did, new_env, final_user_env
+ );
+
+ return Some((new_env, final_user_env));
+ }
+
+ /// This method is designed to work around the following issue:
+ /// When we compute auto trait bounds, we repeatedly call `SelectionContext.select`,
+ /// progressively building a `ParamEnv` based on the results we get.
+ /// However, our usage of `SelectionContext` differs from its normal use within the compiler,
+    /// in that we capture and reprocess predicates from `Unimplemented` errors.
+ ///
+ /// This can lead to a corner case when dealing with region parameters.
+ /// During our selection loop in `evaluate_predicates`, we might end up with
+ /// two trait predicates that differ only in their region parameters:
+ /// one containing a HRTB lifetime parameter, and one containing a 'normal'
+ /// lifetime parameter. For example:
+ ///
+ /// T as MyTrait<'a>
+ /// T as MyTrait<'static>
+ ///
+ /// If we put both of these predicates in our computed `ParamEnv`, we'll
+ /// confuse `SelectionContext`, since it will (correctly) view both as being applicable.
+ ///
+    /// To solve this, we pick the 'more strict' lifetime bound -- i.e., the HRTB.
+ /// Our end goal is to generate a user-visible description of the conditions
+ /// under which a type implements an auto trait. A trait predicate involving
+ /// a HRTB means that the type needs to work with any choice of lifetime,
+ /// not just one specific lifetime (e.g., `'static`).
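+    ///
+    /// For example (a sketch), given the two predicates
+    ///
+    /// ```text
+    /// for<'a> T: MyTrait<'a>   // kept: must hold for every lifetime
+    /// T: MyTrait<'static>      // dropped: covered by the HRTB above
+    /// ```
+    ///
+    /// we retain only the higher-ranked predicate.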
+ fn add_user_pred<'c>(
+ &self,
+ user_computed_preds: &mut FxHashSet<ty::Predicate<'c>>,
+ new_pred: ty::Predicate<'c>,
+ ) {
+ let mut should_add_new = true;
+ user_computed_preds.retain(|&old_pred| {
+ match (&new_pred, old_pred) {
+ (&ty::Predicate::Trait(new_trait, _), ty::Predicate::Trait(old_trait, _)) => {
+ if new_trait.def_id() == old_trait.def_id() {
+ let new_substs = new_trait.skip_binder().trait_ref.substs;
+ let old_substs = old_trait.skip_binder().trait_ref.substs;
+
+ if !new_substs.types().eq(old_substs.types()) {
+ // We can't compare lifetimes if the types are different,
+ // so skip checking `old_pred`.
+ return true;
+ }
+
+ for (new_region, old_region) in
+ new_substs.regions().zip(old_substs.regions())
+ {
+ match (new_region, old_region) {
+ // If both predicates have an `ReLateBound` (a HRTB) in the
+ // same spot, we do nothing.
+ (
+ ty::RegionKind::ReLateBound(_, _),
+ ty::RegionKind::ReLateBound(_, _),
+ ) => {}
+
+ (ty::RegionKind::ReLateBound(_, _), _)
+ | (_, ty::RegionKind::ReVar(_)) => {
+ // One of these is true:
+                                    // The new predicate has an HRTB in a spot where the old
+                                    // predicate does not (if they both had an HRTB, the previous
+                                    // match arm would have executed). An HRTB is a 'stricter'
+                                    // bound than anything else, so we want to keep the newer
+                                    // predicate (with the HRTB) in place of the old predicate.
+ //
+ // OR
+ //
+ // The old predicate has a region variable where the new
+                                    // predicate has some other kind of region. A region
+                                    // variable isn't something we can actually display to a user,
+                                    // so we choose the new predicate (which doesn't have a region
+ // variable).
+ //
+                                    // In both cases, we want to remove the old predicate
+ // from `user_computed_preds`, and replace it with the new
+ // one. Having both the old and the new
+ // predicate in a `ParamEnv` would confuse `SelectionContext`.
+ //
+ // We're currently in the predicate passed to 'retain',
+ // so we return `false` to remove the old predicate from
+ // `user_computed_preds`.
+ return false;
+ }
+ (_, ty::RegionKind::ReLateBound(_, _))
+ | (ty::RegionKind::ReVar(_), _) => {
+ // This is the opposite situation as the previous arm.
+ // One of these is true:
+ //
+                                    // The old predicate has an HRTB lifetime in a place where the
+ // new predicate does not.
+ //
+ // OR
+ //
+ // The new predicate has a region variable where the old
+ // predicate has some other type of region.
+ //
+ // We want to leave the old
+ // predicate in `user_computed_preds`, and skip adding
+                                    // `new_pred` to `user_computed_preds`.
+ should_add_new = false
+ }
+ _ => {}
+ }
+ }
+ }
+ }
+ _ => {}
+ }
+ return true;
+ });
+
+ if should_add_new {
+ user_computed_preds.insert(new_pred);
+ }
+ }
+
+ /// This is very similar to `handle_lifetimes`. However, instead of matching `ty::Region`s
+ /// to each other, we match `ty::RegionVid`s to `ty::Region`s.
+ fn map_vid_to_region<'cx>(
+ &self,
+ regions: &RegionConstraintData<'cx>,
+ ) -> FxHashMap<ty::RegionVid, ty::Region<'cx>> {
+ let mut vid_map: FxHashMap<RegionTarget<'cx>, RegionDeps<'cx>> = FxHashMap::default();
+ let mut finished_map = FxHashMap::default();
+
+ for constraint in regions.constraints.keys() {
+ match constraint {
+ &Constraint::VarSubVar(r1, r2) => {
+ {
+ let deps1 = vid_map.entry(RegionTarget::RegionVid(r1)).or_default();
+ deps1.larger.insert(RegionTarget::RegionVid(r2));
+ }
+
+ let deps2 = vid_map.entry(RegionTarget::RegionVid(r2)).or_default();
+ deps2.smaller.insert(RegionTarget::RegionVid(r1));
+ }
+ &Constraint::RegSubVar(region, vid) => {
+ {
+ let deps1 = vid_map.entry(RegionTarget::Region(region)).or_default();
+ deps1.larger.insert(RegionTarget::RegionVid(vid));
+ }
+
+ let deps2 = vid_map.entry(RegionTarget::RegionVid(vid)).or_default();
+ deps2.smaller.insert(RegionTarget::Region(region));
+ }
+ &Constraint::VarSubReg(vid, region) => {
+ finished_map.insert(vid, region);
+ }
+ &Constraint::RegSubReg(r1, r2) => {
+ {
+ let deps1 = vid_map.entry(RegionTarget::Region(r1)).or_default();
+ deps1.larger.insert(RegionTarget::Region(r2));
+ }
+
+ let deps2 = vid_map.entry(RegionTarget::Region(r2)).or_default();
+ deps2.smaller.insert(RegionTarget::Region(r1));
+ }
+ }
+ }
+
+ while !vid_map.is_empty() {
+ let target = *vid_map.keys().next().expect("Keys somehow empty");
+ let deps = vid_map.remove(&target).expect("Entry somehow missing");
+
+ for smaller in deps.smaller.iter() {
+ for larger in deps.larger.iter() {
+ match (smaller, larger) {
+ (&RegionTarget::Region(_), &RegionTarget::Region(_)) => {
+ if let Entry::Occupied(v) = vid_map.entry(*smaller) {
+ let smaller_deps = v.into_mut();
+ smaller_deps.larger.insert(*larger);
+ smaller_deps.larger.remove(&target);
+ }
+
+ if let Entry::Occupied(v) = vid_map.entry(*larger) {
+ let larger_deps = v.into_mut();
+ larger_deps.smaller.insert(*smaller);
+ larger_deps.smaller.remove(&target);
+ }
+ }
+ (&RegionTarget::RegionVid(v1), &RegionTarget::Region(r1)) => {
+ finished_map.insert(v1, r1);
+ }
+ (&RegionTarget::Region(_), &RegionTarget::RegionVid(_)) => {
+ // Do nothing; we don't care about regions that are smaller than vids.
+ }
+ (&RegionTarget::RegionVid(_), &RegionTarget::RegionVid(_)) => {
+ if let Entry::Occupied(v) = vid_map.entry(*smaller) {
+ let smaller_deps = v.into_mut();
+ smaller_deps.larger.insert(*larger);
+ smaller_deps.larger.remove(&target);
+ }
+
+ if let Entry::Occupied(v) = vid_map.entry(*larger) {
+ let larger_deps = v.into_mut();
+ larger_deps.smaller.insert(*smaller);
+ larger_deps.smaller.remove(&target);
+ }
+ }
+ }
+ }
+ }
+ }
+ finished_map
+ }
+
+    fn is_param_no_infer(&self, substs: SubstsRef<'_>) -> bool {
+        self.is_of_param(substs.type_at(0)) && !substs.types().any(|t| t.has_infer_types())
+    }
+
+    pub fn is_of_param(&self, ty: Ty<'_>) -> bool {
+        match ty.kind {
+            ty::Param(_) => true,
+            ty::Projection(p) => self.is_of_param(p.self_ty()),
+            _ => false,
+        }
+    }
+
+ fn is_self_referential_projection(&self, p: ty::PolyProjectionPredicate<'_>) -> bool {
+ match p.ty().skip_binder().kind {
+ ty::Projection(proj) if proj == p.skip_binder().projection_ty => true,
+ _ => false,
+ }
+ }
+
+ fn evaluate_nested_obligations(
+ &self,
+ ty: Ty<'_>,
+ nested: impl Iterator<Item = Obligation<'tcx, ty::Predicate<'tcx>>>,
+ computed_preds: &mut FxHashSet<ty::Predicate<'tcx>>,
+ fresh_preds: &mut FxHashSet<ty::Predicate<'tcx>>,
+ predicates: &mut VecDeque<ty::PolyTraitPredicate<'tcx>>,
+ select: &mut SelectionContext<'_, 'tcx>,
+ only_projections: bool,
+ ) -> bool {
+ let dummy_cause = ObligationCause::misc(DUMMY_SP, hir::DUMMY_HIR_ID);
+
+ for (obligation, mut predicate) in nested.map(|o| (o.clone(), o.predicate)) {
+ let is_new_pred = fresh_preds.insert(self.clean_pred(select.infcx(), predicate));
+
+ // Resolve any inference variables that we can, to help selection succeed
+ predicate = select.infcx().resolve_vars_if_possible(&predicate);
+
+ // We only add a predicate as a user-displayable bound if
+ // it involves a generic parameter, and doesn't contain
+ // any inference variables.
+ //
+ // Displaying a bound involving a concrete type (instead of a generic
+ // parameter) would be pointless, since it's always true
+ // (e.g. u8: Copy)
+ // Displaying an inference variable is impossible, since they're
+ // an internal compiler detail without a defined visual representation
+ //
+ // We check this by calling is_of_param on the relevant types
+ // from the various possible predicates
+ match &predicate {
+ &ty::Predicate::Trait(p, _) => {
+ if self.is_param_no_infer(p.skip_binder().trait_ref.substs)
+ && !only_projections
+ && is_new_pred
+ {
+ self.add_user_pred(computed_preds, predicate);
+ }
+ predicates.push_back(p);
+ }
+ &ty::Predicate::Projection(p) => {
+ debug!(
+ "evaluate_nested_obligations: examining projection predicate {:?}",
+ predicate
+ );
+
+ // As described above, we only want to display
+ // bounds which include a generic parameter but don't include
+ // an inference variable.
+ // Additionally, we check if we've seen this predicate before,
+ // to avoid rendering duplicate bounds to the user.
+ if self.is_param_no_infer(p.skip_binder().projection_ty.substs)
+ && !p.ty().skip_binder().has_infer_types()
+ && is_new_pred
+ {
+ debug!(
+ "evaluate_nested_obligations: adding projection predicate\
+ to computed_preds: {:?}",
+ predicate
+ );
+
+                            // Under unusual circumstances, we can end up with a self-referential
+ // projection predicate. For example:
+ // <T as MyType>::Value == <T as MyType>::Value
+ // Not only is displaying this to the user pointless,
+ // having it in the ParamEnv will cause an issue if we try to call
+ // poly_project_and_unify_type on the predicate, since this kind of
+ // predicate will normally never end up in a ParamEnv.
+ //
+ // For these reasons, we ignore these weird predicates,
+ // ensuring that we're able to properly synthesize an auto trait impl
+ if self.is_self_referential_projection(p) {
+ debug!(
+ "evaluate_nested_obligations: encountered a projection
+ predicate equating a type with itself! Skipping"
+ );
+ } else {
+ self.add_user_pred(computed_preds, predicate);
+ }
+ }
+
+ // There are three possible cases when we project a predicate:
+ //
+ // 1. We encounter an error. This means that it's impossible for
+                    // our current type to implement the auto trait - there's no bound
+ // that we could add to our ParamEnv that would 'fix' this kind
+ // of error, as it's not caused by an unimplemented type.
+ //
+ // 2. We successfully project the predicate (Ok(Some(_))), generating
+ // some subobligations. We then process these subobligations
+ // like any other generated sub-obligations.
+ //
+ // 3. We receive an 'ambiguous' result (Ok(None))
+ // If we were actually trying to compile a crate,
+ // we would need to re-process this obligation later.
+ // However, all we care about is finding out what bounds
+ // are needed for our type to implement a particular auto trait.
+ // We've already added this obligation to our computed ParamEnv
+ // above (if it was necessary). Therefore, we don't need
+ // to do any further processing of the obligation.
+ //
+ // Note that we *must* try to project *all* projection predicates
+ // we encounter, even ones without inference variable.
+ // This ensures that we detect any projection errors,
+ // which indicate that our type can *never* implement the given
+ // auto trait. In that case, we will generate an explicit negative
+ // impl (e.g. 'impl !Send for MyType'). However, we don't
+ // try to process any of the generated subobligations -
+ // they contain no new information, since we already know
+ // that our type implements the projected-through trait,
+ // and can lead to weird region issues.
+ //
+ // Normally, we'll generate a negative impl as a result of encountering
+ // a type with an explicit negative impl of an auto trait
+ // (for example, raw pointers have !Send and !Sync impls)
+ // However, through some **interesting** manipulations of the type
+ // system, it's actually possible to write a type that never
+ // implements an auto trait due to a projection error, not a normal
+ // negative impl error. To properly handle this case, we need
+ // to ensure that we catch any potential projection errors,
+ // and turn them into an explicit negative impl for our type.
+ debug!("Projecting and unifying projection predicate {:?}", predicate);
+
+ match poly_project_and_unify_type(select, &obligation.with(p)) {
+ Err(e) => {
+ debug!(
+ "evaluate_nested_obligations: Unable to unify predicate \
+ '{:?}' '{:?}', bailing out",
+ ty, e
+ );
+ return false;
+ }
+ Ok(Some(v)) => {
+ // We only care about sub-obligations
+ // when we started out trying to unify
+ // some inference variables. See the comment above
+                            // for more information
+ if p.ty().skip_binder().has_infer_types() {
+ if !self.evaluate_nested_obligations(
+ ty,
+ v.clone().iter().cloned(),
+ computed_preds,
+ fresh_preds,
+ predicates,
+ select,
+ only_projections,
+ ) {
+ return false;
+ }
+ }
+ }
+ Ok(None) => {
+                            // It's ok not to make progress when we have no inference variables -
+                            // in that case, we were only performing unification to check if an
+ // error occurred (which would indicate that it's impossible for our
+ // type to implement the auto trait).
+ // However, we should always make progress (either by generating
+ // subobligations or getting an error) when we started off with
+ // inference variables
+ if p.ty().skip_binder().has_infer_types() {
+ panic!("Unexpected result when selecting {:?} {:?}", ty, obligation)
+ }
+ }
+ }
+ }
+ &ty::Predicate::RegionOutlives(ref binder) => {
+ if select.infcx().region_outlives_predicate(&dummy_cause, binder).is_err() {
+ return false;
+ }
+ }
+ &ty::Predicate::TypeOutlives(ref binder) => {
+ match (
+ binder.no_bound_vars(),
+ binder.map_bound_ref(|pred| pred.0).no_bound_vars(),
+ ) {
+ (None, Some(t_a)) => {
+ select.infcx().register_region_obligation_with_cause(
+ t_a,
+ select.infcx().tcx.lifetimes.re_static,
+ &dummy_cause,
+ );
+ }
+ (Some(ty::OutlivesPredicate(t_a, r_b)), _) => {
+ select.infcx().register_region_obligation_with_cause(
+ t_a,
+ r_b,
+ &dummy_cause,
+ );
+ }
+ _ => {}
+ };
+ }
+ _ => panic!("Unexpected predicate {:?} {:?}", ty, predicate),
+ };
+ }
+ return true;
+ }
+
+ pub fn clean_pred(
+ &self,
+ infcx: &InferCtxt<'_, 'tcx>,
+ p: ty::Predicate<'tcx>,
+ ) -> ty::Predicate<'tcx> {
+ infcx.freshen(p)
+ }
+}
+
+// Replaces all `ReVar`s in a type with `ty::Region`s, using the provided map
+pub struct RegionReplacer<'a, 'tcx> {
+ vid_to_region: &'a FxHashMap<ty::RegionVid, ty::Region<'tcx>>,
+ tcx: TyCtxt<'tcx>,
+}
+
+impl<'a, 'tcx> TypeFolder<'tcx> for RegionReplacer<'a, 'tcx> {
+ fn tcx<'b>(&'b self) -> TyCtxt<'tcx> {
+ self.tcx
+ }
+
+ fn fold_region(&mut self, r: ty::Region<'tcx>) -> ty::Region<'tcx> {
+ (match r {
+ &ty::ReVar(vid) => self.vid_to_region.get(&vid).cloned(),
+ _ => None,
+ })
+ .unwrap_or_else(|| r.super_fold_with(self))
+ }
+}
--- /dev/null
+// This file contains various trait resolution methods used by codegen.
+// They all assume regions can be erased and monomorphic types. It
+// seems likely that they should eventually be merged into more
+// general routines.
+
+use crate::infer::{InferCtxt, TyCtxtInferExt};
+use crate::traits::{
+ FulfillmentContext, Obligation, ObligationCause, SelectionContext, TraitEngine, Vtable,
+};
+use rustc::ty::fold::TypeFoldable;
+use rustc::ty::{self, TyCtxt};
+
+/// Attempts to resolve an obligation to a vtable. The result is
+/// a shallow vtable resolution, meaning that we do not
+/// (necessarily) resolve all nested obligations on the impl. Note
+/// that type checking guarantees that all nested
+/// obligations *could be* resolved if we wanted to.
+/// Assumes that this is run after the entire crate has been successfully type-checked.
+pub fn codegen_fulfill_obligation<'tcx>(
+ ty: TyCtxt<'tcx>,
+ (param_env, trait_ref): (ty::ParamEnv<'tcx>, ty::PolyTraitRef<'tcx>),
+) -> Option<Vtable<'tcx, ()>> {
+ // Remove any references to regions; this helps improve caching.
+ let trait_ref = ty.erase_regions(&trait_ref);
+
+ debug!(
+ "codegen_fulfill_obligation(trait_ref={:?}, def_id={:?})",
+ (param_env, trait_ref),
+ trait_ref.def_id()
+ );
+
+ // Do the initial selection for the obligation. This yields the
+ // shallow result we are looking for -- that is, what specific impl.
+ ty.infer_ctxt().enter(|infcx| {
+ let mut selcx = SelectionContext::new(&infcx);
+
+ let obligation_cause = ObligationCause::dummy();
+ let obligation =
+ Obligation::new(obligation_cause, param_env, trait_ref.to_poly_trait_predicate());
+
+ let selection = match selcx.select(&obligation) {
+ Ok(Some(selection)) => selection,
+ Ok(None) => {
+ // Ambiguity can happen when monomorphizing during trans
+ // expands to some humongo type that never occurred
+ // statically -- this humongo type can then overflow,
+ // leading to an ambiguous result. So report this as an
+ // overflow bug, since I believe this is the only case
+ // where ambiguity can result.
+ infcx.tcx.sess.delay_span_bug(
+ rustc_span::DUMMY_SP,
+ &format!(
+ "encountered ambiguity selecting `{:?}` during codegen, presuming due to \
+ overflow or prior type error",
+ trait_ref
+ ),
+ );
+ return None;
+ }
+ Err(e) => {
+ bug!("Encountered error `{:?}` selecting `{:?}` during codegen", e, trait_ref)
+ }
+ };
+
+ debug!("fulfill_obligation: selection={:?}", selection);
+
+ // Currently, we use a fulfillment context to completely resolve
+ // all nested obligations. This is because they can inform the
+ // inference of the impl's type parameters.
+ let mut fulfill_cx = FulfillmentContext::new();
+ let vtable = selection.map(|predicate| {
+ debug!("fulfill_obligation: register_predicate_obligation {:?}", predicate);
+ fulfill_cx.register_predicate_obligation(&infcx, predicate);
+ });
+ let vtable = drain_fulfillment_cx_or_panic(&infcx, &mut fulfill_cx, &vtable);
+
+ info!("Cache miss: {:?} => {:?}", trait_ref, vtable);
+ Some(vtable)
+ })
+}
+
+// # Global Cache
+
+/// Finishes processing any obligations that remain in the
+/// fulfillment context, and then returns the result with all type
+/// variables removed and regions erased. Because this is intended
+/// for use after type-check has completed, if any errors occur,
+/// it will panic. It is used during normalization and other cases
+/// where processing the obligations in `fulfill_cx` may cause
+/// type inference variables that appear in `result` to be
+/// unified, and hence we need to process those obligations to get
+/// the complete picture of the type.
+fn drain_fulfillment_cx_or_panic<T>(
+ infcx: &InferCtxt<'_, 'tcx>,
+ fulfill_cx: &mut FulfillmentContext<'tcx>,
+ result: &T,
+) -> T
+where
+ T: TypeFoldable<'tcx>,
+{
+ debug!("drain_fulfillment_cx_or_panic()");
+
+ // In principle, we only need to do this so long as `result`
+ // contains unbound type parameters. It could be a slight
+ // optimization to stop iterating early.
+ if let Err(errors) = fulfill_cx.select_all_or_error(infcx) {
+ bug!("Encountered errors `{:?}` resolving bounds after type-checking", errors);
+ }
+
+ let result = infcx.resolve_vars_if_possible(result);
+ infcx.tcx.erase_regions(&result)
+}
--- /dev/null
+//! See Rustc Dev Guide chapters on [trait-resolution] and [trait-specialization] for more info on
+//! how this works.
+//!
+//! [trait-resolution]: https://rustc-dev-guide.rust-lang.org/traits/resolution.html
+//! [trait-specialization]: https://rustc-dev-guide.rust-lang.org/traits/specialization.html
+
+use crate::infer::{CombinedSnapshot, InferOk, TyCtxtInferExt};
+use crate::traits::select::IntercrateAmbiguityCause;
+use crate::traits::SkipLeakCheck;
+use crate::traits::{self, Normalized, Obligation, ObligationCause, SelectionContext};
+use rustc::ty::fold::TypeFoldable;
+use rustc::ty::subst::Subst;
+use rustc::ty::{self, Ty, TyCtxt};
+use rustc_hir::def_id::{DefId, LOCAL_CRATE};
+use rustc_span::symbol::sym;
+use rustc_span::DUMMY_SP;
+
+/// Whether we do the orphan check relative to this crate or
+/// to some remote crate.
+#[derive(Copy, Clone, Debug)]
+enum InCrate {
+ Local,
+ Remote,
+}
+
+#[derive(Debug, Copy, Clone)]
+pub enum Conflict {
+ Upstream,
+ Downstream,
+}
+
+pub struct OverlapResult<'tcx> {
+ pub impl_header: ty::ImplHeader<'tcx>,
+ pub intercrate_ambiguity_causes: Vec<IntercrateAmbiguityCause>,
+
+ /// `true` if the overlap might've been permitted before the shift
+ /// to universes.
+ pub involves_placeholder: bool,
+}
+
+pub fn add_placeholder_note(err: &mut rustc_errors::DiagnosticBuilder<'_>) {
+ err.note(
+ "this behavior recently changed as a result of a bug fix; \
+ see rust-lang/rust#56105 for details",
+ );
+}
+
+/// If there are types that satisfy both impls, invokes `on_overlap`
+/// with a suitably-freshened `ImplHeader` with those types
+/// substituted. Otherwise, invokes `no_overlap`.
+pub fn overlapping_impls<F1, F2, R>(
+ tcx: TyCtxt<'_>,
+ impl1_def_id: DefId,
+ impl2_def_id: DefId,
+ skip_leak_check: SkipLeakCheck,
+ on_overlap: F1,
+ no_overlap: F2,
+) -> R
+where
+ F1: FnOnce(OverlapResult<'_>) -> R,
+ F2: FnOnce() -> R,
+{
+ debug!(
+ "overlapping_impls(\
+ impl1_def_id={:?}, \
+ impl2_def_id={:?})",
+ impl1_def_id, impl2_def_id,
+ );
+
+ let overlaps = tcx.infer_ctxt().enter(|infcx| {
+ let selcx = &mut SelectionContext::intercrate(&infcx);
+ overlap(selcx, skip_leak_check, impl1_def_id, impl2_def_id).is_some()
+ });
+
+ if !overlaps {
+ return no_overlap();
+ }
+
+ // In the case where we detect an error, run the check again, but
+// this time tracking intercrate ambiguity causes for better
+ // diagnostics. (These take time and can lead to false errors.)
+ tcx.infer_ctxt().enter(|infcx| {
+ let selcx = &mut SelectionContext::intercrate(&infcx);
+ selcx.enable_tracking_intercrate_ambiguity_causes();
+ on_overlap(overlap(selcx, skip_leak_check, impl1_def_id, impl2_def_id).unwrap())
+ })
+}
+
+fn with_fresh_ty_vars<'cx, 'tcx>(
+ selcx: &mut SelectionContext<'cx, 'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ impl_def_id: DefId,
+) -> ty::ImplHeader<'tcx> {
+ let tcx = selcx.tcx();
+ let impl_substs = selcx.infcx().fresh_substs_for_item(DUMMY_SP, impl_def_id);
+
+ let header = ty::ImplHeader {
+ impl_def_id,
+ self_ty: tcx.type_of(impl_def_id).subst(tcx, impl_substs),
+ trait_ref: tcx.impl_trait_ref(impl_def_id).subst(tcx, impl_substs),
+ predicates: tcx.predicates_of(impl_def_id).instantiate(tcx, impl_substs).predicates,
+ };
+
+ let Normalized { value: mut header, obligations } =
+ traits::normalize(selcx, param_env, ObligationCause::dummy(), &header);
+
+ header.predicates.extend(obligations.into_iter().map(|o| o.predicate));
+ header
+}
+
+/// Can both impl `a` and impl `b` be satisfied by a common type (including
+/// where-clauses)? If so, returns an `ImplHeader` that unifies the two impls.
+fn overlap<'cx, 'tcx>(
+ selcx: &mut SelectionContext<'cx, 'tcx>,
+ skip_leak_check: SkipLeakCheck,
+ a_def_id: DefId,
+ b_def_id: DefId,
+) -> Option<OverlapResult<'tcx>> {
+ debug!("overlap(a_def_id={:?}, b_def_id={:?})", a_def_id, b_def_id);
+
+ selcx.infcx().probe_maybe_skip_leak_check(skip_leak_check.is_yes(), |snapshot| {
+ overlap_within_probe(selcx, a_def_id, b_def_id, snapshot)
+ })
+}
+
+fn overlap_within_probe(
+ selcx: &mut SelectionContext<'cx, 'tcx>,
+ a_def_id: DefId,
+ b_def_id: DefId,
+ snapshot: &CombinedSnapshot<'_, 'tcx>,
+) -> Option<OverlapResult<'tcx>> {
+ // For the purposes of this check, we don't bring any placeholder
+ // types into scope; instead, we replace the generic types with
+ // fresh type variables, and hence we do our evaluations in an
+ // empty environment.
+ let param_env = ty::ParamEnv::empty();
+
+ let a_impl_header = with_fresh_ty_vars(selcx, param_env, a_def_id);
+ let b_impl_header = with_fresh_ty_vars(selcx, param_env, b_def_id);
+
+ debug!("overlap: a_impl_header={:?}", a_impl_header);
+ debug!("overlap: b_impl_header={:?}", b_impl_header);
+
+ // Do `a` and `b` unify? If not, no overlap.
+ let obligations = match selcx
+ .infcx()
+ .at(&ObligationCause::dummy(), param_env)
+ .eq_impl_headers(&a_impl_header, &b_impl_header)
+ {
+ Ok(InferOk { obligations, value: () }) => obligations,
+ Err(_) => {
+ return None;
+ }
+ };
+
+ debug!("overlap: unification check succeeded");
+
+ // Are any of the obligations unsatisfiable? If so, no overlap.
+ let infcx = selcx.infcx();
+ let opt_failing_obligation = a_impl_header
+ .predicates
+ .iter()
+ .chain(&b_impl_header.predicates)
+ .map(|p| infcx.resolve_vars_if_possible(p))
+ .map(|p| Obligation {
+ cause: ObligationCause::dummy(),
+ param_env,
+ recursion_depth: 0,
+ predicate: p,
+ })
+ .chain(obligations)
+ .find(|o| !selcx.predicate_may_hold_fatal(o));
+ // FIXME: the call to `selcx.predicate_may_hold_fatal` above should be ported
+ // to the canonical trait query form, `infcx.predicate_may_hold`, once
+ // the new system supports intercrate mode (which coherence needs).
+
+ if let Some(failing_obligation) = opt_failing_obligation {
+ debug!("overlap: obligation unsatisfiable {:?}", failing_obligation);
+ return None;
+ }
+
+ let impl_header = selcx.infcx().resolve_vars_if_possible(&a_impl_header);
+ let intercrate_ambiguity_causes = selcx.take_intercrate_ambiguity_causes();
+ debug!("overlap: intercrate_ambiguity_causes={:#?}", intercrate_ambiguity_causes);
+
+ let involves_placeholder = match selcx.infcx().region_constraints_added_in_snapshot(snapshot) {
+ Some(true) => true,
+ _ => false,
+ };
+
+ Some(OverlapResult { impl_header, intercrate_ambiguity_causes, involves_placeholder })
+}
+
+pub fn trait_ref_is_knowable<'tcx>(
+ tcx: TyCtxt<'tcx>,
+ trait_ref: ty::TraitRef<'tcx>,
+) -> Option<Conflict> {
+ debug!("trait_ref_is_knowable(trait_ref={:?})", trait_ref);
+ if orphan_check_trait_ref(tcx, trait_ref, InCrate::Remote).is_ok() {
+ // A downstream or cousin crate is allowed to implement some
+ // substitution of this trait-ref.
+ return Some(Conflict::Downstream);
+ }
+
+ if trait_ref_is_local_or_fundamental(tcx, trait_ref) {
+ // This is a local or fundamental trait, so future-compatibility
+ // is no concern. We know that downstream/cousin crates are not
+ // allowed to implement a substitution of this trait ref, which
+ // means impls could only come from dependencies of this crate,
+ // which we already know about.
+ return None;
+ }
+
+ // This is a remote non-fundamental trait, so if another crate
+ // can be the "final owner" of a substitution of this trait-ref,
+ // they are allowed to implement it future-compatibly.
+ //
+ // However, if we are a final owner, then nobody else can be,
+ // and if we are an intermediate owner, then we don't care
+ // about future-compatibility, which means that we're OK if
+ // we are an owner.
+ if orphan_check_trait_ref(tcx, trait_ref, InCrate::Local).is_ok() {
+ debug!("trait_ref_is_knowable: orphan check passed");
+ return None;
+ } else {
+ debug!("trait_ref_is_knowable: nonlocal, nonfundamental, unowned");
+ return Some(Conflict::Upstream);
+ }
+}
+
+pub fn trait_ref_is_local_or_fundamental<'tcx>(
+ tcx: TyCtxt<'tcx>,
+ trait_ref: ty::TraitRef<'tcx>,
+) -> bool {
+ trait_ref.def_id.krate == LOCAL_CRATE || tcx.has_attr(trait_ref.def_id, sym::fundamental)
+}
+
+pub enum OrphanCheckErr<'tcx> {
+ NonLocalInputType(Vec<(Ty<'tcx>, bool /* Is this the first input type? */)>),
+ UncoveredTy(Ty<'tcx>, Option<Ty<'tcx>>),
+}
+
+/// Checks the coherence orphan rules. `impl_def_id` should be the
+/// `DefId` of a trait impl. To pass, either the trait must be local, or else
+/// two conditions must be satisfied:
+///
+/// 1. All type parameters in `Self` must be "covered" by some local type constructor.
+/// 2. Some local type must appear in `Self`.
+pub fn orphan_check(tcx: TyCtxt<'_>, impl_def_id: DefId) -> Result<(), OrphanCheckErr<'_>> {
+ debug!("orphan_check({:?})", impl_def_id);
+
+    // We only expect this routine to be invoked on implementations
+ // of a trait, not inherent implementations.
+ let trait_ref = tcx.impl_trait_ref(impl_def_id).unwrap();
+ debug!("orphan_check: trait_ref={:?}", trait_ref);
+
+ // If the *trait* is local to the crate, ok.
+ if trait_ref.def_id.is_local() {
+ debug!("trait {:?} is local to current crate", trait_ref.def_id);
+ return Ok(());
+ }
+
+ orphan_check_trait_ref(tcx, trait_ref, InCrate::Local)
+}
+
+/// Checks whether a trait-ref is potentially implementable by a crate.
+///
+/// The current rule is that a trait-ref orphan checks in a crate C:
+///
+/// 1. Order the parameters in the trait-ref in subst order - Self first,
+/// others linearly (e.g., `<U as Foo<V, W>>` is U < V < W).
+/// 2. Of these type parameters, there is at least one type parameter
+/// in which, walking the type as a tree, you can reach a type local
+/// to C where all types in-between are fundamental types. Call the
+/// first such parameter the "local key parameter".
+/// - e.g., `Box<LocalType>` is OK, because you can visit LocalType
+/// going through `Box`, which is fundamental.
+/// - similarly, `FundamentalPair<Vec<()>, Box<LocalType>>` is OK for
+/// the same reason.
+/// - but (knowing that `Vec<T>` is non-fundamental, and assuming it's
+/// not local), `Vec<LocalType>` is bad, because `Vec<->` is between
+/// the local type and the type parameter.
+/// 3. Every type parameter before the local key parameter is fully known in C.
+/// - e.g., `impl<T> T: Trait<LocalType>` is bad, because `T` might be
+/// an unknown type.
+/// - but `impl<T> LocalType: Trait<T>` is OK, because `LocalType`
+/// occurs before `T`.
+/// 4. Every type in the local key parameter not known in C, going
+/// through the parameter's type tree, must appear only as a subtree of
+/// a type local to C, with only fundamental types between the type
+/// local to C and the local key parameter.
+///    - e.g., `Vec<LocalType<T>>` (or equivalently `Box<Vec<LocalType<T>>>`)
+/// is bad, because the only local type with `T` as a subtree is
+/// `LocalType<T>`, and `Vec<->` is between it and the type parameter.
+/// - similarly, `FundamentalPair<LocalType<T>, T>` is bad, because
+/// the second occurrence of `T` is not a subtree of *any* local type.
+/// - however, `LocalType<Vec<T>>` is OK, because `T` is a subtree of
+/// `LocalType<Vec<T>>`, which is local and has no types between it and
+/// the type parameter.
+///
+/// The orphan rules actually serve several different purposes:
+///
+/// 1. They enable link-safety - i.e., 2 mutually-unknowing crates (where
+/// every type local to one crate is unknown in the other) can't implement
+/// the same trait-ref. This follows because it can be seen that no such
+/// type can orphan-check in 2 such crates.
+///
+/// To check that a local impl follows the orphan rules, we check it in
+/// InCrate::Local mode, using type parameters for the "generic" types.
+///
+/// 2. They ground negative reasoning for coherence. If a user wants to
+/// write both a conditional blanket impl and a specific impl, we need to
+/// make sure they do not overlap. For example, if we write
+/// ```
+/// impl<T> IntoIterator for Vec<T>
+/// impl<T: Iterator> IntoIterator for T
+/// ```
+/// We need to be able to prove that `Vec<$0>: !Iterator` for every type $0.
+/// We can observe that this holds in the current crate, but we need to make
+/// sure this will also hold in all unknown crates (both "independent" crates,
+/// which we need for link-safety, and also child crates, because we don't want
+/// child crates to get errors for impl conflicts in a *dependency*).
+///
+/// For that, we only allow negative reasoning if, for every assignment to the
+/// inference variables, every unknown crate would get an orphan error if they
+/// try to implement this trait-ref. To check for this, we use InCrate::Remote
+/// mode. That is sound because we already know all the impls from known crates.
+///
+/// 3. For non-#[fundamental] traits, they guarantee that parent crates can
+/// add "non-blanket" impls without breaking negative reasoning in dependent
+/// crates. This is the "rebalancing coherence" (RFC 1023) restriction.
+///
+/// For that, we only allow a crate to perform negative reasoning on
+/// non-local, non-#[fundamental] types if there's a local key parameter as per (2).
+///
+/// Because we never perform negative reasoning generically (coherence does
+/// not involve type parameters), this can be interpreted as doing the full
+/// orphan check (using InCrate::Local mode), substituting non-local known
+/// types for all inference variables.
+///
+/// This allows for crates to future-compatibly add impls as long as they
+/// can't apply to types with a key parameter in a child crate - applying
+/// the rules, this basically means that every type parameter in the impl
+/// must appear behind a non-fundamental type (because this is not a
+/// type-system requirement, crate owners might also go for "semantic
+/// future-compatibility" involving things such as sealed traits, but
+/// the above requirement is sufficient, and is necessary in "open world"
+/// cases).
+///
+/// Note that this function is never called for types that have both type
+/// parameters and inference variables.
+fn orphan_check_trait_ref<'tcx>(
+ tcx: TyCtxt<'tcx>,
+ trait_ref: ty::TraitRef<'tcx>,
+ in_crate: InCrate,
+) -> Result<(), OrphanCheckErr<'tcx>> {
+ debug!("orphan_check_trait_ref(trait_ref={:?}, in_crate={:?})", trait_ref, in_crate);
+
+ if trait_ref.needs_infer() && trait_ref.needs_subst() {
+ bug!(
+ "can't orphan check a trait ref with both params and inference variables {:?}",
+ trait_ref
+ );
+ }
+
+ // Given impl<P1..=Pn> Trait<T1..=Tn> for T0, an impl is valid only
+ // if at least one of the following is true:
+ //
+ // - Trait is a local trait
+ // (already checked in orphan_check prior to calling this function)
+ // - All of
+ // - At least one of the types T0..=Tn must be a local type.
+ // Let Ti be the first such type.
+ // - No uncovered type parameters P1..=Pn may appear in T0..Ti (excluding Ti)
+ //
+ fn uncover_fundamental_ty<'tcx>(
+ tcx: TyCtxt<'tcx>,
+ ty: Ty<'tcx>,
+ in_crate: InCrate,
+ ) -> Vec<Ty<'tcx>> {
+ if fundamental_ty(ty) && ty_is_non_local(ty, in_crate).is_some() {
+ ty.walk_shallow().flat_map(|ty| uncover_fundamental_ty(tcx, ty, in_crate)).collect()
+ } else {
+ vec![ty]
+ }
+ }
+
+ let mut non_local_spans = vec![];
+ for (i, input_ty) in
+ trait_ref.input_types().flat_map(|ty| uncover_fundamental_ty(tcx, ty, in_crate)).enumerate()
+ {
+ debug!("orphan_check_trait_ref: check ty `{:?}`", input_ty);
+ let non_local_tys = ty_is_non_local(input_ty, in_crate);
+ if non_local_tys.is_none() {
+ debug!("orphan_check_trait_ref: ty_is_local `{:?}`", input_ty);
+ return Ok(());
+ } else if let ty::Param(_) = input_ty.kind {
+ debug!("orphan_check_trait_ref: uncovered ty: `{:?}`", input_ty);
+ let local_type = trait_ref
+ .input_types()
+ .flat_map(|ty| uncover_fundamental_ty(tcx, ty, in_crate))
+ .find(|ty| ty_is_non_local_constructor(ty, in_crate).is_none());
+
+ debug!("orphan_check_trait_ref: uncovered ty local_type: `{:?}`", local_type);
+
+ return Err(OrphanCheckErr::UncoveredTy(input_ty, local_type));
+ }
+ if let Some(non_local_tys) = non_local_tys {
+ for input_ty in non_local_tys {
+ non_local_spans.push((input_ty, i == 0));
+ }
+ }
+ }
+    // If we exit the above loop, we never found a local type.
+ debug!("orphan_check_trait_ref: no local type");
+ Err(OrphanCheckErr::NonLocalInputType(non_local_spans))
+}
+
+fn ty_is_non_local<'t>(ty: Ty<'t>, in_crate: InCrate) -> Option<Vec<Ty<'t>>> {
+ match ty_is_non_local_constructor(ty, in_crate) {
+ Some(ty) => {
+ if !fundamental_ty(ty) {
+ Some(vec![ty])
+ } else {
+ let tys: Vec<_> = ty
+ .walk_shallow()
+ .filter_map(|t| ty_is_non_local(t, in_crate))
+                    .flatten()
+ .collect();
+ if tys.is_empty() { None } else { Some(tys) }
+ }
+ }
+ None => None,
+ }
+}
+
+fn fundamental_ty(ty: Ty<'_>) -> bool {
+ match ty.kind {
+ ty::Ref(..) => true,
+ ty::Adt(def, _) => def.is_fundamental(),
+ _ => false,
+ }
+}
+
+fn def_id_is_local(def_id: DefId, in_crate: InCrate) -> bool {
+ match in_crate {
+ // The type is local to *this* crate - it will not be
+ // local in any other crate.
+ InCrate::Remote => false,
+ InCrate::Local => def_id.is_local(),
+ }
+}
+
+fn ty_is_non_local_constructor(ty: Ty<'_>, in_crate: InCrate) -> Option<Ty<'_>> {
+ debug!("ty_is_non_local_constructor({:?})", ty);
+
+ match ty.kind {
+ ty::Bool
+ | ty::Char
+ | ty::Int(..)
+ | ty::Uint(..)
+ | ty::Float(..)
+ | ty::Str
+ | ty::FnDef(..)
+ | ty::FnPtr(_)
+ | ty::Array(..)
+ | ty::Slice(..)
+ | ty::RawPtr(..)
+ | ty::Ref(..)
+ | ty::Never
+ | ty::Tuple(..)
+ | ty::Param(..)
+ | ty::Projection(..) => Some(ty),
+
+ ty::Placeholder(..) | ty::Bound(..) | ty::Infer(..) => match in_crate {
+ InCrate::Local => Some(ty),
+ // The inference variable might be unified with a local
+ // type in that remote crate.
+ InCrate::Remote => None,
+ },
+
+ ty::Adt(def, _) => {
+ if def_id_is_local(def.did, in_crate) {
+ None
+ } else {
+ Some(ty)
+ }
+ }
+ ty::Foreign(did) => {
+ if def_id_is_local(did, in_crate) {
+ None
+ } else {
+ Some(ty)
+ }
+ }
+ ty::Opaque(..) => {
+ // This merits some explanation.
+ // Normally, opaque types are not involved when performing
+ // coherence checking, since it is illegal to directly
+ // implement a trait on an opaque type. However, we might
+ // end up looking at an opaque type during coherence checking
+ // if an opaque type gets used within another type (e.g. as
+ // a type parameter). This requires us to decide whether or
+ // not an opaque type should be considered 'local' or not.
+ //
+ // We choose to treat all opaque types as non-local, even
+ // those that appear within the same crate. This seems
+ // somewhat surprising at first, but makes sense when
+ // you consider that opaque types are supposed to hide
+ // the underlying type *within the same crate*. When an
+ // opaque type is used from outside the module
+ // where it is declared, it should be impossible to observe
+ // anything about it other than the traits that it implements.
+ //
+ // The alternative would be to look at the underlying type
+ // to determine whether or not the opaque type itself should
+ // be considered local. However, this could make it a breaking change
+ // to switch the underlying ('defining') type from a local type
+ // to a remote type. This would violate the rule that opaque
+ // types should be completely opaque apart from the traits
+ // that they implement, so we don't use this behavior.
+ Some(ty)
+ }
+
+ ty::Dynamic(ref tt, ..) => {
+ if let Some(principal) = tt.principal() {
+ if def_id_is_local(principal.def_id(), in_crate) { None } else { Some(ty) }
+ } else {
+ Some(ty)
+ }
+ }
+
+ ty::Error => None,
+
+ ty::UnnormalizedProjection(..)
+ | ty::Closure(..)
+ | ty::Generator(..)
+ | ty::GeneratorWitness(..) => {
+ bug!("ty_is_non_local_constructor invoked on unexpected type: {:?}", ty)
+ }
+ }
+}
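For readers unfamiliar with the rule this file enforces: the orphan check rejects impls of foreign traits whose headers mention no local type. A minimal downstream sketch of the user-facing behavior (not part of this patch; `Local` is a hypothetical user type):

```rust
use std::fmt;

// A type defined in *this* crate: it counts as local for the orphan check.
struct Local;

// OK: the trait (`Display`) is foreign, but the `Self` type is local.
impl fmt::Display for Local {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "Local")
    }
}

// Rejected by the orphan check (error[E0117]): the trait and every type
// in the impl header are foreign, so no local type covers the impl.
// impl From<Vec<i32>> for Vec<u8> { /* ... */ }

fn main() {
    // `to_string` is available through the blanket impl for `Display` types.
    assert_eq!(Local.to_string(), "Local");
}
```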
--- /dev/null
+use rustc::ty::TyCtxt;
+
+use super::FulfillmentContext;
+use super::TraitEngine;
+
+pub trait TraitEngineExt<'tcx> {
+ fn new(tcx: TyCtxt<'tcx>) -> Box<Self>;
+}
+
+impl<'tcx> TraitEngineExt<'tcx> for dyn TraitEngine<'tcx> {
+ fn new(_tcx: TyCtxt<'tcx>) -> Box<Self> {
+ Box::new(FulfillmentContext::new())
+ }
+}
--- /dev/null
+pub mod on_unimplemented;
+pub mod suggestions;
+
+use super::{
+ ConstEvalFailure, EvaluationResult, FulfillmentError, FulfillmentErrorCode,
+ MismatchedProjectionTypes, Obligation, ObligationCause, ObligationCauseCode,
+ OnUnimplementedDirective, OnUnimplementedNote, OutputTypeParameterMismatch, Overflow,
+ PredicateObligation, SelectionContext, SelectionError, TraitNotObjectSafe,
+};
+
+use crate::infer::error_reporting::{TyCategory, TypeAnnotationNeeded as ErrorCode};
+use crate::infer::type_variable::{TypeVariableOrigin, TypeVariableOriginKind};
+use crate::infer::{self, InferCtxt, TyCtxtInferExt};
+use rustc::mir::interpret::ErrorHandled;
+use rustc::ty::error::ExpectedFound;
+use rustc::ty::fast_reject;
+use rustc::ty::fold::TypeFolder;
+use rustc::ty::SubtypePredicate;
+use rustc::ty::{
+ self, AdtKind, ToPolyTraitRef, ToPredicate, Ty, TyCtxt, TypeFoldable, WithConstness,
+};
+use rustc_data_structures::fx::FxHashMap;
+use rustc_errors::{pluralize, struct_span_err, Applicability, DiagnosticBuilder};
+use rustc_hir as hir;
+use rustc_hir::def_id::{DefId, LOCAL_CRATE};
+use rustc_hir::{Node, QPath, TyKind, WhereBoundPredicate, WherePredicate};
+use rustc_session::DiagnosticMessageId;
+use rustc_span::source_map::SourceMap;
+use rustc_span::{ExpnKind, Span, DUMMY_SP};
+use std::fmt;
+
+use crate::traits::query::evaluate_obligation::InferCtxtExt as _;
+use crate::traits::query::normalize::AtExt as _;
+use on_unimplemented::InferCtxtExt as _;
+use suggestions::InferCtxtExt as _;
+
+pub use rustc_infer::traits::error_reporting::*;
+
+pub trait InferCtxtExt<'tcx> {
+ fn report_fulfillment_errors(
+ &self,
+ errors: &[FulfillmentError<'tcx>],
+ body_id: Option<hir::BodyId>,
+ fallback_has_occurred: bool,
+ );
+
+ fn report_overflow_error<T>(
+ &self,
+ obligation: &Obligation<'tcx, T>,
+ suggest_increasing_limit: bool,
+ ) -> !
+ where
+ T: fmt::Display + TypeFoldable<'tcx>;
+
+ fn report_overflow_error_cycle(&self, cycle: &[PredicateObligation<'tcx>]) -> !;
+
+ fn report_selection_error(
+ &self,
+ obligation: &PredicateObligation<'tcx>,
+ error: &SelectionError<'tcx>,
+ fallback_has_occurred: bool,
+ points_at_arg: bool,
+ );
+
+ /// Given some node representing a fn-like thing in the HIR map,
+ /// returns a span and `ArgKind` information that describes the
+ /// arguments it expects. This can be supplied to
+ /// `report_arg_count_mismatch`.
+ fn get_fn_like_arguments(&self, node: Node<'_>) -> (Span, Vec<ArgKind>);
+
+ /// Reports an error when the number of arguments needed by a
+ /// trait match doesn't match the number that the expression
+ /// provides.
+ fn report_arg_count_mismatch(
+ &self,
+ span: Span,
+ found_span: Option<Span>,
+ expected_args: Vec<ArgKind>,
+ found_args: Vec<ArgKind>,
+ is_closure: bool,
+ ) -> DiagnosticBuilder<'tcx>;
+}
+
+impl<'a, 'tcx> InferCtxtExt<'tcx> for InferCtxt<'a, 'tcx> {
+ fn report_fulfillment_errors(
+ &self,
+ errors: &[FulfillmentError<'tcx>],
+ body_id: Option<hir::BodyId>,
+ fallback_has_occurred: bool,
+ ) {
+ #[derive(Debug)]
+ struct ErrorDescriptor<'tcx> {
+ predicate: ty::Predicate<'tcx>,
+ index: Option<usize>, // None if this is an old error
+ }
+
+ let mut error_map: FxHashMap<_, Vec<_>> = self
+ .reported_trait_errors
+ .borrow()
+ .iter()
+ .map(|(&span, predicates)| {
+ (
+ span,
+ predicates
+ .iter()
+ .map(|&predicate| ErrorDescriptor { predicate, index: None })
+ .collect(),
+ )
+ })
+ .collect();
+
+ for (index, error) in errors.iter().enumerate() {
+ // We want to ignore desugarings here: spans are equivalent even
+ // if one is the result of a desugaring and the other is not.
+ let mut span = error.obligation.cause.span;
+ let expn_data = span.ctxt().outer_expn_data();
+ if let ExpnKind::Desugaring(_) = expn_data.kind {
+ span = expn_data.call_site;
+ }
+
+ error_map.entry(span).or_default().push(ErrorDescriptor {
+ predicate: error.obligation.predicate,
+ index: Some(index),
+ });
+
+ self.reported_trait_errors
+ .borrow_mut()
+ .entry(span)
+ .or_default()
+ .push(error.obligation.predicate.clone());
+ }
+
+ // We do this in 2 passes because we want to display errors in order, though
+ // maybe it *is* better to sort errors by span or something.
+ let mut is_suppressed = vec![false; errors.len()];
+ for (_, error_set) in error_map.iter() {
+ // We want to suppress "duplicate" errors with the same span.
+ for error in error_set {
+ if let Some(index) = error.index {
+ // Suppress errors that are either:
+ // 1) strictly implied by another error.
+ // 2) implied by an error with a smaller index.
+ for error2 in error_set {
+ if error2.index.map_or(false, |index2| is_suppressed[index2]) {
+ // Avoid errors being suppressed by already-suppressed
+ // errors, to prevent all errors from being suppressed
+ // at once.
+ continue;
+ }
+
+ if self.error_implies(&error2.predicate, &error.predicate)
+ && !(error2.index >= error.index
+ && self.error_implies(&error.predicate, &error2.predicate))
+ {
+ info!("skipping {:?} (implied by {:?})", error, error2);
+ is_suppressed[index] = true;
+ break;
+ }
+ }
+ }
+ }
+ }
+
+ for (error, suppressed) in errors.iter().zip(is_suppressed) {
+ if !suppressed {
+ self.report_fulfillment_error(error, body_id, fallback_has_occurred);
+ }
+ }
+ }
+
+ /// Reports that an overflow has occurred and halts compilation. We
+ /// halt compilation unconditionally because it is important that
+ /// overflows never be masked -- they basically represent computations
+ /// whose result could not be truly determined and thus we can't say
+ /// if the program type checks or not -- and they are unusual
+ /// occurrences in any case.
+ fn report_overflow_error<T>(
+ &self,
+ obligation: &Obligation<'tcx, T>,
+ suggest_increasing_limit: bool,
+ ) -> !
+ where
+ T: fmt::Display + TypeFoldable<'tcx>,
+ {
+ let predicate = self.resolve_vars_if_possible(&obligation.predicate);
+ let mut err = struct_span_err!(
+ self.tcx.sess,
+ obligation.cause.span,
+ E0275,
+ "overflow evaluating the requirement `{}`",
+ predicate
+ );
+
+ if suggest_increasing_limit {
+ self.suggest_new_overflow_limit(&mut err);
+ }
+
+ self.note_obligation_cause_code(
+ &mut err,
+ &obligation.predicate,
+ &obligation.cause.code,
+ &mut vec![],
+ );
+
+ err.emit();
+ self.tcx.sess.abort_if_errors();
+ bug!();
+ }
+
+ /// Reports that a cycle was detected which led to overflow and halts
+ /// compilation. This is equivalent to `report_overflow_error` except
+ /// that we can give a more helpful error message (and, in particular,
+ /// we do not suggest increasing the overflow limit, which is not
+ /// going to help).
+ fn report_overflow_error_cycle(&self, cycle: &[PredicateObligation<'tcx>]) -> ! {
+ let cycle = self.resolve_vars_if_possible(&cycle.to_owned());
+ assert!(!cycle.is_empty());
+
+ debug!("report_overflow_error_cycle: cycle={:?}", cycle);
+
+ self.report_overflow_error(&cycle[0], false);
+ }
+
+ fn report_selection_error(
+ &self,
+ obligation: &PredicateObligation<'tcx>,
+ error: &SelectionError<'tcx>,
+ fallback_has_occurred: bool,
+ points_at_arg: bool,
+ ) {
+ let tcx = self.tcx;
+ let span = obligation.cause.span;
+
+ let mut err = match *error {
+ SelectionError::Unimplemented => {
+ if let ObligationCauseCode::CompareImplMethodObligation {
+ item_name,
+ impl_item_def_id,
+ trait_item_def_id,
+ }
+ | ObligationCauseCode::CompareImplTypeObligation {
+ item_name,
+ impl_item_def_id,
+ trait_item_def_id,
+ } = obligation.cause.code
+ {
+ self.report_extra_impl_obligation(
+ span,
+ item_name,
+ impl_item_def_id,
+ trait_item_def_id,
+ &format!("`{}`", obligation.predicate),
+ )
+ .emit();
+ return;
+ }
+ match obligation.predicate {
+ ty::Predicate::Trait(ref trait_predicate, _) => {
+ let trait_predicate = self.resolve_vars_if_possible(trait_predicate);
+
+ if self.tcx.sess.has_errors() && trait_predicate.references_error() {
+ return;
+ }
+ let trait_ref = trait_predicate.to_poly_trait_ref();
+ let (post_message, pre_message, type_def) = self
+ .get_parent_trait_ref(&obligation.cause.code)
+ .map(|(t, s)| {
+ (
+ format!(" in `{}`", t),
+ format!("within `{}`, ", t),
+ s.map(|s| (format!("within this `{}`", t), s)),
+ )
+ })
+ .unwrap_or_default();
+
+ let OnUnimplementedNote { message, label, note, enclosing_scope } =
+ self.on_unimplemented_note(trait_ref, obligation);
+ let have_alt_message = message.is_some() || label.is_some();
+ let is_try = self
+ .tcx
+ .sess
+ .source_map()
+ .span_to_snippet(span)
+ .map(|s| &s == "?")
+ .unwrap_or(false);
+ let is_from = format!("{}", trait_ref.print_only_trait_path())
+ .starts_with("std::convert::From<");
+ let (message, note) = if is_try && is_from {
+ (
+ Some(format!(
+ "`?` couldn't convert the error to `{}`",
+ trait_ref.self_ty(),
+ )),
+ Some(
+ "the question mark operation (`?`) implicitly performs a \
+ conversion on the error value using the `From` trait"
+ .to_owned(),
+ ),
+ )
+ } else {
+ (message, note)
+ };
+
+ let mut err = struct_span_err!(
+ self.tcx.sess,
+ span,
+ E0277,
+ "{}",
+ message.unwrap_or_else(|| format!(
+ "the trait bound `{}` is not satisfied{}",
+ trait_ref.without_const().to_predicate(),
+ post_message,
+ ))
+ );
+
+ let explanation =
+ if obligation.cause.code == ObligationCauseCode::MainFunctionType {
+ "consider using `()`, or a `Result`".to_owned()
+ } else {
+ format!(
+ "{}the trait `{}` is not implemented for `{}`",
+ pre_message,
+ trait_ref.print_only_trait_path(),
+ trait_ref.self_ty(),
+ )
+ };
+
+ if self.suggest_add_reference_to_arg(
+ &obligation,
+ &mut err,
+ &trait_ref,
+ points_at_arg,
+ have_alt_message,
+ ) {
+ self.note_obligation_cause(&mut err, obligation);
+ err.emit();
+ return;
+ }
+ if let Some(ref s) = label {
+ // If it has a custom `#[rustc_on_unimplemented]`
+ // error message, let's display it as the label!
+ err.span_label(span, s.as_str());
+ err.help(&explanation);
+ } else {
+ err.span_label(span, explanation);
+ }
+ if let Some((msg, span)) = type_def {
+ err.span_label(span, &msg);
+ }
+ if let Some(ref s) = note {
+ // If it has a custom `#[rustc_on_unimplemented]` note, let's display it
+ err.note(s.as_str());
+ }
+ if let Some(ref s) = enclosing_scope {
+ let enclosing_scope_span = tcx.def_span(
+ tcx.hir()
+ .opt_local_def_id(obligation.cause.body_id)
+ .unwrap_or_else(|| {
+ tcx.hir().body_owner_def_id(hir::BodyId {
+ hir_id: obligation.cause.body_id,
+ })
+ }),
+ );
+
+ err.span_label(enclosing_scope_span, s.as_str());
+ }
+
+ self.suggest_borrow_on_unsized_slice(&obligation.cause.code, &mut err);
+ self.suggest_fn_call(&obligation, &mut err, &trait_ref, points_at_arg);
+ self.suggest_remove_reference(&obligation, &mut err, &trait_ref);
+ self.suggest_semicolon_removal(&obligation, &mut err, span, &trait_ref);
+ self.note_version_mismatch(&mut err, &trait_ref);
+ if self.suggest_impl_trait(&mut err, span, &obligation, &trait_ref) {
+ err.emit();
+ return;
+ }
+
+ // Try to report a help message
+ if !trait_ref.has_infer_types_or_consts()
+ && self.predicate_can_apply(obligation.param_env, trait_ref)
+ {
+ // If a where-clause may be useful, remind the
+ // user that they can add it.
+ //
+ // don't display an on-unimplemented note, as
+ // these notes will often be of the form
+ // "the type `T` can't be frobnicated"
+ // which is somewhat confusing.
+ self.suggest_restricting_param_bound(
+ &mut err,
+ &trait_ref,
+ obligation.cause.body_id,
+ );
+ } else {
+ if !have_alt_message {
+ // Can't show anything else useful, try to find similar impls.
+ let impl_candidates = self.find_similar_impl_candidates(trait_ref);
+ self.report_similar_impl_candidates(impl_candidates, &mut err);
+ }
+ self.suggest_change_mut(
+ &obligation,
+ &mut err,
+ &trait_ref,
+ points_at_arg,
+ );
+ }
+
+ // If this error is due to `!: Trait` not implemented but `(): Trait` is
+ // implemented, and fallback has occurred, then it could be due to a
+ // variable that used to fallback to `()` now falling back to `!`. Issue a
+ // note informing about the change in behaviour.
+ if trait_predicate.skip_binder().self_ty().is_never()
+ && fallback_has_occurred
+ {
+ let predicate = trait_predicate.map_bound(|mut trait_pred| {
+ trait_pred.trait_ref.substs = self.tcx.mk_substs_trait(
+ self.tcx.mk_unit(),
+ &trait_pred.trait_ref.substs[1..],
+ );
+ trait_pred
+ });
+ let unit_obligation = Obligation {
+ predicate: ty::Predicate::Trait(
+ predicate,
+ hir::Constness::NotConst,
+ ),
+ ..obligation.clone()
+ };
+ if self.predicate_may_hold(&unit_obligation) {
+ err.note(
+ "the trait is implemented for `()`. \
+ Possibly this error has been caused by changes to \
+ Rust's type-inference algorithm (see issue #48950 \
+ <https://github.com/rust-lang/rust/issues/48950> \
+ for more information). Consider whether you meant to use \
+ the type `()` here instead.",
+ );
+ }
+ }
+
+ err
+ }
+
+ ty::Predicate::Subtype(ref predicate) => {
+ // Errors for Subtype predicates show up as
+ // `FulfillmentErrorCode::CodeSubtypeError`,
+ // not selection error.
+ span_bug!(span, "subtype requirement gave wrong error: `{:?}`", predicate)
+ }
+
+ ty::Predicate::RegionOutlives(ref predicate) => {
+ let predicate = self.resolve_vars_if_possible(predicate);
+ let err = self
+ .region_outlives_predicate(&obligation.cause, &predicate)
+ .err()
+ .unwrap();
+ struct_span_err!(
+ self.tcx.sess,
+ span,
+ E0279,
+ "the requirement `{}` is not satisfied (`{}`)",
+ predicate,
+ err,
+ )
+ }
+
+ ty::Predicate::Projection(..) | ty::Predicate::TypeOutlives(..) => {
+ let predicate = self.resolve_vars_if_possible(&obligation.predicate);
+ struct_span_err!(
+ self.tcx.sess,
+ span,
+ E0280,
+ "the requirement `{}` is not satisfied",
+ predicate
+ )
+ }
+
+ ty::Predicate::ObjectSafe(trait_def_id) => {
+ let violations = self.tcx.object_safety_violations(trait_def_id);
+ report_object_safety_error(self.tcx, span, trait_def_id, violations)
+ }
+
+ ty::Predicate::ClosureKind(closure_def_id, closure_substs, kind) => {
+ let found_kind = self.closure_kind(closure_def_id, closure_substs).unwrap();
+ let closure_span = self
+ .tcx
+ .sess
+ .source_map()
+ .def_span(self.tcx.hir().span_if_local(closure_def_id).unwrap());
+ let hir_id = self.tcx.hir().as_local_hir_id(closure_def_id).unwrap();
+ let mut err = struct_span_err!(
+ self.tcx.sess,
+ closure_span,
+ E0525,
+ "expected a closure that implements the `{}` trait, \
+ but this closure only implements `{}`",
+ kind,
+ found_kind
+ );
+
+ err.span_label(
+ closure_span,
+ format!("this closure implements `{}`, not `{}`", found_kind, kind),
+ );
+ err.span_label(
+ obligation.cause.span,
+ format!("the requirement to implement `{}` derives from here", kind),
+ );
+
+ // Additional context information explaining why the closure only implements
+ // a particular trait.
+ if let Some(tables) = self.in_progress_tables {
+ let tables = tables.borrow();
+ match (found_kind, tables.closure_kind_origins().get(hir_id)) {
+ (ty::ClosureKind::FnOnce, Some((span, name))) => {
+ err.span_label(
+ *span,
+ format!(
+ "closure is `FnOnce` because it moves the \
+ variable `{}` out of its environment",
+ name
+ ),
+ );
+ }
+ (ty::ClosureKind::FnMut, Some((span, name))) => {
+ err.span_label(
+ *span,
+ format!(
+ "closure is `FnMut` because it mutates the \
+ variable `{}` here",
+ name
+ ),
+ );
+ }
+ _ => {}
+ }
+ }
+
+ err.emit();
+ return;
+ }
+
+ ty::Predicate::WellFormed(ty) => {
+ // WF predicates cannot themselves make
+ // errors. They can only block due to
+ // ambiguity; otherwise, they always
+ // degenerate into other obligations
+ // (which may fail).
+ span_bug!(span, "WF predicate not satisfied for {:?}", ty);
+ }
+
+ ty::Predicate::ConstEvaluatable(..) => {
+ // Errors for `ConstEvaluatable` predicates show up as
+ // `SelectionError::ConstEvalFailure`,
+ // not `Unimplemented`.
+ span_bug!(
+ span,
+ "const-evaluatable requirement gave wrong error: `{:?}`",
+ obligation
+ )
+ }
+ }
+ }
+
+ OutputTypeParameterMismatch(ref found_trait_ref, ref expected_trait_ref, _) => {
+ let found_trait_ref = self.resolve_vars_if_possible(&*found_trait_ref);
+ let expected_trait_ref = self.resolve_vars_if_possible(&*expected_trait_ref);
+
+ if expected_trait_ref.self_ty().references_error() {
+ return;
+ }
+
+ let found_trait_ty = found_trait_ref.self_ty();
+
+ let found_did = match found_trait_ty.kind {
+ ty::Closure(did, _) | ty::Foreign(did) | ty::FnDef(did, _) => Some(did),
+ ty::Adt(def, _) => Some(def.did),
+ _ => None,
+ };
+
+ let found_span = found_did
+ .and_then(|did| self.tcx.hir().span_if_local(did))
+ .map(|sp| self.tcx.sess.source_map().def_span(sp)); // the sp could be an fn def
+
+ if self.reported_closure_mismatch.borrow().contains(&(span, found_span)) {
+ // We check closures twice, with obligations flowing in different directions,
+ // but we want to complain about them only once.
+ return;
+ }
+
+ self.reported_closure_mismatch.borrow_mut().insert((span, found_span));
+
+ let found = match found_trait_ref.skip_binder().substs.type_at(1).kind {
+ ty::Tuple(ref tys) => vec![ArgKind::empty(); tys.len()],
+ _ => vec![ArgKind::empty()],
+ };
+
+ let expected_ty = expected_trait_ref.skip_binder().substs.type_at(1);
+ let expected = match expected_ty.kind {
+ ty::Tuple(ref tys) => tys
+ .iter()
+ .map(|t| ArgKind::from_expected_ty(t.expect_ty(), Some(span)))
+ .collect(),
+ _ => vec![ArgKind::Arg("_".to_owned(), expected_ty.to_string())],
+ };
+
+ if found.len() == expected.len() {
+ self.report_closure_arg_mismatch(
+ span,
+ found_span,
+ found_trait_ref,
+ expected_trait_ref,
+ )
+ } else {
+ let (closure_span, found) = found_did
+ .and_then(|did| self.tcx.hir().get_if_local(did))
+ .map(|node| {
+ let (found_span, found) = self.get_fn_like_arguments(node);
+ (Some(found_span), found)
+ })
+ .unwrap_or((found_span, found));
+
+ self.report_arg_count_mismatch(
+ span,
+ closure_span,
+ expected,
+ found,
+ found_trait_ty.is_closure(),
+ )
+ }
+ }
+
+ TraitNotObjectSafe(did) => {
+ let violations = self.tcx.object_safety_violations(did);
+ report_object_safety_error(self.tcx, span, did, violations)
+ }
+
+ ConstEvalFailure(ErrorHandled::TooGeneric) => {
+ // In this instance, we have a const expression containing an unevaluated
+ // generic parameter. We have no idea whether this expression is valid or
+ // not (e.g. it might result in an error), but we don't want to just assume
+ // that it's okay, because that might result in post-monomorphisation time
+ // errors. The onus is really on the caller to provide values that it can
+ // prove are well-formed.
+ let mut err = self
+ .tcx
+ .sess
+ .struct_span_err(span, "constant expression depends on a generic parameter");
+ // FIXME(const_generics): we should suggest to the user how they can resolve this
+ // issue. However, this is currently not actually possible
+ // (see https://github.com/rust-lang/rust/issues/66962#issuecomment-575907083).
+ err.note("this may fail depending on what value the parameter takes");
+ err
+ }
+
+ // Already reported in the query.
+ ConstEvalFailure(ErrorHandled::Reported) => {
+ self.tcx.sess.delay_span_bug(span, "constant in type had an ignored error");
+ return;
+ }
+
+ Overflow => {
+ bug!("overflow should be handled before the `report_selection_error` path");
+ }
+ };
+
+ self.note_obligation_cause(&mut err, obligation);
+ self.point_at_returns_when_relevant(&mut err, &obligation);
+
+ err.emit();
+ }
+
+ /// Given some node representing a fn-like thing in the HIR map,
+ /// returns a span and `ArgKind` information that describes the
+ /// arguments it expects. This can be supplied to
+ /// `report_arg_count_mismatch`.
+ fn get_fn_like_arguments(&self, node: Node<'_>) -> (Span, Vec<ArgKind>) {
+ match node {
+ Node::Expr(&hir::Expr {
+ kind: hir::ExprKind::Closure(_, ref _decl, id, span, _),
+ ..
+ }) => (
+ self.tcx.sess.source_map().def_span(span),
+ self.tcx
+ .hir()
+ .body(id)
+ .params
+ .iter()
+ .map(|arg| {
+ if let hir::Pat { kind: hir::PatKind::Tuple(ref args, _), span, .. } =
+ *arg.pat
+ {
+ ArgKind::Tuple(
+ Some(span),
+ args.iter()
+ .map(|pat| {
+ let snippet = self
+ .tcx
+ .sess
+ .source_map()
+ .span_to_snippet(pat.span)
+ .unwrap();
+ (snippet, "_".to_owned())
+ })
+ .collect::<Vec<_>>(),
+ )
+ } else {
+ let name =
+ self.tcx.sess.source_map().span_to_snippet(arg.pat.span).unwrap();
+ ArgKind::Arg(name, "_".to_owned())
+ }
+ })
+ .collect::<Vec<ArgKind>>(),
+ ),
+ Node::Item(&hir::Item { span, kind: hir::ItemKind::Fn(ref sig, ..), .. })
+ | Node::ImplItem(&hir::ImplItem {
+ span,
+ kind: hir::ImplItemKind::Fn(ref sig, _),
+ ..
+ })
+ | Node::TraitItem(&hir::TraitItem {
+ span,
+ kind: hir::TraitItemKind::Fn(ref sig, _),
+ ..
+ }) => (
+ self.tcx.sess.source_map().def_span(span),
+ sig.decl
+ .inputs
+ .iter()
+ .map(|arg| match arg.clone().kind {
+ hir::TyKind::Tup(ref tys) => ArgKind::Tuple(
+ Some(arg.span),
+ vec![("_".to_owned(), "_".to_owned()); tys.len()],
+ ),
+ _ => ArgKind::empty(),
+ })
+ .collect::<Vec<ArgKind>>(),
+ ),
+ Node::Ctor(ref variant_data) => {
+ let span = variant_data
+ .ctor_hir_id()
+ .map(|hir_id| self.tcx.hir().span(hir_id))
+ .unwrap_or(DUMMY_SP);
+ let span = self.tcx.sess.source_map().def_span(span);
+
+ (span, vec![ArgKind::empty(); variant_data.fields().len()])
+ }
+ _ => panic!("non-FnLike node found: {:?}", node),
+ }
+ }
+
+ /// Reports an error when the number of arguments needed by a
+ /// trait match doesn't match the number that the expression
+ /// provides.
+ fn report_arg_count_mismatch(
+ &self,
+ span: Span,
+ found_span: Option<Span>,
+ expected_args: Vec<ArgKind>,
+ found_args: Vec<ArgKind>,
+ is_closure: bool,
+ ) -> DiagnosticBuilder<'tcx> {
+ let kind = if is_closure { "closure" } else { "function" };
+
+ let args_str = |arguments: &[ArgKind], other: &[ArgKind]| {
+ let arg_length = arguments.len();
+ let distinct = match &other[..] {
+ &[ArgKind::Tuple(..)] => true,
+ _ => false,
+ };
+ match (arg_length, arguments.get(0)) {
+ (1, Some(&ArgKind::Tuple(_, ref fields))) => {
+ format!("a single {}-tuple as argument", fields.len())
+ }
+ _ => format!(
+ "{} {}argument{}",
+ arg_length,
+ if distinct && arg_length > 1 { "distinct " } else { "" },
+ pluralize!(arg_length)
+ ),
+ }
+ };
+
+ let expected_str = args_str(&expected_args, &found_args);
+ let found_str = args_str(&found_args, &expected_args);
+
+ let mut err = struct_span_err!(
+ self.tcx.sess,
+ span,
+ E0593,
+ "{} is expected to take {}, but it takes {}",
+ kind,
+ expected_str,
+ found_str,
+ );
+
+ err.span_label(span, format!("expected {} that takes {}", kind, expected_str));
+
+ if let Some(found_span) = found_span {
+ err.span_label(found_span, format!("takes {}", found_str));
+
+ // move |_| { ... }
+ // ^^^^^^^^-- def_span
+ //
+ // move |_| { ... }
+ // ^^^^^-- prefix
+ let prefix_span = self.tcx.sess.source_map().span_until_non_whitespace(found_span);
+ // move |_| { ... }
+ // ^^^-- pipe_span
+ let pipe_span =
+ if let Some(span) = found_span.trim_start(prefix_span) { span } else { found_span };
+
+ // Suggest taking and ignoring the arguments with `expected_args.len()` `_`s if
+ // the found arguments are empty (assume the user just wants to ignore args in this case).
+ // For example, if `expected_args.len()` is 2, suggest `|_, _|`.
+ if found_args.is_empty() && is_closure {
+ let underscores = vec!["_"; expected_args.len()].join(", ");
+ err.span_suggestion(
+ pipe_span,
+ &format!(
+ "consider changing the closure to take and ignore the expected argument{}",
+ if expected_args.len() < 2 { "" } else { "s" }
+ ),
+ format!("|{}|", underscores),
+ Applicability::MachineApplicable,
+ );
+ }
+
+ if let &[ArgKind::Tuple(_, ref fields)] = &found_args[..] {
+ if fields.len() == expected_args.len() {
+ let sugg = fields
+ .iter()
+ .map(|(name, _)| name.to_owned())
+ .collect::<Vec<String>>()
+ .join(", ");
+ err.span_suggestion(
+ found_span,
+ "change the closure to take multiple arguments instead of a single tuple",
+ format!("|{}|", sugg),
+ Applicability::MachineApplicable,
+ );
+ }
+ }
+ if let &[ArgKind::Tuple(_, ref fields)] = &expected_args[..] {
+ if fields.len() == found_args.len() && is_closure {
+ let sugg = format!(
+ "|({}){}|",
+ found_args
+ .iter()
+ .map(|arg| match arg {
+ ArgKind::Arg(name, _) => name.to_owned(),
+ _ => "_".to_owned(),
+ })
+ .collect::<Vec<String>>()
+ .join(", "),
+ // add type annotations if available
+ if found_args.iter().any(|arg| match arg {
+ ArgKind::Arg(_, ty) => ty != "_",
+ _ => false,
+ }) {
+ format!(
+ ": ({})",
+ fields
+ .iter()
+ .map(|(_, ty)| ty.to_owned())
+ .collect::<Vec<String>>()
+ .join(", ")
+ )
+ } else {
+ String::new()
+ },
+ );
+ err.span_suggestion(
+ found_span,
+ "change the closure to accept a tuple instead of individual arguments",
+ sugg,
+ Applicability::MachineApplicable,
+ );
+ }
+ }
+ }
+
+ err
+ }
+}
+
+trait InferCtxtPrivExt<'tcx> {
+ // Returns whether `cond` not occurring implies that `error` does not occur,
+ // i.e., whether `error` occurring implies that `cond` occurs.
+ fn error_implies(&self, cond: &ty::Predicate<'tcx>, error: &ty::Predicate<'tcx>) -> bool;
+
+ fn report_fulfillment_error(
+ &self,
+ error: &FulfillmentError<'tcx>,
+ body_id: Option<hir::BodyId>,
+ fallback_has_occurred: bool,
+ );
+
+ fn report_projection_error(
+ &self,
+ obligation: &PredicateObligation<'tcx>,
+ error: &MismatchedProjectionTypes<'tcx>,
+ );
+
+ fn fuzzy_match_tys(&self, a: Ty<'tcx>, b: Ty<'tcx>) -> bool;
+
+ fn describe_generator(&self, body_id: hir::BodyId) -> Option<&'static str>;
+
+ fn find_similar_impl_candidates(
+ &self,
+ trait_ref: ty::PolyTraitRef<'tcx>,
+ ) -> Vec<ty::TraitRef<'tcx>>;
+
+ fn report_similar_impl_candidates(
+ &self,
+ impl_candidates: Vec<ty::TraitRef<'tcx>>,
+ err: &mut DiagnosticBuilder<'_>,
+ );
+
+ /// Gets the parent trait chain start
+ fn get_parent_trait_ref(
+ &self,
+ code: &ObligationCauseCode<'tcx>,
+ ) -> Option<(String, Option<Span>)>;
+
+ /// If the `Self` type of the unsatisfied trait `trait_ref` implements a trait
+ /// with the same path as `trait_ref`, a help message about
+ /// a probable version mismatch is added to `err`.
+ fn note_version_mismatch(
+ &self,
+ err: &mut DiagnosticBuilder<'_>,
+ trait_ref: &ty::PolyTraitRef<'tcx>,
+ );
+
+ fn mk_obligation_for_def_id(
+ &self,
+ def_id: DefId,
+ output_ty: Ty<'tcx>,
+ cause: ObligationCause<'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ ) -> PredicateObligation<'tcx>;
+
+ fn maybe_report_ambiguity(
+ &self,
+ obligation: &PredicateObligation<'tcx>,
+ body_id: Option<hir::BodyId>,
+ );
+
+ fn predicate_can_apply(
+ &self,
+ param_env: ty::ParamEnv<'tcx>,
+ pred: ty::PolyTraitRef<'tcx>,
+ ) -> bool;
+
+ fn note_obligation_cause(
+ &self,
+ err: &mut DiagnosticBuilder<'_>,
+ obligation: &PredicateObligation<'tcx>,
+ );
+
+ fn suggest_unsized_bound_if_applicable(
+ &self,
+ err: &mut DiagnosticBuilder<'_>,
+ obligation: &PredicateObligation<'tcx>,
+ );
+
+ fn is_recursive_obligation(
+ &self,
+ obligated_types: &mut Vec<&ty::TyS<'tcx>>,
+ cause_code: &ObligationCauseCode<'tcx>,
+ ) -> bool;
+}
+
+impl<'a, 'tcx> InferCtxtPrivExt<'tcx> for InferCtxt<'a, 'tcx> {
+ // Returns whether `cond` not occurring implies that `error` does not occur,
+ // i.e., whether `error` occurring implies that `cond` occurs.
+ fn error_implies(&self, cond: &ty::Predicate<'tcx>, error: &ty::Predicate<'tcx>) -> bool {
+ if cond == error {
+ return true;
+ }
+
+ let (cond, error) = match (cond, error) {
+ (&ty::Predicate::Trait(..), &ty::Predicate::Trait(ref error, _)) => (cond, error),
+ _ => {
+ // FIXME: make this work in other cases too.
+ return false;
+ }
+ };
+
+ for implication in super::elaborate_predicates(self.tcx, vec![*cond]) {
+ if let ty::Predicate::Trait(implication, _) = implication {
+ let error = error.to_poly_trait_ref();
+ let implication = implication.to_poly_trait_ref();
+ // FIXME: I'm not taking associated types into account at all here.
+ // Eventually I'll need to implement param-env-aware
+ // `Γ₁ ⊦ φ₁ => Γ₂ ⊦ φ₂` logic.
+ let param_env = ty::ParamEnv::empty();
+ if self.can_sub(param_env, error, implication).is_ok() {
+ debug!("error_implies: {:?} -> {:?} -> {:?}", cond, error, implication);
+ return true;
+ }
+ }
+ }
+
+ false
+ }
+
+ fn report_fulfillment_error(
+ &self,
+ error: &FulfillmentError<'tcx>,
+ body_id: Option<hir::BodyId>,
+ fallback_has_occurred: bool,
+ ) {
+ debug!("report_fulfillment_error({:?})", error);
+ match error.code {
+ FulfillmentErrorCode::CodeSelectionError(ref selection_error) => {
+ self.report_selection_error(
+ &error.obligation,
+ selection_error,
+ fallback_has_occurred,
+ error.points_at_arg_span,
+ );
+ }
+ FulfillmentErrorCode::CodeProjectionError(ref e) => {
+ self.report_projection_error(&error.obligation, e);
+ }
+ FulfillmentErrorCode::CodeAmbiguity => {
+ self.maybe_report_ambiguity(&error.obligation, body_id);
+ }
+ FulfillmentErrorCode::CodeSubtypeError(ref expected_found, ref err) => {
+ self.report_mismatched_types(
+ &error.obligation.cause,
+ expected_found.expected,
+ expected_found.found,
+ err.clone(),
+ )
+ .emit();
+ }
+ }
+ }
+
+ fn report_projection_error(
+ &self,
+ obligation: &PredicateObligation<'tcx>,
+ error: &MismatchedProjectionTypes<'tcx>,
+ ) {
+ let predicate = self.resolve_vars_if_possible(&obligation.predicate);
+
+ if predicate.references_error() {
+ return;
+ }
+
+ self.probe(|_| {
+ let err_buf;
+ let mut err = &error.err;
+ let mut values = None;
+
+ // try to find the mismatched types to report the error with.
+ //
+ // this can fail if the problem was higher-ranked, in which
+ // case I have no idea for a good error message.
+ if let ty::Predicate::Projection(ref data) = predicate {
+ let mut selcx = SelectionContext::new(self);
+ let (data, _) = self.replace_bound_vars_with_fresh_vars(
+ obligation.cause.span,
+ infer::LateBoundRegionConversionTime::HigherRankedType,
+ data,
+ );
+ let mut obligations = vec![];
+ let normalized_ty = super::normalize_projection_type(
+ &mut selcx,
+ obligation.param_env,
+ data.projection_ty,
+ obligation.cause.clone(),
+ 0,
+ &mut obligations,
+ );
+
+ debug!(
+ "report_projection_error obligation.cause={:?} obligation.param_env={:?}",
+ obligation.cause, obligation.param_env
+ );
+
+ debug!(
+ "report_projection_error normalized_ty={:?} data.ty={:?}",
+ normalized_ty, data.ty
+ );
+
+ let is_normalized_ty_expected = match &obligation.cause.code {
+ ObligationCauseCode::ItemObligation(_)
+ | ObligationCauseCode::BindingObligation(_, _)
+ | ObligationCauseCode::ObjectCastObligation(_) => false,
+ _ => true,
+ };
+
+ if let Err(error) = self.at(&obligation.cause, obligation.param_env).eq_exp(
+ is_normalized_ty_expected,
+ normalized_ty,
+ data.ty,
+ ) {
+ values = Some(infer::ValuePairs::Types(ExpectedFound::new(
+ is_normalized_ty_expected,
+ normalized_ty,
+ data.ty,
+ )));
+
+ err_buf = error;
+ err = &err_buf;
+ }
+ }
+
+ let msg = format!("type mismatch resolving `{}`", predicate);
+ let error_id = (DiagnosticMessageId::ErrorId(271), Some(obligation.cause.span), msg);
+ let fresh = self.tcx.sess.one_time_diagnostics.borrow_mut().insert(error_id);
+ if fresh {
+ let mut diag = struct_span_err!(
+ self.tcx.sess,
+ obligation.cause.span,
+ E0271,
+ "type mismatch resolving `{}`",
+ predicate
+ );
+ self.note_type_err(&mut diag, &obligation.cause, None, values, err);
+ self.note_obligation_cause(&mut diag, obligation);
+ diag.emit();
+ }
+ });
+ }
+
+ fn fuzzy_match_tys(&self, a: Ty<'tcx>, b: Ty<'tcx>) -> bool {
+ /// Returns the fuzzy category of a given type, or `None`
+ /// if the type can be equated to any type.
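+ ///
+ /// For example, `bool` maps to category 0 and `char` to category 1,
+ /// while `&T` and `*const T` both map to the pointer category (5), so
+ /// (as a hedged illustration) `fuzzy_match_tys` would treat `&u8` and
+ /// `*const u8` as a fuzzy match.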
+ fn type_category(t: Ty<'_>) -> Option<u32> {
+ match t.kind {
+ ty::Bool => Some(0),
+ ty::Char => Some(1),
+ ty::Str => Some(2),
+ ty::Int(..) | ty::Uint(..) | ty::Infer(ty::IntVar(..)) => Some(3),
+ ty::Float(..) | ty::Infer(ty::FloatVar(..)) => Some(4),
+ ty::Ref(..) | ty::RawPtr(..) => Some(5),
+ ty::Array(..) | ty::Slice(..) => Some(6),
+ ty::FnDef(..) | ty::FnPtr(..) => Some(7),
+ ty::Dynamic(..) => Some(8),
+ ty::Closure(..) => Some(9),
+ ty::Tuple(..) => Some(10),
+ ty::Projection(..) => Some(11),
+ ty::Param(..) => Some(12),
+ ty::Opaque(..) => Some(13),
+ ty::Never => Some(14),
+ ty::Adt(adt, ..) => match adt.adt_kind() {
+ AdtKind::Struct => Some(15),
+ AdtKind::Union => Some(16),
+ AdtKind::Enum => Some(17),
+ },
+ ty::Generator(..) => Some(18),
+ ty::Foreign(..) => Some(19),
+ ty::GeneratorWitness(..) => Some(20),
+ ty::Placeholder(..) | ty::Bound(..) | ty::Infer(..) | ty::Error => None,
+ ty::UnnormalizedProjection(..) => bug!("only used with chalk-engine"),
+ }
+ }
+
+ match (type_category(a), type_category(b)) {
+ (Some(cat_a), Some(cat_b)) => match (&a.kind, &b.kind) {
+ (&ty::Adt(def_a, _), &ty::Adt(def_b, _)) => def_a == def_b,
+ _ => cat_a == cat_b,
+ },
+ // infer and error can be equated to all types
+ _ => true,
+ }
+ }
+
+ fn describe_generator(&self, body_id: hir::BodyId) -> Option<&'static str> {
+ self.tcx.hir().body(body_id).generator_kind.map(|gen_kind| match gen_kind {
+ hir::GeneratorKind::Gen => "a generator",
+ hir::GeneratorKind::Async(hir::AsyncGeneratorKind::Block) => "an async block",
+ hir::GeneratorKind::Async(hir::AsyncGeneratorKind::Fn) => "an async function",
+ hir::GeneratorKind::Async(hir::AsyncGeneratorKind::Closure) => "an async closure",
+ })
+ }
+
+ fn find_similar_impl_candidates(
+ &self,
+ trait_ref: ty::PolyTraitRef<'tcx>,
+ ) -> Vec<ty::TraitRef<'tcx>> {
+ let simp = fast_reject::simplify_type(self.tcx, trait_ref.skip_binder().self_ty(), true);
+ let all_impls = self.tcx.all_impls(trait_ref.def_id());
+
+ match simp {
+ Some(simp) => all_impls
+ .iter()
+ .filter_map(|&def_id| {
+ let imp = self.tcx.impl_trait_ref(def_id).unwrap();
+ let imp_simp = fast_reject::simplify_type(self.tcx, imp.self_ty(), true);
+ if let Some(imp_simp) = imp_simp {
+ if simp != imp_simp {
+ return None;
+ }
+ }
+
+ Some(imp)
+ })
+ .collect(),
+ None => {
+ all_impls.iter().map(|&def_id| self.tcx.impl_trait_ref(def_id).unwrap()).collect()
+ }
+ }
+ }
+
+ fn report_similar_impl_candidates(
+ &self,
+ impl_candidates: Vec<ty::TraitRef<'tcx>>,
+ err: &mut DiagnosticBuilder<'_>,
+ ) {
+ if impl_candidates.is_empty() {
+ return;
+ }
+
+ // Show all candidates if there are at most 5; otherwise show the
+ // first 4 and summarize the rest as "and N others".
+ let len = impl_candidates.len();
+ let end = if len <= 5 { len } else { 4 };
+
+ let normalize = |candidate| {
+ self.tcx.infer_ctxt().enter(|ref infcx| {
+ let normalized = infcx
+ .at(&ObligationCause::dummy(), ty::ParamEnv::empty())
+ .normalize(candidate)
+ .ok();
+ match normalized {
+ Some(normalized) => format!("\n {:?}", normalized.value),
+ None => format!("\n {:?}", candidate),
+ }
+ })
+ };
+
+ // Sort impl candidates so that ordering is consistent for UI tests.
+ let mut normalized_impl_candidates =
+ impl_candidates.iter().map(normalize).collect::<Vec<String>>();
+
+ // Sort before taking the `..end` range,
+ // because the ordering of `impl_candidates` may not be deterministic:
+ // https://github.com/rust-lang/rust/pull/57475#issuecomment-455519507
+ normalized_impl_candidates.sort();
+
+ err.help(&format!(
+ "the following implementations were found:{}{}",
+ normalized_impl_candidates[..end].join(""),
+ if len > 5 { format!("\nand {} others", len - 4) } else { String::new() }
+ ));
+ }
+
+ /// Gets the start of the parent trait chain.
+ fn get_parent_trait_ref(
+ &self,
+ code: &ObligationCauseCode<'tcx>,
+ ) -> Option<(String, Option<Span>)> {
+ match code {
+ &ObligationCauseCode::BuiltinDerivedObligation(ref data) => {
+ let parent_trait_ref = self.resolve_vars_if_possible(&data.parent_trait_ref);
+ match self.get_parent_trait_ref(&data.parent_code) {
+ Some(t) => Some(t),
+ None => {
+ let ty = parent_trait_ref.skip_binder().self_ty();
+ let span =
+ TyCategory::from_ty(ty).map(|(_, def_id)| self.tcx.def_span(def_id));
+ Some((ty.to_string(), span))
+ }
+ }
+ }
+ _ => None,
+ }
+ }
+
+ /// If the `Self` type of the unsatisfied trait `trait_ref` implements a trait
+ /// with the same path as `trait_ref`, a help message about
+ /// a probable version mismatch is added to `err`.
+ fn note_version_mismatch(
+ &self,
+ err: &mut DiagnosticBuilder<'_>,
+ trait_ref: &ty::PolyTraitRef<'tcx>,
+ ) {
+ let get_trait_impl = |trait_def_id| {
+ let mut trait_impl = None;
+ self.tcx.for_each_relevant_impl(trait_def_id, trait_ref.self_ty(), |impl_def_id| {
+ if trait_impl.is_none() {
+ trait_impl = Some(impl_def_id);
+ }
+ });
+ trait_impl
+ };
+ let required_trait_path = self.tcx.def_path_str(trait_ref.def_id());
+ let all_traits = self.tcx.all_traits(LOCAL_CRATE);
+ let traits_with_same_path: std::collections::BTreeSet<_> = all_traits
+ .iter()
+ .filter(|trait_def_id| **trait_def_id != trait_ref.def_id())
+ .filter(|trait_def_id| self.tcx.def_path_str(**trait_def_id) == required_trait_path)
+ .collect();
+ for trait_with_same_path in traits_with_same_path {
+ if let Some(impl_def_id) = get_trait_impl(*trait_with_same_path) {
+ let impl_span = self.tcx.def_span(impl_def_id);
+ err.span_help(impl_span, "trait impl with same name found");
+ let trait_crate = self.tcx.crate_name(trait_with_same_path.krate);
+ let crate_msg = format!(
+ "perhaps two different versions of crate `{}` are being used?",
+ trait_crate
+ );
+ err.note(&crate_msg);
+ }
+ }
+ }
+
+ fn mk_obligation_for_def_id(
+ &self,
+ def_id: DefId,
+ output_ty: Ty<'tcx>,
+ cause: ObligationCause<'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ ) -> PredicateObligation<'tcx> {
+ let new_trait_ref =
+ ty::TraitRef { def_id, substs: self.tcx.mk_substs_trait(output_ty, &[]) };
+ Obligation::new(cause, param_env, new_trait_ref.without_const().to_predicate())
+ }
+
+ fn maybe_report_ambiguity(
+ &self,
+ obligation: &PredicateObligation<'tcx>,
+ body_id: Option<hir::BodyId>,
+ ) {
+ // Selection was unable to reach a decision: this usually means
+ // insufficient type information, but could also mean
+ // ambiguous impls. The latter *ought* to be a
+ // coherence violation, so we don't report it here.
+
+ let predicate = self.resolve_vars_if_possible(&obligation.predicate);
+ let span = obligation.cause.span;
+
+ debug!(
+ "maybe_report_ambiguity(predicate={:?}, obligation={:?} body_id={:?}, code={:?})",
+ predicate, obligation, body_id, obligation.cause.code,
+ );
+
+ // Ambiguity errors are often caused as fallout from earlier
+ // errors. So just ignore them if this infcx is tainted.
+ if self.is_tainted_by_errors() {
+ return;
+ }
+
+ let mut err = match predicate {
+ ty::Predicate::Trait(ref data, _) => {
+ let trait_ref = data.to_poly_trait_ref();
+ let self_ty = trait_ref.self_ty();
+ debug!("self_ty {:?} {:?} trait_ref {:?}", self_ty, self_ty.kind, trait_ref);
+
+ if predicate.references_error() {
+ return;
+ }
+ // Typically, this ambiguity should only happen if
+ // there are unresolved type inference variables
+ // (otherwise it would suggest a coherence
+ // failure). But given #21974 that is not necessarily
+ // the case -- we can have multiple where clauses that
+ // are only distinguished by a region, which results
+ // in an ambiguity even when all types are fully
+ // known, since we don't dispatch based on region
+ // relationships.
+
+ // This is kind of a hack: it frequently happens that some earlier
+ // error prevents types from being fully inferred, and then we get
+ // a bunch of uninteresting errors saying something like "<generic
+ // #0> doesn't implement Sized". It may even be true that we
+ // could just skip over all checks where the self-ty is an
+ // inference variable, but I was afraid that there might be an
+ // inference variable created, registered as an obligation, and
+ // then never forced by writeback, and hence by skipping here we'd
+ // be ignoring the fact that we don't KNOW the type works
+ // out. Though even that would probably be harmless, given that
+ // we're only talking about builtin traits, which are known to be
+ // inhabited. We used to check for `self.tcx.sess.has_errors()` to
+ // avoid inundating the user with unnecessary errors, but we now
+ // check upstream for type errors and don't add the obligations to
+ // begin with in those cases.
+ if self
+ .tcx
+ .lang_items()
+ .sized_trait()
+ .map_or(false, |sized_id| sized_id == trait_ref.def_id())
+ {
+ self.need_type_info_err(body_id, span, self_ty, ErrorCode::E0282).emit();
+ return;
+ }
+ let mut err = self.need_type_info_err(body_id, span, self_ty, ErrorCode::E0283);
+ err.note(&format!("cannot resolve `{}`", predicate));
+ if let ObligationCauseCode::ItemObligation(def_id) = obligation.cause.code {
+ self.suggest_fully_qualified_path(&mut err, def_id, span, trait_ref.def_id());
+ } else if let (
+ Ok(ref snippet),
+ ObligationCauseCode::BindingObligation(ref def_id, _),
+ ) =
+ (self.tcx.sess.source_map().span_to_snippet(span), &obligation.cause.code)
+ {
+ let generics = self.tcx.generics_of(*def_id);
+ if !generics.params.is_empty() && !snippet.ends_with('>') {
+ // FIXME: To avoid spurious suggestions in functions where type arguments
+ // were already supplied, we check the snippet to make sure it doesn't
+ // end with a turbofish. Ideally we would have access to a `PathSegment`
+ // instead. Otherwise we would produce the following output:
+ //
+ // error[E0283]: type annotations needed
+ // --> $DIR/issue-54954.rs:3:24
+ // |
+ // LL | const ARR_LEN: usize = Tt::const_val::<[i8; 123]>();
+ // | ^^^^^^^^^^^^^^^^^^^^^^^^^^
+ // | |
+ // | cannot infer type
+ // | help: consider specifying the type argument
+ // | in the function call:
+ // | `Tt::const_val::<[i8; 123]>::<T>`
+ // ...
+ // LL | const fn const_val<T: Sized>() -> usize {
+ // | --------- - required by this bound in `Tt::const_val`
+ // |
+ // = note: cannot resolve `_: Tt`
+
+ err.span_suggestion(
+ span,
+ &format!(
+ "consider specifying the type argument{} in the function call",
+ if generics.params.len() > 1 { "s" } else { "" },
+ ),
+ format!(
+ "{}::<{}>",
+ snippet,
+ generics
+ .params
+ .iter()
+ .map(|p| p.name.to_string())
+ .collect::<Vec<String>>()
+ .join(", ")
+ ),
+ Applicability::HasPlaceholders,
+ );
+ }
+ }
+ err
+ }
+
+ ty::Predicate::WellFormed(ty) => {
+ // Same hacky approach as above to avoid deluging the user
+ // with error messages.
+ if ty.references_error() || self.tcx.sess.has_errors() {
+ return;
+ }
+ self.need_type_info_err(body_id, span, ty, ErrorCode::E0282)
+ }
+
+ ty::Predicate::Subtype(ref data) => {
+ if data.references_error() || self.tcx.sess.has_errors() {
+ // no need to overwhelm the user in such cases
+ return;
+ }
+ let &SubtypePredicate { a_is_expected: _, a, b } = data.skip_binder();
+ // both must be type variables, or the other would've been instantiated
+ assert!(a.is_ty_var() && b.is_ty_var());
+ self.need_type_info_err(body_id, span, a, ErrorCode::E0282)
+ }
+ ty::Predicate::Projection(ref data) => {
+ let trait_ref = data.to_poly_trait_ref(self.tcx);
+ let self_ty = trait_ref.self_ty();
+ if predicate.references_error() {
+ return;
+ }
+ let mut err = self.need_type_info_err(body_id, span, self_ty, ErrorCode::E0284);
+ err.note(&format!("cannot resolve `{}`", predicate));
+ err
+ }
+
+ _ => {
+ if self.tcx.sess.has_errors() {
+ return;
+ }
+ let mut err = struct_span_err!(
+ self.tcx.sess,
+ span,
+ E0284,
+ "type annotations needed: cannot resolve `{}`",
+ predicate,
+ );
+ err.span_label(span, &format!("cannot resolve `{}`", predicate));
+ err
+ }
+ };
+ self.note_obligation_cause(&mut err, obligation);
+ err.emit();
+ }
+
+ /// Returns `true` if the trait predicate may apply for *some* assignment
+ /// to the type parameters.
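+ ///
+ /// For example (a hedged illustration), `Vec<T>: Clone` can apply
+ /// (with `T = u32`, say), whereas `Vec<T>: Copy` cannot apply for
+ /// any choice of `T`.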
+ fn predicate_can_apply(
+ &self,
+ param_env: ty::ParamEnv<'tcx>,
+ pred: ty::PolyTraitRef<'tcx>,
+ ) -> bool {
+ struct ParamToVarFolder<'a, 'tcx> {
+ infcx: &'a InferCtxt<'a, 'tcx>,
+ var_map: FxHashMap<Ty<'tcx>, Ty<'tcx>>,
+ }
+
+ impl<'a, 'tcx> TypeFolder<'tcx> for ParamToVarFolder<'a, 'tcx> {
+ fn tcx<'b>(&'b self) -> TyCtxt<'tcx> {
+ self.infcx.tcx
+ }
+
+ fn fold_ty(&mut self, ty: Ty<'tcx>) -> Ty<'tcx> {
+ if let ty::Param(ty::ParamTy { name, .. }) = ty.kind {
+ let infcx = self.infcx;
+ self.var_map.entry(ty).or_insert_with(|| {
+ infcx.next_ty_var(TypeVariableOrigin {
+ kind: TypeVariableOriginKind::TypeParameterDefinition(name, None),
+ span: DUMMY_SP,
+ })
+ })
+ } else {
+ ty.super_fold_with(self)
+ }
+ }
+ }
+
+ self.probe(|_| {
+ let mut selcx = SelectionContext::new(self);
+
+ let cleaned_pred =
+ pred.fold_with(&mut ParamToVarFolder { infcx: self, var_map: Default::default() });
+
+ let cleaned_pred = super::project::normalize(
+ &mut selcx,
+ param_env,
+ ObligationCause::dummy(),
+ &cleaned_pred,
+ )
+ .value;
+
+ let obligation = Obligation::new(
+ ObligationCause::dummy(),
+ param_env,
+ cleaned_pred.without_const().to_predicate(),
+ );
+
+ self.predicate_may_hold(&obligation)
+ })
+ }
+
+ fn note_obligation_cause(
+ &self,
+ err: &mut DiagnosticBuilder<'_>,
+ obligation: &PredicateObligation<'tcx>,
+ ) {
+ // First, attempt to add note to this error with an async-await-specific
+ // message, and fall back to regular note otherwise.
+ if !self.maybe_note_obligation_cause_for_async_await(err, obligation) {
+ self.note_obligation_cause_code(
+ err,
+ &obligation.predicate,
+ &obligation.cause.code,
+ &mut vec![],
+ );
+ self.suggest_unsized_bound_if_applicable(err, obligation);
+ }
+ }
+
+ fn suggest_unsized_bound_if_applicable(
+ &self,
+ err: &mut DiagnosticBuilder<'_>,
+ obligation: &PredicateObligation<'tcx>,
+ ) {
+ if let (
+ ty::Predicate::Trait(pred, _),
+ ObligationCauseCode::BindingObligation(item_def_id, span),
+ ) = (&obligation.predicate, &obligation.cause.code)
+ {
+ if let (Some(generics), true) = (
+ self.tcx.hir().get_if_local(*item_def_id).as_ref().and_then(|n| n.generics()),
+ Some(pred.def_id()) == self.tcx.lang_items().sized_trait(),
+ ) {
+ for param in generics.params {
+ if param.span == *span
+ && !param.bounds.iter().any(|bound| {
+ bound.trait_def_id() == self.tcx.lang_items().sized_trait()
+ })
+ {
+ let (span, separator) = match param.bounds {
+ [] => (span.shrink_to_hi(), ":"),
+ [.., bound] => (bound.span().shrink_to_hi(), " + "),
+ };
+ err.span_suggestion(
+ span,
+ "consider relaxing the implicit `Sized` restriction",
+ format!("{} ?Sized", separator),
+ Applicability::MachineApplicable,
+ );
+ return;
+ }
+ }
+ }
+ }
+ }
+
+ fn is_recursive_obligation(
+ &self,
+ obligated_types: &mut Vec<&ty::TyS<'tcx>>,
+ cause_code: &ObligationCauseCode<'tcx>,
+ ) -> bool {
+ if let ObligationCauseCode::BuiltinDerivedObligation(ref data) = cause_code {
+ let parent_trait_ref = self.resolve_vars_if_possible(&data.parent_trait_ref);
+
+ if obligated_types.iter().any(|ot| ot == &parent_trait_ref.skip_binder().self_ty()) {
+ return true;
+ }
+ }
+ false
+ }
+}
+
+pub fn recursive_type_with_infinite_size_error(
+ tcx: TyCtxt<'tcx>,
+ type_def_id: DefId,
+) -> DiagnosticBuilder<'tcx> {
+ assert!(type_def_id.is_local());
+ let span = tcx.hir().span_if_local(type_def_id).unwrap();
+ let span = tcx.sess.source_map().def_span(span);
+ let mut err = struct_span_err!(
+ tcx.sess,
+ span,
+ E0072,
+ "recursive type `{}` has infinite size",
+ tcx.def_path_str(type_def_id)
+ );
+ err.span_label(span, "recursive type has infinite size");
+ err.help(&format!(
+ "insert indirection (e.g., a `Box`, `Rc`, or `&`) \
+ at some point to make `{}` representable",
+ tcx.def_path_str(type_def_id)
+ ));
+ err
+}
+
+/// Summarizes information about an argument, for use in error reporting.
+#[derive(Clone)]
+pub enum ArgKind {
+ /// An argument of non-tuple type. Parameters are (name, ty)
+ Arg(String, String),
+
+ /// An argument of tuple type. For a "found" argument, the span is
+ /// the location in the source of the pattern. For an "expected"
+ /// argument, it will be `None`. The vector is a list of (name, ty)
+ /// strings for the components of the tuple.
+ Tuple(Option<Span>, Vec<(String, String)>),
+}
+
+impl ArgKind {
+ fn empty() -> ArgKind {
+ ArgKind::Arg("_".to_owned(), "_".to_owned())
+ }
+
+ /// Creates an `ArgKind` from the expected type of an
+ /// argument. It has no name (`_`) and an optional source span.
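+ ///
+ /// For example, given `(i32, bool)` this produces
+ /// `ArgKind::Tuple(span, [("_", "i32"), ("_", "bool")])`, while a
+ /// non-tuple type such as `i32` produces `ArgKind::Arg("_", "i32")`.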
+ pub fn from_expected_ty(t: Ty<'_>, span: Option<Span>) -> ArgKind {
+ match t.kind {
+ ty::Tuple(ref tys) => ArgKind::Tuple(
+ span,
+ tys.iter().map(|ty| ("_".to_owned(), ty.to_string())).collect::<Vec<_>>(),
+ ),
+ _ => ArgKind::Arg("_".to_owned(), t.to_string()),
+ }
+ }
+}
+
+/// Suggest restricting a type param with a new bound.
+pub fn suggest_constraining_type_param(
+ tcx: TyCtxt<'_>,
+ generics: &hir::Generics<'_>,
+ err: &mut DiagnosticBuilder<'_>,
+ param_name: &str,
+ constraint: &str,
+ source_map: &SourceMap,
+ span: Span,
+ def_id: Option<DefId>,
+) -> bool {
+ const MSG_RESTRICT_BOUND_FURTHER: &str = "consider further restricting this bound with";
+ const MSG_RESTRICT_TYPE: &str = "consider restricting this type parameter with";
+ const MSG_RESTRICT_TYPE_FURTHER: &str = "consider further restricting this type parameter with";
+
+ let param = generics.params.iter().find(|p| p.name.ident().as_str() == param_name);
+
+ let param = if let Some(param) = param {
+ param
+ } else {
+ return false;
+ };
+
+ if def_id == tcx.lang_items().sized_trait() {
+ // Type parameters are already `Sized` by default.
+ err.span_label(param.span, &format!("this type parameter needs to be `{}`", constraint));
+ return true;
+ }
+
+ if param_name.starts_with("impl ") {
+ // If there's an `impl Trait` used in argument position, suggest
+ // restricting it:
+ //
+ // fn foo(t: impl Foo) { ... }
+ // --------
+ // |
+ // help: consider further restricting this bound with `+ Bar`
+ //
+ // Suggestion for tools in this case is:
+ //
+ // fn foo(t: impl Foo) { ... }
+ // --------
+ // |
+ // replace with: `impl Foo + Bar`
+
+ err.span_help(param.span, &format!("{} `+ {}`", MSG_RESTRICT_BOUND_FURTHER, constraint));
+
+ err.tool_only_span_suggestion(
+ param.span,
+ MSG_RESTRICT_BOUND_FURTHER,
+ format!("{} + {}", param_name, constraint),
+ Applicability::MachineApplicable,
+ );
+
+ return true;
+ }
+
+ if generics.where_clause.predicates.is_empty() {
+ if let Some(bounds_span) = param.bounds_span() {
+ // If user has provided some bounds, suggest restricting them:
+ //
+ // fn foo<T: Foo>(t: T) { ... }
+ // ---
+ // |
+ // help: consider further restricting this bound with `+ Bar`
+ //
+ // Suggestion for tools in this case is:
+ //
+ // fn foo<T: Foo>(t: T) { ... }
+ // --
+ // |
+ // replace with: `T: Bar +`
+
+ err.span_help(
+ bounds_span,
+ &format!("{} `+ {}`", MSG_RESTRICT_BOUND_FURTHER, constraint),
+ );
+
+ let span_hi = param.span.with_hi(span.hi());
+ let span_with_colon = source_map.span_through_char(span_hi, ':');
+
+ if span_hi != param.span && span_with_colon != span_hi {
+ err.tool_only_span_suggestion(
+ span_with_colon,
+ MSG_RESTRICT_BOUND_FURTHER,
+ format!("{}: {} + ", param_name, constraint),
+ Applicability::MachineApplicable,
+ );
+ }
+ } else {
+ // If user hasn't provided any bounds, suggest adding a new one:
+ //
+ // fn foo<T>(t: T) { ... }
+ // - help: consider restricting this type parameter with `T: Foo`
+
+ err.span_help(
+ param.span,
+ &format!("{} `{}: {}`", MSG_RESTRICT_TYPE, param_name, constraint),
+ );
+
+ err.tool_only_span_suggestion(
+ param.span,
+ MSG_RESTRICT_TYPE,
+ format!("{}: {}", param_name, constraint),
+ Applicability::MachineApplicable,
+ );
+ }
+
+ true
+ } else {
+ // This part is a bit tricky, because using the `where` clause the user
+ // can provide zero, one, or many bounds for the same type parameter,
+ // so we have the following cases to consider:
+ //
+ // 1) When the type parameter has been provided zero bounds
+ //
+ // Message:
+ // fn foo<X, Y>(x: X, y: Y) where Y: Foo { ... }
+ // - help: consider restricting this type parameter with `where X: Bar`
+ //
+ // Suggestion:
+ // fn foo<X, Y>(x: X, y: Y) where Y: Foo { ... }
+ // - insert: `, X: Bar`
+ //
+ //
+ // 2) When the type parameter has been provided one bound
+ //
+ // Message:
+ // fn foo<T>(t: T) where T: Foo { ... }
+ // ^^^^^^
+ // |
+ // help: consider further restricting this bound with `+ Bar`
+ //
+ // Suggestion:
+ // fn foo<T>(t: T) where T: Foo { ... }
+ // ^^
+ // |
+ // replace with: `T: Bar +`
+ //
+ //
+ // 3) When the type parameter has been provided many bounds
+ //
+ // Message:
+ // fn foo<T>(t: T) where T: Foo, T: Bar {... }
+ // - help: consider further restricting this type parameter with `where T: Zar`
+ //
+ // Suggestion:
+ // fn foo<T>(t: T) where T: Foo, T: Bar {... }
+ // - insert: `, T: Zar`
+
+ let mut param_spans = Vec::new();
+
+ for predicate in generics.where_clause.predicates {
+ if let WherePredicate::BoundPredicate(WhereBoundPredicate {
+ span, bounded_ty, ..
+ }) = predicate
+ {
+ if let TyKind::Path(QPath::Resolved(_, path)) = &bounded_ty.kind {
+ if let Some(segment) = path.segments.first() {
+ if segment.ident.to_string() == param_name {
+ param_spans.push(span);
+ }
+ }
+ }
+ }
+ }
+
+ let where_clause_span =
+ generics.where_clause.span_for_predicates_or_empty_place().shrink_to_hi();
+
+ match ¶m_spans[..] {
+ &[] => {
+ err.span_help(
+ param.span,
+ &format!("{} `where {}: {}`", MSG_RESTRICT_TYPE, param_name, constraint),
+ );
+
+ err.tool_only_span_suggestion(
+ where_clause_span,
+ MSG_RESTRICT_TYPE,
+ format!(", {}: {}", param_name, constraint),
+ Applicability::MachineApplicable,
+ );
+ }
+
+ &[¶m_span] => {
+ err.span_help(
+ param_span,
+ &format!("{} `+ {}`", MSG_RESTRICT_BOUND_FURTHER, constraint),
+ );
+
+ let span_hi = param_span.with_hi(span.hi());
+ let span_with_colon = source_map.span_through_char(span_hi, ':');
+
+ if span_hi != param_span && span_with_colon != span_hi {
+ err.tool_only_span_suggestion(
+ span_with_colon,
+ MSG_RESTRICT_BOUND_FURTHER,
+ format!("{}: {} +", param_name, constraint),
+ Applicability::MachineApplicable,
+ );
+ }
+ }
+
+ _ => {
+ err.span_help(
+ param.span,
+ &format!(
+ "{} `where {}: {}`",
+ MSG_RESTRICT_TYPE_FURTHER, param_name, constraint,
+ ),
+ );
+
+ err.tool_only_span_suggestion(
+ where_clause_span,
+ MSG_RESTRICT_BOUND_FURTHER,
+ format!(", {}: {}", param_name, constraint),
+ Applicability::MachineApplicable,
+ );
+ }
+ }
+
+ true
+ }
+}
--- /dev/null
+use super::{
+ ObligationCauseCode, OnUnimplementedDirective, OnUnimplementedNote, PredicateObligation,
+};
+use crate::infer::InferCtxt;
+use rustc::ty::subst::Subst;
+use rustc::ty::{self, GenericParamDefKind};
+use rustc_hir as hir;
+use rustc_hir::def_id::DefId;
+use rustc_span::symbol::sym;
+
+use super::InferCtxtPrivExt;
+
+crate trait InferCtxtExt<'tcx> {
+ /*private*/
+ fn impl_similar_to(
+ &self,
+ trait_ref: ty::PolyTraitRef<'tcx>,
+ obligation: &PredicateObligation<'tcx>,
+ ) -> Option<DefId>;
+
+ /*private*/
+ fn describe_enclosure(&self, hir_id: hir::HirId) -> Option<&'static str>;
+
+ fn on_unimplemented_note(
+ &self,
+ trait_ref: ty::PolyTraitRef<'tcx>,
+ obligation: &PredicateObligation<'tcx>,
+ ) -> OnUnimplementedNote;
+}
+
+impl<'a, 'tcx> InferCtxtExt<'tcx> for InferCtxt<'a, 'tcx> {
+ fn impl_similar_to(
+ &self,
+ trait_ref: ty::PolyTraitRef<'tcx>,
+ obligation: &PredicateObligation<'tcx>,
+ ) -> Option<DefId> {
+ let tcx = self.tcx;
+ let param_env = obligation.param_env;
+ let trait_ref = tcx.erase_late_bound_regions(&trait_ref);
+ let trait_self_ty = trait_ref.self_ty();
+
+ let mut self_match_impls = vec![];
+ let mut fuzzy_match_impls = vec![];
+
+ self.tcx.for_each_relevant_impl(trait_ref.def_id, trait_self_ty, |def_id| {
+ let impl_substs = self.fresh_substs_for_item(obligation.cause.span, def_id);
+ let impl_trait_ref = tcx.impl_trait_ref(def_id).unwrap().subst(tcx, impl_substs);
+
+ let impl_self_ty = impl_trait_ref.self_ty();
+
+ if let Ok(..) = self.can_eq(param_env, trait_self_ty, impl_self_ty) {
+ self_match_impls.push(def_id);
+
+ if trait_ref
+ .substs
+ .types()
+ .skip(1)
+ .zip(impl_trait_ref.substs.types().skip(1))
+ .all(|(u, v)| self.fuzzy_match_tys(u, v))
+ {
+ fuzzy_match_impls.push(def_id);
+ }
+ }
+ });
+
+ let impl_def_id = if self_match_impls.len() == 1 {
+ self_match_impls[0]
+ } else if fuzzy_match_impls.len() == 1 {
+ fuzzy_match_impls[0]
+ } else {
+ return None;
+ };
+
+ tcx.has_attr(impl_def_id, sym::rustc_on_unimplemented).then_some(impl_def_id)
+ }
+
+ /// Used to set on_unimplemented's `ItemContext`
+ /// to be the enclosing (async) block/function/closure.
+ fn describe_enclosure(&self, hir_id: hir::HirId) -> Option<&'static str> {
+ let hir = &self.tcx.hir();
+ let node = hir.find(hir_id)?;
+ match &node {
+ hir::Node::Item(hir::Item { kind: hir::ItemKind::Fn(sig, _, body_id), .. }) => {
+ self.describe_generator(*body_id).or_else(|| {
+ Some(if let hir::FnHeader { asyncness: hir::IsAsync::Async, .. } = sig.header {
+ "an async function"
+ } else {
+ "a function"
+ })
+ })
+ }
+ hir::Node::TraitItem(hir::TraitItem {
+ kind: hir::TraitItemKind::Fn(_, hir::TraitFn::Provided(body_id)),
+ ..
+ }) => self.describe_generator(*body_id).or_else(|| Some("a trait method")),
+ hir::Node::ImplItem(hir::ImplItem {
+ kind: hir::ImplItemKind::Fn(sig, body_id),
+ ..
+ }) => self.describe_generator(*body_id).or_else(|| {
+ Some(if let hir::FnHeader { asyncness: hir::IsAsync::Async, .. } = sig.header {
+ "an async method"
+ } else {
+ "a method"
+ })
+ }),
+ hir::Node::Expr(hir::Expr {
+ kind: hir::ExprKind::Closure(_is_move, _, body_id, _, gen_movability),
+ ..
+ }) => self.describe_generator(*body_id).or_else(|| {
+ Some(if gen_movability.is_some() { "an async closure" } else { "a closure" })
+ }),
+ hir::Node::Expr(hir::Expr { .. }) => {
+ let parent_hid = hir.get_parent_node(hir_id);
+ if parent_hid != hir_id {
+ return self.describe_enclosure(parent_hid);
+ } else {
+ None
+ }
+ }
+ _ => None,
+ }
+ }
+
+ fn on_unimplemented_note(
+ &self,
+ trait_ref: ty::PolyTraitRef<'tcx>,
+ obligation: &PredicateObligation<'tcx>,
+ ) -> OnUnimplementedNote {
+ let def_id =
+ self.impl_similar_to(trait_ref, obligation).unwrap_or_else(|| trait_ref.def_id());
+ let trait_ref = *trait_ref.skip_binder();
+
+ let mut flags = vec![];
+ flags.push((
+ sym::item_context,
+ self.describe_enclosure(obligation.cause.body_id).map(|s| s.to_owned()),
+ ));
+
+ match obligation.cause.code {
+ ObligationCauseCode::BuiltinDerivedObligation(..)
+ | ObligationCauseCode::ImplDerivedObligation(..) => {}
+ _ => {
+ // this is a "direct", user-specified, rather than derived,
+ // obligation.
+ flags.push((sym::direct, None));
+ }
+ }
+
+ if let ObligationCauseCode::ItemObligation(item) = obligation.cause.code {
+ // FIXME: maybe also have some way of handling methods
+ // from other traits? That would require name resolution,
+ // which we might want to do in some sort of hygienic way.
+ //
+ // Currently I'm leaving it for what I need for `try`.
+ if self.tcx.trait_of_item(item) == Some(trait_ref.def_id) {
+ let method = self.tcx.item_name(item);
+ flags.push((sym::from_method, None));
+ flags.push((sym::from_method, Some(method.to_string())));
+ }
+ }
+ if let Some((t, _)) = self.get_parent_trait_ref(&obligation.cause.code) {
+ flags.push((sym::parent_trait, Some(t)));
+ }
+
+ if let Some(k) = obligation.cause.span.desugaring_kind() {
+ flags.push((sym::from_desugaring, None));
+ flags.push((sym::from_desugaring, Some(format!("{:?}", k))));
+ }
+ let generics = self.tcx.generics_of(def_id);
+ let self_ty = trait_ref.self_ty();
+ // This is also included through the generics list as `Self`,
+ // but the parser won't allow you to use it.
+ flags.push((sym::_Self, Some(self_ty.to_string())));
+ if let Some(def) = self_ty.ty_adt_def() {
+ // We also want to be able to select self's original
+ // signature with no type arguments resolved
+ flags.push((sym::_Self, Some(self.tcx.type_of(def.did).to_string())));
+ }
+
+ for param in generics.params.iter() {
+ let value = match param.kind {
+ GenericParamDefKind::Type { .. } | GenericParamDefKind::Const => {
+ trait_ref.substs[param.index as usize].to_string()
+ }
+ GenericParamDefKind::Lifetime => continue,
+ };
+ let name = param.name;
+ flags.push((name, Some(value)));
+ }
+
+ if let Some(true) = self_ty.ty_adt_def().map(|def| def.did.is_local()) {
+ flags.push((sym::crate_local, None));
+ }
+
+ // Allow targeting all integers using `{integral}`, even if the exact type was resolved
+ if self_ty.is_integral() {
+ flags.push((sym::_Self, Some("{integral}".to_owned())));
+ }
+
+ if let ty::Array(aty, len) = self_ty.kind {
+ flags.push((sym::_Self, Some("[]".to_owned())));
+ flags.push((sym::_Self, Some(format!("[{}]", aty))));
+ if let Some(def) = aty.ty_adt_def() {
+ // We also want to be able to select the array's type's original
+ // signature with no type arguments resolved
+ flags.push((
+ sym::_Self,
+ Some(format!("[{}]", self.tcx.type_of(def.did).to_string())),
+ ));
+ let tcx = self.tcx;
+ if let Some(len) = len.try_eval_usize(tcx, ty::ParamEnv::empty()) {
+ flags.push((
+ sym::_Self,
+ Some(format!("[{}; {}]", self.tcx.type_of(def.did).to_string(), len)),
+ ));
+ } else {
+ flags.push((
+ sym::_Self,
+ Some(format!("[{}; _]", self.tcx.type_of(def.did).to_string())),
+ ));
+ }
+ }
+ }
+ if let ty::Dynamic(traits, _) = self_ty.kind {
+ for t in *traits.skip_binder() {
+ match t {
+ ty::ExistentialPredicate::Trait(trait_ref) => {
+ flags.push((sym::_Self, Some(self.tcx.def_path_str(trait_ref.def_id))))
+ }
+ _ => {}
+ }
+ }
+ }
+
+ if let Ok(Some(command)) =
+ OnUnimplementedDirective::of_item(self.tcx, trait_ref.def_id, def_id)
+ {
+ command.evaluate(self.tcx, trait_ref, &flags[..])
+ } else {
+ OnUnimplementedNote::default()
+ }
+ }
+}
--- /dev/null
+use super::{
+ EvaluationResult, Obligation, ObligationCause, ObligationCauseCode, PredicateObligation,
+};
+
+use crate::infer::InferCtxt;
+use crate::traits::error_reporting::suggest_constraining_type_param;
+
+use rustc::ty::TypeckTables;
+use rustc::ty::{self, AdtKind, DefIdTree, ToPredicate, Ty, TyCtxt, TypeFoldable, WithConstness};
+use rustc_errors::{error_code, struct_span_err, Applicability, DiagnosticBuilder, Style};
+use rustc_hir as hir;
+use rustc_hir::def::DefKind;
+use rustc_hir::def_id::DefId;
+use rustc_hir::intravisit::Visitor;
+use rustc_hir::Node;
+use rustc_span::symbol::{kw, sym};
+use rustc_span::{MultiSpan, Span, DUMMY_SP};
+use std::fmt;
+
+use super::InferCtxtPrivExt;
+use crate::traits::query::evaluate_obligation::InferCtxtExt as _;
+
+crate trait InferCtxtExt<'tcx> {
+ fn suggest_restricting_param_bound(
+ &self,
+ err: &mut DiagnosticBuilder<'_>,
+ trait_ref: &ty::PolyTraitRef<'_>,
+ body_id: hir::HirId,
+ );
+
+ fn suggest_borrow_on_unsized_slice(
+ &self,
+ code: &ObligationCauseCode<'tcx>,
+ err: &mut DiagnosticBuilder<'tcx>,
+ );
+
+ fn get_closure_name(
+ &self,
+ def_id: DefId,
+ err: &mut DiagnosticBuilder<'_>,
+ msg: &str,
+ ) -> Option<String>;
+
+ fn suggest_fn_call(
+ &self,
+ obligation: &PredicateObligation<'tcx>,
+ err: &mut DiagnosticBuilder<'_>,
+ trait_ref: &ty::Binder<ty::TraitRef<'tcx>>,
+ points_at_arg: bool,
+ );
+
+ fn suggest_add_reference_to_arg(
+ &self,
+ obligation: &PredicateObligation<'tcx>,
+ err: &mut DiagnosticBuilder<'tcx>,
+ trait_ref: &ty::Binder<ty::TraitRef<'tcx>>,
+ points_at_arg: bool,
+ has_custom_message: bool,
+ ) -> bool;
+
+ fn suggest_remove_reference(
+ &self,
+ obligation: &PredicateObligation<'tcx>,
+ err: &mut DiagnosticBuilder<'tcx>,
+ trait_ref: &ty::Binder<ty::TraitRef<'tcx>>,
+ );
+
+ fn suggest_change_mut(
+ &self,
+ obligation: &PredicateObligation<'tcx>,
+ err: &mut DiagnosticBuilder<'tcx>,
+ trait_ref: &ty::Binder<ty::TraitRef<'tcx>>,
+ points_at_arg: bool,
+ );
+
+ fn suggest_semicolon_removal(
+ &self,
+ obligation: &PredicateObligation<'tcx>,
+ err: &mut DiagnosticBuilder<'tcx>,
+ span: Span,
+ trait_ref: &ty::Binder<ty::TraitRef<'tcx>>,
+ );
+
+ fn suggest_impl_trait(
+ &self,
+ err: &mut DiagnosticBuilder<'tcx>,
+ span: Span,
+ obligation: &PredicateObligation<'tcx>,
+ trait_ref: &ty::Binder<ty::TraitRef<'tcx>>,
+ ) -> bool;
+
+ fn point_at_returns_when_relevant(
+ &self,
+ err: &mut DiagnosticBuilder<'tcx>,
+ obligation: &PredicateObligation<'tcx>,
+ );
+
+ fn report_closure_arg_mismatch(
+ &self,
+ span: Span,
+ found_span: Option<Span>,
+ expected_ref: ty::PolyTraitRef<'tcx>,
+ found: ty::PolyTraitRef<'tcx>,
+ ) -> DiagnosticBuilder<'tcx>;
+
+ fn suggest_fully_qualified_path(
+ &self,
+ err: &mut DiagnosticBuilder<'_>,
+ def_id: DefId,
+ span: Span,
+ trait_ref: DefId,
+ );
+
+ fn maybe_note_obligation_cause_for_async_await(
+ &self,
+ err: &mut DiagnosticBuilder<'_>,
+ obligation: &PredicateObligation<'tcx>,
+ ) -> bool;
+
+ fn note_obligation_cause_for_async_await(
+ &self,
+ err: &mut DiagnosticBuilder<'_>,
+ target_span: Span,
+ scope_span: &Option<Span>,
+ expr: Option<hir::HirId>,
+ snippet: String,
+ first_generator: DefId,
+ last_generator: Option<DefId>,
+ trait_ref: ty::TraitRef<'_>,
+ target_ty: Ty<'tcx>,
+ tables: &ty::TypeckTables<'_>,
+ obligation: &PredicateObligation<'tcx>,
+ next_code: Option<&ObligationCauseCode<'tcx>>,
+ );
+
+ fn note_obligation_cause_code<T>(
+ &self,
+ err: &mut DiagnosticBuilder<'_>,
+ predicate: &T,
+ cause_code: &ObligationCauseCode<'tcx>,
+ obligated_types: &mut Vec<&ty::TyS<'tcx>>,
+ ) where
+ T: fmt::Display;
+
+ fn suggest_new_overflow_limit(&self, err: &mut DiagnosticBuilder<'_>);
+}
+
+impl<'a, 'tcx> InferCtxtExt<'tcx> for InferCtxt<'a, 'tcx> {
+ fn suggest_restricting_param_bound(
+ &self,
+ mut err: &mut DiagnosticBuilder<'_>,
+ trait_ref: &ty::PolyTraitRef<'_>,
+ body_id: hir::HirId,
+ ) {
+ let self_ty = trait_ref.self_ty();
+ let (param_ty, projection) = match &self_ty.kind {
+ ty::Param(_) => (true, None),
+ ty::Projection(projection) => (false, Some(projection)),
+ _ => return,
+ };
+
+ let suggest_restriction =
+ |generics: &hir::Generics<'_>, msg, err: &mut DiagnosticBuilder<'_>| {
+ let span = generics.where_clause.span_for_predicates_or_empty_place();
+ if !span.from_expansion() && span.desugaring_kind().is_none() {
+ err.span_suggestion(
+ generics.where_clause.span_for_predicates_or_empty_place().shrink_to_hi(),
+ &format!("consider further restricting {}", msg),
+ format!(
+ "{} {} ",
+ if !generics.where_clause.predicates.is_empty() {
+ ","
+ } else {
+ " where"
+ },
+ trait_ref.without_const().to_predicate(),
+ ),
+ Applicability::MachineApplicable,
+ );
+ }
+ };
+
+ // FIXME: Add check for trait bound that is already present, particularly `?Sized` so we
+ // don't suggest `T: Sized + ?Sized`.
+ let mut hir_id = body_id;
+ while let Some(node) = self.tcx.hir().find(hir_id) {
+ match node {
+ hir::Node::TraitItem(hir::TraitItem {
+ generics,
+ kind: hir::TraitItemKind::Fn(..),
+ ..
+ }) if param_ty && self_ty == self.tcx.types.self_param => {
+ // Restricting `Self` for a single method.
+ suggest_restriction(&generics, "`Self`", err);
+ return;
+ }
+
+ hir::Node::Item(hir::Item { kind: hir::ItemKind::Fn(_, generics, _), .. })
+ | hir::Node::TraitItem(hir::TraitItem {
+ generics,
+ kind: hir::TraitItemKind::Fn(..),
+ ..
+ })
+ | hir::Node::ImplItem(hir::ImplItem {
+ generics,
+ kind: hir::ImplItemKind::Fn(..),
+ ..
+ })
+ | hir::Node::Item(hir::Item {
+ kind: hir::ItemKind::Trait(_, _, generics, _, _),
+ ..
+ })
+ | hir::Node::Item(hir::Item {
+ kind: hir::ItemKind::Impl { generics, .. }, ..
+ }) if projection.is_some() => {
+ // Missing associated type bound.
+ suggest_restriction(&generics, "the associated type", err);
+ return;
+ }
+
+ hir::Node::Item(hir::Item {
+ kind: hir::ItemKind::Struct(_, generics),
+ span,
+ ..
+ })
+ | hir::Node::Item(hir::Item {
+ kind: hir::ItemKind::Enum(_, generics), span, ..
+ })
+ | hir::Node::Item(hir::Item {
+ kind: hir::ItemKind::Union(_, generics),
+ span,
+ ..
+ })
+ | hir::Node::Item(hir::Item {
+ kind: hir::ItemKind::Trait(_, _, generics, ..),
+ span,
+ ..
+ })
+ | hir::Node::Item(hir::Item {
+ kind: hir::ItemKind::Impl { generics, .. },
+ span,
+ ..
+ })
+ | hir::Node::Item(hir::Item {
+ kind: hir::ItemKind::Fn(_, generics, _),
+ span,
+ ..
+ })
+ | hir::Node::Item(hir::Item {
+ kind: hir::ItemKind::TyAlias(_, generics),
+ span,
+ ..
+ })
+ | hir::Node::Item(hir::Item {
+ kind: hir::ItemKind::TraitAlias(generics, _),
+ span,
+ ..
+ })
+ | hir::Node::Item(hir::Item {
+ kind: hir::ItemKind::OpaqueTy(hir::OpaqueTy { generics, .. }),
+ span,
+ ..
+ })
+ | hir::Node::TraitItem(hir::TraitItem { generics, span, .. })
+ | hir::Node::ImplItem(hir::ImplItem { generics, span, .. })
+ if param_ty =>
+ {
+ // Missing generic type parameter bound.
+ let param_name = self_ty.to_string();
+ let constraint = trait_ref.print_only_trait_path().to_string();
+ if suggest_constraining_type_param(
+ self.tcx,
+ generics,
+ &mut err,
+ &param_name,
+ &constraint,
+ self.tcx.sess.source_map(),
+ *span,
+ Some(trait_ref.def_id()),
+ ) {
+ return;
+ }
+ }
+
+ hir::Node::Crate(..) => return,
+
+ _ => {}
+ }
+
+ hir_id = self.tcx.hir().get_parent_item(hir_id);
+ }
+ }
+
+ /// When encountering an assignment of an unsized trait, like `let x = ""[..];`, provide a
+ /// suggestion to borrow the initializer in order to have a slice instead.
+ fn suggest_borrow_on_unsized_slice(
+ &self,
+ code: &ObligationCauseCode<'tcx>,
+ err: &mut DiagnosticBuilder<'tcx>,
+ ) {
+ if let &ObligationCauseCode::VariableType(hir_id) = code {
+ let parent_node = self.tcx.hir().get_parent_node(hir_id);
+ if let Some(Node::Local(ref local)) = self.tcx.hir().find(parent_node) {
+ if let Some(ref expr) = local.init {
+ if let hir::ExprKind::Index(_, _) = expr.kind {
+ if let Ok(snippet) = self.tcx.sess.source_map().span_to_snippet(expr.span) {
+ err.span_suggestion(
+ expr.span,
+ "consider borrowing here",
+ format!("&{}", snippet),
+ Applicability::MachineApplicable,
+ );
+ }
+ }
+ }
+ }
+ }
+ }
+
+ /// Given a closure's `DefId`, return the name the closure was bound to, if any.
+ ///
+ /// This doesn't account for reassignments, but it's only used for suggestions.
+ fn get_closure_name(
+ &self,
+ def_id: DefId,
+ err: &mut DiagnosticBuilder<'_>,
+ msg: &str,
+ ) -> Option<String> {
+ let get_name =
+ |err: &mut DiagnosticBuilder<'_>, kind: &hir::PatKind<'_>| -> Option<String> {
+ // Get the local name of this closure. This can be inaccurate because
+ // of the possibility of reassignment, but this should be good enough.
+ match &kind {
+ hir::PatKind::Binding(hir::BindingAnnotation::Unannotated, _, name, None) => {
+ Some(format!("{}", name))
+ }
+ _ => {
+ err.note(&msg);
+ None
+ }
+ }
+ };
+
+ let hir = self.tcx.hir();
+ let hir_id = hir.as_local_hir_id(def_id)?;
+ let parent_node = hir.get_parent_node(hir_id);
+ match hir.find(parent_node) {
+ Some(hir::Node::Stmt(hir::Stmt { kind: hir::StmtKind::Local(local), .. })) => {
+ get_name(err, &local.pat.kind)
+ }
+ // Different from the previous arm because one `local` is `&hir::Local` and
+ // the other is `P<hir::Local>`.
+ Some(hir::Node::Local(local)) => get_name(err, &local.pat.kind),
+ _ => return None,
+ }
+ }
+
+ /// We tried to apply the bound to an `fn` or closure. Check whether calling it would
+ /// evaluate to a type that *would* satisfy the trait binding. If it would, suggest calling
+ /// it: `bar(foo)` → `bar(foo())`. This case is *very* likely to be hit if `foo` is `async`.
+ fn suggest_fn_call(
+ &self,
+ obligation: &PredicateObligation<'tcx>,
+ err: &mut DiagnosticBuilder<'_>,
+ trait_ref: &ty::Binder<ty::TraitRef<'tcx>>,
+ points_at_arg: bool,
+ ) {
+ let self_ty = trait_ref.self_ty();
+ let (def_id, output_ty, callable) = match self_ty.kind {
+ ty::Closure(def_id, substs) => {
+ (def_id, self.closure_sig(def_id, substs).output(), "closure")
+ }
+ ty::FnDef(def_id, _) => (def_id, self_ty.fn_sig(self.tcx).output(), "function"),
+ _ => return,
+ };
+ let msg = format!("use parentheses to call the {}", callable);
+
+ let obligation = self.mk_obligation_for_def_id(
+ trait_ref.def_id(),
+ output_ty.skip_binder(),
+ obligation.cause.clone(),
+ obligation.param_env,
+ );
+
+ match self.evaluate_obligation(&obligation) {
+ Ok(EvaluationResult::EvaluatedToOk)
+ | Ok(EvaluationResult::EvaluatedToOkModuloRegions)
+ | Ok(EvaluationResult::EvaluatedToAmbig) => {}
+ _ => return,
+ }
+ let hir = self.tcx.hir();
+ // Get the name of the callable and the arguments to be used in the suggestion.
+ let snippet = match hir.get_if_local(def_id) {
+ Some(hir::Node::Expr(hir::Expr {
+ kind: hir::ExprKind::Closure(_, decl, _, span, ..),
+ ..
+ })) => {
+ err.span_label(*span, "consider calling this closure");
+ let name = match self.get_closure_name(def_id, err, &msg) {
+ Some(name) => name,
+ None => return,
+ };
+ let args = decl.inputs.iter().map(|_| "_").collect::<Vec<_>>().join(", ");
+ format!("{}({})", name, args)
+ }
+ Some(hir::Node::Item(hir::Item {
+ ident,
+ kind: hir::ItemKind::Fn(.., body_id),
+ ..
+ })) => {
+ err.span_label(ident.span, "consider calling this function");
+ let body = hir.body(*body_id);
+ let args = body
+ .params
+ .iter()
+ .map(|arg| match &arg.pat.kind {
+ hir::PatKind::Binding(_, _, ident, None)
+ // FIXME: provide a better suggestion when encountering `SelfLower`, it
+ // should suggest a method call.
+ if ident.name != kw::SelfLower => ident.to_string(),
+ _ => "_".to_string(),
+ })
+ .collect::<Vec<_>>()
+ .join(", ");
+ format!("{}({})", ident, args)
+ }
+ _ => return,
+ };
+ if points_at_arg {
+ // When the obligation error has been ensured to have been caused by
+ // an argument, the `obligation.cause.span` points at the expression
+ // of the argument, so we can provide a suggestion. This is signaled
+ // by `points_at_arg`. Otherwise, we give a more general note.
+ err.span_suggestion(
+ obligation.cause.span,
+ &msg,
+ snippet,
+ Applicability::HasPlaceholders,
+ );
+ } else {
+ err.help(&format!("{}: `{}`", msg, snippet));
+ }
+ }
+
+ fn suggest_add_reference_to_arg(
+ &self,
+ obligation: &PredicateObligation<'tcx>,
+ err: &mut DiagnosticBuilder<'tcx>,
+ trait_ref: &ty::Binder<ty::TraitRef<'tcx>>,
+ points_at_arg: bool,
+ has_custom_message: bool,
+ ) -> bool {
+ if !points_at_arg {
+ return false;
+ }
+
+ let span = obligation.cause.span;
+ let param_env = obligation.param_env;
+ let trait_ref = trait_ref.skip_binder();
+
+ if let ObligationCauseCode::ImplDerivedObligation(obligation) = &obligation.cause.code {
+ // Try to apply the original trait binding obligation by borrowing.
+ let self_ty = trait_ref.self_ty();
+ let found = self_ty.to_string();
+ let new_self_ty = self.tcx.mk_imm_ref(self.tcx.lifetimes.re_static, self_ty);
+ let substs = self.tcx.mk_substs_trait(new_self_ty, &[]);
+ let new_trait_ref = ty::TraitRef::new(obligation.parent_trait_ref.def_id(), substs);
+ let new_obligation = Obligation::new(
+ ObligationCause::dummy(),
+ param_env,
+ new_trait_ref.without_const().to_predicate(),
+ );
+ if self.predicate_must_hold_modulo_regions(&new_obligation) {
+ if let Ok(snippet) = self.tcx.sess.source_map().span_to_snippet(span) {
+ // We have a very specific type of error, where just borrowing this argument
+ // might solve the problem. In cases like this, the important part is the
+ // original type obligation, not the last one that failed, which is arbitrary.
+ // Because of this, we modify the error to refer to the original obligation and
+ // return early in the caller.
+ let msg = format!(
+ "the trait bound `{}: {}` is not satisfied",
+ found,
+ obligation.parent_trait_ref.skip_binder().print_only_trait_path(),
+ );
+ if has_custom_message {
+ err.note(&msg);
+ } else {
+ err.message = vec![(msg, Style::NoStyle)];
+ }
+ if snippet.starts_with('&') {
+ // This is already a literal borrow and the obligation is failing
+ // somewhere else in the obligation chain. Do not suggest nonsense.
+ return false;
+ }
+ err.span_label(
+ span,
+ &format!(
+ "expected an implementor of trait `{}`",
+ obligation.parent_trait_ref.skip_binder().print_only_trait_path(),
+ ),
+ );
+ err.span_suggestion(
+ span,
+ "consider borrowing here",
+ format!("&{}", snippet),
+ Applicability::MaybeIncorrect,
+ );
+ return true;
+ }
+ }
+ }
+ false
+ }
+
+ /// Whenever references are used by mistake, like `for (i, e) in &vec.iter().enumerate()`,
+ /// suggest removing these references until we reach a type that implements the trait.
+ fn suggest_remove_reference(
+ &self,
+ obligation: &PredicateObligation<'tcx>,
+ err: &mut DiagnosticBuilder<'tcx>,
+ trait_ref: &ty::Binder<ty::TraitRef<'tcx>>,
+ ) {
+ let trait_ref = trait_ref.skip_binder();
+ let span = obligation.cause.span;
+
+ if let Ok(snippet) = self.tcx.sess.source_map().span_to_snippet(span) {
+ let refs_number =
+ snippet.chars().filter(|c| !c.is_whitespace()).take_while(|c| *c == '&').count();
+ if let Some('\'') = snippet.chars().filter(|c| !c.is_whitespace()).nth(refs_number) {
+ // Do not suggest removal of borrow from type arguments.
+ return;
+ }
+
+ let mut trait_type = trait_ref.self_ty();
+
+ for refs_remaining in 0..refs_number {
+ if let ty::Ref(_, t_type, _) = trait_type.kind {
+ trait_type = t_type;
+
+ let new_obligation = self.mk_obligation_for_def_id(
+ trait_ref.def_id,
+ trait_type,
+ ObligationCause::dummy(),
+ obligation.param_env,
+ );
+
+ if self.predicate_may_hold(&new_obligation) {
+ let sp = self
+ .tcx
+ .sess
+ .source_map()
+ .span_take_while(span, |c| c.is_whitespace() || *c == '&');
+
+ let remove_refs = refs_remaining + 1;
+
+ let msg = if remove_refs == 1 {
+ "consider removing the leading `&`-reference".to_string()
+ } else {
+ format!("consider removing {} leading `&`-references", remove_refs)
+ };
+
+ err.span_suggestion_short(
+ sp,
+ &msg,
+ String::new(),
+ Applicability::MachineApplicable,
+ );
+ break;
+ }
+ } else {
+ break;
+ }
+ }
+ }
+ }
+
+ /// Check if the trait bound is implemented for a different mutability and note it in the
+ /// final error.
+ fn suggest_change_mut(
+ &self,
+ obligation: &PredicateObligation<'tcx>,
+ err: &mut DiagnosticBuilder<'tcx>,
+ trait_ref: &ty::Binder<ty::TraitRef<'tcx>>,
+ points_at_arg: bool,
+ ) {
+ let span = obligation.cause.span;
+ if let Ok(snippet) = self.tcx.sess.source_map().span_to_snippet(span) {
+ let refs_number =
+ snippet.chars().filter(|c| !c.is_whitespace()).take_while(|c| *c == '&').count();
+ if let Some('\'') = snippet.chars().filter(|c| !c.is_whitespace()).nth(refs_number) {
+ // Do not suggest removal of borrow from type arguments.
+ return;
+ }
+ let trait_ref = self.resolve_vars_if_possible(trait_ref);
+ if trait_ref.has_infer_types_or_consts() {
+ // Do not ICE while trying to find if a reborrow would succeed on a trait with
+ // unresolved bindings.
+ return;
+ }
+
+ if let ty::Ref(region, t_type, mutability) = trait_ref.skip_binder().self_ty().kind {
+ let trait_type = match mutability {
+ hir::Mutability::Mut => self.tcx.mk_imm_ref(region, t_type),
+ hir::Mutability::Not => self.tcx.mk_mut_ref(region, t_type),
+ };
+
+ let new_obligation = self.mk_obligation_for_def_id(
+ trait_ref.skip_binder().def_id,
+ trait_type,
+ ObligationCause::dummy(),
+ obligation.param_env,
+ );
+
+ if self.evaluate_obligation_no_overflow(&new_obligation).must_apply_modulo_regions()
+ {
+ let sp = self
+ .tcx
+ .sess
+ .source_map()
+ .span_take_while(span, |c| c.is_whitespace() || *c == '&');
+ if points_at_arg && mutability == hir::Mutability::Not && refs_number > 0 {
+ err.span_suggestion(
+ sp,
+ "consider changing this borrow's mutability",
+ "&mut ".to_string(),
+ Applicability::MachineApplicable,
+ );
+ } else {
+ err.note(&format!(
+ "`{}` is implemented for `{:?}`, but not for `{:?}`",
+ trait_ref.print_only_trait_path(),
+ trait_type,
+ trait_ref.skip_binder().self_ty(),
+ ));
+ }
+ }
+ }
+ }
+ }
+
+ fn suggest_semicolon_removal(
+ &self,
+ obligation: &PredicateObligation<'tcx>,
+ err: &mut DiagnosticBuilder<'tcx>,
+ span: Span,
+ trait_ref: &ty::Binder<ty::TraitRef<'tcx>>,
+ ) {
+ let hir = self.tcx.hir();
+ let parent_node = hir.get_parent_node(obligation.cause.body_id);
+ let node = hir.find(parent_node);
+ if let Some(hir::Node::Item(hir::Item {
+ kind: hir::ItemKind::Fn(sig, _, body_id), ..
+ })) = node
+ {
+ let body = hir.body(*body_id);
+ if let hir::ExprKind::Block(blk, _) = &body.value.kind {
+ if sig.decl.output.span().overlaps(span)
+ && blk.expr.is_none()
+ && "()" == &trait_ref.self_ty().to_string()
+ {
+ // FIXME(estebank): When encountering a method with a trait
+ // bound not satisfied in the return type with a body that has
+ // no return, suggest removal of semicolon on last statement.
+ // Once that is added, close #54771.
+ if let Some(ref stmt) = blk.stmts.last() {
+ let sp = self.tcx.sess.source_map().end_point(stmt.span);
+ err.span_label(sp, "consider removing this semicolon");
+ }
+ }
+ }
+ }
+ }
+
+ /// If all conditions are met to identify a returned `dyn Trait`, suggest using `impl Trait` if
+ /// applicable and signal that the error has been expanded appropriately and needs to be
+ /// emitted.
+ fn suggest_impl_trait(
+ &self,
+ err: &mut DiagnosticBuilder<'tcx>,
+ span: Span,
+ obligation: &PredicateObligation<'tcx>,
+ trait_ref: &ty::Binder<ty::TraitRef<'tcx>>,
+ ) -> bool {
+ match obligation.cause.code.peel_derives() {
+ // Only suggest `impl Trait` if the return type is unsized because it is `dyn Trait`.
+ ObligationCauseCode::SizedReturnType => {}
+ _ => return false,
+ }
+
+ let hir = self.tcx.hir();
+ let parent_node = hir.get_parent_node(obligation.cause.body_id);
+ let node = hir.find(parent_node);
+ let (sig, body_id) = if let Some(hir::Node::Item(hir::Item {
+ kind: hir::ItemKind::Fn(sig, _, body_id),
+ ..
+ })) = node
+ {
+ (sig, body_id)
+ } else {
+ return false;
+ };
+ let body = hir.body(*body_id);
+ let trait_ref = self.resolve_vars_if_possible(trait_ref);
+ let ty = trait_ref.skip_binder().self_ty();
+ let is_object_safe = match ty.kind {
+ ty::Dynamic(predicates, _) => {
+ // If the `dyn Trait` is not object safe, do not suggest `Box<dyn Trait>`.
+ predicates
+ .principal_def_id()
+ .map_or(true, |def_id| self.tcx.object_safety_violations(def_id).is_empty())
+ }
+ // We only want to suggest `impl Trait` for `dyn Trait`s.
+ // For example, `fn foo() -> str` needs to be filtered out.
+ _ => return false,
+ };
+
+ let ret_ty = if let hir::FnRetTy::Return(ret_ty) = sig.decl.output {
+ ret_ty
+ } else {
+ return false;
+ };
+
+ // Use `TypeVisitor` instead of the output type directly to find the span of `ty` for
+ // cases like `fn foo() -> (dyn Trait, i32) {}`.
+ // Recursively look for `TraitObject` types and if there's only one, use that span to
+ // suggest `impl Trait`.
+
+ // Visit to make sure there's a single `return` type to suggest `impl Trait`,
+ // otherwise suggest using `Box<dyn Trait>` or an enum.
+ let mut visitor = ReturnsVisitor::default();
+ visitor.visit_body(&body);
+
+ let tables = self.in_progress_tables.map(|t| t.borrow()).unwrap();
+
+ let mut ret_types = visitor
+ .returns
+ .iter()
+ .filter_map(|expr| tables.node_type_opt(expr.hir_id))
+ .map(|ty| self.resolve_vars_if_possible(&ty));
+ let (last_ty, all_returns_have_same_type) = ret_types.clone().fold(
+ (None, true),
+ |(last_ty, mut same): (std::option::Option<Ty<'_>>, bool), ty| {
+ let ty = self.resolve_vars_if_possible(&ty);
+ same &= last_ty.map_or(true, |last_ty| last_ty == ty) && ty.kind != ty::Error;
+ (Some(ty), same)
+ },
+ );
+ let all_returns_conform_to_trait =
+ if let Some(ty_ret_ty) = tables.node_type_opt(ret_ty.hir_id) {
+ match ty_ret_ty.kind {
+ ty::Dynamic(predicates, _) => {
+ let cause = ObligationCause::misc(ret_ty.span, ret_ty.hir_id);
+ let param_env = ty::ParamEnv::empty();
+ ret_types.all(|returned_ty| {
+ predicates.iter().all(|predicate| {
+ let pred = predicate.with_self_ty(self.tcx, returned_ty);
+ let obl = Obligation::new(cause.clone(), param_env, pred);
+ self.predicate_may_hold(&obl)
+ })
+ })
+ }
+ _ => false,
+ }
+ } else {
+ true
+ };
+
+ let (snippet, last_ty) =
+ if let (true, hir::TyKind::TraitObject(..), Ok(snippet), true, Some(last_ty)) = (
+ // Verify that we're dealing with a return `dyn Trait`
+ ret_ty.span.overlaps(span),
+ &ret_ty.kind,
+ self.tcx.sess.source_map().span_to_snippet(ret_ty.span),
+ // If any of the return types does not conform to the trait, then we can't
+ // suggest `impl Trait` nor trait objects, it is a type mismatch error.
+ all_returns_conform_to_trait,
+ last_ty,
+ ) {
+ (snippet, last_ty)
+ } else {
+ return false;
+ };
+ err.code(error_code!(E0746));
+ err.set_primary_message("return type cannot have an unboxed trait object");
+ err.children.clear();
+ let impl_trait_msg = "for information on `impl Trait`, see \
+ <https://doc.rust-lang.org/book/ch10-02-traits.html\
+ #returning-types-that-implement-traits>";
+ let trait_obj_msg = "for information on trait objects, see \
+ <https://doc.rust-lang.org/book/ch17-02-trait-objects.html\
+ #using-trait-objects-that-allow-for-values-of-different-types>";
+ let has_dyn = snippet.split_whitespace().next().map_or(false, |s| s == "dyn");
+ let trait_obj = if has_dyn { &snippet[4..] } else { &snippet[..] };
+ if all_returns_have_same_type {
+ // Suggest `-> impl Trait`.
+ err.span_suggestion(
+ ret_ty.span,
+ &format!(
+ "return `impl {1}` instead, as all return paths are of type `{}`, \
+ which implements `{1}`",
+ last_ty, trait_obj,
+ ),
+ format!("impl {}", trait_obj),
+ Applicability::MachineApplicable,
+ );
+ err.note(impl_trait_msg);
+ } else {
+ if is_object_safe {
+ // Suggest `-> Box<dyn Trait>` and `Box::new(returned_value)`.
+ // Get all the return values and collect their span and suggestion.
+ let mut suggestions = visitor
+ .returns
+ .iter()
+ .map(|expr| {
+ (
+ expr.span,
+ format!(
+ "Box::new({})",
+ self.tcx.sess.source_map().span_to_snippet(expr.span).unwrap()
+ ),
+ )
+ })
+ .collect::<Vec<_>>();
+ // Add the suggestion for the return type.
+ suggestions.push((ret_ty.span, format!("Box<dyn {}>", trait_obj)));
+ err.multipart_suggestion(
+ "return a boxed trait object instead",
+ suggestions,
+ Applicability::MaybeIncorrect,
+ );
+ } else {
+ // This is currently not possible to trigger because E0038 takes precedence, but
+ // leave it in for completeness in case anything changes in an earlier stage.
+ err.note(&format!(
+ "if trait `{}` was object safe, you could return a trait object",
+ trait_obj,
+ ));
+ }
+ err.note(trait_obj_msg);
+ err.note(&format!(
+ "if all the returned values were of the same type you could use \
+ `impl {}` as the return type",
+ trait_obj,
+ ));
+ err.note(impl_trait_msg);
+ err.note("you can create a new `enum` with a variant for each returned type");
+ }
+ true
+ }
+
+ fn point_at_returns_when_relevant(
+ &self,
+ err: &mut DiagnosticBuilder<'tcx>,
+ obligation: &PredicateObligation<'tcx>,
+ ) {
+ match obligation.cause.code.peel_derives() {
+ ObligationCauseCode::SizedReturnType => {}
+ _ => return,
+ }
+
+ let hir = self.tcx.hir();
+ let parent_node = hir.get_parent_node(obligation.cause.body_id);
+ let node = hir.find(parent_node);
+ if let Some(hir::Node::Item(hir::Item { kind: hir::ItemKind::Fn(_, _, body_id), .. })) =
+ node
+ {
+ let body = hir.body(*body_id);
+ // Point at all the `return`s in the function as they have failed trait bounds.
+ let mut visitor = ReturnsVisitor::default();
+ visitor.visit_body(&body);
+ let tables = self.in_progress_tables.map(|t| t.borrow()).unwrap();
+ for expr in &visitor.returns {
+ if let Some(returned_ty) = tables.node_type_opt(expr.hir_id) {
+ let ty = self.resolve_vars_if_possible(&returned_ty);
+ err.span_label(expr.span, &format!("this returned value is of type `{}`", ty));
+ }
+ }
+ }
+ }
+
+ fn report_closure_arg_mismatch(
+ &self,
+ span: Span,
+ found_span: Option<Span>,
+ expected_ref: ty::PolyTraitRef<'tcx>,
+ found: ty::PolyTraitRef<'tcx>,
+ ) -> DiagnosticBuilder<'tcx> {
+ crate fn build_fn_sig_string<'tcx>(
+ tcx: TyCtxt<'tcx>,
+ trait_ref: &ty::TraitRef<'tcx>,
+ ) -> String {
+ let inputs = trait_ref.substs.type_at(1);
+ let sig = if let ty::Tuple(inputs) = inputs.kind {
+ tcx.mk_fn_sig(
+ inputs.iter().map(|k| k.expect_ty()),
+ tcx.mk_ty_infer(ty::TyVar(ty::TyVid { index: 0 })),
+ false,
+ hir::Unsafety::Normal,
+ ::rustc_target::spec::abi::Abi::Rust,
+ )
+ } else {
+ tcx.mk_fn_sig(
+ ::std::iter::once(inputs),
+ tcx.mk_ty_infer(ty::TyVar(ty::TyVid { index: 0 })),
+ false,
+ hir::Unsafety::Normal,
+ ::rustc_target::spec::abi::Abi::Rust,
+ )
+ };
+ ty::Binder::bind(sig).to_string()
+ }
+
+ let argument_is_closure = expected_ref.skip_binder().substs.type_at(0).is_closure();
+ let mut err = struct_span_err!(
+ self.tcx.sess,
+ span,
+ E0631,
+ "type mismatch in {} arguments",
+ if argument_is_closure { "closure" } else { "function" }
+ );
+
+ let found_str = format!(
+ "expected signature of `{}`",
+ build_fn_sig_string(self.tcx, found.skip_binder())
+ );
+ err.span_label(span, found_str);
+
+ let found_span = found_span.unwrap_or(span);
+ let expected_str = format!(
+ "found signature of `{}`",
+ build_fn_sig_string(self.tcx, expected_ref.skip_binder())
+ );
+ err.span_label(found_span, expected_str);
+
+ err
+ }
+
+ fn suggest_fully_qualified_path(
+ &self,
+ err: &mut DiagnosticBuilder<'_>,
+ def_id: DefId,
+ span: Span,
+ trait_ref: DefId,
+ ) {
+ if let Some(assoc_item) = self.tcx.opt_associated_item(def_id) {
+ if let ty::AssocKind::Const | ty::AssocKind::Type = assoc_item.kind {
+ err.note(&format!(
+ "{}s cannot be accessed directly on a `trait`, they can only be \
+ accessed through a specific `impl`",
+ assoc_item.kind.suggestion_descr(),
+ ));
+ err.span_suggestion(
+ span,
+ "use the fully qualified path to an implementation",
+ format!("<Type as {}>::{}", self.tcx.def_path_str(trait_ref), assoc_item.ident),
+ Applicability::HasPlaceholders,
+ );
+ }
+ }
+ }
+
+ /// Adds an async-await specific note to the diagnostic when the future does not implement
+ /// an auto trait because of a captured type.
+ ///
+ /// ```ignore (diagnostic)
+ /// note: future does not implement `Qux` as this value is used across an await
+ /// --> $DIR/issue-64130-3-other.rs:17:5
+ /// |
+ /// LL | let x = Foo;
+ /// | - has type `Foo`
+ /// LL | baz().await;
+ /// | ^^^^^^^^^^^ await occurs here, with `x` maybe used later
+ /// LL | }
+ /// | - `x` is later dropped here
+ /// ```
+ ///
+ /// When the future does not implement `Send` or `Sync` specifically, the diagnostic
+ /// is "replaced" with a different message and a more specific error.
+ ///
+ /// ```ignore (diagnostic)
+ /// error: future cannot be sent between threads safely
+ /// --> $DIR/issue-64130-2-send.rs:21:5
+ /// |
+ /// LL | fn is_send<T: Send>(t: T) { }
+ /// | ------- ---- required by this bound in `is_send`
+ /// ...
+ /// LL | is_send(bar());
+ /// | ^^^^^^^ future returned by `bar` is not send
+ /// |
+ /// = help: within `impl std::future::Future`, the trait `std::marker::Send` is not
+ /// implemented for `Foo`
+ /// note: future is not send as this value is used across an await
+ /// --> $DIR/issue-64130-2-send.rs:15:5
+ /// |
+ /// LL | let x = Foo;
+ /// | - has type `Foo`
+ /// LL | baz().await;
+ /// | ^^^^^^^^^^^ await occurs here, with `x` maybe used later
+ /// LL | }
+ /// | - `x` is later dropped here
+ /// ```
+ ///
+ /// Returns `true` if an async-await specific note was added to the diagnostic.
+ fn maybe_note_obligation_cause_for_async_await(
+ &self,
+ err: &mut DiagnosticBuilder<'_>,
+ obligation: &PredicateObligation<'tcx>,
+ ) -> bool {
+ debug!(
+ "maybe_note_obligation_cause_for_async_await: obligation.predicate={:?} \
+ obligation.cause.span={:?}",
+ obligation.predicate, obligation.cause.span
+ );
+ let source_map = self.tcx.sess.source_map();
+
+ // Attempt to detect an async-await error by looking at the obligation causes, looking
+ // for a generator to be present.
+ //
+ // When a future does not implement a trait because of a captured type in one of the
+ // generators somewhere in the call stack, then the result is a chain of obligations.
+ //
+ // Given an `async fn` A that calls an `async fn` B which captures a non-`Send` type, and that
+ // future is passed as an argument to a function C which requires a `Send` type, then the
+ // chain looks something like this:
+ //
+ // - `BuiltinDerivedObligation` with a generator witness (B)
+ // - `BuiltinDerivedObligation` with a generator (B)
+ // - `BuiltinDerivedObligation` with `std::future::GenFuture` (B)
+ // - `BuiltinDerivedObligation` with `impl std::future::Future` (B)
+ // - `BuiltinDerivedObligation` with `impl std::future::Future` (B)
+ // - `BuiltinDerivedObligation` with a generator witness (A)
+ // - `BuiltinDerivedObligation` with a generator (A)
+ // - `BuiltinDerivedObligation` with `std::future::GenFuture` (A)
+ // - `BuiltinDerivedObligation` with `impl std::future::Future` (A)
+ // - `BuiltinDerivedObligation` with `impl std::future::Future` (A)
+ // - `BindingObligation` with `impl_send` (`Send` requirement)
+ //
+ // The first obligation in the chain is the most useful and has the generator that captured
+ // the type. The last generator has information about where the bound was introduced. At
+ // least one generator should be present for this diagnostic to be modified.
+ let (mut trait_ref, mut target_ty) = match obligation.predicate {
+ ty::Predicate::Trait(p, _) => {
+ (Some(p.skip_binder().trait_ref), Some(p.skip_binder().self_ty()))
+ }
+ _ => (None, None),
+ };
+ let mut generator = None;
+ let mut last_generator = None;
+ let mut next_code = Some(&obligation.cause.code);
+ while let Some(code) = next_code {
+ debug!("maybe_note_obligation_cause_for_async_await: code={:?}", code);
+ match code {
+ ObligationCauseCode::BuiltinDerivedObligation(derived_obligation)
+ | ObligationCauseCode::ImplDerivedObligation(derived_obligation) => {
+ let ty = derived_obligation.parent_trait_ref.self_ty();
+ debug!(
+ "maybe_note_obligation_cause_for_async_await: \
+ parent_trait_ref={:?} self_ty.kind={:?}",
+ derived_obligation.parent_trait_ref, ty.kind
+ );
+
+ match ty.kind {
+ ty::Generator(did, ..) => {
+ generator = generator.or(Some(did));
+ last_generator = Some(did);
+ }
+ ty::GeneratorWitness(..) => {}
+ _ if generator.is_none() => {
+ trait_ref = Some(*derived_obligation.parent_trait_ref.skip_binder());
+ target_ty = Some(ty);
+ }
+ _ => {}
+ }
+
+ next_code = Some(derived_obligation.parent_code.as_ref());
+ }
+ _ => break,
+ }
+ }
+
+ // Only continue if a generator was found.
+ debug!(
+ "maybe_note_obligation_cause_for_async_await: generator={:?} trait_ref={:?} \
+ target_ty={:?}",
+ generator, trait_ref, target_ty
+ );
+ let (generator_did, trait_ref, target_ty) = match (generator, trait_ref, target_ty) {
+ (Some(generator_did), Some(trait_ref), Some(target_ty)) => {
+ (generator_did, trait_ref, target_ty)
+ }
+ _ => return false,
+ };
+
+ let span = self.tcx.def_span(generator_did);
+
+ // Do not ICE on closure typeck (#66868).
+ if self.tcx.hir().as_local_hir_id(generator_did).is_none() {
+ return false;
+ }
+
+ // Get the tables from the infcx if the generator is the function we are
+ // currently type-checking; otherwise, get them by performing a query.
+ // This is needed to avoid cycles.
+ let in_progress_tables = self.in_progress_tables.map(|t| t.borrow());
+ let generator_did_root = self.tcx.closure_base_def_id(generator_did);
+ debug!(
+ "maybe_note_obligation_cause_for_async_await: generator_did={:?} \
+ generator_did_root={:?} in_progress_tables.local_id_root={:?} span={:?}",
+ generator_did,
+ generator_did_root,
+ in_progress_tables.as_ref().map(|t| t.local_id_root),
+ span
+ );
+ let query_tables;
+ let tables: &TypeckTables<'tcx> = match &in_progress_tables {
+ Some(t) if t.local_id_root == Some(generator_did_root) => t,
+ _ => {
+ query_tables = self.tcx.typeck_tables_of(generator_did);
+ &query_tables
+ }
+ };
+
+ // Look for a type inside the generator interior that matches the target type to get
+ // a span.
+ let target_ty_erased = self.tcx.erase_regions(&target_ty);
+ let target_span = tables
+ .generator_interior_types
+ .iter()
+ .find(|ty::GeneratorInteriorTypeCause { ty, .. }| {
+ // Careful: the regions for types that appear in the
+ // generator interior are not generally known, so we
+ // want to erase them when comparing (and anyway,
+ // `Send` and other bounds are generally unaffected by
+ // the choice of region). When erasing regions, we
+ // also have to erase late-bound regions. This is
+ // because the types that appear in the generator
+ // interior generally contain "bound regions" to
+ // represent regions that are part of the suspended
+ // generator frame. Bound regions are preserved by
+ // `erase_regions` and so we must also call
+ // `erase_late_bound_regions`.
+ let ty_erased = self.tcx.erase_late_bound_regions(&ty::Binder::bind(*ty));
+ let ty_erased = self.tcx.erase_regions(&ty_erased);
+ let eq = ty::TyS::same_type(ty_erased, target_ty_erased);
+ debug!(
+ "maybe_note_obligation_cause_for_async_await: ty_erased={:?} \
+ target_ty_erased={:?} eq={:?}",
+ ty_erased, target_ty_erased, eq
+ );
+ eq
+ })
+ .map(|ty::GeneratorInteriorTypeCause { span, scope_span, expr, .. }| {
+ (span, source_map.span_to_snippet(*span), scope_span, expr)
+ });
+
+ debug!(
+ "maybe_note_obligation_cause_for_async_await: target_ty={:?} \
+ generator_interior_types={:?} target_span={:?}",
+ target_ty, tables.generator_interior_types, target_span
+ );
+ if let Some((target_span, Ok(snippet), scope_span, expr)) = target_span {
+ self.note_obligation_cause_for_async_await(
+ err,
+ *target_span,
+ scope_span,
+ *expr,
+ snippet,
+ generator_did,
+ last_generator,
+ trait_ref,
+ target_ty,
+ tables,
+ obligation,
+ next_code,
+ );
+ true
+ } else {
+ false
+ }
+ }
+
+ /// Unconditionally adds the diagnostic note described in
+ /// `maybe_note_obligation_cause_for_async_await`'s documentation comment.
+ fn note_obligation_cause_for_async_await(
+ &self,
+ err: &mut DiagnosticBuilder<'_>,
+ target_span: Span,
+ scope_span: &Option<Span>,
+ expr: Option<hir::HirId>,
+ snippet: String,
+ first_generator: DefId,
+ last_generator: Option<DefId>,
+ trait_ref: ty::TraitRef<'_>,
+ target_ty: Ty<'tcx>,
+ tables: &ty::TypeckTables<'_>,
+ obligation: &PredicateObligation<'tcx>,
+ next_code: Option<&ObligationCauseCode<'tcx>>,
+ ) {
+ let source_map = self.tcx.sess.source_map();
+
+ let is_async_fn = self
+ .tcx
+ .parent(first_generator)
+ .map(|parent_did| self.tcx.asyncness(parent_did))
+ .map(|parent_asyncness| parent_asyncness == hir::IsAsync::Async)
+ .unwrap_or(false);
+ let is_async_move = self
+ .tcx
+ .hir()
+ .as_local_hir_id(first_generator)
+ .and_then(|hir_id| self.tcx.hir().maybe_body_owned_by(hir_id))
+ .map(|body_id| self.tcx.hir().body(body_id))
+ .and_then(|body| body.generator_kind())
+ .map(|generator_kind| match generator_kind {
+ hir::GeneratorKind::Async(..) => true,
+ _ => false,
+ })
+ .unwrap_or(false);
+ let await_or_yield = if is_async_fn || is_async_move { "await" } else { "yield" };
+
+ // Special case the primary error message when send or sync is the trait that was
+ // not implemented.
+ let is_send = self.tcx.is_diagnostic_item(sym::send_trait, trait_ref.def_id);
+ let is_sync = self.tcx.is_diagnostic_item(sym::sync_trait, trait_ref.def_id);
+ let hir = self.tcx.hir();
+ let trait_explanation = if is_send || is_sync {
+ let (trait_name, trait_verb) =
+ if is_send { ("`Send`", "sent") } else { ("`Sync`", "shared") };
+
+ err.clear_code();
+ err.set_primary_message(format!(
+ "future cannot be {} between threads safely",
+ trait_verb
+ ));
+
+ let original_span = err.span.primary_span().unwrap();
+ let mut span = MultiSpan::from_span(original_span);
+
+ let message = if let Some(name) = last_generator
+ .and_then(|generator_did| self.tcx.parent(generator_did))
+ .and_then(|parent_did| hir.as_local_hir_id(parent_did))
+ .and_then(|parent_hir_id| hir.opt_name(parent_hir_id))
+ {
+ format!("future returned by `{}` is not {}", name, trait_name)
+ } else {
+ format!("future is not {}", trait_name)
+ };
+
+ span.push_span_label(original_span, message);
+ err.set_span(span);
+
+ format!("is not {}", trait_name)
+ } else {
+ format!("does not implement `{}`", trait_ref.print_only_trait_path())
+ };
+
+ // Look at the last interior type to get a span for the `.await`.
+ let await_span = tables.generator_interior_types.iter().map(|t| t.span).last().unwrap();
+ let mut span = MultiSpan::from_span(await_span);
+ span.push_span_label(
+ await_span,
+ format!("{} occurs here, with `{}` maybe used later", await_or_yield, snippet),
+ );
+
+ span.push_span_label(target_span, format!("has type `{}`", target_ty));
+
+ // If available, use the scope span to annotate the drop location.
+ if let Some(scope_span) = scope_span {
+ span.push_span_label(
+ source_map.end_point(*scope_span),
+ format!("`{}` is later dropped here", snippet),
+ );
+ }
+
+ err.span_note(
+ span,
+ &format!(
+ "future {} as this value is used across an {}",
+ trait_explanation, await_or_yield,
+ ),
+ );
+
+ if let Some(expr_id) = expr {
+ let expr = hir.expect_expr(expr_id);
+ debug!("target_ty evaluated from {:?}", expr);
+
+ let parent = hir.get_parent_node(expr_id);
+ if let Some(hir::Node::Expr(e)) = hir.find(parent) {
+ let parent_span = hir.span(parent);
+ let parent_did = parent.owner.to_def_id();
+ // ```rust
+ // impl T {
+ // fn foo(&self) -> i32 {}
+ // }
+ // T.foo();
+ // ^^^^^^^ a temporary `&T` created inside this method call due to `&self`
+ // ```
+ //
+ let is_region_borrow =
+ tables.expr_adjustments(expr).iter().any(|adj| adj.is_region_borrow());
+
+ // ```rust
+ // struct Foo(*const u8);
+ // bar(Foo(std::ptr::null())).await;
+ // ^^^^^^^^^^^^^^^^^^^^^ raw-ptr `*T` created inside this struct ctor.
+ // ```
+ debug!("parent_def_kind: {:?}", self.tcx.def_kind(parent_did));
+ let is_raw_borrow_inside_fn_like_call = match self.tcx.def_kind(parent_did) {
+ Some(DefKind::Fn) | Some(DefKind::Ctor(..)) => target_ty.is_unsafe_ptr(),
+ _ => false,
+ };
+
+ if (tables.is_method_call(e) && is_region_borrow)
+ || is_raw_borrow_inside_fn_like_call
+ {
+ err.span_help(
+ parent_span,
+ "consider moving this into a `let` \
+ binding to create a shorter lived borrow",
+ );
+ }
+ }
+ }
+
+ // Add a note for the item obligation that remains - normally a note pointing to the
+ // bound that introduced the obligation (e.g. `T: Send`).
+ debug!("note_obligation_cause_for_async_await: next_code={:?}", next_code);
+ self.note_obligation_cause_code(
+ err,
+ &obligation.predicate,
+ next_code.unwrap(),
+ &mut Vec::new(),
+ );
+ }
+
+ fn note_obligation_cause_code<T>(
+ &self,
+ err: &mut DiagnosticBuilder<'_>,
+ predicate: &T,
+ cause_code: &ObligationCauseCode<'tcx>,
+ obligated_types: &mut Vec<&ty::TyS<'tcx>>,
+ ) where
+ T: fmt::Display,
+ {
+ let tcx = self.tcx;
+ match *cause_code {
+ ObligationCauseCode::ExprAssignable
+ | ObligationCauseCode::MatchExpressionArm { .. }
+ | ObligationCauseCode::Pattern { .. }
+ | ObligationCauseCode::IfExpression { .. }
+ | ObligationCauseCode::IfExpressionWithNoElse
+ | ObligationCauseCode::MainFunctionType
+ | ObligationCauseCode::StartFunctionType
+ | ObligationCauseCode::IntrinsicType
+ | ObligationCauseCode::MethodReceiver
+ | ObligationCauseCode::ReturnNoExpression
+ | ObligationCauseCode::MiscObligation => {}
+ ObligationCauseCode::SliceOrArrayElem => {
+ err.note("slice and array elements must have `Sized` type");
+ }
+ ObligationCauseCode::TupleElem => {
+ err.note("only the last element of a tuple may have a dynamically sized type");
+ }
+ ObligationCauseCode::ProjectionWf(data) => {
+ err.note(&format!("required so that the projection `{}` is well-formed", data,));
+ }
+ ObligationCauseCode::ReferenceOutlivesReferent(ref_ty) => {
+ err.note(&format!(
+ "required so that reference `{}` does not outlive its referent",
+ ref_ty,
+ ));
+ }
+ ObligationCauseCode::ObjectTypeBound(object_ty, region) => {
+ err.note(&format!(
+ "required so that the lifetime bound of `{}` for `{}` is satisfied",
+ region, object_ty,
+ ));
+ }
+ ObligationCauseCode::ItemObligation(item_def_id) => {
+ let item_name = tcx.def_path_str(item_def_id);
+ let msg = format!("required by `{}`", item_name);
+
+ if let Some(sp) = tcx.hir().span_if_local(item_def_id) {
+ let sp = tcx.sess.source_map().def_span(sp);
+ err.span_label(sp, &msg);
+ } else {
+ err.note(&msg);
+ }
+ }
+ ObligationCauseCode::BindingObligation(item_def_id, span) => {
+ let item_name = tcx.def_path_str(item_def_id);
+ let msg = format!("required by this bound in `{}`", item_name);
+ if let Some(ident) = tcx.opt_item_name(item_def_id) {
+ err.span_label(ident.span, "");
+ }
+ if span != DUMMY_SP {
+ err.span_label(span, &msg);
+ } else {
+ err.note(&msg);
+ }
+ }
+ ObligationCauseCode::ObjectCastObligation(object_ty) => {
+ err.note(&format!(
+ "required for the cast to the object type `{}`",
+ self.ty_to_string(object_ty)
+ ));
+ }
+ ObligationCauseCode::Coercion { source: _, target } => {
+ err.note(&format!("required by cast to type `{}`", self.ty_to_string(target)));
+ }
+ ObligationCauseCode::RepeatVec(suggest_const_in_array_repeat_expressions) => {
+ err.note(
+ "the `Copy` trait is required because the repeated element will be copied",
+ );
+ if suggest_const_in_array_repeat_expressions {
+ err.note(
+ "this array initializer can be evaluated at compile-time, see issue \
+ #49147 <https://github.com/rust-lang/rust/issues/49147> \
+ for more information",
+ );
+ if tcx.sess.opts.unstable_features.is_nightly_build() {
+ err.help(
+ "add `#![feature(const_in_array_repeat_expressions)]` to the \
+ crate attributes to enable",
+ );
+ }
+ }
+ }
+ ObligationCauseCode::VariableType(_) => {
+ err.note("all local variables must have a statically known size");
+ if !self.tcx.features().unsized_locals {
+ err.help("unsized locals are gated as an unstable feature");
+ }
+ }
+ ObligationCauseCode::SizedArgumentType => {
+ err.note("all function arguments must have a statically known size");
+ if !self.tcx.features().unsized_locals {
+ err.help("unsized locals are gated as an unstable feature");
+ }
+ }
+ ObligationCauseCode::SizedReturnType => {
+ err.note("the return type of a function must have a statically known size");
+ }
+ ObligationCauseCode::SizedYieldType => {
+ err.note("the yield type of a generator must have a statically known size");
+ }
+ ObligationCauseCode::AssignmentLhsSized => {
+ err.note("the left-hand-side of an assignment must have a statically known size");
+ }
+ ObligationCauseCode::TupleInitializerSized => {
+ err.note("tuples must have a statically known size to be initialized");
+ }
+ ObligationCauseCode::StructInitializerSized => {
+ err.note("structs must have a statically known size to be initialized");
+ }
+ ObligationCauseCode::FieldSized { adt_kind: ref item, last } => match *item {
+ AdtKind::Struct => {
+ if last {
+ err.note(
+ "the last field of a packed struct may only have a \
+ dynamically sized type if it does not need drop to be run",
+ );
+ } else {
+ err.note(
+ "only the last field of a struct may have a dynamically sized type",
+ );
+ }
+ }
+ AdtKind::Union => {
+ err.note("no field of a union may have a dynamically sized type");
+ }
+ AdtKind::Enum => {
+ err.note("no field of an enum variant may have a dynamically sized type");
+ }
+ },
+ ObligationCauseCode::ConstSized => {
+ err.note("constant expressions must have a statically known size");
+ }
+ ObligationCauseCode::ConstPatternStructural => {
+ err.note("constants used for pattern-matching must derive `PartialEq` and `Eq`");
+ }
+ ObligationCauseCode::SharedStatic => {
+ err.note("shared static variables must have a type that implements `Sync`");
+ }
+ ObligationCauseCode::BuiltinDerivedObligation(ref data) => {
+ let parent_trait_ref = self.resolve_vars_if_possible(&data.parent_trait_ref);
+ let ty = parent_trait_ref.skip_binder().self_ty();
+ err.note(&format!("required because it appears within the type `{}`", ty));
+ obligated_types.push(ty);
+
+ let parent_predicate = parent_trait_ref.without_const().to_predicate();
+ if !self.is_recursive_obligation(obligated_types, &data.parent_code) {
+ self.note_obligation_cause_code(
+ err,
+ &parent_predicate,
+ &data.parent_code,
+ obligated_types,
+ );
+ }
+ }
+ ObligationCauseCode::ImplDerivedObligation(ref data) => {
+ let parent_trait_ref = self.resolve_vars_if_possible(&data.parent_trait_ref);
+ err.note(&format!(
+ "required because of the requirements on the impl of `{}` for `{}`",
+ parent_trait_ref.print_only_trait_path(),
+ parent_trait_ref.skip_binder().self_ty()
+ ));
+ let parent_predicate = parent_trait_ref.without_const().to_predicate();
+ self.note_obligation_cause_code(
+ err,
+ &parent_predicate,
+ &data.parent_code,
+ obligated_types,
+ );
+ }
+ ObligationCauseCode::CompareImplMethodObligation { .. } => {
+ err.note(&format!(
+ "the requirement `{}` appears on the impl method \
+ but not on the corresponding trait method",
+ predicate
+ ));
+ }
+ ObligationCauseCode::CompareImplTypeObligation { .. } => {
+ err.note(&format!(
+ "the requirement `{}` appears on the associated impl type \
+ but not on the corresponding associated trait type",
+ predicate
+ ));
+ }
+ ObligationCauseCode::ReturnType
+ | ObligationCauseCode::ReturnValue(_)
+ | ObligationCauseCode::BlockTailExpression(_) => (),
+ ObligationCauseCode::TrivialBound => {
+ err.help("see issue #48214");
+ if tcx.sess.opts.unstable_features.is_nightly_build() {
+ err.help("add `#![feature(trivial_bounds)]` to the crate attributes to enable");
+ }
+ }
+ ObligationCauseCode::AssocTypeBound(ref data) => {
+ err.span_label(data.original, "associated type defined here");
+ if let Some(sp) = data.impl_span {
+ err.span_label(sp, "in this `impl` item");
+ }
+ for sp in &data.bounds {
+ err.span_label(*sp, "restricted in this bound");
+ }
+ }
+ }
+ }
+
+ fn suggest_new_overflow_limit(&self, err: &mut DiagnosticBuilder<'_>) {
+ let current_limit = self.tcx.sess.recursion_limit.get();
+ let suggested_limit = current_limit * 2;
+ err.help(&format!(
+ "consider adding a `#![recursion_limit=\"{}\"]` attribute to your crate (`{}`)",
+ suggested_limit, self.tcx.crate_name,
+ ));
+ }
+}
+
+/// Collect all the returned expressions within the input expression.
+/// Used to point at the return spans when we want to suggest some change to them.
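+///
+/// For example (illustrative), in
+///
+/// ```ignore
+/// fn f(x: bool) -> i32 {
+///     if x { return 1; }
+///     0
+/// }
+/// ```
+///
+/// the visitor collects both `1` (from the explicit `return`) and the tail
+/// expression `0`.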
+#[derive(Default)]
+struct ReturnsVisitor<'v> {
+ returns: Vec<&'v hir::Expr<'v>>,
+ in_block_tail: bool,
+}
+
+impl<'v> Visitor<'v> for ReturnsVisitor<'v> {
+ type Map = hir::intravisit::ErasedMap<'v>;
+
+ fn nested_visit_map(&mut self) -> hir::intravisit::NestedVisitorMap<Self::Map> {
+ hir::intravisit::NestedVisitorMap::None
+ }
+
+ fn visit_expr(&mut self, ex: &'v hir::Expr<'v>) {
+ // Visit every expression to detect `return` paths, either through the function's tail
+ // expression or `return` statements. We walk all nodes to find `return` statements, but
+ // we only care about tail expressions when `in_block_tail` is `true`, which means that
+ // they're in the return path of the function body.
+ match ex.kind {
+ hir::ExprKind::Ret(Some(ex)) => {
+ self.returns.push(ex);
+ }
+ hir::ExprKind::Block(block, _) if self.in_block_tail => {
+ self.in_block_tail = false;
+ for stmt in block.stmts {
+ hir::intravisit::walk_stmt(self, stmt);
+ }
+ self.in_block_tail = true;
+ if let Some(expr) = block.expr {
+ self.visit_expr(expr);
+ }
+ }
+ hir::ExprKind::Match(_, arms, _) if self.in_block_tail => {
+ for arm in arms {
+ self.visit_expr(arm.body);
+ }
+ }
+ // We need to walk to find `return`s in the entire body.
+ _ if !self.in_block_tail => hir::intravisit::walk_expr(self, ex),
+ _ => self.returns.push(ex),
+ }
+ }
+
+ fn visit_body(&mut self, body: &'v hir::Body<'v>) {
+ assert!(!self.in_block_tail);
+ if body.generator_kind().is_none() {
+ if let hir::ExprKind::Block(block, None) = body.value.kind {
+ if block.expr.is_some() {
+ self.in_block_tail = true;
+ }
+ }
+ }
+ hir::intravisit::walk_body(self, body);
+ }
+}
--- /dev/null
+use crate::infer::{InferCtxt, ShallowResolver};
+use rustc::ty::error::ExpectedFound;
+use rustc::ty::{self, ToPolyTraitRef, Ty, TypeFoldable};
+use rustc_data_structures::obligation_forest::ProcessResult;
+use rustc_data_structures::obligation_forest::{DoCompleted, Error, ForestObligation};
+use rustc_data_structures::obligation_forest::{ObligationForest, ObligationProcessor};
+use rustc_infer::traits::{TraitEngine, TraitEngineExt as _};
+use std::marker::PhantomData;
+
+use super::project;
+use super::select::SelectionContext;
+use super::wf;
+use super::CodeAmbiguity;
+use super::CodeProjectionError;
+use super::CodeSelectionError;
+use super::{ConstEvalFailure, Unimplemented};
+use super::{FulfillmentError, FulfillmentErrorCode};
+use super::{ObligationCause, PredicateObligation};
+
+use crate::traits::error_reporting::InferCtxtExt as _;
+use crate::traits::query::evaluate_obligation::InferCtxtExt as _;
+
+impl<'tcx> ForestObligation for PendingPredicateObligation<'tcx> {
+ /// Note that we include both the `ParamEnv` and the `Predicate`,
+ /// as the `ParamEnv` can influence whether fulfillment succeeds
+ /// or fails.
+ type CacheKey = ty::ParamEnvAnd<'tcx, ty::Predicate<'tcx>>;
+
+ fn as_cache_key(&self) -> Self::CacheKey {
+ self.obligation.param_env.and(self.obligation.predicate)
+ }
+}
+
+/// The fulfillment context is used to drive trait resolution. It
+/// consists of a list of obligations that must be (eventually)
+/// satisfied. The job is to track which are satisfied, which yielded
+/// errors, and which are still pending. At any point, users can call
+/// `select_where_possible`, and the fulfillment context will try to do
+/// selection, retaining only those obligations that remain
+/// ambiguous. This may be helpful in pushing type inference
+/// along. Once all type inference constraints have been generated, the
+/// method `select_all_or_error` can be used to report any remaining
+/// ambiguous cases as errors.
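+///
+/// A typical driver loop looks something like this (illustrative sketch; the
+/// actual call sites live in the type checker):
+///
+/// ```ignore
+/// let mut fulfill_cx = FulfillmentContext::new();
+/// fulfill_cx.register_predicate_obligation(&infcx, obligation);
+/// // Make what progress we can while inference is still producing constraints...
+/// fulfill_cx.select_where_possible(&infcx)?;
+/// // ...and once all constraints are in, report anything still ambiguous.
+/// fulfill_cx.select_all_or_error(&infcx)?;
+/// ```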
+pub struct FulfillmentContext<'tcx> {
+ // A list of all obligations that have been registered with this
+ // fulfillment context.
+ predicates: ObligationForest<PendingPredicateObligation<'tcx>>,
+ // Should this fulfillment context register type-lives-for-region
+ // obligations on its parent infcx? In some cases, region
+ // obligations are either already known to hold (normalization) or
+ // hopefully verified elsewhere (type-impls-bound), and therefore
+ // should not be checked.
+ //
+ // Note that if we are normalizing a type that we already
+ // know is well-formed, there should be no harm setting this
+ // to true - all the region variables should be determinable
+ // using the RFC 447 rules, which don't depend on
+ // type-lives-for-region constraints, and because the type
+ // is well-formed, the constraints should hold.
+ register_region_obligations: bool,
+ // Is it OK to register obligations into this infcx inside
+ // an infcx snapshot?
+ //
+ // The "primary fulfillment" in many cases in typeck lives
+ // outside of any snapshot, so any use of it inside a snapshot
+ // will lead to trouble and therefore is checked against, but
+ // other fulfillment contexts sometimes do live inside of
+ // a snapshot (they don't *straddle* a snapshot, so there
+ // is no trouble there).
+ usable_in_snapshot: bool,
+}
+
+#[derive(Clone, Debug)]
+pub struct PendingPredicateObligation<'tcx> {
+ pub obligation: PredicateObligation<'tcx>,
+ pub stalled_on: Vec<ty::InferTy>,
+}
+
+// `PendingPredicateObligation` is used a lot. Make sure it doesn't unintentionally get bigger.
+#[cfg(target_arch = "x86_64")]
+static_assert_size!(PendingPredicateObligation<'_>, 136);
+
+impl<'a, 'tcx> FulfillmentContext<'tcx> {
+ /// Creates a new fulfillment context.
+ pub fn new() -> FulfillmentContext<'tcx> {
+ FulfillmentContext {
+ predicates: ObligationForest::new(),
+ register_region_obligations: true,
+ usable_in_snapshot: false,
+ }
+ }
+
+ pub fn new_in_snapshot() -> FulfillmentContext<'tcx> {
+ FulfillmentContext {
+ predicates: ObligationForest::new(),
+ register_region_obligations: true,
+ usable_in_snapshot: true,
+ }
+ }
+
+ pub fn new_ignoring_regions() -> FulfillmentContext<'tcx> {
+ FulfillmentContext {
+ predicates: ObligationForest::new(),
+ register_region_obligations: false,
+ usable_in_snapshot: false,
+ }
+ }
+
+ /// Attempts to select obligations using `selcx`.
+ fn select(
+ &mut self,
+ selcx: &mut SelectionContext<'a, 'tcx>,
+ ) -> Result<(), Vec<FulfillmentError<'tcx>>> {
+ debug!("select(obligation-forest-size={})", self.predicates.len());
+
+ let mut errors = Vec::new();
+
+ loop {
+ debug!("select: starting another iteration");
+
+ // Process pending obligations.
+ let outcome = self.predicates.process_obligations(
+ &mut FulfillProcessor {
+ selcx,
+ register_region_obligations: self.register_region_obligations,
+ },
+ DoCompleted::No,
+ );
+ debug!("select: outcome={:#?}", outcome);
+
+ // FIXME: if we kept the original cache key, we could mark projection
+ // obligations as complete for the projection cache here.
+
+ errors.extend(outcome.errors.into_iter().map(|e| to_fulfillment_error(e)));
+
+ // If nothing new was added, no need to keep looping.
+ if outcome.stalled {
+ break;
+ }
+ }
+
+ debug!(
+ "select({} predicates remaining, {} errors) done",
+ self.predicates.len(),
+ errors.len()
+ );
+
+ if errors.is_empty() { Ok(()) } else { Err(errors) }
+ }
+}
+
+impl<'tcx> TraitEngine<'tcx> for FulfillmentContext<'tcx> {
+ /// "Normalize" a projection type `<SomeType as SomeTrait>::X` by
+ /// creating a fresh type variable `$0` as well as a projection
+ /// predicate `<SomeType as SomeTrait>::X == $0`. When the
+ /// inference engine runs, it will attempt to find an impl of
+ /// `SomeTrait` or a where-clause that lets us unify `$0` with
+ /// something concrete. If this fails, we'll unify `$0` with
+ /// `projection_ty` again.
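+///
+/// For example (illustrative): normalizing the projection in
+///
+/// ```ignore
+/// fn first(v: Vec<u8>) -> <Vec<u8> as IntoIterator>::Item { /* ... */ }
+/// ```
+///
+/// introduces a fresh `$0` and the predicate
+/// `<Vec<u8> as IntoIterator>::Item == $0`, which selection later resolves
+/// by unifying `$0` with `u8`.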
+ fn normalize_projection_type(
+ &mut self,
+ infcx: &InferCtxt<'_, 'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ projection_ty: ty::ProjectionTy<'tcx>,
+ cause: ObligationCause<'tcx>,
+ ) -> Ty<'tcx> {
+ debug!("normalize_projection_type(projection_ty={:?})", projection_ty);
+
+ debug_assert!(!projection_ty.has_escaping_bound_vars());
+
+ // FIXME(#20304) -- cache
+
+ let mut selcx = SelectionContext::new(infcx);
+ let mut obligations = vec![];
+ let normalized_ty = project::normalize_projection_type(
+ &mut selcx,
+ param_env,
+ projection_ty,
+ cause,
+ 0,
+ &mut obligations,
+ );
+ self.register_predicate_obligations(infcx, obligations);
+
+ debug!("normalize_projection_type: result={:?}", normalized_ty);
+
+ normalized_ty
+ }
+
+ fn register_predicate_obligation(
+ &mut self,
+ infcx: &InferCtxt<'_, 'tcx>,
+ obligation: PredicateObligation<'tcx>,
+ ) {
+ // this helps to reduce duplicate errors, as well as making
+ // debug output much nicer to read and so on.
+ let obligation = infcx.resolve_vars_if_possible(&obligation);
+
+ debug!("register_predicate_obligation(obligation={:?})", obligation);
+
+ assert!(!infcx.is_in_snapshot() || self.usable_in_snapshot);
+
+ self.predicates
+ .register_obligation(PendingPredicateObligation { obligation, stalled_on: vec![] });
+ }
+
+ fn select_all_or_error(
+ &mut self,
+ infcx: &InferCtxt<'_, 'tcx>,
+ ) -> Result<(), Vec<FulfillmentError<'tcx>>> {
+ self.select_where_possible(infcx)?;
+
+ let errors: Vec<_> = self
+ .predicates
+ .to_errors(CodeAmbiguity)
+ .into_iter()
+ .map(|e| to_fulfillment_error(e))
+ .collect();
+ if errors.is_empty() { Ok(()) } else { Err(errors) }
+ }
+
+ fn select_where_possible(
+ &mut self,
+ infcx: &InferCtxt<'_, 'tcx>,
+ ) -> Result<(), Vec<FulfillmentError<'tcx>>> {
+ let mut selcx = SelectionContext::new(infcx);
+ self.select(&mut selcx)
+ }
+
+ fn pending_obligations(&self) -> Vec<PredicateObligation<'tcx>> {
+ self.predicates.map_pending_obligations(|o| o.obligation.clone())
+ }
+}
+
+struct FulfillProcessor<'a, 'b, 'tcx> {
+ selcx: &'a mut SelectionContext<'b, 'tcx>,
+ register_region_obligations: bool,
+}
+
+fn mk_pending(os: Vec<PredicateObligation<'tcx>>) -> Vec<PendingPredicateObligation<'tcx>> {
+ os.into_iter()
+ .map(|o| PendingPredicateObligation { obligation: o, stalled_on: vec![] })
+ .collect()
+}
+
+impl<'a, 'b, 'tcx> ObligationProcessor for FulfillProcessor<'a, 'b, 'tcx> {
+ type Obligation = PendingPredicateObligation<'tcx>;
+ type Error = FulfillmentErrorCode<'tcx>;
+
+ /// Processes a predicate obligation and returns either:
+ /// - `Changed(v)` if the predicate is true, presuming that `v` are also true
+ /// - `Unchanged` if we don't have enough info to be sure
+ /// - `Error(e)` if the predicate does not hold
+ ///
+ /// This is always inlined, despite its size, because it has a single
+ /// callsite and it is called *very* frequently.
+ #[inline(always)]
+ fn process_obligation(
+ &mut self,
+ pending_obligation: &mut Self::Obligation,
+ ) -> ProcessResult<Self::Obligation, Self::Error> {
+ // If we were stalled on some unresolved variables, first check whether
+ // any of them have been resolved; if not, don't bother doing more work
+ // yet.
+ let change = match pending_obligation.stalled_on.len() {
+ // Match arms are in order of frequency, which matters because this
+ // code is so hot. 1 and 0 dominate; 2+ is fairly rare.
+ 1 => {
+ let infer = pending_obligation.stalled_on[0];
+ ShallowResolver::new(self.selcx.infcx()).shallow_resolve_changed(infer)
+ }
+ 0 => {
+ // Not stalled on any inference variables; freshly registered
+ // obligations land here and should always be processed.
+ true
+ }
+ _ => {
+ // This `for` loop was once a call to `all()`, but this lower-level
+ // form was a perf win. See #64545 for details.
+ (|| {
+ for &infer in &pending_obligation.stalled_on {
+ if ShallowResolver::new(self.selcx.infcx()).shallow_resolve_changed(infer) {
+ return true;
+ }
+ }
+ false
+ })()
+ }
+ };
+
+ if !change {
+ debug!(
+ "process_predicate: pending obligation {:?} still stalled on {:?}",
+ self.selcx.infcx().resolve_vars_if_possible(&pending_obligation.obligation),
+ pending_obligation.stalled_on
+ );
+ return ProcessResult::Unchanged;
+ }
+
+ // This part of the code is much colder.
+
+ pending_obligation.stalled_on.truncate(0);
+
+ let obligation = &mut pending_obligation.obligation;
+
+ if obligation.predicate.has_infer_types_or_consts() {
+ obligation.predicate =
+ self.selcx.infcx().resolve_vars_if_possible(&obligation.predicate);
+ }
+
+ debug!("process_obligation: obligation = {:?} cause = {:?}", obligation, obligation.cause);
+
+ fn infer_ty(ty: Ty<'tcx>) -> ty::InferTy {
+ match ty.kind {
+ ty::Infer(infer) => infer,
+ _ => panic!("expected an inference type, found {:?}", ty),
+ }
+ }
+
+ match obligation.predicate {
+ ty::Predicate::Trait(ref data, _) => {
+ let trait_obligation = obligation.with(data.clone());
+
+ if data.is_global() {
+ // no type variables present, can use evaluation for better caching.
+ // FIXME: consider caching errors too.
+ if self.selcx.infcx().predicate_must_hold_considering_regions(&obligation) {
+ debug!(
+ "selecting trait `{:?}` at depth {} evaluated to holds",
+ data, obligation.recursion_depth
+ );
+ return ProcessResult::Changed(vec![]);
+ }
+ }
+
+ match self.selcx.select(&trait_obligation) {
+ Ok(Some(vtable)) => {
+ debug!(
+ "selecting trait `{:?}` at depth {} yielded Ok(Some)",
+ data, obligation.recursion_depth
+ );
+ ProcessResult::Changed(mk_pending(vtable.nested_obligations()))
+ }
+ Ok(None) => {
+ debug!(
+ "selecting trait `{:?}` at depth {} yielded Ok(None)",
+ data, obligation.recursion_depth
+ );
+
+ // This is a bit subtle: for the most part, the
+ // only reason we can fail to make progress on
+ // trait selection is because we don't have enough
+ // information about the types in the trait.
+ pending_obligation.stalled_on =
+ trait_ref_type_vars(self.selcx, data.to_poly_trait_ref());
+
+ debug!(
+ "process_predicate: pending obligation {:?} now stalled on {:?}",
+ self.selcx.infcx().resolve_vars_if_possible(obligation),
+ pending_obligation.stalled_on
+ );
+
+ ProcessResult::Unchanged
+ }
+ Err(selection_err) => {
+ info!(
+ "selecting trait `{:?}` at depth {} yielded Err",
+ data, obligation.recursion_depth
+ );
+
+ ProcessResult::Error(CodeSelectionError(selection_err))
+ }
+ }
+ }
+
+ ty::Predicate::RegionOutlives(ref binder) => {
+ match self.selcx.infcx().region_outlives_predicate(&obligation.cause, binder) {
+ Ok(()) => ProcessResult::Changed(vec![]),
+ Err(_) => ProcessResult::Error(CodeSelectionError(Unimplemented)),
+ }
+ }
+
+ ty::Predicate::TypeOutlives(ref binder) => {
+ // Check if there are higher-ranked vars.
+ match binder.no_bound_vars() {
+ // If there are, inspect the underlying type further.
+ None => {
+ // Convert from `Binder<OutlivesPredicate<Ty, Region>>` to `Binder<Ty>`.
+ let binder = binder.map_bound_ref(|pred| pred.0);
+
+ // Check if the type has any bound vars.
+ match binder.no_bound_vars() {
+ // If so, this obligation is an error (for now). Eventually we should be
+ // able to support additional cases here, like `for<'a> &'a str: 'a`.
+ // NOTE: this is duplicate-implemented between here and fulfillment.
+ None => ProcessResult::Error(CodeSelectionError(Unimplemented)),
+ // Otherwise, we have something of the form
+ // `for<'a> T: 'a where 'a not in T`, which we can treat as
+ // `T: 'static`.
+ Some(t_a) => {
+ let r_static = self.selcx.tcx().lifetimes.re_static;
+ if self.register_region_obligations {
+ self.selcx.infcx().register_region_obligation_with_cause(
+ t_a,
+ r_static,
+ &obligation.cause,
+ );
+ }
+ ProcessResult::Changed(vec![])
+ }
+ }
+ }
+ // If there aren't, register the obligation.
+ Some(ty::OutlivesPredicate(t_a, r_b)) => {
+ if self.register_region_obligations {
+ self.selcx.infcx().register_region_obligation_with_cause(
+ t_a,
+ r_b,
+ &obligation.cause,
+ );
+ }
+ ProcessResult::Changed(vec![])
+ }
+ }
+ }
+
+ ty::Predicate::Projection(ref data) => {
+ let project_obligation = obligation.with(data.clone());
+ match project::poly_project_and_unify_type(self.selcx, &project_obligation) {
+ Ok(None) => {
+ let tcx = self.selcx.tcx();
+ pending_obligation.stalled_on =
+ trait_ref_type_vars(self.selcx, data.to_poly_trait_ref(tcx));
+ ProcessResult::Unchanged
+ }
+ Ok(Some(os)) => ProcessResult::Changed(mk_pending(os)),
+ Err(e) => ProcessResult::Error(CodeProjectionError(e)),
+ }
+ }
+
+ ty::Predicate::ObjectSafe(trait_def_id) => {
+ if !self.selcx.tcx().is_object_safe(trait_def_id) {
+ ProcessResult::Error(CodeSelectionError(Unimplemented))
+ } else {
+ ProcessResult::Changed(vec![])
+ }
+ }
+
+ ty::Predicate::ClosureKind(closure_def_id, closure_substs, kind) => {
+ match self.selcx.infcx().closure_kind(closure_def_id, closure_substs) {
+ Some(closure_kind) => {
+ if closure_kind.extends(kind) {
+ ProcessResult::Changed(vec![])
+ } else {
+ ProcessResult::Error(CodeSelectionError(Unimplemented))
+ }
+ }
+ None => ProcessResult::Unchanged,
+ }
+ }
+
+ ty::Predicate::WellFormed(ty) => {
+ match wf::obligations(
+ self.selcx.infcx(),
+ obligation.param_env,
+ obligation.cause.body_id,
+ ty,
+ obligation.cause.span,
+ ) {
+ None => {
+ pending_obligation.stalled_on = vec![infer_ty(ty)];
+ ProcessResult::Unchanged
+ }
+ Some(os) => ProcessResult::Changed(mk_pending(os)),
+ }
+ }
+
+ ty::Predicate::Subtype(ref subtype) => {
+ match self.selcx.infcx().subtype_predicate(
+ &obligation.cause,
+ obligation.param_env,
+ subtype,
+ ) {
+ None => {
+ // None means that both are unresolved.
+ pending_obligation.stalled_on = vec![
+ infer_ty(subtype.skip_binder().a),
+ infer_ty(subtype.skip_binder().b),
+ ];
+ ProcessResult::Unchanged
+ }
+ Some(Ok(ok)) => ProcessResult::Changed(mk_pending(ok.obligations)),
+ Some(Err(err)) => {
+ let expected_found = ExpectedFound::new(
+ subtype.skip_binder().a_is_expected,
+ subtype.skip_binder().a,
+ subtype.skip_binder().b,
+ );
+ ProcessResult::Error(FulfillmentErrorCode::CodeSubtypeError(
+ expected_found,
+ err,
+ ))
+ }
+ }
+ }
+
+ ty::Predicate::ConstEvaluatable(def_id, substs) => {
+ match self.selcx.infcx().const_eval_resolve(
+ obligation.param_env,
+ def_id,
+ substs,
+ None,
+ Some(obligation.cause.span),
+ ) {
+ Ok(_) => ProcessResult::Changed(vec![]),
+ Err(err) => ProcessResult::Error(CodeSelectionError(ConstEvalFailure(err))),
+ }
+ }
+ }
+ }
+
+ fn process_backedge<'c, I>(
+ &mut self,
+ cycle: I,
+ _marker: PhantomData<&'c PendingPredicateObligation<'tcx>>,
+ ) where
+ I: Clone + Iterator<Item = &'c PendingPredicateObligation<'tcx>>,
+ {
+ if self.selcx.coinductive_match(cycle.clone().map(|s| s.obligation.predicate)) {
+ debug!("process_child_obligations: coinductive match");
+ } else {
+ let cycle: Vec<_> = cycle.map(|c| c.obligation.clone()).collect();
+ self.selcx.infcx().report_overflow_error_cycle(&cycle);
+ }
+ }
+}
+
+/// Returns the set of type variables contained in a trait ref.
+fn trait_ref_type_vars<'a, 'tcx>(
+ selcx: &mut SelectionContext<'a, 'tcx>,
+ t: ty::PolyTraitRef<'tcx>,
+) -> Vec<ty::InferTy> {
+ t.skip_binder() // ok b/c this check doesn't care about regions
+ .input_types()
+ .map(|t| selcx.infcx().resolve_vars_if_possible(&t))
+ .filter(|t| t.has_infer_types())
+ .flat_map(|t| t.walk())
+ .filter_map(|t| match t.kind {
+ ty::Infer(infer) => Some(infer),
+ _ => None,
+ })
+ .collect()
+}
+
+fn to_fulfillment_error<'tcx>(
+ error: Error<PendingPredicateObligation<'tcx>, FulfillmentErrorCode<'tcx>>,
+) -> FulfillmentError<'tcx> {
+ let obligation = error.backtrace.into_iter().next().unwrap().obligation;
+ FulfillmentError::new(obligation, error.error)
+}
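The `process_backedge` logic above accepts cycles among obligations coinductively when they match (notably for auto traits). At the surface level, this is what lets a recursive type implement an auto trait like `Send` even though proving `List: Send` spawns the obligation `List: Send` again. A minimal illustration (the `List`/`total` names are hypothetical, not from the compiler source):

```rust
// Recursive type: proving `List: Send` requires `List: Send` again;
// the resulting obligation cycle is accepted coinductively for auto traits.
struct List {
    value: u32,
    next: Option<Box<List>>,
}

// Compiles only because the coinductive cycle above is accepted.
fn require_send<T: Send>(_: &T) {}

fn total(list: &List) -> u32 {
    list.value + list.next.as_deref().map_or(0, total)
}

fn main() {
    let list = List { value: 1, next: Some(Box::new(List { value: 2, next: None })) };
    require_send(&list);
    assert_eq!(total(&list), 3);
}
```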
--- /dev/null
+//! Miscellaneous type-system utilities that are too small to deserve their own modules.
+
+use crate::infer::InferCtxtExt as _;
+use crate::traits::{self, ObligationCause};
+
+use rustc::ty::{self, Ty, TyCtxt, TypeFoldable};
+use rustc_hir as hir;
+use rustc_infer::infer::TyCtxtInferExt;
+
+use crate::traits::error_reporting::InferCtxtExt;
+
+#[derive(Clone)]
+pub enum CopyImplementationError<'tcx> {
+ InfrigingFields(Vec<&'tcx ty::FieldDef>),
+ NotAnAdt,
+ HasDestructor,
+}
+
+pub fn can_type_implement_copy(
+ tcx: TyCtxt<'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ self_type: Ty<'tcx>,
+) -> Result<(), CopyImplementationError<'tcx>> {
+ // FIXME: (@jroesch) float this code up
+ tcx.infer_ctxt().enter(|infcx| {
+ let (adt, substs) = match self_type.kind {
+ // These types used to have a builtin impl.
+ // Now libcore provides that impl.
+ ty::Uint(_)
+ | ty::Int(_)
+ | ty::Bool
+ | ty::Float(_)
+ | ty::Char
+ | ty::RawPtr(..)
+ | ty::Never
+ | ty::Ref(_, _, hir::Mutability::Not) => return Ok(()),
+
+ ty::Adt(adt, substs) => (adt, substs),
+
+ _ => return Err(CopyImplementationError::NotAnAdt),
+ };
+
+ let mut infringing = Vec::new();
+ for variant in &adt.variants {
+ for field in &variant.fields {
+ let ty = field.ty(tcx, substs);
+ if ty.references_error() {
+ continue;
+ }
+ let span = tcx.def_span(field.did);
+ let cause = ObligationCause { span, ..ObligationCause::dummy() };
+ let ctx = traits::FulfillmentContext::new();
+ match traits::fully_normalize(&infcx, ctx, cause, param_env, &ty) {
+ Ok(ty) => {
+ if !infcx.type_is_copy_modulo_regions(param_env, ty, span) {
+ infringing.push(field);
+ }
+ }
+ Err(errors) => {
+ infcx.report_fulfillment_errors(&errors, None, false);
+ }
+ };
+ }
+ }
+ if !infringing.is_empty() {
+ return Err(CopyImplementationError::InfrigingFields(infringing));
+ }
+ if adt.has_dtor(tcx) {
+ return Err(CopyImplementationError::HasDestructor);
+ }
+
+ Ok(())
+ })
+}
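`can_type_implement_copy` above mirrors the language-level rule it enforces: every field must itself be `Copy` and the type must have no destructor. A surface-level sketch of what the check guarantees (the `Point`/`assert_copy` names are illustrative, not compiler APIs):

```rust
// A type may be `Copy` only if all of its fields are `Copy` and it has
// no `Drop` impl; `can_type_implement_copy` reports the infringing fields.
fn assert_copy<T: Copy>(t: T) -> T {
    t
}

#[derive(Clone, Copy)]
struct Point {
    x: i32,
    y: i32,
} // all fields are `Copy`, no destructor: the impl is allowed

fn main() {
    let p = Point { x: 1, y: 2 };
    let q = assert_copy(p);
    // `p` remains usable after the copy; move semantics do not apply.
    assert_eq!(p.x + q.y, 3);
}
```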
--- /dev/null
+//! Trait Resolution. See the [rustc dev guide] for more information on how this works.
+//!
+//! [rustc dev guide]: https://rustc-dev-guide.rust-lang.org/traits/resolution.html
+
+#[allow(dead_code)]
+pub mod auto_trait;
+pub mod codegen;
+mod coherence;
+mod engine;
+pub mod error_reporting;
+mod fulfill;
+pub mod misc;
+mod object_safety;
+mod on_unimplemented;
+mod project;
+pub mod query;
+mod select;
+mod specialize;
+mod structural_match;
+mod util;
+pub mod wf;
+
+use crate::infer::outlives::env::OutlivesEnvironment;
+use crate::infer::{InferCtxt, RegionckMode, TyCtxtInferExt};
+use crate::traits::error_reporting::InferCtxtExt as _;
+use crate::traits::query::evaluate_obligation::InferCtxtExt as _;
+use rustc::middle::region;
+use rustc::ty::fold::TypeFoldable;
+use rustc::ty::subst::{InternalSubsts, SubstsRef};
+use rustc::ty::{self, GenericParamDefKind, ToPredicate, Ty, TyCtxt, WithConstness};
+use rustc::util::common::ErrorReported;
+use rustc_hir as hir;
+use rustc_hir::def_id::DefId;
+use rustc_span::{Span, DUMMY_SP};
+
+use std::fmt::Debug;
+
+pub use self::FulfillmentErrorCode::*;
+pub use self::ObligationCauseCode::*;
+pub use self::SelectionError::*;
+pub use self::Vtable::*;
+
+pub use self::coherence::{add_placeholder_note, orphan_check, overlapping_impls};
+pub use self::coherence::{OrphanCheckErr, OverlapResult};
+pub use self::engine::TraitEngineExt;
+pub use self::fulfill::{FulfillmentContext, PendingPredicateObligation};
+pub use self::object_safety::astconv_object_safety_violations;
+pub use self::object_safety::is_vtable_safe_method;
+pub use self::object_safety::MethodViolationCode;
+pub use self::object_safety::ObjectSafetyViolation;
+pub use self::on_unimplemented::{OnUnimplementedDirective, OnUnimplementedNote};
+pub use self::project::{
+ normalize, normalize_projection_type, normalize_to, poly_project_and_unify_type,
+};
+pub use self::select::{EvaluationCache, SelectionCache, SelectionContext};
+pub use self::select::{EvaluationResult, IntercrateAmbiguityCause, OverflowError};
+pub use self::specialize::find_associated_item;
+pub use self::specialize::specialization_graph::FutureCompatOverlapError;
+pub use self::specialize::specialization_graph::FutureCompatOverlapErrorKind;
+pub use self::specialize::{specialization_graph, translate_substs, OverlapError};
+pub use self::structural_match::search_for_structural_match_violation;
+pub use self::structural_match::type_marked_structural;
+pub use self::structural_match::NonStructuralMatchTy;
+pub use self::util::{elaborate_predicates, elaborate_trait_ref, elaborate_trait_refs};
+pub use self::util::{expand_trait_aliases, TraitAliasExpander};
+pub use self::util::{
+ get_vtable_index_of_object_method, impl_is_default, impl_item_is_final,
+ predicate_for_trait_def, upcast_choices,
+};
+pub use self::util::{
+ supertrait_def_ids, supertraits, transitive_bounds, SupertraitDefIds, Supertraits,
+};
+
+pub use rustc_infer::traits::*;
+
+/// Whether to skip the leak check, as part of a future compatibility warning step.
+#[derive(Copy, Clone, PartialEq, Eq, Debug)]
+pub enum SkipLeakCheck {
+ Yes,
+ No,
+}
+
+impl SkipLeakCheck {
+ fn is_yes(self) -> bool {
+ self == SkipLeakCheck::Yes
+ }
+}
+
+/// The "default" for skip-leak-check corresponds to the current
+/// behavior (do not skip the leak check) -- not the behavior we are
+/// transitioning into.
+impl Default for SkipLeakCheck {
+ fn default() -> Self {
+ SkipLeakCheck::No
+ }
+}
+
+/// The mode that trait queries run in.
+#[derive(Copy, Clone, PartialEq, Eq, Debug)]
+pub enum TraitQueryMode {
+ // Standard/un-canonicalized queries get accurate
+ // spans etc. passed in and hence can do reasonable
+ // error reporting on their own.
+ Standard,
+ // Canonicalized queries get dummy spans and hence
+ // must generally propagate errors to
+ // pre-canonicalization callsites.
+ Canonical,
+}
+
+/// Creates predicate obligations from the generic bounds.
+pub fn predicates_for_generics<'tcx>(
+ cause: ObligationCause<'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ generic_bounds: &ty::InstantiatedPredicates<'tcx>,
+) -> PredicateObligations<'tcx> {
+ util::predicates_for_generics(cause, 0, param_env, generic_bounds)
+}
+
+/// Determines whether the type `ty` is known to meet `bound` and
+/// returns true if so. Returns false if `ty` either does not meet
+/// `bound` or is not known to meet `bound` (note that this is
+/// conservative towards *no impl*, which is the opposite of the
+/// `evaluate` methods).
+pub fn type_known_to_meet_bound_modulo_regions<'a, 'tcx>(
+ infcx: &InferCtxt<'a, 'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ ty: Ty<'tcx>,
+ def_id: DefId,
+ span: Span,
+) -> bool {
+ debug!(
+ "type_known_to_meet_bound_modulo_regions(ty={:?}, bound={:?})",
+ ty,
+ infcx.tcx.def_path_str(def_id)
+ );
+
+ let trait_ref = ty::TraitRef { def_id, substs: infcx.tcx.mk_substs_trait(ty, &[]) };
+ let obligation = Obligation {
+ param_env,
+ cause: ObligationCause::misc(span, hir::DUMMY_HIR_ID),
+ recursion_depth: 0,
+ predicate: trait_ref.without_const().to_predicate(),
+ };
+
+ let result = infcx.predicate_must_hold_modulo_regions(&obligation);
+ debug!(
+            "type_known_to_meet_bound_modulo_regions: ty={:?} bound={} => {:?}",
+ ty,
+ infcx.tcx.def_path_str(def_id),
+ result
+ );
+
+ if result && ty.has_infer_types_or_consts() {
+ // Because of inference "guessing", selection can sometimes claim
+ // to succeed while the success requires a guess. To ensure
+ // this function's result remains infallible, we must confirm
+ // that guess. While imperfect, I believe this is sound.
+
+ // The handling of regions in this area of the code is terrible,
+ // see issue #29149. We should be able to improve on this with
+ // NLL.
+ let mut fulfill_cx = FulfillmentContext::new_ignoring_regions();
+
+ // We can use a dummy node-id here because we won't pay any mind
+ // to region obligations that arise (there shouldn't really be any
+ // anyhow).
+ let cause = ObligationCause::misc(span, hir::DUMMY_HIR_ID);
+
+ fulfill_cx.register_bound(infcx, param_env, ty, def_id, cause);
+
+ // Note: we only assume something is `Copy` if we can
+ // *definitively* show that it implements `Copy`. Otherwise,
+ // assume it is move; linear is always ok.
+ match fulfill_cx.select_all_or_error(infcx) {
+ Ok(()) => {
+ debug!(
+ "type_known_to_meet_bound_modulo_regions: ty={:?} bound={} success",
+ ty,
+ infcx.tcx.def_path_str(def_id)
+ );
+ true
+ }
+ Err(e) => {
+ debug!(
+ "type_known_to_meet_bound_modulo_regions: ty={:?} bound={} errors={:?}",
+ ty,
+ infcx.tcx.def_path_str(def_id),
+ e
+ );
+ false
+ }
+ }
+ } else {
+ result
+ }
+}
+
+fn do_normalize_predicates<'tcx>(
+ tcx: TyCtxt<'tcx>,
+ region_context: DefId,
+ cause: ObligationCause<'tcx>,
+ elaborated_env: ty::ParamEnv<'tcx>,
+ predicates: Vec<ty::Predicate<'tcx>>,
+) -> Result<Vec<ty::Predicate<'tcx>>, ErrorReported> {
+ debug!(
+ "do_normalize_predicates(predicates={:?}, region_context={:?}, cause={:?})",
+ predicates, region_context, cause,
+ );
+ let span = cause.span;
+ tcx.infer_ctxt().enter(|infcx| {
+ // FIXME. We should really... do something with these region
+ // obligations. But this call just continues the older
+ // behavior (i.e., doesn't cause any new bugs), and it would
+ // take some further refactoring to actually solve them. In
+ // particular, we would have to handle implied bounds
+ // properly, and that code is currently largely confined to
+ // regionck (though I made some efforts to extract it
+ // out). -nmatsakis
+ //
+ // @arielby: In any case, these obligations are checked
+ // by wfcheck anyway, so I'm not sure we have to check
+ // them here too, and we will remove this function when
+ // we move over to lazy normalization *anyway*.
+ let fulfill_cx = FulfillmentContext::new_ignoring_regions();
+ let predicates =
+ match fully_normalize(&infcx, fulfill_cx, cause, elaborated_env, &predicates) {
+ Ok(predicates) => predicates,
+ Err(errors) => {
+ infcx.report_fulfillment_errors(&errors, None, false);
+ return Err(ErrorReported);
+ }
+ };
+
+        debug!("do_normalize_predicates: normalized predicates = {:?}", predicates);
+
+ let region_scope_tree = region::ScopeTree::default();
+
+ // We can use the `elaborated_env` here; the region code only
+ // cares about declarations like `'a: 'b`.
+ let outlives_env = OutlivesEnvironment::new(elaborated_env);
+
+ infcx.resolve_regions_and_report_errors(
+ region_context,
+            &region_scope_tree,
+ &outlives_env,
+ RegionckMode::default(),
+ );
+
+ let predicates = match infcx.fully_resolve(&predicates) {
+ Ok(predicates) => predicates,
+ Err(fixup_err) => {
+ // If we encounter a fixup error, it means that some type
+ // variable wound up unconstrained. I actually don't know
+ // if this can happen, and I certainly don't expect it to
+ // happen often, but if it did happen it probably
+ // represents a legitimate failure due to some kind of
+ // unconstrained variable, and it seems better not to ICE,
+ // all things considered.
+ tcx.sess.span_err(span, &fixup_err.to_string());
+ return Err(ErrorReported);
+ }
+ };
+ if predicates.has_local_value() {
+ // FIXME: shouldn't we, you know, actually report an error here? or an ICE?
+ Err(ErrorReported)
+ } else {
+ Ok(predicates)
+ }
+ })
+}
+
+// FIXME: this is gonna need to be removed ...
+/// Normalizes the parameter environment, reporting errors if they occur.
+pub fn normalize_param_env_or_error<'tcx>(
+ tcx: TyCtxt<'tcx>,
+ region_context: DefId,
+ unnormalized_env: ty::ParamEnv<'tcx>,
+ cause: ObligationCause<'tcx>,
+) -> ty::ParamEnv<'tcx> {
+ // I'm not wild about reporting errors here; I'd prefer to
+ // have the errors get reported at a defined place (e.g.,
+ // during typeck). Instead I have all parameter
+ // environments, in effect, going through this function
+ // and hence potentially reporting errors. This ensures of
+ // course that we never forget to normalize (the
+ // alternative seemed like it would involve a lot of
+ // manual invocations of this fn -- and then we'd have to
+ // deal with the errors at each of those sites).
+ //
+ // In any case, in practice, typeck constructs all the
+ // parameter environments once for every fn as it goes,
+ // and errors will get reported then; so after typeck we
+ // can be sure that no errors should occur.
+
+ debug!(
+ "normalize_param_env_or_error(region_context={:?}, unnormalized_env={:?}, cause={:?})",
+ region_context, unnormalized_env, cause
+ );
+
+ let mut predicates: Vec<_> =
+ util::elaborate_predicates(tcx, unnormalized_env.caller_bounds.to_vec()).collect();
+
+ debug!("normalize_param_env_or_error: elaborated-predicates={:?}", predicates);
+
+ let elaborated_env = ty::ParamEnv::new(
+ tcx.intern_predicates(&predicates),
+ unnormalized_env.reveal,
+ unnormalized_env.def_id,
+ );
+
+ // HACK: we are trying to normalize the param-env inside *itself*. The problem is that
+ // normalization expects its param-env to be already normalized, which means we have
+ // a circularity.
+ //
+ // The way we handle this is by normalizing the param-env inside an unnormalized version
+ // of the param-env, which means that if the param-env contains unnormalized projections,
+ // we'll have some normalization failures. This is unfortunate.
+ //
+ // Lazy normalization would basically handle this by treating just the
+ // normalizing-a-trait-ref-requires-itself cycles as evaluation failures.
+ //
+ // Inferred outlives bounds can create a lot of `TypeOutlives` predicates for associated
+ // types, so to make the situation less bad, we normalize all the predicates *but*
+ // the `TypeOutlives` predicates first inside the unnormalized parameter environment, and
+ // then we normalize the `TypeOutlives` bounds inside the normalized parameter environment.
+ //
+ // This works fairly well because trait matching does not actually care about param-env
+ // TypeOutlives predicates - these are normally used by regionck.
+ let outlives_predicates: Vec<_> = predicates
+ .drain_filter(|predicate| match predicate {
+ ty::Predicate::TypeOutlives(..) => true,
+ _ => false,
+ })
+ .collect();
+
+ debug!(
+ "normalize_param_env_or_error: predicates=(non-outlives={:?}, outlives={:?})",
+ predicates, outlives_predicates
+ );
+ let non_outlives_predicates = match do_normalize_predicates(
+ tcx,
+ region_context,
+ cause.clone(),
+ elaborated_env,
+ predicates,
+ ) {
+ Ok(predicates) => predicates,
+ // An unnormalized env is better than nothing.
+ Err(ErrorReported) => {
+ debug!("normalize_param_env_or_error: errored resolving non-outlives predicates");
+ return elaborated_env;
+ }
+ };
+
+ debug!("normalize_param_env_or_error: non-outlives predicates={:?}", non_outlives_predicates);
+
+ // Not sure whether it is better to include the unnormalized TypeOutlives predicates
+ // here. I believe they should not matter, because we are ignoring TypeOutlives param-env
+ // predicates here anyway. Keeping them here anyway because it seems safer.
+ let outlives_env: Vec<_> =
+ non_outlives_predicates.iter().chain(&outlives_predicates).cloned().collect();
+ let outlives_env =
+ ty::ParamEnv::new(tcx.intern_predicates(&outlives_env), unnormalized_env.reveal, None);
+ let outlives_predicates = match do_normalize_predicates(
+ tcx,
+ region_context,
+ cause,
+ outlives_env,
+ outlives_predicates,
+ ) {
+ Ok(predicates) => predicates,
+ // An unnormalized env is better than nothing.
+ Err(ErrorReported) => {
+ debug!("normalize_param_env_or_error: errored resolving outlives predicates");
+ return elaborated_env;
+ }
+ };
+ debug!("normalize_param_env_or_error: outlives predicates={:?}", outlives_predicates);
+
+ let mut predicates = non_outlives_predicates;
+ predicates.extend(outlives_predicates);
+ debug!("normalize_param_env_or_error: final predicates={:?}", predicates);
+ ty::ParamEnv::new(
+ tcx.intern_predicates(&predicates),
+ unnormalized_env.reveal,
+ unnormalized_env.def_id,
+ )
+}
+
+pub fn fully_normalize<'a, 'tcx, T>(
+ infcx: &InferCtxt<'a, 'tcx>,
+ mut fulfill_cx: FulfillmentContext<'tcx>,
+ cause: ObligationCause<'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ value: &T,
+) -> Result<T, Vec<FulfillmentError<'tcx>>>
+where
+ T: TypeFoldable<'tcx>,
+{
+    debug!("fully_normalize(value={:?})", value);
+ let selcx = &mut SelectionContext::new(infcx);
+ let Normalized { value: normalized_value, obligations } =
+ project::normalize(selcx, param_env, cause, value);
+ debug!(
+ "fully_normalize: normalized_value={:?} obligations={:?}",
+ normalized_value, obligations
+ );
+ for obligation in obligations {
+ fulfill_cx.register_predicate_obligation(selcx.infcx(), obligation);
+ }
+
+ debug!("fully_normalize: select_all_or_error start");
+ fulfill_cx.select_all_or_error(infcx)?;
+ debug!("fully_normalize: select_all_or_error complete");
+ let resolved_value = infcx.resolve_vars_if_possible(&normalized_value);
+ debug!("fully_normalize: resolved_value={:?}", resolved_value);
+ Ok(resolved_value)
+}
+
+/// Normalizes the predicates and checks whether they hold in an empty
+/// environment. If this returns false, then either normalize
+/// encountered an error or one of the predicates did not hold. Used
+/// when creating vtables to check for unsatisfiable methods.
+pub fn normalize_and_test_predicates<'tcx>(
+ tcx: TyCtxt<'tcx>,
+ predicates: Vec<ty::Predicate<'tcx>>,
+) -> bool {
+ debug!("normalize_and_test_predicates(predicates={:?})", predicates);
+
+ let result = tcx.infer_ctxt().enter(|infcx| {
+ let param_env = ty::ParamEnv::reveal_all();
+ let mut selcx = SelectionContext::new(&infcx);
+ let mut fulfill_cx = FulfillmentContext::new();
+ let cause = ObligationCause::dummy();
+ let Normalized { value: predicates, obligations } =
+ normalize(&mut selcx, param_env, cause.clone(), &predicates);
+ for obligation in obligations {
+ fulfill_cx.register_predicate_obligation(&infcx, obligation);
+ }
+ for predicate in predicates {
+ let obligation = Obligation::new(cause.clone(), param_env, predicate);
+ fulfill_cx.register_predicate_obligation(&infcx, obligation);
+ }
+
+ fulfill_cx.select_all_or_error(&infcx).is_ok()
+ });
+ debug!("normalize_and_test_predicates(predicates={:?}) = {:?}", predicates, result);
+ result
+}
+
+fn substitute_normalize_and_test_predicates<'tcx>(
+ tcx: TyCtxt<'tcx>,
+ key: (DefId, SubstsRef<'tcx>),
+) -> bool {
+ debug!("substitute_normalize_and_test_predicates(key={:?})", key);
+
+ let predicates = tcx.predicates_of(key.0).instantiate(tcx, key.1).predicates;
+ let result = normalize_and_test_predicates(tcx, predicates);
+
+ debug!("substitute_normalize_and_test_predicates(key={:?}) = {:?}", key, result);
+ result
+}
+
+/// Given a trait `trait_ref`, iterates the vtable entries
+/// that come from `trait_ref`, including its supertraits.
+#[inline] // FIXME(#35870): avoid closures being unexported due to `impl Trait`.
+fn vtable_methods<'tcx>(
+ tcx: TyCtxt<'tcx>,
+ trait_ref: ty::PolyTraitRef<'tcx>,
+) -> &'tcx [Option<(DefId, SubstsRef<'tcx>)>] {
+ debug!("vtable_methods({:?})", trait_ref);
+
+ tcx.arena.alloc_from_iter(supertraits(tcx, trait_ref).flat_map(move |trait_ref| {
+ let trait_methods = tcx
+ .associated_items(trait_ref.def_id())
+ .in_definition_order()
+ .filter(|item| item.kind == ty::AssocKind::Method);
+
+        // Now list each method's `DefId` and `InternalSubsts` (as used within its trait).
+ // If the method can never be called from this object, produce None.
+ trait_methods.map(move |trait_method| {
+ debug!("vtable_methods: trait_method={:?}", trait_method);
+ let def_id = trait_method.def_id;
+
+ // Some methods cannot be called on an object; skip those.
+ if !is_vtable_safe_method(tcx, trait_ref.def_id(), &trait_method) {
+ debug!("vtable_methods: not vtable safe");
+ return None;
+ }
+
+ // The method may have some early-bound lifetimes; add regions for those.
+ let substs = trait_ref.map_bound(|trait_ref| {
+ InternalSubsts::for_item(tcx, def_id, |param, _| match param.kind {
+ GenericParamDefKind::Lifetime => tcx.lifetimes.re_erased.into(),
+ GenericParamDefKind::Type { .. } | GenericParamDefKind::Const => {
+ trait_ref.substs[param.index as usize]
+ }
+ })
+ });
+
+ // The trait type may have higher-ranked lifetimes in it;
+ // erase them if they appear, so that we get the type
+ // at some particular call site.
+ let substs =
+ tcx.normalize_erasing_late_bound_regions(ty::ParamEnv::reveal_all(), &substs);
+
+ // It's possible that the method relies on where-clauses that
+ // do not hold for this particular set of type parameters.
+ // Note that this method could then never be called, so we
+ // do not want to try and codegen it, in that case (see #23435).
+ let predicates = tcx.predicates_of(def_id).instantiate_own(tcx, substs);
+ if !normalize_and_test_predicates(tcx, predicates.predicates) {
+ debug!("vtable_methods: predicates do not hold");
+ return None;
+ }
+
+ Some((def_id, substs))
+ })
+ }))
+}
+
+pub fn provide(providers: &mut ty::query::Providers<'_>) {
+ object_safety::provide(providers);
+ *providers = ty::query::Providers {
+ specialization_graph_of: specialize::specialization_graph_provider,
+ specializes: specialize::specializes,
+ codegen_fulfill_obligation: codegen::codegen_fulfill_obligation,
+ vtable_methods,
+ substitute_normalize_and_test_predicates,
+ ..*providers
+ };
+}
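`vtable_methods` above collects one entry per vtable-safe method of the trait and all of its supertraits. A surface-level view of the result (the `Animal`/`Loud` names are hypothetical, chosen only to illustrate dispatch through a vtable that includes supertrait methods):

```rust
// The vtable behind `dyn Loud` carries entries both for `Loud`'s own
// methods and for those of its supertrait `Animal`.
trait Animal {
    fn sound(&self) -> &'static str;
}

trait Loud: Animal {
    fn shout(&self) -> String {
        self.sound().to_uppercase()
    }
}

struct Dog;

impl Animal for Dog {
    fn sound(&self) -> &'static str {
        "woof"
    }
}

impl Loud for Dog {}

fn main() {
    let dog: &dyn Loud = &Dog;
    // Both calls dispatch dynamically through the combined vtable.
    assert_eq!(dog.sound(), "woof");
    assert_eq!(dog.shout(), "WOOF");
}
```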
--- /dev/null
+//! "Object safety" refers to the ability for a trait to be converted
+//! to an object. In general, traits may only be converted to an
+//! object if all of their methods meet certain criteria. In particular,
+//! they must:
+//!
+//! - have a suitable receiver from which we can extract a vtable and coerce to a "thin" version
+//! that doesn't contain the vtable;
+//! - not reference the erased type `Self` except in this receiver;
+//! - not have generic type parameters.
+
+use super::elaborate_predicates;
+
+use crate::infer::TyCtxtInferExt;
+use crate::traits::query::evaluate_obligation::InferCtxtExt;
+use crate::traits::{self, Obligation, ObligationCause};
+use rustc::ty::subst::{InternalSubsts, Subst};
+use rustc::ty::{self, Predicate, ToPredicate, Ty, TyCtxt, TypeFoldable, WithConstness};
+use rustc_errors::Applicability;
+use rustc_hir as hir;
+use rustc_hir::def_id::DefId;
+use rustc_session::lint::builtin::WHERE_CLAUSES_OBJECT_SAFETY;
+use rustc_span::symbol::Symbol;
+use rustc_span::Span;
+use smallvec::SmallVec;
+
+use std::iter;
+
+pub use crate::traits::{MethodViolationCode, ObjectSafetyViolation};
+
+/// Returns the object safety violations that affect
+/// astconv -- currently, `Self` in supertraits. This is needed
+/// because `object_safety_violations` can't be used during
+/// type collection.
+pub fn astconv_object_safety_violations(
+ tcx: TyCtxt<'_>,
+ trait_def_id: DefId,
+) -> Vec<ObjectSafetyViolation> {
+ debug_assert!(tcx.generics_of(trait_def_id).has_self);
+ let violations = traits::supertrait_def_ids(tcx, trait_def_id)
+ .map(|def_id| predicates_reference_self(tcx, def_id, true))
+ .filter(|spans| !spans.is_empty())
+ .map(|spans| ObjectSafetyViolation::SupertraitSelf(spans))
+ .collect();
+
+ debug!("astconv_object_safety_violations(trait_def_id={:?}) = {:?}", trait_def_id, violations);
+
+ violations
+}
+
+fn object_safety_violations(tcx: TyCtxt<'_>, trait_def_id: DefId) -> Vec<ObjectSafetyViolation> {
+ debug_assert!(tcx.generics_of(trait_def_id).has_self);
+ debug!("object_safety_violations: {:?}", trait_def_id);
+
+ traits::supertrait_def_ids(tcx, trait_def_id)
+ .flat_map(|def_id| object_safety_violations_for_trait(tcx, def_id))
+ .collect()
+}
+
+/// We say a method is *vtable safe* if it can be invoked on a trait
+/// object. Note that object-safe traits can have some
+/// non-vtable-safe methods, so long as they require `Self: Sized` or
+/// otherwise ensure that they cannot be used when `Self = Trait`.
+pub fn is_vtable_safe_method(tcx: TyCtxt<'_>, trait_def_id: DefId, method: &ty::AssocItem) -> bool {
+ debug_assert!(tcx.generics_of(trait_def_id).has_self);
+ debug!("is_vtable_safe_method({:?}, {:?})", trait_def_id, method);
+ // Any method that has a `Self: Sized` bound cannot be called.
+ if generics_require_sized_self(tcx, method.def_id) {
+ return false;
+ }
+
+ match virtual_call_violation_for_method(tcx, trait_def_id, method) {
+ None | Some(MethodViolationCode::WhereClauseReferencesSelf) => true,
+ Some(_) => false,
+ }
+}
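As the comment on `is_vtable_safe_method` notes, an object-safe trait may still contain non-vtable-safe methods as long as they require `Self: Sized`; such methods are simply excluded from the vtable and uncallable on the trait object. A minimal sketch (the `Shape`/`Circle` names are illustrative assumptions, not from the compiler source):

```rust
// An object-safe trait can carry a method that is not vtable safe,
// provided the method is gated on `Self: Sized`.
trait Shape {
    fn name(&self) -> &'static str;

    // Returns `Self` by value, so it cannot go in the vtable; the
    // `Self: Sized` bound keeps the trait as a whole object safe.
    fn duplicate(&self) -> Self
    where
        Self: Sized;
}

#[derive(Clone)]
struct Circle;

impl Shape for Circle {
    fn name(&self) -> &'static str {
        "circle"
    }

    fn duplicate(&self) -> Self {
        self.clone()
    }
}

fn main() {
    // Coercion to `dyn Shape` is allowed; only `name` is in the vtable.
    let shape: Box<dyn Shape> = Box::new(Circle);
    assert_eq!(shape.name(), "circle");
    // `duplicate` is still callable where the concrete type is known.
    assert_eq!(Circle.duplicate().name(), "circle");
}
```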
+
+fn object_safety_violations_for_trait(
+ tcx: TyCtxt<'_>,
+ trait_def_id: DefId,
+) -> Vec<ObjectSafetyViolation> {
+ // Check methods for violations.
+ let mut violations: Vec<_> = tcx
+ .associated_items(trait_def_id)
+ .in_definition_order()
+ .filter(|item| item.kind == ty::AssocKind::Method)
+ .filter_map(|item| {
+ object_safety_violation_for_method(tcx, trait_def_id, &item)
+ .map(|(code, span)| ObjectSafetyViolation::Method(item.ident.name, code, span))
+ })
+ .filter(|violation| {
+ if let ObjectSafetyViolation::Method(
+ _,
+ MethodViolationCode::WhereClauseReferencesSelf,
+ span,
+ ) = violation
+ {
+                // Using the crate-level `CRATE_HIR_ID` is wrong, but it's hard to get a more precise id.
+ // It's also hard to get a use site span, so we use the method definition span.
+ tcx.struct_span_lint_hir(
+ WHERE_CLAUSES_OBJECT_SAFETY,
+ hir::CRATE_HIR_ID,
+ *span,
+ |lint| {
+ let mut err = lint.build(&format!(
+ "the trait `{}` cannot be made into an object",
+ tcx.def_path_str(trait_def_id)
+ ));
+ let node = tcx.hir().get_if_local(trait_def_id);
+ let msg = if let Some(hir::Node::Item(item)) = node {
+ err.span_label(
+ item.ident.span,
+ "this trait cannot be made into an object...",
+ );
+ format!("...because {}", violation.error_msg())
+ } else {
+ format!(
+ "the trait cannot be made into an object because {}",
+ violation.error_msg()
+ )
+ };
+ err.span_label(*span, &msg);
+ match (node, violation.solution()) {
+ (Some(_), Some((note, None))) => {
+                            err.help(&note);
+ }
+ (Some(_), Some((note, Some((sugg, span))))) => {
+ err.span_suggestion(
+ span,
+                                &note,
+ sugg,
+ Applicability::MachineApplicable,
+ );
+ }
+                        // Only provide the help if it's a local trait, otherwise it's not actionable.
+ _ => {}
+ }
+ err.emit();
+ },
+ );
+ false
+ } else {
+ true
+ }
+ })
+ .collect();
+
+ // Check the trait itself.
+ if trait_has_sized_self(tcx, trait_def_id) {
+ // We don't want to include the requirement from `Sized` itself to be `Sized` in the list.
+ let spans = get_sized_bounds(tcx, trait_def_id);
+ violations.push(ObjectSafetyViolation::SizedSelf(spans));
+ }
+ let spans = predicates_reference_self(tcx, trait_def_id, false);
+ if !spans.is_empty() {
+ violations.push(ObjectSafetyViolation::SupertraitSelf(spans));
+ }
+
+ violations.extend(
+ tcx.associated_items(trait_def_id)
+ .in_definition_order()
+ .filter(|item| item.kind == ty::AssocKind::Const)
+ .map(|item| ObjectSafetyViolation::AssocConst(item.ident.name, item.ident.span)),
+ );
+
+ debug!(
+ "object_safety_violations_for_trait(trait_def_id={:?}) = {:?}",
+ trait_def_id, violations
+ );
+
+ violations
+}
+
+fn get_sized_bounds(tcx: TyCtxt<'_>, trait_def_id: DefId) -> SmallVec<[Span; 1]> {
+ tcx.hir()
+ .get_if_local(trait_def_id)
+ .and_then(|node| match node {
+ hir::Node::Item(hir::Item {
+ kind: hir::ItemKind::Trait(.., generics, bounds, _),
+ ..
+ }) => Some(
+ generics
+ .where_clause
+ .predicates
+ .iter()
+ .filter_map(|pred| {
+ match pred {
+ hir::WherePredicate::BoundPredicate(pred)
+ if pred.bounded_ty.hir_id.owner.to_def_id() == trait_def_id =>
+ {
+ // Fetch spans for trait bounds that are Sized:
+ // `trait T where Self: Pred`
+ Some(pred.bounds.iter().filter_map(|b| match b {
+ hir::GenericBound::Trait(
+ trait_ref,
+ hir::TraitBoundModifier::None,
+ ) if trait_has_sized_self(
+ tcx,
+ trait_ref.trait_ref.trait_def_id(),
+ ) =>
+ {
+ Some(trait_ref.span)
+ }
+ _ => None,
+ }))
+ }
+ _ => None,
+ }
+ })
+ .flatten()
+ .chain(bounds.iter().filter_map(|b| match b {
+ hir::GenericBound::Trait(trait_ref, hir::TraitBoundModifier::None)
+ if trait_has_sized_self(tcx, trait_ref.trait_ref.trait_def_id()) =>
+ {
+ // Fetch spans for supertraits that are `Sized`: `trait T: Super`
+ Some(trait_ref.span)
+ }
+ _ => None,
+ }))
+ .collect::<SmallVec<[Span; 1]>>(),
+ ),
+ _ => None,
+ })
+ .unwrap_or_else(SmallVec::new)
+}
+
+fn predicates_reference_self(
+ tcx: TyCtxt<'_>,
+ trait_def_id: DefId,
+ supertraits_only: bool,
+) -> SmallVec<[Span; 1]> {
+ let trait_ref = ty::Binder::dummy(ty::TraitRef::identity(tcx, trait_def_id));
+ let predicates = if supertraits_only {
+ tcx.super_predicates_of(trait_def_id)
+ } else {
+ tcx.predicates_of(trait_def_id)
+ };
+ let self_ty = tcx.types.self_param;
+ let has_self_ty = |t: Ty<'_>| t.walk().any(|t| t == self_ty);
+ predicates
+ .predicates
+ .iter()
+ .map(|(predicate, sp)| (predicate.subst_supertrait(tcx, &trait_ref), sp))
+ .filter_map(|(predicate, &sp)| {
+ match predicate {
+ ty::Predicate::Trait(ref data, _) => {
+ // In the case of a trait predicate, we can skip the "self" type.
+ if data.skip_binder().input_types().skip(1).any(has_self_ty) {
+ Some(sp)
+ } else {
+ None
+ }
+ }
+ ty::Predicate::Projection(ref data) => {
+ // And similarly for projections. This should be redundant with
+ // the previous check because any projection should have a
+ // matching `Trait` predicate with the same inputs, but we do
+ // the check to be safe.
+ //
+ // Note that we *do* allow projection *outputs* to contain
+ // `self` (i.e., `trait Foo: Bar<Output=Self::Result> { type Result; }`),
+ // we just require the user to specify *both* outputs
+ // in the object type (i.e., `dyn Foo<Output=(), Result=()>`).
+ //
+ // This is ALT2 in issue #56288, see that for discussion of the
+ // possible alternatives.
+ if data
+ .skip_binder()
+ .projection_ty
+ .trait_ref(tcx)
+ .input_types()
+ .skip(1)
+ .any(has_self_ty)
+ {
+ Some(sp)
+ } else {
+ None
+ }
+ }
+ ty::Predicate::WellFormed(..)
+ | ty::Predicate::ObjectSafe(..)
+ | ty::Predicate::TypeOutlives(..)
+ | ty::Predicate::RegionOutlives(..)
+ | ty::Predicate::ClosureKind(..)
+ | ty::Predicate::Subtype(..)
+ | ty::Predicate::ConstEvaluatable(..) => None,
+ }
+ })
+ .collect()
+}
+
+fn trait_has_sized_self(tcx: TyCtxt<'_>, trait_def_id: DefId) -> bool {
+ generics_require_sized_self(tcx, trait_def_id)
+}
+
+fn generics_require_sized_self(tcx: TyCtxt<'_>, def_id: DefId) -> bool {
+ let sized_def_id = match tcx.lang_items().sized_trait() {
+ Some(def_id) => def_id,
+ None => {
+ return false; /* No Sized trait, can't require it! */
+ }
+ };
+
+ // Search for a predicate like `Self : Sized` amongst the trait bounds.
+ let predicates = tcx.predicates_of(def_id);
+ let predicates = predicates.instantiate_identity(tcx).predicates;
+ elaborate_predicates(tcx, predicates).any(|predicate| match predicate {
+ ty::Predicate::Trait(ref trait_pred, _) => {
+ trait_pred.def_id() == sized_def_id && trait_pred.skip_binder().self_ty().is_param(0)
+ }
+ ty::Predicate::Projection(..)
+ | ty::Predicate::Subtype(..)
+ | ty::Predicate::RegionOutlives(..)
+ | ty::Predicate::WellFormed(..)
+ | ty::Predicate::ObjectSafe(..)
+ | ty::Predicate::ClosureKind(..)
+ | ty::Predicate::TypeOutlives(..)
+ | ty::Predicate::ConstEvaluatable(..) => false,
+ })
+}
+
+/// Returns `Some(_)` if this method makes the containing trait not object safe.
+fn object_safety_violation_for_method(
+ tcx: TyCtxt<'_>,
+ trait_def_id: DefId,
+ method: &ty::AssocItem,
+) -> Option<(MethodViolationCode, Span)> {
+ debug!("object_safety_violation_for_method({:?}, {:?})", trait_def_id, method);
+    // Any method that has a `Self: Sized` bound cannot be called on a trait
+    // object anyway, so it is exempt from the object-safety checks.
+ if generics_require_sized_self(tcx, method.def_id) {
+ return None;
+ }
+
+ let violation = virtual_call_violation_for_method(tcx, trait_def_id, method);
+ // Get an accurate span depending on the violation.
+ violation.map(|v| {
+ let node = tcx.hir().get_if_local(method.def_id);
+ let span = match (v, node) {
+ (MethodViolationCode::ReferencesSelfInput(arg), Some(node)) => node
+ .fn_decl()
+ .and_then(|decl| decl.inputs.get(arg + 1))
+ .map_or(method.ident.span, |arg| arg.span),
+ (MethodViolationCode::UndispatchableReceiver, Some(node)) => node
+ .fn_decl()
+ .and_then(|decl| decl.inputs.get(0))
+ .map_or(method.ident.span, |arg| arg.span),
+ (MethodViolationCode::ReferencesSelfOutput, Some(node)) => {
+ node.fn_decl().map_or(method.ident.span, |decl| decl.output.span())
+ }
+ _ => method.ident.span,
+ };
+ (v, span)
+ })
+}
+
+/// Returns `Some(_)` if this method cannot be called on a trait
+/// object; this does not necessarily imply that the enclosing trait
+/// is not object safe, because the method might have a where clause
+/// `Self:Sized`.
+fn virtual_call_violation_for_method<'tcx>(
+ tcx: TyCtxt<'tcx>,
+ trait_def_id: DefId,
+ method: &ty::AssocItem,
+) -> Option<MethodViolationCode> {
+ // The method's first parameter must be named `self`
+ if !method.method_has_self_argument {
+ // We'll attempt to provide a structured suggestion for `Self: Sized`.
+ let sugg =
+ tcx.hir().get_if_local(method.def_id).as_ref().and_then(|node| node.generics()).map(
+ |generics| match generics.where_clause.predicates {
+ [] => (" where Self: Sized", generics.where_clause.span),
+ [.., pred] => (", Self: Sized", pred.span().shrink_to_hi()),
+ },
+ );
+ return Some(MethodViolationCode::StaticMethod(sugg));
+ }
+
+ let sig = tcx.fn_sig(method.def_id);
+
+ for (i, input_ty) in sig.skip_binder().inputs()[1..].iter().enumerate() {
+ if contains_illegal_self_type_reference(tcx, trait_def_id, input_ty) {
+ return Some(MethodViolationCode::ReferencesSelfInput(i));
+ }
+ }
+ if contains_illegal_self_type_reference(tcx, trait_def_id, sig.output().skip_binder()) {
+ return Some(MethodViolationCode::ReferencesSelfOutput);
+ }
+
+ // We can't monomorphize things like `fn foo<A>(...)`.
+ let own_counts = tcx.generics_of(method.def_id).own_counts();
+ if own_counts.types + own_counts.consts != 0 {
+ return Some(MethodViolationCode::Generic);
+ }
+
+ if tcx
+ .predicates_of(method.def_id)
+ .predicates
+ .iter()
+        // A trait object can't claim to outlive the concrete type,
+        // so outlives predicates will always hold.
+ .cloned()
+ .filter(|(p, _)| p.to_opt_type_outlives().is_none())
+ .collect::<Vec<_>>()
+ // Do a shallow visit so that `contains_illegal_self_type_reference`
+        // may apply its custom visiting.
+ .visit_tys_shallow(|t| contains_illegal_self_type_reference(tcx, trait_def_id, t))
+ {
+ return Some(MethodViolationCode::WhereClauseReferencesSelf);
+ }
+
+ let receiver_ty =
+ tcx.liberate_late_bound_regions(method.def_id, &sig.map_bound(|sig| sig.inputs()[0]));
+
+ // Until `unsized_locals` is fully implemented, `self: Self` can't be dispatched on.
+ // However, this is already considered object-safe. We allow it as a special case here.
+ // FIXME(mikeyhew) get rid of this `if` statement once `receiver_is_dispatchable` allows
+ // `Receiver: Unsize<Receiver[Self => dyn Trait]>`.
+ if receiver_ty != tcx.types.self_param {
+ if !receiver_is_dispatchable(tcx, method, receiver_ty) {
+ return Some(MethodViolationCode::UndispatchableReceiver);
+ } else {
+            // Do a sanity check to make sure the receiver actually has the layout of a pointer.
+
+ use rustc::ty::layout::Abi;
+
+ let param_env = tcx.param_env(method.def_id);
+
+ let abi_of_ty = |ty: Ty<'tcx>| -> &Abi {
+ match tcx.layout_of(param_env.and(ty)) {
+ Ok(layout) => &layout.abi,
+ Err(err) => bug!("error: {}\n while computing layout for type {:?}", err, ty),
+ }
+ };
+
+ // e.g., `Rc<()>`
+ let unit_receiver_ty =
+ receiver_for_self_ty(tcx, receiver_ty, tcx.mk_unit(), method.def_id);
+
+ match abi_of_ty(unit_receiver_ty) {
+ &Abi::Scalar(..) => (),
+ abi => {
+ tcx.sess.delay_span_bug(
+ tcx.def_span(method.def_id),
+ &format!(
+ "receiver when `Self = ()` should have a Scalar ABI; found {:?}",
+ abi
+ ),
+ );
+ }
+ }
+
+ let trait_object_ty =
+ object_ty_for_trait(tcx, trait_def_id, tcx.mk_region(ty::ReStatic));
+
+ // e.g., `Rc<dyn Trait>`
+ let trait_object_receiver =
+ receiver_for_self_ty(tcx, receiver_ty, trait_object_ty, method.def_id);
+
+ match abi_of_ty(trait_object_receiver) {
+ &Abi::ScalarPair(..) => (),
+ abi => {
+ tcx.sess.delay_span_bug(
+ tcx.def_span(method.def_id),
+ &format!(
+ "receiver when `Self = {}` should have a ScalarPair ABI; \
+ found {:?}",
+ trait_object_ty, abi
+ ),
+ );
+ }
+ }
+ }
+ }
+
+ None
+}
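As an aside, the pointer-layout sanity checks above can be observed from stable Rust. This is a hypothetical, stand-alone sketch (not part of this patch) of why a receiver with `Self = ()` has a Scalar ABI while a trait-object receiver has a ScalarPair ABI:

```rust
// Sketch: a receiver with `Self = ()` is a single pointer-sized scalar,
// while a trait-object receiver carries (data pointer, vtable pointer).
use std::mem::size_of;
use std::rc::Rc;

trait Trait {}

fn main() {
    // e.g., `Rc<()>`: one pointer wide (Scalar ABI).
    assert_eq!(size_of::<Rc<()>>(), size_of::<usize>());
    // e.g., `Rc<dyn Trait>`: data pointer + vtable pointer (ScalarPair ABI).
    assert_eq!(size_of::<Rc<dyn Trait>>(), 2 * size_of::<usize>());
    println!("ok");
}
```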
+
+/// Performs a type substitution to produce the version of `receiver_ty` when `Self = self_ty`.
+/// For example, for `receiver_ty = Rc<Self>` and `self_ty = Foo`, returns `Rc<Foo>`.
+fn receiver_for_self_ty<'tcx>(
+ tcx: TyCtxt<'tcx>,
+ receiver_ty: Ty<'tcx>,
+ self_ty: Ty<'tcx>,
+ method_def_id: DefId,
+) -> Ty<'tcx> {
+ debug!("receiver_for_self_ty({:?}, {:?}, {:?})", receiver_ty, self_ty, method_def_id);
+ let substs = InternalSubsts::for_item(tcx, method_def_id, |param, _| {
+ if param.index == 0 { self_ty.into() } else { tcx.mk_param_from_def(param) }
+ });
+
+ let result = receiver_ty.subst(tcx, substs);
+ debug!(
+ "receiver_for_self_ty({:?}, {:?}, {:?}) = {:?}",
+ receiver_ty, self_ty, method_def_id, result
+ );
+ result
+}
+
+/// Creates the object type for the current trait. For example,
+/// if the current trait is `Deref`, then this will be
+/// `dyn Deref<Target = Self::Target> + 'static`.
+fn object_ty_for_trait<'tcx>(
+ tcx: TyCtxt<'tcx>,
+ trait_def_id: DefId,
+ lifetime: ty::Region<'tcx>,
+) -> Ty<'tcx> {
+ debug!("object_ty_for_trait: trait_def_id={:?}", trait_def_id);
+
+ let trait_ref = ty::TraitRef::identity(tcx, trait_def_id);
+
+ let trait_predicate =
+ ty::ExistentialPredicate::Trait(ty::ExistentialTraitRef::erase_self_ty(tcx, trait_ref));
+
+ let mut associated_types = traits::supertraits(tcx, ty::Binder::dummy(trait_ref))
+ .flat_map(|super_trait_ref| {
+ tcx.associated_items(super_trait_ref.def_id())
+ .in_definition_order()
+ .map(move |item| (super_trait_ref, item))
+ })
+ .filter(|(_, item)| item.kind == ty::AssocKind::Type)
+ .collect::<Vec<_>>();
+
+ // existential predicates need to be in a specific order
+ associated_types.sort_by_cached_key(|(_, item)| tcx.def_path_hash(item.def_id));
+
+ let projection_predicates = associated_types.into_iter().map(|(super_trait_ref, item)| {
+ // We *can* get bound lifetimes here in cases like
+ // `trait MyTrait: for<'s> OtherTrait<&'s T, Output=bool>`.
+ //
+ // binder moved to (*)...
+ let super_trait_ref = super_trait_ref.skip_binder();
+ ty::ExistentialPredicate::Projection(ty::ExistentialProjection {
+ ty: tcx.mk_projection(item.def_id, super_trait_ref.substs),
+ item_def_id: item.def_id,
+ substs: super_trait_ref.substs,
+ })
+ });
+
+ let existential_predicates =
+ tcx.mk_existential_predicates(iter::once(trait_predicate).chain(projection_predicates));
+
+ let object_ty = tcx.mk_dynamic(
+ // (*) ... binder re-introduced here
+ ty::Binder::bind(existential_predicates),
+ lifetime,
+ );
+
+ debug!("object_ty_for_trait: object_ty=`{}`", object_ty);
+
+ object_ty
+}
+
+/// Checks that the method's receiver (the `self` argument) can be dispatched on when `Self` is a
+/// trait object. We require that `DispatchFromDyn` be implemented for the receiver type
+/// in the following way:
+/// - let `Receiver` be the type of the `self` argument, i.e., `Self`, `&Self`, `Rc<Self>`,
+/// - require the following bound:
+///
+/// ```
+/// Receiver[Self => T]: DispatchFromDyn<Receiver[Self => dyn Trait]>
+/// ```
+///
+/// where `Foo[X => Y]` means "the same type as `Foo`, but with `X` replaced with `Y`"
+/// (substitution notation).
+///
+/// Some examples of receiver types and their required obligation:
+/// - `&'a mut self` requires `&'a mut Self: DispatchFromDyn<&'a mut dyn Trait>`,
+/// - `self: Rc<Self>` requires `Rc<Self>: DispatchFromDyn<Rc<dyn Trait>>`,
+/// - `self: Pin<Box<Self>>` requires `Pin<Box<Self>>: DispatchFromDyn<Pin<Box<dyn Trait>>>`.
+///
+/// The only case where the receiver is not dispatchable, but is still a valid receiver
+/// type (just not object-safe), is when there is more than one level of pointer indirection.
+/// E.g., `self: &&Self`, `self: &Rc<Self>`, `self: Box<Box<Self>>`. In these cases, there
+/// is no way, or at least no inexpensive way, to coerce the receiver from the version where
+/// `Self = dyn Trait` to the version where `Self = T`, where `T` is the unknown erased type
+/// contained by the trait object, because the object that needs to be coerced is behind
+/// a pointer.
+///
+/// In practice, we cannot use `dyn Trait` explicitly in the obligation because it would result
+/// in a new check that `Trait` is object safe, creating a cycle (until object_safe_for_dispatch
+/// is stabilized, see tracking issue https://github.com/rust-lang/rust/issues/43561).
+/// Instead, we fudge a little by introducing a new type parameter `U` such that
+/// `Self: Unsize<U>` and `U: Trait + ?Sized`, and use `U` in place of `dyn Trait`.
+/// Written as a chalk-style query:
+///
+/// forall (U: Trait + ?Sized) {
+/// if (Self: Unsize<U>) {
+/// Receiver: DispatchFromDyn<Receiver[Self => U]>
+/// }
+/// }
+///
+/// for `self: &'a mut Self`, this means `&'a mut Self: DispatchFromDyn<&'a mut U>`
+/// for `self: Rc<Self>`, this means `Rc<Self>: DispatchFromDyn<Rc<U>>`
+/// for `self: Pin<Box<Self>>`, this means `Pin<Box<Self>>: DispatchFromDyn<Pin<Box<U>>>`
+//
+// FIXME(mikeyhew) when unsized receivers are implemented as part of unsized rvalues, add this
+// fallback query: `Receiver: Unsize<Receiver[Self => U]>` to support receivers like
+// `self: Wrapper<Self>`.
+#[allow(dead_code)]
+fn receiver_is_dispatchable<'tcx>(
+ tcx: TyCtxt<'tcx>,
+ method: &ty::AssocItem,
+ receiver_ty: Ty<'tcx>,
+) -> bool {
+ debug!("receiver_is_dispatchable: method = {:?}, receiver_ty = {:?}", method, receiver_ty);
+
+ let traits = (tcx.lang_items().unsize_trait(), tcx.lang_items().dispatch_from_dyn_trait());
+ let (unsize_did, dispatch_from_dyn_did) = if let (Some(u), Some(cu)) = traits {
+ (u, cu)
+ } else {
+ debug!("receiver_is_dispatchable: Missing Unsize or DispatchFromDyn traits");
+ return false;
+ };
+
+    // The type `U` in the query: use a bogus type parameter with index `u32::MAX`
+    // to mimic a `forall(U)` query for now.
+    // FIXME(mikeyhew) this is a total hack. Once object_safe_for_dispatch is stabilized, we can
+    // replace this with `dyn Trait`
+ let unsized_self_ty: Ty<'tcx> =
+ tcx.mk_ty_param(::std::u32::MAX, Symbol::intern("RustaceansAreAwesome"));
+
+ // `Receiver[Self => U]`
+ let unsized_receiver_ty =
+ receiver_for_self_ty(tcx, receiver_ty, unsized_self_ty, method.def_id);
+
+ // create a modified param env, with `Self: Unsize<U>` and `U: Trait` added to caller bounds
+ // `U: ?Sized` is already implied here
+ let param_env = {
+ let mut param_env = tcx.param_env(method.def_id);
+
+ // Self: Unsize<U>
+ let unsize_predicate = ty::TraitRef {
+ def_id: unsize_did,
+ substs: tcx.mk_substs_trait(tcx.types.self_param, &[unsized_self_ty.into()]),
+ }
+ .without_const()
+ .to_predicate();
+
+ // U: Trait<Arg1, ..., ArgN>
+ let trait_predicate = {
+ let substs =
+ InternalSubsts::for_item(tcx, method.container.assert_trait(), |param, _| {
+ if param.index == 0 {
+ unsized_self_ty.into()
+ } else {
+ tcx.mk_param_from_def(param)
+ }
+ });
+
+            // Note: this must be the trait's own `DefId` (the substs above are
+            // built for the trait), not `unsize_did`.
+            ty::TraitRef { def_id: method.container.assert_trait(), substs }
+                .without_const()
+                .to_predicate()
+ };
+
+ let caller_bounds: Vec<Predicate<'tcx>> = param_env
+ .caller_bounds
+ .iter()
+ .cloned()
+ .chain(iter::once(unsize_predicate))
+ .chain(iter::once(trait_predicate))
+ .collect();
+
+ param_env.caller_bounds = tcx.intern_predicates(&caller_bounds);
+
+ param_env
+ };
+
+ // Receiver: DispatchFromDyn<Receiver[Self => U]>
+ let obligation = {
+ let predicate = ty::TraitRef {
+ def_id: dispatch_from_dyn_did,
+ substs: tcx.mk_substs_trait(receiver_ty, &[unsized_receiver_ty.into()]),
+ }
+ .without_const()
+ .to_predicate();
+
+ Obligation::new(ObligationCause::dummy(), param_env, predicate)
+ };
+
+ tcx.infer_ctxt().enter(|ref infcx| {
+ // the receiver is dispatchable iff the obligation holds
+ infcx.predicate_must_hold_modulo_regions(&obligation)
+ })
+}
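To ground `receiver_is_dispatchable`, here is a hypothetical example (not part of this patch) of a dispatchable non-reference receiver on stable Rust: `self: Rc<Self>` satisfies the required `DispatchFromDyn` obligation, so the method can be called through a trait object, whereas a doubly-indirect receiver such as `self: &Rc<Self>` would be rejected with `UndispatchableReceiver`.

```rust
use std::rc::Rc;

trait Counter {
    // `self: Rc<Self>` is a valid, dispatchable receiver: only one level
    // of pointer indirection stands between the caller and `Self`.
    fn value(self: Rc<Self>) -> u32;
}

struct Fixed(u32);

impl Counter for Fixed {
    fn value(self: Rc<Self>) -> u32 {
        self.0
    }
}

fn main() {
    let c: Rc<dyn Counter> = Rc::new(Fixed(7));
    // Dynamic dispatch through the `Rc` receiver.
    assert_eq!(c.value(), 7);
    println!("ok");
}
```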
+
+fn contains_illegal_self_type_reference<'tcx>(
+ tcx: TyCtxt<'tcx>,
+ trait_def_id: DefId,
+ ty: Ty<'tcx>,
+) -> bool {
+ // This is somewhat subtle. In general, we want to forbid
+ // references to `Self` in the argument and return types,
+ // since the value of `Self` is erased. However, there is one
+ // exception: it is ok to reference `Self` in order to access
+ // an associated type of the current trait, since we retain
+ // the value of those associated types in the object type
+ // itself.
+ //
+ // ```rust
+ // trait SuperTrait {
+ // type X;
+ // }
+ //
+ // trait Trait : SuperTrait {
+ // type Y;
+ // fn foo(&self, x: Self) // bad
+ // fn foo(&self) -> Self // bad
+ // fn foo(&self) -> Option<Self> // bad
+ // fn foo(&self) -> Self::Y // OK, desugars to next example
+ // fn foo(&self) -> <Self as Trait>::Y // OK
+ // fn foo(&self) -> Self::X // OK, desugars to next example
+ // fn foo(&self) -> <Self as SuperTrait>::X // OK
+ // }
+ // ```
+ //
+ // However, it is not as simple as allowing `Self` in a projected
+ // type, because there are illegal ways to use `Self` as well:
+ //
+ // ```rust
+ // trait Trait : SuperTrait {
+ // ...
+ // fn foo(&self) -> <Self as SomeOtherTrait>::X;
+ // }
+ // ```
+ //
+ // Here we will not have the type of `X` recorded in the
+ // object type, and we cannot resolve `Self as SomeOtherTrait`
+ // without knowing what `Self` is.
+
+ let mut supertraits: Option<Vec<ty::PolyTraitRef<'tcx>>> = None;
+ let mut error = false;
+ let self_ty = tcx.types.self_param;
+ ty.maybe_walk(|ty| {
+ match ty.kind {
+ ty::Param(_) => {
+ if ty == self_ty {
+ error = true;
+ }
+
+ false // no contained types to walk
+ }
+
+ ty::Projection(ref data) => {
+ // This is a projected type `<Foo as SomeTrait>::X`.
+
+ // Compute supertraits of current trait lazily.
+ if supertraits.is_none() {
+ let trait_ref = ty::Binder::bind(ty::TraitRef::identity(tcx, trait_def_id));
+ supertraits = Some(traits::supertraits(tcx, trait_ref).collect());
+ }
+
+ // Determine whether the trait reference `Foo as
+ // SomeTrait` is in fact a supertrait of the
+ // current trait. In that case, this type is
+ // legal, because the type `X` will be specified
+ // in the object type. Note that we can just use
+ // direct equality here because all of these types
+ // are part of the formal parameter listing, and
+ // hence there should be no inference variables.
+ let projection_trait_ref = ty::Binder::bind(data.trait_ref(tcx));
+ let is_supertrait_of_current_trait =
+ supertraits.as_ref().unwrap().contains(&projection_trait_ref);
+
+ if is_supertrait_of_current_trait {
+ false // do not walk contained types, do not report error, do collect $200
+ } else {
+ true // DO walk contained types, POSSIBLY reporting an error
+ }
+ }
+
+ _ => true, // walk contained types, if any
+ }
+ });
+
+ error
+}
+
+pub fn provide(providers: &mut ty::query::Providers<'_>) {
+ *providers = ty::query::Providers { object_safety_violations, ..*providers };
+}
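The `generics_require_sized_self` exemption checked above has a visible effect in user code. A hypothetical example (not part of this patch): a generic method would normally make a trait not object safe, but a `where Self: Sized` bound exempts it, which is exactly the structured suggestion `virtual_call_violation_for_method` proposes.

```rust
trait Greet {
    fn name(&self) -> String;

    // Without `where Self: Sized`, this generic method would trigger
    // `MethodViolationCode::Generic` and make the trait not object safe.
    fn greet_with<F: Fn(&str) -> String>(&self, f: F) -> String
    where
        Self: Sized,
    {
        f(&self.name())
    }
}

struct World;

impl Greet for World {
    fn name(&self) -> String {
        "world".to_string()
    }
}

fn main() {
    // The trait can still be used as a trait object...
    let obj: Box<dyn Greet> = Box::new(World);
    assert_eq!(obj.name(), "world");
    // ...and the sized-only method remains callable on concrete types.
    assert_eq!(World.greet_with(|n| format!("hello, {}", n)), "hello, world");
    println!("ok");
}
```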
--- /dev/null
+use fmt_macros::{Parser, Piece, Position};
+
+use rustc::ty::{self, GenericParamDefKind, TyCtxt};
+use rustc::util::common::ErrorReported;
+
+use rustc_ast::ast::{MetaItem, NestedMetaItem};
+use rustc_attr as attr;
+use rustc_data_structures::fx::FxHashMap;
+use rustc_errors::struct_span_err;
+use rustc_hir::def_id::DefId;
+use rustc_span::symbol::{kw, sym, Symbol};
+use rustc_span::Span;
+
+#[derive(Clone, Debug)]
+pub struct OnUnimplementedFormatString(Symbol);
+
+#[derive(Debug)]
+pub struct OnUnimplementedDirective {
+ pub condition: Option<MetaItem>,
+ pub subcommands: Vec<OnUnimplementedDirective>,
+ pub message: Option<OnUnimplementedFormatString>,
+ pub label: Option<OnUnimplementedFormatString>,
+ pub note: Option<OnUnimplementedFormatString>,
+ pub enclosing_scope: Option<OnUnimplementedFormatString>,
+}
+
+#[derive(Default)]
+pub struct OnUnimplementedNote {
+ pub message: Option<String>,
+ pub label: Option<String>,
+ pub note: Option<String>,
+ pub enclosing_scope: Option<String>,
+}
+
+fn parse_error(
+ tcx: TyCtxt<'_>,
+ span: Span,
+ message: &str,
+ label: &str,
+ note: Option<&str>,
+) -> ErrorReported {
+ let mut diag = struct_span_err!(tcx.sess, span, E0232, "{}", message);
+ diag.span_label(span, label);
+ if let Some(note) = note {
+ diag.note(note);
+ }
+ diag.emit();
+ ErrorReported
+}
+
+impl<'tcx> OnUnimplementedDirective {
+ fn parse(
+ tcx: TyCtxt<'tcx>,
+ trait_def_id: DefId,
+ items: &[NestedMetaItem],
+ span: Span,
+ is_root: bool,
+ ) -> Result<Self, ErrorReported> {
+ let mut errored = false;
+ let mut item_iter = items.iter();
+
+ let condition = if is_root {
+ None
+ } else {
+ let cond = item_iter
+ .next()
+ .ok_or_else(|| {
+ parse_error(
+ tcx,
+ span,
+ "empty `on`-clause in `#[rustc_on_unimplemented]`",
+ "empty on-clause here",
+ None,
+ )
+ })?
+ .meta_item()
+ .ok_or_else(|| {
+ parse_error(
+ tcx,
+ span,
+ "invalid `on`-clause in `#[rustc_on_unimplemented]`",
+ "invalid on-clause here",
+ None,
+ )
+ })?;
+ attr::eval_condition(cond, &tcx.sess.parse_sess, &mut |_| true);
+ Some(cond.clone())
+ };
+
+ let mut message = None;
+ let mut label = None;
+ let mut note = None;
+ let mut enclosing_scope = None;
+ let mut subcommands = vec![];
+
+ let parse_value = |value_str| {
+ OnUnimplementedFormatString::try_parse(tcx, trait_def_id, value_str, span).map(Some)
+ };
+
+ for item in item_iter {
+ if item.check_name(sym::message) && message.is_none() {
+ if let Some(message_) = item.value_str() {
+ message = parse_value(message_)?;
+ continue;
+ }
+ } else if item.check_name(sym::label) && label.is_none() {
+ if let Some(label_) = item.value_str() {
+ label = parse_value(label_)?;
+ continue;
+ }
+ } else if item.check_name(sym::note) && note.is_none() {
+ if let Some(note_) = item.value_str() {
+ note = parse_value(note_)?;
+ continue;
+ }
+ } else if item.check_name(sym::enclosing_scope) && enclosing_scope.is_none() {
+ if let Some(enclosing_scope_) = item.value_str() {
+ enclosing_scope = parse_value(enclosing_scope_)?;
+ continue;
+ }
+ } else if item.check_name(sym::on)
+ && is_root
+ && message.is_none()
+ && label.is_none()
+ && note.is_none()
+ {
+ if let Some(items) = item.meta_item_list() {
+ if let Ok(subcommand) =
+ Self::parse(tcx, trait_def_id, &items, item.span(), false)
+ {
+ subcommands.push(subcommand);
+ } else {
+ errored = true;
+ }
+ continue;
+ }
+ }
+
+ // nothing found
+ parse_error(
+ tcx,
+ item.span(),
+ "this attribute must have a valid value",
+ "expected value here",
+ Some(r#"eg `#[rustc_on_unimplemented(message="foo")]`"#),
+ );
+ }
+
+ if errored {
+ Err(ErrorReported)
+ } else {
+ Ok(OnUnimplementedDirective {
+ condition,
+ subcommands,
+ message,
+ label,
+ note,
+ enclosing_scope,
+ })
+ }
+ }
+
+ pub fn of_item(
+ tcx: TyCtxt<'tcx>,
+ trait_def_id: DefId,
+ impl_def_id: DefId,
+ ) -> Result<Option<Self>, ErrorReported> {
+ let attrs = tcx.get_attrs(impl_def_id);
+
+ let attr = if let Some(item) = attr::find_by_name(&attrs, sym::rustc_on_unimplemented) {
+ item
+ } else {
+ return Ok(None);
+ };
+
+ let result = if let Some(items) = attr.meta_item_list() {
+ Self::parse(tcx, trait_def_id, &items, attr.span, true).map(Some)
+ } else if let Some(value) = attr.value_str() {
+ Ok(Some(OnUnimplementedDirective {
+ condition: None,
+ message: None,
+ subcommands: vec![],
+ label: Some(OnUnimplementedFormatString::try_parse(
+ tcx,
+ trait_def_id,
+ value,
+ attr.span,
+ )?),
+ note: None,
+ enclosing_scope: None,
+ }))
+ } else {
+ return Err(ErrorReported);
+ };
+ debug!("of_item({:?}/{:?}) = {:?}", trait_def_id, impl_def_id, result);
+ result
+ }
+
+ pub fn evaluate(
+ &self,
+ tcx: TyCtxt<'tcx>,
+ trait_ref: ty::TraitRef<'tcx>,
+ options: &[(Symbol, Option<String>)],
+ ) -> OnUnimplementedNote {
+ let mut message = None;
+ let mut label = None;
+ let mut note = None;
+ let mut enclosing_scope = None;
+ info!("evaluate({:?}, trait_ref={:?}, options={:?})", self, trait_ref, options);
+
+ for command in self.subcommands.iter().chain(Some(self)).rev() {
+ if let Some(ref condition) = command.condition {
+ if !attr::eval_condition(condition, &tcx.sess.parse_sess, &mut |c| {
+ c.ident().map_or(false, |ident| {
+ options.contains(&(ident.name, c.value_str().map(|s| s.to_string())))
+ })
+ }) {
+ debug!("evaluate: skipping {:?} due to condition", command);
+ continue;
+ }
+ }
+ debug!("evaluate: {:?} succeeded", command);
+ if let Some(ref message_) = command.message {
+ message = Some(message_.clone());
+ }
+
+ if let Some(ref label_) = command.label {
+ label = Some(label_.clone());
+ }
+
+ if let Some(ref note_) = command.note {
+ note = Some(note_.clone());
+ }
+
+ if let Some(ref enclosing_scope_) = command.enclosing_scope {
+ enclosing_scope = Some(enclosing_scope_.clone());
+ }
+ }
+
+ let options: FxHashMap<Symbol, String> =
+ options.iter().filter_map(|(k, v)| v.as_ref().map(|v| (*k, v.to_owned()))).collect();
+ OnUnimplementedNote {
+ label: label.map(|l| l.format(tcx, trait_ref, &options)),
+ message: message.map(|m| m.format(tcx, trait_ref, &options)),
+ note: note.map(|n| n.format(tcx, trait_ref, &options)),
+ enclosing_scope: enclosing_scope.map(|e_s| e_s.format(tcx, trait_ref, &options)),
+ }
+ }
+}
+
+impl<'tcx> OnUnimplementedFormatString {
+ fn try_parse(
+ tcx: TyCtxt<'tcx>,
+ trait_def_id: DefId,
+ from: Symbol,
+ err_sp: Span,
+ ) -> Result<Self, ErrorReported> {
+ let result = OnUnimplementedFormatString(from);
+ result.verify(tcx, trait_def_id, err_sp)?;
+ Ok(result)
+ }
+
+ fn verify(
+ &self,
+ tcx: TyCtxt<'tcx>,
+ trait_def_id: DefId,
+ span: Span,
+ ) -> Result<(), ErrorReported> {
+ let name = tcx.item_name(trait_def_id);
+ let generics = tcx.generics_of(trait_def_id);
+ let s = self.0.as_str();
+ let parser = Parser::new(&s, None, vec![], false);
+ let mut result = Ok(());
+ for token in parser {
+ match token {
+ Piece::String(_) => (), // Normal string, no need to check it
+ Piece::NextArgument(a) => match a.position {
+ // `{Self}` is allowed
+ Position::ArgumentNamed(s) if s == kw::SelfUpper => (),
+ // `{ThisTraitsName}` is allowed
+ Position::ArgumentNamed(s) if s == name => (),
+ // `{from_method}` is allowed
+ Position::ArgumentNamed(s) if s == sym::from_method => (),
+ // `{from_desugaring}` is allowed
+ Position::ArgumentNamed(s) if s == sym::from_desugaring => (),
+ // `{ItemContext}` is allowed
+ Position::ArgumentNamed(s) if s == sym::item_context => (),
+ // So is `{A}` if A is a type parameter
+ Position::ArgumentNamed(s) => {
+ match generics.params.iter().find(|param| param.name == s) {
+ Some(_) => (),
+ None => {
+ struct_span_err!(
+ tcx.sess,
+ span,
+ E0230,
+ "there is no parameter `{}` on trait `{}`",
+ s,
+ name
+ )
+ .emit();
+ result = Err(ErrorReported);
+ }
+ }
+ }
+ // `{:1}` and `{}` are not to be used
+ Position::ArgumentIs(_) | Position::ArgumentImplicitlyIs(_) => {
+ struct_span_err!(
+ tcx.sess,
+ span,
+ E0231,
+ "only named substitution parameters are allowed"
+ )
+ .emit();
+ result = Err(ErrorReported);
+ }
+ },
+ }
+ }
+
+ result
+ }
+
+ pub fn format(
+ &self,
+ tcx: TyCtxt<'tcx>,
+ trait_ref: ty::TraitRef<'tcx>,
+ options: &FxHashMap<Symbol, String>,
+ ) -> String {
+ let name = tcx.item_name(trait_ref.def_id);
+ let trait_str = tcx.def_path_str(trait_ref.def_id);
+ let generics = tcx.generics_of(trait_ref.def_id);
+ let generic_map = generics
+ .params
+ .iter()
+ .filter_map(|param| {
+ let value = match param.kind {
+ GenericParamDefKind::Type { .. } | GenericParamDefKind::Const => {
+ trait_ref.substs[param.index as usize].to_string()
+ }
+ GenericParamDefKind::Lifetime => return None,
+ };
+ let name = param.name;
+ Some((name, value))
+ })
+ .collect::<FxHashMap<Symbol, String>>();
+ let empty_string = String::new();
+
+ let s = self.0.as_str();
+ let parser = Parser::new(&s, None, vec![], false);
+ let item_context = (options.get(&sym::item_context)).unwrap_or(&empty_string);
+ parser
+ .map(|p| match p {
+ Piece::String(s) => s,
+ Piece::NextArgument(a) => match a.position {
+ Position::ArgumentNamed(s) => match generic_map.get(&s) {
+ Some(val) => val,
+ None if s == name => &trait_str,
+ None => {
+ if let Some(val) = options.get(&s) {
+ val
+ } else if s == sym::from_desugaring || s == sym::from_method {
+ // don't break messages using these two arguments incorrectly
+ &empty_string
+ } else if s == sym::item_context {
+ &item_context
+ } else {
+ bug!(
+ "broken on_unimplemented {:?} for {:?}: \
+ no argument matching {:?}",
+ self.0,
+ trait_ref,
+ s
+ )
+ }
+ }
+ },
+ _ => bug!("broken on_unimplemented {:?} - bad format arg", self.0),
+ },
+ })
+ .collect()
+ }
+}
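The substitution performed by `OnUnimplementedFormatString::format` can be sketched outside the compiler. This is a simplified, hypothetical model (the function name and the fallback behavior for unbound parameters are assumptions of the sketch): named arguments such as `{Self}` or a trait type parameter are replaced from a map, while names without a binding are left verbatim here (the compiler instead treats an unknown name as a `bug!`).

```rust
// Hypothetical sketch (not compiler code) of the `{Name}` substitution
// behavior used by on_unimplemented format strings.
use std::collections::HashMap;

fn format_on_unimplemented(template: &str, args: &HashMap<&str, &str>) -> String {
    let mut out = String::new();
    let mut rest = template;
    while let Some(open) = rest.find('{') {
        out.push_str(&rest[..open]);
        let close = open + rest[open..].find('}').expect("unclosed brace");
        let name = &rest[open + 1..close];
        match args.get(name) {
            // Bound names are substituted, e.g. `{Self}` -> concrete type.
            Some(val) => out.push_str(val),
            // Unbound names stay verbatim in this sketch.
            None => out.push_str(&rest[open..=close]),
        }
        rest = &rest[close + 1..];
    }
    out.push_str(rest);
    out
}

fn main() {
    let mut args = HashMap::new();
    args.insert("Self", "String");
    args.insert("T", "u32");
    let msg = format_on_unimplemented("`{Self}` cannot be converted to `{T}`", &args);
    assert_eq!(msg, "`String` cannot be converted to `u32`");
    println!("{}", msg);
}
```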
--- /dev/null
+//! Code for projecting associated types out of trait references.
+
+use super::elaborate_predicates;
+use super::specialization_graph;
+use super::translate_substs;
+use super::util;
+use super::MismatchedProjectionTypes;
+use super::Obligation;
+use super::ObligationCause;
+use super::PredicateObligation;
+use super::Selection;
+use super::SelectionContext;
+use super::SelectionError;
+use super::{Normalized, NormalizedTy, ProjectionCacheEntry, ProjectionCacheKey};
+use super::{VtableClosureData, VtableFnPointerData, VtableGeneratorData, VtableImplData};
+
+use crate::infer::type_variable::{TypeVariableOrigin, TypeVariableOriginKind};
+use crate::infer::{InferCtxt, InferOk, LateBoundRegionConversionTime};
+use crate::traits::error_reporting::InferCtxtExt;
+use rustc::ty::fold::{TypeFoldable, TypeFolder};
+use rustc::ty::subst::{InternalSubsts, Subst};
+use rustc::ty::{self, ToPolyTraitRef, ToPredicate, Ty, TyCtxt, WithConstness};
+use rustc_ast::ast::Ident;
+use rustc_errors::ErrorReported;
+use rustc_hir::def_id::DefId;
+use rustc_span::symbol::sym;
+use rustc_span::DUMMY_SP;
+
+pub use rustc::traits::Reveal;
+
+pub type PolyProjectionObligation<'tcx> = Obligation<'tcx, ty::PolyProjectionPredicate<'tcx>>;
+
+pub type ProjectionObligation<'tcx> = Obligation<'tcx, ty::ProjectionPredicate<'tcx>>;
+
+pub type ProjectionTyObligation<'tcx> = Obligation<'tcx, ty::ProjectionTy<'tcx>>;
+
+/// When attempting to resolve `<T as TraitRef>::Name` ...
+#[derive(Debug)]
+pub enum ProjectionTyError<'tcx> {
+ /// ...we found multiple sources of information and couldn't resolve the ambiguity.
+ TooManyCandidates,
+
+ /// ...an error occurred matching `T : TraitRef`
+ TraitSelectionError(SelectionError<'tcx>),
+}
+
+#[derive(PartialEq, Eq, Debug)]
+enum ProjectionTyCandidate<'tcx> {
+ // from a where-clause in the env or object type
+ ParamEnv(ty::PolyProjectionPredicate<'tcx>),
+
+ // from the definition of `Trait` when you have something like <<A as Trait>::B as Trait2>::C
+ TraitDef(ty::PolyProjectionPredicate<'tcx>),
+
+    // from an "impl" (or a "pseudo-impl" returned by select)
+ Select(Selection<'tcx>),
+}
+
+enum ProjectionTyCandidateSet<'tcx> {
+ None,
+ Single(ProjectionTyCandidate<'tcx>),
+ Ambiguous,
+ Error(SelectionError<'tcx>),
+}
+
+impl<'tcx> ProjectionTyCandidateSet<'tcx> {
+ fn mark_ambiguous(&mut self) {
+ *self = ProjectionTyCandidateSet::Ambiguous;
+ }
+
+ fn mark_error(&mut self, err: SelectionError<'tcx>) {
+ *self = ProjectionTyCandidateSet::Error(err);
+ }
+
+ // Returns true if the push was successful, or false if the candidate
+ // was discarded -- this could be because of ambiguity, or because
+ // a higher-priority candidate is already there.
+ fn push_candidate(&mut self, candidate: ProjectionTyCandidate<'tcx>) -> bool {
+ use self::ProjectionTyCandidate::*;
+ use self::ProjectionTyCandidateSet::*;
+
+ // This wacky variable is just used to try and
+ // make code readable and avoid confusing paths.
+ // It is assigned a "value" of `()` only on those
+ // paths in which we wish to convert `*self` to
+ // ambiguous (and return false, because the candidate
+ // was not used). On other paths, it is not assigned,
+ // and hence if those paths *could* reach the code that
+ // comes after the match, this fn would not compile.
+ let convert_to_ambiguous;
+
+ match self {
+ None => {
+ *self = Single(candidate);
+ return true;
+ }
+
+ Single(current) => {
+                // Duplicates can happen inside ParamEnv. In that case, we
+                // perform a lazy deduplication.
+ if current == &candidate {
+ return false;
+ }
+
+ // Prefer where-clauses. As in select, if there are multiple
+ // candidates, we prefer where-clause candidates over impls. This
+ // may seem a bit surprising, since impls are the source of
+ // "truth" in some sense, but in fact some of the impls that SEEM
+ // applicable are not, because of nested obligations. Where
+ // clauses are the safer choice. See the comment on
+ // `select::SelectionCandidate` and #21974 for more details.
+ match (current, candidate) {
+ (ParamEnv(..), ParamEnv(..)) => convert_to_ambiguous = (),
+ (ParamEnv(..), _) => return false,
+ (_, ParamEnv(..)) => unreachable!(),
+ (_, _) => convert_to_ambiguous = (),
+ }
+ }
+
+ Ambiguous | Error(..) => {
+ return false;
+ }
+ }
+
+ // We only ever get here when we moved from a single candidate
+ // to ambiguous.
+ let () = convert_to_ambiguous;
+ *self = Ambiguous;
+ false
+ }
+}
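The where-clause preference encoded in `push_candidate` has a directly observable, user-level consequence: inside a generic function, a bound like `T: Iterator<Item = u32>` becomes a `ParamEnv` candidate, so `T::Item` normalizes from the where-clause alone, without selecting any concrete impl. A minimal sketch (the function name is illustrative, not part of the compiler):

```rust
// `T::Item` is resolved to `u32` purely from the where-clause bound
// (a ParamEnv projection candidate), not from any particular impl.
fn sum_items<T: Iterator<Item = u32>>(iter: T) -> u32 {
    // `T::Item` normalizes to `u32` via the param-env projection predicate.
    iter.sum::<T::Item>()
}

fn main() {
    assert_eq!(sum_items(vec![1u32, 2, 3].into_iter()), 6);
}
```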
+
+/// Evaluates constraints of the form:
+///
+/// for<...> <T as Trait>::U == V
+///
+/// If successful, this may result in additional obligations.
+pub fn poly_project_and_unify_type<'cx, 'tcx>(
+ selcx: &mut SelectionContext<'cx, 'tcx>,
+ obligation: &PolyProjectionObligation<'tcx>,
+) -> Result<Option<Vec<PredicateObligation<'tcx>>>, MismatchedProjectionTypes<'tcx>> {
+ debug!("poly_project_and_unify_type(obligation={:?})", obligation);
+
+ let infcx = selcx.infcx();
+ infcx.commit_if_ok(|snapshot| {
+ let (placeholder_predicate, placeholder_map) =
+ infcx.replace_bound_vars_with_placeholders(&obligation.predicate);
+
+ let placeholder_obligation = obligation.with(placeholder_predicate);
+ let result = project_and_unify_type(selcx, &placeholder_obligation)?;
+ infcx
+ .leak_check(false, &placeholder_map, snapshot)
+ .map_err(|err| MismatchedProjectionTypes { err })?;
+ Ok(result)
+ })
+}
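For intuition, the `for<...>` form in the doc comment above is what a higher-ranked closure bound desugars to: `F: for<'a> Fn(&'a str) -> &'a str` carries a poly-projection constraint of roughly the shape `for<'a> <F as FnOnce<(&'a str,)>>::Output == &'a str`. A user-level sketch (the function name is illustrative):

```rust
// The higher-ranked bound below desugars to a trait predicate plus a
// poly-projection constraint on the closure's `Output` associated type.
fn apply_identity<F>(f: F) -> &'static str
where
    F: for<'a> Fn(&'a str) -> &'a str,
{
    f("hello")
}

fn main() {
    assert_eq!(apply_identity(|s: &str| s), "hello");
}
```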
+
+/// Evaluates constraints of the form:
+///
+/// <T as Trait>::U == V
+///
+/// If successful, this may result in additional obligations.
+fn project_and_unify_type<'cx, 'tcx>(
+ selcx: &mut SelectionContext<'cx, 'tcx>,
+ obligation: &ProjectionObligation<'tcx>,
+) -> Result<Option<Vec<PredicateObligation<'tcx>>>, MismatchedProjectionTypes<'tcx>> {
+ debug!("project_and_unify_type(obligation={:?})", obligation);
+
+ let mut obligations = vec![];
+ let normalized_ty = match opt_normalize_projection_type(
+ selcx,
+ obligation.param_env,
+ obligation.predicate.projection_ty,
+ obligation.cause.clone(),
+ obligation.recursion_depth,
+ &mut obligations,
+ ) {
+ Some(n) => n,
+ None => return Ok(None),
+ };
+
+ debug!(
+ "project_and_unify_type: normalized_ty={:?} obligations={:?}",
+ normalized_ty, obligations
+ );
+
+ let infcx = selcx.infcx();
+ match infcx
+ .at(&obligation.cause, obligation.param_env)
+ .eq(normalized_ty, obligation.predicate.ty)
+ {
+ Ok(InferOk { obligations: inferred_obligations, value: () }) => {
+ obligations.extend(inferred_obligations);
+ Ok(Some(obligations))
+ }
+ Err(err) => {
+ debug!("project_and_unify_type: equating types encountered error {:?}", err);
+ Err(MismatchedProjectionTypes { err })
+ }
+ }
+}
+
+/// Normalizes any associated type projections in `value`, replacing
+/// them with a fully resolved type where possible. The return value
+/// combines the normalized result and any additional obligations that
+/// were incurred as result.
+pub fn normalize<'a, 'b, 'tcx, T>(
+ selcx: &'a mut SelectionContext<'b, 'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ cause: ObligationCause<'tcx>,
+ value: &T,
+) -> Normalized<'tcx, T>
+where
+ T: TypeFoldable<'tcx>,
+{
+ let mut obligations = Vec::new();
+ let value = normalize_to(selcx, param_env, cause, value, &mut obligations);
+ Normalized { value, obligations }
+}
+
+pub fn normalize_to<'a, 'b, 'tcx, T>(
+ selcx: &'a mut SelectionContext<'b, 'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ cause: ObligationCause<'tcx>,
+ value: &T,
+ obligations: &mut Vec<PredicateObligation<'tcx>>,
+) -> T
+where
+ T: TypeFoldable<'tcx>,
+{
+ normalize_with_depth_to(selcx, param_env, cause, 0, value, obligations)
+}
+
+/// As `normalize`, but with a custom depth.
+pub fn normalize_with_depth<'a, 'b, 'tcx, T>(
+ selcx: &'a mut SelectionContext<'b, 'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ cause: ObligationCause<'tcx>,
+ depth: usize,
+ value: &T,
+) -> Normalized<'tcx, T>
+where
+ T: TypeFoldable<'tcx>,
+{
+ let mut obligations = Vec::new();
+ let value = normalize_with_depth_to(selcx, param_env, cause, depth, value, &mut obligations);
+ Normalized { value, obligations }
+}
+
+pub fn normalize_with_depth_to<'a, 'b, 'tcx, T>(
+ selcx: &'a mut SelectionContext<'b, 'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ cause: ObligationCause<'tcx>,
+ depth: usize,
+ value: &T,
+ obligations: &mut Vec<PredicateObligation<'tcx>>,
+) -> T
+where
+ T: TypeFoldable<'tcx>,
+{
+ debug!("normalize_with_depth(depth={}, value={:?})", depth, value);
+ let mut normalizer = AssocTypeNormalizer::new(selcx, param_env, cause, depth, obligations);
+ let result = normalizer.fold(value);
+ debug!(
+ "normalize_with_depth: depth={} result={:?} with {} obligations",
+ depth,
+ result,
+ normalizer.obligations.len()
+ );
+ debug!("normalize_with_depth: depth={} obligations={:?}", depth, normalizer.obligations);
+ result
+}
+
+struct AssocTypeNormalizer<'a, 'b, 'tcx> {
+ selcx: &'a mut SelectionContext<'b, 'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ cause: ObligationCause<'tcx>,
+ obligations: &'a mut Vec<PredicateObligation<'tcx>>,
+ depth: usize,
+}
+
+impl<'a, 'b, 'tcx> AssocTypeNormalizer<'a, 'b, 'tcx> {
+ fn new(
+ selcx: &'a mut SelectionContext<'b, 'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ cause: ObligationCause<'tcx>,
+ depth: usize,
+ obligations: &'a mut Vec<PredicateObligation<'tcx>>,
+ ) -> AssocTypeNormalizer<'a, 'b, 'tcx> {
+ AssocTypeNormalizer { selcx, param_env, cause, obligations, depth }
+ }
+
+ fn fold<T: TypeFoldable<'tcx>>(&mut self, value: &T) -> T {
+ let value = self.selcx.infcx().resolve_vars_if_possible(value);
+
+ if !value.has_projections() { value } else { value.fold_with(self) }
+ }
+}
+
+impl<'a, 'b, 'tcx> TypeFolder<'tcx> for AssocTypeNormalizer<'a, 'b, 'tcx> {
+ fn tcx<'c>(&'c self) -> TyCtxt<'tcx> {
+ self.selcx.tcx()
+ }
+
+ fn fold_ty(&mut self, ty: Ty<'tcx>) -> Ty<'tcx> {
+ if !ty.has_projections() {
+ return ty;
+ }
+ // We don't want to normalize associated types that occur inside of region
+ // binders, because they may contain bound regions, and we can't cope with that.
+ //
+ // Example:
+ //
+ // for<'a> fn(<T as Foo<&'a>>::A)
+ //
+ // Instead of normalizing `<T as Foo<&'a>>::A` here, we'll
+ // normalize it when we instantiate those bound regions (which
+ // should occur eventually).
+
+ let ty = ty.super_fold_with(self);
+ match ty.kind {
+ ty::Opaque(def_id, substs) if !substs.has_escaping_bound_vars() => {
+ // (*)
+ // Only normalize `impl Trait` after type-checking, usually in codegen.
+ match self.param_env.reveal {
+ Reveal::UserFacing => ty,
+
+ Reveal::All => {
+ let recursion_limit = *self.tcx().sess.recursion_limit.get();
+ if self.depth >= recursion_limit {
+ let obligation = Obligation::with_depth(
+ self.cause.clone(),
+ recursion_limit,
+ self.param_env,
+ ty,
+ );
+ self.selcx.infcx().report_overflow_error(&obligation, true);
+ }
+
+ let generic_ty = self.tcx().type_of(def_id);
+ let concrete_ty = generic_ty.subst(self.tcx(), substs);
+ self.depth += 1;
+ let folded_ty = self.fold_ty(concrete_ty);
+ self.depth -= 1;
+ folded_ty
+ }
+ }
+ }
+
+ ty::Projection(ref data) if !data.has_escaping_bound_vars() => {
+ // (*)
+
+ // (*) This is kind of hacky -- we need to be able to
+ // handle normalization within binders because
+            // otherwise we wind up needing to normalize when doing
+ // trait matching (since you can have a trait
+ // obligation like `for<'a> T::B : Fn(&'a int)`), but
+            // we can't normalize with bound regions in scope. For
+            // now we just ignore binders and only normalize
+ // if all bound regions are gone (and then we still
+ // have to renormalize whenever we instantiate a
+ // binder). It would be better to normalize in a
+ // binding-aware fashion.
+
+ let normalized_ty = normalize_projection_type(
+ self.selcx,
+ self.param_env,
+ *data,
+ self.cause.clone(),
+ self.depth,
+ &mut self.obligations,
+ );
+ debug!(
+ "AssocTypeNormalizer: depth={} normalized {:?} to {:?}, \
+ now with {} obligations",
+ self.depth,
+ ty,
+ normalized_ty,
+ self.obligations.len()
+ );
+ normalized_ty
+ }
+
+ _ => ty,
+ }
+ }
+
+ fn fold_const(&mut self, constant: &'tcx ty::Const<'tcx>) -> &'tcx ty::Const<'tcx> {
+ constant.eval(self.selcx.tcx(), self.param_env)
+ }
+}
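The effect of `AssocTypeNormalizer` is visible at the source level: when the self type of a projection is fully known, the projection is replaced with the concrete type it resolves to. A minimal sketch (the alias and function names are illustrative):

```rust
// When the self type is fully known, the projection normalizes to a
// concrete type: `<Vec<u8> as IntoIterator>::Item` is just `u8`.
type Byte = <Vec<u8> as IntoIterator>::Item;

fn first_byte(v: Vec<u8>) -> Option<Byte> {
    v.into_iter().next()
}

fn main() {
    assert_eq!(first_byte(vec![7, 8]), Some(7u8));
    assert_eq!(first_byte(Vec::new()), None);
}
```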
+
+/// The guts of `normalize`: normalize a specific projection like `<T
+/// as Trait>::Item`. The result is always a type (and possibly
+/// additional obligations). If ambiguity arises, which implies that
+/// there are unresolved type variables in the projection, we will
+/// substitute a fresh type variable `$X` and generate a new
+/// obligation `<T as Trait>::Item == $X` for later.
+pub fn normalize_projection_type<'a, 'b, 'tcx>(
+ selcx: &'a mut SelectionContext<'b, 'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ projection_ty: ty::ProjectionTy<'tcx>,
+ cause: ObligationCause<'tcx>,
+ depth: usize,
+ obligations: &mut Vec<PredicateObligation<'tcx>>,
+) -> Ty<'tcx> {
+ opt_normalize_projection_type(
+ selcx,
+ param_env,
+ projection_ty,
+ cause.clone(),
+ depth,
+ obligations,
+ )
+ .unwrap_or_else(move || {
+ // if we bottom out in ambiguity, create a type variable
+ // and a deferred predicate to resolve this when more type
+ // information is available.
+
+ let tcx = selcx.infcx().tcx;
+ let def_id = projection_ty.item_def_id;
+ let ty_var = selcx.infcx().next_ty_var(TypeVariableOrigin {
+ kind: TypeVariableOriginKind::NormalizeProjectionType,
+ span: tcx.def_span(def_id),
+ });
+ let projection = ty::Binder::dummy(ty::ProjectionPredicate { projection_ty, ty: ty_var });
+ let obligation =
+ Obligation::with_depth(cause, depth + 1, param_env, projection.to_predicate());
+ obligations.push(obligation);
+ ty_var
+ })
+}
+
+/// The guts of `normalize`: normalize a specific projection like `<T
+/// as Trait>::Item`. The result is always a type (and possibly
+/// additional obligations). Returns `None` in the case of ambiguity,
+/// which indicates that there are unbound type variables.
+///
+/// This function used to return `Option<NormalizedTy<'tcx>>`, which contains a
+/// `Ty<'tcx>` and an obligations vector. But that obligation vector was very
+/// often immediately appended to another obligations vector. So now this
+/// function takes an obligations vector and appends to it directly, which is
+/// slightly uglier but avoids the need for an extra short-lived allocation.
+fn opt_normalize_projection_type<'a, 'b, 'tcx>(
+ selcx: &'a mut SelectionContext<'b, 'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ projection_ty: ty::ProjectionTy<'tcx>,
+ cause: ObligationCause<'tcx>,
+ depth: usize,
+ obligations: &mut Vec<PredicateObligation<'tcx>>,
+) -> Option<Ty<'tcx>> {
+ let infcx = selcx.infcx();
+
+ let projection_ty = infcx.resolve_vars_if_possible(&projection_ty);
+ let cache_key = ProjectionCacheKey::new(projection_ty);
+
+ debug!(
+ "opt_normalize_projection_type(\
+ projection_ty={:?}, \
+ depth={})",
+ projection_ty, depth
+ );
+
+ // FIXME(#20304) For now, I am caching here, which is good, but it
+ // means we don't capture the type variables that are created in
+ // the case of ambiguity. Which means we may create a large stream
+ // of such variables. OTOH, if we move the caching up a level, we
+ // would not benefit from caching when proving `T: Trait<U=Foo>`
+ // bounds. It might be the case that we want two distinct caches,
+ // or else another kind of cache entry.
+
+ let cache_result = infcx.inner.borrow_mut().projection_cache.try_start(cache_key);
+ match cache_result {
+ Ok(()) => {}
+ Err(ProjectionCacheEntry::Ambiguous) => {
+ // If we found ambiguity the last time, that means we will continue
+ // to do so until some type in the key changes (and we know it
+ // hasn't, because we just fully resolved it).
+ debug!(
+ "opt_normalize_projection_type: \
+ found cache entry: ambiguous"
+ );
+ return None;
+ }
+ Err(ProjectionCacheEntry::InProgress) => {
+            // If while normalizing A::B, we are asked to normalize
+ // A::B, just return A::B itself. This is a conservative
+ // answer, in the sense that A::B *is* clearly equivalent
+ // to A::B, though there may be a better value we can
+ // find.
+
+ // Under lazy normalization, this can arise when
+ // bootstrapping. That is, imagine an environment with a
+ // where-clause like `A::B == u32`. Now, if we are asked
+ // to normalize `A::B`, we will want to check the
+ // where-clauses in scope. So we will try to unify `A::B`
+ // with `A::B`, which can trigger a recursive
+ // normalization. In that case, I think we will want this code:
+ //
+ // ```
+            //    let ty = selcx.tcx().mk_projection(projection_ty.item_def_id,
+            //                                       projection_ty.substs);
+            //    return Some(NormalizedTy { value: ty, obligations: vec![] });
+ // ```
+
+ debug!(
+ "opt_normalize_projection_type: \
+ found cache entry: in-progress"
+ );
+
+ // But for now, let's classify this as an overflow:
+ let recursion_limit = *selcx.tcx().sess.recursion_limit.get();
+ let obligation =
+ Obligation::with_depth(cause, recursion_limit, param_env, projection_ty);
+ selcx.infcx().report_overflow_error(&obligation, false);
+ }
+ Err(ProjectionCacheEntry::NormalizedTy(ty)) => {
+ // This is the hottest path in this function.
+ //
+ // If we find the value in the cache, then return it along
+ // with the obligations that went along with it. Note
+ // that, when using a fulfillment context, these
+ // obligations could in principle be ignored: they have
+ // already been registered when the cache entry was
+ // created (and hence the new ones will quickly be
+ // discarded as duplicated). But when doing trait
+ // evaluation this is not the case, and dropping the trait
+            // evaluations can cause ICEs (e.g., #43132).
+ debug!(
+ "opt_normalize_projection_type: \
+ found normalized ty `{:?}`",
+ ty
+ );
+
+ // Once we have inferred everything we need to know, we
+ // can ignore the `obligations` from that point on.
+ if infcx.unresolved_type_vars(&ty.value).is_none() {
+ infcx.inner.borrow_mut().projection_cache.complete_normalized(cache_key, &ty);
+ // No need to extend `obligations`.
+ } else {
+ obligations.extend(ty.obligations);
+ }
+
+ obligations.push(get_paranoid_cache_value_obligation(
+ infcx,
+ param_env,
+ projection_ty,
+ cause,
+ depth,
+ ));
+ return Some(ty.value);
+ }
+ Err(ProjectionCacheEntry::Error) => {
+ debug!(
+ "opt_normalize_projection_type: \
+ found error"
+ );
+ let result = normalize_to_error(selcx, param_env, projection_ty, cause, depth);
+ obligations.extend(result.obligations);
+ return Some(result.value);
+ }
+ }
+
+ let obligation = Obligation::with_depth(cause.clone(), depth, param_env, projection_ty);
+ match project_type(selcx, &obligation) {
+ Ok(ProjectedTy::Progress(Progress {
+ ty: projected_ty,
+ obligations: mut projected_obligations,
+ })) => {
+ // if projection succeeded, then what we get out of this
+ // is also non-normalized (consider: it was derived from
+ // an impl, where-clause etc) and hence we must
+ // re-normalize it
+
+ debug!(
+ "opt_normalize_projection_type: \
+ projected_ty={:?} \
+ depth={} \
+ projected_obligations={:?}",
+ projected_ty, depth, projected_obligations
+ );
+
+ let result = if projected_ty.has_projections() {
+ let mut normalizer = AssocTypeNormalizer::new(
+ selcx,
+ param_env,
+ cause,
+ depth + 1,
+ &mut projected_obligations,
+ );
+ let normalized_ty = normalizer.fold(&projected_ty);
+
+ debug!(
+ "opt_normalize_projection_type: \
+ normalized_ty={:?} depth={}",
+ normalized_ty, depth
+ );
+
+ Normalized { value: normalized_ty, obligations: projected_obligations }
+ } else {
+ Normalized { value: projected_ty, obligations: projected_obligations }
+ };
+
+ let cache_value = prune_cache_value_obligations(infcx, &result);
+ infcx.inner.borrow_mut().projection_cache.insert_ty(cache_key, cache_value);
+ obligations.extend(result.obligations);
+ Some(result.value)
+ }
+ Ok(ProjectedTy::NoProgress(projected_ty)) => {
+ debug!(
+ "opt_normalize_projection_type: \
+ projected_ty={:?} no progress",
+ projected_ty
+ );
+ let result = Normalized { value: projected_ty, obligations: vec![] };
+ infcx.inner.borrow_mut().projection_cache.insert_ty(cache_key, result.clone());
+ // No need to extend `obligations`.
+ Some(result.value)
+ }
+ Err(ProjectionTyError::TooManyCandidates) => {
+ debug!(
+ "opt_normalize_projection_type: \
+ too many candidates"
+ );
+ infcx.inner.borrow_mut().projection_cache.ambiguous(cache_key);
+ None
+ }
+ Err(ProjectionTyError::TraitSelectionError(_)) => {
+ debug!("opt_normalize_projection_type: ERROR");
+ // if we got an error processing the `T as Trait` part,
+ // just return `ty::err` but add the obligation `T :
+ // Trait`, which when processed will cause the error to be
+ // reported later
+
+ infcx.inner.borrow_mut().projection_cache.error(cache_key);
+ let result = normalize_to_error(selcx, param_env, projection_ty, cause, depth);
+ obligations.extend(result.obligations);
+ Some(result.value)
+ }
+ }
+}
+
+/// If there are unresolved type variables, then we need to include
+/// any subobligations that bind them, at least until those type
+/// variables are fully resolved.
+fn prune_cache_value_obligations<'a, 'tcx>(
+ infcx: &'a InferCtxt<'a, 'tcx>,
+ result: &NormalizedTy<'tcx>,
+) -> NormalizedTy<'tcx> {
+ if infcx.unresolved_type_vars(&result.value).is_none() {
+ return NormalizedTy { value: result.value, obligations: vec![] };
+ }
+
+ let mut obligations: Vec<_> = result
+ .obligations
+ .iter()
+ .filter(|obligation| match obligation.predicate {
+ // We found a `T: Foo<X = U>` predicate, let's check
+ // if `U` references any unresolved type
+ // variables. In principle, we only care if this
+ // projection can help resolve any of the type
+ // variables found in `result.value` -- but we just
+ // check for any type variables here, for fear of
+ // indirect obligations (e.g., we project to `?0`,
+ // but we have `T: Foo<X = ?1>` and `?1: Bar<X =
+ // ?0>`).
+ ty::Predicate::Projection(ref data) => infcx.unresolved_type_vars(&data.ty()).is_some(),
+
+            // We are only interested in `T: Foo<X = U>` predicates, where
+ // `U` references one of `unresolved_type_vars`. =)
+ _ => false,
+ })
+ .cloned()
+ .collect();
+
+ obligations.shrink_to_fit();
+
+ NormalizedTy { value: result.value, obligations }
+}
+
+/// Whenever we give back a cache result for a projection like `<T as
+/// Trait>::Item ==> X`, we *always* include the obligation to prove
+/// that `T: Trait` (we may also include some other obligations). This
+/// may or may not be necessary -- in principle, all the obligations
+/// that must be proven to show that `T: Trait` were also returned
+/// when the cache was first populated. But there are some vague concerns,
+/// and so we take the precautionary measure of including `T: Trait` in
+/// the result:
+///
+/// Concern #1. The current setup is fragile. Perhaps someone could
+/// have failed to prove the obligations from when the cache was
+/// populated, but also not have used a snapshot, in which case the
+/// cache could remain populated even though `T: Trait` has not been
+/// shown. In this case, the "other code" is at fault -- when you
+/// project something, you are supposed to either have a snapshot or
+/// else prove all the resulting obligations -- but it's still easy to
+/// get wrong.
+///
+/// Concern #2. Even within the snapshot, if those original
+/// obligations are not yet proven, then we are able to do projections
+/// that may yet turn out to be wrong. This *may* lead to some sort
+/// of trouble, though we don't have a concrete example of how that
+/// can occur yet. But it seems risky at best.
+fn get_paranoid_cache_value_obligation<'a, 'tcx>(
+ infcx: &'a InferCtxt<'a, 'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ projection_ty: ty::ProjectionTy<'tcx>,
+ cause: ObligationCause<'tcx>,
+ depth: usize,
+) -> PredicateObligation<'tcx> {
+ let trait_ref = projection_ty.trait_ref(infcx.tcx).to_poly_trait_ref();
+ Obligation {
+ cause,
+ recursion_depth: depth,
+ param_env,
+ predicate: trait_ref.without_const().to_predicate(),
+ }
+}
+
+/// If we are projecting `<T as Trait>::Item`, but `T: Trait` does not
+/// hold, then in various error cases we cannot generate a valid
+/// normalized projection. Therefore, we create an inference variable
+/// and return an associated obligation that, when fulfilled, will lead
+/// to an error.
+///
+/// Note that we used to return `Error` here, but that was quite
+/// dubious -- the premise was that an error would *eventually* be
+/// reported, when the obligation was processed. But in general once
+/// you see an `Error` you are supposed to be able to assume that an
+/// error *has been* reported, so that you can take whatever heuristic
+/// paths you want to take. To make things worse, it was possible for
+/// cycles to arise, where you basically had a setup like `<MyType<$0>
+/// as Trait>::Foo == $0`. Here, normalizing `<MyType<$0> as
+/// Trait>::Foo` to `[type error]` would lead to an obligation of
+/// `<MyType<[type error]> as Trait>::Foo`. We are supposed to report
+/// an error for this obligation, but we legitimately should not,
+/// because it contains `[type error]`. Yuck! (See issue #29857 for
+/// one case where this arose.)
+fn normalize_to_error<'a, 'tcx>(
+ selcx: &mut SelectionContext<'a, 'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ projection_ty: ty::ProjectionTy<'tcx>,
+ cause: ObligationCause<'tcx>,
+ depth: usize,
+) -> NormalizedTy<'tcx> {
+ let trait_ref = projection_ty.trait_ref(selcx.tcx()).to_poly_trait_ref();
+ let trait_obligation = Obligation {
+ cause,
+ recursion_depth: depth,
+ param_env,
+ predicate: trait_ref.without_const().to_predicate(),
+ };
+ let tcx = selcx.infcx().tcx;
+ let def_id = projection_ty.item_def_id;
+ let new_value = selcx.infcx().next_ty_var(TypeVariableOrigin {
+ kind: TypeVariableOriginKind::NormalizeProjectionType,
+ span: tcx.def_span(def_id),
+ });
+ Normalized { value: new_value, obligations: vec![trait_obligation] }
+}
+
+enum ProjectedTy<'tcx> {
+ Progress(Progress<'tcx>),
+ NoProgress(Ty<'tcx>),
+}
+
+struct Progress<'tcx> {
+ ty: Ty<'tcx>,
+ obligations: Vec<PredicateObligation<'tcx>>,
+}
+
+impl<'tcx> Progress<'tcx> {
+ fn error(tcx: TyCtxt<'tcx>) -> Self {
+ Progress { ty: tcx.types.err, obligations: vec![] }
+ }
+
+ fn with_addl_obligations(mut self, mut obligations: Vec<PredicateObligation<'tcx>>) -> Self {
+ debug!(
+ "with_addl_obligations: self.obligations.len={} obligations.len={}",
+ self.obligations.len(),
+ obligations.len()
+ );
+
+ debug!(
+ "with_addl_obligations: self.obligations={:?} obligations={:?}",
+ self.obligations, obligations
+ );
+
+ self.obligations.append(&mut obligations);
+ self
+ }
+}
+
+/// Computes the result of a projection type (if we can).
+///
+/// IMPORTANT:
+/// - `obligation` must be fully normalized
+fn project_type<'cx, 'tcx>(
+ selcx: &mut SelectionContext<'cx, 'tcx>,
+ obligation: &ProjectionTyObligation<'tcx>,
+) -> Result<ProjectedTy<'tcx>, ProjectionTyError<'tcx>> {
+ debug!("project(obligation={:?})", obligation);
+
+ let recursion_limit = *selcx.tcx().sess.recursion_limit.get();
+ if obligation.recursion_depth >= recursion_limit {
+ debug!("project: overflow!");
+ return Err(ProjectionTyError::TraitSelectionError(SelectionError::Overflow));
+ }
+
+ let obligation_trait_ref = &obligation.predicate.trait_ref(selcx.tcx());
+
+ debug!("project: obligation_trait_ref={:?}", obligation_trait_ref);
+
+ if obligation_trait_ref.references_error() {
+ return Ok(ProjectedTy::Progress(Progress::error(selcx.tcx())));
+ }
+
+ let mut candidates = ProjectionTyCandidateSet::None;
+
+ // Make sure that the following procedures are kept in order. ParamEnv
+    // needs to be first because it has the highest priority, and Select checks
+    // the return value of `push_candidate`, which assumes it runs last.
+ assemble_candidates_from_param_env(selcx, obligation, &obligation_trait_ref, &mut candidates);
+
+ assemble_candidates_from_trait_def(selcx, obligation, &obligation_trait_ref, &mut candidates);
+
+ assemble_candidates_from_impls(selcx, obligation, &obligation_trait_ref, &mut candidates);
+
+ match candidates {
+ ProjectionTyCandidateSet::Single(candidate) => Ok(ProjectedTy::Progress(
+ confirm_candidate(selcx, obligation, &obligation_trait_ref, candidate),
+ )),
+ ProjectionTyCandidateSet::None => Ok(ProjectedTy::NoProgress(
+ selcx
+ .tcx()
+ .mk_projection(obligation.predicate.item_def_id, obligation.predicate.substs),
+ )),
+        // An error occurred while trying to process impls.
+ ProjectionTyCandidateSet::Error(e) => Err(ProjectionTyError::TraitSelectionError(e)),
+ // Inherent ambiguity that prevents us from even enumerating the
+ // candidates.
+ ProjectionTyCandidateSet::Ambiguous => Err(ProjectionTyError::TooManyCandidates),
+ }
+}
+
+/// The first thing we have to do is scan through the parameter
+/// environment to see whether there are any projection predicates
+/// there that can answer this question.
+fn assemble_candidates_from_param_env<'cx, 'tcx>(
+ selcx: &mut SelectionContext<'cx, 'tcx>,
+ obligation: &ProjectionTyObligation<'tcx>,
+ obligation_trait_ref: &ty::TraitRef<'tcx>,
+ candidate_set: &mut ProjectionTyCandidateSet<'tcx>,
+) {
+ debug!("assemble_candidates_from_param_env(..)");
+ assemble_candidates_from_predicates(
+ selcx,
+ obligation,
+ obligation_trait_ref,
+ candidate_set,
+ ProjectionTyCandidate::ParamEnv,
+ obligation.param_env.caller_bounds.iter().cloned(),
+ );
+}
+
+/// In the case of a nested projection like <<A as Foo>::FooT as Bar>::BarT, we may find
+/// that the definition of `Foo` has some clues:
+///
+/// ```
+/// trait Foo {
+///     type FooT: Bar<BarT = i32>;
+/// }
+/// ```
+///
+/// Here, for example, we could conclude that the result is `i32`.
+fn assemble_candidates_from_trait_def<'cx, 'tcx>(
+ selcx: &mut SelectionContext<'cx, 'tcx>,
+ obligation: &ProjectionTyObligation<'tcx>,
+ obligation_trait_ref: &ty::TraitRef<'tcx>,
+ candidate_set: &mut ProjectionTyCandidateSet<'tcx>,
+) {
+ debug!("assemble_candidates_from_trait_def(..)");
+
+ let tcx = selcx.tcx();
+ // Check whether the self-type is itself a projection.
+ let (def_id, substs) = match obligation_trait_ref.self_ty().kind {
+ ty::Projection(ref data) => (data.trait_ref(tcx).def_id, data.substs),
+ ty::Opaque(def_id, substs) => (def_id, substs),
+ ty::Infer(ty::TyVar(_)) => {
+ // If the self-type is an inference variable, then it MAY wind up
+ // being a projected type, so induce an ambiguity.
+ candidate_set.mark_ambiguous();
+ return;
+ }
+ _ => return,
+ };
+
+ // If so, extract what we know from the trait and try to come up with a good answer.
+ let trait_predicates = tcx.predicates_of(def_id);
+ let bounds = trait_predicates.instantiate(tcx, substs);
+ let bounds = elaborate_predicates(tcx, bounds.predicates);
+ assemble_candidates_from_predicates(
+ selcx,
+ obligation,
+ obligation_trait_ref,
+ candidate_set,
+ ProjectionTyCandidate::TraitDef,
+ bounds,
+ )
+}
+
+fn assemble_candidates_from_predicates<'cx, 'tcx, I>(
+ selcx: &mut SelectionContext<'cx, 'tcx>,
+ obligation: &ProjectionTyObligation<'tcx>,
+ obligation_trait_ref: &ty::TraitRef<'tcx>,
+ candidate_set: &mut ProjectionTyCandidateSet<'tcx>,
+ ctor: fn(ty::PolyProjectionPredicate<'tcx>) -> ProjectionTyCandidate<'tcx>,
+ env_predicates: I,
+) where
+ I: IntoIterator<Item = ty::Predicate<'tcx>>,
+{
+ debug!("assemble_candidates_from_predicates(obligation={:?})", obligation);
+ let infcx = selcx.infcx();
+ for predicate in env_predicates {
+ debug!("assemble_candidates_from_predicates: predicate={:?}", predicate);
+ if let ty::Predicate::Projection(data) = predicate {
+ let same_def_id = data.projection_def_id() == obligation.predicate.item_def_id;
+
+ let is_match = same_def_id
+ && infcx.probe(|_| {
+ let data_poly_trait_ref = data.to_poly_trait_ref(infcx.tcx);
+ let obligation_poly_trait_ref = obligation_trait_ref.to_poly_trait_ref();
+ infcx
+ .at(&obligation.cause, obligation.param_env)
+ .sup(obligation_poly_trait_ref, data_poly_trait_ref)
+ .map(|InferOk { obligations: _, value: () }| {
+ // FIXME(#32730) -- do we need to take obligations
+ // into account in any way? At the moment, no.
+ })
+ .is_ok()
+ });
+
+ debug!(
+ "assemble_candidates_from_predicates: candidate={:?} \
+ is_match={} same_def_id={}",
+ data, is_match, same_def_id
+ );
+
+ if is_match {
+ candidate_set.push_candidate(ctor(data));
+ }
+ }
+ }
+}
+
+fn assemble_candidates_from_impls<'cx, 'tcx>(
+ selcx: &mut SelectionContext<'cx, 'tcx>,
+ obligation: &ProjectionTyObligation<'tcx>,
+ obligation_trait_ref: &ty::TraitRef<'tcx>,
+ candidate_set: &mut ProjectionTyCandidateSet<'tcx>,
+) {
+ // If we are resolving `<T as TraitRef<...>>::Item == Type`,
+ // start out by selecting the predicate `T as TraitRef<...>`:
+ let poly_trait_ref = obligation_trait_ref.to_poly_trait_ref();
+ let trait_obligation = obligation.with(poly_trait_ref.to_poly_trait_predicate());
+ let _ = selcx.infcx().commit_if_ok(|_| {
+ let vtable = match selcx.select(&trait_obligation) {
+ Ok(Some(vtable)) => vtable,
+ Ok(None) => {
+ candidate_set.mark_ambiguous();
+ return Err(());
+ }
+ Err(e) => {
+ debug!("assemble_candidates_from_impls: selection error {:?}", e);
+ candidate_set.mark_error(e);
+ return Err(());
+ }
+ };
+
+ let eligible = match &vtable {
+ super::VtableClosure(_)
+ | super::VtableGenerator(_)
+ | super::VtableFnPointer(_)
+ | super::VtableObject(_)
+ | super::VtableTraitAlias(_) => {
+ debug!("assemble_candidates_from_impls: vtable={:?}", vtable);
+ true
+ }
+ super::VtableImpl(impl_data) => {
+ // We have to be careful when projecting out of an
+ // impl because of specialization. If we are not in
+ // codegen (i.e., projection mode is not "any"), and the
+ // impl's type is declared as default, then we disable
+ // projection (even if the trait ref is fully
+ // monomorphic). In the case where trait ref is not
+ // fully monomorphic (i.e., includes type parameters),
+ // this is because those type parameters may
+ // ultimately be bound to types from other crates that
+ // may have specialized impls we can't see. In the
+ // case where the trait ref IS fully monomorphic, this
+ // is a policy decision that we made in the RFC in
+ // order to preserve flexibility for the crate that
+ // defined the specializable impl to specialize later
+ // for existing types.
+ //
+ // In either case, we handle this by not adding a
+ // candidate for an impl if it contains a `default`
+ // type.
+ //
+ // NOTE: This should be kept in sync with the similar code in
+ // `rustc::ty::instance::resolve_associated_item()`.
+ let node_item =
+ assoc_ty_def(selcx, impl_data.impl_def_id, obligation.predicate.item_def_id)
+ .map_err(|ErrorReported| ())?;
+
+ let is_default = if node_item.node.is_from_trait() {
+ // If true, the impl inherited a `type Foo = Bar`
+ // given in the trait, which is implicitly default.
+ // Otherwise, the impl did not specify `type` and
+ // neither did the trait:
+ //
+ // ```rust
+ // trait Foo { type T; }
+ // impl Foo for Bar { }
+ // ```
+ //
+ // This is an error, but it will be
+ // reported in `check_impl_items_against_trait`.
+ // We accept it here but will flag it as
+ // an error when we confirm the candidate
+ // (which will ultimately lead to `normalize_to_error`
+ // being invoked).
+ false
+ } else {
+ // If we're looking at a trait *impl*, the item is
+ // specializable if the impl or the item are marked
+ // `default`.
+ node_item.item.defaultness.is_default()
+ || super::util::impl_is_default(selcx.tcx(), node_item.node.def_id())
+ };
+
+ match is_default {
+ // Non-specializable items are always projectable
+ false => true,
+
+ // Only reveal a specializable default if we're past type-checking
+ // and the obligation is monomorphic, otherwise passes such as
+ // transmute checking and polymorphic MIR optimizations could
+ // get a result which isn't correct for all monomorphizations.
+ true if obligation.param_env.reveal == Reveal::All => {
+ // NOTE(eddyb) inference variables can resolve to parameters, so
+ // assume `poly_trait_ref` isn't monomorphic, if it contains any.
+ let poly_trait_ref =
+ selcx.infcx().resolve_vars_if_possible(&poly_trait_ref);
+ !poly_trait_ref.needs_infer() && !poly_trait_ref.needs_subst()
+ }
+
+ true => {
+ debug!(
+ "assemble_candidates_from_impls: not eligible due to default: \
+ assoc_ty={} predicate={}",
+ selcx.tcx().def_path_str(node_item.item.def_id),
+ obligation.predicate,
+ );
+ false
+ }
+ }
+ }
+ super::VtableParam(..) => {
+ // This case tells us nothing about the value of an
+ // associated type. Consider:
+ //
+ // ```
+ // trait SomeTrait { type Foo; }
+ // fn foo<T:SomeTrait>(...) { }
+ // ```
+ //
+ // If the user writes `<T as SomeTrait>::Foo`, then the `T
+ // : SomeTrait` binding does not help us decide what the
+ // type `Foo` is (at least, not more specifically than
+ // what we already knew).
+ //
+ // But wait, you say! What about an example like this:
+ //
+ // ```
+ // fn bar<T:SomeTrait<Foo=usize>>(...) { ... }
+ // ```
+ //
+ // Doesn't the `T : SomeTrait<Foo=usize>` predicate help
+ // resolve `T::Foo`? And of course it does, but in fact
+ // that single predicate is desugared into two predicates
+ // in the compiler: a trait predicate (`T : SomeTrait`) and a
+ // projection. And the projection where clause is handled
+ // in `assemble_candidates_from_param_env`.
+ false
+ }
+ super::VtableAutoImpl(..) | super::VtableBuiltin(..) => {
+ // These traits have no associated types.
+ span_bug!(
+ obligation.cause.span,
+ "Cannot project an associated type from `{:?}`",
+ vtable
+ );
+ }
+ };
+
+ if eligible {
+ if candidate_set.push_candidate(ProjectionTyCandidate::Select(vtable)) {
+ Ok(())
+ } else {
+ Err(())
+ }
+ } else {
+ Err(())
+ }
+ });
+}
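As an illustration (not part of the patch itself), the desugaring described in the `VtableParam` comment above can be seen from the surface language; the trait and impl names here are made up for the example:

```rust
// `T: SomeTrait<Foo = usize>` desugars into two predicates: the trait
// predicate `T: SomeTrait` and the projection predicate
// `<T as SomeTrait>::Foo == usize`. The latter is what lets `bar`
// treat `T::Foo` as `usize`.
trait SomeTrait {
    type Foo;
}

struct S;

impl SomeTrait for S {
    type Foo = usize;
}

fn bar<T: SomeTrait<Foo = usize>>(x: T::Foo) -> usize {
    x // `T::Foo` normalizes to `usize` via the projection where-clause
}

fn main() {
    assert_eq!(bar::<S>(7), 7);
}
```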
+
+fn confirm_candidate<'cx, 'tcx>(
+ selcx: &mut SelectionContext<'cx, 'tcx>,
+ obligation: &ProjectionTyObligation<'tcx>,
+ obligation_trait_ref: &ty::TraitRef<'tcx>,
+ candidate: ProjectionTyCandidate<'tcx>,
+) -> Progress<'tcx> {
+ debug!("confirm_candidate(candidate={:?}, obligation={:?})", candidate, obligation);
+
+ match candidate {
+ ProjectionTyCandidate::ParamEnv(poly_projection)
+ | ProjectionTyCandidate::TraitDef(poly_projection) => {
+ confirm_param_env_candidate(selcx, obligation, poly_projection)
+ }
+
+ ProjectionTyCandidate::Select(vtable) => {
+ confirm_select_candidate(selcx, obligation, obligation_trait_ref, vtable)
+ }
+ }
+}
+
+fn confirm_select_candidate<'cx, 'tcx>(
+ selcx: &mut SelectionContext<'cx, 'tcx>,
+ obligation: &ProjectionTyObligation<'tcx>,
+ obligation_trait_ref: &ty::TraitRef<'tcx>,
+ vtable: Selection<'tcx>,
+) -> Progress<'tcx> {
+ match vtable {
+ super::VtableImpl(data) => confirm_impl_candidate(selcx, obligation, data),
+ super::VtableGenerator(data) => confirm_generator_candidate(selcx, obligation, data),
+ super::VtableClosure(data) => confirm_closure_candidate(selcx, obligation, data),
+ super::VtableFnPointer(data) => confirm_fn_pointer_candidate(selcx, obligation, data),
+ super::VtableObject(_) => confirm_object_candidate(selcx, obligation, obligation_trait_ref),
+ super::VtableAutoImpl(..)
+ | super::VtableParam(..)
+ | super::VtableBuiltin(..)
+ | super::VtableTraitAlias(..) =>
+ // we don't create Select candidates with this kind of resolution
+ {
+ span_bug!(
+ obligation.cause.span,
+ "Cannot project an associated type from `{:?}`",
+ vtable
+ )
+ }
+ }
+}
+
+fn confirm_object_candidate<'cx, 'tcx>(
+ selcx: &mut SelectionContext<'cx, 'tcx>,
+ obligation: &ProjectionTyObligation<'tcx>,
+ obligation_trait_ref: &ty::TraitRef<'tcx>,
+) -> Progress<'tcx> {
+ let self_ty = obligation_trait_ref.self_ty();
+ let object_ty = selcx.infcx().shallow_resolve(self_ty);
+ debug!("confirm_object_candidate(object_ty={:?})", object_ty);
+ let data = match object_ty.kind {
+ ty::Dynamic(ref data, ..) => data,
+ _ => span_bug!(
+ obligation.cause.span,
+ "confirm_object_candidate called with non-object: {:?}",
+ object_ty
+ ),
+ };
+ let env_predicates = data
+ .projection_bounds()
+ .map(|p| p.with_self_ty(selcx.tcx(), object_ty).to_predicate())
+ .collect();
+ let env_predicate = {
+ let env_predicates = elaborate_predicates(selcx.tcx(), env_predicates);
+
+ // select only those projections that are actually projecting an
+ // item with the correct name
+ let env_predicates = env_predicates.filter_map(|p| match p {
+ ty::Predicate::Projection(data) => {
+ if data.projection_def_id() == obligation.predicate.item_def_id {
+ Some(data)
+ } else {
+ None
+ }
+ }
+ _ => None,
+ });
+
+ // select those with a relevant trait-ref
+ let mut env_predicates = env_predicates.filter(|data| {
+ let data_poly_trait_ref = data.to_poly_trait_ref(selcx.tcx());
+ let obligation_poly_trait_ref = obligation_trait_ref.to_poly_trait_ref();
+ selcx.infcx().probe(|_| {
+ selcx
+ .infcx()
+ .at(&obligation.cause, obligation.param_env)
+ .sup(obligation_poly_trait_ref, data_poly_trait_ref)
+ .is_ok()
+ })
+ });
+
+ // select the first matching one; there really ought to be one or
+ // else the object type is not WF, since an object type should
+ // include all of its projections explicitly
+ match env_predicates.next() {
+ Some(env_predicate) => env_predicate,
+ None => {
+ debug!(
+ "confirm_object_candidate: no env-predicate \
+ found in object type `{:?}`; ill-formed",
+ object_ty
+ );
+ return Progress::error(selcx.tcx());
+ }
+ }
+ };
+
+ confirm_param_env_candidate(selcx, obligation, env_predicate)
+}
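For context (an illustrative sketch, not part of the patch): the "object type should include all of its projections explicitly" requirement above is visible in the surface language, where a trait object type must spell out its associated types:

```rust
// `dyn Iterator<Item = u32>` carries the projection bound `Item == u32`
// explicitly; that bound is what `confirm_object_candidate` searches for
// when projecting `Item` out of the object type.
fn sum_all(it: &mut dyn Iterator<Item = u32>) -> u32 {
    it.sum()
}

fn main() {
    let mut iter = vec![1u32, 2, 3].into_iter();
    assert_eq!(sum_all(&mut iter), 6);
}
```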
+
+fn confirm_generator_candidate<'cx, 'tcx>(
+ selcx: &mut SelectionContext<'cx, 'tcx>,
+ obligation: &ProjectionTyObligation<'tcx>,
+ vtable: VtableGeneratorData<'tcx, PredicateObligation<'tcx>>,
+) -> Progress<'tcx> {
+ let gen_sig = vtable.substs.as_generator().poly_sig(vtable.generator_def_id, selcx.tcx());
+ let Normalized { value: gen_sig, obligations } = normalize_with_depth(
+ selcx,
+ obligation.param_env,
+ obligation.cause.clone(),
+ obligation.recursion_depth + 1,
+ &gen_sig,
+ );
+
+ debug!(
+ "confirm_generator_candidate: obligation={:?},gen_sig={:?},obligations={:?}",
+ obligation, gen_sig, obligations
+ );
+
+ let tcx = selcx.tcx();
+
+ let gen_def_id = tcx.lang_items().gen_trait().unwrap();
+
+ let predicate = super::util::generator_trait_ref_and_outputs(
+ tcx,
+ gen_def_id,
+ obligation.predicate.self_ty(),
+ gen_sig,
+ )
+ .map_bound(|(trait_ref, yield_ty, return_ty)| {
+ let name = tcx.associated_item(obligation.predicate.item_def_id).ident.name;
+ let ty = if name == sym::Return {
+ return_ty
+ } else if name == sym::Yield {
+ yield_ty
+ } else {
+ bug!()
+ };
+
+ ty::ProjectionPredicate {
+ projection_ty: ty::ProjectionTy {
+ substs: trait_ref.substs,
+ item_def_id: obligation.predicate.item_def_id,
+ },
+ ty,
+ }
+ });
+
+ confirm_param_env_candidate(selcx, obligation, predicate)
+ .with_addl_obligations(vtable.nested)
+ .with_addl_obligations(obligations)
+}
+
+fn confirm_fn_pointer_candidate<'cx, 'tcx>(
+ selcx: &mut SelectionContext<'cx, 'tcx>,
+ obligation: &ProjectionTyObligation<'tcx>,
+ fn_pointer_vtable: VtableFnPointerData<'tcx, PredicateObligation<'tcx>>,
+) -> Progress<'tcx> {
+ let fn_type = selcx.infcx().shallow_resolve(fn_pointer_vtable.fn_ty);
+ let sig = fn_type.fn_sig(selcx.tcx());
+ let Normalized { value: sig, obligations } = normalize_with_depth(
+ selcx,
+ obligation.param_env,
+ obligation.cause.clone(),
+ obligation.recursion_depth + 1,
+ &sig,
+ );
+
+ confirm_callable_candidate(selcx, obligation, sig, util::TupleArgumentsFlag::Yes)
+ .with_addl_obligations(fn_pointer_vtable.nested)
+ .with_addl_obligations(obligations)
+}
+
+fn confirm_closure_candidate<'cx, 'tcx>(
+ selcx: &mut SelectionContext<'cx, 'tcx>,
+ obligation: &ProjectionTyObligation<'tcx>,
+ vtable: VtableClosureData<'tcx, PredicateObligation<'tcx>>,
+) -> Progress<'tcx> {
+ let tcx = selcx.tcx();
+ let infcx = selcx.infcx();
+ let closure_sig_ty = vtable.substs.as_closure().sig_ty(vtable.closure_def_id, tcx);
+ let closure_sig = infcx.shallow_resolve(closure_sig_ty).fn_sig(tcx);
+ let Normalized { value: closure_sig, obligations } = normalize_with_depth(
+ selcx,
+ obligation.param_env,
+ obligation.cause.clone(),
+ obligation.recursion_depth + 1,
+ &closure_sig,
+ );
+
+ debug!(
+ "confirm_closure_candidate: obligation={:?},closure_sig={:?},obligations={:?}",
+ obligation, closure_sig, obligations
+ );
+
+ confirm_callable_candidate(selcx, obligation, closure_sig, util::TupleArgumentsFlag::No)
+ .with_addl_obligations(vtable.nested)
+ .with_addl_obligations(obligations)
+}
+
+fn confirm_callable_candidate<'cx, 'tcx>(
+ selcx: &mut SelectionContext<'cx, 'tcx>,
+ obligation: &ProjectionTyObligation<'tcx>,
+ fn_sig: ty::PolyFnSig<'tcx>,
+ flag: util::TupleArgumentsFlag,
+) -> Progress<'tcx> {
+ let tcx = selcx.tcx();
+
+ debug!("confirm_callable_candidate({:?},{:?})", obligation, fn_sig);
+
+ // the `Output` associated type is declared on `FnOnce`
+ let fn_once_def_id = tcx.lang_items().fn_once_trait().unwrap();
+
+ let predicate = super::util::closure_trait_ref_and_return_type(
+ tcx,
+ fn_once_def_id,
+ obligation.predicate.self_ty(),
+ fn_sig,
+ flag,
+ )
+ .map_bound(|(trait_ref, ret_type)| ty::ProjectionPredicate {
+ projection_ty: ty::ProjectionTy::from_ref_and_name(
+ tcx,
+ trait_ref,
+ Ident::with_dummy_span(rustc_hir::FN_OUTPUT_NAME),
+ ),
+ ty: ret_type,
+ });
+
+ confirm_param_env_candidate(selcx, obligation, predicate)
+}
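A small surface-level sketch of what `confirm_callable_candidate` resolves (illustrative only, not part of the patch): the `Output` associated type is declared on `FnOnce`, so both closures and fn pointers project their return type through it:

```rust
// `F::Output` is the projection through `FnOnce` that callable
// candidates (closures, fn pointers) resolve to.
fn call<F: FnOnce() -> i32>(f: F) -> F::Output {
    f()
}

fn main() {
    // Closure candidate.
    assert_eq!(call(|| 40 + 2), 42);
    // Fn-pointer candidate.
    let fp: fn() -> i32 = || 7;
    assert_eq!(call(fp), 7);
}
```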
+
+fn confirm_param_env_candidate<'cx, 'tcx>(
+ selcx: &mut SelectionContext<'cx, 'tcx>,
+ obligation: &ProjectionTyObligation<'tcx>,
+ poly_cache_entry: ty::PolyProjectionPredicate<'tcx>,
+) -> Progress<'tcx> {
+ let infcx = selcx.infcx();
+ let cause = &obligation.cause;
+ let param_env = obligation.param_env;
+
+ let (cache_entry, _) = infcx.replace_bound_vars_with_fresh_vars(
+ cause.span,
+ LateBoundRegionConversionTime::HigherRankedType,
+ &poly_cache_entry,
+ );
+
+ let cache_trait_ref = cache_entry.projection_ty.trait_ref(infcx.tcx);
+ let obligation_trait_ref = obligation.predicate.trait_ref(infcx.tcx);
+ match infcx.at(cause, param_env).eq(cache_trait_ref, obligation_trait_ref) {
+ Ok(InferOk { value: _, obligations }) => Progress { ty: cache_entry.ty, obligations },
+ Err(e) => {
+ let msg = format!(
+ "Failed to unify obligation `{:?}` with poly_projection `{:?}`: {:?}",
+ obligation, poly_cache_entry, e,
+ );
+ debug!("confirm_param_env_candidate: {}", msg);
+ infcx.tcx.sess.delay_span_bug(obligation.cause.span, &msg);
+ Progress { ty: infcx.tcx.types.err, obligations: vec![] }
+ }
+ }
+}
+
+fn confirm_impl_candidate<'cx, 'tcx>(
+ selcx: &mut SelectionContext<'cx, 'tcx>,
+ obligation: &ProjectionTyObligation<'tcx>,
+ impl_vtable: VtableImplData<'tcx, PredicateObligation<'tcx>>,
+) -> Progress<'tcx> {
+ let tcx = selcx.tcx();
+
+ let VtableImplData { impl_def_id, substs, nested } = impl_vtable;
+ let assoc_item_id = obligation.predicate.item_def_id;
+ let trait_def_id = tcx.trait_id_of_impl(impl_def_id).unwrap();
+
+ let param_env = obligation.param_env;
+ let assoc_ty = match assoc_ty_def(selcx, impl_def_id, assoc_item_id) {
+ Ok(assoc_ty) => assoc_ty,
+ Err(ErrorReported) => return Progress { ty: tcx.types.err, obligations: nested },
+ };
+
+ if !assoc_ty.item.defaultness.has_value() {
+ // This means that the impl is missing a definition for the
+ // associated type. This error will be reported by the type
+ // checker method `check_impl_items_against_trait`, so here we
+ // just return Error.
+ debug!(
+ "confirm_impl_candidate: no associated type {:?} for {:?}",
+ assoc_ty.item.ident, obligation.predicate
+ );
+ return Progress { ty: tcx.types.err, obligations: nested };
+ }
+ let substs = obligation.predicate.substs.rebase_onto(tcx, trait_def_id, substs);
+ let substs = translate_substs(selcx.infcx(), param_env, impl_def_id, substs, assoc_ty.node);
+ let ty = if let ty::AssocKind::OpaqueTy = assoc_ty.item.kind {
+ let item_substs = InternalSubsts::identity_for_item(tcx, assoc_ty.item.def_id);
+ tcx.mk_opaque(assoc_ty.item.def_id, item_substs)
+ } else {
+ tcx.type_of(assoc_ty.item.def_id)
+ };
+ if substs.len() != tcx.generics_of(assoc_ty.item.def_id).count() {
+ tcx.sess
+ .delay_span_bug(DUMMY_SP, "impl item and trait item have different parameter counts");
+ Progress { ty: tcx.types.err, obligations: nested }
+ } else {
+ Progress { ty: ty.subst(tcx, substs), obligations: nested }
+ }
+}
+
+/// Locate the definition of an associated type in the specialization hierarchy,
+/// starting from the given impl.
+///
+/// Based on the "projection mode", this lookup may in fact only examine the
+/// topmost impl. See the comments for `Reveal` for more details.
+fn assoc_ty_def(
+ selcx: &SelectionContext<'_, '_>,
+ impl_def_id: DefId,
+ assoc_ty_def_id: DefId,
+) -> Result<specialization_graph::NodeItem<ty::AssocItem>, ErrorReported> {
+ let tcx = selcx.tcx();
+ let assoc_ty_name = tcx.associated_item(assoc_ty_def_id).ident;
+ let trait_def_id = tcx.impl_trait_ref(impl_def_id).unwrap().def_id;
+ let trait_def = tcx.trait_def(trait_def_id);
+
+ // This function may be called while we are still building the
+ // specialization graph that is queried below (via TraitDef::ancestors()),
+ // so, in order to avoid unnecessary infinite recursion, we manually look
+ // for the associated item at the given impl.
+ // If there is no such item in that impl, this function will fail with a
+ // cycle error if the specialization graph is currently being built.
+ let impl_node = specialization_graph::Node::Impl(impl_def_id);
+ for item in impl_node.items(tcx) {
+ if matches!(item.kind, ty::AssocKind::Type | ty::AssocKind::OpaqueTy)
+ && tcx.hygienic_eq(item.ident, assoc_ty_name, trait_def_id)
+ {
+ return Ok(specialization_graph::NodeItem {
+ node: specialization_graph::Node::Impl(impl_def_id),
+ item: *item,
+ });
+ }
+ }
+
+ let ancestors = trait_def.ancestors(tcx, impl_def_id)?;
+ if let Some(assoc_item) = ancestors.leaf_def(tcx, assoc_ty_name, ty::AssocKind::Type) {
+ Ok(assoc_item)
+ } else {
+ // This is saying that neither the trait nor
+ // the impl contain a definition for this
+ // associated type. Normally this situation
+ // could only arise through a compiler bug --
+ // if the user wrote a bad item name, it
+ // should have failed in astconv.
+ bug!("No associated type `{}` for {}", assoc_ty_name, tcx.def_path_str(impl_def_id))
+ }
+}
+
+crate trait ProjectionCacheKeyExt<'cx, 'tcx>: Sized {
+ fn from_poly_projection_predicate(
+ selcx: &mut SelectionContext<'cx, 'tcx>,
+ predicate: &ty::PolyProjectionPredicate<'tcx>,
+ ) -> Option<Self>;
+}
+
+impl<'cx, 'tcx> ProjectionCacheKeyExt<'cx, 'tcx> for ProjectionCacheKey<'tcx> {
+ fn from_poly_projection_predicate(
+ selcx: &mut SelectionContext<'cx, 'tcx>,
+ predicate: &ty::PolyProjectionPredicate<'tcx>,
+ ) -> Option<Self> {
+ let infcx = selcx.infcx();
+ // We don't do cross-snapshot caching of obligations with escaping regions,
+ // so there's no cache key to use
+ predicate.no_bound_vars().map(|predicate| {
+ ProjectionCacheKey::new(
+ // We don't attempt to match up with a specific type-variable state
+ // from a specific call to `opt_normalize_projection_type` - if
+ // there's no precise match, the original cache entry is "stranded"
+ // anyway.
+ infcx.resolve_vars_if_possible(&predicate.projection_ty),
+ )
+ })
+ }
+}
--- /dev/null
+use crate::infer::at::At;
+use crate::infer::canonical::OriginalQueryValues;
+use crate::infer::InferOk;
+
+use rustc::ty::subst::GenericArg;
+use rustc::ty::{self, Ty, TyCtxt};
+
+pub use rustc::traits::query::{DropckOutlivesResult, DtorckConstraint};
+
+pub trait AtExt<'tcx> {
+ fn dropck_outlives(&self, ty: Ty<'tcx>) -> InferOk<'tcx, Vec<GenericArg<'tcx>>>;
+}
+
+impl<'cx, 'tcx> AtExt<'tcx> for At<'cx, 'tcx> {
+ /// Given a type `ty` of some value being dropped, computes a set
+ /// of "kinds" (types, regions) that must outlive the execution
+ /// of the destructor. These basically correspond to data that the
+ /// destructor might access. This is used during regionck to
+ /// impose "outlives" constraints on any lifetimes referenced
+ /// within.
+ ///
+ /// The rules here are given by the "dropck" RFCs, notably [#1238]
+ /// and [#1327]. This is a fixed-point computation, where we
+ /// explore all the data that will be dropped (transitively) when
+ /// a value of type `ty` is dropped. For each type T that will be
+ /// dropped and which has a destructor, we must assume that all
+ /// the types/regions of T are live during the destructor, unless
+ /// they are marked with a special attribute (`#[may_dangle]`).
+ ///
+ /// [#1238]: https://github.com/rust-lang/rfcs/blob/master/text/1238-nonparametric-dropck.md
+ /// [#1327]: https://github.com/rust-lang/rfcs/blob/master/text/1327-dropck-param-eyepatch.md
+ fn dropck_outlives(&self, ty: Ty<'tcx>) -> InferOk<'tcx, Vec<GenericArg<'tcx>>> {
+ debug!("dropck_outlives(ty={:?}, param_env={:?})", ty, self.param_env,);
+
+ // Quick check: there are a number of cases that we know do not require
+ // any destructor.
+ let tcx = self.infcx.tcx;
+ if trivial_dropck_outlives(tcx, ty) {
+ return InferOk { value: vec![], obligations: vec![] };
+ }
+
+ let mut orig_values = OriginalQueryValues::default();
+ let c_ty = self.infcx.canonicalize_query(&self.param_env.and(ty), &mut orig_values);
+ let span = self.cause.span;
+ debug!("c_ty = {:?}", c_ty);
+ if let Ok(result) = &tcx.dropck_outlives(c_ty) {
+ if result.is_proven() {
+ if let Ok(InferOk { value, obligations }) =
+ self.infcx.instantiate_query_response_and_region_obligations(
+ self.cause,
+ self.param_env,
+ &orig_values,
+ result,
+ )
+ {
+ let ty = self.infcx.resolve_vars_if_possible(&ty);
+ let kinds = value.into_kinds_reporting_overflows(tcx, span, ty);
+ return InferOk { value: kinds, obligations };
+ }
+ }
+ }
+
+ // Errors and ambiguity in dropck occur in two cases:
+ // - unresolved inference variables at the end of typeck
+ // - non well-formed types where projections cannot be resolved
+ // Either of these should have created an error before.
+ tcx.sess.delay_span_bug(span, "dtorck encountered internal error");
+
+ InferOk { value: vec![], obligations: vec![] }
+ }
+}
+
+/// This returns true if the type `ty` is "trivial" for
+/// dropck-outlives -- that is, if it doesn't require any types to
+/// outlive. This is similar but not *quite* the same as the
+/// `needs_drop` test in the compiler already -- that is, for every
+/// type T for which this function returns true, needs-drop would
+/// return `false`. But the reverse does not hold: in particular,
+/// `needs_drop` returns false for `PhantomData`, but it is not
+/// trivial for dropck-outlives.
+///
+/// Note also that `needs_drop` requires a "global" type (i.e., one
+/// with erased regions), but this function does not.
+pub fn trivial_dropck_outlives<'tcx>(tcx: TyCtxt<'tcx>, ty: Ty<'tcx>) -> bool {
+ match ty.kind {
+ // None of these types have a destructor and hence they do not
+ // require anything in particular to outlive the dtor's
+ // execution.
+ ty::Infer(ty::FreshIntTy(_))
+ | ty::Infer(ty::FreshFloatTy(_))
+ | ty::Bool
+ | ty::Int(_)
+ | ty::Uint(_)
+ | ty::Float(_)
+ | ty::Never
+ | ty::FnDef(..)
+ | ty::FnPtr(_)
+ | ty::Char
+ | ty::GeneratorWitness(..)
+ | ty::RawPtr(_)
+ | ty::Ref(..)
+ | ty::Str
+ | ty::Foreign(..)
+ | ty::Error => true,
+
+ // [T; N] and [T] have the same properties as T.
+ ty::Array(ty, _) | ty::Slice(ty) => trivial_dropck_outlives(tcx, ty),
+
+ // (T1..Tn) and closures have the same properties as T1..Tn --
+ // check if *all* of those are trivial.
+ ty::Tuple(ref tys) => tys.iter().all(|t| trivial_dropck_outlives(tcx, t.expect_ty())),
+ ty::Closure(def_id, ref substs) => {
+ substs.as_closure().upvar_tys(def_id, tcx).all(|t| trivial_dropck_outlives(tcx, t))
+ }
+
+ ty::Adt(def, _) => {
+ if Some(def.did) == tcx.lang_items().manually_drop() {
+ // `ManuallyDrop` never has a dtor.
+ true
+ } else {
+ // Other types might. Moreover, PhantomData doesn't
+ // have a dtor, but it is considered to own its
+ // content, so it is non-trivial. Unions can have `impl Drop`,
+ // and hence are non-trivial as well.
+ false
+ }
+ }
+
+ // The following *might* require a destructor: needs deeper inspection.
+ ty::Dynamic(..)
+ | ty::Projection(..)
+ | ty::Param(_)
+ | ty::Opaque(..)
+ | ty::Placeholder(..)
+ | ty::Infer(_)
+ | ty::Bound(..)
+ | ty::Generator(..) => false,
+
+ ty::UnnormalizedProjection(..) => bug!("only used with chalk-engine"),
+ }
+}
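The `needs_drop` comparison in the doc comment above can be checked from stable Rust (an illustrative sketch, not part of the patch):

```rust
use std::marker::PhantomData;
use std::mem::{needs_drop, ManuallyDrop};

fn main() {
    // `needs_drop` is false for `PhantomData<String>`, yet dropck still
    // treats it as owning a `String`, so it is *not* trivial for
    // dropck-outlives.
    assert!(!needs_drop::<PhantomData<String>>());
    // `ManuallyDrop` never runs a destructor, so it *is* trivial.
    assert!(!needs_drop::<ManuallyDrop<String>>());
    // A plain `String` has a destructor.
    assert!(needs_drop::<String>());
}
```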
--- /dev/null
+use crate::infer::canonical::OriginalQueryValues;
+use crate::infer::InferCtxt;
+use crate::traits::{
+ EvaluationResult, OverflowError, PredicateObligation, SelectionContext, TraitQueryMode,
+};
+
+pub trait InferCtxtExt<'tcx> {
+ fn predicate_may_hold(&self, obligation: &PredicateObligation<'tcx>) -> bool;
+
+ fn predicate_must_hold_considering_regions(
+ &self,
+ obligation: &PredicateObligation<'tcx>,
+ ) -> bool;
+
+ fn predicate_must_hold_modulo_regions(&self, obligation: &PredicateObligation<'tcx>) -> bool;
+
+ fn evaluate_obligation(
+ &self,
+ obligation: &PredicateObligation<'tcx>,
+ ) -> Result<EvaluationResult, OverflowError>;
+
+ // Helper function that canonicalizes and runs the query. If an
+ // overflow results, we re-run it in the local context so we can
+ // report a nice error.
+ /*crate*/
+ fn evaluate_obligation_no_overflow(
+ &self,
+ obligation: &PredicateObligation<'tcx>,
+ ) -> EvaluationResult;
+}
+
+impl<'cx, 'tcx> InferCtxtExt<'tcx> for InferCtxt<'cx, 'tcx> {
+ /// Evaluates whether the predicate can be satisfied (by any means)
+ /// in the given `ParamEnv`.
+ fn predicate_may_hold(&self, obligation: &PredicateObligation<'tcx>) -> bool {
+ self.evaluate_obligation_no_overflow(obligation).may_apply()
+ }
+
+ /// Evaluates whether the predicate can be satisfied in the given
+ /// `ParamEnv`, and returns `false` if not certain. However, this is
+ /// not entirely accurate if inference variables are involved.
+ ///
+ /// This version may conservatively fail when outlives obligations
+ /// are required.
+ fn predicate_must_hold_considering_regions(
+ &self,
+ obligation: &PredicateObligation<'tcx>,
+ ) -> bool {
+ self.evaluate_obligation_no_overflow(obligation).must_apply_considering_regions()
+ }
+
+ /// Evaluates whether the predicate can be satisfied in the given
+ /// `ParamEnv`, and returns `false` if not certain. However, this is
+ /// not entirely accurate if inference variables are involved.
+ ///
+ /// This version ignores all outlives constraints.
+ fn predicate_must_hold_modulo_regions(&self, obligation: &PredicateObligation<'tcx>) -> bool {
+ self.evaluate_obligation_no_overflow(obligation).must_apply_modulo_regions()
+ }
+
+ /// Evaluate a given predicate, capturing overflow and propagating it back.
+ fn evaluate_obligation(
+ &self,
+ obligation: &PredicateObligation<'tcx>,
+ ) -> Result<EvaluationResult, OverflowError> {
+ let mut _orig_values = OriginalQueryValues::default();
+ let c_pred = self
+ .canonicalize_query(&obligation.param_env.and(obligation.predicate), &mut _orig_values);
+ // Run canonical query. If overflow occurs, rerun from scratch but this time
+ // in standard trait query mode so that overflow is handled appropriately
+ // within `SelectionContext`.
+ self.tcx.evaluate_obligation(c_pred)
+ }
+
+ // Helper function that canonicalizes and runs the query. If an
+ // overflow results, we re-run it in the local context so we can
+ // report a nice error.
+ fn evaluate_obligation_no_overflow(
+ &self,
+ obligation: &PredicateObligation<'tcx>,
+ ) -> EvaluationResult {
+ match self.evaluate_obligation(obligation) {
+ Ok(result) => result,
+ Err(OverflowError) => {
+ let mut selcx = SelectionContext::with_query_mode(&self, TraitQueryMode::Standard);
+ selcx.evaluate_root_obligation(obligation).unwrap_or_else(|r| {
+ span_bug!(
+ obligation.cause.span,
+ "Overflow should be caught earlier in standard query mode: {:?}, {:?}",
+ obligation,
+ r,
+ )
+ })
+ }
+ }
+ }
+}
--- /dev/null
+pub use rustc::traits::query::{CandidateStep, MethodAutoderefBadTy, MethodAutoderefStepsResult};
--- /dev/null
+//! Experimental types for the trait query interface. The methods
+//! defined in this module are all based on **canonicalization**,
+//! which makes a canonical query by replacing unbound inference
+//! variables and regions, so that results can be reused more broadly.
+//! The providers for the queries defined here can be found in
+//! `librustc_traits`.
+
+pub mod dropck_outlives;
+pub mod evaluate_obligation;
+pub mod method_autoderef;
+pub mod normalize;
+pub mod outlives_bounds;
+pub mod type_op;
+
+pub use rustc::traits::query::*;
--- /dev/null
+//! Code for the 'normalization' query. This consists of a wrapper
+//! which folds deeply, invoking the underlying
+//! `normalize_projection_ty` query when it encounters projections.
+
+use crate::infer::at::At;
+use crate::infer::canonical::OriginalQueryValues;
+use crate::infer::{InferCtxt, InferOk};
+use crate::traits::error_reporting::InferCtxtExt;
+use crate::traits::{Obligation, ObligationCause, PredicateObligation, Reveal};
+use rustc::ty::fold::{TypeFoldable, TypeFolder};
+use rustc::ty::subst::Subst;
+use rustc::ty::{self, Ty, TyCtxt};
+use rustc_infer::traits::Normalized;
+
+use super::NoSolution;
+
+pub use rustc::traits::query::NormalizationResult;
+
+pub trait AtExt<'tcx> {
+ fn normalize<T>(&self, value: &T) -> Result<Normalized<'tcx, T>, NoSolution>
+ where
+ T: TypeFoldable<'tcx>;
+}
+
+impl<'cx, 'tcx> AtExt<'tcx> for At<'cx, 'tcx> {
+ /// Normalize `value` in the context of the inference context,
+ /// yielding a resulting type, or an error if `value` cannot be
+ /// normalized. If you don't care about regions, you should prefer
+ /// `normalize_erasing_regions`, which is more efficient.
+ ///
+ /// If the normalization succeeds and is unambiguous, returns back
+ /// the normalized value along with various outlives relations (in
+ /// the form of obligations that must be discharged).
+ ///
+ /// N.B., this will *eventually* be the main means of
+ /// normalizing, but for now should be used only when we actually
+ /// know that normalization will succeed, since error reporting
+ /// and other details are still "under development".
+ fn normalize<T>(&self, value: &T) -> Result<Normalized<'tcx, T>, NoSolution>
+ where
+ T: TypeFoldable<'tcx>,
+ {
+ debug!(
+ "normalize::<{}>(value={:?}, param_env={:?})",
+ ::std::any::type_name::<T>(),
+ value,
+ self.param_env,
+ );
+ if !value.has_projections() {
+ return Ok(Normalized { value: value.clone(), obligations: vec![] });
+ }
+
+ let mut normalizer = QueryNormalizer {
+ infcx: self.infcx,
+ cause: self.cause,
+ param_env: self.param_env,
+ obligations: vec![],
+ error: false,
+ anon_depth: 0,
+ };
+
+ let value1 = value.fold_with(&mut normalizer);
+ if normalizer.error {
+ Err(NoSolution)
+ } else {
+ Ok(Normalized { value: value1, obligations: normalizer.obligations })
+ }
+ }
+}
+
+struct QueryNormalizer<'cx, 'tcx> {
+ infcx: &'cx InferCtxt<'cx, 'tcx>,
+ cause: &'cx ObligationCause<'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ obligations: Vec<PredicateObligation<'tcx>>,
+ error: bool,
+ anon_depth: usize,
+}
+
+impl<'cx, 'tcx> TypeFolder<'tcx> for QueryNormalizer<'cx, 'tcx> {
+ fn tcx<'c>(&'c self) -> TyCtxt<'tcx> {
+ self.infcx.tcx
+ }
+
+ fn fold_ty(&mut self, ty: Ty<'tcx>) -> Ty<'tcx> {
+ if !ty.has_projections() {
+ return ty;
+ }
+
+ let ty = ty.super_fold_with(self);
+ match ty.kind {
+ ty::Opaque(def_id, substs) if !substs.has_escaping_bound_vars() => {
+ // (*)
+ // Only normalize `impl Trait` after type-checking, usually in codegen.
+ match self.param_env.reveal {
+ Reveal::UserFacing => ty,
+
+ Reveal::All => {
+ let recursion_limit = *self.tcx().sess.recursion_limit.get();
+ if self.anon_depth >= recursion_limit {
+ let obligation = Obligation::with_depth(
+ self.cause.clone(),
+ recursion_limit,
+ self.param_env,
+ ty,
+ );
+ self.infcx.report_overflow_error(&obligation, true);
+ }
+
+ let generic_ty = self.tcx().type_of(def_id);
+ let concrete_ty = generic_ty.subst(self.tcx(), substs);
+ self.anon_depth += 1;
+ if concrete_ty == ty {
+ bug!(
+ "infinite recursion generic_ty: {:#?}, substs: {:#?}, \
+ concrete_ty: {:#?}, ty: {:#?}",
+ generic_ty,
+ substs,
+ concrete_ty,
+ ty
+ );
+ }
+ let folded_ty = self.fold_ty(concrete_ty);
+ self.anon_depth -= 1;
+ folded_ty
+ }
+ }
+ }
+
+ ty::Projection(ref data) if !data.has_escaping_bound_vars() => {
+ // (*)
+ // (*) This is kind of hacky -- we need to be able to
+ // handle normalization within binders because
+ // otherwise we wind up needing to normalize when doing
+ // trait matching (since you can have a trait
+ // obligation like `for<'a> T::B : Fn(&'a i32)`), but
+ // we can't normalize with bound regions in scope. For
+ // now we just ignore binders but only normalize
+ // if all bound regions are gone (and then we still
+ // have to renormalize whenever we instantiate a
+ // binder). It would be better to normalize in a
+ // binding-aware fashion.
+
+ let tcx = self.infcx.tcx;
+
+ let mut orig_values = OriginalQueryValues::default();
+ // HACK(matthewjasper) `'static` is special-cased in selection,
+ // so we cannot canonicalize it.
+ let c_data = self
+ .infcx
+ .canonicalize_hr_query_hack(&self.param_env.and(*data), &mut orig_values);
+ debug!("QueryNormalizer: c_data = {:#?}", c_data);
+ debug!("QueryNormalizer: orig_values = {:#?}", orig_values);
+ match tcx.normalize_projection_ty(c_data) {
+ Ok(result) => {
+ // We don't expect ambiguity.
+ if result.is_ambiguous() {
+ self.error = true;
+ return ty;
+ }
+
+ match self.infcx.instantiate_query_response_and_region_obligations(
+ self.cause,
+ self.param_env,
+ &orig_values,
+ &result,
+ ) {
+ Ok(InferOk { value: result, obligations }) => {
+ debug!("QueryNormalizer: result = {:#?}", result);
+ debug!("QueryNormalizer: obligations = {:#?}", obligations);
+ self.obligations.extend(obligations);
+ return result.normalized_ty;
+ }
+
+ Err(_) => {
+ self.error = true;
+ return ty;
+ }
+ }
+ }
+
+ Err(NoSolution) => {
+ self.error = true;
+ ty
+ }
+ }
+ }
+
+ _ => ty,
+ }
+ }
+
+ fn fold_const(&mut self, constant: &'tcx ty::Const<'tcx>) -> &'tcx ty::Const<'tcx> {
+ constant.eval(self.infcx.tcx, self.param_env)
+ }
+}
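From the user's side, the normalization this query implements is what turns a projection like `<Double as Mapper>::Output` into a concrete type. A minimal sketch (the trait and type names are invented for the example, not part of the patch):

```rust
trait Mapper {
    type Output;
}

struct Double;

impl Mapper for Double {
    type Output = u64;
}

// The signature mentions the projection `M::Output`; the compiler
// normalizes it to `u64` once `M = Double` is known.
fn pass_through<M: Mapper>(v: M::Output) -> M::Output {
    v
}

fn main() {
    let x: u64 = pass_through::<Double>(21) * 2;
    assert_eq!(x, 42);
}
```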
--- /dev/null
+use crate::infer::canonical::OriginalQueryValues;
+use crate::infer::InferCtxt;
+use crate::traits::query::NoSolution;
+use crate::traits::{FulfillmentContext, ObligationCause, TraitEngine};
+use rustc::ty::{self, Ty};
+use rustc_hir as hir;
+use rustc_infer::traits::TraitEngineExt as _;
+use rustc_span::source_map::Span;
+
+pub use rustc::traits::query::OutlivesBound;
+
+pub trait InferCtxtExt<'tcx> {
+ fn implied_outlives_bounds(
+ &self,
+ param_env: ty::ParamEnv<'tcx>,
+ body_id: hir::HirId,
+ ty: Ty<'tcx>,
+ span: Span,
+ ) -> Vec<OutlivesBound<'tcx>>;
+}
+
+impl<'cx, 'tcx> InferCtxtExt<'tcx> for InferCtxt<'cx, 'tcx> {
+ /// Implied bounds are region relationships that we deduce
+ /// automatically. The idea is that (e.g.) a caller must check that a
+ /// function's argument types are well-formed immediately before
+ /// calling that fn, and hence the *callee* can assume that its
+ /// argument types are well-formed. This may imply certain relationships
+ /// between generic parameters. For example:
+ ///
+ /// fn foo<'a,T>(x: &'a T)
+ ///
+ /// can only be called with a `'a` and `T` such that `&'a T` is WF.
+ /// For `&'a T` to be WF, `T: 'a` must hold. So we can assume `T: 'a`.
+ ///
+ /// # Parameters
+ ///
+ /// - `param_env`, the where-clauses in scope
+ /// - `body_id`, the body-id to use when normalizing assoc types.
+ /// Note that this may cause outlives obligations to be injected
+ /// into the inference context with this body-id.
+ /// - `ty`, the type that we are supposed to assume is WF.
+ /// - `span`, a span to use when normalizing, hopefully not important,
+ /// might be useful if a `bug!` occurs.
+ fn implied_outlives_bounds(
+ &self,
+ param_env: ty::ParamEnv<'tcx>,
+ body_id: hir::HirId,
+ ty: Ty<'tcx>,
+ span: Span,
+ ) -> Vec<OutlivesBound<'tcx>> {
+ debug!("implied_outlives_bounds(ty = {:?})", ty);
+
+ let mut orig_values = OriginalQueryValues::default();
+ let key = self.canonicalize_query(&param_env.and(ty), &mut orig_values);
+ let result = match self.tcx.implied_outlives_bounds(key) {
+ Ok(r) => r,
+ Err(NoSolution) => {
+ self.tcx.sess.delay_span_bug(
+ span,
+ "implied_outlives_bounds failed to solve all obligations",
+ );
+ return vec![];
+ }
+ };
+ assert!(result.value.is_proven());
+
+ let result = self.instantiate_query_response_and_region_obligations(
+ &ObligationCause::misc(span, body_id),
+ param_env,
+ &orig_values,
+ &result,
+ );
+ debug!("implied_outlives_bounds for {:?}: {:#?}", ty, result);
+ let result = match result {
+ Ok(v) => v,
+ Err(_) => {
+ self.tcx.sess.delay_span_bug(span, "implied_outlives_bounds failed to instantiate");
+ return vec![];
+ }
+ };
+
+ // Instantiation may have produced new inference variables and constraints on those
+ // variables. Process these constraints.
+ let mut fulfill_cx = FulfillmentContext::new();
+ fulfill_cx.register_predicate_obligations(self, result.obligations);
+ if fulfill_cx.select_all_or_error(self).is_err() {
+ self.tcx.sess.delay_span_bug(
+ span,
+ "implied_outlives_bounds failed to solve obligations from instantiation",
+ );
+ }
+
+ result.value
+ }
+}
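The implied-bounds rule documented above can be seen at the surface level of the language. The snippet below is a minimal, hypothetical illustration (the function names are made up, not part of the compiler): `print_ref` never declares `T: 'a`, yet its body may rely on it, because every caller must prove `&'a T` well-formed, and `&'a T` is WF only if `T: 'a`.

```rust
// Hypothetical illustration of implied outlives bounds: no explicit
// `T: 'a` bound is written, but it is implied by the WF-ness of `&'a T`.
fn print_ref<'a, T: std::fmt::Debug>(x: &'a T) -> String {
    // Using `x` for the whole of 'a is accepted thanks to the implied `T: 'a`.
    format!("{:?}", x)
}
```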
--- /dev/null
+use crate::infer::canonical::{Canonicalized, CanonicalizedQueryResponse};
+use crate::traits::query::Fallible;
+use rustc::ty::{ParamEnvAnd, TyCtxt};
+
+pub use rustc::traits::query::type_op::AscribeUserType;
+
+impl<'tcx> super::QueryTypeOp<'tcx> for AscribeUserType<'tcx> {
+ type QueryResponse = ();
+
+ fn try_fast_path(
+ _tcx: TyCtxt<'tcx>,
+ _key: &ParamEnvAnd<'tcx, Self>,
+ ) -> Option<Self::QueryResponse> {
+ None
+ }
+
+ fn perform_query(
+ tcx: TyCtxt<'tcx>,
+ canonicalized: Canonicalized<'tcx, ParamEnvAnd<'tcx, Self>>,
+ ) -> Fallible<CanonicalizedQueryResponse<'tcx, ()>> {
+ tcx.type_op_ascribe_user_type(canonicalized)
+ }
+}
--- /dev/null
+use crate::infer::{InferCtxt, InferOk};
+use crate::traits::query::Fallible;
+use std::fmt;
+
+use crate::infer::canonical::query_response;
+use crate::infer::canonical::QueryRegionConstraints;
+use crate::traits::engine::TraitEngineExt as _;
+use crate::traits::{ObligationCause, TraitEngine};
+use rustc_infer::traits::TraitEngineExt as _;
+use rustc_span::source_map::DUMMY_SP;
+use std::rc::Rc;
+
+pub struct CustomTypeOp<F, G> {
+ closure: F,
+ description: G,
+}
+
+impl<F, G> CustomTypeOp<F, G> {
+ pub fn new<'tcx, R>(closure: F, description: G) -> Self
+ where
+ F: FnOnce(&InferCtxt<'_, 'tcx>) -> Fallible<InferOk<'tcx, R>>,
+ G: Fn() -> String,
+ {
+ CustomTypeOp { closure, description }
+ }
+}
+
+impl<'tcx, F, R, G> super::TypeOp<'tcx> for CustomTypeOp<F, G>
+where
+ F: for<'a, 'cx> FnOnce(&'a InferCtxt<'cx, 'tcx>) -> Fallible<InferOk<'tcx, R>>,
+ G: Fn() -> String,
+{
+ type Output = R;
+
+ /// Processes the operation and all resulting obligations,
+ /// returning the final result along with any region constraints
+ /// (they will be given over to the NLL region solver).
+ fn fully_perform(
+ self,
+ infcx: &InferCtxt<'_, 'tcx>,
+ ) -> Fallible<(Self::Output, Option<Rc<QueryRegionConstraints<'tcx>>>)> {
+ if cfg!(debug_assertions) {
+ info!("fully_perform({:?})", self);
+ }
+
+ scrape_region_constraints(infcx, || Ok((self.closure)(infcx)?))
+ }
+}
+
+impl<F, G> fmt::Debug for CustomTypeOp<F, G>
+where
+ G: Fn() -> String,
+{
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ write!(f, "{}", (self.description)())
+ }
+}
+
+/// Executes `op` and then scrapes out all the "old style" region
+/// constraints that result, creating query-region-constraints.
+fn scrape_region_constraints<'tcx, R>(
+ infcx: &InferCtxt<'_, 'tcx>,
+ op: impl FnOnce() -> Fallible<InferOk<'tcx, R>>,
+) -> Fallible<(R, Option<Rc<QueryRegionConstraints<'tcx>>>)> {
+ let mut fulfill_cx = TraitEngine::new(infcx.tcx);
+ let dummy_body_id = ObligationCause::dummy().body_id;
+
+ // During NLL, we expect that nobody will register region
+ // obligations **except** as part of a custom type op (and, at the
+ // end of each custom type op, we scrape out the region
+ // obligations that resulted). So this vector should be empty on
+ // entry.
+ let pre_obligations = infcx.take_registered_region_obligations();
+ assert!(
+ pre_obligations.is_empty(),
+ "scrape_region_constraints: incoming region obligations = {:#?}",
+ pre_obligations,
+ );
+
+ let InferOk { value, obligations } = infcx.commit_if_ok(|_| op())?;
+ debug_assert!(obligations.iter().all(|o| o.cause.body_id == dummy_body_id));
+ fulfill_cx.register_predicate_obligations(infcx, obligations);
+ if let Err(e) = fulfill_cx.select_all_or_error(infcx) {
+ infcx.tcx.sess.diagnostic().delay_span_bug(
+ DUMMY_SP,
+ &format!("errors selecting obligation during MIR typeck: {:?}", e),
+ );
+ }
+
+ let region_obligations = infcx.take_registered_region_obligations();
+
+ let region_constraint_data = infcx.take_and_reset_region_constraints();
+
+ let region_constraints = query_response::make_query_region_constraints(
+ infcx.tcx,
+ region_obligations
+ .iter()
+ .map(|(_, r_o)| (r_o.sup_type, r_o.sub_region))
+ .map(|(ty, r)| (infcx.resolve_vars_if_possible(&ty), r)),
+ &region_constraint_data,
+ );
+
+ if region_constraints.is_empty() {
+ Ok((value, None))
+ } else {
+ Ok((value, Some(Rc::new(region_constraints))))
+ }
+}
--- /dev/null
+use crate::infer::canonical::{Canonicalized, CanonicalizedQueryResponse};
+use crate::traits::query::Fallible;
+use rustc::ty::{ParamEnvAnd, TyCtxt};
+
+pub use rustc::traits::query::type_op::Eq;
+
+impl<'tcx> super::QueryTypeOp<'tcx> for Eq<'tcx> {
+ type QueryResponse = ();
+
+ fn try_fast_path(
+ _tcx: TyCtxt<'tcx>,
+ key: &ParamEnvAnd<'tcx, Eq<'tcx>>,
+ ) -> Option<Self::QueryResponse> {
+ if key.value.a == key.value.b { Some(()) } else { None }
+ }
+
+ fn perform_query(
+ tcx: TyCtxt<'tcx>,
+ canonicalized: Canonicalized<'tcx, ParamEnvAnd<'tcx, Self>>,
+ ) -> Fallible<CanonicalizedQueryResponse<'tcx, ()>> {
+ tcx.type_op_eq(canonicalized)
+ }
+}
--- /dev/null
+use crate::infer::canonical::{Canonicalized, CanonicalizedQueryResponse};
+use crate::traits::query::outlives_bounds::OutlivesBound;
+use crate::traits::query::Fallible;
+use rustc::ty::{ParamEnvAnd, Ty, TyCtxt};
+
+#[derive(Clone, Debug, HashStable, TypeFoldable, Lift)]
+pub struct ImpliedOutlivesBounds<'tcx> {
+ pub ty: Ty<'tcx>,
+}
+
+impl<'tcx> ImpliedOutlivesBounds<'tcx> {
+ pub fn new(ty: Ty<'tcx>) -> Self {
+ ImpliedOutlivesBounds { ty }
+ }
+}
+
+impl<'tcx> super::QueryTypeOp<'tcx> for ImpliedOutlivesBounds<'tcx> {
+ type QueryResponse = Vec<OutlivesBound<'tcx>>;
+
+ fn try_fast_path(
+ _tcx: TyCtxt<'tcx>,
+ _key: &ParamEnvAnd<'tcx, Self>,
+ ) -> Option<Self::QueryResponse> {
+ None
+ }
+
+ fn perform_query(
+ tcx: TyCtxt<'tcx>,
+ canonicalized: Canonicalized<'tcx, ParamEnvAnd<'tcx, Self>>,
+ ) -> Fallible<CanonicalizedQueryResponse<'tcx, Self::QueryResponse>> {
+ // FIXME this `unchecked_map` is only necessary because the
+ // query is defined as taking a `ParamEnvAnd<Ty>`; it should
+ // take a `ImpliedOutlivesBounds` instead
+ let canonicalized = canonicalized.unchecked_map(|ParamEnvAnd { param_env, value }| {
+ let ImpliedOutlivesBounds { ty } = value;
+ param_env.and(ty)
+ });
+
+ tcx.implied_outlives_bounds(canonicalized)
+ }
+}
--- /dev/null
+use crate::infer::canonical::{
+ Canonicalized, CanonicalizedQueryResponse, OriginalQueryValues, QueryRegionConstraints,
+};
+use crate::infer::{InferCtxt, InferOk};
+use crate::traits::query::Fallible;
+use crate::traits::ObligationCause;
+use rustc::ty::fold::TypeFoldable;
+use rustc::ty::{ParamEnvAnd, TyCtxt};
+use std::fmt;
+use std::rc::Rc;
+
+pub mod ascribe_user_type;
+pub mod custom;
+pub mod eq;
+pub mod implied_outlives_bounds;
+pub mod normalize;
+pub mod outlives;
+pub mod prove_predicate;
+use self::prove_predicate::ProvePredicate;
+pub mod subtype;
+
+pub use rustc::traits::query::type_op::*;
+
+/// "Type ops" are used in NLL to perform some particular action and
+/// extract out the resulting region constraints (or an error if it
+/// cannot be completed).
+pub trait TypeOp<'tcx>: Sized + fmt::Debug {
+ type Output;
+
+ /// Processes the operation and all resulting obligations,
+ /// returning the final result along with any region constraints
+ /// (they will be given over to the NLL region solver).
+ fn fully_perform(
+ self,
+ infcx: &InferCtxt<'_, 'tcx>,
+ ) -> Fallible<(Self::Output, Option<Rc<QueryRegionConstraints<'tcx>>>)>;
+}
+
+/// "Query type ops" are type ops that are implemented using a
+/// [canonical query][c]. The `Self` type here contains the kernel of
+/// information needed to do the operation -- `TypeOp` is actually
+/// implemented for `ParamEnvAnd<Self>`, since we always need to bring
+/// along a parameter environment as well. For query type-ops, we will
+/// first canonicalize the key and then invoke the query on the tcx,
+/// which produces the resulting query region constraints.
+///
+/// [c]: https://rustc-dev-guide.rust-lang.org/traits/canonicalization.html
+pub trait QueryTypeOp<'tcx>: fmt::Debug + Sized + TypeFoldable<'tcx> + 'tcx {
+ type QueryResponse: TypeFoldable<'tcx>;
+
+ /// Gives the query the option of taking a simple fast path that
+ /// never actually hits the tcx cache lookup, etc. Returns `Some(r)`
+ /// with a final result, or `None` to take the full path.
+ fn try_fast_path(
+ tcx: TyCtxt<'tcx>,
+ key: &ParamEnvAnd<'tcx, Self>,
+ ) -> Option<Self::QueryResponse>;
+
+ /// Performs the actual query with the canonicalized key -- the
+ /// real work happens here. This method is not given an `infcx`
+ /// because it shouldn't need one -- and if it had access to one,
+ /// it might do things like invoke `sub_regions`, which would be
+ /// bad, because it would create subregion relationships that are
+ /// not captured in the return value.
+ fn perform_query(
+ tcx: TyCtxt<'tcx>,
+ canonicalized: Canonicalized<'tcx, ParamEnvAnd<'tcx, Self>>,
+ ) -> Fallible<CanonicalizedQueryResponse<'tcx, Self::QueryResponse>>;
+
+ fn fully_perform_into(
+ query_key: ParamEnvAnd<'tcx, Self>,
+ infcx: &InferCtxt<'_, 'tcx>,
+ output_query_region_constraints: &mut QueryRegionConstraints<'tcx>,
+ ) -> Fallible<Self::QueryResponse> {
+ if let Some(result) = QueryTypeOp::try_fast_path(infcx.tcx, &query_key) {
+ return Ok(result);
+ }
+
+ // FIXME(#33684) -- We need to use
+ // `canonicalize_hr_query_hack` here because of things
+ // like the subtype query, which go awry around
+ // `'static` otherwise.
+ let mut canonical_var_values = OriginalQueryValues::default();
+ let canonical_self =
+ infcx.canonicalize_hr_query_hack(&query_key, &mut canonical_var_values);
+ let canonical_result = Self::perform_query(infcx.tcx, canonical_self)?;
+
+ let param_env = query_key.param_env;
+
+ let InferOk { value, obligations } = infcx
+ .instantiate_nll_query_response_and_region_obligations(
+ &ObligationCause::dummy(),
+ param_env,
+ &canonical_var_values,
+ canonical_result,
+ output_query_region_constraints,
+ )?;
+
+ // Typically, instantiating NLL query results does not
+ // create obligations. However, in some cases there
+ // are unresolved type variables, and unifying them *can*
+ // create obligations. In that case, we have to go
+ // fulfill them. We do this via a (recursive) query.
+ for obligation in obligations {
+ let () = ProvePredicate::fully_perform_into(
+ obligation.param_env.and(ProvePredicate::new(obligation.predicate)),
+ infcx,
+ output_query_region_constraints,
+ )?;
+ }
+
+ Ok(value)
+ }
+}
+
+impl<'tcx, Q> TypeOp<'tcx> for ParamEnvAnd<'tcx, Q>
+where
+ Q: QueryTypeOp<'tcx>,
+{
+ type Output = Q::QueryResponse;
+
+ fn fully_perform(
+ self,
+ infcx: &InferCtxt<'_, 'tcx>,
+ ) -> Fallible<(Self::Output, Option<Rc<QueryRegionConstraints<'tcx>>>)> {
+ let mut region_constraints = QueryRegionConstraints::default();
+ let r = Q::fully_perform_into(self, infcx, &mut region_constraints)?;
+
+ // Promote the final query-region-constraints into a
+ // (optional) ref-counted vector:
+ let opt_qrc =
+ if region_constraints.is_empty() { None } else { Some(Rc::new(region_constraints)) };
+
+ Ok((r, opt_qrc))
+ }
+}
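The two-level split defined in this module (`TypeOp` as the caller-facing operation, `QueryTypeOp` supplying only a fast path plus a query body, and a blanket impl doing the plumbing and collecting side constraints) can be sketched in standalone form. Everything below is a simplified, hypothetical mock of the pattern, not the rustc API:

```rust
// Hypothetical sketch of the TypeOp / QueryTypeOp pattern.
trait QueryOp: Sized {
    type Response;
    /// Cheap check that may answer without running the full query.
    fn try_fast_path(&self) -> Option<Self::Response>;
    /// The full query; may record side constraints.
    fn perform(self, constraints: &mut Vec<String>) -> Self::Response;
}

trait Op {
    type Output;
    fn fully_perform(self) -> (Self::Output, Vec<String>);
}

// Blanket impl: every query op is an op, with the fast path tried first.
impl<Q: QueryOp> Op for Q {
    type Output = Q::Response;
    fn fully_perform(self) -> (Self::Output, Vec<String>) {
        let mut constraints = Vec::new();
        if let Some(r) = self.try_fast_path() {
            return (r, constraints); // fast path: no constraints scraped
        }
        let r = self.perform(&mut constraints);
        (r, constraints)
    }
}

// A toy analogue of the `Eq` type op: equal operands short-circuit.
struct EqOp(u32, u32);

impl QueryOp for EqOp {
    type Response = bool;
    fn try_fast_path(&self) -> Option<bool> {
        if self.0 == self.1 { Some(true) } else { None }
    }
    fn perform(self, constraints: &mut Vec<String>) -> bool {
        constraints.push(format!("{} == {}", self.0, self.1));
        self.0 == self.1
    }
}
```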
--- /dev/null
+use crate::infer::canonical::{Canonicalized, CanonicalizedQueryResponse};
+use crate::traits::query::Fallible;
+use rustc::ty::fold::TypeFoldable;
+use rustc::ty::{self, Lift, ParamEnvAnd, Ty, TyCtxt};
+use std::fmt;
+
+pub use rustc::traits::query::type_op::Normalize;
+
+impl<'tcx, T> super::QueryTypeOp<'tcx> for Normalize<T>
+where
+ T: Normalizable<'tcx> + 'tcx,
+{
+ type QueryResponse = T;
+
+ fn try_fast_path(_tcx: TyCtxt<'tcx>, key: &ParamEnvAnd<'tcx, Self>) -> Option<T> {
+ if !key.value.value.has_projections() { Some(key.value.value) } else { None }
+ }
+
+ fn perform_query(
+ tcx: TyCtxt<'tcx>,
+ canonicalized: Canonicalized<'tcx, ParamEnvAnd<'tcx, Self>>,
+ ) -> Fallible<CanonicalizedQueryResponse<'tcx, Self::QueryResponse>> {
+ T::type_op_method(tcx, canonicalized)
+ }
+}
+
+pub trait Normalizable<'tcx>: fmt::Debug + TypeFoldable<'tcx> + Lift<'tcx> + Copy {
+ fn type_op_method(
+ tcx: TyCtxt<'tcx>,
+ canonicalized: Canonicalized<'tcx, ParamEnvAnd<'tcx, Normalize<Self>>>,
+ ) -> Fallible<CanonicalizedQueryResponse<'tcx, Self>>;
+}
+
+impl Normalizable<'tcx> for Ty<'tcx> {
+ fn type_op_method(
+ tcx: TyCtxt<'tcx>,
+ canonicalized: Canonicalized<'tcx, ParamEnvAnd<'tcx, Normalize<Self>>>,
+ ) -> Fallible<CanonicalizedQueryResponse<'tcx, Self>> {
+ tcx.type_op_normalize_ty(canonicalized)
+ }
+}
+
+impl Normalizable<'tcx> for ty::Predicate<'tcx> {
+ fn type_op_method(
+ tcx: TyCtxt<'tcx>,
+ canonicalized: Canonicalized<'tcx, ParamEnvAnd<'tcx, Normalize<Self>>>,
+ ) -> Fallible<CanonicalizedQueryResponse<'tcx, Self>> {
+ tcx.type_op_normalize_predicate(canonicalized)
+ }
+}
+
+impl Normalizable<'tcx> for ty::PolyFnSig<'tcx> {
+ fn type_op_method(
+ tcx: TyCtxt<'tcx>,
+ canonicalized: Canonicalized<'tcx, ParamEnvAnd<'tcx, Normalize<Self>>>,
+ ) -> Fallible<CanonicalizedQueryResponse<'tcx, Self>> {
+ tcx.type_op_normalize_poly_fn_sig(canonicalized)
+ }
+}
+
+impl Normalizable<'tcx> for ty::FnSig<'tcx> {
+ fn type_op_method(
+ tcx: TyCtxt<'tcx>,
+ canonicalized: Canonicalized<'tcx, ParamEnvAnd<'tcx, Normalize<Self>>>,
+ ) -> Fallible<CanonicalizedQueryResponse<'tcx, Self>> {
+ tcx.type_op_normalize_fn_sig(canonicalized)
+ }
+}
--- /dev/null
+use crate::infer::canonical::{Canonicalized, CanonicalizedQueryResponse};
+use crate::traits::query::dropck_outlives::{trivial_dropck_outlives, DropckOutlivesResult};
+use crate::traits::query::Fallible;
+use rustc::ty::{ParamEnvAnd, Ty, TyCtxt};
+
+#[derive(Copy, Clone, Debug, HashStable, TypeFoldable, Lift)]
+pub struct DropckOutlives<'tcx> {
+ dropped_ty: Ty<'tcx>,
+}
+
+impl<'tcx> DropckOutlives<'tcx> {
+ pub fn new(dropped_ty: Ty<'tcx>) -> Self {
+ DropckOutlives { dropped_ty }
+ }
+}
+
+impl super::QueryTypeOp<'tcx> for DropckOutlives<'tcx> {
+ type QueryResponse = DropckOutlivesResult<'tcx>;
+
+ fn try_fast_path(
+ tcx: TyCtxt<'tcx>,
+ key: &ParamEnvAnd<'tcx, Self>,
+ ) -> Option<Self::QueryResponse> {
+ if trivial_dropck_outlives(tcx, key.value.dropped_ty) {
+ Some(DropckOutlivesResult::default())
+ } else {
+ None
+ }
+ }
+
+ fn perform_query(
+ tcx: TyCtxt<'tcx>,
+ canonicalized: Canonicalized<'tcx, ParamEnvAnd<'tcx, Self>>,
+ ) -> Fallible<CanonicalizedQueryResponse<'tcx, Self::QueryResponse>> {
+ // Subtle: note that we are not invoking
+ // `infcx.at(...).dropck_outlives(...)` here, but rather the
+ // underlying `dropck_outlives` query. This same underlying
+ // query is also used by the
+ // `infcx.at(...).dropck_outlives(...)` fn. Avoiding the
+ // wrapper means we don't need an infcx in this code, which is
+ // good because the interface doesn't give us one (so that we
+ // know we are not registering any subregion relations or
+ // other things).
+
+ // FIXME convert to the type expected by the `dropck_outlives`
+ // query. This should eventually be fixed by changing the
+ // *underlying query*.
+ let canonicalized = canonicalized.unchecked_map(|ParamEnvAnd { param_env, value }| {
+ let DropckOutlives { dropped_ty } = value;
+ param_env.and(dropped_ty)
+ });
+
+ tcx.dropck_outlives(canonicalized)
+ }
+}
--- /dev/null
+use crate::infer::canonical::{Canonicalized, CanonicalizedQueryResponse};
+use crate::traits::query::Fallible;
+use rustc::ty::{ParamEnvAnd, Predicate, TyCtxt};
+
+pub use rustc::traits::query::type_op::ProvePredicate;
+
+impl<'tcx> super::QueryTypeOp<'tcx> for ProvePredicate<'tcx> {
+ type QueryResponse = ();
+
+ fn try_fast_path(
+ tcx: TyCtxt<'tcx>,
+ key: &ParamEnvAnd<'tcx, Self>,
+ ) -> Option<Self::QueryResponse> {
+ // Proving Sized, very often on "obviously sized" types like
+ // `&T`, accounts for about 60% of the predicates
+ // we have to prove. No need to canonicalize and all that for
+ // such cases.
+ if let Predicate::Trait(trait_ref, _) = key.value.predicate {
+ if let Some(sized_def_id) = tcx.lang_items().sized_trait() {
+ if trait_ref.def_id() == sized_def_id {
+ if trait_ref.skip_binder().self_ty().is_trivially_sized(tcx) {
+ return Some(());
+ }
+ }
+ }
+ }
+
+ None
+ }
+
+ fn perform_query(
+ tcx: TyCtxt<'tcx>,
+ canonicalized: Canonicalized<'tcx, ParamEnvAnd<'tcx, Self>>,
+ ) -> Fallible<CanonicalizedQueryResponse<'tcx, ()>> {
+ tcx.type_op_prove_predicate(canonicalized)
+ }
+}
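As a standalone sanity check of the reasoning behind this fast path (a hypothetical sketch, not compiler code): a reference type is `Sized` regardless of its referent, so `(&T): Sized` obligations can be discharged without canonicalizing and running the full query.

```rust
// Hypothetical demo: `&T` is Sized no matter what `T` is.
fn is_sized<T: Sized>(_: &T) -> bool {
    true
}

fn ref_is_trivially_sized(s: &str) -> bool {
    // `&str` itself is Sized even though `str` is not.
    is_sized(&s)
}
```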
--- /dev/null
+use crate::infer::canonical::{Canonicalized, CanonicalizedQueryResponse};
+use crate::traits::query::Fallible;
+use rustc::ty::{ParamEnvAnd, TyCtxt};
+
+pub use rustc::traits::query::type_op::Subtype;
+
+impl<'tcx> super::QueryTypeOp<'tcx> for Subtype<'tcx> {
+ type QueryResponse = ();
+
+ fn try_fast_path(_tcx: TyCtxt<'tcx>, key: &ParamEnvAnd<'tcx, Self>) -> Option<()> {
+ if key.value.sub == key.value.sup { Some(()) } else { None }
+ }
+
+ fn perform_query(
+ tcx: TyCtxt<'tcx>,
+ canonicalized: Canonicalized<'tcx, ParamEnvAnd<'tcx, Self>>,
+ ) -> Fallible<CanonicalizedQueryResponse<'tcx, ()>> {
+ tcx.type_op_subtype(canonicalized)
+ }
+}
--- /dev/null
+// ignore-tidy-filelength
+
+//! Candidate selection. See the [rustc dev guide] for more information on how this works.
+//!
+//! [rustc dev guide]: https://rustc-dev-guide.rust-lang.org/traits/resolution.html#selection
+
+use self::EvaluationResult::*;
+use self::SelectionCandidate::*;
+
+use super::coherence::{self, Conflict};
+use super::project;
+use super::project::{normalize_with_depth, normalize_with_depth_to};
+use super::util;
+use super::util::{closure_trait_ref_and_return_type, predicate_for_trait_def};
+use super::wf;
+use super::DerivedObligationCause;
+use super::Selection;
+use super::SelectionResult;
+use super::TraitNotObjectSafe;
+use super::TraitQueryMode;
+use super::{BuiltinDerivedObligation, ImplDerivedObligation, ObligationCauseCode};
+use super::{Normalized, ProjectionCacheKey};
+use super::{ObjectCastObligation, Obligation};
+use super::{ObligationCause, PredicateObligation, TraitObligation};
+use super::{OutputTypeParameterMismatch, Overflow, SelectionError, Unimplemented};
+use super::{
+ VtableAutoImpl, VtableBuiltin, VtableClosure, VtableFnPointer, VtableGenerator, VtableImpl,
+ VtableObject, VtableParam, VtableTraitAlias,
+};
+use super::{
+ VtableAutoImplData, VtableBuiltinData, VtableClosureData, VtableFnPointerData,
+ VtableGeneratorData, VtableImplData, VtableObjectData, VtableTraitAliasData,
+};
+
+use crate::infer::{CombinedSnapshot, InferCtxt, InferOk, PlaceholderMap, TypeFreshener};
+use crate::traits::error_reporting::InferCtxtExt;
+use crate::traits::project::ProjectionCacheKeyExt;
+use rustc::dep_graph::{DepKind, DepNodeIndex};
+use rustc::middle::lang_items;
+use rustc::ty::fast_reject;
+use rustc::ty::relate::TypeRelation;
+use rustc::ty::subst::{Subst, SubstsRef};
+use rustc::ty::{self, ToPolyTraitRef, ToPredicate, Ty, TyCtxt, TypeFoldable, WithConstness};
+use rustc_ast::attr;
+use rustc_data_structures::fx::{FxHashMap, FxHashSet};
+use rustc_hir as hir;
+use rustc_hir::def_id::DefId;
+use rustc_index::bit_set::GrowableBitSet;
+use rustc_span::symbol::sym;
+use rustc_target::spec::abi::Abi;
+
+use std::cell::{Cell, RefCell};
+use std::cmp;
+use std::fmt::{self, Display};
+use std::iter;
+use std::rc::Rc;
+
+pub use rustc::traits::select::*;
+
+pub struct SelectionContext<'cx, 'tcx> {
+ infcx: &'cx InferCtxt<'cx, 'tcx>,
+
+ /// Freshener used specifically for entries on the obligation
+ /// stack. This ensures that all entries on the stack at one time
+ /// will have the same set of placeholder entries, which is
+ /// important for checking for trait bounds that recursively
+ /// require themselves.
+ freshener: TypeFreshener<'cx, 'tcx>,
+
+ /// If `true`, indicates that the evaluation should be conservative
+ /// and consider the possibility of types outside this crate.
+ /// This comes up primarily when resolving ambiguity. Imagine
+ /// there is some trait reference `$0: Bar` where `$0` is an
+ /// inference variable. If `intercrate` is true, then we can never
+ /// say for sure that this reference is not implemented, even if
+ /// there are *no impls at all for `Bar`*, because `$0` could be
+ /// bound to some type in a downstream crate that implements
+ /// `Bar`. This is the suitable mode for coherence. Elsewhere,
+ /// though, we set this to false, because we are only interested
+ /// in types that the user could actually have written --- in
+ /// other words, we consider `$0: Bar` to be unimplemented if
+ /// there is no type that the user could *actually name* that
+ /// would satisfy it. This avoids crippling inference, basically.
+ intercrate: bool,
+
+ intercrate_ambiguity_causes: Option<Vec<IntercrateAmbiguityCause>>,
+
+ /// Controls whether or not to filter out negative impls when selecting.
+ /// This is used in librustdoc to distinguish between the lack of an impl
+ /// and a negative impl.
+ allow_negative_impls: bool,
+
+ /// The mode that trait queries run in, which informs our error handling
+ /// policy. In essence, canonicalized queries need their errors propagated
+ /// rather than immediately reported because we do not have accurate spans.
+ query_mode: TraitQueryMode,
+}
+
+// A stack that walks back up the stack frame.
+struct TraitObligationStack<'prev, 'tcx> {
+ obligation: &'prev TraitObligation<'tcx>,
+
+ /// The trait ref from `obligation` but "freshened" with the
+ /// selection-context's freshener. Used to check for recursion.
+ fresh_trait_ref: ty::PolyTraitRef<'tcx>,
+
+ /// Starts out equal to `depth` -- if, during evaluation, we
+ /// encounter a cycle, then we will set this flag to the minimum
+ /// depth of that cycle for all participants in the cycle. These
+ /// participants will then forego caching their results. This is
+ /// not the most efficient solution, but it addresses #60010. The
+ /// problem we are trying to prevent:
+ ///
+ /// - If you have `A: AutoTrait` requires `B: AutoTrait` and `C: NonAutoTrait`
+ /// - `B: AutoTrait` requires `A: AutoTrait` (coinductive cycle, ok)
+ /// - `C: NonAutoTrait` requires `A: AutoTrait` (non-coinductive cycle, not ok)
+ ///
+ /// you don't want to cache that `B: AutoTrait` or `A: AutoTrait`
+ /// is `EvaluatedToOk`; this is because they were only considered
+ /// ok on the premise that `A: AutoTrait` held, but we indeed
+ /// encountered a problem (later on) with `A: AutoTrait`. So we
+ /// currently set a flag on the stack node for `B: AutoTrait` (as
+ /// well as the second instance of `A: AutoTrait`) to suppress
+ /// caching.
+ ///
+ /// This is a simple, targeted fix. A more-performant fix requires
+ /// deeper changes, but would permit more caching: we could
+ /// basically defer caching until we have fully evaluated the
+ /// tree, and then cache the entire tree at once. In any case, the
+ /// performance impact here shouldn't be so horrible: every time
+ /// this is hit, we do cache at least one trait, so we only
+ /// evaluate each member of a cycle up to N times, where N is the
+ /// length of the cycle. This means the performance impact is
+ /// bounded and we shouldn't have any terrible worst-cases.
+ reached_depth: Cell<usize>,
+
+ previous: TraitObligationStackList<'prev, 'tcx>,
+
+ /// The number of parent frames plus one (thus, the topmost frame has depth 1).
+ depth: usize,
+
+ /// The depth-first number of this node in the search graph -- a
+ /// pre-order index. Basically, a freshly incremented counter.
+ dfn: usize,
+}
+
+struct SelectionCandidateSet<'tcx> {
+ // A list of candidates that definitely apply to the current
+ // obligation (meaning: types unify).
+ vec: Vec<SelectionCandidate<'tcx>>,
+
+ // If `true`, then there were candidates that might or might
+ // not have applied, but we couldn't tell. This occurs when some
+ // of the input types are type variables, in which case there are
+ // various "builtin" rules that might or might not trigger.
+ ambiguous: bool,
+}
+
+#[derive(PartialEq, Eq, Debug, Clone)]
+struct EvaluatedCandidate<'tcx> {
+ candidate: SelectionCandidate<'tcx>,
+ evaluation: EvaluationResult,
+}
+
+/// When does the builtin impl for `T: Trait` apply?
+enum BuiltinImplConditions<'tcx> {
+ /// The impl is conditional on `T1, T2, ...: Trait`.
+ Where(ty::Binder<Vec<Ty<'tcx>>>),
+ /// There is no built-in impl. There may be some other
+ /// candidate (a where-clause or user-defined impl).
+ None,
+ /// It is unknown whether there is an impl.
+ Ambiguous,
+}
+
+impl<'cx, 'tcx> SelectionContext<'cx, 'tcx> {
+ pub fn new(infcx: &'cx InferCtxt<'cx, 'tcx>) -> SelectionContext<'cx, 'tcx> {
+ SelectionContext {
+ infcx,
+ freshener: infcx.freshener(),
+ intercrate: false,
+ intercrate_ambiguity_causes: None,
+ allow_negative_impls: false,
+ query_mode: TraitQueryMode::Standard,
+ }
+ }
+
+ pub fn intercrate(infcx: &'cx InferCtxt<'cx, 'tcx>) -> SelectionContext<'cx, 'tcx> {
+ SelectionContext {
+ infcx,
+ freshener: infcx.freshener(),
+ intercrate: true,
+ intercrate_ambiguity_causes: None,
+ allow_negative_impls: false,
+ query_mode: TraitQueryMode::Standard,
+ }
+ }
+
+ pub fn with_negative(
+ infcx: &'cx InferCtxt<'cx, 'tcx>,
+ allow_negative_impls: bool,
+ ) -> SelectionContext<'cx, 'tcx> {
+ debug!("with_negative({:?})", allow_negative_impls);
+ SelectionContext {
+ infcx,
+ freshener: infcx.freshener(),
+ intercrate: false,
+ intercrate_ambiguity_causes: None,
+ allow_negative_impls,
+ query_mode: TraitQueryMode::Standard,
+ }
+ }
+
+ pub fn with_query_mode(
+ infcx: &'cx InferCtxt<'cx, 'tcx>,
+ query_mode: TraitQueryMode,
+ ) -> SelectionContext<'cx, 'tcx> {
+ debug!("with_query_mode({:?})", query_mode);
+ SelectionContext {
+ infcx,
+ freshener: infcx.freshener(),
+ intercrate: false,
+ intercrate_ambiguity_causes: None,
+ allow_negative_impls: false,
+ query_mode,
+ }
+ }
+
+ /// Enables tracking of intercrate ambiguity causes. These are
+ /// used in coherence to give improved diagnostics. We don't do
+ /// this until we detect a coherence error because it can lead to
+ /// false overflow results (#47139) and because it costs
+ /// computation time.
+ pub fn enable_tracking_intercrate_ambiguity_causes(&mut self) {
+ assert!(self.intercrate);
+ assert!(self.intercrate_ambiguity_causes.is_none());
+ self.intercrate_ambiguity_causes = Some(vec![]);
+ debug!("selcx: enable_tracking_intercrate_ambiguity_causes");
+ }
+
+ /// Gets the intercrate ambiguity causes collected since tracking
+ /// was enabled and disables tracking at the same time. If
+ /// tracking is not enabled, just returns an empty vector.
+ pub fn take_intercrate_ambiguity_causes(&mut self) -> Vec<IntercrateAmbiguityCause> {
+ assert!(self.intercrate);
+ self.intercrate_ambiguity_causes.take().unwrap_or(vec![])
+ }
+
+ pub fn infcx(&self) -> &'cx InferCtxt<'cx, 'tcx> {
+ self.infcx
+ }
+
+ pub fn tcx(&self) -> TyCtxt<'tcx> {
+ self.infcx.tcx
+ }
+
+ pub fn closure_typer(&self) -> &'cx InferCtxt<'cx, 'tcx> {
+ self.infcx
+ }
+
+ ///////////////////////////////////////////////////////////////////////////
+ // Selection
+ //
+ // The selection phase tries to identify *how* an obligation will
+ // be resolved. For example, it will identify which impl or
+ // parameter bound is to be used. The process can be inconclusive
+ // if the self type in the obligation is not fully inferred. Selection
+ // can result in an error in one of two ways:
+ //
+ // 1. If no applicable impl or parameter bound can be found.
+ // 2. If the output type parameters in the obligation do not match
+ // those specified by the impl/bound. For example, if the obligation
+ // is `Vec<Foo>: Iterable<Bar>`, but the impl specifies
+ // `impl<T> Iterable<T> for Vec<T>`, then an error would result.
+
+ /// Attempts to satisfy the obligation. If successful, this will affect the surrounding
+ /// type environment by performing unification.
+ pub fn select(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ ) -> SelectionResult<'tcx, Selection<'tcx>> {
+ debug!("select({:?})", obligation);
+ debug_assert!(!obligation.predicate.has_escaping_bound_vars());
+
+ let pec = &ProvisionalEvaluationCache::default();
+ let stack = self.push_stack(TraitObligationStackList::empty(pec), obligation);
+
+ let candidate = match self.candidate_from_obligation(&stack) {
+ Err(SelectionError::Overflow) => {
+ // In standard mode, overflow must have been caught and reported
+ // earlier.
+ assert!(self.query_mode == TraitQueryMode::Canonical);
+ return Err(SelectionError::Overflow);
+ }
+ Err(e) => {
+ return Err(e);
+ }
+ Ok(None) => {
+ return Ok(None);
+ }
+ Ok(Some(candidate)) => candidate,
+ };
+
+ match self.confirm_candidate(obligation, candidate) {
+ Err(SelectionError::Overflow) => {
+ assert!(self.query_mode == TraitQueryMode::Canonical);
+ Err(SelectionError::Overflow)
+ }
+ Err(e) => Err(e),
+ Ok(candidate) => Ok(Some(candidate)),
+ }
+ }
+
+ ///////////////////////////////////////////////////////////////////////////
+ // EVALUATION
+ //
+ // Tests whether an obligation can be selected or whether an impl
+ // can be applied to particular types. It skips the "confirmation"
+ // step and hence completely ignores output type parameters.
+ //
+ // The result is "true" if the obligation *may* hold and "false" if
+ // we can be sure it does not.
+
+ /// Evaluates whether the obligation `obligation` can be satisfied (by any means).
+ pub fn predicate_may_hold_fatal(&mut self, obligation: &PredicateObligation<'tcx>) -> bool {
+ debug!("predicate_may_hold_fatal({:?})", obligation);
+
+ // This fatal query is a stopgap that should only be used in standard mode,
+ // where we do not expect overflow to be propagated.
+ assert!(self.query_mode == TraitQueryMode::Standard);
+
+ self.evaluate_root_obligation(obligation)
+ .expect("Overflow should be caught earlier in standard query mode")
+ .may_apply()
+ }
+
+ /// Evaluates whether the obligation `obligation` can be satisfied
+ /// and returns an `EvaluationResult`. This is meant for the
+ /// *initial* call.
+ pub fn evaluate_root_obligation(
+ &mut self,
+ obligation: &PredicateObligation<'tcx>,
+ ) -> Result<EvaluationResult, OverflowError> {
+ self.evaluation_probe(|this| {
+ this.evaluate_predicate_recursively(
+ TraitObligationStackList::empty(&ProvisionalEvaluationCache::default()),
+ obligation.clone(),
+ )
+ })
+ }
+
+ fn evaluation_probe(
+ &mut self,
+ op: impl FnOnce(&mut Self) -> Result<EvaluationResult, OverflowError>,
+ ) -> Result<EvaluationResult, OverflowError> {
+ self.infcx.probe(|snapshot| -> Result<EvaluationResult, OverflowError> {
+ let result = op(self)?;
+ match self.infcx.region_constraints_added_in_snapshot(snapshot) {
+ None => Ok(result),
+ Some(_) => Ok(result.max(EvaluatedToOkModuloRegions)),
+ }
+ })
+ }
+
+ /// Evaluates the predicates in `predicates` recursively. Note that
+ /// this applies projections in the predicates, and therefore
+ /// is run within an inference probe.
+ fn evaluate_predicates_recursively<'o, I>(
+ &mut self,
+ stack: TraitObligationStackList<'o, 'tcx>,
+ predicates: I,
+ ) -> Result<EvaluationResult, OverflowError>
+ where
+ I: IntoIterator<Item = PredicateObligation<'tcx>>,
+ {
+ let mut result = EvaluatedToOk;
+ for obligation in predicates {
+ let eval = self.evaluate_predicate_recursively(stack, obligation.clone())?;
+ debug!("evaluate_predicate_recursively({:?}) = {:?}", obligation, eval);
+ if let EvaluatedToErr = eval {
+ // Fast path: `EvaluatedToErr` is the top of the lattice,
+ // so we don't need to look at the other predicates.
+ return Ok(EvaluatedToErr);
+ } else {
+ result = cmp::max(result, eval);
+ }
+ }
+ Ok(result)
+ }
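The `cmp::max` fold with an `EvaluatedToErr` fast path treats evaluation results as a lattice whose top element absorbs everything else. A minimal standalone sketch of that folding logic, using a simplified stand-in enum (not the real `EvaluationResult`, which has more variants):

```rust
// Simplified stand-in for rustc's EvaluationResult lattice.
// The derived `Ord` follows declaration order, so `Err` is the top element.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Eval {
    Ok,
    OkModuloRegions,
    Ambig,
    Err,
}

// Fold a sequence of sub-results, short-circuiting once the top of
// the lattice (`Err`) is reached -- no later result can lower it.
fn combine<I: IntoIterator<Item = Eval>>(results: I) -> Eval {
    let mut acc = Eval::Ok;
    for r in results {
        if r == Eval::Err {
            return Eval::Err; // fast path: top of the lattice
        }
        acc = acc.max(r);
    }
    acc
}

fn main() {
    assert_eq!(combine([Eval::Ok, Eval::Ambig, Eval::Ok]), Eval::Ambig);
    assert_eq!(combine([Eval::Ok, Eval::Err, Eval::Ambig]), Eval::Err);
    println!("lattice fold ok");
}
```

The `max` accumulation is why a single ambiguous subobligation degrades the whole result to ambiguous, while any error dominates everything.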
+
+ fn evaluate_predicate_recursively<'o>(
+ &mut self,
+ previous_stack: TraitObligationStackList<'o, 'tcx>,
+ obligation: PredicateObligation<'tcx>,
+ ) -> Result<EvaluationResult, OverflowError> {
+ debug!(
+ "evaluate_predicate_recursively(previous_stack={:?}, obligation={:?})",
+ previous_stack.head(),
+ obligation
+ );
+
+ // `previous_stack` stores a `TraitObligation`, while `obligation` is
+ // a `PredicateObligation`. These are distinct types, so we can't
+ // use any `Option` combinator method that would force them to be
+ // the same.
+ match previous_stack.head() {
+ Some(h) => self.check_recursion_limit(&obligation, h.obligation)?,
+ None => self.check_recursion_limit(&obligation, &obligation)?,
+ }
+
+ match obligation.predicate {
+ ty::Predicate::Trait(ref t, _) => {
+ debug_assert!(!t.has_escaping_bound_vars());
+ let obligation = obligation.with(t.clone());
+ self.evaluate_trait_predicate_recursively(previous_stack, obligation)
+ }
+
+ ty::Predicate::Subtype(ref p) => {
+ // Does this code ever run?
+ match self.infcx.subtype_predicate(&obligation.cause, obligation.param_env, p) {
+ Some(Ok(InferOk { mut obligations, .. })) => {
+ self.add_depth(obligations.iter_mut(), obligation.recursion_depth);
+ self.evaluate_predicates_recursively(
+ previous_stack,
+ obligations.into_iter(),
+ )
+ }
+ Some(Err(_)) => Ok(EvaluatedToErr),
+ None => Ok(EvaluatedToAmbig),
+ }
+ }
+
+ ty::Predicate::WellFormed(ty) => match wf::obligations(
+ self.infcx,
+ obligation.param_env,
+ obligation.cause.body_id,
+ ty,
+ obligation.cause.span,
+ ) {
+ Some(mut obligations) => {
+ self.add_depth(obligations.iter_mut(), obligation.recursion_depth);
+ self.evaluate_predicates_recursively(previous_stack, obligations.into_iter())
+ }
+ None => Ok(EvaluatedToAmbig),
+ },
+
+ ty::Predicate::TypeOutlives(..) | ty::Predicate::RegionOutlives(..) => {
+ // We do not consider region relationships when evaluating trait matches.
+ Ok(EvaluatedToOkModuloRegions)
+ }
+
+ ty::Predicate::ObjectSafe(trait_def_id) => {
+ if self.tcx().is_object_safe(trait_def_id) {
+ Ok(EvaluatedToOk)
+ } else {
+ Ok(EvaluatedToErr)
+ }
+ }
+
+ ty::Predicate::Projection(ref data) => {
+ let project_obligation = obligation.with(data.clone());
+ match project::poly_project_and_unify_type(self, &project_obligation) {
+ Ok(Some(mut subobligations)) => {
+ self.add_depth(subobligations.iter_mut(), obligation.recursion_depth);
+ let result = self.evaluate_predicates_recursively(
+ previous_stack,
+ subobligations.into_iter(),
+ );
+ if let Some(key) =
+ ProjectionCacheKey::from_poly_projection_predicate(self, data)
+ {
+ self.infcx.inner.borrow_mut().projection_cache.complete(key);
+ }
+ result
+ }
+ Ok(None) => Ok(EvaluatedToAmbig),
+ Err(_) => Ok(EvaluatedToErr),
+ }
+ }
+
+ ty::Predicate::ClosureKind(closure_def_id, closure_substs, kind) => {
+ match self.infcx.closure_kind(closure_def_id, closure_substs) {
+ Some(closure_kind) => {
+ if closure_kind.extends(kind) {
+ Ok(EvaluatedToOk)
+ } else {
+ Ok(EvaluatedToErr)
+ }
+ }
+ None => Ok(EvaluatedToAmbig),
+ }
+ }
+
+ ty::Predicate::ConstEvaluatable(def_id, substs) => {
+ match self.tcx().const_eval_resolve(
+ obligation.param_env,
+ def_id,
+ substs,
+ None,
+ None,
+ ) {
+ Ok(_) => Ok(EvaluatedToOk),
+ Err(_) => Ok(EvaluatedToErr),
+ }
+ }
+ }
+ }
+
+ fn evaluate_trait_predicate_recursively<'o>(
+ &mut self,
+ previous_stack: TraitObligationStackList<'o, 'tcx>,
+ mut obligation: TraitObligation<'tcx>,
+ ) -> Result<EvaluationResult, OverflowError> {
+ debug!("evaluate_trait_predicate_recursively({:?})", obligation);
+
+ if !self.intercrate
+ && obligation.is_global()
+ && obligation.param_env.caller_bounds.iter().all(|bound| bound.needs_subst())
+ {
+ // If a param env has no global bounds, global obligations do not
+ // depend on its particular value in order to work, so we can clear
+ // out the param env and get better caching.
+ debug!("evaluate_trait_predicate_recursively({:?}) - in global", obligation);
+ obligation.param_env = obligation.param_env.without_caller_bounds();
+ }
+
+ let stack = self.push_stack(previous_stack, &obligation);
+ let fresh_trait_ref = stack.fresh_trait_ref;
+ if let Some(result) = self.check_evaluation_cache(obligation.param_env, fresh_trait_ref) {
+ debug!("CACHE HIT: EVAL({:?})={:?}", fresh_trait_ref, result);
+ return Ok(result);
+ }
+
+ if let Some(result) = stack.cache().get_provisional(fresh_trait_ref) {
+ debug!("PROVISIONAL CACHE HIT: EVAL({:?})={:?}", fresh_trait_ref, result);
+ stack.update_reached_depth(stack.cache().current_reached_depth());
+ return Ok(result);
+ }
+
+ // Check if this is a match for something already on the
+ // stack. If so, we don't want to insert the result into the
+ // main cache (it is cycle dependent) nor the provisional
+ // cache (which is meant for things that have completed but
+ // for a "backedge" -- this result *is* the backedge).
+ if let Some(cycle_result) = self.check_evaluation_cycle(&stack) {
+ return Ok(cycle_result);
+ }
+
+ let (result, dep_node) = self.in_task(|this| this.evaluate_stack(&stack));
+ let result = result?;
+
+ if !result.must_apply_modulo_regions() {
+ stack.cache().on_failure(stack.dfn);
+ }
+
+ let reached_depth = stack.reached_depth.get();
+ if reached_depth >= stack.depth {
+ debug!("CACHE MISS: EVAL({:?})={:?}", fresh_trait_ref, result);
+ self.insert_evaluation_cache(obligation.param_env, fresh_trait_ref, dep_node, result);
+
+ stack.cache().on_completion(stack.depth, |fresh_trait_ref, provisional_result| {
+ self.insert_evaluation_cache(
+ obligation.param_env,
+ fresh_trait_ref,
+ dep_node,
+ provisional_result.max(result),
+ );
+ });
+ } else {
+ debug!("PROVISIONAL: {:?}={:?}", fresh_trait_ref, result);
+ debug!(
+ "evaluate_trait_predicate_recursively: caching provisionally because {:?} \
+ is a cycle participant (at depth {}, reached depth {})",
+ fresh_trait_ref, stack.depth, reached_depth,
+ );
+
+ stack.cache().insert_provisional(stack.dfn, reached_depth, fresh_trait_ref, result);
+ }
+
+ Ok(result)
+ }
+
+ /// If there is any previous entry on the stack that precisely
+ /// matches this obligation, then we can assume that the
+ /// obligation is satisfied for now (still all other conditions
+ /// must be met of course). One obvious case this comes up is
+ /// marker traits like `Send`. Think of a linked list:
+ ///
+ /// struct List<T> { data: T, next: Option<Box<List<T>>> }
+ ///
+ /// `Box<List<T>>` will be `Send` if `T` is `Send` and
+ /// `Option<Box<List<T>>>` is `Send`, and in turn
+ /// `Option<Box<List<T>>>` is `Send` if `Box<List<T>>` is
+ /// `Send`.
+ ///
+ /// Note that we do this comparison using the `fresh_trait_ref`
+ /// fields. Because these have all been freshened using
+ /// `self.freshener`, we can be sure that (a) this will not
+ /// affect the inferencer state and (b) that if we see two
+ /// fresh regions with the same index, they refer to the same
+ /// unbound type variable.
+ fn check_evaluation_cycle(
+ &mut self,
+ stack: &TraitObligationStack<'_, 'tcx>,
+ ) -> Option<EvaluationResult> {
+ if let Some(cycle_depth) = stack
+ .iter()
+ .skip(1) // Skip top-most frame.
+ .find(|prev| {
+ stack.obligation.param_env == prev.obligation.param_env
+ && stack.fresh_trait_ref == prev.fresh_trait_ref
+ })
+ .map(|stack| stack.depth)
+ {
+ debug!(
+ "evaluate_stack({:?}) --> recursive at depth {}",
+ stack.fresh_trait_ref, cycle_depth,
+ );
+
+ // If we have a stack like `A B C D E A`, where the top of
+ // the stack is the final `A`, then this will iterate over
+ // `A, E, D, C, B` -- i.e., all the participants apart
+ // from the cycle head. We mark them as participating in a
+ // cycle. This suppresses caching for those nodes. See
+ // `in_cycle` field for more details.
+ stack.update_reached_depth(cycle_depth);
+
+ // Subtle: when checking for a coinductive cycle, we do
+ // not compare using the "freshened trait refs" (which
+ // have erased regions) but rather the fully explicit
+ // trait refs. This is important because it's only a cycle
+ // if the regions match exactly.
+ let cycle = stack.iter().skip(1).take_while(|s| s.depth >= cycle_depth);
+ let cycle = cycle.map(|stack| {
+ ty::Predicate::Trait(stack.obligation.predicate, hir::Constness::NotConst)
+ });
+ if self.coinductive_match(cycle) {
+ debug!("evaluate_stack({:?}) --> recursive, coinductive", stack.fresh_trait_ref);
+ Some(EvaluatedToOk)
+ } else {
+ debug!("evaluate_stack({:?}) --> recursive, inductive", stack.fresh_trait_ref);
+ Some(EvaluatedToRecur)
+ }
+ } else {
+ None
+ }
+ }
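The `Send`-through-a-linked-list cycle described in the doc comment above can be reproduced with ordinary user code: proving `List<i32>: Send` requires `Option<Box<List<i32>>>: Send`, which cycles straight back to `List<i32>: Send`, and the coinductive treatment of auto traits accepts that cycle. A self-contained demonstration:

```rust
// The linked list from the doc comment: checking `List<T>: Send`
// recurses through `Option<Box<List<T>>>` back to `List<T>` itself.
struct List<T> {
    data: T,
    next: Option<Box<List<T>>>,
}

// Compile-time probe: this call only type-checks if `T: Send`.
fn assert_send<T: Send>() {}

fn main() {
    // Accepted because auto traits such as `Send` are solved
    // coinductively, so the self-referential cycle counts as success.
    assert_send::<List<i32>>();

    let head = List { data: 1i32, next: None };
    assert!(head.next.is_none());
    println!("List<i32> is Send");
}
```

Were the cycle treated inductively instead, the solver would report `EvaluatedToRecur` and reject this perfectly safe type.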
+
+ fn evaluate_stack<'o>(
+ &mut self,
+ stack: &TraitObligationStack<'o, 'tcx>,
+ ) -> Result<EvaluationResult, OverflowError> {
+ // In intercrate mode, whenever any of the types are unbound,
+ // there can always be an impl. Even if there are no impls in
+ // this crate, perhaps the type would be unified with
+ // something from another crate that does provide an impl.
+ //
+ // In intra mode, we must still be conservative. The reason is
+ // that we want to avoid cycles. Imagine an impl like:
+ //
+ // impl<T:Eq> Eq for Vec<T>
+ //
+ // and a trait reference like `$0 : Eq` where `$0` is an
+ // unbound variable. When we evaluate this trait-reference, we
+ // will unify `$0` with `Vec<$1>` (for some fresh variable
+ // `$1`), on the condition that `$1 : Eq`. We will then wind
+ // up with many candidates (since there are other `Eq` impls
+ // that apply) and try to winnow things down. This results in
+ // a recursive evaluation of `$1 : Eq` -- as you can
+ // imagine, this is just where we started. To avoid that, we
+ // check for unbound variables and return an ambiguous (hence possible)
+ // match if we've seen this trait before.
+ //
+ // This suffices to allow chains like `FnMut` implemented in
+ // terms of `Fn` etc, but we could probably make this more
+ // precise still.
+ let unbound_input_types =
+ stack.fresh_trait_ref.skip_binder().input_types().any(|ty| ty.is_fresh());
+ // This check was an imperfect workaround for a bug in the old
+ // intercrate mode; it should be removed when that goes away.
+ if unbound_input_types && self.intercrate {
+ debug!(
+ "evaluate_stack({:?}) --> unbound argument, intercrate --> ambiguous",
+ stack.fresh_trait_ref
+ );
+ // Heuristics: show the diagnostics when there are no candidates in crate.
+ if self.intercrate_ambiguity_causes.is_some() {
+ debug!("evaluate_stack: intercrate_ambiguity_causes is some");
+ if let Ok(candidate_set) = self.assemble_candidates(stack) {
+ if !candidate_set.ambiguous && candidate_set.vec.is_empty() {
+ let trait_ref = stack.obligation.predicate.skip_binder().trait_ref;
+ let self_ty = trait_ref.self_ty();
+ let cause = IntercrateAmbiguityCause::DownstreamCrate {
+ trait_desc: trait_ref.print_only_trait_path().to_string(),
+ self_desc: if self_ty.has_concrete_skeleton() {
+ Some(self_ty.to_string())
+ } else {
+ None
+ },
+ };
+ debug!("evaluate_stack: pushing cause = {:?}", cause);
+ self.intercrate_ambiguity_causes.as_mut().unwrap().push(cause);
+ }
+ }
+ }
+ return Ok(EvaluatedToAmbig);
+ }
+ if unbound_input_types
+ && stack.iter().skip(1).any(|prev| {
+ stack.obligation.param_env == prev.obligation.param_env
+ && self.match_fresh_trait_refs(
+ &stack.fresh_trait_ref,
+ &prev.fresh_trait_ref,
+ prev.obligation.param_env,
+ )
+ })
+ {
+ debug!(
+ "evaluate_stack({:?}) --> unbound argument, recursive --> giving up",
+ stack.fresh_trait_ref
+ );
+ return Ok(EvaluatedToUnknown);
+ }
+
+ match self.candidate_from_obligation(stack) {
+ Ok(Some(c)) => self.evaluate_candidate(stack, &c),
+ Ok(None) => Ok(EvaluatedToAmbig),
+ Err(Overflow) => Err(OverflowError),
+ Err(..) => Ok(EvaluatedToErr),
+ }
+ }
+
+ /// For defaulted traits, we use a co-inductive strategy to solve, so
+ /// that recursion is ok. This routine returns `true` if the top of the
+ /// stack (`cycle[0]`):
+ ///
+ /// - is a defaulted trait,
+ /// - also appears in the backtrace at some position `X`, and
+ /// - all the predicates at positions `X..` up to the top of the
+ ///   stack are also defaulted traits.
+ pub fn coinductive_match<I>(&mut self, cycle: I) -> bool
+ where
+ I: Iterator<Item = ty::Predicate<'tcx>>,
+ {
+ let mut cycle = cycle;
+ cycle.all(|predicate| self.coinductive_predicate(predicate))
+ }
+
+ fn coinductive_predicate(&self, predicate: ty::Predicate<'tcx>) -> bool {
+ let result = match predicate {
+ ty::Predicate::Trait(ref data, _) => self.tcx().trait_is_auto(data.def_id()),
+ _ => false,
+ };
+ debug!("coinductive_predicate({:?}) = {:?}", predicate, result);
+ result
+ }
+
+ /// Further evaluates `candidate` to decide whether all type parameters match and whether nested
+ /// obligations are met. Returns whether `candidate` remains viable after this further
+ /// scrutiny.
+ fn evaluate_candidate<'o>(
+ &mut self,
+ stack: &TraitObligationStack<'o, 'tcx>,
+ candidate: &SelectionCandidate<'tcx>,
+ ) -> Result<EvaluationResult, OverflowError> {
+ debug!(
+ "evaluate_candidate: depth={} candidate={:?}",
+ stack.obligation.recursion_depth, candidate
+ );
+ let result = self.evaluation_probe(|this| {
+ let candidate = (*candidate).clone();
+ match this.confirm_candidate(stack.obligation, candidate) {
+ Ok(selection) => this.evaluate_predicates_recursively(
+ stack.list(),
+ selection.nested_obligations().into_iter(),
+ ),
+ Err(..) => Ok(EvaluatedToErr),
+ }
+ })?;
+ debug!(
+ "evaluate_candidate: depth={} result={:?}",
+ stack.obligation.recursion_depth, result
+ );
+ Ok(result)
+ }
+
+ fn check_evaluation_cache(
+ &self,
+ param_env: ty::ParamEnv<'tcx>,
+ trait_ref: ty::PolyTraitRef<'tcx>,
+ ) -> Option<EvaluationResult> {
+ let tcx = self.tcx();
+ if self.can_use_global_caches(param_env) {
+ let cache = tcx.evaluation_cache.hashmap.borrow();
+ if let Some(cached) = cache.get(&param_env.and(trait_ref)) {
+ return Some(cached.get(tcx));
+ }
+ }
+ self.infcx
+ .evaluation_cache
+ .hashmap
+ .borrow()
+ .get(&param_env.and(trait_ref))
+ .map(|v| v.get(tcx))
+ }
+
+ fn insert_evaluation_cache(
+ &mut self,
+ param_env: ty::ParamEnv<'tcx>,
+ trait_ref: ty::PolyTraitRef<'tcx>,
+ dep_node: DepNodeIndex,
+ result: EvaluationResult,
+ ) {
+ // Avoid caching results that depend on more than just the trait-ref
+ // - the stack can create recursion.
+ if result.is_stack_dependent() {
+ return;
+ }
+
+ if self.can_use_global_caches(param_env) {
+ if !trait_ref.has_local_value() {
+ debug!(
+ "insert_evaluation_cache(trait_ref={:?}, candidate={:?}) global",
+ trait_ref, result,
+ );
+ // This may overwrite the cache with the same value.
+ // FIXME: Due to #50507 this can also overwrite a different
+ // value; this should be changed to use HashMapExt::insert_same
+ // when that issue is fixed.
+ self.tcx()
+ .evaluation_cache
+ .hashmap
+ .borrow_mut()
+ .insert(param_env.and(trait_ref), WithDepNode::new(dep_node, result));
+ return;
+ }
+ }
+
+ debug!("insert_evaluation_cache(trait_ref={:?}, candidate={:?})", trait_ref, result,);
+ self.infcx
+ .evaluation_cache
+ .hashmap
+ .borrow_mut()
+ .insert(param_env.and(trait_ref), WithDepNode::new(dep_node, result));
+ }
+
+ /// For various reasons, it's possible for a subobligation
+ /// to have a *lower* `recursion_depth` than the obligation used to create it.
+ /// Projection sub-obligations may be returned from the projection cache,
+ /// which results in obligations with an 'old' `recursion_depth`.
+ /// Additionally, methods like `wf::obligations` and
+ /// `InferCtxt.subtype_predicate` produce subobligations without
+ /// taking in a 'parent' depth, causing the generated subobligations
+ /// to have a `recursion_depth` of `0`.
+ ///
+ /// To ensure that `recursion_depth` never decreases, we force all
+ /// subobligations to have at least the depth of the original obligation.
+ fn add_depth<T: 'cx, I: Iterator<Item = &'cx mut Obligation<'tcx, T>>>(
+ &self,
+ it: I,
+ min_depth: usize,
+ ) {
+ it.for_each(|o| o.recursion_depth = cmp::max(min_depth, o.recursion_depth) + 1);
+ }
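The clamping that `add_depth` performs can be exercised in isolation. In this sketch the `Obligation` struct is a hypothetical two-field stand-in carrying only the field the method touches:

```rust
// Hypothetical stand-in: only the field add_depth actually mutates.
struct Obligation {
    recursion_depth: usize,
}

// Force every subobligation to sit strictly below its parent in the
// recursion: raise it to at least `min_depth`, then go one level deeper.
fn add_depth<'a, I: Iterator<Item = &'a mut Obligation>>(it: I, min_depth: usize) {
    it.for_each(|o| o.recursion_depth = o.recursion_depth.max(min_depth) + 1);
}

fn main() {
    // One stale obligation (depth 0, e.g. returned from the projection
    // cache) and one that is already deeper than the parent.
    let mut subs = vec![
        Obligation { recursion_depth: 0 },
        Obligation { recursion_depth: 7 },
    ];
    add_depth(subs.iter_mut(), 5);
    assert_eq!(subs[0].recursion_depth, 6); // clamped up to parent depth + 1
    assert_eq!(subs[1].recursion_depth, 8); // already deeper: just + 1
    println!("depths: {} {}", subs[0].recursion_depth, subs[1].recursion_depth);
}
```

Without the clamp, a cached subobligation at depth 0 could reset the depth counter and defeat the recursion-limit check below.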
+
+ /// Checks that the recursion limit has not been exceeded.
+ ///
+ /// The weird return type of this function allows it to be used with the `try` (`?`)
+ /// operator within certain functions.
+ fn check_recursion_limit<T: Display + TypeFoldable<'tcx>, V: Display + TypeFoldable<'tcx>>(
+ &self,
+ obligation: &Obligation<'tcx, T>,
+ error_obligation: &Obligation<'tcx, V>,
+ ) -> Result<(), OverflowError> {
+ let recursion_limit = *self.infcx.tcx.sess.recursion_limit.get();
+ if obligation.recursion_depth >= recursion_limit {
+ match self.query_mode {
+ TraitQueryMode::Standard => {
+ self.infcx().report_overflow_error(error_obligation, true);
+ }
+ TraitQueryMode::Canonical => {
+ return Err(OverflowError);
+ }
+ }
+ }
+ Ok(())
+ }
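The two query modes differ only in how the limit is surfaced: standard mode aborts with a fatal error, canonical mode propagates `OverflowError` so the caller can recover. A reduced sketch (the names mirror the code above but are local stand-ins, and the fatal path is modeled as a panic to stay runnable):

```rust
#[derive(Debug, PartialEq)]
struct OverflowError;

#[derive(Clone, Copy)]
enum TraitQueryMode {
    Standard,  // the real compiler reports a fatal overflow error here
    Canonical, // overflow is propagated to the caller
}

// Reduced check_recursion_limit: the `?`-friendly
// Result<(), OverflowError> return type is the whole point.
fn check_recursion_limit(
    depth: usize,
    limit: usize,
    mode: TraitQueryMode,
) -> Result<(), OverflowError> {
    if depth >= limit {
        match mode {
            // Stand-in for report_overflow_error, which never returns.
            TraitQueryMode::Standard => panic!("overflow reported fatally"),
            TraitQueryMode::Canonical => return Err(OverflowError),
        }
    }
    Ok(())
}

fn main() {
    assert_eq!(check_recursion_limit(3, 128, TraitQueryMode::Canonical), Ok(()));
    assert_eq!(
        check_recursion_limit(128, 128, TraitQueryMode::Canonical),
        Err(OverflowError)
    );
    println!("limit checks ok");
}
```

Callers simply write `self.check_recursion_limit(..)?`, so in canonical mode the overflow bubbles up through every frame without unwinding the compiler.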
+
+ ///////////////////////////////////////////////////////////////////////////
+ // CANDIDATE ASSEMBLY
+ //
+ // The selection process begins by examining all in-scope impls,
+ // caller obligations, and so forth and assembling a list of
+ // candidates. See the [rustc dev guide] for more details.
+ //
+ // [rustc dev guide]:
+ // https://rustc-dev-guide.rust-lang.org/traits/resolution.html#candidate-assembly
+
+ fn candidate_from_obligation<'o>(
+ &mut self,
+ stack: &TraitObligationStack<'o, 'tcx>,
+ ) -> SelectionResult<'tcx, SelectionCandidate<'tcx>> {
+ // Watch out for overflow. This intentionally bypasses (and does
+ // not update) the cache.
+ self.check_recursion_limit(&stack.obligation, &stack.obligation)?;
+
+ // Check the cache. Note that we freshen the trait-ref
+ // separately rather than using `stack.fresh_trait_ref` --
+ // this is because we want the unbound variables to be
+ // replaced with fresh types starting from index 0.
+ let cache_fresh_trait_pred = self.infcx.freshen(stack.obligation.predicate.clone());
+ debug!(
+ "candidate_from_obligation(cache_fresh_trait_pred={:?}, obligation={:?})",
+ cache_fresh_trait_pred, stack
+ );
+ debug_assert!(!stack.obligation.predicate.has_escaping_bound_vars());
+
+ if let Some(c) =
+ self.check_candidate_cache(stack.obligation.param_env, &cache_fresh_trait_pred)
+ {
+ debug!("CACHE HIT: SELECT({:?})={:?}", cache_fresh_trait_pred, c);
+ return c;
+ }
+
+ // If no match, compute result and insert into cache.
+ //
+ // FIXME(nikomatsakis) -- this cache is not taking into
+ // account cycles that may have occurred in forming the
+ // candidate. I don't know of any specific problems that
+ // result but it seems awfully suspicious.
+ let (candidate, dep_node) =
+ self.in_task(|this| this.candidate_from_obligation_no_cache(stack));
+
+ debug!("CACHE MISS: SELECT({:?})={:?}", cache_fresh_trait_pred, candidate);
+ self.insert_candidate_cache(
+ stack.obligation.param_env,
+ cache_fresh_trait_pred,
+ dep_node,
+ candidate.clone(),
+ );
+ candidate
+ }
+
+ fn in_task<OP, R>(&mut self, op: OP) -> (R, DepNodeIndex)
+ where
+ OP: FnOnce(&mut Self) -> R,
+ {
+ let (result, dep_node) =
+ self.tcx().dep_graph.with_anon_task(DepKind::TraitSelect, || op(self));
+ self.tcx().dep_graph.read_index(dep_node);
+ (result, dep_node)
+ }
+
+ // Treat negative impls as unimplemented, and reservation impls as ambiguity.
+ fn filter_negative_and_reservation_impls(
+ &mut self,
+ candidate: SelectionCandidate<'tcx>,
+ ) -> SelectionResult<'tcx, SelectionCandidate<'tcx>> {
+ if let ImplCandidate(def_id) = candidate {
+ let tcx = self.tcx();
+ match tcx.impl_polarity(def_id) {
+ ty::ImplPolarity::Negative if !self.allow_negative_impls => {
+ return Err(Unimplemented);
+ }
+ ty::ImplPolarity::Reservation => {
+ if let Some(intercrate_ambiguity_clauses) =
+ &mut self.intercrate_ambiguity_causes
+ {
+ let attrs = tcx.get_attrs(def_id);
+ let attr = attr::find_by_name(&attrs, sym::rustc_reservation_impl);
+ let value = attr.and_then(|a| a.value_str());
+ if let Some(value) = value {
+ debug!(
+ "filter_negative_and_reservation_impls: \
+ reservation impl ambiguity on {:?}",
+ def_id
+ );
+ intercrate_ambiguity_clauses.push(
+ IntercrateAmbiguityCause::ReservationImpl {
+ message: value.to_string(),
+ },
+ );
+ }
+ }
+ return Ok(None);
+ }
+ _ => {}
+ };
+ }
+ Ok(Some(candidate))
+ }
+
+ fn candidate_from_obligation_no_cache<'o>(
+ &mut self,
+ stack: &TraitObligationStack<'o, 'tcx>,
+ ) -> SelectionResult<'tcx, SelectionCandidate<'tcx>> {
+ if stack.obligation.predicate.references_error() {
+ // If we encounter an `Error`, we generally prefer the
+ // most "optimistic" result in response -- that is, the
+ // one least likely to report downstream errors. But
+ // because this routine is shared by coherence and by
+ // trait selection, there isn't an obvious "right" choice
+ // here in that respect, so we opt to just return
+ // ambiguity and let the upstream clients sort it out.
+ return Ok(None);
+ }
+
+ if let Some(conflict) = self.is_knowable(stack) {
+ debug!("coherence stage: not knowable");
+ if self.intercrate_ambiguity_causes.is_some() {
+ debug!("evaluate_stack: intercrate_ambiguity_causes is some");
+ // Heuristics: show the diagnostics when there are no candidates in crate.
+ if let Ok(candidate_set) = self.assemble_candidates(stack) {
+ let mut no_candidates_apply = true;
+ {
+ let evaluated_candidates =
+ candidate_set.vec.iter().map(|c| self.evaluate_candidate(stack, &c));
+
+ for ec in evaluated_candidates {
+ match ec {
+ Ok(c) => {
+ if c.may_apply() {
+ no_candidates_apply = false;
+ break;
+ }
+ }
+ Err(e) => return Err(e.into()),
+ }
+ }
+ }
+
+ if !candidate_set.ambiguous && no_candidates_apply {
+ let trait_ref = stack.obligation.predicate.skip_binder().trait_ref;
+ let self_ty = trait_ref.self_ty();
+ let trait_desc = trait_ref.print_only_trait_path().to_string();
+ let self_desc = if self_ty.has_concrete_skeleton() {
+ Some(self_ty.to_string())
+ } else {
+ None
+ };
+ let cause = if let Conflict::Upstream = conflict {
+ IntercrateAmbiguityCause::UpstreamCrateUpdate { trait_desc, self_desc }
+ } else {
+ IntercrateAmbiguityCause::DownstreamCrate { trait_desc, self_desc }
+ };
+ debug!("evaluate_stack: pushing cause = {:?}", cause);
+ self.intercrate_ambiguity_causes.as_mut().unwrap().push(cause);
+ }
+ }
+ }
+ return Ok(None);
+ }
+
+ let candidate_set = self.assemble_candidates(stack)?;
+
+ if candidate_set.ambiguous {
+ debug!("candidate set contains ambig");
+ return Ok(None);
+ }
+
+ let mut candidates = candidate_set.vec;
+
+ debug!("assembled {} candidates for {:?}: {:?}", candidates.len(), stack, candidates);
+
+ // At this point, we know that each of the entries in the
+ // candidate set is *individually* applicable. Now we have to
+ // figure out if they contain mutual incompatibilities. This
+ // frequently arises if we have an unconstrained input type --
+ // for example, we are looking for `$0: Eq` where `$0` is some
+ // unconstrained type variable. In that case, we'll get a
+ // candidate which assumes `$0 == int`, one that assumes
+ // `$0 == usize`, etc. This spells an ambiguity.
+
+ // If there is more than one candidate, first winnow them down
+ // by considering extra conditions (nested obligations and so
+ // forth). We don't winnow if there is exactly one
+ // candidate. This is a relatively minor distinction but it
+ // can lead to better inference and error-reporting. An
+ // example would be if there was an impl:
+ //
+ // impl<T:Clone> Vec<T> { fn push_clone(...) { ... } }
+ //
+ // and we were to see some code `foo.push_clone()` where `foo`
+ // is a `Vec<Bar>` and `Bar` does not implement `Clone`. If
+ // we were to winnow, we'd wind up with zero candidates.
+ // Instead, we select the right impl now but report "`Bar` does
+ // not implement `Clone`".
+ if candidates.len() == 1 {
+ return self.filter_negative_and_reservation_impls(candidates.pop().unwrap());
+ }
+
+ // Winnow, but record the exact outcome of evaluation, which
+ // is needed for specialization. Propagate overflow if it occurs.
+ let mut candidates = candidates
+ .into_iter()
+ .map(|c| match self.evaluate_candidate(stack, &c) {
+ Ok(eval) if eval.may_apply() => {
+ Ok(Some(EvaluatedCandidate { candidate: c, evaluation: eval }))
+ }
+ Ok(_) => Ok(None),
+ Err(OverflowError) => Err(Overflow),
+ })
+ .flat_map(Result::transpose)
+ .collect::<Result<Vec<_>, _>>()?;
+
+ debug!("winnowed to {} candidates for {:?}: {:?}", candidates.len(), stack, candidates);
+
+ let needs_infer = stack.obligation.predicate.needs_infer();
+
+ // If there are STILL multiple candidates, we can further
+ // reduce the list by dropping duplicates -- including
+ // resolving specializations.
+ if candidates.len() > 1 {
+ let mut i = 0;
+ while i < candidates.len() {
+ let is_dup = (0..candidates.len()).filter(|&j| i != j).any(|j| {
+ self.candidate_should_be_dropped_in_favor_of(
+ &candidates[i],
+ &candidates[j],
+ needs_infer,
+ )
+ });
+ if is_dup {
+ debug!("Dropping candidate #{}/{}: {:?}", i, candidates.len(), candidates[i]);
+ candidates.swap_remove(i);
+ } else {
+ debug!("Retaining candidate #{}/{}: {:?}", i, candidates.len(), candidates[i]);
+ i += 1;
+
+ // If there are *STILL* multiple candidates, give up
+ // and report ambiguity.
+ if i > 1 {
+ debug!("multiple matches, ambig");
+ return Ok(None);
+ }
+ }
+ }
+ }
+
+ // If there are *NO* candidates, then there are no impls --
+ // that we know of, anyway. Note that in the case where there
+ // are unbound type variables within the obligation, it might
+ // be the case that you could still satisfy the obligation
+ // from another crate by instantiating the type variables with
+ // a type from another crate that does have an impl. This case
+ // is checked for in `evaluate_stack` (and hence users
+ // who might care about this case, like coherence, should use
+ // that function).
+ if candidates.is_empty() {
+ return Err(Unimplemented);
+ }
+
+ // Just one candidate left.
+ self.filter_negative_and_reservation_impls(candidates.pop().unwrap().candidate)
+ }
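The duplicate-dropping loop above can be exercised on its own with a hypothetical "should drop in favor of" relation; here it is plain priority comparison, whereas the real `candidate_should_be_dropped_in_favor_of` encodes the full precedence rules (param candidates, specialization, and so on):

```rust
// Hypothetical candidate: dropped in favor of another when the other
// has strictly higher priority. This relation is a stand-in only.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Candidate {
    priority: u8,
}

fn should_drop(victim: &Candidate, other: &Candidate) -> bool {
    other.priority > victim.priority
}

// Same shape as the loop in candidate_from_obligation_no_cache:
// swap_remove duplicates in place, advance the index otherwise.
// (The real loop additionally bails out with ambiguity if more than
// one candidate survives; that part is omitted here.)
fn winnow(mut candidates: Vec<Candidate>) -> Vec<Candidate> {
    let mut i = 0;
    while i < candidates.len() {
        let is_dup = (0..candidates.len())
            .filter(|&j| i != j)
            .any(|j| should_drop(&candidates[i], &candidates[j]));
        if is_dup {
            candidates.swap_remove(i);
        } else {
            i += 1;
        }
    }
    candidates
}

fn main() {
    let survivors = winnow(vec![
        Candidate { priority: 1 },
        Candidate { priority: 3 },
        Candidate { priority: 2 },
    ]);
    assert_eq!(survivors, vec![Candidate { priority: 3 }]);
    println!("{} candidate(s) remain", survivors.len());
}
```

`swap_remove` keeps the pass O(n²) in comparisons but O(1) per removal, which is why the index is not advanced after a drop: the swapped-in element still needs to be checked.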
+
+ fn is_knowable<'o>(&mut self, stack: &TraitObligationStack<'o, 'tcx>) -> Option<Conflict> {
+ debug!("is_knowable(intercrate={:?})", self.intercrate);
+
+ if !self.intercrate {
+ return None;
+ }
+
+ let obligation = &stack.obligation;
+ let predicate = self.infcx().resolve_vars_if_possible(&obligation.predicate);
+
+ // Okay to skip binder because of the nature of the
+ // trait-ref-is-knowable check, which does not care about
+ // bound regions.
+ let trait_ref = predicate.skip_binder().trait_ref;
+
+ coherence::trait_ref_is_knowable(self.tcx(), trait_ref)
+ }
+
+ /// Returns `true` if the global caches can be used.
+ /// Do note that if the type itself is not in the
+ /// global tcx, the local caches will be used.
+ fn can_use_global_caches(&self, param_env: ty::ParamEnv<'tcx>) -> bool {
+ // If there are any e.g. inference variables in the `ParamEnv`, then we
+ // always use a cache local to this particular scope. Otherwise, we
+ // switch to a global cache.
+ if param_env.has_local_value() {
+ return false;
+ }
+
+ // Avoid using the master cache during coherence and just rely
+ // on the local cache. This effectively disables caching
+ // during coherence. It is really just a simplification to
+ // avoid us having to fear that coherence results "pollute"
+ // the master cache. Since coherence executes pretty quickly,
+ // it's not worth going to more trouble to increase the
+ // hit-rate, I don't think.
+ if self.intercrate {
+ return false;
+ }
+
+ // Otherwise, we can use the global cache.
+ true
+ }
+
+ fn check_candidate_cache(
+ &mut self,
+ param_env: ty::ParamEnv<'tcx>,
+ cache_fresh_trait_pred: &ty::PolyTraitPredicate<'tcx>,
+ ) -> Option<SelectionResult<'tcx, SelectionCandidate<'tcx>>> {
+ let tcx = self.tcx();
+ let trait_ref = &cache_fresh_trait_pred.skip_binder().trait_ref;
+ if self.can_use_global_caches(param_env) {
+ let cache = tcx.selection_cache.hashmap.borrow();
+ if let Some(cached) = cache.get(&param_env.and(*trait_ref)) {
+ return Some(cached.get(tcx));
+ }
+ }
+ self.infcx
+ .selection_cache
+ .hashmap
+ .borrow()
+ .get(&param_env.and(*trait_ref))
+ .map(|v| v.get(tcx))
+ }
+
+ /// Determines whether we can safely cache the result
+ /// of selecting an obligation. This is almost always `true`,
+ /// except when dealing with certain `ParamCandidate`s.
+ ///
+ /// Ordinarily, a `ParamCandidate` will contain no inference variables,
+ /// since it was usually produced directly from a `DefId`. However,
+ /// in certain cases (currently only librustdoc's blanket impl finder),
+ /// a `ParamEnv` may be explicitly constructed with inference types.
+ /// When this is the case, we do *not* want to cache the resulting selection
+ /// candidate. This is due to the fact that it might not always be possible
+ /// to equate the obligation's trait ref and the candidate's trait ref,
+ /// if more constraints end up getting added to an inference variable.
+ ///
+ /// Because of this, we always want to re-run the full selection
+ /// process for our obligation the next time we see it, since
+ /// we might end up picking a different `SelectionCandidate` (or none at all).
+ fn can_cache_candidate(
+ &self,
+ result: &SelectionResult<'tcx, SelectionCandidate<'tcx>>,
+ ) -> bool {
+ match result {
+ Ok(Some(SelectionCandidate::ParamCandidate(trait_ref))) => {
+ !trait_ref.skip_binder().input_types().any(|t| t.walk().any(|t_| t_.is_ty_infer()))
+ }
+ _ => true,
+ }
+ }
+
+ fn insert_candidate_cache(
+ &mut self,
+ param_env: ty::ParamEnv<'tcx>,
+ cache_fresh_trait_pred: ty::PolyTraitPredicate<'tcx>,
+ dep_node: DepNodeIndex,
+ candidate: SelectionResult<'tcx, SelectionCandidate<'tcx>>,
+ ) {
+ let tcx = self.tcx();
+ let trait_ref = cache_fresh_trait_pred.skip_binder().trait_ref;
+
+ if !self.can_cache_candidate(&candidate) {
+ debug!(
+ "insert_candidate_cache(trait_ref={:?}, candidate={:?}) -\
+ candidate is not cacheable",
+ trait_ref, candidate
+ );
+ return;
+ }
+
+ if self.can_use_global_caches(param_env) {
+ if let Err(Overflow) = candidate {
+ // Don't cache overflow globally; we only produce this in certain modes.
+ } else if !trait_ref.has_local_value() {
+ if !candidate.has_local_value() {
+ debug!(
+ "insert_candidate_cache(trait_ref={:?}, candidate={:?}) global",
+ trait_ref, candidate,
+ );
+ // This may overwrite the cache with the same value.
+ tcx.selection_cache
+ .hashmap
+ .borrow_mut()
+ .insert(param_env.and(trait_ref), WithDepNode::new(dep_node, candidate));
+ return;
+ }
+ }
+ }
+
+ debug!(
+ "insert_candidate_cache(trait_ref={:?}, candidate={:?}) local",
+ trait_ref, candidate,
+ );
+ self.infcx
+ .selection_cache
+ .hashmap
+ .borrow_mut()
+ .insert(param_env.and(trait_ref), WithDepNode::new(dep_node, candidate));
+ }
+
+ fn assemble_candidates<'o>(
+ &mut self,
+ stack: &TraitObligationStack<'o, 'tcx>,
+ ) -> Result<SelectionCandidateSet<'tcx>, SelectionError<'tcx>> {
+ let TraitObligationStack { obligation, .. } = *stack;
+ let obligation = &Obligation {
+ param_env: obligation.param_env,
+ cause: obligation.cause.clone(),
+ recursion_depth: obligation.recursion_depth,
+ predicate: self.infcx().resolve_vars_if_possible(&obligation.predicate),
+ };
+
+ if obligation.predicate.skip_binder().self_ty().is_ty_var() {
+ // Self is a type variable (e.g., `_: AsRef<str>`).
+ //
+ // This is somewhat problematic, as the current scheme can't really
+ // handle it turning out to be a projection. This does end up as truly
+ // ambiguous in most cases anyway.
+ //
+ // Take the fast path out - this also improves
+ // performance by preventing assemble_candidates_from_impls from
+ // matching every impl for this trait.
+ return Ok(SelectionCandidateSet { vec: vec![], ambiguous: true });
+ }
+
+ let mut candidates = SelectionCandidateSet { vec: Vec::new(), ambiguous: false };
+
+ self.assemble_candidates_for_trait_alias(obligation, &mut candidates)?;
+
+ // Other bounds. Consider both in-scope bounds from fn decl
+ // and applicable impls. There is a certain set of precedence rules here.
+ let def_id = obligation.predicate.def_id();
+ let lang_items = self.tcx().lang_items();
+
+ if lang_items.copy_trait() == Some(def_id) {
+ debug!("obligation self ty is {:?}", obligation.predicate.skip_binder().self_ty());
+
+ // User-defined copy impls are permitted, but only for
+ // structs and enums.
+ self.assemble_candidates_from_impls(obligation, &mut candidates)?;
+
+ // For other types, we'll use the builtin rules.
+ let copy_conditions = self.copy_clone_conditions(obligation);
+ self.assemble_builtin_bound_candidates(copy_conditions, &mut candidates)?;
+ } else if lang_items.sized_trait() == Some(def_id) {
+ // Sized is never implementable by end-users, it is
+ // always automatically computed.
+ let sized_conditions = self.sized_conditions(obligation);
+ self.assemble_builtin_bound_candidates(sized_conditions, &mut candidates)?;
+ } else if lang_items.unsize_trait() == Some(def_id) {
+ self.assemble_candidates_for_unsizing(obligation, &mut candidates);
+ } else {
+ if lang_items.clone_trait() == Some(def_id) {
+ // Same builtin conditions as `Copy`, i.e., every type which has builtin support
+ // for `Copy` also has builtin support for `Clone`, and tuples/arrays of `Clone`
+ // types have builtin support for `Clone`.
+ let clone_conditions = self.copy_clone_conditions(obligation);
+ self.assemble_builtin_bound_candidates(clone_conditions, &mut candidates)?;
+ }
+
+ self.assemble_generator_candidates(obligation, &mut candidates)?;
+ self.assemble_closure_candidates(obligation, &mut candidates)?;
+ self.assemble_fn_pointer_candidates(obligation, &mut candidates)?;
+ self.assemble_candidates_from_impls(obligation, &mut candidates)?;
+ self.assemble_candidates_from_object_ty(obligation, &mut candidates);
+ }
+
+ self.assemble_candidates_from_projected_tys(obligation, &mut candidates);
+ self.assemble_candidates_from_caller_bounds(stack, &mut candidates)?;
+ // Auto implementations have lower priority, so we only
+ // consider triggering a default if there is no other impl that can apply.
+ if candidates.vec.is_empty() {
+ self.assemble_candidates_from_auto_impls(obligation, &mut candidates)?;
+ }
+ debug!("candidate list size: {}", candidates.vec.len());
+ Ok(candidates)
+ }
+
+ fn assemble_candidates_from_projected_tys(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ candidates: &mut SelectionCandidateSet<'tcx>,
+ ) {
+ debug!("assemble_candidates_for_projected_tys({:?})", obligation);
+
+ // Before we go into the whole placeholder thing, just
+ // quickly check if the self-type is a projection at all.
+ match obligation.predicate.skip_binder().trait_ref.self_ty().kind {
+ ty::Projection(_) | ty::Opaque(..) => {}
+ ty::Infer(ty::TyVar(_)) => {
+ span_bug!(
+ obligation.cause.span,
+ "Self=_ should have been handled by assemble_candidates"
+ );
+ }
+ _ => return,
+ }
+
+ let result = self.infcx.probe(|snapshot| {
+ self.match_projection_obligation_against_definition_bounds(obligation, snapshot)
+ });
+
+ if result {
+ candidates.vec.push(ProjectionCandidate);
+ }
+ }
+
+ fn match_projection_obligation_against_definition_bounds(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ snapshot: &CombinedSnapshot<'_, 'tcx>,
+ ) -> bool {
+ let poly_trait_predicate = self.infcx().resolve_vars_if_possible(&obligation.predicate);
+ let (placeholder_trait_predicate, placeholder_map) =
+ self.infcx().replace_bound_vars_with_placeholders(&poly_trait_predicate);
+ debug!(
+ "match_projection_obligation_against_definition_bounds: \
+ placeholder_trait_predicate={:?}",
+ placeholder_trait_predicate,
+ );
+
+ let (def_id, substs) = match placeholder_trait_predicate.trait_ref.self_ty().kind {
+ ty::Projection(ref data) => (data.trait_ref(self.tcx()).def_id, data.substs),
+ ty::Opaque(def_id, substs) => (def_id, substs),
+ _ => {
+ span_bug!(
+ obligation.cause.span,
+ "match_projection_obligation_against_definition_bounds() called \
+ but self-ty is not a projection: {:?}",
+ placeholder_trait_predicate.trait_ref.self_ty()
+ );
+ }
+ };
+ debug!(
+ "match_projection_obligation_against_definition_bounds: \
+ def_id={:?}, substs={:?}",
+ def_id, substs
+ );
+
+ let predicates_of = self.tcx().predicates_of(def_id);
+ let bounds = predicates_of.instantiate(self.tcx(), substs);
+ debug!(
+ "match_projection_obligation_against_definition_bounds: \
+ bounds={:?}",
+ bounds
+ );
+
+ let elaborated_predicates = util::elaborate_predicates(self.tcx(), bounds.predicates);
+ let matching_bound = elaborated_predicates.filter_to_traits().find(|bound| {
+ self.infcx.probe(|_| {
+ self.match_projection(
+ obligation,
+ bound.clone(),
+ placeholder_trait_predicate.trait_ref.clone(),
+ &placeholder_map,
+ snapshot,
+ )
+ })
+ });
+
+ debug!(
+ "match_projection_obligation_against_definition_bounds: \
+ matching_bound={:?}",
+ matching_bound
+ );
+ match matching_bound {
+ None => false,
+ Some(bound) => {
+ // Repeat the successful match, if any, this time outside of a probe.
+ let result = self.match_projection(
+ obligation,
+ bound,
+ placeholder_trait_predicate.trait_ref.clone(),
+ &placeholder_map,
+ snapshot,
+ );
+
+ assert!(result);
+ true
+ }
+ }
+ }
+
+ fn match_projection(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ trait_bound: ty::PolyTraitRef<'tcx>,
+ placeholder_trait_ref: ty::TraitRef<'tcx>,
+ placeholder_map: &PlaceholderMap<'tcx>,
+ snapshot: &CombinedSnapshot<'_, 'tcx>,
+ ) -> bool {
+ debug_assert!(!placeholder_trait_ref.has_escaping_bound_vars());
+ self.infcx
+ .at(&obligation.cause, obligation.param_env)
+ .sup(ty::Binder::dummy(placeholder_trait_ref), trait_bound)
+ .is_ok()
+ && self.infcx.leak_check(false, placeholder_map, snapshot).is_ok()
+ }
+
+ /// Given an obligation like `<SomeTrait for T>`, searches the obligations that the caller
+ /// supplied to find out whether it is listed among them.
+ ///
+ /// Never affects the inference environment.
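+ ///
+ /// For example, inside `fn foo<T: Ord>(x: T)`, an obligation `T: Ord` is
+ /// satisfied by the caller-supplied where-clause bound `T: Ord` recorded in
+ /// the `ParamEnv`.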
+ fn assemble_candidates_from_caller_bounds<'o>(
+ &mut self,
+ stack: &TraitObligationStack<'o, 'tcx>,
+ candidates: &mut SelectionCandidateSet<'tcx>,
+ ) -> Result<(), SelectionError<'tcx>> {
+ debug!("assemble_candidates_from_caller_bounds({:?})", stack.obligation);
+
+ let all_bounds = stack
+ .obligation
+ .param_env
+ .caller_bounds
+ .iter()
+ .filter_map(|o| o.to_opt_poly_trait_ref());
+
+ // Micro-optimization: filter out predicates relating to different traits.
+ let matching_bounds =
+ all_bounds.filter(|p| p.def_id() == stack.obligation.predicate.def_id());
+
+ // Keep only those bounds which may apply, and propagate overflow if it occurs.
+ let mut param_candidates = vec![];
+ for bound in matching_bounds {
+ let wc = self.evaluate_where_clause(stack, bound.clone())?;
+ if wc.may_apply() {
+ param_candidates.push(ParamCandidate(bound));
+ }
+ }
+
+ candidates.vec.extend(param_candidates);
+
+ Ok(())
+ }
+
+ fn evaluate_where_clause<'o>(
+ &mut self,
+ stack: &TraitObligationStack<'o, 'tcx>,
+ where_clause_trait_ref: ty::PolyTraitRef<'tcx>,
+ ) -> Result<EvaluationResult, OverflowError> {
+ self.evaluation_probe(|this| {
+ match this.match_where_clause_trait_ref(stack.obligation, where_clause_trait_ref) {
+ Ok(obligations) => {
+ this.evaluate_predicates_recursively(stack.list(), obligations.into_iter())
+ }
+ Err(()) => Ok(EvaluatedToErr),
+ }
+ })
+ }
+
+ fn assemble_generator_candidates(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ candidates: &mut SelectionCandidateSet<'tcx>,
+ ) -> Result<(), SelectionError<'tcx>> {
+ if self.tcx().lang_items().gen_trait() != Some(obligation.predicate.def_id()) {
+ return Ok(());
+ }
+
+ // Okay to skip binder because the substs on generator types never
+ // touch bound regions, they just capture the in-scope
+ // type/region parameters.
+ let self_ty = *obligation.self_ty().skip_binder();
+ match self_ty.kind {
+ ty::Generator(..) => {
+ debug!(
+ "assemble_generator_candidates: self_ty={:?} obligation={:?}",
+ self_ty, obligation
+ );
+
+ candidates.vec.push(GeneratorCandidate);
+ }
+ ty::Infer(ty::TyVar(_)) => {
+ debug!("assemble_generator_candidates: ambiguous self-type");
+ candidates.ambiguous = true;
+ }
+ _ => {}
+ }
+
+ Ok(())
+ }
+
+ /// Checks for the artificial impl that the compiler will create for an obligation like `X :
+ /// FnMut<..>` where `X` is a closure type.
+ ///
+ /// Note: the type parameters on a closure candidate are modeled as *output* type
+ /// parameters and hence do not affect whether this trait is a match or not. They will be
+ /// unified during the confirmation step.
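+ ///
+ /// For example, `|x: u32| x + 1` has a unique closure type for which such an
+ /// artificial `Fn`/`FnMut`/`FnOnce` impl is supplied.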
+ fn assemble_closure_candidates(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ candidates: &mut SelectionCandidateSet<'tcx>,
+ ) -> Result<(), SelectionError<'tcx>> {
+ let kind = match self.tcx().fn_trait_kind_from_lang_item(obligation.predicate.def_id()) {
+ Some(k) => k,
+ None => {
+ return Ok(());
+ }
+ };
+
+ // Okay to skip binder because the substs on closure types never
+ // touch bound regions, they just capture the in-scope
+ // type/region parameters
+ match obligation.self_ty().skip_binder().kind {
+ ty::Closure(closure_def_id, closure_substs) => {
+ debug!("assemble_unboxed_candidates: kind={:?} obligation={:?}", kind, obligation);
+ match self.infcx.closure_kind(closure_def_id, closure_substs) {
+ Some(closure_kind) => {
+ debug!("assemble_unboxed_candidates: closure_kind = {:?}", closure_kind);
+ if closure_kind.extends(kind) {
+ candidates.vec.push(ClosureCandidate);
+ }
+ }
+ None => {
+ debug!("assemble_unboxed_candidates: closure_kind not yet known");
+ candidates.vec.push(ClosureCandidate);
+ }
+ }
+ }
+ ty::Infer(ty::TyVar(_)) => {
+ debug!("assemble_unboxed_closure_candidates: ambiguous self-type");
+ candidates.ambiguous = true;
+ }
+ _ => {}
+ }
+
+ Ok(())
+ }
+
+ /// Implements one of the `Fn()` family for a fn pointer.
+ fn assemble_fn_pointer_candidates(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ candidates: &mut SelectionCandidateSet<'tcx>,
+ ) -> Result<(), SelectionError<'tcx>> {
+ // We provide impl of all fn traits for fn pointers.
+ if self.tcx().fn_trait_kind_from_lang_item(obligation.predicate.def_id()).is_none() {
+ return Ok(());
+ }
+
+ // Okay to skip binder because what we are inspecting doesn't involve bound regions.
+ let self_ty = *obligation.self_ty().skip_binder();
+ match self_ty.kind {
+ ty::Infer(ty::TyVar(_)) => {
+ debug!("assemble_fn_pointer_candidates: ambiguous self-type");
+ candidates.ambiguous = true; // Could wind up being a fn() type.
+ }
+ // Provide an impl, but only for suitable `fn` pointers.
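+ // For example (a sketch; `call` and `double` are hypothetical fns):
+ //
+ // ```rust
+ // fn call<F: Fn(u32) -> u32>(f: F) -> u32 { f(1) }
+ // fn double(x: u32) -> u32 { x * 2 }
+ // call(double); // ok: a safe, Rust-ABI, non-variadic fn item
+ // ```
+ //
+ // An `unsafe fn`, an `extern "C" fn`, or a C-variadic fn gets no
+ // candidate here.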
+ ty::FnDef(..) | ty::FnPtr(_) => {
+ if let ty::FnSig {
+ unsafety: hir::Unsafety::Normal,
+ abi: Abi::Rust,
+ c_variadic: false,
+ ..
+ } = self_ty.fn_sig(self.tcx()).skip_binder()
+ {
+ candidates.vec.push(FnPointerCandidate);
+ }
+ }
+ _ => {}
+ }
+
+ Ok(())
+ }
+
+ /// Searches for impls that might apply to `obligation`.
+ fn assemble_candidates_from_impls(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ candidates: &mut SelectionCandidateSet<'tcx>,
+ ) -> Result<(), SelectionError<'tcx>> {
+ debug!("assemble_candidates_from_impls(obligation={:?})", obligation);
+
+ self.tcx().for_each_relevant_impl(
+ obligation.predicate.def_id(),
+ obligation.predicate.skip_binder().trait_ref.self_ty(),
+ |impl_def_id| {
+ self.infcx.probe(|snapshot| {
+ if let Ok(_substs) = self.match_impl(impl_def_id, obligation, snapshot) {
+ candidates.vec.push(ImplCandidate(impl_def_id));
+ }
+ });
+ },
+ );
+
+ Ok(())
+ }
+
+ fn assemble_candidates_from_auto_impls(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ candidates: &mut SelectionCandidateSet<'tcx>,
+ ) -> Result<(), SelectionError<'tcx>> {
+ // Okay to skip binder here because the tests we do below do not involve bound regions.
+ let self_ty = *obligation.self_ty().skip_binder();
+ debug!("assemble_candidates_from_auto_impls(self_ty={:?})", self_ty);
+
+ let def_id = obligation.predicate.def_id();
+
+ if self.tcx().trait_is_auto(def_id) {
+ match self_ty.kind {
+ ty::Dynamic(..) => {
+ // For object types, we don't know what the closed
+ // over types are. This means we conservatively
+ // say nothing; a candidate may be added by
+ // `assemble_candidates_from_object_ty`.
+ }
+ ty::Foreign(..) => {
+ // Since the contents of foreign types are unknown,
+ // we don't add any `..` impl. Default traits could
+ // still be provided by a manual implementation for
+ // this trait and type.
+ }
+ ty::Param(..) | ty::Projection(..) => {
+ // In these cases, we don't know what the actual
+ // type is. Therefore, we cannot break it down
+ // into its constituent types. So we don't
+ // consider the `..` impl but instead just add no
+ // candidates: this means that typeck will only
+ // succeed if there is another reason to believe
+ // that this obligation holds. That could be a
+ // where-clause or, in the case of an object type,
+ // it could be that the object type lists the
+ // trait (e.g., `Foo+Send : Send`). See
+ // `compile-fail/typeck-default-trait-impl-send-param.rs`
+ // for an example of a test case that exercises
+ // this path.
+ }
+ ty::Infer(ty::TyVar(_)) => {
+ // The auto impl might apply; we don't know.
+ candidates.ambiguous = true;
+ }
+ ty::Generator(_, _, movability)
+ if self.tcx().lang_items().unpin_trait() == Some(def_id) =>
+ {
+ match movability {
+ hir::Movability::Static => {
+ // Immovable generators are never `Unpin`, so
+ // suppress the normal auto-impl candidate for it.
+ }
+ hir::Movability::Movable => {
+ // Movable generators are always `Unpin`, so add an
+ // unconditional builtin candidate.
+ candidates.vec.push(BuiltinCandidate { has_nested: false });
+ }
+ }
+ }
+
+ _ => candidates.vec.push(AutoImplCandidate(def_id)),
+ }
+ }
+
+ Ok(())
+ }
+
+ /// Searches for impls that might apply to `obligation`.
+ fn assemble_candidates_from_object_ty(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ candidates: &mut SelectionCandidateSet<'tcx>,
+ ) {
+ debug!(
+ "assemble_candidates_from_object_ty(self_ty={:?})",
+ obligation.self_ty().skip_binder()
+ );
+
+ self.infcx.probe(|_snapshot| {
+ // The code below doesn't care about regions, and the
+ // self-ty here doesn't escape this probe, so just erase
+ // any LBR.
+ let self_ty = self.tcx().erase_late_bound_regions(&obligation.self_ty());
+ let poly_trait_ref = match self_ty.kind {
+ ty::Dynamic(ref data, ..) => {
+ if data.auto_traits().any(|did| did == obligation.predicate.def_id()) {
+ debug!(
+ "assemble_candidates_from_object_ty: matched builtin bound, \
+ pushing candidate"
+ );
+ candidates.vec.push(BuiltinObjectCandidate);
+ return;
+ }
+
+ if let Some(principal) = data.principal() {
+ if !self.infcx.tcx.features().object_safe_for_dispatch {
+ principal.with_self_ty(self.tcx(), self_ty)
+ } else if self.tcx().is_object_safe(principal.def_id()) {
+ principal.with_self_ty(self.tcx(), self_ty)
+ } else {
+ return;
+ }
+ } else {
+ // Only auto trait bounds exist.
+ return;
+ }
+ }
+ ty::Infer(ty::TyVar(_)) => {
+ debug!("assemble_candidates_from_object_ty: ambiguous");
+ candidates.ambiguous = true; // could wind up being an object type
+ return;
+ }
+ _ => return,
+ };
+
+ debug!("assemble_candidates_from_object_ty: poly_trait_ref={:?}", poly_trait_ref);
+
+ // Count only those upcast versions that match the trait-ref
+ // we are looking for. Specifically, check not only for the
+ // correct trait but also for the correct type parameters.
+ // For example, we may be trying to upcast `Foo` to `Bar<i32>`,
+ // but `Foo` is declared as `trait Foo: Bar<u32>`.
+ let upcast_trait_refs = util::supertraits(self.tcx(), poly_trait_ref)
+ .filter(|upcast_trait_ref| {
+ self.infcx
+ .probe(|_| self.match_poly_trait_ref(obligation, *upcast_trait_ref).is_ok())
+ })
+ .count();
+
+ if upcast_trait_refs > 1 {
+ // Can be upcast in many ways; need more type information.
+ candidates.ambiguous = true;
+ } else if upcast_trait_refs == 1 {
+ candidates.vec.push(ObjectCandidate);
+ }
+ })
+ }
+
+ /// Searches for unsizing that might apply to `obligation`.
+ fn assemble_candidates_for_unsizing(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ candidates: &mut SelectionCandidateSet<'tcx>,
+ ) {
+ // We currently never consider higher-ranked obligations e.g.
+ // `for<'a> &'a T: Unsize<Trait+'a>` to be implemented. This is not
+ // because they are a priori invalid (we could potentially add support
+ // for them later); it's just that there isn't really a strong need for it.
+ // A `T: Unsize<U>` obligation is always used as part of a `T: CoerceUnsized<U>`
+ // impl, and those are generally applied to concrete types.
+ //
+ // That said, one might try to write a fn with a where clause like
+ // for<'a> Foo<'a, T>: Unsize<Foo<'a, Trait>>
+ // where the `'a` is kind of orthogonal to the relevant part of the `Unsize`.
+ // Still, you'd be more likely to write that where clause as
+ // T: Trait
+ // so it seems ok if we (conservatively) fail to accept that `Unsize`
+ // obligation above. Should be possible to extend this in the future.
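+ //
+ // A typical obligation reaching this point (sketch):
+ //
+ // ```rust
+ // let boxed: Box<[i32]> = Box::new([1, 2, 3]);
+ // // requires `[i32; 3]: Unsize<[i32]>` (the array-to-slice case below)
+ // ```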
+ let source = match obligation.self_ty().no_bound_vars() {
+ Some(t) => t,
+ None => {
+ // Don't add any candidates if there are bound regions.
+ return;
+ }
+ };
+ let target = obligation.predicate.skip_binder().trait_ref.substs.type_at(1);
+
+ debug!("assemble_candidates_for_unsizing(source={:?}, target={:?})", source, target);
+
+ let may_apply = match (&source.kind, &target.kind) {
+ // Trait+Kx+'a -> Trait+Ky+'b (upcasts).
+ (&ty::Dynamic(ref data_a, ..), &ty::Dynamic(ref data_b, ..)) => {
+ // Upcasts permit two things:
+ //
+ // 1. Dropping auto traits, e.g., `Foo + Send` to `Foo`
+ // 2. Tightening the region bound, e.g., `Foo + 'a` to `Foo + 'b` if `'a: 'b`
+ //
+ // Note that neither of these changes requires any
+ // change at runtime. Eventually this will be
+ // generalized.
+ //
+ // We always upcast when we can because of reason
+ // #2 (region bounds).
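+ //
+ // For example (sketch), dropping an auto trait:
+ //
+ // ```rust
+ // use std::fmt::Debug;
+ // fn forget_send(x: &(dyn Debug + Send)) -> &dyn Debug { x }
+ // ```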
+ data_a.principal_def_id() == data_b.principal_def_id()
+ && data_b
+ .auto_traits()
+ // All of b's auto traits need to be in a's auto traits.
+ .all(|b| data_a.auto_traits().any(|a| a == b))
+ }
+
+ // `T` -> `Trait`
+ (_, &ty::Dynamic(..)) => true,
+
+ // Ambiguous handling is below `T` -> `Trait`, because inference
+ // variables can still implement `Unsize<Trait>` and nested
+ // obligations will have the final say (likely deferred).
+ (&ty::Infer(ty::TyVar(_)), _) | (_, &ty::Infer(ty::TyVar(_))) => {
+ debug!("assemble_candidates_for_unsizing: ambiguous");
+ candidates.ambiguous = true;
+ false
+ }
+
+ // `[T; n]` -> `[T]`
+ (&ty::Array(..), &ty::Slice(_)) => true,
+
+ // `Struct<T>` -> `Struct<U>`
+ (&ty::Adt(def_id_a, _), &ty::Adt(def_id_b, _)) if def_id_a.is_struct() => {
+ def_id_a == def_id_b
+ }
+
+ // `(.., T)` -> `(.., U)`
+ (&ty::Tuple(tys_a), &ty::Tuple(tys_b)) => tys_a.len() == tys_b.len(),
+
+ _ => false,
+ };
+
+ if may_apply {
+ candidates.vec.push(BuiltinUnsizeCandidate);
+ }
+ }
+
+ fn assemble_candidates_for_trait_alias(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ candidates: &mut SelectionCandidateSet<'tcx>,
+ ) -> Result<(), SelectionError<'tcx>> {
+ // Okay to skip binder here because the tests we do below do not involve bound regions.
+ let self_ty = *obligation.self_ty().skip_binder();
+ debug!("assemble_candidates_for_trait_alias(self_ty={:?})", self_ty);
+
+ let def_id = obligation.predicate.def_id();
+
+ if self.tcx().is_trait_alias(def_id) {
+ candidates.vec.push(TraitAliasCandidate(def_id));
+ }
+
+ Ok(())
+ }
+
+ ///////////////////////////////////////////////////////////////////////////
+ // WINNOW
+ //
+ // Winnowing is the process of attempting to resolve ambiguity by
+ // probing further. During the winnowing process, we unify all
+ // type variables and then we also attempt to evaluate recursive
+ // bounds to see if they are satisfied.
+
+ /// Returns `true` if `victim` should be dropped in favor of
+ /// `other`. Generally speaking we will drop duplicate
+ /// candidates and prefer where-clause candidates.
+ ///
+ /// See the comment for "SelectionCandidate" for more details.
+ fn candidate_should_be_dropped_in_favor_of(
+ &mut self,
+ victim: &EvaluatedCandidate<'tcx>,
+ other: &EvaluatedCandidate<'tcx>,
+ needs_infer: bool,
+ ) -> bool {
+ if victim.candidate == other.candidate {
+ return true;
+ }
+
+ // Check if a bound would previously have been removed when normalizing
+ // the param_env so that it can be given the lowest priority. See
+ // #50825 for the motivation for this.
+ let is_global =
+ |cand: &ty::PolyTraitRef<'_>| cand.is_global() && !cand.has_late_bound_regions();
+
+ match other.candidate {
+ // Prefer `BuiltinCandidate { has_nested: false }` to anything else.
+ // This is a fix for #53123 and prevents winnowing from accidentally extending the
+ // lifetime of a variable.
+ BuiltinCandidate { has_nested: false } => true,
+ ParamCandidate(ref cand) => match victim.candidate {
+ AutoImplCandidate(..) => {
+ bug!(
+ "default implementations shouldn't be recorded \
+ when there are other valid candidates"
+ );
+ }
+ // Prefer `BuiltinCandidate { has_nested: false }` to anything else.
+ // This is a fix for #53123 and prevents winnowing from accidentally extending the
+ // lifetime of a variable.
+ BuiltinCandidate { has_nested: false } => false,
+ ImplCandidate(..)
+ | ClosureCandidate
+ | GeneratorCandidate
+ | FnPointerCandidate
+ | BuiltinObjectCandidate
+ | BuiltinUnsizeCandidate
+ | BuiltinCandidate { .. }
+ | TraitAliasCandidate(..) => {
+ // Global bounds from the where clause should be ignored
+ // here (see issue #50825). Otherwise, we have a where
+ // clause so don't go around looking for impls.
+ !is_global(cand)
+ }
+ ObjectCandidate | ProjectionCandidate => {
+ // Arbitrarily give param candidates priority
+ // over projection and object candidates.
+ !is_global(cand)
+ }
+ ParamCandidate(..) => false,
+ },
+ ObjectCandidate | ProjectionCandidate => match victim.candidate {
+ AutoImplCandidate(..) => {
+ bug!(
+ "default implementations shouldn't be recorded \
+ when there are other valid candidates"
+ );
+ }
+ // Prefer `BuiltinCandidate { has_nested: false }` to anything else.
+ // This is a fix for #53123 and prevents winnowing from accidentally extending the
+ // lifetime of a variable.
+ BuiltinCandidate { has_nested: false } => false,
+ ImplCandidate(..)
+ | ClosureCandidate
+ | GeneratorCandidate
+ | FnPointerCandidate
+ | BuiltinObjectCandidate
+ | BuiltinUnsizeCandidate
+ | BuiltinCandidate { .. }
+ | TraitAliasCandidate(..) => true,
+ ObjectCandidate | ProjectionCandidate => {
+ // Object and projection candidates are interchangeable
+ // here, so arbitrarily drop the duplicate.
+ true
+ }
+ ParamCandidate(ref cand) => is_global(cand),
+ },
+ ImplCandidate(other_def) => {
+ // See if we can toss out `victim` based on specialization.
+ // This requires us to know *for sure* that the `other` impl applies
+ // i.e., `EvaluatedToOk`.
+ if other.evaluation.must_apply_modulo_regions() {
+ match victim.candidate {
+ ImplCandidate(victim_def) => {
+ let tcx = self.tcx();
+ if tcx.specializes((other_def, victim_def)) {
+ return true;
+ }
+ return match tcx.impls_are_allowed_to_overlap(other_def, victim_def) {
+ Some(ty::ImplOverlapKind::Permitted { marker: true }) => {
+ // Subtle: If the predicate we are evaluating has inference
+ // variables, do *not* allow discarding candidates due to
+ // marker trait impls.
+ //
+ // Without this restriction, we could end up accidentally
+ // constraining inference variables based on an arbitrarily
+ // chosen trait impl.
+ //
+ // Imagine we have the following code:
+ //
+ // ```rust
+ // #[marker] trait MyTrait {}
+ // impl MyTrait for u8 {}
+ // impl MyTrait for bool {}
+ // ```
+ //
+ // And we are evaluating the predicate `<_#0t as MyTrait>`.
+ //
+ // During selection, we will end up with one candidate for each
+ // impl of `MyTrait`. If we were to discard one impl in favor
+ // of the other, we would be left with one candidate, causing
+ // us to "successfully" select the predicate, unifying
+ // _#0t with (for example) `u8`.
+ //
+ // However, we have no reason to believe that this unification
+ // is correct - we've essentially just picked an arbitrary
+ // *possibility* for _#0t, and required that this be the *only*
+ // possibility.
+ //
+ // Eventually, we will either:
+ // 1) Unify all inference variables in the predicate through
+ // some other means (e.g. type-checking of a function). We will
+ // then be in a position to drop marker trait candidates
+ // without constraining inference variables (since there are
+ // none left to constrain)
+ // 2) Be left with some unconstrained inference variables. We
+ // will then correctly report an inference error, since the
+ // existence of multiple marker trait impls tells us nothing
+ // about which one should actually apply.
+ !needs_infer
+ }
+ Some(_) => true,
+ None => false,
+ };
+ }
+ ParamCandidate(ref cand) => {
+ // Prefer the impl to a global where clause candidate.
+ return is_global(cand);
+ }
+ _ => (),
+ }
+ }
+
+ false
+ }
+ ClosureCandidate
+ | GeneratorCandidate
+ | FnPointerCandidate
+ | BuiltinObjectCandidate
+ | BuiltinUnsizeCandidate
+ | BuiltinCandidate { has_nested: true } => {
+ match victim.candidate {
+ ParamCandidate(ref cand) => {
+ // Prefer these to a global where-clause bound
+ // (see issue #50825).
+ is_global(cand) && other.evaluation.must_apply_modulo_regions()
+ }
+ _ => false,
+ }
+ }
+ _ => false,
+ }
+ }
+
+ ///////////////////////////////////////////////////////////////////////////
+ // BUILTIN BOUNDS
+ //
+ // These cover the traits that are built-in to the language
+ // itself: `Copy`, `Clone` and `Sized`.
+
+ fn assemble_builtin_bound_candidates(
+ &mut self,
+ conditions: BuiltinImplConditions<'tcx>,
+ candidates: &mut SelectionCandidateSet<'tcx>,
+ ) -> Result<(), SelectionError<'tcx>> {
+ match conditions {
+ BuiltinImplConditions::Where(nested) => {
+ debug!("builtin_bound: nested={:?}", nested);
+ candidates
+ .vec
+ .push(BuiltinCandidate { has_nested: !nested.skip_binder().is_empty() });
+ }
+ BuiltinImplConditions::None => {}
+ BuiltinImplConditions::Ambiguous => {
+ debug!("assemble_builtin_bound_candidates: ambiguous builtin");
+ candidates.ambiguous = true;
+ }
+ }
+
+ Ok(())
+ }
+
+ fn sized_conditions(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ ) -> BuiltinImplConditions<'tcx> {
+ use self::BuiltinImplConditions::{Ambiguous, None, Where};
+
+ // NOTE: binder moved to (*)
+ let self_ty = self.infcx.shallow_resolve(obligation.predicate.skip_binder().self_ty());
+
+ match self_ty.kind {
+ ty::Infer(ty::IntVar(_))
+ | ty::Infer(ty::FloatVar(_))
+ | ty::Uint(_)
+ | ty::Int(_)
+ | ty::Bool
+ | ty::Float(_)
+ | ty::FnDef(..)
+ | ty::FnPtr(_)
+ | ty::RawPtr(..)
+ | ty::Char
+ | ty::Ref(..)
+ | ty::Generator(..)
+ | ty::GeneratorWitness(..)
+ | ty::Array(..)
+ | ty::Closure(..)
+ | ty::Never
+ | ty::Error => {
+ // safe for everything
+ Where(ty::Binder::dummy(Vec::new()))
+ }
+
+ ty::Str | ty::Slice(_) | ty::Dynamic(..) | ty::Foreign(..) => None,
+
+ ty::Tuple(tys) => {
+ Where(ty::Binder::bind(tys.last().into_iter().map(|k| k.expect_ty()).collect()))
+ }
+
+ ty::Adt(def, substs) => {
+ let sized_crit = def.sized_constraint(self.tcx());
+ // (*) binder moved here
+ Where(ty::Binder::bind(
+ sized_crit.iter().map(|ty| ty.subst(self.tcx(), substs)).collect(),
+ ))
+ }
+
+ ty::Projection(_) | ty::Param(_) | ty::Opaque(..) => None,
+ ty::Infer(ty::TyVar(_)) => Ambiguous,
+
+ ty::UnnormalizedProjection(..)
+ | ty::Placeholder(..)
+ | ty::Bound(..)
+ | ty::Infer(ty::FreshTy(_))
+ | ty::Infer(ty::FreshIntTy(_))
+ | ty::Infer(ty::FreshFloatTy(_)) => {
+ bug!("asked to assemble builtin bounds of unexpected type: {:?}", self_ty);
+ }
+ }
+ }
+
+ fn copy_clone_conditions(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ ) -> BuiltinImplConditions<'tcx> {
+ // NOTE: binder moved to (*)
+ let self_ty = self.infcx.shallow_resolve(obligation.predicate.skip_binder().self_ty());
+
+ use self::BuiltinImplConditions::{Ambiguous, None, Where};
+
+ match self_ty.kind {
+ ty::Infer(ty::IntVar(_))
+ | ty::Infer(ty::FloatVar(_))
+ | ty::FnDef(..)
+ | ty::FnPtr(_)
+ | ty::Error => Where(ty::Binder::dummy(Vec::new())),
+
+ ty::Uint(_)
+ | ty::Int(_)
+ | ty::Bool
+ | ty::Float(_)
+ | ty::Char
+ | ty::RawPtr(..)
+ | ty::Never
+ | ty::Ref(_, _, hir::Mutability::Not) => {
+ // Implementations provided in libcore
+ None
+ }
+
+ ty::Dynamic(..)
+ | ty::Str
+ | ty::Slice(..)
+ | ty::Generator(..)
+ | ty::GeneratorWitness(..)
+ | ty::Foreign(..)
+ | ty::Ref(_, _, hir::Mutability::Mut) => None,
+
+ ty::Array(element_ty, _) => {
+ // (*) binder moved here
+ Where(ty::Binder::bind(vec![element_ty]))
+ }
+
+ ty::Tuple(tys) => {
+ // (*) binder moved here
+ Where(ty::Binder::bind(tys.iter().map(|k| k.expect_ty()).collect()))
+ }
+
+ ty::Closure(def_id, substs) => {
+ // (*) binder moved here
+ Where(ty::Binder::bind(substs.as_closure().upvar_tys(def_id, self.tcx()).collect()))
+ }
+
+ ty::Adt(..) | ty::Projection(..) | ty::Param(..) | ty::Opaque(..) => {
+ // Fallback to whatever user-defined impls exist in this case.
+ None
+ }
+
+ ty::Infer(ty::TyVar(_)) => {
+ // Unbound type variable. Might or might not have
+ // applicable impls and so forth, depending on what
+ // those type variables wind up being bound to.
+ Ambiguous
+ }
+
+ ty::UnnormalizedProjection(..)
+ | ty::Placeholder(..)
+ | ty::Bound(..)
+ | ty::Infer(ty::FreshTy(_))
+ | ty::Infer(ty::FreshIntTy(_))
+ | ty::Infer(ty::FreshFloatTy(_)) => {
+ bug!("asked to assemble builtin bounds of unexpected type: {:?}", self_ty);
+ }
+ }
+ }
+
+ /// For default impls, we need to break apart a type into its
+ /// "constituent types" -- meaning, the types that it contains.
+ ///
+ /// Here are some (simple) examples:
+ ///
+ /// ```
+ /// (i32, u32) -> [i32, u32]
+ /// Foo where struct Foo { x: i32, y: u32 } -> [i32, u32]
+ /// Bar<i32> where struct Bar<T> { x: T, y: u32 } -> [i32, u32]
+ /// Zed<i32> where enum Zed<T> { A(T), B(u32) } -> [i32, u32]
+ /// ```
+ fn constituent_types_for_ty(&self, t: Ty<'tcx>) -> Vec<Ty<'tcx>> {
+ match t.kind {
+ ty::Uint(_)
+ | ty::Int(_)
+ | ty::Bool
+ | ty::Float(_)
+ | ty::FnDef(..)
+ | ty::FnPtr(_)
+ | ty::Str
+ | ty::Error
+ | ty::Infer(ty::IntVar(_))
+ | ty::Infer(ty::FloatVar(_))
+ | ty::Never
+ | ty::Char => Vec::new(),
+
+ ty::UnnormalizedProjection(..)
+ | ty::Placeholder(..)
+ | ty::Dynamic(..)
+ | ty::Param(..)
+ | ty::Foreign(..)
+ | ty::Projection(..)
+ | ty::Bound(..)
+ | ty::Infer(ty::TyVar(_))
+ | ty::Infer(ty::FreshTy(_))
+ | ty::Infer(ty::FreshIntTy(_))
+ | ty::Infer(ty::FreshFloatTy(_)) => {
+ bug!("asked to assemble constituent types of unexpected type: {:?}", t);
+ }
+
+ ty::RawPtr(ty::TypeAndMut { ty: element_ty, .. }) | ty::Ref(_, element_ty, _) => {
+ vec![element_ty]
+ }
+
+ ty::Array(element_ty, _) | ty::Slice(element_ty) => vec![element_ty],
+
+ ty::Tuple(ref tys) => {
+ // (T1, ..., Tn) -- meets any bound that all of T1...Tn meet
+ tys.iter().map(|k| k.expect_ty()).collect()
+ }
+
+ ty::Closure(def_id, ref substs) => {
+ substs.as_closure().upvar_tys(def_id, self.tcx()).collect()
+ }
+
+ ty::Generator(def_id, ref substs, _) => {
+ let witness = substs.as_generator().witness(def_id, self.tcx());
+ substs
+ .as_generator()
+ .upvar_tys(def_id, self.tcx())
+ .chain(iter::once(witness))
+ .collect()
+ }
+
+ ty::GeneratorWitness(types) => {
+ // This is sound because no regions in the witness can refer to
+ // the binder outside the witness. So we'll effectively reuse
+ // the implicit binder around the witness.
+ types.skip_binder().to_vec()
+ }
+
+ // For `PhantomData<T>`, we pass `T`.
+ ty::Adt(def, substs) if def.is_phantom_data() => substs.types().collect(),
+
+ ty::Adt(def, substs) => def.all_fields().map(|f| f.ty(self.tcx(), substs)).collect(),
+
+ ty::Opaque(def_id, substs) => {
+ // We can resolve the `impl Trait` to its concrete type,
+ // which enforces a DAG between the functions requiring
+ // the auto trait bounds in question.
+ vec![self.tcx().type_of(def_id).subst(self.tcx(), substs)]
+ }
+ }
+ }
+
+ fn collect_predicates_for_types(
+ &mut self,
+ param_env: ty::ParamEnv<'tcx>,
+ cause: ObligationCause<'tcx>,
+ recursion_depth: usize,
+ trait_def_id: DefId,
+ types: ty::Binder<Vec<Ty<'tcx>>>,
+ ) -> Vec<PredicateObligation<'tcx>> {
+ // Because the types were potentially derived from
+ // higher-ranked obligations they may reference late-bound
+ // regions. For example, `for<'a> Foo<&'a int> : Copy` would
+ // yield a type like `for<'a> &'a int`. In general, we
+ // maintain the invariant that we never manipulate bound
+ // regions, so we have to process these bound regions somehow.
+ //
+ // The strategy is to:
+ //
+ // 1. Instantiate those regions to placeholder regions (e.g.,
+ //    `for<'a> &'a int` becomes `&'0 int`).
+ // 2. Produce something like `&'0 int : Copy`
+ // 3. Re-bind the regions back to `for<'a> &'a int : Copy`
+
+ types
+ .skip_binder()
+ .iter()
+ .flat_map(|ty| {
+ // binder moved -\
+ let ty: ty::Binder<Ty<'tcx>> = ty::Binder::bind(ty); // <----/
+
+ self.infcx.commit_unconditionally(|_| {
+ let (skol_ty, _) = self.infcx.replace_bound_vars_with_placeholders(&ty);
+ let Normalized { value: normalized_ty, mut obligations } =
+ project::normalize_with_depth(
+ self,
+ param_env,
+ cause.clone(),
+ recursion_depth,
+ &skol_ty,
+ );
+ let skol_obligation = predicate_for_trait_def(
+ self.tcx(),
+ param_env,
+ cause.clone(),
+ trait_def_id,
+ recursion_depth,
+ normalized_ty,
+ &[],
+ );
+ obligations.push(skol_obligation);
+ obligations
+ })
+ })
+ .collect()
+ }
+
+ ///////////////////////////////////////////////////////////////////////////
+ // CONFIRMATION
+ //
+ // Confirmation unifies the output type parameters of the trait
+ // with the values found in the obligation, possibly yielding a
+ // type error. See the [rustc dev guide] for more details.
+ //
+ // [rustc dev guide]:
+ // https://rustc-dev-guide.rust-lang.org/traits/resolution.html#confirmation
+
+ fn confirm_candidate(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ candidate: SelectionCandidate<'tcx>,
+ ) -> Result<Selection<'tcx>, SelectionError<'tcx>> {
+ debug!("confirm_candidate({:?}, {:?})", obligation, candidate);
+
+ match candidate {
+ BuiltinCandidate { has_nested } => {
+ let data = self.confirm_builtin_candidate(obligation, has_nested);
+ Ok(VtableBuiltin(data))
+ }
+
+ ParamCandidate(param) => {
+ let obligations = self.confirm_param_candidate(obligation, param);
+ Ok(VtableParam(obligations))
+ }
+
+ ImplCandidate(impl_def_id) => {
+ Ok(VtableImpl(self.confirm_impl_candidate(obligation, impl_def_id)))
+ }
+
+ AutoImplCandidate(trait_def_id) => {
+ let data = self.confirm_auto_impl_candidate(obligation, trait_def_id);
+ Ok(VtableAutoImpl(data))
+ }
+
+ ProjectionCandidate => {
+ self.confirm_projection_candidate(obligation);
+ Ok(VtableParam(Vec::new()))
+ }
+
+ ClosureCandidate => {
+ let vtable_closure = self.confirm_closure_candidate(obligation)?;
+ Ok(VtableClosure(vtable_closure))
+ }
+
+ GeneratorCandidate => {
+ let vtable_generator = self.confirm_generator_candidate(obligation)?;
+ Ok(VtableGenerator(vtable_generator))
+ }
+
+ FnPointerCandidate => {
+ let data = self.confirm_fn_pointer_candidate(obligation)?;
+ Ok(VtableFnPointer(data))
+ }
+
+ TraitAliasCandidate(alias_def_id) => {
+ let data = self.confirm_trait_alias_candidate(obligation, alias_def_id);
+ Ok(VtableTraitAlias(data))
+ }
+
+ ObjectCandidate => {
+ let data = self.confirm_object_candidate(obligation);
+ Ok(VtableObject(data))
+ }
+
+ BuiltinObjectCandidate => {
+ // This indicates something like `Trait + Send: Send`. In this case, we know that
+ // this holds because that's what the object type is telling us, and there are really
+ // no additional obligations to prove and no types in particular to unify, etc.
+ Ok(VtableParam(Vec::new()))
+ }
+
+ BuiltinUnsizeCandidate => {
+ let data = self.confirm_builtin_unsize_candidate(obligation)?;
+ Ok(VtableBuiltin(data))
+ }
+ }
+ }
+
+ fn confirm_projection_candidate(&mut self, obligation: &TraitObligation<'tcx>) {
+ self.infcx.commit_unconditionally(|snapshot| {
+ let result =
+ self.match_projection_obligation_against_definition_bounds(obligation, snapshot);
+ assert!(result);
+ })
+ }
+
+ fn confirm_param_candidate(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ param: ty::PolyTraitRef<'tcx>,
+ ) -> Vec<PredicateObligation<'tcx>> {
+ debug!("confirm_param_candidate({:?},{:?})", obligation, param);
+
+ // During evaluation, we already checked that this
+ // where-clause trait-ref could be unified with the obligation
+ // trait-ref. Repeat that unification now without any
+ // transactional boundary; it should not fail.
+ match self.match_where_clause_trait_ref(obligation, param.clone()) {
+ Ok(obligations) => obligations,
+ Err(()) => {
+ bug!(
+ "Where clause `{:?}` was applicable to `{:?}` but now is not",
+ param,
+ obligation
+ );
+ }
+ }
+ }
+
+ fn confirm_builtin_candidate(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ has_nested: bool,
+ ) -> VtableBuiltinData<PredicateObligation<'tcx>> {
+ debug!("confirm_builtin_candidate({:?}, {:?})", obligation, has_nested);
+
+ let lang_items = self.tcx().lang_items();
+ let obligations = if has_nested {
+ let trait_def = obligation.predicate.def_id();
+ let conditions = if Some(trait_def) == lang_items.sized_trait() {
+ self.sized_conditions(obligation)
+ } else if Some(trait_def) == lang_items.copy_trait() {
+ self.copy_clone_conditions(obligation)
+ } else if Some(trait_def) == lang_items.clone_trait() {
+ self.copy_clone_conditions(obligation)
+ } else {
+ bug!("unexpected builtin trait {:?}", trait_def)
+ };
+ let nested = match conditions {
+ BuiltinImplConditions::Where(nested) => nested,
+ _ => bug!("obligation {:?} had matched a builtin impl but now doesn't", obligation),
+ };
+
+ let cause = obligation.derived_cause(BuiltinDerivedObligation);
+ self.collect_predicates_for_types(
+ obligation.param_env,
+ cause,
+ obligation.recursion_depth + 1,
+ trait_def,
+ nested,
+ )
+ } else {
+ vec![]
+ };
+
+ debug!("confirm_builtin_candidate: obligations={:?}", obligations);
+
+ VtableBuiltinData { nested: obligations }
+ }
+
+ /// This handles the case where an `auto trait Foo` impl is being used.
+ /// The idea is that the impl applies to `X : Foo` if the following conditions are met:
+ ///
+ /// 1. For each constituent type `Y` in `X`, `Y : Foo` holds
+ /// 2. For each where-clause `C` declared on `Foo`, `[Self => X] C` holds.
+ fn confirm_auto_impl_candidate(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ trait_def_id: DefId,
+ ) -> VtableAutoImplData<PredicateObligation<'tcx>> {
+ debug!("confirm_auto_impl_candidate({:?}, {:?})", obligation, trait_def_id);
+
+ let types = obligation.predicate.map_bound(|inner| {
+ let self_ty = self.infcx.shallow_resolve(inner.self_ty());
+ self.constituent_types_for_ty(self_ty)
+ });
+ self.vtable_auto_impl(obligation, trait_def_id, types)
+ }
+
+ /// See `confirm_auto_impl_candidate`.
+ fn vtable_auto_impl(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ trait_def_id: DefId,
+ nested: ty::Binder<Vec<Ty<'tcx>>>,
+ ) -> VtableAutoImplData<PredicateObligation<'tcx>> {
+ debug!("vtable_auto_impl: nested={:?}", nested);
+
+ let cause = obligation.derived_cause(BuiltinDerivedObligation);
+ let mut obligations = self.collect_predicates_for_types(
+ obligation.param_env,
+ cause,
+ obligation.recursion_depth + 1,
+ trait_def_id,
+ nested,
+ );
+
+ let trait_obligations: Vec<PredicateObligation<'_>> =
+ self.infcx.commit_unconditionally(|_| {
+ let poly_trait_ref = obligation.predicate.to_poly_trait_ref();
+ let (trait_ref, _) =
+ self.infcx.replace_bound_vars_with_placeholders(&poly_trait_ref);
+ let cause = obligation.derived_cause(ImplDerivedObligation);
+ self.impl_or_trait_obligations(
+ cause,
+ obligation.recursion_depth + 1,
+ obligation.param_env,
+ trait_def_id,
+ &trait_ref.substs,
+ )
+ });
+
+ // Adds the predicates from the trait. Note that this contains a `Self: Trait`
+ // predicate as usual. It won't have any effect since auto traits are coinductive.
+ obligations.extend(trait_obligations);
+
+ debug!("vtable_auto_impl: obligations={:?}", obligations);
+
+ VtableAutoImplData { trait_def_id, nested: obligations }
+ }
+
+ fn confirm_impl_candidate(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ impl_def_id: DefId,
+ ) -> VtableImplData<'tcx, PredicateObligation<'tcx>> {
+ debug!("confirm_impl_candidate({:?},{:?})", obligation, impl_def_id);
+
+ // First, create the substitutions by matching the impl again,
+ // this time not in a probe.
+ self.infcx.commit_unconditionally(|snapshot| {
+ let substs = self.rematch_impl(impl_def_id, obligation, snapshot);
+ debug!("confirm_impl_candidate: substs={:?}", substs);
+ let cause = obligation.derived_cause(ImplDerivedObligation);
+ self.vtable_impl(
+ impl_def_id,
+ substs,
+ cause,
+ obligation.recursion_depth + 1,
+ obligation.param_env,
+ )
+ })
+ }
+
+ fn vtable_impl(
+ &mut self,
+ impl_def_id: DefId,
+ mut substs: Normalized<'tcx, SubstsRef<'tcx>>,
+ cause: ObligationCause<'tcx>,
+ recursion_depth: usize,
+ param_env: ty::ParamEnv<'tcx>,
+ ) -> VtableImplData<'tcx, PredicateObligation<'tcx>> {
+ debug!(
+ "vtable_impl(impl_def_id={:?}, substs={:?}, recursion_depth={})",
+ impl_def_id, substs, recursion_depth,
+ );
+
+ let mut impl_obligations = self.impl_or_trait_obligations(
+ cause,
+ recursion_depth,
+ param_env,
+ impl_def_id,
+ &substs.value,
+ );
+
+ debug!(
+ "vtable_impl: impl_def_id={:?} impl_obligations={:?}",
+ impl_def_id, impl_obligations
+ );
+
+ // Because of RFC447, the impl-trait-ref and obligations
+ // are sufficient to determine the impl substs, without
+ // relying on projections in the impl-trait-ref.
+ //
+ // e.g., `impl<U: Tr, V: Iterator<Item=U>> Foo<<U as Tr>::T> for V`
+ impl_obligations.append(&mut substs.obligations);
+
+ VtableImplData { impl_def_id, substs: substs.value, nested: impl_obligations }
+ }
+
+ fn confirm_object_candidate(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ ) -> VtableObjectData<'tcx, PredicateObligation<'tcx>> {
+ debug!("confirm_object_candidate({:?})", obligation);
+
+ // FIXME(nmatsakis) skipping binder here seems wrong -- we should
+ // probably flatten the binder from the obligation and the binder
+ // from the object. Have to try to make a broken test case that
+ // results.
+ let self_ty = self.infcx.shallow_resolve(*obligation.self_ty().skip_binder());
+ let poly_trait_ref = match self_ty.kind {
+ ty::Dynamic(ref data, ..) => data
+ .principal()
+ .unwrap_or_else(|| {
+ span_bug!(obligation.cause.span, "object candidate with no principal")
+ })
+ .with_self_ty(self.tcx(), self_ty),
+ _ => span_bug!(obligation.cause.span, "object candidate with non-object"),
+ };
+
+ let mut upcast_trait_ref = None;
+ let mut nested = vec![];
+ let vtable_base;
+
+ {
+ let tcx = self.tcx();
+
+ // We want to find the first supertrait in the list of
+ // supertraits that we can unify with, and do that
+ // unification. We know that there is exactly one in the list
+ // where we can unify, because otherwise select would have
+ // reported an ambiguity. (When we do find a match, also
+ // record it for later.)
+ let nonmatching = util::supertraits(tcx, poly_trait_ref).take_while(|&t| {
+ match self.infcx.commit_if_ok(|_| self.match_poly_trait_ref(obligation, t)) {
+ Ok(obligations) => {
+ upcast_trait_ref = Some(t);
+ nested.extend(obligations);
+ false
+ }
+ Err(_) => true,
+ }
+ });
+
+ // Additionally, for each of the non-matching predicates that
+ // we pass over, we sum up the number of vtable
+ // entries, so that we can compute the offset for the selected
+ // trait.
+ vtable_base = nonmatching.map(|t| super::util::count_own_vtable_entries(tcx, t)).sum();
+ }
+
+ VtableObjectData { upcast_trait_ref: upcast_trait_ref.unwrap(), vtable_base, nested }
+ }
+
+ fn confirm_fn_pointer_candidate(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ ) -> Result<VtableFnPointerData<'tcx, PredicateObligation<'tcx>>, SelectionError<'tcx>> {
+ debug!("confirm_fn_pointer_candidate({:?})", obligation);
+
+ // Okay to skip binder; it is reintroduced below.
+ let self_ty = self.infcx.shallow_resolve(*obligation.self_ty().skip_binder());
+ let sig = self_ty.fn_sig(self.tcx());
+ let trait_ref = closure_trait_ref_and_return_type(
+ self.tcx(),
+ obligation.predicate.def_id(),
+ self_ty,
+ sig,
+ util::TupleArgumentsFlag::Yes,
+ )
+ .map_bound(|(trait_ref, _)| trait_ref);
+
+ let Normalized { value: trait_ref, obligations } = project::normalize_with_depth(
+ self,
+ obligation.param_env,
+ obligation.cause.clone(),
+ obligation.recursion_depth + 1,
+ &trait_ref,
+ );
+
+ self.confirm_poly_trait_refs(
+ obligation.cause.clone(),
+ obligation.param_env,
+ obligation.predicate.to_poly_trait_ref(),
+ trait_ref,
+ )?;
+ Ok(VtableFnPointerData { fn_ty: self_ty, nested: obligations })
+ }
+
+ fn confirm_trait_alias_candidate(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ alias_def_id: DefId,
+ ) -> VtableTraitAliasData<'tcx, PredicateObligation<'tcx>> {
+ debug!("confirm_trait_alias_candidate({:?}, {:?})", obligation, alias_def_id);
+
+ self.infcx.commit_unconditionally(|_| {
+ let (predicate, _) =
+ self.infcx().replace_bound_vars_with_placeholders(&obligation.predicate);
+ let trait_ref = predicate.trait_ref;
+ let trait_def_id = trait_ref.def_id;
+ let substs = trait_ref.substs;
+
+ let trait_obligations = self.impl_or_trait_obligations(
+ obligation.cause.clone(),
+ obligation.recursion_depth,
+ obligation.param_env,
+ trait_def_id,
+ &substs,
+ );
+
+ debug!(
+ "confirm_trait_alias_candidate: trait_def_id={:?} trait_obligations={:?}",
+ trait_def_id, trait_obligations
+ );
+
+ VtableTraitAliasData { alias_def_id, substs, nested: trait_obligations }
+ })
+ }
+
+ fn confirm_generator_candidate(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ ) -> Result<VtableGeneratorData<'tcx, PredicateObligation<'tcx>>, SelectionError<'tcx>> {
+ // Okay to skip binder because the substs on generator types never
+ // touch bound regions, they just capture the in-scope
+ // type/region parameters.
+ let self_ty = self.infcx.shallow_resolve(*obligation.self_ty().skip_binder());
+ let (generator_def_id, substs) = match self_ty.kind {
+ ty::Generator(id, substs, _) => (id, substs),
+ _ => bug!("closure candidate for non-closure {:?}", obligation),
+ };
+
+ debug!("confirm_generator_candidate({:?},{:?},{:?})", obligation, generator_def_id, substs);
+
+ let trait_ref = self.generator_trait_ref_unnormalized(obligation, generator_def_id, substs);
+ let Normalized { value: trait_ref, mut obligations } = normalize_with_depth(
+ self,
+ obligation.param_env,
+ obligation.cause.clone(),
+ obligation.recursion_depth + 1,
+ &trait_ref,
+ );
+
+ debug!(
+ "confirm_generator_candidate(generator_def_id={:?}, \
+ trait_ref={:?}, obligations={:?})",
+ generator_def_id, trait_ref, obligations
+ );
+
+ obligations.extend(self.confirm_poly_trait_refs(
+ obligation.cause.clone(),
+ obligation.param_env,
+ obligation.predicate.to_poly_trait_ref(),
+ trait_ref,
+ )?);
+
+ Ok(VtableGeneratorData { generator_def_id, substs, nested: obligations })
+ }
+
+ fn confirm_closure_candidate(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ ) -> Result<VtableClosureData<'tcx, PredicateObligation<'tcx>>, SelectionError<'tcx>> {
+ debug!("confirm_closure_candidate({:?})", obligation);
+
+ let kind = self
+ .tcx()
+ .fn_trait_kind_from_lang_item(obligation.predicate.def_id())
+ .unwrap_or_else(|| bug!("closure candidate for non-fn trait {:?}", obligation));
+
+ // Okay to skip binder because the substs on closure types never
+ // touch bound regions, they just capture the in-scope
+ // type/region parameters.
+ let self_ty = self.infcx.shallow_resolve(*obligation.self_ty().skip_binder());
+ let (closure_def_id, substs) = match self_ty.kind {
+ ty::Closure(id, substs) => (id, substs),
+ _ => bug!("closure candidate for non-closure {:?}", obligation),
+ };
+
+ let trait_ref = self.closure_trait_ref_unnormalized(obligation, closure_def_id, substs);
+ let Normalized { value: trait_ref, mut obligations } = normalize_with_depth(
+ self,
+ obligation.param_env,
+ obligation.cause.clone(),
+ obligation.recursion_depth + 1,
+ &trait_ref,
+ );
+
+ debug!(
+ "confirm_closure_candidate(closure_def_id={:?}, trait_ref={:?}, obligations={:?})",
+ closure_def_id, trait_ref, obligations
+ );
+
+ obligations.extend(self.confirm_poly_trait_refs(
+ obligation.cause.clone(),
+ obligation.param_env,
+ obligation.predicate.to_poly_trait_ref(),
+ trait_ref,
+ )?);
+
+ obligations.push(Obligation::new(
+ obligation.cause.clone(),
+ obligation.param_env,
+ ty::Predicate::ClosureKind(closure_def_id, substs, kind),
+ ));
+
+ Ok(VtableClosureData { closure_def_id, substs, nested: obligations })
+ }
+
+ /// In the case of closure types and fn pointers,
+ /// we currently treat the input type parameters on the trait as
+ /// outputs. This means that when we have a match we have only
+ /// considered the self type, so we have to go back and make sure
+ /// to relate the argument types too. This is kind of wrong, but
+ /// since we control the full set of impls, also not that wrong,
+ /// and it DOES yield better error messages (since we don't report
+ /// errors as if there is no applicable impl, but rather report
+ /// errors about mismatched argument types).
+ ///
+ /// Here is an example. Imagine we have a closure expression
+ /// and we desugared it so that the type of the expression is
+ /// `Closure`, and `Closure` expects an int as argument. Then it
+ /// is "as if" the compiler generated this impl:
+ ///
+ /// impl Fn(int) for Closure { ... }
+ ///
+ /// Now imagine our obligation is `Fn(usize) for Closure`. So far
+ /// we have matched the self type `Closure`. At this point we'll
+ /// compare the `int` to `usize` and generate an error.
+ ///
+ /// Note that this checking occurs *after* the impl has been selected,
+ /// because these output type parameters should not affect the
+ /// selection of the impl. Therefore, if there is a mismatch, we
+ /// report an error to the user.
+ fn confirm_poly_trait_refs(
+ &mut self,
+ obligation_cause: ObligationCause<'tcx>,
+ obligation_param_env: ty::ParamEnv<'tcx>,
+ obligation_trait_ref: ty::PolyTraitRef<'tcx>,
+ expected_trait_ref: ty::PolyTraitRef<'tcx>,
+ ) -> Result<Vec<PredicateObligation<'tcx>>, SelectionError<'tcx>> {
+ self.infcx
+ .at(&obligation_cause, obligation_param_env)
+ .sup(obligation_trait_ref, expected_trait_ref)
+ .map(|InferOk { obligations, .. }| obligations)
+ .map_err(|e| OutputTypeParameterMismatch(expected_trait_ref, obligation_trait_ref, e))
+ }
+
+ fn confirm_builtin_unsize_candidate(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ ) -> Result<VtableBuiltinData<PredicateObligation<'tcx>>, SelectionError<'tcx>> {
+ let tcx = self.tcx();
+
+ // `assemble_candidates_for_unsizing` should ensure there are no late-bound
+ // regions here. See the comment there for more details.
+ let source = self.infcx.shallow_resolve(obligation.self_ty().no_bound_vars().unwrap());
+ let target = obligation.predicate.skip_binder().trait_ref.substs.type_at(1);
+ let target = self.infcx.shallow_resolve(target);
+
+ debug!("confirm_builtin_unsize_candidate(source={:?}, target={:?})", source, target);
+
+ let mut nested = vec![];
+ match (&source.kind, &target.kind) {
+ // Trait+Kx+'a -> Trait+Ky+'b (upcasts).
+ (&ty::Dynamic(ref data_a, r_a), &ty::Dynamic(ref data_b, r_b)) => {
+ // See `assemble_candidates_for_unsizing` for more info.
+ let existential_predicates = data_a.map_bound(|data_a| {
+ let iter = data_a
+ .principal()
+ .map(|x| ty::ExistentialPredicate::Trait(x))
+ .into_iter()
+ .chain(
+ data_a
+ .projection_bounds()
+ .map(|x| ty::ExistentialPredicate::Projection(x)),
+ )
+ .chain(data_b.auto_traits().map(ty::ExistentialPredicate::AutoTrait));
+ tcx.mk_existential_predicates(iter)
+ });
+ let source_trait = tcx.mk_dynamic(existential_predicates, r_b);
+
+ // Require that the traits involved in this upcast are **equal**;
+ // only the **lifetime bound** is changed.
+ //
+ // FIXME: This condition is arguably too strong -- it would
+ // suffice for the source trait to be a *subtype* of the target
+ // trait. In particular, changing from something like
+ // `for<'a, 'b> Foo<'a, 'b>` to `for<'a> Foo<'a, 'a>` should be
+ // permitted. And, indeed, in commit
+ // 904a0bde93f0348f69914ee90b1f8b6e4e0d7cbc, this
+ // condition was loosened. However, when the leak check was
+ // added back, using subtype here actually guides the coercion
+ // code in such a way that it accepts `old-lub-glb-object.rs`.
+ // This is probably a good thing, but I've modified this to `.eq`
+ // because I want to continue rejecting that test (as we have
+ // done for quite some time) before we are firmly comfortable
+ // with what our behavior should be there. -nikomatsakis
+ let InferOk { obligations, .. } = self
+ .infcx
+ .at(&obligation.cause, obligation.param_env)
+ .eq(target, source_trait) // FIXME -- see below
+ .map_err(|_| Unimplemented)?;
+ nested.extend(obligations);
+
+ // Register one obligation for 'a: 'b.
+ let cause = ObligationCause::new(
+ obligation.cause.span,
+ obligation.cause.body_id,
+ ObjectCastObligation(target),
+ );
+ let outlives = ty::OutlivesPredicate(r_a, r_b);
+ nested.push(Obligation::with_depth(
+ cause,
+ obligation.recursion_depth + 1,
+ obligation.param_env,
+ ty::Binder::bind(outlives).to_predicate(),
+ ));
+ }
+
+ // `T` -> `Trait`
+ (_, &ty::Dynamic(ref data, r)) => {
+ let mut object_dids = data.auto_traits().chain(data.principal_def_id());
+ if let Some(did) = object_dids.find(|did| !tcx.is_object_safe(*did)) {
+ return Err(TraitNotObjectSafe(did));
+ }
+
+ let cause = ObligationCause::new(
+ obligation.cause.span,
+ obligation.cause.body_id,
+ ObjectCastObligation(target),
+ );
+
+ let predicate_to_obligation = |predicate| {
+ Obligation::with_depth(
+ cause.clone(),
+ obligation.recursion_depth + 1,
+ obligation.param_env,
+ predicate,
+ )
+ };
+
+ // Create obligations:
+ // - Casting `T` to `Trait`
+ // - For all the various builtin bounds attached to the object cast. (In other
+ // words, if the object type is `Foo + Send`, this would create an obligation for
+ // the `Send` check.)
+ // - Projection predicates
+ nested.extend(
+ data.iter().map(|predicate| {
+ predicate_to_obligation(predicate.with_self_ty(tcx, source))
+ }),
+ );
+
+ // We can only make objects from sized types.
+ let tr = ty::TraitRef::new(
+ tcx.require_lang_item(lang_items::SizedTraitLangItem, None),
+ tcx.mk_substs_trait(source, &[]),
+ );
+ nested.push(predicate_to_obligation(tr.without_const().to_predicate()));
+
+ // If the type is `Foo + 'a`, ensure that the type
+ // being cast to `Foo + 'a` outlives `'a`:
+ let outlives = ty::OutlivesPredicate(source, r);
+ nested.push(predicate_to_obligation(ty::Binder::dummy(outlives).to_predicate()));
+ }
+
+ // `[T; n]` -> `[T]`
+ (&ty::Array(a, _), &ty::Slice(b)) => {
+ let InferOk { obligations, .. } = self
+ .infcx
+ .at(&obligation.cause, obligation.param_env)
+ .eq(b, a)
+ .map_err(|_| Unimplemented)?;
+ nested.extend(obligations);
+ }
+
+ // `Struct<T>` -> `Struct<U>`
+ (&ty::Adt(def, substs_a), &ty::Adt(_, substs_b)) => {
+ let fields =
+ def.all_fields().map(|field| tcx.type_of(field.did)).collect::<Vec<_>>();
+
+ // The last field of the structure has to exist and contain type parameters.
+ let field = if let Some(&field) = fields.last() {
+ field
+ } else {
+ return Err(Unimplemented);
+ };
+ let mut ty_params = GrowableBitSet::new_empty();
+ let mut found = false;
+ for ty in field.walk() {
+ if let ty::Param(p) = ty.kind {
+ ty_params.insert(p.index as usize);
+ found = true;
+ }
+ }
+ if !found {
+ return Err(Unimplemented);
+ }
+
+ // Replace type parameters used in unsizing with
+ // Error and ensure they do not affect any other fields.
+ // This could be checked after type collection for any struct
+ // with a potentially unsized trailing field.
+ let params = substs_a
+ .iter()
+ .enumerate()
+ .map(|(i, &k)| if ty_params.contains(i) { tcx.types.err.into() } else { k });
+ let substs = tcx.mk_substs(params);
+ for &ty in fields.split_last().unwrap().1 {
+ if ty.subst(tcx, substs).references_error() {
+ return Err(Unimplemented);
+ }
+ }
+
+ // Extract `Field<T>` and `Field<U>` from `Struct<T>` and `Struct<U>`.
+ let inner_source = field.subst(tcx, substs_a);
+ let inner_target = field.subst(tcx, substs_b);
+
+ // Check that the source struct with the target's
+ // unsized parameters is equal to the target.
+ let params = substs_a.iter().enumerate().map(|(i, &k)| {
+ if ty_params.contains(i) { substs_b.type_at(i).into() } else { k }
+ });
+ let new_struct = tcx.mk_adt(def, tcx.mk_substs(params));
+ let InferOk { obligations, .. } = self
+ .infcx
+ .at(&obligation.cause, obligation.param_env)
+ .eq(target, new_struct)
+ .map_err(|_| Unimplemented)?;
+ nested.extend(obligations);
+
+ // Construct the nested `Field<T>: Unsize<Field<U>>` predicate.
+ nested.push(predicate_for_trait_def(
+ tcx,
+ obligation.param_env,
+ obligation.cause.clone(),
+ obligation.predicate.def_id(),
+ obligation.recursion_depth + 1,
+ inner_source,
+ &[inner_target.into()],
+ ));
+ }
+
+ // `(.., T)` -> `(.., U)`
+ (&ty::Tuple(tys_a), &ty::Tuple(tys_b)) => {
+ assert_eq!(tys_a.len(), tys_b.len());
+
+ // The last field of the tuple has to exist.
+ let (&a_last, a_mid) = if let Some(x) = tys_a.split_last() {
+ x
+ } else {
+ return Err(Unimplemented);
+ };
+ let &b_last = tys_b.last().unwrap();
+
+ // Check that the source tuple with the target's
+ // last element is equal to the target.
+ let new_tuple = tcx.mk_tup(
+ a_mid.iter().map(|k| k.expect_ty()).chain(iter::once(b_last.expect_ty())),
+ );
+ let InferOk { obligations, .. } = self
+ .infcx
+ .at(&obligation.cause, obligation.param_env)
+ .eq(target, new_tuple)
+ .map_err(|_| Unimplemented)?;
+ nested.extend(obligations);
+
+ // Construct the nested `T: Unsize<U>` predicate.
+ nested.push(predicate_for_trait_def(
+ tcx,
+ obligation.param_env,
+ obligation.cause.clone(),
+ obligation.predicate.def_id(),
+ obligation.recursion_depth + 1,
+ a_last.expect_ty(),
+ &[b_last],
+ ));
+ }
+
+ _ => bug!(),
+ };
+
+ Ok(VtableBuiltinData { nested })
+ }
+
+ ///////////////////////////////////////////////////////////////////////////
+ // Matching
+ //
+ // Matching is a common path used for both evaluation and
+ // confirmation. It basically unifies types that appear in impls
+ // and traits. This does affect the surrounding environment;
+ // therefore, when used during evaluation, match routines must be
+ // run inside of a `probe()` so that their side-effects are
+ // contained.
+
+ fn rematch_impl(
+ &mut self,
+ impl_def_id: DefId,
+ obligation: &TraitObligation<'tcx>,
+ snapshot: &CombinedSnapshot<'_, 'tcx>,
+ ) -> Normalized<'tcx, SubstsRef<'tcx>> {
+ match self.match_impl(impl_def_id, obligation, snapshot) {
+ Ok(substs) => substs,
+ Err(()) => {
+ bug!(
+ "Impl {:?} was matchable against {:?} but now is not",
+ impl_def_id,
+ obligation
+ );
+ }
+ }
+ }
+
+ fn match_impl(
+ &mut self,
+ impl_def_id: DefId,
+ obligation: &TraitObligation<'tcx>,
+ snapshot: &CombinedSnapshot<'_, 'tcx>,
+ ) -> Result<Normalized<'tcx, SubstsRef<'tcx>>, ()> {
+ let impl_trait_ref = self.tcx().impl_trait_ref(impl_def_id).unwrap();
+
+ // Before we create the substitutions and everything, first
+ // consider a "quick reject". This avoids creating types,
+ // inference variables, and so forth that we would otherwise need.
+ if self.fast_reject_trait_refs(obligation, &impl_trait_ref) {
+ return Err(());
+ }
+
+ let (skol_obligation, placeholder_map) =
+ self.infcx().replace_bound_vars_with_placeholders(&obligation.predicate);
+ let skol_obligation_trait_ref = skol_obligation.trait_ref;
+
+ let impl_substs = self.infcx.fresh_substs_for_item(obligation.cause.span, impl_def_id);
+
+ let impl_trait_ref = impl_trait_ref.subst(self.tcx(), impl_substs);
+
+ let Normalized { value: impl_trait_ref, obligations: mut nested_obligations } =
+ project::normalize_with_depth(
+ self,
+ obligation.param_env,
+ obligation.cause.clone(),
+ obligation.recursion_depth + 1,
+ &impl_trait_ref,
+ );
+
+ debug!(
+ "match_impl(impl_def_id={:?}, obligation={:?}, \
+ impl_trait_ref={:?}, skol_obligation_trait_ref={:?})",
+ impl_def_id, obligation, impl_trait_ref, skol_obligation_trait_ref
+ );
+
+ let InferOk { obligations, .. } = self
+ .infcx
+ .at(&obligation.cause, obligation.param_env)
+ .eq(skol_obligation_trait_ref, impl_trait_ref)
+ .map_err(|e| debug!("match_impl: failed eq_trait_refs due to `{}`", e))?;
+ nested_obligations.extend(obligations);
+
+ if let Err(e) = self.infcx.leak_check(false, &placeholder_map, snapshot) {
+ debug!("match_impl: failed leak check due to `{}`", e);
+ return Err(());
+ }
+
+ if !self.intercrate
+ && self.tcx().impl_polarity(impl_def_id) == ty::ImplPolarity::Reservation
+ {
+ debug!("match_impl: reservation impls only apply in intercrate mode");
+ return Err(());
+ }
+
+ debug!("match_impl: success impl_substs={:?}", impl_substs);
+ Ok(Normalized { value: impl_substs, obligations: nested_obligations })
+ }
+
+ fn fast_reject_trait_refs(
+ &mut self,
+ obligation: &TraitObligation<'_>,
+ impl_trait_ref: &ty::TraitRef<'_>,
+ ) -> bool {
+ // We can avoid creating type variables and doing the full
+ // substitution if we find that any of the input types, when
+ // simplified, do not match.
+
+ obligation.predicate.skip_binder().input_types().zip(impl_trait_ref.input_types()).any(
+ |(obligation_ty, impl_ty)| {
+ let simplified_obligation_ty =
+ fast_reject::simplify_type(self.tcx(), obligation_ty, true);
+ let simplified_impl_ty = fast_reject::simplify_type(self.tcx(), impl_ty, false);
+
+ simplified_obligation_ty.is_some()
+ && simplified_impl_ty.is_some()
+ && simplified_obligation_ty != simplified_impl_ty
+ },
+ )
+ }
+
+ /// Normalize `where_clause_trait_ref` and try to match it against
+ /// `obligation`. If successful, return any predicates that
+ /// result from the normalization. Normalization is necessary
+ /// because where-clauses are stored in the parameter environment
+ /// unnormalized.
+ fn match_where_clause_trait_ref(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ where_clause_trait_ref: ty::PolyTraitRef<'tcx>,
+ ) -> Result<Vec<PredicateObligation<'tcx>>, ()> {
+ self.match_poly_trait_ref(obligation, where_clause_trait_ref)
+ }
+
+ /// Returns `Ok` if `poly_trait_ref` being true implies that the
+ /// obligation is satisfied.
+ fn match_poly_trait_ref(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ poly_trait_ref: ty::PolyTraitRef<'tcx>,
+ ) -> Result<Vec<PredicateObligation<'tcx>>, ()> {
+ debug!(
+ "match_poly_trait_ref: obligation={:?} poly_trait_ref={:?}",
+ obligation, poly_trait_ref
+ );
+
+ self.infcx
+ .at(&obligation.cause, obligation.param_env)
+ .sup(obligation.predicate.to_poly_trait_ref(), poly_trait_ref)
+ .map(|InferOk { obligations, .. }| obligations)
+ .map_err(|_| ())
+ }
+
+ ///////////////////////////////////////////////////////////////////////////
+ // Miscellany
+
+ fn match_fresh_trait_refs(
+ &self,
+ previous: &ty::PolyTraitRef<'tcx>,
+ current: &ty::PolyTraitRef<'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ ) -> bool {
+ let mut matcher = ty::_match::Match::new(self.tcx(), param_env);
+ matcher.relate(previous, current).is_ok()
+ }
+
+ fn push_stack<'o>(
+ &mut self,
+ previous_stack: TraitObligationStackList<'o, 'tcx>,
+ obligation: &'o TraitObligation<'tcx>,
+ ) -> TraitObligationStack<'o, 'tcx> {
+ let fresh_trait_ref =
+ obligation.predicate.to_poly_trait_ref().fold_with(&mut self.freshener);
+
+ let dfn = previous_stack.cache.next_dfn();
+ let depth = previous_stack.depth() + 1;
+ TraitObligationStack {
+ obligation,
+ fresh_trait_ref,
+ reached_depth: Cell::new(depth),
+ previous: previous_stack,
+ dfn,
+ depth,
+ }
+ }
+
+ fn closure_trait_ref_unnormalized(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ closure_def_id: DefId,
+ substs: SubstsRef<'tcx>,
+ ) -> ty::PolyTraitRef<'tcx> {
+ debug!(
+ "closure_trait_ref_unnormalized(obligation={:?}, closure_def_id={:?}, substs={:?})",
+ obligation, closure_def_id, substs,
+ );
+ let closure_type = self.infcx.closure_sig(closure_def_id, substs);
+
+ debug!("closure_trait_ref_unnormalized: closure_type = {:?}", closure_type);
+
+ // (1) Feels icky to skip the binder here, but OTOH we know
+ // that the self-type is an unboxed closure type and hence is
+ // in fact unparameterized (or at least does not reference any
+ // regions bound in the obligation). Still probably some
+ // refactoring could make this nicer.
+ closure_trait_ref_and_return_type(
+ self.tcx(),
+ obligation.predicate.def_id(),
+ obligation.predicate.skip_binder().self_ty(), // (1)
+ closure_type,
+ util::TupleArgumentsFlag::No,
+ )
+ .map_bound(|(trait_ref, _)| trait_ref)
+ }
+
+ fn generator_trait_ref_unnormalized(
+ &mut self,
+ obligation: &TraitObligation<'tcx>,
+ closure_def_id: DefId,
+ substs: SubstsRef<'tcx>,
+ ) -> ty::PolyTraitRef<'tcx> {
+ let gen_sig = substs.as_generator().poly_sig(closure_def_id, self.tcx());
+
+ // (1) Feels icky to skip the binder here, but OTOH we know
+ // that the self-type is a generator type and hence is
+ // in fact unparameterized (or at least does not reference any
+ // regions bound in the obligation). Still probably some
+ // refactoring could make this nicer.
+
+ super::util::generator_trait_ref_and_outputs(
+ self.tcx(),
+ obligation.predicate.def_id(),
+ obligation.predicate.skip_binder().self_ty(), // (1)
+ gen_sig,
+ )
+ .map_bound(|(trait_ref, ..)| trait_ref)
+ }
+
+ /// Returns the obligations that are implied by instantiating an
+ /// impl or trait. The obligations are substituted and fully
+ /// normalized. This is used when confirming an impl or default
+ /// impl.
+ fn impl_or_trait_obligations(
+ &mut self,
+ cause: ObligationCause<'tcx>,
+ recursion_depth: usize,
+ param_env: ty::ParamEnv<'tcx>,
+ def_id: DefId, // of impl or trait
+ substs: SubstsRef<'tcx>, // for impl or trait
+ ) -> Vec<PredicateObligation<'tcx>> {
+ debug!("impl_or_trait_obligations(def_id={:?})", def_id);
+ let tcx = self.tcx();
+
+ // To allow for one-pass evaluation of the nested obligation,
+ // each predicate must be preceded by the obligations required
+ // to normalize it.
+ // For example, if we have:
+ // impl<U: Iterator<Item: Copy>, V: Iterator<Item = U>> Foo for V
+ // the impl will have the following predicates:
+ // <V as Iterator>::Item = U,
+ // U: Iterator, U: Sized,
+ // V: Iterator, V: Sized,
+ // <U as Iterator>::Item: Copy
+ // When we substitute, say, `V => IntoIter<u32>, U => $0`, the last
+ // obligation will normalize to `<$0 as Iterator>::Item = $1` and
+ // `$1: Copy`, so we must ensure the obligations are emitted in
+ // that order.
+ let predicates = tcx.predicates_of(def_id);
+ assert_eq!(predicates.parent, None);
+ let mut obligations = Vec::with_capacity(predicates.predicates.len());
+ for (predicate, _) in predicates.predicates {
+ let predicate = normalize_with_depth_to(
+ self,
+ param_env,
+ cause.clone(),
+ recursion_depth,
+ &predicate.subst(tcx, substs),
+ &mut obligations,
+ );
+ obligations.push(Obligation {
+ cause: cause.clone(),
+ recursion_depth,
+ param_env,
+ predicate,
+ });
+ }
+
+ // We are performing deduplication here to avoid exponential blowups
+ // (#38528) from happening, but the real cause of the duplication is
+ // unknown. What we know is that the deduplication avoids an exponential
+ // amount of predicates being propagated when processing deeply nested
+ // types.
+ //
+ // This code is hot enough that it's worth avoiding the allocation
+ // required for the FxHashSet when possible. Special-casing lengths 0,
+ // 1 and 2 covers roughly 75-80% of the cases.
+ if obligations.len() <= 1 {
+ // No possibility of duplicates.
+ } else if obligations.len() == 2 {
+ // Only two elements. Drop the second if they are equal.
+ if obligations[0] == obligations[1] {
+ obligations.truncate(1);
+ }
+ } else {
+ // Three or more elements. Use a general deduplication process.
+ let mut seen = FxHashSet::default();
+ obligations.retain(|i| seen.insert(i.clone()));
+ }
+
+ obligations
+ }
+}
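The length-specialized deduplication used in `impl_or_trait_obligations` can be sketched in isolation. This is a simplified standalone model using `std::collections::HashSet` in place of `FxHashSet`, with `String` standing in for the obligation type; the helper name is invented for illustration.

```rust
use std::collections::HashSet;

/// Remove duplicates while preserving the order of first occurrences,
/// special-casing short vectors to avoid allocating a hash set in the
/// common case.
fn dedup_preserving_order(items: &mut Vec<String>) {
    if items.len() <= 1 {
        // No possibility of duplicates.
    } else if items.len() == 2 {
        // Only two elements: drop the second if they are equal.
        if items[0] == items[1] {
            items.truncate(1);
        }
    } else {
        // Three or more elements: fall back to a general dedup pass.
        let mut seen = HashSet::new();
        items.retain(|i| seen.insert(i.clone()));
    }
}
```

The point of the special cases is purely performance: for the dominant lengths (0, 1, 2) no `HashSet` is ever allocated.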
+
+trait TraitObligationExt<'tcx> {
+ fn derived_cause(
+ &self,
+ variant: fn(DerivedObligationCause<'tcx>) -> ObligationCauseCode<'tcx>,
+ ) -> ObligationCause<'tcx>;
+}
+
+impl<'tcx> TraitObligationExt<'tcx> for TraitObligation<'tcx> {
+ #[allow(unused_comparisons)]
+ fn derived_cause(
+ &self,
+ variant: fn(DerivedObligationCause<'tcx>) -> ObligationCauseCode<'tcx>,
+ ) -> ObligationCause<'tcx> {
+ /*!
+ * Creates a cause for obligations that are derived from
+ * `obligation` by a recursive search (e.g., for a builtin
+ * bound, or eventually an `auto trait Foo`). If `obligation`
+ * is itself a derived obligation, this is just a clone, but
+ * otherwise we create a "derived obligation" cause so as to
+ * keep track of the original root obligation for error
+ * reporting.
+ */
+
+ let obligation = self;
+
+ // NOTE(flaper87): As of now, it keeps track of the whole error
+ // chain. Ideally, we should have a way to configure this either
+ // by using -Z verbose or just a CLI argument.
+ let derived_cause = DerivedObligationCause {
+ parent_trait_ref: obligation.predicate.to_poly_trait_ref(),
+ parent_code: Rc::new(obligation.cause.code.clone()),
+ };
+ let derived_code = variant(derived_cause);
+ ObligationCause::new(obligation.cause.span, obligation.cause.body_id, derived_code)
+ }
+}
+
+impl<'o, 'tcx> TraitObligationStack<'o, 'tcx> {
+ fn list(&'o self) -> TraitObligationStackList<'o, 'tcx> {
+ TraitObligationStackList::with(self)
+ }
+
+ fn cache(&self) -> &'o ProvisionalEvaluationCache<'tcx> {
+ self.previous.cache
+ }
+
+ fn iter(&'o self) -> TraitObligationStackList<'o, 'tcx> {
+ self.list()
+ }
+
+ /// Indicates that attempting to evaluate this stack entry
+ /// required accessing something from the stack at depth `reached_depth`.
+ fn update_reached_depth(&self, reached_depth: usize) {
+ assert!(
+ self.depth > reached_depth,
+ "invoked `update_reached_depth` with something under this stack: \
+ self.depth={} reached_depth={}",
+ self.depth,
+ reached_depth,
+ );
+ debug!("update_reached_depth(reached_depth={})", reached_depth);
+ let mut p = self;
+ while reached_depth < p.depth {
+ debug!("update_reached_depth: marking {:?} as cycle participant", p.fresh_trait_ref);
+ p.reached_depth.set(p.reached_depth.get().min(reached_depth));
+ p = p.previous.head.unwrap();
+ }
+ }
+}
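The `update_reached_depth` walk can be modeled with a plain parent-linked stack. The sketch below is a standalone approximation (the `Frame` type and its field names are invented, not compiler types) showing how every frame from the top of the stack down to the frame at `reached_depth` gets its `reached_depth` cell lowered.

```rust
use std::cell::Cell;

/// A stack frame that records the shallowest depth it was found to
/// depend on, mirroring `TraitObligationStack::reached_depth`.
struct Frame<'a> {
    depth: usize,
    reached_depth: Cell<usize>,
    previous: Option<&'a Frame<'a>>,
}

impl<'a> Frame<'a> {
    fn new(previous: Option<&'a Frame<'a>>) -> Frame<'a> {
        let depth = previous.map_or(0, |p| p.depth + 1);
        Frame { depth, reached_depth: Cell::new(depth), previous }
    }

    /// Mark every frame above depth `reached_depth` as a cycle
    /// participant by lowering its recorded reached depth.
    fn update_reached_depth(&self, reached_depth: usize) {
        let mut p = self;
        while reached_depth < p.depth {
            p.reached_depth.set(p.reached_depth.get().min(reached_depth));
            p = p.previous.unwrap();
        }
    }
}
```

Interior mutability via `Cell` is what lets the walk update frames through shared references, just as the real stack does.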
+
+/// The "provisional evaluation cache" is used to store intermediate cache results
+/// when solving auto traits. Auto traits are unusual in that they can support
+/// cycles. So, for example, a "proof tree" like this would be ok:
+///
+/// - `Foo<T>: Send` :-
+/// - `Bar<T>: Send` :-
+/// - `Foo<T>: Send` -- cycle, but ok
+/// - `Baz<T>: Send`
+///
+/// Here, to prove `Foo<T>: Send`, we have to prove `Bar<T>: Send` and
+/// `Baz<T>: Send`. Proving `Bar<T>: Send` in turn required `Foo<T>: Send`.
+/// For non-auto traits, this cycle would be an error, but for auto traits (because
+/// they are coinductive) it is considered ok.
+///
+/// However, there is a complication: at the point where we have
+/// "proven" `Bar<T>: Send`, we have in fact only proven it
+/// *provisionally*. In particular, we proved that `Bar<T>: Send`
+/// *under the assumption* that `Foo<T>: Send`. But what if we later
+/// find out this assumption is wrong? Specifically, we could
+/// encounter some kind of error proving `Baz<T>: Send`. In that case,
+/// `Bar<T>: Send` didn't turn out to be true.
+///
+/// In Issue #60010, we found a bug in rustc where it would cache
+/// these intermediate results. This was fixed in #60444 by disabling
+/// *all* caching for things involved in a cycle -- in our example,
+/// that would mean we don't cache that `Bar<T>: Send`. But this led
+/// to large slowdowns.
+///
+/// Specifically, imagine this scenario, where proving `Baz<T>: Send`
+/// first requires proving `Bar<T>: Send` (which is true):
+///
+/// - `Foo<T>: Send` :-
+/// - `Bar<T>: Send` :-
+/// - `Foo<T>: Send` -- cycle, but ok
+/// - `Baz<T>: Send`
+/// - `Bar<T>: Send` -- would be nice for this to be a cache hit!
+/// - `*const T: Send` -- but what if we later encounter an error?
+///
+/// The *provisional evaluation cache* resolves this issue. It stores
+/// cache results that we've proven but which were involved in a cycle
+/// in some way. We track the minimal stack depth (i.e., the
+/// farthest from the top of the stack) that we are dependent on.
+/// The idea is that the cache results within are all valid -- so long as
+/// none of the nodes in between the current node and the node at that minimum
+/// depth result in an error (in which case the cached results are just thrown away).
+///
+/// During evaluation, we consult this provisional cache and rely on
+/// it. Accessing a cached value is considered equivalent to accessing
+/// a result at `reached_depth`, so it marks the *current* solution as
+/// provisional as well. If an error is encountered, we toss out any
+/// provisional results added from the subtree that encountered the
+/// error. When we pop the node at `reached_depth` from the stack, we
+/// can commit all the things that remain in the provisional cache.
+struct ProvisionalEvaluationCache<'tcx> {
+ /// next "depth first number" to issue -- just a counter
+ dfn: Cell<usize>,
+
+ /// Stores the "coldest" depth (bottom of stack) reached by any of
+ /// the evaluation entries. The idea here is that all things in the provisional
+ /// cache are always dependent on *something* that is colder in the stack:
+ /// therefore, if we add a new entry that is dependent on something *colder still*,
+ /// we have to modify the depth for all entries at once.
+ ///
+ /// Example:
+ ///
+ /// Imagine we have a stack `A B C D E` (with `E` being the top of
+ /// the stack). We cache something with depth 2, which means that
+ /// it was dependent on C. Then we pop E but go on and process a
+ /// new node F: A B C D F. Now F adds something to the cache with
+ /// depth 1, meaning it is dependent on B. Our original cache
+ /// entry is also dependent on B, because there is a path from E
+ /// to C and then from C to F and from F to B.
+ reached_depth: Cell<usize>,
+
+ /// Map from cache key to the provisionally evaluated thing.
+ /// The cache entries contain the result but also the DFN in which they
+ /// were added. The DFN is used to clear out values on failure.
+ ///
+ /// Imagine we have a stack like:
+ ///
+ /// - `A B C` and we add a cache for the result of C (DFN 2)
+ /// - Then we have a stack `A B D` where `D` has DFN 3
+ /// - We try to solve D by evaluating E: `A B D E` (DFN 4)
+ /// - `E` generates various cache entries which have cyclic dependencies on `B`
+ /// - `A B D E F` and so forth
+ /// - the DFN of `F` for example would be 5
+ /// - then we determine that `E` is in error -- we will then clear
+ /// all cache values whose DFN is >= 4 -- in this case, that
+ /// means the cached value for `F`.
+ map: RefCell<FxHashMap<ty::PolyTraitRef<'tcx>, ProvisionalEvaluation>>,
+}
+
+/// A cache value for the provisional cache: contains the depth-first
+/// number (DFN) and result.
+#[derive(Copy, Clone, Debug)]
+struct ProvisionalEvaluation {
+ from_dfn: usize,
+ result: EvaluationResult,
+}
+
+impl<'tcx> Default for ProvisionalEvaluationCache<'tcx> {
+ fn default() -> Self {
+ Self {
+ dfn: Cell::new(0),
+ reached_depth: Cell::new(std::usize::MAX),
+ map: Default::default(),
+ }
+ }
+}
+
+impl<'tcx> ProvisionalEvaluationCache<'tcx> {
+ /// Get the next DFN in sequence (basically a counter).
+ fn next_dfn(&self) -> usize {
+ let result = self.dfn.get();
+ self.dfn.set(result + 1);
+ result
+ }
+
+ /// Check the provisional cache for any result for
+ /// `fresh_trait_ref`. If there is a hit, then you must consider
+ /// it an access to the stack slots at depth
+ /// `self.current_reached_depth()` and above.
+ fn get_provisional(&self, fresh_trait_ref: ty::PolyTraitRef<'tcx>) -> Option<EvaluationResult> {
+ debug!(
+ "get_provisional(fresh_trait_ref={:?}) = {:#?} with reached-depth {}",
+ fresh_trait_ref,
+ self.map.borrow().get(&fresh_trait_ref),
+ self.reached_depth.get(),
+ );
+ Some(self.map.borrow().get(&fresh_trait_ref)?.result)
+ }
+
+ /// Current value of the `reached_depth` counter -- all the
+ /// provisional cache entries are dependent on the item at this
+ /// depth.
+ fn current_reached_depth(&self) -> usize {
+ self.reached_depth.get()
+ }
+
+ /// Insert a provisional result into the cache. The result came
+ /// from the node with the given DFN. It accessed a minimum depth
+ /// of `reached_depth` to compute. It evaluated `fresh_trait_ref`
+ /// and resulted in `result`.
+ fn insert_provisional(
+ &self,
+ from_dfn: usize,
+ reached_depth: usize,
+ fresh_trait_ref: ty::PolyTraitRef<'tcx>,
+ result: EvaluationResult,
+ ) {
+ debug!(
+ "insert_provisional(from_dfn={}, reached_depth={}, fresh_trait_ref={:?}, result={:?})",
+ from_dfn, reached_depth, fresh_trait_ref, result,
+ );
+ let r_d = self.reached_depth.get();
+ self.reached_depth.set(r_d.min(reached_depth));
+
+ debug!("insert_provisional: reached_depth={:?}", self.reached_depth.get());
+
+ self.map.borrow_mut().insert(fresh_trait_ref, ProvisionalEvaluation { from_dfn, result });
+ }
+
+ /// Invoked when the node with dfn `dfn` does not get a successful
+ /// result. This will clear out any provisional cache entries
+ /// that were added since `dfn` was created. This is because the
+ /// provisional entries are things which must assume that the
+ /// things on the stack at the time of their creation succeeded --
+ /// since the failing node is presently at the top of the stack,
+ /// these provisional entries must either depend on it or some
+ /// ancestor of it.
+ fn on_failure(&self, dfn: usize) {
+ debug!("on_failure(dfn={:?})", dfn,);
+ self.map.borrow_mut().retain(|key, eval| {
+ if eval.from_dfn >= dfn {
+ debug!("on_failure: removing {:?}", key);
+ false
+ } else {
+ true
+ }
+ });
+ }
+
+ /// Invoked when the node at depth `depth` completed without
+ /// depending on anything higher in the stack (if that completion
+ /// was a failure, then `on_failure` should have been invoked
+ /// already). The callback `op` will be invoked for each
+ /// provisional entry that we can now confirm.
+ fn on_completion(
+ &self,
+ depth: usize,
+ mut op: impl FnMut(ty::PolyTraitRef<'tcx>, EvaluationResult),
+ ) {
+ debug!("on_completion(depth={}, reached_depth={})", depth, self.reached_depth.get(),);
+
+ if self.reached_depth.get() < depth {
+ debug!("on_completion: did not yet reach depth to complete");
+ return;
+ }
+
+ for (fresh_trait_ref, eval) in self.map.borrow_mut().drain() {
+ debug!("on_completion: fresh_trait_ref={:?} eval={:?}", fresh_trait_ref, eval,);
+
+ op(fresh_trait_ref, eval.result);
+ }
+
+ self.reached_depth.set(std::usize::MAX);
+ }
+}
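The DFN-based failure handling can be illustrated with a trimmed-down standalone model, using `String` keys and a plain `HashMap` in place of the compiler's types; this is a sketch of the mechanism only, with invented names, not the real cache.

```rust
use std::cell::{Cell, RefCell};
use std::collections::HashMap;

/// Minimal model of a provisional cache keyed by depth-first number.
struct MiniCache {
    dfn: Cell<usize>,
    map: RefCell<HashMap<String, usize>>, // key -> from_dfn
}

impl MiniCache {
    fn new() -> Self {
        MiniCache { dfn: Cell::new(0), map: RefCell::new(HashMap::new()) }
    }

    /// Issue the next DFN in sequence (a simple counter).
    fn next_dfn(&self) -> usize {
        let result = self.dfn.get();
        self.dfn.set(result + 1);
        result
    }

    fn insert_provisional(&self, from_dfn: usize, key: &str) {
        self.map.borrow_mut().insert(key.to_string(), from_dfn);
    }

    /// Drop every entry recorded at or after `dfn`: those results
    /// assumed the now-failed node (or one of its descendants) held.
    fn on_failure(&self, dfn: usize) {
        self.map.borrow_mut().retain(|_, from_dfn| *from_dfn < dfn);
    }

    fn len(&self) -> usize {
        self.map.borrow().len()
    }
}
```

Entries added before the failing node's DFN survive, since they did not depend on it.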
+
+#[derive(Copy, Clone)]
+struct TraitObligationStackList<'o, 'tcx> {
+ cache: &'o ProvisionalEvaluationCache<'tcx>,
+ head: Option<&'o TraitObligationStack<'o, 'tcx>>,
+}
+
+impl<'o, 'tcx> TraitObligationStackList<'o, 'tcx> {
+ fn empty(cache: &'o ProvisionalEvaluationCache<'tcx>) -> TraitObligationStackList<'o, 'tcx> {
+ TraitObligationStackList { cache, head: None }
+ }
+
+ fn with(r: &'o TraitObligationStack<'o, 'tcx>) -> TraitObligationStackList<'o, 'tcx> {
+ TraitObligationStackList { cache: r.cache(), head: Some(r) }
+ }
+
+ fn head(&self) -> Option<&'o TraitObligationStack<'o, 'tcx>> {
+ self.head
+ }
+
+ fn depth(&self) -> usize {
+ if let Some(head) = self.head { head.depth } else { 0 }
+ }
+}
+
+impl<'o, 'tcx> Iterator for TraitObligationStackList<'o, 'tcx> {
+ type Item = &'o TraitObligationStack<'o, 'tcx>;
+
+ fn next(&mut self) -> Option<&'o TraitObligationStack<'o, 'tcx>> {
+ match self.head {
+ Some(o) => {
+ *self = o.previous;
+ Some(o)
+ }
+ None => None,
+ }
+ }
+}
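The `Iterator` implementation above follows a common pattern for walking an intrusive parent-linked list through a small `Copy` cursor. A standalone version of the same shape (with invented names) looks like this:

```rust
/// A node in a parent-linked stack.
struct Node<'a> {
    value: u32,
    previous: NodeList<'a>,
}

/// A cheap, copyable cursor over the stack, mirroring
/// `TraitObligationStackList`.
#[derive(Copy, Clone)]
struct NodeList<'a> {
    head: Option<&'a Node<'a>>,
}

impl<'a> Iterator for NodeList<'a> {
    type Item = &'a Node<'a>;

    fn next(&mut self) -> Option<&'a Node<'a>> {
        match self.head {
            Some(node) => {
                // Advance the cursor to this node's parent list.
                *self = node.previous;
                Some(node)
            }
            None => None,
        }
    }
}
```

Because the cursor is `Copy`, it can be handed around freely without consuming the underlying stack, which is allocated on the call stack rather than the heap.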
+
+impl<'o, 'tcx> fmt::Debug for TraitObligationStack<'o, 'tcx> {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ write!(f, "TraitObligationStack({:?})", self.obligation)
+ }
+}
--- /dev/null
+//! Logic and data structures related to impl specialization, explained in
+//! greater detail below.
+//!
+//! At the moment, this implementation supports only the simple "chain" rule:
+//! If any two impls overlap, one must be a strict subset of the other.
+//!
+//! See the [rustc dev guide] for a bit more detail on how specialization
+//! fits together with the rest of the trait machinery.
+//!
+//! [rustc dev guide]: https://rustc-dev-guide.rust-lang.org/traits/specialization.html
+
+pub mod specialization_graph;
+use specialization_graph::GraphExt;
+
+use crate::infer::{InferCtxt, InferOk, TyCtxtInferExt};
+use crate::traits::select::IntercrateAmbiguityCause;
+use crate::traits::{self, coherence, FutureCompatOverlapErrorKind, ObligationCause, TraitEngine};
+use rustc::lint::LintDiagnosticBuilder;
+use rustc::ty::subst::{InternalSubsts, Subst, SubstsRef};
+use rustc::ty::{self, TyCtxt, TypeFoldable};
+use rustc_data_structures::fx::FxHashSet;
+use rustc_errors::struct_span_err;
+use rustc_hir::def_id::DefId;
+use rustc_session::lint::builtin::COHERENCE_LEAK_CHECK;
+use rustc_session::lint::builtin::ORDER_DEPENDENT_TRAIT_OBJECTS;
+use rustc_span::DUMMY_SP;
+
+use super::util::impl_trait_ref_and_oblig;
+use super::{FulfillmentContext, SelectionContext};
+
+/// Information pertinent to an overlapping impl error.
+#[derive(Debug)]
+pub struct OverlapError {
+ pub with_impl: DefId,
+ pub trait_desc: String,
+ pub self_desc: Option<String>,
+ pub intercrate_ambiguity_causes: Vec<IntercrateAmbiguityCause>,
+ pub involves_placeholder: bool,
+}
+
+/// Given a subst for the requested impl, translate it to a subst
+/// appropriate for the actual item definition (whether it be in that impl,
+/// a parent impl, or the trait).
+///
+/// When we have selected one impl, but are actually using item definitions from
+/// a parent impl providing a default, we need a way to translate between the
+/// type parameters of the two impls. Here the `source_impl` is the one we've
+/// selected, and `source_substs` is a substitution of its generics.
+/// And `target_node` is the impl/trait we're actually going to get the
+/// definition from. The resulting substitution will map from `target_node`'s
+/// generics to `source_impl`'s generics as instantiated by `source_substs`.
+///
+/// For example, consider the following scenario:
+///
+/// ```rust
+/// trait Foo { ... }
+/// impl<T, U> Foo for (T, U) { ... } // target impl
+/// impl<V> Foo for (V, V) { ... } // source impl
+/// ```
+///
+/// Suppose we have selected "source impl" with `V` instantiated with `u32`.
+/// This function will produce a substitution with `T` and `U` both mapping to `u32`.
+///
+/// Where-clauses add some trickiness here, because they can be used to "define"
+/// an argument indirectly:
+///
+/// ```rust
+/// impl<'a, I, T: 'a> Iterator for Cloned<I>
+/// where I: Iterator<Item = &'a T>, T: Clone
+/// ```
+///
+/// In a case like this, the substitution for `T` is determined indirectly,
+/// through associated type projection. We deal with such cases by using
+/// *fulfillment* to relate the two impls, requiring that all projections are
+/// resolved.
+pub fn translate_substs<'a, 'tcx>(
+ infcx: &InferCtxt<'a, 'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ source_impl: DefId,
+ source_substs: SubstsRef<'tcx>,
+ target_node: specialization_graph::Node,
+) -> SubstsRef<'tcx> {
+ debug!(
+ "translate_substs({:?}, {:?}, {:?}, {:?})",
+ param_env, source_impl, source_substs, target_node
+ );
+ let source_trait_ref =
+ infcx.tcx.impl_trait_ref(source_impl).unwrap().subst(infcx.tcx, &source_substs);
+
+ // translate the Self and Param parts of the substitution, since those
+ // vary across impls
+ let target_substs = match target_node {
+ specialization_graph::Node::Impl(target_impl) => {
+ // no need to translate if we're targeting the impl we started with
+ if source_impl == target_impl {
+ return source_substs;
+ }
+
+ fulfill_implication(infcx, param_env, source_trait_ref, target_impl).unwrap_or_else(
+ |_| {
+ bug!(
+ "When translating substitutions for specialization, the expected \
+ specialization failed to hold"
+ )
+ },
+ )
+ }
+ specialization_graph::Node::Trait(..) => source_trait_ref.substs,
+ };
+
+ // directly inherit the method generics, since those do not vary across impls
+ source_substs.rebase_onto(infcx.tcx, source_impl, target_substs)
+}
+
+/// Given a selected impl described by `impl_data`, returns the
+/// definition and substitutions for the method with the name `name`,
+/// the kind `kind`, and trait method substitutions `substs`, in
+/// that impl, a less specialized impl, or the trait default,
+/// whichever applies.
+pub fn find_associated_item<'tcx>(
+ tcx: TyCtxt<'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ item: &ty::AssocItem,
+ substs: SubstsRef<'tcx>,
+ impl_data: &super::VtableImplData<'tcx, ()>,
+) -> (DefId, SubstsRef<'tcx>) {
+ debug!("find_associated_item({:?}, {:?}, {:?}, {:?})", param_env, item, substs, impl_data);
+ assert!(!substs.needs_infer());
+
+ let trait_def_id = tcx.trait_id_of_impl(impl_data.impl_def_id).unwrap();
+ let trait_def = tcx.trait_def(trait_def_id);
+
+ if let Ok(ancestors) = trait_def.ancestors(tcx, impl_data.impl_def_id) {
+ match ancestors.leaf_def(tcx, item.ident, item.kind) {
+ Some(node_item) => {
+ let substs = tcx.infer_ctxt().enter(|infcx| {
+ let param_env = param_env.with_reveal_all();
+ let substs = substs.rebase_onto(tcx, trait_def_id, impl_data.substs);
+ let substs = translate_substs(
+ &infcx,
+ param_env,
+ impl_data.impl_def_id,
+ substs,
+ node_item.node,
+ );
+ infcx.tcx.erase_regions(&substs)
+ });
+ (node_item.item.def_id, substs)
+ }
+ None => bug!("{:?} not found in {:?}", item, impl_data.impl_def_id),
+ }
+ } else {
+ (item.def_id, substs)
+ }
+}
+
+/// Is `impl1` a specialization of `impl2`?
+///
+/// Specialization is determined by the sets of types to which the impls apply;
+/// `impl1` specializes `impl2` if it applies to a subset of the types `impl2` applies
+/// to.
+pub(super) fn specializes(tcx: TyCtxt<'_>, (impl1_def_id, impl2_def_id): (DefId, DefId)) -> bool {
+ debug!("specializes({:?}, {:?})", impl1_def_id, impl2_def_id);
+
+ // The feature gate should prevent introducing new specializations, but not
+ // taking advantage of upstream ones.
+ let features = tcx.features();
+ let specialization_enabled = features.specialization || features.min_specialization;
+ if !specialization_enabled && (impl1_def_id.is_local() || impl2_def_id.is_local()) {
+ return false;
+ }
+
+ // We determine whether there's a subset relationship by:
+ //
+ // - skolemizing impl1,
+ // - assuming the where clauses for impl1,
+ // - instantiating impl2 with fresh inference variables,
+ // - unifying,
+ // - attempting to prove the where clauses for impl2
+ //
+ // The last three steps are encapsulated in `fulfill_implication`.
+ //
+ // See RFC 1210 for more details and justification.
+
+ // Currently we do not allow e.g., a negative impl to specialize a positive one
+ if tcx.impl_polarity(impl1_def_id) != tcx.impl_polarity(impl2_def_id) {
+ return false;
+ }
+
+ // create a parameter environment corresponding to a (placeholder) instantiation of impl1
+ let penv = tcx.param_env(impl1_def_id);
+ let impl1_trait_ref = tcx.impl_trait_ref(impl1_def_id).unwrap();
+
+ // Create an infcx, taking the predicates of impl1 as assumptions:
+ tcx.infer_ctxt().enter(|infcx| {
+ // Normalize the trait reference. The WF rules ought to ensure
+ // that this always succeeds.
+ let impl1_trait_ref = match traits::fully_normalize(
+ &infcx,
+ FulfillmentContext::new(),
+ ObligationCause::dummy(),
+ penv,
+ &impl1_trait_ref,
+ ) {
+ Ok(impl1_trait_ref) => impl1_trait_ref,
+ Err(err) => {
+ bug!("failed to fully normalize {:?}: {:?}", impl1_trait_ref, err);
+ }
+ };
+
+ // Attempt to prove that impl2 applies, given all of the above.
+ fulfill_implication(&infcx, penv, impl1_trait_ref, impl2_def_id).is_ok()
+ })
+}
+
+/// Attempt to fulfill all obligations of `target_impl` after unification with
+/// `source_trait_ref`. If successful, returns a substitution for *all* the
+/// generics of `target_impl`, including both those needed to unify with
+/// `source_trait_ref` and those whose identity is determined via a where
+/// clause in the impl.
+fn fulfill_implication<'a, 'tcx>(
+ infcx: &InferCtxt<'a, 'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ source_trait_ref: ty::TraitRef<'tcx>,
+ target_impl: DefId,
+) -> Result<SubstsRef<'tcx>, ()> {
+ debug!(
+ "fulfill_implication({:?}, trait_ref={:?} |- {:?} applies)",
+ param_env, source_trait_ref, target_impl
+ );
+
+ let selcx = &mut SelectionContext::new(&infcx);
+ let target_substs = infcx.fresh_substs_for_item(DUMMY_SP, target_impl);
+ let (target_trait_ref, mut obligations) =
+ impl_trait_ref_and_oblig(selcx, param_env, target_impl, target_substs);
+ debug!(
+ "fulfill_implication: target_trait_ref={:?}, obligations={:?}",
+ target_trait_ref, obligations
+ );
+
+ // do the impls unify? If not, no specialization.
+ match infcx.at(&ObligationCause::dummy(), param_env).eq(source_trait_ref, target_trait_ref) {
+ Ok(InferOk { obligations: o, .. }) => {
+ obligations.extend(o);
+ }
+ Err(_) => {
+ debug!(
+ "fulfill_implication: {:?} does not unify with {:?}",
+ source_trait_ref, target_trait_ref
+ );
+ return Err(());
+ }
+ }
+
+ // attempt to prove all of the predicates for impl2 given those for impl1
+ // (which are packed up in penv)
+
+ infcx.save_and_restore_in_snapshot_flag(|infcx| {
+ // If we came from `translate_substs`, we already know that the
+ // predicates for our impl hold (after all, we know that a more
+ // specialized impl holds, so our impl must hold too), and
+ // we only want to process the projections to determine the
+ // types in our substs using RFC 447, so we can safely
+ // ignore region obligations, which allows us to avoid threading
+ // a node-id to assign them with.
+ //
+ // If we came from specialization graph construction, then
+ // we already make a mockery out of the region system, so
+ // why not ignore them a bit earlier?
+ let mut fulfill_cx = FulfillmentContext::new_ignoring_regions();
+ for oblig in obligations.into_iter() {
+ fulfill_cx.register_predicate_obligation(&infcx, oblig);
+ }
+ match fulfill_cx.select_all_or_error(infcx) {
+ Err(errors) => {
+ // no dice!
+ debug!(
+ "fulfill_implication: for impls on {:?} and {:?}, \
+ could not fulfill: {:?} given {:?}",
+ source_trait_ref, target_trait_ref, errors, param_env.caller_bounds
+ );
+ Err(())
+ }
+
+ Ok(()) => {
+ debug!(
+ "fulfill_implication: an impl for {:?} specializes {:?}",
+ source_trait_ref, target_trait_ref
+ );
+
+ // Now resolve the *substitution* we built for the target earlier, replacing
+ // the inference variables inside with whatever we got from fulfillment.
+ Ok(infcx.resolve_vars_if_possible(&target_substs))
+ }
+ }
+ })
+}
+
+// Query provider for `specialization_graph_of`.
+pub(super) fn specialization_graph_provider(
+ tcx: TyCtxt<'_>,
+ trait_id: DefId,
+) -> &specialization_graph::Graph {
+ let mut sg = specialization_graph::Graph::new();
+
+ let mut trait_impls = tcx.all_impls(trait_id);
+
+ // The coherence checking implementation seems to rely on impls being
+ // iterated over (roughly) in definition order, so we are sorting by
+ // negated `CrateNum` (so remote definitions are visited first) and then
+ // by a flattened version of the `DefIndex`.
+ trait_impls
+ .sort_unstable_by_key(|def_id| (-(def_id.krate.as_u32() as i64), def_id.index.index()));
+
+ for impl_def_id in trait_impls {
+ if impl_def_id.is_local() {
+ // This is where impl overlap checking happens:
+ let insert_result = sg.insert(tcx, impl_def_id);
+ // Report error if there was one.
+ let (overlap, used_to_be_allowed) = match insert_result {
+ Err(overlap) => (Some(overlap), None),
+ Ok(Some(overlap)) => (Some(overlap.error), Some(overlap.kind)),
+ Ok(None) => (None, None),
+ };
+
+ if let Some(overlap) = overlap {
+ let impl_span =
+ tcx.sess.source_map().def_span(tcx.span_of_impl(impl_def_id).unwrap());
+
+ // Work to be done after we've built the DiagnosticBuilder. We have to define it
+ // now because the struct_lint methods don't return back the DiagnosticBuilder
+ // that's passed in.
+ let decorate = |err: LintDiagnosticBuilder<'_>| {
+ let msg = format!(
+ "conflicting implementations of trait `{}`{}:{}",
+ overlap.trait_desc,
+ overlap
+ .self_desc
+ .clone()
+ .map_or(String::new(), |ty| { format!(" for type `{}`", ty) }),
+ match used_to_be_allowed {
+ Some(FutureCompatOverlapErrorKind::Issue33140) => " (E0119)",
+ _ => "",
+ }
+ );
+ let mut err = err.build(&msg);
+ match tcx.span_of_impl(overlap.with_impl) {
+ Ok(span) => {
+ err.span_label(
+ tcx.sess.source_map().def_span(span),
+ "first implementation here".to_string(),
+ );
+
+ err.span_label(
+ impl_span,
+ format!(
+ "conflicting implementation{}",
+ overlap
+ .self_desc
+ .map_or(String::new(), |ty| format!(" for `{}`", ty))
+ ),
+ );
+ }
+ Err(cname) => {
+ let msg = match to_pretty_impl_header(tcx, overlap.with_impl) {
+ Some(s) => format!(
+ "conflicting implementation in crate `{}`:\n- {}",
+ cname, s
+ ),
+ None => format!("conflicting implementation in crate `{}`", cname),
+ };
+ err.note(&msg);
+ }
+ }
+
+ for cause in &overlap.intercrate_ambiguity_causes {
+ cause.add_intercrate_ambiguity_hint(&mut err);
+ }
+
+ if overlap.involves_placeholder {
+ coherence::add_placeholder_note(&mut err);
+ }
+ err.emit()
+ };
+
+ match used_to_be_allowed {
+ None => {
+ sg.has_errored = true;
+ let err = struct_span_err!(tcx.sess, impl_span, E0119, "");
+ decorate(LintDiagnosticBuilder::new(err));
+ }
+ Some(kind) => {
+ let lint = match kind {
+ FutureCompatOverlapErrorKind::Issue33140 => {
+ ORDER_DEPENDENT_TRAIT_OBJECTS
+ }
+ FutureCompatOverlapErrorKind::LeakCheck => COHERENCE_LEAK_CHECK,
+ };
+ tcx.struct_span_lint_hir(
+ lint,
+ tcx.hir().as_local_hir_id(impl_def_id).unwrap(),
+ impl_span,
+ decorate,
+ )
+ }
+ };
+ }
+ } else {
+ let parent = tcx.impl_parent(impl_def_id).unwrap_or(trait_id);
+ sg.record_impl_from_cstore(tcx, parent, impl_def_id)
+ }
+ }
+
+ tcx.arena.alloc(sg)
+}
+
+/// Recovers the "impl X for Y" signature from `impl_def_id` and returns it as a
+/// string.
+fn to_pretty_impl_header(tcx: TyCtxt<'_>, impl_def_id: DefId) -> Option<String> {
+ use std::fmt::Write;
+
+ let trait_ref = tcx.impl_trait_ref(impl_def_id)?;
+ let mut w = "impl".to_owned();
+
+ let substs = InternalSubsts::identity_for_item(tcx, impl_def_id);
+
+ // FIXME: Currently only handles ?Sized.
+ // Needs to support ?Move and ?DynSized when they are implemented.
+ let mut types_without_default_bounds = FxHashSet::default();
+ let sized_trait = tcx.lang_items().sized_trait();
+
+ if !substs.is_noop() {
+ types_without_default_bounds.extend(substs.types());
+ w.push('<');
+ w.push_str(
+ &substs
+ .iter()
+ .map(|k| k.to_string())
+ .filter(|k| k != "'_")
+ .collect::<Vec<_>>()
+ .join(", "),
+ );
+ w.push('>');
+ }
+
+ write!(w, " {} for {}", trait_ref.print_only_trait_path(), tcx.type_of(impl_def_id)).unwrap();
+
+ // The predicates will contain default bounds like `T: Sized`. We need to
+ // remove these bounds, and add `T: ?Sized` to any untouched type parameters.
+ let predicates = tcx.predicates_of(impl_def_id).predicates;
+ let mut pretty_predicates =
+ Vec::with_capacity(predicates.len() + types_without_default_bounds.len());
+
+ for (p, _) in predicates {
+ if let Some(poly_trait_ref) = p.to_opt_poly_trait_ref() {
+ if Some(poly_trait_ref.def_id()) == sized_trait {
+ types_without_default_bounds.remove(poly_trait_ref.self_ty());
+ continue;
+ }
+ }
+ pretty_predicates.push(p.to_string());
+ }
+
+ pretty_predicates
+ .extend(types_without_default_bounds.iter().map(|ty| format!("{}: ?Sized", ty)));
+
+ if !pretty_predicates.is_empty() {
+ write!(w, "\n where {}", pretty_predicates.join(", ")).unwrap();
+ }
+
+ w.push(';');
+ Some(w)
+}
--- /dev/null
+use super::OverlapError;
+
+use crate::traits;
+use rustc::ty::fast_reject::{self, SimplifiedType};
+use rustc::ty::{self, TyCtxt, TypeFoldable};
+use rustc_hir::def_id::DefId;
+
+pub use rustc::traits::specialization_graph::*;
+
+#[derive(Copy, Clone, Debug)]
+pub enum FutureCompatOverlapErrorKind {
+ Issue33140,
+ LeakCheck,
+}
+
+#[derive(Debug)]
+pub struct FutureCompatOverlapError {
+ pub error: OverlapError,
+ pub kind: FutureCompatOverlapErrorKind,
+}
+
+/// The result of attempting to insert an impl into a group of children.
+enum Inserted {
+ /// The impl was inserted as a new child in this group of children.
+ BecameNewSibling(Option<FutureCompatOverlapError>),
+
+ /// The impl should replace existing impls [X1, ..], because the impl specializes X1, X2, etc.
+ ReplaceChildren(Vec<DefId>),
+
+ /// The impl is a specialization of an existing child.
+ ShouldRecurseOn(DefId),
+}
+
+trait ChildrenExt {
+ fn insert_blindly(&mut self, tcx: TyCtxt<'tcx>, impl_def_id: DefId);
+ fn remove_existing(&mut self, tcx: TyCtxt<'tcx>, impl_def_id: DefId);
+
+ fn insert(
+ &mut self,
+ tcx: TyCtxt<'tcx>,
+ impl_def_id: DefId,
+ simplified_self: Option<SimplifiedType>,
+ ) -> Result<Inserted, OverlapError>;
+}
+
+impl ChildrenExt for Children {
+ /// Insert an impl into this set of children without comparing to any existing impls.
+ fn insert_blindly(&mut self, tcx: TyCtxt<'tcx>, impl_def_id: DefId) {
+ let trait_ref = tcx.impl_trait_ref(impl_def_id).unwrap();
+ if let Some(st) = fast_reject::simplify_type(tcx, trait_ref.self_ty(), false) {
+ debug!("insert_blindly: impl_def_id={:?} st={:?}", impl_def_id, st);
+ self.nonblanket_impls.entry(st).or_default().push(impl_def_id)
+ } else {
+ debug!("insert_blindly: impl_def_id={:?} st=None", impl_def_id);
+ self.blanket_impls.push(impl_def_id)
+ }
+ }
+
+ /// Removes an impl from this set of children. Used when replacing
+ /// an impl with a parent. The impl must be present in the list of
+ /// children already.
+ fn remove_existing(&mut self, tcx: TyCtxt<'tcx>, impl_def_id: DefId) {
+ let trait_ref = tcx.impl_trait_ref(impl_def_id).unwrap();
+ let vec: &mut Vec<DefId>;
+ if let Some(st) = fast_reject::simplify_type(tcx, trait_ref.self_ty(), false) {
+ debug!("remove_existing: impl_def_id={:?} st={:?}", impl_def_id, st);
+ vec = self.nonblanket_impls.get_mut(&st).unwrap();
+ } else {
+ debug!("remove_existing: impl_def_id={:?} st=None", impl_def_id);
+ vec = &mut self.blanket_impls;
+ }
+
+ let index = vec.iter().position(|d| *d == impl_def_id).unwrap();
+ vec.remove(index);
+ }
+
+ /// Attempt to insert an impl into this set of children, while comparing for
+ /// specialization relationships.
+ fn insert(
+ &mut self,
+ tcx: TyCtxt<'tcx>,
+ impl_def_id: DefId,
+ simplified_self: Option<SimplifiedType>,
+ ) -> Result<Inserted, OverlapError> {
+ let mut last_lint = None;
+ let mut replace_children = Vec::new();
+
+ debug!("insert(impl_def_id={:?}, simplified_self={:?})", impl_def_id, simplified_self,);
+
+ let possible_siblings = match simplified_self {
+ Some(st) => PotentialSiblings::Filtered(filtered_children(self, st)),
+ None => PotentialSiblings::Unfiltered(iter_children(self)),
+ };
+
+ for possible_sibling in possible_siblings {
+ debug!(
+ "insert: impl_def_id={:?}, simplified_self={:?}, possible_sibling={:?}",
+ impl_def_id, simplified_self, possible_sibling,
+ );
+
+ let create_overlap_error = |overlap: traits::coherence::OverlapResult<'_>| {
+ let trait_ref = overlap.impl_header.trait_ref.unwrap();
+ let self_ty = trait_ref.self_ty();
+
+ OverlapError {
+ with_impl: possible_sibling,
+ trait_desc: trait_ref.print_only_trait_path().to_string(),
+ // Only report the `Self` type if it has at least
+ // some outer concrete shell; otherwise, it's
+ // not adding much information.
+ self_desc: if self_ty.has_concrete_skeleton() {
+ Some(self_ty.to_string())
+ } else {
+ None
+ },
+ intercrate_ambiguity_causes: overlap.intercrate_ambiguity_causes,
+ involves_placeholder: overlap.involves_placeholder,
+ }
+ };
+
+ let report_overlap_error = |overlap: traits::coherence::OverlapResult<'_>,
+ last_lint: &mut _| {
+ // Found overlap, but no specialization; error out or report future-compat warning.
+
+ // Do we *still* get overlap if we disable the future-incompatible modes?
+ let should_err = traits::overlapping_impls(
+ tcx,
+ possible_sibling,
+ impl_def_id,
+ traits::SkipLeakCheck::default(),
+ |_| true,
+ || false,
+ );
+
+ let error = create_overlap_error(overlap);
+
+ if should_err {
+ Err(error)
+ } else {
+ *last_lint = Some(FutureCompatOverlapError {
+ error,
+ kind: FutureCompatOverlapErrorKind::LeakCheck,
+ });
+
+ Ok((false, false))
+ }
+ };
+
+ let last_lint_mut = &mut last_lint;
+ let (le, ge) = traits::overlapping_impls(
+ tcx,
+ possible_sibling,
+ impl_def_id,
+ traits::SkipLeakCheck::Yes,
+ |overlap| {
+ if let Some(overlap_kind) =
+ tcx.impls_are_allowed_to_overlap(impl_def_id, possible_sibling)
+ {
+ match overlap_kind {
+ ty::ImplOverlapKind::Permitted { marker: _ } => {}
+ ty::ImplOverlapKind::Issue33140 => {
+ *last_lint_mut = Some(FutureCompatOverlapError {
+ error: create_overlap_error(overlap),
+ kind: FutureCompatOverlapErrorKind::Issue33140,
+ });
+ }
+ }
+
+ return Ok((false, false));
+ }
+
+ let le = tcx.specializes((impl_def_id, possible_sibling));
+ let ge = tcx.specializes((possible_sibling, impl_def_id));
+
+ if le == ge {
+ report_overlap_error(overlap, last_lint_mut)
+ } else {
+ Ok((le, ge))
+ }
+ },
+ || Ok((false, false)),
+ )?;
+
+ if le && !ge {
+ debug!(
+ "descending as child of TraitRef {:?}",
+ tcx.impl_trait_ref(possible_sibling).unwrap()
+ );
+
+ // The impl specializes `possible_sibling`.
+ return Ok(Inserted::ShouldRecurseOn(possible_sibling));
+ } else if ge && !le {
+ debug!(
+ "placing as parent of TraitRef {:?}",
+ tcx.impl_trait_ref(possible_sibling).unwrap()
+ );
+
+ replace_children.push(possible_sibling);
+ } else {
+ // Either there's no overlap, or the overlap was already reported by
+ // `overlap_error`.
+ }
+ }
+
+ if !replace_children.is_empty() {
+ return Ok(Inserted::ReplaceChildren(replace_children));
+ }
+
+ // No overlap with any potential siblings, so add as a new sibling.
+ debug!("placing as new sibling");
+ self.insert_blindly(tcx, impl_def_id);
+ Ok(Inserted::BecameNewSibling(last_lint))
+ }
+}
+
+fn iter_children(children: &mut Children) -> impl Iterator<Item = DefId> + '_ {
+ let nonblanket = children.nonblanket_impls.iter_mut().flat_map(|(_, v)| v.iter());
+ children.blanket_impls.iter().chain(nonblanket).cloned()
+}
+
+fn filtered_children(
+ children: &mut Children,
+ st: SimplifiedType,
+) -> impl Iterator<Item = DefId> + '_ {
+ let nonblanket = children.nonblanket_impls.entry(st).or_default().iter();
+ children.blanket_impls.iter().chain(nonblanket).cloned()
+}
+
+// A custom iterator used by Children::insert
+enum PotentialSiblings<I, J>
+where
+ I: Iterator<Item = DefId>,
+ J: Iterator<Item = DefId>,
+{
+ Unfiltered(I),
+ Filtered(J),
+}
+
+impl<I, J> Iterator for PotentialSiblings<I, J>
+where
+ I: Iterator<Item = DefId>,
+ J: Iterator<Item = DefId>,
+{
+ type Item = DefId;
+
+ fn next(&mut self) -> Option<Self::Item> {
+ match *self {
+ PotentialSiblings::Unfiltered(ref mut iter) => iter.next(),
+ PotentialSiblings::Filtered(ref mut iter) => iter.next(),
+ }
+ }
+}
+
+pub trait GraphExt {
+ /// Insert a local impl into the specialization graph. If an existing impl
+ /// conflicts with it (has overlap, but neither specializes the other),
+ /// information about the area of overlap is returned in the `Err`.
+ fn insert(
+ &mut self,
+ tcx: TyCtxt<'tcx>,
+ impl_def_id: DefId,
+ ) -> Result<Option<FutureCompatOverlapError>, OverlapError>;
+
+ /// Insert cached metadata mapping from a child impl back to its parent.
+ fn record_impl_from_cstore(&mut self, tcx: TyCtxt<'tcx>, parent: DefId, child: DefId);
+}
+
+impl GraphExt for Graph {
+ /// Insert a local impl into the specialization graph. If an existing impl
+ /// conflicts with it (has overlap, but neither specializes the other),
+ /// information about the area of overlap is returned in the `Err`.
+ fn insert(
+ &mut self,
+ tcx: TyCtxt<'tcx>,
+ impl_def_id: DefId,
+ ) -> Result<Option<FutureCompatOverlapError>, OverlapError> {
+ assert!(impl_def_id.is_local());
+
+ let trait_ref = tcx.impl_trait_ref(impl_def_id).unwrap();
+ let trait_def_id = trait_ref.def_id;
+
+ debug!(
+ "insert({:?}): inserting TraitRef {:?} into specialization graph",
+ impl_def_id, trait_ref
+ );
+
+ // If the reference itself contains an earlier error (e.g., due to a
+ // resolution failure), then we just insert the impl at the top level of
+ // the graph and claim that there's no overlap (in order to suppress
+ // bogus errors).
+ if trait_ref.references_error() {
+ debug!(
+ "insert: inserting dummy node for erroneous TraitRef {:?}, \
+ impl_def_id={:?}, trait_def_id={:?}",
+ trait_ref, impl_def_id, trait_def_id
+ );
+
+ self.parent.insert(impl_def_id, trait_def_id);
+ self.children.entry(trait_def_id).or_default().insert_blindly(tcx, impl_def_id);
+ return Ok(None);
+ }
+
+ let mut parent = trait_def_id;
+ let mut last_lint = None;
+ let simplified = fast_reject::simplify_type(tcx, trait_ref.self_ty(), false);
+
+ // Descend the specialization tree, where `parent` is the current parent node.
+ loop {
+ use self::Inserted::*;
+
+ let insert_result =
+ self.children.entry(parent).or_default().insert(tcx, impl_def_id, simplified)?;
+
+ match insert_result {
+ BecameNewSibling(opt_lint) => {
+ last_lint = opt_lint;
+ break;
+ }
+ ReplaceChildren(grand_children_to_be) => {
+ // We currently have
+ //
+ // P
+ // |
+ // G
+ //
+ // and we are inserting the impl N. We want to make it:
+ //
+ // P
+ // |
+ // N
+ // |
+ // G
+
+ // Adjust P's list of children: remove G and then add N.
+ {
+ let siblings = self.children.get_mut(&parent).unwrap();
+ for &grand_child_to_be in &grand_children_to_be {
+ siblings.remove_existing(tcx, grand_child_to_be);
+ }
+ siblings.insert_blindly(tcx, impl_def_id);
+ }
+
+ // Set G's parent to N and N's parent to P.
+ for &grand_child_to_be in &grand_children_to_be {
+ self.parent.insert(grand_child_to_be, impl_def_id);
+ }
+ self.parent.insert(impl_def_id, parent);
+
+ // Add G as N's child.
+ for &grand_child_to_be in &grand_children_to_be {
+ self.children
+ .entry(impl_def_id)
+ .or_default()
+ .insert_blindly(tcx, grand_child_to_be);
+ }
+ break;
+ }
+ ShouldRecurseOn(new_parent) => {
+ parent = new_parent;
+ }
+ }
+ }
+
+ self.parent.insert(impl_def_id, parent);
+ Ok(last_lint)
+ }
+
+ /// Insert cached metadata mapping from a child impl back to its parent.
+ fn record_impl_from_cstore(&mut self, tcx: TyCtxt<'tcx>, parent: DefId, child: DefId) {
+ if self.parent.insert(child, parent).is_some() {
+ bug!(
+ "When recording an impl from the crate store, information about its parent \
+ was already present."
+ );
+ }
+
+ self.children.entry(parent).or_default().insert_blindly(tcx, child);
+ }
+}
--- /dev/null
+use crate::infer::{InferCtxt, TyCtxtInferExt};
+use crate::traits::ObligationCause;
+use crate::traits::{self, ConstPatternStructural, TraitEngine};
+
+use rustc::ty::{self, AdtDef, Ty, TyCtxt, TypeFoldable, TypeVisitor};
+use rustc_data_structures::fx::FxHashSet;
+use rustc_hir as hir;
+use rustc_span::Span;
+
+#[derive(Debug)]
+pub enum NonStructuralMatchTy<'tcx> {
+ Adt(&'tcx AdtDef),
+ Param,
+}
+
+/// This function traverses the structure of `ty`, trying to find an
+/// instance of an ADT (i.e. struct or enum) that was declared without
+/// the `#[structural_match]` attribute, or a generic type parameter
+/// (which cannot be determined to be `structural_match`).
+///
+/// The "structure of a type" includes all components that would be
+/// considered when doing a pattern match on a constant of that
+/// type.
+///
+/// * This means this method descends into fields of structs/enums,
+/// and also descends into the inner type `T` of `&T` and `&mut T`.
+///
+/// * The traversal doesn't dereference raw pointers (`*const T`,
+/// `*mut T`), and it does not visit the type arguments of an
+/// instantiated generic like `PhantomData<T>`.
+///
+/// The reason we do this search is that Rust currently requires all ADTs
+/// reachable from a constant's type to be annotated with
+/// `#[structural_match]`, an attribute which essentially says that
+/// the implementation of `PartialEq::eq` behaves *equivalently* to a
+/// comparison against the unfolded structure.
+///
+/// For more background on why Rust has this requirement, and issues
+/// that arose when the requirement was not enforced completely, see
+/// Rust RFC 1445, rust-lang/rust#61188, and rust-lang/rust#62307.
+pub fn search_for_structural_match_violation<'tcx>(
+ id: hir::HirId,
+ span: Span,
+ tcx: TyCtxt<'tcx>,
+ ty: Ty<'tcx>,
+) -> Option<NonStructuralMatchTy<'tcx>> {
+ // FIXME: we should instead pass in an `infcx` from the outside.
+ tcx.infer_ctxt().enter(|infcx| {
+ let mut search = Search { id, span, infcx, found: None, seen: FxHashSet::default() };
+ ty.visit_with(&mut search);
+ search.found
+ })
+}
+
+/// This function returns `true` if and only if `adt_ty` itself has been marked as
+/// eligible for structural-match: namely, if it implements both
+/// `StructuralPartialEq` and `StructuralEq` (which are respectively injected by
+/// `#[derive(PartialEq)]` and `#[derive(Eq)]`).
+///
+/// Note that this does *not* recursively check if the substructure of `adt_ty`
+/// implements the traits.
+pub fn type_marked_structural(
+ id: hir::HirId,
+ span: Span,
+ infcx: &InferCtxt<'_, 'tcx>,
+ adt_ty: Ty<'tcx>,
+) -> bool {
+ let mut fulfillment_cx = traits::FulfillmentContext::new();
+ let cause = ObligationCause::new(span, id, ConstPatternStructural);
+ // require `#[derive(PartialEq)]`
+ let structural_peq_def_id = infcx.tcx.lang_items().structural_peq_trait().unwrap();
+ fulfillment_cx.register_bound(
+ infcx,
+ ty::ParamEnv::empty(),
+ adt_ty,
+ structural_peq_def_id,
+ cause,
+ );
+ // for now, require `#[derive(Eq)]`. (Doing so is a hack to work around
+ // the type `for<'a> fn(&'a ())` failing to implement `Eq` itself.)
+ let cause = ObligationCause::new(span, id, ConstPatternStructural);
+ let structural_teq_def_id = infcx.tcx.lang_items().structural_teq_trait().unwrap();
+ fulfillment_cx.register_bound(
+ infcx,
+ ty::ParamEnv::empty(),
+ adt_ty,
+ structural_teq_def_id,
+ cause,
+ );
+
+ // We deliberately skip *reporting* fulfillment errors (via
+ // `report_fulfillment_errors`), for two reasons:
+ //
+ // 1. The error messages would mention `std::marker::StructuralPartialEq`
+ // (a trait which is solely meant as an implementation detail
+ // for now), and
+ //
+ // 2. We are sometimes doing future-incompatibility lints for
+ // now, so we do not want unconditional errors here.
+ fulfillment_cx.select_all_or_error(infcx).is_ok()
+}
+
+/// This implements the traversal over the structure of a given type to try to
+/// find instances of ADTs (specifically structs or enums) that do not implement
+/// the structural-match traits (`StructuralPartialEq` and `StructuralEq`).
+struct Search<'a, 'tcx> {
+ id: hir::HirId,
+ span: Span,
+
+ infcx: InferCtxt<'a, 'tcx>,
+
+ /// Records the first ADT that does not implement a structural-match trait.
+ found: Option<NonStructuralMatchTy<'tcx>>,
+
+ /// Tracks ADTs previously encountered during search, so that
+ /// we will not recur on them again.
+ seen: FxHashSet<hir::def_id::DefId>,
+}
+
+impl Search<'a, 'tcx> {
+ fn tcx(&self) -> TyCtxt<'tcx> {
+ self.infcx.tcx
+ }
+
+ fn type_marked_structural(&self, adt_ty: Ty<'tcx>) -> bool {
+ type_marked_structural(self.id, self.span, &self.infcx, adt_ty)
+ }
+}
+
+impl<'a, 'tcx> TypeVisitor<'tcx> for Search<'a, 'tcx> {
+ fn visit_ty(&mut self, ty: Ty<'tcx>) -> bool {
+ debug!("Search visiting ty: {:?}", ty);
+
+ let (adt_def, substs) = match ty.kind {
+ ty::Adt(adt_def, substs) => (adt_def, substs),
+ ty::Param(_) => {
+ self.found = Some(NonStructuralMatchTy::Param);
+ return true; // Stop visiting.
+ }
+ ty::RawPtr(..) => {
+ // structural-match ignores substructure of
+ // `*const _`/`*mut _`, so skip `super_visit_with`.
+ //
+ // For example, if you have:
+ // ```
+ // struct NonStructural;
+ // #[derive(PartialEq, Eq)]
+ // struct T(*const NonStructural);
+ // const C: T = T(std::ptr::null());
+ // ```
+ //
+ // Even though `NonStructural` does not implement `PartialEq`,
+ // structural equality on `T` does not recur into the raw
+ // pointer. Therefore, one can still use `C` in a pattern.
+
+ // (But still tell caller to continue search.)
+ return false;
+ }
+ ty::FnDef(..) | ty::FnPtr(..) => {
+ // the types of the formal parameters and the return type in `fn(_) -> _`
+ // are also irrelevant, so we do not recur into them via `super_visit_with`
+ //
+ // (But still tell caller to continue search.)
+ return false;
+ }
+ ty::Array(_, n)
+ if { n.try_eval_usize(self.tcx(), ty::ParamEnv::reveal_all()) == Some(0) } =>
+ {
+ // rust-lang/rust#62336: ignore type of contents
+ // for empty array.
+ return false;
+ }
+ _ => {
+ ty.super_visit_with(self);
+ return false;
+ }
+ };
+
+ if !self.seen.insert(adt_def.did) {
+ debug!("Search already seen adt_def: {:?}", adt_def);
+ // let caller continue its search
+ return false;
+ }
+
+ if !self.type_marked_structural(ty) {
+ debug!("Search found ty: {:?}", ty);
+ self.found = Some(NonStructuralMatchTy::Adt(&adt_def));
+ return true; // Halt visiting!
+ }
+
+ // structural-match does not care about the
+ // instantiation of the generics in an ADT (it
+ // instead looks directly at its fields outside
+ // this match), so we skip super_visit_with.
+ //
+ // (Must not recur on substs for `PhantomData<T>` cf
+ // rust-lang/rust#55028 and rust-lang/rust#55837; but also
+ // want to skip substs when only uses of generic are
+ // behind unsafe pointers `*const T`/`*mut T`.)
+
+ // Even though we skip `super_visit_with`, we must recur on
+ // the fields of the ADT.
+ let tcx = self.tcx();
+ for field_ty in adt_def.all_fields().map(|field| field.ty(tcx, substs)) {
+ if field_ty.visit_with(self) {
+ // found an ADT without structural-match; halt visiting!
+ assert!(self.found.is_some());
+ return true;
+ }
+ }
+
+ // Even though we do not want to recur on substs, we do
+ // want our caller to continue its own search.
+ false
+ }
+}
--- /dev/null
+use rustc_errors::DiagnosticBuilder;
+use rustc_span::Span;
+use smallvec::smallvec;
+use smallvec::SmallVec;
+
+use rustc::ty::outlives::Component;
+use rustc::ty::subst::{GenericArg, Subst, SubstsRef};
+use rustc::ty::{self, ToPolyTraitRef, ToPredicate, Ty, TyCtxt, WithConstness};
+use rustc_data_structures::fx::FxHashSet;
+use rustc_hir as hir;
+use rustc_hir::def_id::DefId;
+
+use super::{Normalized, Obligation, ObligationCause, PredicateObligation, SelectionContext};
+
+fn anonymize_predicate<'tcx>(tcx: TyCtxt<'tcx>, pred: &ty::Predicate<'tcx>) -> ty::Predicate<'tcx> {
+ match *pred {
+ ty::Predicate::Trait(ref data, constness) => {
+ ty::Predicate::Trait(tcx.anonymize_late_bound_regions(data), constness)
+ }
+
+ ty::Predicate::RegionOutlives(ref data) => {
+ ty::Predicate::RegionOutlives(tcx.anonymize_late_bound_regions(data))
+ }
+
+ ty::Predicate::TypeOutlives(ref data) => {
+ ty::Predicate::TypeOutlives(tcx.anonymize_late_bound_regions(data))
+ }
+
+ ty::Predicate::Projection(ref data) => {
+ ty::Predicate::Projection(tcx.anonymize_late_bound_regions(data))
+ }
+
+ ty::Predicate::WellFormed(data) => ty::Predicate::WellFormed(data),
+
+ ty::Predicate::ObjectSafe(data) => ty::Predicate::ObjectSafe(data),
+
+ ty::Predicate::ClosureKind(closure_def_id, closure_substs, kind) => {
+ ty::Predicate::ClosureKind(closure_def_id, closure_substs, kind)
+ }
+
+ ty::Predicate::Subtype(ref data) => {
+ ty::Predicate::Subtype(tcx.anonymize_late_bound_regions(data))
+ }
+
+ ty::Predicate::ConstEvaluatable(def_id, substs) => {
+ ty::Predicate::ConstEvaluatable(def_id, substs)
+ }
+ }
+}
+
+struct PredicateSet<'tcx> {
+ tcx: TyCtxt<'tcx>,
+ set: FxHashSet<ty::Predicate<'tcx>>,
+}
+
+impl PredicateSet<'tcx> {
+ fn new(tcx: TyCtxt<'tcx>) -> Self {
+ Self { tcx, set: Default::default() }
+ }
+
+ fn insert(&mut self, pred: &ty::Predicate<'tcx>) -> bool {
+ // We have to be careful here because we want
+ //
+ // for<'a> Foo<&'a int>
+ //
+ // and
+ //
+ // for<'b> Foo<&'b int>
+ //
+ // to be considered equivalent. So normalize all late-bound
+ // regions before we throw things into the underlying set.
+ self.set.insert(anonymize_predicate(self.tcx, pred))
+ }
+}
+
+impl<T: AsRef<ty::Predicate<'tcx>>> Extend<T> for PredicateSet<'tcx> {
+ fn extend<I: IntoIterator<Item = T>>(&mut self, iter: I) {
+ for pred in iter {
+ self.insert(pred.as_ref());
+ }
+ }
+}
+
+///////////////////////////////////////////////////////////////////////////
+// `Elaboration` iterator
+///////////////////////////////////////////////////////////////////////////
+
+/// "Elaboration" is the process of identifying all the predicates that
+/// are implied by a source predicate. Currently, this basically means
+/// walking the "supertraits" and other similar assumptions. For example,
+/// if we know that `T: Ord`, the elaborator would deduce that `T: PartialOrd`
+/// holds as well. Similarly, if we have `trait Foo: 'static`, and we know that
+/// `T: Foo`, then we know that `T: 'static`.
+pub struct Elaborator<'tcx> {
+ stack: Vec<ty::Predicate<'tcx>>,
+ visited: PredicateSet<'tcx>,
+}
+
+pub fn elaborate_trait_ref<'tcx>(
+ tcx: TyCtxt<'tcx>,
+ trait_ref: ty::PolyTraitRef<'tcx>,
+) -> Elaborator<'tcx> {
+ elaborate_predicates(tcx, vec![trait_ref.without_const().to_predicate()])
+}
+
+pub fn elaborate_trait_refs<'tcx>(
+ tcx: TyCtxt<'tcx>,
+ trait_refs: impl Iterator<Item = ty::PolyTraitRef<'tcx>>,
+) -> Elaborator<'tcx> {
+ let predicates = trait_refs.map(|trait_ref| trait_ref.without_const().to_predicate()).collect();
+ elaborate_predicates(tcx, predicates)
+}
+
+pub fn elaborate_predicates<'tcx>(
+ tcx: TyCtxt<'tcx>,
+ mut predicates: Vec<ty::Predicate<'tcx>>,
+) -> Elaborator<'tcx> {
+ let mut visited = PredicateSet::new(tcx);
+ predicates.retain(|pred| visited.insert(pred));
+ Elaborator { stack: predicates, visited }
+}
+
+impl Elaborator<'tcx> {
+ pub fn filter_to_traits(self) -> FilterToTraits<Self> {
+ FilterToTraits::new(self)
+ }
+
+ fn elaborate(&mut self, predicate: &ty::Predicate<'tcx>) {
+ let tcx = self.visited.tcx;
+ match *predicate {
+ ty::Predicate::Trait(ref data, _) => {
+ // Get predicates declared on the trait.
+ let predicates = tcx.super_predicates_of(data.def_id());
+
+ let predicates = predicates
+ .predicates
+ .iter()
+ .map(|(pred, _)| pred.subst_supertrait(tcx, &data.to_poly_trait_ref()));
+ debug!("super_predicates: data={:?} predicates={:?}", data, predicates.clone());
+
+ // Only keep those bounds that we haven't already seen.
+ // This is necessary to prevent infinite recursion in some
+ // cases. One common case is when people define
+ // `trait Sized: Sized { }` rather than `trait Sized { }`.
+ let visited = &mut self.visited;
+ let predicates = predicates.filter(|pred| visited.insert(pred));
+
+ self.stack.extend(predicates);
+ }
+ ty::Predicate::WellFormed(..) => {
+ // Currently, we do not elaborate WF predicates,
+ // although we easily could.
+ }
+ ty::Predicate::ObjectSafe(..) => {
+ // Currently, we do not elaborate object-safe
+ // predicates.
+ }
+ ty::Predicate::Subtype(..) => {
+ // Currently, we do not "elaborate" predicates like `X <: Y`,
+ // though conceivably we might.
+ }
+ ty::Predicate::Projection(..) => {
+ // Nothing to elaborate in a projection predicate.
+ }
+ ty::Predicate::ClosureKind(..) => {
+ // Nothing to elaborate when waiting for a closure's kind to be inferred.
+ }
+ ty::Predicate::ConstEvaluatable(..) => {
+ // Currently, we do not elaborate const-evaluatable
+ // predicates.
+ }
+ ty::Predicate::RegionOutlives(..) => {
+ // Nothing to elaborate from `'a: 'b`.
+ }
+ ty::Predicate::TypeOutlives(ref data) => {
+ // We know that `T: 'a` for some type `T`. We can
+ // often elaborate this. For example, if we know that
+ // `[U]: 'a`, that implies that `U: 'a`. Similarly, if
+ // we know `&'a U: 'b`, then we know that `'a: 'b` and
+ // `U: 'b`.
+ //
+ // We can basically ignore bound regions here. So for
+ // example `for<'c> Foo<'a,'c>: 'b` can be elaborated to
+ // `'a: 'b`.
+
+ // Ignore `for<'a> T: 'a` -- we might in the future
+ // consider this as evidence that `T: 'static`, but
+ // I'm a bit wary of such constructions and so for now
+ // I want to be conservative. --nmatsakis
+ let ty_max = data.skip_binder().0;
+ let r_min = data.skip_binder().1;
+ if r_min.is_late_bound() {
+ return;
+ }
+
+ let visited = &mut self.visited;
+ let mut components = smallvec![];
+ tcx.push_outlives_components(ty_max, &mut components);
+ self.stack.extend(
+ components
+ .into_iter()
+ .filter_map(|component| match component {
+ Component::Region(r) => {
+ if r.is_late_bound() {
+ None
+ } else {
+ Some(ty::Predicate::RegionOutlives(ty::Binder::dummy(
+ ty::OutlivesPredicate(r, r_min),
+ )))
+ }
+ }
+
+ Component::Param(p) => {
+ let ty = tcx.mk_ty_param(p.index, p.name);
+ Some(ty::Predicate::TypeOutlives(ty::Binder::dummy(
+ ty::OutlivesPredicate(ty, r_min),
+ )))
+ }
+
+ Component::UnresolvedInferenceVariable(_) => None,
+
+ Component::Projection(_) | Component::EscapingProjection(_) => {
+ // We can probably do more here. This
+ // corresponds to a case like `<T as
+ // Foo<'a>>::U: 'b`.
+ None
+ }
+ })
+ .filter(|p| visited.insert(p)),
+ );
+ }
+ }
+ }
+}
+
+impl Iterator for Elaborator<'tcx> {
+ type Item = ty::Predicate<'tcx>;
+
+ fn size_hint(&self) -> (usize, Option<usize>) {
+ (self.stack.len(), None)
+ }
+
+ fn next(&mut self) -> Option<ty::Predicate<'tcx>> {
+ // Extract next item from top-most stack frame, if any.
+ if let Some(pred) = self.stack.pop() {
+ self.elaborate(&pred);
+ Some(pred)
+ } else {
+ None
+ }
+ }
+}
+
+///////////////////////////////////////////////////////////////////////////
+// Supertrait iterator
+///////////////////////////////////////////////////////////////////////////
+
+pub type Supertraits<'tcx> = FilterToTraits<Elaborator<'tcx>>;
+
+pub fn supertraits<'tcx>(
+ tcx: TyCtxt<'tcx>,
+ trait_ref: ty::PolyTraitRef<'tcx>,
+) -> Supertraits<'tcx> {
+ elaborate_trait_ref(tcx, trait_ref).filter_to_traits()
+}
+
+pub fn transitive_bounds<'tcx>(
+ tcx: TyCtxt<'tcx>,
+ bounds: impl Iterator<Item = ty::PolyTraitRef<'tcx>>,
+) -> Supertraits<'tcx> {
+ elaborate_trait_refs(tcx, bounds).filter_to_traits()
+}
+
+///////////////////////////////////////////////////////////////////////////
+// `TraitAliasExpander` iterator
+///////////////////////////////////////////////////////////////////////////
+
+/// "Trait alias expansion" is the process of expanding a sequence of trait
+/// references into another sequence by transitively following all trait
+/// aliases. For example, if you have bounds like `Foo + Send`, a trait alias
+/// `trait Foo = Bar + Sync;`, and another trait alias
+/// `trait Bar = Read + Write;`, then the bounds would expand to
+/// `Read + Write + Sync + Send`.
+/// Expansion is done via a DFS (depth-first search), and the `visited` field
+/// is used to avoid cycles.
+pub struct TraitAliasExpander<'tcx> {
+ tcx: TyCtxt<'tcx>,
+ stack: Vec<TraitAliasExpansionInfo<'tcx>>,
+}
+
+/// Stores information about the expansion of a trait via a path of zero or more trait aliases.
+#[derive(Debug, Clone)]
+pub struct TraitAliasExpansionInfo<'tcx> {
+ pub path: SmallVec<[(ty::PolyTraitRef<'tcx>, Span); 4]>,
+}
+
+impl<'tcx> TraitAliasExpansionInfo<'tcx> {
+ fn new(trait_ref: ty::PolyTraitRef<'tcx>, span: Span) -> Self {
+ Self { path: smallvec![(trait_ref, span)] }
+ }
+
+ /// Adds diagnostic labels to `diag` for the expansion path of a trait through all intermediate
+ /// trait aliases.
+ pub fn label_with_exp_info(
+ &self,
+ diag: &mut DiagnosticBuilder<'_>,
+ top_label: &str,
+ use_desc: &str,
+ ) {
+ diag.span_label(self.top().1, top_label);
+ if self.path.len() > 1 {
+ for (_, sp) in self.path.iter().rev().skip(1).take(self.path.len() - 2) {
+ diag.span_label(*sp, format!("referenced here ({})", use_desc));
+ }
+ }
+ diag.span_label(
+ self.bottom().1,
+ format!("trait alias used in trait object type ({})", use_desc),
+ );
+ }
+
+ pub fn trait_ref(&self) -> &ty::PolyTraitRef<'tcx> {
+ &self.top().0
+ }
+
+ pub fn top(&self) -> &(ty::PolyTraitRef<'tcx>, Span) {
+ self.path.last().unwrap()
+ }
+
+ pub fn bottom(&self) -> &(ty::PolyTraitRef<'tcx>, Span) {
+ self.path.first().unwrap()
+ }
+
+ fn clone_and_push(&self, trait_ref: ty::PolyTraitRef<'tcx>, span: Span) -> Self {
+ let mut path = self.path.clone();
+ path.push((trait_ref, span));
+
+ Self { path }
+ }
+}
+
+pub fn expand_trait_aliases<'tcx>(
+ tcx: TyCtxt<'tcx>,
+ trait_refs: impl IntoIterator<Item = (ty::PolyTraitRef<'tcx>, Span)>,
+) -> TraitAliasExpander<'tcx> {
+ let items: Vec<_> = trait_refs
+ .into_iter()
+ .map(|(trait_ref, span)| TraitAliasExpansionInfo::new(trait_ref, span))
+ .collect();
+ TraitAliasExpander { tcx, stack: items }
+}
+
+impl<'tcx> TraitAliasExpander<'tcx> {
+ /// If `item` is a trait alias and its predicate has not yet been visited, then expands `item`
+ /// to the definition, pushes the resulting expansion onto `self.stack`, and returns `false`.
+ /// Otherwise, immediately returns `true` if `item` is a regular trait, or `false` if it is a
+ /// trait alias.
+ /// The return value indicates whether `item` should be yielded to the user.
+ fn expand(&mut self, item: &TraitAliasExpansionInfo<'tcx>) -> bool {
+ let tcx = self.tcx;
+ let trait_ref = item.trait_ref();
+ let pred = trait_ref.without_const().to_predicate();
+
+ debug!("expand_trait_aliases: trait_ref={:?}", trait_ref);
+
+ // Don't recurse if this bound is not a trait alias.
+ let is_alias = tcx.is_trait_alias(trait_ref.def_id());
+ if !is_alias {
+ return true;
+ }
+
+ // Don't recurse if this trait alias is already on the stack for the DFS search.
+ let anon_pred = anonymize_predicate(tcx, &pred);
+ if item.path.iter().rev().skip(1).any(|(tr, _)| {
+ anonymize_predicate(tcx, &tr.without_const().to_predicate()) == anon_pred
+ }) {
+ return false;
+ }
+
+ // Get components of trait alias.
+ let predicates = tcx.super_predicates_of(trait_ref.def_id());
+
+ let items = predicates.predicates.iter().rev().filter_map(|(pred, span)| {
+ pred.subst_supertrait(tcx, &trait_ref)
+ .to_opt_poly_trait_ref()
+ .map(|trait_ref| item.clone_and_push(trait_ref, *span))
+ });
+ debug!("expand_trait_aliases: items={:?}", items.clone());
+
+ self.stack.extend(items);
+
+ false
+ }
+}
+
+impl<'tcx> Iterator for TraitAliasExpander<'tcx> {
+ type Item = TraitAliasExpansionInfo<'tcx>;
+
+ fn size_hint(&self) -> (usize, Option<usize>) {
+ (self.stack.len(), None)
+ }
+
+ fn next(&mut self) -> Option<TraitAliasExpansionInfo<'tcx>> {
+ while let Some(item) = self.stack.pop() {
+ if self.expand(&item) {
+ return Some(item);
+ }
+ }
+ None
+ }
+}
+
+///////////////////////////////////////////////////////////////////////////
+// Iterator over def-IDs of supertraits
+///////////////////////////////////////////////////////////////////////////
+
+pub struct SupertraitDefIds<'tcx> {
+ tcx: TyCtxt<'tcx>,
+ stack: Vec<DefId>,
+ visited: FxHashSet<DefId>,
+}
+
+pub fn supertrait_def_ids(tcx: TyCtxt<'_>, trait_def_id: DefId) -> SupertraitDefIds<'_> {
+ SupertraitDefIds {
+ tcx,
+ stack: vec![trait_def_id],
+ visited: Some(trait_def_id).into_iter().collect(),
+ }
+}
+
+impl Iterator for SupertraitDefIds<'tcx> {
+ type Item = DefId;
+
+ fn next(&mut self) -> Option<DefId> {
+ let def_id = self.stack.pop()?;
+ let predicates = self.tcx.super_predicates_of(def_id);
+ let visited = &mut self.visited;
+ self.stack.extend(
+ predicates
+ .predicates
+ .iter()
+ .filter_map(|(pred, _)| pred.to_opt_poly_trait_ref())
+ .map(|trait_ref| trait_ref.def_id())
+ .filter(|&super_def_id| visited.insert(super_def_id)),
+ );
+ Some(def_id)
+ }
+}
+
+///////////////////////////////////////////////////////////////////////////
+// Other
+///////////////////////////////////////////////////////////////////////////
+
+/// A filter around an iterator of predicates that makes it yield up
+/// just trait references.
+pub struct FilterToTraits<I> {
+ base_iterator: I,
+}
+
+impl<I> FilterToTraits<I> {
+ fn new(base: I) -> FilterToTraits<I> {
+ FilterToTraits { base_iterator: base }
+ }
+}
+
+impl<'tcx, I: Iterator<Item = ty::Predicate<'tcx>>> Iterator for FilterToTraits<I> {
+ type Item = ty::PolyTraitRef<'tcx>;
+
+ fn next(&mut self) -> Option<ty::PolyTraitRef<'tcx>> {
+ while let Some(pred) = self.base_iterator.next() {
+ if let ty::Predicate::Trait(data, _) = pred {
+ return Some(data.to_poly_trait_ref());
+ }
+ }
+ None
+ }
+
+ fn size_hint(&self) -> (usize, Option<usize>) {
+ let (_, upper) = self.base_iterator.size_hint();
+ (0, upper)
+ }
+}
+
+///////////////////////////////////////////////////////////////////////////
+// Other
+///////////////////////////////////////////////////////////////////////////
+
+/// Instantiate all bound parameters of the impl with the given substs,
+/// returning the resulting trait ref and all obligations that arise.
+/// The obligations are closed under normalization.
+pub fn impl_trait_ref_and_oblig<'a, 'tcx>(
+ selcx: &mut SelectionContext<'a, 'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ impl_def_id: DefId,
+ impl_substs: SubstsRef<'tcx>,
+) -> (ty::TraitRef<'tcx>, Vec<PredicateObligation<'tcx>>) {
+ let impl_trait_ref = selcx.tcx().impl_trait_ref(impl_def_id).unwrap();
+ let impl_trait_ref = impl_trait_ref.subst(selcx.tcx(), impl_substs);
+ let Normalized { value: impl_trait_ref, obligations: normalization_obligations1 } =
+ super::normalize(selcx, param_env, ObligationCause::dummy(), &impl_trait_ref);
+
+ let predicates = selcx.tcx().predicates_of(impl_def_id);
+ let predicates = predicates.instantiate(selcx.tcx(), impl_substs);
+ let Normalized { value: predicates, obligations: normalization_obligations2 } =
+ super::normalize(selcx, param_env, ObligationCause::dummy(), &predicates);
+ let impl_obligations =
+ predicates_for_generics(ObligationCause::dummy(), 0, param_env, &predicates);
+
+ let impl_obligations: Vec<_> = impl_obligations
+ .into_iter()
+ .chain(normalization_obligations1)
+ .chain(normalization_obligations2)
+ .collect();
+
+ (impl_trait_ref, impl_obligations)
+}
+
+/// See [`super::obligations_for_generics`].
+pub fn predicates_for_generics<'tcx>(
+ cause: ObligationCause<'tcx>,
+ recursion_depth: usize,
+ param_env: ty::ParamEnv<'tcx>,
+ generic_bounds: &ty::InstantiatedPredicates<'tcx>,
+) -> Vec<PredicateObligation<'tcx>> {
+ debug!("predicates_for_generics(generic_bounds={:?})", generic_bounds);
+
+ generic_bounds
+ .predicates
+ .iter()
+ .map(|&predicate| Obligation {
+ cause: cause.clone(),
+ recursion_depth,
+ param_env,
+ predicate,
+ })
+ .collect()
+}
+
+pub fn predicate_for_trait_ref<'tcx>(
+ cause: ObligationCause<'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ trait_ref: ty::TraitRef<'tcx>,
+ recursion_depth: usize,
+) -> PredicateObligation<'tcx> {
+ Obligation {
+ cause,
+ param_env,
+ recursion_depth,
+ predicate: trait_ref.without_const().to_predicate(),
+ }
+}
+
+pub fn predicate_for_trait_def(
+ tcx: TyCtxt<'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ cause: ObligationCause<'tcx>,
+ trait_def_id: DefId,
+ recursion_depth: usize,
+ self_ty: Ty<'tcx>,
+ params: &[GenericArg<'tcx>],
+) -> PredicateObligation<'tcx> {
+ let trait_ref =
+ ty::TraitRef { def_id: trait_def_id, substs: tcx.mk_substs_trait(self_ty, params) };
+ predicate_for_trait_ref(cause, param_env, trait_ref, recursion_depth)
+}
+
+/// Casts a trait reference into a reference to one of its super
+/// traits; returns an empty vector if `target_trait_def_id` is not a
+/// supertrait.
+pub fn upcast_choices(
+ tcx: TyCtxt<'tcx>,
+ source_trait_ref: ty::PolyTraitRef<'tcx>,
+ target_trait_def_id: DefId,
+) -> Vec<ty::PolyTraitRef<'tcx>> {
+ if source_trait_ref.def_id() == target_trait_def_id {
+ return vec![source_trait_ref]; // Shortcut the most common case.
+ }
+
+ supertraits(tcx, source_trait_ref).filter(|r| r.def_id() == target_trait_def_id).collect()
+}
+
+/// Given a trait `trait_ref`, returns the number of vtable entries
+/// that come from `trait_ref`, excluding its supertraits. Used in
+/// computing the vtable base for an upcast trait of a trait object.
+pub fn count_own_vtable_entries(tcx: TyCtxt<'tcx>, trait_ref: ty::PolyTraitRef<'tcx>) -> usize {
+ let mut entries = 0;
+ // Count number of methods and add them to the total offset.
+ // Skip over associated types and constants.
+ for trait_item in tcx.associated_items(trait_ref.def_id()).in_definition_order() {
+ if trait_item.kind == ty::AssocKind::Method {
+ entries += 1;
+ }
+ }
+ entries
+}
+
+/// Given an upcast trait object described by `object`, returns the
+/// index of the method `method_def_id` (which should be part of
+/// `object.upcast_trait_ref`) within the vtable for `object`.
+pub fn get_vtable_index_of_object_method<N>(
+ tcx: TyCtxt<'tcx>,
+ object: &super::VtableObjectData<'tcx, N>,
+ method_def_id: DefId,
+) -> usize {
+ // Count number of methods preceding the one we are selecting and
+ // add them to the total offset.
+ // Skip over associated types and constants.
+ let mut entries = object.vtable_base;
+ for trait_item in tcx.associated_items(object.upcast_trait_ref.def_id()).in_definition_order() {
+ if trait_item.def_id == method_def_id {
+ // The item with the ID we were given really ought to be a method.
+ assert_eq!(trait_item.kind, ty::AssocKind::Method);
+ return entries;
+ }
+ if trait_item.kind == ty::AssocKind::Method {
+ entries += 1;
+ }
+ }
+
+ bug!("get_vtable_index_of_object_method: {:?} was not found", method_def_id);
+}
+
+pub fn closure_trait_ref_and_return_type(
+ tcx: TyCtxt<'tcx>,
+ fn_trait_def_id: DefId,
+ self_ty: Ty<'tcx>,
+ sig: ty::PolyFnSig<'tcx>,
+ tuple_arguments: TupleArgumentsFlag,
+) -> ty::Binder<(ty::TraitRef<'tcx>, Ty<'tcx>)> {
+ let arguments_tuple = match tuple_arguments {
+ TupleArgumentsFlag::No => sig.skip_binder().inputs()[0],
+ TupleArgumentsFlag::Yes => tcx.intern_tup(sig.skip_binder().inputs()),
+ };
+ let trait_ref = ty::TraitRef {
+ def_id: fn_trait_def_id,
+ substs: tcx.mk_substs_trait(self_ty, &[arguments_tuple.into()]),
+ };
+ ty::Binder::bind((trait_ref, sig.skip_binder().output()))
+}
+
+pub fn generator_trait_ref_and_outputs(
+ tcx: TyCtxt<'tcx>,
+ fn_trait_def_id: DefId,
+ self_ty: Ty<'tcx>,
+ sig: ty::PolyGenSig<'tcx>,
+) -> ty::Binder<(ty::TraitRef<'tcx>, Ty<'tcx>, Ty<'tcx>)> {
+ let trait_ref = ty::TraitRef {
+ def_id: fn_trait_def_id,
+ substs: tcx.mk_substs_trait(self_ty, &[sig.skip_binder().resume_ty.into()]),
+ };
+ ty::Binder::bind((trait_ref, sig.skip_binder().yield_ty, sig.skip_binder().return_ty))
+}
+
+pub fn impl_is_default(tcx: TyCtxt<'_>, node_item_def_id: DefId) -> bool {
+ match tcx.hir().as_local_hir_id(node_item_def_id) {
+ Some(hir_id) => {
+ let item = tcx.hir().expect_item(hir_id);
+ if let hir::ItemKind::Impl { defaultness, .. } = item.kind {
+ defaultness.is_default()
+ } else {
+ false
+ }
+ }
+ None => tcx.impl_defaultness(node_item_def_id).is_default(),
+ }
+}
+
+pub fn impl_item_is_final(tcx: TyCtxt<'_>, assoc_item: &ty::AssocItem) -> bool {
+ assoc_item.defaultness.is_final() && !impl_is_default(tcx, assoc_item.container.id())
+}
+
+pub enum TupleArgumentsFlag {
+ Yes,
+ No,
+}
--- /dev/null
+use crate::infer::InferCtxt;
+use crate::opaque_types::required_region_bounds;
+use crate::traits::{self, AssocTypeBoundData};
+use rustc::middle::lang_items;
+use rustc::ty::subst::SubstsRef;
+use rustc::ty::{self, ToPredicate, Ty, TyCtxt, TypeFoldable, WithConstness};
+use rustc_hir as hir;
+use rustc_hir::def_id::DefId;
+use rustc_span::symbol::{kw, Ident};
+use rustc_span::Span;
+
+/// Returns the set of obligations needed to make `ty` well-formed.
+/// If `ty` contains unresolved inference variables, this may include
+/// further WF obligations. However, if `ty` IS an unresolved
+/// inference variable, returns `None`, because we are not able to
+/// make any progress at all. This is to prevent "livelock" where we
+/// say "$0 is WF if $0 is WF".
+pub fn obligations<'a, 'tcx>(
+ infcx: &InferCtxt<'a, 'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ body_id: hir::HirId,
+ ty: Ty<'tcx>,
+ span: Span,
+) -> Option<Vec<traits::PredicateObligation<'tcx>>> {
+ let mut wf = WfPredicates { infcx, param_env, body_id, span, out: vec![], item: None };
+ if wf.compute(ty) {
+ debug!("wf::obligations({:?}, body_id={:?}) = {:?}", ty, body_id, wf.out);
+
+ let result = wf.normalize();
+ debug!("wf::obligations({:?}, body_id={:?}) ~~> {:?}", ty, body_id, result);
+ Some(result)
+ } else {
+ None // no progress made, return None
+ }
+}
+
+/// Returns the obligations that make this trait reference
+/// well-formed. For example, if there is a trait `Set` defined like
+/// `trait Set<K:Eq>`, then the trait reference `Foo: Set<Bar>` is WF
+/// if `Bar: Eq`.
+pub fn trait_obligations<'a, 'tcx>(
+ infcx: &InferCtxt<'a, 'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ body_id: hir::HirId,
+ trait_ref: &ty::TraitRef<'tcx>,
+ span: Span,
+ item: Option<&'tcx hir::Item<'tcx>>,
+) -> Vec<traits::PredicateObligation<'tcx>> {
+ let mut wf = WfPredicates { infcx, param_env, body_id, span, out: vec![], item };
+ wf.compute_trait_ref(trait_ref, Elaborate::All);
+ wf.normalize()
+}
+
+pub fn predicate_obligations<'a, 'tcx>(
+ infcx: &InferCtxt<'a, 'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ body_id: hir::HirId,
+ predicate: &ty::Predicate<'tcx>,
+ span: Span,
+) -> Vec<traits::PredicateObligation<'tcx>> {
+ let mut wf = WfPredicates { infcx, param_env, body_id, span, out: vec![], item: None };
+
+ // (*) ok to skip binders, because wf code is prepared for it
+ match *predicate {
+ ty::Predicate::Trait(ref t, _) => {
+ wf.compute_trait_ref(&t.skip_binder().trait_ref, Elaborate::None); // (*)
+ }
+ ty::Predicate::RegionOutlives(..) => {}
+ ty::Predicate::TypeOutlives(ref t) => {
+ wf.compute(t.skip_binder().0);
+ }
+ ty::Predicate::Projection(ref t) => {
+ let t = t.skip_binder(); // (*)
+ wf.compute_projection(t.projection_ty);
+ wf.compute(t.ty);
+ }
+ ty::Predicate::WellFormed(t) => {
+ wf.compute(t);
+ }
+ ty::Predicate::ObjectSafe(_) => {}
+ ty::Predicate::ClosureKind(..) => {}
+ ty::Predicate::Subtype(ref data) => {
+ wf.compute(data.skip_binder().a); // (*)
+ wf.compute(data.skip_binder().b); // (*)
+ }
+ ty::Predicate::ConstEvaluatable(def_id, substs) => {
+ let obligations = wf.nominal_obligations(def_id, substs);
+ wf.out.extend(obligations);
+
+ for ty in substs.types() {
+ wf.compute(ty);
+ }
+ }
+ }
+
+ wf.normalize()
+}
+
+struct WfPredicates<'a, 'tcx> {
+ infcx: &'a InferCtxt<'a, 'tcx>,
+ param_env: ty::ParamEnv<'tcx>,
+ body_id: hir::HirId,
+ span: Span,
+ out: Vec<traits::PredicateObligation<'tcx>>,
+ item: Option<&'tcx hir::Item<'tcx>>,
+}
+
+/// Controls whether we "elaborate" supertraits and so forth on the WF
+/// predicates. This is a kind of hack to address #43784. The
+/// underlying problem in that issue was a trait structure like:
+///
+/// ```
+/// trait Foo: Copy { }
+/// trait Bar: Foo { }
+/// impl<T: Bar> Foo for T { }
+/// impl<T> Bar for T { }
+/// ```
+///
+/// Here, in the `Foo` impl, we will check that `T: Copy` holds -- but
+/// we decide that this is true because `T: Bar` is in the
+/// where-clauses (and we can elaborate that to include `T:
+/// Copy`). This wouldn't be a problem, except that when we check the
+/// `Bar` impl, we decide that `T: Foo` must hold because of the `Foo`
+/// impl. And so nowhere did we check that `T: Copy` holds!
+///
+/// To resolve this, we elaborate the WF requirements that must be
+/// proven when checking impls. This means that (e.g.) the `impl Bar
+/// for T` will be forced to prove not only that `T: Foo` but also `T:
+/// Copy` (which it won't be able to do, because there is no `Copy`
+/// impl for `T`).
+#[derive(Debug, PartialEq, Eq, Copy, Clone)]
+enum Elaborate {
+ All,
+ None,
+}
+
+impl<'a, 'tcx> WfPredicates<'a, 'tcx> {
+ fn cause(&mut self, code: traits::ObligationCauseCode<'tcx>) -> traits::ObligationCause<'tcx> {
+ traits::ObligationCause::new(self.span, self.body_id, code)
+ }
+
+ fn normalize(&mut self) -> Vec<traits::PredicateObligation<'tcx>> {
+ let cause = self.cause(traits::MiscObligation);
+ let infcx = &mut self.infcx;
+ let param_env = self.param_env;
+ let mut obligations = Vec::with_capacity(self.out.len());
+ for pred in &self.out {
+ assert!(!pred.has_escaping_bound_vars());
+ let mut selcx = traits::SelectionContext::new(infcx);
+ let i = obligations.len();
+ let value =
+ traits::normalize_to(&mut selcx, param_env, cause.clone(), pred, &mut obligations);
+ obligations.insert(i, value);
+ }
+ obligations
+ }
+
+ /// Pushes the obligations required for `trait_ref` to be WF into `self.out`.
+ fn compute_trait_ref(&mut self, trait_ref: &ty::TraitRef<'tcx>, elaborate: Elaborate) {
+ let tcx = self.infcx.tcx;
+ let obligations = self.nominal_obligations(trait_ref.def_id, trait_ref.substs);
+
+ let cause = self.cause(traits::MiscObligation);
+ let param_env = self.param_env;
+
+ let item = &self.item;
+ let extend_cause_with_original_assoc_item_obligation =
+ |cause: &mut traits::ObligationCause<'_>,
+ pred: &ty::Predicate<'_>,
+ trait_assoc_items: &[ty::AssocItem]| {
+ let trait_item = tcx
+ .hir()
+ .as_local_hir_id(trait_ref.def_id)
+ .and_then(|trait_id| tcx.hir().find(trait_id));
+ let (trait_name, trait_generics) = match trait_item {
+ Some(hir::Node::Item(hir::Item {
+ ident,
+ kind: hir::ItemKind::Trait(.., generics, _, _),
+ ..
+ }))
+ | Some(hir::Node::Item(hir::Item {
+ ident,
+ kind: hir::ItemKind::TraitAlias(generics, _),
+ ..
+ })) => (Some(ident), Some(generics)),
+ _ => (None, None),
+ };
+
+ let item_span = item.map(|i| tcx.sess.source_map().def_span(i.span));
+ match pred {
+ ty::Predicate::Projection(proj) => {
+ // The obligation comes neither from the current `impl` nor from the `trait`
+ // being implemented, but rather from a "second order" obligation, like in
+ // `src/test/ui/associated-types/point-at-type-on-obligation-failure.rs`:
+ //
+ // error[E0271]: type mismatch resolving `<Foo2 as Bar2>::Ok == ()`
+ // --> $DIR/point-at-type-on-obligation-failure.rs:13:5
+ // |
+ // LL | type Ok;
+ // | -- associated type defined here
+ // ...
+ // LL | impl Bar for Foo {
+ // | ---------------- in this `impl` item
+ // LL | type Ok = ();
+ // | ^^^^^^^^^^^^^ expected `u32`, found `()`
+ // |
+ // = note: expected type `u32`
+ // found type `()`
+ //
+ // FIXME: we would want to point a span to all places that contributed to this
+ // obligation. In the case above, it should be closer to:
+ //
+ // error[E0271]: type mismatch resolving `<Foo2 as Bar2>::Ok == ()`
+ // --> $DIR/point-at-type-on-obligation-failure.rs:13:5
+ // |
+ // LL | type Ok;
+ // | -- associated type defined here
+ // LL | type Sibling: Bar2<Ok=Self::Ok>;
+ // | -------------------------------- obligation set here
+ // ...
+ // LL | impl Bar for Foo {
+ // | ---------------- in this `impl` item
+ // LL | type Ok = ();
+ // | ^^^^^^^^^^^^^ expected `u32`, found `()`
+ // ...
+ // LL | impl Bar2 for Foo2 {
+ // | ---------------- in this `impl` item
+ // LL | type Ok = u32;
+ // | -------------- obligation set here
+ // |
+ // = note: expected type `u32`
+ // found type `()`
+ if let Some(hir::ItemKind::Impl { items, .. }) = item.map(|i| &i.kind) {
+ let trait_assoc_item = tcx.associated_item(proj.projection_def_id());
+ if let Some(impl_item) =
+ items.iter().find(|item| item.ident == trait_assoc_item.ident)
+ {
+ cause.span = impl_item.span;
+ cause.code = traits::AssocTypeBound(Box::new(AssocTypeBoundData {
+ impl_span: item_span,
+ original: trait_assoc_item.ident.span,
+ bounds: vec![],
+ }));
+ }
+ }
+ }
+ ty::Predicate::Trait(proj, _) => {
+ // An associated item obligation born out of the `trait` failed to be met.
+ // Point at the `impl` that failed the obligation, the associated item that
+ // needed to meet the obligation, and the definition of that associated item,
+ // which should hold the obligation in most cases. An example can be seen in
+ // `src/test/ui/associated-types/point-at-type-on-obligation-failure-2.rs`:
+ //
+ // error[E0277]: the trait bound `bool: Bar` is not satisfied
+ // --> $DIR/point-at-type-on-obligation-failure-2.rs:8:5
+ // |
+ // LL | type Assoc: Bar;
+ // | ----- associated type defined here
+ // ...
+ // LL | impl Foo for () {
+ // | --------------- in this `impl` item
+ // LL | type Assoc = bool;
+ // | ^^^^^^^^^^^^^^^^^^ the trait `Bar` is not implemented for `bool`
+ //
+ // If the obligation comes from the where clause in the `trait`, we point at it:
+ //
+ // error[E0277]: the trait bound `bool: Bar` is not satisfied
+ // --> $DIR/point-at-type-on-obligation-failure-2.rs:8:5
+ // |
+ // | trait Foo where <Self as Foo>::Assoc: Bar {
+ // | -------------------------- restricted in this bound
+ // LL | type Assoc;
+ // | ----- associated type defined here
+ // ...
+ // LL | impl Foo for () {
+ // | --------------- in this `impl` item
+ // LL | type Assoc = bool;
+ // | ^^^^^^^^^^^^^^^^^^ the trait `Bar` is not implemented for `bool`
+ if let (
+ ty::Projection(ty::ProjectionTy { item_def_id, .. }),
+ Some(hir::ItemKind::Impl { items, .. }),
+ ) = (&proj.skip_binder().self_ty().kind, item.map(|i| &i.kind))
+ {
+ if let Some((impl_item, trait_assoc_item)) = trait_assoc_items
+ .iter()
+ .find(|i| i.def_id == *item_def_id)
+ .and_then(|trait_assoc_item| {
+ items
+ .iter()
+ .find(|i| i.ident == trait_assoc_item.ident)
+ .map(|impl_item| (impl_item, trait_assoc_item))
+ })
+ {
+ let bounds = trait_generics
+ .map(|generics| {
+ get_generic_bound_spans(
+ &generics,
+ trait_name,
+ trait_assoc_item.ident,
+ )
+ })
+ .unwrap_or_else(Vec::new);
+ cause.span = impl_item.span;
+ cause.code = traits::AssocTypeBound(Box::new(AssocTypeBoundData {
+ impl_span: item_span,
+ original: trait_assoc_item.ident.span,
+ bounds,
+ }));
+ }
+ }
+ }
+ _ => {}
+ }
+ };
+
+ if let Elaborate::All = elaborate {
+ // FIXME: Make `extend_cause_with_original_assoc_item_obligation` take an iterator
+ // instead of a slice.
+ let trait_assoc_items: Vec<_> =
+ tcx.associated_items(trait_ref.def_id).in_definition_order().copied().collect();
+
+ let predicates = obligations.iter().map(|obligation| obligation.predicate).collect();
+ let implied_obligations = traits::elaborate_predicates(tcx, predicates);
+ let implied_obligations = implied_obligations.map(|pred| {
+ let mut cause = cause.clone();
+ extend_cause_with_original_assoc_item_obligation(
+ &mut cause,
+ &pred,
+ &*trait_assoc_items,
+ );
+ traits::Obligation::new(cause, param_env, pred)
+ });
+ self.out.extend(implied_obligations);
+ }
+
+ self.out.extend(obligations);
+
+ self.out.extend(trait_ref.substs.types().filter(|ty| !ty.has_escaping_bound_vars()).map(
+ |ty| traits::Obligation::new(cause.clone(), param_env, ty::Predicate::WellFormed(ty)),
+ ));
+ }
+
+ /// Pushes the obligations required for `trait_ref::Item` to be WF
+ /// into `self.out`.
+ fn compute_projection(&mut self, data: ty::ProjectionTy<'tcx>) {
+ // A projection is well-formed if (a) the trait ref itself is
+ // WF and (b) the trait-ref holds. (It may also be
+ // normalizable and be WF that way.)
+ let trait_ref = data.trait_ref(self.infcx.tcx);
+ self.compute_trait_ref(&trait_ref, Elaborate::None);
+
+ if !data.has_escaping_bound_vars() {
+ let predicate = trait_ref.without_const().to_predicate();
+ let cause = self.cause(traits::ProjectionWf(data));
+ self.out.push(traits::Obligation::new(cause, self.param_env, predicate));
+ }
+ }
+
+ /// Pushes the obligations required for an array length to be WF
+ /// into `self.out`.
+ fn compute_array_len(&mut self, constant: ty::Const<'tcx>) {
+ if let ty::ConstKind::Unevaluated(def_id, substs, promoted) = constant.val {
+ assert!(promoted.is_none());
+
+ let obligations = self.nominal_obligations(def_id, substs);
+ self.out.extend(obligations);
+
+ let predicate = ty::Predicate::ConstEvaluatable(def_id, substs);
+ let cause = self.cause(traits::MiscObligation);
+ self.out.push(traits::Obligation::new(cause, self.param_env, predicate));
+ }
+ }
+
+ fn require_sized(&mut self, subty: Ty<'tcx>, cause: traits::ObligationCauseCode<'tcx>) {
+ if !subty.has_escaping_bound_vars() {
+ let cause = self.cause(cause);
+ let trait_ref = ty::TraitRef {
+ def_id: self.infcx.tcx.require_lang_item(lang_items::SizedTraitLangItem, None),
+ substs: self.infcx.tcx.mk_substs_trait(subty, &[]),
+ };
+ self.out.push(traits::Obligation::new(
+ cause,
+ self.param_env,
+ trait_ref.without_const().to_predicate(),
+ ));
+ }
+ }
+
+ /// Pushes new obligations into `out`. Returns `true` if it was able
+ /// to generate all the predicates needed to validate that `ty0`
+ /// is WF. Returns `false` if `ty0` is an unresolved type variable,
+ /// in which case we are not able to simplify at all.
+ fn compute(&mut self, ty0: Ty<'tcx>) -> bool {
+ let mut subtys = ty0.walk();
+ let param_env = self.param_env;
+ while let Some(ty) = subtys.next() {
+ match ty.kind {
+ ty::Bool
+ | ty::Char
+ | ty::Int(..)
+ | ty::Uint(..)
+ | ty::Float(..)
+ | ty::Error
+ | ty::Str
+ | ty::GeneratorWitness(..)
+ | ty::Never
+ | ty::Param(_)
+ | ty::Bound(..)
+ | ty::Placeholder(..)
+ | ty::Foreign(..) => {
+ // WfScalar, WfParameter, etc
+ }
+
+ ty::Slice(subty) => {
+ self.require_sized(subty, traits::SliceOrArrayElem);
+ }
+
+ ty::Array(subty, len) => {
+ self.require_sized(subty, traits::SliceOrArrayElem);
+ self.compute_array_len(*len);
+ }
+
+ ty::Tuple(ref tys) => {
+ if let Some((_last, rest)) = tys.split_last() {
+ for elem in rest {
+ self.require_sized(elem.expect_ty(), traits::TupleElem);
+ }
+ }
+ }
+
+ ty::RawPtr(_) => {
+ // simple cases that are WF if their type args are WF
+ }
+
+ ty::Projection(data) => {
+ subtys.skip_current_subtree(); // subtree handled by compute_projection
+ self.compute_projection(data);
+ }
+
+ ty::UnnormalizedProjection(..) => bug!("only used with chalk-engine"),
+
+ ty::Adt(def, substs) => {
+ // WfNominalType
+ let obligations = self.nominal_obligations(def.did, substs);
+ self.out.extend(obligations);
+ }
+
+ ty::FnDef(did, substs) => {
+ let obligations = self.nominal_obligations(did, substs);
+ self.out.extend(obligations);
+ }
+
+ ty::Ref(r, rty, _) => {
+ // WfReference
+ if !r.has_escaping_bound_vars() && !rty.has_escaping_bound_vars() {
+ let cause = self.cause(traits::ReferenceOutlivesReferent(ty));
+ self.out.push(traits::Obligation::new(
+ cause,
+ param_env,
+ ty::Predicate::TypeOutlives(ty::Binder::dummy(ty::OutlivesPredicate(
+ rty, r,
+ ))),
+ ));
+ }
+ }
+
+ ty::Generator(..) => {
+ // Walk ALL the types in the generator: this will
+ // include the upvar types as well as the yield
+ // type. Note that this is mildly distinct from
+ // the closure case, where we have to be careful
+ // about the signature of the closure. We don't
+ // have the problem of implied bounds here since
+ // generators don't take arguments.
+ }
+
+ ty::Closure(def_id, substs) => {
+ // Only check the upvar types for WF, not the rest
+ // of the types within. This is needed because we
+ // capture the signature and it may not be WF
+ // without the implied bounds. Consider a closure
+ // like `|x: &'a T|` -- it may be that `T: 'a` is
+ // not known to hold in the creator's context (and
+ // indeed the closure may not be invoked by its
+ // creator, but rather turned to someone who *can*
+ // verify that).
+ //
+ // The special treatment of closures here really
+ // ought not to be necessary either; the problem
+ // is related to #25860 -- there is no way for us
+ // to express a fn type complete with the implied
+ // bounds that it is assuming. I think in reality
+ // the WF rules around fn are a bit messed up, and
+ // that is the root problem: `fn(&'a T)` should
+ // probably always be WF, because it should be
+ // shorthand for something like `where(T: 'a) {
+ // fn(&'a T) }`, as discussed in #25860.
+ //
+ // Note that we are also skipping the generic
+ // types. This is consistent with the `outlives`
+ // code, but anyway doesn't matter: within the fn
+ // body where they are created, the generics will
+ // always be WF, and outside of that fn body we
+ // are not directly inspecting closure types
+ // anyway, except via auto trait matching (which
+ // only inspects the upvar types).
+ subtys.skip_current_subtree(); // only the upvar types are checked, just below
+ for upvar_ty in substs.as_closure().upvar_tys(def_id, self.infcx.tcx) {
+ self.compute(upvar_ty);
+ }
+ }
+
+ ty::FnPtr(_) => {
+ // let the loop iterate into the argument/return
+ // types appearing in the fn signature
+ }
+
+ ty::Opaque(did, substs) => {
+ // all of the requirements on type parameters
+ // should've been checked by the instantiation
+ // of whatever returned this exact `impl Trait`.
+
+ // for named opaque `impl Trait` types we still need to check them
+ if ty::is_impl_trait_defn(self.infcx.tcx, did).is_none() {
+ let obligations = self.nominal_obligations(did, substs);
+ self.out.extend(obligations);
+ }
+ }
+
+ ty::Dynamic(data, r) => {
+ // WfObject
+ //
+ // Here, we defer WF checking due to higher-ranked
+ // regions. This is perhaps not ideal.
+ self.from_object_ty(ty, data, r);
+
+ // FIXME(#27579) RFC also considers adding trait
+ // obligations that don't refer to Self and
+ // checking those
+
+ let defer_to_coercion = self.infcx.tcx.features().object_safe_for_dispatch;
+
+ if !defer_to_coercion {
+ let cause = self.cause(traits::MiscObligation);
+ let component_traits = data.auto_traits().chain(data.principal_def_id());
+ self.out.extend(component_traits.map(|did| {
+ traits::Obligation::new(
+ cause.clone(),
+ param_env,
+ ty::Predicate::ObjectSafe(did),
+ )
+ }));
+ }
+ }
+
+ // Inference variables are the complicated case, since we don't
+ // know what type they are. We do two things:
+ //
+ // 1. Check if they have been resolved, and if so proceed with
+ // THAT type.
+ // 2. If not, check whether this is the type that we
+ // started with (ty0). In that case, we've made no
+ // progress at all, so return false. Otherwise,
+ // we've at least simplified things (i.e., we went
+ // from `Vec<$0>: WF` to `$0: WF`, so we can
+ // register a pending obligation and keep
+ // moving. (Goal is that an "inductive hypothesis"
+ // is satisfied to ensure termination.)
+ ty::Infer(_) => {
+ let ty = self.infcx.shallow_resolve(ty);
+ if let ty::Infer(_) = ty.kind {
+ // not yet resolved...
+ if ty == ty0 {
+ // ...this is the type we started from! no progress.
+ return false;
+ }
+
+ let cause = self.cause(traits::MiscObligation);
+ self.out.push(
+ // ...not the type we started from, so we made progress.
+ traits::Obligation::new(
+ cause,
+ self.param_env,
+ ty::Predicate::WellFormed(ty),
+ ),
+ );
+ } else {
+ // Yes, resolved, proceed with the
+ // result. Should never return false because
+ // `ty` is not an `Infer`.
+ assert!(self.compute(ty));
+ }
+ }
+ }
+ }
+
+ // if we made it through that loop above, we made progress!
+ return true;
+ }
+
+ fn nominal_obligations(
+ &mut self,
+ def_id: DefId,
+ substs: SubstsRef<'tcx>,
+ ) -> Vec<traits::PredicateObligation<'tcx>> {
+ let predicates = self.infcx.tcx.predicates_of(def_id).instantiate(self.infcx.tcx, substs);
+ let cause = self.cause(traits::ItemObligation(def_id));
+ predicates
+ .predicates
+ .into_iter()
+ .map(|pred| traits::Obligation::new(cause.clone(), self.param_env, pred))
+ .filter(|pred| !pred.has_escaping_bound_vars())
+ .collect()
+ }
+
+ fn from_object_ty(
+ &mut self,
+ ty: Ty<'tcx>,
+ data: ty::Binder<&'tcx ty::List<ty::ExistentialPredicate<'tcx>>>,
+ region: ty::Region<'tcx>,
+ ) {
+ // Imagine a type like this:
+ //
+ // trait Foo { }
+ // trait Bar<'c> : 'c { }
+ //
+ // &'b (Foo+'c+Bar<'d>)
+ // ^
+ //
+ // In this case, the following relationships must hold:
+ //
+ // 'b <= 'c
+ // 'd <= 'c
+ //
+ // The first condition is due to the normal region pointer
+ // rules, which say that a reference cannot outlive its
+ // referent.
+ //
+ // The final condition may be a bit surprising. In particular,
+ // you may expect that it would have been `'c <= 'd`, since
+ // usually lifetimes of outer things are conservative
+ // approximations for inner things. However, it works somewhat
+ // differently with trait objects: here the idea is that if the
+ // user specifies a region bound (`'c`, in this case) it is the
+ // "master bound" that *implies* that bounds from other traits are
+ // all met. (Remember that *all bounds* in a type like
+ // `Foo+Bar+Zed` must be met, not just one, hence if we write
+ // `Foo<'x>+Bar<'y>`, we know that the type outlives *both* 'x and
+ // 'y.)
+ //
+ // Note: in fact we only permit builtin traits, not `Bar<'d>`;
+ // extending this is left for the future.
+ if !data.has_escaping_bound_vars() && !region.has_escaping_bound_vars() {
+ let implicit_bounds = object_region_bounds(self.infcx.tcx, data);
+
+ let explicit_bound = region;
+
+ self.out.reserve(implicit_bounds.len());
+ for implicit_bound in implicit_bounds {
+ let cause = self.cause(traits::ObjectTypeBound(ty, explicit_bound));
+ let outlives =
+ ty::Binder::dummy(ty::OutlivesPredicate(explicit_bound, implicit_bound));
+ self.out.push(traits::Obligation::new(
+ cause,
+ self.param_env,
+ outlives.to_predicate(),
+ ));
+ }
+ }
+ }
+}
+
+/// Given an object type like `SomeTrait + Send`, computes the lifetime
+/// bounds that must hold on the elided self type. These are derived
+/// from the declarations of `SomeTrait`, `Send`, and friends -- if
+/// they declare `trait SomeTrait : 'static`, for example, then
+/// `'static` would appear in the list. The hard work is done by
+/// `required_region_bounds`; see that function for more information.
+pub fn object_region_bounds<'tcx>(
+ tcx: TyCtxt<'tcx>,
+ existential_predicates: ty::Binder<&'tcx ty::List<ty::ExistentialPredicate<'tcx>>>,
+) -> Vec<ty::Region<'tcx>> {
+ // Since we don't actually *know* the self type for an object,
+ // this "open(err)" serves as a kind of dummy standin -- basically
+ // a placeholder type.
+ let open_ty = tcx.mk_ty_infer(ty::FreshTy(0));
+
+ let predicates = existential_predicates
+ .iter()
+ .filter_map(|predicate| {
+ if let ty::ExistentialPredicate::Projection(_) = *predicate.skip_binder() {
+ None
+ } else {
+ Some(predicate.with_self_ty(tcx, open_ty))
+ }
+ })
+ .collect();
+
+ required_region_bounds(tcx, open_ty, predicates)
+}
+
+/// Finds the spans of generic bounds affecting an associated type.
+fn get_generic_bound_spans(
+ generics: &hir::Generics<'_>,
+ trait_name: Option<&Ident>,
+ assoc_item_name: Ident,
+) -> Vec<Span> {
+ let mut bounds = vec![];
+ for clause in generics.where_clause.predicates.iter() {
+ if let hir::WherePredicate::BoundPredicate(pred) = clause {
+ match &pred.bounded_ty.kind {
+ hir::TyKind::Path(hir::QPath::Resolved(Some(ty), path)) => {
+ let mut s = path.segments.iter();
+ if let (a, Some(b), None) = (s.next(), s.next(), s.next()) {
+ if a.map(|s| &s.ident) == trait_name
+ && b.ident == assoc_item_name
+ && is_self_path(&ty.kind)
+ {
+ // `<Self as Foo>::Bar`
+ bounds.push(pred.span);
+ }
+ }
+ }
+ hir::TyKind::Path(hir::QPath::TypeRelative(ty, segment)) => {
+ if segment.ident == assoc_item_name {
+ if is_self_path(&ty.kind) {
+ // `Self::Bar`
+ bounds.push(pred.span);
+ }
+ }
+ }
+ _ => {}
+ }
+ }
+ }
+ bounds
+}
+
+fn is_self_path(kind: &hir::TyKind<'_>) -> bool {
+ match kind {
+ hir::TyKind::Path(hir::QPath::Resolved(None, path)) => {
+ let mut s = path.segments.iter();
+ if let (Some(segment), None) = (s.next(), s.next()) {
+ if segment.ident.name == kw::SelfUpper {
+ // `type(Self)`
+ return true;
+ }
+ }
+ }
+ _ => {}
+ }
+ false
+}
rustc_target = { path = "../librustc_target" }
rustc_ast = { path = "../librustc_ast" }
rustc_span = { path = "../librustc_span" }
-chalk-engine = { version = "0.9.0", default-features=false }
smallvec = { version = "1.0", features = ["union", "may_dangle"] }
rustc_infer = { path = "../librustc_infer" }
+rustc_trait_selection = { path = "../librustc_trait_selection" }
+++ /dev/null
-mod program_clauses;
-mod resolvent_ops;
-mod unify;
-
-use chalk_engine::fallible::Fallible;
-use chalk_engine::forest::Forest;
-use chalk_engine::{context, hh::HhGoal, DelayedLiteral, ExClause, Literal};
-use rustc::ty::fold::{TypeFoldable, TypeFolder, TypeVisitor};
-use rustc::ty::query::Providers;
-use rustc::ty::subst::{GenericArg, GenericArgKind};
-use rustc::ty::{self, TyCtxt};
-use rustc_infer::infer::canonical::{
- Canonical, CanonicalVarValues, Certainty, OriginalQueryValues, QueryRegionConstraints,
- QueryResponse,
-};
-use rustc_infer::infer::{InferCtxt, LateBoundRegionConversionTime, TyCtxtInferExt};
-use rustc_infer::traits::{
- self, ChalkCanonicalGoal, ChalkContextLift, Clause, DomainGoal, Environment, ExClauseFold,
- Goal, GoalKind, InEnvironment, QuantifierKind,
-};
-use rustc_macros::{Lift, TypeFoldable};
-use rustc_span::DUMMY_SP;
-
-use std::fmt::{self, Debug};
-use std::marker::PhantomData;
-
-use self::unify::*;
-
-#[derive(Copy, Clone, Debug)]
-crate struct ChalkArenas<'tcx> {
- _phantom: PhantomData<&'tcx ()>,
-}
-
-#[derive(Copy, Clone)]
-crate struct ChalkContext<'tcx> {
- _arenas: ChalkArenas<'tcx>,
- tcx: TyCtxt<'tcx>,
-}
-
-#[derive(Copy, Clone)]
-crate struct ChalkInferenceContext<'cx, 'tcx> {
- infcx: &'cx InferCtxt<'cx, 'tcx>,
-}
-
-#[derive(Copy, Clone, Debug)]
-crate struct UniverseMap;
-
-crate type RegionConstraint<'tcx> = ty::OutlivesPredicate<GenericArg<'tcx>, ty::Region<'tcx>>;
-
-#[derive(Clone, Debug, PartialEq, Eq, Hash, TypeFoldable, Lift)]
-crate struct ConstrainedSubst<'tcx> {
- subst: CanonicalVarValues<'tcx>,
- constraints: Vec<RegionConstraint<'tcx>>,
-}
-
-impl context::Context for ChalkArenas<'tcx> {
- type CanonicalExClause = Canonical<'tcx, ChalkExClause<'tcx>>;
-
- type CanonicalGoalInEnvironment = Canonical<'tcx, InEnvironment<'tcx, Goal<'tcx>>>;
-
- // u-canonicalization not yet implemented
- type UCanonicalGoalInEnvironment = Canonical<'tcx, InEnvironment<'tcx, Goal<'tcx>>>;
-
- type CanonicalConstrainedSubst = Canonical<'tcx, ConstrainedSubst<'tcx>>;
-
- // u-canonicalization not yet implemented
- type UniverseMap = UniverseMap;
-
- type Solution = Canonical<'tcx, QueryResponse<'tcx, ()>>;
-
- type InferenceNormalizedSubst = CanonicalVarValues<'tcx>;
-
- type GoalInEnvironment = InEnvironment<'tcx, Goal<'tcx>>;
-
- type RegionConstraint = RegionConstraint<'tcx>;
-
- type Substitution = CanonicalVarValues<'tcx>;
-
- type Environment = Environment<'tcx>;
-
- type Goal = Goal<'tcx>;
-
- type DomainGoal = DomainGoal<'tcx>;
-
- type BindersGoal = ty::Binder<Goal<'tcx>>;
-
- type Parameter = GenericArg<'tcx>;
-
- type ProgramClause = Clause<'tcx>;
-
- type ProgramClauses = Vec<Clause<'tcx>>;
-
- type UnificationResult = UnificationResult<'tcx>;
-
- type Variance = ty::Variance;
-
- fn goal_in_environment(
- env: &Environment<'tcx>,
- goal: Goal<'tcx>,
- ) -> InEnvironment<'tcx, Goal<'tcx>> {
- env.with(goal)
- }
-}
-
-impl context::AggregateOps<ChalkArenas<'tcx>> for ChalkContext<'tcx> {
- fn make_solution(
- &self,
- root_goal: &Canonical<'tcx, InEnvironment<'tcx, Goal<'tcx>>>,
- mut simplified_answers: impl context::AnswerStream<ChalkArenas<'tcx>>,
- ) -> Option<Canonical<'tcx, QueryResponse<'tcx, ()>>> {
- use chalk_engine::SimplifiedAnswer;
-
- debug!("make_solution(root_goal = {:?})", root_goal);
-
- if simplified_answers.peek_answer().is_none() {
- return None;
- }
-
- let SimplifiedAnswer { subst: constrained_subst, ambiguous } =
- simplified_answers.next_answer().unwrap();
-
- debug!("make_solution: ambiguous flag = {}", ambiguous);
-
- let ambiguous = simplified_answers.peek_answer().is_some() || ambiguous;
-
- let solution = constrained_subst.unchecked_map(|cs| match ambiguous {
- true => QueryResponse {
- var_values: cs.subst.make_identity(self.tcx),
- region_constraints: QueryRegionConstraints::default(),
- certainty: Certainty::Ambiguous,
- value: (),
- },
-
- false => QueryResponse {
- var_values: cs.subst,
- region_constraints: QueryRegionConstraints::default(),
-
- // FIXME: restore this later once we get better at handling regions
- // region_constraints: cs.constraints
- // .into_iter()
- // .map(|c| ty::Binder::bind(c))
- // .collect(),
- certainty: Certainty::Proven,
- value: (),
- },
- });
-
- debug!("make_solution: solution = {:?}", solution);
-
- Some(solution)
- }
-}
-
-impl context::ContextOps<ChalkArenas<'tcx>> for ChalkContext<'tcx> {
- /// Returns `true` if this is a coinductive goal: basically proving that an auto trait
- /// is implemented or proving that a trait reference is well-formed.
- fn is_coinductive(&self, goal: &Canonical<'tcx, InEnvironment<'tcx, Goal<'tcx>>>) -> bool {
- use rustc::traits::{WellFormed, WhereClause};
-
- let mut goal = goal.value.goal;
- loop {
- match goal {
- GoalKind::DomainGoal(domain_goal) => match domain_goal {
- DomainGoal::WellFormed(WellFormed::Trait(..)) => return true,
- DomainGoal::Holds(WhereClause::Implemented(trait_predicate)) => {
- return self.tcx.trait_is_auto(trait_predicate.def_id());
- }
- _ => return false,
- },
-
- GoalKind::Quantified(_, bound_goal) => goal = *bound_goal.skip_binder(),
- _ => return false,
- }
- }
- }
-
- /// Creates an inference table for processing a new goal and instantiates that goal
- /// in that context, returning "all the pieces".
- ///
- /// More specifically: given a u-canonical goal `arg`, creates a
- /// new inference table `T` and populates it with the universes
- /// found in `arg`. Then, creates a substitution `S` that maps
- /// each bound variable in `arg` to a fresh inference variable
- /// from T. Returns:
- ///
- /// - the table `T`,
- /// - the substitution `S`,
- /// - the environment and goal found by substitution `S` into `arg`.
- fn instantiate_ucanonical_goal<R>(
- &self,
- arg: &Canonical<'tcx, InEnvironment<'tcx, Goal<'tcx>>>,
- op: impl context::WithInstantiatedUCanonicalGoal<ChalkArenas<'tcx>, Output = R>,
- ) -> R {
- self.tcx.infer_ctxt().enter_with_canonical(DUMMY_SP, arg, |ref infcx, arg, subst| {
- let chalk_infcx = &mut ChalkInferenceContext { infcx };
- op.with(chalk_infcx, subst, arg.environment, arg.goal)
- })
- }
-
- fn instantiate_ex_clause<R>(
- &self,
- _num_universes: usize,
- arg: &Canonical<'tcx, ChalkExClause<'tcx>>,
- op: impl context::WithInstantiatedExClause<ChalkArenas<'tcx>, Output = R>,
- ) -> R {
- self.tcx.infer_ctxt().enter_with_canonical(DUMMY_SP, &arg.upcast(), |ref infcx, arg, _| {
- let chalk_infcx = &mut ChalkInferenceContext { infcx };
- op.with(chalk_infcx, arg)
- })
- }
-
- /// Returns `true` if this solution has no region constraints.
- fn empty_constraints(ccs: &Canonical<'tcx, ConstrainedSubst<'tcx>>) -> bool {
- ccs.value.constraints.is_empty()
- }
-
- fn inference_normalized_subst_from_ex_clause(
- canon_ex_clause: &'a Canonical<'tcx, ChalkExClause<'tcx>>,
- ) -> &'a CanonicalVarValues<'tcx> {
- &canon_ex_clause.value.subst
- }
-
- fn inference_normalized_subst_from_subst(
- canon_subst: &'a Canonical<'tcx, ConstrainedSubst<'tcx>>,
- ) -> &'a CanonicalVarValues<'tcx> {
- &canon_subst.value.subst
- }
-
- fn canonical(
- u_canon: &'a Canonical<'tcx, InEnvironment<'tcx, Goal<'tcx>>>,
- ) -> &'a Canonical<'tcx, InEnvironment<'tcx, Goal<'tcx>>> {
- u_canon
- }
-
- fn is_trivial_substitution(
- u_canon: &Canonical<'tcx, InEnvironment<'tcx, Goal<'tcx>>>,
- canonical_subst: &Canonical<'tcx, ConstrainedSubst<'tcx>>,
- ) -> bool {
- let subst = &canonical_subst.value.subst;
- assert_eq!(u_canon.variables.len(), subst.var_values.len());
- subst.var_values.iter_enumerated().all(|(cvar, kind)| match kind.unpack() {
- GenericArgKind::Lifetime(r) => match r {
- &ty::ReLateBound(debruijn, br) => {
- debug_assert_eq!(debruijn, ty::INNERMOST);
- cvar == br.assert_bound_var()
- }
- _ => false,
- },
- GenericArgKind::Type(ty) => match ty.kind {
- ty::Bound(debruijn, bound_ty) => {
- debug_assert_eq!(debruijn, ty::INNERMOST);
- cvar == bound_ty.var
- }
- _ => false,
- },
- GenericArgKind::Const(ct) => match ct.val {
- ty::ConstKind::Bound(debruijn, bound_ct) => {
- debug_assert_eq!(debruijn, ty::INNERMOST);
- cvar == bound_ct
- }
- _ => false,
- },
- })
- }
-
- fn num_universes(canon: &Canonical<'tcx, InEnvironment<'tcx, Goal<'tcx>>>) -> usize {
- canon.max_universe.index() + 1
- }
-
- /// Convert a goal G *from* the canonical universes *into* our
- /// local universes. This will yield a goal G' that is the same
- /// but for the universes of universally quantified names.
- fn map_goal_from_canonical(
- _map: &UniverseMap,
- value: &Canonical<'tcx, InEnvironment<'tcx, Goal<'tcx>>>,
- ) -> Canonical<'tcx, InEnvironment<'tcx, Goal<'tcx>>> {
- *value // FIXME universe maps not implemented yet
- }
-
- fn map_subst_from_canonical(
- _map: &UniverseMap,
- value: &Canonical<'tcx, ConstrainedSubst<'tcx>>,
- ) -> Canonical<'tcx, ConstrainedSubst<'tcx>> {
- value.clone() // FIXME universe maps not implemented yet
- }
-}
-
-impl context::InferenceTable<ChalkArenas<'tcx>, ChalkArenas<'tcx>>
- for ChalkInferenceContext<'cx, 'tcx>
-{
- fn into_goal(&self, domain_goal: DomainGoal<'tcx>) -> Goal<'tcx> {
- self.infcx.tcx.mk_goal(GoalKind::DomainGoal(domain_goal))
- }
-
- fn cannot_prove(&self) -> Goal<'tcx> {
- self.infcx.tcx.mk_goal(GoalKind::CannotProve)
- }
-
- fn into_hh_goal(&mut self, goal: Goal<'tcx>) -> ChalkHhGoal<'tcx> {
- match *goal {
- GoalKind::Implies(hypotheses, goal) => {
- HhGoal::Implies(hypotheses.iter().cloned().collect(), goal)
- }
- GoalKind::And(left, right) => HhGoal::And(left, right),
- GoalKind::Not(subgoal) => HhGoal::Not(subgoal),
- GoalKind::DomainGoal(d) => HhGoal::DomainGoal(d),
- GoalKind::Quantified(QuantifierKind::Universal, binder) => HhGoal::ForAll(binder),
- GoalKind::Quantified(QuantifierKind::Existential, binder) => HhGoal::Exists(binder),
- GoalKind::Subtype(a, b) => HhGoal::Unify(ty::Variance::Covariant, a.into(), b.into()),
- GoalKind::CannotProve => HhGoal::CannotProve,
- }
- }
-
- fn add_clauses(
- &mut self,
- env: &Environment<'tcx>,
- clauses: Vec<Clause<'tcx>>,
- ) -> Environment<'tcx> {
- Environment {
- clauses: self
- .infcx
- .tcx
- .mk_clauses(env.clauses.iter().cloned().chain(clauses.into_iter())),
- }
- }
-}
-
-impl context::TruncateOps<ChalkArenas<'tcx>, ChalkArenas<'tcx>>
- for ChalkInferenceContext<'cx, 'tcx>
-{
- fn truncate_goal(
- &mut self,
- _subgoal: &InEnvironment<'tcx, Goal<'tcx>>,
- ) -> Option<InEnvironment<'tcx, Goal<'tcx>>> {
- None // FIXME we should truncate at some point!
- }
-
- fn truncate_answer(
- &mut self,
- _subst: &CanonicalVarValues<'tcx>,
- ) -> Option<CanonicalVarValues<'tcx>> {
- None // FIXME we should truncate at some point!
- }
-}
-
-impl context::UnificationOps<ChalkArenas<'tcx>, ChalkArenas<'tcx>>
- for ChalkInferenceContext<'cx, 'tcx>
-{
- fn program_clauses(
- &self,
- environment: &Environment<'tcx>,
- goal: &DomainGoal<'tcx>,
- ) -> Vec<Clause<'tcx>> {
- self.program_clauses_impl(environment, goal)
- }
-
- fn instantiate_binders_universally(&mut self, arg: &ty::Binder<Goal<'tcx>>) -> Goal<'tcx> {
- self.infcx.replace_bound_vars_with_placeholders(arg).0
- }
-
- fn instantiate_binders_existentially(&mut self, arg: &ty::Binder<Goal<'tcx>>) -> Goal<'tcx> {
- self.infcx
- .replace_bound_vars_with_fresh_vars(
- DUMMY_SP,
- LateBoundRegionConversionTime::HigherRankedType,
- arg,
- )
- .0
- }
-
- fn debug_ex_clause(&mut self, value: &'v ChalkExClause<'tcx>) -> Box<dyn Debug + 'v> {
- let string = format!("{:?}", self.infcx.resolve_vars_if_possible(value));
- Box::new(string)
- }
-
- fn canonicalize_goal(
- &mut self,
- value: &InEnvironment<'tcx, Goal<'tcx>>,
- ) -> Canonical<'tcx, InEnvironment<'tcx, Goal<'tcx>>> {
- let mut _orig_values = OriginalQueryValues::default();
- self.infcx.canonicalize_query(value, &mut _orig_values)
- }
-
- fn canonicalize_ex_clause(
- &mut self,
- value: &ChalkExClause<'tcx>,
- ) -> Canonical<'tcx, ChalkExClause<'tcx>> {
- self.infcx.canonicalize_response(value)
- }
-
- fn canonicalize_constrained_subst(
- &mut self,
- subst: CanonicalVarValues<'tcx>,
- constraints: Vec<RegionConstraint<'tcx>>,
- ) -> Canonical<'tcx, ConstrainedSubst<'tcx>> {
- self.infcx.canonicalize_response(&ConstrainedSubst { subst, constraints })
- }
-
- fn u_canonicalize_goal(
- &mut self,
- value: &Canonical<'tcx, InEnvironment<'tcx, Goal<'tcx>>>,
- ) -> (Canonical<'tcx, InEnvironment<'tcx, Goal<'tcx>>>, UniverseMap) {
- (*value, UniverseMap)
- }
-
- fn invert_goal(
- &mut self,
- _value: &InEnvironment<'tcx, Goal<'tcx>>,
- ) -> Option<InEnvironment<'tcx, Goal<'tcx>>> {
- panic!("goal inversion not yet implemented")
- }
-
- fn unify_parameters(
- &mut self,
- environment: &Environment<'tcx>,
- variance: ty::Variance,
- a: &GenericArg<'tcx>,
- b: &GenericArg<'tcx>,
- ) -> Fallible<UnificationResult<'tcx>> {
- self.infcx.commit_if_ok(|_| {
- unify(self.infcx, *environment, variance, a, b)
- .map_err(|_| chalk_engine::fallible::NoSolution)
- })
- }
-
- fn sink_answer_subset(
- &self,
- value: &Canonical<'tcx, ConstrainedSubst<'tcx>>,
- ) -> Canonical<'tcx, ConstrainedSubst<'tcx>> {
- value.clone()
- }
-
- fn lift_delayed_literal(
- &self,
- value: DelayedLiteral<ChalkArenas<'tcx>>,
- ) -> DelayedLiteral<ChalkArenas<'tcx>> {
- match self.infcx.tcx.lift(&value) {
- Some(literal) => literal,
- None => bug!("cannot lift {:?}", value),
- }
- }
-
- fn into_ex_clause(
- &mut self,
- result: UnificationResult<'tcx>,
- ex_clause: &mut ChalkExClause<'tcx>,
- ) {
- into_ex_clause(result, ex_clause);
- }
-}
-
-crate fn into_ex_clause(result: UnificationResult<'tcx>, ex_clause: &mut ChalkExClause<'tcx>) {
- ex_clause.subgoals.extend(result.goals.into_iter().map(Literal::Positive));
-
- // FIXME: restore this later once we get better at handling regions
- let _ = result.constraints.len(); // trick `-D dead-code`
- // ex_clause.constraints.extend(result.constraints);
-}
-
-type ChalkHhGoal<'tcx> = HhGoal<ChalkArenas<'tcx>>;
-
-type ChalkExClause<'tcx> = ExClause<ChalkArenas<'tcx>>;
-
-impl Debug for ChalkContext<'tcx> {
- fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
- write!(f, "ChalkContext")
- }
-}
-
-impl Debug for ChalkInferenceContext<'cx, 'tcx> {
- fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
- write!(f, "ChalkInferenceContext")
- }
-}
-
-impl ChalkContextLift<'tcx> for ChalkArenas<'a> {
- type LiftedExClause = ChalkExClause<'tcx>;
- type LiftedDelayedLiteral = DelayedLiteral<ChalkArenas<'tcx>>;
- type LiftedLiteral = Literal<ChalkArenas<'tcx>>;
-
- fn lift_ex_clause_to_tcx(
- ex_clause: &ChalkExClause<'a>,
- tcx: TyCtxt<'tcx>,
- ) -> Option<Self::LiftedExClause> {
- Some(ChalkExClause {
- subst: tcx.lift(&ex_clause.subst)?,
- delayed_literals: tcx.lift(&ex_clause.delayed_literals)?,
- constraints: tcx.lift(&ex_clause.constraints)?,
- subgoals: tcx.lift(&ex_clause.subgoals)?,
- })
- }
-
- fn lift_delayed_literal_to_tcx(
- literal: &DelayedLiteral<ChalkArenas<'a>>,
- tcx: TyCtxt<'tcx>,
- ) -> Option<Self::LiftedDelayedLiteral> {
- Some(match literal {
- DelayedLiteral::CannotProve(()) => DelayedLiteral::CannotProve(()),
- DelayedLiteral::Negative(index) => DelayedLiteral::Negative(*index),
- DelayedLiteral::Positive(index, subst) => {
- DelayedLiteral::Positive(*index, tcx.lift(subst)?)
- }
- })
- }
-
- fn lift_literal_to_tcx(
- literal: &Literal<ChalkArenas<'a>>,
- tcx: TyCtxt<'tcx>,
- ) -> Option<Self::LiftedLiteral> {
- Some(match literal {
- Literal::Negative(goal) => Literal::Negative(tcx.lift(goal)?),
- Literal::Positive(goal) => Literal::Positive(tcx.lift(goal)?),
- })
- }
-}
-
-impl ExClauseFold<'tcx> for ChalkArenas<'tcx> {
- fn fold_ex_clause_with<F: TypeFolder<'tcx>>(
- ex_clause: &ChalkExClause<'tcx>,
- folder: &mut F,
- ) -> ChalkExClause<'tcx> {
- ExClause {
- subst: ex_clause.subst.fold_with(folder),
- delayed_literals: ex_clause.delayed_literals.fold_with(folder),
- constraints: ex_clause.constraints.fold_with(folder),
- subgoals: ex_clause.subgoals.fold_with(folder),
- }
- }
-
- fn visit_ex_clause_with<V: TypeVisitor<'tcx>>(
- ex_clause: &ExClause<Self>,
- visitor: &mut V,
- ) -> bool {
- let ExClause { subst, delayed_literals, constraints, subgoals } = ex_clause;
- subst.visit_with(visitor)
- || delayed_literals.visit_with(visitor)
- || constraints.visit_with(visitor)
- || subgoals.visit_with(visitor)
- }
-}
-
-trait Upcast<'tcx>: 'tcx {
- type Upcasted: 'tcx;
-
- fn upcast(&self) -> Self::Upcasted;
-}
-
-impl<'tcx> Upcast<'tcx> for DelayedLiteral<ChalkArenas<'tcx>> {
- type Upcasted = DelayedLiteral<ChalkArenas<'tcx>>;
-
- fn upcast(&self) -> Self::Upcasted {
- match self {
- &DelayedLiteral::CannotProve(..) => DelayedLiteral::CannotProve(()),
- &DelayedLiteral::Negative(index) => DelayedLiteral::Negative(index),
- DelayedLiteral::Positive(index, subst) => {
- DelayedLiteral::Positive(*index, subst.clone())
- }
- }
- }
-}
-
-impl<'tcx> Upcast<'tcx> for Literal<ChalkArenas<'tcx>> {
- type Upcasted = Literal<ChalkArenas<'tcx>>;
-
- fn upcast(&self) -> Self::Upcasted {
- match self {
- &Literal::Negative(goal) => Literal::Negative(goal),
- &Literal::Positive(goal) => Literal::Positive(goal),
- }
- }
-}
-
-impl<'tcx> Upcast<'tcx> for ExClause<ChalkArenas<'tcx>> {
- type Upcasted = ExClause<ChalkArenas<'tcx>>;
-
- fn upcast(&self) -> Self::Upcasted {
- ExClause {
- subst: self.subst.clone(),
- delayed_literals: self.delayed_literals.iter().map(|l| l.upcast()).collect(),
- constraints: self.constraints.clone(),
- subgoals: self.subgoals.iter().map(|g| g.upcast()).collect(),
- }
- }
-}
-
-impl<'tcx, T> Upcast<'tcx> for Canonical<'tcx, T>
-where
- T: Upcast<'tcx>,
-{
- type Upcasted = Canonical<'tcx, T::Upcasted>;
-
- fn upcast(&self) -> Self::Upcasted {
- Canonical {
- max_universe: self.max_universe,
- value: self.value.upcast(),
- variables: self.variables,
- }
- }
-}
-
-crate fn provide(p: &mut Providers<'_>) {
- *p = Providers { evaluate_goal, ..*p };
-}
-
-crate fn evaluate_goal<'tcx>(
- tcx: TyCtxt<'tcx>,
- goal: ChalkCanonicalGoal<'tcx>,
-) -> Result<&'tcx Canonical<'tcx, QueryResponse<'tcx, ()>>, traits::query::NoSolution> {
- use crate::lowering::Lower;
- use rustc::traits::WellFormed;
-
- let goal = goal.unchecked_map(|goal| InEnvironment {
- environment: goal.environment,
- goal: match goal.goal {
- ty::Predicate::WellFormed(ty) => {
- tcx.mk_goal(GoalKind::DomainGoal(DomainGoal::WellFormed(WellFormed::Ty(ty))))
- }
-
- ty::Predicate::Subtype(predicate) => tcx.mk_goal(GoalKind::Quantified(
- QuantifierKind::Universal,
- predicate.map_bound(|pred| tcx.mk_goal(GoalKind::Subtype(pred.a, pred.b))),
- )),
-
- other => tcx.mk_goal(GoalKind::from_poly_domain_goal(other.lower(), tcx)),
- },
- });
-
- debug!("evaluate_goal(goal = {:?})", goal);
-
- let context = ChalkContext { _arenas: ChalkArenas { _phantom: PhantomData }, tcx };
-
- let mut forest = Forest::new(context);
- let solution = forest.solve(&goal);
-
- debug!("evaluate_goal: solution = {:?}", solution);
-
- solution.map(|ok| Ok(&*tcx.arena.alloc(ok))).unwrap_or(Err(traits::query::NoSolution))
-}
+++ /dev/null
-use crate::generic_types;
-use crate::lowering::Lower;
-use rustc::traits::{Clause, GoalKind, ProgramClause, ProgramClauseCategory};
-use rustc::ty::subst::{GenericArg, InternalSubsts, Subst};
-use rustc::ty::{self, Ty, TyCtxt};
-use rustc_hir as hir;
-use rustc_hir::def_id::DefId;
-
-/// Returns a predicate of the form
-/// `Implemented(ty: Trait) :- Implemented(nested: Trait)...`
-/// where `Trait` is specified by `trait_def_id`.
-fn builtin_impl_clause(
- tcx: TyCtxt<'tcx>,
- ty: Ty<'tcx>,
- nested: &[GenericArg<'tcx>],
- trait_def_id: DefId,
-) -> ProgramClause<'tcx> {
- ProgramClause {
- goal: ty::TraitPredicate {
- trait_ref: ty::TraitRef { def_id: trait_def_id, substs: tcx.mk_substs_trait(ty, &[]) },
- }
- .lower(),
- hypotheses: tcx.mk_goals(
- nested
- .iter()
- .cloned()
- .map(|nested_ty| ty::TraitRef {
- def_id: trait_def_id,
- substs: tcx.mk_substs_trait(nested_ty.expect_ty(), &[]),
- })
- .map(|trait_ref| ty::TraitPredicate { trait_ref })
- .map(|pred| GoalKind::DomainGoal(pred.lower()))
- .map(|goal_kind| tcx.mk_goal(goal_kind)),
- ),
- category: ProgramClauseCategory::Other,
- }
-}
-
-crate fn assemble_builtin_unsize_impls<'tcx>(
- tcx: TyCtxt<'tcx>,
- unsize_def_id: DefId,
- source: Ty<'tcx>,
- target: Ty<'tcx>,
- clauses: &mut Vec<Clause<'tcx>>,
-) {
- match (&source.kind, &target.kind) {
- (ty::Dynamic(data_a, ..), ty::Dynamic(data_b, ..)) => {
- if data_a.principal_def_id() != data_b.principal_def_id()
- || data_b.auto_traits().any(|b| data_a.auto_traits().all(|a| a != b))
- {
- return;
- }
-
- // FIXME: rules for trait upcast
- }
-
- (_, &ty::Dynamic(..)) => {
- // FIXME: basically, we should have something like:
- // ```
- // forall<T> {
- // Implemented(T: Unsize< for<...> dyn Trait<...> >) :-
- // for<...> Implemented(T: Trait<...>).
- // }
- // ```
- // The question is: how to correctly handle the higher-ranked
- // `for<...>` binder in order to have a generic rule?
- // (Having generic rules is useful for caching, as we may be able
- // to turn this function and others into tcx queries later on).
- }
-
- (ty::Array(_, length), ty::Slice(_)) => {
- let ty_param = generic_types::bound(tcx, 0);
- let array_ty = tcx.mk_ty(ty::Array(ty_param, length));
- let slice_ty = tcx.mk_ty(ty::Slice(ty_param));
-
- // `forall<T> { Implemented([T; N]: Unsize<[T]>). }`
- let clause = ProgramClause {
- goal: ty::TraitPredicate {
- trait_ref: ty::TraitRef {
- def_id: unsize_def_id,
- substs: tcx.mk_substs_trait(array_ty, &[slice_ty.into()]),
- },
- }
- .lower(),
- hypotheses: ty::List::empty(),
- category: ProgramClauseCategory::Other,
- };
-
- clauses.push(Clause::ForAll(ty::Binder::bind(clause)));
- }
-
- (ty::Infer(ty::TyVar(_)), _) | (_, ty::Infer(ty::TyVar(_))) => {
- // FIXME: ambiguous
- }
-
- (ty::Adt(def_id_a, ..), ty::Adt(def_id_b, ..)) => {
- if def_id_a != def_id_b {
- return;
- }
-
- // FIXME: rules for struct unsizing
- }
-
- (&ty::Tuple(tys_a), &ty::Tuple(tys_b)) => {
- if tys_a.len() != tys_b.len() {
- return;
- }
-
- // FIXME: rules for tuple unsizing
- }
-
- _ => (),
- }
-}
-
-crate fn assemble_builtin_sized_impls<'tcx>(
- tcx: TyCtxt<'tcx>,
- sized_def_id: DefId,
- ty: Ty<'tcx>,
- clauses: &mut Vec<Clause<'tcx>>,
-) {
- let mut push_builtin_impl = |ty: Ty<'tcx>, nested: &[GenericArg<'tcx>]| {
- let clause = builtin_impl_clause(tcx, ty, nested, sized_def_id);
- // Bind innermost bound vars that may exist in `ty` and `nested`.
- clauses.push(Clause::ForAll(ty::Binder::bind(clause)));
- };
-
- match &ty.kind {
- // Non-parametric primitive types.
- ty::Bool
- | ty::Char
- | ty::Int(..)
- | ty::Uint(..)
- | ty::Float(..)
- | ty::Infer(ty::IntVar(_))
- | ty::Infer(ty::FloatVar(_))
- | ty::Error
- | ty::Never => push_builtin_impl(ty, &[]),
-
- // These ones are always `Sized`.
- &ty::Array(_, length) => {
- push_builtin_impl(tcx.mk_ty(ty::Array(generic_types::bound(tcx, 0), length)), &[]);
- }
- ty::RawPtr(ptr) => {
- push_builtin_impl(generic_types::raw_ptr(tcx, ptr.mutbl), &[]);
- }
- &ty::Ref(_, _, mutbl) => {
- push_builtin_impl(generic_types::ref_ty(tcx, mutbl), &[]);
- }
- ty::FnPtr(fn_ptr) => {
- let fn_ptr = fn_ptr.skip_binder();
- let fn_ptr = generic_types::fn_ptr(
- tcx,
- fn_ptr.inputs_and_output.len(),
- fn_ptr.c_variadic,
- fn_ptr.unsafety,
- fn_ptr.abi,
- );
- push_builtin_impl(fn_ptr, &[]);
- }
- &ty::FnDef(def_id, ..) => {
- push_builtin_impl(generic_types::fn_def(tcx, def_id), &[]);
- }
- &ty::Closure(def_id, ..) => {
- push_builtin_impl(generic_types::closure(tcx, def_id), &[]);
- }
- &ty::Generator(def_id, ..) => {
- push_builtin_impl(generic_types::generator(tcx, def_id), &[]);
- }
-
- // `Sized` if the last type is `Sized` (because otherwise we would get a WF error anyway).
- &ty::Tuple(type_list) => {
- let type_list = generic_types::type_list(tcx, type_list.len());
- push_builtin_impl(tcx.mk_ty(ty::Tuple(type_list)), &type_list);
- }
-
- // Struct def
- ty::Adt(adt_def, _) => {
- let substs = InternalSubsts::bound_vars_for_item(tcx, adt_def.did);
- let adt = tcx.mk_ty(ty::Adt(adt_def, substs));
- let sized_constraint = adt_def
- .sized_constraint(tcx)
- .iter()
- .map(|ty| GenericArg::from(ty.subst(tcx, substs)))
- .collect::<Vec<_>>();
- push_builtin_impl(adt, &sized_constraint);
- }
-
- // Artificially trigger an ambiguity by adding two possible types to
- // unify against.
- ty::Infer(ty::TyVar(_)) => {
- push_builtin_impl(tcx.types.i32, &[]);
- push_builtin_impl(tcx.types.f32, &[]);
- }
-
- ty::Projection(_projection_ty) => {
- // FIXME: add builtin impls from the associated type values found in
- // trait impls of `projection_ty.trait_ref(tcx)`.
- }
-
- // The `Sized` bound can only come from the environment.
- ty::Param(..) | ty::Placeholder(..) | ty::UnnormalizedProjection(..) => (),
-
- // Definitely not `Sized`.
- ty::Foreign(..) | ty::Str | ty::Slice(..) | ty::Dynamic(..) | ty::Opaque(..) => (),
-
- ty::Bound(..)
- | ty::GeneratorWitness(..)
- | ty::Infer(ty::FreshTy(_))
- | ty::Infer(ty::FreshIntTy(_))
- | ty::Infer(ty::FreshFloatTy(_)) => bug!("unexpected type {:?}", ty),
- }
-}
-
-crate fn assemble_builtin_copy_clone_impls<'tcx>(
- tcx: TyCtxt<'tcx>,
- trait_def_id: DefId,
- ty: Ty<'tcx>,
- clauses: &mut Vec<Clause<'tcx>>,
-) {
- let mut push_builtin_impl = |ty: Ty<'tcx>, nested: &[GenericArg<'tcx>]| {
- let clause = builtin_impl_clause(tcx, ty, nested, trait_def_id);
- // Bind innermost bound vars that may exist in `ty` and `nested`.
- clauses.push(Clause::ForAll(ty::Binder::bind(clause)));
- };
-
- match &ty.kind {
- // Implementations provided in libcore.
- ty::Bool
- | ty::Char
- | ty::Int(..)
- | ty::Uint(..)
- | ty::Float(..)
- | ty::RawPtr(..)
- | ty::Never
- | ty::Ref(_, _, hir::Mutability::Not) => (),
-
- // Non-parametric primitive types.
- ty::Infer(ty::IntVar(_)) | ty::Infer(ty::FloatVar(_)) | ty::Error => {
- push_builtin_impl(ty, &[])
- }
-
- // These implement `Copy`/`Clone` if their element types do.
- &ty::Array(_, length) => {
- let element_ty = generic_types::bound(tcx, 0);
- push_builtin_impl(
- tcx.mk_ty(ty::Array(element_ty, length)),
- &[GenericArg::from(element_ty)],
- );
- }
- &ty::Tuple(type_list) => {
- let type_list = generic_types::type_list(tcx, type_list.len());
- push_builtin_impl(tcx.mk_ty(ty::Tuple(type_list)), &**type_list);
- }
- &ty::Closure(def_id, ..) => {
- let closure_ty = generic_types::closure(tcx, def_id);
- let upvar_tys: Vec<_> = match &closure_ty.kind {
- ty::Closure(_, substs) => substs
- .as_closure()
- .upvar_tys(def_id, tcx)
- .map(|ty| GenericArg::from(ty))
- .collect(),
- _ => bug!(),
- };
- push_builtin_impl(closure_ty, &upvar_tys);
- }
-
- // These ones are always `Clone`.
- ty::FnPtr(fn_ptr) => {
- let fn_ptr = fn_ptr.skip_binder();
- let fn_ptr = generic_types::fn_ptr(
- tcx,
- fn_ptr.inputs_and_output.len(),
- fn_ptr.c_variadic,
- fn_ptr.unsafety,
- fn_ptr.abi,
- );
- push_builtin_impl(fn_ptr, &[]);
- }
- &ty::FnDef(def_id, ..) => {
- push_builtin_impl(generic_types::fn_def(tcx, def_id), &[]);
- }
-
- // These depend on whatever user-defined impls might exist.
- ty::Adt(_, _) => (),
-
- // Artificially trigger an ambiguity by adding two possible types to
- // unify against.
- ty::Infer(ty::TyVar(_)) => {
- push_builtin_impl(tcx.types.i32, &[]);
- push_builtin_impl(tcx.types.f32, &[]);
- }
-
- ty::Projection(_projection_ty) => {
- // FIXME: add builtin impls from the associated type values found in
- // trait impls of `projection_ty.trait_ref(tcx)`.
- }
-
- // The `Copy`/`Clone` bound can only come from the environment.
- ty::Param(..) | ty::Placeholder(..) | ty::UnnormalizedProjection(..) | ty::Opaque(..) => (),
-
- // Definitely not `Copy`/`Clone`.
- ty::Dynamic(..)
- | ty::Foreign(..)
- | ty::Generator(..)
- | ty::Str
- | ty::Slice(..)
- | ty::Ref(_, _, hir::Mutability::Mut) => (),
-
- ty::Bound(..)
- | ty::GeneratorWitness(..)
- | ty::Infer(ty::FreshTy(_))
- | ty::Infer(ty::FreshIntTy(_))
- | ty::Infer(ty::FreshFloatTy(_)) => bug!("unexpected type {:?}", ty),
- }
-}
+++ /dev/null
-mod builtin;
-mod primitive;
-
-use super::ChalkInferenceContext;
-use rustc::traits::{
- Clause, DomainGoal, Environment, FromEnv, ProgramClause, ProgramClauseCategory, WellFormed,
-};
-use rustc::ty::{self, TyCtxt};
-use rustc_hir::def_id::DefId;
-use std::iter;
-
-use self::builtin::*;
-use self::primitive::*;
-
-fn assemble_clauses_from_impls<'tcx>(
- tcx: TyCtxt<'tcx>,
- trait_def_id: DefId,
- clauses: &mut Vec<Clause<'tcx>>,
-) {
- tcx.for_each_impl(trait_def_id, |impl_def_id| {
- clauses.extend(tcx.program_clauses_for(impl_def_id).into_iter().cloned());
- });
-}
-
-fn assemble_clauses_from_assoc_ty_values<'tcx>(
- tcx: TyCtxt<'tcx>,
- trait_def_id: DefId,
- clauses: &mut Vec<Clause<'tcx>>,
-) {
- tcx.for_each_impl(trait_def_id, |impl_def_id| {
- for def_id in tcx.associated_item_def_ids(impl_def_id).iter() {
- clauses.extend(tcx.program_clauses_for(*def_id).into_iter().cloned());
- }
- });
-}
-
-impl ChalkInferenceContext<'cx, 'tcx> {
- pub(super) fn program_clauses_impl(
- &self,
- environment: &Environment<'tcx>,
- goal: &DomainGoal<'tcx>,
- ) -> Vec<Clause<'tcx>> {
- use rustc::infer::canonical::OriginalQueryValues;
- use rustc::traits::WhereClause::*;
-
- let goal = self.infcx.resolve_vars_if_possible(goal);
-
- debug!("program_clauses(goal = {:?})", goal);
-
- let mut clauses = match goal {
- DomainGoal::Holds(Implemented(trait_predicate)) => {
- // These come from:
- // * implementations of the trait itself (rule `Implemented-From-Impl`)
- // * the trait decl (rule `Implemented-From-Env`)
-
- let mut clauses = vec![];
-
- assemble_clauses_from_impls(self.infcx.tcx, trait_predicate.def_id(), &mut clauses);
-
- if Some(trait_predicate.def_id()) == self.infcx.tcx.lang_items().sized_trait() {
- assemble_builtin_sized_impls(
- self.infcx.tcx,
- trait_predicate.def_id(),
- trait_predicate.self_ty(),
- &mut clauses,
- );
- }
-
- if Some(trait_predicate.def_id()) == self.infcx.tcx.lang_items().unsize_trait() {
- let source = trait_predicate.self_ty();
- let target = trait_predicate.trait_ref.substs.type_at(1);
- assemble_builtin_unsize_impls(
- self.infcx.tcx,
- trait_predicate.def_id(),
- source,
- target,
- &mut clauses,
- );
- }
-
- if Some(trait_predicate.def_id()) == self.infcx.tcx.lang_items().copy_trait() {
- assemble_builtin_copy_clone_impls(
- self.infcx.tcx,
- trait_predicate.def_id(),
- trait_predicate.self_ty(),
- &mut clauses,
- );
- }
-
- if Some(trait_predicate.def_id()) == self.infcx.tcx.lang_items().clone_trait() {
- // For all builtin impls, the conditions for `Copy` and
- // `Clone` are the same.
- assemble_builtin_copy_clone_impls(
- self.infcx.tcx,
- trait_predicate.def_id(),
- trait_predicate.self_ty(),
- &mut clauses,
- );
- }
-
- // FIXME: we need to add special rules for other builtin impls:
- // * `Generator`
- // * `FnOnce` / `FnMut` / `Fn`
- // * trait objects
- // * auto traits
-
- // Rule `Implemented-From-Env` will be computed from the environment.
- clauses
- }
-
- DomainGoal::Holds(ProjectionEq(projection_predicate)) => {
- // These come from:
- // * the assoc type definition (rule `ProjectionEq-Placeholder`)
- // * normalization of the assoc ty values (rule `ProjectionEq-Normalize`)
- // * implied bounds from trait definitions (rule `Implied-Bound-From-Trait`)
- // * implied bounds from type definitions (rule `Implied-Bound-From-Type`)
-
- let clauses = self
- .infcx
- .tcx
- .program_clauses_for(projection_predicate.projection_ty.item_def_id)
- .into_iter()
- // only select `ProjectionEq-Placeholder` and `ProjectionEq-Normalize`
- .filter(|clause| clause.category() == ProgramClauseCategory::Other)
- .cloned()
- .collect::<Vec<_>>();
-
- // Rules `Implied-Bound-From-Trait` and `Implied-Bound-From-Type` will be computed
- // from the environment.
- clauses
- }
-
- // For outlive requirements, just assume they hold. `ResolventOps::resolvent_clause`
- // will register them as actual region constraints later.
- DomainGoal::Holds(RegionOutlives(..)) | DomainGoal::Holds(TypeOutlives(..)) => {
- vec![Clause::Implies(ProgramClause {
- goal,
- hypotheses: ty::List::empty(),
- category: ProgramClauseCategory::Other,
- })]
- }
-
- DomainGoal::WellFormed(WellFormed::Trait(trait_predicate)) => {
- // These come from -- the trait decl (rule `WellFormed-TraitRef`).
- self.infcx
- .tcx
- .program_clauses_for(trait_predicate.def_id())
- .into_iter()
- // only select `WellFormed-TraitRef`
- .filter(|clause| clause.category() == ProgramClauseCategory::WellFormed)
- .cloned()
- .collect()
- }
-
- DomainGoal::WellFormed(WellFormed::Ty(ty)) => {
- // These come from:
- // * the associated type definition if `ty` refers to an unnormalized
- // associated type (rule `WellFormed-AssocTy`)
- // * custom rules for built-in types
- // * the type definition otherwise (rule `WellFormed-Type`)
- let clauses = match ty.kind {
- ty::Projection(data) => self.infcx.tcx.program_clauses_for(data.item_def_id),
-
- // These types are always WF.
- ty::Bool
- | ty::Char
- | ty::Int(..)
- | ty::Uint(..)
- | ty::Float(..)
- | ty::Str
- | ty::Param(..)
- | ty::Placeholder(..)
- | ty::Error
- | ty::Never => {
- let wf_clause = ProgramClause {
- goal,
- hypotheses: ty::List::empty(),
- category: ProgramClauseCategory::WellFormed,
- };
- let wf_clause = Clause::Implies(wf_clause);
-
- self.infcx.tcx.mk_clauses(iter::once(wf_clause))
- }
-
- // Always WF (recall that we do not check for parameters to be WF).
- ty::RawPtr(ptr) => wf_clause_for_raw_ptr(self.infcx.tcx, ptr.mutbl),
-
- // Always WF (recall that we do not check for parameters to be WF).
- ty::FnPtr(fn_ptr) => {
- let fn_ptr = fn_ptr.skip_binder();
- wf_clause_for_fn_ptr(
- self.infcx.tcx,
- fn_ptr.inputs_and_output.len(),
- fn_ptr.c_variadic,
- fn_ptr.unsafety,
- fn_ptr.abi,
- )
- }
-
- // WF if inner type is `Sized`.
- ty::Slice(..) => wf_clause_for_slice(self.infcx.tcx),
-
- // WF if inner type is `Sized`.
- ty::Array(_, length) => wf_clause_for_array(self.infcx.tcx, length),
-
- // WF if all types but the last one are `Sized`.
- ty::Tuple(types) => wf_clause_for_tuple(self.infcx.tcx, types.len()),
-
- // WF if `sub_ty` outlives `region`.
- ty::Ref(_, _, mutbl) => wf_clause_for_ref(self.infcx.tcx, mutbl),
-
- ty::FnDef(def_id, ..) => wf_clause_for_fn_def(self.infcx.tcx, def_id),
-
- ty::Dynamic(..) => {
- // FIXME: no rules yet for trait objects
- ty::List::empty()
- }
-
- ty::Adt(def, ..) => self.infcx.tcx.program_clauses_for(def.did),
-
- // FIXME: these are probably wrong
- ty::Foreign(def_id)
- | ty::Closure(def_id, ..)
- | ty::Generator(def_id, ..)
- | ty::Opaque(def_id, ..) => self.infcx.tcx.program_clauses_for(def_id),
-
- // Artificially trigger an ambiguity.
- ty::Infer(..) => {
- let tcx = self.infcx.tcx;
- let types = [tcx.types.i32, tcx.types.u32, tcx.types.f32, tcx.types.f64];
- let clauses = types
- .iter()
- .cloned()
- .map(|ty| ProgramClause {
- goal: DomainGoal::WellFormed(WellFormed::Ty(ty)),
- hypotheses: ty::List::empty(),
- category: ProgramClauseCategory::WellFormed,
- })
- .map(|clause| Clause::Implies(clause));
- tcx.mk_clauses(clauses)
- }
-
- ty::GeneratorWitness(..) | ty::UnnormalizedProjection(..) | ty::Bound(..) => {
- bug!("unexpected type {:?}", ty)
- }
- };
-
- clauses
- .into_iter()
- .filter(|clause| clause.category() == ProgramClauseCategory::WellFormed)
- .cloned()
- .collect()
- }
-
- DomainGoal::FromEnv(FromEnv::Trait(..)) => {
- // These come from:
- // * implied bounds from trait definitions (rule `Implied-Bound-From-Trait`)
- // * implied bounds from type definitions (rule `Implied-Bound-From-Type`)
- // * implied bounds from assoc type defs (rules `Implied-Trait-From-AssocTy`,
- // `Implied-Bound-From-AssocTy` and `Implied-WC-From-AssocTy`)
-
- // All of these rules are computed in the environment.
- vec![]
- }
-
- DomainGoal::FromEnv(FromEnv::Ty(..)) => {
- // There are no `FromEnv::Ty(..) :- ...` rules (this predicate only
- // comes from the environment).
- vec![]
- }
-
- DomainGoal::Normalize(projection_predicate) => {
- // These come from -- assoc ty values (rule `Normalize-From-Impl`).
- let mut clauses = vec![];
-
- assemble_clauses_from_assoc_ty_values(
- self.infcx.tcx,
- projection_predicate.projection_ty.trait_ref(self.infcx.tcx).def_id,
- &mut clauses,
- );
-
- clauses
- }
- };
-
- debug!("program_clauses: clauses = {:?}", clauses);
- debug!("program_clauses: adding clauses from environment = {:?}", environment);
-
- let mut _orig_query_values = OriginalQueryValues::default();
- let canonical_environment =
- self.infcx.canonicalize_query(environment, &mut _orig_query_values).value;
- let env_clauses = self.infcx.tcx.program_clauses_for_env(canonical_environment);
-
- debug!("program_clauses: env_clauses = {:?}", env_clauses);
-
- clauses.extend(env_clauses.into_iter().cloned());
- clauses.extend(environment.clauses.iter().cloned());
- clauses
- }
-}
+++ /dev/null
-use crate::generic_types;
-use crate::lowering::Lower;
-use rustc::traits::{
- Clause, Clauses, DomainGoal, GoalKind, ProgramClause, ProgramClauseCategory, WellFormed,
-};
-use rustc::ty::{self, TyCtxt};
-use rustc_hir as hir;
-use rustc_hir::def_id::DefId;
-use rustc_target::spec::abi;
-use std::iter;
-
-crate fn wf_clause_for_raw_ptr(tcx: TyCtxt<'_>, mutbl: hir::Mutability) -> Clauses<'_> {
- let ptr_ty = generic_types::raw_ptr(tcx, mutbl);
-
- let wf_clause = ProgramClause {
- goal: DomainGoal::WellFormed(WellFormed::Ty(ptr_ty)),
- hypotheses: ty::List::empty(),
- category: ProgramClauseCategory::WellFormed,
- };
- let wf_clause = Clause::Implies(wf_clause);
-
- // `forall<T> { WellFormed(*const T). }`
- tcx.mk_clauses(iter::once(wf_clause))
-}
-
-crate fn wf_clause_for_fn_ptr(
- tcx: TyCtxt<'_>,
- arity_and_output: usize,
- variadic: bool,
- unsafety: hir::Unsafety,
- abi: abi::Abi,
-) -> Clauses<'_> {
- let fn_ptr = generic_types::fn_ptr(tcx, arity_and_output, variadic, unsafety, abi);
-
- let wf_clause = ProgramClause {
- goal: DomainGoal::WellFormed(WellFormed::Ty(fn_ptr)),
- hypotheses: ty::List::empty(),
- category: ProgramClauseCategory::WellFormed,
- };
- let wf_clause = Clause::ForAll(ty::Binder::bind(wf_clause));
-
- // `forall <T1, ..., Tn+1> { WellFormed(for<> fn(T1, ..., Tn) -> Tn+1). }`
- // where `n + 1` == `arity_and_output`
- tcx.mk_clauses(iter::once(wf_clause))
-}
-
-crate fn wf_clause_for_slice(tcx: TyCtxt<'_>) -> Clauses<'_> {
- let ty = generic_types::bound(tcx, 0);
- let slice_ty = tcx.mk_slice(ty);
-
- let sized_trait = match tcx.lang_items().sized_trait() {
- Some(def_id) => def_id,
- None => return ty::List::empty(),
- };
- let sized_implemented =
- ty::TraitRef { def_id: sized_trait, substs: tcx.mk_substs_trait(ty, ty::List::empty()) };
- let sized_implemented: DomainGoal<'_> =
- ty::TraitPredicate { trait_ref: sized_implemented }.lower();
-
- let wf_clause = ProgramClause {
- goal: DomainGoal::WellFormed(WellFormed::Ty(slice_ty)),
- hypotheses: tcx.mk_goals(iter::once(tcx.mk_goal(GoalKind::DomainGoal(sized_implemented)))),
- category: ProgramClauseCategory::WellFormed,
- };
- let wf_clause = Clause::ForAll(ty::Binder::bind(wf_clause));
-
- // `forall<T> { WellFormed([T]) :- Implemented(T: Sized). }`
- tcx.mk_clauses(iter::once(wf_clause))
-}
-
-crate fn wf_clause_for_array<'tcx>(
- tcx: TyCtxt<'tcx>,
- length: &'tcx ty::Const<'tcx>,
-) -> Clauses<'tcx> {
- let ty = generic_types::bound(tcx, 0);
- let array_ty = tcx.mk_ty(ty::Array(ty, length));
-
- let sized_trait = match tcx.lang_items().sized_trait() {
- Some(def_id) => def_id,
- None => return ty::List::empty(),
- };
- let sized_implemented =
- ty::TraitRef { def_id: sized_trait, substs: tcx.mk_substs_trait(ty, ty::List::empty()) };
- let sized_implemented: DomainGoal<'_> =
- ty::TraitPredicate { trait_ref: sized_implemented }.lower();
-
- let wf_clause = ProgramClause {
- goal: DomainGoal::WellFormed(WellFormed::Ty(array_ty)),
- hypotheses: tcx.mk_goals(iter::once(tcx.mk_goal(GoalKind::DomainGoal(sized_implemented)))),
- category: ProgramClauseCategory::WellFormed,
- };
- let wf_clause = Clause::ForAll(ty::Binder::bind(wf_clause));
-
- // `forall<T> { WellFormed([T; length]) :- Implemented(T: Sized). }`
- tcx.mk_clauses(iter::once(wf_clause))
-}
-
-crate fn wf_clause_for_tuple(tcx: TyCtxt<'_>, arity: usize) -> Clauses<'_> {
- let type_list = generic_types::type_list(tcx, arity);
- let tuple_ty = tcx.mk_ty(ty::Tuple(type_list));
-
- let sized_trait = match tcx.lang_items().sized_trait() {
- Some(def_id) => def_id,
- None => return ty::List::empty(),
- };
-
- // If `arity == 0` (i.e. the unit type) or `arity == 1`, this list of
- // hypotheses is actually empty.
- let sized_implemented = type_list[0..std::cmp::max(arity, 1) - 1]
- .iter()
- .map(|ty| ty::TraitRef {
- def_id: sized_trait,
- substs: tcx.mk_substs_trait(ty.expect_ty(), ty::List::empty()),
- })
- .map(|trait_ref| ty::TraitPredicate { trait_ref })
- .map(|predicate| predicate.lower());
-
- let wf_clause = ProgramClause {
- goal: DomainGoal::WellFormed(WellFormed::Ty(tuple_ty)),
- hypotheses: tcx.mk_goals(
- sized_implemented.map(|domain_goal| tcx.mk_goal(GoalKind::DomainGoal(domain_goal))),
- ),
- category: ProgramClauseCategory::WellFormed,
- };
- let wf_clause = Clause::ForAll(ty::Binder::bind(wf_clause));
-
- // ```
- // forall<T1, ..., Tn-1, Tn> {
- // WellFormed((T1, ..., Tn)) :-
- // Implemented(T1: Sized),
- // ...
- // Implemented(Tn-1: Sized).
- // }
- // ```
- tcx.mk_clauses(iter::once(wf_clause))
-}
-
-crate fn wf_clause_for_ref(tcx: TyCtxt<'_>, mutbl: hir::Mutability) -> Clauses<'_> {
- let region = tcx.mk_region(ty::ReLateBound(ty::INNERMOST, ty::BoundRegion::BrAnon(0)));
- let ty = generic_types::bound(tcx, 1);
- let ref_ty = tcx.mk_ref(region, ty::TypeAndMut { ty, mutbl });
-
- let outlives: DomainGoal<'_> = ty::OutlivesPredicate(ty, region).lower();
- let wf_clause = ProgramClause {
- goal: DomainGoal::WellFormed(WellFormed::Ty(ref_ty)),
- hypotheses: tcx.mk_goals(iter::once(tcx.mk_goal(outlives.into_goal()))),
- category: ProgramClauseCategory::WellFormed,
- };
- let wf_clause = Clause::ForAll(ty::Binder::bind(wf_clause));
-
- // `forall<'a, T> { WellFormed(&'a T) :- Outlives(T: 'a). }`
- tcx.mk_clauses(iter::once(wf_clause))
-}
-
-crate fn wf_clause_for_fn_def(tcx: TyCtxt<'_>, def_id: DefId) -> Clauses<'_> {
- let fn_def = generic_types::fn_def(tcx, def_id);
-
- let wf_clause = ProgramClause {
- goal: DomainGoal::WellFormed(WellFormed::Ty(fn_def)),
- hypotheses: ty::List::empty(),
- category: ProgramClauseCategory::WellFormed,
- };
- let wf_clause = Clause::ForAll(ty::Binder::bind(wf_clause));
-
- // `forall <T1, ..., Tn+1> { WellFormed(fn some_fn(T1, ..., Tn) -> Tn+1). }`
- // where `def_id` maps to the `some_fn` function definition
- tcx.mk_clauses(iter::once(wf_clause))
-}
+++ /dev/null
-use chalk_engine::fallible::{Fallible, NoSolution};
-use chalk_engine::{context, ExClause, Literal};
-use rustc::ty::relate::{Relate, RelateResult, TypeRelation};
-use rustc::ty::subst::GenericArg;
-use rustc::ty::{self, Ty, TyCtxt};
-use rustc_infer::infer::canonical::{Canonical, CanonicalVarValues};
-use rustc_infer::infer::{InferCtxt, LateBoundRegionConversionTime};
-use rustc_infer::traits::{
- Clause, DomainGoal, Environment, Goal, GoalKind, InEnvironment, ProgramClause, WhereClause,
-};
-use rustc_span::DUMMY_SP;
-
-use super::unify::*;
-use super::{ChalkArenas, ChalkExClause, ChalkInferenceContext, ConstrainedSubst};
-
-impl context::ResolventOps<ChalkArenas<'tcx>, ChalkArenas<'tcx>>
- for ChalkInferenceContext<'cx, 'tcx>
-{
- fn resolvent_clause(
- &mut self,
- environment: &Environment<'tcx>,
- goal: &DomainGoal<'tcx>,
- subst: &CanonicalVarValues<'tcx>,
- clause: &Clause<'tcx>,
- ) -> Fallible<Canonical<'tcx, ChalkExClause<'tcx>>> {
- use chalk_engine::context::UnificationOps;
-
- debug!("resolvent_clause(goal = {:?}, clause = {:?})", goal, clause);
-
- let result = self.infcx.probe(|_| {
- let ProgramClause { goal: consequence, hypotheses, .. } = match clause {
- Clause::Implies(program_clause) => *program_clause,
- Clause::ForAll(program_clause) => {
- self.infcx
- .replace_bound_vars_with_fresh_vars(
- DUMMY_SP,
- LateBoundRegionConversionTime::HigherRankedType,
- program_clause,
- )
- .0
- }
- };
-
- let result =
- unify(self.infcx, *environment, ty::Variance::Invariant, goal, &consequence)
- .map_err(|_| NoSolution)?;
-
- let mut ex_clause = ExClause {
- subst: subst.clone(),
- delayed_literals: vec![],
- constraints: vec![],
- subgoals: vec![],
- };
-
- self.into_ex_clause(result, &mut ex_clause);
-
- ex_clause.subgoals.extend(hypotheses.iter().map(|g| match g {
- GoalKind::Not(g) => Literal::Negative(environment.with(*g)),
- g => Literal::Positive(environment.with(*g)),
- }));
-
- // If we have a goal of the form `T: 'a` or `'a: 'b`, then just
- // assume it is true (no subgoals) and register it as a constraint
- // instead.
- match goal {
- DomainGoal::Holds(WhereClause::RegionOutlives(pred)) => {
- assert_eq!(ex_clause.subgoals.len(), 0);
- ex_clause.constraints.push(ty::OutlivesPredicate(pred.0.into(), pred.1));
- }
-
- DomainGoal::Holds(WhereClause::TypeOutlives(pred)) => {
- assert_eq!(ex_clause.subgoals.len(), 0);
- ex_clause.constraints.push(ty::OutlivesPredicate(pred.0.into(), pred.1));
- }
-
- _ => (),
- };
-
- let canonical_ex_clause = self.canonicalize_ex_clause(&ex_clause);
- Ok(canonical_ex_clause)
- });
-
- debug!("resolvent_clause: result = {:?}", result);
- result
- }
-
- fn apply_answer_subst(
- &mut self,
- ex_clause: ChalkExClause<'tcx>,
- selected_goal: &InEnvironment<'tcx, Goal<'tcx>>,
- answer_table_goal: &Canonical<'tcx, InEnvironment<'tcx, Goal<'tcx>>>,
- canonical_answer_subst: &Canonical<'tcx, ConstrainedSubst<'tcx>>,
- ) -> Fallible<ChalkExClause<'tcx>> {
- debug!(
- "apply_answer_subst(ex_clause = {:?}, selected_goal = {:?})",
- self.infcx.resolve_vars_if_possible(&ex_clause),
- self.infcx.resolve_vars_if_possible(selected_goal)
- );
-
- let (answer_subst, _) = self
- .infcx
- .instantiate_canonical_with_fresh_inference_vars(DUMMY_SP, canonical_answer_subst);
-
- let mut substitutor = AnswerSubstitutor {
- infcx: self.infcx,
- environment: selected_goal.environment,
- answer_subst: answer_subst.subst,
- binder_index: ty::INNERMOST,
- ex_clause,
- };
-
- substitutor.relate(&answer_table_goal.value, &selected_goal).map_err(|_| NoSolution)?;
-
- let mut ex_clause = substitutor.ex_clause;
- ex_clause.constraints.extend(answer_subst.constraints);
-
- debug!("apply_answer_subst: ex_clause = {:?}", ex_clause);
- Ok(ex_clause)
- }
-}
-
-struct AnswerSubstitutor<'cx, 'tcx> {
- infcx: &'cx InferCtxt<'cx, 'tcx>,
- environment: Environment<'tcx>,
- answer_subst: CanonicalVarValues<'tcx>,
- binder_index: ty::DebruijnIndex,
- ex_clause: ChalkExClause<'tcx>,
-}
-
-impl AnswerSubstitutor<'cx, 'tcx> {
- fn unify_free_answer_var(
- &mut self,
- answer_var: ty::BoundVar,
- pending: GenericArg<'tcx>,
- ) -> RelateResult<'tcx, ()> {
- let answer_param = &self.answer_subst.var_values[answer_var];
- let pending =
- &ty::fold::shift_out_vars(self.infcx.tcx, &pending, self.binder_index.as_u32());
-
- super::into_ex_clause(
- unify(self.infcx, self.environment, ty::Variance::Invariant, answer_param, pending)?,
- &mut self.ex_clause,
- );
-
- Ok(())
- }
-}
-
-impl TypeRelation<'tcx> for AnswerSubstitutor<'cx, 'tcx> {
- fn tcx(&self) -> TyCtxt<'tcx> {
- self.infcx.tcx
- }
-
- fn param_env(&self) -> ty::ParamEnv<'tcx> {
- // FIXME(oli-obk): learn chalk and create param envs
- ty::ParamEnv::empty()
- }
-
- fn tag(&self) -> &'static str {
- "chalk_context::answer_substitutor"
- }
-
- fn a_is_expected(&self) -> bool {
- true
- }
-
- fn relate_with_variance<T: Relate<'tcx>>(
- &mut self,
- _variance: ty::Variance,
- a: &T,
- b: &T,
- ) -> RelateResult<'tcx, T> {
- // We don't care about variance.
- self.relate(a, b)
- }
-
- fn binders<T: Relate<'tcx>>(
- &mut self,
- a: &ty::Binder<T>,
- b: &ty::Binder<T>,
- ) -> RelateResult<'tcx, ty::Binder<T>> {
- self.binder_index.shift_in(1);
- let result = self.relate(a.skip_binder(), b.skip_binder())?;
- self.binder_index.shift_out(1);
- Ok(ty::Binder::bind(result))
- }
-
- fn tys(&mut self, a: Ty<'tcx>, b: Ty<'tcx>) -> RelateResult<'tcx, Ty<'tcx>> {
- let b = self.infcx.shallow_resolve(b);
- debug!("AnswerSubstitutor::tys(a = {:?}, b = {:?})", a, b);
-
- if let &ty::Bound(debruijn, bound_ty) = &a.kind {
- // Free bound var
- if debruijn == self.binder_index {
- self.unify_free_answer_var(bound_ty.var, b.into())?;
- return Ok(b);
- }
- }
-
- match (&a.kind, &b.kind) {
- (&ty::Bound(a_debruijn, a_bound), &ty::Bound(b_debruijn, b_bound)) => {
- assert_eq!(a_debruijn, b_debruijn);
- assert_eq!(a_bound.var, b_bound.var);
- Ok(a)
- }
-
- // Those should have been canonicalized away.
- (ty::Placeholder(..), _) => {
- bug!("unexpected placeholder ty in `AnswerSubstitutor`: {:?} ", a);
- }
-
- // Everything else should just be a perfect match as well,
- // and we forbid inference variables.
- _ => match ty::relate::super_relate_tys(self, a, b) {
- Ok(ty) => Ok(ty),
- Err(err) => bug!("type mismatch in `AnswerSubstitutor`: {}", err),
- },
- }
- }
-
- fn regions(
- &mut self,
- a: ty::Region<'tcx>,
- b: ty::Region<'tcx>,
- ) -> RelateResult<'tcx, ty::Region<'tcx>> {
- let b = match b {
- &ty::ReVar(vid) => self
- .infcx
- .inner
- .borrow_mut()
- .unwrap_region_constraints()
- .opportunistic_resolve_var(self.infcx.tcx, vid),
-
- other => other,
- };
-
- if let &ty::ReLateBound(debruijn, bound) = a {
- // Free bound region
- if debruijn == self.binder_index {
- self.unify_free_answer_var(bound.assert_bound_var(), b.into())?;
- return Ok(b);
- }
- }
-
- match (a, b) {
- (&ty::ReLateBound(a_debruijn, a_bound), &ty::ReLateBound(b_debruijn, b_bound)) => {
- assert_eq!(a_debruijn, b_debruijn);
- assert_eq!(a_bound.assert_bound_var(), b_bound.assert_bound_var());
- }
-
- (ty::ReStatic, ty::ReStatic) | (ty::ReErased, ty::ReErased) => (),
-
- (ty::ReEmpty(a_ui), ty::ReEmpty(b_ui)) => {
- assert_eq!(a_ui, b_ui);
- }
-
- (&ty::ReFree(a_free), &ty::ReFree(b_free)) => {
- assert_eq!(a_free, b_free);
- }
-
- _ => bug!("unexpected regions in `AnswerSubstitutor`: {:?}, {:?}", a, b),
- }
-
- Ok(a)
- }
-
- fn consts(
- &mut self,
- a: &'tcx ty::Const<'tcx>,
- b: &'tcx ty::Const<'tcx>,
- ) -> RelateResult<'tcx, &'tcx ty::Const<'tcx>> {
- if let ty::Const { val: ty::ConstKind::Bound(debruijn, bound_ct), .. } = a {
- if *debruijn == self.binder_index {
- self.unify_free_answer_var(*bound_ct, b.into())?;
- return Ok(b);
- }
- }
-
- match (a, b) {
- (
- ty::Const { val: ty::ConstKind::Bound(a_debruijn, a_bound), .. },
- ty::Const { val: ty::ConstKind::Bound(b_debruijn, b_bound), .. },
- ) => {
- assert_eq!(a_debruijn, b_debruijn);
- assert_eq!(a_bound, b_bound);
- Ok(a)
- }
-
- // Everything else should just be a perfect match as well,
- // and we forbid inference variables.
- _ => match ty::relate::super_relate_consts(self, a, b) {
- Ok(ct) => Ok(ct),
- Err(err) => bug!("const mismatch in `AnswerSubstitutor`: {}", err),
- },
- }
- }
-}
+++ /dev/null
-use rustc::ty;
-use rustc::ty::relate::{Relate, RelateResult, TypeRelation};
-use rustc_infer::infer::nll_relate::{NormalizationStrategy, TypeRelating, TypeRelatingDelegate};
-use rustc_infer::infer::{InferCtxt, RegionVariableOrigin};
-use rustc_infer::traits::{DomainGoal, Environment, Goal, InEnvironment};
-use rustc_span::DUMMY_SP;
-
-crate struct UnificationResult<'tcx> {
- crate goals: Vec<InEnvironment<'tcx, Goal<'tcx>>>,
- crate constraints: Vec<super::RegionConstraint<'tcx>>,
-}
-
-crate fn unify<'me, 'tcx, T: Relate<'tcx>>(
- infcx: &'me InferCtxt<'me, 'tcx>,
- environment: Environment<'tcx>,
- variance: ty::Variance,
- a: &T,
- b: &T,
-) -> RelateResult<'tcx, UnificationResult<'tcx>> {
- debug!(
- "unify(
- a = {:?},
- b = {:?},
- environment = {:?},
- )",
- a, b, environment
- );
-
- let mut delegate = ChalkTypeRelatingDelegate::new(infcx, environment);
-
- TypeRelating::new(infcx, &mut delegate, variance).relate(a, b)?;
-
- debug!("unify: goals = {:?}, constraints = {:?}", delegate.goals, delegate.constraints);
-
- Ok(UnificationResult { goals: delegate.goals, constraints: delegate.constraints })
-}
-
-struct ChalkTypeRelatingDelegate<'me, 'tcx> {
- infcx: &'me InferCtxt<'me, 'tcx>,
- environment: Environment<'tcx>,
- goals: Vec<InEnvironment<'tcx, Goal<'tcx>>>,
- constraints: Vec<super::RegionConstraint<'tcx>>,
-}
-
-impl ChalkTypeRelatingDelegate<'me, 'tcx> {
- fn new(infcx: &'me InferCtxt<'me, 'tcx>, environment: Environment<'tcx>) -> Self {
- Self { infcx, environment, goals: Vec::new(), constraints: Vec::new() }
- }
-}
-
-impl TypeRelatingDelegate<'tcx> for &mut ChalkTypeRelatingDelegate<'_, 'tcx> {
- fn create_next_universe(&mut self) -> ty::UniverseIndex {
- self.infcx.create_next_universe()
- }
-
- fn next_existential_region_var(&mut self, _was_placeholder: bool) -> ty::Region<'tcx> {
- self.infcx.next_region_var(RegionVariableOrigin::MiscVariable(DUMMY_SP))
- }
-
- fn next_placeholder_region(&mut self, placeholder: ty::PlaceholderRegion) -> ty::Region<'tcx> {
- self.infcx.tcx.mk_region(ty::RePlaceholder(placeholder))
- }
-
- fn generalize_existential(&mut self, universe: ty::UniverseIndex) -> ty::Region<'tcx> {
- self.infcx
- .next_region_var_in_universe(RegionVariableOrigin::MiscVariable(DUMMY_SP), universe)
- }
-
- fn push_outlives(&mut self, sup: ty::Region<'tcx>, sub: ty::Region<'tcx>) {
- self.constraints.push(ty::OutlivesPredicate(sup.into(), sub));
- }
-
- fn push_domain_goal(&mut self, domain_goal: DomainGoal<'tcx>) {
- let goal = self.environment.with(self.infcx.tcx.mk_goal(domain_goal.into_goal()));
- self.goals.push(goal);
- }
-
- fn normalization() -> NormalizationStrategy {
- NormalizationStrategy::Lazy
- }
-
- fn forbid_inference_vars() -> bool {
- false
- }
-}
use rustc_hir::def_id::DefId;
use rustc_infer::infer::canonical::{Canonical, QueryResponse};
use rustc_infer::infer::TyCtxtInferExt;
-use rustc_infer::traits::query::dropck_outlives::trivial_dropck_outlives;
-use rustc_infer::traits::query::dropck_outlives::{DropckOutlivesResult, DtorckConstraint};
-use rustc_infer::traits::query::{CanonicalTyGoal, NoSolution};
-use rustc_infer::traits::{Normalized, ObligationCause, TraitEngine, TraitEngineExt};
+use rustc_infer::traits::TraitEngineExt as _;
use rustc_span::source_map::{Span, DUMMY_SP};
+use rustc_trait_selection::traits::query::dropck_outlives::trivial_dropck_outlives;
+use rustc_trait_selection::traits::query::dropck_outlives::{
+ DropckOutlivesResult, DtorckConstraint,
+};
+use rustc_trait_selection::traits::query::normalize::AtExt;
+use rustc_trait_selection::traits::query::{CanonicalTyGoal, NoSolution};
+use rustc_trait_selection::traits::{
+ Normalized, ObligationCause, TraitEngine, TraitEngineExt as _,
+};
crate fn provide(p: &mut Providers<'_>) {
*p = Providers { dropck_outlives, adt_dtorck_constraint, ..*p };
use rustc::ty::query::Providers;
use rustc::ty::{ParamEnvAnd, TyCtxt};
use rustc_infer::infer::TyCtxtInferExt;
-use rustc_infer::traits::query::CanonicalPredicateGoal;
-use rustc_infer::traits::{
+use rustc_span::source_map::DUMMY_SP;
+use rustc_trait_selection::traits::query::CanonicalPredicateGoal;
+use rustc_trait_selection::traits::{
EvaluationResult, Obligation, ObligationCause, OverflowError, SelectionContext, TraitQueryMode,
};
-use rustc_span::source_map::DUMMY_SP;
crate fn provide(p: &mut Providers<'_>) {
*p = Providers { evaluate_obligation, ..*p };
+++ /dev/null
-//! Utilities for creating generic types with bound vars in place of parameter values.
-
-use rustc::ty::subst::{GenericArg, InternalSubsts, SubstsRef};
-use rustc::ty::{self, Ty, TyCtxt};
-use rustc_hir as hir;
-use rustc_hir::def_id::DefId;
-use rustc_target::spec::abi;
-
-crate fn bound(tcx: TyCtxt<'tcx>, index: u32) -> Ty<'tcx> {
- let ty = ty::Bound(ty::INNERMOST, ty::BoundVar::from_u32(index).into());
- tcx.mk_ty(ty)
-}
-
-crate fn raw_ptr(tcx: TyCtxt<'tcx>, mutbl: hir::Mutability) -> Ty<'tcx> {
- tcx.mk_ptr(ty::TypeAndMut { ty: bound(tcx, 0), mutbl })
-}
-
-crate fn fn_ptr(
- tcx: TyCtxt<'tcx>,
- arity_and_output: usize,
- c_variadic: bool,
- unsafety: hir::Unsafety,
- abi: abi::Abi,
-) -> Ty<'tcx> {
- let inputs_and_output = tcx.mk_type_list(
- (0..arity_and_output)
- .map(|i| ty::BoundVar::from(i))
- // DebruijnIndex(1) because we are going to inject these in a `PolyFnSig`
- .map(|var| tcx.mk_ty(ty::Bound(ty::DebruijnIndex::from(1usize), var.into()))),
- );
-
- let fn_sig = ty::Binder::bind(ty::FnSig { inputs_and_output, c_variadic, unsafety, abi });
- tcx.mk_fn_ptr(fn_sig)
-}
-
-crate fn type_list(tcx: TyCtxt<'tcx>, arity: usize) -> SubstsRef<'tcx> {
- tcx.mk_substs(
- (0..arity)
- .map(|i| ty::BoundVar::from(i))
- .map(|var| tcx.mk_ty(ty::Bound(ty::INNERMOST, var.into())))
- .map(|ty| GenericArg::from(ty)),
- )
-}
-
-crate fn ref_ty(tcx: TyCtxt<'tcx>, mutbl: hir::Mutability) -> Ty<'tcx> {
- let region = tcx.mk_region(ty::ReLateBound(ty::INNERMOST, ty::BoundRegion::BrAnon(0)));
-
- tcx.mk_ref(region, ty::TypeAndMut { ty: bound(tcx, 1), mutbl })
-}
-
-crate fn fn_def(tcx: TyCtxt<'tcx>, def_id: DefId) -> Ty<'tcx> {
- tcx.mk_ty(ty::FnDef(def_id, InternalSubsts::bound_vars_for_item(tcx, def_id)))
-}
-
-crate fn closure(tcx: TyCtxt<'tcx>, def_id: DefId) -> Ty<'tcx> {
- tcx.mk_closure(def_id, InternalSubsts::bound_vars_for_item(tcx, def_id))
-}
-
-crate fn generator(tcx: TyCtxt<'tcx>, def_id: DefId) -> Ty<'tcx> {
- tcx.mk_generator(
- def_id,
- InternalSubsts::bound_vars_for_item(tcx, def_id),
- hir::Movability::Movable,
- )
-}
use rustc_hir as hir;
use rustc_infer::infer::canonical::{self, Canonical};
use rustc_infer::infer::{InferCtxt, TyCtxtInferExt};
-use rustc_infer::traits::query::outlives_bounds::OutlivesBound;
-use rustc_infer::traits::query::{CanonicalTyGoal, Fallible, NoSolution};
-use rustc_infer::traits::wf;
-use rustc_infer::traits::FulfillmentContext;
-use rustc_infer::traits::{TraitEngine, TraitEngineExt};
+use rustc_infer::traits::TraitEngineExt as _;
use rustc_span::source_map::DUMMY_SP;
+use rustc_trait_selection::infer::InferCtxtBuilderExt;
+use rustc_trait_selection::traits::query::outlives_bounds::OutlivesBound;
+use rustc_trait_selection::traits::query::{CanonicalTyGoal, Fallible, NoSolution};
+use rustc_trait_selection::traits::wf;
+use rustc_trait_selection::traits::FulfillmentContext;
+use rustc_trait_selection::traits::TraitEngine;
use smallvec::{smallvec, SmallVec};
crate fn provide(p: &mut Providers<'_>) {
// to avoids duplicate errors that otherwise show up.
fulfill_cx.register_predicate_obligations(
infcx,
- obligations.iter().filter(|o| o.predicate.has_infer_types()).cloned(),
+ obligations.iter().filter(|o| o.predicate.has_infer_types_or_consts()).cloned(),
);
// From the full set of obligations, just filter down to the
#[macro_use]
extern crate rustc;
-mod chalk_context;
mod dropck_outlives;
mod evaluate_obligation;
-mod generic_types;
mod implied_outlives_bounds;
pub mod lowering;
mod normalize_erasing_regions;
evaluate_obligation::provide(p);
implied_outlives_bounds::provide(p);
lowering::provide(p);
- chalk_context::provide(p);
normalize_projection_ty::provide(p);
normalize_erasing_regions::provide(p);
type_op::provide(p);
let node_kind = match node {
Node::TraitItem(item) => match item.kind {
- TraitItemKind::Method(..) => NodeKind::Fn,
+ TraitItemKind::Fn(..) => NodeKind::Fn,
_ => NodeKind::Other,
},
Node::ImplItem(item) => match item.kind {
- ImplItemKind::Method(..) => NodeKind::Fn,
+ ImplItemKind::Fn(..) => NodeKind::Fn,
_ => NodeKind::Other,
},
}
}
-/// Used for implied bounds related rules (see rustc guide).
+/// Used for implied bounds related rules (see rustc dev guide).
trait IntoFromEnvGoal {
/// Transforms an existing goal into a `FromEnv` goal.
fn into_from_env_goal(self) -> Self;
}
-/// Used for well-formedness related rules (see rustc guide).
+/// Used for well-formedness related rules (see rustc dev guide).
trait IntoWellFormedGoal {
/// Transforms an existing goal into a `WellFormed` goal.
fn into_well_formed_goal(self) -> Self;
fn program_clauses_for_trait(tcx: TyCtxt<'_>, def_id: DefId) -> Clauses<'_> {
// `trait Trait<P1..Pn> where WC { .. } // P0 == Self`
- // Rule Implemented-From-Env (see rustc guide)
+ // Rule Implemented-From-Env (see rustc dev guide)
//
// ```
// forall<Self, P1..Pn> {
return List::empty();
}
- // Rule Implemented-From-Impl (see rustc guide)
+ // Rule Implemented-From-Impl (see rustc dev guide)
//
// `impl<P0..Pn> Trait<A1..An> for A0 where WC { .. }`
//
}
pub fn program_clauses_for_associated_type_value(tcx: TyCtxt<'_>, item_id: DefId) -> Clauses<'_> {
- // Rule Normalize-From-Impl (see rustc guide)
+ // Rule Normalize-From-Impl (see rustc dev guide)
//
// ```
// impl<P0..Pn> Trait<A1..An> for A0 {
impl Visitor<'tcx> for ClauseDumper<'tcx> {
type Map = Map<'tcx>;
- fn nested_visit_map<'this>(&'this mut self) -> NestedVisitorMap<'this, Self::Map> {
- NestedVisitorMap::OnlyBodies(&self.tcx.hir())
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
+ NestedVisitorMap::OnlyBodies(self.tcx.hir())
}
fn visit_item(&mut self, item: &'tcx hir::Item<'tcx>) {
use rustc::ty::query::Providers;
use rustc::ty::{self, ParamEnvAnd, Ty, TyCtxt};
use rustc_infer::infer::TyCtxtInferExt;
-use rustc_infer::traits::{Normalized, ObligationCause};
+use rustc_trait_selection::traits::query::normalize::AtExt;
+use rustc_trait_selection::traits::{Normalized, ObligationCause};
use std::sync::atomic::Ordering;
crate fn provide(p: &mut Providers<'_>) {
use rustc_hir as hir;
use rustc_infer::infer::canonical::{Canonical, QueryResponse};
use rustc_infer::infer::TyCtxtInferExt;
-use rustc_infer::traits::query::{
+use rustc_infer::traits::TraitEngineExt as _;
+use rustc_span::DUMMY_SP;
+use rustc_trait_selection::infer::InferCtxtBuilderExt;
+use rustc_trait_selection::traits::query::{
normalize::NormalizationResult, CanonicalProjectionGoal, NoSolution,
};
-use rustc_infer::traits::{self, ObligationCause, SelectionContext, TraitEngineExt};
-use rustc_span::DUMMY_SP;
+use rustc_trait_selection::traits::{self, ObligationCause, SelectionContext};
use std::sync::atomic::Ordering;
crate fn provide(p: &mut Providers<'_>) {
use rustc_infer::infer::at::ToTrace;
use rustc_infer::infer::canonical::{Canonical, QueryResponse};
use rustc_infer::infer::{InferCtxt, TyCtxtInferExt};
-use rustc_infer::traits::query::type_op::ascribe_user_type::AscribeUserType;
-use rustc_infer::traits::query::type_op::eq::Eq;
-use rustc_infer::traits::query::type_op::normalize::Normalize;
-use rustc_infer::traits::query::type_op::prove_predicate::ProvePredicate;
-use rustc_infer::traits::query::type_op::subtype::Subtype;
-use rustc_infer::traits::query::{Fallible, NoSolution};
-use rustc_infer::traits::{Normalized, Obligation, ObligationCause, TraitEngine, TraitEngineExt};
+use rustc_infer::traits::TraitEngineExt as _;
use rustc_span::DUMMY_SP;
+use rustc_trait_selection::infer::InferCtxtBuilderExt;
+use rustc_trait_selection::infer::InferCtxtExt;
+use rustc_trait_selection::traits::query::normalize::AtExt;
+use rustc_trait_selection::traits::query::type_op::ascribe_user_type::AscribeUserType;
+use rustc_trait_selection::traits::query::type_op::eq::Eq;
+use rustc_trait_selection::traits::query::type_op::normalize::Normalize;
+use rustc_trait_selection::traits::query::type_op::prove_predicate::ProvePredicate;
+use rustc_trait_selection::traits::query::type_op::subtype::Subtype;
+use rustc_trait_selection::traits::query::{Fallible, NoSolution};
+use rustc_trait_selection::traits::{Normalized, Obligation, ObligationCause, TraitEngine};
use std::fmt;
crate fn provide(p: &mut Providers<'_>) {
rustc_hir = { path = "../librustc_hir" }
rustc_infer = { path = "../librustc_infer" }
rustc_span = { path = "../librustc_span" }
+rustc_session = { path = "../librustc_session" }
rustc_target = { path = "../librustc_target" }
+rustc_trait_selection = { path = "../librustc_trait_selection" }
use rustc::middle::lang_items;
use rustc::ty::{self, Ty, TyCtxt};
use rustc_infer::infer::TyCtxtInferExt;
-use rustc_infer::traits;
use rustc_span::DUMMY_SP;
+use rustc_trait_selection::traits;
fn is_copy_raw<'tcx>(tcx: TyCtxt<'tcx>, query: ty::ParamEnvAnd<'tcx, Ty<'tcx>>) -> bool {
is_item_raw(tcx, query, lang_items::CopyTraitLangItem)
use rustc::ty::subst::SubstsRef;
use rustc::ty::{self, Instance, TyCtxt, TypeFoldable};
use rustc_hir::def_id::DefId;
-use rustc_infer::traits;
+use rustc_span::sym;
use rustc_target::spec::abi::Abi;
+use rustc_trait_selection::traits;
use log::debug;
debug!(" => intrinsic");
ty::InstanceDef::Intrinsic(def_id)
}
- _ => {
- if Some(def_id) == tcx.lang_items().drop_in_place_fn() {
- let ty = substs.type_at(0);
- if ty.needs_drop(tcx, param_env.with_reveal_all()) {
- debug!(" => nontrivial drop glue");
- ty::InstanceDef::DropGlue(def_id, Some(ty))
- } else {
- debug!(" => trivial drop glue");
- ty::InstanceDef::DropGlue(def_id, None)
+ ty::FnDef(def_id, substs) if Some(def_id) == tcx.lang_items().drop_in_place_fn() => {
+ let ty = substs.type_at(0);
+
+ if ty.needs_drop(tcx, param_env) {
+ // `DropGlue` requires a monomorphic (i.e., concrete) type.
+ if ty.needs_subst() {
+ return None;
}
+
+ debug!(" => nontrivial drop glue");
+ ty::InstanceDef::DropGlue(def_id, Some(ty))
} else {
- debug!(" => free item");
- ty::InstanceDef::Item(def_id)
+ debug!(" => trivial drop glue");
+ ty::InstanceDef::DropGlue(def_id, None)
}
}
+ _ => {
+ debug!(" => free item");
+ ty::InstanceDef::Item(def_id)
+ }
};
- Some(Instance { def: def, substs: substs })
+ Some(Instance { def, substs })
};
debug!("resolve(def_id={:?}, substs={:?}) = {:?}", def_id, substs, result);
result
);
let trait_ref = ty::TraitRef::from_method(tcx, trait_id, rcvr_substs);
- let vtbl = tcx.codegen_fulfill_obligation((param_env, ty::Binder::bind(trait_ref)));
+ let vtbl = tcx.codegen_fulfill_obligation((param_env, ty::Binder::bind(trait_ref)))?;
// Now that we know which impl is being used, we can dispatch to
// the actual function:
trait_closure_kind,
))
}
- traits::VtableFnPointer(ref data) => Some(Instance {
- def: ty::InstanceDef::FnPtrShim(trait_item.def_id, data.fn_ty),
- substs: rcvr_substs,
- }),
+ traits::VtableFnPointer(ref data) => {
+ // `FnPtrShim` requires a monomorphic (i.e., concrete) type.
+ if data.fn_ty.needs_subst() {
+ return None;
+ }
+
+ Some(Instance {
+ def: ty::InstanceDef::FnPtrShim(trait_item.def_id, data.fn_ty),
+ substs: rcvr_substs,
+ })
+ }
traits::VtableObject(ref data) => {
let index = traits::get_vtable_index_of_object_method(tcx, data, def_id);
Some(Instance { def: ty::InstanceDef::Virtual(def_id, index), substs: rcvr_substs })
}
traits::VtableBuiltin(..) => {
- if tcx.lang_items().clone_trait().is_some() {
- Some(Instance {
- def: ty::InstanceDef::CloneShim(def_id, trait_ref.self_ty()),
- substs: rcvr_substs,
- })
+ if Some(trait_ref.def_id) == tcx.lang_items().clone_trait() {
+ // FIXME(eddyb) use lang items for methods instead of names.
+ let name = tcx.item_name(def_id);
+ if name == sym::clone {
+ let self_ty = trait_ref.self_ty();
+
+ // `CloneShim` requires a monomorphic (i.e., concrete) type.
+ if self_ty.needs_subst() {
+ return None;
+ }
+
+ Some(Instance {
+ def: ty::InstanceDef::CloneShim(def_id, self_ty),
+ substs: rcvr_substs,
+ })
+ } else {
+ assert_eq!(name, sym::clone_from);
+
+ // Use the default `fn clone_from` from `trait Clone`.
+ let substs = tcx.erase_regions(&rcvr_substs);
+ Some(ty::Instance::new(def_id, substs))
+ }
} else {
None
}
use rustc::hir::map as hir_map;
-use rustc::session::CrateDisambiguator;
use rustc::ty::subst::Subst;
use rustc::ty::{self, ToPredicate, Ty, TyCtxt, WithConstness};
use rustc_data_structures::svh::Svh;
use rustc_hir as hir;
use rustc_hir::def_id::{CrateNum, DefId, LOCAL_CRATE};
-use rustc_infer::traits;
+use rustc_session::CrateDisambiguator;
use rustc_span::symbol::Symbol;
use rustc_span::Span;
+use rustc_trait_selection::traits;
fn sized_constraint_for_ty<'tcx>(
tcx: TyCtxt<'tcx>,
}
}
-fn associated_items<'tcx>(tcx: TyCtxt<'tcx>, def_id: DefId) -> &'tcx ty::AssociatedItems {
+fn associated_items(tcx: TyCtxt<'_>, def_id: DefId) -> &ty::AssociatedItems {
let items = tcx.associated_item_def_ids(def_id).iter().map(|did| tcx.associated_item(*did));
tcx.arena.alloc(ty::AssociatedItems::new(items))
}
// are any errors at that point, so after type checking you can be
// sure that this will succeed without errors anyway.
- let unnormalized_env = ty::ParamEnv::new(
- tcx.intern_predicates(&predicates),
- traits::Reveal::UserFacing,
- tcx.sess.opts.debugging_opts.chalk.then_some(def_id),
- );
+ let unnormalized_env =
+ ty::ParamEnv::new(tcx.intern_predicates(&predicates), traits::Reveal::UserFacing, None);
let body_id = tcx.hir().as_local_hir_id(def_id).map_or(hir::DUMMY_HIR_ID, |id| {
tcx.hir().maybe_body_owned_by(id).map_or(id, |body| body.hir_id)
}
fn crate_hash(tcx: TyCtxt<'_>, crate_num: CrateNum) -> Svh {
- assert_eq!(crate_num, LOCAL_CRATE);
- tcx.hir().crate_hash
+ tcx.index_hir(crate_num).crate_hash
}
fn instance_def_size_estimate<'tcx>(
rustc_errors = { path = "../librustc_errors" }
rustc_hir = { path = "../librustc_hir" }
rustc_target = { path = "../librustc_target" }
+rustc_session = { path = "../librustc_session" }
smallvec = { version = "1.0", features = ["union", "may_dangle"] }
rustc_ast = { path = "../librustc_ast" }
rustc_span = { path = "../librustc_span" }
rustc_index = { path = "../librustc_index" }
rustc_infer = { path = "../librustc_infer" }
+rustc_trait_selection = { path = "../librustc_trait_selection" }
For high-level intro to how type checking works in rustc, see the
-[type checking] chapter of the [rustc guide].
+[type checking] chapter of the [rustc dev guide].
-[type checking]: https://rust-lang.github.io/rustc-guide/type-checking.html
-[rustc guide]: https://rust-lang.github.io/rustc-guide/
+[type checking]: https://rustc-dev-guide.rust-lang.org/type-checking.html
+[rustc dev guide]: https://rustc-dev-guide.rust-lang.org/
// ignore-tidy-filelength
use crate::collect::PlaceholderHirTyCollector;
-use crate::lint;
use crate::middle::lang_items::SizedTraitLangItem;
use crate::middle::resolve_lifetime as rl;
use crate::require_c_abi_if_c_variadic;
use crate::util::common::ErrorReported;
-use rustc::lint::builtin::AMBIGUOUS_ASSOCIATED_ITEMS;
-use rustc::session::{parse::feature_err, Session};
use rustc::ty::subst::{self, InternalSubsts, Subst, SubstsRef};
use rustc::ty::{self, Const, DefIdTree, ToPredicate, Ty, TyCtxt, TypeFoldable, WithConstness};
use rustc::ty::{GenericParamDef, GenericParamDefKind};
use rustc_hir::intravisit::Visitor;
use rustc_hir::print;
use rustc_hir::{Constness, ExprKind, GenericArg, GenericArgs};
-use rustc_infer::traits;
-use rustc_infer::traits::astconv_object_safety_violations;
-use rustc_infer::traits::error_reporting::report_object_safety_error;
-use rustc_infer::traits::wf::object_region_bounds;
+use rustc_session::lint::builtin::{AMBIGUOUS_ASSOCIATED_ITEMS, LATE_BOUND_LIFETIME_ARGUMENTS};
+use rustc_session::parse::feature_err;
+use rustc_session::Session;
use rustc_span::symbol::sym;
use rustc_span::{MultiSpan, Span, DUMMY_SP};
use rustc_target::spec::abi;
-use smallvec::SmallVec;
+use rustc_trait_selection::traits;
+use rustc_trait_selection::traits::astconv_object_safety_violations;
+use rustc_trait_selection::traits::error_reporting::report_object_safety_error;
+use rustc_trait_selection::traits::wf::object_region_bounds;
+use smallvec::SmallVec;
use std::collections::BTreeSet;
use std::iter;
use std::slice;
let mut multispan = MultiSpan::from_span(span);
multispan.push_span_label(span_late, note.to_string());
tcx.struct_span_lint_hir(
- lint::builtin::LATE_BOUND_LIFETIME_ARGUMENTS,
+ LATE_BOUND_LIFETIME_ARGUMENTS,
args.args[0].id(),
multispan,
|lint| lint.build(msg).emit(),
/// Given the type/lifetime/const arguments provided to some path (along with
/// an implicit `Self`, if this is a trait reference), returns the complete
/// set of substitutions. This may involve applying defaulted type parameters.
- /// Also returns back constriants on associated types.
+ /// Also returns back constraints on associated types.
///
/// Example:
///
let (assoc_ident, def_scope) =
tcx.adjust_ident_and_get_scope(binding.item_name, candidate.def_id(), hir_ref_id);
- // We have already adjusted the item name above, so compare with `ident.modern()` instead
+ // We have already adjusted the item name above, so compare with `ident.normalize_to_macros_2_0()` instead
// of calling `filter_by_name_and_kind`.
let assoc_ty = tcx
.associated_items(candidate.def_id())
.filter_by_name_unhygienic(assoc_ident.name)
- .find(|i| i.kind == ty::AssocKind::Type && i.ident.modern() == assoc_ident)
+ .find(|i| {
+ i.kind == ty::AssocKind::Type && i.ident.normalize_to_macros_2_0() == assoc_ident
+ })
.expect("missing associated type");
if !assoc_ty.vis.is_accessible_from(def_scope, tcx) {
}
for (projection_bound, _) in &bounds.projection_bounds {
- for (_, def_ids) in &mut associated_types {
+ for def_ids in associated_types.values_mut() {
def_ids.remove(&projection_bound.projection_def_id());
}
}
let (assoc_ident, def_scope) =
tcx.adjust_ident_and_get_scope(assoc_ident, trait_did, hir_ref_id);
- // We have already adjusted the item name above, so compare with `ident.modern()` instead
+ // We have already adjusted the item name above, so compare with `ident.normalize_to_macros_2_0()` instead
// of calling `filter_by_name_and_kind`.
let item = tcx
.associated_items(trait_did)
.in_definition_order()
- .find(|i| i.kind.namespace() == Namespace::TypeNS && i.ident.modern() == assoc_ident)
+ .find(|i| {
+ i.kind.namespace() == Namespace::TypeNS
+ && i.ident.normalize_to_macros_2_0() == assoc_ident
+ })
.expect("missing associated type");
let ty = self.projected_ty_from_poly_trait_ref(span, item.def_id, assoc_segment, bound);
}
// Case 4. Reference to a method or associated const.
- DefKind::Method | DefKind::AssocConst => {
+ DefKind::AssocFn | DefKind::AssocConst => {
if segments.len() >= 2 {
let generics = tcx.generics_of(def_id);
path_segs.push(PathSeg(generics.parent.unwrap(), last - 1));
let tcx = self.tcx();
- // We proactively collect all the infered type params to emit a single error per fn def.
+ // We proactively collect all the inferred type params to emit a single error per fn def.
let mut visitor = PlaceholderHirTyCollector::default();
for ty in decl.inputs {
visitor.visit_ty(ty);
use rustc_hir as hir;
use rustc_hir::ExprKind;
use rustc_infer::infer::type_variable::{TypeVariableOrigin, TypeVariableOriginKind};
-use rustc_infer::traits::ObligationCauseCode;
-use rustc_infer::traits::{IfExpressionCause, MatchExpressionArmCause, ObligationCause};
use rustc_span::Span;
+use rustc_trait_selection::traits::ObligationCauseCode;
+use rustc_trait_selection::traits::{IfExpressionCause, MatchExpressionArmCause, ObligationCause};
impl<'a, 'tcx> FnCtxt<'a, 'tcx> {
pub fn check_match(
use super::method::MethodCallee;
use super::{FnCtxt, Needs, PlaceOp};
-use rustc::session::DiagnosticMessageId;
use rustc::ty::adjustment::{Adjust, Adjustment, OverloadedDeref};
use rustc::ty::{self, TraitRef, Ty, TyCtxt, WithConstness};
use rustc::ty::{ToPredicate, TypeFoldable};
+use rustc_ast::ast::Ident;
use rustc_errors::struct_span_err;
use rustc_hir as hir;
use rustc_infer::infer::{InferCtxt, InferOk};
-use rustc_infer::traits::{self, TraitEngine};
-
-use rustc_ast::ast::Ident;
+use rustc_session::DiagnosticMessageId;
use rustc_span::Span;
+use rustc_trait_selection::traits::query::evaluate_obligation::InferCtxtExt;
+use rustc_trait_selection::traits::{self, TraitEngine};
use std::iter;
use super::FnCtxt;
use crate::hir::def_id::DefId;
-use crate::lint;
use crate::type_error_struct;
use crate::util::common::ErrorReported;
use rustc::middle::lang_items;
-use rustc::session::Session;
use rustc::ty::adjustment::AllowTwoPhase;
use rustc::ty::cast::{CastKind, CastTy};
use rustc::ty::error::TypeError;
use rustc_ast::ast;
use rustc_errors::{struct_span_err, Applicability, DiagnosticBuilder};
use rustc_hir as hir;
-use rustc_infer::traits;
-use rustc_infer::traits::error_reporting::report_object_safety_error;
+use rustc_session::lint;
+use rustc_session::Session;
use rustc_span::Span;
+use rustc_trait_selection::traits;
+use rustc_trait_selection::traits::error_reporting::report_object_safety_error;
/// Reifies a cast check to be checked once we have full type information for
/// a function context.
use rustc_infer::infer::type_variable::{TypeVariableOrigin, TypeVariableOriginKind};
use rustc_infer::infer::LateBoundRegionConversionTime;
use rustc_infer::infer::{InferOk, InferResult};
-use rustc_infer::traits::error_reporting::ArgKind;
-use rustc_infer::traits::Obligation;
use rustc_span::source_map::Span;
use rustc_target::spec::abi::Abi;
+use rustc_trait_selection::traits::error_reporting::ArgKind;
+use rustc_trait_selection::traits::error_reporting::InferCtxtExt as _;
+use rustc_trait_selection::traits::Obligation;
use std::cmp;
use std::iter;
// The `Future` trait has only one associated item, `Output`,
// so check that this is what we see.
let output_assoc_item =
- self.tcx.associated_items(future_trait).in_definition_order().nth(0).unwrap().def_id;
+ self.tcx.associated_items(future_trait).in_definition_order().next().unwrap().def_id;
if output_assoc_item != predicate.projection_ty.item_def_id {
span_bug!(
cause_span,
use crate::astconv::AstConv;
use crate::check::{FnCtxt, Needs};
-use rustc::session::parse::feature_err;
use rustc::ty::adjustment::{
Adjust, Adjustment, AllowTwoPhase, AutoBorrow, AutoBorrowMutability, PointerCast,
};
use rustc_hir::def_id::DefId;
use rustc_infer::infer::type_variable::{TypeVariableOrigin, TypeVariableOriginKind};
use rustc_infer::infer::{Coercion, InferOk, InferResult};
-use rustc_infer::traits::{self, ObligationCause, ObligationCauseCode};
+use rustc_session::parse::feature_err;
use rustc_span::symbol::sym;
use rustc_span::{self, Span};
use rustc_target::spec::abi::Abi;
+use rustc_trait_selection::traits::error_reporting::InferCtxtExt;
+use rustc_trait_selection::traits::{self, ObligationCause, ObligationCauseCode};
+
use smallvec::{smallvec, SmallVec};
use std::ops::Deref;
vec![]
}
-fn simple<'tcx>(kind: Adjust<'tcx>) -> impl FnOnce(Ty<'tcx>) -> Vec<Adjustment<'tcx>> {
+fn simple(kind: Adjust<'tcx>) -> impl FnOnce(Ty<'tcx>) -> Vec<Adjustment<'tcx>> {
move |target| vec![Adjustment { kind, target }]
}
-use rustc::hir::map::Map;
use rustc::ty::error::{ExpectedFound, TypeError};
use rustc::ty::subst::{InternalSubsts, Subst};
use rustc::ty::util::ExplicitSelf;
use rustc_hir::intravisit;
use rustc_hir::{GenericParamKind, ImplItemKind, TraitItemKind};
use rustc_infer::infer::{self, InferOk, TyCtxtInferExt};
-use rustc_infer::traits::{self, ObligationCause, ObligationCauseCode, Reveal};
use rustc_span::Span;
+use rustc_trait_selection::traits::error_reporting::InferCtxtExt;
+use rustc_trait_selection::traits::{self, ObligationCause, ObligationCauseCode, Reveal};
use super::{potentially_plural_count, FnCtxt, Inherited};
let tcx = infcx.tcx;
let impl_m_hir_id = tcx.hir().as_local_hir_id(impl_m.def_id).unwrap();
let (impl_m_output, impl_m_iter) = match tcx.hir().expect_impl_item(impl_m_hir_id).kind {
- ImplItemKind::Method(ref impl_m_sig, _) => {
+ ImplItemKind::Fn(ref impl_m_sig, _) => {
(&impl_m_sig.decl.output, impl_m_sig.decl.inputs.iter())
}
_ => bug!("{:?} is not a method", impl_m),
TypeError::Mutability => {
if let Some(trait_m_hir_id) = tcx.hir().as_local_hir_id(trait_m.def_id) {
let trait_m_iter = match tcx.hir().expect_trait_item(trait_m_hir_id).kind {
- TraitItemKind::Method(ref trait_m_sig, _) => trait_m_sig.decl.inputs.iter(),
- _ => bug!("{:?} is not a TraitItemKind::Method", trait_m),
+ TraitItemKind::Fn(ref trait_m_sig, _) => trait_m_sig.decl.inputs.iter(),
+ _ => bug!("{:?} is not a TraitItemKind::Fn", trait_m),
};
impl_m_iter
if let Some(trait_m_hir_id) = tcx.hir().as_local_hir_id(trait_m.def_id) {
let (trait_m_output, trait_m_iter) =
match tcx.hir().expect_trait_item(trait_m_hir_id).kind {
- TraitItemKind::Method(ref trait_m_sig, _) => {
+ TraitItemKind::Fn(ref trait_m_sig, _) => {
(&trait_m_sig.decl.output, trait_m_sig.decl.inputs.iter())
}
- _ => bug!("{:?} is not a TraitItemKind::Method", trait_m),
+ _ => bug!("{:?} is not a TraitItemKind::Fn", trait_m),
};
let impl_iter = impl_sig.inputs().iter();
let trait_m_hir_id = tcx.hir().as_local_hir_id(trait_m.def_id);
let trait_span = if let Some(trait_id) = trait_m_hir_id {
match tcx.hir().expect_trait_item(trait_id).kind {
- TraitItemKind::Method(ref trait_m_sig, _) => {
+ TraitItemKind::Fn(ref trait_m_sig, _) => {
let pos = if trait_number_args > 0 { trait_number_args - 1 } else { 0 };
if let Some(arg) = trait_m_sig.decl.inputs.get(pos) {
Some(if pos == 0 {
};
let impl_m_hir_id = tcx.hir().as_local_hir_id(impl_m.def_id).unwrap();
let impl_span = match tcx.hir().expect_impl_item(impl_m_hir_id).kind {
- ImplItemKind::Method(ref impl_m_sig, _) => {
+ ImplItemKind::Fn(ref impl_m_sig, _) => {
let pos = if impl_number_args > 0 { impl_number_args - 1 } else { 0 };
if let Some(arg) = impl_m_sig.decl.inputs.get(pos) {
if pos == 0 {
let impl_m = tcx.hir().as_local_hir_id(impl_m.def_id)?;
let impl_m = tcx.hir().impl_item(hir::ImplItemId { hir_id: impl_m });
let input_tys = match impl_m.kind {
- hir::ImplItemKind::Method(ref sig, _) => sig.decl.inputs,
+ hir::ImplItemKind::Fn(ref sig, _) => sig.decl.inputs,
_ => unreachable!(),
};
struct Visitor(Option<Span>, hir::def_id::DefId);
}
}
}
- type Map = Map<'v>;
+ type Map = intravisit::ErasedMap<'v>;
fn nested_visit_map(
&mut self,
- ) -> intravisit::NestedVisitorMap<'_, Self::Map>
+ ) -> intravisit::NestedVisitorMap<Self::Map>
{
intravisit::NestedVisitorMap::None
}
use crate::check::FnCtxt;
use rustc_infer::infer::InferOk;
-use rustc_infer::traits::{self, ObligationCause};
+use rustc_trait_selection::infer::InferCtxtExt as _;
+use rustc_trait_selection::traits::query::evaluate_obligation::InferCtxtExt as _;
+use rustc_trait_selection::traits::{self, ObligationCause};
use rustc::ty::adjustment::AllowTwoPhase;
use rustc::ty::{self, AssocItem, Ty};
//
// FIXME? Other potential candidate methods: `as_ref` and
// `as_mut`?
- .find(|a| a.check_name(sym::rustc_conversion_suggestion))
- .is_some()
+ .any(|a| a.check_name(sym::rustc_conversion_suggestion))
});
methods
) -> Option<(Span, &'static str, String)> {
let sm = self.sess().source_map();
let sp = expr.span;
- if !sm.span_to_filename(sp).is_real() {
+ if sm.is_imported(sp) {
// Ignore if span is from within a macro #41858, #58298. We previously used the macro
// call span, but that breaks down when the type error comes from multiple calls down.
return None;
{
// We have `&T`, check if what was expected was `T`. If so,
// we may want to suggest removing a `&`.
- if !sm.span_to_filename(expr.span).is_real() {
+ if sm.is_imported(expr.span) {
if let Ok(code) = sm.span_to_snippet(sp) {
- if code.chars().next() == Some('&') {
+ if code.starts_with('&') {
return Some((
sp,
"consider removing the borrow",
// FIXME(estebank): modify once we decide to suggest `as` casts
return false;
}
- if !self.tcx.sess.source_map().span_to_filename(expr.span).is_real() {
+ if self.tcx.sess.source_map().is_imported(expr.span) {
// Ignore if span is from within a macro.
return false;
}
use rustc::ty::{self, Predicate, Ty, TyCtxt};
use rustc_errors::struct_span_err;
use rustc_infer::infer::outlives::env::OutlivesEnvironment;
-use rustc_infer::infer::{InferOk, SuppressRegionErrors, TyCtxtInferExt};
-use rustc_infer::traits::{ObligationCause, TraitEngine, TraitEngineExt};
+use rustc_infer::infer::{InferOk, RegionckMode, TyCtxtInferExt};
+use rustc_infer::traits::TraitEngineExt as _;
use rustc_span::Span;
+use rustc_trait_selection::traits::error_reporting::InferCtxtExt;
+use rustc_trait_selection::traits::query::dropck_outlives::AtExt;
+use rustc_trait_selection::traits::{ObligationCause, TraitEngine, TraitEngineExt};
/// This function confirms that the `Drop` implementation identified by
/// `drop_impl_did` is not any more specialized than the type it is
drop_impl_did,
®ion_scope_tree,
&outlives_env,
- SuppressRegionErrors::default(),
+ RegionckMode::default(),
);
Ok(())
})
use crate::util::common::ErrorReported;
use rustc::middle::lang_items;
+use rustc::mir::interpret::ErrorHandled;
use rustc::ty;
use rustc::ty::adjustment::{Adjust, Adjustment, AllowTwoPhase, AutoBorrow, AutoBorrowMutability};
use rustc::ty::Ty;
use rustc_hir::{ExprKind, QPath};
use rustc_infer::infer;
use rustc_infer::infer::type_variable::{TypeVariableOrigin, TypeVariableOriginKind};
-use rustc_infer::traits::{self, ObligationCauseCode};
use rustc_span::hygiene::DesugaringKind;
use rustc_span::source_map::Span;
use rustc_span::symbol::{kw, sym, Symbol};
+use rustc_trait_selection::traits::{self, ObligationCauseCode};
use std::fmt::Display;
let needs = Needs::maybe_mut_place(mutbl);
let ty = self.check_expr_with_expectation_and_needs(&oprnd, hint, needs);
- let tm = ty::TypeAndMut { ty: ty, mutbl: mutbl };
+ let tm = ty::TypeAndMut { ty, mutbl };
match kind {
_ if tm.ty.references_error() => self.tcx.types.err,
hir::BorrowKind::Raw => {
};
if element_ty.references_error() {
- tcx.types.err
- } else if let Ok(count) = count {
- tcx.mk_ty(ty::Array(t, count))
- } else {
- tcx.types.err
+ return tcx.types.err;
+ }
+ match count {
+ Ok(count) => tcx.mk_ty(ty::Array(t, count)),
+ Err(ErrorHandled::TooGeneric) => {
+ self.tcx.sess.span_err(
+ tcx.def_span(count_def_id),
+ "array lengths can't depend on generic parameters",
+ );
+ tcx.types.err
+ }
+ Err(ErrorHandled::Reported) => tcx.types.err,
}
}
.fields
.iter()
.enumerate()
- .map(|(i, field)| (field.ident.modern(), (i, field)))
+ .map(|(i, field)| (field.ident.normalize_to_macros_2_0(), (i, field)))
.collect::<FxHashMap<_, _>>();
let mut seen_fields = FxHashMap::default();
let (ident, def_scope) =
self.tcx.adjust_ident_and_get_scope(field, base_def.did, self.body_id);
let fields = &base_def.non_enum_variant().fields;
- if let Some(index) = fields.iter().position(|f| f.ident.modern() == ident) {
+ if let Some(index) =
+ fields.iter().position(|f| f.ident.normalize_to_macros_2_0() == ident)
+ {
let field = &fields[index];
let field_ty = self.field_ty(expr.span, field, substs);
// Save the index of all fields regardless of their visibility in case
}
fn point_at_param_definition(&self, err: &mut DiagnosticBuilder<'_>, param: ty::ParamTy) {
- let generics = self.tcx.generics_of(self.body_id.owner_def_id());
+ let generics = self.tcx.generics_of(self.body_id.owner.to_def_id());
let generic_param = generics.type_param(¶m, self.tcx);
if let ty::GenericParamDefKind::Type { synthetic: Some(..), .. } = generic_param.kind {
return;
//! types computed here.
use super::FnCtxt;
-use rustc::hir::map::Map;
use rustc::middle::region::{self, YieldData};
use rustc::ty::{self, Ty};
use rustc_data_structures::fx::{FxHashMap, FxHashSet};
// librustc/middle/region.rs since `expr_count` is compared against the results
// there.
impl<'a, 'tcx> Visitor<'tcx> for InteriorVisitor<'a, 'tcx> {
- type Map = Map<'tcx>;
+ type Map = intravisit::ErasedMap<'tcx>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
NestedVisitorMap::None
}
// ZST in a temporary, so skip its type, just in case it
// can significantly complicate the generator type.
Res::Def(DefKind::Fn, _)
- | Res::Def(DefKind::Method, _)
+ | Res::Def(DefKind::AssocFn, _)
| Res::Def(DefKind::Ctor(_, CtorKind::Fn), _) => {
// NOTE(eddyb) this assumes a path expression has
// no nested expressions to keep track of.
),
"rustc_peek" => (1, vec![param(0)], param(0)),
"caller_location" => (0, vec![], tcx.caller_location_ty()),
- "panic_if_uninhabited" => (1, Vec::new(), tcx.mk_unit()),
- "init" => (1, Vec::new(), param(0)),
- "uninit" => (1, Vec::new(), param(0)),
+ "assert_inhabited" | "assert_zero_valid" | "assert_uninit_valid" => {
+ (1, Vec::new(), tcx.mk_unit())
+ }
"forget" => (1, vec![param(0)], tcx.mk_unit()),
"transmute" => (2, vec![param(0)], param(1)),
"move_val_init" => (1, vec![tcx.mk_mut_ptr(param(0)), param(0)], tcx.mk_unit()),
"try" => {
let mut_u8 = tcx.mk_mut_ptr(tcx.types.u8);
- let fn_ty = ty::Binder::bind(tcx.mk_fn_sig(
+ let try_fn_ty = ty::Binder::bind(tcx.mk_fn_sig(
iter::once(mut_u8),
tcx.mk_unit(),
false,
hir::Unsafety::Normal,
Abi::Rust,
));
- (0, vec![tcx.mk_fn_ptr(fn_ty), mut_u8, mut_u8], tcx.types.i32)
+ let catch_fn_ty = ty::Binder::bind(tcx.mk_fn_sig(
+ [mut_u8, mut_u8].iter().cloned(),
+ tcx.mk_unit(),
+ false,
+ hir::Unsafety::Normal,
+ Abi::Rust,
+ ));
+ (
+ 0,
+ vec![tcx.mk_fn_ptr(try_fn_ty), mut_u8, tcx.mk_fn_ptr(catch_fn_ty)],
+ tcx.types.i32,
+ )
}
"va_start" | "va_end" => match mk_va_list_ty(hir::Mutability::Mut) {
use rustc::ty::{self, GenericParamDefKind, Ty};
use rustc_hir as hir;
use rustc_infer::infer::{self, InferOk};
-use rustc_infer::traits;
use rustc_span::Span;
+use rustc_trait_selection::traits;
use std::ops::Deref;
-//! Method lookup: the secret sauce of Rust. See the [rustc guide] for more information.
+//! Method lookup: the secret sauce of Rust. See the [rustc dev guide] for more information.
//!
-//! [rustc guide]: https://rust-lang.github.io/rustc-guide/method-lookup.html
+//! [rustc dev guide]: https://rustc-dev-guide.rust-lang.org/method-lookup.html
mod confirm;
pub mod probe;
use rustc_hir::def::{CtorOf, DefKind, Namespace};
use rustc_hir::def_id::DefId;
use rustc_infer::infer::{self, InferOk};
-use rustc_infer::traits;
use rustc_span::Span;
+use rustc_trait_selection::traits;
+use rustc_trait_selection::traits::query::evaluate_obligation::InferCtxtExt;
use self::probe::{IsSuggestion, ProbeScope};
ProbeScope::TraitsInScope,
)?;
debug!("resolve_ufcs: pick={:?}", pick);
- for import_id in pick.import_ids {
- let import_def_id = tcx.hir().local_def_id(import_id);
- debug!("resolve_ufcs: used_trait_import: {:?}", import_def_id);
- Lrc::get_mut(&mut self.tables.borrow_mut().used_trait_imports)
- .unwrap()
- .insert(import_def_id);
+ {
+ let mut tables = self.tables.borrow_mut();
+ let used_trait_imports = Lrc::get_mut(&mut tables.used_trait_imports).unwrap();
+ for import_id in pick.import_ids {
+ let import_def_id = tcx.hir().local_def_id(import_id);
+ debug!("resolve_ufcs: used_trait_import: {:?}", import_def_id);
+ used_trait_imports.insert(import_def_id);
+ }
}
let def_kind = pick.item.def_kind();
use crate::hir::def::DefKind;
use crate::hir::def_id::DefId;
-use rustc::lint;
use rustc::middle::stability;
-use rustc::session::config::nightly_options;
use rustc::ty::subst::{InternalSubsts, Subst, SubstsRef};
use rustc::ty::GenericParamDefKind;
use rustc::ty::{
use rustc_infer::infer::type_variable::{TypeVariableOrigin, TypeVariableOriginKind};
use rustc_infer::infer::unify_key::{ConstVariableOrigin, ConstVariableOriginKind};
use rustc_infer::infer::{self, InferOk, TyCtxtInferExt};
-use rustc_infer::traits::query::method_autoderef::MethodAutoderefBadTy;
-use rustc_infer::traits::query::method_autoderef::{CandidateStep, MethodAutoderefStepsResult};
-use rustc_infer::traits::query::CanonicalTyGoal;
-use rustc_infer::traits::{self, ObligationCause};
+use rustc_session::config::nightly_options;
+use rustc_session::lint;
use rustc_span::{symbol::Symbol, Span, DUMMY_SP};
+use rustc_trait_selection::traits::query::evaluate_obligation::InferCtxtExt;
+use rustc_trait_selection::traits::query::method_autoderef::MethodAutoderefBadTy;
+use rustc_trait_selection::traits::query::method_autoderef::{
+ CandidateStep, MethodAutoderefStepsResult,
+};
+use rustc_trait_selection::traits::query::CanonicalTyGoal;
+use rustc_trait_selection::traits::{self, ObligationCause};
use std::cmp::max;
use std::iter;
use std::mem;
}
fn assemble_inherent_candidates(&mut self) {
- let steps = self.steps.clone();
+ let steps = Lrc::clone(&self.steps);
for step in steps.iter() {
self.assemble_probe(&step.self_ty);
}
self.assemble_inherent_impl_for_primitive(lang_def_id);
}
ty::Slice(_) => {
- let lang_def_id = lang_items.slice_impl();
- self.assemble_inherent_impl_for_primitive(lang_def_id);
-
- let lang_def_id = lang_items.slice_u8_impl();
- self.assemble_inherent_impl_for_primitive(lang_def_id);
-
- let lang_def_id = lang_items.slice_alloc_impl();
- self.assemble_inherent_impl_for_primitive(lang_def_id);
-
- let lang_def_id = lang_items.slice_u8_alloc_impl();
- self.assemble_inherent_impl_for_primitive(lang_def_id);
- }
- ty::RawPtr(ty::TypeAndMut { ty: _, mutbl: hir::Mutability::Not }) => {
- let lang_def_id = lang_items.const_ptr_impl();
- self.assemble_inherent_impl_for_primitive(lang_def_id);
- }
- ty::RawPtr(ty::TypeAndMut { ty: _, mutbl: hir::Mutability::Mut }) => {
- let lang_def_id = lang_items.mut_ptr_impl();
- self.assemble_inherent_impl_for_primitive(lang_def_id);
- }
- ty::Int(ast::IntTy::I8) => {
- let lang_def_id = lang_items.i8_impl();
- self.assemble_inherent_impl_for_primitive(lang_def_id);
- }
- ty::Int(ast::IntTy::I16) => {
- let lang_def_id = lang_items.i16_impl();
- self.assemble_inherent_impl_for_primitive(lang_def_id);
- }
- ty::Int(ast::IntTy::I32) => {
- let lang_def_id = lang_items.i32_impl();
- self.assemble_inherent_impl_for_primitive(lang_def_id);
- }
- ty::Int(ast::IntTy::I64) => {
- let lang_def_id = lang_items.i64_impl();
- self.assemble_inherent_impl_for_primitive(lang_def_id);
- }
- ty::Int(ast::IntTy::I128) => {
- let lang_def_id = lang_items.i128_impl();
- self.assemble_inherent_impl_for_primitive(lang_def_id);
- }
- ty::Int(ast::IntTy::Isize) => {
- let lang_def_id = lang_items.isize_impl();
- self.assemble_inherent_impl_for_primitive(lang_def_id);
- }
- ty::Uint(ast::UintTy::U8) => {
- let lang_def_id = lang_items.u8_impl();
- self.assemble_inherent_impl_for_primitive(lang_def_id);
- }
- ty::Uint(ast::UintTy::U16) => {
- let lang_def_id = lang_items.u16_impl();
- self.assemble_inherent_impl_for_primitive(lang_def_id);
- }
- ty::Uint(ast::UintTy::U32) => {
- let lang_def_id = lang_items.u32_impl();
- self.assemble_inherent_impl_for_primitive(lang_def_id);
- }
- ty::Uint(ast::UintTy::U64) => {
- let lang_def_id = lang_items.u64_impl();
- self.assemble_inherent_impl_for_primitive(lang_def_id);
+ for &lang_def_id in &[
+ lang_items.slice_impl(),
+ lang_items.slice_u8_impl(),
+ lang_items.slice_alloc_impl(),
+ lang_items.slice_u8_alloc_impl(),
+ ] {
+ self.assemble_inherent_impl_for_primitive(lang_def_id);
+ }
}
- ty::Uint(ast::UintTy::U128) => {
- let lang_def_id = lang_items.u128_impl();
+ ty::RawPtr(ty::TypeAndMut { ty: _, mutbl }) => {
+ let lang_def_id = match mutbl {
+ hir::Mutability::Not => lang_items.const_ptr_impl(),
+ hir::Mutability::Mut => lang_items.mut_ptr_impl(),
+ };
self.assemble_inherent_impl_for_primitive(lang_def_id);
}
- ty::Uint(ast::UintTy::Usize) => {
- let lang_def_id = lang_items.usize_impl();
+ ty::Int(i) => {
+ let lang_def_id = match i {
+ ast::IntTy::I8 => lang_items.i8_impl(),
+ ast::IntTy::I16 => lang_items.i16_impl(),
+ ast::IntTy::I32 => lang_items.i32_impl(),
+ ast::IntTy::I64 => lang_items.i64_impl(),
+ ast::IntTy::I128 => lang_items.i128_impl(),
+ ast::IntTy::Isize => lang_items.isize_impl(),
+ };
self.assemble_inherent_impl_for_primitive(lang_def_id);
}
- ty::Float(ast::FloatTy::F32) => {
- let lang_def_id = lang_items.f32_impl();
- self.assemble_inherent_impl_for_primitive(lang_def_id);
-
- let lang_def_id = lang_items.f32_runtime_impl();
+ ty::Uint(i) => {
+ let lang_def_id = match i {
+ ast::UintTy::U8 => lang_items.u8_impl(),
+ ast::UintTy::U16 => lang_items.u16_impl(),
+ ast::UintTy::U32 => lang_items.u32_impl(),
+ ast::UintTy::U64 => lang_items.u64_impl(),
+ ast::UintTy::U128 => lang_items.u128_impl(),
+ ast::UintTy::Usize => lang_items.usize_impl(),
+ };
self.assemble_inherent_impl_for_primitive(lang_def_id);
}
- ty::Float(ast::FloatTy::F64) => {
- let lang_def_id = lang_items.f64_impl();
- self.assemble_inherent_impl_for_primitive(lang_def_id);
-
- let lang_def_id = lang_items.f64_runtime_impl();
- self.assemble_inherent_impl_for_primitive(lang_def_id);
+ ty::Float(f) => {
+ let (lang_def_id1, lang_def_id2) = match f {
+ ast::FloatTy::F32 => (lang_items.f32_impl(), lang_items.f32_runtime_impl()),
+ ast::FloatTy::F64 => (lang_items.f64_impl(), lang_items.f64_runtime_impl()),
+ };
+ self.assemble_inherent_impl_for_primitive(lang_def_id1);
+ self.assemble_inherent_impl_for_primitive(lang_def_id2);
}
_ => {}
}
return r;
}
- debug!("pick: actual search failed, assemble diagnotics");
+ debug!("pick: actual search failed, assemble diagnostics");
let static_candidates = mem::take(&mut self.static_candidates);
let private_candidate = self.private_candidate.take();
let predicate = trait_ref.without_const().to_predicate();
let obligation = traits::Obligation::new(cause, self.param_env, predicate);
if !self.predicate_may_hold(&obligation) {
+ result = ProbeResult::NoMatch;
if self.probe(|_| {
match self.select_trait_candidate(trait_ref) {
Err(_) => return true,
// Determine exactly which obligation wasn't met, so
// that we can give more context in the error.
if !self.predicate_may_hold(&obligation) {
- result = ProbeResult::NoMatch;
let o = self.resolve_vars_if_possible(obligation);
let predicate =
self.resolve_vars_if_possible(&predicate);
_ => {
// Some nested subobligation of this predicate
// failed.
- result = ProbeResult::NoMatch;
let predicate = self.resolve_vars_if_possible(&predicate);
possibly_unsatisfied_predicates.push((predicate, None));
}
let method_names = pcx.candidate_method_names();
pcx.allow_similar_names = false;
- let applicable_close_candidates: Vec<ty::AssocItem> =
- method_names
- .iter()
- .filter_map(|&method_name| {
- pcx.reset();
- pcx.method_name = Some(method_name);
- pcx.assemble_inherent_candidates();
- pcx.assemble_extension_candidates_for_traits_in_scope(hir::DUMMY_HIR_ID)
- .map_or(None, |_| {
- pcx.pick_core()
- .and_then(|pick| pick.ok())
- .and_then(|pick| Some(pick.item))
- })
- })
- .collect();
+ let applicable_close_candidates: Vec<ty::AssocItem> = method_names
+ .iter()
+ .filter_map(|&method_name| {
+ pcx.reset();
+ pcx.method_name = Some(method_name);
+ pcx.assemble_inherent_candidates();
+ pcx.assemble_extension_candidates_for_traits_in_scope(hir::DUMMY_HIR_ID)
+ .map_or(None, |_| {
+ pcx.pick_core().and_then(|pick| pick.ok()).map(|pick| pick.item)
+ })
+ })
+ .collect();
if applicable_close_candidates.is_empty() {
Ok(None)
use crate::check::FnCtxt;
use crate::middle::lang_items::FnOnceTraitLangItem;
use rustc::hir::map as hir_map;
-use rustc::hir::map::Map;
use rustc::ty::print::with_crate_prefix;
use rustc::ty::{self, ToPolyTraitRef, ToPredicate, Ty, TyCtxt, TypeFoldable, WithConstness};
use rustc_ast::ast;
use rustc_hir::intravisit;
use rustc_hir::{ExprKind, Node, QPath};
use rustc_infer::infer::type_variable::{TypeVariableOrigin, TypeVariableOriginKind};
-use rustc_infer::traits::Obligation;
use rustc_span::symbol::kw;
use rustc_span::{source_map, FileName, Span};
+use rustc_trait_selection::traits::query::evaluate_obligation::InferCtxtExt;
+use rustc_trait_selection::traits::Obligation;
use std::cmp::Ordering;
});
if let Some((field, field_ty)) = field_receiver {
- let scope = self.tcx.parent_module(self.body_id);
+ let scope = self.tcx.parent_module(self.body_id).to_def_id();
let is_accessible = field.vis.is_accessible_from(scope, self.tcx);
if is_accessible {
(&self_ty.kind, parent_pred)
{
if let ty::Adt(def, _) = p.skip_binder().trait_ref.self_ty().kind {
- let id = self.tcx.hir().as_local_hir_id(def.did).unwrap();
- let node = self.tcx.hir().get(id);
+ let node = self
+ .tcx
+ .hir()
+ .as_local_hir_id(def.did)
+ .map(|id| self.tcx.hir().get(id));
match node {
- hir::Node::Item(hir::Item { kind, .. }) => {
+ Some(hir::Node::Item(hir::Item { kind, .. })) => {
if let Some(g) = kind.generics() {
let key = match &g.where_clause.predicates[..] {
[.., pred] => {
candidates: Vec<DefId>,
) {
let module_did = self.tcx.parent_module(self.body_id);
- let module_id = self.tcx.hir().as_local_hir_id(module_did).unwrap();
+ let module_id = self.tcx.hir().as_local_hir_id(module_did.to_def_id()).unwrap();
let krate = self.tcx.hir().krate();
let (span, found_use) = UsePlacementFinder::check(self.tcx, krate, module_id);
if let Some(span) = span {
if let ty::AssocKind::Method = item.kind {
let id = self.tcx.hir().as_local_hir_id(item.def_id);
if let Some(hir::Node::TraitItem(hir::TraitItem {
- kind: hir::TraitItemKind::Method(fn_sig, method),
+ kind: hir::TraitItemKind::Fn(fn_sig, method),
..
})) = id.map(|id| self.tcx.hir().get(id))
{
let self_first_arg = match method {
- hir::TraitMethod::Required([ident, ..]) => {
+ hir::TraitFn::Required([ident, ..]) => {
ident.name == kw::SelfLower
}
- hir::TraitMethod::Provided(body_id) => {
+ hir::TraitFn::Provided(body_id) => {
match &self.tcx.hir().body(*body_id).params[..] {
[hir::Param {
pat:
}
}
- type Map = Map<'tcx>;
+ type Map = intravisit::ErasedMap<'tcx>;
- fn nested_visit_map(&mut self) -> intravisit::NestedVisitorMap<'_, Self::Map> {
+ fn nested_visit_map(&mut self) -> intravisit::NestedVisitorMap<Self::Map> {
intravisit::NestedVisitorMap::None
}
}
use crate::astconv::{AstConv, GenericArgCountMismatch, PathSeg};
use crate::middle::lang_items;
use rustc::hir::map::blocks::FnLikeNode;
-use rustc::hir::map::Map;
use rustc::middle::region;
use rustc::mir::interpret::ConstValue;
-use rustc::session::parse::feature_err;
use rustc::ty::adjustment::{
Adjust, Adjustment, AllowTwoPhase, AutoBorrow, AutoBorrowMutability, PointerCast,
};
use rustc_index::vec::Idx;
use rustc_infer::infer::canonical::{Canonical, OriginalQueryValues, QueryResponse};
use rustc_infer::infer::error_reporting::TypeAnnotationNeeded::E0282;
-use rustc_infer::infer::opaque_types::OpaqueTypeDecl;
use rustc_infer::infer::type_variable::{TypeVariableOrigin, TypeVariableOriginKind};
use rustc_infer::infer::unify_key::{ConstVariableOrigin, ConstVariableOriginKind};
use rustc_infer::infer::{self, InferCtxt, InferOk, InferResult, TyCtxtInferExt};
-use rustc_infer::traits::error_reporting::recursive_type_with_infinite_size_error;
-use rustc_infer::traits::{self, ObligationCause, ObligationCauseCode, TraitEngine};
+use rustc_session::config::{self, EntryFnType};
+use rustc_session::lint;
+use rustc_session::parse::feature_err;
+use rustc_session::Session;
use rustc_span::hygiene::DesugaringKind;
use rustc_span::source_map::{original_sp, DUMMY_SP};
use rustc_span::symbol::{kw, sym, Ident};
use rustc_span::{self, BytePos, MultiSpan, Span};
use rustc_target::spec::abi::Abi;
+use rustc_trait_selection::infer::InferCtxtExt as _;
+use rustc_trait_selection::opaque_types::{InferCtxtExt as _, OpaqueTypeDecl};
+use rustc_trait_selection::traits::error_reporting::recursive_type_with_infinite_size_error;
+use rustc_trait_selection::traits::error_reporting::InferCtxtExt as _;
+use rustc_trait_selection::traits::query::evaluate_obligation::InferCtxtExt as _;
+use rustc_trait_selection::traits::{
+ self, ObligationCause, ObligationCauseCode, TraitEngine, TraitEngineExt,
+};
use std::cell::{Cell, Ref, RefCell, RefMut};
use std::cmp;
use std::ops::{self, Deref};
use std::slice;
-use crate::lint;
use crate::require_c_abi_if_c_variadic;
-use crate::session::config::EntryFnType;
-use crate::session::Session;
use crate::util::common::{indenter, ErrorReported};
use crate::TypeAndSubsts;
/// where all arms diverge), we may be
/// able to provide a more informative
/// message to the user.
- /// If this is `None`, a default messsage
+ /// If this is `None`, a default message
/// will be generated, which is suitable
/// for most cases.
custom_note: Option<&'static str>,
impl Inherited<'_, 'tcx> {
pub fn build(tcx: TyCtxt<'tcx>, def_id: DefId) -> InheritedBuilder<'tcx> {
- let hir_id_root = if def_id.is_local() {
- let hir_id = tcx.hir().as_local_hir_id(def_id).unwrap();
- DefId::local(hir_id.owner)
+ let hir_id_root = if let Some(def_id) = def_id.as_local() {
+ tcx.hir().local_def_id_to_hir_id(def_id).owner.to_def_id()
} else {
def_id
};
}
pub fn check_wf_new(tcx: TyCtxt<'_>) {
- let mut visit = wfcheck::CheckTypeWellFormedVisitor::new(tcx);
- tcx.hir().krate().par_visit_all_item_likes(&mut visit);
+ let visit = wfcheck::CheckTypeWellFormedVisitor::new(tcx);
+ tcx.hir().krate().par_visit_all_item_likes(&visit);
}
fn check_mod_item_types(tcx: TyCtxt<'_>, module_def_id: DefId) {
},
Node::TraitItem(item) => match item.kind {
hir::TraitItemKind::Const(ref ty, Some(body)) => Some((body, Some(ty), None, None)),
- hir::TraitItemKind::Method(ref sig, hir::TraitMethod::Provided(body)) => {
+ hir::TraitItemKind::Fn(ref sig, hir::TraitFn::Provided(body)) => {
Some((body, None, Some(&sig.header), Some(&sig.decl)))
}
_ => None,
},
Node::ImplItem(item) => match item.kind {
hir::ImplItemKind::Const(ref ty, body) => Some((body, Some(ty), None, None)),
- hir::ImplItemKind::Method(ref sig, body) => {
+ hir::ImplItemKind::Fn(ref sig, body) => {
Some((body, None, Some(&sig.header), Some(&sig.decl)))
}
_ => None,
ty::Opaque(def_id, substs) => {
debug!("fixup_opaque_types: found type {:?}", ty);
// Here, we replace any inference variables that occur within
- // the substs of an opaque type. By definition, any type occuring
+ // the substs of an opaque type. By definition, any type occurring
// in the substs has a corresponding generic parameter, which is what
// we replace it with.
// This replacement is only run on the function signature, so any
// Consistency check our TypeckTables instance can hold all ItemLocalIds
// it will need to hold.
- assert_eq!(tables.local_id_root, Some(DefId::local(id.owner)));
+ assert_eq!(tables.local_id_root, Some(id.owner.to_def_id()));
tables
}
}
impl<'a, 'tcx> Visitor<'tcx> for GatherLocalsVisitor<'a, 'tcx> {
- type Map = Map<'tcx>;
+ type Map = intravisit::ErasedMap<'tcx>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
NestedVisitorMap::None
}
for item in items.iter() {
let item = tcx.hir().trait_item(item.id);
- if let hir::TraitItemKind::Method(sig, _) = &item.kind {
+ if let hir::TraitItemKind::Fn(sig, _) = &item.kind {
let abi = sig.header.abi;
fn_maybe_err(tcx, item.ident.span, abi);
}
) {
let kind = match impl_item.kind {
hir::ImplItemKind::Const(..) => ty::AssocKind::Const,
- hir::ImplItemKind::Method(..) => ty::AssocKind::Method,
+ hir::ImplItemKind::Fn(..) => ty::AssocKind::Method,
hir::ImplItemKind::OpaqueTy(..) => ty::AssocKind::OpaqueTy,
hir::ImplItemKind::TyAlias(_) => ty::AssocKind::Type,
};
- let mut ancestor_impls = trait_def
- .ancestors(tcx, impl_id)
+ let ancestors = match trait_def.ancestors(tcx, impl_id) {
+ Ok(ancestors) => ancestors,
+ Err(_) => return,
+ };
+ let mut ancestor_impls = ancestors
.skip(1)
.filter_map(|parent| {
if parent.is_from_trait() {
}
});
- // If `opt_result` is `None`, we have only encoutered `default impl`s that don't contain the
+ // If `opt_result` is `None`, we have only encountered `default impl`s that don't contain the
// item. This is allowed, the item isn't actually getting specialized here.
let result = opt_result.unwrap_or(Ok(()));
err.emit()
}
}
- hir::ImplItemKind::Method(..) => {
+ hir::ImplItemKind::Fn(..) => {
let opt_trait_span = tcx.hir().span_if_local(ty_trait_item.def_id);
if ty_trait_item.kind == ty::AssocKind::Method {
compare_impl_method(
// Check for missing items from trait
let mut missing_items = Vec::new();
- for trait_item in tcx.associated_items(impl_trait_ref.def_id).in_definition_order() {
- let is_implemented = trait_def
- .ancestors(tcx, impl_id)
- .leaf_def(tcx, trait_item.ident, trait_item.kind)
- .map(|node_item| !node_item.node.is_from_trait())
- .unwrap_or(false);
-
- if !is_implemented && !traits::impl_is_default(tcx, impl_id) {
- if !trait_item.defaultness.has_value() {
- missing_items.push(*trait_item);
+ if let Ok(ancestors) = trait_def.ancestors(tcx, impl_id) {
+ for trait_item in tcx.associated_items(impl_trait_ref.def_id).in_definition_order() {
+ let is_implemented = ancestors
+ .leaf_def(tcx, trait_item.ident, trait_item.kind)
+ .map(|node_item| !node_item.node.is_from_trait())
+ .unwrap_or(false);
+
+ if !is_implemented && !traits::impl_is_default(tcx, impl_id) {
+ if !trait_item.defaultness.has_value() {
+ missing_items.push(*trait_item);
+ }
}
}
}
debug!("resolve_vars_with_obligations(ty={:?})", ty);
// No Infer()? Nothing needs doing.
- if !ty.has_infer_types() && !ty.has_infer_consts() {
+ if !ty.has_infer_types_or_consts() {
debug!("resolve_vars_with_obligations: ty={:?}", ty);
return ty;
}
// If `ty` is a type variable, see whether we already know what it is.
ty = self.resolve_vars_if_possible(&ty);
- if !ty.has_infer_types() && !ty.has_infer_consts() {
+ if !ty.has_infer_types_or_consts() {
debug!("resolve_vars_with_obligations: ty={:?}", ty);
return ty;
}
pub fn write_method_call(&self, hir_id: hir::HirId, method: MethodCallee<'tcx>) {
debug!("write_method_call(hir_id={:?}, method={:?})", hir_id, method);
- self.write_resolution(hir_id, Ok((DefKind::Method, method.def_id)));
+ self.write_resolution(hir_id, Ok((DefKind::AssocFn, method.def_id)));
self.write_substs(hir_id, method.substs);
// When the method is confirmed, the `method.substs` includes
// }
// ```
//
- // In the above snippet, the inference varaible created by
+ // In the above snippet, the inference variable created by
// instantiating `Option<Foo>` will be completely unconstrained.
// We treat this as a non-defining use by making the inference
// variable fall back to the opaque type itself.
let substs = self.fresh_substs_for_item(span, did);
let substd_ty = self.instantiate_type_scheme(span, &substs, &ity);
- TypeAndSubsts { substs: substs, ty: substd_ty }
+ TypeAndSubsts { substs, ty: substd_ty }
}
/// Unifies the output type with the expected type early, for more coercions
let node = self.tcx.hir().get(self.tcx.hir().get_parent_item(id));
match node {
Node::Item(&hir::Item { kind: hir::ItemKind::Fn(_, _, body_id), .. })
- | Node::ImplItem(&hir::ImplItem {
- kind: hir::ImplItemKind::Method(_, body_id), ..
- }) => {
+ | Node::ImplItem(&hir::ImplItem { kind: hir::ImplItemKind::Fn(_, body_id), .. }) => {
let body = self.tcx.hir().body(body_id);
if let ExprKind::Block(block, _) = &body.value.kind {
return Some(block.span);
}
Node::TraitItem(&hir::TraitItem {
ident,
- kind: hir::TraitItemKind::Method(ref sig, ..),
+ kind: hir::TraitItemKind::Fn(ref sig, ..),
..
}) => Some((&sig.decl, ident, true)),
Node::ImplItem(&hir::ImplItem {
ident,
- kind: hir::ImplItemKind::Method(ref sig, ..),
+ kind: hir::ImplItemKind::Fn(ref sig, ..),
..
}) => Some((&sig.decl, ident, false)),
_ => None,
match hir.get_if_local(def_id) {
Some(Node::Item(hir::Item { kind: ItemKind::Fn(.., body_id), .. }))
| Some(Node::ImplItem(hir::ImplItem {
- kind: hir::ImplItemKind::Method(_, body_id),
+ kind: hir::ImplItemKind::Fn(_, body_id),
..
}))
| Some(Node::TraitItem(hir::TraitItem {
- kind: hir::TraitItemKind::Method(.., hir::TraitMethod::Provided(body_id)),
+ kind: hir::TraitItemKind::Fn(.., hir::TraitFn::Provided(body_id)),
..
})) => {
let body = hir.body(*body_id);
.join(", ")
}
Some(Node::TraitItem(hir::TraitItem {
- kind: hir::TraitItemKind::Method(.., hir::TraitMethod::Required(idents)),
+ kind: hir::TraitItemKind::Fn(.., hir::TraitFn::Required(idents)),
..
})) => {
sugg_call = idents
.tcx
.associated_items(future_trait)
.in_definition_order()
- .nth(0)
+ .next()
.unwrap()
.def_id;
let predicate =
is_alias_variant_ctor = true;
}
}
- Res::Def(DefKind::Method, def_id) | Res::Def(DefKind::AssocConst, def_id) => {
+ Res::Def(DefKind::AssocFn, def_id) | Res::Def(DefKind::AssocConst, def_id) => {
let container = tcx.associated_item(def_id).container;
debug!("instantiate_value_path: def_id={:?} container={:?}", def_id, container);
match container {
handler.note_without_error(&format!(
"rustc {} running on {}",
option_env!("CFG_VERSION").unwrap_or("unknown_version"),
- crate::session::config::host_triple(),
+ config::host_triple(),
));
}
use rustc_hir as hir;
use rustc_infer::infer::type_variable::{TypeVariableOrigin, TypeVariableOriginKind};
use rustc_span::Span;
+use rustc_trait_selection::infer::InferCtxtExt;
impl<'a, 'tcx> FnCtxt<'a, 'tcx> {
/// Checks a `a <op>= b`
}
/// Dereferences a single level of immutable referencing.
-fn deref_ty_if_possible<'tcx>(ty: Ty<'tcx>) -> Ty<'tcx> {
+fn deref_ty_if_possible(ty: Ty<'tcx>) -> Ty<'tcx> {
match ty.kind {
ty::Ref(_, ty, hir::Mutability::Not) => ty,
_ => ty,
use rustc_hir::{HirId, Pat, PatKind};
use rustc_infer::infer;
use rustc_infer::infer::type_variable::{TypeVariableOrigin, TypeVariableOriginKind};
-use rustc_infer::traits::{ObligationCause, Pattern};
use rustc_span::hygiene::DesugaringKind;
use rustc_span::source_map::{Span, Spanned};
+use rustc_trait_selection::traits::{ObligationCause, Pattern};
use std::cmp;
use std::collections::hash_map::Entry::{Occupied, Vacant};
/// found type `std::result::Result<_, _>`
/// ```
span: Option<Span>,
+ /// This refers to the parent pattern. Used to provide extra diagnostic information on errors.
+ /// ```text
+ /// error[E0308]: mismatched types
+ /// --> $DIR/const-in-struct-pat.rs:8:17
+ /// |
+ /// L | struct f;
+ /// | --------- unit struct defined here
+ /// ...
+ /// L | let Thing { f } = t;
+ /// | ^
+ /// | |
+ /// | expected struct `std::string::String`, found struct `f`
+ /// | `f` is interpreted as a unit struct, not a new binding
+ /// | help: bind the struct field to a different name instead: `f: other_f`
+ /// ```
+ parent_pat: Option<&'tcx Pat<'tcx>>,
}
impl<'tcx> FnCtxt<'_, 'tcx> {
enum AdjustMode {
/// Peel off all immediate reference types.
Peel,
- /// Reset binding mode to the inital mode.
+ /// Reset binding mode to the initial mode.
Reset,
/// Pass on the input binding mode and expected type.
Pass,
span: Option<Span>,
origin_expr: bool,
) {
- self.check_pat(pat, expected, INITIAL_BM, TopInfo { expected, origin_expr, span });
+ let info = TopInfo { expected, origin_expr, span, parent_pat: None };
+ self.check_pat(pat, expected, INITIAL_BM, info);
}
/// Type check the given `pat` against the `expected` type
self.check_pat_struct(pat, qpath, fields, etc, expected, def_bm, ti)
}
PatKind::Or(pats) => {
+ let parent_pat = Some(pat);
for pat in pats {
- self.check_pat(pat, expected, def_bm, ti);
+ self.check_pat(pat, expected, def_bm, TopInfo { parent_pat, ..ti });
}
expected
}
fn check_pat_ident(
&self,
- pat: &Pat<'_>,
+ pat: &'tcx Pat<'tcx>,
ba: hir::BindingAnnotation,
var_id: HirId,
sub: Option<&'tcx Pat<'tcx>>,
}
if let Some(p) = sub {
- self.check_pat(&p, expected, def_bm, ti);
+ self.check_pat(&p, expected, def_bm, TopInfo { parent_pat: Some(&pat), ..ti });
}
local_ty
let var_ty = self.resolve_vars_with_obligations(var_ty);
let msg = format!("first introduced with type `{}` here", var_ty);
err.span_label(hir.span(var_id), msg);
- let in_arm = hir.parent_iter(var_id).any(|(_, n)| matches!(n, hir::Node::Arm(..)));
- let pre = if in_arm { "in the same arm, " } else { "" };
+ let in_match = hir.parent_iter(var_id).any(|(_, n)| {
+ matches!(
+ n,
+ hir::Node::Expr(hir::Expr {
+ kind: hir::ExprKind::Match(.., hir::MatchSource::Normal),
+ ..
+ })
+ )
+ });
+ let pre = if in_match { "in the same arm, " } else { "" };
err.note(&format!("{}a binding must have the same type in all alternatives", pre));
err.emit();
}
variant_ty
} else {
for field in fields {
+ let ti = TopInfo { parent_pat: Some(&pat), ..ti };
self.check_pat(&field.pat, self.tcx.types.err, def_bm, ti);
}
return self.tcx.types.err;
self.demand_eqtype_pat(pat.span, expected, pat_ty, ti);
// Type-check subpatterns.
- if self
- .check_struct_pat_fields(pat_ty, pat.hir_id, pat.span, variant, fields, etc, def_bm, ti)
- {
+ if self.check_struct_pat_fields(pat_ty, &pat, variant, fields, etc, def_bm, ti) {
pat_ty
} else {
self.tcx.types.err
self.set_tainted_by_errors();
return tcx.types.err;
}
- Res::Def(DefKind::Method, _)
+ Res::Def(DefKind::AssocFn, _)
| Res::Def(DefKind::Ctor(_, CtorKind::Fictive), _)
| Res::Def(DefKind::Ctor(_, CtorKind::Fn), _) => {
report_unexpected_variant_res(tcx, res, pat.span, qpath);
Res::Def(DefKind::Ctor(_, CtorKind::Const), _)
| Res::SelfCtor(..)
| Res::Def(DefKind::Const, _)
- | Res::Def(DefKind::AssocConst, _) => {} // OK
+ | Res::Def(DefKind::AssocConst, _)
+ | Res::Def(DefKind::ConstParam, _) => {} // OK
_ => bug!("unexpected pattern resolution: {:?}", res),
}
// Type-check the path.
- let pat_ty = self.instantiate_value_path(segments, opt_ty, res, pat.span, pat.hir_id).0;
- if let Some(mut err) =
+ let (pat_ty, pat_res) =
+ self.instantiate_value_path(segments, opt_ty, res, pat.span, pat.hir_id);
+ if let Some(err) =
self.demand_suptype_with_origin(&self.pattern_cause(ti, pat.span), expected, pat_ty)
{
- err.emit();
+ self.emit_bad_pat_path(err, pat.span, res, pat_res, segments, ti.parent_pat);
}
pat_ty
}
+ fn emit_bad_pat_path(
+ &self,
+ mut e: DiagnosticBuilder<'_>,
+ pat_span: Span,
+ res: Res,
+ pat_res: Res,
+ segments: &'b [hir::PathSegment<'b>],
+ parent_pat: Option<&Pat<'_>>,
+ ) {
+ if let Some(span) = self.tcx.hir().res_span(pat_res) {
+ e.span_label(span, &format!("{} defined here", res.descr()));
+ if let [hir::PathSegment { ident, .. }] = &*segments {
+ e.span_label(
+ pat_span,
+ &format!(
+ "`{}` is interpreted as {} {}, not a new binding",
+ ident,
+ res.article(),
+ res.descr(),
+ ),
+ );
+ let (msg, sugg) = match parent_pat {
+ Some(Pat { kind: hir::PatKind::Struct(..), .. }) => (
+ "bind the struct field to a different name instead",
+ format!("{}: other_{}", ident, ident.as_str().to_lowercase()),
+ ),
+ _ => (
+ "introduce a new binding instead",
+ format!("other_{}", ident.as_str().to_lowercase()),
+ ),
+ };
+ e.span_suggestion(ident.span, msg, sugg, Applicability::HasPlaceholders);
+ }
+ }
+ e.emit();
+ }
+
fn check_pat_tuple_struct(
&self,
- pat: &Pat<'_>,
+ pat: &'tcx Pat<'tcx>,
qpath: &hir::QPath<'_>,
subpats: &'tcx [&'tcx Pat<'tcx>],
ddpos: Option<usize>,
) -> Ty<'tcx> {
let tcx = self.tcx;
let on_error = || {
+ let parent_pat = Some(pat);
for pat in subpats {
- self.check_pat(&pat, tcx.types.err, def_bm, ti);
+ self.check_pat(&pat, tcx.types.err, def_bm, TopInfo { parent_pat, ..ti });
}
};
let report_unexpected_res = |res: Res| {
);
let mut err = struct_span_err!(tcx.sess, pat.span, E0164, "{}", msg);
match (res, &pat.kind) {
- (Res::Def(DefKind::Fn, _), _) | (Res::Def(DefKind::Method, _), _) => {
+ (Res::Def(DefKind::Fn, _), _) | (Res::Def(DefKind::AssocFn, _), _) => {
err.span_label(pat.span, "`fn` calls are not allowed in patterns");
err.help(
"for more information, visit \
on_error();
return tcx.types.err;
}
- Res::Def(DefKind::AssocConst, _) | Res::Def(DefKind::Method, _) => {
+ Res::Def(DefKind::AssocConst, _) | Res::Def(DefKind::AssocFn, _) => {
report_unexpected_res(res);
return tcx.types.err;
}
};
for (i, subpat) in subpats.iter().enumerate_and_adjust(variant.fields.len(), ddpos) {
let field_ty = self.field_ty(subpat.span, &variant.fields[i], substs);
- self.check_pat(&subpat, field_ty, def_bm, ti);
+ self.check_pat(&subpat, field_ty, def_bm, TopInfo { parent_pat: Some(&pat), ..ti });
self.tcx.check_stability(variant.fields[i].did, Some(pat.hir_id), subpat.span);
}
// N-arity-tuple, e.g., `V_i((p_0, .., p_N))`. Meanwhile, the user supplied a pattern
// with the subpatterns directly in the tuple variant pattern, e.g., `V_i(p_0, .., p_N)`.
let missing_parenthesis = match (&expected.kind, fields, had_err) {
- // #67037: only do this if we could sucessfully type-check the expected type against
+ // #67037: only do this if we could successfully type-check the expected type against
// the tuple struct pattern. Otherwise the substs could get out of range on e.g.,
// `let P() = U;` where `P != U` with `struct P<T>(T);`.
(ty::Adt(_, substs), [field], false) => {
fn check_struct_pat_fields(
&self,
adt_ty: Ty<'tcx>,
- pat_id: HirId,
- span: Span,
+ pat: &'tcx Pat<'tcx>,
variant: &'tcx ty::VariantDef,
fields: &'tcx [hir::FieldPat<'tcx>],
etc: bool,
let (substs, adt) = match adt_ty.kind {
ty::Adt(adt, substs) => (substs, adt),
- _ => span_bug!(span, "struct pattern is not an ADT"),
+ _ => span_bug!(pat.span, "struct pattern is not an ADT"),
};
let kind_name = adt.variant_descr();
.fields
.iter()
.enumerate()
- .map(|(i, field)| (field.ident.modern(), (i, field)))
+ .map(|(i, field)| (field.ident.normalize_to_macros_2_0(), (i, field)))
.collect::<FxHashMap<_, _>>();
// Keep track of which fields have already appeared in the pattern.
.get(&ident)
.map(|(i, f)| {
self.write_field_index(field.hir_id, *i);
- self.tcx.check_stability(f.did, Some(pat_id), span);
+ self.tcx.check_stability(f.did, Some(pat.hir_id), span);
self.field_ty(span, f, substs)
})
.unwrap_or_else(|| {
}
};
- self.check_pat(&field.pat, field_ty, def_bm, ti);
+ self.check_pat(&field.pat, field_ty, def_bm, TopInfo { parent_pat: Some(&pat), ..ti });
}
let mut unmentioned_fields = variant
.fields
.iter()
- .map(|field| field.ident.modern())
+ .map(|field| field.ident.normalize_to_macros_2_0())
.filter(|ident| !used_fields.contains_key(&ident))
.collect::<Vec<_>>();
if variant.is_field_list_non_exhaustive() && !adt.did.is_local() && !etc {
struct_span_err!(
tcx.sess,
- span,
+ pat.span,
E0638,
"`..` required with {} marked as non-exhaustive",
kind_name
if kind_name == "union" {
if fields.len() != 1 {
tcx.sess
- .struct_span_err(span, "union patterns should have exactly one field")
+ .struct_span_err(pat.span, "union patterns should have exactly one field")
.emit();
}
if etc {
- tcx.sess.struct_span_err(span, "`..` cannot be used in union patterns").emit();
+ tcx.sess.struct_span_err(pat.span, "`..` cannot be used in union patterns").emit();
}
} else if !etc && !unmentioned_fields.is_empty() {
- self.error_unmentioned_fields(span, &unmentioned_fields, variant);
+ self.error_unmentioned_fields(pat.span, &unmentioned_fields, variant);
}
no_field_errors
}
fn check_pat_ref(
&self,
- pat: &Pat<'_>,
+ pat: &'tcx Pat<'tcx>,
inner: &'tcx Pat<'tcx>,
mutbl: hir::Mutability,
expected: Ty<'tcx>,
} else {
(tcx.types.err, tcx.types.err)
};
- self.check_pat(&inner, inner_ty, def_bm, ti);
+ self.check_pat(&inner, inner_ty, def_bm, TopInfo { parent_pat: Some(&pat), ..ti });
rptr_ty
}
use crate::check::FnCtxt;
use crate::mem_categorization as mc;
use crate::middle::region;
-use rustc::hir::map::Map;
use rustc::ty::adjustment;
use rustc::ty::subst::{GenericArgKind, SubstsRef};
use rustc::ty::{self, Ty};
use rustc_hir::intravisit::{self, NestedVisitorMap, Visitor};
use rustc_hir::PatKind;
use rustc_infer::infer::outlives::env::OutlivesEnvironment;
-use rustc_infer::infer::{self, RegionObligation, SuppressRegionErrors};
+use rustc_infer::infer::{self, RegionObligation, RegionckMode};
use rustc_span::Span;
+use rustc_trait_selection::infer::OutlivesEnvironmentExt;
+use rustc_trait_selection::opaque_types::InferCtxtExt;
use std::mem;
use std::ops::Deref;
rcx.visit_body(body);
rcx.visit_region_obligations(id);
}
- rcx.resolve_regions_and_report_errors(SuppressRegionErrors::when_nll_is_enabled(self.tcx));
-
- assert!(self.tables.borrow().free_region_map.is_empty());
- self.tables.borrow_mut().free_region_map = rcx.outlives_environment.into_free_region_map();
+ rcx.resolve_regions_and_report_errors(RegionckMode::for_item_body(self.tcx));
}
/// Region checking during the WF phase for items. `wf_tys` are the
rcx.outlives_environment.add_implied_bounds(self, wf_tys, item_id, span);
rcx.outlives_environment.save_implied_bounds(item_id);
rcx.visit_region_obligations(item_id);
- rcx.resolve_regions_and_report_errors(SuppressRegionErrors::default());
+ rcx.resolve_regions_and_report_errors(RegionckMode::default());
}
/// Region check a function body. Not invoked on closures, but
rcx.visit_fn_body(fn_id, body, self.tcx.hir().span(fn_id));
}
- rcx.resolve_regions_and_report_errors(SuppressRegionErrors::when_nll_is_enabled(self.tcx));
-
- // In this mode, we also copy the free-region-map into the
- // tables of the enclosing fcx. In the other regionck modes
- // (e.g., `regionck_item`), we don't have an enclosing tables.
- assert!(self.tables.borrow().free_region_map.is_empty());
- self.tables.borrow_mut().free_region_map = rcx.outlives_environment.into_free_region_map();
+ rcx.resolve_regions_and_report_errors(RegionckMode::for_item_body(self.tcx));
}
}
self.select_all_obligations_or_error();
}
- fn resolve_regions_and_report_errors(&self, suppress: SuppressRegionErrors) {
+ fn resolve_regions_and_report_errors(&self, mode: RegionckMode) {
self.infcx.process_registered_region_obligations(
self.outlives_environment.region_bound_pairs_map(),
self.implicit_region_bound,
self.subject_def_id,
&self.region_scope_tree,
&self.outlives_environment,
- suppress,
+ mode,
);
}
// hierarchy, and in particular the relationships between free
// regions, until regionck, as described in #3238.
- type Map = Map<'tcx>;
+ type Map = intravisit::ErasedMap<'tcx>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
NestedVisitorMap::None
}
use crate::expr_use_visitor as euv;
use crate::mem_categorization as mc;
use crate::mem_categorization::PlaceBase;
-use rustc::hir::map::Map;
use rustc::ty::{self, Ty, TyCtxt, UpvarSubsts};
use rustc_ast::ast;
use rustc_data_structures::fx::FxIndexMap;
}
impl<'a, 'tcx> Visitor<'tcx> for InferBorrowKindVisitor<'a, 'tcx> {
- type Map = Map<'tcx>;
+ type Map = intravisit::ErasedMap<'tcx>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
NestedVisitorMap::None
}
for (&var_hir_id, _) in upvars.iter() {
let upvar_id = ty::UpvarId {
var_path: ty::UpvarPath { hir_id: var_hir_id },
- closure_expr_id: LocalDefId::from_def_id(closure_def_id),
+ closure_expr_id: closure_def_id.expect_local(),
};
debug!("seed upvar_id {:?}", upvar_id);
// Adding the upvar Id to the list of Upvars, which will be added
let upvar_ty = self.node_ty(var_hir_id);
let upvar_id = ty::UpvarId {
var_path: ty::UpvarPath { hir_id: var_hir_id },
- closure_expr_id: LocalDefId::from_def_id(closure_def_id),
+ closure_expr_id: closure_def_id.expect_local(),
};
let capture = self.tables.borrow().upvar_capture(upvar_id);
use crate::constrained_generic_params::{identify_constrained_generic_params, Parameter};
use rustc::middle::lang_items;
-use rustc::session::parse::feature_err;
use rustc::ty::subst::{InternalSubsts, Subst};
+use rustc::ty::trait_def::TraitSpecializationKind;
use rustc::ty::{
self, AdtKind, GenericParamDefKind, ToPredicate, Ty, TyCtxt, TypeFoldable, WithConstness,
};
use rustc_ast::ast;
use rustc_data_structures::fx::{FxHashMap, FxHashSet};
use rustc_errors::{struct_span_err, Applicability, DiagnosticBuilder};
+use rustc_hir as hir;
use rustc_hir::def_id::DefId;
+use rustc_hir::itemlikevisit::ParItemLikeVisitor;
use rustc_hir::ItemKind;
-use rustc_infer::infer::opaque_types::may_define_opaque_type;
-use rustc_infer::traits::{self, ObligationCause, ObligationCauseCode};
+use rustc_session::parse::feature_err;
use rustc_span::symbol::sym;
use rustc_span::Span;
-
-use rustc_hir as hir;
-use rustc_hir::itemlikevisit::ParItemLikeVisitor;
+use rustc_trait_selection::opaque_types::may_define_opaque_type;
+use rustc_trait_selection::traits::query::evaluate_obligation::InferCtxtExt;
+use rustc_trait_selection::traits::{self, ObligationCause, ObligationCauseCode};
/// Helper type of a temporary returned by `.for_item(...)`.
/// This is necessary because we can't write the following bound:
let trait_item = tcx.hir().expect_trait_item(hir_id);
let method_sig = match trait_item.kind {
- hir::TraitItemKind::Method(ref sig, _) => Some(sig),
+ hir::TraitItemKind::Fn(ref sig, _) => Some(sig),
_ => None,
};
check_object_unsafe_self_trait_by_name(tcx, &trait_item);
{
trait_should_be_self.push(ty.span)
}
- hir::TraitItemKind::Method(sig, _) => {
+ hir::TraitItemKind::Fn(sig, _) => {
for ty in sig.decl.inputs {
if could_be_self(trait_def_id, ty) {
trait_should_be_self.push(ty.span);
let impl_item = tcx.hir().expect_impl_item(hir_id);
let method_sig = match impl_item.kind {
- hir::ImplItemKind::Method(ref sig, _) => Some(sig),
+ hir::ImplItemKind::Fn(ref sig, _) => Some(sig),
_ => None,
};
let trait_def_id = tcx.hir().local_def_id(item.hir_id);
let trait_def = tcx.trait_def(trait_def_id);
- if trait_def.is_marker {
+ if trait_def.is_marker
+ || matches!(trait_def.specialization_kind, TraitSpecializationKind::Marker)
+ {
for associated_def_id in &*tcx.associated_item_def_ids(trait_def_id) {
struct_span_err!(
tcx.sess,
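The marker-trait check above combines a boolean field with a `matches!` test on the specialization kind. A standalone sketch of that pattern, using a hypothetical stand-in enum for `TraitSpecializationKind` (not the compiler's actual type):

```rust
// Hypothetical stand-in for `TraitSpecializationKind`, used only to
// illustrate the `is_marker || matches!(..)` shape of the check above.
enum SpecializationKind {
    None,
    Marker,
    AlwaysApplicable,
}

// Mirrors the condition guarding the "marker traits cannot have associated
// items" error: either the `#[marker]` attribute or the marker
// specialization kind triggers it.
fn is_marker_like(is_marker: bool, kind: &SpecializationKind) -> bool {
    is_marker || matches!(kind, SpecializationKind::Marker)
}

fn main() {
    assert!(is_marker_like(false, &SpecializationKind::Marker));
    assert!(is_marker_like(true, &SpecializationKind::None));
    assert!(!is_marker_like(false, &SpecializationKind::AlwaysApplicable));
    println!("ok");
}
```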
use crate::check::FnCtxt;
-use rustc::hir::map::Map;
use rustc::ty::adjustment::{Adjust, Adjustment, PointerCast};
use rustc::ty::fold::{TypeFoldable, TypeFolder};
use rustc::ty::{self, Ty, TyCtxt};
use rustc_data_structures::sync::Lrc;
use rustc_hir as hir;
-use rustc_hir::def_id::{DefId, DefIdSet, DefIndex};
+use rustc_hir::def_id::DefIdSet;
use rustc_hir::intravisit::{self, NestedVisitorMap, Visitor};
use rustc_infer::infer::error_reporting::TypeAnnotationNeeded::E0282;
use rustc_infer::infer::InferCtxt;
use rustc_span::symbol::sym;
use rustc_span::Span;
+use rustc_trait_selection::opaque_types::InferCtxtExt;
use std::mem;
wbcx.visit_fru_field_types();
wbcx.visit_opaque_types(body.value.span);
wbcx.visit_coercion_casts();
- wbcx.visit_free_region_map();
wbcx.visit_user_provided_tys();
wbcx.visit_user_provided_sigs();
wbcx.visit_generator_interior_types();
body: &'tcx hir::Body<'tcx>,
rustc_dump_user_substs: bool,
) -> WritebackCx<'cx, 'tcx> {
- let owner = body.id().hir_id;
+ let owner = body.id().hir_id.owner;
WritebackCx {
fcx,
- tables: ty::TypeckTables::empty(Some(DefId::local(owner.owner))),
+ tables: ty::TypeckTables::empty(Some(owner.to_def_id())),
body,
rustc_dump_user_substs,
}
fn write_ty_to_tables(&mut self, hir_id: hir::HirId, ty: Ty<'tcx>) {
debug!("write_ty_to_tables({:?}, {:?})", hir_id, ty);
- assert!(!ty.needs_infer() && !ty.has_placeholders());
+ assert!(!ty.needs_infer() && !ty.has_placeholders() && !ty.has_free_regions());
self.tables.node_types_mut().insert(hir_id, ty);
}
// traffic in node-ids or update tables in the type context etc.
impl<'cx, 'tcx> Visitor<'tcx> for WritebackCx<'cx, 'tcx> {
- type Map = Map<'tcx>;
+ type Map = intravisit::ErasedMap<'tcx>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
NestedVisitorMap::None
}
let new_upvar_capture = match *upvar_capture {
ty::UpvarCapture::ByValue => ty::UpvarCapture::ByValue,
ty::UpvarCapture::ByRef(ref upvar_borrow) => {
- let r = upvar_borrow.region;
- let r = self.resolve(&r, &upvar_id.var_path.hir_id);
- ty::UpvarCapture::ByRef(ty::UpvarBorrow { kind: upvar_borrow.kind, region: r })
+ ty::UpvarCapture::ByRef(ty::UpvarBorrow {
+ kind: upvar_borrow.kind,
+ region: self.tcx().lifetimes.re_erased,
+ })
}
};
debug!("Upvar capture for {:?} resolved to {:?}", upvar_id, new_upvar_capture);
let common_local_id_root = fcx_tables.local_id_root.unwrap();
for (&id, &origin) in fcx_tables.closure_kind_origins().iter() {
- let hir_id = hir::HirId { owner: common_local_id_root.index, local_id: id };
+ let hir_id = hir::HirId { owner: common_local_id_root.expect_local(), local_id: id };
self.tables.closure_kind_origins_mut().insert(hir_id, origin);
}
}
}
}
- fn visit_free_region_map(&mut self) {
- self.tables.free_region_map = self.fcx.tables.borrow().free_region_map.clone();
- debug_assert!(!self.tables.free_region_map.elements().any(|r| r.has_local_value()));
- }
-
fn visit_user_provided_tys(&mut self) {
let fcx_tables = self.fcx.tables.borrow();
debug_assert_eq!(fcx_tables.local_id_root, self.tables.local_id_root);
let mut errors_buffer = Vec::new();
for (&local_id, c_ty) in fcx_tables.user_provided_types().iter() {
- let hir_id = hir::HirId { owner: common_local_id_root.index, local_id };
+ let hir_id = hir::HirId { owner: common_local_id_root.expect_local(), local_id };
if cfg!(debug_assertions) && c_ty.has_local_value() {
span_bug!(hir_id.to_span(self.fcx.tcx), "writeback: `{:?}` is a local value", c_ty);
fn visit_opaque_types(&mut self, span: Span) {
for (&def_id, opaque_defn) in self.fcx.opaque_types.borrow().iter() {
let hir_id = self.tcx().hir().as_local_hir_id(def_id).unwrap();
- let instantiated_ty =
- self.tcx().erase_regions(&self.resolve(&opaque_defn.concrete_ty, &hir_id));
+ let instantiated_ty = self.resolve(&opaque_defn.concrete_ty, &hir_id);
debug_assert!(!instantiated_ty.has_escaping_bound_vars());
let common_local_id_root = fcx_tables.local_id_root.unwrap();
for (&local_id, fn_sig) in fcx_tables.liberated_fn_sigs().iter() {
- let hir_id = hir::HirId { owner: common_local_id_root.index, local_id };
+ let hir_id = hir::HirId { owner: common_local_id_root.expect_local(), local_id };
let fn_sig = self.resolve(fn_sig, &hir_id);
self.tables.liberated_fn_sigs_mut().insert(hir_id, fn_sig.clone());
}
let common_local_id_root = fcx_tables.local_id_root.unwrap();
for (&local_id, ftys) in fcx_tables.fru_field_types().iter() {
- let hir_id = hir::HirId { owner: common_local_id_root.index, local_id };
+ let hir_id = hir::HirId { owner: common_local_id_root.expect_local(), local_id };
let ftys = self.resolve(ftys, &hir_id);
self.tables.fru_field_types_mut().insert(hir_id, ftys);
}
}
}
-impl Locatable for DefIndex {
- fn to_span(&self, tcx: TyCtxt<'_>) -> Span {
- let hir_id = tcx.hir().def_index_to_hir_id(*self);
- tcx.hir().span(hir_id)
- }
-}
-
impl Locatable for hir::HirId {
fn to_span(&self, tcx: TyCtxt<'_>) -> Span {
tcx.hir().span(*self)
}
}
-///////////////////////////////////////////////////////////////////////////
-// The Resolver. This is the type folding engine that detects
-// unresolved types and so forth.
-
+/// The Resolver. This is the type folding engine that detects
+/// unresolved types and so forth.
struct Resolver<'cx, 'tcx> {
tcx: TyCtxt<'tcx>,
infcx: &'cx InferCtxt<'cx, 'tcx>,
fn fold_ty(&mut self, t: Ty<'tcx>) -> Ty<'tcx> {
match self.infcx.fully_resolve(&t) {
- Ok(t) => t,
+ Ok(t) => self.infcx.tcx.erase_regions(&t),
Err(_) => {
debug!("Resolver::fold_ty: input type `{:?}` not fully resolvable", t);
self.report_error(t);
}
}
- // FIXME This should be carefully checked
- // We could use `self.report_error` but it doesn't accept a ty::Region, right now.
fn fold_region(&mut self, r: ty::Region<'tcx>) -> ty::Region<'tcx> {
- self.infcx.fully_resolve(&r).unwrap_or(self.tcx.lifetimes.re_static)
+ debug_assert!(!r.is_late_bound(), "Should not be resolving bound region.");
+ self.tcx.lifetimes.re_erased
}
fn fold_const(&mut self, ct: &'tcx ty::Const<'tcx>) -> &'tcx ty::Const<'tcx> {
match self.infcx.fully_resolve(&ct) {
- Ok(ct) => ct,
+ Ok(ct) => self.infcx.tcx.erase_regions(&ct),
Err(_) => {
debug!("Resolver::fold_const: input const `{:?}` not fully resolvable", ct);
// FIXME: we'd like to use `self.report_error`, but it doesn't yet
-use crate::lint;
use rustc::ty::TyCtxt;
use rustc_ast::ast;
use rustc_data_structures::fx::FxHashMap;
use rustc_hir::def_id::{DefId, DefIdSet, LOCAL_CRATE};
use rustc_hir::itemlikevisit::ItemLikeVisitor;
use rustc_hir::print::visibility_qualified;
+use rustc_session::lint;
use rustc_span::Span;
pub fn check_crate(tcx: TyCtxt<'_>) {
use rustc_hir::ItemKind;
use rustc_infer::infer;
use rustc_infer::infer::outlives::env::OutlivesEnvironment;
-use rustc_infer::infer::{SuppressRegionErrors, TyCtxtInferExt};
-use rustc_infer::traits::misc::{can_type_implement_copy, CopyImplementationError};
-use rustc_infer::traits::predicate_for_trait_def;
-use rustc_infer::traits::{self, ObligationCause, TraitEngine};
+use rustc_infer::infer::{RegionckMode, TyCtxtInferExt};
+use rustc_trait_selection::traits::error_reporting::InferCtxtExt;
+use rustc_trait_selection::traits::misc::{can_type_implement_copy, CopyImplementationError};
+use rustc_trait_selection::traits::predicate_for_trait_def;
+use rustc_trait_selection::traits::{self, ObligationCause, TraitEngine, TraitEngineExt};
pub fn check_trait(tcx: TyCtxt<'_>, trait_def_id: DefId) {
let lang_items = tcx.lang_items();
impl_did,
        &region_scope_tree,
&outlives_env,
- SuppressRegionErrors::default(),
+ RegionckMode::default(),
);
}
}
}
}
-pub fn coerce_unsized_info<'tcx>(tcx: TyCtxt<'tcx>, impl_did: DefId) -> CoerceUnsizedInfo {
+pub fn coerce_unsized_info(tcx: TyCtxt<'tcx>, impl_did: DefId) -> CoerceUnsizedInfo {
debug!("compute_coerce_unsized_info(impl_did={:?})", impl_did);
let coerce_unsized_trait = tcx.lang_items().coerce_unsized_trait().unwrap();
impl_did,
        &region_scope_tree,
&outlives_env,
- SuppressRegionErrors::default(),
+ RegionckMode::default(),
);
CoerceUnsizedInfo { custom_kind: kind }
use rustc_hir as hir;
use rustc_hir::def_id::{CrateNum, DefId, LOCAL_CRATE};
use rustc_hir::itemlikevisit::ItemLikeVisitor;
-use rustc_infer::traits::{self, SkipLeakCheck};
+use rustc_trait_selection::traits::{self, SkipLeakCheck};
pub fn crate_inherent_impls_overlap_check(tcx: TyCtxt<'_>, crate_num: CrateNum) {
assert_eq!(crate_num, LOCAL_CRATE);
let impl_items2 = self.tcx.associated_items(impl2);
for item1 in impl_items1.in_definition_order() {
- let collision = impl_items2
- .filter_by_name_unhygienic(item1.ident.name)
- .find(|item2| {
- // Symbols and namespace match, compare hygienically.
- item1.kind.namespace() == item2.kind.namespace()
- && item1.ident.modern() == item2.ident.modern()
- })
- .is_some();
+ let collision = impl_items2.filter_by_name_unhygienic(item1.ident.name).any(|item2| {
+ // Symbols and namespace match, compare hygienically.
+ item1.kind.namespace() == item2.kind.namespace()
+ && item1.ident.normalize_to_macros_2_0()
+ == item2.ident.normalize_to_macros_2_0()
+ });
if collision {
return true;
let collision = impl_items2.filter_by_name_unhygienic(item1.ident.name).find(|item2| {
// Symbols and namespace match, compare hygienically.
item1.kind.namespace() == item2.kind.namespace()
- && item1.ident.modern() == item2.ident.modern()
+ && item1.ident.normalize_to_macros_2_0()
+ == item2.ident.normalize_to_macros_2_0()
});
if let Some(item2) = collision {
- let name = item1.ident.modern();
+ let name = item1.ident.normalize_to_macros_2_0();
let mut err = struct_span_err!(
self.tcx.sess,
self.tcx.span_of_impl(item1.def_id).unwrap(),
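The hunk above replaces `.filter_by_name_unhygienic(..).find(..).is_some()` with `.any(..)`. A minimal standalone check that the two forms agree (both short-circuit at the first match); the string data here is illustrative, not from the compiler:

```rust
fn main() {
    let items = ["foo", "bar", "baz"];

    // The pre-refactor shape: materialize the found item, then discard it.
    let via_find = items.iter().find(|s| s.starts_with('b')).is_some();

    // The post-refactor shape: ask the question directly.
    let via_any = items.iter().any(|s| s.starts_with('b'));

    assert_eq!(via_find, via_any);
    assert!(via_any);
    println!("ok");
}
```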
use rustc::ty::{self, TyCtxt, TypeFoldable};
use rustc_errors::struct_span_err;
use rustc_hir::def_id::{DefId, LOCAL_CRATE};
-use rustc_infer::traits;
use rustc_span::Span;
+use rustc_trait_selection::traits;
mod builtin;
mod inherent_impls;
return;
}
+ if let ty::trait_def::TraitSpecializationKind::AlwaysApplicable =
+ tcx.trait_def(trait_def_id).specialization_kind
+ {
+ if !tcx.features().specialization && !tcx.features().min_specialization {
+ let span = impl_header_span(tcx, impl_def_id);
+ tcx.sess
+ .struct_span_err(
+ span,
+ "implementing `rustc_specialization_trait` traits is unstable",
+ )
+ .help("add `#![feature(min_specialization)]` to the crate attributes to enable")
+ .emit();
+ return;
+ }
+ }
+
let trait_name = if did == li.fn_trait() {
"Fn"
} else if did == li.fn_mut_trait() {
use rustc_hir as hir;
use rustc_hir::itemlikevisit::ItemLikeVisitor;
use rustc_infer::infer::TyCtxtInferExt;
-use rustc_infer::traits;
+use rustc_trait_selection::traits;
pub fn check(tcx: TyCtxt<'_>) {
let mut orphan = OrphanChecker { tcx };
// impl !Send for (A, B) { }
// ```
//
- // This final impl is legal according to the orpan
+ // This final impl is legal according to the orphan
// rules, but it invalidates the reasoning from
// `two_foos` above.
debug!(
.emit();
}
- (_, _, Unsafety::Unsafe, hir::ImplPolarity::Negative) => {
+ (_, _, Unsafety::Unsafe, hir::ImplPolarity::Negative(_)) => {
// Reported in AST validation
self.tcx.sess.delay_span_bug(item.span, "unsafe negative impl");
}
- (_, _, Unsafety::Normal, hir::ImplPolarity::Negative)
+ (_, _, Unsafety::Normal, hir::ImplPolarity::Negative(_))
| (Unsafety::Unsafe, _, Unsafety::Unsafe, hir::ImplPolarity::Positive)
| (Unsafety::Normal, Some(_), Unsafety::Unsafe, hir::ImplPolarity::Positive)
| (Unsafety::Normal, None, Unsafety::Normal, _) => {
use crate::astconv::{AstConv, Bounds, SizedByDefault};
use crate::check::intrinsic::intrinsic_operation_unsafety;
use crate::constrained_generic_params as cgp;
-use crate::lint;
use crate::middle::lang_items;
use crate::middle::resolve_lifetime as rl;
use rustc::hir::map::blocks::FnLikeNode;
use rustc::hir::map::Map;
use rustc::middle::codegen_fn_attrs::{CodegenFnAttrFlags, CodegenFnAttrs};
use rustc::mir::mono::Linkage;
-use rustc::session::parse::feature_err;
use rustc::ty::query::Providers;
use rustc::ty::subst::{InternalSubsts, Subst};
use rustc::ty::util::Discr;
use rustc_hir::def_id::{DefId, LOCAL_CRATE};
use rustc_hir::intravisit::{self, NestedVisitorMap, Visitor};
use rustc_hir::{GenericParamKind, Node, Unsafety};
+use rustc_session::lint;
+use rustc_session::parse::feature_err;
use rustc_span::symbol::{kw, sym, Symbol};
use rustc_span::{Span, DUMMY_SP};
use rustc_target::spec::abi;
crate struct PlaceholderHirTyCollector(crate Vec<Span>);
impl<'v> Visitor<'v> for PlaceholderHirTyCollector {
- type Map = Map<'v>;
+ type Map = intravisit::ErasedMap<'v>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
NestedVisitorMap::None
}
fn visit_ty(&mut self, t: &'v hir::Ty<'v>) {
.unwrap_or(&"ParamName");
let mut sugg: Vec<_> =
- placeholder_types.iter().map(|sp| (*sp, type_name.to_string())).collect();
+ placeholder_types.iter().map(|sp| (*sp, (*type_name).to_string())).collect();
if generics.is_empty() {
sugg.push((span, format!("<{}>", type_name)));
} else if let Some(arg) = generics.iter().find(|arg| match arg.name {
}) {
// Account for `_` already present in cases like `struct S<_>(_);` and suggest
// `struct S<T>(T);` instead of `struct S<_, T>(T);`.
- sugg.push((arg.span, type_name.to_string()));
+ sugg.push((arg.span, (*type_name).to_string()));
} else {
sugg.push((
generics.iter().last().unwrap().span.shrink_to_hi(),
impl Visitor<'tcx> for CollectItemTypesVisitor<'tcx> {
type Map = Map<'tcx>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
- NestedVisitorMap::OnlyBodies(&self.tcx.hir())
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
+ NestedVisitorMap::OnlyBodies(self.tcx.hir())
}
fn visit_item(&mut self, item: &'tcx hir::Item<'tcx>) {
tcx.generics_of(def_id);
match trait_item.kind {
- hir::TraitItemKind::Method(..) => {
+ hir::TraitItemKind::Fn(..) => {
tcx.type_of(def_id);
tcx.fn_sig(def_id);
}
tcx.predicates_of(def_id);
let impl_item = tcx.hir().expect_impl_item(impl_item_id);
match impl_item.kind {
- hir::ImplItemKind::Method(..) => {
+ hir::ImplItemKind::Fn(..) => {
tcx.fn_sig(def_id);
}
hir::ImplItemKind::TyAlias(_) | hir::ImplItemKind::OpaqueTy(_) => {
.iter()
.map(|f| {
let fid = tcx.hir().local_def_id(f.hir_id);
- let dup_span = seen_fields.get(&f.ident.modern()).cloned();
+ let dup_span = seen_fields.get(&f.ident.normalize_to_macros_2_0()).cloned();
if let Some(prev_span) = dup_span {
struct_span_err!(
tcx.sess,
.span_label(prev_span, format!("`{}` first declared here", f.ident))
.emit();
} else {
- seen_fields.insert(f.ident.modern(), f.span);
+ seen_fields.insert(f.ident.normalize_to_macros_2_0(), f.span);
}
ty::FieldDef {
}
let is_marker = tcx.has_attr(def_id, sym::marker);
+ let spec_kind = if tcx.has_attr(def_id, sym::rustc_unsafe_specialization_marker) {
+ ty::trait_def::TraitSpecializationKind::Marker
+ } else if tcx.has_attr(def_id, sym::rustc_specialization_trait) {
+ ty::trait_def::TraitSpecializationKind::AlwaysApplicable
+ } else {
+ ty::trait_def::TraitSpecializationKind::None
+ };
let def_path_hash = tcx.def_path_hash(def_id);
- let def = ty::TraitDef::new(def_id, unsafety, paren_sugar, is_auto, is_marker, def_path_hash);
+ let def = ty::TraitDef::new(
+ def_id,
+ unsafety,
+ paren_sugar,
+ is_auto,
+ is_marker,
+ spec_kind,
+ def_path_hash,
+ );
tcx.arena.alloc(def)
}
}
impl Visitor<'tcx> for LateBoundRegionsDetector<'tcx> {
- type Map = Map<'tcx>;
+ type Map = intravisit::ErasedMap<'tcx>;
- fn nested_visit_map(&mut self) -> NestedVisitorMap<'_, Self::Map> {
+ fn nested_visit_map(&mut self) -> NestedVisitorMap<Self::Map> {
NestedVisitorMap::None
}
match node {
Node::TraitItem(item) => match item.kind {
- hir::TraitItemKind::Method(ref sig, _) => {
+ hir::TraitItemKind::Fn(ref sig, _) => {
has_late_bound_regions(tcx, &item.generics, &sig.decl)
}
_ => None,
},
Node::ImplItem(item) => match item.kind {
- hir::ImplItemKind::Method(ref sig, _) => {
+ hir::ImplItemKind::Fn(ref sig, _) => {
has_late_bound_regions(tcx, &item.generics, &sig.decl)
}
_ => None,
.any(is_suggestable_infer_ty)
}
-/// Whether `ty` is a type with `_` placeholders that can be infered. Used in diagnostics only to
+/// Whether `ty` is a type with `_` placeholders that can be inferred. Used in diagnostics only to
/// use inference to provide suggestions for the appropriate type if possible.
fn is_suggestable_infer_ty(ty: &hir::Ty<'_>) -> bool {
use hir::TyKind::*;
match tcx.hir().get(hir_id) {
TraitItem(hir::TraitItem {
- kind: TraitItemKind::Method(sig, TraitMethod::Provided(_)),
+ kind: TraitItemKind::Fn(sig, TraitFn::Provided(_)),
ident,
generics,
..
})
- | ImplItem(hir::ImplItem { kind: ImplItemKind::Method(sig, _), ident, generics, .. })
+ | ImplItem(hir::ImplItem { kind: ImplItemKind::Fn(sig, _), ident, generics, .. })
| Item(hir::Item { kind: ItemKind::Fn(sig, generics, _), ident, .. }) => {
match get_infer_ret_ty(&sig.decl.output) {
Some(ty) => {
}
TraitItem(hir::TraitItem {
- kind: TraitItemKind::Method(FnSig { header, decl }, _),
+ kind: TraitItemKind::Fn(FnSig { header, decl }, _),
ident,
generics,
..
let is_rustc_reservation = tcx.has_attr(def_id, sym::rustc_reservation_impl);
let item = tcx.hir().expect_item(hir_id);
match &item.kind {
- hir::ItemKind::Impl { polarity: hir::ImplPolarity::Negative, .. } => {
+ hir::ItemKind::Impl { polarity: hir::ImplPolarity::Negative(_), .. } => {
if is_rustc_reservation {
tcx.sess.span_err(item.span, "reservation impls can't be negative");
}
codegen_fn_attrs.flags |= CodegenFnAttrFlags::NO_MANGLE;
} else if attr.check_name(sym::rustc_std_internal_symbol) {
codegen_fn_attrs.flags |= CodegenFnAttrFlags::RUSTC_STD_INTERNAL_SYMBOL;
- } else if attr.check_name(sym::no_debug) {
- codegen_fn_attrs.flags |= CodegenFnAttrFlags::NO_DEBUG;
} else if attr.check_name(sym::used) {
codegen_fn_attrs.flags |= CodegenFnAttrFlags::USED;
} else if attr.check_name(sym::thread_local) {
use rustc::hir::map::Map;
-use rustc::session::parse::feature_err;
use rustc::ty::subst::{GenericArgKind, InternalSubsts, Subst};
use rustc::ty::util::IntTypeExt;
use rustc::ty::{self, DefIdTree, Ty, TyCtxt, TypeFoldable};
use rustc_hir::intravisit;
use rustc_hir::intravisit::Visitor;
use rustc_hir::Node;
-use rustc_infer::traits;
+use rustc_session::parse::feature_err;
use rustc_span::symbol::{sym, Ident};
use rustc_span::{Span, DUMMY_SP};
+use rustc_trait_selection::traits;
use super::ItemCtxt;
use super::{bad_placeholder_type, is_suggestable_infer_ty};
match tcx.hir().get(hir_id) {
Node::TraitItem(item) => match item.kind {
- TraitItemKind::Method(..) => {
+ TraitItemKind::Fn(..) => {
let substs = InternalSubsts::identity_for_item(tcx, def_id);
tcx.mk_fn_def(def_id, substs)
}
},
Node::ImplItem(item) => match item.kind {
- ImplItemKind::Method(..) => {
+ ImplItemKind::Fn(..) => {
let substs = InternalSubsts::identity_for_item(tcx, def_id);
tcx.mk_fn_def(def_id, substs)
}
impl<'tcx> intravisit::Visitor<'tcx> for ConstraintLocator<'tcx> {
type Map = Map<'tcx>;
- fn nested_visit_map(&mut self) -> intravisit::NestedVisitorMap<'_, Self::Map> {
- intravisit::NestedVisitorMap::All(&self.tcx.hir())
+ fn nested_visit_map(&mut self) -> intravisit::NestedVisitorMap<Self::Map> {
+ intravisit::NestedVisitorMap::All(self.tcx.hir())
}
fn visit_expr(&mut self, ex: &'tcx Expr<'tcx>) {
if let hir::ExprKind::Closure(..) = ex.kind {
vec.into_iter().collect()
}
-/// If `include_projections` is false, returns the list of parameters that are
+/// If `include_nonconstraining` is false, returns the list of parameters that are
/// constrained by `t` - i.e., the value of each parameter in the list is
/// uniquely determined by `t` (see RFC 447). If it is true, return the list
/// of parameters whose values are needed in order to constrain `ty` - these
}
fn visit_const(&mut self, c: &'tcx ty::Const<'tcx>) -> bool {
- if let ty::ConstKind::Param(data) = c.val {
- self.parameters.push(Parameter::from(data));
+ match c.val {
+ ty::ConstKind::Unevaluated(..) if !self.include_nonconstraining => {
+ // Constant expressions are not injective
+ return c.ty.visit_with(self);
+ }
+ ty::ConstKind::Param(data) => {
+ self.parameters.push(Parameter::from(data));
+ }
+ _ => {}
}
- false
+
+ c.super_visit_with(self)
}
}
for &var_id in upvars.keys() {
let upvar_id = ty::UpvarId {
var_path: ty::UpvarPath { hir_id: var_id },
- closure_expr_id: closure_def_id.to_local(),
+ closure_expr_id: closure_def_id.expect_local(),
};
let upvar_capture = self.mc.tables.upvar_capture(upvar_id);
let captured_place = return_if_err!(self.cat_captured_var(
//! fixed, but for the moment it's easier to do these checks early.
use crate::constrained_generic_params as cgp;
+use min_specialization::check_min_specialization;
+
use rustc::ty::query::Providers;
use rustc::ty::{self, TyCtxt, TypeFoldable};
use rustc_data_structures::fx::{FxHashMap, FxHashSet};
use rustc_hir as hir;
use rustc_hir::def_id::DefId;
use rustc_hir::itemlikevisit::ItemLikeVisitor;
+use rustc_span::Span;
+
use std::collections::hash_map::Entry::{Occupied, Vacant};
-use rustc_span::Span;
+mod min_specialization;
/// Checks that all the type/lifetime parameters on an impl also
/// appear in the trait ref or self type (or are constrained by a
}
fn check_mod_impl_wf(tcx: TyCtxt<'_>, module_def_id: DefId) {
- tcx.hir().visit_item_likes_in_module(module_def_id, &mut ImplWfCheck { tcx });
+ let min_specialization = tcx.features().min_specialization;
+ tcx.hir()
+ .visit_item_likes_in_module(module_def_id, &mut ImplWfCheck { tcx, min_specialization });
}
pub fn provide(providers: &mut Providers<'_>) {
struct ImplWfCheck<'tcx> {
tcx: TyCtxt<'tcx>,
+ min_specialization: bool,
}
impl ItemLikeVisitor<'tcx> for ImplWfCheck<'tcx> {
let impl_def_id = self.tcx.hir().local_def_id(item.hir_id);
enforce_impl_params_are_constrained(self.tcx, impl_def_id, items);
enforce_impl_items_are_distinct(self.tcx, items);
+ if self.min_specialization {
+ check_min_specialization(self.tcx, impl_def_id, item.span);
+ }
}
}
hir::ImplItemKind::TyAlias(_) => &mut seen_type_items,
_ => &mut seen_value_items,
};
- match seen_items.entry(impl_item.ident.modern()) {
+ match seen_items.entry(impl_item.ident.normalize_to_macros_2_0()) {
Occupied(entry) => {
let mut err = struct_span_err!(
tcx.sess,
--- /dev/null
+//! # Minimal Specialization
+//!
+//! This module contains the checks for sound specialization used when the
+//! `min_specialization` feature is enabled. This requires that the impl is
+//! *always applicable*.
+//!
+//! If `impl1` specializes `impl2`, then `impl1` is *always applicable* if,
+//! whenever all the bounds of `impl2` are satisfied and all the bounds of
+//! `impl1` are satisfied for some choice of lifetimes, `impl1` applies for
+//! any choice of lifetimes.
+//!
+//! ## Basic approach
+//!
+//! To enforce this requirement on specializations we take the following
+//! approach:
+//!
+//! 1. Match up the substs for `impl2` so that the implemented trait and
+//! self-type match those for `impl1`.
+//! 2. Check for any direct use of `'static` in the substs of `impl2`.
+//! 3. Check that all of the generic parameters of `impl1` occur at most once
+//! in the *unconstrained* substs for `impl2`. A parameter is constrained if
+//! its value is completely determined by an associated type projection
+//! predicate.
+//! 4. Check that all predicates on `impl1` either exist on `impl2` (after
+//! matching substs), or are well-formed predicates for the trait's type
+//! arguments.
+//!
+//! ## Example
+//!
+//! Suppose we have the following always applicable impl:
+//!
+//! ```rust
+//! impl<T> SpecExtend<T> for std::vec::IntoIter<T> { /* specialized impl */ }
+//! impl<T, I: Iterator<Item=T>> SpecExtend<T> for I { /* default impl */ }
+//! ```
+//!
+//! We get that the substs for `impl2` are `[T, std::vec::IntoIter<T>]`. `T` is
+//! constrained to be `<I as Iterator>::Item`, so we check only
+//! `std::vec::IntoIter<T>` for repeated parameters, which it doesn't have. The
+//! predicates of `impl1` are only `T: Sized`, which is also a predicate of
+//! `impl2`. So this specialization is sound.
+//!
+//! ## Extensions
+//!
+//! Unfortunately not all specializations in the standard library are allowed
+//! by this. So there are two extensions to these rules that allow specializing
+//! on some traits: that is, using them as bounds on the specializing impl,
+//! even when they don't occur in the base impl.
+//!
+//! ### rustc_specialization_trait
+//!
+//! If a trait is always applicable, then it's sound to specialize on it. We
+//! check that a trait is always applicable in the same way as impls, except
+//! that step 4 is now "all predicates on `impl1` are always applicable". We
+//! require that `specialization` or `min_specialization` is enabled to
+//! implement these traits.
+//!
+//! ### rustc_unsafe_specialization_marker
+//!
+//! There are also some specializations on traits with no methods, including
+//! the stable `FusedIterator` trait. We allow marking marker traits with an
+//! unstable attribute that means we ignore them in point 3 of the checks
+//! above. This is unsound, in the sense that the specialized impl may be used
+//! when it doesn't apply, but we allow it in the short term since it can't
+//! cause use-after-frees with purely safe code in the same way as
+//! specializing on traits with methods can.
+
+use crate::constrained_generic_params as cgp;
+
+use rustc::middle::region::ScopeTree;
+use rustc::ty::subst::{GenericArg, InternalSubsts, SubstsRef};
+use rustc::ty::trait_def::TraitSpecializationKind;
+use rustc::ty::{self, InstantiatedPredicates, TyCtxt, TypeFoldable};
+use rustc_data_structures::fx::FxHashSet;
+use rustc_hir as hir;
+use rustc_hir::def_id::DefId;
+use rustc_infer::infer::outlives::env::OutlivesEnvironment;
+use rustc_infer::infer::{InferCtxt, RegionckMode, TyCtxtInferExt};
+use rustc_infer::traits::specialization_graph::Node;
+use rustc_span::Span;
+use rustc_trait_selection::traits::{self, translate_substs, wf};
+
+pub(super) fn check_min_specialization(tcx: TyCtxt<'_>, impl_def_id: DefId, span: Span) {
+ if let Some(node) = parent_specialization_node(tcx, impl_def_id) {
+ tcx.infer_ctxt().enter(|infcx| {
+ check_always_applicable(&infcx, impl_def_id, node, span);
+ });
+ }
+}
+
+fn parent_specialization_node(tcx: TyCtxt<'_>, impl1_def_id: DefId) -> Option<Node> {
+ let trait_ref = tcx.impl_trait_ref(impl1_def_id)?;
+ let trait_def = tcx.trait_def(trait_ref.def_id);
+
+ let impl2_node = trait_def.ancestors(tcx, impl1_def_id).ok()?.nth(1)?;
+
+ let always_applicable_trait =
+ matches!(trait_def.specialization_kind, TraitSpecializationKind::AlwaysApplicable);
+ if impl2_node.is_from_trait() && !always_applicable_trait {
+ // Implementing a normal trait isn't a specialization.
+ return None;
+ }
+ Some(impl2_node)
+}
+
+/// Check that `impl1` is a sound specialization
+fn check_always_applicable(
+ infcx: &InferCtxt<'_, '_>,
+ impl1_def_id: DefId,
+ impl2_node: Node,
+ span: Span,
+) {
+ if let Some((impl1_substs, impl2_substs)) =
+ get_impl_substs(infcx, impl1_def_id, impl2_node, span)
+ {
+ let impl2_def_id = impl2_node.def_id();
+ debug!(
+ "check_always_applicable(\nimpl1_def_id={:?},\nimpl2_def_id={:?},\nimpl2_substs={:?}\n)",
+ impl1_def_id, impl2_def_id, impl2_substs
+ );
+
+ let tcx = infcx.tcx;
+
+ let parent_substs = if impl2_node.is_from_trait() {
+ impl2_substs.to_vec()
+ } else {
+ unconstrained_parent_impl_substs(tcx, impl2_def_id, impl2_substs)
+ };
+
+ check_static_lifetimes(tcx, &parent_substs, span);
+ check_duplicate_params(tcx, impl1_substs, &parent_substs, span);
+
+ check_predicates(infcx, impl1_def_id, impl1_substs, impl2_node, impl2_substs, span);
+ }
+}
+
+/// Given a specializing impl `impl1`, and the base impl `impl2`, returns two
+/// substitutions `(S1, S2)` that equate their trait references. The returned
+/// types are expressed in terms of the generics of `impl1`.
+///
+/// Example
+///
+/// impl<A, B> Foo<A> for B { /* impl2 */ }
+/// impl<C> Foo<Vec<C>> for C { /* impl1 */ }
+///
+/// Would return `S1 = [C]` and `S2 = [Vec<C>, C]`.
+fn get_impl_substs<'tcx>(
+ infcx: &InferCtxt<'_, 'tcx>,
+ impl1_def_id: DefId,
+ impl2_node: Node,
+ span: Span,
+) -> Option<(SubstsRef<'tcx>, SubstsRef<'tcx>)> {
+ let tcx = infcx.tcx;
+ let param_env = tcx.param_env(impl1_def_id);
+
+ let impl1_substs = InternalSubsts::identity_for_item(tcx, impl1_def_id);
+ let impl2_substs = translate_substs(infcx, param_env, impl1_def_id, impl1_substs, impl2_node);
+
+ // Conservatively use an empty `ParamEnv`.
+ let outlives_env = OutlivesEnvironment::new(ty::ParamEnv::empty());
+ infcx.resolve_regions_and_report_errors(
+ impl1_def_id,
+ &ScopeTree::default(),
+ &outlives_env,
+ RegionckMode::default(),
+ );
+ let impl2_substs = match infcx.fully_resolve(&impl2_substs) {
+ Ok(s) => s,
+ Err(_) => {
+ tcx.sess.struct_span_err(span, "could not resolve substs on overridden impl").emit();
+ return None;
+ }
+ };
+ Some((impl1_substs, impl2_substs))
+}
+
+/// Returns a list of all of the unconstrained substs of the given impl.
+///
+/// For example given the impl:
+///
+/// impl<'a, T, I> ... where &'a I: IntoIterator<Item=&'a T>
+///
+/// This would return the substs corresponding to `['a, I]`, because knowing
+/// `'a` and `I` determines the value of `T`.
+fn unconstrained_parent_impl_substs<'tcx>(
+ tcx: TyCtxt<'tcx>,
+ impl_def_id: DefId,
+ impl_substs: SubstsRef<'tcx>,
+) -> Vec<GenericArg<'tcx>> {
+ let impl_generic_predicates = tcx.predicates_of(impl_def_id);
+ let mut unconstrained_parameters = FxHashSet::default();
+ let mut constrained_params = FxHashSet::default();
+ let impl_trait_ref = tcx.impl_trait_ref(impl_def_id);
+
+    // Unfortunately the functions in `constrained_generic_params` don't do
+ // what we want here. We want only a list of constrained parameters while
+ // the functions in `cgp` add the constrained parameters to a list of
+ // unconstrained parameters.
+ for (predicate, _) in impl_generic_predicates.predicates.iter() {
+ if let ty::Predicate::Projection(proj) = predicate {
+ let projection_ty = proj.skip_binder().projection_ty;
+ let projected_ty = proj.skip_binder().ty;
+
+ let unbound_trait_ref = projection_ty.trait_ref(tcx);
+ if Some(unbound_trait_ref) == impl_trait_ref {
+ continue;
+ }
+
+ unconstrained_parameters.extend(cgp::parameters_for(&projection_ty, true));
+
+ for param in cgp::parameters_for(&projected_ty, false) {
+            if !unconstrained_parameters.contains(&param) {
+ constrained_params.insert(param.0);
+ }
+ }
+
+ unconstrained_parameters.extend(cgp::parameters_for(&projected_ty, true));
+ }
+ }
+
+ impl_substs
+ .iter()
+ .enumerate()
+ .filter(|&(idx, _)| !constrained_params.contains(&(idx as u32)))
+ .map(|(_, arg)| *arg)
+ .collect()
+}
+
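`unconstrained_parent_impl_substs` above ends by keeping only the substs whose position is not in the constrained-index set. A self-contained sketch of that index-filtering step, with plain strings standing in for the compiler's `GenericArg` values:

```rust
use std::collections::HashSet;

fn main() {
    // Stand-ins for the parent impl's substs `['a, T, I]`.
    let substs = ["'a", "T", "I"];

    // Suppose position 1 (`T`) was determined by a projection predicate.
    let constrained: HashSet<u32> = vec![1u32].into_iter().collect();

    // Same enumerate/filter/map shape as the code above: drop any subst
    // whose index is in the constrained set.
    let unconstrained: Vec<&str> = substs
        .iter()
        .enumerate()
        .filter(|&(idx, _)| !constrained.contains(&(idx as u32)))
        .map(|(_, arg)| *arg)
        .collect();

    assert_eq!(unconstrained, ["'a", "I"]);
    println!("ok");
}
```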
+/// Check that parameters of the derived impl don't occur more than once in the
+/// equated substs of the base impl.
+///
+/// For example forbid the following:
+///
+/// impl<A> Tr for A { }
+/// impl<B> Tr for (B, B) { }
+///
+/// Note that we only consider the unconstrained parameters of the base impl:
+///
+/// impl<S, I: IntoIterator<Item = S>> Tr<S> for I { }
+/// impl<T> Tr<T> for Vec<T> { }
+///
+/// The substs for the parent impl here are `[T, Vec<T>]`, which repeats `T`,
+/// but `S` is constrained in the parent impl, so `parent_substs` is only
+/// `[Vec<T>]`. This means we allow this impl.
+fn check_duplicate_params<'tcx>(
+ tcx: TyCtxt<'tcx>,
+ impl1_substs: SubstsRef<'tcx>,
+ parent_substs: &Vec<GenericArg<'tcx>>,
+ span: Span,
+) {
+ let mut base_params = cgp::parameters_for(parent_substs, true);
+ base_params.sort_by_key(|param| param.0);
+ if let (_, [duplicate, ..]) = base_params.partition_dedup() {
+ let param = impl1_substs[duplicate.0 as usize];
+ tcx.sess
+ .struct_span_err(span, &format!("specializing impl repeats parameter `{}`", param))
+ .emit();
+ }
+}
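The duplicate check above relies on the nightly `partition_dedup` API over the sorted parameter list. A minimal stable-Rust sketch of the same idea (the name `first_duplicate` is illustrative, not the compiler's own code): sort the parameter indices drawn from the base impl's substs, then report the first index that occurs twice.

```rust
/// Return the first parameter index that appears more than once,
/// mirroring `check_duplicate_params`: sort, then scan adjacent pairs.
fn first_duplicate(params: &mut Vec<u32>) -> Option<u32> {
    params.sort_unstable();
    params.windows(2).find(|w| w[0] == w[1]).map(|w| w[0])
}

fn main() {
    // `impl<B> Tr for (B, B)` equates `B` with both tuple components,
    // so index 0 appears twice and the impl is rejected.
    assert_eq!(first_duplicate(&mut vec![0, 0]), Some(0));
    // `impl<T> Tr<T> for Vec<T>` with `S` constrained leaves one use of `T`.
    assert_eq!(first_duplicate(&mut vec![0]), None);
}
```

Sorting first makes duplicates adjacent, which is also why the real code calls `sort_by_key` before `partition_dedup`.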
+
+/// Check that `'static` lifetimes are not introduced by the specializing impl.
+///
+/// For example, we forbid the following:
+///
+/// impl<A> Tr for A { }
+/// impl Tr for &'static i32 { }
+fn check_static_lifetimes<'tcx>(
+ tcx: TyCtxt<'tcx>,
+ parent_substs: &Vec<GenericArg<'tcx>>,
+ span: Span,
+) {
+ if tcx.any_free_region_meets(parent_substs, |r| *r == ty::ReStatic) {
+ tcx.sess.struct_span_err(span, "cannot specialize on `'static` lifetime").emit();
+ }
+}
+
+/// Check whether predicates on the specializing impl (`impl1`) are allowed.
+///
+/// Each predicate `P` must be:
+///
+/// * global (does not reference any parameters)
+/// * `T: Tr` predicate where `Tr` is an always-applicable trait
+/// * on the base `impl impl2`
+/// * Currently this check is done using syntactic equality, which is
+/// conservative but generally sufficient.
+/// * a well-formed predicate of a type argument of the trait being implemented,
+/// including the `Self`-type.
+fn check_predicates<'tcx>(
+ infcx: &InferCtxt<'_, 'tcx>,
+ impl1_def_id: DefId,
+ impl1_substs: SubstsRef<'tcx>,
+ impl2_node: Node,
+ impl2_substs: SubstsRef<'tcx>,
+ span: Span,
+) {
+ let tcx = infcx.tcx;
+ let impl1_predicates = tcx.predicates_of(impl1_def_id).instantiate(tcx, impl1_substs);
+ let mut impl2_predicates = if impl2_node.is_from_trait() {
+ // Always applicable traits have to be always applicable without any
+ // assumptions.
+ InstantiatedPredicates::empty()
+ } else {
+ tcx.predicates_of(impl2_node.def_id()).instantiate(tcx, impl2_substs)
+ };
+ debug!(
+ "check_predicates(\nimpl1_predicates={:?},\nimpl2_predicates={:?}\n)",
+ impl1_predicates, impl2_predicates,
+ );
+
+ // Since impls of always applicable traits don't get to assume anything, we
+ // can also assume their supertraits apply.
+ //
+ // For example, we allow:
+ //
+ // #[rustc_specialization_trait]
+ // trait AlwaysApplicable: Debug { }
+ //
+ // impl<T> Tr for T { }
+ // impl<T: AlwaysApplicable> Tr for T { }
+ //
+ // Specializing on `AlwaysApplicable` allows also specializing on `Debug`
+ // which is sound because we forbid impls like the following
+ //
+ // impl<D: Debug> AlwaysApplicable for D { }
+ let always_applicable_traits: Vec<_> = impl1_predicates
+ .predicates
+ .iter()
+ .filter(|predicate| {
+ matches!(
+ trait_predicate_kind(tcx, predicate),
+ Some(TraitSpecializationKind::AlwaysApplicable)
+ )
+ })
+ .copied()
+ .collect();
+
+ // Include the well-formed predicates of the type parameters of the impl.
+ for ty in tcx.impl_trait_ref(impl1_def_id).unwrap().substs.types() {
+ if let Some(obligations) = wf::obligations(
+ infcx,
+ tcx.param_env(impl1_def_id),
+ tcx.hir().as_local_hir_id(impl1_def_id).unwrap(),
+ ty,
+ span,
+ ) {
+ impl2_predicates
+ .predicates
+ .extend(obligations.into_iter().map(|obligation| obligation.predicate))
+ }
+ }
+ impl2_predicates.predicates.extend(traits::elaborate_predicates(tcx, always_applicable_traits));
+
+ for predicate in impl1_predicates.predicates {
+ if !impl2_predicates.predicates.contains(&predicate) {
+ check_specialization_on(tcx, &predicate, span)
+ }
+ }
+}
+
+fn check_specialization_on<'tcx>(tcx: TyCtxt<'tcx>, predicate: &ty::Predicate<'tcx>, span: Span) {
+ debug!("check_specialization_on(predicate = {:?})", predicate);
+ match predicate {
+ // Global predicates are either always true or always false, so they
+ // are fine to specialize on.
+ _ if predicate.is_global() => (),
+ // We allow specializing on explicitly marked traits with no associated
+ // items.
+ ty::Predicate::Trait(pred, hir::Constness::NotConst) => {
+ if !matches!(
+ trait_predicate_kind(tcx, predicate),
+ Some(TraitSpecializationKind::Marker)
+ ) {
+ tcx.sess
+ .struct_span_err(
+ span,
+ &format!(
+ "cannot specialize on trait `{}`",
+ tcx.def_path_str(pred.def_id()),
+ ),
+ )
+ .emit()
+ }
+ }
+ _ => tcx
+ .sess
+ .struct_span_err(span, &format!("cannot specialize on `{:?}`", predicate))
+ .emit(),
+ }
+}
+
+fn trait_predicate_kind<'tcx>(
+ tcx: TyCtxt<'tcx>,
+ predicate: &ty::Predicate<'tcx>,
+) -> Option<TraitSpecializationKind> {
+ match predicate {
+ ty::Predicate::Trait(pred, hir::Constness::NotConst) => {
+ Some(tcx.trait_def(pred.def_id()).specialization_kind)
+ }
+ ty::Predicate::Trait(_, hir::Constness::Const)
+ | ty::Predicate::RegionOutlives(_)
+ | ty::Predicate::TypeOutlives(_)
+ | ty::Predicate::Projection(_)
+ | ty::Predicate::WellFormed(_)
+ | ty::Predicate::Subtype(_)
+ | ty::Predicate::ObjectSafe(_)
+ | ty::Predicate::ClosureKind(..)
+ | ty::Predicate::ConstEvaluatable(..) => None,
+ }
+}
#![feature(nll)]
#![feature(try_blocks)]
#![feature(never_type)]
+#![feature(slice_partition_dedup)]
#![recursion_limit = "256"]
#[macro_use]
mod structured_errors;
mod variance;
-use rustc::lint;
use rustc::middle;
-use rustc::session;
-use rustc::session::config::EntryFnType;
use rustc::ty::query::Providers;
use rustc::ty::subst::SubstsRef;
use rustc::ty::{self, Ty, TyCtxt};
use rustc_hir::def_id::{DefId, LOCAL_CRATE};
use rustc_hir::Node;
use rustc_infer::infer::{InferOk, TyCtxtInferExt};
-use rustc_infer::traits::{ObligationCause, ObligationCauseCode, TraitEngine, TraitEngineExt};
+use rustc_infer::traits::TraitEngineExt as _;
+use rustc_session::config::EntryFnType;
use rustc_span::{Span, DUMMY_SP};
use rustc_target::spec::abi::Abi;
+use rustc_trait_selection::traits::error_reporting::InferCtxtExt as _;
+use rustc_trait_selection::traits::{
+ ObligationCause, ObligationCauseCode, TraitEngine, TraitEngineExt as _,
+};
use std::iter;
use rustc_hir::PatKind;
use rustc_infer::infer::InferCtxt;
use rustc_span::Span;
+use rustc_trait_selection::infer::InferCtxtExt;
#[derive(Clone, Debug)]
pub enum PlaceBase {
| Res::Def(DefKind::ConstParam, _)
| Res::Def(DefKind::AssocConst, _)
| Res::Def(DefKind::Fn, _)
- | Res::Def(DefKind::Method, _)
+ | Res::Def(DefKind::AssocFn, _)
| Res::SelfCtor(..) => Ok(self.cat_rvalue(hir_id, span, expr_ty)),
Res::Def(DefKind::Static, _) => Ok(Place {
let upvar_id = ty::UpvarId {
var_path: ty::UpvarPath { hir_id: var_id },
- closure_expr_id: closure_expr_def_id.to_local(),
+ closure_expr_id: closure_expr_def_id.expect_local(),
};
let var_ty = self.node_ty(var_id)?;
predicates_added = false;
let mut visitor = InferVisitor {
- tcx: tcx,
+ tcx,
global_inferred_outlives: &mut global_inferred_outlives,
predicates_added: &mut predicates_added,
- explicit_map: explicit_map,
+ explicit_map,
};
// Visit all the crates and infer predicates
-use rustc::session::Session;
use rustc::ty::{Ty, TypeFoldable};
use rustc_errors::{Applicability, DiagnosticBuilder, DiagnosticId};
+use rustc_session::Session;
use rustc_span::Span;
pub trait StructuredDiagnostic<'tcx> {
}
fn visit_trait_item(&mut self, trait_item: &hir::TraitItem<'_>) {
- if let hir::TraitItemKind::Method(..) = trait_item.kind {
+ if let hir::TraitItemKind::Fn(..) = trait_item.kind {
self.visit_node_helper(trait_item.hir_id);
}
}
fn visit_impl_item(&mut self, impl_item: &hir::ImplItem<'_>) {
- if let hir::ImplItemKind::Method(..) = impl_item.kind {
+ if let hir::ImplItemKind::Fn(..) = impl_item.kind {
self.visit_node_helper(impl_item.hir_id);
}
}
-//! Module for inferring the variance of type and lifetime parameters. See the [rustc guide]
+//! Module for inferring the variance of type and lifetime parameters. See the [rustc dev guide]
//! chapter for more info.
//!
-//! [rustc guide]: https://rust-lang.github.io/rustc-guide/variance.html
+//! [rustc dev guide]: https://rustc-dev-guide.rust-lang.org/variance.html
use hir::Node;
use rustc::ty::query::Providers;
},
Node::TraitItem(item) => match item.kind {
- hir::TraitItemKind::Method(..) => {}
+ hir::TraitItemKind::Fn(..) => {}
_ => unsupported(),
},
Node::ImplItem(item) => match item.kind {
- hir::ImplItemKind::Method(..) => {}
+ hir::ImplItemKind::Fn(..) => {}
_ => unsupported(),
},
// See the following for a discussion on dep-graph management.
//
- // - https://rust-lang.github.io/rustc-guide/query.html
- // - https://rust-lang.github.io/rustc-guide/variance.html
+ // - https://rustc-dev-guide.rust-lang.org/query.html
+ // - https://rustc-dev-guide.rust-lang.org/variance.html
tcx.hir().krate().visit_all_item_likes(&mut terms_cx);
terms_cx
}
fn visit_trait_item(&mut self, trait_item: &hir::TraitItem<'_>) {
- if let hir::TraitItemKind::Method(..) = trait_item.kind {
+ if let hir::TraitItemKind::Fn(..) = trait_item.kind {
self.add_inferreds_for_item(trait_item.hir_id);
}
}
fn visit_impl_item(&mut self, impl_item: &hir::ImplItem<'_>) {
- if let hir::ImplItemKind::Method(..) = impl_item.kind {
+ if let hir::ImplItemKind::Fn(..) = impl_item.kind {
self.add_inferreds_for_item(impl_item.hir_id);
}
}
-For more information about how `librustdoc` works, see the [rustc guide].
+For more information about how `librustdoc` works, see the [rustc dev guide].
-[rustc guide]: https://rust-lang.github.io/rustc-guide/rustdoc.html
+[rustc dev guide]: https://rustc-dev-guide.rust-lang.org/rustdoc.html
use rustc::ty::{self, Region, RegionVid, TypeFoldable};
use rustc_data_structures::fx::FxHashSet;
use rustc_hir as hir;
-use rustc_infer::traits::auto_trait::{self, AutoTraitResult};
+use rustc_trait_selection::traits::auto_trait::{self, AutoTraitResult};
use std::fmt::Debug;
// Writing a projection trait bound of the form
// <T as Trait>::Name : ?Sized
// is illegal, because ?Sized bounds can only
- // be written in the (here, nonexistant) definition
+ // be written in the (here, nonexistent) definition
// of the type.
// Therefore, we make sure that we never add a ?Sized
// bound for projections
+use crate::rustc_trait_selection::traits::query::evaluate_obligation::InferCtxtExt;
use rustc::ty::subst::Subst;
use rustc::ty::{ToPredicate, WithConstness};
use rustc_hir as hir;
use rustc_metadata::creader::LoadedMacro;
use rustc_mir::const_eval::is_min_const_fn;
use rustc_span::hygiene::MacroKind;
-use rustc_span::symbol::sym;
use rustc_span::Span;
use crate::clean::{self, GetDefId, ToSource, TypeKind};
attrs: Option<Attrs<'_>>,
visited: &mut FxHashSet<DefId>,
) -> Option<Vec<clean::Item>> {
- let did = if let Some(did) = res.opt_def_id() {
- did
- } else {
- return None;
- };
+ let did = res.opt_def_id()?;
if did.is_local() {
return None;
}
let generics = (cx.tcx.generics_of(did), predicates).clean(cx);
let generics = filter_non_trait_generics(did, generics);
let (generics, supertrait_bounds) = separate_supertrait_bounds(generics);
- let is_spotlight = load_attrs(cx, did).clean(cx).has_doc_flag(sym::spotlight);
let is_auto = cx.tcx.trait_is_auto(did);
clean::Trait {
auto: auto_trait,
generics,
items: trait_items,
bounds: supertrait_bounds,
- is_spotlight,
is_auto,
}
}
name: ref _name,
},
ref bounds,
- } => !(*s == "Self" && did == trait_did) && !bounds.is_empty(),
+ } => !(bounds.is_empty() || *s == "Self" && did == trait_did),
_ => true,
});
g
pub use self::types::Visibility::{Inherited, Public};
pub use self::types::*;
-const FN_OUTPUT_NAME: &'static str = "Output";
+const FN_OUTPUT_NAME: &str = "Output";
pub trait Clean<T> {
fn clean(&self, cx: &DocContext<'_>) -> T;
cx.tcx
.hir()
.krate()
+ .item
.module
.item_ids
.iter()
cx.tcx
.hir()
.krate()
+ .item
.module
.item_ids
.iter()
impl Clean<Item> for doctree::Trait<'_> {
fn clean(&self, cx: &DocContext<'_>) -> Item {
let attrs = self.attrs.clean(cx);
- let is_spotlight = attrs.has_doc_flag(sym::spotlight);
Item {
name: Some(self.name.clean(cx)),
attrs,
items: self.items.iter().map(|ti| ti.clean(cx)).collect(),
generics: self.generics.clean(cx),
bounds: self.bounds.clean(cx),
- is_spotlight,
is_auto: self.is_auto.clean(cx),
}),
}
}
}
+impl Clean<TypeKind> for hir::def::DefKind {
+ fn clean(&self, _: &DocContext<'_>) -> TypeKind {
+ match *self {
+ hir::def::DefKind::Mod => TypeKind::Module,
+ hir::def::DefKind::Struct => TypeKind::Struct,
+ hir::def::DefKind::Union => TypeKind::Union,
+ hir::def::DefKind::Enum => TypeKind::Enum,
+ hir::def::DefKind::Trait => TypeKind::Trait,
+ hir::def::DefKind::TyAlias => TypeKind::Typedef,
+ hir::def::DefKind::ForeignTy => TypeKind::Foreign,
+ hir::def::DefKind::TraitAlias => TypeKind::TraitAlias,
+ hir::def::DefKind::Fn => TypeKind::Function,
+ hir::def::DefKind::Const => TypeKind::Const,
+ hir::def::DefKind::Static => TypeKind::Static,
+ hir::def::DefKind::Macro(_) => TypeKind::Macro,
+ _ => TypeKind::Foreign,
+ }
+ }
+}
+
impl Clean<Item> for hir::TraitItem<'_> {
fn clean(&self, cx: &DocContext<'_>) -> Item {
let inner = match self.kind {
hir::TraitItemKind::Const(ref ty, default) => {
AssocConstItem(ty.clean(cx), default.map(|e| print_const_expr(cx, e)))
}
- hir::TraitItemKind::Method(ref sig, hir::TraitMethod::Provided(body)) => {
+ hir::TraitItemKind::Fn(ref sig, hir::TraitFn::Provided(body)) => {
MethodItem((sig, &self.generics, body, None).clean(cx))
}
- hir::TraitItemKind::Method(ref sig, hir::TraitMethod::Required(ref names)) => {
+ hir::TraitItemKind::Fn(ref sig, hir::TraitFn::Required(ref names)) => {
let (generics, decl) = enter_impl_trait(cx, || {
(self.generics.clean(cx), (&*sig.decl, &names[..]).clean(cx))
});
hir::ImplItemKind::Const(ref ty, expr) => {
AssocConstItem(ty.clean(cx), Some(print_const_expr(cx, expr)))
}
- hir::ImplItemKind::Method(ref sig, body) => {
+ hir::ImplItemKind::Fn(ref sig, body) => {
MethodItem((sig, &self.generics, body, Some(self.defaultness)).clean(cx))
}
hir::ImplItemKind::TyAlias(ref ty) => {
pub decl: FnDecl,
pub header: hir::FnHeader,
pub defaultness: Option<hir::Defaultness>,
- pub all_types: Vec<Type>,
- pub ret_types: Vec<Type>,
+ pub all_types: Vec<(Type, TypeKind)>,
+ pub ret_types: Vec<(Type, TypeKind)>,
}
#[derive(Clone, Debug)]
pub header: hir::FnHeader,
pub decl: FnDecl,
pub generics: Generics,
- pub all_types: Vec<Type>,
- pub ret_types: Vec<Type>,
+ pub all_types: Vec<(Type, TypeKind)>,
+ pub ret_types: Vec<(Type, TypeKind)>,
}
#[derive(Clone, Debug)]
pub decl: FnDecl,
pub generics: Generics,
pub header: hir::FnHeader,
- pub all_types: Vec<Type>,
- pub ret_types: Vec<Type>,
+ pub all_types: Vec<(Type, TypeKind)>,
+ pub ret_types: Vec<(Type, TypeKind)>,
}
#[derive(Clone, PartialEq, Eq, Debug, Hash)]
pub items: Vec<Item>,
pub generics: Generics,
pub bounds: Vec<GenericBound>,
- pub is_spotlight: bool,
pub is_auto: bool,
}
Never,
}
-#[derive(Clone, Copy, Debug)]
+#[derive(Clone, PartialEq, Eq, Hash, Copy, Debug)]
pub enum TypeKind {
Enum,
Function,
let args: Vec<_> = substs
.iter()
.filter_map(|kind| match kind.unpack() {
- GenericArgKind::Lifetime(lt) => {
- lt.clean(cx).and_then(|lt| Some(GenericArg::Lifetime(lt)))
- }
+ GenericArgKind::Lifetime(lt) => lt.clean(cx).map(|lt| GenericArg::Lifetime(lt)),
GenericArgKind::Type(_) if skip_self => {
skip_self = false;
None
arg: &Type,
cx: &DocContext<'_>,
recurse: i32,
-) -> FxHashSet<Type> {
+) -> FxHashSet<(Type, TypeKind)> {
let arg_s = arg.print().to_string();
let mut res = FxHashSet::default();
if recurse >= 10 {
if !adds.is_empty() {
res.extend(adds);
} else if !ty.is_full_generic() {
- res.insert(ty);
+ if let Some(did) = ty.def_id() {
+ if let Some(kind) = cx.tcx.def_kind(did).clean(cx) {
+ res.insert((ty, kind));
+ }
+ }
}
}
}
if !adds.is_empty() {
res.extend(adds);
} else if !ty.is_full_generic() {
- res.insert(ty.clone());
+ if let Some(did) = ty.def_id() {
+ if let Some(kind) = cx.tcx.def_kind(did).clean(cx) {
+ res.insert((ty.clone(), kind));
+ }
+ }
}
}
}
}
} else {
- res.insert(arg.clone());
+ if let Some(did) = arg.def_id() {
+ if let Some(kind) = cx.tcx.def_kind(did).clean(cx) {
+ res.insert((arg.clone(), kind));
+ }
+ }
if let Some(gens) = arg.generics() {
for gen in gens.iter() {
if gen.is_full_generic() {
if !adds.is_empty() {
res.extend(adds);
}
- } else {
- res.insert(gen.clone());
+ } else if let Some(did) = gen.def_id() {
+ if let Some(kind) = cx.tcx.def_kind(did).clean(cx) {
+ res.insert((gen.clone(), kind));
+ }
}
}
}
generics: &Generics,
decl: &FnDecl,
cx: &DocContext<'_>,
-) -> (Vec<Type>, Vec<Type>) {
+) -> (Vec<(Type, TypeKind)>, Vec<(Type, TypeKind)>) {
let mut all_types = FxHashSet::default();
for arg in decl.inputs.values.iter() {
if arg.type_.is_self_type() {
if !args.is_empty() {
all_types.extend(args);
} else {
- all_types.insert(arg.type_.clone());
+ if let Some(did) = arg.type_.def_id() {
+ if let Some(kind) = cx.tcx.def_kind(did).clean(cx) {
+ all_types.insert((arg.type_.clone(), kind));
+ }
+ }
}
}
FnRetTy::Return(ref return_type) => {
let mut ret = get_real_types(generics, &return_type, cx, 0);
if ret.is_empty() {
- ret.insert(return_type.clone());
+ if let Some(did) = return_type.def_id() {
+ if let Some(kind) = cx.tcx.def_kind(did).clean(cx) {
+ ret.insert((return_type.clone(), kind));
+ }
+ }
}
ret.into_iter().collect()
}
use std::collections::BTreeMap;
+use std::convert::TryFrom;
use std::ffi::OsStr;
use std::fmt;
use std::path::PathBuf;
-use rustc::lint::Level;
-use rustc::session;
-use rustc::session::config::{
+use rustc_session::config::{self, parse_crate_types_from_list, parse_externs, CrateType};
+use rustc_session::config::{
build_codegen_options, build_debugging_options, get_cmd_lint_options, host_triple,
nightly_options,
};
-use rustc::session::config::{parse_crate_types_from_list, parse_externs, CrateType};
-use rustc::session::config::{CodegenOptions, DebuggingOptions, ErrorOutputType, Externs};
-use rustc::session::search_paths::SearchPath;
+use rustc_session::config::{CodegenOptions, DebuggingOptions, ErrorOutputType, Externs};
+use rustc_session::lint::Level;
+use rustc_session::search_paths::SearchPath;
use rustc_span::edition::{Edition, DEFAULT_EDITION};
use rustc_target::spec::TargetTriple;
use crate::passes::{self, Condition, DefaultPassOption};
use crate::theme;
+#[derive(Clone, Copy, PartialEq, Eq, Debug)]
+pub enum OutputFormat {
+ Json,
+ Html,
+}
+
+impl OutputFormat {
+ pub fn is_json(&self) -> bool {
+ match self {
+ OutputFormat::Json => true,
+ _ => false,
+ }
+ }
+}
+
+impl TryFrom<&str> for OutputFormat {
+ type Error = String;
+
+ fn try_from(value: &str) -> Result<Self, Self::Error> {
+ match value {
+ "json" => Ok(OutputFormat::Json),
+ "html" => Ok(OutputFormat::Html),
+ _ => Err(format!("unknown output format `{}`", value)),
+ }
+ }
+}
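The `TryFrom<&str>` impl added above gives rustdoc's option parsing a fallible string-to-enum conversion. A self-contained sketch of how that conversion behaves, with the enum and impl reproduced from the diff (only `main` is new):

```rust
use std::convert::TryFrom;

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum OutputFormat {
    Json,
    Html,
}

impl TryFrom<&str> for OutputFormat {
    type Error = String;

    fn try_from(value: &str) -> Result<Self, Self::Error> {
        match value {
            "json" => Ok(OutputFormat::Json),
            "html" => Ok(OutputFormat::Html),
            _ => Err(format!("unknown output format `{}`", value)),
        }
    }
}

fn main() {
    // Recognized values parse; anything else yields the error string
    // that the CLI handling later surfaces via `diag.struct_err`.
    assert_eq!(OutputFormat::try_from("json"), Ok(OutputFormat::Json));
    assert_eq!(OutputFormat::try_from("html"), Ok(OutputFormat::Html));
    assert!(OutputFormat::try_from("xml").is_err());
}
```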
+
/// Configuration options for rustdoc.
#[derive(Clone)]
pub struct Options {
pub crate_version: Option<String>,
/// Collected options specific to outputting final pages.
pub render_options: RenderOptions,
+ /// Output format for rendering (currently used only by the "show-coverage" option)
+ pub output_format: Option<OutputFormat>,
}
impl fmt::Debug for Options {
return Err(0);
}
- let color = session::config::parse_color(&matches);
- let (json_rendered, _artifacts) = session::config::parse_json(&matches);
- let error_format = session::config::parse_error_format(&matches, color, json_rendered);
+ let color = config::parse_color(&matches);
+ let (json_rendered, _artifacts) = config::parse_json(&matches);
+ let error_format = config::parse_error_format(&matches, color, json_rendered);
let codegen_options = build_codegen_options(matches, error_format);
let debugging_options = build_debugging_options(matches, error_format);
}
}
- match matches.opt_str("w").as_ref().map(|s| &**s) {
- Some("html") | None => {}
- Some(s) => {
- diag.struct_err(&format!("unknown output format: {}", s)).emit();
- return Err(1);
- }
- }
-
let index_page = matches.opt_str("index-page").map(|s| PathBuf::from(&s));
if let Some(ref index_page) = index_page {
if !index_page.is_file() {
}
};
+ let output_format = match matches.opt_str("output-format") {
+ Some(s) => match OutputFormat::try_from(s.as_str()) {
+ Ok(o) => {
+ if o.is_json() && !show_coverage {
+ diag.struct_err("json output format isn't supported for doc generation")
+ .emit();
+ return Err(1);
+ } else if !o.is_json() && show_coverage {
+ diag.struct_err(
+ "html output format isn't supported for the --show-coverage option",
+ )
+ .emit();
+ return Err(1);
+ }
+ Some(o)
+ }
+ Err(e) => {
+ diag.struct_err(&e).emit();
+ return Err(1);
+ }
+ },
+ None => None,
+ };
let crate_name = matches.opt_str("crate-name");
let proc_macro_crate = crate_types.contains(&CrateType::ProcMacro);
let playground_url = matches.opt_str("playground-url");
generate_search_filter,
generate_redirect_pages,
},
+ output_format,
})
}
for flag in deprecated_flags.iter() {
if matches.opt_present(flag) {
+ if *flag == "output-format" && matches.opt_present("show-coverage") {
+ continue;
+ }
let mut err =
diag.struct_warn(&format!("the '{}' flag is considered deprecated", flag));
err.warn(
use rustc::middle::cstore::CrateStore;
use rustc::middle::privacy::AccessLevels;
-use rustc::session::config::ErrorOutputType;
-use rustc::session::DiagnosticOutput;
-use rustc::session::{self, config};
use rustc::ty::{Ty, TyCtxt};
+use rustc_ast::ast::CRATE_NODE_ID;
+use rustc_attr as attr;
use rustc_data_structures::fx::{FxHashMap, FxHashSet};
use rustc_driver::abort_on_err;
+use rustc_errors::emitter::{Emitter, EmitterWriter};
+use rustc_errors::json::JsonEmitter;
use rustc_feature::UnstableFeatures;
use rustc_hir::def::Namespace::TypeNS;
use rustc_hir::def_id::{CrateNum, DefId, DefIndex, LOCAL_CRATE};
use rustc_hir::HirId;
use rustc_interface::interface;
use rustc_resolve as resolve;
+use rustc_session::config::ErrorOutputType;
use rustc_session::lint;
-
-use rustc_ast::ast::CRATE_NODE_ID;
-use rustc_attr as attr;
-use rustc_errors::emitter::{Emitter, EmitterWriter};
-use rustc_errors::json::JsonEmitter;
+use rustc_session::DiagnosticOutput;
+use rustc_session::{config, Session};
use rustc_span::source_map;
use rustc_span::symbol::sym;
use rustc_span::DUMMY_SP;
use crate::passes::{self, Condition::*, ConditionalPass};
-pub use rustc::session::config::{CodegenOptions, DebuggingOptions, Input, Options};
-pub use rustc::session::search_paths::SearchPath;
+pub use rustc_session::config::{CodegenOptions, DebuggingOptions, Input, Options};
+pub use rustc_session::search_paths::SearchPath;
pub type ExternalPaths = FxHashMap<DefId, (Vec<String>, clean::TypeKind)>;
}
impl<'tcx> DocContext<'tcx> {
- pub fn sess(&self) -> &session::Session {
+ pub fn sess(&self) -> &Session {
&self.tcx.sess
}
mut manual_passes,
display_warnings,
render_options,
+ output_format,
..
} = options;
let mut renderinfo = RenderInfo::default();
renderinfo.access_levels = access_levels;
+ renderinfo.output_format = output_format;
let mut ctxt = DocContext {
tcx,
let sender = self.errors.sender.clone().unwrap();
rayon::spawn(move || match fs::write(&path, &contents) {
Ok(_) => {
- sender
- .send(None)
- .expect(&format!("failed to send error on \"{}\"", path.display()));
+ sender.send(None).unwrap_or_else(|_| {
+ panic!("failed to send error on \"{}\"", path.display())
+ });
}
Err(e) => {
- sender
- .send(Some(format!("\"{}\": {}", path.display(), e)))
- .expect(&format!("failed to send non-error on \"{}\"", path.display()));
+ sender.send(Some(format!("\"{}\": {}", path.display(), e))).unwrap_or_else(
+ |_| panic!("failed to send non-error on \"{}\"", path.display()),
+ );
}
});
Ok(())
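The `expect(&format!(...))` to `unwrap_or_else(|_| panic!(...))` change above is about laziness: `expect` requires its message to be built eagerly on every call, even on success, while the closure (and its string formatting) only runs on the error path. A small stable-Rust illustration:

```rust
fn main() {
    let ok: Result<i32, String> = Ok(7);
    // The closure is never invoked for `Ok`, so no panic message
    // string is allocated on the success path.
    let v = ok.unwrap_or_else(|e| panic!("failed to send: {}", e));
    assert_eq!(v, 7);
}
```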
vis: &'hir hir::Visibility<'hir>,
) -> Module<'hir> {
Module {
- name: name,
+ name,
id: hir::CRATE_HIR_ID,
vis,
where_outer: rustc_span::DUMMY_SP,
Buffer { for_html: false, buffer: String::new() }
}
- crate fn is_empty(&self) -> bool {
- self.buffer.is_empty()
- }
-
crate fn into_inner(self) -> String {
self.buffer
}
- crate fn insert_str(&mut self, idx: usize, s: &str) {
- self.buffer.insert_str(idx, s);
- }
-
- crate fn push_str(&mut self, s: &str) {
- self.buffer.push_str(s);
- }
-
// Intended for consumption by write! and writeln! (std::fmt) but without
// the fmt::Result return type imposed by fmt::Write (and avoiding the trait
// import).
fn string<T: Display>(&mut self, text: T, klass: Class) -> io::Result<()>;
}
-// Implement `Writer` for anthing that can be written to, this just implements
+// Implement `Writer` for anything that can be written to, this just implements
// the default rustdoc behaviour.
impl<U: Write> Writer for U {
fn string<T: Display>(&mut self, text: T, klass: Class) -> io::Result<()> {
fn from(item: &'a clean::Item) -> ItemType {
let inner = match item.inner {
clean::StrippedItem(box ref item) => item,
- ref inner @ _ => inner,
+ ref inner => inner,
};
match *inner {
}
}
-pub const NAMESPACE_TYPE: &'static str = "t";
-pub const NAMESPACE_VALUE: &'static str = "v";
-pub const NAMESPACE_MACRO: &'static str = "m";
-pub const NAMESPACE_KEYWORD: &'static str = "k";
+pub const NAMESPACE_TYPE: &str = "t";
+pub const NAMESPACE_VALUE: &str = "v";
+pub const NAMESPACE_MACRO: &str = "m";
+pub const NAMESPACE_KEYWORD: &str = "k";
""
}
)),
- playground_button.as_ref().map(String::as_str),
+ playground_button.as_deref(),
Some((s1.as_str(), s2)),
));
Some(Event::Html(s.into()))
""
}
)),
- playground_button.as_ref().map(String::as_str),
+ playground_button.as_deref(),
None,
));
Some(Event::Html(s.into()))
type Item = String;
fn next(&mut self) -> Option<String> {
- let next_event = self.inner.next();
- if next_event.is_none() {
- return None;
- }
- let next_event = next_event.unwrap();
+ let next_event = self.inner.next()?;
let (ret, is_in) = match next_event {
Event::Start(Tag::Paragraph) => (None, 1),
Event::Start(Tag::Heading(_)) => (None, 1),
}
}
let mut s = String::with_capacity(md.len() * 3 / 2);
- let mut p = ParserWrapper { inner: Parser::new(md), is_in: 0, is_first: true };
- while let Some(t) = p.next() {
- if !t.is_empty() {
- s.push_str(&t);
- }
- }
+ let p = ParserWrapper { inner: Parser::new(md), is_in: 0, is_first: true };
+ p.filter(|t| !t.is_empty()).for_each(|i| s.push_str(&i));
s
}
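The loop-to-iterator change above replaces a manual `while let` over `ParserWrapper` with a `filter`/`for_each` chain. The same shape on a plain iterator of strings (the data here is illustrative):

```rust
fn main() {
    let tokens = vec!["alpha".to_string(), String::new(), "beta".to_string()];
    let mut s = String::with_capacity(16);
    // Drop empty items, append the rest, same as the rewritten summary loop.
    tokens.into_iter().filter(|t| !t.is_empty()).for_each(|t| s.push_str(&t));
    assert_eq!(s, "alphabeta");
}
```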
use rustc::middle::privacy::AccessLevels;
use rustc::middle::stability;
-use rustc_ast::ast;
use rustc_ast_pretty::pprust;
use rustc_data_structures::flock;
use rustc_data_structures::fx::{FxHashMap, FxHashSet};
use serde::ser::SerializeSeq;
use serde::{Serialize, Serializer};
-use crate::clean::{self, AttributesExt, Deprecation, GetDefId, SelfTy};
-use crate::config::RenderOptions;
+use crate::clean::{self, AttributesExt, Deprecation, GetDefId, SelfTy, TypeKind};
+use crate::config::{OutputFormat, RenderOptions};
use crate::docfs::{DocFS, ErrorStorage, PathError};
use crate::doctree;
use crate::html::escape::Escape;
pub deref_trait_did: Option<DefId>,
pub deref_mut_trait_did: Option<DefId>,
pub owned_box_did: Option<DefId>,
+ pub output_format: Option<OutputFormat>,
}
// Helper structs for rendering items/sidebars and carrying along contextual
/// A type used for the search index.
#[derive(Debug)]
-struct Type {
+struct RenderType {
+ ty: Option<DefId>,
+ idx: Option<usize>,
name: Option<String>,
- generics: Option<Vec<String>>,
+ generics: Option<Vec<Generic>>,
}
-impl Serialize for Type {
+impl Serialize for RenderType {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
if let Some(name) = &self.name {
let mut seq = serializer.serialize_seq(None)?;
- seq.serialize_element(&name)?;
+ if let Some(id) = self.idx {
+ seq.serialize_element(&id)?;
+ } else {
+ seq.serialize_element(&name)?;
+ }
if let Some(generics) = &self.generics {
seq.serialize_element(&generics)?;
}
}
}
+/// A type used for the search index.
+#[derive(Debug)]
+struct Generic {
+ name: String,
+ defid: Option<DefId>,
+ idx: Option<usize>,
+}
+
+impl Serialize for Generic {
+ fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
+ where
+ S: Serializer,
+ {
+ if let Some(id) = self.idx {
+ serializer.serialize_some(&id)
+ } else {
+ serializer.serialize_some(&self.name)
+ }
+ }
+}
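The `Serialize` impl for `Generic` above emits the interned index when one exists and falls back to the name otherwise, keeping the search index compact. A minimal model of that rule without serde (`serialize_generic` is an illustrative name, and the JSON is hand-rolled):

```rust
/// Emit the interned index when present, otherwise the quoted name,
/// mirroring the serialization rule used for search-index generics.
fn serialize_generic(name: &str, idx: Option<usize>) -> String {
    match idx {
        Some(i) => i.to_string(),
        // `{:?}` on a `str` produces a quoted, escaped string,
        // which is valid JSON for simple names.
        None => format!("{:?}", name),
    }
}

fn main() {
    assert_eq!(serialize_generic("Vec", Some(3)), "3");
    assert_eq!(serialize_generic("T", None), "\"T\"");
}
```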
+
/// Full type of functions/methods in the search index.
#[derive(Debug)]
struct IndexItemFunctionType {
- inputs: Vec<Type>,
- output: Option<Vec<Type>>,
+ inputs: Vec<TypeWithKind>,
+ output: Option<Vec<TypeWithKind>>,
}
impl Serialize for IndexItemFunctionType {
// If we couldn't figure out a type, just write `null`.
let mut iter = self.inputs.iter();
if match self.output {
- Some(ref output) => iter.chain(output.iter()).any(|ref i| i.name.is_none()),
- None => iter.any(|ref i| i.name.is_none()),
+ Some(ref output) => iter.chain(output.iter()).any(|ref i| i.ty.name.is_none()),
+ None => iter.any(|ref i| i.ty.name.is_none()),
} {
serializer.serialize_none()
} else {
}
}
+#[derive(Debug)]
+pub struct TypeWithKind {
+ ty: RenderType,
+ kind: TypeKind,
+}
+
+impl From<(RenderType, TypeKind)> for TypeWithKind {
+ fn from(x: (RenderType, TypeKind)) -> TypeWithKind {
+ TypeWithKind { ty: x.0, kind: x.1 }
+ }
+}
+
+impl Serialize for TypeWithKind {
+ fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
+ where
+ S: Serializer,
+ {
+ let mut seq = serializer.serialize_seq(None)?;
+ seq.serialize_element(&self.ty.name)?;
+ let x: ItemType = self.kind.into();
+ seq.serialize_element(&x)?;
+ seq.end()
+ }
+}
+
thread_local!(static CACHE_KEY: RefCell<Arc<Cache>> = Default::default());
thread_local!(pub static CURRENT_DEPTH: Cell<usize> = Cell::new(0));
<p>Version {}</p>\
</div>\
<a id='all-types' href='index.html'><p>Back to index</p></a>",
- crate_name, version
+ crate_name,
+ Escape(version),
)
} else {
String::new()
}
if self.shared.sort_modules_alphabetically {
- for (_, items) in &mut map {
+ for items in map.values_mut() {
items.sort();
}
}
let mut path = String::new();
- // We can safely ignore macros from other libraries
+ // We can safely ignore synthetic `SourceFile`s.
let file = match item.source.filename {
FileName::Real(ref path) => path,
_ => return None,
f.generics.print()
)
.len();
- write!(w, "{}<pre class='rust fn'>", render_spotlight_traits(it));
+ write!(w, "<pre class='rust fn'>");
render_attributes(w, it, false);
write!(
w,
let item_type = m.type_();
let id = cx.derive_id(format!("{}.{}", item_type, name));
let ns_id = cx.derive_id(format!("{}.{}", name, item_type.name_space()));
- write!(
- w,
- "<h3 id='{id}' class='method'>{extra}<code id='{ns_id}'>",
- extra = render_spotlight_traits(m),
- id = id,
- ns_id = ns_id
- );
+ write!(w, "<h3 id='{id}' class='method'><code id='{ns_id}'>", id = id, ns_id = ns_id);
render_assoc_item(w, m, AssocItemLink::Anchor(Some(&id)), ItemType::Impl);
write!(w, "</code>");
render_stability_since(w, m, t);
let name = it.name.as_ref().unwrap();
let ty = match it.type_() {
Typedef | AssocType => AssocType,
- s @ _ => s,
+ s => s,
};
let anchor = format!("#{}.{}", ty, name);
render_assoc_items(w, cx, it, it.def_id, AssocItemRender::All)
}
-fn render_attribute(attr: &ast::MetaItem) -> Option<String> {
- let path = pprust::path_to_string(&attr.path);
-
- if attr.is_word() {
- Some(path)
- } else if let Some(v) = attr.value_str() {
- Some(format!("{} = {:?}", path, v))
- } else if let Some(values) = attr.meta_item_list() {
- let display: Vec<_> = values
- .iter()
- .filter_map(|attr| attr.meta_item().and_then(|mi| render_attribute(mi)))
- .collect();
-
- if !display.is_empty() { Some(format!("{}({})", path, display.join(", "))) } else { None }
- } else {
- None
- }
-}
-
-const ATTRIBUTE_WHITELIST: &'static [Symbol] = &[
+const ATTRIBUTE_WHITELIST: &[Symbol] = &[
sym::export_name,
sym::lang,
sym::link_section,
if !ATTRIBUTE_WHITELIST.contains(&attr.name_or_empty()) {
continue;
}
- if let Some(s) = render_attribute(&attr.meta().unwrap()) {
- attrs.push_str(&format!("#[{}]\n", s));
- }
+
+ attrs.push_str(&pprust::attribute_to_string(&attr));
}
if !attrs.is_empty() {
write!(
let deref_impl =
traits.iter().find(|t| t.inner_impl().trait_.def_id() == c.deref_trait_did);
if let Some(impl_) = deref_impl {
- let has_deref_mut = traits
- .iter()
- .find(|t| t.inner_impl().trait_.def_id() == c.deref_mut_trait_did)
- .is_some();
+ let has_deref_mut =
+ traits.iter().any(|t| t.inner_impl().trait_.def_id() == c.deref_mut_trait_did);
render_deref_methods(w, cx, impl_, containing_item, has_deref_mut);
}
}
}
-fn render_spotlight_traits(item: &clean::Item) -> String {
- match item.inner {
- clean::FunctionItem(clean::Function { ref decl, .. })
- | clean::TyMethodItem(clean::TyMethod { ref decl, .. })
- | clean::MethodItem(clean::Method { ref decl, .. })
- | clean::ForeignFunctionItem(clean::Function { ref decl, .. }) => spotlight_decl(decl),
- _ => String::new(),
- }
-}
-
-fn spotlight_decl(decl: &clean::FnDecl) -> String {
- let mut out = Buffer::html();
- let mut trait_ = String::new();
-
- if let Some(did) = decl.output.def_id() {
- let c = cache();
- if let Some(impls) = c.impls.get(&did) {
- for i in impls {
- let impl_ = i.inner_impl();
- if impl_.trait_.def_id().map_or(false, |d| c.traits[&d].is_spotlight) {
- if out.is_empty() {
- out.push_str(&format!(
- "<h3 class=\"important\">Important traits for {}</h3>\
- <code class=\"content\">",
- impl_.for_.print()
- ));
- trait_.push_str(&impl_.for_.print().to_string());
- }
-
- //use the "where" class here to make it small
- out.push_str(&format!(
- "<span class=\"where fmt-newline\">{}</span>",
- impl_.print()
- ));
- let t_did = impl_.trait_.def_id().unwrap();
- for it in &impl_.items {
- if let clean::TypedefItem(ref tydef, _) = it.inner {
- out.push_str("<span class=\"where fmt-newline\"> ");
- assoc_type(
- &mut out,
- it,
- &[],
- Some(&tydef.type_),
- AssocItemLink::GotoSource(t_did, &FxHashSet::default()),
- "",
- );
- out.push_str(";</span>");
- }
- }
- }
- }
- }
- }
-
- if !out.is_empty() {
- out.insert_str(
- 0,
- &format!(
- "<div class=\"important-traits\"><div class='tooltip'>ⓘ\
- <span class='tooltiptext'>Important traits for {}</span></div>\
- <div class=\"content hidden\">",
- trait_
- ),
- );
- out.push_str("</code></div></div>");
- }
-
- out.into_inner()
-}
-
fn render_impl(
w: &mut Buffer,
cx: &Context,
use_absolute: Option<bool>,
is_on_foreign_type: bool,
show_default_items: bool,
- // This argument is used to reference same type with different pathes to avoid duplication
+ // This argument is used to reference the same type with different paths, to avoid duplication
// in documentation pages for traits with automatic implementations like "Send" and "Sync".
aliases: &[String],
) {
(true, " hidden")
};
match item.inner {
- clean::MethodItem(clean::Method { ref decl, .. })
- | clean::TyMethodItem(clean::TyMethod { ref decl, .. }) => {
+ clean::MethodItem(clean::Method { .. })
+ | clean::TyMethodItem(clean::TyMethod { .. }) => {
// Only render when the method is not static or we allow static methods
if render_method_item {
let id = cx.derive_id(format!("{}.{}", item_type, name));
let ns_id = cx.derive_id(format!("{}.{}", name, item_type.name_space()));
write!(w, "<h4 id='{}' class=\"{}{}\">", id, item_type, extra_class);
- write!(w, "{}", spotlight_decl(decl));
write!(w, "<code id='{}'>", ns_id);
render_assoc_item(w, item, link.anchor(&id), ItemType::Impl);
write!(w, "</code>");
) {
for trait_item in &t.items {
let n = trait_item.name.clone();
- if i.items.iter().find(|m| m.name == n).is_some() {
+ if i.items.iter().any(|m| m.name == n) {
continue;
}
let did = i.trait_.as_ref().unwrap().def_id().unwrap();
"<div class='block version'>\
<p>Version {}</p>\
</div>",
- version
+ Escape(version)
);
}
}
document(w, cx, it)
}
-crate const BASIC_KEYWORDS: &'static str = "rust, rustlang, rust-lang";
+crate const BASIC_KEYWORDS: &str = "rust, rustlang, rust-lang";
fn make_item_keywords(it: &clean::Item) -> String {
format!("{}, {}", BASIC_KEYWORDS, it.name.as_ref().unwrap())
use serde::Serialize;
use super::{plain_summary_line, shorten, Impl, IndexItem, IndexItemFunctionType, ItemType};
-use super::{RenderInfo, Type};
+use super::{Generic, RenderInfo, RenderType, TypeWithKind};
/// Indicates where an external crate can be found.
pub enum ExternalLocation {
deref_trait_did,
deref_mut_trait_did,
owned_box_did,
+ ..
} = renderinfo;
let external_paths =
let mut lastpathid = 0usize;
for item in search_index {
- item.parent_idx = item.parent.map(|defid| {
+ item.parent_idx = item.parent.and_then(|defid| {
if defid_to_pathid.contains_key(&defid) {
- *defid_to_pathid.get(&defid).expect("no pathid")
+ defid_to_pathid.get(&defid).copied()
} else {
let pathid = lastpathid;
defid_to_pathid.insert(defid, pathid);
lastpathid += 1;
- let &(ref fqp, short) = paths.get(&defid).unwrap();
- crate_paths.push((short, fqp.last().unwrap().clone()));
- pathid
+ if let Some(&(ref fqp, short)) = paths.get(&defid) {
+ crate_paths.push((short, fqp.last().unwrap().clone()));
+ Some(pathid)
+ } else {
+ None
+ }
}
});
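The `map` → `and_then` change above exists because the path lookup can now fail: with `map`, the closure must produce a value unconditionally, while `and_then` lets the closure report failure and flattens the result. A small sketch of that difference, with a made-up lookup table standing in for `paths`:

```rust
// A fallible lookup: returns None when the key is absent.
fn lookup(table: &[(u32, &'static str)], key: u32) -> Option<&'static str> {
    table.iter().find(|&&(k, _)| k == key).map(|&(_, v)| v)
}

fn main() {
    let paths = [(1, "std"), (2, "core")];
    // `and_then` collapses Option<Option<_>> into Option<_>, so a
    // missing entry simply yields None instead of forcing a dummy value.
    let missing: Option<u32> = Some(3);
    assert_eq!(missing.and_then(|id| lookup(&paths, id)), None);
    assert_eq!(Some(2).and_then(|id| lookup(&paths, id)), Some("core"));
    assert_eq!(None::<u32>.and_then(|id| lookup(&paths, id)), None);
    println!("ok");
}
```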
_ => return None,
};
- let inputs =
- all_types.iter().map(|arg| get_index_type(&arg)).filter(|a| a.name.is_some()).collect();
+ let inputs = all_types
+ .iter()
+ .map(|(ty, kind)| TypeWithKind::from((get_index_type(&ty), *kind)))
+ .filter(|a| a.ty.name.is_some())
+ .collect();
let output = ret_types
.iter()
- .map(|arg| get_index_type(&arg))
- .filter(|a| a.name.is_some())
+ .map(|(ty, kind)| TypeWithKind::from((get_index_type(&ty), *kind)))
+ .filter(|a| a.ty.name.is_some())
.collect::<Vec<_>>();
let output = if output.is_empty() { None } else { Some(output) };
Some(IndexItemFunctionType { inputs, output })
}
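The new call sites build `TypeWithKind::from((ty, kind))`, pairing each rendered type with the kind of item it belongs to. A hedged sketch of that pattern; the struct and field names below are illustrative only, not rustdoc's actual definitions:

```rust
// Pairing a type with its item kind via `From<(A, B)>`, so call sites
// can write `TypeWithKind::from((ty, kind))` inside iterator chains.
#[derive(Debug, PartialEq, Clone, Copy)]
enum ItemKind {
    Function,
    Method,
}

#[derive(Debug, PartialEq)]
struct TypeWithKind {
    name: &'static str,
    kind: ItemKind,
}

impl From<(&'static str, ItemKind)> for TypeWithKind {
    fn from((name, kind): (&'static str, ItemKind)) -> Self {
        TypeWithKind { name, kind }
    }
}

fn main() {
    let inputs: Vec<TypeWithKind> = [("usize", ItemKind::Function), ("str", ItemKind::Method)]
        .iter()
        .map(|&(n, k)| TypeWithKind::from((n, k)))
        .collect();
    assert_eq!(inputs[0], TypeWithKind { name: "usize", kind: ItemKind::Function });
    assert_eq!(inputs.len(), 2);
    println!("ok");
}
```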
-fn get_index_type(clean_type: &clean::Type) -> Type {
- let t = Type {
+fn get_index_type(clean_type: &clean::Type) -> RenderType {
+ let t = RenderType {
+ ty: clean_type.def_id(),
+ idx: None,
name: get_index_type_name(clean_type, true).map(|s| s.to_ascii_lowercase()),
generics: get_generics(clean_type),
};
}
}
-fn get_generics(clean_type: &clean::Type) -> Option<Vec<String>> {
+fn get_generics(clean_type: &clean::Type) -> Option<Vec<Generic>> {
clean_type.generics().and_then(|types| {
let r = types
.iter()
- .filter_map(|t| get_index_type_name(t, false))
- .map(|s| s.to_ascii_lowercase())
+ .filter_map(|t| {
+ get_index_type_name(t, false)
+ .map(|name| Generic { name: name.to_ascii_lowercase(), defid: t.def_id(), idx: None })
+ })
.collect::<Vec<_>>();
if r.is_empty() { None } else { Some(r) }
})
// If we're including source files, and we haven't seen this file yet,
// then we need to render it out to the filesystem.
if self.scx.include_sources
- // skip all invalid or macro spans
+ // skip all synthetic "files"
&& item.source.filename.is_real()
// skip non-local items
&& item.def_id.is_local()
function handleEscape(ev) {
var help = getHelpElement();
var search = getSearchElement();
- hideModal();
if (hasClass(help, "hidden") === false) {
displayHelp(false, ev, help);
} else if (hasClass(search, "hidden") === false) {
case "s":
case "S":
displayHelp(false, ev);
- hideModal();
ev.preventDefault();
focusSearchBar();
break;
case "?":
if (ev.shiftKey) {
- hideModal();
displayHelp(true, ev);
}
break;
}
function initSearch(rawSearchIndex) {
- var currentResults, index, searchIndex;
var MAX_LEV_DISTANCE = 3;
var MAX_RESULTS = 200;
var GENERICS_DATA = 1;
var NAME = 0;
var INPUTS_DATA = 0;
var OUTPUT_DATA = 1;
+ var NO_TYPE_FILTER = -1;
+ var currentResults, index, searchIndex;
var params = getQueryStringParams();
// Populate search bar with query string search term when provided,
return i;
}
}
- return -1;
+ return NO_TYPE_FILTER;
}
var valLower = query.query.toLowerCase(),
};
}
+ function getObjectFromId(id) {
+ if (typeof id === "number") {
+ return searchIndex[id];
+ }
+ return {'name': id};
+ }
+
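The new `getObjectFromId` helper exists because the compressed search index stores generics either as a numeric index into the item table or as a literal name. A Rust sketch of the same normalization, modeling the JS helper's behavior with an explicit enum (the names are illustrative):

```rust
// An entry is either an index into the item table or an inline name;
// `object_name` resolves both to a plain name, like `getObjectFromId`.
enum IdOrName {
    Id(usize),
    Name(&'static str),
}

fn object_name(index: &[&'static str], v: &IdOrName) -> &'static str {
    match v {
        IdOrName::Id(i) => index[*i],
        IdOrName::Name(n) => n,
    }
}

fn main() {
    let index = ["vec", "string"];
    assert_eq!(object_name(&index, &IdOrName::Id(1)), "string");
    assert_eq!(object_name(&index, &IdOrName::Name("hashmap")), "hashmap");
    println!("ok");
}
```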
function checkGenerics(obj, val) {
// The names match, but we need to be sure that all the generics
// match as well.
for (var y = 0; y < vlength; ++y) {
var lev = { pos: -1, lev: MAX_LEV_DISTANCE + 1};
var elength = elems.length;
+ var firstGeneric = getObjectFromId(val.generics[y]).name;
for (var x = 0; x < elength; ++x) {
- var tmp_lev = levenshtein(elems[x], val.generics[y]);
+ var tmp_lev = levenshtein(getObjectFromId(elems[x]).name,
+ firstGeneric);
if (tmp_lev < lev.lev) {
lev.lev = tmp_lev;
lev.pos = x;
for (var y = 0; allFound === true && y < val.generics.length; ++y) {
allFound = false;
+ var firstGeneric = getObjectFromId(val.generics[y]).name;
for (x = 0; allFound === false && x < elems.length; ++x) {
- allFound = elems[x] === val.generics[y];
+ allFound = getObjectFromId(elems[x]).name === firstGeneric;
}
if (allFound === true) {
elems.splice(x - 1, 1);
return lev_distance + 1;
}
- function findArg(obj, val, literalSearch) {
+ function findArg(obj, val, literalSearch, typeFilter) {
var lev_distance = MAX_LEV_DISTANCE + 1;
- if (obj && obj.type && obj.type[INPUTS_DATA] &&
- obj.type[INPUTS_DATA].length > 0) {
+ if (obj && obj.type && obj.type[INPUTS_DATA] && obj.type[INPUTS_DATA].length > 0) {
var length = obj.type[INPUTS_DATA].length;
for (var i = 0; i < length; i++) {
- var tmp = checkType(obj.type[INPUTS_DATA][i], val, literalSearch);
- if (literalSearch === true && tmp === true) {
- return true;
+ var tmp = obj.type[INPUTS_DATA][i];
+ if (typePassesFilter(typeFilter, tmp[1]) === false) {
+ continue;
+ }
+ tmp = checkType(tmp, val, literalSearch);
+ if (literalSearch === true) {
+ if (tmp === true) {
+ return true;
+ }
+ continue;
}
lev_distance = Math.min(tmp, lev_distance);
if (lev_distance === 0) {
return literalSearch === true ? false : lev_distance;
}
- function checkReturned(obj, val, literalSearch) {
+ function checkReturned(obj, val, literalSearch, typeFilter) {
var lev_distance = MAX_LEV_DISTANCE + 1;
if (obj && obj.type && obj.type.length > OUTPUT_DATA) {
var ret = obj.type[OUTPUT_DATA];
- if (!obj.type[OUTPUT_DATA].length) {
+ if (typeof ret[0] === "string") {
ret = [ret];
}
for (var x = 0; x < ret.length; ++x) {
- var r = ret[x];
- if (typeof r === "string") {
- r = [r];
+ var tmp = ret[x];
+ if (typePassesFilter(typeFilter, tmp[1]) === false) {
+ continue;
}
- var tmp = checkType(r, val, literalSearch);
+ tmp = checkType(tmp, val, literalSearch);
if (literalSearch === true) {
if (tmp === true) {
return true;
function typePassesFilter(filter, type) {
// No filter
- if (filter < 0) return true;
+ if (filter <= NO_TYPE_FILTER) return true;
// Exact match
if (filter === type) return true;
var name = itemTypes[type];
switch (itemTypes[filter]) {
case "constant":
- return (name == "associatedconstant");
+ return name === "associatedconstant";
case "fn":
- return (name == "method" || name == "tymethod");
+ return name === "method" || name === "tymethod";
case "type":
- return (name == "primitive" || name == "keyword");
+ return name === "primitive" || name === "associatedtype";
+ case "trait":
+ return name === "traitalias";
}
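The updated `typePassesFilter` switch accepts exact matches plus a few aliases per filter category. A small Rust sketch of the same dispatch shape (categories trimmed for brevity; this mirrors the logic, not rustdoc's actual code):

```rust
// A `type:` style filter passes when there is no filter, an exact
// category match, or a known alias of the filtered category.
fn passes_filter(filter: Option<&str>, item: &str) -> bool {
    match filter {
        None => true,                 // no filter set
        Some(f) if f == item => true, // exact match
        Some("fn") => matches!(item, "method" | "tymethod"),
        Some("type") => matches!(item, "primitive" | "associatedtype"),
        Some("trait") => item == "traitalias",
        _ => false,
    }
}

fn main() {
    assert!(passes_filter(None, "struct"));
    assert!(passes_filter(Some("fn"), "tymethod"));
    assert!(passes_filter(Some("trait"), "traitalias"));
    assert!(!passes_filter(Some("trait"), "struct"));
    println!("ok");
}
```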
// No match
if (filterCrates !== undefined && searchIndex[i].crate !== filterCrates) {
continue;
}
- in_args = findArg(searchIndex[i], val, true);
- returned = checkReturned(searchIndex[i], val, true);
+ in_args = findArg(searchIndex[i], val, true, typeFilter);
+ returned = checkReturned(searchIndex[i], val, true, typeFilter);
ty = searchIndex[i];
fullId = generateId(ty);
- if (searchWords[i] === val.name) {
- // filter type: ... queries
- if (typePassesFilter(typeFilter, searchIndex[i].ty) &&
- results[fullId] === undefined)
- {
- results[fullId] = {id: i, index: -1};
- }
- } else if ((in_args === true || returned === true) &&
- typePassesFilter(typeFilter, searchIndex[i].ty)) {
- if (in_args === true || returned === true) {
- if (in_args === true) {
- results_in_args[fullId] = {
- id: i,
- index: -1,
- dontValidate: true,
- };
- }
- if (returned === true) {
- results_returned[fullId] = {
- id: i,
- index: -1,
- dontValidate: true,
- };
- }
- } else {
- results[fullId] = {
- id: i,
- index: -1,
- dontValidate: true,
- };
- }
+ if (searchWords[i] === val.name
+ && typePassesFilter(typeFilter, searchIndex[i].ty)
+ && results[fullId] === undefined) {
+ results[fullId] = {
+ id: i,
+ index: -1,
+ dontValidate: true,
+ };
+ }
+ if (in_args === true && results_in_args[fullId] === undefined) {
+ results_in_args[fullId] = {
+ id: i,
+ index: -1,
+ dontValidate: true,
+ };
+ }
+ if (returned === true && results_returned[fullId] === undefined) {
+ results_returned[fullId] = {
+ id: i,
+ index: -1,
+ dontValidate: true,
+ };
}
}
query.inputs = [val];
// allow searching for void (no output) functions as well
var typeOutput = type.length > OUTPUT_DATA ? type[OUTPUT_DATA].name : "";
- returned = checkReturned(ty, output, true);
+ returned = checkReturned(ty, output, true, NO_TYPE_FILTER);
if (output.name === "*" || returned === true) {
in_args = false;
var is_module = false;
lev += 1;
}
}
- if ((in_args = findArg(ty, valGenerics)) <= MAX_LEV_DISTANCE) {
- if (typePassesFilter(typeFilter, ty.ty) === false) {
- in_args = MAX_LEV_DISTANCE + 1;
- }
- }
- if ((returned = checkReturned(ty, valGenerics)) <= MAX_LEV_DISTANCE) {
- if (typePassesFilter(typeFilter, ty.ty) === false) {
- returned = MAX_LEV_DISTANCE + 1;
- }
- }
+ in_args = findArg(ty, valGenerics, false, typeFilter);
+ returned = checkReturned(ty, valGenerics, false, typeFilter);
lev += lev_add;
if (lev > 0 && val.length > 3 && searchWords[j].indexOf(val) > -1) {
lineNumbersFunc(e);
});
- function showModal(content) {
- var modal = document.createElement("div");
- modal.id = "important";
- addClass(modal, "modal");
- modal.innerHTML = "<div class=\"modal-content\"><div class=\"close\" id=\"modal-close\">✕" +
- "</div><div class=\"whiter\"></div><span class=\"docblock\">" + content +
- "</span></div>";
- document.getElementsByTagName("body")[0].appendChild(modal);
- document.getElementById("modal-close").onclick = hideModal;
- modal.onclick = hideModal;
- }
-
- function hideModal() {
- var modal = document.getElementById("important");
- if (modal) {
- modal.parentNode.removeChild(modal);
- }
- }
-
- onEachLazy(document.getElementsByClassName("important-traits"), function(e) {
- e.onclick = function() {
- showModal(e.lastElementChild.innerHTML);
- };
- });
-
// In the search display, allows to switch between tabs.
function printTab(nb) {
if (nb === 0 || nb === 1 || nb === 2) {
border-radius: 3px;
padding: 0 0.1em;
}
-.docblock pre code, .docblock-short pre code, .docblock code.spotlight {
+.docblock pre code, .docblock-short pre code {
padding: 0;
}
-.docblock code.spotlight :last-child {
- padding-bottom: 0.6em;
-}
pre {
padding: 14px;
}
font-size: 0.8em;
}
-.content .methods > div:not(.important-traits) {
+.content .methods > div {
margin-left: 40px;
margin-bottom: 15px;
}
.information {
position: absolute;
- left: -20px;
+ left: -25px;
margin-top: 7px;
z-index: 1;
}
width: 120px;
display: none;
text-align: center;
- padding: 5px 3px;
+ padding: 5px 3px 3px 3px;
border-radius: 6px;
margin-left: 5px;
top: -5px;
left: 105%;
z-index: 10;
+ font-size: 16px;
}
.tooltip:hover .tooltiptext {
content: " ";
position: absolute;
top: 50%;
- left: 11px;
+ left: 16px;
margin-top: -5px;
border-width: 5px;
border-style: solid;
}
-.important-traits .tooltip .tooltiptext {
+.tooltip.compile_fail, .tooltip.ignore {
+ font-weight: bold;
+ font-size: 20px;
+}
+
+.tooltip .tooltiptext {
border: 1px solid;
+ font-weight: normal;
}
pre.rust {
font-size: 16px;
}
-.important-traits {
- cursor: pointer;
- z-index: 2;
-}
-
-h4 > .important-traits {
- position: absolute;
- left: -44px;
- top: 2px;
-}
-
#all-types {
text-align: center;
border: 1px solid;
z-index: 1;
}
- h4 > .important-traits {
- position: absolute;
- left: -22px;
- top: 24px;
- }
-
#titles > div > div.count {
float: left;
width: 100%;
}
}
-.modal {
- position: fixed;
- width: 100vw;
- height: 100vh;
- z-index: 10000;
- top: 0;
- left: 0;
-}
-
-.modal-content {
- display: block;
- max-width: 60%;
- min-width: 200px;
- padding: 8px;
- top: 40%;
- position: absolute;
- left: 50%;
- transform: translate(-50%, -40%);
- border: 1px solid;
- border-radius: 4px;
- border-top-right-radius: 0;
-}
-
-.modal-content > .docblock {
- margin: 0;
-}
-
h3.important {
margin: 0;
margin-bottom: 13px;
font-size: 19px;
}
-.modal-content > .docblock > code.content {
- margin: 0;
- padding: 0;
- font-size: 20px;
-}
-
-.modal-content > .close {
- position: absolute;
- font-weight: 900;
- right: -25px;
- top: -1px;
- font-size: 18px;
- width: 25px;
- padding-right: 2px;
- border-top-right-radius: 5px;
- border-bottom-right-radius: 5px;
- text-align: center;
- border: 1px solid;
- border-right: 0;
- cursor: pointer;
-}
-
-.modal-content > .whiter {
- height: 25px;
- position: absolute;
- width: 3px;
- right: -2px;
- top: 0px;
-}
-
-#main > div.important-traits {
- position: absolute;
- left: -24px;
- margin-top: 16px;
-}
-
-.content > .methods > .method > div.important-traits {
- position: absolute;
- font-weight: 400;
- left: -42px;
- margin-top: 2px;
-}
-
kbd {
display: inline-block;
padding: 3px 5px;
}
pre.compile_fail {
- border-left: 2px solid rgba(255,0,0,.6);
+ border-left: 2px solid rgba(255,0,0,.8);
}
pre.compile_fail:hover, .information:hover + pre.compile_fail {
}
.tooltip.compile_fail {
- color: rgba(255,0,0,.6);
+ color: rgba(255,0,0,.8);
}
.information > .compile_fail:hover {
}
.information > .ignore:hover {
- color: rgba(255,142,0,1);
+ color: #ff9200;
}
.search-failed a {
}
.tooltip .tooltiptext {
- background-color: black;
+ background-color: #000;
color: #fff;
+ border-color: #000;
}
.tooltip .tooltiptext::after {
border-color: transparent black transparent transparent;
}
-.important-traits .tooltip .tooltiptext {
- background-color: white;
- color: black;
- border-color: black;
-}
-
#titles > div:not(.selected) {
background-color: #252525;
border-top-color: #252525;
color: #888;
}
-.modal {
- background-color: rgba(0,0,0,0.3);
-}
-
-.modal-content {
- background-color: #272727;
- border-color: #999;
-}
-
-.modal-content > .close {
- background-color: #272727;
- border-color: #999;
-}
-
-.modal-content > .close:hover {
- background-color: #ff1f1f;
- color: white;
-}
-
-.modal-content > .whiter {
- background-color: #272727;
-}
-
-.modal-content > .close:hover + .whiter {
- background-color: #ff1f1f;
-}
-
@media (max-width: 700px) {
.sidebar-menu {
background-color: #505050;
}
pre.compile_fail {
- border-left: 2px solid rgba(255,0,0,.4);
+ border-left: 2px solid rgba(255,0,0,.5);
}
pre.compile_fail:hover, .information:hover + pre.compile_fail {
}
pre.ignore {
- border-left: 2px solid rgba(255,142,0,.4);
+ border-left: 2px solid rgba(255,142,0,.6);
}
pre.ignore:hover, .information:hover + pre.ignore {
}
.tooltip.compile_fail {
- color: rgba(255,0,0,.3);
+ color: rgba(255,0,0,.5);
}
.information > .compile_fail:hover {
}
.tooltip.ignore {
- color: rgba(255,142,0,.3);
+ color: rgba(255,142,0,.6);
}
.information > .ignore:hover {
- color: rgba(255,142,0,1);
+ color: #ff9200;
}
.search-failed a {
}
.tooltip .tooltiptext {
- background-color: black;
+ background-color: #000;
color: #fff;
}
border-color: transparent black transparent transparent;
}
-.important-traits .tooltip .tooltiptext {
- background-color: white;
- color: black;
- border-color: black;
-}
-
#titles > div:not(.selected) {
background-color: #e6e6e6;
border-top-color: #e6e6e6;
color: #888;
}
-.modal {
- background-color: rgba(0,0,0,0.3);
-}
-
-.modal-content {
- background-color: #eee;
- border-color: #999;
-}
-
-.modal-content > .close {
- background-color: #eee;
- border-color: #999;
-}
-
-.modal-content > .close:hover {
- background-color: #ff1f1f;
- color: white;
-}
-
-.modal-content > .whiter {
- background-color: #eee;
-}
-
-.modal-content > .close:hover + .whiter {
- background-color: #ff1f1f;
-}
-
@media (max-width: 700px) {
.sidebar-menu {
background-color: #F1F1F1;
//! directly written to a `Write` handle.
/// The file contents of the main `rustdoc.css` file, responsible for the core layout of the page.
-pub static RUSTDOC_CSS: &'static str = include_str!("static/rustdoc.css");
+pub static RUSTDOC_CSS: &str = include_str!("static/rustdoc.css");
/// The file contents of `settings.css`, responsible for the items on the settings page.
-pub static SETTINGS_CSS: &'static str = include_str!("static/settings.css");
+pub static SETTINGS_CSS: &str = include_str!("static/settings.css");
/// The file contents of the `noscript.css` file, used in case JS isn't supported or is disabled.
-pub static NOSCRIPT_CSS: &'static str = include_str!("static/noscript.css");
+pub static NOSCRIPT_CSS: &str = include_str!("static/noscript.css");
/// The file contents of `normalize.css`, included to even out standard elements between browser
/// implementations.
-pub static NORMALIZE_CSS: &'static str = include_str!("static/normalize.css");
+pub static NORMALIZE_CSS: &str = include_str!("static/normalize.css");
/// The file contents of `main.js`, which contains the core JavaScript used on documentation pages,
/// including search behavior and docblock folding, among others.
-pub static MAIN_JS: &'static str = include_str!("static/main.js");
+pub static MAIN_JS: &str = include_str!("static/main.js");
/// The file contents of `settings.js`, which contains the JavaScript used to handle the settings
/// page.
-pub static SETTINGS_JS: &'static str = include_str!("static/settings.js");
+pub static SETTINGS_JS: &str = include_str!("static/settings.js");
/// The file contents of `storage.js`, which contains functionality related to browser Local
/// Storage, used to store documentation settings.
-pub static STORAGE_JS: &'static str = include_str!("static/storage.js");
+pub static STORAGE_JS: &str = include_str!("static/storage.js");
/// The file contents of `brush.svg`, the icon used for the theme-switch button.
-pub static BRUSH_SVG: &'static [u8] = include_bytes!("static/brush.svg");
+pub static BRUSH_SVG: &[u8] = include_bytes!("static/brush.svg");
/// The file contents of `wheel.svg`, the icon used for the settings button.
-pub static WHEEL_SVG: &'static [u8] = include_bytes!("static/wheel.svg");
+pub static WHEEL_SVG: &[u8] = include_bytes!("static/wheel.svg");
/// The file contents of `down-arrow.svg`, the icon used for the crate choice combobox.
-pub static DOWN_ARROW_SVG: &'static [u8] = include_bytes!("static/down-arrow.svg");
+pub static DOWN_ARROW_SVG: &[u8] = include_bytes!("static/down-arrow.svg");
/// The contents of `COPYRIGHT.txt`, the license listing for files distributed with documentation
/// output.
-pub static COPYRIGHT: &'static [u8] = include_bytes!("static/COPYRIGHT.txt");
+pub static COPYRIGHT: &[u8] = include_bytes!("static/COPYRIGHT.txt");
/// The contents of `LICENSE-APACHE.txt`, the text of the Apache License, version 2.0.
-pub static LICENSE_APACHE: &'static [u8] = include_bytes!("static/LICENSE-APACHE.txt");
+pub static LICENSE_APACHE: &[u8] = include_bytes!("static/LICENSE-APACHE.txt");
/// The contents of `LICENSE-MIT.txt`, the text of the MIT License.
-pub static LICENSE_MIT: &'static [u8] = include_bytes!("static/LICENSE-MIT.txt");
+pub static LICENSE_MIT: &[u8] = include_bytes!("static/LICENSE-MIT.txt");
/// The contents of `rust-logo.png`, the default icon of the documentation.
-pub static RUST_LOGO: &'static [u8] = include_bytes!("static/rust-logo.png");
+pub static RUST_LOGO: &[u8] = include_bytes!("static/rust-logo.png");
/// The contents of `favicon.ico`, the default favicon of the documentation.
-pub static RUST_FAVICON: &'static [u8] = include_bytes!("static/favicon.ico");
+pub static RUST_FAVICON: &[u8] = include_bytes!("static/favicon.ico");
/// The built-in themes given to every documentation site.
pub mod themes {
/// The "light" theme, selected by default when no setting is available. Used as the basis for
/// the `--check-theme` functionality.
- pub static LIGHT: &'static str = include_str!("static/themes/light.css");
+ pub static LIGHT: &str = include_str!("static/themes/light.css");
/// The "dark" theme.
- pub static DARK: &'static str = include_str!("static/themes/dark.css");
+ pub static DARK: &str = include_str!("static/themes/dark.css");
}
/// Files related to the Fira Sans font.
pub mod fira_sans {
/// The file `FiraSans-Regular.woff`, the Regular variant of the Fira Sans font.
- pub static REGULAR: &'static [u8] = include_bytes!("static/FiraSans-Regular.woff");
+ pub static REGULAR: &[u8] = include_bytes!("static/FiraSans-Regular.woff");
/// The file `FiraSans-Medium.woff`, the Medium variant of the Fira Sans font.
- pub static MEDIUM: &'static [u8] = include_bytes!("static/FiraSans-Medium.woff");
+ pub static MEDIUM: &[u8] = include_bytes!("static/FiraSans-Medium.woff");
/// The file `FiraSans-LICENSE.txt`, the license text for the Fira Sans font.
- pub static LICENSE: &'static [u8] = include_bytes!("static/FiraSans-LICENSE.txt");
+ pub static LICENSE: &[u8] = include_bytes!("static/FiraSans-LICENSE.txt");
}
/// Files related to the Source Serif Pro font.
pub mod source_serif_pro {
/// The file `SourceSerifPro-Regular.ttf.woff`, the Regular variant of the Source Serif Pro
/// font.
- pub static REGULAR: &'static [u8] = include_bytes!("static/SourceSerifPro-Regular.ttf.woff");
+ pub static REGULAR: &[u8] = include_bytes!("static/SourceSerifPro-Regular.ttf.woff");
/// The file `SourceSerifPro-Bold.ttf.woff`, the Bold variant of the Source Serif Pro font.
- pub static BOLD: &'static [u8] = include_bytes!("static/SourceSerifPro-Bold.ttf.woff");
+ pub static BOLD: &[u8] = include_bytes!("static/SourceSerifPro-Bold.ttf.woff");
/// The file `SourceSerifPro-It.ttf.woff`, the Italic variant of the Source Serif Pro font.
- pub static ITALIC: &'static [u8] = include_bytes!("static/SourceSerifPro-It.ttf.woff");
+ pub static ITALIC: &[u8] = include_bytes!("static/SourceSerifPro-It.ttf.woff");
/// The file `SourceSerifPro-LICENSE.txt`, the license text for the Source Serif Pro font.
- pub static LICENSE: &'static [u8] = include_bytes!("static/SourceSerifPro-LICENSE.md");
+ pub static LICENSE: &[u8] = include_bytes!("static/SourceSerifPro-LICENSE.md");
}
/// Files related to the Source Code Pro font.
pub mod source_code_pro {
/// The file `SourceCodePro-Regular.woff`, the Regular variant of the Source Code Pro font.
- pub static REGULAR: &'static [u8] = include_bytes!("static/SourceCodePro-Regular.woff");
+ pub static REGULAR: &[u8] = include_bytes!("static/SourceCodePro-Regular.woff");
/// The file `SourceCodePro-Semibold.woff`, the Semibold variant of the Source Code Pro font.
- pub static SEMIBOLD: &'static [u8] = include_bytes!("static/SourceCodePro-Semibold.woff");
+ pub static SEMIBOLD: &[u8] = include_bytes!("static/SourceCodePro-Semibold.woff");
/// The file `SourceCodePro-LICENSE.txt`, the license text of the Source Code Pro font.
- pub static LICENSE: &'static [u8] = include_bytes!("static/SourceCodePro-LICENSE.txt");
+ pub static LICENSE: &[u8] = include_bytes!("static/SourceCodePro-LICENSE.txt");
}
/// Files related to the sidebar in rustdoc sources.
pub mod sidebar {
/// File script to handle sidebar.
- pub static SOURCE_SCRIPT: &'static str = include_str!("static/source-script.js");
+ pub static SOURCE_SCRIPT: &str = include_str!("static/source-script.js");
}
extern crate rustc_session;
extern crate rustc_span as rustc_span;
extern crate rustc_target;
+extern crate rustc_trait_selection;
extern crate rustc_typeck;
extern crate test as testing;
#[macro_use]
use std::panic;
use std::process;
-use rustc::session::config::{make_crate_type_option, ErrorOutputType, RustcOptGroup};
-use rustc::session::{early_error, early_warn};
+use rustc_session::config::{make_crate_type_option, ErrorOutputType, RustcOptGroup};
+use rustc_session::{early_error, early_warn};
#[macro_use]
mod externalfiles;
use crate::clean;
+use crate::config::OutputFormat;
use crate::core::DocContext;
use crate::fold::{self, DocFolder};
use crate::passes::Pass;
use rustc_ast::attr;
use rustc_span::symbol::sym;
use rustc_span::FileName;
+use serde::Serialize;
+use serde_json;
use std::collections::BTreeMap;
use std::ops;
description: "counts the number of items with and without documentation",
};
-fn calculate_doc_coverage(krate: clean::Crate, _: &DocContext<'_>) -> clean::Crate {
- let mut calc = CoverageCalculator::default();
+fn calculate_doc_coverage(krate: clean::Crate, ctx: &DocContext<'_>) -> clean::Crate {
+ let mut calc = CoverageCalculator::new();
let krate = calc.fold_crate(krate);
- calc.print_results();
+ calc.print_results(ctx.renderinfo.borrow().output_format);
krate
}
-#[derive(Default, Copy, Clone)]
+#[derive(Default, Copy, Clone, Serialize)]
struct ItemCount {
total: u64,
with_docs: u64,
}
}
-#[derive(Default)]
struct CoverageCalculator {
items: BTreeMap<FileName, ItemCount>,
}
+fn limit_filename_len(filename: String) -> String {
+ let nb_chars = filename.chars().count();
+ if nb_chars > 35 {
+ "...".to_string()
+ + &filename[filename.char_indices().nth(nb_chars - 32).map(|x| x.0).unwrap_or(0)..]
+ } else {
+ filename
+ }
+}
+
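The new `limit_filename_len` truncates on character boundaries via `char_indices` rather than on raw byte offsets, so multi-byte filenames can no longer cause a panic from slicing mid-character. The same technique in a standalone sketch (function name is illustrative):

```rust
// Keep the last `keep` characters of a string, prefixed with "...",
// without ever slicing inside a multi-byte UTF-8 character.
fn shorten_tail(s: &str, keep: usize) -> String {
    let nb_chars = s.chars().count();
    if nb_chars > keep {
        // Byte offset of the first character we keep.
        let start = s.char_indices().nth(nb_chars - keep).map(|(i, _)| i).unwrap_or(0);
        format!("...{}", &s[start..])
    } else {
        s.to_string()
    }
}

fn main() {
    assert_eq!(shorten_tail("abcdef", 3), "...def");
    assert_eq!(shorten_tail("héllo", 10), "héllo");
    // A byte-based slice like &s[s.len() - 2..] could panic here,
    // because every 'é' is two bytes wide.
    assert_eq!(shorten_tail("ééééé", 2), "...éé");
    println!("ok");
}
```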
impl CoverageCalculator {
- fn print_results(&self) {
+ fn new() -> CoverageCalculator {
+ CoverageCalculator { items: Default::default() }
+ }
+
+ fn to_json(&self) -> String {
+ serde_json::to_string(
+ &self
+ .items
+ .iter()
+ .map(|(k, v)| (k.to_string(), v))
+ .collect::<BTreeMap<String, &ItemCount>>(),
+ )
+ .expect("failed to convert JSON data to string")
+ }
+
+ fn print_results(&self, output_format: Option<OutputFormat>) {
+ if output_format.map(|o| o.is_json()).unwrap_or(false) {
+ println!("{}", self.to_json());
+ return;
+ }
let mut total = ItemCount::default();
fn print_table_line() {
for (file, &count) in &self.items {
if let Some(percentage) = count.percentage() {
- let mut name = file.to_string();
- // if a filename is too long, shorten it so we don't blow out the table
- // FIXME(misdreavus): this needs to count graphemes, and probably also track
- // double-wide characters...
- if name.len() > 35 {
- name = "...".to_string() + &name[name.len() - 32..];
- }
-
- print_table_record(&name, count, percentage);
+ print_table_record(&limit_filename_len(file.to_string()), count, percentage);
total += count;
}
-use rustc::lint;
use rustc::ty;
use rustc_ast::ast::{self, Ident};
use rustc_errors::Applicability;
};
use rustc_hir::def_id::DefId;
use rustc_resolve::ParentScope;
+use rustc_session::lint;
use rustc_span::symbol::Symbol;
use rustc_span::DUMMY_SP;
// In case this is a trait item, skip the
// early return and try looking for the trait.
let value = match res {
- Res::Def(DefKind::Method, _) | Res::Def(DefKind::AssocConst, _) => true,
+ Res::Def(DefKind::AssocFn, _) | Res::Def(DefKind::AssocConst, _) => true,
Res::Def(DefKind::AssocTy, _) => false,
Res::Def(DefKind::Variant, _) => {
return handle_variant(cx, res, extra_fragment);
let parent_node = self.cx.as_local_hir_id(item.def_id).and_then(|hir_id| {
// FIXME: this fails hard for impls in non-module scope, but is necessary for the
// current `resolve()` implementation.
- match self.cx.as_local_hir_id(self.cx.tcx.parent_module(hir_id)).unwrap() {
+ match self.cx.as_local_hir_id(self.cx.tcx.parent_module(hir_id).to_def_id()).unwrap() {
id if id != hir_id => Some(id),
_ => None,
}
for (res, ns) in candidates {
let (action, mut suggestion) = match res {
- Res::Def(DefKind::Method, _) | Res::Def(DefKind::Fn, _) => {
+ Res::Def(DefKind::AssocFn, _) | Res::Def(DefKind::Fn, _) => {
("add parentheses", format!("{}()", path_str))
}
Res::Def(DefKind::Macro(..), _) => {
//! Contains information about "passes", used to modify crate information during the documentation
//! process.
-use rustc::lint;
use rustc::middle::privacy::AccessLevels;
use rustc_hir::def_id::{DefId, DefIdSet};
+use rustc_session::lint;
use rustc_span::{InnerSpan, Span, DUMMY_SP};
use std::mem;
use std::ops::Range;
use rustc::hir::map::Map;
-use rustc::session::{self, config, DiagnosticOutput};
use rustc::util::common::ErrorReported;
use rustc_ast::ast;
use rustc_ast::with_globals;
use rustc_hir as hir;
use rustc_hir::intravisit;
use rustc_interface::interface;
+use rustc_session::{self, config, DiagnosticOutput, Session};
use rustc_span::edition::Edition;
use rustc_span::source_map::SourceMap;
use rustc_span::symbol::sym;
cg: options.codegen_options.clone(),
externs: options.externs.clone(),
unstable_features: UnstableFeatures::from_environment(),
- lint_cap: Some(::rustc::lint::Level::Allow),
+ lint_cap: Some(rustc_session::lint::Level::Allow),
actually_rustdoc: true,
debugging_opts: config::DebuggingOptions { ..config::basic_debugging_options() },
edition: options.edition,
let mut hir_collector = HirCollector {
sess: compiler.session(),
collector: &mut collector,
- map: *tcx.hir(),
+ map: tcx.hir(),
codes: ErrorCodes::from(
compiler.session().opts.unstable_features.is_nightly_build(),
),
};
- hir_collector.visit_testable("".to_string(), &krate.attrs, |this| {
+ hir_collector.visit_testable("".to_string(), &krate.item.attrs, |this| {
intravisit::walk_crate(this, krate);
});
});
TestOptions { no_crate_inject: false, display_warnings: false, attrs: Vec::new() };
let test_attrs: Vec<_> = krate
+ .item
.attrs
.iter()
.filter(|a| a.check_name(sym::doc))
}
if !found_macro {
- if let ast::ItemKind::Mac(..) = item.kind {
+ if let ast::ItemKind::MacCall(..) = item.kind {
found_macro = true;
}
}
}
struct HirCollector<'a, 'hir> {
- sess: &'a session::Session,
+ sess: &'a Session,
collector: &'a mut Collector,
- map: &'a Map<'hir>,
+ map: Map<'hir>,
codes: ErrorCodes,
}
impl<'a, 'hir> intravisit::Visitor<'hir> for HirCollector<'a, 'hir> {
type Map = Map<'hir>;
- fn nested_visit_map(&mut self) -> intravisit::NestedVisitorMap<'_, Self::Map> {
- intravisit::NestedVisitorMap::All(&self.map)
+ fn nested_visit_map(&mut self) -> intravisit::NestedVisitorMap<Self::Map> {
+ intravisit::NestedVisitorMap::All(self.map)
}
fn visit_item(&mut self, item: &'hir hir::Item) {
}
fn visit_macro_def(&mut self, macro_def: &'hir hir::MacroDef) {
- self.visit_testable(macro_def.name.to_string(), ¯o_def.attrs, |_| ());
+ self.visit_testable(macro_def.ident.to_string(), ¯o_def.attrs, |_| ());
}
}
pub fn visit(mut self, krate: &'tcx hir::Crate) -> Module<'tcx> {
let mut module = self.visit_mod_contents(
- krate.span,
- krate.attrs,
+ krate.item.span,
+ krate.item.attrs,
&Spanned { span: rustc_span::DUMMY_SP, node: hir::VisibilityKind::Public },
hir::CRATE_HIR_ID,
- &krate.module,
+ &krate.item.module,
None,
);
// Attach the crate's exported macros to the top-level module:
def: &'tcx hir::MacroDef,
renamed: Option<ast::Name>,
) -> Macro<'tcx> {
- debug!("visit_local_macro: {}", def.name);
- let tts = def.body.trees().collect::<Vec<_>>();
+ debug!("visit_local_macro: {}", def.ident);
+ let tts = def.ast.body.inner_tokens().trees().collect::<Vec<_>>();
// Extract the spans of all matchers. They represent the "interface" of the macro.
let matchers = tts.chunks(4).map(|arm| arm[0].span()).collect();
hid: def.hir_id,
def_id: self.cx.tcx.hir().local_def_id(def.hir_id),
attrs: &def.attrs,
- name: renamed.unwrap_or(def.name),
+ name: renamed.unwrap_or(def.ident.name),
whence: def.span,
matchers,
imported_from: None,
#[derive(Debug, Default, Copy, Clone)]
pub struct System;
-// The AllocRef impl just forwards to the GlobalAlloc impl, which is in `std::sys::*::alloc`.
+// The AllocRef impl checks the layout size to be non-zero and forwards to the GlobalAlloc impl,
+// which is in `std::sys::*::alloc`.
#[unstable(feature = "allocator_api", issue = "32838")]
unsafe impl AllocRef for System {
#[inline]
- unsafe fn alloc(&mut self, layout: Layout) -> Result<NonNull<u8>, AllocErr> {
- NonNull::new(GlobalAlloc::alloc(self, layout)).ok_or(AllocErr)
+ fn alloc(&mut self, layout: Layout) -> Result<(NonNull<u8>, usize), AllocErr> {
+ if layout.size() == 0 {
+ Ok((layout.dangling(), 0))
+ } else {
+ unsafe {
+ NonNull::new(GlobalAlloc::alloc(self, layout))
+ .ok_or(AllocErr)
+ .map(|p| (p, layout.size()))
+ }
+ }
}
#[inline]
- unsafe fn alloc_zeroed(&mut self, layout: Layout) -> Result<NonNull<u8>, AllocErr> {
- NonNull::new(GlobalAlloc::alloc_zeroed(self, layout)).ok_or(AllocErr)
+ fn alloc_zeroed(&mut self, layout: Layout) -> Result<(NonNull<u8>, usize), AllocErr> {
+ if layout.size() == 0 {
+ Ok((layout.dangling(), 0))
+ } else {
+ unsafe {
+ NonNull::new(GlobalAlloc::alloc_zeroed(self, layout))
+ .ok_or(AllocErr)
+ .map(|p| (p, layout.size()))
+ }
+ }
}
#[inline]
unsafe fn dealloc(&mut self, ptr: NonNull<u8>, layout: Layout) {
- GlobalAlloc::dealloc(self, ptr.as_ptr(), layout)
+ if layout.size() != 0 {
+ GlobalAlloc::dealloc(self, ptr.as_ptr(), layout)
+ }
}
#[inline]
ptr: NonNull<u8>,
layout: Layout,
new_size: usize,
- ) -> Result<NonNull<u8>, AllocErr> {
- NonNull::new(GlobalAlloc::realloc(self, ptr.as_ptr(), layout, new_size)).ok_or(AllocErr)
+ ) -> Result<(NonNull<u8>, usize), AllocErr> {
+ match (layout.size(), new_size) {
+ (0, 0) => Ok((layout.dangling(), 0)),
+ (0, _) => self.alloc(Layout::from_size_align_unchecked(new_size, layout.align())),
+ (_, 0) => {
+ self.dealloc(ptr, layout);
+ Ok((layout.dangling(), 0))
+ }
+ (_, _) => NonNull::new(GlobalAlloc::realloc(self, ptr.as_ptr(), layout, new_size))
+ .ok_or(AllocErr)
+ .map(|p| (p, new_size)),
+ }
}
}
// a backtrace or actually symbolizing it.
use crate::env;
+use crate::ffi::c_void;
use crate::fmt;
use crate::sync::atomic::{AtomicUsize, Ordering::SeqCst};
use crate::sync::Mutex;
}
struct BacktraceFrame {
- frame: backtrace::Frame,
+ frame: RawFrame,
symbols: Vec<BacktraceSymbol>,
}
+enum RawFrame {
+ Actual(backtrace::Frame),
+ #[cfg(test)]
+ Fake,
+}
+
struct BacktraceSymbol {
name: Option<Vec<u8>>,
filename: Option<BytesOrWide>,
impl fmt::Debug for Backtrace {
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
let mut capture = match &self.inner {
- Inner::Unsupported => return fmt.write_str("unsupported backtrace"),
- Inner::Disabled => return fmt.write_str("disabled backtrace"),
+ Inner::Unsupported => return fmt.write_str("<unsupported>"),
+ Inner::Disabled => return fmt.write_str("<disabled>"),
Inner::Captured(c) => c.lock().unwrap(),
};
capture.resolve();
if let Some(fn_name) = self.name.as_ref().map(|b| backtrace::SymbolName::new(b)) {
write!(fmt, "fn: \"{:#}\"", fn_name)?;
} else {
- write!(fmt, "fn: \"<unknown>\"")?;
+ write!(fmt, "fn: <unknown>")?;
}
if let Some(fname) = self.filename.as_ref() {
- write!(fmt, ", file: {:?}", fname)?;
+ write!(fmt, ", file: \"{:?}\"", fname)?;
}
if let Some(line) = self.lineno.as_ref() {
let mut actual_start = None;
unsafe {
backtrace::trace_unsynchronized(|frame| {
- frames.push(BacktraceFrame { frame: frame.clone(), symbols: Vec::new() });
+ frames.push(BacktraceFrame {
+ frame: RawFrame::Actual(frame.clone()),
+ symbols: Vec::new(),
+ });
if frame.symbol_address() as usize == ip && actual_start.is_none() {
actual_start = Some(frames.len());
}
let _lock = lock();
for frame in self.frames.iter_mut() {
let symbols = &mut frame.symbols;
+ let frame = match &frame.frame {
+ RawFrame::Actual(frame) => frame,
+ #[cfg(test)]
+ RawFrame::Fake => unimplemented!(),
+ };
unsafe {
- backtrace::resolve_frame_unsynchronized(&frame.frame, |symbol| {
+ backtrace::resolve_frame_unsynchronized(frame, |symbol| {
symbols.push(BacktraceSymbol {
name: symbol.name().map(|m| m.as_bytes().to_vec()),
filename: symbol.filename_raw().map(|b| match b {
}
}
}
+
+impl RawFrame {
+ fn ip(&self) -> *mut c_void {
+ match self {
+ RawFrame::Actual(frame) => frame.ip(),
+ #[cfg(test)]
+ RawFrame::Fake => 1 as *mut c_void,
+ }
+ }
+}
+
+#[test]
+fn test_debug() {
+ let backtrace = Backtrace {
+ inner: Inner::Captured(Mutex::new(Capture {
+ actual_start: 1,
+ resolved: true,
+ frames: vec![
+ BacktraceFrame {
+ frame: RawFrame::Fake,
+ symbols: vec![BacktraceSymbol {
+ name: Some(b"std::backtrace::Backtrace::create".to_vec()),
+ filename: Some(BytesOrWide::Bytes(b"rust/backtrace.rs".to_vec())),
+ lineno: Some(100),
+ }],
+ },
+ BacktraceFrame {
+ frame: RawFrame::Fake,
+ symbols: vec![BacktraceSymbol {
+ name: Some(b"__rust_maybe_catch_panic".to_vec()),
+ filename: None,
+ lineno: None,
+ }],
+ },
+ BacktraceFrame {
+ frame: RawFrame::Fake,
+ symbols: vec![
+ BacktraceSymbol {
+ name: Some(b"std::rt::lang_start_internal".to_vec()),
+ filename: Some(BytesOrWide::Bytes(b"rust/rt.rs".to_vec())),
+ lineno: Some(300),
+ },
+ BacktraceSymbol {
+ name: Some(b"std::rt::lang_start".to_vec()),
+ filename: Some(BytesOrWide::Bytes(b"rust/rt.rs".to_vec())),
+ lineno: Some(400),
+ },
+ ],
+ },
+ ],
+ })),
+ };
+
+ #[rustfmt::skip]
+ let expected = "Backtrace [\
+ \n { fn: \"__rust_maybe_catch_panic\" },\
+ \n { fn: \"std::rt::lang_start_internal\", file: \"rust/rt.rs\", line: 300 },\
+ \n { fn: \"std::rt::lang_start\", file: \"rust/rt.rs\", line: 400 },\
+ \n]";
+
+ assert_eq!(format!("{:#?}", backtrace), expected);
+}
KEYS.with(|keys| {
let (k0, k1) = keys.get();
keys.set((k0.wrapping_add(1), k1));
- RandomState { k0: k0, k1: k1 }
+ RandomState { k0, k1 }
})
}
}
/// (such as `*` and `?`). On Windows this is not done, and such arguments are
/// passed as-is.
///
-/// On glibc Linux, arguments are retrieved by placing a function in .init_array.
-/// glibc passes argc, argv, and envp to functions in .init_array, as a non-standard extension.
+/// On glibc Linux systems, arguments are retrieved by placing a function in ".init_array".
+/// Glibc passes argc, argv, and envp to functions in ".init_array", as a non-standard extension.
/// This allows `std::env::args` to work even in a `cdylib` or `staticlib`, as it does on macOS
/// and Windows.
///
/// set to arbitrary text, and it may not even exist, so this property should
/// not be relied upon for security purposes.
///
-/// On glibc Linux, arguments are retrieved by placing a function in .init_array.
-/// glibc passes argc, argv, and envp to functions in .init_array, as a non-standard extension.
+/// On glibc Linux systems, arguments are retrieved by placing a function in ".init_array".
+/// Glibc passes argc, argv, and envp to functions in ".init_array", as a non-standard extension.
/// This allows `std::env::args` to work even in a `cdylib` or `staticlib`, as it does on macOS
/// and Windows.
///
/// }
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
- #[rustc_deprecated(since = "1.41.0", reason = "use the Display impl or to_string()")]
+ #[rustc_deprecated(since = "1.42.0", reason = "use the Display impl or to_string()")]
fn description(&self) -> &str {
"description() is deprecated; use Display"
}
}
}
+#[unstable(feature = "try_reserve", reason = "new API", issue = "48043")]
+impl Error for alloc::collections::TryReserveError {}
+
// Copied from `any.rs`.
impl dyn Error + 'static {
/// Returns `true` if the boxed type is the same as `T`
//! *[See also the `f32` primitive type](../../std/primitive.f32.html).*
//!
//! Mathematically significant numbers are provided in the `consts` sub-module.
+//!
+//! Although using these constants won’t cause compilation warnings,
+//! new code should use the associated constants directly on the primitive type.
#![stable(feature = "rust1", since = "1.0.0")]
#![allow(missing_docs)]
//! *[See also the `f64` primitive type](../../std/primitive.f64.html).*
//!
//! Mathematically significant numbers are provided in the `consts` sub-module.
+//!
+//! Although using these constants won’t cause compilation warnings,
+//! new code should use the associated constants directly on the primitive type.
#![stable(feature = "rust1", since = "1.0.0")]
#![allow(missing_docs)]
///
/// It is equivalent to `OpenOptions::new()` but allows you to write more
/// readable code. Instead of `OpenOptions::new().read(true).open("foo.txt")`
- /// you can write `File::with_options().read(true).open("foo.txt"). This
+ /// you can write `File::with_options().read(true).open("foo.txt")`. This
/// also avoids the need to import `OpenOptions`.
///
/// See the [`OpenOptions::new`] function for more details.
//! Asynchronous values.
-use core::cell::Cell;
-use core::marker::Unpin;
-use core::ops::{Drop, Generator, GeneratorState};
-use core::option::Option;
-use core::pin::Pin;
-use core::ptr::NonNull;
-use core::task::{Context, Poll};
+#[cfg(bootstrap)]
+use core::{
+ cell::Cell,
+ marker::Unpin,
+ ops::{Drop, Generator, GeneratorState},
+ pin::Pin,
+ ptr::NonNull,
+ task::{Context, Poll},
+};
#[doc(inline)]
#[stable(feature = "futures_api", since = "1.36.0")]
/// This function returns a `GenFuture` underneath, but hides it in `impl Trait` to give
/// better error messages (`impl Future` rather than `GenFuture<[closure.....]>`).
// This is `const` to avoid extra errors after we recover from `const async fn`
+#[cfg(bootstrap)]
#[doc(hidden)]
#[unstable(feature = "gen_future", issue = "50547")]
pub const fn from_generator<T: Generator<Yield = ()>>(x: T) -> impl Future<Output = T::Return> {
}
/// A wrapper around generators used to implement `Future` for `async`/`await` code.
+#[cfg(bootstrap)]
#[doc(hidden)]
#[unstable(feature = "gen_future", issue = "50547")]
#[derive(Copy, Clone, Debug, Eq, PartialEq, Ord, PartialOrd, Hash)]
// We rely on the fact that async/await futures are immovable in order to create
// self-referential borrows in the underlying generator.
+#[cfg(bootstrap)]
impl<T: Generator<Yield = ()>> !Unpin for GenFuture<T> {}
+#[cfg(bootstrap)]
#[doc(hidden)]
#[unstable(feature = "gen_future", issue = "50547")]
impl<T: Generator<Yield = ()>> Future for GenFuture<T> {
// Safe because we're !Unpin + !Drop mapping to a ?Unpin value
let gen = unsafe { Pin::map_unchecked_mut(self, |s| &mut s.0) };
let _guard = unsafe { set_task_context(cx) };
- match gen.resume(
- #[cfg(not(bootstrap))]
- (),
- ) {
+ match gen.resume(()) {
GeneratorState::Yielded(()) => Poll::Pending,
GeneratorState::Complete(x) => Poll::Ready(x),
}
}
}
+#[cfg(bootstrap)]
thread_local! {
static TLS_CX: Cell<Option<NonNull<Context<'static>>>> = Cell::new(None);
}
+#[cfg(bootstrap)]
struct SetOnDrop(Option<NonNull<Context<'static>>>);
+#[cfg(bootstrap)]
impl Drop for SetOnDrop {
fn drop(&mut self) {
TLS_CX.with(|tls_cx| {
// Safety: the returned guard must drop before `cx` is dropped and before
// any previous guard is dropped.
+#[cfg(bootstrap)]
unsafe fn set_task_context(cx: &mut Context<'_>) -> SetOnDrop {
// transmute the context's lifetime to 'static so we can store it.
let cx = core::mem::transmute::<&mut Context<'_>, &mut Context<'static>>(cx);
SetOnDrop(old_cx)
}
+#[cfg(bootstrap)]
#[doc(hidden)]
#[unstable(feature = "gen_future", issue = "50547")]
/// Polls a future in the current thread-local task waker.
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn new(inner: T) -> Cursor<T> {
- Cursor { pos: 0, inner: inner }
+ Cursor { pos: 0, inner }
}
/// Consumes this cursor, returning the underlying value.
F: FnMut(&R) -> usize,
{
let start_len = buf.len();
- let mut g = Guard { len: buf.len(), buf: buf };
+ let mut g = Guard { len: buf.len(), buf };
let ret;
loop {
if g.len == g.buf.len() {
/// [`&str`]: ../../std/primitive.str.html
/// [slice]: ../../std/primitive.slice.html
#[stable(feature = "rust1", since = "1.0.0")]
-#[doc(spotlight)]
pub trait Read {
/// Pull some bytes from this source into the specified buffer, returning
/// how many bytes were read.
where
Self: Sized,
{
- Take { inner: self, limit: limit }
+ Take { inner: self, limit }
}
}
/// ABI compatible with the `iovec` type on Unix platforms and `WSABUF` on
/// Windows.
#[stable(feature = "iovec", since = "1.36.0")]
+#[derive(Copy, Clone)]
#[repr(transparent)]
pub struct IoSlice<'a>(sys::io::IoSlice<'a>);
///
/// [`write_all`]: #method.write_all
#[stable(feature = "rust1", since = "1.0.0")]
-#[doc(spotlight)]
pub trait Write {
/// Write a buffer into this writer, returning how many bytes were written.
///
{
let result = local_s
.try_with(|s| {
- if let Ok(mut borrowed) = s.try_borrow_mut() {
- if let Some(w) = borrowed.as_mut() {
- return w.write_fmt(args);
- }
+ // Note that we completely remove a local sink to write to in case
+ // our printing recursively panics/prints, so the recursive
+ // panic/print goes to the global sink instead of our local sink.
+ let prev = s.borrow_mut().take();
+ if let Some(mut w) = prev {
+ let result = w.write_fmt(args);
+ *s.borrow_mut() = Some(w);
+ return result;
}
global_s().write_fmt(args)
})
#[doc(keyword = "else")]
//
-/// What to do when an [`if`] condition does not hold.
+/// What expression to evaluate when an [`if`] condition evaluates to [`false`].
///
-/// The documentation for this keyword is [not yet complete]. Pull requests welcome!
+/// `else` expressions are optional. When no `else` expression is supplied, the `if` expression
+/// evaluates to the unit type `()` whenever its condition is `false`.
+///
+/// The type that the `else` blocks evaluate to must be compatible with the type that the `if` block
+/// evaluates to.
+///
+/// As shown below, `else` must be followed by `if`, `if let`, or a block `{}`, and the whole
+/// expression evaluates to the value of that branch.
+///
+/// ```rust
+/// let result = if true == false {
+/// "oh no"
+/// } else if "something" == "other thing" {
+/// "oh dear"
+/// } else if let Some(200) = "blarg".parse::<i32>().ok() {
+/// "uh oh"
+/// } else {
+/// println!("Sneaky side effect.");
+/// "phew, nothing's broken"
+/// };
+/// ```
+///
+/// Here's another example, this time without using the result as an expression:
+///
+/// ```rust
+/// if true == false {
+/// println!("oh no");
+/// } else if "something" == "other thing" {
+/// println!("oh dear");
+/// } else if let Some(200) = "blarg".parse::<i32>().ok() {
+/// println!("uh oh");
+/// } else {
+/// println!("phew, nothing's broken");
+/// }
+/// ```
///
+/// The above is _still_ an expression but it will always evaluate to `()`.
+///
+/// There is no practical limit to the number of `else` blocks that can follow an `if` expression;
+/// however, if you have several, a [`match`] expression might be preferable.
+///
+/// Read more about control flow in the [Rust Book].
+///
+/// [Rust Book]: ../book/ch03-05-control-flow.html#handling-multiple-conditions-with-else-if
+/// [`match`]: keyword.match.html
+/// [`false`]: keyword.false.html
/// [`if`]: keyword.if.html
-/// [not yet complete]: https://github.com/rust-lang/rust/issues/34601
mod else_keyword {}
#[doc(keyword = "enum")]
//
/// Iterate over a series of values with [`for`].
///
-/// The documentation for this keyword is [not yet complete]. Pull requests welcome!
+/// The expression immediately following `in` must implement the [`Iterator`] trait.
+///
+/// ## Literal Examples:
+///
+/// * `for _ in 1..3 {}` - Iterate over an exclusive range up to but excluding 3.
+/// * `for _ in 1..=3 {}` - Iterate over an inclusive range up to and including 3.
+///
+/// (Read more about [range patterns])
///
+/// [`Iterator`]: ../book/ch13-04-performance.html
+/// [range patterns]: ../reference/patterns.html?highlight=range#range-patterns
/// [`for`]: keyword.for.html
-/// [not yet complete]: https://github.com/rust-lang/rust/issues/34601
mod in_keyword {}
#[doc(keyword = "let")]
//
/// Make an item visible to others.
///
-/// The documentation for this keyword is [not yet complete]. Pull requests welcome!
+/// The keyword `pub` makes any module, function, or data structure accessible from
+/// external modules. The `pub` keyword may also be used in a `use` declaration to re-export
+/// an identifier from a namespace.
///
-/// [not yet complete]: https://github.com/rust-lang/rust/issues/34601
+/// For more information on the `pub` keyword, please see the visibility section
+/// of the [reference] and for some examples, see [Rust by Example].
+///
+/// [reference]: ../reference/visibility-and-privacy.html?highlight=pub#visibility-and-privacy
+/// [Rust by Example]: ../rust-by-example/mod/visibility.html
mod pub_keyword {}
#[doc(keyword = "ref")]
#![feature(arbitrary_self_types)]
#![feature(array_error_internals)]
#![feature(asm)]
-#![feature(assoc_int_consts)]
#![feature(associated_type_bounds)]
#![feature(atomic_mut_ptr)]
#![feature(box_syntax)]
#![feature(c_variadic)]
+#![cfg_attr(not(bootstrap), feature(cfg_accessible))]
#![feature(cfg_target_has_atomic)]
#![feature(cfg_target_thread_local)]
#![feature(char_error_internals)]
#![feature(doc_cfg)]
#![feature(doc_keyword)]
#![feature(doc_masked)]
-#![feature(doc_spotlight)]
#![feature(dropck_eyepatch)]
#![feature(duration_constants)]
#![feature(exact_size_is_empty)]
#![feature(shrink_to)]
#![feature(slice_concat_ext)]
#![feature(slice_internals)]
-#![feature(specialization)]
+#![cfg_attr(bootstrap, feature(specialization))]
+#![cfg_attr(not(bootstrap), feature(min_specialization))]
#![feature(staged_api)]
#![feature(std_internals)]
#![feature(stdsimd)]
/// builds or when debugging in release mode is significantly faster.
///
/// Note that the macro is intended as a debugging tool and therefore you
-/// should avoid having uses of it in version control for longer periods.
+/// should avoid having uses of it in version control for long periods.
/// Use cases involving debug output that should be added to version control
/// are better served by macros such as [`debug!`] from the [`log`] crate.
///
type Iter = vec::IntoIter<SocketAddr>;
fn to_socket_addrs(&self) -> io::Result<vec::IntoIter<SocketAddr>> {
// try to parse as a regular SocketAddr first
- if let Some(addr) = self.parse().ok() {
+ if let Ok(addr) = self.parse() {
return Ok(vec![addr].into_iter());
}
/// Returns [`true`] if this address is reserved by IANA for future use. [IETF RFC 1112]
/// defines the block of reserved addresses as `240.0.0.0/4`. This range normally includes the
- /// broadcast address `255.255.255.255`, but this implementation explicitely excludes it, since
+ /// broadcast address `255.255.255.255`, but this implementation explicitly excludes it, since
/// it is obviously not reserved for future use.
///
/// [IETF RFC 1112]: https://tools.ietf.org/html/rfc1112
}
// read `::` if previous code parsed less than 8 groups
- if !self.read_given_char(':').is_some() || !self.read_given_char(':').is_some() {
+ if self.read_given_char(':').is_none() || self.read_given_char(':').is_none() {
return None;
}
/// }
/// ```
#[stable(feature = "metadata_ext", since = "1.1.0")]
- #[rustc_deprecated(since = "1.8.0", reason = "other methods of this trait are now prefered")]
+ #[rustc_deprecated(since = "1.8.0", reason = "other methods of this trait are now preferred")]
#[allow(deprecated)]
fn as_raw_stat(&self) -> &raw::stat;
use crate::intrinsics;
use crate::mem::{self, ManuallyDrop};
use crate::process;
-use crate::raw;
use crate::sync::atomic::{AtomicBool, Ordering};
use crate::sys::stdio::panic_output;
use crate::sys_common::backtrace::{self, RustBacktrace};
// hook up these functions, but it is not this day!
#[allow(improper_ctypes)]
extern "C" {
- fn __rust_maybe_catch_panic(
- f: fn(*mut u8),
- data: *mut u8,
- data_ptr: *mut usize,
- vtable_ptr: *mut usize,
- ) -> u32;
+ fn __rust_panic_cleanup(payload: *mut u8) -> *mut (dyn Any + Send + 'static);
/// `payload` is actually a `*mut &mut dyn BoxMeUp` but that would cause FFI warnings.
/// It cannot be `Box<dyn BoxMeUp>` because the other end of this call does not depend
union Data<F, R> {
f: ManuallyDrop<F>,
r: ManuallyDrop<R>,
+ p: ManuallyDrop<Box<dyn Any + Send>>,
}
// We do some sketchy operations with ownership here for the sake of
- // performance. We can only pass pointers down to
- // `__rust_maybe_catch_panic` (can't pass objects by value), so we do all
- // the ownership tracking here manually using a union.
+ // performance. We can only pass pointers down to `do_call` (can't pass
+ // objects by value), so we do all the ownership tracking here manually
+ // using a union.
//
// We go through a transition where:
//
- // * First, we set the data to be the closure that we're going to call.
+ // * First, we set the data field `f` to be the argumentless closure that we're going to call.
// * When we make the function call, the `do_call` function below, we take
- // ownership of the function pointer. At this point the `Data` union is
+ // ownership of the function pointer. At this point the `data` union is
// entirely uninitialized.
// * If the closure successfully returns, we write the return value into the
- // data's return slot. Note that `ptr::write` is used as it's overwriting
- // uninitialized data.
- // * Finally, when we come back out of the `__rust_maybe_catch_panic` we're
+ // data's return slot (field `r`).
+ // * If the closure panics (`do_catch` below), we write the panic payload into field `p`.
+ // * Finally, when we come back out of the `try` intrinsic we're
// in one of two states:
//
// 1. The closure didn't panic, in which case the return value was
- // filled in. We move it out of `data` and return it.
- // 2. The closure panicked, in which case the return value wasn't
- // filled in. In this case the entire `data` union is invalid, so
- // there is no need to drop anything.
+ // filled in. We move it out of `data.r` and return it.
+ // 2. The closure panicked, in which case the panic payload was
+ // filled in. We move it out of `data.p` and return it.
//
        // Once we stack all that together we should have the "most efficient"
// method of calling a catch panic whilst juggling ownership.
- let mut any_data = 0;
- let mut any_vtable = 0;
let mut data = Data { f: ManuallyDrop::new(f) };
- let r = __rust_maybe_catch_panic(
- do_call::<F, R>,
- &mut data as *mut _ as *mut u8,
- &mut any_data,
- &mut any_vtable,
- );
-
- return if r == 0 {
+ let data_ptr = &mut data as *mut _ as *mut u8;
+ return if do_try(do_call::<F, R>, data_ptr, do_catch::<F, R>) == 0 {
Ok(ManuallyDrop::into_inner(data.r))
} else {
- update_panic_count(-1);
- Err(mem::transmute(raw::TraitObject {
- data: any_data as *mut _,
- vtable: any_vtable as *mut _,
- }))
+ Err(ManuallyDrop::into_inner(data.p))
};
+ // Compatibility wrapper around the try intrinsic for bootstrap.
+ //
+ // We also need to mark it #[inline(never)] to work around a bug on MinGW
+ // targets: the unwinding implementation was relying on UB, but this only
+ // becomes a problem in practice if inlining is involved.
+ #[cfg(not(bootstrap))]
+ use intrinsics::r#try as do_try;
+ #[cfg(bootstrap)]
+ #[inline(never)]
+ unsafe fn do_try(try_fn: fn(*mut u8), data: *mut u8, catch_fn: fn(*mut u8, *mut u8)) -> i32 {
+ use crate::mem::MaybeUninit;
+ #[cfg(target_env = "msvc")]
+ type TryPayload = [u64; 2];
+ #[cfg(not(target_env = "msvc"))]
+ type TryPayload = *mut u8;
+
+ let mut payload: MaybeUninit<TryPayload> = MaybeUninit::uninit();
+ let payload_ptr = payload.as_mut_ptr() as *mut u8;
+ let r = intrinsics::r#try(try_fn, data, payload_ptr);
+ if r != 0 {
+ #[cfg(target_env = "msvc")]
+ {
+ catch_fn(data, payload_ptr)
+ }
+ #[cfg(not(target_env = "msvc"))]
+ {
+ catch_fn(data, payload.assume_init())
+ }
+ }
+ r
+ }
+
+ // We consider unwinding to be rare, so mark this function as cold. However,
+ // do not mark it no-inline -- that decision is best to leave to the
+ // optimizer (in most cases this function is not inlined even as a normal,
+ // non-cold function, though, as of the writing of this comment).
+ #[cold]
+ unsafe fn cleanup(payload: *mut u8) -> Box<dyn Any + Send + 'static> {
+ let obj = Box::from_raw(__rust_panic_cleanup(payload));
+ update_panic_count(-1);
+ obj
+ }
+
+ // See comment on do_try above for why #[inline(never)] is needed on bootstrap.
+ #[cfg_attr(bootstrap, inline(never))]
+ #[cfg_attr(not(bootstrap), inline)]
fn do_call<F: FnOnce() -> R, R>(data: *mut u8) {
unsafe {
let data = data as *mut Data<F, R>;
data.r = ManuallyDrop::new(f());
}
}
+
+ // We *do* want this part of the catch to be inlined: this allows the
+ // compiler to properly track accesses to the Data union and optimize it
+ // away most of the time.
+ #[inline]
+ fn do_catch<F: FnOnce() -> R, R>(data: *mut u8, payload: *mut u8) {
+ unsafe {
+ let data = data as *mut Data<F, R>;
+ let data = &mut (*data);
+ let obj = cleanup(payload);
+ data.p = ManuallyDrop::new(obj);
+ }
+ }
}
/// Determines whether the current thread is unwinding because of panic.
PartialEq, PartialOrd, RustcDecodable, RustcEncodable,
};
+#[cfg(not(bootstrap))]
+#[unstable(
+ feature = "cfg_accessible",
+ issue = "64797",
+ reason = "`cfg_accessible` is not fully implemented"
+)]
+#[doc(hidden)]
+pub use core::prelude::v1::cfg_accessible;
+
// The file so far is equivalent to src/libcore/prelude/v1.rs,
// and below to src/liballoc/prelude.rs.
// Those files are duplicated rather than using glob imports
#[doc(primitive = "f32")]
/// The 32-bit floating point type.
///
-/// *[See also the `std::f32` module](f32/index.html).*
+/// *[See also the `std::f32::consts` module](f32/consts/index.html).*
///
#[stable(feature = "rust1", since = "1.0.0")]
mod prim_f32 {}
//
/// The 64-bit floating point type.
///
-/// *[See also the `std::f64` module](f64/index.html).*
+/// *[See also the `std::f64::consts` module](f64/consts/index.html).*
///
#[stable(feature = "rust1", since = "1.0.0")]
mod prim_f64 {}
#[doc(primitive = "i8")]
//
/// The 8-bit signed integer type.
-///
-/// *[See also the `std::i8` module](i8/index.html).*
#[stable(feature = "rust1", since = "1.0.0")]
mod prim_i8 {}
#[doc(primitive = "i16")]
//
/// The 16-bit signed integer type.
-///
-/// *[See also the `std::i16` module](i16/index.html).*
#[stable(feature = "rust1", since = "1.0.0")]
mod prim_i16 {}
#[doc(primitive = "i32")]
//
/// The 32-bit signed integer type.
-///
-/// *[See also the `std::i32` module](i32/index.html).*
#[stable(feature = "rust1", since = "1.0.0")]
mod prim_i32 {}
#[doc(primitive = "i64")]
//
/// The 64-bit signed integer type.
-///
-/// *[See also the `std::i64` module](i64/index.html).*
#[stable(feature = "rust1", since = "1.0.0")]
mod prim_i64 {}
#[doc(primitive = "i128")]
//
/// The 128-bit signed integer type.
-///
-/// *[See also the `std::i128` module](i128/index.html).*
#[stable(feature = "i128", since = "1.26.0")]
mod prim_i128 {}
#[doc(primitive = "u8")]
//
/// The 8-bit unsigned integer type.
-///
-/// *[See also the `std::u8` module](u8/index.html).*
#[stable(feature = "rust1", since = "1.0.0")]
mod prim_u8 {}
#[doc(primitive = "u16")]
//
/// The 16-bit unsigned integer type.
-///
-/// *[See also the `std::u16` module](u16/index.html).*
#[stable(feature = "rust1", since = "1.0.0")]
mod prim_u16 {}
#[doc(primitive = "u32")]
//
/// The 32-bit unsigned integer type.
-///
-/// *[See also the `std::u32` module](u32/index.html).*
#[stable(feature = "rust1", since = "1.0.0")]
mod prim_u32 {}
#[doc(primitive = "u64")]
//
/// The 64-bit unsigned integer type.
-///
-/// *[See also the `std::u64` module](u64/index.html).*
#[stable(feature = "rust1", since = "1.0.0")]
mod prim_u64 {}
#[doc(primitive = "u128")]
//
/// The 128-bit unsigned integer type.
-///
-/// *[See also the `std::u128` module](u128/index.html).*
#[stable(feature = "i128", since = "1.26.0")]
mod prim_u128 {}
//
/// The pointer-sized signed integer type.
///
-/// *[See also the `std::isize` module](isize/index.html).*
-///
/// The size of this primitive is how many bytes it takes to reference any
/// location in memory. For example, on a 32 bit target, this is 4 bytes
/// and on a 64 bit target, this is 8 bytes.
//
/// The pointer-sized unsigned integer type.
///
-/// *[See also the `std::usize` module](usize/index.html).*
-///
/// The size of this primitive is how many bytes it takes to reference any
/// location in memory. For example, on a 32 bit target, this is 4 bytes
/// and on a 64 bit target, this is 8 bytes.
tail: UnsafeCell<*mut Node<T>>, // where to pop from
tail_prev: AtomicPtr<Node<T>>, // where to pop from
cache_bound: usize, // maximum cache size
- cached_nodes: AtomicUsize, // number of nodes marked as cachable
+ cached_nodes: AtomicUsize, // number of nodes marked as cacheable
addition: Addition,
}
impl<'mutex, T: ?Sized> MutexGuard<'mutex, T> {
unsafe fn new(lock: &'mutex Mutex<T>) -> LockResult<MutexGuard<'mutex, T>> {
- poison::map_result(lock.poison.borrow(), |guard| MutexGuard { lock: lock, poison: guard })
+ poison::map_result(lock.poison.borrow(), |guard| MutexGuard { lock, poison: guard })
}
}
/// assert!(handle.join().is_err());
/// assert_eq!(INIT.is_completed(), false);
/// ```
- #[stable(feature = "once_is_completed", since = "1.44.0")]
+ #[stable(feature = "once_is_completed", since = "1.43.0")]
#[inline]
pub fn is_completed(&self) -> bool {
// An `Acquire` load is enough because that makes all the initialization
impl<'rwlock, T: ?Sized> RwLockReadGuard<'rwlock, T> {
unsafe fn new(lock: &'rwlock RwLock<T>) -> LockResult<RwLockReadGuard<'rwlock, T>> {
- poison::map_result(lock.poison.borrow(), |_| RwLockReadGuard { lock: lock })
+ poison::map_result(lock.poison.borrow(), |_| RwLockReadGuard { lock })
}
}
impl<'rwlock, T: ?Sized> RwLockWriteGuard<'rwlock, T> {
unsafe fn new(lock: &'rwlock RwLock<T>) -> LockResult<RwLockWriteGuard<'rwlock, T>> {
- poison::map_result(lock.poison.borrow(), |guard| RwLockWriteGuard {
- lock: lock,
- poison: guard,
- })
+ poison::map_result(lock.poison.borrow(), |guard| RwLockWriteGuard { lock, poison: guard })
}
}
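The guard constructors above switch from `lock: lock` to the field-init shorthand. A standalone sketch of how the shorthand expands, using a hypothetical `Guard` type (not the real `MutexGuard`):

```rust
struct Guard<'a> {
    lock: &'a str,
    poisoned: bool,
}

fn main() {
    let lock = "m";
    // Field-init shorthand: when a local variable has the same name as a
    // field, `lock` alone expands to `lock: lock`.
    let g = Guard { lock, poisoned: false };
    assert_eq!(g.lock, "m");
    assert!(!g.poisoned);
}
```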
use crate::mem;
+#[derive(Copy, Clone)]
pub struct IoSlice<'a>(&'a [u8]);
impl<'a> IoSlice<'a> {
use crate::mem;
+#[derive(Copy, Clone)]
pub struct IoSlice<'a>(&'a [u8]);
impl<'a> IoSlice<'a> {
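Deriving `Copy`/`Clone` as above lets callers pass an I/O slice by value and still use the original afterwards. A minimal sketch with a stand-in `IoSlice` (the real type wraps platform-specific structs):

```rust
#[derive(Copy, Clone)]
struct IoSlice<'a>(&'a [u8]);

fn main() {
    let buf = [1u8, 2, 3];
    let s = IoSlice(&buf);
    let t = s; // Copy: `s` is not moved and remains usable
    assert_eq!(s.0, t.0);
}
```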
/// It is also possible to obtain a mutable reference `&mut UserRef<T>`. Unlike
/// regular mutable references, these are not exclusive. Userspace may always
/// write to the backing memory at any time, so it can't be assumed that there
-/// the pointed-to memory is uniquely borrowed. The two different refence types
+/// the pointed-to memory is uniquely borrowed. The two different reference types
/// are used solely to indicate intent: a mutable reference is for writing to
/// user memory, an immutable reference for reading from user memory.
#[unstable(feature = "sgx_platform", issue = "56975")]
use crate::mem;
+#[derive(Copy, Clone)]
pub struct IoSlice<'a>(&'a [u8]);
impl<'a> IoSlice<'a> {
use libc::{c_void, iovec};
+#[derive(Copy, Clone)]
#[repr(transparent)]
pub struct IoSlice<'a> {
vec: iovec,
// On Unix-like platforms, libc::abort will unregister signal handlers
// including the SIGABRT handler, preventing the abort from being blocked, and
-// fclose streams, with the side effect of flushing them so libc bufferred
+// fclose streams, with the side effect of flushing them so libc buffered
// output will be printed. Additionally the shell will generally print a more
// understandable error message like "Abort trap" rather than "Illegal
// instruction" that intrinsics::abort would cause, as intrinsics::abort is
};
let mut timeout = libc::timeval {
tv_sec: secs,
- tv_usec: (dur.subsec_nanos() / 1000) as libc::suseconds_t,
+ tv_usec: dur.subsec_micros() as libc::suseconds_t,
};
if timeout.tv_sec == 0 && timeout.tv_usec == 0 {
timeout.tv_usec = 1;
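The timeout change relies on `Duration::subsec_micros` being exactly the old `subsec_nanos() / 1000`. A quick check:

```rust
use std::time::Duration;

fn main() {
    let dur = Duration::new(2, 1_500_000); // 2 s + 1.5 ms
    // subsec_micros() truncates the same way the manual division did.
    assert_eq!(dur.subsec_micros(), dur.subsec_nanos() / 1000);
    assert_eq!(dur.subsec_micros(), 1_500);
}
```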
if #[cfg(target_os = "fuchsia")] {
// fuchsia doesn't have /dev/null
} else if #[cfg(target_os = "redox")] {
- const DEV_NULL: &'static str = "null:\0";
+ const DEV_NULL: &str = "null:\0";
} else {
- const DEV_NULL: &'static str = "/dev/null\0";
+ const DEV_NULL: &str = "/dev/null\0";
}
}
pub unsafe fn new() -> Handler {
make_handler()
}
+
+ fn null() -> Handler {
+ Handler { _data: crate::ptr::null_mut() }
+ }
}
impl Drop for Handler {
use libc::{mmap, munmap};
use libc::{sigaction, sighandler_t, SA_ONSTACK, SA_SIGINFO, SIGBUS, SIG_DFL};
use libc::{sigaltstack, SIGSTKSZ, SS_DISABLE};
- use libc::{MAP_ANON, MAP_PRIVATE, PROT_READ, PROT_WRITE, SIGSEGV};
+ use libc::{MAP_ANON, MAP_PRIVATE, PROT_NONE, PROT_READ, PROT_WRITE, SIGSEGV};
+ use crate::sys::unix::os::page_size;
use crate::sys_common::thread_info;
#[cfg(any(target_os = "linux", target_os = "android"))]
}
static mut MAIN_ALTSTACK: *mut libc::c_void = ptr::null_mut();
+ static mut NEED_ALTSTACK: bool = false;
pub unsafe fn init() {
let mut action: sigaction = mem::zeroed();
- action.sa_flags = SA_SIGINFO | SA_ONSTACK;
- action.sa_sigaction = signal_handler as sighandler_t;
- sigaction(SIGSEGV, &action, ptr::null_mut());
- sigaction(SIGBUS, &action, ptr::null_mut());
+ for &signal in &[SIGSEGV, SIGBUS] {
+ sigaction(signal, ptr::null_mut(), &mut action);
+ // Configure our signal handler if one is not already set.
+ if action.sa_sigaction == SIG_DFL {
+ action.sa_flags = SA_SIGINFO | SA_ONSTACK;
+ action.sa_sigaction = signal_handler as sighandler_t;
+ sigaction(signal, &action, ptr::null_mut());
+ NEED_ALTSTACK = true;
+ }
+ }
let handler = make_handler();
MAIN_ALTSTACK = handler._data;
}
unsafe fn get_stackp() -> *mut libc::c_void {
- let stackp =
- mmap(ptr::null_mut(), SIGSTKSZ, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANON, -1, 0);
+ let stackp = mmap(
+ ptr::null_mut(),
+ SIGSTKSZ + page_size(),
+ PROT_READ | PROT_WRITE,
+ MAP_PRIVATE | MAP_ANON,
+ -1,
+ 0,
+ );
if stackp == MAP_FAILED {
panic!("failed to allocate an alternative stack");
}
- stackp
+ let guard_result = libc::mprotect(stackp, page_size(), PROT_NONE);
+ if guard_result != 0 {
+ panic!("failed to set up alternative stack guard page");
+ }
+ stackp.add(page_size())
}
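A Unix-only sketch of the guard-page layout `get_stackp` sets up: map one extra page, revoke access to it, and hand out the region starting after it, so a stack overflow faults loudly instead of corrupting adjacent memory. The raw declarations and constants below are assumed 64-bit Linux values (4 KiB pages, `MAP_PRIVATE | MAP_ANONYMOUS`), not taken from this code:

```rust
use std::ffi::c_void;

extern "C" {
    fn mmap(addr: *mut c_void, len: usize, prot: i32, flags: i32, fd: i32, off: i64) -> *mut c_void;
    fn mprotect(addr: *mut c_void, len: usize, prot: i32) -> i32;
    fn munmap(addr: *mut c_void, len: usize) -> i32;
}

const PAGE: usize = 4096; // assumed page size
const PROT_NONE: i32 = 0;
const PROT_READ: i32 = 1;
const PROT_WRITE: i32 = 2;
const MAP_PRIVATE: i32 = 0x02; // assumed Linux values
const MAP_ANONYMOUS: i32 = 0x20;

fn main() {
    let stack_size = 8 * PAGE;
    unsafe {
        let base = mmap(
            std::ptr::null_mut(),
            stack_size + PAGE,
            PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANONYMOUS,
            -1,
            0,
        );
        assert_ne!(base as isize, -1, "mmap failed");
        // The lowest page becomes the guard: any access there now faults.
        assert_eq!(mprotect(base, PAGE, PROT_NONE), 0);
        let stack = (base as *mut u8).add(PAGE);
        stack.write(0xAB); // the remaining pages stay read/write
        assert_eq!(stack.read(), 0xAB);
        // Cleanup mirrors `drop_handler`: walk back a page and unmap it all.
        assert_eq!(munmap(base, stack_size + PAGE), 0);
    }
}
```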
#[cfg(any(
}
pub unsafe fn make_handler() -> Handler {
+ if !NEED_ALTSTACK {
+ return Handler::null();
+ }
let mut stack = mem::zeroed();
sigaltstack(ptr::null(), &mut stack);
// Configure alternate signal stack, if one is not already set.
sigaltstack(&stack, ptr::null_mut());
Handler { _data: stack.ss_sp as *mut libc::c_void }
} else {
- Handler { _data: ptr::null_mut() }
+ Handler::null()
}
}
ss_size: SIGSTKSZ,
};
sigaltstack(&stack, ptr::null_mut());
- munmap(handler._data, SIGSTKSZ);
+ // We know from `get_stackp` that the alternate stack we installed is part of a mapping
+ // that started one page earlier, so walk back a page and unmap from there.
+ munmap(handler._data.sub(page_size()), SIGSTKSZ + page_size());
}
}
}
target_os = "openbsd"
)))]
mod imp {
- use crate::ptr;
-
pub unsafe fn init() {}
pub unsafe fn cleanup() {}
pub unsafe fn make_handler() -> super::Handler {
- super::Handler { _data: ptr::null_mut() }
+ super::Handler::null()
}
pub unsafe fn drop_handler(_handler: &mut super::Handler) {}
use libc::{c_void, iovec};
+#[derive(Copy, Clone)]
#[repr(transparent)]
pub struct IoSlice<'a> {
vec: iovec,
// On Unix-like platforms, libc::abort will unregister signal handlers
// including the SIGABRT handler, preventing the abort from being blocked, and
-// fclose streams, with the side effect of flushing them so libc bufferred
+// fclose streams, with the side effect of flushing them so libc buffered
// output will be printed. Additionally the shell will generally print a more
// understandable error message like "Abort trap" rather than "Illegal
// instruction" that intrinsics::abort would cause, as intrinsics::abort is
use crate::marker::PhantomData;
use crate::slice;
+#[derive(Copy, Clone)]
#[repr(transparent)]
pub struct IoSlice<'a> {
vec: wasi::Ciovec,
use crate::mem;
+#[derive(Copy, Clone)]
pub struct IoSlice<'a>(&'a [u8]);
impl<'a> IoSlice<'a> {
pub szSystemStatus: [u8; WSASYS_STATUS_LEN + 1],
}
+#[derive(Copy, Clone)]
#[repr(C)]
pub struct WSABUF {
pub len: ULONG,
_dwBufferSize: DWORD) -> BOOL {
SetLastError(ERROR_CALL_NOT_IMPLEMENTED as DWORD); 0
}
+ pub fn GetSystemTimePreciseAsFileTime(lpSystemTimeAsFileTime: LPFILETIME)
+ -> () {
+ GetSystemTimeAsFileTime(lpSystemTimeAsFileTime)
+ }
pub fn SleepConditionVariableSRW(ConditionVariable: PCONDITION_VARIABLE,
SRWLock: PSRWLOCK,
dwMilliseconds: DWORD,
use crate::slice;
use crate::sys::c;
+#[derive(Copy, Clone)]
#[repr(transparent)]
pub struct IoSlice<'a> {
vec: c::WSABUF,
pub fn now() -> SystemTime {
unsafe {
let mut t: SystemTime = mem::zeroed();
- c::GetSystemTimeAsFileTime(&mut t.t);
+ c::GetSystemTimePreciseAsFileTime(&mut t.t);
t
}
}
let mut print_path = move |fmt: &mut fmt::Formatter<'_>, bows: BytesOrWideString<'_>| {
output_filename(fmt, bows, print_fmt, cwd.as_ref())
};
- write!(fmt, "stack backtrace:\n")?;
+ writeln!(fmt, "stack backtrace:")?;
let mut bt_fmt = BacktraceFmt::new(fmt, print_fmt, &mut print_path);
bt_fmt.add_context()?;
let mut idx = 0;
}
}
for (key, maybe_val) in self.vars.iter() {
- if let &Some(ref val) = maybe_val {
+ if let Some(ref val) = maybe_val {
env::set_var(key, val);
} else {
env::remove_var(key);
#[inline]
fn final_lead_surrogate(&self) -> Option<u16> {
- let len = self.len();
- if len < 3 {
- return None;
- }
- match &self.bytes[(len - 3)..] {
- &[0xED, b2 @ 0xA0..=0xAF, b3] => Some(decode_surrogate(b2, b3)),
+ match self.bytes {
+ [.., 0xED, b2 @ 0xA0..=0xAF, b3] => Some(decode_surrogate(b2, b3)),
_ => None,
}
}
#[inline]
fn initial_trail_surrogate(&self) -> Option<u16> {
- let len = self.len();
- if len < 3 {
- return None;
- }
- match &self.bytes[..3] {
- &[0xED, b2 @ 0xB0..=0xBF, b3] => Some(decode_surrogate(b2, b3)),
+ match self.bytes {
+ [0xED, b2 @ 0xB0..=0xBF, b3, ..] => Some(decode_surrogate(b2, b3)),
_ => None,
}
}
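The surrogate helpers above replace manual length checks with subslice patterns. A self-contained sketch of the same idiom (hypothetical helper, illustrative byte range):

```rust
fn final_three(bytes: &[u8]) -> Option<(u8, u8, u8)> {
    // `[.., a, b, c]` matches the last three elements of a slice of any
    // length; slices shorter than three simply fall through to `None`,
    // so no explicit `len()` check is needed.
    match bytes {
        [.., a, b @ 0xA0..=0xAF, c] => Some((*a, *b, *c)),
        _ => None,
    }
}

fn main() {
    assert_eq!(final_three(&[1, 0xED, 0xA5, 7]), Some((0xED, 0xA5, 7)));
    assert_eq!(final_three(&[0xED, 0x90, 7]), None); // range guard fails
    assert_eq!(final_three(&[1, 2]), None); // too short
}
```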
bootstrap || !disable_unstable_features
}
-// Gets the CLI options assotiated with `report-time` feature.
+// Gets the CLI options associated with `report-time` feature.
fn get_time_options(
matches: &getopts::Matches,
allow_unstable: bool,
if !quiet {
if ntest != 0 || nbench != 0 {
- writeln!(output, "")?;
+ writeln!(output)?;
}
writeln!(output, "{}, {}", plural(ntest, "test"), plural(nbench, "benchmark"))?;
Some(_) => test_output.push(b'\n'),
None => (),
}
- write!(test_output, "---- {} stderr ----\n", test_name).unwrap();
+ writeln!(test_output, "---- {} stderr ----", test_name).unwrap();
}
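The `write!`/`writeln!` cleanups above are behavior-preserving; a quick check that the two spellings produce identical output:

```rust
use std::fmt::Write;

fn main() {
    let (mut a, mut b) = (String::new(), String::new());
    // `writeln!(w, "x")` is equivalent to `write!(w, "x\n")`, and a bare
    // `writeln!(w)` replaces the clippy-flagged `writeln!(w, "")`.
    write!(a, "stack backtrace:\n").unwrap();
    writeln!(b, "stack backtrace:").unwrap();
    assert_eq!(a, b);
}
```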
// Process exit code to be used to indicate test failures.
const ERROR_EXIT_CODE: i32 = 101;
-const SECONDARY_TEST_INVOKER_VAR: &'static str = "__RUST_TEST_INVOKE";
+const SECONDARY_TEST_INVOKER_VAR: &str = "__RUST_TEST_INVOKE";
// The default console test runner. It accepts the command line
// arguments and a vector of test_descs.
.filter(|test| test.desc.name.as_slice() == name)
.map(make_owned_test)
.next()
- .expect(&format!("couldn't find a test with the provided name '{}'", name));
+ .unwrap_or_else(|| panic!("couldn't find a test with the provided name '{}'", name));
let TestDescAndFn { desc, testfn } = test;
let testfn = match testfn {
StaticTestFn(f) => f,
Json,
}
-/// Whether ignored test should be runned or not
+/// Whether ignored tests should be run or not
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
pub enum RunIgnored {
Yes,
}
fn median(&self) -> f64 {
- self.percentile(50 as f64)
+ self.percentile(50_f64)
}
fn var(&self) -> f64 {
}
fn std_dev_pct(&self) -> f64 {
- let hundred = 100 as f64;
+ let hundred = 100_f64;
(self.std_dev() / self.mean()) * hundred
}
}
fn median_abs_dev_pct(&self) -> f64 {
- let hundred = 100 as f64;
+ let hundred = 100_f64;
(self.median_abs_dev() / self.median()) * hundred
}
fn quartiles(&self) -> (f64, f64, f64) {
let mut tmp = self.to_vec();
local_sort(&mut tmp);
- let first = 25f64;
+ let first = 25_f64;
let a = percentile_of_sorted(&tmp, first);
- let second = 50f64;
+ let second = 50_f64;
let b = percentile_of_sorted(&tmp, second);
- let third = 75f64;
+ let third = 75_f64;
let c = percentile_of_sorted(&tmp, third);
(a, b, c)
}
}
let zero: f64 = 0.0;
assert!(zero <= pct);
- let hundred = 100f64;
+ let hundred = 100_f64;
assert!(pct <= hundred);
if pct == hundred {
return sorted_samples[sorted_samples.len() - 1];
let mut tmp = samples.to_vec();
local_sort(&mut tmp);
let lo = percentile_of_sorted(&tmp, pct);
- let hundred = 100 as f64;
+ let hundred = 100_f64;
let hi = percentile_of_sorted(&tmp, hundred - pct);
for samp in samples {
if *samp > hi {
unsafe impl Send for TestResult {}
/// Creates a `TestResult` depending on the raw result of test execution
-/// and assotiated data.
+/// and associated data.
pub fn calc_result<'a>(
desc: &TestDesc,
task_result: Result<(), &'a (dyn Any + 'static + Send)>,
}
pub fn with_padding(&self, padding: NamePadding) -> TestName {
- let name = match self {
- &TestName::StaticTestName(name) => Cow::Borrowed(name),
- &TestName::DynTestName(ref name) => Cow::Owned(name.clone()),
- &TestName::AlignedTestName(ref name, _) => name.clone(),
+ let name = match *self {
+ TestName::StaticTestName(name) => Cow::Borrowed(name),
+ TestName::DynTestName(ref name) => Cow::Owned(name.clone()),
+ TestName::AlignedTestName(ref name, _) => name.clone(),
};
TestName::AlignedTestName(name, padding)
} else if target.contains("dragonfly") {
println!("cargo:rustc-link-lib=gcc_pic");
} else if target.contains("pc-windows-gnu") {
- println!("cargo:rustc-link-lib=static-nobundle=gcc_eh");
- println!("cargo:rustc-link-lib=static-nobundle=pthread");
+ // This is handled in the target spec with late_link_args_[static|dynamic]
+
+ // cfg!(bootstrap) doesn't work in build scripts
+ if env::var("RUSTC_STAGE").ok() == Some("0".to_string()) {
+ println!("cargo:rustc-link-lib=static-nobundle=gcc_eh");
+ println!("cargo:rustc-link-lib=static-nobundle=pthread");
+ }
} else if target.contains("uwp-windows-gnu") {
println!("cargo:rustc-link-lib=unwind");
} else if target.contains("fuchsia") {
}
// Unwind info registration/deregistration routines.
- // See the docs of `unwind` module in libstd.
+ // See the docs of libpanic_unwind.
extern "C" {
fn rust_eh_register_frames(eh_frame_begin: *const u8, object: *mut u8);
fn rust_eh_unregister_frames(eh_frame_begin: *const u8, object: *mut u8);
}
- unsafe fn init() {
+ unsafe extern "C" fn init() {
// register unwind info on module startup
rust_eh_register_frames(&__EH_FRAME_BEGIN__ as *const u8, &mut OBJ as *mut _ as *mut u8);
}
- unsafe fn uninit() {
+ unsafe extern "C" fn uninit() {
// unregister on shutdown
rust_eh_unregister_frames(&__EH_FRAME_BEGIN__ as *const u8, &mut OBJ as *mut _ as *mut u8);
}
- // MSVC-specific init/uninit routine registration
- pub mod ms_init {
- // .CRT$X?? sections are roughly analogous to ELF's .init_array and .fini_array,
- // except that they exploit the fact that linker will sort them alphabitically,
- // so e.g., sections with names between .CRT$XIA and .CRT$XIZ are guaranteed to be
- // placed between those two, without requiring any ordering of objects on the linker
- // command line.
- // Note that ordering of same-named sections from different objects is not guaranteed.
- // Since .CRT$XIA contains init array's header symbol, which must always come first,
- // we place our initialization callback into .CRT$XIB.
+ // MinGW-specific init/uninit routine registration
+ pub mod mingw_init {
+ // MinGW's startup objects (crt0.o / dllcrt0.o) will invoke global constructors in the
+ // .ctors and .dtors sections on startup and exit. In the case of DLLs, this is done when
+ // the DLL is loaded and unloaded.
+ //
+ // The linker will sort the sections, which ensures that our callbacks are located at the
+ // end of the list. Since constructors are run in reverse order, this ensures that our
+ // callbacks are the first and last ones executed.
- #[link_section = ".CRT$XIB"] // .CRT$XI? : C initialization callbacks
- pub static P_INIT: unsafe fn() = super::init;
+ #[link_section = ".ctors.65535"] // .ctors.* : C initialization callbacks
+ pub static P_INIT: unsafe extern "C" fn() = super::init;
- #[link_section = ".CRT$XTY"] // .CRT$XT? : C termination callbacks
- pub static P_UNINIT: unsafe fn() = super::uninit;
+ #[link_section = ".dtors.65535"] // .dtors.* : C termination callbacks
+ pub static P_UNINIT: unsafe extern "C" fn() = super::uninit;
}
}
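A Linux/ELF analogue of the `.ctors`/`.dtors` registration above, assuming a target where `.init_array` entries are run by the C runtime before `main` (a sketch of the mechanism, not how the MinGW code itself is wired up):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

static RAN: AtomicBool = AtomicBool::new(false);

extern "C" fn init() {
    RAN.store(true, Ordering::SeqCst);
}

// `#[used]` keeps the pointer alive through optimization; the linker
// gathers `.init_array` entries and each one is called before `main`.
#[used]
#[link_section = ".init_array"]
static INIT: extern "C" fn() = init;

fn main() {
    assert!(RAN.load(Ordering::SeqCst), "constructor did not run");
}
```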
} else {
for (const auto &C : PipelineStartEPCallbacks)
PB.registerPipelineStartEPCallback(C);
- for (const auto &C : OptimizerLastEPCallbacks)
- PB.registerOptimizerLastEPCallback(C);
+ if (OptStage != LLVMRustOptStage::PreLinkThinLTO) {
+ for (const auto &C : OptimizerLastEPCallbacks)
+ PB.registerOptimizerLastEPCallback(C);
+ }
switch (OptStage) {
case LLVMRustOptStage::PreLinkNoLTO:
break;
case LLVMRustOptStage::PreLinkThinLTO:
MPM = PB.buildThinLTOPreLinkDefaultPipeline(OptLevel, DebugPassManager);
+ if (!OptimizerLastEPCallbacks.empty()) {
+ FunctionPassManager FPM(DebugPassManager);
+ for (const auto &C : OptimizerLastEPCallbacks)
+ C(FPM, OptLevel);
+ MPM.addPass(createModuleToFunctionPassAdaptor(std::move(FPM)));
+ }
break;
case LLVMRustOptStage::PreLinkFatLTO:
MPM = PB.buildLTOPreLinkDefaultPipeline(OptLevel, DebugPassManager);
TimerGroup::printAll(OS);
}
-extern "C" LLVMValueRef LLVMRustGetNamedValue(LLVMModuleRef M,
- const char *Name) {
- return wrap(unwrap(M)->getNamedValue(Name));
+extern "C" LLVMValueRef LLVMRustGetNamedValue(LLVMModuleRef M, const char *Name,
+ size_t NameLen) {
+ return wrap(unwrap(M)->getNamedValue(StringRef(Name, NameLen)));
}
extern "C" LLVMValueRef LLVMRustGetOrInsertFunction(LLVMModuleRef M,
const char *Name,
+ size_t NameLen,
LLVMTypeRef FunctionTy) {
- return wrap(
- unwrap(M)->getOrInsertFunction(Name, unwrap<FunctionType>(FunctionTy))
+ return wrap(unwrap(M)
+ ->getOrInsertFunction(StringRef(Name, NameLen),
+ unwrap<FunctionType>(FunctionTy))
#if LLVM_VERSION_GE(9, 0)
- .getCallee()
+ .getCallee()
#endif
- );
+ );
}
extern "C" LLVMValueRef
}
}
-extern "C" LLVMValueRef LLVMRustInlineAsm(LLVMTypeRef Ty, char *AsmString,
- char *Constraints,
- LLVMBool HasSideEffects,
- LLVMBool IsAlignStack,
- LLVMRustAsmDialect Dialect) {
- return wrap(InlineAsm::get(unwrap<FunctionType>(Ty), AsmString, Constraints,
+extern "C" LLVMValueRef
+LLVMRustInlineAsm(LLVMTypeRef Ty, char *AsmString, size_t AsmStringLen,
+ char *Constraints, size_t ConstraintsLen,
+ LLVMBool HasSideEffects, LLVMBool IsAlignStack,
+ LLVMRustAsmDialect Dialect) {
+ return wrap(InlineAsm::get(unwrap<FunctionType>(Ty),
+ StringRef(AsmString, AsmStringLen),
+ StringRef(Constraints, ConstraintsLen),
HasSideEffects, IsAlignStack, fromRust(Dialect)));
}
-extern "C" bool LLVMRustInlineAsmVerify(LLVMTypeRef Ty,
- char *Constraints) {
- return InlineAsm::Verify(unwrap<FunctionType>(Ty), Constraints);
+extern "C" bool LLVMRustInlineAsmVerify(LLVMTypeRef Ty, char *Constraints,
+ size_t ConstraintsLen) {
+ return InlineAsm::Verify(unwrap<FunctionType>(Ty),
+ StringRef(Constraints, ConstraintsLen));
}
-extern "C" void LLVMRustAppendModuleInlineAsm(LLVMModuleRef M, const char *Asm) {
- unwrap(M)->appendModuleInlineAsm(StringRef(Asm));
+extern "C" void LLVMRustAppendModuleInlineAsm(LLVMModuleRef M, const char *Asm,
+ size_t AsmLen) {
+ unwrap(M)->appendModuleInlineAsm(StringRef(Asm, AsmLen));
}
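A sketch of why the FFI signatures above now take (pointer, length) pairs: Rust strings are not NUL-terminated, so handing `as_ptr()` alone to a C API that expects a C string would read past the end (or truncate at an interior NUL). Passing the length explicitly lets the callee build a bounded view, such as LLVM's `StringRef`. The helper name is hypothetical:

```rust
// Split a &str into the (pointer, length) pair a bounded C API would take.
fn as_ptr_len(s: &str) -> (*const u8, usize) {
    (s.as_ptr(), s.len())
}

fn main() {
    let name = "foo\0bar"; // interior NUL is fine with explicit lengths
    let (ptr, len) = as_ptr_len(name);
    assert_eq!(len, 7);
    let back = unsafe { std::slice::from_raw_parts(ptr, len) };
    assert_eq!(back, name.as_bytes());
}
```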
typedef DIBuilder *LLVMRustDIBuilderRef;
extern "C" LLVMMetadataRef LLVMRustDIBuilderCreateCompileUnit(
LLVMRustDIBuilderRef Builder, unsigned Lang, LLVMMetadataRef FileRef,
- const char *Producer, bool isOptimized, const char *Flags,
- unsigned RuntimeVer, const char *SplitName,
+ const char *Producer, size_t ProducerLen, bool isOptimized,
+ const char *Flags, unsigned RuntimeVer,
+ const char *SplitName, size_t SplitNameLen,
LLVMRustDebugEmissionKind Kind) {
auto *File = unwrapDI<DIFile>(FileRef);
- return wrap(Builder->createCompileUnit(Lang, File, Producer, isOptimized,
- Flags, RuntimeVer, SplitName,
+ return wrap(Builder->createCompileUnit(Lang, File, StringRef(Producer, ProducerLen),
+ isOptimized, Flags, RuntimeVer,
+ StringRef(SplitName, SplitNameLen),
fromRust(Kind)));
}
-extern "C" LLVMMetadataRef
-LLVMRustDIBuilderCreateFile(LLVMRustDIBuilderRef Builder, const char *Filename,
- const char *Directory) {
- return wrap(Builder->createFile(Filename, Directory));
+extern "C" LLVMMetadataRef LLVMRustDIBuilderCreateFile(
+ LLVMRustDIBuilderRef Builder,
+ const char *Filename, size_t FilenameLen,
+ const char *Directory, size_t DirectoryLen) {
+ return wrap(Builder->createFile(StringRef(Filename, FilenameLen),
+ StringRef(Directory, DirectoryLen)));
}
extern "C" LLVMMetadataRef
}
extern "C" LLVMMetadataRef LLVMRustDIBuilderCreateFunction(
- LLVMRustDIBuilderRef Builder, LLVMMetadataRef Scope, const char *Name,
- const char *LinkageName, LLVMMetadataRef File, unsigned LineNo,
+ LLVMRustDIBuilderRef Builder, LLVMMetadataRef Scope,
+ const char *Name, size_t NameLen,
+ const char *LinkageName, size_t LinkageNameLen,
+ LLVMMetadataRef File, unsigned LineNo,
LLVMMetadataRef Ty, unsigned ScopeLine, LLVMRustDIFlags Flags,
LLVMRustDISPFlags SPFlags, LLVMValueRef Fn, LLVMMetadataRef TParam,
LLVMMetadataRef Decl) {
llvmFlags |= DINode::DIFlags::FlagMainSubprogram;
#endif
DISubprogram *Sub = Builder->createFunction(
- unwrapDI<DIScope>(Scope), Name, LinkageName, unwrapDI<DIFile>(File),
- LineNo, unwrapDI<DISubroutineType>(Ty), ScopeLine, llvmFlags,
+ unwrapDI<DIScope>(Scope),
+ StringRef(Name, NameLen),
+ StringRef(LinkageName, LinkageNameLen),
+ unwrapDI<DIFile>(File), LineNo,
+ unwrapDI<DISubroutineType>(Ty), ScopeLine, llvmFlags,
llvmSPFlags, TParams, unwrapDIPtr<DISubprogram>(Decl));
#else
bool IsLocalToUnit = isSet(SPFlags & LLVMRustDISPFlags::SPFlagLocalToUnit);
if (isSet(SPFlags & LLVMRustDISPFlags::SPFlagMainSubprogram))
llvmFlags |= DINode::DIFlags::FlagMainSubprogram;
DISubprogram *Sub = Builder->createFunction(
- unwrapDI<DIScope>(Scope), Name, LinkageName, unwrapDI<DIFile>(File),
- LineNo, unwrapDI<DISubroutineType>(Ty), IsLocalToUnit, IsDefinition,
+ unwrapDI<DIScope>(Scope),
+ StringRef(Name, NameLen),
+ StringRef(LinkageName, LinkageNameLen),
+ unwrapDI<DIFile>(File), LineNo,
+ unwrapDI<DISubroutineType>(Ty), IsLocalToUnit, IsDefinition,
ScopeLine, llvmFlags, IsOptimized, TParams,
unwrapDIPtr<DISubprogram>(Decl));
#endif
return wrap(Sub);
}
-extern "C" LLVMMetadataRef
-LLVMRustDIBuilderCreateBasicType(LLVMRustDIBuilderRef Builder, const char *Name,
- uint64_t SizeInBits, uint32_t AlignInBits,
- unsigned Encoding) {
- return wrap(Builder->createBasicType(Name, SizeInBits, Encoding));
+extern "C" LLVMMetadataRef LLVMRustDIBuilderCreateBasicType(
+ LLVMRustDIBuilderRef Builder, const char *Name, size_t NameLen,
+ uint64_t SizeInBits, uint32_t AlignInBits, unsigned Encoding) {
+ return wrap(Builder->createBasicType(StringRef(Name, NameLen), SizeInBits, Encoding));
}
extern "C" LLVMMetadataRef LLVMRustDIBuilderCreatePointerType(
LLVMRustDIBuilderRef Builder, LLVMMetadataRef PointeeTy,
- uint64_t SizeInBits, uint32_t AlignInBits, const char *Name) {
+ uint64_t SizeInBits, uint32_t AlignInBits, unsigned AddressSpace,
+ const char *Name, size_t NameLen) {
return wrap(Builder->createPointerType(unwrapDI<DIType>(PointeeTy),
SizeInBits, AlignInBits,
- /* DWARFAddressSpace */ None,
- Name));
+ AddressSpace,
+ StringRef(Name, NameLen)));
}
extern "C" LLVMMetadataRef LLVMRustDIBuilderCreateStructType(
- LLVMRustDIBuilderRef Builder, LLVMMetadataRef Scope, const char *Name,
+ LLVMRustDIBuilderRef Builder, LLVMMetadataRef Scope,
+ const char *Name, size_t NameLen,
LLVMMetadataRef File, unsigned LineNumber, uint64_t SizeInBits,
uint32_t AlignInBits, LLVMRustDIFlags Flags,
LLVMMetadataRef DerivedFrom, LLVMMetadataRef Elements,
unsigned RunTimeLang, LLVMMetadataRef VTableHolder,
- const char *UniqueId) {
+ const char *UniqueId, size_t UniqueIdLen) {
return wrap(Builder->createStructType(
- unwrapDI<DIDescriptor>(Scope), Name, unwrapDI<DIFile>(File), LineNumber,
+ unwrapDI<DIDescriptor>(Scope), StringRef(Name, NameLen),
+ unwrapDI<DIFile>(File), LineNumber,
SizeInBits, AlignInBits, fromRust(Flags), unwrapDI<DIType>(DerivedFrom),
DINodeArray(unwrapDI<MDTuple>(Elements)), RunTimeLang,
- unwrapDI<DIType>(VTableHolder), UniqueId));
+ unwrapDI<DIType>(VTableHolder), StringRef(UniqueId, UniqueIdLen)));
}
extern "C" LLVMMetadataRef LLVMRustDIBuilderCreateVariantPart(
- LLVMRustDIBuilderRef Builder, LLVMMetadataRef Scope, const char *Name,
+ LLVMRustDIBuilderRef Builder, LLVMMetadataRef Scope,
+ const char *Name, size_t NameLen,
LLVMMetadataRef File, unsigned LineNumber, uint64_t SizeInBits,
uint32_t AlignInBits, LLVMRustDIFlags Flags, LLVMMetadataRef Discriminator,
- LLVMMetadataRef Elements, const char *UniqueId) {
+ LLVMMetadataRef Elements, const char *UniqueId, size_t UniqueIdLen) {
return wrap(Builder->createVariantPart(
- unwrapDI<DIDescriptor>(Scope), Name, unwrapDI<DIFile>(File), LineNumber,
+ unwrapDI<DIDescriptor>(Scope), StringRef(Name, NameLen),
+ unwrapDI<DIFile>(File), LineNumber,
SizeInBits, AlignInBits, fromRust(Flags), unwrapDI<DIDerivedType>(Discriminator),
- DINodeArray(unwrapDI<MDTuple>(Elements)), UniqueId));
+ DINodeArray(unwrapDI<MDTuple>(Elements)), StringRef(UniqueId, UniqueIdLen)));
}
extern "C" LLVMMetadataRef LLVMRustDIBuilderCreateMemberType(
- LLVMRustDIBuilderRef Builder, LLVMMetadataRef Scope, const char *Name,
+ LLVMRustDIBuilderRef Builder, LLVMMetadataRef Scope,
+ const char *Name, size_t NameLen,
LLVMMetadataRef File, unsigned LineNo, uint64_t SizeInBits,
uint32_t AlignInBits, uint64_t OffsetInBits, LLVMRustDIFlags Flags,
LLVMMetadataRef Ty) {
- return wrap(Builder->createMemberType(unwrapDI<DIDescriptor>(Scope), Name,
+ return wrap(Builder->createMemberType(unwrapDI<DIDescriptor>(Scope),
+ StringRef(Name, NameLen),
unwrapDI<DIFile>(File), LineNo,
SizeInBits, AlignInBits, OffsetInBits,
fromRust(Flags), unwrapDI<DIType>(Ty)));
extern "C" LLVMMetadataRef LLVMRustDIBuilderCreateVariantMemberType(
LLVMRustDIBuilderRef Builder, LLVMMetadataRef Scope,
- const char *Name, LLVMMetadataRef File, unsigned LineNo, uint64_t SizeInBits,
- uint32_t AlignInBits, uint64_t OffsetInBits, LLVMValueRef Discriminant,
+ const char *Name, size_t NameLen, LLVMMetadataRef File, unsigned LineNo,
+ uint64_t SizeInBits, uint32_t AlignInBits, uint64_t OffsetInBits, LLVMValueRef Discriminant,
LLVMRustDIFlags Flags, LLVMMetadataRef Ty) {
llvm::ConstantInt* D = nullptr;
if (Discriminant) {
D = unwrap<llvm::ConstantInt>(Discriminant);
}
- return wrap(Builder->createVariantMemberType(unwrapDI<DIDescriptor>(Scope), Name,
+ return wrap(Builder->createVariantMemberType(unwrapDI<DIDescriptor>(Scope),
+ StringRef(Name, NameLen),
unwrapDI<DIFile>(File), LineNo,
SizeInBits, AlignInBits, OffsetInBits, D,
fromRust(Flags), unwrapDI<DIType>(Ty)));
}
extern "C" LLVMMetadataRef LLVMRustDIBuilderCreateStaticVariable(
- LLVMRustDIBuilderRef Builder, LLVMMetadataRef Context, const char *Name,
- const char *LinkageName, LLVMMetadataRef File, unsigned LineNo,
+ LLVMRustDIBuilderRef Builder, LLVMMetadataRef Context,
+ const char *Name, size_t NameLen,
+ const char *LinkageName, size_t LinkageNameLen,
+ LLVMMetadataRef File, unsigned LineNo,
LLVMMetadataRef Ty, bool IsLocalToUnit, LLVMValueRef V,
LLVMMetadataRef Decl = nullptr, uint32_t AlignInBits = 0) {
llvm::GlobalVariable *InitVal = cast<llvm::GlobalVariable>(unwrap(V));
}
llvm::DIGlobalVariableExpression *VarExpr = Builder->createGlobalVariableExpression(
- unwrapDI<DIDescriptor>(Context), Name, LinkageName,
+ unwrapDI<DIDescriptor>(Context), StringRef(Name, NameLen),
+ StringRef(LinkageName, LinkageNameLen),
unwrapDI<DIFile>(File), LineNo, unwrapDI<DIType>(Ty), IsLocalToUnit,
#if LLVM_VERSION_GE(10, 0)
/* isDefined */ true,
extern "C" LLVMMetadataRef LLVMRustDIBuilderCreateVariable(
LLVMRustDIBuilderRef Builder, unsigned Tag, LLVMMetadataRef Scope,
- const char *Name, LLVMMetadataRef File, unsigned LineNo,
+ const char *Name, size_t NameLen,
+ LLVMMetadataRef File, unsigned LineNo,
LLVMMetadataRef Ty, bool AlwaysPreserve, LLVMRustDIFlags Flags,
unsigned ArgNo, uint32_t AlignInBits) {
if (Tag == 0x100) { // DW_TAG_auto_variable
return wrap(Builder->createAutoVariable(
- unwrapDI<DIDescriptor>(Scope), Name, unwrapDI<DIFile>(File), LineNo,
+ unwrapDI<DIDescriptor>(Scope), StringRef(Name, NameLen),
+ unwrapDI<DIFile>(File), LineNo,
unwrapDI<DIType>(Ty), AlwaysPreserve, fromRust(Flags), AlignInBits));
} else {
return wrap(Builder->createParameterVariable(
- unwrapDI<DIDescriptor>(Scope), Name, ArgNo, unwrapDI<DIFile>(File),
- LineNo, unwrapDI<DIType>(Ty), AlwaysPreserve, fromRust(Flags)));
+ unwrapDI<DIDescriptor>(Scope), StringRef(Name, NameLen), ArgNo,
+ unwrapDI<DIFile>(File), LineNo,
+ unwrapDI<DIType>(Ty), AlwaysPreserve, fromRust(Flags)));
}
}
unwrap(InsertAtEnd)));
}
-extern "C" LLVMMetadataRef
-LLVMRustDIBuilderCreateEnumerator(LLVMRustDIBuilderRef Builder,
- const char *Name, uint64_t Val) {
- return wrap(Builder->createEnumerator(Name, Val));
+extern "C" LLVMMetadataRef LLVMRustDIBuilderCreateEnumerator(
+ LLVMRustDIBuilderRef Builder, const char *Name, size_t NameLen,
+ int64_t Value, bool IsUnsigned) {
+ return wrap(Builder->createEnumerator(StringRef(Name, NameLen), Value, IsUnsigned));
}
extern "C" LLVMMetadataRef LLVMRustDIBuilderCreateEnumerationType(
- LLVMRustDIBuilderRef Builder, LLVMMetadataRef Scope, const char *Name,
+ LLVMRustDIBuilderRef Builder, LLVMMetadataRef Scope,
+ const char *Name, size_t NameLen,
LLVMMetadataRef File, unsigned LineNumber, uint64_t SizeInBits,
uint32_t AlignInBits, LLVMMetadataRef Elements,
LLVMMetadataRef ClassTy, bool IsScoped) {
return wrap(Builder->createEnumerationType(
- unwrapDI<DIDescriptor>(Scope), Name, unwrapDI<DIFile>(File), LineNumber,
+ unwrapDI<DIDescriptor>(Scope), StringRef(Name, NameLen),
+ unwrapDI<DIFile>(File), LineNumber,
SizeInBits, AlignInBits, DINodeArray(unwrapDI<MDTuple>(Elements)),
unwrapDI<DIType>(ClassTy), "", IsScoped));
}
extern "C" LLVMMetadataRef LLVMRustDIBuilderCreateUnionType(
- LLVMRustDIBuilderRef Builder, LLVMMetadataRef Scope, const char *Name,
+ LLVMRustDIBuilderRef Builder, LLVMMetadataRef Scope,
+ const char *Name, size_t NameLen,
LLVMMetadataRef File, unsigned LineNumber, uint64_t SizeInBits,
uint32_t AlignInBits, LLVMRustDIFlags Flags, LLVMMetadataRef Elements,
- unsigned RunTimeLang, const char *UniqueId) {
+ unsigned RunTimeLang, const char *UniqueId, size_t UniqueIdLen) {
return wrap(Builder->createUnionType(
- unwrapDI<DIDescriptor>(Scope), Name, unwrapDI<DIFile>(File), LineNumber,
- SizeInBits, AlignInBits, fromRust(Flags),
- DINodeArray(unwrapDI<MDTuple>(Elements)), RunTimeLang, UniqueId));
+ unwrapDI<DIDescriptor>(Scope), StringRef(Name, NameLen), unwrapDI<DIFile>(File),
+ LineNumber, SizeInBits, AlignInBits, fromRust(Flags),
+ DINodeArray(unwrapDI<MDTuple>(Elements)), RunTimeLang,
+ StringRef(UniqueId, UniqueIdLen)));
}
extern "C" LLVMMetadataRef LLVMRustDIBuilderCreateTemplateTypeParameter(
- LLVMRustDIBuilderRef Builder, LLVMMetadataRef Scope, const char *Name,
+ LLVMRustDIBuilderRef Builder, LLVMMetadataRef Scope,
+ const char *Name, size_t NameLen,
LLVMMetadataRef Ty, LLVMMetadataRef File, unsigned LineNo,
unsigned ColumnNo) {
return wrap(Builder->createTemplateTypeParameter(
- unwrapDI<DIDescriptor>(Scope), Name, unwrapDI<DIType>(Ty)));
+ unwrapDI<DIDescriptor>(Scope), StringRef(Name, NameLen), unwrapDI<DIType>(Ty)));
}
-extern "C" LLVMMetadataRef
-LLVMRustDIBuilderCreateNameSpace(LLVMRustDIBuilderRef Builder,
- LLVMMetadataRef Scope, const char *Name,
- LLVMMetadataRef File, unsigned LineNo) {
+extern "C" LLVMMetadataRef LLVMRustDIBuilderCreateNameSpace(
+ LLVMRustDIBuilderRef Builder, LLVMMetadataRef Scope,
+ const char *Name, size_t NameLen, bool ExportSymbols) {
return wrap(Builder->createNameSpace(
- unwrapDI<DIDescriptor>(Scope), Name,
- false // ExportSymbols (only relevant for C++ anonymous namespaces)
- ));
+ unwrapDI<DIDescriptor>(Scope), StringRef(Name, NameLen), ExportSymbols
+ ));
}
extern "C" void
extern "C" LLVMValueRef LLVMRustBuildCall(LLVMBuilderRef B, LLVMValueRef Fn,
LLVMValueRef *Args, unsigned NumArgs,
- OperandBundleDef *Bundle,
- const char *Name) {
+ OperandBundleDef *Bundle) {
unsigned Len = Bundle ? 1 : 0;
ArrayRef<OperandBundleDef> Bundles = makeArrayRef(Bundle, Len);
return wrap(unwrap(B)->CreateCall(
- unwrap(Fn), makeArrayRef(unwrap(Args), NumArgs), Bundles, Name));
+ unwrap(Fn), makeArrayRef(unwrap(Args), NumArgs), Bundles));
}
extern "C" LLVMValueRef LLVMRustBuildMemCpy(LLVMBuilderRef B,
# source tarball for a stable release you'll likely see `1.x.0` for rustc and
# `0.x.0` for Cargo where they were released on `date`.
-date: 2020-01-30
+date: 2020-03-12
rustc: beta
cargo: beta
# Compiler Test Documentation
Documentation for the compiler testing framework can be found in
-[the rustc guide](https://rust-lang.github.io/rustc-guide/tests/intro.html).
+[the rustc dev guide](https://rustc-dev-guide.rust-lang.org/tests/intro.html).
--- /dev/null
+// compile-flags: -O
+
+#![crate_type = "lib"]
+
+extern "C" {
+ fn bar();
+}
+
+// CHECK-LABEL: @foo
+#[no_mangle]
+pub unsafe fn foo() -> i32 {
+ // CHECK: call void @bar
+ // CHECK: ret i32 0
+ std::panic::catch_unwind(|| {
+ bar();
+ 0
+ })
+ .unwrap()
+}
// CHECK: @STATIC = {{.*}}, align 4
// This checks the constants from inline_enum_const
-// CHECK: @{{[0-9]+}} = {{.*}}, align 2
+// CHECK: @alloc5 = {{.*}}, align 2
// This checks the constants from {low,high}_align_const, they share the same
// constant, but the alignment differs, so the higher one should be used
-// CHECK: [[LOW_HIGH:@[0-9]+]] = {{.*}} getelementptr inbounds (<{ [8 x i8] }>, <{ [8 x i8] }>* @2, i32 0, i32 0, i32 0), {{.*}},
+// CHECK: [[LOW_HIGH:@[0-9]+]] = {{.*}} getelementptr inbounds (<{ [8 x i8] }>, <{ [8 x i8] }>* @alloc15, i32 0, i32 0, i32 0), {{.*}}
#[derive(Copy, Clone)]
// repr(i16) is required for the {low,high}_align_const test
--- /dev/null
+// Verify that no column information is emitted for MSVC targets
+//
+// only-msvc
+// compile-flags: -C debuginfo=2
+
+// CHECK-NOT: !DILexicalBlock({{.*}}column: {{.*}})
+// CHECK-NOT: !DILocation({{.*}}column: {{.*}})
+
+pub fn add(a: u32, b: u32) -> u32 {
+ a + b
+}
+
+fn main() {
+ let c = add(1, 2);
+ println!("{}", c);
+}
--- /dev/null
+// Verify that debuginfo column numbers are 1-based byte offsets.
+//
+// ignore-windows
+// compile-flags: -C debuginfo=2
+
+fn main() {
+ unsafe {
+ // Column numbers are 1-based. Regression test for #65437.
+ // CHECK: call void @giraffe(), !dbg [[A:!.*]]
+ giraffe();
+
+ // Column numbers use byte offsets. Regression test for #67360.
+ // CHECK: call void @turtle(), !dbg [[B:!.*]]
+/* ż */ turtle();
+
+ // CHECK: [[A]] = !DILocation(line: 10, column: 9,
+ // CHECK: [[B]] = !DILocation(line: 14, column: 10,
+ }
+}
+
+extern {
+ fn giraffe();
+ fn turtle();
+}
--- /dev/null
+// Verify that DIEnumerator uses isUnsigned flag when appropriate.
+//
+// compile-flags: -g -C no-prepopulate-passes
+
+#[repr(i64)]
+pub enum I64 {
+ I64Min = std::i64::MIN,
+ I64Max = std::i64::MAX,
+}
+
+#[repr(u64)]
+pub enum U64 {
+ U64Min = std::u64::MIN,
+ U64Max = std::u64::MAX,
+}
+
+fn main() {
+ let _a = I64::I64Min;
+ let _b = I64::I64Max;
+ let _c = U64::U64Min;
+ let _d = U64::U64Max;
+}
+
+// CHECK: !DIEnumerator(name: "I64Min", value: -9223372036854775808)
+// CHECK: !DIEnumerator(name: "I64Max", value: 9223372036854775807)
+// CHECK: !DIEnumerator(name: "U64Min", value: 0, isUnsigned: true)
+// CHECK: !DIEnumerator(name: "U64Max", value: 18446744073709551615, isUnsigned: true)
include!("aux_mod.rs");
// Here we check that the expansion of the file!() macro is mapped.
-// CHECK: @0 = private unnamed_addr constant <{ [34 x i8] }> <{ [34 x i8] c"/the/src/remap_path_prefix/main.rs" }>, align 1
+// CHECK: @alloc1 = private unnamed_addr constant <{ [34 x i8] }> <{ [34 x i8] c"/the/src/remap_path_prefix/main.rs" }>, align 1
pub static FILE_PATH: &'static str = file!();
fn main() {
use std::iter;
-// CHECK: @helper([[USIZE:i[0-9]+]] %_1)
-#[no_mangle]
-pub fn helper(_: usize) {
-}
-
// CHECK-LABEL: @repeat_take_collect
#[no_mangle]
pub fn repeat_take_collect() -> Vec<u8> {
-// CHECK: call void @llvm.memset.p0i8.[[USIZE]](i8* {{(nonnull )?}}align 1{{.*}} %{{[0-9]+}}, i8 42, [[USIZE]] 100000, i1 false)
+// CHECK: call void @llvm.memset.p0i8.i{{[0-9]+}}(i8* {{(nonnull )?}}align 1{{.*}} %{{[0-9]+}}, i8 42, i{{[0-9]+}} 100000, i1 false)
iter::repeat(42).take(100000).collect()
}
//[MSAN-RECOVER-LTO] compile-flags: -Zsanitizer=memory -Zsanitizer-recover=memory -C lto=fat
//
// MSAN-NOT: @__msan_keep_going
-// MSAN-RECOVER: @__msan_keep_going = weak_odr {{.*}} constant i32 1
-// MSAN-RECOVER-LTO: @__msan_keep_going = weak_odr {{.*}} constant i32 1
+// MSAN-RECOVER: @__msan_keep_going = weak_odr {{.*}}constant i32 1
+// MSAN-RECOVER-LTO: @__msan_keep_going = weak_odr {{.*}}constant i32 1
// ASAN-LABEL: define i32 @penguin(
// ASAN: call void @__asan_report_load4(i64 %0)
--- /dev/null
+// compile-flags: -C panic=abort -O
+
+#![crate_type = "lib"]
+#![feature(unwind_attributes, core_intrinsics)]
+
+extern "C" {
+ #[unwind(allow)]
+ fn bar(data: *mut u8);
+}
+extern "Rust" {
+ fn catch(data: *mut u8, exception: *mut u8);
+}
+
+// CHECK-LABEL: @foo
+#[no_mangle]
+pub unsafe fn foo() -> i32 {
+ // CHECK: call void @bar
+ // CHECK: ret i32 0
+ std::intrinsics::r#try(|x| bar(x), 0 as *mut u8, |x, y| catch(x, y))
+}
fn panic_impl(info: &PanicInfo) -> ! { loop {} }
#[lang = "eh_personality"]
fn eh_personality() {}
-#[lang = "eh_unwind_resume"]
-fn eh_unwind_resume() {}
+++ /dev/null
-// compile-flags: -Z chalk
-
-trait Foo { }
-
-impl Foo for i32 { }
-
-impl Foo for u32 { }
-
-fn gimme<F: Foo>() { }
-
-// Note: this also tests that `std::process::Termination` is implemented for `()`.
-fn main() {
- gimme::<i32>();
- gimme::<u32>();
- gimme::<f32>(); //~ERROR the trait bound `f32: Foo` is not satisfied
-}
+++ /dev/null
-// compile-flags: -Z chalk
-
-trait Foo { }
-
-impl<T> Foo for (T, u32) { }
-
-fn gimme<F: Foo>() { }
-
-fn foo<T>() {
- gimme::<(T, u32)>();
- gimme::<(Option<T>, u32)>();
- gimme::<(Option<T>, f32)>(); //~ ERROR
-}
-
-fn main() {
- gimme::<(i32, u32)>();
- gimme::<(i32, f32)>(); //~ ERROR
-}
+++ /dev/null
-// compile-flags: -Z chalk
-
-trait Foo: Sized { }
-
-trait Bar {
- type Item: Foo;
-}
-
-impl Foo for i32 { }
-
-impl Foo for str { }
-//~^ ERROR the size for values of type `str` cannot be known at compilation time
-
-// Implicit `T: Sized` bound.
-impl<T> Foo for Option<T> { }
-
-impl Bar for () {
- type Item = i32;
-}
-
-impl<T> Bar for Option<T> {
- type Item = Option<T>;
-}
-
-impl Bar for f32 {
-//~^ ERROR the trait bound `f32: Foo` is not satisfied
- type Item = f32;
- //~^ ERROR the trait bound `f32: Foo` is not satisfied
-}
-
-trait Baz<U: ?Sized> where U: Foo { }
-
-impl Baz<i32> for i32 { }
-
-impl Baz<f32> for f32 { }
-//~^ ERROR the trait bound `f32: Foo` is not satisfied
-
-fn main() {
-}
+++ /dev/null
-// compile-flags: -Z chalk
-
-#![feature(trivial_bounds)]
-
-trait Bar {
- fn foo();
-}
-trait Foo: Bar { }
-
-struct S where S: Foo;
-
-impl Foo for S {
-}
-
-fn bar<T: Bar>() {
- T::foo();
-}
-
-fn foo<T: Foo>() {
- bar::<T>()
-}
-
-fn main() {
- // For some reason, the error is duplicated...
-
- foo::<S>() //~ ERROR the type `S` is not well-formed (chalk)
- //~^ ERROR the type `S` is not well-formed (chalk)
-}
+++ /dev/null
-// compile-flags: -Z chalk
-
-trait Foo { }
-
-struct S<T: Foo> {
- x: T,
-}
-
-impl Foo for i32 { }
-impl<T> Foo for Option<T> { }
-
-fn main() {
- let s = S {
- x: 5,
- };
-
- let s = S { //~ ERROR the trait bound `{float}: Foo` is not satisfied
- x: 5.0,
- };
-
- let s = S {
- x: Some(5.0),
- };
-}
+++ /dev/null
-// ignore-lldb
-
-// compile-flags:-g
-
-// gdb-command:run
-
-// gdb-command:info locals
-// gdb-check:No locals.
-// gdb-command:continue
-
-// gdb-command:info locals
-// gdb-check:abc = 10
-// gdb-command:continue
-
-#![allow(unused_variables)]
-#![feature(no_debug)]
-#![feature(omit_gdb_pretty_printer_section)]
-#![omit_gdb_pretty_printer_section]
-
-#[inline(never)]
-fn id<T>(x: T) -> T {x}
-
-fn function_with_debuginfo() {
- let abc = 10_usize;
- id(abc); // #break
-}
-
-#[no_debug]
-fn function_without_debuginfo() {
- let abc = -57i32;
- id(abc); // #break
-}
-
-fn main() {
- function_without_debuginfo();
- function_with_debuginfo();
-}
// compile-flags: -Zquery-dep-graph
#![feature(rustc_attrs)]
-#![allow(private_no_mangle_fns)]
-
-#![rustc_partition_codegened(module="change_symbol_export_status-mod1", cfg="rpass2")]
-#![rustc_partition_reused(module="change_symbol_export_status-mod2", cfg="rpass2")]
+#![rustc_partition_codegened(module = "change_symbol_export_status-mod1", cfg = "rpass2")]
+#![rustc_partition_reused(module = "change_symbol_export_status-mod2", cfg = "rpass2")]
// This test case makes sure that a change in symbol visibility is detected by
// our dependency tracking. We do this by changing a module's visibility to
-// Adapated from rust-lang/rust#58813
+// Adapted from rust-lang/rust#58813
// revisions: rpass1 cfail2
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,mir_built,optimized_mir,typeck_tables_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,mir_built,optimized_mir,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn change_callee_function() {
callee2(1, 2)
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,mir_built,optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_argument_function() {
callee1(1, 3)
#[cfg(not(cfail1))]
use super::callee2 as callee;
- #[rustc_clean(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
- #[rustc_dirty(label="HirBody", cfg="cfail2")]
- #[rustc_clean(label="HirBody", cfg="cfail3")]
+ #[rustc_clean(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner_items", cfg="cfail3")]
pub fn change_callee_indirectly_function() {
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,mir_built,optimized_mir,typeck_tables_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,mir_built,optimized_mir,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn change_callee_method() {
let s = Struct;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,mir_built,optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_argument_method() {
let s = Struct;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,mir_built,optimized_mir,typeck_tables_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,mir_built,optimized_mir,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn change_ufcs_callee_method() {
let s = Struct;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,mir_built,optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_argument_method_ufcs() {
let s = Struct;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,mir_built,optimized_mir,typeck_tables_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,mir_built,optimized_mir,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
-// One might think this would be expanded in the HirBody/Mir, but it actually
-// results in slightly different Hir/Mir.
+// One might think this would be expanded in the hir_owner_items/Mir, but it actually
+// results in slightly different hir_owner/Mir.
pub fn change_to_ufcs() {
let s = Struct;
Struct::method1(&s, 'x', true);
#[cfg(not(cfail1))]
use super::Struct2 as Struct;
- #[rustc_clean(cfg="cfail2", except="HirBody,mir_built,optimized_mir,typeck_tables_of")]
+ #[rustc_clean(cfg="cfail2", except="hir_owner_items,mir_built,optimized_mir,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
pub fn change_closure_body() {
let _ = || 3u32;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, optimized_mir, typeck_tables_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built, optimized_mir, typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn add_parameter() {
let x = 0u32;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, typeck_tables_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built, typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn change_parameter_pattern() {
let _ = |(x,): (u32,)| x;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
pub fn add_move() {
let _ = move || 1;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, typeck_tables_of")]
-#[rustc_clean(cfg="cfail3")]
+#[rustc_clean(cfg = "cfail2", except = "hir_owner_items, typeck_tables_of")]
+#[rustc_clean(cfg = "cfail3")]
pub fn add_type_ascription_to_parameter() {
let closure = |x: u32| x + 1u32;
let _: u32 = closure(1);
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, optimized_mir, typeck_tables_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built, optimized_mir, typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn change_parameter_type() {
let closure = |x: u16| (x as u64) + 1;
const CONST_VISIBILITY: u8 = 0;
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
pub const CONST_VISIBILITY: u8 = 0;
const CONST_CHANGE_TYPE_1: i32 = 0;
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody,type_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,type_of")]
#[rustc_clean(cfg="cfail3")]
const CONST_CHANGE_TYPE_1: u32 = 0;
const CONST_CHANGE_TYPE_2: Option<u32> = None;
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody,type_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,type_of")]
#[rustc_clean(cfg="cfail3")]
const CONST_CHANGE_TYPE_2: Option<u64> = None;
// Change value between simple literals
-#[rustc_clean(cfg="cfail2", except="HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
const CONST_CHANGE_VALUE_1: i16 = {
#[cfg(cfail1)]
// Change value between expressions
-#[rustc_clean(cfg="cfail2", except="HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
const CONST_CHANGE_VALUE_2: i16 = {
#[cfg(cfail1)]
{ 1 + 2 }
};
-#[rustc_clean(cfg="cfail2", except="HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
const CONST_CHANGE_VALUE_3: i16 = {
#[cfg(cfail1)]
{ 2 * 3 }
};
-#[rustc_clean(cfg="cfail2", except="HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
const CONST_CHANGE_VALUE_4: i16 = {
#[cfg(cfail1)]
#[cfg(not(cfail1))]
use super::ReferencedType2 as Type;
- #[rustc_clean(cfg="cfail2", except="Hir,HirBody,type_of")]
+ #[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,type_of")]
#[rustc_clean(cfg="cfail3")]
const CONST_CHANGE_TYPE_INDIRECTLY_1: Type = Type;
- #[rustc_clean(cfg="cfail2", except="Hir,HirBody,type_of")]
+ #[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,type_of")]
#[rustc_clean(cfg="cfail3")]
const CONST_CHANGE_TYPE_INDIRECTLY_2: Option<Type> = None;
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,optimized_mir,mir_built")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,optimized_mir,mir_built")]
#[rustc_clean(cfg="cfail3")]
pub fn change_field_value_struct_like() -> Enum {
Enum::Struct {
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,typeck_tables_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
// FIXME(michaelwoerister): Interesting. I would have thought that this changes the MIR.
// And it would, if it were not all constants.
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,optimized_mir,mir_built,typeck_tables_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,optimized_mir,mir_built,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn change_constructor_path_struct_like() {
let _ = Enum2::Struct {
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,optimized_mir,mir_built")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,optimized_mir,mir_built")]
#[rustc_clean(cfg="cfail3")]
pub fn change_constructor_variant_struct_like() {
let _ = Enum2::Struct2 {
#[rustc_clean(
cfg="cfail2",
- except="fn_sig,Hir,HirBody,optimized_mir,mir_built,\
+ except="fn_sig,hir_owner,hir_owner_items,optimized_mir,mir_built,\
typeck_tables_of"
)]
#[rustc_clean(cfg="cfail3")]
#[cfg(not(cfail1))]
use super::Enum2::Struct2 as Variant;
- #[rustc_clean(cfg="cfail2", except="HirBody,optimized_mir,mir_built")]
+ #[rustc_clean(cfg="cfail2", except="hir_owner_items,optimized_mir,mir_built")]
#[rustc_clean(cfg="cfail3")]
pub fn function() -> Enum2 {
Variant {
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,optimized_mir,mir_built")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,optimized_mir,mir_built")]
#[rustc_clean(cfg="cfail3")]
pub fn change_field_value_tuple_like() -> Enum {
Enum::Tuple(0, 1, 3)
#[cfg(not(cfail1))]
#[rustc_clean(
cfg="cfail2",
- except="HirBody,optimized_mir,mir_built,typeck_tables_of"
+ except="hir_owner_items,optimized_mir,mir_built,typeck_tables_of"
)]
#[rustc_clean(cfg="cfail3")]
pub fn change_constructor_path_tuple_like() {
#[cfg(not(cfail1))]
#[rustc_clean(
cfg="cfail2",
- except="HirBody,optimized_mir,mir_built,typeck_tables_of"
+ except="hir_owner_items,optimized_mir,mir_built,typeck_tables_of"
)]
#[rustc_clean(cfg="cfail3")]
pub fn change_constructor_variant_tuple_like() {
#[rustc_clean(
cfg="cfail2",
- except="fn_sig,Hir,HirBody,optimized_mir,mir_built,\
+ except="fn_sig,hir_owner,hir_owner_items,optimized_mir,mir_built,\
typeck_tables_of"
)]
#[rustc_clean(cfg="cfail3")]
#[cfg(not(cfail1))]
use super::Enum2::Tuple2 as Variant;
- #[rustc_clean(cfg="cfail2", except="HirBody,optimized_mir,mir_built,typeck_tables_of")]
+ #[rustc_clean(cfg="cfail2", except="hir_owner_items,optimized_mir,mir_built,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn function() -> Enum2 {
Variant(0, 1, 2)
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,optimized_mir,mir_built,typeck_tables_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,optimized_mir,mir_built,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn change_constructor_path_c_like() {
let _ = Clike2::B;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,optimized_mir,mir_built")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,optimized_mir,mir_built")]
#[rustc_clean(cfg="cfail3")]
pub fn change_constructor_variant_c_like() {
let _ = Clike::C;
#[rustc_clean(
cfg="cfail2",
- except="fn_sig,Hir,HirBody,optimized_mir,mir_built,\
+ except="fn_sig,hir_owner,hir_owner_items,optimized_mir,mir_built,\
typeck_tables_of"
)]
#[rustc_clean(cfg="cfail3")]
#[cfg(not(cfail1))]
use super::Clike::B as Variant;
- #[rustc_clean(cfg="cfail2", except="HirBody,optimized_mir,mir_built")]
+ #[rustc_clean(cfg="cfail2", except="hir_owner_items,optimized_mir,mir_built")]
#[rustc_clean(cfg="cfail3")]
pub fn function() -> Clike {
Variant
enum EnumVisibility { A }
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
pub enum EnumVisibility {
A
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody,type_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,type_of")]
#[rustc_clean(cfg="cfail3")]
enum EnumChangeNameCStyleVariant {
Variant1,
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody,type_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,type_of")]
#[rustc_clean(cfg="cfail3")]
enum EnumChangeNameTupleStyleVariant {
Variant1,
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody,type_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,type_of")]
#[rustc_clean(cfg="cfail3")]
enum EnumChangeNameStructStyleVariant {
Variant1,
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
enum EnumChangeValueCStyleVariant0 {
Variant1,
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody,type_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,type_of")]
#[rustc_clean(cfg="cfail3")]
enum EnumChangeValueCStyleVariant1 {
Variant1,
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody,type_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,type_of")]
#[rustc_clean(cfg="cfail3")]
enum EnumAddCStyleVariant {
Variant1,
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody,type_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,type_of")]
#[rustc_clean(cfg="cfail3")]
enum EnumRemoveCStyleVariant {
Variant1,
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody,type_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,type_of")]
#[rustc_clean(cfg="cfail3")]
enum EnumAddTupleStyleVariant {
Variant1,
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody,type_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,type_of")]
#[rustc_clean(cfg="cfail3")]
enum EnumRemoveTupleStyleVariant {
Variant1,
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody,type_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,type_of")]
#[rustc_clean(cfg="cfail3")]
enum EnumAddStructStyleVariant {
Variant1,
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody,type_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,type_of")]
#[rustc_clean(cfg="cfail3")]
enum EnumRemoveStructStyleVariant {
Variant1,
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
enum EnumChangeFieldTypeTupleStyleVariant {
Variant1(u32,
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
enum EnumChangeFieldTypeStructStyleVariant {
Variant1,
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody,type_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,type_of")]
#[rustc_clean(cfg="cfail3")]
enum EnumChangeFieldNameStructStyleVariant {
Variant1 { a: u32, c: u32 },
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
enum EnumChangeOrderTupleStyleVariant {
Variant1(
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody,type_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,type_of")]
#[rustc_clean(cfg="cfail3")]
enum EnumChangeFieldOrderStructStyleVariant {
Variant1 { b: f32, a: u32 },
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody,type_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,type_of")]
#[rustc_clean(cfg="cfail3")]
enum EnumAddFieldTupleStyleVariant {
Variant1(u32, u32, u32),
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody,type_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,type_of")]
#[rustc_clean(cfg="cfail3")]
enum EnumAddFieldStructStyleVariant {
Variant1 { a: u32, b: u32, c: u32 },
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
#[must_use]
enum EnumAddMustUse {
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody,type_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,type_of")]
#[rustc_clean(cfg="cfail3")]
#[repr(C)]
enum EnumAddReprC {
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
enum EnumSwapUsageTypeParameters<A, B> {
Variant1 {
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
enum EnumSwapUsageLifetimeParameters<'a, 'b> {
Variant1 {
#[cfg(not(cfail1))]
use super::ReferencedType2 as FieldType;
- #[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+ #[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
enum TupleStyle {
Variant1(
#[cfg(not(cfail1))]
use super::ReferencedType2 as FieldType;
- #[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+ #[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
enum StructStyle {
Variant1 {
#[cfg(not(cfail1))]
use super::ReferencedTrait2 as Trait;
- #[rustc_clean(cfg="cfail2", except="Hir,HirBody,predicates_of")]
+ #[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,predicates_of")]
#[rustc_clean(cfg="cfail3")]
enum Enum<T: Trait> {
Variant1(T)
#[cfg(not(cfail1))]
use super::ReferencedTrait2 as Trait;
- #[rustc_clean(cfg="cfail2", except="Hir,HirBody,predicates_of")]
+ #[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,predicates_of")]
#[rustc_clean(cfg="cfail3")]
enum Enum<T> where T: Trait {
Variant1(T)
#![crate_type="rlib"]
// Case 1: The function body is not exported to metadata. If the body changes,
-// the hash of the HirBody node should change, but not the hash of
-// either the Hir or the Metadata node.
+// the hash of the hir_owner_items node should change, but not the hash of
+// either the hir_owner or the Metadata node.
#[cfg(cfail1)]
pub fn body_not_exported_to_metadata() -> u32 {
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,mir_built,optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn body_not_exported_to_metadata() -> u32 {
2
// Case 2: The function body *is* exported to metadata because the function is
-// marked as #[inline]. Only the hash of the Hir depnode should be
+// marked as #[inline]. Only the hash of the hir_owner depnode should be
// unaffected by a change to the body.
#[cfg(cfail1)]
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,mir_built,optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
#[inline]
pub fn body_exported_to_metadata_because_of_inline() -> u32 {
// Case 3: The function body *is* exported to metadata because the function is
-// generic. Only the hash of the Hir depnode should be
+// generic. Only the hash of the hir_owner depnode should be
// unaffected by a change to the body.
#[cfg(cfail1)]
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,mir_built,optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
#[inline]
pub fn body_exported_to_metadata_because_of_generic() -> u32 {
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built, optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_loop_body() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built, optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_iteration_variable_name() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, optimized_mir, typeck_tables_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built, optimized_mir, typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn change_iteration_variable_pattern() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, promoted_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built, promoted_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_iterable() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, optimized_mir, typeck_tables_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built, optimized_mir, typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn add_break() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
pub fn add_loop_label() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
pub fn add_loop_label_to_break() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built, optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_break_label() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
pub fn add_loop_label_to_continue() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built, optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_continue_label() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built, optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_continue_to_break() {
let mut _x = 0;
// revisions: cfail1 cfail2 cfail3
// compile-flags: -Z query-dep-graph -Zincremental-ignore-spans
-
#![allow(warnings)]
#![feature(linkage)]
#![feature(rustc_attrs)]
#![crate_type = "rlib"]
-
// Add Parameter ---------------------------------------------------------------
#[cfg(cfail1)]
pub fn add_parameter() {}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg = "cfail2",
- except = "Hir, HirBody, mir_built, optimized_mir, typeck_tables_of, fn_sig")]
+#[rustc_clean(
+ cfg = "cfail2",
+ except = "hir_owner, hir_owner_items, mir_built, optimized_mir, typeck_tables_of, fn_sig"
+)]
#[rustc_clean(cfg = "cfail3")]
pub fn add_parameter(p: i32) {}
-
// Add Return Type -------------------------------------------------------------
#[cfg(cfail1)]
pub fn add_return_type() {}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg = "cfail2", except = "Hir, HirBody")]
+#[rustc_clean(cfg = "cfail2", except = "hir_owner, hir_owner_items")]
#[rustc_clean(cfg = "cfail3")]
pub fn add_return_type() -> () {}
-
// Change Parameter Type -------------------------------------------------------
#[cfg(cfail1)]
pub fn type_of_parameter(p: i32) {}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg = "cfail2",
- except = "Hir, HirBody, mir_built, optimized_mir, typeck_tables_of, fn_sig")]
+#[rustc_clean(
+ cfg = "cfail2",
+ except = "hir_owner, hir_owner_items, mir_built, optimized_mir, typeck_tables_of, fn_sig"
+)]
#[rustc_clean(cfg = "cfail3")]
pub fn type_of_parameter(p: i64) {}
-
// Change Parameter Type Reference ---------------------------------------------
#[cfg(cfail1)]
pub fn type_of_parameter_ref(p: &i32) {}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg = "cfail2",
- except = "Hir, HirBody, mir_built, optimized_mir, typeck_tables_of, fn_sig")]
+#[rustc_clean(
+ cfg = "cfail2",
+ except = "hir_owner, hir_owner_items, mir_built, optimized_mir, typeck_tables_of, fn_sig"
+)]
#[rustc_clean(cfg = "cfail3")]
pub fn type_of_parameter_ref(p: &mut i32) {}
-
// Change Parameter Order ------------------------------------------------------
#[cfg(cfail1)]
pub fn order_of_parameters(p1: i32, p2: i64) {}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg = "cfail2",
- except = "Hir, HirBody, mir_built, optimized_mir, typeck_tables_of, fn_sig")]
+#[rustc_clean(
+ cfg = "cfail2",
+ except = "hir_owner, hir_owner_items, mir_built, optimized_mir, typeck_tables_of, fn_sig"
+)]
#[rustc_clean(cfg = "cfail3")]
pub fn order_of_parameters(p2: i64, p1: i32) {}
-
// Unsafe ----------------------------------------------------------------------
#[cfg(cfail1)]
pub fn make_unsafe() {}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg = "cfail2",
- except = "Hir, HirBody, mir_built, optimized_mir, typeck_tables_of, fn_sig")]
+#[rustc_clean(
+ cfg = "cfail2",
+ except = "hir_owner, hir_owner_items, mir_built, optimized_mir, typeck_tables_of, fn_sig"
+)]
#[rustc_clean(cfg = "cfail3")]
pub unsafe fn make_unsafe() {}
-
// Extern ----------------------------------------------------------------------
#[cfg(cfail1)]
pub fn make_extern() {}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg = "cfail2", except = "Hir, HirBody, typeck_tables_of, fn_sig")]
+#[rustc_clean(cfg = "cfail2", except = "hir_owner, hir_owner_items, typeck_tables_of, fn_sig")]
#[rustc_clean(cfg = "cfail3")]
pub extern "C" fn make_extern() {}
-
// Type Parameter --------------------------------------------------------------
#[cfg(cfail1)]
pub fn type_parameter() {}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg = "cfail2",
- except = "Hir, HirBody, generics_of, type_of, predicates_of")]
+#[rustc_clean(
+ cfg = "cfail2",
+ except = "hir_owner, hir_owner_items, generics_of, type_of, predicates_of"
+)]
#[rustc_clean(cfg = "cfail3")]
pub fn type_parameter<T>() {}
-
// Lifetime Parameter ----------------------------------------------------------
#[cfg(cfail1)]
pub fn lifetime_parameter() {}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg = "cfail2", except = "Hir, HirBody, generics_of")]
+#[rustc_clean(cfg = "cfail2", except = "hir_owner, hir_owner_items, generics_of")]
#[rustc_clean(cfg = "cfail3")]
pub fn lifetime_parameter<'a>() {}
-
// Trait Bound -----------------------------------------------------------------
#[cfg(cfail1)]
pub fn trait_bound<T>() {}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg = "cfail2", except = "Hir, HirBody, predicates_of")]
+#[rustc_clean(cfg = "cfail2", except = "hir_owner, hir_owner_items, predicates_of")]
#[rustc_clean(cfg = "cfail3")]
pub fn trait_bound<T: Eq>() {}
-
// Builtin Bound ---------------------------------------------------------------
#[cfg(cfail1)]
pub fn builtin_bound<T>() {}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg = "cfail2", except = "Hir, HirBody, predicates_of")]
+#[rustc_clean(cfg = "cfail2", except = "hir_owner, hir_owner_items, predicates_of")]
#[rustc_clean(cfg = "cfail3")]
pub fn builtin_bound<T: Send>() {}
-
// Lifetime Bound --------------------------------------------------------------
#[cfg(cfail1)]
pub fn lifetime_bound<'a, T>() {}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg = "cfail2",
- except = "Hir, HirBody, generics_of, type_of, predicates_of")]
+#[rustc_clean(
+ cfg = "cfail2",
+ except = "hir_owner, hir_owner_items, generics_of, type_of, predicates_of"
+)]
#[rustc_clean(cfg = "cfail3")]
pub fn lifetime_bound<'a, T: 'a>() {}
-
// Second Trait Bound ----------------------------------------------------------
#[cfg(cfail1)]
pub fn second_trait_bound<T: Eq>() {}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg = "cfail2", except = "Hir, HirBody, predicates_of")]
+#[rustc_clean(cfg = "cfail2", except = "hir_owner, hir_owner_items, predicates_of")]
#[rustc_clean(cfg = "cfail3")]
pub fn second_trait_bound<T: Eq + Clone>() {}
-
// Second Builtin Bound --------------------------------------------------------
#[cfg(cfail1)]
pub fn second_builtin_bound<T: Send>() {}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg = "cfail2", except = "Hir, HirBody, predicates_of")]
+#[rustc_clean(cfg = "cfail2", except = "hir_owner, hir_owner_items, predicates_of")]
#[rustc_clean(cfg = "cfail3")]
pub fn second_builtin_bound<T: Send + Sized>() {}
-
// Second Lifetime Bound -------------------------------------------------------
#[cfg(cfail1)]
pub fn second_lifetime_bound<'a, 'b, T: 'a>() {}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg = "cfail2",
- except = "Hir, HirBody, generics_of, type_of, predicates_of")]
+#[rustc_clean(
+ cfg = "cfail2",
+ except = "hir_owner, hir_owner_items, generics_of, type_of, predicates_of"
+)]
#[rustc_clean(cfg = "cfail3")]
pub fn second_lifetime_bound<'a, 'b, T: 'a + 'b>() {}
-
// Inline ----------------------------------------------------------------------
#[cfg(cfail1)]
pub fn inline() {}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg = "cfail2", except = "Hir, HirBody")]
+#[rustc_clean(cfg = "cfail2", except = "hir_owner, hir_owner_items")]
#[rustc_clean(cfg = "cfail3")]
#[inline]
pub fn inline() {}
-
// Inline Never ----------------------------------------------------------------
#[cfg(cfail1)]
pub fn inline_never() {}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg = "cfail2", except = "Hir, HirBody")]
+#[rustc_clean(cfg = "cfail2", except = "hir_owner, hir_owner_items")]
#[rustc_clean(cfg = "cfail3")]
#[inline(never)]
pub fn inline_never() {}
-
// No Mangle -------------------------------------------------------------------
#[cfg(cfail1)]
pub fn no_mangle() {}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg = "cfail2", except = "Hir, HirBody")]
+#[rustc_clean(cfg = "cfail2", except = "hir_owner, hir_owner_items")]
#[rustc_clean(cfg = "cfail3")]
#[no_mangle]
pub fn no_mangle() {}
-
// Linkage ---------------------------------------------------------------------
#[cfg(cfail1)]
pub fn linkage() {}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg = "cfail2", except = "Hir, HirBody")]
+#[rustc_clean(cfg = "cfail2", except = "hir_owner, hir_owner_items")]
#[rustc_clean(cfg = "cfail3")]
#[linkage = "weak_odr"]
pub fn linkage() {}
-
// Return Impl Trait -----------------------------------------------------------
#[cfg(cfail1)]
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg = "cfail2", except = "Hir, HirBody, typeck_tables_of, fn_sig")]
+#[rustc_clean(cfg = "cfail2", except = "hir_owner, hir_owner_items, typeck_tables_of, fn_sig")]
#[rustc_clean(cfg = "cfail3")]
pub fn return_impl_trait() -> impl Clone {
0
}
-
// Change Return Impl Trait ----------------------------------------------------
#[cfg(cfail1)]
0u32
}
-
// Change Return Type Indirectly -----------------------------------------------
pub struct ReferencedType1;
#[cfg(not(cfail1))]
use super::ReferencedType2 as ReturnType;
- #[rustc_clean(cfg = "cfail2",
- except = "Hir, HirBody, mir_built, optimized_mir, typeck_tables_of, fn_sig")]
+ #[rustc_clean(
+ cfg = "cfail2",
+ except = "hir_owner, hir_owner_items, mir_built, optimized_mir, typeck_tables_of, fn_sig"
+ )]
#[rustc_clean(cfg = "cfail3")]
pub fn indirect_return_type() -> ReturnType {
ReturnType {}
}
}
-
// Change Parameter Type Indirectly --------------------------------------------
pub mod change_parameter_type_indirectly {
#[cfg(not(cfail1))]
use super::ReferencedType2 as ParameterType;
- #[rustc_clean(cfg = "cfail2",
- except = "Hir, HirBody, mir_built, optimized_mir, typeck_tables_of, fn_sig")]
+ #[rustc_clean(
+ cfg = "cfail2",
+ except = "hir_owner, hir_owner_items, mir_built, optimized_mir, typeck_tables_of, fn_sig"
+ )]
#[rustc_clean(cfg = "cfail3")]
pub fn indirect_parameter_type(p: ParameterType) {}
}
-
// Change Trait Bound Indirectly -----------------------------------------------
pub trait ReferencedTrait1 {}
#[cfg(not(cfail1))]
use super::ReferencedTrait2 as Trait;
- #[rustc_clean(cfg = "cfail2", except = "Hir, HirBody, predicates_of")]
+ #[rustc_clean(cfg = "cfail2", except = "hir_owner, hir_owner_items, predicates_of")]
#[rustc_clean(cfg = "cfail3")]
pub fn indirect_trait_bound<T: Trait>(p: T) {}
}
-
// Change Trait Bound Indirectly In Where Clause -------------------------------
pub mod change_trait_bound_indirectly_in_where_clause {
#[cfg(not(cfail1))]
use super::ReferencedTrait2 as Trait;
- #[rustc_clean(cfg = "cfail2", except = "Hir, HirBody, predicates_of")]
+ #[rustc_clean(cfg = "cfail2", except = "hir_owner, hir_owner_items, predicates_of")]
#[rustc_clean(cfg = "cfail3")]
pub fn indirect_trait_bound_where<T>(p: T)
where
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,mir_built,optimized_mir,typeck_tables_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,mir_built,optimized_mir,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn change_condition(x: bool) -> u32 {
if !x {
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,mir_built,optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_then_branch(x: bool) -> u32 {
if x {
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,mir_built,optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_else_branch(x: bool) -> u32 {
if x {
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
pub fn add_else_branch(x: bool) -> u32 {
let mut ret = 1;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,mir_built,optimized_mir,typeck_tables_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,mir_built,optimized_mir,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn change_condition_if_let(x: Option<u32>) -> u32 {
if let Some(_) = x {
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,mir_built,optimized_mir,typeck_tables_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,mir_built,optimized_mir,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn change_then_branch_if_let(x: Option<u32>) -> u32 {
if let Some(x) = x {
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,mir_built,optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_else_branch_if_let(x: Option<u32>) -> u32 {
if let Some(x) = x {
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
pub fn add_else_branch_if_let(x: Option<u32>) -> u32 {
let mut ret = 1;
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
+#[rustc_clean(label="hir_owner_items", cfg="cfail3")]
fn change_simple_index(slice: &[u32]) -> u32 {
slice[4]
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
+#[rustc_clean(label="hir_owner_items", cfg="cfail3")]
fn change_lower_bound(slice: &[u32]) -> &[u32] {
&slice[2..5]
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
+#[rustc_clean(label="hir_owner_items", cfg="cfail3")]
fn change_upper_bound(slice: &[u32]) -> &[u32] {
&slice[3..7]
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
+#[rustc_clean(label="hir_owner_items", cfg="cfail3")]
fn add_lower_bound(slice: &[u32]) -> &[u32] {
&slice[3..4]
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
+#[rustc_clean(label="hir_owner_items", cfg="cfail3")]
fn add_upper_bound(slice: &[u32]) -> &[u32] {
&slice[3..7]
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
+#[rustc_clean(label="hir_owner_items", cfg="cfail3")]
fn change_mutability(slice: &mut [u32]) -> u32 {
(&slice[3..5])[0]
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
+#[rustc_clean(label="hir_owner_items", cfg="cfail3")]
fn exclusive_to_inclusive_range(slice: &[u32]) -> &[u32] {
&slice[3..=7]
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody,associated_item_def_ids")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,associated_item_def_ids")]
#[rustc_clean(cfg="cfail3")]
impl Foo {
#[rustc_clean(cfg="cfail3")]
impl Foo {
#[rustc_clean(
cfg="cfail2",
- except="HirBody,optimized_mir,promoted_mir,mir_built,typeck_tables_of"
+ except="hir_owner_items,optimized_mir,promoted_mir,mir_built,typeck_tables_of"
)]
#[rustc_clean(cfg="cfail3")]
pub fn method_body() {
impl Foo {
#[rustc_clean(
cfg="cfail2",
- except="HirBody,optimized_mir,promoted_mir,mir_built,typeck_tables_of"
+ except="hir_owner_items,optimized_mir,promoted_mir,mir_built,typeck_tables_of"
)]
#[rustc_clean(cfg="cfail3")]
#[inline]
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
impl Foo {
- #[rustc_clean(cfg="cfail2", except="associated_item,Hir,HirBody")]
+ #[rustc_clean(cfg="cfail2", except="associated_item,hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
fn method_privacy() { }
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
impl Foo {
#[rustc_dirty(cfg="cfail2", except="type_of,predicates_of,promoted_mir")]
impl Foo {
#[rustc_clean(
cfg="cfail2",
- except="Hir,HirBody,fn_sig,typeck_tables_of,optimized_mir,mir_built"
+ except="hir_owner,hir_owner_items,fn_sig,typeck_tables_of,optimized_mir,mir_built"
)]
#[rustc_clean(cfg="cfail3")]
pub fn method_selfmutness(&mut self) { }
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody,associated_item_def_ids")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,associated_item_def_ids")]
#[rustc_clean(cfg="cfail3")]
impl Foo {
#[rustc_clean(cfg="cfail2")]
impl Foo {
#[rustc_clean(
cfg="cfail2",
- except="Hir,HirBody,fn_sig,typeck_tables_of,optimized_mir,mir_built"
+ except="hir_owner,hir_owner_items,fn_sig,typeck_tables_of,optimized_mir,mir_built"
)]
#[rustc_clean(cfg="cfail3")]
pub fn add_method_parameter(&self, _: i32) { }
#[rustc_clean(cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
impl Foo {
- #[rustc_clean(cfg="cfail2", except="HirBody,optimized_mir,mir_built")]
+ #[rustc_clean(cfg="cfail2", except="hir_owner_items,optimized_mir,mir_built")]
#[rustc_clean(cfg="cfail3")]
pub fn change_method_parameter_name(&self, b: i64) { }
}
impl Foo {
#[rustc_clean(
cfg="cfail2",
- except="Hir,HirBody,fn_sig,optimized_mir,mir_built,typeck_tables_of")]
+ except="hir_owner,hir_owner_items,fn_sig,optimized_mir,mir_built,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn change_method_return_type(&self) -> u8 { 0 }
}
#[rustc_clean(cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
impl Foo {
- #[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+ #[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
#[inline]
pub fn make_method_inline(&self) -> u8 { 0 }
#[rustc_clean(cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
impl Foo {
- #[rustc_clean(cfg="cfail2", except="HirBody,optimized_mir,mir_built")]
+ #[rustc_clean(cfg="cfail2", except="hir_owner_items,optimized_mir,mir_built")]
#[rustc_clean(cfg="cfail3")]
pub fn change_method_parameter_order(&self, b: i64, a: i64) { }
}
impl Foo {
#[rustc_clean(
cfg="cfail2",
- except="Hir,HirBody,fn_sig,typeck_tables_of,optimized_mir,mir_built"
+ except="hir_owner,hir_owner_items,fn_sig,typeck_tables_of,optimized_mir,mir_built"
)]
#[rustc_clean(cfg="cfail3")]
pub unsafe fn make_method_unsafe(&self) { }
#[rustc_clean(cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
impl Foo {
- #[rustc_clean(cfg="cfail2", except="Hir,HirBody,fn_sig,typeck_tables_of")]
+ #[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,fn_sig,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub extern fn make_method_extern(&self) { }
}
#[rustc_clean(cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
impl Foo {
- #[rustc_clean(cfg="cfail2", except="Hir,HirBody,fn_sig,typeck_tables_of")]
+ #[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,fn_sig,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub extern "system" fn change_method_calling_convention(&self) { }
}
// if we lower generics before the body, then the `HirId` for
// things in the body will be affected. So if you start to see
// `typeck_tables_of` appear dirty, that might be the cause. -nmatsakis
- #[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+ #[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
pub fn add_lifetime_parameter_to_method<'a>(&self) { }
}
// appear dirty, that might be the cause. -nmatsakis
#[rustc_clean(
cfg="cfail2",
- except="Hir,HirBody,generics_of,predicates_of,type_of",
+ except="hir_owner,hir_owner_items,generics_of,predicates_of,type_of",
)]
#[rustc_clean(cfg="cfail3")]
pub fn add_type_parameter_to_method<T>(&self) { }
impl Foo {
#[rustc_clean(
cfg="cfail2",
- except="Hir,HirBody,generics_of,predicates_of,type_of,typeck_tables_of"
+ except="hir_owner,hir_owner_items,generics_of,predicates_of,type_of"
)]
#[rustc_clean(cfg="cfail3")]
pub fn add_lifetime_bound_to_lifetime_param_of_method<'a, 'b: 'a>(&self) { }
// generics before the body, then the `HirId` for things in the
// body will be affected. So if you start to see `typeck_tables_of`
// appear dirty, that might be the cause. -nmatsakis
- #[rustc_clean(cfg="cfail2", except="Hir,HirBody,generics_of,predicates_of,\
+ #[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,generics_of,predicates_of,\
type_of")]
#[rustc_clean(cfg="cfail3")]
pub fn add_lifetime_bound_to_type_param_of_method<'a, T: 'a>(&self) { }
// generics before the body, then the `HirId` for things in the
// body will be affected. So if you start to see `typeck_tables_of`
// appear dirty, that might be the cause. -nmatsakis
- #[rustc_clean(cfg="cfail2", except="Hir,HirBody,predicates_of")]
+ #[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,predicates_of")]
#[rustc_clean(cfg="cfail3")]
pub fn add_trait_bound_to_type_param_of_method<T: Clone>(&self) { }
}
#[rustc_clean(cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
impl Foo {
- #[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+ #[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
#[no_mangle]
pub fn add_no_mangle_to_method(&self) { }
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody,generics_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,generics_of")]
#[rustc_clean(cfg="cfail3")]
impl<T> Bar<T> {
#[rustc_clean(
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
impl Bar<u64> {
#[rustc_clean(cfg="cfail2", except="fn_sig,optimized_mir,mir_built,typeck_tables_of")]
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
impl<T: 'static> Bar<T> {
#[rustc_clean(cfg="cfail2")]
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
impl<T: Clone> Bar<T> {
#[rustc_clean(cfg="cfail2")]
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built, optimized_mir")]
#[rustc_clean(cfg="cfail3")]
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
pub fn change_template(a: i32) -> i32 {
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built, optimized_mir")]
#[rustc_clean(cfg="cfail3")]
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
pub fn change_output(a: i32) -> i32 {
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built, optimized_mir")]
#[rustc_clean(cfg="cfail3")]
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
pub fn change_input(_a: i32, _b: i32) -> i32 {
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built, optimized_mir")]
#[rustc_clean(cfg="cfail3")]
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
pub fn change_input_constraint(_a: i32, _b: i32) -> i32 {
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built, optimized_mir")]
#[rustc_clean(cfg="cfail3")]
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
pub fn change_clobber(_a: i32) -> i32 {
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built, optimized_mir")]
#[rustc_clean(cfg="cfail3")]
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
pub fn change_options(_a: i32) -> i32 {
#[cfg(not(cfail1))]
#[rustc_clean(cfg="cfail2",
- except="HirBody,mir_built,optimized_mir")]
+ except="hir_owner_items,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_name() {
let _y = 2u64;
#[cfg(not(cfail1))]
#[rustc_clean(cfg="cfail2",
- except="HirBody,typeck_tables_of,mir_built,optimized_mir")]
+ except="hir_owner_items,typeck_tables_of,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn add_type() {
let _x: u32 = 2u32;
#[cfg(not(cfail1))]
#[rustc_clean(cfg="cfail2",
- except="HirBody,typeck_tables_of,mir_built,optimized_mir")]
+ except="hir_owner_items,typeck_tables_of,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_type() {
let _x: u8 = 2;
#[cfg(not(cfail1))]
#[rustc_clean(cfg="cfail2",
- except="HirBody,typeck_tables_of,mir_built,optimized_mir")]
+ except="hir_owner_items,typeck_tables_of,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_mutability_of_reference_type() {
let _x: &mut u64;
#[cfg(not(cfail1))]
#[rustc_clean(cfg="cfail2",
- except="HirBody,typeck_tables_of,mir_built,optimized_mir")]
+ except="hir_owner_items,typeck_tables_of,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_mutability_of_slot() {
let _x: u64 = 0;
#[cfg(not(cfail1))]
#[rustc_clean(cfg="cfail2",
- except="HirBody,typeck_tables_of,mir_built,optimized_mir")]
+ except="hir_owner_items,typeck_tables_of,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_simple_binding_to_pattern() {
let (_a, _b) = (0u8, 'x');
#[cfg(not(cfail1))]
#[rustc_clean(cfg="cfail2",
- except="HirBody,mir_built,optimized_mir")]
+ except="hir_owner_items,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_name_in_pattern() {
let (_a, _c) = (1u8, 'y');
#[cfg(not(cfail1))]
#[rustc_clean(cfg="cfail2",
- except="HirBody,typeck_tables_of,mir_built,optimized_mir")]
+ except="hir_owner_items,typeck_tables_of,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn add_ref_in_pattern() {
let (ref _a, _b) = (1u8, 'y');
#[cfg(not(cfail1))]
#[rustc_clean(cfg="cfail2",
- except="HirBody,typeck_tables_of,mir_built,optimized_mir")]
+ except="hir_owner_items,typeck_tables_of,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn add_amp_in_pattern() {
let (&_a, _b) = (&1u8, 'y');
#[cfg(not(cfail1))]
#[rustc_clean(cfg="cfail2",
- except="HirBody,typeck_tables_of,mir_built,optimized_mir")]
+ except="hir_owner_items,typeck_tables_of,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_mutability_of_binding_in_pattern() {
let (mut _a, _b) = (99u8, 'q');
#[cfg(not(cfail1))]
#[rustc_clean(cfg="cfail2",
- except="HirBody,typeck_tables_of,mir_built,optimized_mir")]
+ except="hir_owner_items,typeck_tables_of,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn add_initializer() {
let _x: i16 = 3i16;
#[cfg(not(cfail1))]
#[rustc_clean(cfg="cfail2",
- except="HirBody,mir_built,optimized_mir")]
+ except="hir_owner_items,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_initializer() {
let _x = 5u16;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built, optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_loop_body() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, optimized_mir, typeck_tables_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built, optimized_mir, typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn add_break() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
pub fn add_loop_label() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
pub fn add_loop_label_to_break() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, optimized_mir, typeck_tables_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built, optimized_mir, typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn change_break_label() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
pub fn add_loop_label_to_continue() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, typeck_tables_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built, typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn change_continue_label() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, optimized_mir, typeck_tables_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built, optimized_mir, typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn change_continue_to_break() {
let mut _x = 0;
#[cfg(not(cfail1))]
#[rustc_clean(cfg="cfail2",
- except="HirBody,mir_built,optimized_mir,typeck_tables_of")]
+ except="hir_owner_items,mir_built,optimized_mir,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn add_arm(x: u32) -> u32 {
match x {
#[cfg(not(cfail1))]
#[rustc_clean(cfg="cfail2",
- except="HirBody,mir_built,optimized_mir")]
+ except="hir_owner_items,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_order_of_arms(x: u32) -> u32 {
match x {
#[cfg(not(cfail1))]
#[rustc_clean(cfg="cfail2",
- except="HirBody,mir_built,optimized_mir,typeck_tables_of")]
+ except="hir_owner_items,mir_built,optimized_mir,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn add_guard_clause(x: u32, y: bool) -> u32 {
match x {
#[cfg(not(cfail1))]
#[rustc_clean(cfg="cfail2",
- except="HirBody,mir_built,optimized_mir,typeck_tables_of")]
+ except="hir_owner_items,mir_built,optimized_mir,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn change_guard_clause(x: u32, y: bool) -> u32 {
match x {
#[cfg(not(cfail1))]
#[rustc_clean(cfg="cfail2",
- except="HirBody,mir_built,optimized_mir,typeck_tables_of")]
+ except="hir_owner_items,mir_built,optimized_mir,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn add_at_binding(x: u32) -> u32 {
match x {
#[cfg(not(cfail1))]
#[rustc_clean(cfg="cfail2",
- except="HirBody,mir_built,optimized_mir")]
+ except="hir_owner_items,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_name_of_at_binding(x: u32) -> u32 {
match x {
#[cfg(not(cfail1))]
#[rustc_clean(cfg="cfail2",
- except="HirBody,mir_built,optimized_mir,typeck_tables_of")]
+ except="hir_owner_items,mir_built,optimized_mir,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn change_simple_name_to_pattern(x: u32) -> u32 {
match (x, x & 1) {
#[cfg(not(cfail1))]
#[rustc_clean(cfg="cfail2",
- except="HirBody,mir_built,optimized_mir")]
+ except="hir_owner_items,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_name_in_pattern(x: u32) -> u32 {
match (x, x & 1) {
#[cfg(not(cfail1))]
#[rustc_clean(cfg="cfail2",
- except="HirBody,mir_built,optimized_mir,typeck_tables_of")]
+ except="hir_owner_items,mir_built,optimized_mir,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn change_mutability_of_binding_in_pattern(x: u32) -> u32 {
match (x, x & 1) {
#[cfg(not(cfail1))]
#[rustc_clean(cfg="cfail2",
- except="HirBody,mir_built,optimized_mir,typeck_tables_of")]
+ except="hir_owner_items,mir_built,optimized_mir,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn add_ref_to_binding_in_pattern(x: u32) -> u32 {
match (x, x & 1) {
#[cfg(not(cfail1))]
#[rustc_clean(cfg="cfail2",
-except="HirBody,mir_built,optimized_mir,typeck_tables_of")]
+except="hir_owner_items,mir_built,optimized_mir,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn add_amp_to_binding_in_pattern(x: u32) -> u32 {
match (&x, x & 1) {
#[cfg(not(cfail1))]
#[rustc_clean(cfg="cfail2",
- except="HirBody,mir_built,optimized_mir")]
+ except="hir_owner_items,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_rhs_of_arm(x: u32) -> u32 {
match x {
#[cfg(not(cfail1))]
#[rustc_clean(cfg="cfail2",
- except="HirBody,mir_built,optimized_mir,typeck_tables_of")]
+ except="hir_owner_items,mir_built,optimized_mir,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn add_alternative_to_arm(x: u32) -> u32 {
match x {
// Indexing expression
-#[rustc_clean(cfg="cfail2", except="HirBody,mir_built,optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn indexing(slice: &[u8]) -> u8 {
#[cfg(cfail1)]
// Arithmetic overflow plus
-#[rustc_clean(cfg="cfail2", except="HirBody,mir_built,optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn arithmetic_overflow_plus(val: i32) -> i32 {
#[cfg(cfail1)]
// Arithmetic overflow minus
-#[rustc_clean(cfg="cfail2", except="HirBody,mir_built,optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn arithmetic_overflow_minus(val: i32) -> i32 {
#[cfg(cfail1)]
// Arithmetic overflow mult
-#[rustc_clean(cfg="cfail2", except="HirBody,mir_built,optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn arithmetic_overflow_mult(val: i32) -> i32 {
#[cfg(cfail1)]
// Arithmetic overflow negation
-#[rustc_clean(cfg="cfail2", except="HirBody,mir_built,optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn arithmetic_overflow_negation(val: i32) -> i32 {
#[cfg(cfail1)]
// Division by zero
-#[rustc_clean(cfg="cfail2", except="HirBody,mir_built,optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn division_by_zero(val: i32) -> i32 {
#[cfg(cfail1)]
}
// Modulo by zero
-#[rustc_clean(cfg="cfail2", except="HirBody,mir_built,optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn mod_by_zero(val: i32) -> i32 {
#[cfg(cfail1)]
// shift left
-#[rustc_clean(cfg="cfail2", except="HirBody,mir_built,optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn shift_left(val: i32, shift: usize) -> i32 {
#[cfg(cfail1)]
// shift right
-#[rustc_clean(cfg="cfail2", except="HirBody,mir_built,optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,mir_built,optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn shift_right(val: i32, shift: usize) -> i32 {
#[cfg(cfail1)]
static STATIC_VISIBILITY: u8 = 0;
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
pub static STATIC_VISIBILITY: u8 = 0;
static STATIC_MUTABILITY: u8 = 0;
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
static mut STATIC_MUTABILITY: u8 = 0;
static STATIC_LINKAGE: u8 = 0;
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
#[linkage="weak_odr"]
static STATIC_LINKAGE: u8 = 0;
static STATIC_NO_MANGLE: u8 = 0;
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
#[no_mangle]
static STATIC_NO_MANGLE: u8 = 0;
static STATIC_THREAD_LOCAL: u8 = 0;
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
#[thread_local]
static STATIC_THREAD_LOCAL: u8 = 0;
static STATIC_CHANGE_TYPE_1: i16 = 0;
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody,type_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,type_of")]
#[rustc_clean(cfg="cfail3")]
static STATIC_CHANGE_TYPE_1: u64 = 0;
static STATIC_CHANGE_TYPE_2: Option<i8> = None;
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody,type_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,type_of")]
#[rustc_clean(cfg="cfail3")]
static STATIC_CHANGE_TYPE_2: Option<u16> = None;
// Change value between simple literals
-#[rustc_clean(cfg="cfail2", except="HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
static STATIC_CHANGE_VALUE_1: i16 = {
#[cfg(cfail1)]
// Change value between expressions
-#[rustc_clean(cfg="cfail2", except="HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
static STATIC_CHANGE_VALUE_2: i16 = {
#[cfg(cfail1)]
{ 1 + 2 }
};
-#[rustc_clean(cfg="cfail2", except="HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
static STATIC_CHANGE_VALUE_3: i16 = {
#[cfg(cfail1)]
{ 2 * 3 }
};
-#[rustc_clean(cfg="cfail2", except="HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
static STATIC_CHANGE_VALUE_4: i16 = {
#[cfg(cfail1)]
#[cfg(not(cfail1))]
use super::ReferencedType2 as Type;
- #[rustc_clean(cfg="cfail2", except="Hir,HirBody,type_of")]
+ #[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,type_of")]
#[rustc_clean(cfg="cfail3")]
static STATIC_CHANGE_TYPE_INDIRECTLY_1: Type = Type;
- #[rustc_clean(cfg="cfail2", except="Hir,HirBody,type_of")]
+ #[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items,type_of")]
#[rustc_clean(cfg="cfail3")]
static STATIC_CHANGE_TYPE_INDIRECTLY_2: Option<Type> = None;
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,optimized_mir,mir_built")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,optimized_mir,mir_built")]
#[rustc_clean(cfg="cfail3")]
pub fn change_field_value_regular_struct() -> RegularStruct {
RegularStruct {
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,typeck_tables_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn change_field_order_regular_struct() -> RegularStruct {
RegularStruct {
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,optimized_mir,mir_built,typeck_tables_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,optimized_mir,mir_built,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn add_field_regular_struct() -> RegularStruct {
let struct1 = RegularStruct {
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,optimized_mir,mir_built,typeck_tables_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,optimized_mir,mir_built,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn change_field_label_regular_struct() -> RegularStruct {
let struct1 = RegularStruct {
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,mir_built,typeck_tables_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,mir_built,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn change_constructor_path_regular_struct() {
let _ = RegularStruct2 {
#[rustc_clean(
cfg="cfail2",
- except="fn_sig,Hir,HirBody,optimized_mir,mir_built,typeck_tables_of"
+ except="fn_sig,hir_owner,hir_owner_items,optimized_mir,mir_built,typeck_tables_of"
)]
#[rustc_clean(cfg="cfail3")]
pub fn function() -> Struct {
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,optimized_mir,mir_built")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,optimized_mir,mir_built")]
#[rustc_clean(cfg="cfail3")]
pub fn change_field_value_tuple_struct() -> TupleStruct {
TupleStruct(0, 1, 3)
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody,mir_built,typeck_tables_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items,mir_built,typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn change_constructor_path_tuple_struct() {
let _ = TupleStruct2(0, 1, 2);
#[rustc_clean(
cfg="cfail2",
- except="fn_sig,Hir,HirBody,optimized_mir,mir_built,typeck_tables_of"
+ except="fn_sig,hir_owner,hir_owner_items,optimized_mir,mir_built,typeck_tables_of"
)]
#[rustc_clean(cfg="cfail3")]
pub fn function() -> Struct {
pub struct LayoutPacked;
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
#[rustc_dirty(label="type_of", cfg="cfail2")]
#[rustc_clean(label="generics_of", cfg="cfail2")]
#[rustc_clean(label="predicates_of", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
+#[rustc_clean(label="hir_owner_items", cfg="cfail3")]
#[rustc_clean(label="type_of", cfg="cfail3")]
#[rustc_clean(label="generics_of", cfg="cfail3")]
#[rustc_clean(label="predicates_of", cfg="cfail3")]
struct LayoutC;
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
#[rustc_dirty(label="type_of", cfg="cfail2")]
#[rustc_clean(label="generics_of", cfg="cfail2")]
#[rustc_clean(label="predicates_of", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
+#[rustc_clean(label="hir_owner_items", cfg="cfail3")]
#[rustc_clean(label="type_of", cfg="cfail3")]
#[rustc_clean(label="generics_of", cfg="cfail3")]
#[rustc_clean(label="predicates_of", cfg="cfail3")]
struct TupleStructFieldType(i32);
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
#[rustc_clean(label="type_of", cfg="cfail2")]
#[rustc_clean(label="generics_of", cfg="cfail2")]
#[rustc_clean(label="predicates_of", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
+#[rustc_clean(label="hir_owner_items", cfg="cfail3")]
#[rustc_clean(label="type_of", cfg="cfail3")]
#[rustc_clean(label="generics_of", cfg="cfail3")]
#[rustc_clean(label="predicates_of", cfg="cfail3")]
struct TupleStructAddField(i32);
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
#[rustc_dirty(label="type_of", cfg="cfail2")]
#[rustc_clean(label="generics_of", cfg="cfail2")]
#[rustc_clean(label="predicates_of", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
+#[rustc_clean(label="hir_owner_items", cfg="cfail3")]
#[rustc_clean(label="type_of", cfg="cfail3")]
#[rustc_clean(label="generics_of", cfg="cfail3")]
#[rustc_clean(label="predicates_of", cfg="cfail3")]
struct TupleStructFieldVisibility(char);
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
#[rustc_dirty(label="type_of", cfg="cfail2")]
#[rustc_clean(label="generics_of", cfg="cfail2")]
#[rustc_clean(label="predicates_of", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
+#[rustc_clean(label="hir_owner_items", cfg="cfail3")]
#[rustc_clean(label="type_of", cfg="cfail3")]
#[rustc_clean(label="generics_of", cfg="cfail3")]
#[rustc_clean(label="predicates_of", cfg="cfail3")]
struct RecordStructFieldType { x: f32 }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
#[rustc_clean(label="type_of", cfg="cfail2")]
#[rustc_clean(label="generics_of", cfg="cfail2")]
#[rustc_clean(label="predicates_of", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
+#[rustc_clean(label="hir_owner_items", cfg="cfail3")]
#[rustc_clean(label="type_of", cfg="cfail3")]
#[rustc_clean(label="generics_of", cfg="cfail3")]
#[rustc_clean(label="predicates_of", cfg="cfail3")]
struct RecordStructFieldName { x: f32 }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
#[rustc_dirty(label="type_of", cfg="cfail2")]
#[rustc_clean(label="generics_of", cfg="cfail2")]
#[rustc_clean(label="predicates_of", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
+#[rustc_clean(label="hir_owner_items", cfg="cfail3")]
#[rustc_clean(label="type_of", cfg="cfail3")]
#[rustc_clean(label="generics_of", cfg="cfail3")]
#[rustc_clean(label="predicates_of", cfg="cfail3")]
struct RecordStructAddField { x: f32 }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
#[rustc_dirty(label="type_of", cfg="cfail2")]
#[rustc_clean(label="generics_of", cfg="cfail2")]
#[rustc_clean(label="predicates_of", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
+#[rustc_clean(label="hir_owner_items", cfg="cfail3")]
#[rustc_clean(label="type_of", cfg="cfail3")]
#[rustc_clean(label="generics_of", cfg="cfail3")]
#[rustc_clean(label="predicates_of", cfg="cfail3")]
struct RecordStructFieldVisibility { x: f32 }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
#[rustc_dirty(label="type_of", cfg="cfail2")]
#[rustc_clean(label="generics_of", cfg="cfail2")]
#[rustc_clean(label="predicates_of", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
+#[rustc_clean(label="hir_owner_items", cfg="cfail3")]
#[rustc_clean(label="type_of", cfg="cfail3")]
#[rustc_clean(label="generics_of", cfg="cfail3")]
#[rustc_clean(label="predicates_of", cfg="cfail3")]
struct AddLifetimeParameter<'a>(&'a f32, &'a f64);
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
#[rustc_dirty(label="type_of", cfg="cfail2")]
#[rustc_dirty(label="generics_of", cfg="cfail2")]
#[rustc_clean(label="predicates_of", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
+#[rustc_clean(label="hir_owner_items", cfg="cfail3")]
#[rustc_clean(label="type_of", cfg="cfail3")]
#[rustc_clean(label="generics_of", cfg="cfail3")]
#[rustc_clean(label="predicates_of", cfg="cfail3")]
struct AddLifetimeParameterBound<'a, 'b>(&'a f32, &'b f64);
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
#[rustc_clean(label="type_of", cfg="cfail2")]
#[rustc_clean(label="generics_of", cfg="cfail2")]
#[rustc_dirty(label="predicates_of", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
+#[rustc_clean(label="hir_owner_items", cfg="cfail3")]
#[rustc_clean(label="type_of", cfg="cfail3")]
#[rustc_clean(label="generics_of", cfg="cfail3")]
#[rustc_clean(label="predicates_of", cfg="cfail3")]
struct AddLifetimeParameterBoundWhereClause<'a, 'b>(&'a f32, &'b f64);
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
#[rustc_clean(label="type_of", cfg="cfail2")]
#[rustc_clean(label="generics_of", cfg="cfail2")]
#[rustc_dirty(label="predicates_of", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
+#[rustc_clean(label="hir_owner_items", cfg="cfail3")]
#[rustc_clean(label="type_of", cfg="cfail3")]
#[rustc_clean(label="generics_of", cfg="cfail3")]
#[rustc_clean(label="predicates_of", cfg="cfail3")]
struct AddTypeParameter<T1>(T1, T1);
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
#[rustc_dirty(label="type_of", cfg="cfail2")]
#[rustc_dirty(label="generics_of", cfg="cfail2")]
#[rustc_dirty(label="predicates_of", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
+#[rustc_clean(label="hir_owner_items", cfg="cfail3")]
#[rustc_clean(label="type_of", cfg="cfail3")]
#[rustc_clean(label="generics_of", cfg="cfail3")]
#[rustc_clean(label="predicates_of", cfg="cfail3")]
struct AddTypeParameterBound<T>(T);
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
#[rustc_clean(label="type_of", cfg="cfail2")]
#[rustc_clean(label="generics_of", cfg="cfail2")]
#[rustc_dirty(label="predicates_of", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
+#[rustc_clean(label="hir_owner_items", cfg="cfail3")]
#[rustc_clean(label="type_of", cfg="cfail3")]
#[rustc_clean(label="generics_of", cfg="cfail3")]
#[rustc_clean(label="predicates_of", cfg="cfail3")]
struct AddTypeParameterBoundWhereClause<T>(T);
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
#[rustc_clean(label="type_of", cfg="cfail2")]
#[rustc_clean(label="generics_of", cfg="cfail2")]
#[rustc_dirty(label="predicates_of", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
+#[rustc_clean(label="hir_owner_items", cfg="cfail3")]
#[rustc_clean(label="type_of", cfg="cfail3")]
#[rustc_clean(label="generics_of", cfg="cfail3")]
#[rustc_clean(label="predicates_of", cfg="cfail3")]
// fingerprint is stable (i.e., that there are no random influences like memory
// addresses taken into account by the hashing algorithm).
// Note: there is no #[cfg(...)], so this is ALWAYS compiled
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="HirBody", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner_items", cfg="cfail2")]
#[rustc_clean(label="type_of", cfg="cfail2")]
#[rustc_clean(label="generics_of", cfg="cfail2")]
#[rustc_clean(label="predicates_of", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
+#[rustc_clean(label="hir_owner_items", cfg="cfail3")]
#[rustc_clean(label="type_of", cfg="cfail3")]
#[rustc_clean(label="generics_of", cfg="cfail3")]
#[rustc_clean(label="predicates_of", cfg="cfail3")]
struct Visibility;
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
#[rustc_clean(label="type_of", cfg="cfail2")]
#[rustc_clean(label="generics_of", cfg="cfail2")]
#[rustc_clean(label="predicates_of", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
-#[rustc_clean(label="HirBody", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
+#[rustc_clean(label="hir_owner_items", cfg="cfail3")]
#[rustc_clean(label="type_of", cfg="cfail3")]
#[rustc_clean(label="generics_of", cfg="cfail3")]
#[rustc_clean(label="predicates_of", cfg="cfail3")]
#[cfg(not(cfail1))]
use super::ReferencedType2 as FieldType;
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_dirty(label="HirBody", cfg="cfail2")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
#[rustc_clean(label="type_of", cfg="cfail2")]
#[rustc_clean(label="generics_of", cfg="cfail2")]
#[rustc_clean(label="predicates_of", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
- #[rustc_clean(label="HirBody", cfg="cfail3")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
+ #[rustc_clean(label="hir_owner_items", cfg="cfail3")]
#[rustc_clean(label="type_of", cfg="cfail3")]
#[rustc_clean(label="generics_of", cfg="cfail3")]
#[rustc_clean(label="predicates_of", cfg="cfail3")]
#[cfg(not(cfail1))]
use super::ReferencedType2 as FieldType;
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_dirty(label="HirBody", cfg="cfail2")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
#[rustc_clean(label="type_of", cfg="cfail2")]
#[rustc_clean(label="generics_of", cfg="cfail2")]
#[rustc_clean(label="predicates_of", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
- #[rustc_clean(label="HirBody", cfg="cfail3")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
+ #[rustc_clean(label="hir_owner_items", cfg="cfail3")]
#[rustc_clean(label="type_of", cfg="cfail3")]
#[rustc_clean(label="generics_of", cfg="cfail3")]
#[rustc_clean(label="predicates_of", cfg="cfail3")]
#[cfg(not(cfail1))]
use super::ReferencedTrait2 as Trait;
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_dirty(label="HirBody", cfg="cfail2")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
#[rustc_clean(label="type_of", cfg="cfail2")]
#[rustc_clean(label="generics_of", cfg="cfail2")]
#[rustc_dirty(label="predicates_of", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
- #[rustc_clean(label="HirBody", cfg="cfail3")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
+ #[rustc_clean(label="hir_owner_items", cfg="cfail3")]
#[rustc_clean(label="type_of", cfg="cfail3")]
#[rustc_clean(label="generics_of", cfg="cfail3")]
#[rustc_clean(label="predicates_of", cfg="cfail3")]
#[cfg(not(cfail1))]
use super::ReferencedTrait2 as Trait;
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_dirty(label="HirBody", cfg="cfail2")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
#[rustc_clean(label="type_of", cfg="cfail2")]
#[rustc_clean(label="generics_of", cfg="cfail2")]
#[rustc_dirty(label="predicates_of", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
- #[rustc_clean(label="HirBody", cfg="cfail3")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
+ #[rustc_clean(label="hir_owner_items", cfg="cfail3")]
#[rustc_clean(label="type_of", cfg="cfail3")]
#[rustc_clean(label="generics_of", cfg="cfail3")]
#[rustc_clean(label="predicates_of", cfg="cfail3")]
trait TraitVisibility { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
pub trait TraitVisibility { }
trait TraitUnsafety { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
unsafe trait TraitUnsafety { }
}
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
pub trait TraitAddMethod {
fn method();
}
}
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitChangeMethodName {
fn methodChanged();
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddReturnType {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method() -> u32;
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitChangeReturnType {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method() -> u64;
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddParameterToMethod {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method(a: u32);
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitChangeMethodParameterName {
// FIXME(#38501) This should preferably always be clean.
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method(b: u32);
- #[rustc_clean(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
- #[rustc_dirty(label="HirBody", cfg="cfail2")]
- #[rustc_clean(label="HirBody", cfg="cfail3")]
+ #[rustc_clean(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner_items", cfg="cfail3")]
fn with_default(y: i32) {}
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitChangeMethodParameterType {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method(a: i64);
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitChangeMethodParameterTypeRef {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method(a: &mut i32);
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitChangeMethodParametersOrder {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method(b: i64, a: i32);
}
}
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddMethodAutoImplementation {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method() { }
}
}
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitChangeOrderOfMethods {
fn method1();
fn method0();
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitChangeModeSelfRefToMut {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method(&mut self);
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitChangeModeSelfOwnToMut: Sized {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
- #[rustc_dirty(label="HirBody", cfg="cfail2")]
- #[rustc_clean(label="HirBody", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner_items", cfg="cfail3")]
fn method(mut self) {}
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitChangeModeSelfOwnToRef {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method(&self);
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddUnsafeModifier {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
unsafe fn method();
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddExternModifier {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
extern fn method();
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitChangeExternCToRustIntrinsic {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
extern "stdcall" fn method();
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddTypeParameterToMethod {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method<T>();
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddLifetimeParameterToMethod {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method<'a>();
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddTraitBoundToMethodTypeParameter {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method<T: ReferencedTrait0>();
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddBuiltinBoundToMethodTypeParameter {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method<T: Sized>();
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddLifetimeBoundToMethodLifetimeParameter {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method<'a, 'b: 'a>(a: &'a u32, b: &'b u32);
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddSecondTraitBoundToMethodTypeParameter {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method<T: ReferencedTrait0 + ReferencedTrait1>();
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddSecondBuiltinBoundToMethodTypeParameter {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method<T: Sized + Sync>();
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddSecondLifetimeBoundToMethodLifetimeParameter {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method<'a, 'b, 'c: 'a + 'b>(a: &'a u32, b: &'b u32, c: &'c u32);
}
#[cfg(cfail1)]
trait TraitAddAssociatedType {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method();
}
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddAssociatedType {
type Associated;
// Apparently the type bound contributes to the predicates of the trait, but
// does not change the associated item itself.
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddTraitBoundToAssociatedType {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
type Associated: ReferencedTrait0;
fn method();
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddLifetimeBoundToAssociatedType<'a> {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
type Associated: 'a;
fn method();
}
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddDefaultToAssociatedType {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
type Associated = ReferenceType0;
fn method();
}
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddAssociatedConstant {
const Value: u32;
}
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddInitializerToAssociatedConstant {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
const Value: u32 = 1;
- #[rustc_clean(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_clean(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method();
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitChangeTypeOfAssociatedConstant {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
const Value: f64;
- #[rustc_clean(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_clean(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method();
}
trait TraitAddSuperTrait { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddSuperTrait : ReferencedTrait0 { }
trait TraitAddBuiltinBound { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddBuiltinBound : Send { }
trait TraitAddStaticLifetimeBound { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddStaticLifetimeBound : 'static { }
trait TraitAddTraitAsSecondBound : ReferencedTrait0 { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddTraitAsSecondBound : ReferencedTrait0 + ReferencedTrait1 { }
#[cfg(cfail1)]
trait TraitAddTraitAsSecondBoundFromBuiltin : Send { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddTraitAsSecondBoundFromBuiltin : Send + ReferencedTrait0 { }
trait TraitAddBuiltinBoundAsSecondBound : ReferencedTrait0 { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddBuiltinBoundAsSecondBound : ReferencedTrait0 + Send { }
#[cfg(cfail1)]
trait TraitAddBuiltinBoundAsSecondBoundFromBuiltin : Send { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddBuiltinBoundAsSecondBoundFromBuiltin: Send + Copy { }
trait TraitAddStaticBoundAsSecondBound : ReferencedTrait0 { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddStaticBoundAsSecondBound : ReferencedTrait0 + 'static { }
#[cfg(cfail1)]
trait TraitAddStaticBoundAsSecondBoundFromBuiltin : Send { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddStaticBoundAsSecondBoundFromBuiltin : Send + 'static { }
trait TraitAddTypeParameterToTrait { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddTypeParameterToTrait<T> { }
trait TraitAddLifetimeParameterToTrait { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddLifetimeParameterToTrait<'a> { }
trait TraitAddTraitBoundToTypeParameterOfTrait<T> { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddTraitBoundToTypeParameterOfTrait<T: ReferencedTrait0> { }
trait TraitAddLifetimeBoundToTypeParameterOfTrait<'a, T> { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddLifetimeBoundToTypeParameterOfTrait<'a, T: 'a> { }
trait TraitAddLifetimeBoundToLifetimeParameterOfTrait<'a, 'b> { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddLifetimeBoundToLifetimeParameterOfTrait<'a: 'b, 'b> { }
trait TraitAddBuiltinBoundToTypeParameterOfTrait<T> { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddBuiltinBoundToTypeParameterOfTrait<T: Send> { }
trait TraitAddSecondTypeParameterToTrait<T> { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddSecondTypeParameterToTrait<T, S> { }
trait TraitAddSecondLifetimeParameterToTrait<'a> { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddSecondLifetimeParameterToTrait<'a, 'b> { }
trait TraitAddSecondTraitBoundToTypeParameterOfTrait<T: ReferencedTrait0> { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddSecondTraitBoundToTypeParameterOfTrait<T: ReferencedTrait0 + ReferencedTrait1> { }
trait TraitAddSecondLifetimeBoundToTypeParameterOfTrait<'a, 'b, T: 'a> { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddSecondLifetimeBoundToTypeParameterOfTrait<'a, 'b, T: 'a + 'b> { }
trait TraitAddSecondLifetimeBoundToLifetimeParameterOfTrait<'a: 'b, 'b, 'c> { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddSecondLifetimeBoundToLifetimeParameterOfTrait<'a: 'b + 'c, 'b, 'c> { }
trait TraitAddSecondBuiltinBoundToTypeParameterOfTrait<T: Send> { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddSecondBuiltinBoundToTypeParameterOfTrait<T: Send + Sync> { }
trait TraitAddTraitBoundToTypeParameterOfTraitWhere<T> { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddTraitBoundToTypeParameterOfTraitWhere<T> where T: ReferencedTrait0 { }
trait TraitAddLifetimeBoundToTypeParameterOfTraitWhere<'a, T> { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddLifetimeBoundToTypeParameterOfTraitWhere<'a, T> where T: 'a { }
trait TraitAddLifetimeBoundToLifetimeParameterOfTraitWhere<'a, 'b> { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddLifetimeBoundToLifetimeParameterOfTraitWhere<'a, 'b> where 'a: 'b { }
trait TraitAddBuiltinBoundToTypeParameterOfTraitWhere<T> { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddBuiltinBoundToTypeParameterOfTraitWhere<T> where T: Send { }
trait TraitAddSecondTraitBoundToTypeParameterOfTraitWhere<T> where T: ReferencedTrait0 { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddSecondTraitBoundToTypeParameterOfTraitWhere<T>
where T: ReferencedTrait0 + ReferencedTrait1 { }
trait TraitAddSecondLifetimeBoundToTypeParameterOfTraitWhere<'a, 'b, T> where T: 'a { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddSecondLifetimeBoundToTypeParameterOfTraitWhere<'a, 'b, T> where T: 'a + 'b { }
trait TraitAddSecondLifetimeBoundToLifetimeParameterOfTraitWhere<'a, 'b, 'c> where 'a: 'b { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddSecondLifetimeBoundToLifetimeParameterOfTraitWhere<'a, 'b, 'c> where 'a: 'b + 'c { }
trait TraitAddSecondBuiltinBoundToTypeParameterOfTraitWhere<T> where T: Send { }
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitAddSecondBuiltinBoundToTypeParameterOfTraitWhere<T> where T: Send + Sync { }
#[cfg(not(cfail1))]
use super::ReferenceType1 as ReturnType;
- #[rustc_clean(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_clean(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitChangeReturnType {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method() -> ReturnType;
}
}
#[cfg(not(cfail1))]
use super::ReferenceType1 as ArgType;
- #[rustc_clean(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_clean(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitChangeArgType {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method(a: ArgType);
}
}
#[cfg(not(cfail1))]
use super::ReferencedTrait1 as Bound;
- #[rustc_clean(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_clean(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitChangeBoundOfMethodTypeParameter {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method<T: Bound>(a: T);
}
}
#[cfg(not(cfail1))]
use super::ReferencedTrait1 as Bound;
- #[rustc_clean(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_clean(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitChangeBoundOfMethodTypeParameterWhere {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method<T>(a: T) where T: Bound;
}
}
#[cfg(not(cfail1))]
use super::ReferencedTrait1 as Bound;
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitChangeTraitBound<T: Bound> {
fn method(a: T);
}
#[cfg(not(cfail1))]
use super::ReferencedTrait1 as Bound;
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
trait TraitChangeTraitBoundWhere<T> where T: Bound {
fn method(a: T);
}
}
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
pub trait ChangeMethodNameTrait {
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method_name2();
}
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
impl ChangeMethodNameTrait for Foo {
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method_name2() { }
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
impl ChangeMethodBodyTrait for Foo {
- #[rustc_clean(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
- #[rustc_dirty(label="HirBody", cfg="cfail2")]
- #[rustc_clean(label="HirBody", cfg="cfail3")]
+ #[rustc_clean(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner_items", cfg="cfail3")]
fn method_name() {
()
}
}
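The hunk above is the clearest illustration of what the relabeling means: when only a method body changes between compilation sessions, the new `hir_owner` query (the item's signature-level HIR) stays clean while `hir_owner_items` (the owned bodies) becomes dirty, replacing the old coarse `Hir`/`HirBody` dep nodes. A minimal sketch of that expectation pattern, using rustc's internal `#[rustc_clean]`/`#[rustc_dirty]` test attributes (only meaningful inside the incremental test harness with `-Z query-dep-graph`; not standalone-compilable):

```rust
// Sketch of the suite's convention after the rename: a body-only edit
// between cfail1 and cfail2 dirties hir_owner_items but not hir_owner.
#[cfg(not(cfail1))]
#[rustc_clean(label="hir_owner", cfg="cfail2")]       // signature unchanged
impl SomeTrait for Foo {
    #[rustc_clean(label="hir_owner", cfg="cfail2")]       // item node: clean
    #[rustc_dirty(label="hir_owner_items", cfg="cfail2")] // body: dirty
    fn method_name() { /* body differs from the cfail1 version */ }
}
```

Hunks that change a signature (arguments, self-ness, generics, bounds) instead mark `hir_owner` itself dirty, which is why the surrounding tests split along exactly that line.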
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
impl ChangeMethodBodyTraitInlined for Foo {
- #[rustc_clean(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
- #[rustc_dirty(label="HirBody", cfg="cfail2")]
- #[rustc_clean(label="HirBody", cfg="cfail3")]
+ #[rustc_clean(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner_items", cfg="cfail3")]
#[inline]
fn method_name() {
panic!()
}
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
impl ChangeMethodSelfnessTrait for Foo {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method_name(&self) {
()
}
}
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
impl RemoveMethodSelfnessTrait for Foo {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method_name() {}
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
impl ChangeMethodSelfmutnessTrait for Foo {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method_name(&mut self) {}
}
}
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
impl ChangeItemKindTrait for Foo {
type name = ();
}
}
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
impl RemoveItemTrait for Foo {
type TypeName = ();
}
}
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
impl AddItemTrait for Foo {
type TypeName = ();
fn method_name() { }
}
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
pub trait ChangeHasValueTrait {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method_name() { }
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
impl ChangeHasValueTrait for Foo {
fn method_name() { }
}
}
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
impl AddDefaultTrait for Foo {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
default fn method_name() { }
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
impl AddArgumentTrait for Foo {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method_name(&self, _x: u32) { }
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
impl ChangeArgumentTypeTrait for Foo {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn method_name(&self, _x: char) { }
}
}
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
impl<T> AddTypeParameterToImpl<T> for Bar<T> {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn id(t: T) -> T { t }
}
}
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
impl ChangeSelfTypeOfImpl for u64 {
- #[rustc_clean(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_clean(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn id(self) -> Self { self }
}
}
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
impl<T: 'static> AddLifetimeBoundToImplParameter for T {
- #[rustc_clean(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_clean(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn id(self) -> Self { self }
}
}
#[cfg(not(cfail1))]
-#[rustc_dirty(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_dirty(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
impl<T: Clone> AddTraitBoundToImplParameter for T {
- #[rustc_clean(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_clean(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
fn id(self) -> Self { self }
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
impl AddNoMangleToMethod for Foo {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
#[no_mangle]
fn add_no_mangle_to_method(&self) { }
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_clean(label="Hir", cfg="cfail3")]
+#[rustc_clean(label="hir_owner", cfg="cfail2")]
+#[rustc_clean(label="hir_owner", cfg="cfail3")]
impl MakeMethodInline for Foo {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="Hir", cfg="cfail3")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail3")]
#[inline]
fn make_method_inline(&self) -> u8 { 0 }
}
type ChangePrimitiveType = i32;
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
type ChangePrimitiveType = i64;
type ChangeMutability = &'static i32;
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
type ChangeMutability = &'static mut i32;
type ChangeLifetime<'a> = (&'static i32, &'a i32);
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
type ChangeLifetime<'a> = (&'a i32, &'a i32);
type ChangeTypeStruct = Struct1;
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
type ChangeTypeStruct = Struct2;
type ChangeTypeTuple = (u32, u64);
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
type ChangeTypeTuple = (u32, i64);
type ChangeTypeEnum = Enum1;
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
type ChangeTypeEnum = Enum2;
type AddTupleField = (i32, i64);
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
type AddTupleField = (i32, i64, i16);
type ChangeNestedTupleField = (i32, (i64, i16));
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
type ChangeNestedTupleField = (i32, (i64, i8));
type AddTypeParam<T1> = (T1, T1);
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
type AddTypeParam<T1, T2> = (T1, T2);
type AddTypeParamBound<T1> = (T1, u32);
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
type AddTypeParamBound<T1: Clone> = (T1, u32);
type AddTypeParamBoundWhereClause<T1> where T1: Clone = (T1, u32);
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
type AddTypeParamBoundWhereClause<T1> where T1: Clone+Copy = (T1, u32);
type AddLifetimeParam<'a> = (&'a u32, &'a u32);
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
type AddLifetimeParam<'a, 'b> = (&'a u32, &'b u32);
type AddLifetimeParamBound<'a, 'b> = (&'a u32, &'b u32);
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
type AddLifetimeParamBound<'a, 'b: 'a> = (&'a u32, &'b u32);
= (&'a u32, &'b u32, &'c u32);
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
type AddLifetimeParamBoundWhereClause<'a, 'b, 'c>
where 'b: 'a,
#[cfg(not(cfail1))]
use super::ReferencedTrait2 as Trait;
- #[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+ #[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
type ChangeTraitBoundIndirectly<T: Trait> = (T, u32);
}
#[cfg(not(cfail1))]
use super::ReferencedTrait2 as Trait;
- #[rustc_clean(cfg="cfail2", except="Hir,HirBody")]
+ #[rustc_clean(cfg="cfail2", except="hir_owner,hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
type ChangeTraitBoundIndirectly<T> where T : Trait = (T, u32);
}
}
#[cfg(not(cfail1))]
-#[rustc_clean(except="HirBody,optimized_mir,mir_built", cfg="cfail2")]
+#[rustc_clean(except="hir_owner_items,optimized_mir,mir_built", cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
pub fn const_negation() -> i32 {
-1
}
#[cfg(not(cfail1))]
-#[rustc_clean(except="HirBody,optimized_mir,mir_built", cfg="cfail2")]
+#[rustc_clean(except="hir_owner_items,optimized_mir,mir_built", cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
pub fn const_bitwise_not() -> i32 {
!99
}
#[cfg(not(cfail1))]
-#[rustc_clean(except="HirBody,optimized_mir,mir_built", cfg="cfail2")]
+#[rustc_clean(except="hir_owner_items,optimized_mir,mir_built", cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
pub fn var_negation(x: i32, y: i32) -> i32 {
-y
}
#[cfg(not(cfail1))]
-#[rustc_clean(except="HirBody,optimized_mir,mir_built", cfg="cfail2")]
+#[rustc_clean(except="hir_owner_items,optimized_mir,mir_built", cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
pub fn var_bitwise_not(x: i32, y: i32) -> i32 {
!y
}
#[cfg(not(cfail1))]
-#[rustc_clean(except="HirBody,optimized_mir,mir_built,typeck_tables_of", cfg="cfail2")]
+#[rustc_clean(except="hir_owner_items,optimized_mir,mir_built", cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
pub fn var_deref(x: &i32, y: &i32) -> i32 {
*y
}
#[cfg(not(cfail1))]
-#[rustc_clean(except="HirBody,optimized_mir,mir_built", cfg="cfail2")]
+#[rustc_clean(except="hir_owner_items,optimized_mir,mir_built", cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
pub fn first_const_add() -> i32 {
2 + 3
}
#[cfg(not(cfail1))]
-#[rustc_clean(except="HirBody,optimized_mir,mir_built", cfg="cfail2")]
+#[rustc_clean(except="hir_owner_items,optimized_mir,mir_built", cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
pub fn second_const_add() -> i32 {
1 + 3
}
#[cfg(not(cfail1))]
-#[rustc_clean(except="HirBody,optimized_mir,mir_built", cfg="cfail2")]
+#[rustc_clean(except="hir_owner_items,optimized_mir,mir_built", cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
pub fn first_var_add(a: i32, b: i32) -> i32 {
b + 2
}
#[cfg(not(cfail1))]
-#[rustc_clean(except="HirBody,optimized_mir,mir_built", cfg="cfail2")]
+#[rustc_clean(except="hir_owner_items,optimized_mir,mir_built", cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
pub fn second_var_add(a: i32, b: i32) -> i32 {
1 + b
}
#[cfg(not(cfail1))]
-#[rustc_clean(except="HirBody,optimized_mir,mir_built", cfg="cfail2")]
+#[rustc_clean(except="hir_owner_items,optimized_mir,mir_built", cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
pub fn plus_to_minus(a: i32) -> i32 {
1 - a
}
#[cfg(not(cfail1))]
-#[rustc_clean(except="HirBody,optimized_mir,mir_built", cfg="cfail2")]
+#[rustc_clean(except="hir_owner_items,optimized_mir,mir_built", cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
pub fn plus_to_mult(a: i32) -> i32 {
1 * a
}
#[cfg(not(cfail1))]
-#[rustc_clean(except="HirBody,optimized_mir,mir_built", cfg="cfail2")]
+#[rustc_clean(except="hir_owner_items,optimized_mir,mir_built", cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
pub fn plus_to_div(a: i32) -> i32 {
1 / a
}
#[cfg(not(cfail1))]
-#[rustc_clean(except="HirBody,optimized_mir,mir_built", cfg="cfail2")]
+#[rustc_clean(except="hir_owner_items,optimized_mir,mir_built", cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
pub fn plus_to_mod(a: i32) -> i32 {
1 % a
}
#[cfg(not(cfail1))]
-#[rustc_clean(except="HirBody,optimized_mir,mir_built", cfg="cfail2")]
+#[rustc_clean(except="hir_owner_items,optimized_mir,mir_built", cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
pub fn and_to_or(a: bool, b: bool) -> bool {
a || b
}
#[cfg(not(cfail1))]
-#[rustc_clean(except="HirBody,optimized_mir,mir_built", cfg="cfail2")]
+#[rustc_clean(except="hir_owner_items,optimized_mir,mir_built", cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
pub fn bitwise_and_to_bitwise_or(a: i32) -> i32 {
1 | a
}
#[cfg(not(cfail1))]
-#[rustc_clean(except="HirBody,optimized_mir,mir_built", cfg="cfail2")]
+#[rustc_clean(except="hir_owner_items,optimized_mir,mir_built", cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
pub fn bitwise_and_to_bitwise_xor(a: i32) -> i32 {
1 ^ a
}
#[cfg(not(cfail1))]
-#[rustc_clean(except="HirBody,optimized_mir,mir_built", cfg="cfail2")]
+#[rustc_clean(except="hir_owner_items,optimized_mir,mir_built", cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
pub fn bitwise_and_to_lshift(a: i32) -> i32 {
a << 1
}
#[cfg(not(cfail1))]
-#[rustc_clean(except="HirBody,optimized_mir,mir_built", cfg="cfail2")]
+#[rustc_clean(except="hir_owner_items,optimized_mir,mir_built", cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
pub fn bitwise_and_to_rshift(a: i32) -> i32 {
a >> 1
}
#[cfg(not(cfail1))]
-#[rustc_clean(except="HirBody,optimized_mir,mir_built", cfg="cfail2")]
+#[rustc_clean(except="hir_owner_items,optimized_mir,mir_built", cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
pub fn eq_to_uneq(a: i32) -> bool {
a != 1
}
#[cfg(not(cfail1))]
-#[rustc_clean(except="HirBody,optimized_mir,mir_built", cfg="cfail2")]
+#[rustc_clean(except="hir_owner_items,optimized_mir,mir_built", cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
pub fn eq_to_lt(a: i32) -> bool {
a < 1
}
#[cfg(not(cfail1))]
-#[rustc_clean(except="HirBody,optimized_mir,mir_built", cfg="cfail2")]
+#[rustc_clean(except="hir_owner_items,optimized_mir,mir_built", cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
pub fn eq_to_gt(a: i32) -> bool {
a > 1
}
#[cfg(not(cfail1))]
-#[rustc_clean(except="HirBody,optimized_mir,mir_built", cfg="cfail2")]
+#[rustc_clean(except="hir_owner_items,optimized_mir,mir_built", cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
pub fn eq_to_le(a: i32) -> bool {
a <= 1
}
#[cfg(not(cfail1))]
-#[rustc_clean(except="HirBody,optimized_mir,mir_built", cfg="cfail2")]
+#[rustc_clean(except="hir_owner_items,optimized_mir,mir_built", cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
pub fn eq_to_ge(a: i32) -> bool {
a >= 1
}
#[cfg(not(cfail1))]
-#[rustc_clean(except="HirBody,optimized_mir,mir_built,typeck_tables_of", cfg="cfail2")]
+#[rustc_clean(except="hir_owner_items,optimized_mir,mir_built,typeck_tables_of", cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
pub fn type_cast(a: u8) -> u64 {
let b = a as u32;
}
#[cfg(not(cfail1))]
-#[rustc_clean(except="HirBody,optimized_mir,mir_built", cfg="cfail2")]
+#[rustc_clean(except="hir_owner_items,optimized_mir,mir_built", cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
pub fn value_cast(a: u32) -> i32 {
2 as i32
}
#[cfg(not(cfail1))]
-#[rustc_clean(except="HirBody,optimized_mir,mir_built", cfg="cfail2")]
+#[rustc_clean(except="hir_owner_items,optimized_mir,mir_built", cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
pub fn place() -> i32 {
let mut x = 10;
}
#[cfg(not(cfail1))]
-#[rustc_clean(except="HirBody,optimized_mir,mir_built", cfg="cfail2")]
+#[rustc_clean(except="hir_owner_items,optimized_mir,mir_built", cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
pub fn rvalue() -> i32 {
let mut x = 10;
}
#[cfg(not(cfail1))]
-#[rustc_clean(except="HirBody,optimized_mir,mir_built", cfg="cfail2")]
+#[rustc_clean(except="hir_owner_items,optimized_mir,mir_built", cfg="cfail2")]
#[rustc_clean(cfg="cfail3")]
pub fn index_to_slice(s: &[u8], i: usize, j: usize) -> u8 {
s[j]
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built")]
#[rustc_clean(cfg="cfail3")]
pub fn change_loop_body() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built")]
#[rustc_clean(cfg="cfail3")]
pub fn change_loop_condition() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, typeck_tables_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built, typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn add_break() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
pub fn add_loop_label() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
pub fn add_loop_label_to_break() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built")]
#[rustc_clean(cfg="cfail3")]
pub fn change_break_label() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
pub fn add_loop_label_to_continue() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built")]
#[rustc_clean(cfg="cfail3")]
pub fn change_continue_label() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built")]
#[rustc_clean(cfg="cfail3")]
pub fn change_continue_to_break() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built, optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_loop_body() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built, optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_loop_condition() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, optimized_mir, typeck_tables_of")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built, optimized_mir, typeck_tables_of")]
#[rustc_clean(cfg="cfail3")]
pub fn add_break() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
pub fn add_loop_label() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
pub fn add_loop_label_to_break() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built, optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_break_label() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items")]
#[rustc_clean(cfg="cfail3")]
pub fn add_loop_label_to_continue() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built")]
#[rustc_clean(cfg="cfail3")]
pub fn change_continue_label() {
let mut _x = 0;
}
#[cfg(not(cfail1))]
-#[rustc_clean(cfg="cfail2", except="HirBody, mir_built, optimized_mir")]
+#[rustc_clean(cfg="cfail2", except="hir_owner_items, mir_built, optimized_mir")]
#[rustc_clean(cfg="cfail3")]
pub fn change_continue_to_break() {
let mut _x = 0;
#[cfg(rpass2)]
use Trait2;
- #[rustc_clean(label="Hir", cfg="rpass2")]
- #[rustc_clean(label="HirBody", cfg="rpass2")]
+ #[rustc_clean(label="hir_owner", cfg="rpass2")]
+ #[rustc_clean(label="hir_owner_items", cfg="rpass2")]
#[rustc_dirty(label="typeck_tables_of", cfg="rpass2")]
fn bar() {
().method();
}
- #[rustc_clean(label="Hir", cfg="rpass2")]
- #[rustc_clean(label="HirBody", cfg="rpass2")]
+ #[rustc_clean(label="hir_owner", cfg="rpass2")]
+ #[rustc_clean(label="hir_owner_items", cfg="rpass2")]
#[rustc_clean(label="typeck_tables_of", cfg="rpass2")]
fn baz() {
22; // no method call, traits in scope don't matter
#![crate_type = "rlib"]
#![feature(rustc_attrs)]
-#[rustc_clean(label="Hir", cfg="cfail2")]
-#[rustc_dirty(label="HirBody", cfg="cfail2")]
+#[rustc_clean(label = "hir_owner", cfg = "cfail2")]
+#[rustc_dirty(label = "hir_owner_items", cfg = "cfail2")]
pub fn foo() {
#[cfg(cfail1)]
- pub fn baz() { } // order is different...
+ pub fn baz() {} // order is different...
- #[rustc_clean(label="Hir", cfg="cfail2")]
- #[rustc_clean(label="HirBody", cfg="cfail2")]
- pub fn bar() { } // but that doesn't matter.
+ // FIXME: Make "hir_owner" use `rustc_clean` here. Currently "hir_owner" includes a reference to
+ // the parent node, which is the statement holding this item. Changing the position of
+ // `bar` in `foo` will update that reference and make `hir_owner(bar)` dirty.
+ #[rustc_dirty(label = "hir_owner", cfg = "cfail2")]
+ #[rustc_clean(label = "hir_owner_items", cfg = "cfail2")]
+ pub fn bar() {} // but that doesn't matter.
#[cfg(cfail2)]
- pub fn baz() { } // order is different...
+ pub fn baz() {} // order is different...
- pub fn bap() { } // neither does adding a new item
+ pub fn bap() {} // neither does adding a new item
}
#[cfg(rpass3)]
use mod2::Foo;
- #[rustc_clean(label="Hir", cfg="rpass2")]
- #[rustc_clean(label="HirBody", cfg="rpass2")]
- #[rustc_clean(label="Hir", cfg="rpass3")]
- #[rustc_dirty(label="HirBody", cfg="rpass3")]
+ #[rustc_clean(label="hir_owner", cfg="rpass2")]
+ #[rustc_clean(label="hir_owner_items", cfg="rpass2")]
+ #[rustc_clean(label="hir_owner", cfg="rpass3")]
+ #[rustc_dirty(label="hir_owner_items", cfg="rpass3")]
fn in_expr() {
Foo(0);
}
- #[rustc_clean(label="Hir", cfg="rpass2")]
- #[rustc_clean(label="HirBody", cfg="rpass2")]
- #[rustc_clean(label="Hir", cfg="rpass3")]
- #[rustc_dirty(label="HirBody", cfg="rpass3")]
+ #[rustc_clean(label="hir_owner", cfg="rpass2")]
+ #[rustc_clean(label="hir_owner_items", cfg="rpass2")]
+ #[rustc_clean(label="hir_owner", cfg="rpass3")]
+ #[rustc_dirty(label="hir_owner_items", cfg="rpass3")]
fn in_type() {
test::<Foo>();
}
// Regression test for #34991: an ICE occurred here because we inline
// some of the vector routines and give them a local def-id `X`. This
-// got hashed after codegen (`Hir(X)`). When we load back up, we get an
+// got hashed after codegen (`hir_owner(X)`). When we load back up, we get an
// error because the `X` is remapped to the original def-id (in
// libstd), and we can't hash a HIR node from std.
--- /dev/null
+// revisions: rpass1 rpass2
+
+#![allow(unused_imports)]
+
+#[macro_export]
+macro_rules! a_macro {
+ () => {};
+}
+
+#[cfg(rpass1)]
+use a_macro as same_name;
+
+mod same_name {}
+
+mod needed_mod {
+ fn _crash() {
+ use super::same_name;
+ }
+}
+
+fn main() {}
#![feature(rustc_attrs)]
-#[rustc_clean(label="Hir", cfg="rpass2")]
-#[rustc_clean(label="HirBody", cfg="rpass2")]
+#[rustc_clean(label="hir_owner", cfg="rpass2")]
+#[rustc_clean(label="hir_owner_items", cfg="rpass2")]
fn line_same() {
let _ = line!();
}
-#[rustc_clean(label="Hir", cfg="rpass2")]
-#[rustc_clean(label="HirBody", cfg="rpass2")]
+#[rustc_clean(label="hir_owner", cfg="rpass2")]
+#[rustc_clean(label="hir_owner_items", cfg="rpass2")]
fn col_same() {
let _ = column!();
}
-#[rustc_clean(label="Hir", cfg="rpass2")]
-#[rustc_clean(label="HirBody", cfg="rpass2")]
+#[rustc_clean(label="hir_owner", cfg="rpass2")]
+#[rustc_clean(label="hir_owner_items", cfg="rpass2")]
fn file_same() {
let _ = file!();
}
-#[rustc_clean(label="Hir", cfg="rpass2")]
-#[rustc_dirty(label="HirBody", cfg="rpass2")]
+#[rustc_clean(label="hir_owner", cfg="rpass2")]
+#[rustc_dirty(label="hir_owner_items", cfg="rpass2")]
fn line_different() {
#[cfg(rpass1)]
{
}
}
-#[rustc_clean(label="Hir", cfg="rpass2")]
-#[rustc_dirty(label="HirBody", cfg="rpass2")]
+#[rustc_clean(label="hir_owner", cfg="rpass2")]
+#[rustc_dirty(label="hir_owner_items", cfg="rpass2")]
fn col_different() {
#[cfg(rpass1)]
{
-#[rustc_clean(label="Hir", cfg="rpass2")]
+#[rustc_clean(label="hir_owner", cfg="rpass2")]
pub struct SomeType {
pub x: u32,
pub y: i64,
-#[rustc_clean(label="Hir", cfg="rpass2")]
+#[rustc_clean(label="hir_owner", cfg="rpass2")]
pub struct SomeOtherType {
pub a: i32,
pub b: u64,
pub fn main() {}
#[cfg(rpass2)]
-#[rustc_dirty(label="Hir", cfg="rpass2")]
-#[rustc_dirty(label="HirBody", cfg="rpass2")]
+#[rustc_dirty(label="hir_owner", cfg="rpass2")]
+#[rustc_dirty(label="hir_owner_items", cfg="rpass2")]
pub fn main() {}
}
#[cfg(cfail2)]
- #[rustc_dirty(label="HirBody", cfg="cfail2")]
+ #[rustc_dirty(label="hir_owner_items", cfg="cfail2")]
#[rustc_dirty(label="optimized_mir", cfg="cfail2")]
pub fn x() {
println!("{}", "2");
// rust-lang/rust#59535:
//
-// This is analgous to cgu_invalidated_when_import_removed.rs, but it covers
+// This is analogous to cgu_invalidated_when_import_removed.rs, but it covers
// the other direction:
//
// We start with a call-graph like `[A] -> [B -> D] [C]` (where the letters are
// In cfail2, ThinLTO decides that foo() does not get inlined into main, and
// instead bar() gets inlined into foo(). But faulty logic in our incr.
// ThinLTO implementation thought that `main()` is unchanged and thus reused
- // the object file still containing a call to the now non-existant bar().
+ // the object file still containing a call to the now non-existent bar().
pub fn foo(){
bar()
}
fn main() {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
//[cfail2]~^ ERROR found unchecked `#[rustc_dirty]` / `#[rustc_clean]` attribute
{
// empty block
}
- #[rustc_clean(label="Hir", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail2")]
//[cfail2]~^ ERROR found unchecked `#[rustc_dirty]` / `#[rustc_clean]` attribute
{
// empty block
}
struct _Struct {
- #[rustc_dirty(label="Hir", cfg="cfail2")]
+ #[rustc_dirty(label="hir_owner", cfg="cfail2")]
//[cfail2]~^ ERROR found unchecked `#[rustc_dirty]` / `#[rustc_clean]` attribute
_field1: i32,
- #[rustc_clean(label="Hir", cfg="cfail2")]
+ #[rustc_clean(label="hir_owner", cfg="cfail2")]
//[cfail2]~^ ERROR found unchecked `#[rustc_dirty]` / `#[rustc_clean]` attribute
_field2: i32,
}
static Y: i32 = 42;
-static mut BAR: *const &'static i32 = [&Y].as_ptr();
+static mut BAR: *const &i32 = [&Y].as_ptr();
-static mut FOO: *const &'static i32 = [unsafe { &X }].as_ptr();
+static mut FOO: *const &i32 = [unsafe { &X }].as_ptr();
fn main() {}
// START rustc.FOO.PromoteTemps.before.mir
// bb0: {
// ...
-// _5 = const Scalar(alloc1+0) : &i32;
+// _5 = const {alloc1+0: &i32};
// _4 = &(*_5);
// _3 = [move _4];
// _2 = &_3;
-// _1 = move _2 as &[&'static i32] (Pointer(Unsize));
-// _0 = const core::slice::<impl [&'static i32]>::as_ptr(move _1) -> [return: bb2, unwind: bb1];
+// _1 = move _2 as &[&i32] (Pointer(Unsize));
+// _0 = const core::slice::<impl [&i32]>::as_ptr(move _1) -> [return: bb2, unwind: bb1];
// }
// ...
// bb2: {
// START rustc.BAR.PromoteTemps.before.mir
// bb0: {
// ...
-// _5 = const Scalar(alloc0+0) : &i32;
+// _5 = const {alloc0+0: &i32};
// _4 = &(*_5);
// _3 = [move _4];
// _2 = &_3;
-// _1 = move _2 as &[&'static i32] (Pointer(Unsize));
-// _0 = const core::slice::<impl [&'static i32]>::as_ptr(move _1) -> [return: bb2, unwind: bb1];
+// _1 = move _2 as &[&i32] (Pointer(Unsize));
+// _0 = const core::slice::<impl [&i32]>::as_ptr(move _1) -> [return: bb2, unwind: bb1];
// }
// ...
// bb2: {
// ...
// _6 = const BAR::promoted[0];
// _2 = &(*_6);
-// _1 = move _2 as &[&'static i32] (Pointer(Unsize));
-// _0 = const core::slice::<impl [&'static i32]>::as_ptr(move _1) -> [return: bb2, unwind: bb1];
+// _1 = move _2 as &[&i32] (Pointer(Unsize));
+// _0 = const core::slice::<impl [&i32]>::as_ptr(move _1) -> [return: bb2, unwind: bb1];
// }
// ...
// bb2: {
// ...
// _6 = const FOO::promoted[0];
// _2 = &(*_6);
-// _1 = move _2 as &[&'static i32] (Pointer(Unsize));
-// _0 = const core::slice::<impl [&'static i32]>::as_ptr(move _1) -> [return: bb2, unwind: bb1];
+// _1 = move _2 as &[&i32] (Pointer(Unsize));
+// _0 = const core::slice::<impl [&i32]>::as_ptr(move _1) -> [return: bb2, unwind: bb1];
// }
// ...
// bb2: {
// START rustc.main.ConstProp.after.mir
// bb0: {
// ...
-// _3 = const Scalar(0x01) : std::option::Option<bool>;
+// _3 = const {transmute(0x01): std::option::Option<bool>};
// _4 = const 1isize;
// switchInt(const 1isize) -> [1isize: bb2, otherwise: bb1];
// }
// START rustc.main.ConstProp.after.mir
// bb0: {
// ...
-// _3 = const Scalar(<ZST>) : ();
+// _3 = const ();
// _2 = (move _3, const 0u8, const 0u8);
// ...
// _1 = const encode(move _2) -> bb1;
// START rustc.main.ConstProp.before.mir
// bb0: {
// ...
-// _3 = const Scalar(alloc0+0) : &u8;
+// _3 = const {alloc0+0: &u8};
// _2 = (*_3);
// ...
-// _5 = const Scalar(alloc0+0) : &u8;
+// _5 = const {alloc0+0: &u8};
// _4 = (*_5);
// _1 = Add(move _2, move _4);
// ...
--- /dev/null
+//! Tests that generators that cannot return or unwind don't have unnecessary
+//! panic branches.
+
+// compile-flags: -Zno-landing-pads
+
+#![feature(generators, generator_trait)]
+
+struct HasDrop;
+
+impl Drop for HasDrop {
+ fn drop(&mut self) {}
+}
+
+fn callee() {}
+
+fn main() {
+ let _gen = |_x: u8| {
+ let _d = HasDrop;
+ loop {
+ yield;
+ callee();
+ }
+ };
+}
+
+// END RUST SOURCE
+
+// START rustc.main-{{closure}}.generator_resume.0.mir
+// bb0: {
+// ...
+// switchInt(move _11) -> [0u32: bb1, 3u32: bb5, otherwise: bb6];
+// }
+// ...
+// END rustc.main-{{closure}}.generator_resume.0.mir
--- /dev/null
+// ignore-wasm32-bare compiled with panic=abort by default
+
+// Ensure that there are no drop terminators in `unwrap<T>` (except the one along the cleanup
+// path).
+
+fn unwrap<T>(opt: Option<T>) -> T {
+ match opt {
+ Some(x) => x,
+ None => panic!(),
+ }
+}
+
+fn main() {
+ let _ = unwrap(Some(1i32));
+}
+
+// END RUST SOURCE
+// START rustc.unwrap.SimplifyCfg-elaborate-drops.after.mir
+// fn unwrap(_1: std::option::Option<T>) -> T {
+// ...
+// bb0: {
+// ...
+// switchInt(move _2) -> [0isize: bb2, 1isize: bb4, otherwise: bb3];
+// }
+// bb1 (cleanup): {
+// resume;
+// }
+// bb2: {
+// ...
+// const std::rt::begin_panic::<&str>(const "explicit panic") -> bb5;
+// }
+// bb3: {
+// unreachable;
+// }
+// bb4: {
+// ...
+// return;
+// }
+// bb5 (cleanup): {
+// drop(_1) -> bb1;
+// }
+// }
+// END rustc.unwrap.SimplifyCfg-elaborate-drops.after.mir
// goto -> bb7;
// }
// bb2: {
-// switchInt((*(*((_1 as Some).0: &'<empty> &'<empty> i32)))) -> [0i32: bb3, otherwise: bb1];
+// switchInt((*(*((_1 as Some).0: &&i32)))) -> [0i32: bb3, otherwise: bb1];
// }
// bb3: {
// goto -> bb4;
// }
// bb4: {
// _4 = &shallow _1;
-// _5 = &shallow ((_1 as Some).0: &'<empty> &'<empty> i32);
-// _6 = &shallow (*((_1 as Some).0: &'<empty> &'<empty> i32));
-// _7 = &shallow (*(*((_1 as Some).0: &'<empty> &'<empty> i32)));
+// _5 = &shallow ((_1 as Some).0: &&i32);
+// _6 = &shallow (*((_1 as Some).0: &&i32));
+// _7 = &shallow (*(*((_1 as Some).0: &&i32)));
// StorageLive(_8);
// _8 = _2;
// switchInt(move _8) -> [false: bb6, otherwise: bb5];
// goto -> bb7;
// }
// bb2: {
-// switchInt((*(*((_1 as Some).0: &'<empty> &'<empty> i32)))) -> [0i32: bb3, otherwise: bb1];
+// switchInt((*(*((_1 as Some).0: &&i32)))) -> [0i32: bb3, otherwise: bb1];
// }
// bb3: {
// goto -> bb4;
// compile-flags: -C overflow-checks=no
-fn use_zst(_: ((), ())) { }
+fn use_zst(_: ((), ())) {}
struct Temp {
- x: u8
+ x: u8,
}
-fn use_u8(_: u8) { }
+fn use_u8(_: u8) {}
fn main() {
let ((), ()) = ((), ());
use_zst(((), ()));
- use_u8((Temp { x : 40 }).x + 2);
+ use_u8((Temp { x: 40 }).x + 2);
}
// END RUST SOURCE
// bb0: {
// StorageLive(_1);
// StorageLive(_2);
-// _2 = const Scalar(<ZST>) : ();
+// _2 = const ();
// StorageLive(_3);
-// _3 = const Scalar(<ZST>) : ();
-// _1 = const Scalar(<ZST>) : ((), ());
+// _3 = const ();
+// _1 = const {transmute(()): ((), ())};
// StorageDead(_3);
// StorageDead(_2);
// StorageDead(_1);
// StorageLive(_4);
// StorageLive(_6);
-// _6 = const Scalar(<ZST>) : ();
+// _6 = const ();
// StorageLive(_7);
-// _7 = const Scalar(<ZST>) : ();
+// _7 = const ();
// StorageDead(_7);
// StorageDead(_6);
-// _4 = const use_zst(const Scalar(<ZST>) : ((), ())) -> bb1;
+// _4 = const use_zst(const {transmute(()): ((), ())}) -> bb1;
// }
// bb1: {
// StorageDead(_4);
// StorageLive(_8);
// StorageLive(_10);
// StorageLive(_11);
-// _11 = const Scalar(0x28) : Temp;
+// _11 = const {transmute(0x28) : Temp};
// _10 = const 40u8;
// StorageDead(_10);
// _8 = const use_u8(const 42u8) -> bb2;
// }
// bb0: {
// StorageLive(_1);
-// _1 = const use_zst(const Scalar(<ZST>) : ((), ())) -> bb1;
+// _1 = const use_zst(const {transmute(()): ((), ())}) -> bb1;
// }
// bb1: {
// StorageDead(_1);
// END RUST SOURCE
// START rustc.XXX.mir_map.0.mir
-// let mut _0: &'static Foo;
-// let _1: &'static Foo;
+// let mut _0: &Foo;
+// let _1: &Foo;
// let _2: Foo;
-// let mut _3: &'static [(u32, u32)];
-// let mut _4: &'static [(u32, u32); 42];
-// let _5: &'static [(u32, u32); 42];
+// let mut _3: &[(u32, u32)];
+// let mut _4: &[(u32, u32); 42];
+// let _5: &[(u32, u32); 42];
// let _6: [(u32, u32); 42];
// let mut _7: (u32, u32);
// let mut _8: (u32, u32);
// _6 = [move _7, move _8, move _9, move _10, move _11, move _12, move _13, move _14, move _15, move _16, move _17, move _18, move _19, move _20, move _21, move _22, move _23, move _24, move _25, move _26, move _27, move _28, move _29, move _30, move _31, move _32, move _33, move _34, move _35, move _36, move _37, move _38, move _39, move _40, move _41, move _42, move _43, move _44, move _45, move _46, move _47, move _48];
// _5 = &_6;
// _4 = &(*_5);
-// _3 = move _4 as &'static [(u32, u32)] (Pointer(Unsize));
+// _3 = move _4 as &[(u32, u32)] (Pointer(Unsize));
// _2 = Foo { tup: const "hi", data: move _3 };
// _1 = &_2;
// _0 = &(*_1);
#![feature(rustc_attrs)]
fn main() {
- #![rustc_dummy("hi", 1, 2, 1.012, pi = 3.14, bye, name ("John"))]
+ #![rustc_dummy("hi", 1, 2, 1.012, pi = 3.14, bye, name("John"))]
#[rustc_dummy = 8]
fn f() { }
mac! {
struct S { field1 : u8, field2 : u16, } impl Clone for S
{
- fn clone () -> S
+ fn clone() -> S
{
panic ! () ;
}
mac! {
- a
- (aaaaaaaa aaaaaaaa aaaaaaaa aaaaaaaa aaaaaaaa aaaaaaaa aaaaaaaa aaaaaaaa
- aaaaaaaa aaaaaaaa) a
+ a(aaaaaaaa aaaaaaaa aaaaaaaa aaaaaaaa aaaaaaaa aaaaaaaa aaaaaaaa aaaaaaaa
+ aaaaaaaa aaaaaaaa) a
[aaaaaaaa aaaaaaaa aaaaaaaa aaaaaaaa aaaaaaaa aaaaaaaa aaaaaaaa aaaaaaaa
aaaaaaaa aaaaaaaa] a
{
--- /dev/null
+// pp-exact
+
+#[cfg(FALSE)]
+fn simple_attr() {
+
+ #[attr]
+ if true { }
+
+ #[allow_warnings]
+ if true { }
+}
+
+#[cfg(FALSE)]
+fn if_else_chain() {
+
+ #[first_attr]
+ if true { } else if false { } else { }
+}
+
+#[cfg(FALSE)]
+fn if_let() {
+
+ #[attr]
+ if let Some(_) = Some(true) { }
+}
+
+#[cfg(FALSE)]
+fn let_attr_if() {
+ let _ = #[attr] if let _ = 0 { };
+ let _ = #[attr] if true { };
+
+ let _ = #[attr] if let _ = 0 { } else { };
+ let _ = #[attr] if true { } else { };
+}
+
+
+fn main() { }
// pp-exact
+// pretty-compare-only
// The next line should not be expanded
((::alloc::fmt::format as
for<'r> fn(std::fmt::Arguments<'r>) -> std::string::String {std::fmt::format})(((::core::fmt::Arguments::new_v1
as
- fn(&[&str], &[std::fmt::ArgumentV1<'_>]) -> std::fmt::Arguments<'_> {std::fmt::Arguments::<'_>::new_v1})((&([("test"
- as
- &'static str)]
- as
- [&str; 1])
- as
- &[&str; 1]),
- (&(match (()
- as
- ())
- {
- ()
- =>
- ([]
- as
- [std::fmt::ArgumentV1<'_>; 0]),
- }
- as
- [std::fmt::ArgumentV1<'_>; 0])
- as
- &[std::fmt::ArgumentV1<'_>; 0]))
+ fn(&[&str], &[std::fmt::ArgumentV1]) -> std::fmt::Arguments {std::fmt::Arguments::new_v1})((&([("test"
+ as
+ &str)]
+ as
+ [&str; 1])
+ as
+ &[&str; 1]),
+ (&(match (()
+ as
+ ())
+ {
+ ()
+ =>
+ ([]
+ as
+ [std::fmt::ArgumentV1; 0]),
+ }
+ as
+ [std::fmt::ArgumentV1; 0])
+ as
+ &[std::fmt::ArgumentV1; 0]))
as
- std::fmt::Arguments<'_>))
+ std::fmt::Arguments))
as std::string::String);
(res as std::string::String)
} as std::string::String);
#[cfg(debug_assertions)]
field: 0,
- #[cfg(not (debug_assertions))]
+ #[cfg(not(debug_assertions))]
field: 1,};
extern crate rustc_hir;
extern crate rustc_target;
extern crate rustc_driver;
+extern crate rustc_session;
extern crate rustc_span;
use std::any::Any;
use std::sync::Arc;
use std::path::Path;
-use rustc_span::symbol::Symbol;
-use rustc::session::Session;
-use rustc::session::config::OutputFilenames;
use rustc::ty::TyCtxt;
use rustc::ty::query::Providers;
use rustc::middle::cstore::{EncodedMetadata, MetadataLoader, MetadataLoaderDyn};
use rustc_codegen_utils::codegen_backend::CodegenBackend;
use rustc_data_structures::sync::MetadataRef;
use rustc_data_structures::owning_ref::OwningRef;
+use rustc_session::Session;
+use rustc_session::config::OutputFilenames;
+use rustc_span::symbol::Symbol;
use rustc_target::spec::Target;
pub struct NoLlvmMetadataLoader;
outputs: &OutputFilenames,
) -> Result<(), ErrorReported> {
use std::io::Write;
- use rustc::session::config::CrateType;
+ use rustc_session::config::CrateType;
use rustc_codegen_utils::link::out_filename;
let crate_name = codegen_results.downcast::<Symbol>()
.expect("in link: codegen_results is not a Symbol");
#![feature(rustc_private)]
-extern crate rustc;
extern crate rustc_interface;
extern crate rustc_driver;
+extern crate rustc_session;
extern crate rustc_span;
-use rustc::session::DiagnosticOutput;
-use rustc::session::config::{Input, Options,
- OutputType, OutputTypes};
+use rustc_session::DiagnosticOutput;
+use rustc_session::config::{Input, Options, OutputType, OutputTypes};
use rustc_interface::interface;
use rustc_span::source_map::FileName;
--- /dev/null
+-include ../tools.mk
+
+# This test previously triggered a linker failure whose root cause was
+# similar to the one found in issue #69368.
+#
+# The crate that provides the oom lang item is missing some other lang
+# items. This is necessary to prevent the use of start-group / end-group.
+#
+# The weak lang items are defined in separate compilation units, so
+# that the linker can omit them if they are not used.
+#
+# The crates that need those weak lang items are dependencies of the
+# crates that provide them.
+
+all:
+ $(RUSTC) a.rs
+ $(RUSTC) b.rs
+ $(RUSTC) c.rs
--- /dev/null
+#![crate_type = "rlib"]
+#![feature(lang_items)]
+#![feature(panic_unwind)]
+#![no_std]
+
+extern crate panic_unwind;
+
+#[panic_handler]
+pub fn panic_handler(_: &core::panic::PanicInfo) -> ! {
+ loop {}
+}
+
+#[no_mangle]
+extern "C" fn __rust_drop_panic() -> ! {
+ loop {}
+}
--- /dev/null
+#![crate_type = "rlib"]
+#![feature(alloc_error_handler)]
+#![no_std]
+
+#[alloc_error_handler]
+pub fn error_handler(_: core::alloc::Layout) -> ! {
+ panic!();
+}
--- /dev/null
+#![crate_type = "bin"]
+#![feature(start)]
+#![no_std]
+
+extern crate alloc;
+extern crate a;
+extern crate b;
+
+use alloc::vec::Vec;
+use core::alloc::*;
+
+struct Allocator;
+
+unsafe impl GlobalAlloc for Allocator {
+ unsafe fn alloc(&self, _: Layout) -> *mut u8 {
+ loop {}
+ }
+
+ unsafe fn dealloc(&self, _: *mut u8, _: Layout) {
+ loop {}
+ }
+}
+
+#[global_allocator]
+static ALLOCATOR: Allocator = Allocator;
+
+#[start]
+fn main(argc: isize, _argv: *const *const u8) -> isize {
+ let mut v = Vec::new();
+ for i in 0..argc {
+ v.push(i);
+ }
+ v.iter().sum()
+}
--- /dev/null
+const QUERY = 'struct:"string"';
+
+const EXPECTED = {
+ 'in_args': [
+ { 'path': 'std::string::String', 'name': 'ne' },
+ ],
+ 'returned': [
+ { 'path': 'std::string::String', 'name': 'add' },
+ ],
+};
--- /dev/null
+const QUERY = 'struct:string';
+
+const EXPECTED = {
+ 'in_args': [
+ { 'path': 'std::string::String', 'name': 'ne' },
+ ],
+ 'returned': [
+ { 'path': 'std::string::String', 'name': 'add' },
+ ],
+};
--- /dev/null
+// compile-flags:-Z unstable-options --output-format html --show-coverage
+
+/// Foo
+pub struct Xo;
--- /dev/null
+error: html output format isn't supported for the --show-coverage option
+
--- /dev/null
+// build-pass
+// compile-flags:-Z unstable-options --output-format json --show-coverage
+
+pub mod foo {
+ /// Hello!
+ pub struct Foo;
+ /// Bar
+ pub enum Bar { A }
+}
+
+/// X
+pub struct X;
+
+/// Bar
+pub mod bar {
+ /// bar
+ pub struct Bar;
+ /// X
+ pub enum X { Y }
+}
+
+/// yolo
+pub enum Yolo { X }
+
+pub struct Xo<T: Clone> {
+ x: T,
+}
--- /dev/null
+{"$DIR/json.rs":{"total":13,"with_docs":7}}
// This previously triggered an ICE.
pub(in crate::r#mod) fn main() {}
-//~^ ERROR expected module, found unresolved item
+//~^ ERROR failed to resolve: maybe a missing crate `r#mod`
-error[E0577]: expected module, found unresolved item `crate::r#mod`
- --> $DIR/issue-61732.rs:3:8
+error[E0433]: failed to resolve: maybe a missing crate `r#mod`?
+ --> $DIR/issue-61732.rs:3:15
|
LL | pub(in crate::r#mod) fn main() {}
- | ^^^^^^^^^^^^ not a module
+ | ^^^^^ maybe a missing crate `r#mod`?
error: Compilation failed, aborting rustdoc
error: aborting due to 2 previous errors
-For more information about this error, try `rustc --explain E0577`.
+For more information about this error, try `rustc --explain E0433`.
pub enum Foo {
Bar,
}
+
+// @has foo/struct.Repr.html '//*[@class="docblock attributes top-attr"]' '#[repr(C, align(8))]'
+#[repr(C, align(8))]
+pub struct Repr;
--- /dev/null
+// compile-flags: --crate-version=<script>alert("hi")</script> -Z unstable-options
+
+#![crate_name = "foo"]
+
+// @has 'foo/index.html' '//div[@class="block version"]/p' 'Version <script>alert("hi")</script>'
+// @has 'foo/all.html' '//div[@class="block version"]/p' 'Version <script>alert("hi")</script>'
+++ /dev/null
-#![feature(doc_spotlight)]
-
-pub struct Wrapper<T> {
- inner: T,
-}
-
-impl<T: SomeTrait> SomeTrait for Wrapper<T> {}
-
-#[doc(spotlight)]
-pub trait SomeTrait {
- // @has doc_spotlight/trait.SomeTrait.html
- // @has - '//code[@class="content"]' 'impl<T: SomeTrait> SomeTrait for Wrapper<T>'
- fn wrap_me(self) -> Wrapper<Self> where Self: Sized {
- Wrapper {
- inner: self,
- }
- }
-}
-
-pub struct SomeStruct;
-impl SomeTrait for SomeStruct {}
-
-impl SomeStruct {
- // @has doc_spotlight/struct.SomeStruct.html
- // @has - '//code[@class="content"]' 'impl SomeTrait for SomeStruct'
- // @has - '//code[@class="content"]' 'impl<T: SomeTrait> SomeTrait for Wrapper<T>'
- pub fn new() -> SomeStruct {
- SomeStruct
- }
-}
-
-// @has doc_spotlight/fn.bare_fn.html
-// @has - '//code[@class="content"]' 'impl SomeTrait for SomeStruct'
-pub fn bare_fn() -> SomeStruct {
- SomeStruct
-}
extern crate issue_26606_macro;
// @has issue_26606/constant.FOO.html
-// @!has - '//a/@href' '../src/'
+// @has - '//a/@href' '../src/issue_26606/auxiliary/issue-26606-macro.rs.html#3'
make_item!(FOO);
#![feature(rustc_private)]
-// We're testing linkage visibility; the compiler warns us, but we want to
-// do the runtime check that these functions aren't exported.
-#![allow(private_no_mangle_fns)]
-
extern crate rustc_metadata;
use rustc_metadata::dynamic_lib::DynamicLibrary;
#[no_mangle]
-pub fn foo() { bar(); }
+pub fn foo() {
+ bar();
+}
pub fn foo2<T>() {
fn bar2() {
}
#[no_mangle]
-fn bar() { }
+fn bar() {}
#[allow(dead_code)]
#[no_mangle]
-fn baz() { }
+fn baz() {}
pub fn test() {
let lib = DynamicLibrary::open(None).unwrap();
impl<'a, 'tcx> LateLintPass<'a, 'tcx> for $struct {
fn check_crate(&mut self, cx: &LateContext, krate: &rustc_hir::Crate) {
$(
- if !attr::contains_name(&krate.attrs, $attr) {
+ if !attr::contains_name(&krate.item.attrs, $attr) {
cx.lint(CRATE_NOT_OKAY, |lint| {
let msg = format!("crate is not marked with #![{}]", $attr);
- lint.build(&msg).set_span(krate.span).emit()
+ lint.build(&msg).set_span(krate.item.span).emit()
});
}
)*
impl<'a, 'tcx> LateLintPass<'a, 'tcx> for Pass {
fn check_crate(&mut self, cx: &LateContext, krate: &rustc_hir::Crate) {
- if !attr::contains_name(&krate.attrs, Symbol::intern("crate_okay")) {
+ if !attr::contains_name(&krate.item.attrs, Symbol::intern("crate_okay")) {
cx.lint(CRATE_NOT_OKAY, |lint| {
lint.build("crate is not marked with #![crate_okay]")
- .set_span(krate.span)
+ .set_span(krate.item.span)
.emit()
});
}
unsafe {
let layout = Layout::from_size_align(4, 2).unwrap();
- let ptr = Global.alloc(layout.clone()).unwrap();
+ let (ptr, _) = Global.alloc(layout.clone()).unwrap();
helper::work_with(&ptr);
assert_eq!(HITS.load(Ordering::SeqCst), n + 1);
Global.dealloc(ptr, layout.clone());
drop(s);
assert_eq!(HITS.load(Ordering::SeqCst), n + 4);
- let ptr = System.alloc(layout.clone()).unwrap();
+ let (ptr, _) = System.alloc(layout.clone()).unwrap();
assert_eq!(HITS.load(Ordering::SeqCst), n + 4);
helper::work_with(&ptr);
System.dealloc(ptr, layout);
let n = GLOBAL.0.load(Ordering::SeqCst);
let layout = Layout::from_size_align(4, 2).unwrap();
- let ptr = Global.alloc(layout.clone()).unwrap();
+ let (ptr, _) = Global.alloc(layout.clone()).unwrap();
helper::work_with(&ptr);
assert_eq!(GLOBAL.0.load(Ordering::SeqCst), n + 1);
Global.dealloc(ptr, layout.clone());
assert_eq!(GLOBAL.0.load(Ordering::SeqCst), n + 2);
- let ptr = System.alloc(layout.clone()).unwrap();
+ let (ptr, _) = System.alloc(layout.clone()).unwrap();
assert_eq!(GLOBAL.0.load(Ordering::SeqCst), n + 2);
helper::work_with(&ptr);
System.dealloc(ptr, layout);
+++ /dev/null
-// Tests that anonymous parameters are a hard error in edition 2018.
-
-// edition:2018
-
-trait T {
- fn foo(i32); //~ expected one of `:`, `@`, or `|`, found `)`
-
- fn bar_with_default_impl(String, String) {}
- //~^ ERROR expected one of `:`
- //~| ERROR expected one of `:`
-
- // do not complain about missing `b`
- fn baz(a:usize, b, c: usize) -> usize { //~ ERROR expected one of `:`
- a + b + c
- }
-}
-
-fn main() {}
+++ /dev/null
-error: expected one of `:`, `@`, or `|`, found `)`
- --> $DIR/anon-params-denied-2018.rs:6:15
- |
-LL | fn foo(i32);
- | ^ expected one of `:`, `@`, or `|`
- |
- = note: anonymous parameters are removed in the 2018 edition (see RFC 1685)
-help: if this is a `self` type, give it a parameter name
- |
-LL | fn foo(self: i32);
- | ^^^^^^^^^
-help: if this was a parameter name, give it a type
- |
-LL | fn foo(i32: TypeName);
- | ^^^^^^^^^^^^^
-help: if this is a type, explicitly ignore the parameter name
- |
-LL | fn foo(_: i32);
- | ^^^^^^
-
-error: expected one of `:`, `@`, or `|`, found `,`
- --> $DIR/anon-params-denied-2018.rs:8:36
- |
-LL | fn bar_with_default_impl(String, String) {}
- | ^ expected one of `:`, `@`, or `|`
- |
- = note: anonymous parameters are removed in the 2018 edition (see RFC 1685)
-help: if this is a `self` type, give it a parameter name
- |
-LL | fn bar_with_default_impl(self: String, String) {}
- | ^^^^^^^^^^^^
-help: if this was a parameter name, give it a type
- |
-LL | fn bar_with_default_impl(String: TypeName, String) {}
- | ^^^^^^^^^^^^^^^^
-help: if this is a type, explicitly ignore the parameter name
- |
-LL | fn bar_with_default_impl(_: String, String) {}
- | ^^^^^^^^^
-
-error: expected one of `:`, `@`, or `|`, found `)`
- --> $DIR/anon-params-denied-2018.rs:8:44
- |
-LL | fn bar_with_default_impl(String, String) {}
- | ^ expected one of `:`, `@`, or `|`
- |
- = note: anonymous parameters are removed in the 2018 edition (see RFC 1685)
-help: if this was a parameter name, give it a type
- |
-LL | fn bar_with_default_impl(String, String: TypeName) {}
- | ^^^^^^^^^^^^^^^^
-help: if this is a type, explicitly ignore the parameter name
- |
-LL | fn bar_with_default_impl(String, _: String) {}
- | ^^^^^^^^^
-
-error: expected one of `:`, `@`, or `|`, found `,`
- --> $DIR/anon-params-denied-2018.rs:13:22
- |
-LL | fn baz(a:usize, b, c: usize) -> usize {
- | ^ expected one of `:`, `@`, or `|`
- |
- = note: anonymous parameters are removed in the 2018 edition (see RFC 1685)
-help: if this was a parameter name, give it a type
- |
-LL | fn baz(a:usize, b: TypeName, c: usize) -> usize {
- | ^^^^^^^^^^^
-help: if this is a type, explicitly ignore the parameter name
- |
-LL | fn baz(a:usize, _: b, c: usize) -> usize {
- | ^^^^
-
-error: aborting due to 4 previous errors
-
+++ /dev/null
-#![warn(anonymous_parameters)]
-// Test for the anonymous_parameters deprecation lint (RFC 1685)
-
-// build-pass (FIXME(62277): could be check-pass?)
-// edition:2015
-// run-rustfix
-
-trait T {
- fn foo(_: i32); //~ WARNING anonymous parameters are deprecated
- //~| WARNING hard error
-
- fn bar_with_default_impl(_: String, _: String) {}
- //~^ WARNING anonymous parameters are deprecated
- //~| WARNING hard error
- //~| WARNING anonymous parameters are deprecated
- //~| WARNING hard error
-}
-
-fn main() {}
+++ /dev/null
-#![warn(anonymous_parameters)]
-// Test for the anonymous_parameters deprecation lint (RFC 1685)
-
-// build-pass (FIXME(62277): could be check-pass?)
-// edition:2015
-// run-rustfix
-
-trait T {
- fn foo(i32); //~ WARNING anonymous parameters are deprecated
- //~| WARNING hard error
-
- fn bar_with_default_impl(String, String) {}
- //~^ WARNING anonymous parameters are deprecated
- //~| WARNING hard error
- //~| WARNING anonymous parameters are deprecated
- //~| WARNING hard error
-}
-
-fn main() {}
+++ /dev/null
-warning: anonymous parameters are deprecated and will be removed in the next edition.
- --> $DIR/anon-params-deprecated.rs:9:12
- |
-LL | fn foo(i32);
- | ^^^ help: try naming the parameter or explicitly ignoring it: `_: i32`
- |
-note: the lint level is defined here
- --> $DIR/anon-params-deprecated.rs:1:9
- |
-LL | #![warn(anonymous_parameters)]
- | ^^^^^^^^^^^^^^^^^^^^
- = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in the 2018 edition!
- = note: for more information, see issue #41686 <https://github.com/rust-lang/rust/issues/41686>
-
-warning: anonymous parameters are deprecated and will be removed in the next edition.
- --> $DIR/anon-params-deprecated.rs:12:30
- |
-LL | fn bar_with_default_impl(String, String) {}
- | ^^^^^^ help: try naming the parameter or explicitly ignoring it: `_: String`
- |
- = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in the 2018 edition!
- = note: for more information, see issue #41686 <https://github.com/rust-lang/rust/issues/41686>
-
-warning: anonymous parameters are deprecated and will be removed in the next edition.
- --> $DIR/anon-params-deprecated.rs:12:38
- |
-LL | fn bar_with_default_impl(String, String) {}
- | ^^^^^^ help: try naming the parameter or explicitly ignoring it: `_: String`
- |
- = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in the 2018 edition!
- = note: for more information, see issue #41686 <https://github.com/rust-lang/rust/issues/41686>
-
--- /dev/null
+// Tests that anonymous parameters are a hard error in edition 2018.
+
+// edition:2018
+
+trait T {
+ fn foo(i32); //~ expected one of `:`, `@`, or `|`, found `)`
+
+ fn bar_with_default_impl(String, String) {}
+ //~^ ERROR expected one of `:`
+ //~| ERROR expected one of `:`
+
+ // do not complain about missing `b`
+ fn baz(a:usize, b, c: usize) -> usize { //~ ERROR expected one of `:`
+ a + b + c
+ }
+}
+
+fn main() {}
--- /dev/null
+error: expected one of `:`, `@`, or `|`, found `)`
+ --> $DIR/anon-params-denied-2018.rs:6:15
+ |
+LL | fn foo(i32);
+ | ^ expected one of `:`, `@`, or `|`
+ |
+ = note: anonymous parameters are removed in the 2018 edition (see RFC 1685)
+help: if this is a `self` type, give it a parameter name
+ |
+LL | fn foo(self: i32);
+ | ^^^^^^^^^
+help: if this was a parameter name, give it a type
+ |
+LL | fn foo(i32: TypeName);
+ | ^^^^^^^^^^^^^
+help: if this is a type, explicitly ignore the parameter name
+ |
+LL | fn foo(_: i32);
+ | ^^^^^^
+
+error: expected one of `:`, `@`, or `|`, found `,`
+ --> $DIR/anon-params-denied-2018.rs:8:36
+ |
+LL | fn bar_with_default_impl(String, String) {}
+ | ^ expected one of `:`, `@`, or `|`
+ |
+ = note: anonymous parameters are removed in the 2018 edition (see RFC 1685)
+help: if this is a `self` type, give it a parameter name
+ |
+LL | fn bar_with_default_impl(self: String, String) {}
+ | ^^^^^^^^^^^^
+help: if this was a parameter name, give it a type
+ |
+LL | fn bar_with_default_impl(String: TypeName, String) {}
+ | ^^^^^^^^^^^^^^^^
+help: if this is a type, explicitly ignore the parameter name
+ |
+LL | fn bar_with_default_impl(_: String, String) {}
+ | ^^^^^^^^^
+
+error: expected one of `:`, `@`, or `|`, found `)`
+ --> $DIR/anon-params-denied-2018.rs:8:44
+ |
+LL | fn bar_with_default_impl(String, String) {}
+ | ^ expected one of `:`, `@`, or `|`
+ |
+ = note: anonymous parameters are removed in the 2018 edition (see RFC 1685)
+help: if this was a parameter name, give it a type
+ |
+LL | fn bar_with_default_impl(String, String: TypeName) {}
+ | ^^^^^^^^^^^^^^^^
+help: if this is a type, explicitly ignore the parameter name
+ |
+LL | fn bar_with_default_impl(String, _: String) {}
+ | ^^^^^^^^^
+
+error: expected one of `:`, `@`, or `|`, found `,`
+ --> $DIR/anon-params-denied-2018.rs:13:22
+ |
+LL | fn baz(a:usize, b, c: usize) -> usize {
+ | ^ expected one of `:`, `@`, or `|`
+ |
+ = note: anonymous parameters are removed in the 2018 edition (see RFC 1685)
+help: if this was a parameter name, give it a type
+ |
+LL | fn baz(a:usize, b: TypeName, c: usize) -> usize {
+ | ^^^^^^^^^^^
+help: if this is a type, explicitly ignore the parameter name
+ |
+LL | fn baz(a:usize, _: b, c: usize) -> usize {
+ | ^^^^
+
+error: aborting due to 4 previous errors
+
--- /dev/null
+#![warn(anonymous_parameters)]
+// Test for the anonymous_parameters deprecation lint (RFC 1685)
+
+// build-pass (FIXME(62277): could be check-pass?)
+// edition:2015
+// run-rustfix
+
+trait T {
+ fn foo(_: i32); //~ WARNING anonymous parameters are deprecated
+ //~| WARNING hard error
+
+ fn bar_with_default_impl(_: String, _: String) {}
+ //~^ WARNING anonymous parameters are deprecated
+ //~| WARNING hard error
+ //~| WARNING anonymous parameters are deprecated
+ //~| WARNING hard error
+}
+
+fn main() {}
--- /dev/null
+#![warn(anonymous_parameters)]
+// Test for the anonymous_parameters deprecation lint (RFC 1685)
+
+// build-pass (FIXME(62277): could be check-pass?)
+// edition:2015
+// run-rustfix
+
+trait T {
+ fn foo(i32); //~ WARNING anonymous parameters are deprecated
+ //~| WARNING hard error
+
+ fn bar_with_default_impl(String, String) {}
+ //~^ WARNING anonymous parameters are deprecated
+ //~| WARNING hard error
+ //~| WARNING anonymous parameters are deprecated
+ //~| WARNING hard error
+}
+
+fn main() {}
--- /dev/null
+warning: anonymous parameters are deprecated and will be removed in the next edition.
+ --> $DIR/anon-params-deprecated.rs:9:12
+ |
+LL | fn foo(i32);
+ | ^^^ help: try naming the parameter or explicitly ignoring it: `_: i32`
+ |
+note: the lint level is defined here
+ --> $DIR/anon-params-deprecated.rs:1:9
+ |
+LL | #![warn(anonymous_parameters)]
+ | ^^^^^^^^^^^^^^^^^^^^
+ = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in the 2018 edition!
+ = note: for more information, see issue #41686 <https://github.com/rust-lang/rust/issues/41686>
+
+warning: anonymous parameters are deprecated and will be removed in the next edition.
+ --> $DIR/anon-params-deprecated.rs:12:30
+ |
+LL | fn bar_with_default_impl(String, String) {}
+ | ^^^^^^ help: try naming the parameter or explicitly ignoring it: `_: String`
+ |
+ = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in the 2018 edition!
+ = note: for more information, see issue #41686 <https://github.com/rust-lang/rust/issues/41686>
+
+warning: anonymous parameters are deprecated and will be removed in the next edition.
+ --> $DIR/anon-params-deprecated.rs:12:38
+ |
+LL | fn bar_with_default_impl(String, String) {}
+ | ^^^^^^ help: try naming the parameter or explicitly ignoring it: `_: String`
+ |
+ = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in the 2018 edition!
+ = note: for more information, see issue #41686 <https://github.com/rust-lang/rust/issues/41686>
+
--- /dev/null
+// check-pass
+// edition:2018
+// aux-build:anon-params-edition-hygiene.rs
+
+#[macro_use]
+extern crate anon_params_edition_hygiene;
+
+generate_trait_2015!(u8);
+
+fn main() {}
--- /dev/null
+// edition:2015
+
+#[macro_export]
+macro_rules! generate_trait_2015 {
+ ($Type: ident) => {
+ trait Trait {
+ fn method($Type) {}
+ }
+ };
+}
+
+fn main() {}
--- /dev/null
+// build-fail
+// ignore-emscripten no asm! support
+
+#![feature(asm)]
+
+fn main() {
+ unsafe {
+ asm!("nop" : "+r"("r15"));
+ //~^ malformed inline assembly
+ }
+}
--- /dev/null
+error[E0668]: malformed inline assembly
+ --> $DIR/issue-62046.rs:8:9
+ |
+LL | asm!("nop" : "+r"("r15"));
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^
+ |
+ = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info)
+
+error: aborting due to previous error
+
+For more information about this error, try `rustc --explain E0668`.
--- /dev/null
+// build-fail
+// ignore-emscripten no asm! support
+// Regression test for #69092
+
+#![feature(asm)]
+
+fn main() {
+ unsafe { asm!(".ascii \"Xen\0\""); }
+ //~^ ERROR: <inline asm>:1:9: error: expected string in '.ascii' directive
+}
--- /dev/null
+error: <inline asm>:1:9: error: expected string in '.ascii' directive
+ .ascii "Xen
+ ^
+
+ --> $DIR/issue-69092.rs:8:14
+ |
+LL | unsafe { asm!(".ascii \"Xen\0\""); }
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^
+
+error: aborting due to previous error
+
+++ /dev/null
-struct Cat {
- meows : usize,
-
- how_hungry : isize,
-}
-
-impl Cat {
- pub fn speak(&self) { self.meows += 1; }
-}
-
-fn cat(in_x : usize, in_y : isize) -> Cat {
- Cat {
- meows: in_x,
- how_hungry: in_y
- }
-}
-
-fn main() {
- let nyan : Cat = cat(52, 99);
- nyan.speak = || println!("meow"); //~ ERROR attempted to take value of method
- nyan.speak += || println!("meow"); //~ ERROR attempted to take value of method
-}
+++ /dev/null
-error[E0615]: attempted to take value of method `speak` on type `Cat`
- --> $DIR/assign-to-method.rs:20:8
- |
-LL | nyan.speak = || println!("meow");
- | ^^^^^
- |
- = help: methods are immutable and cannot be assigned to
-
-error[E0615]: attempted to take value of method `speak` on type `Cat`
- --> $DIR/assign-to-method.rs:21:8
- |
-LL | nyan.speak += || println!("meow");
- | ^^^^^
- |
- = help: methods are immutable and cannot be assigned to
-
-error: aborting due to 2 previous errors
-
-For more information about this error, try `rustc --explain E0615`.
| ^^^^^^^^^^^
| |
| variant or associated item not found in `Enum`
- | help: there is a method with a similar name: `misspellable`
+ | help: there is an associated function with a similar name: `misspellable`
error[E0599]: no variant or associated item named `mispellable_trait` found for enum `Enum` in the current scope
--> $DIR/associated-item-enum.rs:18:11
error: aborting due to 96 previous errors
+For more information about this error, try `rustc --explain E0719`.
}
fn main() {
- assert_eq!(1028, std::mem::size_of_val(&single()));
- assert_eq!(1032, std::mem::size_of_val(&single_with_noop()));
- assert_eq!(3084, std::mem::size_of_val(&joined()));
- assert_eq!(3084, std::mem::size_of_val(&joined_with_noop()));
- assert_eq!(7188, std::mem::size_of_val(&mixed_sizes()));
+ assert_eq!(1025, std::mem::size_of_val(&single()));
+ assert_eq!(1026, std::mem::size_of_val(&single_with_noop()));
+ assert_eq!(3078, std::mem::size_of_val(&joined()));
+ assert_eq!(3079, std::mem::size_of_val(&joined_with_noop()));
+ assert_eq!(7181, std::mem::size_of_val(&mixed_sizes()));
}
}
fn main() {
- assert_eq!(8, std::mem::size_of_val(&single()));
- assert_eq!(12, std::mem::size_of_val(&single_with_noop()));
- assert_eq!(3084, std::mem::size_of_val(&joined()));
- assert_eq!(3084, std::mem::size_of_val(&joined_with_noop()));
- assert_eq!(3080, std::mem::size_of_val(&join_retval()));
+ assert_eq!(2, std::mem::size_of_val(&single()));
+ assert_eq!(3, std::mem::size_of_val(&single_with_noop()));
+ assert_eq!(3078, std::mem::size_of_val(&joined()));
+ assert_eq!(3078, std::mem::size_of_val(&joined_with_noop()));
+ assert_eq!(3074, std::mem::size_of_val(&join_retval()));
}
fn main() {
assert_eq!(2, std::mem::size_of_val(&base()));
- assert_eq!(8, std::mem::size_of_val(&await1_level1()));
- assert_eq!(12, std::mem::size_of_val(&await2_level1()));
- assert_eq!(12, std::mem::size_of_val(&await3_level1()));
- assert_eq!(24, std::mem::size_of_val(&await3_level2()));
- assert_eq!(36, std::mem::size_of_val(&await3_level3()));
- assert_eq!(48, std::mem::size_of_val(&await3_level4()));
- assert_eq!(60, std::mem::size_of_val(&await3_level5()));
+ assert_eq!(3, std::mem::size_of_val(&await1_level1()));
+ assert_eq!(4, std::mem::size_of_val(&await2_level1()));
+ assert_eq!(5, std::mem::size_of_val(&await3_level1()));
+ assert_eq!(8, std::mem::size_of_val(&await3_level2()));
+ assert_eq!(11, std::mem::size_of_val(&await3_level3()));
+ assert_eq!(14, std::mem::size_of_val(&await3_level4()));
+ assert_eq!(17, std::mem::size_of_val(&await3_level5()));
assert_eq!(1, wait(base()));
assert_eq!(1, wait(await1_level1()));
--> $DIR/dont-print-desugared-async.rs:5:20
|
LL | async fn async_fn(&ref mut s: &[i32]) {}
- | -^^^^^^^^^
- | ||
- | |cannot borrow as mutable through `&` reference
- | help: consider changing this to be a mutable reference: `&mut ref mut s`
+ | ^^^^^^^^^ cannot borrow as mutable through `&` reference
error: aborting due to previous error
--- /dev/null
+// check-pass
+// edition:2018
+
+macro_rules! with_doc {
+ ($doc: expr) => {
+ #[doc = $doc]
+ async fn f() {}
+ };
+}
+
+with_doc!(concat!(""));
+
+fn main() {}
--- /dev/null
+// Regression test for #54239, shouldn't trigger lint.
+// check-pass
+// edition:2018
+
+#![deny(missing_debug_implementations)]
+
+struct DontLookAtMe(i32);
+
+async fn secret() -> DontLookAtMe {
+ DontLookAtMe(41)
+}
+
+pub async fn looking() -> i32 { // Shouldn't trigger lint here.
+ secret().await.0
+}
+
+fn main() {}
LL | (|_| 2333).await;
| ^^^^^^^^^^^^^^^^ the trait `std::future::Future` is not implemented for `[closure@$DIR/issue-62009-1.rs:16:5: 16:15]`
|
- ::: $SRC_DIR/libstd/future.rs:LL:COL
+ ::: $SRC_DIR/libcore/future/mod.rs:LL:COL
|
LL | F: Future,
- | ------ required by this bound in `std::future::poll_with_tls_context`
+ | ------ required by this bound in `std::future::poll_with_context`
error: aborting due to 4 previous errors
LL | foo(move || self.bar()).await;
| ^^^^^^^
-error[E0521]: borrowed data escapes outside of method
+error[E0521]: borrowed data escapes outside of associated function
--> $DIR/issue-62097.rs:13:9
|
LL | pub async fn run_dummy_fn(&self) {
- | ----- `self` is a reference that is only valid in the method body
+ | ----- `self` is a reference that is only valid in the associated function body
LL | foo(|| self.bar()).await;
- | ^^^^^^^^^^^^^^^^^^ `self` escapes the method body here
+ | ^^^^^^^^^^^^^^^^^^ `self` escapes the associated function body here
error: aborting due to 2 previous errors
LL | / {
LL | | foo
LL | | }
- | |_____^ method was supposed to return data with lifetime `'a` but it is returning data with lifetime `'1`
+ | |_____^ associated function was supposed to return data with lifetime `'a` but it is returning data with lifetime `'1`
error: aborting due to previous error
error: functions cannot be both `const` and `async`
- --> $DIR/no-const-async.rs:4:1
+ --> $DIR/no-const-async.rs:4:5
|
LL | pub const async fn x() {}
- | ^^^^-----^-----^^^^^^^^^^
+ | ----^^^^^-^^^^^----------
| | |
| | `async` because of this
| `const` because of this
--- /dev/null
+// edition:2018
+// check-pass
+
+#![no_std]
+#![crate_type = "rlib"]
+
+use core::future::Future;
+
+async fn a(f: impl Future) {
+ f.await;
+}
+
+fn main() {}
--> $DIR/auto-ref-slice-plus-ref.rs:7:7
|
LL | a.test_mut();
- | ^^^^^^^^ help: there is a method with a similar name: `get_mut`
+ | ^^^^^^^^ help: there is an associated function with a similar name: `get_mut`
|
= help: items from traits can only be used if the trait is implemented and in scope
note: `MyIter` defines an item `test_mut`, perhaps you need to implement it
error[E0567]: auto traits cannot have generic parameters
- --> $DIR/auto-trait-validation.rs:3:1
+ --> $DIR/auto-trait-validation.rs:3:19
|
LL | auto trait Generic<T> {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^
+ | -------^^^ help: remove the parameters
+ | |
+ | auto trait cannot have generic parameters
error[E0568]: auto traits cannot have super traits
- --> $DIR/auto-trait-validation.rs:5:1
+ --> $DIR/auto-trait-validation.rs:5:20
|
LL | auto trait Bound : Copy {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ----- ^^^^ help: remove the super traits
+ | |
+ | auto trait cannot have super traits
error[E0380]: auto traits cannot have methods or associated items
- --> $DIR/auto-trait-validation.rs:7:1
+ --> $DIR/auto-trait-validation.rs:7:25
|
LL | auto trait MyTrait { fn foo() {} }
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ------- ^^^
+ | |
+ | auto trait cannot have items
error: aborting due to 3 previous errors
--- /dev/null
+// Identifier pattern referring to an ambiguity item is an error (issue #46079).
+
+mod m {
+ pub fn f() {}
+}
+use m::*;
+
+mod n {
+ pub fn f() {}
+}
+use n::*; // OK, no conflict with `use m::*;`
+
+fn main() {
+ let v = f; //~ ERROR `f` is ambiguous
+ match v {
+ f => {} //~ ERROR `f` is ambiguous
+ }
+}
--- /dev/null
+error[E0659]: `f` is ambiguous (glob import vs glob import in the same module)
+ --> $DIR/ambiguity-item.rs:14:13
+ |
+LL | let v = f;
+ | ^ ambiguous name
+ |
+note: `f` could refer to the function imported here
+ --> $DIR/ambiguity-item.rs:6:5
+ |
+LL | use m::*;
+ | ^^^^
+ = help: consider adding an explicit import of `f` to disambiguate
+note: `f` could also refer to the function imported here
+ --> $DIR/ambiguity-item.rs:11:5
+ |
+LL | use n::*; // OK, no conflict with `use m::*;`
+ | ^^^^
+ = help: consider adding an explicit import of `f` to disambiguate
+
+error[E0659]: `f` is ambiguous (glob import vs glob import in the same module)
+ --> $DIR/ambiguity-item.rs:16:9
+ |
+LL | f => {}
+ | ^ ambiguous name
+ |
+note: `f` could refer to the function imported here
+ --> $DIR/ambiguity-item.rs:6:5
+ |
+LL | use m::*;
+ | ^^^^
+ = help: consider adding an explicit import of `f` to disambiguate
+note: `f` could also refer to the function imported here
+ --> $DIR/ambiguity-item.rs:11:5
+ |
+LL | use n::*; // OK, no conflict with `use m::*;`
+ | ^^^^
+ = help: consider adding an explicit import of `f` to disambiguate
+
+error: aborting due to 2 previous errors
+
+For more information about this error, try `rustc --explain E0659`.
--- /dev/null
+// Identifier pattern referring to a const generic parameter is an error (issue #68853).
+
+#![feature(const_generics)] //~ WARN the feature `const_generics` is incomplete
+
+fn check<const N: usize>() {
+ match 1 {
+ N => {} //~ ERROR const parameters cannot be referenced in patterns
+ _ => {}
+ }
+}
+
+fn main() {}
--- /dev/null
+warning: the feature `const_generics` is incomplete and may cause the compiler to crash
+ --> $DIR/const-param.rs:3:12
+ |
+LL | #![feature(const_generics)]
+ | ^^^^^^^^^^^^^^
+ |
+ = note: `#[warn(incomplete_features)]` on by default
+
+error[E0158]: const parameters cannot be referenced in patterns
+ --> $DIR/const-param.rs:7:9
+ |
+LL | N => {}
+ | ^
+
+error: aborting due to previous error
+
+For more information about this error, try `rustc --explain E0158`.
error[E0308]: mismatched types
--> $DIR/blind-item-block-middle.rs:6:9
|
+LL | mod foo { pub struct bar; }
+ | --------------- unit struct defined here
+...
LL | let bar = 5;
- | ^^^ expected integer, found struct `foo::bar`
+ | ^^^
+ | |
+ | expected integer, found struct `foo::bar`
+ | `bar` is interpreted as a unit struct, not a new binding
+ | help: introduce a new binding instead: `other_bar`
error: aborting due to previous error
LL | if (true) { 12; };;; -num;
| ^^ help: remove these semicolons
|
- = note: `#[warn(redundant_semicolon)]` on by default
+ = note: `#[warn(redundant_semicolons)]` on by default
--> $DIR/issue-3563.rs:3:17
|
LL | || self.b()
- | ^ help: there is a method with a similar name: `a`
+ | ^ help: there is an associated function with a similar name: `a`
error: aborting due to previous error
--- /dev/null
+// Tests using a combination of pattern features has the expected borrow checking behavior
+#![feature(bindings_after_at)]
+#![feature(or_patterns)]
+#![feature(box_patterns)]
+
+#![feature(move_ref_pattern)]
+
+enum Test {
+ Foo,
+ Bar,
+ _Baz,
+}
+
+// bindings_after_at + slice_patterns
+
+fn bindings_after_at_slice_patterns_move_binding(x: [String; 4]) {
+ match x {
+ a @ [.., _] => (),
+ _ => (),
+ };
+
+ &x;
+ //~^ ERROR borrow of moved value
+}
+
+fn bindings_after_at_slice_patterns_borrows_binding_mut(mut x: [String; 4]) {
+ let r = match x {
+ ref mut foo @ [.., _] => Some(foo),
+ _ => None,
+ };
+
+ &x;
+ //~^ ERROR cannot borrow
+
+ drop(r);
+}
+
+fn bindings_after_at_slice_patterns_borrows_slice_mut1(mut x: [String; 4]) {
+ let r = match x {
+ ref foo @ [.., ref mut bar] => (),
+ //~^ ERROR cannot borrow
+ _ => (),
+ };
+
+ drop(r);
+}
+
+fn bindings_after_at_slice_patterns_borrows_slice_mut2(mut x: [String; 4]) {
+ let r = match x {
+ [ref foo @ .., ref bar] => Some(foo),
+ _ => None,
+ };
+
+ &mut x;
+ //~^ ERROR cannot borrow
+
+ drop(r);
+}
+
+fn bindings_after_at_slice_patterns_borrows_both(mut x: [String; 4]) {
+ let r = match x {
+ ref foo @ [.., ref bar] => Some(foo),
+ _ => None,
+ };
+
+ &mut x;
+ //~^ ERROR cannot borrow
+
+ drop(r);
+}
+
+// bindings_after_at + or_patterns
+
+fn bindings_after_at_or_patterns_move(x: Option<Test>) {
+ match x {
+ foo @ Some(Test::Foo | Test::Bar) => (),
+ _ => (),
+ }
+
+ &x;
+ //~^ ERROR borrow of moved value
+}
+
+fn bindings_after_at_or_patterns_borrows(mut x: Option<Test>) {
+ let r = match x {
+ ref foo @ Some(Test::Foo | Test::Bar) => Some(foo),
+ _ => None,
+ };
+
+ &mut x;
+ //~^ ERROR cannot borrow
+
+ drop(r);
+}
+
+fn bindings_after_at_or_patterns_borrows_mut(mut x: Option<Test>) {
+ let r = match x {
+ ref mut foo @ Some(Test::Foo | Test::Bar) => Some(foo),
+ _ => None,
+ };
+
+ &x;
+ //~^ ERROR cannot borrow
+
+ drop(r);
+}
+
+// bindings_after_at + box_patterns
+
+fn bindings_after_at_box_patterns_borrows_both(mut x: Option<Box<String>>) {
+ let r = match x {
+ ref foo @ Some(box ref s) => Some(foo),
+ _ => None,
+ };
+
+ &mut x;
+ //~^ ERROR cannot borrow
+
+ drop(r);
+}
+
+fn bindings_after_at_box_patterns_borrows_mut(mut x: Option<Box<String>>) {
+ match x {
+ ref foo @ Some(box ref mut s) => (),
+ //~^ ERROR cannot borrow
+ _ => (),
+ };
+}
+
+// bindings_after_at + slice_patterns + or_patterns
+
+fn bindings_after_at_slice_patterns_or_patterns_moves(x: [Option<Test>; 4]) {
+ match x {
+ a @ [.., Some(Test::Foo | Test::Bar)] => (),
+ _ => (),
+ };
+
+ &x;
+ //~^ ERROR borrow of moved value
+}
+
+fn bindings_after_at_slice_patterns_or_patterns_borrows_binding(mut x: [Option<Test>; 4]) {
+ let r = match x {
+ ref a @ [ref b @ .., Some(Test::Foo | Test::Bar)] => Some(a),
+ _ => None,
+ };
+
+ &mut x;
+ //~^ ERROR cannot borrow
+
+ drop(r);
+}
+
+fn bindings_after_at_slice_patterns_or_patterns_borrows_slice(mut x: [Option<Test>; 4]) {
+ let r = match x {
+ ref a @ [ref b @ .., Some(Test::Foo | Test::Bar)] => Some(b),
+ _ => None,
+ };
+
+ &mut x;
+ //~^ ERROR cannot borrow
+
+ drop(r);
+}
+
+// bindings_after_at + slice_patterns + box_patterns
+
+fn bindings_after_at_slice_patterns_box_patterns_borrows(mut x: [Option<Box<String>>; 4]) {
+ let r = match x {
+ [_, ref a @ Some(box ref b), ..] => Some(a),
+ _ => None,
+ };
+
+ &mut x;
+ //~^ ERROR cannot borrow
+
+ drop(r);
+}
+
+// bindings_after_at + slice_patterns + or_patterns + box_patterns
+
+fn bindings_after_at_slice_patterns_or_patterns_box_patterns_borrows(
+ mut x: [Option<Box<Test>>; 4]
+) {
+ let r = match x {
+ [_, ref a @ Some(box Test::Foo | box Test::Bar), ..] => Some(a),
+ _ => None,
+ };
+
+ &mut x;
+ //~^ ERROR cannot borrow
+
+ drop(r);
+}
+
+fn bindings_after_at_slice_patterns_or_patterns_box_patterns_borrows_mut(
+ mut x: [Option<Box<Test>>; 4]
+) {
+ let r = match x {
+ [_, ref mut a @ Some(box Test::Foo | box Test::Bar), ..] => Some(a),
+ _ => None,
+ };
+
+ &x;
+ //~^ ERROR cannot borrow
+
+ drop(r);
+}
+
+fn bindings_after_at_slice_patterns_or_patterns_box_patterns_borrows_binding(
+ mut x: [Option<Box<Test>>; 4]
+) {
+ let r = match x {
+ ref a @ [_, ref b @ Some(box Test::Foo | box Test::Bar), ..] => Some(a),
+ _ => None,
+ };
+
+ &mut x;
+ //~^ ERROR cannot borrow
+
+ drop(r);
+}
+
+fn main() {}
--- /dev/null
+error: cannot borrow value as mutable because it is also borrowed as immutable
+ --> $DIR/bindings-after-at-or-patterns-slice-patterns-box-patterns.rs:40:9
+ |
+LL | ref foo @ [.., ref mut bar] => (),
+ | -------^^^^^^^^-----------^
+ | | |
+ | | mutable borrow, by `bar`, occurs here
+ | immutable borrow, by `foo`, occurs here
+
+error: cannot borrow value as mutable because it is also borrowed as immutable
+ --> $DIR/bindings-after-at-or-patterns-slice-patterns-box-patterns.rs:124:9
+ |
+LL | ref foo @ Some(box ref mut s) => (),
+ | -------^^^^^^^^^^^^---------^
+ | | |
+ | | mutable borrow, by `s`, occurs here
+ | immutable borrow, by `foo`, occurs here
+
+error[E0382]: borrow of moved value: `x`
+ --> $DIR/bindings-after-at-or-patterns-slice-patterns-box-patterns.rs:22:5
+ |
+LL | fn bindings_after_at_slice_patterns_move_binding(x: [String; 4]) {
+ | - move occurs because `x` has type `[std::string::String; 4]`, which does not implement the `Copy` trait
+LL | match x {
+LL | a @ [.., _] => (),
+ | ----------- value moved here
+...
+LL | &x;
+ | ^^ value borrowed here after move
+
+error[E0502]: cannot borrow `x` as immutable because it is also borrowed as mutable
+ --> $DIR/bindings-after-at-or-patterns-slice-patterns-box-patterns.rs:32:5
+ |
+LL | ref mut foo @ [.., _] => Some(foo),
+ | --------------------- mutable borrow occurs here
+...
+LL | &x;
+ | ^^ immutable borrow occurs here
+...
+LL | drop(r);
+ | - mutable borrow later used here
+
+error[E0502]: cannot borrow `x` as mutable because it is also borrowed as immutable
+ --> $DIR/bindings-after-at-or-patterns-slice-patterns-box-patterns.rs:54:5
+ |
+LL | [ref foo @ .., ref bar] => Some(foo),
+ | ------------ immutable borrow occurs here
+...
+LL | &mut x;
+ | ^^^^^^ mutable borrow occurs here
+...
+LL | drop(r);
+ | - immutable borrow later used here
+
+error[E0502]: cannot borrow `x` as mutable because it is also borrowed as immutable
+ --> $DIR/bindings-after-at-or-patterns-slice-patterns-box-patterns.rs:66:5
+ |
+LL | ref foo @ [.., ref bar] => Some(foo),
+ | ----------------------- immutable borrow occurs here
+...
+LL | &mut x;
+ | ^^^^^^ mutable borrow occurs here
+...
+LL | drop(r);
+ | - immutable borrow later used here
+
+error[E0382]: borrow of moved value: `x`
+ --> $DIR/bindings-after-at-or-patterns-slice-patterns-box-patterns.rs:80:5
+ |
+LL | fn bindings_after_at_or_patterns_move(x: Option<Test>) {
+ | - move occurs because `x` has type `std::option::Option<Test>`, which does not implement the `Copy` trait
+LL | match x {
+LL | foo @ Some(Test::Foo | Test::Bar) => (),
+ | ---------------------------------
+ | |
+ | value moved here
+ | value moved here
+...
+LL | &x;
+ | ^^ value borrowed here after move
+
+error[E0502]: cannot borrow `x` as mutable because it is also borrowed as immutable
+ --> $DIR/bindings-after-at-or-patterns-slice-patterns-box-patterns.rs:90:5
+ |
+LL | ref foo @ Some(Test::Foo | Test::Bar) => Some(foo),
+ | ------------------------------------- immutable borrow occurs here
+...
+LL | &mut x;
+ | ^^^^^^ mutable borrow occurs here
+...
+LL | drop(r);
+ | - immutable borrow later used here
+
+error[E0502]: cannot borrow `x` as immutable because it is also borrowed as mutable
+ --> $DIR/bindings-after-at-or-patterns-slice-patterns-box-patterns.rs:102:5
+ |
+LL | ref mut foo @ Some(Test::Foo | Test::Bar) => Some(foo),
+ | ----------------------------------------- mutable borrow occurs here
+...
+LL | &x;
+ | ^^ immutable borrow occurs here
+...
+LL | drop(r);
+ | - mutable borrow later used here
+
+error[E0502]: cannot borrow `x` as mutable because it is also borrowed as immutable
+ --> $DIR/bindings-after-at-or-patterns-slice-patterns-box-patterns.rs:116:5
+ |
+LL | ref foo @ Some(box ref s) => Some(foo),
+ | ------------------------- immutable borrow occurs here
+...
+LL | &mut x;
+ | ^^^^^^ mutable borrow occurs here
+...
+LL | drop(r);
+ | - immutable borrow later used here
+
+error[E0382]: borrow of moved value: `x`
+ --> $DIR/bindings-after-at-or-patterns-slice-patterns-box-patterns.rs:138:5
+ |
+LL | fn bindings_after_at_slice_patterns_or_patterns_moves(x: [Option<Test>; 4]) {
+ | - move occurs because `x` has type `[std::option::Option<Test>; 4]`, which does not implement the `Copy` trait
+LL | match x {
+LL | a @ [.., Some(Test::Foo | Test::Bar)] => (),
+ | -------------------------------------
+ | |
+ | value moved here
+ | value moved here
+...
+LL | &x;
+ | ^^ value borrowed here after move
+
+error[E0502]: cannot borrow `x` as mutable because it is also borrowed as immutable
+ --> $DIR/bindings-after-at-or-patterns-slice-patterns-box-patterns.rs:148:5
+ |
+LL | ref a @ [ref b @ .., Some(Test::Foo | Test::Bar)] => Some(a),
+ | ------------------------------------------------- immutable borrow occurs here
+...
+LL | &mut x;
+ | ^^^^^^ mutable borrow occurs here
+...
+LL | drop(r);
+ | - immutable borrow later used here
+
+error[E0502]: cannot borrow `x` as mutable because it is also borrowed as immutable
+ --> $DIR/bindings-after-at-or-patterns-slice-patterns-box-patterns.rs:160:5
+ |
+LL | ref a @ [ref b @ .., Some(Test::Foo | Test::Bar)] => Some(b),
+ | ---------- immutable borrow occurs here
+...
+LL | &mut x;
+ | ^^^^^^ mutable borrow occurs here
+...
+LL | drop(r);
+ | - immutable borrow later used here
+
+error[E0502]: cannot borrow `x` as mutable because it is also borrowed as immutable
+ --> $DIR/bindings-after-at-or-patterns-slice-patterns-box-patterns.rs:174:5
+ |
+LL | [_, ref a @ Some(box ref b), ..] => Some(a),
+ | ----------------------- immutable borrow occurs here
+...
+LL | &mut x;
+ | ^^^^^^ mutable borrow occurs here
+...
+LL | drop(r);
+ | - immutable borrow later used here
+
+error[E0502]: cannot borrow `x` as mutable because it is also borrowed as immutable
+ --> $DIR/bindings-after-at-or-patterns-slice-patterns-box-patterns.rs:190:5
+ |
+LL | [_, ref a @ Some(box Test::Foo | box Test::Bar), ..] => Some(a),
+ | ------------------------------------------- immutable borrow occurs here
+...
+LL | &mut x;
+ | ^^^^^^ mutable borrow occurs here
+...
+LL | drop(r);
+ | - immutable borrow later used here
+
+error[E0502]: cannot borrow `x` as immutable because it is also borrowed as mutable
+ --> $DIR/bindings-after-at-or-patterns-slice-patterns-box-patterns.rs:204:5
+ |
+LL | [_, ref mut a @ Some(box Test::Foo | box Test::Bar), ..] => Some(a),
+ | ----------------------------------------------- mutable borrow occurs here
+...
+LL | &x;
+ | ^^ immutable borrow occurs here
+...
+LL | drop(r);
+ | - mutable borrow later used here
+
+error[E0502]: cannot borrow `x` as mutable because it is also borrowed as immutable
+ --> $DIR/bindings-after-at-or-patterns-slice-patterns-box-patterns.rs:218:5
+ |
+LL | ref a @ [_, ref b @ Some(box Test::Foo | box Test::Bar), ..] => Some(a),
+ | ------------------------------------------------------------ immutable borrow occurs here
+...
+LL | &mut x;
+ | ^^^^^^ mutable borrow occurs here
+...
+LL | drop(r);
+ | - immutable borrow later used here
+
+error: aborting due to 17 previous errors
+
+Some errors have detailed explanations: E0382, E0502.
+For more information about an error, try `rustc --explain E0382`.
+++ /dev/null
-// run-pass
-// compile-flags: -Z chalk
-
-// Test that `Clone` is correctly implemented for builtin types.
-
-#[derive(Copy, Clone)]
-struct S(i32);
-
-fn test_clone<T: Clone>(arg: T) {
- let _ = arg.clone();
-}
-
-fn test_copy<T: Copy>(arg: T) {
- let _ = arg;
- let _ = arg;
-}
-
-fn test_copy_clone<T: Copy + Clone>(arg: T) {
- test_copy(arg);
- test_clone(arg);
-}
-
-fn foo() { }
-
-fn main() {
- test_copy_clone(foo);
- let f: fn() = foo;
- test_copy_clone(f);
- // FIXME: add closures when they're considered WF
- test_copy_clone([1; 56]);
- test_copy_clone((1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1));
- test_copy_clone((1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, true, 'a', 1.1));
- test_copy_clone(());
- test_copy_clone(((1, 1), (1, 1, 1), (1.1, 1, 1, 'a'), ()));
-
- let a = (
- (S(1), S(0)),
- (
- (S(0), S(0), S(1)),
- S(0)
- )
- );
- test_copy_clone(a);
-}
+++ /dev/null
-// run-pass
-// compile-flags: -Z chalk
-
-trait Foo { }
-
-impl Foo for i32 { }
-
-struct S<T: Foo> {
- x: T,
-}
-
-fn only_foo<T: Foo>(_x: &T) { }
-
-impl<T> S<T> {
- // Test that we have the correct environment inside an inherent method.
- fn dummy_foo(&self) {
- only_foo(&self.x)
- }
-}
-
-trait Bar { }
-impl Bar for u32 { }
-
-fn only_bar<T: Bar>() { }
-
-impl<T> S<T> {
- // Test that the environment of `dummy_bar` adds up with the environment
- // of the inherent impl.
- fn dummy_bar<U: Bar>(&self) {
- only_foo(&self.x);
- only_bar::<U>();
- }
-}
-
-fn main() {
- let s = S {
- x: 5,
- };
-
- s.dummy_foo();
- s.dummy_bar::<u32>();
-}
+++ /dev/null
-#![feature(rustc_attrs)]
-#![allow(dead_code)]
-
-trait Foo { }
-
-#[rustc_dump_program_clauses] //~ ERROR program clause dump
-trait Bar where Self: Foo { }
-
-#[rustc_dump_env_program_clauses] //~ ERROR program clause dump
-fn bar<T: Bar + ?Sized>() {
-}
-
-fn main() {
-}
+++ /dev/null
-error: program clause dump
- --> $DIR/lower_env1.rs:6:1
- |
-LL | #[rustc_dump_program_clauses]
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- |
- = note: forall<Self> { FromEnv(Self: Foo) :- FromEnv(Self: Bar). }
- = note: forall<Self> { Implemented(Self: Bar) :- FromEnv(Self: Bar). }
- = note: forall<Self> { WellFormed(Self: Bar) :- Implemented(Self: Bar), WellFormed(Self: Foo). }
-
-error: program clause dump
- --> $DIR/lower_env1.rs:9:1
- |
-LL | #[rustc_dump_env_program_clauses]
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- |
- = note: forall<Self> { FromEnv(Self: Foo) :- FromEnv(Self: Bar). }
- = note: forall<Self> { Implemented(Self: Bar) :- FromEnv(Self: Bar). }
- = note: forall<Self> { Implemented(Self: Foo) :- FromEnv(Self: Foo). }
-
-error: aborting due to 2 previous errors
-
+++ /dev/null
-#![feature(rustc_attrs)]
-#![allow(dead_code)]
-
-trait Foo { }
-
-#[rustc_dump_program_clauses] //~ ERROR program clause dump
-struct S<'a, T: ?Sized> where T: Foo {
- data: &'a T,
-}
-
-#[rustc_dump_env_program_clauses] //~ ERROR program clause dump
-fn bar<T: Foo>(_x: S<'_, T>) { // note that we have an implicit `T: Sized` bound
-}
-
-fn main() {
-}
+++ /dev/null
-error: program clause dump
- --> $DIR/lower_env2.rs:6:1
- |
-LL | #[rustc_dump_program_clauses]
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- |
- = note: forall<'a, T> { FromEnv(T: Foo) :- FromEnv(S<'a, T>). }
- = note: forall<'a, T> { TypeOutlives(T: 'a) :- FromEnv(S<'a, T>). }
- = note: forall<'a, T> { WellFormed(S<'a, T>) :- WellFormed(T: Foo), TypeOutlives(T: 'a). }
-
-error: program clause dump
- --> $DIR/lower_env2.rs:11:1
- |
-LL | #[rustc_dump_env_program_clauses]
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- |
- = note: forall<'a, T> { FromEnv(T: Foo) :- FromEnv(S<'a, T>). }
- = note: forall<'a, T> { TypeOutlives(T: 'a) :- FromEnv(S<'a, T>). }
- = note: forall<Self> { Implemented(Self: Foo) :- FromEnv(Self: Foo). }
- = note: forall<Self> { Implemented(Self: std::marker::Sized) :- FromEnv(Self: std::marker::Sized). }
-
-error: aborting due to 2 previous errors
-
+++ /dev/null
-#![feature(rustc_attrs)]
-#![allow(dead_code)]
-
-trait Foo {
- #[rustc_dump_env_program_clauses] //~ ERROR program clause dump
- fn foo(&self);
-}
-
-impl<T> Foo for T where T: Clone {
- #[rustc_dump_env_program_clauses] //~ ERROR program clause dump
- fn foo(&self) {
- }
-}
-
-fn main() {
-}
+++ /dev/null
-error: program clause dump
- --> $DIR/lower_env3.rs:5:5
- |
-LL | #[rustc_dump_env_program_clauses]
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- |
- = note: forall<Self> { Implemented(Self: Foo) :- FromEnv(Self: Foo). }
-
-error: program clause dump
- --> $DIR/lower_env3.rs:10:5
- |
-LL | #[rustc_dump_env_program_clauses]
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- |
- = note: forall<Self> { FromEnv(Self: std::marker::Sized) :- FromEnv(Self: std::clone::Clone). }
- = note: forall<Self> { Implemented(Self: std::clone::Clone) :- FromEnv(Self: std::clone::Clone). }
- = note: forall<Self> { Implemented(Self: std::marker::Sized) :- FromEnv(Self: std::marker::Sized). }
-
-error: aborting due to 2 previous errors
-
+++ /dev/null
-#![feature(rustc_attrs)]
-
-trait Foo { }
-
-#[rustc_dump_program_clauses] //~ ERROR program clause dump
-impl<T: 'static> Foo for T where T: Iterator<Item = i32> { }
-
-trait Bar {
- type Assoc;
-}
-
-impl<T> Bar for T where T: Iterator<Item = i32> {
- #[rustc_dump_program_clauses] //~ ERROR program clause dump
- type Assoc = Vec<T>;
-}
-
-fn main() {
- println!("hello");
-}
+++ /dev/null
-error: program clause dump
- --> $DIR/lower_impl.rs:5:1
- |
-LL | #[rustc_dump_program_clauses]
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- |
- = note: forall<T> { Implemented(T: Foo) :- ProjectionEq(<T as std::iter::Iterator>::Item == i32), TypeOutlives(T: 'static), Implemented(T: std::iter::Iterator), Implemented(T: std::marker::Sized). }
-
-error: program clause dump
- --> $DIR/lower_impl.rs:13:5
- |
-LL | #[rustc_dump_program_clauses]
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- |
- = note: forall<T> { Normalize(<T as Bar>::Assoc -> std::vec::Vec<T>) :- Implemented(T: Bar). }
-
-error: aborting due to 2 previous errors
-
+++ /dev/null
-#![feature(rustc_attrs)]
-
-#[rustc_dump_program_clauses] //~ ERROR program clause dump
-struct Foo<'a, T> where Box<T>: Clone {
- _x: std::marker::PhantomData<&'a T>,
-}
-
-fn main() { }
+++ /dev/null
-error: program clause dump
- --> $DIR/lower_struct.rs:3:1
- |
-LL | #[rustc_dump_program_clauses]
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- |
- = note: forall<'a, T> { FromEnv(T: std::marker::Sized) :- FromEnv(Foo<'a, T>). }
- = note: forall<'a, T> { FromEnv(std::boxed::Box<T>: std::clone::Clone) :- FromEnv(Foo<'a, T>). }
- = note: forall<'a, T> { TypeOutlives(T: 'a) :- FromEnv(Foo<'a, T>). }
- = note: forall<'a, T> { WellFormed(Foo<'a, T>) :- WellFormed(T: std::marker::Sized), WellFormed(std::boxed::Box<T>: std::clone::Clone), TypeOutlives(T: 'a). }
-
-error: aborting due to previous error
-
+++ /dev/null
-#![feature(rustc_attrs)]
-
-trait Bar { }
-
-#[rustc_dump_program_clauses] //~ ERROR program clause dump
-trait Foo<S, T: ?Sized> {
- #[rustc_dump_program_clauses] //~ ERROR program clause dump
- type Assoc: Bar + ?Sized;
-}
-
-fn main() {
- println!("hello");
-}
+++ /dev/null
-error: program clause dump
- --> $DIR/lower_trait.rs:5:1
- |
-LL | #[rustc_dump_program_clauses]
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- |
- = note: forall<Self, S, T> { FromEnv(<Self as Foo<S, T>>::Assoc: Bar) :- FromEnv(Self: Foo<S, T>). }
- = note: forall<Self, S, T> { FromEnv(S: std::marker::Sized) :- FromEnv(Self: Foo<S, T>). }
- = note: forall<Self, S, T> { Implemented(Self: Foo<S, T>) :- FromEnv(Self: Foo<S, T>). }
- = note: forall<Self, S, T> { WellFormed(Self: Foo<S, T>) :- Implemented(Self: Foo<S, T>), WellFormed(S: std::marker::Sized), WellFormed(<Self as Foo<S, T>>::Assoc: Bar). }
-
-error: program clause dump
- --> $DIR/lower_trait.rs:7:5
- |
-LL | #[rustc_dump_program_clauses]
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- |
- = note: forall<Self, S, T, ^3> { ProjectionEq(<Self as Foo<S, T>>::Assoc == ^3) :- Normalize(<Self as Foo<S, T>>::Assoc -> ^3). }
- = note: forall<Self, S, T> { FromEnv(Self: Foo<S, T>) :- FromEnv(Unnormalized(<Self as Foo<S, T>>::Assoc)). }
- = note: forall<Self, S, T> { ProjectionEq(<Self as Foo<S, T>>::Assoc == Unnormalized(<Self as Foo<S, T>>::Assoc)). }
- = note: forall<Self, S, T> { WellFormed(Unnormalized(<Self as Foo<S, T>>::Assoc)) :- WellFormed(Self: Foo<S, T>). }
-
-error: aborting due to 2 previous errors
-
+++ /dev/null
-#![feature(rustc_attrs)]
-
-#[rustc_dump_program_clauses] //~ ERROR program clause dump
-trait Foo<F: ?Sized> where for<'a> F: Fn(&'a (u8, u16)) -> &'a u8
-{
-}
-
-fn main() {
- println!("hello");
-}
+++ /dev/null
-error: program clause dump
- --> $DIR/lower_trait_higher_rank.rs:3:1
- |
-LL | #[rustc_dump_program_clauses]
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- |
- = note: forall<'a, Self, F> { FromEnv(F: std::ops::Fn<(&'a (u8, u16),)>) :- FromEnv(Self: Foo<F>). }
- = note: forall<'a, Self, F> { ProjectionEq(<F as std::ops::FnOnce<(&'a (u8, u16),)>>::Output == &'a u8) :- FromEnv(Self: Foo<F>). }
- = note: forall<Self, F> { Implemented(Self: Foo<F>) :- FromEnv(Self: Foo<F>). }
- = note: forall<Self, F> { WellFormed(Self: Foo<F>) :- Implemented(Self: Foo<F>), forall<'a> { WellFormed(F: std::ops::Fn<(&'a (u8, u16),)>) }, forall<'a> { ProjectionEq(<F as std::ops::FnOnce<(&'a (u8, u16),)>>::Output == &'a u8) }. }
-
-error: aborting due to previous error
-
+++ /dev/null
-#![feature(rustc_attrs)]
-
-use std::borrow::Borrow;
-
-#[rustc_dump_program_clauses] //~ ERROR program clause dump
-trait Foo<'a, 'b, T, U>
-where
- T: Borrow<U> + ?Sized,
- U: ?Sized + 'b,
- 'a: 'b,
- Box<T>:, // NOTE(#53696) this checks an empty list of bounds.
-{
-}
-
-fn main() {
- println!("hello");
-}
+++ /dev/null
-error: program clause dump
- --> $DIR/lower_trait_where_clause.rs:5:1
- |
-LL | #[rustc_dump_program_clauses]
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- |
- = note: forall<'a, 'b, Self, T, U> { FromEnv(T: std::borrow::Borrow<U>) :- FromEnv(Self: Foo<'a, 'b, T, U>). }
- = note: forall<'a, 'b, Self, T, U> { Implemented(Self: Foo<'a, 'b, T, U>) :- FromEnv(Self: Foo<'a, 'b, T, U>). }
- = note: forall<'a, 'b, Self, T, U> { RegionOutlives('a: 'b) :- FromEnv(Self: Foo<'a, 'b, T, U>). }
- = note: forall<'a, 'b, Self, T, U> { TypeOutlives(U: 'b) :- FromEnv(Self: Foo<'a, 'b, T, U>). }
- = note: forall<'a, 'b, Self, T, U> { TypeOutlives(std::boxed::Box<T>: '<empty>) :- FromEnv(Self: Foo<'a, 'b, T, U>). }
- = note: forall<'a, 'b, Self, T, U> { WellFormed(Self: Foo<'a, 'b, T, U>) :- Implemented(Self: Foo<'a, 'b, T, U>), WellFormed(T: std::borrow::Borrow<U>), TypeOutlives(U: 'b), RegionOutlives('a: 'b), TypeOutlives(std::boxed::Box<T>: '<empty>). }
-
-error: aborting due to previous error
-
+++ /dev/null
-// run-pass
-// compile-flags: -Z chalk
-
-trait Foo { }
-
-trait Bar {
- type Item: Foo;
-}
-
-impl Foo for i32 { }
-impl Bar for i32 {
- type Item = i32;
-}
-
-fn only_foo<T: Foo>() { }
-
-fn only_bar<T: Bar>() {
- // `T` implements `Bar` hence `<T as Bar>::Item` must also implement `Bar`
- only_foo::<T::Item>()
-}
-
-fn main() {
- only_bar::<i32>();
- only_foo::<<i32 as Bar>::Item>();
-}
+++ /dev/null
-// run-pass
-// compile-flags: -Z chalk
-
-trait Foo { }
-trait Bar: Foo { }
-
-impl Foo for i32 { }
-impl Bar for i32 { }
-
-fn only_foo<T: Foo>() { }
-
-fn only_bar<T: Bar>() {
- // `T` implements `Bar` hence `T` must also implement `Foo`
- only_foo::<T>()
-}
-
-fn main() {
- only_bar::<i32>()
-}
+++ /dev/null
-// run-pass
-// compile-flags: -Z chalk
-
-trait Foo { }
-trait Bar<U> where U: Foo { }
-
-impl Foo for i32 { }
-impl Bar<i32> for i32 { }
-
-fn only_foo<T: Foo>() { }
-
-fn only_bar<U, T: Bar<U>>() {
- only_foo::<U>()
-}
-
-fn main() {
- only_bar::<i32, i32>()
-}
+++ /dev/null
-// run-pass
-// compile-flags: -Z chalk
-
-trait Eq { }
-trait Hash: Eq { }
-
-impl Eq for i32 { }
-impl Hash for i32 { }
-
-struct Set<T: Hash> {
- _x: T,
-}
-
-fn only_eq<T: Eq>() { }
-
-fn take_a_set<T>(_: &Set<T>) {
- // `Set<T>` is an input type of `take_a_set`, hence we know that
- // `T` must implement `Hash`, and we know in turn that `T` must
- // implement `Eq`.
- only_eq::<T>()
-}
-
-fn main() {
- let set = Set {
- _x: 5,
- };
-
- take_a_set(&set);
-}
+++ /dev/null
-// compile-flags: -Z chalk
-
-trait Foo { }
-impl Foo for i32 { }
-
-trait Bar { }
-impl Bar for i32 { }
-impl Bar for u32 { }
-
-fn only_foo<T: Foo>(_x: T) { }
-
-fn only_bar<T: Bar>(_x: T) { }
-
-fn main() {
- let x = 5.0;
-
- // The only type which implements `Foo` is `i32`, so the chalk trait solver
- // is expecting a variable of type `i32`. This behavior differs from the
- // old-style trait solver. I guess this will change, that's why I'm
- // adding that test.
- only_foo(x); //~ ERROR mismatched types
-
- // Here we have two solutions so we get back the behavior of the old-style
- // trait solver.
- only_bar(x); //~ ERROR the trait bound `{float}: Bar` is not satisfied
-}
+++ /dev/null
-error[E0308]: mismatched types
- --> $DIR/type_inference.rs:21:14
- |
-LL | only_foo(x);
- | ^ expected `i32`, found floating-point number
-
-error[E0277]: the trait bound `{float}: Bar` is not satisfied
- --> $DIR/type_inference.rs:25:5
- |
-LL | fn only_bar<T: Bar>(_x: T) { }
- | -------- --- required by this bound in `only_bar`
-...
-LL | only_bar(x);
- | ^^^^^^^^ the trait `Bar` is not implemented for `{float}`
- |
- = help: the following implementations were found:
- <i32 as Bar>
- <u32 as Bar>
-
-error: aborting due to 2 previous errors
-
-Some errors have detailed explanations: E0277, E0308.
-For more information about an error, try `rustc --explain E0277`.
error[E0198]: negative impls cannot be unsafe
- --> $DIR/coherence-negative-impls-safe.rs:7:1
+ --> $DIR/coherence-negative-impls-safe.rs:7:13
|
LL | unsafe impl !Send for TestType {}
- | ------^^^^^^^^^^^^^^^^^^^^^^^^^^^
- | |
+ | ------ -^^^^
+ | | |
+ | | negative because of this
| unsafe because of this
error: aborting due to previous error
--- /dev/null
+#![feature(cfg_accessible)]
+
+#[cfg_accessible] //~ ERROR malformed `cfg_accessible` attribute input
+struct S1;
+
+#[cfg_accessible = "value"] //~ ERROR malformed `cfg_accessible` attribute input
+struct S2;
+
+#[cfg_accessible()] //~ ERROR `cfg_accessible` path is not specified
+struct S3;
+
+#[cfg_accessible(std, core)] //~ ERROR multiple `cfg_accessible` paths are specified
+struct S4;
+
+#[cfg_accessible("std")] //~ ERROR `cfg_accessible` path cannot be a literal
+struct S5;
+
+#[cfg_accessible(std = "value")] //~ ERROR `cfg_accessible` path cannot accept arguments
+struct S6;
+
+#[cfg_accessible(std(value))] //~ ERROR `cfg_accessible` path cannot accept arguments
+struct S7;
+
+fn main() {}
--- /dev/null
+error: malformed `cfg_accessible` attribute input
+ --> $DIR/cfg_accessible-input-validation.rs:3:1
+ |
+LL | #[cfg_accessible]
+ | ^^^^^^^^^^^^^^^^^ help: must be of the form: `#[cfg_accessible(path)]`
+
+error: malformed `cfg_accessible` attribute input
+ --> $DIR/cfg_accessible-input-validation.rs:6:1
+ |
+LL | #[cfg_accessible = "value"]
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: must be of the form: `#[cfg_accessible(path)]`
+
+error: `cfg_accessible` path is not specified
+ --> $DIR/cfg_accessible-input-validation.rs:9:1
+ |
+LL | #[cfg_accessible()]
+ | ^^^^^^^^^^^^^^^^^^^
+
+error: multiple `cfg_accessible` paths are specified
+ --> $DIR/cfg_accessible-input-validation.rs:12:23
+ |
+LL | #[cfg_accessible(std, core)]
+ | ^^^^
+
+error: `cfg_accessible` path cannot be a literal
+ --> $DIR/cfg_accessible-input-validation.rs:15:18
+ |
+LL | #[cfg_accessible("std")]
+ | ^^^^^
+
+error: `cfg_accessible` path cannot accept arguments
+ --> $DIR/cfg_accessible-input-validation.rs:18:18
+ |
+LL | #[cfg_accessible(std = "value")]
+ | ^^^^^^^^^^^^^
+
+error: `cfg_accessible` path cannot accept arguments
+ --> $DIR/cfg_accessible-input-validation.rs:21:18
+ |
+LL | #[cfg_accessible(std(value))]
+ | ^^^^^^^^^^
+
+error: aborting due to 7 previous errors
+
--- /dev/null
+#![feature(cfg_accessible)]
+
+#[cfg_accessible(Z)] //~ ERROR cannot determine whether the path is accessible or not
+struct S;
+
+#[cfg_accessible(S)] //~ ERROR cannot determine whether the path is accessible or not
+struct Z;
+
+fn main() {}
--- /dev/null
+error: cannot determine whether the path is accessible or not
+ --> $DIR/cfg_accessible-stuck.rs:6:1
+ |
+LL | #[cfg_accessible(S)]
+ | ^^^^^^^^^^^^^^^^^^^^
+
+error: cannot determine whether the path is accessible or not
+ --> $DIR/cfg_accessible-stuck.rs:3:1
+ |
+LL | #[cfg_accessible(Z)]
+ | ^^^^^^^^^^^^^^^^^^^^
+
+error: aborting due to 2 previous errors
+
--- /dev/null
+#[cfg_accessible(std)] //~ ERROR use of unstable library feature 'cfg_accessible'
+fn main() {}
--- /dev/null
+error[E0658]: use of unstable library feature 'cfg_accessible': `cfg_accessible` is not fully implemented
+ --> $DIR/cfg_accessible-unstable.rs:1:3
+ |
+LL | #[cfg_accessible(std)]
+ | ^^^^^^^^^^^^^^
+ |
+ = note: see issue #64797 <https://github.com/rust-lang/rust/issues/64797> for more information
+ = help: add `#![feature(cfg_accessible)]` to the crate attributes to enable
+
+error: aborting due to previous error
+
+For more information about this error, try `rustc --explain E0658`.
--- /dev/null
+#![feature(cfg_accessible)]
+
+mod m {
+ pub struct ExistingPublic;
+ struct ExistingPrivate;
+}
+
+#[cfg_accessible(m::ExistingPublic)]
+struct ExistingPublic;
+
+// FIXME: Not implemented yet.
+#[cfg_accessible(m::ExistingPrivate)] //~ ERROR not sure whether the path is accessible or not
+struct ExistingPrivate;
+
+// FIXME: Not implemented yet.
+#[cfg_accessible(m::NonExistent)] //~ ERROR not sure whether the path is accessible or not
+struct ExistingPrivate;
+
+#[cfg_accessible(n::AccessibleExpanded)] // OK, `cfg_accessible` can wait and retry.
+struct AccessibleExpanded;
+
+macro_rules! generate_accessible_expanded {
+ () => {
+ mod n {
+ pub struct AccessibleExpanded;
+ }
+ };
+}
+
+generate_accessible_expanded!();
+
+struct S {
+ field: u8,
+}
+
+// FIXME: Not implemented yet.
+#[cfg_accessible(S::field)] //~ ERROR not sure whether the path is accessible or not
+struct Field;
+
+fn main() {
+ ExistingPublic;
+ AccessibleExpanded;
+}
--- /dev/null
+error: not sure whether the path is accessible or not
+ --> $DIR/cfg_accessible.rs:12:18
+ |
+LL | #[cfg_accessible(m::ExistingPrivate)]
+ | ^^^^^^^^^^^^^^^^^^
+ |
+note: `cfg_accessible` is not fully implemented
+ --> $DIR/cfg_accessible.rs:12:18
+ |
+LL | #[cfg_accessible(m::ExistingPrivate)]
+ | ^^^^^^^^^^^^^^^^^^
+
+error: not sure whether the path is accessible or not
+ --> $DIR/cfg_accessible.rs:16:18
+ |
+LL | #[cfg_accessible(m::NonExistent)]
+ | ^^^^^^^^^^^^^^
+ |
+note: `cfg_accessible` is not fully implemented
+ --> $DIR/cfg_accessible.rs:16:18
+ |
+LL | #[cfg_accessible(m::NonExistent)]
+ | ^^^^^^^^^^^^^^
+
+error: not sure whether the path is accessible or not
+ --> $DIR/cfg_accessible.rs:37:18
+ |
+LL | #[cfg_accessible(S::field)]
+ | ^^^^^^^^
+ |
+note: `cfg_accessible` is not fully implemented
+ --> $DIR/cfg_accessible.rs:37:18
+ |
+LL | #[cfg_accessible(S::field)]
+ | ^^^^^^^^
+
+error: aborting due to 3 previous errors
+
error: aborting due to 10 previous errors
-Some errors have detailed explanations: E0566, E0587.
+Some errors have detailed explanations: E0566, E0587, E0634.
For more information about an error, try `rustc --explain E0566`.
error[E0277]: arrays only have std trait implementations for lengths 0..=32
- --> $DIR/into-iter-no-impls-length-33.rs:12:5
+ --> $DIR/into-iter-no-impls-length-33.rs:12:19
|
LL | IntoIter::new([0i32; 33])
- | ^^^^^^^^^^^^^ the trait `std::array::LengthAtMost32` is not implemented for `[i32; 33]`
+ | ^^^^^^^^^^ the trait `std::array::LengthAtMost32` is not implemented for `[i32; 33]`
|
= note: required by `std::array::IntoIter::<T, N>::new`
= note: the return type of a function must have a statically known size
error[E0277]: arrays only have std trait implementations for lengths 0..=32
- --> $DIR/into-iter-no-impls-length-33.rs:18:5
+ --> $DIR/into-iter-no-impls-length-33.rs:18:19
|
LL | IntoIter::new([0i32; 33])
- | ^^^^^^^^^^^^^ the trait `std::array::LengthAtMost32` is not implemented for `[i32; 33]`
+ | ^^^^^^^^^^ the trait `std::array::LengthAtMost32` is not implemented for `[i32; 33]`
|
= note: required by `std::array::IntoIter::<T, N>::new`
= note: the return type of a function must have a statically known size
error[E0277]: arrays only have std trait implementations for lengths 0..=32
- --> $DIR/into-iter-no-impls-length-33.rs:24:5
+ --> $DIR/into-iter-no-impls-length-33.rs:24:19
|
LL | IntoIter::new([0i32; 33])
- | ^^^^^^^^^^^^^ the trait `std::array::LengthAtMost32` is not implemented for `[i32; 33]`
+ | ^^^^^^^^^^ the trait `std::array::LengthAtMost32` is not implemented for `[i32; 33]`
|
= note: required by `std::array::IntoIter::<T, N>::new`
= note: the return type of a function must have a statically known size
error[E0277]: arrays only have std trait implementations for lengths 0..=32
- --> $DIR/into-iter-no-impls-length-33.rs:30:5
+ --> $DIR/into-iter-no-impls-length-33.rs:30:19
|
LL | IntoIter::new([0i32; 33])
- | ^^^^^^^^^^^^^ the trait `std::array::LengthAtMost32` is not implemented for `[i32; 33]`
+ | ^^^^^^^^^^ the trait `std::array::LengthAtMost32` is not implemented for `[i32; 33]`
|
= note: required by `std::array::IntoIter::<T, N>::new`
= note: the return type of a function must have a statically known size
error[E0277]: arrays only have std trait implementations for lengths 0..=32
- --> $DIR/into-iter-no-impls-length-33.rs:36:5
+ --> $DIR/into-iter-no-impls-length-33.rs:36:19
|
LL | IntoIter::new([0i32; 33])
- | ^^^^^^^^^^^^^ the trait `std::array::LengthAtMost32` is not implemented for `[i32; 33]`
+ | ^^^^^^^^^^ the trait `std::array::LengthAtMost32` is not implemented for `[i32; 33]`
|
= note: required by `std::array::IntoIter::<T, N>::new`
= note: the return type of a function must have a statically known size
error[E0277]: arrays only have std trait implementations for lengths 0..=32
- --> $DIR/into-iter-no-impls-length-33.rs:42:5
+ --> $DIR/into-iter-no-impls-length-33.rs:42:19
|
LL | IntoIter::new([0i32; 33])
- | ^^^^^^^^^^^^^ the trait `std::array::LengthAtMost32` is not implemented for `[i32; 33]`
+ | ^^^^^^^^^^ the trait `std::array::LengthAtMost32` is not implemented for `[i32; 33]`
|
= note: required by `std::array::IntoIter::<T, N>::new`
= note: the return type of a function must have a statically known size
error[E0277]: arrays only have std trait implementations for lengths 0..=32
- --> $DIR/into-iter-no-impls-length-33.rs:48:5
+ --> $DIR/into-iter-no-impls-length-33.rs:48:19
|
LL | IntoIter::new([0i32; 33])
- | ^^^^^^^^^^^^^ the trait `std::array::LengthAtMost32` is not implemented for `[i32; 33]`
+ | ^^^^^^^^^^ the trait `std::array::LengthAtMost32` is not implemented for `[i32; 33]`
|
= note: required by `std::array::IntoIter::<T, N>::new`
--> $DIR/cannot-infer-const-args.rs:9:5
|
LL | foo();
- | ^^^ cannot infer type for fn item `fn() -> usize {foo::<_: usize>}`
+ | ^^^ cannot infer type for fn item `fn() -> usize {foo::<{_: usize}>}`
error: aborting due to previous error
struct S<const N: usize>;
fn main() {
- assert_eq!(std::any::type_name::<S<3>>(), "const_generic_type_name::S<3usize>");
+ assert_eq!(std::any::type_name::<S<3>>(), "const_generic_type_name::S<3>");
}
--> $DIR/fn-const-param-infer.rs:16:31
|
LL | let _: Checked<not_one> = Checked::<not_two>;
- | ---------------- ^^^^^^^^^^^^^^^^^^ expected `not_one`, found `not_two`
+ | ---------------- ^^^^^^^^^^^^^^^^^^ expected `{not_one as fn(usize) -> bool}`, found `{not_two as fn(usize) -> bool}`
| |
| expected due to this
|
- = note: expected struct `Checked<not_one>`
- found struct `Checked<not_two>`
+ = note: expected struct `Checked<{not_one as fn(usize) -> bool}>`
+ found struct `Checked<{not_two as fn(usize) -> bool}>`
error[E0308]: mismatched types
--> $DIR/fn-const-param-infer.rs:20:24
--> $DIR/fn-const-param-infer.rs:25:40
|
LL | let _: Checked<{generic::<u32>}> = Checked::<{generic::<u16>}>;
- | ------------------------- ^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `generic::<u32>`, found `generic::<u16>`
+ | ------------------------- ^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `{generic::<u32> as fn(usize) -> bool}`, found `{generic::<u16> as fn(usize) -> bool}`
| |
| expected due to this
|
- = note: expected struct `Checked<generic::<u32>>`
- found struct `Checked<generic::<u16>>`
+ = note: expected struct `Checked<{generic::<u32> as fn(usize) -> bool}>`
+ found struct `Checked<{generic::<u16> as fn(usize) -> bool}>`
error: aborting due to 4 previous errors
--- /dev/null
+#![feature(const_generics)]
+//~^ WARN the feature `const_generics` is incomplete and may cause the compiler to crash
+
+fn foo<const N: usize>() {
+ let _ = [0u64; N + 1];
+ //~^ ERROR array lengths can't depend on generic parameters
+}
+
+fn main() {}
--- /dev/null
+warning: the feature `const_generics` is incomplete and may cause the compiler to crash
+ --> $DIR/issue-62456.rs:1:12
+ |
+LL | #![feature(const_generics)]
+ | ^^^^^^^^^^^^^^
+ |
+ = note: `#[warn(incomplete_features)]` on by default
+
+error: array lengths can't depend on generic parameters
+ --> $DIR/issue-62456.rs:5:20
+ |
+LL | let _ = [0u64; N + 1];
+ | ^^^^^
+
+error: aborting due to previous error
+
--- /dev/null
+// Regression test for #62504
+
+#![feature(const_generics)]
+#![allow(incomplete_features)]
+
+trait HasSize {
+ const SIZE: usize;
+}
+
+impl<const X: usize> HasSize for ArrayHolder<{ X }> {
+ const SIZE: usize = X;
+}
+
+struct ArrayHolder<const X: usize>([u32; X]);
+
+impl<const X: usize> ArrayHolder<{ X }> {
+ pub const fn new() -> Self {
+ ArrayHolder([0; Self::SIZE])
+ //~^ ERROR: array lengths can't depend on generic parameters
+ }
+}
+
+fn main() {
+ let mut array = ArrayHolder::new();
+}
--- /dev/null
+error: array lengths can't depend on generic parameters
+ --> $DIR/issue-62504.rs:18:25
+ |
+LL | ArrayHolder([0; Self::SIZE])
+ | ^^^^^^^^^^
+
+error: aborting due to previous error
+
+++ /dev/null
-// run-pass
-// compile-flags: -Z chalk
-
-#![feature(const_generics)]
-//~^ WARN the feature `const_generics` is incomplete and may cause the compiler to crash
-
-pub struct Foo<T, const N: usize>([T; N]);
-impl<T, const N: usize> Foo<T, {N}> {}
-
-fn main() {}
+++ /dev/null
-warning: the feature `const_generics` is incomplete and may cause the compiler to crash
- --> $DIR/issue-65675.rs:4:12
- |
-LL | #![feature(const_generics)]
- | ^^^^^^^^^^^^^^
- |
- = note: `#[warn(incomplete_features)]` on by default
-
--- /dev/null
+// Regression test for #67739
+
+#![allow(incomplete_features)]
+#![feature(const_generics)]
+
+use std::mem;
+
+pub trait Trait {
+ type Associated: Sized;
+
+ fn associated_size(&self) -> usize {
+ [0u8; mem::size_of::<Self::Associated>()];
+ //~^ ERROR: array lengths can't depend on generic parameters
+ 0
+ }
+}
+
+fn main() {}
--- /dev/null
+error: array lengths can't depend on generic parameters
+ --> $DIR/issue-67739.rs:12:15
+ |
+LL | [0u8; mem::size_of::<Self::Associated>()];
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+error: aborting due to previous error
+
struct Const<const P: *const u32>;
fn main() {
- let _: Const<{15 as *const _}> = Const::<{10 as *const _}>; //~ mismatched types
- let _: Const<{10 as *const _}> = Const::<{10 as *const _}>;
+ let _: Const<{ 15 as *const _ }> = Const::<{ 10 as *const _ }>; //~ mismatched types
+ let _: Const<{ 10 as *const _ }> = Const::<{ 10 as *const _ }>;
}
= note: `#[warn(incomplete_features)]` on by default
error[E0308]: mismatched types
- --> $DIR/raw-ptr-const-param.rs:7:38
+ --> $DIR/raw-ptr-const-param.rs:7:40
|
-LL | let _: Const<{15 as *const _}> = Const::<{10 as *const _}>;
- | ----------------------- ^^^^^^^^^^^^^^^^^^^^^^^^^ expected `{pointer}`, found `{pointer}`
+LL | let _: Const<{ 15 as *const _ }> = Const::<{ 10 as *const _ }>;
+ | ------------------------- ^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `{0xf as *const u32}`, found `{0xa as *const u32}`
| |
| expected due to this
|
- = note: expected struct `Const<{pointer}>`
- found struct `Const<{pointer}>`
+ = note: expected struct `Const<{0xf as *const u32}>`
+ found struct `Const<{0xa as *const u32}>`
error: aborting due to previous error
LL | const I32_REF_U8_UNION: u8 = unsafe { Nonsense { int_32_ref: &3 }.uint_8 };
| --------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | a raw memory access tried to access part of a pointer value as raw bytes
+ | unable to turn pointer into raw bytes
|
= note: `#[deny(const_err)]` on by default
LL | const I32_REF_U16_UNION: u16 = unsafe { Nonsense { int_32_ref: &3 }.uint_16 };
| ----------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | a raw memory access tried to access part of a pointer value as raw bytes
+ | unable to turn pointer into raw bytes
error: any use of this value will cause an error
--> $DIR/const-pointer-values-in-various-types.rs:34:45
LL | const I32_REF_U32_UNION: u32 = unsafe { Nonsense { int_32_ref: &3 }.uint_32 };
| ----------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | a raw memory access tried to access part of a pointer value as raw bytes
+ | unable to turn pointer into raw bytes
error[E0080]: it is undefined behavior to use this value
--> $DIR/const-pointer-values-in-various-types.rs:37:5
LL | const I32_REF_I8_UNION: i8 = unsafe { Nonsense { int_32_ref: &3 }.int_8 };
| --------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | a raw memory access tried to access part of a pointer value as raw bytes
+ | unable to turn pointer into raw bytes
error: any use of this value will cause an error
--> $DIR/const-pointer-values-in-various-types.rs:46:45
LL | const I32_REF_I16_UNION: i16 = unsafe { Nonsense { int_32_ref: &3 }.int_16 };
| ----------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | a raw memory access tried to access part of a pointer value as raw bytes
+ | unable to turn pointer into raw bytes
error: any use of this value will cause an error
--> $DIR/const-pointer-values-in-various-types.rs:49:45
LL | const I32_REF_I32_UNION: i32 = unsafe { Nonsense { int_32_ref: &3 }.int_32 };
| ----------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | a raw memory access tried to access part of a pointer value as raw bytes
+ | unable to turn pointer into raw bytes
error[E0080]: it is undefined behavior to use this value
--> $DIR/const-pointer-values-in-various-types.rs:52:5
LL | const I32_REF_F32_UNION: f32 = unsafe { Nonsense { int_32_ref: &3 }.float_32 };
| ----------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | a raw memory access tried to access part of a pointer value as raw bytes
+ | unable to turn pointer into raw bytes
error[E0080]: it is undefined behavior to use this value
--> $DIR/const-pointer-values-in-various-types.rs:61:5
LL | const I32_REF_BOOL_UNION: bool = unsafe { Nonsense { int_32_ref: &3 }.truthy_falsey };
| ------------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | a raw memory access tried to access part of a pointer value as raw bytes
+ | unable to turn pointer into raw bytes
error: any use of this value will cause an error
--> $DIR/const-pointer-values-in-various-types.rs:67:47
LL | const I32_REF_CHAR_UNION: char = unsafe { Nonsense { int_32_ref: &3 }.character };
| ------------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | a raw memory access tried to access part of a pointer value as raw bytes
+ | unable to turn pointer into raw bytes
error: any use of this value will cause an error
--> $DIR/const-pointer-values-in-various-types.rs:70:39
LL | const STR_U8_UNION: u8 = unsafe { Nonsense { stringy: "3" }.uint_8 };
| ----------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | a raw memory access tried to access part of a pointer value as raw bytes
+ | unable to turn pointer into raw bytes
error: any use of this value will cause an error
--> $DIR/const-pointer-values-in-various-types.rs:73:41
LL | const STR_U16_UNION: u16 = unsafe { Nonsense { stringy: "3" }.uint_16 };
| ------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | a raw memory access tried to access part of a pointer value as raw bytes
+ | unable to turn pointer into raw bytes
error: any use of this value will cause an error
--> $DIR/const-pointer-values-in-various-types.rs:76:41
LL | const STR_U32_UNION: u32 = unsafe { Nonsense { stringy: "3" }.uint_32 };
| ------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | a raw memory access tried to access part of a pointer value as raw bytes
+ | unable to turn pointer into raw bytes
error[E0080]: it is undefined behavior to use this value
--> $DIR/const-pointer-values-in-various-types.rs:79:5
LL | const STR_U128_UNION: u128 = unsafe { Nonsense { stringy: "3" }.uint_128 };
| --------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | a raw memory access tried to access part of a pointer value as raw bytes
+ | unable to turn pointer into raw bytes
error: any use of this value will cause an error
--> $DIR/const-pointer-values-in-various-types.rs:85:39
LL | const STR_I8_UNION: i8 = unsafe { Nonsense { stringy: "3" }.int_8 };
| ----------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | a raw memory access tried to access part of a pointer value as raw bytes
+ | unable to turn pointer into raw bytes
error: any use of this value will cause an error
--> $DIR/const-pointer-values-in-various-types.rs:88:41
LL | const STR_I16_UNION: i16 = unsafe { Nonsense { stringy: "3" }.int_16 };
| ------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | a raw memory access tried to access part of a pointer value as raw bytes
+ | unable to turn pointer into raw bytes
error: any use of this value will cause an error
--> $DIR/const-pointer-values-in-various-types.rs:91:41
LL | const STR_I32_UNION: i32 = unsafe { Nonsense { stringy: "3" }.int_32 };
| ------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | a raw memory access tried to access part of a pointer value as raw bytes
+ | unable to turn pointer into raw bytes
error[E0080]: it is undefined behavior to use this value
--> $DIR/const-pointer-values-in-various-types.rs:94:5
LL | const STR_I128_UNION: i128 = unsafe { Nonsense { stringy: "3" }.int_128 };
| --------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | a raw memory access tried to access part of a pointer value as raw bytes
+ | unable to turn pointer into raw bytes
error: any use of this value will cause an error
--> $DIR/const-pointer-values-in-various-types.rs:100:41
LL | const STR_F32_UNION: f32 = unsafe { Nonsense { stringy: "3" }.float_32 };
| ------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | a raw memory access tried to access part of a pointer value as raw bytes
+ | unable to turn pointer into raw bytes
error[E0080]: it is undefined behavior to use this value
--> $DIR/const-pointer-values-in-various-types.rs:103:5
LL | const STR_BOOL_UNION: bool = unsafe { Nonsense { stringy: "3" }.truthy_falsey };
| --------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | a raw memory access tried to access part of a pointer value as raw bytes
+ | unable to turn pointer into raw bytes
error: any use of this value will cause an error
--> $DIR/const-pointer-values-in-various-types.rs:109:43
LL | const STR_CHAR_UNION: char = unsafe { Nonsense { stringy: "3" }.character };
| --------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | a raw memory access tried to access part of a pointer value as raw bytes
+ | unable to turn pointer into raw bytes
error: aborting due to 29 previous errors
#[lang = "eh_personality"]
fn eh() {}
-#[lang = "eh_unwind_resume"]
-fn eh_unwind_resume() {}
#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
pub trait Foo {
fn foo(self) -> u32;
LL | const Z2: i32 = unsafe { *(42 as *const i32) };
| -------------------------^^^^^^^^^^^^^^^^^^^---
| |
- | a memory access tried to interpret some bytes as a pointer
+ | unable to turn bytes into a pointer
error: any use of this value will cause an error
--> $DIR/const_raw_ptr_ops.rs:17:26
LL | const Z3: i32 = unsafe { *(44 as *const i32) };
| -------------------------^^^^^^^^^^^^^^^^^^^---
| |
- | a memory access tried to interpret some bytes as a pointer
+ | unable to turn bytes into a pointer
error: aborting due to 5 previous errors
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
fn main() {
const MIN: i8 = -5;
const TEST: () = { unsafe { //~ NOTE
let slice: *const [u8] = mem::transmute((1usize, usize::MAX));
let _val = &*slice; //~ ERROR: any use of this value will cause an error
- //~^ NOTE: total size is bigger than largest supported object
+ //~^ NOTE: slice is bigger than largest supported object
//~^^ on by default
} };
LL | / const TEST: () = { unsafe {
LL | | let slice: *const [u8] = mem::transmute((1usize, usize::MAX));
LL | | let _val = &*slice;
- | | ^^^^^^^ invalid slice: total size is bigger than largest supported object
+ | | ^^^^^^^ invalid metadata in wide pointer: slice is bigger than largest supported object
LL | |
LL | |
LL | | } };
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
enum Foo {
A = 5,
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![feature(const_fn, rustc_attrs)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
use std::time::Duration;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![feature(extern_types)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
pub trait Nullable {
const NULL: Self;
-// build-pass (FIXME(62277): could be check-pass?)
+// Regression test for #50356: Compiler panic when using repr(packed)
+// associated constant in a match arm
+
+// check-pass
#[derive(Copy, Clone, PartialEq, Eq)]
#[repr(packed)]
pub struct Num(u64);
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
struct S(pub &'static u32, pub u32);
LL | const X: u64 = *wat(42);
| ---------------^^^^^^^^-
| |
- | dangling pointer was dereferenced
+ | pointer to alloc2 was dereferenced after this allocation got freed
|
= note: `#[deny(const_err)]` on by default
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
pub struct Stats;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// https://github.com/rust-lang/rust/issues/51300
#[derive(PartialEq, Eq, Clone, Copy)]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
macro_rules! m {
() => {{
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
pub const STATIC_TRAIT: &dyn Test = &();
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// Test that we can handle newtypes wrapping extern types
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
// if `X` were used instead of `x`, `X - 10` would result in a lint.
// This file should never produce a lint, no matter how the const
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
pub fn main() {
let y: &'static mut [u8; 0] = &mut [];
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![warn(const_err)]
#![crate_type = "lib"]
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
#![warn(const_err)]
pub const Z: u32 = 0 - 1;
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
const PARSE_BOOL: Option<&'static str> = None;
static FOO: (Option<&str>, u32) = (PARSE_BOOL, 42);
--> $DIR/transmute-const.rs:5:1
|
LL | static FOO: bool = unsafe { mem::transmute(3u8) };
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered 3, but expected something less or equal to 1
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered 3, but expected a boolean
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
+#![feature(const_transmute, never_type)]
#![allow(const_err)] // make sure we cannot allow away the errors tested here
+use std::mem;
#[repr(transparent)]
#[derive(Copy, Clone)]
struct Wrap<T>(T);
+#[derive(Copy, Clone)]
+enum Never {}
+
+// # simple enum with discriminant 0
+
#[repr(usize)]
#[derive(Copy, Clone)]
enum Enum {
A = 0,
}
-#[repr(C)]
-union TransmuteEnum {
- in1: &'static u8,
- in2: usize,
- out1: Enum,
- out2: Wrap<Enum>,
-}
-const GOOD_ENUM: Enum = unsafe { TransmuteEnum { in2: 0 }.out1 };
+const GOOD_ENUM: Enum = unsafe { mem::transmute(0usize) };
-const BAD_ENUM: Enum = unsafe { TransmuteEnum { in2: 1 }.out1 };
+const BAD_ENUM: Enum = unsafe { mem::transmute(1usize) };
//~^ ERROR is undefined behavior
-const BAD_ENUM_PTR: Enum = unsafe { TransmuteEnum { in1: &1 }.out1 };
+const BAD_ENUM_PTR: Enum = unsafe { mem::transmute(&1) };
//~^ ERROR is undefined behavior
-const BAD_ENUM_WRAPPED: Wrap<Enum> = unsafe { TransmuteEnum { in1: &1 }.out2 };
+const BAD_ENUM_WRAPPED: Wrap<Enum> = unsafe { mem::transmute(&1) };
//~^ ERROR is undefined behavior
+// # simple enum with discriminant 2
+
// (Potentially) invalid enum discriminant
#[repr(usize)]
#[derive(Copy, Clone)]
A = 2,
}
-#[repr(C)]
-union TransmuteEnum2 {
- in1: usize,
- in2: &'static u8,
- in3: (),
- out1: Enum2,
- out2: Wrap<Enum2>, // something wrapping the enum so that we test layout first, not enum
- out3: Option<Enum2>,
-}
-const BAD_ENUM2: Enum2 = unsafe { TransmuteEnum2 { in1: 0 }.out1 };
+const BAD_ENUM2: Enum2 = unsafe { mem::transmute(0usize) };
//~^ ERROR is undefined behavior
-const BAD_ENUM2_PTR: Enum2 = unsafe { TransmuteEnum2 { in2: &0 }.out1 };
+const BAD_ENUM2_PTR: Enum2 = unsafe { mem::transmute(&0) };
//~^ ERROR is undefined behavior
-const BAD_ENUM2_WRAPPED: Wrap<Enum2> = unsafe { TransmuteEnum2 { in2: &0 }.out2 };
+// something wrapping the enum so that we test layout first, not enum
+const BAD_ENUM2_WRAPPED: Wrap<Enum2> = unsafe { mem::transmute(&0) };
//~^ ERROR is undefined behavior
// Undef enum discriminant.
-const BAD_ENUM2_UNDEF : Enum2 = unsafe { TransmuteEnum2 { in3: () }.out1 };
+#[repr(C)]
+union MaybeUninit<T: Copy> {
+ uninit: (),
+ init: T,
+}
+const BAD_ENUM2_UNDEF : Enum2 = unsafe { MaybeUninit { uninit: () }.init };
//~^ ERROR is undefined behavior
// Pointer value in an enum with a niche that is not just 0.
-const BAD_ENUM2_OPTION_PTR: Option<Enum2> = unsafe { TransmuteEnum2 { in2: &0 }.out3 };
+const BAD_ENUM2_OPTION_PTR: Option<Enum2> = unsafe { mem::transmute(&0) };
//~^ ERROR is undefined behavior
+// # valid discriminant for uninhabited variant
+
+// An enum with 3 variants of which some are uninhabited -- so the uninhabited variants *do*
+// have a discriminant.
+enum UninhDiscriminant {
+ A,
+ B(!),
+ C,
+ D(Never),
+}
+
+const GOOD_INHABITED_VARIANT1: UninhDiscriminant = unsafe { mem::transmute(0u8) }; // variant A
+const GOOD_INHABITED_VARIANT2: UninhDiscriminant = unsafe { mem::transmute(2u8) }; // variant C
+
+const BAD_UNINHABITED_VARIANT1: UninhDiscriminant = unsafe { mem::transmute(1u8) };
+//~^ ERROR is undefined behavior
+const BAD_UNINHABITED_VARIANT2: UninhDiscriminant = unsafe { mem::transmute(3u8) };
+//~^ ERROR is undefined behavior
+
+// # other
+
// Invalid enum field content (mostly to test printing of paths for enum tuple
// variants and tuples).
-#[repr(C)]
-union TransmuteChar {
- a: u32,
- b: char,
-}
// Need to create something which does not clash with enum layout optimizations.
-const BAD_OPTION_CHAR: Option<(char, char)> = Some(('x', unsafe { TransmuteChar { a: !0 }.b }));
+const BAD_OPTION_CHAR: Option<(char, char)> = Some(('x', unsafe { mem::transmute(!0u32) }));
+//~^ ERROR is undefined behavior
+
+// All variants are uninhabited but also have data.
+const BAD_UNINHABITED_WITH_DATA1: Result<(i32, Never), (i32, !)> = unsafe { mem::transmute(1u64) };
+//~^ ERROR is undefined behavior
+const BAD_UNINHABITED_WITH_DATA2: Result<(i32, !), (i32, Never)> = unsafe { mem::transmute(1u64) };
//~^ ERROR is undefined behavior
fn main() {
error[E0080]: it is undefined behavior to use this value
--> $DIR/ub-enum.rs:23:1
|
-LL | const BAD_ENUM: Enum = unsafe { TransmuteEnum { in2: 1 }.out1 };
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered 1, but expected a valid enum discriminant
+LL | const BAD_ENUM: Enum = unsafe { mem::transmute(1usize) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered 1, but expected a valid enum discriminant
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
--> $DIR/ub-enum.rs:26:1
|
-LL | const BAD_ENUM_PTR: Enum = unsafe { TransmuteEnum { in1: &1 }.out1 };
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a pointer, but expected a valid enum discriminant
+LL | const BAD_ENUM_PTR: Enum = unsafe { mem::transmute(&1) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a pointer at .<enum-tag>, but expected initialized plain (non-pointer) bytes
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
--> $DIR/ub-enum.rs:29:1
|
-LL | const BAD_ENUM_WRAPPED: Wrap<Enum> = unsafe { TransmuteEnum { in1: &1 }.out2 };
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a pointer, but expected something that cannot possibly fail to be equal to 0
+LL | const BAD_ENUM_WRAPPED: Wrap<Enum> = unsafe { mem::transmute(&1) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a pointer at .0.<enum-tag>, but expected initialized plain (non-pointer) bytes
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
- --> $DIR/ub-enum.rs:48:1
+ --> $DIR/ub-enum.rs:41:1
|
-LL | const BAD_ENUM2: Enum2 = unsafe { TransmuteEnum2 { in1: 0 }.out1 };
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered 0, but expected a valid enum discriminant
+LL | const BAD_ENUM2: Enum2 = unsafe { mem::transmute(0usize) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered 0, but expected a valid enum discriminant
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
- --> $DIR/ub-enum.rs:50:1
+ --> $DIR/ub-enum.rs:43:1
|
-LL | const BAD_ENUM2_PTR: Enum2 = unsafe { TransmuteEnum2 { in2: &0 }.out1 };
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a pointer, but expected a valid enum discriminant
+LL | const BAD_ENUM2_PTR: Enum2 = unsafe { mem::transmute(&0) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a pointer at .<enum-tag>, but expected initialized plain (non-pointer) bytes
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
- --> $DIR/ub-enum.rs:52:1
+ --> $DIR/ub-enum.rs:46:1
|
-LL | const BAD_ENUM2_WRAPPED: Wrap<Enum2> = unsafe { TransmuteEnum2 { in2: &0 }.out2 };
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a pointer, but expected something that cannot possibly fail to be equal to 2
+LL | const BAD_ENUM2_WRAPPED: Wrap<Enum2> = unsafe { mem::transmute(&0) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a pointer at .0.<enum-tag>, but expected initialized plain (non-pointer) bytes
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
- --> $DIR/ub-enum.rs:56:1
+ --> $DIR/ub-enum.rs:55:1
|
-LL | const BAD_ENUM2_UNDEF : Enum2 = unsafe { TransmuteEnum2 { in3: () }.out1 };
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered uninitialized bytes, but expected a valid enum discriminant
+LL | const BAD_ENUM2_UNDEF : Enum2 = unsafe { MaybeUninit { uninit: () }.init };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered uninitialized bytes at .<enum-tag>, but expected initialized plain (non-pointer) bytes
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
- --> $DIR/ub-enum.rs:60:1
+ --> $DIR/ub-enum.rs:59:1
|
-LL | const BAD_ENUM2_OPTION_PTR: Option<Enum2> = unsafe { TransmuteEnum2 { in2: &0 }.out3 };
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a pointer, but expected a valid enum discriminant
+LL | const BAD_ENUM2_OPTION_PTR: Option<Enum2> = unsafe { mem::transmute(&0) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a pointer at .<enum-tag>, but expected initialized plain (non-pointer) bytes
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
- --> $DIR/ub-enum.rs:71:1
+ --> $DIR/ub-enum.rs:76:1
|
-LL | const BAD_OPTION_CHAR: Option<(char, char)> = Some(('x', unsafe { TransmuteChar { a: !0 }.b }));
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered 4294967295 at .<downcast-variant(Some)>.0.1, but expected something less or equal to 1114111
+LL | const BAD_UNINHABITED_VARIANT1: UninhDiscriminant = unsafe { mem::transmute(1u8) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a value of the never type `!` at .<enum-variant(B)>.0
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
-error: aborting due to 9 previous errors
+error[E0080]: it is undefined behavior to use this value
+ --> $DIR/ub-enum.rs:78:1
+ |
+LL | const BAD_UNINHABITED_VARIANT2: UninhDiscriminant = unsafe { mem::transmute(3u8) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a value of uninhabited type Never at .<enum-variant(D)>.0
+ |
+ = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
+
+error[E0080]: it is undefined behavior to use this value
+ --> $DIR/ub-enum.rs:86:1
+ |
+LL | const BAD_OPTION_CHAR: Option<(char, char)> = Some(('x', unsafe { mem::transmute(!0u32) }));
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered 4294967295 at .<enum-variant(Some)>.0.1, but expected a valid unicode codepoint
+ |
+ = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
+
+error[E0080]: it is undefined behavior to use this value
+ --> $DIR/ub-enum.rs:90:1
+ |
+LL | const BAD_UNINHABITED_WITH_DATA1: Result<(i32, Never), (i32, !)> = unsafe { mem::transmute(1u64) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a value of the never type `!` at .<enum-variant(Err)>.0.1
+ |
+ = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
+
+error[E0080]: it is undefined behavior to use this value
+ --> $DIR/ub-enum.rs:92:1
+ |
+LL | const BAD_UNINHABITED_WITH_DATA2: Result<(i32, !), (i32, Never)> = unsafe { mem::transmute(1u64) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a value of uninhabited type Never at .<enum-variant(Err)>.0.1
+ |
+ = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
+
+error: aborting due to 13 previous errors
For more information about this error, try `rustc --explain E0080`.
//~^ ERROR it is undefined behavior to use this value
#[repr(C)]
-union Transmute {
+union MaybeUninit<T: Copy> {
uninit: (),
- out: NonZeroU8,
+ init: T,
}
-const UNINIT: NonZeroU8 = unsafe { Transmute { uninit: () }.out };
+const UNINIT: NonZeroU8 = unsafe { MaybeUninit { uninit: () }.init };
//~^ ERROR it is undefined behavior to use this value
// Also test other uses of rustc_layout_scalar_valid_range_start
LL | | let ptr: &[u8; 256] = mem::transmute(&0u8); // &0 gets promoted so it does not dangle
LL | | // Use address-of-element for pointer arithmetic. This could wrap around to NULL!
LL | | let out_of_bounds_ptr = &ptr[255];
- | | ^^^^^^^^^ Memory access failed: pointer must be in-bounds at offset 256, but is outside bounds of allocation 8 which has size 1
+ | | ^^^^^^^^^ Memory access failed: pointer must be in-bounds at offset 256, but is outside bounds of alloc8 which has size 1
LL | | mem::transmute(out_of_bounds_ptr)
LL | | } };
| |____-
error[E0080]: it is undefined behavior to use this value
--> $DIR/ub-nonnull.rs:32:1
|
-LL | const UNINIT: NonZeroU8 = unsafe { Transmute { uninit: () }.out };
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered uninitialized bytes, but expected something greater or equal to 1
+LL | const UNINIT: NonZeroU8 = unsafe { MaybeUninit { uninit: () }.init };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered uninitialized bytes at .0, but expected initialized plain (non-pointer) bytes
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
const UNALIGNED: &u16 = unsafe { mem::transmute(&[0u8; 4]) };
//~^ ERROR it is undefined behavior to use this value
-//~^^ type validation failed: encountered unaligned reference (required 2 byte alignment but found 1)
+//~^^ type validation failed: encountered an unaligned reference (required 2 byte alignment but found 1)
+
+const UNALIGNED_BOX: Box<u16> = unsafe { mem::transmute(&[0u8; 4]) };
+//~^ ERROR it is undefined behavior to use this value
+//~^^ type validation failed: encountered an unaligned box (required 2 byte alignment but found 1)
const NULL: &u16 = unsafe { mem::transmute(0usize) };
//~^ ERROR it is undefined behavior to use this value
+const NULL_BOX: Box<u16> = unsafe { mem::transmute(0usize) };
+//~^ ERROR it is undefined behavior to use this value
+
// It is very important that we reject this: We do promote `&(4 * REF_AS_USIZE)`,
// but that would fail to compile; so we ended up breaking user code that would
// have worked fine had we not promoted.
const REF_AS_USIZE_SLICE: &[usize] = &[unsafe { mem::transmute(&0) }];
//~^ ERROR it is undefined behavior to use this value
+const REF_AS_USIZE_BOX_SLICE: Box<[usize]> = unsafe { mem::transmute::<&[usize], _>(&[mem::transmute(&0)]) };
+//~^ ERROR it is undefined behavior to use this value
+
const USIZE_AS_REF: &'static u8 = unsafe { mem::transmute(1337usize) };
//~^ ERROR it is undefined behavior to use this value
+const USIZE_AS_BOX: Box<u8> = unsafe { mem::transmute(1337usize) };
+//~^ ERROR it is undefined behavior to use this value
+
fn main() {}
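
The `UNALIGNED`/`UNALIGNED_BOX` cases above reject references whose pointee alignment is too small. The alignment facts they depend on can be checked directly in safe code (a standalone sketch, not part of the test suite):

```rust
use std::mem;

fn main() {
    // A &u16 must be aligned to align_of::<u16>(); a [u8; 4] is only 1-aligned,
    // which is why transmuting &[0u8; 4] to &u16 is rejected above.
    assert_eq!(mem::align_of::<u16>(), 2);
    assert_eq!(mem::align_of::<[u8; 4]>(), 1);
    // A reference obtained safely is always well-aligned:
    let x: u16 = 7;
    let r: &u16 = &x;
    assert_eq!(r as *const u16 as usize % mem::align_of::<u16>(), 0);
}
```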
--> $DIR/ub-ref.rs:7:1
|
LL | const UNALIGNED: &u16 = unsafe { mem::transmute(&[0u8; 4]) };
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered unaligned reference (required 2 byte alignment but found 1)
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered an unaligned reference (required 2 byte alignment but found 1)
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
--> $DIR/ub-ref.rs:11:1
|
+LL | const UNALIGNED_BOX: Box<u16> = unsafe { mem::transmute(&[0u8; 4]) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered an unaligned box (required 2 byte alignment but found 1)
+ |
+ = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
+
+error[E0080]: it is undefined behavior to use this value
+ --> $DIR/ub-ref.rs:15:1
+ |
LL | const NULL: &u16 = unsafe { mem::transmute(0usize) };
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered 0, but expected something greater or equal to 1
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a NULL reference
+ |
+ = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
+
+error[E0080]: it is undefined behavior to use this value
+ --> $DIR/ub-ref.rs:18:1
+ |
+LL | const NULL_BOX: Box<u16> = unsafe { mem::transmute(0usize) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a NULL box
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
- --> $DIR/ub-ref.rs:17:1
+ --> $DIR/ub-ref.rs:24:1
|
LL | const REF_AS_USIZE: usize = unsafe { mem::transmute(&0) };
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a pointer, but expected initialized plain (non-pointer) bytes
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
- --> $DIR/ub-ref.rs:20:1
+ --> $DIR/ub-ref.rs:27:1
|
LL | const REF_AS_USIZE_SLICE: &[usize] = &[unsafe { mem::transmute(&0) }];
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a pointer at .<deref>, but expected plain (non-pointer) bytes
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
- --> $DIR/ub-ref.rs:23:1
+ --> $DIR/ub-ref.rs:30:1
+ |
+LL | const REF_AS_USIZE_BOX_SLICE: Box<[usize]> = unsafe { mem::transmute::<&[usize], _>(&[mem::transmute(&0)]) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a pointer at .<deref>, but expected plain (non-pointer) bytes
+ |
+ = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
+
+error[E0080]: it is undefined behavior to use this value
+ --> $DIR/ub-ref.rs:33:1
|
LL | const USIZE_AS_REF: &'static u8 = unsafe { mem::transmute(1337usize) };
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered dangling reference (created from integer)
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a dangling reference (created from integer)
+ |
+ = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
+
+error[E0080]: it is undefined behavior to use this value
+ --> $DIR/ub-ref.rs:36:1
+ |
+LL | const USIZE_AS_BOX: Box<u8> = unsafe { mem::transmute(1337usize) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a dangling box (created from integer)
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
-error: aborting due to 5 previous errors
+error: aborting due to 9 previous errors
For more information about this error, try `rustc --explain E0080`.
enum Bar {}
#[repr(C)]
-union TransmuteUnion<A: Clone + Copy, B: Clone + Copy> {
- a: A,
- b: B,
+union MaybeUninit<T: Copy> {
+ uninit: (),
+ init: T,
}
-const BAD_BAD_BAD: Bar = unsafe { (TransmuteUnion::<(), Bar> { a: () }).b };
+const BAD_BAD_BAD: Bar = unsafe { MaybeUninit { uninit: () }.init };
//~^ ERROR it is undefined behavior to use this value
const BAD_BAD_REF: &Bar = unsafe { mem::transmute(1usize) };
//~^ ERROR it is undefined behavior to use this value
-const BAD_BAD_ARRAY: [Bar; 1] = unsafe { (TransmuteUnion::<(), [Bar; 1]> { a: () }).b };
+const BAD_BAD_ARRAY: [Bar; 1] = unsafe { MaybeUninit { uninit: () }.init };
//~^ ERROR it is undefined behavior to use this value
fn main() {}
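
The `MaybeUninit` union in the test above reads `init` without ever writing it, which is what makes the constant invalid. The same pattern is well-defined when the field is initialized first; a minimal sketch with a local union name (`MaybeUninitLocal` is hypothetical, chosen to avoid clashing with `std::mem::MaybeUninit`):

```rust
#[repr(C)]
union MaybeUninitLocal<T: Copy> {
    uninit: (),
    init: T,
}

fn main() {
    // Writing through `init` before reading makes the access well-defined;
    // the tests above read `init` of a value created via `uninit: ()`.
    let mut v = MaybeUninitLocal::<u32> { uninit: () };
    v.init = 7; // writes to Copy union fields are safe
    let x = unsafe { v.init }; // reads are unsafe
    assert_eq!(x, 7);
}
```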
error[E0080]: it is undefined behavior to use this value
--> $DIR/ub-uninhabit.rs:15:1
|
-LL | const BAD_BAD_BAD: Bar = unsafe { (TransmuteUnion::<(), Bar> { a: () }).b };
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a value of an uninhabited type
+LL | const BAD_BAD_BAD: Bar = unsafe { MaybeUninit { uninit: () }.init };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a value of uninhabited type Bar
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
--> $DIR/ub-uninhabit.rs:18:1
|
LL | const BAD_BAD_REF: &Bar = unsafe { mem::transmute(1usize) };
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a value of an uninhabited type at .<deref>
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a value of uninhabited type Bar at .<deref>
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
--> $DIR/ub-uninhabit.rs:21:1
|
-LL | const BAD_BAD_ARRAY: [Bar; 1] = unsafe { (TransmuteUnion::<(), [Bar; 1]> { a: () }).b };
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a value of an uninhabited type
+LL | const BAD_BAD_ARRAY: [Bar; 1] = unsafe { MaybeUninit { uninit: () }.init };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a value of uninhabited type Bar at [0]
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
LL | | let another_var = 13;
LL | | move || { let _ = bad_ref; let _ = another_var; }
LL | | };
- | |__^ type validation failed: encountered 0 at .<deref>.<dyn-downcast>.<closure-var(bad_ref)>, but expected something greater or equal to 1
+ | |__^ type validation failed: encountered a NULL reference at .<deref>.<dyn-downcast>.<captured-var(bad_ref)>
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
// ignore-tidy-linelength
+#![feature(const_transmute)]
#![allow(unused)]
#![allow(const_err)] // make sure we cannot allow away the errors tested here
+use std::mem;
+
// normalize-stderr-test "offset \d+" -> "offset N"
-// normalize-stderr-test "allocation \d+" -> "allocation N"
+// normalize-stderr-test "alloc\d+" -> "allocN"
// normalize-stderr-test "size \d+" -> "size N"
#[repr(C)]
-union BoolTransmute {
- val: u8,
- bl: bool,
-}
-
-#[repr(C)]
-#[derive(Copy, Clone)]
-struct SliceRepr {
- ptr: *const u8,
- len: usize,
-}
-
-#[repr(C)]
-#[derive(Copy, Clone)]
-struct BadSliceRepr {
- ptr: *const u8,
- len: &'static u8,
-}
-
-#[repr(C)]
-union SliceTransmute {
- repr: SliceRepr,
- bad: BadSliceRepr,
- addr: usize,
- slice: &'static [u8],
- raw_slice: *const [u8],
- str: &'static str,
- my_str: &'static MyStr,
- my_slice: &'static MySliceBool,
-}
-
-#[repr(C)]
-#[derive(Copy, Clone)]
-struct DynRepr {
- ptr: *const u8,
- vtable: *const u8,
-}
-
-#[repr(C)]
-#[derive(Copy, Clone)]
-struct DynRepr2 {
- ptr: *const u8,
- vtable: *const u64,
-}
-
-#[repr(C)]
-#[derive(Copy, Clone)]
-struct BadDynRepr {
- ptr: *const u8,
- vtable: usize,
-}
-
-#[repr(C)]
-union DynTransmute {
- repr: DynRepr,
- repr2: DynRepr2,
- bad: BadDynRepr,
- addr: usize,
- rust: &'static dyn Trait,
- raw_rust: *const dyn Trait,
+union MaybeUninit<T: Copy> {
+ uninit: (),
+ init: T,
}
trait Trait {}
// # str
// OK
-const STR_VALID: &str = unsafe { SliceTransmute { repr: SliceRepr { ptr: &42, len: 1 } }.str};
+const STR_VALID: &str = unsafe { mem::transmute((&42u8, 1usize)) };
// bad str
-const STR_TOO_LONG: &str = unsafe { SliceTransmute { repr: SliceRepr { ptr: &42, len: 999 } }.str};
+const STR_TOO_LONG: &str = unsafe { mem::transmute((&42u8, 999usize)) };
+//~^ ERROR it is undefined behavior to use this value
+const NESTED_STR_MUCH_TOO_LONG: (&str,) = (unsafe { mem::transmute((&42, usize::MAX)) },);
//~^ ERROR it is undefined behavior to use this value
// bad str
-const STR_LENGTH_PTR: &str = unsafe { SliceTransmute { bad: BadSliceRepr { ptr: &42, len: &3 } }.str};
+const STR_LENGTH_PTR: &str = unsafe { mem::transmute((&42u8, &3)) };
//~^ ERROR it is undefined behavior to use this value
// bad str in user-defined unsized type
-const MY_STR_LENGTH_PTR: &MyStr = unsafe { SliceTransmute { bad: BadSliceRepr { ptr: &42, len: &3 } }.my_str};
+const MY_STR_LENGTH_PTR: &MyStr = unsafe { mem::transmute((&42u8, &3)) };
+//~^ ERROR it is undefined behavior to use this value
+const MY_STR_MUCH_TOO_LONG: &MyStr = unsafe { mem::transmute((&42u8, usize::MAX)) };
//~^ ERROR it is undefined behavior to use this value
// invalid UTF-8
-const STR_NO_UTF8: &str = unsafe { SliceTransmute { slice: &[0xFF] }.str };
+const STR_NO_UTF8: &str = unsafe { mem::transmute::<&[u8], _>(&[0xFF]) };
//~^ ERROR it is undefined behavior to use this value
// invalid UTF-8 in user-defined str-like
-const MYSTR_NO_UTF8: &MyStr = unsafe { SliceTransmute { slice: &[0xFF] }.my_str };
+const MYSTR_NO_UTF8: &MyStr = unsafe { mem::transmute::<&[u8], _>(&[0xFF]) };
//~^ ERROR it is undefined behavior to use this value
// # slice
// OK
-const SLICE_VALID: &[u8] = unsafe { SliceTransmute { repr: SliceRepr { ptr: &42, len: 1 } }.slice};
+const SLICE_VALID: &[u8] = unsafe { mem::transmute((&42u8, 1usize)) };
// bad slice: length uninit
-const SLICE_LENGTH_UNINIT: &[u8] = unsafe { SliceTransmute { addr: 42 }.slice};
+const SLICE_LENGTH_UNINIT: &[u8] = unsafe {
//~^ ERROR it is undefined behavior to use this value
+ let uninit_len = MaybeUninit::<usize> { uninit: () };
+ mem::transmute((42, uninit_len))
+};
// bad slice: length too big
-const SLICE_TOO_LONG: &[u8] = unsafe { SliceTransmute { repr: SliceRepr { ptr: &42, len: 999 } }.slice};
+const SLICE_TOO_LONG: &[u8] = unsafe { mem::transmute((&42u8, 999usize)) };
//~^ ERROR it is undefined behavior to use this value
// bad slice: length not an int
-const SLICE_LENGTH_PTR: &[u8] = unsafe { SliceTransmute { bad: BadSliceRepr { ptr: &42, len: &3 } }.slice};
+const SLICE_LENGTH_PTR: &[u8] = unsafe { mem::transmute((&42u8, &3)) };
+//~^ ERROR it is undefined behavior to use this value
+// bad slice box: length too big
+const SLICE_TOO_LONG_BOX: Box<[u8]> = unsafe { mem::transmute((&42u8, 999usize)) };
+//~^ ERROR it is undefined behavior to use this value
+// bad slice box: length not an int
+const SLICE_LENGTH_PTR_BOX: Box<[u8]> = unsafe { mem::transmute((&42u8, &3)) };
//~^ ERROR it is undefined behavior to use this value
// bad data *inside* the slice
-const SLICE_CONTENT_INVALID: &[bool] = &[unsafe { BoolTransmute { val: 3 }.bl }];
+const SLICE_CONTENT_INVALID: &[bool] = &[unsafe { mem::transmute(3u8) }];
//~^ ERROR it is undefined behavior to use this value
// good MySliceBool
const MYSLICE_GOOD: &MySliceBool = &MySlice(true, [false]);
// bad: sized field is not okay
-const MYSLICE_PREFIX_BAD: &MySliceBool = &MySlice(unsafe { BoolTransmute { val: 3 }.bl }, [false]);
+const MYSLICE_PREFIX_BAD: &MySliceBool = &MySlice(unsafe { mem::transmute(3u8) }, [false]);
//~^ ERROR it is undefined behavior to use this value
// bad: unsized part is not okay
-const MYSLICE_SUFFIX_BAD: &MySliceBool = &MySlice(true, [unsafe { BoolTransmute { val: 3 }.bl }]);
+const MYSLICE_SUFFIX_BAD: &MySliceBool = &MySlice(true, [unsafe { mem::transmute(3u8) }]);
//~^ ERROR it is undefined behavior to use this value
// # raw slice
-const RAW_SLICE_VALID: *const [u8] = unsafe { SliceTransmute { repr: SliceRepr { ptr: &42, len: 1 } }.raw_slice}; // ok
-const RAW_SLICE_TOO_LONG: *const [u8] = unsafe { SliceTransmute { repr: SliceRepr { ptr: &42, len: 999 } }.raw_slice}; // ok because raw
-const RAW_SLICE_MUCH_TOO_LONG: *const [u8] = unsafe { SliceTransmute { repr: SliceRepr { ptr: &42, len: usize::max_value() } }.raw_slice}; // ok because raw
-const RAW_SLICE_LENGTH_UNINIT: *const [u8] = unsafe { SliceTransmute { addr: 42 }.raw_slice};
+const RAW_SLICE_VALID: *const [u8] = unsafe { mem::transmute((&42u8, 1usize)) }; // ok
+const RAW_SLICE_TOO_LONG: *const [u8] = unsafe { mem::transmute((&42u8, 999usize)) }; // ok because raw
+const RAW_SLICE_MUCH_TOO_LONG: *const [u8] = unsafe { mem::transmute((&42u8, usize::MAX)) }; // ok because raw
+const RAW_SLICE_LENGTH_UNINIT: *const [u8] = unsafe {
//~^ ERROR it is undefined behavior to use this value
+ let uninit_len = MaybeUninit::<usize> { uninit: () };
+ mem::transmute((42, uninit_len))
+};
// # trait object
// bad trait object
-const TRAIT_OBJ_SHORT_VTABLE_1: &dyn Trait = unsafe { DynTransmute { repr: DynRepr { ptr: &92, vtable: &3 } }.rust};
+const TRAIT_OBJ_SHORT_VTABLE_1: &dyn Trait = unsafe { mem::transmute((&92u8, &3u8)) };
//~^ ERROR it is undefined behavior to use this value
// bad trait object
-const TRAIT_OBJ_SHORT_VTABLE_2: &dyn Trait = unsafe { DynTransmute { repr2: DynRepr2 { ptr: &92, vtable: &3 } }.rust};
+const TRAIT_OBJ_SHORT_VTABLE_2: &dyn Trait = unsafe { mem::transmute((&92u8, &3u64)) };
//~^ ERROR it is undefined behavior to use this value
// bad trait object
-const TRAIT_OBJ_INT_VTABLE: &dyn Trait = unsafe { DynTransmute { bad: BadDynRepr { ptr: &92, vtable: 3 } }.rust};
+const TRAIT_OBJ_INT_VTABLE: &dyn Trait = unsafe { mem::transmute((&92u8, 4usize)) };
//~^ ERROR it is undefined behavior to use this value
// bad data *inside* the trait object
-const TRAIT_OBJ_CONTENT_INVALID: &dyn Trait = &unsafe { BoolTransmute { val: 3 }.bl };
+const TRAIT_OBJ_CONTENT_INVALID: &dyn Trait = unsafe { mem::transmute::<_, &bool>(&3u8) };
//~^ ERROR it is undefined behavior to use this value
// # raw trait object
-const RAW_TRAIT_OBJ_VTABLE_NULL: *const dyn Trait = unsafe { DynTransmute { bad: BadDynRepr { ptr: &92, vtable: 0 } }.raw_rust};
+const RAW_TRAIT_OBJ_VTABLE_NULL: *const dyn Trait = unsafe { mem::transmute((&92u8, 0usize)) };
//~^ ERROR it is undefined behavior to use this value
-const RAW_TRAIT_OBJ_VTABLE_INVALID: *const dyn Trait = unsafe { DynTransmute { repr2: DynRepr2 { ptr: &92, vtable: &3 } }.raw_rust};
+const RAW_TRAIT_OBJ_VTABLE_INVALID: *const dyn Trait = unsafe { mem::transmute((&92u8, &3u64)) };
//~^ ERROR it is undefined behavior to use this value
-const RAW_TRAIT_OBJ_CONTENT_INVALID: *const dyn Trait = &unsafe { BoolTransmute { val: 3 }.bl } as *const _; // ok because raw
+const RAW_TRAIT_OBJ_CONTENT_INVALID: *const dyn Trait = unsafe { mem::transmute::<_, &bool>(&3u8) } as *const dyn Trait; // ok because raw
// Const eval fails for these, so they need to be statics to error.
static mut RAW_TRAIT_OBJ_VTABLE_NULL_THROUGH_REF: *const dyn Trait = unsafe {
- DynTransmute { bad: BadDynRepr { ptr: &92, vtable: 0 } }.rust
+ mem::transmute::<_, &dyn Trait>((&92u8, 0usize))
//~^ ERROR could not evaluate static initializer
};
static mut RAW_TRAIT_OBJ_VTABLE_INVALID_THROUGH_REF: *const dyn Trait = unsafe {
- DynTransmute { repr2: DynRepr2 { ptr: &92, vtable: &3 } }.rust
+ mem::transmute::<_, &dyn Trait>((&92u8, &3u64))
//~^ ERROR could not evaluate static initializer
};
-fn main() {
- let _ = RAW_TRAIT_OBJ_VTABLE_NULL;
- let _ = RAW_TRAIT_OBJ_VTABLE_INVALID;
-}
+fn main() {}
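
The slice cases in this file hand-build a wide pointer from a `(pointer, length)` pair via `transmute` and then give it an out-of-bounds length. The same wide pointer can be built through the supported API with an in-bounds length, in which case it is valid (a standalone sketch, not part of the test suite):

```rust
use std::slice;

fn main() {
    let data = [1u8, 2, 3];
    // from_raw_parts assembles the same (pointer, length) wide pointer that
    // the tests above construct via transmute -- but with a length that is
    // actually in bounds of the allocation, so the result is a valid &[u8].
    let s: &[u8] = unsafe { slice::from_raw_parts(data.as_ptr(), data.len()) };
    assert_eq!(s.len(), 3);
    assert_eq!(s, &[1, 2, 3]);
}
```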
error[E0080]: it is undefined behavior to use this value
- --> $DIR/ub-wide-ptr.rs:86:1
+ --> $DIR/ub-wide-ptr.rs:32:1
|
-LL | const STR_TOO_LONG: &str = unsafe { SliceTransmute { repr: SliceRepr { ptr: &42, len: 999 } }.str};
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered dangling reference (not entirely in bounds)
+LL | const STR_TOO_LONG: &str = unsafe { mem::transmute((&42u8, 999usize)) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a dangling reference (going beyond the bounds of its allocation)
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
- --> $DIR/ub-wide-ptr.rs:89:1
+ --> $DIR/ub-wide-ptr.rs:34:1
|
-LL | const STR_LENGTH_PTR: &str = unsafe { SliceTransmute { bad: BadSliceRepr { ptr: &42, len: &3 } }.str};
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered non-integer slice length in wide pointer
+LL | const NESTED_STR_MUCH_TOO_LONG: (&str,) = (unsafe { mem::transmute((&42, usize::MAX)) },);
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered invalid reference metadata: slice is bigger than largest supported object at .0
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
- --> $DIR/ub-wide-ptr.rs:92:1
+ --> $DIR/ub-wide-ptr.rs:37:1
|
-LL | const MY_STR_LENGTH_PTR: &MyStr = unsafe { SliceTransmute { bad: BadSliceRepr { ptr: &42, len: &3 } }.my_str};
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered non-integer slice length in wide pointer
+LL | const STR_LENGTH_PTR: &str = unsafe { mem::transmute((&42u8, &3)) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered non-integer slice length in wide pointer
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
- --> $DIR/ub-wide-ptr.rs:96:1
+ --> $DIR/ub-wide-ptr.rs:40:1
|
-LL | const STR_NO_UTF8: &str = unsafe { SliceTransmute { slice: &[0xFF] }.str };
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered uninitialized or non-UTF-8 data in str at .<deref>
+LL | const MY_STR_LENGTH_PTR: &MyStr = unsafe { mem::transmute((&42u8, &3)) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered non-integer slice length in wide pointer
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
- --> $DIR/ub-wide-ptr.rs:99:1
+ --> $DIR/ub-wide-ptr.rs:42:1
|
-LL | const MYSTR_NO_UTF8: &MyStr = unsafe { SliceTransmute { slice: &[0xFF] }.my_str };
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered uninitialized or non-UTF-8 data in str at .<deref>.0
+LL | const MY_STR_MUCH_TOO_LONG: &MyStr = unsafe { mem::transmute((&42u8, usize::MAX)) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered invalid reference metadata: slice is bigger than largest supported object
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
- --> $DIR/ub-wide-ptr.rs:106:1
+ --> $DIR/ub-wide-ptr.rs:46:1
|
-LL | const SLICE_LENGTH_UNINIT: &[u8] = unsafe { SliceTransmute { addr: 42 }.slice};
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered undefined pointer
+LL | const STR_NO_UTF8: &str = unsafe { mem::transmute::<&[u8], _>(&[0xFF]) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered uninitialized or non-UTF-8 data in str at .<deref>
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
- --> $DIR/ub-wide-ptr.rs:109:1
+ --> $DIR/ub-wide-ptr.rs:49:1
+ |
+LL | const MYSTR_NO_UTF8: &MyStr = unsafe { mem::transmute::<&[u8], _>(&[0xFF]) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered uninitialized or non-UTF-8 data in str at .<deref>.0
+ |
+ = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
+
+error[E0080]: it is undefined behavior to use this value
+ --> $DIR/ub-wide-ptr.rs:56:1
+ |
+LL | / const SLICE_LENGTH_UNINIT: &[u8] = unsafe {
+LL | |
+LL | | let uninit_len = MaybeUninit::<usize> { uninit: () };
+LL | | mem::transmute((42, uninit_len))
+LL | | };
+ | |__^ type validation failed: encountered undefined pointer
+ |
+ = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
+
+error[E0080]: it is undefined behavior to use this value
+ --> $DIR/ub-wide-ptr.rs:62:1
|
-LL | const SLICE_TOO_LONG: &[u8] = unsafe { SliceTransmute { repr: SliceRepr { ptr: &42, len: 999 } }.slice};
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered dangling reference (not entirely in bounds)
+LL | const SLICE_TOO_LONG: &[u8] = unsafe { mem::transmute((&42u8, 999usize)) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a dangling reference (going beyond the bounds of its allocation)
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
- --> $DIR/ub-wide-ptr.rs:112:1
+ --> $DIR/ub-wide-ptr.rs:65:1
|
-LL | const SLICE_LENGTH_PTR: &[u8] = unsafe { SliceTransmute { bad: BadSliceRepr { ptr: &42, len: &3 } }.slice};
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered non-integer slice length in wide pointer
+LL | const SLICE_LENGTH_PTR: &[u8] = unsafe { mem::transmute((&42u8, &3)) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered non-integer slice length in wide pointer
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
- --> $DIR/ub-wide-ptr.rs:116:1
+ --> $DIR/ub-wide-ptr.rs:68:1
|
-LL | const SLICE_CONTENT_INVALID: &[bool] = &[unsafe { BoolTransmute { val: 3 }.bl }];
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered 3 at .<deref>[0], but expected something less or equal to 1
+LL | const SLICE_TOO_LONG_BOX: Box<[u8]> = unsafe { mem::transmute((&42u8, 999usize)) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a dangling box (going beyond the bounds of its allocation)
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
- --> $DIR/ub-wide-ptr.rs:122:1
+ --> $DIR/ub-wide-ptr.rs:71:1
|
-LL | const MYSLICE_PREFIX_BAD: &MySliceBool = &MySlice(unsafe { BoolTransmute { val: 3 }.bl }, [false]);
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered 3 at .<deref>.0, but expected something less or equal to 1
+LL | const SLICE_LENGTH_PTR_BOX: Box<[u8]> = unsafe { mem::transmute((&42u8, &3)) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered non-integer slice length in wide pointer
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
- --> $DIR/ub-wide-ptr.rs:125:1
+ --> $DIR/ub-wide-ptr.rs:75:1
|
-LL | const MYSLICE_SUFFIX_BAD: &MySliceBool = &MySlice(true, [unsafe { BoolTransmute { val: 3 }.bl }]);
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered 3 at .<deref>.1[0], but expected something less or equal to 1
+LL | const SLICE_CONTENT_INVALID: &[bool] = &[unsafe { mem::transmute(3u8) }];
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered 3 at .<deref>[0], but expected a boolean
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
- --> $DIR/ub-wide-ptr.rs:132:1
+ --> $DIR/ub-wide-ptr.rs:81:1
|
-LL | const RAW_SLICE_LENGTH_UNINIT: *const [u8] = unsafe { SliceTransmute { addr: 42 }.raw_slice};
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered undefined pointer
+LL | const MYSLICE_PREFIX_BAD: &MySliceBool = &MySlice(unsafe { mem::transmute(3u8) }, [false]);
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered 3 at .<deref>.0, but expected a boolean
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
- --> $DIR/ub-wide-ptr.rs:137:1
+ --> $DIR/ub-wide-ptr.rs:84:1
|
-LL | const TRAIT_OBJ_SHORT_VTABLE_1: &dyn Trait = unsafe { DynTransmute { repr: DynRepr { ptr: &92, vtable: &3 } }.rust};
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered dangling or unaligned vtable pointer in wide pointer or too small vtable
+LL | const MYSLICE_SUFFIX_BAD: &MySliceBool = &MySlice(true, [unsafe { mem::transmute(3u8) }]);
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered 3 at .<deref>.1[0], but expected a boolean
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
- --> $DIR/ub-wide-ptr.rs:140:1
+ --> $DIR/ub-wide-ptr.rs:91:1
|
-LL | const TRAIT_OBJ_SHORT_VTABLE_2: &dyn Trait = unsafe { DynTransmute { repr2: DynRepr2 { ptr: &92, vtable: &3 } }.rust};
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered dangling or unaligned vtable pointer in wide pointer or too small vtable
+LL | / const RAW_SLICE_LENGTH_UNINIT: *const [u8] = unsafe {
+LL | |
+LL | | let uninit_len = MaybeUninit::<usize> { uninit: () };
+LL | | mem::transmute((42, uninit_len))
+LL | | };
+ | |__^ type validation failed: encountered undefined pointer
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
- --> $DIR/ub-wide-ptr.rs:143:1
+ --> $DIR/ub-wide-ptr.rs:99:1
+ |
+LL | const TRAIT_OBJ_SHORT_VTABLE_1: &dyn Trait = unsafe { mem::transmute((&92u8, &3u8)) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered dangling or unaligned vtable pointer in wide pointer or too small vtable
+ |
+ = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
+
+error[E0080]: it is undefined behavior to use this value
+ --> $DIR/ub-wide-ptr.rs:102:1
|
-LL | const TRAIT_OBJ_INT_VTABLE: &dyn Trait = unsafe { DynTransmute { bad: BadDynRepr { ptr: &92, vtable: 3 } }.rust};
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered dangling or unaligned vtable pointer in wide pointer or too small vtable
+LL | const TRAIT_OBJ_SHORT_VTABLE_2: &dyn Trait = unsafe { mem::transmute((&92u8, &3u64)) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered dangling or unaligned vtable pointer in wide pointer or too small vtable
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
- --> $DIR/ub-wide-ptr.rs:147:1
+ --> $DIR/ub-wide-ptr.rs:105:1
+ |
+LL | const TRAIT_OBJ_INT_VTABLE: &dyn Trait = unsafe { mem::transmute((&92u8, 4usize)) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered dangling or unaligned vtable pointer in wide pointer or too small vtable
+ |
+ = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
+
+error[E0080]: it is undefined behavior to use this value
+ --> $DIR/ub-wide-ptr.rs:109:1
|
-LL | const TRAIT_OBJ_CONTENT_INVALID: &dyn Trait = &unsafe { BoolTransmute { val: 3 }.bl };
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered 3 at .<deref>.<dyn-downcast>, but expected something less or equal to 1
+LL | const TRAIT_OBJ_CONTENT_INVALID: &dyn Trait = unsafe { mem::transmute::<_, &bool>(&3u8) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered 3 at .<deref>.<dyn-downcast>, but expected a boolean
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
- --> $DIR/ub-wide-ptr.rs:151:1
+ --> $DIR/ub-wide-ptr.rs:113:1
|
-LL | const RAW_TRAIT_OBJ_VTABLE_NULL: *const dyn Trait = unsafe { DynTransmute { bad: BadDynRepr { ptr: &92, vtable: 0 } }.raw_rust};
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered dangling or unaligned vtable pointer in wide pointer or too small vtable
+LL | const RAW_TRAIT_OBJ_VTABLE_NULL: *const dyn Trait = unsafe { mem::transmute((&92u8, 0usize)) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered dangling or unaligned vtable pointer in wide pointer or too small vtable
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: it is undefined behavior to use this value
- --> $DIR/ub-wide-ptr.rs:153:1
+ --> $DIR/ub-wide-ptr.rs:115:1
|
-LL | const RAW_TRAIT_OBJ_VTABLE_INVALID: *const dyn Trait = unsafe { DynTransmute { repr2: DynRepr2 { ptr: &92, vtable: &3 } }.raw_rust};
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered dangling or unaligned vtable pointer in wide pointer or too small vtable
+LL | const RAW_TRAIT_OBJ_VTABLE_INVALID: *const dyn Trait = unsafe { mem::transmute((&92u8, &3u64)) };
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered dangling or unaligned vtable pointer in wide pointer or too small vtable
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
error[E0080]: could not evaluate static initializer
- --> $DIR/ub-wide-ptr.rs:159:5
+ --> $DIR/ub-wide-ptr.rs:121:5
|
-LL | DynTransmute { bad: BadDynRepr { ptr: &92, vtable: 0 } }.rust
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ invalid use of NULL pointer
+LL | mem::transmute::<_, &dyn Trait>((&92u8, 0usize))
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ invalid use of NULL pointer
error[E0080]: could not evaluate static initializer
- --> $DIR/ub-wide-ptr.rs:163:5
+ --> $DIR/ub-wide-ptr.rs:125:5
|
-LL | DynTransmute { repr2: DynRepr2 { ptr: &92, vtable: &3 } }.rust
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Memory access failed: pointer must be in-bounds at offset N, but is outside bounds of allocation N which has size N
+LL | mem::transmute::<_, &dyn Trait>((&92u8, &3u64))
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Memory access failed: pointer must be in-bounds at offset N, but is outside bounds of allocN which has size N
-error: aborting due to 20 previous errors
+error: aborting due to 24 previous errors
For more information about this error, try `rustc --explain E0080`.
--> $DIR/union-ub.rs:31:1
|
LL | const BAD_BOOL: bool = unsafe { DummyUnion { u8: 42 }.bool};
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered 42, but expected something less or equal to 1
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered 42, but expected a boolean
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
LL | unsafe { std::mem::transmute(()) }
| ^^^^^^^^^^^^^^^^^^^^^^^
| |
- | entering unreachable code
+ | transmuting to uninhabited type
| inside call to `foo` at $DIR/validate_uninhabited_zsts.rs:14:26
...
LL | const FOO: [Empty; 3] = [foo(); 3];
--> $DIR/validate_uninhabited_zsts.rs:17:1
|
LL | const BAR: [Empty; 3] = [unsafe { std::mem::transmute(()) }; 3];
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a value of an uninhabited type
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a value of uninhabited type Empty at [0]
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
-// build-pass (FIXME(62277): could be check-pass?)
+// check-pass
static ASSERT: () = [()][!(std::mem::size_of::<u32>() == 4) as usize];
const fn add(x: i8, y: i8) -> i8 { x+y }
const fn sub(x: i8, y: i8) -> i8 { x-y }
const fn mul(x: i8, y: i8) -> i8 { x*y }
-// div and rem are always checked, so we cannot test their result in case of oveflow.
+// div and rem are always checked, so we cannot test their result in case of overflow.
const fn neg(x: i8) -> i8 { -x }
fn main() {
// run-pass
-#![feature(const_int_conversion)]
-
const REVERSE: u32 = 0x12345678_u32.reverse_bits();
const FROM_BE_BYTES: i32 = i32::from_be_bytes([0x12, 0x34, 0x56, 0x78]);
const FROM_LE_BYTES: i32 = i32::from_le_bytes([0x12, 0x34, 0x56, 0x78]);
LL | const SHL_U8: u8 = unsafe { intrinsics::unchecked_shl(5_u8, 8) };
| ----------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 8 in `unchecked_shl`
+ | overflowing shift by 8 in `unchecked_shl`
|
= note: `#[deny(const_err)]` on by default
LL | const SHL_U16: u16 = unsafe { intrinsics::unchecked_shl(5_u16, 16) };
| ------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 16 in `unchecked_shl`
+ | overflowing shift by 16 in `unchecked_shl`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:19:31
LL | const SHL_U32: u32 = unsafe { intrinsics::unchecked_shl(5_u32, 32) };
| ------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 32 in `unchecked_shl`
+ | overflowing shift by 32 in `unchecked_shl`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:21:31
LL | const SHL_U64: u64 = unsafe { intrinsics::unchecked_shl(5_u64, 64) };
| ------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 64 in `unchecked_shl`
+ | overflowing shift by 64 in `unchecked_shl`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:23:33
LL | const SHL_U128: u128 = unsafe { intrinsics::unchecked_shl(5_u128, 128) };
| --------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 128 in `unchecked_shl`
+ | overflowing shift by 128 in `unchecked_shl`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:28:29
LL | const SHL_I8: i8 = unsafe { intrinsics::unchecked_shl(5_i8, 8) };
| ----------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 8 in `unchecked_shl`
+ | overflowing shift by 8 in `unchecked_shl`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:30:31
LL | const SHL_I16: i16 = unsafe { intrinsics::unchecked_shl(5_16, 16) };
| ------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 16 in `unchecked_shl`
+ | overflowing shift by 16 in `unchecked_shl`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:32:31
LL | const SHL_I32: i32 = unsafe { intrinsics::unchecked_shl(5_i32, 32) };
| ------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 32 in `unchecked_shl`
+ | overflowing shift by 32 in `unchecked_shl`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:34:31
LL | const SHL_I64: i64 = unsafe { intrinsics::unchecked_shl(5_i64, 64) };
| ------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 64 in `unchecked_shl`
+ | overflowing shift by 64 in `unchecked_shl`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:36:33
LL | const SHL_I128: i128 = unsafe { intrinsics::unchecked_shl(5_i128, 128) };
| --------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 128 in `unchecked_shl`
+ | overflowing shift by 128 in `unchecked_shl`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:41:33
LL | const SHL_I8_NEG: i8 = unsafe { intrinsics::unchecked_shl(5_i8, -1) };
| --------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 255 in `unchecked_shl`
+ | overflowing shift by 255 in `unchecked_shl`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:43:35
LL | const SHL_I16_NEG: i16 = unsafe { intrinsics::unchecked_shl(5_16, -1) };
| ----------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 65535 in `unchecked_shl`
+ | overflowing shift by 65535 in `unchecked_shl`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:45:35
LL | const SHL_I32_NEG: i32 = unsafe { intrinsics::unchecked_shl(5_i32, -1) };
| ----------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 4294967295 in `unchecked_shl`
+ | overflowing shift by 4294967295 in `unchecked_shl`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:47:35
LL | const SHL_I64_NEG: i64 = unsafe { intrinsics::unchecked_shl(5_i64, -1) };
| ----------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 18446744073709551615 in `unchecked_shl`
+ | overflowing shift by 18446744073709551615 in `unchecked_shl`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:49:37
LL | const SHL_I128_NEG: i128 = unsafe { intrinsics::unchecked_shl(5_i128, -1) };
| ------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 340282366920938463463374607431768211455 in `unchecked_shl`
+ | overflowing shift by 340282366920938463463374607431768211455 in `unchecked_shl`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:55:40
LL | const SHL_I8_NEG_RANDOM: i8 = unsafe { intrinsics::unchecked_shl(5_i8, -6) };
| ---------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 250 in `unchecked_shl`
+ | overflowing shift by 250 in `unchecked_shl`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:57:42
LL | const SHL_I16_NEG_RANDOM: i16 = unsafe { intrinsics::unchecked_shl(5_16, -13) };
| -----------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 65523 in `unchecked_shl`
+ | overflowing shift by 65523 in `unchecked_shl`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:59:42
LL | const SHL_I32_NEG_RANDOM: i32 = unsafe { intrinsics::unchecked_shl(5_i32, -25) };
| -----------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 4294967271 in `unchecked_shl`
+ | overflowing shift by 4294967271 in `unchecked_shl`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:61:42
LL | const SHL_I64_NEG_RANDOM: i64 = unsafe { intrinsics::unchecked_shl(5_i64, -30) };
| -----------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 18446744073709551586 in `unchecked_shl`
+ | overflowing shift by 18446744073709551586 in `unchecked_shl`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:63:44
LL | const SHL_I128_NEG_RANDOM: i128 = unsafe { intrinsics::unchecked_shl(5_i128, -93) };
| -------------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 340282366920938463463374607431768211363 in `unchecked_shl`
+ | overflowing shift by 340282366920938463463374607431768211363 in `unchecked_shl`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:70:29
LL | const SHR_U8: u8 = unsafe { intrinsics::unchecked_shr(5_u8, 8) };
| ----------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 8 in `unchecked_shr`
+ | overflowing shift by 8 in `unchecked_shr`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:72:31
LL | const SHR_U16: u16 = unsafe { intrinsics::unchecked_shr(5_u16, 16) };
| ------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 16 in `unchecked_shr`
+ | overflowing shift by 16 in `unchecked_shr`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:74:31
LL | const SHR_U32: u32 = unsafe { intrinsics::unchecked_shr(5_u32, 32) };
| ------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 32 in `unchecked_shr`
+ | overflowing shift by 32 in `unchecked_shr`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:76:31
LL | const SHR_U64: u64 = unsafe { intrinsics::unchecked_shr(5_u64, 64) };
| ------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 64 in `unchecked_shr`
+ | overflowing shift by 64 in `unchecked_shr`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:78:33
LL | const SHR_U128: u128 = unsafe { intrinsics::unchecked_shr(5_u128, 128) };
| --------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 128 in `unchecked_shr`
+ | overflowing shift by 128 in `unchecked_shr`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:83:29
LL | const SHR_I8: i8 = unsafe { intrinsics::unchecked_shr(5_i8, 8) };
| ----------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 8 in `unchecked_shr`
+ | overflowing shift by 8 in `unchecked_shr`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:85:31
LL | const SHR_I16: i16 = unsafe { intrinsics::unchecked_shr(5_16, 16) };
| ------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 16 in `unchecked_shr`
+ | overflowing shift by 16 in `unchecked_shr`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:87:31
LL | const SHR_I32: i32 = unsafe { intrinsics::unchecked_shr(5_i32, 32) };
| ------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 32 in `unchecked_shr`
+ | overflowing shift by 32 in `unchecked_shr`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:89:31
LL | const SHR_I64: i64 = unsafe { intrinsics::unchecked_shr(5_i64, 64) };
| ------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 64 in `unchecked_shr`
+ | overflowing shift by 64 in `unchecked_shr`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:91:33
LL | const SHR_I128: i128 = unsafe { intrinsics::unchecked_shr(5_i128, 128) };
| --------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 128 in `unchecked_shr`
+ | overflowing shift by 128 in `unchecked_shr`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:96:33
LL | const SHR_I8_NEG: i8 = unsafe { intrinsics::unchecked_shr(5_i8, -1) };
| --------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 255 in `unchecked_shr`
+ | overflowing shift by 255 in `unchecked_shr`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:98:35
LL | const SHR_I16_NEG: i16 = unsafe { intrinsics::unchecked_shr(5_16, -1) };
| ----------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 65535 in `unchecked_shr`
+ | overflowing shift by 65535 in `unchecked_shr`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:100:35
LL | const SHR_I32_NEG: i32 = unsafe { intrinsics::unchecked_shr(5_i32, -1) };
| ----------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 4294967295 in `unchecked_shr`
+ | overflowing shift by 4294967295 in `unchecked_shr`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:102:35
LL | const SHR_I64_NEG: i64 = unsafe { intrinsics::unchecked_shr(5_i64, -1) };
| ----------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 18446744073709551615 in `unchecked_shr`
+ | overflowing shift by 18446744073709551615 in `unchecked_shr`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:104:37
LL | const SHR_I128_NEG: i128 = unsafe { intrinsics::unchecked_shr(5_i128, -1) };
| ------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 340282366920938463463374607431768211455 in `unchecked_shr`
+ | overflowing shift by 340282366920938463463374607431768211455 in `unchecked_shr`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:110:40
LL | const SHR_I8_NEG_RANDOM: i8 = unsafe { intrinsics::unchecked_shr(5_i8, -6) };
| ---------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 250 in `unchecked_shr`
+ | overflowing shift by 250 in `unchecked_shr`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:112:42
LL | const SHR_I16_NEG_RANDOM: i16 = unsafe { intrinsics::unchecked_shr(5_16, -13) };
| -----------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 65523 in `unchecked_shr`
+ | overflowing shift by 65523 in `unchecked_shr`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:114:42
LL | const SHR_I32_NEG_RANDOM: i32 = unsafe { intrinsics::unchecked_shr(5_i32, -25) };
| -----------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 4294967271 in `unchecked_shr`
+ | overflowing shift by 4294967271 in `unchecked_shr`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:116:42
LL | const SHR_I64_NEG_RANDOM: i64 = unsafe { intrinsics::unchecked_shr(5_i64, -30) };
| -----------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 18446744073709551586 in `unchecked_shr`
+ | overflowing shift by 18446744073709551586 in `unchecked_shr`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:118:44
LL | const SHR_I128_NEG_RANDOM: i128 = unsafe { intrinsics::unchecked_shr(5_i128, -93) };
| -------------------------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflowing shift by 340282366920938463463374607431768211363 in `unchecked_shr`
+ | overflowing shift by 340282366920938463463374607431768211363 in `unchecked_shr`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:123:25
LL | const _: u16 = unsafe { std::intrinsics::unchecked_add(40000u16, 30000) };
| ------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflow executing `unchecked_add`
+ | overflow executing `unchecked_add`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:126:25
LL | const _: u32 = unsafe { std::intrinsics::unchecked_sub(14u32, 22) };
| ------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflow executing `unchecked_sub`
+ | overflow executing `unchecked_sub`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:129:25
LL | const _: u16 = unsafe { std::intrinsics::unchecked_mul(300u16, 250u16) };
| ------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflow executing `unchecked_mul`
+ | overflow executing `unchecked_mul`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:132:25
LL | const _: i32 = unsafe { std::intrinsics::unchecked_div(i32::min_value(), -1) };
| ------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflow executing `unchecked_div`
+ | overflow executing `unchecked_div`
error: any use of this value will cause an error
--> $DIR/const-int-unchecked.rs:137:25
LL | const _: i32 = unsafe { std::intrinsics::unchecked_rem(i32::min_value(), -1) };
| ------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^---
| |
- | Overflow executing `unchecked_rem`
+ | overflow executing `unchecked_rem`
error: aborting due to 47 previous errors
--> $DIR/const-points-to-static.rs:5:1
|
LL | const TEST: &u8 = &MY_STATIC;
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ constant accesses static
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a reference pointing to a static variable
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
--- /dev/null
+// run-pass
+#![feature(const_discriminant)]
+#![allow(dead_code)]
+
+use std::mem::{discriminant, Discriminant};
+
+// `discriminant(const_expr)` may get const-propagated.
+// As we want to check that const-eval is equal to ordinary execution,
+// we wrap `const_expr` with a function which is not const to prevent this.
+#[inline(never)]
+fn identity<T>(x: T) -> T { x }
+
+enum Test {
+ A(u8),
+ B,
+ C { a: u8, b: u8 },
+}
+
+const TEST_A: Discriminant<Test> = discriminant(&Test::A(5));
+const TEST_A_OTHER: Discriminant<Test> = discriminant(&Test::A(17));
+const TEST_B: Discriminant<Test> = discriminant(&Test::B);
+
+enum Void {}
+
+enum SingleVariant {
+ V,
+ Never(Void),
+}
+
+const TEST_V: Discriminant<SingleVariant> = discriminant(&SingleVariant::V);
+
+fn main() {
+ assert_eq!(TEST_A, TEST_A_OTHER);
+ assert_eq!(TEST_A, discriminant(identity(&Test::A(17))));
+ assert_eq!(TEST_B, discriminant(identity(&Test::B)));
+ assert_ne!(TEST_A, TEST_B);
+ assert_ne!(TEST_B, discriminant(identity(&Test::C { a: 42, b: 7 })));
+
+ assert_eq!(TEST_V, discriminant(identity(&SingleVariant::V)));
+}
struct W(u32);
struct A { a: u32 }
+#[allow(redundant_semicolons)]
const fn basics((a,): (u32,)) -> u32 {
// Deferred assignment:
let b: u32;
struct W(f32);
struct A { a: f32 }
+#[allow(redundant_semicolons)]
const fn basics((a,): (f32,)) -> f32 {
// Deferred assignment:
let b: f32;
--- /dev/null
+// check-pass
+#![feature(const_eval_limit)]
+#![const_eval_limit="1000"]
+
+const CONSTANT: usize = limit();
+
+fn main() {
+ assert_eq!(CONSTANT, 1764);
+}
+
+const fn limit() -> usize {
+ let x = 42;
+
+ x * 42
+}
--- /dev/null
+#![feature(const_eval_limit)]
+#![const_eval_limit="18_446_744_073_709_551_615"]
+//~^ ERROR `limit` must be a non-negative integer
+
+const CONSTANT: usize = limit();
+
+fn main() {
+ assert_eq!(CONSTANT, 1764);
+}
+
+const fn limit() -> usize {
+ let x = 42;
+
+ x * 42
+}
--- /dev/null
+error: `limit` must be a non-negative integer
+ --> $DIR/const_eval_limit_overflow.rs:2:1
+ |
+LL | #![const_eval_limit="18_446_744_073_709_551_615"]
+ | ^^^^^^^^^^^^^^^^^^^^----------------------------^
+ | |
+ | not a valid integer
+
+error: aborting due to previous error
+
--- /dev/null
+// ignore-tidy-linelength
+// only-x86_64
+// check-pass
+// NOTE: We always compile this test with -Copt-level=0 because higher opt-levels
+// optimize away the const function
+// compile-flags:-Copt-level=0
+#![feature(const_eval_limit)]
+#![const_eval_limit="2"]
+
+const CONSTANT: usize = limit();
+//~^ WARNING Constant evaluating a complex constant, this might take some time
+
+fn main() {
+ assert_eq!(CONSTANT, 1764);
+}
+
+const fn limit() -> usize { //~ WARNING Constant evaluating a complex constant, this might take some time
+ let x = 42;
+
+ x * 42
+}
--- /dev/null
+warning: Constant evaluating a complex constant, this might take some time
+ --> $DIR/const_eval_limit_reached.rs:17:1
+ |
+LL | / const fn limit() -> usize {
+LL | | let x = 42;
+LL | |
+LL | | x * 42
+LL | | }
+ | |_^
+
+warning: Constant evaluating a complex constant, this might take some time
+ --> $DIR/const_eval_limit_reached.rs:10:1
+ |
+LL | const CONSTANT: usize = limit();
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
--- /dev/null
+#![const_eval_limit="42"]
+//~^ ERROR the `#[const_eval_limit]` attribute is an experimental feature [E0658]
+
+const CONSTANT: usize = limit();
+
+fn main() {
+ assert_eq!(CONSTANT, 1764);
+}
+
+const fn limit() -> usize {
+ let x = 42;
+
+ x * 42
+}
--- /dev/null
+error[E0658]: the `#[const_eval_limit]` attribute is an experimental feature
+ --> $DIR/feature-gate-const_eval_limit.rs:1:1
+ |
+LL | #![const_eval_limit="42"]
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^
+ |
+ = note: see issue #67217 <https://github.com/rust-lang/rust/issues/67217> for more information
+ = help: add `#![feature(const_eval_limit)]` to the crate attributes to enable
+
+error: aborting due to previous error
+
+For more information about this error, try `rustc --explain E0658`.
LL | | let y = ();
LL | | unsafe { Foo { y: &y }.long_live_the_unit }
LL | | };
- | |__^ type validation failed: encountered dangling pointer in final constant
+ | |__^ encountered dangling pointer in final constant
|
= note: `#[deny(const_err)]` on by default
LL | | let x = 42;
LL | | &x
LL | | };
- | |__^ type validation failed: encountered dangling pointer in final constant
+ | |__^ encountered dangling pointer in final constant
|
= note: `#[deny(const_err)]` on by default
... |
LL | | .slice
LL | | };
- | |__^ invalid slice: total size is bigger than largest supported object
+ | |__^ type validation failed: encountered invalid reference metadata: slice is bigger than largest supported object
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
--- /dev/null
+// build-pass
+//
+// (this is deliberately *not* check-pass; I have confirmed that the bug in
+// question does not replicate when one uses `cargo check` alone.)
+
+pub enum Void {}
+
+enum UninhabitedUnivariant {
+ _Variant(Void),
+}
+
+enum UninhabitedMultivariant2 {
+ _Variant(Void),
+ _Warriont(Void),
+}
+
+enum UninhabitedMultivariant3 {
+ _Variant(Void),
+ _Warriont(Void),
+ _Worrynot(Void),
+}
+
+#[repr(C)]
+enum UninhabitedUnivariantC {
+ _Variant(Void),
+}
+
+#[repr(i32)]
+enum UninhabitedUnivariant32 {
+ _Variant(Void),
+}
+
+fn main() {
+ let _seed: UninhabitedUnivariant = None.unwrap();
+ match _seed {
+ UninhabitedUnivariant::_Variant(_x) => {}
+ }
+
+ let _seed: UninhabitedMultivariant2 = None.unwrap();
+ match _seed {
+ UninhabitedMultivariant2::_Variant(_x) => {}
+ UninhabitedMultivariant2::_Warriont(_x) => {}
+ }
+
+ let _seed: UninhabitedMultivariant2 = None.unwrap();
+ match _seed {
+ UninhabitedMultivariant2::_Variant(_x) => {}
+ _ => {}
+ }
+
+ let _seed: UninhabitedMultivariant2 = None.unwrap();
+ match _seed {
+ UninhabitedMultivariant2::_Warriont(_x) => {}
+ _ => {}
+ }
+
+ let _seed: UninhabitedMultivariant3 = None.unwrap();
+ match _seed {
+ UninhabitedMultivariant3::_Variant(_x) => {}
+ UninhabitedMultivariant3::_Warriont(_x) => {}
+ UninhabitedMultivariant3::_Worrynot(_x) => {}
+ }
+
+ let _seed: UninhabitedMultivariant3 = None.unwrap();
+ match _seed {
+ UninhabitedMultivariant3::_Variant(_x) => {}
+ _ => {}
+ }
+
+ let _seed: UninhabitedMultivariant3 = None.unwrap();
+ match _seed {
+ UninhabitedMultivariant3::_Warriont(_x) => {}
+ _ => {}
+ }
+
+ let _seed: UninhabitedMultivariant3 = None.unwrap();
+ match _seed {
+ UninhabitedMultivariant3::_Worrynot(_x) => {}
+ _ => {}
+ }
+
+ let _seed: UninhabitedUnivariantC = None.unwrap();
+ match _seed {
+ UninhabitedUnivariantC::_Variant(_x) => {}
+ }
+
+ let _seed: UninhabitedUnivariant32 = None.unwrap();
+ match _seed {
+ UninhabitedUnivariant32::_Variant(_x) => {}
+ }
+}
LL | #[rustc_allow_const_fn_ptr]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
- = note: see issue #29642 <https://github.com/rust-lang/rust/issues/29642> for more information
= help: add `#![feature(rustc_attrs)]` to the crate attributes to enable
error: aborting due to previous error
LL | my_fn();
| ^^^^^^^
| |
- | tried to call a function with ABI C using caller ABI Rust
+ | calling a function with ABI C using caller ABI Rust
| inside call to `call_rust_fn` at $DIR/abi-mismatch.rs:13:17
...
LL | const VAL: () = call_rust_fn(unsafe { std::mem::transmute(c_fn as extern "C" fn()) });
LL | | unsafe { &*(&FOO as *const _ as *const usize) }
LL | |
LL | | };
- | |__^ constant accesses static
+ | |__^ type validation failed: encountered a reference pointing to a static variable
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
LL | | &FOO
LL | |
LL | | };
- | |__^ constant accesses static
+ | |__^ type validation failed: encountered a reference pointing to a static variable
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
LL | | // Test that `MUTABLE_BEHIND_RAW` is actually immutable, by doing this at const time.
LL | | unsafe {
LL | | *MUTABLE_BEHIND_RAW = 99
- | | ^^^^^^^^^^^^^^^^^^^^^^^^ tried to modify constant memory
+ | | ^^^^^^^^^^^^^^^^^^^^^^^^ writing to alloc1 which is read-only
LL | | }
LL | | };
| |__-
LL | const MUTABLE_BEHIND_RAW: *mut i32 = &UnsafeCell::new(42) as *const _ as *mut _;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-thread 'rustc' panicked at 'no errors encountered even though `delay_span_bug` issued', src/librustc_errors/lib.rs:355:17
+thread 'rustc' panicked at 'no errors encountered even though `delay_span_bug` issued', src/librustc_errors/lib.rs:360:17
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
error: internal compiler error: unexpected panic
LL | intrinsics::ptr_offset_from(self, origin)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| |
- | a memory access tried to interpret some bytes as a pointer
+ | unable to turn bytes into a pointer
| inside call to `std::ptr::const_ptr::<impl *const u8>::offset_from` at $DIR/offset_from_ub.rs:28:14
|
::: $DIR/offset_from_ub.rs:26:1
LL | intrinsics::ptr_offset_from(self, origin)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| |
- | exact_div: 1 cannot be divided by 2 without remainder
+ | exact_div: 1isize cannot be divided by 2isize without remainder
| inside call to `std::ptr::const_ptr::<impl *const u16>::offset_from` at $DIR/offset_from_ub.rs:36:14
|
::: $DIR/offset_from_ub.rs:31:1
LL | intrinsics::ptr_offset_from(self, origin)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| |
- | a memory access tried to interpret some bytes as a pointer
+ | unable to turn bytes into a pointer
| inside call to `std::ptr::const_ptr::<impl *const u8>::offset_from` at $DIR/offset_from_ub.rs:49:14
|
::: $DIR/offset_from_ub.rs:45:1
| ^^^^^^^^^^^^^^^^^^^
|
= note: source type: `usize` (word size)
- = note: target type: `&'static [u8]` (2 * word size)
+ = note: target type: `&[u8]` (2 * word size)
error: could not evaluate constant pattern
--> $DIR/transmute-size-mismatch-before-typeck.rs:10:9
--> $DIR/validate_never_arrays.rs:3:1
|
LL | const _: &[!; 1] = unsafe { &*(1_usize as *const [!; 1]) };
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a value of an uninhabited type at .<deref>
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a value of the never type `!` at .<deref>[0]
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
--> $DIR/validate_never_arrays.rs:6:1
|
LL | const _: &[!] = unsafe { &*(1_usize as *const [!; 1]) };
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a value of an uninhabited type at .<deref>[0]
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a value of the never type `!` at .<deref>[0]
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
--> $DIR/validate_never_arrays.rs:7:1
|
LL | const _: &[!] = unsafe { &*(1_usize as *const [!; 42]) };
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a value of an uninhabited type at .<deref>[0]
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a value of the never type `!` at .<deref>[0]
|
= note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.
-// Check that functions visible to macros through paths with >2 segements are
+// Check that functions visible to macros through paths with >2 segments are
// considered reachable
// aux-build:field-method-macro.rs
LL | }
| - the item list ends here
-error: expected `[`, found `#`
+error: expected one of `!` or `[`, found `#`
--> $DIR/issue-40006.rs:19:17
|
LL | fn xxx() { ### }
- | ^ expected `[`
+ | ^ expected one of `!` or `[`
error: expected one of `!` or `::`, found `=`
--> $DIR/issue-40006.rs:22:7
// Test that macro-expanded non-inline modules behave correctly
macro_rules! mod_decl {
- ($i:ident) => { mod $i; } //~ ERROR Cannot declare a non-inline module inside a block
+ ($i:ident) => {
+ mod $i; //~ ERROR Cannot declare a non-inline module inside a block
+ };
}
mod macro_expanded_mod_helper {
error: Cannot declare a non-inline module inside a block unless it has a path attribute
- --> $DIR/macro-expanded-mod.rs:4:25
+ --> $DIR/macro-expanded-mod.rs:5:9
|
-LL | ($i:ident) => { mod $i; }
- | ^^
+LL | mod $i;
+ | ^^^^^^^
...
LL | mod_decl!(foo);
| --------------- in this macro invocation
error: Cannot declare a non-inline module inside a block unless it has a path attribute
- --> $DIR/non-inline-mod-restriction.rs:4:9
+ --> $DIR/non-inline-mod-restriction.rs:4:5
|
LL | mod foo;
- | ^^^
+ | ^^^^^^^^
error: aborting due to previous error
#![feature(generators, generator_trait, untagged_unions)]
#![feature(move_ref_pattern)]
+#![feature(bindings_after_at)]
#![allow(unused_assignments)]
#![allow(unused_variables)]
}
}
+fn bindings_after_at_dynamic_init_move(a: &Allocator, c: bool) {
+ let foo = if c { Some(a.alloc()) } else { None };
+ let _x;
+
+ if let bar @ Some(_) = foo {
+ _x = bar;
+ }
+}
+
+fn bindings_after_at_dynamic_init_ref(a: &Allocator, c: bool) {
+ let foo = if c { Some(a.alloc()) } else { None };
+ let _x;
+
+ if let bar @ Some(_baz) = &foo {
+ _x = bar;
+ }
+}
+
+fn bindings_after_at_dynamic_drop_move(a: &Allocator, c: bool) {
+ let foo = if c { Some(a.alloc()) } else { None };
+
+ if let bar @ Some(_) = foo {
+ bar
+ } else {
+ None
+ };
+}
+
+fn bindings_after_at_dynamic_drop_ref(a: &Allocator, c: bool) {
+ let foo = if c { Some(a.alloc()) } else { None };
+
+ if let bar @ Some(_baz) = &foo {
+ bar
+ } else {
+ &None
+ };
+}
+
fn move_ref_pattern(a: &Allocator) {
let mut tup = (a.alloc(), a.alloc(), a.alloc(), a.alloc());
let (ref _a, ref mut _b, _c, mut _d) = tup;
run_test(|a| panic_after_init_temp(a));
run_test(|a| panic_after_init_by_loop(a));
+ run_test(|a| bindings_after_at_dynamic_init_move(a, true));
+ run_test(|a| bindings_after_at_dynamic_init_move(a, false));
+ run_test(|a| bindings_after_at_dynamic_init_ref(a, true));
+ run_test(|a| bindings_after_at_dynamic_init_ref(a, false));
+ run_test(|a| bindings_after_at_dynamic_drop_move(a, true));
+ run_test(|a| bindings_after_at_dynamic_drop_move(a, false));
+ run_test(|a| bindings_after_at_dynamic_drop_ref(a, true));
+ run_test(|a| bindings_after_at_dynamic_drop_ref(a, false));
+
run_test_nopanic(|a| union1(a));
}
= note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info)
help: you can escape reserved keywords to use them as identifiers
|
-LL | () => (pub fn r#async () { })
- | ^^^^^^^
+LL | () => (pub fn r#async() {})
+ | ^^^^^^^
error: aborting due to previous error
| ^^^^^ no rules expected this token in macro call
error: macro expansion ends with an incomplete expression: expected one of `move`, `|`, or `||`
- --> <::edition_kw_macro_2015::passes_ident macros>:1:22
+ --> $DIR/auxiliary/edition-kw-macro-2015.rs:27:23
|
-LL | ($ i : ident) => ($ i)
- | ^ expected one of `move`, `|`, or `||`
+LL | ($i: ident) => ($i)
+ | ^ expected one of `move`, `|`, or `||`
|
::: $DIR/edition-keywords-2018-2015-parsing.rs:16:8
|
= note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info)
help: you can escape reserved keywords to use them as identifiers
|
-LL | () => (pub fn r#async () { })
- | ^^^^^^^
+LL | () => (pub fn r#async() {})
+ | ^^^^^^^
error: aborting due to previous error
| ^^^^^ no rules expected this token in macro call
error: macro expansion ends with an incomplete expression: expected one of `move`, `|`, or `||`
- --> <::edition_kw_macro_2018::passes_ident macros>:1:22
+ --> $DIR/auxiliary/edition-kw-macro-2018.rs:27:23
|
-LL | ($ i : ident) => ($ i)
- | ^ expected one of `move`, `|`, or `||`
+LL | ($i: ident) => ($i)
+ | ^ expected one of `move`, `|`, or `||`
|
::: $DIR/edition-keywords-2018-2018-parsing.rs:16:8
|
|
LL | #![deny(overflowing_literals)]
| ^^^^^^^^^^^^^^^^^^^^
+ = note: the literal `223` does not fit into the type `i8` whose range is `-128..=127`
error: literal out of range for `i16`
--> $DIR/enum-discrim-too-small2.rs:15:12
|
LL | Ci16 = 55555,
| ^^^^^
+ |
+ = note: the literal `55555` does not fit into the type `i16` whose range is `-32768..=32767`
error: literal out of range for `i32`
--> $DIR/enum-discrim-too-small2.rs:22:12
|
LL | Ci32 = 3_000_000_000,
| ^^^^^^^^^^^^^
+ |
+ = note: the literal `3_000_000_000` does not fit into the type `i32` whose range is `-2147483648..=2147483647`
error: literal out of range for `i64`
--> $DIR/enum-discrim-too-small2.rs:29:12
|
LL | Ci64 = 9223372036854775809,
| ^^^^^^^^^^^^^^^^^^^
+ |
+ = note: the literal `9223372036854775809` does not fit into the type `i64` whose range is `-9223372036854775808..=9223372036854775807`
error: aborting due to 4 previous errors
| ^
| |
| not allowed in type signatures
- | help: replace `_` with the correct type: `&'static str`
+ | help: replace `_` with the correct type: `&str`
error: aborting due to 2 previous errors
error[E0197]: inherent impls cannot be unsafe
- --> $DIR/E0197.rs:3:1
+ --> $DIR/E0197.rs:3:13
|
LL | unsafe impl Foo { }
- | ------^^^^^^^^^^^^^
+ | ------ ^^^ inherent impl for this type
| |
| unsafe because of this
error[E0198]: negative impls cannot be unsafe
- --> $DIR/E0198.rs:5:1
+ --> $DIR/E0198.rs:5:13
|
LL | unsafe impl !Send for Foo { }
- | ------^^^^^^^^^^^^^^^^^^^^^^^
- | |
+ | ------ -^^^^
+ | | |
+ | | negative because of this
| unsafe because of this
error: aborting due to previous error
LL | const VALUE: u8 = unsafe { *REG_ADDR };
| ---------------------------^^^^^^^^^---
| |
- | a memory access tried to interpret some bytes as a pointer
+ | unable to turn bytes into a pointer
|
= note: `#[deny(const_err)]` on by default
error[E0583]: file not found for module `module_that_doesnt_exist`
- --> $DIR/E0583.rs:1:5
+ --> $DIR/E0583.rs:1:1
|
LL | mod module_that_doesnt_exist;
- | ^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
- = help: name the file either module_that_doesnt_exist.rs or module_that_doesnt_exist/mod.rs inside the directory "$DIR"
+ = help: to create the module `module_that_doesnt_exist`, create file "$DIR/module_that_doesnt_exist.rs"
error: aborting due to previous error
fn main() {
let foo = inner::Foo;
- foo.method(); //~ ERROR method `method` is private [E0624]
+ foo.method(); //~ ERROR associated function `method` is private [E0624]
}
-error[E0624]: method `method` is private
+error[E0624]: associated function `method` is private
--> $DIR/E0624.rs:11:9
|
LL | foo.method();
error: aborting due to 2 previous errors
+For more information about this error, try `rustc --explain E0719`.
- change the original fn declaration to match the expected signature,
and do the cast in the fn body (the preferred option)
-- cast the fn item fo a fn pointer before calling transmute, as shown here:
+- cast the fn item of a fn pointer before calling transmute, as shown here:
```
let f: extern "C" fn(*mut i32) = transmute(foo as extern "C" fn(_));
= note: see issue #38412 <https://github.com/rust-lang/rust/issues/38412> for more information
= help: add `#![feature(unstable_undeclared)]` to the crate attributes to enable
-error[E0624]: method `pub_crate` is private
+error[E0624]: associated function `pub_crate` is private
--> $DIR/explore-issue-38412.rs:50:7
|
LL | r.pub_crate();
| ^^^^^^^^^
-error[E0624]: method `pub_mod` is private
+error[E0624]: associated function `pub_mod` is private
--> $DIR/explore-issue-38412.rs:51:7
|
LL | r.pub_mod();
| ^^^^^^^
-error[E0624]: method `private` is private
+error[E0624]: associated function `private` is private
--> $DIR/explore-issue-38412.rs:52:7
|
LL | r.private();
= note: see issue #38412 <https://github.com/rust-lang/rust/issues/38412> for more information
= help: add `#![feature(unstable_undeclared)]` to the crate attributes to enable
-error[E0624]: method `pub_crate` is private
+error[E0624]: associated function `pub_crate` is private
--> $DIR/explore-issue-38412.rs:63:7
|
LL | t.pub_crate();
| ^^^^^^^^^
-error[E0624]: method `pub_mod` is private
+error[E0624]: associated function `pub_mod` is private
--> $DIR/explore-issue-38412.rs:64:7
|
LL | t.pub_mod();
| ^^^^^^^
-error[E0624]: method `private` is private
+error[E0624]: associated function `private` is private
--> $DIR/explore-issue-38412.rs:65:7
|
LL | t.private();
+++ /dev/null
-#[doc(spotlight)] //~ ERROR: `#[doc(spotlight)]` is experimental
-trait SomeTrait {}
-
-fn main() {}
+++ /dev/null
-error[E0658]: `#[doc(spotlight)]` is experimental
- --> $DIR/feature-gate-doc_spotlight.rs:1:1
- |
-LL | #[doc(spotlight)]
- | ^^^^^^^^^^^^^^^^^
- |
- = note: see issue #45040 <https://github.com/rust-lang/rust/issues/45040> for more information
- = help: add `#![feature(doc_spotlight)]` to the crate attributes to enable
-
-error: aborting due to previous error
-
-For more information about this error, try `rustc --explain E0658`.
+++ /dev/null
-#![deny(deprecated)]
-#![feature(no_debug)]
-
-#[no_debug] //~ ERROR use of deprecated attribute `no_debug`
-fn main() {}
+++ /dev/null
-error: use of deprecated attribute `no_debug`: the `#[no_debug]` attribute was an experimental feature that has been deprecated due to lack of demand. See https://github.com/rust-lang/rust/issues/29721
- --> $DIR/feature-gate-no-debug-2.rs:4:1
- |
-LL | #[no_debug]
- | ^^^^^^^^^^^ help: remove this attribute
- |
-note: the lint level is defined here
- --> $DIR/feature-gate-no-debug-2.rs:1:9
- |
-LL | #![deny(deprecated)]
- | ^^^^^^^^^^
-
-error: aborting due to previous error
-
+++ /dev/null
-#![allow(deprecated)]
-
-#[no_debug] //~ ERROR the `#[no_debug]` attribute was
-fn main() {}
+++ /dev/null
-error[E0658]: the `#[no_debug]` attribute was an experimental feature that has been deprecated due to lack of demand
- --> $DIR/feature-gate-no-debug.rs:3:1
- |
-LL | #[no_debug]
- | ^^^^^^^^^^^
- |
- = note: see issue #29721 <https://github.com/rust-lang/rust/issues/29721> for more information
- = help: add `#![feature(no_debug)]` to the crate attributes to enable
-
-error: aborting due to previous error
-
-For more information about this error, try `rustc --explain E0658`.
= help: add `#![feature(optin_builtin_traits)]` to the crate attributes to enable
error[E0658]: negative trait bounds are not yet fully implemented; use marker types for now
- --> $DIR/feature-gate-optin-builtin-traits.rs:9:1
+ --> $DIR/feature-gate-optin-builtin-traits.rs:9:6
|
LL | impl !AutoDummyTrait for DummyStruct {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^
|
= note: see issue #13231 <https://github.com/rust-lang/rust/issues/13231> for more information
= help: add `#![feature(optin_builtin_traits)]` to the crate attributes to enable
LL | #[rustc_variance]
| ^^^^^^^^^^^^^^^^^
|
- = note: see issue #29642 <https://github.com/rust-lang/rust/issues/29642> for more information
= help: add `#![feature(rustc_attrs)]` to the crate attributes to enable
error[E0658]: the `#[rustc_error]` attribute is just used for rustc unit tests and will never be stable
LL | #[rustc_error]
| ^^^^^^^^^^^^^^
|
- = note: see issue #29642 <https://github.com/rust-lang/rust/issues/29642> for more information
= help: add `#![feature(rustc_attrs)]` to the crate attributes to enable
error[E0658]: the `#[rustc_nonnull_optimization_guaranteed]` attribute is just used to enable niche optimizations in libcore and will never be stable
LL | #[rustc_nonnull_optimization_guaranteed]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
- = note: see issue #29642 <https://github.com/rust-lang/rust/issues/29642> for more information
= help: add `#![feature(rustc_attrs)]` to the crate attributes to enable
error: aborting due to 3 previous errors
LL | #[rustc::unknown]
| ^^^^^
|
- = note: see issue #29642 <https://github.com/rust-lang/rust/issues/29642> for more information
= help: add `#![feature(rustc_attrs)]` to the crate attributes to enable
error: expected attribute, found macro `rustc::unknown`
LL | #[unknown::rustc]
| ^^^^^
|
- = note: see issue #29642 <https://github.com/rust-lang/rust/issues/29642> for more information
= help: add `#![feature(rustc_attrs)]` to the crate attributes to enable
error: expected attribute, found macro `unknown::rustc`
LL | #[rustc_unknown]
| ^^^^^^^^^^^^^
|
- = note: see issue #29642 <https://github.com/rust-lang/rust/issues/29642> for more information
= help: add `#![feature(rustc_attrs)]` to the crate attributes to enable
error: cannot find attribute `rustc_unknown` in this scope
LL | #[rustc_dummy]
| ^^^^^^^^^^^^^^
|
- = note: see issue #29642 <https://github.com/rust-lang/rust/issues/29642> for more information
= help: add `#![feature(rustc_attrs)]` to the crate attributes to enable
error: aborting due to 7 previous errors
fn ice() {
hof(|c| match c {
- A::new() => (), //~ ERROR expected tuple struct or tuple variant, found method
+ A::new() => (), //~ ERROR expected tuple struct or tuple variant, found associated function
_ => ()
})
}
-error[E0164]: expected tuple struct or tuple variant, found method `A::new`
+error[E0164]: expected tuple struct or tuple variant, found associated function `A::new`
--> $DIR/fn-in-pat.rs:11:9
|
LL | A::new() => (),
+++ /dev/null
-#![feature(generator_trait)]
-#![feature(generators)]
-
-// Test that we cannot create a generator that returns a value of its
-// own type.
-
-use std::ops::Generator;
-
-pub fn want_cyclic_generator_return<T>(_: T)
- where T: Generator<Yield = (), Return = T>
-{
-}
-
-fn supply_cyclic_generator_return() {
- want_cyclic_generator_return(|| {
- //~^ ERROR type mismatch
- if false { yield None.unwrap(); }
- None.unwrap()
- })
-}
-
-pub fn want_cyclic_generator_yield<T>(_: T)
- where T: Generator<Yield = T, Return = ()>
-{
-}
-
-fn supply_cyclic_generator_yield() {
- want_cyclic_generator_yield(|| {
- //~^ ERROR type mismatch
- if false { yield None.unwrap(); }
- None.unwrap()
- })
-}
-
-fn main() { }
+++ /dev/null
-error[E0271]: type mismatch resolving `<[generator@$DIR/generator-yielding-or-returning-itself.rs:15:34: 19:6 _] as std::ops::Generator>::Return == [generator@$DIR/generator-yielding-or-returning-itself.rs:15:34: 19:6 _]`
- --> $DIR/generator-yielding-or-returning-itself.rs:15:5
- |
-LL | pub fn want_cyclic_generator_return<T>(_: T)
- | ----------------------------
-LL | where T: Generator<Yield = (), Return = T>
- | ---------- required by this bound in `want_cyclic_generator_return`
-...
-LL | want_cyclic_generator_return(|| {
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ cyclic type of infinite size
- |
- = note: closures cannot capture themselves or take themselves as argument;
- this error may be the result of a recent compiler bug-fix,
- see issue #46062 <https://github.com/rust-lang/rust/issues/46062>
- for more information
-
-error[E0271]: type mismatch resolving `<[generator@$DIR/generator-yielding-or-returning-itself.rs:28:33: 32:6 _] as std::ops::Generator>::Yield == [generator@$DIR/generator-yielding-or-returning-itself.rs:28:33: 32:6 _]`
- --> $DIR/generator-yielding-or-returning-itself.rs:28:5
- |
-LL | pub fn want_cyclic_generator_yield<T>(_: T)
- | ---------------------------
-LL | where T: Generator<Yield = T, Return = ()>
- | --------- required by this bound in `want_cyclic_generator_yield`
-...
-LL | want_cyclic_generator_yield(|| {
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ cyclic type of infinite size
- |
- = note: closures cannot capture themselves or take themselves as argument;
- this error may be the result of a recent compiler bug-fix,
- see issue #46062 <https://github.com/rust-lang/rust/issues/46062>
- for more information
-
-error: aborting due to 2 previous errors
-
-For more information about this error, try `rustc --explain E0271`.
--- /dev/null
+//! Tests that generator discriminant sizes and ranges are chosen optimally and that they are
+//! reflected in the output of `mem::discriminant`.
+
+// run-pass
+
+#![feature(generators, generator_trait, core_intrinsics)]
+
+use std::intrinsics::discriminant_value;
+use std::marker::Unpin;
+use std::mem::size_of_val;
+use std::{cmp, ops::*};
+
+macro_rules! yield25 {
+ ($e:expr) => {
+ yield $e;
+ yield $e;
+ yield $e;
+ yield $e;
+ yield $e;
+
+ yield $e;
+ yield $e;
+ yield $e;
+ yield $e;
+ yield $e;
+
+ yield $e;
+ yield $e;
+ yield $e;
+ yield $e;
+ yield $e;
+
+ yield $e;
+ yield $e;
+ yield $e;
+ yield $e;
+ yield $e;
+
+ yield $e;
+ yield $e;
+ yield $e;
+ yield $e;
+ yield $e;
+ };
+}
+
+/// Yields 250 times.
+macro_rules! yield250 {
+ () => {
+ yield250!(())
+ };
+
+ ($e:expr) => {
+ yield25!($e);
+ yield25!($e);
+ yield25!($e);
+ yield25!($e);
+ yield25!($e);
+
+ yield25!($e);
+ yield25!($e);
+ yield25!($e);
+ yield25!($e);
+ yield25!($e);
+ };
+}
+
+fn cycle(gen: impl Generator<()> + Unpin, expected_max_discr: u64) {
+ let mut gen = Box::pin(gen);
+ let mut max_discr = 0;
+ loop {
+ max_discr = cmp::max(max_discr, discriminant_value(gen.as_mut().get_mut()));
+ match gen.as_mut().resume(()) {
+ GeneratorState::Yielded(_) => {}
+ GeneratorState::Complete(_) => {
+ assert_eq!(max_discr, expected_max_discr);
+ return;
+ }
+ }
+ }
+}
+
+fn main() {
+ // Has only one invalid discr. value.
+ let gen_u8_tiny_niche = || {
+ || {
+ // 3 reserved variants
+
+ yield250!(); // 253 variants
+
+ yield; // 254
+ yield; // 255
+ }
+ };
+
+ // Uses all values in the u8 discriminant.
+ let gen_u8_full = || {
+ || {
+ // 3 reserved variants
+
+ yield250!(); // 253 variants
+
+ yield; // 254
+ yield; // 255
+ yield; // 256
+ }
+ };
+
+ // Barely needs a u16 discriminant.
+ let gen_u16 = || {
+ || {
+ // 3 reserved variants
+
+ yield250!(); // 253 variants
+
+ yield; // 254
+ yield; // 255
+ yield; // 256
+ yield; // 257
+ }
+ };
+
+ assert_eq!(size_of_val(&gen_u8_tiny_niche()), 1);
+ assert_eq!(size_of_val(&Some(gen_u8_tiny_niche())), 1); // uses niche
+ assert_eq!(size_of_val(&Some(Some(gen_u8_tiny_niche()))), 2); // cannot use niche anymore
+ assert_eq!(size_of_val(&gen_u8_full()), 1);
+ assert_eq!(size_of_val(&Some(gen_u8_full())), 2); // cannot use niche
+ assert_eq!(size_of_val(&gen_u16()), 2);
+ assert_eq!(size_of_val(&Some(gen_u16())), 2); // uses niche
+
+ cycle(gen_u8_tiny_niche(), 254);
+ cycle(gen_u8_full(), 255);
+ cycle(gen_u16(), 256);
+}
--- /dev/null
+#![feature(generator_trait)]
+#![feature(generators)]
+
+// Test that we cannot create a generator that returns a value of its
+// own type.
+
+use std::ops::Generator;
+
+pub fn want_cyclic_generator_return<T>(_: T)
+ where T: Generator<Yield = (), Return = T>
+{
+}
+
+fn supply_cyclic_generator_return() {
+ want_cyclic_generator_return(|| {
+ //~^ ERROR type mismatch
+ if false { yield None.unwrap(); }
+ None.unwrap()
+ })
+}
+
+pub fn want_cyclic_generator_yield<T>(_: T)
+ where T: Generator<Yield = T, Return = ()>
+{
+}
+
+fn supply_cyclic_generator_yield() {
+ want_cyclic_generator_yield(|| {
+ //~^ ERROR type mismatch
+ if false { yield None.unwrap(); }
+ None.unwrap()
+ })
+}
+
+fn main() { }
--- /dev/null
+error[E0271]: type mismatch resolving `<[generator@$DIR/generator-yielding-or-returning-itself.rs:15:34: 19:6 _] as std::ops::Generator>::Return == [generator@$DIR/generator-yielding-or-returning-itself.rs:15:34: 19:6 _]`
+ --> $DIR/generator-yielding-or-returning-itself.rs:15:5
+ |
+LL | pub fn want_cyclic_generator_return<T>(_: T)
+ | ----------------------------
+LL | where T: Generator<Yield = (), Return = T>
+ | ---------- required by this bound in `want_cyclic_generator_return`
+...
+LL | want_cyclic_generator_return(|| {
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ cyclic type of infinite size
+ |
+ = note: closures cannot capture themselves or take themselves as argument;
+ this error may be the result of a recent compiler bug-fix,
+ see issue #46062 <https://github.com/rust-lang/rust/issues/46062>
+ for more information
+
+error[E0271]: type mismatch resolving `<[generator@$DIR/generator-yielding-or-returning-itself.rs:28:33: 32:6 _] as std::ops::Generator>::Yield == [generator@$DIR/generator-yielding-or-returning-itself.rs:28:33: 32:6 _]`
+ --> $DIR/generator-yielding-or-returning-itself.rs:28:5
+ |
+LL | pub fn want_cyclic_generator_yield<T>(_: T)
+ | ---------------------------
+LL | where T: Generator<Yield = T, Return = ()>
+ | --------- required by this bound in `want_cyclic_generator_yield`
+...
+LL | want_cyclic_generator_yield(|| {
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ cyclic type of infinite size
+ |
+ = note: closures cannot capture themselves or take themselves as argument;
+ this error may be the result of a recent compiler bug-fix,
+ see issue #46062 <https://github.com/rust-lang/rust/issues/46062>
+ for more information
+
+error: aborting due to 2 previous errors
+
+For more information about this error, try `rustc --explain E0271`.
--- /dev/null
+// Regression test for #64620
+
+#![feature(generators)]
+
+pub fn crash(arr: [usize; 1]) {
+ yield arr[0]; //~ ERROR: yield expression outside of generator literal
+}
+
+fn main() {}
--- /dev/null
+error[E0627]: yield expression outside of generator literal
+ --> $DIR/issue-64620-yield-array-element.rs:6:5
+ |
+LL | yield arr[0];
+ | ^^^^^^^^^^^^
+
+error: aborting due to previous error
+
+For more information about this error, try `rustc --explain E0627`.
use std::ops::{Generator, GeneratorState};
+fn mkstr(my_name: String, my_mood: String) -> String {
+ format!("{} is {}", my_name.trim(), my_mood.trim())
+}
+
fn my_scenario() -> impl Generator<String, Yield = &'static str, Return = String> {
|_arg: String| {
let my_name = yield "What is your name?";
let my_mood = yield "How are you feeling?";
- format!("{} is {}", my_name.trim(), my_mood.trim())
+ mkstr(my_name, my_mood)
}
}
--- /dev/null
+#![feature(generators)]
+
+// run-pass
+
+use std::mem::size_of_val;
+
+fn main() {
+ // Generator taking a `Copy`able resume arg.
+ let gen_copy = |mut x: usize| {
+ loop {
+ drop(x);
+ x = yield;
+ }
+ };
+
+ // Generator taking a non-`Copy` resume arg.
+ let gen_move = |mut x: Box<usize>| {
+ loop {
+ drop(x);
+ x = yield;
+ }
+ };
+
+    // Neither of these generators has the resume arg live across the `yield`, so they should be
+    // 1 byte in size (only storing the discriminant)
+ assert_eq!(size_of_val(&gen_copy), 1);
+ assert_eq!(size_of_val(&gen_move), 1);
+}
}
}
-fn overlap_x_and_y() -> impl Generator<Yield = (), Return = ()>{
+fn overlap_x_and_y() -> impl Generator<Yield = (), Return = ()> {
static || {
let x = Foo([0; FOO_SIZE]);
yield;
}
fn main() {
- assert_eq!(1028, std::mem::size_of_val(&move_before_yield()));
- assert_eq!(1032, std::mem::size_of_val(&move_before_yield_with_noop()));
- assert_eq!(2056, std::mem::size_of_val(&overlap_move_points()));
- assert_eq!(1032, std::mem::size_of_val(&overlap_x_and_y()));
+ assert_eq!(1025, std::mem::size_of_val(&move_before_yield()));
+ assert_eq!(1026, std::mem::size_of_val(&move_before_yield_with_noop()));
+ assert_eq!(2051, std::mem::size_of_val(&overlap_move_points()));
+ assert_eq!(1026, std::mem::size_of_val(&overlap_x_and_y()));
}
error: aborting due to previous error
+For more information about this error, try `rustc --explain E0628`.
trait Foo {
type Bar<,>;
- //~^ ERROR expected one of `>`, `const`, identifier, or lifetime, found `,`
+ //~^ ERROR expected one of `#`, `>`, `const`, identifier, or lifetime, found `,`
}
fn main() {}
-error: expected one of `>`, `const`, identifier, or lifetime, found `,`
+error: expected one of `#`, `>`, `const`, identifier, or lifetime, found `,`
--> $DIR/empty_generics.rs:5:14
|
LL | trait Foo {
| - while parsing this item list starting here
LL | type Bar<,>;
- | ^ expected one of `>`, `const`, identifier, or lifetime
+ | ^ expected one of `#`, `>`, `const`, identifier, or lifetime
LL |
LL | }
| - the item list ends here
--- /dev/null
+// Ensure macro metavariables are compared with legacy hygiene
+
+#![feature(rustc_attrs)]
+
+// run-pass
+
+macro_rules! make_mac {
+ ( $($dollar:tt $arg:ident),+ ) => {
+ macro_rules! mac {
+ ( $($dollar $arg : ident),+ ) => {
+ $( $dollar $arg )-+
+ }
+ }
+ }
+}
+
+macro_rules! show_hygiene {
+ ( $dollar:tt $arg:ident ) => {
+ make_mac!($dollar $arg, $dollar arg);
+ }
+}
+
+show_hygiene!( $arg );
+
+fn main() {
+ let x = 5;
+ let y = 3;
+ assert_eq!(2, mac!(x, y));
+}
--- /dev/null
+// Ensure macro metavariables are not compared without removing transparent
+// marks.
+
+#![feature(rustc_attrs)]
+
+// run-pass
+
+#[rustc_macro_transparency = "transparent"]
+macro_rules! k {
+ ($($s:tt)*) => {
+ macro_rules! m {
+ ($y:tt) => {
+ $($s)*
+ }
+ }
+ }
+}
+
+k!(1 + $y);
+
+fn main() {
+ let x = 2;
+ assert_eq!(3, m!(x));
+}
--- /dev/null
+#![feature(stmt_expr_attributes)]
+
+fn main() {
+ let _ = #[cfg(FALSE)] if true {}; //~ ERROR removing an expression
+}
--- /dev/null
+error: removing an expression is not supported in this position
+ --> $DIR/bad-cfg.rs:4:13
+ |
+LL | let _ = #[cfg(FALSE)] if true {};
+ | ^^^^^^^^^^^^^
+
+error: aborting due to previous error
+
--- /dev/null
+// check-pass
+
+fn main() {
+ #[allow(unused_variables)]
+ if true {
+ let a = 1;
+ } else if false {
+ let b = 1;
+ } else {
+ let c = 1;
+ }
+}
--- /dev/null
+// check-pass
+
+#[cfg(FALSE)]
+fn simple_attr() {
+ #[attr] if true {}
+ #[allow_warnings] if true {}
+}
+
+#[cfg(FALSE)]
+fn if_else_chain() {
+ #[first_attr] if true {
+ } else if false {
+ } else {
+ }
+}
+
+#[cfg(FALSE)]
+fn if_let() {
+ #[attr] if let Some(_) = Some(true) {}
+}
+
+fn bar() {
+ #[cfg(FALSE)]
+ if true {
+ let x: () = true; // Should not error due to the #[cfg(FALSE)]
+ }
+
+ #[cfg_attr(not(unset_attr), cfg(FALSE))]
+ if true {
+ let a: () = true; // Should not error due to the applied #[cfg(FALSE)]
+ }
+}
+
+macro_rules! custom_macro {
+ ($expr:expr) => {}
+}
+
+custom_macro! {
+ #[attr] if true {}
+}
+
+
+fn main() {}
--- /dev/null
+#[cfg(FALSE)]
+fn if_else_parse_error() {
+ if true {
+ } #[attr] else if false { //~ ERROR expected
+ }
+}
+
+#[cfg(FALSE)]
+fn else_attr_ifparse_error() {
+ if true {
+ } else #[attr] if false { //~ ERROR outer attributes are not allowed
+ } else {
+ }
+}
+
+#[cfg(FALSE)]
+fn else_parse_error() {
+ if true {
+ } else if false {
+ } #[attr] else { //~ ERROR expected
+ }
+}
+
+fn main() {
+}
--- /dev/null
+error: expected expression, found keyword `else`
+ --> $DIR/else-attrs.rs:4:15
+ |
+LL | } #[attr] else if false {
+ | ^^^^ expected expression
+
+error: outer attributes are not allowed on `if` and `else` branches
+ --> $DIR/else-attrs.rs:11:12
+ |
+LL | } else #[attr] if false {
+ | _______----_^^^^^^^_-
+ | | | |
+ | | | help: remove the attributes
+ | | the branch belongs to this `else`
+LL | | } else {
+LL | | }
+ | |_____- the attributes are attached to this branch
+
+error: expected expression, found keyword `else`
+ --> $DIR/else-attrs.rs:20:15
+ |
+LL | } #[attr] else {
+ | ^^^^ expected expression
+
+error: aborting due to 3 previous errors
+
--- /dev/null
+// run-pass
+
+fn main() {
+ let x = 1;
+
+ #[cfg(FALSE)]
+ if false {
+ x = 2;
+ } else if true {
+ x = 3;
+ } else {
+ x = 4;
+ }
+ assert_eq!(x, 1);
+}
--- /dev/null
+// check-pass
+
+#![feature(let_chains)] //~ WARN the feature `let_chains` is incomplete
+
+#[cfg(FALSE)]
+fn foo() {
+ #[attr]
+ if let Some(_) = Some(true) && let Ok(_) = Ok(1) {
+ } else if let Some(false) = Some(true) {
+ }
+}
+
+fn main() {}
--- /dev/null
+warning: the feature `let_chains` is incomplete and may cause the compiler to crash
+ --> $DIR/let-chains-attr.rs:3:12
+ |
+LL | #![feature(let_chains)]
+ | ^^^^^^^^^^
+ |
+ = note: `#[warn(incomplete_features)]` on by default
+
--- /dev/null
+fn main() {
+ let _ = #[deny(warnings)] if true { //~ ERROR attributes on expressions
+ } else if false {
+ } else {
+ };
+}
--- /dev/null
+error[E0658]: attributes on expressions are experimental
+ --> $DIR/stmt-expr-gated.rs:2:13
+ |
+LL | let _ = #[deny(warnings)] if true {
+ | ^^^^^^^^^^^^^^^^^
+ |
+ = note: see issue #15701 <https://github.com/rust-lang/rust/issues/15701> for more information
+ = help: add `#![feature(stmt_expr_attributes)]` to the crate attributes to enable
+
+error: aborting due to previous error
+
+For more information about this error, try `rustc --explain E0658`.
--- /dev/null
+// Regression test for #57200
+// FIXME: The error is a temporary hack; we'll revisit this at some point.
+
+#![feature(impl_trait_in_bindings)]
+#![allow(incomplete_features)]
+
+fn bug<'a, 'b, T>()
+where
+ 'a: 'b,
+{
+ let f: impl Fn(&'a T) -> &'b T = |x| x;
+ //~^ ERROR: lifetimes in impl Trait types in bindings are not currently supported
+}
+
+fn main() {}
--- /dev/null
+error: lifetimes in impl Trait types in bindings are not currently supported
+ --> $DIR/issue-57200.rs:11:12
+ |
+LL | let f: impl Fn(&'a T) -> &'b T = |x| x;
+ | ^^^^^^^^^^^^^^^^^^^^^^^
+
+error: aborting due to previous error
+
--- /dev/null
+// Regression test for #57201
+// FIXME: The error is a temporary hack; we'll revisit this at some point.
+
+#![feature(impl_trait_in_bindings)]
+#![allow(incomplete_features)]
+
+fn bug<'a, 'b, T>()
+where
+ 'a: 'b,
+{
+ let f: &impl Fn(&'a T) -> &'b T = &|x| x;
+ //~^ ERROR: lifetimes in impl Trait types in bindings are not currently supported
+}
+
+fn main() {}
--- /dev/null
+error: lifetimes in impl Trait types in bindings are not currently supported
+ --> $DIR/issue-57201.rs:11:13
+ |
+LL | let f: &impl Fn(&'a T) -> &'b T = &|x| x;
+ | ^^^^^^^^^^^^^^^^^^^^^^^
+
+error: aborting due to previous error
+
--- /dev/null
+// Regression test for #60473
+
+#![feature(impl_trait_in_bindings)]
+#![allow(incomplete_features)]
+
+struct A<'a>(&'a ());
+
+trait Trait<T> {
+}
+
+impl<T> Trait<T> for () {
+}
+
+fn main() {
+ let x: impl Trait<A> = (); // FIXME: The error doesn't seem correct.
+ //~^ ERROR: opaque type expands to a recursive type
+}
--- /dev/null
+error[E0720]: opaque type expands to a recursive type
+ --> $DIR/issue-60473.rs:15:12
+ |
+LL | let x: impl Trait<A> = (); // FIXME: The error doesn't seem correct.
+ | ^^^^^^^^^^^^^ expands to a recursive type
+ |
+ = note: type resolves to itself
+
+error: aborting due to previous error
+
+For more information about this error, try `rustc --explain E0720`.
--- /dev/null
+// Regression test for #67166
+
+#![feature(impl_trait_in_bindings)]
+#![allow(incomplete_features)]
+
+pub fn run() {
+ let _foo: Box<impl Copy + '_> = Box::new(()); // FIXME: The error doesn't much make sense.
+ //~^ ERROR: opaque type expands to a recursive type
+}
+
+fn main() {}
--- /dev/null
+error[E0720]: opaque type expands to a recursive type
+ --> $DIR/issue-67166.rs:7:19
+ |
+LL | let _foo: Box<impl Copy + '_> = Box::new(()); // FIXME: The error doesn't much make sense.
+ | ^^^^^^^^^^^^^^ expands to a recursive type
+ |
+ = note: type resolves to itself
+
+error: aborting due to previous error
+
+For more information about this error, try `rustc --explain E0720`.
fn with_bound<'a>(x: &'a i32) -> impl LifetimeTrait<'a> + 'static { x }
//~^ ERROR cannot infer an appropriate lifetime
-// Tests that a closure type contianing 'b cannot be returned from a type where
+// Tests that a closure type containing 'b cannot be returned from a type where
// only 'a was expected.
fn move_lifetime_into_fn<'a, 'b>(x: &'a u32, y: &'b u32) -> impl Fn(&'a u32) {
//~^ ERROR lifetime mismatch
LL | pub use parser::ParseOptions;
| ^^^^^^^^^^^^ this struct import is private
|
-note: the struct import `ParseOptions` is defined here
+note: the struct import `ParseOptions` is defined here...
--> $DIR/issue-55884-2.rs:9:9
|
LL | use ParseOptions;
| ^^^^^^^^^^^^
+note: ...and refers to the struct import `ParseOptions` which is defined here...
+ --> $DIR/issue-55884-2.rs:12:9
+ |
+LL | pub use parser::ParseOptions;
+ | ^^^^^^^^^^^^^^^^^^^^ consider importing it directly
+note: ...and refers to the struct import `ParseOptions` which is defined here...
+ --> $DIR/issue-55884-2.rs:6:13
+ |
+LL | pub use options::*;
+ | ^^^^^^^^^^ consider importing it directly
+note: ...and refers to the struct `ParseOptions` which is defined here
+ --> $DIR/issue-55884-2.rs:2:5
+ |
+LL | pub struct ParseOptions {}
+ | ^^^^^^^^^^^^^^^^^^^^^^^ consider importing it directly
error: aborting due to previous error
LL | use b::a::foo::S;
| ^^^ this module import is private
|
-note: the module import `foo` is defined here
+note: the module import `foo` is defined here...
--> $DIR/reexports.rs:21:17
|
LL | pub use super::foo; // This is OK since the value `foo` is visible enough.
| ^^^^^^^^^^
+note: ...and refers to the module `foo` which is defined here
+ --> $DIR/reexports.rs:16:5
+ |
+LL | mod foo {
+ | ^^^^^^^
error[E0603]: module import `foo` is private
--> $DIR/reexports.rs:34:15
LL | use b::b::foo::S as T;
| ^^^ this module import is private
|
-note: the module import `foo` is defined here
+note: the module import `foo` is defined here...
--> $DIR/reexports.rs:26:17
|
LL | pub use super::*; // This is also OK since the value `foo` is visible enough.
| ^^^^^^^^
+note: ...and refers to the module `foo` which is defined here
+ --> $DIR/reexports.rs:16:5
+ |
+LL | mod foo {
+ | ^^^^^^^
warning: glob import doesn't reexport anything because no candidate is public enough
--> $DIR/reexports.rs:9:17
+// compile-flags: -O
// run-pass
#![allow(unused_must_use)]
#![feature(intrinsics)]
-use std::thread;
-
-extern "rust-intrinsic" {
- pub fn init<T>() -> T;
-}
+use std::{mem, thread};
const SIZE: usize = 1024 * 1024;
fn main() {
// do the test in a new thread to avoid (spurious?) stack overflows
thread::spawn(|| {
- let _memory: [u8; SIZE] = unsafe { init() };
+ let _memory: [u8; SIZE] = unsafe { mem::zeroed() };
}).join();
}
+++ /dev/null
-#![allow(deprecated)]
-#![feature(core_intrinsics)]
-
-use std::intrinsics::{init};
-
-// Test that the `init` intrinsic is really unsafe
-pub fn main() {
- let stuff = init::<isize>(); //~ ERROR call to unsafe function is unsafe
-}
+++ /dev/null
-error[E0133]: call to unsafe function is unsafe and requires unsafe function or block
- --> $DIR/init-unsafe.rs:8:17
- |
-LL | let stuff = init::<isize>();
- | ^^^^^^^^^^^^^^^ call to unsafe function
- |
- = note: consult the function's documentation for information on how to avoid undefined behavior
-
-error: aborting due to previous error
-
-For more information about this error, try `rustc --explain E0133`.
mod rusti {
extern "rust-intrinsic" {
- pub fn init<T>() -> T;
pub fn move_val_init<T>(dst: *mut T, src: T);
}
}
// sanity check
check_drops_state(0, None);
- let mut x: Box<D> = box D(1);
- assert_eq!(x.0, 1);
+ let mut x: Option<Box<D>> = Some(box D(1));
+ assert_eq!(x.as_ref().unwrap().0, 1);
// A normal overwrite, to demonstrate `check_drops_state`.
- x = box D(2);
+ x = Some(box D(2));
// At this point, one destructor has run, because the
// overwrite of `x` drops its initial value.
check_drops_state(1, Some(1));
- let mut y: Box<D> = rusti::init();
+ let mut y: Option<Box<D>> = std::mem::zeroed();
// An initial binding does not overwrite anything.
check_drops_state(1, Some(1));
// during such a destructor call. We do so after the end of
// this scope.
- assert_eq!(y.0, 2);
- y.0 = 3;
- assert_eq!(y.0, 3);
+ assert_eq!(y.as_ref().unwrap().0, 2);
+ y.as_mut().unwrap().0 = 3;
+ assert_eq!(y.as_ref().unwrap().0, 3);
check_drops_state(1, Some(1));
}
+++ /dev/null
-// run-pass
-// pretty-expanded FIXME #23616
-
-#![feature(intrinsics)]
-
-mod rusti {
- extern "rust-intrinsic" {
- pub fn uninit<T>() -> T;
- }
-}
-pub fn main() {
- let _a : isize = unsafe {rusti::uninit()};
-}
--- /dev/null
+// run-pass
+// ignore-wasm32-bare compiled with panic=abort by default
+
+// This test checks the panics emitted from `mem::{uninitialized,zeroed}`.
+
+#![feature(never_type)]
+#![allow(deprecated, invalid_value)]
+
+use std::{
+ mem::{self, MaybeUninit, ManuallyDrop},
+ panic,
+ ptr::NonNull,
+ num,
+};
+
+#[allow(dead_code)]
+struct Foo {
+ x: u8,
+ y: !,
+}
+
+enum Bar {}
+
+#[allow(dead_code)]
+enum OneVariant { Variant(i32) }
+
+// An enum with ScalarPair layout
+#[allow(dead_code)]
+enum LR {
+ Left(i64),
+ Right(i64),
+}
+#[allow(dead_code, non_camel_case_types)]
+enum LR_NonZero {
+ Left(num::NonZeroI64),
+ Right(num::NonZeroI64),
+}
+
+fn test_panic_msg<T>(op: impl (FnOnce() -> T) + panic::UnwindSafe, msg: &str) {
+ let err = panic::catch_unwind(op).err();
+ assert_eq!(
+ err.as_ref().and_then(|a| a.downcast_ref::<String>()).map(|s| &**s),
+ Some(msg)
+ );
+}
+
+fn main() {
+ unsafe {
+ // Uninhabited types
+ test_panic_msg(
+ || mem::uninitialized::<!>(),
+ "attempted to instantiate uninhabited type `!`"
+ );
+ test_panic_msg(
+ || mem::zeroed::<!>(),
+ "attempted to instantiate uninhabited type `!`"
+ );
+ test_panic_msg(
+ || MaybeUninit::<!>::uninit().assume_init(),
+ "attempted to instantiate uninhabited type `!`"
+ );
+
+ test_panic_msg(
+ || mem::uninitialized::<Foo>(),
+ "attempted to instantiate uninhabited type `Foo`"
+ );
+ test_panic_msg(
+ || mem::zeroed::<Foo>(),
+ "attempted to instantiate uninhabited type `Foo`"
+ );
+ test_panic_msg(
+ || MaybeUninit::<Foo>::uninit().assume_init(),
+ "attempted to instantiate uninhabited type `Foo`"
+ );
+
+ test_panic_msg(
+ || mem::uninitialized::<Bar>(),
+ "attempted to instantiate uninhabited type `Bar`"
+ );
+ test_panic_msg(
+ || mem::zeroed::<Bar>(),
+ "attempted to instantiate uninhabited type `Bar`"
+ );
+ test_panic_msg(
+ || MaybeUninit::<Bar>::uninit().assume_init(),
+ "attempted to instantiate uninhabited type `Bar`"
+ );
+
+ // Types that do not like zero-initialization
+ test_panic_msg(
+ || mem::uninitialized::<fn()>(),
+ "attempted to leave type `fn()` uninitialized, which is invalid"
+ );
+ test_panic_msg(
+ || mem::zeroed::<fn()>(),
+ "attempted to zero-initialize type `fn()`, which is invalid"
+ );
+
+ test_panic_msg(
+ || mem::uninitialized::<*const dyn Send>(),
+ "attempted to leave type `*const dyn std::marker::Send` uninitialized, which is invalid"
+ );
+ test_panic_msg(
+ || mem::zeroed::<*const dyn Send>(),
+ "attempted to zero-initialize type `*const dyn std::marker::Send`, which is invalid"
+ );
+
+ /* FIXME(#66151) we conservatively do not error here yet.
+ test_panic_msg(
+ || mem::uninitialized::<LR_NonZero>(),
+ "attempted to leave type `LR_NonZero` uninitialized, which is invalid"
+ );
+ test_panic_msg(
+ || mem::zeroed::<LR_NonZero>(),
+ "attempted to zero-initialize type `LR_NonZero`, which is invalid"
+ );
+
+ test_panic_msg(
+ || mem::uninitialized::<ManuallyDrop<LR_NonZero>>(),
+ "attempted to leave type `std::mem::ManuallyDrop<LR_NonZero>` uninitialized, \
+ which is invalid"
+ );
+ test_panic_msg(
+ || mem::zeroed::<ManuallyDrop<LR_NonZero>>(),
+ "attempted to zero-initialize type `std::mem::ManuallyDrop<LR_NonZero>`, \
+ which is invalid"
+ );
+
+ test_panic_msg(
+ || mem::uninitialized::<(NonNull<u32>, u32, u32)>(),
+ "attempted to leave type `(std::ptr::NonNull<u32>, u32, u32)` uninitialized, \
+ which is invalid"
+ );
+ test_panic_msg(
+ || mem::zeroed::<(NonNull<u32>, u32, u32)>(),
+ "attempted to zero-initialize type `(std::ptr::NonNull<u32>, u32, u32)`, \
+ which is invalid"
+ );
+ */
+
+ // Types that can be zero, but not uninit.
+ test_panic_msg(
+ || mem::uninitialized::<bool>(),
+ "attempted to leave type `bool` uninitialized, which is invalid"
+ );
+ test_panic_msg(
+ || mem::uninitialized::<LR>(),
+ "attempted to leave type `LR` uninitialized, which is invalid"
+ );
+ test_panic_msg(
+ || mem::uninitialized::<ManuallyDrop<LR>>(),
+ "attempted to leave type `std::mem::ManuallyDrop<LR>` uninitialized, which is invalid"
+ );
+
+ // Some things that should work.
+ let _val = mem::zeroed::<bool>();
+ let _val = mem::zeroed::<LR>();
+ let _val = mem::zeroed::<ManuallyDrop<LR>>();
+ let _val = mem::zeroed::<OneVariant>();
+ let _val = mem::zeroed::<Option<&'static i32>>();
+ let _val = mem::zeroed::<MaybeUninit<NonNull<u32>>>();
+ let _val = mem::uninitialized::<MaybeUninit<bool>>();
+
+ // These are UB because they have not been officially blessed, but we await the resolution
+ // of <https://github.com/rust-lang/unsafe-code-guidelines/issues/71> before doing
+ // anything about that.
+ let _val = mem::uninitialized::<i32>();
+ let _val = mem::uninitialized::<*const ()>();
+ }
+}
error[E0583]: file not found for module `baz`
- --> $DIR/auxiliary/foo/bar.rs:1:9
+ --> $DIR/auxiliary/foo/bar.rs:1:1
|
LL | pub mod baz;
- | ^^^
+ | ^^^^^^^^^^^^
|
- = help: name the file either bar/baz.rs or bar/baz/mod.rs inside the directory "$DIR/auxiliary/foo"
+ = help: to create the module `baz`, create file "$DIR/auxiliary/foo/bar/baz.rs"
error: aborting due to previous error
--- /dev/null
+#[derive(Clone)]
+pub struct Struct<A>(A);
+
+impl<A> Struct<A> {
+ pub fn new() -> Self {
+ todo!()
+ }
+}
-error[E0521]: borrowed data escapes outside of method
+error[E0521]: borrowed data escapes outside of associated function
--> $DIR/issue-16683.rs:4:9
|
LL | fn b(&self) {
- | ----- `self` is a reference that is only valid in the method body
+ | ----- `self` is a reference that is only valid in the associated function body
LL | self.a();
- | ^^^^^^^^ `self` escapes the method body here
+ | ^^^^^^^^ `self` escapes the associated function body here
error: aborting due to previous error
-error[E0521]: borrowed data escapes outside of method
+error[E0521]: borrowed data escapes outside of associated function
--> $DIR/issue-17758.rs:7:9
|
LL | fn bar(&self) {
- | ----- `self` is a reference that is only valid in the method body
+ | ----- `self` is a reference that is only valid in the associated function body
LL | self.foo();
- | ^^^^^^^^^^ `self` escapes the method body here
+ | ^^^^^^^^^^ `self` escapes the associated function body here
error: aborting due to previous error
type Type_8<'a,,> = &'a ();
-//~^ error: expected one of `>`, `const`, identifier, or lifetime, found `,`
+//~^ error: expected one of `#`, `>`, `const`, identifier, or lifetime, found `,`
//type Type_9<T,,> = Box<T>; // error: expected identifier, found `,`
-error: expected one of `>`, `const`, identifier, or lifetime, found `,`
+error: expected one of `#`, `>`, `const`, identifier, or lifetime, found `,`
--> $DIR/issue-20616-8.rs:31:16
|
LL | type Type_8<'a,,> = &'a ();
- | ^ expected one of `>`, `const`, identifier, or lifetime
+ | ^ expected one of `#`, `>`, `const`, identifier, or lifetime
error: aborting due to previous error
type Type_9<T,,> = Box<T>;
-//~^ error: expected one of `>`, `const`, identifier, or lifetime, found `,`
+//~^ error: expected one of `#`, `>`, `const`, identifier, or lifetime, found `,`
-error: expected one of `>`, `const`, identifier, or lifetime, found `,`
+error: expected one of `#`, `>`, `const`, identifier, or lifetime, found `,`
--> $DIR/issue-20616-9.rs:34:15
|
LL | type Type_9<T,,> = Box<T>;
- | ^ expected one of `>`, `const`, identifier, or lifetime
+ | ^ expected one of `#`, `>`, `const`, identifier, or lifetime
error: aborting due to previous error
LL | let new: T::B = unsafe { std::mem::transmute(value) };
| ^^^^^^^^^^^^^^^^^^^
|
- = note: source type: `<T as Trait<'a>>::A` (size can vary because of <T as Trait>::A)
- = note: target type: `<T as Trait<'a>>::B` (size can vary because of <T as Trait>::B)
+ = note: source type: `<T as Trait>::A` (this type does not have a fixed size)
+ = note: target type: `<T as Trait>::B` (this type does not have a fixed size)
error: aborting due to previous error
use crate1::A::Foo;
fn bar(f: Foo) {
Foo::foo(&f);
- //~^ ERROR: method `foo` is private
+ //~^ ERROR: associated function `foo` is private
}
}
-error[E0624]: method `foo` is private
+error[E0624]: associated function `foo` is private
--> $DIR/issue-21202.rs:10:9
|
LL | Foo::foo(&f);
#![feature(optin_builtin_traits)]
unsafe auto trait Trait {
-//~^ ERROR E0380
- type Output;
+ type Output; //~ ERROR E0380
}
fn call_method<T: Trait>(x: T) {}
error[E0380]: auto traits cannot have methods or associated items
- --> $DIR/issue-23080-2.rs:5:1
+ --> $DIR/issue-23080-2.rs:6:10
|
-LL | / unsafe auto trait Trait {
-LL | |
-LL | | type Output;
-LL | | }
- | |_^
+LL | unsafe auto trait Trait {
+ | ----- auto trait cannot have items
+LL | type Output;
+ | ^^^^^^
error[E0275]: overflow evaluating the requirement `<() as Trait>::Output`
|
#![feature(optin_builtin_traits)]
unsafe auto trait Trait {
-//~^ ERROR E0380
- fn method(&self) {
+ fn method(&self) { //~ ERROR E0380
println!("Hello");
}
}
error[E0380]: auto traits cannot have methods or associated items
- --> $DIR/issue-23080.rs:3:1
+ --> $DIR/issue-23080.rs:4:8
|
-LL | / unsafe auto trait Trait {
-LL | |
-LL | | fn method(&self) {
-LL | | println!("Hello");
-LL | | }
-LL | | }
- | |_^
+LL | unsafe auto trait Trait {
+ | ----- auto trait cannot have items
+LL | fn method(&self) {
+ | ^^^^^^
error: aborting due to previous error
| ^^^^^
| |
| function or associated item not found in `dyn std::ops::BitXor<_>`
- | help: there is a method with a similar name: `bitxor`
+ | help: there is an associated function with a similar name: `bitxor`
error[E0191]: the value of the associated type `Output` (from trait `std::ops::BitXor`) must be specified
--> $DIR/issue-28344.rs:8:13
| ^^^^^
| |
| function or associated item not found in `dyn std::ops::BitXor<_>`
- | help: there is a method with a similar name: `bitxor`
+ | help: there is an associated function with a similar name: `bitxor`
error: aborting due to 4 previous errors
error[E0308]: mismatched types
--> $DIR/issue-33504.rs:7:13
|
+LL | struct Test;
+ | ------------ unit struct defined here
+...
LL | let Test = 1;
- | ^^^^ expected integer, found struct `Test`
+ | ^^^^
+ | |
+ | expected integer, found struct `Test`
+ | `Test` is interpreted as a unit struct, not a new binding
+ | help: introduce a new binding instead: `other_test`
error: aborting due to previous error
+// compile-flags: -Zsave-analysis
+// Also regression test for #69416
+
mod my_mod {
pub struct MyStruct {
priv_field: isize
let _woohoo = (Box::new(my_struct)).priv_field;
//~^ ERROR field `priv_field` of struct `my_mod::MyStruct` is private
- (&my_struct).happyfun(); //~ ERROR method `happyfun` is private
+ (&my_struct).happyfun(); //~ ERROR associated function `happyfun` is private
- (Box::new(my_struct)).happyfun(); //~ ERROR method `happyfun` is private
+ (Box::new(my_struct)).happyfun(); //~ ERROR associated function `happyfun` is private
let nope = my_struct.priv_field;
//~^ ERROR field `priv_field` of struct `my_mod::MyStruct` is private
}
error[E0616]: field `priv_field` of struct `my_mod::MyStruct` is private
- --> $DIR/issue-3763.rs:15:19
+ --> $DIR/issue-3763.rs:18:19
|
LL | let _woohoo = (&my_struct).priv_field;
| ^^^^^^^^^^^^^^^^^^^^^^^
error[E0616]: field `priv_field` of struct `my_mod::MyStruct` is private
- --> $DIR/issue-3763.rs:18:19
+ --> $DIR/issue-3763.rs:21:19
|
LL | let _woohoo = (Box::new(my_struct)).priv_field;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-error[E0624]: method `happyfun` is private
- --> $DIR/issue-3763.rs:21:18
+error[E0624]: associated function `happyfun` is private
+ --> $DIR/issue-3763.rs:24:18
|
LL | (&my_struct).happyfun();
| ^^^^^^^^
-error[E0624]: method `happyfun` is private
- --> $DIR/issue-3763.rs:23:27
+error[E0624]: associated function `happyfun` is private
+ --> $DIR/issue-3763.rs:26:27
|
LL | (Box::new(my_struct)).happyfun();
| ^^^^^^^^
error[E0616]: field `priv_field` of struct `my_mod::MyStruct` is private
- --> $DIR/issue-3763.rs:24:16
+ --> $DIR/issue-3763.rs:27:16
|
LL | let nope = my_struct.priv_field;
| ^^^^^^^^^^^^^^^^^^^^
#[inline(never)]
fn verify_stack_usage(before_ptr: *mut Vec<Big>) {
- // to check stack usage, create locals before and after
+ // To check stack usage, create locals before and after
// and check the difference in addresses between them.
let mut stack_var: Vec<Big> = vec![];
test::black_box(&mut stack_var);
let stack_usage = isize::abs(
(&mut stack_var as *mut _ as isize) -
(before_ptr as isize)) as usize;
- // give space for 2 copies of `Big` + 128 "misc" bytes.
- if stack_usage > mem::size_of::<Big>() * 2 + 128 {
+ // Give space for 2 copies of `Big` + 272 "misc" bytes
+ // (value observed on x86_64-pc-windows-gnu).
+ if stack_usage > mem::size_of::<Big>() * 2 + 272 {
panic!("used {} bytes of stack, but `struct Big` is only {} bytes",
stack_usage, mem::size_of::<Big>());
}
error[E0308]: mismatched types
--> $DIR/issue-4968.rs:5:16
|
+LL | const A: (isize,isize) = (4,2);
+ | ------------------------------- constant defined here
+LL | fn main() {
LL | match 42 { A => () }
- | ^ expected integer, found tuple
+ | ^
+ | |
+ | expected integer, found tuple
+ | `A` is interpreted as a constant, not a new binding
+ | help: introduce a new binding instead: `other_a`
|
= note: expected type `{integer}`
found tuple `(isize, isize)`
--> $DIR/option-as_deref.rs:2:29
|
LL | let _result = &Some(42).as_deref();
- | ^^^^^^^^ help: there is a method with a similar name: `as_ref`
+ | ^^^^^^^^ help: there is an associated function with a similar name: `as_ref`
|
= note: the method `as_deref` exists but the following trait bounds were not satisfied:
`{integer}: std::ops::Deref`
--> $DIR/result-as_deref.rs:4:27
|
LL | let _result = &Ok(42).as_deref();
- | ^^^^^^^^ help: there is a method with a similar name: `as_ref`
+ | ^^^^^^^^ help: there is an associated function with a similar name: `as_ref`
|
= note: the method `as_deref` exists but the following trait bounds were not satisfied:
`{integer}: std::ops::Deref`
--> $DIR/result-as_deref_err.rs:4:28
|
LL | let _result = &Err(41).as_deref_err();
- | ^^^^^^^^^^^^ help: there is a method with a similar name: `as_deref_mut`
+ | ^^^^^^^^^^^^ help: there is an associated function with a similar name: `as_deref_mut`
|
= note: the method `as_deref_err` exists but the following trait bounds were not satisfied:
`{integer}: std::ops::Deref`
--> $DIR/result-as_deref_mut.rs:4:31
|
LL | let _result = &mut Ok(42).as_deref_mut();
- | ^^^^^^^^^^^^ help: there is a method with a similar name: `as_deref_err`
+ | ^^^^^^^^^^^^ help: there is an associated function with a similar name: `as_deref_err`
|
= note: the method `as_deref_mut` exists but the following trait bounds were not satisfied:
`{integer}: std::ops::DerefMut`
--> $DIR/result-as_deref_mut_err.rs:4:32
|
LL | let _result = &mut Err(41).as_deref_mut_err();
- | ^^^^^^^^^^^^^^^^ help: there is a method with a similar name: `as_deref_mut`
+ | ^^^^^^^^^^^^^^^^ help: there is an associated function with a similar name: `as_deref_mut`
|
= note: the method `as_deref_mut_err` exists but the following trait bounds were not satisfied:
`{integer}: std::ops::DerefMut`
error[E0308]: mismatched types
--> $DIR/issue-5100.rs:8:9
|
+LL | enum A { B, C }
+ | - unit variant defined here
+...
LL | match (true, false) {
| ------------- this expression has type `(bool, bool)`
LL | A::B => (),
}
fn main() {
- test::Foo::<test::B>::foo(); //~ ERROR method `foo` is private
+ test::Foo::<test::B>::foo(); //~ ERROR associated function `foo` is private
}
-error[E0624]: method `foo` is private
+error[E0624]: associated function `foo` is private
--> $DIR/issue-53498.rs:16:5
|
LL | test::Foo::<test::B>::foo();
-error[E0164]: expected tuple struct or tuple variant, found method `Path::new`
+error[E0164]: expected tuple struct or tuple variant, found associated function `Path::new`
--> $DIR/issue-55587.rs:4:9
|
LL | let Path::new();
| ^^^^^^^
|
= note: `#[deny(overflowing_literals)]` on by default
+ = note: the literal `100_000` does not fit into the type `u16` whose range is `0..=65535`
error: aborting due to previous error
--- /dev/null
+macro_rules! suite {
+ ( $( $fn:ident; )* ) => {
+ $(
+ const A = "A".$fn();
+ //~^ ERROR the name `A` is defined multiple times
+ //~| ERROR missing type for `const` item
+ //~| ERROR the type placeholder `_` is not allowed within types
+ )*
+ }
+}
+
+suite! {
+ len;
+ is_empty;
+}
+
+fn main() {}
--- /dev/null
+error[E0428]: the name `A` is defined multiple times
+ --> $DIR/issue-69396-const-no-type-in-macro.rs:4:13
+ |
+LL | const A = "A".$fn();
+ | ^^^^^^^^^^^^^^^^^^^^
+ | |
+ | `A` redefined here
+ | previous definition of the value `A` here
+...
+LL | / suite! {
+LL | | len;
+LL | | is_empty;
+LL | | }
+ | |_- in this macro invocation
+ |
+ = note: `A` must be defined only once in the value namespace of this module
+ = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info)
+
+error: missing type for `const` item
+ --> $DIR/issue-69396-const-no-type-in-macro.rs:4:19
+ |
+LL | const A = "A".$fn();
+ | ^ help: provide a type for the item: `A: usize`
+...
+LL | / suite! {
+LL | | len;
+LL | | is_empty;
+LL | | }
+ | |_- in this macro invocation
+ |
+ = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info)
+
+error[E0121]: the type placeholder `_` is not allowed within types on item signatures
+ --> $DIR/issue-69396-const-no-type-in-macro.rs:4:19
+ |
+LL | const A = "A".$fn();
+ | ^
+ | |
+ | not allowed in type signatures
+ | help: replace `_` with the correct type: `bool`
+...
+LL | / suite! {
+LL | | len;
+LL | | is_empty;
+LL | | }
+ | |_- in this macro invocation
+ |
+ = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info)
+
+error: aborting due to 3 previous errors
+
+Some errors have detailed explanations: E0121, E0428.
+For more information about an error, try `rustc --explain E0121`.
--- /dev/null
+trait TraitA {
+ const VALUE: usize;
+}
+
+struct A;
+impl TraitA for A {
+ const VALUE: usize = 0;
+}
+
+trait TraitB {
+ type MyA: TraitA;
+ const VALUE: usize = Self::MyA::VALUE;
+}
+
+struct B;
+impl TraitB for B { //~ ERROR not all trait items implemented, missing: `MyA`
+ type M = A; //~ ERROR type `M` is not a member of trait `TraitB`
+}
+
+fn main() {
+ let _ = [0; B::VALUE];
+ //~^ ERROR array lengths can't depend on generic parameters
+}
--- /dev/null
+error[E0437]: type `M` is not a member of trait `TraitB`
+ --> $DIR/issue-69602-type-err-during-codegen-ice.rs:17:5
+ |
+LL | type M = A;
+ | ^^^^^^^^^^^^^ not a member of trait `TraitB`
+
+error[E0046]: not all trait items implemented, missing: `MyA`
+ --> $DIR/issue-69602-type-err-during-codegen-ice.rs:16:1
+ |
+LL | type MyA: TraitA;
+ | ----------------- `MyA` from trait
+...
+LL | impl TraitB for B {
+ | ^^^^^^^^^^^^^^^^^ missing `MyA` in implementation
+
+error: array lengths can't depend on generic parameters
+ --> $DIR/issue-69602-type-err-during-codegen-ice.rs:21:17
+ |
+LL | let _ = [0; B::VALUE];
+ | ^^^^^^^^
+
+error: aborting due to 3 previous errors
+
+Some errors have detailed explanations: E0046, E0437.
+For more information about an error, try `rustc --explain E0046`.
--- /dev/null
+// aux-build:issue-69725.rs
+
+extern crate issue_69725;
+use issue_69725::Struct;
+
+fn crash<A>() {
+ let _ = Struct::<A>::new().clone();
+ //~^ ERROR: no method named `clone` found
+}
+
+fn main() {}
--- /dev/null
+error[E0599]: no method named `clone` found for struct `issue_69725::Struct<A>` in the current scope
+ --> $DIR/issue-69725.rs:7:32
+ |
+LL | let _ = Struct::<A>::new().clone();
+ | ^^^^^ method not found in `issue_69725::Struct<A>`
+ |
+ ::: $DIR/auxiliary/issue-69725.rs:2:1
+ |
+LL | pub struct Struct<A>(A);
+ | ------------------------ doesn't satisfy `issue_69725::Struct<A>: std::clone::Clone`
+ |
+ = note: the method `clone` exists but the following trait bounds were not satisfied:
+ `A: std::clone::Clone`
+ which is required by `issue_69725::Struct<A>: std::clone::Clone`
+
+error: aborting due to previous error
+
+For more information about this error, try `rustc --explain E0599`.
error[E0308]: mismatched types
--> $DIR/issue-7867.rs:7:9
|
+LL | enum A { B, C }
+ | - unit variant defined here
+...
LL | match (true, false) {
| ------------- this expression has type `(bool, bool)`
LL | A::B => (),
-{"message":"mismatched types","code":{"code":"E0308","explanation":"This error occurs when the compiler was unable to infer the concrete type of a
-variable. It can occur for several cases, the most common of which is a
-mismatch in the expected type that the compiler inferred for a variable's
-initializing expression, and the actual type explicitly assigned to the
-variable.
+{"message":"mismatched types","code":{"code":"E0308","explanation":"Expected type did not match the received type.
-For example:
+Erroneous code example:
```compile_fail,E0308
let x: i32 = \"I am not a number!\";
// |
// type `i32` assigned to variable `x`
```
-"},"level":"error","spans":[{"file_name":"$DIR/json-bom-plus-crlf-multifile-aux.rs","byte_start":621,"byte_end":622,"line_start":17,"line_end":17,"column_start":22,"column_end":23,"is_primary":true,"text":[{"text":" let s : String = 1; // Error in the middle of line.","highlight_start":22,"highlight_end":23}],"label":"expected struct `std::string::String`, found integer","suggested_replacement":null,"suggestion_applicability":null,"expansion":null},{"file_name":"$DIR/json-bom-plus-crlf-multifile-aux.rs","byte_start":612,"byte_end":618,"line_start":17,"line_end":17,"column_start":13,"column_end":19,"is_primary":false,"text":[{"text":" let s : String = 1; // Error in the middle of line.","highlight_start":13,"highlight_end":19}],"label":"expected due to this","suggested_replacement":null,"suggestion_applicability":null,"expansion":null}],"children":[{"message":"try using a conversion method","code":null,"level":"help","spans":[{"file_name":"$DIR/json-bom-plus-crlf-multifile-aux.rs","byte_start":621,"byte_end":622,"line_start":17,"line_end":17,"column_start":22,"column_end":23,"is_primary":true,"text":[{"text":" let s : String = 1; // Error in the middle of line.","highlight_start":22,"highlight_end":23}],"label":null,"suggested_replacement":"1.to_string()","suggestion_applicability":"MaybeIncorrect","expansion":null}],"children":[],"rendered":null}],"rendered":"$DIR/json-bom-plus-crlf-multifile-aux.rs:17:22: error[E0308]: mismatched types
-"}
-{"message":"mismatched types","code":{"code":"E0308","explanation":"This error occurs when the compiler was unable to infer the concrete type of a
+
+This error occurs when the compiler was unable to infer the concrete type of a
variable. It can occur for several cases, the most common of which is a
mismatch in the expected type that the compiler inferred for a variable's
initializing expression, and the actual type explicitly assigned to the
variable.
+"},"level":"error","spans":[{"file_name":"$DIR/json-bom-plus-crlf-multifile-aux.rs","byte_start":621,"byte_end":622,"line_start":17,"line_end":17,"column_start":22,"column_end":23,"is_primary":true,"text":[{"text":" let s : String = 1; // Error in the middle of line.","highlight_start":22,"highlight_end":23}],"label":"expected struct `std::string::String`, found integer","suggested_replacement":null,"suggestion_applicability":null,"expansion":null},{"file_name":"$DIR/json-bom-plus-crlf-multifile-aux.rs","byte_start":612,"byte_end":618,"line_start":17,"line_end":17,"column_start":13,"column_end":19,"is_primary":false,"text":[{"text":" let s : String = 1; // Error in the middle of line.","highlight_start":13,"highlight_end":19}],"label":"expected due to this","suggested_replacement":null,"suggestion_applicability":null,"expansion":null}],"children":[{"message":"try using a conversion method","code":null,"level":"help","spans":[{"file_name":"$DIR/json-bom-plus-crlf-multifile-aux.rs","byte_start":621,"byte_end":622,"line_start":17,"line_end":17,"column_start":22,"column_end":23,"is_primary":true,"text":[{"text":" let s : String = 1; // Error in the middle of line.","highlight_start":22,"highlight_end":23}],"label":null,"suggested_replacement":"1.to_string()","suggestion_applicability":"MaybeIncorrect","expansion":null}],"children":[],"rendered":null}],"rendered":"$DIR/json-bom-plus-crlf-multifile-aux.rs:17:22: error[E0308]: mismatched types
+"}
+{"message":"mismatched types","code":{"code":"E0308","explanation":"Expected type did not match the received type.
-For example:
+Erroneous code example:
```compile_fail,E0308
let x: i32 = \"I am not a number!\";
// |
// type `i32` assigned to variable `x`
```
-"},"level":"error","spans":[{"file_name":"$DIR/json-bom-plus-crlf-multifile-aux.rs","byte_start":681,"byte_end":682,"line_start":19,"line_end":19,"column_start":22,"column_end":23,"is_primary":true,"text":[{"text":" let s : String = 1","highlight_start":22,"highlight_end":23}],"label":"expected struct `std::string::String`, found integer","suggested_replacement":null,"suggestion_applicability":null,"expansion":null},{"file_name":"$DIR/json-bom-plus-crlf-multifile-aux.rs","byte_start":672,"byte_end":678,"line_start":19,"line_end":19,"column_start":13,"column_end":19,"is_primary":false,"text":[{"text":" let s : String = 1","highlight_start":13,"highlight_end":19}],"label":"expected due to this","suggested_replacement":null,"suggestion_applicability":null,"expansion":null}],"children":[{"message":"try using a conversion method","code":null,"level":"help","spans":[{"file_name":"$DIR/json-bom-plus-crlf-multifile-aux.rs","byte_start":681,"byte_end":682,"line_start":19,"line_end":19,"column_start":22,"column_end":23,"is_primary":true,"text":[{"text":" let s : String = 1","highlight_start":22,"highlight_end":23}],"label":null,"suggested_replacement":"1.to_string()","suggestion_applicability":"MaybeIncorrect","expansion":null}],"children":[],"rendered":null}],"rendered":"$DIR/json-bom-plus-crlf-multifile-aux.rs:19:22: error[E0308]: mismatched types
-"}
-{"message":"mismatched types","code":{"code":"E0308","explanation":"This error occurs when the compiler was unable to infer the concrete type of a
+
+This error occurs when the compiler was unable to infer the concrete type of a
variable. It can occur for several cases, the most common of which is a
mismatch in the expected type that the compiler inferred for a variable's
initializing expression, and the actual type explicitly assigned to the
variable.
+"},"level":"error","spans":[{"file_name":"$DIR/json-bom-plus-crlf-multifile-aux.rs","byte_start":681,"byte_end":682,"line_start":19,"line_end":19,"column_start":22,"column_end":23,"is_primary":true,"text":[{"text":" let s : String = 1","highlight_start":22,"highlight_end":23}],"label":"expected struct `std::string::String`, found integer","suggested_replacement":null,"suggestion_applicability":null,"expansion":null},{"file_name":"$DIR/json-bom-plus-crlf-multifile-aux.rs","byte_start":672,"byte_end":678,"line_start":19,"line_end":19,"column_start":13,"column_end":19,"is_primary":false,"text":[{"text":" let s : String = 1","highlight_start":13,"highlight_end":19}],"label":"expected due to this","suggested_replacement":null,"suggestion_applicability":null,"expansion":null}],"children":[{"message":"try using a conversion method","code":null,"level":"help","spans":[{"file_name":"$DIR/json-bom-plus-crlf-multifile-aux.rs","byte_start":681,"byte_end":682,"line_start":19,"line_end":19,"column_start":22,"column_end":23,"is_primary":true,"text":[{"text":" let s : String = 1","highlight_start":22,"highlight_end":23}],"label":null,"suggested_replacement":"1.to_string()","suggestion_applicability":"MaybeIncorrect","expansion":null}],"children":[],"rendered":null}],"rendered":"$DIR/json-bom-plus-crlf-multifile-aux.rs:19:22: error[E0308]: mismatched types
+"}
+{"message":"mismatched types","code":{"code":"E0308","explanation":"Expected type did not match the received type.
-For example:
+Erroneous code example:
```compile_fail,E0308
let x: i32 = \"I am not a number!\";
// |
// type `i32` assigned to variable `x`
```
-"},"level":"error","spans":[{"file_name":"$DIR/json-bom-plus-crlf-multifile-aux.rs","byte_start":745,"byte_end":746,"line_start":23,"line_end":23,"column_start":1,"column_end":2,"is_primary":true,"text":[{"text":"1; // Error after the newline.","highlight_start":1,"highlight_end":2}],"label":"expected struct `std::string::String`, found integer","suggested_replacement":null,"suggestion_applicability":null,"expansion":null},{"file_name":"$DIR/json-bom-plus-crlf-multifile-aux.rs","byte_start":735,"byte_end":741,"line_start":22,"line_end":22,"column_start":13,"column_end":19,"is_primary":false,"text":[{"text":" let s : String =","highlight_start":13,"highlight_end":19}],"label":"expected due to this","suggested_replacement":null,"suggestion_applicability":null,"expansion":null}],"children":[{"message":"try using a conversion method","code":null,"level":"help","spans":[{"file_name":"$DIR/json-bom-plus-crlf-multifile-aux.rs","byte_start":745,"byte_end":746,"line_start":23,"line_end":23,"column_start":1,"column_end":2,"is_primary":true,"text":[{"text":"1; // Error after the newline.","highlight_start":1,"highlight_end":2}],"label":null,"suggested_replacement":"1.to_string()","suggestion_applicability":"MaybeIncorrect","expansion":null}],"children":[],"rendered":null}],"rendered":"$DIR/json-bom-plus-crlf-multifile-aux.rs:23:1: error[E0308]: mismatched types
-"}
-{"message":"mismatched types","code":{"code":"E0308","explanation":"This error occurs when the compiler was unable to infer the concrete type of a
+
+This error occurs when the compiler was unable to infer the concrete type of a
variable. It can occur for several cases, the most common of which is a
mismatch in the expected type that the compiler inferred for a variable's
initializing expression, and the actual type explicitly assigned to the
variable.
+"},"level":"error","spans":[{"file_name":"$DIR/json-bom-plus-crlf-multifile-aux.rs","byte_start":745,"byte_end":746,"line_start":23,"line_end":23,"column_start":1,"column_end":2,"is_primary":true,"text":[{"text":"1; // Error after the newline.","highlight_start":1,"highlight_end":2}],"label":"expected struct `std::string::String`, found integer","suggested_replacement":null,"suggestion_applicability":null,"expansion":null},{"file_name":"$DIR/json-bom-plus-crlf-multifile-aux.rs","byte_start":735,"byte_end":741,"line_start":22,"line_end":22,"column_start":13,"column_end":19,"is_primary":false,"text":[{"text":" let s : String =","highlight_start":13,"highlight_end":19}],"label":"expected due to this","suggested_replacement":null,"suggestion_applicability":null,"expansion":null}],"children":[{"message":"try using a conversion method","code":null,"level":"help","spans":[{"file_name":"$DIR/json-bom-plus-crlf-multifile-aux.rs","byte_start":745,"byte_end":746,"line_start":23,"line_end":23,"column_start":1,"column_end":2,"is_primary":true,"text":[{"text":"1; // Error after the newline.","highlight_start":1,"highlight_end":2}],"label":null,"suggested_replacement":"1.to_string()","suggestion_applicability":"MaybeIncorrect","expansion":null}],"children":[],"rendered":null}],"rendered":"$DIR/json-bom-plus-crlf-multifile-aux.rs:23:1: error[E0308]: mismatched types
+"}
+{"message":"mismatched types","code":{"code":"E0308","explanation":"Expected type did not match the received type.
-For example:
+Erroneous code example:
```compile_fail,E0308
let x: i32 = \"I am not a number!\";
// |
// type `i32` assigned to variable `x`
```
+
+This error occurs when the compiler was unable to infer the concrete type of a
+variable. It can occur for several cases, the most common of which is a
+mismatch in the expected type that the compiler inferred for a variable's
+initializing expression, and the actual type explicitly assigned to the
+variable.
"},"level":"error","spans":[{"file_name":"$DIR/json-bom-plus-crlf-multifile-aux.rs","byte_start":801,"byte_end":809,"line_start":25,"line_end":26,"column_start":22,"column_end":6,"is_primary":true,"text":[{"text":" let s : String = (","highlight_start":22,"highlight_end":23},{"text":" ); // Error spanning the newline.","highlight_start":1,"highlight_end":6}],"label":"expected struct `std::string::String`, found `()`","suggested_replacement":null,"suggestion_applicability":null,"expansion":null},{"file_name":"$DIR/json-bom-plus-crlf-multifile-aux.rs","byte_start":792,"byte_end":798,"line_start":25,"line_end":25,"column_start":13,"column_end":19,"is_primary":false,"text":[{"text":" let s : String = (","highlight_start":13,"highlight_end":19}],"label":"expected due to this","suggested_replacement":null,"suggestion_applicability":null,"expansion":null}],"children":[],"rendered":"$DIR/json-bom-plus-crlf-multifile-aux.rs:25:22: error[E0308]: mismatched types
"}
{"message":"aborting due to 4 previous errors","code":null,"level":"error","spans":[],"children":[],"rendered":"error: aborting due to 4 previous errors
-{"message":"mismatched types","code":{"code":"E0308","explanation":"This error occurs when the compiler was unable to infer the concrete type of a
-variable. It can occur for several cases, the most common of which is a
-mismatch in the expected type that the compiler inferred for a variable's
-initializing expression, and the actual type explicitly assigned to the
-variable.
+{"message":"mismatched types","code":{"code":"E0308","explanation":"Expected type did not match the received type.
-For example:
+Erroneous code example:
```compile_fail,E0308
let x: i32 = \"I am not a number!\";
// |
// type `i32` assigned to variable `x`
```
-"},"level":"error","spans":[{"file_name":"$DIR/json-bom-plus-crlf.rs","byte_start":606,"byte_end":607,"line_start":16,"line_end":16,"column_start":22,"column_end":23,"is_primary":true,"text":[{"text":" let s : String = 1; // Error in the middle of line.","highlight_start":22,"highlight_end":23}],"label":"expected struct `std::string::String`, found integer","suggested_replacement":null,"suggestion_applicability":null,"expansion":null},{"file_name":"$DIR/json-bom-plus-crlf.rs","byte_start":597,"byte_end":603,"line_start":16,"line_end":16,"column_start":13,"column_end":19,"is_primary":false,"text":[{"text":" let s : String = 1; // Error in the middle of line.","highlight_start":13,"highlight_end":19}],"label":"expected due to this","suggested_replacement":null,"suggestion_applicability":null,"expansion":null}],"children":[{"message":"try using a conversion method","code":null,"level":"help","spans":[{"file_name":"$DIR/json-bom-plus-crlf.rs","byte_start":606,"byte_end":607,"line_start":16,"line_end":16,"column_start":22,"column_end":23,"is_primary":true,"text":[{"text":" let s : String = 1; // Error in the middle of line.","highlight_start":22,"highlight_end":23}],"label":null,"suggested_replacement":"1.to_string()","suggestion_applicability":"MaybeIncorrect","expansion":null}],"children":[],"rendered":null}],"rendered":"$DIR/json-bom-plus-crlf.rs:16:22: error[E0308]: mismatched types
-"}
-{"message":"mismatched types","code":{"code":"E0308","explanation":"This error occurs when the compiler was unable to infer the concrete type of a
+
+This error occurs when the compiler was unable to infer the concrete type of a
variable. It can occur for several cases, the most common of which is a
mismatch in the expected type that the compiler inferred for a variable's
initializing expression, and the actual type explicitly assigned to the
variable.
+"},"level":"error","spans":[{"file_name":"$DIR/json-bom-plus-crlf.rs","byte_start":606,"byte_end":607,"line_start":16,"line_end":16,"column_start":22,"column_end":23,"is_primary":true,"text":[{"text":" let s : String = 1; // Error in the middle of line.","highlight_start":22,"highlight_end":23}],"label":"expected struct `std::string::String`, found integer","suggested_replacement":null,"suggestion_applicability":null,"expansion":null},{"file_name":"$DIR/json-bom-plus-crlf.rs","byte_start":597,"byte_end":603,"line_start":16,"line_end":16,"column_start":13,"column_end":19,"is_primary":false,"text":[{"text":" let s : String = 1; // Error in the middle of line.","highlight_start":13,"highlight_end":19}],"label":"expected due to this","suggested_replacement":null,"suggestion_applicability":null,"expansion":null}],"children":[{"message":"try using a conversion method","code":null,"level":"help","spans":[{"file_name":"$DIR/json-bom-plus-crlf.rs","byte_start":606,"byte_end":607,"line_start":16,"line_end":16,"column_start":22,"column_end":23,"is_primary":true,"text":[{"text":" let s : String = 1; // Error in the middle of line.","highlight_start":22,"highlight_end":23}],"label":null,"suggested_replacement":"1.to_string()","suggestion_applicability":"MaybeIncorrect","expansion":null}],"children":[],"rendered":null}],"rendered":"$DIR/json-bom-plus-crlf.rs:16:22: error[E0308]: mismatched types
+"}
+{"message":"mismatched types","code":{"code":"E0308","explanation":"Expected type did not match the received type.
-For example:
+Erroneous code example:
```compile_fail,E0308
let x: i32 = \"I am not a number!\";
// |
// type `i32` assigned to variable `x`
```
-"},"level":"error","spans":[{"file_name":"$DIR/json-bom-plus-crlf.rs","byte_start":666,"byte_end":667,"line_start":18,"line_end":18,"column_start":22,"column_end":23,"is_primary":true,"text":[{"text":" let s : String = 1","highlight_start":22,"highlight_end":23}],"label":"expected struct `std::string::String`, found integer","suggested_replacement":null,"suggestion_applicability":null,"expansion":null},{"file_name":"$DIR/json-bom-plus-crlf.rs","byte_start":657,"byte_end":663,"line_start":18,"line_end":18,"column_start":13,"column_end":19,"is_primary":false,"text":[{"text":" let s : String = 1","highlight_start":13,"highlight_end":19}],"label":"expected due to this","suggested_replacement":null,"suggestion_applicability":null,"expansion":null}],"children":[{"message":"try using a conversion method","code":null,"level":"help","spans":[{"file_name":"$DIR/json-bom-plus-crlf.rs","byte_start":666,"byte_end":667,"line_start":18,"line_end":18,"column_start":22,"column_end":23,"is_primary":true,"text":[{"text":" let s : String = 1","highlight_start":22,"highlight_end":23}],"label":null,"suggested_replacement":"1.to_string()","suggestion_applicability":"MaybeIncorrect","expansion":null}],"children":[],"rendered":null}],"rendered":"$DIR/json-bom-plus-crlf.rs:18:22: error[E0308]: mismatched types
-"}
-{"message":"mismatched types","code":{"code":"E0308","explanation":"This error occurs when the compiler was unable to infer the concrete type of a
+
+This error occurs when the compiler was unable to infer the concrete type of a
variable. It can occur for several cases, the most common of which is a
mismatch in the expected type that the compiler inferred for a variable's
initializing expression, and the actual type explicitly assigned to the
variable.
+"},"level":"error","spans":[{"file_name":"$DIR/json-bom-plus-crlf.rs","byte_start":666,"byte_end":667,"line_start":18,"line_end":18,"column_start":22,"column_end":23,"is_primary":true,"text":[{"text":" let s : String = 1","highlight_start":22,"highlight_end":23}],"label":"expected struct `std::string::String`, found integer","suggested_replacement":null,"suggestion_applicability":null,"expansion":null},{"file_name":"$DIR/json-bom-plus-crlf.rs","byte_start":657,"byte_end":663,"line_start":18,"line_end":18,"column_start":13,"column_end":19,"is_primary":false,"text":[{"text":" let s : String = 1","highlight_start":13,"highlight_end":19}],"label":"expected due to this","suggested_replacement":null,"suggestion_applicability":null,"expansion":null}],"children":[{"message":"try using a conversion method","code":null,"level":"help","spans":[{"file_name":"$DIR/json-bom-plus-crlf.rs","byte_start":666,"byte_end":667,"line_start":18,"line_end":18,"column_start":22,"column_end":23,"is_primary":true,"text":[{"text":" let s : String = 1","highlight_start":22,"highlight_end":23}],"label":null,"suggested_replacement":"1.to_string()","suggestion_applicability":"MaybeIncorrect","expansion":null}],"children":[],"rendered":null}],"rendered":"$DIR/json-bom-plus-crlf.rs:18:22: error[E0308]: mismatched types
+"}
+{"message":"mismatched types","code":{"code":"E0308","explanation":"Expected type did not match the received type.
-For example:
+Erroneous code example:
```compile_fail,E0308
let x: i32 = \"I am not a number!\";
// |
// type `i32` assigned to variable `x`
```
-"},"level":"error","spans":[{"file_name":"$DIR/json-bom-plus-crlf.rs","byte_start":730,"byte_end":731,"line_start":22,"line_end":22,"column_start":1,"column_end":2,"is_primary":true,"text":[{"text":"1; // Error after the newline.","highlight_start":1,"highlight_end":2}],"label":"expected struct `std::string::String`, found integer","suggested_replacement":null,"suggestion_applicability":null,"expansion":null},{"file_name":"$DIR/json-bom-plus-crlf.rs","byte_start":720,"byte_end":726,"line_start":21,"line_end":21,"column_start":13,"column_end":19,"is_primary":false,"text":[{"text":" let s : String =","highlight_start":13,"highlight_end":19}],"label":"expected due to this","suggested_replacement":null,"suggestion_applicability":null,"expansion":null}],"children":[{"message":"try using a conversion method","code":null,"level":"help","spans":[{"file_name":"$DIR/json-bom-plus-crlf.rs","byte_start":730,"byte_end":731,"line_start":22,"line_end":22,"column_start":1,"column_end":2,"is_primary":true,"text":[{"text":"1; // Error after the newline.","highlight_start":1,"highlight_end":2}],"label":null,"suggested_replacement":"1.to_string()","suggestion_applicability":"MaybeIncorrect","expansion":null}],"children":[],"rendered":null}],"rendered":"$DIR/json-bom-plus-crlf.rs:22:1: error[E0308]: mismatched types
-"}
-{"message":"mismatched types","code":{"code":"E0308","explanation":"This error occurs when the compiler was unable to infer the concrete type of a
+
+This error occurs when the compiler was unable to infer the concrete type of a
variable. It can occur for several cases, the most common of which is a
mismatch in the expected type that the compiler inferred for a variable's
initializing expression, and the actual type explicitly assigned to the
variable.
+"},"level":"error","spans":[{"file_name":"$DIR/json-bom-plus-crlf.rs","byte_start":730,"byte_end":731,"line_start":22,"line_end":22,"column_start":1,"column_end":2,"is_primary":true,"text":[{"text":"1; // Error after the newline.","highlight_start":1,"highlight_end":2}],"label":"expected struct `std::string::String`, found integer","suggested_replacement":null,"suggestion_applicability":null,"expansion":null},{"file_name":"$DIR/json-bom-plus-crlf.rs","byte_start":720,"byte_end":726,"line_start":21,"line_end":21,"column_start":13,"column_end":19,"is_primary":false,"text":[{"text":" let s : String =","highlight_start":13,"highlight_end":19}],"label":"expected due to this","suggested_replacement":null,"suggestion_applicability":null,"expansion":null}],"children":[{"message":"try using a conversion method","code":null,"level":"help","spans":[{"file_name":"$DIR/json-bom-plus-crlf.rs","byte_start":730,"byte_end":731,"line_start":22,"line_end":22,"column_start":1,"column_end":2,"is_primary":true,"text":[{"text":"1; // Error after the newline.","highlight_start":1,"highlight_end":2}],"label":null,"suggested_replacement":"1.to_string()","suggestion_applicability":"MaybeIncorrect","expansion":null}],"children":[],"rendered":null}],"rendered":"$DIR/json-bom-plus-crlf.rs:22:1: error[E0308]: mismatched types
+"}
+{"message":"mismatched types","code":{"code":"E0308","explanation":"Expected type did not match the received type.
-For example:
+Erroneous code example:
```compile_fail,E0308
let x: i32 = \"I am not a number!\";
// |
// type `i32` assigned to variable `x`
```
+
+This error occurs when the compiler was unable to infer the concrete type of a
+variable. It can occur for several cases, the most common of which is a
+mismatch in the expected type that the compiler inferred for a variable's
+initializing expression, and the actual type explicitly assigned to the
+variable.
"},"level":"error","spans":[{"file_name":"$DIR/json-bom-plus-crlf.rs","byte_start":786,"byte_end":794,"line_start":24,"line_end":25,"column_start":22,"column_end":6,"is_primary":true,"text":[{"text":" let s : String = (","highlight_start":22,"highlight_end":23},{"text":" ); // Error spanning the newline.","highlight_start":1,"highlight_end":6}],"label":"expected struct `std::string::String`, found `()`","suggested_replacement":null,"suggestion_applicability":null,"expansion":null},{"file_name":"$DIR/json-bom-plus-crlf.rs","byte_start":777,"byte_end":783,"line_start":24,"line_end":24,"column_start":13,"column_end":19,"is_primary":false,"text":[{"text":" let s : String = (","highlight_start":13,"highlight_end":19}],"label":"expected due to this","suggested_replacement":null,"suggestion_applicability":null,"expansion":null}],"children":[],"rendered":"$DIR/json-bom-plus-crlf.rs:24:22: error[E0308]: mismatched types
"}
{"message":"aborting due to 4 previous errors","code":null,"level":"error","spans":[],"children":[],"rendered":"error: aborting due to 4 previous errors
use extern::foo; //~ ERROR expected identifier, found keyword `extern`
+ //~| ERROR unresolved import `r#extern`
fn main() {}
LL | use r#extern::foo;
| ^^^^^^^^
-error: aborting due to previous error
+error[E0432]: unresolved import `r#extern`
+ --> $DIR/keyword-extern-as-identifier-use.rs:1:5
+ |
+LL | use extern::foo;
+ | ^^^^^^ maybe a missing crate `r#extern`?
+
+error: aborting due to 2 previous errors
+For more information about this error, try `rustc --explain E0432`.
--> $DIR/label_break_value_illegal_uses.rs:6:12
|
LL | unsafe 'b: {}
- | ^^ expected `{`
+ | ^^----
+ | |
+ | expected `{`
+ | help: try placing this code inside a block: `{ 'b: {} }`
error: expected `{`, found `'b`
--> $DIR/label_break_value_illegal_uses.rs:10:13
| lifetime `'a` defined here
LL |
LL | if x > y { x } else { y }
- | ^ method was supposed to return data with lifetime `'a` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'a` but it is returning data with lifetime `'1`
error: aborting due to previous error
| lifetime `'a` defined here
LL |
LL | x
- | ^ method was supposed to return data with lifetime `'1` but it is returning data with lifetime `'a`
+ | ^ associated function was supposed to return data with lifetime `'1` but it is returning data with lifetime `'a`
error: aborting due to previous error
| lifetime `'a` defined here
LL |
LL | if true { x } else { self }
- | ^^^^ method was supposed to return data with lifetime `'a` but it is returning data with lifetime `'1`
+ | ^^^^ associated function was supposed to return data with lifetime `'a` but it is returning data with lifetime `'1`
error: aborting due to previous error
| |
| let's call the lifetime of this reference `'2`
LL | x
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: aborting due to previous error
| |
| let's call the lifetime of this reference `'2`
LL | if true { x } else { self }
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: aborting due to previous error
| ^^^
|
= note: `#[deny(overflowing_literals)]` on by default
+ = note: the literal `256` does not fit into the type `u8` whose range is `0..=255`
error: range endpoint is out of range for `u8`
--> $DIR/deny-overflowing-literals.rs:5:14
--- /dev/null
+// build-fail
+// only-x86_64
+
+fn main() {
+ Bug::V([0; !0]); //~ ERROR is too big for the current
+}
+
+enum Bug {
+ V([u8; !0]),
+}
--- /dev/null
+error: the type `[u8; 18446744073709551615]` is too big for the current architecture
+ --> $DIR/issue-69485-var-size-diffs-too-large.rs:5:12
+ |
+LL | Bug::V([0; !0]);
+ | ^^^^^^^
+
+error: aborting due to previous error
+
--- /dev/null
+// check-pass
+// compile-flags: -W rust-2018-compatibility
+// error-pattern: `try` is a keyword in the 2018 edition
+
+fn main() {}
+
+mod lint_pre_expansion_extern_module_aux;
--- /dev/null
+warning: `try` is a keyword in the 2018 edition
+ --> $DIR/lint_pre_expansion_extern_module_aux.rs:3:8
+ |
+LL | pub fn try() {}
+ | ^^^ help: you can use a raw identifier to stay compatible: `r#try`
+ |
+ = note: `-W keyword-idents` implied by `-W rust-2018-compatibility`
+ = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in the 2018 edition!
+ = note: for more information, see issue #49716 <https://github.com/rust-lang/rust/issues/49716>
+
|
LL | let range_c = 0..=256;
| ^^^
+ |
+ = note: the literal `256` does not fit into the type `u8` whose range is `0..=255`
error: literal out of range for `u8`
--> $DIR/lint-range-endpoint-overflow.rs:7:19
|
LL | let range_d = 256..5;
| ^^^
+ |
+ = note: the literal `256` does not fit into the type `u8` whose range is `0..=255`
error: literal out of range for `u8`
--> $DIR/lint-range-endpoint-overflow.rs:8:22
|
LL | let range_e = 0..257;
| ^^^
+ |
+ = note: the literal `257` does not fit into the type `u8` whose range is `0..=255`
error: range endpoint is out of range for `u8`
--> $DIR/lint-range-endpoint-overflow.rs:9:20
|
LL | #![warn(overflowing_literals)]
| ^^^^^^^^^^^^^^^^^^^^
+ = note: the literal `128` does not fit into the type `i8` whose range is `-128..=127`
error: aborting due to previous error
|
LL | #![warn(overflowing_literals)]
| ^^^^^^^^^^^^^^^^^^^^
+ = note: the literal `200` does not fit into the type `i8` whose range is `-128..=127`
error: aborting due to previous error
|
LL | #![deny(overflowing_literals)]
| ^^^^^^^^^^^^^^^^^^^^
+ = note: the literal `256` does not fit into the type `u8` whose range is `0..=255`
error: literal out of range for `u8`
--> $DIR/lint-type-overflow.rs:13:14
|
LL | let x1 = 256_u8;
| ^^^^^^
+ |
+ = note: the literal `256_u8` does not fit into the type `u8` whose range is `0..=255`
error: literal out of range for `i8`
--> $DIR/lint-type-overflow.rs:16:18
|
LL | let x1: i8 = 128;
| ^^^
+ |
+ = note: the literal `128` does not fit into the type `i8` whose range is `-128..=127`
error: literal out of range for `i8`
--> $DIR/lint-type-overflow.rs:18:19
|
LL | let x3: i8 = -129;
| ^^^
+ |
+ = note: the literal `129` does not fit into the type `i8` whose range is `-128..=127`
error: literal out of range for `i8`
--> $DIR/lint-type-overflow.rs:19:19
|
LL | let x3: i8 = -(129);
| ^^^^^
+ |
+ = note: the literal `129` does not fit into the type `i8` whose range is `-128..=127`
error: literal out of range for `i8`
--> $DIR/lint-type-overflow.rs:20:20
|
LL | let x3: i8 = -{129};
| ^^^
+ |
+ = note: the literal `129` does not fit into the type `i8` whose range is `-128..=127`
error: literal out of range for `i8`
--> $DIR/lint-type-overflow.rs:22:10
|
LL | test(1000);
| ^^^^
+ |
+ = note: the literal `1000` does not fit into the type `i8` whose range is `-128..=127`
error: literal out of range for `i8`
--> $DIR/lint-type-overflow.rs:24:13
|
LL | let x = 128_i8;
| ^^^^^^
+ |
+ = note: the literal `128_i8` does not fit into the type `i8` whose range is `-128..=127`
error: literal out of range for `i8`
--> $DIR/lint-type-overflow.rs:28:14
|
LL | let x = -129_i8;
| ^^^^^^
+ |
+ = note: the literal `129_i8` does not fit into the type `i8` whose range is `-128..=127`
error: literal out of range for `i32`
--> $DIR/lint-type-overflow.rs:32:18
|
LL | let x: i32 = 2147483648;
| ^^^^^^^^^^
+ |
+ = note: the literal `2147483648` does not fit into the type `i32` whose range is `-2147483648..=2147483647`
error: literal out of range for `i32`
--> $DIR/lint-type-overflow.rs:33:13
|
LL | let x = 2147483648_i32;
| ^^^^^^^^^^^^^^
+ |
+ = note: the literal `2147483648_i32` does not fit into the type `i32` whose range is `-2147483648..=2147483647`
error: literal out of range for `i32`
--> $DIR/lint-type-overflow.rs:36:19
|
LL | let x: i32 = -2147483649;
| ^^^^^^^^^^
+ |
+ = note: the literal `2147483649` does not fit into the type `i32` whose range is `-2147483648..=2147483647`
error: literal out of range for `i32`
--> $DIR/lint-type-overflow.rs:37:14
|
LL | let x = -2147483649_i32;
| ^^^^^^^^^^^^^^
+ |
+ = note: the literal `2147483649_i32` does not fit into the type `i32` whose range is `-2147483648..=2147483647`
error: literal out of range for `i32`
--> $DIR/lint-type-overflow.rs:38:13
|
LL | let x = 2147483648;
| ^^^^^^^^^^
+ |
+ = note: the literal `2147483648` does not fit into the type `i32` whose range is `-2147483648..=2147483647`
error: literal out of range for `i64`
--> $DIR/lint-type-overflow.rs:40:13
|
LL | let x = 9223372036854775808_i64;
| ^^^^^^^^^^^^^^^^^^^^^^^
+ |
+ = note: the literal `9223372036854775808_i64` does not fit into the type `i64` whose range is `-9223372036854775808..=9223372036854775807`
error: literal out of range for `i64`
--> $DIR/lint-type-overflow.rs:42:13
|
LL | let x = 18446744073709551615_i64;
| ^^^^^^^^^^^^^^^^^^^^^^^^
+ |
+ = note: the literal `18446744073709551615_i64` does not fit into the type `i64` whose range is `-9223372036854775808..=9223372036854775807`
error: literal out of range for `i64`
--> $DIR/lint-type-overflow.rs:43:19
|
LL | let x: i64 = -9223372036854775809;
| ^^^^^^^^^^^^^^^^^^^
+ |
+ = note: the literal `9223372036854775809` does not fit into the type `i64` whose range is `-9223372036854775808..=9223372036854775807`
error: literal out of range for `i64`
--> $DIR/lint-type-overflow.rs:44:14
|
LL | let x = -9223372036854775809_i64;
| ^^^^^^^^^^^^^^^^^^^^^^^
+ |
+ = note: the literal `9223372036854775809_i64` does not fit into the type `i64` whose range is `-9223372036854775808..=9223372036854775807`
error: aborting due to 18 previous errors
|
LL | #![deny(overflowing_literals)]
| ^^^^^^^^^^^^^^^^^^^^
+ = note: the literal `128` does not fit into the type `i8` whose range is `-128..=127`
error: literal out of range for `f32`
--> $DIR/lint-type-overflow2.rs:9:14
|
LL | let x = -3.40282357e+38_f32;
| ^^^^^^^^^^^^^^^^^^
+ |
+ = note: the literal `3.40282357e+38_f32` does not fit into the type `f32` and will be converted to `std::f32::INFINITY`
error: literal out of range for `f32`
--> $DIR/lint-type-overflow2.rs:10:14
|
LL | let x = 3.40282357e+38_f32;
| ^^^^^^^^^^^^^^^^^^
+ |
+ = note: the literal `3.40282357e+38_f32` does not fit into the type `f32` and will be converted to `std::f32::INFINITY`
error: literal out of range for `f64`
--> $DIR/lint-type-overflow2.rs:11:14
|
LL | let x = -1.7976931348623159e+308_f64;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ |
+ = note: the literal `1.7976931348623159e+308_f64` does not fit into the type `f64` and will be converted to `std::f64::INFINITY`
error: literal out of range for `f64`
--> $DIR/lint-type-overflow2.rs:12:14
|
LL | let x = 1.7976931348623159e+308_f64;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ |
+ = note: the literal `1.7976931348623159e+308_f64` does not fit into the type `f64` and will be converted to `std::f64::INFINITY`
error: aborting due to 5 previous errors
--- /dev/null
+// ignore-test: not a test
+
+pub fn try() {}
// aux-build:redundant-semi-proc-macro-def.rs
-#![deny(redundant_semicolon)]
+#![deny(redundant_semicolons)]
extern crate redundant_semi_proc_macro;
use redundant_semi_proc_macro::should_preserve_spans;
-TokenStream [Ident { ident: "fn", span: #0 bytes(197..199) }, Ident { ident: "span_preservation", span: #0 bytes(200..217) }, Group { delimiter: Parenthesis, stream: TokenStream [], span: #0 bytes(217..219) }, Group { delimiter: Brace, stream: TokenStream [Ident { ident: "let", span: #0 bytes(227..230) }, Ident { ident: "tst", span: #0 bytes(231..234) }, Punct { ch: '=', spacing: Alone, span: #0 bytes(235..236) }, Literal { lit: Lit { kind: Integer, symbol: "123", suffix: None }, span: Span { lo: BytePos(237), hi: BytePos(240), ctxt: #0 } }, Punct { ch: ';', spacing: Joint, span: #0 bytes(240..241) }, Punct { ch: ';', spacing: Alone, span: #0 bytes(241..242) }, Ident { ident: "match", span: #0 bytes(288..293) }, Ident { ident: "tst", span: #0 bytes(294..297) }, Group { delimiter: Brace, stream: TokenStream [Literal { lit: Lit { kind: Integer, symbol: "123", suffix: None }, span: Span { lo: BytePos(482), hi: BytePos(485), ctxt: #0 } }, Punct { ch: '=', spacing: Joint, span: #0 bytes(486..488) }, Punct { ch: '>', spacing: Alone, span: #0 bytes(486..488) }, Group { delimiter: Parenthesis, stream: TokenStream [], span: #0 bytes(489..491) }, Punct { ch: ',', spacing: Alone, span: #0 bytes(491..492) }, Ident { ident: "_", span: #0 bytes(501..502) }, Punct { ch: '=', spacing: Joint, span: #0 bytes(503..505) }, Punct { ch: '>', spacing: Alone, span: #0 bytes(503..505) }, Group { delimiter: Parenthesis, stream: TokenStream [], span: #0 bytes(506..508) }], span: #0 bytes(298..514) }, Punct { ch: ';', spacing: Joint, span: #0 bytes(514..515) }, Punct { ch: ';', spacing: Joint, span: #0 bytes(515..516) }, Punct { ch: ';', spacing: Alone, span: #0 bytes(516..517) }], span: #0 bytes(221..561) }]
+TokenStream [Ident { ident: "fn", span: #0 bytes(198..200) }, Ident { ident: "span_preservation", span: #0 bytes(201..218) }, Group { delimiter: Parenthesis, stream: TokenStream [], span: #0 bytes(218..220) }, Group { delimiter: Brace, stream: TokenStream [Ident { ident: "let", span: #0 bytes(228..231) }, Ident { ident: "tst", span: #0 bytes(232..235) }, Punct { ch: '=', spacing: Alone, span: #0 bytes(236..237) }, Literal { lit: Lit { kind: Integer, symbol: "123", suffix: None }, span: Span { lo: BytePos(238), hi: BytePos(241), ctxt: #0 } }, Punct { ch: ';', spacing: Joint, span: #0 bytes(241..242) }, Punct { ch: ';', spacing: Alone, span: #0 bytes(242..243) }, Ident { ident: "match", span: #0 bytes(289..294) }, Ident { ident: "tst", span: #0 bytes(295..298) }, Group { delimiter: Brace, stream: TokenStream [Literal { lit: Lit { kind: Integer, symbol: "123", suffix: None }, span: Span { lo: BytePos(483), hi: BytePos(486), ctxt: #0 } }, Punct { ch: '=', spacing: Joint, span: #0 bytes(487..489) }, Punct { ch: '>', spacing: Alone, span: #0 bytes(487..489) }, Group { delimiter: Parenthesis, stream: TokenStream [], span: #0 bytes(490..492) }, Punct { ch: ',', spacing: Alone, span: #0 bytes(492..493) }, Ident { ident: "_", span: #0 bytes(502..503) }, Punct { ch: '=', spacing: Joint, span: #0 bytes(504..506) }, Punct { ch: '>', spacing: Alone, span: #0 bytes(504..506) }, Group { delimiter: Parenthesis, stream: TokenStream [], span: #0 bytes(507..509) }], span: #0 bytes(299..515) }, Punct { ch: ';', spacing: Joint, span: #0 bytes(515..516) }, Punct { ch: ';', spacing: Joint, span: #0 bytes(516..517) }, Punct { ch: ';', spacing: Alone, span: #0 bytes(517..518) }], span: #0 bytes(222..562) }]
error: unnecessary trailing semicolon
--> $DIR/redundant-semi-proc-macro.rs:9:19
|
note: the lint level is defined here
--> $DIR/redundant-semi-proc-macro.rs:3:9
|
-LL | #![deny(redundant_semicolon)]
- | ^^^^^^^^^^^^^^^^^^^
+LL | #![deny(redundant_semicolons)]
+ | ^^^^^^^^^^^^^^^^^^^^
error: unnecessary trailing semicolons
--> $DIR/redundant-semi-proc-macro.rs:16:7
// ignore-tidy-tab
#![warn(unused_mut, unused_parens)] // UI tests pass `-A unused`—see Issue #43896
-#![feature(no_debug)]
#[no_mangle] const DISCOVERY: usize = 1;
//~^ ERROR const items should never be `#[no_mangle]`
warp_factor: f32,
}
-#[no_debug] // should suggest removal of deprecated attribute
-//~^ WARN deprecated
-//~| HELP remove this attribute
fn main() {
while true {
//~^ WARN denote infinite loops
warning: denote infinite loops with `loop { ... }`
- --> $DIR/suggestions.rs:46:5
+ --> $DIR/suggestions.rs:42:5
|
LL | while true {
| ^^^^^^^^^^ help: use `loop`
= note: `#[warn(while_true)]` on by default
warning: unnecessary parentheses around assigned value
- --> $DIR/suggestions.rs:49:31
+ --> $DIR/suggestions.rs:45:31
|
LL | let mut registry_no = (format!("NX-{}", 74205));
| ^^^^^^^^^^^^^^^^^^^^^^^^^ help: remove these parentheses
LL | #![warn(unused_mut, unused_parens)] // UI tests pass `-A unused`—see Issue #43896
| ^^^^^^^^^^^^^
-warning: use of deprecated attribute `no_debug`: the `#[no_debug]` attribute was an experimental feature that has been deprecated due to lack of demand. See https://github.com/rust-lang/rust/issues/29721
- --> $DIR/suggestions.rs:42:1
- |
-LL | #[no_debug] // should suggest removal of deprecated attribute
- | ^^^^^^^^^^^ help: remove this attribute
- |
- = note: `#[warn(deprecated)]` on by default
-
warning: variable does not need to be mutable
- --> $DIR/suggestions.rs:49:13
+ --> $DIR/suggestions.rs:45:13
|
LL | let mut registry_no = (format!("NX-{}", 74205));
| ----^^^^^^^^^^^
| ^^^^^^^^^^
warning: variable does not need to be mutable
- --> $DIR/suggestions.rs:55:13
+ --> $DIR/suggestions.rs:51:13
|
LL | let mut
| _____________^
| help: remove this `mut`
error: const items should never be `#[no_mangle]`
- --> $DIR/suggestions.rs:6:14
+ --> $DIR/suggestions.rs:5:14
|
LL | #[no_mangle] const DISCOVERY: usize = 1;
| -----^^^^^^^^^^^^^^^^^^^^^^
= note: `#[deny(no_mangle_const_items)]` on by default
warning: functions generic over types or consts must be mangled
- --> $DIR/suggestions.rs:12:1
+ --> $DIR/suggestions.rs:11:1
|
LL | #[no_mangle]
| ------------ help: remove this attribute
= note: `#[warn(no_mangle_generic_items)]` on by default
warning: the `warp_factor:` in this pattern is redundant
- --> $DIR/suggestions.rs:61:23
+ --> $DIR/suggestions.rs:57:23
|
LL | Equinox { warp_factor: warp_factor } => {}
| ^^^^^^^^^^^^^^^^^^^^^^^^ help: use shorthand field pattern: `warp_factor`
= note: `#[warn(non_shorthand_field_patterns)]` on by default
error: const items should never be `#[no_mangle]`
- --> $DIR/suggestions.rs:22:18
+ --> $DIR/suggestions.rs:21:18
|
LL | #[no_mangle] pub const DAUNTLESS: bool = true;
| ---------^^^^^^^^^^^^^^^^^^^^^^^^
| help: try a static value: `pub static`
warning: functions generic over types or consts must be mangled
- --> $DIR/suggestions.rs:25:18
+ --> $DIR/suggestions.rs:24:18
|
LL | #[no_mangle] pub fn val_jean<T>() {}
| ------------ ^^^^^^^^^^^^^^^^^^^^^^^
| help: remove this attribute
error: const items should never be `#[no_mangle]`
- --> $DIR/suggestions.rs:30:18
+ --> $DIR/suggestions.rs:29:18
|
LL | #[no_mangle] pub(crate) const VETAR: bool = true;
| ----------------^^^^^^^^^^^^^^^^^^^^
| help: try a static value: `pub static`
warning: functions generic over types or consts must be mangled
- --> $DIR/suggestions.rs:33:18
+ --> $DIR/suggestions.rs:32:18
|
LL | #[no_mangle] pub(crate) fn crossfield<T>() {}
| ------------ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
LL | #![warn(overflowing_literals)]
| ^^^^^^^^^^^^^^^^^^^^
+ = note: the literal `255i8` does not fit into the type `i8` whose range is `-128..=127`
warning: literal out of range for i8
--> $DIR/type-overflow.rs:10:16
LL | let fail = 0b1000_0001i8;
| ^^^^^^^^^^^^^ help: consider using `u8` instead: `0b1000_0001u8`
|
- = note: the literal `0b1000_0001i8` (decimal `129`) does not fit into an `i8` and will become `-127i8`
+ = note: the literal `0b1000_0001i8` (decimal `129`) does not fit into the type `i8` and will become `-127i8`
warning: literal out of range for i64
--> $DIR/type-overflow.rs:12:16
LL | let fail = 0x8000_0000_0000_0000i64;
| ^^^^^^^^^^^^^^^^^^^^^^^^ help: consider using `u64` instead: `0x8000_0000_0000_0000u64`
|
- = note: the literal `0x8000_0000_0000_0000i64` (decimal `9223372036854775808`) does not fit into an `i64` and will become `-9223372036854775808i64`
+ = note: the literal `0x8000_0000_0000_0000i64` (decimal `9223372036854775808`) does not fit into the type `i64` and will become `-9223372036854775808i64`
warning: literal out of range for u32
--> $DIR/type-overflow.rs:14:16
LL | let fail = 0x1_FFFF_FFFFu32;
| ^^^^^^^^^^^^^^^^ help: consider using `u64` instead: `0x1_FFFF_FFFFu64`
|
- = note: the literal `0x1_FFFF_FFFFu32` (decimal `8589934591`) does not fit into an `u32` and will become `4294967295u32`
+ = note: the literal `0x1_FFFF_FFFFu32` (decimal `8589934591`) does not fit into the type `u32` and will become `4294967295u32`
warning: literal out of range for i128
--> $DIR/type-overflow.rs:16:22
LL | let fail: i128 = 0x8000_0000_0000_0000_0000_0000_0000_0000;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
- = note: the literal `0x8000_0000_0000_0000_0000_0000_0000_0000` (decimal `170141183460469231731687303715884105728`) does not fit into an `i128` and will become `-170141183460469231731687303715884105728i128`
+ = note: the literal `0x8000_0000_0000_0000_0000_0000_0000_0000` (decimal `170141183460469231731687303715884105728`) does not fit into the type `i128` and will become `-170141183460469231731687303715884105728i128`
= help: consider using `u128` instead
warning: literal out of range for i32
LL | let fail = 0x8FFF_FFFF_FFFF_FFFE;
| ^^^^^^^^^^^^^^^^^^^^^
|
- = note: the literal `0x8FFF_FFFF_FFFF_FFFE` (decimal `10376293541461622782`) does not fit into an `i32` and will become `-2i32`
+ = note: the literal `0x8FFF_FFFF_FFFF_FFFE` (decimal `10376293541461622782`) does not fit into the type `i32` and will become `-2i32`
= help: consider using `i128` instead
warning: literal out of range for i8
LL | let fail = -0b1111_1111i8;
| ^^^^^^^^^^^^^ help: consider using `i16` instead: `0b1111_1111i16`
|
- = note: the literal `0b1111_1111i8` (decimal `255`) does not fit into an `i8` and will become `-1i8`
+ = note: the literal `0b1111_1111i8` (decimal `255`) does not fit into the type `i8` and will become `-1i8`
-error: the type `&'static T` does not permit zero-initialization
+error: the type `&T` does not permit zero-initialization
--> $DIR/uninitialized-zeroed.rs:29:32
|
LL | let _val: &'static T = mem::zeroed();
| ^^^^^^^^^^^^^
= note: references must be non-null
-error: the type `&'static T` does not permit being left uninitialized
+error: the type `&T` does not permit being left uninitialized
--> $DIR/uninitialized-zeroed.rs:30:32
|
LL | let _val: &'static T = mem::uninitialized();
|
= note: references must be non-null
-error: the type `Wrap<&'static T>` does not permit zero-initialization
+error: the type `Wrap<&T>` does not permit zero-initialization
--> $DIR/uninitialized-zeroed.rs:32:38
|
LL | let _val: Wrap<&'static T> = mem::zeroed();
LL | struct Wrap<T> { wrapped: T }
| ^^^^^^^^^^
-error: the type `Wrap<&'static T>` does not permit being left uninitialized
+error: the type `Wrap<&T>` does not permit being left uninitialized
--> $DIR/uninitialized-zeroed.rs:33:38
|
LL | let _val: Wrap<&'static T> = mem::uninitialized();
|
= note: enums with no variants have no valid value
-error: the type `&'static i32` does not permit zero-initialization
+error: the type `&i32` does not permit zero-initialization
--> $DIR/uninitialized-zeroed.rs:49:34
|
LL | let _val: &'static i32 = mem::zeroed();
|
= note: references must be non-null
-error: the type `&'static i32` does not permit being left uninitialized
+error: the type `&i32` does not permit being left uninitialized
--> $DIR/uninitialized-zeroed.rs:50:34
|
LL | let _val: &'static i32 = mem::uninitialized();
|
= note: `NonBig` must be initialized inside its custom valid range
-error: the type `&'static i32` does not permit zero-initialization
+error: the type `&i32` does not permit zero-initialization
--> $DIR/uninitialized-zeroed.rs:84:34
|
LL | let _val: &'static i32 = mem::transmute(0usize);
|
= note: references must be non-null
-error: the type `&'static [i32]` does not permit zero-initialization
+error: the type `&[i32]` does not permit zero-initialization
--> $DIR/uninitialized-zeroed.rs:85:36
|
LL | let _val: &'static [i32] = mem::transmute((0usize, 0usize));
"message": "cannot find type `Iter` in this scope",
"code": {
"code": "E0412",
- "explanation": "The type name used is not in scope.
+ "explanation": "A used type name is not in scope.
Erroneous code examples:
error: expected one of `!`, `.`, `::`, `;`, `?`, `{`, `}`, or an operator, found `error`
--> $DIR/main.rs:10:20
|
-LL | / macro_rules! pong {
-LL | | () => { syntax error };
- | | ^^^^^ expected one of 8 possible tokens
-LL | | }
- | |_- in this expansion of `pong!`
+LL | / macro_rules! pong {
+LL | | () => { syntax error };
+ | | ^^^^^ expected one of 8 possible tokens
+LL | | }
+ | |__- in this expansion of `pong!`
...
-LL | ping!();
- | -------- in this macro invocation
+LL | ping!();
+ | -------- in this macro invocation
|
- ::: <::ping::ping macros>:1:1
+ ::: $DIR/auxiliary/ping.rs:5:1
|
-LL | () => { pong ! () ; }
- | ---------------------
- | | |
- | | in this macro invocation
- | in this expansion of `ping!`
+LL | / macro_rules! ping {
+LL | | () => {
+LL | | pong!();
+ | | -------- in this macro invocation
+LL | | }
+LL | | }
+ | |_- in this expansion of `ping!`
error: expected one of `!`, `.`, `::`, `;`, `?`, `{`, `}`, or an operator, found `error`
--> $DIR/main.rs:10:20
|
-LL | / macro_rules! pong {
-LL | | () => { syntax error };
- | | ^^^^^ expected one of 8 possible tokens
-LL | | }
- | |_- in this expansion of `pong!` (#5)
+LL | / macro_rules! pong {
+LL | | () => { syntax error };
+ | | ^^^^^ expected one of 8 possible tokens
+LL | | }
+ | |__- in this expansion of `pong!` (#5)
...
-LL | deep!();
- | -------- in this macro invocation (#1)
+LL | deep!();
+ | -------- in this macro invocation (#1)
|
- ::: <::ping::deep macros>:1:1
+ ::: $DIR/auxiliary/ping.rs:5:1
|
-LL | () => { foo ! () ; }
- | --------------------
- | | |
- | | in this macro invocation (#2)
- | in this expansion of `deep!` (#1)
- |
- ::: <::ping::foo macros>:1:1
- |
-LL | () => { bar ! () ; }
- | --------------------
- | | |
- | | in this macro invocation (#3)
- | in this expansion of `foo!` (#2)
- |
- ::: <::ping::bar macros>:1:1
- |
-LL | () => { ping ! () ; }
- | ---------------------
- | | |
- | | in this macro invocation (#4)
- | in this expansion of `bar!` (#3)
- |
- ::: <::ping::ping macros>:1:1
- |
-LL | () => { pong ! () ; }
- | ---------------------
- | | |
- | | in this macro invocation (#5)
- | in this expansion of `ping!` (#4)
+LL | / macro_rules! ping {
+LL | | () => {
+LL | | pong!();
+ | | -------- in this macro invocation (#5)
+LL | | }
+LL | | }
+ | |_- in this expansion of `ping!` (#4)
+...
+LL | / macro_rules! deep {
+LL | | () => {
+LL | | foo!();
+ | | ------- in this macro invocation (#2)
+LL | | }
+LL | | }
+ | |__- in this expansion of `deep!` (#1)
+...
+LL | / macro_rules! foo {
+LL | | () => {
+LL | | bar!();
+ | | ------- in this macro invocation (#3)
+LL | | }
+LL | | }
+ | |__- in this expansion of `foo!` (#2)
+...
+LL | / macro_rules! bar {
+LL | | () => {
+LL | | ping!();
+ | | -------- in this macro invocation (#4)
+LL | | }
+LL | | }
+ | |__- in this expansion of `bar!` (#3)
error: aborting due to 3 previous errors
//~^ ERROR no rules expected
assert!(true "whatever" blah);
- //~^ WARN unexpected string literal
+ //~^ ERROR unexpected string literal
//~^^ ERROR no rules expected
assert!(true;);
- //~^ WARN macro requires an expression
+ //~^ ERROR macro requires an expression
assert!(false || true "error message");
- //~^ WARN unexpected string literal
+ //~^ ERROR unexpected string literal
}
| |
| help: missing comma here
-warning: unexpected string literal
+error: unexpected string literal
--> $DIR/assert-trailing-junk.rs:15:18
|
LL | assert!(true "whatever" blah);
| -^^^^^^^^^^
| |
| help: try adding a comma
- |
- = note: this is going to be an error in the future
error: no rules expected the token `blah`
--> $DIR/assert-trailing-junk.rs:15:29
| |
| help: missing comma here
-warning: macro requires an expression as an argument
+error: macro requires an expression as an argument
--> $DIR/assert-trailing-junk.rs:19:5
|
LL | assert!(true;);
| ^^^^^^^^^^^^-^^
| |
| help: try removing semicolon
- |
- = note: this is going to be an error in the future
-warning: unexpected string literal
+error: unexpected string literal
--> $DIR/assert-trailing-junk.rs:22:27
|
LL | assert!(false || true "error message");
| -^^^^^^^^^^^^^^^
| |
| help: try adding a comma
- |
- = note: this is going to be an error in the future
-error: aborting due to 4 previous errors
+error: aborting due to 7 previous errors
--- /dev/null
+// Regression test for #58490
+
+macro_rules! a {
+ ( @1 $i:item ) => {
+ a! { @2 $i }
+ };
+ ( @2 $i:item ) => {
+ $i
+ };
+}
+mod b {
+ a! {
+ @1
+ #[macro_export]
+ macro_rules! b { () => () }
+ }
+ #[macro_export]
+ macro_rules! b { () => () }
+ //~^ ERROR: the name `b` is defined multiple times
+}
+mod c {
+ #[allow(unused_imports)]
+ use crate::b;
+}
+
+fn main() {}
--- /dev/null
+error[E0428]: the name `b` is defined multiple times
+ --> $DIR/issue-58490.rs:18:5
+ |
+LL | macro_rules! b { () => () }
+ | -------------- previous definition of the macro `b` here
+...
+LL | macro_rules! b { () => () }
+ | ^^^^^^^^^^^^^^ `b` redefined here
+ |
+ = note: `b` must be defined only once in the macro namespace of this module
+
+error: aborting due to previous error
+
+For more information about this error, try `rustc --explain E0428`.
error: 1 positional argument in format string, but no arguments were given
- --> $DIR/macro-comma-behavior.rs:21:23
+ --> $DIR/macro-comma-behavior.rs:20:23
|
LL | assert_eq!(1, 1, "{}",);
| ^^
error: 1 positional argument in format string, but no arguments were given
- --> $DIR/macro-comma-behavior.rs:24:23
+ --> $DIR/macro-comma-behavior.rs:23:23
|
LL | assert_ne!(1, 2, "{}",);
| ^^
error: 1 positional argument in format string, but no arguments were given
- --> $DIR/macro-comma-behavior.rs:30:29
+ --> $DIR/macro-comma-behavior.rs:29:29
|
LL | debug_assert_eq!(1, 1, "{}",);
| ^^
error: 1 positional argument in format string, but no arguments were given
- --> $DIR/macro-comma-behavior.rs:33:29
+ --> $DIR/macro-comma-behavior.rs:32:29
|
LL | debug_assert_ne!(1, 2, "{}",);
| ^^
error: 1 positional argument in format string, but no arguments were given
- --> $DIR/macro-comma-behavior.rs:54:19
+ --> $DIR/macro-comma-behavior.rs:53:19
|
LL | format_args!("{}",);
| ^^
error: 1 positional argument in format string, but no arguments were given
- --> $DIR/macro-comma-behavior.rs:72:21
+ --> $DIR/macro-comma-behavior.rs:71:21
|
LL | unimplemented!("{}",);
| ^^
error: 1 positional argument in format string, but no arguments were given
- --> $DIR/macro-comma-behavior.rs:81:24
+ --> $DIR/macro-comma-behavior.rs:80:24
|
LL | write!(f, "{}",)?;
| ^^
#[cfg(std)] use std::fmt;
#[cfg(core)] use core::fmt;
#[cfg(core)] #[lang = "eh_personality"] fn eh_personality() {}
-#[cfg(core)] #[lang = "eh_unwind_resume"] fn eh_unwind_resume() {}
#[cfg(core)] #[lang = "panic_impl"] fn panic_impl(panic: &core::panic::PanicInfo) -> ! { loop {} }
// (see documentation of the similarly-named test in run-pass)
error: 1 positional argument in format string, but no arguments were given
- --> $DIR/macro-comma-behavior.rs:21:23
+ --> $DIR/macro-comma-behavior.rs:20:23
|
LL | assert_eq!(1, 1, "{}",);
| ^^
error: 1 positional argument in format string, but no arguments were given
- --> $DIR/macro-comma-behavior.rs:24:23
+ --> $DIR/macro-comma-behavior.rs:23:23
|
LL | assert_ne!(1, 2, "{}",);
| ^^
error: 1 positional argument in format string, but no arguments were given
- --> $DIR/macro-comma-behavior.rs:30:29
+ --> $DIR/macro-comma-behavior.rs:29:29
|
LL | debug_assert_eq!(1, 1, "{}",);
| ^^
error: 1 positional argument in format string, but no arguments were given
- --> $DIR/macro-comma-behavior.rs:33:29
+ --> $DIR/macro-comma-behavior.rs:32:29
|
LL | debug_assert_ne!(1, 2, "{}",);
| ^^
error: 1 positional argument in format string, but no arguments were given
- --> $DIR/macro-comma-behavior.rs:38:18
+ --> $DIR/macro-comma-behavior.rs:37:18
|
LL | eprint!("{}",);
| ^^
error: 1 positional argument in format string, but no arguments were given
- --> $DIR/macro-comma-behavior.rs:50:18
+ --> $DIR/macro-comma-behavior.rs:49:18
|
LL | format!("{}",);
| ^^
error: 1 positional argument in format string, but no arguments were given
- --> $DIR/macro-comma-behavior.rs:54:19
+ --> $DIR/macro-comma-behavior.rs:53:19
|
LL | format_args!("{}",);
| ^^
error: 1 positional argument in format string, but no arguments were given
- --> $DIR/macro-comma-behavior.rs:61:17
+ --> $DIR/macro-comma-behavior.rs:60:17
|
LL | print!("{}",);
| ^^
error: 1 positional argument in format string, but no arguments were given
- --> $DIR/macro-comma-behavior.rs:72:21
+ --> $DIR/macro-comma-behavior.rs:71:21
|
LL | unimplemented!("{}",);
| ^^
error: 1 positional argument in format string, but no arguments were given
- --> $DIR/macro-comma-behavior.rs:81:24
+ --> $DIR/macro-comma-behavior.rs:80:24
|
LL | write!(f, "{}",)?;
| ^^
| ^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: expanding `println! { "Hello, World!" }`
- = note: to `{ $crate :: io :: _print ($crate :: format_args_nl ! ("Hello, World!")) ; }`
+ = note: to `{ $crate :: io :: _print($crate :: format_args_nl ! ("Hello, World!")) ; }`
+// FIXME: missing sysroot spans (#53081)
+// ignore-i586-unknown-linux-gnu
+// ignore-i586-unknown-linux-musl
+// ignore-i686-unknown-linux-musl
+
// error-pattern: cannot find a built-in macro with name `line`
#![feature(rustc_attrs)]
error: cannot find a built-in macro with name `unknown`
- --> $DIR/unknown-builtin.rs:6:1
+ --> $DIR/unknown-builtin.rs:11:1
|
LL | macro_rules! unknown { () => () }
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
error: cannot find a built-in macro with name `line`
- --> <::core::macros::builtin::line macros>:1:1
+ --> $SRC_DIR/libcore/macros/mod.rs:LL:COL
|
-LL | () => { } ;
- | ^^^^^^^^^^^
+LL | / macro_rules! line {
+LL | | () => {
+LL | | /* compiler built-in */
+LL | | };
+LL | | }
+ | |_____^
error: aborting due to 2 previous errors
-error[E0164]: expected tuple struct or tuple variant, found method `Path::new`
+error[E0164]: expected tuple struct or tuple variant, found associated function `Path::new`
--> $DIR/match-fn-call.rs:6:9
|
LL | Path::new("foo") => println!("foo"),
|
= help: for more information, visit https://doc.rust-lang.org/book/ch18-00-patterns.html
-error[E0164]: expected tuple struct or tuple variant, found method `Path::new`
+error[E0164]: expected tuple struct or tuple variant, found associated function `Path::new`
--> $DIR/match-fn-call.rs:8:9
|
LL | Path::new("bar") => println!("bar"),
error[E0308]: mismatched types
--> $DIR/match-tag-nullary.rs:4:40
|
+LL | enum B { B }
+ | - unit variant defined here
+LL |
LL | fn main() { let x: A = A::A; match x { B::B => { } } }
| - ^^^^ expected enum `A`, found enum `B`
| |
--- /dev/null
+// compile-flags: -Zsave-analysis
+// Also regression test for #69409
+
+struct Cat {
+ meows : usize,
+ how_hungry : isize,
+}
+
+impl Cat {
+ pub fn speak(&self) { self.meows += 1; }
+}
+
+fn cat(in_x : usize, in_y : isize) -> Cat {
+ Cat {
+ meows: in_x,
+ how_hungry: in_y
+ }
+}
+
+fn main() {
+ let nyan : Cat = cat(52, 99);
+ nyan.speak = || println!("meow"); //~ ERROR attempted to take value of method
+ nyan.speak += || println!("meow"); //~ ERROR attempted to take value of method
+}
--- /dev/null
+error[E0615]: attempted to take value of method `speak` on type `Cat`
+ --> $DIR/assign-to-method.rs:22:10
+ |
+LL | nyan.speak = || println!("meow");
+ | ^^^^^
+ |
+ = help: methods are immutable and cannot be assigned to
+
+error[E0615]: attempted to take value of method `speak` on type `Cat`
+ --> $DIR/assign-to-method.rs:23:10
+ |
+LL | nyan.speak += || println!("meow");
+ | ^^^^^
+ |
+ = help: methods are immutable and cannot be assigned to
+
+error: aborting due to 2 previous errors
+
+For more information about this error, try `rustc --explain E0615`.
error[E0689]: can't call method `pow` on ambiguous numeric type `{integer}`
--> $DIR/method-on-ambiguous-numeric-type.rs:30:9
|
-LL | mac!(bar);
- | ---------- you must specify a type for this binding, like `i32`
LL | bar.pow(2);
| ^^^
+ |
+help: you must specify a type for this binding, like `i32`
+ |
+LL | ($ident:ident) => { let $ident: i32 = 42; }
+ | ^^^^^^^^^^^
error: aborting due to 5 previous errors
fn main() {
match 0u32 {
Foo::bar => {}
- //~^ ERROR expected unit struct, unit variant or constant, found method `Foo::bar`
+ //~^ ERROR expected unit struct, unit variant or constant, found associated function
}
match 0u32 {
<Foo>::bar => {}
- //~^ ERROR expected unit struct, unit variant or constant, found method `Foo::bar`
+ //~^ ERROR expected unit struct, unit variant or constant, found associated function
}
match 0u32 {
<Foo>::trait_bar => {}
- //~^ ERROR expected unit struct, unit variant or constant, found method `Foo::trait_bar`
+ //~^ ERROR expected unit struct, unit variant or constant, found associated function
}
if let Foo::bar = 0u32 {}
- //~^ ERROR expected unit struct, unit variant or constant, found method `Foo::bar`
+ //~^ ERROR expected unit struct, unit variant or constant, found associated function
if let <Foo>::bar = 0u32 {}
- //~^ ERROR expected unit struct, unit variant or constant, found method `Foo::bar`
+ //~^ ERROR expected unit struct, unit variant or constant, found associated function
if let Foo::trait_bar = 0u32 {}
- //~^ ERROR expected unit struct, unit variant or constant, found method `Foo::trait_bar`
+ //~^ ERROR expected unit struct, unit variant or constant, found associated function
}
-error[E0533]: expected unit struct, unit variant or constant, found method `Foo::bar`
+error[E0533]: expected unit struct, unit variant or constant, found associated function `Foo::bar`
--> $DIR/method-path-in-pattern.rs:15:9
|
LL | Foo::bar => {}
| ^^^^^^^^
-error[E0533]: expected unit struct, unit variant or constant, found method `Foo::bar`
+error[E0533]: expected unit struct, unit variant or constant, found associated function `Foo::bar`
--> $DIR/method-path-in-pattern.rs:19:9
|
LL | <Foo>::bar => {}
| ^^^^^^^^^^
-error[E0533]: expected unit struct, unit variant or constant, found method `Foo::trait_bar`
+error[E0533]: expected unit struct, unit variant or constant, found associated function `Foo::trait_bar`
--> $DIR/method-path-in-pattern.rs:23:9
|
LL | <Foo>::trait_bar => {}
| ^^^^^^^^^^^^^^^^
-error[E0533]: expected unit struct, unit variant or constant, found method `Foo::bar`
+error[E0533]: expected unit struct, unit variant or constant, found associated function `Foo::bar`
--> $DIR/method-path-in-pattern.rs:26:12
|
LL | if let Foo::bar = 0u32 {}
| ^^^^^^^^
-error[E0533]: expected unit struct, unit variant or constant, found method `Foo::bar`
+error[E0533]: expected unit struct, unit variant or constant, found associated function `Foo::bar`
--> $DIR/method-path-in-pattern.rs:28:12
|
LL | if let <Foo>::bar = 0u32 {}
| ^^^^^^^^^^
-error[E0533]: expected unit struct, unit variant or constant, found method `Foo::trait_bar`
+error[E0533]: expected unit struct, unit variant or constant, found associated function `Foo::trait_bar`
--> $DIR/method-path-in-pattern.rs:30:12
|
LL | if let Foo::trait_bar = 0u32 {}
fn main() {
match 0u32 {
<Foo as MyTrait>::trait_bar => {}
- //~^ ERROR expected unit struct, unit variant or constant, found method `MyTrait::trait_bar`
+ //~^ ERROR expected unit struct, unit variant or constant, found associated function
}
}
-error[E0532]: expected unit struct, unit variant or constant, found method `MyTrait::trait_bar`
+error[E0532]: expected unit struct, unit variant or constant, found associated function `MyTrait::trait_bar`
--> $DIR/method-resolvable-path-in-pattern.rs:11:9
|
LL | <Foo as MyTrait>::trait_bar => {}
--- /dev/null
+// check-pass
+// compile-flags: --emit=mir,link
+// Regression test for #60390; this ICE requires the `--emit=mir` flag.
+
+fn main() {
+ enum Inner { Member(u32) };
+ Inner::Member(0);
+}
let y = &mut &2;
let z = &3;
// There's an implicit reborrow of `x` on the right-hand side of the
- // assignement. Note that writing an explicit reborrow would not show this
+ // assignment. Note that writing an explicit reborrow would not show this
// bug, as now there would be two reborrows on the right-hand side and at
// least one of them would happen before the left-hand side is evaluated.
*{ x = z; &mut *y } = x;
-error[E0409]: variable `y` is bound in inconsistent ways within the same match arm
+error[E0409]: variable `y` is bound inconsistently across alternatives separated by `|`
--> $DIR/E0409.rs:5:23
|
LL | (0, ref y) | (y, 0) => {}
error[E0583]: file not found for module `missing`
- --> $DIR/foo.rs:4:5
+ --> $DIR/foo.rs:4:1
|
LL | mod missing;
- | ^^^^^^^
+ | ^^^^^^^^^^^^
|
- = help: name the file either foo/missing.rs or foo/missing/mod.rs inside the directory "$DIR"
+ = help: to create the module `missing`, create file "$DIR/foo/missing.rs"
error: aborting due to previous error
error[E0583]: file not found for module `missing`
- --> $DIR/foo_inline.rs:4:9
+ --> $DIR/foo_inline.rs:4:5
|
LL | mod missing;
- | ^^^^^^^
+ | ^^^^^^^^^^^^
|
- = help: name the file either missing.rs or missing/mod.rs inside the directory "$DIR/foo_inline/inline"
+ = help: to create the module `missing`, create file "$DIR/foo_inline/inline/missing.rs"
error: aborting due to previous error
fn main() {
assert_eq!(mod_file_aux::bar(), 10);
+ //~^ ERROR failed to resolve: use of undeclared type or module `mod_file_aux`
}
error[E0584]: file for module `mod_file_disambig_aux` found at both mod_file_disambig_aux.rs and mod_file_disambig_aux/mod.rs
- --> $DIR/mod_file_disambig.rs:1:5
+ --> $DIR/mod_file_disambig.rs:1:1
|
LL | mod mod_file_disambig_aux;
- | ^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= help: delete or rename one of them to remove the ambiguity
-error: aborting due to previous error
+error[E0433]: failed to resolve: use of undeclared type or module `mod_file_aux`
+ --> $DIR/mod_file_disambig.rs:4:16
+ |
+LL | assert_eq!(mod_file_aux::bar(), 10);
+ | ^^^^^^^^^^^^ use of undeclared type or module `mod_file_aux`
+
+error: aborting due to 2 previous errors
-For more information about this error, try `rustc --explain E0584`.
+Some errors have detailed explanations: E0433, E0584.
+For more information about an error, try `rustc --explain E0433`.
+++ /dev/null
-// run-pass
-// ignore-wasm32-bare compiled with panic=abort by default
-// This test checks that instantiating an uninhabited type via `mem::{uninitialized,zeroed}` results
-// in a runtime panic.
-
-#![feature(never_type)]
-#![allow(deprecated, invalid_value)]
-
-use std::{mem, panic};
-
-#[allow(dead_code)]
-struct Foo {
- x: u8,
- y: !,
-}
-
-enum Bar {}
-
-fn main() {
- unsafe {
- assert_eq!(
- panic::catch_unwind(|| {
- mem::uninitialized::<!>()
- }).err().and_then(|a| a.downcast_ref::<String>().map(|s| {
- s == "Attempted to instantiate uninhabited type !"
- })),
- Some(true)
- );
-
- assert_eq!(
- panic::catch_unwind(|| {
- mem::zeroed::<!>()
- }).err().and_then(|a| a.downcast_ref::<String>().map(|s| {
- s == "Attempted to instantiate uninhabited type !"
- })),
- Some(true)
- );
-
- assert_eq!(
- panic::catch_unwind(|| {
- mem::MaybeUninit::<!>::uninit().assume_init()
- }).err().and_then(|a| a.downcast_ref::<String>().map(|s| {
- s == "Attempted to instantiate uninhabited type !"
- })),
- Some(true)
- );
-
- assert_eq!(
- panic::catch_unwind(|| {
- mem::uninitialized::<Foo>()
- }).err().and_then(|a| a.downcast_ref::<String>().map(|s| {
- s == "Attempted to instantiate uninhabited type Foo"
- })),
- Some(true)
- );
-
- assert_eq!(
- panic::catch_unwind(|| {
- mem::zeroed::<Foo>()
- }).err().and_then(|a| a.downcast_ref::<String>().map(|s| {
- s == "Attempted to instantiate uninhabited type Foo"
- })),
- Some(true)
- );
-
- assert_eq!(
- panic::catch_unwind(|| {
- mem::MaybeUninit::<Foo>::uninit().assume_init()
- }).err().and_then(|a| a.downcast_ref::<String>().map(|s| {
- s == "Attempted to instantiate uninhabited type Foo"
- })),
- Some(true)
- );
-
- assert_eq!(
- panic::catch_unwind(|| {
- mem::uninitialized::<Bar>()
- }).err().and_then(|a| a.downcast_ref::<String>().map(|s| {
- s == "Attempted to instantiate uninhabited type Bar"
- })),
- Some(true)
- );
-
- assert_eq!(
- panic::catch_unwind(|| {
- mem::zeroed::<Bar>()
- }).err().and_then(|a| a.downcast_ref::<String>().map(|s| {
- s == "Attempted to instantiate uninhabited type Bar"
- })),
- Some(true)
- );
-
- assert_eq!(
- panic::catch_unwind(|| {
- mem::MaybeUninit::<Bar>::uninit().assume_init()
- }).err().and_then(|a| a.downcast_ref::<String>().map(|s| {
- s == "Attempted to instantiate uninhabited type Bar"
- })),
- Some(true)
- );
- }
-}
--> $DIR/dont-print-desugared.rs:4:10
|
LL | for &ref mut x in s {}
- | -^^^^^^^^^
- | ||
- | |cannot borrow as mutable through `&` reference
- | help: consider changing this to be a mutable reference: `&mut ref mut x`
+ | ^^^^^^^^^ cannot borrow as mutable through `&` reference
error[E0597]: `y` does not live long enough
--> $DIR/dont-print-desugared.rs:17:16
-// Ths test case is exploring the space of how blocs with tail
+// This test case is exploring the space of how blocks with tail
// expressions and statements can be composed, trying to keep each
// case on one line so that we can compare them via a vertical scan
// with the human eye.
impl<'a> Foo2<'a> {
// should not produce outlives suggestions to name 'self
fn get_bar(&self) -> Bar2 {
- Bar2::new(&self) //~ERROR borrowed data escapes outside of method
+ Bar2::new(&self) //~ERROR borrowed data escapes outside of associated function
}
}
|
= help: consider adding the following bound: `'b: 'a`
-error[E0521]: borrowed data escapes outside of method
+error[E0521]: borrowed data escapes outside of associated function
--> $DIR/outlives-suggestion-simple.rs:73:9
|
LL | fn get_bar(&self) -> Bar2 {
| -----
| |
- | `self` declared here, outside of the method body
- | `self` is a reference that is only valid in the method body
+ | `self` declared here, outside of the associated function body
+ | `self` is a reference that is only valid in the associated function body
LL | Bar2::new(&self)
- | ^^^^^^^^^^^^^^^^ `self` escapes the method body here
+ | ^^^^^^^^^^^^^^^^ `self` escapes the associated function body here
error: aborting due to 9 previous errors
--- /dev/null
+// Regression test for issue #69490
+
+// check-pass
+
+pub trait Trait<T> {
+ const S: &'static str;
+}
+
+impl<T> Trait<()> for T
+where
+ T: for<'a> Trait<&'a ()>,
+{
+ // Use of `T::S` here caused an ICE
+ const S: &'static str = T::S;
+}
+
+// Some similar cases that didn't ICE:
+
+impl<'a, T> Trait<()> for (T,)
+where
+ T: Trait<&'a ()>,
+{
+ const S: &'static str = T::S;
+}
+
+impl<T> Trait<()> for [T; 1]
+where
+ T: Trait<for<'a> fn(&'a ())>,
+{
+ const S: &'static str = T::S;
+}
+
+fn main() {}
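The regression test above centers on `for<'a>` (higher-ranked) trait bounds in where-clauses. As a minimal illustration of what such a bound means, separate from the test itself:

```rust
// `F` must accept a reference of *every* lifetime `'a`, not one the caller
// fixed in advance -- that is what the higher-ranked `for<'a>` bound expresses.
fn apply_to_local<F>(f: F) -> i32
where
    F: for<'a> Fn(&'a i32) -> i32,
{
    let x = 10; // a fresh local lifetime the caller could not have named
    f(&x)
}

fn main() {
    assert_eq!(apply_to_local(|r| *r + 1), 11);
}
```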
+++ /dev/null
-// run-pass
-// compile-flags: -Z no-landing-pads -C codegen-units=1
-// ignore-emscripten no threads support
-
-use std::thread;
-
-static mut HIT: bool = false;
-
-struct A;
-
-impl Drop for A {
- fn drop(&mut self) {
- unsafe { HIT = true; }
- }
-}
-
-fn main() {
- thread::spawn(move|| -> () {
- let _a = A;
- panic!();
- }).join().unwrap_err();
- assert!(unsafe { !HIT });
-}
}
#[lang = "eh_personality"] extern fn eh_personality() {}
-#[lang = "eh_unwind_resume"] extern fn eh_unwind_resume() {}
#[lang = "panic_impl"] fn panic_impl(panic: &PanicInfo) -> ! { loop {} }
LL | #[rustc_on_unimplemented = "test error `{Self}` with `{Bar}`"]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
- = note: see issue #29642 <https://github.com/rust-lang/rust/issues/29642> for more information
= help: add `#![feature(rustc_attrs)]` to the crate attributes to enable
error: aborting due to previous error
--- /dev/null
+// Test or-patterns with box-patterns
+
+// run-pass
+
+#![feature(or_patterns)]
+#![feature(box_patterns)]
+
+#[derive(Debug, PartialEq)]
+enum MatchArm {
+ Arm(usize),
+ Wild,
+}
+
+#[derive(Debug)]
+enum Test {
+ Foo,
+ Bar,
+ Baz,
+ Qux,
+}
+
+fn test(x: Option<Box<Test>>) -> MatchArm {
+ match x {
+ Some(box Test::Foo | box Test::Bar) => MatchArm::Arm(0),
+ Some(box Test::Baz) => MatchArm::Arm(1),
+ Some(_) => MatchArm::Arm(2),
+ _ => MatchArm::Wild,
+ }
+}
+
+fn main() {
+ assert_eq!(test(Some(Box::new(Test::Foo))), MatchArm::Arm(0));
+ assert_eq!(test(Some(Box::new(Test::Bar))), MatchArm::Arm(0));
+ assert_eq!(test(Some(Box::new(Test::Baz))), MatchArm::Arm(1));
+ assert_eq!(test(Some(Box::new(Test::Qux))), MatchArm::Arm(2));
+ assert_eq!(test(None), MatchArm::Wild);
+}
fn main() {
// One level:
let Ok(a) | Err(ref a): Result<&u8, u8> = Ok(&0);
- //~^ ERROR variable `a` is bound in inconsistent ways
+ //~^ ERROR variable `a` is bound inconsistently
let Ok(ref mut a) | Err(a): Result<u8, &mut u8> = Ok(0);
- //~^ ERROR variable `a` is bound in inconsistent ways
+ //~^ ERROR variable `a` is bound inconsistently
let Ok(ref a) | Err(ref mut a): Result<&u8, &mut u8> = Ok(&0);
- //~^ ERROR variable `a` is bound in inconsistent ways
+ //~^ ERROR variable `a` is bound inconsistently
//~| ERROR mismatched types
let Ok((ref a, b)) | Err((ref mut a, ref b)) = Ok((0, &0));
- //~^ ERROR variable `a` is bound in inconsistent ways
- //~| ERROR variable `b` is bound in inconsistent ways
+ //~^ ERROR variable `a` is bound inconsistently
+ //~| ERROR variable `b` is bound inconsistently
//~| ERROR mismatched types
// Two levels:
let Ok(Ok(a) | Err(a)) | Err(ref a) = Err(0);
- //~^ ERROR variable `a` is bound in inconsistent ways
+ //~^ ERROR variable `a` is bound inconsistently
// Three levels:
- let Ok([ Ok((Ok(ref a) | Err(a),)) | Err(a) ]) | Err(a) = Err(&1);
- //~^ ERROR variable `a` is bound in inconsistent ways
+ let Ok([Ok((Ok(ref a) | Err(a),)) | Err(a)]) | Err(a) = Err(&1);
+ //~^ ERROR variable `a` is bound inconsistently
}
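The reworded E0409 diagnostic describes binding-mode consistency across `|` alternatives. A small sketch (not from the test suite) of the accepted, consistent form on a toolchain where or-patterns are available:

```rust
fn main() {
    // Every alternative binds `a` the same way (by `ref`) to the same type,
    // so neither E0409 nor E0308 fires, and the pattern is irrefutable
    // because both variants are covered.
    let res: Result<u8, u8> = Ok(3);
    let (Ok(ref a) | Err(ref a)) = res;
    assert_eq!(*a, 3);
}
```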
-error[E0409]: variable `a` is bound in inconsistent ways within the same match arm
+error[E0409]: variable `a` is bound inconsistently across alternatives separated by `|`
--> $DIR/inconsistent-modes.rs:7:25
|
LL | let Ok(a) | Err(ref a): Result<&u8, u8> = Ok(&0);
| |
| first binding
-error[E0409]: variable `a` is bound in inconsistent ways within the same match arm
+error[E0409]: variable `a` is bound inconsistently across alternatives separated by `|`
--> $DIR/inconsistent-modes.rs:9:29
|
LL | let Ok(ref mut a) | Err(a): Result<u8, &mut u8> = Ok(0);
| |
| first binding
-error[E0409]: variable `a` is bound in inconsistent ways within the same match arm
+error[E0409]: variable `a` is bound inconsistently across alternatives separated by `|`
--> $DIR/inconsistent-modes.rs:11:33
|
LL | let Ok(ref a) | Err(ref mut a): Result<&u8, &mut u8> = Ok(&0);
| - first binding ^ bound in different ways
-error[E0409]: variable `a` is bound in inconsistent ways within the same match arm
+error[E0409]: variable `a` is bound inconsistently across alternatives separated by `|`
--> $DIR/inconsistent-modes.rs:14:39
|
LL | let Ok((ref a, b)) | Err((ref mut a, ref b)) = Ok((0, &0));
| - first binding ^ bound in different ways
-error[E0409]: variable `b` is bound in inconsistent ways within the same match arm
+error[E0409]: variable `b` is bound inconsistently across alternatives separated by `|`
--> $DIR/inconsistent-modes.rs:14:46
|
LL | let Ok((ref a, b)) | Err((ref mut a, ref b)) = Ok((0, &0));
| - first binding ^ bound in different ways
-error[E0409]: variable `a` is bound in inconsistent ways within the same match arm
+error[E0409]: variable `a` is bound inconsistently across alternatives separated by `|`
--> $DIR/inconsistent-modes.rs:20:38
|
LL | let Ok(Ok(a) | Err(a)) | Err(ref a) = Err(0);
| |
| first binding
-error[E0409]: variable `a` is bound in inconsistent ways within the same match arm
- --> $DIR/inconsistent-modes.rs:24:34
+error[E0409]: variable `a` is bound inconsistently across alternatives separated by `|`
+ --> $DIR/inconsistent-modes.rs:24:33
|
-LL | let Ok([ Ok((Ok(ref a) | Err(a),)) | Err(a) ]) | Err(a) = Err(&1);
- | - ^ bound in different ways
- | |
- | first binding
+LL | let Ok([Ok((Ok(ref a) | Err(a),)) | Err(a)]) | Err(a) = Err(&1);
+ | - ^ bound in different ways
+ | |
+ | first binding
error[E0308]: mismatched types
--> $DIR/inconsistent-modes.rs:11:25
--- /dev/null
+#![feature(or_patterns)]
+
+fn main() {
+ let 0 | (1 | 2) = 0; //~ ERROR refutable pattern in local binding
+ match 0 {
+ //~^ ERROR non-exhaustive patterns
+ 0 | (1 | 2) => {}
+ }
+}
--- /dev/null
+error[E0005]: refutable pattern in local binding: `std::i32::MIN..=-1i32` and `3i32..=std::i32::MAX` not covered
+ --> $DIR/issue-69875-should-have-been-expanded-earlier-non-exhaustive.rs:4:9
+ |
+LL | let 0 | (1 | 2) = 0;
+ | ^^^^^^^^^^^ patterns `std::i32::MIN..=-1i32` and `3i32..=std::i32::MAX` not covered
+ |
+ = note: `let` bindings require an "irrefutable pattern", like a `struct` or an `enum` with only one variant
+ = note: for more information, visit https://doc.rust-lang.org/book/ch18-02-refutability.html
+help: you might want to use `if let` to ignore the variant that isn't matched
+ |
+LL | if let 0 | (1 | 2) = 0 { /* */ }
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+error[E0004]: non-exhaustive patterns: `std::i32::MIN..=-1i32` and `3i32..=std::i32::MAX` not covered
+ --> $DIR/issue-69875-should-have-been-expanded-earlier-non-exhaustive.rs:5:11
+ |
+LL | match 0 {
+ | ^ patterns `std::i32::MIN..=-1i32` and `3i32..=std::i32::MAX` not covered
+ |
+ = help: ensure that all possible cases are being handled, possibly by adding wildcards or more match arms
+
+error: aborting due to 2 previous errors
+
+Some errors have detailed explanations: E0004, E0005.
+For more information about an error, try `rustc --explain E0004`.
--- /dev/null
+// check-pass
+
+#![feature(or_patterns)]
+
+fn main() {
+ let 0 | (1 | _) = 0;
+ if let 0 | (1 | 2) = 0 {}
+ if let x @ 0 | x @ (1 | 2) = 0 {}
+}
| | expected `usize`, found `isize`
| first introduced with type `usize` here
|
- = note: in the same arm, a binding must have the same type in all alternatives
+ = note: a binding must have the same type in all alternatives
error[E0308]: mismatched types
--> $DIR/or-patterns-binding-type-mismatch.rs:38:47
| | expected `usize`, found `isize`
| first introduced with type `usize` here
|
- = note: in the same arm, a binding must have the same type in all alternatives
+ = note: a binding must have the same type in all alternatives
error[E0308]: mismatched types
--> $DIR/or-patterns-binding-type-mismatch.rs:42:22
| | expected `u16`, found `u8`
| first introduced with type `u16` here
|
- = note: in the same arm, a binding must have the same type in all alternatives
+ = note: a binding must have the same type in all alternatives
error[E0308]: mismatched types
--> $DIR/or-patterns-binding-type-mismatch.rs:42:25
| | expected `u8`, found `u16`
| first introduced with type `u8` here
|
- = note: in the same arm, a binding must have the same type in all alternatives
+ = note: a binding must have the same type in all alternatives
error[E0308]: mismatched types
--> $DIR/or-patterns-binding-type-mismatch.rs:47:44
LL | = Some((0u8, Some((1u16, 2u32))))
| ------------------------------- this expression has type `std::option::Option<(u8, std::option::Option<(u16, u32)>)>`
|
- = note: in the same arm, a binding must have the same type in all alternatives
+ = note: a binding must have the same type in all alternatives
error[E0308]: mismatched types
--> $DIR/or-patterns-binding-type-mismatch.rs:47:53
LL | = Some((0u8, Some((1u16, 2u32))))
| ------------------------------- this expression has type `std::option::Option<(u8, std::option::Option<(u16, u32)>)>`
|
- = note: in the same arm, a binding must have the same type in all alternatives
+ = note: a binding must have the same type in all alternatives
error[E0308]: mismatched types
--> $DIR/or-patterns-binding-type-mismatch.rs:47:62
LL | = Some((0u8, Some((1u16, 2u32))))
| ------------------------------- this expression has type `std::option::Option<(u8, std::option::Option<(u16, u32)>)>`
|
- = note: in the same arm, a binding must have the same type in all alternatives
+ = note: a binding must have the same type in all alternatives
error[E0308]: mismatched types
--> $DIR/or-patterns-binding-type-mismatch.rs:47:65
LL | = Some((0u8, Some((1u16, 2u32))))
| ------------------------------- this expression has type `std::option::Option<(u8, std::option::Option<(u16, u32)>)>`
|
- = note: in the same arm, a binding must have the same type in all alternatives
+ = note: a binding must have the same type in all alternatives
error[E0308]: mismatched types
--> $DIR/or-patterns-binding-type-mismatch.rs:55:39
--- /dev/null
+// Test or-patterns with slice-patterns
+
+// run-pass
+
+#![feature(or_patterns)]
+
+#[derive(Debug, PartialEq)]
+enum MatchArm {
+ Arm(usize),
+ Wild,
+}
+
+#[derive(Debug)]
+enum Test {
+ Foo,
+ Bar,
+ Baz,
+ Qux,
+}
+
+fn test(foo: &[Option<Test>]) -> MatchArm {
+ match foo {
+ [.., Some(Test::Qux | Test::Foo)] => MatchArm::Arm(0),
+ [Some(Test::Foo), .., Some(Test::Baz | Test::Bar)] => MatchArm::Arm(1),
+ [.., Some(Test::Bar | Test::Baz), _] => MatchArm::Arm(2),
+ _ => MatchArm::Wild,
+ }
+}
+
+fn main() {
+ let foo = vec![
+ Some(Test::Foo),
+ Some(Test::Bar),
+ Some(Test::Baz),
+ Some(Test::Qux),
+ ];
+
+ // path 1a
+ assert_eq!(test(&foo), MatchArm::Arm(0));
+ // path 1b
+ assert_eq!(test(&[Some(Test::Bar), Some(Test::Foo)]), MatchArm::Arm(0));
+ // path 2a
+ assert_eq!(test(&foo[..3]), MatchArm::Arm(1));
+ // path 2b
+ assert_eq!(test(&[Some(Test::Foo), Some(Test::Foo), Some(Test::Bar)]), MatchArm::Arm(1));
+ // path 3a
+ assert_eq!(test(&foo[1..3]), MatchArm::Arm(2));
+ // path 3b
+ assert_eq!(test(&[Some(Test::Bar), Some(Test::Baz), Some(Test::Baz), Some(Test::Bar)]),
+ MatchArm::Arm(2));
+ // path 4
+ assert_eq!(test(&foo[4..]), MatchArm::Wild);
+}
fn panic_impl(info: &PanicInfo) -> ! { loop {} }
#[lang = "eh_personality"]
fn eh_personality() {}
-#[lang = "eh_unwind_resume"]
-fn eh_unwind_resume() {}
--- /dev/null
+// run-pass
+// ignore-emscripten no subprocess support
+
+#![feature(set_stdio)]
+
+use std::fmt;
+use std::fmt::{Display, Formatter};
+use std::io::set_panic;
+
+pub struct A;
+
+impl Display for A {
+ fn fmt(&self, _f: &mut Formatter<'_>) -> fmt::Result {
+ panic!();
+ }
+}
+
+fn main() {
+ set_panic(Some(Box::new(Vec::new())));
+ assert!(std::panic::catch_unwind(|| {
+ eprintln!("{}", A);
+ })
+ .is_err());
+}
error: expected statement after outer attribute
- --> $DIR/attr-dangling-in-fn.rs:5:1
+ --> $DIR/attr-dangling-in-fn.rs:4:3
|
-LL | }
- | ^
+LL | #[foo = "bar"]
+ | ^^^^^^^^^^^^^^
error: aborting due to previous error
//~^ ERROR an inner attribute is not permitted in this context
#[cfg(FALSE)] fn e() { let _ = #[attr] &mut #![attr] 0; }
//~^ ERROR an inner attribute is not permitted in this context
-#[cfg(FALSE)] fn e() { let _ = #[attr] if 0 {}; }
-//~^ ERROR attributes are not yet allowed on `if` expressions
#[cfg(FALSE)] fn e() { let _ = if 0 #[attr] {}; }
-//~^ ERROR expected `{`, found `#`
+//~^ ERROR outer attributes are not allowed on `if`
#[cfg(FALSE)] fn e() { let _ = if 0 {#![attr]}; }
//~^ ERROR an inner attribute is not permitted in this context
#[cfg(FALSE)] fn e() { let _ = if 0 {} #[attr] else {}; }
//~^ ERROR expected one of
#[cfg(FALSE)] fn e() { let _ = if 0 {} else #[attr] {}; }
-//~^ ERROR expected `{`, found `#`
+//~^ ERROR outer attributes are not allowed on `if`
#[cfg(FALSE)] fn e() { let _ = if 0 {} else {#![attr]}; }
//~^ ERROR an inner attribute is not permitted in this context
#[cfg(FALSE)] fn e() { let _ = if 0 {} else #[attr] if 0 {}; }
-//~^ ERROR attributes are not yet allowed on `if` expressions
-//~| ERROR expected `{`, found `#`
+//~^ ERROR outer attributes are not allowed on `if`
#[cfg(FALSE)] fn e() { let _ = if 0 {} else if 0 #[attr] {}; }
-//~^ ERROR expected `{`, found `#`
+//~^ ERROR outer attributes are not allowed on `if`
#[cfg(FALSE)] fn e() { let _ = if 0 {} else if 0 {#![attr]}; }
//~^ ERROR an inner attribute is not permitted in this context
-#[cfg(FALSE)] fn e() { let _ = #[attr] if let _ = 0 {}; }
-//~^ ERROR attributes are not yet allowed on `if` expressions
#[cfg(FALSE)] fn e() { let _ = if let _ = 0 #[attr] {}; }
-//~^ ERROR expected `{`, found `#`
+//~^ ERROR outer attributes are not allowed on `if`
#[cfg(FALSE)] fn e() { let _ = if let _ = 0 {#![attr]}; }
//~^ ERROR an inner attribute is not permitted in this context
#[cfg(FALSE)] fn e() { let _ = if let _ = 0 {} #[attr] else {}; }
//~^ ERROR expected one of
#[cfg(FALSE)] fn e() { let _ = if let _ = 0 {} else #[attr] {}; }
-//~^ ERROR expected `{`, found `#`
+//~^ ERROR outer attributes are not allowed on `if`
#[cfg(FALSE)] fn e() { let _ = if let _ = 0 {} else {#![attr]}; }
//~^ ERROR an inner attribute is not permitted in this context
#[cfg(FALSE)] fn e() { let _ = if let _ = 0 {} else #[attr] if let _ = 0 {}; }
-//~^ ERROR attributes are not yet allowed on `if` expressions
-//~| ERROR expected `{`, found `#`
+//~^ ERROR outer attributes are not allowed on `if`
#[cfg(FALSE)] fn e() { let _ = if let _ = 0 {} else if let _ = 0 #[attr] {}; }
-//~^ ERROR expected `{`, found `#`
+//~^ ERROR outer attributes are not allowed on `if`
#[cfg(FALSE)] fn e() { let _ = if let _ = 0 {} else if let _ = 0 {#![attr]}; }
//~^ ERROR an inner attribute is not permitted in this context
|
= note: inner attributes, like `#![no_std]`, annotate the item enclosing them, and are usually found at the beginning of source files. Outer attributes, like `#[test]`, annotate the item following them.
-error: attributes are not yet allowed on `if` expressions
- --> $DIR/attr-stmt-expr-attr-bad.rs:41:32
- |
-LL | #[cfg(FALSE)] fn e() { let _ = #[attr] if 0 {}; }
- | ^^^^^^^
-
-error: expected `{`, found `#`
- --> $DIR/attr-stmt-expr-attr-bad.rs:43:37
+error: outer attributes are not allowed on `if` and `else` branches
+ --> $DIR/attr-stmt-expr-attr-bad.rs:41:37
|
LL | #[cfg(FALSE)] fn e() { let _ = if 0 #[attr] {}; }
- | -- ^ --- help: try placing this code inside a block: `{ {}; }`
+ | -- ^^^^^^^ -- the attributes are attached to this branch
| | |
- | | expected `{`
- | this `if` expression has a condition, but no block
+ | | help: remove the attributes
+ | the branch belongs to this `if`
error: an inner attribute is not permitted in this context
- --> $DIR/attr-stmt-expr-attr-bad.rs:45:38
+ --> $DIR/attr-stmt-expr-attr-bad.rs:43:38
|
LL | #[cfg(FALSE)] fn e() { let _ = if 0 {#![attr]}; }
| ^^^^^^^^
= note: inner attributes, like `#![no_std]`, annotate the item enclosing them, and are usually found at the beginning of source files. Outer attributes, like `#[test]`, annotate the item following them.
error: expected one of `.`, `;`, `?`, `else`, or an operator, found `#`
- --> $DIR/attr-stmt-expr-attr-bad.rs:47:40
+ --> $DIR/attr-stmt-expr-attr-bad.rs:45:40
|
LL | #[cfg(FALSE)] fn e() { let _ = if 0 {} #[attr] else {}; }
| ^ expected one of `.`, `;`, `?`, `else`, or an operator
-error: expected `{`, found `#`
- --> $DIR/attr-stmt-expr-attr-bad.rs:49:45
+error: outer attributes are not allowed on `if` and `else` branches
+ --> $DIR/attr-stmt-expr-attr-bad.rs:47:45
|
LL | #[cfg(FALSE)] fn e() { let _ = if 0 {} else #[attr] {}; }
- | ^ --- help: try placing this code inside a block: `{ {}; }`
- | |
- | expected `{`
+ | ---- ^^^^^^^ -- the attributes are attached to this branch
+ | | |
+ | | help: remove the attributes
+ | the branch belongs to this `else`
error: an inner attribute is not permitted in this context
- --> $DIR/attr-stmt-expr-attr-bad.rs:51:46
+ --> $DIR/attr-stmt-expr-attr-bad.rs:49:46
|
LL | #[cfg(FALSE)] fn e() { let _ = if 0 {} else {#![attr]}; }
| ^^^^^^^^
|
= note: inner attributes, like `#![no_std]`, annotate the item enclosing them, and are usually found at the beginning of source files. Outer attributes, like `#[test]`, annotate the item following them.
-error: attributes are not yet allowed on `if` expressions
- --> $DIR/attr-stmt-expr-attr-bad.rs:53:45
+error: outer attributes are not allowed on `if` and `else` branches
+ --> $DIR/attr-stmt-expr-attr-bad.rs:51:45
|
LL | #[cfg(FALSE)] fn e() { let _ = if 0 {} else #[attr] if 0 {}; }
- | ^^^^^^^
+ | ---- ^^^^^^^ ------- the attributes are attached to this branch
+ | | |
+ | | help: remove the attributes
+ | the branch belongs to this `else`
-error: expected `{`, found `#`
- --> $DIR/attr-stmt-expr-attr-bad.rs:53:45
- |
-LL | #[cfg(FALSE)] fn e() { let _ = if 0 {} else #[attr] if 0 {}; }
- | ^ -------- help: try placing this code inside a block: `{ if 0 {}; }`
- | |
- | expected `{`
-
-error: expected `{`, found `#`
- --> $DIR/attr-stmt-expr-attr-bad.rs:56:50
+error: outer attributes are not allowed on `if` and `else` branches
+ --> $DIR/attr-stmt-expr-attr-bad.rs:53:50
|
LL | #[cfg(FALSE)] fn e() { let _ = if 0 {} else if 0 #[attr] {}; }
- | -- ^ --- help: try placing this code inside a block: `{ {}; }`
+ | -- ^^^^^^^ -- the attributes are attached to this branch
| | |
- | | expected `{`
- | this `if` expression has a condition, but no block
+ | | help: remove the attributes
+ | the branch belongs to this `if`
error: an inner attribute is not permitted in this context
- --> $DIR/attr-stmt-expr-attr-bad.rs:58:51
+ --> $DIR/attr-stmt-expr-attr-bad.rs:55:51
|
LL | #[cfg(FALSE)] fn e() { let _ = if 0 {} else if 0 {#![attr]}; }
| ^^^^^^^^
|
= note: inner attributes, like `#![no_std]`, annotate the item enclosing them, and are usually found at the beginning of source files. Outer attributes, like `#[test]`, annotate the item following them.
-error: attributes are not yet allowed on `if` expressions
- --> $DIR/attr-stmt-expr-attr-bad.rs:60:32
- |
-LL | #[cfg(FALSE)] fn e() { let _ = #[attr] if let _ = 0 {}; }
- | ^^^^^^^
-
-error: expected `{`, found `#`
- --> $DIR/attr-stmt-expr-attr-bad.rs:62:45
+error: outer attributes are not allowed on `if` and `else` branches
+ --> $DIR/attr-stmt-expr-attr-bad.rs:57:45
|
LL | #[cfg(FALSE)] fn e() { let _ = if let _ = 0 #[attr] {}; }
- | -- ^ --- help: try placing this code inside a block: `{ {}; }`
+ | -- ^^^^^^^ -- the attributes are attached to this branch
| | |
- | | expected `{`
- | this `if` expression has a condition, but no block
+ | | help: remove the attributes
+ | the branch belongs to this `if`
error: an inner attribute is not permitted in this context
- --> $DIR/attr-stmt-expr-attr-bad.rs:64:46
+ --> $DIR/attr-stmt-expr-attr-bad.rs:59:46
|
LL | #[cfg(FALSE)] fn e() { let _ = if let _ = 0 {#![attr]}; }
| ^^^^^^^^
= note: inner attributes, like `#![no_std]`, annotate the item enclosing them, and are usually found at the beginning of source files. Outer attributes, like `#[test]`, annotate the item following them.
error: expected one of `.`, `;`, `?`, `else`, or an operator, found `#`
- --> $DIR/attr-stmt-expr-attr-bad.rs:66:48
+ --> $DIR/attr-stmt-expr-attr-bad.rs:61:48
|
LL | #[cfg(FALSE)] fn e() { let _ = if let _ = 0 {} #[attr] else {}; }
| ^ expected one of `.`, `;`, `?`, `else`, or an operator
-error: expected `{`, found `#`
- --> $DIR/attr-stmt-expr-attr-bad.rs:68:53
+error: outer attributes are not allowed on `if` and `else` branches
+ --> $DIR/attr-stmt-expr-attr-bad.rs:63:53
|
LL | #[cfg(FALSE)] fn e() { let _ = if let _ = 0 {} else #[attr] {}; }
- | ^ --- help: try placing this code inside a block: `{ {}; }`
- | |
- | expected `{`
+ | ---- ^^^^^^^ -- the attributes are attached to this branch
+ | | |
+ | | help: remove the attributes
+ | the branch belongs to this `else`
error: an inner attribute is not permitted in this context
- --> $DIR/attr-stmt-expr-attr-bad.rs:70:54
+ --> $DIR/attr-stmt-expr-attr-bad.rs:65:54
|
LL | #[cfg(FALSE)] fn e() { let _ = if let _ = 0 {} else {#![attr]}; }
| ^^^^^^^^
|
= note: inner attributes, like `#![no_std]`, annotate the item enclosing them, and are usually found at the beginning of source files. Outer attributes, like `#[test]`, annotate the item following them.
-error: attributes are not yet allowed on `if` expressions
- --> $DIR/attr-stmt-expr-attr-bad.rs:72:53
+error: outer attributes are not allowed on `if` and `else` branches
+ --> $DIR/attr-stmt-expr-attr-bad.rs:67:53
|
LL | #[cfg(FALSE)] fn e() { let _ = if let _ = 0 {} else #[attr] if let _ = 0 {}; }
- | ^^^^^^^
+ | ---- ^^^^^^^ --------------- the attributes are attached to this branch
+ | | |
+ | | help: remove the attributes
+ | the branch belongs to this `else`
-error: expected `{`, found `#`
- --> $DIR/attr-stmt-expr-attr-bad.rs:72:53
- |
-LL | #[cfg(FALSE)] fn e() { let _ = if let _ = 0 {} else #[attr] if let _ = 0 {}; }
- | ^ ---------------- help: try placing this code inside a block: `{ if let _ = 0 {}; }`
- | |
- | expected `{`
-
-error: expected `{`, found `#`
- --> $DIR/attr-stmt-expr-attr-bad.rs:75:66
+error: outer attributes are not allowed on `if` and `else` branches
+ --> $DIR/attr-stmt-expr-attr-bad.rs:69:66
|
LL | #[cfg(FALSE)] fn e() { let _ = if let _ = 0 {} else if let _ = 0 #[attr] {}; }
- | -- ^ --- help: try placing this code inside a block: `{ {}; }`
+ | -- ^^^^^^^ -- the attributes are attached to this branch
| | |
- | | expected `{`
- | this `if` expression has a condition, but no block
+ | | help: remove the attributes
+ | the branch belongs to this `if`
error: an inner attribute is not permitted in this context
- --> $DIR/attr-stmt-expr-attr-bad.rs:77:67
+ --> $DIR/attr-stmt-expr-attr-bad.rs:71:67
|
LL | #[cfg(FALSE)] fn e() { let _ = if let _ = 0 {} else if let _ = 0 {#![attr]}; }
| ^^^^^^^^
= note: inner attributes, like `#![no_std]`, annotate the item enclosing them, and are usually found at the beginning of source files. Outer attributes, like `#[test]`, annotate the item following them.
error: an inner attribute is not permitted following an outer attribute
- --> $DIR/attr-stmt-expr-attr-bad.rs:80:32
+ --> $DIR/attr-stmt-expr-attr-bad.rs:74:32
|
LL | #[cfg(FALSE)] fn s() { #[attr] #![attr] let _ = 0; }
- | ------- ^^^^^^^^ not permitted following an outer attibute
+ | ------- ^^^^^^^^ not permitted following an outer attribute
| |
| previous outer attribute
|
= note: inner attributes, like `#![no_std]`, annotate the item enclosing them, and are usually found at the beginning of source files. Outer attributes, like `#[test]`, annotate the item following them.
error: an inner attribute is not permitted following an outer attribute
- --> $DIR/attr-stmt-expr-attr-bad.rs:82:32
+ --> $DIR/attr-stmt-expr-attr-bad.rs:76:32
|
LL | #[cfg(FALSE)] fn s() { #[attr] #![attr] 0; }
- | ------- ^^^^^^^^ not permitted following an outer attibute
+ | ------- ^^^^^^^^ not permitted following an outer attribute
| |
| previous outer attribute
|
= note: inner attributes, like `#![no_std]`, annotate the item enclosing them, and are usually found at the beginning of source files. Outer attributes, like `#[test]`, annotate the item following them.
error: an inner attribute is not permitted following an outer attribute
- --> $DIR/attr-stmt-expr-attr-bad.rs:84:32
+ --> $DIR/attr-stmt-expr-attr-bad.rs:78:32
|
LL | #[cfg(FALSE)] fn s() { #[attr] #![attr] foo!(); }
- | ------- ^^^^^^^^ not permitted following an outer attibute
+ | ------- ^^^^^^^^ not permitted following an outer attribute
| |
| previous outer attribute
|
= note: inner attributes, like `#![no_std]`, annotate the item enclosing them, and are usually found at the beginning of source files. Outer attributes, like `#[test]`, annotate the item following them.
error: an inner attribute is not permitted following an outer attribute
- --> $DIR/attr-stmt-expr-attr-bad.rs:86:32
+ --> $DIR/attr-stmt-expr-attr-bad.rs:80:32
|
LL | #[cfg(FALSE)] fn s() { #[attr] #![attr] foo![]; }
- | ------- ^^^^^^^^ not permitted following an outer attibute
+ | ------- ^^^^^^^^ not permitted following an outer attribute
| |
| previous outer attribute
|
= note: inner attributes, like `#![no_std]`, annotate the item enclosing them, and are usually found at the beginning of source files. Outer attributes, like `#[test]`, annotate the item following them.
error: an inner attribute is not permitted following an outer attribute
- --> $DIR/attr-stmt-expr-attr-bad.rs:88:32
+ --> $DIR/attr-stmt-expr-attr-bad.rs:82:32
|
LL | #[cfg(FALSE)] fn s() { #[attr] #![attr] foo!{}; }
- | ------- ^^^^^^^^ not permitted following an outer attibute
+ | ------- ^^^^^^^^ not permitted following an outer attribute
| |
| previous outer attribute
|
= note: inner attributes, like `#![no_std]`, annotate the item enclosing them, and are usually found at the beginning of source files. Outer attributes, like `#[test]`, annotate the item following them.
error[E0586]: inclusive range with no end
- --> $DIR/attr-stmt-expr-attr-bad.rs:94:35
+ --> $DIR/attr-stmt-expr-attr-bad.rs:88:35
|
LL | #[cfg(FALSE)] fn e() { match 0 { 0..=#[attr] 10 => () } }
| ^^^ help: use `..` instead
= note: inclusive ranges must be bounded at the end (`..=b` or `a..=b`)
error: expected one of `=>`, `if`, or `|`, found `#`
- --> $DIR/attr-stmt-expr-attr-bad.rs:94:38
+ --> $DIR/attr-stmt-expr-attr-bad.rs:88:38
|
LL | #[cfg(FALSE)] fn e() { match 0 { 0..=#[attr] 10 => () } }
| ^ expected one of `=>`, `if`, or `|`
error[E0586]: inclusive range with no end
- --> $DIR/attr-stmt-expr-attr-bad.rs:97:35
+ --> $DIR/attr-stmt-expr-attr-bad.rs:91:35
|
LL | #[cfg(FALSE)] fn e() { match 0 { 0..=#[attr] -10 => () } }
| ^^^ help: use `..` instead
= note: inclusive ranges must be bounded at the end (`..=b` or `a..=b`)
error: expected one of `=>`, `if`, or `|`, found `#`
- --> $DIR/attr-stmt-expr-attr-bad.rs:97:38
+ --> $DIR/attr-stmt-expr-attr-bad.rs:91:38
|
LL | #[cfg(FALSE)] fn e() { match 0 { 0..=#[attr] -10 => () } }
| ^ expected one of `=>`, `if`, or `|`
error: unexpected token: `#`
- --> $DIR/attr-stmt-expr-attr-bad.rs:100:39
+ --> $DIR/attr-stmt-expr-attr-bad.rs:94:39
|
LL | #[cfg(FALSE)] fn e() { match 0 { 0..=-#[attr] 10 => () } }
| ^
error[E0586]: inclusive range with no end
- --> $DIR/attr-stmt-expr-attr-bad.rs:102:35
+ --> $DIR/attr-stmt-expr-attr-bad.rs:96:35
|
LL | #[cfg(FALSE)] fn e() { match 0 { 0..=#[attr] FOO => () } }
| ^^^ help: use `..` instead
= note: inclusive ranges must be bounded at the end (`..=b` or `a..=b`)
error: expected one of `=>`, `if`, or `|`, found `#`
- --> $DIR/attr-stmt-expr-attr-bad.rs:102:38
+ --> $DIR/attr-stmt-expr-attr-bad.rs:96:38
|
LL | #[cfg(FALSE)] fn e() { match 0 { 0..=#[attr] FOO => () } }
| ^ expected one of `=>`, `if`, or `|`
error: unexpected token: `#`
- --> $DIR/attr-stmt-expr-attr-bad.rs:106:34
+ --> $DIR/attr-stmt-expr-attr-bad.rs:100:34
|
LL | #[cfg(FALSE)] fn e() { let _ = x.#![attr]foo(); }
| ^
error: expected one of `.`, `;`, `?`, or an operator, found `#`
- --> $DIR/attr-stmt-expr-attr-bad.rs:106:34
+ --> $DIR/attr-stmt-expr-attr-bad.rs:100:34
|
LL | #[cfg(FALSE)] fn e() { let _ = x.#![attr]foo(); }
| ^ expected one of `.`, `;`, `?`, or an operator
error: unexpected token: `#`
- --> $DIR/attr-stmt-expr-attr-bad.rs:109:34
+ --> $DIR/attr-stmt-expr-attr-bad.rs:103:34
|
LL | #[cfg(FALSE)] fn e() { let _ = x.#[attr]foo(); }
| ^
error: expected one of `.`, `;`, `?`, or an operator, found `#`
- --> $DIR/attr-stmt-expr-attr-bad.rs:109:34
+ --> $DIR/attr-stmt-expr-attr-bad.rs:103:34
|
LL | #[cfg(FALSE)] fn e() { let _ = x.#[attr]foo(); }
| ^ expected one of `.`, `;`, `?`, or an operator
error: expected statement after outer attribute
- --> $DIR/attr-stmt-expr-attr-bad.rs:114:44
+ --> $DIR/attr-stmt-expr-attr-bad.rs:108:37
|
LL | #[cfg(FALSE)] fn e() { { fn foo() { #[attr]; } } }
- | ^
+ | ^^^^^^^
error: expected statement after outer attribute
- --> $DIR/attr-stmt-expr-attr-bad.rs:116:45
+ --> $DIR/attr-stmt-expr-attr-bad.rs:110:37
|
LL | #[cfg(FALSE)] fn e() { { fn foo() { #[attr] } } }
- | ^
+ | ^^^^^^^
-error: aborting due to 57 previous errors
+error: aborting due to 53 previous errors
For more information about this error, try `rustc --explain E0586`.
--- /dev/null
+#![feature(label_break_value)]
+
+fn main() {}
+
+macro_rules! m {
+ ($b:block) => {
+ 'lab: $b; //~ ERROR cannot use a `block` macro fragment here
+ unsafe $b; //~ ERROR cannot use a `block` macro fragment here
+ |x: u8| -> () $b; //~ ERROR cannot use a `block` macro fragment here
+ }
+}
+
+fn foo() {
+ m!({});
+}
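The test above checks that `'lab: $b` rejects a `block` macro fragment: the parser must see a literal `{` after the label. A sketch of the working form with a literal block, assuming a toolchain where `label_break_value` is available (stabilized in Rust 1.65):

```rust
fn main() {
    // A labeled block written with a literal `{ ... }`; `break 'lab`
    // exits the block early with a value.
    let v = 'lab: {
        if true {
            break 'lab 1;
        }
        0
    };
    assert_eq!(v, 1);
}
```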
--- /dev/null
+error: cannot use a `block` macro fragment here
+ --> $DIR/bad-interpolated-block.rs:7:15
+ |
+LL | 'lab: $b;
+ | ------^^
+ | |
+ | the `block` fragment is within this context
+...
+LL | m!({});
+ | ------- in this macro invocation
+ |
+ = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info)
+
+error: cannot use a `block` macro fragment here
+ --> $DIR/bad-interpolated-block.rs:8:16
+ |
+LL | unsafe $b;
+ | -------^^
+ | |
+ | the `block` fragment is within this context
+...
+LL | m!({});
+ | ------- in this macro invocation
+ |
+ = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info)
+
+error: cannot use a `block` macro fragment here
+ --> $DIR/bad-interpolated-block.rs:9:23
+ |
+LL | |x: u8| -> () $b;
+ | ^^ the `block` fragment is within this context
+...
+LL | m!({});
+ | ------- in this macro invocation
+ |
+ = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info)
+
+error: aborting due to 3 previous errors
+
--- /dev/null
+// edition:2018
+
+#![feature(try_blocks)]
+
+fn main() {}
+
+fn f1() {
+ loop
+ let x = 0; //~ ERROR expected `{`, found keyword `let`
+ drop(0);
+ }
+
+fn f2() {
+ while true
+ let x = 0; //~ ERROR expected `{`, found keyword `let`
+ }
+
+fn f3() {
+ for x in 0..1
+ let x = 0; //~ ERROR expected `{`, found keyword `let`
+ }
+
+fn f4() {
+ try //~ ERROR expected expression, found reserved keyword `try`
+ let x = 0;
+ }
+
+fn f5() {
+ async //~ ERROR async closures are unstable
+ let x = 0; //~ ERROR expected one of `move`, `|`, or `||`, found keyword `let`
+ }
--- /dev/null
+error: expected `{`, found keyword `let`
+ --> $DIR/block-no-opening-brace.rs:9:9
+ |
+LL | let x = 0;
+ | ^^^-------
+ | |
+ | expected `{`
+ | help: try placing this code inside a block: `{ let x = 0; }`
+
+error: expected `{`, found keyword `let`
+ --> $DIR/block-no-opening-brace.rs:15:9
+ |
+LL | let x = 0;
+ | ^^^-------
+ | |
+ | expected `{`
+ | help: try placing this code inside a block: `{ let x = 0; }`
+
+error: expected `{`, found keyword `let`
+ --> $DIR/block-no-opening-brace.rs:20:9
+ |
+LL | let x = 0;
+ | ^^^-------
+ | |
+ | expected `{`
+ | help: try placing this code inside a block: `{ let x = 0; }`
+
+error: expected expression, found reserved keyword `try`
+ --> $DIR/block-no-opening-brace.rs:24:5
+ |
+LL | try
+ | ^^^ expected expression
+
+error: expected one of `move`, `|`, or `||`, found keyword `let`
+ --> $DIR/block-no-opening-brace.rs:30:9
+ |
+LL | async
+ | - expected one of `move`, `|`, or `||`
+LL | let x = 0;
+ | ^^^ unexpected token
+
+error[E0658]: async closures are unstable
+ --> $DIR/block-no-opening-brace.rs:29:5
+ |
+LL | async
+ | ^^^^^
+ |
+ = note: see issue #62290 <https://github.com/rust-lang/rust/issues/62290> for more information
+ = help: add `#![feature(async_closure)]` to the crate attributes to enable
+
+error: aborting due to 6 previous errors
+
+For more information about this error, try `rustc --explain E0658`.
type A = for<'a: 'b,> fn(); // OK(rejected later by ast_validation)
type A = for<'a: 'b +> fn(); // OK (rejected later by ast_validation)
type A = for<'a, T> fn(); // OK (rejected later by ast_validation)
-type A = for<,> fn(); //~ ERROR expected one of `>`, `const`, identifier, or lifetime, found `,`
+type A = for<,> fn(); //~ ERROR expected one of `#`, `>`, `const`, identifier, or lifetime
fn main() {}
-error: expected one of `>`, `const`, identifier, or lifetime, found `,`
+error: expected one of `#`, `>`, `const`, identifier, or lifetime, found `,`
--> $DIR/bounds-lifetime.rs:9:14
|
LL | type A = for<,> fn();
- | ^ expected one of `>`, `const`, identifier, or lifetime
+ | ^ expected one of `#`, `>`, `const`, identifier, or lifetime
error: aborting due to previous error
}
fn main() {
- circular_modules_hello::say_hello();
+ circular_modules_hello::say_hello(); //~ ERROR cannot find function `say_hello` in module
}
error: circular modules: $DIR/circular_modules_hello.rs -> $DIR/circular_modules_main.rs -> $DIR/circular_modules_hello.rs
- --> $DIR/circular_modules_main.rs:2:5
+ --> $DIR/circular_modules_main.rs:2:1
|
LL | mod circular_modules_hello;
- | ^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
-error: aborting due to previous error
+error[E0425]: cannot find function `say_hello` in module `circular_modules_hello`
+ --> $DIR/circular_modules_main.rs:9:29
+ |
+LL | circular_modules_hello::say_hello();
+ | ^^^^^^^^^ not found in `circular_modules_hello`
+ |
+help: possible candidate is found in another module, you can import it into scope
+ |
+LL | use circular_modules_hello::say_hello;
+ |
+
+error: aborting due to 2 previous errors
+For more information about this error, try `rustc --explain E0425`.
fn main() {
let x = || -> i32 22;
- //~^ ERROR expected one of `!`, `(`, `+`, `::`, `<`, or `{`, found `22`
+ //~^ ERROR expected `{`, found `22`
}
-error: expected one of `!`, `(`, `+`, `::`, `<`, or `{`, found `22`
+error: expected `{`, found `22`
--> $DIR/closure-return-syntax.rs:5:23
|
LL | let x = || -> i32 22;
- | ^^ expected one of `!`, `(`, `+`, `::`, `<`, or `{`
+ | ^^
+ | |
+ | expected `{`
+ | help: try placing this code inside a block: `{ 22 }`
error: aborting due to previous error
-# //~ ERROR expected `[`, found `<eof>`
+# //~ ERROR expected one of `!` or `[`, found `<eof>`
-error: expected `[`, found `<eof>`
+error: expected one of `!` or `[`, found `<eof>`
--> $DIR/column-offset-1-based.rs:1:1
|
LL | #
- | ^ expected `[`
+ | ^ expected one of `!` or `[`
error: aborting due to previous error
//~^ ERROR found a documentation comment that doesn't document anything
//~| HELP maybe a comment was intended
;
- //~^ WARNING unnecessary trailing semicolon
- //~| HELP remove this semicolon
}
|
= help: doc comments must come before what they document, maybe a comment was intended with `//`?
-warning: unnecessary trailing semicolon
- --> $DIR/doc-before-semi.rs:5:5
- |
-LL | ;
- | ^ help: remove this semicolon
- |
- = note: `#[warn(redundant_semicolon)]` on by default
-
error: aborting due to previous error
For more information about this error, try `rustc --explain E0585`.
fn main() {
if true /*!*/ {}
- //~^ ERROR expected `{`, found doc comment `/*!*/`
+ //~^ ERROR outer attributes are not allowed on
+ //~| ERROR expected outer doc comment
}
-error: expected `{`, found doc comment `/*!*/`
+error: expected outer doc comment
--> $DIR/doc-comment-in-if-statement.rs:2:13
|
LL | if true /*!*/ {}
- | -- ^^^^^ expected `{`
- | |
- | this `if` expression has a condition, but no block
+ | ^^^^^
+ |
+ = note: inner doc comments like this (starting with `//!` or `/*!`) can only appear before items
+
+error: outer attributes are not allowed on `if` and `else` branches
+ --> $DIR/doc-comment-in-if-statement.rs:2:13
+ |
+LL | if true /*!*/ {}
+ | -- ^^^^^ -- the attributes are attached to this branch
+ | | |
+ | | help: remove the attributes
+ | the branch belongs to this `if`
-error: aborting due to previous error
+error: aborting due to 2 previous errors
--- /dev/null
+fn main() {}
+
+fn syntax() {
+ fn foo() = 42; //~ ERROR function body cannot be `= expression;`
+ fn bar() -> u8 = 42; //~ ERROR function body cannot be `= expression;`
+}
+
+extern {
+ fn foo() = 42; //~ ERROR function body cannot be `= expression;`
+ //~^ ERROR incorrect function inside `extern` block
+ fn bar() -> u8 = 42; //~ ERROR function body cannot be `= expression;`
+ //~^ ERROR incorrect function inside `extern` block
+}
+
+trait Foo {
+ fn foo() = 42; //~ ERROR function body cannot be `= expression;`
+ fn bar() -> u8 = 42; //~ ERROR function body cannot be `= expression;`
+}
+
+impl Foo for () {
+ fn foo() = 42; //~ ERROR function body cannot be `= expression;`
+ fn bar() -> u8 = 42; //~ ERROR function body cannot be `= expression;`
+}
--- /dev/null
+error: function body cannot be `= expression;`
+ --> $DIR/fn-body-eq-expr-semi.rs:4:14
+ |
+LL | fn foo() = 42;
+ | ^^^^^
+ |
+help: surround the expression with `{` and `}` instead of `=` and `;`
+ |
+LL | fn foo() { 42 }
+ | ^ ^
+
+error: function body cannot be `= expression;`
+ --> $DIR/fn-body-eq-expr-semi.rs:5:20
+ |
+LL | fn bar() -> u8 = 42;
+ | ^^^^^
+ |
+help: surround the expression with `{` and `}` instead of `=` and `;`
+ |
+LL | fn bar() -> u8 { 42 }
+ | ^ ^
+
+error: function body cannot be `= expression;`
+ --> $DIR/fn-body-eq-expr-semi.rs:9:14
+ |
+LL | fn foo() = 42;
+ | ^^^^^
+ |
+help: surround the expression with `{` and `}` instead of `=` and `;`
+ |
+LL | fn foo() { 42 }
+ | ^ ^
+
+error: function body cannot be `= expression;`
+ --> $DIR/fn-body-eq-expr-semi.rs:11:20
+ |
+LL | fn bar() -> u8 = 42;
+ | ^^^^^
+ |
+help: surround the expression with `{` and `}` instead of `=` and `;`
+ |
+LL | fn bar() -> u8 { 42 }
+ | ^ ^
+
+error: function body cannot be `= expression;`
+ --> $DIR/fn-body-eq-expr-semi.rs:16:14
+ |
+LL | fn foo() = 42;
+ | ^^^^^
+ |
+help: surround the expression with `{` and `}` instead of `=` and `;`
+ |
+LL | fn foo() { 42 }
+ | ^ ^
+
+error: function body cannot be `= expression;`
+ --> $DIR/fn-body-eq-expr-semi.rs:17:20
+ |
+LL | fn bar() -> u8 = 42;
+ | ^^^^^
+ |
+help: surround the expression with `{` and `}` instead of `=` and `;`
+ |
+LL | fn bar() -> u8 { 42 }
+ | ^ ^
+
+error: function body cannot be `= expression;`
+ --> $DIR/fn-body-eq-expr-semi.rs:21:14
+ |
+LL | fn foo() = 42;
+ | ^^^^^
+ |
+help: surround the expression with `{` and `}` instead of `=` and `;`
+ |
+LL | fn foo() { 42 }
+ | ^ ^
+
+error: function body cannot be `= expression;`
+ --> $DIR/fn-body-eq-expr-semi.rs:22:20
+ |
+LL | fn bar() -> u8 = 42;
+ | ^^^^^
+ |
+help: surround the expression with `{` and `}` instead of `=` and `;`
+ |
+LL | fn bar() -> u8 { 42 }
+ | ^ ^
+
+error: incorrect function inside `extern` block
+ --> $DIR/fn-body-eq-expr-semi.rs:9:8
+ |
+LL | extern {
+ | ------ `extern` blocks define existing foreign functions and functions inside of them cannot have a body
+LL | fn foo() = 42;
+ | ^^^ ----- help: remove the invalid body: `;`
+ | |
+ | cannot have a body
+ |
+ = help: you might have meant to write a function accessible through FFI, which can be done by writing `extern fn` outside of the `extern` block
+ = note: for more information, visit https://doc.rust-lang.org/std/keyword.extern.html
+
+error: incorrect function inside `extern` block
+ --> $DIR/fn-body-eq-expr-semi.rs:11:8
+ |
+LL | extern {
+ | ------ `extern` blocks define existing foreign functions and functions inside of them cannot have a body
+...
+LL | fn bar() -> u8 = 42;
+ | ^^^ ----- help: remove the invalid body: `;`
+ | |
+ | cannot have a body
+ |
+ = help: you might have meant to write a function accessible through FFI, which can be done by writing `extern fn` outside of the `extern` block
+ = note: for more information, visit https://doc.rust-lang.org/std/keyword.extern.html
+
+error: aborting due to 10 previous errors
+
--> $DIR/fn-header-semantic-fail.rs:13:5
|
LL | const async unsafe extern "C" fn ff5() {} // OK.
- | -----^-----^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^-^^^^^------------------------------
| | |
| | `async` because of this
| `const` because of this
--> $DIR/fn-header-semantic-fail.rs:21:9
|
LL | const async unsafe extern "C" fn ft5();
- | -----^-----^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^-^^^^^----------------------------
| | |
| | `async` because of this
| `const` because of this
--> $DIR/fn-header-semantic-fail.rs:34:9
|
LL | const async unsafe extern "C" fn ft5() {}
- | -----^-----^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^-^^^^^------------------------------
| | |
| | `async` because of this
| `const` because of this
--> $DIR/fn-header-semantic-fail.rs:46:9
|
LL | const async unsafe extern "C" fn fi5() {}
- | -----^-----^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^-^^^^^------------------------------
| | |
| | `async` because of this
| `const` because of this
--> $DIR/fn-header-semantic-fail.rs:55:9
|
LL | const async unsafe extern "C" fn fe5();
- | -----^-----^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^-^^^^^----------------------------
| | |
| | `async` because of this
| `const` because of this
| |___- previous doc comment
LL |
LL | #![recursion_limit="100"]
- | ^^^^^^^^^^^^^^^^^^^^^^^^^ not permitted following an outer attibute
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^ not permitted following an outer attribute
|
= note: inner attributes, like `#![no_std]`, annotate the item enclosing them, and are usually found at the beginning of source files. Outer attributes, like `#[test]`, annotate the item following them.
| ---------------------- previous outer attribute
LL |
LL | #![recursion_limit="100"]
- | ^^^^^^^^^^^^^^^^^^^^^^^^^ not permitted following an outer attibute
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^ not permitted following an outer attribute
|
= note: inner attributes, like `#![no_std]`, annotate the item enclosing them, and are usually found at the beginning of source files. Outer attributes, like `#[test]`, annotate the item following them.
-// error-pattern:expected `[`, found `vec`
mod blade_runner {
- #vec[doc(
+ #vec[doc( //~ ERROR expected one of `!` or `[`, found `vec`
brief = "Blade Runner is probably the best movie ever",
desc = "I like that in the world of Blade Runner it is always
raining, and that it's always night time. And Aliens
-error: expected `[`, found `vec`
- --> $DIR/issue-1655.rs:3:6
+error: expected one of `!` or `[`, found `vec`
+ --> $DIR/issue-1655.rs:2:6
|
LL | #vec[doc(
- | ^^^ expected `[`
+ | ^^^ expected one of `!` or `[`
error: aborting due to previous error
--- /dev/null
+// edition:2018
+#![crate_type = "lib"]
+#![feature(type_ascription)]
+use std::future::Future;
+use std::pin::Pin;
+
+// This tests parsing of `x as Y[z]`. It errors, but we want to emit useful
+// diagnostics and recover so that subsequent code also gets useful errors.
+pub fn index_after_as_cast() {
+ vec![1, 2, 3] as Vec<i32>[0];
+ //~^ ERROR: casts cannot be followed by indexing
+ vec![1, 2, 3]: Vec<i32>[0];
+ //~^ ERROR: casts cannot be followed by indexing
+}
+
+pub fn index_after_cast_to_index() {
+ (&[0]) as &[i32][0];
+ //~^ ERROR: casts cannot be followed by indexing
+ (&[0i32]): &[i32; 1][0];
+ //~^ ERROR: casts cannot be followed by indexing
+}
+
+pub fn cast_after_cast() {
+ if 5u64 as i32 as u16 == 0u16 {
+
+ }
+ if 5u64: u64: u64 == 0u64 {
+
+ }
+ let _ = 5u64: u64: u64 as u8 as i8 == 9i8;
+ let _ = 0i32: i32: i32;
+ let _ = 0 as i32: i32;
+ let _ = 0i32: i32 as i32;
+ let _ = 0 as i32 as i32;
+ let _ = 0i32: i32: i32 as u32 as i32;
+}
+
+pub fn cast_cast_method_call() {
+ let _ = 0i32: i32: i32.count_ones();
+ //~^ ERROR: casts cannot be followed by a method call
+ let _ = 0 as i32: i32.count_ones();
+ //~^ ERROR: casts cannot be followed by a method call
+ let _ = 0i32: i32 as i32.count_ones();
+ //~^ ERROR: casts cannot be followed by a method call
+ let _ = 0 as i32 as i32.count_ones();
+ //~^ ERROR: casts cannot be followed by a method call
+ let _ = 0i32: i32: i32 as u32 as i32.count_ones();
+ //~^ ERROR: casts cannot be followed by a method call
+ let _ = 0i32: i32.count_ones(): u32;
+ //~^ ERROR: casts cannot be followed by a method call
+ let _ = 0 as i32.count_ones(): u32;
+ //~^ ERROR: casts cannot be followed by a method call
+ let _ = 0i32: i32.count_ones() as u32;
+ //~^ ERROR: casts cannot be followed by a method call
+ let _ = 0 as i32.count_ones() as u32;
+ //~^ ERROR: casts cannot be followed by a method call
+ let _ = 0i32: i32: i32.count_ones() as u32 as i32;
+ //~^ ERROR: casts cannot be followed by a method call
+}
+
+pub fn multiline_error() {
+ let _ = 0
+ as i32
+ .count_ones();
+ //~^^^ ERROR: casts cannot be followed by a method call
+}
+
+// this tests that the precedence for `!x as Y.Z` is still what we expect
+pub fn precedence() {
+ let x: i32 = &vec![1, 2, 3] as &Vec<i32>[0];
+ //~^ ERROR: casts cannot be followed by indexing
+}
+
+pub fn method_calls() {
+ 0 as i32.max(0);
+ //~^ ERROR: casts cannot be followed by a method call
+ 0: i32.max(0);
+ //~^ ERROR: casts cannot be followed by a method call
+}
+
+pub fn complex() {
+ let _ = format!(
+ "{} and {}",
+ if true { 33 } else { 44 } as i32.max(0),
+ //~^ ERROR: casts cannot be followed by a method call
+ if true { 33 } else { 44 }: i32.max(0)
+ //~^ ERROR: casts cannot be followed by a method call
+ );
+}
+
+pub fn in_condition() {
+ if 5u64 as i32.max(0) == 0 {
+ //~^ ERROR: casts cannot be followed by a method call
+ }
+ if 5u64: u64.max(0) == 0 {
+ //~^ ERROR: casts cannot be followed by a method call
+ }
+}
+
+pub fn inside_block() {
+ let _ = if true {
+ 5u64 as u32.max(0) == 0
+ //~^ ERROR: casts cannot be followed by a method call
+ } else { false };
+ let _ = if true {
+ 5u64: u64.max(0) == 0
+ //~^ ERROR: casts cannot be followed by a method call
+ } else { false };
+}
+
+static bar: &[i32] = &(&[1,2,3] as &[i32][0..1]);
+//~^ ERROR: casts cannot be followed by indexing
+
+static bar2: &[i32] = &(&[1i32,2,3]: &[i32; 3][0..1]);
+//~^ ERROR: casts cannot be followed by indexing
+
+
+pub fn cast_then_try() -> Result<u64,u64> {
+ Err(0u64) as Result<u64,u64>?;
+ //~^ ERROR: casts cannot be followed by ?
+ Err(0u64): Result<u64,u64>?;
+ //~^ ERROR: casts cannot be followed by ?
+ Ok(1)
+}
+
+
+pub fn cast_then_call() {
+ type F = fn(u8);
+ // type ascription won't actually do [unique drop fn type] -> fn(u8) casts.
+ let drop_ptr = drop as fn(u8);
+ drop as F();
+ //~^ ERROR: parenthesized type parameters may only be used with a `Fn` trait [E0214]
+ drop_ptr: F();
+ //~^ ERROR: parenthesized type parameters may only be used with a `Fn` trait [E0214]
+}
+
+pub fn cast_to_fn_should_work() {
+ let drop_ptr = drop as fn(u8);
+ drop as fn(u8);
+ drop_ptr: fn(u8);
+}
+
+pub fn parens_after_cast_error() {
+ let drop_ptr = drop as fn(u8);
+ drop as fn(u8)(0);
+ //~^ ERROR: casts cannot be followed by a function call
+ drop_ptr: fn(u8)(0);
+ //~^ ERROR: casts cannot be followed by a function call
+}
+
+pub async fn cast_then_await() {
+ Box::pin(noop()) as Pin<Box<dyn Future<Output = ()>>>.await;
+ //~^ ERROR: casts cannot be followed by `.await`
+
+ Box::pin(noop()): Pin<Box<_>>.await;
+ //~^ ERROR: casts cannot be followed by `.await`
+}
+
+pub async fn noop() {}
+
+#[derive(Default)]
+pub struct Foo {
+ pub bar: u32,
+}
+
+pub fn struct_field() {
+ Foo::default() as Foo.bar;
+ //~^ ERROR: cannot be followed by a field access
+ Foo::default(): Foo.bar;
+ //~^ ERROR: cannot be followed by a field access
+}
--- /dev/null
+error: casts cannot be followed by indexing
+ --> $DIR/issue-35813-postfix-after-cast.rs:10:5
+ |
+LL | vec![1, 2, 3] as Vec<i32>[0];
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | (vec![1, 2, 3] as Vec<i32>)[0];
+ | ^ ^
+
+error: casts cannot be followed by indexing
+ --> $DIR/issue-35813-postfix-after-cast.rs:12:5
+ |
+LL | vec![1, 2, 3]: Vec<i32>[0];
+ | ^^^^^^^^^^^^^^^^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | (vec![1, 2, 3]: Vec<i32>)[0];
+ | ^ ^
+
+error: casts cannot be followed by indexing
+ --> $DIR/issue-35813-postfix-after-cast.rs:17:5
+ |
+LL | (&[0]) as &[i32][0];
+ | ^^^^^^^^^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | ((&[0]) as &[i32])[0];
+ | ^ ^
+
+error: casts cannot be followed by indexing
+ --> $DIR/issue-35813-postfix-after-cast.rs:19:5
+ |
+LL | (&[0i32]): &[i32; 1][0];
+ | ^^^^^^^^^^^^^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | ((&[0i32]): &[i32; 1])[0];
+ | ^ ^
+
+error: casts cannot be followed by a method call
+ --> $DIR/issue-35813-postfix-after-cast.rs:39:13
+ |
+LL | let _ = 0i32: i32: i32.count_ones();
+ | ^^^^^^^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | let _ = (0i32: i32: i32).count_ones();
+ | ^ ^
+
+error: casts cannot be followed by a method call
+ --> $DIR/issue-35813-postfix-after-cast.rs:41:13
+ |
+LL | let _ = 0 as i32: i32.count_ones();
+ | ^^^^^^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | let _ = (0 as i32: i32).count_ones();
+ | ^ ^
+
+error: casts cannot be followed by a method call
+ --> $DIR/issue-35813-postfix-after-cast.rs:43:13
+ |
+LL | let _ = 0i32: i32 as i32.count_ones();
+ | ^^^^^^^^^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | let _ = (0i32: i32 as i32).count_ones();
+ | ^ ^
+
+error: casts cannot be followed by a method call
+ --> $DIR/issue-35813-postfix-after-cast.rs:45:13
+ |
+LL | let _ = 0 as i32 as i32.count_ones();
+ | ^^^^^^^^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | let _ = (0 as i32 as i32).count_ones();
+ | ^ ^
+
+error: casts cannot be followed by a method call
+ --> $DIR/issue-35813-postfix-after-cast.rs:47:13
+ |
+LL | let _ = 0i32: i32: i32 as u32 as i32.count_ones();
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | let _ = (0i32: i32: i32 as u32 as i32).count_ones();
+ | ^ ^
+
+error: casts cannot be followed by a method call
+ --> $DIR/issue-35813-postfix-after-cast.rs:49:13
+ |
+LL | let _ = 0i32: i32.count_ones(): u32;
+ | ^^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | let _ = (0i32: i32).count_ones(): u32;
+ | ^ ^
+
+error: casts cannot be followed by a method call
+ --> $DIR/issue-35813-postfix-after-cast.rs:51:13
+ |
+LL | let _ = 0 as i32.count_ones(): u32;
+ | ^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | let _ = (0 as i32).count_ones(): u32;
+ | ^ ^
+
+error: casts cannot be followed by a method call
+ --> $DIR/issue-35813-postfix-after-cast.rs:53:13
+ |
+LL | let _ = 0i32: i32.count_ones() as u32;
+ | ^^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | let _ = (0i32: i32).count_ones() as u32;
+ | ^ ^
+
+error: casts cannot be followed by a method call
+ --> $DIR/issue-35813-postfix-after-cast.rs:55:13
+ |
+LL | let _ = 0 as i32.count_ones() as u32;
+ | ^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | let _ = (0 as i32).count_ones() as u32;
+ | ^ ^
+
+error: casts cannot be followed by a method call
+ --> $DIR/issue-35813-postfix-after-cast.rs:57:13
+ |
+LL | let _ = 0i32: i32: i32.count_ones() as u32 as i32;
+ | ^^^^^^^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | let _ = (0i32: i32: i32).count_ones() as u32 as i32;
+ | ^ ^
+
+error: casts cannot be followed by a method call
+ --> $DIR/issue-35813-postfix-after-cast.rs:62:13
+ |
+LL | let _ = 0
+ | _____________^
+LL | | as i32
+ | |______________^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | let _ = (0
+LL | as i32)
+ |
+
+error: casts cannot be followed by indexing
+ --> $DIR/issue-35813-postfix-after-cast.rs:70:18
+ |
+LL | let x: i32 = &vec![1, 2, 3] as &Vec<i32>[0];
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | let x: i32 = (&vec![1, 2, 3] as &Vec<i32>)[0];
+ | ^ ^
+
+error: casts cannot be followed by a method call
+ --> $DIR/issue-35813-postfix-after-cast.rs:75:5
+ |
+LL | 0 as i32.max(0);
+ | ^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | (0 as i32).max(0);
+ | ^ ^
+
+error: casts cannot be followed by a method call
+ --> $DIR/issue-35813-postfix-after-cast.rs:77:5
+ |
+LL | 0: i32.max(0);
+ | ^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | (0: i32).max(0);
+ | ^ ^
+
+error: casts cannot be followed by a method call
+ --> $DIR/issue-35813-postfix-after-cast.rs:92:8
+ |
+LL | if 5u64 as i32.max(0) == 0 {
+ | ^^^^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | if (5u64 as i32).max(0) == 0 {
+ | ^ ^
+
+error: casts cannot be followed by a method call
+ --> $DIR/issue-35813-postfix-after-cast.rs:95:8
+ |
+LL | if 5u64: u64.max(0) == 0 {
+ | ^^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | if (5u64: u64).max(0) == 0 {
+ | ^ ^
+
+error: casts cannot be followed by a method call
+ --> $DIR/issue-35813-postfix-after-cast.rs:102:9
+ |
+LL | 5u64 as u32.max(0) == 0
+ | ^^^^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | (5u64 as u32).max(0) == 0
+ | ^ ^
+
+error: casts cannot be followed by a method call
+ --> $DIR/issue-35813-postfix-after-cast.rs:106:9
+ |
+LL | 5u64: u64.max(0) == 0
+ | ^^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | (5u64: u64).max(0) == 0
+ | ^ ^
+
+error: casts cannot be followed by indexing
+ --> $DIR/issue-35813-postfix-after-cast.rs:111:24
+ |
+LL | static bar: &[i32] = &(&[1,2,3] as &[i32][0..1]);
+ | ^^^^^^^^^^^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | static bar: &[i32] = &((&[1,2,3] as &[i32])[0..1]);
+ | ^ ^
+
+error: casts cannot be followed by indexing
+ --> $DIR/issue-35813-postfix-after-cast.rs:114:25
+ |
+LL | static bar2: &[i32] = &(&[1i32,2,3]: &[i32; 3][0..1]);
+ | ^^^^^^^^^^^^^^^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | static bar2: &[i32] = &((&[1i32,2,3]: &[i32; 3])[0..1]);
+ | ^ ^
+
+error: casts cannot be followed by ?
+ --> $DIR/issue-35813-postfix-after-cast.rs:119:5
+ |
+LL | Err(0u64) as Result<u64,u64>?;
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | (Err(0u64) as Result<u64,u64>)?;
+ | ^ ^
+
+error: casts cannot be followed by ?
+ --> $DIR/issue-35813-postfix-after-cast.rs:121:5
+ |
+LL | Err(0u64): Result<u64,u64>?;
+ | ^^^^^^^^^-^^^^^^^^^^^^^^^^
+ | |
+ | help: maybe write a path separator here: `::`
+ |
+ = note: `#![feature(type_ascription)]` lets you annotate an expression with a type: `<expr>: <type>`
+ = note: see issue #23416 <https://github.com/rust-lang/rust/issues/23416> for more information
+
+error: casts cannot be followed by a function call
+ --> $DIR/issue-35813-postfix-after-cast.rs:145:5
+ |
+LL | drop as fn(u8)(0);
+ | ^^^^^^^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | (drop as fn(u8))(0);
+ | ^ ^
+
+error: casts cannot be followed by a function call
+ --> $DIR/issue-35813-postfix-after-cast.rs:147:5
+ |
+LL | drop_ptr: fn(u8)(0);
+ | ^^^^^^^^^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | (drop_ptr: fn(u8))(0);
+ | ^ ^
+
+error: casts cannot be followed by `.await`
+ --> $DIR/issue-35813-postfix-after-cast.rs:152:5
+ |
+LL | Box::pin(noop()) as Pin<Box<dyn Future<Output = ()>>>.await;
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | (Box::pin(noop()) as Pin<Box<dyn Future<Output = ()>>>).await;
+ | ^ ^
+
+error: casts cannot be followed by `.await`
+ --> $DIR/issue-35813-postfix-after-cast.rs:155:5
+ |
+LL | Box::pin(noop()): Pin<Box<_>>.await;
+ | ^^^^^^^^^^^^^^^^-^^^^^^^^^^^^
+ | |
+ | help: maybe write a path separator here: `::`
+ |
+ = note: `#![feature(type_ascription)]` lets you annotate an expression with a type: `<expr>: <type>`
+ = note: see issue #23416 <https://github.com/rust-lang/rust/issues/23416> for more information
+
+error: casts cannot be followed by a field access
+ --> $DIR/issue-35813-postfix-after-cast.rs:167:5
+ |
+LL | Foo::default() as Foo.bar;
+ | ^^^^^^^^^^^^^^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | (Foo::default() as Foo).bar;
+ | ^ ^
+
+error: casts cannot be followed by a field access
+ --> $DIR/issue-35813-postfix-after-cast.rs:169:5
+ |
+LL | Foo::default(): Foo.bar;
+ | ^^^^^^^^^^^^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | (Foo::default(): Foo).bar;
+ | ^ ^
+
+error: casts cannot be followed by a method call
+ --> $DIR/issue-35813-postfix-after-cast.rs:84:9
+ |
+LL | if true { 33 } else { 44 } as i32.max(0),
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | (if true { 33 } else { 44 } as i32).max(0),
+ | ^ ^
+
+error: casts cannot be followed by a method call
+ --> $DIR/issue-35813-postfix-after-cast.rs:86:9
+ |
+LL | if true { 33 } else { 44 }: i32.max(0)
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ |
+help: try surrounding the expression in parentheses
+ |
+LL | (if true { 33 } else { 44 }: i32).max(0)
+ | ^ ^
+
+error[E0214]: parenthesized type parameters may only be used with a `Fn` trait
+ --> $DIR/issue-35813-postfix-after-cast.rs:131:13
+ |
+LL | drop as F();
+ | ^^^ only `Fn` traits may use parentheses
+
+error[E0214]: parenthesized type parameters may only be used with a `Fn` trait
+ --> $DIR/issue-35813-postfix-after-cast.rs:133:15
+ |
+LL | drop_ptr: F();
+ | ^^^ only `Fn` traits may use parentheses
+
+error: aborting due to 36 previous errors
+
+For more information about this error, try `rustc --explain E0214`.
error: couldn't read $DIR/../parser: $ACCESS_DENIED_MSG (os error $ACCESS_DENIED_CODE)
- --> $DIR/issue-5806.rs:5:5
+ --> $DIR/issue-5806.rs:5:1
|
LL | mod foo;
- | ^^^
+ | ^^^^^^^^
error: aborting due to previous error
LL | impl W <s(f;Y(;]
| ^ expected one of 7 possible tokens
-error: expected one of `!`, `&&`, `&`, `(`, `)`, `*`, `+`, `,`, `->`, `...`, `::`, `<`, `>`, `?`, `[`, `_`, `dyn`, `extern`, `fn`, `for`, `impl`, `unsafe`, or lifetime, found `;`
+error: expected one of `!`, `&&`, `&`, `(`, `)`, `*`, `+`, `,`, `->`, `...`, `::`, `<`, `>`, `?`, `[`, `_`, `dyn`, `extern`, `fn`, `for`, `impl`, `unsafe`, lifetime, or path, found `;`
--> $DIR/issue-63116.rs:3:15
|
LL | impl W <s(f;Y(;]
-// error-pattern: aborting due to 7 previous errors
+// error-pattern: aborting due to 5 previous errors
fn i(n{...,f #
| | expected `}`
| `..` must be at the end and cannot have a trailing comma
-error: expected `[`, found `}`
+error: expected one of `!` or `[`, found `}`
--> $DIR/issue-63135.rs:3:16
|
LL | fn i(n{...,f #
- | ^ expected `[`
+ | ^ expected one of `!` or `[`
-error: expected one of `:` or `|`, found `)`
- --> $DIR/issue-63135.rs:3:16
- |
-LL | fn i(n{...,f #
- | ^ expected one of `:` or `|`
-
-error: expected `;` or `{`, found `<eof>`
- --> $DIR/issue-63135.rs:3:16
- |
-LL | fn i(n{...,f #
- | ^ expected `;` or `{`
-
-error: aborting due to 7 previous errors
+error: aborting due to 5 previous errors
--- /dev/null
+fn main() {}
+
+type X<'a> = (?'a) +;
+//~^ ERROR `?` may only modify trait bounds, not lifetime bounds
+//~| ERROR at least one trait is required for an object type
+//~| WARN trait objects without an explicit `dyn` are deprecated
--- /dev/null
+error: `?` may only modify trait bounds, not lifetime bounds
+ --> $DIR/issue-68890-2.rs:3:15
+ |
+LL | type X<'a> = (?'a) +;
+ | ^
+
+warning: trait objects without an explicit `dyn` are deprecated
+ --> $DIR/issue-68890-2.rs:3:14
+ |
+LL | type X<'a> = (?'a) +;
+ | ^^^^^^^ help: use `dyn`: `dyn (?'a) +`
+ |
+ = note: `#[warn(bare_trait_objects)]` on by default
+
+error[E0224]: at least one trait is required for an object type
+ --> $DIR/issue-68890-2.rs:3:14
+ |
+LL | type X<'a> = (?'a) +;
+ | ^^^^^^^
+
+error: aborting due to 2 previous errors
+
enum e{A((?'a a+?+l))}
//~^ ERROR `?` may only modify trait bounds, not lifetime bounds
//~| ERROR expected one of `)`, `+`, or `,`
-//~| ERROR expected trait bound, not lifetime bound
+//~| ERROR expected item, found `)`
LL | enum e{A((?'a a+?+l))}
| ^ expected one of `)`, `+`, or `,`
-error: expected trait bound, not lifetime bound
- --> $DIR/issue-68890.rs:1:11
+error: expected item, found `)`
+ --> $DIR/issue-68890.rs:1:21
|
LL | enum e{A((?'a a+?+l))}
- | ^^^
+ | ^ expected item
error: aborting due to 3 previous errors
--- /dev/null
+#![feature(label_break_value)]
+
+fn main() {
+ 'l0 while false {} //~ ERROR labeled expression must be followed by `:`
+ 'l1 for _ in 0..1 {} //~ ERROR labeled expression must be followed by `:`
+ 'l2 loop {} //~ ERROR labeled expression must be followed by `:`
+ 'l3 {} //~ ERROR labeled expression must be followed by `:`
+ 'l4 0; //~ ERROR labeled expression must be followed by `:`
+ //~^ ERROR expected `while`, `for`, `loop` or `{`
+
+ macro_rules! m {
+ ($b:block) => {
+ 'l5 $b; //~ ERROR cannot use a `block` macro fragment here
+ }
+ }
+ m!({}); //~ ERROR labeled expression must be followed by `:`
+}
--- /dev/null
+error: labeled expression must be followed by `:`
+ --> $DIR/labeled-no-colon-expr.rs:4:5
+ |
+LL | 'l0 while false {}
+ | ----^^^^^^^^^^^^^^
+ | | |
+ | | help: add `:` after the label
+ | the label
+ |
+ = note: labels are used before loops and blocks, allowing e.g., `break 'label` to them
+
+error: labeled expression must be followed by `:`
+ --> $DIR/labeled-no-colon-expr.rs:5:5
+ |
+LL | 'l1 for _ in 0..1 {}
+ | ----^^^^^^^^^^^^^^^^
+ | | |
+ | | help: add `:` after the label
+ | the label
+ |
+ = note: labels are used before loops and blocks, allowing e.g., `break 'label` to them
+
+error: labeled expression must be followed by `:`
+ --> $DIR/labeled-no-colon-expr.rs:6:5
+ |
+LL | 'l2 loop {}
+ | ----^^^^^^^
+ | | |
+ | | help: add `:` after the label
+ | the label
+ |
+ = note: labels are used before loops and blocks, allowing e.g., `break 'label` to them
+
+error: labeled expression must be followed by `:`
+ --> $DIR/labeled-no-colon-expr.rs:7:5
+ |
+LL | 'l3 {}
+ | ----^^
+ | | |
+ | | help: add `:` after the label
+ | the label
+ |
+ = note: labels are used before loops and blocks, allowing e.g., `break 'label` to them
+
+error: expected `while`, `for`, `loop` or `{` after a label
+ --> $DIR/labeled-no-colon-expr.rs:8:9
+ |
+LL | 'l4 0;
+ | ^ expected `while`, `for`, `loop` or `{` after a label
+
+error: labeled expression must be followed by `:`
+ --> $DIR/labeled-no-colon-expr.rs:8:9
+ |
+LL | 'l4 0;
+ | ----^
+ | | |
+ | | help: add `:` after the label
+ | the label
+ |
+ = note: labels are used before loops and blocks, allowing e.g., `break 'label` to them
+
+error: cannot use a `block` macro fragment here
+ --> $DIR/labeled-no-colon-expr.rs:13:17
+ |
+LL | 'l5 $b;
+ | ----^^
+ | |
+ | the `block` fragment is within this context
+...
+LL | m!({});
+ | ------- in this macro invocation
+ |
+ = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info)
+
+error: labeled expression must be followed by `:`
+ --> $DIR/labeled-no-colon-expr.rs:16:8
+ |
+LL | 'l5 $b;
+ | ---- help: add `:` after the label
+ | |
+ | the label
+...
+LL | m!({});
+ | ^^
+ |
+ = note: labels are used before loops and blocks, allowing e.g., `break 'label` to them
+
+error: aborting due to 8 previous errors
+
// `ty` matcher in particular doesn't accept a single lifetime
macro_rules! m {
- ($t: ty) => ( let _: $t; )
+ ($t: ty) => {
+ let _: $t;
+ };
}
fn main() {
- m!('static); //~ ERROR expected type, found `'static`
+ m!('static);
+ //~^ ERROR lifetime in trait object type must be followed by `+`
+ //~| ERROR at least one trait is required for an object type
+ //~| WARN trait objects without an explicit `dyn` are deprecated
}
-error: expected type, found `'static`
- --> $DIR/trait-object-macro-matcher.rs:9:8
+error: lifetime in trait object type must be followed by `+`
+ --> $DIR/trait-object-macro-matcher.rs:11:8
|
LL | m!('static);
- | ^^^^^^^ expected type
+ | ^^^^^^^
-error: aborting due to previous error
+warning: trait objects without an explicit `dyn` are deprecated
+ --> $DIR/trait-object-macro-matcher.rs:11:8
+ |
+LL | m!('static);
+ | ^^^^^^^ help: use `dyn`: `dyn 'static`
+ |
+ = note: `#[warn(bare_trait_objects)]` on by default
+
+error[E0224]: at least one trait is required for an object type
+ --> $DIR/trait-object-macro-matcher.rs:11:8
+ |
+LL | m!('static);
+ | ^^^^^^^
+
+error: aborting due to 2 previous errors
error: unexpected closing delimiter: `}`
--> $DIR/mismatched-delim-brace-empty-block.rs:5:1
|
-LL | fn main() {
- | ___________-
-LL | |
-LL | | }
- | |_- this block is empty, you might have not meant to close it
-LL | let _ = ();
-LL | }
- | ^ unexpected closing delimiter
+LL | }
+ | ^ unexpected closing delimiter
error: aborting due to previous error
// ignore-windows
mod not_a_real_file; //~ ERROR file not found for module `not_a_real_file`
-//~^ HELP name the file either not_a_real_file.rs or not_a_real_file/mod.rs inside the directory
+//~^ HELP to create the module `not_a_real_file`, create file
fn main() {
assert_eq!(mod_file_aux::bar(), 10);
+ //~^ ERROR failed to resolve: use of undeclared type or module `mod_file_aux`
}
error[E0583]: file not found for module `not_a_real_file`
- --> $DIR/mod_file_not_exist.rs:3:5
+ --> $DIR/mod_file_not_exist.rs:3:1
|
LL | mod not_a_real_file;
- | ^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^^^^^^
|
- = help: name the file either not_a_real_file.rs or not_a_real_file/mod.rs inside the directory "$DIR"
+ = help: to create the module `not_a_real_file`, create file "$DIR/not_a_real_file.rs"
-error: aborting due to previous error
+error[E0433]: failed to resolve: use of undeclared type or module `mod_file_aux`
+ --> $DIR/mod_file_not_exist.rs:7:16
+ |
+LL | assert_eq!(mod_file_aux::bar(), 10);
+ | ^^^^^^^^^^^^ use of undeclared type or module `mod_file_aux`
+
+error: aborting due to 2 previous errors
-For more information about this error, try `rustc --explain E0583`.
+Some errors have detailed explanations: E0433, E0583.
+For more information about an error, try `rustc --explain E0433`.
// only-windows
mod not_a_real_file; //~ ERROR file not found for module `not_a_real_file`
-//~^ HELP name the file either not_a_real_file.rs or not_a_real_file\mod.rs inside the directory
+//~^ HELP to create the module `not_a_real_file`, create file
fn main() {
assert_eq!(mod_file_aux::bar(), 10);
+ //~^ ERROR failed to resolve: use of undeclared type or module `mod_file_aux`
}
error[E0583]: file not found for module `not_a_real_file`
- --> $DIR/mod_file_not_exist_windows.rs:3:5
+ --> $DIR/mod_file_not_exist_windows.rs:3:1
|
LL | mod not_a_real_file;
- | ^^^^^^^^^^^^^^^
+ | ^^^^^^^^^^^^^^^^^^^^
|
- = help: name the file either not_a_real_file.rs or not_a_real_file/mod.rs inside the directory "$DIR"
+ = help: to create the module `not_a_real_file`, create file "$DIR/not_a_real_file.rs"
-error: aborting due to previous error
+error[E0433]: failed to resolve: use of undeclared type or module `mod_file_aux`
+ --> $DIR/mod_file_not_exist_windows.rs:7:16
+ |
+LL | assert_eq!(mod_file_aux::bar(), 10);
+ | ^^^^^^^^^^^^ use of undeclared type or module `mod_file_aux`
+
+error: aborting due to 2 previous errors
-For more information about this error, try `rustc --explain E0583`.
+Some errors have detailed explanations: E0433, E0583.
+For more information about an error, try `rustc --explain E0433`.
error: couldn't read $DIR/not_a_real_file.rs: $FILE_NOT_FOUND_MSG (os error 2)
- --> $DIR/mod_file_with_path_attr.rs:4:5
+ --> $DIR/mod_file_with_path_attr.rs:4:1
|
LL | mod m;
- | ^
+ | ^^^^^^
error: aborting due to previous error
+++ /dev/null
-fn main() {
- #[attr] if true {};
- //~^ ERROR cannot find attribute
- //~| ERROR attributes are not yet allowed on `if` expressions
- #[attr] if true {};
- //~^ ERROR cannot find attribute
- //~| ERROR attributes are not yet allowed on `if` expressions
- let _recovery_witness: () = 0; //~ ERROR mismatched types
-}
+++ /dev/null
-error: attributes are not yet allowed on `if` expressions
- --> $DIR/recovery-attr-on-if.rs:2:5
- |
-LL | #[attr] if true {};
- | ^^^^^^^
-
-error: attributes are not yet allowed on `if` expressions
- --> $DIR/recovery-attr-on-if.rs:5:5
- |
-LL | #[attr] if true {};
- | ^^^^^^^
-
-error: cannot find attribute `attr` in this scope
- --> $DIR/recovery-attr-on-if.rs:5:7
- |
-LL | #[attr] if true {};
- | ^^^^
-
-error: cannot find attribute `attr` in this scope
- --> $DIR/recovery-attr-on-if.rs:2:7
- |
-LL | #[attr] if true {};
- | ^^^^
-
-error[E0308]: mismatched types
- --> $DIR/recovery-attr-on-if.rs:8:33
- |
-LL | let _recovery_witness: () = 0;
- | -- ^ expected `()`, found integer
- | |
- | expected due to this
-
-error: aborting due to 5 previous errors
-
-For more information about this error, try `rustc --explain E0308`.
let mut x;
if cond {
- x = &'blk [1,2,3]; //~ ERROR expected `:`, found `[`
+ x = &'blk [1,2,3]; //~ ERROR borrow expressions cannot be annotated with lifetimes
}
}
-error: expected `:`, found `[`
- --> $DIR/regions-out-of-scope-slice.rs:7:19
+error: borrow expressions cannot be annotated with lifetimes
+ --> $DIR/regions-out-of-scope-slice.rs:7:13
|
LL | x = &'blk [1,2,3];
- | ^ expected `:`
+ | ^----^^^^^^^^
+ | |
+ | annotated with lifetime here
+ | help: remove the lifetime annotation
error: aborting due to previous error
--- /dev/null
+// Expansion drives parsing, so conditional compilation will strip
+// out outline modules and we will never attempt parsing them.
+
+// check-pass
+
+fn main() {}
+
+#[cfg(FALSE)]
+mod foo {
+ mod bar {
+ mod baz; // This was an error before.
+ }
+}
fn check<'a>() {
let _: Box<Trait + ('a)>; //~ ERROR parenthesized lifetime bounds are not supported
- let _: Box<('a) + Trait>;
- //~^ ERROR expected type, found `'a`
- //~| ERROR expected `:`, found `)`
+ let _: Box<('a) + Trait>; //~ ERROR lifetime in trait object type must be followed by `+`
}
fn main() {}
LL | let _: Box<Trait + ('a)>;
| ^^^^ help: remove the parentheses
-error: expected `:`, found `)`
- --> $DIR/trait-object-lifetime-parens.rs:9:19
- |
-LL | let _: Box<('a) + Trait>;
- | ^ expected `:`
-
-error: expected type, found `'a`
+error: lifetime in trait object type must be followed by `+`
--> $DIR/trait-object-lifetime-parens.rs:9:17
|
LL | let _: Box<('a) + Trait>;
- | - ^^ expected type
- | |
- | while parsing the type for `_`
+ | ^^
-error: aborting due to 4 previous errors
+error: aborting due to 3 previous errors
trait Trait<'a> {}
+trait Obj {}
+
fn f<T: (Copy) + (?Sized) + (for<'a> Trait<'a>)>() {}
fn main() {
- let _: Box<(Copy) + (?Sized) + (for<'a> Trait<'a>)>;
+ let _: Box<(Obj) + (?Sized) + (for<'a> Trait<'a>)>;
+ //~^ ERROR `?Trait` is not permitted in trait object types
+ //~| ERROR only auto traits can be used as additional traits
+ //~| WARN trait objects without an explicit `dyn` are deprecated
+ let _: Box<(?Sized) + (for<'a> Trait<'a>) + (Obj)>;
//~^ ERROR `?Trait` is not permitted in trait object types
+ //~| ERROR only auto traits can be used as additional traits
//~| WARN trait objects without an explicit `dyn` are deprecated
- let _: Box<(?Sized) + (for<'a> Trait<'a>) + (Copy)>;
- //~^ WARN trait objects without an explicit `dyn` are deprecated
- let _: Box<(for<'a> Trait<'a>) + (Copy) + (?Sized)>;
- //~^ ERROR use of undeclared lifetime name `'a`
- //~| ERROR `?Trait` is not permitted in trait object types
+ let _: Box<(for<'a> Trait<'a>) + (Obj) + (?Sized)>;
+ //~^ ERROR `?Trait` is not permitted in trait object types
+ //~| ERROR only auto traits can be used as additional traits
//~| WARN trait objects without an explicit `dyn` are deprecated
}
error: `?Trait` is not permitted in trait object types
- --> $DIR/trait-object-trait-parens.rs:6:25
+ --> $DIR/trait-object-trait-parens.rs:8:24
|
-LL | let _: Box<(Copy) + (?Sized) + (for<'a> Trait<'a>)>;
- | ^^^^^^^^
+LL | let _: Box<(Obj) + (?Sized) + (for<'a> Trait<'a>)>;
+ | ^^^^^^^^
error: `?Trait` is not permitted in trait object types
- --> $DIR/trait-object-trait-parens.rs:11:47
+ --> $DIR/trait-object-trait-parens.rs:12:17
|
-LL | let _: Box<(for<'a> Trait<'a>) + (Copy) + (?Sized)>;
- | ^^^^^^^^
+LL | let _: Box<(?Sized) + (for<'a> Trait<'a>) + (Obj)>;
+ | ^^^^^^
+
+error: `?Trait` is not permitted in trait object types
+ --> $DIR/trait-object-trait-parens.rs:16:46
+ |
+LL | let _: Box<(for<'a> Trait<'a>) + (Obj) + (?Sized)>;
+ | ^^^^^^^^
warning: trait objects without an explicit `dyn` are deprecated
- --> $DIR/trait-object-trait-parens.rs:6:16
+ --> $DIR/trait-object-trait-parens.rs:8:16
|
-LL | let _: Box<(Copy) + (?Sized) + (for<'a> Trait<'a>)>;
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: use `dyn`: `dyn (Copy) + (?Sized) + (for<'a> Trait<'a>)`
+LL | let _: Box<(Obj) + (?Sized) + (for<'a> Trait<'a>)>;
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: use `dyn`: `dyn (Obj) + (?Sized) + (for<'a> Trait<'a>)`
|
= note: `#[warn(bare_trait_objects)]` on by default
warning: trait objects without an explicit `dyn` are deprecated
- --> $DIR/trait-object-trait-parens.rs:9:16
+ --> $DIR/trait-object-trait-parens.rs:12:16
|
-LL | let _: Box<(?Sized) + (for<'a> Trait<'a>) + (Copy)>;
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: use `dyn`: `dyn (?Sized) + (for<'a> Trait<'a>) + (Copy)`
+LL | let _: Box<(?Sized) + (for<'a> Trait<'a>) + (Obj)>;
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: use `dyn`: `dyn (?Sized) + (for<'a> Trait<'a>) + (Obj)`
warning: trait objects without an explicit `dyn` are deprecated
- --> $DIR/trait-object-trait-parens.rs:11:16
+ --> $DIR/trait-object-trait-parens.rs:16:16
+ |
+LL | let _: Box<(for<'a> Trait<'a>) + (Obj) + (?Sized)>;
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: use `dyn`: `dyn (for<'a> Trait<'a>) + (Obj) + (?Sized)`
+
+error[E0225]: only auto traits can be used as additional traits in a trait object
+ --> $DIR/trait-object-trait-parens.rs:8:35
+ |
+LL | let _: Box<(Obj) + (?Sized) + (for<'a> Trait<'a>)>;
+ | ----- ^^^^^^^^^^^^^^^^^^^
+ | | |
+ | | additional non-auto trait
+ | | trait alias used in trait object type (additional use)
+ | first non-auto trait
+ | trait alias used in trait object type (first use)
+
+error[E0225]: only auto traits can be used as additional traits in a trait object
+ --> $DIR/trait-object-trait-parens.rs:12:49
|
-LL | let _: Box<(for<'a> Trait<'a>) + (Copy) + (?Sized)>;
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: use `dyn`: `dyn (for<'a> Trait<'a>) + (Copy) + (?Sized)`
+LL | let _: Box<(?Sized) + (for<'a> Trait<'a>) + (Obj)>;
+ | ------------------- ^^^^^
+ | | |
+ | | additional non-auto trait
+ | | trait alias used in trait object type (additional use)
+ | first non-auto trait
+ | trait alias used in trait object type (first use)
-error[E0261]: use of undeclared lifetime name `'a`
- --> $DIR/trait-object-trait-parens.rs:11:31
+error[E0225]: only auto traits can be used as additional traits in a trait object
+ --> $DIR/trait-object-trait-parens.rs:16:38
|
-LL | fn main() {
- | - help: consider introducing lifetime `'a` here: `<'a>`
-...
-LL | let _: Box<(for<'a> Trait<'a>) + (Copy) + (?Sized)>;
- | ^^ undeclared lifetime
+LL | let _: Box<(for<'a> Trait<'a>) + (Obj) + (?Sized)>;
+ | ----------------- ^^^^^
+ | | |
+ | | additional non-auto trait
+ | | trait alias used in trait object type (additional use)
+ | first non-auto trait
+ | trait alias used in trait object type (first use)
-error: aborting due to 3 previous errors
+error: aborting due to 6 previous errors
-For more information about this error, try `rustc --explain E0261`.
+For more information about this error, try `rustc --explain E0225`.
--- /dev/null
+// Test bindings-after-at with box-patterns
+
+// run-pass
+
+#![feature(bindings_after_at)]
+#![feature(box_patterns)]
+
+#[derive(Debug, PartialEq)]
+enum MatchArm {
+ Arm(usize),
+ Wild,
+}
+
+fn test(x: Option<Box<i32>>) -> MatchArm {
+ match x {
+ ref bar @ Some(box n) if n > 0 => {
+ // bar is a &Option<Box<i32>>
+ assert_eq!(bar, &x);
+
+ MatchArm::Arm(0)
+ },
+ Some(ref bar @ box n) if n < 0 => {
+ // bar is a &Box<i32> here
+ assert_eq!(**bar, n);
+
+ MatchArm::Arm(1)
+ },
+ _ => MatchArm::Wild,
+ }
+}
+
+fn main() {
+ assert_eq!(test(Some(Box::new(2))), MatchArm::Arm(0));
+ assert_eq!(test(Some(Box::new(-1))), MatchArm::Arm(1));
+ assert_eq!(test(Some(Box::new(0))), MatchArm::Wild);
+}
--- /dev/null
+// Test bindings-after-at with or-patterns and box-patterns
+
+// run-pass
+
+#![feature(bindings_after_at)]
+#![feature(or_patterns)]
+#![feature(box_patterns)]
+
+#[derive(Debug, PartialEq)]
+enum MatchArm {
+ Arm(usize),
+ Wild,
+}
+
+#[derive(Debug, PartialEq)]
+enum Test {
+ Foo,
+ Bar,
+ Baz,
+ Qux,
+}
+
+fn test(foo: Option<Box<Test>>) -> MatchArm {
+ match foo {
+ ref bar @ Some(box Test::Foo | box Test::Bar) => {
+ assert_eq!(bar, &foo);
+
+ MatchArm::Arm(0)
+ },
+ Some(ref bar @ box Test::Baz | ref bar @ box Test::Qux) => {
+ assert!(**bar == Test::Baz || **bar == Test::Qux);
+
+ MatchArm::Arm(1)
+ },
+ _ => MatchArm::Wild,
+ }
+}
+
+fn main() {
+ assert_eq!(test(Some(Box::new(Test::Foo))), MatchArm::Arm(0));
+ assert_eq!(test(Some(Box::new(Test::Bar))), MatchArm::Arm(0));
+ assert_eq!(test(Some(Box::new(Test::Baz))), MatchArm::Arm(1));
+ assert_eq!(test(Some(Box::new(Test::Qux))), MatchArm::Arm(1));
+ assert_eq!(test(None), MatchArm::Wild);
+}
--- /dev/null
+// Test bindings-after-at with or-patterns and slice-patterns
+
+// run-pass
+
+#![feature(bindings_after_at)]
+#![feature(or_patterns)]
+
+#[derive(Debug, PartialEq)]
+enum MatchArm {
+ Arm(usize),
+ Wild,
+}
+
+#[derive(Debug, PartialEq)]
+enum Test {
+ Foo,
+ Bar,
+ Baz,
+ Qux,
+}
+
+fn test(foo: &[Option<Test>]) -> MatchArm {
+ match foo {
+ bar @ [Some(Test::Foo), .., Some(Test::Qux | Test::Foo)] => {
+ assert_eq!(bar, foo);
+
+ MatchArm::Arm(0)
+ },
+ [.., bar @ Some(Test::Bar | Test::Qux), _] => {
+ assert!(bar == &Some(Test::Bar) || bar == &Some(Test::Qux));
+
+ MatchArm::Arm(1)
+ },
+ _ => MatchArm::Wild,
+ }
+}
+
+fn main() {
+ let foo = vec![
+ Some(Test::Foo),
+ Some(Test::Bar),
+ Some(Test::Baz),
+ Some(Test::Qux),
+ ];
+
+ // path 1a
+ assert_eq!(test(&foo), MatchArm::Arm(0));
+ // path 1b
+ assert_eq!(test(&[Some(Test::Foo), Some(Test::Bar), Some(Test::Foo)]), MatchArm::Arm(0));
+ // path 2a
+ assert_eq!(test(&foo[..3]), MatchArm::Arm(1));
+ // path 2b
+ assert_eq!(test(&[Some(Test::Bar), Some(Test::Qux), Some(Test::Baz)]), MatchArm::Arm(1));
+ // path 3
+ assert_eq!(test(&foo[1..2]), MatchArm::Wild);
+}
--- /dev/null
+// Test bindings-after-at with or-patterns
+
+// run-pass
+
+#![feature(bindings_after_at)]
+#![feature(or_patterns)]
+
+#[derive(Debug, PartialEq)]
+enum MatchArm {
+ Arm(usize),
+ Wild,
+}
+
+#[derive(Debug, Clone, Copy, PartialEq)]
+enum Test {
+ Foo,
+ Bar,
+ Baz,
+ Qux,
+}
+
+fn test(foo: Option<Test>) -> MatchArm {
+ match foo {
+ bar @ Some(Test::Foo | Test::Bar) => {
+ assert!(bar == Some(Test::Foo) || bar == Some(Test::Bar));
+
+ MatchArm::Arm(0)
+ },
+ Some(_) => MatchArm::Arm(1),
+ _ => MatchArm::Wild,
+ }
+}
+
+fn main() {
+ assert_eq!(test(Some(Test::Foo)), MatchArm::Arm(0));
+ assert_eq!(test(Some(Test::Bar)), MatchArm::Arm(0));
+ assert_eq!(test(Some(Test::Baz)), MatchArm::Arm(1));
+ assert_eq!(test(Some(Test::Qux)), MatchArm::Arm(1));
+ assert_eq!(test(None), MatchArm::Wild);
+}
--- /dev/null
+// Test bindings-after-at with slice-patterns
+
+// run-pass
+
+#![feature(bindings_after_at)]
+
+#[derive(Debug, PartialEq)]
+enum MatchArm {
+ Arm(usize),
+ Wild,
+}
+
+fn test(foo: &[i32]) -> MatchArm {
+ match foo {
+ [bar @ .., n] if n == &5 => {
+ for i in bar {
+ assert!(i < &5);
+ }
+
+ MatchArm::Arm(0)
+ },
+ bar @ [x0, .., xn] => {
+            assert_eq!(x0, &1);
+            assert_eq!(xn, &4);
+ assert_eq!(bar, &[1, 2, 3, 4]);
+
+ MatchArm::Arm(1)
+ },
+ _ => MatchArm::Wild,
+ }
+}
+
+fn main() {
+ let foo = vec![1, 2, 3, 4, 5];
+
+ assert_eq!(test(&foo), MatchArm::Arm(0));
+ assert_eq!(test(&foo[..4]), MatchArm::Arm(1));
+ assert_eq!(test(&foo[0..1]), MatchArm::Wild);
+}
fn match_on_uninhab() {
match uninhab_ref() {
- //~^ ERROR non-exhaustive patterns: type `&'static !` is non-empty
+ //~^ ERROR non-exhaustive patterns: type `&!` is non-empty
}
match uninhab_union() {
-error[E0004]: non-exhaustive patterns: type `&'static !` is non-empty
+error[E0004]: non-exhaustive patterns: type `&!` is non-empty
--> $DIR/always-inhabited-union-ref.rs:23:11
|
LL | match uninhab_ref() {
}
self::baz::A;
self::baz::A::foo();
- self::baz::A::bar(); //~ ERROR: method `bar` is private
+ self::baz::A::bar(); //~ ERROR: associated function `bar` is private
self::baz::A.foo2();
// this used to cause an ICE in privacy traversal.
fn lol() {
bar::A;
bar::A::foo();
- bar::A::bar(); //~ ERROR: method `bar` is private
+ bar::A::bar(); //~ ERROR: associated function `bar` is private
bar::A.foo2();
}
mod foo {
fn test() {
::bar::A::foo();
- ::bar::A::bar(); //~ ERROR: method `bar` is private
+ ::bar::A::bar(); //~ ERROR: associated function `bar` is private
::bar::A.foo2();
::bar::baz::A::foo(); //~ ERROR: module `baz` is private
::bar::baz::A::bar(); //~ ERROR: module `baz` is private
- //~^ ERROR: method `bar` is private
+ //~^ ERROR: associated function `bar` is private
::bar::baz::A.foo2(); //~ ERROR: module `baz` is private
::bar::baz::A.bar2(); //~ ERROR: module `baz` is private
- //~^ ERROR: method `bar2` is private
+ //~^ ERROR: associated function `bar2` is private
let _: isize =
::bar::B::foo(); //~ ERROR: trait `B` is private
LL | trait B {
| ^^^^^^^
-error[E0624]: method `bar` is private
+error[E0624]: associated function `bar` is private
--> $DIR/privacy1.rs:77:9
|
LL | self::baz::A::bar();
| ^^^^^^^^^^^^^^^^^
-error[E0624]: method `bar` is private
+error[E0624]: associated function `bar` is private
--> $DIR/privacy1.rs:95:5
|
LL | bar::A::bar();
| ^^^^^^^^^^^
-error[E0624]: method `bar` is private
+error[E0624]: associated function `bar` is private
--> $DIR/privacy1.rs:102:9
|
LL | ::bar::A::bar();
| ^^^^^^^^^^^^^
-error[E0624]: method `bar` is private
+error[E0624]: associated function `bar` is private
--> $DIR/privacy1.rs:105:9
|
LL | ::bar::baz::A::bar();
| ^^^^^^^^^^^^^^^^^^
-error[E0624]: method `bar2` is private
+error[E0624]: associated function `bar2` is private
--> $DIR/privacy1.rs:108:23
|
LL | ::bar::baz::A.bar2();
LL | use bar::glob::foo;
| ^^^ this function import is private
|
-note: the function import `foo` is defined here
+note: the function import `foo` is defined here...
--> $DIR/privacy2.rs:10:13
|
LL | use foo;
| ^^^
+note: ...and refers to the function `foo` which is defined here
+ --> $DIR/privacy2.rs:14:1
+ |
+LL | pub fn foo() {}
+ | ^^^^^^^^^^^^ consider importing it directly
error: requires `sized` lang_item
fn main() {
let s = a::Foo { x: 1 };
s.bar();
- s.foo(); //~ ERROR method `foo` is private
+ s.foo(); //~ ERROR associated function `foo` is private
}
-error[E0624]: method `foo` is private
+error[E0624]: associated function `foo` is private
--> $DIR/private-impl-method.rs:20:7
|
LL | s.foo();
fn main() {
let nyan : cat = cat(52, 99);
- nyan.nap(); //~ ERROR method `nap` is private
+ nyan.nap(); //~ ERROR associated function `nap` is private
}
-error[E0624]: method `nap` is private
+error[E0624]: associated function `nap` is private
--> $DIR/private-method-cross-crate.rs:7:8
|
LL | nyan.nap();
fn main() {
let x = a::Foo;
- x.f(); //~ ERROR method `f` is private
+ x.f(); //~ ERROR associated function `f` is private
}
-error[E0624]: method `f` is private
+error[E0624]: associated function `f` is private
--> $DIR/private-method-inherited.rs:13:7
|
LL | x.f();
fn main() {
let nyan : kitties::Cat = kitties::cat(52, 99);
- nyan.nap(); //~ ERROR method `nap` is private
+ nyan.nap(); //~ ERROR associated function `nap` is private
}
-error[E0624]: method `nap` is private
+error[E0624]: associated function `nap` is private
--> $DIR/private-method.rs:22:8
|
LL | nyan.nap();
LL | S::default().x;
| ^^^^^^^^^^^^^^
-error[E0624]: method `f` is private
+error[E0624]: associated function `f` is private
--> $DIR/test.rs:32:18
|
LL | S::default().f();
| ^
-error[E0624]: method `g` is private
+error[E0624]: associated function `g` is private
--> $DIR/test.rs:33:5
|
LL | S::g();
LL | let _ = u.z;
| ^^^
-error[E0624]: method `g` is private
+error[E0624]: associated function `g` is private
--> $DIR/test.rs:45:7
|
LL | u.g();
| ^
-error[E0624]: method `h` is private
+error[E0624]: associated function `h` is private
--> $DIR/test.rs:46:7
|
LL | u.h();
-fn main () { let y : u32 = "z" ; { let x : u32 = "y" ; } }
+fn main() { let y : u32 = "z" ; { let x : u32 = "y" ; } }
#[proc_macro_derive(Unstable)]
pub fn derive(_input: TokenStream) -> TokenStream {
- "unsafe fn foo() -> u32 { ::std::intrinsics::init() }".parse().unwrap()
+ "unsafe fn foo() -> u32 { ::std::intrinsics::abort() }".parse().unwrap()
}
--- /dev/null
+// Test that a proc-macro crate can be built without additional RUSTFLAGS
+// on the musl target
+// override -Ctarget-feature=-crt-static from compiletest
+// compile-flags: -Ctarget-feature=
+// ignore-wasm32
+// build-pass
+#![crate_type = "proc-macro"]
+
+extern crate proc_macro;
+
+use proc_macro::TokenStream;
+
+#[proc_macro_derive(Foo)]
+pub fn derive_foo(input: TokenStream) -> TokenStream {
+ input
+}
-PRINT-BANG INPUT (DISPLAY): struct M ($crate :: S) ;
+PRINT-BANG INPUT (DISPLAY): struct M($crate :: S) ;
PRINT-BANG INPUT (DEBUG): TokenStream [
Ident {
ident: "struct",
},
]
PRINT-ATTR INPUT (DISPLAY): struct A(crate::S);
-PRINT-ATTR RE-COLLECTED (DISPLAY): struct A ($crate :: S) ;
+PRINT-ATTR RE-COLLECTED (DISPLAY): struct A($crate :: S) ;
PRINT-ATTR INPUT (DEBUG): TokenStream [
Ident {
ident: "struct",
PRINT-ATTR INPUT (DISPLAY): struct A(identity!(crate :: S));
-PRINT-ATTR RE-COLLECTED (DISPLAY): struct A (identity ! ($crate :: S)) ;
+PRINT-ATTR RE-COLLECTED (DISPLAY): struct A(identity ! ($crate :: S)) ;
PRINT-ATTR INPUT (DEBUG): TokenStream [
Ident {
ident: "struct",
},
]
PRINT-ATTR INPUT (DISPLAY): struct B(identity!(::dollar_crate_external :: S));
-PRINT-ATTR RE-COLLECTED (DISPLAY): struct B (identity ! ($crate :: S)) ;
+PRINT-ATTR RE-COLLECTED (DISPLAY): struct B(identity ! ($crate :: S)) ;
PRINT-ATTR INPUT (DEBUG): TokenStream [
Ident {
ident: "struct",
-PRINT-BANG INPUT (DISPLAY): struct M ($crate :: S) ;
+PRINT-BANG INPUT (DISPLAY): struct M($crate :: S) ;
PRINT-BANG INPUT (DEBUG): TokenStream [
Ident {
ident: "struct",
},
]
PRINT-ATTR INPUT (DISPLAY): struct A(crate::S);
-PRINT-ATTR RE-COLLECTED (DISPLAY): struct A ($crate :: S) ;
+PRINT-ATTR RE-COLLECTED (DISPLAY): struct A($crate :: S) ;
PRINT-ATTR INPUT (DEBUG): TokenStream [
Ident {
ident: "struct",
},
]
PRINT-DERIVE INPUT (DISPLAY): struct D(crate::S);
-PRINT-DERIVE RE-COLLECTED (DISPLAY): struct D ($crate :: S) ;
+PRINT-DERIVE RE-COLLECTED (DISPLAY): struct D($crate :: S) ;
PRINT-DERIVE INPUT (DEBUG): TokenStream [
Ident {
ident: "struct",
span: #3 bytes(LO..HI),
},
]
-PRINT-BANG INPUT (DISPLAY): struct M ($crate :: S) ;
+PRINT-BANG INPUT (DISPLAY): struct M($crate :: S) ;
PRINT-BANG INPUT (DEBUG): TokenStream [
Ident {
ident: "struct",
},
]
PRINT-ATTR INPUT (DISPLAY): struct A(::dollar_crate_external::S);
-PRINT-ATTR RE-COLLECTED (DISPLAY): struct A ($crate :: S) ;
+PRINT-ATTR RE-COLLECTED (DISPLAY): struct A($crate :: S) ;
PRINT-ATTR INPUT (DEBUG): TokenStream [
Ident {
ident: "struct",
},
]
PRINT-DERIVE INPUT (DISPLAY): struct D(::dollar_crate_external::S);
-PRINT-DERIVE RE-COLLECTED (DISPLAY): struct D ($crate :: S) ;
+PRINT-DERIVE RE-COLLECTED (DISPLAY): struct D($crate :: S) ;
PRINT-DERIVE INPUT (DEBUG): TokenStream [
Ident {
ident: "struct",
LL | #[derive(Unstable)]
| ^^^^^^^^
|
- = note: see issue #29642 <https://github.com/rust-lang/rust/issues/29642> for more information
= help: add `#![feature(rustc_attrs)]` to the crate attributes to enable
= note: this error originates in a derive macro (in Nightly builds, run with -Z macro-backtrace for more info)
| ^^^^^^^^^ not found in this scope
error[E0412]: cannot find type `ItemUse` in crate `$crate`
- --> $DIR/auxiliary/mixed-site-span.rs:14:1
- |
-LL | / pub fn proc_macro_rules(input: TokenStream) -> TokenStream {
-LL | | if input.is_empty() {
-LL | | let id = |s| TokenTree::from(Ident::new(s, Span::mixed_site()));
-LL | | let item_def = id("ItemDef");
-... |
-LL | | }
-LL | | }
- | |_^ not found in `$crate`
- |
- ::: $DIR/mixed-site-span.rs:26:1
- |
-LL | pass_dollar_crate!();
- | --------------------- in this macro invocation
+ --> $DIR/mixed-site-span.rs:26:1
+ |
+LL | pass_dollar_crate!();
+ | ^^^^^^^^^^^^^^^^^^^^^ not found in `$crate`
|
= note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info)
help: possible candidate is found in another module, you can import it into scope
error: hello to you, too!
- --> $DIR/auxiliary/multispan.rs:31:1
- |
-LL | / pub fn hello(input: TokenStream) -> TokenStream {
-LL | | if let Err(diag) = parse(input) {
-LL | | diag.emit();
-LL | | }
-LL | |
-LL | | TokenStream::new()
-LL | | }
- | |_^
- |
- ::: $DIR/multispan.rs:14:5
- |
-LL | hello!(hi);
- | ----------- in this macro invocation
+ --> $DIR/multispan.rs:14:5
+ |
+LL | hello!(hi);
+ | ^^^^^^^^^^^
|
note: found these 'hi's
--> $DIR/multispan.rs:14:12
= note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info)
error: hello to you, too!
- --> $DIR/auxiliary/multispan.rs:31:1
- |
-LL | / pub fn hello(input: TokenStream) -> TokenStream {
-LL | | if let Err(diag) = parse(input) {
-LL | | diag.emit();
-LL | | }
-LL | |
-LL | | TokenStream::new()
-LL | | }
- | |_^
- |
- ::: $DIR/multispan.rs:17:5
- |
-LL | hello!(hi hi);
- | -------------- in this macro invocation
+ --> $DIR/multispan.rs:17:5
+ |
+LL | hello!(hi hi);
+ | ^^^^^^^^^^^^^^
|
note: found these 'hi's
--> $DIR/multispan.rs:17:12
= note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info)
error: hello to you, too!
- --> $DIR/auxiliary/multispan.rs:31:1
- |
-LL | / pub fn hello(input: TokenStream) -> TokenStream {
-LL | | if let Err(diag) = parse(input) {
-LL | | diag.emit();
-LL | | }
-LL | |
-LL | | TokenStream::new()
-LL | | }
- | |_^
- |
- ::: $DIR/multispan.rs:20:5
- |
-LL | hello!(hi hi hi);
- | ----------------- in this macro invocation
+ --> $DIR/multispan.rs:20:5
+ |
+LL | hello!(hi hi hi);
+ | ^^^^^^^^^^^^^^^^^
|
note: found these 'hi's
--> $DIR/multispan.rs:20:12
= note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info)
error: hello to you, too!
- --> $DIR/auxiliary/multispan.rs:31:1
- |
-LL | / pub fn hello(input: TokenStream) -> TokenStream {
-LL | | if let Err(diag) = parse(input) {
-LL | | diag.emit();
-LL | | }
-LL | |
-LL | | TokenStream::new()
-LL | | }
- | |_^
- |
- ::: $DIR/multispan.rs:23:5
- |
-LL | hello!(hi hey hi yo hi beep beep hi hi);
- | ---------------------------------------- in this macro invocation
+ --> $DIR/multispan.rs:23:5
+ |
+LL | hello!(hi hey hi yo hi beep beep hi hi);
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
note: found these 'hi's
--> $DIR/multispan.rs:23:12
= note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info)
error: hello to you, too!
- --> $DIR/auxiliary/multispan.rs:31:1
- |
-LL | / pub fn hello(input: TokenStream) -> TokenStream {
-LL | | if let Err(diag) = parse(input) {
-LL | | diag.emit();
-LL | | }
-LL | |
-LL | | TokenStream::new()
-LL | | }
- | |_^
- |
- ::: $DIR/multispan.rs:24:5
- |
-LL | hello!(hi there, hi how are you? hi... hi.);
- | -------------------------------------------- in this macro invocation
+ --> $DIR/multispan.rs:24:5
+ |
+LL | hello!(hi there, hi how are you? hi... hi.);
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
note: found these 'hi's
--> $DIR/multispan.rs:24:12
= note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info)
error: hello to you, too!
- --> $DIR/auxiliary/multispan.rs:31:1
- |
-LL | / pub fn hello(input: TokenStream) -> TokenStream {
-LL | | if let Err(diag) = parse(input) {
-LL | | diag.emit();
-LL | | }
-LL | |
-LL | | TokenStream::new()
-LL | | }
- | |_^
- |
- ::: $DIR/multispan.rs:25:5
- |
-LL | hello!(whoah. hi di hi di ho);
- | ------------------------------ in this macro invocation
+ --> $DIR/multispan.rs:25:5
+ |
+LL | hello!(whoah. hi di hi di ho);
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
note: found these 'hi's
--> $DIR/multispan.rs:25:19
= note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info)
error: hello to you, too!
- --> $DIR/auxiliary/multispan.rs:31:1
- |
-LL | / pub fn hello(input: TokenStream) -> TokenStream {
-LL | | if let Err(diag) = parse(input) {
-LL | | diag.emit();
-LL | | }
-LL | |
-LL | | TokenStream::new()
-LL | | }
- | |_^
- |
- ::: $DIR/multispan.rs:26:5
- |
-LL | hello!(hi good hi and good bye);
- | -------------------------------- in this macro invocation
+ --> $DIR/multispan.rs:26:5
+ |
+LL | hello!(hi good hi and good bye);
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
note: found these 'hi's
--> $DIR/multispan.rs:26:12
extern crate span_api_tests;
-use span_api_tests::{reemit, assert_fake_source_file, assert_source_file, macro_stringify};
+// FIXME(69775): Investigate `assert_fake_source_file`.
+
+use span_api_tests::{reemit, assert_source_file, macro_stringify};
macro_rules! say_hello {
($macname:ident) => ( $macname! { "Hello, world!" })
assert_source_file! { "Hello, world!" }
}
-say_hello_extern! { assert_fake_source_file }
+say_hello_extern! { assert_source_file }
reemit! {
assert_source_file! { "Hello, world!" }
error: found 2 equal signs, need exactly 3
- --> $DIR/auxiliary/three-equals.rs:42:1
- |
-LL | / pub fn three_equals(input: TokenStream) -> TokenStream {
-LL | | if let Err(diag) = parse(input) {
-LL | | diag.emit();
-LL | | return TokenStream::new();
-... |
-LL | | "3".parse().unwrap()
-LL | | }
- | |_^
- |
- ::: $DIR/three-equals.rs:15:5
- |
-LL | three_equals!(==);
- | ------------------ in this macro invocation
+ --> $DIR/three-equals.rs:15:5
+ |
+LL | three_equals!(==);
+ | ^^^^^^^^^^^^^^^^^^
|
= help: input must be: `===`
= note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info)
fn main() {
match 10 {
<S as Tr>::A::f::<u8> => {}
- //~^ ERROR expected unit struct, unit variant or constant, found method `<<S as Tr>::A>::f<u8>`
+ //~^ ERROR expected unit struct, unit variant or constant, found associated function
0 ..= <S as Tr>::A::f::<u8> => {} //~ ERROR only char and numeric types are allowed in range
}
}
-error[E0533]: expected unit struct, unit variant or constant, found method `<<S as Tr>::A>::f<u8>`
+error[E0533]: expected unit struct, unit variant or constant, found associated function `<<S as Tr>::A>::f<u8>`
--> $DIR/qualified-path-params.rs:20:9
|
LL | <S as Tr>::A::f::<u8> => {}
#[lang = "eh_personality"]
extern fn eh_personality() {}
-#[cfg(target_os = "windows")]
-#[lang = "eh_unwind_resume"]
-extern fn eh_unwind_resume() {}
-
// take a reference to any built-in range
fn take_range(_r: &impl RangeBounds<i8>) {}
error: `#[panic_handler]` function required, but not found
error[E0308]: mismatched types
- --> $DIR/issue-54505-no-std.rs:28:16
+ --> $DIR/issue-54505-no-std.rs:24:16
|
LL | take_range(0..1);
| ^^^^
found struct `core::ops::Range<{integer}>`
error[E0308]: mismatched types
- --> $DIR/issue-54505-no-std.rs:33:16
+ --> $DIR/issue-54505-no-std.rs:29:16
|
LL | take_range(1..);
| ^^^
found struct `core::ops::RangeFrom<{integer}>`
error[E0308]: mismatched types
- --> $DIR/issue-54505-no-std.rs:38:16
+ --> $DIR/issue-54505-no-std.rs:34:16
|
LL | take_range(..);
| ^^
found struct `core::ops::RangeFull`
error[E0308]: mismatched types
- --> $DIR/issue-54505-no-std.rs:43:16
+ --> $DIR/issue-54505-no-std.rs:39:16
|
LL | take_range(0..=1);
| ^^^^^
found struct `core::ops::RangeInclusive<{integer}>`
error[E0308]: mismatched types
- --> $DIR/issue-54505-no-std.rs:48:16
+ --> $DIR/issue-54505-no-std.rs:44:16
|
LL | take_range(..5);
| ^^^
found struct `core::ops::RangeTo<{integer}>`
error[E0308]: mismatched types
- --> $DIR/issue-54505-no-std.rs:53:16
+ --> $DIR/issue-54505-no-std.rs:49:16
|
LL | take_range(..=42);
| ^^^^^
println!("allocate({:?})", layout);
}
- let ret = Global.alloc(layout).unwrap_or_else(|_| handle_alloc_error(layout));
+ let (ptr, _) = Global.alloc(layout).unwrap_or_else(|_| handle_alloc_error(layout));
if PRINT {
- println!("allocate({:?}) = {:?}", layout, ret);
+ println!("allocate({:?}) = {:?}", layout, ptr);
}
- ret.cast().as_ptr()
+ ptr.cast().as_ptr()
}
unsafe fn deallocate(ptr: *mut u8, layout: Layout) {
println!("reallocate({:?}, old={:?}, new={:?})", ptr, old, new);
}
-    let ret = Global.realloc(NonNull::new_unchecked(ptr), old, new.size())
+    let (new_ptr, _) = Global.realloc(NonNull::new_unchecked(ptr), old, new.size())
        .unwrap_or_else(|_| handle_alloc_error(
            Layout::from_size_align_unchecked(new.size(), old.align())
        ));
    if PRINT {
        println!("reallocate({:?}, old={:?}, new={:?}) = {:?}",
-                 ptr, old, new, ret);
+                 ptr, old, new, new_ptr);
    }
-    ret.cast().as_ptr()
+    new_ptr.cast().as_ptr()
}
fn idx_to_size(i: usize) -> usize { (i+1) * 10 }
// Test the parse error for an empty recursion_limit
-#![recursion_limit = ""] //~ ERROR `recursion_limit` must be a non-negative integer
- //~| `recursion_limit` must be a non-negative integer
+#![recursion_limit = ""] //~ ERROR `limit` must be a non-negative integer
+ //~| `limit` must be a non-negative integer
fn main() {}
-error: `recursion_limit` must be a non-negative integer
+error: `limit` must be a non-negative integer
--> $DIR/empty.rs:3:1
|
LL | #![recursion_limit = ""]
| ^^^^^^^^^^^^^^^^^^^^^--^
| |
- | `recursion_limit` must be a non-negative integer
+ | `limit` must be a non-negative integer
error: aborting due to previous error
// Test the parse error for an invalid digit in recursion_limit
-#![recursion_limit = "-100"] //~ ERROR `recursion_limit` must be a non-negative integer
+#![recursion_limit = "-100"] //~ ERROR `limit` must be a non-negative integer
//~| not a valid integer
fn main() {}
-error: `recursion_limit` must be a non-negative integer
+error: `limit` must be a non-negative integer
--> $DIR/invalid_digit.rs:3:1
|
LL | #![recursion_limit = "-100"]
// Test the parse error for an overflowing recursion_limit
#![recursion_limit = "999999999999999999999999"]
-//~^ ERROR `recursion_limit` must be a non-negative integer
-//~| `recursion_limit` is too large
+//~^ ERROR `limit` must be a non-negative integer
+//~| `limit` is too large
fn main() {}
-error: `recursion_limit` must be a non-negative integer
+error: `limit` must be a non-negative integer
--> $DIR/overflow.rs:3:1
|
LL | #![recursion_limit = "999999999999999999999999"]
| ^^^^^^^^^^^^^^^^^^^^^--------------------------^
| |
- | `recursion_limit` is too large
+ | `limit` is too large
error: aborting due to previous error
// Test that a `recursion_limit` of 0 is valid
#![recursion_limit = "0"]
-error[E0623]: lifetime mismatch
+error[E0491]: in type `&'b &'a usize`, reference has a longer lifetime than the data it references
--> $DIR/regions-free-region-ordering-caller.rs:11:12
|
-LL | fn call2<'a, 'b>(a: &'a usize, b: &'b usize) {
- | --------- ---------
- | |
- | these two types are declared with different lifetimes...
LL | let z: Option<&'b &'a usize> = None;
- | ^^^^^^^^^^^^^^^^^^^^^ ...but data from `a` flows into `b` here
+ | ^^^^^^^^^^^^^^^^^^^^^
+ |
+note: the pointer is valid for the lifetime `'b` as defined on the function body at 10:14
+ --> $DIR/regions-free-region-ordering-caller.rs:10:14
+ |
+LL | fn call2<'a, 'b>(a: &'a usize, b: &'b usize) {
+ | ^^
+note: but the referenced data is only valid for the lifetime `'a` as defined on the function body at 10:10
+ --> $DIR/regions-free-region-ordering-caller.rs:10:10
+ |
+LL | fn call2<'a, 'b>(a: &'a usize, b: &'b usize) {
+ | ^^
-error[E0623]: lifetime mismatch
+error[E0491]: in type `&'b Paramd<'a>`, reference has a longer lifetime than the data it references
--> $DIR/regions-free-region-ordering-caller.rs:17:12
|
-LL | fn call3<'a, 'b>(a: &'a usize, b: &'b usize) {
- | --------- ---------
- | |
- | these two types are declared with different lifetimes...
-LL | let y: Paramd<'a> = Paramd { x: a };
LL | let z: Option<&'b Paramd<'a>> = None;
- | ^^^^^^^^^^^^^^^^^^^^^^ ...but data from `a` flows into `b` here
+ | ^^^^^^^^^^^^^^^^^^^^^^
+ |
+note: the pointer is valid for the lifetime `'b` as defined on the function body at 15:14
+ --> $DIR/regions-free-region-ordering-caller.rs:15:14
+ |
+LL | fn call3<'a, 'b>(a: &'a usize, b: &'b usize) {
+ | ^^
+note: but the referenced data is only valid for the lifetime `'a` as defined on the function body at 15:10
+ --> $DIR/regions-free-region-ordering-caller.rs:15:10
+ |
+LL | fn call3<'a, 'b>(a: &'a usize, b: &'b usize) {
+ | ^^
-error[E0623]: lifetime mismatch
+error[E0491]: in type `&'a &'b usize`, reference has a longer lifetime than the data it references
--> $DIR/regions-free-region-ordering-caller.rs:22:12
|
-LL | fn call4<'a, 'b>(a: &'a usize, b: &'b usize) {
- | --------- --------- these two types are declared with different lifetimes...
LL | let z: Option<&'a &'b usize> = None;
- | ^^^^^^^^^^^^^^^^^^^^^ ...but data from `b` flows into `a` here
+ | ^^^^^^^^^^^^^^^^^^^^^
+ |
+note: the pointer is valid for the lifetime `'a` as defined on the function body at 21:10
+ --> $DIR/regions-free-region-ordering-caller.rs:21:10
+ |
+LL | fn call4<'a, 'b>(a: &'a usize, b: &'b usize) {
+ | ^^
+note: but the referenced data is only valid for the lifetime `'b` as defined on the function body at 21:14
+ --> $DIR/regions-free-region-ordering-caller.rs:21:14
+ |
+LL | fn call4<'a, 'b>(a: &'a usize, b: &'b usize) {
+ | ^^
error: aborting due to 3 previous errors
-For more information about this error, try `rustc --explain E0623`.
+For more information about this error, try `rustc --explain E0491`.
struct Paramd<'a> { x: &'a usize }
fn call2<'a, 'b>(a: &'a usize, b: &'b usize) {
- let z: Option<&'b &'a usize> = None;//[migrate]~ ERROR E0623
+ let z: Option<&'b &'a usize> = None;//[migrate]~ ERROR E0491
//[nll]~^ ERROR lifetime may not live long enough
}
fn call3<'a, 'b>(a: &'a usize, b: &'b usize) {
let y: Paramd<'a> = Paramd { x: a };
- let z: Option<&'b Paramd<'a>> = None;//[migrate]~ ERROR E0623
+ let z: Option<&'b Paramd<'a>> = None;//[migrate]~ ERROR E0491
//[nll]~^ ERROR lifetime may not live long enough
}
fn call4<'a, 'b>(a: &'a usize, b: &'b usize) {
- let z: Option<&'a &'b usize> = None;//[migrate]~ ERROR E0623
+ let z: Option<&'a &'b usize> = None;//[migrate]~ ERROR E0491
//[nll]~^ ERROR lifetime may not live long enough
}
x: isize
}
-fn alloc<'a>(_bcx : &'a arena) -> &'a Bcx<'a> {
+fn alloc(_bcx: &arena) -> &Bcx<'_> {
unsafe {
let layout = Layout::new::<Bcx>();
- let ptr = Global.alloc(layout).unwrap_or_else(|_| handle_alloc_error(layout));
+ let (ptr, _) = Global.alloc(layout).unwrap_or_else(|_| handle_alloc_error(layout));
&*(ptr.as_ptr() as *const _)
}
}
-fn h<'a>(bcx : &'a Bcx<'a>) -> &'a Bcx<'a> {
+fn h<'a>(bcx: &'a Bcx<'a>) -> &'a Bcx<'a> {
return alloc(bcx.fcx.arena);
}
-fn g(fcx : &Fcx) {
- let bcx = Bcx { fcx: fcx };
+fn g(fcx: &Fcx) {
+ let bcx = Bcx { fcx };
let bcx2 = h(&bcx);
unsafe {
Global.dealloc(NonNull::new_unchecked(bcx2 as *const _ as *mut _), Layout::new::<Bcx>());
}
}
-fn f(ccx : &Ccx) {
+fn f(ccx: &Ccx) {
let a = arena(());
- let fcx = Fcx { arena: &a, ccx: ccx };
+ let fcx = Fcx { arena: &a, ccx };
return g(&fcx);
}
error: aborting due to 4 previous errors
+For more information about this error, try `rustc --explain E0693`.
LL | #[rustc_attribute_should_be_reserved]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
- = note: see issue #29642 <https://github.com/rust-lang/rust/issues/29642> for more information
= help: add `#![feature(rustc_attrs)]` to the crate attributes to enable
error: cannot determine resolution for the macro `foo`
--- /dev/null
+// Regression test for issue #63882.
+
+type A = crate::r#break; //~ ERROR cannot find type `r#break` in module `crate`
+
+fn main() {}
--- /dev/null
+error[E0412]: cannot find type `r#break` in module `crate`
+ --> $DIR/raw-ident-in-path.rs:3:17
+ |
+LL | type A = crate::r#break;
+ | ^^^^^^^ not found in `crate`
+
+error: aborting due to previous error
+
+For more information about this error, try `rustc --explain E0412`.
enum Opts {
- A(isize), B(isize), C(isize)
+ A(isize),
+ B(isize),
+ C(isize),
}
fn matcher1(x: Opts) {
match x {
- Opts::A(ref i) | Opts::B(i) => {}
- //~^ ERROR variable `i` is bound in inconsistent ways within the same match arm
- //~^^ ERROR mismatched types
- Opts::C(_) => {}
+ Opts::A(ref i) | Opts::B(i) => {}
+ //~^ ERROR variable `i` is bound inconsistently
+ //~^^ ERROR mismatched types
+ Opts::C(_) => {}
}
}
fn matcher2(x: Opts) {
match x {
- Opts::A(ref i) | Opts::B(i) => {}
- //~^ ERROR variable `i` is bound in inconsistent ways within the same match arm
- //~^^ ERROR mismatched types
- Opts::C(_) => {}
+ Opts::A(ref i) | Opts::B(i) => {}
+ //~^ ERROR variable `i` is bound inconsistently
+ //~^^ ERROR mismatched types
+ Opts::C(_) => {}
}
}
fn matcher4(x: Opts) {
match x {
- Opts::A(ref mut i) | Opts::B(ref i) => {}
- //~^ ERROR variable `i` is bound in inconsistent ways within the same match arm
- //~^^ ERROR mismatched types
- Opts::C(_) => {}
+ Opts::A(ref mut i) | Opts::B(ref i) => {}
+ //~^ ERROR variable `i` is bound inconsistently
+ //~^^ ERROR mismatched types
+ Opts::C(_) => {}
}
}
-
fn matcher5(x: Opts) {
match x {
- Opts::A(ref i) | Opts::B(ref i) => {}
- Opts::C(_) => {}
+ Opts::A(ref i) | Opts::B(ref i) => {}
+ Opts::C(_) => {}
}
}
-error[E0409]: variable `i` is bound in inconsistent ways within the same match arm
- --> $DIR/resolve-inconsistent-binding-mode.rs:7:32
+error[E0409]: variable `i` is bound inconsistently across alternatives separated by `|`
+ --> $DIR/resolve-inconsistent-binding-mode.rs:9:34
|
-LL | Opts::A(ref i) | Opts::B(i) => {}
- | - ^ bound in different ways
- | |
- | first binding
+LL | Opts::A(ref i) | Opts::B(i) => {}
+ | - ^ bound in different ways
+ | |
+ | first binding
-error[E0409]: variable `i` is bound in inconsistent ways within the same match arm
- --> $DIR/resolve-inconsistent-binding-mode.rs:16:32
+error[E0409]: variable `i` is bound inconsistently across alternatives separated by `|`
+ --> $DIR/resolve-inconsistent-binding-mode.rs:18:34
|
-LL | Opts::A(ref i) | Opts::B(i) => {}
- | - ^ bound in different ways
- | |
- | first binding
+LL | Opts::A(ref i) | Opts::B(i) => {}
+ | - ^ bound in different ways
+ | |
+ | first binding
-error[E0409]: variable `i` is bound in inconsistent ways within the same match arm
- --> $DIR/resolve-inconsistent-binding-mode.rs:25:40
+error[E0409]: variable `i` is bound inconsistently across alternatives separated by `|`
+ --> $DIR/resolve-inconsistent-binding-mode.rs:27:42
|
-LL | Opts::A(ref mut i) | Opts::B(ref i) => {}
- | - first binding ^ bound in different ways
+LL | Opts::A(ref mut i) | Opts::B(ref i) => {}
+ | - first binding ^ bound in different ways
error[E0308]: mismatched types
- --> $DIR/resolve-inconsistent-binding-mode.rs:7:32
+ --> $DIR/resolve-inconsistent-binding-mode.rs:9:34
|
LL | match x {
| - this expression has type `Opts`
-LL | Opts::A(ref i) | Opts::B(i) => {}
- | ----- ^ expected `&isize`, found `isize`
- | |
- | first introduced with type `&isize` here
+LL | Opts::A(ref i) | Opts::B(i) => {}
+ | ----- ^ expected `&isize`, found `isize`
+ | |
+ | first introduced with type `&isize` here
|
= note: in the same arm, a binding must have the same type in all alternatives
error[E0308]: mismatched types
- --> $DIR/resolve-inconsistent-binding-mode.rs:16:32
+ --> $DIR/resolve-inconsistent-binding-mode.rs:18:34
|
LL | match x {
| - this expression has type `Opts`
-LL | Opts::A(ref i) | Opts::B(i) => {}
- | ----- ^ expected `&isize`, found `isize`
- | |
- | first introduced with type `&isize` here
+LL | Opts::A(ref i) | Opts::B(i) => {}
+ | ----- ^ expected `&isize`, found `isize`
+ | |
+ | first introduced with type `&isize` here
|
= note: in the same arm, a binding must have the same type in all alternatives
error[E0308]: mismatched types
- --> $DIR/resolve-inconsistent-binding-mode.rs:25:36
+ --> $DIR/resolve-inconsistent-binding-mode.rs:27:38
|
LL | match x {
| - this expression has type `Opts`
-LL | Opts::A(ref mut i) | Opts::B(ref i) => {}
- | --------- ^^^^^ types differ in mutability
- | |
- | first introduced with type `&mut isize` here
+LL | Opts::A(ref mut i) | Opts::B(ref i) => {}
+ | --------- ^^^^^ types differ in mutability
+ | |
+ | first introduced with type `&mut isize` here
|
= note: expected type `&mut isize`
found type `&isize`
(A, B) | (ref B, c) | (c, A) => ()
//~^ ERROR variable `A` is not bound in all patterns
//~| ERROR variable `B` is not bound in all patterns
- //~| ERROR variable `B` is bound in inconsistent ways
+ //~| ERROR variable `B` is bound inconsistently
//~| ERROR mismatched types
//~| ERROR variable `c` is not bound in all patterns
//~| HELP consider making the path in the pattern qualified: `?::A`
| | variable not in all patterns
| pattern doesn't bind `c`
-error[E0409]: variable `B` is bound in inconsistent ways within the same match arm
+error[E0409]: variable `B` is bound inconsistently across alternatives separated by `|`
--> $DIR/resolve-inconsistent-names.rs:19:23
|
LL | (A, B) | (ref B, c) | (c, A) => ()
error[E0308]: mismatched types
--> $DIR/const.rs:14:9
|
+LL | const FOO: Foo = Foo{bar: 5};
+ | ----------------------------- constant defined here
+...
LL | match &f {
| -- this expression has type `&Foo`
LL | FOO => {},
- | ^^^ expected `&Foo`, found struct `Foo`
+ | ^^^
+ | |
+ | expected `&Foo`, found struct `Foo`
+ | `FOO` is interpreted as a constant, not a new binding
+ | help: introduce a new binding instead: `other_foo`
error: aborting due to previous error
let x = &Some((3, 3));
let _: &i32 = match x {
Some((x, 3)) | &Some((ref x, 5)) => x,
- //~^ ERROR is bound in inconsistent ways
+ //~^ ERROR is bound inconsistently
_ => &5i32,
};
}
-error[E0409]: variable `x` is bound in inconsistent ways within the same match arm
+error[E0409]: variable `x` is bound inconsistently across alternatives separated by `|`
--> $DIR/issue-44912-or.rs:6:35
|
LL | Some((x, 3)) | &Some((ref x, 5)) => x,
error: aborting due to previous error
+For more information about this error, try `rustc --explain E0739`.
error: inherent impls cannot be `const`
- --> $DIR/inherent-impl.rs:9:1
+ --> $DIR/inherent-impl.rs:9:12
|
LL | impl const S {}
- | ^^^^^-----^^^^^
+ | ----- ^ inherent impl for this type
| |
| `const` because of this
|
= note: only trait implementations may be annotated with `const`
error: inherent impls cannot be `const`
- --> $DIR/inherent-impl.rs:12:1
+ --> $DIR/inherent-impl.rs:12:12
|
LL | impl const T {}
- | ^^^^^-----^^^^^
+ | ----- ^ inherent impl for this type
| |
| `const` because of this
|
#![warn(macro_use_extern_crate, unused)]
-#[macro_use] //~ WARN should be replaced at use sites with a `use` statement
+#[macro_use] //~ WARN should be replaced at use sites with a `use` item
extern crate macro_use_warned_against;
#[macro_use] //~ WARN unused `#[macro_use]`
extern crate macro_use_warned_against2;
-warning: deprecated `#[macro_use]` directive used to import macros should be replaced at use sites with a `use` statement to import the macro instead
+warning: deprecated `#[macro_use]` attribute used to import macros should be replaced at use sites with a `use` item to import the macro instead
--> $DIR/macro-use-warned-against.rs:7:1
|
LL | #[macro_use]
// needs-sanitizer-support
// only-x86_64
//
-// compile-flags: -Z sanitizer=address -O
+// compile-flags: -Z sanitizer=address -O -g
//
// run-fail
// error-pattern: AddressSanitizer: stack-buffer-overflow
-// error-pattern: 'xs' <== Memory access at offset
+// error-pattern: 'xs' (line 15) <== Memory access at offset
#![feature(test)]
use std::hint::black_box;
-use std::mem;
fn main() {
let xs = [0, 1, 2, 3];
--- /dev/null
+// needs-sanitizer-support
+// only-x86_64
+//
+// compile-flags: -Z sanitizer=address -O
+//
+// run-fail
+// error-pattern: AddressSanitizer: SEGV
+
+use std::ffi::c_void;
+
+extern "C" {
+ fn free(ptr: *mut c_void);
+}
+
+fn main() {
+ unsafe {
+ free(1 as *mut c_void);
+ }
+}
--- /dev/null
+// Regression test for sanitizer function instrumentation passes not
+// being run when compiling with new LLVM pass manager and ThinLTO.
+// Note: The issue occurred only on non-zero opt-level.
+//
+// min-llvm-version 9.0
+// needs-sanitizer-support
+// only-x86_64
+//
+// no-prefer-dynamic
+// revisions: opt0 opt1
+// compile-flags: -Znew-llvm-pass-manager=yes -Zsanitizer=address -Clto=thin
+//[opt0]compile-flags: -Copt-level=0
+//[opt1]compile-flags: -Copt-level=1
+// run-fail
+// error-pattern: ERROR: AddressSanitizer: stack-use-after-scope
+
+static mut P: *mut usize = std::ptr::null_mut();
+
+fn main() {
+ unsafe {
+ {
+ let mut x = 0;
+ P = &mut x;
+ }
+ std::ptr::write_volatile(P, 123);
+ }
+}
--> $DIR/arbitrary_self_types_pin_lifetime_mismatch-async.rs:8:52
|
LL | async fn a(self: Pin<&Foo>, f: &Foo) -> &Foo { f }
- | - - ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | - - ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
| | |
| | let's call the lifetime of this reference `'1`
| let's call the lifetime of this reference `'2`
--> $DIR/arbitrary_self_types_pin_lifetime_mismatch-async.rs:11:75
|
LL | async fn c(self: Pin<&Self>, f: &Foo, g: &Foo) -> (Pin<&Foo>, &Foo) { (self, f) }
- | - - ^^^^^^^^^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | - - ^^^^^^^^^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
| | |
| | let's call the lifetime of this reference `'1`
| let's call the lifetime of this reference `'2`
--> $DIR/arbitrary_self_types_pin_lifetime_mismatch-async.rs:17:64
|
LL | async fn bar<'a>(self: Alias<&Self>, arg: &'a ()) -> &() { arg }
- | -- - ^^^ method was supposed to return data with lifetime `'1` but it is returning data with lifetime `'a`
+ | -- - ^^^ associated function was supposed to return data with lifetime `'1` but it is returning data with lifetime `'a`
| | |
| | let's call the lifetime of this reference `'1`
| lifetime `'a` defined here
--> $DIR/arbitrary_self_types_pin_lifetime_mismatch.rs:6:46
|
LL | fn a(self: Pin<&Foo>, f: &Foo) -> &Foo { f }
- | - - ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | - - ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
| | |
| | let's call the lifetime of this reference `'1`
| let's call the lifetime of this reference `'2`
--> $DIR/arbitrary_self_types_pin_lifetime_mismatch.rs:8:69
|
LL | fn c(self: Pin<&Self>, f: &Foo, g: &Foo) -> (Pin<&Foo>, &Foo) { (self, f) }
- | - - ^^^^^^^^^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | - - ^^^^^^^^^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
| | |
| | let's call the lifetime of this reference `'1`
| let's call the lifetime of this reference `'2`
--> $DIR/arbitrary_self_types_pin_lifetime_mismatch.rs:13:58
|
LL | fn bar<'a>(self: Alias<&Self>, arg: &'a ()) -> &() { arg }
- | -- ---- has type `std::pin::Pin<&'1 Foo>` ^^^ method was supposed to return data with lifetime `'1` but it is returning data with lifetime `'a`
+ | -- ---- has type `std::pin::Pin<&'1 Foo>` ^^^ associated function was supposed to return data with lifetime `'1` but it is returning data with lifetime `'a`
| |
| lifetime `'a` defined here
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/lt-ref-self-async.rs:19:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/lt-ref-self-async.rs:23:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/lt-ref-self-async.rs:27:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/lt-ref-self-async.rs:31:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/lt-ref-self-async.rs:35:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: aborting due to 6 previous errors
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/lt-ref-self.rs:17:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/lt-ref-self.rs:21:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/lt-ref-self.rs:25:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/lt-ref-self.rs:29:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/lt-ref-self.rs:33:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: aborting due to 6 previous errors
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-mut-self-async.rs:19:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-mut-self-async.rs:23:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-mut-self-async.rs:27:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-mut-self-async.rs:31:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-mut-self-async.rs:35:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: aborting due to 6 previous errors
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-mut-self.rs:17:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-mut-self.rs:21:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-mut-self.rs:25:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-mut-self.rs:29:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-mut-self.rs:33:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: aborting due to 6 previous errors
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-mut-struct-async.rs:17:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-mut-struct-async.rs:21:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-mut-struct-async.rs:25:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-mut-struct-async.rs:29:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: aborting due to 5 previous errors
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-mut-struct.rs:15:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-mut-struct.rs:19:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-mut-struct.rs:23:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-mut-struct.rs:27:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: aborting due to 5 previous errors
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-self.rs:27:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-self.rs:31:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-self.rs:35:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-self.rs:39:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-self.rs:43:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-self.rs:47:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: aborting due to 7 previous errors
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-struct-async.rs:17:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-struct-async.rs:21:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-struct-async.rs:25:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-struct-async.rs:29:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: aborting due to 5 previous errors
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-struct.rs:15:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-struct.rs:19:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-struct.rs:23:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: lifetime may not live long enough
--> $DIR/ref-struct.rs:27:9
| |
| let's call the lifetime of this reference `'2`
LL | f
- | ^ method was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
+ | ^ associated function was supposed to return data with lifetime `'2` but it is returning data with lifetime `'1`
error: aborting due to 5 previous errors
LL | use foo::bar::f as g;
| ^^^ this module import is private
|
-note: the module import `bar` is defined here
+note: the module import `bar` is defined here...
--> $DIR/shadowed-use-visibility.rs:4:9
|
LL | use foo as bar;
| ^^^^^^^^^^
+note: ...and refers to the module `foo` which is defined here
+ --> $DIR/shadowed-use-visibility.rs:1:1
+ |
+LL | mod foo {
+ | ^^^^^^^
error[E0603]: module import `f` is private
--> $DIR/shadowed-use-visibility.rs:15:10
LL | use bar::f::f;
| ^ this module import is private
|
-note: the module import `f` is defined here
+note: the module import `f` is defined here...
--> $DIR/shadowed-use-visibility.rs:11:9
|
LL | use foo as f;
| ^^^^^^^^
+note: ...and refers to the module `foo` which is defined here
+ --> $DIR/shadowed-use-visibility.rs:1:1
+ |
+LL | mod foo {
+ | ^^^^^^^
error: aborting due to 2 previous errors
--- /dev/null
+#[macro_export]
+macro_rules! define_parse_error {
+ () => {
+ #[macro_export]
+ macro_rules! parse_error {
+ () => { parse error }
+ }
+ }
+}
--- /dev/null
+extern crate transitive_dep_three;
+
+transitive_dep_three::define_parse_error!();
--- /dev/null
+// Tests that we properly serialize/deserialize spans from transitive dependencies
+// (e.g. imported SourceFiles)
+//
+// The order of these next lines is important, since we need
+// transitive_dep_two.rs to be able to reference transitive_dep_three.rs
+//
+// aux-build: transitive_dep_three.rs
+// aux-build: transitive_dep_two.rs
+// compile-flags: -Z macro-backtrace
+
+extern crate transitive_dep_two;
+
+transitive_dep_two::parse_error!(); //~ ERROR expected one of
--- /dev/null
+error: expected one of `!` or `::`, found `error`
+ --> $DIR/auxiliary/transitive_dep_three.rs:6:27
+ |
+LL | / macro_rules! parse_error {
+LL | | () => { parse error }
+ | | ^^^^^ expected one of `!` or `::`
+LL | | }
+ | |_________- in this expansion of `transitive_dep_two::parse_error!`
+ |
+ ::: $DIR/transitive-dep-span.rs:13:1
+ |
+LL | transitive_dep_two::parse_error!();
+ | -----------------------------------
+ | |
+ | in this macro invocation
+ | in this macro invocation
+
+error: aborting due to previous error
+
LL | let _ = (vec![1,2,3]).into_iter().sum() as f64;
| ^^^
| |
- | cannot infer type for type parameter `S` declared on the method `sum`
+ | cannot infer type for type parameter `S` declared on the associated function `sum`
| help: consider specifying the type argument in the method call: `sum::<S>`
|
= note: type must be known at this point
error: inherent impls cannot be `default`
- --> $DIR/validation.rs:7:1
+ --> $DIR/validation.rs:7:14
|
LL | default impl S {}
- | -------^^^^^^^
+ | ------- ^ inherent impl for this type
| |
| `default` because of this
|
--- /dev/null
+#![feature(rustc_attrs)]
+
+#[rustc_specialization_trait]
+pub trait SpecTrait {
+ fn method(&self);
+}
--- /dev/null
+// Test that associated types in trait objects are not considered to be
+// constrained.
+
+#![feature(min_specialization)]
+
+trait Specializable {
+ fn f();
+}
+
+trait B<T> {
+ type Y;
+}
+
+trait C {
+ type Y;
+}
+
+impl<A: ?Sized> Specializable for A {
+ default fn f() {}
+}
+
+impl<'a, T> Specializable for dyn B<T, Y = T> + 'a {
+ //~^ ERROR specializing impl repeats parameter `T`
+ fn f() {}
+}
+
+impl<'a, T> Specializable for dyn C<Y = (T, T)> + 'a {
+ //~^ ERROR specializing impl repeats parameter `T`
+ fn f() {}
+}
+
+fn main() {}
--- /dev/null
+error: specializing impl repeats parameter `T`
+ --> $DIR/dyn-trait-assoc-types.rs:22:1
+ |
+LL | / impl<'a, T> Specializable for dyn B<T, Y = T> + 'a {
+LL | |
+LL | | fn f() {}
+LL | | }
+ | |_^
+
+error: specializing impl repeats parameter `T`
+ --> $DIR/dyn-trait-assoc-types.rs:27:1
+ |
+LL | / impl<'a, T> Specializable for dyn C<Y = (T, T)> + 'a {
+LL | |
+LL | | fn f() {}
+LL | | }
+ | |_^
+
+error: aborting due to 2 previous errors
+
--- /dev/null
+// Check that specialization traits can't be implemented without a feature.
+
+// gate-test-min_specialization
+
+// aux-build:specialization-trait.rs
+
+extern crate specialization_trait;
+
+struct A {}
+
+impl specialization_trait::SpecTrait for A {
+ //~^ ERROR implementing `rustc_specialization_trait` traits is unstable
+ fn method(&self) {}
+}
+
+fn main() {}
--- /dev/null
+error: implementing `rustc_specialization_trait` traits is unstable
+ --> $DIR/impl_specialization_trait.rs:11:1
+ |
+LL | impl specialization_trait::SpecTrait for A {
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ |
+ = help: add `#![feature(min_specialization)]` to the crate attributes to enable
+
+error: aborting due to previous error
+
--- /dev/null
+// Test that specializing on the well-formed predicates of the trait and
+// self-type of an impl is allowed.
+
+// check-pass
+
+#![feature(min_specialization)]
+
+struct OrdOnly<T: Ord>(T);
+
+trait SpecTrait<U> {
+ fn f();
+}
+
+impl<T, U> SpecTrait<U> for T {
+ default fn f() {}
+}
+
+impl<T: Ord> SpecTrait<()> for OrdOnly<T> {
+ fn f() {}
+}
+
+impl<T: Ord> SpecTrait<OrdOnly<T>> for () {
+ fn f() {}
+}
+
+impl<T: Ord, U: Ord, V: Ord> SpecTrait<(OrdOnly<T>, OrdOnly<U>)> for &[OrdOnly<V>] {
+ fn f() {}
+}
+
+fn main() {}
--- /dev/null
+// Test that projection bounds can't be specialized on.
+
+#![feature(min_specialization)]
+
+trait X {
+ fn f();
+}
+trait Id {
+ type This;
+}
+impl<T> Id for T {
+ type This = T;
+}
+
+impl<T: Id> X for T {
+ default fn f() {}
+}
+
+impl<I, V: Id<This = (I,)>> X for V {
+ //~^ ERROR cannot specialize on
+ fn f() {}
+}
+
+fn main() {}
--- /dev/null
+error: cannot specialize on `Binder(ProjectionPredicate(ProjectionTy { substs: [V], item_def_id: DefId(0:6 ~ repeated_projection_type[317d]::Id[0]::This[0]) }, (I,)))`
+ --> $DIR/repeated_projection_type.rs:19:1
+ |
+LL | / impl<I, V: Id<This = (I,)>> X for V {
+LL | |
+LL | | fn f() {}
+LL | | }
+ | |_^
+
+error: aborting due to previous error
+
--- /dev/null
+// Test that directly specializing on repeated lifetime parameters is not
+// allowed.
+
+#![feature(min_specialization)]
+
+trait X {
+ fn f();
+}
+
+impl<T> X for T {
+ default fn f() {}
+}
+
+impl<'a> X for (&'a u8, &'a u8) {
+ //~^ ERROR specializing impl repeats parameter `'a`
+ fn f() {}
+}
+
+fn main() {}
--- /dev/null
+error: specializing impl repeats parameter `'a`
+ --> $DIR/repeating_lifetimes.rs:14:1
+ |
+LL | / impl<'a> X for (&'a u8, &'a u8) {
+LL | |
+LL | | fn f() {}
+LL | | }
+ | |_^
+
+error: aborting due to previous error
+
--- /dev/null
+// Test that specializing on two type parameters being equal is not allowed.
+
+#![feature(min_specialization)]
+
+trait X {
+ fn f();
+}
+
+impl<T> X for T {
+ default fn f() {}
+}
+impl<T> X for (T, T) {
+ //~^ ERROR specializing impl repeats parameter `T`
+ fn f() {}
+}
+
+fn main() {}
--- /dev/null
+error: specializing impl repeats parameter `T`
+ --> $DIR/repeating_param.rs:12:1
+ |
+LL | / impl<T> X for (T, T) {
+LL | |
+LL | | fn f() {}
+LL | | }
+ | |_^
+
+error: aborting due to previous error
+
--- /dev/null
+// Check that we can specialize on a concrete iterator type. This requires us
+// to consider which parameters in the parent impl are constrained.
+
+// check-pass
+
+#![feature(min_specialization)]
+
+trait SpecFromIter<T> {
+ fn f(&self);
+}
+
+impl<'a, T: 'a, I: Iterator<Item = &'a T>> SpecFromIter<T> for I {
+ default fn f(&self) {}
+}
+
+impl<'a, T> SpecFromIter<T> for std::slice::Iter<'a, T> {
+ fn f(&self) {}
+}
+
+fn main() {}
--- /dev/null
+// Check that lifetime parameters are allowed in specializing impls.
+
+// check-pass
+
+#![feature(min_specialization)]
+
+trait MySpecTrait {
+ fn f();
+}
+
+impl<T> MySpecTrait for T {
+ default fn f() {}
+}
+
+impl<'a, T: ?Sized> MySpecTrait for &'a T {
+ fn f() {}
+}
+
+fn main() {}
--- /dev/null
+// Test that `rustc_unsafe_specialization_marker` is only allowed on marker traits.
+
+#![feature(rustc_attrs)]
+
+#[rustc_unsafe_specialization_marker]
+trait SpecMarker {
+ fn f();
+ //~^ ERROR marker traits
+}
+
+#[rustc_unsafe_specialization_marker]
+trait SpecMarker2 {
+ type X;
+ //~^ ERROR marker traits
+}
+
+fn main() {}
--- /dev/null
+error[E0714]: marker traits cannot have associated items
+ --> $DIR/specialization_marker.rs:7:5
+ |
+LL | fn f();
+ | ^^^^^^^
+
+error[E0714]: marker traits cannot have associated items
+ --> $DIR/specialization_marker.rs:13:5
+ |
+LL | type X;
+ | ^^^^^^^
+
+error: aborting due to 2 previous errors
+
+For more information about this error, try `rustc --explain E0714`.
--- /dev/null
+// Test that supertraits can't be assumed in impls of
+// `rustc_specialization_trait`, as such impls would
+// allow specializing on the supertrait.
+
+#![feature(min_specialization)]
+#![feature(rustc_attrs)]
+
+#[rustc_specialization_trait]
+trait SpecMarker: Default {
+ fn f();
+}
+
+impl<T: Default> SpecMarker for T {
+ //~^ ERROR cannot specialize
+ fn f() {}
+}
+
+fn main() {}
--- /dev/null
+error: cannot specialize on trait `std::default::Default`
+ --> $DIR/specialization_super_trait.rs:13:1
+ |
+LL | / impl<T: Default> SpecMarker for T {
+LL | |
+LL | | fn f() {}
+LL | | }
+ | |_^
+
+error: aborting due to previous error
+
--- /dev/null
+// Test that `rustc_specialization_trait` requires always applicable impls.
+
+#![feature(min_specialization)]
+#![feature(rustc_attrs)]
+
+#[rustc_specialization_trait]
+trait SpecMarker {
+ fn f();
+}
+
+impl SpecMarker for &'static u8 {
+ //~^ ERROR cannot specialize
+ fn f() {}
+}
+
+impl<T> SpecMarker for (T, T) {
+ //~^ ERROR specializing impl
+ fn f() {}
+}
+
+impl<T: Clone> SpecMarker for [T] {
+ //~^ ERROR cannot specialize
+ fn f() {}
+}
+
+fn main() {}
--- /dev/null
+error: cannot specialize on `'static` lifetime
+ --> $DIR/specialization_trait.rs:11:1
+ |
+LL | / impl SpecMarker for &'static u8 {
+LL | |
+LL | | fn f() {}
+LL | | }
+ | |_^
+
+error: specializing impl repeats parameter `T`
+ --> $DIR/specialization_trait.rs:16:1
+ |
+LL | / impl<T> SpecMarker for (T, T) {
+LL | |
+LL | | fn f() {}
+LL | | }
+ | |_^
+
+error: cannot specialize on trait `std::clone::Clone`
+ --> $DIR/specialization_trait.rs:21:1
+ |
+LL | / impl<T: Clone> SpecMarker for [T] {
+LL | |
+LL | | fn f() {}
+LL | | }
+ | |_^
+
+error: aborting due to 3 previous errors
+
--- /dev/null
+// Test that specializing on a `rustc_unsafe_specialization_marker` trait is
+// allowed.
+
+// check-pass
+
+#![feature(min_specialization)]
+#![feature(rustc_attrs)]
+
+#[rustc_unsafe_specialization_marker]
+trait SpecMarker {}
+
+trait X {
+ fn f();
+}
+
+impl<T> X for T {
+ default fn f() {}
+}
+
+impl<T: SpecMarker> X for T {
+ fn f() {}
+}
+
+fn main() {}
--- /dev/null
+// Test that specializing on a `rustc_specialization_trait` trait is allowed.
+
+// check-pass
+
+#![feature(min_specialization)]
+#![feature(rustc_attrs)]
+
+#[rustc_specialization_trait]
+trait SpecTrait {
+ fn g(&self);
+}
+
+trait X {
+ fn f(&self);
+}
+
+impl<T> X for T {
+ default fn f(&self) {}
+}
+
+impl<T: SpecTrait> X for T {
+ fn f(&self) {
+ self.g();
+ }
+}
+
+fn main() {}
--- /dev/null
+// Test that directly specializing on `'static` is not allowed.
+
+#![feature(min_specialization)]
+
+trait X {
+ fn f();
+}
+
+impl<T> X for &'_ T {
+ default fn f() {}
+}
+
+impl X for &'static u8 {
+ //~^ ERROR cannot specialize on `'static` lifetime
+ fn f() {}
+}
+
+fn main() {}
--- /dev/null
+error: cannot specialize on `'static` lifetime
+ --> $DIR/specialize_on_static.rs:13:1
+ |
+LL | / impl X for &'static u8 {
+LL | |
+LL | | fn f() {}
+LL | | }
+ | |_^
+
+error: aborting due to previous error
+
--- /dev/null
+// Test that specializing on a trait is not allowed in general.
+
+#![feature(min_specialization)]
+
+trait SpecMarker {}
+
+trait X {
+ fn f();
+}
+
+impl<T> X for T {
+ default fn f() {}
+}
+
+impl<T: SpecMarker> X for T {
+ //~^ ERROR cannot specialize on trait `SpecMarker`
+ fn f() {}
+}
+
+fn main() {}
--- /dev/null
+error: cannot specialize on trait `SpecMarker`
+ --> $DIR/specialize_on_trait.rs:15:1
+ |
+LL | / impl<T: SpecMarker> X for T {
+LL | |
+LL | | fn f() {}
+LL | | }
+ | |_^
+
+error: aborting due to previous error
+
}
fn main() {
- let _ = a::S::new(); //~ ERROR method `new` is private
+ let _ = a::S::new(); //~ ERROR associated function `new` is private
}
-error[E0624]: method `new` is private
+error[E0624]: associated function `new` is private
--> $DIR/static-method-privacy.rs:9:13
|
LL | let _ = a::S::new();
LL | #[rustc_err]
| ^^^^^^^^^
|
- = note: see issue #29642 <https://github.com/rust-lang/rust/issues/29642> for more information
= help: add `#![feature(rustc_attrs)]` to the crate attributes to enable
error: cannot find attribute `rustc_err` in this scope
--- /dev/null
+#[allow(non_camel_case_types)]
+struct foo;
+struct Thing {
+ foo: String,
+}
+
+fn example(t: Thing) {
+ let Thing { foo } = t; //~ ERROR mismatched types
+}
+
+fn main() {}
--- /dev/null
+error[E0308]: mismatched types
+ --> $DIR/const-in-struct-pat.rs:8:17
+ |
+LL | struct foo;
+ | ----------- unit struct defined here
+...
+LL | let Thing { foo } = t;
+ | ^^^ - this expression has type `Thing`
+ | |
+ | expected struct `std::string::String`, found struct `foo`
+ | `foo` is interpreted as a unit struct, not a new binding
+ | help: bind the struct field to a different name instead: `foo: other_foo`
+
+error: aborting due to previous error
+
+For more information about this error, try `rustc --explain E0308`.
static mut SM = "abc";
//~^ ERROR missing type for `static mut` item
//~| HELP provide a type for the item
-//~| SUGGESTION &'static str
+//~| SUGGESTION &str
--> $DIR/const-no-type.rs:43:12
|
LL | static mut SM = "abc";
- | ^^ help: provide a type for the item: `SM: &'static str`
+ | ^^ help: provide a type for the item: `SM: &str`
error: missing type for `const` item
--> $DIR/const-no-type.rs:14:7
| ----------- method `bat` not found for this
...
LL | f.bat(1.0);
- | ^^^ help: there is a method with a similar name: `bar`
+ | ^^^ help: there is an associated function with a similar name: `bar`
error[E0599]: no method named `is_emtpy` found for struct `std::string::String` in the current scope
--> $DIR/suggest-methods.rs:21:15
|
LL | let _ = s.is_emtpy();
- | ^^^^^^^^ help: there is a method with a similar name: `is_empty`
+ | ^^^^^^^^ help: there is an associated function with a similar name: `is_empty`
error[E0599]: no method named `count_eos` found for type `u32` in the current scope
--> $DIR/suggest-methods.rs:25:19
|
LL | let _ = 63u32.count_eos();
- | ^^^^^^^^^ help: there is a method with a similar name: `count_zeros`
+ | ^^^^^^^^^ help: there is an associated function with a similar name: `count_zeros`
error[E0599]: no method named `count_o` found for type `u32` in the current scope
--> $DIR/suggest-methods.rs:28:19
error[E0658]: negative trait bounds are not yet fully implemented; use marker types for now
- --> $DIR/syntax-trait-polarity-feature-gate.rs:7:1
+ --> $DIR/syntax-trait-polarity-feature-gate.rs:7:6
|
LL | impl !Send for TestType {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ^^^^^
|
= note: see issue #13231 <https://github.com/rust-lang/rust/issues/13231> for more information
= help: add `#![feature(optin_builtin_traits)]` to the crate attributes to enable
error: inherent impls cannot be negative
- --> $DIR/syntax-trait-polarity.rs:7:1
+ --> $DIR/syntax-trait-polarity.rs:7:7
|
LL | impl !TestType {}
- | ^^^^^^^^^^^^^^^^^
+ | -^^^^^^^^ inherent impl for this type
+ | |
+ | negative because of this
error[E0198]: negative impls cannot be unsafe
- --> $DIR/syntax-trait-polarity.rs:12:1
+ --> $DIR/syntax-trait-polarity.rs:12:13
|
LL | unsafe impl !Send for TestType {}
- | ------^^^^^^^^^^^^^^^^^^^^^^^^^^^
- | |
+ | ------ -^^^^
+ | | |
+ | | negative because of this
| unsafe because of this
error: inherent impls cannot be negative
- --> $DIR/syntax-trait-polarity.rs:19:1
+ --> $DIR/syntax-trait-polarity.rs:19:10
|
LL | impl<T> !TestType2<T> {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^
+ | -^^^^^^^^^^^^ inherent impl for this type
+ | |
+ | negative because of this
error[E0198]: negative impls cannot be unsafe
- --> $DIR/syntax-trait-polarity.rs:22:1
+ --> $DIR/syntax-trait-polarity.rs:22:16
|
LL | unsafe impl<T> !Send for TestType2<T> {}
- | ------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- | |
+ | ------ -^^^^
+ | | |
+ | | negative because of this
| unsafe because of this
error[E0192]: negative impls are only allowed for auto traits (e.g., `Send` and `Sync`)
--- /dev/null
+// compile-flags:--test
+// run-pass
+// ignore-emscripten no subprocess support
+
+use std::fmt;
+use std::fmt::{Display, Formatter};
+
+pub struct A(Vec<u32>);
+
+impl Display for A {
+ fn fmt(&self, _f: &mut Formatter<'_>) -> fmt::Result {
+ self.0[0];
+ Ok(())
+ }
+}
+
+#[test]
+fn main() {
+ let result = std::panic::catch_unwind(|| {
+ let a = A(vec![]);
+ eprintln!("{}", a);
+ });
+ assert!(result.is_err());
+}
LL | #[rustc_diagnostic_item = "foomp"]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
- = note: see issue #29642 <https://github.com/rust-lang/rust/issues/29642> for more information
= help: add `#![feature(rustc_attrs)]` to the crate attributes to enable
error: aborting due to previous error
S.c(); // OK
// a, b, c are resolved as inherent items, their traits don't need to be in scope
let c = &S as &dyn C;
- c.a(); //~ ERROR method `a` is private
+ c.a(); //~ ERROR associated function `a` is private
c.b(); // OK
c.c(); // OK
//~^ ERROR no function or associated item named `b` found
S::c(&S); // OK
// a, b, c are resolved as inherent items, their traits don't need to be in scope
- C::a(&S); //~ ERROR method `a` is private
+ C::a(&S); //~ ERROR associated function `a` is private
C::b(&S); // OK
C::c(&S); // OK
}
LL | use method::B;
|
-error[E0624]: method `a` is private
+error[E0624]: associated function `a` is private
--> $DIR/trait-item-privacy.rs:72:7
|
LL | c.a();
LL | use method::B;
|
-error[E0624]: method `a` is private
+error[E0624]: associated function `a` is private
--> $DIR/trait-item-privacy.rs:84:5
|
LL | C::a(&S);
-error[E0624]: method `method` is private
+error[E0624]: associated function `method` is private
--> $DIR/trait-method-private.rs:19:9
|
LL | foo.method();
error[E0197]: inherent impls cannot be unsafe
- --> $DIR/trait-safety-inherent-impl.rs:5:1
+ --> $DIR/trait-safety-inherent-impl.rs:5:13
|
-LL | unsafe impl SomeStruct {
- | ^-----
- | |
- | _unsafe because of this
+LL | unsafe impl SomeStruct {
+ | ------ ^^^^^^^^^^ inherent impl for this type
| |
-LL | | fn foo(self) { }
-LL | | }
- | |_^
+ | unsafe because of this
error: aborting due to previous error
error[E0568]: auto traits cannot have super traits
- --> $DIR/traits-inductive-overflow-supertrait-oibit.rs:7:1
+ --> $DIR/traits-inductive-overflow-supertrait-oibit.rs:7:19
|
LL | auto trait Magic: Copy {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ----- ^^^^ help: remove the super traits
+ | |
+ | auto trait cannot have super traits
error[E0277]: the trait bound `NoClone: std::marker::Copy` is not satisfied
--> $DIR/traits-inductive-overflow-supertrait-oibit.rs:15:23
LL | transmute(x)
| ^^^^^^^^^
|
- = note: source type: `<C as TypeConstructor<'a>>::T` (size can vary because of <C as TypeConstructor>::T)
- = note: target type: `<C as TypeConstructor<'b>>::T` (size can vary because of <C as TypeConstructor>::T)
+ = note: `<C as TypeConstructor>::T` does not have a fixed size
error[E0512]: cannot transmute between types of different sizes, or dependently-sized types
--> $DIR/main.rs:20:17
--- /dev/null
+// check-pass
+// Regression test for issue #55099
+// Tests that we don't incorrectly consider a lifetime to part
+// of the concrete type
+
+#![feature(type_alias_impl_trait)]
+
+trait Future {
+}
+
+struct AndThen<F>(F);
+
+impl<F> Future for AndThen<F> {
+}
+
+struct Foo<'a> {
+ x: &'a mut (),
+}
+
+type F = impl Future;
+
+impl<'a> Foo<'a> {
+ fn reply(&mut self) -> F {
+ AndThen(|| ())
+ }
+}
+
+fn main() {}
use std::marker::PhantomData;
-/* copied Index and TryFrom for convinience (and simplicity) */
+/* copied Index and TryFrom for convenience (and simplicity) */
trait MyIndex<T> {
type O;
fn my_index(self) -> Self::O;
// check-pass
// Regression test for issue #67844
-// Ensures that we properly handle nested TAIT occurences
+// Ensures that we properly handle nested TAIT occurrences
// with generic parameters
#![feature(type_alias_impl_trait)]
LL | .or_else(|err| {
| ^^^^^^^
| |
- | cannot infer type for type parameter `F` declared on the method `or_else`
+ | cannot infer type for type parameter `F` declared on the associated function `or_else`
| help: consider specifying the type arguments in the method call: `or_else::<F, O>`
error: aborting due to previous error
LL | lst.sort_by_key(|&(v, _)| v.iter().sum());
| ^^^^^^^^^^^ --- help: consider specifying the type argument in the method call: `sum::<S>`
| |
- | cannot infer type for type parameter `K` declared on the method `sort_by_key`
+ | cannot infer type for type parameter `K` declared on the associated function `sort_by_key`
error: aborting due to previous error
fn main() {
println!("{}", std::mem:size_of::<BTreeMap<u32, u32>>());
- //~^ ERROR expected one of
+ //~^ ERROR casts cannot be followed by a function call
+ //~| ERROR expected value, found module `std::mem` [E0423]
+ //~| ERROR cannot find type `size_of` in this scope [E0412]
}
-error: expected one of `!`, `,`, or `::`, found `(`
- --> $DIR/issue-54516.rs:4:58
+error: casts cannot be followed by a function call
+ --> $DIR/issue-54516.rs:4:20
|
LL | println!("{}", std::mem:size_of::<BTreeMap<u32, u32>>());
- | - ^ expected one of `!`, `,`, or `::`
+ | ^^^^^^^^-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| |
| help: maybe write a path separator here: `::`
|
= note: `#![feature(type_ascription)]` lets you annotate an expression with a type: `<expr>: <type>`
= note: see issue #23416 <https://github.com/rust-lang/rust/issues/23416> for more information
-error: aborting due to previous error
+error[E0423]: expected value, found module `std::mem`
+ --> $DIR/issue-54516.rs:4:20
+ |
+LL | println!("{}", std::mem:size_of::<BTreeMap<u32, u32>>());
+ | ^^^^^^^^- help: maybe you meant to write a path separator here: `::`
+ | |
+ | not a value
+
+error[E0412]: cannot find type `size_of` in this scope
+ --> $DIR/issue-54516.rs:4:29
+ |
+LL | println!("{}", std::mem:size_of::<BTreeMap<u32, u32>>());
+ | -^^^^^^^ not found in this scope
+ | |
+ | help: maybe you meant to write a path separator here: `::`
+
+error: aborting due to 3 previous errors
+Some errors have detailed explanations: E0412, E0423.
+For more information about an error, try `rustc --explain E0412`.
fn main() {
let u: usize = std::mem:size_of::<u32>();
- //~^ ERROR expected one of
+ //~^ ERROR casts cannot be followed by a function call
+ //~| ERROR expected value, found module `std::mem` [E0423]
+ //~| ERROR cannot find type `size_of` in this scope [E0412]
}
-error: expected one of `!`, `::`, or `;`, found `(`
- --> $DIR/issue-60933.rs:2:43
+error: casts cannot be followed by a function call
+ --> $DIR/issue-60933.rs:2:20
|
LL | let u: usize = std::mem:size_of::<u32>();
- | - ^ expected one of `!`, `::`, or `;`
+ | ^^^^^^^^-^^^^^^^^^^^^^^
| |
| help: maybe write a path separator here: `::`
|
= note: `#![feature(type_ascription)]` lets you annotate an expression with a type: `<expr>: <type>`
= note: see issue #23416 <https://github.com/rust-lang/rust/issues/23416> for more information
-error: aborting due to previous error
+error[E0423]: expected value, found module `std::mem`
+ --> $DIR/issue-60933.rs:2:20
+ |
+LL | let u: usize = std::mem:size_of::<u32>();
+ | ^^^^^^^^- help: maybe you meant to write a path separator here: `::`
+ | |
+ | not a value
+
+error[E0412]: cannot find type `size_of` in this scope
+ --> $DIR/issue-60933.rs:2:29
+ |
+LL | let u: usize = std::mem:size_of::<u32>();
+ | -^^^^^^^ not found in this scope
+ | |
+ | help: maybe you meant to write a path separator here: `::`
+
+error: aborting due to 3 previous errors
+Some errors have detailed explanations: E0412, E0423.
+For more information about an error, try `rustc --explain E0412`.
-// Fix issue 52082: Confusing error if accidentially defining a type paramter with the same name as
+// Fix issue 52082: Confusing error if accidentally defining a type parameter with the same name as
// an existing type
//
// To this end, make sure that when trying to retrieve a field of a (reference to) type parameter,
error[E0568]: auto traits cannot have super traits
- --> $DIR/typeck-auto-trait-no-supertraits-2.rs:3:1
+ --> $DIR/typeck-auto-trait-no-supertraits-2.rs:3:20
|
LL | auto trait Magic : Sized where Option<Self> : Magic {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ----- ^^^^^ help: remove the super traits
+ | |
+ | auto trait cannot have super traits
error: aborting due to previous error
error[E0568]: auto traits cannot have super traits
- --> $DIR/typeck-auto-trait-no-supertraits.rs:27:1
+ --> $DIR/typeck-auto-trait-no-supertraits.rs:27:19
|
LL | auto trait Magic: Copy {}
- | ^^^^^^^^^^^^^^^^^^^^^^^^^
+ | ----- ^^^^ help: remove the super traits
+ | |
+ | auto trait cannot have super traits
error: aborting due to previous error
| ^
| |
| not allowed in type signatures
- | help: replace `_` with the correct type: `&'static str`
+ | help: replace `_` with the correct type: `&str`
error[E0121]: the type placeholder `_` is not allowed within types on item signatures
--> $DIR/typeck_type_placeholder_item.rs:15:15
| ^
| |
| not allowed in type signatures
- | help: replace `_` with the correct type: `&'static str`
+ | help: replace `_` with the correct type: `&str`
error[E0121]: the type placeholder `_` is not allowed within types on item signatures
--> $DIR/typeck_type_placeholder_item.rs:88:22
<u8 as Tr::Y>::NN; //~ ERROR cannot find method or associated constant `NN` in `Tr::Y`
<u8 as E::Y>::NN; //~ ERROR failed to resolve: `Y` is a variant, not a module
- let _: <u8 as Dr>::Z; //~ ERROR expected associated type, found method `Dr::Z`
+ let _: <u8 as Dr>::Z; //~ ERROR expected associated type, found associated function `Dr::Z`
<u8 as Dr>::X; //~ ERROR expected method or associated constant, found associated type `Dr::X`
- let _: <u8 as Dr>::Z::N; //~ ERROR expected associated type, found method `Dr::Z`
+ let _: <u8 as Dr>::Z::N; //~ ERROR expected associated type, found associated function `Dr::Z`
<u8 as Dr>::X::N; //~ ERROR no associated item named `N` found for type `u16`
}
--> $DIR/ufcs-partially-resolved.rs:22:17
|
LL | fn Y() {}
- | --------- similarly named method `Y` defined here
+ | --------- similarly named associated function `Y` defined here
...
LL | <u8 as Tr>::N;
- | ^ help: a method with a similar name exists: `Y`
+ | ^ help: an associated function with a similar name exists: `Y`
error[E0576]: cannot find method or associated constant `N` in enum `E`
--> $DIR/ufcs-partially-resolved.rs:23:16
LL | <u8 as Tr::Y>::NN;
| ^^ not found in `Tr::Y`
-error[E0575]: expected associated type, found method `Dr::Z`
+error[E0575]: expected associated type, found associated function `Dr::Z`
--> $DIR/ufcs-partially-resolved.rs:52:12
|
LL | type X = u16;
--> $DIR/ufcs-partially-resolved.rs:53:5
|
LL | fn Z() {}
- | --------- similarly named method `Z` defined here
+ | --------- similarly named associated function `Z` defined here
...
LL | <u8 as Dr>::X;
| ^^^^^^^^^^^^-
| |
- | help: a method with a similar name exists: `Z`
+ | help: an associated function with a similar name exists: `Z`
|
= note: can't use a type alias as a constructor
-error[E0575]: expected associated type, found method `Dr::Z`
+error[E0575]: expected associated type, found associated function `Dr::Z`
--> $DIR/ufcs-partially-resolved.rs:54:12
|
LL | type X = u16;
error: expected `{`, found `std`
--> $DIR/unsafe-block-without-braces.rs:3:9
|
-LL | unsafe //{
- | - expected `{`
LL | std::mem::transmute::<f32, u32>(1.0);
- | ^^^ unexpected token
+ | ^^^----------------------------------
+ | |
+ | expected `{`
+ | help: try placing this code inside a block: `{ std::mem::transmute::<f32, u32>(1.0); }`
error: aborting due to previous error
-// Check that array elemen types must be Sized. Issue #25692.
+// Check that array element types must be Sized. Issue #25692.
#![allow(dead_code)]
fn main() {
let _ = xc_private_method_lib::Struct::static_meth_struct();
- //~^ ERROR: method `static_meth_struct` is private
+ //~^ ERROR: associated function `static_meth_struct` is private
let _ = xc_private_method_lib::Enum::static_meth_enum();
- //~^ ERROR: method `static_meth_enum` is private
+ //~^ ERROR: associated function `static_meth_enum` is private
}
-error[E0624]: method `static_meth_struct` is private
+error[E0624]: associated function `static_meth_struct` is private
--> $DIR/xc-private-method.rs:6:13
|
LL | let _ = xc_private_method_lib::Struct::static_meth_struct();
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-error[E0624]: method `static_meth_enum` is private
+error[E0624]: associated function `static_meth_enum` is private
--> $DIR/xc-private-method.rs:9:13
|
LL | let _ = xc_private_method_lib::Enum::static_meth_enum();
fn main() {
let _ = xc_private_method_lib::Struct{ x: 10 }.meth_struct();
- //~^ ERROR method `meth_struct` is private
+ //~^ ERROR associated function `meth_struct` is private
let _ = xc_private_method_lib::Enum::Variant1(20).meth_enum();
- //~^ ERROR method `meth_enum` is private
+ //~^ ERROR associated function `meth_enum` is private
}
-error[E0624]: method `meth_struct` is private
+error[E0624]: associated function `meth_struct` is private
--> $DIR/xc-private-method2.rs:6:52
|
LL | let _ = xc_private_method_lib::Struct{ x: 10 }.meth_struct();
| ^^^^^^^^^^^
-error[E0624]: method `meth_enum` is private
+error[E0624]: associated function `meth_enum` is private
--> $DIR/xc-private-method2.rs:9:55
|
LL | let _ = xc_private_method_lib::Enum::Variant1(20).meth_enum();
-Subproject commit e57bd02999c9f40d52116e0beca7d1dccb0643de
+Subproject commit 7019b3ed3d539db7429d10a343b69be8c426b576
-Subproject commit fc5d0cc583cb1cd35d58fdb7f3e0cfa12dccd6c0
+Subproject commit 23549a8c362a403026432f65a6cb398cb10d44b7
let captures = RE.captures(line)?;
match (cfg, captures.name("cfgs")) {
- // Only error messages that contain our `cfg` betweeen the square brackets apply to us.
+ // Only error messages that contain our `cfg` between the square brackets apply to us.
(Some(cfg), Some(filter)) if !filter.as_str().split(',').any(|s| s == cfg) => return None,
(Some(_), Some(_)) => {}
assert!(parse_rs(&config, "// no-system-llvm").ignore);
}
+#[test]
+fn llvm_version() {
+ let mut config = config();
+
+ config.llvm_version = Some("8.1.2-rust".to_owned());
+ assert!(parse_rs(&config, "// min-llvm-version 9.0").ignore);
+
+ config.llvm_version = Some("9.0.1-rust-1.43.0-dev".to_owned());
+ assert!(parse_rs(&config, "// min-llvm-version 9.2").ignore);
+
+ config.llvm_version = Some("9.3.1-rust-1.43.0-dev".to_owned());
+ assert!(!parse_rs(&config, "// min-llvm-version 9.2").ignore);
+
+ // FIXME.
+ // config.llvm_version = Some("10.0.0-rust".to_owned());
+ // assert!(!parse_rs(&config, "// min-llvm-version 9.0").ignore);
+}
+
#[test]
fn ignore_target() {
let mut config = config();
// can turn it back on if needed.
if !self.is_rustdoc()
// Note that we use the local pass mode here as we don't want
- // to set unused to allow if we've overriden the pass mode
+ // to set unused to allow if we've overridden the pass mode
// via command line flags.
&& local_pm != Some(PassMode::Run)
{
Command::new(&nodejs)
.arg(root.join("src/tools/rustdoc-js/tester.js"))
.arg(out_dir.parent().expect("no parent"))
- .arg(&self.testpaths.file.file_stem().expect("couldn't get file stem")),
+ .arg(self.testpaths.file.with_extension("js")),
);
if !res.status.success() {
self.fatal_proc_rec("rustdoc-js test failed!", &res);
let json = cflags.contains("--error-format json")
|| cflags.contains("--error-format pretty-json")
|| cflags.contains("--error-format=json")
- || cflags.contains("--error-format=pretty-json");
+ || cflags.contains("--error-format=pretty-json")
+ || cflags.contains("--output-format json")
+ || cflags.contains("--output-format=json");
let mut normalized = output.to_string();
-Subproject commit 3c444bf6a6cff3b9014005f21cc44995b34862ce
+Subproject commit 0ff05c4cfe534321b194bf3bedf028df92ef519c
#!/usr/bin/env python
# -*- coding: utf-8 -*-
-# This script publishes the new "current" toolstate in the toolstate repo (not to be
-# confused with publishing the test results, which happens in
-# `src/ci/docker/x86_64-gnu-tools/checktools.sh`).
-# It is set as callback for `src/ci/docker/x86_64-gnu-tools/repo.sh` by the CI scripts
-# when a new commit lands on `master` (i.e., after it passed all checks on `auto`).
+# This script computes the new "current" toolstate for the toolstate repo (not to be
+# confused with publishing the test results, which happens in `src/bootstrap/toolstate.rs`).
+# It gets called from `src/ci/publish_toolstate.sh` when a new commit lands on `master`
+# (i.e., after it passed all checks on `auto`).
from __future__ import print_function
const fs = require('fs');
-
-const TEST_FOLDER = 'src/test/rustdoc-js-std/';
+const path = require('path');
function getNextStep(content, pos, stop) {
while (pos < content.length && content[pos] !== stop &&
}
function main(argv) {
- if (argv.length !== 3) {
- console.error("Expected toolchain to check as argument (for example \
- 'x86_64-apple-darwin')");
+ if (argv.length !== 4) {
+ console.error("USAGE: node tester.js STD_DOCS TEST_FOLDER");
return 1;
}
- var toolchain = argv[2];
+ var std_docs = argv[2];
+ var test_folder = argv[3];
- var mainJs = readFileMatching("build/" + toolchain + "/doc/", "main", ".js");
- var ALIASES = readFileMatching("build/" + toolchain + "/doc/", "aliases", ".js");
- var searchIndex = readFileMatching("build/" + toolchain + "/doc/",
- "search-index", ".js").split("\n");
+ var mainJs = readFileMatching(std_docs, "main", ".js");
+ var ALIASES = readFileMatching(std_docs, "aliases", ".js");
+ var searchIndex = readFileMatching(std_docs, "search-index", ".js").split("\n");
if (searchIndex[searchIndex.length - 1].length === 0) {
searchIndex.pop();
}
finalJS = "";
var arraysToLoad = ["itemTypes"];
- var variablesToLoad = ["MAX_LEV_DISTANCE", "MAX_RESULTS",
+ var variablesToLoad = ["MAX_LEV_DISTANCE", "MAX_RESULTS", "NO_TYPE_FILTER",
"GENERICS_DATA", "NAME", "INPUTS_DATA", "OUTPUT_DATA",
"TY_PRIMITIVE", "TY_KEYWORD",
"levenshtein_row2"];
var errors = 0;
- fs.readdirSync(TEST_FOLDER).forEach(function(file) {
- var loadedFile = loadContent(readFile(TEST_FOLDER + file) +
+ fs.readdirSync(test_folder).forEach(function(file) {
+ var loadedFile = loadContent(readFile(path.join(test_folder, file)) +
'exports.QUERY = QUERY;exports.EXPECTED = EXPECTED;');
const expected = loadedFile.EXPECTED;
const query = loadedFile.QUERY;
console.log("OK");
}
});
- return errors;
+ return errors > 0 ? 1 : 0;
}
process.exit(main(process.argv));
const fs = require('fs');
+const path = require('path');
const { spawnSync } = require('child_process');
-const TEST_FOLDER = 'src/test/rustdoc-js/';
-
function getNextStep(content, pos, stop) {
while (pos < content.length && content[pos] !== stop &&
(content[pos] === ' ' || content[pos] === '\t' || content[pos] === '\n')) {
finalJS = "";
var arraysToLoad = ["itemTypes"];
- var variablesToLoad = ["MAX_LEV_DISTANCE", "MAX_RESULTS",
+ var variablesToLoad = ["MAX_LEV_DISTANCE", "MAX_RESULTS", "NO_TYPE_FILTER",
"GENERICS_DATA", "NAME", "INPUTS_DATA", "OUTPUT_DATA",
"TY_PRIMITIVE", "TY_KEYWORD",
"levenshtein_row2"];
var errors = 0;
for (var j = 3; j < argv.length; ++j) {
- const test_name = argv[j];
+ const test_file = argv[j];
+ const test_name = path.basename(test_file, ".js");
process.stdout.write('Checking "' + test_name + '" ... ');
- if (!fs.existsSync(TEST_FOLDER + test_name + ".js")) {
+ if (!fs.existsSync(test_file)) {
errors += 1;
console.error("FAILED");
console.error("==> Missing '" + test_name + ".js' file...");
const test_out_folder = out_folder + test_name;
var [loaded, index] = load_files(test_out_folder, test_name);
- var loadedFile = loadContent(readFile(TEST_FOLDER + test_name + ".js") +
+ var loadedFile = loadContent(readFile(test_file) +
'exports.QUERY = QUERY;exports.EXPECTED = EXPECTED;');
const expected = loadedFile.EXPECTED;
const query = loadedFile.QUERY;
console.log("OK");
}
}
- return errors;
+ return errors > 0 ? 1 : 0;
}
process.exit(main(process.argv));
edition = "2018"
[dependencies]
+cargo_metadata = "0.9.1"
regex = "1"
-serde = { version = "1.0.8", features = ["derive"] }
-serde_json = "1.0.2"
lazy_static = "1"
walkdir = "2"
-//! Checks the licenses of third-party dependencies by inspecting vendors.
+//! Checks the licenses of third-party dependencies.
-use std::collections::{BTreeSet, HashMap, HashSet};
-use std::fs;
+use cargo_metadata::{Metadata, Package, PackageId, Resolve};
+use std::collections::{BTreeSet, HashSet};
use std::path::Path;
-use std::process::Command;
-
-use serde::Deserialize;
-use serde_json;
+/// These are licenses that are allowed for all crates, including the runtime,
+/// rustc, tools, etc.
const LICENSES: &[&str] = &[
"MIT/Apache-2.0",
"MIT / Apache-2.0",
/// should be considered bugs. Exceptions are only allowed in Rust
/// tooling. It is _crucial_ that no exception crates be dependencies
/// of the Rust runtime (std/test).
-const EXCEPTIONS: &[&str] = &[
- "mdbook", // MPL2, mdbook
- "openssl", // BSD+advertising clause, cargo, mdbook
- "pest", // MPL2, mdbook via handlebars
- "arrayref", // BSD-2-Clause, mdbook via handlebars via pest
- "thread-id", // Apache-2.0, mdbook
- "toml-query", // MPL-2.0, mdbook
- "is-match", // MPL-2.0, mdbook
- "cssparser", // MPL-2.0, rustdoc
- "smallvec", // MPL-2.0, rustdoc
- "rdrand", // ISC, mdbook, rustfmt
- "fuchsia-cprng", // BSD-3-Clause, mdbook, rustfmt
- "fuchsia-zircon-sys", // BSD-3-Clause, rustdoc, rustc, cargo
- "fuchsia-zircon", // BSD-3-Clause, rustdoc, rustc, cargo (jobserver & tempdir)
- "cssparser-macros", // MPL-2.0, rustdoc
- "selectors", // MPL-2.0, rustdoc
- "clippy_lints", // MPL-2.0, rls
- "colored", // MPL-2.0, rustfmt
- "ordslice", // Apache-2.0, rls
- "cloudabi", // BSD-2-Clause, (rls -> crossbeam-channel 0.2 -> rand 0.5)
- "ryu", // Apache-2.0, rls/cargo/... (because of serde)
- "bytesize", // Apache-2.0, cargo
- "im-rc", // MPL-2.0+, cargo
- "adler32", // BSD-3-Clause AND Zlib, cargo dep that isn't used
- "constant_time_eq", // CC0-1.0, rustfmt
- "utf8parse", // Apache-2.0 OR MIT, cargo via strip-ansi-escapes
- "vte", // Apache-2.0 OR MIT, cargo via strip-ansi-escapes
- "sized-chunks", // MPL-2.0+, cargo via im-rc
- "bitmaps", // MPL-2.0+, cargo via im-rc
+const EXCEPTIONS: &[(&str, &str)] = &[
+ ("mdbook", "MPL-2.0"), // mdbook
+ ("openssl", "Apache-2.0"), // cargo, mdbook
+ ("arrayref", "BSD-2-Clause"), // mdbook via handlebars via pest
+ ("toml-query", "MPL-2.0"), // mdbook
+ ("toml-query_derive", "MPL-2.0"), // mdbook
+ ("is-match", "MPL-2.0"), // mdbook
+ ("rdrand", "ISC"), // mdbook, rustfmt
+ ("fuchsia-cprng", "BSD-3-Clause"), // mdbook, rustfmt
+ ("fuchsia-zircon-sys", "BSD-3-Clause"), // rustdoc, rustc, cargo
+ ("fuchsia-zircon", "BSD-3-Clause"), // rustdoc, rustc, cargo (jobserver & tempdir)
+ ("colored", "MPL-2.0"), // rustfmt
+ ("ordslice", "Apache-2.0"), // rls
+ ("cloudabi", "BSD-2-Clause"), // (rls -> crossbeam-channel 0.2 -> rand 0.5)
+ ("ryu", "Apache-2.0 OR BSL-1.0"), // rls/cargo/... (because of serde)
+ ("bytesize", "Apache-2.0"), // cargo
+ ("im-rc", "MPL-2.0+"), // cargo
+ ("adler32", "BSD-3-Clause AND Zlib"), // cargo dep that isn't used
+ ("constant_time_eq", "CC0-1.0"), // rustfmt
+ ("sized-chunks", "MPL-2.0+"), // cargo via im-rc
+ ("bitmaps", "MPL-2.0+"), // cargo via im-rc
// FIXME: this dependency violates the documentation comment above:
- "fortanix-sgx-abi", // MPL-2.0+, libstd but only for `sgx` target
- "dunce", // CC0-1.0 mdbook-linkcheck
- "codespan-reporting", // Apache-2.0 mdbook-linkcheck
- "codespan", // Apache-2.0 mdbook-linkcheck
- "crossbeam-channel", // MIT/Apache-2.0 AND BSD-2-Clause, cargo
+ ("fortanix-sgx-abi", "MPL-2.0"), // libstd but only for `sgx` target
+ ("dunce", "CC0-1.0"), // mdbook-linkcheck
+ ("codespan-reporting", "Apache-2.0"), // mdbook-linkcheck
+ ("codespan", "Apache-2.0"), // mdbook-linkcheck
+ ("crossbeam-channel", "MIT/Apache-2.0 AND BSD-2-Clause"), // cargo
];
+/// These are the root crates that are part of the runtime. The licenses for
+/// these and all their dependencies *must not* be in the exception list.
+const RUNTIME_CRATES: &[&str] = &["std", "core", "alloc", "test", "panic_abort", "panic_unwind"];
+
/// Which crates to check against the whitelist?
-const WHITELIST_CRATES: &[CrateVersion<'_>] =
- &[CrateVersion("rustc", "0.0.0"), CrateVersion("rustc_codegen_llvm", "0.0.0")];
+const WHITELIST_CRATES: &[&str] = &["rustc", "rustc_codegen_llvm"];
/// Whitelist of crates rustc is allowed to depend on. Avoid adding to the list if possible.
-const WHITELIST: &[Crate<'_>] = &[
- Crate("adler32"),
- Crate("aho-corasick"),
- Crate("annotate-snippets"),
- Crate("ansi_term"),
- Crate("arrayvec"),
- Crate("atty"),
- Crate("autocfg"),
- Crate("backtrace"),
- Crate("backtrace-sys"),
- Crate("bitflags"),
- Crate("build_const"),
- Crate("byteorder"),
- Crate("c2-chacha"),
- Crate("cc"),
- Crate("cfg-if"),
- Crate("chalk-engine"),
- Crate("chalk-macros"),
- Crate("cloudabi"),
- Crate("cmake"),
- Crate("compiler_builtins"),
- Crate("crc"),
- Crate("crc32fast"),
- Crate("crossbeam-deque"),
- Crate("crossbeam-epoch"),
- Crate("crossbeam-queue"),
- Crate("crossbeam-utils"),
- Crate("datafrog"),
- Crate("dlmalloc"),
- Crate("either"),
- Crate("ena"),
- Crate("env_logger"),
- Crate("filetime"),
- Crate("flate2"),
- Crate("fortanix-sgx-abi"),
- Crate("fuchsia-zircon"),
- Crate("fuchsia-zircon-sys"),
- Crate("getopts"),
- Crate("getrandom"),
- Crate("hashbrown"),
- Crate("humantime"),
- Crate("indexmap"),
- Crate("itertools"),
- Crate("jobserver"),
- Crate("kernel32-sys"),
- Crate("lazy_static"),
- Crate("libc"),
- Crate("libz-sys"),
- Crate("lock_api"),
- Crate("log"),
- Crate("log_settings"),
- Crate("measureme"),
- Crate("memchr"),
- Crate("memmap"),
- Crate("memoffset"),
- Crate("miniz-sys"),
- Crate("miniz_oxide"),
- Crate("miniz_oxide_c_api"),
- Crate("nodrop"),
- Crate("num_cpus"),
- Crate("owning_ref"),
- Crate("parking_lot"),
- Crate("parking_lot_core"),
- Crate("pkg-config"),
- Crate("polonius-engine"),
- Crate("ppv-lite86"),
- Crate("proc-macro2"),
- Crate("punycode"),
- Crate("quick-error"),
- Crate("quote"),
- Crate("rand"),
- Crate("rand_chacha"),
- Crate("rand_core"),
- Crate("rand_hc"),
- Crate("rand_isaac"),
- Crate("rand_pcg"),
- Crate("rand_xorshift"),
- Crate("redox_syscall"),
- Crate("redox_termios"),
- Crate("regex"),
- Crate("regex-syntax"),
- Crate("remove_dir_all"),
- Crate("rustc-demangle"),
- Crate("rustc-hash"),
- Crate("rustc-rayon"),
- Crate("rustc-rayon-core"),
- Crate("rustc_version"),
- Crate("scoped-tls"),
- Crate("scopeguard"),
- Crate("semver"),
- Crate("semver-parser"),
- Crate("serde"),
- Crate("serde_derive"),
- Crate("smallvec"),
- Crate("stable_deref_trait"),
- Crate("syn"),
- Crate("synstructure"),
- Crate("tempfile"),
- Crate("termcolor"),
- Crate("terminon"),
- Crate("termion"),
- Crate("termize"),
- Crate("thread_local"),
- Crate("ucd-util"),
- Crate("unicode-normalization"),
- Crate("unicode-script"),
- Crate("unicode-security"),
- Crate("unicode-width"),
- Crate("unicode-xid"),
- Crate("unreachable"),
- Crate("utf8-ranges"),
- Crate("vcpkg"),
- Crate("version_check"),
- Crate("void"),
- Crate("wasi"),
- Crate("winapi"),
- Crate("winapi-build"),
- Crate("winapi-i686-pc-windows-gnu"),
- Crate("winapi-util"),
- Crate("winapi-x86_64-pc-windows-gnu"),
- Crate("wincolor"),
- Crate("hermit-abi"),
+///
+/// This list is here to provide a speed-bump to adding a new dependency to
+/// rustc. Please check with the compiler team before adding an entry.
+const WHITELIST: &[&str] = &[
+ "adler32",
+ "aho-corasick",
+ "annotate-snippets",
+ "ansi_term",
+ "arrayvec",
+ "atty",
+ "autocfg",
+ "backtrace",
+ "backtrace-sys",
+ "bitflags",
+ "byteorder",
+ "c2-chacha",
+ "cc",
+ "cfg-if",
+ "cloudabi",
+ "cmake",
+ "compiler_builtins",
+ "crc32fast",
+ "crossbeam-deque",
+ "crossbeam-epoch",
+ "crossbeam-queue",
+ "crossbeam-utils",
+ "datafrog",
+ "dlmalloc",
+ "either",
+ "ena",
+ "env_logger",
+ "filetime",
+ "flate2",
+ "fortanix-sgx-abi",
+ "fuchsia-zircon",
+ "fuchsia-zircon-sys",
+ "getopts",
+ "getrandom",
+ "hashbrown",
+ "hermit-abi",
+ "humantime",
+ "indexmap",
+ "itertools",
+ "jobserver",
+ "kernel32-sys",
+ "lazy_static",
+ "libc",
+ "libz-sys",
+ "lock_api",
+ "log",
+ "log_settings",
+ "measureme",
+ "memchr",
+ "memmap",
+ "memoffset",
+ "miniz_oxide",
+ "nodrop",
+ "num_cpus",
+ "parking_lot",
+ "parking_lot_core",
+ "pkg-config",
+ "polonius-engine",
+ "ppv-lite86",
+ "proc-macro2",
+ "punycode",
+ "quick-error",
+ "quote",
+ "rand",
+ "rand_chacha",
+ "rand_core",
+ "rand_hc",
+ "rand_isaac",
+ "rand_pcg",
+ "rand_xorshift",
+ "redox_syscall",
+ "redox_termios",
+ "regex",
+ "regex-syntax",
+ "remove_dir_all",
+ "rustc-demangle",
+ "rustc-hash",
+ "rustc-rayon",
+ "rustc-rayon-core",
+ "rustc_version",
+ "scoped-tls",
+ "scopeguard",
+ "semver",
+ "semver-parser",
+ "serde",
+ "serde_derive",
+ "smallvec",
+ "stable_deref_trait",
+ "syn",
+ "synstructure",
+ "tempfile",
+ "termcolor",
+ "termion",
+ "termize",
+ "thread_local",
+ "ucd-util",
+ "unicode-normalization",
+ "unicode-script",
+ "unicode-security",
+ "unicode-width",
+ "unicode-xid",
+ "utf8-ranges",
+ "vcpkg",
+ "version_check",
+ "wasi",
+ "winapi",
+ "winapi-build",
+ "winapi-i686-pc-windows-gnu",
+ "winapi-util",
+ "winapi-x86_64-pc-windows-gnu",
+ "wincolor",
];
-// Some types for Serde to deserialize the output of `cargo metadata` to.
-
-#[derive(Deserialize)]
-struct Output {
- resolve: Resolve,
-}
-
-#[derive(Deserialize)]
-struct Resolve {
- nodes: Vec<ResolveNode>,
-}
-
-#[derive(Deserialize)]
-struct ResolveNode {
- id: String,
- dependencies: Vec<String>,
-}
-
-/// A unique identifier for a crate.
-#[derive(Copy, Clone, PartialOrd, Ord, PartialEq, Eq, Debug, Hash)]
-struct Crate<'a>(&'a str); // (name)
-
-#[derive(Copy, Clone, PartialOrd, Ord, PartialEq, Eq, Debug, Hash)]
-struct CrateVersion<'a>(&'a str, &'a str); // (name, version)
-
-impl Crate<'_> {
- pub fn id_str(&self) -> String {
- format!("{} ", self.0)
- }
-}
-
-impl<'a> CrateVersion<'a> {
- /// Returns the struct and whether or not the dependency is in-tree.
- pub fn from_str(s: &'a str) -> (Self, bool) {
- let mut parts = s.split(' ');
- let name = parts.next().unwrap();
- let version = parts.next().unwrap();
- let path = parts.next().unwrap();
-
- let is_path_dep = path.starts_with("(path+");
-
- (CrateVersion(name, version), is_path_dep)
- }
-
- pub fn id_str(&self) -> String {
- format!("{} {}", self.0, self.1)
- }
+/// Dependency checks.
+///
+/// `path` is the path to the `src` directory, and `cargo` is the path to the cargo executable.
+pub fn check(path: &Path, cargo: &Path, bad: &mut bool) {
+ let mut cmd = cargo_metadata::MetadataCommand::new();
+ cmd.cargo_path(cargo)
+ .manifest_path(path.parent().unwrap().join("Cargo.toml"))
+ .features(cargo_metadata::CargoOpt::AllFeatures);
+ let metadata = t!(cmd.exec());
+ check_exceptions(&metadata, bad);
+ check_whitelist(&metadata, bad);
+ check_crate_duplicate(&metadata, bad);
}
-impl<'a> From<CrateVersion<'a>> for Crate<'a> {
- fn from(cv: CrateVersion<'a>) -> Crate<'a> {
- Crate(cv.0)
+/// Checks that all licenses are in the valid list in `LICENSES`.
+///
+/// Packages listed in `EXCEPTIONS` are allowed for tools.
+fn check_exceptions(metadata: &Metadata, bad: &mut bool) {
+ // Validate the EXCEPTIONS list hasn't changed.
+ for (name, license) in EXCEPTIONS {
+ // Check that the package actually exists.
+ if !metadata.packages.iter().any(|p| p.name == *name) {
+ println!(
+ "could not find exception package `{}`\n\
+                 Remove it from the EXCEPTIONS list if it is no longer used.",
+ name
+ );
+ *bad = true;
+ }
+ // Check that the license hasn't changed.
+ for pkg in metadata.packages.iter().filter(|p| p.name == *name) {
+ if pkg.name == "fuchsia-cprng" {
+ // This package doesn't declare a license expression. Manual
+ // inspection of the license file is necessary, which appears
+ // to be BSD-3-Clause.
+ assert!(pkg.license.is_none());
+ continue;
+ }
+ match &pkg.license {
+ None => {
+ println!(
+ "dependency exception `{}` does not declare a license expression",
+ pkg.id
+ );
+ *bad = true;
+ }
+ Some(pkg_license) => {
+ if pkg_license.as_str() != *license {
+ println!("dependency exception `{}` license has changed", name);
+ println!(" previously `{}` now `{}`", license, pkg_license);
+ println!(" update EXCEPTIONS for the new license");
+ *bad = true;
+ }
+ }
+ }
+ }
}
-}
-/// Checks the dependency at the given path. Changes `bad` to `true` if a check failed.
-///
-/// Specifically, this checks that the license is correct.
-pub fn check(path: &Path, bad: &mut bool) {
- // Check licences.
- let path = path.join("../vendor");
- assert!(path.exists(), "vendor directory missing");
- let mut saw_dir = false;
- for dir in t!(path.read_dir()) {
- saw_dir = true;
- let dir = t!(dir);
+ let exception_names: Vec<_> = EXCEPTIONS.iter().map(|(name, _license)| *name).collect();
+ let runtime_ids = compute_runtime_crates(metadata);
- // Skip our exceptions.
- let is_exception = EXCEPTIONS.iter().any(|exception| {
- dir.path().to_str().unwrap().contains(&format!("vendor/{}", exception))
- });
- if is_exception {
+ // Check if any package does not have a valid license.
+ for pkg in &metadata.packages {
+ if pkg.source.is_none() {
+ // No need to check local packages.
continue;
}
-
- let toml = dir.path().join("Cargo.toml");
- *bad = !check_license(&toml) || *bad;
+ if !runtime_ids.contains(&pkg.id) && exception_names.contains(&pkg.name.as_str()) {
+ continue;
+ }
+ let license = match &pkg.license {
+ Some(license) => license,
+ None => {
+                println!("dependency `{}` does not define a license expression", pkg.id);
+ *bad = true;
+ continue;
+ }
+ };
+ if !LICENSES.contains(&license.as_str()) {
+ if pkg.name == "fortanix-sgx-abi" {
+ // This is a specific exception because SGX is considered
+ // "third party". See
+ // https://github.com/rust-lang/rust/issues/62620 for more. In
+ // general, these should never be added.
+ continue;
+ }
+ println!("invalid license `{}` in `{}`", license, pkg.id);
+ *bad = true;
+ }
}
- assert!(saw_dir, "no vendored source");
}
-/// Checks the dependency of `WHITELIST_CRATES` at the given path. Changes `bad` to `true` if a
-/// check failed.
+/// Checks the dependencies of `WHITELIST_CRATES`. Changes `bad` to `true` if a check failed.
///
/// Specifically, this checks that the dependencies are on the `WHITELIST`.
-pub fn check_whitelist(path: &Path, cargo: &Path, bad: &mut bool) {
- // Get dependencies from Cargo metadata.
- let resolve = get_deps(path, cargo);
-
+fn check_whitelist(metadata: &Metadata, bad: &mut bool) {
+ // Check that the WHITELIST does not have unused entries.
+ for name in WHITELIST {
+ if !metadata.packages.iter().any(|p| p.name == *name) {
+ println!(
+ "could not find whitelisted package `{}`\n\
+                 Remove it from the WHITELIST if it is no longer used.",
+ name
+ );
+ *bad = true;
+ }
+ }
// Get the whitelist in a convenient form.
let whitelist: HashSet<_> = WHITELIST.iter().cloned().collect();
let mut visited = BTreeSet::new();
let mut unapproved = BTreeSet::new();
for &krate in WHITELIST_CRATES.iter() {
- let mut bad = check_crate_whitelist(&whitelist, &resolve, &mut visited, krate, false);
+ let pkg = pkg_from_name(metadata, krate);
+ let mut bad = check_crate_whitelist(&whitelist, metadata, &mut visited, pkg);
unapproved.append(&mut bad);
}
if !unapproved.is_empty() {
println!("Dependencies not on the whitelist:");
for dep in unapproved {
- println!("* {}", dep.id_str());
+ println!("* {}", dep);
}
*bad = true;
}
-
- check_crate_duplicate(&resolve, bad);
-}
-
-fn check_license(path: &Path) -> bool {
- if !path.exists() {
- panic!("{} does not exist", path.display());
- }
- let contents = t!(fs::read_to_string(&path));
-
- let mut found_license = false;
- for line in contents.lines() {
- if !line.starts_with("license") {
- continue;
- }
- let license = extract_license(line);
- if !LICENSES.contains(&&*license) {
- println!("invalid license {} in {}", license, path.display());
- return false;
- }
- found_license = true;
- break;
- }
- if !found_license {
- println!("no license in {}", path.display());
- return false;
- }
-
- true
-}
-
-fn extract_license(line: &str) -> String {
- let first_quote = line.find('"');
- let last_quote = line.rfind('"');
- if let (Some(f), Some(l)) = (first_quote, last_quote) {
- let license = &line[f + 1..l];
- license.into()
- } else {
- "bad-license-parse".into()
- }
-}
-
-/// Gets the dependencies of the crate at the given path using `cargo metadata`.
-fn get_deps(path: &Path, cargo: &Path) -> Resolve {
- // Run `cargo metadata` to get the set of dependencies.
- let output = Command::new(cargo)
- .arg("metadata")
- .arg("--format-version")
- .arg("1")
- .arg("--manifest-path")
- .arg(path.join("../Cargo.toml"))
- .output()
- .expect("Unable to run `cargo metadata`")
- .stdout;
- let output = String::from_utf8_lossy(&output);
- let output: Output = serde_json::from_str(&output).unwrap();
-
- output.resolve
}
/// Checks the dependencies of the given crate from the given cargo metadata to see if they are on
/// the whitelist. Returns a list of illegal dependencies.
fn check_crate_whitelist<'a>(
- whitelist: &'a HashSet<Crate<'_>>,
- resolve: &'a Resolve,
- visited: &mut BTreeSet<CrateVersion<'a>>,
- krate: CrateVersion<'a>,
- must_be_on_whitelist: bool,
-) -> BTreeSet<Crate<'a>> {
+ whitelist: &'a HashSet<&'static str>,
+ metadata: &'a Metadata,
+ visited: &mut BTreeSet<&'a PackageId>,
+ krate: &'a Package,
+) -> BTreeSet<&'a PackageId> {
// This will contain bad deps.
let mut unapproved = BTreeSet::new();
// Check if we have already visited this crate.
- if visited.contains(&krate) {
+ if visited.contains(&krate.id) {
return unapproved;
}
- visited.insert(krate);
+ visited.insert(&krate.id);
// If this path is in-tree, we don't require it to be on the whitelist.
- if must_be_on_whitelist {
+ if krate.source.is_some() {
// If this dependency is not on `WHITELIST`, add to bad set.
- if !whitelist.contains(&krate.into()) {
- unapproved.insert(krate.into());
+ if !whitelist.contains(krate.name.as_str()) {
+ unapproved.insert(&krate.id);
}
}
- // Do a DFS in the crate graph (it's a DAG, so we know we have no cycles!).
- let to_check = resolve
- .nodes
- .iter()
- .find(|n| n.id.starts_with(&krate.id_str()))
- .expect("crate does not exist");
+ // Do a DFS in the crate graph.
+ let to_check = deps_of(metadata, &krate.id);
- for dep in to_check.dependencies.iter() {
- let (krate, is_path_dep) = CrateVersion::from_str(dep);
-
- let mut bad = check_crate_whitelist(whitelist, resolve, visited, krate, !is_path_dep);
+ for dep in to_check {
+ let mut bad = check_crate_whitelist(whitelist, metadata, visited, dep);
unapproved.append(&mut bad);
}
unapproved
}
-fn check_crate_duplicate(resolve: &Resolve, bad: &mut bool) {
+/// Prevents multiple versions of some expensive crates.
+fn check_crate_duplicate(metadata: &Metadata, bad: &mut bool) {
const FORBIDDEN_TO_HAVE_DUPLICATES: &[&str] = &[
// These two crates take quite a long time to build, so don't allow two versions of them
// to accidentally sneak into our dependency graph, in order to ensure we keep our CI times
"cargo",
"rustc-ap-syntax",
];
- let mut name_to_id: HashMap<_, Vec<_>> = HashMap::new();
- for node in resolve.nodes.iter() {
- name_to_id.entry(node.id.split_whitespace().next().unwrap()).or_default().push(&node.id);
- }
- for name in FORBIDDEN_TO_HAVE_DUPLICATES {
- if name_to_id[name].len() <= 1 {
- continue;
- }
- println!("crate `{}` is duplicated in `Cargo.lock`", name);
- for id in name_to_id[name].iter() {
- println!(" * {}", id);
+ for &name in FORBIDDEN_TO_HAVE_DUPLICATES {
+ let matches: Vec<_> = metadata.packages.iter().filter(|pkg| pkg.name == name).collect();
+ match matches.len() {
+ 0 => {
+ println!(
+ "crate `{}` is missing, update `check_crate_duplicate` \
+ if it is no longer used",
+ name
+ );
+ *bad = true;
+ }
+ 1 => {}
+ _ => {
+ println!(
+ "crate `{}` is duplicated in `Cargo.lock`, \
+ it is too expensive to build multiple times, \
+ so make sure only one version appears across all dependencies",
+ name
+ );
+ for pkg in matches {
+ println!(" * {}", pkg.id);
+ }
+ *bad = true;
+ }
}
- *bad = true;
+ }
+}
+
+/// Returns a list of dependencies for the given package.
+fn deps_of<'a>(metadata: &'a Metadata, pkg_id: &'a PackageId) -> Vec<&'a Package> {
+ let resolve = metadata.resolve.as_ref().unwrap();
+ let node = resolve
+ .nodes
+ .iter()
+ .find(|n| &n.id == pkg_id)
+ .unwrap_or_else(|| panic!("could not find `{}` in resolve", pkg_id));
+ node.deps
+ .iter()
+ .map(|dep| {
+ metadata.packages.iter().find(|pkg| pkg.id == dep.pkg).unwrap_or_else(|| {
+ panic!("could not find dep `{}` for pkg `{}` in resolve", dep.pkg, pkg_id)
+ })
+ })
+ .collect()
+}
+
+/// Finds a package with the given name.
+fn pkg_from_name<'a>(metadata: &'a Metadata, name: &'static str) -> &'a Package {
+ let mut i = metadata.packages.iter().filter(|p| p.name == name);
+ let result =
+ i.next().unwrap_or_else(|| panic!("could not find package `{}` in package list", name));
+ assert!(i.next().is_none(), "more than one package found for `{}`", name);
+ result
+}
+
+/// Finds all the packages that are part of the Rust runtime.
+fn compute_runtime_crates<'a>(metadata: &'a Metadata) -> HashSet<&'a PackageId> {
+ let resolve = metadata.resolve.as_ref().unwrap();
+ let mut result = HashSet::new();
+ for name in RUNTIME_CRATES {
+ let id = &pkg_from_name(metadata, name).id;
+ normal_deps_of_r(resolve, id, &mut result);
+ }
+ result
+}
+
+/// Recursively finds all normal dependencies of the given package.
+fn normal_deps_of_r<'a>(
+ resolve: &'a Resolve,
+ pkg_id: &'a PackageId,
+ result: &mut HashSet<&'a PackageId>,
+) {
+ if !result.insert(pkg_id) {
+ return;
+ }
+ let node = resolve
+ .nodes
+ .iter()
+ .find(|n| &n.id == pkg_id)
+ .unwrap_or_else(|| panic!("could not find `{}` in resolve", pkg_id));
+ // Don't care about dev-dependencies.
+ // Build dependencies *shouldn't* matter unless they do some kind of
+ // codegen. For now we'll assume they don't.
+ let deps = node.deps.iter().filter(|node_dep| {
+ node_dep
+ .dep_kinds
+ .iter()
+ .any(|kind_info| kind_info.kind == cargo_metadata::DependencyKind::Normal)
+ });
+ for dep in deps {
+ normal_deps_of_r(resolve, &dep.pkg, result);
}
}
pub fn collect_lib_features(base_src_path: &Path) -> Features {
let mut lib_features = Features::new();
- // This library feature is defined in the `compiler_builtins` crate, which
- // has been moved out-of-tree. Now it can no longer be auto-discovered by
- // `tidy`, because we need to filter out its (submodule) directory. Manually
- // add it to the set of known library features so we can still generate docs.
- lib_features.insert(
- "compiler_builtins_lib".to_owned(),
- Feature {
- level: Status::Unstable,
- since: None,
- has_gate_test: false,
- tracking_issue: None,
- },
- );
-
map_lib_features(base_src_path, &mut |res, _, _| {
if let Ok((name, feature)) = res {
lib_features.insert(name.to_owned(), feature);
//! Tidy checks source code in this repository.
//!
//! This program runs all of the various tidy checks for style, cleanliness,
-//! etc. This is run by default on `make check` and as part of the auto
-//! builders.
+//! etc. This is run by default on `./x.py test` and as part of the auto
+//! builders. The tidy checks can be executed with `./x.py test tidy`.
#![deny(warnings)]
pal::check(&path, &mut bad);
unstable_book::check(&path, collected, &mut bad);
unit_tests::check(&path, &mut bad);
- if !args.iter().any(|s| *s == "--no-vendor") {
- deps::check(&path, &mut bad);
- }
- deps::check_whitelist(&path, &cargo, &mut bad);
+ deps::check(&path, &cargo, &mut bad);
extdeps::check(&path, &mut bad);
ui_tests::check(&path, &mut bad);
error_codes_check::check(&path, &mut bad);
"src/libpanic_unwind",
"src/libunwind",
// black_box implementation is LLVM-version specific and it uses
- // target_os to tell targets with different LLVM-versions appart
+ // target_os to tell targets with different LLVM-versions apart
// (e.g. `wasm32-unknown-emscripten` vs `wasm32-unknown-unknown`):
"src/libcore/hint.rs",
"src/libstd/sys/", // Platform-specific code for std lives here.
"src/libstd/sys_common/mod.rs",
"src/libstd/sys_common/net.rs",
"src/libstd/sys_common/backtrace.rs",
+ // panic_unwind shims
+ "src/libstd/panicking.rs",
"src/libterm", // Not sure how to make this crate portable, but the test crate needs it.
"src/libtest", // Probably should defer to unstable `std::sys` APIs.
"src/libstd/sync/mpsc", // some tests are only run on non-emscripten
-use crate::features::{CollectedFeatures, Feature, Features, Status};
+use crate::features::{CollectedFeatures, Features, Status};
use std::collections::BTreeSet;
use std::fs;
use std::path::{Path, PathBuf};
pub fn check(path: &Path, features: CollectedFeatures, bad: &mut bool) {
let lang_features = features.lang;
- let mut lib_features = features
+ let lib_features = features
.lib
.into_iter()
.filter(|&(ref name, _)| !lang_features.contains_key(name))
.collect::<Features>();
- // This library feature is defined in the `compiler_builtins` crate, which
- // has been moved out-of-tree. Now it can no longer be auto-discovered by
- // `tidy`, because we need to filter out its (submodule) directory. Manually
- // add it to the set of known library features so we can still generate docs.
- lib_features.insert(
- "compiler_builtins_lib".to_owned(),
- Feature {
- level: Status::Unstable,
- since: None,
- has_gate_test: false,
- tracking_issue: None,
- },
- );
-
// Library features
let unstable_lib_feature_names = collect_unstable_feature_names(&lib_features);
let unstable_book_lib_features_section_file_names =
eprintln!("Must provide path to write unicode tables to");
eprintln!(
"e.g. {} src/libcore/unicode/unicode_data.rs",
- std::env::args().nth(0).unwrap_or_default()
+ std::env::args().next().unwrap_or_default()
);
std::process::exit(1);
});
[assign]
[ping.icebreakers-llvm]
+alias = ["llvm", "llvms"]
message = """\
Hey LLVM ICE-breakers! This bug has been identified as a good
"LLVM ICE-breaking candidate". In case it's useful, here are some
[instructions] for tackling these sorts of bugs. Maybe take a look?
Thanks! <3
-[instructions]: https://rust-lang.github.io/rustc-guide/ice-breaker/llvm.html
+[instructions]: https://rustc-dev-guide.rust-lang.org/ice-breaker/llvm.html
"""
label = "ICEBreaker-LLVM"
[ping.icebreakers-cleanup-crew]
+alias = ["cleanup", "cleanups", "cleanup-crew", "shrink", "reduce", "bisect"]
message = """\
Hey Cleanup Crew ICE-breakers! This bug has been identified as a good
"Cleanup ICE-breaking candidate". In case it's useful, here are some
[instructions] for tackling these sorts of bugs. Maybe take a look?
Thanks! <3
-[instructions]: https://rust-lang.github.io/rustc-guide/ice-breaker/cleanup-crew.html
+[instructions]: https://rustc-dev-guide.rust-lang.org/ice-breaker/cleanup-crew.html
"""
label = "ICEBreaker-Cleanup-Crew"