//! Mono Item Collection
//! ====================
//!
//! This module is responsible for discovering all items that will contribute
//! to code generation of the crate. The important part here is that it not only
//! needs to find syntax-level items (functions, structs, etc.) but also all
//! their monomorphized instantiations. Every non-generic, non-const function
//! maps to one LLVM artifact. Every generic function can produce
//! from zero to N artifacts, depending on the sets of type arguments it
//! is instantiated with.
//! This also applies to generic items from other crates: a generic definition
//! in crate X might produce monomorphizations that are compiled into crate Y.
//! We also have to collect these here.
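//!
//! As a stand-alone illustration of that one-to-N mapping (the function below
//! is invented for the example), each distinct type argument forces its own
//! monomorphic instance:

```rust
use std::any::type_name;

// A single generic definition; each instantiation reports the
// concrete type it was monomorphized with.
fn instance_name<T>(_x: T) -> &'static str {
    type_name::<T>()
}

fn main() {
    // Two uses with different type arguments produce two distinct
    // monomorphic instances of `instance_name`.
    assert_eq!(instance_name(1i32), "i32");
    assert_eq!(instance_name("hi"), "&str");
    // Reusing a type argument reuses the existing instance.
    assert_eq!(instance_name(2i32), "i32");
}
```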
//! The following kinds of "mono items" are handled here:
//!
//! - Functions
//! - Methods
//! - Closures
//! - Statics
//! - Drop glue
//!
//! The following things also result in LLVM artifacts, but are not collected
//! here, since we instantiate them locally on demand when needed in a given
//! codegen unit:
//!
//! - Constants
//! - Vtables
//! - Object Shims
//! General Algorithm
//! -----------------
//! Let's define some terms first:
//!
//! - A "mono item" is something that results in a function or global in
//!   the LLVM IR of a codegen unit. Mono items do not stand on their
//!   own, they can reference other mono items. For example, if function
//!   `foo()` calls function `bar()` then the mono item for `foo()`
//!   references the mono item for function `bar()`. In general, the
//!   definition for mono item A referencing a mono item B is that
//!   the LLVM artifact produced for A references the LLVM artifact produced
//!   for B.
//!
//! - Mono items and the references between them form a directed graph,
//!   where the mono items are the nodes and references form the edges.
//!   Let's call this graph the "mono item graph".
//!
//! - The mono item graph for a program contains all mono items
//!   that are needed in order to produce the complete LLVM IR of the program.
//! The purpose of the algorithm implemented in this module is to build the
//! mono item graph for the current crate. It runs in two phases:
//!
//! 1. Discover the roots of the graph by traversing the HIR of the crate.
//! 2. Starting from the roots, find neighboring nodes by inspecting the MIR
//!    representation of the item corresponding to a given node, until no more
//!    new nodes are found.
//! ### Discovering roots
//!
//! The roots of the mono item graph correspond to the public non-generic
//! syntactic items in the source code. We find them by walking the HIR of the
//! crate, and whenever we hit upon a public function, method, or static item,
//! we create a mono item consisting of the item's `DefId` and, since we only
//! consider non-generic items, an empty type-substitution set. (In eager
//! collection mode, during incremental compilation, all non-generic functions
//! are considered as roots, as well as when the `-Clink-dead-code` option is
//! specified. Functions marked `#[no_mangle]` and functions called by inlinable
//! functions also always act as roots.)
//! ### Finding neighbor nodes
//!
//! Given a mono item node, we can discover neighbors by inspecting its
//! MIR. We walk the MIR and any time we hit upon something that signifies a
//! reference to another mono item, we have found a neighbor. Since the
//! mono item we are currently at is always monomorphic, we also know the
//! concrete type arguments of its neighbors, and so all neighbors again will be
//! monomorphic. The specific forms a reference to a neighboring node can take
//! in MIR are quite diverse. Here is an overview:
//! #### Calling Functions/Methods
//! The most obvious form of one mono item referencing another is a
//! function or method call (represented by a `Call` terminator in MIR). But
//! calls are not the only thing that might introduce a reference between two
//! function mono items, and as we will see below, they are just a
//! specialization of the form described next, and consequently will not get any
//! special treatment in the algorithm.
//! #### Taking a reference to a function or method
//! A function does not need to actually be called in order to be a neighbor of
//! another function. It suffices to just take a reference in order to introduce
//! an edge. Consider the following example:
//!
//! ```
//! fn print_val<T: Display>(x: T) {
//!     println!("{}", x);
//! }
//!
//! fn call_fn(f: &dyn Fn(i32), x: i32) {
//!     f(x);
//! }
//!
//! fn main() {
//!     let print_i32 = print_val::<i32>;
//!     call_fn(&print_i32, 0);
//! }
//! ```
//! The MIR of none of these functions will contain an explicit call to
//! `print_val::<i32>`. Nonetheless, in order to monomorphize this program, we
//! need an instance of this function. Thus, whenever we encounter a function or
//! method in operand position, we treat it as a neighbor of the current
//! mono item. Calls are just a special case of that.
//! #### Closures
//! In a way, closures are a simple case. Since every closure object needs to be
//! constructed somewhere, we can reliably discover them by observing
//! `Rvalue::Aggregate` expressions with `AggregateKind::Closure`. This is also
//! true for closures inlined from other crates.
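//!
//! For instance (a stand-alone sketch, not collector code), the closure below
//! is constructed at its definition site, which is exactly the
//! `Rvalue::Aggregate` the collector observes:

```rust
// Passing a closure across a function boundary; the closure object
// itself is constructed at the `|x| ...` expression below.
fn apply<F: Fn(i32) -> i32>(f: F, x: i32) -> i32 {
    f(x)
}

fn main() {
    let offset = 10;
    // Construction site of the closure (an aggregate capturing `offset`).
    let add_offset = |x| x + offset;
    assert_eq!(apply(add_offset, 32), 42);
}
```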
//! #### Drop glue
//! Drop glue mono items are introduced by MIR drop-statements. The
//! generated mono item will again have drop-glue item neighbors if the
//! type to be dropped contains nested values that also need to be dropped. It
//! might also have a function item neighbor for the explicit `Drop::drop`
//! implementation of its type.
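//!
//! The nesting can be observed directly (a stand-alone sketch; `drop_order` is
//! a helper invented for the example): dropping the outer value runs its
//! explicit `Drop::drop` first, then the glue drops its fields:

```rust
use std::cell::RefCell;

thread_local! {
    // Records the order in which destructors run.
    static DROPS: RefCell<Vec<&'static str>> = RefCell::new(Vec::new());
}

struct Inner;
impl Drop for Inner {
    fn drop(&mut self) {
        DROPS.with(|d| d.borrow_mut().push("Inner"));
    }
}

struct Outer {
    _inner: Inner,
}
impl Drop for Outer {
    fn drop(&mut self) {
        DROPS.with(|d| d.borrow_mut().push("Outer"));
    }
}

// Dropping `Outer` invokes its explicit `Drop::drop` and then the drop
// glue for its fields; returns the observed order.
fn drop_order() -> Vec<&'static str> {
    DROPS.with(|d| d.borrow_mut().clear());
    drop(Outer { _inner: Inner });
    DROPS.with(|d| d.borrow().clone())
}

fn main() {
    assert_eq!(drop_order(), ["Outer", "Inner"]);
}
```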
//! #### Unsizing Casts
//! A subtle way of introducing neighbor edges is by casting to a trait object.
//! Since the resulting fat-pointer contains a reference to a vtable, we need to
//! instantiate all object-safe methods of the trait, as we need to store
//! pointers to these functions even if they never get called anywhere. This can
//! be seen as a special case of taking a function reference.
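//!
//! A minimal stand-alone example (`Greet` and `Person` are invented for the
//! illustration): the cast to `&dyn Greet` is what forces every object-safe
//! method, used or not, into the vtable:

```rust
trait Greet {
    fn greet(&self) -> String;
    // Gets a vtable slot, and therefore a mono item, even if a given
    // program never calls it through the trait object.
    fn farewell(&self) -> String {
        "bye".to_string()
    }
}

struct Person(&'static str);

impl Greet for Person {
    fn greet(&self) -> String {
        format!("hello, {}", self.0)
    }
}

fn main() {
    let p = Person("world");
    // The unsizing cast: this is where the vtable is needed.
    let obj: &dyn Greet = &p;
    assert_eq!(obj.greet(), "hello, world");
    assert_eq!(obj.farewell(), "bye");
}
```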
//! #### Boxes
//! Since `Box` expressions have special compiler support, no explicit calls to
//! `exchange_malloc()` and `box_free()` may show up in MIR, even though the
//! compiler will generate them. We have to observe `Rvalue::NullaryOp(NullOp::Box)`
//! expressions and Box-typed drop-statements for that purpose.
//! Interaction with Cross-Crate Inlining
//! -------------------------------------
//! The binary of a crate will not only contain machine code for the items
//! defined in the source code of that crate. It will also contain monomorphic
//! instantiations of any extern generic functions and of functions marked with
//! `#[inline]`.
//! The collection algorithm handles this more or less transparently. If it is
//! about to create a mono item for something with an external `DefId`,
//! it will take a look if the MIR for that item is available, and if so just
//! proceed normally. If the MIR is not available, it assumes that the item is
//! just linked to and no node is created; which is exactly what we want, since
//! no machine code should be generated in the current crate for such an item.
//! Eager and Lazy Collection Mode
//! ------------------------------
//! Mono item collection can be performed in one of two modes:
//!
//! - Lazy mode means that items will only be instantiated when actually
//!   referenced. The goal is to produce the least amount of machine code
//!   possible.
//!
//! - Eager mode is meant to be used in conjunction with incremental compilation
//!   where a stable set of mono items is more important than a minimal
//!   one. Thus, eager mode will instantiate drop-glue for every drop-able type
//!   in the crate, even if no drop call for that type exists (yet). It will
//!   also instantiate default implementations of trait methods, something that
//!   otherwise is only done on demand.
//! Open Issues
//! -----------
//! Some things are not yet fully implemented in the current version of this
//! module.
//!
//! ### Const Fns
//! Ideally, no mono item should be generated for const fns unless there
//! is a call to them that cannot be evaluated at compile time. At the moment
//! this is not implemented, however: a mono item will be produced
//! regardless of whether it is actually needed or not.
use crate::monomorphize;

use rustc_data_structures::fx::{FxHashMap, FxHashSet};
use rustc_data_structures::sync::{par_iter, MTLock, MTRef, ParallelIterator};
use rustc_errors::{ErrorReported, FatalError};
use rustc_hir as hir;
use rustc_hir::def_id::{DefId, DefIdMap, LocalDefId, LOCAL_CRATE};
use rustc_hir::itemlikevisit::ItemLikeVisitor;
use rustc_hir::lang_items::LangItem;
use rustc_index::bit_set::GrowableBitSet;
use rustc_middle::mir::interpret::{AllocId, ConstValue};
use rustc_middle::mir::interpret::{ErrorHandled, GlobalAlloc, Scalar};
use rustc_middle::mir::mono::{InstantiationMode, MonoItem};
use rustc_middle::mir::visit::Visitor as MirVisitor;
use rustc_middle::mir::{self, Local, Location};
use rustc_middle::ty::adjustment::{CustomCoerceUnsized, PointerCast};
use rustc_middle::ty::subst::{GenericArgKind, InternalSubsts};
use rustc_middle::ty::{self, GenericParamDefKind, Instance, Ty, TyCtxt, TypeFoldable};
use rustc_middle::{middle::codegen_fn_attrs::CodegenFnAttrFlags, mir::visit::TyContext};
use rustc_session::config::EntryFnType;
use rustc_session::lint::builtin::LARGE_ASSIGNMENTS;
use rustc_span::source_map::{dummy_spanned, respan, Span, Spanned, DUMMY_SP};
use rustc_target::abi::Size;
use smallvec::SmallVec;

use std::iter;
use std::ops::Range;
use std::path::PathBuf;
pub enum MonoItemCollectionMode {
    Eager,
    Lazy,
}
/// Maps every mono item to all mono items it references in its
/// body.
pub struct InliningMap<'tcx> {
    // Maps a source mono item to the range of mono items
    // accessed by it.
    // The range selects elements within the `targets` vecs.
    index: FxHashMap<MonoItem<'tcx>, Range<usize>>,
    targets: Vec<MonoItem<'tcx>>,

    // Contains one bit per mono item in the `targets` field. That bit
    // is true if that mono item needs to be inlined into every CGU.
    inlines: GrowableBitSet<usize>,
}
impl<'tcx> InliningMap<'tcx> {
    fn new() -> InliningMap<'tcx> {
        InliningMap {
            index: FxHashMap::default(),
            targets: Vec::new(),
            inlines: GrowableBitSet::with_capacity(1024),
        }
    }

    fn record_accesses(&mut self, source: MonoItem<'tcx>, new_targets: &[(MonoItem<'tcx>, bool)]) {
        let start_index = self.targets.len();
        let new_items_count = new_targets.len();
        let new_items_count_total = new_items_count + self.targets.len();

        self.targets.reserve(new_items_count);
        self.inlines.ensure(new_items_count_total);

        for (i, (target, inline)) in new_targets.iter().enumerate() {
            self.targets.push(*target);
            if *inline {
                self.inlines.insert(i + start_index);
            }
        }

        let end_index = self.targets.len();
        assert!(self.index.insert(source, start_index..end_index).is_none());
    }
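
    // The flat `index`/`targets` layout above can be sketched outside the
    // compiler. The following stand-alone version (with `String` standing in
    // for `MonoItem`; `AccessMap` is a name invented for the sketch) shows
    // the same range-into-one-vec scheme:

```rust
use std::collections::HashMap;
use std::ops::Range;

// Same layout as `InliningMap`, with `String` standing in for `MonoItem`:
// one flat vec of targets, and per-source ranges into it.
struct AccessMap {
    index: HashMap<String, Range<usize>>,
    targets: Vec<String>,
}

impl AccessMap {
    fn new() -> Self {
        AccessMap { index: HashMap::new(), targets: Vec::new() }
    }

    fn record_accesses(&mut self, source: &str, new_targets: &[&str]) {
        let start = self.targets.len();
        self.targets.extend(new_targets.iter().map(|t| t.to_string()));
        let end = self.targets.len();
        // Each source is recorded exactly once.
        assert!(self.index.insert(source.to_string(), start..end).is_none());
    }

    fn accesses(&self, source: &str) -> &[String] {
        self.index.get(source).map_or(&[], |r| &self.targets[r.clone()])
    }
}

fn main() {
    let mut map = AccessMap::new();
    map.record_accesses("foo", &["bar", "baz"]);
    map.record_accesses("bar", &["qux"]);
    assert_eq!(map.accesses("foo"), ["bar", "baz"]);
    assert_eq!(map.accesses("bar"), ["qux"]);
    assert!(map.accesses("qux").is_empty());
}
```

    // Storing all targets in one vec and slicing into it keeps the map
    // compact and avoids one allocation per source item.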
    // Internally iterate over all items referenced by `source` which will be
    // made available for inlining.
    pub fn with_inlining_candidates<F>(&self, source: MonoItem<'tcx>, mut f: F)
    where
        F: FnMut(MonoItem<'tcx>),
    {
        if let Some(range) = self.index.get(&source) {
            for (i, candidate) in self.targets[range.clone()].iter().enumerate() {
                if self.inlines.contains(range.start + i) {
                    f(*candidate);
                }
            }
        }
    }

    // Internally iterate over all items and the things each accesses.
    pub fn iter_accesses<F>(&self, mut f: F)
    where
        F: FnMut(MonoItem<'tcx>, &[MonoItem<'tcx>]),
    {
        for (&accessor, range) in &self.index {
            f(accessor, &self.targets[range.clone()])
        }
    }
}
pub fn collect_crate_mono_items(
    tcx: TyCtxt<'_>,
    mode: MonoItemCollectionMode,
) -> (FxHashSet<MonoItem<'_>>, InliningMap<'_>) {
    let _prof_timer = tcx.prof.generic_activity("monomorphization_collector");

    let roots =
        tcx.sess.time("monomorphization_collector_root_collections", || collect_roots(tcx, mode));

    debug!("building mono item graph, beginning at roots");

    let mut visited = MTLock::new(FxHashSet::default());
    let mut inlining_map = MTLock::new(InliningMap::new());

    {
        let visited: MTRef<'_, _> = &mut visited;
        let inlining_map: MTRef<'_, _> = &mut inlining_map;

        tcx.sess.time("monomorphization_collector_graph_walk", || {
            par_iter(roots).for_each(|root| {
                let mut recursion_depths = DefIdMap::default();
                collect_items_rec(
                    tcx,
                    dummy_spanned(root),
                    visited,
                    &mut recursion_depths,
                    inlining_map,
                );
            });
        });
    }

    (visited.into_inner(), inlining_map.into_inner())
}
// Find all non-generic items by walking the HIR. These items serve as roots to
// start monomorphizing from.
fn collect_roots(tcx: TyCtxt<'_>, mode: MonoItemCollectionMode) -> Vec<MonoItem<'_>> {
    debug!("collecting roots");
    let mut roots = Vec::new();

    {
        let entry_fn = tcx.entry_fn(LOCAL_CRATE);

        debug!("collect_roots: entry_fn = {:?}", entry_fn);

        let mut visitor = RootCollector { tcx, mode, entry_fn, output: &mut roots };

        tcx.hir().krate().visit_all_item_likes(&mut visitor);

        visitor.push_extra_entry_roots();
    }

    // We can only codegen items that are instantiable - items all of
    // whose predicates hold. Luckily, items that aren't instantiable
    // can't actually be used, so we can just skip codegenning them.
    roots
        .into_iter()
        .filter_map(|root| root.node.is_instantiable(tcx).then_some(root.node))
        .collect()
}
// Collect all monomorphized items reachable from `starting_point`.
fn collect_items_rec<'tcx>(
    tcx: TyCtxt<'tcx>,
    starting_point: Spanned<MonoItem<'tcx>>,
    visited: MTRef<'_, MTLock<FxHashSet<MonoItem<'tcx>>>>,
    recursion_depths: &mut DefIdMap<usize>,
    inlining_map: MTRef<'_, MTLock<InliningMap<'tcx>>>,
) {
    if !visited.lock_mut().insert(starting_point.node) {
        // We've been here already, no need to search again.
        return;
    }
    debug!("BEGIN collect_items_rec({})", starting_point.node);

    let mut neighbors = Vec::new();
    let recursion_depth_reset;

    match starting_point.node {
        MonoItem::Static(def_id) => {
            let instance = Instance::mono(tcx, def_id);

            // Sanity check whether this ended up being collected accidentally
            debug_assert!(should_codegen_locally(tcx, &instance));

            let ty = instance.ty(tcx, ty::ParamEnv::reveal_all());
            visit_drop_use(tcx, ty, true, starting_point.span, &mut neighbors);

            recursion_depth_reset = None;

            if let Ok(alloc) = tcx.eval_static_initializer(def_id) {
                for &((), id) in alloc.relocations().values() {
                    collect_miri(tcx, id, &mut neighbors);
                }
            }
        }
        MonoItem::Fn(instance) => {
            // Sanity check whether this ended up being collected accidentally
            debug_assert!(should_codegen_locally(tcx, &instance));

            // Keep track of the monomorphization recursion depth
            recursion_depth_reset =
                Some(check_recursion_limit(tcx, instance, starting_point.span, recursion_depths));
            check_type_length_limit(tcx, instance);

            rustc_data_structures::stack::ensure_sufficient_stack(|| {
                collect_neighbours(tcx, instance, &mut neighbors);
            });
        }
        MonoItem::GlobalAsm(..) => {
            recursion_depth_reset = None;
        }
    }

    record_accesses(tcx, starting_point.node, neighbors.iter().map(|i| &i.node), inlining_map);

    for neighbour in neighbors {
        collect_items_rec(tcx, neighbour, visited, recursion_depths, inlining_map);
    }

    if let Some((def_id, depth)) = recursion_depth_reset {
        recursion_depths.insert(def_id, depth);
    }

    debug!("END collect_items_rec({})", starting_point.node);
}
fn record_accesses<'a, 'tcx: 'a>(
    tcx: TyCtxt<'tcx>,
    caller: MonoItem<'tcx>,
    callees: impl Iterator<Item = &'a MonoItem<'tcx>>,
    inlining_map: MTRef<'_, MTLock<InliningMap<'tcx>>>,
) {
    let is_inlining_candidate = |mono_item: &MonoItem<'tcx>| {
        mono_item.instantiation_mode(tcx) == InstantiationMode::LocalCopy
    };

    // We collect this into a `SmallVec` to avoid calling `is_inlining_candidate` in the lock.
    // FIXME: Call `is_inlining_candidate` when pushing to `neighbors` in `collect_items_rec`
    // instead to avoid creating this `SmallVec`.
    let accesses: SmallVec<[_; 128]> =
        callees.map(|mono_item| (*mono_item, is_inlining_candidate(mono_item))).collect();

    inlining_map.lock_mut().record_accesses(caller, &accesses);
}
/// Format instance name that is already known to be too long for rustc.
/// Show only the first and last 32 characters to avoid blasting
/// the user's terminal with thousands of lines of type-name.
///
/// If the type name is longer than before+after, it will be written to a file.
fn shrunk_instance_name(
    tcx: TyCtxt<'tcx>,
    instance: &Instance<'tcx>,
    before: usize,
    after: usize,
) -> (String, Option<PathBuf>) {
    let s = instance.to_string();

    // Only use the shrunk version if it's really shorter.
    // This also avoids the case where before and after slices overlap.
    if s.chars().nth(before + after + 1).is_some() {
        // An iterator of all byte positions including the end of the string.
        let positions = || s.char_indices().map(|(i, _)| i).chain(iter::once(s.len()));

        let shrunk = format!(
            "{before}...{after}",
            before = &s[..positions().nth(before).unwrap_or(s.len())],
            after = &s[positions().rev().nth(after).unwrap_or(0)..],
        );

        let path = tcx.output_filenames(LOCAL_CRATE).temp_path_ext("long-type.txt", None);
        let written_to_path = std::fs::write(&path, s).ok().map(|_| path);

        (shrunk, written_to_path)
    } else {
        (s, None)
    }
}
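
// The char-boundary handling above is easy to get wrong; here is a
// stand-alone sketch of the same shrinking logic (`shrink_name` is a name
// invented for the example):

```rust
use std::iter;

// Keep the first `before` and last `after` characters of `s`, joining
// them with "..." if anything was cut out. Slicing at positions taken
// from `char_indices` guarantees we never cut inside a multi-byte char.
fn shrink_name(s: &str, before: usize, after: usize) -> String {
    // Only shrink if the string is genuinely longer; this also avoids
    // the head and tail slices overlapping.
    if s.chars().nth(before + after + 1).is_none() {
        return s.to_string();
    }
    // All byte positions at char boundaries, including the end.
    let positions = || s.char_indices().map(|(i, _)| i).chain(iter::once(s.len()));
    let head_end = positions().nth(before).unwrap_or(s.len());
    let tail_start = positions().rev().nth(after).unwrap_or(0);
    format!("{}...{}", &s[..head_end], &s[tail_start..])
}

fn main() {
    assert_eq!(shrink_name("short", 4, 4), "short");
    assert_eq!(shrink_name("abcdefghijklmnop", 3, 3), "abc...nop");
}
```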
fn check_recursion_limit<'tcx>(
    tcx: TyCtxt<'tcx>,
    instance: Instance<'tcx>,
    span: Span,
    recursion_depths: &mut DefIdMap<usize>,
) -> (DefId, usize) {
    let def_id = instance.def_id();
    let recursion_depth = recursion_depths.get(&def_id).cloned().unwrap_or(0);
    debug!(" => recursion depth={}", recursion_depth);

    let adjusted_recursion_depth = if Some(def_id) == tcx.lang_items().drop_in_place_fn() {
        // HACK: drop_in_place creates tight monomorphization loops. Give
        // it more margin.
        recursion_depth / 4
    } else {
        recursion_depth
    };

    // Code that needs to instantiate the same function recursively
    // more than the recursion limit is assumed to be causing an
    // infinite expansion.
    if !tcx.sess.recursion_limit().value_within_limit(adjusted_recursion_depth) {
        let (shrunk, written_to_path) = shrunk_instance_name(tcx, &instance, 32, 32);
        let error = format!("reached the recursion limit while instantiating `{}`", shrunk);
        let mut err = tcx.sess.struct_span_fatal(span, &error);
        err.span_note(
            tcx.def_span(def_id),
            &format!("`{}` defined here", tcx.def_path_str(def_id)),
        );
        if let Some(path) = written_to_path {
            err.note(&format!("the full type name has been written to '{}'", path.display()));
        }
        err.emit();
        FatalError.raise();
    }

    recursion_depths.insert(def_id, recursion_depth + 1);

    (def_id, recursion_depth)
}
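
// The save-and-restore depth bookkeeping used by `collect_items_rec` and
// `check_recursion_limit` can be sketched generically (a stand-alone sketch;
// `with_depth` and its `Option` return are inventions of the example —
// rustc aborts with a fatal error instead of returning `None`):

```rust
use std::collections::HashMap;
use std::hash::Hash;

// Bump the depth recorded for `key` before descending into it, returning
// the (key, previous depth) pair the caller uses to restore afterwards.
fn with_depth<K: Copy + Hash + Eq>(
    depths: &mut HashMap<K, usize>,
    key: K,
    limit: usize,
) -> Option<(K, usize)> {
    let depth = depths.get(&key).copied().unwrap_or(0);
    if depth > limit {
        return None; // rustc would emit a fatal "recursion limit" error here
    }
    depths.insert(key, depth + 1);
    Some((key, depth))
}

fn main() {
    let mut depths: HashMap<&str, usize> = HashMap::new();
    let reset = with_depth(&mut depths, "foo", 2).unwrap();
    assert_eq!(depths["foo"], 1);
    // Unwinding out of the subtree: restore the saved depth, exactly like
    // `recursion_depths.insert(def_id, depth)` above.
    depths.insert(reset.0, reset.1);
    assert_eq!(depths["foo"], 0);
}
```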
fn check_type_length_limit<'tcx>(tcx: TyCtxt<'tcx>, instance: Instance<'tcx>) {
    let type_length = instance
        .substs
        .iter()
        .flat_map(|arg| arg.walk())
        .filter(|arg| match arg.unpack() {
            GenericArgKind::Type(_) | GenericArgKind::Const(_) => true,
            GenericArgKind::Lifetime(_) => false,
        })
        .count();
    debug!(" => type length={}", type_length);

    // Rust code can easily create exponentially-long types using only a
    // polynomial recursion depth. Even with the default recursion
    // depth, you can easily get cases that take >2^60 steps to run,
    // which means that rustc basically hangs.
    //
    // Bail out in these cases to avoid that bad user experience.
    if !tcx.sess.type_length_limit().value_within_limit(type_length) {
        let (shrunk, written_to_path) = shrunk_instance_name(tcx, &instance, 32, 32);
        let msg = format!("reached the type-length limit while instantiating `{}`", shrunk);
        let mut diag = tcx.sess.struct_span_fatal(tcx.def_span(instance.def_id()), &msg);
        if let Some(path) = written_to_path {
            diag.note(&format!("the full type name has been written to '{}'", path.display()));
        }
        diag.help(&format!(
            "consider adding a `#![type_length_limit=\"{}\"]` attribute to your crate",
            type_length * 2
        ));
        diag.emit();
        tcx.sess.abort_if_errors();
    }
}
struct MirNeighborCollector<'a, 'tcx> {
    tcx: TyCtxt<'tcx>,
    body: &'a mir::Body<'tcx>,
    output: &'a mut Vec<Spanned<MonoItem<'tcx>>>,
    instance: Instance<'tcx>,
}

impl<'a, 'tcx> MirNeighborCollector<'a, 'tcx> {
    pub fn monomorphize<T>(&self, value: T) -> T
    where
        T: TypeFoldable<'tcx>,
    {
        debug!("monomorphize: self.instance={:?}", self.instance);
        self.instance.subst_mir_and_normalize_erasing_regions(
            self.tcx,
            ty::ParamEnv::reveal_all(),
            value,
        )
    }
}
impl<'a, 'tcx> MirVisitor<'tcx> for MirNeighborCollector<'a, 'tcx> {
    fn visit_rvalue(&mut self, rvalue: &mir::Rvalue<'tcx>, location: Location) {
        debug!("visiting rvalue {:?}", *rvalue);

        let span = self.body.source_info(location).span;

        match *rvalue {
            // When doing a cast from a regular pointer to a fat pointer, we
            // have to instantiate all methods of the trait being cast to, so we
            // can build the appropriate vtable.
            mir::Rvalue::Cast(
                mir::CastKind::Pointer(PointerCast::Unsize),
                ref operand,
                target_ty,
            ) => {
                let target_ty = self.monomorphize(target_ty);
                let source_ty = operand.ty(self.body, self.tcx);
                let source_ty = self.monomorphize(source_ty);
                let (source_ty, target_ty) =
                    find_vtable_types_for_unsizing(self.tcx, source_ty, target_ty);
                // This could also be a different Unsize instruction, like
                // from a fixed sized array to a slice. But we are only
                // interested in things that produce a vtable.
                if target_ty.is_trait() && !source_ty.is_trait() {
                    create_mono_items_for_vtable_methods(
                        self.tcx,
                        target_ty,
                        source_ty,
                        span,
                        self.output,
                    );
                }
            }
            mir::Rvalue::Cast(
                mir::CastKind::Pointer(PointerCast::ReifyFnPointer),
                ref operand,
                _,
            ) => {
                let fn_ty = operand.ty(self.body, self.tcx);
                let fn_ty = self.monomorphize(fn_ty);
                visit_fn_use(self.tcx, fn_ty, false, span, &mut self.output);
            }
            mir::Rvalue::Cast(
                mir::CastKind::Pointer(PointerCast::ClosureFnPointer(_)),
                ref operand,
                _,
            ) => {
                let source_ty = operand.ty(self.body, self.tcx);
                let source_ty = self.monomorphize(source_ty);
                match *source_ty.kind() {
                    ty::Closure(def_id, substs) => {
                        let instance = Instance::resolve_closure(
                            self.tcx,
                            def_id,
                            substs,
                            ty::ClosureKind::FnOnce,
                        );
                        if should_codegen_locally(self.tcx, &instance) {
                            self.output.push(create_fn_mono_item(self.tcx, instance, span));
                        }
                    }
                    _ => bug!(),
                }
            }
            mir::Rvalue::NullaryOp(mir::NullOp::Box, _) => {
                let tcx = self.tcx;
                let exchange_malloc_fn_def_id =
                    tcx.require_lang_item(LangItem::ExchangeMalloc, None);
                let instance = Instance::mono(tcx, exchange_malloc_fn_def_id);
                if should_codegen_locally(tcx, &instance) {
                    self.output.push(create_fn_mono_item(self.tcx, instance, span));
                }
            }
            mir::Rvalue::ThreadLocalRef(def_id) => {
                assert!(self.tcx.is_thread_local_static(def_id));
                let instance = Instance::mono(self.tcx, def_id);
                if should_codegen_locally(self.tcx, &instance) {
                    trace!("collecting thread-local static {:?}", def_id);
                    self.output.push(respan(span, MonoItem::Static(def_id)));
                }
            }
            _ => { /* not interesting */ }
        }

        self.super_rvalue(rvalue, location);
    }
    /// This does not walk the constant, as it has been handled entirely here and trying
    /// to walk it would attempt to evaluate the `ty::Const` inside, which doesn't necessarily
    /// work, as some constants cannot be represented in the type system.
    fn visit_constant(&mut self, constant: &mir::Constant<'tcx>, location: Location) {
        let literal = self.monomorphize(constant.literal);
        let val = match literal {
            mir::ConstantKind::Val(val, _) => val,
            mir::ConstantKind::Ty(ct) => match ct.val {
                ty::ConstKind::Value(val) => val,
                ty::ConstKind::Unevaluated(ct) => {
                    let param_env = ty::ParamEnv::reveal_all();
                    match self.tcx.const_eval_resolve(param_env, ct, None) {
                        // The `monomorphize` call should have evaluated that constant already.
                        Ok(val) => val,
                        Err(ErrorHandled::Reported(ErrorReported) | ErrorHandled::Linted) => return,
                        Err(ErrorHandled::TooGeneric) => span_bug!(
                            self.body.source_info(location).span,
                            "collection encountered polymorphic constant: {:?}",
                            literal
                        ),
                    }
                }
                _ => return,
            },
        };
        collect_const_value(self.tcx, val, self.output);
        self.visit_ty(literal.ty(), TyContext::Location(location));
    }
    fn visit_const(&mut self, constant: &&'tcx ty::Const<'tcx>, location: Location) {
        debug!("visiting const {:?} @ {:?}", *constant, location);

        let substituted_constant = self.monomorphize(*constant);
        let param_env = ty::ParamEnv::reveal_all();

        match substituted_constant.val {
            ty::ConstKind::Value(val) => collect_const_value(self.tcx, val, self.output),
            ty::ConstKind::Unevaluated(unevaluated) => {
                match self.tcx.const_eval_resolve(param_env, unevaluated, None) {
                    // The `monomorphize` call should have evaluated that constant already.
                    Ok(val) => span_bug!(
                        self.body.source_info(location).span,
                        "collection encountered the unevaluated constant {} which evaluated to {:?}",
                        substituted_constant,
                        val
                    ),
                    Err(ErrorHandled::Reported(ErrorReported) | ErrorHandled::Linted) => {}
                    Err(ErrorHandled::TooGeneric) => span_bug!(
                        self.body.source_info(location).span,
                        "collection encountered polymorphic constant: {}",
                        substituted_constant
                    ),
                }
            }
            _ => {}
        }

        self.super_const(constant);
    }
    fn visit_terminator(&mut self, terminator: &mir::Terminator<'tcx>, location: Location) {
        debug!("visiting terminator {:?} @ {:?}", terminator, location);
        let source = self.body.source_info(location).span;

        let tcx = self.tcx;
        match terminator.kind {
            mir::TerminatorKind::Call { ref func, .. } => {
                let callee_ty = func.ty(self.body, tcx);
                let callee_ty = self.monomorphize(callee_ty);
                visit_fn_use(self.tcx, callee_ty, true, source, &mut self.output);
            }
            mir::TerminatorKind::Drop { ref place, .. }
            | mir::TerminatorKind::DropAndReplace { ref place, .. } => {
                let ty = place.ty(self.body, self.tcx).ty;
                let ty = self.monomorphize(ty);
                visit_drop_use(self.tcx, ty, true, source, self.output);
            }
            mir::TerminatorKind::InlineAsm { ref operands, .. } => {
                for op in operands {
                    match *op {
                        mir::InlineAsmOperand::SymFn { ref value } => {
                            let fn_ty = self.monomorphize(value.literal.ty());
                            visit_fn_use(self.tcx, fn_ty, false, source, &mut self.output);
                        }
                        mir::InlineAsmOperand::SymStatic { def_id } => {
                            let instance = Instance::mono(self.tcx, def_id);
                            if should_codegen_locally(self.tcx, &instance) {
                                trace!("collecting asm sym static {:?}", def_id);
                                self.output.push(respan(source, MonoItem::Static(def_id)));
                            }
                        }
                        _ => {}
                    }
                }
            }
            mir::TerminatorKind::Goto { .. }
            | mir::TerminatorKind::SwitchInt { .. }
            | mir::TerminatorKind::Resume
            | mir::TerminatorKind::Abort
            | mir::TerminatorKind::Return
            | mir::TerminatorKind::Unreachable
            | mir::TerminatorKind::Assert { .. } => {}
            mir::TerminatorKind::GeneratorDrop
            | mir::TerminatorKind::Yield { .. }
            | mir::TerminatorKind::FalseEdge { .. }
            | mir::TerminatorKind::FalseUnwind { .. } => bug!(),
        }

        self.super_terminator(terminator, location);
    }
    fn visit_operand(&mut self, operand: &mir::Operand<'tcx>, location: Location) {
        self.super_operand(operand, location);
        let limit = self.tcx.sess.move_size_limit();
        if limit == 0 {
            return;
        }
        let limit = Size::from_bytes(limit);
        let ty = operand.ty(self.body, self.tcx);
        let ty = self.monomorphize(ty);
        let layout = self.tcx.layout_of(ty::ParamEnv::reveal_all().and(ty));
        if let Ok(layout) = layout {
            if layout.size > limit {
                let source_info = self.body.source_info(location);
                debug!(?source_info);
                let lint_root = source_info.scope.lint_root(&self.body.source_scopes);
                let lint_root = match lint_root {
                    Some(lint_root) => lint_root,
                    // This happens when the issue is in a function from a foreign crate that
                    // we monomorphized in the current crate. We can't get a `HirId` for things
                    // from other crates.
                    // FIXME: Find out where to report the lint on. Maybe simply crate-level lint root
                    // but correct span? This would make the lint at least accept crate-level lint attributes.
                    None => return,
                };
                self.tcx.struct_span_lint_hir(
                    LARGE_ASSIGNMENTS,
                    lint_root,
                    source_info.span,
                    |lint| {
                        let mut err = lint.build(&format!("moving {} bytes", layout.size.bytes()));
                        err.span_label(source_info.span, "value moved from here");
                        err.emit()
                    },
                );
            }
        }
    }

    fn visit_local(
        &mut self,
        _place_local: &Local,
        _context: mir::visit::PlaceContext,
        _location: Location,
    ) {
    }
}
fn visit_drop_use<'tcx>(
    tcx: TyCtxt<'tcx>,
    ty: Ty<'tcx>,
    is_direct_call: bool,
    source: Span,
    output: &mut Vec<Spanned<MonoItem<'tcx>>>,
) {
    let instance = Instance::resolve_drop_in_place(tcx, ty);
    visit_instance_use(tcx, instance, is_direct_call, source, output);
}

fn visit_fn_use<'tcx>(
    tcx: TyCtxt<'tcx>,
    ty: Ty<'tcx>,
    is_direct_call: bool,
    source: Span,
    output: &mut Vec<Spanned<MonoItem<'tcx>>>,
) {
    if let ty::FnDef(def_id, substs) = *ty.kind() {
        let instance = if is_direct_call {
            ty::Instance::resolve(tcx, ty::ParamEnv::reveal_all(), def_id, substs).unwrap().unwrap()
        } else {
            ty::Instance::resolve_for_fn_ptr(tcx, ty::ParamEnv::reveal_all(), def_id, substs)
                .unwrap()
        };
        visit_instance_use(tcx, instance, is_direct_call, source, output);
    }
}
fn visit_instance_use<'tcx>(
    tcx: TyCtxt<'tcx>,
    instance: ty::Instance<'tcx>,
    is_direct_call: bool,
    source: Span,
    output: &mut Vec<Spanned<MonoItem<'tcx>>>,
) {
    debug!("visit_instance_use({:?}, is_direct_call={:?})", instance, is_direct_call);
    if !should_codegen_locally(tcx, &instance) {
        return;
    }

    match instance.def {
        ty::InstanceDef::Virtual(..) | ty::InstanceDef::Intrinsic(_) => {
            if !is_direct_call {
                bug!("{:?} being reified", instance);
            }
        }
        ty::InstanceDef::DropGlue(_, None) => {
            // Don't need to emit noop drop glue if we are calling directly.
            if !is_direct_call {
                output.push(create_fn_mono_item(tcx, instance, source));
            }
        }
        ty::InstanceDef::DropGlue(_, Some(_))
        | ty::InstanceDef::VtableShim(..)
        | ty::InstanceDef::ReifyShim(..)
        | ty::InstanceDef::ClosureOnceShim { .. }
        | ty::InstanceDef::Item(..)
        | ty::InstanceDef::FnPtrShim(..)
        | ty::InstanceDef::CloneShim(..) => {
            output.push(create_fn_mono_item(tcx, instance, source));
        }
    }
}
// Returns `true` if we should codegen an instance in the local crate.
// Returns `false` if we can just link to the upstream crate and therefore don't
// have to codegen it.
fn should_codegen_locally<'tcx>(tcx: TyCtxt<'tcx>, instance: &Instance<'tcx>) -> bool {
    let def_id = match instance.def {
        ty::InstanceDef::Item(def) => def.did,
        ty::InstanceDef::DropGlue(def_id, Some(_)) => def_id,
        ty::InstanceDef::VtableShim(..)
        | ty::InstanceDef::ReifyShim(..)
        | ty::InstanceDef::ClosureOnceShim { .. }
        | ty::InstanceDef::Virtual(..)
        | ty::InstanceDef::FnPtrShim(..)
        | ty::InstanceDef::DropGlue(..)
        | ty::InstanceDef::Intrinsic(_)
        | ty::InstanceDef::CloneShim(..) => return true,
    };

    if tcx.is_foreign_item(def_id) {
        // Foreign items are always linked against, there's no way of instantiating them.
        return false;
    }

    if def_id.is_local() {
        // Local items cannot be referred to locally without monomorphizing them locally.
        return true;
    }

    if tcx.is_reachable_non_generic(def_id)
        || instance.polymorphize(tcx).upstream_monomorphization(tcx).is_some()
    {
        // We can link to the item in question, no instance needed in this crate.
        return false;
    }

    if !tcx.is_mir_available(def_id) {
        bug!("no MIR available for {:?}", def_id);
    }

    true
}
/// For a given pair of source and target type that occur in an unsizing coercion,
/// this function finds the pair of types that determines the vtable linking
/// them.
///
/// For example, the source type might be `&SomeStruct` and the target type
/// might be `&SomeTrait` in a cast like:
///
/// let src: &SomeStruct = ...;
/// let target = src as &SomeTrait;
///
/// Then the output of this function would be (SomeStruct, SomeTrait) since for
/// constructing the `target` fat-pointer we need the vtable for that pair.
///
/// Things can get more complicated though because there's also the case where
/// the unsized type occurs as a field:
///
/// ```rust
/// struct ComplexStruct<T: ?Sized> {
///     a: u32,
///     b: f64,
///     c: T,
/// }
/// ```
///
/// In this case, if `T` is sized, `&ComplexStruct<T>` is a thin pointer. If `T`
/// is unsized, `&ComplexStruct<T>` is a fat pointer, and the vtable it points
/// to is for the pair of `T` (which is a trait) and the concrete type that `T`
/// was originally coerced from:
///
/// let src: &ComplexStruct<SomeStruct> = ...;
/// let target = src as &ComplexStruct<SomeTrait>;
///
/// Again, we want this `find_vtable_types_for_unsizing()` to provide the pair
/// `(SomeStruct, SomeTrait)`.
///
/// Finally, there is also the case of custom unsizing coercions, e.g., for
/// smart pointers such as `Rc` and `Arc`.
fn find_vtable_types_for_unsizing<'tcx>(
) -> (Ty<'tcx>, Ty<'tcx>) {
    let ptr_vtable = |inner_source: Ty<'tcx>, inner_target: Ty<'tcx>| {
        let param_env = ty::ParamEnv::reveal_all();
        let type_has_metadata = |ty: Ty<'tcx>| -> bool {
            if ty.is_sized(tcx.at(DUMMY_SP), param_env) {
            let tail = tcx.struct_tail_erasing_lifetimes(ty, param_env);
                ty::Foreign(..) => false,
                ty::Str | ty::Slice(..) | ty::Dynamic(..) => true,
                _ => bug!("unexpected unsized tail: {:?}", tail),
        if type_has_metadata(inner_source) {
            (inner_source, inner_target)
            tcx.struct_lockstep_tails_erasing_lifetimes(inner_source, inner_target, param_env)

    match (&source_ty.kind(), &target_ty.kind()) {
        (&ty::Ref(_, a, _), &ty::Ref(_, b, _) | &ty::RawPtr(ty::TypeAndMut { ty: b, .. }))
        | (&ty::RawPtr(ty::TypeAndMut { ty: a, .. }), &ty::RawPtr(ty::TypeAndMut { ty: b, .. })) => {
        (&ty::Adt(def_a, _), &ty::Adt(def_b, _)) if def_a.is_box() && def_b.is_box() => {
            ptr_vtable(source_ty.boxed_ty(), target_ty.boxed_ty())
        (&ty::Adt(source_adt_def, source_substs), &ty::Adt(target_adt_def, target_substs)) => {
            assert_eq!(source_adt_def, target_adt_def);
            let CustomCoerceUnsized::Struct(coerce_index) =
                monomorphize::custom_coerce_unsize_info(tcx, source_ty, target_ty);
            let source_fields = &source_adt_def.non_enum_variant().fields;
            let target_fields = &target_adt_def.non_enum_variant().fields;
                coerce_index < source_fields.len() && source_fields.len() == target_fields.len()
            find_vtable_types_for_unsizing(
                source_fields[coerce_index].ty(tcx, source_substs),
                target_fields[coerce_index].ty(tcx, target_substs),
                "find_vtable_types_for_unsizing: invalid coercion {:?} -> {:?}",
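// The coercions this function analyzes can be exercised directly in user code.
// A minimal, self-contained sketch (the `SomeStruct`/`SomeTrait`/`ComplexStruct`
// names are illustrative, mirroring the doc comment above): both coercions below
// produce a fat pointer whose vtable is for the pair (SomeStruct, dyn SomeTrait).

```rust
trait SomeTrait {
    fn answer(&self) -> u32;
}

struct SomeStruct;

impl SomeTrait for SomeStruct {
    fn answer(&self) -> u32 {
        42
    }
}

// The unsized type occurs as the *last* field, as in the doc comment above.
struct ComplexStruct<T: ?Sized> {
    a: u32,
    c: T,
}

fn main() {
    // Direct case: `&SomeStruct` unsizes to `&dyn SomeTrait`.
    let src: &SomeStruct = &SomeStruct;
    let target: &dyn SomeTrait = src;
    assert_eq!(target.answer(), 42);

    // Field case: the whole struct reference becomes the fat pointer,
    // but the vtable is still determined by the coerced field's types.
    let s = ComplexStruct { a: 7, c: SomeStruct };
    let r: &ComplexStruct<dyn SomeTrait> = &s;
    assert_eq!(r.a, 7);
    assert_eq!(r.c.answer(), 42);
}
```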
fn create_fn_mono_item<'tcx>(
    instance: Instance<'tcx>,
) -> Spanned<MonoItem<'tcx>> {
    debug!("create_fn_mono_item(instance={})", instance);
    respan(source, MonoItem::Fn(instance.polymorphize(tcx)))
/// Creates a `MonoItem` for each method that is referenced by the vtable for
/// the given trait/impl pair.
fn create_mono_items_for_vtable_methods<'tcx>(
    output: &mut Vec<Spanned<MonoItem<'tcx>>>,
    assert!(!trait_ty.has_escaping_bound_vars() && !impl_ty.has_escaping_bound_vars());

    if let ty::Dynamic(ref trait_ty, ..) = trait_ty.kind() {
        if let Some(principal) = trait_ty.principal() {
            let poly_trait_ref = principal.with_self_ty(tcx, impl_ty);
            assert!(!poly_trait_ref.has_escaping_bound_vars());

            // Walk all methods of the trait, including those of its supertraits.
            let methods = tcx.vtable_methods(poly_trait_ref);
            let methods = methods
                .filter_map(|method| method)
                .map(|(def_id, substs)| {
                    ty::Instance::resolve_for_vtable(
                        ty::ParamEnv::reveal_all(),
                .filter(|&instance| should_codegen_locally(tcx, &instance))
                .map(|item| create_fn_mono_item(tcx, item, source));
            output.extend(methods);

        // Also add the destructor.
        visit_drop_use(tcx, impl_ty, false, source, output);
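// A standalone sketch (the `Super`/`Sub`/`Widget` names are hypothetical) of what
// this collection supports at runtime: a trait object's vtable carries entries for
// supertrait methods and a drop-glue slot, so both dynamic dispatch through the
// supertrait and destruction through the trait object work.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

static DROPPED: AtomicBool = AtomicBool::new(false);

trait Super {
    fn base(&self) -> u32 {
        1
    }
}

trait Sub: Super {
    fn extra(&self) -> u32 {
        2
    }
}

struct Widget;
impl Super for Widget {}
impl Sub for Widget {}

impl Drop for Widget {
    fn drop(&mut self) {
        DROPPED.store(true, Ordering::SeqCst);
    }
}

fn main() {
    let obj: Box<dyn Sub> = Box::new(Widget);
    // Supertrait methods are reachable through the vtable too.
    assert_eq!(obj.base() + obj.extra(), 3);
    // The destructor runs through the vtable's drop-glue entry.
    drop(obj);
    assert!(DROPPED.load(Ordering::SeqCst));
}
```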
//=-----------------------------------------------------------------------------
//=-----------------------------------------------------------------------------

struct RootCollector<'a, 'tcx> {
    mode: MonoItemCollectionMode,
    output: &'a mut Vec<Spanned<MonoItem<'tcx>>>,
    entry_fn: Option<(DefId, EntryFnType)>,
impl ItemLikeVisitor<'v> for RootCollector<'_, 'v> {
    fn visit_item(&mut self, item: &'v hir::Item<'v>) {
            hir::ItemKind::ExternCrate(..)
            | hir::ItemKind::Use(..)
            | hir::ItemKind::ForeignMod { .. }
            | hir::ItemKind::TyAlias(..)
            | hir::ItemKind::Trait(..)
            | hir::ItemKind::TraitAlias(..)
            | hir::ItemKind::OpaqueTy(..)
            | hir::ItemKind::Mod(..) => {
                // Nothing to do, just keep recursing.
            hir::ItemKind::Impl { .. } => {
                if self.mode == MonoItemCollectionMode::Eager {
                    create_mono_items_for_default_impls(self.tcx, item, self.output);
            hir::ItemKind::Enum(_, ref generics)
            | hir::ItemKind::Struct(_, ref generics)
            | hir::ItemKind::Union(_, ref generics) => {
                if generics.params.is_empty() {
                    if self.mode == MonoItemCollectionMode::Eager {
                            "RootCollector: ADT drop-glue for {}",
                            self.tcx.def_path_str(item.def_id.to_def_id())
                        let ty = Instance::new(item.def_id.to_def_id(), InternalSubsts::empty())
                            .ty(self.tcx, ty::ParamEnv::reveal_all());
                        visit_drop_use(self.tcx, ty, true, DUMMY_SP, self.output);
            hir::ItemKind::GlobalAsm(..) => {
                    "RootCollector: ItemKind::GlobalAsm({})",
                    self.tcx.def_path_str(item.def_id.to_def_id())
                self.output.push(dummy_spanned(MonoItem::GlobalAsm(item.item_id())));
            hir::ItemKind::Static(..) => {
                    "RootCollector: ItemKind::Static({})",
                    self.tcx.def_path_str(item.def_id.to_def_id())
                self.output.push(dummy_spanned(MonoItem::Static(item.def_id.to_def_id())));
            hir::ItemKind::Const(..) => {
                // Const items only generate mono items if they are actually
                // used somewhere; just declaring them is insufficient.

                // But even just declaring them requires collecting the items
                // that their initializers refer to.
                if let Ok(val) = self.tcx.const_eval_poly(item.def_id.to_def_id()) {
                    collect_const_value(self.tcx, val, &mut self.output);
            hir::ItemKind::Fn(..) => {
                self.push_if_root(item.def_id);
    fn visit_trait_item(&mut self, _: &'v hir::TraitItem<'v>) {
        // Even if there's a default body with no explicit generics,
        // it's still generic over some `Self: Trait`, so not a root.

    fn visit_impl_item(&mut self, ii: &'v hir::ImplItem<'v>) {
        if let hir::ImplItemKind::Fn(hir::FnSig { .. }, _) = ii.kind {
            self.push_if_root(ii.def_id);

    fn visit_foreign_item(&mut self, _foreign_item: &'v hir::ForeignItem<'v>) {}
impl RootCollector<'_, 'v> {
    fn is_root(&self, def_id: LocalDefId) -> bool {
        !item_requires_monomorphization(self.tcx, def_id)
            && match self.mode {
                MonoItemCollectionMode::Eager => true,
                MonoItemCollectionMode::Lazy => {
                    self.entry_fn.and_then(|(id, _)| id.as_local()) == Some(def_id)
                        || self.tcx.is_reachable_non_generic(def_id)
                            .codegen_fn_attrs(def_id)
                            .contains(CodegenFnAttrFlags::RUSTC_STD_INTERNAL_SYMBOL)
    /// If `def_id` represents a root, pushes it onto the list of
    /// outputs. (Note that all roots must be monomorphic.)
    fn push_if_root(&mut self, def_id: LocalDefId) {
        if self.is_root(def_id) {
            debug!("RootCollector::push_if_root: found root def_id={:?}", def_id);

            let instance = Instance::mono(self.tcx, def_id.to_def_id());
            self.output.push(create_fn_mono_item(self.tcx, instance, DUMMY_SP));
    /// As a special case, if we encounter the `main()` function, we also have
    /// to generate a monomorphized copy of the start lang item based on the
    /// return type of `main`. This is not needed when the user writes their
    /// own `start` manually.
    fn push_extra_entry_roots(&mut self) {
        let main_def_id = match self.entry_fn {
            Some((def_id, EntryFnType::Main)) => def_id,

        let start_def_id = match self.tcx.lang_items().require(LangItem::Start) {
            Err(err) => self.tcx.sess.fatal(&err),

        let main_ret_ty = self.tcx.fn_sig(main_def_id).output();

        // Given that `main()` has no arguments, its return type cannot have
        // late-bound regions, since late-bound regions must appear in the
        // argument listing.
        let main_ret_ty = self.tcx.erase_regions(main_ret_ty.no_bound_vars().unwrap());

        let start_instance = Instance::resolve(
            ty::ParamEnv::reveal_all(),
            self.tcx.intern_substs(&[main_ret_ty.into()]),

        self.output.push(create_fn_mono_item(self.tcx, start_instance, DUMMY_SP));
fn item_requires_monomorphization(tcx: TyCtxt<'_>, def_id: LocalDefId) -> bool {
    let generics = tcx.generics_of(def_id);
    generics.requires_monomorphization(tcx)
fn create_mono_items_for_default_impls<'tcx>(
    item: &'tcx hir::Item<'tcx>,
    output: &mut Vec<Spanned<MonoItem<'tcx>>>,
        hir::ItemKind::Impl(ref impl_) => {
            for param in impl_.generics.params {
                    hir::GenericParamKind::Lifetime { .. } => {}
                    hir::GenericParamKind::Type { .. } | hir::GenericParamKind::Const { .. } => {

                "create_mono_items_for_default_impls(item={})",
                tcx.def_path_str(item.def_id.to_def_id())

            if let Some(trait_ref) = tcx.impl_trait_ref(item.def_id) {
                let param_env = ty::ParamEnv::reveal_all();
                let trait_ref = tcx.normalize_erasing_regions(param_env, trait_ref);
                let overridden_methods: FxHashSet<_> =
                    impl_.items.iter().map(|iiref| iiref.ident.normalize_to_macros_2_0()).collect();
                for method in tcx.provided_trait_methods(trait_ref.def_id) {
                    if overridden_methods.contains(&method.ident.normalize_to_macros_2_0()) {

                    if tcx.generics_of(method.def_id).own_requires_monomorphization() {

                        InternalSubsts::for_item(tcx, method.def_id, |param, _| match param.kind {
                            GenericParamDefKind::Lifetime => tcx.lifetimes.re_erased.into(),
                            GenericParamDefKind::Type { .. }
                            | GenericParamDefKind::Const { .. } => {
                                trait_ref.substs[param.index as usize]

                    let instance = ty::Instance::resolve(tcx, param_env, method.def_id, substs)

                    let mono_item = create_fn_mono_item(tcx, instance, DUMMY_SP);
                    if mono_item.node.is_instantiable(tcx) && should_codegen_locally(tcx, &instance)
                        output.push(mono_item);
/// Scans the miri alloc in order to find function calls, closures, and drop-glue.
fn collect_miri<'tcx>(
    output: &mut Vec<Spanned<MonoItem<'tcx>>>,
    match tcx.global_alloc(alloc_id) {
        GlobalAlloc::Static(def_id) => {
            assert!(!tcx.is_thread_local_static(def_id));
            let instance = Instance::mono(tcx, def_id);
            if should_codegen_locally(tcx, &instance) {
                trace!("collecting static {:?}", def_id);
                output.push(dummy_spanned(MonoItem::Static(def_id)));
        GlobalAlloc::Memory(alloc) => {
            trace!("collecting {:?} with {:#?}", alloc_id, alloc);
            for &((), inner) in alloc.relocations().values() {
                rustc_data_structures::stack::ensure_sufficient_stack(|| {
                    collect_miri(tcx, inner, output);
        GlobalAlloc::Function(fn_instance) => {
            if should_codegen_locally(tcx, &fn_instance) {
                trace!("collecting {:?} with {:#?}", alloc_id, fn_instance);
                output.push(create_fn_mono_item(tcx, fn_instance, DUMMY_SP));
/// Scans the MIR in order to find function calls, closures, and drop-glue.
fn collect_neighbours<'tcx>(
    instance: Instance<'tcx>,
    output: &mut Vec<Spanned<MonoItem<'tcx>>>,
    debug!("collect_neighbours: {:?}", instance.def_id());
    let body = tcx.instance_mir(instance.def);

    MirNeighborCollector { tcx, body: &body, output, instance }.visit_body(&body);

fn collect_const_value<'tcx>(
    value: ConstValue<'tcx>,
    output: &mut Vec<Spanned<MonoItem<'tcx>>>,
        ConstValue::Scalar(Scalar::Ptr(ptr)) => collect_miri(tcx, ptr.alloc_id, output),
        ConstValue::Slice { data: alloc, start: _, end: _ } | ConstValue::ByRef { alloc, .. } => {
            for &((), id) in alloc.relocations().values() {
                collect_miri(tcx, id, output);
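// A hypothetical standalone illustration (the `helper`/`INDIRECT` names are made
// up) of why constants must be scanned for relocations: a static's initializer
// can embed a function pointer, so the only reference to `helper` lives inside a
// constant allocation, not at any direct call site. Walking the allocation's
// relocations is what lets collection discover and codegen `helper`.

```rust
fn helper() -> u32 {
    7
}

// The initializer's allocation contains a pointer to `helper`.
static INDIRECT: fn() -> u32 = helper;

fn main() {
    // The only path to `helper` is through the stored pointer.
    assert_eq!(INDIRECT(), 7);
}
```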