// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

//! Partitioning Codegen Units for Incremental Compilation
//! ======================================================
//!
//! The task of this module is to take the complete set of translation items of
//! a crate and produce a set of codegen units from it, where a codegen unit
//! is a named set of (translation-item, linkage) pairs. That is, this module
//! decides which translation item appears in which codegen units with which
//! linkage. The following paragraphs describe some of the background on the
//! partitioning scheme.
//!
//! The most important opportunity for saving on compilation time with
//! incremental compilation is to avoid re-translating and re-optimizing code.
//! Since the unit of translation and optimization for LLVM is the "module"
//! or, as we call it, the "codegen unit", the particulars of how much time can
//! be saved by incremental compilation are tightly linked to how the output
//! program is partitioned into these codegen units prior to passing it to
//! LLVM -- especially because we have to treat codegen units as opaque
//! entities once they are created: There is no way for us to incrementally
//! update an existing LLVM module and so we have to build any such module from
//! scratch if it was affected by some change in the source code.
//!
//! From that point of view it would make sense to maximize the number of
//! codegen units by, for example, putting each function into its own module.
//! That way only those modules would have to be re-compiled that were actually
//! affected by some change, minimizing the number of functions that could have
//! been re-used but just happened to be located in a module that is
//! re-compiled anyway.
//!
//! However, since LLVM optimization does not work across module boundaries,
//! using such a highly granular partitioning would lead to very slow runtime
//! code since it would effectively prohibit inlining and other inter-procedural
//! optimizations. We want to avoid that as much as possible.
//!
//! Thus we end up with a trade-off: The bigger the codegen units, the better
//! LLVM's optimizer can do its work, but also the smaller the compilation time
//! reduction we get from incremental compilation.
//!
//! Ideally, we would create a partitioning such that there are few big codegen
//! units with few interdependencies between them. For now though, we use the
//! following heuristic to determine the partitioning:
//!
//! - There are two codegen units for every source-level module:
//!   - One for "stable", that is non-generic, code
//!   - One for more "volatile" code, i.e. monomorphized instances of functions
//!     defined in that module
//! - Code for monomorphized instances of functions from external crates gets
//!   placed into every codegen unit that uses that instance.
//!
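//! As a hypothetical sketch (crate, module, and function names invented for
//! illustration), a crate `mycrate` containing
//!
//! ```ignore
//! mod foo {
//!     pub fn bar() { /* non-generic */ }
//!     pub fn baz<T>(_t: T) { /* generic */ }
//! }
//! ```
//!
//! would yield a "stable" codegen unit named along the lines of `mycrate-foo`
//! holding `bar()` and, in incremental builds, a "volatile" unit
//! `mycrate-foo.volatile` holding the monomorphized instances of `baz()` (see
//! `compute_codegen_unit_name` below for the actual naming scheme).
//!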
//! In order to see why this heuristic makes sense, let's take a look at when a
//! codegen unit can get invalidated:
//!
//! 1. The most straightforward case is when the BODY of a function or global
//! changes. Then any codegen unit containing the code for that item has to be
//! re-compiled. Note that this includes all codegen units where the function
//! has been inlined into.
//!
//! 2. The next case is when the SIGNATURE of a function or global changes. In
//! this case, all codegen units containing a REFERENCE to that item have to be
//! re-compiled. This is a superset of case 1.
//!
//! 3. The final and most subtle case is when a REFERENCE to a generic function
//! is added or removed somewhere. Even though the definition of the function
//! might be unchanged, a new REFERENCE might introduce a new monomorphized
//! instance of this function which has to be placed and compiled somewhere.
//! Conversely, when removing a REFERENCE, it might have been the last one with
//! that particular set of generic arguments and thus we have to remove it.
//!
//! From the above we see that just using one codegen unit per source-level
//! module is not such a good idea, since just adding a REFERENCE to some
//! generic item somewhere else would invalidate everything within the module
//! containing the generic item. The heuristic above reduces this detrimental
//! side-effect of references a little by at least not touching the non-generic
//! code of the module.
//!
//! As another optimization, monomorphized functions from external crates get
//! some special handling. Since we assume that the definition of such a
//! function changes rather infrequently compared to local items, we can just
//! instantiate external functions in every codegen unit where they are
//! referenced -- without having to fear that doing this will cause a lot of
//! unnecessary re-compilations. If such a reference is added or removed, the
//! codegen unit has to be re-translated anyway.
//! (Note that this only makes sense if external crates actually don't change
//! frequently. For certain multi-crate projects this might not be a valid
//! assumption.)
//!
//! A Note on Inlining
//! ------------------
//! As briefly mentioned above, in order for LLVM to be able to inline a
//! function call, the body of the function has to be available in the LLVM
//! module where the call is made. This has a few consequences for partitioning:
//!
//! - The partitioning algorithm has to take care of placing functions into all
//!   codegen units where they should be available for inlining. It also has to
//!   decide on the correct linkage for these functions.
//!
//! - The partitioning algorithm has to know which functions are likely to get
//!   inlined, so it can distribute function instantiations accordingly. Since
//!   there is no way of knowing for sure which functions LLVM will decide to
//!   inline in the end, we apply a heuristic here: Only functions marked with
//!   #[inline] and (as stated above) functions from external crates are
//!   considered for inlining by the partitioner. The current implementation
//!   will not try to determine if a function is likely to be inlined by
//!   looking at the function's definition.
//!
//! Note though that as a side-effect of creating one codegen unit per
//! source-level module, functions from the same module will be available for
//! inlining, even when they are not marked #[inline].
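//!
//! As an illustrative sketch (names invented for illustration), given
//!
//! ```ignore
//! // in module a:
//! #[inline]
//! pub fn helper() { /* ... */ }
//!
//! // in module b:
//! pub fn caller() { helper(); }
//! ```
//!
//! the partitioner places the root copy of `helper()` in module `a`'s codegen
//! unit and an additional `InternalLinkage` copy in module `b`'s unit, so that
//! LLVM can inline the call (see `place_inlined_translation_items` below).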
use collector::InliningMap;
use context::SharedCrateContext;
use llvm;
use monomorphize;
use rustc::dep_graph::{DepNode, WorkProductId};
use rustc::hir::def_id::DefId;
use rustc::hir::map::DefPathData;
use rustc::session::config::NUMBERED_CODEGEN_UNIT_MARKER;
use rustc::ty::TyCtxt;
use rustc::ty::item_path::characteristic_def_id_of_type;
use rustc_incremental::IchHasher;
use std::cmp::Ordering;
use std::hash::Hash;
use std::sync::Arc;
use symbol_map::SymbolMap;
use syntax::ast::NodeId;
use syntax::symbol::{Symbol, InternedString};
use trans_item::{TransItem, InstantiationMode};
use util::nodemap::{FxHashMap, FxHashSet};

pub enum PartitioningStrategy {
    /// Generate one codegen unit per source-level module.
    PerModule,

    /// Partition the whole crate into a fixed number of codegen units.
    FixedUnitCount(usize)
}

pub struct CodegenUnit<'tcx> {
    /// A name for this CGU. Incremental compilation requires that
    /// name be unique amongst **all** crates. Therefore, it should
    /// contain something unique to this crate (e.g., a module path)
    /// as well as the crate name and disambiguator.
    name: InternedString,

    items: FxHashMap<TransItem<'tcx>, llvm::Linkage>,
}

impl<'tcx> CodegenUnit<'tcx> {
    pub fn new(name: InternedString,
               items: FxHashMap<TransItem<'tcx>, llvm::Linkage>)
               -> Self {
        CodegenUnit {
            name: name,
            items: items,
        }
    }

    pub fn empty(name: InternedString) -> Self {
        Self::new(name, FxHashMap())
    }

    pub fn contains_item(&self, item: &TransItem<'tcx>) -> bool {
        self.items.contains_key(item)
    }

    pub fn name(&self) -> &str {
        &self.name
    }

    pub fn items(&self) -> &FxHashMap<TransItem<'tcx>, llvm::Linkage> {
        &self.items
    }

    pub fn work_product_id(&self) -> Arc<WorkProductId> {
        Arc::new(WorkProductId(self.name().to_string()))
    }

    pub fn work_product_dep_node(&self) -> DepNode<DefId> {
        DepNode::WorkProduct(self.work_product_id())
    }

    pub fn compute_symbol_name_hash(&self,
                                    scx: &SharedCrateContext,
                                    symbol_map: &SymbolMap) -> u64 {
        let mut state = IchHasher::new();
        let exported_symbols = scx.exported_symbols();
        let all_items = self.items_in_deterministic_order(scx.tcx(), symbol_map);
        for (item, _) in all_items {
            let symbol_name = symbol_map.get(item).unwrap();
            symbol_name.len().hash(&mut state);
            symbol_name.hash(&mut state);
            let exported = match item {
                TransItem::Fn(ref instance) => {
                    let node_id = scx.tcx().map.as_local_node_id(instance.def);
                    node_id.map(|node_id| exported_symbols.contains(&node_id))
                           .unwrap_or(false)
                }
                TransItem::Static(node_id) => {
                    exported_symbols.contains(&node_id)
                }
                TransItem::DropGlue(..) => false,
            };
            exported.hash(&mut state);
        }
        state.finish().to_smaller_hash()
    }

    pub fn items_in_deterministic_order(&self,
                                        tcx: TyCtxt,
                                        symbol_map: &SymbolMap)
                                        -> Vec<(TransItem<'tcx>, llvm::Linkage)> {
        let mut items: Vec<(TransItem<'tcx>, llvm::Linkage)> =
            self.items.iter().map(|(item, linkage)| (*item, *linkage)).collect();

        // The codegen tests rely on items being processed in the same order as
        // they appear in the file, so for local items, we sort by node_id first.
        items.sort_by(|&(trans_item1, _), &(trans_item2, _)| {
            let node_id1 = local_node_id(tcx, trans_item1);
            let node_id2 = local_node_id(tcx, trans_item2);

            match (node_id1, node_id2) {
                (None, None) => {
                    let symbol_name1 = symbol_map.get(trans_item1).unwrap();
                    let symbol_name2 = symbol_map.get(trans_item2).unwrap();
                    symbol_name1.cmp(symbol_name2)
                }
                // In the following two cases we can avoid looking up the symbol.
                (None, Some(_)) => Ordering::Less,
                (Some(_), None) => Ordering::Greater,
                (Some(node_id1), Some(node_id2)) => {
                    let ordering = node_id1.cmp(&node_id2);

                    if ordering != Ordering::Equal {
                        return ordering;
                    }

                    let symbol_name1 = symbol_map.get(trans_item1).unwrap();
                    let symbol_name2 = symbol_map.get(trans_item2).unwrap();
                    symbol_name1.cmp(symbol_name2)
                }
            }
        });

        items
    }
}

fn local_node_id(tcx: TyCtxt, trans_item: TransItem) -> Option<NodeId> {
    match trans_item {
        TransItem::Fn(instance) => {
            tcx.map.as_local_node_id(instance.def)
        }
        TransItem::Static(node_id) => Some(node_id),
        TransItem::DropGlue(_) => None,
    }
}

// Anything we can't find a proper codegen unit for goes into this.
const FALLBACK_CODEGEN_UNIT: &'static str = "__rustc_fallback_codegen_unit";

pub fn partition<'a, 'tcx, I>(scx: &SharedCrateContext<'a, 'tcx>,
                              trans_items: I,
                              strategy: PartitioningStrategy,
                              inlining_map: &InliningMap<'tcx>)
                              -> Vec<CodegenUnit<'tcx>>
    where I: Iterator<Item = TransItem<'tcx>>
{
    let tcx = scx.tcx();

    // In the first step, we place all regular translation items into their
    // respective 'home' codegen unit. Regular translation items are all
    // functions and statics defined in the local crate.
    let mut initial_partitioning = place_root_translation_items(scx,
                                                                trans_items);

    debug_dump(scx, "INITIAL PARTITIONING:", initial_partitioning.codegen_units.iter());

    // If the partitioning should produce a fixed count of codegen units, merge
    // until that count is reached.
    if let PartitioningStrategy::FixedUnitCount(count) = strategy {
        merge_codegen_units(&mut initial_partitioning, count, &tcx.crate_name.as_str());

        debug_dump(scx, "POST MERGING:", initial_partitioning.codegen_units.iter());
    }

    // In the next step, we use the inlining map to determine which additional
    // translation items have to go into each codegen unit. These additional
    // translation items can be drop-glue, functions from external crates, and
    // local functions the definition of which is marked with #[inline].
    let post_inlining = place_inlined_translation_items(initial_partitioning,
                                                        inlining_map);

    debug_dump(scx, "POST INLINING:", post_inlining.0.iter());

    // Finally, sort by codegen unit name, so that we get deterministic results.
    let mut result = post_inlining.0;
    result.sort_by(|cgu1, cgu2| {
        (&cgu1.name[..]).cmp(&cgu2.name[..])
    });

    result
}

struct PreInliningPartitioning<'tcx> {
    codegen_units: Vec<CodegenUnit<'tcx>>,
    roots: FxHashSet<TransItem<'tcx>>,
}

struct PostInliningPartitioning<'tcx>(Vec<CodegenUnit<'tcx>>);

fn place_root_translation_items<'a, 'tcx, I>(scx: &SharedCrateContext<'a, 'tcx>,
                                             trans_items: I)
                                             -> PreInliningPartitioning<'tcx>
    where I: Iterator<Item = TransItem<'tcx>>
{
    let tcx = scx.tcx();
    let mut roots = FxHashSet();
    let mut codegen_units = FxHashMap();
    let is_incremental_build = tcx.sess.opts.incremental.is_some();

    for trans_item in trans_items {
        let is_root = trans_item.instantiation_mode(tcx) == InstantiationMode::GloballyShared;

        if is_root {
            let characteristic_def_id = characteristic_def_id_of_trans_item(scx, trans_item);
            let is_volatile = is_incremental_build &&
                              trans_item.is_generic_fn();

            let codegen_unit_name = match characteristic_def_id {
                Some(def_id) => compute_codegen_unit_name(tcx, def_id, is_volatile),
                None => Symbol::intern(FALLBACK_CODEGEN_UNIT).as_str(),
            };

            let make_codegen_unit = || {
                CodegenUnit::empty(codegen_unit_name.clone())
            };

            let mut codegen_unit = codegen_units.entry(codegen_unit_name.clone())
                                                .or_insert_with(make_codegen_unit);

            let linkage = match trans_item.explicit_linkage(tcx) {
                Some(explicit_linkage) => explicit_linkage,
                None => {
                    match trans_item {
                        TransItem::Fn(..) => llvm::InternalLinkage,
                        TransItem::Static(..) => llvm::ExternalLinkage,
                        TransItem::DropGlue(..) => unreachable!(),
                    }
                }
            };

            codegen_unit.items.insert(trans_item, linkage);
            roots.insert(trans_item);
        }
    }

    // Always ensure we have at least one CGU; otherwise, if we have a
    // crate with just types (for example), we could wind up with no CGU.
    if codegen_units.is_empty() {
        let codegen_unit_name = Symbol::intern(FALLBACK_CODEGEN_UNIT).as_str();
        codegen_units.entry(codegen_unit_name.clone())
                     .or_insert_with(|| CodegenUnit::empty(codegen_unit_name.clone()));
    }

    PreInliningPartitioning {
        codegen_units: codegen_units.into_iter()
                                    .map(|(_, codegen_unit)| codegen_unit)
                                    .collect(),
        roots: roots,
    }
}

fn merge_codegen_units<'tcx>(initial_partitioning: &mut PreInliningPartitioning<'tcx>,
                             target_cgu_count: usize,
                             crate_name: &str) {
    assert!(target_cgu_count >= 1);
    let codegen_units = &mut initial_partitioning.codegen_units;

    // Merge the two smallest codegen units until the target count is reached.
    // Note that "size" is estimated here rather inaccurately as the number of
    // translation items in a given unit. This could be improved on.
    while codegen_units.len() > target_cgu_count {
        // Sort small cgus to the back.
        codegen_units.sort_by_key(|cgu| -(cgu.items.len() as i64));
        let smallest = codegen_units.pop().unwrap();
        let second_smallest = codegen_units.last_mut().unwrap();

        for (k, v) in smallest.items.into_iter() {
            second_smallest.items.insert(k, v);
        }
    }

    for (index, cgu) in codegen_units.iter_mut().enumerate() {
        cgu.name = numbered_codegen_unit_name(crate_name, index);
    }

    // If the initial partitioning contained fewer than target_cgu_count units
    // to begin with, we won't have enough codegen units here, so add empty
    // units until we reach the target count.
    while codegen_units.len() < target_cgu_count {
        let index = codegen_units.len();
        codegen_units.push(
            CodegenUnit::empty(numbered_codegen_unit_name(crate_name, index)));
    }
}

fn place_inlined_translation_items<'tcx>(initial_partitioning: PreInliningPartitioning<'tcx>,
                                         inlining_map: &InliningMap<'tcx>)
                                         -> PostInliningPartitioning<'tcx> {
    let mut new_partitioning = Vec::new();

    for codegen_unit in &initial_partitioning.codegen_units[..] {
        // Collect all items that need to be available in this codegen unit.
        let mut reachable = FxHashSet();
        for root in codegen_unit.items.keys() {
            follow_inlining(*root, inlining_map, &mut reachable);
        }

        let mut new_codegen_unit =
            CodegenUnit::empty(codegen_unit.name.clone());

        // Add all translation items that are not already there.
        for trans_item in reachable {
            if let Some(linkage) = codegen_unit.items.get(&trans_item) {
                // This is a root, just copy it over.
                new_codegen_unit.items.insert(trans_item, *linkage);
            } else {
                if initial_partitioning.roots.contains(&trans_item) {
                    bug!("GloballyShared trans-item inlined into other CGU: \
                          {:?}", trans_item);
                }

                // This is a cgu-private copy.
                new_codegen_unit.items.insert(trans_item, llvm::InternalLinkage);
            }
        }

        new_partitioning.push(new_codegen_unit);
    }

    PostInliningPartitioning(new_partitioning)
}

fn follow_inlining<'tcx>(trans_item: TransItem<'tcx>,
                         inlining_map: &InliningMap<'tcx>,
                         visited: &mut FxHashSet<TransItem<'tcx>>) {
    if !visited.insert(trans_item) {
        return;
    }

    inlining_map.with_inlining_candidates(trans_item, |target| {
        follow_inlining(target, inlining_map, visited);
    });
}

fn characteristic_def_id_of_trans_item<'a, 'tcx>(scx: &SharedCrateContext<'a, 'tcx>,
                                                 trans_item: TransItem<'tcx>)
                                                 -> Option<DefId> {
    let tcx = scx.tcx();
    match trans_item {
        TransItem::Fn(instance) => {
            // If this is a method, we want to put it into the same module as
            // its self-type. If the self-type does not provide a characteristic
            // DefId, we use the location of the impl after all.

            if tcx.trait_of_item(instance.def).is_some() {
                let self_ty = instance.substs.type_at(0);
                // This is an implementation of a trait method.
                return characteristic_def_id_of_type(self_ty).or(Some(instance.def));
            }

            if let Some(impl_def_id) = tcx.impl_of_method(instance.def) {
                // This is a method within an inherent impl, find out what the
                // self-type is:
                let impl_self_ty = tcx.item_type(impl_def_id);
                let impl_self_ty = tcx.erase_regions(&impl_self_ty);
                let impl_self_ty = monomorphize::apply_param_substs(scx,
                                                                    instance.substs,
                                                                    &impl_self_ty);

                if let Some(def_id) = characteristic_def_id_of_type(impl_self_ty) {
                    return Some(def_id);
                }
            }

            Some(instance.def)
        }
        TransItem::DropGlue(dg) => characteristic_def_id_of_type(dg.ty()),
        TransItem::Static(node_id) => Some(tcx.map.local_def_id(node_id)),
    }
}

fn compute_codegen_unit_name<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
                                       def_id: DefId,
                                       volatile: bool)
                                       -> InternedString {
    // Unfortunately we cannot just use the `ty::item_path` infrastructure here
    // because we need paths to modules and the DefIds of those are not
    // available anymore for external items.
    let mut mod_path = String::with_capacity(64);

    let def_path = tcx.def_path(def_id);
    mod_path.push_str(&tcx.crate_name(def_path.krate).as_str());

    for part in tcx.def_path(def_id)
                   .data
                   .iter()
                   .take_while(|part| {
                        match part.data {
                            DefPathData::Module(..) => true,
                            _ => false,
                        }
                    }) {
        mod_path.push_str("-");
        mod_path.push_str(&part.data.as_interned_str());
    }

    if volatile {
        mod_path.push_str(".volatile");
    }

    Symbol::intern(&mod_path[..]).as_str()
}

fn numbered_codegen_unit_name(crate_name: &str, index: usize) -> InternedString {
    Symbol::intern(&format!("{}{}{}", crate_name, NUMBERED_CODEGEN_UNIT_MARKER, index)).as_str()
}

fn debug_dump<'a, 'b, 'tcx, I>(scx: &SharedCrateContext<'a, 'tcx>,
                               label: &str,
                               cgus: I)
    where I: Iterator<Item=&'b CodegenUnit<'tcx>>,
          'tcx: 'a + 'b
{
    if cfg!(debug_assertions) {
        debug!("{}", label);
        for cgu in cgus {
            let symbol_map = SymbolMap::build(scx, cgu.items
                                                      .iter()
                                                      .map(|(&trans_item, _)| trans_item));
            debug!("CodegenUnit {}:", cgu.name);

            for (trans_item, linkage) in &cgu.items {
                let symbol_name = symbol_map.get_or_compute(scx, *trans_item);
                let symbol_hash_start = symbol_name.rfind('h');
                let symbol_hash = symbol_hash_start.map(|i| &symbol_name[i ..])
                                                   .unwrap_or("<no hash>");

                debug!(" - {} [{:?}] [{}]",
                       trans_item.to_string(scx.tcx()),
                       linkage,
                       symbol_hash);
            }

            debug!("");
        }
    }
}