// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! This module manages how the incremental compilation cache is represented in
//! the file system.
//!
//! Incremental compilation caches are managed according to a copy-on-write
//! strategy: once a complete, consistent cache version is finalized, it is
//! never modified. Instead, when a subsequent compilation session is started,
//! the compiler allocates a new version of the cache that starts out as
//! a copy of the previous version. Only this new copy is modified, and it
//! is not visible to other processes until it is finalized. This ensures
//! that multiple compiler processes can run concurrently for the same
//! crate without interfering with or blocking each other.
//! More concretely this is implemented via the following protocol:
//!
//! 1. For a newly started compilation session, the compiler allocates a
//!    new `session` directory within the incremental compilation directory.
//!    This session directory has a unique name that ends with the suffix
//!    "-working" and contains a creation timestamp.
//! 2. Next, the compiler looks for the newest finalized session directory,
//!    that is, a session directory from a previous compilation session that
//!    has been marked as valid and consistent. A session directory is
//!    considered finalized if the "-working" suffix in the directory name has
//!    been replaced by the SVH of the crate.
//! 3. Once the compiler has found a valid, finalized session directory, it
//!    hard-links/copies its contents into the new "-working" directory. If all
//!    goes well, it will have its own, private copy of the source directory and
//!    subsequently not have to worry about synchronizing with other compiler
//!    sessions.
//! 4. Now the compiler can do its normal compilation process, which involves
//!    reading from and updating its private session directory.
//! 5. When compilation finishes without errors, the private session directory
//!    will be in a state where it can be used as input for other compilation
//!    sessions. That is, it will contain a dependency graph and cache artifacts
//!    that are consistent with the state of the source code it was compiled
//!    from, with no need to change them ever again. At this point, the compiler
//!    finalizes and "publishes" its private session directory by renaming it
//!    from "s-{timestamp}-{random}-working" to "s-{timestamp}-{random}-{svh}".
//! 6. At this point the "old" session directory that we copied our data from
//!    at the beginning of the session has become obsolete because we have just
//!    published a more current version. Thus the compiler will delete it.
//! ## Garbage Collection
//!
//! Naively following the above protocol might lead to old session directories
//! piling up if a compiler instance crashes for some reason before it is able
//! to remove its private session directory. In order to avoid wasting disk
//! space, the compiler also does some garbage collection each time it is
//! started in incremental compilation mode. Specifically, it will scan the
//! incremental compilation directory for private session directories that are
//! not in use any more and will delete those. It will also delete any finalized
//! session directories for a given crate except for the most recent one.
//! ## Synchronization
//!
//! There is some synchronization needed in order for the compiler to be able to
//! determine whether a given private session directory is not in use any more.
//! This is done by creating a lock file for each session directory and
//! locking it while the directory is still being used. Since file locks have
//! operating system support, we can rely on the lock being released if the
//! compiler process dies for some unexpected reason. Thus, when garbage
//! collecting private session directories, the collecting process can determine
//! whether the directory is still in use by trying to acquire a lock on the
//! file. If locking the file fails, the original process must still be alive.
//! If locking the file succeeds, we know that the owning process is not alive
//! any more and we can safely delete the directory.
//! There is still a small time window between the original process creating the
//! lock file and actually locking it. In order to minimize the chance that
//! another process tries to acquire the lock in just that instant, only
//! session directories that are older than a few seconds are considered for
//! garbage collection.
//! Another case that has to be considered is what happens if one process
//! deletes a finalized session directory that another process is currently
//! trying to copy from. This case is also handled via the lock file: before
//! a process starts copying a finalized session directory, it will acquire a
//! shared lock on the directory's lock file. Any garbage collecting process,
//! on the other hand, will acquire an exclusive lock on the lock file.
//! Thus, if a directory is being collected, any reader process will fail to
//! acquire the shared lock and will leave the directory alone. Conversely,
//! if a collecting process can't acquire the exclusive lock because the
//! directory is currently being read from, it will leave collecting that
//! directory to another process at a later point in time.
//!
//! The exact same scheme is also used when reading the metadata hashes file
//! from an extern crate. When a crate is compiled, the hash values of its
//! metadata are stored in a file in its session directory. When the
//! compilation session of another crate imports the first crate's metadata,
//! it also has to read in the accompanying metadata hashes. It thus will access
//! the finalized session directory of all crates it links to and, while doing
//! so, it will also place a read lock on the respective session directory
//! so that it won't be deleted while the metadata hashes are loaded.
//! This system relies on two features being available in the file system in
//! order to work really well: file locking and hard linking.
//! If hard linking is not available (like on FAT) the data in the cache
//! actually has to be copied at the beginning of each session.
//! If file locking does not work reliably (like on NFS), some of the
//! synchronization will go haywire.
//! In both cases we recommend locating the incremental compilation directory
//! on a file system that supports these things.
//! It might be a good idea though to try and detect whether we are on an
//! unsupported file system and emit a warning in that case. This is not yet
//! implemented, however.
use rustc::hir::def_id::{CrateNum, LOCAL_CRATE};
use rustc::hir::svh::Svh;
use rustc::session::Session;
use rustc::ty::TyCtxt;
use rustc::util::fs as fs_util;
use rustc_data_structures::{flock, base_n};
use rustc_data_structures::fx::{FxHashSet, FxHashMap};

use std::ffi::OsString;
use std::fs as std_fs;
use std::io;
use std::mem;
use std::path::{Path, PathBuf};
use std::time::{UNIX_EPOCH, SystemTime, Duration};
use std::__rand::{thread_rng, Rng};
const LOCK_FILE_EXT: &'static str = ".lock";
const DEP_GRAPH_FILENAME: &'static str = "dep-graph.bin";
const WORK_PRODUCTS_FILENAME: &'static str = "work-products.bin";
const METADATA_HASHES_FILENAME: &'static str = "metadata.bin";

// We encode integers using the following base, so they are shorter than decimal
// or hexadecimal numbers (we want short file and directory names). Since these
// numbers will be used in file names, we choose an encoding that is not
// case-sensitive (as opposed to base64, for example).
const INT_ENCODE_BASE: u64 = 36;
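The effect of choosing base 36 can be shown with a standalone sketch. The real implementation is `base_n::encode` in `rustc_data_structures`; the version below is only illustrative of why the resulting file names stay short and case-insensitive.

```rust
// Illustrative base-36 encoding (lowercase digits only, so the result is
// safe for case-insensitive file systems). Not the compiler's own helper.
fn encode_base36(mut n: u64) -> String {
    const DIGITS: &[u8; 36] = b"0123456789abcdefghijklmnopqrstuvwxyz";
    let mut out = Vec::new();
    loop {
        out.push(DIGITS[(n % 36) as usize]);
        n /= 36;
        if n == 0 {
            break;
        }
    }
    out.reverse();
    String::from_utf8(out).unwrap()
}
```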
pub fn dep_graph_path(sess: &Session) -> PathBuf {
    in_incr_comp_dir_sess(sess, DEP_GRAPH_FILENAME)
}

pub fn work_products_path(sess: &Session) -> PathBuf {
    in_incr_comp_dir_sess(sess, WORK_PRODUCTS_FILENAME)
}

pub fn metadata_hash_export_path(sess: &Session) -> PathBuf {
    in_incr_comp_dir_sess(sess, METADATA_HASHES_FILENAME)
}

pub fn metadata_hash_import_path(import_session_dir: &Path) -> PathBuf {
    import_session_dir.join(METADATA_HASHES_FILENAME)
}

pub fn lock_file_path(session_dir: &Path) -> PathBuf {
    let crate_dir = session_dir.parent().unwrap();

    let directory_name = session_dir.file_name().unwrap().to_string_lossy();
    assert_no_characters_lost(&directory_name);

    let dash_indices: Vec<_> = directory_name.match_indices("-")
                                             .map(|(idx, _)| idx)
                                             .collect();
    if dash_indices.len() != 3 {
        bug!("Encountered incremental compilation session directory with \
              malformed name: {}",
             session_dir.display())
    }

    crate_dir.join(&directory_name[0 .. dash_indices[2]])
             .with_extension(&LOCK_FILE_EXT[1..])
}

pub fn in_incr_comp_dir_sess(sess: &Session, file_name: &str) -> PathBuf {
    in_incr_comp_dir(&sess.incr_comp_session_dir(), file_name)
}

pub fn in_incr_comp_dir(incr_comp_session_dir: &Path, file_name: &str) -> PathBuf {
    incr_comp_session_dir.join(file_name)
}
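At the string level, `lock_file_path` drops everything after the third dash of the directory name and appends the `.lock` extension, so a working directory and its finalized successor map to the same lock file. The standalone helper below is hypothetical and only illustrates that mapping:

```rust
// Hypothetical sketch of lock_file_path's name mapping (string level only):
//   "s-{timestamp}-{random}-working" -> "s-{timestamp}-{random}.lock"
//   "s-{timestamp}-{random}-{svh}"   -> "s-{timestamp}-{random}.lock"
fn lock_file_name(session_dir_name: &str) -> Option<String> {
    let dash_indices: Vec<usize> = session_dir_name.match_indices('-')
                                                   .map(|(idx, _)| idx)
                                                   .collect();
    // Session directory names contain exactly three dashes.
    if dash_indices.len() != 3 {
        return None;
    }
    Some(format!("{}.lock", &session_dir_name[.. dash_indices[2]]))
}
```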
/// Allocates the private session directory. The boolean in the Ok() result
/// indicates whether we should try loading a dep graph from the successfully
/// initialized directory, or not.
/// The post-condition of this fn is that we have a valid incremental
/// compilation session directory, if the result is `Ok`. A valid session
/// directory is one that contains a locked lock file. It may or may not contain
/// a dep-graph and work products from a previous session.
/// If the call fails, the fn may leave behind an invalid session directory.
/// The garbage collection will take care of it.
pub fn prepare_session_directory(tcx: TyCtxt) -> Result<bool, ()> {
    debug!("prepare_session_directory");

    // {incr-comp-dir}/{crate-name-and-disambiguator}
    let crate_dir = crate_path_tcx(tcx, LOCAL_CRATE);
    debug!("crate-dir: {}", crate_dir.display());
    try!(create_dir(tcx.sess, &crate_dir, "crate"));

    let mut source_directories_already_tried = FxHashSet();

    loop {
        // Generate a session directory of the form:
        //
        // {incr-comp-dir}/{crate-name-and-disambiguator}/s-{timestamp}-{random}-working
        let session_dir = generate_session_dir_path(&crate_dir);
        debug!("session-dir: {}", session_dir.display());

        // Lock the new session directory. If this fails, return an
        // error without retrying
        let (directory_lock, lock_file_path) = try!(lock_directory(tcx.sess, &session_dir));

        // Now that we have the lock, we can actually create the session
        // directory
        try!(create_dir(tcx.sess, &session_dir, "session"));

        // Find a suitable source directory to copy from. Ignore those that we
        // have already tried before.
        let source_directory = find_source_directory(&crate_dir,
                                                     &source_directories_already_tried);

        let source_directory = if let Some(dir) = source_directory {
            dir
        } else {
            // There's nowhere to copy from, we're done
            debug!("no source directory found. Continuing with empty session \
                    directory.");

            tcx.sess.init_incr_comp_session(session_dir, directory_lock);
            return Ok(false)
        };

        debug!("attempting to copy data from source: {}",
               source_directory.display());

        let print_file_copy_stats = tcx.sess.opts.debugging_opts.incremental_info;

        // Try copying over all files from the source directory
        if let Ok(allows_links) = copy_files(&session_dir, &source_directory,
                                             print_file_copy_stats) {
            debug!("successfully copied data from: {}",
                   source_directory.display());

            if !allows_links {
                tcx.sess.warn(&format!("Hard linking files in the incremental \
                                        compilation cache failed. Copying files \
                                        instead. Consider moving the cache \
                                        directory to a file system which supports \
                                        hard linking in session dir `{}`",
                                        session_dir.display())
                    );
            }

            tcx.sess.init_incr_comp_session(session_dir, directory_lock);
            return Ok(true)
        } else {
            debug!("copying failed - trying next directory");

            // Something went wrong while trying to copy/link files from the
            // source directory. Try again with a different one.
            source_directories_already_tried.insert(source_directory);

            // Try to remove the session directory we just allocated. We don't
            // know if there's any garbage in it from the failed copy action.
            if let Err(err) = safe_remove_dir_all(&session_dir) {
                tcx.sess.warn(&format!("Failed to delete partly initialized \
                                        session dir `{}`: {}",
                                       session_dir.display(),
                                       err));
            }

            delete_session_dir_lock_file(tcx.sess, &lock_file_path);
            mem::drop(directory_lock);
        }
    }
}
/// This function finalizes and thus 'publishes' the session directory by
/// renaming it to `s-{timestamp}-{random}-{svh}` and releasing the file lock.
/// If there have been compilation errors, however, this function will just
/// delete the presumably invalid session directory.
pub fn finalize_session_directory(sess: &Session, svh: Svh) {
    if sess.opts.incremental.is_none() {
        return;
    }

    let incr_comp_session_dir: PathBuf = sess.incr_comp_session_dir().clone();

    if sess.has_errors() {
        // If there have been any errors during compilation, we don't want to
        // publish this session directory. Rather, we'll just delete it.

        debug!("finalize_session_directory() - invalidating session directory: {}",
               incr_comp_session_dir.display());

        if let Err(err) = safe_remove_dir_all(&*incr_comp_session_dir) {
            sess.warn(&format!("Error deleting incremental compilation \
                                session directory `{}`: {}",
                               incr_comp_session_dir.display(),
                               err));
        }

        let lock_file_path = lock_file_path(&*incr_comp_session_dir);
        delete_session_dir_lock_file(sess, &lock_file_path);
        sess.mark_incr_comp_session_as_invalid();
        return;
    }

    debug!("finalize_session_directory() - session directory: {}",
           incr_comp_session_dir.display());

    let old_sub_dir_name = incr_comp_session_dir.file_name()
                                                .unwrap()
                                                .to_string_lossy();
    assert_no_characters_lost(&old_sub_dir_name);

    // Keep the 's-{timestamp}-{random-number}' prefix, but replace the
    // '-working' part with the SVH of the crate
    let dash_indices: Vec<_> = old_sub_dir_name.match_indices("-")
                                               .map(|(idx, _)| idx)
                                               .collect();
    if dash_indices.len() != 3 {
        bug!("Encountered incremental compilation session directory with \
              malformed name: {}",
             incr_comp_session_dir.display())
    }

    // State: "s-{timestamp}-{random-number}-"
    let mut new_sub_dir_name = String::from(&old_sub_dir_name[.. dash_indices[2] + 1]);

    // Append the SVH
    base_n::push_str(svh.as_u64(), INT_ENCODE_BASE, &mut new_sub_dir_name);

    // Create the full path
    let new_path = incr_comp_session_dir.parent().unwrap().join(new_sub_dir_name);
    debug!("finalize_session_directory() - new path: {}", new_path.display());

    match std_fs::rename(&*incr_comp_session_dir, &new_path) {
        Ok(_) => {
            debug!("finalize_session_directory() - directory renamed successfully");

            // This unlocks the directory
            sess.finalize_incr_comp_session(new_path);
        }
        Err(e) => {
            // Warn about the error. However, no need to abort compilation now.
            sess.warn(&format!("Error finalizing incremental compilation \
                                session directory `{}`: {}",
                               incr_comp_session_dir.display(),
                               e));

            debug!("finalize_session_directory() - error, marking as invalid");
            // Drop the file lock, so we can garbage collect
            sess.mark_incr_comp_session_as_invalid();
        }
    }

    let _ = garbage_collect_session_directories(sess);
}
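At the string level, the finalization rename above amounts to swapping the `-working` suffix of the session directory name for the already-encoded SVH. A hedged standalone sketch (the helper name is made up, and the SVH string here stands in for the base-36 encoded hash):

```rust
// Hypothetical sketch of the finalization rename performed above:
// "s-{timestamp}-{random}-working" -> "s-{timestamp}-{random}-{svh}".
fn finalized_name(working_name: &str, svh_str: &str) -> Option<String> {
    if !working_name.ends_with("-working") {
        // Not a working directory name; nothing to finalize.
        return None;
    }
    let stem = &working_name[.. working_name.len() - "-working".len()];
    Some(format!("{}-{}", stem, svh_str))
}
```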
pub fn delete_all_session_dir_contents(sess: &Session) -> io::Result<()> {
    let sess_dir_iterator = sess.incr_comp_session_dir().read_dir()?;
    for entry in sess_dir_iterator {
        let entry = entry?;
        safe_remove_file(&entry.path())?
    }
    Ok(())
}
fn copy_files(target_dir: &Path,
              source_dir: &Path,
              print_stats_on_success: bool)
              -> Result<bool, ()> {
    // We acquire a shared lock on the lock file of the directory, so that
    // nobody deletes it out from under us while we are reading from it.
    let lock_file_path = lock_file_path(source_dir);
    let _lock = if let Ok(lock) = flock::Lock::new(&lock_file_path,
                                                   false,   // don't wait,
                                                   false,   // don't create
                                                   false) { // not exclusive
        lock
    } else {
        // Could not acquire the lock, don't try to copy from here
        return Err(())
    };

    let source_dir_iterator = match source_dir.read_dir() {
        Ok(it) => it,
        Err(_) => return Err(())
    };

    let mut files_linked = 0;
    let mut files_copied = 0;

    for entry in source_dir_iterator {
        match entry {
            Ok(entry) => {
                let file_name = entry.file_name();

                let target_file_path = target_dir.join(file_name);
                let source_path = entry.path();

                debug!("copying into session dir: {}", source_path.display());
                match fs_util::link_or_copy(source_path, target_file_path) {
                    Ok(fs_util::LinkOrCopy::Link) => {
                        files_linked += 1
                    }
                    Ok(fs_util::LinkOrCopy::Copy) => {
                        files_copied += 1
                    }
                    Err(_) => return Err(())
                }
            }
            Err(_) => return Err(())
        }
    }

    if print_stats_on_success {
        println!("incr. comp. session directory: {} files hard-linked", files_linked);
        println!("incr. comp. session directory: {} files copied", files_copied);
    }

    Ok(files_linked > 0 || files_copied == 0)
}
/// Generates a unique directory path of the form:
/// {crate_dir}/s-{timestamp}-{random-number}-working
fn generate_session_dir_path(crate_dir: &Path) -> PathBuf {
    let timestamp = timestamp_to_string(SystemTime::now());
    debug!("generate_session_dir_path: timestamp = {}", timestamp);
    let random_number = thread_rng().next_u32();
    debug!("generate_session_dir_path: random_number = {}", random_number);

    let directory_name = format!("s-{}-{}-working",
                                 timestamp,
                                 base_n::encode(random_number as u64,
                                                INT_ENCODE_BASE));
    debug!("generate_session_dir_path: directory_name = {}", directory_name);
    let directory_path = crate_dir.join(directory_name);
    debug!("generate_session_dir_path: directory_path = {}", directory_path.display());
    directory_path
}
fn create_dir(sess: &Session, path: &Path, dir_tag: &str) -> Result<(),()> {
    match fs_util::create_dir_racy(path) {
        Ok(()) => {
            debug!("{} directory created successfully", dir_tag);
            Ok(())
        }
        Err(err) => {
            sess.err(&format!("Could not create incremental compilation {} \
                               directory `{}`: {}",
                              dir_tag,
                              path.display(),
                              err));
            Err(())
        }
    }
}
/// Allocates the lock file and locks it.
fn lock_directory(sess: &Session,
                  session_dir: &Path)
                  -> Result<(flock::Lock, PathBuf), ()> {
    let lock_file_path = lock_file_path(session_dir);
    debug!("lock_directory() - lock_file: {}", lock_file_path.display());

    match flock::Lock::new(&lock_file_path,
                           false, // don't wait
                           true,  // create the lock file
                           true) { // the lock should be exclusive
        Ok(lock) => Ok((lock, lock_file_path)),
        Err(err) => {
            sess.err(&format!("incremental compilation: could not create \
                               session directory lock file: {}", err));
            Err(())
        }
    }
}
fn delete_session_dir_lock_file(sess: &Session,
                                lock_file_path: &Path) {
    if let Err(err) = safe_remove_file(&lock_file_path) {
        sess.warn(&format!("Error deleting lock file for incremental \
                            compilation session directory `{}`: {}",
                           lock_file_path.display(),
                           err));
    }
}
/// Finds the most recent published session directory that is not in the
/// ignore-list.
fn find_source_directory(crate_dir: &Path,
                         source_directories_already_tried: &FxHashSet<PathBuf>)
                         -> Option<PathBuf> {
    let iter = crate_dir.read_dir()
                        .unwrap() // FIXME
                        .filter_map(|e| e.ok().map(|e| e.path()));

    find_source_directory_in_iter(iter, source_directories_already_tried)
}

fn find_source_directory_in_iter<I>(iter: I,
                                    source_directories_already_tried: &FxHashSet<PathBuf>)
                                    -> Option<PathBuf>
    where I: Iterator<Item=PathBuf>
{
    let mut best_candidate = (UNIX_EPOCH, None);

    for session_dir in iter {
        debug!("find_source_directory_in_iter - inspecting `{}`",
               session_dir.display());

        let directory_name = session_dir.file_name().unwrap().to_string_lossy();
        assert_no_characters_lost(&directory_name);

        if source_directories_already_tried.contains(&session_dir) ||
           !is_session_directory(&directory_name) ||
           !is_finalized(&directory_name) {
            debug!("find_source_directory_in_iter - ignoring.");
            continue
        }

        let timestamp = extract_timestamp_from_session_dir(&directory_name)
            .unwrap_or_else(|_| {
                bug!("unexpected incr-comp session dir: {}", session_dir.display())
            });

        if timestamp > best_candidate.0 {
            best_candidate = (timestamp, Some(session_dir.clone()));
        }
    }

    best_candidate.1
}
fn is_finalized(directory_name: &str) -> bool {
    !directory_name.ends_with("-working")
}

fn is_session_directory(directory_name: &str) -> bool {
    directory_name.starts_with("s-") &&
    !directory_name.ends_with(LOCK_FILE_EXT)
}

fn is_session_directory_lock_file(file_name: &str) -> bool {
    file_name.starts_with("s-") && file_name.ends_with(LOCK_FILE_EXT)
}
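Taken together, these predicates partition crate-directory entries into four categories. A small, self-contained restatement for illustration (the real code applies the predicates directly; `".lock"` stands in for `LOCK_FILE_EXT`):

```rust
// Illustrative classification of crate-directory entries, mirroring the
// predicates above. Anything unrecognized is left alone by the collector.
fn classify(entry_name: &str) -> &'static str {
    if entry_name.starts_with("s-") && entry_name.ends_with(".lock") {
        "session directory lock file"
    } else if entry_name.starts_with("s-") {
        if entry_name.ends_with("-working") {
            "session directory (in progress)"
        } else {
            "session directory (finalized)"
        }
    } else {
        "unrelated entry"
    }
}
```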
fn extract_timestamp_from_session_dir(directory_name: &str)
                                      -> Result<SystemTime, ()> {
    if !is_session_directory(directory_name) {
        return Err(())
    }

    let dash_indices: Vec<_> = directory_name.match_indices("-")
                                             .map(|(idx, _)| idx)
                                             .collect();
    if dash_indices.len() != 3 {
        return Err(())
    }

    string_to_timestamp(&directory_name[dash_indices[0]+1 .. dash_indices[1]])
}
fn timestamp_to_string(timestamp: SystemTime) -> String {
    let duration = timestamp.duration_since(UNIX_EPOCH).unwrap();
    let micros = duration.as_secs() * 1_000_000 +
                 (duration.subsec_nanos() as u64) / 1000;
    base_n::encode(micros, INT_ENCODE_BASE)
}

fn string_to_timestamp(s: &str) -> Result<SystemTime, ()> {
    let micros_since_unix_epoch = u64::from_str_radix(s, 36);

    if micros_since_unix_epoch.is_err() {
        return Err(())
    }

    let micros_since_unix_epoch = micros_since_unix_epoch.unwrap();

    let duration = Duration::new(micros_since_unix_epoch / 1_000_000,
                                 1000 * (micros_since_unix_epoch % 1_000_000) as u32);
    Ok(UNIX_EPOCH + duration)
}
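The two functions above are inverses up to microsecond precision. The std-only sketch below shows just the microsecond round trip, without the base-36 step (hypothetical helper names, for illustration):

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

// Microseconds since the Unix epoch, truncating sub-microsecond precision
// (mirrors timestamp_to_string minus the base-36 encoding).
fn system_time_to_micros(t: SystemTime) -> u64 {
    let d = t.duration_since(UNIX_EPOCH).unwrap();
    d.as_secs() * 1_000_000 + (d.subsec_nanos() as u64) / 1_000
}

// Inverse direction (mirrors string_to_timestamp after decoding).
fn micros_to_system_time(micros: u64) -> SystemTime {
    UNIX_EPOCH + Duration::new(micros / 1_000_000,
                               ((micros % 1_000_000) * 1_000) as u32)
}
```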
fn crate_path_tcx(tcx: TyCtxt, cnum: CrateNum) -> PathBuf {
    crate_path(tcx.sess, &tcx.crate_name(cnum), &tcx.crate_disambiguator(cnum))
}
/// Finds the session directory containing the correct metadata hashes file for
/// the given crate. In order to do that it has to compute the crate directory
/// of the given crate, and in there, look for the session directory with the
/// correct SVH in it.
/// Note that we have to match on the exact SVH here, not just the
/// crate's (name, disambiguator) pair. The metadata hashes are only valid for
/// the exact version of the binary we are reading from now (i.e. the hashes
/// are part of the dependency graph of a specific compilation session).
pub fn find_metadata_hashes_for(tcx: TyCtxt, cnum: CrateNum) -> Option<PathBuf> {
    let crate_directory = crate_path_tcx(tcx, cnum);

    if !crate_directory.exists() {
        return None
    }

    let dir_entries = match crate_directory.read_dir() {
        Ok(dir_entries) => dir_entries,
        Err(e) => {
            tcx.sess
               .err(&format!("incremental compilation: Could not read crate directory `{}`: {}",
                             crate_directory.display(), e));
            return None
        }
    };

    let target_svh = tcx.sess.cstore.crate_hash(cnum);
    let target_svh = base_n::encode(target_svh.as_u64(), INT_ENCODE_BASE);

    let sub_dir = find_metadata_hashes_iter(&target_svh, dir_entries.filter_map(|e| {
        e.ok().map(|e| e.file_name().to_string_lossy().into_owned())
    }));

    sub_dir.map(|sub_dir_name| crate_directory.join(&sub_dir_name))
}
fn find_metadata_hashes_iter<'a, I>(target_svh: &str, iter: I) -> Option<OsString>
    where I: Iterator<Item=String>
{
    for sub_dir_name in iter {
        if !is_session_directory(&sub_dir_name) || !is_finalized(&sub_dir_name) {
            // This is not a usable session directory
            continue
        }

        let is_match = if let Some(last_dash_pos) = sub_dir_name.rfind("-") {
            let candidate_svh = &sub_dir_name[last_dash_pos + 1 .. ];
            target_svh == candidate_svh
        } else {
            // some kind of invalid directory name
            continue
        };

        if is_match {
            return Some(OsString::from(sub_dir_name))
        }
    }

    None
}
fn crate_path(sess: &Session,
              crate_name: &str,
              crate_disambiguator: &str)
              -> PathBuf {
    use std::hash::{Hasher, Hash};
    use std::collections::hash_map::DefaultHasher;

    let incr_dir = sess.opts.incremental.as_ref().unwrap().clone();

    // The full crate disambiguator is really long. A hash of it should be
    // sufficient.
    let mut hasher = DefaultHasher::new();
    crate_disambiguator.hash(&mut hasher);

    let crate_name = format!("{}-{}",
                             crate_name,
                             base_n::encode(hasher.finish(), INT_ENCODE_BASE));
    incr_dir.join(crate_name)
}
fn assert_no_characters_lost(s: &str) {
    if s.contains('\u{FFFD}') {
        bug!("Could not losslessly convert '{}'.", s)
    }
}

fn is_old_enough_to_be_collected(timestamp: SystemTime) -> bool {
    timestamp < SystemTime::now() - Duration::from_secs(10)
}
pub fn garbage_collect_session_directories(sess: &Session) -> io::Result<()> {
    debug!("garbage_collect_session_directories() - begin");

    let session_directory = sess.incr_comp_session_dir();
    debug!("garbage_collect_session_directories() - session directory: {}",
           session_directory.display());

    let crate_directory = session_directory.parent().unwrap();
    debug!("garbage_collect_session_directories() - crate directory: {}",
           crate_directory.display());

    // First do a pass over the crate directory, collecting lock files and
    // session directories
    let mut session_directories = FxHashSet();
    let mut lock_files = FxHashSet();

    for dir_entry in try!(crate_directory.read_dir()) {
        let dir_entry = match dir_entry {
            Ok(dir_entry) => dir_entry,
            _ => {
                // This entry could not be read; just ignore it
                continue
            }
        };

        let entry_name = dir_entry.file_name();
        let entry_name = entry_name.to_string_lossy();

        if is_session_directory_lock_file(&entry_name) {
            assert_no_characters_lost(&entry_name);
            lock_files.insert(entry_name.into_owned());
        } else if is_session_directory(&entry_name) {
            assert_no_characters_lost(&entry_name);
            session_directories.insert(entry_name.into_owned());
        } else {
            // This is something we don't know, leave it alone
        }
    }

    // Now map from lock files to session directories
    let lock_file_to_session_dir: FxHashMap<String, Option<String>> =
        lock_files.into_iter()
                  .map(|lock_file_name| {
                      assert!(lock_file_name.ends_with(LOCK_FILE_EXT));
                      let dir_prefix_end = lock_file_name.len() - LOCK_FILE_EXT.len();
                      let session_dir = {
                          let dir_prefix = &lock_file_name[0 .. dir_prefix_end];
                          session_directories.iter()
                                             .find(|dir_name| dir_name.starts_with(dir_prefix))
                      };
                      (lock_file_name, session_dir.map(String::clone))
                  })
                  .collect();

    // Delete all lock files that don't have an associated directory. They must
    // be some kind of leftover
    for (lock_file_name, directory_name) in &lock_file_to_session_dir {
        if directory_name.is_none() {
            let timestamp = match extract_timestamp_from_session_dir(lock_file_name) {
                Ok(timestamp) => timestamp,
                Err(()) => {
                    debug!("Found lock-file with malformed timestamp: {}",
                           crate_directory.join(&lock_file_name).display());
                    // Ignore it
                    continue
                }
            };

            let lock_file_path = crate_directory.join(&**lock_file_name);

            if is_old_enough_to_be_collected(timestamp) {
                debug!("garbage_collect_session_directories() - deleting \
                        garbage lock file: {}", lock_file_path.display());
                delete_session_dir_lock_file(sess, &lock_file_path);
            } else {
                debug!("garbage_collect_session_directories() - lock file with \
                        no session dir not old enough to be collected: {}",
                       lock_file_path.display());
            }
        }
    }

    // Filter out `None` directories
    let lock_file_to_session_dir: FxHashMap<String, String> =
        lock_file_to_session_dir.into_iter()
                                .filter_map(|(lock_file_name, directory_name)| {
                                    directory_name.map(|n| (lock_file_name, n))
                                })
                                .collect();

    let mut deletion_candidates = vec![];
    let mut definitely_delete = vec![];

    for (lock_file_name, directory_name) in &lock_file_to_session_dir {
        debug!("garbage_collect_session_directories() - inspecting: {}",
               directory_name);

        let timestamp = match extract_timestamp_from_session_dir(directory_name) {
            Ok(timestamp) => timestamp,
            Err(()) => {
                debug!("Found session-dir with malformed timestamp: {}",
                       crate_directory.join(directory_name).display());
                // Ignore it
                continue
            }
        };

        if is_finalized(directory_name) {
            let lock_file_path = crate_directory.join(lock_file_name);
            match flock::Lock::new(&lock_file_path,
                                   false,  // don't wait
                                   false,  // don't create the lock-file
                                   true) { // get an exclusive lock
                Ok(lock) => {
                    debug!("garbage_collect_session_directories() - \
                            successfully acquired lock");
                    debug!("garbage_collect_session_directories() - adding \
                            deletion candidate: {}", directory_name);

                    // Note that we are holding on to the lock
                    deletion_candidates.push((timestamp,
                                              crate_directory.join(directory_name),
                                              Some(lock)));
                }
                Err(_) => {
                    debug!("garbage_collect_session_directories() - \
                            not collecting, still in use");
                }
            }
        } else if is_old_enough_to_be_collected(timestamp) {
            // When cleaning out "-working" session directories, i.e.
            // session directories that might still be in use by another
            // compiler instance, we only look at directories that are
            // at least ten seconds old. This is supposed to reduce the
            // chance of deleting a directory in the time window where
            // the process has allocated the directory but has not yet
            // acquired the file-lock on it.

            // Try to acquire the directory lock. If we can't, it
            // means that the owning process is still alive and we
            // leave this directory alone.
            let lock_file_path = crate_directory.join(lock_file_name);
            match flock::Lock::new(&lock_file_path,
                                   false,  // don't wait
                                   false,  // don't create the lock-file
                                   true) { // get an exclusive lock
                Ok(lock) => {
                    debug!("garbage_collect_session_directories() - \
                            successfully acquired lock");

                    // Note that we are holding on to the lock
                    definitely_delete.push((crate_directory.join(directory_name),
                                            Some(lock)));
                }
                Err(_) => {
                    debug!("garbage_collect_session_directories() - \
                            not collecting, still in use");
                }
            }
        } else {
            debug!("garbage_collect_session_directories() - not finalized, not \
                    old enough");
        }
    }

    // Delete all but the most recent of the candidates
    for (path, lock) in all_except_most_recent(deletion_candidates) {
        debug!("garbage_collect_session_directories() - deleting `{}`",
               path.display());

        if let Err(err) = safe_remove_dir_all(&path) {
            sess.warn(&format!("Failed to garbage collect finalized incremental \
                                compilation session directory `{}`: {}",
                               path.display(),
                               err));
        } else {
            delete_session_dir_lock_file(sess, &lock_file_path(&path));
        }

        // Let's make it explicit that the file lock is released at this point,
        // or rather, that we held on to it until here
        mem::drop(lock);
    }

    for (path, lock) in definitely_delete {
        debug!("garbage_collect_session_directories() - deleting `{}`",
               path.display());

        if let Err(err) = safe_remove_dir_all(&path) {
            sess.warn(&format!("Failed to garbage collect incremental \
                                compilation session directory `{}`: {}",
                               path.display(),
                               err));
        } else {
            delete_session_dir_lock_file(sess, &lock_file_path(&path));
        }

        // Let's make it explicit that the file lock is released at this point,
        // or rather, that we held on to it until here
        mem::drop(lock);
    }

    Ok(())
}
fn all_except_most_recent(deletion_candidates: Vec<(SystemTime, PathBuf, Option<flock::Lock>)>)
                          -> FxHashMap<PathBuf, Option<flock::Lock>> {
    let most_recent = deletion_candidates.iter()
                                         .map(|&(timestamp, ..)| timestamp)
                                         .max();

    if let Some(most_recent) = most_recent {
        deletion_candidates.into_iter()
                           .filter(|&(timestamp, ..)| timestamp != most_recent)
                           .map(|(_, path, lock)| (path, lock))
                           .collect()
    } else {
        FxHashMap()
    }
}
/// Since paths of artifacts within session directories can get quite long, we
/// need to support deleting files with very long paths. The regular
/// WinApi functions only support paths up to 260 characters, however. In order
/// to circumvent this limitation, we canonicalize the path of the directory
/// before passing it to std::fs::remove_dir_all(). This will convert the path
/// into the '\\?\' format, which supports much longer paths.
fn safe_remove_dir_all(p: &Path) -> io::Result<()> {
    if p.exists() {
        let canonicalized = try!(p.canonicalize());
        std_fs::remove_dir_all(canonicalized)
    } else {
        Ok(())
    }
}

fn safe_remove_file(p: &Path) -> io::Result<()> {
    if p.exists() {
        let canonicalized = try!(p.canonicalize());
        std_fs::remove_file(canonicalized)
    } else {
        Ok(())
    }
}
#[test]
fn test_all_except_most_recent() {
    assert_eq!(all_except_most_recent(
        vec![
            (UNIX_EPOCH + Duration::new(4, 0), PathBuf::from("4"), None),
            (UNIX_EPOCH + Duration::new(1, 0), PathBuf::from("1"), None),
            (UNIX_EPOCH + Duration::new(5, 0), PathBuf::from("5"), None),
            (UNIX_EPOCH + Duration::new(3, 0), PathBuf::from("3"), None),
            (UNIX_EPOCH + Duration::new(2, 0), PathBuf::from("2"), None),
        ]).keys().cloned().collect::<FxHashSet<PathBuf>>(),
        vec![
            PathBuf::from("1"),
            PathBuf::from("2"),
            PathBuf::from("3"),
            PathBuf::from("4"),
        ].into_iter().collect::<FxHashSet<PathBuf>>()
    );

    assert_eq!(all_except_most_recent(
        vec![
        ]).keys().cloned().collect::<FxHashSet<PathBuf>>(),
        FxHashSet()
    );
}
#[test]
fn test_timestamp_serialization() {
    for i in 0 .. 1_000u64 {
        let time = UNIX_EPOCH + Duration::new(i * 1_434_578, (i as u32) * 239_000);
        let s = timestamp_to_string(time);
        assert_eq!(Ok(time), string_to_timestamp(&s));
    }
}
#[test]
fn test_find_source_directory_in_iter() {
    let already_visited = FxHashSet();

    // Find newest
    assert_eq!(find_source_directory_in_iter(
        vec![PathBuf::from("crate-dir/s-3234-0000-svh"),
             PathBuf::from("crate-dir/s-2234-0000-svh"),
             PathBuf::from("crate-dir/s-1234-0000-svh")].into_iter(), &already_visited),
        Some(PathBuf::from("crate-dir/s-3234-0000-svh")));

    // Filter out "-working"
    assert_eq!(find_source_directory_in_iter(
        vec![PathBuf::from("crate-dir/s-3234-0000-working"),
             PathBuf::from("crate-dir/s-2234-0000-svh"),
             PathBuf::from("crate-dir/s-1234-0000-svh")].into_iter(), &already_visited),
        Some(PathBuf::from("crate-dir/s-2234-0000-svh")));

    // Handle empty
    assert_eq!(find_source_directory_in_iter(vec![].into_iter(), &already_visited),
               None);

    // Handle only working
    assert_eq!(find_source_directory_in_iter(
        vec![PathBuf::from("crate-dir/s-3234-0000-working"),
             PathBuf::from("crate-dir/s-2234-0000-working"),
             PathBuf::from("crate-dir/s-1234-0000-working")].into_iter(), &already_visited),
        None);
}
#[test]
fn test_find_metadata_hashes_iter() {
    assert_eq!(find_metadata_hashes_iter("testsvh2",
        vec![
            String::from("s-timestamp1-testsvh1"),
            String::from("s-timestamp2-testsvh2"),
            String::from("s-timestamp3-testsvh3"),
        ].into_iter()),
        Some(OsString::from("s-timestamp2-testsvh2"))
    );

    assert_eq!(find_metadata_hashes_iter("testsvh2",
        vec![
            String::from("s-timestamp1-testsvh1"),
            String::from("s-timestamp2-testsvh2"),
            String::from("invalid-name"),
        ].into_iter()),
        Some(OsString::from("s-timestamp2-testsvh2"))
    );

    assert_eq!(find_metadata_hashes_iter("testsvh2",
        vec![
            String::from("s-timestamp1-testsvh1"),
            String::from("s-timestamp2-testsvh2-working"),
            String::from("s-timestamp3-testsvh3"),
        ].into_iter()),
        None
    );

    assert_eq!(find_metadata_hashes_iter("testsvh1",
        vec![
            String::from("s-timestamp1-random1-working"),
            String::from("s-timestamp2-random2-working"),
            String::from("s-timestamp3-random3-working"),
        ].into_iter()),
        None
    );

    assert_eq!(find_metadata_hashes_iter("testsvh2",
        vec![
            String::from("timestamp1-testsvh2"),
            String::from("timestamp2-testsvh2"),
            String::from("timestamp3-testsvh2"),
        ].into_iter()),
        None
    );
}