Although programs should use unwinding sparingly, there's *a lot* of code that
*can* panic. If you unwrap a None, index out of bounds, or divide by 0, your
program *will* panic. On debug builds, *every* arithmetic operation can panic
if it overflows. Unless you are very careful and tightly control what code runs,
pretty much everything can unwind, and you need to be ready for it.
Being ready for unwinding is often referred to as *exception safety*
in the broader programming world. In Rust, there are two levels of exception
safety that one may concern themselves with:
* In unsafe code, we *must* be exception safe to the point of not violating
  memory safety. We'll call this *minimal* exception safety.

* In safe code, it is *good* to be exception safe to the point of your program
  doing the right thing. We'll call this *maximal* exception safety.
As is the case in many places in Rust, Unsafe code must be ready to deal with
bad Safe code when it comes to unwinding. Code that transiently creates
unsound states must be careful that a panic does not cause that state to be
used. Generally this means ensuring that only non-panicking code is run while
these states exist, or making a guard that cleans up the state in the case of
a panic. This does not necessarily mean that the state a panic witnesses is a
fully *coherent* state. We need only guarantee that it's a *safe* state.
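The guard idea can be sketched in miniature. The names below (`Guard`, `cleanup_runs_even_on_panic`) are illustrative, not from std: a value whose destructor performs the cleanup will run that cleanup whether the scope is exited normally or by a panic unwinding through it.

```rust
use std::panic;
use std::sync::atomic::{AtomicBool, Ordering};

/// Illustrative drop-guard: runs `cleanup` on drop, so the cleanup happens
/// whether the enclosing scope returns normally or unwinds from a panic.
struct Guard<F: FnMut()> {
    cleanup: F,
}

impl<F: FnMut()> Drop for Guard<F> {
    fn drop(&mut self) {
        (self.cleanup)();
    }
}

fn cleanup_runs_even_on_panic() -> bool {
    static CLEANED: AtomicBool = AtomicBool::new(false);
    let result = panic::catch_unwind(|| {
        let _guard = Guard {
            cleanup: || CLEANED.store(true, Ordering::SeqCst),
        };
        // Stand-in for arbitrary caller-provided code that blows up.
        panic!("user-provided code panicked");
    });
    // The panic escaped, but the guard's destructor still ran.
    result.is_err() && CLEANED.load(Ordering::SeqCst)
}
```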
Most Unsafe code is leaf-like, and therefore fairly easy to make exception-safe.
It controls all the code that runs, and most of that code can't panic. However,
it is not uncommon for Unsafe code to work with arrays of temporarily
uninitialized data while repeatedly invoking caller-provided code. Such code
needs to be careful and consider exception safety.
## Vec::push_all

`Vec::push_all` is a temporary hack to get extending a Vec by a slice reliably
efficient without specialization. Here's a simple implementation:
```rust,ignore
impl<T: Clone> Vec<T> {
    fn push_all(&mut self, to_push: &[T]) {
        self.reserve(to_push.len());
        unsafe {
            // can't overflow because we just reserved this
            self.set_len(self.len() + to_push.len());

            for (i, x) in to_push.iter().enumerate() {
                self.ptr().offset(i as isize).write(x.clone());
            }
        }
    }
}
```
We bypass `push` in order to avoid redundant capacity and `len` checks on the
Vec that we definitely know has capacity. The logic is totally correct, except
there's a subtle problem with our code: it's not exception-safe! `set_len`,
`offset`, and `write` are all fine, but *clone* is the panic bomb we over-looked.
Clone is completely out of our control, and is totally free to panic. If it
does, our function will exit early with the length of the Vec set too large. If
the Vec is looked at or dropped, uninitialized memory will be read!
The fix in this case is fairly simple. If we want to guarantee that the values
we *did* clone are dropped, we can set the len *in* the loop. If we just want to
guarantee that uninitialized memory can't be observed, we can set the len after
the loop.
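As a sketch of the first fix, written as a standalone function (since we can't reach `Vec`'s private internals from outside std, `as_mut_ptr` and `set_len` stand in for them here): bumping `len` only after each successful clone means a panicking `Clone` leaves the Vec in a safe state, with every element cloned so far still owned and droppable.

```rust
use std::ptr;

/// Exception-safe variant: `len` is updated after each successful clone, so
/// if `clone` panics, the Vec's length only ever covers fully-initialized
/// elements, and everything cloned so far is dropped normally.
fn push_all<T: Clone>(vec: &mut Vec<T>, to_push: &[T]) {
    vec.reserve(to_push.len());
    for x in to_push {
        let len = vec.len();
        unsafe {
            // `clone` may panic, but we haven't grown `len` yet, so the
            // Vec never claims to own an uninitialized slot.
            ptr::write(vec.as_mut_ptr().add(len), x.clone());
            // Only now does the new element become visible.
            vec.set_len(len + 1);
        }
    }
}
```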
## BinaryHeap::sift_up
Bubbling an element up a heap is a bit more complicated than extending a Vec.
The pseudocode is as follows:
```text
bubble_up(heap, index):
    while index != 0 && heap[index] < heap[parent(index)]:
        heap.swap(index, parent(index))
        index = parent(index)
```
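For concreteness, here is a sketch of that literal transcription in safe Rust; the slice-based heap and the `parent` index helper are assumptions of this sketch, not part of the text above.

```rust
/// Standard binary-heap parent index.
fn parent(index: usize) -> usize {
    (index - 1) / 2
}

/// Safe, literal transcription: swaps the new element up one level at a
/// time, exactly as the pseudocode does.
fn bubble_up<T: Ord>(heap: &mut [T], mut index: usize) {
    while index != 0 && heap[index] < heap[parent(index)] {
        heap.swap(index, parent(index));
        index = parent(index);
    }
}
```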
A literal transcription of this code to Rust is totally fine, but has an annoying
performance characteristic: the `self` element is swapped over and over again
uselessly. We would *rather* have the following:
```text
bubble_up(heap, index):
    let elem = heap[index]
    while index != 0 && elem < heap[parent(index)]:
        heap[index] = heap[parent(index)]
        index = parent(index)
    heap[index] = elem
```
This code ensures that each element is copied as little as possible (it is in
fact necessary that elem be copied twice in general). However it now exposes
some exception safety trouble! At all times, there exist two copies of one
value. If we panic in this function something will be double-dropped.
Unfortunately, we also don't have full control of the code: that comparison is
user-defined!
Unlike Vec, the fix isn't as easy here. One option is to break the user-defined
code and the unsafe code into two separate phases:
```text
bubble_up(heap, index):
    let end_index = index;
    while end_index != 0 && heap[index] < heap[parent(end_index)]:
        end_index = parent(end_index)

    let elem = heap[index]
    while index != end_index:
        heap[index] = heap[parent(index)]
        index = parent(index)
    heap[index] = elem
```
If the user-defined code blows up, that's no problem anymore, because we haven't
actually touched the state of the heap yet. Once we do start messing with the
heap, we're working with only data and functions that we trust, so there's no
concern of panics.
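The two-phase approach can be sketched in Rust as follows. The slice-based heap and `parent` helper are assumptions of this sketch; phase two uses raw pointer copies so that no user-defined code (no `clone`, no `cmp`) can run while the transient duplicate exists.

```rust
use std::ptr;

/// Standard binary-heap parent index.
fn parent(index: usize) -> usize {
    (index - 1) / 2
}

/// Two-phase sift-up: user-defined comparisons first, trusted moves second.
fn bubble_up<T: Ord>(heap: &mut [T], mut index: usize) {
    // Phase 1: only comparisons run here. If one panics, the heap's
    // contents haven't been touched, so the state is trivially safe.
    let mut end_index = index;
    while end_index != 0 && heap[index] < heap[parent(end_index)] {
        end_index = parent(end_index);
    }

    // Phase 2: only trusted code runs from here on. Shift ancestors down
    // and drop the element into its final slot; nothing here can panic.
    unsafe {
        let base = heap.as_mut_ptr();
        let elem = ptr::read(base.add(index));
        while index != end_index {
            ptr::copy_nonoverlapping(base.add(parent(index)), base.add(index), 1);
            index = parent(index);
        }
        ptr::write(base.add(index), elem);
    }
}
```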
Perhaps you're not happy with this design. Surely, it's cheating! And we have
to do the complex heap traversal *twice*! Alright, let's bite the bullet. Let's
intermix untrusted and unsafe code *for reals*.
If Rust had `try` and `finally` like in Java, we could do the following:
```text
bubble_up(heap, index):
    let elem = heap[index]
    try:
        while index != 0 && elem < heap[parent(index)]:
            heap[index] = heap[parent(index)]
            index = parent(index)
    finally:
        heap[index] = elem
```
The basic idea is simple: if the comparison panics, we just toss the loose
element in the logically uninitialized index and bail out. Anyone who observes
the heap will see a potentially *inconsistent* heap, but at least it won't
cause any double-drops! If the algorithm terminates normally, then this
operation happens to coincide precisely with how we finish up regardless.
Sadly, Rust has no such construct, so we're going to need to roll our own! The
way to do this is to store the algorithm's state in a separate struct with a
destructor for the "finally" logic. Whether we panic or not, that destructor
will run and clean up after us:
```rust,ignore
struct Hole<'a, T: 'a> {
    data: &'a mut [T],
    /// `elt` is always `Some` from new until drop.
    elt: Option<T>,
    pos: usize,
}

impl<'a, T> Hole<'a, T> {
    fn new(data: &'a mut [T], pos: usize) -> Self {
        unsafe {
            let elt = ptr::read(&data[pos]);
            Hole { data, elt: Some(elt), pos }
        }
    }

    fn pos(&self) -> usize { self.pos }

    fn removed(&self) -> &T { self.elt.as_ref().unwrap() }

    unsafe fn get(&self, index: usize) -> &T { &self.data[index] }

    unsafe fn move_to(&mut self, index: usize) {
        let index_ptr: *const _ = &self.data[index];
        let hole_ptr = &mut self.data[self.pos];
        ptr::copy_nonoverlapping(index_ptr, hole_ptr, 1);
        self.pos = index;
    }
}

impl<'a, T> Drop for Hole<'a, T> {
    fn drop(&mut self) {
        // fill the hole again
        unsafe {
            let pos = self.pos;
            ptr::write(&mut self.data[pos], self.elt.take().unwrap());
        }
    }
}

impl<T: Ord> BinaryHeap<T> {
    fn sift_up(&mut self, pos: usize) {
        unsafe {
            // Take out the value at `pos` and create a hole.
            let mut hole = Hole::new(&mut self.data, pos);

            while hole.pos() != 0 {
                let parent = parent(hole.pos());
                if hole.removed() <= hole.get(parent) { break }
                hole.move_to(parent);
            }
            // Hole will be unconditionally filled here; panic or not!
        }
    }
}
```