git.lizzy.rs Git - rust.git/commitdiff
Auto merge of #41092 - jonhoo:std-fence-intrinsics, r=alexcrichton
author: bors <bors@rust-lang.org>
Sat, 8 Apr 2017 22:37:35 +0000 (22:37 +0000)
committer: bors <bors@rust-lang.org>
Sat, 8 Apr 2017 22:37:35 +0000 (22:37 +0000)
Add safe wrapper for atomic_compilerfence intrinsics

This PR adds a proposed safe wrapper for the `atomic_singlethreadfence_*` intrinsics introduced by [RFC #888](https://github.com/rust-lang/rfcs/pull/888). See #41091 for further discussion.

src/doc/unstable-book/src/SUMMARY.md
src/doc/unstable-book/src/compiler-barriers.md [new file with mode: 0644]
src/libcore/sync/atomic.rs

index 20812de524add29aa089e6bdd28dabcc08b5771a..93ce911ac6cc72169cb25bb45163beda497fde2b 100644 (file)
@@ -37,6 +37,7 @@
 - [collections](collections.md)
 - [collections_range](collections-range.md)
 - [command_envs](command-envs.md)
+- [compiler_barriers](compiler-barriers.md)
 - [compiler_builtins](compiler-builtins.md)
 - [compiler_builtins_lib](compiler-builtins-lib.md)
 - [concat_idents](concat-idents.md)
diff --git a/src/doc/unstable-book/src/compiler-barriers.md b/src/doc/unstable-book/src/compiler-barriers.md
new file mode 100644 (file)
index 0000000..827447f
--- /dev/null
@@ -0,0 +1,106 @@
+# `compiler_barriers`
+
+The tracking issue for this feature is: [#41091]
+
+[#41091]: https://github.com/rust-lang/rust/issues/41091
+
+------------------------
+
+The `compiler_barriers` feature exposes the `compiler_barrier` function
+in `std::sync::atomic`. This function is conceptually similar to C++'s
+`atomic_signal_fence`. Equivalent functionality is currently only
+accessible in nightly Rust through the `atomic_singlethreadfence_*`
+intrinsic functions in `core`, or through the mostly equivalent literal
+assembly:
+
+```rust
+#![feature(asm)]
+unsafe { asm!("" ::: "memory" : "volatile") };
+```
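+For reference, the intrinsic route mentioned above looks like this (a
+sketch; both the feature gate and the intrinsics are nightly-only):
+
+```rust
+#![feature(core_intrinsics)]
+use std::intrinsics;
+
+// Equivalent to a SeqCst compiler barrier.
+unsafe { intrinsics::atomic_singlethreadfence() };
+```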
+
+A `compiler_barrier` restricts the kinds of memory re-ordering the
+compiler is allowed to do. Specifically, depending on the given ordering
+semantics, the compiler may be disallowed from moving reads or writes
+from before or after the call to the other side of the call to
+`compiler_barrier`. Note that it does **not** prevent the *hardware*
+from doing such re-ordering. This is not a problem in a single-threaded
+execution context, but when other threads may modify memory at the same
+time, stronger synchronization primitives are required.
+
+## Examples
+
+`compiler_barrier` is generally only useful for preventing a thread from
+racing *with itself*. That is, it matters when a thread executes one
+piece of code, is interrupted, and then starts executing code elsewhere
+(while still in the same thread, and conceptually still on the same
+core). In traditional programs, this can only occur when a signal
+handler is registered. In lower-level code, such situations can also
+arise when handling interrupts, when implementing green threads with
+pre-emption, etc.
+
+To give a straightforward example of when a `compiler_barrier` is
+necessary, consider the following code:
+
+```rust
+# use std::sync::atomic::{AtomicBool, AtomicUsize};
+# use std::sync::atomic::{ATOMIC_BOOL_INIT, ATOMIC_USIZE_INIT};
+# use std::sync::atomic::Ordering;
+static IMPORTANT_VARIABLE: AtomicUsize = ATOMIC_USIZE_INIT;
+static IS_READY: AtomicBool = ATOMIC_BOOL_INIT;
+
+fn main() {
+    IMPORTANT_VARIABLE.store(42, Ordering::Relaxed);
+    IS_READY.store(true, Ordering::Relaxed);
+}
+
+fn signal_handler() {
+    if IS_READY.load(Ordering::Relaxed) {
+        assert_eq!(IMPORTANT_VARIABLE.load(Ordering::Relaxed), 42);
+    }
+}
+```
+
+The way it is currently written, the `assert_eq!` is *not* guaranteed to
+succeed, despite everything happening in a single thread. To see why,
+remember that the compiler is free to swap the stores to
+`IMPORTANT_VARIABLE` and `IS_READY` since they are both
+`Ordering::Relaxed`. If it does, and the signal handler is invoked right
+after `IS_READY` is updated, then the signal handler will see
+`IS_READY=true`, but `IMPORTANT_VARIABLE=0`.
+
+Using a `compiler_barrier`, we can remedy this situation:
+
+```rust
+#![feature(compiler_barriers)]
+# use std::sync::atomic::{AtomicBool, AtomicUsize};
+# use std::sync::atomic::{ATOMIC_BOOL_INIT, ATOMIC_USIZE_INIT};
+# use std::sync::atomic::Ordering;
+use std::sync::atomic::compiler_barrier;
+
+static IMPORTANT_VARIABLE: AtomicUsize = ATOMIC_USIZE_INIT;
+static IS_READY: AtomicBool = ATOMIC_BOOL_INIT;
+
+fn main() {
+    IMPORTANT_VARIABLE.store(42, Ordering::Relaxed);
+    // prevent earlier writes from being moved beyond this point
+    compiler_barrier(Ordering::Release);
+    IS_READY.store(true, Ordering::Relaxed);
+}
+
+fn signal_handler() {
+    if IS_READY.load(Ordering::Relaxed) {
+        assert_eq!(IMPORTANT_VARIABLE.load(Ordering::Relaxed), 42);
+    }
+}
+```
+
+A deeper discussion of compiler barriers with various re-ordering
+semantics (such as `Ordering::SeqCst`) is beyond the scope of this text.
+Curious readers are encouraged to read the Linux kernel's discussion of
+[memory barriers][1], the C++ references on [`std::memory_order`][2] and
+[`atomic_signal_fence`][3], and [this StackOverflow answer][4] for
+further details.
+
+[1]: https://www.kernel.org/doc/Documentation/memory-barriers.txt
+[2]: http://en.cppreference.com/w/cpp/atomic/memory_order
+[3]: http://www.cplusplus.com/reference/atomic/atomic_signal_fence/
+[4]: http://stackoverflow.com/a/18454971/472927
index a4050f271eb99b30a08eb91961680bf468444e28..0c70524ead246beee3a85bbe16754f9612f34329 100644 (file)
@@ -1591,6 +1591,47 @@ pub fn fence(order: Ordering) {
 }
 
 
+/// A compiler memory barrier.
+///
+/// `compiler_barrier` does not emit any machine code, but prevents the compiler from re-ordering
+/// memory operations across this point. Which reorderings are disallowed is dictated by the given
+/// [`Ordering`]. Note that `compiler_barrier` does *not* introduce inter-thread memory
+/// synchronization; for that, a [`fence`] is needed.
+///
+/// The re-orderings prevented by the different ordering semantics are:
+///
+///  - with [`SeqCst`], no re-ordering of reads and writes across this point is allowed.
+///  - with [`Release`], preceding reads and writes cannot be moved past subsequent writes.
+///  - with [`Acquire`], subsequent reads and writes cannot be moved ahead of preceding reads.
+///  - with [`AcqRel`], both of the above rules are enforced.
+///
+/// # Panics
+///
+/// Panics if `order` is [`Relaxed`].
+///
+/// [`fence`]: fn.fence.html
+/// [`Ordering`]: enum.Ordering.html
+/// [`Acquire`]: enum.Ordering.html#variant.Acquire
+/// [`SeqCst`]: enum.Ordering.html#variant.SeqCst
+/// [`Release`]: enum.Ordering.html#variant.Release
+/// [`AcqRel`]: enum.Ordering.html#variant.AcqRel
+/// [`Relaxed`]: enum.Ordering.html#variant.Relaxed
+#[inline]
+#[unstable(feature = "compiler_barriers", issue = "41091")]
+pub fn compiler_barrier(order: Ordering) {
+    unsafe {
+        match order {
+            Acquire => intrinsics::atomic_singlethreadfence_acq(),
+            Release => intrinsics::atomic_singlethreadfence_rel(),
+            AcqRel => intrinsics::atomic_singlethreadfence_acqrel(),
+            SeqCst => intrinsics::atomic_singlethreadfence(),
+            Relaxed => panic!("there is no such thing as a relaxed barrier"),
+            __Nonexhaustive => panic!("invalid memory ordering"),
+        }
+    }
+}
+
+
 #[cfg(target_has_atomic = "8")]
 #[stable(feature = "atomic_debug", since = "1.3.0")]
 impl fmt::Debug for AtomicBool {