
Commit 064f718

Rollup merge of rust-lang#44595 - budziq:stabilize_compiler_fences, r=alexcrichton
stabilized compiler_fences (fixes rust-lang#41091). I did not know how to proceed with the "unstable-book" entry. The feature would no longer be unstable, so I have deleted it. If that was the wrong call I'll revert it (unfortunately this case is not described in the CONTRIBUTING.md).
2 parents 1437e53 + 5f62c0c commit 064f718
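
With this merge, `compiler_fence` becomes callable on the stable channel. As a minimal sketch (not part of the commit itself), assuming Rust 1.22.0 or later, user code no longer needs the feature gate:

```rust
use std::sync::atomic::{compiler_fence, Ordering};

fn main() {
    // No `#![feature(compiler_fences)]` crate attribute is required any more;
    // the function is re-exported from `std::sync::atomic` on stable.
    compiler_fence(Ordering::SeqCst);
}
```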

File tree

2 files changed: +54 -111 lines changed


src/doc/unstable-book/src/library-features/compiler-fences.md: -106 (this file was deleted)

src/libcore/sync/atomic.rs

+54 -5
@@ -1679,10 +1679,14 @@ pub fn fence(order: Ordering) {
 
 /// A compiler memory fence.
 ///
-/// `compiler_fence` does not emit any machine code, but prevents the compiler from re-ordering
-/// memory operations across this point. Which reorderings are disallowed is dictated by the given
-/// [`Ordering`]. Note that `compiler_fence` does *not* introduce inter-thread memory
-/// synchronization; for that, a [`fence`] is needed.
+/// `compiler_fence` does not emit any machine code, but restricts the kinds
+/// of memory re-ordering the compiler is allowed to do. Specifically, depending on
+/// the given [`Ordering`] semantics, the compiler may be disallowed from moving reads
+/// or writes from before or after the call to the other side of the call to
+/// `compiler_fence`. Note that it does **not** prevent the *hardware*
+/// from doing such re-ordering. This is not a problem in a single-threaded
+/// execution context, but when other threads may modify memory at the same
+/// time, stronger synchronization primitives such as [`fence`] are required.
 ///
 /// The re-orderings prevented by the different ordering semantics are:
 ///
@@ -1691,19 +1695,64 @@ pub fn fence(order: Ordering) {
 /// - with [`Acquire`], subsequent reads and writes cannot be moved ahead of preceding reads.
 /// - with [`AcqRel`], both of the above rules are enforced.
 ///
+/// `compiler_fence` is generally only useful for preventing a thread from
+/// racing *with itself*. That is, if a given thread is executing one piece
+/// of code, and is then interrupted, and starts executing code elsewhere
+/// (while still in the same thread, and conceptually still on the same
+/// core). In traditional programs, this can only occur when a signal
+/// handler is registered. In more low-level code, such situations can also
+/// arise when handling interrupts, when implementing green threads with
+/// pre-emption, etc. Curious readers are encouraged to read the Linux kernel's
+/// discussion of [memory barriers].
+///
 /// # Panics
 ///
 /// Panics if `order` is [`Relaxed`].
 ///
+/// # Examples
+///
+/// Without `compiler_fence`, the `assert_eq!` in the following code
+/// is *not* guaranteed to succeed, despite everything happening in a single thread.
+/// To see why, remember that the compiler is free to swap the stores to
+/// `IMPORTANT_VARIABLE` and `IS_READY` since they are both
+/// `Ordering::Relaxed`. If it does, and the signal handler is invoked right
+/// after `IS_READY` is updated, then the signal handler will see
+/// `IS_READY=1`, but `IMPORTANT_VARIABLE=0`.
+/// Using a `compiler_fence` remedies this situation.
+///
+/// ```
+/// use std::sync::atomic::{AtomicBool, AtomicUsize};
+/// use std::sync::atomic::{ATOMIC_BOOL_INIT, ATOMIC_USIZE_INIT};
+/// use std::sync::atomic::Ordering;
+/// use std::sync::atomic::compiler_fence;
+///
+/// static IMPORTANT_VARIABLE: AtomicUsize = ATOMIC_USIZE_INIT;
+/// static IS_READY: AtomicBool = ATOMIC_BOOL_INIT;
+///
+/// fn main() {
+///     IMPORTANT_VARIABLE.store(42, Ordering::Relaxed);
+///     // prevent earlier writes from being moved beyond this point
+///     compiler_fence(Ordering::Release);
+///     IS_READY.store(true, Ordering::Relaxed);
+/// }
+///
+/// fn signal_handler() {
+///     if IS_READY.load(Ordering::Relaxed) {
+///         assert_eq!(IMPORTANT_VARIABLE.load(Ordering::Relaxed), 42);
+///     }
+/// }
+/// ```
+///
 /// [`fence`]: fn.fence.html
 /// [`Ordering`]: enum.Ordering.html
 /// [`Acquire`]: enum.Ordering.html#variant.Acquire
 /// [`SeqCst`]: enum.Ordering.html#variant.SeqCst
 /// [`Release`]: enum.Ordering.html#variant.Release
 /// [`AcqRel`]: enum.Ordering.html#variant.AcqRel
 /// [`Relaxed`]: enum.Ordering.html#variant.Relaxed
+/// [memory barriers]: https://www.kernel.org/doc/Documentation/memory-barriers.txt
 #[inline]
-#[unstable(feature = "compiler_fences", issue = "41091")]
+#[stable(feature = "compiler_fences", since = "1.22.0")]
 pub fn compiler_fence(order: Ordering) {
     unsafe {
         match order {
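
The doc example added above is written against the 1.22-era `ATOMIC_*_INIT` constants. As a hedged, self-contained variant (not part of this commit) for current toolchains, where `AtomicUsize::new` and `AtomicBool::new` are `const fn`s, the same ordering argument can be exercised directly; the call to `signal_handler` at the end stands in for asynchronous signal delivery and is only there so the program runs as written:

```rust
use std::sync::atomic::{compiler_fence, AtomicBool, AtomicUsize, Ordering};

static IMPORTANT_VARIABLE: AtomicUsize = AtomicUsize::new(0);
static IS_READY: AtomicBool = AtomicBool::new(false);

fn signal_handler() {
    // With the Release compiler fence below in place, observing
    // IS_READY == true implies the store of 42 has already happened,
    // as far as compiler re-ordering within this thread is concerned.
    if IS_READY.load(Ordering::Relaxed) {
        assert_eq!(IMPORTANT_VARIABLE.load(Ordering::Relaxed), 42);
    }
}

fn main() {
    IMPORTANT_VARIABLE.store(42, Ordering::Relaxed);
    // Prevent the compiler from moving the store above past this point.
    compiler_fence(Ordering::Release);
    IS_READY.store(true, Ordering::Relaxed);

    // Called directly so the example is runnable; real code would have the
    // OS invoke this asynchronously when a signal is delivered.
    signal_handler();
}
```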
