A less misleading intro to atomic::Ordering #55233

Closed · wants to merge 3 commits
54 changes: 50 additions & 4 deletions src/libcore/sync/atomic.rs
@@ -170,16 +170,62 @@ unsafe impl<T> Sync for AtomicPtr<T> {}
/// Atomic memory orderings
///
/// Memory orderings limit the ways that both the compiler and CPU may reorder
- /// instructions around atomic operations. At its most restrictive,
- /// "sequentially consistent" atomics allow neither reads nor writes
- /// to be moved either before or after the atomic operation; on the other end
- /// "relaxed" atomics allow all reorderings.
+ /// instructions and optimize around atomic operations.
///
/// An operation on an atomic variable can do three things:
///
/// * Make the atomic itself safe to use from multiple threads. This always happens, even with the
/// weakest [`Relaxed`] ordering, and an atomic variable always forms a "sane" timeline of its own
/// operations. Note that [`Relaxed`] provides no guarantees at all about how different threads
/// see the relative order of operations on *different* atomics (which is not something the human
/// brain interprets as remotely "sane").
/// * Additionally, synchronize other (even non-atomic) memory. A store with [`Release`] ordering
/// forms a (unidirectional) synchronization edge with any thread that performs an [`Acquire`]
/// load *and observes the written value*. Conceptually, imagine the threads as independent
/// entities: the released value is tagged with a snapshot of all memory, and an acquiring thread
/// waits for that bulk of data to arrive (this is of course not what actually happens in the
/// hardware, but it is about the least broken intuitive understanding of the model); a sketch of
/// this pattern follows the list.
/// * Finally, a [`SeqCst`] operation participates in a single globally consistent timeline (in
/// addition to being [`AcqRel`]). For any two [`SeqCst`] operations it is well defined which one
/// happened earlier and which one later. This makes it possible to synchronize memory without
/// having to "meet" on a common value or even the same atomic variable ‒ a [`SeqCst`] load
/// acquires from any earlier [`SeqCst`] store.
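///
/// As a minimal sketch of the release/acquire point above (the names `READY` and `DATA` are made
/// up for this example, and it uses `std` because it spawns threads):
///
/// ```
/// use std::sync::atomic::{AtomicBool, Ordering};
/// use std::thread;
///
/// static READY: AtomicBool = AtomicBool::new(false);
/// static mut DATA: u32 = 0;
///
/// let producer = thread::spawn(|| {
///     unsafe { DATA = 42; }                  // plain, non-atomic write
///     READY.store(true, Ordering::Release);  // publish it
/// });
/// let consumer = thread::spawn(|| {
///     while !READY.load(Ordering::Acquire) {}    // spin until the flag is seen
///     // The acquire load that observed `true` synchronizes with the release
///     // store, so the non-atomic write to `DATA` is visible here.
///     let value = unsafe { DATA };
///     assert_eq!(value, 42);
/// });
/// producer.join().unwrap();
/// consumer.join().unwrap();
/// ```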
///
/// # Common traps for the unaware
///
/// The rules of the memory model have some very unintuitive consequences.
///
/// * Load-only operations never "publish". Even [`load(Ordering::SeqCst)`][AtomicUsize::load] has
/// only acquire semantics; reads and writes of other memory can still be reordered to after that
/// operation.
/// * Similarly, store-only operations have only release semantics; reads and writes of other
/// memory can be reordered to before them.
/// * Some operations (e.g. [`compare_and_swap`]) may fail. In that case they don't write any
/// value, act as load-only operations and therefore don't have release semantics (again, even if
/// they are marked as [`SeqCst`]).
/// * The release-acquire synchronization of other memory happens only through the one specific
/// value, not through the atomic variable as a whole. If a value released by thread `A` is
/// overwritten with another one by thread `B` (even one computed from the first), an acquire load
/// in `C` that sees the second value acquires only the data from `B` (if `B` released at all). To
/// make this work properly, either use [`SeqCst`] for everything or chain the synchronization:
/// `B` must acquire from `A` and then release again, even if `B` is not interested in the other
/// memory from `A`. A sketch of such chaining follows this list.
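///
/// A sketch of such chaining (the names are made up for this example; `B` is only a relay, but it
/// still needs [`AcqRel`] so that `C` can rely on the data written by `A`):
///
/// ```
/// use std::sync::atomic::{AtomicUsize, Ordering};
/// use std::thread;
///
/// static STAGE: AtomicUsize = AtomicUsize::new(0);
/// static mut PAYLOAD: u32 = 0;
///
/// // A: writes the payload, then releases it by storing stage 1.
/// let a = thread::spawn(|| {
///     unsafe { PAYLOAD = 7; }
///     STAGE.store(1, Ordering::Release);
/// });
/// // B: does not care about PAYLOAD, but must acquire A's store and release
/// // its own, otherwise the chain to C would be broken.
/// let b = thread::spawn(|| {
///     while STAGE.compare_and_swap(1, 2, Ordering::AcqRel) != 1 {}
/// });
/// // C: acquires on the value written by B; thanks to the chaining it also
/// // sees the payload written by A.
/// let c = thread::spawn(|| {
///     while STAGE.load(Ordering::Acquire) != 2 {}
///     let payload = unsafe { PAYLOAD };
///     assert_eq!(payload, 7);
/// });
/// a.join().unwrap();
/// b.join().unwrap();
/// c.join().unwrap();
/// ```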
///
/// # More details
///
/// This is only an intuitive introduction. If you are doing anything even slightly complex, be
/// sure to study the exact definitions and prove your algorithm correct.
///
/// Rust's memory orderings are [the same as
/// LLVM's](https://llvm.org/docs/LangRef.html#memory-model-for-concurrent-operations).
///
/// For more information see the [nomicon].
///
/// [`Relaxed`]: #variant.Relaxed
/// [`Release`]: #variant.Release
/// [`Acquire`]: #variant.Acquire
/// [`AcqRel`]: #variant.AcqRel
/// [`SeqCst`]: #variant.SeqCst
/// [AtomicUsize::load]: struct.AtomicUsize.html#method.load
/// [`compare_and_swap`]: struct.AtomicUsize.html#method.compare_and_swap
/// [nomicon]: ../../../nomicon/atomics.html
#[stable(feature = "rust1", since = "1.0.0")]
#[derive(Copy, Clone, Debug)]