Commit f6d262a: Add unstable book entry
(1 parent: 2598e45)

3 files changed (+100, -1 lines)


src/doc/unstable-book/src/SUMMARY.md (+1)

```diff
@@ -37,6 +37,7 @@
 - [collections](collections.md)
 - [collections_range](collections-range.md)
 - [command_envs](command-envs.md)
+- [compiler_barriers](compiler-barriers.md)
 - [compiler_builtins](compiler-builtins.md)
 - [compiler_builtins_lib](compiler-builtins-lib.md)
 - [concat_idents](concat-idents.md)
```
New file (+98 lines):

# `compiler_barriers`

The tracking issue for this feature is: [#41092]

[#41092]: https://github.com/rust-lang/rust/issues/41092

------------------------

The `compiler_barriers` feature exposes the `compiler_barrier` function
in `std::sync::atomic`. This function is conceptually similar to C++'s
`atomic_signal_fence`, which can currently be accessed in nightly Rust
only through the `atomic_singlethreadfence_*` intrinsic functions in
`core`, or through the mostly equivalent inline assembly:

```rust
#![feature(asm)]
unsafe { asm!("" ::: "memory" : "volatile") };
```

A `compiler_barrier` restricts the kinds of memory re-ordering the
compiler is allowed to do. Specifically, depending on the given ordering
semantics, the compiler may be disallowed from moving reads or writes
from before or after the call to the other side of the call to
`compiler_barrier`.
## Examples

The need to prevent re-ordering of reads and writes often arises when
working with low-level devices. Consider a piece of code that interacts
with an Ethernet card with a set of internal registers that are accessed
through an address port register (`a: &mut usize`) and a data port
register (`d: &usize`). To read internal register 5, the following code
might then be used:

```rust
fn read_fifth(a: &mut usize, d: &usize) -> usize {
    *a = 5;
    *d
}
```

In this case, the compiler is free to re-order these two statements if
it thinks doing so might result in better performance, register use, or
anything else compilers care about. However, in doing so, it would break
the code, as the value returned would be that of some other device
register!

By inserting a compiler barrier, we can force the compiler not to
re-arrange these two statements, making the code function correctly
again:

```rust
#![feature(compiler_barriers)]
use std::sync::atomic;

fn read_fifth(a: &mut usize, d: &usize) -> usize {
    *a = 5;
    atomic::compiler_barrier(atomic::Ordering::SeqCst);
    *d
}
```

Compiler barriers are also useful in code that implements low-level
synchronization primitives. Consider a structure with two different
atomic variables, with a dependency chain between them:

```rust
use std::sync::atomic;

fn thread1(x: &atomic::AtomicUsize, y: &atomic::AtomicUsize) {
    x.store(1, atomic::Ordering::Release);
    let v1 = y.load(atomic::Ordering::Acquire);
}

fn thread2(x: &atomic::AtomicUsize, y: &atomic::AtomicUsize) {
    y.store(1, atomic::Ordering::Release);
    let v2 = x.load(atomic::Ordering::Acquire);
}
```

This code will guarantee that `thread1` sees any writes to `y` made by
`thread2`, and that `thread2` sees any writes to `x`. Intuitively, one
might also expect that if `thread2` sees `v2 == 0`, `thread1` must see
`v1 == 1` (since `thread2`'s store happened before its `load`, and its
load did not see `thread1`'s store). However, the code as written does
*not* guarantee this, because the compiler is allowed to re-order the
store and load within each thread. To enforce this particular behavior,
a call to `compiler_barrier(Ordering::SeqCst)` would need to be inserted
between the `store` and `load` in both functions.
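
A sketch of that fix, with the barrier inserted in both functions. (This uses `compiler_fence`, the name under which this function was later stabilized, so that the example compiles on current toolchains; the sequential calls in `main` are only there to exercise the functions, not to demonstrate concurrency.)

```rust
use std::sync::atomic::{compiler_fence, AtomicUsize, Ordering};

fn thread1(x: &AtomicUsize, y: &AtomicUsize) -> usize {
    x.store(1, Ordering::Release);
    // The compiler may not move the load above this point,
    // nor sink the store below it.
    compiler_fence(Ordering::SeqCst);
    y.load(Ordering::Acquire)
}

fn thread2(x: &AtomicUsize, y: &AtomicUsize) -> usize {
    y.store(1, Ordering::Release);
    compiler_fence(Ordering::SeqCst);
    x.load(Ordering::Acquire)
}

fn main() {
    let (x, y) = (AtomicUsize::new(0), AtomicUsize::new(0));
    // Run sequentially here: thread1 sees y still unset,
    // thread2 then sees the x written by thread1.
    let v1 = thread1(&x, &y);
    let v2 = thread2(&x, &y);
    assert_eq!((v1, v2), (0, 1));
}
```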

Compiler barriers with weaker re-ordering semantics (such as
`Ordering::Acquire`) can also be useful, but are beyond the scope of
this text. Curious readers are encouraged to read the Linux kernel's
discussion of [memory barriers][1], as well as C++ references on
[`std::memory_order`][2] and [`atomic_signal_fence`][3].
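
As one small illustration of the weaker flavors, a producer can publish data behind a flag using Release/Acquire compiler fences, the classic single-threaded (e.g. signal-handler) pattern. This is a minimal sketch, again using the stabilized name `compiler_fence`; the statics and values here are purely illustrative:

```rust
use std::sync::atomic::{compiler_fence, AtomicBool, AtomicUsize, Ordering};

static DATA: AtomicUsize = AtomicUsize::new(0);
static READY: AtomicBool = AtomicBool::new(false);

fn produce() {
    DATA.store(42, Ordering::Relaxed);
    // Release-flavored barrier: the compiler may not sink the DATA
    // store below this point, so DATA is written before READY is set.
    compiler_fence(Ordering::Release);
    READY.store(true, Ordering::Relaxed);
}

fn consume() -> Option<usize> {
    if READY.load(Ordering::Relaxed) {
        // Acquire-flavored barrier: the compiler may not hoist the
        // DATA load above the READY check.
        compiler_fence(Ordering::Acquire);
        Some(DATA.load(Ordering::Relaxed))
    } else {
        None
    }
}

fn main() {
    produce();
    assert_eq!(consume(), Some(42));
}
```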

[1]: https://www.kernel.org/doc/Documentation/memory-barriers.txt
[2]: http://en.cppreference.com/w/cpp/atomic/memory_order
[3]: http://www.cplusplus.com/reference/atomic/atomic_signal_fence/

src/libcore/sync/atomic.rs (+1, -1)

```diff
@@ -1598,7 +1598,7 @@ pub fn fence(order: Ordering) {
 /// [`AcqRel`]: enum.Ordering.html#variant.AcqRel
 /// [`Relaxed`]: enum.Ordering.html#variant.Relaxed
 #[inline]
-#[unstable(feature = "std_compiler_fences", issue = "41091")]
+#[unstable(feature = "compiler_barriers", issue = "41091")]
 pub fn compiler_barrier(order: Ordering) {
     unsafe {
         match order {
```
