
std: Start implementing wasm32 atomics #54017


Merged · 1 commit · Oct 5, 2018
1 change: 1 addition & 0 deletions src/libcore/lib.rs
@@ -116,6 +116,7 @@
#![feature(powerpc_target_feature)]
#![feature(mips_target_feature)]
#![feature(aarch64_target_feature)]
#![feature(wasm_target_feature)]
#![feature(const_slice_len)]
#![feature(const_str_as_bytes)]
#![feature(const_str_len)]
8 changes: 8 additions & 0 deletions src/libcore/sync/atomic.rs
@@ -2251,7 +2251,15 @@ unsafe fn atomic_umin<T>(dst: *mut T, val: T, order: Ordering) -> T {
/// [`Relaxed`]: enum.Ordering.html#variant.Relaxed
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg_attr(target_arch = "wasm32", allow(unused_variables))]
pub fn fence(order: Ordering) {
// On wasm32 it looks like fences aren't implemented in LLVM yet in that
Review comment:
Should we add a FIXME here?

Review comment (Member):
Can we have this panic or fail to compile on wasm rather than being a nop?

Review comment (Member Author):
@sfackler unfortunately this is used by Arc, which means that apps quickly stop working if it fails to compile or panics :(

Alternatively, we could remove libstd's usage of fences on wasm, but this seemed like a smaller local change for now.

// they will cause LLVM to abort. The wasm instruction set doesn't have
// fences right now. There's discussion online about the best way for tools
// to conventionally implement fences at
// https://github.com/WebAssembly/tool-conventions/issues/59. We should
// follow that discussion and implement a solution when one comes about!
#[cfg(not(target_arch = "wasm32"))]
unsafe {
match order {
Acquire => intrinsics::atomic_fence_acq(),
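For context on the Arc point in the review thread above: dropping the last clone of an `Arc` pairs a `Release` decrement with an `Acquire` fence, so essentially any program using `Arc` ends up calling `fence`. The sketch below is a simplified illustration of that pattern, not the actual libstd code; the `Shared`/`release` names are made up for the example.

```rust
use std::sync::atomic::{fence, AtomicUsize, Ordering};

// Illustrative reference-counted release path in the spirit of `Arc::drop`.
struct Shared {
    strong: AtomicUsize,
    // ... shared payload would live here ...
}

fn release(shared: &Shared) {
    // Release decrement: publish our writes to whichever thread frees the data.
    if shared.strong.fetch_sub(1, Ordering::Release) == 1 {
        // Acquire fence: synchronize with every earlier Release decrement
        // before tearing the allocation down. This is why `fence` has to at
        // least compile on wasm32 rather than panic or fail to build.
        fence(Ordering::Acquire);
        // ... deallocate the shared payload here ...
    }
}

fn main() {
    let shared = Shared { strong: AtomicUsize::new(1) };
    release(&shared);
}
```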
1 change: 1 addition & 0 deletions src/libstd/lib.rs
@@ -256,6 +256,7 @@
#![feature(const_ip)]
#![feature(core_intrinsics)]
#![feature(dropck_eyepatch)]
#![feature(duration_as_u128)]
#![feature(exact_size_is_empty)]
#![feature(external_doc)]
#![feature(fixed_size_array)]
104 changes: 104 additions & 0 deletions src/libstd/sys/wasm/condvar_atomics.rs
@@ -0,0 +1,104 @@
// Copyright 2018 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

use arch::wasm32::atomic;
use cmp;
use mem;
use sync::atomic::{AtomicUsize, Ordering::SeqCst};
use sys::mutex::Mutex;
use time::Duration;

pub struct Condvar {
cnt: AtomicUsize,
}

// Condition variables are implemented with a simple counter internally that is
// likely to cause spurious wakeups. Blocking on a condition variable will first
// read the value of the internal counter, unlock the given mutex, and then
// block if and only if the counter's value is still the same. Notifying a
// condition variable will modify the counter (add one for now) and then wake up
// a thread waiting on the address of the counter.
//
// A thread waiting on the condition variable will as a result avoid going to
// sleep if it's notified after the lock is unlocked but before it fully goes to
// sleep. A sleeping thread is guaranteed to be woken up at some point as it can
// only be woken up with a call to `wake`.
//
// Note that it's possible for 2 or more threads to be woken up by a call to
// `notify_one` with this implementation. That can happen where the modification
// of `cnt` causes any threads in the middle of `wait` to avoid going to sleep,
// and the subsequent `wake` may wake up a thread that's actually blocking. We
// consider this a spurious wakeup, though, which all users of condition
// variables must already be prepared to handle. As a result, this source of
// spurious wakeups is currently thought to be ok, although it may be problematic
// later on if it causes too many spurious wakeups.

impl Condvar {
pub const fn new() -> Condvar {
Condvar { cnt: AtomicUsize::new(0) }
}

#[inline]
pub unsafe fn init(&mut self) {
// nothing to do
}

pub unsafe fn notify_one(&self) {
self.cnt.fetch_add(1, SeqCst);
atomic::wake(self.ptr(), 1);
}

#[inline]
pub unsafe fn notify_all(&self) {
self.cnt.fetch_add(1, SeqCst);
atomic::wake(self.ptr(), -1); // -1 == "wake everyone"
}

pub unsafe fn wait(&self, mutex: &Mutex) {
// "atomically block and unlock" implemented by loading our current
// counter's value, unlocking the mutex, and blocking if the counter
// still has the same value.
//
// Notifications happen by incrementing the counter and then waking a
// thread. If the counter is incremented after we unlock the mutex but
// before we block, we'll see the changed value and avoid sleeping;
// otherwise the call to `wake` will wake us up once we're asleep.
let ticket = self.cnt.load(SeqCst) as i32;
mutex.unlock();
let val = atomic::wait_i32(self.ptr(), ticket, -1);
// 0 == woken, 1 == not equal to `ticket`, 2 == timeout (shouldn't happen)
debug_assert!(val == 0 || val == 1);
mutex.lock();
}

pub unsafe fn wait_timeout(&self, mutex: &Mutex, dur: Duration) -> bool {
let ticket = self.cnt.load(SeqCst) as i32;
mutex.unlock();
let nanos = dur.as_nanos();
let nanos = cmp::min(i64::max_value() as u128, nanos);

// If the return value is 2 then a timeout happened, so we return
// `false` as we weren't actually notified.
let ret = atomic::wait_i32(self.ptr(), ticket, nanos as i64) != 2;
mutex.lock();
return ret
}

#[inline]
pub unsafe fn destroy(&self) {
// nothing to do
}

#[inline]
fn ptr(&self) -> *mut i32 {
assert_eq!(mem::size_of::<usize>(), mem::size_of::<i32>());
&self.cnt as *const AtomicUsize as *mut i32
}
}
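Since this backend (like any condvar) can produce spurious wakeups, callers must re-check their condition in a loop. The standard pattern below uses the public `std::sync` API and is unaffected by the extra wakeups described in the comment above; it assumes a target where `std::thread::spawn` is available.

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

fn main() {
    let pair = Arc::new((Mutex::new(false), Condvar::new()));
    let pair2 = Arc::clone(&pair);

    thread::spawn(move || {
        let (lock, cvar) = &*pair2;
        *lock.lock().unwrap() = true;
        // Wakes one waiter; with this backend it may wake "too many", which
        // the loop below absorbs.
        cvar.notify_one();
    });

    let (lock, cvar) = &*pair;
    let mut ready = lock.lock().unwrap();
    // Re-check the condition after every wakeup: a wakeup alone proves
    // nothing, which is exactly why spurious wakeups are acceptable.
    while !*ready {
        ready = cvar.wait(ready).unwrap();
    }
}
```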
22 changes: 18 additions & 4 deletions src/libstd/sys/wasm/mod.rs
@@ -36,24 +36,38 @@ pub mod args;
#[cfg(feature = "backtrace")]
pub mod backtrace;
pub mod cmath;
pub mod condvar;
pub mod env;
pub mod fs;
pub mod memchr;
pub mod mutex;
pub mod net;
pub mod os;
pub mod os_str;
pub mod path;
pub mod pipe;
pub mod process;
pub mod rwlock;
pub mod stack_overflow;
pub mod thread;
pub mod thread_local;
pub mod time;
pub mod stdio;

cfg_if! {
if #[cfg(target_feature = "atomics")] {
#[path = "condvar_atomics.rs"]
pub mod condvar;
#[path = "mutex_atomics.rs"]
pub mod mutex;
#[path = "rwlock_atomics.rs"]
pub mod rwlock;
#[path = "thread_local_atomics.rs"]
pub mod thread_local;
} else {
pub mod condvar;
pub mod mutex;
pub mod rwlock;
pub mod thread_local;
}
}

#[cfg(not(test))]
pub fn init() {
}
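The `cfg_if!` dispatch above keys entirely off the `atomics` target feature (typically switched on with something like `-C target-feature=+atomics`; the full build setup for threaded wasm is outside this diff). Below is a minimal sketch of branching on the same cfg from user code, assuming the feature is visible to `cfg` the same way it is here.

```rust
// Hypothetical helper mirroring the cfg_if! dispatch in sys/wasm/mod.rs.
#[cfg(target_feature = "atomics")]
fn sync_backend() -> &'static str {
    "wasm with shared-memory atomics: real blocking primitives"
}

#[cfg(not(target_feature = "atomics"))]
fn sync_backend() -> &'static str {
    "single-threaded wasm (or any other configuration): stub primitives"
}

fn main() {
    println!("selected sync backend: {}", sync_backend());
}
```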
163 changes: 163 additions & 0 deletions src/libstd/sys/wasm/mutex_atomics.rs
@@ -0,0 +1,163 @@
// Copyright 2018 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

use arch::wasm32::atomic;
use cell::UnsafeCell;
use mem;
use sync::atomic::{AtomicUsize, AtomicU64, Ordering::SeqCst};

pub struct Mutex {
locked: AtomicUsize,
}

// Mutexes have a pretty simple implementation where they contain an `i32`
// internally that is 0 when unlocked and 1 when the mutex is locked.
// Acquisition has a fast path where it attempts to cmpxchg the 0 to a 1, and
// if it fails it then waits for a notification. Releasing a lock is then done
// by swapping in 0 and then notifying any waiters, if present.

impl Mutex {
pub const fn new() -> Mutex {
Mutex { locked: AtomicUsize::new(0) }
}

#[inline]
pub unsafe fn init(&mut self) {
// nothing to do
}

pub unsafe fn lock(&self) {
while !self.try_lock() {
let val = atomic::wait_i32(
self.ptr(),
1, // we expect our mutex is locked
-1, // wait infinitely
);
// we should either have been woken up (0) or observed a not-equal value
// due to a race (1). We should never time out (2)
debug_assert!(val == 0 || val == 1);
}
}

pub unsafe fn unlock(&self) {
let prev = self.locked.swap(0, SeqCst);
debug_assert_eq!(prev, 1);
atomic::wake(self.ptr(), 1); // wake up one waiter, if any
}

#[inline]
pub unsafe fn try_lock(&self) -> bool {
self.locked.compare_exchange(0, 1, SeqCst, SeqCst).is_ok()
}

#[inline]
pub unsafe fn destroy(&self) {
// nothing to do
}

#[inline]
fn ptr(&self) -> *mut i32 {
assert_eq!(mem::size_of::<usize>(), mem::size_of::<i32>());
&self.locked as *const AtomicUsize as *mut isize as *mut i32
}
}

pub struct ReentrantMutex {
owner: AtomicU64,
recursions: UnsafeCell<u32>,
}

unsafe impl Send for ReentrantMutex {}
unsafe impl Sync for ReentrantMutex {}

// Reentrant mutexes are implemented similarly to the mutexes above, except
// that instead of "1" meaning locked we use the id of a thread to represent
// whether it has locked a mutex. That way we have an atomic counter which
// always holds the id of the thread that currently holds the lock (or 0 if the
// lock is unlocked).
//
// Once a thread acquires a lock recursively, which it detects by looking at
// the value that's already there, it will update a local `recursions` counter
// in a nonatomic fashion (as we hold the lock). The lock is then fully
// released when this recursion counter reaches 0.

impl ReentrantMutex {
pub unsafe fn uninitialized() -> ReentrantMutex {
ReentrantMutex {
owner: AtomicU64::new(0),
recursions: UnsafeCell::new(0),
}
}

pub unsafe fn init(&mut self) {
// nothing to do...
}

pub unsafe fn lock(&self) {
let me = thread_id();
while let Err(owner) = self._try_lock(me) {
let val = atomic::wait_i64(self.ptr(), owner as i64, -1);
debug_assert!(val == 0 || val == 1);
}
}

#[inline]
pub unsafe fn try_lock(&self) -> bool {
self._try_lock(thread_id()).is_ok()
}

#[inline]
unsafe fn _try_lock(&self, id: u64) -> Result<(), u64> {
let id = id.checked_add(1).unwrap(); // make sure `id` isn't 0
match self.owner.compare_exchange(0, id, SeqCst, SeqCst) {
// we transitioned from unlocked to locked
Ok(_) => {
debug_assert_eq!(*self.recursions.get(), 0);
Ok(())
}

// we currently own this lock, so update our recursion count and return
// `Ok(())`.
Err(n) if n == id => {
*self.recursions.get() += 1;
Ok(())
}

// Someone else owns the lock, let our caller take care of it
Err(other) => Err(other),
}
}

pub unsafe fn unlock(&self) {
// If we didn't ever recursively lock the lock then we fully unlock the
// mutex and wake up a waiter, if any. Otherwise we decrement our
// recursive counter and let someone else take care of the zero.
match *self.recursions.get() {
0 => {
self.owner.swap(0, SeqCst);
atomic::wake(self.ptr() as *mut i32, 1); // wake up one waiter, if any
}
ref mut n => *n -= 1,
}
}

pub unsafe fn destroy(&self) {
// nothing to do...
}

#[inline]
fn ptr(&self) -> *mut i64 {
&self.owner as *const AtomicU64 as *mut i64
}
}

fn thread_id() -> u64 {
panic!("thread ids not implemented on wasm with atomics yet")
}
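To make the futex-style protocol in the comments above concrete without the wasm intrinsics: the sketch below keeps the same 0/1 state machine but replaces `atomic::wait_i32`/`atomic::wake` with yielding, so it runs on any target. The `SpinLock` name and structure are illustrative only, not part of the PR.

```rust
use std::sync::atomic::{AtomicUsize, Ordering::SeqCst};
use std::thread;

// 0 = unlocked, 1 = locked, mirroring the `locked` field above.
struct SpinLock {
    locked: AtomicUsize,
}

impl SpinLock {
    const fn new() -> SpinLock {
        SpinLock { locked: AtomicUsize::new(0) }
    }

    fn lock(&self) {
        // Fast path: flip 0 -> 1. Where the real implementation calls
        // `atomic::wait_i32(ptr, 1, -1)` to sleep until notified, we yield
        // and retry instead.
        while self.locked.compare_exchange(0, 1, SeqCst, SeqCst).is_err() {
            thread::yield_now();
        }
    }

    fn unlock(&self) {
        // Hand the lock back; the real implementation follows this with
        // `atomic::wake(ptr, 1)` to unblock at most one sleeping waiter.
        let prev = self.locked.swap(0, SeqCst);
        debug_assert_eq!(prev, 1);
    }
}

fn main() {
    let lock = SpinLock::new();
    lock.lock();
    lock.unlock();
}
```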