Commit 54f0e41

[compiler-rt][builtins] Switch libatomic locks to pthread_mutex_t.
When an uninstrumented libatomic is used together with a TSan-instrumented memcpy, TSan may report a data race in circumstances where the writes are arguably safe. This happens because __atomic_compare_exchange is not instrumented in an uninstrumented libatomic, so TSan cannot tell that the subsequent memcpy is race-free. pthread_mutex_lock and pthread_mutex_unlock, on the other hand, are intercepted by TSan, so an uninstrumented libatomic no longer triggers this false positive. pthread mutexes may also try a number of different strategies to acquire the lock, which can bound the time a thread waits for the lock under contention. While pthread_mutex_lock has higher overhead (a function call plus some dispatching), falling back to libatomic already implies a lack of performance guarantees.
1 parent 335fb94 commit 54f0e41
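
To make the scenario described in the commit message concrete, here is a minimal sketch, not part of this commit, of the pattern libatomic uses for objects too large to be lock-free: a plain memcpy guarded by a lock. The helper name locked_memcpy and the single mutex are illustrative; the real atomic.c picks a lock from the table shown in the diff below. Because pthread_mutex_lock and pthread_mutex_unlock are intercepted by TSan, the guarded memcpy is treated as synchronized even when this code is built without -fsanitize=thread.

#include <pthread.h>
#include <string.h>

/* Hypothetical stand-in for a libatomic helper handling an object that is
 * too large for lock-free access: the copy runs under a mutex. */
static pthread_mutex_t big_object_lock = PTHREAD_MUTEX_INITIALIZER;

static void locked_memcpy(void *dst, const void *src, size_t n) {
  /* TSan intercepts these pthread calls, so it knows the memcpy below is
   * ordered with respect to every other access made under the same lock,
   * even though this translation unit is not instrumented. */
  pthread_mutex_lock(&big_object_lock);
  memcpy(dst, src, n);
  pthread_mutex_unlock(&big_object_lock);
}

With the previous compare-exchange spinlock, TSan could not see the acquire/release pair inside an uninstrumented libatomic, so the same memcpy looked like an unsynchronized write.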

File tree

1 file changed: +5, -12 lines

compiler-rt/lib/builtins/atomic.c

Lines changed: 5 additions & 12 deletions
@@ -94,19 +94,12 @@ static Lock locks[SPINLOCK_COUNT]; // initialized to OS_SPINLOCK_INIT which is 0
 #else
 _Static_assert(__atomic_always_lock_free(sizeof(uintptr_t), 0),
                "Implementation assumes lock-free pointer-size cmpxchg");
-typedef _Atomic(uintptr_t) Lock;
+#include <pthread.h>
+typedef pthread_mutex_t Lock;
 /// Unlock a lock. This is a release operation.
-__inline static void unlock(Lock *l) {
-  __c11_atomic_store(l, 0, __ATOMIC_RELEASE);
-}
-/// Locks a lock. In the current implementation, this is potentially
-/// unbounded in the contended case.
-__inline static void lock(Lock *l) {
-  uintptr_t old = 0;
-  while (!__c11_atomic_compare_exchange_weak(l, &old, 1, __ATOMIC_ACQUIRE,
-                                             __ATOMIC_RELAXED))
-    old = 0;
-}
+__inline static void unlock(Lock *l) { pthread_mutex_unlock(l); }
+/// Locks a lock.
+__inline static void lock(Lock *l) { pthread_mutex_lock(l); }
 /// locks for atomic operations
 static Lock locks[SPINLOCK_COUNT];
 #endif
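
For context, a hedged, self-contained sketch of how a fixed table such as locks[SPINLOCK_COUNT] is typically indexed: the address of the atomic object is hashed to choose a mutex, so unrelated objects usually contend on different locks. The table size and the helper lock_for_address are illustrative assumptions, not the exact code in atomic.c.

#include <pthread.h>
#include <stdint.h>

#define SPINLOCK_COUNT (1 << 10)    /* illustrative table size */
typedef pthread_mutex_t Lock;
static Lock locks[SPINLOCK_COUNT];  /* one mutex per bucket, as in the diff above */

/* Hypothetical helper: hash an object's address into the lock table. */
static Lock *lock_for_address(const volatile void *ptr) {
  uintptr_t hash = (uintptr_t)ptr;
  hash ^= hash >> 4; /* fold out low bits that are mostly alignment */
  return &locks[hash % SPINLOCK_COUNT];
}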
