
[mlir] Fix race condition introduced in ThreadLocalCache #93280


Merged: 1 commit into llvm:main on May 24, 2024

Conversation

@Mogball (Contributor) commented May 24, 2024

Okay, so an apparently not-so-rare crash could occur if the `perInstanceState` pointer got re-allocated to the same address as before. All the values have been destroyed, so the TLC is left with dangling pointers, but the same `ValueT *` is then pulled out of the TLC and dereferenced, leading to a crash.
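To make the failure mode concrete, here is a minimal stand-alone sketch (hypothetical types, not the MLIR code): when a per-thread cache is keyed by a raw instance pointer and the allocator reuses that address for a new instance, the lookup hands back a stale entry whose cached pointer now dangles.

```cpp
#include <cstdio>
#include <map>

// Hypothetical stand-ins; this only demonstrates the address-reuse hazard.
struct State { int payload = 0; };

int main() {
  // Per-thread cache keyed by the raw instance address.
  std::map<State *, int *> cache;

  auto *a = new State();
  int *valueForA = new int(42);
  cache[a] = valueForA;

  delete valueForA; // the cached value is destroyed...
  delete a;         // ...and so is the instance, but the cache entry remains

  auto *b = new State(); // may be allocated at the same address as `a`
  auto it = cache.find(b);
  if (it != cache.end()) {
    // Stale hit: dereferencing it->second here would be a use-after-free.
    std::printf("address reused; cached pointer is dangling\n");
  }
  delete b;
}
```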

I suppose the purpose of the `weak_ptr` was that it would get reset to a default state when the `perInstanceState` shared pointer got destroyed (its reference count would only ever be 1, very briefly 2 when it gets aliased to a `ValueT *` but then assigned to a `weak_ptr`).

Basically, there are circular references between TLC instances and `perInstanceState` instances, and we have to ensure there are no dangling references.

1. Ensure the TLC entries are reset to a valid default state if the TLC (i.e. the owning thread) lives longer than the `perInstanceState`.
   a. This is currently achieved by storing a `weak_ptr` in the TLC.
2. If `perInstanceState` lives longer than the TLC, it cannot contain dangling references to entries in destroyed TLCs.
   a. This is not currently the case.
3. If both are being destroyed at the same time, we cannot race (see the sketch below).
   a. The destructors are synchronized because the TLC destructor locks the `weak_ptr` while it is destructing, preventing the owning `perInstanceState` of the entry from destructing. If `perInstanceState` were destructed first, the `weak_ptr` lock would fail.

And:

4. Ensure `get` in the common (initialized) case is as fast as possible (no atomics).
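A minimal sketch (hypothetical names, not the actual patch) of the synchronization in point 3: the thread-side destructor locks a `weak_ptr` to the shared state before cleaning up, which keeps the shared state alive until `remove()` returns; if the shared state is already gone, the lock simply fails and there is nothing to clean up.

```cpp
#include <algorithm>
#include <memory>
#include <mutex>
#include <vector>

// Hypothetical stand-ins for PerInstanceState and a per-thread cache entry.
struct SharedState {
  std::mutex m;
  std::vector<int *> entries;

  void remove(int *e) {
    std::lock_guard<std::mutex> lock(m);
    entries.erase(std::find(entries.begin(), entries.end(), e));
  }
};

struct ThreadEntry {
  int *value = nullptr;
  std::weak_ptr<SharedState> keepalive;

  ~ThreadEntry() {
    // Locking the weak_ptr either keeps the shared state alive for the
    // duration of remove(), or fails because it was already destroyed.
    if (std::shared_ptr<SharedState> state = keepalive.lock())
      state->remove(value);
  }
};
```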

We need to change the TLC to store a `ValueT **` so that it can be shared with entries owned by `perInstanceState` and written to null when they are destroyed. However, this is no longer synchronized by an atomic, which means (2) becomes a problem. This is fine because when the TLC destructs, we remove the entries from `perInstanceState` that could reference the TLC entries.
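A condensed sketch of the double-pointer scheme described above, with hypothetical names (the real implementation is in the diff below): the thread-local side owns a heap-allocated `ValueT **` with a stable address, the owning side writes `nullptr` through it when the value is destroyed, and the cache-hit path is a plain non-atomic load.

```cpp
#include <memory>

// Hypothetical sketch of the Observer/Owner split described in the patch.
template <typename ValueT>
struct Slot {
  // Heap-allocated so the address stays stable even if the thread-local map
  // that holds the Slot re-allocates its storage.
  std::unique_ptr<ValueT *> ptr = std::make_unique<ValueT *>(nullptr);
};

template <typename ValueT>
struct Holder {
  std::unique_ptr<ValueT> value;
  ValueT **slot; // shared with the thread-local side

  explicit Holder(Slot<ValueT> &s)
      : value(std::make_unique<ValueT>()), slot(s.ptr.get()) {
    *slot = value.get(); // publish the value into the thread-local cache
  }
  ~Holder() {
    if (slot)
      *slot = nullptr; // invalidate the cached pointer on destruction
  }
};

// Fast path: an ordinary load of the cached pointer, no atomics involved.
template <typename ValueT>
ValueT *lookup(Slot<ValueT> &s) { return *s.ptr; }
```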

This patch shows the same perf gain as before but hopefully without the bug.

@llvmbot added labels mlir:core (MLIR Core Infrastructure) and mlir on May 24, 2024
@llvmbot (Member) commented May 24, 2024

@llvm/pr-subscribers-mlir

@llvm/pr-subscribers-mlir-core

Author: Jeff Niu (Mogball)


Full diff: https://github.com/llvm/llvm-project/pull/93280.diff

1 File Affected:

  • (modified) mlir/include/mlir/Support/ThreadLocalCache.h (+83-25)
diff --git a/mlir/include/mlir/Support/ThreadLocalCache.h b/mlir/include/mlir/Support/ThreadLocalCache.h
index d19257bf6e25e..c6b50ae962861 100644
--- a/mlir/include/mlir/Support/ThreadLocalCache.h
+++ b/mlir/include/mlir/Support/ThreadLocalCache.h
@@ -16,7 +16,6 @@
 
 #include "mlir/Support/LLVM.h"
 #include "llvm/ADT/DenseMap.h"
-#include "llvm/Support/ManagedStatic.h"
 #include "llvm/Support/Mutex.h"
 
 namespace mlir {
@@ -25,28 +24,91 @@ namespace mlir {
 /// cache has very large lock contention.
 template <typename ValueT>
 class ThreadLocalCache {
+  struct PerInstanceState;
+
+  /// The "observer" is owned by a thread-local cache instance. It is
+  /// constructed the first time a `ThreadLocalCache` instance is accessed by a
+  /// thread, unless `perInstanceState` happens to get re-allocated to the same
+  /// address as a previous one. This class is destructed when the thread in
+  /// which the `thread_local` cache lives is destroyed.
+  ///
+  /// This class is called the "observer" because while values cached in
+  /// thread-local caches are owned by `PerInstanceState`, a reference is stored
+  /// via this class in the TLC. With a double pointer, it knows when the
+  /// referenced value has been destroyed.
+  struct Observer {
+    Observer() : ptr(std::make_unique<ValueT *>(nullptr)) {}
+
+    /// This is the double pointer, explicitly allocated because we need to keep
+    /// the address stable if the TLC map re-allocates. It is owned by the
+    /// observer and shared with the value owner.
+    std::unique_ptr<ValueT *> ptr;
+    /// Because `Owner` living inside `PerInstanceState` contains a reference to
+    /// the double pointer, and likewise this class contains a reference to the
+    /// value, we need to synchronize destruction of the TLC and the
+    /// `PerInstanceState` to avoid racing. This weak pointer is acquired during
+    /// TLC destruction if the `PerInstanceState` hasn't entered its destructor
+    /// yet, and prevents it from happening.
+    std::weak_ptr<PerInstanceState> keepalive;
+  };
+
+  /// This struct owns the cache entries. It contains a reference back to the
+  /// reference inside the cache so that it can be written to null to indicate
+  /// that the cache entry is invalidated. It needs to do this because
+  /// `perInstanceState` could get re-allocated to the same pointer and we don't
+  /// remove entries from the TLC when it is deallocated. Thus, we have to reset
+  /// the TLC entries to a starting state in case the `ThreadLocalCache` lives
+  /// shorter than the threads.
+  struct Owner {
+    /// Save a pointer to the reference and write it to the newly created entry.
+    Owner(Observer &observer)
+        : value(std::make_unique<ValueT>()), ptrRef(observer.ptr.get()) {
+      *ptrRef = value.get();
+    }
+    /// Upon destruction, reset it to nullptr so the next time the cache is hit
+    /// with the same pointer, it will restart in a valid state.
+    ~Owner() {
+      if (ptrRef)
+        *ptrRef = nullptr;
+    }
+
+    Owner(Owner &&other) : value(std::move(other.value)), ptrRef(other.ptrRef) {
+      other.ptrRef = nullptr;
+    }
+    Owner &operator=(Owner &&other) {
+      value = std::move(other.value);
+      ptrRef = other.ptrRef;
+      other.ptrRef = nullptr;
+      return *this;
+    }
+
+    std::unique_ptr<ValueT> value;
+    ValueT **ptrRef;
+  };
+
   // Keep a separate shared_ptr protected state that can be acquired atomically
   // instead of using shared_ptr's for each value. This avoids a problem
   // where the instance shared_ptr is locked() successfully, and then the
   // ThreadLocalCache gets destroyed before remove() can be called successfully.
   struct PerInstanceState {
-    /// Remove the given value entry. This is generally called when a thread
-    /// local cache is destructing.
+    /// Remove the given value entry. This is called when a thread local cache
+    /// is destructing but still contains references to values owned by the
+    /// `PerInstanceState`. Removal is required because it prevents writeback to
+    /// a pointer that was deallocated.
     void remove(ValueT *value) {
       // Erase the found value directly, because it is guaranteed to be in the
       // list.
       llvm::sys::SmartScopedLock<true> threadInstanceLock(instanceMutex);
-      auto it =
-          llvm::find_if(instances, [&](std::unique_ptr<ValueT> &instance) {
-            return instance.get() == value;
-          });
+      auto it = llvm::find_if(instances, [&](Owner &instance) {
+        return instance.value.get() == value;
+      });
       assert(it != instances.end() && "expected value to exist in cache");
       instances.erase(it);
     }
 
     /// Owning pointers to all of the values that have been constructed for this
     /// object in the static cache.
-    SmallVector<std::unique_ptr<ValueT>, 1> instances;
+    SmallVector<Owner, 1> instances;
 
     /// A mutex used when a new thread instance has been added to the cache for
     /// this object.
@@ -57,14 +119,14 @@ class ThreadLocalCache {
   /// instance of the non-static cache and a weak reference to an instance of
   /// ValueT. We use a weak reference here so that the object can be destroyed
   /// without needing to lock access to the cache itself.
-  struct CacheType
-      : public llvm::SmallDenseMap<PerInstanceState *,
-                                   std::pair<std::weak_ptr<ValueT>, ValueT *>> {
+  struct CacheType : public llvm::SmallDenseMap<PerInstanceState *, Observer> {
     ~CacheType() {
-      // Remove the values of this cache that haven't already expired.
-      for (auto &it : *this)
-        if (std::shared_ptr<ValueT> value = it.second.first.lock())
-          it.first->remove(value.get());
+      // Remove the values of this cache that haven't already expired. This is
+      // required because if we don't remove them, they will contain a reference
+      // back to the data here that is being destroyed.
+      for (auto &[instance, observer] : *this)
+        if (std::shared_ptr<PerInstanceState> state = observer.keepalive.lock())
+          state->remove(*observer.ptr);
     }
 
     /// Clear out any unused entries within the map. This method is not
@@ -72,7 +134,7 @@ class ThreadLocalCache {
     void clearExpiredEntries() {
       for (auto it = this->begin(), e = this->end(); it != e;) {
         auto curIt = it++;
-        if (curIt->second.first.expired())
+        if (!*curIt->second.ptr)
           this->erase(curIt);
       }
     }
@@ -89,27 +151,23 @@ class ThreadLocalCache {
   ValueT &get() {
     // Check for an already existing instance for this thread.
     CacheType &staticCache = getStaticCache();
-    std::pair<std::weak_ptr<ValueT>, ValueT *> &threadInstance =
-        staticCache[perInstanceState.get()];
-    if (ValueT *value = threadInstance.second)
+    Observer &threadInstance = staticCache[perInstanceState.get()];
+    if (ValueT *value = *threadInstance.ptr)
       return *value;
 
     // Otherwise, create a new instance for this thread.
     {
       llvm::sys::SmartScopedLock<true> threadInstanceLock(
           perInstanceState->instanceMutex);
-      threadInstance.second =
-          perInstanceState->instances.emplace_back(std::make_unique<ValueT>())
-              .get();
+      perInstanceState->instances.emplace_back(threadInstance);
     }
-    threadInstance.first =
-        std::shared_ptr<ValueT>(perInstanceState, threadInstance.second);
+    threadInstance.keepalive = perInstanceState;
 
     // Before returning the new instance, take the chance to clear out any used
     // entries in the static map. The cache is only cleared within the same
     // thread to remove the need to lock the cache itself.
     staticCache.clearExpiredEntries();
-    return *threadInstance.second;
+    return **threadInstance.ptr;
   }
   ValueT &operator*() { return get(); }
   ValueT *operator->() { return &get(); }

@Mogball merged commit 6977bfb into llvm:main on May 24, 2024
4 of 7 checks passed
/// This is the double pointer, explicitly allocated because we need to keep
/// the address stable if the TLC map re-allocates. It is owned by the
/// observer and shared with the value owner.
std::unique_ptr<ValueT *> ptr;
@Mogball (Contributor, Author) commented on the lines above:

TL;DR: compared to before, we had a single `weak_ptr<ValueT *>` that served the purposes of both `ptr` and `keepalive` in this new TLC entry. Splitting them up allows the common case to be faster by not having the atomic.
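A rough illustration of the trade-off being described, assuming the earlier scheme's hit path went through a `weak_ptr` (an atomic reference-count operation on every access), whereas the split scheme's hit path is an ordinary pointer load:

```cpp
#include <memory>

struct Value { int x = 0; };

// Earlier scheme (sketch): every cache hit pays for weak_ptr::lock(), which
// atomically increments and then decrements a reference count.
int readOld(std::weak_ptr<Value> &entry) {
  if (std::shared_ptr<Value> v = entry.lock()) // atomic ref-count traffic
    return v->x;
  return -1;
}

// Split scheme (sketch): liveness is encoded by nulling the double pointer on
// destruction, so a hit is a plain load plus a null check.
int readNew(Value **entry) {
  if (Value *v = *entry)
    return v->x;
  return -1;
}
```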

@stellaraccident (Contributor) commented:

I looked at both patches but haven't been online to see if you reverted the original. If not, I'd strongly prefer that you revert completely to the original state and then fix forward with more review.

These kinds of races are very tricky, and while the performance is important, staying out of a fix-forward loop is just as important.

Thank you for doing this! But bugs here can have extreme impacts on everyone, and it is better to be systematic and careful.

@stellaraccident (Contributor) commented May 24, 2024

Also, if referencing "before" in a patch description, please link to the commit or patch you are referring to.

@kiranchandramohan (Contributor) commented May 24, 2024

I am getting some compilation errors related to this patch.

llvm-project/mlir/lib/Support/StorageUniquer.cpp:117:20: error: implicit instantiation of undefined template 'std::atomic<(anonymous namespace)::ParametricStorageUniquer::Shard *>'
  117 |       : shards(new std::atomic<Shard *>[numShards]), numShards(numShards),
  ...
error: implicit instantiation of undefined template 'std::atomic<(anonymous namespace)::ParametricStorageUniquer::Shard *>'
  130 |           static_assert(sizeof(_Tp)>0,
...
error: implicit instantiation of undefined template 'std::atomic<(anonymous namespace)::ParametricStorageUniquer::Shard *>'
  720 |         return get()[__i];
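For what it's worth, these errors look like a missing `<atomic>` include becoming visible: the patch drops `llvm/Support/ManagedStatic.h` from `ThreadLocalCache.h`, and that header had been supplying `<atomic>` transitively to `StorageUniquer.cpp`. A guess at the fix, not the actual follow-up commit:

```cpp
// mlir/lib/Support/StorageUniquer.cpp
#include <atomic> // assumed fix: make std::atomic<Shard *> a complete type
```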

kiranchandramohan added a commit that referenced this pull request May 24, 2024
Labels: mlir:core (MLIR Core Infrastructure), mlir
5 participants