Thread Safety Analysis: Support reentrant capabilities #137133
Conversation
@llvm/pr-subscribers-clang-analysis @llvm/pr-subscribers-clang

Author: Marco Elver (melver)

Changes

Introduce the reentrant_capability attribute, which may be specified alongside the capability(..) attribute to denote that the defined capability type is reentrant. Marking a capability as reentrant means that acquiring the same capability multiple times is safe, and does not produce warnings on attempted re-acquisition.

The most significant changes required are plumbing to propagate if the attribute is present to a CapabilityExpr, and introducing ReentrancyCount to the LockableFactEntry class. Care was taken to avoid increasing the size of both CapabilityExpr and FactEntry.

Patch is 39.45 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/137133.diff

11 Files Affected:
diff --git a/clang/docs/ReleaseNotes.rst b/clang/docs/ReleaseNotes.rst
index cf90218c562e2..6d2b3b288d506 100644
--- a/clang/docs/ReleaseNotes.rst
+++ b/clang/docs/ReleaseNotes.rst
@@ -340,6 +340,7 @@ Improvements to Clang's diagnostics
as function arguments or return value respectively. Note that
:doc:`ThreadSafetyAnalysis` still does not perform alias analysis. The
feature will be default-enabled with ``-Wthread-safety`` in a future release.
+- The :doc:`ThreadSafetyAnalysis` now supports reentrant capabilities.
- Clang will now do a better job producing common nested names, when producing
common types for ternary operator, template argument deduction and multiple return auto deduction.
- The ``-Wsign-compare`` warning now treats expressions with bitwise not(~) and minus(-) as signed integers
diff --git a/clang/docs/ThreadSafetyAnalysis.rst b/clang/docs/ThreadSafetyAnalysis.rst
index 130069c5659d6..368826aebc689 100644
--- a/clang/docs/ThreadSafetyAnalysis.rst
+++ b/clang/docs/ThreadSafetyAnalysis.rst
@@ -434,6 +434,13 @@ class can be used as a capability. The string argument specifies the kind of
capability in error messages, e.g. ``"mutex"``. See the ``Container`` example
given above, or the ``Mutex`` class in :ref:`mutexheader`.
+REENTRANT
+---------
+
+``REENTRANT`` is an attribute on capability classes, denoting that they are
+reentrant. Marking a capability as reentrant means that acquiring the same
+capability multiple times is safe.
+
.. _scoped_capability:
SCOPED_CAPABILITY
@@ -846,6 +853,9 @@ implementation.
#define CAPABILITY(x) \
THREAD_ANNOTATION_ATTRIBUTE__(capability(x))
+ #define REENTRANT \
+ THREAD_ANNOTATION_ATTRIBUTE__(reentrant_capability)
+
#define SCOPED_CAPABILITY \
THREAD_ANNOTATION_ATTRIBUTE__(scoped_lockable)
diff --git a/clang/include/clang/Analysis/Analyses/ThreadSafetyCommon.h b/clang/include/clang/Analysis/Analyses/ThreadSafetyCommon.h
index e99c5b2466334..29d464f867367 100644
--- a/clang/include/clang/Analysis/Analyses/ThreadSafetyCommon.h
+++ b/clang/include/clang/Analysis/Analyses/ThreadSafetyCommon.h
@@ -271,26 +271,32 @@ class CFGWalker {
// translateAttrExpr needs it, but that should be moved too.
class CapabilityExpr {
private:
- /// The capability expression and whether it's negated.
- llvm::PointerIntPair<const til::SExpr *, 1, bool> CapExpr;
+ /// The capability expression and flags.
+ llvm::PointerIntPair<const til::SExpr *, 2, unsigned> CapExpr;
/// The kind of capability as specified by @ref CapabilityAttr::getName.
StringRef CapKind;
public:
- CapabilityExpr() : CapExpr(nullptr, false) {}
- CapabilityExpr(const til::SExpr *E, StringRef Kind, bool Neg)
- : CapExpr(E, Neg), CapKind(Kind) {}
+ static constexpr unsigned FlagNegative = 1u << 0;
+ static constexpr unsigned FlagReentrant = 1u << 1;
+
+ CapabilityExpr() : CapExpr(nullptr, 0) {}
+ CapabilityExpr(const til::SExpr *E, StringRef Kind, unsigned Flags)
+ : CapExpr(E, Flags), CapKind(Kind) {}
// Don't allow implicitly-constructed StringRefs since we'll capture them.
- template <typename T> CapabilityExpr(const til::SExpr *, T, bool) = delete;
+ template <typename T>
+ CapabilityExpr(const til::SExpr *, T, unsigned) = delete;
const til::SExpr *sexpr() const { return CapExpr.getPointer(); }
StringRef getKind() const { return CapKind; }
- bool negative() const { return CapExpr.getInt(); }
+ bool negative() const { return CapExpr.getInt() & FlagNegative; }
+ bool reentrant() const { return CapExpr.getInt() & FlagReentrant; }
CapabilityExpr operator!() const {
- return CapabilityExpr(CapExpr.getPointer(), CapKind, !CapExpr.getInt());
+ return CapabilityExpr(CapExpr.getPointer(), CapKind,
+ CapExpr.getInt() ^ FlagNegative);
}
bool equals(const CapabilityExpr &other) const {
@@ -388,7 +394,7 @@ class SExprBuilder {
til::LiteralPtr *createVariable(const VarDecl *VD);
// Create placeholder for this: we don't know the VarDecl on construction yet.
- std::pair<til::LiteralPtr *, StringRef>
+ std::tuple<til::LiteralPtr *, StringRef, unsigned>
createThisPlaceholder(const Expr *Exp);
// Translate a clang statement or expression to a TIL expression.
diff --git a/clang/include/clang/Basic/Attr.td b/clang/include/clang/Basic/Attr.td
index d48aed5b73cf5..88be9d3d13629 100644
--- a/clang/include/clang/Basic/Attr.td
+++ b/clang/include/clang/Basic/Attr.td
@@ -3990,6 +3990,13 @@ def LocksExcluded : InheritableAttr {
let Documentation = [Undocumented];
}
+def ReentrantCapability : InheritableAttr {
+ let Spellings = [Clang<"reentrant_capability">];
+ let Subjects = SubjectList<[Record, TypedefName]>;
+ let Documentation = [Undocumented];
+ let SimpleHandler = 1;
+}
+
// C/C++ consumed attributes.
def Consumable : InheritableAttr {
diff --git a/clang/lib/Analysis/ThreadSafety.cpp b/clang/lib/Analysis/ThreadSafety.cpp
index 42fb0fe7dcdaa..5b1883a0a4b15 100644
--- a/clang/lib/Analysis/ThreadSafety.cpp
+++ b/clang/lib/Analysis/ThreadSafety.cpp
@@ -99,8 +99,6 @@ class FactSet;
/// particular point in program execution. Currently, a fact is a capability,
/// along with additional information, such as where it was acquired, whether
/// it is exclusive or shared, etc.
-///
-/// FIXME: this analysis does not currently support re-entrant locking.
class FactEntry : public CapabilityExpr {
public:
enum FactEntryKind { Lockable, ScopedLockable };
@@ -114,21 +112,25 @@ class FactEntry : public CapabilityExpr {
};
private:
- const FactEntryKind Kind : 8;
+ const FactEntryKind Kind : 4;
/// Exclusive or shared.
- LockKind LKind : 8;
+ const LockKind LKind : 4;
+
+ /// How it was acquired.
+ const SourceKind Source : 4;
- // How it was acquired.
- SourceKind Source : 8;
+ /// Reentrancy count.
+ unsigned int ReentrancyCount : 20;
/// Where it was acquired.
- SourceLocation AcquireLoc;
+ const SourceLocation AcquireLoc;
public:
FactEntry(FactEntryKind FK, const CapabilityExpr &CE, LockKind LK,
SourceLocation Loc, SourceKind Src)
- : CapabilityExpr(CE), Kind(FK), LKind(LK), Source(Src), AcquireLoc(Loc) {}
+ : CapabilityExpr(CE), Kind(FK), LKind(LK), Source(Src),
+ ReentrancyCount(0), AcquireLoc(Loc) {}
virtual ~FactEntry() = default;
LockKind kind() const { return LKind; }
@@ -139,22 +141,41 @@ class FactEntry : public CapabilityExpr {
bool declared() const { return Source == Declared; }
bool managed() const { return Source == Managed; }
- virtual void
- handleRemovalFromIntersection(const FactSet &FSet, FactManager &FactMan,
- SourceLocation JoinLoc, LockErrorKind LEK,
- ThreadSafetyHandler &Handler) const = 0;
+ virtual void handleRemovalFromIntersection(FactSet &FSet,
+ FactManager &FactMan,
+ SourceLocation JoinLoc,
+ LockErrorKind LEK,
+ ThreadSafetyHandler &Handler) = 0;
virtual void handleLock(FactSet &FSet, FactManager &FactMan,
const FactEntry &entry,
- ThreadSafetyHandler &Handler) const = 0;
+ ThreadSafetyHandler &Handler) = 0;
virtual void handleUnlock(FactSet &FSet, FactManager &FactMan,
const CapabilityExpr &Cp, SourceLocation UnlockLoc,
- bool FullyRemove,
- ThreadSafetyHandler &Handler) const = 0;
+ bool FullyRemove, ThreadSafetyHandler &Handler) = 0;
// Return true if LKind >= LK, where exclusive > shared
bool isAtLeast(LockKind LK) const {
return (LKind == LK_Exclusive) || (LK == LK_Shared);
}
+
+ // Return true if we can acquire a capability reentrant.
+ [[nodiscard]] bool tryReenter(LockKind ReenterKind) {
+ if (!reentrant())
+ return false;
+ if (kind() != ReenterKind)
+ return false;
+ if (++ReentrancyCount == 0)
+ llvm::report_fatal_error("Maximum reentrancy reached");
+ return true;
+ }
+
+ // Return true if we are releasing a capability previously acquired reentrant.
+ [[nodiscard]] bool leaveReentrant() {
+ if (!ReentrancyCount)
+ return false;
+ ReentrancyCount--;
+ return true;
+ }
};
using FactID = unsigned short;
@@ -163,7 +184,7 @@ using FactID = unsigned short;
/// the analysis of a single routine.
class FactManager {
private:
- std::vector<std::unique_ptr<const FactEntry>> Facts;
+ std::vector<std::unique_ptr<FactEntry>> Facts;
public:
FactID newFact(std::unique_ptr<FactEntry> Entry) {
@@ -171,7 +192,7 @@ class FactManager {
return static_cast<unsigned short>(Facts.size() - 1);
}
- const FactEntry &operator[](FactID F) const { return *Facts[F]; }
+ FactEntry &operator[](FactID F) { return *Facts[F]; }
};
/// A FactSet is the set of facts that are known to be true at a
@@ -241,23 +262,21 @@ class FactSet {
});
}
- const FactEntry *findLock(FactManager &FM, const CapabilityExpr &CapE) const {
+ FactEntry *findLock(FactManager &FM, const CapabilityExpr &CapE) {
auto I = std::find_if(begin(), end(), [&](FactID ID) {
return FM[ID].matches(CapE);
});
return I != end() ? &FM[*I] : nullptr;
}
- const FactEntry *findLockUniv(FactManager &FM,
- const CapabilityExpr &CapE) const {
+ FactEntry *findLockUniv(FactManager &FM, const CapabilityExpr &CapE) {
auto I = std::find_if(begin(), end(), [&](FactID ID) -> bool {
return FM[ID].matchesUniv(CapE);
});
return I != end() ? &FM[*I] : nullptr;
}
- const FactEntry *findPartialMatch(FactManager &FM,
- const CapabilityExpr &CapE) const {
+ FactEntry *findPartialMatch(FactManager &FM, const CapabilityExpr &CapE) {
auto I = std::find_if(begin(), end(), [&](FactID ID) -> bool {
return FM[ID].partiallyMatches(CapE);
});
@@ -864,10 +883,9 @@ class LockableFactEntry : public FactEntry {
SourceKind Src = Acquired)
: FactEntry(Lockable, CE, LK, Loc, Src) {}
- void
- handleRemovalFromIntersection(const FactSet &FSet, FactManager &FactMan,
- SourceLocation JoinLoc, LockErrorKind LEK,
- ThreadSafetyHandler &Handler) const override {
+ void handleRemovalFromIntersection(FactSet &FSet, FactManager &FactMan,
+ SourceLocation JoinLoc, LockErrorKind LEK,
+ ThreadSafetyHandler &Handler) override {
if (!asserted() && !negative() && !isUniversal()) {
Handler.handleMutexHeldEndOfScope(getKind(), toString(), loc(), JoinLoc,
LEK);
@@ -875,15 +893,18 @@ class LockableFactEntry : public FactEntry {
}
void handleLock(FactSet &FSet, FactManager &FactMan, const FactEntry &entry,
- ThreadSafetyHandler &Handler) const override {
+ ThreadSafetyHandler &Handler) override {
+ if (tryReenter(entry.kind()))
+ return;
Handler.handleDoubleLock(entry.getKind(), entry.toString(), loc(),
entry.loc());
}
void handleUnlock(FactSet &FSet, FactManager &FactMan,
const CapabilityExpr &Cp, SourceLocation UnlockLoc,
- bool FullyRemove,
- ThreadSafetyHandler &Handler) const override {
+ bool FullyRemove, ThreadSafetyHandler &Handler) override {
+ if (leaveReentrant())
+ return;
FSet.removeLock(FactMan, Cp);
if (!Cp.negative()) {
FSet.addLock(FactMan, std::make_unique<LockableFactEntry>(
@@ -935,10 +956,9 @@ class ScopedLockableFactEntry : public FactEntry {
UnderlyingMutexes.push_back(UnderlyingCapability{M, UCK_ReleasedShared});
}
- void
- handleRemovalFromIntersection(const FactSet &FSet, FactManager &FactMan,
- SourceLocation JoinLoc, LockErrorKind LEK,
- ThreadSafetyHandler &Handler) const override {
+ void handleRemovalFromIntersection(FactSet &FSet, FactManager &FactMan,
+ SourceLocation JoinLoc, LockErrorKind LEK,
+ ThreadSafetyHandler &Handler) override {
if (LEK == LEK_LockedAtEndOfFunction || LEK == LEK_NotLockedAtEndOfFunction)
return;
@@ -956,7 +976,7 @@ class ScopedLockableFactEntry : public FactEntry {
}
void handleLock(FactSet &FSet, FactManager &FactMan, const FactEntry &entry,
- ThreadSafetyHandler &Handler) const override {
+ ThreadSafetyHandler &Handler) override {
for (const auto &UnderlyingMutex : UnderlyingMutexes) {
if (UnderlyingMutex.Kind == UCK_Acquired)
lock(FSet, FactMan, UnderlyingMutex.Cap, entry.kind(), entry.loc(),
@@ -968,8 +988,7 @@ class ScopedLockableFactEntry : public FactEntry {
void handleUnlock(FactSet &FSet, FactManager &FactMan,
const CapabilityExpr &Cp, SourceLocation UnlockLoc,
- bool FullyRemove,
- ThreadSafetyHandler &Handler) const override {
+ bool FullyRemove, ThreadSafetyHandler &Handler) override {
assert(!Cp.negative() && "Managing object cannot be negative.");
for (const auto &UnderlyingMutex : UnderlyingMutexes) {
// Remove/lock the underlying mutex if it exists/is still unlocked; warn
@@ -996,8 +1015,8 @@ class ScopedLockableFactEntry : public FactEntry {
void lock(FactSet &FSet, FactManager &FactMan, const CapabilityExpr &Cp,
LockKind kind, SourceLocation loc,
ThreadSafetyHandler *Handler) const {
- if (const FactEntry *Fact = FSet.findLock(FactMan, Cp)) {
- if (Handler)
+ if (FactEntry *Fact = FSet.findLock(FactMan, Cp)) {
+ if (!Fact->tryReenter(kind) && Handler)
Handler->handleDoubleLock(Cp.getKind(), Cp.toString(), Fact->loc(),
loc);
} else {
@@ -1009,7 +1028,9 @@ class ScopedLockableFactEntry : public FactEntry {
void unlock(FactSet &FSet, FactManager &FactMan, const CapabilityExpr &Cp,
SourceLocation loc, ThreadSafetyHandler *Handler) const {
- if (FSet.findLock(FactMan, Cp)) {
+ if (FactEntry *Fact = FSet.findLock(FactMan, Cp)) {
+ if (Fact->leaveReentrant())
+ return;
FSet.removeLock(FactMan, Cp);
FSet.addLock(FactMan, std::make_unique<LockableFactEntry>(
!Cp, LK_Exclusive, loc));
@@ -1071,28 +1092,28 @@ class ThreadSafetyAnalyzer {
bool join(const FactEntry &a, const FactEntry &b, bool CanModify);
- void intersectAndWarn(FactSet &EntrySet, const FactSet &ExitSet,
+ void intersectAndWarn(FactSet &EntrySet, FactSet &ExitSet,
SourceLocation JoinLoc, LockErrorKind EntryLEK,
LockErrorKind ExitLEK);
- void intersectAndWarn(FactSet &EntrySet, const FactSet &ExitSet,
+ void intersectAndWarn(FactSet &EntrySet, FactSet &ExitSet,
SourceLocation JoinLoc, LockErrorKind LEK) {
intersectAndWarn(EntrySet, ExitSet, JoinLoc, LEK, LEK);
}
void runAnalysis(AnalysisDeclContext &AC);
- void warnIfMutexNotHeld(const FactSet &FSet, const NamedDecl *D,
- const Expr *Exp, AccessKind AK, Expr *MutexExp,
+ void warnIfMutexNotHeld(FactSet &FSet, const NamedDecl *D, const Expr *Exp,
+ AccessKind AK, Expr *MutexExp,
ProtectedOperationKind POK, til::LiteralPtr *Self,
SourceLocation Loc);
- void warnIfMutexHeld(const FactSet &FSet, const NamedDecl *D, const Expr *Exp,
+ void warnIfMutexHeld(FactSet &FSet, const NamedDecl *D, const Expr *Exp,
Expr *MutexExp, til::LiteralPtr *Self,
SourceLocation Loc);
- void checkAccess(const FactSet &FSet, const Expr *Exp, AccessKind AK,
+ void checkAccess(FactSet &FSet, const Expr *Exp, AccessKind AK,
ProtectedOperationKind POK);
- void checkPtAccess(const FactSet &FSet, const Expr *Exp, AccessKind AK,
+ void checkPtAccess(FactSet &FSet, const Expr *Exp, AccessKind AK,
ProtectedOperationKind POK);
};
@@ -1306,8 +1327,7 @@ void ThreadSafetyAnalyzer::addLock(FactSet &FSet,
Entry->loc(), Entry->getKind());
}
- // FIXME: Don't always warn when we have support for reentrant locks.
- if (const FactEntry *Cp = FSet.findLock(FactMan, *Entry)) {
+ if (FactEntry *Cp = FSet.findLock(FactMan, *Entry)) {
if (!Entry->asserted())
Cp->handleLock(FSet, FactMan, *Entry, Handler);
} else {
@@ -1323,7 +1343,7 @@ void ThreadSafetyAnalyzer::removeLock(FactSet &FSet, const CapabilityExpr &Cp,
if (Cp.shouldIgnore())
return;
- const FactEntry *LDat = FSet.findLock(FactMan, Cp);
+ FactEntry *LDat = FSet.findLock(FactMan, Cp);
if (!LDat) {
SourceLocation PrevLoc;
if (const FactEntry *Neg = FSet.findLock(FactMan, !Cp))
@@ -1546,7 +1566,7 @@ class BuildLockset : public ConstStmtVisitor<BuildLockset> {
ThreadSafetyAnalyzer *Analyzer;
FactSet FSet;
// The fact set for the function on exit.
- const FactSet &FunctionExitFSet;
+ FactSet &FunctionExitFSet;
LocalVariableMap::Context LVarCtx;
unsigned CtxIndex;
@@ -1571,7 +1591,7 @@ class BuildLockset : public ConstStmtVisitor<BuildLockset> {
public:
BuildLockset(ThreadSafetyAnalyzer *Anlzr, CFGBlockInfo &Info,
- const FactSet &FunctionExitFSet)
+ FactSet &FunctionExitFSet)
: ConstStmtVisitor<BuildLockset>(), Analyzer(Anlzr), FSet(Info.EntrySet),
FunctionExitFSet(FunctionExitFSet), LVarCtx(Info.EntryContext),
CtxIndex(Info.EntryIndex) {}
@@ -1590,10 +1610,12 @@ class BuildLockset : public ConstStmtVisitor<BuildLockset> {
/// Warn if the LSet does not contain a lock sufficient to protect access
/// of at least the passed in AccessKind.
-void ThreadSafetyAnalyzer::warnIfMutexNotHeld(
- const FactSet &FSet, const NamedDecl *D, const Expr *Exp, AccessKind AK,
- Expr *MutexExp, ProtectedOperationKind POK, til::LiteralPtr *Self,
- SourceLocation Loc) {
+void ThreadSafetyAnalyzer::warnIfMutexNotHeld(FactSet &FSet, const NamedDecl *D,
+ const Expr *Exp, AccessKind AK,
+ Expr *MutexExp,
+ ProtectedOperationKind POK,
+ til::LiteralPtr *Self,
+ SourceLocation Loc) {
LockKind LK = getLockKindFromAccessKind(AK);
CapabilityExpr Cp = SxBuilder.translateAttrExpr(MutexExp, D, Exp, Self);
if (Cp.isInvalid()) {
@@ -1649,9 +1671,8 @@ void ThreadSafetyAnalyzer::warnIfMutexNotHeld(
}
/// Warn if the LSet contains the given lock.
-void ThreadSafetyAnalyzer::warnIfMutexHeld(const FactSet &FSet,
- const NamedDecl *D, const Expr *Exp,
- Expr *MutexExp,
+void ThreadSafetyAnalyzer::warnIfMutexHeld(FactSet &FSet, const NamedDecl *D,
+ const Expr *Exp, Expr *MutexExp,
til::LiteralPtr *Self,
SourceLocation Loc) {
CapabilityExpr Cp = SxBuilder.translateAttrExpr(MutexExp, D, Exp, Self);
@@ -1674,7 +1695,7 @@ void ThreadSafetyAnalyzer::warnIfMutexHeld(const FactSet &FSet,
/// marked with guarded_by, we must ensure the appropriate mutexes are held.
/// Similarly, we check if the access is to an expression that dereferences
/// a pointer marked with pt_guarded_by.
-void ThreadSafetyAnalyzer::checkAccess(const FactSet &FSet, const Expr *Exp,
+void ThreadSafetyAnalyzer::checkAccess(FactSet &FSet, const Expr *Exp,
AccessKind AK,
ProtectedOperationKind POK) {
Exp = Exp->IgnoreImplicit()->IgnoreParenCasts();
@@ -1741,7 +1762...
[truncated]
In https://lore.kernel.org/all/CANpmjNPquO=W1JAh1FNQb8pMQjgeZAKCPQUAd7qUg=5pjJ6x=Q@mail.gmail.com/ I wrote:
But thinking about this more, this is unnecessarily complex and introduces other headaches in the analysis. The simpler option is to just declare the whole capability/lock as reentrant, as done in this PR. My conclusion is that the more complex version would be a case of YAGNI, and in the odd cases where it might need to be more fine-grained, marking the whole capability as reentrant will be traded off against potential false negatives. For user space locking primitives I've never seen this kind of mixed interface -- only in the kernel I could imagine this to be something we could cook up, but I'm happy to take the false negatives for having a simpler-to-use and saner feature in Clang.
Force-pushed 63aca0c to cff267e.
I think the biggest issue is that removing const from FactEntry does not work. You'll have to undo all those changes and instead create a new FactEntry for every lock/unlock.
static constexpr unsigned FlagNegative = 1u << 0;
static constexpr unsigned FlagReentrant = 1u << 1;
I wonder if there is a use case for having both. If the capability can be acquired an arbitrary number of times, does it make sense to require it not being held? At least the negative capability should not be required when acquiring a reentrant mutex.
I don't follow. We need a way to query both, so we need at least 2 bits to store that info.
We do, but the question is whether we have 4 possible states or just 3: can a mutex be both negative and reentrant? If yes, we have 2×bool, otherwise we have something like enum {Regular, Negative, Reentrant}.
We need 4 states.
I'm adding this test:
+class TestNegativeWithReentrantMutex {
+ ReentrantMutex rmu;
+ int a GUARDED_BY(rmu);
+
+public:
+ void baz() EXCLUSIVE_LOCKS_REQUIRED(!rmu) {
+ rmu.Lock();
+ a = 0;
+ rmu.Unlock();
+ }
+};
That's exactly the question. Does this make sense? If rmu can be acquired an arbitrary number of times, why does this require !rmu? It can be acquired even if it is already held.

So I guess the question is whether we should not simply disallow !rmu.
There might be valid uses for that, even if it's not something we'd expect to be frequent. Developer creativity how to use a feature can sometimes lead to new uses we never expected, and I wouldn't want to prohibit them.
If it's not wrong semantically, I tend to not want to impose a certain pattern and imply to the user "you're holding it wrong" even though it's not strictly wrong and they might actually know what they're doing. I've been on the receiving end of such forced restrictions, and it's rather annoying - though quite often it's because the restriction allowed for a simpler implementation, I'd argue in this case that's a moot point.
I think if we were talking about synchronization only, I'd be more inclined to say "this never makes sense", but the feature is now about arbitrary capabilities.
For example, we might want to express that some capability is never held when entering a function for performance reasons, yet that capability itself can be reentrantly acquired on slow paths. I don't see a reason to prohibit uses like that.
Force-pushed 1e01f55 to cadff84.

Good catch, reworked this.
Force-pushed 1a2594c to 89ceb0d.
Two more things come to mind:

- Consider adding a check to SemaDeclAttr.cpp that the new attribute is always accompanied by capability or lockable. Although I wonder whether that's too early. I'm not sure if we can see the other attributes already.
- You should probably do something in ThreadSafetyAnalyzer::intersectAndWarn: if two branches join with different counts, we should warn and drop the fact with higher count.
std::optional<FactID> replaceLock(FactManager &FM, iterator It,
                                  std::unique_ptr<FactEntry> Entry) {
  if (It == end())
    return std::nullopt;
  FactID F = FM.newFact(std::move(Entry));
  *It = F;
  return F;
}

std::optional<FactID> replaceLock(FactManager &FM, const CapabilityExpr &CapE,
                                  std::unique_ptr<FactEntry> Entry) {
  return replaceLock(FM, findLockIter(FM, CapE), std::move(Entry));
}
Can't we just use removeLock and then addLock?
We could, but it's less efficient, esp. if we used findLockIter():

1. Without findLockIter: we save a pop_back() and push_back().
2. With a preceding findLockIter: like (1), but also saves a search.

Sure, maybe we're just shaving off some constant overheads, but I also find it makes it less verbose and more readable.
I have this later patch I would send out as a separate PR later:
commit 9a41f2b68a28e6a2239b45a11a0cd360dce320af
Author: Marco Elver <[email protected]>
Date: Fri Apr 25 18:51:16 2025 +0200
Thread Safety Analysis: Use replaceLock instead of removeLock+addLock
In ScopedLockableFactEntry::unlock(), we can avoid a second search,
pop_back(), and push_back() if we use the already obtained iterator into
the FactSet to replace the old FactEntry and take its position in the
vector.
diff --git a/clang/lib/Analysis/ThreadSafety.cpp b/clang/lib/Analysis/ThreadSafety.cpp
index cb62f11d6f15..429f22715886 100644
--- a/clang/lib/Analysis/ThreadSafety.cpp
+++ b/clang/lib/Analysis/ThreadSafety.cpp
@@ -1090,9 +1090,9 @@ private:
return;
}
- FSet.removeLock(FactMan, Cp);
- FSet.addLock(FactMan,
- std::make_unique<LockableFactEntry>(!Cp, LK_Exclusive, loc));
+ FSet.replaceLock(
+ FactMan, It,
+ std::make_unique<LockableFactEntry>(!Cp, LK_Exclusive, loc));
} else if (Handler) {
SourceLocation PrevLoc;
if (const FactEntry *Neg = FSet.findLock(FactMan, !Cp))
I like the idea of reusing iterators for removeLock. In fact I can see a number of additional places where we do findLock followed by removeLock, and I agree that reusing the search result sounds like a good idea. But I think the case for combining removeLock and addLock into replaceLock is not that strong.

For now my main worry is that it makes the change a bit larger than it would need to be. Such NFC refactorings could also be done later (or earlier, if you want, but I wouldn't want to drag this out).
replaceLock without caller-iterator still saves a copy+pop_back+push_back. Not much, but not nothing either.
This PR should include the bare minimum use of replaceLock - I have one patch queued that is not part of this PR to do an unrelated substitution with replaceLock.
If anything I'd move replaceLock introduction before this PR, but I think we're just splitting hairs at that point.
Comment fix and apply new clang-format style in ScopedLockableFactEntry::unlock(). Factored out from #137133 NFC.
Force-pushed 427dcd2 to 018e11b.
Added.
Fixed. Also addressed all other comments as best as I could. PTAL.
Force-pushed 018e11b to 2d31d33.
Factored out from llvm/llvm-project#137133 NFC.
Move documentation inline and add missing documentation for LEK_NotLockedAtEndOfFunction. NFC. Factored out from: #137133
Force-pushed 2d31d33 to c018a47.
Thanks for the feedback. Addressed comments as best as I could. Most notable changes:
PTAL.
Rather than holding a single bool, switch it to contain flags, which is both more descriptive and simplifies adding more flags in subsequent changes. NFC.
Introduce the `reentrant_capability` attribute, which may be specified alongside the `capability(..)` attribute to denote that the defined capability type is reentrant. Marking a capability as reentrant means that acquiring the same capability multiple times is safe, and does not produce warnings on attempted re-acquisition. The most significant changes required are plumbing to propagate if the attribute is present to a CapabilityExpr, and then introducing a ReentrancyCount to FactEntry that can be incremented while a fact remains in the FactSet.
Force-pushed c018a47 to 423ab37.
Gentle ping - PTAL. Many thanks!
Introduce the reentrant_capability attribute, which may be specified alongside the capability(..) attribute to denote that the defined capability type is reentrant. Marking a capability as reentrant means that acquiring the same capability multiple times is safe, and does not produce warnings on attempted re-acquisition.

The most significant changes required are plumbing to propagate if the attribute is present to a CapabilityExpr, and introducing ReentrancyCount to the LockableFactEntry class.