[scudo] Mitigate commitbase exploit #75295
Conversation
Thank you for submitting a Pull Request (PR) to the LLVM Project! This PR will be automatically labeled and the relevant teams will be notified. If you wish to, you can add reviewers by using the "Reviewers" section on this page. If this is not working for you, it is probably because you do not have write permissions for the repository; in that case you can instead tag reviewers by name in a comment. If you have received no comments on your PR for a week, you can request a review by "ping"ing the PR with a comment. If you have further questions, they may be answered by the LLVM GitHub User Guide. You can also ask questions in a comment on this PR, on the LLVM Discord or on the forums.
The branch was force-pushed from 70f5ca3 to 1407cd4, and later from 1407cd4 to 1ff75e4.
@llvm/pr-subscribers-compiler-rt-sanitizer

Author: None (SIELA1915)

Changes

This is a proposed mitigation for an attack which overwrites the commit base of a fake secondary chunk. The fake secondary chunk is forged by an attacker who has successfully leaked the scudo cookie. When the fake chunk is freed, it is put into the secondary cache, and when it is later retrieved from the cache, a chunk is allocated at the CommitBase location, which the attacker can choose freely.

This mitigation tracks, in a separate memory region, the addresses of all currently allocated secondary chunks, and only adds freed secondary chunks to the cache if they were recorded in this dedicated memory region. The tracking happens through a simple list of addresses, which is scanned at allocation and deallocation time. Each block of contiguous memory stores a number of addresses that is configurable at compile time, plus one additional address pointing to the next block in case more addresses are needed.

Two options were added to the Allocator Config: InUseBlocksSize, which specifies the number of header addresses saved in one contiguous memory region, and VerifyInUseAddresses, which can be set to completely disable this mitigation.

Performance impact was tested using some custom benchmarks, which are available at https://github.com/SIELA1915/house-of-scudo/tree/master/benchmark. While in the worst case (many secondary allocations with many secondary chunks allocated at the same time) the performance impact is quite high, in typical usage it should not matter much. This also appears to be confirmed by benchmark runs of mimalloc-bench, where scudo is the unmodified version and scudo_fixed is the one with the mitigation (values are averages of five runs for each).
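Before the full diff, a minimal standalone sketch of the tracking structure may help. Each tracking block is an array of InUseBlocksSize pointers plus a trailing link slot, and both allocation and deallocation walk the chain linearly. The function and constant names below are illustrative, not the patch's actual identifiers:

```cpp
#include <cstddef>

// Illustrative stand-in for Config::Secondary::InUseBlocksSize.
constexpr std::size_t kSlots = 1000;

// Record a newly allocated secondary chunk address. Returns false if
// every block in the chain is full (the patch then maps a fresh block
// and links it through the trailing slot, Block[kSlots]).
bool recordAddress(void **Block, void *Ptr) {
  for (;;) {
    for (std::size_t I = 0; I < kSlots; ++I) {
      if (Block[I] == nullptr) {
        Block[I] = Ptr; // free slot: record the chunk
        return true;
      }
    }
    if (Block[kSlots] == nullptr)
      return false; // chain exhausted, caller must grow it
    Block = static_cast<void **>(Block[kSlots]); // follow the chain
  }
}

// On deallocation, accept the chunk only if its address was recorded
// earlier, and clear the slot so it can be reused.
bool verifyAndClear(void **Block, void *Ptr) {
  for (;;) {
    for (std::size_t I = 0; I < kSlots; ++I) {
      if (Block[I] == Ptr) {
        Block[I] = nullptr;
        return true;
      }
    }
    if (Block[kSlots] == nullptr)
      return false; // never recorded: reject the free
    Block = static_cast<void **>(Block[kSlots]);
  }
}
```

In the patch these scans run under the secondary allocator's Mutex in allocate() and deallocate(), which is where the per-operation cost proportional to the number of live secondary chunks comes from.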
Full diff: https://github.com/llvm/llvm-project/pull/75295.diff

4 Files Affected:
diff --git a/compiler-rt/lib/scudo/standalone/allocator_config.h b/compiler-rt/lib/scudo/standalone/allocator_config.h
index 3c6aa3acb0e45b..02d0b4e0e5d7ba 100644
--- a/compiler-rt/lib/scudo/standalone/allocator_config.h
+++ b/compiler-rt/lib/scudo/standalone/allocator_config.h
@@ -153,6 +153,8 @@ struct DefaultConfig {
static const s32 MinReleaseToOsIntervalMs = INT32_MIN;
static const s32 MaxReleaseToOsIntervalMs = INT32_MAX;
};
+ static const bool VerifyInUseAddresses = true;
+ static const u32 InUseBlocksSize = 1000U;
template <typename Config> using CacheT = MapAllocatorCache<Config>;
};
@@ -198,6 +200,8 @@ struct AndroidConfig {
static const s32 MinReleaseToOsIntervalMs = 0;
static const s32 MaxReleaseToOsIntervalMs = 1000;
};
+ static const bool VerifyInUseAddresses = true;
+ static const u32 InUseBlocksSize = 1000U;
template <typename Config> using CacheT = MapAllocatorCache<Config>;
};
@@ -230,6 +234,8 @@ struct FuchsiaConfig {
template <typename Config> using PrimaryT = SizeClassAllocator64<Config>;
struct Secondary {
+ static const bool VerifyInUseAddresses = true;
+ static const u32 InUseBlocksSize = 1000U;
template <typename Config> using CacheT = MapAllocatorNoCache<Config>;
};
template <typename Config> using SecondaryT = MapAllocator<Config>;
@@ -254,6 +260,8 @@ struct TrustyConfig {
template <typename Config> using PrimaryT = SizeClassAllocator64<Config>;
struct Secondary {
+ static const bool VerifyInUseAddresses = true;
+ static const u32 InUseBlocksSize = 1000U;
template <typename Config> using CacheT = MapAllocatorNoCache<Config>;
};
diff --git a/compiler-rt/lib/scudo/standalone/secondary.h b/compiler-rt/lib/scudo/standalone/secondary.h
index f52a4188bcf3a6..d7173aebbf0496 100644
--- a/compiler-rt/lib/scudo/standalone/secondary.h
+++ b/compiler-rt/lib/scudo/standalone/secondary.h
@@ -11,6 +11,7 @@
#include "chunk.h"
#include "common.h"
+#include "internal_defs.h"
#include "list.h"
#include "mem_map.h"
#include "memtag.h"
@@ -476,6 +477,23 @@ template <typename Config> class MapAllocator {
DCHECK_EQ(FreedBytes, 0U);
Cache.init(ReleaseToOsInterval);
Stats.init();
+
+ if (Config::Secondary::VerifyInUseAddresses) {
+ ReservedMemoryT InUseReserved;
+ InUseReserved.create(
+ 0U, sizeof(void *) * (Config::Secondary::InUseBlocksSize + 1),
+ "scudo:secondary_integrity");
+ DCHECK_NE(InUseReserved.getBase(), 0U);
+
+ InUseAddresses = InUseReserved.dispatch(
+ InUseReserved.getBase(),
+ sizeof(void *) * (Config::Secondary::InUseBlocksSize + 1));
+ CHECK(InUseAddresses.isAllocated());
+ InUseAddresses.setMemoryPermission(
+ InUseAddresses.getBase(),
+ sizeof(void *) * (Config::Secondary::InUseBlocksSize + 1), 0);
+ }
+
if (LIKELY(S))
S->link(&Stats);
}
@@ -544,6 +562,8 @@ template <typename Config> class MapAllocator {
u32 NumberOfAllocs GUARDED_BY(Mutex) = 0;
u32 NumberOfFrees GUARDED_BY(Mutex) = 0;
LocalStats Stats GUARDED_BY(Mutex);
+
+ MemMapT InUseAddresses GUARDED_BY(Mutex) = {};
};
// As with the Primary, the size passed to this function includes any desired
@@ -588,6 +608,54 @@ void *MapAllocator<Config>::allocate(const Options &Options, uptr Size,
BlockEnd - PtrInt);
{
ScopedLock L(Mutex);
+
+ if (Config::Secondary::VerifyInUseAddresses) {
+ void **IntegrityList =
+ reinterpret_cast<void **>(InUseAddresses.getBase());
+ bool isFull = true;
+ bool reachedEnd = false;
+
+ while (!reachedEnd) {
+ for (u32 I = 0; I < Config::Secondary::InUseBlocksSize; I++) {
+ if (IntegrityList[I] == nullptr) {
+ isFull = false;
+ IntegrityList[I] = Ptr;
+ break;
+ }
+ }
+ if (isFull &&
+ IntegrityList[Config::Secondary::InUseBlocksSize] != nullptr) {
+ IntegrityList = static_cast<void **>(
+ IntegrityList[Config::Secondary::InUseBlocksSize]);
+ } else {
+ reachedEnd = true;
+ }
+ }
+
+ if (isFull) {
+ ReservedMemoryT InUseReserved;
+ InUseReserved.create(
+ 0U, sizeof(void *) * (Config::Secondary::InUseBlocksSize + 1),
+ "scudo:secondary_integrity");
+ DCHECK_NE(InUseReserved.getBase(), 0U);
+
+ MemMapT NewAddresses = InUseReserved.dispatch(
+ InUseReserved.getBase(),
+ sizeof(void *) * (Config::Secondary::InUseBlocksSize + 1));
+ CHECK(NewAddresses.isAllocated());
+ NewAddresses.setMemoryPermission(
+ NewAddresses.getBase(),
+ sizeof(void *) * (Config::Secondary::InUseBlocksSize + 1), 0);
+
+ IntegrityList[Config::Secondary::InUseBlocksSize] =
+ reinterpret_cast<void *>(NewAddresses.getBase());
+
+ IntegrityList = static_cast<void **>(
+ IntegrityList[Config::Secondary::InUseBlocksSize]);
+ IntegrityList[0] = Ptr;
+ }
+ }
+
InUseBlocks.push_back(H);
AllocatedBytes += H->CommitSize;
FragmentedBytes += H->MemMap.getCapacity() - H->CommitSize;
@@ -662,6 +730,56 @@ void *MapAllocator<Config>::allocate(const Options &Options, uptr Size,
*BlockEndPtr = CommitBase + CommitSize;
{
ScopedLock L(Mutex);
+
+ if (Config::Secondary::VerifyInUseAddresses) {
+ void **IntegrityList =
+ reinterpret_cast<void **>(InUseAddresses.getBase());
+ bool isFull = true;
+ bool reachedEnd = false;
+
+ while (!reachedEnd) {
+ for (u32 I = 0; I < Config::Secondary::InUseBlocksSize; I++) {
+ if (IntegrityList[I] == nullptr) {
+ isFull = false;
+ IntegrityList[I] = reinterpret_cast<void *>(
+ HeaderPos + LargeBlock::getHeaderSize());
+ break;
+ }
+ }
+ if (isFull &&
+ IntegrityList[Config::Secondary::InUseBlocksSize] != nullptr) {
+ IntegrityList = static_cast<void **>(
+ IntegrityList[Config::Secondary::InUseBlocksSize]);
+ } else {
+ reachedEnd = true;
+ }
+ }
+
+ if (isFull) {
+ ReservedMemoryT InUseReserved;
+ InUseReserved.create(
+ 0U, sizeof(void *) * (Config::Secondary::InUseBlocksSize + 1),
+ "scudo:secondary_integrity");
+ DCHECK_NE(InUseReserved.getBase(), 0U);
+
+ MemMapT NewAddresses = InUseReserved.dispatch(
+ InUseReserved.getBase(),
+ sizeof(void *) * (Config::Secondary::InUseBlocksSize + 1));
+ CHECK(NewAddresses.isAllocated());
+ NewAddresses.setMemoryPermission(
+ NewAddresses.getBase(),
+ sizeof(void *) * (Config::Secondary::InUseBlocksSize + 1), 0);
+
+ IntegrityList[Config::Secondary::InUseBlocksSize] =
+ reinterpret_cast<void *>(NewAddresses.getBase());
+
+ IntegrityList = static_cast<void **>(
+ IntegrityList[Config::Secondary::InUseBlocksSize]);
+ IntegrityList[0] =
+ reinterpret_cast<void *>(HeaderPos + LargeBlock::getHeaderSize());
+ }
+ }
+
InUseBlocks.push_back(H);
AllocatedBytes += CommitSize;
FragmentedBytes += H->MemMap.getCapacity() - CommitSize;
@@ -681,6 +799,35 @@ void MapAllocator<Config>::deallocate(const Options &Options, void *Ptr)
const uptr CommitSize = H->CommitSize;
{
ScopedLock L(Mutex);
+
+ if (Config::Secondary::VerifyInUseAddresses) {
+ void **IntegrityList =
+ reinterpret_cast<void **>(InUseAddresses.getBase());
+ bool isValid = false;
+ bool reachedEnd = false;
+
+ while (!reachedEnd) {
+ for (u32 I = 0; I < Config::Secondary::InUseBlocksSize; I++) {
+ if (IntegrityList[I] == Ptr) {
+ isValid = true;
+ IntegrityList[I] = nullptr;
+ break;
+ }
+ }
+ if (!isValid &&
+ IntegrityList[Config::Secondary::InUseBlocksSize] != nullptr) {
+ IntegrityList = static_cast<void **>(
+ IntegrityList[Config::Secondary::InUseBlocksSize]);
+ } else {
+ reachedEnd = true;
+ }
+ }
+
+ if (!isValid) {
+ return;
+ }
+ }
+
InUseBlocks.remove(H);
FreedBytes += CommitSize;
FragmentedBytes -= H->MemMap.getCapacity() - CommitSize;
diff --git a/compiler-rt/lib/scudo/standalone/tests/combined_test.cpp b/compiler-rt/lib/scudo/standalone/tests/combined_test.cpp
index 3dbd93cacefd68..721e524b07b824 100644
--- a/compiler-rt/lib/scudo/standalone/tests/combined_test.cpp
+++ b/compiler-rt/lib/scudo/standalone/tests/combined_test.cpp
@@ -205,6 +205,8 @@ struct TestConditionVariableConfig {
#endif
struct Secondary {
+ static const bool VerifyInUseAddresses = true;
+ static const scudo::u32 InUseBlocksSize = 1000U;
template <typename Config>
using CacheT = scudo::MapAllocatorNoCache<Config>;
};
@@ -678,6 +680,8 @@ struct DeathConfig {
using PrimaryT = scudo::SizeClassAllocator64<Config>;
struct Secondary {
+ static const bool VerifyInUseAddresses = true;
+ static const scudo::u32 InUseBlocksSize = 1000U;
template <typename Config>
using CacheT = scudo::MapAllocatorNoCache<Config>;
};
diff --git a/compiler-rt/lib/scudo/standalone/tests/secondary_test.cpp b/compiler-rt/lib/scudo/standalone/tests/secondary_test.cpp
index 18d2e187fa3ce2..a5485b0e117707 100644
--- a/compiler-rt/lib/scudo/standalone/tests/secondary_test.cpp
+++ b/compiler-rt/lib/scudo/standalone/tests/secondary_test.cpp
@@ -85,6 +85,8 @@ template <typename Config> static void testSecondaryBasic(void) {
struct NoCacheConfig {
static const bool MaySupportMemoryTagging = false;
struct Secondary {
+ static const bool VerifyInUseAddresses = true;
+ static const scudo::u32 InUseBlocksSize = 1000U;
template <typename Config>
using CacheT = scudo::MapAllocatorNoCache<Config>;
};
@@ -102,6 +104,8 @@ struct TestConfig {
static const scudo::s32 MaxReleaseToOsIntervalMs = INT32_MAX;
};
+ static const bool VerifyInUseAddresses = true;
+ static const scudo::u32 InUseBlocksSize = 1000U;
template <typename Config> using CacheT = scudo::MapAllocatorCache<Config>;
};
};
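For downstream users, opting out would follow the same shape as the test configs above. A hypothetical sketch (the struct name is invented for illustration); note that when the mitigation is enabled, each tracking region reserves sizeof(void *) * (InUseBlocksSize + 1) bytes, i.e. 8008 bytes per block of 1000 entries on a 64-bit target:

```cpp
// Hypothetical config that disables the new tracking; modeled on
// NoCacheConfig from secondary_test.cpp above.
struct NoVerifyConfig {
  static const bool MaySupportMemoryTagging = false;
  struct Secondary {
    // Skip the in-use address bookkeeping entirely.
    static const bool VerifyInUseAddresses = false;
    // Still has to be defined, since the (non-constexpr) checks
    // reference it even when the mitigation is disabled.
    static const scudo::u32 InUseBlocksSize = 1000U;
    template <typename Config>
    using CacheT = scudo::MapAllocatorNoCache<Config>;
  };
};
```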
Thanks for submitting this pull request. I'm concerned about the overhead of doing this: we always need to visit all the in-use blocks for every allocation/deallocation. In terms of memory overhead, the implementation has no way to reduce the space used by pointer recording (BTW, we already store the in-use blocks). That means the performance and memory footprint may get worse over time. In addition, when the checksum is compromised, a fake chunk can be freed to the primary allocator as well, so it seems to me that this check can be bypassed easily. If we do have concerns about this, quarantine is a good option to mitigate this issue (even though it provides a different kind of protection than this change).
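For context on the quarantine suggestion: scudo standalone exposes quarantine tuning through runtime flags, settable via the SCUDO_OPTIONS environment variable or the __scudo_default_options callback. A hedged sketch is below; the values are illustrative, and quarantine_max_chunk_size would need to be large enough to cover secondary-sized chunks for the quarantine to delay reuse of the freed fake chunk:

```cpp
// Sketch: opt into scudo's quarantine at process start. The flag names
// are the standalone allocator's documented options; the values here
// are examples, not recommendations.
extern "C" const char *__scudo_default_options() {
  return "quarantine_size_kb=1024:"
         "thread_local_quarantine_size_kb=64:"
         "quarantine_max_chunk_size=16384";
}
```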