
Reapply "[scudo] Fix the logic of MaxAllowedFragmentedPages" (#108130) #108134


Merged
merged 1 commit into llvm:main on Sep 11, 2024

Conversation

ChiaHungDuan
Contributor

This reverts commit 76151c4.

Also changed to check MaxAllowedFragmentedPages.

@llvmbot
Member

llvmbot commented Sep 11, 2024

@llvm/pr-subscribers-compiler-rt-sanitizer

Author: None (ChiaHungDuan)

Changes

This reverts commit 76151c4.

Also changed to check MaxAllowedFragmentedPages.


Full diff: https://github.com/llvm/llvm-project/pull/108134.diff

1 file affected:

  • (modified) compiler-rt/lib/scudo/standalone/secondary.h (+17-8)
diff --git a/compiler-rt/lib/scudo/standalone/secondary.h b/compiler-rt/lib/scudo/standalone/secondary.h
index 1a232b9b9fb2d7..2fae29e5a21687 100644
--- a/compiler-rt/lib/scudo/standalone/secondary.h
+++ b/compiler-rt/lib/scudo/standalone/secondary.h
@@ -72,13 +72,16 @@ namespace {
 struct CachedBlock {
   static constexpr u16 CacheIndexMax = UINT16_MAX;
   static constexpr u16 InvalidEntry = CacheIndexMax;
-  //   * MaxReleasedCachePages default is currently 4
-  //        - We arrived at this value after noticing that mapping
-  //        in larger memory regions performs better than releasing
-  //        memory and forcing a cache hit. According to the data,
-  //        it suggests that beyond 4 pages, the release execution time is
-  //        longer than the map execution time. In this way, the default
-  //        is dependent on the platform.
+  // We allow a certain amount of fragmentation and part of the fragmented bytes
+  // will be released by `releaseAndZeroPagesToOS()`. This increases the chance
+  // of cache hit rate and reduces the overhead to the RSS at the same time. See
+  // more details in the `MapAllocatorCache::retrieve()` section.
+  //
+  // We arrived at this default value after noticing that mapping in larger
+  // memory regions performs better than releasing memory and forcing a cache
+  // hit. According to the data, it suggests that beyond 4 pages, the release
+  // execution time is longer than the map execution time. In this way,
+  // the default is dependent on the platform.
   static constexpr uptr MaxReleasedCachePages = 4U;
 
   uptr CommitBase = 0;
@@ -725,8 +728,14 @@ MapAllocator<Config>::tryAllocateFromCache(const Options &Options, uptr Size,
   uptr EntryHeaderPos;
   uptr MaxAllowedFragmentedPages = MaxUnreleasedCachePages;
 
-  if (UNLIKELY(useMemoryTagging<Config>(Options)))
+  if (LIKELY(!useMemoryTagging<Config>(Options))) {
     MaxAllowedFragmentedPages += CachedBlock::MaxReleasedCachePages;
+  } else {
+    // TODO: Enable MaxReleasedCachePages may result in pages for an entry being
+    // partially released and it erases the tag of those pages as well. To
+    // support this feature for MTE, we need to tag those pages again.
+    DCHECK_EQ(MaxAllowedFragmentedPages, MaxUnreleasedCachePages);
+  }
 
   Entry = Cache.retrieve(MaxAllowedFragmentedPages, Size, Alignment,
                          getHeadersSize(), EntryHeaderPos);
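
For readers unfamiliar with the secondary cache, the following is a minimal sketch (not scudo's actual implementation) of the logic this patch restores: the fragmentation budget starts at MaxUnreleasedCachePages and is widened by MaxReleasedCachePages only when memory tagging is disabled, and a cached block is usable only if the pages it would waste stay within that budget. The sketch namespace, the PageSize and MaxUnreleasedCachePages values, and the isFragmentationAcceptable helper are assumptions made for illustration; only MaxReleasedCachePages = 4 comes from the diff above.

// Simplified, self-contained sketch of the restored MaxAllowedFragmentedPages
// check. All names below other than MaxReleasedCachePages and
// MaxUnreleasedCachePages are hypothetical.
#include <cstdint>

namespace sketch {

using uptr = std::uintptr_t;

constexpr uptr PageSize = 4096;             // assumed page size
constexpr uptr MaxUnreleasedCachePages = 4; // illustrative value
constexpr uptr MaxReleasedCachePages = 4;   // default shown in the diff

// Mirrors the branch in tryAllocateFromCache(): with memory tagging (MTE)
// enabled, partially released pages would come back untagged, so the budget
// stays at MaxUnreleasedCachePages instead of growing by MaxReleasedCachePages.
constexpr uptr maxAllowedFragmentedPages(bool MemoryTaggingEnabled) {
  return MemoryTaggingEnabled
             ? MaxUnreleasedCachePages
             : MaxUnreleasedCachePages + MaxReleasedCachePages;
}

// A cached block is acceptable only if the pages it would waste (its committed
// size minus what the request actually needs) fit within that budget.
// Assumes CommitSize >= RequiredSize.
constexpr bool isFragmentationAcceptable(uptr CommitSize, uptr RequiredSize,
                                         bool MemoryTaggingEnabled) {
  const uptr FragmentedPages =
      (CommitSize - RequiredSize + PageSize - 1) / PageSize;
  return FragmentedPages <= maxAllowedFragmentedPages(MemoryTaggingEnabled);
}

} // namespace sketch

// Example: an 8-page cached block serving a 2-page request wastes 6 pages,
// which fits the non-MTE budget (4 + 4 pages) but not the MTE budget (4).
static_assert(sketch::isFragmentationAcceptable(8 * sketch::PageSize,
                                                2 * sketch::PageSize,
                                                /*MemoryTaggingEnabled=*/false),
              "acceptable without MTE");
static_assert(!sketch::isFragmentationAcceptable(8 * sketch::PageSize,
                                                 2 * sketch::PageSize,
                                                 /*MemoryTaggingEnabled=*/true),
              "too fragmented with MTE");

The static_asserts double as a usage example; as the diff shows, the fix is that the wider (non-MTE) budget is now applied when useMemoryTagging is false rather than when it is true.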

Contributor

@cferris1000 left a comment


LGTM

@ChiaHungDuan merged commit 323911d into llvm:main on Sep 11, 2024
11 checks passed
3 participants