[compiler-rt][fuchsia] Preallocate a vmar for sanitizer internals #75256

Merged
PiJoules merged 1 commit into llvm:main from sanitizer-heap-vmar on Jan 11, 2024

Conversation

PiJoules
Contributor

In an effort to further reduce mmap fragmentation, allocate a vmar large enough to hold the sanitizer internals we map via DoAnonymousMmap. Objects mapped here include asan's FakeStack, LowLevelAllocator mappings, the primary allocator's TwoLevelMap, InternalMmapVector, StackStore, and asan's thread internals. The vmar is sized for the total footprint of these objects seen in a "typical" process lifetime. If the vmar is full, we fall back to mapping in the root vmar.
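
To make the approach concrete, here is a condensed sketch of the map-with-fallback path this patch adds (the full change is in the diff below). The helper name MapIntoSanitizerVmarOrRoot is illustrative only; the actual patch splits this into GetSanitizerHeapVmar and TryVmoMapSanitizerVmar. It assumes the Zircon sysroot headers for the _zx_* declarations.

#include <zircon/process.h>   // _zx_vmar_root_self
#include <zircon/syscalls.h>  // _zx_vmar_allocate, _zx_vmar_map
#include <zircon/types.h>

// Size of the preallocated region; mirrors the patch (~12.3 MB, page-aligned).
constexpr size_t kSanitizerHeapVmarSize = 0xc52000;
static zx_handle_t gSanitizerHeapVmar = ZX_HANDLE_INVALID;

// Map `vmo` read/write into the preallocated sanitizer vmar, allocating that
// vmar lazily on first use; fall back to the root vmar when the preallocated
// region is exhausted (ZX_ERR_NO_RESOURCES).
static zx_status_t MapIntoSanitizerVmarOrRoot(zx_handle_t vmo, size_t size,
                                              uintptr_t *addr) {
  if (gSanitizerHeapVmar == ZX_HANDLE_INVALID) {
    uintptr_t base;
    zx_status_t status = _zx_vmar_allocate(
        _zx_vmar_root_self(),
        ZX_VM_CAN_MAP_READ | ZX_VM_CAN_MAP_WRITE | ZX_VM_CAN_MAP_SPECIFIC,
        /*offset=*/0, kSanitizerHeapVmarSize, &gSanitizerHeapVmar, &base);
    if (status != ZX_OK)
      return status;
  }
  zx_status_t status =
      _zx_vmar_map(gSanitizerHeapVmar, ZX_VM_PERM_READ | ZX_VM_PERM_WRITE,
                   /*vmar_offset=*/0, vmo, /*vmo_offset=*/0, size, addr);
  if (status == ZX_ERR_NO_RESOURCES) {
    // Preallocated vmar is full; fall back to the root vmar.
    status =
        _zx_vmar_map(_zx_vmar_root_self(), ZX_VM_PERM_READ | ZX_VM_PERM_WRITE,
                     /*vmar_offset=*/0, vmo, /*vmo_offset=*/0, size, addr);
  }
  return status;
}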

@llvmbot
Member

llvmbot commented Dec 13, 2023

@llvm/pr-subscribers-compiler-rt-sanitizer

Author: None (PiJoules)

Changes

In an effort to further reduce mmap fragmentation, allocate a vmar large enough to hold the sanitizer internals we map via DoAnonymousMmap. Objects mapped here include asan's FakeStack, LowLevelAllocator mappings, the primary allocator's TwoLevelMap, InternalMmapVector, StackStore, and asan's thread internals. The vmar is sized for the total footprint of these objects seen in a "typical" process lifetime. If the vmar is full, we fall back to mapping in the root vmar.


Full diff: https://github.com/llvm/llvm-project/pull/75256.diff

1 file affected:

  • (modified) compiler-rt/lib/sanitizer_common/sanitizer_fuchsia.cpp (+63-14)
diff --git a/compiler-rt/lib/sanitizer_common/sanitizer_fuchsia.cpp b/compiler-rt/lib/sanitizer_common/sanitizer_fuchsia.cpp
index 0245164403c51..42bfd53bcf7b2 100644
--- a/compiler-rt/lib/sanitizer_common/sanitizer_fuchsia.cpp
+++ b/compiler-rt/lib/sanitizer_common/sanitizer_fuchsia.cpp
@@ -129,6 +129,55 @@ uptr GetMaxVirtualAddress() { return GetMaxUserVirtualAddress(); }
 
 bool ErrorIsOOM(error_t err) { return err == ZX_ERR_NO_MEMORY; }
 
+// For any sanitizer internal that needs to map something which can be unmapped
+// later, first attempt to map to a pre-allocated vmar. This helps reduce
+// fragmentation from many small anonymous mmap calls. A good value for this
+// vmar size would be the total size of your typical sanitizer internal objects
+// allocated in an "average" process lifetime. Examples of this include:
+// FakeStack, LowLevelAllocator mappings, TwoLevelMap, InternalMmapVector,
+// StackStore, CreateAsanThread, etc.
+//
+// 0xc52000 is ~12.32MB. This is roughly equal to the total sum of sanitizer
+// internal mappings in a run of boot-libc-unittests.
+constexpr size_t kSanitizerHeapVmarSize = 0xc52000;
+static zx_handle_t gSanitizerHeapVmar = 0;
+
+static zx_status_t GetSanitizerHeapVmar(zx_handle_t *vmar) {
+  zx_status_t status = ZX_OK;
+  if (!gSanitizerHeapVmar) {
+    CHECK_EQ(kSanitizerHeapVmarSize % GetPageSizeCached(), 0);
+    uintptr_t base;
+    status = _zx_vmar_allocate(
+        _zx_vmar_root_self(),
+        ZX_VM_CAN_MAP_READ | ZX_VM_CAN_MAP_WRITE | ZX_VM_CAN_MAP_SPECIFIC, 0,
+        kSanitizerHeapVmarSize, &gSanitizerHeapVmar, &base);
+  }
+  *vmar = gSanitizerHeapVmar;
+  if (status == ZX_OK)
+    CHECK_NE(gSanitizerHeapVmar, 0);
+  return status;
+}
+
+static zx_status_t TryVmoMapSanitizerVmar(zx_vm_option_t options,
+                                          size_t vmar_offset, zx_handle_t vmo,
+                                          size_t size, uintptr_t *addr) {
+  zx_status_t status;
+  zx_handle_t vmar;
+  if ((status = GetSanitizerHeapVmar(&vmar)) != ZX_OK)
+    return status;
+
+  status = _zx_vmar_map(gSanitizerHeapVmar, options, vmar_offset, vmo,
+                        /*vmo_offset=*/0, size, addr);
+  if (status == ZX_ERR_NO_RESOURCES) {
+    // This means there's no space in the heap vmar, so fallback to the root
+    // vmar.
+    status = _zx_vmar_map(_zx_vmar_root_self(), options, vmar_offset, vmo,
+                          /*vmo_offset=*/0, size, addr);
+  }
+
+  return status;
+}
+
 static void *DoAnonymousMmapOrDie(uptr size, const char *mem_type,
                                   bool raw_report, bool die_for_nomem) {
   size = RoundUpTo(size, GetPageSize());
@@ -144,11 +193,9 @@ static void *DoAnonymousMmapOrDie(uptr size, const char *mem_type,
   _zx_object_set_property(vmo, ZX_PROP_NAME, mem_type,
                           internal_strlen(mem_type));
 
-  // TODO(mcgrathr): Maybe allocate a VMAR for all sanitizer heap and use that?
   uintptr_t addr;
-  status =
-      _zx_vmar_map(_zx_vmar_root_self(), ZX_VM_PERM_READ | ZX_VM_PERM_WRITE, 0,
-                   vmo, 0, size, &addr);
+  status = TryVmoMapSanitizerVmar(ZX_VM_PERM_READ | ZX_VM_PERM_WRITE,
+                                  /*vmar_offset=*/0, vmo, size, &addr);
   _zx_handle_close(vmo);
 
   if (status != ZX_OK) {
@@ -243,6 +290,12 @@ void UnmapOrDieVmar(void *addr, uptr size, zx_handle_t target_vmar) {
 
   zx_status_t status =
       _zx_vmar_unmap(target_vmar, reinterpret_cast<uintptr_t>(addr), size);
+  if (status != ZX_OK && target_vmar == gSanitizerHeapVmar) {
+    // If there wasn't any space in the heap vmar, the fallback was the root
+    // vmar.
+    status = _zx_vmar_unmap(_zx_vmar_root_self(),
+                            reinterpret_cast<uintptr_t>(addr), size);
+  }
   if (status != ZX_OK) {
     Report("ERROR: %s failed to deallocate 0x%zx (%zd) bytes at address %p\n",
            SanitizerToolName, size, size, addr);
@@ -308,17 +361,14 @@ void *MmapAlignedOrDieOnFatalError(uptr size, uptr alignment,
   _zx_object_set_property(vmo, ZX_PROP_NAME, mem_type,
                           internal_strlen(mem_type));
 
-  // TODO(mcgrathr): Maybe allocate a VMAR for all sanitizer heap and use that?
-
   // Map a larger size to get a chunk of address space big enough that
   // it surely contains an aligned region of the requested size.  Then
   // overwrite the aligned middle portion with a mapping from the
   // beginning of the VMO, and unmap the excess before and after.
   size_t map_size = size + alignment;
   uintptr_t addr;
-  status =
-      _zx_vmar_map(_zx_vmar_root_self(), ZX_VM_PERM_READ | ZX_VM_PERM_WRITE, 0,
-                   vmo, 0, map_size, &addr);
+  status = TryVmoMapSanitizerVmar(ZX_VM_PERM_READ | ZX_VM_PERM_WRITE,
+                                  /*vmar_offset=*/0, vmo, map_size, &addr);
   if (status == ZX_OK) {
     uintptr_t map_addr = addr;
     uintptr_t map_end = map_addr + map_size;
@@ -326,14 +376,13 @@ void *MmapAlignedOrDieOnFatalError(uptr size, uptr alignment,
     uintptr_t end = addr + size;
     if (addr != map_addr) {
       zx_info_vmar_t info;
-      status = _zx_object_get_info(_zx_vmar_root_self(), ZX_INFO_VMAR, &info,
+      status = _zx_object_get_info(gSanitizerHeapVmar, ZX_INFO_VMAR, &info,
                                    sizeof(info), NULL, NULL);
       if (status == ZX_OK) {
         uintptr_t new_addr;
-        status = _zx_vmar_map(
-            _zx_vmar_root_self(),
+        status = TryVmoMapSanitizerVmar(
             ZX_VM_PERM_READ | ZX_VM_PERM_WRITE | ZX_VM_SPECIFIC_OVERWRITE,
-            addr - info.base, vmo, 0, size, &new_addr);
+            addr - info.base, vmo, size, &new_addr);
         if (status == ZX_OK)
           CHECK_EQ(new_addr, addr);
       }
@@ -357,7 +406,7 @@ void *MmapAlignedOrDieOnFatalError(uptr size, uptr alignment,
 }
 
 void UnmapOrDie(void *addr, uptr size) {
-  UnmapOrDieVmar(addr, size, _zx_vmar_root_self());
+  UnmapOrDieVmar(addr, size, gSanitizerHeapVmar);
 }
 
 void ReleaseMemoryPagesToOS(uptr beg, uptr end) {

@PiJoules
Contributor Author

WIP for now; I'll request a formal review once I've verified I can do 100 runs of the boot-libc-unittests on bringup.riscv64-asan without a flake.

@PiJoules PiJoules force-pushed the sanitizer-heap-vmar branch from 2e9fd48 to b21c54c on December 13, 2023 at 19:27
@PiJoules
Contributor Author

OK, now requesting a formal review since it ran 100 times without flaking. Prior to this change, it would flake roughly every 10 runs.

@PiJoules PiJoules changed the title from "[WIP][compiler-rt][fuchsia] Preallocate a vmar for sanitizer internals" to "[compiler-rt][fuchsia] Preallocate a vmar for sanitizer internals" on Dec 13, 2023
@PiJoules
Contributor Author

ping

@@ -357,7 +406,7 @@ void *MmapAlignedOrDieOnFatalError(uptr size, uptr alignment,
 }
 
 void UnmapOrDie(void *addr, uptr size) {
-  UnmapOrDieVmar(addr, size, _zx_vmar_root_self());
+  UnmapOrDieVmar(addr, size, gSanitizerHeapVmar);
Contributor

This needs to use the fallback logic in case the root vmar was used, as above.

Contributor Author

This dispatches to UnmapOrDieVmar above, which contains that fallback logic. When this is called, it should try unmapping from gSanitizerHeapVmar first, then try unmapping again in the root vmar.

Contributor

I see, thanks. The way I was viewing the diff made that hard to find; probably just my lack of familiarity with the GH code review UI.

@PiJoules PiJoules force-pushed the sanitizer-heap-vmar branch from b21c54c to 39042e4 on January 11, 2024 at 18:54
@PiJoules PiJoules requested a review from frobtech January 11, 2024 18:54
In an effort to further reduce mmap fragmentation, allocate a vmar large
enough to hold the sanitizer internals we map via DoAnonymousMmap. Objects
mapped here include asan's FakeStack, LowLevelAllocator mappings, the
primary allocator's TwoLevelMap, InternalMmapVector, StackStore, and
asan's thread internals. The vmar is sized for the total footprint of
these objects seen in a "typical" process lifetime. If the vmar is full,
we fall back to mapping in the root vmar.
@PiJoules PiJoules force-pushed the sanitizer-heap-vmar branch from 39042e4 to b7acfdc on January 11, 2024 at 20:05
@PiJoules PiJoules merged commit 93b4705 into llvm:main Jan 11, 2024
@PiJoules PiJoules deleted the sanitizer-heap-vmar branch January 11, 2024 23:30
PiJoules added a commit that referenced this pull request Jan 13, 2024
Forward fix for #75256

The process for MmapAlignedOrDieOnFatalError involves trimming the start
and end of a mapping to ensure it's aligned correctly. This involves
calling zx_vmar_map again to overwrite part of the original mapping, which
in turn requires a call to zx_object_get_info(ZX_INFO_VMAR). After
#75256, we unconditionally made that call on gSanitizerHeapVmar, but this
can lead to ZX_ERR_INVALID_ARGS if the prior mapping was made in the root
vmar.

This can be fixed by also returning the vmar the last mapping was made in
and using it for follow-up operations that specifically involve the same
vmar. This way we don't have to try each syscall against both vmars.
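
For illustration, here is a minimal sketch of that shape, reusing the patch's GetSanitizerHeapVmar helper; the extra out-parameter and its name are hypothetical, not taken from the actual follow-up commit.

static zx_status_t TryVmoMapSanitizerVmar(zx_vm_option_t options,
                                          size_t vmar_offset, zx_handle_t vmo,
                                          size_t size, uintptr_t *addr,
                                          zx_handle_t *vmar_used) {
  zx_handle_t vmar;
  zx_status_t status = GetSanitizerHeapVmar(&vmar);
  if (status != ZX_OK)
    return status;

  status = _zx_vmar_map(vmar, options, vmar_offset, vmo,
                        /*vmo_offset=*/0, size, addr);
  if (status == ZX_ERR_NO_RESOURCES) {
    // No space left in the heap vmar; fall back to the root vmar and record
    // which vmar the mapping actually landed in.
    vmar = _zx_vmar_root_self();
    status = _zx_vmar_map(vmar, options, vmar_offset, vmo,
                          /*vmo_offset=*/0, size, addr);
  }
  if (status == ZX_OK && vmar_used)
    *vmar_used = vmar;
  return status;
}

Callers such as MmapAlignedOrDieOnFatalError and UnmapOrDieVmar can then pass the recorded handle to zx_object_get_info and zx_vmar_unmap instead of retrying each syscall against both vmars.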
justinfargnoli pushed a commit to justinfargnoli/llvm-project that referenced this pull request Jan 28, 2024
[compiler-rt][fuchsia] Preallocate a vmar for sanitizer internals (llvm#75256)

In an effort to further reduce mmap fragmentation, allocate a vmar large
enough to hold the sanitizer internals we map via DoAnonymousMmap. Objects
mapped here include asan's FakeStack, LowLevelAllocator mappings, the
primary allocator's TwoLevelMap, InternalMmapVector, StackStore, and
asan's thread internals. The vmar is sized for the total footprint of
these objects seen in a "typical" process lifetime. If the vmar is full,
we fall back to mapping in the root vmar.
justinfargnoli pushed a commit to justinfargnoli/llvm-project that referenced this pull request Jan 28, 2024
Forward fix for llvm#75256

The process for MmapAlignedOrDieOnFatalError involves trimming the start
and end of a mapping to ensure it's aligned correctly. This involves
calling zx_vmar_map again to overwrite part of the original mapping, which
in turn requires a call to zx_object_get_info(ZX_INFO_VMAR). After
llvm#75256, we unconditionally made that call on gSanitizerHeapVmar, but
this can lead to ZX_ERR_INVALID_ARGS if the prior mapping was made in the
root vmar.

This can be fixed by also returning the vmar the last mapping was made in
and using it for follow-up operations that specifically involve the same
vmar. This way we don't have to try each syscall against both vmars.