
[WIP] Bump allocator for rustc #38725


Closed · wants to merge 6 commits

Changes from 1 commit
24 changes: 14 additions & 10 deletions src/liballoc_frame/lib.rs
@@ -25,6 +25,7 @@
 extern crate libc;
 
 use core::ptr;
+use core::sync::atomic::{AtomicPtr, AtomicUsize, Ordering};
 
 // The minimum alignment guaranteed by the architecture. This value is used to
 // add fast paths for low alignment values. In practice, the alignment is a
@@ -47,26 +48,29 @@ const MIN_ALIGN: usize = 16;
 const CHUNK_SIZE: usize = 4096 * 16;
 const CHUNK_ALIGN: usize = 4096;

-static mut HEAP: *mut u8 = ptr::null_mut();
-static mut HEAP_LEFT: usize = 0;
+static HEAP: AtomicPtr<u8> = AtomicPtr::new(ptr::null_mut());
+static HEAP_LEFT: AtomicUsize = AtomicUsize::new(0);

 #[no_mangle]
 pub extern "C" fn __rust_allocate(size: usize, align: usize) -> *mut u8 {
     let new_align = if align < MIN_ALIGN { MIN_ALIGN } else { align };
     let new_size = (size + new_align - 1) & !(new_align - 1);
 
     unsafe {
-        if new_size < HEAP_LEFT {
-            HEAP_LEFT -= new_size;
-            let p = HEAP;
-            HEAP = HEAP.offset(new_size as isize);
-            return p;
-        } else if new_size > CHUNK_SIZE {
+        if new_size > CHUNK_SIZE {
             return imp::allocate(size, align);
         }
 
+        let heap = HEAP.load(Ordering::SeqCst);
+        let heap_left = HEAP_LEFT.load(Ordering::SeqCst);
Contributor:
Well, this eliminates the instant UB due to data races, but it's still a logical race. Suppose a context switch happens after these two loads have been executed but before the following lines are executed, and another thread allocates. When the first thread continues, it will return the same address as the other thread did.
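
(To make the hazard concrete, here is a sketch — not part of this PR — of the usual lock-free alternative: retry the bump with compare_exchange so a concurrent allocation is detected and redone rather than duplicated. Because HEAP and HEAP_LEFT are two separate atomics, even this only narrows the window unless the remaining-space bookkeeping is derived from HEAP itself, which is part of why a lock is attractive here.)

    // Hypothetical CAS retry loop for the fast path (illustrative fragment,
    // assuming the same HEAP/HEAP_LEFT statics and new_size as above).
    loop {
        let heap = HEAP.load(Ordering::SeqCst);
        let heap_left = HEAP_LEFT.load(Ordering::SeqCst);
        if new_size >= heap_left {
            break; // not enough room: fall through to the chunk-refill path
        }
        let bumped = unsafe { heap.offset(new_size as isize) };
        // Publish the bump only if no other thread moved HEAP since our load.
        if HEAP.compare_exchange(heap, bumped, Ordering::SeqCst, Ordering::SeqCst).is_ok() {
            HEAP_LEFT.fetch_sub(new_size, Ordering::SeqCst);
            return heap;
        }
        // Lost the race: another thread bumped first; reload and retry.
    }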

Contributor Author:
Makes sense. I have little experience with atomics, as you may have guessed. I would rather use a Mutex or thread_local!, but neither of those is in core. I'll make it use a spinlock, which should hopefully be correct.
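
(For reference, a spinlock can indeed be built from nothing but core atomics. A minimal sketch of the shape such a lock could take — names are hypothetical, and this is not the code that eventually landed:)

    use core::sync::atomic::{AtomicBool, Ordering};

    // Hypothetical core-only spinlock guarding HEAP and HEAP_LEFT.
    static HEAP_LOCK: AtomicBool = AtomicBool::new(false);

    fn with_heap_locked<R, F: FnOnce() -> R>(f: F) -> R {
        // Spin until we atomically flip the flag from unlocked to locked.
        while HEAP_LOCK.compare_exchange_weak(false, true, Ordering::Acquire, Ordering::Relaxed).is_err() {}
        let result = f(); // HEAP and HEAP_LEFT may be read and written freely here
        HEAP_LOCK.store(false, Ordering::Release); // unlock
        result
    }

Holding the lock makes the load, bump, and store of both variables a single critical section, which removes the logical race at the cost of spinning.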

+        if new_size < heap_left {
+            HEAP_LEFT.store(heap_left - new_size, Ordering::SeqCst);
+            HEAP.store(heap.offset(new_size as isize), Ordering::SeqCst);
+            return heap;
         } else {
-            HEAP_LEFT = CHUNK_SIZE - new_size;
+            HEAP_LEFT.store(CHUNK_SIZE - new_size, Ordering::SeqCst);
             let p = imp::allocate(CHUNK_SIZE, CHUNK_ALIGN);
-            HEAP = p.offset(new_size as isize);
+            HEAP.store(p.offset(new_size as isize), Ordering::SeqCst);
             return p;
         }
     }
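
For readers following the arithmetic in __rust_allocate: each request is first rounded up to a multiple of the alignment (itself clamped to MIN_ALIGN) before the pointer is bumped. A small worked instance, assuming MIN_ALIGN = 16 as in the file above:

    // Illustrative values, not from the PR.
    let (size, align) = (10usize, 8usize);
    let new_align = if align < 16 { 16 } else { align };      // clamped up to MIN_ALIGN
    let new_size = (size + new_align - 1) & !(new_align - 1); // (10 + 15) & !15
    assert_eq!(new_size, 16); // rounded up to the next multiple of 16

HEAP then advances by new_size while HEAP_LEFT shrinks by the same amount; requests larger than CHUNK_SIZE bypass the bump path entirely and go straight to imp::allocate.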