Right now, a task's behavior on stack exhaustion is to attempt to unwind. This "works" because the task allocates itself 10k of extra stack in order to begin unwinding, and then prays that destructors don't re-overflow the stack.
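As a minimal sketch of that hazard (not runtime code; `Node` and `build` are just illustrative), the drop glue for a deeply nested structure is itself recursive, so running destructors during unwinding costs one stack frame per link and can easily blow through a small emergency stack:

```rust
struct Node {
    next: Option<Box<Node>>,
}

fn build(depth: usize) -> Node {
    let mut head = Node { next: None };
    for _ in 0..depth {
        head = Node { next: Some(Box::new(head)) };
    }
    head
}

fn main() {
    // Dropping `list` recurses through every link; with only ~10k of
    // emergency stack available for unwinding, this recursion can
    // exhaust the stack a second time.
    let list = build(1_000_000);
    drop(list);
}
```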
Currently, re-overflowing results in an abort of the runtime, but this is not the best behavior. There are a few routes we could take:
- We have "unlimited stacks" in theory (good ol' 64-bit address spaces), so perhaps just allocating an extra megabyte for this use case would be "enough"
- Continue to fail everything
- "Taint" the current task on second overflow, immediately context-switch away, and then leak the entire task (no destructors are run).