Remove AsyncIterator: Sendable requirement from debounce #198
Conversation
# Motivation

The current implementation of `AsyncDebounceSequence` requires the base `AsyncIterator` to be `Sendable`. This is causing two problems:

1. It only allows users to use `debounce` if their `AsyncSequence.AsyncIterator` is `Sendable`.
2. In `debounce` we are creating a lot of new `Task`s, and creating `Task`s is not cheap.

My main goal of this PR was to remove the `Sendable` constraint from `debounce`.

# Modification

This PR overhauls the implementation of `debounce` and aligns it with the implementation of the open `merge` PR apple#185. The most important changes are these:

- I removed the `Sendable` requirement from the base sequence's `AsyncIterator`.
- Instead of creating new `Task`s for the sleep and for the upstream consumption, I am now creating one `Task` and manipulating it by signalling continuations.
- I am not cancelling the sleep. Instead, I am recalculating the time left to sleep when a sleep finishes.

# Result

In the end, this PR swaps the implementation of `AsyncDebounceSequence`, drops the `Sendable` constraint, and passes all tests. Furthermore, in my local performance testing I saw up to a 150% increase in throughput.
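To make the "recalculate the remaining sleep" idea concrete, here is a minimal sketch under hypothetical names (`sleepUntilQuiescent` and `latestDeadline` are not part of the PR); it only illustrates the clock side of the approach, not the actual state machine:

```swift
// A minimal sketch of the "don't cancel the sleep" idea: when a sleep finishes,
// re-read the deadline that the upstream side keeps pushing out, and if it has
// moved, simply sleep again for the remaining time.
// `latestDeadline` is a hypothetical closure standing in for the shared state.
func sleepUntilQuiescent(
  clock: ContinuousClock,
  latestDeadline: @escaping @Sendable () -> ContinuousClock.Instant
) async throws {
  var deadline = latestDeadline()
  while clock.now < deadline {
    try await clock.sleep(until: deadline, tolerance: nil)
    // A newer element may have moved the deadline while we were asleep;
    // if so, the loop condition is still true and we sleep again.
    deadline = latestDeadline()
  }
}
```

In such a setup, the upstream side would only store something like `clock.now.advanced(by: interval)` as the new deadline on every element; nothing ever cancels an in-flight sleep.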
This passes the tests, but I still need to do some cleanup and check all comments. Furthermore, I want to add more tests for the edge cases.
So far this looks like a nice improvement. I will go over it with a fine-tooth comb on Monday.
@swift-ci please test
Overall this looks fantastic, it seems that this is definitely both a perf and correctness boon!
Force-pushed from 52f58f5 to bda1f5f
```swift
case .resumeUpstreamContinuation(let upstreamContinuation):
  // This is signalling the upstream task that is consuming the upstream
  // sequence to signal demand.
  upstreamContinuation?.resume(returning: ())
```
hmm these continuations are being resumed inside the critical region; doesn't that mean that we could potentially get a recursion in the lock?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Resuming continuations while holding the lock is safe. From the docs of `resume`:

> After resume enqueues the task, control immediately returns to the caller. The task continues executing when its executor is able to reschedule it.

This means the resumed job is enqueued on the appropriate executor, and we continue right away with our own code and drop the lock.
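A minimal sketch of this pattern (the `DemandSignal` type and its demand-first protocol are hypothetical, not the package's actual state machine):

```swift
import Foundation

// Hypothetical sketch, not the actual AsyncDebounceSequence code. It assumes a
// demand-first protocol: wait() has registered its continuation before signal()
// is called, mirroring how the upstream task parks itself before next() signals it.
final class DemandSignal: @unchecked Sendable {
  private let lock = NSLock()
  private var waiter: CheckedContinuation<Void, Never>?

  func wait() async {
    await withCheckedContinuation { continuation in
      lock.lock()
      waiter = continuation
      lock.unlock()
    }
  }

  func signal() {
    lock.lock()
    let continuation = waiter
    waiter = nil
    // Resuming while holding the lock is fine: resume(returning:) only
    // enqueues the parked task on its executor, control returns here
    // immediately, and we drop the lock right after.
    continuation?.resume(returning: ())
    lock.unlock()
  }
}
```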
Hmm, that means perhaps we have some optimization work to do for the entire framework, because I was under the (I guess mistaken) presumption that there was an edge case where a continuation could be resumed such that it becomes recursive.
```swift
case .resumeUpstreamAndClockContinuation(let upstreamContinuation, let clockContinuation, let deadline):
  // This is signalling the upstream task that is consuming the upstream
  // sequence to signal demand and start the clock task.
  upstreamContinuation?.resume(returning: ())
```
I think perhaps we need to make sure to return these back out and resume them outside of the lock.
See the comment above, but this ought to be totally safe.
```swift
switch action {
case .startTask(let base):
  self.startTask(
```
I think we can presume that starting the task inside the lock is fine; it should perhaps be something we test on a single-core device (however, since the unit tests pass with this, we are fine, because the validation diagrams force it into a single-threaded mode).
This is also safe to do, since we are starting the `Task` while holding the lock. If we are in a single-threaded environment, the new `Task` will get created, but it will never make any progress until the current job (which is calling `next()`) is finished.

If we are not holding the lock across the start of the `Task`, we get weird race conditions where the upstream consumption child tasks could call out to the state machine before we even transitioned out of `initial`, which I want to avoid and which would cause unnecessary complexity in the state handling.

I think the task creation feels right this way; basically, I think the rule is that if work is to be done, it isn't done until the first call to `next()`, signifying demand.
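A rough sketch of that demand rule (the `DemandDrivenMachine` type and its names are hypothetical, not the PR's code): the first call that signals demand starts the child task, and because the state transition happens under the same lock the child task would need to acquire, the child can never observe the `initial` state.

```swift
import Foundation

// Hypothetical sketch, not the actual implementation: work only starts on the
// first call that signifies demand, and the Task is created while the lock is held.
final class DemandDrivenMachine: @unchecked Sendable {
  private enum State {
    case initial
    case running(Task<Void, Never>)
  }

  private let lock = NSLock()
  private var state: State = .initial
  private let work: @Sendable () async -> Void

  init(work: @escaping @Sendable () async -> Void) {
    self.work = work
  }

  /// Called from `next()`; the first call signifies demand and starts the task.
  func signalDemand() {
    lock.lock()
    defer { lock.unlock() }
    if case .initial = state {
      // Task.init only creates and schedules the child task; it does not run
      // it inline. Because we move to `.running` before dropping the lock, any
      // callback from the child (which must take this lock first) can never
      // see `.initial`.
      state = .running(Task { await self.work() })
    }
  }
}
```

Here `signalDemand()` would be called from `next()`, so nothing runs until the first element is actually demanded.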