Description
I have a job that can run for an extended period; let’s call it InfiniteSleepJob:
class InfiniteSleepJob < ApplicationJob
  # Allow only one simultaneous execution per key
  limits_concurrency to: 1, key: ->(key, *_args) { key }, group: "my_group", duration: 1.day

  def perform(*args)
    # do something
  end
end
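For context, this is roughly how the jobs get enqueued (the key value here is just an illustration, not my real data):

# Enqueue a few jobs sharing the same concurrency key; the first argument is
# what the key lambda above picks up ("some-key" is a made-up example)
3.times { InfiniteSleepJob.perform_later("some-key") }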
When I add a few jobs to the queue and the worker unexpectedly dies during execution (e.g., due to an HA event, or because I forcibly kill the Docker container with docker kill container), the concurrency limit gets stuck, even after the job is pruned. The lock is only released when the semaphore expires, i.e. after the configured duration.
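For reference, this is roughly what the stuck state looks like from a Rails console (model and column names follow the Solid Queue schema; the exact stored key format may differ by version):

# The orphaned semaphore is still there; with duration: 1.day its expires_at
# stays roughly 24 hours in the future
SolidQueue::Semaphore.pluck(:key, :value, :expires_at)

# The other jobs with the same key sit here until that semaphore expires
SolidQueue::BlockedExecution.count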
Here’s an example of a job that was pruned, but the concurrency limit wasn’t removed:
Here’s my queue.yml:
default: &default
  dispatchers:
    - polling_interval: 1
      batch_size: 500
      concurrency_maintenance_interval: 120
  workers:
    - queues: "default"
      threads: 3
      processes: 1
      polling_interval: 1
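As far as I can tell, concurrency_maintenance_interval: 120 only controls how often the dispatcher looks for expired semaphores, so with duration: 1.day an orphaned key can stay locked for up to a day after a crash. The only thing I’ve come up with is deleting the semaphore by hand from the console; a rough sketch, assuming "some-key" is the stuck key and the stored key contains it (destructive, and I’m not sure it’s the intended way):

# Drop the orphaned semaphore so the blocked jobs can eventually be dispatched
SolidQueue::Semaphore.where("key LIKE ?", "%some-key%").delete_all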
I’d expect the concurrency limit to be released as soon as the job fails, allowing the blocked jobs to proceed.
Could you kindly clarify whether this is the expected behavior? Am I missing something?