
Custom TTL per LOCK in LockRegistry #3444

Description

@xak2000

The current default (and only) JDBC-based implementation of LockRepository (DefaultLockRepository) allows setting a static TTL for all locks controlled by this repository.

But not all locks are equal. Some locks require a bigger TTL than others. It would be good to support a custom TTL per lock.

From the DB point of view, I think this is an easy task. Currently the table stores a CREATED_DATE field. If a new EXPIRATION_DATE field were added to the table, it would be easy to expire records based on that date instead of the creation date plus the TTL.

From the repository point of view it is not so easy. It requires changing the interface, or creating a new interface that extends the current LockRepository and adds a new method:

boolean acquire(String lock, Duration ttl)
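
Just to illustrate the idea, a minimal sketch of such an extension could look like this (the TtlLockRepository name is hypothetical, my own assumption, not an existing API):

```java
import java.time.Duration;

import org.springframework.integration.jdbc.lock.LockRepository;

/**
 * Hypothetical extension of the existing LockRepository contract
 * that lets the caller override the repository-wide static TTL.
 */
public interface TtlLockRepository extends LockRepository {

    /**
     * Try to acquire the lock and mark it to expire after the given TTL
     * (instead of the repository's default TTL).
     */
    boolean acquire(String lock, Duration ttl);
}
```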

From the registry point of view it is not so easy either. The only method of LockRegistry allows obtaining a lock using just the lockKey (Lock obtain(Object lockKey)). But to support a custom TTL per Lock we would need to add another method, like:

Lock obtain(Object lockKey, Duration ttl)

that would set an explicit TTL on the JdbcLock. The JdbcLock would then pass this TTL to its LockRepository from the doLock() method.
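
Again only as an illustration (the TtlLockRegistry name is hypothetical; the real change might as well be a new method on LockRegistry itself):

```java
import java.time.Duration;
import java.util.concurrent.locks.Lock;

import org.springframework.integration.support.locks.LockRegistry;

/**
 * Hypothetical TTL-aware registry contract: the returned JdbcLock
 * would remember the TTL and pass it to its LockRepository from doLock().
 */
public interface TtlLockRegistry extends LockRegistry {

    Lock obtain(Object lockKey, Duration ttl);
}
```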

So, for the JDBC implementation it looks entirely possible to implement.
But I didn't investigate the other LockRegistry implementations (those not based on JDBC). I'm not familiar with Redis or Zookeeper, so it would be good if someone evaluated the idea of this feature request for them first.

Context

Not all business processes are the same. Some usually finish in 2 seconds, others in 20 minutes. So having a static TTL for all locks of a repository is impractical. Setting it to an intentionally big value is impractical too, as it means that a fast business process, killed before it releases its lock, will not be restarted until this big TTL ends.

The workaround for this problem is to use multiple instances of LockRepository (and the wrapping LockRegistry) with multiple different INT_LOCK tables. In this case each instance of LockRepository can be configured with an explicit TTL. This is only a partial workaround, as it's hard to predict all possible TTLs for every possible business process, but it could be generalized to, say, 3 tables (with TTL = 10 seconds, 10 minutes and 10 hours), as sketched below.
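
For illustration, such a setup is already possible with the current API along these lines (the prefixes and TTL values here are just examples; a ..._LOCK table has to exist in the schema for each prefix):

```java
import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.jdbc.lock.DefaultLockRepository;
import org.springframework.integration.jdbc.lock.JdbcLockRegistry;

@Configuration
public class LockConfig {

    // Registry for fast processes: INT_SHORT_LOCK table, 10 second TTL.
    @Bean
    public DefaultLockRepository shortLockRepository(DataSource dataSource) {
        DefaultLockRepository repository = new DefaultLockRepository(dataSource);
        repository.setPrefix("INT_SHORT_");
        repository.setTimeToLive(10_000);
        return repository;
    }

    @Bean
    public JdbcLockRegistry shortLockRegistry(DefaultLockRepository shortLockRepository) {
        return new JdbcLockRegistry(shortLockRepository);
    }

    // Registry for slow processes: INT_LONG_LOCK table, 10 minute TTL.
    @Bean
    public DefaultLockRepository longLockRepository(DataSource dataSource) {
        DefaultLockRepository repository = new DefaultLockRepository(dataSource);
        repository.setPrefix("INT_LONG_");
        repository.setTimeToLive(600_000);
        return repository;
    }

    @Bean
    public JdbcLockRegistry longLockRegistry(DefaultLockRepository longLockRepository) {
        return new JdbcLockRegistry(longLockRepository);
    }
}
```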

The other workaround, I think, is to periodically call the lock() method again on a currently held lock. As I see in the code, this should update the CREATED_DATE field in the table to the current datetime, so the expiration will be prolonged too. I actually think this is the best approach, as it allows not setting a too long TTL while also not mistakenly acquiring a lock that another process still holds. A sketch of this re-locking follows.
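
A minimal sketch of that re-locking, assuming (as described above) that a reentrant lock() call from the holding thread refreshes CREATED_DATE and that the matching unlock() only decrements the hold count:

```java
import java.util.concurrent.locks.Lock;

import org.springframework.integration.jdbc.lock.JdbcLockRegistry;

public class RelockingProcess {

    private final JdbcLockRegistry lockRegistry;

    public RelockingProcess(JdbcLockRegistry lockRegistry) {
        this.lockRegistry = lockRegistry;
    }

    public void run(String businessKey) {
        Lock lock = this.lockRegistry.obtain(businessKey);
        lock.lock();
        try {
            stepOne();
            // Re-acquire on the same thread between steps: the reentrant
            // lock() should update CREATED_DATE in the table, and the
            // paired unlock() does not release the DB record because the
            // outer hold is still in place.
            lock.lock();
            lock.unlock();
            stepTwo();
        }
        finally {
            lock.unlock();
        }
    }

    private void stepOne() { /* fast part of the business process */ }

    private void stepTwo() { /* slow part, still shorter than the TTL */ }
}
```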

But it's not always easy to implement this in the business logic. For example, suppose some external service is called with a big timeout (and the timeout must be big for this service for some reason), and this external service usually responds in 2 seconds but sometimes in 2 minutes. In this case the calling thread is blocked until the external service responds or the timeout is reached, so no lock() call can be made again in this timeframe.
