
Commit 52d6b39

Expect callers to hold read locks before channel_monitor_updated
Our existing lockorder tests assume that taking a read lock on a thread that already holds the same read lock is totally fine. This isn't at all true. The `std` `RwLock` behavior is platform-dependent: on most platforms readers can starve writers, as readers never block for a pending writer. However, on platforms where this is not the case, one thread trying to take a write lock may deadlock with another thread that already holds, and is attempting to take again, a read lock. Worse, our in-tree `FairRwLock` exhibits this behavior explicitly on all platforms to avoid the starvation issue.

Sadly, a user ended up hitting this deadlock in production in the form of a call to `get_and_clear_pending_msg_events`, which holds the `ChannelManager::total_consistency_lock` before calling `process_pending_monitor_events` and eventually `channel_monitor_updated`, which tries to take the same read lock again.

Luckily, the fix is trivial: simply remove the redundant read lock in `channel_monitor_updated`.

Fixes #2000
1 parent 6b92478 commit 52d6b39
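For illustration, here is a minimal standalone sketch (assumed names, not LDK code) of the re-entrant read-lock hazard described above: one thread re-takes a read lock it already holds while another thread has a write request pending. On a reader-preferring `RwLock` this usually completes; on a writer-preferring lock, as `FairRwLock` is on every platform, the queued writer blocks the inner `read()` while itself waiting on the outer read guard, and the program can hang.

use std::sync::{Arc, RwLock};
use std::thread;
use std::time::Duration;

fn main() {
    // Stand-in for something like `total_consistency_lock`.
    let shared = Arc::new(RwLock::new(0u32));

    let reader = {
        let shared = Arc::clone(&shared);
        thread::spawn(move || {
            let _outer = shared.read().unwrap();      // outer read lock, held across the whole call
            thread::sleep(Duration::from_millis(50)); // give the writer time to queue
            let _inner = shared.read().unwrap();      // redundant re-entrant read lock
        })
    };

    let writer = {
        let shared = Arc::clone(&shared);
        thread::spawn(move || {
            thread::sleep(Duration::from_millis(10));
            let _w = shared.write().unwrap();         // pending writer, waits for all readers to drop
        })
    };

    reader.join().unwrap();
    writer.join().unwrap();
}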

File tree: 1 file changed (+10, -2 lines)


lightning/src/ln/channelmanager.rs

Lines changed: 10 additions & 2 deletions
@@ -4176,7 +4176,7 @@ where
     }

     fn channel_monitor_updated(&self, funding_txo: &OutPoint, highest_applied_update_id: u64, counterparty_node_id: Option<&PublicKey>) {
-        let _persistence_guard = PersistenceNotifierGuard::notify_on_drop(&self.total_consistency_lock, &self.persistence_notifier);
+        debug_assert!(self.total_consistency_lock.try_write().is_err()); // Caller holds read lock

         let counterparty_node_id = match counterparty_node_id {
             Some(cp_id) => cp_id.clone(),
@@ -5116,6 +5116,8 @@ where

     /// Process pending events from the `chain::Watch`, returning whether any events were processed.
     fn process_pending_monitor_events(&self) -> bool {
+        debug_assert!(self.total_consistency_lock.try_write().is_err()); // Caller holds read lock
+
         let mut failed_channels = Vec::new();
         let mut pending_monitor_events = self.chain_monitor.release_pending_monitor_events();
         let has_pending_monitor_events = !pending_monitor_events.is_empty();
@@ -5193,7 +5195,13 @@ where
     /// update events as a separate process method here.
     #[cfg(fuzzing)]
     pub fn process_monitor_events(&self) {
-        self.process_pending_monitor_events();
+        PersistenceNotifierGuard::optionally_notify(&self.total_consistency_lock, &self.persistence_notifier, || {
+            if self.process_pending_monitor_events() {
+                NotifyOption::DoPersist
+            } else {
+                NotifyOption::SkipPersist
+            }
+        });
     }

     /// Check the holding cell in each channel and free any pending HTLCs in them if possible.
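The resulting convention, sketched below with assumed names (not the actual `ChannelManager` code): the caller acquires the read lock once for the whole operation, and the callee only `debug_assert!`s that the lock is already held instead of taking the read lock again. A failing `try_write()` serves as a cheap debug-build proxy for "some lock is currently held".

use std::sync::RwLock;

struct Manager {
    total_consistency_lock: RwLock<()>,
}

impl Manager {
    // Callee: expects the caller to already hold (at least) a read lock.
    fn channel_monitor_updated(&self) {
        debug_assert!(self.total_consistency_lock.try_write().is_err()); // Caller holds read lock
        // ... process the monitor update without touching the lock again ...
    }

    // Caller: takes the read lock once, then calls in.
    fn process_pending_monitor_events(&self) {
        let _read_guard = self.total_consistency_lock.read().unwrap();
        self.channel_monitor_updated();
    }
}

fn main() {
    let manager = Manager { total_consistency_lock: RwLock::new(()) };
    manager.process_pending_monitor_events();
}

In the real code the read lock comes from the caller's `PersistenceNotifierGuard`, which is why the fuzzing-only `process_monitor_events` hunk above wraps the call in `PersistenceNotifierGuard::optionally_notify` and returns `NotifyOption::DoPersist` or `NotifyOption::SkipPersist`.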
