Commit bdc6477
Expect callers to hold read locks before channel_monitor_updated
Our existing lockorder tests assume that a read lock on a thread that is already holding the same read lock is totally fine. This isn't at all true. The `std` `RwLock` behavior is platform-dependent - on most platforms readers can starve writers, as readers will never block for a pending writer. However, on platforms where this is not the case, one thread trying to take a write lock may deadlock with another thread that both already has, and is attempting to take again, a read lock.

Worse, our in-tree `FairRwLock` exhibits this behavior explicitly on all platforms to avoid the starvation issue.

Sadly, a user ended up hitting this deadlock in production in the form of a call to `get_and_clear_pending_msg_events`, which holds the `ChannelManager::total_consistency_lock` before calling `process_pending_monitor_events` and eventually `channel_monitor_updated`, which tries to take the same read lock again.

Luckily, the fix is trivial: simply remove the redundant read lock in `channel_monitor_updated`.

Fixes #2000
1 parent 0dcd7df commit bdc6477
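To make the lockorder hazard concrete, here is a minimal, hypothetical sketch using a plain `std::sync::RwLock` rather than LDK's types. `Manager`, `parent`, and `child` are illustrative stand-ins (roughly for `ChannelManager`, `get_and_clear_pending_msg_events`, and `channel_monitor_updated`) and do not appear in the commit.

```rust
use std::sync::RwLock;

struct Manager {
    total_consistency_lock: RwLock<()>,
}

impl Manager {
    // Models channel_monitor_updated after the fix: assert that the caller
    // already holds total_consistency_lock instead of re-acquiring it.
    fn child(&self) {
        // Before this commit, the method took its own read() here. If another
        // thread queues a write() between the caller's read() and that second
        // read(), a writer-preferring lock (the in-tree FairRwLock, or std's
        // RwLock on some platforms) parks the second read() behind the writer,
        // while the writer waits on the caller's outstanding read guard:
        // deadlock.
        //
        // try_write() never blocks; it simply fails while any read or write
        // guard is outstanding, so it works as a debug-only check that "some
        // lock is held" without risking the deadlock above.
        debug_assert!(self.total_consistency_lock.try_write().is_err());
        // ... handle the completed monitor update ...
    }

    // Models get_and_clear_pending_msg_events: holds the read lock, then
    // calls down into child().
    fn parent(&self) {
        let _read_guard = self.total_consistency_lock.read().unwrap();
        self.child();
    }
}

fn main() {
    let m = Manager { total_consistency_lock: RwLock::new(()) };
    m.parent();
}
```

Note that the assertion only proves some lock is outstanding somewhere, not that the current thread specifically holds a read lock; that is the trade-off accepted in exchange for never blocking.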

1 file changed: +10 -2 lines changed

lightning/src/ln/channelmanager.rs

```diff
@@ -4175,7 +4175,7 @@ where
 	}
 
 	fn channel_monitor_updated(&self, funding_txo: &OutPoint, highest_applied_update_id: u64, counterparty_node_id: Option<&PublicKey>) {
-		let _persistence_guard = PersistenceNotifierGuard::notify_on_drop(&self.total_consistency_lock, &self.persistence_notifier);
+		debug_assert!(self.total_consistency_lock.try_write().is_err()); // Caller holds read lock
 
 		let counterparty_node_id = match counterparty_node_id {
 			Some(cp_id) => cp_id.clone(),
@@ -5115,6 +5115,8 @@ where
 
 	/// Process pending events from the `chain::Watch`, returning whether any events were processed.
 	fn process_pending_monitor_events(&self) -> bool {
+		debug_assert!(self.total_consistency_lock.try_write().is_err()); // Caller holds read lock
+
 		let mut failed_channels = Vec::new();
 		let mut pending_monitor_events = self.chain_monitor.release_pending_monitor_events();
 		let has_pending_monitor_events = !pending_monitor_events.is_empty();
@@ -5192,7 +5194,13 @@ where
 	/// update events as a separate process method here.
 	#[cfg(fuzzing)]
 	pub fn process_monitor_events(&self) {
-		self.process_pending_monitor_events();
+		PersistenceNotifierGuard::optionally_notify(&self.total_consistency_lock, &self.persistence_notifier, || {
+			if self.process_pending_monitor_events() {
+				NotifyOption::DoPersist
+			} else {
+				NotifyOption::SkipPersist
+			}
+		});
 	}
 
 	/// Check the holding cell in each channel and free any pending HTLCs in them if possible.
```
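The fuzzing-only `process_monitor_events` now takes the `total_consistency_lock` read lock itself, via `PersistenceNotifierGuard::optionally_notify`, so the assertion inside `process_pending_monitor_events` holds on that path too, and persistence is only signalled when events were actually processed. Below is a rough, simplified sketch of that lock-then-maybe-notify shape; the types here are invented for illustration and do not match LDK's real `PersistenceNotifierGuard`, which is an RAII guard rather than a plain function.

```rust
use std::sync::{Condvar, Mutex, RwLock};

// Hypothetical stand-in: a flag plus condvar that a background persister waits on.
struct Notifier {
    needs_persist: Mutex<bool>,
    condvar: Condvar,
}

impl Notifier {
    fn notify(&self) {
        *self.needs_persist.lock().unwrap() = true;
        self.condvar.notify_all();
    }
}

enum NotifyOption {
    DoPersist,
    SkipPersist,
}

// Take the consistency read lock for the duration of `f`, then wake the
// persister only if `f` reports that state actually changed.
fn optionally_notify<F: FnOnce() -> NotifyOption>(lock: &RwLock<()>, notifier: &Notifier, f: F) {
    let _read_guard = lock.read().unwrap();
    // With the read lock held here, callees may assert `try_write().is_err()`
    // instead of re-acquiring the lock themselves.
    if let NotifyOption::DoPersist = f() {
        notifier.notify();
    }
}

fn main() {
    let lock = RwLock::new(());
    let notifier = Notifier { needs_persist: Mutex::new(false), condvar: Condvar::new() };
    optionally_notify(&lock, &notifier, || {
        // Pretend no monitor events were pending, so persistence is skipped.
        NotifyOption::SkipPersist
    });
}
```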
