Move relayed HTLC htlc_minimum_msat policy check from incoming channel to outgoing one #670

Closed
5 changes: 0 additions & 5 deletions lightning/src/ln/channel.rs
@@ -3158,11 +3158,6 @@ impl<ChanSigner: ChannelKeys> Channel<ChanSigner> {
);
}

/// Allowed in any state (including after shutdown)
pub fn get_their_htlc_minimum_msat(&self) -> u64 {
self.our_htlc_minimum_msat
}

pub fn get_value_satoshis(&self) -> u64 {
self.channel_value_satoshis
}
15 changes: 11 additions & 4 deletions lightning/src/ln/channelmanager.rs
@@ -1158,9 +1158,6 @@ impl<ChanSigner: ChannelKeys, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref>
if !chan.is_live() { // channel_disabled
break Some(("Forwarding channel is not in a ready state.", 0x1000 | 20, Some(self.get_channel_update(chan).unwrap())));
}
if *amt_to_forward < chan.get_their_htlc_minimum_msat() { // amount_below_minimum
Collaborator

chan here is the forwarding_id channel, not the inbound channel, no? See L1151 above.

Author

Okay, you're right, the actual code is correct; that's just a badly-named method.

But after looking further, I think our forward policy check should only happen when we effectively process the HTLC forward:

  • performance: it's a hit to lock the forward chan, and thus stall its operation, while we haven't yet decided to accept this HTLC backward. If it's rejected at update_add_htlc, we may not need to lock the forward one at all. And architecturally, that's unnecessary coupling; you may want to run chans in parallel threads.
  • correctness: channel conditions may change, like is_live(), so we should take the forward decision as near to the real conditions as we can. Checking early may also prevent features like clients intentionally holding an HTLC before the forward channel is even set up. And clients implementing this kind of hold-on/delayed-relay logic may expose themselves to risk, as the height against which the cltv_delta is evaluated at reception might not be the same as the one at which forwarding is accomplished, thus committing an insecure HTLC.

I would lean towards moving all forward-chan-related relay checks into process_pending_htlc_forwards, as this PR does for htlc_minimum_msat. A simplified sketch of the idea follows.
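A minimal sketch of that shape, with stand-in types rather than rust-lightning's real Channel API (the failure codes follow the BOLT #4 values used elsewhere in this PR):

```rust
// Hypothetical, simplified model: policy checks against the *outgoing*
// channel run only at forward-processing time. These types are illustrative
// stand-ins, not rust-lightning's actual structures.

struct Channel {
    is_live: bool,
    htlc_minimum_msat: u64,
}

enum ForwardDecision {
    Forward,
    FailBackward { failure_code: u16 },
}

fn decide_forward(outgoing: &Channel, amt_to_forward_msat: u64) -> ForwardDecision {
    // Both checks run against the outgoing channel's state *now*, at forward
    // time, so the decision reflects current conditions, not those at receipt.
    if !outgoing.is_live {
        return ForwardDecision::FailBackward { failure_code: 0x1000 | 20 }; // channel_disabled
    }
    if amt_to_forward_msat < outgoing.htlc_minimum_msat {
        return ForwardDecision::FailBackward { failure_code: 0x1000 | 11 }; // amount_below_minimum
    }
    ForwardDecision::Forward
}

fn main() {
    let chan = Channel { is_live: true, htlc_minimum_msat: 1000 };
    assert!(matches!(decide_forward(&chan, 999), ForwardDecision::FailBackward { .. }));
    assert!(matches!(decide_forward(&chan, 1000), ForwardDecision::Forward));
}
```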

Collaborator

The point of checking it early is that it's somewhat obnoxious to sit on an HTLC until some batch timer fires before we fail it back, if we don't even have an upstream channel (open) that we can forward it on to. That said, it's arguably better for privacy to do so, but then we should just explicitly wait to fail backwards instead of waiting to check whether we can forward.

In any case, it's somewhat nicer from a performance lens to just take the lock and fail them than to make the user set a timer and call forward later.

As for correctness around is_live, see #661, though that's not solved by this type of move. If we're really worried about state changes before we go to forward (though I'm pretty sure I've looked over the forwarding code to make sure it's ok if the chain advances before we forward), then we should just refactor the checks and do them twice, roughly as sketched below.
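A rough sketch of that "check twice" shape; the names and types here are illustrative, not rust-lightning's actual API:

```rust
// The same policy helper runs when update_add_htlc is received (cheap early
// rejection) and again in process_pending_htlc_forwards against fresh state.

#[derive(Clone, Copy)]
struct RelayPolicy {
    is_live: bool,
    htlc_minimum_msat: u64,
}

fn check_relay_policy(policy: &RelayPolicy, amt_msat: u64) -> Result<(), u16> {
    if !policy.is_live {
        return Err(0x1000 | 20); // channel_disabled
    }
    if amt_msat < policy.htlc_minimum_msat {
        return Err(0x1000 | 11); // amount_below_minimum
    }
    Ok(())
}

fn main() {
    // Early check at receipt: fail fast rather than sitting on the HTLC
    // until a batch timer fires.
    let at_receipt = RelayPolicy { is_live: true, htlc_minimum_msat: 1000 };
    assert!(check_relay_policy(&at_receipt, 5000).is_ok());

    // Channel state may change before the forward actually happens, so the
    // same check is repeated against current state at forward time.
    let at_forward = RelayPolicy { is_live: false, ..at_receipt };
    assert_eq!(check_relay_policy(&at_forward, 5000), Err(0x1000 | 20));
}
```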

break Some(("HTLC amount was below the htlc_minimum_msat", 0x1000 | 11, Some(self.get_channel_update(chan).unwrap())));
}
let fee = amt_to_forward.checked_mul(chan.get_fee_proportional_millionths() as u64).and_then(|prop_fee| { (prop_fee / 1000000).checked_add(chan.get_our_fee_base_msat(&self.fee_estimator) as u64) });
if fee.is_none() || msg.amount_msat < fee.unwrap() || (msg.amount_msat - fee.unwrap()) < *amt_to_forward { // fee_insufficient
break Some(("Prior hop has deviated from specified fees parameters or origin node has obsolete ones", 0x1000 | 12, Some(self.get_channel_update(chan).unwrap())));
@@ -1188,7 +1185,7 @@
{
let mut res = Vec::with_capacity(8 + 128);
if let Some(chan_update) = chan_update {
if code == 0x1000 | 11 || code == 0x1000 | 12 {
if code == 0x1000 | 12 { // fee_insufficient
res.extend_from_slice(&byte_utils::be64_to_array(msg.amount_msat));
}
else if code == 0x1000 | 13 {
@@ -1587,6 +1584,16 @@
htlc_id: prev_htlc_id,
incoming_packet_shared_secret: incoming_shared_secret,
});
if amt_to_forward < chan.get().get_our_htlc_minimum_msat() {
let mut data = Vec::with_capacity(8 + 128); // 8-byte htlc_msat + 2-byte length + channel_update
data.extend_from_slice(&byte_utils::be64_to_array(amt_to_forward));
let chan_update = self.get_channel_update(chan.get()).unwrap();
data.extend_from_slice(&chan_update.encode_with_len()[..]);
failed_forwards.push((htlc_source, payment_hash,
HTLCFailReason::Reason { failure_code: 0x1000 | 11, data } // amount_below_minimum
));
continue;
}
match chan.get_mut().send_htlc(amt_to_forward, payment_hash, outgoing_cltv_value, htlc_source.clone(), onion_packet) {
Err(e) => {
if let ChannelError::Ignore(msg) = e {
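For reference, the failure data built in the new hunk above follows the BOLT #4 layout for amount_below_minimum: an 8-byte big-endian htlc_msat followed by a length-prefixed channel_update. A minimal sketch of that layout; the helper name is hypothetical, and encoded_channel_update stands in for the output of chan_update.encode_with_len(), which already carries the 2-byte length prefix:

```rust
// Sketch of the amount_below_minimum (0x1000 | 11) failure data layout.

fn amount_below_minimum_data(htlc_msat: u64, encoded_channel_update: &[u8]) -> Vec<u8> {
    let mut data = Vec::with_capacity(8 + encoded_channel_update.len());
    data.extend_from_slice(&htlc_msat.to_be_bytes()); // 8-byte amount
    data.extend_from_slice(encoded_channel_update);   // 2-byte length + channel_update
    data
}

fn main() {
    // Pretend update: a 2-byte length prefix (3) followed by three opaque bytes.
    let fake_update_with_len = [0x00, 0x03, 0xaa, 0xbb, 0xcc];
    let data = amount_below_minimum_data(999, &fake_update_with_len);
    assert_eq!(&data[..8], &999u64.to_be_bytes());
    assert_eq!(&data[8..], &fake_update_with_len[..]);
}
```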
30 changes: 26 additions & 4 deletions lightning/src/ln/functional_tests.rs
@@ -5909,10 +5909,11 @@ fn run_onion_failure_test<F1,F2>(_name: &str, test_case: u8, nodes: &Vec<Node>,
}

// test_case
// 0: node1 fails backward
// 0: final node fails backward
// 1: final node fails backward
// 2: payment completed but the user rejects the payment
// 3: final node fails backward (but tamper onion payloads from node0)
// 4: intermediate node failure, fails backward
// 100: trigger error in the intermediate node and tamper returning fail_htlc
// 200: trigger error in the final node and tamper returning fail_htlc
fn run_onion_failure_test_with_fail_intercept<F1,F2,F3>(_name: &str, test_case: u8, nodes: &Vec<Node>, route: &Route, payment_hash: &PaymentHash, mut callback_msg: F1, mut callback_fail: F2, mut callback_node: F3, expected_retryable: bool, expected_error_code: Option<u16>, expected_channel_update: Option<HTLCFailChannelUpdate>)
@@ -6013,13 +6014,25 @@ fn run_onion_failure_test_with_fail_intercept<F1,F2,F3>(_name: &str, test_case:
assert!(update_1_0.update_fail_htlcs.len() == 1);
update_1_0
},
4 => { // intermediate node failure; failing backward to start node
assert!(nodes[1].node.get_and_clear_pending_msg_events().is_empty());
// forwarding on 1
expect_htlc_forward!(&nodes[1]);

// backward fail on 1
expect_htlc_forward!(&nodes[1]);
check_added_monitors!(nodes[1], 1);
let update_1_0 = get_htlc_update_msgs!(nodes[1], nodes[0].node.get_our_node_id());
assert!(update_1_0.update_fail_htlcs.len() == 1);
update_1_0
},
_ => unreachable!(),
};

// 1 => 0 commitment_signed_dance
if update_1_0.update_fail_htlcs.len() > 0 {
let mut fail_msg = update_1_0.update_fail_htlcs[0].clone();
if test_case == 100 {
if test_case == 100 || test_case == 4 {
callback_fail(&mut fail_msg);
}
nodes[0].node.handle_update_fail_htlc(&nodes[1].node.get_our_node_id(), &fail_msg);
@@ -6269,11 +6282,20 @@ fn test_onion_failure() {
run_onion_failure_test("unknown_next_peer", 0, &nodes, &bogus_route, &payment_hash, |_| {}, ||{}, true, Some(PERM|10),
Some(msgs::HTLCFailChannelUpdate::ChannelClosed{short_channel_id: bogus_route.paths[0][1].short_channel_id, is_permanent:true}));

let amt_to_forward = nodes[1].node.channel_state.lock().unwrap().by_id.get(&channels[1].2).unwrap().get_their_htlc_minimum_msat() - 1;
let amt_to_forward = nodes[1].node.channel_state.lock().unwrap().by_id.get(&channels[1].2).unwrap().get_our_htlc_minimum_msat() - 1;
let mut bogus_route = route.clone();
let route_len = bogus_route.paths[0].len();
bogus_route.paths[0][route_len-1].fee_msat = amt_to_forward;
run_onion_failure_test("amount_below_minimum", 0, &nodes, &bogus_route, &payment_hash, |_| {}, ||{}, true, Some(UPDATE|11), Some(msgs::HTLCFailChannelUpdate::ChannelUpdateMessage{msg: ChannelUpdate::dummy()}));
run_onion_failure_test_with_fail_intercept("amount_below_minimum", 4, &nodes, &bogus_route, &payment_hash, |_| {}, |msg| {
let session_priv = SecretKey::from_slice(&[3; 32]).unwrap();
let onion_keys = onion_utils::construct_onion_keys(&Secp256k1::new(), &route.paths[0], &session_priv).unwrap();
let mut data = Vec::with_capacity(8 + 128);
data.extend_from_slice(&byte_utils::be64_to_array(amt_to_forward));
let mut chan_update = ChannelUpdate::dummy();
chan_update.contents.htlc_minimum_msat = amt_to_forward + 1;
data.extend_from_slice(&chan_update.encode_with_len()[..]);
msg.reason = onion_utils::build_first_hop_failure_packet(&onion_keys[0].shared_secret[..], UPDATE|11, &data);
}, ||{}, true, Some(UPDATE|11), Some(msgs::HTLCFailChannelUpdate::ChannelUpdateMessage{msg: ChannelUpdate::dummy()}));

//TODO: with new config API, we will be able to generate both valid and
//invalid channel_update cases.