Sample v0 #4
Conversation
nit: there is a bunch of inconsistent whitespace here; does it make sense to use rustfmt (do you like its formatting in this crate?) or, if not, it might be useful to add a whitespace-highlighting setting to your vimrc (depending on your terminal settings).
This is really awesome work. A few comments about how we interact with tokio to start.
This is good for more review!
This is looking great. I didn't do a careful review, but I glanced over most of it.
```rust
	keys_manager: Arc<KeysManager>, payment_storage: PaymentInfoStorage, network: Network,
) {
	let mut pending_txs: HashMap<OutPoint, Transaction> = HashMap::new();
	loop {
```
Eventually I think we should move the loop to the background-processor crate. We already spawn a background thread there that waits on the ChannelManager until we do things to it; I think that's fine for use as a fetch-events loop, too. It would require another parameter in the form of the ChainMonitor, but I think that's OK. Also, technically, we must process all pending events before writing the ChannelManager - otherwise we have races where we may have gotten an event, written the manager to disk without it, and then crashed before handling it.
Sadly, this is made more complicated by our desire to be async here. We'll probably want to allow the event-handling function to be async, and then call https://docs.rs/futures/0.3.13/futures/executor/fn.block_on.html on whatever is returned. I think this would mean we can't use `#[tokio::main]` (because we can't block on tokio net futures in another executor) but need a global `tokio::Runtime`, need to call `Runtime::enter()` at the top of the event-handling function, and call `tokio::spawn` and return the `tokio::JoinHandle` as a future. (I think you're allowed to do this, but would need to check if you can poll a `tokio::JoinHandle` in an outside executor.)
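A minimal sketch of that pattern, assuming tokio 1.x (the `Event` type here is a placeholder, not the real lightning event enum):

```rust
use std::sync::Arc;
use tokio::runtime::Runtime;
use tokio::task::JoinHandle;

// Placeholder standing in for lightning::util::events::Event.
struct Event;

// Called from the (non-tokio) background-processor thread. Entering the
// runtime makes tokio::spawn legal here; the JoinHandle is handed back so
// the caller can block on it before persisting the ChannelManager.
fn handle_event(runtime: &Arc<Runtime>, _event: Event) -> JoinHandle<()> {
    let _guard = runtime.enter();
    tokio::spawn(async move {
        // ...async event handling, e.g. broadcasting a funding tx...
    })
}

fn main() {
    let runtime = Arc::new(Runtime::new().unwrap());
    // A tokio JoinHandle can be polled by an outside executor, so the
    // futures crate's block_on works on the background thread.
    futures::executor::block_on(handle_event(&runtime, Event)).unwrap();
}
```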
I see. Could we also assume that `backgroundprocessor::start()` is called from within a `tokio` runtime?
In any case, are you fine with pushing this to a follow-up? I can open an issue in rust-lightning.
Yea, a followup is fine, but I don't think calling `::start()` from within a runtime fixes the issue (nor should we assume it if we don't have to) - it spawns a new thread directly, and that thread isn't in the runtime.
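A quick illustration of why (my sketch, not background-processor code): tokio's runtime context is thread-local and doesn't cross a plain `std::thread::spawn`.

```rust
#[tokio::main]
async fn main() {
    let handle = std::thread::spawn(|| {
        // Panics: tokio::spawn must run in the context of a Tokio
        // runtime, and that context does not follow std threads.
        tokio::spawn(async {});
    });
    // The panic surfaces as a join error on the spawned thread.
    assert!(handle.join().is_err());
}
```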
Would it be worth making an async version of background-processor? Or does that require modifying the guts of `ChannelManager` as well?
I just implemented an async version of background-processor in https://gitlab.com/lightning-signer/lnrod/-/merge_requests/10/diffs . Perhaps we want to add that to the original crate under a tokio feature gate.
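Roughly, such an async variant could take the shape below (illustrative only, not the linked lnrod code; `wait_timeout` stands in for ChannelManager's blocking persistence-notifier wait and `persist` for the persister callback):

```rust
use std::time::Duration;

async fn async_background_processor<W, P>(wait_timeout: W, persist: P) -> std::io::Result<()>
where
    W: Fn(Duration) -> bool + Clone + Send + 'static,
    P: Fn() -> std::io::Result<()>,
{
    loop {
        let wait = wait_timeout.clone();
        // The blocking Condvar wait runs on tokio's blocking pool so the
        // async worker threads stay free.
        let needs_persist =
            tokio::task::spawn_blocking(move || wait(Duration::from_millis(100)))
                .await
                .expect("wait task panicked");
        if needs_persist {
            persist()?;
        }
    }
}
```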
Cool! Wasn't sure if our use of `Condvar` in `ChannelManager` would be a problem, but I think it's ok to mix non-tokio sync code.
@TheBlueMatt What do you think? Looks like @devrandom's change could be DRY-ed up and included without much repetition.
Note that the use of `tokio::task::spawn_blocking` allows standard blocking code to be spawned inside a tokio runtime. It uses a separate thread that isn't responsible for other things. The `Condvar` waiting is inside that thread.
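A self-contained toy demonstrating exactly that - a plain std `Condvar` wait parked on tokio's blocking pool (ChannelManager's notifier does something similar internally):

```rust
use std::sync::{Arc, Condvar, Mutex};

#[tokio::main]
async fn main() {
    let pair = Arc::new((Mutex::new(false), Condvar::new()));

    // Some async task eventually signals the condvar...
    let notifier = pair.clone();
    tokio::spawn(async move {
        let (lock, cvar) = &*notifier;
        *lock.lock().unwrap() = true;
        cvar.notify_one();
    });

    // ...while the blocking wait runs on tokio's blocking thread pool,
    // so it never stalls the async worker threads.
    let waiter = pair.clone();
    tokio::task::spawn_blocking(move || {
        let (lock, cvar) = &*waiter;
        let mut signaled = lock.lock().unwrap();
        while !*signaled {
            signaled = cvar.wait(signaled).unwrap();
        }
    })
    .await
    .expect("blocking task panicked");
}
```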
Force-pushed from 478235b to fc93106.
src/main.rs (outdated):
```rust
		event_notifier.clone(),
		tcp_stream,
	)
	.await;
```
I believe this waits until the connection is closed, which is not what you want here.
Does the future need to be held onto in this case? If so, we could push them into a `Vec`. Our `lightning-net-tokio` doesn't do so, but that may be an oversight.
I think anything in a `tokio::spawn` makes progress regardless, since the threadpool implicitly awaits it.
Sorry, that wasn't clear. You would need another spawn on the `setup_inbound` to fire it off.
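That is, something like this sketch (variable names follow the snippet above; the `setup_inbound` signature is as used in this PR):

```rust
let peer_mgr = peer_manager.clone();
let event_ntfns = event_notifier.clone();
// setup_inbound only resolves once the connection closes, so fire it
// off as its own task; tokio drives spawned tasks to completion even
// if the JoinHandle is never awaited.
tokio::spawn(async move {
    lightning_net_tokio::setup_inbound(peer_mgr, event_ntfns, tcp_stream).await;
});
```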
Overall, this looks great! Very well organized and easy to read through. Thanks for spending so much effort on making this guide-worthy. 😄
Left mostly minor comments. Still tinkering a bit on a node abstraction, but that can come as a follow-up.
```
@@ -0,0 +1,792 @@
# This file is automatically @generated by Cargo.
```
Does this need to be included because this is a binary? I vaguely recall that being the case but just want to make sure.
I would say you want this to protect against dependency attacks by locking to hashes, i.e., presumably you have reviewed all the locked dependencies and only need to re-review if this file gets modified.
src/main.rs (outdated):
```rust
println!(
	"\nEVENT: received payment from payment hash {} of {} millisatoshis",
	hex_utils::hex_str(&payment_hash.0),
	payment.amt_msat
);
print!("> ");
```
Would it be preferable to log these? The user may be typing something when these are printed.
Yeah, I think that'd be better. Down to address in follow-up.
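For example, a minimal sketch of that follow-up using the log crate's macros (assuming a logger is wired up; the repo's actual logging setup may differ):

```rust
// Route the notice through the logger instead of printing over the
// user's prompt line.
log::info!(
    "EVENT: received payment from payment hash {} of {} millisatoshis",
    hex_utils::hex_str(&payment_hash.0),
    payment.amt_msat
);
```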
```rust
let peer_mgr = peer_manager.clone();
let event_ntfns = event_notifier.clone();
tokio::spawn(async move {
	lightning_net_tokio::setup_outbound(peer_mgr, event_ntfns, pubkey, stream).await;
```
@TheBlueMatt any idea why outbound connections to RL nodes aren't working? The node ID never shows up in `get_peer_node_ids`. Outbound connections to lnd work.
Could you squash any fixup commits? Feel free to squash everything into one commit if that makes the most sense.
…onnect to chan peers on startup
We're going to split payment storage into two maps, one for inbound and one for outbound, to prevent clashes between payment hashes.
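A sketch of what that split might look like (the types here are placeholders approximating this repo's payment storage, not the actual commit):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Placeholder approximating the repo's PaymentInfo struct.
struct PaymentInfo;

// Keyed by the 32-byte payment hash.
type PaymentInfoStorage = Arc<Mutex<HashMap<[u8; 32], PaymentInfo>>>;

fn main() {
    // Two independent maps: the same payment hash can now appear on the
    // inbound and the outbound side without clashing.
    let inbound_payments: PaymentInfoStorage = Arc::new(Mutex::new(HashMap::new()));
    let outbound_payments: PaymentInfoStorage = Arc::new(Mutex::new(HashMap::new()));
    let _ = (inbound_payments, outbound_payments);
}
```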
Force-pushed from bebb837 to a73d564.
Squashed!
TODOs
- `announce_channel = true` (it results in a `no_route_found` error from lnd)
- `cli::send_payment` -- use the route hints from the invoice