Conversation
This needs a rebase after #7 has been merged. Also, compilation currently fails in multiple places.
```rust
let channel_id = self.request_to_cid.remove(request_id)?;

if let Some(channel) = self.channels_by_id.get_mut(&channel_id) {
	if channel.state == state {
```
Mh, IIUC this means we would just remove the channel and fail silently if it's in the wrong state? We should probably return an error and at least log this, even though it should never happen.
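A minimal sketch of what surfacing the error could look like (the `LiquidityError` variants, the `channel_for_request` helper, and the `log_error!` call are hypothetical here, not part of the PR):

```rust
// Hypothetical sketch: return an error instead of silently dropping the
// channel when it is found in an unexpected state.
fn channel_for_request(
	&mut self, request_id: &RequestId, expected_state: ChannelState,
) -> Result<&mut Channel, LiquidityError> {
	let channel_id = self
		.request_to_cid
		.remove(request_id)
		.ok_or(LiquidityError::UnknownRequest)?;
	let channel = self
		.channels_by_id
		.get_mut(&channel_id)
		.ok_or(LiquidityError::UnknownChannel)?;
	if channel.state != expected_state {
		// This should never happen, so log it loudly rather than failing silently.
		log_error!(self.logger, "Channel {:?} is in an unexpected state", channel_id);
		return Err(LiquidityError::UnexpectedChannelState);
	}
	Ok(channel)
}
```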
```rust
*self.peer_manager.lock().unwrap() = Some(peer_manager);
}

fn connect_to_counterparty(&self, counterparty_node_id: PublicKey) {
```
I'm confused: what is the purpose of this method and why are we generating a random channel id?
cde4adc to 759f2ec (compare)
Did a high-level pass over the `channel_manager.rs` structures and found them a bit confusing, tbh. Would be great to get some clarity here and as part of the docs.
```rust
// confirms_within_blocks: Option<u32>,
// channel_expiry_blocks: Option<u32>,
announce_channel: bool,
order_id: Option<OrderId>,
```
See above: if we store the `Order` object that led to the channel, this field would be superfluous.
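A minimal sketch of that suggestion (the field layout is illustrative; only `announce_channel` and the `Order` type come from the PR):

```rust
// Hypothetical sketch: store the full Order on the channel instead of just
// its id, so order-derived fields are not duplicated.
struct Channel {
	announce_channel: bool,
	// The order that led to this channel, if any; the order id can then be
	// derived from it instead of being stored in a separate order_id field.
	order: Option<Order>,
}
```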
```rust
Ok(())
}

/* No errors for getinfo method */
```
nit: Please remove.
Would love to land this first work-in-progress draft so that the code can be included in the upcoming project restructuring. For this, could you resolve any conflicts with current main and make sure it compiles and passes tests?
Thanks!
Did a quick high-level pass. Will look more closely when this is rebased on main and the conflicts have been resolved. As stated above, let's do this soon so that all this code can be part of the restructuring PR coming up.
```rust
{
	entropy_source: ES,
	peer_manager: Mutex<Option<Arc<PeerManager<Descriptor, CM, RM, OM, L, CMH, NS>>>>,
	channel_manager: Arc<ChannelManager<M, T, ES, NS, SP, F, R, L>>,
```
We can use the `AChannelManager` trait to at least make this a bit easier now.
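A sketch of what that could look like, assuming LDK's `AChannelManager` trait from `lightning::ln::channelmanager` (the `LiquidityManager` struct and the `our_node_id` method are illustrative, not the PR's actual code):

```rust
use std::ops::Deref;

use bitcoin::secp256k1::PublicKey;
use lightning::ln::channelmanager::AChannelManager;

// Hypothetical sketch: a single Deref + AChannelManager bound replaces the
// long list of ChannelManager type parameters on the struct itself.
struct LiquidityManager<CM: Deref>
where
	CM::Target: AChannelManager,
{
	channel_manager: CM,
}

impl<CM: Deref> LiquidityManager<CM>
where
	CM::Target: AChannelManager,
{
	fn our_node_id(&self) -> PublicKey {
		// get_cm() hands back the concrete ChannelManager.
		self.channel_manager.get_cm().get_our_node_id()
	}
}
```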
```rust
impl InboundRequestState {
	fn info_received(&self, versions: Vec<u16>, options: OptionsSupported) -> Result<Self, ChannelStateError> {
		let max_shared_version = versions
```
Note that we're in the process of dropping versioning support in all specs, so all the corresponding code can be removed: BitcoinAndLightningLayerSpecs/lsp#63
b37b754 to f1e259e (compare)
@Srg213 Thank you for the rebase and addressing most of the comments! If you don't mind, could you undraft this PR? I'd then land it and start the restructuring work early next week. Any further changes to make this work could then happen as follow-ups.
Thanks! Going ahead and merging this draft so that we have a unified codebase. Any further changes can then be done in follow-up PRs.