Trivial Followups to #1825 (#1904)


Merged
merged 6 commits on Dec 12, 2022

Conversation

TheBlueMatt
Collaborator

Various little nits I stumbled upon in #1825. A few are actually followups to #1825 itself, but most are just issues in code that #1825 happens to touch.

@TheBlueMatt TheBlueMatt force-pushed the 2022-12-1825-followups branch from efc0861 to f7966ab Compare December 7, 2022 00:44
@ariard ariard left a comment

There is also this comment from #1825, if we would like to have sane variable names: #1825 (comment)

```rust
if request.outpoints().len() != tx.input.len() {
	// Do a linear search for each input (we don't expect large input sets to
	// exist) to ensure the request's input set is fully spent to be resilient
	// against the external claim reordering inputs.
```

I think here we can assume we have counterparty-malleable transactions, where only one input is claiming an HTLC output and all the remaining ones can be freely malleated (this even sounds true pre-anchor, for an HTLC-preimage spend of the holder commitment transaction). Therefore, you might have to assume transactions with thousands of inputs, and the worst case would rather be bounded by the block size. If you assume something like 41 bytes for the outpoint/nSequence/script_sig and 1 WU for the witness (say an OP_TRUE), each input takes 165 WU, giving ~24k inputs per block to iterate on, so the inner iterations would be more like 576M. Even if this reasoning is accurate and you could really stall an LDK node this way, I think it's a low concern, as your counterparty would have to buy the whole blockspace and keep doing so.
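The back-of-the-envelope bound above can be checked with a few lines (a sketch; the 165 WU per-input cost and the 4M WU block weight limit are the assumptions stated in the comment):

```rust
fn main() {
    // Assumptions from the comment above: 41 non-witness bytes per input
    // (36-byte outpoint + 4-byte nSequence + 1-byte empty script_sig length),
    // each counting 4 WU, plus ~1 WU of witness (e.g. an OP_TRUE push).
    let input_weight_wu: u64 = 41 * 4 + 1; // 165 WU
    let block_weight_limit_wu: u64 = 4_000_000;

    // How many such inputs fit in one block, and how many inner iterations
    // a quadratic membership check over them would take.
    let max_inputs = block_weight_limit_wu / input_weight_wu;
    let inner_iterations = max_inputs * max_inputs;

    println!("max inputs per block: {}", max_inputs); // 24242, i.e. ~24k
    println!("inner iterations: {}", inner_iterations); // ~588M (the ~576M above uses the rounded 24k)
}
```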

Collaborator Author

No, because the outer loop is over the request's outpoints, not the total number of inputs; it's still just input-count * expected-request-count. Still, I went ahead and replaced it with a sorted vec + binary searches.
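A minimal, self-contained sketch of that sorted-vec + binary-search idea (using a tuple as a stand-in for bitcoin's `OutPoint`; the names here are illustrative, not LDK's actual API):

```rust
// Stand-in for bitcoin::OutPoint: (txid byte, vout). Illustrative only.
type OutPoint = (u8, u32);

// Returns true iff every outpoint in `request` is spent by `tx_inputs`.
// Sorting once (O(N log N)) plus one binary search per request outpoint
// (O(M log N)) replaces the earlier O(M * N) nested linear scans.
fn request_fully_spent(request: &[OutPoint], tx_inputs: &[OutPoint]) -> bool {
    let mut sorted: Vec<&OutPoint> = tx_inputs.iter().collect();
    sorted.sort_unstable();
    request.iter().all(|rp| sorted.binary_search(&rp).is_ok())
}

fn main() {
    let tx_inputs = [(3, 0), (1, 1), (2, 0)];
    // The request's outpoints are all present, in any order.
    assert!(request_fully_spent(&[(1, 1), (2, 0)], &tx_inputs));
    // An outpoint the transaction doesn't spend fails the check.
    assert!(!request_fully_spent(&[(9, 0)], &tx_inputs));
    println!("ok");
}
```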


From my understanding, if the iteration is based on the transaction's matched inputs, the counterparty could influence it. Though, as mentioned, I don't think this is a real concern, as we're ultimately bounded by the block size here. Effectively, it's not so bad to be more conservative here: if we modify our aggregation logic to enable cross-channel aggregation, we could have more than 483*2 inputs in a transaction issued by ourselves. We might do so in the future to save on fees.

@TheBlueMatt
Collaborator Author

> There is also this comment from #1825, if we would like to have sane variable names: #1825 (comment)

Sure, feel free to open a followup. Neither this PR nor #1825 was touching that code.

@TheBlueMatt TheBlueMatt force-pushed the 2022-12-1825-followups branch from b0c0941 to e05e02d Compare December 7, 2022 18:41
@codecov-commenter

codecov-commenter commented Dec 7, 2022

Codecov Report

Base: 90.60% // Head: 90.62% // Increases project coverage by +0.02% 🎉

Coverage data is based on head (00cdee3) compared to base (d9d4611).
Patch coverage: 80.95% of modified lines in pull request are covered.

❗ Current head 00cdee3 differs from pull request most recent head 67f9f01. Consider uploading reports for the commit 67f9f01 to get more accurate results

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1904      +/-   ##
==========================================
+ Coverage   90.60%   90.62%   +0.02%     
==========================================
  Files          91       91              
  Lines       48656    48643      -13     
==========================================
- Hits        44083    44081       -2     
+ Misses       4573     4562      -11     
Impacted Files Coverage Δ
lightning/src/chain/channelmonitor.rs 91.00% <ø> (ø)
lightning/src/util/events.rs 25.45% <ø> (ø)
lightning/src/ln/chan_utils.rs 93.56% <66.66%> (-0.05%) ⬇️
lightning/src/chain/onchaintx.rs 95.38% <91.66%> (+2.44%) ⬆️
lightning/src/chain/package.rs 91.59% <0.00%> (-0.17%) ⬇️
lightning/src/ln/functional_tests.rs 97.06% <0.00%> (-0.02%) ⬇️
lightning/src/ln/channelmanager.rs 86.71% <0.00%> (+0.02%) ⬆️


☔ View full report at Codecov.

@TheBlueMatt TheBlueMatt force-pushed the 2022-12-1825-followups branch from e05e02d to 1e63e05 Compare December 7, 2022 19:19
@wpaulino
Contributor

wpaulino commented Dec 7, 2022

Should we include a change here to re-word the code comment "all from the same commitment" that prompted #1905? Otherwise this LGTM after squash.

@TheBlueMatt
Collaborator Author

Not 100% sure I get how to clarify that, but I took a stab and pushed an extra commit; let me know if you think it's clearer.

In 19daccf, a `PackageId` type was
added to differentiate between an opaque Id for packages and the
`Txid` type which was being used for that purpose. It, however,
failed to also replace the single inner field in
`OnchainEvent::Claim` which was also a package ID. We do so here.

In `update_claims_view_from_matched_txn` we have two different
tx-equivalence checks which do the same thing - both check that the
tx which appeared on chain spent all of the outpoints which we
intended to spend in a given package. While one is more efficient
than the other (but only usable in a subset of cases), the
difference between O(N) and O(N^2) when N is 1-5 is trivial.

Still, it is possible we hit this code with just shy of 900 HTLC
outputs in a channel, and a transaction with a ton of inputs.

While having to spin through a few million entries if our
counterparty wastes a full block isn't really a big deal, we go
ahead and use a sorted vec and binary searches because it's trivial.

`Event`s with an odd type are ignored by older versions of LDK,
however they rely on the `write_tlv_fields` call to know how many
bytes to read and discard. We were missing that call in our writing
of `Event::BumpTransaction`, which we add here.
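The mechanism that last commit relies on can be illustrated with a generic length-prefixed encoding (a hypothetical sketch, not LDK's actual `write_tlv_fields` format): unknown odd-type records carry an explicit length, so an old reader can discard exactly the right number of bytes.

```rust
use std::io::{Cursor, Read, Write};

// Illustrative only: each event is a one-byte type followed by a
// length-prefixed body, so a reader that doesn't recognize an odd type
// can still skip the correct number of bytes.
fn write_event(w: &mut impl Write, ty: u8, body: &[u8]) -> std::io::Result<()> {
    w.write_all(&[ty])?;
    w.write_all(&(body.len() as u16).to_be_bytes())?; // length lets old readers skip
    w.write_all(body)
}

// Reads one event: Some(body) for a known type (here: type 0), or None
// after skipping an unrecognized odd-type event.
fn read_event(r: &mut impl Read) -> std::io::Result<Option<Vec<u8>>> {
    let mut ty = [0u8; 1];
    r.read_exact(&mut ty)?;
    let mut len = [0u8; 2];
    r.read_exact(&mut len)?;
    let mut body = vec![0u8; u16::from_be_bytes(len) as usize];
    r.read_exact(&mut body)?; // without the length prefix, this skip is impossible
    if ty[0] % 2 == 1 { Ok(None) } else { Ok(Some(body)) }
}

fn main() -> std::io::Result<()> {
    let mut buf = Vec::new();
    write_event(&mut buf, 27, b"new-odd-event")?; // unknown to old readers
    write_event(&mut buf, 0, b"known")?;
    let mut cur = Cursor::new(buf);
    assert_eq!(read_event(&mut cur)?, None); // odd type skipped cleanly
    assert_eq!(read_event(&mut cur)?.as_deref(), Some(&b"known"[..]));
    println!("ok");
    Ok(())
}
```

The bug fixed here was the analogue of omitting that length prefix: the reader would no longer know where the unknown event ends.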
@TheBlueMatt TheBlueMatt force-pushed the 2022-12-1825-followups branch from c9ac4ad to 00cdee3 Compare December 7, 2022 23:18
wpaulino previously approved these changes Dec 7, 2022
@TheBlueMatt
Collaborator Author

Further tweaked comment wording with @wpaulino's suggestion and a few more words.

@ariard ariard left a comment

ACK 67f9f01

Thanks for clarifying the comment. Documenting all our aggregation and claiming assumptions will be really valuable in the future if we enable cross-channel aggregation for fee/weight-compression performance reasons (even if that's years from now).

@ariard

ariard commented Dec 8, 2022

> Sure, feel free to open a followup. Neither this PR nor 1825 were touching that code.

I'll take it in its own PR, addressing a few more followups from #1825, especially starting to document the external fee-bumping reserve requirements in the vanilla anchor-output deployment.

Comment on lines -737 to +739
```rust
// If the claim does not require external funds to be allocated through
// additional inputs we can simply check the inputs in order as they
// cannot change under us.
if request.outpoints().len() != tx.input.len() {
	let mut tx_inputs = tx.input.iter().map(|input| &input.previous_output).collect::<Vec<_>>();
	tx_inputs.sort_unstable();
	for request_input in request.outpoints() {
		if tx_inputs.binary_search(&request_input).is_err() {
```
Contributor

Took a little while but convinced myself this works. Squashes it down nicely too.

@TheBlueMatt TheBlueMatt merged commit a4df59b into lightningdevkit:main Dec 12, 2022