Doc and comment followups to #2562 #2591
Conversation
Codecov Report: All modified lines are covered by tests ✅
@@            Coverage Diff             @@
##             main    #2591      +/-   ##
==========================================
+ Coverage   88.86%   90.14%    +1.27%
==========================================
  Files         113      113
  Lines       84517    92721     +8204
  Branches    84517    92721     +8204
==========================================
+ Hits        75109    83580     +8471
+ Misses       7205     6910      -295
- Partials     2203     2231       +28
☔ View full report in Codecov by Sentry.
@@ -98,7 +98,7 @@ impl MonitorUpdateId {
 /// If at some point no further progress can be made towards persisting the pending updates, the
 /// node should simply shut down.
 ///
-/// * If the persistence has failed and cannot be retried further (e.g. because of some timeout),
+/// * If the persistence has failed and cannot be retried further (e.g. because of an outage),
 ///   [`ChannelMonitorUpdateStatus::UnrecoverableError`] can be used, though this will result in
 ///   an immediate panic and future operations in LDK generally failing.
 ///
I think there is some inherent gap in how clients should or will implement async persist.
IMO, they might just start a thread or future and, on completion, simply call `channel_monitor_updated`. (The other approach is queued messages.)
In the thread/future case, if there is a failure inside it, I don't expect them to implement "infinite retries in the thread" for each InProgress monitor_update individually (whereas the current documentation for async persist seems to assume that).
Hence #2562 (comment).
Let me know if I'm misunderstanding something.
For a future, I think they absolutely will retry infinitely in the task - there's very little overhead for the task and there's no reason to move to batching it. If they're doing a full OS thread, I agree, it probably doesn't make sense to spawn a whole thread; rather, hopefully you simply notify an existing background thread. Still, I'm not quite sure what you're suggesting I change here - can you provide a concrete suggested phrasing change?
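To make the "retry infinitely in the task" pattern concrete, here is a minimal sketch of that shape, assuming a tokio runtime: the persister returns InProgress immediately, and a spawned task keeps retrying the write until it succeeds, then signals completion (per this thread, that signal would be a `channel_monitor_updated` call). `write_monitor_to_store` and `notify_complete` are hypothetical placeholders, not LDK API.

```rust
use std::time::Duration;

// Hypothetical async write of a serialized ChannelMonitor to whatever durable
// store the implementation uses; stands in for the real persistence backend.
async fn write_monitor_to_store(key: String, bytes: Vec<u8>) -> Result<(), std::io::Error> {
    let _ = (key, bytes); // ... actual I/O against the store would go here ...
    Ok(())
}

// The Persist impl returns InProgress immediately and hands the work to this
// task, which retries until the write succeeds and then reports completion
// back to the ChainMonitor (e.g. via channel_monitor_updated).
fn persist_in_background(
    key: String,
    bytes: Vec<u8>,
    notify_complete: impl FnOnce() + Send + 'static,
) {
    tokio::spawn(async move {
        loop {
            match write_monitor_to_store(key.clone(), bytes.clone()).await {
                Ok(()) => {
                    // Persistence finally succeeded: tell LDK the update is done.
                    notify_complete();
                    break;
                }
                Err(e) => {
                    // Transient failure: back off and retry. The task is cheap,
                    // so retrying each update individually is fine.
                    eprintln!("monitor persist failed, retrying: {e}");
                    tokio::time::sleep(Duration::from_secs(1)).await;
                }
            }
        }
    });
}
```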
Yes, I didn't feel it is alright to have many infinitely-retrying resources in the system (which could be anything) if there is an outage or DB failure. (But it's OK, whatever it is.)
For the asynchronous persistence case, we need to mention that implementations should retry all pending persistence operations in the background with [`ChainMonitor::list_pending_monitor_updates`] and [`ChainMonitor::get_monitor`].
Currently we mention this only for the sync case in these docs.
For async users I kinda assume they don't do that polling but instead just spawn a task that `loop {}`s forever.
I don't think we can assume that; I think users will trip up on this, since async has many different ways to be implemented and this is more about failure handling in async.
We can have something like:
"Implementations should either implement async persistence by retrying infinitely in a loop, OR retry all pending persistence operations in the background using [`ChainMonitor::list_pending_monitor_updates`] and [`ChainMonitor::get_monitor`]."
Monitor update failure can no longer lead to a `ChannelUnavailable` error, but more common cases such as the peer being disconnected always could, so those should be documented more clearly.
Timeouts may be worth retrying, but an outage is a more general condition which obviously cannot simply be retried.
which described the script type incorrectly.
Force-pushed from 5ee4186 to 39094d4
This clarifies somewhat that async persistence should run indefinitely or keep trying via polling, and that either is acceptable.
👍