Stop reading monitors when persisting in updating persister #2706
@domZippilli Wouldn't it be easier for the MUP to store some (conservatively updated and persisted) tracking state mapping the monitor id to the latest update id? That might allow us to not read the stored monitor during persist at all?
I mean, if we can avoid additional state, that'd be ideal - additional state means we have to check that it's consistent with the existing state and potentially handle inconsistency between them.
Yeah, there was a (monitor_id, u64) map in here at some point in the process. When we had the discussions on the RFC I think there was a general preference to avoid duplicate state in the MUP, which is how we ended up reading it from storage like this. It'd be ideal, I suppose, to read just the required bytes, but ranged reads in KVStore seem like asking a lot.
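The tracking-state idea being discussed could be sketched roughly as follows. This is a hypothetical shape, not the actual MonitorUpdatingPersister API: all names (`UpdateTracker`, `record_persist`, the string keying) are invented for illustration, and it assumes the map itself would be conservatively persisted alongside the monitors.

```rust
use std::collections::HashMap;

// Hypothetical in-memory tracking state: maps a monitor's storage key to the
// highest update_id known to have been consolidated into a full monitor
// write, so stale per-update entries can be deleted without reading the
// stored monitor back from the KVStore.
struct UpdateTracker {
    latest_update_ids: HashMap<String, u64>,
}

impl UpdateTracker {
    fn new() -> Self {
        Self { latest_update_ids: HashMap::new() }
    }

    /// Record that a full monitor was persisted at `update_id` and return the
    /// inclusive range of now-stale update ids that can be deleted blindly.
    /// If the tracker ever got out of date, the worst case is some
    /// redundant/no-op deletes, never data loss.
    fn record_persist(
        &mut self,
        monitor_key: &str,
        update_id: u64,
    ) -> std::ops::RangeInclusive<u64> {
        let prev = self
            .latest_update_ids
            .insert(monitor_key.to_string(), update_id)
            .unwrap_or(0);
        prev + 1..=update_id
    }
}
```

The trade-off raised above applies directly: this map is duplicate state, so its consistency with the stored monitors has to be reasoned about, which is exactly the complexity the thread ends up deciding against.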
Between the redundant deletes and reading the monitor_update_id bytes, I prefer reading the update_id bytes (after reading full_monitor from storage; this avoids allocating for some of the bigger things in that struct, which is our main pain point). And this can be specifically problematic if the consolidation threshold (
Yes, we would need to implement it in a way that would never break anything if it got out of date. I think worst case we would issue a bunch of redundant/no-op delete operations to catch up.
Yeah, I think I also suggested going with some tracking state in the PR originally. There definitely is a trade-off between performance and robustness/complexity here. If we now find that these CMU reads are substantially increasing heap fragmentation, we might need to reconsider. However, we might still think the superfluous reads are not worth introducing the additional complexity. I guess that depends on how bad the 'brutalizing' really is.
I'm not sure I'm following here? Reading the full monitor will allocate and return a |
Tentatively assigning to @G8XSU
Doing a monitor read for deletes will mean reading the full monitor bytes, which does kinda suck, vs not doing a monitor read will just mean 100 deletes - in cases where users have a real need for the
I don't think this is really the case. One huge allocation followed by it being deallocated should be mostly tolerable, especially if we're talking about something substantially larger than a page or two, where it's just going to get its own special handling by being given a few pages of its own. That will still lead to memory bloat, since those pages are unlikely to be returned to the OS, but at least it won't be a huge amount of fragmentation that we can never reuse.

So, all that said, I'm okay with either solution, but marginally prefer to just issue the deletes and move on, because it feels simpler than trying to figure out partial reading. I don't think there's a huge performance argument for either, which generally means I'd prefer to avoid yet more allocations, which, even if they don't create more fragmentation, do mean we use yet more memory.
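The "just issue the deletes" option can be sketched like this. The helper name and key format are hypothetical (not LDK's actual storage layout), and it assumes the store treats removal of a missing key as a harmless no-op, so deleting keys that were never written is only wasted I/O, not an error:

```rust
// Sketch: after consolidating a monitor at `update_id`, derive every possible
// stale per-update key within the consolidation window and remove each one,
// without first reading the stored monitor to learn which ones actually
// exist. Worst case is `maximum_pending_updates` redundant delete calls.
fn stale_update_keys(
    monitor_key: &str,
    update_id: u64,
    maximum_pending_updates: u64,
) -> Vec<String> {
    let start = update_id.saturating_sub(maximum_pending_updates);
    (start..update_id)
        .map(|id| format!("{monitor_key}/{id}"))
        .collect()
}
```

This is the shape of the trade described above: a bounded batch of possibly-redundant deletes in exchange for never allocating the full monitor during cleanup.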
There are 3 possible approaches:
Overall my preference for implementation would be (3+2) > (1) > (2). Let me know if everyone is on the same page.
SGTM. I still think we might be able to improve on the laid-out options if we'd take on some additional complexity for tracking state. However, since that seems to be off the table
All sounds good to me. |
Minor hiccup when we are persisting with update_id == CLOSED_CHANNEL_UPDATE_ID. Option-1: it is fine to leave some updates since we have
^ Will be going with Option-2
That seems fine. Note that there can be multiple CLOSED_CHANNEL_UPDATE_ID monitor updates and all must be persisted (or just the full monitor each time, which seems fine for a closed channel). |
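The closed-channel caveat could look roughly like this in the persist path. `CLOSED_CHANNEL_UPDATE_ID` is `u64::MAX` in LDK, but the helper and the exact threshold check are hypothetical sketches, not the crate's real code. Because multiple monitor updates can all carry `CLOSED_CHANNEL_UPDATE_ID`, their per-update keys would collide, so each such update falls back to writing the full monitor:

```rust
// Sentinel LDK uses for monitor updates after a channel is closed.
const CLOSED_CHANNEL_UPDATE_ID: u64 = u64::MAX;

// Hypothetical decision helper: write the full monitor (instead of a
// differential update) both at the consolidation threshold and for every
// post-close update, since several updates can share the closed-channel id.
fn should_persist_full_monitor(update_id: u64, maximum_pending_updates: u64) -> bool {
    update_id == CLOSED_CHANNEL_UPDATE_ID || update_id % maximum_pending_updates == 0
}
```

Writing the full monitor each time is cheap enough for a closed channel, since the channel sees no further meaningful update traffic.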
Turns out this line is brutalizing our heap and leading to fragmentation; it needs to go away:

rust-lightning/lightning/src/util/persist.rs, line 613 (at 281a0ae)