Move transaction broadcasting and fee estimation to dedicated modules #205

Conversation
Force-pushed from 7042e38 to f8fe15d.
Force-pushed from 3335b34 to efea678.
Force-pushed from efea678 to 69710b7.
Rebased on main to resolve minor conflicts.
LGTM
src/tx_broadcaster.rs (outdated)

pub(crate) async fn process_next_package(&self) {
    let mut receiver = self.queue_receiver.lock().await;
    if let Some(next_package) = receiver.recv().await {
Rather than just processing one package, are we able to keep reading until nothing's left?
Yeah, I originally avoided that since broadcasting might also trigger the Esplora rate limits, so I thought it might make sense to let the server breathe for ~1 sec between packages. However, it probably doesn't make a big difference, and we already have the 500ms retry delay now. So I made it a `while let Some(..)` now.
We introduce a separate `TransactionBroadcaster` which will regularly process a message queue of transactions to be broadcast. This allows us to a) decouple broadcasting from the BDK wallet and b) add transactions to the queue from a blocking context, without the need to schlep around a separate runtime or mess with it.
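A minimal sketch of that queue-based design (illustrative names and a simplified payload type, not the exact ldk-node code): the sender half of a tokio channel is synchronous and cheaply cloneable, so packages can be enqueued from a blocking context, while an async background task drains the queue.

```rust
use tokio::sync::{mpsc, Mutex};

// Stand-in payload; the real broadcaster queues packages of `bitcoin::Transaction`s.
type Package = Vec<Vec<u8>>;

pub(crate) struct TransactionBroadcaster {
    queue_sender: mpsc::UnboundedSender<Package>,
    queue_receiver: Mutex<mpsc::UnboundedReceiver<Package>>,
}

impl TransactionBroadcaster {
    pub(crate) fn new() -> Self {
        let (queue_sender, queue_receiver) = mpsc::unbounded_channel();
        Self { queue_sender, queue_receiver: Mutex::new(queue_receiver) }
    }

    // Callable from a blocking context: `UnboundedSender::send` neither blocks
    // nor needs a runtime handle, so no extra runtime or `block_in_place`.
    pub(crate) fn queue_package(&self, package: Package) {
        let _ = self.queue_sender.send(package);
    }

    // Driven from the async background task; `recv().await` waits for further
    // packages, so this keeps processing for as long as the channel is open.
    pub(crate) async fn process_queue(&self) {
        let mut receiver = self.queue_receiver.lock().await;
        while let Some(package) = receiver.recv().await {
            for _tx in &package {
                // Broadcast each transaction via the esplora client here.
            }
        }
    }
}
```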
Due to the rate-limiting applied by many Esplora servers we often receive HTTP 429 ('too many requests') errors during syncing. Here, we simply give broadcasting transactions a second chance after a slight delay of 500ms to ease the pain of immediate failures. Generally, rebroadcasting will then be initiated by the `OutputSweeper`.
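The second-chance logic might look roughly like this (a hedged sketch; the real code matches on the concrete `esplora_client` error variants rather than retrying on any error):

```rust
use std::future::Future;
use std::time::Duration;

// Try a broadcast once; on failure, wait 500ms and try exactly once more.
async fn broadcast_with_retry<F, Fut, E>(mut broadcast: F) -> Result<(), E>
where
    F: FnMut() -> Fut,
    Fut: Future<Output = Result<(), E>>,
{
    if let Err(_first_err) = broadcast().await {
        // Many Esplora servers rate-limit (HTTP 429); give the server a
        // moment to breathe before the second attempt.
        tokio::time::sleep(Duration::from_millis(500)).await;
        return broadcast().await;
    }
    Ok(())
}
```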
We also decouple fee estimation from the BDK on-chain wallet and BDK's corresponding `EsploraBlockchain`. Instead we use the esplora client directly in a dedicated `OnchainFeeEstimator` object. For one, this change is nice as it allows us to move more things out of the `Wallet` thread and corresponding locks. Moreover, it makes things more modular in general, which makes future upgrades and testing easier.
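A sketch of that split (assumed shape, illustrative names): an async task periodically refreshes a cache, e.g. from `esplora_client::AsyncClient::get_fee_estimates`, while LDK-facing calls read cached values without touching the network or the wallet lock.

```rust
use std::collections::HashMap;
use std::sync::RwLock;

pub(crate) struct OnchainFeeEstimator {
    // Confirmation target (blocks) -> fee rate in sats per 1000 weight units.
    fee_rate_cache: RwLock<HashMap<usize, u32>>,
}

impl OnchainFeeEstimator {
    pub(crate) fn new() -> Self {
        Self { fee_rate_cache: RwLock::new(HashMap::new()) }
    }

    // Called from the async background task with freshly fetched estimates.
    pub(crate) fn update_cache(&self, estimates: HashMap<usize, u32>) {
        *self.fee_rate_cache.write().unwrap() = estimates;
    }

    // Cheap, blocking read suitable for serving LDK's `FeeEstimator` trait.
    pub(crate) fn est_sat_per_1000_weight(&self, target_blocks: usize) -> u32 {
        const FALLBACK_SAT_PER_KW: u32 = 253; // minimum relay fee floor
        self.fee_rate_cache
            .read()
            .unwrap()
            .get(&target_blocks)
            .copied()
            .unwrap_or(FALLBACK_SAT_PER_KW)
    }
}
```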
Force-pushed from 69710b7 to ea31d2d.
Added new transaction-bytes-logging commit and included minor requested changes:

> git diff-tree -U2 69710b7 ea31d2d
diff --git a/src/lib.rs b/src/lib.rs
index 8adfe55..ae0fda1 100644
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -686,5 +686,5 @@ impl<K: KVStore + Sync + Send + 'static> Node<K> {
}
_ = interval.tick() => {
- tx_bcaster.process_next_package().await;
+ tx_bcaster.process_queue().await;
}
}
diff --git a/src/logger.rs b/src/logger.rs
index cab62c6..b0504a3 100644
--- a/src/logger.rs
+++ b/src/logger.rs
@@ -1,4 +1,4 @@
pub(crate) use lightning::util::logger::Logger;
-pub(crate) use lightning::{log_debug, log_error, log_info, log_trace};
+pub(crate) use lightning::{log_bytes, log_debug, log_error, log_info, log_trace};
use lightning::util::logger::{Level, Record};
diff --git a/src/tx_broadcaster.rs b/src/tx_broadcaster.rs
index ae98373..4e56cba 100644
--- a/src/tx_broadcaster.rs
+++ b/src/tx_broadcaster.rs
@@ -1,5 +1,6 @@
-use crate::logger::{log_debug, log_error, log_trace, Logger};
+use crate::logger::{log_bytes, log_debug, log_error, log_trace, Logger};
use lightning::chain::chaininterface::BroadcasterInterface;
+use lightning::util::ser::Writeable;
use esplora_client::AsyncClient as EsploraClient;
@@ -34,7 +35,7 @@ where
}
- pub(crate) async fn process_next_package(&self) {
+ pub(crate) async fn process_queue(&self) {
let mut receiver = self.queue_receiver.lock().await;
- if let Some(next_package) = receiver.recv().await {
+ while let Some(next_package) = receiver.recv().await {
for tx in &next_package {
match self.esplora_client.broadcast(tx).await {
@@ -67,4 +68,9 @@ where
e
);
+ log_trace!(
+ self.logger,
+ "Failed broadcast transaction bytes: {}",
+ log_bytes!(tx.encode())
+ );
}
}
@@ -77,4 +83,9 @@ where
e
);
+ log_trace!(
+ self.logger,
+ "Failed broadcast transaction bytes: {}",
+ log_bytes!(tx.encode())
+ );
}
},
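For context, the lib.rs hunk in the diff above plugs `process_queue` into the node's background task. A sketch of that wiring (assumed shape, reusing the `TransactionBroadcaster` sketched earlier; the stop-signal handling is an assumption about the surrounding `select!` branches):

```rust
use std::sync::Arc;
use std::time::Duration;

async fn run_broadcast_task(
    tx_bcaster: Arc<TransactionBroadcaster>,
    mut stop_receiver: tokio::sync::watch::Receiver<()>,
) {
    let mut interval = tokio::time::interval(Duration::from_secs(1));
    loop {
        tokio::select! {
            // Shut down when the node signals the background tasks to stop.
            _ = stop_receiver.changed() => return,
            // Otherwise, periodically hand control to the broadcaster.
            _ = interval.tick() => {
                tx_bcaster.process_queue().await;
            }
        }
    }
}
```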
Fixes #22.

Previously, transaction broadcasting and fee estimation were the concern of our BDK-based onchain wallet and its `EsploraBlockchain`. However, this for one meant these functions were also subject to BDK's locking requirements, potentially even requiring additional runtime threads. Given that the `Blockchain` trait is going away in BDK 1.0 anyways, the little convenience it gave us wasn't worth the additional complexity. Moreover, the prior design was not particularly modular, which made creating mock objects for tests hard, e.g., in #152.

Here, we therefore move transaction broadcasting and fee estimation to two dedicated modules which take a cloned `EsploraClient` instance. This also allows us to make the calls in an async fashion while providing the blocking interface to LDK objects in a more reasonable manner that doesn't lean on tokio's `block_in_place`.

While general rebroadcasting of transactions will be initiated via the `OutputSweeper` to come in #152, we now also immediately retry rebroadcasts after a small delay if they fail due to connection (or HTTP) errors.