Do not commit on each update when batching, and allow for setting the max batch size #3
base: master
Conversation
batching.go (Outdated)
```diff
 }

 // Batch creates a set of deferred updates to the database.
 func (d *Datastore) Batch() (ds.Batch, error) {
-	return &batch{ds: d, batch: &pgx.Batch{}}, nil
+	b := &batch{ds: d, batch: &pgx.Batch{}, maxBatchSize: 0}
```
Should we set the default `maxBatchSize` to 1, since that emulates the current behavior?
Force-pushed from 8e6d63a to 6795d2e (Compare)
This is like a multi-batch. I'm not sure if it should be in this library. You could easily build a batching datastore that wraps this one and provides this functionality.
I'd be ok with accepting the change to not wrap each query in a transaction. I think this would be consistent with the badger datastore, but I'm interested in the reason why.
```go
}

if err != nil {
	b.batch = &pgx.Batch{}
```
Do you need to queue a `BEGIN` here?
good call, thanks!
In the Hydra Boosters we are experiencing txnid wraparound failures because autovacuum is never able to complete. We have ameliorated this by tuning autovacuum to run (a lot) more often on the table itself; however, it would be good to reduce the number of transactions. I added the max batch size tunable because https://github.com/libp2p/go-libp2p-kad-dht/blob/0b7ac010657443bc0675b3bd61133fe04d61d25b/providers/providers_manager.go#L32 hardcodes the batch size to 256, and when we tried this approach (single transaction per batch without tuning)
The issue with wrapping this with a separate datastore is that
No description provided.