Do not commit on each update when batching, and allow for setting the max batch size #3

Open · mcamou wants to merge 4 commits into master from mc/batching-updates

Conversation

@mcamou commented Jan 13, 2022

No description provided.

batching.go (outdated):

```diff
 }

 // Batch creates a set of deferred updates to the database.
 func (d *Datastore) Batch() (ds.Batch, error) {
-	return &batch{ds: d, batch: &pgx.Batch{}}, nil
+	b := &batch{ds: d, batch: &pgx.Batch{}, maxBatchSize: 0}
```
@mcamou (Author) commented:
Should we set the default maxBatchSize to 1 since that emulates the current behavior?
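A minimal sketch of the semantics in question, assuming pgx v4 and an illustrative blocks (key, data) table; the put and sendAndCommit names are hypothetical stand-ins for the PR's actual methods. With maxBatchSize == 1 every update flushes immediately, emulating the old commit-per-update behavior, while 0 never auto-flushes:

```go
package sketch

import (
	"context"

	"github.com/jackc/pgx/v4"
	"github.com/jackc/pgx/v4/pgxpool"
)

type batch struct {
	pool         *pgxpool.Pool
	batch        *pgx.Batch
	maxBatchSize int // 0 = never auto-flush; 1 = flush after every update
}

func (b *batch) put(ctx context.Context, key string, value []byte) error {
	b.batch.Queue(
		`INSERT INTO blocks (key, data) VALUES ($1, $2)
		 ON CONFLICT (key) DO UPDATE SET data = $2`,
		key, value)
	if b.maxBatchSize > 0 && b.batch.Len() >= b.maxBatchSize {
		return b.sendAndCommit(ctx)
	}
	return nil
}

// sendAndCommit pipelines the queued statements to Postgres and starts a
// fresh pgx.Batch for subsequent updates.
func (b *batch) sendAndCommit(ctx context.Context) error {
	res := b.pool.SendBatch(ctx, b.batch)
	if err := res.Close(); err != nil {
		return err
	}
	b.batch = &pgx.Batch{}
	return nil
}
```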

@mcamou force-pushed the mc/batching-updates branch from 8e6d63a to 6795d2e on January 13, 2022 at 18:44
@alanshaw (Owner) left a comment:

This is like a multi-batch. I'm not sure if it should be in this library. You could easily build a batching datastore that wraps this one and provides this functionality.

I'd be ok with accepting the change to not wrap each query in a transaction... I think this would be consistent with the badger datastore, but I'm interested in the reason why.
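A rough sketch of the wrapping datastore suggested above, assuming the context-free Batch interface from go-datastore v0.4; the chunkedBatch type and its threshold logic are hypothetical and not part of either library:

```go
package sketch

import (
	ds "github.com/ipfs/go-datastore"
)

type chunkedBatch struct {
	inner   ds.Batching // the wrapped datastore, e.g. ipfs-ds-postgres
	cur     ds.Batch    // current inner batch
	pending int         // operations queued in cur
	max     int         // commit the inner batch after this many operations
}

func NewChunkedBatch(inner ds.Batching, max int) (*chunkedBatch, error) {
	b, err := inner.Batch()
	if err != nil {
		return nil, err
	}
	return &chunkedBatch{inner: inner, cur: b, max: max}, nil
}

func (b *chunkedBatch) Put(key ds.Key, value []byte) error {
	if err := b.cur.Put(key, value); err != nil {
		return err
	}
	b.pending++
	return b.maybeFlush()
}

func (b *chunkedBatch) Delete(key ds.Key) error {
	if err := b.cur.Delete(key); err != nil {
		return err
	}
	b.pending++
	return b.maybeFlush()
}

// maybeFlush commits the inner batch and starts a fresh one once the
// threshold is hit, bounding the size of any single transaction.
func (b *chunkedBatch) maybeFlush() error {
	if b.max <= 0 || b.pending < b.max {
		return nil
	}
	if err := b.cur.Commit(); err != nil {
		return err
	}
	next, err := b.inner.Batch()
	if err != nil {
		return err
	}
	b.cur, b.pending = next, 0
	return nil
}

func (b *chunkedBatch) Commit() error {
	return b.cur.Commit()
}
```

Because the wrapper commits the inner batch every max operations, no single transaction grows without bound, which is the multi-batch behavior being discussed.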

```go
	}

	if err != nil {
		b.batch = &pgx.Batch{}
```
@alanshaw (Owner) commented:
Do you need to queue a BEGIN here?
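For reference, a minimal pgx v4 sketch of what queueing an explicit BEGIN/COMMIT around a batch looks like; the connection string and the blocks table are placeholders:

```go
package main

import (
	"context"
	"log"

	"github.com/jackc/pgx/v4"
)

func main() {
	ctx := context.Background()
	conn, err := pgx.Connect(ctx, "postgres://localhost/ipfs")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	b := &pgx.Batch{}
	b.Queue("BEGIN")
	b.Queue("INSERT INTO blocks (key, data) VALUES ($1, $2)", "k1", []byte("v1"))
	b.Queue("INSERT INTO blocks (key, data) VALUES ($1, $2)", "k2", []byte("v2"))
	b.Queue("COMMIT")

	// All queued statements are pipelined on one connection; the explicit
	// BEGIN/COMMIT makes them commit (or fail) as a single transaction.
	if err := conn.SendBatch(ctx, b).Close(); err != nil {
		log.Fatal(err)
	}
}
```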

@mcamou (Author) replied:
good call, thanks!

@mcamou commented Jan 14, 2022

In the Hydra Boosters we are experiencing transaction ID (xid) wraparound failures because autovacuum is never able to complete. We have ameliorated this by tuning autovacuum on the table itself to run (a lot) more often, but it would still be good to reduce the number of transactions. I added the max batch size tunable because https://github.com/libp2p/go-libp2p-kad-dht/blob/0b7ac010657443bc0675b3bd61133fe04d61d25b/providers/providers_manager.go#L32 hardcodes the batch size to 256, and when we tried this approach (a single transaction per batch, without the tunable) findprov time went up, so I am thinking that (at least for the Hydras) 256 is too high.
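A sketch of the kind of per-table autovacuum tuning described above, executed through pgx v4; the blocks table name and the thresholds are illustrative assumptions, not the Hydras' actual settings:

```go
package main

import (
	"context"
	"log"

	"github.com/jackc/pgx/v4"
)

func main() {
	ctx := context.Background()
	conn, err := pgx.Connect(ctx, "postgres://localhost/ipfs")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	// Per-table storage parameters that make autovacuum run much more
	// aggressively on this one table, so it can freeze old tuples well
	// before xid wraparound becomes a risk.
	_, err = conn.Exec(ctx, `ALTER TABLE blocks SET (
		autovacuum_vacuum_scale_factor = 0.01,
		autovacuum_freeze_max_age      = 100000000
	)`)
	if err != nil {
		log.Fatal(err)
	}
}
```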

@mcamou commented Jan 14, 2022

The issue with wrapping this in a separate datastore is that ipfs-ds-postgres owns the DB connection, and transactions are per-connection. So aside from removing the wrapping of each statement in a transaction, you would have to either expose the connection or add methods that allow beginning/committing the transaction from the outside.
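A purely illustrative sketch of that second option: hypothetical BeginTx/CommitTx methods on the Datastore that a wrapping batcher could call. Neither method exists in ipfs-ds-postgres; the point is that the transaction must live on the datastore's own pooled connection:

```go
package sketch

import (
	"context"
	"fmt"

	"github.com/jackc/pgx/v4"
	"github.com/jackc/pgx/v4/pgxpool"
)

type Datastore struct {
	pool *pgxpool.Pool
	tx   pgx.Tx // in-flight transaction, if any
}

// BeginTx starts a transaction on one of the datastore's own pooled
// connections; since transactions are per-connection, only the datastore
// can do this, which is why a wrapper alone is not enough.
func (d *Datastore) BeginTx(ctx context.Context) error {
	if d.tx != nil {
		return fmt.Errorf("transaction already in progress")
	}
	tx, err := d.pool.Begin(ctx)
	if err != nil {
		return err
	}
	d.tx = tx
	return nil
}

// CommitTx commits the in-flight transaction and releases its connection.
func (d *Datastore) CommitTx(ctx context.Context) error {
	if d.tx == nil {
		return fmt.Errorf("no transaction in progress")
	}
	err := d.tx.Commit(ctx)
	d.tx = nil
	return err
}
```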
