
Keep track of the S3 storage URL configured and error if the URL configured doesn't match #55


Open
emmaling27 opened this issue Mar 6, 2025 · 2 comments


@emmaling27
Contributor

The backend fails to start if local storage is configured and you then try to start with S3 storage, or vice versa. But if you switch from S3 to R2 (or to a different S3 URL), the backend starts anyway, writes to the new storage, and fails to get files it expects to be there because they're actually in the old storage.

The backend should include the S3 URL in the StorageTagInitializer stored in the database.
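A minimal sketch of what that check could look like, assuming a hypothetical `StorageTag` type (the names here are illustrative, not the actual Convex types): persist the storage configuration, including the S3 endpoint URL, and refuse to start when the configured value no longer matches what was stored.

```rust
// Hypothetical sketch: persist the storage configuration, including the
// S3 endpoint URL, and compare it against the current config at startup.
#[derive(Debug, Clone, PartialEq)]
pub enum StorageTag {
    Local,
    S3 { endpoint_url: String },
}

/// Compare the tag stored in the database at first startup against the
/// tag derived from the current configuration; error on any mismatch,
/// including a changed S3 endpoint URL.
pub fn check_storage_tag(stored: &StorageTag, configured: &StorageTag) -> Result<(), String> {
    if stored == configured {
        Ok(())
    } else {
        Err(format!(
            "storage mismatch: database was initialized with {:?} but backend is configured with {:?}",
            stored, configured
        ))
    }
}
```

Because the URL is part of the comparison, switching from S3 to R2 (or to a different S3 endpoint) would fail fast at startup instead of silently splitting files across two stores.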

@Spioune
Contributor

Spioune commented Mar 6, 2025

I was thinking: what if the user wants to go from S3 to R2 and has copied the bucket (and all the files)? In that case it should not throw an error.
That makes me think that checking the storage type, as it is done today, is maybe not the right way?
Maybe a better approach would be to check whether the files exist? @emmaling27

@emmaling27
Contributor Author

It seems hard to verify that all the files exist. You'd have to find all the places in the db where we keep track of files, iterate over them, and make a request to the cloud storage provider for each one to check that the SHAs match. The supported flow for migrating storage providers is to do a snapshot export and import (see these instructions), which should ensure all files are copied over correctly.
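To illustrate why that verification is expensive, here is a rough sketch of what it would entail, with all names hypothetical: every tracked file record costs one request to the storage provider just to compare checksums.

```rust
use std::collections::HashMap;

// Hypothetical sketch of the rejected full-verification approach: every
// file record the database tracks needs one request to cloud storage to
// compare checksums. All names here are illustrative.
pub struct FileRecord {
    pub key: String,
    pub sha256: String,
}

/// `fetch_sha256` stands in for a per-object HEAD/GET against cloud storage.
pub fn verify_all_files(
    records: &[FileRecord],
    fetch_sha256: impl Fn(&str) -> Option<String>,
) -> Result<(), String> {
    for record in records {
        match fetch_sha256(&record.key) {
            // Object is present and its checksum matches the db record.
            Some(sha) if sha == record.sha256 => continue,
            Some(_) => return Err(format!("checksum mismatch for {}", record.key)),
            None => return Err(format!("object missing from storage: {}", record.key)),
        }
    }
    Ok(())
}
```

With one round-trip per file, this scales linearly with the number of stored files, which is why a single stored configuration check is the simpler guard.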
