The backend fails to start up if you have local storage configured and then try to start with S3 storage, and vice versa. But if you switch from S3 to R2 (or to a different S3 url), the backend will start but write to the new storage and fail to get files it expects to be there because they're actually in the old storage.
The backend should include the S3 URL in the StorageTagInitializer stored in the database.
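One possible shape for that check, as a minimal sketch (the `StorageTag` struct and its fields here are hypothetical, not the backend's actual `StorageTagInitializer` schema): persist the endpoint and bucket alongside the storage type, and refuse to start when the configured values diverge from what was stored at initialization.

```rust
// Hypothetical sketch only: `StorageTag` and its fields are illustrative,
// not the backend's real `StorageTagInitializer` type.
#[derive(Debug, PartialEq, Eq)]
struct StorageTag {
    kind: String,             // e.g. "local" or "s3"
    endpoint: Option<String>, // S3/R2 URL, None for local storage
    bucket: Option<String>,
}

fn check_storage_tag(stored: &StorageTag, configured: &StorageTag) -> Result<(), String> {
    if stored != configured {
        // Fail fast instead of silently writing to a new, empty bucket.
        return Err(format!(
            "storage mismatch: database was initialized with {:?}, but the backend is configured with {:?}",
            stored, configured
        ));
    }
    Ok(())
}

fn main() {
    let stored = StorageTag {
        kind: "s3".into(),
        endpoint: Some("https://s3.us-east-1.amazonaws.com".into()),
        bucket: Some("convex-files".into()),
    };
    let configured = StorageTag {
        kind: "s3".into(),
        endpoint: Some("https://<account>.r2.cloudflarestorage.com".into()),
        bucket: Some("convex-files".into()),
    };
    // With the URL included in the tag, this fails at startup rather than
    // at the first read of a file that only exists in the old storage.
    println!("{:?}", check_storage_tag(&stored, &configured));
}
```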
I was thinking: what if the user wants to go from S3 to R2 and has already copied the bucket (and all the files)? In that case it should not throw an error.

That makes me think that checking the storage type the way it is done today may not be the right approach?

Maybe a better way would be to check whether the files exist or not? @emmaling27
It seems hard to verify that all the files exist. You'd have to find all the places in the db where we keep track of files, iterate over them, and make a request to the cloud storage provider for each one to check that the shas match. The supported flow for migrating storage providers is to do a snapshot export and import (see these instructions), which should ensure all files are copied over correctly.
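To illustrate why that existence check is expensive, here is a rough sketch. The `FileRecord` type and `load_file_records` are made-up stand-ins for however the backend tracks files in its database; only the aws-sdk-s3 calls are real APIs. Every tracked file needs its own HEAD request against the bucket, which gets slow and costly for deployments with many files.

```rust
// Rough sketch: verifying that every tracked file still exists in the bucket.
// `FileRecord` and `load_file_records` are hypothetical placeholders for the
// backend's actual file bookkeeping.
use aws_sdk_s3::Client;

struct FileRecord {
    key: String, // object key under which the file was stored
}

async fn load_file_records() -> Vec<FileRecord> {
    // Placeholder: in reality this would iterate the backend's file tables.
    vec![FileRecord { key: "files/abc123".into() }]
}

async fn verify_files(client: &Client, bucket: &str) -> Result<(), String> {
    // One HEAD request per tracked file -- this is what makes the check
    // expensive, and it only proves existence, not content integrity.
    for record in load_file_records().await {
        client
            .head_object()
            .bucket(bucket)
            .key(&record.key)
            .send()
            .await
            .map_err(|e| format!("missing or unreadable object {}: {e}", record.key))?;
    }
    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), String> {
    let config = aws_config::load_from_env().await;
    let client = Client::new(&config);
    verify_files(&client, "convex-files").await
}
```

A snapshot export and import sidesteps all of this, since the import step itself guarantees the files land in the newly configured storage.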