Support IsolationLevels and Concurrency Safety Validation Checks #819
Comments
Hi, I am interested in working on it!

Some relevant links to the Java implementation:
Hey @sungwy, I would like to contribute by working on these. Is there one of these that I can pick and start looking into, like any of the initial validation implementations?
@guptaakashdeep yes, I don't think there's a particular order we should implement these in, so please feel free to assign yourself to the one you find most interesting! Sung
Thanks @sungwy! Do we have an already existing class where I can implement these validation functions, or should we just add them directly in
I think we could create a new module as
Sounds good @sungwy!
@guptaakashdeep @sungwy see #1935, which should be the building blocks needed to crank out the 4 sub-issues.
Also going to crank out a manifest group implementation today.
Edit: @sungwy it looks like the manifestgroup.entries method is extremely similar to the
Feature Request / Improvement
Support enforcing Isolation Levels from a specified snapshot ID.
https://iceberg.apache.org/docs/latest/spark-configuration/#write-options
There's been a lot of continued interest in using multiple PyIceberg applications concurrently and having proper support for optimistic concurrency.
I think the best place to start is the implementation of the individual validation functions.
Once this is complete, we'll be able to introduce the Isolation Levels and correctly implement the validation logic in the _OverwriteFiles snapshot producer, similar to the Java implementation.
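As a rough illustration of what one such validation function could look like, here is a minimal, self-contained sketch. It does not use PyIceberg's actual internal APIs; the `Snapshot` dataclass, the `IsolationLevel` enum, and the `validate_no_conflicting_operations` helper are all hypothetical names invented for this example. The idea mirrors the Java validations: walk the snapshot history from the current snapshot back to the snapshot the write started from, and fail the commit if any intervening snapshot performed a conflicting operation.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Dict, Iterator, Optional


class IsolationLevel(Enum):
    # Hypothetical enum; PyIceberg may model isolation levels differently.
    SERIALIZABLE = "serializable"
    SNAPSHOT = "snapshot"


@dataclass(frozen=True)
class Snapshot:
    # Simplified stand-in for a table snapshot.
    snapshot_id: int
    parent_id: Optional[int]
    operation: str  # e.g. "append", "overwrite", "delete"


def ancestors_between(
    current: Snapshot,
    starting_snapshot_id: int,
    snapshots_by_id: Dict[int, Snapshot],
) -> Iterator[Snapshot]:
    """Yield snapshots committed after starting_snapshot_id, newest first."""
    snap: Optional[Snapshot] = current
    while snap is not None and snap.snapshot_id != starting_snapshot_id:
        yield snap
        snap = snapshots_by_id.get(snap.parent_id) if snap.parent_id else None


def validate_no_conflicting_operations(
    current: Snapshot,
    starting_snapshot_id: int,
    snapshots_by_id: Dict[int, Snapshot],
) -> None:
    """Raise if any snapshot since the starting snapshot overwrote or deleted data."""
    for snap in ancestors_between(current, starting_snapshot_id, snapshots_by_id):
        if snap.operation in ("overwrite", "delete"):
            raise ValueError(
                f"Conflict: snapshot {snap.snapshot_id} "
                f"performed a {snap.operation} after the write started"
            )
```

With a chain `append -> append -> overwrite`, validating from the latest snapshot back to the first would raise, while validating a chain of pure appends would pass. The real implementation would additionally scan manifest entries (e.g. for conflicting data or delete files in overlapping partitions) rather than only looking at the operation type.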