Manually register einsum xla #8787
Conversation
As the override is written, there is no meaningful unit test we can add, since we rely on the generated XLANativeFunctions function. I could do something like what xla/torch_xla/csrc/aten_xla_type.cpp does at line 1531 (commit 17270e2).
The cleanest way to do something like that would be to create a utility function shared by both.
@tengyifei @ysiraichi: Do y'all have any opinions on this? |
@pgmoka are you looking for a unit test? A good test IMO is what we wrote in the https://github.com/tengyifei/playground/blob/master/aot-einsum-3.ipynb notebook. We could verify the lowering of einsum in a custom op. Another test is that we could remove the two workarounds referenced in https://github.com/search?q=repo%3Apytorch%2Fxla+8713&type=code, and then the unit test for XLAPatchedLinear should still pass, because we also check its lowering there. |
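As a rough illustration of the lowering check described above, a Python unit test could look like the sketch below. This is a minimal sketch rather than code from this PR: the helper `torch_xla._XLAC._get_xla_tensors_text` and the expectation that the lazy IR contains an `einsum` node (instead of a decomposition) are assumptions.

```python
import torch
import torch_xla
import torch_xla.core.xla_model as xm


def test_einsum_is_lowered_as_single_node():
    device = xm.xla_device()
    a = torch.rand(4, 5, device=device)
    b = torch.rand(5, 6, device=device)
    out = torch.einsum('ij,jk->ik', a, b)
    # Dump the lazy IR for the pending computation. If the XLA einsum kernel
    # was dispatched to, the graph should contain an einsum node instead of
    # the permute/matmul decomposition from PyTorch's composite kernel.
    ir_text = torch_xla._XLAC._get_xla_tensors_text([out])
    assert 'einsum' in ir_text
```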
CC: @lsy323 |
Let's add a unit test in Python and also remove the _xla_einsum
workaround in this PR (which will also test that this registration worked).
In this PR, we modify this behavior to always accommodate explicitly donated buffers, working alongside LTC if it is enabled.
Co-authored-by: Siyuan Liu <[email protected]>
Co-authored-by: Han Qi <[email protected]>
Not sure how all these commits got into this branch. Usually I rebase the branch on top of the latest master and then force push. This way I only have a single commit in the PR. |
One thing you can do is to check for |
I honestly don't know how this happened either. I think I messed something up while fetching the current master to rebase the branch with. I needed to do this to get the latest changes related to |
Gotcha. In that case could you squash the commits from git and reset the commit message so that it hopefully doesn't confuse future readers? Thanks! |
Use .backward() with in-place grad mutations for the GA API (#8768)
Use placeholder tensor in scan (#8785)
Pin update to 20250303 (#8788)
Co-authored-by: Chengji Yao <[email protected]>
correct linter
e2aace1 to b922fa0
Pull request was closed
Too many conflicts. I accidentally merged from master rather than rebasing, and it caused a bunch of issues. My changes are small enough that I will just carry on in a separate PR. I apologize to the reviewers for the noise |
Do manual registration of XLANativeFunctions::einsum for XLA.
This is necessary because PyTorch currently overwrites the AutogradXLA key registration with its XLA key registration. While ideally we would be able to resolve the underlying problem, this workaround resolves the issue from our end. It is also not possible to use full code generation due to #8739.
This manual registration relies on the XLANativeFunctions::einsum function from xla/torch_xla/csrc/aten_xla_type.cpp.
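For illustration only, one quick way to observe from Python whether the manual registration is actually being exercised is to look at torch_xla's per-op counters. This is a hedged sketch, not part of this PR: the counter name `xla::einsum` assumes the per-op counter naming used in aten_xla_type.cpp, and `counter_value` returning None for a counter that was never bumped is also an assumption.

```python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.debug.metrics as met


def einsum_hits_xla_kernel():
    met.clear_counters()
    device = xm.xla_device()
    a = torch.rand(3, 4, device=device)
    b = torch.rand(4, 2, device=device)
    torch.einsum('ij,jk->ik', a, b)
    # If dispatch reaches XLANativeFunctions::einsum, its per-op counter is
    # bumped at trace time; if PyTorch's composite kernel decomposes the op
    # first, the counter is never created.
    return met.counter_value('xla::einsum') is not None
```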