
Add canonicalization for aten.add.tensor op #935


Merged

Conversation

erman-gurses
Collaborator

@erman-gurses erman-gurses commented Jun 13, 2022

This PR is related to issue #911

@erman-gurses erman-gurses changed the title Add canonicalization for aten.add.tensor op [Draft] Add canonicalization for aten.add.tensor op Jun 13, 2022
@erman-gurses
Collaborator Author

Added canonicalization based on the examples below:

Example1:

%0 = torch.prim.NumToTensor.Scalar %int0 : !torch.int -> !torch.vtensor<[],si64>
%1 = torch.prim.NumToTensor.Scalar %int2 : !torch.int -> !torch.vtensor<[],si64>
%2 = torch.aten.add.Tensor %0, %1, %int3 : !torch.vtensor<[],si64>, !torch.vtensor<[],si64>, !torch.int -> !torch.vtensor<[],si64>

Example2:

%0 = torch.vtensor.literal(dense<0> : tensor<si64>) : !torch.vtensor<[],si64>
%1 = torch.prim.NumToTensor.Scalar %int2 : !torch.int -> !torch.vtensor<[],si64>
%2 = torch.aten.add.Tensor %0, %1, %int3 : !torch.vtensor<[],si64>, !torch.vtensor<[],si64>, !torch.int -> !torch.vtensor<[],si64>

After canonicalization, they should look like this:

%0 = torch.aten.mul.int %int2, %int3 : !torch.int, !torch.int -> !torch.int
%1 = torch.aten.add.int %0, %int0 : !torch.int, !torch.int -> !torch.int
%2 = torch.prim.NumToTensor.Scalar %1 : !torch.int -> !torch.vtensor<[],si64>
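The arithmetic behind this fold can be illustrated in plain Python: for 0-d integer tensors, `aten.add.Tensor(self, other, alpha)` computes `self + alpha * other`, which is exactly what the canonicalized `aten.mul.int`/`aten.add.int` sequence does on the underlying scalars. This is a minimal sketch of the equivalence; the function name `fold_add_tensor` is illustrative and not part of the patch:

```python
def fold_add_tensor(self_val: int, other_val: int, alpha: int) -> int:
    # Mirrors the canonicalized IR above:
    mul = other_val * alpha   # %0 = torch.aten.mul.int %int2, %int3
    return mul + self_val     # %1 = torch.aten.add.int %0, %int0

# Example1/Example2 above: self = 0, other = 2, alpha = 3.
print(fold_add_tensor(0, 2, 3))  # → 6
```

The final `torch.prim.NumToTensor.Scalar` simply wraps the computed scalar back into a 0-d `si64` tensor.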

@erman-gurses erman-gurses requested a review from pashu123 June 13, 2022 15:25
@nirvedhmeshram
Collaborator

I think it would be helpful to add an IR test here:
https://github.com/llvm/torch-mlir/blob/main/test/Dialect/Torch/canonicalize.mlir
to see how the pattern works.

@erman-gurses
Collaborator Author

erman-gurses commented Jun 13, 2022

@nirvedhmeshram Yes, I will do that. I still need to handle the case below; working on it.
%0 = torch.vtensor.literal(dense<0> : tensor<si64>) : !torch.vtensor<[],si64>

@vivekkhandelwal1
Collaborator

The issue being addressed by this PR: #911.

@erman-gurses erman-gurses self-assigned this Jun 17, 2022
@erman-gurses erman-gurses marked this pull request as ready for review June 17, 2022 16:12
@erman-gurses erman-gurses requested a review from silvasean June 17, 2022 16:17
@pashu123 pashu123 changed the title [Draft] Add canonicalization for aten.add.tensor op Add canonicalization for aten.add.tensor op Jun 17, 2022
@erman-gurses erman-gurses force-pushed the canonicalization-for-aten-add-tensor-op branch from 8bab003 to 837e563 on June 17, 2022 18:51
Collaborator

@ramiro050 ramiro050 left a comment


Thanks for this! I have a few comments below.

Collaborator

@ramiro050 ramiro050 left a comment


I just have a couple of small comments, but other than that it LGTM!

@erman-gurses erman-gurses force-pushed the canonicalization-for-aten-add-tensor-op branch from 45523d6 to 80128df on June 23, 2022 13:52
@erman-gurses erman-gurses requested a review from silvasean June 23, 2022 14:00