Using Torch-MLIR as a front-end for MLIR files #1175
Comments
I think you are looking for the examples: https://github.com/llvm/torch-mlir/blob/main/examples/torchscript_resnet18_all_output_types.py

We definitely need to surface this better in our docs, I guess -- where would you recommend we talk about / mention this API?
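For reference, a minimal sketch of the kind of usage that example demonstrates, assuming the `torch_mlir.compile` entry point and its `output_type` argument; the linked file is authoritative, and the exact names and accepted values may differ across versions:

```python
import torch
import torchvision.models as models

import torch_mlir

# Load a pretrained ResNet-18 and build an example input; the shapes follow
# the usual ImageNet convention and are only illustrative.
resnet18 = models.resnet18(pretrained=True).eval()
example_input = torch.ones(1, 3, 224, 224)

# Compile the same model to several output dialects. The output_type values
# here are assumptions based on the linked example.
module_torch = torch_mlir.compile(resnet18, example_input, output_type="torch")
module_linalg = torch_mlir.compile(resnet18, example_input,
                                   output_type="linalg-on-tensors")
module_tosa = torch_mlir.compile(resnet18, example_input, output_type="tosa")

# Each result is an MLIR module; print it or write the textual IR to a file.
with open("resnet18_linalg.mlir", "w") as f:
    f.write(str(module_linalg))
```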
Perhaps on the development page, right after the …

I'll try to write something, just so I remember it better next time, and then send a PR. Feel free to correct as much as needed (wording is not my strongest suit) before merging.
Adding an example of how to extract MLIR output from the compilation process, in various formats, to the development documentation. This should help developers trying either to debug torch_mlir or to use it to extract MLIR output for other testing. Fixes #1175
We want to use different front-ends to look at the IR they produce and see if we can feed them into our optimising pipeline. Torch-MLIR seems to support a lot of the features we want, mainly being able to handle different MLIR dialects for the same models.

The idea is to get a model in Python, run it through Torch-MLIR, and get the MLIR file at the end. Looking at the tests, I see that there's a `python` folder with the JIT IR exporting MLIR files via `ModuleBuilder`, but it seems it only outputs the Torch dialect.

Looking further at the tests, I see in the `Conversion` folder that there's a `torch-mlir-opt` that converts the Torch dialect into the others. While I can see `MHLO` and `TOSA`, I can also see `Linalg`, `SCF` and `Std` separate from each other. From the docs, I assumed there was one that used all three together.

So I have two questions:

1. Is there a way for `ModuleBuilder` to export directly to a particular dialect? If not, is `torch-mlir-opt` the best option?
2. Can I convert to `Linalg` with `torch-mlir-opt`? Will it use `SCF` and `Std` for the remaining ops automatically?

@silvasean
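For context on the second question, one way to check which dialects actually appear after lowering is to compile a small module at the linalg-on-tensors level and scan the printed IR. This is only a sketch: it assumes the `torch_mlir.compile` API with a string `output_type`, and the toy `Scale` module is illustrative:

```python
import re

import torch
import torch_mlir

# A toy module; the name and body are just for illustration.
class Scale(torch.nn.Module):
    def forward(self, x):
        return x * 2.0

# Lower all the way to the linalg-on-tensors backend contract. The lowering
# is a pass pipeline, so ops that have no linalg equivalent can land in
# other dialects (arith, scf, tensor, ...) alongside linalg.
module = torch_mlir.compile(Scale(), torch.ones(2, 3),
                            output_type="linalg-on-tensors")

# Scan the printed IR for "dialect.op" mnemonics to see which dialects the
# lowered module actually uses.
ir = str(module)
dialects = sorted(set(re.findall(r"\b([a-z_]+)\.[a-z_]+", ir)))
print("dialects in the lowered IR:", dialects)
```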