Using Torch-MLIR as a front-end for MLIR files #1175

Closed
rengolin opened this issue Aug 8, 2022 · 2 comments · Fixed by #1195
rengolin commented Aug 8, 2022

We want to try out different front-ends, look at the IR they produce, and see if we can feed it into our optimising pipeline. Torch-MLIR seems to support a lot of the features we want, mainly being able to handle different MLIR dialects for the same models.

The idea is to take a model in Python, run it through Torch-MLIR, and get an MLIR file at the end. Looking at the tests, I see there's a python folder with the JIT IR exporting MLIR files via ModuleBuilder, but it seems it only outputs the Torch dialect.

Looking further at the tests, I see in the Conversion folder that there's a torch-mlir-opt that converts the Torch dialect into the others. While I can see MHLO and TOSA, I can also see Linalg, SCF and Std as separate conversions. From the docs, I assumed there was a single conversion that used all three together.

So I have two questions:

  1. Is there a way to use the ModuleBuilder to export directly to a particular dialect? If not, is torch-mlir-opt the best option?
  2. Can I export a full model to Linalg with torch-mlir-opt? Will it use SCF and Std for the remaining ops automatically?

@silvasean
Contributor

I think you are looking for the torch_mlir.compile Python function.

Examples: https://github.com/llvm/torch-mlir/blob/main/examples/torchscript_resnet18_all_output_types.py
Tests: https://github.com/llvm/torch-mlir/tree/main/python/test/compile_api
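For reference, a minimal sketch of what using that API looks like (based on the linked example; the exact set of accepted `output_type` values and the `torch_mlir.compile` signature are assumptions from that example and may have changed since):

```python
# Sketch: export a PyTorch model to MLIR in a chosen dialect via torch_mlir.compile.
# Assumes torch and torch_mlir are installed; model/input names here are illustrative.
import torch
import torch_mlir

class Simple(torch.nn.Module):
    def forward(self, x):
        return torch.tanh(x)

model = Simple().eval()
example_input = torch.ones(2, 3)

# output_type selects the dialect of the emitted module, e.g.
# "torch", "linalg-on-tensors", or "tosa".
module = torch_mlir.compile(model, example_input,
                            output_type="linalg-on-tensors")

# The compiled module prints as MLIR text in the requested dialect.
print(module)
```

This covers both questions above: you pick the target dialect directly at compile time instead of going through ModuleBuilder plus torch-mlir-opt by hand.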

We definitely need to surface this better in our docs. Where would you recommend we mention this API?

rengolin commented Aug 9, 2022

Perhaps on the development page, right after the PYTHONPATH section?

I'll try to write something, just so I remember better next time, and then send a PR. Feel free to correct as much as needed (wording is not my strongest suit) before merging.

rengolin added a commit to rengolin/torch-mlir that referenced this issue Aug 9, 2022
Adding an example on how to extract MLIR output from the compilation
process in various different formats to the development documentation.

This should help developers trying to either debug torch_mlir or use it
for the purpose of extracting MLIR outputs for other testing.

Fixes llvm#1175
rengolin added a commit that referenced this issue Aug 10, 2022
Adding an example on how to extract MLIR output from the compilation
process in various different formats to the development documentation.

This should help developers trying to either debug torch_mlir or use it
for the purpose of extracting MLIR outputs for other testing.

Fixes #1175
qedawkins pushed a commit to nod-ai/torch-mlir that referenced this issue Oct 3, 2022