
[RFC] Add ONNX import support to torch-mlir #1255

Closed

qedawkins opened this issue Aug 19, 2022 · 3 comments

qedawkins (Collaborator) commented Aug 19, 2022

Motivation:

  • Some customers and end users of torch-mlir would like a way to use ONNX as a “frontend” and lower down to the dialects they care about (Linalg, MHLO, TOSA).
  • Unify the shape functions and the MHLO / TOSA exporters so customers don’t have to maintain variations of these in multiple repositories.

[Diagram: Torch-MLIR with ONNX frontend]

Goals:

  • Leverage as much of the onnx-mlir “frontend” as possible, without the backend kernels, shape functions, and runtime, and target the torch-mlir dialect / “torch-mlir backend contract”.
  • Retain the “EmitONNXBasic” export from onnx-mlir so exports from torch-mlir can be used in onnx-mlir (discuss with the onnx-mlir team whether this is a possibility).
  • Leverage the ONNX-to-ONNX decompositions from onnx-mlir (discuss with the onnx-mlir team whether this is a possibility).
  • Keep dependencies in torch-mlir minimal: add an onnx submodule at externals/onnx.
  • Ask the onnx-mlir team whether they can dual-license or LLVM-license the onnx-mlir code so we can share the importer code. If that is not a possibility, add a dependency on a small onnx-mlir-exporter repo that can be pulled in at compile time. PoC: https://github.com/nod-ai/onnx-mlir-exporter
  • Use torch-mlir shape functions, etc., once we are in the Torch-MLIR dialect.
  • Use the torch-mlir MHLO and TOSA exporters from the Torch-MLIR dialect.

Non-Goals:

Replicate all of onnx-mlir's capabilities (Kernel dialect, runtime, etc.).

Advantages:

  • Unified usage of the torch-mlir shape library for both Torch and ONNX paths.
  • Unified MHLO and TOSA export paths.
  • ONNX -> Linalg lowering makes it possible to codegen ONNX graphs (see the onnx_add.py example).

Status / Proof of Concept:

An integration between onnx-mlir and torch-mlir is present in the following branch:
https://github.com/nod-ai/torch-mlir/tree/onnx-mlir

It incorporates onnx-mlir into the structure of torch-mlir to allow natural access to the ONNX dialect, without having to build all of the additional tools, features, and runtime provided by onnx-mlir. We can take PyTorch modules, export them to ONNX with torch.onnx.export, and subsequently load the ONNX module into the same frontend used for torch-mlir. This integration includes the ONNX-to-ONNX decompositions found in onnx-mlir.
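
For concreteness, here is a minimal sketch of the first step of that flow using only stock PyTorch; the AddModule and the output file name are illustrative, not taken from the PoC branch:

```python
# Minimal sketch: export a trivial PyTorch module to an ONNX protobuf with
# torch.onnx.export. Module and file name are illustrative only.
import torch

class AddModule(torch.nn.Module):
    def forward(self, x, y):
        return x + y

example_args = (torch.randn(3, 4), torch.randn(3, 4))
torch.onnx.export(AddModule(), example_args, "add.onnx", opset_version=13)
```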

E2E ONNX to Linalg Example

We have an end-to-end lowering: PyTorch -> ONNX (torch.onnx.export) -> ONNX dialect -> Torch dialect -> Linalg -> RefBackend.
ONNX Add
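
As a rough illustration of the remaining stages, here is a hypothetical sketch. The importer entry point import_onnx_model is invented for illustration (the PoC branch may expose a different name); RefBackendLinalgOnTensorsBackend is the reference backend from upstream torch-mlir.

```python
import onnx
import torch_mlir  # assumes the nod-ai/torch-mlir onnx-mlir branch

from torch_mlir_e2e_test.linalg_on_tensors_backends.refbackend import (
    RefBackendLinalgOnTensorsBackend,
)

# Load the protobuf produced by torch.onnx.export in the sketch above.
onnx_model = onnx.load("add.onnx")

# Hypothetical importer: ONNX protobuf -> ONNX dialect -> Torch dialect
# -> Linalg. The actual entry point in the PoC branch may be named
# differently.
module = torch_mlir.import_onnx_model(onnx_model, output_type="linalg-on-tensors")

# Run through torch-mlir's reference backend.
backend = RefBackendLinalgOnTensorsBackend()
loaded = backend.load(backend.compile(module))
```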

Additional ONNX Lowerings

ONNX LogSoftmax

Resnet18

Looking forward to any feedback!

cc
@henrytwo @sjain-stanford @silvasean @ashay @stellaraccident @sstamenova @asaadaldien @stephenneuendorffer @fortianyou @ZihengJiang @byronyi @makslevental

silvasean (Contributor) commented

A few comments on this:

  • If users are already using torch.onnx.export, they can switch to torch_mlir.compile (see the sketch after this list). So I would focus this effort on users that are producing ONNX, but not via Torch.
  • torch.onnx.export already does shape inference, etc., so there is nothing to reuse from Torch-MLIR there.
  • It doesn't make sense for Torch-MLIR to depend on ONNX. If we want an ONNX path that reuses our "backend contract" -> {Linalg, TOSA, MHLO} lowerings, then onnx-mlir can take a dependency on Torch-MLIR.
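
A minimal sketch of that direct path, assuming the torch_mlir.compile Python API as it existed at the time; the AddModule is illustrative:

```python
# Sketch: compile a PyTorch module straight to linalg-on-tensors with
# torch_mlir.compile, skipping the ONNX hop entirely.
import torch
import torch_mlir

class AddModule(torch.nn.Module):
    def forward(self, x, y):
        return x + y

module = torch_mlir.compile(
    AddModule(),
    (torch.randn(3, 4), torch.randn(3, 4)),
    output_type=torch_mlir.OutputType.LINALG_ON_TENSORS,
)
print(module)  # linalg-on-tensors IR, runnable on e.g. RefBackend
```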

qedawkins (Collaborator, Author) commented

@silvasean Thank you for the feedback. Based on your suggestions I opened another RFC on the onnx-mlir side: onnx/onnx-mlir#1639

qedawkins pushed two commits to nod-ai/torch-mlir that referenced this issue on Oct 3, 2022.
silvasean (Contributor) commented

This work seems to have gotten off the ground (see onnx/onnx-mlir#1639). Closing this issue.
