[RFC] Add ONNX import support to torch-mlir #1255
Comments
A few comments on this:
@silvasean Thank you for the feedback. Based on your suggestions I opened another RFC on the onnx-mlir side: onnx/onnx-mlir#1639
This work seems to have gotten off the ground (see onnx/onnx-mlir#1639). Closing this issue.
Motivation:
Goals:
Non-Goals:
Replicate all of onnx-mlir's capabilities (the Krnl dialect, runtime, etc.)
Advantages:
Unified usage of the torch-mlir shape library for both the Torch and ONNX paths.
Unified MHLO and TOSA export paths
ONNX -> Linalg codegen of ONNX graphs is now possible (see the onnx_add.py example).
Status / Proof of Concept:
An integration between onnx-mlir and torch-mlir is present in the following branch:
https://github.com/nod-ai/torch-mlir/tree/onnx-mlir
It incorporates onnx-mlir into the structure of torch-mlir to allow natural access to the ONNX dialect, without having to build all of the additional tools, features, and runtime provided by onnx-mlir. We can take PyTorch modules, export them to ONNX with torch.onnx.export, and then load the ONNX module into the same frontend used for torch-mlir. This integration includes the ONNX-to-ONNX decompositions found in onnx-mlir.
E2E ONNX to Linalg Example
We have an end-to-end lowering: PyTorch -> ONNX (torch.onnx.export) -> ONNX dialect -> Torch dialect -> Linalg -> RefBackend.
ONNX Add
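The ONNX Add op uses NumPy-style multidirectional broadcasting, which any lowering must respect. As an illustration of those semantics only (a standalone pure-Python sketch operating on nested lists, not code from the branch above):

```python
def shape(x):
    """Infer the shape of a nested-list tensor (scalars have shape [])."""
    s = []
    while isinstance(x, list):
        s.append(len(x))
        x = x[0]
    return s

def broadcast_shape(a, b):
    """NumPy-style broadcast: align shapes from the right; size-1 dims stretch."""
    out = []
    for i in range(1, max(len(a), len(b)) + 1):
        da = a[-i] if i <= len(a) else 1
        db = b[-i] if i <= len(b) else 1
        if da != db and 1 not in (da, db):
            raise ValueError("shapes are not broadcastable")
        out.append(max(da, db))
    return out[::-1]

def _read(x, shp, idx):
    """Read x at idx, using index 0 along broadcast (size-1) dimensions."""
    off = len(idx) - len(shp)
    for k, d in enumerate(shp):
        x = x[idx[off + k] if d > 1 else 0]
    return x

def onnx_add(a, b):
    """Elementwise Add with ONNX/NumPy multidirectional broadcasting."""
    sa, sb = shape(a), shape(b)
    out_shape = broadcast_shape(sa, sb)

    def build(idx, dims):
        if not dims:
            return _read(a, sa, idx) + _read(b, sb, idx)
        return [build(idx + [i], dims[1:]) for i in range(dims[0])]

    return build([], out_shape)
```

For example, adding a [2, 2] tensor and a [2] tensor broadcasts the second operand across rows: onnx_add([[1, 2], [3, 4]], [10, 20]) yields [[11, 22], [13, 24]].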
Additional ONNX Lowerings
ONNX LogSoftmax
Resnet18
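As a reference for the LogSoftmax lowering listed above, the op computes log_softmax(x)_i = x_i - log(sum_j exp(x_j)) along an axis; the standard numerically stable form shifts by the maximum first. A pure-Python sketch of the 1-D case (illustrative only, not code from the branch):

```python
import math

def log_softmax(xs):
    """Numerically stable 1-D log-softmax.

    Subtracting the max before exponentiating avoids overflow without
    changing the result, since log-sum-exp is shift-invariant.
    """
    m = max(xs)
    lse = m + math.log(sum(math.exp(x - m) for x in xs))
    return [x - lse for x in xs]
```

The outputs always exponentiate to a distribution summing to 1, which makes a convenient sanity check for any lowered kernel.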
Looking forward to any feedback!
cc
@henrytwo @sjain-stanford @silvasean @ashay @stellaraccident @sstamenova @asaadaldien @stephenneuendorffer @fortianyou @ZihengJiang @byronyi @makslevental