Summary of problems during iteratively executing TorchSimplificationPipeline #1324
Thanks @Vremold for the nice summary of the issues you hit while iteratively running TorchSimplificationPipeline!

For 1., the issue is that the dtype operand of the captured ops is torch.none. The fix should be similar to the one you made in PR #1280.

For 2., shape inference should never create invalid IR. If it does, then that is a bug. Can you make a reproducer (a .mlir file and a command line)?

Reorganization of TorchSimplificationPipeline is not a good solution. It can only hide the problems; it does not fix them. It will always be possible to write a new program that hits these issues. These are all legitimate bugs in the relevant passes and should be individually fixed.
The original Python script comes from the RollModule_basic e2e test case. It looks like this:
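(A minimal sketch of such an e2e test, assuming torch-mlir's standard test harness; the import paths, shapes, and shift values here are illustrative, not the verbatim test.)

```python
import torch

# Assumed torch-mlir e2e harness imports; exact module paths vary by version.
from torch_mlir_e2e_test.framework import TestUtils
from torch_mlir_e2e_test.registry import register_test_case
from torch_mlir_e2e_test.annotations import annotate_args, export


class RollModule(torch.nn.Module):
    @export
    @annotate_args([
        None,
        ([3, -1, 2], torch.float32, True),  # second dim left dynamic
    ])
    def forward(self, x):
        # Roll elements along dims 0 and 2; decompose-complex-ops later
        # rewrites this into slice + cat ops.
        return x.roll([2, -1], [0, 2])


@register_test_case(module_factory=lambda: RollModule())
def RollModule_basic(module, tu: TestUtils):
    module.forward(tu.rand(3, 1, 2))
```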
I recorded the log in the process: the IR after the first run of TorchSimplificationPipeline, the IR after the second run, and the resulting error message.
I understand your concerns, and I will follow the same line of thinking to deal with these issues later.
Closing this because the reduced test case passes now:
Recently, I have been working on iteratively executing TorchSimplificationPipeline in the LowerBackendContract pass. The motivation mainly comes from two aspects, both tied to repeatedly running canonicalizer, shape-simplification-pipeline, and refine-types so that more static shape and dtype information can be propagated. However, if TorchSimplificationPipeline is executed more than once, the following problems arise.
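For illustration, the iterate-until-fixed-point idea can be sketched with torch-mlir's Python pipeline runner; the pipeline name, iteration cap, and helper function below are assumptions for the sketch, not the pass's actual C++ implementation:

```python
from torch_mlir.compiler_utils import run_pipeline_with_repro_report

def simplify_to_fixed_point(module, max_iterations=10):
    """Re-run the simplification pipeline until the IR stops changing."""
    for _ in range(max_iterations):
        before = str(module)
        # Assumed pipeline name; it bundles canonicalizer, shape
        # simplification, and refine-types.
        run_pipeline_with_repro_report(
            module,
            "builtin.module(torch-simplification-pipeline)",
            "Simplifying Torch IR",
        )
        if str(module) == before:  # fixed point reached
            break
    return module
```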
1. If the dtype of an op's result is already refined but cannot be re-derived from the dtype param of the op (for example, dtype is torch.none), the refine-types pass of the next iteration will fail. An example can be seen in PR #1280 (Fix a bug about torch-refine-types pass when the dtype of output tensor is known). Thanks a lot for the suggestions of @ramiro050; they indeed solve part of the problem. However, the following e2e tests still fail: RandLikeModule_basic, FullLikeModuleInt2DStatic_basic, FullLikeModuleInt2D_basic, and FullModuleInt3D_basic. That is because the dtype of the initially captured aten ops is torch.none. Taking RandLikeModule_basic as an example, in the initial IR the dtype operand of the captured op is torch.none (see the first sketch after this list).
2. After the decompose-complex-ops pass, AtenRollOp is split into several AtenSliceTensorOps plus an AtenCatOp. If TorchSimplificationPipeline is run a second time, the two slice results (%3 and %4 in the recorded IR) will be inferred to have static shapes, and verification of the ListConstructOp feeding the cat will fail because it finds that its two operands have different shapes (see the second sketch after this list).
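For 1., the triggering pattern is a *_like-style op called without an explicit dtype. A small PyTorch illustration (this module is hypothetical, not the exact e2e test):

```python
import torch

class RandLike(torch.nn.Module):
    def forward(self, x):
        # No dtype keyword: when this is captured into the Torch dialect,
        # the aten.rand_like op carries dtype = torch.none, so refine-types
        # must re-infer the result dtype from `x` on every iteration.
        return torch.rand_like(x)

scripted = torch.jit.script(RandLike())
# Among the constants feeding aten::rand_like in this graph is a None
# for the dtype argument.
print(scripted.graph)
```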
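For 2., it helps to see why the two list operands naturally have different shapes. A plain PyTorch sketch of the slice-plus-cat decomposition (an illustration of the rewrite, not the pass's actual code):

```python
import torch

def roll_via_slice_cat(x: torch.Tensor, shift: int, dim: int) -> torch.Tensor:
    # Mirror what decompose-complex-ops does to AtenRollOp: two slices
    # followed by a concatenation.
    size = x.size(dim)
    shift = shift % size
    tail = x.narrow(dim, size - shift, shift)  # last `shift` elements
    head = x.narrow(dim, 0, size - shift)      # first `size - shift` elements
    # `tail` and `head` have different extents along `dim` whenever
    # shift != size - shift, so a ListConstruct verifier that requires
    # identical operand shapes is too strict for cat's operand list.
    return torch.cat([tail, head], dim)

x = torch.arange(6).reshape(2, 3)
assert torch.equal(roll_via_slice_cat(x, 2, 1), x.roll(2, 1))
```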
All the problems were found in the current e2e tests. Any ideas on how to solve them? From my point of view, the most straightforward solution is to reorder and reorganize the passes in TorchSimplificationPipeline.