Add more info about unit tests. #1360
Merged
Conversation
No description provided.
cc @JakopinA PTAL
qedawkins pushed a commit to nod-ai/torch-mlir that referenced this pull request on Oct 3, 2022:
* Use shapeHelperInferShapes template to reduce boilerplate code. (llvm#1340)

  Leverage the template function 'shapeHelperInferShapes' to reduce code
  duplication in ShapeInference.cpp for several ONNX operators. As an example,
  the following implementation of the Transpose operator's inferShapes member
  function:

      LogicalResult ONNXTransposeOp::inferShapes(
          std::function<void(mlir::Region &)> doShapeInference) {
        // Cannot infer shape if no shape exists.
        if (!data().getType().isa<RankedTensorType>())
          return success();
        auto elementType = data().getType().cast<ShapedType>().getElementType();
        ONNXTransposeOpAdaptor operandAdaptor(*this);
        ONNXTransposeOpShapeHelper shapeHelper(this);
        if (failed(shapeHelper.computeShape(operandAdaptor)))
          return emitError("Failed to scan Transpose parameters successfully");
        SmallVector<int64_t, 4> outputDims;
        IndexExpr::getShape(shapeHelper.dimsForOutput(), outputDims);
        getResult().setType(RankedTensorType::get(outputDims, elementType));
        return success();
      }

  becomes:

      LogicalResult ONNXTransposeOp::inferShapes(
          std::function<void(mlir::Region &)> doShapeInference) {
        // Cannot infer shape if no shape exists.
        if (!data().getType().isa<RankedTensorType>())
          return success();
        auto elementType = data().getType().cast<ShapedType>().getElementType();
        return shapeHelperInferShapes<ONNXTransposeOpShapeHelper,
            ONNXTransposeOp, ONNXTransposeOpAdaptor>(this, elementType);
      }

  Signed-off-by: Ettore Tiotto [email protected]
  Signed-off-by: Philip Lassen <[email protected]>

* Allow lowering ONNXGemm with dynamic dims to ZHigh and fix zDNN Conv condition (llvm#1332)

  - Allow lowering ONNXGemm with dynamic dims to ZHigh
  - Update lit tests
  - Fix zDNN Conv condition

  Signed-off-by: Tung D. Le <[email protected]>
  Signed-off-by: Philip Lassen <[email protected]>

* Fix builders with boolean output types

  Signed-off-by: Philip Lassen <[email protected]>

* Fix format issue

  Signed-off-by: Philip Lassen <[email protected]>

* Fix legality check of ONNXToZHigh for MaxPool. (llvm#1343)

  - Fix legality check of NNPA for 1d maxpool
  - Apply the same fix to conv
  - Add lit tests for 1d maxpool and averagepool
  - Insert dilation check after checking shape
  - Simplify lit test for pooling and update func name
  - Change func names to test_pool_not_lowered_pool1d and test_pool_not_lowered_pool3d

  Signed-off-by: Haruki Imai <[email protected]>
  Signed-off-by: Philip Lassen <[email protected]>

* Embed libzdnn in model.so (llvm#1324)

  - Build libzdnn.a with -fPIC and embed it in model.so when -maccel=NNPA is specified
  - Add CompilerConfigMap to store states associated with certain options
  - Move options in main() to CompilerOptions.cpp, then move them back to onnx-mlir.cpp::main (full consolidation requires much more effort; deferred)
  - Fix compiler warning in Stickify.cpp
  - Fix NNPA_ENABLED for lit tests
  - Install zdnn.h so third_party/zdnn-lib is no longer needed; remove zdnn-lib from .gitmodules
  - Use DEPENDS in add_onnx_mlir_library for the libzdnn dependency, make libzdnn an ALL target so it gets built before other NNPA components, and fix the libzdnn dependency for NNPA components
  - Build NNPA in the dev image as well
  - Comment out BYPRODUCTS in target libzdnn, since generator support requires CMake 3.20+, which is not yet available on official Ubuntu Focal
  - Force setting cached MLIR_DIR to honor the command-line argument, unset cached LLVM_DIR so it changes along with MLIR_DIR, and surround third_party with set(CMAKE_MESSAGE_LOG_LEVEL NOTICE)/set(CMAKE_MESSAGE_LOG_LEVEL STATUS) to mask its cmake output so output from onnx-mlir itself is easier to see (third_party cmake output can still be turned on with --log-level); these changes were subsequently reverted to be made in separate PRs

  Signed-off-by: Gong Su <[email protected]>
  Co-authored-by: Charles Volzka <[email protected]>
  Co-authored-by: Tung D. Le <[email protected]>
  Signed-off-by: Philip Lassen <[email protected]>

* ScatterElements operator code gen. (llvm#1352)

  Implement support for the ONNX ScatterElements operator:

  - verification (verify diagnostic completeness)
  - shape inference (should be trivial, but verify)
  - initial codegen support
  - codegen for negative indices
  - add lit test to check code generation
  - enable end-to-end tests (backend tests)

  Signed-off-by: Ettore Tiotto [email protected]
  Signed-off-by: Philip Lassen <[email protected]>

* Improve variable naming of builder lists

  Signed-off-by: Philip Lassen <[email protected]>

Co-authored-by: Ettore Tiotto <[email protected]>
Co-authored-by: Tung D. Le <[email protected]>
Co-authored-by: Haruki Imai <[email protected]>
Co-authored-by: gongsu832 <[email protected]>
Co-authored-by: Charles Volzka <[email protected]>
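For orientation only, here is a minimal sketch of what a shapeHelperInferShapes-style helper could look like, assuming it simply bundles the steps the quoted per-operator code repeats: build the operand adaptor, run the operator's shape helper, materialize the output dimensions, and set a ranked result type. The template signature, parameter names, and include paths below are illustrative guesses modeled on the quoted code, not the actual onnx-mlir implementation.

```cpp
// Hypothetical sketch; names follow the commit message above, and the
// onnx-mlir ShapeHelper/IndexExpr headers are assumed to be on the include path.
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/Support/LogicalResult.h"
#include "llvm/ADT/SmallVector.h"

template <typename ShapeHelper, typename OpTy, typename OpAdaptor>
static mlir::LogicalResult shapeHelperInferShapes(
    OpTy *op, mlir::Type elementType) {
  // Wrap the operands of the concrete op and run its shape computation.
  OpAdaptor operandAdaptor(*op);
  ShapeHelper shapeHelper(op);
  if (mlir::failed(shapeHelper.computeShape(operandAdaptor)))
    return op->emitError("Failed to scan parameters successfully");

  // Materialize the symbolic output dimensions and set the result type.
  llvm::SmallVector<int64_t, 4> outputDims;
  IndexExpr::getShape(shapeHelper.dimsForOutput(), outputDims);
  op->getResult().setType(
      mlir::RankedTensorType::get(outputDims, elementType));
  return mlir::success();
}
```

With a helper of this shape in place, each operator's inferShapes reduces to checking that a shape exists, computing the element type, and forwarding to the template, as the Transpose example in the commit message shows.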