[TRACKER] Bloom-PyTorch Model #1340
Labels: model support (Hub issue for progress on adding support for a specific model)
Comments
This seems like a local build issue.
It might be because george-cumsum-op-support and prashant-max-other-op-support both changed the cumsum op, which could lead to a conflict.
#1348: the aten::_reshape_alias op needs to be fixed first.
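If the overlap really is in the cumsum lowering, one way to check is to isolate that op in a tiny module and lower it on the combined bloom3 branch. This is only a sketch under that assumption; the op, dim, input shape, and output type below are arbitrary illustrative choices, not taken from the actual Bloom test.

```python
# Hypothetical isolation test: lower just aten::cumsum on the combined branch.
# Shapes, dim, and output type are illustrative assumptions.
import torch
import torch_mlir


class CumsumModule(torch.nn.Module):
    def forward(self, x):
        return torch.cumsum(x, dim=1)


# LINALG_ON_TENSORS runs the full torch-to-linalg pipeline, which should
# exercise whichever cumsum lowering the two branches both modified.
module = torch_mlir.compile(
    CumsumModule(),
    [torch.rand(2, 4)],
    output_type=torch_mlir.OutputType.LINALG_ON_TENSORS,
)
print(module)
```

If this small module still triggers the segmentation fault on the combined branch but not on either branch alone, that would point at the merged cumsum lowering rather than at the Bloom model itself.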
qedawkins pushed commits to nod-ai/torch-mlir that referenced this issue (Oct 3, 2022).
Hi
I am working on providing support for the bloom-pytorch model via torch-mlir. So far I have used 2 unmerged patches to handle the ops that are not yet supported. The 2 patches pass the check independently, but when I try to merge them together, it results in the following segmentation error:
Here is the patch to run:
https://github.com/AmosLewis/SHARK/tree/bloom
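For orientation only, here is a rough sketch of what driving a BLOOM checkpoint through torch-mlir's Python API might look like. The model name, prompt, wrapper, and output type are assumptions for illustration; the SHARK branch above is the actual reproducer.

```python
# Illustrative sketch, not the SHARK reproducer. Model name, prompt, and
# output type are assumptions.
import torch
import torch_mlir
from transformers import AutoTokenizer, BloomForCausalLM


class BloomWrapper(torch.nn.Module):
    """Wrap the HF model so the traced graph returns a plain tensor."""

    def __init__(self):
        super().__init__()
        self.model = BloomForCausalLM.from_pretrained("bigscience/bloom-560m")
        self.model.eval()

    def forward(self, input_ids):
        # return_dict=False makes the model return a tuple; [0] is the logits.
        return self.model(input_ids, return_dict=False)[0]


tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
input_ids = tokenizer("The quick brown fox", return_tensors="pt").input_ids

# Tracing is used because the Hugging Face forward has Python-level control
# flow that TorchScript scripting may not handle.
module = torch_mlir.compile(
    BloomWrapper(),
    input_ids,
    output_type=torch_mlir.OutputType.LINALG_ON_TENSORS,
    use_tracing=True,
)
print(module)
```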
To test it, the torch_mlir used is https://github.com/AmosLewis/torch-mlir/tree/bloom3, which is the combination of george-cumsum-op-support and prashant-max-other-op-support.
I tried to run the Python tests with:
cmake --build build --target check-torch-mlir-python
The george-cumsum-op-support and prashant-max-other-op-support patches pass the check independently, but combining them leads to the following error: