Bug Description

Currently the aten::mul converter for Tensor * Scalar assumes that the scalar value is a float. This causes errors for int Tensor * int.

GRAPH: [Torch-TensorRT - Debug Build] - graph(%in_tensor.1 : Tensor):
  %2 : int = prim::Constant[value=2]()
  %3 : Tensor = aten::mul(%in_tensor.1, %2)
  return (%3)

ERROR: [Torch-TensorRT TorchScript Conversion Context] - 4: [layers.cpp::validate::2385] Error Code 4: Internal Error (%3 : Tensor = aten::mul(%in_tensor.1, %2): operation PROD has incompatible input types Int32 and Float)
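For comparison, eager-mode PyTorch promotes the Python int scalar to the tensor's dtype, so an Int32 tensor multiplied by 2 stays Int32; that is the promotion behavior the converter would need to reproduce. A minimal check (plain PyTorch, no TensorRT involved):

import torch

# Eager mode: the Python int scalar is promoted to the tensor's dtype,
# so the multiplication happens in Int32, not Float.
t = torch.ones(10, 5, dtype=torch.int32)
print(torch.result_type(t, 2))  # torch.int32
print((t * 2).dtype)            # torch.int32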
To Reproduce

Steps to reproduce the behavior:
import torch
import torch.nn as nn
import torch_tensorrt

class MulTest(nn.Module):
    def __init__(self):
        super(MulTest, self).__init__()

    def forward(self, in_tensor: torch.Tensor):
        return in_tensor * 2

def reproduce_error():
    torch_tensorrt.logging.set_reportable_log_level(torch_tensorrt.logging.Level.Graph)
    model = MulTest().eval().cuda()
    x = torch.randn(10, 5).int().cuda()
    test_output = model.forward(x)
    print(torch.jit.script(model).graph)
    trt_model = torch_tensorrt.compile(model, inputs=[x], **{
        "truncate_long_and_double": True,
    })
    converted_output = trt_model.forward(x)

reproduce_error()
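Running this script prints the TorchScript graph shown above and then fails during conversion with the Internal Error quoted in the Bug Description.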
Expected behavior

Int Tensor * int should be a valid operation.
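Until the converter handles integer scalars, one possible interim workaround (an untested sketch, and only applicable if float precision is acceptable for the model) is to compile and run with a float input, so that both operands of the aten::mul are Float and the converter's assumption holds:

import torch
import torch_tensorrt

# Untested workaround sketch: cast the input to float so the converter's
# float-scalar assumption matches the tensor dtype. Note that this changes
# the model's output dtype from int to float.
model = MulTest().eval().cuda()   # MulTest as defined in the reproduction script above
x_float = torch.randn(10, 5).int().cuda().float()
trt_model = torch_tensorrt.compile(model, inputs=[x_float])
converted_output = trt_model(x_float)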
Environment

Build information about Torch-TensorRT can be found by turning on debug messages.

How you installed PyTorch (conda, pip, libtorch, source):
This issue has not seen activity for 90 days. Remove the stale label or comment, or this will be closed in 10 days.
Closing w/ #1095. @mfeliz-cruise please comment if this issue should stay open.