
🐛 [Bug] aten::mul errors out for int Tensor * int #1094


Closed
mfeliz-cruise opened this issue May 27, 2022 · 2 comments
Labels: bug (Something isn't working) · component: converters (Issues re: Specific op converters) · No Activity

Comments

@mfeliz-cruise
Contributor

Bug Description

Currently the aten::mul converter for Tensor * Scalar assumes that the scalar value is a float. This causes errors for int Tensor * int.

GRAPH: [Torch-TensorRT - Debug Build] - graph(%in_tensor.1 : Tensor):
%2 : int = prim::Constant[value=2]()
%3 : Tensor = aten::mul(%in_tensor.1, %2)
return (%3)

ERROR: [Torch-TensorRT TorchScript Conversion Context] - 4: [layers.cpp::validate::2385] Error Code 4: Internal Error (%3 : Tensor = aten::mul(%in_tensor.1, %2): operation PROD has incompatible input types Int32 and Float)
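The conflict follows from PyTorch's own type-promotion rules: an int32 tensor multiplied by a Python int should stay int32, so a converter that unconditionally materializes the scalar as a float constant hands TensorRT mismatched Int32/Float inputs. A minimal check of the expected promotion, using plain PyTorch (no TensorRT required):

```python
import torch

t = torch.ones(2, 3, dtype=torch.int32)

# int32 Tensor * Python int stays int32 under PyTorch promotion rules...
print(torch.result_type(t, 2))    # torch.int32

# ...while an int tensor times a float scalar does promote to float.
print(torch.result_type(t, 2.0))  # torch.float32
```

This is why the scalar constant created by the converter should take its type from the tensor operand rather than defaulting to float.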

To Reproduce

Steps to reproduce the behavior:

  1. Run the code below
import torch
import torch.nn as nn
import torch_tensorrt

class MulTest(nn.Module):
    def __init__(self):
        super(MulTest, self).__init__()
    
    def forward(self, in_tensor: torch.Tensor):
        return in_tensor * 2

def reproduce_error():
    torch_tensorrt.logging.set_reportable_log_level(torch_tensorrt.logging.Level.Graph)
    model = MulTest().eval().cuda()

    x = torch.randn(10, 5).int().cuda()
    test_output = model.forward(x)

    print(torch.jit.script(model).graph)
    trt_model = torch_tensorrt.compile(
        model,
        inputs=[x],
        truncate_long_and_double=True,
    )
    converted_output = trt_model.forward(x)

reproduce_error()

Expected behavior

Int Tensor * int should be a valid operation.
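Eager-mode PyTorch already treats this as valid and keeps the integer dtype, which is the behavior the converter should match. A quick CPU-only check:

```python
import torch

x = torch.randint(0, 10, (10, 5), dtype=torch.int32)
y = x * 2  # int Tensor * int is a valid op in eager mode

print(y.dtype)  # torch.int32 -- the result is not promoted to float
assert torch.equal(y, x + x)
```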

Environment

Build information about Torch-TensorRT can be found by turning on debug messages

  • Torch-TensorRT Version (e.g. 1.0.0):
  • PyTorch Version (e.g. 1.0):
  • CPU Architecture:
  • OS (e.g., Linux):
  • How you installed PyTorch (conda, pip, libtorch, source):
  • Build command you used (if compiling from source):
  • Are you using local sources or building from archives:
  • Python version:
  • CUDA version:
  • GPU models and configuration:
  • Any other relevant information:

Additional context

@mfeliz-cruise mfeliz-cruise added the bug Something isn't working label May 27, 2022
@andi4191 andi4191 added the component: converters Issues re: Specific op converters label May 28, 2022
@github-actions

This issue has not seen activity for 90 days. Remove the stale label or comment on the issue, or it will be closed in 10 days.

@ncomly-nvidia
Contributor

Closing w/ #1095. @mfeliz-cruise please comment if this issue should stay open.
