
🐛 [Bug] aten::slice issue with 'None' input and aten::unbind issue with negative axis #1087


Closed
mfeliz-cruise opened this issue May 24, 2022 · 1 comment
Assignees
Labels
bug Something isn't working component: evaluators Issues re: Specific op evaluators

Comments

@mfeliz-cruise
Contributor

Bug Description

The model below exposes issues in the evaluators for both aten::slice and aten::unbind: slice does not correctly handle a None input for its 'start' argument, and unbind does not correctly handle a negative axis.
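For context, both failure modes come down to missing input normalization. A minimal sketch of the two normalizations an evaluator would need (hypothetical helpers for illustration, not the actual Torch-TensorRT code):

```python
from typing import Optional

def normalize_slice_start(start: Optional[int]) -> int:
    # aten::slice treats a None 'start' as 0, i.e. slice from the beginning;
    # an evaluator that unconditionally unwraps 'start' as an int fails here.
    return 0 if start is None else start

def normalize_dim(dim: int, ndim: int) -> int:
    # aten::unbind accepts negative axes; wrap them into [0, ndim)
    # before using the value as an index.
    return dim + ndim if dim < 0 else dim

print(normalize_slice_start(None))  # 0
print(normalize_dim(-1, 2))         # 1 (last axis of a 2-D tensor)
```

With these normalizations applied before unwrapping, a None start and a negative axis both resolve to valid non-negative indices.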

To Reproduce

Steps to reproduce the behavior:

  1. Run the model below.
  2. You should see the slice error:

Expected ivalue->isInt() to be true but got false
Requested unwrapping of arg IValue assuming it was l however type is NoneType

  3. Resolve the slice issue and run the model again.
  4. You should see the unbind error:

Expected eval_list->elements().size() == n->outputs().size() to be true but got false
Size of evaluated results: 2 and node outputs size: 3 must match.
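For reference, eager-mode unbind along a negative axis returns one slice per element of that axis, so the evaluated result count should match the graph's output count. A small pure-Python emulation of the 2-D case (a hypothetical helper for illustration, not Torch-TensorRT code) shows the expected behavior:

```python
def unbind_2d(matrix, dim):
    """Emulate torch.unbind for a 2-D nested list (illustration only)."""
    ndim = 2
    if dim < 0:
        dim += ndim  # normalize: dim=-1 on a 2-D input means dim 1
    if dim == 0:
        return [list(row) for row in matrix]
    # dim == 1: one slice per column
    return [list(col) for col in zip(*matrix)]

m = [[1, 2, 3, 4, 5],
     [6, 7, 8, 9, 10]]      # shape (2, 5)
print(len(unbind_2d(m, -1)))  # 5 slices along the last axis
print(unbind_2d(m, -1)[0])    # [1, 6]
```

An evaluator that skips the negative-dim normalization can unbind along the wrong axis and produce a different number of results than the node has outputs, which matches the size-mismatch error above.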

import torch
import torch.nn as nn
import torch_tensorrt

class Unbind(nn.Module):
    def __init__(self):
        super(Unbind, self).__init__()
    
    def forward(self, in_tensor: torch.Tensor):
        mid = torch.unbind(in_tensor, -1)
        x, y, z = mid[:3]
        return x, y, z

def reproduce_error():
    torch_tensorrt.logging.set_reportable_log_level(torch_tensorrt.logging.Level.Graph)
    model = Unbind().eval().cuda()

    x = torch.randn(3500, 5).cuda()
    test_output = model.forward(x)
    print(test_output)
    print(torch.jit.script(model).graph)
    trt_model = torch_tensorrt.compile(
        model,
        inputs=[x],
        truncate_long_and_double=True,
    )
    converted_output = trt_model.forward(x)
    print(converted_output)

reproduce_error()

Expected behavior

Torch-TensorRT should produce valid results for this model without erroring out.

Environment

Build information about Torch-TensorRT can be found by turning on debug messages

  • Torch-TensorRT Version (e.g. 1.0.0): 1.0
  • PyTorch Version (e.g. 1.0): 1.11
  • CPU Architecture:
  • OS (e.g., Linux): Linux
  • How you installed PyTorch (conda, pip, libtorch, source): source
  • Build command you used (if compiling from source):
  • Are you using local sources or building from archives:
  • Python version:
  • CUDA version:
  • GPU models and configuration:
  • Any other relevant information:

Additional context

@mfeliz-cruise mfeliz-cruise added the bug Something isn't working label May 24, 2022
@narendasan narendasan added the component: evaluators Issues re: Specific op evaluators label May 25, 2022
@narendasan
Collaborator

Looks like the fixes for these have been merged; reopen if there are still issues.
