✨[Feature] Support INT64 inputs at the graph input level #1546


Closed
peri044 opened this issue Dec 13, 2022 · 2 comments · Fixed by #1551
Assignees
Labels
component: api Issues re: API (both TRTorch and future pytorch integration) feature request New feature or request

Comments

@peri044 (Collaborator)

peri044 commented Dec 13, 2022

Is your feature request related to a problem? Please describe.
Currently, if the graph is given INT64 inputs, compilation fails with an unsupported-type error.

Describe the solution you'd like
Expected solution: cast INT64 inputs to INT32 and issue a warning to users, instead of breaking compilation.

Describe alternatives you've considered

Additional context
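The requested behavior can be sketched as a small pre-compilation pass. This is a hypothetical illustration (the function `downcast_int64_inputs` is not part of Torch-TensorRT), shown here with NumPy arrays standing in for graph input tensors:

```python
import warnings
import numpy as np

def downcast_int64_inputs(inputs):
    """Hypothetical pre-compilation pass sketching the requested behavior:
    cast INT64 graph inputs to INT32 and warn the user, instead of
    failing compilation with an unsupported-type error."""
    casted = []
    for arr in inputs:
        if arr.dtype == np.int64:
            warnings.warn(
                "Graph input has dtype int64, which is unsupported; "
                "casting to int32 (large values may be truncated)."
            )
            arr = arr.astype(np.int32)  # copy-based cast; caller's array is untouched
        casted.append(arr)
    return casted
```

Note the cast here is copy-based, so the caller's original tensors are left untouched; the trade-offs of a no-copy, in-place variant are discussed in the comments below.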

@peri044 peri044 added the feature request New feature or request label Dec 13, 2022
@ncomly-nvidia ncomly-nvidia added the component: api Issues re: API (both TRTorch and future pytorch integration) label Dec 13, 2022
@gs-olive (Collaborator)

gs-olive commented Dec 15, 2022

This feature may warrant a multi-phase, detailed RFC, as it relates closely to issue #1346, and the current draft implementation can cause issues for users who require aten::scatter support or other Torch operations that only accept Long (INT64) tensors. The linked PR #1551 implements autocasting efficiently (no copying), but it can break existing models on INT64-required operations, since it casts input tensors in place for the duration of inference. I think additional design work is needed to track data types across different engines and implement an efficient casting system.

RFC: #1553
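The hazard of the in-place approach can be sketched in a few lines. Everything below is illustrative: `Tensor`, `run_engine_inplace`, and `scatter_stub` are hypothetical stand-ins for a framework tensor, the draft PR's casting behavior, and a Long-only op such as aten::scatter (whose index argument must be int64):

```python
class Tensor:
    """Minimal stand-in for a framework tensor (illustrative only)."""
    def __init__(self, data, dtype="int64"):
        self.data = data
        self.dtype = dtype

def run_engine_inplace(t):
    """Sketch of the draft PR's approach: downcast the input in place
    (no copy), so the caller's tensor remains int32 after inference."""
    if t.dtype == "int64":
        t.dtype = "int32"  # mutates the shared tensor
    return sum(t.data)

def scatter_stub(index):
    """Stand-in for a Long-only op such as aten::scatter, which
    rejects non-int64 index tensors."""
    if index.dtype != "int64":
        raise RuntimeError("scatter(): Expected dtype int64 for index")
    return True

idx = Tensor([0, 2])
run_engine_inplace(idx)  # engine runs, but idx is now int32
# A later Torch op that needs int64 indices would now fail:
# scatter_stub(idx)  -> RuntimeError
```

A copy-based cast avoids this at the cost of extra memory traffic, which is the design tension the RFC is meant to resolve.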

@Christina-Young-NVIDIA (Collaborator)

Christina-Young-NVIDIA commented Dec 20, 2022

Feature split between an MVP and some optional phases (detailed in the RFC); aiming to complete the MVP this sprint.

5 participants