✨[Feature] Automatic conversion for int32<->int64 in fallback #1382
Labels
component: partitioning · feature request · release: v1.3 (tagged to be included in v1.3)
As we know, if a model contains operators that Torch-TensorRT doesn't support, the model is partitioned into TensorRT and Torch subgraphs. TensorRT doesn't support int64 values and truncates int64 to int32.
In some cases, an operator in the Torch subgraph consumes an int64 value (like aten::index) that is produced by a TensorRT subgraph, where it has been truncated to int32, and this causes an error. We need to track the data type conversion and automatically convert the value back to its original type at the boundary between Torch and TensorRT (a minimal sketch of the needed cast is shown below).
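A minimal sketch of the idea, assuming we record the original dtype of each value that crosses the TensorRT-to-Torch boundary and cast it back before the Torch subgraph consumes it. The helper name `cast_back` is illustrative only and not part of the actual torch_tensorrt API:

```python
import torch

def cast_back(value: torch.Tensor, original_dtype: torch.dtype) -> torch.Tensor:
    # TensorRT truncates int64 inputs to int32; if the original graph produced
    # int64 (e.g. indices consumed by aten::index), restore that dtype here.
    if original_dtype == torch.int64 and value.dtype == torch.int32:
        return value.to(torch.int64)
    return value

# Example: indices produced by a (simulated) TensorRT block come back as int32 ...
trt_output = torch.tensor([0, 2, 1], dtype=torch.int32)
# ... but the fallback Torch subgraph uses them with aten::index,
# which historically expects int64 indices.
data = torch.arange(12.0).reshape(3, 4)
indices = cast_back(trt_output, torch.int64)
print(data[indices])  # works once the original dtype is restored
```

In the actual feature this cast would be inserted automatically by the partitioner at each TensorRT/Torch boundary, rather than applied by hand as above.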
Here is a typical case (subgraph log):