fix: fix the renaming error in squeeze converter #1187


Closed · wants to merge 2 commits
core/conversion/converters/impl/squeeze.cpp (3 changes: 2 additions & 1 deletion)

@@ -26,7 +26,8 @@ auto squeeze_registrations TORCHTRT_UNUSED = RegisterNodeConversionPatterns().pa
}

    if (selfDim[dim] != 1) {
-     auto out = ctx->AssociateValueAndTensor(n->outputs()[0], self);
+     auto id_layer = ctx->net->addIdentity(*self);
Collaborator:
You could use the already existing utility for this:

nvinfer1::ITensor* applyIdentityOp(ConversionCtx* ctx, nvinfer1::ITensor* tensor, const std::string& tensor_name) {

Collaborator (Author):
Thanks @peri044. BTW, do we still need this test case:

TEST(Converters, ATenSqueezeDontNeedSqueezeConvertsCorrectly) {

I'm thinking of deleting it.

Collaborator:
Sounds good. This test doesn't seem useful.

Collaborator (Author):
@peri044 it seems that we would need an explicit name for the output tensor if we want to use this utility:

nvinfer1::ITensor* applyIdentityOp(ConversionCtx* ctx, nvinfer1::ITensor* tensor, const std::string& tensor_name) {

I checked around but have no idea what name would be appropriate; any ideas?
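For reference, calling the utility from the converter would look roughly like the sketch below. This is an illustration only: `applyIdentityOp` is the signature quoted in the review comment above, and the tensor name `"squeeze_identity_out"` is a hypothetical placeholder, not a name chosen in this thread.

```cpp
// Hypothetical sketch (not part of the PR): using the existing
// applyIdentityOp utility instead of calling addIdentity directly.
// The tensor name below is a placeholder; picking an appropriate
// name was still an open question in this review thread.
if (selfDim[dim] != 1) {
  // applyIdentityOp adds an identity layer and names its output tensor.
  auto identity_out = applyIdentityOp(ctx, self, "squeeze_identity_out");
  auto out = ctx->AssociateValueAndTensor(n->outputs()[0], identity_out);
  LOG_DEBUG("Output tensor shape: " << out->getDimensions());
}
```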

+     auto out = ctx->AssociateValueAndTensor(n->outputs()[0], id_layer->getOutput(0));

      LOG_DEBUG("Output tensor shape: " << out->getDimensions());

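Pieced together from the hunks above, the fixed branch reads roughly as follows. The rationale is a plausible reading of the fix rather than a verbatim quote from the thread: when `aten::squeeze` is a no-op, the input tensor was previously associated directly with the node output, which can require renaming an input tensor; routing the value through an identity layer yields a fresh output tensor that can safely be renamed.

```cpp
// Reconstructed from the diff above (surrounding converter code assumed):
// when no dimension is actually squeezed, pass the input through an
// identity layer so the node output gets its own tensor instead of
// aliasing (and renaming) the input tensor.
if (selfDim[dim] != 1) {
  auto id_layer = ctx->net->addIdentity(*self);
  auto out = ctx->AssociateValueAndTensor(n->outputs()[0], id_layer->getOutput(0));
  LOG_DEBUG("Output tensor shape: " << out->getDimensions());
}
```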
tests/core/conversion/converters/test_squeeze.cpp (26 changes: 26 additions & 0 deletions)

@@ -53,6 +53,32 @@ TEST(Converters, ATenSqueezeDontNeedSqueezeConvertsCorrectly) {
params = torch_tensorrt::core::ir::get_static_params(g->inputs(), {});
auto trt_results = torch_tensorrt::tests::util::RunGraphEngine(g, params, {trt_in, trt_in_add});

ASSERT_TRUE(
torch_tensorrt::tests::util::almostEqual(jit_results[0], trt_results[0].reshape_as(jit_results[0]), 2e-6));
}

TEST(Converters, ATenSqueezeNeedIdentityConvertsCorrectly) {
  const auto graph = R"IR(
      graph(%0 : Tensor):
        %2 : int = prim::Constant[value=1]()
        %3 : Tensor = aten::squeeze(%0, %2)
        return (%3))IR";

  auto g = std::make_shared<torch::jit::Graph>();
  torch::jit::parseIR(graph, &*g);

  auto in = at::randint(1, 10, {2, 3, 3}, {at::kCUDA});

  auto jit_in = at::clone(in);

  auto params = torch_tensorrt::core::ir::get_static_params(g->inputs(), {});
  auto jit_results = torch_tensorrt::tests::util::RunGraph(g, params, {jit_in});

  auto trt_in = at::clone(jit_in);

  params = torch_tensorrt::core::ir::get_static_params(g->inputs(), {});
  auto trt_results = torch_tensorrt::tests::util::RunGraphEngine(g, params, {trt_in});

  ASSERT_TRUE(
      torch_tensorrt::tests::util::almostEqual(jit_results[0], trt_results[0].reshape_as(jit_results[0]), 2e-6));
}