
Commit bfbaebe

Update release branch with latest test fixes (#1339)
* chore: additional options for perf_run tool
* feat: Add fx2trt backend and revamp current perf utility to accept CLI arguments
* chore: Refactor fx2trt functionality
* chore: Fix fp16 functionality for fx2trt backend
* chore: refactor
* chore: minor change
* refactor: Refactor perf_run and add internal benchmark scripts
* chore: minor refactor
* chore: Apply precommit tooling
* chore: Fix data loader issues and nox file paths
* chore: rebase and minor changes
* chore: Fix reporting to a file setting
* Update lower.py (#1324)
* docs: [Automated] Regenerating documentation for e374eb1
* refactor: Refactor testing to use cosine similarity, remove redundant models, and restructure
* chore: move to cosine similarity comparison
* refactor: Refactor nox file testing
* chore: add missing scripts
* chore: Linter fixes
* fix!: Fixed Windows compilation failures
* chore: Minor fix
* chore: use rn18 instead of rn50
* docs: [Automated] Regenerating documentation for a1a4786
* chore: Add cpp tests with cosine sim
* chore: linter fixes
* [feat] Add support for argmax and argmin (#1312): adds converters for aten::argmax and aten::argmin; moves the max.cpp tests into test_max.cpp (no functional change) and fixes permissions on max.cpp
* docs: [Automated] Regenerating documentation for 9db2852
* chore: Deepcopy other objects
* fix: Fix deepcopy issues of PTQ calibrators
* chore: linter fixes
* chore: Adding a guideline to build on Windows platform (#1337): adds the Windows build guideline and a formatting fix
* docs: [Automated] Regenerating documentation for 00a1f03
* chore: minor fixes
* chore: Linter fixes
* chore: Linter fixes
* docs: [Automated] Regenerating documentation for 1efe4b1
* docs: [Automated] Regenerating documentation for 10b9ecd
* add support for aten::reciprocal(int) (#1308)
* docs: [Automated] Regenerating documentation for 096fd41

Signed-off-by: dperi <[email protected]>
Signed-off-by: Dheeraj Peri <[email protected]>
Signed-off-by: Torch-TensorRT Github Bot <[email protected]>
Signed-off-by: Anurag Dixit <[email protected]>

Co-authored-by: dperi <[email protected]>
Co-authored-by: Dheeraj Peri <[email protected]>
Co-authored-by: Wei <[email protected]>
Co-authored-by: Torch-TensorRT Github Bot <[email protected]>
Co-authored-by: Anurag Dixit <[email protected]>
Co-authored-by: Michael Feliz <[email protected]>
1 parent 087f97d commit bfbaebe
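
Several of the squashed commits above move the test suites from element-wise output comparison to cosine similarity between the Torch and TensorRT outputs. A minimal sketch of that style of check in libtorch; the helper name and the 0.99 threshold are illustrative assumptions, not the repository's actual test code:

#include <torch/torch.h>

// Hedged sketch: judge two model outputs by the angle between them rather
// than element-wise tolerance, which is more robust to small numeric drift.
bool outputs_match(const torch::Tensor& torch_out, const torch::Tensor& trt_out) {
  namespace F = torch::nn::functional;
  auto sim = F::cosine_similarity(
      torch_out.flatten().to(torch::kFloat),
      trt_out.flatten().to(torch::kFloat),
      F::CosineSimilarityFuncOptions().dim(0));
  return sim.item<float>() > 0.99f;
}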

162 files changed: +2627, -668 lines


.circleci/config.yml (+1)

@@ -435,6 +435,7 @@ commands:
   mkdir -p /tmp/artifacts/test_results
   cd tests/py
   pytest --junitxml=/tmp/artifacts/test_results/api/api_test_results.xml api/
+  pytest --junitxml=/tmp/artifacts/test_results/models/models_test_results.xml models/
   pytest --junitxml=/tmp/artifacts/test_results/integrations/integrations_test_results.xml integrations/
   cd ~/project

.github/workflows/docgen.yml (+1 -1; the changed line differs only in whitespace, which this extraction does not preserve)

@@ -31,7 +31,7 @@ jobs:
   - name: Set up Python 3.9.4
     uses: actions/setup-python@v2
     with:
-      python-version: 3.9.4
+      python-version: 3.9.4
   - uses: actions/checkout@v2
     with:
       ref: ${{github.head_ref}}

.github/workflows/linter.yml (+1 -1; the changed line differs only in whitespace, which this extraction does not preserve)

@@ -39,7 +39,7 @@ jobs:
       pip3 install -r $GITHUB_WORKSPACE/.github/scripts/requirements.txt
       pip3 install -r $GITHUB_WORKSPACE/requirements-dev.txt
   - name: Lint C++
-    run: |
+    run: |
       cd $GITHUB_WORKSPACE
       python3 $GITHUB_WORKSPACE/.github/scripts/run_cpp_linter.py
     env:

core/conversion/converters/impl/max.cpp (+89 -41)

@@ -13,47 +13,95 @@ namespace conversion {
 namespace converters {
 namespace impl {
 namespace {
-auto max_registrations TORCHTRT_UNUSED = RegisterNodeConversionPatterns().pattern(
-    {"aten::max.dim(Tensor self, int dim, bool keepdim=False) -> (Tensor values, Tensor indices)",
-     [](ConversionCtx* ctx, const torch::jit::Node* n, args& args) -> bool {
-       auto self = args[0].ITensorOrFreeze(ctx);
-       auto dim = args[1].unwrapToInt();
-       auto keep_dims = args[2].unwrapToBool();
-       auto selfDim = util::toVec(self->getDimensions());
-       if (dim < 0) {
-         dim = selfDim.size() + dim;
-       }
-       uint32_t shiftDim = 1 << dim;
-       auto TopKOperation = nvinfer1::TopKOperation::kMAX;
-       auto topk_layer = ctx->net->addTopK(*self, TopKOperation, 1, shiftDim);
-       TORCHTRT_CHECK(topk_layer, "Unable to create max layer from node: " << *n);
-       auto topk_dims = util::toVec(topk_layer->getOutput(0)->getDimensions());
-
-       nvinfer1::ITensor* out0 = nullptr;
-       nvinfer1::ITensor* out1 = nullptr;
-       if (!keep_dims) {
-         if (topk_dims[dim] == 1) {
-           auto squeeze_layer = ctx->net->addShuffle(*topk_layer->getOutput(0));
-           squeeze_layer->setReshapeDimensions(util::squeezeDims(topk_layer->getOutput(0)->getDimensions(), dim));
-           TORCHTRT_CHECK(squeeze_layer, "Unable to create squeeze_layer layer from node: " << *n);
-           out0 = ctx->AssociateValueAndTensor(n->outputs()[0], squeeze_layer->getOutput(0));
-
-           auto squeeze_layer_indices = ctx->net->addShuffle(*topk_layer->getOutput(1));
-           squeeze_layer_indices->setReshapeDimensions(
-               util::squeezeDims(topk_layer->getOutput(1)->getDimensions(), dim));
-           TORCHTRT_CHECK(squeeze_layer_indices, "Unable to create squeeze_layer_indices layer from node: " << *n);
-           out1 = ctx->AssociateValueAndTensor(n->outputs()[1], squeeze_layer_indices->getOutput(0));
-         }
-       } else {
-         out0 = ctx->AssociateValueAndTensor(n->outputs()[0], topk_layer->getOutput(0));
-         out1 = ctx->AssociateValueAndTensor(n->outputs()[1], topk_layer->getOutput(1));
-       }
-
-       LOG_DEBUG("Output tensor(0) shape: " << out0->getDimensions());
-       LOG_DEBUG("Output tensor(1) shape: " << out1->getDimensions());
-
-       return true;
-     }});
+
+bool min_max_dim(ConversionCtx* ctx, const torch::jit::Node* n, args& args, nvinfer1::TopKOperation topKOperation) {
+  auto self = args[0].ITensorOrFreeze(ctx);
+  auto dim = args[1].unwrapToInt();
+  auto keep_dims = args[2].unwrapToBool();
+  auto selfDim = util::toVec(self->getDimensions());
+  if (dim < 0) {
+    dim = selfDim.size() + dim;
+  }
+  uint32_t reduce_axes_mask = 1 << dim;
+  auto topk_layer = ctx->net->addTopK(*self, topKOperation, 1, reduce_axes_mask);
+  TORCHTRT_CHECK(topk_layer, "Unable to create topk layer from node: " << *n);
+  auto topk_dims = util::toVec(topk_layer->getOutput(0)->getDimensions());
+
+  nvinfer1::ITensor* out0 = nullptr;
+  nvinfer1::ITensor* out1 = nullptr;
+  if (!keep_dims) {
+    TORCHTRT_CHECK(topk_dims[dim] == 1, "Unexpected size in squeeze dimension. Expected: 1 Actual: " << topk_dims[dim]);
+    auto squeeze_layer = ctx->net->addShuffle(*topk_layer->getOutput(0));
+    squeeze_layer->setReshapeDimensions(util::squeezeDims(topk_layer->getOutput(0)->getDimensions(), dim));
+    TORCHTRT_CHECK(squeeze_layer, "Unable to create squeeze_layer layer from node: " << *n);
+    out0 = ctx->AssociateValueAndTensor(n->outputs()[0], squeeze_layer->getOutput(0));
+
+    auto squeeze_layer_indices = ctx->net->addShuffle(*topk_layer->getOutput(1));
+    squeeze_layer_indices->setReshapeDimensions(util::squeezeDims(topk_layer->getOutput(1)->getDimensions(), dim));
+    TORCHTRT_CHECK(squeeze_layer_indices, "Unable to create squeeze_layer_indices layer from node: " << *n);
+    out1 = ctx->AssociateValueAndTensor(n->outputs()[1], squeeze_layer_indices->getOutput(0));
+  } else {
+    out0 = ctx->AssociateValueAndTensor(n->outputs()[0], topk_layer->getOutput(0));
+    out1 = ctx->AssociateValueAndTensor(n->outputs()[1], topk_layer->getOutput(1));
+  }
+
+  LOG_DEBUG("Output tensor(0) shape: " << out0->getDimensions());
+  LOG_DEBUG("Output tensor(1) shape: " << out1->getDimensions());
+
+  return true;
+}
+
+bool arg_min_max(ConversionCtx* ctx, const torch::jit::Node* n, args& args, nvinfer1::TopKOperation topKOperation) {
+  auto self = args[0].ITensorOrFreeze(ctx);
+  auto dim = args[1].unwrapToInt();
+  auto keep_dims = args[2].unwrapToBool();
+  auto selfDim = util::toVec(self->getDimensions());
+  if (dim < 0) {
+    dim = selfDim.size() + dim;
+  }
+  uint32_t reduce_axes_mask = 1 << dim;
+  auto topk_layer = ctx->net->addTopK(*self, topKOperation, 1, reduce_axes_mask);
+  TORCHTRT_CHECK(topk_layer, "Unable to create topk layer from node: " << *n);
+  auto topk_dims = util::toVec(topk_layer->getOutput(0)->getDimensions());
+
+  nvinfer1::ITensor* out = nullptr;
+  if (!keep_dims) {
+    TORCHTRT_CHECK(topk_dims[dim] == 1, "Unexpected size in squeeze dimension. Expected: 1 Actual: " << topk_dims[dim]);
+    auto squeeze_layer_indices = ctx->net->addShuffle(*topk_layer->getOutput(1));
+    squeeze_layer_indices->setReshapeDimensions(util::squeezeDims(topk_layer->getOutput(1)->getDimensions(), dim));
+    TORCHTRT_CHECK(squeeze_layer_indices, "Unable to create squeeze_layer_indices layer from node: " << *n);
+    out = ctx->AssociateValueAndTensor(n->outputs()[0], squeeze_layer_indices->getOutput(0));
+  } else {
+    out = ctx->AssociateValueAndTensor(n->outputs()[0], topk_layer->getOutput(1));
+  }
+
+  LOG_DEBUG("Output tensor shape: " << out->getDimensions());
+
+  return true;
+}
+
+auto max_registrations TORCHTRT_UNUSED =
+    RegisterNodeConversionPatterns()
+        .pattern(
+            {"aten::max.dim(Tensor self, int dim, bool keepdim=False) -> (Tensor values, Tensor indices)",
+             [](ConversionCtx* ctx, const torch::jit::Node* n, args& args) -> bool {
+               return min_max_dim(ctx, n, args, nvinfer1::TopKOperation::kMAX);
+             }})
+        .pattern(
+            {"aten::min.dim(Tensor self, int dim, bool keepdim=False) -> (Tensor values, Tensor indices)",
+             [](ConversionCtx* ctx, const torch::jit::Node* n, args& args) -> bool {
+               return min_max_dim(ctx, n, args, nvinfer1::TopKOperation::kMIN);
+             }})
+        .pattern(
+            {"aten::argmax(Tensor self, int dim, bool keepdim=False) -> (Tensor)",
+             [](ConversionCtx* ctx, const torch::jit::Node* n, args& args) -> bool {
+               return arg_min_max(ctx, n, args, nvinfer1::TopKOperation::kMAX);
+             }})
+        .pattern(
+            {"aten::argmin(Tensor self, int dim, bool keepdim=False) -> (Tensor)",
+             [](ConversionCtx* ctx, const torch::jit::Node* n, args& args) -> bool {
+               return arg_min_max(ctx, n, args, nvinfer1::TopKOperation::kMIN);
+             }});
 } // namespace
 } // namespace impl
 } // namespace converters
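
With these patterns registered, TorchScript graphs containing aten::max.dim, aten::min.dim, aten::argmax, or aten::argmin can be lowered: each maps to a TensorRT TopK layer with k=1 over the reduced axis, followed by a shuffle that squeezes the reduced dimension when keepdim is false. A minimal sketch of exercising the argmax path through the C++ API; the module path and input shape are placeholder assumptions:

#include <torch/script.h>
#include "torch_tensorrt/torch_tensorrt.h"

int main() {
  // Load a scripted module whose forward() calls torch.argmax(x, dim=1);
  // "argmax_model.ts" is a placeholder path.
  auto mod = torch::jit::load("argmax_model.ts");
  mod.to(torch::kCUDA);
  mod.eval();

  // Compile with a single static input shape. The converter above lowers
  // aten::argmax to TopK(k=1) and returns the indices output.
  auto spec = torch_tensorrt::ts::CompileSpec(
      {torch_tensorrt::Input(std::vector<int64_t>{1, 10})});
  auto trt_mod = torch_tensorrt::ts::compile(mod, spec);

  auto out = trt_mod.forward({torch::randn({1, 10}, torch::kCUDA)});
  return 0;
}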

core/conversion/converters/impl/unary.cpp (+15 -1)

@@ -49,6 +49,21 @@ auto abs_registration TORCHTRT_UNUSED = RegisterNodeConversionPatterns().pattern
   }
 }});

+auto reciprocal_registration TORCHTRT_UNUSED = RegisterNodeConversionPatterns().pattern(
+    {"aten::reciprocal(Tensor self) -> Tensor", [](ConversionCtx* ctx, const torch::jit::Node* n, args& args) -> bool {
+       auto in = args[0].ITensorOrFreeze(ctx);
+       if (in->getType() == nvinfer1::DataType::kINT32) {
+         // pytorch implicitly casts to float for aten::reciprocal(int)
+         in = castITensor(ctx, in, nvinfer1::DataType::kFLOAT);
+       }
+       auto unary_layer = ctx->net->addUnary(*in, nvinfer1::UnaryOperation::kRECIP);
+       TORCHTRT_CHECK(unary_layer, "Unable to create recip layer from node: " << *n);
+       unary_layer->setName(util::node_info(n).c_str());
+       auto out_tensor = ctx->AssociateValueAndTensor(n->outputs()[0], unary_layer->getOutput(0));
+       LOG_DEBUG("Output tensor shape: " << out_tensor->getDimensions());
+       return true;
+     }});
+
 #define convert(unary, trt_type) \
   auto unary##_registrations TORCHTRT_UNUSED = RegisterNodeConversionPatterns().pattern( \
       {"aten::" #unary "(Tensor self) -> Tensor", \
@@ -74,7 +89,6 @@ convert(sinh, kSINH);
 convert(tan, kTAN);
 convert(atan, kATAN);
 convert(floor, kFLOOR);
-convert(reciprocal, kRECIP);
 convert(log, kLOG);
 convert(ceil, kCEIL);
 convert(sqrt, kSQRT);
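
The explicit cast mirrors PyTorch's type promotion: aten::reciprocal on an integer tensor produces a floating-point result, so the dedicated converter promotes kINT32 inputs to kFLOAT before applying the kRECIP unary, which TensorRT defines for floating-point types (the old convert(reciprocal, kRECIP) macro expansion could not do this). A small libtorch illustration of the eager-mode behavior being matched:

#include <torch/torch.h>
#include <iostream>

int main() {
  auto t = torch::tensor({2, 4, 5}, torch::kInt32);
  auto r = torch::reciprocal(t); // int input is promoted: [0.5, 0.25, 0.2]
  std::cout << r.dtype() << "\n"; // prints Float, not Int
  return 0;
}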

core/partitioning/shape_analysis.cpp (+1 -1)

@@ -167,7 +167,7 @@ void getSegmentsOutputByRunning(
   }
   if (cur_ivalue.toTensor().sizes().size() == 0) {
     // handle Scalar types, which has sizes of []
-    input_shapes.push_back(util::toVec(util::toDims(c10::List<long int>({1}))));
+    input_shapes.push_back(util::toVec(util::toDims(c10::List<int64_t>({1}))));
   } else {
     input_shapes.push_back(util::toVec(util::toDims(cur_ivalue.toTensor().sizes())));
   }

cpp/bin/torchtrtc/main.cpp (+1 -1)

@@ -35,7 +35,7 @@ bool unload_library(void* custom_lib) {
   bool success = false;
 #if defined(_WIN32)
   // Returns status non-zero for success
-  success = FreeLibrary(custom_lib) ? true : false;
+  success = FreeLibrary((HMODULE)custom_lib) ? true : false;
 #else
   success = dlclose(custom_lib) ? false : true;
 #endif
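
The cast is needed because the handle is stored as an opaque void* while FreeLibrary expects an HMODULE; note also the inverted success conventions the surrounding code handles (FreeLibrary returns non-zero on success, dlclose returns zero). A self-contained sketch of the same cross-platform pattern; the load helper is an illustrative assumption, not the tool's actual code:

#if defined(_WIN32)
#include <windows.h>
#else
#include <dlfcn.h>
#endif

// Load a shared library, returning an opaque handle (nullptr on failure).
void* load_library(const char* path) {
#if defined(_WIN32)
  return reinterpret_cast<void*>(LoadLibraryA(path));
#else
  return dlopen(path, RTLD_LAZY);
#endif
}

// Unload it, normalizing the platforms' opposite return conventions.
bool unload_library(void* handle) {
#if defined(_WIN32)
  // FreeLibrary takes an HMODULE, so cast the opaque handle back;
  // a non-zero return means success.
  return FreeLibrary(static_cast<HMODULE>(handle)) != 0;
#else
  // dlclose returns 0 on success.
  return dlclose(handle) == 0;
#endif
}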

cpp/include/torch_tensorrt/torch_tensorrt.h (+4 -4)

@@ -365,7 +365,7 @@ class TensorFormat {
  * signifying a static input shape or a set of three input shapes representing
  * the min, optiminal and max input shapes allowed for the engine.
  */
-struct TORCHTRT_API Input : torch::CustomClassHolder {
+struct Input : torch::CustomClassHolder {
   /// Minimum acceptable input size into the engine
   std::vector<int64_t> min_shape;
   /// Optimal input size into the engine (size optimized for given kernels accept any size in min max range)
@@ -520,7 +520,7 @@ struct TORCHTRT_API Input : torch::CustomClassHolder {
  *
  * This struct can either hold a complex inputs of shape or a flattened one,
  */
-struct TORCHTRT_API GraphInputs {
+struct GraphInputs {
   torch::jit::IValue input_signature; // nested Input, full input spec
   std::vector<Input> inputs; // flatten input spec
 };
@@ -592,14 +592,14 @@ struct CompileSpec {
  *
  * @param inputs
  */
-  CompileSpec(std::vector<Input> inputs);
+  TORCHTRT_API CompileSpec(std::vector<Input> inputs);

  /**
   * @brief Construct a new Compile Spec object from IValue which represents the nesting of input tensors for a module.
   *
   * @param input_signature
   */
-  CompileSpec(torch::jit::IValue input_signature);
+  TORCHTRT_API CompileSpec(torch::jit::IValue input_signature);
   // Defaults should reflect TensorRT defaults for BuilderConfig

  /**
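
Moving TORCHTRT_API off the Input and GraphInputs struct declarations and onto the CompileSpec constructors narrows what the header marks for DLL export, which matters for the Windows build this batch fixes. Call sites are unchanged; a minimal sketch of constructing a CompileSpec with the exported constructor (the static shape is a placeholder assumption):

#include <cstdint>
#include <vector>
#include "torch_tensorrt/torch_tensorrt.h"

void build_spec() {
  // Flattened form: one Input spec per forward() argument.
  torch_tensorrt::Input in(std::vector<int64_t>{1, 3, 224, 224});
  auto spec = torch_tensorrt::ts::CompileSpec({in});
  // The other exported constructor, CompileSpec(torch::jit::IValue),
  // takes a nested input signature instead of a flat list.
  (void)spec;
}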

docs/_cpp_api/*.html (+1 -1 each)

Seventeen generated documentation pages carry the same one-line change, bumping the rendered version string from master (1.2.0a0+51a991e) to master (1.2.0a0+096fd41). Representative diff (the class, define, and enum pages anchor at line 202; the dir_* pages at line 200):

@@ -199,7 +199,7 @@
 <div class="version">
-  master (1.2.0a0+51a991e)
+  master (1.2.0a0+096fd41)
 </div>

Affected files:
docs/_cpp_api/classtorch__tensorrt_1_1DataType.html
docs/_cpp_api/classtorch__tensorrt_1_1Device_1_1DeviceType.html
docs/_cpp_api/classtorch__tensorrt_1_1TensorFormat.html
docs/_cpp_api/classtorch__tensorrt_1_1ptq_1_1Int8CacheCalibrator.html
docs/_cpp_api/classtorch__tensorrt_1_1ptq_1_1Int8Calibrator.html
docs/_cpp_api/define_macros_8h_1a18d295a837ac71add5578860b55e5502.html
docs/_cpp_api/define_macros_8h_1a282fd3c0b1c3a215148ae372070e1268.html
docs/_cpp_api/define_macros_8h_1a31398a6d4d27e28817afb0f0139e909e.html
docs/_cpp_api/define_macros_8h_1a35703561b26b1a9d2738ad7d58b27827.html
docs/_cpp_api/define_macros_8h_1abd1465eb38256d3f22cc1426b23d516b.html
docs/_cpp_api/define_macros_8h_1abe87b341f562fd1cf40b7672e4d759da.html
docs/_cpp_api/define_macros_8h_1ad19939408f7be171a74a89928b36eb59.html
docs/_cpp_api/define_macros_8h_1adad592a7b1b7eed529cdf6acd584c883.html
docs/_cpp_api/dir_cpp.html
docs/_cpp_api/dir_cpp_include.html
docs/_cpp_api/dir_cpp_include_torch_tensorrt.html
docs/_cpp_api/enum_logging_8h_1a130f65408ad8cbaee060f05e8db69558.html
