
Commit 8580106

Merge branch 'master' of https://github.com/NVIDIA/trtorch into ptq

2 parents dd443a6 + 6be3f1f

5 files changed: +148 −33 lines

Diff for: CONTRIBUTING.md (new file, +76 lines)
# Contribution Guidelines

### Developing TRTorch

Do try to file an issue for your feature or bug before filing a PR (op support is generally an exception, as long as you provide tests to prove functionality). There is also a backlog (https://github.com/NVIDIA/TRTorch/issues) of issues which are tagged with the area of focus, a coarse priority level, and whether the issue may be accessible to new contributors. Let us know if you are interested in working on an issue. We are happy to provide guidance and mentorship for new contributors. Note, though, that there is no claiming of issues; we prefer getting working code quickly over addressing concerns about "wasted work".

#### Communication

The primary location for discussion is GitHub issues. This is the best place for questions about the project and discussion about specific issues.

We use the PyTorch Slack for communication about core development, integration with PyTorch, and other communication that doesn't make sense in GitHub issues. If you need an invite, take a look at the [PyTorch README](https://github.com/pytorch/pytorch/blob/master/README.md) for instructions on requesting one.

### Coding Guidelines

- We generally follow the coding guidelines used in PyTorch. This is not strictly enforced right now, but match the style already used in the code.

- Avoid introducing unnecessary complexity into existing code so that maintainability and readability are preserved.

- Try to avoid committing commented-out code.

- Minimize warnings (and produce no errors) from the compiler.

- Make sure all converter tests and the core module test suite pass (see the example command after this list).

- New features should have corresponding tests, or, if the feature is difficult to test in a testing framework, a description of your methodology for testing it.

- Comment subtleties and design decisions.

- Document hacks; we can only discuss them if we can find them.
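One way to run those suites locally is shown below. This is a minimal sketch that assumes the converter and core module tests are defined as Bazel test targets under `//tests`; the exact target layout is an assumption, not something stated in this commit:

    $ bazel test //tests/...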
### Commits and PRs

- Try to keep pull requests focused (multiple pull requests are okay). Typically PRs should focus on a single issue or a small collection of closely related issues.

- For clarity, we typically try to follow the guidelines set by https://www.conventionalcommits.org/en/v1.0.0/ for commit messages. Again, this is not strictly enforced; an illustrative example follows this list.
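For instance, a commit message in that style might look like the following; the type, scope, and description here are purely illustrative and not taken from this repository's history:

    feat(converters): add support for a new elementwise op

    Includes unit tests demonstrating the converter's functionality.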
#### Sign Your Work

We require that all contributors "sign off" on their commits. This certifies that the contribution is your original work, or that you have the rights to submit it under the same license or a compatible license.

Any contribution which contains commits that are not signed off will not be accepted.

To sign off on a commit you simply use the `--signoff` (or `-s`) option when committing your changes:

    $ git commit -s -m "Add cool feature."

This will append the following to your commit message:

    Signed-off-by: Your Name <your.name@example.com>
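If you have already created a commit without signing off, one way to fix the most recent commit (assuming it has not been pushed yet) is to amend it with the same flag:

    $ git commit --amend -s --no-edit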
By doing this you certify the below:

    Developer Certificate of Origin
    Version 1.1

    Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
    1 Letterman Drive
    Suite D4700
    San Francisco, CA, 94129

    Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

    Developer's Certificate of Origin 1.1

    By making a contribution to this project, I certify that:

    (a) The contribution was created in whole or in part by me and I have the right to submit it under the open source license indicated in the file; or

    (b) The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate open source license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same open source license (unless I am permitted to submit under a different license), as indicated in the file; or

    (c) The contribution was provided directly to me by some other person who certified (a), (b) or (c) and I have not modified it.

    (d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved.

Thanks in advance for your patience as we review your contributions; we do appreciate them!

Diff for: README.md (+59 −16)

@@ -1,33 +1,70 @@
-# TRTorch
+# TRTorch

 > Ahead of Time (AOT) compiling for PyTorch JIT

-## Compiling TRTorch
+TRTorch is a compiler for PyTorch/TorchScript, targeting NVIDIA GPUs via NVIDIA's TensorRT Deep Learning Optimizer and Runtime. Unlike PyTorch's Just-In-Time (JIT) compiler, TRTorch is an Ahead-of-Time (AOT) compiler, meaning that before you deploy your TorchScript code, you go through an explicit compile step to convert a standard TorchScript program into a module targeting a TensorRT engine. TRTorch operates as a PyTorch extension and compiles modules that integrate into the JIT runtime seamlessly. After compilation, using the optimized graph should feel no different than running a TorchScript module. You also have access to TensorRT's suite of configurations at compile time, so you are able to specify operating precision (FP32/FP16) and other settings for your module.
+
+More Information / System Architecture:
+
+- [GTC 2020 Talk](https://developer.nvidia.com/gtc/2020/video/s21671)
+
+## Example Usage
+
+```c++
+#include "torch/script.h"
+#include "trtorch/trtorch.h"
+
+...
+auto compile_settings = trtorch::ExtraInfo(dims);
+// FP16 execution
+compile_settings.op_precision = torch::kHalf;
+// Compile module
+auto trt_mod = trtorch::CompileGraph(ts_mod, compile_settings);
+// Run like normal
+auto results = trt_mod.forward({in_tensor});
+...
+```
+
+## Platform Support
+
+| Platform | Support |
+| -------- | ------- |
+| Linux AMD64 / GPU | **Supported** |
+| Linux aarch64 / GPU | **Planned/Possible with Native Compilation and small modifications to the build system** |
+| Linux aarch64 / DLA | **Planned/Possible with Native Compilation but untested** |
+| Windows / GPU | - |
+| Linux ppc64le / GPU | - |

 ### Dependencies

 - Libtorch 1.4.0
 - CUDA 10.1
 - cuDNN 7.6
-- TensorRT 6.0.1.5
+- TensorRT 6.0.1

-Install TensorRT, CUDA and cuDNN on the system before starting to compile.
+## Prebuilt Binaries
+
+Releases: https://github.com/NVIDIA/TRTorch/releases

+## Compiling TRTorch
+
+Install TensorRT, CUDA and cuDNN on the system before starting to compile.

 ``` shell
-bazel build //:libtrtorch --cxxopt="-DNDEBUG"
+bazel build //:libtrtorch --compilation_mode=opt
 ```

-### Debug build
+### Debug build
 ``` shell
 bazel build //:libtrtorch --compilation_mode=dbg
 ```

-A tarball with the include files and library can then be found in bazel-bin
+A tarball with the include files and library can then be found in bazel-bin

-### Running TRTorch on a JIT Graph
+### Running TRTorch on a JIT Graph

-> Make sure to add LibTorch's version of CUDA 10.1 to your LD_LIBRARY_PATH `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(pwd)/bazel-TRTorch/external/libtorch/lib`
+> Make sure to add LibTorch to your LD_LIBRARY_PATH <br>
+>`export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(pwd)/bazel-TRTorch/external/libtorch/lib`


 ``` shell
@@ -38,22 +75,28 @@ bazel run //cpp/trtorchexec -- $(realpath <PATH TO GRAPH>) <input-size>

 ### In TRTorch?

-Thanks for wanting to contribute! There are two main ways to handle supporting a new op. Either you can write a converter for the op from scratch and register it in the NodeConverterRegistry, or, if you can map the op to a set of ops that already have converters, you can write a graph rewrite pass which will replace your new op with an equivalent subgraph of supported ops. It's preferred to use graph rewriting because then we do not need to maintain a large library of op converters.
+Thanks for wanting to contribute! There are two main ways to handle supporting a new op. Either you can write a converter for the op from scratch and register it in the NodeConverterRegistry, or, if you can map the op to a set of ops that already have converters, you can write a graph rewrite pass which will replace your new op with an equivalent subgraph of supported ops. It's preferred to use graph rewriting because then we do not need to maintain a large library of op converters. Also, do look at the various op support trackers in the [issues](https://github.com/NVIDIA/TRTorch/issues) for information on the support status of various operators.

 ### In my application?

-> The Node Converter Registry is not currently exposed in the public API, but you can try using internal headers.
+> The Node Converter Registry is not exposed in the top-level API, but you can try using the internal headers shipped with the tarball.

-You can register a converter for your op using the NodeConverterRegistry inside your application.
+You can register a converter for your op using the NodeConverterRegistry inside your application.

 ## Structure of the repo

 | Component | Description |
 | ------------- | ------------------------------------------------------------ |
-| [**core**]() | Main JIT ingest, lowering, conversion and execution implementations |
-| [**cpp**]() | C++ API for TRTorch |
-| [**tests**]() | Unit tests for TRTorch |
+| [**core**](core) | Main JIT ingest, lowering, conversion and execution implementations |
+| [**cpp**](cpp) | C++-specific components, including the API and example applications |
+| [**cpp/api**](cpp/api) | C++ API for TRTorch |
+| [**tests**](tests) | Unit tests for TRTorch |
+
+## Contributing
+
+Take a look at [CONTRIBUTING.md](CONTRIBUTING.md)

 ## License

-The TRTorch license can be found in the LICENSE file.
+The TRTorch license can be found in the LICENSE file. It is a BSD-style license.
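To make the Example Usage section above a little more concrete, here is a minimal end-to-end sketch. It assumes `trtorch::ExtraInfo` can be constructed from a list of fixed input shapes and that `trtorch::CompileGraph` returns a standard `torch::jit` module (as the `forward` call in the README suggests); the file names, input shape, and the final save step are illustrative assumptions rather than code from this commit.

```c++
#include <iostream>

#include "torch/script.h"
#include "trtorch/trtorch.h"

int main() {
    // Load a serialized TorchScript module (the file name is an illustrative assumption)
    auto ts_mod = torch::jit::load("resnet18.jit.pt");
    ts_mod.to(torch::kCUDA);

    // One fixed shape per input tensor (shape chosen for illustration)
    std::vector<std::vector<int64_t>> dims = {{1, 3, 224, 224}};
    auto compile_settings = trtorch::ExtraInfo(dims);
    compile_settings.op_precision = torch::kHalf;  // FP16 execution

    // Compile the TorchScript module into a TensorRT-backed module
    auto trt_mod = trtorch::CompileGraph(ts_mod, compile_settings);

    // Run it like a normal TorchScript module; input cast to half to match op_precision
    auto in_tensor = torch::randn({1, 3, 224, 224}, torch::kCUDA).to(torch::kHalf);
    auto out = trt_mod.forward({in_tensor}).toTensor();
    std::cout << out.sizes() << std::endl;

    // Assuming the compiled result is a standard torch::jit module, it can be re-serialized
    trt_mod.save("trt_resnet18.ts");
    return 0;
}
```

Such a program would presumably be built against the headers and `libtrtorch` library from the tarball that `bazel build //:libtrtorch` produces, as described in the Compiling TRTorch section.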

Diff for: tests/modules/hub.py (+1 −1)
@@ -10,7 +10,7 @@
     "inception": models.inception_v3(pretrained=True),
     "googlenet": models.googlenet(pretrained=True),
     "shufflenet": models.shufflenet_v2_x1_0(pretrained=True),
-    "mobilenet": models.mobilenet_v2(pretrained=True),
+    "mobilenet_v2": models.mobilenet_v2(pretrained=True),
     "resnext50_32x4d": models.resnext50_32x4d(pretrained=True),
     "wideresnet50_2": models.wide_resnet50_2(pretrained=True),
     "mnasnet": models.mnasnet1_0(pretrained=True),

Diff for: tests/modules/test_compiled_modules.cpp (+6 −8)
@@ -28,11 +28,9 @@ TEST_P(ModuleTests, CompiledModuleIsClose) {
 INSTANTIATE_TEST_SUITE_P(CompiledModuleForwardIsCloseSuite,
                          ModuleTests,
                          testing::Values(
-                             PathAndInSize({"tests/modules/lenet.jit.pt",
-                                            {{1,1,28,28}}}),
-                             PathAndInSize({"tests/modules/resnet18.jit.pt",
-                                            {{1,3,224,224}}}),
-                             PathAndInSize({"tests/modules/resnet50.jit.pt",
-                                            {{1,3,224,224}}}),
-                             PathAndInSize({"tests/modules/mobilenet_v2.jit.pt",
-                                            {{1,3,224,224}}})));
+                             PathAndInSize({"tests/modules/resnet18.jit.pt",
+                                            {{1,3,224,224}}}),
+                             PathAndInSize({"tests/modules/resnet50.jit.pt",
+                                            {{1,3,224,224}}}),
+                             PathAndInSize({"tests/modules/mobilenet_v2.jit.pt",
+                                            {{1,3,224,224}}})));

Diff for: tests/modules/test_modules_as_engines.cpp (+6 −8)
@@ -19,11 +19,9 @@ TEST_P(ModuleTests, ModuleAsEngineIsClose) {
 INSTANTIATE_TEST_SUITE_P(ModuleAsEngineForwardIsCloseSuite,
                          ModuleTests,
                          testing::Values(
-                             PathAndInSize({"tests/modules/lenet.jit.pt",
-                                            {{1,1,28,28}}}),
-                             PathAndInSize({"tests/modules/resnet18.jit.pt",
-                                            {{1,3,224,224}}}),
-                             PathAndInSize({"tests/modules/resnet50.jit.pt",
-                                            {{1,3,224,224}}}),
-                             PathAndInSize({"tests/modules/mobilenet_v2.jit.pt",
-                                            {{1,3,224,224}}})));
+                             PathAndInSize({"tests/modules/resnet18.jit.pt",
+                                            {{1,3,224,224}}}),
+                             PathAndInSize({"tests/modules/resnet50.jit.pt",
+                                            {{1,3,224,224}}}),
+                             PathAndInSize({"tests/modules/mobilenet_v2.jit.pt",
+                                            {{1,3,224,224}}})));
