Do try to file an issue for your feature or bug before filing a PR (op support is generally an exception as long as you provide tests to prove functionality). There is also a backlog (https://github.com/NVIDIA/TRTorch/issues) of issues which are tagged with the area of focus, a coarse priority level, and whether the issue may be accessible to new contributors. Let us know if you are interested in working on an issue. We are happy to provide guidance and mentorship for new contributors. Note, though, that there is no claiming of issues; we prefer getting working code quickly over addressing concerns about "wasted work".
#### Communication
The primary location for discussion is GitHub issues. This is the best place for questions about the project and discussion about specific issues.
We use the PyTorch Slack for communication about core development, integration with PyTorch and other communication that doesn't make sense in GitHub issues. If you need an invite, take a look at the [PyTorch README](https://github.com/pytorch/pytorch/blob/master/README.md) for instructions on requesting one.
### Coding Guidelines
- We generally follow the coding guidelines used in PyTorch. Right now this is not strictly enforced, but match the style already used in the code
- Avoid introducing unnecessary complexity into existing code so that maintainability and readability are preserved
- Try to avoid committing commented-out code
- Minimize compiler warnings (and allow no errors)
- Make sure all converter tests and the core module test suite pass
- New features should have corresponding tests, or, if the feature is difficult to test in a testing framework, a description of your testing methodology
- Comment subtleties and design decisions
- Document hacks; we can only discuss them if we can find them
### Commits and PRs
- Try to keep pull requests focused (multiple pull requests are okay). Typically PRs should focus on a single issue or a small collection of closely related issues.
- For clarity, we typically try to follow the commit message guidelines set by https://www.conventionalcommits.org/en/v1.0.0/. Again, this is not strictly enforced; see the example after this list.
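For example, a commit subject following that convention might look like the line below (the scope and description here are purely illustrative):

```
feat(//core/conversion): add support for aten::foo
```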
#### Sign Your Work
We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
Any contribution which contains commits that are not Signed-Off will not be accepted.
To sign off on a commit you simply use the `--signoff` (or `-s`) option when committing your changes:
    $ git commit -s -m "Add cool feature."
This will append the following to your commit message:
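    Signed-off-by: Your Name <your@email.com>

(the name and email come from your git configuration; the placeholder above is illustrative). The sign-off certifies your agreement with the Developer Certificate of Origin, reproduced below:

Developer Certificate of Origin
Version 1.1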
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive
Suite D4700
San Francisco, CA, 94129
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I have the right to submit it under the open source license indicated in the file; or
(b) The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate open source license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same open source license (unless I am permitted to submit under a different license), as indicated in the file; or
(c) The contribution was provided directly to me by some other person who certified (a), (b) or (c) and I have not modified it.
(d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved.
Thanks in advance for your patience as we review your contributions; we do appreciate them!
TRTorch is a compiler for PyTorch/TorchScript, targeting NVIDIA GPUs via NVIDIA's TensorRT Deep Learning Optimizer and Runtime. Unlike PyTorch's Just-In-Time (JIT) compiler, TRTorch is an Ahead-of-Time (AOT) compiler, meaning that before you deploy your TorchScript code, you go through an explicit compile step to convert a standard TorchScript program into a module targeting a TensorRT engine. TRTorch operates as a PyTorch extension and compiles modules that integrate into the JIT runtime seamlessly. After compilation, using the optimized graph should feel no different from running a TorchScript module. You also have access to TensorRT's suite of configurations at compile time, so you are able to specify operating precision (FP32/FP16) and other settings for your module.
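For a sense of what that workflow looks like, here is a minimal C++ sketch. It assumes the `trtorch::ExtraInfo` / `trtorch::CompileGraph` API; check the headers shipped with the distribution tarball for the exact signatures.

```cpp
#include "torch/script.h"
#include "trtorch/trtorch.h"

int main() {
  // Load a standard TorchScript module saved from PyTorch
  auto mod = torch::jit::load("model.ts");

  // Describe the input shape(s) the engine should be built for and
  // pick an operating precision (FP16 shown here)
  auto settings = trtorch::ExtraInfo({{1, 3, 224, 224}});
  settings.op_precision = torch::kHalf;

  // AOT compile step: produces a TorchScript module backed by a TensorRT engine
  auto trt_mod = trtorch::CompileGraph(mod, settings);

  // The compiled module is used exactly like any other TorchScript module
  auto in = torch::randn({1, 3, 224, 224}, {torch::kCUDA}).to(torch::kHalf);
  auto out = trt_mod.forward({in});
  return 0;
}
```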
Install TensorRT, CUDA and cuDNN on the system before starting to compile.
### Release build

``` shell
bazel build //:libtrtorch --compilation_mode=opt
```
### Debug build

```shell
bazel build //:libtrtorch --compilation_mode=dbg
```
A tarball with the include files and library can then be found in bazel-bin

### Running TRTorch on a JIT Graph

> Make sure to add LibTorch to your LD_LIBRARY_PATH <br>
> `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(pwd)/bazel-TRTorch/external/libtorch/lib`

``` shell
bazel run //cpp/trtorchexec -- $(realpath <PATH TO GRAPH>) <input-size>
```
### In TRTorch?
Thanks for wanting to contribute! There are two main ways to handle supporting a new op. Either you can write a converter for the op from scratch and register it in the NodeConverterRegistry, or, if you can map the op to a set of ops that already have converters, you can write a graph rewrite pass which will replace your new op with an equivalent subgraph of supported ops. Graph rewriting is preferred because then we do not need to maintain a large library of op converters. Also, look at the various op support trackers in the [issues](https://github.com/NVIDIA/TRTorch/issues) for information on the support status of various operators. A sketch of the converter approach follows below.
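The sketch below is modeled on the in-tree converters; the exact namespaces and helpers live in internal headers and may differ between releases, so treat the names as illustrative rather than a definitive API.

```cpp
#include "core/conversion/converters/converters.h"

namespace {
using namespace trtorch::core::conversion;

// .pattern() associates a TorchScript op schema with a lambda that maps
// the node onto TensorRT layers in the network under construction
auto relu_registration = converters::RegisterNodeConversionPatterns().pattern({
    "aten::relu(Tensor input) -> (Tensor)",
    [](ConversionCtx* ctx, const torch::jit::Node* n, converters::args& args) -> bool {
      auto in = args[0].ITensor();
      auto layer = ctx->net->addActivation(*in, nvinfer1::ActivationType::kRELU);
      // Bind the TensorRT output to the TorchScript value so that
      // downstream converters can look it up
      ctx->AssociateValueAndTensor(n->outputs()[0], layer->getOutput(0));
      return true;
    }});
} // namespace
```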
### In my application?
> The Node Converter Registry is not exposed in the top level API but you can try using the internal headers shipped with the tarball.
You can register a converter for your op using the NodeConverterRegistry inside your application.