
Commit 5a78f8c

Merge pull request #1131 from lamhoangtung/fix_doc_hyperlink
fix: Update broken repo hyperlink
2 parents: 484fc90 + 3c1fafc · commit 5a78f8c

File tree: 21 files changed (+39, -39 lines)


CONTRIBUTING.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 
 ### Developing Torch-TensorRT
 
-Do try to fill an issue with your feature or bug before filling a PR (op support is generally an exception as long as you provide tests to prove functionality). There is also a backlog (https://github.com/NVIDIA/Torch-TensorRT/issues) of issues which are tagged with the area of focus, a coarse priority level and whether the issue may be accessible to new contributors. Let us know if you are interested in working on a issue. We are happy to provide guidance and mentorship for new contributors. Though note, there is no claiming of issues, we prefer getting working code quickly vs. addressing concerns about "wasted work".
+Do try to fill an issue with your feature or bug before filling a PR (op support is generally an exception as long as you provide tests to prove functionality). There is also a backlog (https://github.com/pytorch/TensorRT/issues) of issues which are tagged with the area of focus, a coarse priority level and whether the issue may be accessible to new contributors. Let us know if you are interested in working on a issue. We are happy to provide guidance and mentorship for new contributors. Though note, there is no claiming of issues, we prefer getting working code quickly vs. addressing concerns about "wasted work".
 
 #### Communication
 

README.md

Lines changed: 2 additions & 2 deletions
@@ -118,7 +118,7 @@ These are the following dependencies used to verify the testcases. Torch-TensorR
 
 ## Prebuilt Binaries and Wheel files
 
-Releases: https://github.com/NVIDIA/Torch-TensorRT/releases
+Releases: https://github.com/pytorch/TensorRT/releases
 
 ## Compiling Torch-TensorRT
 
@@ -291,7 +291,7 @@ Supported Python versions:
 
 ### In Torch-TensorRT?
 
-Thanks for wanting to contribute! There are two main ways to handle supporting a new op. Either you can write a converter for the op from scratch and register it in the NodeConverterRegistry or if you can map the op to a set of ops that already have converters you can write a graph rewrite pass which will replace your new op with an equivalent subgraph of supported ops. Its preferred to use graph rewriting because then we do not need to maintain a large library of op converters. Also do look at the various op support trackers in the [issues](https://github.com/NVIDIA/Torch-TensorRT/issues) for information on the support status of various operators.
+Thanks for wanting to contribute! There are two main ways to handle supporting a new op. Either you can write a converter for the op from scratch and register it in the NodeConverterRegistry or if you can map the op to a set of ops that already have converters you can write a graph rewrite pass which will replace your new op with an equivalent subgraph of supported ops. Its preferred to use graph rewriting because then we do not need to maintain a large library of op converters. Also do look at the various op support trackers in the [issues](https://github.com/pytorch/TensorRT/issues) for information on the support status of various operators.
 
 ### In my application?
 

core/partitioning/README.md

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ from the user. Shapes can be calculated by running the graphs with JIT.
 it's still a phase in our partitioning process.
 - `Stitching`. Stitch all TensorRT engines with PyTorch nodes altogether.
 
-Test cases for each of these components could be found [here](https://github.com/NVIDIA/Torch-TensorRT/tree/master/tests/core/partitioning).
+Test cases for each of these components could be found [here](https://github.com/pytorch/TensorRT/tree/master/tests/core/partitioning).
 
 Here is the brief description of functionalities of each file:
 - `PartitionInfo.h/cpp`: The automatic fallback APIs that is used for partitioning.

core/plugins/README.md

Lines changed: 1 addition & 1 deletion
@@ -37,4 +37,4 @@ If you'd like to compile your plugin with Torch-TensorRT,
 
 Once you've completed the above steps, upon successful compilation of Torch-TensorRT library, your plugin should be available in `libtorchtrt_plugins.so`.
 
-A sample runtime application on how to run a network with plugins can be found <a href="https://github.com/NVIDIA/Torch-TensorRT/tree/master/examples/torchtrt_runtime_example" >here</a>
+A sample runtime application on how to run a network with plugins can be found <a href="https://github.com/pytorch/TensorRT/tree/master/examples/torchtrt_runtime_example" >here</a>

docsrc/conf.py

Lines changed: 1 addition & 1 deletion
@@ -123,7 +123,7 @@
 "logo_icon": "&#xe86f",
 
 # Set the repo location to get a badge with stats
-'repo_url': 'https://github.com/nvidia/Torch-TensorRT/',
+'repo_url': 'https://github.com/pytorch/TensorRT/',
 'repo_name': 'Torch-TensorRT',
 
 # Visible levels of the global TOC; -1 means unlimited

docsrc/contributors/lowering.rst

Lines changed: 9 additions & 9 deletions
@@ -33,7 +33,7 @@ Dead code elimination will check if a node has side effects and not delete it if
 Eliminate Exeception Or Pass Pattern
 ***************************************
 
-`Torch-TensorRT/core/lowering/passes/exception_elimination.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/exception_elimination.cpp>`_
+`Torch-TensorRT/core/lowering/passes/exception_elimination.cpp <https://github.com/pytorch/TensorRT/blob/master/core/lowering/passes/exception_elimination.cpp>`_
 
 A common pattern in scripted modules are dimension gaurds which will throw execptions if
 the input dimension is not what was expected.
@@ -68,7 +68,7 @@ Freeze attributes and inline constants and modules. Propogates constants in the
 Fuse AddMM Branches
 ***************************************
 
-`Torch-TensorRT/core/lowering/passes/fuse_addmm_branches.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/fuse_addmm_branches.cpp>`_
+`Torch-TensorRT/core/lowering/passes/fuse_addmm_branches.cpp <https://github.com/pytorch/TensorRT/blob/master/core/lowering/passes/fuse_addmm_branches.cpp>`_
 
 A common pattern in scripted modules is tensors of different dimensions use different constructions for implementing linear layers. We fuse these
 different varients into a single one that will get caught by the Unpack AddMM pass.
@@ -101,7 +101,7 @@ This pass fuse the addmm or matmul + add generated by JIT back to linear
 Fuse Flatten Linear
 ***************************************
 
-`Torch-TensorRT/core/lowering/passes/fuse_flatten_linear.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/fuse_flatten_linear.cpp>`_
+`Torch-TensorRT/core/lowering/passes/fuse_flatten_linear.cpp <https://github.com/pytorch/TensorRT/blob/master/core/lowering/passes/fuse_flatten_linear.cpp>`_
 
 TensorRT implicity flattens input layers into fully connected layers when they are higher than 1D. So when there is a
 ``aten::flatten`` -> ``aten::linear`` pattern we remove the ``aten::flatten``.
@@ -134,7 +134,7 @@ Removes _all_ tuples and raises an error if some cannot be removed, this is used
 Module Fallback
 *****************
 
-`Torch-TensorRT/core/lowering/passes/module_fallback.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/module_fallback.cpp>`_
+`Torch-TensorRT/core/lowering/passes/module_fallback.cpp <https://github.com/pytorch/TensorRT/blob/master/core/lowering/passes/module_fallback.cpp>`_
 
 Module fallback consists of two lowering passes that must be run as a pair. The first pass is run before freezing to place delimiters in the graph around modules
 that should run in PyTorch. The second pass marks nodes between these delimiters after freezing to signify they should run in PyTorch.
@@ -162,30 +162,30 @@ Right now, it does:
 Remove Contiguous
 ***************************************
 
-`Torch-TensorRT/core/lowering/passes/remove_contiguous.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/remove_contiguous.cpp>`_
+`Torch-TensorRT/core/lowering/passes/remove_contiguous.cpp <https://github.com/pytorch/TensorRT/blob/master/core/lowering/passes/remove_contiguous.cpp>`_
 
 Removes contiguous operators since we are doing TensorRT memory is already contiguous.
 
 
 Remove Dropout
 ***************************************
 
-`Torch-TensorRT/core/lowering/passes/remove_dropout.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/remove_dropout.cpp>`_
+`Torch-TensorRT/core/lowering/passes/remove_dropout.cpp <https://github.com/pytorch/TensorRT/blob/master/core/lowering/passes/remove_dropout.cpp>`_
 
 Removes dropout operators since we are doing inference.
 
 Remove To
 ***************************************
 
-`Torch-TensorRT/core/lowering/passes/remove_to.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/remove_to.cpp>`_
+`Torch-TensorRT/core/lowering/passes/remove_to.cpp <https://github.com/pytorch/TensorRT/blob/master/core/lowering/passes/remove_to.cpp>`_
 
 Removes ``aten::to`` operators that do casting, since TensorRT mangages it itself. It is important that this is one of the last passes run so that
 other passes have a change to move required cast operators out of the main namespace.
 
 Unpack AddMM
 ***************************************
 
-`Torch-TensorRT/core/lowering/passes/unpack_addmm.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/unpack_addmm.cpp>`_
+`Torch-TensorRT/core/lowering/passes/unpack_addmm.cpp <https://github.com/pytorch/TensorRT/blob/master/core/lowering/passes/unpack_addmm.cpp>`_
 
 Unpacks ``aten::addmm`` into ``aten::matmul`` and ``aten::add_`` (with an additional ``trt::const``
 op to freeze the bias in the TensorRT graph). This lets us reuse the ``aten::matmul`` and ``aten::add_``
@@ -194,7 +194,7 @@ converters instead of needing a dedicated converter.
 Unpack LogSoftmax
 ***************************************
 
-`Torch-TensorRT/core/lowering/passes/unpack_log_softmax.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/unpack_log_softmax.cpp>`_
+`Torch-TensorRT/core/lowering/passes/unpack_log_softmax.cpp <https://github.com/pytorch/TensorRT/blob/master/core/lowering/passes/unpack_log_softmax.cpp>`_
 
 Unpacks ``aten::logsoftmax`` into ``aten::softmax`` and ``aten::log``. This lets us reuse the
 ``aten::softmax`` and ``aten::log`` converters instead of needing a dedicated converter.
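
The two "Unpack" passes above rest on simple operator identities. As a hedged illustration only (plain PyTorch, not part of this commit or of the lowering passes themselves), the rewrites can be checked numerically:

```python
# Numerical check of the rewrites described in lowering.rst:
#   aten::addmm        ->  aten::matmul + aten::add
#   aten::logsoftmax   ->  aten::softmax followed by aten::log
import torch

x = torch.randn(4, 8)
w = torch.randn(8, 16)
b = torch.randn(16)

# Unpack AddMM: addmm(bias, mat1, mat2) == matmul(mat1, mat2) + bias
assert torch.allclose(torch.addmm(b, x, w), torch.matmul(x, w) + b, atol=1e-6)

# Unpack LogSoftmax: log_softmax(x) == log(softmax(x))
assert torch.allclose(
    torch.nn.functional.log_softmax(x, dim=1),
    torch.log(torch.nn.functional.softmax(x, dim=1)),
    atol=1e-6,
)
print("rewrites are numerically equivalent")
```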

docsrc/tutorials/installation.rst

Lines changed: 2 additions & 2 deletions
@@ -25,14 +25,14 @@ You can install the python package using
 
 .. code-block:: sh
 
-pip3 install torch-tensorrt -f https://github.com/NVIDIA/Torch-TensorRT/releases
+pip3 install torch-tensorrt -f https://github.com/pytorch/TensorRT/releases
 
 .. _bin-dist:
 
 C++ Binary Distribution
 ------------------------
 
-Precompiled tarballs for releases are provided here: https://github.com/NVIDIA/Torch-TensorRT/releases
+Precompiled tarballs for releases are provided here: https://github.com/pytorch/TensorRT/releases
 
 .. _compile-from-source:
 

docsrc/tutorials/ptq.rst

Lines changed: 3 additions & 3 deletions
@@ -138,7 +138,7 @@ Then all thats required to setup the module for INT8 calibration is to set the f
 If you have an existing Calibrator implementation for TensorRT you may directly set the ``ptq_calibrator`` field with a pointer to your calibrator and it will work as well.
 From here not much changes in terms of how to execution works. You are still able to fully use LibTorch as the sole interface for inference. Data should remain
 in FP32 precision when it's passed into `trt_mod.forward`. There exists an example application in the Torch-TensorRT demo that takes you from training a VGG16 network on
-CIFAR10 to deploying in INT8 with Torch-TensorRT here: https://github.com/NVIDIA/Torch-TensorRT/tree/master/cpp/ptq
+CIFAR10 to deploying in INT8 with Torch-TensorRT here: https://github.com/pytorch/TensorRT/tree/master/cpp/ptq
 
 .. _writing_ptq_python:
 
@@ -199,8 +199,8 @@ to use ``CacheCalibrator`` to use in INT8 mode.
 trt_mod = torch_tensorrt.compile(model, compile_settings)
 
 If you already have an existing calibrator class (implemented directly using TensorRT API), you can directly set the calibrator field to your class which can be very convenient.
-For a demo on how PTQ can be performed on a VGG network using Torch-TensorRT API, you can refer to https://github.com/NVIDIA/Torch-TensorRT/blob/master/tests/py/test_ptq_dataloader_calibrator.py
-and https://github.com/NVIDIA/Torch-TensorRT/blob/master/tests/py/test_ptq_trt_calibrator.py
+For a demo on how PTQ can be performed on a VGG network using Torch-TensorRT API, you can refer to https://github.com/pytorch/TensorRT/blob/master/tests/py/test_ptq_dataloader_calibrator.py
+and https://github.com/pytorch/TensorRT/blob/master/tests/py/test_ptq_trt_calibrator.py
 
 Citations
 ^^^^^^^^^^^
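
For context on the calibration flow the tutorial above references, here is a hedged Python sketch of the DataLoader-based PTQ path. It is not part of this commit; the tiny model, the random calibration data, and the exact keyword arguments are illustrative assumptions based on the 1.x ``torch_tensorrt`` Python API (``torch_tensorrt.ptq.DataLoaderCalibrator``, ``torch_tensorrt.compile``), not the VGG16/CIFAR10 demo itself.

```python
import torch
import torch.nn as nn
import torch_tensorrt

# A placeholder network standing in for the VGG16 model used in the real demo.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * 32 * 32, 10)

    def forward(self, x):
        x = torch.relu(self.conv(x))
        return self.fc(torch.flatten(x, 1))

model = TinyNet().eval().cuda()

# Random tensors standing in for a representative calibration set (CIFAR10 in the demo).
calib_dataset = torch.utils.data.TensorDataset(
    torch.randn(64, 3, 32, 32), torch.zeros(64, dtype=torch.long)
)
calib_dataloader = torch.utils.data.DataLoader(calib_dataset, batch_size=8)

# Build a calibrator from the dataloader, then hand it to the compiler.
calibrator = torch_tensorrt.ptq.DataLoaderCalibrator(
    calib_dataloader,
    cache_file="./calibration.cache",
    use_cache=False,
    algo_type=torch_tensorrt.ptq.CalibrationAlgo.ENTROPY_CALIBRATION_2,
    device=torch.device("cuda:0"),
)

compile_settings = {
    "inputs": [torch_tensorrt.Input((8, 3, 32, 32))],
    "enabled_precisions": {torch.int8},
    "calibrator": calibrator,
}
trt_mod = torch_tensorrt.compile(model, **compile_settings)
```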

docsrc/tutorials/runtime.rst

Lines changed: 1 addition & 1 deletion
@@ -26,7 +26,7 @@ programs just as you would otherwise via PyTorch API.
 
 .. note:: If you are linking ``libtorchtrt_runtime.so``, likely using the following flags will help ``-Wl,--no-as-needed -ltorchtrt -Wl,--as-needed`` as theres no direct symbol dependency to anything in the Torch-TensorRT runtime for most Torch-TensorRT runtime applications
 
-An example of how to use ``libtorchtrt_runtime.so`` can be found here: https://github.com/NVIDIA/Torch-TensorRT/tree/master/examples/torchtrt_example
+An example of how to use ``libtorchtrt_runtime.so`` can be found here: https://github.com/pytorch/TensorRT/tree/master/examples/torchtrt_runtime_example
 
 Plugin Library
 ---------------

examples/custom_converters/README.md

Lines changed: 2 additions & 2 deletions
@@ -66,7 +66,7 @@ from torch.utils import cpp_extension
 
 
 # library_dirs should point to the libtorch_tensorrt.so, include_dirs should point to the dir that include the headers
-# 1) download the latest package from https://github.com/NVIDIA/Torch-TensorRT/releases/
+# 1) download the latest package from https://github.com/pytorch/TensorRT/releases/
 # 2) Extract the file from downloaded package, we will get the "torch_tensorrt" directory
 # 3) Set torch_tensorrt_path to that directory
 torch_tensorrt_path = <PATH TO TRTORCH>
@@ -87,7 +87,7 @@ setup(
 ```
 Make sure to include the path for header files in `include_dirs` and the path
 for dependent libraries in `library_dirs`. Generally speaking, you should download
-the latest package from [here](https://github.com/NVIDIA/Torch-TensorRT/releases), extract
+the latest package from [here](https://github.com/pytorch/TensorRT/releases), extract
 the files, and the set the `torch_tensorrt_path` to it. You could also add other compilation
 flags in cpp_extension if you need. Then, run above python scripts as:
 ```shell

examples/custom_converters/elu_converter/setup.py

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@
 
 
 # library_dirs should point to the libtrtorch.so, include_dirs should point to the dir that include the headers
-# 1) download the latest package from https://github.com/NVIDIA/Torch-TensorRT/releases/
+# 1) download the latest package from https://github.com/pytorch/TensorRT/releases/
 # 2) Extract the file from downloaded package, we will get the "trtorch" directory
 # 3) Set trtorch_path to that directory
 torchtrt_path = <PATH TO TORCHTRT>
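
Both excerpts above describe the same `cpp_extension` recipe: point `library_dirs` at the extracted Torch-TensorRT package and `include_dirs` at its headers. A hedged, self-contained sketch of such a `setup.py` follows; it is not part of this commit, and the install path, extension name, source file, and choice of `CUDAExtension` are placeholders for illustration only.

```python
import os
from setuptools import setup
from torch.utils import cpp_extension

# Assumed location of the extracted Torch-TensorRT release tarball (placeholder path).
torch_tensorrt_path = "/opt/torch_tensorrt"

setup(
    name="elu_converter",
    ext_modules=[
        cpp_extension.CUDAExtension(
            "elu_converter",
            ["./csrc/elu_converter.cpp"],  # placeholder converter source
            # library_dirs points at libtorch_tensorrt.so, include_dirs at the headers,
            # as the README excerpt above instructs.
            library_dirs=[os.path.join(torch_tensorrt_path, "lib")],
            libraries=["torch_tensorrt"],
            include_dirs=[os.path.join(torch_tensorrt_path, "include")],
        )
    ],
    cmdclass={"build_ext": cpp_extension.BuildExtension},
)
```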

examples/int8/ptq/README.md

Lines changed: 2 additions & 2 deletions
@@ -139,11 +139,11 @@ This will build a binary named `ptq` in `bazel-out/k8-<opt|dbg>/bin/cpp/int8/ptq
 
 ## Compilation using Makefile
 
-1) Download releases of <a href="https://pytorch.org">LibTorch</a>, <a href="https://github.com/NVIDIA/Torch-TensorRT/releases">Torch-TensorRT </a>and <a href="https://developer.nvidia.com/nvidia-tensorrt-download">TensorRT</a> and unpack them in the deps directory.
+1) Download releases of <a href="https://pytorch.org">LibTorch</a>, <a href="https://github.com/pytorch/TensorRT/releases">Torch-TensorRT </a>and <a href="https://developer.nvidia.com/nvidia-tensorrt-download">TensorRT</a> and unpack them in the deps directory.
 
 ```sh
 cd examples/torch_tensorrtrt_example/deps
-# Download latest Torch-TensorRT release tar file (libtorch_tensorrt.tar.gz) from https://github.com/NVIDIA/Torch-TensorRT/releases
+# Download latest Torch-TensorRT release tar file (libtorch_tensorrt.tar.gz) from https://github.com/pytorch/TensorRT/releases
 tar -xvzf libtorch_tensorrt.tar.gz
 # unzip libtorch downloaded from pytorch.org
 unzip libtorch.zip

examples/int8/qat/README.md

Lines changed: 2 additions & 2 deletions
@@ -33,11 +33,11 @@ This will build a binary named `qat` in `bazel-out/k8-<opt|dbg>/bin/cpp/int8/qat
 
 ## Compilation using Makefile
 
-1) Download releases of <a href="https://pytorch.org">LibTorch</a>, <a href="https://github.com/NVIDIA/Torch-TensorRT/releases">Torch-TensorRT </a>and <a href="https://developer.nvidia.com/nvidia-tensorrt-download">TensorRT</a> and unpack them in the deps directory. Ensure CUDA is installed at `/usr/local/cuda` , if not you need to modify the CUDA include and lib paths in the Makefile.
+1) Download releases of <a href="https://pytorch.org">LibTorch</a>, <a href="https://github.com/pytorch/TensorRT/releases">Torch-TensorRT </a>and <a href="https://developer.nvidia.com/nvidia-tensorrt-download">TensorRT</a> and unpack them in the deps directory. Ensure CUDA is installed at `/usr/local/cuda` , if not you need to modify the CUDA include and lib paths in the Makefile.
 
 ```sh
 cd examples/torch_tensorrt_example/deps
-# Download latest Torch-TensorRT release tar file (libtorch_tensorrt.tar.gz) from https://github.com/NVIDIA/Torch-TensorRT/releases
+# Download latest Torch-TensorRT release tar file (libtorch_tensorrt.tar.gz) from https://github.com/pytorch/TensorRT/releases
 tar -xvzf libtorch_tensorrt.tar.gz
 # unzip libtorch downloaded from pytorch.org
 unzip libtorch.zip

examples/torchtrt_runtime_example/README.md

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ The main goal is to use Torch-TensorRT runtime library `libtorchtrt_runtime.so`,
 
 ```sh
 cd examples/torch_tensorrtrt_example/deps
-// Download latest Torch-TensorRT release tar file (libtorch_tensorrt.tar.gz) from https://github.com/NVIDIA/Torch-TensorRT/releases
+// Download latest Torch-TensorRT release tar file (libtorch_tensorrt.tar.gz) from https://github.com/pytorch/TensorRT/releases
 tar -xvzf libtorch_tensorrt.tar.gz
 unzip libtorch-cxx11-abi-shared-with-deps-[PYTORCH_VERSION].zip
 ```

notebooks/CitriNet-example.ipynb

Lines changed: 1 addition & 1 deletion
@@ -929,7 +929,7 @@
 "In this notebook, we have walked through the complete process of optimizing the Citrinet model with Torch-TensorRT. On an A100 GPU, with Torch-TensorRT, we observe a speedup of ~**2.4X** with FP32, and ~**2.9X** with FP16 at batchsize of 128.\n",
 "\n",
 "### What's next\n",
-"Now it's time to try Torch-TensorRT on your own model. Fill out issues at https://github.com/NVIDIA/Torch-TensorRT. Your involvement will help future development of Torch-TensorRT.\n"
+"Now it's time to try Torch-TensorRT on your own model. Fill out issues at https://github.com/pytorch/TensorRT. Your involvement will help future development of Torch-TensorRT.\n"
 ]
 },
 {

notebooks/EfficientNet-example.ipynb

Lines changed: 1 addition & 1 deletion
@@ -658,7 +658,7 @@
 "In this notebook, we have walked through the complete process of compiling TorchScript models with Torch-TensorRT for EfficientNet-B0 model and test the performance impact of the optimization. With Torch-TensorRT, we observe a speedup of **1.35x** with FP32, and **3.13x** with FP16 on an NVIDIA 3090 GPU. These acceleration numbers will vary from GPU to GPU(as well as implementation to implementation based on the ops used) and we encorage you to try out latest generation of Data center compute cards for maximum acceleration.\n",
 "\n",
 "### What's next\n",
-"Now it's time to try Torch-TensorRT on your own model. If you run into any issues, you can fill them at https://github.com/NVIDIA/Torch-TensorRT. Your involvement will help future development of Torch-TensorRT.\n"
+"Now it's time to try Torch-TensorRT on your own model. If you run into any issues, you can fill them at https://github.com/pytorch/TensorRT. Your involvement will help future development of Torch-TensorRT.\n"
 ]
 },
 {

notebooks/Hugging-Face-BERT.ipynb

Lines changed: 1 addition & 1 deletion
@@ -678,7 +678,7 @@
 "Torch-TensorRT (FP16): 3.15x\n",
 "\n",
 "### What's next\n",
-"Now it's time to try Torch-TensorRT on your own model. If you run into any issues, you can fill them at https://github.com/NVIDIA/Torch-TensorRT. Your involvement will help future development of Torch-TensorRT."
+"Now it's time to try Torch-TensorRT on your own model. If you run into any issues, you can fill them at https://github.com/pytorch/TensorRT. Your involvement will help future development of Torch-TensorRT."
 ]
 },
 {
