Commit e5f96d9

feat: Update documentation with new library name Torch-TensorRT
Signed-off-by: Dheeraj Peri <[email protected]>
1 parent 483ef59 commit e5f96d9

32 files changed: +716 -709 lines
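This commit is essentially a mechanical rename: "TRTorch" becomes "Torch-TensorRT" in prose, `trtorch` becomes `torchtrt` in paths, and `TRTORCH_API` becomes `TORCHTRT_API`, across 32 documentation files. A minimal sketch of how such a sweep could be scripted (the helper names and file-extension filter are illustrative, not from the commit):

```python
from pathlib import Path

# Old-name -> new-name pairs, ordered so the more specific tokens win first.
RENAMES = [
    ("TRTORCH_API", "TORCHTRT_API"),
    ("TRTorch", "Torch-TensorRT"),
    ("trtorch", "torchtrt"),
]

def rename_text(text: str) -> str:
    """Apply every rename pair to a single document's text."""
    for old, new in RENAMES:
        text = text.replace(old, new)
    return text

def sweep(root: Path, exts=(".rst", ".md", ".py")) -> int:
    """Rewrite matching files under root in place; return how many changed."""
    touched = 0
    for path in root.rglob("*"):
        if path.suffix not in exts or not path.is_file():
            continue
        original = path.read_text(encoding="utf-8")
        updated = rename_text(original)
        if updated != original:
            path.write_text(updated, encoding="utf-8")
            touched += 1
    return touched
```

The ordering matters: replacing `TRTORCH_API` before the case-sensitive `TRTorch` and `trtorch` passes keeps the API macro from being half-renamed.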

Diff for: docsrc/Makefile (+5 -5)

@@ -21,14 +21,14 @@ check_clean:
 clean: check_clean
 	rm -rf $(BUILDDIR)/*
 ifndef VERSION
-	rm -rf /tmp/trtorch_docs
-	mkdir -p /tmp/trtorch_docs
-	mv $(DESTDIR)/v* /tmp/trtorch_docs
+	rm -rf /tmp/torchtrt_docs
+	mkdir -p /tmp/torchtrt_docs
+	mv $(DESTDIR)/v* /tmp/torchtrt_docs
 endif
 	rm -r $(DESTDIR)/*
 ifndef VERSION
-	mv /tmp/trtorch_docs/v* $(DESTDIR)
-	rm -rf /tmp/trtorch_docs
+	mv /tmp/torchtrt_docs/v* $(DESTDIR)
+	rm -rf /tmp/torchtrt_docs
 endif
 	rm -rf $(SOURCEDIR)/_cpp_api
 	rm -rf $(SOURCEDIR)/_notebooks
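The Makefile hunk above implements a stash/wipe/restore dance: previously published versioned doc trees (`v*`) are moved to a temp directory before the destination is wiped, then moved back, with the temp path renamed from `/tmp/trtorch_docs` to `/tmp/torchtrt_docs`. A sketch of the same logic in Python (the function name is illustrative; only the directory behavior mirrors the Makefile):

```python
import shutil
import tempfile
from pathlib import Path

def clean_docs(destdir: Path) -> None:
    """Wipe destdir but preserve versioned doc trees (v*), as the Makefile does."""
    stash = Path(tempfile.mkdtemp(prefix="torchtrt_docs_"))
    # Stash every versioned tree (v0.4.1/, v0.4.0/, ...).
    for versioned in destdir.glob("v*"):
        shutil.move(str(versioned), str(stash / versioned.name))
    # Wipe everything else (the docs generated for master).
    for entry in destdir.iterdir():
        shutil.rmtree(entry) if entry.is_dir() else entry.unlink()
    # Restore the versioned trees.
    for versioned in stash.iterdir():
        shutil.move(str(versioned), str(destdir / versioned.name))
    shutil.rmtree(stash)
```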

Diff for: docsrc/RELEASE_CHECKLIST.md (+9 -9)

@@ -1,14 +1,14 @@
 # Release Process

-Here is the process we use for creating new releases of TRTorch
+Here is the process we use for creating new releases of Torch-TensorRT

 ## Criteria for Release

-While TRTorch is in alpha, patch versions are bumped sequentially on breaking changes in the compiler.
+While Torch-TensorRT is in alpha, patch versions are bumped sequentially on breaking changes in the compiler.

-In beta TRTorch will get a minor version bump on breaking changes, or upgrade to the next version of PyTorch, patch version will be incremented based on significant bug fixes, or siginficant new functionality in the compiler.
+In beta Torch-TensorRT will get a minor version bump on breaking changes, or upgrade to the next version of PyTorch, patch version will be incremented based on significant bug fixes, or siginficant new functionality in the compiler.

-Once TRTorch hits version 1.0.0, major versions are bumped on breaking API changes, breaking changes or significant new functionality in the compiler
+Once Torch-TensorRT hits version 1.0.0, major versions are bumped on breaking API changes, breaking changes or significant new functionality in the compiler
 will result in a minor version bump and sigificant bug fixes will result in a patch version change.

 ## Steps to Packaging a Release

@@ -20,7 +20,7 @@ will result in a minor version bump and sigificant bug fixes will result in a pa
     - Required, Python API and Optional Tests should pass on both x86_64 and aarch64
     - All checked in applications (cpp and python) should compile and work
 3. Generate new index of converters and evalutators
-    - `bazel run //tools/supportedops -- <PATH TO TRTORCH>/docsrc/indices/supported_ops.rst`
+    - `bazel run //tools/supportedops -- <PATH TO Torch-TensorRT>/docsrc/indices/supported_ops.rst`
 4. Version bump PR
     - There should be a PR which will be the PR that bumps the actual version of the library, this PR should contain the following
     - Bump version in `py/setup.py`

@@ -49,7 +49,7 @@ will result in a minor version bump and sigificant bug fixes will result in a pa
         - `[3, 224, 224]`
         - `[3, 1920, 1080]` (P2)
     - Batch Sizes: 1, 4, 8, 16, 32
-    - Frameworks: PyTorch, TRTorch, ONNX + TRT
+    - Frameworks: PyTorch, Torch-TensorRT, ONNX + TRT
     - If any models do not convert to ONNX / TRT, that is fine. Mark them as failling / no result
     - Devices:
         - A100 (P0)

@@ -61,11 +61,11 @@ will result in a minor version bump and sigificant bug fixes will result in a pa

 6. Once PR is merged tag commit and start creating release on GitHub
     - Paste in Milestone information and Changelog information into release notes
-    - Generate libtrtorch.tar.gz for the following platforms:
+    - Generate libtorchtrt.tar.gz for the following platforms:
         - x86_64 cxx11-abi
        - x86_64 pre-cxx11-abi
        - TODO: Add cxx11-abi build for aarch64 when a manylinux container for aarch64 exists
    - Generate Python packages for Python 3.6/3.7/3.8/3.9 for x86_64
        - TODO: Build a manylinux container for aarch64
-       - `docker run -it -v$(pwd)/..:/workspace/TRTorch build_trtorch_wheel /bin/bash /workspace/TRTorch/py/build_whl.sh` generates all wheels
-       - To build container `docker build -t build_trtorch_wheel .`
+       - `docker run -it -v$(pwd)/..:/workspace/Torch-TensorRT build_torch_tensorrt_wheel /bin/bash /workspace/Torch-TensorRT/py/build_whl.sh` generates all wheels
+       - To build container `docker build -t build_torch_tensorrt_wheel .`
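The "Criteria for Release" section of this checklist describes phase-dependent version bumping: alpha bumps patch on any breaking change; beta bumps minor on breaking changes and patch on significant fixes; after 1.0.0, breaking API changes bump major, significant new functionality bumps minor, and significant fixes bump patch. A hedged sketch of those rules as a helper (the function and its argument vocabulary are illustrative, not part of the release tooling):

```python
def bump(version: str, change: str, phase: str = "beta") -> str:
    """Apply the checklist's bump rules to a 'major.minor.patch' string.

    change is one of: "breaking", "feature", "fix".
    phase is one of: "alpha", "beta", "stable" (>= 1.0.0).
    """
    major, minor, patch = (int(p) for p in version.split("."))
    if phase == "alpha":
        # Alpha: patch versions bumped sequentially on breaking changes.
        return f"{major}.{minor}.{patch + 1}"
    if phase == "beta":
        # Beta: minor bump on breaking changes, patch otherwise.
        if change == "breaking":
            return f"{major}.{minor + 1}.0"
        return f"{major}.{minor}.{patch + 1}"
    # Stable (>= 1.0.0).
    if change == "breaking":
        return f"{major + 1}.0.0"
    if change == "feature":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"
```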

Diff for: docsrc/conf.py (+20 -20)

@@ -10,15 +10,15 @@
 # add these directories to sys.path here. If the directory is relative to the
 # documentation root, use os.path.abspath to make it absolute, like shown here.
 #
-import os
+import os
 import sys

 sys.path.append(os.path.join(os.path.dirname(__name__), '../py'))

 import sphinx_material
 # -- Project information -----------------------------------------------------

-project = 'TRTorch'
+project = 'Torch-TensorRT'
 copyright = '2021, NVIDIA Corporation'
 author = 'NVIDIA Corporation'

@@ -63,15 +63,15 @@
 html_static_path = ['_static']

 # Setup the breathe extension
-breathe_projects = {"TRTorch": "./_tmp/xml"}
-breathe_default_project = "TRTorch"
+breathe_projects = {"Torch-TensorRT": "./_tmp/xml"}
+breathe_default_project = "Torch-TensorRT"

 # Setup the exhale extension
 exhale_args = {
     # These arguments are required
     "containmentFolder": "./_cpp_api",
-    "rootFileName": "trtorch_cpp.rst",
-    "rootFileTitle": "TRTorch C++ API",
+    "rootFileName": "torch_tensort_cpp.rst",
+    "rootFileTitle": "Torch-TensorRT C++ API",
     "doxygenStripFromPath": "..",
     # Suggested optional arguments
     "createTreeView": True,

@@ -92,10 +92,10 @@
 # Material theme options (see theme.conf for more information)
 html_theme_options = {
     # Set the name of the project to appear in the navigation.
-    'nav_title': 'TRTorch',
+    'nav_title': 'Torch-TensorRT',
     # Specify a base_url used to generate sitemap.xml. If not
     # specified, then no sitemap will be built.
-    'base_url': 'https://nvidia.github.io/TRTorch/',
+    'base_url': 'https://nvidia.github.io/Torch-TensorRT/',

     # Set the color and the accent color
     'theme_color': '84bd00',

@@ -107,8 +107,8 @@
     "logo_icon": "&#xe86f",

     # Set the repo location to get a badge with stats
-    'repo_url': 'https://github.com/nvidia/TRTorch/',
-    'repo_name': 'TRTorch',
+    'repo_url': 'https://github.com/nvidia/Torch-TensorRT/',
+    'repo_name': 'Torch-TensorRT',

     # Visible levels of the global TOC; -1 means unlimited
     'globaltoc_depth': 1,

@@ -118,21 +118,21 @@
     'globaltoc_includehidden': True,
     'master_doc': True,
     "version_info": {
-        "master": "https://nvidia.github.io/TRTorch/",
-        "v0.4.1": "https://nvidia.github.io/TRTorch/v0.4.1/",
-        "v0.4.0": "https://nvidia.github.io/TRTorch/v0.4.0/",
-        "v0.3.0": "https://nvidia.github.io/TRTorch/v0.3.0/",
-        "v0.2.0": "https://nvidia.github.io/TRTorch/v0.2.0/",
-        "v0.1.0": "https://nvidia.github.io/TRTorch/v0.1.0/",
-        "v0.0.3": "https://nvidia.github.io/TRTorch/v0.0.3/",
-        "v0.0.2": "https://nvidia.github.io/TRTorch/v0.0.2/",
-        "v0.0.1": "https://nvidia.github.io/TRTorch/v0.0.1/",
+        "master": "https://nvidia.github.io/Torch-TensorRT/",
+        "v0.4.1": "https://nvidia.github.io/Torch-TensorRT/v0.4.1/",
+        "v0.4.0": "https://nvidia.github.io/Torch-TensorRT/v0.4.0/",
+        "v0.3.0": "https://nvidia.github.io/Torch-TensorRT/v0.3.0/",
+        "v0.2.0": "https://nvidia.github.io/Torch-TensorRT/v0.2.0/",
+        "v0.1.0": "https://nvidia.github.io/Torch-TensorRT/v0.1.0/",
+        "v0.0.3": "https://nvidia.github.io/Torch-TensorRT/v0.0.3/",
+        "v0.0.2": "https://nvidia.github.io/Torch-TensorRT/v0.0.2/",
+        "v0.0.1": "https://nvidia.github.io/Torch-TensorRT/v0.0.1/",
     }
 }

 # Tell sphinx what the primary language being documented is.
 primary_domain = 'cpp'
-cpp_id_attributes = ["TRTORCH_API"]
+cpp_id_attributes = ["TORCHTRT_API"]

 # Tell sphinx what the pygments highlight language should be.
 highlight_language = 'cpp'
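Several renamed `conf.py` settings must stay mutually consistent for the published site to work: `base_url` feeds the sitemap, and every `version_info` entry should live under the same project path. A small consistency check in that spirit (values are copied from the diff above; the check itself is illustrative, not part of the repo):

```python
# Values as they appear after this commit (copied from the diff above).
project = 'Torch-TensorRT'
base_url = 'https://nvidia.github.io/Torch-TensorRT/'
version_info = {
    "master": "https://nvidia.github.io/Torch-TensorRT/",
    "v0.4.1": "https://nvidia.github.io/Torch-TensorRT/v0.4.1/",
    "v0.4.0": "https://nvidia.github.io/Torch-TensorRT/v0.4.0/",
}

def stale_references(strings, old_name="TRTorch"):
    """Return any setting string that still mentions the old project name."""
    return [s for s in strings if old_name in s]

# A clean rename leaves no setting pointing at the old name or old URL prefix.
leftovers = stale_references([project, base_url, *version_info.values()])
```

Note that `"rootFileName": "torch_tensort_cpp.rst"` in the exhale hunk spells the name `tensort`; that is copied verbatim from the commit.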

Diff for: docsrc/contributors/conversion.rst (+2 -2)

@@ -32,7 +32,7 @@ inputs and assemble an array of resources to pass to the converter. Inputs can b
   static value has been evaluated

 * The input is from a node that has not been converted
-  * TRTorch will error out here
+  * Torch-TensorRT will error out here

 Node Evaluation
 -----------------

@@ -49,4 +49,4 @@ Node converters map JIT nodes to layers or subgraphs of layers. They then associ
 and the TRT graph together in the conversion context. This allows the conversion stage to assemble the inputs
 for the next node. There are some cases where a node produces an output that is not a Tensor but a static result
 from a calculation done on inputs which need to be converted first. In this case the converter may associate the outputs in
-the ``evaluated_value_map`` instead of the ``value_tensor_map``. For more information take a look at: :ref:`writing_converters`
\ No newline at end of file
+the ``evaluated_value_map`` instead of the ``value_tensor_map``. For more information take a look at: :ref:`writing_converters`
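The conversion docs in this diff describe two lookup tables kept in the conversion context: ``value_tensor_map`` for JIT values that become TensorRT tensors, and ``evaluated_value_map`` for statically evaluated results. A toy sketch of that bookkeeping (the class and method names are invented for illustration, not Torch-TensorRT's actual API):

```python
class ConversionCtx:
    """Toy conversion context with the two maps the docs describe."""

    def __init__(self):
        self.value_tensor_map = {}     # JIT value -> TRT tensor placeholder
        self.evaluated_value_map = {}  # JIT value -> static evaluated result

    def record_tensor(self, value, trt_tensor):
        self.value_tensor_map[value] = trt_tensor

    def record_evaluated(self, value, result):
        self.evaluated_value_map[value] = result

    def resolve(self, value):
        """Converters consult both maps when assembling a node's inputs."""
        if value in self.value_tensor_map:
            return self.value_tensor_map[value]
        if value in self.evaluated_value_map:
            return self.evaluated_value_map[value]
        # Mirrors "the input is from a node that has not been converted".
        raise KeyError(f"{value} has not been converted or evaluated")

ctx = ConversionCtx()
ctx.record_tensor("%x", "trt_tensor_0")
ctx.record_evaluated("%size", 224)
```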

Diff for: docsrc/contributors/lowering.rst (+10 -10)

@@ -33,7 +33,7 @@ Dead code elimination will check if a node has side effects and not delete it if
 Eliminate Exeception Or Pass Pattern
 ***************************************

-`trtorch/core/lowering/passes/exception_elimination.cpp <https://github.com/nvidia/trtorch/blob/master/core/lowering/passes/exception_elimination.cpp>`_
+`Torch-TensorRT/core/lowering/passes/exception_elimination.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/exception_elimination.cpp>`_

 A common pattern in scripted modules are dimension gaurds which will throw execptions if
 the input dimension is not what was expected.

@@ -68,7 +68,7 @@ Freeze attributes and inline constants and modules. Propogates constants in the
 Fuse AddMM Branches
 ***************************************

-`trtorch/core/lowering/passes/fuse_addmm_branches.cpp <https://github.com/nvidia/trtorch/blob/master/core/lowering/passes/fuse_addmm_branches.cpp>`_
+`Torch-TensorRT/core/lowering/passes/fuse_addmm_branches.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/fuse_addmm_branches.cpp>`_

 A common pattern in scripted modules is tensors of different dimensions use different constructions for implementing linear layers. We fuse these
 different varients into a single one that will get caught by the Unpack AddMM pass.

@@ -101,7 +101,7 @@ This pass fuse the addmm or matmul + add generated by JIT back to linear
 Fuse Flatten Linear
 ***************************************

-`trtorch/core/lowering/passes/fuse_flatten_linear.cpp <https://github.com/nvidia/trtorch/blob/master/core/lowering/passes/fuse_flatten_linear.cpp>`_
+`Torch-TensorRT/core/lowering/passes/fuse_flatten_linear.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/fuse_flatten_linear.cpp>`_

 TensorRT implicity flattens input layers into fully connected layers when they are higher than 1D. So when there is a
 ``aten::flatten`` -> ``aten::linear`` pattern we remove the ``aten::flatten``.

@@ -134,7 +134,7 @@ Removes _all_ tuples and raises an error if some cannot be removed, this is used
 Module Fallback
 *****************

-`trtorch/core/lowering/passes/module_fallback.cpp <https://github.com/nvidia/trtorch/blob/master/core/lowering/passes/module_fallback.cpp>`
+`Torch-TensorRT/core/lowering/passes/module_fallback.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/module_fallback.cpp>`

 Module fallback consists of two lowering passes that must be run as a pair. The first pass is run before freezing to place delimiters in the graph around modules
 that should run in PyTorch. The second pass marks nodes between these delimiters after freezing to signify they should run in PyTorch.

@@ -162,30 +162,30 @@ Right now, it does:
 Remove Contiguous
 ***************************************

-`trtorch/core/lowering/passes/remove_contiguous.cpp <https://github.com/nvidia/trtorch/blob/master/core/lowering/passes/remove_contiguous.cpp>`_
+`Torch-TensorRT/core/lowering/passes/remove_contiguous.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/remove_contiguous.cpp>`_

 Removes contiguous operators since we are doing TensorRT memory is already contiguous.


 Remove Dropout
 ***************************************

-`trtorch/core/lowering/passes/remove_dropout.cpp <https://github.com/nvidia/trtorch/blob/master/core/lowering/passes/remove_dropout.cpp>`_
+`Torch-TensorRT/core/lowering/passes/remove_dropout.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/remove_dropout.cpp>`_

 Removes dropout operators since we are doing inference.

 Remove To
 ***************************************

-`trtorch/core/lowering/passes/remove_to.cpp <https://github.com/nvidia/trtorch/blob/master/core/lowering/passes/remove_to.cpp>`_
+`Torch-TensorRT/core/lowering/passes/remove_to.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/remove_to.cpp>`_

 Removes ``aten::to`` operators that do casting, since TensorRT mangages it itself. It is important that this is one of the last passes run so that
 other passes have a change to move required cast operators out of the main namespace.

 Unpack AddMM
 ***************************************

-`trtorch/core/lowering/passes/unpack_addmm.cpp <https://github.com/nvidia/trtorch/blob/master/core/lowering/passes/unpack_addmm.cpp>`_
+`Torch-TensorRT/core/lowering/passes/unpack_addmm.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/unpack_addmm.cpp>`_

 Unpacks ``aten::addmm`` into ``aten::matmul`` and ``aten::add_`` (with an additional ``trt::const``
 op to freeze the bias in the TensorRT graph). This lets us reuse the ``aten::matmul`` and ``aten::add_``

@@ -194,7 +194,7 @@ converters instead of needing a dedicated converter.
 Unpack LogSoftmax
 ***************************************

-`trtorch/core/lowering/passes/unpack_log_softmax.cpp <https://github.com/nvidia/trtorch/blob/master/core/lowering/passes/unpack_log_softmax.cpp>`_
+`Torch-TensorRT/core/lowering/passes/unpack_log_softmax.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/unpack_log_softmax.cpp>`_

 Unpacks ``aten::logsoftmax`` into ``aten::softmax`` and ``aten::log``. This lets us reuse the
 ``aten::softmax`` and ``aten::log`` converters instead of needing a dedicated converter.

@@ -204,4 +204,4 @@ Unroll Loops

 `torch/csrc/jit/passes/loop_unrolling.h <https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/passes/loop_unrolling.h>`_

-Unrolls the operations of compatable loops (e.g. sufficently short) so that you only have to go through the loop once.
\ No newline at end of file
+Unrolls the operations of compatable loops (e.g. sufficently short) so that you only have to go through the loop once.
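The Unpack AddMM pass described in this file rewrites one op into two so existing converters can be reused. On a toy list-of-nodes IR, such a pass could look like the following (the tuple-based IR, with nodes shaped `(op, output, *inputs)`, is invented for illustration; the real pass rewrites TorchScript graph IR in C++):

```python
def unpack_addmm(graph):
    """Replace each aten::addmm node with aten::matmul followed by
    aten::add_, mirroring the Unpack AddMM lowering pass."""
    lowered = []
    tmp_count = 0
    for node in graph:
        if node[0] != "aten::addmm":
            lowered.append(node)
            continue
        # addmm(bias, mat1, mat2) == add_(matmul(mat1, mat2), bias)
        _, out, bias, mat1, mat2 = node
        tmp = f"%mm{tmp_count}"
        tmp_count += 1
        lowered.append(("aten::matmul", tmp, mat1, mat2))
        lowered.append(("aten::add_", out, tmp, bias))
    return lowered

graph = [("aten::addmm", "%out", "%bias", "%x", "%w")]
lowered = unpack_addmm(graph)
```

The payoff is exactly what the docs state: only `aten::matmul` and `aten::add_` converters are needed, with no dedicated `aten::addmm` converter.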

Diff for: docsrc/contributors/phases.rst (+2 -2)

@@ -15,7 +15,7 @@ Lowering
 ^^^^^^^^^^^
 :ref:`lowering`

-The lowering is made up of a set of passes (some from PyTorch and some specific to TRTorch)
+The lowering is made up of a set of passes (some from PyTorch and some specific to Torch-TensorRT)
 run over the graph IR to map the large PyTorch opset to a reduced opset that is easier to convert to
 TensorRT.

@@ -43,4 +43,4 @@ Compilation and Runtime
 The final compilation phase constructs a TorchScript program to run the converted TensorRT engine. It
 takes a serialized engine and instantiates it within a engine manager, then the compiler will
 build out a JIT graph that references this engine and wraps it in a module to return to the user.
-When the user executes the module, the JIT program run in the JIT runtime extended by TRTorch with the data providied from the user.
+When the user executes the module, the JIT program run in the JIT runtime extended by Torch-TensorRT with the data providied from the user.

Diff for: docsrc/contributors/runtime.rst (+4 -4)

@@ -21,7 +21,7 @@ torch::jit::Value type).
 TensorRT Engine Executor Op
 ----------------------------

-When the TRTorch is loaded, it registers an operator in the PyTorch JIT operator library called
+When the Torch-TensorRT is loaded, it registers an operator in the PyTorch JIT operator library called
 ``trt::execute_engine(Tensor[] inputs, __torch__.torch.classes.tensorrt.Engine engine) -> Tensor[]`` which takes an
 instantiated engine and list of inputs. Compiled graphs store this engine in an attribute so that it is portable and serializable.
 When the op is called, an instnantiated engine and input tensors are popped off the runtime stack. These inputs are passed into a generic engine execution function which

@@ -72,8 +72,8 @@ execution.
 ABI Versioning and Serialization Format
 =========================================

-TRTorch programs are standard TorchScript with TensorRT engines as objects embedded in the graph. Therefore there is a serialization format
-for the TensorRT engines. The format for TRTorch serialized programs are versioned with an "ABI" version which tells the runtime about runtime compatibility.
+Torch-TensorRT programs are standard TorchScript with TensorRT engines as objects embedded in the graph. Therefore there is a serialization format
+for the TensorRT engines. The format for Torch-TensorRT serialized programs are versioned with an "ABI" version which tells the runtime about runtime compatibility.

 > Current ABI version is 3

@@ -82,4 +82,4 @@ The format is a vector of serialized strings. They encode the following informat
 * ABI Version for the program
 * Name of the TRT engine
 * Device information: Includes the target device the engine was built on, SM capability and other device information. This information is used at deserialization time to select the correct device to run the engine
-* Serialized TensorRT engine
\ No newline at end of file
+* Serialized TensorRT engine
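The runtime docs in this diff describe the serialized program as a vector of strings carrying the ABI version, the engine name, device information, and the serialized engine, with ABI compatibility checked at load time. A minimal sketch of that framing (field order follows the bullet list above; the function names, the hex encoding, and the error handling are illustrative, not Torch-TensorRT's actual implementation):

```python
CURRENT_ABI_VERSION = "3"  # the docs state the current ABI version is 3

def pack_engine(name: str, device_info: str, engine_bytes: bytes) -> list:
    """Encode the four fields the docs list, as a vector of strings."""
    return [
        CURRENT_ABI_VERSION,
        name,
        device_info,          # e.g. target device / SM capability summary
        engine_bytes.hex(),   # stand-in for the raw serialized TRT engine
    ]

def unpack_engine(record: list):
    """Check ABI compatibility, then recover the fields."""
    abi, name, device_info, engine_hex = record
    if abi != CURRENT_ABI_VERSION:
        raise RuntimeError(f"incompatible ABI version {abi}")
    return name, device_info, bytes.fromhex(engine_hex)

record = pack_engine("resnet50_engine", "sm_80", b"\x00\x01")
name, device, engine = unpack_engine(record)
```

Versioning the record lets a newer runtime refuse (or migrate) programs serialized under an older, incompatible layout instead of misreading them.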

Diff for: docsrc/contributors/system_overview.rst (+3 -3)

@@ -3,7 +3,7 @@
 System Overview
 ================

-TRTorch is primarily a C++ Library with a Python API planned. We use Bazel as our build system and target Linux x86_64 and
+Torch-TensorRT is primarily a C++ Library with a Python API planned. We use Bazel as our build system and target Linux x86_64 and
 Linux aarch64 (only natively) right now. The compiler we use is GCC 7.5.0 and the library is untested with compilers before that
 version so there may be compilation errors if you try to use an older compiler.

@@ -13,7 +13,7 @@ The repository is structured into:
 * cpp: C++ API
 * tests: tests of the C++ API, the core and converters
 * py: Python API
-* notebooks: Example applications built with TRTorch
+* notebooks: Example applications built with Torch-TensorRT
 * docs: Documentation
 * docsrc: Documentation Source
 * third_party: BUILD files for dependency libraries

@@ -26,4 +26,4 @@ The core has a couple major parts: The top level compiler interface which coordi
 converting and generating a new module and returning it back to the user. The there are the three main phases of the
 compiler, the lowering phase, the conversion phase, and the execution phase.

-.. include:: phases.rst
\ No newline at end of file
+.. include:: phases.rst

Diff for: docsrc/contributors/useful_links.rst (+1 -2)

@@ -1,6 +1,6 @@
 .. _useful_links:

-Useful Links for TRTorch Development
+Useful Links for Torch-TensorRT Development
 =====================================

 TensorRT Available Layers and Expected Dimensions

@@ -32,4 +32,3 @@ PyTorch IR Documentation
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

 * https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/OVERVIEW.md
-