Commit c60c521

feat: Makefile for trtorchrt.so example
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
1 parent 8581fd9 commit c60c521

File tree

11 files changed: +99 −61 lines


Diff for: .gitignore (+4 −1)

@@ -40,4 +40,7 @@ py/wheelhouse
 py/.eggs
 notebooks/.ipynb_checkpoints/
 *.cache
-tests/py/data
+tests/py/data
+examples/**/deps/**/*
+!examples/**/deps/.gitkeep
+examples/trtorchrt_example/trtorchrt_example

Diff for: core/plugins/README.md (+2 −2)

@@ -6,7 +6,7 @@ On a high level, TRTorch plugin library interface does the following :

 - Uses TensorRT plugin registry as the main data structure to access all plugins.

-- Automatically registers TensorRT plugins with empty namepsace.
+- Automatically registers TensorRT plugins with empty namespace.

 - Automatically registers TRTorch plugins with `"trtorch"` namespace.

@@ -37,4 +37,4 @@ If you'd like to compile your plugin with TRTorch,

 Once you've completed the above steps, upon successful compilation of TRTorch library, your plugin should be available in `libtrtorch_plugins.so`.

-A sample runtime application on how to run a network with plugins can be found <a href="https://github.com/NVIDIA/TRTorch/tree/master/examples/sample_rt_app">here</a>
+A sample runtime application on how to run a network with plugins can be found <a href="https://github.com/NVIDIA/TRTorch/tree/master/examples/trtorchrt_example">here</a>

Diff for: docsrc/tutorials/runtime.rst (+12 −1)

@@ -22,4 +22,15 @@ link ``libtrtorchrt.so`` in your deployment programs or use ``DL_OPEN`` or ``LD_
 you can load the runtime with ``torch.ops.load_library("libtrtorchrt.so")``. You can then continue to use
 programs just as you would otherwise via PyTorch API.

-.. note:: If you are using the standard distribution of PyTorch in Python on x86, likely you will need the pre-cxx11-abi variant of ``libtrtorchrt.so``, check :ref:`Installation` documentation for more details.
+.. note:: If you are using the standard distribution of PyTorch in Python on x86, you will likely need the pre-cxx11-abi variant of ``libtrtorchrt.so``; check the :ref:`Installation` documentation for more details.
+
+.. note:: If you are linking against ``libtrtorchrt.so``, the flags ``-Wl,--no-as-needed -ltrtorchrt -Wl,--as-needed`` will likely help, as there is no direct symbol dependency on anything in the TRTorch runtime for most TRTorch runtime applications.
+
+An example of how to use ``libtrtorchrt.so`` can be found here: https://github.com/NVIDIA/TRTorch/tree/master/examples/trtorchrt_example
+
+Plugin Library
+---------------
+
+In the case that you use TRTorch as a converter to a TensorRT engine and your engine uses plugins provided by TRTorch, TRTorch
+ships the library ``libtrtorch_plugins.so``, which contains the implementation of the TensorRT plugins used by TRTorch during
+compilation. This library can be loaded via ``DL_OPEN`` or ``LD_PRELOAD`` similar to other TensorRT plugin libraries.
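The plugin-loading paragraph above can be sketched as a deployment command. This is a minimal sketch, assuming an illustrative install path for `libtrtorch_plugins.so`; the actual location depends on where the release was unpacked:

```shell
# Illustrative only: preload the TRTorch plugin library so TensorRT can
# resolve plugin symbols when engines are deserialized.
# /usr/local/lib/libtrtorch_plugins.so is an assumed path.
export LD_PRELOAD="/usr/local/lib/libtrtorch_plugins.so${LD_PRELOAD:+:$LD_PRELOAD}"
echo "$LD_PRELOAD"
# Any deployment binary or Python process started from this shell now maps
# the plugin library before any TensorRT engine is deserialized.
```

Preloading works here because TensorRT plugins typically register themselves via static initializers when the library is mapped, so no symbol from the plugin library needs to be referenced directly by the application.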

Diff for: examples/sample_rt_app/BUILD (−21)

This file was deleted.

Diff for: examples/sample_rt_app/README.md (−36)

This file was deleted.

Diff for: examples/trtorchrt_example/BUILD (+14, new file)

@@ -0,0 +1,14 @@
+package(default_visibility = ["//visibility:public"])
+
+cc_binary(
+    name = "trtorchrt_example",
+    srcs = [
+        "main.cpp"
+    ],
+    deps = [
+        "//core/runtime:runtime",
+        "@libtorch//:libtorch",
+        "@libtorch//:caffe2",
+        "@tensorrt//:nvinfer",
+    ],
+)

Diff for: examples/trtorchrt_example/Makefile (+14, new file)

@@ -0,0 +1,14 @@
+CXX=g++
+DEP_DIR=$(PWD)/deps
+INCLUDE_DIRS=-I$(DEP_DIR)/libtorch/include -I$(DEP_DIR)/trtorch/include
+LIB_DIRS=-L$(DEP_DIR)/trtorch/lib -L$(DEP_DIR)/libtorch/lib # -Wl,-rpath $(DEP_DIR)/tensorrt/lib -Wl,-rpath $(DEP_DIR)/cudnn/lib64
+LIBS=-Wl,--no-as-needed -ltrtorchrt -Wl,--as-needed -ltorch -ltorch_cuda -ltorch_cpu -ltorch_global_deps -lbackend_with_compiler -lc10 -lc10_cuda
+SRCS=main.cpp
+
+TARGET=trtorchrt_example
+
+$(TARGET):
+	$(CXX) $(SRCS) $(INCLUDE_DIRS) $(LIB_DIRS) $(LIBS) -o $(TARGET)
+
+clean:
+	$(RM) $(TARGET)
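To preview the link line this Makefile would issue, without unpacking any dependencies first, its variable expansion can be mirrored in plain shell. This is a dry-run sketch, not part of the commit; it only prints the command:

```shell
# Dry-run sketch: expand the Makefile's variables the same way make would,
# then print the resulting g++ command instead of executing it.
DEP_DIR="$PWD/deps"
INCLUDE_DIRS="-I$DEP_DIR/libtorch/include -I$DEP_DIR/trtorch/include"
LIB_DIRS="-L$DEP_DIR/trtorch/lib -L$DEP_DIR/libtorch/lib"
LIBS="-Wl,--no-as-needed -ltrtorchrt -Wl,--as-needed -ltorch -ltorch_cuda -ltorch_cpu -ltorch_global_deps -lbackend_with_compiler -lc10 -lc10_cuda"
echo "g++ main.cpp $INCLUDE_DIRS $LIB_DIRS $LIBS -o trtorchrt_example"
```

The `-Wl,--no-as-needed -ltrtorchrt -Wl,--as-needed` pair forces the linker to record a dependency on `libtrtorchrt.so` even though the program references none of its symbols directly; the runtime typically registers its TorchScript ops via static initializers when the library is loaded.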

Diff for: examples/trtorchrt_example/README.md (+53, new file)

# trtorchrt_example

## Sample application which uses TRTorch runtime library and plugin library

This sample is a demonstration of how to use the TRTorch runtime library `libtrtorchrt.so` along with the plugin library `libtrtorch_plugins.so`.

In this demo, we convert two models, `ConvGelu` and `Norm`, to TensorRT using the TRTorch Python API and perform inference using `trtorchrt_example`. In these models, the `Gelu` and `Norm` layers are expressed as plugins in the network.

### Generating TorchScript modules with TRT Engines

The following command will generate `conv_gelu.jit` and `norm.jit` TorchScript modules which contain TensorRT engines:

```sh
python network.py
```

### `trtorchrt_example`

The main goal is to use the TRTorch runtime library `libtrtorchrt.so`, a lightweight library sufficient to deploy your TorchScript programs containing TRT engines.

1) Download releases of LibTorch and TRTorch from https://pytorch.org and the TRTorch GitHub repo and unpack both in the `deps` directory.

```sh
cd examples/trtorchrt_example/deps
# Download the latest TRTorch release tar file (libtrtorch.tar.gz) from https://github.com/NVIDIA/TRTorch/releases
tar -xvzf libtrtorch.tar.gz
unzip libtorch-cxx11-abi-shared-with-deps-1.9.0+cu111.zip
```

> If cuDNN and TensorRT are not installed on your system / in your `LD_LIBRARY_PATH`, then do the following as well:

```sh
cd deps
mkdir cudnn && tar -xvzf <cuDNN TARBALL> --directory cudnn --strip-components=1
mkdir tensorrt && tar -xvzf <TensorRT TARBALL> --directory tensorrt --strip-components=1
cd ..
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(pwd)/deps/trtorch/lib:$(pwd)/deps/libtorch/lib:$(pwd)/deps/tensorrt/lib:$(pwd)/deps/cudnn/lib64:/usr/local/cuda/lib
```

This gives maximum compatibility with system configurations for running this example, but in general you are better off adding `-Wl,-rpath $(DEP_DIR)/tensorrt/lib -Wl,-rpath $(DEP_DIR)/cudnn/lib64` to your linking command for actual applications.

2) Build and run `trtorchrt_example`

`trtorchrt_example` is a binary which loads the TorchScript modules `conv_gelu.jit` or `norm.jit` and runs the TRT engines on a random input using TRTorch runtime components. Check out `main.cpp` and the `Makefile` for the necessary code and compilation dependencies.

To build and run the app:

```sh
cd examples/trtorchrt_example
make
# If paths are different than the ones below, change as necessary
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(pwd)/deps/trtorch/lib:$(pwd)/deps/libtorch/lib:$(pwd)/deps/tensorrt/lib:$(pwd)/deps/cudnn/lib64:/usr/local/cuda/lib
./trtorchrt_example $PWD/norm.jit
```
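Since the sample's stated goal is to exercise both the runtime and plugin libraries, the run command can also preload `libtrtorch_plugins.so`. The following is a sketch that only composes and prints the command; `deps/trtorch/lib/libtrtorch_plugins.so` is an assumed path inside the unpacked release, not taken from the README:

```shell
# Sketch: compose a run command that preloads the TRTorch plugin library
# so the Gelu/Norm plugin symbols resolve during engine deserialization.
# The plugin library path is an assumption.
PLUGIN_LIB="$PWD/deps/trtorch/lib/libtrtorch_plugins.so"
RUN_CMD="LD_PRELOAD=$PLUGIN_LIB ./trtorchrt_example $PWD/norm.jit"
echo "$RUN_CMD"
```

Running the echoed command directly (rather than printing it) is equivalent; printing first makes it easy to verify the paths before launching.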

Diff for: examples/trtorchrt_example/deps/.gitkeep

Whitespace-only changes.
File renamed without changes.
File renamed without changes.
