README.md (+6 −6)
@@ -15,7 +15,7 @@ Torch-TensorRT
 ---
 <div align="left">

-Torch-TensorRT brings the power of TensorRT to PyTorch. Accelerate inference latency by up to 5x compared to eager execution in just one line of code.
+Torch-TensorRT brings the power of TensorRT to PyTorch. Accelerate inference latency by up to 5x compared to eager execution in just one line of code.
 </div></div>

 ## Installation
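For context on the blurb in the hunk above: the "one line of code" refers to the `torch_tensorrt.compile` entry point. A minimal hedged sketch follows; the wrapper name, the example model, and the input shape are illustrative assumptions, not part of this diff, and actually running it requires a CUDA device with `torch_tensorrt` installed.

```python
# Hedged sketch of accelerating an eager PyTorch model with Torch-TensorRT.
# The helper name and usage below are assumptions for illustration; only
# the torch_tensorrt.compile call itself is the documented entry point.
def accelerate(model, example_inputs):
    """Return a TensorRT-accelerated version of `model` in one call."""
    import torch_tensorrt  # deferred import: needs a CUDA-capable install
    return torch_tensorrt.compile(model, inputs=example_inputs)

# Usage (on a CUDA machine; MyModel is a placeholder for your network):
#   model = MyModel().eval().cuda()
#   x = torch.randn(1, 3, 224, 224).cuda()
#   optimized_model = accelerate(model, [x])
#   optimized_model(x)  # subsequent calls dispatch to TensorRT engines
```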
@@ -52,7 +52,7 @@ optimized_model(x) # this will be fast!
 ```

 ### Option 2: Export
-If you want to optimize your model ahead-of-time and/or deploy in a C++ environment, Torch-TensorRT provides an export-style workflow that serializes an optimized module. This module can be deployed in PyTorch or with libtorch (i.e. without a Python dependency).
+If you want to optimize your model ahead-of-time and/or deploy in a C++ environment, Torch-TensorRT provides an export-style workflow that serializes an optimized module. This module can be deployed in PyTorch or with libtorch (i.e. without a Python dependency).

 #### Step 1: Optimize + serialize
 ```python
@@ -62,7 +62,7 @@ import torch_tensorrt
 model = MyModel().eval().cuda() # define your model here
 inputs = [torch.randn((1, 3, 224, 224)).cuda()] # define a list of representative inputs here
 torch_tensorrt.save(trt_gm, "trt.ep", inputs=inputs) # PyTorch only supports Python runtime for an ExportedProgram. For C++ deployment, use a TorchScript file
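The `torch_tensorrt.save` line above is easier to read next to the compile step that produces `trt_gm`, which this hunk does not show. A hedged sketch of the full "optimize + serialize" step follows; the helper name and the `ir="dynamo"` choice are assumptions filled in for illustration, and execution requires a CUDA device with `torch_tensorrt` installed.

```python
# Hedged sketch of Step 1 (optimize + serialize). The diff only quotes the
# save() call; the compile step producing `trt_gm` is an assumption here.
def optimize_and_serialize(model, inputs, path="trt.ep"):
    """Compile `model` with Torch-TensorRT and serialize it to `path`."""
    import torch_tensorrt  # deferred import: needs a CUDA-capable install
    trt_gm = torch_tensorrt.compile(model, ir="dynamo", inputs=inputs)
    # "trt.ep" is an ExportedProgram, loadable only from Python; per the
    # README text, use a TorchScript file instead for C++/libtorch deployment.
    torch_tensorrt.save(trt_gm, path, inputs=inputs)
    return trt_gm
```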
@@ -116,9 +116,9 @@ auto results = trt_mod.forward({input_tensor});

 These are the following dependencies used to verify the testcases. Torch-TensorRT can work with other versions, but the tests are not guaranteed to pass.

-- Bazel 5.2.0
-- Libtorch 2.4.0.dev (latest nightly) (built with CUDA 12.4)
-- CUDA 12.1
+- Bazel 6.3.2
+- Libtorch 2.5.0.dev (latest nightly) (built with CUDA 12.4)