Commit 25f4371

feat!: Lock bazel version

BREAKING CHANGE: The Bazel version is now locked to Bazel 3.2.0 and will be bumped manually from now on. Builds will fail on all other versions, since Bazel now checks the version before it compiles. Documentation on how to install Bazel has also been added, to support aarch64 until Bazel releases binaries for that platform (which is soon).

Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>

1 parent bbcf2ca commit 25f4371

File tree

7 files changed: +135 −11 lines changed

Diff for: .bazelversion (+1)

@@ -0,0 +1 @@
+3.2.0
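Pinning the version in `.bazelversion` means bazelisk (or a WORKSPACE version check) will refuse to build with any other Bazel. A minimal sketch of the kind of comparison involved, with the `bazel --version` output stubbed since Bazel may not be installed locally:

```shell
#!/bin/sh
# Compare the pinned version from .bazelversion against what bazel reports.
# REPORTED is stubbed; in practice it would be "$(bazel --version)".
PINNED="3.2.0"            # contents of .bazelversion
REPORTED="bazel 3.2.0"    # stubbed output of `bazel --version`

if [ "${REPORTED#bazel }" = "$PINNED" ]; then
  echo "version ok"
else
  echo "version mismatch: want $PINNED, got ${REPORTED#bazel }" >&2
  exit 1
fi
```

With bazelisk installed, this check is unnecessary in practice: it reads `.bazelversion` itself and downloads the matching release before delegating the command.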

Diff for: README.md (+19 −1)

@@ -68,6 +68,7 @@ torch.jit.save(trt_ts_module, "trt_torchscript_module.ts")

 ### Dependencies

+- Bazel 3.2.0
 - Libtorch 1.5.0
 - CUDA 10.2
 - cuDNN 7.6.5

@@ -81,7 +82,24 @@ Releases: https://github.com/NVIDIA/TRTorch/releases

 ### Installing Dependencies

-You need to start by having CUDA installed on the system, Libtorch will automatically be pulled for you by bazel,
+#### 0. Install Bazel
+
+If you don't have bazel installed, the easiest way is to install bazelisk using the method of your choosing: https://github.com/bazelbuild/bazelisk
+
+Otherwise you can use the following instructions to install binaries: https://docs.bazel.build/versions/master/install.html
+
+Finally, if you need to compile from source (e.g. on aarch64, until bazel distributes binaries for the architecture), you can use these instructions:
+
+```sh
+export BAZEL_VERSION=<VERSION>
+mkdir bazel
+cd bazel
+curl -fSsL -O https://github.com/bazelbuild/bazel/releases/download/$BAZEL_VERSION/bazel-$BAZEL_VERSION-dist.zip
+unzip bazel-$BAZEL_VERSION-dist.zip
+bash ./compile.sh
+```
+
+You need to start by having CUDA installed on the system, LibTorch will automatically be pulled for you by bazel,
 then you have two options.

 #### 1. Building using cuDNN & TensorRT tarball distributions
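The from-source snippet above downloads a dist archive whose URL is assembled from `$BAZEL_VERSION`. A quick sketch of that expansion, useful for sanity-checking the version string before kicking off a long compile:

```shell
#!/bin/sh
# Assemble the release-archive URL that the README snippet fetches.
BAZEL_VERSION=3.2.0   # example value; substitute the version you are pinning
URL="https://github.com/bazelbuild/bazel/releases/download/${BAZEL_VERSION}/bazel-${BAZEL_VERSION}-dist.zip"
echo "$URL"
# prints: https://github.com/bazelbuild/bazel/releases/download/3.2.0/bazel-3.2.0-dist.zip
```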

Diff for: docs/_sources/tutorials/installation.rst.txt (+17 −1)

@@ -48,7 +48,23 @@ Compiling From Source
 Dependencies for Compilation
 ******************************************

-TRTorch is built with Bazel, so begin by installing it. https://docs.bazel.build/versions/master/install.html
+TRTorch is built with Bazel, so begin by installing it.
+
+The easiest way is to install bazelisk using the method of your choosing: https://github.com/bazelbuild/bazelisk
+
+Otherwise you can use the following instructions to install binaries: https://docs.bazel.build/versions/master/install.html
+
+Finally, if you need to compile from source (e.g. on aarch64, until bazel distributes binaries for the architecture), you can use these instructions:
+
+```sh
+export BAZEL_VERSION=3.2.0
+mkdir bazel
+cd bazel
+curl -fSsL -O https://github.com/bazelbuild/bazel/releases/download/$BAZEL_VERSION/bazel-$BAZEL_VERSION-dist.zip
+unzip bazel-$BAZEL_VERSION-dist.zip
+bash ./compile.sh
+cp output/bazel /usr/local/bin/
+```

 You will also need to have CUDA installed on the system (or if running in a container, the system must have
 the CUDA driver installed and the container must have CUDA)
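The docs snippet finishes by copying the compiled binary into `/usr/local/bin/`, which only helps if that directory is on `PATH`. A minimal check, run here against a stubbed `PATH` value so it does not depend on the local environment:

```shell
#!/bin/sh
# Check whether the install directory appears in a (stubbed) PATH value.
BIN_DIR="/usr/local/bin"
SEARCH_PATH="/usr/bin:/usr/local/bin:/bin"   # stub; in practice use "$PATH"

case ":$SEARCH_PATH:" in
  *":$BIN_DIR:"*) RESULT="on PATH" ;;
  *)              RESULT="not on PATH" ;;
esac
echo "$RESULT"   # prints: on PATH
```

Wrapping both sides in colons is the usual idiom for matching a whole `PATH` component rather than a substring of a longer directory name.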

Diff for: docs/searchindex.js (+1 −1)

Some generated files are not rendered by default.

Diff for: docs/sitemap.xml (+1 −1)

The generated single-line sitemap is rewritten; the only substantive changes are that the `contributors/execution.html`, `contributors/phases.html`, and `contributors/system_overview.html` entries are dropped and one entry is added:

+<url><loc>https://nvidia.github.io/TRTorch/tutorials/installation.html</loc></url>

Diff for: docs/tutorials/installation.html (+79 −6)

@@ -455,32 +455,32 @@
In the navigation sidebar, trailing colons are dropped from six link labels:

-TensorRT Available Layers and Expected Dimensions:
+TensorRT Available Layers and Expected Dimensions
-TensorRT C++ Documentation:
+TensorRT C++ Documentation
-TensorRT Python Documentation (Sometimes easier to read):
+TensorRT Python Documentation (Sometimes easier to read)
-PyTorch Functional API:
+PyTorch Functional API
-PyTorch native_ops:
+PyTorch native_ops
-PyTorch IR Documentation:
+PyTorch IR Documentation

@@ -751,10 +751,83 @@
 <p>
 TRTorch is built with Bazel, so begin by installing it.
+</p>
+<p>
+The easiest way is to install bazelisk using the method of your choosing
+<a class="reference external" href="https://github.com/bazelbuild/bazelisk">https://github.com/bazelbuild/bazelisk</a>
+</p>
+<p>
+Otherwise you can use the following instructions to install binaries
 <a class="reference external" href="https://docs.bazel.build/versions/master/install.html">https://docs.bazel.build/versions/master/install.html</a>
 </p>
+<p>
+Finally, if you need to compile from source (e.g. on aarch64, until bazel distributes binaries for the architecture), you can use these instructions
+</p>
+<p>
+<code class="docutils literal notranslate"><span class="pre">`sh</span> <span class="pre">export</span> <span class="pre">BAZEL_VERSION=3.2.0</span> <span class="pre">mkdir</span> <span class="pre">bazel</span> <span class="pre">cd</span> <span class="pre">bazel</span> <span class="pre">curl</span> <span class="pre">-fSsL</span> <span class="pre">-O</span> <span class="pre">https://github.com/bazelbuild/bazel/releases/download/$BAZEL_VERSION/bazel-$BAZEL_VERSION-dist.zip</span> <span class="pre">unzip</span> <span class="pre">bazel-$BAZEL_VERSION-dist.zip</span> <span class="pre">bash</span> <span class="pre">./compile.sh</span> <span class="pre">cp</span> <span class="pre">output/bazel</span> <span class="pre">/usr/local/bin/</span> <span class="pre">`</span></code>
+</p>
 <p>
 You will also need to have CUDA installed on the system (or if running in a container, the system must have
 the CUDA driver installed and the container must have CUDA)

Diff for: docsrc/tutorials/installation.rst (+17 −1)

@@ -48,7 +48,23 @@ Compiling From Source
 Dependencies for Compilation
 ******************************************

-TRTorch is built with Bazel, so begin by installing it. https://docs.bazel.build/versions/master/install.html
+TRTorch is built with Bazel, so begin by installing it.
+
+The easiest way is to install bazelisk using the method of your choosing: https://github.com/bazelbuild/bazelisk
+
+Otherwise you can use the following instructions to install binaries: https://docs.bazel.build/versions/master/install.html
+
+Finally, if you need to compile from source (e.g. on aarch64, until bazel distributes binaries for the architecture), you can use these instructions:
+
+```sh
+export BAZEL_VERSION=3.2.0
+mkdir bazel
+cd bazel
+curl -fSsL -O https://github.com/bazelbuild/bazel/releases/download/$BAZEL_VERSION/bazel-$BAZEL_VERSION-dist.zip
+unzip bazel-$BAZEL_VERSION-dist.zip
+bash ./compile.sh
+cp output/bazel /usr/local/bin/
+```

 You will also need to have CUDA installed on the system (or if running in a container, the system must have
 the CUDA driver installed and the container must have CUDA)
