Step 2: Move into the llama.cpp folder and build it. You can also add hardware-specific flags (e.g. `-DGGML_CUDA=ON` for NVIDIA GPUs).
```
cd llama.cpp
cmake -B build # optionally, add -DGGML_CUDA=ON to activate CUDA
cmake --build build --config Release
```
Note: for other hardware support (e.g. AMD ROCm, Intel SYCL), please refer to [llama.cpp's build guide](https://github.com/ggml-org/llama.cpp/blob/master/docs/build.md).
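
For illustration, here is what a GPU-accelerated build might look like end to end. This is a sketch assuming an NVIDIA GPU with the CUDA toolkit installed; the `-j 8` parallel-build flag is an optional addition, not part of the steps above:

```
cd llama.cpp
cmake -B build -DGGML_CUDA=ON               # configure with the CUDA backend
cmake --build build --config Release -j 8   # compile with 8 parallel jobs
```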
Once built, you can use `llama-cli` or `llama-server` as follows:
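
As a minimal sketch of such invocations (the model path `./models/my-model.gguf` is a hypothetical placeholder; after a CMake build the binaries land in `build/bin/`):

```
# Chat interactively with a local GGUF model
./build/bin/llama-cli -m ./models/my-model.gguf -p "Hello"

# Or serve the model over an OpenAI-compatible HTTP API on port 8080
./build/bin/llama-server -m ./models/my-model.gguf --port 8080
```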