1 parent 4d56ad8 commit ea6d320
README.md
@@ -2,7 +2,8 @@
To install, run
```make LLAMA_HIPBLAS=1```
-To use ROCM, set GPU layers with --gpulayers when starting koboldcpp
+To use ROCM, set GPU layers with --gpulayers when starting koboldcpp
+Original [llama.cpp rocm port](https://github.com/ggerganov/llama.cpp/pull/1087) by SlyEcho, ported to koboldcpp by yellowrosecx
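The two steps above (hipBLAS build, then launching with GPU offload) can be sketched as a shell session. This is a minimal sketch, not from the diff itself: it assumes a working ROCm toolchain, and the model filename and layer count are placeholder values to adjust for your own model and VRAM.

```shell
# Build koboldcpp with hipBLAS (ROCm) support -- assumes the ROCm
# toolchain and HIP libraries are already installed on the system.
make LLAMA_HIPBLAS=1

# Launch koboldcpp with some layers offloaded to the GPU.
# "model.bin" and the layer count 43 are placeholder examples;
# pick values that fit your model and available VRAM.
python koboldcpp.py model.bin --gpulayers 43
```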
Comparison with OpenCL using 6800xt
| Model | Offloading Method | Time Taken - Processing 593 tokens | Time Taken - Generating 200 tokens | Total Time | Perf. Diff. |