Commit 9190b17

Update README.md
1 parent 2780ea2 commit 9190b17

File tree: 1 file changed (+6 −1 lines changed)

README.md (+6 −1)
````diff
@@ -1,5 +1,10 @@
-# koboldcpp
+# koboldcpp-ROCM
 
+To install, run "make LLAMA_HIPBLAS=1" twice. It is unclear why this must be done twice, but the .so files are not built until the second run:
+```make LLAMA_HIPBLAS=1 && make LLAMA_HIPBLAS=1```
+To use ROCm, set GPU layers with --gpulayers when starting koboldcpp.
+
+--------
 A self contained distributable from Concedo that exposes llama.cpp function bindings, allowing it to be used via a simulated Kobold API endpoint.
 
 What does it mean? You get llama.cpp with a fancy UI, persistent stories, editing tools, save formats, memory, world info, author's note, characters, scenarios and everything Kobold and Kobold Lite have to offer. In a tiny package around 20 MB in size, excluding model weights.
````
