@@ -1,6 +1,6 @@
# llama.cpp

- ![llama](https://user-images.githubusercontent.com/1991296/227761327-6d83e30e-2200-41a6-bfbb-f575231c54f4.png)
+ ![llama](https://user-images.githubusercontent.com/1991296/230134379-7181e485-c521-4d23-a0d6-f7b3b61ba524.png)

[![Actions Status](https://github.com/ggerganov/llama.cpp/workflows/CI/badge.svg)](https://github.com/ggerganov/llama.cpp/actions)
[![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)](https://opensource.org/licenses/MIT)
@@ -10,7 +10,6 @@ Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
**Hot topics:**

- [Roadmap (short-term)](https://github.com/ggerganov/llama.cpp/discussions/457)
- - Support for [GPT4All](https://github.com/ggerganov/llama.cpp#using-gpt4all)

## Description

@@ -28,20 +27,31 @@ Please do not make conclusions about the models based on the results from this i
For all I know, it can be completely wrong. This project is for educational purposes.
New features will probably be added mostly through community contributions.

- Supported platforms:
+ **Supported platforms:**

- [X] Mac OS
- [X] Linux
- [X] Windows (via CMake)
- [X] Docker

- Supported models:
+ **Supported models:**

- [X] LLaMA 🦙
- [X] [Alpaca](https://github.com/ggerganov/llama.cpp#instruction-mode-with-alpaca)
- [X] [GPT4All](https://github.com/ggerganov/llama.cpp#using-gpt4all)
- [X] [Chinese LLaMA / Alpaca](https://github.com/ymcui/Chinese-LLaMA-Alpaca)
- [X] [Vigogne (French)](https://github.com/bofenghuang/vigogne)
+ - [X] [Vicuna](https://github.com/ggerganov/llama.cpp/discussions/643#discussioncomment-5533894)
+
+ **Bindings** (see the sketch of the underlying C API below):
+
+ - Python: [abetlen/llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
+ - Go: [go-skynet/go-llama.cpp](https://github.com/go-skynet/go-llama.cpp)
+
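Both bindings above sit on top of the C interface declared in `llama.h`. As a rough sketch of a direct caller of that interface (assuming the function names in `llama.h` around the time of this change, such as `llama_init_from_file` and `llama_sample_top_p_top_k`; later versions may rename them):

```cpp
// Hypothetical minimal caller of the llama.cpp C API, a sketch only.
// Names follow llama.h as of this change and may have moved since.
#include "llama.h"

#include <cstdio>
#include <vector>

int main() {
    // Load the model with default context parameters.
    llama_context_params params = llama_context_default_params();
    llama_context * ctx = llama_init_from_file("models/7B/ggml-model-q4_0.bin", params);
    if (ctx == NULL) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // Tokenize the prompt (true = prepend the BOS token).
    std::vector<llama_token> tokens(params.n_ctx);
    const int n_prompt = llama_tokenize(ctx, "The llama is", tokens.data(), tokens.size(), true);
    tokens.resize(n_prompt);

    // Evaluate the prompt in one batch, then sample tokens one at a time.
    llama_eval(ctx, tokens.data(), tokens.size(), 0, /*n_threads =*/ 4);

    int n_past = tokens.size();
    for (int i = 0; i < 16; i++) {
        // top_k = 40, top_p = 0.95, temp = 0.80, repeat_penalty = 1.10;
        // the token history is passed in for the repeat penalty.
        const llama_token id = llama_sample_top_p_top_k(
            ctx, tokens.data(), tokens.size(), 40, 0.95f, 0.80f, 1.10f);
        printf("%s", llama_token_to_str(ctx, id));

        tokens.push_back(id);
        llama_eval(ctx, &id, 1, n_past++, 4);
    }

    llama_free(ctx);
    return 0;
}
```

Building this amounts to compiling against the repository headers and linking the `llama.o` and `ggml.o` objects the project's Makefile produces.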
+ **UI:**
+
+ - [nat/openplayground](https://github.com/nat/openplayground)
+ - [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui)

---

@@ -374,3 +384,6 @@ docker run -v /llama/models:/models ghcr.io/ggerganov/llama.cpp:light -m /models
- Clean up any trailing whitespace, use 4-space indentation, brackets on the same line, `void * ptr`, `int & a` (see the example below)
- See [good first issues](https://github.com/ggerganov/llama.cpp/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) for tasks suitable for first contributions
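For illustration, a small hypothetical function written in that style (4-space indentation, opening brackets on the same line, pointer and reference declarations spaced as `void * ptr` and `int & a`):

```cpp
// A made-up helper, shown only to illustrate the formatting conventions;
// it is not part of the codebase.
static void scale_buffer(float * data, int n, const float & factor) {
    for (int i = 0; i < n; i++) {
        data[i] *= factor;
    }
}
```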

+ ### Docs
+
+ - [GGML tips & tricks](https://github.com/ggerganov/llama.cpp/wiki/GGML-Tips-&-Tricks)