Commit 8d10406

readme : change logo + add bindings + add uis + add wiki

1 parent ed1c214
File tree: 1 file changed, +17 -4 lines

README.md (+17 -4)
@@ -1,6 +1,6 @@
 # llama.cpp
 
-![llama](https://user-images.githubusercontent.com/1991296/227761327-6d83e30e-2200-41a6-bfbb-f575231c54f4.png)
+![llama](https://user-images.githubusercontent.com/1991296/230134379-7181e485-c521-4d23-a0d6-f7b3b61ba524.png)
 
 [![Actions Status](https://github.com/ggerganov/llama.cpp/workflows/CI/badge.svg)](https://github.com/ggerganov/llama.cpp/actions)
 [![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)](https://opensource.org/licenses/MIT)
@@ -10,7 +10,6 @@ Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
 **Hot topics:**
 
 - [Roadmap (short-term)](https://github.com/ggerganov/llama.cpp/discussions/457)
-- Support for [GPT4All](https://github.com/ggerganov/llama.cpp#using-gpt4all)
 
 ## Description
 
@@ -28,20 +27,31 @@ Please do not make conclusions about the models based on the results from this i
 For all I know, it can be completely wrong. This project is for educational purposes.
 New features will probably be added mostly through community contributions.
 
-Supported platforms:
+**Supported platforms:**
 
 - [X] Mac OS
 - [X] Linux
 - [X] Windows (via CMake)
 - [X] Docker
 
-Supported models:
+**Supported models:**
 
 - [X] LLaMA 🦙
 - [X] [Alpaca](https://github.com/ggerganov/llama.cpp#instruction-mode-with-alpaca)
 - [X] [GPT4All](https://github.com/ggerganov/llama.cpp#using-gpt4all)
 - [X] [Chinese LLaMA / Alpaca](https://github.com/ymcui/Chinese-LLaMA-Alpaca)
 - [X] [Vigogne (French)](https://github.com/bofenghuang/vigogne)
+- [X] [Vicuna](https://github.com/ggerganov/llama.cpp/discussions/643#discussioncomment-5533894)
+
+**Bindings:**
+
+- Python: [abetlen/llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
+- Go: [go-skynet/go-llama.cpp](https://github.com/go-skynet/go-llama.cpp)
+
+**UI:**
+
+- [nat/openplayground](https://github.com/nat/openplayground)
+- [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui)
 
 ---
 
@@ -374,3 +384,6 @@ docker run -v /llama/models:/models ghcr.io/ggerganov/llama.cpp:light -m /models
 - Clean-up any trailing whitespaces, use 4 spaces indentation, brackets on same line, `void * ptr`, `int & a`
 - See [good first issues](https://github.com/ggerganov/llama.cpp/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) for tasks suitable for first contributions
 
+### Docs
+
+- [GGML tips & tricks](https://github.com/ggerganov/llama.cpp/wiki/GGML-Tips-&-Tricks)
