Commit 8dc342c (parent f11c0f9)

quick readme update

File tree

1 file changed, +2 −3 lines changed

README.md (+2 −3)
@@ -20,12 +20,11 @@ The main goal of `llama.cpp` is to run the LLaMA model using 4-bit integer quant
 - Apple silicon first-class citizen - optimized via ARM NEON and Accelerate framework
 - AVX2 support for x86 architectures
 - Mixed F16 / F32 precision
-- 4-bit integer quantization support
+- 4 & 8 bit integer quantization support
 - Runs on the CPU

 The original implementation of `llama.cpp` was [hacked in an evening](https://github.com/ggerganov/llama.cpp/issues/33#issuecomment-1465108022).
-Since then, the project has improved significantly thanks to many contributions. This project is for educational purposes and serves
-as the main playground for developing new features for the [ggml](https://github.com/ggerganov/ggml) library.
+Since then, the project has improved significantly thanks to many contributions. This project is for educational purposes and serves as the main playground for developing new features for the [ggml](https://github.com/ggerganov/ggml) library.

 **Supported platforms:**
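For context on the "4 & 8 bit integer quantization" feature this diff mentions: the general idea is to map floating-point weights onto a small integer range with a shared scale factor. The sketch below shows a generic symmetric scheme; it is an illustration only, not llama.cpp's actual block-quantization formats (which group weights into blocks, each with its own scale, and are implemented in C).

```python
# Illustrative sketch of symmetric integer quantization at a given bit width.
# NOT llama.cpp's exact format -- just the underlying idea: scale floats into
# the signed integer range, round, and later multiply back by the scale.

def quantize(values, bits):
    qmax = 2 ** (bits - 1) - 1                # 127 for 8-bit, 7 for 4-bit
    amax = max(abs(v) for v in values)
    scale = (amax / qmax) if amax > 0 else 1.0
    q = [max(-qmax, min(qmax, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.12, -0.5, 0.33, 0.99, -0.77]
q8, s8 = quantize(weights, 8)     # small error, 8 bits per weight
q4, s4 = quantize(weights, 4)     # larger error, half the storage
```

The trade-off the diff's wording reflects: 4-bit halves the memory of 8-bit at the cost of a coarser grid, since the rounding error is bounded by half the scale in either case.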
