Commit 2a710b2

readme : update gpt4all instructions
1 parent 315a95a commit 2a710b2

File tree: 1 file changed (+10, -9)

README.md (+10, -9)
@@ -270,18 +270,19 @@ cadaver, cauliflower, cabbage (vegetable), catalpa (tree) and Cailleach.
 
 ### Using [GPT4All](https://github.com/nomic-ai/gpt4all)
 
-- Obtain the `gpt4all-lora-quantized.bin` model
+- Obtain the `tokenizer.model` file from LLaMA model and put it to `models`
+- Obtain the `added_tokens.json` file from Alpaca model and put it to `models`
+- Obtain the `gpt4all-lora-quantized.bin` file from GPT4All model and put it to `models/gpt4all-7B`
 - It is distributed in the old `ggml` format which is now obsoleted
-- You have to convert it to the new format using [./convert-gpt4all-to-ggml.py](./convert-gpt4all-to-ggml.py). You may also need to
-convert the model from the old format to the new format with [./migrate-ggml-2023-03-30-pr613.py](./migrate-ggml-2023-03-30-pr613.py):
+- You have to convert it to the new format using `convert.py`:
 
-```bash
-python3 convert-gpt4all-to-ggml.py models/gpt4all-7B/gpt4all-lora-quantized.bin ./models/tokenizer.model
-python3 migrate-ggml-2023-03-30-pr613.py models/gpt4all-7B/gpt4all-lora-quantized.bin models/gpt4all-7B/gpt4all-lora-quantized-new.bin
-```
+```bash
+python3 convert.py models/gpt4all-7B/gpt4all-lora-quantized.bin
+```
+
+- You can now use the newly generated `models/gpt4all-7B/ggml-model-q4_0.bin` model in exactly the same way as all other models
 
-- You can now use the newly generated `gpt4all-lora-quantized-new.bin` model in exactly the same way as all other models
-- The original model is saved in the same folder with a suffix `.orig`
+- The newer GPT4All-J model is not yet supported!
 
 ### Obtaining and verifying the Facebook LLaMA original model and Stanford Alpaca model data
 
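For anyone following the updated instructions end to end, the sketch below strings the steps together. It assumes the llama.cpp repository is already cloned and built, that the three files have been obtained separately, and that the placeholder source paths are replaced with real ones; the final `./main` invocation and its flags are an illustrative example rather than part of this commit.

```bash
# Assumed layout: run from the root of the llama.cpp checkout.
mkdir -p models/gpt4all-7B

# Put the supporting files where the updated instructions expect them
# (the source paths below are placeholders, not real locations).
cp /path/to/llama/tokenizer.model      models/
cp /path/to/alpaca/added_tokens.json   models/
cp /path/to/gpt4all-lora-quantized.bin models/gpt4all-7B/

# Convert the old-format GPT4All weights to the current ggml format.
python3 convert.py models/gpt4all-7B/gpt4all-lora-quantized.bin

# Use the converted model like any other (example invocation, not part of the diff).
./main -m models/gpt4all-7B/ggml-model-q4_0.bin -n 128 -p "Tell me about llamas."
```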
