### Examples for input embedding directly

## Requirement

Build `libembdinput.so` by running the following command in the main directory (`../../`):

```
make
```
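
The Python examples rely on this shared library at runtime. As a quick sanity check that the build produced a loadable library, you can try opening it with `ctypes` (a minimal sketch; the assumption that `libembdinput.so` ends up in the repository root and is loaded via plain `ctypes` may not match your setup exactly):

```
import ctypes

# Sanity check: try to load the freshly built shared library.
# Assumption: `make` placed libembdinput.so in the repository root.
lib = ctypes.CDLL("./libembdinput.so")
print("loaded", lib)
```
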
## [LLaVA](https://github.com/haotian-liu/LLaVA/) example (llava.py)

1. Obtain the LLaVA model (following https://github.com/haotian-liu/LLaVA/ , use https://huggingface.co/liuhaotian/LLaVA-13b-delta-v1-1/).
2. Convert it to ggml format.
3. `llava_projection.pth` is extracted from [pytorch_model-00003-of-00003.bin](https://huggingface.co/liuhaotian/LLaVA-13b-delta-v1-1/blob/main/pytorch_model-00003-of-00003.bin):
```
import torch

# Paths: the original delta-weights shard and the output file used by llava.py
bin_path = "../LLaVA-13b-delta-v1-1/pytorch_model-00003-of-00003.bin"
pth_path = "./examples/embd_input/llava_projection.pth"

# Keep only the multimodal projector weights and save them separately
dic = torch.load(bin_path)
used_key = ["model.mm_projector.weight", "model.mm_projector.bias"]
torch.save({k: dic[k] for k in used_key}, pth_path)
```
4. Check the paths of the LLaVA model and `llava_projection.pth` in `llava.py`.

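
The snippet below only illustrates the kind of paths to verify; the actual variable names in `llava.py` may differ, and both paths here are assumptions:

```
# Hypothetical names for illustration only; check the real variables in llava.py.
model_path = "./models/ggml-llava-13b.bin"                      # ggml-converted LLaVA model (assumption)
projection_path = "./examples/embd_input/llava_projection.pth"  # file saved by the snippet above
```
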
## [PandaGPT](https://github.com/yxuansu/PandaGPT) example (panda_gpt.py)

1. Obtain the PandaGPT LoRA model from https://github.com/yxuansu/PandaGPT and rename the file to `adapter_model.bin`. Use [convert-lora-to-ggml.py](../../convert-lora-to-ggml.py) to convert it to ggml format.
The `adapter_config.json` is:
```
{
  "peft_type": "LORA",
  "fan_in_fan_out": false,
  "bias": null,
  "modules_to_save": null,
  "r": 32,
  "lora_alpha": 32,
  "lora_dropout": 0.1,
  "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"]
}
```
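
If the converter expects `adapter_config.json` to sit in the same directory as `adapter_model.bin` (as is usual for PEFT adapters), you can generate the file above with a short script. The output directory below is an assumption; use wherever you placed the LoRA file:

```
import json

# Write the adapter_config.json shown above next to adapter_model.bin.
# The directory name is hypothetical.
config = {
    "peft_type": "LORA",
    "fan_in_fan_out": False,
    "bias": None,
    "modules_to_save": None,
    "r": 32,
    "lora_alpha": 32,
    "lora_dropout": 0.1,
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],
}
with open("../PandaGPT-lora/adapter_config.json", "w") as f:
    json.dump(config, f, indent=2)
```
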
2. Prepare the `vicuna` v0 model.
3. Obtain the [ImageBind](https://dl.fbaipublicfiles.com/imagebind/imagebind_huge.pth) model.
4. Clone the PandaGPT source:
```
git clone https://github.com/yxuansu/PandaGPT
```
5. Install the requirements of PandaGPT.
6. Check the paths of the PandaGPT source, ImageBind model, LoRA model, and vicuna model in `panda_gpt.py`.

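
Before running `panda_gpt.py`, a small pre-flight check can catch missing files early. Every path below is an assumption; adjust them to whatever `panda_gpt.py` actually uses:

```
import os

# Hypothetical locations of the pieces listed above; panda_gpt.py may expect different paths.
required = [
    "./PandaGPT",                                  # cloned PandaGPT source
    "./models/imagebind_huge.pth",                 # ImageBind checkpoint
    "./models/panda_gpt/ggml-adapter-model.bin",   # LoRA adapter converted to ggml
    "./models/vicuna-13b-v0/ggml-model-q4_0.bin",  # vicuna v0 model in ggml format
]
for path in required:
    print(("OK      " if os.path.exists(path) else "MISSING ") + path)
```
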
## [MiniGPT-4](https://github.com/Vision-CAIR/MiniGPT-4/) example (minigpt4.py)

1. Obtain the MiniGPT-4 model from https://github.com/Vision-CAIR/MiniGPT-4/ and put it in `embd-input`.
2. Clone the MiniGPT-4 source:
```
git clone https://github.com/Vision-CAIR/MiniGPT-4/
```
3. Install the requirements of MiniGPT-4.
4. Prepare the `vicuna` v0 model.
5. Check the paths of the MiniGPT-4 source, MiniGPT-4 model, and vicuna model in `minigpt4.py`.
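
To confirm the downloaded checkpoint is the one you expect, you can inspect its top-level keys (a sketch; the filename is an assumption and the internal layout depends on the MiniGPT-4 release):

```
import torch

# Hypothetical filename; use the checkpoint you placed in embd-input.
ckpt = torch.load("./examples/embd_input/pretrained_minigpt4.pth", map_location="cpu")

# Print the top-level keys to see what the checkpoint contains.
if isinstance(ckpt, dict):
    for key in ckpt:
        print(key)
else:
    print(type(ckpt))
```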