Support Adept Persimmon 8b #3410
Merged: ggerganov merged 30 commits into ggml-org:master from phillip-kravtsov:phillip-kravtsov/support-adept-persimmon-8b on Oct 7, 2023. (Diff shown is from 28 of the 30 commits.)
Commits (all by phillip-kravtsov):

7cdc3ea Produces garbage output
4bcf412 wip: correct tensors up to RoPE
c9e1446 correct tensors thru RoPE
d1b40ef Correct outputs through masked & softmax'd KQ
db2181a fp32 works
3f31799 Rename adept->persimmon
720503b Merge branch 'master' of github.com:phillip-kravtsov/llama.cpp into p…
d61eed0 Produces correct outputs
d0a7143 Merge branch 'master' of github.com:ggerganov/llama.cpp into phillip-…
fa92f6e clean up convert scripts
c28a6c5 remove printing logic from ggml.c
47dcb9f remove prints from llama.cpp & fix merge
7473773 Merge branch 'master' of github.com:ggerganov/llama.cpp into phillip-…
d904aff trivial cleanups
ec0ce97 Add offload funcs
3db04db update conversion script to directly take adept artifacts rather than…
f28f52c Fix norm eps bug
d93cf1e Merge branch 'master' of github.com:ggerganov/llama.cpp into phillip-…
574a9e1 Merge branch 'master' of github.com:ggerganov/llama.cpp into phillip-…
2b56591 Support sqr and concat on metal, persimmon-8b-q4 runs correctly
e6bf87f Small changes from review
cd4d3df Formatting changes
422b110 Minor changes to conversion script
5a0990c Merge branch 'master' of github.com:ggerganov/llama.cpp into phillip-…
7a279fe Remove old script
c90ed9f Fix editorconfig formatting
5d259d3 Merge branch 'master' of github.com:ggerganov/llama.cpp into phillip-…
1d518d6 Fix build
0c1a8f6 Merge branch 'master' of github.com:ggerganov/llama.cpp into phillip-…
485a471 add overlooked offload code ggml-ci
New file (+130 lines):
import torch
import os
from pprint import pprint
import sys
import argparse
from pathlib import Path
from sentencepiece import SentencePieceProcessor
if 'NO_LOCAL_GGUF' not in os.environ:
    sys.path.insert(1, str(Path(__file__).parent / 'gguf-py' / 'gguf'))
import gguf


def _flatten_dict(dct, tensors, prefix=None):
    # Recursively flatten a nested dict of tensors into 'tensors',
    # joining nested keys with '.' to build each full tensor name.
    assert isinstance(dct, dict)
    for key in dct.keys():
        new_prefix = prefix + '.' + key if prefix is not None else key
        if isinstance(dct[key], torch.Tensor):
            tensors[new_prefix] = dct[key]
        elif isinstance(dct[key], dict):
            _flatten_dict(dct[key], tensors, new_prefix)
        else:
            raise ValueError(type(dct[key]))
    return None


def _get_sentencepiece_tokenizer_info(dir_model: Path):
    tokenizer_path = dir_model / 'adept_vocab.model'
    print('gguf: getting sentencepiece tokenizer from', tokenizer_path)
    tokenizer = SentencePieceProcessor(str(tokenizer_path))
    print('gguf: adding tokens')
    tokens: list[bytes] = []
    scores: list[float] = []
    toktypes: list[int] = []

    for i in range(tokenizer.vocab_size()):
        text: bytes
        score: float

        piece = tokenizer.id_to_piece(i)
        text = piece.encode("utf-8")
        score = tokenizer.get_score(i)

        # Token type values match gguf's TokenType:
        # NORMAL=1, UNKNOWN=2, CONTROL=3, UNUSED=5, BYTE=6.
        toktype = 1
        if tokenizer.is_unknown(i):
            toktype = 2
        if tokenizer.is_control(i):
            toktype = 3
        if tokenizer.is_unused(i):
            toktype = 5
        if tokenizer.is_byte(i):
            toktype = 6

        tokens.append(text)
        scores.append(score)
        toktypes.append(toktype)
    return tokens, scores, toktypes


def main():
    parser = argparse.ArgumentParser(description="Convert a Persimmon model from Adept (e.g. Persimmon 8b chat) to a GGML compatible file")
    parser.add_argument("--outfile", type=Path, help="path to write to; default: based on input")
    parser.add_argument("--ckpt-path", type=Path, help="path to persimmon checkpoint .pt file")
    parser.add_argument("--model-dir", type=Path, help="directory containing model e.g. 8b_chat_model_release")
    parser.add_argument("--adept-inference-dir", type=str, help="path to adept-inference code directory")
    args = parser.parse_args()
    if args.outfile is None:
        # The help text promises a default based on the input,
        # so derive one from the checkpoint path.
        args.outfile = args.ckpt_path.with_suffix('.gguf')
    sys.path.append(str(args.adept_inference_dir))
    persimmon_model = torch.load(args.ckpt_path)
    hparams = persimmon_model['args']
    pprint(hparams)
    tensors = {}
    _flatten_dict(persimmon_model['model'], tensors, None)

    arch = gguf.MODEL_ARCH.PERSIMMON
    gguf_writer = gguf.GGUFWriter(args.outfile, gguf.MODEL_ARCH_NAMES[arch])

    block_count = hparams.num_layers
    head_count = hparams.num_attention_heads
    head_count_kv = head_count
    ctx_length = hparams.seq_length
    hidden_size = hparams.hidden_size

    gguf_writer.add_name('persimmon-8b-chat')
    gguf_writer.add_context_length(ctx_length)
    gguf_writer.add_embedding_length(hidden_size)
    gguf_writer.add_block_count(block_count)
    gguf_writer.add_feed_forward_length(hparams.ffn_hidden_size)
    gguf_writer.add_rope_dimension_count(hidden_size // head_count)
    gguf_writer.add_head_count(head_count)
    gguf_writer.add_head_count_kv(head_count_kv)
    gguf_writer.add_rope_freq_base(hparams.rotary_emb_base)
    gguf_writer.add_layer_norm_eps(hparams.layernorm_epsilon)

    tokens, scores, toktypes = _get_sentencepiece_tokenizer_info(args.model_dir)
    gguf_writer.add_tokenizer_model('llama')
    gguf_writer.add_token_list(tokens)
    gguf_writer.add_token_scores(scores)
    gguf_writer.add_token_types(toktypes)
    # Persimmon uses the same special token id for both BOS and EOS.
    gguf_writer.add_bos_token_id(71013)
    gguf_writer.add_eos_token_id(71013)

    tensor_map = gguf.get_tensor_name_map(arch, block_count)
    print(tensor_map)
    for name in tensors.keys():
        data = tensors[name]
        if name.endswith(".self_attention.rotary_emb.inv_freq"):
            continue
        old_dtype = data.dtype
        # TODO: FP16 conversion produces garbage outputs. (Q8_0 does not, so..?)
        data = data.to(torch.float32).squeeze().numpy()
        new_name = tensor_map.get_name(name, try_suffixes = (".weight", ".bias"))
        if new_name is None:
            print(f"Cannot map tensor '{name}'")
            sys.exit()
        n_dims = len(data.shape)
        print(f"{new_name}, n_dims = {n_dims}, {old_dtype} --> {data.dtype}")
        gguf_writer.add_tensor(new_name, data)
    print("gguf: write header")
    gguf_writer.write_header_to_file()
    print("gguf: write metadata")
    gguf_writer.write_kv_data_to_file()
    print("gguf: write tensors")
    gguf_writer.write_tensors_to_file()

    gguf_writer.close()

    print(f"gguf: model successfully exported to '{args.outfile}'")
    print("")


if __name__ == '__main__':
    main()
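
The hard-coded toktype literals above (1, 2, 3, 5, 6) correspond to gguf-py's TokenType enum (NORMAL, UNKNOWN, CONTROL, UNUSED, BYTE). A minimal sketch of the same classification with the named constants, assuming this gguf-py version exposes gguf.TokenType; note the checks are sequential assignments, not elif, so the last matching check wins, exactly as in the script:

    from sentencepiece import SentencePieceProcessor
    import gguf

    def classify_token(tokenizer: SentencePieceProcessor, i: int) -> int:
        # Mirrors the literal 1/2/3/5/6 assignments above: later checks
        # override earlier ones, so a token that is both "unused" and a
        # byte ends up classified as BYTE.
        toktype = gguf.TokenType.NORMAL       # 1
        if tokenizer.is_unknown(i):
            toktype = gguf.TokenType.UNKNOWN  # 2
        if tokenizer.is_control(i):
            toktype = gguf.TokenType.CONTROL  # 3
        if tokenizer.is_unused(i):
            toktype = gguf.TokenType.UNUSED   # 5
        if tokenizer.is_byte(i):
            toktype = gguf.TokenType.BYTE     # 6
        return int(toktype)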
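The TODO in the tensor loop records that fp16 conversion produced garbage output while Q8_0 quantization did not, so every tensor is written as fp32. For reference, a hedged sketch of the convention other llama.cpp convert scripts follow and this one deliberately skips (downcasting 2-D weight matrices to fp16, keeping 1-D norms and biases in fp32); illustrative only, not what the PR ships:

    import torch

    def prepare_tensor(data: torch.Tensor, name: str, use_fp16: bool = False):
        # Usual convert-script convention: 1-D tensors (norms, biases) stay
        # fp32; 2-D weight matrices are downcast to fp16. This PR effectively
        # keeps use_fp16=False because its fp16 path produced garbage.
        data = data.to(torch.float32).squeeze()
        if use_fp16 and name.endswith(".weight") and data.ndim == 2:
            data = data.to(torch.float16)
        return data.numpy()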
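A hypothetical invocation, assuming the script is saved as convert-persimmon-to-gguf.py at the repo root, the Adept release directory (containing adept_vocab.model) was unpacked alongside it, and the checkpoint filename inside that directory is a placeholder:

    python convert-persimmon-to-gguf.py \
        --ckpt-path 8b_chat_model_release/model.pt \
        --model-dir 8b_chat_model_release \
        --adept-inference-dir ./adept-inference \
        --outfile persimmon-8b-chat.gguf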