
Error doing a full fine-tune with train-text-from-scratch #4703

Closed
RonanKMcGovern opened this issue Dec 30, 2023 · 2 comments

Comments

@RonanKMcGovern

I am trying to do a full fine-tune of TinyLlama on an M1 Mac with 8 GB of RAM.

Fine-tuning Script:

#!/bin/bash

# Path to the llama.cpp directory
LLAMA_CPP_DIR="./llama.cpp"

# Train the model
echo "Starting training process..."
$LLAMA_CPP_DIR/train-text-from-scratch \
        --vocab-model $LLAMA_CPP_DIR/models/ggml-vocab-llama.gguf \
        --ctx 4096 --embd 2048 --head 32 --layer 22 \
        --checkpoint-in  $LLAMA_CPP_DIR/models/ggml-model-f32.gguf \
        --checkpoint-out $LLAMA_CPP_DIR/models/chk-tinyllama-1.1b-3t-chat-LATEST.gguf \
        --model-out $LLAMA_CPP_DIR/models/tinyllama-1.1b-intermediate-step-1431k-3t-chat.gguf \
        --train-data "data/train_data.txt" \
        -t 6 -b 1 --seed 1 --adam-iter 256 \
        # --no-checkpointing

# Check if the train command succeeded
if [ $? -ne 0 ]; then
    echo "Training process failed."
    exit 1
fi

# Run prediction with the fine-tuned model
echo "Running prediction with the fine-tuned model..."
$LLAMA_CPP_DIR/main -m $LLAMA_CPP_DIR/models/tinyllama-1.1b-intermediate-step-1431k-3t-chat.gguf

# End of script
echo "Fine-tuning process completed."

It's not clear from the examples folder, but it seems one needs to use an F32 model, so I prepared one using:

python3 convert.py ./models/pytorch_model.bin --outtype f32 --ctx 4096

Note that I want to train on a context of 4096, so after downloading config.json and pytorch_model.bin from the TinyLlama repo I changed max_position_embeddings in config.json to 4096.
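
For reference, the config.json edit amounts to something like the following (a minimal sketch: the field name max_position_embeddings is the standard Hugging Face Llama one, and the path ./models/config.json assumes the config sits next to pytorch_model.bin):

import json

# Patch the downloaded TinyLlama config so convert.py records a 4096 context.
# Path is an assumption: config.json next to pytorch_model.bin in ./models.
cfg_path = "./models/config.json"

with open(cfg_path) as f:
    cfg = json.load(f)

cfg["max_position_embeddings"] = 4096  # TinyLlama ships with 2048

with open(cfg_path, "w") as f:
    json.dump(cfg, f, indent=2)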

When I run './fine-tune.sh' I get:

Starting training process...
main: seed: 1
llama_model_loader: loaded meta data with 17 key-value pairs and 0 tensors from ./llama.cpp/models/ggml-vocab-llama.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
llama_model_loader: - kv   2:                       llama.context_length u32              = 4096
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  11:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  12:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  13:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  14:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  15:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  16:            tokenizer.ggml.unknown_token_id u32              = 0
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 4096
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 11008
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = all F32 (guessed)
llm_load_print_meta: model params     = 0.00 B
llm_load_print_meta: model size       = 0.00 MiB (nan BPW) 
llm_load_print_meta: general.name     = LLaMA v2
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llama_model_load: vocab only - skipping tensors
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
main: init model
GGML_ASSERT: common/train.cpp:460: ggml_are_same_shape(a, b)
./fine-tune.sh: line 16: 68883 Abort trap: 6           $LLAMA_CPP_DIR/train-text-from-scratch --vocab-model $LLAMA_CPP_DIR/models/ggml-vocab-llama.gguf --ctx 4096 --embd 2048 --head 32 --layer 22 --checkpoint-in $LLAMA_CPP_DIR/models/ggml-model-f32.gguf --checkpoint-out $LLAMA_CPP_DIR/models/chk-tinyllama-1.1b-3t-chat-LATEST.gguf --model-out $LLAMA_CPP_DIR/models/tinyllama-1.1b-intermediate-step-1431k-3t-chat.fp16.gguf --train-data "data/train_data.txt" -t 6 -b 1 --seed 1 --adam-iter 256 --no-checkpointing
Training process failed.
Re-running ./fine-tune.sh (this time without --no-checkpointing in the command) produces identical output, again ending with:

main: init model
GGML_ASSERT: common/train.cpp:460: ggml_are_same_shape(a, b)
Training process failed.

Summary of Issues:

  • The model is clearly not being loaded correctly. Is there a specific vocab file I should be using? As the logs show, the llama vocab file is being read, but the layer and embedding values I pass on the command line do not appear to override its defaults (the metadata still reports n_embd = 4096 and n_layer = 32). Also, TinyLlama uses 4 KV heads (grouped-query attention), unlike Llama 2, so I presumably need to supply that somehow; see the sketch below for one way to compare the files.
  • Is it true that only an F32 model can be used as input for a full fine-tune? (It would be great to be able to start from any GGUF quant.)
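
To compare what each GGUF file actually contains against the command-line options, the metadata counts and tensor shapes can be dumped with the gguf Python package that ships with llama.cpp (a minimal sketch assuming gguf-py is installed, e.g. pip install gguf, and reusing the paths from my script):

from gguf import GGUFReader

# Print a summary of a GGUF file: number of key/value pairs and the shape of
# every tensor, so the vocab-only file and the converted F32 model can be
# checked against the --embd/--head/--layer values passed on the CLI.
def dump(path):
    reader = GGUFReader(path)
    print(f"== {path}: {len(reader.fields)} kv pairs, {len(reader.tensors)} tensors ==")
    for tensor in reader.tensors:
        print(f"  {tensor.name}: {list(tensor.shape)}")

dump("./llama.cpp/models/ggml-vocab-llama.gguf")   # expect 0 tensors (vocab only)
dump("./llama.cpp/models/ggml-model-f32.gguf")     # converted TinyLlama F32 model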
github-actions bot commented Mar 18, 2024

This issue is stale because it has been open for 30 days with no activity.

@github-actions github-actions bot added the stale label Mar 18, 2024

github-actions bot commented Apr 2, 2024

This issue was closed because it has been inactive for 14 days since being marked as stale.

@github-actions github-actions bot closed this as completed Apr 2, 2024