
Commit d86f1e5

petterreinholdtsen authored and ggerganov committed
talk-llama : reject runs without required arguments (ggml-org#2153)
* Extended talk-llama example to reject runs without required arguments. Print a warning and exit if models are not specified on the command line.
* Update examples/talk-llama/talk-llama.cpp
* Update examples/talk-llama/talk-llama.cpp

Co-authored-by: Georgi Gerganov <[email protected]>
1 parent 6a99fe0 · commit d86f1e5

File tree

1 file changed: +8 −0


examples/talk-llama/talk-llama.cpp

@@ -288,6 +288,10 @@ int main(int argc, char ** argv) {
     cparams.use_gpu = params.use_gpu;
 
     struct whisper_context * ctx_wsp = whisper_init_from_file_with_params(params.model_wsp.c_str(), cparams);
+    if (!ctx_wsp) {
+        fprintf(stderr, "No whisper.cpp model specified. Please provide using -mw <modelfile>\n");
+        return 1;
+    }
 
     // llama init
 
@@ -301,6 +305,10 @@ int main(int argc, char ** argv) {
     }
 
     struct llama_model * model_llama = llama_load_model_from_file(params.model_llama.c_str(), lmparams);
+    if (!model_llama) {
+        fprintf(stderr, "No llama.cpp model specified. Please provide using -ml <modelfile>\n");
+        return 1;
+    }
 
     llama_context_params lcparams = llama_context_default_params();