Running Custom Model with isinstance(model, VllmModelForTextGeneration) problem #15858
yesilcagri asked this question in Q&A (Unanswered)
Replies: 1 comment

- Make sure you have implemented all of the required methods for that interface.
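In other words, `VllmModelForTextGeneration` (defined in `vllm/model_executor/models/interfaces_base.py`) is, at least in recent versions, a runtime-checkable `Protocol`, so `isinstance()` passes only if the model class exposes every method the protocol declares. Below is a minimal sketch of that method surface; the member names (`forward`, `compute_logits`, `sample`) and signatures are assumptions taken from one vLLM version, so compare them against `interfaces_base.py` in your install:

```python
# Minimal sketch (not a working model) of the methods the protocol checks for.
# Method names/signatures are assumptions from one version of
# vllm/model_executor/models/interfaces_base.py; verify against your version.
import torch
from torch import nn


class MyModelForCausalLM(nn.Module):

    def __init__(self, vllm_config, prefix: str = "") -> None:
        super().__init__()
        # Build the layers from vllm_config here.
        ...

    def forward(self, input_ids: torch.Tensor, positions: torch.Tensor,
                **kwargs) -> torch.Tensor:
        # Return the hidden states for the scheduled tokens.
        ...

    def compute_logits(self, hidden_states: torch.Tensor,
                       sampling_metadata) -> torch.Tensor:
        # Project hidden states to vocabulary logits.
        ...

    def sample(self, logits: torch.Tensor, sampling_metadata):
        # Some vLLM versions also require sample(); check interfaces_base.py.
        ...
```

If any of these methods is missing from the class, the runtime protocol check fails and `LLM.generate()` raises the ValueError quoted in the question.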
I have a custom pretrained model. I prepared my custom model classes and registered them in the _TEXT_GENERATION_MODELS dictionary in registry.py. The model loads successfully, but when I call the generate function as below, I get the error "ValueError: LLM.generate() is only supported for (conditional) generation models (XForCausalLM, XForConditionalGeneration)."
```python
from vllm import LLM, SamplingParams

llm = LLM(model="my_model", trust_remote_code=True, dtype="half")
outputs = llm.generate(prompts, sampling_params)
```
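For context, the registration step mentioned above usually amounts to adding one entry to `_TEXT_GENERATION_MODELS` that maps the architecture name (as it appears in the model's `config.json`) to a `(module, class)` pair. A sketch, with `my_model` / `MyModelForCausalLM` as hypothetical names:

```python
# Sketch of an entry in vllm/model_executor/models/registry.py, mirroring the
# built-in Llama entry; "my_model" / "MyModelForCausalLM" are hypothetical.
_TEXT_GENERATION_MODELS = {
    # ... existing entries ...
    "LlamaForCausalLM": ("llama", "LlamaForCausalLM"),
    "MyModelForCausalLM": ("my_model", "MyModelForCausalLM"),
}
```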
I did a lot of debugging. Comparing my model with Llama, I see that isinstance(model, VllmModelForTextGeneration) (vllm/model_executor/models/interfaces_base.py) returns False for my model but True for Llama. I don't know how to handle this. Can you help me?
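Because the protocol is runtime-checkable, that `isinstance` call only verifies that the expected methods exist on the model, so listing the missing ones points directly at what still has to be implemented. A small diagnostic sketch, assuming the member names used above (take the authoritative list from `interfaces_base.py` in your installed version):

```python
from vllm.model_executor.models.interfaces_base import VllmModelForTextGeneration

from my_model import MyModelForCausalLM  # hypothetical custom model class

# Assumed protocol members; compare with interfaces_base.py in your version.
required = ["forward", "compute_logits", "sample"]
missing = [name for name in required if not hasattr(MyModelForCausalLM, name)]
print("missing protocol methods:", missing)

# issubclass() works on runtime-checkable protocols whose members are all
# methods; it should become True once nothing is missing.
print(issubclass(MyModelForCausalLM, VllmModelForTextGeneration))
```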