Pydantic Issue when running Ollama + FastAPI backend #244
When using Ollama as a model source, I get the error

ERROR: Error when generating next question: 1 validation error for LLMStructuredPredictEndEvent output value is not a valid dict (type=type_error.dict)

when it wants to generate the NextQuestions:

create-llama/templates/types/streaming/fastapi/app/api/services/suggestion.py
Lines 50 to 55 in 1d93775

I think this is a llama-index/pydantic problem when calling astructured_predict in the call dispatcher.event(LLMStructuredPredictEndEvent(output=result)). Has anybody seen or fixed this error?
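For context, here is a minimal, hypothetical sketch of the kind of structured-predict call the referenced suggestion.py lines make and of where the validation error can surface; the NextQuestions model, prompt text, and function name are illustrative, not the template's exact code.

```python
# Hypothetical sketch (not the template's exact code): roughly how the
# FastAPI service asks the LLM for follow-up questions via structured
# prediction, and where the Pydantic validation error can surface.
from pydantic import BaseModel

from llama_index.core import Settings
from llama_index.core.prompts import PromptTemplate


class NextQuestions(BaseModel):
    """Schema the structured prediction is validated against (illustrative)."""
    questions: list[str]


NEXT_QUESTION_PROMPT = PromptTemplate(
    "Given the conversation below, suggest 3 follow-up questions "
    "the user might ask.\nConversation:\n{conversation}"
)


async def suggest_next_questions(conversation: str) -> list[str]:
    # astructured_predict asks the LLM for output matching NextQuestions.
    # With models that don't reliably emit structured output (e.g. some
    # Ollama models), the result passed to the instrumentation event
    # LLMStructuredPredictEndEvent(output=result) may not be a valid
    # BaseModel/dict, which produces "output value is not a valid dict".
    result = await Settings.llm.astructured_predict(
        NextQuestions, NEXT_QUESTION_PROMPT, conversation=conversation
    )
    return result.questions
```

If that reading is correct, the failure is less about suggestion.py itself and more about whether the configured Ollama model returns output that llama-index can coerce into the NextQuestions schema.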
Comments
Thanks for the reply.

@BastianSpatz As Typescript doesn't have structured_predict, the Typescript streaming template generates the next questions differently:

create-llama/templates/components/llamaindex/typescript/streaming/suggestion.ts
Lines 16 to 39 in 8ce4a85

Can you try using the NextJS template first with your Ollama model - if that works, you could modify the FastAPI suggestion.py to use the same approach.
Thank you for the help, I'll check it out :)

Great. Can you let me know the result? We can keep the ticket open till then.

Using the same approach as in the Typescript version, it works.

Cool. Can you send a PR or post your changes here?
Sorry, here is what I changed in the FastAPI suggestion.py (a sketch of this kind of change is shown below).

I have noticed that after a few questions, the format of the output questions by the LLM seems to deteriorate.
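The actual diff posted in the thread is not reproduced above. As a hedged reconstruction, a TypeScript-style change to the FastAPI service would swap astructured_predict for a plain completion plus parsing of a numbered list; the prompt wording and the generate_next_questions name below are illustrative assumptions.

```python
# Hypothetical sketch of the TypeScript-style approach: ask for a plain
# numbered list and parse it, instead of using astructured_predict.
import re

from llama_index.core import Settings
from llama_index.core.prompts import PromptTemplate

NEXT_QUESTIONS_PROMPT = PromptTemplate(
    "You're a helpful assistant! Your task is to suggest the next questions "
    "the user might ask, based on the conversation history.\n"
    "Here is the conversation history:\n{conversation}\n"
    "Please give me 3 questions. Answer strictly in the following format:\n"
    "1. [question 1]\n2. [question 2]\n3. [question 3]"
)


async def generate_next_questions(conversation: str) -> list[str]:
    # A plain text completion works with Ollama models that don't support
    # structured / function-calling output.
    response = await Settings.llm.acomplete(
        NEXT_QUESTIONS_PROMPT.format(conversation=conversation)
    )
    # Pull the questions out of the numbered list in the raw text.
    matches = re.findall(r"^\s*\d+\.\s*(.+)$", response.text, flags=re.MULTILINE)
    return [question.strip() for question in matches]
```

Because this relies on the model sticking to the numbered-list format, drift in the model's output formatting (as noted above) shows up as missing or malformed questions rather than as a validation error.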
Thanks @BastianSpatz