Running the model using Ollama #153
Can we run this code using an LLM provided by Ollama?
Yes, you can set it up step by step. In your .env file, point the OpenAI settings at the local Ollama server:

```
OPENAI_API_KEY=ollama
MODEL="qwen2.5:32b"
BASE_URL="http://localhost:11434/v1"
```
The OpenAI branch in llm/models.py then reads those values:

```python
elif model_provider == ModelProvider.OPENAI:
    # Get and validate the API key, model name, and base URL
    api_key = os.getenv("OPENAI_API_KEY")
    model = os.getenv("MODEL")
    base_url = os.getenv("BASE_URL")
    if not api_key:
        # Print error to console
        print("API Key Error: Please make sure OPENAI_API_KEY is set in your .env file.")
        raise ValueError("OpenAI API key not found. Please make sure OPENAI_API_KEY is set in your .env file.")
    return ChatOpenAI(model=model, api_key=api_key, base_url=base_url)
```
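As a quick sanity check outside the repo, the same settings can be exercised directly with ChatOpenAI. This is a minimal sketch, assuming Ollama is running locally and qwen2.5:32b has already been pulled (e.g. with `ollama pull qwen2.5:32b`); the prompt and variable names are only illustrative:

```python
# Standalone sanity check (illustrative, not part of llm/models.py):
# confirm the local Ollama server answers via its OpenAI-compatible endpoint.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="qwen2.5:32b",                   # any model already pulled into Ollama
    api_key="ollama",                      # Ollama ignores the key, but ChatOpenAI requires one
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
)

print(llm.invoke("Reply with one word: ready?").content)
```

If that prints a response, the same three environment variables should work for the repo's OpenAI provider path.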
The LangChain package also provides a chat model class called ChatOllama. It follows the same syntax as the ChatOpenAI and ChatGroq code in llm/models.py and supports localhost, so we could add a few lines to that file to enable local Ollama models (a sketch follows below). Here is the LangChain documentation: https://python.langchain.com/docs/integrations/chat/ollama/
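For illustration, here is a minimal sketch of what such an addition could look like, following the pattern of the OpenAI branch above. The ModelProvider.OLLAMA member, the OLLAMA_BASE_URL variable, and the get_model wrapper are assumptions made for this example, not the repo's actual code:

```python
import os
from enum import Enum

from langchain_ollama import ChatOllama  # requires: pip install langchain-ollama


class ModelProvider(str, Enum):
    # Only the hypothetical new member is shown; the repo's existing
    # providers (OpenAI, Groq, ...) would remain alongside it.
    OLLAMA = "Ollama"


def get_model(model_provider: ModelProvider):
    """Return a chat model for the given provider (only the Ollama branch is shown)."""
    if model_provider == ModelProvider.OLLAMA:
        model = os.getenv("MODEL", "qwen2.5:32b")
        # ChatOllama speaks Ollama's native API, so no /v1 suffix and no API key.
        base_url = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
        return ChatOllama(model=model, base_url=base_url)
    raise ValueError(f"Unsupported provider in this sketch: {model_provider}")
```

Compared with routing through the OpenAI-compatible endpoint, this avoids the dummy OPENAI_API_KEY entirely, at the cost of adding the langchain-ollama dependency.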