
Running the model using Ollama #153

Open
aksapre opened this issue Mar 8, 2025 · 2 comments
Labels
enhancement New feature or request

Comments

@aksapre
aksapre commented Mar 8, 2025

Can we run this code using an LLM provided by Ollama?

@aksapre aksapre added the enhancement New feature or request label Mar 8, 2025
@try1995
try1995 commented Mar 9, 2025

Yes, you can set it up step by step:

  • Set your .env file like this:
OPENAI_API_KEY=ollama
MODEL="qwen2.5:32b"
BASE_URL="http://localhost:11434/v1"
  • Change the get_model function in "src/llm/models.py" (around line 96):
elif model_provider == ModelProvider.OPENAI:
    # Read the model name and base URL from .env so a local
    # OpenAI-compatible server (such as Ollama) can be used.
    api_key = os.getenv("OPENAI_API_KEY")
    model = os.getenv("MODEL")
    base_url = os.getenv("BASE_URL")
    if not api_key:
        # Print error to console
        print("API Key Error: Please make sure OPENAI_API_KEY is set in your .env file.")
        raise ValueError("OpenAI API key not found. Please make sure OPENAI_API_KEY is set in your .env file.")
    return ChatOpenAI(model=model, api_key=api_key, base_url=base_url)
  • Run python src/main.py --ticker NVDA and select any [openai] option; it will use the model you set in the .env file. A sanity-check sketch follows these steps.

  • Enjoy!
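
For reference, this works because Ollama exposes an OpenAI-compatible API under /v1. Here is a minimal sketch to sanity-check the endpoint outside the project; it assumes langchain-openai is installed and the qwen2.5:32b model has already been pulled with ollama pull:

import os
from langchain_openai import ChatOpenAI

# Ollama ignores the API key, but the client requires a non-empty value.
llm = ChatOpenAI(
    model=os.getenv("MODEL", "qwen2.5:32b"),
    api_key=os.getenv("OPENAI_API_KEY", "ollama"),
    base_url=os.getenv("BASE_URL", "http://localhost:11434/v1"),
)

print(llm.invoke("Reply with the single word: pong").content)

If this prints a response, the same .env values will work inside the project.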

@li992
li992 commented Mar 9, 2025


In the LangChain package there is also a model called ChatOllama. It follows the same syntax as the ChatOpenAI and ChatGroq code in llm/models.py, and it does support localhost, so adding a few lines to that file would enable local Ollama models (a rough sketch is below). Here is the link to LangChain's documentation: https://python.langchain.com/docs/integrations/chat/ollama/
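
For illustration, a rough sketch of a helper that get_model could call for an Ollama provider; the OLLAMA_* environment variable names are hypothetical, not the repo's actual names, and it requires the langchain-ollama package:

import os
from langchain_ollama import ChatOllama

def get_ollama_model() -> ChatOllama:
    # Mirrors the ChatOpenAI branch above, but talks to Ollama's
    # native API, so no API key is needed.
    model = os.getenv("OLLAMA_MODEL", "qwen2.5:32b")  # hypothetical env var
    base_url = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")  # hypothetical env var
    return ChatOllama(model=model, base_url=base_url)

Wiring this into get_model would also need an Ollama entry in the ModelProvider enum, analogous to the existing OPENAI and GROQ entries.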
