Add Local Model Support via Ollama Integration #2
Comments
Hey @crmne, this is a fantastic proposal! Local model support via Ollama would be a huge win. I really like the proposed configuration and usage pattern. Keeping the API consistent with existing cloud models minimizes the learning curve for users. The example snippet is clear and concise:

# Configuration
RubyLLM.configure do |config|
  config.ollama_host = "http://localhost:11434" # Default
end
# Usage remains identical to cloud models
chat = RubyLLM.chat(model: 'llama2', provider: :ollama) # Explicitly set provider
chat.ask("What's the capital of France?")
# Or with embeddings
RubyLLM.embed("Ruby is a programmer's best friend", model: 'nomic-embed-text', provider: :ollama) # Explicitly set provider

One minor suggestion: adding an explicit provider: option, as in the snippet above, keeps model resolution unambiguous.

Regarding the technical details, your outline is solid. The separation into complete, embed, api_base, and capabilities maps cleanly onto the existing abstractions.

One thing to consider during implementation is error handling. Ollama might return different error codes and messages compared to cloud providers. The provider should gracefully handle these differences and translate them into consistent RubyLLM errors.
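To make that concrete, here is a minimal sketch of what such a translation layer could look like. The error class names and the error_for helper are illustrative assumptions, not the gem's actual API:

require 'json'

# Hypothetical error classes for illustration only.
module RubyLLM
  class Error < StandardError; end
  class ModelNotFoundError < Error; end
  class ServerError < Error; end

  module Providers
    module Ollama
      # Translate an Ollama HTTP error response into one consistent error type.
      def self.error_for(status, body)
        message = (JSON.parse(body)['error'] rescue nil) ||
                  "Ollama request failed (HTTP #{status})"

        case status
        when 404 then ModelNotFoundError.new(message) # e.g. model not pulled locally
        when 500..599 then ServerError.new(message)
        else Error.new(message)
        end
      end
    end
  end
end

# Example: a 404 from Ollama surfaces as the same kind of error users see from cloud providers.
error = RubyLLM::Providers::Ollama.error_for(404, '{"error":"model \'llama2\' not found"}')
puts error.class # => RubyLLM::ModelNotFoundError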
FWIW, Ollama has an OpenAI compatible API. In theory this should work so long as you can specify a base URL, example:

from openai import OpenAI

client = OpenAI(
    base_url='http://localhost:11434/v1',
    api_key='ollama'
)

Further, Ollama also supports tools, which would be great to integrate. The Ollama docs state that the OpenAI compatible API also supports tools.
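Borrowing the api_base override trick from the next comment, the same idea in RubyLLM might look roughly like the sketch below. The override point is a guess at the gem's internals, Ollama ignores the API key (but a value must be set), and 'llama2' would also need to be known to the model registry:

require 'ruby_llm'

# Point the OpenAI provider at the local Ollama server's
# OpenAI-compatible endpoint instead of api.openai.com.
module RubyLLM
  module Providers
    module OpenAI
      def api_base
        'http://localhost:11434/v1'
      end
    end
  end
end

RubyLLM.configure do |config|
  config.openai_api_key = 'ollama' # any non-empty value; Ollama doesn't check it
end

chat = RubyLLM.chat(model: 'llama2')
response = chat.ask("What's the capital of France?")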
I'm working with xAI's Grok through the OpenAI provider, overriding the API base and models file:

require 'ruby_llm'
require 'pathname'
require 'debug_me' # provides the debug_me helper used below
include DebugMe

HERE = Pathname.pwd
XAI_BASE_URL = 'https://api.x.ai/v1' # xAI's OpenAI-compatible endpoint
XAI_API_KEY = ENV.fetch('XAI_API_KEY') # supplied via the environment
module RubyLLM
  module Providers
    module OpenAI
      def api_base
        XAI_BASE_URL
      end
    end
  end

  class Models
    class << self
      def models_file
        File.join(HERE, 'xai_models.json')
      end
    end
  end
end
# Configure ruby_llm with the API key
RubyLLM.configure do |config|
  config.openai_api_key = XAI_API_KEY
end
# Initialize the chat client with the model
chat = RubyLLM.chat(model: 'grok-2-latest')
# Send the message
response = chat.ask('What is the meaning of xyzzy?')
debug_me { [:response] }
Closed by 89c371a
TL;DR: Add support for running fully local models via Ollama.
Background
While cloud models offer state-of-the-art capabilities, there are compelling cases for running models locally.
Ollama provides an excellent way to run models like Llama, Mistral, and others locally with a simple API that's compatible with our existing architecture.
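For reference, Ollama's native chat endpoint is just JSON over HTTP, so a raw call (assuming a local server with llama2 already pulled) looks roughly like this:

require 'net/http'
require 'json'
require 'uri'

# One-shot, non-streaming chat request against a local Ollama server.
uri = URI('http://localhost:11434/api/chat')
payload = {
  model: 'llama2',
  messages: [{ role: 'user', content: "What's the capital of France?" }],
  stream: false
}

response = Net::HTTP.post(uri, payload.to_json, 'Content-Type' => 'application/json')
puts JSON.parse(response.body).dig('message', 'content')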
Proposed Solution
Add a new provider interface for Ollama that implements our existing abstractions.
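For a sense of shape, such a provider might look roughly like the sketch below. Only the method names (taken from the Technical Details section) come from this issue; the module layout and signatures are assumptions, not a working implementation:

# Hypothetical skeleton only; not the gem's actual internals.
module RubyLLM
  module Providers
    module Ollama
      module_function

      # Returns the Ollama API endpoint (would come from config.ollama_host).
      def api_base
        'http://localhost:11434'
      end

      # Chat: POST {api_base}/api/chat with the message history,
      # return the assistant reply in the usual Message shape.
      def complete(messages, model:)
        # ...
      end

      # Embeddings: POST {api_base}/api/embeddings, return the vector(s).
      def embed(text, model:)
        # ...
      end

      # Describe what the locally served models support
      # (context window, streaming, tools, ...).
      def capabilities
        {}
      end
    end
  end
end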
Technical Details
For those looking to help implement this, you'll need to:
Create lib/ruby_llm/providers/ollama/ with:
- complete - For chat functionality
- embed - For embeddings
- api_base - Returns the Ollama API endpoint
- capabilities - Define model capabilities

The PR should include:
Benefits