[Feature] MCP bridge support #7934
Comments
I would love this |
added to March 2025 roadmap 🔥 |
Hi @James4Ever0, @turian and everyone else on this thread. Initial implementation is done. Is this what you wanted? #9426 I'd love your feedback on this while it's in beta. You can centrally define all MCP tools on the litellm config, and your MCP clients can call list + call tool on litellm (h/t @wagnerjt).
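For illustration, a rough sketch of what that central config could look like (the keys below are my recollection of the beta docs, not quoted from this thread, so treat them as assumptions):

mcp_tools:
  - name: "get_current_time"
    description: "Get the current time"
    input_schema:
      type: "object"
      properties:
        format:
          type: "string"
          description: "The format of the time to return"
      required: ["format"]
    handler: "mcp_tools.get_current_time"  # dotted path to the python handler (assumed key)

MCP clients can then connect to the litellm proxy and call list tool / call tool against these centrally defined tools. |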
Hi @ishaan-jaff, thanks for your contribution to this. However, I think what @James4Ever0 is looking for is to have LiteLLM act as a bridge between the clients and the MCP Server, utilizing the tools from the existing MCP Server (similar to SecretiveShell/MCP-Bridge), rather than having LiteLLM function as an MCP Server itself. That said, this approach might still be useful in certain use cases. |
Thanks @ishaan-jaff for this. I will be testing it when I get a moment! I just wanted to share with @forpr1093 and @James4Ever0 that an MCP-bridge solution would be really great as a single, easy-to-use entry point for the various deployed MCP servers. The catch is that MCP provides a number of things (Resources, Tools, and Prompts), and the MCP specification is changing quickly around transport and authentication, so it is probably best to start small. While this first PR is only an MCP server on top of the litellm proxy, another win would be to simply incorporate an MCP client for tools into the litellm SDK so it can bring in tools from various MCP-backed servers |
What do you think about this interface @wagnerjt @forpr1093? This is an OpenAI MCP bridge.

LiteLLM Python SDK with MCP Tools

import asyncio

import litellm
from litellm import experimental_createMCPClient
from litellm.mcp_stdio import Experimental_StdioMCPTransport

async def main():
    client_one = None
    try:
        # Initialize an MCP client to connect to a `stdio` MCP server:
        transport = Experimental_StdioMCPTransport(
            command='node',
            args=['src/stdio/dist/server.js']
        )
        client_one = await experimental_createMCPClient(
            transport=transport
        )

        # Get tools from the MCP client
        tool_set_one = await client_one.list_tools()
        tools = tool_set_one

        # Create a completion with the MCP tools
        response = await litellm.acompletion(
            model="gpt-4o",
            tools=tools,
            messages=[
                {
                    "role": "user",
                    "content": "Find products under $100"
                }
            ]
        )
        print(response.choices[0].message.content)
    except Exception as error:
        print(error)
    finally:
        await asyncio.gather(
            client_one.close() if client_one else asyncio.sleep(0),
        )

if __name__ == "__main__":
    asyncio.run(main())

LiteLLM Proxy with MCP Tools

import asyncio
from openai import OpenAI
from litellm import experimental_createMCPClient
from litellm.mcp_stdio import Experimental_StdioMCPTransport

async def main():
    client_one = None
    try:
        # Initialize an MCP client to connect to a `stdio` MCP server:
        transport = Experimental_StdioMCPTransport(
            command='node',
            args=['src/stdio/dist/server.js']
        )
        client_one = await experimental_createMCPClient(
            transport=transport
        )

        # Get tools from MCP client
        tool_set_one = await client_one.list_tools()
        tools = tool_set_one

        # Initialize OpenAI client with custom base URL (the LiteLLM proxy)
        openai_client = OpenAI(
            base_url="http://localhost:4000",
            api_key="your-api-key"
        )

        # Create completion with the tools
        response = openai_client.chat.completions.create(
            model="gpt-4o",
            tools=tools,
            messages=[
                {
                    "role": "user",
                    "content": "Find products under $100"
                }
            ]
        )
        print(response.choices[0].message.content)
    except Exception as error:
        print(error)
    finally:
        await asyncio.gather(
            client_one.close() if client_one else asyncio.sleep(0),
        )

if __name__ == "__main__":
    asyncio.run(main()) |
fwiw, vercel just added mcp to the ai sdk, might be a good place to look for clues => https://github.com/search?q=repo%3Avercel%2Fai%20mcp&type=code |
Hi everyone, here's our initial implementation of an MCP bridge with the litellm python SDK. Is this what you wanted @wagnerjt @forpr1093 @turian @James4Ever0? #9436

Overview

LiteLLM acts as an MCP bridge to utilize MCP tools with all LiteLLM supported models. LiteLLM offers the following features for using MCP, covered in the usage examples below.
Usage

1. List Available MCP Tools

In this example we'll use litellm.experimental_mcp_client.load_mcp_tools to list the tools exposed by an MCP server and pass them to the LLM:
# Create server parameters for stdio connection
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
import json
import os

import litellm
from litellm import experimental_mcp_client

server_params = StdioServerParameters(
    command="python3",
    # Make sure to update to the full absolute path to your mcp_server.py file
    args=["./mcp_server.py"],
)

async with stdio_client(server_params) as (read, write):
    async with ClientSession(read, write) as session:
        # Initialize the connection
        await session.initialize()

        # Get tools
        tools = await experimental_mcp_client.load_mcp_tools(session=session, format="openai")
        print("MCP TOOLS: ", tools)

        messages = [{"role": "user", "content": "what's (3 + 5)"}]
        llm_response = await litellm.acompletion(
            model="gpt-4o",
            api_key=os.getenv("OPENAI_API_KEY"),
            messages=messages,
            tools=tools,
        )
        print("LLM RESPONSE: ", json.dumps(llm_response, indent=4, default=str))

2. List and Call MCP Tools

In this example we'll use litellm.experimental_mcp_client.call_openai_tool to call an MCP tool.
The first LLM response returns a list of OpenAI tool calls. We take the first tool call from the LLM response and pass it to litellm.experimental_mcp_client.call_openai_tool, which executes the tool against the MCP server over the open session.
|
Docs here: https://docs.litellm.ai/docs/mcp. I'd love feedback on this from the litellm community!

Bonus - Contributor issue - Can we get help with this?

Task: LiteLLM should maintain a json of all known MCP servers. Can we get help with a script that scrapes all servers here: https://github.com/modelcontextprotocol/servers/tree/main/src, stores them as json, and adds the result here: https://github.com/BerriAI/litellm/blob/main/mcp_servers.json? The benefit of this is that litellm users can then easily reference well-known MCP servers. Thoughts @wagnerjt @rawwerks @James4Ever0? Each server can be stored as the following (a rough sketch of such a script follows the example below):

{
"brave-search": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"-e",
"BRAVE_API_KEY",
"mcp/brave-search"
],
"env": {
"BRAVE_API_KEY": "YOUR_API_KEY_HERE"
}
}
} |
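A rough sketch of what such a scraper script could look like (this uses GitHub's public contents API; the per-server command/args below are placeholders, since the real launch commands would have to come from each server's own docs):

# illustrative sketch only
import json
import requests

# List the server directories under modelcontextprotocol/servers/src
API_URL = "https://api.github.com/repos/modelcontextprotocol/servers/contents/src"

def scrape_known_servers() -> dict:
    resp = requests.get(API_URL, timeout=30)
    resp.raise_for_status()
    servers = {}
    for entry in resp.json():
        if entry["type"] != "dir":
            continue
        name = entry["name"]
        # Placeholder entry: real command/args/env must be filled in per server
        servers[name] = {
            "command": "docker",
            "args": ["run", "-i", "--rm", f"mcp/{name}"],
            "env": {},
        }
    return servers

if __name__ == "__main__":
    with open("mcp_servers.json", "w") as f:
        json.dump(scrape_known_servers(), f, indent=2)
|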
Hey @ishaan-jaff. I want to start by saying that I have spent some time today experimenting with the mcp client so far. I will test out the other parts I was fumbling with, and the proxy aspect, next week when I have more time. First, I really like that you went with making the MCP client config and passing the session in as an argument. This gives us flexibility on the various transport layers (sse vs stdio) and the security measures that will come in the future. Very nice 👏! I tested a go-based MCP server over sse with the mcp client and was able to
Side note on the script to maintain the list of MCP servers that are out there: maybe it is sufficient to add a link to https://mcp-get.com/ to the docs until the official mcp-registry comes through. |
Wanted to bump the thread that the new spec came out today! I'm still reviewing it myself but wanted to highlight the SSE -> HTTP transport change as well as the authentication changes. Once I get my hands on it more, I'll come back with additional feedback and proposals https://spec.modelcontextprotocol.io/specification/2025-03-26/ |
Thanks for sharing the new spec @wagnerjt. If I'm not mistaken we'll need to wait for the mcp python SDK to support the new HTTP transport too |
Having an example mcp_server.py would have been helpful when I was trying this out, e.g.

# server.py
from mcp.server.fastmcp import FastMCP

# Create an MCP server
mcp = FastMCP("Demo")
# Add an addition tool
@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

# Add a dynamic greeting resource
@mcp.resource("greeting://{name}")
def get_greeting(name: str) -> str:
    """Get a personalized greeting"""
    return f"Hello, {name}!"

if __name__ == "__main__":
    mcp.run() |
So, when will such support be part of litellm? |
Hi @ishaan-jaff, thank you for all the work on MCP support so far! Is it correct that currently (as of v1.65.0) the only way to 'onboard' new servers onto litellm is by setting the SSE url, e.g.
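Roughly something like the following (the exact config keys here are an assumption on my part rather than something quoted from the docs):

mcp_servers:
  my_server:
    url: "http://localhost:8000/sse"  # SSE endpoint of an already-running MCP server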
Do you have any plans to add support for the Claude Desktop spec with command / args / env (the same shape as the brave-search example earlier in this thread)?
We're looking at integrating our app with some MCP servers that we plan to pull as docker images, so ideally we could just specify the docker command to run for each server. |
hi @nbailey-sigtech - help me understand this a bit better. If given a non-SSE server, would litellm then need to run npx -y ...? In my SSE implementation we're forwarding requests to the SSE server (cc @wagnerjt any thoughts on how we can support this?) |
The support for LiteLLM Proxy to act as the MCP bridge with multiple servers over SSE is now in v1.65.1-nightly. You can see the respective PR here. I personally think LiteLLM should not support the functionality of starting and running various MCP servers from commands @nbailey-sigtech. There are too many languages, execution patterns, etc. In a development sense, you can write a shell script to loop over and start the servers rather than embedding all of this within LiteLLM. If you really wanted to, you could technically bake all of the dependencies into the Dockerfile to do this. |
Makes sense, just wanted to know if that was on the roadmap. Interested to see what direction you take this in. Thanks both for the speedy response! |
Is this issue now complete? |
I love this, can we declare MCP server handle as |
@wagnerjt Given that a majority of MCP servers currently use stdio, it would be helpful to have a way to opt in to supporting stdio servers without having to fire up an HTTP server. |
I get where you are coming from @bendavis78, but with the introduction of docker's mcp toolkit, everything will more than likely be run and communicate over http (although there is still a way to use stdio... it is sort of living in a different process). This isn't my decision, but I am of the opinion that the maintainers do not need to worry about the upkeep of stdio as a feature. Come join the discussion on more mcp features here. |
The Feature
Act as an MCP bridge to utilize tools and provide enhanced responses.
LiteLLM will read an MCP server config and act as a middleman between the MCP server, the LLM server, and the LLM client.
There is an existing project, SecretiveShell/MCP-Bridge, for bridging MCP servers to OpenAI-compatible clients.
Motivation, pitch
MCP is a general protocol for LLM agents. By acting as a bridge, LiteLLM can integrate MCP into existing chat systems and support wider use cases.
Are you a ML Ops Team?
Yes