This example demonstrates how to integrate the Model Context Protocol (MCP) into a simple CLI chatbot. The implementation showcases MCP's flexibility by supporting multiple tools through MCP servers and is compatible with any LLM provider that follows OpenAI API standards.
- Python 3.10
- `python-dotenv`
- `requests`
- `mcp`
- `uvicorn`
1. Install the dependencies:

   ```bash
   pip install -r requirements.txt
   ```
2. Set up environment variables:

   Create a `.env` file in the root directory and add your API key:

   ```plaintext
   LLM_API_KEY=your_api_key_here
   ```
   Note: The current implementation is configured to use the Groq API endpoint (`https://api.groq.com/openai/v1/chat/completions`) with the `llama-3.2-90b-vision-preview` model. If you plan to use a different LLM provider, you'll need to modify the `LLMClient` class in `main.py` to use the appropriate endpoint URL and model parameters (a sketch of what that class might look like follows these steps).
3. Configure servers:

   The `servers_config.json` file follows the same structure as Claude Desktop, allowing for easy integration of multiple servers. Here's an example:

   ```json
   {
     "mcpServers": {
       "sqlite": {
         "command": "uvx",
         "args": ["mcp-server-sqlite", "--db-path", "./test.db"]
       },
       "puppeteer": {
         "command": "npx",
         "args": ["-y", "@modelcontextprotocol/server-puppeteer"]
       }
     }
   }
   ```
   Environment variables are supported as well. Pass them as you would with the Claude Desktop App. Example:

   ```json
   {
     "mcpServers": {
       "server_name": {
         "command": "uvx",
         "args": ["mcp-server-name", "--additional-args"],
         "env": {
           "API_KEY": "your_api_key_here"
         }
       }
     }
   }
   ```
4. Run the client:

   ```bash
   python main.py
   ```
5. Interact with the assistant:

   The assistant automatically discovers the tools exposed by the configured servers and uses them to respond to your queries.
6. Exit the session:

   Type `quit` or `exit` to end the session.
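The note in step 2 mentions editing the `LLMClient` class to target a different provider. As a rough, illustrative sketch of what such a class might look like (the class name comes from this README, but the method and attribute names here are assumptions, not the actual implementation in `main.py`):

```python
import os

import requests
from dotenv import load_dotenv


class LLMClient:
    """Illustrative sketch of a client for an OpenAI-compatible chat API."""

    def __init__(self) -> None:
        load_dotenv()  # Reads LLM_API_KEY from the .env file.
        self.api_key = os.getenv("LLM_API_KEY")
        # Change these two values to target a different provider or model.
        self.url = "https://api.groq.com/openai/v1/chat/completions"
        self.model = "llama-3.2-90b-vision-preview"

    def get_response(self, messages: list[dict]) -> str:
        headers = {
            "Authorization": f"Bearer {self.api_key}",
            "Content-Type": "application/json",
        }
        payload = {"model": self.model, "messages": messages}
        resp = requests.post(self.url, headers=headers, json=payload, timeout=30)
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]
```

Because the payload follows the OpenAI chat completions format, swapping providers usually only requires changing `self.url` and `self.model`.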
- Tool Discovery: Tools are automatically discovered from configured servers.
- System Prompt: Tool descriptions are dynamically included in the system prompt so the LLM understands the available capabilities (see the sketch after this list).
- Server Integration: Supports any MCP-compatible server; tested with various server implementations, including Uvicorn- and Node.js-based servers.
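For illustration, here is one way the system prompt might be assembled from discovered tool descriptions. This is a minimal sketch: `build_system_prompt` is a hypothetical helper, and the prompt wording is an assumption rather than the exact text used by the chatbot.

```python
def build_system_prompt(tool_descriptions: list[str]) -> str:
    """Hypothetical helper: fold tool descriptions into a system prompt."""
    tools_block = "\n\n".join(tool_descriptions)
    return (
        "You are a helpful assistant with access to these tools:\n\n"
        f"{tools_block}\n\n"
        "When a tool is needed, respond with a JSON object naming the tool "
        "and its arguments; otherwise, reply to the user directly."
    )
```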
- `Configuration`: Manages environment variables and server configurations.
- `Server`: Handles MCP server initialization, tool discovery, and tool execution.
- `Tool`: Represents individual tools with their properties and formatting (sketched below).
- `LLMClient`: Manages communication with the LLM provider.
- `ChatSession`: Orchestrates the interaction between the user, the LLM, and tools.
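To make the `Tool` abstraction concrete, a minimal sketch is shown below. The attribute and method names are assumptions based on the description above, not necessarily those used in `main.py`:

```python
class Tool:
    """Illustrative wrapper around a tool exposed by an MCP server."""

    def __init__(self, name: str, description: str, input_schema: dict) -> None:
        self.name = name
        self.description = description
        self.input_schema = input_schema  # JSON Schema for the tool's arguments

    def format_for_llm(self) -> str:
        # Render the tool as plain text suitable for the system prompt.
        args = ", ".join(self.input_schema.get("properties", {}))
        return f"Tool: {self.name}\nDescription: {self.description}\nArguments: {args}"
```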
- Tool Integration:
  - Tools are dynamically discovered from MCP servers (see the sketch after this list).
  - Tool descriptions are automatically included in the system prompt.
  - Tool execution is handled through the standardized MCP protocol.
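As a rough sketch of the discovery step, the `mcp` client API can be used along these lines. The server parameters mirror the `sqlite` entry from `servers_config.json`; the exact usage in `main.py` may differ:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def discover_tools() -> None:
    # Parameters mirror one entry from servers_config.json.
    params = StdioServerParameters(
        command="uvx",
        args=["mcp-server-sqlite", "--db-path", "./test.db"],
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.list_tools()
            for tool in result.tools:
                print(f"{tool.name}: {tool.description}")


asyncio.run(discover_tools())
```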
- Runtime Flow:
  1. User input is received.
  2. The input is sent to the LLM together with descriptions of the available tools.
  3. The LLM response is parsed (a minimal sketch of this step follows the list):
     - If it's a tool call → execute the tool and capture the result.
     - If it's a direct response → return it to the user.
  4. Tool results are sent back to the LLM for interpretation.
  5. The final response is presented to the user.
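The parsing step decides between a tool call and a direct answer. A minimal sketch is below; it assumes the system prompt asked the LLM to emit tool calls as JSON objects shaped like `{"tool": ..., "arguments": {...}}`, which is an assumption about the prompt format, and `process_llm_response` is a hypothetical helper:

```python
import json


def process_llm_response(llm_response: str) -> tuple[str | None, dict | None]:
    """Return (tool_name, arguments) for a tool call, or (None, None) otherwise."""
    try:
        parsed = json.loads(llm_response)
        if isinstance(parsed, dict) and "tool" in parsed:
            return parsed["tool"], parsed.get("arguments", {})
    except json.JSONDecodeError:
        pass  # Not JSON, so treat it as a direct answer for the user.
    return None, None
```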