Commit c788179

Expose mcp-agent apps (MCPApp) as MCP servers (#112)
* WIP app server changes
* Some more WIP to get app server set up for workflows
* WIP update app server
* Updates to workflow
* Simplify workflows to remove pause functionality.
* The app server kinda sorta works!
* Agent server is operational
* Comment out cancellation
* Update readme
1 parent 6ff26f3 commit c788179

21 files changed (+2677 −207 lines)

Diff for: examples/mcp_basic_slack_agent/main.py

+1 line

```diff
@@ -7,6 +7,7 @@

 app = MCPApp(name="mcp_basic_agent")

+
 async def example_usage():
     async with app.run() as agent_app:
         logger = agent_app.logger
```

Diff for: examples/mcp_hello_world/main.py

+5 −2 lines

```diff
@@ -33,9 +33,12 @@ async def example_usage():

         try:
             filesystem_client = await connection_manager.get_server(
-                server_name="filesystem", client_session_factory=MCPAgentClientSession
+                server_name="filesystem",
+                client_session_factory=MCPAgentClientSession,
+            )
+            logger.info(
+                "filesystem: Connected to server with persistent connection."
             )
-            logger.info("filesystem: Connected to server with persistent connection.")

             fetch_client = await connection_manager.get_server(
                 server_name="fetch", client_session_factory=MCPAgentClientSession
```

Diff for: examples/mcp_researcher/main.py

+8 −7 lines

```diff
@@ -13,6 +13,7 @@

 app = MCPApp(name="mcp_root_test")

+
 async def example_usage():
     async with app.run() as agent_app:
         folder_path = Path("agent_folder")
@@ -22,13 +23,13 @@ async def example_usage():

         # Overwrite the config because full path to agent folder needs to be passed
         context.config.mcp.servers["interpreter"].args = [
-                "run",
-                "-i",
-                "--rm",
-                "--pull=always",
-                "-v",
-                f"{os.path.abspath('agent_folder')}:/mnt/data/",
-                "ghcr.io/evalstate/mcp-py-repl:latest",
+            "run",
+            "-i",
+            "--rm",
+            "--pull=always",
+            "-v",
+            f"{os.path.abspath('agent_folder')}:/mnt/data/",
+            "ghcr.io/evalstate/mcp-py-repl:latest",
         ]

         async with MCPConnectionManager(context.server_registry):
```

(The second hunk is a whitespace-only re-indentation of the interpreter server's args list; the list contents are unchanged.)

Diff for: examples/workflow_mcp_server/README.md

+185 lines (new file)

````markdown
# Workflow MCP Server Example

This example demonstrates three approaches to creating agents and workflows:

1. Traditional workflow-based approach with manual agent creation
2. Programmatic agent configuration using AgentConfig
3. Declarative agent configuration using FastMCPApp decorators

All three approaches can use `app_server.py` to expose the agents and workflows as an MCP server.

## Concepts Demonstrated

- Using the `Workflow` base class to create custom workflows
- Registering workflows with an `MCPApp`
- Creating and registering agent configurations with both programmatic and declarative approaches
- Exposing workflows and agents as MCP tools using `app_server.py`
- Connecting to a workflow server using `gen_client`
- Lazy instantiation of agents from configurations when their tools are called

## Components in this Example

1. **DataProcessorWorkflow**: A traditional workflow that processes data in three steps:

   - Finding and retrieving content from a source (file or URL)
   - Analyzing the content
   - Formatting the results

2. **SummarizationWorkflow**: A traditional workflow that summarizes text content:

   - Generates a concise summary
   - Extracts key points
   - Returns structured data

3. **Research Team**: A parallel workflow created using the agent configuration system:

   - Uses a fan-in/fan-out pattern with multiple specialized agents
   - Demonstrates declarative workflow pattern configuration

4. **Specialist Router**: A router workflow created using FastMCPApp decorators:

   - Routes requests to specialized agents based on content
   - Shows how to use the decorator syntax for workflow creation

## How to Run

1. Copy the example secrets file:

   ```
   cp mcp_agent.secrets.yaml.example mcp_agent.secrets.yaml
   ```

2. Edit `mcp_agent.secrets.yaml` to add your API keys.

3. Run the client, which will automatically start the server:

   ```
   uv run client.py
   ```

## Code Structure

- `basic_agent_server.py`: Defines the BasicAgentWorkflow and creates an MCP server
- `client.py`: Connects to the server and runs the workflow
- `mcp_agent.config.yaml`: Configuration for MCP servers
- `mcp_agent.secrets.yaml`: Secret API keys (not included in the repository)

## Understanding the Code

### Approach 1: Traditional Workflow Definition

Workflows are defined by subclassing the `Workflow` base class and implementing:

- The `run` method containing the main workflow logic
- Optional: `initialize` and `cleanup` methods for setup and teardown
- Optional: a custom `create` class method for specialized instantiation

Workflows are registered with the MCPApp using the `@app.workflow` decorator.

Example:

```python
app = MCPApp(name="workflow_mcp_server")

@app.workflow
class DataProcessorWorkflow(Workflow[str]):
    @classmethod
    async def create(cls, executor: Executor, name: str | None = None, **kwargs: Any) -> "DataProcessorWorkflow":
        # Custom instantiation logic
        workflow = cls(executor=executor, name=name, **kwargs)
        await workflow.initialize()
        return workflow

    async def initialize(self):
        # Set up resources like agents and LLMs
        ...

    async def run(self, source: str, analysis_prompt: Optional[str] = None, output_format: Optional[str] = None) -> WorkflowResult[str]:
        # Workflow implementation...
        ...

    async def cleanup(self):
        # Clean up resources
        ...
```

### Approach 2: Programmatic Agent Configuration

Agent configurations can be created programmatically using Pydantic models:

```python
# Create a basic agent configuration
research_agent_config = AgentConfig(
    name="researcher",
    instruction="You are a helpful research assistant that finds information and presents it clearly.",
    server_names=["fetch", "filesystem"],
    llm_config=AugmentedLLMConfig(
        factory=OpenAIAugmentedLLM,
    )
)

# Create a parallel workflow configuration
research_team_config = AgentConfig(
    name="research_team",
    instruction="You are a research team that produces high-quality, accurate content.",
    parallel_config=ParallelWorkflowConfig(
        fan_in_agent="editor",
        fan_out_agents=["summarizer", "fact_checker"],
    )
)

# Register the configurations with the app
app.register_agent_config(research_agent_config)
app.register_agent_config(research_team_config)
```

### Approach 3: Declarative Agent Configuration with FastMCPApp

FastMCPApp provides decorators for creating agent configurations in a more declarative style:

```python
fast_app = FastMCPApp(name="fast_workflow_mcp_server")

# Basic agent with OpenAI LLM
@fast_app.agent("assistant", "You are a helpful assistant that answers questions concisely.",
                server_names=["calculator"])
def assistant_config(config):
    config.llm_config = AugmentedLLMConfig(
        factory=OpenAIAugmentedLLM,
    )
    return config

# Router workflow with specialist agents
@fast_app.router("specialist_router", "You route requests to the appropriate specialist.",
                 agent_names=["mathematician", "programmer", "writer"])
def router_config(config):
    config.llm_config = AugmentedLLMConfig(
        factory=OpenAIAugmentedLLM
    )
    config.router_config.top_k = 1
    return config
```

### Exposing Workflows and Agents as Tools

The MCP server automatically exposes both workflows and agent configurations as tools:

**Workflow tools**:

- Running a workflow: `workflows/{workflow_id}/run`
- Checking status: `workflows/{workflow_id}/get_status`
- Controlling workflow execution: `workflows/resume`, `workflows/cancel`

**Agent tools**:

- Running an agent: `agents/{agent_name}/generate`
- Getting a string response: `agents/{agent_name}/generate_str`
- Getting a structured response: `agents/{agent_name}/generate_structured`

Agent configurations are lazily instantiated when their tools are called. If the agent is already active, the existing instance is reused.

### Connecting to the Workflow Server

The client connects to the workflow server using the `gen_client` function:

```python
async with gen_client("workflow_server", context.server_registry) as server:
    # Connect and use the server
    ...
```

You can then call both workflow and agent tools through this client connection.
````

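The README above stops at opening the client connection. Below is a minimal sketch of what calling the exposed tools could look like: the tool names mirror the `workflows/...` and `agents/...` scheme listed in the README, and the `workflow_server`, `DataProcessorWorkflow`, and `researcher` names come from the README itself, but the `gen_client` import path and the exact argument and result shapes are assumptions for illustration, not taken from this commit.

```python
# Hedged sketch: assumes gen_client lives at mcp_agent.mcp.gen_client and that
# tool arguments are passed as a plain dict; neither is verified against this commit.
from mcp_agent.mcp.gen_client import gen_client


async def use_workflow_server(context):
    async with gen_client("workflow_server", context.server_registry) as server:
        # Start the DataProcessorWorkflow via its run tool
        run_result = await server.call_tool(
            "workflows/DataProcessorWorkflow/run",
            arguments={"source": "https://modelcontextprotocol.io/introduction"},
        )

        # Check on the workflow's progress (argument shape is assumed)
        status = await server.call_tool(
            "workflows/DataProcessorWorkflow/get_status", arguments={}
        )

        # Ask a lazily instantiated agent for a plain-text answer
        answer = await server.call_tool(
            "agents/researcher/generate_str",
            arguments={"message": "Summarize the fetched page in one sentence."},
        )
        return run_result, status, answer
```

Each call returns a standard MCP tool result; what its content contains depends on how `app_server.py` serializes workflow and agent outputs.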
Diff for: examples/workflow_mcp_server/basic_agent_server.py

+132 lines (new file)

```python
"""
Workflow MCP Server Example

This example demonstrates three approaches to creating agents and workflows:
1. Traditional workflow-based approach with manual agent creation
2. Programmatic agent configuration using AgentConfig
3. Declarative agent configuration using FastMCPApp decorators
"""

import asyncio
import os
import logging

from mcp_agent.app import MCPApp
from mcp_agent.app_server import create_mcp_server_for_app
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.llm.augmented_llm import RequestParams
from mcp_agent.workflows.llm.llm_selector import ModelPreferences
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM
from mcp_agent.workflows.llm.augmented_llm_anthropic import AnthropicAugmentedLLM
from mcp_agent.executor.workflow import Workflow, WorkflowResult

# Initialize logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Create a single MCPApp instance for the server
app = MCPApp(name="basic_agent_server", description="Basic agent server example")


@app.workflow
class BasicAgentWorkflow(Workflow[str]):
    """
    A basic workflow that demonstrates how to create a simple agent.
    This workflow is used as an example of a basic agent configuration.
    """

    async def run(self, input: str) -> WorkflowResult[str]:
        """
        Run the basic agent workflow.

        Args:
            input: The input string to prompt the agent.

        Returns:
            WorkflowResult containing the processed data.
        """

        logger = app.logger
        context = app.context

        logger.info("Current config:", data=context.config.model_dump())
        logger.info("Received input:", data=input)

        # Add the current directory to the filesystem server's args
        context.config.mcp.servers["filesystem"].args.extend([os.getcwd()])

        finder_agent = Agent(
            name="finder",
            instruction="""You are an agent with access to the filesystem,
            as well as the ability to fetch URLs. Your job is to identify
            the closest match to a user's request, make the appropriate tool calls,
            and return the URI and CONTENTS of the closest match.""",
            server_names=["fetch", "filesystem"],
        )

        async with finder_agent:
            logger.info("finder: Connected to server, calling list_tools...")
            result = await finder_agent.list_tools()
            logger.info("Tools available:", data=result.model_dump())

            llm = await finder_agent.attach_llm(OpenAIAugmentedLLM)
            result = await llm.generate_str(
                message="Print the contents of mcp_agent.config.yaml verbatim",
            )
            logger.info(f"mcp_agent.config.yaml contents: {result}")

            # Let's switch the same agent to a different LLM
            llm = await finder_agent.attach_llm(AnthropicAugmentedLLM)

            result = await llm.generate_str(
                message="Print the first 2 paragraphs of https://modelcontextprotocol.io/introduction",
            )
            logger.info(f"First 2 paragraphs of Model Context Protocol docs: {result}")

            # Multi-turn conversations
            result = await llm.generate_str(
                message="Summarize those paragraphs in a 128 character tweet",
                # You can configure advanced options by setting the request_params object
                request_params=RequestParams(
                    # See https://modelcontextprotocol.io/docs/concepts/sampling#model-preferences for more details
                    modelPreferences=ModelPreferences(
                        costPriority=0.1,
                        speedPriority=0.2,
                        intelligencePriority=0.7,
                    ),
                    # You can also set the model directly using the 'model' field
                    # Generally request_params type aligns with the Sampling API type in MCP
                ),
            )
            logger.info(f"Paragraph as a tweet: {result}")
            return WorkflowResult(value=result)


async def main():
    async with app.run() as agent_app:
        # Add the current directory to the filesystem server's args if needed
        context = agent_app.context
        if "filesystem" in context.config.mcp.servers:
            context.config.mcp.servers["filesystem"].args.extend([os.getcwd()])

        # Log registered workflows and agent configurations
        logger.info(f"Creating MCP server for {agent_app.name}")

        logger.info("Registered workflows:")
        for workflow_id in agent_app.workflows:
            logger.info(f"  - {workflow_id}")

        logger.info("Registered agent configurations:")
        for name, config in agent_app.agent_configs.items():
            workflow_type = config.get_agent_type() or "basic"
            logger.info(f"  - {name} ({workflow_type})")

        # Create the MCP server that exposes both workflows and agent configurations
        mcp_server = create_mcp_server_for_app(agent_app)

        # Run the server over stdio
        await mcp_server.run_stdio_async()


if __name__ == "__main__":
    asyncio.run(main())
```

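The README states that `client.py` starts this server and runs the workflow. A hedged sketch of what that client side might look like follows, reusing the `gen_client` pattern from the README; the `basic_agent_server` server-registry entry, the `workflows/BasicAgentWorkflow/run` tool id, and the argument shape are assumptions for illustration and are not taken from this commit.

```python
# Hedged sketch of a client.py counterpart, assuming a "basic_agent_server"
# entry in mcp_agent.config.yaml launches basic_agent_server.py over stdio;
# names and argument shapes are illustrative only.
from mcp_agent.app import MCPApp
from mcp_agent.mcp.gen_client import gen_client

client_app = MCPApp(name="workflow_client")


async def run_basic_agent_workflow():
    async with client_app.run() as agent_app:
        context = agent_app.context
        async with gen_client("basic_agent_server", context.server_registry) as server:
            # List the tools exposed by create_mcp_server_for_app
            tools = await server.list_tools()
            agent_app.logger.info("Server tools:", data=tools.model_dump())

            # Invoke the BasicAgentWorkflow through its run tool
            result = await server.call_tool(
                "workflows/BasicAgentWorkflow/run",
                arguments={"input": "Print the contents of mcp_agent.config.yaml verbatim"},
            )
            return result
```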