Environment:
OS: Debian GNU/Linux 11 (bullseye) / kernel: Linux 5.10.0-34-cloud-amd64 / x86-64 (on GCP Vertex AI Workbench)
Python Version: 3.10.16
modelcontextprotocol SDK Version: v1.5.0
anyio Version: v4.9.0
Description:
When using the documented mcp.client.stdio.stdio_client to connect to a mcp.server.fastmcp.FastMCP server running via the stdio transport (await mcp.run_stdio_async()), the client consistently hangs during the await session.initialize() call, eventually timing out.
Extensive debugging using monkeypatching revealed the following sequence:
The client connects successfully via stdio_client.
The client sends the initialize request.
The server process starts correctly.
The background task within mcp.server.stdio.stdio_server successfully reads the initialize request from the process's stdin (using anyio.wrap_file(TextIOWrapper(...))).
This background task successfully sends the validated JSONRPCMessage onto the anyio memory stream (read_stream_writer) intended for the server's main processing loop.
The server's main processing loop, specifically within mcp.shared.session.BaseSession._receive_loop, awaits messages on the receiving end of that same anyio memory stream (async for message in self._read_stream:).
Crucially, the async for loop in BaseSession._receive_loop never yields the message that was sent to the memory stream. It remains blocked.
Because the initialize message is never received by the BaseSession loop, no response is generated.
The client eventually times out waiting for the initialize response.
This indicates a failure in message passing across the anyio memory stream used internally by the stdio transport implementation, specifically between the task group managing stdio bridging and the task group managing session message processing, when running under the asyncio backend in this configuration.
A separate test confirmed that replacing the internal anyio memory streams with standard asyncio.Queues does allow the message to be transferred successfully between these task contexts, allowing initialization and subsequent communication to proceed. This strongly suggests the issue lies within the anyio memory stream implementation or its usage in this specific cross-task-group stdio scenario.
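For readers unfamiliar with the plumbing involved, the hand-off described above reduces to one task writing to an anyio memory object stream while another task consumes it with async for. Below is a minimal, self-contained sketch of that shape (illustrative only, not the SDK's code; the helper names bridge_stdin and receive_loop are invented, and both tasks live in a single task group rather than the nested groups used by the real transport). Run in isolation, this pattern completes normally, which is what makes the observed hang look specific to the stdio transport's configuration.

# Illustrative sketch of the cross-task memory stream hand-off (not SDK code)
import anyio

async def bridge_stdin(send_stream, messages):
    # Stands in for the stdio bridging task: forward parsed messages
    # onto the memory stream.
    async with send_stream:
        for msg in messages:
            await send_stream.send(msg)

async def receive_loop(receive_stream):
    # Stands in for BaseSession._receive_loop: iterate the memory stream.
    async with receive_stream:
        async for message in receive_stream:
            print("received:", message)

async def main():
    # Buffer size 0 mirrors a rendezvous-style hand-off between tasks.
    send_stream, receive_stream = anyio.create_memory_object_stream(0)
    async with anyio.create_task_group() as tg:
        tg.start_soon(bridge_stdin, send_stream, ["initialize", "tools/list"])
        tg.start_soon(receive_loop, receive_stream)

anyio.run(main)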
Steps to Reproduce:
Save the following server code as mcp_file_server.py: (Use the original, unpatched version that calls await mcp.run_stdio_async())
# mcp_file_server.py (Original - Demonstrates Hang)
import asyncio
import sys
from pathlib import Path
import logging

logging.basicConfig(level=logging.DEBUG, format='%(asctime)s [%(name)s] %(levelname)s: %(message)s')
log = logging.getLogger("MCPFileServer_Original")

try:
    import pandas as pd
    from mcp.server.fastmcp import FastMCP
    import mcp.server.stdio as mcp_stdio
except ImportError as e:
    log.error(f"Import error: {e}")
    sys.exit(1)

mcp = FastMCP("FileToolsServer")
log.info("FastMCP server 'FileToolsServer' initialized.")

@mcp.tool()
def FileReaderTool(uri: str) -> str:
    log.info(f"Tool 'FileReaderTool' called with URI: {uri}")
    if not uri.startswith("file:"):
        return "Error: Invalid URI scheme."
    try:
        fp = Path(uri.replace("file://", "")).resolve()
        if not fp.is_file():
            return f"Error: File not found: {fp}"
        content = fp.read_text(encoding="utf-8")
        log.info(f"Read {len(content)} chars from {fp}")
        return content
    except Exception as e:
        log.exception(f"Error reading file {uri}")
        return f"Error: Failed to read file '{uri}'. Reason: {str(e)}"

@mcp.tool()
def CsvReaderTool(uri: str) -> str:
    log.info(f"Tool 'CsvReaderTool' called with URI: {uri}")
    if not uri.startswith("file:"):
        return "Error: Invalid URI scheme."
    try:
        fp = Path(uri.replace("file://", "")).resolve()
        if not fp.is_file():
            return f"Error: CSV file not found: {fp}"
        df = pd.read_csv(fp)
        content_str = df.to_string(index=False)
        log.info(f"Read and formatted CSV from {fp}")
        return content_str
    except Exception as e:
        log.exception(f"Error reading CSV file {uri}")
        return f"Error: Failed to read CSV file '{uri}'. Reason: {str(e)}"

async def main():
    log.info("Starting MCP server main() coroutine.")
    try:
        log.info("Entering stdio_server context manager...")
        # stdio_server yields anyio memory streams
        async with mcp_stdio.stdio_server() as (read_stream, write_stream):
            log.debug(f"stdio_server provided read_stream: {type(read_stream)}")
            log.debug(f"stdio_server provided write_stream: {type(write_stream)}")
            log.info("stdio streams established. Calling mcp.run_stdio_async()...")
            log.debug(">>> About to await mcp.run_stdio_async()")
            # This internally calls Server.run which uses BaseSession._receive_loop
            await mcp.run_stdio_async()
            log.debug("<<< mcp.run_stdio_async() completed")  # Never reached before client disconnect
            log.info("mcp.run_stdio_async() finished.")
        log.info("stdio_server context exited.")
    except Exception as e:
        log.exception("Exception occurred within stdio_server or mcp.run_stdio_async()")
    finally:
        log.info("MCP server main() function exiting.")

if __name__ == "__main__":
    log.info(f"Executing server script: {__file__}")
    try:
        asyncio.run(main())
    except KeyboardInterrupt:
        log.info("Server stopped by user.")
    except Exception as e:
        log.exception("An unexpected error occurred at the top level.")
Save the following client code as minimal_client.py: (Use the version corrected for Python 3.10 timeouts and list_tools processing)
# minimal_client.py
import asyncio
import sys
import logging
from pathlib import Path

logging.basicConfig(level=logging.INFO, format='%(asctime)s [Minimal Client] %(levelname)s: %(message)s')
log = logging.getLogger("MinimalClient")

try:
    from mcp import ClientSession, StdioServerParameters, types as mcp_types
    from mcp.client.stdio import stdio_client
except ImportError as e:
    sys.exit(f"Import Error: {e}. Ensure 'modelcontextprotocol' is installed.")

SERVER_SCRIPT_PATH = Path("./mcp_file_server.py").resolve()

async def run_minimal_test_inner():
    log.info("Starting minimal client test.")
    if not SERVER_SCRIPT_PATH.is_file():
        log.error(f"Server script not found: {SERVER_SCRIPT_PATH}")
        return False
    server_params = StdioServerParameters(command=sys.executable, args=[str(SERVER_SCRIPT_PATH)])
    log.info(f"Server params: {sys.executable} {SERVER_SCRIPT_PATH}")
    init_successful = False
    try:
        log.info("Attempting to connect via stdio_client...")
        async with stdio_client(server_params) as (reader, writer):
            log.info("stdio_client connected. Creating ClientSession...")
            async with ClientSession(reader, writer) as session:
                log.info("ClientSession created. Initializing...")
                try:
                    init_timeout = 30.0
                    init_result = await asyncio.wait_for(session.initialize(), timeout=init_timeout)
                    log.info(f"Initialize successful! Server capabilities: {init_result.capabilities}")
                    init_successful = True
                    try:
                        list_timeout = 15.0
                        list_tools_response = await asyncio.wait_for(session.list_tools(), timeout=list_timeout)
                        log.info(f"Raw tools list response: {list_tools_response!r}")
                        tools_list = getattr(list_tools_response, 'tools', None)
                        if tools_list is not None and isinstance(tools_list, list):
                            tool_names = [t.name for t in tools_list if hasattr(t, 'name')]
                            if tool_names:
                                log.info(f"Successfully listed tools: {tool_names}")
                            else:
                                log.warning("Tools list present but no tool names found.")
                        else:
                            log.warning("Could not get tools list from response.")
                    except asyncio.TimeoutError:
                        log.error("Timeout listing tools.")
                    except Exception as e_list:
                        log.exception("Error listing tools.")
                except asyncio.TimeoutError:
                    log.error(f"Timeout ({init_timeout}s) waiting for session.initialize().")
                except Exception as e_init:
                    log.exception("Error during session.initialize().")
            log.info("Exiting ClientSession context.")
        log.info("Exiting stdio_client context.")
    except Exception as e_main:
        log.exception(f"An error occurred connecting or during session: {e_main}")
    return init_successful

async def main_with_overall_timeout():
    overall_timeout = 45.0
    log.info(f"Running test with overall timeout: {overall_timeout}s")
    try:
        success = await asyncio.wait_for(run_minimal_test_inner(), timeout=overall_timeout)
        if success:
            log.info("Minimal client test: INITIALIZATION SUCCEEDED.")
        else:
            log.error("Minimal client test: INITIALIZATION FAILED (within timeout).")
    except asyncio.TimeoutError:
        log.error(f"Minimal client test: OVERALL TIMEOUT ({overall_timeout}s) REACHED.")
    except Exception as e:
        log.exception("Unexpected error in main_with_overall_timeout")

if __name__ == "__main__":
    try:
        asyncio.run(main_with_overall_timeout())
    except KeyboardInterrupt:
        log.info("Test interrupted.")
Install dependencies: pip install modelcontextprotocol pandas (or using uv)
Run the client: python minimal_client.py
Expected Behavior:
The client connects, initializes successfully, lists tools, and exits cleanly.
Actual Behavior:
The client connects but hangs at the Initializing... step. After the 30-second timeout expires for session.initialize(), it logs the timeout error and exits. Server logs confirm that mcp.run_stdio_async() was awaited but never processed the incoming message until after the client disconnected.
Logs:
(Logs showing the client timeout and the server hanging after >>> About to await mcp.run_stdio_async())
Additional Context:
Further debugging using extensive monkeypatching confirmed that the background task in mcp.server.stdio.stdio_server does successfully read the initialize request from stdin and send it to the internal anyio memory stream.
However, the async for loop within mcp.shared.session.BaseSession._receive_loop (which reads from that memory stream) never yields the message.
Replacing the internal anyio memory streams with standard asyncio.Queues allowed the communication to succeed, isolating the problem to the anyio memory stream communication between the stdio bridging task group and the session processing task group.
This appears to be a bug in the stdio transport implementation related to anyio memory streams and task group interaction under the asyncio backend.
The patched working version using asyncio.Queue is attached as working_code.zip (https://github.com/user-attachments/files/19485125/working_code.zip). Run it via uv run minimal_client.py.
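For reference, the idea behind the workaround is to swap the memory streams for asyncio.Queue objects exposing the same send/iterate surface. A rough sketch of that idea follows (my own reduction, not the attached patch; QueueWriter, QueueReader, and create_queue_streams are hypothetical names, and a true drop-in replacement would also need to implement the async context manager protocol the SDK expects):

# Hypothetical asyncio.Queue-based stand-ins for the memory streams
import asyncio

_CLOSED = object()  # sentinel marking end-of-stream

class QueueWriter:
    def __init__(self, queue: asyncio.Queue):
        self._queue = queue

    async def send(self, item):
        await self._queue.put(item)

    async def aclose(self):
        await self._queue.put(_CLOSED)

class QueueReader:
    def __init__(self, queue: asyncio.Queue):
        self._queue = queue

    def __aiter__(self):
        return self

    async def __anext__(self):
        item = await self._queue.get()
        if item is _CLOSED:
            raise StopAsyncIteration
        return item

def create_queue_streams():
    # Same call shape as anyio.create_memory_object_stream(): (writer, reader)
    q: asyncio.Queue = asyncio.Queue()
    return QueueWriter(q), QueueReader(q)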
@Stark-X I believe this is the root cause of the timeout. You created a read_stream and write_stream externally, while mcp.run_stdio_async() also creates its own read and write streams. Replacing await mcp.run_stdio_async() with: await mcp._mcp_server.run(read_stream, write_stream, mcp._mcp_server.create_initialization_options()) would fix it.
Yes, I think @Stark-X is right here. I think this is a matter of mixing low-level primitives and FastMCP.
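Applied to the server script above, that suggestion would mean running the low-level server on the streams the script already created, instead of letting run_stdio_async() open a second pair. A sketch of the changed main(), based directly on the comment (note that _mcp_server is a private attribute, so treat this as a workaround rather than a documented API):

# mcp_file_server.py -- main() with the suggested change applied
async def main():
    log.info("Starting MCP server main() coroutine.")
    async with mcp_stdio.stdio_server() as (read_stream, write_stream):
        # Run the server on the streams created above rather than calling
        # mcp.run_stdio_async(), which would create its own, separate pair.
        await mcp._mcp_server.run(
            read_stream,
            write_stream,
            mcp._mcp_server.create_initialization_options(),
        )

The other way to avoid the duplicated streams would presumably be to drop the outer stdio_server() context entirely and just await mcp.run_stdio_async().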
Labels: bug, transport:stdio, client, server, anyio