list_tables crashes MCP server for large schemas — needs pagination and metadata trimming #25

Open
puneet-sutar opened this issue Mar 30, 2025 · 8 comments · Fixed by #26

puneet-sutar commented Mar 30, 2025

list_tables crashes MCP server for large schemas — needs pagination and metadata trimming

📝 Updated Issue Description:

When using list_tables on databases with a large number of tables (100+), the MCP server may crash or disconnect. This is likely due to the payload size exceeding internal limits, especially when passed to an LLM or when streaming via stdio.

This issue can manifest as BrokenResourceError, unhandled exceptions in TaskGroup, or server transport unexpectedly closing. We've seen this happen in ClickHouse Cloud setups where schemas are large and metadata (like comments) is verbose.

✅ Proposed Fix:

  • Add pagination to the list_tables tool (e.g., limit and offset params); a sketch of what this could look like follows below.
  • Avoid returning redundant metadata such as the comments array—most of this is already embedded in the create_table_query.
  • Update the tool description to make its limitations and expected usage clearer.
  • Optionally, cap the total number of tables fetched in a single call to a configurable value (e.g., 100–200).
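
A minimal sketch of what such a paginated tool could look like, assuming a FastMCP-based server and the clickhouse-connect driver (the connection helper, parameter names, and the columns selected here are illustrative assumptions, not the project's actual code):

```python
import clickhouse_connect
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("mcp-clickhouse")


def create_clickhouse_client():
    # Hypothetical helper; the real server builds the client from env vars.
    return clickhouse_connect.get_client(host="localhost")


@mcp.tool(
    description=(
        "List tables in a database. Returns at most `limit` tables starting at "
        "`offset`; call again with a larger offset to page through large schemas."
    )
)
def list_tables(database: str, limit: int = 100, offset: int = 0) -> list[dict]:
    """Return a compact, paginated listing of tables in `database`."""
    client = create_clickhouse_client()
    result = client.query(
        "SELECT name, engine, total_rows, total_bytes, create_table_query "
        "FROM system.tables WHERE database = %(db)s "
        "ORDER BY name LIMIT %(limit)s OFFSET %(offset)s",
        parameters={"db": database, "limit": limit, "offset": offset},
    )
    # Only compact metadata is returned; verbose fields such as per-column
    # comments are omitted so the payload stays small for stdio and the LLM.
    return [dict(zip(result.column_names, row)) for row in result.result_rows]
```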

Original Description

I am trying it with my ClickHouse Cloud instance, and the project always crashes when list_tables is called on databases with more than 100 tables.

Anything I can do to fix it?

Pasting some log lines if they help. Let me know if you need more info here.

 + Exception Group Traceback (most recent call last):
  |   File "/Users/puneetsutar/.cache/uv/archive-v0/UJ0ZpE2MZyeSn-kocDSZB/bin/mcp-clickhouse", line 12, in <module>
  |     sys.exit(main())
  |              ~~~~^^
  |   File "/Users/puneetsutar/.cache/uv/archive-v0/UJ0ZpE2MZyeSn-kocDSZB/lib/python3.13/site-packages/mcp_clickhouse/main.py", line 5, in main
  |     mcp.run()
  |     ~~~~~~~^^
  |   File "/Users/puneetsutar/.cache/uv/archive-v0/UJ0ZpE2MZyeSn-kocDSZB/lib/python3.13/site-packages/mcp/server/fastmcp/server.py", line 159, in run
  |     anyio.run(self.run_stdio_async)
  |     ~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
  |   File "/Users/puneetsutar/.cache/uv/archive-v0/UJ0ZpE2MZyeSn-kocDSZB/lib/python3.13/site-packages/anyio/_core/_eventloop.py", line 74, in run
  |     return async_backend.run(func, args, {}, backend_options)
  |            ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  |   File "/Users/puneetsutar/.cache/uv/archive-v0/UJ0ZpE2MZyeSn-kocDSZB/lib/python3.13/site-packages/anyio/_backends/_asyncio.py", line 2310, in run
  |     return runner.run(wrapper())
  |            ~~~~~~~~~~^^^^^^^^^^^
  |   File "/opt/homebrew/Cellar/[email protected]/3.13.2/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/runners.py", line 118, in run
  |     return self._loop.run_until_complete(task)
  |            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
  |   File "/opt/homebrew/Cellar/[email protected]/3.13.2/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/base_events.py", line 725, in run_until_complete
  |     return future.result()
  |            ~~~~~~~~~~~~~^^
  |   File "/Users/puneetsutar/.cache/uv/archive-v0/UJ0ZpE2MZyeSn-kocDSZB/lib/python3.13/site-packages/anyio/_backends/_asyncio.py", line 2298, in wrapper
  |     return await func(*args)
  |            ^^^^^^^^^^^^^^^^^
  |   File "/Users/puneetsutar/.cache/uv/archive-v0/UJ0ZpE2MZyeSn-kocDSZB/lib/python3.13/site-packages/mcp/server/fastmcp/server.py", line 460, in run_stdio_async
  |     async with stdio_server() as (read_stream, write_stream):
  |                ~~~~~~~~~~~~^^
  |   File "/opt/homebrew/Cellar/[email protected]/3.13.2/Frameworks/Python.framework/Versions/3.13/lib/python3.13/contextlib.py", line 221, in __aexit__
  |     await anext(self.gen)
  |   File "/Users/puneetsutar/.cache/uv/archive-v0/UJ0ZpE2MZyeSn-kocDSZB/lib/python3.13/site-packages/mcp/server/stdio.py", line 83, in stdio_server
  |     async with anyio.create_task_group() as tg:
  |                ~~~~~~~~~~~~~~~~~~~~~~~^^
  |   File "/Users/puneetsutar/.cache/uv/archive-v0/UJ0ZpE2MZyeSn-kocDSZB/lib/python3.13/site-packages/anyio/_backends/_asyncio.py", line 772, in __aexit__
  |     raise BaseExceptionGroup(
  |         "unhandled errors in a TaskGroup", self._exceptions
  |     ) from None
  | ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
  +-+---------------- 1 ----------------
    | Traceback (most recent call last):
    |   File "/Users/puneetsutar/.cache/uv/archive-v0/UJ0ZpE2MZyeSn-kocDSZB/lib/python3.13/site-packages/mcp/server/stdio.py", line 69, in stdin_reader
    |     await read_stream_writer.send(message)
    |   File "/Users/puneetsutar/.cache/uv/archive-v0/UJ0ZpE2MZyeSn-kocDSZB/lib/python3.13/site-packages/anyio/streams/memory.py", line 255, in send
    |     raise BrokenResourceError from None
    | anyio.BrokenResourceError
    +------------------------------------
2025-03-30T21:23:43.811Z [mcp-clickhouse] [info] Server transport closed
2025-03-30T21:23:43.811Z [mcp-clickhouse] [info] Client transport closed
2025-03-30T21:23:43.811Z [mcp-clickhouse] [info] Server transport closed unexpectedly, this is likely due to the process exiting early. If you are developing this MCP server you can add output to stderr (i.e. `console.error('...')` in JavaScript, `print('...', file=sys.stderr)` in python) and it will appear in this log.
2025-03-30T21:23:43.811Z [mcp-clickhouse] [error] Server disconnected. For troubleshooting guidance, please visit our [debugging documentation](https://modelcontextprotocol.io/docs/tools/debugging) {"context":"connection"}
2025-03-30T21:23:43.812Z [mcp-clickhouse] [info] Client transport closed
@iskakaushik
Collaborator

@puneet-sutar -- I made the error handling account for this so it provides more meaningful errors. We are running it in prod with thousands of tables and not hitting any issues. Could you please try the new main branch and see if it gives a more meaningful error message?
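
For anyone curious, the shape of that error-handling change is roughly the following; this is a sketch under assumptions (the run_select_query name, connection setup, and response fields are illustrative), not the actual diff. The idea is to catch failures inside the tool and return them as structured data, so an exception never propagates into the stdio task group and kills the transport.

```python
import clickhouse_connect
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("mcp-clickhouse")


@mcp.tool()
def run_select_query(query: str) -> dict:
    """Run a read-only query and always return a structured response."""
    try:
        client = clickhouse_connect.get_client(host="localhost")  # assumed config
        result = client.query(query)
        return {
            "status": "ok",
            "columns": result.column_names,
            "rows": result.result_rows,
        }
    except Exception as exc:
        # Returning the error as data keeps the MCP response well-formed, so the
        # client sees a meaningful message instead of a dropped transport.
        return {"status": "error", "message": str(exc)}
```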

@iskakaushik
Collaborator

cc: @serprex

jovezhong added a commit to jovezhong/mcp-timeplus that referenced this issue on Apr 5, 2025:
* feat: Add query timeout and thread pool for SELECT queries (ClickHouse#20)

* add action to publish to pypi (ClickHouse#19)

* add client_name with mcp_clickhouse (ClickHouse#21)

* Add descriptions for tools required by Bedrock (ClickHouse#23)

Addresses ClickHouse#22

* 0.1.5

* fix: prevent BrokenResourceError by returning structured responses for query errors (ClickHouse#26)

fixes ClickHouse#25

* Update README.md - Added link to the Youtube overview (ClickHouse#27)

* fix cherry pick issue and add desc

* Update publish.yml

* change mcp client name

---------

Co-authored-by: Kaushik Iska <[email protected]>
Co-authored-by: Philip Dubé <[email protected]>
Co-authored-by: Mikhail Shustov <[email protected]>
Co-authored-by: Dmitry Pavlov <[email protected]>
Co-authored-by: Philip Dubé <[email protected]>
Co-authored-by: Ryadh DAHIMENE <[email protected]>
@albedoweb

I'm trying the latest version of the MCP server with Claude Desktop + ClickHouse Cloud and see similar errors:

LastModifiedDate,\\\\n    LastModifiedById,\\\\n    SystemModstamp,\\\\n    DeveloperName,\\\\n    PortalAccountId,\\\\n    PortalType,\\\\n    PortalAccountOwnerId\\\\nFROM stg._salesforce_UserRole\\\\nFINAL\"}"}],"isError":false}}
  + Exception Group Traceback (most recent call last):
  |   File "/Users/nikolay/.cache/uv/archive-v0/etnCGSh5dD99zwmn60ysV/bin/mcp-clickhouse", line 12, in <module>
  |     sys.exit(main())
  |              ~~~~^^
  |   File "/Users/nikolay/.cache/uv/archive-v0/etnCGSh5dD99zwmn60ysV/lib/python3.13/site-packages/mcp_clickhouse/main.py", line 5, in main
  |     mcp.run()
  |     ~~~~~~~^^
  |   File "/Users/nikolay/.cache/uv/archive-v0/etnCGSh5dD99zwmn60ysV/lib/python3.13/site-packages/mcp/server/fastmcp/server.py", line 159, in run
  |     anyio.run(self.run_stdio_async)
  |     ~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
  |   File "/Users/nikolay/.cache/uv/archive-v0/etnCGSh5dD99zwmn60ysV/lib/python3.13/site-packages/anyio/_core/_eventloop.py", line 74, in run
  |     return async_backend.run(func, args, {}, backend_options)
  |            ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  |   File "/Users/nikolay/.cache/uv/archive-v0/etnCGSh5dD99zwmn60ysV/lib/python3.13/site-packages/anyio/_backends/_asyncio.py", line 2310, in run
  |     return runner.run(wrapper())
  |            ~~~~~~~~~~^^^^^^^^^^^
  |   File "/Users/nikolay/.local/share/uv/python/cpython-3.13.3-macos-aarch64-none/lib/python3.13/asyncio/runners.py", line 118, in run
  |     return self._loop.run_until_complete(task)
  |            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
  |   File "/Users/nikolay/.local/share/uv/python/cpython-3.13.3-macos-aarch64-none/lib/python3.13/asyncio/base_events.py", line 719, in run_until_complete
  |     return future.result()
  |            ~~~~~~~~~~~~~^^
  |   File "/Users/nikolay/.cache/uv/archive-v0/etnCGSh5dD99zwmn60ysV/lib/python3.13/site-packages/anyio/_backends/_asyncio.py", line 2298, in wrapper
  |     return await func(*args)
  |            ^^^^^^^^^^^^^^^^^
  |   File "/Users/nikolay/.cache/uv/archive-v0/etnCGSh5dD99zwmn60ysV/lib/python3.13/site-packages/mcp/server/fastmcp/server.py", line 460, in run_stdio_async
  |     async with stdio_server() as (read_stream, write_stream):
  |                ~~~~~~~~~~~~^^
  |   File "/Users/nikolay/.local/share/uv/python/cpython-3.13.3-macos-aarch64-none/lib/python3.13/contextlib.py", line 221, in __aexit__
  |     await anext(self.gen)
  |   File "/Users/nikolay/.cache/uv/archive-v0/etnCGSh5dD99zwmn60ysV/lib/python3.13/site-packages/mcp/server/stdio.py", line 83, in stdio_server
  |     async with anyio.create_task_group() as tg:
  |                ~~~~~~~~~~~~~~~~~~~~~~~^^
  |   File "/Users/nikolay/.cache/uv/archive-v0/etnCGSh5dD99zwmn60ysV/lib/python3.13/site-packages/anyio/_backends/_asyncio.py", line 772, in __aexit__
  |     raise BaseExceptionGroup(
  |         "unhandled errors in a TaskGroup", self._exceptions


Looks like it's getting some data but can't process the response properly.

@mweidner037

Not sure if the error message is the same, but my colleagues were running into a similar issue (crashing at list_tables) until they upgraded from free to Claude Pro.

@jovezhong

When there are many tables, the list_tables tool can generate a very large JSON response, and this may exceed the LLM token limit. In my fork of the MCP server, I disabled the comments array for each table, since the create_table_query SQL contains (almost) all of that information, and most LLMs can understand the column data types, default values, nullability, and comments by reading the DDL.
https://github.com/ClickHouse/mcp-clickhouse/blob/main/mcp_clickhouse/mcp_server.py#L97
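
Roughly, the trimming amounts to something like the helper below (a sketch; the per-table field names are assumptions, not the server's exact schema):

```python
def trim_table_metadata(table: dict) -> dict:
    """Drop verbose fields that are already encoded in the CREATE TABLE DDL."""
    # Column comments, defaults, and types are all present in create_table_query,
    # so repeating them as separate arrays can double the payload for wide schemas.
    keep = ("database", "name", "engine", "create_table_query", "total_rows", "total_bytes")
    return {key: value for key, value in table.items() if key in keep}
```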

@iskakaushik
Collaborator

Thanks for reporting this!

It seems the core issue is with how list_tables currently works—when there are hundreds or thousands of tables, the response payload can become too large, which might crash the MCP server or exceed the LLM token limit.

We’ll get to this soon. A proper fix will likely involve:

  • Making the list_tables method paginated (e.g. with limit/offset),
  • Possibly trimming unneeded metadata (like the comments array),
  • Updating the tool description to better reflect these constraints.

That said, this is a great opportunity for someone looking to dig into the MCP codebase. If you’re interested, happy to onboard and guide you 🙂. Let me know!
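
For reference, a client-side loop over a paginated list_tables might look like the sketch below (call_tool is a hypothetical stand-in for an MCP client invocation; the limit/offset parameters follow the proposal above):

```python
def fetch_all_tables(call_tool, database: str, page_size: int = 100) -> list[dict]:
    """Page through list_tables until a short page signals the last one."""
    tables, offset = [], 0
    while True:
        page = call_tool("list_tables", database=database, limit=page_size, offset=offset)
        tables.extend(page)
        if len(page) < page_size:
            return tables
        offset += page_size
```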

@iskakaushik changed the title from "I am trying it with my clickhouse cloud instance and the project always crashes when list_tables is called on the databases with more then 100 tables." to "list_tables crashes MCP server for large schemas — needs pagination and metadata trimming" on Apr 14, 2025
@albedoweb

Not sure if the error message is the same, but my colleagues were running into a similar issue (crashing at list_tables) until they upgraded from free to Claude Pro.

I've tried with Pro; same result.

@iskakaushik
Collaborator

Hi all, I made a PR potentially addressing this. Can someone who is able to consistently reproduce this issue try with #34 and see if it resolves it? I tested with a database containing 4,000 tables and it worked fine, but that was a purely synthetic test.
