This directory contains a comprehensive integration test suite for the InfluxDB MCP Server. The tests verify that the server correctly interacts with an InfluxDB instance and exposes all the functionality through the Model Context Protocol.
- Node.js 16 or higher
- Docker (for running InfluxDB during tests)
- npm or yarn
- Install dependencies:

  ```bash
  npm install
  ```

- Make sure Docker is running on your system
To run the integration tests:

```bash
npm test
```
The test suite:
- Starts an InfluxDB 2.7 container in Docker
- Initializes the database with an organization, bucket, and authentication token
- Tests core InfluxDB connectivity directly:
  - Writing data to InfluxDB
  - Listing organizations and buckets
  - Querying measurements in a bucket
  - Creating new buckets
- Tests InfluxDB MCP Server functionality:
  - Direct server process communication
  - Testing write-data operations
  - Testing Flux queries
  - Verifying prompt content and validity
The tests use the following configuration:
- InfluxDB running on port 8086
- A test organization named "test-org"
- A test bucket named "test-bucket"
- A test admin token and a separate token for the MCP server
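For illustration, the configuration above and the container startup/initialization steps described earlier might look roughly like the sketch below. The constant names, credentials, and setup flow are assumptions for this example (not the suite's actual code); the global `fetch` call assumes Node 18+ or a polyfill such as `node-fetch`, and readiness polling is omitted.

```javascript
// Hypothetical sketch of the test setup: start InfluxDB 2.7 in Docker and
// initialize it through the onboarding API. Names and credentials are
// illustrative only.
import { execSync } from "child_process";

const INFLUXDB_PORT = 8086;
const INFLUXDB_URL = `http://localhost:${INFLUXDB_PORT}`;
const INFLUXDB_ORG = "test-org";
const INFLUXDB_BUCKET = "test-bucket";
const INFLUXDB_ADMIN_TOKEN = "test-admin-token";

// Start a disposable InfluxDB 2.7 container and remember its ID for cleanup.
const containerId = execSync(
  `docker run -d -p ${INFLUXDB_PORT}:8086 influxdb:2.7`
).toString().trim();

// Create the initial org, bucket, and admin token (readiness polling omitted).
async function setupInflux() {
  const res = await fetch(`${INFLUXDB_URL}/api/v2/setup`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      username: "admin",
      password: "admin-password",
      org: INFLUXDB_ORG,
      bucket: INFLUXDB_BUCKET,
      token: INFLUXDB_ADMIN_TOKEN,
    }),
  });
  if (!res.ok) throw new Error(`InfluxDB setup failed: ${res.status}`);
}
```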
After the tests complete, the suite:
- Cleans up any spawned server processes
- Stops and removes the InfluxDB Docker container
- Performs additional cleanup of any leftover Docker containers
The integration tests require Docker to be installed and running on your system. Make sure you have proper permissions to create and manage Docker containers before running the tests.
You can run specific tests to isolate issues:
```bash
# Run a single test by name
npm test -- -t "should list organizations"

# Skip integration tests
npm test -- --testPathIgnorePatterns=integration.test.js
```
The project now uses multiple testing approaches to ensure functionality and stability:
- Integration Tests (`tests/integration.test.js`):
  - End-to-end tests that use Docker to run a real InfluxDB instance
  - Tests both direct API calls and MCP client interactions
  - Contains robust cleanup to prevent hanging processes
- Direct Handler Tests (`direct-tests/handlers.test.js`):
  - Tests each handler function directly without MCP protocol overhead
  - Validates that the individual handlers work correctly with InfluxDB
  - Uses environment variable mocking to ensure proper test isolation
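As a rough illustration of the direct handler approach, the sketch below calls a handler function straight from a Jest test, with no MCP client or protocol layer in between. The import path and handler name (`listBuckets`) are hypothetical placeholders and will differ from the actual implementation.

```javascript
// Hypothetical direct handler test: no MCP client, no protocol overhead.
// The import path and handler name are placeholders for illustration.
import { listBuckets } from "../src/handlers/bucketHandlers.js";

describe("bucket handlers (direct)", () => {
  test("listBuckets includes the test bucket", async () => {
    // The handler talks to the real InfluxDB container started by the suite.
    const result = await listBuckets();
    expect(JSON.stringify(result)).toContain("test-bucket");
  });
});
```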
The direct API testing approach focuses on verifying the core functionality without depending on the MCP client. This approach has several advantages:
- Isolates Test Concerns: Separates InfluxDB connectivity testing from MCP client/server communication issues
- More Reliable Tests: Reduces dependency on the MCP client, which can have its own reliability challenges
- Better Error Visibility: Makes it easier to diagnose exactly where issues occur in the stack
- Faster Execution: Direct API calls are typically faster than going through the full MCP stack
For testing server functionality directly, we use:
- Child process spawning with controlled environment variables
- Direct HTTP fetch calls to InfluxDB API endpoints
- Code scanning to validate prompt content
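A minimal sketch of the child-process approach, assuming the server entry point is `src/index.js` and that it reads the environment variables named elsewhere in this document (the exact variable set is an assumption):

```javascript
// Spawn the MCP server with a controlled environment (sketch only; the
// entry point path and variable names may differ in the actual suite).
import { spawn } from "child_process";

const serverProcess = spawn("node", ["src/index.js"], {
  env: {
    ...process.env,
    INFLUXDB_URL: "http://localhost:8086",
    INFLUXDB_TOKEN: "test-admin-token",
    INFLUXDB_ORG: "test-org",
  },
  stdio: ["pipe", "pipe", "pipe"], // stdin/stdout carry the MCP protocol
});

serverProcess.stderr.on("data", (chunk) =>
  console.error(`[server] ${chunk.toString().trim()}`)
);

// ...send requests over serverProcess.stdin, read responses from stdout...
// Always terminate the process during cleanup:
// serverProcess.kill();
```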
Current test status:

- All tests are now passing
- Previous issues with list operations timing out have been resolved
- The integration tests now correctly validate both direct API access and MCP server functionality
During development, we encountered several challenges with testing the MCP server:
- Problem: The MCP client and server had compatibility issues with method names and protocol implementation.
- Solution: Created direct handler tests that bypass the MCP protocol to validate core functionality independently of protocol issues.
- Problem: The MCP server was using the regular token (INFLUXDB_TOKEN) instead of the admin token (INFLUXDB_ADMIN_TOKEN) when making API calls, resulting in 401 unauthorized errors.
- Solution: Updated the tests to use the admin token for all operations, ensuring proper authorization.
- Problem: The test suite was creating two instances of the server process - one explicitly and another through the StdioClientTransport, causing conflicts and connection issues.
- Solution: Restructured the tests to either use direct API calls or properly manage a single server process instance.
- Problem: Environment variables set during runtime were not affecting already-imported modules that had cached the values.
- Solution: Implemented Jest module mocking to override environment configuration during testing.
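A sketch of the module-mocking idea, assuming a CommonJS-style config module that reads environment variables at import time (the module path and exported shape are assumptions for this example):

```javascript
// Override the cached environment configuration for tests (sketch).
// jest.mock() factories are hoisted above imports, so the mocked values are
// in place before any module under test loads the config.
jest.mock("../src/config.js", () => ({
  INFLUXDB_URL: "http://localhost:8086",
  INFLUXDB_TOKEN: process.env.INFLUXDB_ADMIN_TOKEN,
  INFLUXDB_ORG: "test-org",
}));

// Modules required after the mock see the overridden configuration.
const { listBuckets } = require("../src/handlers/bucketHandlers.js");
```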
- Problem: The promise race used for timeouts wasn't properly canceling fetch operations when timeouts occurred, leading to dangling network requests.
- Solution: Implemented AbortController for proper request cancellation in the influxRequest function.
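A minimal sketch of the pattern, not the actual influxRequest implementation (its real signature and error handling may differ):

```javascript
// Timeout handling with AbortController: the fetch is actually cancelled
// when the timer fires, instead of being left dangling by a Promise.race.
async function influxRequest(path, options = {}, timeoutMs = 5000) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await fetch(`${process.env.INFLUXDB_URL}${path}`, {
      ...options,
      headers: {
        Authorization: `Token ${process.env.INFLUXDB_TOKEN}`,
        ...options.headers,
      },
      signal: controller.signal,
    });
  } finally {
    clearTimeout(timer); // avoid leaking timers when the request completes
  }
}
```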
- Problem: "Port is already allocated" errors occur when Docker tries to start InfluxDB on a port already in use.
- Solution:
- Use a randomized port for InfluxDB containers to avoid conflicts (implemented)
- Clean up any lingering Docker containers using `docker ps -a` and `docker rm`
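A sketch covering both points above; the port range and the container filter are illustrative choices, not necessarily what the suite uses:

```javascript
// Pick a random high host port to avoid "port is already allocated" errors,
// and remove any leftover containers from previous runs before starting.
import { execSync } from "child_process";

const hostPort = 10000 + Math.floor(Math.random() * 50000);

// Find containers (running or exited) based on the influxdb:2.7 image
// and force-remove them before starting a fresh one.
const leftover = execSync(
  `docker ps -a -q --filter "ancestor=influxdb:2.7"`
).toString().trim();
if (leftover) {
  execSync(`docker rm -f ${leftover.split("\n").join(" ")}`);
}

const containerId = execSync(
  `docker run -d -p ${hostPort}:8086 influxdb:2.7`
).toString().trim();
```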
- Problem: The MCP client struggles to connect to the server or times out during operations.
- Solution:
- For testing, use the direct API approach implemented in the current tests
- When the MCP client is needed:
- Check authorization tokens are correct and have sufficient permissions
- Add proper error handling with detailed logging
- Implement connection validation and reconnection logic
- Add appropriate timeout handling using AbortController
- Consider opening an issue with the MCP SDK if problems persist
- Problem: Tests hang after showing "Sample data written successfully" with no progress.
- Solution:
- Add detailed logging for each step of the test process
- Use AbortController with fetch operations to ensure proper cancellation
- Implement proper error propagation so issues are visible rather than silent
- Run tests in series with the `--runInBand` option to avoid race conditions
- Problem: Docker containers aren't properly cleaned up, leading to resource leaks.
- Solution:
- Implement a robust `afterAll()` that cleans up all resources regardless of test success/failure
- Add container ID logging for better debugging
- Forcefully remove any leftover test containers
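A sketch of such a defensive `afterAll()`; the variable names (`serverProcess`, `containerId`) are illustrative and assumed to be set during the suite's setup:

```javascript
import { execSync } from "child_process";

// Clean up regardless of test outcome: each step is wrapped in its own
// try/catch so one failure does not prevent the remaining cleanup.
afterAll(async () => {
  try {
    if (serverProcess && !serverProcess.killed) {
      serverProcess.kill();
    }
  } catch (err) {
    console.error("Failed to stop server process:", err);
  }

  try {
    if (containerId) {
      console.log(`Removing InfluxDB container ${containerId}`);
      execSync(`docker rm -f ${containerId}`, { stdio: "ignore" });
    }
  } catch (err) {
    console.error("Failed to remove InfluxDB container:", err);
  }
});
```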
When tests are failing:
- Add detailed logging throughout your server implementation
- Check Docker container status: `docker ps -a`
- View detailed logs from containers: `docker logs <container-id>`
- Run tests with increased verbosity: `npm test -- --verbose`
- Check if ports are in use: `lsof -i :<port>`
- Inspect network traffic for API authorization issues: look for 401/403 errors
- Compare direct API responses with MCP server responses to identify formatting discrepancies