Commit 2eb11a4

update README

1 parent c97fe13 commit 2eb11a4

4 files changed: +108 -29 lines

Diff for: Dockerfile (+53 -19)

```diff
@@ -1,31 +1,65 @@
-# Use Python base image
-FROM python:3.10-slim-bookworm
+# --------
+# Builder stage
+# --------
+FROM python:3.11-slim AS builder
 
-# Install system dependencies
-RUN apt-get update && apt-get install -y \
-    build-essential \
+# Install build tools and dependencies
+RUN apt-get update && apt-get install -y --no-install-recommends \
     gcc \
     g++ \
+    libc6-dev \
+    libffi-dev \
+    libpq-dev \
+    make \
+    && rm -rf /var/lib/apt/lists/*
+
+# Set working directory for builder
+WORKDIR /app
+
+# Copy the entire project first
+COPY . /app/
+
+# Upgrade pip and build wheels for all dependencies
+RUN pip install --upgrade pip \
+    && mkdir /wheels \
+    && pip wheel --wheel-dir=/wheels -r requirements.txt
+
+# --------
+# Final runtime stage
+# --------
+FROM python:3.11-slim
+
+# Install runtime dependencies
+RUN apt-get update && apt-get install -y --no-install-recommends \
+    libffi-dev \
+    libpq-dev \
     && rm -rf /var/lib/apt/lists/*
 
-# Install the project into `/app`
 WORKDIR /app
 
 # Copy the entire project
-COPY . /app
+COPY . /app/
+
+# Copy the wheels built in the builder stage
+COPY --from=builder /wheels /wheels
+
+# Install Python dependencies from the local wheel cache
+RUN pip install --upgrade pip \
+    && pip install --no-cache-dir --no-index --find-links=/wheels -r requirements.txt
+
+# Create models directory
+RUN mkdir -p /app/models
+
+# Pre-download models
+RUN python -c "from sentence_transformers import SentenceTransformer; \
+    model = SentenceTransformer('all-MiniLM-L6-v2'); \
+    model.save('/app/models/all-MiniLM-L6-v2')"
 
-# Install dependencies first for better caching
-RUN pip install --no-cache-dir \
-    mcp \
-    pydantic \
-    requests \
-    "numpy<2.0" \
-    fastapi \
-    uvicorn \
-    && pip install --no-cache-dir faiss-cpu sentence-transformers
+# Set environment variable for Sentence Transformers
+ENV SENTENCE_TRANSFORMERS_HOME=/app/models
 
-# Install the package in development mode
-RUN pip install -e .
+# Expose port
+EXPOSE 8000
 
 # Run the server
-ENTRYPOINT ["python", "-m", "mcp_server_any_openapi.server"]
+ENTRYPOINT ["python", "-m", "mcp_server_any_openapi.server"]
```
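The runtime stage sets `SENTENCE_TRANSFORMERS_HOME` so the server loads the model baked into the image instead of downloading it at startup. A minimal sketch of that lookup; the helper name `resolve_model_path` is hypothetical, not from the repo:

```python
import os

def resolve_model_path(model_name: str, default_home: str = "/app/models") -> str:
    """Return the local directory for a pre-downloaded model, mirroring
    how SENTENCE_TRANSFORMERS_HOME is used in the Dockerfile above."""
    home = os.environ.get("SENTENCE_TRANSFORMERS_HOME", default_home)
    return os.path.join(home, model_name)

# Matches the ENV set in the final runtime stage.
os.environ["SENTENCE_TRANSFORMERS_HOME"] = "/app/models"
print(resolve_model_path("all-MiniLM-L6-v2"))  # → /app/models/all-MiniLM-L6-v2
```

Because the model is saved at build time, container startup skips the network fetch that causes the cold-start penalty noted in the README.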

Diff for: README.md (+44 -9)
````diff
@@ -13,7 +13,43 @@ query -> [Embedding] -> FAISS TopK -> OpenAPI docs -> MCP Client (Claude Desktop
 MCP Client -> Construct OpenAPI Request -> Execute Request -> Return Response
 ```
 
-## Usage Example
+## Multi-instance config example
+
+Here is a multi-instance config example. It is designed to be used flexibly across multiple sets of APIs:
+```
+{
+  "mcpServers": {
+    "finance_openapi": {
+      "command": "docker",
+      "args": [
+        "run",
+        "-i",
+        "--rm",
+        "-e",
+        "OPENAPI_JSON_DOCS_URL=https://api.finance.com/openapi.json",
+        "-e",
+        "MCP_API_PREFIX=finance",
+        "buryhuang/mcp-server-any-openapi:latest"
+      ]
+    },
+    "healthcare_openapi": {
+      "command": "docker",
+      "args": [
+        "run",
+        "-i",
+        "--rm",
+        "-e",
+        "OPENAPI_JSON_DOCS_URL=https://api.healthcare.com/openapi.json",
+        "-e",
+        "MCP_API_PREFIX=healthcare",
+        "buryhuang/mcp-server-any-openapi:latest"
+      ]
+    }
+  }
+}
+```
+
+## Claude Desktop Usage Example
 Claude Desktop Project Prompt:
 ```
 You should get the api spec details from tools financial_api_request_schema
````
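The `MCP_API_PREFIX` variable is what lets the two instances in the config coexist in one client: each instance's tools are namespaced by the prefix. A rough sketch of that naming pattern; the exact tool names are an illustrative assumption, not read from the server code:

```python
def tool_names(prefix: str) -> list[str]:
    # Illustrative assumption: each server instance exposes its tools
    # under the MCP_API_PREFIX namespace, e.g. finance_api_request_schema.
    return [f"{prefix}_api_request_schema", f"{prefix}_api_request"]

# The two instances from the config produce disjoint tool sets.
print(tool_names("finance"))
print(tool_names("healthcare"))
```

With distinct prefixes, Claude Desktop can route a finance question and a healthcare question to the right server without name collisions.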
```diff
@@ -31,16 +67,15 @@ Get prices for all stocks
 
 ## Features
 
-- 🏗️ Multiple deployment options:
-  - Public Docker image (`buryhuang/mcp-server-any-openapi`)
-  - Local Python package (`pip install`)
+- 🧠 Uses a remote OpenAPI JSON file as the source: no local file system access, and no updates needed when the API changes
 - 🔍 Semantic search using optimized MiniLM-L3 model (43MB vs original 90MB)
-- 📚 Comprehensive endpoint documentation including parameters, request bodies, and responses
 - 🚀 FastAPI-based server with async support
-- 🐳 Multi-platform Docker support
-- 🧠 Intelligent chunking for large OpenAPI specs (handles 100KB+ documents)
+- 🧠 Endpoint-based chunking of OpenAPI specs (handles 100KB+ documents) with no loss of endpoint context
 - ⚡ In-memory FAISS vector search for instant endpoint discovery
-- 🐢 Cold start penalty (~15s for model loading)
+
+## Limitations
+- 🐢 Cold start penalty (~15s for model loading) when not using the Docker image
+- When using the Docker image, the image size is larger
 
 ## Challenges Addressed
 
```
````diff
@@ -164,7 +199,7 @@ Official images support 3 platforms:
 ```bash
 # Build and push using buildx
 docker buildx create --use
-docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 \
+docker buildx build --platform linux/amd64,linux/arm64 \
   -t buryhuang/mcp-server-any-openapi:latest \
   --push .
 ```
````
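The in-memory FAISS vector search listed under Features can be illustrated with a brute-force cosine-similarity search over toy vectors. FAISS answers the same top-k query far more efficiently over real MiniLM embeddings; the endpoint names and vectors below are made up for illustration:

```python
import math

def top_k(query: list[float], docs: dict[str, list[float]], k: int = 2) -> list[str]:
    """Rank documents by cosine similarity to the query vector; a
    stand-in for the FAISS index the server builds over endpoints."""
    def cos(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)
    ranked = sorted(docs, key=lambda d: cos(query, docs[d]), reverse=True)
    return ranked[:k]

# Toy embeddings for two endpoints; the query vector is closest to GET /stocks.
docs = {
    "GET /stocks": [0.9, 0.1, 0.0],
    "POST /orders": [0.1, 0.9, 0.0],
}
print(top_k([1.0, 0.0, 0.0], docs, k=1))  # → ['GET /stocks']
```

Because the index lives in memory, a query like "Get prices for all stocks" resolves to the matching endpoint without any disk or network round trip.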

Diff for: pyproject.toml (+2 -1)
```diff
@@ -17,7 +17,8 @@ dependencies = [
     "numpy",
     "sentence-transformers",
     "fastapi",
-    "uvicorn"
+    "uvicorn",
+    "huggingface-hub"
 ]
 
 [tool.hatch.build.targets.wheel]
```

Diff for: requirements.txt (+9)
```diff
@@ -0,0 +1,9 @@
+mcp==1.2.1
+pydantic>=2.10.1,<3.0.0
+requests==2.31.0
+numpy==1.24.3
+fastapi==0.115.8
+uvicorn==0.34.0
+faiss-cpu==1.7.4
+sentence-transformers==2.2.2
+huggingface-hub==0.14.1
```
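Most of these lines pin exact versions, which keeps the builder stage's `pip wheel` output reproducible; `pydantic` is the one range specifier. A small sketch (hypothetical helper, not part of the repo) that distinguishes the two styles:

```python
import re

def is_pinned(req: str) -> bool:
    """True if a requirement line pins an exact version (name==x.y.z),
    as most lines in this requirements.txt do."""
    return re.fullmatch(r"[A-Za-z0-9_.-]+==[A-Za-z0-9_.]+", req.strip()) is not None

reqs = ["mcp==1.2.1", "pydantic>=2.10.1,<3.0.0", "faiss-cpu==1.7.4"]
print([r for r in reqs if not is_pinned(r)])  # → ['pydantic>=2.10.1,<3.0.0']
```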
