Claude Prompt for MCP Servers
Opinionated recipe for turning an existing "query BigQuery" API into a clean MCP tool. Focus: tight schema, OAuth 2.0 client credentials flow, retries, and Pydantic-backed input validation.
You are an agent platform engineer. Your task: take an existing "query BigQuery" capability the team already uses via a REST/SDK call and wrap it as a clean, well-schema'd MCP tool. Target runtime: Python 3.12 + poetry. Auth: OAuth 2.0 client credentials. Validator: Pydantic.
The default at most companies is to dump the existing REST API straight into a tool. Don't. Design the tool *for the model*, not for backwards compatibility with the API.
## Part 1 — Design the tool-for-the-model
Before writing any code, answer:
1. **What is the single verb this tool performs?** (query BigQuery) — if you need two verbs, that's two tools.
2. **What are the 3–7 inputs the model actually needs to provide?** Everything else belongs in config/env.
3. **What does the model need back?** Prefer small, structured responses. If the upstream returns 200 fields, pick the ones that matter and summarize the rest.
4. **What mistakes will the model make?** (e.g. passing an email when an ID is required). Design input schema and error messages to *teach* the model to recover.
Write this out as a 1-page design doc before coding.
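As a sketch of what that 1-pager might look like for this tool (names and limits are illustrative, not prescriptive):

```markdown
# Tool design: query_bigquery

- Verb: run one read-only SQL query against BigQuery and return rows.
- Model-provided inputs: sql, dialect, max_rows. (Project ID, dataset
  allow-list, and credentials live in server config, not in the schema.)
- Returns: rows (at most max_rows), row_count, a truncated flag, and
  next_actions hints.
- Expected model mistakes: DML statements, unqualified table names,
  requesting millions of rows. Error messages name the fix explicitly,
  e.g. "Only SELECT is allowed. Rewrite your query as a SELECT statement."
```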
## Part 2 — Pydantic schema
Write the full input schema:
- Descriptions that read like docstrings to the model ("The ID of the X to operate on. Format: prefix_<12 hex chars>. Get it from the `search_x` tool first.")
- Enums over free strings wherever possible
- Sensible defaults for optional fields
Write the full output schema. Output should be:
- Deterministic shape (same keys every time)
- Short (< 4KB typical)
- Include a `next_actions` hint array where relevant ("you can now call `foo` with this id")
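A minimal sketch of what these schemas could look like with Pydantic. Field names, the `Dialect` enum, and the bounds on `max_rows` are illustrative assumptions, not part of the BigQuery API:

```python
from enum import Enum
from pydantic import BaseModel, Field


class Dialect(str, Enum):
    """Prefer enums over free strings: the model cannot invent values."""
    STANDARD = "standard"
    LEGACY = "legacy"


class QueryBigQueryInput(BaseModel):
    sql: str = Field(
        ...,
        description=(
            "A single read-only SELECT statement in BigQuery SQL. "
            "DML/DDL is rejected."
        ),
    )
    dialect: Dialect = Field(
        Dialect.STANDARD,
        description="SQL dialect. Almost always 'standard'.",
    )
    max_rows: int = Field(
        100, ge=1, le=1000,
        description="Maximum rows to return. Results beyond this are truncated.",
    )


class QueryBigQueryOutput(BaseModel):
    rows: list[dict] = Field(..., description="Result rows, one dict per row.")
    row_count: int = Field(..., description="Rows returned, after truncation.")
    truncated: bool = Field(..., description="True if max_rows cut off results.")
    next_actions: list[str] = Field(
        default_factory=list,
        description="Hints such as 'refine the query and call query_bigquery again'.",
    )
```

Note the deterministic output shape: every key is present on every call, so the model never has to guess which fields exist.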
## Part 3 — OAuth 2.0 client credentials implementation
- Where the secret lives
- How it's loaded on cold start
- Refresh strategy for long-lived sessions
- What happens if auth fails mid-call (don't silently retry forever)
- Test that a rotated secret is picked up without restart (or document that it isn't)
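The token-handling piece can be as small as a cached provider that refreshes shortly before expiry. This is a stdlib-only sketch of the standard RFC 6749 client-credentials exchange; the class name, the refresh margin, and the assumption that the token endpoint returns `access_token`/`expires_in` JSON are illustrative:

```python
import json
import time
import urllib.parse
import urllib.request


class TokenProvider:
    """Fetches and caches an OAuth 2.0 client-credentials token.

    Credentials are injected at construction (loaded from env/secret
    store on cold start). A rotated secret requires a restart with this
    design — document that, or re-read the secret inside _refresh().
    """

    REFRESH_MARGIN_S = 60  # refresh this long before the real expiry

    def __init__(self, token_url: str, client_id: str, client_secret: str):
        self._token_url = token_url
        self._client_id = client_id
        self._client_secret = client_secret
        self._token = None
        self._expires_at = 0.0

    def get_token(self) -> str:
        # Lazy refresh: long-lived sessions get a fresh token automatically.
        if self._token is None or time.monotonic() >= self._expires_at:
            self._refresh()
        return self._token

    def _refresh(self) -> None:
        body = urllib.parse.urlencode({
            "grant_type": "client_credentials",
            "client_id": self._client_id,
            "client_secret": self._client_secret,
        }).encode()
        req = urllib.request.Request(self._token_url, data=body, method="POST")
        # A failure here raises immediately -- no silent infinite retry.
        with urllib.request.urlopen(req, timeout=10) as resp:
            payload = json.loads(resp.read())
        self._token = payload["access_token"]
        self._expires_at = (
            time.monotonic() + payload.get("expires_in", 3600) - self.REFRESH_MARGIN_S
        )
```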
## Part 4 — Implementation in Python 3.12 + poetry
Full server implementation. Suggested layout:
- `server.py` — MCP server setup
- `tools/query_bigquery.py` — handler
- `clients/upstream.py` — thin wrapper around the existing API with retries, timeouts, and a single place to instrument
- `schemas.py` — Pydantic schemas shared between handler and tests
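The thin upstream client is the piece most worth getting right: one entry point, so timeouts, auth headers, and metrics live in exactly one place. A stdlib-only sketch, with an injectable transport so tests run without a network (class and parameter names are illustrative):

```python
import json
import time
import urllib.request
from typing import Any, Callable, Optional


class UpstreamClient:
    """Single entry point to the existing BigQuery REST endpoint."""

    def __init__(
        self,
        base_url: str,
        get_token: Callable[[], str],
        timeout_s: float = 30.0,
        transport: Optional[Callable[[str, dict], Any]] = None,
    ):
        self._base_url = base_url.rstrip("/")
        self._get_token = get_token
        self._timeout_s = timeout_s
        # Injectable transport: tests swap in a fake instead of real HTTP.
        self._transport = transport or self._http_post

    def request(self, path: str, body: dict) -> dict:
        started = time.monotonic()
        try:
            return self._transport(f"{self._base_url}{path}", body)
        finally:
            # The single place to instrument: latency, path, outcome.
            _elapsed = time.monotonic() - started

    def _http_post(self, url: str, body: dict) -> dict:
        req = urllib.request.Request(
            url,
            data=json.dumps(body).encode(),
            headers={
                "Authorization": f"Bearer {self._get_token()}",
                "Content-Type": "application/json",
            },
            method="POST",
        )
        with urllib.request.urlopen(req, timeout=self._timeout_s) as resp:
            return json.loads(resp.read())
```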
## Part 5 — Retry + timeout policy
- Per-request timeout: X seconds, cancellable
- Retry: only on idempotent errors (list the HTTP codes), max 3 attempts, exp backoff with jitter
- Never retry on 4xx except 408/429
- Surface to the model whether we retried (so it doesn't retry a third time itself)
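The policy above can be sketched as a small helper. The retryable status set, attempt count, and base delay are illustrative defaults; returning the attempt count is what lets the tool response tell the model whether retries already happened:

```python
import random
import time


RETRYABLE_STATUSES = {408, 429, 500, 502, 503, 504}


class UpstreamError(Exception):
    def __init__(self, status: int):
        super().__init__(f"upstream returned HTTP {status}")
        self.status = status


def with_retries(call, max_attempts=3, base_delay_s=0.5, sleep=time.sleep):
    """Run `call`, retrying only on retryable statuses.

    Exponential backoff with full jitter; returns (result, attempts) so
    the caller can surface the retry count to the model.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return call(), attempt
        except UpstreamError as exc:
            # Never retry 4xx except 408/429; never exceed max_attempts.
            if exc.status not in RETRYABLE_STATUSES or attempt == max_attempts:
                raise
            # Full jitter: sleep somewhere in [0, base * 2^(attempt - 1)].
            sleep(random.uniform(0, base_delay_s * (2 ** (attempt - 1))))
```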
## Part 6 — Tests
- Golden-input unit tests (table-driven)
- Validation tests: each schema rule has a rejecting test case
- Integration test against a recorded fixture of the upstream
- Contract test: run `tools/list` and snapshot the advertised schema — fail CI if it drifts
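The contract test reduces to a generic snapshot check: serialize whatever schema `tools/list` advertises and compare it to a committed file. A sketch (the function name and the create-on-first-run behavior are design choices, not a standard API):

```python
import json
import pathlib


def check_schema_snapshot(schema: dict, snapshot_path: pathlib.Path) -> None:
    """Fail if the advertised tool schema drifts from the committed snapshot.

    First run with no snapshot writes it and fails, forcing you to review
    and commit it; afterwards CI fails on any unreviewed schema change.
    """
    current = json.dumps(schema, indent=2, sort_keys=True)
    if not snapshot_path.exists():
        snapshot_path.write_text(current)
        raise AssertionError(f"snapshot created at {snapshot_path}; review and commit it")
    assert snapshot_path.read_text() == current, "tool schema drifted from snapshot"
```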
## Part 7 — Rollout checklist
- Local smoke test with the MCP Inspector (`npx @modelcontextprotocol/inspector`)
- Wire into one host, run 10 real tasks, record transcripts
- Review transcripts — did the model use it correctly? Did it recover from errors?
- Iterate on descriptions based on observed model mistakes (this is the highest-leverage step)
Deliver real code. The file should be something a reviewer could approve on a PR.