Connect your company's internal systems to Claude Code via MCP — from a generic fetch server to writing a dedicated MCP server, covering deployment platforms, monitoring, ticketing, and tool design principles.
The previous article covered connecting to databases. Databases have standardized protocols, so integration is relatively straightforward. But most teams also rely on a whole constellation of internal systems day-to-day: deployment platforms, monitoring dashboards, ticket trackers, internal APIs, and config stores.
These systems rarely have off-the-shelf MCP servers — but nearly all of them expose HTTP APIs. This article shows you how to use MCP to connect these internal tools to Claude Code, so it can check monitoring, inspect deployment status, and manage tickets for you directly.
There are two ways to integrate internal tools:
Approach 1: Use a generic HTTP MCP server
The community offers general-purpose MCP servers that can wrap any REST API as an MCP tool. You write an API description file and it converts it into tools Claude can call. This works well when the API surface is simple and you don't need complex logic.
Approach 2: Write your own MCP server
Use the TypeScript or Python MCP SDK to build a purpose-built server with full control over tool definitions, parameter validation, and error handling. This is the way to go when you need to combine multiple APIs, transform data, or add business logic.
This article covers both approaches, starting with the simpler one.
The lightest-weight option is the official reference fetch server, mcp-server-fetch, which lets Claude make HTTP requests directly. Configuration is minimal:
{
  "mcpServers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }
  }
}
Once configured, Claude can call internal APIs right away:
Check the current status of https://deploy.internal.com/api/v1/services/user-service on the deployment platform for me
Claude sends a GET request, receives the response, and parses it for you.
But this approach has clear limitations: you have to spell out full URLs in every request, authentication headers must be supplied by hand each time, and Claude can only guess each API's shape from whatever you type in the prompt. Fine for ad-hoc use, but not a long-term solution.
When you use an internal system regularly, building a dedicated MCP server is the better choice. Let's walk through a real scenario: integrating with a company deployment platform.
Assume your deployment platform exposes these APIs:
GET /api/v1/services — List all services
GET /api/v1/services/:name/status — Check service status
POST /api/v1/services/:name/deploy — Trigger a deployment
GET /api/v1/services/:name/logs — View recent deployment logs

Start by initializing the project:
mkdir mcp-deploy && cd mcp-deploy
npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node
npx tsc --init
Core implementation in src/index.ts:
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
const API_BASE = process.env.DEPLOY_API_URL!;
const API_TOKEN = process.env.DEPLOY_API_TOKEN!;
async function api(path: string, method = "GET", body?: unknown) {
const res = await fetch(`${API_BASE}${path}`, {
method,
headers: {
Authorization: `Bearer ${API_TOKEN}`,
"Content-Type": "application/json",
},
body: body ? JSON.stringify(body) : undefined,
});
if (!res.ok) {
throw new Error(`API error: ${res.status} ${await res.text()}`);
}
return res.json();
}
const server = new McpServer({
name: "deploy-platform",
version: "1.0.0",
});
// List all services
server.tool("list_services", "List all services on the deployment platform along with their status", {}, async () => {
const data = await api("/api/v1/services");
return { content: [{ type: "text", text: JSON.stringify(data, null, 2) }] };
});
// Check a single service's status
server.tool(
  "service_status",
  "View the current deployment status, version, and health check results for a given service",
  { name: z.string().describe("Service name, e.g. user-service") },
async ({ name }) => {
const data = await api(`/api/v1/services/${name}/status`);
return { content: [{ type: "text", text: JSON.stringify(data, null, 2) }] };
}
);
// View deployment logs
server.tool(
  "deploy_logs",
  "View recent deployment logs for a given service",
  {
    name: z.string().describe("Service name"),
    limit: z.number().optional().default(10).describe("Number of entries to return, default 10"),
},
async ({ name, limit }) => {
const data = await api(`/api/v1/services/${name}/logs?limit=${limit}`);
return { content: [{ type: "text", text: JSON.stringify(data, null, 2) }] };
}
);
// Trigger a deployment
server.tool(
  "trigger_deploy",
  "Trigger a deployment for a given service. This is a write operation that actually affects production",
  {
    name: z.string().describe("Service name"),
    version: z.string().describe("Version number or git ref to deploy"),
},
async ({ name, version }) => {
const data = await api(`/api/v1/services/${name}/deploy`, "POST", {
version,
});
return { content: [{ type: "text", text: JSON.stringify(data, null, 2) }] };
}
);
const transport = new StdioServerTransport();
server.connect(transport);
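The api() helper above fails hard on the first network hiccup. If your internal APIs are occasionally flaky, a small retry wrapper helps. This is an illustrative sketch — the withRetry name and backoff numbers are my own, not part of any platform API:

```typescript
// Illustrative retry helper: retries a failing async call with
// exponential backoff before giving up and rethrowing the last error.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 200
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Back off 200ms, 400ms, 800ms… between attempts
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}

// Usage inside a tool handler:
//   const data = await withRetry(() => api("/api/v1/services"));
```

Keep the attempt count low — a tool call that hangs for a minute is a worse experience for the user than one that fails fast with a clear error.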
Compile (make sure tsconfig.json sets "outDir": "dist" so the output lands where the config below expects it):
npx tsc
Then register the server in your MCP configuration:
{
"mcpServers": {
"deploy": {
"command": "node",
"args": ["/path/to/mcp-deploy/dist/index.js"],
"env": {
"DEPLOY_API_URL": "https://deploy.internal.com",
"DEPLOY_API_TOKEN": "your-api-token"
}
}
}
}
Put the token in .claude/settings.local.json (not committed to git) and the URL in .claude/settings.json (committed and shared with the team).
Once everything is configured, conversations become natural:
What's the current status of user-service?
→ Claude calls service_status("user-service")
→ Response: running, version v2.3.1, last deployed 2 hours ago, all health checks passing
Have any recent deployments failed?
→ Claude calls deploy_logs("user-service", 20)
→ Analyzes the logs and tells you the third deployment was rolled back because a health check timed out
Deploy user-service to v2.3.2
→ Claude calls trigger_deploy("user-service", "v2.3.2")
→ Because the tool description flags it as a write operation, Claude asks you to confirm first
Which operations should you expose to Claude? This deserves careful thought.
Read operations are safe to expose. Checking status, viewing logs, searching tickets — these have no side effects, and there's no damage if Claude gets it wrong.
Write operations fall into two categories:
Low-risk writes are fine to expose, as long as you clearly label them in the tool description. Claude automatically asks for user confirmation on operations marked as having side effects. Examples: creating tickets, sending messages, updating configuration.
High-risk writes are best left out. Deleting resources, triggering rollbacks, modifying permissions — the consequences are severe and irreversible, so keeping them manual is safer.
If you must expose write operations, do at least two things: state explicitly in the tool description that the tool has side effects, so Claude asks for confirmation before calling it, and keep each tool's scope as narrow as possible — a tool that deploys one named service is far safer than one that runs arbitrary commands.
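Beyond labeling, one heavier pattern worth sketching (illustrative, not an MCP built-in): make the risky tool two-phase. The first call registers the pending action and returns a single-use confirmation token; only a second call that echoes the token back actually performs it. The helper names here are hypothetical:

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical two-phase confirmation for high-risk writes.
// First call: no token → record the pending action and hand back a token.
// Second call: valid, unexpired token → allow the action to proceed.
const pending = new Map<string, { action: string; expiresAt: number }>();

function requestConfirmation(action: string, ttlMs = 60_000): string {
  const token = randomUUID();
  pending.set(token, { action, expiresAt: Date.now() + ttlMs });
  return token;
}

function consumeConfirmation(token: string): string | null {
  const entry = pending.get(token);
  pending.delete(token); // single use, whether valid or not
  if (!entry || entry.expiresAt < Date.now()) return null;
  return entry.action;
}
```

In a handler like trigger_deploy, you would return the token plus a human-readable warning when no token is supplied, and only call the underlying API once consumeConfirmation succeeds.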
| System | Tools to Expose | Notes |
|---|---|---|
| Deployment platform (K8s / Kamal) | Service status, logs, trigger deploys | Add confirmation for write ops |
| Monitoring (Grafana / Datadog) | Query metrics, view alert history | Limit query time ranges to avoid pulling too much data |
| Ticket tracker (Jira / Linear) | Search tickets, create tickets, update status | Creating tickets is a write op, but low-risk |
| Internal docs (Notion / Confluence) | Search docs, read page content | Watch out for pagination — don't fetch too much at once |
| Config store (Consul / etcd) | Read config, diff across environments | Read-only — don't expose writes |
| CI/CD (GitHub Actions / Jenkins) | View build status, trigger builds | Triggering builds is a medium-risk write op |
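Several of the notes above come down to result size: don't let one tool call flood Claude's context window. A simple guard (a sketch — capItems is an illustrative name, not a library function) is to cap list-shaped results and say explicitly that they were truncated, so Claude can narrow the query instead of silently missing data:

```typescript
// Illustrative result cap: keep tool output small and state clearly
// when items were dropped so the model knows to refine its query.
function capItems<T>(items: T[], max = 50): { items: T[]; truncated?: string } {
  if (items.length <= max) return { items };
  return {
    items: items.slice(0, max),
    truncated: `Showing ${max} of ${items.length} results; refine the query to see the rest.`,
  };
}
```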
Writing MCP tools is different from writing APIs. APIs are designed for programmers; tools are designed for AI. A few principles are worth keeping in mind:
Use clear, descriptive tool names
✗ get_svc_stat — Claude may not guess the abbreviation correctly
✓ service_status — Immediately clear what it does
Write descriptions for the AI
Tool descriptions aren't human documentation — they're how Claude decides when to call a tool. Be explicit about what the tool does, what it returns, and when it should be used.
✗ "Get service status"
✓ "View the current deployment status, version number, and health check results for a given service. Use when the user asks whether a service is running normally"
Define parameters with zod and .describe()
Parameters with .describe() tell Claude what to fill in. Without descriptions, Claude can only guess from the parameter name.
Return structured data
MCP tools return text, but you should return formatted JSON whenever possible. Claude handles structured data far more accurately than free-form text.
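As a sketch of what this means in practice (the field names here are hypothetical, not from a real deploy API): map raw payloads down to the few fields Claude actually needs, dropping internal noise, before serializing.

```typescript
interface RawService {
  name: string;
  status: string;
  version: string;
  deployedAt: string;
  internalId: number;    // backend bookkeeping Claude doesn't need
  ownerTeamUuid: string; // likewise noise
}

// Keep only decision-relevant fields and emit compact, predictable JSON.
function summarizeServices(services: RawService[]): string {
  const slim = services.map(({ name, status, version, deployedAt }) => ({
    name,
    status,
    version,
    deployedAt,
  }));
  return JSON.stringify(slim, null, 2);
}
```

Dropping noise fields also keeps token usage down — raw API responses often carry far more than the answer needs.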
Keep the right granularity
Don't cram a complex workflow into a single tool. Don't split a simple query across three tools. The rule of thumb: one tool performs one self-contained, meaningful operation.
There are several options for where to keep your MCP server code:
Inside the project repository (recommended starting point)
your-project/
├── .claude/settings.json
├── mcp-servers/
│ └── deploy/
│ ├── src/index.ts
│ ├── package.json
│ └── tsconfig.json
└── ...
The advantage is that code and configuration live together — teammates just clone the repo, install dependencies, and they're good to go.
Standalone repository
When an MCP server needs to be shared across multiple projects, put it in its own repo and publish it as an npm package or Docker image.
{
"mcpServers": {
"deploy": {
"command": "npx",
"args": ["-y", "@yourcompany/mcp-deploy-server"]
}
}
}
Global installation
For company-wide MCP servers (e.g., those integrating with centralized auth or a unified logging platform), install globally and configure in ~/.claude/settings.json.
The most common issues during MCP server development are "Claude isn't calling my tool" and "it called the tool but got an error."
Verify the server is running
After restarting Claude Code, type /mcp to see the list of connected MCP servers. If yours isn't there, double-check your command and args.
Test the server independently
MCP servers communicate over stdio, so you can test directly from the terminal:
echo '{"jsonrpc":"2.0","id":1,"method":"tools/list"}' | node dist/index.js
If you get a JSON-RPC response back — even an error saying the server isn't initialized, since MCP requires an initialize handshake before tools/list — the process and its stdio transport are working. For a full interactive check, the official MCP Inspector (npx @modelcontextprotocol/inspector node dist/index.js) handles the handshake for you.
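The full handshake is three messages: an initialize request, an initialized notification, then tools/list. A small script can emit them in order so you can pipe them into the server — a sketch, where the file name and client info are illustrative:

```typescript
// Build the minimal MCP stdio request sequence by hand:
// initialize (request) → notifications/initialized (notification) → tools/list.
// Pipe the printed lines into the server, e.g.:
//   npx tsx handshake.ts | node dist/index.js
function rpc(method: string, id?: number, params?: unknown): string {
  return JSON.stringify({
    jsonrpc: "2.0",
    ...(id !== undefined ? { id } : {}), // notifications carry no id
    method,
    ...(params !== undefined ? { params } : {}),
  });
}

const handshake = [
  rpc("initialize", 1, {
    protocolVersion: "2024-11-05",
    capabilities: {},
    clientInfo: { name: "smoke-test", version: "0.0.0" },
  }),
  rpc("notifications/initialized"),
  rpc("tools/list", 2),
];

console.log(handshake.join("\n"));
```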
Check Claude's tool call logs
Claude Code displays the input and output of each tool call. If parameters are wrong, it usually means the tool description or parameter definitions aren't clear enough and Claude misunderstood.
Let's tie it all together with a real example. Suppose you want to integrate Sentry into Claude Code so it can query production errors directly.
// Assumes the same api() helper pattern as the deploy server, pointed at
// Sentry's API base URL, with ORG and PROJECT supplied via environment variables.
server.tool(
  "search_errors",
  "Search recent errors in Sentry. Use for investigating production issues and checking error trends",
  {
    query: z.string().describe("Search keywords, e.g. an error message or function name"),
    hours: z.number().optional().default(24).describe("How many hours back to search, default 24"),
  },
async ({ query, hours }) => {
const since = new Date(Date.now() - hours * 3600000).toISOString();
const data = await api(
`/api/0/projects/${ORG}/${PROJECT}/issues/?query=${encodeURIComponent(query)}&start=${since}&sort=date`
);
const summary = data.map((issue: any) => ({
title: issue.title,
count: issue.count,
firstSeen: issue.firstSeen,
lastSeen: issue.lastSeen,
link: issue.permalink,
}));
return {
content: [{ type: "text", text: JSON.stringify(summary, null, 2) }],
};
}
);
server.tool(
  "error_details",
  "View the details of a specific Sentry error, including the stack trace and the most recent event",
  { issueId: z.string().describe("Sentry issue ID") },
async ({ issueId }) => {
const [issue, latest] = await Promise.all([
api(`/api/0/issues/${issueId}/`),
api(`/api/0/issues/${issueId}/events/latest/`),
]);
return {
content: [
{
type: "text",
text: JSON.stringify(
{
title: issue.title,
count: issue.count,
users: issue.userCount,
stacktrace: latest.entries?.find(
(e: any) => e.type === "exception"
),
},
null,
2
),
},
],
};
}
);
With this integration in place, debugging production issues becomes a conversation:
Any new 500 errors in the last 4 hours?
→ Claude searches Sentry
→ Finds 3 new issues; the most severe affects 120 users
→ Automatically pulls the stack trace and identifies a null pointer exception
→ Locates the corresponding spot in the code and proposes a fix
From discovering the issue to pinpointing the code — the entire process happens in a single conversation.
This article covered how to use MCP to integrate internal tools. The core idea is simple: if an internal system has an HTTP API, wrap it in an MCP server, and Claude can use it directly.
Every example in this article wraps an existing API — the deployment platform and Sentry already have interfaces, and the MCP server just adds a layer of translation and adaptation. The next article will cover a different scenario: when there's no existing API for the capability you need, and you have to build an MCP server from scratch — implementing your own logic, managing state, and handling complex multi-step interactions.