From Zero to MCP: What Building My First Model Context Protocol Server Taught Me

Ezequiel Godoy
Platform Engineering · MCP · Python

After hearing about MCP (Model Context Protocol) on a podcast, I immediately saw its potential: a standard for AI assistants to interact with external tools. As a Python learner diving into AI tooling, I decided to build an MCP server of my own. By the end of this post, you’ll know how to start yours and put it to work for real automation.

I picked the Dutch national railway API, a real-world system I use daily. This post covers lessons learned building my first MCP server—leveraging AI for development, testing with new tools, and extending automations beyond chat.

The end result? An AI that checks my commute before I wake up. But before diving into the details, let’s step back and discuss why MCP matters for engineers.

What is MCP, and Why Should Platform Engineers Care?

Model Context Protocol is Anthropic’s open standard for connecting AI assistants to external data sources and tools. As Anthropic’s team described it on the Talk Python podcast, think of it as a USB-C for AI: a universal interface that lets any compatible AI assistant communicate with any compatible server.

For platform engineers, this matters because MCP servers are, in effect, APIs designed to be consumed by large language models (LLMs). Unlike typical APIs built for humans or traditional programs, an MCP server describes its tools and data in a form an AI model can discover and call directly (there’s a concrete example of what that looks like after the list below).

  • Developer Experience tooling: Your AI assistant (Claude, in my case) can answer natural-language questions about your systems, such as platform status, CI/CD pipeline health, or Kubernetes pod logs, by making requests through MCP servers instead of you querying the underlying APIs yourself.
  • Self-service acceleration: Instead of building dashboards, you build MCP servers. The AI becomes the interface.
  • Automation potential: MCP servers can do more than support conversation. You can use them to automate regular tasks, trigger workflows in response to events, or integrate with home automation systems, all via AI-driven interactions with your tools.
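
To make that concrete, here is roughly what a single tool looks like when a server advertises it. I’m showing it as a Python dict rather than raw protocol JSON, and the tool itself is made up, but the field names follow the shape of the MCP tools/list response:

# Roughly what an MCP server advertises for one tool; the tool is hypothetical,
# the field names follow the MCP tools/list response.
example_tool = {
    "name": "get_disruptions",
    "description": "List current disruptions for a given train route.",
    "inputSchema": {  # JSON Schema describing the parameters the AI may pass
        "type": "object",
        "properties": {
            "route": {
                "type": "string",
                "description": "Route identifier, e.g. 'Utrecht-Amsterdam'",
            },
        },
        "required": ["route"],
    },
}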

The MCP ecosystem is new but developing quickly. I used Python both to improve my skills and because its libraries make it easier to integrate with APIs. FastMCP, the framework I selected, abstracts much of the background work and makes building MCP servers approachable, even for those unfamiliar with the standard.

The Unexpected MVP: AI-Assisted API Analysis

Here’s where things got interesting. The API I was working with had comprehensive but dense documentation. Multiple endpoints, nested response structures, and authentication quirks. Traditionally, I’d spend hours reading docs, experimenting with curl commands, and building mental models of how everything fits together.

Instead, I tried something different: I fed the API documentation directly to Claude and asked it to help me understand the structure.

Within minutes, I had a clear picture of which endpoints mattered, how authentication worked, and what edge cases to watch for. More importantly, Claude helped me identify issues I would have missed: handling name variations in user input, distinguishing between optional and required parameters, and identifying inconsistencies in response formats.

This AI-assisted analysis cut down the time I spent reading documentation and trying out endpoints. It changed my development workflow: instead of just reading and experimenting, I now start by analyzing the docs with AI, confirming key details, and building with greater confidence that I haven’t missed anything vital.

To help you get started quickly, here are two actionable steps you can try today:

  1. Dump your API documentation into Claude and ask for a summary of the most relevant endpoints. This not only saves time but also provides a clear roadmap for what to prioritize.
  2. After getting Claude’s summary, confirm with it how authentication works and walk through likely edge cases and error responses, so you’re prepared for surprises when you wire up the API. (A scripted version of this step is sketched right after this list.)
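
I did this interactively in Claude Desktop, but the same step can be scripted with the Anthropic Python SDK. This is a rough sketch; the model name, file path, and prompt are placeholders to adapt to your own setup:

import pathlib

import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the environment

# Placeholder path: export or copy the API reference into a local text/markdown file first
docs = pathlib.Path("api-docs.md").read_text()

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder; use whichever model you have access to
    max_tokens=2000,
    messages=[{
        "role": "user",
        "content": (
            "Here is an API's documentation. Summarise the endpoints most relevant to my use case, "
            "explain how authentication works, and list edge cases I should handle:\n\n" + docs
        ),
    }],
)
print(response.content[0].text)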

A word of caution: AI analysis isn’t a replacement for reading the actual docs; I still hit surprises during implementation. It is remarkably effective for a first pass at an unfamiliar API, but models can hallucinate details such as endpoints, parameters, or behaviors, so cross-check anything important against the original documentation.

This workflow applies to any API you’re integrating with. Next time you’re staring at a 50-page API reference, try feeding it to an AI first.

Technical Decisions: Why FastMCP and Python

For my project, I chose FastMCP, a Python library that streamlines the creation of MCP servers. FastMCP relies on decorators for tool definitions, making the setup process similar to other familiar Python frameworks like FastAPI.

from fastmcp import FastMCP

mcp = FastMCP("my-server")

@mcp.tool()
async def get_status(service: str) -> str:
    """Get current status for a service."""
    # api_client and format_response come from api_client.py (the external API wrapper)
    status = await api_client.get_status(service)
    return format_response(status)

if __name__ == "__main__":
    mcp.run()  # stdio transport by default

What I appreciated about FastMCP:

  • Minimal boilerplate: The framework handles the protocol layer; you focus on business logic.
  • Type hints matter: MCP uses your function signatures to generate tool descriptions for the AI. Well-typed code directly improves how Claude understands your tools (see the sketch after this list).
  • Async-first: Modern Python async patterns work naturally.
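
Here is a sketch of what that looks like in practice: parameter-level descriptions attached with Annotated and Pydantic’s Field. The tool name and parameters are illustrative rather than taken from my actual server, but FastMCP folds this kind of metadata into the tool schema the AI sees.

from typing import Annotated

from pydantic import Field
from fastmcp import FastMCP

mcp = FastMCP("my-server")

@mcp.tool()
async def get_departures(
    station: Annotated[str, Field(description="Station name or code, e.g. 'Amsterdam Centraal'")],
    limit: Annotated[int, Field(description="Maximum departures to return", ge=1, le=20)] = 5,
) -> str:
    """List upcoming departures for a station, including platform and delay information."""
    # A real implementation would call the API client here; the signature alone
    # is what becomes the tool schema the AI reads.
    return f"(placeholder) showing up to {limit} departures for {station}"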

The project structure ended up clean:

my-mcp-server/
├── src/
│   ├── server.py        # MCP server with tools
│   ├── api_client.py    # External API wrapper
│   └── models.py        # Pydantic models
├── tests/
└── pyproject.toml

One decision I’d make differently: I initially put too much logic in the tool functions themselves. Separating the API client layer earlier would have made testing significantly easier. If you’re building your own, start with that separation from day one.
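
A minimal sketch of that separation, with illustrative names and an assumed header-based API key (your API’s authentication will differ):

# api_client.py - thin wrapper around the external API; easy to unit test in isolation
import httpx

class ApiClient:
    def __init__(self, base_url: str, api_key: str) -> None:
        self._http = httpx.AsyncClient(base_url=base_url, headers={"X-Api-Key": api_key})

    async def get_status(self, service: str) -> dict:
        response = await self._http.get(f"/status/{service}")
        response.raise_for_status()
        return response.json()

# server.py then keeps each MCP tool thin: validate input, call the client, format the output.

With that split, each tool function stays a few lines long, and the client can be covered by ordinary unit tests with a mocked HTTP transport.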

The Testing Revelation: MCP Inspector

Testing MCP servers presented a unique challenge. Traditional unit tests work for the API client layer, but how do you test the MCP protocol integration? How do you verify that Claude will actually understand your tool descriptions?

Enter MCP Inspector—a debugging tool that lets you interact with your MCP server directly. You can:

  • See all registered tools and their schemas.
  • Execute tools with test inputs.
  • Inspect the JSON responses.
  • Verify that your descriptions make sense.

This was a revelation. Instead of deploying to Claude Desktop and hoping for the best, I could iterate rapidly, making improvements in real time. It’s worth trying for yourself, and getting started is straightforward: the inspector ships as a Node package, so you can launch it against your server with npx @modelcontextprotocol/inspector.

The feedback loop dropped from minutes to seconds. I caught issues like unclear parameter descriptions, missing error handling, and response formatting problems before they ever reached a real AI conversation. For anyone building MCP servers: start with MCP Inspector from day one. Don’t wait until you’re “ready” to test with Claude. The inspector-driven development workflow is faster and catches more issues.

Beyond Chat: Practical Automation

Here’s where MCP gets interesting beyond the obvious “chat with your data” use case.

Once my MCP server worked with Claude Desktop, I connected it to Home Assistant to solve a real annoyance: checking train disruptions before my commute. One day, I got an early alert about a delay, left ten minutes earlier, caught an alternative train, and arrived on time. That smoother start to the day is exactly the kind of practical payoff I was hoping for.

The setup:

  1. Home Assistant reads my calendar to know when I have office days (I work hybrid).
  2. A morning automation triggers at 7:00 AM on commute days.
  3. The automation calls my MCP server to check for disruptions on my route.
  4. If there are delays, I get a notification on my phone with alternatives.

# Home Assistant automation (simplified)
automation:
  - alias: "Morning Commute Check"
    trigger:
      - platform: time
        at: "07:00:00"
    condition:
      - condition: state
        entity_id: calendar.work
        state: "on"
    action:
      - service: rest_command.check_commute
        # Simplified: in practice the command's response needs to be captured
        # (e.g. via response_variable) before the template below can see `disruptions`.
      - condition: template
        value_template: "{{ disruptions | length > 0 }}"
      - service: notify.mobile
        data:
          message: "Disruptions on your route. Consider leaving earlier."

The MCP server I built for chatting with Claude doubles as the backend for this automation. Same code, different interface.
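
Concretely, a non-chat entry point can be as small as a script that imports the same client the MCP tools use. The sketch below uses the illustrative ApiClient from earlier and placeholder names; in my actual setup Home Assistant calls the server rather than a script, but the reuse principle is the same:

# check_commute.py - tiny non-MCP entry point reusing the same API client
import asyncio
import json
import os

from api_client import ApiClient  # the same wrapper the MCP tools call

async def main() -> None:
    client = ApiClient(
        base_url="https://api.example.com",    # placeholder
        api_key=os.environ["RAIL_API_KEY"],    # placeholder environment variable
    )
    status = await client.get_status("commute-route")  # placeholder service/route id
    print(json.dumps(status))

if __name__ == "__main__":
    asyncio.run(main())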

This pattern—build once, use everywhere—is what makes MCP compelling for platform engineers. Your internal tools don’t need separate implementations for chat, automation, and dashboards. The MCP server becomes the single source of truth.

Other ideas I’m exploring:

  1. A Slack bot that answers team questions using internal MCP servers: a low-effort way to put existing servers to work in daily team communication.
  2. CI/CD notifications enriched with context from platform tools: medium effort, but it cuts down the context switching developers do when chasing build failures.
  3. Morning briefings that aggregate data from multiple MCP sources: the most ambitious of the three, but consolidating several servers into one overview is where the effort starts to pay off strategically.

MCP Readiness Checklist

  1. Enhance Descriptions: Your tool’s docstring and parameter descriptions are the documentation the AI reads. Make them clear and precise; it directly improves how the AI uses your tools.
  2. Focus on Error Handling: Error messages are more than notifications. Craft them to give the AI actionable feedback, such as suggesting an input correction, rather than returning a generic error (see the sketch after this list).
  3. Start with Fewer, Better Tools: A small set of thoughtfully designed tools, with clear inputs, outputs, and edge cases, pays off more than spreading yourself across many half-finished ones.
  4. Think Beyond Chat from the Start: Design your MCP server with uses other than conversation in mind. Clean separation of concerns and predictable response formats make later automation much easier.
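
To make the error-handling point concrete, here is a sketch of a tool that returns feedback the AI can act on. The lookup table is a stand-in for whatever station resolution a real server would do:

from fastmcp import FastMCP

mcp = FastMCP("my-server")

# Stand-in lookup table; a real server would query the API or a cached station list
STATION_CODES = {"utrecht centraal": "UT", "amsterdam centraal": "ASD"}

@mcp.tool()
async def get_station_code(station: str) -> str:
    """Resolve a station name to its short code."""
    code = STATION_CODES.get(station.strip().lower())
    if code is None:
        # Return text the AI can act on, instead of a bare exception or "error 404"
        return (
            f"No station matched '{station}'. "
            "Try the full official name, e.g. 'Utrecht Centraal', or a station code like 'UT'."
        )
    return code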

What’s Next

The MCP ecosystem is still early. If you’re a Python developer or platform engineer curious about AI tooling, now is a great time to experiment. Build something small, learn the protocol, and see how it changes your thinking about developer experience.

Pick an API you actually use. Build an MCP server for it. Then ask yourself: what else could this power do?

The code for my project is open source: mcp-server-ns-bridge. Questions, feedback, and contributions welcome.

I’d love to hear about your own MCP experiments and the APIs you’re integrating. What API will you ‘USB-C’ next?