Anthropic’s Model Context Protocol (MCP): The “USB-C” Standard for AI Integration (Part 2)
May 15th, 2025
Read more: Anthropic’s Model Context Protocol (MCP): The “USB-C” Standard for AI Integration (Part 1)
Introduction
In Part 1 of our exploration into Anthropic’s Model Context Protocol (MCP), we examined how this open standard aims to create a common language for AI systems to access external data, tools, and resources, much like how USB-C became the universal connector for devices. In this follow-up, Part 2, we move from concept to code by walking through a minimal but functional example that demonstrates how MCP can be practically implemented. Using Python, we set up a simple MCP server that serves structured documents and a client that connects, retrieves, and processes them to build context for an LLM-powered application. This hands-on dive shows how MCP can act as the bridge between language models and enterprise knowledge, enabling more dynamic and scalable AI workflows.
MCP Server Code (server.py)
import logging

from mcp.server.fastmcp import FastMCP

# Configure basic logging for the server.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Create an MCP server instance with a name. This will handle MCP protocol details.
mcp = FastMCP("DocumentServer")

# Example text-based resources (short documents) to serve.
# In a real scenario, these could be file contents or database records.
documents = {
    "doc1": (
        "Python is a high-level, interpreted programming language created by Guido van Rossum and first released in 1991. "
        "Known for its readability and broad standard library, Python supports multiple programming paradigms, including procedural, object-oriented, and functional programming. "
        "It is widely used in domains such as web development (with frameworks like Django and Flask), data science (with libraries like pandas, NumPy, and scikit-learn), "
        "artificial intelligence and machine learning (with TensorFlow, PyTorch), automation, scripting, and education. "
        "Python's simplicity and community support have made it one of the most popular programming languages in the world."
    ),
    "doc2": (
        "OpenAI's GPT-4 is a state-of-the-art large language model that excels in understanding and generating natural language. "
        "Trained on a mixture of licensed and publicly available data, GPT-4 exhibits strong reasoning, summarization, translation, and question-answering abilities. "
        "It can be used to build chatbots, assist with programming, generate creative content, and even analyze documents. "
        "Despite its impressive capabilities, GPT-4, like all LLMs, has limitations such as hallucination (producing plausible but incorrect information), "
        "context length limitations, and lack of real-time awareness. OpenAI provides access to GPT-4 via their API and tools like ChatGPT."
    ),
    "doc3": (
        "Retrieval-Augmented Generation (RAG) is a technique that enhances language model performance by incorporating external knowledge at runtime. "
        "Instead of relying solely on the model's internal weights, RAG systems retrieve relevant documents from a knowledge base and pass them into the prompt. "
        "This approach improves factual accuracy, allows dynamic updates to the model's knowledge without retraining, and enables domain-specific applications. "
        "RAG is commonly used in customer support, legal tech, search engines, and AI assistants where precise, up-to-date answers are essential."
    ),
    "doc4": (
        "The Model Context Protocol (MCP) is an open standard introduced by Anthropic that defines how AI systems interact with external data and tools. "
        "It uses a JSON-RPC-based architecture where hosts (LLM-powered apps) connect to servers (data sources/tools) over stdio or HTTP transports. "
        "MCP supports capabilities like 'resources' for document access, 'tools' for actions, and 'prompts' for predefined message templates. "
        "It allows AI assistants to securely access enterprise data like databases, documents, and APIs, enabling scalable, context-rich AI workflows. "
        "MCP is model-agnostic and designed to foster an open ecosystem of reusable integrations."
    )
}

# Define MCP resources to expose the documents.
# Each resource is identified by a URI; here we use the scheme "docs://".
@mcp.resource("docs://doc1")
def get_doc1() -> str:
    """Return the content of document 1."""
    logger.info("Serving resource docs://doc1")
    return documents["doc1"]

@mcp.resource("docs://doc2")
def get_doc2() -> str:
    """Return the content of document 2."""
    logger.info("Serving resource docs://doc2")
    return documents["doc2"]

# Optionally, we could define a dynamic resource with a placeholder (e.g. docs://{name}),
# but here we use fixed URIs for simplicity and to allow listing them easily.

# Start the MCP server when this script is run.
# This will listen for incoming MCP client connections over stdio.
if __name__ == "__main__":
    # Running the server via stdio transport.
    # This call will block and wait for a client (e.g., our MCP client) to connect via STDIN/STDOUT.
    mcp.run()
Explanation: We create a named MCP server using FastMCP and register two text resources, docs://doc1 and docs://doc2. Each is exposed via a URI and returns a string (the document text). The @mcp.resource decorator hooks these functions into the MCP server, so any MCP client can list and retrieve them. When the script is executed (e.g., launched by a client over STDIO), mcp.run() starts the server’s event loop and waits for client requests on standard input/output. We use Python’s logging to record when a resource is served, which aids in debugging and auditing.
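As the comment in server.py notes, FastMCP also supports dynamic resources with URI placeholders, which becomes handy once the number of documents grows. Below is a minimal sketch (assuming the same documents dict from server.py) of what a templated handler could look like. Templated resources are advertised separately from fixed ones, which is why our example sticks to fixed URIs that show up directly in a resource listing.

# A sketch of a templated resource: the {name} placeholder in the URI is
# bound to the function parameter, so one handler can serve every document.
@mcp.resource("docs://{name}")
def get_doc(name: str) -> str:
    """Return the content of the requested document, if it exists."""
    if name not in documents:
        raise ValueError(f"Unknown document: {name}")
    return documents[name]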
MCP Client Code (client.py)
import asyncio
import logging
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from openai import OpenAI

# Configure logging for the client.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


async def main():
    # Get the OpenAI API key from the environment (ensure it's configured before running).
    api_key = os.getenv("OPENAI_API_KEY")
    if not api_key:
        logger.error("OpenAI API key not set. Please set the OPENAI_API_KEY env variable.")
        return

    # Initialize the OpenAI client with the API key.
    client = OpenAI(api_key=api_key)

    # Define parameters to launch the MCP server via stdio transport.
    # This will spawn the server.py script as a subprocess.
    server_params = StdioServerParameters(
        command="python3",
        args=["server.py"]
    )

    logger.info("Starting MCP server via stdio...")

    # Launch the MCP server and establish a stdio connection.
    async with stdio_client(server_params) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            # Initialize the MCP session (handshake between client and server).
            try:
                await session.initialize()
                logger.info("MCP session initialized (handshake successful).")
            except Exception as e:
                logger.error(f"Failed to initialize MCP session: {e}")
                return

            # List available resources from the server.
            try:
                resources_result = await session.list_resources()
                logger.info(f"Available resources: {resources_result}")
            except Exception as e:
                logger.error(f"Error listing resources: {e}")
                return

            # Extract the actual resources list from the ListResourcesResult object.
            resources_list = resources_result.resources if hasattr(resources_result, 'resources') else []
            if not resources_list:
                logger.error("No resources found on the MCP server.")
                return

            # Read all resources to provide context for the LLM.
            all_docs = {}
            for resource in resources_list:
                resource_uri = resource.uri if hasattr(resource, "uri") else str(resource)
                resource_name = resource.name if hasattr(resource, "name") else resource_uri.split('/')[-1]
                logger.info(f"Reading resource: {resource_uri}")
                try:
                    meta, content_tuple = await session.read_resource(resource_uri)
                    # Extract the text content from the result's 'contents' field.
                    text_content = ""
                    if isinstance(content_tuple, tuple) and len(content_tuple) == 2 and content_tuple[0] == 'contents':
                        contents_list = content_tuple[1]
                        if isinstance(contents_list, list) and contents_list:
                            for item in contents_list:
                                if hasattr(item, 'text'):
                                    text_content += item.text
                    # Store the document with its URI.
                    all_docs[resource_uri] = {
                        "name": resource_name,
                        "content": text_content
                    }
                    logger.info(f"Added document {resource_uri} to context")
                except Exception as e:
                    logger.error(f"Failed to read resource {resource_uri}: {e}")

            if not all_docs:
                logger.error("Failed to retrieve any document content.")
                return

            # Format all documents for the context.
            context = ""
            for uri, doc_info in all_docs.items():
                context += f"Document: {uri}\n"
                context += f"Content: {doc_info['content']}\n\n"

            # Ask a question that requires the LLM to find information in a specific document.
            user_question = "What does MCP stand for, and what is its purpose according to the documents?"
            prompt = (
                f"You have access to several documents. Review them carefully and answer the question "
                f"based only on the information contained in these documents.\n\n"
                f"{context}\n\nQuestion: {user_question}"
            )
            messages = [
                {"role": "user", "content": prompt}
            ]

            # Call the OpenAI API with the constructed prompt using the v1.0+ client format.
            try:
                response = client.chat.completions.create(
                    model="gpt-4o-mini",
                    messages=messages,
                    temperature=0.3  # Lower temperature for a more focused response
                )
            except Exception as e:
                logger.error(f"OpenAI API call failed: {e}")
                return

            # Extract the assistant's reply from the response (new API format).
            answer = response.choices[0].message.content.strip()
            print("OpenAI response:", answer)


# Run the asynchronous main function.
if __name__ == "__main__":
    asyncio.run(main())
Explanation:
The script launches an MCP server (server.py) via StdioServerParameters and connects using stdio_client, which provides read/write streams over STDIN/STDOUT. These streams are used to create a ClientSession, where session.initialize() performs the MCP handshake.
After initialization, the client lists available resources with session.list_resources() and reads their content via session.read_resource(uri). The content is aggregated into a prompt context. A question is then asked using the v1.0+ OpenAI client's chat.completions.create (with gpt-4o-mini), providing the documents as context. The model's response is printed. The script includes error handling throughout for API key issues, connection failures, and read errors.
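If you just want to verify that the server is exposing its resources correctly, you don't need OpenAI at all. Here is a minimal smoke-test sketch (assuming server.py sits in the same directory) that performs the handshake and prints the resource list:

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def smoke_test():
    # Spawn server.py as a subprocess and speak MCP over its stdin/stdout.
    params = StdioServerParameters(command="python3", args=["server.py"])
    async with stdio_client(params) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()  # MCP handshake
            result = await session.list_resources()
            for resource in result.resources:
                print(resource.uri, "-", resource.name)

if __name__ == "__main__":
    asyncio.run(smoke_test())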
Running the Example
To run this example locally, make sure you have installed the required packages and set your OpenAI API key:
- Install dependencies: pip install mcp openai (installs the MCP SDK and OpenAI SDK).
- Set API key: Set the environment variable OPENAI_API_KEY to your OpenAI API key (or modify the code to pass the key directly when constructing the OpenAI client).
- Run the server and client: Simply run the client script; it will spawn the server automatically. For example: python client.py. The client should list the resources provided by the server, read the content of each resource, and then output the OpenAI model’s response based on that content.
[Screenshot: the server running]
[Screenshot: the client running]
This example demonstrates a basic retrieval-augmented generation flow: the MCP server provides external knowledge (documents) as standardized resources, and the client retrieves and feeds that data into an LLM (OpenAI GPT-4o-mini) to generate an informed answer. By using MCP's official SDK, we ensure compatibility with the 2025 MCP standard and take advantage of a robust, standardized interface for connecting data sources to LLMs. Logging and error handling are included to make the system more maintainable and transparent during execution. The pattern shown here can be extended to integrate various data sources and tools via MCP, empowering more advanced RAG workflows with OpenAI models.
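The same server can be extended beyond resources. For instance, MCP’s "tools" capability (mentioned in doc4) lets a client invoke actions rather than just read data. A hypothetical sketch, reusing the documents dict from server.py:

# A sketch of an MCP tool on the same server: clients invoke it through the
# MCP "tools" capability instead of reading resources directly.
@mcp.tool()
def search_docs(keyword: str) -> list[str]:
    """Return the URIs of documents whose text contains the keyword."""
    return [
        f"docs://{doc_id}"
        for doc_id, text in documents.items()
        if keyword.lower() in text.lower()
    ]

A client could then call this tool, for example via session.call_tool("search_docs", {"keyword": "RAG"}), before deciding which documents to pull into the prompt.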
Conclusion
This practical demonstration reveals just how powerful and approachable the Model Context Protocol can be. By using a lightweight server-client architecture over standard I/O, we’ve enabled an LLM to securely access external knowledge sources, without bloating the model or reinventing integration wheels. Whether you’re building AI agents, internal copilots, or knowledge-driven assistants, MCP offers a model-agnostic and extensible framework that supports resource discovery, contextual grounding, and modular interoperability. As more AI systems adopt MCP, we’re one step closer to an ecosystem where models can plug into knowledge like devices plug into USB-C: universally, reliably, and intelligently.