🟡 🤝 Agents · Wednesday, April 29, 2026 · 2 min read

AWS Shows How to Run a Serverless MCP Proxy on Bedrock AgentCore Runtime for Governance and Audit

Editorial illustration: cloud gateway with three authentication layers toward an AI agent and upstream server

On April 29, 2026, AWS published a reference architecture for running a custom Model Context Protocol (MCP) proxy on Bedrock AgentCore Runtime. The proxy sits between an AI agent and upstream MCP servers to add governance, an audit trail, and input sanitization without modifying existing servers. The demo uses FastMCP and three layers of authentication.

The reference architecture runs the proxy as a stateless, serverless workload on Amazon Bedrock AgentCore Runtime. Its author, Senior Solutions Architect Nizar Kheir, describes the proxy as a “programmable layer between AI agents and upstream MCP servers” that lets enterprise customers apply their own security and compliance controls without refactoring existing infrastructure.

How Does the Proxy Work?

The proxy is built on the FastMCP Python library. At startup it dynamically discovers tools from upstream MCP servers (e.g., AgentCore Gateway, a custom MCP server, or a third-party server), then re-exposes them to the client with custom logic injected. The article includes a code snippet showing how a tool handler is generated for each discovered tool, forwarding calls while allowing tokenization, validation, or filtering to be injected before sending:

def _make_tool_handler(tool_name: str):
    def handler(**kwargs) -> str:
        # Custom logic: tokenization, validation, filtering
        result = _send_gateway_request("tools/call", ...)
        return result
    return handler  # one handler closure per discovered upstream tool
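The factory pattern above can be sketched end to end with a stubbed upstream call. The helper names (`_send_gateway_request`, `sanitize`) and the example tool names below are illustrative assumptions, not the article's exact code:

```python
# Sketch of the proxy's handler-factory pattern with a stubbed upstream.
# All names besides _make_tool_handler are illustrative assumptions.

def _send_gateway_request(method: str, tool_name: str, arguments: dict) -> str:
    # Stub standing in for the real forwarding call to the upstream MCP server.
    return f"{method}:{tool_name}({arguments})"

def sanitize(arguments: dict) -> dict:
    # Example injected logic: drop a hypothetical sensitive field before forwarding.
    return {k: v for k, v in arguments.items() if k != "ssn"}

def _make_tool_handler(tool_name: str):
    def handler(**kwargs) -> str:
        # Inject custom logic (tokenization, validation, filtering) before sending.
        clean = sanitize(kwargs)
        return _send_gateway_request("tools/call", tool_name, clean)
    return handler

# At startup, the proxy would discover upstream tools and re-expose each one.
discovered = ["get_invoice", "lookup_customer"]  # assumed tool names
handlers = {name: _make_tool_handler(name) for name in discovered}

result = handlers["lookup_customer"](customer_id="42", ssn="123-45-6789")
print(result)  # the forwarded call no longer contains the "ssn" argument
```

Because each handler closes over its tool name, the proxy can register one closure per upstream tool while keeping the injected logic in a single place.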

Three Layers of Authentication

The architecture defines three independent authentication layers: client→proxy uses AgentCore Identity (IAM or JWT/OAuth 2.0 tokens); proxy→upstream uses AWS SigV4 or OAuth client credentials; and upstream→external tools uses AgentCore credential providers that manage OAuth tokens and API keys.
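As a rough illustration of the three hops, the sketch below selects an Authorization header per layer. The token values are placeholders, the function is a hypothetical helper, and real SigV4 signing would be done by an AWS SDK signer rather than a literal string:

```python
# Illustrative sketch of per-hop authentication. Values are placeholders,
# not real credentials; "SigV4" here stands in for SDK-based request signing.

def auth_header(hop: str) -> dict:
    if hop == "client_to_proxy":
        # AgentCore Identity: IAM credentials or a JWT/OAuth 2.0 bearer token.
        return {"Authorization": "Bearer <jwt-from-agentcore-identity>"}
    if hop == "proxy_to_upstream":
        # AWS SigV4 (via an SDK signer) or an OAuth client-credentials token.
        return {"Authorization": "AWS4-HMAC-SHA256 <sigv4-signature>"}
    if hop == "upstream_to_tools":
        # AgentCore credential providers manage OAuth tokens and API keys.
        return {"X-Api-Key": "<key-from-credential-provider>"}
    raise ValueError(f"unknown hop: {hop}")

for hop in ("client_to_proxy", "proxy_to_upstream", "upstream_to_tools"):
    print(hop, auth_header(hop))
```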

Typical Use Cases

AWS outlines five typical scenarios: input sanitization before the backend receives a tool call, generating compliance-aligned audit trails, redacting sensitive data at the protocol level, tool-level access control based on caller identity, and PII tokenization in tool call arguments. All of this can be achieved without any changes to the upstream server.
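One of the listed scenarios, PII tokenization in tool call arguments, can be sketched at the protocol level as a pure function over the arguments dict. The SSN regex, token format, and vault structure here are assumptions for illustration, not the article's implementation:

```python
import re

# Sketch of PII tokenization applied to tool-call arguments before forwarding.
# The SSN pattern, token format, and in-memory vault are illustrative assumptions.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def tokenize_pii(arguments: dict) -> tuple[dict, dict]:
    """Replace SSN-shaped substrings with opaque tokens; return args and a vault."""
    vault: dict = {}
    tokenized: dict = {}
    for key, value in arguments.items():
        if isinstance(value, str):
            match = SSN_RE.search(value)
            if match:
                token = f"tok_{len(vault)}"
                vault[token] = match.group(0)          # keep original for detokenization
                tokenized[key] = SSN_RE.sub(token, value)
                continue
        tokenized[key] = value
    return tokenized, vault

args, vault = tokenize_pii({"note": "SSN 123-45-6789 on file", "id": "A1"})
print(args)   # {'note': 'SSN tok_0 on file', 'id': 'A1'}
print(vault)  # {'tok_0': '123-45-6789'}
```

The upstream server only ever sees the token, so no change on its side is needed, which is the point of placing this logic in the proxy.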

Code Availability

The full demo is available in the GitHub repository aws-samples/sample-mcp-proxy-agentcore-runtime. The setup_and_deploy.py script automates deployment using a deploy_config.json that defines the upstream gateway endpoint, auth method, region, and optional Cognito credentials.
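The article lists the fields deploy_config.json covers; a minimal sketch might look like the following, where the key names and endpoint format are assumptions rather than the repository's exact schema:

```json
{
  "gateway_endpoint": "https://<gateway-id>.gateway.bedrock-agentcore.<region>.amazonaws.com/mcp",
  "auth_method": "sigv4",
  "region": "us-east-1",
  "cognito": {
    "user_pool_id": "<optional>",
    "client_id": "<optional>"
  }
}
```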

Frequently Asked Questions

What is an MCP proxy?
A programmable layer between an MCP client (agent) and an upstream MCP server that intercepts tool calls to add governance controls, audit records, and data sanitization — without modifying the upstream server itself.
What authentication methods are supported?
Three independent layers: agent→proxy via AgentCore Identity (IAM or JWT/OAuth 2.0), proxy→upstream via AWS SigV4 or OAuth client credentials, and upstream→tools via AgentCore credential providers for OAuth tokens and API keys.
What does the proxy do for enterprise compliance?
Input sanitization before a tool call reaches the backend, compliance-aligned audit trails, redaction of sensitive data, tool-level access control, and PII tokenization in tool call arguments.
🤖

This article was generated using artificial intelligence from primary sources.