AWS Combines Bedrock AgentCore, MCP and Nova 2 Sonic for Omnichannel Ordering — First Enterprise Agentic Showcase
Why it matters
AWS has published an architectural example combining Bedrock AgentCore Runtime, the MCP protocol and the Nova 2 Sonic voice model in an omnichannel ordering system. This is the first public integration of the new AWS agentic services and a demonstration of microVM isolation for production agents.
AWS has published an architectural example that, for the first time, publicly combines three of its new agentic services — Bedrock AgentCore Runtime, the Model Context Protocol (MCP) and the Nova 2 Sonic voice model — in a practical omnichannel ordering scenario. The demonstration shows a restaurant receiving orders simultaneously through a voice phone channel, a web form and a text chat, with a single shared agent layer coordinating all three channels.
Why Is This Integration More Important Than an Ordinary Tutorial?
Since AWS announced AgentCore at its most recent re:Invent, most publicly available examples have been “hello world” level — one agent, one tool, one channel. This architecture is the first to use all three major building blocks of an enterprise agent simultaneously: an isolated runtime environment, a standardized protocol for external tools, and a low-latency streaming voice model.
Bedrock AgentCore Runtime is a managed execution environment for agents. Each agent session runs in a separate microVM (a miniature virtual machine, typically Firecracker), providing enterprise-grade isolation — two concurrent agents cannot leak memory or state to each other, and the runtime has built-in timeout, memory and observability hooks. This is an important contrast to DIY approaches where a developer runs LangChain or a similar framework inside a Lambda function.
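The isolation guarantee described above can be illustrated with a minimal sketch. This is not the AgentCore API — the class names, the timeout budget and the in-process dictionary standing in for a microVM are all assumptions for illustration; the real runtime enforces these boundaries at the virtualization layer.

```python
# Hypothetical sketch of per-session isolation semantics, NOT the AgentCore API.
# Each session gets its own state container plus a timeout budget, mimicking
# the guarantees a microVM-per-session runtime provides.

import time

class AgentSession:
    def __init__(self, session_id: str, timeout_s: float = 30.0):
        self.session_id = session_id
        self.memory: dict = {}  # state visible only to this session
        self.deadline = time.monotonic() + timeout_s

    def remember(self, key, value):
        # Built-in budget check, loosely mirroring runtime timeout hooks.
        if time.monotonic() > self.deadline:
            raise TimeoutError(f"session {self.session_id} exceeded its budget")
        self.memory[key] = value

class Runtime:
    """One container per session: no shared mutable state between sessions."""
    def __init__(self):
        self._sessions: dict[str, AgentSession] = {}

    def session(self, session_id: str) -> AgentSession:
        if session_id not in self._sessions:
            self._sessions[session_id] = AgentSession(session_id)
        return self._sessions[session_id]

rt = Runtime()
rt.session("guest-a").remember("order", ["margherita"])
rt.session("guest-b").remember("order", ["carbonara"])
# Two concurrent sessions never see each other's memory:
assert rt.session("guest-a").memory != rt.session("guest-b").memory
```

The point of the sketch is the contrast with the Lambda-plus-LangChain pattern: there, isolation is whatever the developer remembers to build; here, it is the default shape of the runtime.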
MCP (Model Context Protocol) is the protocol Anthropic proposed and AWS now supports as the standard way of exposing tools to an agent. Instead of a custom function-calling schema per model, the agent receives a list of MCP servers (e.g., “inventory system,” “payment,” “orders”) and communicates with them in a unified format. This example uses MCP to access the restaurant’s inventory, price list and POS system.
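The "unified format" idea can be shown in a few lines. This is a deliberately simplified sketch of the MCP shape — two uniform operations per server — not the actual protocol wire format; the server names and tool payloads echo the article's restaurant scenario but are invented here.

```python
# Simplified sketch of the MCP idea: every tool server exposes the same two
# operations (list_tools, call_tool), so the agent needs no per-model schema.
# Server names and payloads are illustrative, not the blog's actual code.

class MCPServer:
    def __init__(self, name: str, tools: dict):
        self.name = name
        self._tools = tools  # tool name -> callable

    def list_tools(self):
        return sorted(self._tools)

    def call_tool(self, tool: str, arguments: dict):
        # Uniform request/response shape across all servers.
        return {"server": self.name, "tool": tool,
                "result": self._tools[tool](**arguments)}

inventory = MCPServer("inventory", {
    "check_item": lambda item: item in {"margherita", "carbonara"},
})
pricing = MCPServer("pricing", {
    "get_price": lambda item: {"margherita": 9.5, "carbonara": 11.0}[item],
})

# The agent talks to every server through the same two calls:
available = inventory.call_tool("check_item", {"item": "margherita"})
price = pricing.call_tool("get_price", {"item": "margherita"})
```

Swapping a tool provider then means registering a different server, not rewriting the agent's function-calling schema.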
Nova 2 Sonic is AWS’s new audio foundation model — speech-to-speech with low latency suitable for real-time phone conversations. Rather than converting voice to text and back, it works directly with audio tokens, eliminating the cumulative latency of a traditional STT+LLM+TTS pipeline.
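Why the cumulative latency matters is simple arithmetic: sequential pipeline stages add up, while a speech-to-speech model makes one pass. The numbers below are illustrative assumptions, not published AWS benchmarks.

```python
# Back-of-envelope latency comparison; all figures are illustrative
# assumptions, not measured AWS numbers.

stt_ms, llm_ms, tts_ms = 300, 600, 250       # sequential pipeline stages
pipeline_latency = stt_ms + llm_ms + tts_ms  # stages add: 1150 ms total

speech_to_speech_ms = 700                    # single model, one pass (assumed)

saving = pipeline_latency - speech_to_speech_ms
print(f"pipeline: {pipeline_latency} ms, "
      f"direct: {speech_to_speech_ms} ms, saved: {saving} ms")
```

On a phone call, several hundred milliseconds is the difference between a natural exchange and an awkward pause, which is why the architecture routes voice through the streaming model rather than a chained pipeline.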
What Does the Architecture Look Like in Practice?
The scenario shown in the AWS blog works as follows. A restaurant guest calls in and orders by voice — the call lands on a Nova 2 Sonic streaming endpoint that maintains the conversation and forwards structured intents (ordered_items, modifications, payment_method) to the AgentCore agent. In parallel, another guest orders through web chat — the same agent layer receives the text via REST API and routes it to the same agent using the same MCP stack.
The agent uses an identical toolkit for both channels — an MCP server for inventory checks (whether a dish is still available), a second MCP server for price confirmation, and a third for the final POS entry. All coordination happens in AgentCore Runtime, which keeps each customer's session memory separate even though the sessions share the same runtime.
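The channel-normalization step described above can be sketched as follows. The field names `ordered_items` and `payment_method` come from the article; everything else — the function names and the toy chat parser standing in for the LLM's text understanding — is a hypothetical illustration.

```python
# Hypothetical sketch of the channel-normalization layer: the voice model and
# the web chat emit different shapes, but both collapse into one structured
# order handled by a single agent with the same tool stack.

def from_voice_intent(intent: dict) -> dict:
    # Structured intent fields (ordered_items, payment_method) per the article.
    return {"items": intent["ordered_items"],
            "payment": intent.get("payment_method", "card")}

def from_chat_text(text: str) -> dict:
    # Toy keyword parser standing in for the LLM's text understanding.
    items = [w for w in text.lower().split() if w in {"margherita", "carbonara"}]
    return {"items": items, "payment": "card"}

def handle_order(order: dict) -> str:
    # Same agent logic regardless of originating channel.
    return f"confirmed {len(order['items'])} item(s), paying by {order['payment']}"

voice_result = handle_order(from_voice_intent(
    {"ordered_items": ["margherita"], "payment_method": "cash"}))
chat_result = handle_order(from_chat_text("one carbonara please"))
```

The design point is that everything after normalization — inventory check, pricing, POS entry — is channel-agnostic, so adding a fourth channel means writing one more adapter, not a second agent.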
What Is Different From the ToolSimulator Announced Yesterday?
This is a key point for understanding AWS’s agentic stack. ToolSimulator (announced earlier the same morning) is a development tool — a test environment where a developer can exercise an agent without actually invoking tools, useful for unit tests and evaluation. AgentCore Runtime is the other side of the same story — production execution of the agent on real infrastructure, with real tool invocations, real users and real billing.
The ToolSimulator + AgentCore Runtime pair gives development teams a complete path from dev to prod. Developing in ToolSimulator means faster iteration and cheaper testing; deploying in AgentCore Runtime means the same agent goes live with enterprise isolation, audit logs and observability through CloudWatch.
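The dev-to-prod split can be sketched as dependency injection: the agent code stays identical, and only the tool backend changes between testing and production. The class names below are hypothetical, not AWS SDK types.

```python
# Sketch of the dev-to-prod split: identical agent code, swappable backend.
# Names are hypothetical illustrations, not actual AWS SDK types.

from typing import Protocol

class ToolBackend(Protocol):
    def call(self, tool: str, args: dict) -> dict: ...

class SimulatedBackend:
    """ToolSimulator-style backend: canned answers, no side effects, cheap."""
    def call(self, tool: str, args: dict) -> dict:
        return {"tool": tool, "simulated": True, "result": "ok"}

class ProductionBackend:
    """Stands in for real MCP invocations made inside AgentCore Runtime."""
    def call(self, tool: str, args: dict) -> dict:
        raise NotImplementedError("real tool invocation goes here")

def run_agent(backend: ToolBackend) -> dict:
    # The agent is written once; only the injected backend differs
    # between fast local iteration and isolated production execution.
    return backend.call("pos_submit", {"items": ["margherita"]})

dev_result = run_agent(SimulatedBackend())
```

Under this shape, "the same agent goes live" is literal: promotion to production is a configuration change of the backend, while audit logs and observability come from the runtime rather than the agent code.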
For AWS customers already using Bedrock, this blog is a blueprint for the first serious agentic pilots in 2026. Restaurant ordering is an illustrative domain, but the same architecture transfers directly to contact centers, travel agencies, banking voice bots and any scenario where the same work happens across multiple channels. The only question is how quickly AWS partners and integrators will take that blueprint and turn it into the first publicly demonstrable production deployment.
This article was generated using artificial intelligence from primary sources.
Related news
Anthropic: Memory for Managed Agents in public beta — AI agents that remember context between sessions
GitHub: Cloud agent sessions now available directly from issues and project views
ArXiv SWE-chat — a dataset of real developer interactions with AI coding agents in production