🤖 24 AI
🟡 🛡️ Security Friday, April 17, 2026 · 3 min read

LangChain and Cisco AI Defense: middleware protection for agents against prompt injection attacks

Why it matters

LangChain and Cisco have introduced a middleware integration that protects agentic systems across three layers: LLM calls, MCP tools, and the execution flow itself. The system operates in two modes — Monitor (logs risks without interrupting) and Enforce (blocks policy violations with an audited reason). The solution is focused on production environments where orchestrators chain agents in real time.

Cisco AI Defense and LangChain announced a joint integration on April 16, 2026, bringing runtime protection to agentic systems built on the LangChain platform. The post is authored by Siddhant Dash, senior product manager at Cisco AI Defense, and the title “A Developer’s First 10 Minutes” suggests the focus is on the ease of adding protection to existing projects.

Why is middleware better than scattered checks?

A typical problem in production agentic applications is deciding where to insert security checks. If every developer adds their own filters around LLM calls, tool invocations, and the agent loop, the security policy becomes inconsistent and difficult to audit.

Cisco and LangChain chose a middleware approach — a single security layer that sits between the application and the agent framework. This means the developer writes clean application code, and the security policy is applied once, at a single point, throughout the entire agent loop.

Three protection modes

The integration covers three distinct types of interaction that agents have with the outside world:

LLM mode protects direct calls to the underlying model. If an agent sends a prompt containing an injection attempt or sensitive data, that call is intercepted.

MCP mode protects calls to tools and data sources via the Model Context Protocol. Because MCP tools have real access to files, databases, and external APIs, this is the most vulnerable point — and Cisco places an explicit wall here.

Middleware mode covers the LangChain execution flow itself — planning, routing, and orchestration between agents. This is critical for multi-agent architectures where an orchestrator decides in real time which agent runs next and which tool it uses.

Monitor vs. Enforce

The integration offers two operational modes that cover different stages of development:

Monitor mode logs risky signals and decision traces without interrupting the agent. Ideal for development and staging environments where developers want to see what the policy would block before enabling it in production.

Enforce mode actively blocks policy violations. When a prompt injection or other security event is detected, the agent is halted, and an audited reason along with a Request ID for investigation is returned to the application. Everything is written to a log for later analysis.

Developer launchpad and first impressions

Cisco has launched dev.aidefense.cisco.com/demo-runner, which enables side-by-side testing of Monitor and Enforce modes on pre-built scenarios — safe prompts, injection attempts, and requests for sensitive data.

The work of securing agents is gradually shifting from ad hoc filters into a mature enterprise category. Cisco — a traditional networking player — is entering the AI security layer directly, signaling that agent security is being treated as an infrastructure concern rather than an application one.

🤖

This article was generated using artificial intelligence from primary sources.