The AI security market is crowded. Vendors are racing to protect prompts, harden models, detect jailbreaks, and scan for data leakage at the LLM layer. The investment is real. The intent is good.
And most of it is missing the point.
Here is the problem: agents do not just think. They act. They call APIs. They trigger workflows. They write to databases, send emails, move money, and modify production systems. The moment an agent decides to do something, it leaves the model layer entirely and enters your infrastructure.
That is the action layer. And right now, most security teams have almost no visibility into it.
What the Action Layer Actually Is
Think about how an AI agent works. A user gives it a task. The LLM reasons through it. Then the agent reaches out through MCP servers to interact with real systems: your CRM, your file storage, your internal APIs, your third-party services.
Every one of those interactions is an action. Every action carries risk.
The LLM is the brain. The MCP servers are the hands. The APIs are the buttons and levers those hands can press. Securing the brain without securing the hands is not a security strategy. It is a gap.
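To make that concrete, here is a minimal sketch, not any vendor's implementation, of what watching the hands could look like: a wrapper that records every tool call an agent makes before it touches real infrastructure. The agent ID, the crm_lookup tool, and the log format are all hypothetical.

```python
import json
import time
from typing import Any, Callable

def audited(agent_id: str, tool: Callable[..., Any]) -> Callable[..., Any]:
    """Wrap a lever the agent can pull (an MCP call, an API client) so every
    action produces an action-layer record, independent of the prompt."""
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        record = {
            "ts": time.time(),
            "agent": agent_id,
            "action": tool.__name__,
            "args": repr(args)[:200],      # truncate payloads in the log
            "kwargs": repr(kwargs)[:200],
        }
        print(json.dumps(record))          # stand-in for a real audit sink
        return tool(*args, **kwargs)
    return wrapper

# A hypothetical button the agent can press: an internal CRM lookup.
def crm_lookup(customer_id: str) -> dict:
    return {"customer_id": customer_id, "tier": "enterprise"}

crm_lookup = audited("support-agent-7", crm_lookup)
crm_lookup("c-1042")  # emits an action-layer record before the call runs
```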
Why This Gap Exists
The AI security conversation started at the model layer because that is where AI started. Early generative AI deployments were mostly interfaces: a user asked a question, the model answered. The risk profile looked like prompt injection, data leakage in responses, and model misuse.
Agents changed everything. Agents do not just respond. They plan, they execute, and they chain actions together autonomously. A single agent task can trigger dozens of downstream API calls across systems the security team has never mapped to an AI workflow.
The tools built for the model layer were not designed for this. They watch conversations. They do not watch infrastructure.
What Happens When the Action Layer Goes Unmonitored
Consider what a compromised agent actually looks like in the real world. It does not announce itself with a suspicious prompt. It passes your prompt filters cleanly. It is authorized to take the action it takes. What it does instead is call an API it has never called before. Or call a familiar API with an unusual payload. Or chain together a sequence of individually innocuous actions that collectively exfiltrate sensitive data.
None of that is visible at the model layer. All of it is visible at the action layer, if you are watching.
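For illustration, assuming you already collect per-agent action records like the wrapper above emits, here is a rough sketch of checks that surface those three signals. The record fields and the three-sigma threshold are assumptions, not any product's detection logic.

```python
from statistics import mean, stdev

def flag_anomalies(history: list[dict], action: dict) -> list[str]:
    """Score one new action against this agent's own history.
    Each record: {"endpoint": str, "payload_bytes": int, "writes_external": bool}.
    """
    flags = []

    # Signal 1: an API this agent has never called before.
    known = {h["endpoint"] for h in history}
    if action["endpoint"] not in known:
        flags.append(f"first call to {action['endpoint']}")

    # Signal 2: a familiar API hit with an unusually sized payload.
    sizes = [h["payload_bytes"] for h in history if h["endpoint"] == action["endpoint"]]
    if len(sizes) >= 5:
        mu, sigma = mean(sizes), stdev(sizes)
        if sigma and abs(action["payload_bytes"] - mu) > 3 * sigma:
            flags.append("payload size outlier")

    # Signal 3: a run of innocuous reads followed by an external write,
    # the shape of a chained exfiltration.
    recent_reads = sum(1 for h in history[-10:] if not h["writes_external"])
    if action["writes_external"] and recent_reads >= 5:
        flags.append("bulk reads preceding external write")

    return flags
```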
This is not theoretical. The McKinsey breach, where attackers moved laterally through an API that fed an AI assistant, is exactly this pattern. The damage did not happen in the prompt. It happened in the infrastructure.
You Cannot Secure What You Cannot See
This is the core challenge. Your agentic infrastructure is not a single layer. It spans LLMs and agents at the top, MCP servers in the middle, and APIs and downstream systems at the action layer. Each layer connects to the others. Risk flows across all three.
Securing agents requires visibility across the full stack: what is exposed externally, how it is configured, what your code reveals about how agents connect, and what is actually happening in live traffic at runtime. The Agentic Security Graph is how Salt brings all four of those data sources together into one correlated picture of your environment.

Every node in that graph is a potential entry point. Every connection is a potential attack path. The risk scores and posture gap overlays are not theoretical. They are built from live data across four distinct sources, including runtime API traffic that no model-layer tool can see.
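Salt's graph is built from its own telemetry, but the underlying idea, correlating separate evidence sources into shared nodes and edges and diffing them for gaps, can be sketched. Every source name, field, and asset below is a made-up stand-in.

```python
from collections import defaultdict

# Four hypothetical evidence sources; every name and field is illustrative.
external_scan   = {"api.payments": {"internet_exposed": True}}
config_audit    = {"mcp.files": {"auth": "none"}}
code_analysis   = {"agent.support": {"declared_targets": ["mcp.files"]}}
runtime_traffic = [
    ("agent.support", "api.payments", {"calls_per_hr": 40}),
    ("agent.support", "mcp.files", {"calls_per_hr": 12}),
]

nodes: dict[str, dict] = defaultdict(dict)
edges: dict[tuple[str, str], dict] = defaultdict(dict)

# Posture sources each contribute attributes to the same node.
for source in (external_scan, config_audit, code_analysis):
    for name, attrs in source.items():
        nodes[name].update(attrs)

# Code analysis contributes the edges developers declared.
for name, attrs in code_analysis.items():
    for target in attrs.get("declared_targets", []):
        edges[(name, target)]["declared"] = True

# Runtime traffic contributes what agents are actually doing.
for src, dst, attrs in runtime_traffic:
    edges[(src, dst)].update(attrs, observed=True)

# One kind of posture gap: a live path that no code ever declared.
gaps = [e for e, a in edges.items() if a.get("observed") and not a.get("declared")]
print(gaps)  # [('agent.support', 'api.payments')]
```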
What Action Layer Security Requires
Securing the action layer requires three things that most AI security tools are not built to provide.
- First, you need to know what APIs and MCP servers your agents are connected to. Not what your developers told you. What is actually there, including the rogue deployments, the shadow MCPs, and the endpoints that were never registered in any internal inventory (a minimal sketch of that diff follows this list).
- Second, you need a behavioral baseline. What does normal look like for this agent? Which APIs does it call, at what frequency, with what kind of payload? Anomaly detection without a baseline is just noise.
- Third, you need runtime visibility across your actual infrastructure: Kubernetes, load balancers, API gateways, legacy systems, modern cloud services. Agents do not respect your technology stack preferences. They call whatever they have access to. Your monitoring needs to cover all of it.
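The sketch promised above: the first requirement reduced to a diff between the inventory developers registered and what runtime traffic actually shows. Both sets would come from real systems, an API catalog and traffic capture; every name here is illustrative.

```python
# Requirement one as a set diff: what is registered vs. what traffic shows.
registered = {"api.crm/v2", "mcp.files", "api.billing"}
observed = {"api.crm/v2", "mcp.files", "mcp.scratch-deploy", "api.billing/legacy"}

shadow = observed - registered    # live, but never registered anywhere
dormant = registered - observed   # registered, but never actually called

print("shadow endpoints and MCPs:", sorted(shadow))
print("registered but unused:", sorted(dormant))
```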
Why This Is the Most Critical Layer
The model layer is where AI makes decisions. The action layer is where those decisions become real.
Every dollar of damage from an agentic security incident happens at the action layer. Data exfiltration happens through an API call. Unauthorized transactions happen through a service integration. Lateral movement happens across MCP server connections.
You can have perfect prompt security and still get breached if you are not watching what your agents do after the conversation ends.
The security industry is going to figure this out. The question is whether your organization gets there before an incident forces the conversation.
At Salt Security, we built our agentic security platform starting from the action layer because that is where eight years of API security expertise lives. We know what normal looks like. We know what lateral movement looks like. We know how to baseline agent behavior across 70 different infrastructure technologies.
The model layer matters. But the action layer is where security actually gets tested.
See what your Agentic Security Graph looks like at salt.security/agentic-assessment
Michael Callahan