National Cyber Warfare Foundation (NCWF)

AI security’s Great Wall problem


2026-02-09 11:12:30
milo
Blue Team (CND)

AI security requires more than cloud hardening. The real attack surface isn't your infrastructure—it's the supply chains, agents, and humans that make up the system around it.





The Great Wall of China was built to slow northern raiders and prevent steppe armies from riding straight into the empire’s heart. Yet in 1644, its most impregnable fortress fell without a siege.





At Shanhai Pass, where the wall meets the Bohai Sea, General Wu Sangui commanded the eastern gate. Behind him: a rebel army had just taken Beijing, the emperor was dead, and the Ming Dynasty was buckling under internal crisis. Ahead: Manchu forces who had spent decades probing for weakness. Wu faced the oldest dilemma in fortress warfare: who is the greater threat?





He opened the gate. The Manchus poured through, defeated the rebels, and never left. They founded the Qing Dynasty and ruled China for the next 268 years, the last imperial dynasty before the republic.





The wall didn’t fail. The stone held. What broke was the human system it depended on.





Walls do not fail because the bricks are weak. They fail because the system around the wall is weak. Underpaid guards get bribed, gate procedures degrade, supply lines break. The attacker does not need to knock the wall down when they can walk through the gate.





That is why I disagree with the increasingly popular framing that AI security is fundamentally a cloud infrastructure problem. Cloud security matters. Identity, telemetry, and incident response are table stakes. But treating AI risk as something you can solve primarily by hardening the hosting layer is a comforting simplification, not a complete threat model.





Palo Alto Networks recently reported that 99% of organizations experienced at least one attack on an AI system in the past year. If nearly everyone is getting hit, the right conclusion is not “build a higher wall.” It is “we are defending the wrong boundaries.”





The fortress fallacy





A fortress mindset starts with an implicit assumption: secure the infrastructure and you secure the system. That mental model can work when the system boundary is clean and the workload is deterministic; AI breaks both assumptions. Even when a model runs inside your cloud tenant, the modern AI stack is an ecosystem that depends on components sitting outside the neat perimeter: open-source libraries, data pipelines, evaluation tools, plug-ins, agent frameworks, and the humans who can change any of the above.

If your security plan begins and ends with cloud controls, you will build excellent defenses around a system that attackers are not planning to assault head-on. Attackers route around strength and target weakness. Why scale the wall when someone at the gate is underpaid, overworked, or facing an impossible choice?





Consider the trust problem. Most organizations consume models trained elsewhere, fine-tune them, and ship them into production with limited internal expertise. Even if you self-host, you did not write the model or curate the training data. Cloud hardening does not change that reliance; it just makes the box look more secure.

The most dangerous failures are not always intrusions but permissioned outcomes. AI agents turn mixed-trust inputs into execution: when an agent can read internal content, open tickets, or trigger workflows, the attack surface moves to whatever the agent is allowed to do. If an attacker can influence what the agent consumes, they can influence what it executes, often without malware or exploit chains. Real-world AI incidents frequently involve the glue around the model: telemetry, orchestration, plug-ins, and vendor services.
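
To make the mixed-trust point concrete, here is a toy sketch in Python. The tool and source names are hypothetical, and this is not any specific agent framework's API; it only illustrates why a gate has to consider input provenance, not just whether the agent is authorized to use a tool.

    # Hypothetical tool and source names; not any specific agent framework's API.
    UNTRUSTED = {"web", "email", "third_party_plugin"}

    def gate_tool_call(tool_name: str, allowed_tools: set[str], context_sources: list[str]) -> bool:
        """Allow a tool call only if the agent may use the tool AND no untrusted
        source contributed to the context that produced the request."""
        if tool_name not in allowed_tools:
            return False
        if any(src in UNTRUSTED for src in context_sources):
            return False  # or route to a human-approval path instead of executing
        return True

    # The agent is authorized to use the payments tool, yet the call is refused
    # because untrusted email content helped produce it.
    print(gate_tool_call("send_payment", {"send_payment"}, ["internal_wiki", "email"]))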





Humans remain the soft underbelly. AI security discussions obsess over architecture while threat actors obsess over access paths. Cloud controls cannot prevent coercion, social engineering, or insider risk. If a small set of people can approve changes to an AI system’s tools or policies, that is where the attacker will focus. 





History teaches this lesson clearly: the Great Wall’s garrison soldiers took payments to allow traders and smugglers to pass, to skip patrols, or to neglect watch duties. Irregular pay and delayed wages made them susceptible. Today, the “bribe” often isn’t cash. It might be a phishing link, a fake vendor request, or pressure to cut corners during a production crisis. The strongest wall doesn’t matter if the person guarding the gate is persuaded to open it.





Security from within 





Begin with a premise security leaders should already accept: everything can be breached. Your job is to reduce the odds, detect threats quickly, and limit the damage when prevention fails.





Threat model the entire AI system, not just where it’s hosted. Include data supply chains, tooling, evaluation pipelines, plug-ins, agents, and the people who can change them. If your threat model leaves out upstream and downstream dependencies, it’s incomplete.
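
One way to make that scope explicit is a simple inventory that forces you to say, for every dependency, whether it is actually covered by the threat model. A minimal sketch, with hypothetical component names standing in for whatever your stack really uses:

    from dataclasses import dataclass

    # Hypothetical inventory: the names below are placeholders, not your stack.
    @dataclass
    class Component:
        name: str
        layer: str           # "infrastructure", "supply_chain", "agent", or "human"
        controlled_by: str   # who can change this component
        in_threat_model: bool

    system = [
        Component("cloud tenant / model host", "infrastructure", "platform team", True),
        Component("open-source inference libraries", "supply_chain", "upstream maintainers", False),
        Component("fine-tuning data pipeline", "supply_chain", "data engineering", False),
        Component("evaluation and red-team tooling", "supply_chain", "ML team", False),
        Component("agent framework and plug-ins", "agent", "application team", False),
        Component("people who approve agent changes", "human", "engineering leadership", False),
    ]

    # Every dependency you rely on but never modeled is an unguarded gate.
    gaps = [c.name for c in system if not c.in_threat_model]
    print("Unmodeled dependencies:", gaps)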





Treat non-human identities as serious risks. Agents, service accounts, and tool credentials should be managed like privileged user accounts. Apply zero standing privileges (no always-on access), just-in-time access, contextual approvals for high-impact actions, and monitoring for unusual tool behavior. The moment you give an agent broad permissions “for productivity,” you’ve created a new control plane that attackers can hijack.
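
A minimal sketch of what zero standing privileges and contextual approval could look like for an agent identity, using hypothetical action names rather than any particular IAM product's API:

    import secrets
    import time

    # Hypothetical action names; swap in whatever your agents can actually do.
    HIGH_IMPACT = {"modify_permissions", "export_data", "issue_refund"}

    def human_approved(agent_id: str, action: str) -> bool:
        # Placeholder: in practice this would open a ticket or page an approver.
        return False

    def issue_jit_credential(agent_id: str, action: str, ttl_seconds: int = 300) -> dict:
        """Grant a short-lived, single-action credential instead of standing access."""
        if action in HIGH_IMPACT and not human_approved(agent_id, action):
            raise PermissionError(f"{action} needs contextual approval for {agent_id}")
        return {
            "agent": agent_id,
            "action": action,                         # scoped to one action, not a role
            "token": secrets.token_hex(16),
            "expires_at": time.time() + ttl_seconds,  # expires; nothing is always-on
        }

    # The agent must re-request (and re-justify) access for every sensitive step.
    cred = issue_jit_credential("ticket-triage-agent", "read_ticket")

The design choice that matters is the expiry and the single-action scope: a leaked token is useful for minutes, and for one thing.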





Build audit-grade change control. You should be able to answer: who changed agent permissions, retrieval sources, tool bindings, or policy gates? If those controls can be modified quietly, you’re running a system that’s one rushed change away from becoming a security incident.
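
As an illustration only, a hash-chained, append-only record of configuration changes is one way to make quiet modifications detectable; the field names below are assumptions, not a standard:

    import hashlib
    import json
    import time

    audit_log: list[dict] = []  # in practice: an append-only store, not a Python list

    def record_change(actor: str, target: str, old, new) -> dict:
        """Append a hash-chained entry so silent edits to the log are detectable."""
        prev_hash = audit_log[-1]["entry_hash"] if audit_log else "genesis"
        entry = {
            "ts": time.time(),
            "actor": actor,           # human or service identity making the change
            "target": target,         # e.g. "agent:support-bot/tool_bindings"
            "old": old,
            "new": new,
            "prev_hash": prev_hash,   # chains this entry to the one before it
        }
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        audit_log.append(entry)
        return entry

    record_change("alice@example.com", "agent:support-bot/tool_bindings",
                  old=["read_ticket"], new=["read_ticket", "issue_refund"])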





Invest in detection that assumes manipulation, not just compromise. The hard part of AI misuse is that malicious actions can look legitimate in traditional logs. You need traceability from input to outcome: what context was fed in, what tool calls happened, what policy was active, and what boundary was crossed.
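
A sketch of what such a trace record might contain, with a hypothetical schema and source names; the point is the linkage from input provenance to policy to outcome, not the particular fields:

    import json
    import time
    import uuid

    def trace_action(context_sources, prompt_summary, tool_call, policy_version, outcome):
        """Emit one record tying what the agent consumed to what it did, under which policy."""
        record = {
            "trace_id": str(uuid.uuid4()),
            "ts": time.time(),
            "context_sources": context_sources,   # where the input came from, and its trust level
            "prompt_summary": prompt_summary,     # what was actually fed to the model
            "tool_call": tool_call,               # what the agent tried to do
            "policy_version": policy_version,     # which guardrails were active at the time
            "outcome": outcome,                   # allowed, blocked, or escalated
        }
        print(json.dumps(record))                 # in practice: ship to your SIEM or log pipeline
        return record

    trace_action(
        context_sources=[{"uri": "jira://TICKET-123", "trust": "internal"},
                         {"uri": "email://external-sender", "trust": "untrusted"}],
        prompt_summary="summarize ticket and draft a refund",
        tool_call={"name": "issue_refund", "args": {"amount": 500}},
        policy_version="policy-2026-02-01",
        outcome="escalated",
    )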





Cloud security is necessary—but not sufficient. When vendors say AI security is mainly a cloud infrastructure problem, I hear the modern version of “the wall is tall, so we’re safe.” Wu Sangui’s wall was tall too, stretching to the sea at the empire’s edge. None of that mattered when the system around it collapsed. Empires fall from within.





AI security isn’t solved by building a better fortress. It’s solved by governing delegated authority, hardening the supply chain, and building systems that can prove what happened when—not if—something goes wrong.





David Schwed is the CEO of SVRN, an AI infrastructure company focused on building decentralized, sovereign intelligence systems that empower users with data privacy and autonomous computational power.


The post AI security’s ‘Great Wall’ problem appeared first on CyberScoop.



Source: CyberScoop
Source Link: https://cyberscoop.com/ai-threat-modeling-beyond-cloud-infrastructure-op-ed/





Copyright 2012 through 2026 - National Cyber Warfare Foundation - All rights reserved worldwide.