"To mitigate the risks of indirect prompt injection attacks in your AI system, we recommend two approaches: implementing AI prompt shields [...] and establishing robust supply chain security mechanisms [...]."
Source: DataDome, via Security Boulevard
Source Link: https://securityboulevard.com/2026/01/mcp-security-how-to-prevent-prompt-injection-and-tool-poisoning-attacks/