National Cyber Warfare Foundation (NCWF)

ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts


0 user ratings
2025-11-19 10:30:24
milo
Attacks
Malicious actors can exploit default configurations in ServiceNow's Now Assist generative artificial intelligence (AI) platform and leverage its agentic capabilities to conduct prompt injection attacks.
The second-order prompt injection, according to AppOmni, makes use of Now Assist's agent-to-agent discovery to execute unauthorized actions, enabling attackers to copy and exfiltrate sensitive



Source: TheHackerNews
Source Link: https://thehackernews.com/2025/11/servicenow-ai-agents-can-be-tricked.html


Comments
new comment
Nobody has commented yet. Will you be the first?
 
Forum
Attacks



Copyright 2012 through 2025 - National Cyber Warfare Foundation - All rights reserved worldwide.