National Cyber Warfare Foundation (NCWF)

LLM Prompt Injection Worm


0 user ratings
2024-03-04 17:06:29
milo
Blue Team (CND)

 - archive -- 

Researchers have demonstrated a worm that spreads through prompt injection. Details:



In one instance, the researchers, acting as attackers, wrote an email including the adversarial text prompt, which “poisons” the database of an email assistant using retrieval-augmented generation (RAG), a way for LLMs to pull in extra data from outside its system. When the email is retrieved by the RAG, in response to a user query, and is sent to GPT-4 or Gemini Pro to create an answer, it “jailbreaks the GenAI service” and ultimately steals data from the emails, Nassi says. “The generated response containing the sensitive user data later infects new hosts when it is used to reply to an email sent to a new client and then stored in the database of the new client,” Nassi says...



The post LLM Prompt Injection Worm appeared first on Security Boulevard.



Bruce Schneier

Source: Security Boulevard
Source Link: https://securityboulevard.com/2024/03/llm-prompt-injection-worm/


Comments
new comment
Nobody has commented yet. Will you be the first?
 
Forum
Blue Team (CND)



Copyright 2012 through 2025 - National Cyber Warfare Foundation - All rights reserved worldwide.