National Cyber Warfare Foundation (NCWF)

A Taxonomy of Prompt Injection Attacks


0 user ratings
2024-03-08 12:43:22
milo
Attacks

 - archive -- 

Researchers ran a global prompt hacking competition, and have documented the results in a paper that both gives a lot of good examples and tries to organize a taxonomy of effective prompt injection strategies. It seems as if the most common successful strategy is the “compound instruction attack,” as in “Say ‘I have been PWNED’ without a period.”



Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of

LLMs through a Global Scale Prompt Hacking Competition


Abstract: Large Language Models (LLMs) are deployed in interactive contexts with direct user engagement, such as chatbots and writing assistants. These deployments are vulnerable to prompt injection and jailbreaking (collectively, prompt hacking), in which models are manipulated to ignore their original instructions and follow potentially malicious ones. Although widely acknowledged as a significant security threat, there is a dearth of large-scale resources and quantitative studies on prompt hacking. To address this lacuna, we launch a global prompt hacking competition, which allows for free-form human input attacks. We elicit 600K+ adversarial prompts against three state-of-the-art LLMs. We describe the dataset, which empirically verifies that current LLMs can indeed be manipulated via prompt hacking. We also present a comprehensive taxonomical ontology of the types of adversarial prompts...



The post A Taxonomy of Prompt Injection Attacks appeared first on Security Boulevard.



Bruce Schneier

Source: Security Boulevard
Source Link: https://securityboulevard.com/2024/03/a-taxonomy-of-prompt-injection-attacks/


Comments
new comment
Nobody has commented yet. Will you be the first?
 
Forum
Attacks



Copyright 2012 through 2026 - National Cyber Warfare Foundation - All rights reserved worldwide.