National Cyber Warfare Foundation (NCWF)

Attacks on GenAI Models Can Take Seconds, Often Succeed: Report


0 user ratings
2024-10-10 12:35:30
milo
Attacks

 - archive -- 
AI cybersecurity jailbreak

A study by Pillar Security found that generative AI models are highly susceptible to jailbreak attacks, which take an average of 42 seconds and five interactions to execute, and that 20% of attempts succeed.


The post Attacks on GenAI Models Can Take Seconds, Often Succeed: Report appeared first on Security Boulevard.



Jeffrey Burt

Source: Security Boulevard
Source Link: https://securityboulevard.com/2024/10/attacks-on-genai-models-can-take-seconds-often-succeed-report/


Comments
new comment
Nobody has commented yet. Will you be the first?
 
Forum
Attacks



Copyright 2012 through 2025 - National Cyber Warfare Foundation - All rights reserved worldwide.