
A study by Pillar Security found that generative AI models are highly susceptible to jailbreak attacks: on average, an attack takes just 42 seconds and five interactions to execute, and 20% of attempts succeed.
Author: Jeffrey Burt
Source: Security Boulevard
Source Link: https://securityboulevard.com/2024/10/attacks-on-genai-models-can-take-seconds-often-succeed-report/