Attackers can use indirect prompt injection to trick Anthropic's Claude into exfiltrating data that the AI model's users have access to.
A vulnerability in Anthropic's Claude AI allows attackers to exfiltrate user data via a chained exploit that abuses the platform's own Files API.