AI Agents from Anthropic, Google, and Microsoft Hijacked via Prompt Injection Attacks
Security researchers successfully hijacked AI agents from Anthropic, Google, and Microsoft through prompt injection attacks, stealing API keys and access tokens. All three companies paid bug bounties ranging from $100 to $1,337 but didn't publish public advisories or assign CVEs, leaving users on vulnerable versions unaware of the risks.