Warning: ChatGPT Can Be Tricked To Leak Personal Data With Just Your Email Address

OpenAI has improved its generative AI model to include support for the Model Context Protocol (MCP), which can be used to turn ChatGPT into a powerful agent. With MCP, ChatGPT can connect to and access information on "Gmail, Calendar, SharePoint, Notion, and more". While ChatGPT's new capabilities may be handy, Eito demonstrated that your email address is all an attacker needs to exploit MCP.

Eito explained that once you connect ChatGPT to an email account using MCP, a malicious actor can simply send a calendar invite to your email address. The danger of this invite lies in the fact that it contains a jailbreak prompt, and with that prompt, ChatGPT can read your email and send sensitive information back to the attackers -- and there's no need for the invite to even be accepted. ChatGPT is essentially tricked into using the attackers instructions with the jailbreak prompt. Eito demonstrated how it works in a video posted on his X account.

We've written extensively about the potential for AI to be used for nefarious purposes. For example, criminals now use AI to crack well-crafted passwords, and AI-powered ransomware can be used to automate cyberattacks, very quickly. This is particularly troublesome, as the ransomware industry is projected to hit a whopping $265 billion by 2031. While leaders in the space continue to invest heavily to improve their models, we hope they also prioritize investments in AI security guardrails to reduce the likelihood of abuse.