Warning: ChatGPT Can Be Tricked To Leak Personal Data With Just Your Email Address

hero chatgpt gmail details eito x hackers ai
Oxford University Computer Science Alumni and Artificial Intelligence researcher, Eito Miyamura, revealed that his team was able to trick ChatGPT into divulging sensitive email data, using some relatively simple methods. They were able to do this by exploiting a vulnerability in Model Context Protocol (MCP), which was recently added to ChatGPT.

body3 chatgpt gmail details eito x hackers ai

OpenAI has improved its generative AI model to include support for the Model Context Protocol (MCP), which can be used to turn ChatGPT into a powerful agent.  With MCP, ChatGPT can connect to and access information on "Gmail, Calendar, SharePoint, Notion, and more". While ChatGPT's new capabilities may be handy, Eito demonstrated that your email address is all an attacker needs to exploit MCP.

body1 chatgpt gmail details eito x hackers ai

Eito explained that once you connect ChatGPT to an email account using MCP, a malicious actor can simply send a calendar invite to your email address. The danger of this invite lies in the fact that it contains a jailbreak prompt, and with that prompt, ChatGPT can read your email and send sensitive information back to the attackers -- and there's no need for the invite to even be accepted. ChatGPT is essentially tricked into using the attackers instructions with the jailbreak prompt. Eito demonstrated how it works in a video posted on his X account.

body2 chatgpt gmail details eito x hackers ai

We've written extensively about the potential for AI to be used for nefarious purposes. For example, criminals now use AI to crack well-crafted passwords, and AI-powered ransomware can be used to automate cyberattacks, very quickly. This is particularly troublesome, as the ransomware industry is projected to hit a whopping $265 billion by 2031. While leaders in the space continue to invest heavily to improve their models, we hope they also prioritize investments in AI security guardrails to reduce the likelihood of abuse.
Tags:  security, Hackers, AI, chatgpt