In an unexpected but also unsurprising turn of events, OpenAI's new ChatGPT Atlas AI browser has already been jailbroken, and the security exploit was uncovered within a week of the application's release. As is the case with other AI browsers, the vector of attack stems from an issue inherent to generative AI systems called "prompt injection", where unwanted prompts can be inserted into a user's system, which tricks the AI into performing an unwanted task, including exporting a user's messages to an attacker. This is achievable in various ways. Twitter user @elder_plinius, for example, demonstrates one manner in which ChatGPT Atlas can be exploited below. With this specific attack method, the AI is being prompted to click hidden "Copy to Clipboard" buttons on a web page that inserts a phishing link into the clipboard of the end user without their knowledge—and even ChatGPT Atlas itself is unaware of having done this.
Other attacks are mentioned by 
Fortune's coverage of ChatGPT Atlas being cracked, but few details are given on them. However, experts in general are warning about the security dangers posed by these browsers, since hiding code or instructions that an end user can't see but the AI can is relatively easy. Since any exploit of the AI effectively becomes a browser-wide exploit in these circumstances, it also means that any information stored within your browser, like your login information or messages across various platforms, becomes vulnerable to automated theft. Similar exploits have been reported for Perplexity's Comet browser and Fellou's self-named AI browser. 
Google Chrome has also been integrated with Gemini AI features, but similar exploits have yet to be identified.
While it is good that these issues aren't exclusive to ChatGPT Atlas, they don't bode well for AI web browsers at large, at least at this early stage of development. Hopefully their devs take these threats seriously and don't expose their users to further risk. While these AI-powered browsers are undeniably powerful, that power clearly comes with a high security risk 
mirrored by the generative AI solutions themselves.