Super-Size Security Fail: McDonald's AI Hiring Bot Exposes 64M Records
Security researchers Carroll and Sam Curry started by trying to probe the chatbot with prompt injection attacks, which can sometimes hijack LLMs. Paradox.ai has seemingly done a good job in securing its platform on this front, as the researchers came up empty.
Undeterred, the researchers took another approach. They noticed a login link on the McHire site meant for Paradox.ai staff, which they quickly penetrated. Shockingly, the most basic username and password possible, “123456,” got them access. Even worse, the developers didn’t bother to protect this administrator account with two factor authentication.

Once they got access, the researchers began to poke around and soon found that manipulating the applicant ID granted them access to other people’s chat logs, which exposed over 64 million records. These contained slices of conversations applicants had with the chatbot, including various pieces of personal information.
While the violation of privacy is bad enough, it could also expose applicants to cyber threats. The kinds of personal information gives threat actors an excellent tool to conduct phishing campaigns against applicants, especially because these people would’ve been expecting to hear back about the status of their job application.
If companies are going to make use of AI as part of the job application process, then they need to ensure security is top of mind. These systems hold a treasure trove of data that cybercriminals would love to access, so it needs to be properly secured. McDonald’s and Paradox.ai were doing the absolute minimum, which just won't cut it in this day and age.