AI Arms Race Begins As Microsoft Finds Malicious Use Of OpenAI By Threat Actors
Today, OpenAI published research in collaboration with Microsoft Threat Intelligence on the use of AI services for malicious cyber purposes. In the course of this research, the companies disrupted five state-affiliated threat actors operating out of China, Iran, North Korea, and Russia. According to the report, the actors “generally sought to use OpenAI services for querying open-source information, translating, finding coding errors, and running basic coding tasks.”
Microsoft’s companion blog post elaborated on the specific actions of individual threat actor groups, such as Forest Blizzard, a Russian military intelligence actor. The group is known to target victims of interest to the Russian government, including defense, government, energy, and information technology organizations. Forest Blizzard has also used large language models like GPT to aid in reconnaissance and basic scripting tasks. A similar theme runs through the other threat actor groups covered in the post, which likewise used LLM tools to support their activities.
Regardless, these sorts of activities come as no real surprise, especially given prior nefarious uses of AI and large language models. Thankfully, Microsoft and OpenAI are taking steps to head off the threat and to anticipate where malicious use is heading in order to prevent it. However, threat actors will always be working on ways to bypass these security measures, so this is certainly not the last we will hear of this battle.