AI Arms Race Begins As Microsoft Finds Malicious Use Of OpenAI By Threat Actors

Artificial intelligence has found a foothold in large language models (LLMs), which are all the rage right now, even making their way onto PCs as locally run instances. However, with this increased accessibility comes an increased risk of nefarious activity, such as attempts to use GPT-4 to aid in creating bioweapons. Now, Microsoft and OpenAI report that threat actors are leveraging AI and LLMs to support malicious efforts like social engineering and routine scripting.

Today, OpenAI published research, conducted in collaboration with Microsoft Threat Intelligence, on the malicious use of AI services for cyber operations. As part of this work, the companies disrupted five state-affiliated threat actors from China, Iran, North Korea, and Russia. The actors “generally sought to use OpenAI services for querying open-source information, translating, finding coding errors, and running basic coding tasks.”


Microsoft’s companion blog post elaborated on the activities of specific threat actor groups, such as Forest Blizzard, a Russian military intelligence actor. This group is known to target victims of interest to the Russian government, including defense, government, energy, and information technology organizations. Forest Blizzard has used large language models like GPT to aid in reconnaissance and basic scripting tasks, and a similar theme runs through the other threat actor groups listed in the post, all of which used LLM tools to support their activities.

Regardless, these sorts of activities come as no real surprise, given the prior nefarious uses of AI and large language models. Thankfully, Microsoft and OpenAI are taking steps to head off the threat and to track where it is heading in order to prevent it. However, threat actors will always be working on ways to bypass these security measures, so this is certainly not the last we will hear of this battle.