AWS Cloud Breach Achieved Admin Access In Record Time With Help From AI

In a stark demonstration of how dramatically artificial intelligence can compress the cyberattack lifecycle, researchers have documented a real-world AWS cloud intrusion that went from a simple credential leak to full administrative control in under 10 minutes.

The incident, observed by the Sysdig Threat Research Team, began when a threat actor discovered valid AWS access keys left exposed in a public Amazon S3 bucket that was ironically being used to store Retrieval-Augmented Generation (RAG) data for the victim's own AI models. Within seconds of obtaining these credentials, the attacker deployed large language models (LLMs) to automate the heavy lifting of cloud exploitation.

What makes this breach significant is not just the speed, but the degree of AI-led decision-making. The intruder leaned on LLMs to conduct rapid reconnaissance, determining that the stolen credentials belonged to a user with limited permissions but significant access to AWS Lambda and Amazon Bedrock. Rather than manually testing privilege escalation paths, the attacker used AI to generate and inject malicious code into an existing Lambda function named EC2-init. The script, complete with Serbian-language comments and error handling characteristic of LLM output, successfully created new administrative access keys for a user account named "frick."
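Escalations like this leave a paper trail: AWS CloudTrail logs the CreateAccessKey call that minted the new keys. Below is a minimal sketch of how a defender might flag such events, assuming CloudTrail's documented JSON record shape; the allow-list and event set are illustrative placeholders, not Sysdig's actual detection logic.

```python
# Hypothetical detector for suspicious IAM key-minting in CloudTrail records.
# The EXPECTED_PRINCIPALS allow-list and SUSPICIOUS_EVENTS set are assumptions
# for illustration only.

SUSPICIOUS_EVENTS = {"CreateAccessKey", "CreateUser", "AttachUserPolicy"}
EXPECTED_PRINCIPALS = {"arn:aws:iam::123456789012:user/ci-deployer"}  # placeholder

def flag_event(event: dict) -> bool:
    """Return True if a CloudTrail record looks like unexpected privilege escalation."""
    if event.get("eventSource") != "iam.amazonaws.com":
        return False
    if event.get("eventName") not in SUSPICIOUS_EVENTS:
        return False
    actor = event.get("userIdentity", {}).get("arn", "")
    return actor not in EXPECTED_PRINCIPALS

# A record shaped like the incident described above: a Lambda's assumed role
# creating access keys for a new user.
sample = {
    "eventSource": "iam.amazonaws.com",
    "eventName": "CreateAccessKey",
    "userIdentity": {"arn": "arn:aws:sts::123456789012:assumed-role/EC2-init-role/EC2-init"},
    "requestParameters": {"userName": "frick"},
}
print(flag_event(sample))  # True: an unexpected principal is minting keys
```

In practice this logic would run inside a log pipeline or a managed detector such as GuardDuty rather than as a standalone script.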

Remarkably, the entire process, from initial entry to full administrative access, took under 10 minutes. Once established as an administrator, the AI-powered actor moved laterally across 19 distinct AWS principals, a technique designed to camouflage the activity and ensure persistence. From there, the attacker began LLMjacking: hijacking the victim's Amazon Bedrock and GPU resources to run unauthorized AI models, or potentially to resell the compute capacity on the black market.

Sysdig noted several fingerprints pointing to AI at the helm, such as attempts to assume roles in AWS account IDs that didn't exist or were composed of ascending or descending digits. While such errors might have slowed a human, the AI simply iterated past them with relentless speed, eventually launching a publicly accessible JupyterLab server as a backdoor that bypassed AWS credentials entirely.
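The ascending/descending account-ID fingerprint is simple to screen for in logs. Here is a hypothetical heuristic; the "uniform step of +1 or -1, wrapping 9 to 0" rule is my assumption of what counts as sequential, not a rule from the Sysdig report.

```python
def looks_sequential(account_id: str) -> bool:
    """Heuristic: flag 12-digit AWS account IDs whose digits step uniformly
    by +1 (ascending) or -1 (descending), modulo 10."""
    if len(account_id) != 12 or not account_id.isdigit():
        return False
    digits = [int(c) for c in account_id]
    steps = {(b - a) % 10 for a, b in zip(digits, digits[1:])}
    return steps == {1} or steps == {9}  # {1}: ascending, {9}: descending

print(looks_sequential("123456789012"))  # True  (ascending, wraps 9 -> 0)
print(looks_sequential("210987654321"))  # True  (descending)
print(looks_sequential("734502918273"))  # False (looks like a real ID)
```

A screen like this would never catch a careful human attacker, but it is exactly the kind of machine-generated pattern that gave this intrusion away.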

In response to this incident, AWS told The Register, "AWS services and infrastructure are not affected by this issue, and they operated as designed throughout the incident described."

The company added that "the report describes an account compromised through misconfigured S3 buckets. We recommend all customers secure their cloud resources by following security, identity, and compliance best practices, including never opening up public access to S3 buckets or any storage service, least-privilege access, secure credential management, and enabling monitoring services like GuardDuty, to reduce risks of unauthorized activity."
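The first recommendation in that list, never exposing S3 buckets publicly, can be enforced with S3's documented Block Public Access API. A minimal sketch using boto3 follows; the bucket name is hypothetical, and the call requires valid AWS credentials.

```python
# The four Block Public Access settings AWS documents for S3. Enabling all
# four prevents the kind of public-bucket exposure that started this breach.
PUBLIC_ACCESS_BLOCK = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

def lock_down_bucket(bucket_name: str, client=None) -> None:
    """Apply Block Public Access to a single bucket."""
    if client is None:
        import boto3  # requires the AWS SDK and credentials at call time
        client = boto3.client("s3")
    client.put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration=PUBLIC_ACCESS_BLOCK,
    )

# lock_down_bucket("rag-training-data")  # hypothetical bucket name
```

The same settings can also be applied account-wide via the S3 Control API, which is the safer default for organizations with many buckets.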

Main image created with Gemini Nano Banana

Aaron Leong

Tech enthusiast, YouTuber, engineer, rock climber, family guy. 'Nuff said.