The First Ever NVIDIA ACE-Powered AI Game NPCs Are Coming To Mecha BREAK

When we talk about technological advancements in video games, there are a few things that always spring to mind: new rendering techniques or paradigms, advanced physics simulation, or perhaps new ways of presenting the game, like streaming. There's an area where nothing has really changed in 40 years, though, and that's the field of NPC conversations. You pick things to say from a pre-set list, and you get pre-recorded dialogue in response.

NVIDIA's already demonstrated its ACE technology, which it debuted with the Ramen Shop demo last year. ACE stands for 'Avatar Cloud Engine', and it uses a combination of multiple AI models working in concert to deliver lifelike AI characters that can accept dynamic (i.e. not pre-scripted) inputs and give responses that are generated on the fly by AI. We've tried it; it's pretty wild, and while extant demos have had their flaws, it's very easy to see the potential for such a technology.

As it happens, ACE is coming to a "real" game (as opposed to a tech demo) sooner than anyone really expected. Chinese developer Seasun Games, now known as Amazing Seasun Games, is working furiously on its upcoming mecha action title Mecha BREAK, due to release early next year. One of the features apparently coming to the final version of the game is NVIDIA's ACE for NPC interactions.



The video embedded above shows a demo of the technology in action. The player asks a character in the game for advice on which mech to pick for a particular mission. The character's responses are reasonably lifelike, though not particularly immersive; the character says "players" instead of "pilots" or another in-world term. Still, the demo is pretty cool and very brief; check it out.

NVIDIA says that the language model in use is Nemotron-4 4B Instruct, a model designed for role-playing. It's delivered as an NVIDIA Inference Microservice (NIM), a containerized version of the model that's ready for integration into other applications. Nemotron-4 4B Instruct interfaces with NVIDIA's Audio2Face-3D for facial animation as well as OpenAI's Whisper for speech recognition; all of this runs on-device.

Meanwhile, ElevenLabs' technology provides the voice synthesis, although that part runs in the cloud. It's not clear whether players will be able to converse with NPCs in text only (without voiced responses) when that cloud service is unavailable, but given that Mecha BREAK is an online game to begin with, that's a relatively minor concern. It's also not clear whether players will be required to engage with ACE to play the game or whether it will be a strictly optional feature.
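To make the hybrid on-device/cloud split concrete, here's a minimal sketch of how a pipeline like the one described above could be wired together. To be clear, every function name below is a hypothetical stand-in, not a real NVIDIA or ElevenLabs API; each stub merely simulates the component named in its docstring, and the text-only fallback is our assumption about how an offline mode might work.

```python
# Hypothetical sketch of an ACE-style NPC dialogue pipeline.
# None of these functions are real NVIDIA/ElevenLabs APIs; each stub
# stands in for the component named in its docstring.
from typing import Optional


def transcribe(audio: bytes) -> str:
    """On-device speech recognition (the role Whisper plays)."""
    return "Which mech should I take on this mission?"


def generate_reply(player_text: str) -> str:
    """On-device role-play language model (the role Nemotron-4 4B Instruct plays)."""
    return "For that mission, a fast striker mech is the usual pick."


def animate_face(reply: str) -> dict:
    """On-device facial animation driven by the reply (the role Audio2Face-3D plays)."""
    return {"blendshape_frames": len(reply)}  # placeholder animation data


def synthesize_voice(reply: str, cloud_available: bool) -> Optional[bytes]:
    """Cloud voice synthesis (the role ElevenLabs plays); may be unavailable."""
    return reply.encode("utf-8") if cloud_available else None


def npc_turn(audio: bytes, cloud_available: bool = True) -> dict:
    """One full conversational turn: speech in, text + animation + (maybe) voice out."""
    player_text = transcribe(audio)
    reply = generate_reply(player_text)
    return {
        "reply_text": reply,
        "animation": animate_face(reply),
        # None here models a text-only fallback when the cloud TTS is unreachable.
        "voice": synthesize_voice(reply, cloud_available),
    }
```

Calling `npc_turn(mic_input, cloud_available=False)` still yields text and facial animation with `voice` set to `None`, which illustrates why the cloud dependency only affects the voiced portion of the interaction.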



Of course, Mecha BREAK also supports NVIDIA's DLSS, including DLSS 3 frame generation. The video above offers a peek at the gameplay; if you're really into it, you can head to the official site to sign up for the upcoming Steam beta. Jensen Huang previously noted on stage that other large game developers are experimenting with ACE, primarily Chinese companies like HoYoverse, NetEase, and Tencent. It will be a strange future if this technology goes mainstream.