
What Nvidia’s AI NPC demo could mean for game development

Nvidia demoed its AI-driven character-building capabilities during Computex 2023. The gaming giant showed how an otherwise unremarkable NPC could provide generated, off-the-cuff responses to a player's unique prompts during an in-game interaction.

Contemporary titles can have anywhere between a handful and hundreds of non-playable characters (NPCs).

Primarily intended to push the overarching narrative forward by delivering missions, these secondary characters also carry much of the load in creating emotional investment and fleshing out the playable world so it feels lived-in and believable.

Like other aspects of gaming, the graphics and basic dialogue of these characters have advanced over time. Compare the original edition of Skyrim, for instance, to contemporary titles like Hitman 3 or Red Dead Redemption II and you’ll see how drastic an improvement has been made in the last decade.

Though the overall standard has risen, that's not to say there isn't room for improvement – and a lot of it. Aside from a few exceptionally crafted titles, engaging with NPCs can still feel obligatory, overtly scripted, and at times static or goofy.

Nvidia, a leading manufacturer of PC graphics cards and (more recently) a major player in AI computing, is looking to advance this facet of game-building using the same generative principles behind software like ChatGPT and Bard.


Nvidia’s ‘Omniverse Avatar Cloud Engine’ demo

At the annual Computex conference this week, Nvidia CEO Jensen Huang unveiled the ‘Omniverse Avatar Cloud Engine’ (ACE).

Huang claims that the developer toolkit can turn bland NPCs into sophisticated characters able to respond to players’ original voice prompts with spontaneous spoken dialogue and real-time facial animation. Sounds far-fetched, right?

During a linear gameplay demo, we supposedly saw the engine in action. Through a first-person lens, the playable protagonist entered a breathtaking rendering of a cyberpunk ramen shop created with the best of Nvidia’s rendering technology and Unreal Engine 5.

The main attraction, though, began when the player approached the restaurant owner – an NPC called Jin.

Instead of scripted dialogue options appearing right off the bat, as is customary in 99% of role-playing games, the player spoke out loud and received bespoke in-game responses from the NPC.

Jin obviously had a pre-programmed backstory and a mission he was compelled to impart, but everything else was pure ‘improvisation’ courtesy of AI, according to Huang.
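To make the idea concrete, here is a minimal sketch of how such a character could plausibly be wired up. Everything in it is an assumption for illustration: the AINPC class, Jin’s backstory text, and the generate_reply stub stand in for whatever speech recognition and text-generation backend an ACE-style pipeline would actually use – none of it reflects Nvidia’s real API.

```python
from dataclasses import dataclass, field

@dataclass
class AINPC:
    """Hypothetical NPC whose fixed backstory seeds an improvising language model."""
    name: str
    backstory: str                      # pre-programmed, like Jin's ramen-shop history
    mission: str                        # the one scripted beat the NPC must deliver
    history: list = field(default_factory=list)

    def build_prompt(self, player_line: str) -> str:
        # The backstory and mission are baked into every request, so the model
        # stays in character while improvising around the player's words.
        context = "\n".join(self.history[-6:])  # keep only recent turns for latency
        return (
            f"You are {self.name}. {self.backstory}\n"
            f"You must eventually steer the player toward: {self.mission}\n"
            f"Conversation so far:\n{context}\n"
            f"Player says: {player_line}\n"
            f"{self.name} replies:"
        )

    def respond(self, player_line: str) -> str:
        prompt = self.build_prompt(player_line)
        reply = generate_reply(prompt)   # stand-in for the real inference call
        self.history.append(f"Player: {player_line}")
        self.history.append(f"{self.name}: {reply}")
        return reply


def generate_reply(prompt: str) -> str:
    """Placeholder for a text-generation backend (local model or cloud service)."""
    return "Things are rough around here. Crime's gotten out of hand."


if __name__ == "__main__":
    jin = AINPC(
        name="Jin",
        backstory="You run a small ramen shop in a crime-ridden district.",
        mission="Ask the player to deal with the local crime boss.",
    )
    print(jin.respond("How are things around here?"))
```

The point the demo leans on is that only the backstory and mission are authored in advance; every reply string comes from the model at runtime, which is both the appeal and the source of the logistical worries discussed below.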

Check the footage out below.


Logistical problems and early community doubt

The reaction to the demo has been about as mixed as you’d expect from the gaming community.

While many are lauding the technical advancements being made and the potential for unbridled immersion, others state that the concept is either too ambitious to work well or is straight up BS on Nvidia’s part. After all, outside of the pre-recorded footage, we’ve no way of verifying the technology or trying it out.

Whatever side of the fence you’re on, it is difficult to see beyond some of the glaring logistical problems that could instantly thwart such an undertaking.

Setting aside the robotic performance of the NPC – which will undoubtedly improve as AI gets better at mimicking human speech – the player deliberately asked basic, inane questions typical of any create-your-own protagonist picking up a busywork side-mission.

Exactly how well Jin would hold up when faced with a 12-year-old who’s just rage-quit FIFA remains to be seen. More to the point, was the demo intended purely to motivate investors?

Given the huge inventory of top-of-the-range hardware that was used purely to showcase the demo in such an enclosed area, there’s also the question of exactly how much processing power would be needed to equip even a handful of NPCs with these capabilities in a full-scale game.

Many developers currently refrain from rendering working mirrors in their games to preserve performance, frame rate, and graphical fidelity. Throwing an entire AI NPC framework into the mix would almost certainly exceed what next-gen consoles, as well as the vast majority of PCs, can handle.

Maybe developers won’t embrace the ACE toolkit in the way the demo envisions, but major titles including S.T.A.L.K.E.R. 2: Heart of Chornobyl and Fort Solis are already using Nvidia’s ‘Audio2Face’ application to match characters’ facial animations more accurately to their actors’ speech.


What impact could this have on developers?

Assuming, against the odds, Nvidia finds a way of making ACE desirable for game studios – perhaps through a cloud-based infrastructure to offset the local processing demands – what could the wide-scale introduction of AI mean for those working in the industry?
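For what it’s worth, the cloud route mentioned above would look something like the sketch below: the game client ships the player’s line (and the NPC’s identity) to a remote inference service and only plays back the returned reply. The endpoint URL, payload fields, and the service itself are hypothetical stand-ins, not a real Nvidia or ACE API.

```python
import json
import urllib.request

# Hypothetical cloud inference endpoint; a real deployment would point at
# whatever hosted service the studio actually uses.
INFERENCE_URL = "https://example.com/npc-dialogue"

def cloud_npc_reply(npc_id: str, player_line: str, timeout: float = 2.0) -> str:
    """Send the player's spoken line to a remote service and return the NPC's reply.

    Keeping inference off-device is one way a console or mid-range PC could host
    AI NPCs without spending its own GPU budget on a language model.
    """
    payload = json.dumps({"npc": npc_id, "line": player_line}).encode("utf-8")
    request = urllib.request.Request(
        INFERENCE_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=timeout) as response:
        body = json.loads(response.read().decode("utf-8"))
    return body.get("reply", "...")  # fall back to a non-committal line on bad data

# Example (would only work against a real service):
# print(cloud_npc_reply("jin", "How are things around here?"))
```

The obvious trade-offs are latency and an always-online requirement, which would bring their own problems for a dialogue system that is supposed to feel instantaneous.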

A common belief is that AI will inevitably displace specialist digital job roles. In any case, if ACE reached a convincing enough standard, voice actors, animators, and story writers would be at risk straight away. In fact, Ubisoft is already using an AI language tool called Ghostwriter.

Those keen to embrace AI unreservedly in the name of progress will point to some massive upsides. The boon to smaller studios, in particular, is hard to overstate.

Ambitious titles that are currently achievable only with a huge team of specialist developers could realistically be made with just a fraction of the manpower. AI can already contribute everything from concept art and scripts to 3D animation and dynamic lighting.

Are AI NPCs really next on the list? We’ll find out soon enough.
