Can you tell which of these seemingly identical bits of Steam iconography were generated using AI? (Trick question: it's none of them.)

Credit: Aurich Lawson

Last summer, Valve told Ars Technica that it was worried about potential legal issues surrounding games made with the assistance of AI models trained on copyrighted works and that it was “working through how to integrate [AI] into our already-existing review policies.” Today, the company is rolling out the results of that months-long review, announcing a new set of developer policies that it says “will enable us to release the vast majority of games that use [AI tools].”

Developers that use AI-powered tools “in the development [or] execution of your game” will now be allowed to put their games on Steam so long as they disclose that usage in the standard Content Survey when submitting to Steam. Such AI integration will be separated into categories of “pre-generated” content that is “created with the help of AI tools during development” (e.g., using DALL-E for in-game images) and “live-generated” content that is “created with the help of AI tools while the game is running” (e.g., using Nvidia’s AI-powered NPC technology).

Those disclosures will be shared on the Steam store pages for these games, which should help players who want to avoid certain types of AI content. But disclosure will not be sufficient for games that use live-generated AI for “Adult Only Sexual Content,” which Valve says it is “unable to release… right now.”

Put up the guardrails

For pre-generated AI content, Valve warns that developers still have to ensure that their games “will not include illegal or infringing content.” But that promise only extends to the “output of AI-generated content” and doesn’t address the copyright status of content used by the training models themselves. The status of those training models was a primary concern for Valve last summer when the company cited the “legal uncertainty relating to data used to train AI models,” but such concerns don’t even merit a mention in today’s new policies.

For live-generated content, on the other hand, Valve is requiring developers “to tell us what kind of guardrails you’re putting on your AI to ensure it’s not generating illegal content.” Such guardrails should help prevent situations like the one faced by AI Dungeon, which drew controversy in 2021 when its underlying OpenAI model could be used to generate sexual content featuring children in the game. Valve says a new “in-game overlay” will allow players to submit reports if they run into that kind of inappropriate AI-generated content in Steam games.

Over the last year or so, many game developers have started to embrace a variety of AI tools in the creation of everything from background art and NPC dialogue to motion capture and voice generation. But some developers have taken a hardline stance against anything that could supplant the role of humans in game making. “We are extremely against the idea that anything creative could or should take [the] place of skilled specialists, to which we mean ourselves,” Digital Extremes Creative Director Rebecca Ford told the CBC last year.

In September, Epic Games CEO Tim Sweeney responded to reports of a ChatGPT-powered game being banned from Steam by explicitly welcoming such games on the Epic Games Store. “We don’t ban games for using new technologies,” Sweeney wrote on social media.
