Are video game developers using AI? Players want to know, but the rules are patchy

As with all creative industries, generative artificial intelligence (AI) has been infiltrating video games.

Non-generative AI was part of the industry long before things like ChatGPT became household names. Video games have long featured AI-driven gameplay systems such as matchmaking and non-player character (NPC) behaviour, as well as iconic fictional AI characters such as SHODAN and GLaDOS.

Now, generative AI is being used to produce game assets and speed up development. This is threatening creative jobs and fuelling worries about low-effort releases or “slop”.

If you buy a video game today, you may have no reliable way of knowing whether generative AI was used in any part of its development – from the art and voice work to the code and marketing.

Should developers disclose it? Since 2023, AI disclosure in video games has gone from non-existent to patchy. Arguably, this has more to do with copyright concerns than with being transparent with players.

A messy baseline

Steam, owned by US video game company Valve, is the largest digital storefront for PC games. It’s also the closest thing to a baseline for AI disclosure – simply because it was the first major platform to formalise a position.

Amid the rise of AI in 2023, Valve rejected AI-produced games on Steam, citing legal uncertainty and stating the company was “continuing to learn about AI”.

By January 2024, Valve formalised its disclosure rules, requiring developers to declare two categories of AI use: pre-generated content (made during development) and live-generated content (created while the game runs).

While industry leaders are optimistic about AI’s role in game development, disclosure remains contentious. Tim Sweeney, chief executive of Steam’s competitor Epic Games, mocked Steam’s AI disclosure in late 2025 as being akin to telling players what shampoo developers use.

In recent weeks, Valve has narrowed its disclosure rules, clarifying that developers who submit games to its platform only need to report AI use if the output is directly experienced by players.

This changes what counts as relevant transparency, effectively giving a green light to AI coding and other behind-the-scenes processes.

Valve’s focus on player-facing AI does provide consumers with some transparency and the game submissions are checked before release. However, it’s not clear what happens if the makers of a game don’t disclose AI when they should have.

The disclosure system also keeps Steam ahead of a legal grey area regarding copyright and generative AI output. If needed, Valve could quickly pull titles affected by AI copyright claims. Some AI models can memorise copyrighted material and reproduce it when prompted, so this is not an entirely hypothetical scenario.

AI disclosure on Steam doesn’t have a consistent format – developers simply have a text field where they can write their disclosure in free form. Since it’s not treated as an official tag, consumers also can’t search or filter for AI content when browsing for games in the store.

At the time of writing, a search of SteamDB – a third-party catalogue of Steam’s database – lists more than 15,000 games and software with Steam’s AI disclosure label, with no total count available on Steam itself.

In response, user watchdogs have stepped in. The Steam curator group AI Check tracks games with AI-generated assets and flags whether developers disclose AI use – and how.

Players are largely in the dark

Outside Steam, disclosure is inconsistent if not absent. Indie storefront itch.io provides a searchable “AI Generated” tag, but no disclosure is required on game pages.

There’s currently no clear AI disclosure on mobile app stores or console storefronts (Nintendo, PlayStation, Xbox), and they’ve been criticised for letting “AI slop” flood their stores.

Epic Games Store and another major distribution platform, GOG.com, also lack clear AI disclosures. GOG recently faced backlash for using AI-generated artwork in its own storefront promotion.

All this leaves players in the dark, while developers face backlash for AI use that many consider harmful for the industry.

Transparency is important

Many players care about AI use in games, and notice when disclosure is missing. There are plenty of cases in which developers were “caught out” using generative AI and responded with ad hoc statements or asset changes – or even had Game of the Year awards rescinded.

But there are also cases in which suspicion has caused cancellations or wrongful accusations of games using AI art when it was actually drawn by a human artist.






This is why transparency on AI use is important. Many Australians report low familiarity with AI, and research suggests more information can shift people’s views – helping them make informed choices and avoid witch hunts.

Many people have ethical concerns about AI use, or worry about the environmental consequences of the vast resources AI data centres consume.

All this means AI disclosure is currently a consumer rights issue, but it’s governed entirely by the platforms where people purchase the games.

Players don’t need to know what shampoo a developer uses. But they do deserve a clear view of whether the art was AI-generated, whether writers or voice actors were replaced, and whether a game built on AI-generated code is likely to survive an update.

Steam’s disclosure system is a start, but it means little if the information can’t be found or filtered for. Every game storefront should make generative AI use clear at the point of purchase – because players deserve better.

Thomas Byers, PhD Candidate & Research Assistant, Faculty of Engineering & IT, The University of Melbourne

