San Francisco’s Game Developers Conference (GDC) – the global gathering of the greatest creative minds in the games industry – opened its doors for the second time since the pandemic in March 2023.
Each year a number of key trends stand out. For 2023 the standouts were applications of artificial intelligence (AI) in game development, with the future shape of the gaming experience – with and without virtual reality (VR) – high on the agenda.
Following last year’s tentative steps back on to the expo floor – interacting with devices was off the menu because of COVID fears – Meta (formerly Facebook) was busy sanitising display models of its popular Quest 2 headset to encourage visitors to try it out. The company was also keen to push its heavily discounted Quest Pro headset, which features “colour pass-through”, meaning someone wearing the headset can see an augmented view of the world around them.
Pico, the Chinese manufacturer of the Pico 4 headset, which bears striking similarities to the Quest, had an even larger stand and was generating similar levels of interest. Clearly the time was right to start interacting with shared devices again, as long as good hygiene prevailed.
AI developments and issues
Before the conference, considerable buzz had been building around ChatGPT, the chatbot that can provide convincing, detailed written answers to users’ questions.
The latest updated version was released during the event, offering several improvements. Most noticeable was a reduction in the repetition of key phrases in the chatbot’s answers – a telltale sign that had alerted people to the fact the content was AI-generated.
For games developers the interest in this type of AI relates to speeding up and easing game development, but concerns were also raised about jobs being replaced by AI. The positive consensus was that humans are still best placed to ask the right questions to generate game content, even when that content is created by AI.
Adobe announced Firefly, an AI tool that can generate both images and 3D models. This technology might also assist with generating “substances” – materials that can be applied to game models and scenes. The company indicated the tool was intended to “enhance the creative process, rather than replace it”.
Adobe drew a cheer from developers for promising “clean and safe” content, meaning anything the AI created would be based on what it had learned from Adobe stock and public domain images rather than sources simply scraped from anywhere on the internet. This should avoid the potential ethical and legal problems of unintentionally publishing content learned from a copyrighted source, a key issue for this kind of AI.
Some developers demonstrated AI inserting content into a game scene based on a typed sentence. For example, key in “Create a scene with 10 boulders” and huge rocks were inserted into the scene without the usual manual placement of objects. I met one developer who had created a tool that uses AI to find public-domain game models on the internet and automatically rig them (that is, prepare them for animation).
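To give a flavour of how such a prompt-to-scene tool might work under the hood, here is a minimal sketch in Python. It is purely illustrative: the Scene and SceneObject classes, the place method and the toy sentence parser are all hypothetical stand-ins, not the API of any tool shown at GDC. A real implementation would hand the sentence to a language model and map its output onto an engine’s scene graph (Unity, Unreal or similar).

```python
import random
import re

# Hypothetical stand-ins for a real engine's scene graph; a production
# tool would call into Unity/Unreal APIs instead of these classes.

class SceneObject:
    def __init__(self, kind, x, y, z):
        self.kind = kind
        self.position = (x, y, z)

    def __repr__(self):
        return f"{self.kind} at {self.position}"

class Scene:
    def __init__(self, size=100.0):
        self.size = size      # scene extent in world units
        self.objects = []

    def place(self, kind, count):
        # Scatter `count` objects of `kind` at random ground positions,
        # replacing the manual drag-and-drop placement described above.
        for _ in range(count):
            x = random.uniform(0, self.size)
            z = random.uniform(0, self.size)
            self.objects.append(SceneObject(kind, x, 0.0, z))

def apply_prompt(scene, prompt):
    # Toy "natural language" parser: match phrases like "10 boulders".
    # A real tool would use a language model here, not a regex.
    match = re.search(r"(\d+)\s+(\w+)", prompt)
    if not match:
        raise ValueError(f"Could not interpret prompt: {prompt!r}")
    count, kind = int(match.group(1)), match.group(2).rstrip("s")
    scene.place(kind, count)

if __name__ == "__main__":
    scene = Scene()
    apply_prompt(scene, "Create a scene with 10 boulders")
    for obj in scene.objects:
        print(obj)
```

Running the sketch prints ten “boulder” objects at random positions, mirroring the demonstration described above in miniature.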
Welcome to the metaverse
The other major theme, which has figured much in public awareness since Facebook changed its name to Meta, is that of the metaverse – a network of virtual worlds through which people can access many functions of the internet including games. We know Meta has already gambled big in this area, and clearly companies like Pico are happy to sell devices to experience it.
So it wasn’t surprising that several tool and game engine providers were positioning themselves as useful resources for creating content for the metaverse. The previous generation of game engines (the software frameworks developers use to create games), such as Unreal Engine 4 and Unity, made significant efforts to engage the film and television industries. It’s clear from this year’s GDC that the goal for the next generation of game engines is to prepare for digital spaces like the metaverse.
Meanwhile, Epic Games, developer of the hugely successful Fortnite, is pitching that the omniverse (as it prefers to call the metaverse) doesn’t have to be about putting on a VR headset and disconnecting from those around you. Epic believes it is actually already here via games (like Fortnite) played on a 2D screen using the PCs, consoles and handheld devices people already have access to.
To accelerate the creation of content for this Epic vision of an omniverse, the company announced the launch of Unreal Engine for Fortnite (UEFN), which gives content creators more advanced control than ever before over gameplay and assets – the graphics and other resources a game relies upon. It also announced a new programming language called Verse, which the company hopes will be easy to understand while capable of powering the omniverse future.
Epic is no stranger to creating languages to allow customisation of the behaviour of elements in a game, and has suggested that Verse could become the standard language across a range of metaverse/omniverse implementations, not just its own. If a major partner such as Microsoft were to come on board, there would be more confident industry take-up of Verse.
Epic is proposing the (Fortnite-powered) omniverse as a place to leap easily from one connected experience to another: transitioning from a first-person shooter to a racing game, Ready Player One-style, should be achievable with current technology.
But it’s the console manufacturers who are best placed to create devices, with supporting operating systems, that let users move simply between gaming and other experiences. So maybe our attention should be on what Microsoft, Sony or even Nintendo do next. Welcome to the Mario-verse?