Third Dimension wants game devs to build Rome in a day.
Image Credit: Third Dimension

Third Dimension AI raises $6.9M to build game worlds with generative AI

VentureBeat

Third Dimension AI raised $6.9 million to enable game developers to build 3D game worlds using generative AI.

The capital will be used to expand the Third Dimension team, further train the generative AI models that convert 2D images and video into 3D, and bring to life Third Dimension’s vision of becoming the leading 3D generation company, said Tolga Kart, CEO of Third Dimension, in an interview with GamesBeat.

Felicis led the funding round, with participation from Abstract Ventures, MVP, Soma Capital and the Salt Fund.

The founders are autonomous vehicle and gaming experts: Kart, Piotr Sokolski and Özgun Pelvan. They created Third Dimension to simplify the creation of large-scale 3D environments, whether real or imaginary, and make it as easy as pressing a button.

Third Dimension can build worlds for drone piloting.

Prior to founding Third Dimension, Kart was vice president of engineering at Parallel Domain, a synthetic data company. Before that, he spent more than two years at Tesla working on technical program management and simulation for Autopilot, and before Tesla he was a senior director on Call of Duty at Activision for over seven years. Kart worked on Call of Duty: WWII and Call of Duty: Advanced Warfare at Activision’s Sledgehammer Games studio.

Sokolski spent four years at Wayve building photorealistic neural simulators for self-driving cars, as well as over three years at Google. Pelvan is a machine learning engineer with five published papers on neural networks and data imputation.

Third Dimension provides immersive-quality, render-engine-ready content that professionals across multiple industries can put straight to use. Its target customers range from the U.S. military to video game developers to autonomous vehicle companies.

“The ability to simulate the real world is one of the last frontiers in solving some very difficult engineering problems,” said Kart. “Video game engines combined with new generative AI technology will not only make creative industries’ content more robust, but will also enable all simulation efforts to represent the real world in the most accurate fashion.”

How it works

Third Dimension generates worlds for games.

Third Dimension wants to build a one-stop tool that accelerates companies’ ability to create worlds, gives artists a baseline for reaching final quality faster, and lets engineers build fantastical or entirely accurate representations of the real world in high fidelity. The technology will compress developer workflows from months to days or hours and save millions in expensive graphics development budgets, the company said.

In the world that Kart conceives, game artists will still be creating concepts. They can create a video, a 2D image or a 3D image, then feed that into the AI engine, which will create a version of the world based on that inspiration.

“Now a level designer or a concept artist can block out a world and figure out what this world’s going to look like. They draw a concept. Maybe it takes a couple days. They feed that into a generation system by Third Dimension that allows them to generate the world, and within a day or two, they have a fully playable world,” Kart said. “It’s not only like pitching ideas, but it’s also like supercharging the production process. The core goal here is that it shouldn’t cost a billion dollars to make video games. We will just accelerate the process and therefore make it less expensive to create content at large scale.”

Third Dimension generates worlds for autonomous vehicle testing.

It won’t be just for video games. It can also be used for autonomous vehicle training, simulation, virtual backgrounds for film and other applications.

Kart chose to focus on world generation because it is a large-scale production problem that consumes a ton of resources and makes games very expensive. He showed a demo of taking a 3D image and converting it into a simulated game world with 3D geometry.

Third Dimension foresees virtual set applications for film and TV.

“The goal here is to make it look like real life,” Kart said. “We are going to go after the actual final pixels that are going to end up on the screen. It has to save time and money. Otherwise, it’s not useful for anybody.”

Kart said the company is still in the research phase when it comes to converting videos into 3D meshes.

“We’re creating a large-scale 3D world reconstruction pipeline,” he said.
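Kart didn’t detail how that pipeline works internally, but a video-to-mesh reconstruction flow of the kind he describes typically breaks into a few stages: sampling frames, recovering camera poses, fitting a 3D scene representation and extracting a mesh. The sketch below is purely illustrative and is not Third Dimension’s code; every function here is a hypothetical placeholder standing in for one of those stages.

```python
# Illustrative sketch of a video-to-3D-mesh reconstruction pipeline.
# This is NOT Third Dimension's implementation; every stage function is a
# hypothetical placeholder for a real component (frame sampling,
# structure-from-motion, radiance-field fitting, surface extraction).
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Frame:
    image_path: str              # path to an extracted video frame
    pose: Optional[List[float]]  # flattened 4x4 camera-to-world transform, filled in by SfM


def extract_frames(video_path: str, every_n: int = 10) -> List[Frame]:
    """Sample frames from the source video (placeholder)."""
    raise NotImplementedError("stand-in for a frame-sampling step")


def estimate_camera_poses(frames: List[Frame]) -> List[Frame]:
    """Recover camera poses, e.g. with a structure-from-motion tool (placeholder)."""
    raise NotImplementedError("stand-in for SfM / pose estimation")


def fit_scene_representation(frames: List[Frame]) -> object:
    """Optimize a NeRF- or Gaussian-splat-style scene from posed frames (placeholder)."""
    raise NotImplementedError("stand-in for radiance-field / splat training")


def extract_mesh(scene: object) -> str:
    """Convert the learned scene into a game-engine-ready mesh file (placeholder)."""
    raise NotImplementedError("stand-in for surface extraction and export")


def video_to_world(video_path: str) -> str:
    """End to end: a video goes in, a path to a mesh file comes out."""
    frames = extract_frames(video_path)
    posed = estimate_camera_poses(frames)
    scene = fit_scene_representation(posed)
    return extract_mesh(scene)
```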

“Third Dimension is going to redefine 3D creation,” said Aydin Senkut, managing partner at Felicis, in a statement. “Their groundbreaking technology, which seamlessly generates precise 3D environments with a single click, is set to transform how engineers and artists create and simulate both real and imagined environments. By accelerating creativity and enhancing the detail in simulations, Third Dimension opens up new possibilities across various industries, from defense to video games and more. We’re so excited to be partnering with this incredibly experienced, veteran team.”

Origins

Two of Third Dimension’s founders.

The company got started earlier this year. The aim is to enable developers to create large-scale, usable environments, whether they’re true digital twins of real places or completely virtual worlds.

The founders wanted to create 3D versions of real-world locations like San Francisco. These worlds can be used for games, simulation, military applications and geospatial work. Third Dimension gets there through a combination of reconstruction and generation: the work combines radiance fields and diffusion models (NeRF, Gaussian splatting and image/video generation).
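The article names those ingredients without explaining how they fit together. For a rough sense of the reconstruction side, the snippet below shows the kind of primitive a Gaussian-splatting scene is built from: a cloud of anisotropic 3D Gaussians, each with a position, covariance, opacity and color, whose density can be evaluated at any point in space. It is a generic, textbook-style sketch, not Third Dimension’s representation.

```python
# Minimal sketch of the 3D Gaussian primitive that splatting-style
# reconstruction optimizes; a generic illustration, not any company's
# actual scene representation.
from dataclasses import dataclass

import numpy as np


@dataclass
class Gaussian3D:
    mean: np.ndarray    # (3,) center position in world space
    cov: np.ndarray     # (3, 3) covariance controlling size and orientation
    opacity: float      # scalar opacity in [0, 1]
    color: np.ndarray   # (3,) RGB color

    def density(self, x: np.ndarray) -> float:
        """Unnormalized Gaussian density at point x, scaled by opacity."""
        d = x - self.mean
        return self.opacity * float(np.exp(-0.5 * d @ np.linalg.inv(self.cov) @ d))


# A "scene" is just a large collection of such primitives; reconstruction
# fits their parameters so that rendering them reproduces the input images.
scene = [
    Gaussian3D(mean=np.zeros(3), cov=np.eye(3) * 0.01,
               opacity=0.8, color=np.array([0.6, 0.5, 0.4])),
]
print(scene[0].density(np.array([0.05, 0.0, 0.0])))  # ~0.71
```

In a real pipeline, millions of such primitives are optimized against the input images; presumably the diffusion side fills in or stylizes what the cameras never saw, though the article doesn’t spell that out.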

In the workflow Third Dimension envisions, the creator starts with an image, either drawn by the creator or generated. The creator can also block out the world in 3D, working from their own inspiration or a source image. That gets converted directly to 3D using Third Dimension’s tech, ready to be loaded into a game engine. Kart started working on this in early 2023.
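The article doesn’t say which format the generated worlds ship in; “ready to be loaded into a game engine” just means the output is ordinary mesh data in an interchange format engines understand, such as glTF/GLB or FBX. The snippet below is a generic illustration of that export step using the open-source trimesh library and dummy geometry; it is not Third Dimension’s tooling.

```python
# Generic illustration of the final export step: writing generated geometry
# to a GLB file that a game engine can import. Uses the open-source trimesh
# library; the tetrahedron is dummy data standing in for a generated world.
import numpy as np
import trimesh

vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
faces = np.array([
    [0, 1, 2],
    [0, 1, 3],
    [0, 2, 3],
    [1, 2, 3],
])

mesh = trimesh.Trimesh(vertices=vertices, faces=faces)
mesh.export("generated_world.glb")  # GLB/glTF is widely supported by engines
```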

Third Dimension will also focus on military uses.

While working on autonomous driving, Kart was exposed to synthetic data: using video game engines to generate artificial data for testing autonomous driving software. It’s like using driving games to test whether the software can really work well in the presence of humans.

“With a few friends, we decided to go solve a large-scale world creation problem. We can generate 3D worlds at scale, whether those are like real world reconstructions or eventually completely imagined environments for video games,” Kart said.

This isn’t the stuff of procedural worlds; rather, it’s more ambitious than the procedural tech that has been around for decades.

“We’re going to take a sample and convert that into mesh. That’s going to allow us to do real-world reconstruction,” Kart said.