Latent Technology raises $2.1M to blend AI and game animations
London-based startup Latent Technology has raised $2.1 million to build the next-generation animation technology for virtual worlds and game characters.
The startup uses AI-based animation technology and real-world physics to let virtual-world characters move in a physically accurate way in real time, in contrast to today's approach, in which characters are loaded with thousands of hand-crafted animations.
Spark Capital and Root Ventures led the investment round, and game venture capital fund Bitkraft Ventures also participated. The funding round will help Latent to scale its team and develop the first version of their product, said Jorge del Val, CEO of Latent Technology, in an interview with GamesBeat.
"My cofounder and I were working in the AI space in research in the video game industry," he said. "After seeing how interesting this space could be, I had this idea in my head to take the technology and take the next step in animation. We are building the next-generation animation technology for virtual worlds, which will allow characters in virtual worlds to make their own decisions about the movements they make, and allow them to interact physically with their environment, instead of loading them with thousands of animations."
Latent's mission
Founded last November by AI gaming veterans del Val and CTO Jack Harmer, Latent Technology developed a technology dubbed Generative Physics Animation. They previously worked at Electronic Arts and Embark Studios.
They have a better way to animate virtual worlds and characters, said del Val. It allows virtual-world characters to move about using physics-based natural movements in real time. By contrast, today's game characters are loaded down with thousands of handcrafted animations that show every single movement.
"It isn't hard to see that video games haven't fundamentally changed in a long time, and the magic we used to feel early on is long gone," del Val said in a statement. "Meanwhile, technologies such as artificial intelligence have advanced dramatically. There is huge potential to leverage the latest technology to empower players and creators in ways they never imagined. We aim at nothing less than to reinvent how virtual worlds are experienced and created, so that we can bring magic back to the fingertips of players and game creators."
Leveraging the latest advancements in reinforcement learning and generative modeling, the resulting characters interact physically with the environment in an emergent manner, increasing immersion while dramatically reducing development time, del Val said.
"Traditional animation is limited, unrealistic and bound to game design," del Val said. "The industry solution typically implies scaling up the team and the complexity of the outcome. We believe giving the characters the autonomy to decide how to move in real time while interacting physically with their environment has the potential to radically change how we approach this problem."
Del Val acknowledges that there are built-in tradeoffs. Sometimes, game developers donât want characters to have realistic movements. They want them to have superhuman movements, like having Call of Duty characters run around at 40 miles per hour all of the time.
"It's not a problem for the technology, because you can train them with different physics," he said. "Then you can transport that into the game. What we want to do in the company is keep developing different physics in different conditions that solve different tasks."
Del Val thinks that his company can still deal with such circumstances by modifying the physics of what's possible in a game.
"There is an inherent tradeoff between creative control and emergent interactions. We are used to the leftmost extreme of this tradeoff: millimetrically controlling the outcome. However, this usually comes at the price of a limited experience and a big team scale," del Val said. "The other side of the tradeoff implies delegating details to the computer and embracing the results. Because of this, experiences can be more general and much more immersive, while needing only a fraction of the time to produce. This part of the spectrum hasn't really been explored that much, while it could have profound implications for the industry."
The founders believe a physics-based approach yields more realism in movement. No matter how much motion capture data game artists use, there is often something artificial about how character animations look, del Val said.
"It takes a lot of scale for the team, and the result is still not interactive," he said. "Take the simple problem of throwing a rock at a character. If the animator hasn't thought about how the character will react beforehand, then the character won't be able to react. That's a pretty fundamental problem. That's exactly what we're trying to solve. If we manage to solve this problem, then most of the animations that characters make in reaction to their environment will become natural. They will be emergent."
This is hard to do because making a physics-based character move accurately is a challenge that can only be solved with machine learning technology, del Val said.
"We train this character in a physical simulation and allow it to learn how to move by itself, giving it a reference of real human data, so it learns to solve a task not just on its own, but also the way a human would," del Val said. "We want to create a product that would be very easy for any game studio to integrate and use."
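Del Val's description, training characters in a physics simulation to solve tasks while staying close to reference human motion, matches the general shape of imitation-guided reinforcement learning, where the training reward blends a task term with a motion-matching term. A minimal toy sketch of that reward blend (all function names, distances and weights here are illustrative assumptions, not Latent's actual system):

```python
import math

# Toy sketch of an imitation-guided RL reward: blend a task reward
# (get close to a goal) with an imitation reward (stay close to a
# reference pose from motion capture). Positions and poses are
# simplified to single numbers for illustration.

def task_reward(char_pos, target_pos):
    # Higher when the character is closer to its goal (max 1.0).
    return math.exp(-abs(target_pos - char_pos))

def imitation_reward(char_pose, ref_pose):
    # Higher when the simulated pose matches the reference motion (max 1.0).
    return math.exp(-abs(ref_pose - char_pose))

def total_reward(char_pos, target_pos, char_pose, ref_pose,
                 w_task=0.5, w_imitate=0.5):
    # Weighted blend: the agent is pushed to solve the task
    # AND to move "like a human would."
    return (w_task * task_reward(char_pos, target_pos)
            + w_imitate * imitation_reward(char_pose, ref_pose))

# At the goal with a perfectly matched pose, the reward is maximal.
print(total_reward(1.0, 1.0, 0.3, 0.3))  # prints 1.0
```

In a real system the poses would be full joint configurations inside a physics engine, and the reward would drive a policy network via reinforcement learning; the point of the blend is that pure task rewards produce effective but unnatural motion, while the imitation term keeps the result human-like.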
This kind of improvement in efficiency is enabled by AI, and it is the kind of thing we'll need to create applications for something as huge as the metaverse, del Val said. But this doesn't mean artists won't be necessary. Instead, it means that artists will operate at a higher level of creative control.
The company has two people now and it is hiring. Once the tech is ready, it could work with various game engines such as Unity, which is the first target platform for the startup. The team can add different game engines like Unreal over time, del Val said.