Create your own video game character with the help of an AI model. Yes, in theory you should now be able to do exactly that, thanks to a new AI model from NVIDIA called CALM. CALM stands for Conditional Adversarial Latent Models, and it was developed to train directable virtual characters, or in other words, video game characters.
The model’s structure and training were detailed in a paper written by NVIDIA in collaboration with Technion – Israel Institute of Technology, Bar-Ilan University, and Simon Fraser University.
CALM was trained in a simulated environment for the equivalent of 10 continuous years, which translates into about 10 days of real-world time. After training, the model was capable of 5 billion body movements. The character simulated by CALM, a white-patterned warrior, can now imitate and simulate instinctive human movements such as walking, running, and fighting with a sword.
As the authors describe it in the paper's abstract: "In this work, we present Conditional Adversarial Latent Models (CALM), an approach for generating diverse and directable behaviors for user-controlled interactive virtual characters. Using imitation learning, CALM learns a representation of movement that captures the complexity and diversity of human motion, and enables direct control over character movements. The approach jointly learns a control policy and a motion encoder that reconstructs key characteristics of a given motion without merely replicating it. The results show that CALM learns a semantic motion representation, enabling control over the generated motions and style-conditioning for higher-level task training. Once trained, the character can be controlled using intuitive interfaces, akin to those found in video games."
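To make the architecture concrete, here is a minimal, purely illustrative sketch of the data flow the abstract describes: a motion encoder maps a reference clip to a latent "skill" code, and a single low-level policy is conditioned on both the character state and that latent, so the same policy can produce different movement styles. All dimensions, weights, and function names here are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions, chosen only for illustration.
MOTION_DIM, LATENT_DIM, STATE_DIM, ACTION_DIM = 12, 4, 8, 6

# Motion encoder: projects a motion-clip feature vector to a latent code.
# CALM trains this jointly with the policy; here it is a fixed random
# linear map purely to show the data flow.
W_enc = rng.normal(size=(LATENT_DIM, MOTION_DIM))

def encode_motion(motion_clip):
    """Map a motion feature vector to a unit-norm latent skill code."""
    z = W_enc @ motion_clip
    return z / np.linalg.norm(z)

# Low-level control policy: conditioned on both the character state and
# the latent, so one policy covers many movement styles.
W_pol = rng.normal(size=(ACTION_DIM, STATE_DIM + LATENT_DIM))

def policy(state, latent):
    """Return an action given the current state and the skill latent."""
    return np.tanh(W_pol @ np.concatenate([state, latent]))

# "Directing" the character: encode a reference clip (e.g. a sword
# swing) and condition the policy on the resulting latent.
reference_clip = rng.normal(size=MOTION_DIM)
z = encode_motion(reference_clip)
action = policy(rng.normal(size=STATE_DIM), z)
print(action.shape)  # (6,)
```

Swapping in a different reference clip changes the latent, and with it the style of motion the policy produces, without retraining the policy itself; that is the sense in which the characters are "directable."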
You can create your own video game character with it, or at least with a similar AI model, by integrating CALM's code into your own project. The code is available on GitHub.
What do you think about this new AI breakthrough? The movements are strikingly human, yet there is still a hint of the uncanny valley about them. What's your take?