Elon Musk’s artificial intelligence venture, xAI, is entering a bold new phase, one that could redefine the boundaries of machine intelligence. The company is now developing “world models”, a cutting-edge form of AI designed not just to process language and images but to understand and interact with the physical world itself.
Musk recently confirmed on X (formerly Twitter) that xAI is planning to release “a great AI-generated game before the end of next year.” He first teased the idea in 2024, but the new world-modelling hires suggest that the project is now rapidly progressing.
"The xAI game studio will release a great AI-generated game before the end of next year." — Elon Musk (@elonmusk), October 6, 2025
Unlike current large language models (LLMs) such as ChatGPT or Grok, which predict text or generate images from statistical patterns in their training data, world models aim to build an internal representation of the environment itself. These systems learn from video footage, robotics data, and simulations to develop a kind of physical intuition: an understanding of gravity, motion, light, and cause-and-effect relationships.
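The distinction can be made concrete with a toy example. The sketch below is purely illustrative and is not based on any published detail of xAI's system: a miniature "world model" observes the trajectory of a falling object, infers the underlying dynamics (here, a gravitational constant) from those observations, and then uses that learned model to predict states it has never seen — the same observe-infer-predict loop, in spirit, that world models apply at vastly larger scale.

```python
# Illustrative sketch only (not xAI's actual system): a "world model" in
# miniature. Rather than predicting the next word, it watches a falling
# object, fits an internal model of the dynamics (gravity), and then uses
# that model to predict beyond what it observed.

def simulate_fall(g: float, dt: float, steps: int) -> list[float]:
    """Ground-truth physics: positions of an object dropped from height 100."""
    y, v, out = 100.0, 0.0, []
    for _ in range(steps):
        v -= g * dt
        y += v * dt
        out.append(y)
    return out

def fit_gravity(observed: list[float], dt: float) -> float:
    """Recover g from observed positions via second differences:
    y[t+1] - 2*y[t] + y[t-1] = -g * dt**2 under constant acceleration."""
    accs = [
        (observed[i + 1] - 2 * observed[i] + observed[i - 1]) / dt**2
        for i in range(1, len(observed) - 1)
    ]
    return -sum(accs) / len(accs)

dt = 0.1
observations = simulate_fall(g=9.81, dt=dt, steps=20)    # the "video footage"
g_learned = fit_gravity(observations, dt)                # learned physical intuition
prediction = simulate_fall(g=g_learned, dt=dt, steps=40) # predict unseen future states
print(f"learned g = {g_learned:.2f}")                    # prints: learned g = 9.81
```

An LLM pattern-matches over text; this loop instead recovers a causal parameter of the world from raw observations, which is the intuition behind training on video and simulation data.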
To accelerate this push, xAI has reportedly been hiring top researchers from Nvidia, a global leader in AI hardware and simulation technology. Among the notable hires are Zeeshan Patel and Ethan He, both of whom previously worked on Nvidia’s Omniverse platform — a powerful engine for creating realistic virtual environments.
These experts bring deep experience in simulation, visual learning, and physical modeling, skills essential for training AI systems that can reason about the real world.
Alongside its world-modelling research, xAI also launched a new image and video generation model this week, touting “massive upgrades” in realism and functionality. The tool has been made freely available to users, signalling xAI’s intent to compete directly with OpenAI’s DALL·E, Stability AI’s Stable Diffusion, and other visual-generation leaders.
