Multimodal AI company Luma AI has raised $900 million in a Series C round led by Humain, the PIF-backed full-stack AI company, with participation from AMD Ventures, Andreessen Horowitz, Amplify Partners, Matrix Partners, and other existing investors. The raise marks one of the largest global funding rounds for an AGI-focused company this year.
Powering Multimodal General Intelligence with Massive Compute
The funding will accelerate Luma AI’s mission to build multimodal general intelligence: AI systems capable of understanding, generating and acting within the physical world across video, audio, images and language. As part of the partnership, Luma AI will become a major customer of Humain’s upcoming Project Halo, a 2-gigawatt AI supercluster in Saudi Arabia that is set to become one of the world’s largest compute buildouts.
CEO Amit Jain said Luma AI aims to train AI systems on “a quadrillion tokens of multimodal information, essentially humanity’s digital memory.” He added that Humain’s rapid infrastructure deployment is essential for training world-scale models that can simulate and understand real environments.
Strategic Roadmap for World Models and Global Products
The funding announcement was made at the US–Saudi Investment Forum, where both companies unveiled a joint roadmap to advance large-scale World Models: foundational AI systems that learn from peta-scale multimodal data, far exceeding current LLM training regimes. These models will power Humain’s product suite, Humain Create, enabling next-generation applications across robotics, entertainment, advertising, gaming and personalised education.
Commercial Momentum and Product Expansion
Luma AI’s flagship model Ray3 has already been adopted by global studios, creative agencies and brands, including deep integration within Adobe’s ecosystem. The company plans to expand its capabilities into simulation, industrial design and robotics.
Final Take
With $900 million in fresh capital and access to a 2-GW compute cluster, Luma AI is positioning itself at the forefront of multimodal AGI, ushering in an era where AI can learn, reason and act across the full spectrum of human and physical experience.