AI Cloud Firm Lambda Secures $480 Million in Funding as Demand for Nvidia-Powered Servers Surges
- By StartupStory | February 20, 2025

Silicon Valley-based Lambda, a rising player in AI cloud computing, has raised $480 million in a Series D funding round as the demand for Nvidia-powered AI infrastructure continues to skyrocket. The round was led by Andra Capital and SGW, the family office of early Google investor Scott Hassan, bringing Lambda’s total equity raised to $863 million.
While the company did not officially disclose its valuation, sources familiar with the deal indicated that the funding round values Lambda at $2.5 billion post-money. Other participants in the round included ARK Invest and G Squared, with additional backing from key strategic investors such as Andrej Karpathy (OpenAI, Tesla), Fincadia Advisors, In-Q-Tel (the CIA’s investment arm), and several leading server manufacturers, including Pegatron, Wistron, Wiwynn, and Supermicro.
The new $480 million funding will be used primarily to purchase more Nvidia GPUs, expand Lambda’s cloud infrastructure, and enhance its software offerings. Among the company’s software products are its Model Inference API, which helps businesses deploy AI models at scale, and its upcoming Chat AI Assistant, designed to integrate AI-driven chat capabilities into enterprise applications.
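Inference APIs of this kind typically follow the OpenAI-compatible chat-completions convention, so a minimal client sketch looks like the one below. To be clear, the base URL, model name, and API key are illustrative assumptions, not Lambda's documented endpoint or model catalogue; the snippet only shows the general calling pattern such a service exposes.

```python
# Minimal sketch of calling an OpenAI-compatible model inference API.
# NOTE: base_url, api_key, and the model identifier are hypothetical
# placeholders for illustration, not Lambda's documented interface.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-ai-cloud.com/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",                          # provider-issued key
)

# Send a chat-style request to a hosted model and print the reply.
response = client.chat.completions.create(
    model="example-llm",  # hypothetical model name
    messages=[
        {"role": "user", "content": "Summarise this quarter's sales figures."},
    ],
)

print(response.choices[0].message.content)
```

Because the interface mirrors the widely used chat-completions protocol, existing applications can often switch providers by changing only the base URL and model name.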
Beyond cloud services, Lambda also helps enterprises deploy Nvidia servers in their data centres. Earlier this year, the company installed a GB200 NVL72 rack at ECL’s hydrogen-powered data centre in Mountain View, California, highlighting its commitment to sustainable AI computing solutions.
Founded in 2012, Lambda has carved out a niche by providing AI developers and enterprises with high-performance cloud infrastructure optimised for training and deploying AI models. The company’s flagship Lambda GPU Cloud operates out of data centres in San Francisco, California, and Allen, Texas, with recent expansions into Vernon, California, through a deal with Supermicro.
Most of Lambda’s business revolves around renting out AI servers powered by Nvidia GPUs, which have become essential for running large-scale AI workloads. The explosion in demand for generative AI and large language models (LLMs) has put Nvidia chips in short supply, making cloud-based access to high-end hardware more critical than ever.
“We’re scaling infrastructure and software, enabling AI developers to train, fine-tune, and deploy models faster and easier than ever,” said Lambda CEO and co-founder Stephen Balaban.
Lambda is positioning itself as a key enabler of open-source AI development, a space traditionally dominated by hyperscalers like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. One of its most notable recent moves has been its support for DeepSeek-R1, an open-source AI model developed in China.
“Lambda is really well positioned as a company to take advantage of open-source AI models like DeepSeek-R1 because we have well over 25,000 GPUs on our cloud platform that can be readily repurposed to host these models,” Balaban noted.
The company says that the launch of DeepSeek-R1 in January 2025 has led to a spike in demand for Nvidia’s H200 chips, with customers pre-purchasing large blocks of Lambda’s H200 capacity even before public availability.
With those hyperscalers already dominating the AI cloud market, Lambda’s challenge is clear: carving out a competitive edge in a space controlled by trillion-dollar tech giants. However, the company is banking on its dedicated AI-first infrastructure, developer-friendly approach, and support for open-source AI to differentiate itself.
With a rapidly growing customer base of more than 5,000 organisations, including clients in the manufacturing, financial services, and government sectors, Lambda’s latest funding round makes it one of the best-funded independent AI cloud providers on the market.