This article originally appeared on Light Reading.
Data center operator Equinix, in collaboration with Nvidia, has rolled out a fully managed private cloud service that allows companies to quickly build and run their own massive AI models.
Under the partnership, Equinix will install and operate a company's privately owned Nvidia infrastructure at its International Business Exchange data centers. Corporate customers will buy Nvidia systems and pay Equinix to operate them on their behalf.
The service, which is now commercially available, is built on Nvidia DGX systems, Nvidia networking and Nvidia AI software.
According to Charles Meyer, president and CEO of Equinix, companies need adaptable, scalable hybrid infrastructure in their local markets to bring AI supercomputing to their data.
"Our new service provides customers with a fast and cost-effective way to adopt advanced AI infrastructure that's operated and managed by experts globally," he said, adding that the new service enables customers to operate their AI infrastructure in close proximity to their data.
DGX systems housed within Equinix's data centers are connected to the outside world through a high-speed private network, and the company also provides high-bandwidth interconnections to cloud services and enterprise service providers.
Working alongside Nvidia, the Equinix managed services team undertook comprehensive training on how to build and operate the AI systems.
"Generative AI is transforming every industry," said Jensen Huang, founder and CEO of Nvidia. "Now, enterprises can own Nvidia AI supercomputing and software, paired with the operational efficiency of Equinix management, in hundreds of data centers worldwide."
Without disclosing names, Equinix said there are already enterprise customers using the new managed services – many of which operate in industries such as biopharma, financial services, software, automotive and retail.
These customers are building AI Centers of Excellence to provide a strategic foundation for a broad range of large language model (LLM) use cases. These include accelerating time to market for new medications, developing AI copilots for customer service agents and building virtual productivity assistants.
Owning Their AI Infrastructure
In October, IDC predicted enterprise spending on generative AI (genAI) software, infrastructure hardware and IT services would reach nearly $16 billion worldwide in 2023 and grow to $143 billion by 2027.
The technology research firm said generative AI investments will follow a natural progression over the next several years as organizations move from early experimentation, to aggressive buildout around targeted use cases, to widespread adoption across business activities, eventually extending genAI use to the edge.
Equinix's managed AI deal with Nvidia comes at a time when companies in Asia-Pacific are showing interest in owning their AI computing systems for privacy and security reasons.
"Today, as we talk to enterprise customers around the world, one of their number one concerns and ideas around AI is being able to own their own model and really own their own future," Charlie Boyle, Nvidia vice president of DGX systems, said during a press briefing yesterday.
He added that many companies in Asia-Pacific are rapidly expanding their use of the technology but lack the in-house expertise to build their own large language models.
He also pointed out that most companies need to be very close to the AI processing they're trying to accomplish.
"The AI model, the AI execution has to be very close to the data. And all those elements come together in customers wanting to do AI, wanting to do it fast, securely, and near their data. But many times they're lacking either the data center space or the internal expertise of how to manage all of that," Boyle said.