Dell has launched new high-end servers that include flagship Nvidia GPUs for enterprise AI workloads, bundled in an ‘AI Factory’ package with software, networking, and managed support – all aimed at accelerating AI adoption and driving sales.
In sum – what to know
New servers – a raft of new high-end Dell servers integrate top-line Nvidia GPUs for enterprise cloud and edge AI workloads.
Factory package – hardware is being bundled with new software and networking switches as part of a joint ‘AI Factory’ pitch.
Drive and sell – the masterplan (!) is to accelerate enterprise adoption of AI, and sell lots of servers and software licences and such.
Dell and Nvidia have deepened their AI partnership with the introduction of new AI server hardware, integrated AI software, and fully managed AI services – all geared to enterprises deploying AI across edge and cloud setups. The move seeks to make high-end AI accessible to enterprises that lack the skills, infrastructure, and capacity to build and run domain-specific AI engines themselves. The plan is to accelerate enterprise adoption of AI, and sell lots of servers and software licences and such.
Sounds obvious, but the pair want to be the go-to edge and hybrid cloud/edge AI infrastructure providers for enterprises, and not just for big centralised hyperscalers in the cloud. At Dell Technologies World in Las Vegas, they launched a bunch of new offerings as part of their joint ‘AI Factory’ (Dell AI Factory with Nvidia, in full) initiative. These combine Dell’s infrastructure with Nvidia’s GPUs and AI software. In sum, Dell is launching a raft of new servers, all available in the second half of the year, for accelerated compute and data processing, integrated with new network switches, an updated management platform, and 24/7 managed support.
The new servers include air-cooled PowerEdge XE9780 and XE9785 units (for “integration into existing enterprise data centres”) and direct-to-chip liquid-cooled PowerEdge XE9780L and XE9785L units (pictured; to “accelerate rack-scale deployment”). These are the successors to the PowerEdge XE9680, presented as Dell’s “fastest ramping solution ever”. They support up to 192 Blackwell Ultra GPUs (“customisable” to 256) per Dell’s highest-end server cabinet – specifically its IR7000 rack system. The Ultra series, introduced at GTC 2025, sits at the top of Nvidia’s GPU portfolio.
Dell reckons the new units deliver up to four times faster large language model (LLM) training with an eight-way Nvidia HGX B300 configuration – where eight Blackwell Ultra GPUs are interconnected via fifth-generation NVLink, the chip firm’s latest GPU-to-GPU interconnect technology, part of the Blackwell architecture. There are other servers besides: the PowerEdge XE9712, featuring Nvidia’s GB300 NVL72 system, offers “50-times more AI reasoning inference output and five-times improvement in throughput”; and the PowerEdge XE7740 and XE7745, available with Nvidia’s enterprise-geared RTX Pro 6000 GPUs in July, and supported in Nvidia’s ‘Enterprise AI Factory’ design.
This XE7740/XE7745 platform works as a “universal platform to help meet the needs of physical and agentic AI use cases like robotics, digital twins, and multi-modal AI applications”, said Dell. It supports up to eight GPUs in a 4U (4×1.75-inch rack unit) chassis. Dell said it will provide a new PowerEdge XE server for Nvidia’s upcoming Vera Rubin supercomputing architecture, a successor to its Blackwell series, geared for ultra-high-density compute and advanced memory and networking technologies.
Meanwhile, Dell is offering support for Nvidia’s upgraded Ethernet and InfiniBand networking switches (PowerSwitch SN5600 and SN2201 Ethernet, Quantum-X800 InfiniBand) to deliver up to 800 gigabits per second of throughput to match the demands of AI workloads on the new servers. Additionally, Dell is offering “enhancements” to its AI Data Platform, specifically related to its ObjectScale and PowerScale storage platforms, with integration with Nvidia’s BlueField-3 DPU network card and Spectrum-4 Ethernet switch chip. The introduction of KV cache and RDMA support now provides lower-latency access to data during inference, it said.
There’s other stuff too: Dell is offering Nvidia’s AI Enterprise software suite, including microservices, retrieval models, reasoning models, and tools for creating agentic AI models and workflows, plus support for Red Hat OpenShift. Dell is offering a managed service for the whole AI Factory solution, taking care of everything in the Nvidia AI stack with 24/7 monitoring, reporting, version upgrades, and patching.
The new Dell Managed Services for the Dell AI Factory with Nvidia simplify AI operations with management of the full Nvidia AI solutions stack – including AI platforms, infrastructure, and Nvidia AI Enterprise software. Dell managed services experts handle 24×7 monitoring, reporting, version upgrades, and patching, helping teams overcome resource and expertise constraints by providing cost-effective, scalable, and proactive IT support.
Michael Dell, chairman and chief executive officer at Dell Technologies, said: “We’re on a mission to bring AI to millions of customers around the world. Our job is to make AI more accessible. With the Dell AI Factory with Nvidia, enterprises can manage the entire AI lifecycle across use cases, from training to deployment, at any scale.”
Jensen Huang, founder and chief executive officer at Nvidia, said: “AI factories are the infrastructure of modern industry, producing intelligence to power work across healthcare, finance, and manufacturing. With Dell, we’re offering the broadest line of Blackwell AI systems to serve AI factories in clouds, enterprises, and at the edge.”