This week we announced a wave of purpose-built datacenters and infrastructure investments we're making around the world to support the global adoption of cutting-edge AI workloads and cloud services.
Today in Wisconsin we unveiled Fairwater, our newest US AI datacenter and the largest, most sophisticated AI factory we have built yet. In addition to our Fairwater datacenter in Wisconsin, we also have multiple similar Fairwater datacenters under construction in other locations across the US.
In Narvik, Norway, Microsoft announced plans with nScale and Aker JV to develop a new hyperscale AI datacenter.
In Loughton, UK, we announced a partnership with nScale to build the UK's largest supercomputer to support services in the UK.
These AI datacenters are significant capital projects, representing tens of billions of dollars of investment and hundreds of thousands of cutting-edge AI chips, and will seamlessly connect with our global Microsoft Cloud of more than 400 datacenters in 70 regions around the world. Through innovation that allows us to link these AI datacenters in a distributed network, we multiply efficiency and compute exponentially, further democratizing access to AI services globally.
So what’s an AI datacenter?
The AI datacenter: the new factory of the AI era

An AI datacenter is a unique, purpose-built facility designed specifically for AI training as well as running large-scale artificial intelligence models and applications. Microsoft's AI datacenters power OpenAI, Microsoft AI, our Copilot capabilities and many other leading AI workloads.
The new Fairwater AI datacenter in Wisconsin is a remarkable feat of engineering, covering 315 acres and housing three massive buildings with a combined 1.2 million square feet under roof. Constructing this facility required 46.6 miles of deep foundation piles, 26.5 million pounds of structural steel, 120 miles of medium-voltage underground cable and 72.6 miles of mechanical piping.
Unlike typical cloud datacenters, which are optimized to run many smaller, independent workloads such as hosting websites, email or business applications, this datacenter is built to work as one massive AI supercomputer using a single flat network interconnecting hundreds of thousands of the latest NVIDIA GPUs. In fact, it will deliver 10x the performance of the world's fastest supercomputer today, enabling AI training and inference workloads at a level never before seen.
The role of our AI datacenters – powering frontier AI
Effective AI models rely on thousands of computers working together, powered by GPUs or specialized AI accelerators, to process massive concurrent mathematical computations. They are interconnected with extremely fast networks so they can share results instantly, and all of this is supported by massive storage systems that hold the data (like text, images or video) broken down into tokens, the small units of information the AI learns from. The goal is to keep these chips busy all the time, because if the data or the network can't keep up, everything slows down.
The AI training itself is a cycle: the AI processes tokens in sequence, makes predictions about the next one, checks them against the correct answers and adjusts itself. This repeats trillions of times until the system gets better at whatever it is being trained to do. Think of it like a professional football team's practice. Each GPU is a player running a drill, the tokens are the plays being executed step by step, and the network is the coaching staff, shouting instructions and keeping everyone in sync. The team repeats plays over and over, correcting mistakes until it can execute them perfectly. By the end, the AI model, like the team, has mastered its strategy and is ready to perform under real game conditions.
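To make that cycle concrete, here is a minimal, illustrative sketch of next-token-prediction training in PyTorch. This is not Microsoft's or OpenAI's training code; the tiny model, random stand-in data and hyperparameters are assumptions for the example, but the loop of predict, compare against the correct next token and adjust is the same one that repeats trillions of times at datacenter scale.

```python
# Minimal next-token prediction loop (illustrative only; toy model and stand-in data).
import torch
import torch.nn as nn

vocab_size, context, batch = 1000, 32, 8
model = nn.Sequential(
    nn.Embedding(vocab_size, 128),         # token IDs -> vectors
    nn.Flatten(),                          # (batch, context, 128) -> (batch, context*128)
    nn.Linear(128 * context, vocab_size),  # score every possible next token
)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):                                           # real runs: trillions of tokens
    tokens = torch.randint(0, vocab_size, (batch, context + 1))   # stand-in training data
    inputs, targets = tokens[:, :-1], tokens[:, -1]
    logits = model(inputs)            # predict the next token
    loss = loss_fn(logits, targets)   # compare against the correct answer
    loss.backward()                   # work out how the weights should change
    optimizer.step()                  # adjust
    optimizer.zero_grad()
```

At frontier scale, this same loop is sharded across hundreds of thousands of GPUs, which is why the interconnect and storage described below matter as much as the chips themselves.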
AI infrastructure at frontier scale
Purpose-built infrastructure is key to powering AI efficiently. To compute the token math at the trillion-parameter scale of leading AI models, the core of the AI datacenter is made up of dedicated AI accelerators (such as GPUs) mounted on server boards alongside CPUs, memory and storage. A single server hosts multiple GPU accelerators, connected for high-bandwidth communication. These servers are then installed into a rack, with top-of-rack (ToR) switches providing low-latency networking between them. Every rack in the datacenter is interconnected, creating a tightly coupled cluster. From the outside, this architecture looks like many independent servers, but at scale it functions as a single supercomputer where hundreds of thousands of accelerators can train a single model in parallel.
This datacenter runs a single, massive cluster of interconnected NVIDIA GB200 servers, with millions of compute cores and exabytes of storage, all engineered for the most demanding AI workloads. Azure was the first cloud provider to bring online the NVIDIA GB200 server, rack and full datacenter clusters. Each rack packs 72 NVIDIA Blackwell GPUs, tied together in a single NVLink domain that delivers 1.8 terabytes per second of GPU-to-GPU bandwidth and gives every GPU access to 14 terabytes of pooled memory. Rather than behaving like dozens of separate chips, the rack operates as a single, giant accelerator, capable of processing an astonishing 865,000 tokens per second, the highest throughput of any cloud platform available today. The Norway and UK AI datacenters will use similar clusters and take advantage of NVIDIA's next AI chip design (GB300), which offers even more pooled memory per rack.
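As a rough back-of-the-envelope illustration, using only the figures quoted above and treating throughput and memory as spread evenly across the rack (a simplification), the per-GPU share works out to roughly 12,000 tokens per second and roughly 194 GB of pooled memory:

```python
# Back-of-the-envelope view of the GB200 rack figures quoted above (evenly-spread simplification).
gpus_per_rack = 72                  # NVIDIA Blackwell GPUs in one NVLink domain
rack_tokens_per_second = 865_000    # rack-level throughput quoted above
pooled_memory_gb = 14_000           # 14 TB of pooled memory, in decimal GB

print(f"~{rack_tokens_per_second / gpus_per_rack:,.0f} tokens/s per GPU")        # ~12,014
print(f"~{pooled_memory_gb / gpus_per_rack:,.0f} GB of pooled memory per GPU")   # ~194
```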
The challenge in building at supercomputing scale, particularly as AI training continues to demand breakthrough scales of computing, is getting the networking topology right. To ensure low-latency communication across multiple layers in a cloud environment, Microsoft needed to extend performance beyond a single rack. For the latest NVIDIA GB200 and GB300 deployments globally, at the rack level the GPUs communicate over NVLink and NVSwitch at terabytes per second, collapsing memory and bandwidth boundaries. To connect multiple racks into a pod, Azure uses both InfiniBand and Ethernet fabrics that deliver 800 Gbps in a full fat-tree, non-blocking architecture, ensuring that every GPU can talk to every other GPU at full line rate without congestion. And across the datacenter, multiple pods of racks are interconnected to reduce hop counts and enable tens of thousands of GPUs to function as one global-scale supercomputer.
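The "non-blocking fat tree" idea can be illustrated with a small sketch: a leaf switch is non-blocking when the bandwidth it offers upward into the spine is at least equal to the bandwidth its downlinks offer to the GPUs below it. The port counts below are hypothetical assumptions for the example, not the actual Fairwater fabric design; only the 800 Gbps link speed comes from the text above.

```python
# Illustrative check of the "non-blocking" property at a fat-tree leaf switch.
# Port counts are hypothetical; only the 800 Gbps link speed comes from the text above.
link_gbps = 800          # per-link speed of the InfiniBand/Ethernet fabric
downlinks_to_gpus = 32   # hypothetical ports facing GPU servers
uplinks_to_spine = 32    # hypothetical ports facing the spine layer

down_bw = downlinks_to_gpus * link_gbps
up_bw = uplinks_to_spine * link_gbps

# Non-blocking means traffic from every GPU can cross the switch at full line rate,
# because aggregate uplink bandwidth matches aggregate downlink bandwidth.
oversubscription = down_bw / up_bw
print(f"{down_bw} Gbps down, {up_bw} Gbps up -> oversubscription {oversubscription:.1f}:1")
print("non-blocking" if oversubscription <= 1 else "oversubscribed")
```

In a full fat tree, every layer of the fabric keeps that downlink/uplink balance, which is what preserves the non-blocking property as pods scale out to tens of thousands of GPUs.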
When laid out in a traditional datacenter hallway, the physical distance between racks introduces latency into the system. To address this, the racks in the Wisconsin AI datacenter are arranged in a two-story configuration, so in addition to being networked to adjacent racks, they are networked to additional racks above or below them.
This layered approach sets Azure apart. Microsoft Azure was not just the first cloud to bring GB200 online at rack and datacenter scale; we are doing it at massive scale with customers today. By co-engineering the full stack with the best from our industry partners, coupled with our own purpose-built systems, Microsoft has built the most powerful, tightly coupled AI supercomputer in the world, purpose-built for frontier models.

Addressing the environmental impact: closed-loop liquid cooling at facility scale
Traditional air cooling cannot handle the density of modern AI hardware. Our datacenters use advanced liquid cooling systems: integrated pipes circulate cold liquid directly into servers, extracting heat efficiently. The closed-loop recirculation ensures zero water waste; water is needed only to fill the system once and is then continually reused.
By designing purpose-built AI datacenters, we were able to build liquid cooling infrastructure directly into the facility, giving us greater rack density in the datacenter. Fairwater is supported by the second-largest water-cooled chiller plant on the planet and will continuously circulate water in its closed-loop cooling system. The hot water is piped out to the cooling "fins" on each side of the datacenter, where 172 20-foot fans chill it and recirculate it back into the datacenter. This system keeps the AI datacenter running efficiently, even at peak loads.

Over 90% of our datacenter capacity uses this system, requiring water only once during construction and continually reusing it with no evaporation losses. The remaining 10% of traditional servers use outdoor air for cooling, switching to water only on the hottest days, a design that dramatically reduces water usage compared to traditional datacenters.
We are also using liquid cooling to support AI workloads in many of our existing datacenters; this liquid cooling is achieved with Heat Exchanger Units (HXUs) that also operate with zero operational water use.
Storage and compute: Built for AI speed
Modern datacenters can contain exabytes of storage and millions of CPU compute cores. To support the AI infrastructure cluster, an entirely separate datacenter infrastructure is needed to store and process the data used and generated by the AI cluster. To give a sense of the scale: the Wisconsin AI datacenter's storage systems are five football fields in length!

We reengineered Azure storage for the most demanding AI workloads across these massive datacenter deployments, delivering true supercomputing scale. Each Azure Blob Storage account can sustain over 2 million read/write transactions per second, and with millions of accounts available, we can elastically scale to meet virtually any data requirement.
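As an illustration of how a workload might spread traffic across many storage accounts to scale beyond any single account's transaction rate, here is a hedged sketch using the azure-storage-blob Python SDK. The account names, container name and hash-based sharding scheme are assumptions for the example, not a prescribed Azure pattern.

```python
# Illustrative sketch: spreading blobs across several storage accounts so that
# aggregate read/write transactions scale beyond a single account's limits.
# Account URLs and the hash-based sharding scheme are assumptions for this example.
from hashlib import sha256
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

account_urls = [
    "https://trainingdata000.blob.core.windows.net",  # hypothetical accounts
    "https://trainingdata001.blob.core.windows.net",
    "https://trainingdata002.blob.core.windows.net",
]
credential = DefaultAzureCredential()
clients = [BlobServiceClient(account_url=url, credential=credential) for url in account_urls]

def upload_shard(name: str, data: bytes, container: str = "dataset") -> None:
    """Pick an account by hashing the blob name, then upload the shard to it."""
    idx = int(sha256(name.encode()).hexdigest(), 16) % len(clients)
    blob = clients[idx].get_blob_client(container=container, blob=name)
    blob.upload_blob(data, overwrite=True)

upload_shard("tokens/shard-00042.bin", b"\x00" * 1024)
```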
Behind this capability is a fundamentally rearchitected storage foundation that aggregates capacity and bandwidth across thousands of storage nodes and hundreds of thousands of drives. This enables scaling to exabytes of storage, eliminating the need for manual sharding and simplifying operations for even the largest AI and analytics workloads.
Key innovations such as BlobFuse2 deliver high-throughput, low-latency access for GPU node-local training, ensuring that compute resources are never idle and that massive AI training datasets are always available when needed. Multiprotocol support enables seamless integration with diverse data pipelines, while deep integration with analytics engines and AI tools accelerates data preparation and deployment.
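The practical effect of BlobFuse2 for a training job is that blob data appears as ordinary files on the GPU node, so a data loader can read shards from the mount path as if they were local storage. The sketch below assumes a hypothetical mount point of /mnt/blob/dataset; it illustrates the pattern rather than Microsoft's actual training pipeline.

```python
# Illustrative: reading pre-tokenized training shards through a BlobFuse2 mount as local files.
# The mount point below is a hypothetical path chosen for the example.
from pathlib import Path
from torch.utils.data import Dataset, DataLoader

MOUNT = Path("/mnt/blob/dataset")   # blobfuse2-mounted container (hypothetical path)

class ShardDataset(Dataset):
    """Each item is the raw bytes of one pre-tokenized shard file."""
    def __init__(self, root: Path):
        self.files = sorted(root.glob("*.bin"))
    def __len__(self):
        return len(self.files)
    def __getitem__(self, i):
        return self.files[i].read_bytes()

# Multiple workers prefetch shards in the background so the GPUs are not left waiting on storage.
loader = DataLoader(ShardDataset(MOUNT), batch_size=None, num_workers=8, prefetch_factor=4)
```

Because the mount behaves like a local filesystem, the same loader code works whether the shards sit on local disk or in Blob Storage.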
Automated scaling dynamically allocates resources as demand grows. Combined with advanced security, resiliency and cost-effective tiered storage, Azure's storage platform sets the pace for next-generation workloads, delivering the performance, scalability and reliability they require.
AI WAN: Connecting multiple datacenters into an even larger AI supercomputer
These new AI datacenters are part of a global network of Azure AI datacenters, interconnected via our Wide Area Network (WAN). This isn't just about one building; it's about a distributed, resilient and scalable system that operates as a single, powerful AI machine. Our AI WAN is built to grow to AI-native bandwidth scales, enabling large-scale distributed training across multiple, geographically diverse Azure regions and allowing customers to harness the power of a giant AI supercomputer.
This is a fundamental shift in how we think about AI supercomputers. Instead of being limited by the walls of a single facility, we are building a distributed system in which compute, storage and networking resources are seamlessly pooled and orchestrated across datacenter regions. This means greater resiliency, scalability and flexibility for customers.
Bringing it all together
To meet the critical needs of the largest AI challenges, we needed to redesign every layer of our cloud infrastructure stack. This isn't just about isolated breakthroughs; it's about composing multiple new approaches across silicon, servers, networks and datacenters, leading to advances where software and hardware are optimized as one purpose-built system.
Microsoft's Wisconsin datacenter will play a critical role in the future of AI, built on real technology, real investment and real community impact. As we connect this facility with other regional datacenters, and as every layer of our infrastructure is harmonized as one complete system, we are unleashing a new era of cloud-powered intelligence: secure, adaptive and ready for what's next.
To learn more about Microsoft's datacenter innovations, check out the virtual datacenter tour at datacenters.microsoft.com.
Scott Guthrie is responsible for hyperscale cloud computing solutions and services including Azure, Microsoft's cloud computing platform, generative AI solutions, data platforms, and information and cybersecurity. These platforms and services help organizations worldwide solve urgent challenges and drive long-term transformation.