Today, we’re unveiling the next Fairwater site of Azure AI datacenters in Atlanta, Georgia. This purpose-built datacenter is connected to our first Fairwater site in Wisconsin, prior generations of AI supercomputers and the broader Azure global datacenter footprint to create the world’s first planet-scale AI superfactory. By packing computing power more densely than ever before, each Fairwater site is built to efficiently meet unprecedented demand for AI compute, push the frontiers of model intelligence and empower every person and organization on the planet to achieve more.
To meet this demand, we have reinvented how we design AI datacenters and the systems we run within them. Fairwater is a departure from the traditional cloud datacenter model and uses a single flat network that can integrate hundreds of thousands of the latest NVIDIA GB200 and GB300 GPUs into a massive supercomputer. These innovations are the product of decades of experience designing datacenters and networks, as well as learnings from supporting some of the largest AI training jobs on the planet.
While the Fairwater datacenter design is well suited to training the next generation of frontier models, it is also built with fungibility in mind. Training has evolved from a single monolithic job into a range of workloads with different requirements (such as pre-training, fine-tuning, reinforcement learning and synthetic data generation). Microsoft has deployed a dedicated AI WAN backbone to integrate each Fairwater site into a broader elastic system that enables dynamic allocation of diverse AI workloads and maximizes GPU utilization of the combined system.
Below, we walk through some of the exciting technical innovations that support Fairwater, from the way we build datacenters to the networking within and across the sites.
Maximum density of compute
Modern AI infrastructure is increasingly constrained by the laws of physics. The speed of light is now a key bottleneck in our ability to tightly integrate accelerators, compute and storage with performant latency. Fairwater is designed to maximize the density of compute to reduce latency within and across racks and maximize system performance.
One of the key levers for driving density is improving cooling at scale. AI servers in the Fairwater datacenters are connected to a facility-wide cooling system designed for longevity, with a closed-loop approach that continuously reuses the liquid after the initial fill, with no evaporation. The water used in the initial fill is equivalent to what 20 homes consume in a year and is only replaced if water chemistry indicates it is needed (the system is designed to last six-plus years), making it extremely efficient and sustainable.
Liquid-based cooling also provides much higher heat transfer, enabling us to maximize rack- and row-level power (~140 kW per rack, 1,360 kW per row) to pack compute as densely as possible inside the datacenter. State-of-the-art cooling also helps us maximize utilization of this dense compute in steady-state operations, enabling large training jobs to run performantly at high scale. After cycling through a system of cold plate paths across the GPU fleet, heat is dissipated by one of the largest chiller plants on the planet.
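To get a rough feel for what those power figures imply for the cooling loop, here is a back-of-the-envelope estimate of ours (the coolant temperature rise is an assumption; only the power numbers come from the post):

```python
# Back-of-the-envelope: coolant flow needed to carry away rack heat.
# Assumptions (ours, not from the post): a water-like coolant with
# specific heat ~4186 J/(kg*K) and a 10 K rise across the cold plates.

RACK_POWER_W = 140_000      # ~140 kW per rack, from the post
ROW_POWER_W = 1_360_000     # 1,360 kW per row, from the post
CP_J_PER_KG_K = 4186        # specific heat of water (assumed coolant)
DELTA_T_K = 10              # assumed inlet-to-outlet temperature rise

# Q = m_dot * c_p * dT  =>  m_dot = Q / (c_p * dT)
flow_kg_per_s = RACK_POWER_W / (CP_J_PER_KG_K * DELTA_T_K)
print(f"Per-rack coolant flow: ~{flow_kg_per_s:.1f} kg/s (~{flow_kg_per_s:.1f} L/s)")

racks_per_row = ROW_POWER_W / RACK_POWER_W
print(f"Implied racks per row: ~{racks_per_row:.0f}")
```

Moving a few liters of coolant per second per rack is routine for liquid cooling but far beyond what air could carry at this density, which is the core of the heat-transfer argument above.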

Another way we’re driving compute density is with a two-story datacenter building design. Many AI workloads are very sensitive to latency, which means cable run lengths can meaningfully impact cluster performance. Every GPU in Fairwater is connected to every other GPU, so the two-story building approach allows racks to be placed in three dimensions to minimize cable lengths, which in turn improves latency, bandwidth, reliability and cost.
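To see why run length matters, here is a quick estimate using the speed of light in optical fiber; the cable lengths are hypothetical examples, not measurements from the facility:

```python
# Rough illustration of why cable length matters: signals in optical
# fiber travel at roughly 2/3 the speed of light in vacuum.

C_VACUUM_M_PER_S = 299_792_458
FIBER_INDEX = 1.47                      # typical refractive index of fiber
v = C_VACUUM_M_PER_S / FIBER_INDEX      # ~2.04e8 m/s in fiber

for run_m in (10, 50, 100):             # hypothetical cable run lengths
    one_way_ns = run_m / v * 1e9
    print(f"{run_m:>4} m run: ~{one_way_ns:.0f} ns one-way propagation delay")

# At roughly 5 ns per meter, shortening runs by tens of meters saves
# hundreds of nanoseconds per hop, which compounds when collective
# operations traverse many hops.
```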

High-availability, low-cost power
We’re pushing the envelope in serving this compute with cost-efficient, reliable power. The Atlanta site was selected with resilient utility power in mind and is capable of achieving 4×9 availability at 3×9 cost. By securing highly available grid power, we can also forgo traditional resiliency approaches for the GPU fleet (such as on-site generation, UPS systems and dual-corded distribution), driving cost savings for customers and faster time-to-market for Microsoft.
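For context, the standard “nines” arithmetic shows what 4×9 availability means in practice; this is generic math, not a statement of any Azure SLA:

```python
# What "N nines" of availability means in downtime terms.

HOURS_PER_YEAR = 24 * 365

for nines in (3, 4, 5):
    availability = 1 - 10 ** -nines          # e.g. 4 nines = 99.99%
    downtime_h = HOURS_PER_YEAR * (1 - availability)
    print(f"{nines} nines ({availability:.4%}): "
          f"~{downtime_h * 60:.0f} minutes of downtime per year")
```

Four nines works out to under an hour of downtime per year, versus roughly nine hours at three nines, which is the gap the site design aims to close without paying the usual resiliency premium.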
We have also worked with our industry partners to codevelop power-management solutions that mitigate the power oscillations created by large-scale jobs, a growing challenge in maintaining grid stability as AI demand scales. This includes a software-driven solution that introduces supplementary workloads during periods of reduced activity, a hardware-driven solution where the GPUs enforce their own power thresholds and an on-site energy storage solution to further mask power fluctuations without using extra power.
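A minimal sketch of the software-driven idea, assuming a simple threshold-based control loop; this is our illustration of the general approach, not Microsoft’s implementation, and all names and thresholds are hypothetical:

```python
# Hypothetical power-smoothing control step: when measured GPU power dips
# below a floor (e.g. between training steps), inject supplementary work
# so the facility's draw on the grid stays smooth.

def smooth_power_step(measured_kw: float,
                      floor_kw: float,
                      ceiling_kw: float) -> str:
    """Decide one control action for a power-smoothing loop."""
    if measured_kw < floor_kw:
        # e.g. a lull during checkpointing or gradient synchronization
        return "inject supplementary (low-priority) workload"
    if measured_kw > ceiling_kw:
        # complements the hardware-enforced GPU power caps and
        # on-site energy storage described above
        return "cap draw: GPU power thresholds / discharge storage"
    return "no action"

# Example: a dip during a checkpoint barrier triggers filler work.
print(smooth_power_step(measured_kw=90_000, floor_kw=110_000,
                        ceiling_kw=140_000))
```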
Cutting-edge accelerators and networking systems
Fairwater’s world-class datacenter design is powered by purpose-built servers, cutting-edge AI accelerators and novel networking systems. Each Fairwater datacenter runs a single, coherent cluster of interconnected NVIDIA Blackwell GPUs, with an advanced network architecture that can scale reliably beyond traditional Clos network limits with current-generation switches (hundreds of thousands of GPUs on a single flat network). This required innovation across scale-up networking, scale-out networking and networking protocols.
In terms of scale-up, each rack of AI accelerators houses up to 72 NVIDIA Blackwell GPUs, connected via NVLink for ultra-low-latency communication within the rack. Blackwell accelerators provide the highest compute density available today, with support for low-precision number formats like FP4 to increase total FLOPS and enable efficient memory use. Each rack provides 1.8 TB/s of GPU-to-GPU bandwidth, with over 14 TB of pooled memory available to every GPU.
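As a quick sanity check on the pooled-memory figure, here is our own arithmetic; the per-GPU HBM capacity below is an assumption based on published Blackwell specifications, not a number from this post:

```python
# Sanity-checking the per-rack pooled memory figure (our estimate).

GPUS_PER_RACK = 72            # from the post
HBM_PER_GPU_GB = 192          # assumed HBM per Blackwell GPU (published spec)

pooled_tb = GPUS_PER_RACK * HBM_PER_GPU_GB / 1000
print(f"~{pooled_tb:.1f} TB of GPU memory per rack")
# ~13.8 TB from HBM alone; with CPU-attached memory also reachable over
# NVLink, this is consistent with the "over 14 TB pooled" figure above.
```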

These racks then use scale-out networking to create pods and clusters that enable all GPUs to function as a single supercomputer with minimal hop counts. We achieve this with a two-tier, Ethernet-based backend network that supports massive cluster sizes with 800 Gbps GPU-to-GPU connectivity. Relying on the broad Ethernet ecosystem and SONiC (Software for Open Networking in the Cloud, our own operating system for our network switches) also helps us avoid vendor lock-in and manage cost, as we can use commodity hardware instead of proprietary solutions.
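To see how a two-tier design reaches such scale, here is the textbook leaf-spine arithmetic; the switch port counts are hypothetical, not Azure’s actual topology parameters:

```python
# Why two switching tiers can reach very large clusters: textbook
# non-blocking leaf-spine arithmetic (not Azure's actual topology).

def max_hosts_two_tier(radix: int) -> int:
    """Each leaf uses half its ports for hosts and half for uplinks;
    each spine port connects to one leaf, so leaves <= radix."""
    leaves = radix
    hosts_per_leaf = radix // 2
    return leaves * hosts_per_leaf   # = radix**2 / 2

for radix in (64, 128, 512):         # hypothetical switch radixes
    print(f"radix {radix:>3}: up to {max_hosts_two_tier(radix):,} endpoints")
# Only high-radix switches make hundreds of thousands of GPUs reachable
# in two tiers, which is why the post emphasizes scaling beyond
# traditional Clos limits with current-generation hardware.
```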
Innovations across packet trimming, packet spray and high-frequency telemetry are core components of our optimized AI network. We are also working to enable deeper control and optimization of network routes. Together, these technologies deliver advanced congestion control, rapid detection and retransmission and agile load balancing, ensuring ultra-reliable, low-latency performance for modern AI workloads.
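For intuition on packet spray, here is a toy contrast with classic per-flow ECMP hashing; it illustrates the general technique, not Microsoft’s implementation:

```python
# Toy illustration: per-flow hashing vs. per-packet spray across
# equal-cost paths (the general technique, not Azure's implementation).
import itertools

PATHS = ["path-A", "path-B", "path-C", "path-D"]  # equal-cost paths

def per_flow_path(flow_id: int) -> str:
    # Classic ECMP: one path per flow; a single elephant flow can
    # saturate its path while the others sit idle.
    return PATHS[hash(flow_id) % len(PATHS)]

_spray = itertools.cycle(PATHS)
def per_packet_path() -> str:
    # Packet spray: rotate every packet across all paths, spreading one
    # large flow evenly (requires a reorder-tolerant transport).
    return next(_spray)

print([per_flow_path(42) for _ in range(4)])   # same path four times
print([per_packet_path() for _ in range(4)])   # all four paths used
```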
Planet scale
Even with these innovations, compute demands for large training jobs (now measured in trillions of parameters) are quickly outpacing the power and space constraints of a single facility. To serve these needs, we have built a dedicated AI WAN optical network to extend Fairwater’s scale-up and scale-out networks. Leveraging our scale and decades of hyperscale expertise, we delivered over 120,000 new fiber miles across the US last year, expanding AI network reach and reliability nationwide.
With this high-performance, high-resiliency backbone, we can directly connect different generations of supercomputers into an AI superfactory that exceeds the capabilities of a single site across geographically diverse regions. This empowers AI developers to tap our broader network of Azure AI datacenters, segmenting traffic based on their needs across scale-up and scale-out networks within a site, as well as across sites via the continent-spanning AI WAN.
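For a rough sense of the latency gap between intra-site and cross-site communication, here is a simple estimate; the route lengths are our own illustrative choices, as the post gives no site-to-site figures:

```python
# Rough cross-site latency estimate from fiber route length alone
# (ignores switching and queuing delays; distances are illustrative).

C_FIBER_KM_PER_MS = 204    # speed of light in fiber: ~204 km per ms

for route_km in (500, 1_500, 4_000):   # hypothetical fiber route lengths
    rtt_ms = 2 * route_km / C_FIBER_KM_PER_MS
    print(f"{route_km:>5} km route: ~{rtt_ms:.1f} ms round trip")

# Millisecond-scale RTTs are orders of magnitude above intra-site hops,
# which is why traffic is segmented across scale-up, scale-out and WAN
# tiers rather than treating all links alike.
```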
This is a meaningful departure from the past, where all traffic had to travel the scale-out network regardless of the requirements of the workload. Not only does this provide customers with fit-for-purpose networking at a more granular level, it also helps create the fungibility needed to maximize the flexibility and utilization of our infrastructure.
Putting it all together
The new Fairwater site in Atlanta represents the next leap in Azure AI infrastructure and reflects our experience running the largest AI training jobs on the planet. It combines breakthrough innovations in compute density, sustainability and networking systems to efficiently serve the massive demand for computational power we are seeing. It also integrates deeply with other AI datacenters and the broader Azure platform to form the world’s first AI superfactory. Together, these innovations provide a flexible, fit-for-purpose infrastructure that can serve the full spectrum of modern AI workloads and empower every person and organization on the planet to achieve more. For our customers, this means easier integration of AI into every workflow and the ability to create innovative AI solutions that were previously unattainable.
Find out more about how Microsoft Azure can help you integrate AI to streamline and strengthen development lifecycles here.
Scott Guthrie is responsible for hyperscale cloud computing solutions and services including Azure, Microsoft’s cloud computing platform, generative AI solutions, data platforms, and information and cybersecurity. These platforms and services help organizations worldwide solve urgent challenges and drive long-term transformation.
Editor’s note: An update was made to more clearly explain how we optimize our network.

