This week we announced a wave of purpose-built datacenters and infrastructure investments we're making around the globe to support the worldwide adoption of cutting-edge AI workloads and cloud services.
Today in Wisconsin we launched Fairwater, our newest US AI datacenter, the largest and most sophisticated AI factory we've built yet. In addition to our Fairwater datacenter in Wisconsin, we also have multiple identical Fairwater datacenters under construction in other locations across the US.
In Narvik, Norway, Microsoft announced plans with nScale and Aker JV to develop a new hyperscale AI datacenter.
In Loughton, UK, we announced a partnership with nScale to build the UK's largest supercomputer to support services in the UK.
These AI datacenters are significant capital projects, representing tens of billions of dollars of investment and hundreds of thousands of cutting-edge AI chips, and will seamlessly connect with our global Microsoft Cloud of more than 400 datacenters in 70 regions around the world. Through innovation that allows us to link these AI datacenters into a distributed network, we multiply their efficiency and compute exponentially to further democratize access to AI services globally.
So what’s an AI datacenter?
The AI datacenter: the new factory of the AI era

An AI datacenter is a unique, purpose-built facility designed specifically for AI training as well as running large-scale artificial intelligence models and applications. Microsoft's AI datacenters power OpenAI, Microsoft AI, our Copilot capabilities and many more leading AI workloads.
The new Fairwater AI datacenter in Wisconsin stands as a remarkable feat of engineering, covering 315 acres and housing three massive buildings with a combined 1.2 million square feet under roof. Constructing this facility required 46.6 miles of deep foundation piles, 26.5 million pounds of structural steel, 120 miles of medium-voltage underground cable and 72.6 miles of mechanical piping.
Unlike typical cloud datacenters, which are optimized to run many smaller, independent workloads such as hosting websites, email or business applications, this datacenter is built to work as one massive AI supercomputer using a single flat network interconnecting hundreds of thousands of the latest NVIDIA GPUs. In fact, it will deliver 10X the performance of today's fastest supercomputer, enabling AI training and inference workloads at a level never before seen.
The role of our AI datacenters: powering frontier AI
Effective AI models rely on thousands of computers working together, powered by GPUs or specialized AI accelerators, to process massive concurrent mathematical computations. They are interconnected with extremely fast networks so they can share results instantly, and all of this is supported by enormous storage systems that hold the data (like text, images or video) broken down into tokens, the small units of information the AI learns from. The goal is to keep these chips busy all the time, because if the data or the network can't keep up, everything slows down.
The AI training itself is a cycle: the AI processes tokens in sequence, makes predictions about the next one, checks them against the right answers and adjusts itself. This repeats trillions of times until the system gets better at whatever it is being trained to do. Think of it like a professional football team's practice. Each GPU is a player running a drill, the tokens are the plays being executed step by step, and the network is the coaching staff, shouting instructions and keeping everyone in sync. The team repeats plays over and over, correcting mistakes until they can execute them perfectly. By the end, the AI model, like the team, has mastered its strategy and is ready to perform under real game conditions.
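To make that cycle concrete, here is a minimal sketch of a next-token training step in PyTorch. The toy model, vocabulary size and synthetic token batch are illustrative assumptions for the example, not the actual frontier training stack described in this post.

```python
import torch
import torch.nn as nn

# Illustrative toy model: embeddings plus a linear head that predicts the next token.
vocab_size, d_model = 1000, 64
model = nn.Sequential(nn.Embedding(vocab_size, d_model), nn.Linear(d_model, vocab_size))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(tokens: torch.Tensor) -> float:
    """One cycle: predict the next token, compare against the right answer, adjust."""
    inputs, targets = tokens[:, :-1], tokens[:, 1:]       # shift sequence by one position
    logits = model(inputs)                                # predictions for every position
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()                                       # measure the mistakes
    optimizer.step()                                      # adjust the model's parameters
    return loss.item()

# Synthetic batch of token IDs standing in for real training data.
batch = torch.randint(0, vocab_size, (8, 129))
print(train_step(batch))
```

At frontier scale this same loop runs in parallel across hundreds of thousands of GPUs, which is why the networking and storage described below matter so much.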
AI infrastructure at frontier scale
Purpose-built infrastructure is essential to being able to power AI efficiently. To compute the token math at the trillion-parameter scale of leading AI models, the core of the AI datacenter is made up of dedicated AI accelerators (such as GPUs) mounted on server boards alongside CPUs, memory and storage. A single server hosts multiple GPU accelerators, linked for high-bandwidth communication. These servers are then installed into a rack, with top-of-rack (ToR) switches providing low-latency networking between them. Every rack in the datacenter is interconnected, creating a tightly coupled cluster. From the outside, this architecture looks like many independent servers, but at scale it functions as a single supercomputer where hundreds of thousands of accelerators can train a single model in parallel.
This datacenter runs a single, massive cluster of interconnected NVIDIA GB200 servers with millions of compute cores and exabytes of storage, all engineered for the most demanding AI workloads. Azure was the first cloud provider to bring online the NVIDIA GB200 server, rack and full datacenter clusters. Each rack packs 72 NVIDIA Blackwell GPUs, tied together in a single NVLink domain that delivers 1.8 terabytes per second of GPU-to-GPU bandwidth and gives every GPU access to 14 terabytes of pooled memory. Rather than behaving like dozens of separate chips, the rack operates as a single, massive accelerator, capable of processing an astonishing 865,000 tokens per second, the highest throughput of any cloud platform available today. The Norway and UK AI datacenters will use similar clusters, and will take advantage of NVIDIA's next AI chip design (GB300), which offers even more pooled memory per rack.
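To put those rack-level figures in perspective, the short sketch below works out the per-GPU share of pooled memory and token throughput implied by the numbers quoted above. It is simple arithmetic on the stated specifications, not a measurement.

```python
# Back-of-the-envelope math on the GB200 rack figures quoted above.
gpus_per_rack = 72
pooled_memory_tb = 14          # fast memory pooled across the NVLink domain
tokens_per_second = 865_000    # rack-level throughput quoted above

memory_per_gpu_gb = pooled_memory_tb * 1000 / gpus_per_rack
tokens_per_gpu = tokens_per_second / gpus_per_rack

print(f"~{memory_per_gpu_gb:.0f} GB of pooled memory visible to each GPU")  # ~194 GB
print(f"~{tokens_per_gpu:,.0f} tokens/sec per GPU")                         # ~12,014
```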
The challenge in building at supercomputing scale, particularly as AI training requirements continue to demand breakthrough scales of computing, is getting the networking topology exactly right. To ensure low-latency communication across multiple layers in a cloud environment, Microsoft needed to extend performance beyond a single rack. For the latest NVIDIA GB200 and GB300 deployments globally, at the rack level these GPUs communicate over NVLink and NVSwitch at terabytes per second, collapsing memory and bandwidth barriers. To connect multiple racks into a pod, Azure uses both InfiniBand and Ethernet fabrics that deliver 800 Gbps in a full fat-tree, non-blocking architecture, ensuring that every GPU can talk to every other GPU at full line rate without congestion. And across the datacenter, multiple pods of racks are interconnected to reduce hop counts and enable tens of thousands of GPUs to function as one global-scale supercomputer.
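As a rough illustration of why a non-blocking fat tree matters, the sketch below computes how many GPUs a two-tier leaf/spine fabric can connect without oversubscription. The switch port count and per-link speed are illustrative assumptions for the example, not the actual Azure fabric configuration.

```python
def nonblocking_fat_tree_capacity(ports_per_switch: int, link_gbps: int) -> dict:
    """Two-tier fat tree: each leaf splits its ports evenly between GPUs (down)
    and spine uplinks (up), so downlink bandwidth never exceeds uplink bandwidth."""
    down = up = ports_per_switch // 2            # non-blocking: equal down/up ports
    leaves = ports_per_switch                    # each spine port reaches one leaf
    gpus = leaves * down
    bisection_tbps = gpus * link_gbps / 2 / 1000
    return {"gpus": gpus, "bisection_tbps": bisection_tbps}

# Illustrative numbers only: 64-port switches with 800 Gbps links.
print(nonblocking_fat_tree_capacity(64, 800))    # {'gpus': 2048, 'bisection_tbps': 819.2}
```

Scaling beyond what a single two-tier fabric can reach is exactly why pods of racks are then interconnected across the datacenter, as described above.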
When laid out in a traditional datacenter hallway, physical distance between racks introduces latency into the system. To address this, the racks in the Wisconsin AI datacenter are laid out in a two-story configuration, so in addition to being networked to adjacent racks, they are networked to additional racks above or below them.
This layered approach sets Azure apart. Microsoft Azure was not just the first cloud to bring GB200 online at rack and datacenter scale; we are doing it at massive scale with customers today. By co-engineering the full stack with the best from our industry partners, coupled with our own purpose-built systems, Microsoft has built the most powerful, tightly coupled AI supercomputer in the world, purpose-built for frontier models.

Addressing the environmental impact: closed-loop liquid cooling at facility scale
Traditional air cooling can't handle the density of modern AI hardware. Our datacenters use advanced liquid cooling systems: integrated pipes circulate cold liquid directly into servers, extracting heat efficiently. The closed-loop recirculation ensures zero water waste, with water needed only to fill the system once, after which it is continually reused.
By designing purpose-built AI datacenters, we were able to build liquid cooling infrastructure directly into the facility, giving us more rack density in the datacenter. Fairwater is supported by the second largest water-cooled chiller plant on the planet and will continuously circulate water in its closed-loop cooling system. The heated water is then piped out to the cooling "fins" on each side of the datacenter, where 172 20-foot fans chill and recirculate the water back to the datacenter. This system keeps the AI datacenter running efficiently, even at peak loads.

Over 90% of our datacenter capacity uses this system, requiring water only once during construction and continually reusing it with no evaporation losses. The remaining 10% of traditional servers use outside air for cooling, switching to water only during the hottest days, a design that dramatically reduces water usage compared with traditional datacenters.
We are also using liquid cooling to support AI workloads in many of our existing datacenters; this liquid cooling is done with Heat Exchanger Units (HXUs) that also operate with zero operational water use.
Storage and compute: built for AI speed
Modern datacenters can contain exabytes of storage and millions of CPU compute cores. To support the AI infrastructure cluster, a completely separate datacenter infrastructure is needed to store and process the data used and generated by the AI cluster. To give you a sense of the scale: the Wisconsin AI datacenter's storage systems are five football fields in length!

We reengineered Azure Storage for the most demanding AI workloads across these massive datacenter deployments, for true supercomputing scale. Each Azure Blob Storage account can sustain more than 2 million read/write transactions per second, and with millions of accounts available, we can elastically scale to meet virtually any data requirement.
Behind this capability is a fundamentally rearchitected storage foundation that aggregates capacity and bandwidth across thousands of storage nodes and hundreds of thousands of drives. This enables scaling to exabytes of storage, eliminating the need for manual sharding and simplifying operations for even the largest AI and analytics workloads.
Key innovations such as BlobFuse2 deliver high-throughput, low-latency access for GPU node-local training, ensuring that compute resources are never idle and that massive AI training datasets are always available when needed. Multiprotocol support enables seamless integration with diverse data pipelines, while deep integration with analytics engines and AI tools accelerates data preparation and deployment.
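As a hedged illustration of what node-local access through a filesystem mount such as BlobFuse2 looks like to a training job, here is a minimal PyTorch dataset that reads token shards from a blob container mounted on the local filesystem. The mount path, file format and shard naming are assumptions for the example, not the actual pipeline.

```python
import glob
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class MountedShardDataset(Dataset):
    """Reads fixed-length token shards from a blob container exposed as a local
    directory (for example, via a BlobFuse2 mount), so the training process sees
    them as ordinary files."""

    def __init__(self, mount_dir: str, seq_len: int = 2048):
        # Hypothetical layout: one .npy file of token IDs per shard.
        self.shards = sorted(glob.glob(f"{mount_dir}/*.npy"))
        self.seq_len = seq_len

    def __len__(self) -> int:
        return len(self.shards)

    def __getitem__(self, idx: int) -> torch.Tensor:
        tokens = np.load(self.shards[idx], mmap_mode="r")[: self.seq_len]
        return torch.from_numpy(np.asarray(tokens, dtype=np.int64))

# Illustrative usage: stream batches from the mounted container path to the trainer.
loader = DataLoader(MountedShardDataset("/mnt/training-data"), batch_size=8, num_workers=4)
```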
Automatic scaling dynamically allocates resources as demand grows. Combined with advanced security, resiliency and cost-effective tiered storage, Azure's storage platform sets the pace for next-generation workloads, delivering the performance, scalability and reliability required.
AI WAN: connecting multiple datacenters for an even larger AI supercomputer
These new AI datacenters are part of a global network of Azure AI datacenters, interconnected via our Wide Area Network (WAN). This isn't just about one building; it's about a distributed, resilient and scalable system that operates as a single, powerful AI machine. Our AI WAN is built with growth capacity at AI-native bandwidth scales to enable large-scale distributed training across multiple, geographically diverse Azure regions, allowing customers to harness the power of a giant AI supercomputer.
This is a fundamental shift in how we think about AI supercomputers. Instead of being limited by the walls of a single facility, we are building a distributed system where compute, storage and networking resources are seamlessly pooled and orchestrated across datacenter regions. This means greater resiliency, scalability and flexibility for customers.
Bringing it all together
To meet the critical needs of the biggest AI challenges, we needed to redesign every layer of our cloud infrastructure stack. This isn't just about isolated breakthroughs, but about composing multiple new approaches across silicon, servers, networks and datacenters, leading to advances where software and hardware are optimized as one purpose-built system.
Microsoft's Wisconsin datacenter will play a critical role in the future of AI, built on real technology, real investment and real community impact. As we connect this facility with other regional datacenters, and as every layer of our infrastructure is harmonized as a whole system, we are unleashing a new era of cloud-powered intelligence: secure, adaptive and ready for what's next.
To learn more about Microsoft's datacenter innovations, check out the virtual datacenter tour at datacenters.microsoft.com.
Scott Guthrie is responsible for hyperscale cloud computing solutions and services including Azure, Microsoft's cloud computing platform, generative AI solutions, data platforms, and information and cybersecurity. These platforms and services help organizations worldwide solve urgent challenges and drive long-term transformation.