New Amazon EC2 P6e-GB200 UltraServers accelerated by NVIDIA Grace Blackwell GPUs for the highest AI performance



Today, we’re announcing the general availability of Amazon Elastic Compute Cloud (Amazon EC2) P6e-GB200 UltraServers, accelerated by NVIDIA GB200 NVL72, to offer the highest GPU performance for AI training and inference. Amazon EC2 UltraServers connect multiple EC2 instances using a dedicated, high-bandwidth, low-latency accelerator interconnect across those instances.

The NVIDIA Grace Blackwell Superchip connects two high-performance NVIDIA Blackwell Tensor Core GPUs and an NVIDIA Grace CPU based on the Arm architecture using the NVIDIA NVLink-C2C interconnect. Each Grace Blackwell Superchip delivers 10 petaflops of FP8 compute (without sparsity) and up to 372 GB of HBM3e memory. With the superchip architecture, GPU and CPU are colocated within one compute module, significantly increasing bandwidth between GPU and CPU compared to current-generation EC2 P5en instances.

With EC2 P6e-GB200 UltraServers, you can access up to 72 NVIDIA Blackwell GPUs within one NVLink domain to use 360 petaflops of FP8 compute (without sparsity) and 13.4 TB of total high-bandwidth memory (HBM3e). Powered by the AWS Nitro System, P6e-GB200 UltraServers are deployed in EC2 UltraClusters to securely and reliably scale to tens of thousands of GPUs.
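The UltraServer-level totals follow directly from the per-superchip figures above. A quick sanity check in Python (purely illustrative arithmetic, not an AWS API):

```python
# Per Grace Blackwell Superchip (2 Blackwell GPUs + 1 Grace CPU):
FP8_PETAFLOPS_PER_SUPERCHIP = 10   # FP8 compute, without sparsity
HBM3E_GB_PER_SUPERCHIP = 372       # up to 372 GB of HBM3e

# A u-p6e-gb200x72 UltraServer exposes 72 GPUs, i.e. 36 superchips,
# in a single NVLink domain.
superchips = 72 // 2

fp8_petaflops = superchips * FP8_PETAFLOPS_PER_SUPERCHIP
hbm3e_tb = superchips * HBM3E_GB_PER_SUPERCHIP / 1000

print(fp8_petaflops)        # 360 petaflops of FP8 compute
print(round(hbm3e_tb, 1))   # 13.4 TB of HBM3e
```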

EC2 P6e-GB200 UltraServers deliver up to 28.8 Tbps of total Elastic Fabric Adapter (EFAv4) networking. EFA is coupled with NVIDIA GPUDirect RDMA to enable low-latency GPU-to-GPU communication between servers with operating system bypass.

EC2 P6e-GB200 UltraServers specifications
EC2 P6e-GB200 UltraServers are available in sizes ranging from 36 to 72 GPUs under NVLink. Here are the specifications for EC2 P6e-GB200 UltraServers:

| UltraServer type | GPUs | GPU memory (GB) | vCPUs | Instance memory (GiB) | Instance storage (TB) | Aggregate EFA network bandwidth (Gbps) | EBS bandwidth (Gbps) |
|---|---|---|---|---|---|---|---|
| u-p6e-gb200x36 | 36 | 6660 | 1296 | 8640 | 202.5 | 14400 | 540 |
| u-p6e-gb200x72 | 72 | 13320 | 2592 | 17280 | 405 | 28800 | 1080 |

P6e-GB200 UltraServers are ideal for the most compute- and memory-intensive AI workloads, such as training and inference of frontier models, including mixture of experts models and reasoning models, at the trillion-parameter scale.

You can build agentic and generative AI applications, including question answering, code generation, video and image generation, speech recognition, and more.

P6e-GB200 UltraServers in action
You can use EC2 P6e-GB200 UltraServers in the Dallas Local Zone through EC2 Capacity Blocks for ML. The Dallas Local Zone (us-east-1-dfw-2a) is an extension of the US East (N. Virginia) Region.

To reserve your EC2 Capacity Blocks, choose Capacity Reservations in the Amazon EC2 console. You can select Purchase Capacity Blocks for ML, then choose your total capacity and specify how long you need the EC2 Capacity Block for u-p6e-gb200x36 or u-p6e-gb200x72 UltraServers.
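The same flow can be scripted with the AWS CLI. A minimal sketch of the generic Capacity Blocks for ML commands, with placeholder values throughout; UltraServer-specific options may differ from what is shown, so check `aws ec2 describe-capacity-block-offerings help` in your CLI version before relying on these parameters:

```shell
# Browse available Capacity Block offerings for a given duration and
# date window (instance type, count, and dates are placeholders).
aws ec2 describe-capacity-block-offerings \
  --instance-type <instance-type> \
  --instance-count 1 \
  --capacity-duration-hours 48 \
  --start-date-range 2025-08-01T00:00:00Z \
  --end-date-range 2025-08-15T00:00:00Z

# Purchase a specific offering by the ID returned above
# (the offering ID below is a placeholder).
aws ec2 purchase-capacity-block \
  --capacity-block-offering-id cbo-0123456789abcdef0 \
  --instance-platform Linux/UNIX
```

The purchase returns a capacity reservation that your instances later target at launch.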

Once a Capacity Block is successfully scheduled, it is charged up front and its price doesn’t change after purchase. The payment will be billed to your account within 12 hours after you purchase the EC2 Capacity Blocks. To learn more, visit Capacity Blocks for ML in the Amazon EC2 User Guide.

To run instances within your purchased Capacity Block, you can use the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS SDKs. On the software side, you can start with the AWS Deep Learning AMIs. These images are preconfigured with the frameworks and tools you probably already know and use: PyTorch, JAX, and many more.
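With the AWS CLI, launching into a purchased Capacity Block means targeting its capacity reservation at launch. A hedged sketch (all IDs are placeholders, and the AMI, instance type, and subnet must match your reservation):

```shell
# Launch an instance into a purchased Capacity Block by targeting the
# capacity reservation ID from the purchase (placeholder values shown).
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type <instance-type> \
  --count 1 \
  --subnet-id subnet-0123456789abcdef0 \
  --instance-market-options MarketType=capacity-block \
  --capacity-reservation-specification \
      "CapacityReservationTarget={CapacityReservationId=cr-0123456789abcdef0}"
```

Note that Capacity Block launches require the `capacity-block` market type in addition to the reservation target.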

You can also integrate EC2 P6e-GB200 UltraServers seamlessly with various AWS managed services. For example:

  • Amazon SageMaker HyperPod provides managed, resilient infrastructure that automatically handles the provisioning and management of P6e-GB200 UltraServers, replacing faulty instances with preconfigured spare capacity within the same NVLink domain to maintain performance.
  • Amazon Elastic Kubernetes Service (Amazon EKS) allows one managed node group to span multiple P6e-GB200 UltraServers as nodes, automating their provisioning and lifecycle management within Kubernetes clusters. You can use EKS topology-aware routing for P6e-GB200 UltraServers, enabling optimal placement of tightly coupled components of distributed workloads within a single UltraServer’s NVLink-connected instances.
  • Amazon FSx for Lustre file systems provide data access for P6e-GB200 UltraServers at the hundreds of GB/s of throughput and millions of input/output operations per second (IOPS) required for large-scale HPC and AI workloads. For fast access to large datasets, you can also use up to 405 TB of local NVMe SSD storage, or virtually unlimited cost-effective storage with Amazon Simple Storage Service (Amazon S3).

Now available
Amazon EC2 P6e-GB200 UltraServers are available today in the Dallas Local Zone (us-east-1-dfw-2a) through EC2 Capacity Blocks for ML. For more information, visit the Amazon EC2 pricing page.

Give Amazon EC2 P6e-GB200 UltraServers a try in the Amazon EC2 console. To learn more, visit the Amazon EC2 P6e instances page and send feedback to AWS re:Post for EC2 or through your usual AWS Support contacts.

Channy


