Amazon SageMaker HyperPod introduces Amazon EKS support



Today, we’re happy to announce Amazon Elastic Kubernetes Service (EKS) support in Amazon SageMaker HyperPod, purpose-built infrastructure engineered with resilience at its core for foundation model (FM) development. This new capability enables customers to orchestrate HyperPod clusters using EKS, combining the power of Kubernetes with Amazon SageMaker HyperPod’s resilient environment designed for training large models. Amazon SageMaker HyperPod helps efficiently scale across more than a thousand artificial intelligence (AI) accelerators, reducing training time by up to 40%.

Amazon SageMaker HyperPod now enables customers to manage their clusters using a Kubernetes-based interface. This integration allows seamless switching between Slurm and Amazon EKS to optimize various workloads, including training, fine-tuning, experimentation, and inference. The CloudWatch Observability EKS add-on provides comprehensive monitoring capabilities, offering insights into CPU, network, disk, and other low-level node metrics on a unified dashboard. This enhanced observability extends to resource utilization across the entire cluster, node-level metrics, pod-level performance, and container-specific utilization data, facilitating efficient troubleshooting and optimization.
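As a quick sketch, the add-on can be enabled on an existing EKS cluster with a single AWS CLI call. The cluster name below is a placeholder, and the add-on additionally needs the CloudWatch agent IAM permissions described in the EKS documentation:

```shell
# Enable the CloudWatch Observability add-on on an existing EKS cluster.
# "my-eks-cluster" is a placeholder; substitute your own cluster name.
aws eks create-addon \
    --cluster-name my-eks-cluster \
    --addon-name amazon-cloudwatch-observability

# Confirm the add-on reached ACTIVE status.
aws eks describe-addon \
    --cluster-name my-eks-cluster \
    --addon-name amazon-cloudwatch-observability \
    --query 'addon.status'
```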

Launched at re:Invent 2023, Amazon SageMaker HyperPod has become a go-to solution for AI startups and enterprises looking to efficiently train and deploy large-scale models. It is compatible with SageMaker’s distributed training libraries, which offer Model Parallel and Data Parallel software optimizations that help reduce training time by up to 20%. SageMaker HyperPod automatically detects and repairs or replaces faulty instances, enabling data scientists to train models uninterrupted for weeks or months. This allows data scientists to focus on model development, rather than managing infrastructure.

The integration of Amazon EKS with Amazon SageMaker HyperPod takes advantage of Kubernetes, which has become popular for machine learning (ML) workloads because of its scalability and rich open-source tooling. Organizations often standardize on Kubernetes for building applications, including those required for generative AI use cases, because it allows reuse of capabilities across environments while meeting compliance and governance standards. Today’s announcement enables customers to scale and optimize resource utilization across more than a thousand AI accelerators. This flexibility enhances the developer experience, containerized app management, and dynamic scaling for FM training and inference workloads.

Amazon EKS support in Amazon SageMaker HyperPod strengthens resilience through deep health checks, automated node recovery, and job auto-resume capabilities, ensuring uninterrupted training for large-scale or long-running jobs. Job management can be streamlined with the optional HyperPod CLI, designed for Kubernetes environments, though customers can also use their own CLI tools. Integration with Amazon CloudWatch Container Insights provides advanced observability, offering deeper insights into cluster performance, health, and utilization. Additionally, data scientists can use tools like Kubeflow for automated ML workflows. The integration also includes Amazon SageMaker managed MLflow, providing a robust solution for experiment tracking and model management.

At a high level, an Amazon SageMaker HyperPod cluster is created by the cloud admin using the HyperPod cluster API and is fully managed by the HyperPod service, removing the undifferentiated heavy lifting involved in building and optimizing ML infrastructure. Amazon EKS is used to orchestrate these HyperPod nodes, similar to how Slurm orchestrates HyperPod nodes, providing customers with a familiar Kubernetes-based administrator experience.

Let’s explore how to get started with Amazon EKS support in Amazon SageMaker HyperPod.
I begin by preparing the environment, checking the prerequisites, and creating an Amazon EKS cluster with a single AWS CloudFormation stack following the Amazon SageMaker HyperPod EKS workshop, configured with VPC and storage resources.
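Launching the workshop stack comes down to one CloudFormation call plus a wait. The stack name and template file below are placeholders; use the template distributed with the HyperPod EKS workshop:

```shell
# Launch the workshop's CloudFormation stack (stack name and template
# file are placeholders; use the template provided by the workshop).
aws cloudformation create-stack \
    --stack-name hyperpod-eks-full-stack \
    --template-body file://hyperpod-eks-full-stack.yaml \
    --capabilities CAPABILITY_NAMED_IAM

# Block until the stack (EKS cluster, VPC, and storage) finishes creating.
aws cloudformation wait stack-create-complete \
    --stack-name hyperpod-eks-full-stack
```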

To create and manage Amazon SageMaker HyperPod clusters, I can use either the AWS Management Console or the AWS Command Line Interface (AWS CLI). Using the AWS CLI, I specify my cluster configuration in a JSON file. I choose the Amazon EKS cluster created previously as the orchestrator of the SageMaker HyperPod cluster. Then, I create the cluster worker nodes, which I call “worker-group-1”, with a private subnet, NodeRecovery set to Automated to enable automatic node recovery, and for OnStartDeepHealthChecks I add InstanceStress and InstanceConnectivity to enable deep health checks.

cat > eli-cluster-config.json << EOL
{
    "ClusterName": "example-hp-cluster",
    "Orchestrator": {
        "Eks": {
            "ClusterArn": "${EKS_CLUSTER_ARN}"
        }
    },
    "InstanceGroups": [
        {
            "InstanceGroupName": "worker-group-1",
            "InstanceType": "ml.p5.48xlarge",
            "InstanceCount": 32,
            "LifeCycleConfig": {
                "SourceS3Uri": "s3://${BUCKET_NAME}",
                "OnCreate": "on_create.sh"
            },
            "ExecutionRole": "${EXECUTION_ROLE}",
            "ThreadsPerCore": 1,
            "OnStartDeepHealthChecks": [
                "InstanceStress",
                "InstanceConnectivity"
            ],
        },
  ....
    ],
    "VpcConfig": {
        "SecurityGroupIds": [
            "$SECURITY_GROUP"
        ],
        "Subnets": [
            "$SUBNET_ID"
        ]
    },
    "ResilienceConfig": {
        "NodeRecovery": "Automated"
    }
}
EOL

You can add InstanceStorageConfigs to provision and mount additional Amazon EBS volumes on HyperPod nodes.
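For example, the instance group above could be extended with a fragment like the following to attach an extra EBS volume to each node (the 500 GB size is illustrative):

```json
"InstanceStorageConfigs": [
    {
        "EbsVolumeConfig": {
            "VolumeSizeInGB": 500
        }
    }
]
```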

To create the cluster using the SageMaker HyperPod APIs, I run the following AWS CLI command:

aws sagemaker create-cluster \
    --cli-input-json file://eli-cluster-config.json

The command returns the ARN of the new HyperPod cluster.

{
    "ClusterArn": "arn:aws:sagemaker:us-east-2:ACCOUNT-ID:cluster/wccy5z4n4m49"
}

I then verify the HyperPod cluster status in the SageMaker console, waiting until the status changes to InService.

Alternatively, you can check the cluster status using the AWS CLI by running the describe-cluster command:

aws sagemaker describe-cluster --cluster-name my-hyperpod-cluster
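To avoid polling the console by hand, the status field can be extracted directly with a JMESPath query, a small sketch:

```shell
# Print only the cluster status (e.g., Creating, InService).
aws sagemaker describe-cluster \
    --cluster-name my-hyperpod-cluster \
    --query 'ClusterStatus' \
    --output text
```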

Once the cluster is ready, I can access the SageMaker HyperPod cluster nodes. For most operations, I can use kubectl commands to manage resources and jobs from my development environment, using the full power of Kubernetes orchestration while benefiting from SageMaker HyperPod’s managed infrastructure. For advanced troubleshooting or direct node access, I use AWS Systems Manager (SSM) to log into individual nodes, following the instructions on the Access your SageMaker HyperPod cluster nodes page.
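Once kubectl is pointed at the orchestrating EKS cluster, the HyperPod workers appear as ordinary Kubernetes nodes. A minimal sketch; the cluster name is a placeholder, and the HyperPod health-status label key shown is an assumption worth verifying against the documentation:

```shell
# Point kubectl at the orchestrating EKS cluster ("my-eks-cluster" is a placeholder).
aws eks update-kubeconfig --name my-eks-cluster

# List the HyperPod worker nodes and their instance types.
kubectl get nodes -L node.kubernetes.io/instance-type

# Show each node's HyperPod health-status label
# (label key assumed from the HyperPod EKS documentation).
kubectl get nodes -L sagemaker.amazonaws.com/node-health-status
```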

To run jobs on the SageMaker HyperPod cluster orchestrated by EKS, I follow the steps outlined on the Run jobs on SageMaker HyperPod cluster through Amazon EKS page. You can use the HyperPod CLI and the native kubectl command to find available HyperPod clusters and submit training jobs (Pods). For managing ML experiments and training runs, you can use the Kubeflow Training Operator, Kueue, and Amazon SageMaker managed MLflow.
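As a minimal sketch, a single-pod training job can be submitted with plain kubectl; the image, command, and resource request below are placeholders, and real distributed runs would typically go through the Kubeflow Training Operator instead:

```shell
# Submit a simple single-pod training job to the HyperPod-managed nodes.
# The container image and command are placeholders for your own training code.
cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: demo-training-job
spec:
  template:
    spec:
      containers:
      - name: train
        image: ACCOUNT-ID.dkr.ecr.us-east-2.amazonaws.com/my-training-image:latest
        command: ["python", "train.py"]
        resources:
          limits:
            nvidia.com/gpu: 8   # one ml.p5.48xlarge node exposes 8 GPUs
      restartPolicy: Never
EOF

# Follow the job's progress.
kubectl get pods -w
```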

Finally, in the SageMaker console, I can view the status and Kubernetes version of recently added EKS clusters, providing a comprehensive overview of my SageMaker HyperPod environment.

And I can monitor cluster performance and health insights using Amazon CloudWatch Container Insights.

Things to know
Here are some key things you should know about Amazon EKS support in Amazon SageMaker HyperPod:

Resilient environment – This integration provides a more resilient training environment with deep health checks, automated node recovery, and job auto-resume. SageMaker HyperPod automatically detects, diagnoses, and recovers from faults, allowing you to continually train foundation models for weeks or months without disruption. This can reduce training time by up to 40%.

Enhanced GPU observability – Amazon CloudWatch Container Insights provides detailed metrics and logs for your containerized applications and microservices. This enables comprehensive monitoring of cluster performance and health.

Scientist-friendly tooling – This launch includes a custom HyperPod CLI for job management, Kubeflow Training Operators for distributed training, Kueue for scheduling, and integration with SageMaker managed MLflow for experiment tracking. It also works with SageMaker’s distributed training libraries, which provide Model Parallel and Data Parallel optimizations to significantly reduce training time. These libraries, combined with auto-resumption of jobs, enable efficient and uninterrupted training of large models.

Flexible resource utilization – This integration enhances the developer experience and scalability for FM workloads. Data scientists can efficiently share compute capacity across training and inference tasks. You can use your existing Amazon EKS clusters or create and attach new ones to HyperPod compute, and bring your own tools for job submission, queuing, and monitoring.

To get started with Amazon SageMaker HyperPod on Amazon EKS, you can explore resources such as the SageMaker HyperPod EKS Workshop, the aws-do-hyperpod project, and the awsome-distributed-training project. This launch is generally available in the AWS Regions where Amazon SageMaker HyperPod is available, except Europe (London). For pricing information, visit the Amazon SageMaker Pricing page.

This blog post was a collaborative effort. I would like to thank Manoj Ravi, Adhesh Garg, Tomonori Shimomura, Alex Iankoulski, Anoop Saha, and the entire team for their significant contributions in compiling and refining the information presented here. Their collective expertise was crucial in creating this comprehensive article.

– Eli.
