Amazon EMR on Amazon EKS lets you submit Apache Spark jobs on demand on Amazon Elastic Kubernetes Service (Amazon EKS) without provisioning clusters. With EMR on EKS, you can consolidate analytical workloads with your other Kubernetes-based applications on the same Amazon EKS cluster to improve resource utilization and simplify infrastructure management. Kubernetes uses namespaces to provide isolation between groups of resources within a single Kubernetes cluster. Amazon EMR creates a virtual cluster by registering Amazon EMR with a namespace on an EKS cluster. Amazon EMR can then run analytics workloads on that namespace.
In EMR on EKS, you can submit your Spark jobs to Amazon EMR virtual clusters using the AWS Command Line Interface (AWS CLI), SDK, or Amazon EMR Studio. Amazon EMR requests the Kubernetes scheduler on Amazon EKS to schedule pods. For every job you run, EMR on EKS creates a container with an Amazon Linux 2 base image, Apache Spark, and associated dependencies. Each Spark job runs in a pod on Amazon EKS worker nodes. If your Amazon EKS cluster has worker nodes in different Availability Zones, the Spark application driver and executor pods can spread across multiple Availability Zones. In this case, data transfer charges apply for cross-AZ communication and increase data processing latency. If you want to reduce data processing latency and avoid cross-AZ data transfer costs, you should configure Spark applications to run only within a single Availability Zone.
In this post, we share four design patterns to manage EMR on EKS workloads for Apache Spark. We then show how to use a pod template to schedule a job with EMR on EKS, and use Karpenter as our autoscaling tool.
Pattern 1: Manage Spark jobs by pod template
Customers often consolidate multiple applications on a shared Amazon EKS cluster to improve utilization and save costs. However, each application may have different requirements. For example, you may want to run performance-intensive workloads such as machine learning model training jobs on SSD-backed instances for better performance, or fault-tolerant and flexible applications on Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances for lower cost. In EMR on EKS, there are a few ways to configure how your Spark job runs on Amazon EKS worker nodes. You can utilize the Spark configurations on Kubernetes with the EMR on EKS StartJobRun API, or you can use Spark's pod template feature. Pod templates are specifications that determine how to run each pod on your EKS clusters. With pod templates, you have more flexibility and can use pod template files to define Kubernetes pod configurations that Spark doesn't support.
You can use pod templates to achieve the following benefits:
- Reduce costs – You can schedule Spark executor pods to run on EC2 Spot Instances while scheduling Spark driver pods to run on EC2 On-Demand Instances.
- Improve monitoring – You can enhance your Spark workload's observability. For example, you can deploy a sidecar container via a pod template to your Spark job that can forward logs to your centralized logging application.
- Improve resource utilization – You can support multiple teams running their Spark workloads on the same shared Amazon EKS cluster.
You can implement these patterns using pod templates and Kubernetes labels and selectors. Kubernetes labels are key-value pairs that are attached to objects, such as Kubernetes worker nodes, to identify attributes that are meaningful and relevant to users. You can then choose where Kubernetes schedules pods using nodeSelector or Kubernetes affinity and anti-affinity so that pods can only run on specific worker nodes. nodeSelector is the simplest way to constrain pods to nodes with specific labels. Affinity and anti-affinity expand the types of constraints you can define. The sketch that follows illustrates the idea.
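As a generic illustration (not from the original post), the following pod template uses nodeSelector to pin Spark executor pods to Spot-backed worker nodes; the file name is arbitrary, and the label shown is the capacity label applied by EKS managed node groups (the walkthrough later uses Karpenter's equivalent label):

```bash
# Hypothetical pod template: constrain executor pods to Spot capacity via nodeSelector
cat > executor-spot-template.yaml <<'EOF'
apiVersion: v1
kind: Pod
spec:
  nodeSelector:
    eks.amazonaws.com/capacityType: SPOT
EOF
```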
Autoscaling in Spark workload
Autoscaling is a function that automatically scales your compute resources up or down in response to changes in demand. For Kubernetes auto scaling, Amazon EKS supports two auto scaling products: the Kubernetes Cluster Autoscaler and the Karpenter open-source auto scaling project. Kubernetes autoscaling ensures your cluster has enough nodes to schedule your pods without wasting resources. If some pods fail to schedule on existing worker nodes due to insufficient resources, it increases the size of the cluster and adds additional nodes. It also attempts to remove underutilized nodes when their pods can run elsewhere.
Pattern 2: Turn on Dynamic Resource Allocation (DRA) in Spark
Spark provides a mechanism called Dynamic Resource Allocation (DRA), which dynamically adjusts the resources your application occupies based on the workload. With DRA, the Spark driver spawns the initial number of executors and then scales up the number until the specified maximum number of executors is met to process the pending tasks. Idle executors are deleted when there are no pending tasks. It's particularly useful if you're not sure how many executors are needed for your job processing.
You can implement it in EMR on EKS by following the Dynamic Resource Allocation workshop. A configuration sketch follows.
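For reference, these are the Spark properties typically involved when turning on DRA with shuffle tracking (no external shuffle service) on Kubernetes; the values below are placeholders to tune for your job, not the workshop's exact settings:

```bash
# Hypothetical sparkSubmitParameters snippet enabling DRA with shuffle tracking
SPARK_DRA_CONF="--conf spark.dynamicAllocation.enabled=true \
 --conf spark.dynamicAllocation.shuffleTracking.enabled=true \
 --conf spark.dynamicAllocation.minExecutors=1 \
 --conf spark.dynamicAllocation.maxExecutors=10 \
 --conf spark.dynamicAllocation.executorIdleTimeout=60s"
```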
Pattern 3: Fully control cluster autoscaling by Cluster Autoscaler
Cluster Autoscaler uses the concept of node groups as the element of capacity control and scale. In AWS, node groups are implemented by auto scaling groups. Cluster Autoscaler implements it by controlling the DesiredReplicas field of your auto scaling groups.
To save costs and improve resource utilization, you can use Cluster Autoscaler in your Amazon EKS cluster to automatically scale your Spark pods. The following are recommendations for autoscaling Spark jobs with Amazon EMR on EKS using Cluster Autoscaler:
- Create Availability Zone bounded auto scaling groups to make sure Cluster Autoscaler only adds worker nodes in the same Availability Zone, avoiding cross-AZ data transfer charges and data processing latency.
- Create separate node groups for EC2 On-Demand and Spot Instances. By doing this, you can grow or shrink driver pods and executor pods independently.
- In Cluster Autoscaler, each node in a node group needs to have identical scheduling properties. That includes EC2 instance types, which should be of similar vCPU to memory ratio to avoid inconsistency and wastage of resources. To learn more about Cluster Autoscaler node group best practices, refer to Configuring your Node Groups.
- Adhere to Spot Instance best practices and maximize diversification to take advantage of multiple Spot pools. Create multiple node groups for Spark executor pods with different vCPU to memory ratios. This greatly increases the stability and resiliency of your application.
- When you have multiple node groups, use pod templates and Kubernetes labels and selectors to manage Spark pod deployment to specific Availability Zones and EC2 instance types.
The following diagram illustrates Availability Zone bounded auto scaling groups.
When multiple node groups are created, Cluster Autoscaler has the concept of expanders, which provide different strategies for selecting which node group to scale. As of this writing, the following strategies are supported: random, most-pods, least-waste, and priority. With multiple node groups of EC2 On-Demand and Spot Instances, you can use the priority expander, which allows Cluster Autoscaler to select the node group that has the highest priority assigned by the user. For configuration details, refer to Priority based expander for Cluster Autoscaler. A configuration sketch follows.
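As a minimal sketch (not part of the original post), the priority expander is configured through a ConfigMap named cluster-autoscaler-priority-expander in the kube-system namespace; the auto scaling group name patterns and priority values below are assumptions for illustration:

```bash
# Hypothetical priority expander config: node groups whose ASG names match "*spot*"
# get a higher priority (20) than On-Demand groups (10), so Spot is tried first
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-autoscaler-priority-expander
  namespace: kube-system
data:
  priorities: |-
    10:
      - .*on-demand.*
    20:
      - .*spot.*
EOF
```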
Pattern 4: Group-less autoscaling with Karpenter
Karpenter is an open-source, flexible, high-performance Kubernetes cluster autoscaler built with AWS. The overall goal is the same—auto scaling Amazon EKS clusters to accommodate un-schedulable pods—however, Karpenter takes a different approach than Cluster Autoscaler, known as group-less provisioning. It observes the aggregate resource requests of unscheduled pods and makes decisions to launch minimal compute resources to fit the un-schedulable pods for efficient binpacking and reduced scheduling latency. It can also delete nodes to reduce infrastructure costs. Karpenter works directly with the Amazon EC2 Fleet.
To configure Karpenter, you create provisioners that define how Karpenter manages un-schedulable pods and expired nodes. You should utilize the concept of layered constraints to manage scheduling constraints. To reduce EMR on EKS costs and improve Amazon EKS cluster utilization, you can use Karpenter with similar constraints of Single-AZ, On-Demand Instances for Spark driver pods, and Spot Instances for executor pods without creating multiple types of node groups. With its group-less approach, Karpenter allows you to be more flexible and diversify better.
The following are recommendations for auto scaling EMR on EKS with Karpenter:
- Configure Karpenter provisioners to launch nodes in a single Availability Zone to avoid cross-AZ data transfer costs and reduce data processing latency.
- Create a provisioner for EC2 Spot Instances and EC2 On-Demand Instances. You can reduce costs by scheduling Spark driver pods to run on EC2 On-Demand Instances and scheduling Spark executor pods to run on EC2 Spot Instances.
- Limit the instance types by providing a list of EC2 instances, or let Karpenter choose from all the Spot pools available to it. This follows the Spot best practice of diversifying across multiple Spot pools.
- Use pod templates and Kubernetes labels and selectors to allow Karpenter to spin up right-sized nodes required for un-schedulable pods.
The following diagram illustrates how Karpenter works.
To summarize the design patterns we discussed:
- Pod templates help tailor your Spark workloads. You can configure Spark pods in a single Availability Zone and utilize EC2 Spot Instances for Spark executor pods, resulting in better price-performance.
- EMR on EKS supports the DRA feature in Spark. It's useful when you're not sure how many Spark executors are needed for your job processing, and you can use DRA to dynamically adjust the resources your application needs.
- Using Cluster Autoscaler allows you to fully control how to autoscale your Amazon EMR on EKS workloads. It improves your Spark application availability and cluster efficiency by rapidly launching right-sized compute resources.
- Karpenter simplifies autoscaling with its group-less provisioning of compute resources. The benefits include reduced scheduling latency and efficient bin-packing to reduce infrastructure costs.
Walkthrough overview
In our example walkthrough, we show how to use a pod template to schedule a job with EMR on EKS. We use Karpenter as our autoscaling tool.
We complete the following steps to implement the solution:
- Create an Amazon EKS cluster.
- Prepare the cluster for EMR on EKS.
- Register the cluster with Amazon EMR.
- For Amazon EKS auto scaling, set up Karpenter auto scaling in Amazon EKS.
- Submit a sample Spark job using pod templates to run in a single Availability Zone and utilize Spot for Spark executor pods.
The following diagram illustrates this architecture.
Prerequisites
To follow along with the walkthrough, ensure that you have the following prerequisite resources:
- An AWS account that provides access to AWS services.
- An AWS Identity and Access Management (IAM) user with an access key and secret key to configure the AWS CLI, and permissions to create IAM roles, IAM policies, Amazon EKS IAM roles and service-linked roles, AWS CloudFormation stacks, and a VPC. For more information, see Actions, resources, and condition keys for Amazon Elastic Container Service for Kubernetes and Using service-linked roles. You must complete all steps in this post as the same user.
- An Amazon Simple Storage Service (Amazon S3) bucket to store your pod templates.
- The AWS CLI, eksctl, and kubectl. Instructions for installing these tools are given in Step 1.
Create an Amazon EKS cluster
There are two ways to create an EKS cluster: you can use the AWS Management Console and AWS CLI, or you can provision all the required resources for Amazon EKS using eksctl, a simple command line utility for creating and managing Kubernetes clusters on EKS. For this post, we use eksctl to create our cluster.
Let's start with installing the tools to set up and manage your Kubernetes cluster.
- Install the AWS CLI with the following command (Linux OS) and confirm it works:
For other operating systems, see Installing, updating, and uninstalling the AWS CLI.
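The original commands aren't reproduced here; the following sketch uses the standard AWS CLI version 2 installer for Linux x86_64:

```bash
# Download and install AWS CLI version 2 (Linux x86_64), then verify
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws --version
```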
- Install eksctl, the command line utility for creating and managing Kubernetes clusters on Amazon EKS:
eksctl is a tool jointly developed by AWS and Weaveworks that automates much of the experience of creating EKS clusters.
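A sketch of the standard eksctl installation for Linux (the release URL follows the eksctl project's published instructions):

```bash
# Download the latest eksctl release, move it onto the PATH, and verify
curl --silent --location "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version
```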
- Install the Kubernetes command-line tool, kubectl, which allows you to run commands against Kubernetes clusters:
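A sketch using the upstream Kubernetes release download for Linux:

```bash
# Download a stable kubectl release, make it executable, and verify the client version
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
kubectl version --client
```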
- Create a new file called eks-create-cluster.yaml with the contents shown in the sketch after this list.
- Create an Amazon EKS cluster using the eks-create-cluster.yaml file. In this Amazon EKS cluster, we create a single managed node group with a general purpose m5.xlarge EC2 instance. Launching the Amazon EKS cluster, its managed node groups, and all dependencies typically takes 10–15 minutes.
- After you create the cluster, you can run the following to confirm all node groups were created:
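The original cluster definition isn't reproduced here; the following sketch covers all three steps above. The cluster name blog-eks-cluster, the us-west-2 region, and the node count are assumptions—only the single managed node group of m5.xlarge instances comes from the text:

```bash
# Write an eksctl cluster config with one managed node group of m5.xlarge instances
cat > eks-create-cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: blog-eks-cluster
  region: us-west-2
managedNodeGroups:
  - name: blog-eks-mng
    instanceType: m5.xlarge
    desiredCapacity: 1
EOF

# Create the cluster (typically takes 10-15 minutes)
eksctl create cluster -f eks-create-cluster.yaml

# Confirm the node groups were created
eksctl get nodegroups --cluster blog-eks-cluster
```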
You can now use kubectl to interact with the created Amazon EKS cluster.
- After you create your Amazon EKS cluster, you have to configure the kubeconfig file for your cluster using the AWS CLI:
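For example, using the cluster name and region assumed earlier in this sketch:

```bash
# Update the local kubeconfig so kubectl can talk to the new cluster
aws eks update-kubeconfig --region us-west-2 --name blog-eks-cluster
```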
You can now use kubectl to connect to your Kubernetes cluster.
Prepare your Amazon EKS cluster for EMR on EKS
Now we prepare our Amazon EKS cluster to integrate it with EMR on EKS.
- Let's create the namespace emr-on-eks-blog in our Amazon EKS cluster:
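```bash
# Create the Kubernetes namespace that the EMR virtual cluster will be mapped to
kubectl create namespace emr-on-eks-blog
```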
- We use the automation powered by eksctl to create role-based access control permissions and to add the EMR on EKS service-linked role into the aws-auth configmap:
- The Amazon EKS cluster already has an OpenID Connect provider URL. You enable IAM roles for service accounts by associating IAM with the Amazon EKS cluster OIDC provider:
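A sketch of both steps with eksctl; the cluster name is the assumption used earlier:

```bash
# Grant EMR on EKS access to the emr-on-eks-blog namespace
# (creates RBAC permissions and updates the aws-auth configmap)
eksctl create iamidentitymapping \
  --cluster blog-eks-cluster \
  --namespace emr-on-eks-blog \
  --service-name "emr-containers"

# Associate an IAM OIDC provider with the cluster to enable IAM roles for service accounts
eksctl utils associate-iam-oidc-provider \
  --cluster blog-eks-cluster \
  --approve
```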
Now let's create the IAM role that Amazon EMR uses to run Spark jobs.
- Create the file blog-emr-trust-policy.json:
This IAM role contains all permissions that the Spark job needs—for instance, we provide access to S3 buckets and Amazon CloudWatch to access necessary files (pod templates) and share logs.
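The original trust policy document isn't shown here. The following sketch uses a placeholder trust policy (the elasticmapreduce.amazonaws.com principal is an assumption; a later step updates the trust relationship for the cluster's OIDC provider) and creates the EMR_EKS_Job_Execution_Role role:

```bash
# Placeholder trust policy; the emr-containers update-role-trust-policy step later
# rewrites this to trust the EKS cluster's OIDC provider
cat > blog-emr-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "elasticmapreduce.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the job execution role with the placeholder trust policy
aws iam create-role \
  --role-name EMR_EKS_Job_Execution_Role \
  --assume-role-policy-document file://blog-emr-trust-policy.json
```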
Next, we need to attach the required IAM policies to the role so it can write logs to Amazon S3 and CloudWatch.
- Create the file blog-emr-policy-document with the required IAM policies. Replace the bucket name with your S3 bucket ARN.
- Now we update the trust relationship between the IAM role we just created and the Amazon EMR service identity. The namespace provided here in the trust policy needs to be the same when registering the virtual cluster in the next step:
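A sketch of both steps follows. The inline policy below is an assumed minimal S3-and-CloudWatch-Logs policy, not the original post's exact document; replace the bucket placeholder with your own bucket:

```bash
# Assumed minimal permissions: read the pod templates and job script from S3,
# write job logs to S3 and CloudWatch Logs (replace <your-s3-bucket>)
cat > blog-emr-policy-document <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::<your-s3-bucket>", "arn:aws:s3:::<your-s3-bucket>/*"]
    },
    {
      "Effect": "Allow",
      "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents",
                 "logs:DescribeLogGroups", "logs:DescribeLogStreams"],
      "Resource": ["arn:aws:logs:*:*:*"]
    }
  ]
}
EOF

# Attach the policy to the job execution role as an inline policy
aws iam put-role-policy \
  --role-name EMR_EKS_Job_Execution_Role \
  --policy-name blog-emr-policy-document \
  --policy-document file://blog-emr-policy-document

# Update the role's trust relationship for the emr-on-eks-blog namespace
aws emr-containers update-role-trust-policy \
  --cluster-name blog-eks-cluster \
  --namespace emr-on-eks-blog \
  --role-name EMR_EKS_Job_Execution_Role
```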
Register the Amazon EKS cluster with Amazon EMR
Registering your Amazon EKS cluster is the final step to set up EMR on EKS to run workloads.
We create a virtual cluster and map it to the Kubernetes namespace created earlier:
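A sketch of the registration command; the virtual cluster name emr-on-eks-blog-cluster is an assumption, while the EKS cluster name and namespace follow the earlier steps:

```bash
# Register the EKS cluster with Amazon EMR by creating a virtual cluster
# mapped to the emr-on-eks-blog namespace
aws emr-containers create-virtual-cluster \
  --name emr-on-eks-blog-cluster \
  --container-provider '{
    "id": "blog-eks-cluster",
    "type": "EKS",
    "info": { "eksInfo": { "namespace": "emr-on-eks-blog" } }
  }'
```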
After you register, you should get confirmation that your EMR virtual cluster is created.
Set up Karpenter in Amazon EKS
To get started with Karpenter, ensure there is some compute capacity available, and install it using the Helm charts provided in the public repository. Karpenter also requires permissions to provision compute resources. For more information, refer to Getting Started.
Karpenter's single responsibility is to provision compute for your Kubernetes clusters, which is configured by a custom resource called a provisioner. Once installed in your cluster, the Karpenter provisioner observes incoming Kubernetes pods that can't be scheduled due to insufficient compute resources in the cluster, and automatically launches new resources to meet their scheduling and resource requirements.
For our use case, we create two provisioners.
The first is a Karpenter provisioner for Spark driver pods to run on EC2 On-Demand Instances:
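The original manifest isn't reproduced here; the following sketch assumes Karpenter's v1alpha5 Provisioner API, with the provisioner name, TTL, and discovery tags as assumptions (the on-demand capacity type and the us-west-2b zone come from the text):

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: spark-driver-on-demand
spec:
  requirements:
    # Launch only EC2 On-Demand capacity for Spark driver pods
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["on-demand"]
    # Keep all nodes in a single Availability Zone to avoid cross-AZ traffic
    - key: topology.kubernetes.io/zone
      operator: In
      values: ["us-west-2b"]
  # Reclaim empty nodes quickly to save costs
  ttlSecondsAfterEmpty: 30
  provider:
    # Assumes subnets and security groups are tagged for Karpenter discovery
    subnetSelector:
      karpenter.sh/discovery: blog-eks-cluster
    securityGroupSelector:
      karpenter.sh/discovery: blog-eks-cluster
EOF
```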
The second is a Karpenter provisioner for Spark executor pods to run on EC2 Spot Instances:
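A matching sketch for the Spot provisioner, under the same v1alpha5 API assumption:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: spark-executor-spot
spec:
  requirements:
    # Launch only EC2 Spot capacity for Spark executor pods
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["spot"]
    # Same single Availability Zone as the driver provisioner
    - key: topology.kubernetes.io/zone
      operator: In
      values: ["us-west-2b"]
  ttlSecondsAfterEmpty: 30
  provider:
    subnetSelector:
      karpenter.sh/discovery: blog-eks-cluster
    securityGroupSelector:
      karpenter.sh/discovery: blog-eks-cluster
EOF
```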
Note the requirements section of the provisioner config. We use the well-known labels with Amazon EKS and Karpenter to add constraints for how Karpenter launches nodes. We add constraints so that if a pod asks for the label karpenter.sh/capacity-type: spot, Karpenter uses this provisioner to launch an EC2 Spot Instance only in Availability Zone us-west-2b. We follow the same constraint for the karpenter.sh/capacity-type: on-demand label. We can also be more granular and provide EC2 instance types in our provisioner, and they can be of different vCPU and memory ratios, giving you more flexibility and adding resiliency to your application. Karpenter launches nodes only when both the provisioner's and the pod's requirements are met. To learn more about the Karpenter provisioner API, refer to Provisioner API.
In the next step, we define pod requirements and align them with what we have defined in Karpenter's provisioners.
Submit Spark job using pod template
In Kubernetes, labels are key-value pairs that are attached to objects, such as pods. Labels are intended to specify identifying attributes of objects that are meaningful and relevant to users. You can constrain a pod so that it can only run on a particular set of nodes. There are several ways to do this, and the recommended approaches all use label selectors to facilitate the selection.
Beginning with Amazon EMR versions 5.33.0 or 6.3.0, EMR on EKS supports Spark's pod template feature. We use pod templates to add specific labels where Spark driver and executor pods should be launched.
Create a pod template file for a Spark driver pod and save it in your S3 bucket:
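A sketch of such a driver pod template; the file and bucket names are assumptions, and the nodeSelector matches the on-demand constraint used in the provisioner sketch above:

```bash
# Driver pods should land on On-Demand capacity provisioned by Karpenter
cat > spark_driver_pod_template.yaml <<'EOF'
apiVersion: v1
kind: Pod
spec:
  nodeSelector:
    karpenter.sh/capacity-type: on-demand
EOF

aws s3 cp spark_driver_pod_template.yaml s3://<your-s3-bucket>/
```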
Create a pod template file for a Spark executor pod and save it in your S3 bucket:
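The executor counterpart, under the same assumptions, targeting Spot capacity:

```bash
# Executor pods should land on Spot capacity provisioned by Karpenter
cat > spark_executor_pod_template.yaml <<'EOF'
apiVersion: v1
kind: Pod
spec:
  nodeSelector:
    karpenter.sh/capacity-type: spot
EOF

aws s3 cp spark_executor_pod_template.yaml s3://<your-s3-bucket>/
```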
Pod templates provide different fields to manage job scheduling. For additional details, refer to Pod template fields. Note the nodeSelector for the Spark driver pods and Spark executor pods, which matches the labels we defined with the Karpenter provisioners.
For a sample Spark job, we use the following code, which creates multiple parallel threads and waits for a few seconds:
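The original script isn't included here; the following is a minimal stand-in PySpark script (the file name threadsleep.py and all values are assumptions) that runs several map stages in parallel driver threads, each task sleeping for a few seconds:

```bash
cat > threadsleep.py <<'EOF'
import time
from threading import Thread
from pyspark.sql import SparkSession

def delay(x):
    # Each task sleeps for a few seconds to keep executors busy
    time.sleep(5)
    return x

if __name__ == "__main__":
    spark = SparkSession.builder.appName("threadsleep").getOrCreate()
    sc = spark.sparkContext

    def run_stage():
        sc.parallelize(range(100), 10).map(delay).count()

    # Run several Spark stages concurrently from parallel driver threads
    threads = [Thread(target=run_stage) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    spark.stop()
EOF
```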
Copy the sample Spark job into your S3 bucket:
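Assuming the file and bucket names above:

```bash
aws s3 cp threadsleep.py s3://<your-s3-bucket>/
```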
Before we submit the Spark job, let's get the required values of the EMR virtual cluster ID and the Amazon EMR job execution role ARN:
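For example, assuming the names used earlier in this sketch:

```bash
# Look up the virtual cluster ID and the job execution role ARN
export VIRTUAL_CLUSTER_ID=$(aws emr-containers list-virtual-clusters \
  --query "virtualClusters[?name=='emr-on-eks-blog-cluster' && state=='RUNNING'].id" \
  --output text)

export EMR_ROLE_ARN=$(aws iam get-role \
  --role-name EMR_EKS_Job_Execution_Role \
  --query Role.Arn --output text)
```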
To enable the pod template feature with EMR on EKS, you can use configuration-overrides to specify the Amazon S3 path to the pod templates:
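A sketch of the StartJobRun call follows; the job name, release label, and executor count are assumptions, while spark.kubernetes.driver.podTemplateFile and spark.kubernetes.executor.podTemplateFile are the standard Spark properties for pointing at the templates in Amazon S3:

```bash
aws emr-containers start-job-run \
  --virtual-cluster-id "$VIRTUAL_CLUSTER_ID" \
  --name spark-threadsleep-single-az \
  --execution-role-arn "$EMR_ROLE_ARN" \
  --release-label emr-6.5.0-latest \
  --job-driver '{
    "sparkSubmitJobDriver": {
      "entryPoint": "s3://<your-s3-bucket>/threadsleep.py",
      "sparkSubmitParameters": "--conf spark.driver.cores=2 --conf spark.executor.cores=1 --conf spark.executor.instances=6"
    }
  }' \
  --configuration-overrides '{
    "applicationConfiguration": [
      {
        "classification": "spark-defaults",
        "properties": {
          "spark.kubernetes.driver.podTemplateFile": "s3://<your-s3-bucket>/spark_driver_pod_template.yaml",
          "spark.kubernetes.executor.podTemplateFile": "s3://<your-s3-bucket>/spark_executor_pod_template.yaml"
        }
      }
    ]
  }'
```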
In the Spark job, we're requesting two cores for the Spark driver and one core for each Spark executor pod. Because we only had a single EC2 instance in our managed node group, Karpenter looks at the un-schedulable Spark driver pods and uses the on-demand provisioner to launch EC2 On-Demand Instances for Spark driver pods in us-west-2b. Similarly, when the Spark executor pods are in pending state, because there are no Spot Instances, Karpenter launches Spot Instances in us-west-2b.
This way, Karpenter optimizes your costs by starting from zero Spot and On-Demand Instances and only creates them dynamically when required. Additionally, Karpenter batches pending pods and then binpacks them based on the CPU, memory, and GPUs required, taking into account node overhead, VPC CNI resources required, and daemon sets that will be packed when bringing up a new node. This makes sure you're efficiently utilizing your resources with the least wastage.
Clean up
Don't forget to clean up the resources you created to avoid any unnecessary charges.
- Delete all the virtual clusters that you created:
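For example, using the virtual cluster ID captured earlier (an assumption of this sketch):

```bash
# List virtual clusters to confirm their IDs, then delete the one created in this post
aws emr-containers list-virtual-clusters \
  --query "virtualClusters[].{id:id,name:name,state:state}"
aws emr-containers delete-virtual-cluster --id "$VIRTUAL_CLUSTER_ID"
```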
- Delete the Amazon EKS cluster:
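Assuming the cluster name used throughout this sketch:

```bash
# Deleting the cluster also removes its managed node groups
eksctl delete cluster --name blog-eks-cluster
```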
- Delete the EMR_EKS_Job_Execution_Role role and policies.
Conclusion
In this post, we saw how to create an Amazon EKS cluster, configure Amazon EKS managed node groups, create an EMR virtual cluster on Amazon EKS, and submit Spark jobs. Using pod templates, we saw how to ensure Spark workloads are scheduled in the same Availability Zone and utilize Spot with Karpenter auto scaling to reduce costs and optimize your Spark workloads.
To get started, try out the EMR on EKS workshop. For more resources, refer to the following:
About the author
Jamal Arif is a Solutions Architect at AWS and a containers specialist. He helps AWS customers in their modernization journey to build innovative, resilient, and cost-effective solutions. In his spare time, Jamal enjoys spending time outdoors with his family, hiking and mountain biking.