
Build a centralized observability platform for Apache Spark on Amazon EMR on EKS using an external Spark History Server


Monitoring and troubleshooting Apache Spark applications becomes increasingly complex as companies scale their data analytics workloads. As data processing requirements grow, enterprises deploy these applications across multiple Amazon EMR on EKS clusters to handle diverse workloads efficiently. However, this approach makes it hard to maintain comprehensive visibility into Spark applications running across separate clusters. Data engineers and platform teams need a unified view to effectively monitor and optimize their Spark applications.

Although Spark provides powerful built-in monitoring capabilities through Spark History Server (SHS), implementing a scalable and secure observability solution across multiple clusters requires careful architectural consideration. Organizations need a solution that not only consolidates Spark application metrics but also extends SHS with additional performance monitoring and troubleshooting packages, while providing secure access to these insights and maintaining operational efficiency.

This post demonstrates how to build a centralized observability platform using SHS for Spark applications running on EMR on EKS. We show how to enhance SHS with performance monitoring tools, using a pattern applicable to many monitoring solutions such as SparkMeasure and DataFlint. In this post, we use DataFlint as an example to demonstrate how you can integrate additional monitoring solutions. We explain how to collect Spark events from multiple EMR on EKS clusters into a central Amazon Simple Storage Service (Amazon S3) bucket; deploy SHS on a dedicated Amazon Elastic Kubernetes Service (Amazon EKS) cluster; and configure secure access using AWS Load Balancer Controller, AWS Private Certificate Authority, Amazon Route 53, and AWS Client VPN. This solution provides teams with a single, secure interface to monitor, analyze, and troubleshoot Spark applications across multiple clusters.

Overview of solution

Consider DataCorp Analytics, a data-driven enterprise running multiple business units with diverse Spark workloads. Their Financial Analytics team processes time-sensitive trading data requiring strict processing times and dedicated resources, and their Marketing Analytics team handles customer behavior data with flexible requirements, so they run multiple EMR on EKS clusters to accommodate these distinct workload patterns. As their Spark applications grow in volume and complexity across these clusters, data and platform engineers struggle to maintain comprehensive visibility while keeping access to monitoring tools secure.

This scenario presents an ideal use case for implementing a centralized observability platform using SHS and DataFlint. The solution deploys SHS on a dedicated EKS cluster, configured to read events from multiple EMR on EKS clusters through a centralized S3 bucket. Access is secured through AWS Load Balancer Controller, AWS Private CA, Route 53, and Client VPN, and DataFlint enhances the monitoring capabilities with additional insights and visualizations. The following architecture diagram illustrates the components and their interactions.

Architecture diagram

The solution workflow is as follows:

  1. Spark applications on EMR on EKS use a custom EMR Docker image that includes DataFlint JARs for enhanced metrics collection. These applications generate detailed event logs containing execution metrics, performance data, and DataFlint-specific insights. The logs are written to a centralized Amazon S3 location through the following configuration (note especially the configurationOverrides section). For additional information, explore the StartJobRun guide to learn how to run Spark jobs and review the StartJobRun API reference.
{
  "identify": "${SPARK_JOB_NAME}", 
  "virtualClusterId": "${VIRTUAL_CLUSTER_ID}",  
  "executionRoleArn": "${IAM_ROLE_ARN_FOR_JOB_EXECUTION}",
  "releaseLabel": "emr-7.2.0-latest", 
  "jobDriver": {
    "sparkSubmitJobDriver": {
      "entryPoint": "s3://${S3_BUCKET_NAME}/app/${SPARK_APP_FILE}",
      "entryPointArguments": [
        "--input-path",
        "s3://${S3_BUCKET_NAME}/data/input",
        "--output-path",
        "s3://${S3_BUCKET_NAME}/data/output"
      ],
       "sparkSubmitParameters": "--conf spark.driver.cores=1 --conf spark.driver.reminiscence=4G --conf spark.kubernetes.driver.restrict.cores=1200m --conf spark.executor.cores=2  --conf spark.executor.cases=3  --conf spark.executor.reminiscence=4G"
    }
  }, 
  "configurationOverrides": {
    "applicationConfiguration": [
      {
        "classification": "spark-defaults", 
        "properties": {
          "spark.driver.memory":"2G",
          "spark.kubernetes.container.image": "${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/${EMR_REPO_NAME}:${EMR_IMAGE_TAG}",
          "spark.app.name": "${SPARK_JOB_NAME}"
          "spark.eventLog.enabled": "true",
          "spark.eventLog.dir": "s3://${S3_BUCKET_NAME}/spark-events/"
         }
      }
    ], 
    "monitoringConfiguration": {
      "persistentAppUI": "ENABLED",
      "s3MonitoringConfiguration": {
        "logUri": "s3://${S3_BUCKET_NAME}/spark-events/"
      }
    }
  }
}

  2. A dedicated SHS deployed on Amazon EKS reads these centralized logs. SHS is configured to read from the central Amazon S3 location through the following code:
env:
  - name: SPARK_HISTORY_OPTS
    value: "-Dspark.history.fs.logDirectory=s3a://${S3_BUCKET}/spark-events/"
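For the s3a scheme to work, the history server pod also needs S3 credentials. When SHS runs on EKS with IAM roles for service accounts, a common approach is to point the s3a connector at the web identity credentials provider. The following is an illustrative sketch only (the credentials-provider line is an assumption, not taken from the sample repository):

```yaml
env:
  - name: SPARK_HISTORY_OPTS
    # logDirectory as above; the second property assumes an IAM role is
    # attached to the history server's Kubernetes service account (IRSA).
    value: >-
      -Dspark.history.fs.logDirectory=s3a://${S3_BUCKET}/spark-events/
      -Dspark.hadoop.fs.s3a.aws.credentials.provider=com.amazonaws.auth.WebIdentityTokenCredentialsProvider
```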

  3. We configure AWS Load Balancer Controller, AWS Private CA, a Route 53 hosted zone, and Client VPN to securely access the SHS UI using a web browser.
  4. Finally, users can access the SHS web interface at https://spark-history-server.example.internal/.

You can find the code base in the AWS Samples GitHub repository.

Prerequisites

Before you deploy this solution, make sure the following prerequisites are in place:

Set up the common infrastructure

Complete the following steps to set up the infrastructure:

  1. Clone the repository to your local machine and set the two environment variables. Set AWS_REGION to the AWS Region where you want to deploy these resources.
git clone git@github.com:aws-samples/sample-centralized-spark-history-server-emr-on-eks.git
cd sample-centralized-spark-history-server-emr-on-eks
export REPO_DIR=$(pwd)
export AWS_REGION=

  2. Execute the following script to create the common infrastructure. The script creates a secure virtual private cloud (VPC) networking environment with public and private subnets and an encrypted S3 bucket to store Spark application logs.
cd ${REPO_DIR}/infra
./deploy_infra.sh

  3. To verify successful infrastructure deployment, open the AWS CloudFormation console, choose your stack, and check the Events, Resources, and Outputs tabs for completion status, details, and the list of resources created.

Set up EMR on EKS clusters

This section covers building a custom EMR on EKS Docker image with DataFlint integration, launching two EMR on EKS clusters (datascience-cluster-v and analytics-cluster-v), and configuring the clusters for job submission. Additionally, we set up the necessary IAM roles for service accounts (IRSA) to enable Spark jobs to write events to the centralized S3 bucket. Complete the following steps:

  1. Deploy two EMR on EKS clusters:
cd ${REPO_DIR}/emr-on-eks
./deploy_emr_on_eks.sh

  2. To verify successful creation of the EMR on EKS clusters using the AWS CLI, execute the following command:
aws emr-containers list-virtual-clusters \
    --query "virtualClusters[?state=='RUNNING']"

  3. Execute the following command for the datascience-cluster-v and analytics-cluster-v clusters to verify their respective states, container provider information, and associated EKS cluster details. Replace the --id value with the ID of each cluster obtained from the list-virtual-clusters output.
aws emr-containers describe-virtual-cluster \
    --id 
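If you prefer scripting these checks, the same state filtering can be done client-side. The following is a minimal sketch, assuming the response shape returned by list-virtual-clusters; the sample data is made up:

```python
import json

def running_clusters(response: dict) -> list:
    """Return the virtual clusters in the RUNNING state from a
    list-virtual-clusters response."""
    return [vc for vc in response.get("virtualClusters", [])
            if vc.get("state") == "RUNNING"]

# Illustrative response; real output comes from
# `aws emr-containers list-virtual-clusters`.
sample = {
    "virtualClusters": [
        {"id": "abc123", "name": "datascience-cluster-v", "state": "RUNNING"},
        {"id": "def456", "name": "analytics-cluster-v", "state": "TERMINATED"},
    ]
}
print(json.dumps([vc["id"] for vc in running_clusters(sample)]))
```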

Configure and execute Spark jobs on EMR on EKS clusters

Complete the following steps to configure and execute Spark jobs on the EMR on EKS clusters:

  1. Generate the custom EMR on EKS image and the StartJobRun request JSON files to run Spark jobs:
cd ${REPO_DIR}/jobs
./configure_jobs.sh

The script performs the following tasks:

  • Prepares the environment by uploading the sample Spark application spark_history_demo.py to a designated S3 bucket for job execution.
  • Creates a custom Amazon EMR container image by extending the base EMR 7.2.0 image with the DataFlint JAR for additional insights, and publishes it to an Amazon Elastic Container Registry (Amazon ECR) repository.
  • Generates cluster-specific StartJobRun request JSON files for datascience-cluster-v and analytics-cluster-v.

Review start-job-run-request-datascience-cluster-v.json and start-job-run-request-analytics-cluster-v.json for additional details.

  2. Execute the following commands to submit Spark jobs to the EMR on EKS virtual clusters:
aws emr-containers start-job-run \
    --cli-input-json file://${REPO_DIR}/jobs/start-job-run/start-job-run-request-datascience-cluster-v.json
aws emr-containers start-job-run \
    --cli-input-json file://${REPO_DIR}/jobs/start-job-run/start-job-run-request-analytics-cluster-v.json
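start-job-run returns immediately, so scripts typically poll the job run until it reaches a terminal state. The following is a minimal polling sketch with the describe call stubbed out; a real version would call `aws emr-containers describe-job-run` (or boto3's `describe_job_run`), and the state names below are illustrative:

```python
import time
from typing import Callable

TERMINAL_STATES = {"COMPLETED", "FAILED", "CANCELLED"}

def wait_for_job_run(describe: Callable[[], str],
                     poll_seconds: float = 30.0,
                     max_attempts: int = 60) -> str:
    """Poll `describe` (a function returning the current job run state)
    until a terminal state is reached or attempts run out."""
    for _ in range(max_attempts):
        state = describe()
        if state in TERMINAL_STATES:
            return state
        time.sleep(poll_seconds)
    raise TimeoutError("job run did not reach a terminal state")

# Stubbed state sequence standing in for real describe-job-run calls.
states = iter(["SUBMITTED", "RUNNING", "COMPLETED"])
print(wait_for_job_run(lambda: next(states), poll_seconds=0))
```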

  3. Verify the successful generation of the logs in the S3 bucket:

aws s3 ls s3://emr-spark-logs--/spark-events/

You have successfully set up an EMR on EKS environment, executed Spark jobs, and collected the logs in the centralized S3 bucket. Next, we'll deploy SHS, configure its secure access, and use it to visualize the logs.
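Spark event logs are JSON Lines files with one event object per line, so once you download one from the bucket you can sanity-check it locally. The following is a minimal sketch; the sample events are synthetic and carry far fewer fields than real event logs:

```python
import json

def summarize_event_log(lines):
    """Count Spark events by type and extract the application name
    from the ApplicationStart event, if present."""
    counts, app_name = {}, None
    for line in lines:
        event = json.loads(line)
        kind = event.get("Event", "unknown")
        counts[kind] = counts.get(kind, 0) + 1
        if kind == "SparkListenerApplicationStart":
            app_name = event.get("App Name")
    return app_name, counts

# Synthetic events mimicking the JSON Lines layout of a Spark event log.
sample = [
    json.dumps({"Event": "SparkListenerApplicationStart", "App Name": "demo"}),
    json.dumps({"Event": "SparkListenerJobStart", "Job ID": 0}),
    json.dumps({"Event": "SparkListenerJobEnd", "Job ID": 0}),
    json.dumps({"Event": "SparkListenerApplicationEnd"}),
]
print(summarize_event_log(sample))
```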

Set up AWS Private CA and create a Route 53 private hosted zone

Use the following code to deploy AWS Private CA and create a Route 53 private hosted zone. This will provide a user-friendly URL to connect to SHS over HTTPS.

cd ${REPO_DIR}/ssl
./deploy_ssl.sh

Set up SHS on Amazon EKS

Complete the following steps to build a Docker image containing SHS with DataFlint, deploy it on an EKS cluster using a Helm chart, and expose it through a Kubernetes service of type LoadBalancer. We use a Spark 3.5.0 base image, which includes SHS by default. Although this simplifies deployment, it results in a larger image size. For environments where image size is critical, consider building a custom image with just the standalone SHS component instead of the entire Spark distribution.
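As a rough illustration of this trade-off, a custom image can start from the Spark base and run only the history server in the foreground. This sketch is illustrative only; the DataFlint JAR name is a placeholder, and the sample repository's actual Dockerfile may differ:

```dockerfile
# Illustrative sketch; see the AWS Samples repository for the real build.
FROM apache/spark:3.5.0
# Put the DataFlint plugin JAR (built or downloaded separately) on the classpath.
COPY dataflint-spark.jar /opt/spark/jars/
# Keep the history server in the foreground so the container stays up.
ENV SPARK_NO_DAEMONIZE=true
ENTRYPOINT ["/opt/spark/sbin/start-history-server.sh"]
```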

  1. Deploy SHS on the spark-history-server EKS cluster:
cd ${REPO_DIR}/shs
./deploy_shs.sh

  2. Verify the deployment by listing the pods and viewing the pod logs:
kubectl get pods --namespace spark-history
kubectl logs  --namespace spark-history

  3. Review the logs and confirm there are no errors or exceptions.

You have successfully deployed SHS on the spark-history-server EKS cluster and configured it to read logs from the emr-spark-logs-- S3 bucket.

Deploy Client VPN and add records to Route 53 for secure access

Complete the following steps to deploy Client VPN to securely connect your client machine (such as your laptop) to SHS and configure Route 53 to generate a user-friendly URL:

  1. Deploy the Client VPN:
cd ${REPO_DIR}/vpn
./deploy_vpn.sh

  2. Add DNS records to Route 53:
cd ${REPO_DIR}/dns
./deploy_dns.sh

Add certificates to local trusted stores

Complete the following steps to add the SSL certificate to your operating system's trusted certificate stores for secure connections:

  1. For macOS users, using Keychain Access (GUI):
    1. Open Keychain Access from Applications, Utilities, choose the System keychain in the navigation pane, and choose File, Import Items.
    2. Browse to and choose ${REPO_DIR}/ssl/certificates/ca-certificate.pem, then choose the imported certificate.
    3. Expand the Trust section and set When using this certificate to Always Trust.
    4. Close the window and enter your password when prompted to save.
    5. Alternatively, you can execute the following command to add the certificate to Keychain and trust it:
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain "${REPO_DIR}/ssl/certificates/ca-certificate.pem"

  2. For Windows users:
    1. Rename ca-certificate.pem to ca-certificate.crt.
    2. Choose (right-click) ca-certificate.crt and choose Install Certificate.
    3. Choose Local Machine (admin rights required).
    4. Select Place all certificates in the following store.
    5. Choose Browse and select Trusted Root Certification Authorities.
    6. Complete the installation by choosing Next and Finish.

Set up Client VPN on your client machine for secure access

Complete the following steps to install and configure Client VPN on your client machine (such as your laptop) and create a VPN connection to the AWS Cloud:

  1. Download, install, and launch the AWS Client VPN application from the official download page for your operating system.
  2. Create your VPN profile:
    1. Choose File in the menu bar, choose Manage Profiles, and choose Add Profile.
    2. Enter a name for your profile, for example, SparkHistoryServerUI.
    3. Browse to ${REPO_DIR}/vpn/client_vpn_certs/client-config.ovpn, select the certificate file, and choose Add Profile to save your configuration.
  3. Select your newly created profile, choose Connect, and wait for the connection confirmation to establish the VPN connection.

When you're connected, you'll have secure access to the AWS resources in your environment.

VPN connection details

Securely access the SHS URL

Complete the following steps to securely access SHS using a web browser:

  1. Note the SHS URL:

https://spark-history-server.instance.inside/

  2. Copy this URL and enter it into your web browser to access the SHS UI.

The following screenshot shows an example of the UI.

Spark History Server job summary page

  3. Choose an App ID to view its detailed execution information and metrics.

Spark History Server job detail page

  4. Choose the DataFlint tab to view detailed application insights and analytics.

DataFlint insights page

DataFlint displays various helpful metrics, including alerts, as shown in the following screenshot.

DataFlint alerts page

Clean up

To avoid incurring future charges from the resources created in this tutorial, clean up your environment after completing the steps. To remove all provisioned resources:

  1. Disconnect from the Client VPN.
  2. Run the cleanup.sh script:
cd ${REPO_DIR}/
./cleanup.sh

Conclusion

In this post, we demonstrated how to build a centralized observability platform for Spark applications using SHS and how to enhance SHS with performance monitoring tools like DataFlint. The solution aggregates Spark events from multiple EMR on EKS clusters into a unified monitoring interface, providing comprehensive visibility into your Spark applications' performance and resource utilization. By using a custom EMR image with performance monitoring tool integration, we enhanced the standard Spark metrics to gain deeper insights into application behavior. If your environment uses a mix of EMR on EKS, Amazon EMR on EC2, or Amazon EMR Serverless, you can seamlessly extend this architecture to aggregate the logs from EMR on EC2 and EMR Serverless in a similar way and visualize them using SHS.

Although this solution provides a strong foundation for Spark monitoring, production deployments should consider implementing authentication and authorization. SHS supports custom authentication through javax servlet filters and fine-grained authorization through access control lists (ACLs). We encourage you to explore implementing authentication filters for secure access control, configuring user- and group-based ACLs for view and modify permissions, and setting up group mapping providers for role-based access. For detailed guidance, refer to Spark's web UI security documentation and SHS security features.

While AWS endeavors to apply security best practices within this example, each organization has its own policies. Make sure to follow your organization's specific policies when deploying this solution as a starting point for implementing centralized Spark monitoring in your data processing environment.


About the authors

Sri Potluri is a Cloud Infrastructure Architect at AWS. He is passionate about solving complex problems and delivering well-structured solutions for diverse customers. His expertise spans a range of cloud technologies, providing scalable and reliable infrastructures tailored to each project's unique challenges.

Suvojit Dasgupta is a Principal Data Architect at AWS. He leads a team of skilled engineers in designing and building scalable data solutions for AWS customers. He specializes in creating and implementing innovative data architectures to address complex business challenges.
