The rise of distributed data processing frameworks such as Apache Spark has revolutionized the way organizations handle and analyze large-scale data. However, as the volume and complexity of data continue to grow, the need for fine-grained access control (FGAC) has become increasingly important. This is particularly true in scenarios where sensitive or proprietary data must be shared across multiple teams or organizations, such as in the case of open data initiatives. Implementing robust access control mechanisms is crucial to maintain secure and controlled access to data stored in Open Table Format (OTF) within a modern data lake.
One approach to addressing this challenge is by using Amazon EMR on Amazon Elastic Kubernetes Service (Amazon EKS) and incorporating FGAC mechanisms. With Amazon EMR on EKS, you can run open source big data frameworks such as Spark on Amazon EKS. This integration provides the scalability and flexibility of Kubernetes, while also using the data processing capabilities of Amazon EMR.
On February 6, 2025, AWS launched fine-grained access control based on AWS Lake Formation for EMR on EKS, available from Amazon EMR 7.7 and higher releases. You can now significantly enhance your data governance and security frameworks using this feature.
In this post, we demonstrate how to implement FGAC on Apache Iceberg tables using EMR on EKS with Lake Formation.
Data mesh use case
With FGAC in a data mesh architecture, domain owners can manage access to their data products at a granular level. This decentralized approach allows for greater agility and control, ensuring data is accessible only to authorized users and services within or across domains. Policies can be tailored to specific data products, considering factors like data sensitivity, user roles, and intended use. This localized control enhances security and compliance while supporting the self-service nature of the data mesh.
FGAC is especially useful in business domains that deal with sensitive data, such as healthcare, finance, legal, human resources, and others. In this post, we focus on examples from the healthcare domain, showcasing how we can achieve the following:
- Share patient data securely – Data mesh enables different departments within a hospital to manage their own patient data as independent domains. FGAC ensures only authorized personnel can access specific patient records or data elements based on their roles and on a need-to-know basis.
- Facilitate research and collaboration – Researchers can access de-identified patient data from various hospital domains through the data mesh architecture, enabling collaboration between multidisciplinary teams across different healthcare institutions, fostering knowledge sharing, and accelerating research and discovery. FGAC helps compliance with privacy regulations (such as HIPAA) by restricting access to sensitive data elements or allowing access only to aggregated, anonymized datasets.
- Improve operational efficiency – Data mesh can streamline data sharing between hospitals and insurance companies, simplifying billing and claims processing. FGAC ensures only authorized personnel within each organization can access the necessary data, protecting sensitive financial information.
Solution overview
In this post, we explore how to implement FGAC on Iceberg tables within an EMR on EKS application, using the capabilities of Lake Formation. For details on how to implement FGAC on Amazon EMR Serverless, refer to Fine-grained access control in Amazon EMR Serverless with AWS Lake Formation.
The following components play critical roles in this solution design:
- Apache Iceberg OTF:
  - High-performance table format for large-scale analytics
  - Supports schema evolution, ACID transactions, and time travel
  - Compatible with Spark, Trino, Presto, and Flink
  - Amazon S3 Tables: fully managed Iceberg tables for analytics workloads
- AWS Lake Formation:
  - FGAC for data lakes
  - Column-, row-, and cell-level security controls
- Data mesh producers and consumers:
  - Producers: Create and serve domain-specific data products
  - Consumers: Access and integrate data products
  - Enables self-service data consumption
To demonstrate how you can use Lake Formation to implement cross-account FGAC within an EMR on EKS environment, we create tables in the AWS Glue Data Catalog in a central AWS account acting as the producer, and we provision different user personas to reflect various roles and access levels in a separate AWS account acting as multiple consumers. Consumers can be spread across multiple accounts in real-world scenarios.
The following diagram illustrates the high-level solution architecture.
To demonstrate cross-account data sharing and data filtering with Lake Formation FGAC, the solution deploys two different Iceberg tables with different access for different consumers. The permission mapping for consumers uses cross-account table shares and data cell filters.
The solution has two different teams with different levels of Lake Formation permissions to access the Patients and Claims Iceberg tables. The following table summarizes the solution's user personas.
| Persona/Table Name | Patients | Claims |
| --- | --- | --- |
| Patients Care Team (`team1`) | Filtered access (`ssn` column excluded; Texas and New York rows only) | Full table access |
| Claims Care Team (`team2`) | No access | Full table access |
Prerequisites
This solution requires an AWS account with an AWS Identity and Access Management (IAM) power user role that can create and interact with AWS services, including Amazon EMR, Amazon EKS, AWS Glue, Lake Formation, and Amazon Simple Storage Service (Amazon S3). Additional specific requirements for each account are detailed in the relevant sections.
Clone the project
To get started, download the project either to your computer or the AWS CloudShell console:
Set up infrastructure in the producer account
To set up the infrastructure in the producer account, you must have the following additional resources:
The setup script deploys the following infrastructure:
- An S3 bucket to store sample data in Iceberg table format, registered as a data location in Lake Formation
- An AWS Glue database named `healthcare_db`
- Two AWS Glue tables: the `patients` and `claims` Iceberg tables
- A Lake Formation data access IAM role
- Cross-account permissions enabled for the consumer account:
  - Allow the consumer to describe the database `healthcare_db` in the producer account
  - Allow access to the `patients` table using a data cell filter, based on a row-level filter on selected states and excluding the `ssn` column
  - Allow full table access to the `claims` table
Run the following `producer_iceberg_datalake_setup.sh` script to create a development environment in the producer account. Update its parameters according to your requirements:
Enable cross-account Lake Formation access in the producer account
A consumer account ID and an `EMR on EKS Engine` session tag must be set in the producer's environment. This allows the consumer to access the producer's AWS Glue tables governed by Lake Formation. Complete the following steps to enable cross-account access:
- Open the Lake Formation console in the producer account.
- Choose Application integration settings under Administration in the navigation pane.
- Select Allow external engines to filter data in Amazon S3 locations registered with Lake Formation.
- For Session tag values, enter `EMR on EKS Engine`.
- For AWS account IDs, enter your consumer account ID.
- Choose Save.
Figure 2: Producer account – Lake Formation third-party engine configuration screen with session tags, account IDs, and data access permissions.
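If you prefer to script these settings rather than use the console, the same change can be made through the Lake Formation API. The following boto3 sketch is illustrative only: the account IDs are placeholders, and it fetches the current data lake settings first so that unrelated settings are preserved.

```python
def build_settings_update(current_settings: dict,
                          consumer_account_id: str,
                          session_tag: str) -> dict:
    """Return a DataLakeSettings dict with external data filtering enabled."""
    settings = dict(current_settings)  # copy; keep unrelated settings intact
    settings["AllowExternalDataFiltering"] = True
    settings["ExternalDataFilteringAllowList"] = [
        {"DataLakePrincipalIdentifier": consumer_account_id}
    ]
    settings["AuthorizedSessionTagValueList"] = [session_tag]
    return settings

def enable_external_filtering(producer_account_id: str,
                              consumer_account_id: str) -> None:
    import boto3  # deferred so build_settings_update stays usable offline
    lf = boto3.client("lakeformation")
    current = lf.get_data_lake_settings(
        CatalogId=producer_account_id)["DataLakeSettings"]
    lf.put_data_lake_settings(
        CatalogId=producer_account_id,
        DataLakeSettings=build_settings_update(
            current, consumer_account_id, "EMR on EKS Engine"))
```

Calling `enable_external_filtering("<producer-account-id>", "<consumer-account-id>")` with real IDs applies the same configuration as the console steps above.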
Validate FGAC setup in the producer environment
To validate the FGAC setup in the producer account, check the Iceberg tables, the data filter, and the FGAC permission settings.
Iceberg tables
Two AWS Glue tables in Iceberg format were created by `producer_iceberg_datalake_setup.sh`. On the Lake Formation console, choose Tables under Data Catalog in the navigation pane to see the tables listed.
Figure 3: Lake Formation interface showing the claims and patients tables from healthcare_db in Apache Iceberg format.
The following screenshot shows an example of the patients table data.
The following screenshot shows an example of the claims table data.
Data cell filter on the patients table
After successfully running the `producer_iceberg_datalake_setup.sh` script, a new data cell filter named `patients_column_row_filter` was created in Lake Formation. This filter performs two functions:
- Exclude the `ssn` column from the `patients` table data
- Include only rows where the state is Texas or New York
To view the data cell filter, choose Data filters under Data Catalog in the navigation pane of the Lake Formation console, and open the filter. Choose View permissions to view the permission details.
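For reference, a filter like `patients_column_row_filter` can also be created programmatically with the Lake Formation CreateDataCellsFilter API. The following boto3 sketch is an illustration, not the setup script's actual code; the producer account ID is a placeholder, and the names come from the setup described in this post.

```python
def build_patients_filter(producer_account_id: str) -> dict:
    """Build the TableData payload for lakeformation.create_data_cells_filter."""
    return {
        "TableCatalogId": producer_account_id,
        "DatabaseName": "healthcare_db",
        "TableName": "patients",
        "Name": "patients_column_row_filter",
        # Row-level filter: keep only Texas and New York records.
        "RowFilter": {"FilterExpression": "state IN ('Texas', 'New York')"},
        # Column-level filter: every column except ssn.
        "ColumnWildcard": {"ExcludedColumnNames": ["ssn"]},
    }

def create_filter(producer_account_id: str) -> None:
    import boto3  # deferred so the payload builder stays testable offline
    boto3.client("lakeformation").create_data_cells_filter(
        TableData=build_patients_filter(producer_account_id))
```

The combination of `RowFilter` and `ColumnWildcard` in one filter is what gives cell-level security: consumers granted SELECT on this filter never see the excluded column or the filtered-out rows.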
FGAC permissions allowing cross-account access
To view all the FGAC permissions, choose Data permissions under Permissions in the navigation pane of the Lake Formation console, and filter by the database name `healthcare_db`.
Make sure to revoke the data permissions associated with the `IAMAllowedPrincipals` principal on the `healthcare_db` tables, because it will cause cross-account data sharing to fail, particularly with AWS Resource Access Manager (AWS RAM).
Figure 7: Lake Formation data permissions interface showing filtered healthcare database resources with granular access controls.
The following table summarizes the overall FGAC setup.
| Resource Type | Resource | Permissions | Grantable Permissions |
| --- | --- | --- | --- |
| Database | `healthcare_db` | Describe | Describe |
| Data Cell Filter | `patients_column_row_filter` | Select | Select |
| Table | `claims` | Select, Describe | Select, Describe |
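The grants in this table could equally be expressed with the Lake Formation GrantPermissions API. The following boto3 sketch illustrates that mapping; it is not the setup script's actual code, and the account IDs are placeholders.

```python
def build_grants(producer_account_id: str, consumer_account_id: str) -> list:
    """Build one grant_permissions payload per row of the FGAC summary table."""
    principal = {"DataLakePrincipalIdentifier": consumer_account_id}
    return [
        {   # Describe on the healthcare_db database
            "Principal": principal,
            "Resource": {"Database": {"CatalogId": producer_account_id,
                                      "Name": "healthcare_db"}},
            "Permissions": ["DESCRIBE"],
            "PermissionsWithGrantOption": ["DESCRIBE"],
        },
        {   # Select through the data cell filter on the patients table
            "Principal": principal,
            "Resource": {"DataCellsFilter": {
                "TableCatalogId": producer_account_id,
                "DatabaseName": "healthcare_db",
                "TableName": "patients",
                "Name": "patients_column_row_filter"}},
            "Permissions": ["SELECT"],
            "PermissionsWithGrantOption": ["SELECT"],
        },
        {   # Full access to the claims table
            "Principal": principal,
            "Resource": {"Table": {"CatalogId": producer_account_id,
                                   "DatabaseName": "healthcare_db",
                                   "Name": "claims"}},
            "Permissions": ["SELECT", "DESCRIBE"],
            "PermissionsWithGrantOption": ["SELECT", "DESCRIBE"],
        },
    ]

def apply_grants(producer_account_id: str, consumer_account_id: str) -> None:
    import boto3  # deferred so build_grants stays testable offline
    lf = boto3.client("lakeformation")
    for grant in build_grants(producer_account_id, consumer_account_id):
        lf.grant_permissions(**grant)
```

Granting with `PermissionsWithGrantOption` is what lets the consumer-account Lake Formation admin re-grant the shared resources to local IAM roles.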
Set up infrastructure in the consumer account
To set up the infrastructure in the consumer account, you must have the following additional resources:
- The eksctl and kubectl packages must be installed
- An IAM role in the consumer account must be a Lake Formation administrator to run the `consumer_emr_on_eks_setup.sh` script
- The Lake Formation admin must accept the AWS RAM resource share invitations on the AWS RAM console if the consumer account is outside of the producer's organization
The setup script deploys the following infrastructure:
- An EKS cluster called `fgac-blog` with two namespaces:
  - User namespace: `lf-fgac-user`
  - System namespace: `lf-fgac-secure`
- An EMR on EKS virtual cluster `emr-on-eks-fgac-blog`:
  - Set up with the security configuration `emr-on-eks-fgac-sec-conifg`
  - Two EMR on EKS job execution IAM roles:
    - Role for the Patients Care Team (`team1`): `emr_on_eks_fgac_job_team1_execution_role`
    - Role for the Claims Care Team (`team2`): `emr_on_eks_fgac_job_team2_execution_role`
  - A query engine IAM role used by the FGAC secure namespace: `emr_on_eks_fgac_query_execution_role`
- An S3 bucket to store PySpark job scripts and logs
- An AWS Glue local database named `consumer_healthcare_db`
- Two resource links to the cross-account shared AWS Glue tables: `rl_patients` and `rl_claims`
- Lake Formation permissions on the Amazon EMR IAM roles
Run the following `consumer_emr_on_eks_setup.sh` script to set up a development environment in the consumer account. Update the parameters according to your use case:
Enable cross-account Lake Formation access in the consumer account
The consumer account must add the consumer account ID with an `EMR on EKS Engine` session tag in Lake Formation. This session tag will be used by the EMR on EKS job execution IAM roles to access Lake Formation tables. Complete the following steps:
- Open the Lake Formation console in the consumer account.
- Choose Application integration settings under Administration in the navigation pane.
- Select Allow external engines to filter data in Amazon S3 locations registered with Lake Formation.
- For Session tag values, enter `EMR on EKS Engine`.
- For AWS account IDs, enter your consumer account ID.
- Choose Save.
Figure 9: Consumer account – Lake Formation third-party engine configuration screen with session tags, account IDs, and data access permissions.
Validate FGAC setup in the consumer environment
To validate the FGAC setup in the consumer account, check the EKS cluster, the namespaces, and the Spark job scripts used to test data permissions.
EKS cluster
On the Amazon EKS console, choose Clusters in the navigation pane and confirm the EKS cluster `fgac-blog` is listed.
Namespaces in Amazon EKS
Kubernetes uses namespaces as a logical partitioning mechanism for organizing objects such as Pods and Deployments. Namespaces also operate as a privilege boundary in the Kubernetes role-based access control (RBAC) system. Multi-tenant workloads in Amazon EKS can be secured using namespaces.
This solution creates two namespaces:
- `lf-fgac-user`
- `lf-fgac-secure`
The StartJobRun API uses backend workflows to submit a Spark job's user components (job runner, driver, executors) in the user namespace, and the corresponding system components in the system namespace, to accomplish the desired FGAC behaviors.
You can verify the namespaces with the following command:

```
kubectl get namespace
```

The following screenshot shows an example of the expected output.
Spark job script to test the Patients Care Team's data permissions
Starting with Amazon EMR version 6.6.0, you can use Spark on EMR on EKS with the Iceberg table format. For more information on how Iceberg works in an immutable data lake, see Build a high-performance, ACID compliant, evolving data lake using Apache Iceberg on Amazon EMR.
The following script is a snippet of the PySpark job that retrieves filtered data from the Patients and Claims tables:
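As a minimal sketch of that job's core logic (the app name and query text are assumptions; the database and resource link names come from the consumer setup script), the job simply queries the resource links and lets Lake Formation do the filtering:

```python
# team1 job sketch: with Lake Formation FGAC, the job reads through the
# consumer-side resource links; row and column filtering is applied
# transparently by the secure system components.
PATIENTS_QUERY = "SELECT * FROM consumer_healthcare_db.rl_patients"
CLAIMS_QUERY = "SELECT * FROM consumer_healthcare_db.rl_claims"

def make_session():
    # pyspark is provided by the EMR on EKS runtime when the job is submitted
    from pyspark.sql import SparkSession
    return SparkSession.builder.appName("team1-fgac-job").getOrCreate()

def run_job(spark):
    # Returns only Texas and New York rows, without the ssn column,
    # because access is granted through the patients_column_row_filter.
    spark.sql(PATIENTS_QUERY).show(truncate=False)
    # team1 has full table access on claims.
    spark.sql(CLAIMS_QUERY).show(truncate=False)
```

Note that the script contains no filtering logic of its own; the FGAC policies travel with the grants, not with the job code.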
Spark job script to test the Claims Care Team's data permissions
The following script is a snippet of the PySpark job that retrieves data from the Claims table:
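Again as a minimal sketch (not the project's exact script, and the query text is an assumption): the job reads the claims resource link and, to demonstrate FGAC, also attempts the patients table, which should fail for `team2`:

```python
# team2 job sketch: claims succeeds; the patients query (for which team2
# has no Lake Formation grant) is expected to raise an access-denied error
# surfaced through Spark.
CLAIMS_QUERY = "SELECT * FROM consumer_healthcare_db.rl_claims"
PATIENTS_QUERY = "SELECT * FROM consumer_healthcare_db.rl_patients"

def run_job(spark):
    spark.sql(CLAIMS_QUERY).show(truncate=False)  # allowed: full table access
    try:
        spark.sql(PATIENTS_QUERY).show(truncate=False)  # expected to fail
    except Exception as err:  # surfaced as an access-denied style error
        print(f"patients query denied as expected: {err}")
```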
Validate job execution roles for EMR on EKS
The Patients Care Team uses the `emr_on_eks_fgac_job_team1_execution_role` IAM role to run a PySpark job on EMR on EKS. This job execution role has permission to query both the Patients and Claims tables.
The Claims Care Team uses the `emr_on_eks_fgac_job_team2_execution_role` IAM role to run jobs on EMR on EKS. This job execution role only has permission to access Claims data.
Both IAM job execution roles have the following permissions:
The following code is the job execution IAM role trust policy:
The following code is the query engine IAM role policy (`emr_on_eks_fgac_query_execution_role-policy`):
The following code is the query engine IAM role trust policy:
Run PySpark jobs on EMR on EKS with FGAC
For more details about how to work with Iceberg tables in EMR on EKS jobs, refer to Using Apache Iceberg with Amazon EMR on EKS. Complete the following steps to run the PySpark jobs on EMR on EKS with FGAC:
- Run the following commands to run the patients and claims jobs:
- Watch the application logs from the Spark driver pod:

```
kubectl logs driver-pod-name -c spark-kubernetes-driver -n lf-fgac-user -f
```

Alternatively, you can navigate to the Amazon EMR console, open your virtual cluster, and choose the open icon next to the job to open the Spark UI and monitor the job progress.
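For reference, the submission commands boil down to the `emr-containers` StartJobRun API. The following boto3 sketch shows a hypothetical submission for the `team1` job; the bucket, virtual cluster ID, script name, and release label are placeholders, while the role name comes from the consumer setup script.

```python
def build_job_run(virtual_cluster_id: str, account_id: str, bucket: str) -> dict:
    """Build a StartJobRun payload for the team1 patients job (sketch)."""
    return {
        "name": "team1-patients-job",
        "virtualClusterId": virtual_cluster_id,
        "executionRoleArn": (
            f"arn:aws:iam::{account_id}:role/"
            "emr_on_eks_fgac_job_team1_execution_role"
        ),
        # Lake Formation FGAC requires Amazon EMR 7.7 or higher
        "releaseLabel": "emr-7.7.0-latest",
        "jobDriver": {
            "sparkSubmitJobDriver": {
                # Hypothetical script name uploaded by the setup script
                "entryPoint": f"s3://{bucket}/scripts/patients_team_job.py",
            }
        },
    }

def submit(virtual_cluster_id: str, account_id: str, bucket: str) -> str:
    import boto3  # deferred so the payload builder stays testable offline
    return boto3.client("emr-containers").start_job_run(
        **build_job_run(virtual_cluster_id, account_id, bucket))["id"]
```

Because the FGAC behavior is attached to the virtual cluster's security configuration and the execution role's grants, the submission itself needs no FGAC-specific flags.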
View PySpark job output on EMR on EKS with FGAC
In Amazon S3, navigate to the Spark output logs folder:
The Patients Care Team PySpark job has query access to the Patients and Claims tables. The Patients table output excludes the `ssn` column and only shows Texas and New York records, as specified in our FGAC setup.
The following screenshot shows the Claims table records for Texas and New York.
The following screenshot shows the Patients table without the `ssn` column.
Similarly, navigate to the Spark output log folder for the Claims Care Team job:
As shown in the following screenshot, the Claims Care Team only has access to the Claims table, so when the job tried to access the Patients table, it received an access denied error.
Considerations and limitations
Although the approach discussed in this post provides useful insights and practical implementation strategies, it's important to acknowledge the key considerations and limitations before you start using this feature. To learn more about using EMR on EKS with Lake Formation, refer to How Amazon EMR on EKS works with AWS Lake Formation.
Clean up
To avoid incurring future charges, delete the generated resources if you no longer need the solution. Run the following cleanup scripts (change the AWS Region if necessary).
Run the following script in the consumer account:
Run the following script in the producer account:
Conclusion
In this post, we demonstrated how to integrate Lake Formation with EMR on EKS to implement fine-grained access control on Iceberg tables. This integration offers organizations a modern approach to enforcing detailed data permissions within a multi-account open data lake environment. By centralizing data management in a primary account and carefully regulating user access in secondary accounts, this strategy can simplify governance and enhance security.
For more information about Amazon EMR 7.7 in relation to EMR on EKS, see Amazon EMR on EKS 7.7.0 releases. To learn more about using Lake Formation with EMR on EKS, see Enable Lake Formation with Amazon EMR on EKS.
We encourage you to explore this solution for your specific use cases and share your feedback and questions in the comments section.