
Modernize Amazon Redshift authentication by migrating user management to AWS IAM Identity Center


Amazon Redshift is a powerful cloud-based data warehouse that organizations can use to analyze both structured and semi-structured data through advanced SQL queries. As a fully managed service, it provides high performance and scalability while allowing secure access to the data stored in the data warehouse. Organizations worldwide rely on Amazon Redshift to handle massive datasets, enhance their analytics capabilities, and deliver valuable business intelligence to their stakeholders.

AWS IAM Identity Center serves as the recommended service for controlling workforce access to AWS applications, including Amazon Q Developer. It allows a single connection to your existing identity provider (IdP), creating a unified view of users across AWS applications and applying trusted identity propagation for a smooth and consistent experience.

You can access data in Amazon Redshift using local users or external users. A local user in Amazon Redshift is a database user account that is created and managed directly within the Redshift cluster itself. Amazon Redshift also integrates with IAM Identity Center and supports trusted identity propagation, so you can use third-party IdPs such as Microsoft Entra ID (Azure AD), Okta, Ping, or OneLogin, or use IAM Identity Center as an identity source. The IAM Identity Center integration with Amazon Redshift supports centralized authentication and SSO capabilities, simplifying access management across multi-account environments. As organizations grow in scale, we recommend using external users for cross-service integration and centralized access management.

In this post, we walk you through the process of smoothly migrating your local Redshift user management to IAM Identity Center users and groups using the RedshiftIDCMigration utility.

Solution overview

The following diagram illustrates the solution architecture.

The RedshiftIDCMigration utility accelerates the migration of your local Redshift users, groups, and roles to your IAM Identity Center instance by performing the following actions:

  • Create users in IAM Identity Center for every local user in a given Redshift instance.
  • Create groups in IAM Identity Center for every group or role in a given Redshift instance.
  • Assign users to groups in IAM Identity Center according to the existing assignments in the Redshift instance.
  • Create IAM Identity Center roles in the Redshift instance matching the groups created in IAM Identity Center.
  • Grant permissions to IAM Identity Center roles in the Redshift instance based on the current permissions given to local groups and roles.
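
Before you run the utility, it can help to inventory the local principals it will operate on. The following queries are a minimal sketch that uses standard Redshift system catalogs and RBAC system views (pg_user, pg_group, svv_roles, svv_user_grants); adjust the filters to exclude any federated or service users in your environment.

-- Local database users
select usename as local_user from pg_user order by usename;

-- Local groups and their member lists
select groname as local_group, grolist from pg_group order by groname;

-- Local roles
select role_name as local_role from svv_roles order by role_name;

-- Roles granted to users
select user_name, role_name from svv_user_grants order by user_name;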

Prerequisites

Before running the utility, complete the following prerequisites:

  1. Enable IAM Identity Center in your account.
  2. Follow the steps in the post Integrate Identity Provider (IdP) with Amazon Redshift Query Editor V2 and SQL Client using AWS IAM Identity Center for seamless Single Sign-On (specifically, follow Steps 1–8, skipping Steps 4 and 6).
  3. Configure the IAM Identity Center application assignments:
    1. On the IAM Identity Center console, choose Application Assignments and Applications.
    2. Select your application, and on the Actions dropdown menu, choose Edit details.
    3. For User and group assignments, choose Do not require assignments. This setting makes it possible to test Amazon Redshift connectivity without configuring specific data access permissions.
  4. Configure IAM Identity Center authentication with administrative access from either Amazon Elastic Compute Cloud (Amazon EC2) or AWS CloudShell.

The utility can be run from either an EC2 instance or CloudShell. If you're using an EC2 instance, an IAM role is attached to the instance. Make sure the IAM role used during execution has the following permissions (if not, create a new policy with these permissions and attach it to the IAM role):

  • Amazon Redshift permissions (for serverless):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "redshift-serverless:GetCredentials",
                "redshift-serverless:GetNamespace",
                "redshift-serverless:GetWorkgroup"
            ],
            "Resource": [
                "arn:aws:redshift-serverless:${region}:${account-id}:namespace/${namespace-id}",
                "arn:aws:redshift-serverless:${region}:${account-id}:workgroup/${workgroup-id}"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "redshift-serverless:ListNamespaces",
                "redshift-serverless:ListWorkgroups"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": [
                "redshift:CreateClusterUser",
                "redshift:JoinGroup",
                "redshift:GetClusterCredentials",
                "redshift:ExecuteQuery",
                "redshift:FetchResults",
                "redshift:DescribeClusters",
                "redshift:DescribeTable"
            ],
            "Resource": [
                "arn:aws:redshift:${region}:${account-id}:cluster:redshift-serverless-${workgroup-name}",
                "arn:aws:redshift:${region}:${account-id}:dbgroup:redshift-serverless-${workgroup-name}/${dbgroup}",
                "arn:aws:redshift:${region}:${account-id}:dbname:redshift-serverless-${workgroup-name}/${dbname}",
                "arn:aws:redshift:${region}:${account-id}:dbuser:redshift-serverless-${workgroup-name}/${dbuser}"
            ]
        }
    ]
}

  • Amazon Redshift permissions (for provisioned):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "redshift:GetClusterCredentials",
            "Resource": [
                "arn:aws:redshift:${region}:${account-id}:dbname:${cluster-name}/${dbname}",
                "arn:aws:redshift:${region}:${account-id}:dbuser:${cluster-name}/${dbuser}"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "redshift:DescribeClusters",
                "redshift:ExecuteQuery",
                "redshift:FetchResults",
                "redshift:DescribeTable"
            ],
            "Resource": "*"
        }
    ]
}

  • Amazon S3 permissions (for the unload location):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:GetEncryptionConfiguration",
                "s3:ListBucket",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::${s3_bucket_name}/*",
                "arn:aws:s3:::${s3_bucket_name}"
            ]
        }
    ]
}

  • Identity store permissions:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "identitystore:*",
            "Resource": [
                "arn:aws:identitystore:::group/*",
                "arn:aws:identitystore:::user/*",
                "arn:aws:identitystore::${account_id}:identitystore/${identity_store_id}",
                "arn:aws:identitystore:::membership/*"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "identitystore:*",
            "Resource": [
                "arn:aws:identitystore:::membership/*",
                "arn:aws:identitystore:::user/*",
                "arn:aws:identitystore:::group/*"
            ]
        }
    ]
}

Artifacts

Download the following utility artifacts from the GitHub repo:

  • idc_redshift_unload_indatabase_groups_roles_users.py – A Python script to unload users, groups, roles, and their associations.
  • redshift_unload.ini – The config file used by the preceding script to read the Redshift data warehouse details and the Amazon S3 locations to unload the files to.
  • idc_add_users_groups_roles_psets.py – A Python script to create users and groups in IAM Identity Center, and then associate the users with the groups in IAM Identity Center.
  • idc_config.ini – The config file used by the preceding script to read the IAM Identity Center details.
  • vw_local_ugr_to_idc_urgr_priv.sql – A script that generates SQL statements that perform two tasks in Amazon Redshift:
    • Create roles that exactly match your IAM Identity Center group names, adding a specified prefix.
    • Grant appropriate permissions to these newly created Redshift roles.

Testing scenario

This test case is designed to provide hands-on experience and familiarize you with the utility's functionality. The scenario is structured around a hierarchy of nested roles, starting with object-level permissions assigned to technical roles. These technical roles are then granted to business roles. Finally, business roles are granted to individual users. To round out the testing environment, the scenario also includes a user group. The following diagram illustrates this hierarchy.

Create datasets

Set up two separate schemas (tickit and tpcds) in a Redshift database using the CREATE SCHEMA command. Then create and populate several tables in each schema using the tickit and tpcds sample datasets.

Specify the appropriate IAM role Amazon Resource Name (ARN) in the COPY commands if necessary.
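
The following is a minimal sketch for loading one tickit table, assuming the standard tickit venue table definition; the S3 path and IAM role ARN are placeholders to replace with your own values.

create schema if not exists tickit;
create schema if not exists tpcds;

create table if not exists tickit.venue(
    venueid smallint not null,
    venuename varchar(100),
    venuecity varchar(30),
    venuestate char(2),
    venueseats integer);

copy tickit.venue
from 's3://your-sample-data-bucket/tickit/venue_pipe.txt'      -- placeholder S3 path
iam_role 'arn:aws:iam::111122223333:role/YourRedshiftCopyRole' -- placeholder IAM role ARN
delimiter '|';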

Create users

Create users with the following code:

-- ETL users
create user etl_user_1 password 'EtlUser1!';
create user etl_user_2 password 'EtlUser2!';
create user etl_user_3 password 'EtlUser3!';

-- Reporting users
create user reporting_user_1 password 'ReportingUser1!';
create user reporting_user_2 password 'ReportingUser2!';
create user reporting_user_3 password 'ReportingUser3!';

-- Adhoc users
create user adhoc_user_1 password 'AdhocUser1!';
create user adhoc_user_2 password 'AdhocUser2!';

-- Analyst users
create user analyst_user_1 password 'AnalystUser1!';

Create business roles

Create business roles with the following code:

-- ETL business roles
create role role_bn_etl_tickit;
create role role_bn_etl_tpcds;

-- Reporting business roles
create role role_bn_reporting_tickit;
create role role_bn_reporting_tpcds;

-- Analyst business roles
create role role_bn_analyst_tickit;

Create technical roles

Create technical roles with the following code:

-- Technical roles for tickit schema
create role role_tn_sel_tickit;
create role role_tn_dml_tickit;
create role role_tn_cte_tickit;

-- Technical roles for tpcds schema
create role role_tn_sel_tpcds;
create role role_tn_dml_tpcds;
create role role_tn_cte_tpcds;

Create groups

Create groups with the following code:

-- Adhoc users group
create group group_adhoc;

Grant rights to technical roles

To grant rights to the technical roles, use the following code:

-- role_tn_sel_tickit
grant usage on schema tickit to role role_tn_sel_tickit;
grant select on all tables in schema tickit to role role_tn_sel_tickit;

-- role_tn_dml_tickit
grant usage on schema tickit to role role_tn_dml_tickit;
grant insert, update, delete on all tables in schema tickit to role role_tn_dml_tickit;

-- role_tn_cte_tickit
grant usage, create on schema tickit to role role_tn_cte_tickit;
grant drop on all tables in schema tickit to role role_tn_cte_tickit;

-- role_tn_sel_tpcds
grant usage on schema tpcds to role role_tn_sel_tpcds;
grant select on all tables in schema tpcds to role role_tn_sel_tpcds;

-- role_tn_dml_tpcds
grant usage on schema tpcds to role role_tn_dml_tpcds;
grant insert, update, delete on all tables in schema tpcds to role role_tn_dml_tpcds;

-- role_tn_cte_tpcds
grant usage, create on schema tpcds to role role_tn_cte_tpcds;
grant drop on all tables in schema tpcds to role role_tn_cte_tpcds;

Grant technical roles to business roles

To grant the technical roles to the business roles, use the following code:

-- Business role role_bn_etl_tickit
grant role role_tn_sel_tickit to role role_bn_etl_tickit;
grant role role_tn_dml_tickit to role role_bn_etl_tickit;
grant role role_tn_cte_tickit to role role_bn_etl_tickit;

-- Business role role_bn_etl_tpcds
grant role role_tn_sel_tpcds to role role_bn_etl_tpcds;
grant role role_tn_dml_tpcds to role role_bn_etl_tpcds;
grant role role_tn_cte_tpcds to role role_bn_etl_tpcds;

-- Business role role_bn_reporting_tickit
grant role role_tn_sel_tickit to role role_bn_reporting_tickit;

-- Business role role_bn_reporting_tpcds
grant role role_tn_sel_tpcds to role role_bn_reporting_tpcds;

-- Business role role_bn_analyst_tickit
grant role role_tn_sel_tickit to role role_bn_analyst_tickit;

Grant business roles to users

To grant the business roles to users, use the following code:

-- etl_user_1
grant role role_bn_etl_tickit to etl_user_1;

-- etl_user_2
grant role role_bn_etl_tpcds to etl_user_2;

-- etl_user_3
grant role role_bn_etl_tickit to etl_user_3;
grant role role_bn_etl_tpcds to etl_user_3;

-- reporting_user_1
grant role role_bn_reporting_tickit to reporting_user_1;

-- reporting_user_2
grant role role_bn_reporting_tpcds to reporting_user_2;

-- reporting_user_3
grant role role_bn_reporting_tickit to reporting_user_3;
grant role role_bn_reporting_tpcds to reporting_user_3;

-- analyst_user_1
grant role role_bn_analyst_tickit to analyst_user_1;

Grant rights to groups

To grant rights to the groups, use the following code:

-- Group group_adhoc
grant usage on schema tickit to group group_adhoc;
grant select on all tables in schema tickit to group group_adhoc;

grant usage on schema tpcds to group group_adhoc;
grant select on all tables in schema tpcds to group group_adhoc;

Add users to groups

To add users to the groups, use the following code:

alter group group_adhoc add user adhoc_user_1;
alter group group_adhoc add user adhoc_user_2;

Deploy the solution

Complete the following steps to deploy the solution:

  1. Update the Redshift cluster or serverless endpoint details and the Amazon S3 location in redshift_unload.ini:
    • cluster_type = provisioned or serverless
    • cluster_id = ${cluster_identifier} (required if cluster_type is provisioned)
    • db_user = ${database_user}
    • db_name = ${database_name}
    • host = ${host_url} (required if cluster_type is provisioned)
    • port = ${port_number}
    • workgroup_name = ${workgroup_name} (required if cluster_type is serverless)
    • region = ${region}
    • s3_bucket = ${S3_bucket_name}
    • roles = roles.csv
    • users = users.csv
    • role_memberships = role_memberships.csv
  2. Update the IAM Identity Center details in idc_config.ini:
    • region = ${region}
    • account_id = ${account_id}
    • identity_store_id = ${identity_store_id} (available on the IAM Identity Center console Settings page)
    • instance_arn = ${iam_identity_center_instance_arn} (available on the IAM Identity Center console Settings page)
    • permission_set_arn = ${permission_set_arn}
    • assign_permission_set = True or False (True if permission_set_arn is defined)
    • s3_bucket = ${S3_bucket_name}
    • users_file = users.csv
    • roles_file = roles.csv
    • role_memberships_file = role_memberships.csv
  3. Create a directory in CloudShell or on your own EC2 instance with connectivity to Amazon Redshift.
  4. Copy the two .ini files and download the Python scripts to that directory.
  5. Run idc_redshift_unload_indatabase_groups_roles_users.py from either CloudShell or your EC2 instance:

    python idc_redshift_unload_indatabase_groups_roles_users.py

  6. Run idc_add_users_groups_roles_psets.py from either CloudShell or your EC2 instance:

    python idc_add_users_groups_roles_psets.py

  7. Connect to your Redshift cluster using the Amazon Redshift query editor v2 or your preferred SQL client, using superuser credentials.
  8. Copy the SQL in the vw_local_ugr_to_idc_urgr_priv.sql file and run it in the query editor to create the vw_local_ugr_to_idc_urgr_priv view.
  9. Run the following SQL command to generate the SQL statements for creating roles and granting permissions:
    select existing_grants, idc_based_grants from vw_local_ugr_to_idc_urgr_priv;

    For example, consider the following existing grants:

    CREATE GROUP "group_adhoc";
    CREATE ROLE "role_bn_etl_tickit";
    GRANT USAGE ON SCHEMA tpcds TO ROLE "role_tn_sel_tpcds";

    These grants are converted to the following code:

    CREATE ROLE "AWSIDC:group_adhoc";
    CREATE ROLE "AWSIDC:role_bn_etl_tickit";
    GRANT USAGE ON SCHEMA tpcds TO ROLE "AWSIDC:role_tn_sel_tpcds";

  10. Review the statements in the idc_based_grants column.
    This might not be a comprehensive list of permissions, so review them carefully.
  11. If everything is correct, run the statements from the SQL client.

When you have completed the process, you should have the following configuration:

  • IAM Identity Center now contains the newly created users from Amazon Redshift
  • The Redshift local groups and roles are created as groups in IAM Identity Center
  • New roles are established in Amazon Redshift, corresponding to the groups created in IAM Identity Center
  • The newly created Redshift roles are assigned the appropriate permissions (see the verification sketch after this list)
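
To spot-check the in-database part of this configuration, you can query the Redshift RBAC system views. The following is a minimal sketch that assumes the AWSIDC: prefix used in the preceding examples; adjust it if your IAM Identity Center application uses a different identity namespace.

-- Roles created from IAM Identity Center groups
select role_name from svv_roles where role_name like 'AWSIDC:%' order by role_name;

-- Role-to-role grants involving the migrated roles
select role_name, granted_role_name from svv_role_grants
where role_name like 'AWSIDC:%' or granted_role_name like 'AWSIDC:%';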

If you encounter an issue while connecting to Amazon Redshift with the query editor using IAM Identity Center, refer to Troubleshooting connections from Amazon Redshift query editor v2.

Considerations

Consider the following when using this solution:

  • At the time of writing, creating permissions in AWS Lake Formation is not in scope.
  • IAM Identity Center and IdP integration setup is out of scope for this utility. However, you can use the view created by vw_local_ugr_to_idc_urgr_priv.sql to create roles and grant permissions to the IdP users and groups passed through IAM Identity Center.
  • If you have permissions granted directly to local user IDs (not using groups or roles), you must switch to a role-based permission approach for IAM Identity Center integration. Create roles and grant permissions through roles instead of granting permissions directly to users, as in the sketch after this list.
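
The following is a hypothetical sketch of that conversion; the user, role, and table names are examples only.

-- Before: a privilege granted directly to a local user
-- grant select on tickit.venue to adhoc_user_3;

-- After: move the privilege to a role, then grant the role to the user
create role role_tn_sel_venue;
grant select on tickit.venue to role role_tn_sel_venue;
grant role role_tn_sel_venue to adhoc_user_3;
revoke select on tickit.venue from adhoc_user_3;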

Clean up

When you have completed the testing scenario, clean up your environment:

  1. Remove the new Redshift roles that were created by the utility, corresponding to the groups established in IAM Identity Center.
  2. Delete the users and groups created by the utility within IAM Identity Center.
  3. Delete the users, groups, and roles specified in the testing scenario.
  4. Drop the tickit and tpcds schemas.

You can use the FORCE parameter when dropping the roles to remove associated assignments, as in the following sketch.
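
The following commands are a minimal sketch of the in-database cleanup for this testing scenario; extend the pattern to the remaining roles, users, and groups, and note that you might need to revoke remaining privileges or transfer object ownership before a user can be dropped.

-- Drop migrated and scenario roles (FORCE removes their assignments)
drop role "AWSIDC:group_adhoc" force;
drop role role_bn_etl_tickit force;
drop role role_tn_sel_tickit force;

-- Drop the scenario group and users
drop group group_adhoc;
drop user adhoc_user_1;
drop user etl_user_1;

-- Drop the sample schemas and their tables
drop schema tickit cascade;
drop schema tpcds cascade;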

Conclusion

In this post, we showed how to migrate your Redshift local user management to IAM Identity Center. This transition offers several key advantages for your organization, such as simplified access management through centralized user and group administration, a streamlined user experience across AWS services, and reduced administrative overhead. You can implement this migration process step by step, so you can test and validate each step before fully transitioning your production environment.

As organizations continue to scale their AWS infrastructure, using IAM Identity Center becomes increasingly valuable for maintaining secure and efficient access management, including with Amazon SageMaker Unified Studio for an integrated experience for all your data and AI.


About the authors

Ziad Wali

Ziad is an Analytics Specialist Solutions Architect at AWS. He has over 10 years of experience in databases and data warehousing, where he enjoys building reliable, scalable, and efficient solutions. Outside of work, he enjoys sports and spending time in nature.

Satesh Sonti

Satesh is a Sr. Analytics Specialist Solutions Architect based out of Atlanta, specializing in building enterprise data platforms, data warehousing, and analytics solutions. He has over 19 years of experience in building data assets and leading complex data platform programs for banking and insurance clients across the globe.

Maneesh Sharma

Maneesh is a Senior Database Engineer at AWS with more than a decade of experience designing and implementing large-scale data warehouse and analytics solutions. He collaborates with various Amazon Redshift partners and customers to drive better integration.

Sumanth Punyamurthula

Sumanth is a Senior Data and Analytics Architect at AWS with more than 20 years of experience in leading large analytical initiatives, including analytics, data warehouses, data lakes, data governance, security, and cloud infrastructure across the travel, hospitality, financial, and healthcare industries.
