
Secure access to a cross-account Amazon MSK cluster from Amazon MSK Connect using IAM authentication


Amazon Managed Streaming for Apache Kafka (Amazon MSK) Connect is a fully managed, scalable, and highly available service that enables the streaming of data between Apache Kafka and other data systems. Amazon MSK Connect is built on top of Kafka Connect, an open-source framework that provides a standard way to connect Kafka with external data systems. Kafka Connect supports a variety of connectors, which are used to stream data in and out of Kafka. MSK Connect extends the capabilities of Kafka Connect by providing a managed service with added security features, simple configuration, and automatic scaling capabilities, enabling businesses to focus on their data streaming needs without the overhead of managing the underlying infrastructure.

In some use cases, you might need to use an MSK cluster in one AWS account while MSK Connect is located in a separate account. In this post, we demonstrate how to create a connector for this use case. At the time of writing, MSK Connect connectors can be created only for MSK clusters that have AWS Identity and Access Management (IAM) role-based authentication or no authentication. We demonstrate how to implement IAM authentication after establishing network connectivity. IAM provides enhanced security measures, making sure your systems are protected against unauthorized access.

Solution overview

The connector can be configured for a variety of purposes, such as sinking data to an Amazon Simple Storage Service (Amazon S3) bucket, tracking source database changes, or serving as a migration tool such as MirrorMaker2 on MSK Connect to transfer data from a source cluster to a target cluster located in a different account.

The following diagram illustrates a use case using Debezium and Amazon S3 source connectors.

The following diagram illustrates using the S3 Sink connector and migration to a cross-account failover cluster using a MirrorMaker connector deployed on MSK Connect.


The launch of multi-VPC private connectivity (powered by AWS PrivateLink) and cluster policy support for MSK clusters simplifies the connectivity of Kafka clients to brokers. By enabling this feature on the MSK cluster, you can use the cluster-based policy to manage all access control centrally in one place. In this post, we cover the process of enabling this feature on the source MSK cluster.

We don’t fully utilize the multi-VPC connectivity provided by this new feature, because that requires you to use different bootstrap URLs with port numbers (14001-14003) that aren’t supported by MSK Connect as of this writing. Instead, we explore a secure network connectivity solution that uses private connectivity patterns, as detailed in How Goldman Sachs builds cross-account connectivity to their Amazon MSK clusters with AWS PrivateLink.

Connecting to a cross-account MSK cluster from MSK Connect involves the following steps.

Steps to configure the MSK cluster in Account A:

  1. Enable the multi-VPC private connectivity (PrivateLink) feature for the IAM authentication scheme that’s enabled on your MSK cluster.
  2. Configure the cluster policy to allow a cross-account connector.
  3. Implement one of the preceding network connectivity patterns according to your use case to establish connectivity with the Account B VPC, and make network changes accordingly.

Steps to configure the MSK connector in Account B:

  1. Create an MSK connector in private subnets using the AWS Command Line Interface (AWS CLI).
  2. Verify the network connectivity from Account A and make network changes accordingly.
  3. Check the destination service to verify the incoming data.

Prerequisites

To follow along with this post, you should have an MSK cluster in one AWS account and MSK Connect in a separate account.

Set up the MSK cluster in Account A

In this post, we only show the important steps that are required to enable the multi-VPC feature on an MSK cluster:

  1. Create a provisioned MSK cluster in Account A’s VPC with the following considerations, which are required for the multi-VPC feature:
    • The cluster version must be 2.7.1 or higher.
    • The instance type must be m5.large or higher.
    • Authentication should be IAM (you must not enable unauthenticated access for this cluster).
  2. After you create the cluster, go to the Networking settings section of your cluster and choose Edit. Then choose Turn on multi-VPC connectivity.
  3. Select IAM role-based authentication and choose Turn on selection.

It will take around 30 minutes to enable. This step is required to enable the cluster policy feature that allows the cross-account connector to access the MSK cluster.

  4. After it has been enabled, scroll down to Security settings and choose Edit cluster policy.
  5. Define your cluster policy and choose Save changes.

The new cluster policy lets you define a Basic or Advanced policy. The Basic option only allows the CreateVpcConnection, GetBootstrapBrokers, DescribeCluster, and DescribeClusterV2 actions that are required for creating the cross-VPC connectivity to your cluster. However, we have to use Advanced to allow the additional actions that are required by the MSK connector. The policy should be as follows:
    {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {
                "AWS": "<connector-account-id>"
            },
            "Action": [
                "kafka:CreateVpcConnection",
                "kafka:GetBootstrapBrokers",
                "kafka:DescribeCluster",
                "kafka:DescribeClusterV2",
                "kafka-cluster:Connect",
                "kafka-cluster:DescribeCluster",
                "kafka-cluster:ReadData",
                "kafka-cluster:DescribeTopic",
                "kafka-cluster:WriteData",
                "kafka-cluster:CreateTopic",
                "kafka-cluster:AlterGroup",
                "kafka-cluster:DescribeGroup"
            ],
            "Resource": [
                "arn:aws:kafka:<region>:<account-id>:cluster/<cluster-name>/<cluster-uuid>",
                "arn:aws:kafka:<region>:<account-id>:topic/<cluster-name>/<cluster-uuid>/<topic-name>",
                "arn:aws:kafka:<region>:<account-id>:group/<cluster-name>/<cluster-uuid>/<group-name>"
            ]
        }]
    }
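If you prefer to script these cluster-side steps, the AWS CLI supports them as well. The following is a minimal sketch, assuming placeholder values such as <cluster-arn> and that the preceding policy is saved locally as cluster-policy.json:

    # Look up the cluster's current version (required by update-connectivity)
    aws kafka describe-cluster-v2 --cluster-arn <cluster-arn> --query 'ClusterInfo.CurrentVersion'

    # Turn on multi-VPC private connectivity for the IAM authentication scheme
    aws kafka update-connectivity \
        --cluster-arn <cluster-arn> \
        --current-version <current-version> \
        --connectivity-info '{"VpcConnectivity": {"ClientAuthentication": {"Sasl": {"Iam": {"Enabled": true}}}}}'

    # Attach the cluster policy defined above
    aws kafka put-cluster-policy \
        --cluster-arn <cluster-arn> \
        --policy file://cluster-policy.json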

You might need to modify these permissions to limit access to your resources (topics, groups). Also, you can restrict access to a specific connector by specifying the connector’s IAM role as the principal, or you can specify the account number to allow all connectors in that account, as sketched below.
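For example, instead of trusting the entire connector account, the Principal element of the cluster policy can be scoped to the connector’s service execution role; the role name below is a hypothetical placeholder:

    "Principal": {
        "AWS": "arn:aws:iam::<connector-account-id>:role/<connector-service-execution-role>"
    }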

Now the cluster is ready. However, you need to make sure of the network connectivity between the cross-account connector VPC and the MSK cluster VPC.

If you’re using VPC peering or AWS Transit Gateway when connecting to MSK Connect, whether from a cross-account or the same account, don’t configure your connector to reach the peered VPC resources with IPs in the following CIDR ranges (for more details, see Connecting from connectors):

  • 10.99.0.0/16
  • 192.168.0.0/16
  • 172.21.0.0/16

In the MSK cluster security group, make sure you allow port 9098 from Account B’s network resources, and make changes to the subnets according to your network connectivity pattern, as in the following sketch.
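Here is a minimal sketch of the inbound rule with the AWS CLI; the security group ID and the Account B CIDR are placeholders:

    # Allow IAM-authenticated Kafka traffic (port 9098) from Account B's connector subnets
    aws ec2 authorize-security-group-ingress \
        --group-id <msk-cluster-security-group-id> \
        --protocol tcp \
        --port 9098 \
        --cidr <account-b-vpc-cidr>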

Set up the MSK connector in Account B

In this section, we demonstrate how to use the S3 Sink connector. However, you can use a different connector according to your use case and make the changes accordingly.

  1. Create an S3 bucket (or use an existing bucket).
  2. Make sure the VPC that you’re using in this account has a security group and private subnets. If your connector for MSK Connect needs access to the internet, refer to Enable internet access for Amazon MSK Connect.
  3. Verify the network connectivity between Account A and Account B by using the telnet command to the broker endpoints with port 9098.
  4. Create an S3 VPC endpoint.
  5. Create a connector plugin according to your connector plugin provider (Confluent or Lenses). Make a note of the custom plugin Amazon Resource Name (ARN) to use in a later step.
  6. Create an IAM role for your connector to allow access to your S3 bucket and the MSK cluster.
    • The IAM role’s trust relationship should be as follows:
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Principal": {
                      "Service": "kafkaconnect.amazonaws.com"
                  },
                  "Action": "sts:AssumeRole"
              }
          ]
      }

    • Add the following S3 access policy to your IAM role:
      {
          "Version": "2012-10-17",
          "Statement": [{
              "Effect": "Allow",
              "Action": [
                  "s3:ListAllMyBuckets",
                  "s3:ListBucket",
                  "s3:GetBucketLocation",
                  "s3:DeleteObject",
                  "s3:PutObject",
                  "s3:GetObject",
                  "s3:AbortMultipartUpload",
                  "s3:ListMultipartUploadParts",
                  "s3:ListBucketMultipartUploads"
              ],
              "Resource": [
                  "arn:aws:s3:::<bucket-name>",
                  "arn:aws:s3:::<bucket-name>/*"
              ],
              "Condition": {
                  "StringEquals": {
                      "aws:SourceVpc": "vpc-xxxx"
                  }
              }
          }]
      }

    • The following policy contains the actions required by the connector:
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": [
                      "kafka-cluster:Connect",
                      "kafka-cluster:DescribeCluster",
                      "kafka-cluster:ReadData",
                      "kafka-cluster:DescribeTopic",
                      "kafka-cluster:WriteData",
                      "kafka-cluster:CreateTopic",
                      "kafka-cluster:AlterGroup",
                      "kafka-cluster:DescribeGroup"
                  ],
                  "Resource": [
                      "arn:aws:kafka:<region>:<account-id>:cluster/<cluster-name>/<cluster-uuid>",
                      "arn:aws:kafka:<region>:<account-id>:topic/<cluster-name>/<cluster-uuid>/<topic-name>",
                      "arn:aws:kafka:<region>:<account-id>:group/<cluster-name>/<cluster-uuid>/<group-name>"
                  ]
              }
          ]
      }

You might need to modify the preceding permissions to limit access to your resources (topics, groups). With the trust relationship and both policies saved locally, the role can be created as shown in the sketch that follows.
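The following AWS CLI sketch creates the service execution role from the preceding documents; the role name, policy names, and file names are assumptions:

    # Create the connector's service execution role with the trust policy above
    aws iam create-role \
        --role-name msk-connect-s3-sink-role \
        --assume-role-policy-document file://trust-policy.json

    # Attach the S3 access policy as an inline policy
    aws iam put-role-policy \
        --role-name msk-connect-s3-sink-role \
        --policy-name s3-access \
        --policy-document file://s3-access-policy.json

    # Attach the kafka-cluster permissions as an inline policy
    aws iam put-role-policy \
        --role-name msk-connect-s3-sink-role \
        --policy-name msk-cluster-access \
        --policy-document file://msk-cluster-access-policy.json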

Finally, it’s time to create the MSK connector. Because the Amazon MSK console doesn’t allow viewing MSK clusters in other accounts, we show you how to use the AWS CLI instead. We also use a basic Amazon S3 configuration for testing purposes. You might need to modify the configuration according to your connector’s use case.

  1. Create a connector using the AWS CLI with the following command, supplying the required parameters of the connector, including Account A’s MSK cluster broker endpoints (replace the <placeholder> values with your own):
    aws kafkaconnect create-connector \
    --capacity "autoScaling={maxWorkerCount=2,mcuCount=1,minWorkerCount=1,scaleInPolicy={cpuUtilizationPercentage=10},scaleOutPolicy={cpuUtilizationPercentage=80}}" \
    --connector-configuration \
    "connector.class=io.confluent.connect.s3.S3SinkConnector, \
    s3.region=<region>, \
    schema.compatibility=NONE, \
    flush.size=2, \
    tasks.max=1, \
    topics=<topic-name>, \
    security.protocol=SASL_SSL, \
    s3.compression.type=gzip, \
    format.class=io.confluent.connect.s3.format.json.JsonFormat, \
    sasl.mechanism=AWS_MSK_IAM, \
    sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;, \
    sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler, \
    value.converter=org.apache.kafka.connect.storage.StringConverter, \
    storage.class=io.confluent.connect.s3.storage.S3Storage, \
    s3.bucket.name=<bucket-name>, \
    timestamp.extractor=Record, \
    key.converter=org.apache.kafka.connect.storage.StringConverter" \
    --connector-name "Connector-name" \
    --kafka-cluster '{"apacheKafkaCluster": {"bootstrapServers": "<broker-endpoints>:9098","vpc": {"securityGroups": ["sg-0b36a015789f859a3"],"subnets": ["subnet-07950da1ebb8be6d8","subnet-026a729668f3f9728"]}}}' \
    --kafka-cluster-client-authentication "authenticationType=IAM" \
    --kafka-cluster-encryption-in-transit "encryptionType=TLS" \
    --kafka-connect-version "2.7.1" \
    --log-delivery workerLogDelivery='{cloudWatchLogs={enabled=true,logGroup="<log-group-name>"}}' \
    --plugins "customPlugin={customPluginArn=<custom-plugin-arn>,revision=1}" \
    --service-execution-role-arn "<service-execution-role-arn>"

  2. After you create the connector, connect a producer to your topic and insert data into it. In the following code, we use a Kafka client to insert data for testing purposes (a sample client.properties for IAM authentication follows):
    bin/kafka-console-producer.sh --broker-list <broker-endpoints>:9098 --producer.config client.properties --topic <topic-name>
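The client.properties referenced above would typically carry the IAM authentication settings for the Kafka client; this sketch assumes the aws-msk-iam-auth library is on the client’s classpath:

    # client.properties for IAM authentication against the MSK cluster
    security.protocol=SASL_SSL
    sasl.mechanism=AWS_MSK_IAM
    sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
    sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler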

If everything is set up correctly, you should see the data in your destination S3 bucket. If not, check the troubleshooting tips in the following section.
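As a quick spot-check, you can list the bucket contents; the topics/ prefix shown here is the Confluent S3 Sink connector’s default topics.dir, so adjust it if you changed that setting:

    # List the objects written by the S3 Sink connector
    aws s3 ls s3://<bucket-name>/topics/ --recursive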

Troubleshooting tips

After deploying the connector, if it’s in the CREATING state on the connector details page, access the Amazon CloudWatch log group specified in your connector creation request. Review the logs for any errors. If no errors are found, wait for the connector to complete its creation process.
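You can also check the connector state and tail the worker logs from the CLI; the connector ARN and log group name below are placeholders (aws logs tail requires AWS CLI v2):

    # Check the connector's current state (CREATING, RUNNING, FAILED, ...)
    aws kafkaconnect describe-connector \
        --connector-arn <connector-arn> \
        --query 'connectorState'

    # Tail the worker logs configured at connector creation
    aws logs tail <log-group-name> --follow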

Additionally, make sure the IAM roles have their required permissions, and check the security groups and network ACLs for proper connectivity between the VPCs.

Clean up

When you’re done testing this solution, clean up any unwanted resources to avoid ongoing charges.

Conclusion

In this post, we demonstrated how to create an MSK connector when you need to use an MSK cluster in one AWS account but MSK Connect is located in a separate account. This architecture includes an S3 Sink connector for demonstration purposes, but it can accommodate other types of sink and source connectors. Additionally, this architecture focuses solely on IAM-authenticated connectors. If an unauthenticated connector is desired, the multi-VPC connectivity (PrivateLink) and cluster policy components can be ignored. The remaining process, which involves creating a network connection between the account VPCs, stays the same.

Try out the solution for yourself, and let us know your questions and feedback in the comments section.



About the Author

Venkata Sai Mahesh Swargam is a Cloud Engineer at AWS in Hyderabad. He specializes in Amazon MSK and Amazon Kinesis services. Mahesh is dedicated to helping customers by providing technical guidance and solving issues related to their Amazon MSK architectures. In his free time, he enjoys being with family and traveling around the world.
