
Improve Agentforce data security with Private Connect for Salesforce Data Cloud and Amazon Redshift – Part 3


Data security is a high priority, particularly as organizations face increasing cybersecurity threats. Maintaining the security of customer data is a top priority for AWS and Salesforce. With AWS PrivateLink, Salesforce Private Connect eliminates common security risks associated with public endpoints. Salesforce Private Connect now works with Salesforce Data Cloud to keep your customer data secure when using it with key services like Agentforce.

In Part 2 of this series, we discussed the architecture and implementation details of cross-Region data sharing between Salesforce Data Cloud and AWS accounts. In this post, we discuss how to create AWS endpoint services to improve data security with Private Connect for Salesforce Data Cloud.

Solution overview

In this example, we configure PrivateLink for an Amazon Redshift instance to enable direct, private connectivity from Salesforce Data Cloud. AWS recommends that organizations use an Amazon Redshift managed VPC endpoint (powered by PrivateLink) to privately access a Redshift cluster or serverless workgroup. For details about best practices, refer to Enable private access to Amazon Redshift from your client applications in another VPC.

However, some organizations might prefer to manage the PrivateLink connection themselves; for example, a Redshift managed VPC endpoint is not yet available in Salesforce Data Cloud, so you must manage your own PrivateLink connection. This post focuses on how to configure self-managed PrivateLink between Salesforce Data Cloud and Amazon Redshift in your AWS account to establish private connectivity.

The following architecture diagram shows the steps for setting up private connectivity between Salesforce Data Cloud and Amazon Redshift in your AWS account.

To set up private connectivity between Salesforce Data Cloud and Amazon Redshift, we use the resources described in the following sections.

Prerequisites

To complete the steps in this post, you must already have Amazon Redshift running in a private subnet and have the permissions to manage it.

Create a security group for the Network Load Balancer

The security group acts as a virtual firewall. The only traffic that reaches the instance is the traffic allowed by the security group rules. To enhance the security posture, you only want to allow traffic to Redshift instances. Complete the following steps to create a security group for your Network Load Balancer (NLB):

  1. On the Amazon VPC console, choose Security groups in the navigation pane.
  2. Choose Create security group.
  3. Enter a name and description for the security group.
  4. For VPC, use the same virtual private cloud (VPC) as your Redshift cluster.
  5. For Inbound rules, add a rule to allow traffic to ingress the listening port 5439 on the load balancer.
  6. For Outbound rules, add a rule to allow traffic to your Redshift instance.
  7. Choose Create security group.
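For teams that script their infrastructure, the console steps above can be sketched with boto3. This is a minimal sketch, not part of the original walkthrough; the VPC ID, CIDR range, security group name, and Redshift security group ID are placeholder values you must replace with your own.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder values -- replace with your own VPC and Redshift details.
VPC_ID = "vpc-0123456789abcdef0"
CLIENT_CIDR = "10.0.0.0/16"             # range that should reach the NLB listener
REDSHIFT_SG_ID = "sg-0fedcba987654321"  # security group attached to Redshift

# Create the security group for the Network Load Balancer.
sg = ec2.create_security_group(
    GroupName="nlb-redshift-privatelink",
    Description="Allow traffic to the NLB fronting Amazon Redshift",
    VpcId=VPC_ID,
)
sg_id = sg["GroupId"]

# Inbound rule: allow traffic to ingress the listener port 5439.
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 5439, "ToPort": 5439,
        "IpRanges": [{"CidrIp": CLIENT_CIDR}],
    }],
)

# Outbound rule: allow traffic to the Redshift instance's security group.
ec2.authorize_security_group_egress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 5439, "ToPort": 5439,
        "UserIdGroupPairs": [{"GroupId": REDSHIFT_SG_ID}],
    }],
)
print(f"Created security group {sg_id}")
```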

Create a target group

Complete the following steps to create a target group:

  1. On the Amazon EC2 console, under Load balancing in the navigation pane, choose Target groups.
  2. Choose Create target group.
  3. For Choose a target type, select IP addresses.
  4. For Protocol: Port, choose TCP and port 5439 (if your Redshift cluster runs on a different port, change the port accordingly).
  5. For IP address type, select IPv4.
  6. For VPC, choose the same VPC as your Redshift cluster.
  7. Choose Next.
  8. For Enter an IPv4 address from a VPC subnet, enter your Amazon Redshift IP address.

To find this address, navigate to your cluster details on the Amazon Redshift console, choose the Properties tab, and under Network and security settings, expand VPC endpoint connection details and copy the private address of the network interface. If you're using Amazon Redshift Serverless, navigate to the workgroup home page. The Amazon Redshift IPv4 addresses can be located in the Network and security section under Data access when you choose the VPC endpoint ID.

  9. After you add the IP address, choose Include as pending below, then choose Create target group.
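As a hedged boto3 sketch of the same steps (the target group name matches the one referenced later in the listener configuration; the VPC ID and endpoint IP are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder values -- replace with your own.
VPC_ID = "vpc-0123456789abcdef0"
REDSHIFT_ENDPOINT_IP = "10.0.1.25"  # private IP of the Redshift VPC endpoint ENI

# Target group of type "ip" pointing at the Redshift private address.
tg = elbv2.create_target_group(
    Name="Redshift-TargetGroup",
    Protocol="TCP",
    Port=5439,  # change if your cluster listens on another port
    VpcId=VPC_ID,
    TargetType="ip",
    IpAddressType="ipv4",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Register the Redshift endpoint IP as a target.
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": REDSHIFT_ENDPOINT_IP, "Port": 5439}],
)
```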

Create a load balancer

Complete the following steps to create a load balancer:

  1. On the Amazon EC2 console, choose Load balancers in the navigation pane.
  2. Choose Create load balancer.
  3. Choose Network.
  4. For Load balancer name, enter a name.
  5. For Scheme, select Internal.
  6. For Load balancer address type, select IPv4.
  7. For VPC, use the VPC that your target group is in.
  8. For Availability Zones, select the Availability Zone where the Redshift cluster is running.
  9. For Security groups, choose the security group you created in the previous step.
  10. For Listener details, add a listener that points to the target group created in the last step:
    1. For Protocol, choose TCP.
    2. For Port, use 5439.
    3. For Default action, choose Redshift-TargetGroup.
  11. Choose Create load balancer.

Make sure that the registered targets in the target group are healthy before proceeding. Also make sure that the target group has a target for all Availability Zones in your AWS Region, or that the NLB has the Cross-zone load balancing attribute enabled.

In the load balancer's security settings, make sure that Enforce inbound rules on PrivateLink traffic is off.
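The load balancer creation, listener, and PrivateLink inbound-rule setting can be sketched with boto3 as follows. The subnet ID, security group ID, and target group ARN are placeholders; the last call turns off Enforce inbound rules on PrivateLink traffic as described above.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder values -- replace with your own.
SUBNET_IDS = ["subnet-0123456789abcdef0"]  # subnet(s) in the cluster's AZ(s)
SG_ID = "sg-0aaaabbbbccccdddd"             # security group created earlier
TG_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/Redshift-TargetGroup/abc123"

# Internal Network Load Balancer in the target group's VPC.
lb = elbv2.create_load_balancer(
    Name="redshift-privatelink-nlb",
    Type="network",
    Scheme="internal",
    IpAddressType="ipv4",
    Subnets=SUBNET_IDS,
    SecurityGroups=[SG_ID],
)
lb_arn = lb["LoadBalancers"][0]["LoadBalancerArn"]

# TCP listener on 5439 forwarding to the Redshift target group.
elbv2.create_listener(
    LoadBalancerArn=lb_arn,
    Protocol="TCP",
    Port=5439,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": TG_ARN}],
)

# Turn off "Enforce inbound rules on PrivateLink traffic".
elbv2.set_security_groups(
    LoadBalancerArn=lb_arn,
    SecurityGroups=[SG_ID],
    EnforceSecurityGroupInboundRulesOnPrivateLinkTraffic="off",
)
```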

Create an endpoint service

Complete the following steps to create an endpoint service:

  1. On the Amazon VPC console, choose Endpoint services in the navigation pane.
  2. Choose Create endpoint service.
  3. For Load balancer type, choose Network.
  4. For Available load balancers, select the load balancer you created in the last step.
  5. For Supported Regions, select an additional Region if Data Cloud isn't hosted in the same AWS Region as the Redshift instance.
  6. For additional settings, leave Acceptance required selected.

If Acceptance required is selected, later, when the Salesforce Data Cloud endpoint is created to connect to the endpoint service, you will need to come back to this page to accept the connection. If it is not selected, the connection will be established directly.

  7. For Supported IP address type, select IPv4.
  8. Choose Create.
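A boto3 sketch of the endpoint service creation, assuming the NLB ARN from the previous section (a placeholder here); add supported Regions only if Data Cloud runs in a different Region than your Redshift instance:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder -- use the ARN of the NLB you created earlier.
NLB_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/redshift-privatelink-nlb/abc123"

svc = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[NLB_ARN],
    AcceptanceRequired=True,  # you will accept the Data Cloud connection manually
    SupportedIpAddressTypes=["ipv4"],
    # SupportedRegions=["us-west-2"],  # only if Data Cloud is in another Region
)
cfg = svc["ServiceConfiguration"]
# Keep the service name handy -- it becomes the Route Name in Data Cloud.
print(cfg["ServiceId"], cfg["ServiceName"])
```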

Next, you must allow Salesforce principals.

  1. After you create the endpoint service, choose Allow principals.
  2. In another browser tab, navigate to Salesforce Data Cloud Setup.
  3. Under External Integrations, access the new Private Connect menu item.
  4. Create a new private network path to Amazon Redshift.
  5. Copy the principal ID.
  6. Return to the endpoint service creation page.
  7. For Principals to add, enter the principal ID.
  8. Copy the endpoint service name.
  9. Choose Allow principals.
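The allow-principals step can be scripted as below. The service ID and principal ARN are placeholders; use the principal ID you copied from the Salesforce Data Cloud Private Connect page.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder values -- replace with your own.
SERVICE_ID = "vpce-svc-0123456789abcdef0"
# Principal ID copied from the Salesforce Data Cloud Private Connect page.
SALESFORCE_PRINCIPAL = "arn:aws:iam::111122223333:root"

ec2.modify_vpc_endpoint_service_permissions(
    ServiceId=SERVICE_ID,
    AddAllowedPrincipals=[SALESFORCE_PRINCIPAL],
)
```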

  1. Return to the Salesforce Data Cloud private network configuration page.
  2. For Route Name, enter the endpoint service name.
  3. Choose Save.

The route status should show as Allocating.

If you opted to accept connections in the earlier step, you now need to accept the connection from Salesforce Data Cloud.

  1. On the Amazon VPC console, navigate to the endpoint service.
  2. On the Endpoint connections tab, locate your pending connection request.
  3. Accept the endpoint connection request from Salesforce Data Cloud.
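The acceptance step can also be done with boto3, as a sketch (the service ID is a placeholder): list the connections still pending acceptance for the service, then accept them.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder -- replace with your endpoint service ID.
SERVICE_ID = "vpce-svc-0123456789abcdef0"

# Find connections that are still pending acceptance for this service.
pending = ec2.describe_vpc_endpoint_connections(
    Filters=[
        {"Name": "service-id", "Values": [SERVICE_ID]},
        {"Name": "vpc-endpoint-state", "Values": ["pendingAcceptance"]},
    ]
)["VpcEndpointConnections"]

if pending:
    ec2.accept_vpc_endpoint_connections(
        ServiceId=SERVICE_ID,
        VpcEndpointIds=[c["VpcEndpointId"] for c in pending],
    )
```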

Navigate to the Salesforce Data Cloud setup and wait 30 seconds, then refresh the Private Connect route so the status shows as Ready.

You can now use this route when creating a connection with Amazon Redshift. For more details, refer to Part 1 of this series.

Amazon Redshift federation PrivateLink failover

Now that we've discussed how to configure PrivateLink for use with Private Connect for Salesforce Data Cloud, let's discuss Amazon Redshift federation PrivateLink failover scenarios.

You can choose to deploy your Redshift clusters in three different deployment modes:

  • Amazon Redshift provisioned in a Single-AZ RA3 cluster
  • Amazon Redshift provisioned in a Multi-AZ RA3 cluster
  • Amazon Redshift Serverless

PrivateLink relies on a customer managed NLB connected to service endpoints using IP address target groups. The target group has the IP addresses of your Redshift instance. If there is a change in IP address targets, the NLB target group must be updated to the new IP addresses associated with the service. Failover behavior for Amazon Redshift will differ based on the deployment mode you use.

This section describes PrivateLink failover scenarios for these three deployment modes.

Amazon Redshift provisioned in a Single-AZ RA3 cluster

RA3 nodes support provisioned cluster VPC endpoints, which decouple the backend infrastructure from the cluster endpoint used for access. When you create or restore an RA3 cluster, Amazon Redshift uses a port within the ranges of 5431–5455 or 8191–8215. When the cluster is set to a port in one of these ranges, Amazon Redshift automatically creates a VPC endpoint in your AWS account for the cluster and attaches network interfaces with a private IP for each Availability Zone in the cluster. For the PrivateLink configuration, you use the IP associated with the VPC endpoint as the target for the frontend NLB. You can identify the IP address of the VPC endpoint on the Amazon Redshift console or by running a describe-clusters query on the Redshift cluster.
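As a sketch, the endpoint IPs can be pulled out of the describe-clusters response. The helper below only parses the response shape; fetching the response itself (for example with `boto3.client("redshift").describe_clusters(...)`) is left as a comment so the snippet stays self-contained, and the sample response values are illustrative.

```python
# Extract VPC endpoint private IPs from a Redshift describe-clusters response.
# In practice the response comes from:
#   boto3.client("redshift").describe_clusters(ClusterIdentifier="my-cluster")

def cluster_vpc_endpoint_ips(response):
    """Return the private IP of every VPC endpoint network interface."""
    ips = []
    for cluster in response.get("Clusters", []):
        for vpce in cluster.get("Endpoint", {}).get("VpcEndpoints", []):
            for eni in vpce.get("NetworkInterfaces", []):
                ips.append(eni["PrivateIpAddress"])
    return ips

# Illustrative sample of the relevant part of the response.
sample = {
    "Clusters": [{
        "Endpoint": {
            "Address": "my-cluster.abc123.us-east-1.redshift.amazonaws.com",
            "Port": 5439,
            "VpcEndpoints": [{
                "VpcEndpointId": "vpce-0123456789abcdef0",
                "NetworkInterfaces": [{"PrivateIpAddress": "10.0.1.25"}],
            }],
        },
    }],
}
print(cluster_vpc_endpoint_ips(sample))  # ['10.0.1.25']
```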

Amazon Redshift will not remove a network interface associated with a VPC endpoint unless you add an additional subnet to an existing Availability Zone or remove a subnet using Amazon Redshift APIs. We recommend that you don't add multiple subnets to an Availability Zone to avoid disruption. There might be failover scenarios where additional network interfaces are added to a VPC endpoint.

In RA3 clusters, the nodes are automatically recovered and replaced as needed by Amazon Redshift. The cluster's VPC endpoint will not change even when the leader node is replaced.

Cluster relocation is an optional feature that allows Amazon Redshift to move a cluster to another Availability Zone without any loss of data or changes to your applications. When cluster relocation is turned on, Amazon Redshift might choose to relocate clusters in some situations. Specifically, this happens when issues in the current Availability Zone prevent optimal cluster operation or to improve service availability. You can also invoke the relocation function in cases where resource constraints in a given Availability Zone are disrupting cluster operations. When a Redshift cluster is relocated to a new Availability Zone, the new cluster has the same VPC endpoint, but a new network interface is added in the new Availability Zone. The new private address should be added to the NLB's target group to optimize availability and performance.

In the case that a cluster has failed and can't be recovered automatically, you have to initiate a restore of the cluster from a previous snapshot. This action generates a new cluster with a new DNS name, connection string, VPC endpoint, and IP address for the cluster. You must update the NLB with the new IP for the VPC endpoint of the new cluster.

Amazon Redshift provisioned in a Multi-AZ RA3 cluster

Amazon Redshift supports Multi-AZ deployments for provisioned RA3 clusters. By using Multi-AZ deployments, your Redshift data warehouse can continue operating in failure scenarios when an unexpected event happens in an Availability Zone. A Multi-AZ deployment deploys compute resources in two Availability Zones, and these compute resources can be accessed through a single endpoint. In the case of a failure of the primary nodes, Multi-AZ clusters will make secondary nodes primary and deploy a new secondary stack in another Availability Zone. The following diagram illustrates this architecture.

Multi-AZ clusters deploy VPC endpoints that point to network interfaces in two Availability Zones, which should be configured as part of the NLB target group. To configure the VPC endpoints in the NLB target group, you can identify the IP addresses of the VPC endpoint using the Amazon Redshift console or by running a describe-clusters query on the Redshift cluster. In a failover scenario, VPC endpoint IPs will not change and the NLB doesn't require an update.

Amazon Redshift will not remove a network interface associated with a VPC endpoint unless you add an additional subnet to an existing Availability Zone or remove a subnet using Amazon Redshift APIs. We recommend that you don't add multiple subnets to an Availability Zone to avoid disruption.

Amazon Redshift Serverless

Redshift Serverless provides managed infrastructure. You can run the get-workgroup query to get the workgroup's VpcEndpoint IPs. These IPs should be configured in the target group of the PrivateLink NLB. Because this is a managed service, the failover is managed by AWS. During an underlying Availability Zone failure, the workgroup might get a new set of IPs. You can regularly query the workgroup configuration or the DNS record for the Redshift cluster to check if IP addresses have changed and update the NLB accordingly.
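A sketch of parsing the get-workgroup response for the serverless case follows; as above, the helper only handles the response shape (the actual call would be `boto3.client("redshift-serverless").get_workgroup(...)`), and the sample values are illustrative.

```python
# Extract VPC endpoint private IPs from a redshift-serverless get-workgroup
# response. In practice the response comes from:
#   boto3.client("redshift-serverless").get_workgroup(workgroupName="my-wg")

def workgroup_vpc_endpoint_ips(response):
    """Return the private IPs behind the workgroup's VPC endpoints."""
    ips = []
    endpoint = response.get("workgroup", {}).get("endpoint", {})
    for vpce in endpoint.get("vpcEndpoints", []):
        for eni in vpce.get("networkInterfaces", []):
            ips.append(eni["privateIpAddress"])
    return ips

# Illustrative sample of the relevant part of the response.
sample = {
    "workgroup": {
        "endpoint": {
            "address": "my-wg.111122223333.us-east-1.redshift-serverless.amazonaws.com",
            "port": 5439,
            "vpcEndpoints": [{
                "vpcEndpointId": "vpce-0123456789abcdef0",
                "networkInterfaces": [{"privateIpAddress": "10.0.2.40"}],
            }],
        }
    }
}
print(workgroup_vpc_endpoint_ips(sample))  # ['10.0.2.40']
```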

Automating IP address management

In scenarios where Amazon Redshift operations might change the IP address of the endpoint needed for Amazon Redshift connectivity, you can automate the update of NLB network targets by monitoring the results of cluster DNS resolution, using describe-clusters or get-workgroup queries, and using an AWS Lambda function to update the NLB target group configuration.

You can periodically (on a schedule) query the DNS of the Redshift cluster for IP address resolution. Use a Lambda function to compare and update the IP target groups for the NLB. For an example of this solution, see Hostname-as-Target for Network Load Balancers.
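The core of such a Lambda function can be sketched as follows: resolve the cluster's DNS name, diff the result against the IPs currently registered in the target group, and register or deregister the difference. The function names are illustrative, and the boto3 calls that would apply the plan are shown only as comments; this is not the exact implementation from the linked solution.

```python
import socket

def plan_target_updates(registered_ips, resolved_ips):
    """Diff the NLB target group against freshly resolved Redshift IPs.

    Returns (ips_to_register, ips_to_deregister).
    """
    desired, current = set(resolved_ips), set(registered_ips)
    return sorted(desired - current), sorted(current - desired)

def resolve_ips(hostname, port=5439):
    """Resolve all A records for the Redshift endpoint hostname."""
    infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

# Inside the Lambda handler you would then apply the plan, for example:
#   elbv2.register_targets(TargetGroupArn=tg_arn,
#                          Targets=[{"Id": ip, "Port": 5439} for ip in to_add])
#   elbv2.deregister_targets(TargetGroupArn=tg_arn,
#                            Targets=[{"Id": ip, "Port": 5439} for ip in to_remove])

to_add, to_remove = plan_target_updates(["10.0.1.25"], ["10.0.1.25", "10.0.3.7"])
print(to_add, to_remove)  # ['10.0.3.7'] []
```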

For legacy DS2 clusters where the IP address of the leader node must be explicitly monitored, you can configure Amazon CloudWatch metrics to monitor the HealthStatus of the leader node. You can configure the metric to trigger an alarm, which alerts an Amazon Simple Notification Service (Amazon SNS) topic and invokes a Lambda function to reconcile the NLB target group.

For backup and restore patterns, you can create a rule in Amazon EventBridge triggered on the RestoreFromClusterSnapshot API action, which invokes a Lambda function to update the NLB with the new IP addresses of the cluster.

For a cluster relocation pattern, you can trigger an event based on the Amazon Redshift ModifyCluster availability-zone-relocation API action.

Conclusion

In this post, we discussed how to use AWS endpoint services to improve data security with Private Connect for Salesforce Data Cloud. If you are currently using the Salesforce Data Cloud zero-copy integration with Amazon Redshift, we recommend you follow the steps provided in this post to secure the network connection between Salesforce and AWS. Reach out to your Salesforce and AWS support teams if you need more assistance to implement this solution.


About the authors

Yogesh Dhimate is a Sr. Partner Solutions Architect at AWS, leading the technology partnership with Salesforce. Prior to joining AWS, Yogesh worked with leading companies, including Salesforce, driving their industry solution initiatives. With over 20 years of experience in product management and solutions architecture, Yogesh brings a unique perspective in cloud computing and artificial intelligence.

Avijit Goswami is a Principal Solutions Architect at AWS specializing in data and analytics. He helps AWS strategic customers build high-performing, secure, and scalable data lake solutions on AWS using AWS managed services and open source solutions. Outside of his work, Avijit likes to travel, hike, watch sports, and listen to music.

Ife Stewart is a Principal Solutions Architect in the Strategic ISV segment at AWS. She has been engaged with Salesforce Data Cloud over the last 2 years to help build integrated customer experiences across Salesforce and AWS. Ife has over 10 years of experience in technology. She is an advocate for diversity and inclusion in the technology field.

Mike Patterson is a Senior Customer Solutions Manager in the Strategic ISV segment at AWS. He has partnered with Salesforce Data Cloud to align business objectives with innovative AWS solutions to achieve impactful customer experiences. In his spare time, he enjoys spending time with his family, sports, and outdoor activities.

Drew Loika is a Director of Product Management at Salesforce and has spent over 15 years delivering customer value via data platforms and services. When not diving deep with customers on what would help them be more successful, he enjoys building, growing, and exploring the great outdoors.
