This is a guest post by Oleh Khoruzhenko, Senior Staff DevOps Engineer at Bazaarvoice, in partnership with AWS.
Bazaarvoice is an Austin-based company powering a world-leading ratings and reviews platform. Our system processes billions of consumer interactions through ratings, reviews, photos, and videos, helping brands and retailers build shopper confidence and drive sales by using authentic user-generated content (UGC) across the customer journey. The Bazaarvoice Trust Mark is the gold standard in authenticity.
Apache Kafka is one of the core components of our infrastructure, enabling real-time data streaming for the global review platform. Although Kafka's distributed architecture met our needs for high-throughput, fault-tolerant streaming, self-managing this complex system diverted critical engineering resources away from our core product development. Each component of our Kafka infrastructure required specialized expertise, ranging from configuring low-level parameters to maintaining the complex distributed systems our customers depend on. The dynamic nature of the environment demanded continuous care and investment in automation. We found ourselves constantly managing upgrades, applying security patches, implementing fixes, and addressing scaling needs as our data volumes grew.
In this post, we show you the steps we took to migrate our workloads from self-hosted Kafka to Amazon Managed Streaming for Apache Kafka (Amazon MSK). We walk you through our migration process and highlight the improvements we achieved after this transition. We show how we minimized operational overhead, enhanced our security and compliance posture, automated key processes, and built a more resilient platform while maintaining the high performance our global customer base expects.
The need for modernization
As our platform grew to process billions of daily consumer interactions, we needed to find a way to scale our Kafka clusters efficiently while keeping a small team to manage the infrastructure. The limitations of self-managed Kafka clusters manifested in several key areas:
- Scaling operations – Although scaling our self-hosted Kafka clusters wasn't inherently complex, it required careful planning and execution. Each time we needed to add new brokers to handle increased workload, our team faced a multi-step process involving capacity planning, infrastructure provisioning, and configuration updates.
- Configuration complexity – Kafka offers hundreds of configuration parameters. Although we didn't actively manage all of these, understanding their impact was crucial. Key settings like I/O threads, memory buffers, and retention policies needed ongoing attention as we scaled. Even minor adjustments could have significant downstream effects, requiring our team to maintain deep expertise in these parameters and their interactions to ensure optimal performance and stability.
- Infrastructure management and capacity planning – Self-hosting Kafka required us to manage several scaling dimensions, including compute, memory, network throughput, storage throughput, and storage volume. We needed to carefully plan capacity for all these components, often making complex trade-offs. Beyond capacity planning, we were responsible for real-time management of our Kafka infrastructure. This included promptly detecting and addressing component failures and performance issues. Our team needed to be highly responsive to alerts, often requiring immediate action to maintain system stability.
- Specialized expertise requirements – Operating Kafka at scale demanded deep technical expertise across several domains. The team needed to:
  - Monitor and analyze hundreds of performance metrics
  - Conduct complex root cause analysis for performance issues
  - Manage ZooKeeper ensemble coordination
  - Execute rolling updates for zero-downtime upgrades and security patches
These challenges were compounded during peak business periods, such as Black Friday and Cyber Monday, when maintaining optimal performance was critical for Bazaarvoice's retail customers.
Choosing Amazon MSK
After evaluating various options, we selected Amazon MSK as our modernization solution. The decision was driven by the service's ability to minimize operational overhead, provide high availability out of the box with its three Availability Zone architecture, and offer seamless integration with our existing AWS infrastructure.
Key capabilities that made Amazon MSK the clear choice:
- AWS integration – We already used AWS services for data processing and analytics. Amazon MSK connected directly with these services, eliminating the need to build and maintain custom integrations. This meant our existing data pipelines would continue working with minimal changes.
- Automated operations management – Amazon MSK automated our most time-consuming tasks. We no longer need to manually monitor instances and storage for failures or respond to those issues ourselves.
- Enterprise-grade reliability – The platform's architecture matched our reliability requirements out of the box. Multi-AZ distribution and built-in replication gave us the same fault tolerance we had carefully built into our self-hosted system, now backed by AWS service guarantees.
- Simplified upgrade process – Before Amazon MSK, version upgrades for our Kafka clusters required careful planning and execution. The process was complex, involving multiple steps and risks. Amazon MSK simplified our upgrade operations. We now use automated upgrades for dev and test workloads and retain control over production environments. This shift reduced the need for extensive planning sessions and multiple engineers. As a result, we stay current with the latest Kafka versions and security patches, improving our system reliability and performance.
- Enhanced security controls – Our platform required ISO 27001 compliance, which typically involved months of documentation and security controls implementation. Amazon MSK came with this certification built in, eliminating the need for separate compliance work. Amazon MSK encrypted our data, managed network access, and integrated with our existing security tools.
With Amazon MSK selected as our target platform, we began planning the complex task of migrating our critical streaming infrastructure without disrupting the billions of consumer interactions flowing through our system.
Bazaarvoice’s migration journey
Moving our complex Kafka infrastructure to Amazon MSK required careful planning and precise execution. Our platform processes data through two main components: an Apache Kafka Streams pipeline that handles data processing and augmentation, and consumer applications that move this enriched data to downstream systems. With 40 TB of state across 250 internal topics, this migration demanded a methodical approach.
Planning phase
Working with AWS Solutions Architects proved crucial for validating our migration strategy. Our platform's unique characteristics required particular attention:
- Multi-Region deployment across the US and EU
- Complex stateful applications with strict data consistency needs
- Critical business services requiring zero downtime
- Diverse consumer ecosystem with different migration requirements
Migration challenges
The biggest hurdle was migrating our stateful Kafka Streams applications. Our data processing runs as a directed acyclic graph (DAG) of applications across Regions, using static group membership to prevent disruptive rebalancing (a configuration sketch follows the list below). It's important to note that Kafka Streams keeps its state in internal Kafka topics. For applications to recover properly, replicating this state accurately is essential. This characteristic of Kafka Streams added complexity to our migration process. Initially, we considered MirrorMaker 2, the standard tool for Kafka migrations. However, two fundamental limitations made it challenging:
- Risk of losing state or incorrectly replicating state across our applications.
- Inability to run two instances of our applications concurrently, which meant we needed to shut down the primary application and wait for it to recover its state from the MSK cluster. Given the size of our state, this recovery process exceeded our 30-minute SLA for downtime.
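For context, the following is a minimal sketch of the kind of Kafka Streams configuration involved, assuming illustrative values for the application ID, bootstrap servers, instance ID, and state directory. The application ID prefixes the internal changelog and repartition topics that hold the state discussed above, and the static group.instance.id is what prevents disruptive rebalances when an instance restarts.

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsMigrationConfig {
    // Minimal Kafka Streams configuration sketch with placeholder values.
    // The application.id prefixes the internal changelog and repartition topics
    // whose state had to be preserved accurately during the migration.
    public static Properties buildConfig(String bootstrapServers, String instanceId) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "review-augmentation-pipeline"); // hypothetical name
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(StreamsConfig.STATE_DIR_CONFIG, "/var/lib/kafka-streams");
        // Static group membership: a stable group.instance.id lets a restarted
        // instance rejoin within the session timeout without triggering a full rebalance.
        props.put(StreamsConfig.consumerPrefix(ConsumerConfig.GROUP_INSTANCE_ID_CONFIG), instanceId);
        props.put(StreamsConfig.consumerPrefix(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG), "120000");
        return props;
    }
}
```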
Our solution
We decided to deploy a parallel stack of Kafka Streams applications reading and writing data from Amazon MSK. This approach gave us sufficient time for testing and verification, and enabled the applications to hydrate their state before we delivered the output to our data warehouse for analytics. We used MirrorMaker 2 for input topic replication. This approach:
- Simplified monitoring of the replication process
- Avoided consistency issues between state stores and internal topics
- Allowed for gradual, controlled migration of consumers
- Enabled thorough validation before cutover
- Required a coordinated transition plan for all consumers, because we couldn't transfer consumer offsets across clusters
Consumer migration strategy
Each consumer type required a carefully tailored approach:
- Standard consumers – For applications supporting the Kafka consumer group protocol, we implemented a four-step migration (a configuration sketch for the cutover appears after this list). This approach risked some duplicate processing, but our applications were designed to handle this scenario. The steps were as follows:
  - Configure consumers with auto.offset.reset: latest.
  - Stop all DAG producers.
  - Wait for existing consumers to process remaining messages.
  - Cut over consumer applications to Amazon MSK.
- Apache Kafka Connect sinks – Our sink connectors served two critical databases:
  - A distributed search and analytics engine – Document versioning depended on Kafka record offsets, making direct migration impossible. To address this, we implemented a solution that involved building new search engine clusters from scratch.
  - A document-oriented NoSQL database – This supported direct migration without requiring new database instances, simplifying the process considerably.
- Apache Spark and Flink applications – These presented unique challenges because of their internal checkpointing mechanisms:
  - Offsets managed outside Kafka's consumer groups
  - Checkpoints incompatible between source and target clusters
  - Required full data reprocessing from the beginning
We scheduled these migrations during off-peak hours to minimize impact.
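As a rough illustration of the standard-consumer cutover described above, the following sketch points a plain Kafka consumer at the MSK cluster with auto.offset.reset set to latest. The bootstrap servers, group ID, and topic name are placeholders rather than our production values.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class MskCutoverConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Point the consumer at the MSK brokers instead of the old self-hosted cluster.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "b-1.example.kafka.us-east-1.amazonaws.com:9092"); // placeholder endpoint
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "enriched-reviews-consumer"); // placeholder group
        // No committed offsets exist on the new cluster, so start from the latest records;
        // producers were stopped and existing consumers drained before the cutover.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("enriched-reviews")); // placeholder topic
            while (true) {
                consumer.poll(Duration.ofMillis(500))
                        .forEach(record -> System.out.printf("%s:%d%n", record.topic(), record.offset()));
            }
        }
    }
}
```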
Technical benefits and improvements
Moving to Amazon MSK fundamentally changed how we manage our Kafka infrastructure. The transformation is best illustrated by comparing key operational tasks before and after the migration, summarized in the following table.
| Activity | Before: Self-Hosted Kafka | After: Amazon MSK |
| --- | --- | --- |
| Security patching | Required dedicated team time for Kafka and OS updates | Fully automated |
| Broker recovery | Needed manual monitoring and intervention | Fully automated |
| Client authentication | Complex password rotation procedures | AWS Identity and Access Management (IAM) |
| Version upgrades | Complex procedure requiring extensive planning | Fully automated |
The details of these tasks are as follows:
- Security patching – Previously, our team spent 8 hours monthly applying Kafka and operating system (OS) security patches across our broker fleet. Amazon MSK now handles these updates automatically, maintaining our security posture without engineering intervention.
- Broker recovery – Although our self-hosted Kafka had automatic recovery capabilities, each incident required careful monitoring and occasional manual intervention. With Amazon MSK, node failures and storage degradation issues such as Amazon Elastic Block Store (Amazon EBS) slowdowns are handled entirely by AWS and resolved within minutes without our involvement.
- Authentication management – Our self-hosted implementation required password rotations for SASL/SCRAM authentication, a process that took two engineers several days to coordinate. The direct integration between Amazon MSK and AWS Identity and Access Management (IAM) minimized this overhead while strengthening our security controls (a client configuration sketch appears after this list).
- Version upgrades – Kafka version upgrades in our self-hosted environment required weeks of planning and testing, as well as weekend maintenance windows. Amazon MSK manages these upgrades automatically during off-peak hours, maintaining our SLAs without disruption.
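For reference, the following sketch shows the typical client-side settings for IAM authentication with Amazon MSK using the open source aws-msk-iam-auth library; the bootstrap endpoint is a placeholder, and the surrounding IAM role and policy setup is assumed to be handled separately.

```java
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;

public class MskIamClientConfig {
    // Typical client properties for IAM authentication against Amazon MSK,
    // relying on the aws-msk-iam-auth library (software.amazon.msk:aws-msk-iam-auth)
    // being on the client classpath.
    public static Properties iamAuthProperties(String bootstrapServers) {
        Properties props = new Properties();
        props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers); // MSK IAM listeners use port 9098
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
        props.put("sasl.mechanism", "AWS_MSK_IAM");
        props.put("sasl.jaas.config", "software.amazon.msk.auth.iam.IAMLoginModule required;");
        props.put("sasl.client.callback.handler.class",
                "software.amazon.msk.auth.iam.IAMClientCallbackHandler");
        return props;
    }
}
```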
These improvements proved especially valuable during high-traffic periods like Black Friday, when our team previously needed extensive operational readiness plans. Now, the built-in resiliency of Amazon MSK gives us dependable Kafka clusters that serve as mission-critical infrastructure for our business. The migration also made it possible to break our monolithic clusters into smaller, dedicated MSK clusters, which improved data isolation, provided better resource allocation, and enhanced performance predictability for high-priority workloads.
Lessons learned
Our migration to Amazon MSK revealed several key insights that can help other organizations modernize their Kafka infrastructure:
- Expert validation – Working with AWS Solutions Architects to validate our migration strategy caught several critical issues early. Although our team knew our applications well, external Kafka specialists identified potential problems with state management and consumer offset handling that we hadn't considered. This validation prevented costly missteps during the migration.
- Data verification – Comparing data across Kafka clusters proved challenging. We built tools to capture topic snapshots in Parquet format on Amazon Simple Storage Service (Amazon S3), enabling quick comparisons using Amazon Athena queries (a query sketch appears after this list). This approach gave us confidence that data remained consistent throughout the migration.
- Start small – Beginning with our smallest data universe in QA helped us refine our process. Each subsequent migration went more smoothly as we applied lessons from earlier iterations. This gradual approach helped us maintain system stability while building team confidence.
- Detailed planning – We created specific migration plans with each team, considering their unique requirements and constraints. For example, our machine learning pipeline needed special handling because of strict offset management requirements. This granular planning prevented downstream disruptions.
- Performance optimization – We found that using Amazon MSK provisioned throughput offered clear cost advantages when storage throughput became a bottleneck. This feature made it possible to improve cluster performance without scaling instance sizes or adding brokers, providing a more efficient solution to our throughput challenges.
- Documentation – Maintaining detailed migration runbooks proved invaluable. When we encountered similar issues across different migrations, having documented solutions saved significant troubleshooting time.
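As a simplified example of the data verification approach mentioned in the list above, the following sketch launches an Athena query that compares record counts between two hypothetical Parquet snapshot tables. The database, table, and column names and the S3 results location are illustrative assumptions, not our actual tooling.

```java
import software.amazon.awssdk.services.athena.AthenaClient;
import software.amazon.awssdk.services.athena.model.QueryExecutionContext;
import software.amazon.awssdk.services.athena.model.ResultConfiguration;
import software.amazon.awssdk.services.athena.model.StartQueryExecutionRequest;
import software.amazon.awssdk.services.athena.model.StartQueryExecutionResponse;

public class SnapshotComparison {
    public static void main(String[] args) {
        // Hypothetical tables holding Parquet snapshots of the same topic, one taken
        // from the self-hosted cluster and one from Amazon MSK.
        String query = "SELECT source, COUNT(*) AS records, COUNT(DISTINCT message_key) AS distinct_keys "
                + "FROM (SELECT 'self_hosted' AS source, message_key FROM kafka_snapshots.reviews_self_hosted "
                + "      UNION ALL "
                + "      SELECT 'msk' AS source, message_key FROM kafka_snapshots.reviews_msk) AS combined "
                + "GROUP BY source";

        try (AthenaClient athena = AthenaClient.create()) {
            StartQueryExecutionResponse response = athena.startQueryExecution(
                    StartQueryExecutionRequest.builder()
                            .queryString(query)
                            .queryExecutionContext(QueryExecutionContext.builder()
                                    .database("kafka_snapshots") // placeholder database
                                    .build())
                            .resultConfiguration(ResultConfiguration.builder()
                                    .outputLocation("s3://example-bucket/athena-results/") // placeholder bucket
                                    .build())
                            .build());
            System.out.println("Started Athena query: " + response.queryExecutionId());
        }
    }
}
```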
Conclusion
In this post, we showed you how we modernized our Kafka infrastructure by migrating to Amazon MSK. We walked through our decision-making process, the challenges we faced, and the strategies we employed. Our journey transformed Kafka operations from a resource-intensive, self-managed infrastructure into a streamlined, managed service, improving operational efficiency, platform reliability, and team productivity. For enterprises managing self-hosted Kafka infrastructure, our experience demonstrates that a successful transformation is achievable with proper planning and execution. As data streaming needs grow, modernizing infrastructure becomes a strategic imperative for maintaining competitive advantage.
For more information, visit the Amazon MSK product page, and explore the comprehensive Developer Guide to learn about the features available to help you build scalable and reliable streaming data applications on AWS.
About the authors

