Although Zalando is now one of Europe's leading online fashion destinations, it started in 2008 as a Berlin-based startup selling shoes online. What began with just a few brands and a single country quickly grew into a pan-European business, operating in 27 markets and serving more than 52 million active customers.
Fast forward to today, and Zalando isn't just an online retailer; it's a tech company at its core. With more than €14 billion in annual gross merchandise volume (GMV), the company realized that to serve fashion at scale, it needed to rely on more than just logistics and inventory. It needed data, not just to support the business, but to drive it.
In this post, we show how Zalando migrated its fast-serving layer data warehouse to Amazon Redshift to achieve better price-performance and scalability.
The scale and scope of Zalando's data operations
From personalized size recommendations that reduce returns to dynamic pricing, demand forecasting, targeted marketing, and fraud detection, data and AI are embedded across the organization.
Zalando's data platform operates at an impressive scale, managing over 20 petabytes of data in its lake and supporting numerous analytics and machine learning applications. The platform hosts more than 5,000 data products maintained by 350 decentralized teams and serves 6,000 monthly users, representing 80% of Zalando's corporate workforce. As a fully self-service data platform, it provides SQL analytics, orchestration, data discovery, and quality monitoring, empowering teams to build and manage data products independently.
This scale only made the need for modernization more pressing. It was clear that efficient data loading, dynamic compute scaling, and future-ready infrastructure were essential.
Challenges with the existing fast-serving layer (data warehouse)
To enable decisions across analytics, dashboards, and machine learning, Zalando uses a data warehouse that acts as a fast-serving layer and backbone for critical data and reporting use cases. This layer holds about 5,000 curated tables and views, optimized for fast, read-heavy workloads. Every week, more than 3,000 users, including analysts, data scientists, and business stakeholders, rely on this layer for quick insights.
But the incumbent data warehouse wasn't future proof. It was based on a monolithic cluster setup sized for peak loads, such as Monday mornings, when weekly and daily jobs pile up. As a result, the system sat underutilized 80% of the time, burning compute and incurring substantial "slack costs" from over-provisioned capacity, with potential monthly savings of over $30,000 if dynamic scaling were possible. Concurrency limitations resulted in high latency and disrupted business-critical reporting processes. The system's lack of elasticity led to poor cost-to-utilization ratios, while the absence of workload isolation between teams frequently caused operational incidents. Maintenance and scaling required constant vendor support, making it difficult to handle peak periods like Cyber Week due to instance scarcity. Moreover, the platform lacked modern features such as online query editors and proper auto scaling capabilities, and its slow feature development and limited community support further hindered Zalando's ability to innovate.
Solving for scale: Zalando's journey to a modern fast-serving layer
Zalando was looking for a solution that could meet its cost and performance targets through a simple lift-and-shift approach. Amazon Redshift was chosen for the proof of concept (POC) to address autoscaling and concurrency needs while reducing operational effort, and for its ability to integrate with Zalando's existing data platform and align with its overall data strategy.
The evaluation scope for the Redshift assessment covered the following key areas.
Performance and cost
The evaluation of Amazon Redshift demonstrated substantial performance improvements and cost benefits compared to the previous data warehousing platform.
- Redshift offered 3-5 times faster query execution time.
- Approximately 86% of distinct queries ran faster on Redshift.
- In a "Monday morning scenario," Redshift demonstrated 3 times faster accumulated execution time compared to the existing platform.
- For short queries, Redshift achieved 100% SLA compliance for queries in the 80-480 second range. For queries up to 80 seconds, 90% met the SLA.
- Redshift demonstrated 5x faster parallel query execution, handling significantly more concurrent queries than the existing data warehouse's maximum parallelism.
- For interactive usage, Redshift demonstrated strong performance, which is important for BI tool users, especially in parallel execution scenarios.
- Redshift features such as Automatic Table Optimization and Automated Materialized Views eliminated the need for data-producing teams to manually optimize table design, making it highly suitable for a central service offering (a minimal sketch follows this list).
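To make this concrete, the following is a minimal sketch of how a data-producing team could delegate physical table design to Automatic Table Optimization and then review what the service recommends; the table name is a hypothetical placeholder, not part of Zalando's schema.

```sql
-- Let Redshift choose distribution and sort keys automatically
-- (curated.orders is a hypothetical table name)
ALTER TABLE curated.orders ALTER DISTSTYLE AUTO;
ALTER TABLE curated.orders ALTER SORTKEY AUTO;

-- Review the recommendations Automatic Table Optimization has generated
SELECT type, database, table_id, ddl, auto_eligible
FROM svv_alter_table_recommendations;
```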
Architecture
Redshift successfully demonstrated workload isolation, such as separating transformation (ETL) workloads from serving workloads (BI, ad hoc queries, and so on) using Amazon Redshift data sharing. It also proved its versatility through integration with Spark and common file formats.
Security
Amazon Redshift successfully demonstrated end-to-end encryption, auditing capabilities, and comprehensive access controls with row-level and column-level security as part of the proof of concept.
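As a rough illustration of those access controls, the following sketch shows Redshift row-level and column-level security; the table, column, and role names are hypothetical examples, not Zalando's actual objects.

```sql
-- Row-level security: members of the de_analyst role only see German rows
CREATE RLS POLICY de_rows_only
WITH (country VARCHAR(2))
USING (country = 'DE');

ATTACH RLS POLICY de_rows_only ON curated.orders TO ROLE de_analyst;
ALTER TABLE curated.orders ROW LEVEL SECURITY ON;

-- Column-level security: expose only non-sensitive columns to the same role
GRANT SELECT (order_id, country, order_ts) ON curated.orders TO ROLE de_analyst;
```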
Developer productivity
The evaluation demonstrated significant improvements in developer efficiency. A baseline concept for authoring and distributing central deployment templates through AWS Service Catalog was successfully implemented. Additionally, Redshift showed impressive agility with its ability to deploy Redshift Serverless endpoints in minutes for ad hoc analytics, improving the team's ability to respond quickly to analytical needs.
Amazon Redshift migration strategy
This section outlines the approach Zalando took to migrate the fast-serving layer to Amazon Redshift.
From monolith to modular: Redesigning with Redshift
The migration strategy involved a complete re-architecture of the fast-serving layer, moving to Amazon Redshift with a multi-warehouse model that separates data producers from data consumers. Key components and principles of the target architecture include:
- Workload isolation: Use cases are isolated by instance or environment, with data shares facilitating data exchange between them. Data shares enable an easy fan-out of data from the producer warehouse to various consumer warehouses (see the sketch after this list). The producer and consumer warehouses can be either provisioned (such as for BI tools) or serverless (such as for analysts). This also allows data sharing between separate legal entities.
- Standardized data loading: A Data Loading API (proprietary to Zalando) was built to standardize data loading processes. The API supports incremental loading and performance optimizations. Implemented with AWS Step Functions and AWS Lambda, it detects changed Parquet files from Delta Lake metadata and uses Redshift Spectrum to load data into the Redshift producer warehouse.
- Using Redshift Serverless: Zalando aims to use Redshift Serverless wherever possible. Redshift Serverless offers flexibility, cost efficiency, and improved performance, particularly for the lightweight queries prevalent in BI dashboards. It also enables the deployment of Redshift Serverless endpoints in minutes for ad hoc analytics, improving developer productivity.
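The producer-to-consumer fan-out mentioned above relies on Amazon Redshift data sharing. The following is a minimal sketch under assumed names; the schema, share, and namespace values are hypothetical placeholders.

```sql
-- On the producer warehouse: publish the curated schema as a datashare
CREATE DATASHARE curated_share;
ALTER DATASHARE curated_share ADD SCHEMA curated;
ALTER DATASHARE curated_share ADD ALL TABLES IN SCHEMA curated;

-- Grant the share to a consumer warehouse, identified by its namespace GUID
GRANT USAGE ON DATASHARE curated_share TO NAMESPACE '<consumer-namespace-guid>';

-- On the consumer warehouse: mount the share as a local database and query it
CREATE DATABASE curated_db FROM DATASHARE curated_share OF NAMESPACE '<producer-namespace-guid>';
SELECT COUNT(*) FROM curated_db.curated.orders;
```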
The following diagram depicts Zalando's end-to-end Amazon Redshift multi-warehouse architecture, highlighting the producer-consumer model:

The core migration strategy was a lift-and-shift of code to avoid complex refactoring and meet deadlines.
The main principles were:
- Run tasks in parallel whenever possible.
- Minimize the workload for internal data teams.
- Decouple tasks to allow teams to schedule work flexibly.
- Maximize the work done by centrally managed partners.
Three-stage migration approach
The migration was broken down into three distinct stages to manage the transition effectively.
Stage 1: Data replication
Zalando's priority was creating a complete, synchronized copy of all target data tables from the previous data warehouse to Redshift. An automated process was implemented using Changehub, an internal tool built on Amazon Managed Workflows for Apache Airflow (MWAA), that monitors the previous system's logs and syncs data updates to Redshift approximately every 5-10 minutes, establishing the new data foundation without disrupting existing workflows.
Stage 2: Workload migration
The second stage focused on moving business logic (ETL) and MicroStrategy reporting to Redshift to significantly reduce the load on the legacy system. For ETL migration, a semi-automated approach was implemented using the Migvisor code converter to translate the scripts. MicroStrategy reporting was migrated by using MicroStrategy's ability to automatically generate Redshift-compatible queries based on the semantic layer.
Stage 3: Finalization and decommissioning
The final stage completes the transition by migrating all remaining data consumers and ingestion processes, leading to the full shutdown of the previous data warehouse. During this phase, all data pipelines are being rerouted to feed directly into Redshift, and long-term ownership of processes is being transitioned to the respective teams before the previous system is fully decommissioned.
Benefits and outcomes
A major infrastructure change took place at Zalando on October 30, 2024, switching 80% of analytics reporting from the previous data warehouse solution to Redshift. This migration successfully reduced operational risk for the critical Cyber Week period and enabled the decommissioning of the previous data warehouse, avoiding significant license fees.
The project resulted in substantial performance and stability improvements across the board.
Performance improvements
Key performance metrics demonstrate substantial improvements across multiple dimensions:
- Faster query execution: 75% of all queries now execute faster on Redshift.
- Improved reporting speed: High-priority reporting queries are significantly faster, with a 13% reduction in P90 execution time and a 23% reduction in P99 execution time.
- Drastic reduction in system load: The overall processing time for MicroStrategy (MSTR) reports has dramatically decreased. Peak Monday morning execution time dropped from 130 minutes to 52 minutes. In the first 4 weeks, total MSTR job duration was reduced by over 19,000 hours (equivalent to 2.2 years of compute time) compared to the previous system. This has led to far more consistent and reliable performance.
The following graph shows the elapsed duration of one of the critical Monday morning workloads on the old data warehouse and on Amazon Redshift.

Operational stability
Amazon Redshift has proven to be significantly more stable and reliable, successfully meeting the key objective of reducing operational risk.
- Report timeouts: Report timeouts, a primary concern, have been virtually eliminated.
- Critical business period performance: Redshift performed exceptionally well during the high-stress Cyber Week 2024. This is a stark contrast to the previous system, which suffered significant, financially impactful failures during the same period in 2022 and 2023.
- Data loading: For data producers, the consistency of data loading is critical, as delays can hold up numerous reports and cause direct business impact. The system relies on an "ETL Ready" event, which triggers report processing only after all required datasets have been loaded. Since the migration to Redshift, the timing of this event has become significantly more consistent, improving the reliability of the entire data pipeline.
The following diagram shows the consistency of the ETL Ready event after migrating to Amazon Redshift.

End user experience
The reduction in total execution time of the Monday morning loads has dramatically improved end-user productivity. This is the time needed to process the full batch of scheduled reports (peak load), which directly translates into wait times and productivity for end users, since this is when most users need their weekly reports for their business. The following graphs show typical Mondays before and after the change and how Amazon Redshift handles the MSTR queue, providing a much better end user experience.
MSTR queue on 28/10/2024 (before change)
MSTR queue on 02/12/25 (after change)
Learnings and unexpected challenges
Navigating automatic optimization in a multi-warehouse architecture
One of the most significant challenges Zalando encountered during the migration involves Redshift's multi-warehouse architecture and its interaction with automatic table maintenance. The Redshift architecture is designed for workload isolation: a central producer warehouse for data loading, and multiple consumer warehouses for analytical queries. Data and associated objects reside solely on the producer and are shared through a Redshift datashare.
The core issue: Redshift's Automatic Table Optimization (ATO) operates exclusively on the producer warehouse. This extends to other performance features such as Automated Materialized Views and automatic query rewriting. Consequently, these optimization processes were unaware of query patterns and workloads on the consumer warehouses. For example, MicroStrategy reports running heavy analytical queries on the consumer side were outside the scope of these automated features. This led to suboptimal data models and significant performance impacts, particularly for tables with AUTO-set distribution and sort keys.
To address this, a two-pronged approach was implemented:
1. Collaborative manual tuning: Zalando worked closely with the AWS Database Engineering team, which provides holistic performance checks and tailored recommendations for distribution and sort keys across all warehouses.
2. Scheduled table maintenance: Zalando implemented a daily VACUUM process for tables with over 5% unsorted data, ensuring data organization and query performance (see the sketch after this list).
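The daily maintenance job can be thought of along the lines of the following sketch, which flags tables above the 5% unsorted threshold and re-sorts them; the table name is illustrative and the exact implementation at Zalando may differ.

```sql
-- Flag tables whose unsorted share exceeds 5%
SELECT "schema", "table", unsorted
FROM svv_table_info
WHERE unsorted > 5
ORDER BY unsorted DESC;

-- Re-sort and reclaim space for a flagged table
VACUUM FULL curated.orders TO 99 PERCENT;
```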
Additionally, the following data distribution strategy was implemented (a brief DDL sketch follows this list):
- KEY distribution: Explicitly defined DISTKEY for tables with clear JOIN conditions.
- EVEN distribution: Used for large fact tables without clear join keys.
- ALL distribution: Applied to smaller dimension tables (under 4 million rows).
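The following hypothetical DDL shows what these three choices look like in practice; the table and column names are illustrative, not Zalando's actual schema.

```sql
-- KEY distribution: co-locate fact rows on a frequently joined column
CREATE TABLE curated.orders (
    order_id    BIGINT,
    customer_id BIGINT,
    order_ts    TIMESTAMP,
    amount      DECIMAL(12, 2)
)
DISTSTYLE KEY DISTKEY (customer_id)
SORTKEY (order_ts);

-- EVEN distribution: large fact table with no dominant join key
CREATE TABLE curated.clickstream (
    event_id BIGINT,
    page_url VARCHAR(300),
    event_ts TIMESTAMP
)
DISTSTYLE EVEN;

-- ALL distribution: small dimension table replicated to every node
CREATE TABLE curated.dim_country (
    country_code VARCHAR(2),
    country_name VARCHAR(100)
)
DISTSTYLE ALL;
```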
This proactive approach has given Zalando better control over cluster performance and mitigated data skew issues. Zalando is encouraged that AWS is working to include cross-cluster workload awareness in a future Redshift release, which should further optimize the multi-warehouse setup.
CTEs and execution plans
Common table expressions (CTEs) are a powerful tool for structuring complex queries by breaking them down into logical, readable steps. Analysis of query performance identified optimization opportunities in CTE usage patterns.
Performance monitoring revealed that Redshift's query engine would sometimes recompute the logic of a nested or repeatedly referenced CTE from scratch each time it was called within the same SQL statement, instead of writing the CTE's result to an in-memory temporary table for reuse.
Two strategies proved effective in addressing this problem:
- Convert to a materialized view: CTEs used frequently across multiple queries, or with particularly complex logic, were converted into materialized views (MVs). This pre-computes the result, making the data readily available without re-running the underlying logic.
- Use explicit temporary tables: For CTEs used multiple times within a single, complex query, the CTE's result was explicitly written into a temporary table at the beginning of the transaction. For example, within MicroStrategy, the "intermediate table type" setting was changed from the default CTE to "Temporary table."
Implementing either materialized views or temporary tables ensures the complex logic is computed only once. This approach eliminated the recomputation issue and significantly improved the performance of multi-layered SQL queries.
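A simplified sketch of the two remedies, using hypothetical table and column names, might look like the following.

```sql
-- Option 1: materialized view for logic reused across many queries
CREATE MATERIALIZED VIEW mv_weekly_revenue AS
SELECT country, DATE_TRUNC('week', order_ts) AS week_start, SUM(amount) AS revenue
FROM curated.orders
GROUP BY country, DATE_TRUNC('week', order_ts);

-- Option 2: explicit temporary table for a CTE referenced several times in one statement
CREATE TEMP TABLE tmp_weekly_revenue AS
SELECT country, DATE_TRUNC('week', order_ts) AS week_start, SUM(amount) AS revenue
FROM curated.orders
GROUP BY country, DATE_TRUNC('week', order_ts);

-- The heavy aggregation is computed once and can now be referenced repeatedly
SELECT cur.country, cur.week_start, cur.revenue, cur.revenue - prev.revenue AS wow_change
FROM tmp_weekly_revenue cur
JOIN tmp_weekly_revenue prev
  ON prev.country = cur.country
 AND prev.week_start = cur.week_start - INTERVAL '7 days';
```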
Optimizing memory usage by right-sizing VARCHAR columns
It may seem like a minor detail, but defining the appropriate length for VARCHAR columns can have a surprising and significant impact on query performance. This was discovered firsthand while investigating the root cause of slow queries that were exhibiting high amounts of disk spill.
The issue stemmed from the data loading API tool, which is responsible for syncing data from Delta Lake tables into Redshift. Because Delta Lake's StringType datatype doesn't have a defined length, the tool defaulted to creating Redshift columns with a very large VARCHAR length (such as VARCHAR(16384)).
When a query is executed, the Redshift query engine allocates memory for in-transit data based on the column's defined size, not the actual size of the data it contains. This meant that for a column containing strings of only 50 characters but defined as VARCHAR(16384), the engine would reserve a vastly oversized block of memory. This excessive memory allocation led directly to high disk spill, where intermediate query results overflowed from memory to disk, drastically slowing down execution.
To resolve this, a new process was implemented requiring data teams to explicitly define appropriate column lengths during object deployment. Analyzing the actual data and setting sensible VARCHAR sizes (such as VARCHAR(100) instead of VARCHAR(16384)) significantly improved memory usage, reduced disk spill, and boosted overall query speed. This change underscores the importance of precision in data definition for an optimized Redshift environment.
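A hypothetical before-and-after illustrates the fix; Redshift generally allows resizing a VARCHAR column in place as long as existing values fit the new length, and the names below are illustrative.

```sql
-- Check how long the stored values actually are
SELECT MAX(OCTET_LENGTH(customer_note)) AS max_bytes
FROM curated.orders;

-- Right-size the over-provisioned column (previously VARCHAR(16384))
ALTER TABLE curated.orders
ALTER COLUMN customer_note TYPE VARCHAR(100);
```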
Future outlook
Central to Zalando's strategy is the shift to a serverless-based warehouse topology. This move enables automatic scaling to meet fluctuating analytical demands, from seasonal sales peaks to new team projects, all without manual intervention. The approach allows data teams to focus exclusively on producing insights that drive innovation, ensuring platform performance aligns with business growth.
As the platform scales, responsible management is paramount. The integration of AWS Lake Formation creates a centralized governance model for secure, fine-grained data access, enabling safe data democratization across the organization. At the same time, Zalando is embedding a strong FinOps culture by establishing unified cost management processes. This provides data owners with a comprehensive, 360-degree view of their costs across Redshift services, empowering them with actionable insights to optimize spending and align it with business value. Ultimately, the goal is to ensure every investment in Zalando's data platform is maximized for business impact.
Conclusion
In this post, we showed how Zalando's migration to Amazon Redshift has transformed its data platform, making the company a more data-driven fashion tech leader. The move has delivered significant improvements across key areas, including enhanced performance, increased stability, reduced operational costs, and improved data consistency. Moving forward, a serverless-based architecture, centralized governance with AWS Lake Formation, and a strong FinOps culture will continue to drive innovation and maximize business impact.
If you're interested in learning more about Amazon Redshift capabilities, we recommend watching the recent What's new with Amazon Redshift session on the AWS Events channel for an overview of the features recently added to the service. You can also explore the self-service, hands-on Amazon Redshift labs to experiment with key Amazon Redshift functionalities in a guided way.
Contact your AWS account team to learn how we can help you modernize your data warehouse infrastructure.
About the authors

