Today, we’re announcing three new capabilities for Amazon S3 Storage Lens that offer you deeper insights into your storage performance and usage patterns. With the addition of performance metrics, support for analyzing billions of prefixes, and direct export to Amazon S3 Tables, you have the tools you need to optimize application performance, reduce costs, and make data-driven decisions about your Amazon S3 storage strategy.
New performance metric categories
S3 Storage Lens now includes eight new performance metric categories that help identify and resolve performance constraints across your organization. These are available at the organization, account, bucket, and prefix levels. For example, the service helps you identify small objects in a bucket or prefix that can slow down application performance. This can be mitigated by batching small objects or using the Amazon S3 Express One Zone storage class for higher performance small object workloads.
To access the new performance metrics, you need to enable performance metrics in the S3 Storage Lens advanced tier when creating a new Storage Lens dashboard or editing an existing configuration.
| Metric category | Details | Use case | Mitigation |
| --- | --- | --- | --- |
| Read request size | Distribution of read request sizes (GET) by day | Identify datasets with small read request patterns that slow down performance | Small requests: batch small objects or use Amazon S3 Express One Zone for high-performance small object workloads |
| Write request size | Distribution of write request sizes (PUT, POST, COPY, and UploadPart) by day | Identify datasets with small write request patterns that slow down performance | Large requests: parallelize requests, use MPU, or use AWS CRT |
| Storage size | Distribution of object sizes | Identify datasets with small objects that slow down performance | Small object sizes: consider bundling small objects |
| Concurrent PUT 503 errors | Number of 503 errors due to concurrent PUT operations on the same object | Identify prefixes with concurrent PUT throttling that slows down performance | For a single writer, adjust retry behavior or use Amazon S3 Express One Zone. For multiple writers, use a consensus mechanism or use Amazon S3 Express One Zone |
| Cross-Region data transfer | Bytes transferred and requests sent across Regions, by Region | Identify potential performance and cost degradation due to cross-Region data access | Co-locate compute with data in the same AWS Region |
| Unique objects accessed | Number or percentage of unique objects accessed per day | Identify datasets where a small subset of objects is frequently accessed. These can be moved to a higher performance storage tier for better performance | Consider moving active data to Amazon S3 Express One Zone or other caching solutions |
| FirstByteLatency (existing Amazon CloudWatch metric) | Daily average of the first byte latency metric | The daily average per-request time from the complete request being received to when the response starts to be returned | |
| TotalRequestLatency (existing Amazon CloudWatch metric) | Daily average of total request latency | The daily average elapsed per-request time from the first byte received to the last byte sent | |
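
If you’d rather script the dashboard setup than click through the console, the same configuration can be applied with the PutStorageLensConfiguration API. Here’s a minimal boto3 sketch that enables the advanced tier metric groups; the `PerformanceMetrics` field name is my assumption, mirroring the naming of the existing groups, so verify it against the latest API reference and use a recent SDK version.

```python
import boto3

s3control = boto3.client("s3control")
ACCOUNT_ID = "111122223333"  # placeholder account ID

# Enable advanced-tier metrics on a Storage Lens dashboard via the API.
# "PerformanceMetrics" is an assumed field name; confirm it in the current
# API reference before relying on it.
s3control.put_storage_lens_configuration(
    ConfigId="my-performance-dashboard",
    AccountId=ACCOUNT_ID,
    StorageLensConfiguration={
        "Id": "my-performance-dashboard",
        "IsEnabled": True,
        "AccountLevel": {
            "ActivityMetrics": {"IsEnabled": True},
            "AdvancedCostOptimizationMetrics": {"IsEnabled": True},
            "AdvancedDataProtectionMetrics": {"IsEnabled": True},
            "DetailedStatusCodesMetrics": {"IsEnabled": True},
            "PerformanceMetrics": {"IsEnabled": True},  # assumed field name
            "BucketLevel": {
                "ActivityMetrics": {"IsEnabled": True},
                # Prefix aggregation, as selected later in the walkthrough
                "PrefixLevel": {"StorageMetrics": {"IsEnabled": True}},
            },
        },
    },
)
```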
How it works
On the Amazon S3 console, I choose Create Storage Lens dashboard to create a new dashboard. You can also edit an existing dashboard configuration. I then configure general settings such as the Dashboard name, Status, and optional Tags. Then, I choose Next.

Next, I define the scope of the dashboard by selecting Include all Regions and Include all buckets, or by specifying the Regions and buckets to be included.

I opt in to the Advanced tier in the Storage Lens dashboard configuration, select Performance metrics, then choose Next.

Next, I select Prefix aggregation as an additional metrics aggregation, then leave the rest of the settings as default before I choose Next.

I select the Default metrics report, then General purpose bucket as the bucket type, and then select the Amazon S3 bucket in my AWS account as the Destination bucket. I leave the rest of the settings as default, then choose Next.

I review all the information before I choose Submit to finalize the process.

After it’s enabled, I receive daily performance metrics directly in the Storage Lens console dashboard. You can also choose to export the report in CSV or Parquet format to any bucket in your account, or publish it to Amazon CloudWatch. The performance metrics are aggregated and published daily and are available at multiple levels: organization, account, bucket, and prefix. In this dropdown menu, I choose % concurrent PUT 503 error for the Metric, Last 30 days for the Date range, and 10 for the Top N buckets.
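
If you publish to CloudWatch, you can also pull the daily datapoints programmatically. The following is a minimal boto3 sketch assuming the new metrics follow the existing Storage Lens CloudWatch conventions (the AWS/S3/Storage-Lens namespace and its lowercase dimensions); the metric name below is a placeholder, so use the exact names from the console’s metric dropdown.

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)

# Fetch a daily Storage Lens metric published to CloudWatch.
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/S3/Storage-Lens",
    MetricName="ConcurrentPut503ErrorCount",  # placeholder metric name
    Dimensions=[
        {"Name": "configuration_id", "Value": "my-performance-dashboard"},
        {"Name": "bucket_name", "Value": "my-bucket"},  # placeholder bucket
    ],
    StartTime=end - timedelta(days=30),
    EndTime=end,
    Period=86400,  # metrics are aggregated and published once per day
    Statistics=["Average"],
)
for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"].date(), point["Average"])
```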

The Concurrent PUT 503 error count metric tracks the number of 503 errors generated by simultaneous PUT operations on the same object. Throttling errors can degrade application performance. For a single writer, adjust retry behavior or use a higher performance storage tier such as Amazon S3 Express One Zone to mitigate concurrent PUT 503 errors. For a multiple-writers scenario, use a consensus mechanism to avoid concurrent PUT 503 errors, or use a higher performance storage tier such as Amazon S3 Express One Zone.
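
For the single-writer case, adjusting retry behavior is often just an SDK configuration change. Here’s a short boto3 sketch using the SDK’s built-in adaptive retry mode; the bucket and key names are placeholders.

```python
import boto3
from botocore.config import Config

# Single-writer mitigation: let the SDK back off more gracefully on 503s.
# Adaptive retry mode adds client-side rate limiting on top of the standard
# exponential backoff; the values here are illustrative.
s3 = boto3.client(
    "s3",
    config=Config(
        retries={
            "max_attempts": 10,  # retry throttled PUTs more times
            "mode": "adaptive",  # token-bucket rate limiting on retries
        }
    ),
)
s3.put_object(Bucket="my-bucket", Key="shared-object", Body=b"payload")
```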
Complete analytics for all prefixes in your S3 buckets
S3 Storage Lens now supports analytics for all prefixes in your S3 buckets through a new Expanded prefixes metrics report. This capability removes the previous limitations that restricted analysis to prefixes meeting a 1% size threshold and a maximum depth of 10 levels. You can now track up to billions of prefixes per bucket for analysis at the most granular prefix level, regardless of size or depth.
The Expanded prefixes metrics report includes all existing S3 Storage Lens metric categories: storage usage, activity metrics (requests and bytes transferred), data protection metrics, and detailed status code metrics.
How to get started
I follow the same steps outlined in the How it works section to create or update the Storage Lens dashboard. In Step 4 on the console, where you select export options, you can select the new Expanded prefixes metrics report. Thereafter, I can export the expanded prefixes metrics report in CSV or Parquet format to any general purpose bucket in my account for efficient querying of my Storage Lens data.

Good to know
This enhancement addresses scenarios where organizations need granular visibility across their entire prefix structure. For example, you can identify prefixes with incomplete multipart uploads to reduce costs, track compliance with encryption and replication requirements across your entire prefix structure, and detect performance issues at the most granular level.
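
As an illustration of the incomplete multipart upload use case, here’s a short Python sketch that scans an exported Parquet report with pandas. The object path, column names, and metric name follow the general Storage Lens export schema but are assumptions here; inspect your own report for the exact layout.

```python
import pandas as pd  # with s3fs installed, read_parquet accepts s3:// URIs

# Scan an exported expanded-prefixes report for prefixes holding incomplete
# multipart upload bytes. Path, columns, and metric name are assumptions.
report = pd.read_parquet("s3://my-destination-bucket/StorageLens/report.parquet")
report["metric_value"] = report["metric_value"].astype(float)

incomplete = report[
    (report["record_type"] == "PREFIX")
    & (report["metric_name"] == "IncompleteMPUStorageBytes")  # assumed name
    & (report["metric_value"] > 0)
]
top = incomplete.sort_values("metric_value", ascending=False).head(20)
print(top[["bucket_name", "record_value", "metric_value"]])
```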
Export S3 Storage Lens metrics to S3 Tables
S3 Storage Lens metrics can now be automatically exported to S3 Tables, a fully managed feature on AWS with built-in Apache Iceberg support. This integration provides daily automated delivery of metrics to AWS managed S3 Tables for fast querying, without requiring additional processing infrastructure.
How to get started
I start by following the process outlined in Step 5 on the console, where I choose the export destination. This time, I choose Expanded prefixes metrics report. In addition to General purpose bucket, I choose Table bucket.
The new Storage Lens metrics are exported to new tables in an AWS managed bucket, aws-s3.

I select the expanded_prefixes_activity_metrics table to view API usage metrics for expanded prefix reports.

I can preview the table on the Amazon S3 console or use Amazon Athena to query it.
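
Here’s a minimal boto3 sketch of querying the table with Athena. The catalog and database names, and the column and metric names in the SQL, are assumptions about how the S3 Tables export surfaces in Athena; adjust them to match what you see in your account.

```python
import boto3

athena = boto3.client("athena")

# Find the busiest prefixes in the exported activity metrics table.
query = """
SELECT bucket_name, record_value AS prefix, metric_value AS request_count
FROM expanded_prefixes_activity_metrics
WHERE metric_name = 'AllRequests'  -- assumed metric name
ORDER BY metric_value DESC
LIMIT 10
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={
        "Catalog": "s3tablescatalog",       # assumed catalog name
        "Database": "aws_s3_storage_lens",  # assumed namespace
    },
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print("Query execution ID:", execution["QueryExecutionId"])
```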

Good to know
S3 Tables integration with S3 Storage Lens simplifies metric analysis using familiar SQL tools and AWS analytics services such as Amazon Athena, Amazon QuickSight, Amazon EMR, and Amazon Redshift, without requiring a data pipeline. The metrics are automatically organized for optimal querying, with custom retention and encryption options to suit your needs.
This integration enables cross-account and cross-Region analysis, custom dashboard creation, and data correlation with other AWS services. For example, you can combine Storage Lens metrics with S3 Metadata to analyze prefix-level activity patterns and identify objects in prefixes with cold data that are eligible for transition to lower-cost storage tiers.
For your agentic AI workflows, you can use natural language to query S3 Storage Lens metrics in S3 Tables with the S3 Tables MCP Server. Agents can ask questions such as “which buckets grew the most last month?” or “show me storage costs by storage class” and get instant insights from your observability data.
Now available
All three enhancements are available in all AWS Regions where S3 Storage Lens is currently offered (except the China Regions and AWS GovCloud (US)).
These features are included in the Amazon S3 Storage Lens Advanced tier at no additional charge beyond standard advanced tier pricing. For the S3 Tables export, you pay only for S3 Tables storage, maintenance, and queries. There is no additional charge for the export functionality itself.
To learn more about Amazon S3 Storage Lens performance metrics, support for billions of prefixes, and export to S3 Tables, refer to the Amazon S3 User Guide. For pricing details, visit the Amazon S3 pricing page.