Service is operating normally: [RESOLVED] Increased Error Rates

Severity: high | Provider: AWS | Mar 7, 2026 21:04
Category: storage
Storage Failure

Summary

Between 11:27 AM and 12:20 PM PST we experienced substantial error rates for S3 PUT/GET requests in the EU-CENTRAL-2 Region. Engineers were engaged immediately based on automated alarming. We identified the root cause as an issue with a subsystem responsible for assembling object bytes in storage. At 12:04 PM PST, we implemented mitigations and began observing early signs of recovery for S3. Error rates continued to improve, and other AWS services continued to recover until 12:50 PM PST, when we observed full recovery.

Impact

S3 PUT/GET requests in the EU-CENTRAL-2 Region saw substantial error rates between 11:27 AM and 12:20 PM PST. Other AWS services that depend on S3 were also affected, with recovery continuing until 12:50 PM PST.

Lessons Learned

AWS has experienced 5 incidents in the past year. Consider monitoring its status page for recurring patterns.

📊 Incidents related to storage have occurred 8 times across all providers in the past year.

💡 This incident is categorized as: Storage Failure. Consider implementing preventive measures specific to this failure category.
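One common client-side measure for elevated PUT/GET error rates is to retry transient failures with capped exponential backoff and jitter, so bursts of errors don't turn into hard failures or retry storms. The sketch below is a generic, illustrative helper, not AWS's mitigation or the official SDK retry logic (the AWS SDKs ship their own configurable retry modes); the function and parameter names are our own.

```python
import random
import time

def retry_with_backoff(operation, max_attempts=5, base_delay=0.5,
                       max_delay=8.0, retryable=(Exception,),
                       sleep=time.sleep):
    """Call `operation`, retrying on retryable errors with capped
    exponential backoff and full jitter between attempts."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except retryable:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the last error
            # Full jitter: sleep a random amount up to the capped backoff.
            delay = min(max_delay, base_delay * (2 ** attempt))
            sleep(random.uniform(0, delay))

# Hypothetical usage: wrap an S3 upload so transient 5xx responses
# (raised here as exceptions by the client library) are retried.
# retry_with_backoff(lambda: s3.put_object(Bucket="my-bucket",
#                                          Key="report.csv", Body=data))
```

In practice you would restrict `retryable` to the transient error types your client raises (e.g. throttling and 5xx responses) rather than retrying every exception, and rely on the SDK's built-in retry configuration where available.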