[RESOLVED] Increased Error Rates
Service is operating normally.
Summary
Between 11:27 AM and 12:20 PM PST, we experienced elevated error rates for S3 PUT and GET requests in the EU-CENTRAL-2 Region. Engineers were engaged immediately based on automated alarming. We identified the root cause as an issue with a subsystem responsible for assembling object bytes in storage. At 12:04 PM PST, we implemented mitigations and began observing early signs of recovery for S3. Error rates continued to improve, and other AWS services continued to recover until 12:50 PM PST, when we confirmed that the service was operating normally.
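The summary credits automated alarming with immediate engagement; customers can set up analogous alerting on their own buckets using S3 request metrics in CloudWatch. A minimal sketch, assuming Python with boto3, that request metrics are already enabled on the bucket, and with hypothetical names throughout (bucket, filter ID, SNS topic):

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-central-2")

# Alarm when server-side (5xx) errors on a bucket stay elevated for
# three consecutive minutes. Requires an S3 request-metrics filter on
# the bucket; "EntireBucket" is an assumed filter ID.
cloudwatch.put_metric_alarm(
    AlarmName="s3-5xx-errors-example-bucket",  # hypothetical name
    Namespace="AWS/S3",
    MetricName="5xxErrors",
    Dimensions=[
        {"Name": "BucketName", "Value": "example-bucket"},  # hypothetical
        {"Name": "FilterId", "Value": "EntireBucket"},
    ],
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=3,
    Threshold=10.0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    # Hypothetical SNS topic that pages the on-call rotation.
    AlarmActions=["arn:aws:sns:eu-central-2:123456789012:ops-alerts"],
)
```

An alarm like this fires within minutes of sustained 5xx errors, independent of the provider's own status updates.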
Lessons Learned
⚠ AWS has experienced 5 incidents in the past year. Consider monitoring its status page for recurring patterns.
📊 Incidents related to storage have occurred 8 times across all providers in the past year.
💡 This incident is categorized as a Storage Failure. Consider implementing preventive measures specific to this failure category; a client-side sketch follows below.
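As one such preventive measure, clients can make S3 reads and writes more resilient to elevated error rates like these. A minimal sketch, assuming Python with boto3 (the bucket name, key, and fallback behavior are illustrative assumptions, not anything prescribed by this report):

```python
import boto3
from botocore.config import Config
from botocore.exceptions import ClientError

# 'adaptive' retry mode layers client-side rate limiting on top of the
# SDK's exponential backoff with jitter, which helps ride out transient
# elevated-error windows like this incident.
s3 = boto3.client(
    "s3",
    config=Config(
        region_name="eu-central-2",
        retries={"max_attempts": 10, "mode": "adaptive"},
    ),
)

def put_with_fallback(bucket: str, key: str, body: bytes) -> bool:
    """Attempt a PUT; return False if it still fails after the SDK's
    retries so the caller can buffer the write and replay it later."""
    try:
        s3.put_object(Bucket=bucket, Key=key, Body=body)
        return True
    except ClientError as err:
        # Server-side (5xx) codes such as InternalError or SlowDown
        # indicate a service problem rather than a bad request.
        code = err.response["Error"]["Code"]
        print(f"PUT s3://{bucket}/{key} failed after retries: {code}")
        return False

# Hypothetical usage: spool the object for replay if S3 is degraded.
if not put_with_fallback("example-bucket", "reports/daily.json", b"{}"):
    pass  # e.g., enqueue to a local spool or SQS for later replay
```

Whether to buffer, fail fast, or write to a secondary region is a design choice; the point is simply not to treat a single failed request during a window like 11:27 AM to 12:20 PM as final.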
Similar Incidents
Service impact: Increased Connectivity Issues and API Error Rates (AWS · Mar 3, 2026)
Service disruption: Increased Error Rates (AWS · Mar 3, 2026)
Service degradation: Increased Error Rates (AWS · Mar 1, 2026)
Incident with Codespaces (GitHub · Feb 12, 2026)
Spaces Access Keys and DigitalOcean Container Registry (DigitalOcean · Dec 15, 2025)