Code search experiencing degraded performance
Summary
Between 2026-02-23 19:10 and 2026-02-24 00:46 UTC, all lexical code search queries on GitHub.com and the code search API were significantly slowed, and between 5 and 10% of search queries timed out. This was caused by a single customer who had created a network of hundreds of orchestrated accounts that searched with a uniquely expensive query. The query concentrated load on a single hot shard within the search index, slowing down all queries. After we identified the source of the load and stopped the traffic, latency returned to normal.
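The hot-shard mechanics are easy to reproduce in miniature. Below is a minimal sketch in Python, assuming a routing scheme where a stable hash of the query's dominant term picks the shard that does the heavy lifting; the names (shard_for, NUM_SHARDS) and the shard count are hypothetical, and the report does not describe GitHub's actual sharding scheme. The point it illustrates: hundreds of accounts issuing the same expensive query all land on one shard while the others stay idle.

import hashlib
from collections import Counter

NUM_SHARDS = 32  # hypothetical shard count

def shard_for(term: str) -> int:
    """Route a query term to a shard by stable hash (illustrative only)."""
    digest = hashlib.sha256(term.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS

# Hundreds of orchestrated accounts, each issuing the same expensive query.
expensive_query = "some_rare_expensive_term"
load = Counter(shard_for(expensive_query) for _ in range(400))

# Normal traffic spreads across shards; this traffic concentrates
# all 400 requests on a single shard.
print(load)  # e.g. Counter({17: 400}) -- one hot shard, 31 idle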
Impact
Minor
Timeline
[investigating] We are investigating reports of degraded performance for some GitHub services.
[investigating] We are continuing to investigate elevated latency and timeouts for code search.
[investigating] Elevated latency and timeouts for code search are isolated to a single shard experiencing elevated CPU. We are continuing to investigate the cause and steps to mitigate.
[investigating] Elevated latency and timeouts for code search are isolated to a single shard experiencing elevated CPU. We are taking steps to isolate and mitigate the affected shard.
[investigating] Customers using code search continue to see increased latency and timeout errors. We are working to mitigate issues on the affected shard.
[investigating] We have identified a cause for the latency and timeouts and have implemented a fix. We are observing initial recovery now.
[resolved] Between 2026-02-23 19:10 and 2026-02-24 00:46 UTC, all lexical code search queries on GitHub.com and the code search API were significantly slowed, and between 5 and 10% of search queries timed out. This was caused by a single customer who had created a network of hundreds of orchestrated accounts that searched with a uniquely expensive query. The query concentrated load on a single hot shard within the search index, slowing down all queries. After we identified the source of the load and stopped the traffic, latency returned to normal.

To avoid a recurrence, we are making a number of improvements to our systems, including: improved rate limiting that accounts for highly skewed load on hot shards, improved system resilience for when a small number of shards time out, improved tooling to recognize abusive actors, and capabilities that will allow us to shed load on a single shard in emergencies.
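One of those remediation items, rate limiting that accounts for skewed per-shard load, can be sketched as a token bucket kept per shard rather than per client: when any single shard saturates, further queries bound for it are shed rather than queued. This is a minimal illustration, not GitHub's implementation; the names (ShardBucket, admit) and the rate, capacity, and cost figures are all hypothetical.

import time
from dataclasses import dataclass, field

@dataclass
class ShardBucket:
    """Token bucket tracking the capacity of one index shard (illustrative)."""
    rate: float = 100.0      # tokens refilled per second (hypothetical)
    capacity: float = 200.0  # burst allowance (hypothetical)
    tokens: float = 200.0
    last: float = field(default_factory=time.monotonic)

    def try_acquire(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False  # shard saturated: shed this query instead of queueing it

buckets = {shard_id: ShardBucket() for shard_id in range(32)}

def admit(query_shard: int, estimated_cost: float) -> bool:
    """Admit a query only if its target shard has headroom.

    Expensive queries carry a higher estimated cost, so a flood of one
    expensive query drains its shard's bucket quickly while queries
    bound for other shards continue to be admitted.
    """
    return buckets[query_shard].try_acquire(estimated_cost)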
Lessons Learned
⚠ GitHub has experienced 39 incidents in the past year. This frequency suggests systemic reliability challenges that may warrant additional monitoring.
📊 Incidents related to network and API issues have occurred 216 times across all providers in the past year. This is one of the most common failure categories in cloud infrastructure.
💡 This incident is categorized as: Network / Routing, API Issue. Consider implementing preventive measures specific to this failure category.
Similar Incidents
Elevated 500 errors from Browser Rendering REST API
Cloudflare · Mar 11, 2026
Elevated errors on Claude.ai (including login issues for Claude Code)
Anthropic · Mar 11, 2026
Degraded experience with Copilot Code Review
GitHub · Mar 11, 2026
Increased errors with ChatGPT file downloads
OpenAI · Mar 10, 2026
Increased errors on ChatGPT File Uploads
OpenAI · Mar 10, 2026