Incident with Actions
Summary
On February 2, 2026, between 18:35 UTC and 22:15 UTC, GitHub Actions hosted runners were unavailable, with service degraded until full recovery at 23:10 UTC for standard runners and at February 3, 2026 00:30 UTC for larger runners. During this time, Actions jobs queued and timed out while waiting to acquire a hosted runner. Other GitHub features that leverage this compute infrastructure were similarly impacted, including Copilot Coding Agent, Copilot Code Review, CodeQL, Dependabot, GitHub Enterprise Importer, and Pages.
Impact
major
Timeline
[investigating] We are investigating reports of degraded performance for Actions.
via statuspage · [investigating] GitHub Actions hosted runners are experiencing high wait times across all labels. Self-hosted runners are not impacted.
via statuspage · [investigating] Actions is experiencing degraded availability. We are continuing to investigate.
via statuspage · [investigating] The team continues to investigate issues causing GitHub Actions jobs on hosted runners to remain queued for extended periods, with a percentage of jobs failing. We will continue to provide updates as we make progress toward mitigation.
via statuspage · [investigating] Pages is experiencing degraded performance. We are continuing to investigate.
via statuspage · [investigating] We continue to investigate failures impacting GitHub Actions hosted-runner jobs. We have identified the root cause and are working with our upstream provider to mitigate. This is also impacting GitHub features that rely on GitHub Actions (for example, Copilot Coding Agent and Dependabot).
via statuspage · [investigating] Copilot is experiencing degraded performance. We are continuing to investigate.
via statuspage · [investigating] We continue to investigate failures impacting GitHub Actions hosted-runner jobs. We're waiting on our upstream provider to apply the identified mitigations, and we're preparing to resume job processing as safely as possible.
via statuspage · [investigating] Our upstream provider has applied a mitigation to address queuing and job failures on hosted runners. Telemetry shows improvement, and we are monitoring closely for full recovery.
via statuspage · [investigating] Pages is operating normally.
via statuspage · [investigating] Copilot is operating normally.
via statuspage · [investigating] Actions is experiencing degraded performance. We are continuing to investigate.
via statuspage · [investigating] Based on our telemetry, most customers should see full recovery from failing GitHub Actions jobs on hosted runners. We are monitoring closely to confirm complete recovery. Other GitHub features that rely on GitHub Actions (for example, Copilot Coding Agent and Dependabot) should also see recovery.
via statuspage · [investigating] Actions is operating normally.
via statuspage · [resolved] On February 2, 2026, between 18:35 UTC and 22:15 UTC, GitHub Actions hosted runners were unavailable, with service degraded until full recovery at 23:10 UTC for standard runners and at February 3, 2026 00:30 UTC for larger runners. During this time, Actions jobs queued and timed out while waiting to acquire a hosted runner. Other GitHub features that leverage this compute infrastructure were similarly impacted, including Copilot Coding Agent, Copilot Code Review, CodeQL, Dependabot, GitHub Enterprise Importer, and Pages. All regions and runner types were impacted. Self-hosted runners on other providers were not impacted.

This outage was caused by a backend storage access policy change in our underlying compute provider that blocked access to critical VM metadata, causing all VM create, delete, reimage, and other operations to fail. More information is available at https://azure.status.microsoft/en-us/status/history/?trackingId=FNJ8-VQZ. This was mitigated by rolling back the policy change, starting at 22:15 UTC. As VMs came back online, our runners worked through the backlog of requests that hadn't timed out.

We are working with our compute provider to improve our incident response and engagement time, to improve early detection before such changes impact our customers, and to ensure safe rollout should similar changes occur in the future. We recognize this was a significant outage for users who rely on GitHub's workloads, and we apologize for the impact.
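The failure mode above, where jobs queue and then time out waiting for a runner, can be sketched as a bounded-wait polling loop with exponential backoff. This is an illustrative client-side pattern only, not GitHub's actual scheduler; the function names and defaults are hypothetical.

```python
import time

def acquire_runner(try_acquire, timeout_s=360.0, base_delay=1.0, max_delay=60.0,
                   sleep=time.sleep, clock=time.monotonic):
    """Poll for a hosted runner with exponential backoff.

    Mirrors the observed behavior: a job waits (with growing intervals) for a
    runner to become available, and fails once the overall timeout elapses.
    `try_acquire` is a hypothetical callable returning a runner or None.
    """
    deadline = clock() + timeout_s
    delay = base_delay
    while clock() < deadline:
        runner = try_acquire()
        if runner is not None:
            return runner
        sleep(min(delay, max_delay))  # back off, capped at max_delay
        delay *= 2
    # During the outage, every poll returned None until this point was reached.
    raise TimeoutError("no hosted runner became available before the timeout")

# Example: the third poll succeeds (sleep stubbed out for illustration).
attempts = iter([None, None, "runner-1"])
result = acquire_runner(lambda: next(attempts), sleep=lambda s: None)
```

Injecting `sleep` and `clock` keeps the sketch testable without real waiting; a production variant would also add jitter to avoid synchronized retries.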
via statuspage · [investigating] We are investigating reports of degraded performance for Actions.
via statuspage · [investigating] We are investigating an issue with Actions run start delays, impacting approximately 4% of users.
via statuspage · [investigating] We continue to investigate an issue causing Actions run start delays, impacting approximately 4% of users.
via statuspage · [investigating] We identified a bottleneck in our processing pipeline and have applied mitigations. We will continue to monitor for full recovery.
via statuspage · [investigating] Actions run delays have returned to normal levels.
via statuspage · [investigating] Actions is operating normally.
via statuspage · [resolved] On February 9, 2026, between 09:16 UTC and 15:12 UTC, GitHub Actions customers experienced run start delays. Approximately 0.6% of runs across 1.8% of repos were affected, with an average delay of 19 minutes for those delayed runs.

The incident occurred when increased load exposed a bottleneck in our event publishing system, causing one compute node to fall behind on processing Actions jobs. We mitigated by rebalancing traffic and increasing timeouts for event processing. We have since isolated performance-critical events to a new, dedicated publisher to prevent contention between events, and we have added safeguards to better tolerate processing timeouts.
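The remediation described above, routing performance-critical events to a dedicated publisher so bulk traffic cannot delay them, can be sketched with two independent queues. The event type names and queue split are assumptions for illustration, not GitHub's actual event schema.

```python
from queue import Queue

# Hypothetical classification: which event types are latency-sensitive.
CRITICAL_EVENTS = {"job_started", "job_completed"}

critical_publisher = Queue()  # dedicated publisher for job-state transitions
bulk_publisher = Queue()      # everything else (logs, annotations, telemetry)

def publish(event: dict) -> None:
    """Route an event to its publisher.

    Keeping critical events on their own queue means a backlog of bulk events
    (the contention seen in this incident) cannot delay run starts.
    """
    target = critical_publisher if event["type"] in CRITICAL_EVENTS else bulk_publisher
    target.put(event)

publish({"type": "job_started", "run_id": 1})
publish({"type": "log_chunk", "run_id": 1})
publish({"type": "job_completed", "run_id": 1})
```

Each queue would be drained by its own consumer, so a slow bulk consumer only grows the bulk backlog.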
via statuspage · [investigating] We are investigating reports of degraded performance for Actions.
via statuspage · [resolved] On February 23, 2026, between 15:00 UTC and 17:00 UTC, GitHub Actions experienced degraded performance. During that time, 1.8% of Actions workflow runs experienced delayed starts, with an average delay of 15 minutes. The issue was caused by a connection rebalancing event in our internal load balancing layer, which temporarily created uneven traffic distribution across sites and led to request throttling.

To prevent recurrence, we are tuning connection rebalancing behavior to spread client reconnections more gradually during load balancer reloads. We are also evaluating improvements to site-level traffic affinity to eliminate the uneven distribution at its source. We have overprovisioned critical paths to prevent impact if a similar event occurs before those workstreams finish. Finally, we are enhancing our monitoring to detect capacity imbalances proactively.
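Spreading client reconnections gradually during a load balancer reload, as the mitigation above describes, is commonly done by adding random jitter to each client's reconnect delay so clients do not all reconnect at once. This is a generic sketch of that technique; the parameters are illustrative, not GitHub's configuration.

```python
import random

def reconnect_delay(base: float, spread: float, rng=random.random) -> float:
    """Return a jittered reconnect delay in [base, base + spread).

    When a load balancer reloads, every client's connection drops at once.
    Jittering each client's reconnect spreads the reconnection storm over a
    window instead of concentrating it on one site in a single instant.
    """
    return base + rng() * spread

# Example: 1,000 dropped clients reconnect over a 30-second window
# instead of simultaneously.
delays = [reconnect_delay(0.0, 30.0) for _ in range(1000)]
```

The same idea generalizes: any synchronized retry or reconnect path benefits from jitter to avoid self-inflicted load spikes.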
Lessons Learned
⚠ GitHub has experienced 39 incidents in the past year. This frequency suggests systemic reliability challenges that may warrant additional monitoring.
📊 Incidents related to capacity and routing have occurred 22 times across all providers in the past year. This is one of the most common failure categories in cloud infrastructure.
💡 This incident is categorized as: Network / Routing, Capacity Issue. Consider implementing preventive measures specific to this failure category.
Similar Incidents
Elevated Latency on Queue Messages, Increased Message Retry Latency in iad1
Vercel · Mar 6, 2026
Multiple services are affected, service degradation
GitHub · Mar 5, 2026
Disruption with some GitHub services
GitHub · Mar 5, 2026
Internal Load Balancers Connectivity
DigitalOcean · Mar 5, 2026
Service impact: Increased Connectivity Issues and API Error Rates
AWS · Mar 3, 2026