Incident with Actions

Severity: high · GitHub Actions · Feb 23, 2026 16:17 · Duration: 5h 53m
Tags: capacity, routing
Categories: Network / Routing, Capacity Issue

Summary

On February 2, 2026, between 18:35 UTC and 22:15 UTC, GitHub Actions hosted runners were unavailable, with service degraded until full recovery at 23:10 UTC for standard runners and at February 3, 2026 00:30 UTC for larger runners. During this time, Actions jobs queued and timed out while waiting to acquire a hosted runner. Other GitHub features that leverage this compute infrastructure were similarly impacted, including Copilot Coding Agent, Copilot Code Review, CodeQL, Dependabot, GitHub Enterprise Importer, and Pages.

Impact

major

Timeline

Feb 2, 2026 19:03

[investigating] We are investigating reports of degraded performance for Actions

via statuspage
+4m
Feb 2, 2026 19:07

[investigating] GitHub Actions hosted runners are experiencing high wait times across all labels. Self-hosted runners are not impacted.

via statuspage
+36m
Feb 2, 2026 19:43

[investigating] Actions is experiencing degraded availability. We are continuing to investigate.

via statuspage
+2m
Feb 2, 2026 19:44

[investigating] The team continues to investigate issues causing GitHub Actions jobs on hosted runners to remain queued for extended periods, with a percentage of jobs failing. We will continue to provide updates as we make progress toward mitigation.

via statuspage
+4m
Feb 2, 2026 19:48

[investigating] Pages is experiencing degraded performance. We are continuing to investigate.

via statuspage
+39m
Feb 2, 2026 20:27

[investigating] The team continues to investigate issues causing GitHub Actions jobs on hosted runners to remain queued for extended periods, with a percentage of jobs failing. We will continue to provide updates as we make progress toward mitigation.

via statuspage
+46m
Feb 2, 2026 21:13

[investigating] We continue to investigate failures impacting GitHub Actions hosted-runner jobs. We have identified the root cause and are working with our upstream provider to mitigate. This is also impacting GitHub features that rely on GitHub Actions (for example, Copilot Coding Agent and Dependabot).

via statuspage
+14m
Feb 2, 2026 21:27

[investigating] Copilot is experiencing degraded performance. We are continuing to investigate.

via statuspage
+43m
Feb 2, 2026 22:10

[investigating] We continue to investigate failures impacting GitHub Actions hosted-runner jobs. We're waiting on our upstream provider to apply the identified mitigations, and we're preparing to resume job processing as safely as possible.

via statuspage
+43m
Feb 2, 2026 22:53

[investigating] Our upstream provider has applied a mitigation to address queuing and job failures on hosted runners. Telemetry shows improvement, and we are monitoring closely for full recovery.

via statuspage
+38m
Feb 2, 2026 23:31

[investigating] Pages is operating normally.

via statuspage
+11m
Feb 2, 2026 23:42

[investigating] Copilot is operating normally.

via statuspage
+2m
Feb 2, 2026 23:43

[investigating] Actions is experiencing degraded performance. We are continuing to investigate.

via statuspage
+6m
Feb 2, 2026 23:50

[investigating] Based on our telemetry, most customers should see full recovery from failing GitHub Actions jobs on hosted runners. We are monitoring closely to confirm complete recovery. Other GitHub features that rely on GitHub Actions (for example, Copilot Coding Agent and Dependabot) should also see recovery.

via statuspage
+1h 6m
Feb 3, 2026 00:56

[investigating] Actions is operating normally.

via statuspage
+0m
Feb 3, 2026 00:56

[resolved] On February 2, 2026, between 18:35 UTC and 22:15 UTC, GitHub Actions hosted runners were unavailable, with service degraded until full recovery at 23:10 UTC for standard runners and at February 3, 2026 00:30 UTC for larger runners. During this time, Actions jobs queued and timed out while waiting to acquire a hosted runner. Other GitHub features that leverage this compute infrastructure were similarly impacted, including Copilot Coding Agent, Copilot Code Review, CodeQL, Dependabot, GitHub Enterprise Importer, and Pages. All regions and runner types were impacted. Self-hosted runners on other providers were not impacted.

This outage was caused by a backend storage access policy change in our underlying compute provider that blocked access to critical VM metadata, causing all VM create, delete, reimage, and other operations to fail. More information is available at https://azure.status.microsoft/en-us/status/history/?trackingId=FNJ8-VQZ. This was mitigated by rolling back the policy change, which started at 22:15 UTC. As VMs came back online, our runners worked through the backlog of requests that hadn't timed out.

We are working with our compute provider to improve our incident response and engagement time, improve early detection before such changes impact our customers, and ensure safe rollout should similar changes occur in the future. We recognize this was a significant outage for users who rely on GitHub's workloads, and we apologize for the impact this had.

via statuspage
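
The resolution above mentions working toward earlier detection of provider-side failures before they impact customers. As a rough, hypothetical illustration of that direction (not GitHub's or Azure's actual tooling), the Python sketch below provisions a throwaway canary runner VM and pages on-call if it never becomes ready; provision_vm, vm_is_ready, and page_oncall are assumed stubs supplied by the caller.

    # Hypothetical early-detection canary for hosted-runner provisioning.
    # provision_vm, vm_is_ready, and page_oncall are illustrative stubs, not real APIs.
    import time

    PROVISION_TIMEOUT_S = 300  # page if a canary runner is not ready within 5 minutes
    POLL_INTERVAL_S = 10

    def run_canary(provision_vm, vm_is_ready, page_oncall) -> None:
        """Provision a throwaway VM and alert if provisioning stalls."""
        started = time.monotonic()
        vm_id = provision_vm()
        while not vm_is_ready(vm_id):
            if time.monotonic() - started > PROVISION_TIMEOUT_S:
                page_oncall(f"canary VM {vm_id} not ready after {PROVISION_TIMEOUT_S}s")
                return
            time.sleep(POLL_INTERVAL_S)

A production probe would also clean up the canary VM afterward and distinguish provider outages from probe failures, but even this shape could surface stalled VM operations ahead of customer-visible queueing.
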
+157h 21m
Feb 9, 2026 14:17

[investigating] We are investigating reports of degraded performance for Actions

via statuspage
+0m
Feb 9, 2026 14:17

[investigating] We are investigating an issue with Actions run start delays, impacting approximately 4% of users.

via statuspage
+38m
Feb 9, 2026 14:54

[investigating] We continue to investigate an issue causing Actions run start delays, impacting approximately 4% of users.

via statuspage
+32m
Feb 9, 2026 15:26

[investigating] We identified a bottleneck in our processing pipeline and have applied mitigations. We will continue to monitor for full recovery.

via statuspage
+20m
Feb 9, 2026 15:46

[investigating] Actions run delays have returned to normal levels.

via statuspage
+0m
Feb 9, 2026 15:46

[investigating] Actions is operating normally.

via statuspage
+0m
Feb 9, 2026 15:46

[resolved] On February 9, 2026, between 09:16 UTC and 15:12 UTC, GitHub Actions customers experienced run start delays. Approximately 0.6% of runs across 1.8% of repos were affected, with an average delay of 19 minutes for those delayed runs.

The incident occurred when increased load exposed a bottleneck in our event publishing system, causing one compute node to fall behind on processing Actions jobs. We mitigated by rebalancing traffic and increasing timeouts for event processing. We have since isolated performance-critical events to a new, dedicated publisher to prevent contention between events and added safeguards to better tolerate processing timeouts.

via statuspage
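
The fix described above, a dedicated publisher for performance-critical events plus safeguards around processing timeouts, follows a common pattern: give latency-sensitive events their own queue and workers so a backlog elsewhere cannot delay them, and bound how long any single event may hold a worker. The Python sketch below is a generic illustration under those assumptions, not GitHub's implementation; publish stands in for the downstream delivery call.

    # Generic sketch: isolate critical events on their own pipeline and bound
    # per-event processing time. `publish` is a hypothetical delivery function.
    import queue
    from concurrent.futures import ThreadPoolExecutor, TimeoutError

    PROCESS_TIMEOUT_S = 5

    critical_events: queue.Queue = queue.Queue()  # e.g. job-start events
    bulk_events: queue.Queue = queue.Queue()      # everything else

    def drain(q: queue.Queue, publish) -> None:
        """Publish events from one queue without letting a slow event stall the rest."""
        pool = ThreadPoolExecutor(max_workers=4)
        while True:
            event = q.get()
            future = pool.submit(publish, event)
            try:
                future.result(timeout=PROCESS_TIMEOUT_S)
            except TimeoutError:
                # Stop waiting; the slow event still occupies one worker, but newer
                # events on this queue are no longer blocked behind it.
                pass

Running drain(critical_events, publish) and drain(bulk_events, publish) on separate threads keeps a backlog of bulk events from contending with job-start events, which is the kind of isolation the postmortem describes.
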
+336h 31m
Feb 23, 2026 16:17

[investigating] We are investigating reports of degraded performance for Actions

via statuspage
+46m
Feb 23, 2026 17:03

[resolved] On February 23, 2026, between 15:00 UTC and 17:00 UTC, GitHub Actions experienced degraded performance. During this time, 1.8% of Actions workflow runs experienced delayed starts, with an average delay of 15 minutes. The issue was caused by a connection rebalancing event in our internal load balancing layer, which temporarily created uneven traffic distribution across sites and led to request throttling.

To prevent recurrence, we are tuning connection rebalancing behavior to spread client reconnections more gradually during load balancer reloads. We are also evaluating improvements to site-level traffic affinity to eliminate the uneven distribution at its source. We have overprovisioned critical paths to prevent impact if a similar event occurs before those workstreams finish. Finally, we are enhancing our monitoring to detect capacity imbalances proactively.

via statuspage
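
The first follow-up above, spreading client reconnections more gradually during load balancer reloads, is commonly achieved with jittered backoff so that clients disconnected at the same instant do not all reconnect to the same site at once. The Python sketch below shows the general idea under that assumption; connect is a placeholder for whatever re-establishes the client's connection.

    # Generic sketch: jittered, exponentially widening reconnect delays so that a
    # mass disconnect (e.g. a load-balancer reload) does not become a reconnect storm.
    import random
    import time

    BASE_DELAY_S = 1.0
    MAX_DELAY_S = 60.0

    def reconnect_with_jitter(connect, max_attempts: int = 8):
        for attempt in range(max_attempts):
            # Full jitter: uniform delay inside an exponentially growing window,
            # spreading reconnects out instead of clustering them on one site.
            time.sleep(random.uniform(0, min(MAX_DELAY_S, BASE_DELAY_S * 2 ** attempt)))
            try:
                return connect()
            except ConnectionError:
                continue
        raise ConnectionError("exhausted reconnect attempts")

Server-side, a similar effect can be approximated by draining connections in small batches during a reload rather than dropping them all at once.
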

Lessons Learned

GitHub has experienced 39 incidents in the past year. This frequency suggests systemic reliability challenges that may warrant additional monitoring.

📊 Incidents related to capacity and routing have occurred 22 times across all providers in the past year. This is one of the most common failure categories in cloud infrastructure.

💡 This incident is categorized as: Network / Routing, Capacity Issue. Consider implementing preventive measures specific to this failure category.