[Question]: How is the "Avg queue time" in GitHub Actions Insights calculated? #188437
-
Replies: 5 comments
-
The “Avg queue time” shown in GitHub Actions Insights reflects the time a job spends in GitHub’s queue before a runner starts executing it. It doesn’t include every delay after job setup begins: once the runner starts the “Set up job” phase, that time is no longer counted as queue time in the Insights metric.
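If you want the per-job numbers behind that average, a reasonable approximation from the REST API is `started_at - created_at` for each job returned by `GET /repos/{owner}/{repo}/actions/runs/{run_id}/jobs`. A minimal sketch (the sample payload below is made up; the real response has many more fields):

```python
from datetime import datetime

def queue_seconds(job: dict) -> float:
    """Approximate queue time: runner pickup minus job creation.

    `created_at` and `started_at` are fields on each job object returned by
    GET /repos/{owner}/{repo}/actions/runs/{run_id}/jobs.
    """
    created = datetime.fromisoformat(job["created_at"].replace("Z", "+00:00"))
    started = datetime.fromisoformat(job["started_at"].replace("Z", "+00:00"))
    return (started - created).total_seconds()

# Sample payload shaped like the API response (timestamps invented for illustration):
job = {"created_at": "2024-05-01T12:00:00Z", "started_at": "2024-05-01T12:00:45Z"}
print(queue_seconds(job))  # 45.0
```

Note this is an approximation of what Insights shows, not its documented formula — but it captures the same queued-until-runner-pickup window the replies here describe.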
-
Yes. In GitHub Actions Insights, "Avg queue time" measures the duration from when a job is queued until it begins the "Set up job" phase, excluding any subsequent delays such as init-container waits on self-hosted runners (for example, those managed via Azure Arc). Hope this helps 🙌
-
Yes, your assumption is correct. In GitHub Actions Insights, Avg queue time only measures the time from when the job is triggered until a runner is assigned and picks it up. Once the runner claims the job and begins the 'Set up job' phase, the queue-time clock stops. Because 'Initialize containers' is a process the runner manages after it has already accepted the job, any delays there (like waiting for ARC/Kubernetes resources to provision, or pulling heavy images) count toward the job's total execution time, not queue time. You won't see this bottleneck in the queue metrics; to track it, you'd have to monitor overall job execution durations or watch pod scheduling/initialization metrics directly in your ARC infrastructure.
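One way to monitor that bottleneck from the GitHub side: each job object from `GET /repos/{owner}/{repo}/actions/runs/{run_id}/jobs` includes a `steps` array with per-step `started_at`/`completed_at` timestamps, so you can pull out the 'Initialize containers' duration directly. A minimal sketch (the sample payload is invented; step names vary by workflow):

```python
from datetime import datetime

def step_seconds(job: dict, step_name: str) -> float:
    """Duration of one named step from a job object's `steps` array."""
    for step in job["steps"]:
        if step["name"] == step_name:
            start = datetime.fromisoformat(step["started_at"].replace("Z", "+00:00"))
            end = datetime.fromisoformat(step["completed_at"].replace("Z", "+00:00"))
            return (end - start).total_seconds()
    raise KeyError(f"no step named {step_name!r}")

# Made-up payload shaped like the API response:
job = {"steps": [
    {"name": "Set up job",
     "started_at": "2024-05-01T12:00:45Z", "completed_at": "2024-05-01T12:00:50Z"},
    {"name": "Initialize containers",
     "started_at": "2024-05-01T12:00:50Z", "completed_at": "2024-05-01T12:03:50Z"},
]}
print(step_seconds(job, "Initialize containers"))  # 180.0
```

A consistently large value here, paired with a small queue time, is the signature of the ARC/Kubernetes provisioning delay discussed in this thread.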
-
This log shows that GitHub Actions has successfully found a runner and assigned the job to it. From GitHub's perspective the job is no longer "in the queue" — it has officially started executing, so the "Avg queue time" metric stops ticking. However, because the self-hosted ARC runner operates in a Kubernetes environment (or Azure Arc in this user's specific case), it still needs to provision the actual pods and containers to run the workflow. When there is a resource shortage, Kubernetes leaves these pods in a Pending state while waiting for CPU or memory to free up. GitHub surfaces this Kubernetes-level provisioning delay exactly where you see it: as the duration of the "Initialize containers" step under "Set up job".
-
The Avg queue time in GitHub Actions Insights only measures the time between when a job is triggered and when it is assigned to a runner. Once the runner is assigned and the job starts (i.e., the "Set up job" phase begins), the queue-time calculation stops. This means that delays during container initialization are not included in Avg queue time. In your case, the delay in the init-container stage happens after the job has already started on a runner, so it is considered execution time, not queue time.

In short: if your self-hosted runner’s init container is waiting on resources before the Actions runner actually starts the job, that wait isn’t counted in the “Avg queue time” shown in Insights. The metric covers only the time between when GitHub schedules the job and when it actually begins running on a runner.
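Putting the thread's definition together, you can reproduce a run-level average yourself by averaging `started_at - created_at` over the jobs of a run. A minimal sketch (sample payloads invented; this mirrors the queued-until-runner-pickup window described above, not GitHub's documented internal formula):

```python
from datetime import datetime
from statistics import mean

def _iso(ts: str) -> datetime:
    """Parse the API's Zulu-suffixed ISO-8601 timestamps."""
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def avg_queue_seconds(jobs: list[dict]) -> float:
    """Average queue time across a run's jobs: runner pickup minus creation."""
    return mean(
        (_iso(j["started_at"]) - _iso(j["created_at"])).total_seconds()
        for j in jobs
    )

# Two made-up jobs: one queued 30 s, one queued 90 s.
jobs = [
    {"created_at": "2024-05-01T12:00:00Z", "started_at": "2024-05-01T12:00:30Z"},
    {"created_at": "2024-05-01T12:01:00Z", "started_at": "2024-05-01T12:02:30Z"},
]
print(avg_queue_seconds(jobs))  # 60.0
```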