---
title: Troubleshoot Pods Remain in a Pending State Scenario
description: This article helps you troubleshoot a scenario in which pods remain in a pending state.
ms.date: 01/12/2026
ms.author: jarrettr
ms.editor: v-jsitser
ms.reviewer: chiragpa, rorylen, v-ryanberg
ms.service: azure-kubernetes-service
keywords:
#Customer intent: As an Azure Kubernetes user, I want to troubleshoot a scenario in which pods remain in the Pending state in Azure Kubernetes Service (AKS).
ms.custom: sap:Create, Upgrade, Scale and Delete operations (cluster or nodepool)
---

# Troubleshoot pods that remain in the Pending state

This article helps you troubleshoot a scenario in which pods remain in the **Pending** state.

## Symptoms

You run `kubectl describe pod` for a pod, and the pod remains in the **Pending** state. When this issue occurs, the **Event** section displays **pod didn't trigger scale-up (it wouldn't fit if a new node is added)**. Additionally, the cluster-autoscaler doesn't scale up the node count.

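To reproduce the symptom, you can inspect the pod directly. The following is a minimal sketch; `my-pod` and the `default` namespace are placeholder names, and the event text shown in the comments is an abbreviated example of what this scenario typically produces:

```shell
# Inspect the pending pod ("my-pod" and "default" are placeholders).
kubectl describe pod my-pod --namespace default

# Abbreviated example of the Events section in this scenario:
#
#   Warning  FailedScheduling   0/3 nodes are available: 3 Insufficient cpu.
#   Normal   NotTriggerScaleUp  pod didn't trigger scale-up
#                               (it wouldn't fit if a new node is added)

# You can also list all pending pods across namespaces:
kubectl get pods --all-namespaces --field-selector=status.phase=Pending
```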
## Cause

These symptoms indicate one or more of the following situations:

- Even if a new node is added by the cluster-autoscaler, the pod can't be scheduled onto the new node. This condition occurs because the pod's resource requests exceed the maximum resources that are available on the node.

- The node is missing a resource that the pod requires (such as a graphics processing unit (GPU)).

- The pod has affinity or topology constraints, and the new nodes don't meet these requirements.

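The first situation can be illustrated by a hypothetical pod specification. In this sketch, the requested CPU and memory values are assumed to be larger than what any node in the pool can allocate (for example, a node size that provides 2 vCPUs and 7 GiB of memory), so adding another node of the same size can't help:

```yaml
# Hypothetical example: the requests exceed what a single node in the
# pool can allocate, so the cluster-autoscaler reports "pod didn't
# trigger scale-up (it wouldn't fit if a new node is added)".
apiVersion: v1
kind: Pod
metadata:
  name: oversized-pod
spec:
  containers:
    - name: app
      image: nginx    # placeholder image
      resources:
        requests:
          cpu: "4"        # more vCPUs than a node in the pool provides
          memory: "16Gi"  # more memory than the node's allocatable amount
```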
## Resolution

Review the pod resource request configuration (for example, CPU, memory, or GPU), and compare it with the node size. To make sure that pod placement can occur, you might have to adjust the node size or type, or adjust the resource request configuration for the pod.

If you rule out a resource constraint, make sure that affinity or taints aren't preventing scheduling.
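The checks above can be sketched with standard `kubectl` commands. The node and pod names are placeholders:

```shell
# Compare allocatable CPU and memory across all nodes at a glance:
kubectl get nodes -o custom-columns='NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory'

# Check for taints that might block scheduling:
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints'

# Review the pod's own affinity rules and tolerations ("my-pod" is a placeholder):
kubectl get pod my-pod -o jsonpath='{.spec.affinity}{"\n"}{.spec.tolerations}{"\n"}'
```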