
Commit 4d0aa59

Update troubleshoot-pods-remain-pending-state-scenario.md
Edit review per CI 8944
1 parent c81a1a6 commit 4d0aa59

1 file changed: 11 additions & 11 deletions

troubleshoot-pods-remain-pending-state-scenario.md
```diff
@@ -1,36 +1,36 @@
 ---
 title: Troubleshoot Pods Remain in a Pending State Scenario
-description: This article helps you troubleshoot the pods remain in a Pending state scenario.
+description: This article helps you troubleshoot a scenario in which pods remain in a pending state.
 ms.date: 01/12/2026
 ms.author: jarrettr
 ms.editor: v-jsitser
 ms.reviewer: chiragpa, rorylen, v-ryanberg
 ms.service: azure-kubernetes-service
 keywords:
-#Customer intent: As an Azure Kubernetes user, I want to troubleshoot the pods remain in a Pending state scenario in Azure Kubernetes Service (AKS).
+#Customer intent: As an Azure Kubernetes user, I want to troubleshoot a scenario in which pods remain in the Pending state in Azure Kubernetes Service (AKS).
 ms.custom: sap:Create, Upgrade, Scale and Delete operations (cluster or nodepool)
 ---
 
-# Troubleshoot pods remain in a Pending state scenario
+# Troubleshoot pods that remain in the Pending state
 
-This article helps you troubleshoot the pods remain in a **Pending** state scenario.
+This article helps you troubleshoot a scenario in which pods remain in the **Pending** state.
 
 ## Symptoms
 
-You run `kubectl describe pod` for a pod and the pod remains in a **Pending** state. The **Event** section displays **pod didn't trigger scale-up (it wouldn't fit if a new node is added)** and the cluster-autoscaler doesn’t scale up the node count.
+You run `kubectl describe pod` for a pod, and the pod remains in the **Pending** state. When this issue occurs, the **Event** section displays **pod didn't trigger scale-up (it wouldn't fit if a new node is added)**. Additionally, the cluster-autoscaler doesn’t scale up the node count.
 
 ## Cause
 
-This indicates one or more of the following:
+These symptoms indicate one or more of the following situations:
 
-- Even if a new node was added by the cluster-autoscaler, the pod can’t be placed on the new node due to the pod's resource requests exceeding the maximum resources available on the node.
+- Even if a new node is added by the cluster-autoscaler, the pod can’t be put onto the new node. This condition occurs because the pod's resource requests exceed the maximum resources that are available on the node.
 
-- The node might be missing a resource which the pod requires (like a Graphics Processing Unit (GPU)).
+- The node is missing a resource that the pod requires (such as a Graphics Processing Unit (GPU)).
 
-- The pod has affinity or topology constraints and a new node doesn’t meet these requirements.
+- The pod has affinity or topology constraints, and the new nodes don’t meet these requirements.
 
 ## Resolution
 
-Review the pod resource request configuration (for example CPU, memory, or GPU) and compare it with the node size. You might need to adjust the node size or type or adjust the resource request configuration for the pod to ensure that pod placement can occur.
+Review the pod resource request configuration (for example CPU, memory, or GPU), and compare it with the node size. To make sure that pod placement can occur, you might have to adjust the node size or type, or adjust the resource request configuration for the pod.
 
-If you rule out a resource constraint, ensure node affinity or taints aren’t preventing scheduling.
+If you rule out a resource constraint, make sure that affinity or taints are not preventing scheduling.
```
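To see the first cause concretely (requests that exceed node capacity), consider a minimal sketch of a pod spec. All names and values here are hypothetical; the point is that a request larger than any node's allocatable capacity can never be satisfied, so the autoscaler correctly refuses to scale up.

```yaml
# Hypothetical pod spec: the requests exceed what a small node can allocate,
# so adding more nodes of the same size can't help, and the event
# "pod didn't trigger scale-up (it wouldn't fit if a new node is added)" appears.
apiVersion: v1
kind: Pod
metadata:
  name: oversized-pod        # hypothetical name
spec:
  containers:
  - name: app
    image: nginx             # placeholder image
    resources:
      requests:
        cpu: "16"            # more than the allocatable CPU of, say, a 4-vCPU node
        memory: "64Gi"       # more than the allocatable memory of small VM sizes
```

Running `kubectl describe node <node-name>` shows the node's **Allocatable** values, which you can compare directly against the pod's requests.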

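For the missing-resource cause, a pod that asks for an extended resource such as a GPU stays **Pending** unless some node actually advertises that resource. A sketch, assuming the common `nvidia.com/gpu` resource name that the NVIDIA device plugin registers:

```yaml
# Hypothetical GPU workload: unless a node pool exposes the nvidia.com/gpu
# resource (for example, a GPU-enabled AKS node pool), this pod never schedules.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod              # hypothetical name
spec:
  containers:
  - name: trainer
    image: nginx             # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1    # extended resources such as GPUs are set under limits
```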
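Finally, for the affinity and taint cause, the scheduler places a pod only if the pod's required node affinity matches labels on a node and the pod tolerates any taints on that node. A minimal sketch with hypothetical label and taint values:

```yaml
# Hypothetical constraints: if no node carries the disktype=ssd label, or if the
# node pool's taint isn't tolerated, the scheduler leaves this pod Pending.
apiVersion: v1
kind: Pod
metadata:
  name: constrained-pod      # hypothetical name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype    # hypothetical node label
            operator: In
            values: ["ssd"]
  tolerations:
  - key: "workload"          # hypothetical taint key
    operator: "Equal"
    value: "batch"
    effect: "NoSchedule"
  containers:
  - name: app
    image: nginx             # placeholder image
```

`kubectl describe node <node-name>` lists each node's labels and taints, which you can compare against these constraints.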