
Commit 5fc453c

Merge branch 'main' into live

2 parents 0f3d516 + 77fbbf4

1 file changed: support/azure/azure-kubernetes/pod-stuck-crashloopbackoff-mode.md (60 additions & 8 deletions)

The merged file content follows.
---
title: Pod is stuck in CrashLoopBackOff mode
description: Troubleshoot a scenario in which a pod is stuck in CrashLoopBackOff mode on an Azure Kubernetes Service (AKS) cluster.
ms.date: 09/07/2023
author: VikasPullagura-MSFT
ms.author: vipullag
editor: v-jsitser
ms.reviewer: chiragpa, nickoman, cssakscic, v-leedennis
ms.service: azure-kubernetes-service
ms.subservice: common-issues
---

# Pod is stuck in CrashLoopBackOff mode
If a pod has a `CrashLoopBackOff` status, the pod probably failed or exited unexpectedly, and its log contains a nonzero exit code. There are several possible reasons why your pod is stuck in `CrashLoopBackOff` mode. Consider the following options and their associated [kubectl](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands) commands.

| Option | kubectl command |
|--|--|
| [Debug the pod](https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/#debugging-pods) itself | `kubectl describe pod <pod-name>` |
| [Debug the replication controllers](https://kubernetes.io/docs/tasks/debug/debug-application/debug-pods/#debugging-replication-controllers) | `kubectl describe replicationcontroller <controller-name>` |
| [Read the termination message](https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/#writing-and-reading-a-termination-message) | `kubectl get pod <pod-name> --output=yaml` |
| [Examine the logs](https://kubernetes.io/docs/concepts/cluster-administration/logging/) | `kubectl logs <pod-name>` |
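
Taken together, the options in the table form a quick triage pass. The following transcript is a hypothetical sketch, not part of the original article; the pod name `myapp` is a placeholder, and the output is omitted:

```console
$ kubectl describe pod myapp            # events, restart count, and last state

$ kubectl get pod myapp --output=yaml   # full pod status, including any termination message

$ kubectl get pod myapp --output jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'
                                        # exit code of the last terminated container

$ kubectl logs myapp --previous         # logs from the previous (crashed) container instance
```

The `--previous` flag is useful here because the current container may have just restarted, while the logs of the crashed instance usually contain the actual failure.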
> [!NOTE]
> A pod can also have a `CrashLoopBackOff` status if it has finished deployment, but it's configured to keep restarting even if the exit code is zero. For example, if you deploy a busybox image without specifying any arguments, the image starts, runs, finishes, and then restarts in a loop:
>
> ```console
> $ kubectl run nginx --image nginx
> pod/nginx created
>
> $ kubectl run busybox --image busybox
> pod/busybox created
>
> $ kubectl get pods --watch
> NAME      READY   STATUS              RESTARTS   AGE
> busybox   0/1     ContainerCreating   0          3s
> nginx     1/1     Running             0          11s
> busybox   0/1     Completed           0          3s
> busybox   0/1     Completed           1          4s
> busybox   0/1     CrashLoopBackOff    1          5s
>
> $ kubectl describe pod busybox
> Name:             busybox
> Namespace:        default
> Priority:         0
> Node:             aks-nodepool<number>-<resource-group-hash-number>-vmss<number>/<ip-address-1>
> Start Time:       Wed, 16 Aug 2023 09:56:19 +0000
> Labels:           run=busybox
> Annotations:      <none>
> Status:           Running
> IP:               <ip-address-2>
> IPs:
>   IP:  <ip-address-2>
> Containers:
>   busybox:
>     Container ID:   containerd://<64-digit-hexadecimal-value-1>
>     Image:          busybox
>     Image ID:       docker.io/library/busybox@sha256:<64-digit-hexadecimal-value-2>
>     Port:           <none>
>     Host Port:      <none>
>     State:          Waiting
>       Reason:       CrashLoopBackOff
>     Last State:     Terminated
>       Reason:       Completed
>       Exit Code:    0
>       Started:      Wed, 16 Aug 2023 09:56:37 +0000
>       Finished:     Wed, 16 Aug 2023 09:56:37 +0000
>     Ready:          False
>     Restart Count:  2
> ```
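
If the loop is caused by a container that finishes successfully, as in the busybox example above, one fix is to give the container a long-running command so that it doesn't exit immediately. This is a sketch, not part of the original article; everything after `--` is passed to the container as its command:

```console
$ kubectl delete pod busybox
pod "busybox" deleted

$ kubectl run busybox --image busybox -- sleep 3600
pod/busybox created
```

With this change, the pod stays in the `Running` state until the `sleep` command finishes, instead of completing and restarting in a loop.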
If you still don't recognize the issue after you create more pods on the node, run a pod on a single node to determine how many resources the pod actually uses.
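
For example, assuming that the Metrics Server add-on is available in the cluster, you can compare the pod's live usage against the requests and limits declared in its spec. The pod name `myapp` is a placeholder:

```console
$ kubectl top pod myapp                 # current CPU and memory usage (requires Metrics Server)

$ kubectl get pod myapp --output jsonpath='{.spec.containers[0].resources}'
                                        # requests and limits declared in the pod spec
```

If actual usage approaches or exceeds a memory limit, the container may be OOM-killed and restarted, which also surfaces as `CrashLoopBackOff`.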

[!INCLUDE [Third-party disclaimer](../../includes/third-party-disclaimer.md)]