
Commit 4ae6508

Merge pull request #8396 from AED1523/backoff
AB#4446: Update pod-stuck-crashloopbackoff-mode.md
2 parents: 5e7b14b + f848fb3

1 file changed: 18 additions & 4 deletions

support/azure/azure-kubernetes/create-upgrade-delete/pod-stuck-crashloopbackoff-mode.md
@@ -1,17 +1,31 @@
 ---
 title: Pod is stuck in CrashLoopBackOff mode
 description: Troubleshoot a scenario in which a pod is stuck in CrashLoopBackOff mode on an Azure Kubernetes Service (AKS) cluster.
-ms.date: 09/07/2023
+ms.date: 04/07/2025
 author: VikasPullagura-MSFT
 ms.author: vipullag
-editor: v-jsitser
-ms.reviewer: chiragpa, nickoman, cssakscic, v-leedennis
+editor: v-jsitser, addobres
+ms.reviewer: chiragpa, nickoman, cssakscic, v-leedennis, addobres
 ms.service: azure-kubernetes-service
 ms.custom: sap:Create, Upgrade, Scale and Delete operations (cluster or nodepool)
 ---
 
 # Pod is stuck in CrashLoopBackOff mode
 
-If a pod has a `CrashLoopBackOff` status, then the pod probably failed or exited unexpectedly, and the log contains an exit code that isn't zero. There are several possible reasons why your pod is stuck in `CrashLoopBackOff` mode. Consider the following options and their associated [kubectl](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands) commands.
+If a pod has a `CrashLoopBackOff` status, then the pod probably failed or exited unexpectedly, and the log contains an exit code that isn't zero. Here are several possible reasons why your pod is stuck in `CrashLoopBackOff` mode:
+
+1. **Application failure**: The application inside the container crashes shortly after starting, often due to misconfigurations, missing dependencies, or incorrect environment variables.
+2. **Incorrect resource limits**: If the pod exceeds its CPU or memory resource limits, Kubernetes might kill the container. This issue can happen if resource requests or limits are set too low.
+3. **Missing or misconfigured ConfigMaps/Secrets**: If the application relies on configuration files or environment variables stored in ConfigMaps or Secrets but they're missing or misconfigured, the application might crash.
+4. **Image pull issues**: If there's an issue with the image (for example, it's corrupted or has an incorrect tag), the container might not start properly and fail repeatedly.
+5. **Init containers failing**: If the pod has init containers and one or more fail to run properly, the pod will restart.
+6. **Liveness/Readiness probe failures**: If liveness or readiness probes are misconfigured, Kubernetes might detect the container as unhealthy and restart it.
+7. **Application dependencies not ready**: The application might depend on services that aren't yet ready, such as databases, message queues, or other APIs.
+8. **Networking issues**: Network misconfigurations can prevent the application from communicating with necessary services, causing it to fail.
+9. **Invalid commands or arguments**: The container might be started with an invalid `ENTRYPOINT`, command, or argument, leading to a crash.
+
+For more information about the container status, see [Pod Lifecycle - Container states](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-states).
+
+Consider the following options and their associated [kubectl](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands) commands.
 
 | Option | kubectl command |
 |--|--|
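
Most of the causes listed in the updated article leave traces in the pod's status and events. A minimal diagnostic sketch using standard kubectl commands (the pod name `mypod` and namespace `default` are placeholders, not part of this commit):

```shell
# Confirm the CrashLoopBackOff status and the restart count.
kubectl get pod mypod --namespace default

# Inspect events and the last terminated state, including the nonzero
# exit code; failed liveness/readiness probes and image pull errors
# show up in the Events section.
kubectl describe pod mypod --namespace default

# Read the logs of the previous (crashed) container instance to spot
# application failures, missing configuration, or a bad ENTRYPOINT.
kubectl logs mypod --namespace default --previous
```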
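For the resource-limit and probe causes specifically (items 2 and 6 in the list above), the configured values can be read from the live pod. Another small sketch under the same placeholder names (assumed, not from the commit):

```shell
# Dump the pod spec to review resources.requests/limits and the
# livenessProbe/readinessProbe settings.
kubectl get pod mypod --namespace default --output yaml

# Print only the last termination reason; "OOMKilled" means the
# container exceeded its memory limit.
kubectl get pod mypod --namespace default \
  --output jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'

# List warning events for the pod (failed probes, image pull errors).
kubectl get events --namespace default \
  --field-selector involvedObject.name=mypod,type=Warning
```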
