---
title: Pod is stuck in CrashLoopBackOff mode
description: Troubleshoot a scenario in which a pod is stuck in CrashLoopBackOff mode on an Azure Kubernetes Service (AKS) cluster.
ms.date: 04/07/2025
author: VikasPullagura-MSFT
ms.author: vipullag
editor: v-jsitser, addobres
ms.reviewer: chiragpa, nickoman, cssakscic, v-leedennis, addobres
ms.service: azure-kubernetes-service
ms.custom: sap:Create, Upgrade, Scale and Delete operations (cluster or nodepool)
---
# Pod is stuck in CrashLoopBackOff mode

If a pod has a `CrashLoopBackOff` status, then the pod probably failed or exited unexpectedly, and the log contains an exit code that isn't zero. Here are several possible reasons why your pod is stuck in `CrashLoopBackOff` mode:

1. **Application failure**: The application inside the container crashes shortly after starting, often because of misconfigurations, missing dependencies, or incorrect environment variables.
2. **Incorrect resource limits**: If the pod exceeds its CPU or memory resource limits, Kubernetes might kill the container. This issue can happen if resource requests or limits are set too low.
3. **Missing or misconfigured ConfigMaps/Secrets**: If the application relies on configuration files or environment variables stored in ConfigMaps or Secrets but they're missing or misconfigured, the application might crash.
4. **Image pull issues**: If there's an issue with the image (for example, it's corrupted or has an incorrect tag), the container might not start properly and fail repeatedly.
5. **Init containers failing**: If the pod has init containers and one or more fail to run properly, the pod will restart.
6. **Liveness/Readiness probe failures**: If liveness or readiness probes are misconfigured, Kubernetes might detect the container as unhealthy and restart it.
7. **Application dependencies not ready**: The application might depend on services that aren't yet ready, such as databases, message queues, or other APIs.
8. **Networking issues**: Network misconfigurations can prevent the application from communicating with necessary services, causing it to fail.
9. **Invalid commands or arguments**: The container might be started with an invalid `ENTRYPOINT`, command, or argument, leading to a crash.

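To narrow down which of these causes applies, a good first step is to inspect the pod's container states, the logs from the crashed container instance, and recent cluster events. The following kubectl commands are a starting point; `<pod-name>` and `<namespace>` are placeholders for your own pod and namespace:

```shell
# Show container states, the last exit code, and the restart count
kubectl describe pod <pod-name> --namespace <namespace>

# View logs from the previous (crashed) container instance
kubectl logs <pod-name> --namespace <namespace> --previous

# List recent events (for example, OOMKilled containers or failed probes),
# sorted by creation time
kubectl get events --namespace <namespace> --sort-by=.metadata.creationTimestamp
```

The exit code reported by `kubectl describe pod` often points directly at the cause: for example, `137` usually indicates an out-of-memory kill, which matches the "Incorrect resource limits" scenario in the list above.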
For more information about the container status, see [Pod Lifecycle - Container states](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-states).

Consider the following options and their associated [kubectl](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands) commands.

| Option | kubectl command |
|--|--|