Commit 6c0dce6

Update pod-stuck-crashloopbackoff-mode.md
Added some common causes for the CrashLoopBackOff error and reviewed the full article.
1 parent a47e534 commit 6c0dce6

1 file changed

Lines changed: 19 additions & 4 deletions

File tree

support/azure/azure-kubernetes/create-upgrade-delete/pod-stuck-crashloopbackoff-mode.md

@@ -1,17 +1,32 @@
 ---
 title: Pod is stuck in CrashLoopBackOff mode
 description: Troubleshoot a scenario in which a pod is stuck in CrashLoopBackOff mode on an Azure Kubernetes Service (AKS) cluster.
-ms.date: 09/07/2023
+ms.date: 03/07/2025
 author: VikasPullagura-MSFT
 ms.author: vipullag
-editor: v-jsitser
-ms.reviewer: chiragpa, nickoman, cssakscic, v-leedennis
+editor: v-jsitser, addobres
+ms.reviewer: chiragpa, nickoman, cssakscic, v-leedennis, addobres
 ms.service: azure-kubernetes-service
 ms.custom: sap:Create, Upgrade, Scale and Delete operations (cluster or nodepool)
 ---
 # Pod is stuck in CrashLoopBackOff mode
 
-If a pod has a `CrashLoopBackOff` status, then the pod probably failed or exited unexpectedly, and the log contains an exit code that isn't zero. There are several possible reasons why your pod is stuck in `CrashLoopBackOff` mode. Consider the following options and their associated [kubectl](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands) commands.
+If a pod has a `CrashLoopBackOff` status, then the pod probably failed or exited unexpectedly, and the log contains an exit code that isn't zero. There are several possible reasons why your pod is stuck in `CrashLoopBackOff` mode.
+
+Common causes of the `CrashLoopBackOff` error:
+
+1. **Application failure**: The application inside the container crashes shortly after it starts, often because of misconfigurations, missing dependencies, or incorrect environment variables.
+2. **Incorrect resource limits**: If the pod exceeds its CPU or memory resource limits, Kubernetes might kill the container. This can happen if resource requests or limits are set too low.
+3. **Missing or misconfigured ConfigMaps or Secrets**: If the application relies on configuration files or environment variables that are stored in ConfigMaps or Secrets, and those objects are missing or misconfigured, the application might crash.
+4. **Image pull issues**: If there's a problem with the container image (for example, it's corrupted or has an incorrect tag), the container might not start properly and fails repeatedly.
+5. **Failing init containers**: If the pod has init containers and one or more of them fail to run properly, the pod restarts.
+6. **Liveness or readiness probe failures**: If liveness or readiness probes are misconfigured, Kubernetes might detect the container as unhealthy and restart it.
+7. **Application dependencies not ready**: The application might depend on services that aren't ready yet, such as databases, message queues, or other APIs.
+8. **Networking issues**: Network misconfigurations can prevent the application from communicating with the services that it needs, causing it to fail.
+9. **Invalid command or arguments**: The container might be started by using an invalid entry point, command, or arguments, which leads to a crash.
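Several of these causes leave a telltale exit code in the `Last State: Terminated` section of the `kubectl describe pod` output. As a minimal shell sketch (the value `137` is only an illustrative example, not taken from the article):

```shell
# Interpret a container exit code, as shown by "kubectl describe pod"
# under "Last State: Terminated". Exit codes above 128 mean that the
# process was killed by a signal (signal number = exit code - 128):
#   137 = 128 + 9  (SIGKILL; often the OOM killer after a memory limit is exceeded)
#   143 = 128 + 15 (SIGTERM; a termination request)
exit_code=137   # illustrative value; read the real one from the pod's status

if [ "$exit_code" -gt 128 ]; then
  echo "Container was killed by signal $((exit_code - 128))"
else
  echo "Application itself exited with code $exit_code"
fi
```

A small exit code such as `1` usually points at an application failure (cause 1), while `137` commonly points at resource limits (cause 2).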
+
+
+Consider the following options and their associated [kubectl](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands) commands.
 
 | Option | kubectl command |
 |--|--|
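Causes such as incorrect resource limits and misconfigured probes (items 2 and 6 in the list of common causes) typically trace back to the pod spec. A hypothetical fragment, with every name and value illustrative rather than taken from the article, showing where those settings live:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app                 # illustrative name
spec:
  containers:
  - name: my-app
    image: myregistry.azurecr.io/my-app:v1   # a wrong tag here causes image pull failures (cause 4)
    resources:
      requests:
        memory: "128Mi"
        cpu: "250m"
      limits:
        memory: "256Mi"        # a limit that's too low can trigger OOM kills (exit code 137)
        cpu: "500m"
    livenessProbe:             # a misconfigured probe can cause a restart loop (cause 6)
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10  # too short a delay can kill slow-starting applications
      periodSeconds: 10
```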
