---
title: Pod is stuck in CrashLoopBackOff mode
description: Troubleshoot a scenario in which a pod is stuck in CrashLoopBackOff mode on an Azure Kubernetes Service (AKS) cluster.
ms.date: 04/03/2025
author: VikasPullagura-MSFT
ms.author: vipullag
editor: v-jsitser, addobres
---
# Pod is stuck in CrashLoopBackOff mode
If a pod has a `CrashLoopBackOff` status, then the pod probably failed or exited unexpectedly, and the log contains an exit code that isn't zero. Here are several possible reasons why your pod is stuck in `CrashLoopBackOff` mode:
1. **Application failure**: The application inside the container crashes shortly after starting, often because of misconfigurations, missing dependencies, or incorrect environment variables.

2. **Incorrect resource limits**: If the pod exceeds its CPU or memory resource limits, Kubernetes might kill the container. This can happen if resource requests or limits are set too low.

3. **Missing or misconfigured ConfigMaps/Secrets**: If the application relies on configuration files or environment variables stored in ConfigMaps or Secrets but they're missing or misconfigured, the application might crash.

4. **Image pull issues**: If there's an issue with the image (for example, it's corrupted or has an incorrect tag), the container might not start properly and fail repeatedly.

5. **Init containers failing**: If the pod has init containers and one or more of them fail to run properly, the pod restarts.

6. **Liveness/Readiness probe failures**: If liveness or readiness probes are misconfigured, Kubernetes might detect the container as unhealthy and restart it.

7. **Application dependencies not ready**: The application might depend on services that aren't yet ready, such as databases, message queues, or other APIs.

8. **Networking issues**: Network misconfigurations can prevent the application from communicating with necessary services, causing it to fail.

9. **Invalid commands or arguments**: The container might be started with an invalid `ENTRYPOINT`, command, or arguments, leading to a crash.
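
Many of these causes map to specific fields in the pod specification, so reviewing the manifest is often the fastest way to rule them out. The following manifest is only an illustrative sketch of where each class of problem typically surfaces; all names, images, ports, and values shown here are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp                             # hypothetical pod name
spec:
  initContainers:                         # cause 5: a failing init container restarts the pod
  - name: wait-for-db                     # also illustrates cause 7: waiting for a dependency
    image: busybox:1.36
    command: ["sh", "-c", "until nc -z db 5432; do sleep 2; done"]
  containers:
  - name: myapp
    image: contoso.azurecr.io/myapp:1.0   # cause 4: a wrong or corrupted tag fails the pull
    command: ["/app/server"]              # cause 9: must be a valid entrypoint inside the image
    envFrom:
    - configMapRef:
        name: myapp-config                # cause 3: must exist in the pod's namespace
    resources:
      requests:
        cpu: 250m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi                     # cause 2: a limit set too low leads to OOMKilled
    livenessProbe:                        # cause 6: a wrong path or port causes restart loops
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15
```

If the container was killed for exceeding its memory limit, the output of `kubectl describe pod` reports `OOMKilled` as the reason in the container's last state.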
For more information about the container status, see [Pod Lifecycle - Container states](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-states).
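
The container-state details described on that page can also be read directly with `kubectl`. As a sketch (the pod name `myapp` and the single-container index `[0]` are assumptions for illustration), JSONPath queries return the last terminated state, including the nonzero exit code:

```bash
# Full last terminated state of the first container in the pod
# (reason, exit code, and start/finish timestamps); assumes a pod
# named "myapp" in the current namespace.
kubectl get pod myapp -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'

# Just the exit code that triggered the CrashLoopBackOff restarts.
kubectl get pod myapp -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'
```

An exit code of 137 (128 + signal 9) typically means that the container was killed, for example, because it was OOMKilled; other nonzero codes are application-specific and point back to the application logs.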
Consider the following options and their associated [kubectl](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands) commands.