ms.reviewer: claudiogodoy
ms.service: azure-kubernetes-service
ms.date: 05/24/2025
---

# Managed NGINX ingress controller guidance

The [Managed NGINX ingress controller](/azure/aks/app-routing) is a routing add-on that enables routing HTTP and HTTPS traffic to applications running on an [Azure Kubernetes Service (AKS)](/azure/aks/) cluster.

When performance problems occur, the routing system might be the root cause. This article provides step-by-step guidance to troubleshoot performance issues in the NGINX ingress controller.
## Prerequisites

Before you start, ensure you have the following tool installed:

- **Kubernetes CLI (`kubectl`)**: Use Azure CLI and run the command `az aks install-cli`.

## Symptoms
| Symptom | Description |
| --- | --- |
| **HTTP gateway errors** | Error codes like `502` and `504` can indicate an NGINX exhaustion problem. |
| **High response time difference** | A significant gap between your service response time and the end-to-end response time. NGINX always adds some latency, but when the gap is too large, you might have an NGINX exhaustion problem. |
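One quick way to confirm the gateway-error symptom is to count `502`/`504` responses in the controller's access logs. The following sketch runs the filter against sample lines in the default NGINX access-log format; in practice you would pipe `kubectl logs` from the `app-routing-system` namespace into the same `awk` filter. The deployment name `nginx` in the comment is an assumption; verify it with `kubectl get deploy -n app-routing-system`.

```shell
# Sketch: count HTTP 502/504 responses in NGINX access-log lines.
# Live version (deployment name may differ in your cluster):
#   kubectl logs deploy/nginx -n app-routing-system --since=10m |
#     awk '$9 == 502 || $9 == 504 { n++ } END { print n+0 }'
printf '%s\n' \
  '10.0.0.1 - - [24/May/2025:10:00:00 +0000] "GET / HTTP/1.1" 200 612' \
  '10.0.0.2 - - [24/May/2025:10:00:01 +0000] "GET /api HTTP/1.1" 502 157' \
  '10.0.0.3 - - [24/May/2025:10:00:02 +0000] "GET /api HTTP/1.1" 504 160' |
awk '$9 == 502 || $9 == 504 { n++ } END { print n+0 }'   # prints 2
```

In the default combined log format, the HTTP status code is the ninth whitespace-separated field, which is why the filter tests `$9`.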

## Step 1: Verify horizontal pod autoscaler (HPA) behavior

The most common cause of NGINX performance issues is CPU exhaustion. During a load spike, a good approach is to monitor the [HPA](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) behavior.
By default, the routing add-on creates a [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) named `app-routing-system`.
1. **Get the HPA name**:

   ```console
   kubectl get hpa -n app-routing-system
```

2. **Monitor the HPA behavior**:
```console
kubectl get hpa <HPA_NAME> -n app-routing-system -w
```

3. **Evaluate the results**:
```console
$ kubectl get hpa <HPA_NAME> -n app-routing-system -w
   NAME    REFERENCE          TARGETS        MINPODS   MAXPODS   REPLICAS   AGE
nginx Deployment/nginx cpu: 133%/70% 1 2 2 80m
```

The **TARGETS** column shows the current CPU usage against the threshold at which the `HPA` scales up the pods. If the usage stays above the threshold without new pods appearing, there are a few possibilities:

- The `HPA` has reached the maximum number of pods.
- There are no available nodes to schedule the pods.
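To interpret a reading like `133%/70%`, recall the standard Kubernetes HPA scaling rule, `desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric)`. A minimal sketch:

```python
import math

def hpa_desired_replicas(current_replicas: int, current_utilization: float,
                         target_utilization: float) -> int:
    """Standard Kubernetes HPA rule:
    desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * current_utilization / target_utilization)

# Sample output above: 2 replicas at 133% CPU against a 70% target.
print(hpa_desired_replicas(2, 133, 70))  # 4
```

Because the desired count (4) exceeds **MAXPODS** (2) in the sample output, the HPA is capped and can't scale further, which matches the first possibility above.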

## Step 2: Look for pods in the pending state

If your evaluation showed that the NGINX `HPA` hasn't reached the maximum number of pods, the [kube-scheduler](https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler) might not be able to find available nodes to schedule the NGINX pods.

1. **Get pending pods**:
```console
kubectl get pod --field-selector=status.phase=Pending -n app-routing-system
```
> [!NOTE]
> If there are pending pods, the cluster is probably facing a resource exhaustion problem. For more information, see [Troubleshoot pod scheduler errors in Azure Kubernetes Service](/azure/azure-kubernetes/availability-performance/troubleshoot-pod-scheduler-errors).

## Step 3: Verify whether limits are applied to the NGINX deployment

Any misconfiguration of the NGINX [resource limits or requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) can lead to the `HPA` scaling up more pods than necessary.
## Solution

By default, the current version of the NGINX ingress controller doesn't set limits on the NGINX pods and requests `500m` of CPU, which is the value the `HPA` uses. We don't recommend changing these values directly in the deployment definition.

If your `HPA` is reaching the maximum number of pods and the deployment's requests and limits remain unchanged, configure the [custom resource definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) named [NginxIngressController](https://github.com/Azure/aks-app-routing-operator/blob/main/config/crd/bases/approuting.kubernetes.azure.com_nginxingresscontrollers.yaml).

### Configuration options
The following configuration options directly impact the `HPA` behavior:

| Field | Type | Description | Required | Default |
| --- | --- | --- | --- | --- |
| `scaling` | object | Configuration for scaling the controller. Contains nested properties. | No | - |
| `maxReplicas` | integer | Upper limit for replicas. | No | 100 |
| `minReplicas` | integer | Lower limit for replicas. | No | 2 |
| `threshold` | string | Scaling threshold that defines how aggressively to scale. Options: `rapid`, `steady`, `balanced`. | No | `balanced` |
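The scaling fields above map onto the CRD like the following sketch. The resource name and the `ingressClassName` value shown here are assumptions for illustration; use the names from your cluster (`kubectl get nginxingresscontroller`).

```yaml
apiVersion: approuting.kubernetes.azure.com/v1alpha1
kind: NginxIngressController
metadata:
  name: nginx                 # assumed name; list yours with: kubectl get nginxingresscontroller
spec:
  ingressClassName: webapprouting.kubernetes.azure.com   # assumed; check your ingress class
  controllerNamePrefix: nginx
  scaling:
    minReplicas: 4            # raise the floor if baseline load is high
    maxReplicas: 60           # raise the ceiling if the HPA keeps maxing out
    threshold: rapid          # scale up more aggressively during load spikes
```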

### Apply the configuration
1. **Edit the NginxIngressController CRD**:

2. **Verify the HPA configuration**:

   ```console
kubectl get hpa -n app-routing-system
```

The HPA automatically updates based on your new configuration, and the NGINX ingress controller scales according to the specified parameters.