---
title: Troubleshoot Performance Issues With the Managed NGINX Ingress Controller in AKS
description: Step-by-step guide to identify and resolve performance issues in the Managed NGINX Ingress Controller in AKS.
ms.reviewer: claudiogodoy
ms.service: azure-kubernetes-service
ms.date: 05/24/2025
---

# Managed NGINX ingress controller guidance

The [Managed NGINX ingress controller](/azure/aks/app-routing) is a routing add-on that enables routing HTTP and HTTPS traffic to applications that run on an [Azure Kubernetes Service (AKS)](/azure/aks/) cluster.

The routing system might be the root cause of performance-related problems. This article provides step-by-step guidance to troubleshoot NGINX ingress controller performance issues. It also discusses common symptoms, root cause analysis, and configuration adjustments.

## Prerequisites

Before you start, make sure that you have the following tool installed:

- **Kubernetes CLI (`kubectl`)**: Use Azure CLI, and run the `az aks install-cli` command.

## Symptoms

| Symptom | Description |
|---------|-------------|
| **HTTP Gateway Errors** | Error codes like `502` and `504` can indicate an NGINX exhaustion problem. |
| **High Response Time Difference** | There's a significant difference between your service response time and the end-to-end response time. NGINX always adds some latency; if the difference becomes too large, you might have an NGINX exhaustion problem. |
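
To quantify the response-time difference symptom, compare the end-to-end time measured at the client against the time your service itself reports. A minimal sketch in shell; the `curl` command is shown for reference only, and the numbers below are illustrative, not from a real cluster:

```shell
# Measure end-to-end time through the ingress (replace <INGRESS_IP> with your
# ingress controller's public IP or hostname before running):
#   curl -s -o /dev/null -w 'time_total=%{time_total}s\n' http://<INGRESS_IP>/
#
# Then subtract the service-side response time. Illustrative numbers:
end_to_end_ms=850   # measured at the client, through NGINX
service_ms=120      # measured inside the application
echo "latency added in front of the service: $((end_to_end_ms - service_ms)) ms"
# prints latency added in front of the service: 730 ms
```

A consistently large difference under load, together with `502`/`504` errors, points at the ingress layer rather than the application.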

## Cause

The most common cause of performance issues in the NGINX ingress controller is CPU exhaustion. During a load spike in the system, a good troubleshooting method is to monitor [HPA](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) behavior. By default, the routing add-on creates a [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) that's named `app-routing-system`.

## Resolution

To troubleshoot the issue, follow these steps.

### Step 1: Verify horizontal pod autoscaler (HPA) behavior

1. Get the HPA name:

    ```console
    kubectl get hpa -n app-routing-system
    ```

2. Monitor the HPA behavior:

    ```console
    kubectl get hpa <HPA_NAME> -n app-routing-system -w
    ```

3. Evaluate the results:

    ```console
    $ kubectl get hpa <HPA_NAME> -n app-routing-system -w
    NAME    REFERENCE          TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
    nginx   Deployment/nginx   cpu: 133%/70%   1         2         2          80m
    ```

    The **TARGETS** column shows the CPU threshold at which the `HPA` is triggered to scale up the pods. There are a few possibilities for this behavior:

    - `HPA` has reached the maximum number of pods.
    - No nodes are available to schedule the pods.
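
To interpret output like `cpu: 133%/70%`, it helps to know how the HPA derives its desired replica count. The following sketch evaluates the scaling rule from the Kubernetes HPA documentation with the sample numbers above; it's illustrative arithmetic, not AKS-specific code:

```shell
# desiredReplicas = ceil(currentReplicas * currentMetricValue / targetMetricValue)
# With 2 replicas at cpu 133% observed against a 70% target:
awk 'BEGIN {
  current_replicas = 2
  observed = 133        # current CPU utilization (%)
  target = 70           # HPA target utilization (%)
  desired = current_replicas * observed / target
  if (desired > int(desired)) desired = int(desired) + 1   # ceil
  print "desired replicas:", desired
}'
# prints desired replicas: 4
```

Because `MAXPODS` is 2 in the sample, the HPA stays pinned at 2 replicas even though the load calls for 4, which is exactly the "reached the maximum number of pods" case above.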

### Step 2: Look for pods in a pending state

If your evaluation reveals that the `NGINX HPA` hasn't reached the maximum number of pods, the [kube-scheduler](https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler) might not be able to find available nodes to schedule the `NGINX` pods. To find pending pods, run the following command:

```console
kubectl get pod --field-selector=status.phase=Pending -n app-routing-system
```

> [!NOTE]
> If there are pending pods, the cluster might be experiencing a resource exhaustion problem. For more information, see [Troubleshoot pod scheduler errors in Azure Kubernetes Service](/azure/azure-kubernetes/availability-performance/troubleshoot-pod-scheduler-errors).
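
The `--field-selector` in the command above filters on the server side. To make its effect concrete, here's a client-side equivalent run over hypothetical `kubectl get pod` output (the pod names are invented for illustration):

```shell
# Hypothetical `kubectl get pod -n app-routing-system` output:
cat <<'EOF' > /tmp/pods.txt
NAME          READY   STATUS    RESTARTS   AGE
nginx-abc12   1/1     Running   0          80m
nginx-def34   0/1     Pending   0          5m
EOF

# Client-side equivalent of --field-selector=status.phase=Pending:
awk '$3 == "Pending" {print $1}' /tmp/pods.txt
# prints nginx-def34
```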

### Step 3: Check whether limits are applied to the NGINX deployment

Any misconfiguration of the `NGINX` [resource limits or requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) can cause the `HPA` to scale up more pods than necessary. To check the limits, follow these steps:

1. Describe the NGINX deployment:

    ```console
    kubectl describe deploy nginx -n app-routing-system
    ```

2. Verify the requests and limits:

    ```console
    $ kubectl describe deploy nginx -n app-routing-system
    ...
    ```
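
The utilization percentages that the `HPA` reports are relative to the container's CPU *request*, not to the node's capacity. With the default `500m` request and the `70%` target shown in the Step 1 sample, scale-up begins once average usage per pod crosses `350m`; a quick check of that arithmetic:

```shell
# 70% of the default 500m CPU request:
awk 'BEGIN { printf "scale-up begins above %dm CPU per pod\n", 500 * 0.70 }'
# prints scale-up begins above 350m CPU per pod
```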

## More information

By default, the current version of the NGINX ingress controller doesn't set limits for NGINX pods. The controller requests `500m` CPU, and the `HPA` calculates utilization relative to that request. We recommend that you don't change these settings directly in the deployment definition.

If the `HPA` reaches the maximum number of pods, and the deployment requests and limits remain unchanged, configure the [custom resource definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) that's named [NginxIngressController](https://github.com/Azure/aks-app-routing-operator/blob/main/config/crd/bases/approuting.kubernetes.azure.com_nginxingresscontrollers.yaml).

### Configuration options

The following configuration options directly affect the `HPA` behavior.

| Property | Type | Description | Required | Default |
|---------------|---------|---------------------------------------------------------------------------|----------|----------------|
| `scaling` | object | Configuration for scaling the controller. Contains nested properties. | No | Not applicable |
| `maxReplicas` | integer | Upper limit for replicas. | No | 100 |
| `minReplicas` | integer | Lower limit for replicas. | No | 2 |
| `threshold` | string | Scaling threshold that defines how aggressively to scale. Options are `rapid`, `steady`, and `balanced`. | No | `balanced` |

### Apply the configuration

1. Edit the NginxIngressController CRD:

    ```console
    kubectl edit nginxingresscontroller -n app-routing-system
    ```

2. Add or modify the scaling configuration:

    ```yaml
    spec:
      scaling:
        # ...other scaling properties...
        threshold: "balanced"
    ```

3. To apply the changes, save them, and then exit the editor.

4. Verify the changes:

    ```console
    kubectl get hpa -n app-routing-system
    ```

    The HPA automatically updates based on your new configuration. The NGINX ingress controller scales according to the specified parameters.
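
As an alternative to the interactive `kubectl edit` flow above, you can keep the scaling settings in a manifest under source control and apply them. The following is a sketch: the `apiVersion` and the resource name `nginx` are assumptions, not confirmed by this article; check both with `kubectl get nginxingresscontroller -n app-routing-system` before applying.

```shell
# Write the scaling settings to a manifest file. The apiVersion and the
# controller name "nginx" are assumptions -- verify them in your cluster.
cat <<'EOF' > nginx-ingress-scaling.yaml
apiVersion: approuting.kubernetes.azure.com/v1alpha1
kind: NginxIngressController
metadata:
  name: nginx
spec:
  scaling:
    minReplicas: 2
    maxReplicas: 100
    threshold: "balanced"
EOF

# Apply it (needs cluster access, so it's left commented out here):
# kubectl apply -f nginx-ingress-scaling.yaml

# Local sanity check that all three scaling fields are present:
grep -c -E 'minReplicas|maxReplicas|threshold' nginx-ingress-scaling.yaml
# prints 3
```

Keeping the manifest in version control makes scaling changes reviewable and repeatable across clusters.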

## References

- [Learn more about Azure Kubernetes Service (AKS) best practices](/azure/aks/best-practices)
- [Monitor your Kubernetes cluster performance with Container insights](/azure/azure-monitor/containers/container-insights-analyze)

[!INCLUDE [Third-party contact information disclaimer](../../../includes/third-party-contact-disclaimer.md)]

[!INCLUDE [Azure Help Support](../../../includes/azure-help-support.md)]
