`support/azure/azure-kubernetes/create-upgrade-delete/troubleshoot-apiserver-etcd.md`
### Solution 4: Identify unoptimized clients and mitigate

#### Step 1: Identify unoptimized clients

- See [Cause 5](#cause-5-an-offending-client-makes-excessive-list-or-put-calls) to identify problematic clients and refine their LIST call patterns, especially those generating high-frequency or high-latency requests, as they are the primary contributors to API server degradation. Refer to [best practices](/azure-aks-docs-pr/articles/aks/best-practices-performance-scale-large.md#kubernetes-clients) for further guidance on client optimization. (An illustrative audit-log query follows below.)
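
As an illustration only (Cause 5 contains the article's own queries), the following Kusto sketch shows one way to surface high-frequency LIST callers. It assumes the cluster's `kube-audit` log category is sent to a Log Analytics workspace in Azure Diagnostics mode, so column names such as `log_s` follow the `AzureDiagnostics` schema:

```kusto
// Count completed LIST requests per user agent in 5-minute windows.
AzureDiagnostics
| where Category == "kube-audit"
| extend event = parse_json(log_s)
// Count each request once, at the ResponseComplete audit stage.
| where tostring(event.stage) == "ResponseComplete"
| where tostring(event.verb) == "list"
| summarize ListCalls = count()
    by UserAgent = tostring(event.userAgent), bin(TimeGenerated, 5m)
| order by ListCalls desc
```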

#### Step 2: Mitigation

> [!WARNING]
> Do not perform any mitigation steps until the client's call pattern is optimized, as this could lead to the API server becoming fully unresponsive.
Although it's helpful to know which clients generate the highest request volume, high request volume alone might not be a cause for concern. The response latency that clients experience is a better indicator of the actual load that each one generates on the API server.

#### Step 2: Identify and analyze latency for user agent

**Using Diagnose and Solve on Azure portal**
AKS now provides a built-in analyzer, the API Server Resource Intensive Listing Detector, to help you identify agents that make resource-intensive LIST calls. These calls are a leading cause of API server and etcd performance issues.
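
If you prefer to query the audit logs directly rather than use the detector, a minimal sketch (under the same `AzureDiagnostics` schema assumptions as above) approximates per-request latency as the difference between each audit event's `requestReceivedTimestamp` and `stageTimestamp` fields:

```kusto
// Approximate API server latency per user agent from kube-audit events.
AzureDiagnostics
| where Category == "kube-audit"
| extend event = parse_json(log_s)
| where tostring(event.stage) == "ResponseComplete"
| extend LatencyMs = datetime_diff('millisecond',
    todatetime(event.stageTimestamp), todatetime(event.requestReceivedTimestamp))
| summarize AvgLatencyMs = avg(LatencyMs), P99LatencyMs = percentile(LatencyMs, 99),
    Requests = count() by UserAgent = tostring(event.userAgent)
| order by P99LatencyMs desc
```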
> [!TIP]
> By analyzing this data, you can identify patterns and anomalies that indicate problems on your AKS cluster or applications. For example, you might notice that a particular user is experiencing high latency. This can point to the types of API calls that are placing excessive load on the API server or etcd.

#### Step 3: Identify unoptimized API calls for a given user agent

Run the following query to tabulate the 99th percentile (P99) latency of API calls across different resource types for a given client.
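
The query itself falls outside the lines shown in this diff. Purely as an illustration of the shape such a query might take (same schema assumptions as above, with a placeholder user agent filter):

```kusto
// P99 latency per resource type and verb for one client.
AzureDiagnostics
| where Category == "kube-audit"
| extend event = parse_json(log_s)
| where tostring(event.stage) == "ResponseComplete"
// Placeholder filter; substitute the user agent under investigation.
| where tostring(event.userAgent) startswith "kubectl"
| extend LatencyMs = datetime_diff('millisecond',
    todatetime(event.stageTimestamp), todatetime(event.requestReceivedTimestamp))
| summarize P99LatencyMs = percentile(LatencyMs, 99), Requests = count()
    by Resource = tostring(event.objectRef.resource), Verb = tostring(event.verb)
| order by P99LatencyMs desc
```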