Commit f6d1e02

markdown edits and sizing
1 parent: e3e71f6

1 file changed: 5 additions & 5 deletions

support/azure/azure-kubernetes/create-upgrade-delete/troubleshoot-apiserver-etcd.md
```diff
@@ -145,11 +145,11 @@ kubectl get events -n kube-system aks-managed-apiserver-throttling-enabled
 
 ### Solution 4: Identify unoptimized clients and mitigate
 
-##### Step 1: Identify unoptimized clients
+#### Step 1: Identify unoptimized clients
 
-- See [Cause 5](#cause-5-an-offending-client-makes-excessive-list-or-put-calls) to identify problematic clients and refine their LIST call patterns - especially those generating high-frequency or high-latency requests as they are the primary contributors to API server degradation. Refer to [best practices](/azure-aks-docs-pr/articles/aks/best-practices-performance-scale-large.md#kubernetes-clients) for further guidance.
+- See [Cause 5](#cause-5-an-offending-client-makes-excessive-list-or-put-calls) to identify problematic clients and refine their LIST call patterns - especially those generating high-frequency or high-latency requests as they are the primary contributors to API server degradation. Refer to [best practices](/azure-aks-docs-pr/articles/aks/best-practices-performance-scale-large.md#kubernetes-clients) for further guidance on client optimization.
 
-##### Step 2: Mitigation
+#### Step 2: Mitigation
 > [!WARNING]
 > Do not perform any mitigation steps until the client's call pattern is optimized, as this could lead to the API server becoming fully unresponsive.
 
@@ -206,7 +206,7 @@ AzureDiagnostics
 
 Although it's helpful to know which clients generate the highest request volume, high request volume alone might not be a cause for concern. The response latency that clients experience is a better indicator of the actual load that each one generates on the API server.
 
-##### Step 2 - Identify and analyse latency for user agent
+#### Step 2 - Identify and analyse latency for user agent
 **Using Diagnose and Solve on Azure portal**
 
 AKS now provides a built-in analyzer, the API Server Resource Intensive Listing Detector, to help you identify agents that make resource-intensive LIST calls. These calls are a leading cause of API server and etcd performance issues.
@@ -277,7 +277,7 @@ This query is a follow-up to the query in the ["Identify top user agents by the
 > [!TIP]
 > By analyzing this data, you can identify patterns and anomalies that can indicate problems on your AKS cluster or applications. For example, you might notice that a particular user is experiencing high latency. This scenario can indicate the type of API calls that are causing excessive load on the API server or etcd.
 
-##### Step 3: Identify Unoptimized API calls for a given user agent
+#### Step 3: Identify Unoptimized API calls for a given user agent
 
 Run the following query to tabulate the 99th percentile (P99) latency of API calls across different resource types for a given client.
 
```
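The "following query" referenced in the last context line is unchanged by this commit, so the diff doesn't show it. As a rough illustration only (not the article's actual query), a P99-latency tabulation over AKS kube-audit logs in AzureDiagnostics can look like the sketch below; the `my-client` user agent is a placeholder, and the parsed fields assume the common kube-audit event schema.

```kusto
// Hypothetical sketch: tabulate P99 latency of API calls per verb/URI
// for one user agent, from AKS kube-audit logs in AzureDiagnostics.
// "my-client" and the parsed field names are illustrative assumptions.
AzureDiagnostics
| where Category == "kube-audit"
| extend event = parse_json(log_s)
| where tostring(event.userAgent) startswith "my-client"
| extend latencyMs = datetime_diff("millisecond",
    todatetime(event.stageTimestamp),
    todatetime(event.requestReceivedTimestamp))
| summarize p99LatencyMs = percentile(latencyMs, 99), requestCount = count()
    by verb = tostring(event.verb), requestUri = tostring(event.requestURI)
| order by p99LatencyMs desc
```

Sorting by P99 latency rather than raw call count surfaces the unoptimized calls first, which matches the article's point that response latency, not request volume alone, indicates the real load a client places on the API server and etcd.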
