Commit 45c8c48

image sizing
1 parent f6d1e02 commit 45c8c48

2 files changed: 8 additions & 8 deletions

File tree:
- Binary file (-25.3 KB, not shown)
- support/azure/azure-kubernetes/create-upgrade-delete/troubleshoot-apiserver-etcd.md (8 additions & 8 deletions)
@@ -131,18 +131,18 @@ If you're experiencing a high rate of HTTP 429 errors, one possible cause is tha
 kubectl get flowschemas
 kubectl get prioritylevelconfigurations
 ```
-<img src="image-4.png" alt="FlowSchema" width="300">
+<img src="image-4.png" alt="FlowSchema" width="600">
 
-<img src="image-5.png" alt="PriorityLevelConfiguration" width="300">
+<br>
+
+<img src="image-5.png" alt="PriorityLevelConfiguration" width="600">
 
 - Check Kubernetes Events
 
 ```bash
 kubectl get events -n kube-system aks-managed-apiserver-throttling-enabled
 ```
 
-<img src="image-6.png" alt="PriorityLevelConfiguration" width="600" height="1000">
-
 ### Solution 4: Identify unoptimized clients and mitigate
 
 #### Step 1: Identify unoptimized clients
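
For reference on the objects this hunk checks: a minimal sketch of how you might inspect the AKS-managed API Priority and Fairness (APF) configuration in more detail. The object name aks-managed-apiserver-guard comes from the hunks below; the debug endpoint is the built-in APF endpoint on the API server and assumes a cluster version with APF enabled.

```bash
# Inspect the AKS-managed APF objects in full.
kubectl get flowschema aks-managed-apiserver-guard -o yaml
kubectl get prioritylevelconfiguration aks-managed-apiserver-guard -o yaml

# Dump per-priority-level stats to see whether requests are currently
# being queued or rejected (built-in APF debug endpoint).
kubectl get --raw /debug/api_priority_and_fairness/dump_priority_levels
```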
@@ -159,21 +159,21 @@ kubectl get events -n kube-system aks-managed-apiserver-throttling-enabled
 kubectl delete flowschema aks-managed-apiserver-guard
 kubectl delete prioritylevelconfiguration aks-managed-apiserver-guard
 ```
-- You can also [modify the aks-managed-apiserver-guard FlowSchema and PriorityLevelConfiguration](https://kubernetes.io/docs/concepts/cluster-administration/flow-control/#good-practice-apf-settings) by applying the label aks-managed-skip-update-operation: true. This label preserves the modified configurations and prevents AKS from reconciling them back to default values.
+- You can also [modify the aks-managed-apiserver-guard FlowSchema and PriorityLevelConfiguration](https://kubernetes.io/docs/concepts/cluster-administration/flow-control/#good-practice-apf-settings) by applying the label **aks-managed-skip-update-operation: true**. This label preserves the modified configurations and prevents AKS from reconciling them back to default values.
 
 ```bash
 kubectl label prioritylevelconfiguration aks-managed-apiserver-guard aks-managed-skip-update-operation=true
 kubectl label flowschema aks-managed-apiserver-guard aks-managed-skip-update-operation=true
 ```
 > [!NOTE]
-> It's advisable to rather delete aks-managed-apiserver-guard after optimizing the client's LIST pattern and applying a custom FlowSchema and PriorityLevelConfiguration applicable to your cluster's requirement instead of modifying the default aks-managed-apiserver-guard, refer to [solution 5b](#solution-5b-throttle-a-client-thats-overwhelming-the-control-plane). Modifying it will cause AKS to be not be able reapply the aks-managed-apiserver-guard with defaults if the API server continues to experience out-of-memory (OOM) events in the future.
+> After you optimize the client's LIST pattern, it's advisable to delete aks-managed-apiserver-guard and apply a custom FlowSchema and PriorityLevelConfiguration that fits your cluster's requirements, as described in [solution 5b](#solution-5b-throttle-a-client-thats-overwhelming-the-control-plane), instead of modifying the default aks-managed-apiserver-guard. If you modify it, AKS won't be able to reapply aks-managed-apiserver-guard with default values if the API server continues to experience out-of-memory (OOM) events in the future.
 
 ### Cause 5: An offending client makes excessive LIST or PUT calls
 
 If you determine that etcd isn't overloaded with too many objects, an offending client might be making too many `LIST` or `PUT` calls to the API server.
 If you experience high latency or frequent timeouts, follow these steps to pinpoint the offending client and the types of API calls that fail.
 
-##### <a id="identifytopuseragents"></a> Step 1: Identify top user agents by the number of requests
+#### <a id="identifytopuseragents"></a> Step 1: Identify top user agents by the number of requests
 
 To identify which clients generate the most requests (and potentially the most API server load), run a query that resembles the following code. This query lists the top 10 user agents by the number of API server requests sent.
 
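The query that step 1 refers to falls outside this diff hunk. As a rough sketch only (not the article's own query), assuming kube-audit logs are routed to a Log Analytics workspace with the resource-specific AKSAudit table enabled, a top-10-user-agents query could be run like this:

```bash
# A sketch, not the article's query. Replace <workspace-guid> with your
# Log Analytics workspace ID; assumes the resource-specific AKSAudit table
# (with a UserAgent column) is enabled via diagnostic settings.
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query 'AKSAudit
    | where TimeGenerated > ago(1h)
    | summarize RequestCount = count() by UserAgent
    | top 10 by RequestCount'
```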
@@ -321,7 +321,7 @@ The results from this query can be useful to identify the kinds of API calls tha
 
 ### Solution 5a: Tune your API call pattern
 
-To reduce the pressure on the control plane, consider tuning your client's API call pattern. Refer to [best practices](/azure-aks-docs-pr/articles/aks/best-practices-performance-scale-large.md#kubernetes-clients).
+To reduce the pressure on the control plane, consider tuning your client's API server call pattern. Refer to [best practices](/azure-aks-docs-pr/articles/aks/best-practices-performance-scale-large.md#kubernetes-clients).
 
 ### Solution 5b: Throttle a client that's overwhelming the control plane
 
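The body of solution 5b isn't shown in this diff. As a hedged sketch of the kind of custom FlowSchema and PriorityLevelConfiguration it describes, assuming the offending client runs as a hypothetical service account noisy-client in the default namespace (the article's actual manifest may differ):

```bash
# A sketch only; names and values are illustrative, not the article's manifest.
kubectl apply -f - <<'EOF'
apiVersion: flowcontrol.apiserver.k8s.io/v1   # use v1beta3 on Kubernetes < 1.29
kind: PriorityLevelConfiguration
metadata:
  name: restrict-noisy-client
spec:
  type: Limited
  limited:
    nominalConcurrencyShares: 5     # small share of API server concurrency
    limitResponse:
      type: Reject                  # reject excess requests instead of queuing
---
apiVersion: flowcontrol.apiserver.k8s.io/v1
kind: FlowSchema
metadata:
  name: restrict-noisy-client
spec:
  priorityLevelConfiguration:
    name: restrict-noisy-client
  distinguisherMethod:
    type: ByUser
  rules:
    - subjects:
        - kind: ServiceAccount
          serviceAccount:
            name: noisy-client      # hypothetical offending client
            namespace: default
      resourceRules:
        - verbs: ["list", "watch"]
          apiGroups: ["*"]
          resources: ["*"]
          clusterScope: true
          namespaces: ["*"]
EOF
```

This throttles only the matched client while leaving all other traffic on the default priority levels.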