- You can also [modify the aks-managed-apiserver-guard FlowSchema and PriorityLevelConfiguration](https://kubernetes.io/docs/concepts/cluster-administration/flow-control/#good-practice-apf-settings) by applying the label **aks-managed-skip-update-operation: true**. This label preserves the modified configurations and prevents AKS from reconciling them back to default values.
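If you do choose to keep a modified configuration, the following is a minimal sketch of applying that label with kubectl. It assumes that both the FlowSchema and the PriorityLevelConfiguration are named aks-managed-apiserver-guard, as described above; verify the object names on your cluster before labeling them.

```bash
# Prevent AKS from reconciling the modified objects back to their defaults.
# Object names are assumed from the description above; confirm them first with:
#   kubectl get flowschemas,prioritylevelconfigurations
kubectl label flowschema aks-managed-apiserver-guard aks-managed-skip-update-operation=true
kubectl label prioritylevelconfiguration aks-managed-apiserver-guard aks-managed-skip-update-operation=true
```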
> Instead of modifying the default aks-managed-apiserver-guard, it's advisable to delete it after you optimize the client's LIST pattern and to apply a custom FlowSchema and PriorityLevelConfiguration that fit your cluster's requirements, as described in [solution 5b](#solution-5b-throttle-a-client-thats-overwhelming-the-control-plane). If you modify the default objects, AKS can't reapply aks-managed-apiserver-guard with default values if the API server continues to experience out-of-memory (OOM) events in the future.
### Cause 5: An offending client makes excessive LIST or PUT calls
If you determine that etcd isn't overloaded with too many objects, an offending client might be making too many `LIST` or `PUT` calls to the API server.
If you experience high latency or frequent timeouts, follow these steps to pinpoint the offending client and the types of API calls that fail.
#### <a id="identifytopuseragents"></a> Step 1: Identify top user agents by the number of requests
To identify which clients generate the most requests (and potentially the most API server load), run a query that resembles the following code. This query lists the top 10 user agents by the number of API server requests sent.
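The exact query depends on how your cluster's diagnostic settings route the kube-audit logs. As a minimal sketch, assuming the logs flow to a Log Analytics workspace in resource-specific mode (the AKSAudit table), a query along the following lines returns the top 10 user agents by request count; adjust the table, column names, and time window to match your environment.

```kusto
AKSAudit
// Limit the search to the window in which you observed latency or timeouts.
| where TimeGenerated between (ago(1h) .. now())
| summarize RequestCount = count() by UserAgent
| top 10 by RequestCount
```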
### Solution 5a: Tune your API call pattern
To reduce the pressure on the control plane, consider tuning your client's API server call pattern. Refer to [best practices](/azure-aks-docs-pr/articles/aks/best-practices-performance-scale-large.md#kubernetes-clients).
### Solution 5b: Throttle a client that's overwhelming the control plane