#Customer intent: As an Azure Kubernetes user, I want to avoid tunnel connectivity issues so that I can use an Azure Kubernetes Service (AKS) cluster successfully.
ms.custom: sap:Connectivity
---
### Solution 6: Cluster Proportional Autoscaler (CPA) for Konnectivity Agent
To address scalability challenges in large clusters, we have implemented the Cluster Proportional Autoscaler (CPA) for our Konnectivity Agents. This approach aligns with industry standards and best practices, ensuring optimal resource usage and enhanced performance.
**Why was this change made?**
Previously, the Konnectivity agent had a fixed replica count, which could create a bottleneck as the cluster grew. With the implementation of the Cluster Proportional Autoscaler (CPA), the replica count now dynamically adjusts based on node-scaling rules, ensuring optimal performance and resource usage.
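The node-scaling rule can be sketched as follows. This mirrors the `linear` controller math from the upstream kubernetes-sigs/cluster-proportional-autoscaler project; the parameter values here are illustrative, not the AKS defaults:

```shell
# Linear-controller sizing sketch (upstream cluster-proportional-autoscaler).
# The cluster capacity and parameter values below are illustrative only.
cores=800; nodes=100                               # current cluster capacity
coresPerReplica=512; nodesPerReplica=100; min=2    # controller parameters

# replicas = max(ceil(cores/coresPerReplica), ceil(nodes/nodesPerReplica)),
# clamped below by "min".
by_cores=$(( (cores + coresPerReplica - 1) / coresPerReplica ))
by_nodes=$(( (nodes + nodesPerReplica - 1) / nodesPerReplica ))
replicas=$(( by_cores > by_nodes ? by_cores : by_nodes ))
replicas=$(( replicas > min ? replicas : min ))
echo "konnectivity-agent replicas: $replicas"
```

With these sample numbers, the core count is the binding dimension, so the agent scales to two replicas; as more nodes or cores are added, the replica count grows proportionally.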
**What should customers check for?**
Customers should monitor for out-of-memory (OOM) kills on their nodes, because the Konnectivity agents run on those nodes. Use the following steps to identify and troubleshoot OOM kills:
1. Check for OOMKills on Nodes: Use the following command to check for OOMKills on your nodes:

    ```
    kubectl get events --all-namespaces | grep -i 'oomkill'
    ```

2. Inspect Node Resource Usage: Verify the resource usage on your nodes to ensure they are not running out of memory:

    ```
    kubectl top nodes
    ```

3. Review Pod Resource Requests and Limits: Ensure that the Konnectivity agent pods have appropriate resource requests and limits set to prevent OOMKills:

    ```
    kubectl get pod <pod-name> -n kube-system -o yaml | grep -A5 "resources:"
    ```

4. Adjust Resource Requests and Limits: If necessary, adjust the resource requests and limits for the Konnectivity agent pods by editing the deployment:
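
    This section doesn't show the edit command itself; a minimal sketch, assuming the agent runs as the `konnectivity-agent` deployment in the `kube-system` namespace (verify the deployment name in your cluster first), is:

    ```
    kubectl edit deployment konnectivity-agent -n kube-system
    ```

    Keep in mind that AKS can reconcile managed kube-system components, so manual edits to this deployment may be reverted.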
**How do customers use the Cluster Proportional Autoscaler (CPA)?**
Customers can override default values by updating the konnectivity-agent-autoscaler configmap in the kube-system namespace. Here is a sample command to update the configmap:
295
+
296
+
```
297
+
kubectl edit configmap <pod-name> -n kube-system
298
+
```
299
+
This command opens the configmap in an editor where customers can make the necessary changes.
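
As a sketch of what the editor opens: the field names below follow the upstream cluster-proportional-autoscaler `linear` controller format, and the values shown are illustrative rather than the AKS defaults:

```
data:
  linear: |-
    {
      "coresPerReplica": 512,
      "nodesPerReplica": 100,
      "min": 2,
      "max": 10,
      "preventSinglePointFailure": true
    }
```

For example, lowering `nodesPerReplica` makes the autoscaler add agent replicas more aggressively as the cluster grows.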