support/azure/azure-kubernetes/availability-performance/cluster-service-health-probe-mode-issues.md
156 additions & 9 deletions
@@ -4,8 +4,9 @@ description: Diagnoses and fixes common issues with the health probe mode featur
ms.date: 06/03/2024
ms.reviewer: niqi, cssakscic, v-weizhu
ms.service: azure-kubernetes-service
- ms.custom: sap:Node/node pool availability and performance, devx-track-azurecli
+ ms.custom: sap:Node/node pool availability and performance, devx-track-azurecli, innovation-engine
---
+

# Troubleshoot issues when enabling the AKS cluster service health probe mode

The health probe mode feature allows you to configure how Azure Load Balancer probes the health of the nodes in your Azure Kubernetes Service (AKS) cluster. You can choose between two modes: Shared and ServiceNodePort. The Shared mode uses a single health probe for all external traffic policy cluster services that use the same load balancer. In contrast, the ServiceNodePort mode uses a separate health probe for each service. The Shared mode can reduce the number of health probes and improve the performance of the load balancer, but it requires some additional components to work properly. To enable this feature, see [How to enable the health probe mode feature using the Azure CLI](#how-to-enable-the-health-probe-mode-feature-using-the-azure-cli).
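For orientation before the full procedure linked above, enabling the mode typically happens at cluster create or update time through the aks-preview Azure CLI extension. This is a minimal sketch only; the `--cluster-service-load-balancer-health-probe-mode` flag name follows the aks-preview extension and may differ by CLI version:

```azurecli
# Sketch only: requires the aks-preview extension; the flag name may vary by version
az extension add --name aks-preview
az aks update \
    --resource-group <resource-group-name> \
    --name <aks-cluster-name> \
    --cluster-service-load-balancer-health-probe-mode Shared
```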
@@ -36,11 +37,92 @@ The following operations also happen:
To troubleshoot these issues, follow these steps:

- 1. Check the RP frontend log to see if the health probe mode in the LoadBalancerProfile is properly configured. You can use the `az aks show` command to view the LoadBalancerProfile property of your cluster.
-
- 2. Check the *overlaymgr* log to see if the cloud provider secret is updated. The keyword to look for is `cloudConfigSecretResolver`. Or check the contents of the cloud-provider-config secret in the `ccp` namespace. You can use the `kubectl get secret` command to view the secret.
-
- 3. Check the chart or overlay daemonset cloud-node-manager to see if the health-probe-proxy sidecar container is enabled. You can use the `kubectl get ds` command to view the daemonset.
+ 1. First, connect to your AKS cluster using the Azure CLI:
+
+ ```azurecli
+ export RESOURCE_GROUP="aks-rg"
+ export AKS_CLUSTER_NAME="aks-cluster"
+ az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_CLUSTER_NAME --overwrite-existing
+ ```
+
+ 2. Next, check the RP frontend log to see if the health probe mode in the LoadBalancerProfile is properly configured. You can use the `az aks show` command to view the LoadBalancerProfile property of your cluster.
+
+ ```azurecli
+ export RESOURCE_GROUP="aks-rg"
+ export AKS_CLUSTER_NAME="aks-cluster"
+ az aks show --resource-group $RESOURCE_GROUP --name $AKS_CLUSTER_NAME --query "networkProfile.loadBalancerProfile"
+ ```
+ 3. Check the cloud provider configuration. In modern AKS clusters, the cloud provider configuration is managed internally and the `ccp` namespace doesn't exist. Instead, check for cloud provider related resources and verify the cloud-node-manager pods are running properly:
+
+ ```bash
+ # Check for cloud provider related ConfigMaps in kube-system
+ kubectl get configmap -n kube-system | grep -i azure
+
+ # Check if cloud-node-manager pods are running (indicates cloud provider integration is working)
+ kubectl get pods -n kube-system | grep cloud-node-manager
+
+ # Check the azure-ip-masq-agent-config if it exists
+ kubectl get configmap azure-ip-masq-agent-config-reconciled -n kube-system -o yaml 2>/dev/null || echo "ConfigMap not found"
+ ```
+ 4. Check the chart or overlay daemonset cloud-node-manager to see if the health-probe-proxy sidecar container is enabled. You can use the `kubectl get ds` command to view the daemonset.
+
+ ```shell
+ kubectl get ds -n kube-system cloud-node-manager -o yaml
+ ```
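To confirm the sidecar without scanning the full YAML, a jsonpath query can list the daemonset's container names. This is a minimal sketch that assumes the sidecar is named `health-probe-proxy`, as described in the step above:

```shell
# List container names in the daemonset; health-probe-proxy should appear if the sidecar is enabled
kubectl get ds -n kube-system cloud-node-manager \
  -o jsonpath='{.spec.template.spec.containers[*].name}'
```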
## Cause 1: The health probe mode isn't Shared or ServiceNodePort
@@ -74,6 +156,26 @@ The health probe mode feature requires you to register the feature on your subsc
Make sure you register the feature for your subscription before creating or updating your cluster. You can use the `az feature register` command to register the feature.
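For reference, the registration flow generally follows the pattern below. The feature flag name is a placeholder because this excerpt doesn't state it; substitute the name given in the AKS documentation:

```azurecli
# <FeatureFlagName> is a placeholder; use the flag named in the AKS docs
az feature register --namespace "Microsoft.ContainerService" --name "<FeatureFlagName>"

# Check until the state reports "Registered", then refresh the resource provider
az feature show --namespace "Microsoft.ContainerService" --name "<FeatureFlagName>" --query properties.state
az provider register --namespace Microsoft.ContainerService
```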
support/azure/azure-kubernetes/create-upgrade-delete/cannot-scale-cluster-autoscaler-enabled-node-pool.md
27 additions & 25 deletions
@@ -3,14 +3,15 @@ title: Cluster autoscaler fails to scale with cannot scale cluster autoscaler en
description: Learn how to troubleshoot the cannot scale cluster autoscaler enabled node pool error when your autoscaler isn't scaling up or down.
author: sgeannina
ms.author: ninasegares
- ms.date: 04/17/2025
- ms.reviewer: aritraghosh, chiragpa.momajed
+ ms.date: 06/09/2024
+ ms.reviewer: aritraghosh, chiragpa
ms.service: azure-kubernetes-service
- ms.custom: sap:Create, Upgrade, Scale and Delete operations (cluster or nodepool)
+ ms.custom: sap:Create, Upgrade, Scale and Delete operations (cluster or nodepool), innovation-engine
---
+

# Cluster autoscaler fails to scale with "cannot scale cluster autoscaler enabled node pool" error

- This article discusses how to resolve the "cannot scale cluster autoscaler enabled node pool" error that occurs when you scale a cluster that has an autoscaler-enabled node pool.
+ This article discusses how to resolve the "cannot scale cluster autoscaler enabled node pool" error that appears when scaling a cluster with an autoscaler-enabled node pool.
## Symptoms
@@ -22,33 +23,33 @@ You receive an error message that resembles the following message:
## Troubleshooting checklist
- Azure Kubernetes Service (AKS) uses Azure Virtual Machine Scale Sets-based agent pools. These pools contain cluster nodes and [cluster autoscaling capabilities](/azure/aks/cluster-autoscaler), if they're enabled.
+ Azure Kubernetes Service (AKS) uses virtual machine scale sets-based agent pools, which contain cluster nodes and [cluster autoscaling capabilities](/azure/aks/cluster-autoscaler) if enabled.
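As background for the checks that follow, the autoscaler is enabled per node pool with minimum and maximum node counts. A minimal sketch with placeholder names:

```azurecli
# Enable the cluster autoscaler on an existing node pool (placeholder names)
az aks nodepool update \
    --resource-group <resource-group-name> \
    --cluster-name <cluster-name> \
    --name <nodepool-name> \
    --enable-cluster-autoscaler \
    --min-count 1 \
    --max-count 5
```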
### Check that the cluster virtual machine scale set exists
- 1. Sign in to the [Azure portal](https://portal.azure.com).
- 1. Find the node resource group by searching for the following names:
+ 1. Sign in to [Azure portal](https://portal.azure.com).
+ 1. Find the node resource group by searching the following names:
+
+    - The default name `MC_{AksResourceGroupName}_{YourAksClusterName}_{AksResourceLocation}`.
+    - The custom name (if it was provided at creation).

- - The default name `MC_{AksResourceGroupName}_{YourAksClusterName}_{AksResourceLocation}`
- - The custom name (if it was provided at creation)
- >
> [!NOTE]
- > When you create a cluster, AKS automatically creates a second resource group to store the AKS resources. For more information, see [Why are two resource groups created with AKS?](/azure/aks/faq#why-are-two-resource-groups-created-with-aks)
+ > When you create a new cluster, AKS automatically creates a second resource group to store the AKS resources. For more information, see [Why are two resource groups created with AKS?](/azure/aks/faq#why-are-two-resource-groups-created-with-aks)

- 1. Check the list of resources to make sure that a virtual machine scale set exists.
+ 1. Check the list of resources and make sure that there's a virtual machine scale set.
## Cause 1: The cluster virtual machine scale set was deleted
- If you delete the virtual machine scale set that's attached to the cluster, this action causes the cluster autoscaler to fail. It also causes issues when you provision resources such as nodes and pods.
+ Deleting the virtual machine scale set attached to the cluster causes the cluster autoscaler to fail. It also causes issues when provisioning resources such as nodes and pods.
> [!NOTE]
- > Modifying any resource under the node resource group in the AKS cluster is an unsupported action and causes cluster operation failures. You can prevent changes from being made to the node resource group by [blocking users from modifying resources](/azure/aks/cluster-configuration#fully-managed-resource-group-preview) that are managed by the AKS cluster.
+ > Modifying any resource under the node resource group in the AKS cluster is an unsupported action and will cause cluster operation failures. You can prevent changes from being made to the node resource group by [blocking users from modifying resources](/azure/aks/cluster-configuration#fully-managed-resource-group-preview) managed by the AKS cluster.
### Reconcile node pool
If the cluster virtual machine scale set is accidentally deleted, you can reconcile the node pool by using `az aks nodepool update`:
- ```bash
+ ```shell
# Update Node Pool Configuration
az aks nodepool update --resource-group <resource-group-name> --cluster-name <cluster-name> --name <nodepool-name> --tags <tags> --node-taints <taints> --labels <labels>
```
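As a quick follow-up to the reconcile, the node pool's provisioning state can be checked. A minimal sketch with placeholder names:

```azurecli
# Confirm the node pool returned to a Succeeded provisioning state
az aks nodepool show --resource-group <resource-group-name> --cluster-name <cluster-name> --name <nodepool-name> --query provisioningState
```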
@@ -59,13 +60,13 @@ Monitor the node pool to make sure that it's functioning as expected and that al
## Cause 2: Tags or any other properties were modified from the node resource group
- You may experience scaling errors if you modify or delete Azure-created tags and other resource properties in the node resource group. For more information, see [Can I modify tags and other properties of the AKS resources in the node resource group?](/azure/aks/faq#can-i-modify-tags-and-other-properties-of-the-aks-resources-in-the-node-resource-group)
+ You may receive scaling errors if you modify or delete Azure-created tags and other resource properties in the node resource group. For more information, see [Can I modify tags and other properties of the AKS resources in the node resource group?](/azure/aks/faq#can-i-modify-tags-and-other-properties-of-the-aks-resources-in-the-node-resource-group)
### Reconcile node resource group tags
Use the Azure CLI to make sure that the node resource group has the correct tags for AKS name and the AKS group name:
- ```bash
+ ```shell
# Add or update tags for AKS name and AKS group name
az group update --name <node-resource-group-name> --set tags.AKS-Managed-Cluster-Name=<aks-managed-cluster-name> tags.AKS-Managed-Cluster-RG=<aks-managed-cluster-rg>
```
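To verify the result, a quick check of the group's tags can follow the update. A minimal sketch:

```azurecli
# Verify the tags were applied to the node resource group
az group show --name <node-resource-group-name> --query tags --output table
```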
@@ -76,21 +77,22 @@ Monitor the resource group to make sure that the tags are correctly applied and
## Cause 3: The cluster node resource group was deleted
- Deleting the cluster node resource group causes issues when you provision the infrastructure resources that are required by the cluster. This action causes the cluster autoscaler to fail.
+ Deleting the cluster node resource group causes issues when provisioning the infrastructure resources required by the cluster, which causes the cluster autoscaler to fail.
## Solution: Update the cluster to the goal state without changing the configuration
- To resolve this issue, run the following command to recover the deleted virtual machine scale set or any tags (missing or modified).
+ To resolve this issue, you can run the following command to recover the deleted virtual machine scale set or any tags (missing or modified):
> [!NOTE]
- > It might take a few minutes until the operation finishes.
+ > It might take a few minutes until the operation completes.
+
+ Set your environment variables for the AKS cluster resource group and cluster name before running the command. A random suffix is included to prevent name collisions during repeatable executions, but you must ensure the resource group and cluster exist.
```azurecli
- az aks update --resource-group <resource-group-name> --name <aks-cluster-name>
```
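The diff is truncated here, so the added replacement for this block isn't shown. As a hedged reconstruction only, based on the environment-variable paragraph above, the updated block presumably resembles the following; the variable names and the random-suffix step are assumptions, not the PR's actual lines:

```azurecli
# Hedged reconstruction; names and the suffix step are assumptions, not the PR's actual lines
export RANDOM_SUFFIX=$(openssl rand -hex 3)
export RESOURCE_GROUP="<resource-group-name>"
export AKS_CLUSTER_NAME="<aks-cluster-name>"

# Reconcile the cluster to its goal state without changing its configuration
az aks update --resource-group $RESOURCE_GROUP --name $AKS_CLUSTER_NAME
```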