
Commit 52034d3

Merge pull request #8164 from MicrosoftDocs/main
Push2Live
2 parents 4c51158 + bcc07ef commit 52034d3

7 files changed

Lines changed: 100 additions & 44 deletions


.openpublishing.redirection.json

Lines changed: 4 additions & 0 deletions
@@ -12659,6 +12659,10 @@
     "source_path": "support/dynamics-365/sales/errorinternalservertransienterror-error.md",
     "redirect_url": "/troubleshoot/power-platform/dataverse/email-exchange-synchronization/an-error-occurred-while-synchronizing-item",
     "redirect_document_id": false
+  },
+  {
+    "source_path": "support/dynamics/gp/integration-manager-log-file-does-not-print.md",
+    "redirect_url": "/dynamics-gp/installation/developer-tools"
   }
 ]
}
support/azure/azure-kubernetes/error-codes/unsatisfiablepdb-error.md

Lines changed: 81 additions & 0 deletions
@@ -0,0 +1,81 @@

---
title: AKS cluster upgrade fails with UnsatisfiablePDB error
description: Provides solutions to the UnsatisfiablePDB error when you try to upgrade an Azure Kubernetes Service (AKS) cluster.
ms.date: 10/27/2023
ms.reviewer: chiragpa, v-weizhu
ms.service: azure-kubernetes-service
ms.custom: sap:Create, Upgrade, Scale and Delete operations (cluster or nodepool)
#Customer intent: As an Azure Kubernetes Service (AKS) user, I want to troubleshoot an Azure Kubernetes Service cluster upgrade that failed because of an UnsatisfiablePDB error so that I can upgrade the cluster successfully.
---

# Error "UnsatisfiablePDB" when upgrading an AKS cluster

This article discusses how to identify and resolve the "UnsatisfiablePDB" error that might occur when you try to [upgrade an Azure Kubernetes Service (AKS) cluster](/azure/aks/upgrade-aks-cluster).

## Prerequisites

This article requires Azure CLI version 2.53.0 or a later version. Run `az --version` to find your installed version. If you need to install or upgrade the Azure CLI, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
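
To check your local installation, and to update it if necessary, you can run something like the following (a sketch; `az upgrade` is available in Azure CLI 2.11.0 and later):

```console
$ az --version
$ az upgrade
```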

## Symptoms

An AKS cluster upgrade operation fails with the following error message:

> Code: UnsatisfiablePDB
> Message: 1 error occurred:
> \* PDB \<pdb-namespace>/\<pdb-name> has maxunavailble == 0 can't proceed with put operation

## Cause

Before starting an upgrade operation, AKS checks the cluster for any existing [Pod Disruption Budgets (PDBs)](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#pod-disruption-budgets) that have the `maxUnavailable` parameter set to 0. Such PDBs are likely to block node drain operations. If node drains are blocked, the cluster upgrade can't complete successfully and might leave the cluster in a failed state.
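
For illustration, a PDB that trips this check might look like the following sketch (the name, namespace, and `app` label are hypothetical placeholders):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: <pdb-name>
  namespace: <pdb-namespace>
spec:
  maxUnavailable: 0    # zero tolerated disruptions: voluntary evictions, including node drains, are blocked
  selector:
    matchLabels:
      app: <app-label>
```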
After receiving the "UnsatisfiablePDB" error, you can confirm the PDB's status by running the following command:

```console
$ kubectl get pdb <pdb-name> -n <pdb-namespace>
```

The output should be similar to the following:

```output
NAME         MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
<pdb-name>   N/A             0                 0                     49s
```

If the value of `MAX UNAVAILABLE` is 0, the node drain fails during the upgrade process.

To resolve this issue, use one of the following solutions.

## Solution 1: Adjust the PDB's "maxUnavailable" parameter

> [!NOTE]
> Use this solution if you can edit the PDB resource directly.

1. Set the PDB's `maxUnavailable` parameter to `1` or a greater value, as in the example after these steps. For more information, see [Specifying a PodDisruptionBudget](https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget).
2. Retry the AKS cluster upgrade operation.

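One way to make that change is to patch the PDB in place (a sketch, assuming the PDB specifies `maxUnavailable` rather than `minAvailable`):

```console
$ kubectl patch pdb <pdb-name> -n <pdb-namespace> --type merge -p '{"spec":{"maxUnavailable":1}}'
```
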
## Solution 2: Back up, delete, and redeploy the PDB

> [!NOTE]
> Use this solution if directly editing the PDB resource isn't viable.

1. Back up the PDB using the following command:

   ```console
   $ kubectl get pdb <pdb-name> -n <pdb-namespace> -o yaml > pdb_backup.yaml
   ```

2. Delete the PDB using the following command:

   ```console
   $ kubectl delete pdb <pdb-name> -n <pdb-namespace>
   ```

3. Retry the AKS cluster upgrade operation.

4. If the AKS cluster upgrade operation succeeds, redeploy the PDB using the following command:

   ```console
   $ kubectl apply -f pdb_backup.yaml
   ```

[!INCLUDE [Azure Help Support](../../../includes/azure-help-support.md)]

support/azure/azure-kubernetes/extensions/istio-add-on-minor-revision-upgrade.md

Lines changed: 10 additions & 6 deletions
@@ -1,13 +1,13 @@
 ---
 title: Istio service mesh add-on minor revision upgrade troubleshooting
 description: Learn how to do minor revision upgrade troubleshooting on the Istio service mesh add-on for Azure Kubernetes Service (AKS).
-ms.date: 04/26/2024
+ms.date: 02/07/2025
 author: SanyaKochhar
 ms.author: kochhars
 editor: v-jsitser
-ms.reviewer: fuyuanbie, shasb, nshankar, ddama, v-leedennis
-ms.service: azure-kubernetes-service
+ms.reviewer: fuyuanbie, shasb, nshankar, ddama, v-leedennis, v-weizhu
 ms.custom: sap:Extensions, Policies and Add-Ons
+ms.service: azure-kubernetes-service
 #Customer intent: As an Azure Kubernetes user, I want to troubleshoot minor revision upgrades of the Istio add-on so that I can use the Istio service mesh successfully.
 ---
 # Istio service mesh add-on minor revision upgrade troubleshooting
@@ -29,9 +29,10 @@ The following table lists various problems and the different scenarios and solut

 | Scenario | Problem | Solution |
 |--|--|--|
-| Data plane workloads are dropped from the mesh. | Data plane and control plane revisions didn't correspond before you completed or rolled back an upgrade. | <p>Follow these steps:</p><ol> <li><p>Relabel namespaces that contain workloads by specifying the revision that's expected to exist after the upgrade completion or rollback. To do this, run the [kubectl label](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_label/) command:</p><pre>kubectl label namespace default istio.io/rev=asm-x-y --overwrite</pre></li> <li><p>Restart the corresponding workload deployments to trigger sidecar reinjection of the correct revision. To do this, run the [kubectl rollout restart](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_restart/) command:</p><pre>kubectl rollout restart deployment \<deployment name></pre></li> <li><p>Verify that the sidecar images exist. To do this, run the [kubectl get](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_get/) command:</p><pre>kubectl get pods --namespace \<namespace> --output yaml \| grep mcr.microsoft.com/oss/istio/proxyv2:</pre></li> </ol> |
+| Data plane workloads are dropped from the mesh. | Data plane and control plane revisions didn't correspond before you completed or rolled back an upgrade. | <p>Follow these steps:</p><ol> <li><p>Relabel namespaces that contain workloads by specifying the revision that's expected to exist after the upgrade completion or rollback:</p><pre>kubectl label namespace default istio.io/rev=asm-x-y --overwrite</pre></li> <li><p>Restart the corresponding workload deployments using the [kubectl rollout restart](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_restart/) command to trigger sidecar reinjection of the correct revision:</p><pre>kubectl rollout restart deployment \<deployment name></pre></li> <li><p>Verify that the sidecar images exist using the [kubectl get](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_get/) command:</p><pre>kubectl get pods --namespace \<namespace> --output yaml \| grep mcr.microsoft.com/oss/istio/proxyv2:</pre></li> </ol> |
 | Control plane pods are in the pending state. | The pods lack capacity. | Verify the state of the pods by running the [kubectl describe](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/) command. If capacity is the problem, you can scale up your cluster to add another node. For more information, see [Manually scale the node count in an Azure Kubernetes Service (AKS) cluster](/azure/aks/scale-cluster). |
-| The [az aks mesh get-upgrades](/cli/azure/aks/mesh#az-aks-mesh-get-upgrades) command returns no available upgrades. | The newest Istio revision might be incompatible with the current AKS cluster version. | You can use the [az aks mesh get-revisions](/cli/azure/aks/mesh#az-aks-mesh-get-revisions) command to discover whether newer Istio revisions exist. The output includes a list of compatible cluster versions for each Istio revision. Therefore, you can determine whether a cluster upgrade is necessary. |
+| The [az aks mesh get-upgrades](/cli/azure/aks/mesh#az-aks-mesh-get-upgrades) command returns no available upgrades. | The next Istio revision might be incompatible with the current AKS cluster version. | You can use the [az aks mesh get-revisions](/cli/azure/aks/mesh#az-aks-mesh-get-revisions) command to discover whether newer Istio revisions exist. The output includes a list of compatible cluster versions for each Istio revision. Therefore, you can determine whether a cluster upgrade is necessary. If _both_ mesh and cluster are no longer supported, upgrade the cluster version first, and then the mesh revision. To recover from this scenario, a cluster upgrade is permitted even if it's incompatible with the mesh revision. |
+

 > [!NOTE]
 > To avoid unintended behavior and broken functionality, and also make sure that you're receiving updates for security vulnerabilities, we strongly recommend that you upgrade to a supported and up-to-date [AKS version](/azure/aks/supported-kubernetes-versions) and Istio add-on revision. Remember that the add-on revision should also be within the supported Kubernetes version range for the given AKS cluster. As highlighted in the [Minor revision upgrade](/azure/aks/istio-upgrade#minor-revision-upgrade) section of the Istio upgrade article, you can run the `az aks mesh get-revisions` and `az aks mesh get-upgrades` commands to learn about available add-on revisions, upgrades, and compatibility information.
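
Taken together, the recovery steps in the first table row amount to a sequence like this (a sketch; `asm-x-y`, `<deployment-name>`, and `<namespace>` are placeholders):

```console
kubectl label namespace default istio.io/rev=asm-x-y --overwrite
kubectl rollout restart deployment <deployment-name>
kubectl get pods --namespace <namespace> --output yaml | grep mcr.microsoft.com/oss/istio/proxyv2:
```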
@@ -40,7 +41,10 @@ The following table lists various problems and the different scenarios and solut

 - A downgrade to an older revision (outside the canary rollback process) isn't allowed.

-- Skipping from one revision to a nonconsecutive revision is allowed only if AKS no longer supports both the current revision and the next upgrade revision. At this point, the only upgrade that's available to you is the lowest supported revision.
+- Available upgrades for a revision depend on whether it's currently supported. For example, if `n` is the currently installed revision and `n+2` is the latest revision:
+  - If `n` is supported, you can upgrade to the next revision `n+1` or directly to the newest revision `n+2`.
+  - If both `n` and `n+1` (the next consecutive revision) are unsupported, the only available upgrade is `n+2` (the next supported revision).
+  - If `n` has been unsupported for a while, it's possible that both of the next two consecutive revisions are unsupported. In this case, the only available upgrade is the lowest supported revision.

 - The Istio `sidecar.istio.io/inject` label doesn't enable sidecar injection for the Istio add-on. You must use the `istio.io/rev` label when you label and relabel your namespaces during the canary upgrade.
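
To check where you stand before and after such an upgrade, you can list revisions and available upgrades (a sketch; `<location>`, `<resource-group>`, and `<cluster-name>` are placeholders):

```azurecli
az aks mesh get-revisions --location <location> --output table
az aks mesh get-upgrades --resource-group <resource-group> --name <cluster-name>
```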

support/azure/azure-kubernetes/toc.yml

Lines changed: 3 additions & 0 deletions
@@ -366,3 +366,6 @@
   href: error-codes/vmextensionerror-k8sapiserverconnfail.md
 - name: VMExtensionError_K8SAPIServerDNSLookupFail error
   href: error-codes/vmextensionerror-k8sapiserverdnslookupfail.md
+- name: VMExtensionError_K8SAPIServerDNSLookupFail error
+- name: UnsatisfiablePDB error
+  href: error-codes/unsatisfiablepdb-error.md

support/dynamics/gp/integration-manager-log-file-does-not-print.md

Lines changed: 0 additions & 34 deletions
This file was deleted.

support/dynamics/gp/toc.yml

Lines changed: 0 additions & 2 deletions
@@ -154,8 +154,6 @@
   href: initialization-errors-when-removing-or-installing-integration-manager.md
 - name: Input string was not in a correct format
   href: input-string-was-not-a-correct-format.md
-- name: Integration Manager log file doesn't print
-  href: integration-manager-log-file-does-not-print.md
 - name: Integration performance decreases after an upgrade
   href: integration-performance-decreases-use-sql-source-file.md
 - name: Invalid Unit Cost when doing PA Timesheet integration

support/windows-server/active-directory/poor-performance-calling-lookup-functions.md

Lines changed: 2 additions & 2 deletions
@@ -1,7 +1,7 @@
 ---
 title: How to disable the lookup of isolated names
 description: Provides a resolution to the poor performance when calling lookup functions to resolve names. Gives a method to disable the lookup of isolated names in trusted domain.
-ms.date: 01/15/2025
+ms.date: 02/06/2025
 manager: dcscontentpm
 audience: itpro
 ms.topic: troubleshooting
@@ -16,7 +16,7 @@ When calling the **LookupAccountName** or **LsaLookupNames** function to resolve

 For example, poor performance might occur when using scripts or tools (such as *Cacls.exe*, *Xcacls.exe*, *icacls.exe*, *Dsacls.exe*, and *Subinacl.exe*) to call the functions to edit security settings.

-The problem may show up when you have many trusted domains or forests (applies to both external and forest trusts), and/or some of these domains or forests are offline or slow to respond.
+The problem may show up when you have many trusted domains or external forest trusts, and/or some of these domains or forests are offline or slow to respond.

 When the functions are called for an isolated name (the format is AccountName in contrast to domain\AccountName), a remote procedure call (RPC) is made to domain controllers on all trusted domains/forests. This issue might occur if the primary domain has many trust relationships with other domains/forests or if it's doing many lookups at the same time. For example, a script is configured to run at the startup of many clients, or many trusted domains/forests use the same script simultaneously.
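
The article's resolution is to disable the lookup of isolated names in trusted domains. As a sketch, assuming the `LsaLookupRestrictIsolatedNameLevel` registry value that this resolution uses, the setting can be applied like this:

```console
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LsaLookupRestrictIsolatedNameLevel /t REG_DWORD /d 1 /f
```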
