
Commit 3472dfc

Update fail-to-mount-azure-disk-volume.md
Added scenario 6 for nodes-exceed-max-volume-count https://stackoverflow.com/questions/58880358/kubernetes-azure-nodes-exceed-max-volume-count
1 parent be13e6c commit 3472dfc

1 file changed

Lines changed: 17 additions & 1 deletion

File tree

support/azure/azure-kubernetes/storage/fail-to-mount-azure-disk-volume.md

@@ -1,7 +1,7 @@
 ---
 title: Unable to mount Azure disk volumes
 description: Describes errors that occur when mounting Azure disk volumes fails, and provides solutions.
-ms.date: 09/06/2024
+ms.date: 03/22/2025
 author: genlin
 ms.author: genli
 ms.reviewer: chiragpa, akscsscic, v-weizhu
@@ -23,6 +23,7 @@ However, the pod stays in the **ContainerCreating** status. When you run the `ku
 - [Volume is already used by pod](#error3)
 - [StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set](#error4)
 - [ApplyFSGroup failed for vol](#error5)
+- [node(s) exceed max volume count](#error6)
 
 See the following sections for error details, possible causes and solutions.
 
@@ -189,6 +190,21 @@ OnRootMismatch: Only change permissions and ownership if permission and ownershi
 
 For more information, see [Configure volume permission and ownership change policy for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods).
 
+## <a id="error6"></a>node(s) exceed max volume count
+Here are the details of this error:
+```output
+Events:
+  Type     Reason            Age   From               Message
+  ----     ------            ----  ----               -------
+  Warning  FailedScheduling  25s   default-scheduler  0/8 nodes are available: 8 node(s) exceed max volume count. preemption: 0/8 nodes are available: 8 No preemption victims found for incoming pod..
+```
+### Cause: The maximum disk limit for the VM size is reached
+The maximum number of data disks allowed for the specified VM size is reached.
+
+### Solution: Use another VM size with a higher disk limit
+You can delete existing disks from the node, scale the node pool, or add a new node pool that uses a VM size with a higher disk limit.
+Also note that the number of disks per node shouldn't exceed the [Kubernetes default limit](https://kubernetes.io/docs/concepts/storage/storage-limits/#kubernetes-default-limits).
+
 ## More information
 
 For more Azure Disk known issues, see [Azure disk plugin known issues](https://github.com/andyzhangx/demo/blob/master/issues/azuredisk-issues.md).
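The scheduling failure added in this commit comes down to a simple per-node constraint: each Azure VM size can attach only so many data disks, and the scheduler rejects a pod whose disk-backed volumes would push a node past that limit. A minimal sketch of that check, using illustrative (not authoritative) per-VM-size limits:

```python
# Hypothetical sketch of the scheduler's max-volume-count check.
# The per-VM-size data-disk limits below are illustrative assumptions;
# consult the Azure VM sizes documentation for the real values.
MAX_DATA_DISKS = {
    "Standard_DS2_v2": 8,
    "Standard_DS4_v2": 32,
}

def can_schedule(vm_size: str, attached_disks: int, requested_disks: int) -> bool:
    """Return True if the node can attach the pod's disk-backed volumes."""
    return attached_disks + requested_disks <= MAX_DATA_DISKS[vm_size]

# A node already at its limit rejects one more disk-backed pod,
# which surfaces as "node(s) exceed max volume count":
print(can_schedule("Standard_DS2_v2", 8, 1))  # False
# The same pod fits on a VM size with a higher disk limit:
print(can_schedule("Standard_DS4_v2", 8, 1))  # True
```

This is why the solution above (a node pool with a larger VM size, or freeing disks on existing nodes) clears the error: either the right-hand limit grows or the left-hand count shrinks.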
