support/azure/azure-kubernetes/storage/fail-to-mount-azure-disk-volume.md (17 additions, 1 deletion)
@@ -1,7 +1,7 @@
---
title: Unable to mount Azure disk volumes
description: Describes errors that occur when mounting Azure disk volumes fails, and provides solutions.
-ms.date: 09/06/2024
+ms.date: 03/22/2025
author: genlin
ms.author: genli
ms.reviewer: chiragpa, akscsscic, v-weizhu
@@ -23,6 +23,7 @@ However, the pod stays in the **ContainerCreating** status. When you run the `ku
-[Volume is already used by pod](#error3)
-[StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set](#error4)
-[ApplyFSGroup failed for vol](#error5)
+-[node(s) exceed max volume count](#error6)
See the following sections for error details, possible causes and solutions.
@@ -189,6 +190,21 @@ OnRootMismatch: Only change permissions and ownership if permission and ownershi
For more information, see [Configure volume permission and ownership change policy for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods).
+## <a id="error6"></a>node(s) exceed max volume count
+Here are the details of this error:
+```output
+Events:
+  Type     Reason            Age  From               Message
+  ----     ------            ---  ----               -------
+  Warning  FailedScheduling  25s  default-scheduler  0/8 nodes are available: 8 node(s) exceed max volume count. preemption: 0/8 nodes are available: 8 No preemption victims found for incoming pod..
+```
+### Cause: The maximum disk count for the VM size is reached
+The maximum data disk limit is reached for the specified VM size.
+
+### Solution: Use a VM size that has a higher disk limit
+To resolve this issue, delete unused disks from the node, scale out the node pool, or add a new node pool that uses a VM size with a higher disk limit.
+Also, make sure that the number of disks per node doesn't exceed the [Kubernetes default limit](https://kubernetes.io/docs/concepts/storage/storage-limits/#kubernetes-default-limits).
+
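The scheduling failure above comes down to simple headroom arithmetic: a pod's disk-backed volume fits on a node only if the node's attached data disk count is below the VM size's limit. A minimal sketch of that check (the `MAX_DATA_DISKS` values here are illustrative placeholders, not authoritative Azure limits; consult the Azure VM sizes documentation for real numbers):

```python
# Sketch: decide whether a node can accept one more Azure disk volume.
# The per-VM-size limits below are illustrative examples only; look up
# the authoritative "max data disks" value for each VM size in Azure docs.
MAX_DATA_DISKS = {
    "Standard_DS2_v2": 8,
    "Standard_DS4_v2": 32,
}

def can_attach_disk(vm_size: str, attached_disks: int) -> bool:
    """Return True if one more data disk fits on a node of this VM size."""
    limit = MAX_DATA_DISKS.get(vm_size)
    if limit is None:
        raise ValueError(f"unknown VM size: {vm_size}")
    return attached_disks + 1 <= limit

# A node already at its limit is what produces
# "node(s) exceed max volume count" in the scheduler events.
print(can_attach_disk("Standard_DS2_v2", 8))   # False: limit reached
print(can_attach_disk("Standard_DS4_v2", 8))   # True: headroom remains
```

When every node in the pool fails this check, the scheduler reports all of them as unavailable, which matches the `0/8 nodes are available` message in the event output.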
## More information
For more Azure Disk known issues, see [Azure disk plugin known issues](https://github.com/andyzhangx/demo/blob/master/issues/azuredisk-issues.md).