support/azure/azure-kubernetes/storage/fail-to-mount-azure-disk-volume.md
- [Volume is already used by pod](#error3)
- [StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set](#error4)
- [ApplyFSGroup failed for vol](#error5)
- [Node(s) exceed max volume count](#error6)

See the following sections for error details, possible causes, and solutions.
For more information, see [Configure volume permission and ownership change policy for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods).

## <a id="error6"></a>Node(s) exceed max volume count

Here are details of this error:

```output
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  25s   default-scheduler  0/8 nodes are available: 8 node(s) exceed max volume count. preemption: 0/8 nodes are available: 8 No preemption victims found for incoming pod..
```
### Cause: Maximum disk limit is reached

The node has reached its maximum disk capacity. In AKS, the number of data disks that can be attached to a node depends on the VM size configured for the node pool.
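As a quick check, you can compare the VM size's data disk limit against the volumes currently attached to each node. This is a sketch: the region `eastus` and the VM size `Standard_DS2_v2` are hypothetical placeholders, not values from this article; substitute your node pool's actual region and VM size.

```shell
# Hypothetical region and VM size; substitute your node pool's values.
# Show the maximum number of data disks the VM size supports:
az vm list-sizes --location eastus \
  --query "[?name=='Standard_DS2_v2'].maxDataDiskCount" --output tsv

# List the persistent volumes currently attached to each node:
kubectl get volumeattachments \
  -o custom-columns=NODE:.spec.nodeName,PV:.spec.source.persistentVolumeName
```

If the number of attachments for a node equals the `maxDataDiskCount` value, the scheduler cannot place another pod that requires an Azure disk volume on that node.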
### Solution: Use another VM size with a higher disk limit

To resolve the issue, we recommend using another VM size that supports more data disks for the node.

Additionally, make sure that the number of disks per node doesn't exceed the [Kubernetes default limits](https://kubernetes.io/docs/concepts/storage/storage-limits/#kubernetes-default-limits).
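One way to move to a larger VM size is to add a new node pool and reschedule the workload onto it. This is a sketch with hypothetical names (`myResourceGroup`, `myAKSCluster`, `largepool`, `Standard_DS4_v2` are assumptions, not values from this article):

```shell
# Hypothetical resource names; replace with your own resource group,
# cluster name, node pool name, and a VM size with a higher disk limit.
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name largepool \
  --node-vm-size Standard_DS4_v2 \
  --node-count 3
```

After the new node pool is ready, the scheduler can place pods that require Azure disk volumes on its nodes, because the larger VM size supports more attached data disks.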