Commit e0f8552: Fix typos and improve error message clarity
1 parent 3472dfc commit e0f8552

1 file changed: support/azure/azure-kubernetes/storage/fail-to-mount-azure-disk-volume.md
Lines changed: 11 additions & 6 deletions
@@ -23,7 +23,7 @@ However, the pod stays in the **ContainerCreating** status. When you run the `ku
 - [Volume is already used by pod](#error3)
 - [StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set](#error4)
 - [ApplyFSGroup failed for vol](#error5)
-- [node(s) exceed max volume count](#error6)
+- [Node(s) exceed max volume count](#error6)
 
 See the following sections for error details, possible causes and solutions.
 
@@ -190,20 +190,25 @@ OnRootMismatch: Only change permissions and ownership if permission and ownershi
 
 For more information, see [Configure volume permission and ownership change policy for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods).
 
-## <a id="error6"></a>node(s) exceed max volume count
+## <a id="error6"></a>Node(s) exceed max volume count
+
 Here are details of this error:
+
 ```output
 Events:
   Type     Reason            Age  From               Message
   ----     ------            ---- ----               -------
   Warning  FailedScheduling  25s  default-scheduler  0/8 nodes are available: 8 node(s) exceed max volume count. preemption: 0/8 nodes are available: 8 No preemption victims found for incoming pod..
 ```
-### Cause: Max Disks for VM size is reached
-The maximum disk limit is reached for the specified VM size.
+### Cause: Maximum disk limit is reached
+
+The node has reached its maximum disk capacity. In AKS, the number of disks per node depends on the VM size configured for the node pool.
 
 ### Solution: Use another VM size with more disk limits
-You can delete existing disks for the node, scale the node pool, or add a new node pool with a VM size that has a higher disk limit.
-Also note that the number of disks per node should not exceed the [limit](https://kubernetes.io/docs/concepts/storage/storage-limits/#kubernetes-default-limits).
+
+To resolve the issue, we recommend using another VM size that supports more disks for the node.
+
+Additionally, make sure that the number of disks per node does not exceed the [Kubernetes default limits](https://kubernetes.io/docs/concepts/storage/storage-limits/#kubernetes-default-limits).
 
 ## More information
 
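The solution in the diff above hinges on each VM size's data-disk ceiling. As a rough, self-contained sketch of that selection step (the VM sizes and disk counts below are sample values embedded inline, not live output; in a real cluster they would come from `az vm list-sizes --location <region> --query "[].{name:name, disks:maxDataDiskCount}" -o tsv`), this shell snippet filters for sizes whose `maxDataDiskCount` exceeds the disks currently needed:

```shell
#!/bin/sh
# Number of data disks the workload currently needs (illustrative threshold).
required_disks=16

# Sample "name maxDataDiskCount" pairs standing in for az vm list-sizes output.
vm_sizes="Standard_D2s_v3 4
Standard_D4s_v3 8
Standard_D8s_v3 16
Standard_D16s_v3 32"

# Keep only sizes that support strictly more data disks than required.
printf '%s\n' "$vm_sizes" | awk -v need="$required_disks" '$2 > need { print $1 " (" $2 " data disks)" }'
```

A node pool with a larger size can then be added to the cluster; the resource names here are placeholders: `az aks nodepool add --resource-group <rg> --cluster-name <cluster> --name <newpool> --node-vm-size Standard_D16s_v3`.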
