Commit 96cf046

Fix typos and improve clarity in documentation
Edit review per CI 5103
1 parent e0f8552 commit 96cf046

1 file changed

Lines changed: 21 additions & 21 deletions

support/azure/azure-kubernetes/storage/fail-to-mount-azure-disk-volume.md
@@ -1,5 +1,5 @@
 ---
-title: Unable to mount Azure disk volumes
+title: Unable to Mount Azure Disk Volumes
 description: Describes errors that occur when mounting Azure disk volumes fails, and provides solutions.
 ms.date: 03/22/2025
 author: genlin
@@ -14,9 +14,9 @@ This article provides solutions for errors that cause the mounting of Azure disk

 ## Symptoms

-You're trying to deploy a Kubernetes resource such as a Deployment or a StatefulSet, in an Azure Kubernetes Service (AKS) environment. The deployment will create a pod that should mount a PersistentVolumeClaim (PVC) referencing an Azure disk.
+You're trying to deploy a Kubernetes resource, such as a Deployment or a StatefulSet, in an Azure Kubernetes Service (AKS) environment. The deployment creates a pod that should mount a PersistentVolumeClaim (PVC) that references an Azure disk.

-However, the pod stays in the **ContainerCreating** status. When you run the `kubectl describe pods` command, you may see one of the following errors, which causes the mounting operation to fail:
+However, the pod stays in the **ContainerCreating** status. When you run the `kubectl describe pods` command, you may see one of the following errors that cause the mounting operation to fail:

 - [Disk cannot be attached to the VM because it is not in the same zone as the VM](#error1)
 - [Client '\<client-ID>' with object id '\<object-ID>' doesn't have authorization to perform action over scope '\<disk name>' or scope is invalid](#error2)
@@ -25,7 +25,7 @@ However, the pod stays in the **ContainerCreating** status. When you run the `ku
 - [ApplyFSGroup failed for vol](#error5)
 - [Node(s) exceed max volume count](#error6)

-See the following sections for error details, possible causes and solutions.
+See the following sections for error details, possible causes, and solutions.

 ## <a id="error1"></a>Disk cannot be attached to the VM because it is not in the same zone as the VM

@@ -48,13 +48,13 @@ RawError:

 ### Cause: Disk and node hosting pod are in different zones

-In AKS, the default and other built-in StorageClasses for Azure disks use [locally redundant storage (LRS)](/azure/storage/common/storage-redundancy#locally-redundant-storage). These disks are deployed in [availability zones](/azure/aks/availability-zones). If you use the node pool in AKS with availability zones, and the pod is scheduled on a node that's in another availability zone different from the disk, you may get this error.
+In AKS, the default and other built-in storage classes for Azure disks use [locally redundant storage (LRS)](/azure/storage/common/storage-redundancy#locally-redundant-storage). These disks are deployed in [availability zones](/azure/aks/availability-zones). If you use a node pool in AKS together with availability zones, and the pod is scheduled on a node that's in a different availability zone than the disk, you might experience this error.

-To resolve this error, use one of the following solutions:
+To resolve this error, use one of the following solutions.

 ### Solution 1: Ensure disk and node hosting the pod are in the same zone

-To make sure the disk and node that hosts the pod are in the same availability zone, use [node affinity](https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/).
+To make sure that the disk and the node that hosts the pod are in the same availability zone, use [node affinity](https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/).

 Refer to the following script as an example:

@@ -70,19 +70,19 @@ affinity:
         - <region>-Y
 ```

-\<region> is the region of the AKS cluster. `Y` represents the availability zone of the disk, for example, westeurope-3.
+\<region> is the region of the AKS cluster. `Y` represents the availability zone of the disk (for example, westeurope-3).

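For context, a fuller version of the affinity stanza shown in the hunk above might look like the following sketch. It assumes the standard `topology.kubernetes.io/zone` node label and uses `westeurope-3` purely as an illustrative zone value:

```yaml
# Sketch of the node affinity block; the zone value is illustrative.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - westeurope-3
```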
 ### Solution 2: Use zone-redundant storage (ZRS) disks

 [ZRS](/azure/storage/common/storage-redundancy#zone-redundant-storage) disk volumes can be scheduled on all zone and non-zone agent nodes. For more information, see [Azure disk availability zone support](/azure/aks/availability-zones#azure-disk-availability-zone-support).

-To use a ZRS disk, create a new storage class with `Premium_ZRS` or `StandardSSD_ZRS`, and then deploy the PersistentVolumeClaim (PVC) referencing the storage.
+To use a ZRS disk, create a storage class by using `Premium_ZRS` or `StandardSSD_ZRS`, and then deploy a PersistentVolumeClaim (PVC) that references the storage class.

 For more information about parameters, see [Driver Parameters](/azure/aks/azure-csi-files-storage-provision#storage-class-parameters-for-dynamic-persistentvolumes).

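As an illustration of that step, here is a minimal sketch of a ZRS storage class and a PVC that references it. It assumes the Azure Disk CSI driver (`disk.csi.azure.com`); the names `managed-zrs` and `pvc-zrs` are placeholders for illustration:

```yaml
# Minimal sketch: a ZRS storage class and a PVC that references it.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-zrs
provisioner: disk.csi.azure.com
parameters:
  skuName: Premium_ZRS   # or StandardSSD_ZRS
reclaimPolicy: Delete
allowVolumeExpansion: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-zrs
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-zrs
  resources:
    requests:
      storage: 10Gi
```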
 ### Solution 3: Use Azure Files

-[Azure Files](/azure/storage/files/storage-files-introduction) is mounted by using NFS or SMB throughout network and it's not associated with availability zones.
+[Azure Files](/azure/storage/files/storage-files-introduction) is mounted over the network by using NFS or SMB, and it isn't associated with availability zones.

 For more information, see the following articles:

@@ -108,11 +108,11 @@ RawError:

 ### Cause: AKS identity doesn't have required authorization over disk

-AKS cluster's identity doesn't have the required authorization over the Azure disk. This issue occurs when the disk is created in another resource group other than the infrastructure resource group of the AKS cluster.
+The AKS cluster's identity doesn't have the required authorization over the Azure disk. This issue occurs if the disk is created in a resource group other than the infrastructure resource group of the AKS cluster.

 ### Solution: Create role assignment that includes required authorization

-Create a role assignment that includes the authorization required as per the error. We recommend that you use a [Contributor](/azure/role-based-access-control/built-in-roles/general#contributor) role. If you want to use another built-in role, see [Azure built-in roles](/azure/role-based-access-control/built-in-roles).
+Create a role assignment that includes the authorization that the error requires. We recommend that you use the [Contributor](/azure/role-based-access-control/built-in-roles/general#contributor) role. If you want to use another built-in role, see [Azure built-in roles](/azure/role-based-access-control/built-in-roles).

 To assign a Contributor role, use one of the following methods:

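One of those methods, sketched with the Azure CLI. The placeholders from the error message are kept as-is, and the scope (the disk's resource group) is an assumption to adapt to your environment:

```shell
# Sketch: grant the cluster identity Contributor rights at the disk's resource group scope.
# <client-ID>, <subscription-ID>, and <disk-resource-group> are placeholders.
az role assignment create \
  --assignee "<client-ID>" \
  --role "Contributor" \
  --scope "/subscriptions/<subscription-ID>/resourceGroups/<disk-resource-group>"
```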
@@ -136,9 +136,9 @@ Here are details of this error:

 ### Cause: Disk is mounted to multiple pods hosted on different nodes

-An Azure disk can be mounted only as [ReadWriteOnce](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes), which makes it available to one node in AKS. That means it can be attached to only one node and mounted only to a pod hosted by that node. If you mount the same disk to a pod on another node, you'll get this error because the disk is already attached to a node.
+An Azure disk can be mounted only as [ReadWriteOnce](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes), which makes it available to a single node in AKS. That means that it can be attached to only one node and mounted only by a pod that's hosted on that node. If you mount the same disk to a pod on another node, you experience this error because the disk is already attached to a node.

-### Solution: Ensure disk isn't mounted by multiple pods hosted on different nodes
+### Solution: Make sure disk isn't mounted by multiple pods hosted on different nodes

 To resolve this error, refer to [Multi-Attach error](https://github.com/andyzhangx/demo/blob/master/issues/azuredisk-issues.md#25-multi-attach-error).

@@ -164,11 +164,11 @@ desc = Attach volume "/subscriptions/<subscription-ID>/resourceGroups/<disk-reso

 ### Cause: Ultra disk is attached to node pool with ultra disks disabled

-This error indicates that an [ultra disk](/azure/virtual-machines/disks-enable-ultra-ssd) is trying to be attached to a node pool with ultra disks disabled. By default, an ultra disk is disabled on AKS node pools.
+This error indicates an attempt to attach an [ultra disk](/azure/virtual-machines/disks-enable-ultra-ssd) to a node pool that has ultra disks disabled. By default, ultra disks are disabled on AKS node pools.

 ### Solution: Create a node pool that can use ultra disks

-To use ultra disks on AKS, create a node pool with ultra disks support by using the `--enable-ultra-ssd` flag. For more information, see [Use Azure ultra disks on Azure Kubernetes Service](/azure/aks/use-ultra-disks).
+To use ultra disks on AKS, create a node pool that has ultra disk support by using the `--enable-ultra-ssd` flag. For more information, see [Use Azure ultra disks on Azure Kubernetes Service](/azure/aks/use-ultra-disks).

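A sketch of that command follows. The resource group, cluster name, pool name, VM size, and zone are placeholders, not values from this article:

```shell
# Sketch: add an AKS node pool with ultra disk support enabled.
az aks nodepool add \
  --resource-group <resource-group> \
  --cluster-name <cluster-name> \
  --name ultrapool \
  --node-vm-size Standard_D2s_v3 \
  --zones 1 \
  --enable-ultra-ssd
```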
 ## <a id="error5"></a>ApplyFSGroup failed for vol

@@ -178,15 +178,15 @@ Here are details of this error:

 ### Cause: Changing ownership and permissions for large volume takes much time

-When there's a large number of files already present in the volume, if a `securityContext` with `fsGroup` is in place, this error may occur. When there are lots of files and directories under one volume, changing the group ID would consume much time. It's also mentioned in the Kubernetes official documentation [Configure volume permission and ownership change policy for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods):
+If many files are already present in the volume, and a `securityContext` that uses `fsGroup` is in place, this error might occur. If there are lots of files and directories in one volume, changing the group ID can take a long time. The official Kubernetes documentation, [Configure volume permission and ownership change policy for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods), also mentions this situation:

 "By default, Kubernetes recursively changes ownership and permissions for the contents of each volume to match the `fsGroup` specified in a Pod's `securityContext` when that volume is mounted. For large volumes, checking and changing ownership and permissions can take much time, slowing Pod startup. You can use the `fsGroupChangePolicy` field inside a `securityContext` to control the way that Kubernetes checks and manages ownership and permissions for a volume."

 ### Solution: Set fsGroupChangePolicy field to OnRootMismatch

-To resolve this error, we recommend that you set `fsGroupChangePolicy: "OnRootMismatch"` in the `securityContext` of a Deployment, a StatefulSet or a pod.
+To resolve this error, we recommend that you set `fsGroupChangePolicy: "OnRootMismatch"` in the `securityContext` of a Deployment, a StatefulSet, or a pod.

-OnRootMismatch: Only change permissions and ownership if permission and ownership of root directory doesn't match with expected permissions of the volume. This setting could help shorten the time it takes to change ownership and permission of a volume.
+`OnRootMismatch`: Change permissions and ownership only if the permission and ownership of the root directory don't match the expected permissions of the volume. This setting can help shorten the time that it takes to change ownership and permission of a volume.

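A minimal sketch of where the field goes in a pod spec; the pod name, image, and claim name are illustrative:

```yaml
# Sketch: pod-level securityContext with fsGroupChangePolicy set.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  securityContext:
    fsGroup: 2000
    fsGroupChangePolicy: "OnRootMismatch"
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: azuredisk-pvc
```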
 For more information, see [Configure volume permission and ownership change policy for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods).

@@ -202,11 +202,11 @@ Warning FailedScheduling 25s default-scheduler 0/8 nodes are available: 8 node(
 ```
 ### Cause: Maximum disk limit is reached

-The node has reached its maximum disk capacity. In AKS, the number of disks per node depends on the VM size configured for the node pool.
+The node has reached its maximum disk capacity. In AKS, the number of disks per node depends on the VM size that's configured for the node pool.

 ### Solution: Use another VM size with more disk limits

-To resolve the issue, we recommend using another VM size that supports more disks for the node.
+To resolve the issue, we recommend that you use another VM size that supports more disks for the node.

 Additionally, make sure that the number of disks per node does not exceed the [Kubernetes default limits](https://kubernetes.io/docs/concepts/storage/storage-limits/#kubernetes-default-limits).

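To see the per-node attachable-disk allowance that the scheduler enforces, a query along these lines may help. The node name is a placeholder, and the query assumes the Azure Disk CSI driver name `disk.csi.azure.com`:

```shell
# Sketch: read the volume limit the Azure Disk CSI driver reports for a node.
kubectl get csinode <node-name> \
  -o jsonpath='{.spec.drivers[?(@.name=="disk.csi.azure.com")].allocatable.count}'
```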