support/windows-server/backup-and-storage/windows-server-mpio-troubleshooting.md (+11 −11 lines changed)
@@ -97,9 +97,9 @@ Use this checklist for systematic troubleshooting.
> [!NOTE]
>
> - In these commands, \<Resource_Name> is the name of the disk resource.
-> - The value of the PendingTimeout property is measured in milliseconds. The value shown here is higher than the default value.
+> - The value of the PendingTimeout property is measured in milliseconds. The value shown here's higher than the default value.

-- For stale ClusterStorage folders, stop cluster service, take ownership and permissions with `takeown` and `icacls`, then delete with `rmdir`.
+- For stale ClusterStorage folders, stop the cluster service, use `takeown` to take ownership and `icacls` to reset permissions, and then use `rmdir` to delete the folder.
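The stale-folder cleanup in the bullet above can be sketched as follows. This is a hedged example, not the article's exact procedure; the `C:\ClusterStorage\Volume1` path is a placeholder for the stale mount point.

```console
net stop clussvc
takeown /F C:\ClusterStorage\Volume1 /R /D Y
icacls C:\ClusterStorage\Volume1 /grant Administrators:F /T
rmdir /S /Q C:\ClusterStorage\Volume1
net start clussvc
```

Stopping the cluster service first prevents the cluster from re-creating or locking the mount point while you delete it.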
### Check the state of the MPIO paths
@@ -153,7 +153,7 @@ The following sections describe the most common issues, and how to fix them.
If you experience this issue, make sure that your computers are up to date. [October 23, 2025—KB5070884 (OS Build 20348.4297)](https://support.microsoft.com/en-us/topic/october-23-2025-kb5070884-os-build-20348-4297-out-of-band-9c001fdc-f0d2-4636-87bb-494a59da55d0) and subsequent updates contain a fix for this issue.

> [!NOTE]
-> After you install this update, VMs that have been backed up by using a host-level backup application might not be able to start. To fix this issue, delete any .rct and .mrt files that are associated with the affected virtual hard disks. Then try again to start the VMs. If the issue persists, contact Microsoft Support.
+> After you install this update, VMs that were previously backed up by using a host-level backup application might not be able to start. To fix this issue, delete any .rct and .mrt files that are associated with the affected virtual hard disks. Then try again to start the VMs. If the issue persists, contact Microsoft Support.
### Some cluster disk resources remain in "Online Pending" state
> - In these commands, \<Resource_Name> is the name of the disk resource.
-> - The value of the PendingTimeout property is measured in milliseconds. The value shown here is higher than the default value.
+> - The value of the PendingTimeout property is measured in milliseconds. The value shown here's higher than the default value.
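As a sketch of how the PendingTimeout property can be inspected and raised, the following commands use the FailoverClusters module. The resource name "Cluster Disk 1" and the 240000 ms value are illustrative, not taken from this article; 180000 ms (3 minutes) is the usual default.

```powershell
# Inspect the current PendingTimeout (in milliseconds) for a disk resource
(Get-ClusterResource -Name "Cluster Disk 1").PendingTimeout

# Raise it above the 180000 ms default; 240000 is an illustrative value
(Get-ClusterResource -Name "Cluster Disk 1").PendingTimeout = 240000
```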
### Cluster disk resources are slow to come online
@@ -181,7 +181,7 @@ To fix this issue, follow these steps:
### After maintenance or a restart, administrative tools don't show disks or paths

-The Disk Management or Failover Cluster Manager tools don't show all of the disks or paths, and the **Discover** option is unavailable. If you run `mpclaim -s -d` at a Windows command prompt, some LUNs might be missing. You might also observer Event ID 46, Event ID 129, or Event ID 153.
+The Disk Management or Failover Cluster Manager tools don't show all of the disks or paths, and the **Discover** option is unavailable. If you run `mpclaim -s -d` at a Windows command prompt, some LUNs might be missing. You might also observe Event ID 46, Event ID 129, or Event ID 153.

To fix this issue, follow these steps:
@@ -211,7 +211,7 @@ To fix this issue, follow these steps:
### MPIO is in a degraded state, slow or unresponsive during failover

-During a failover, you observe delays that might exceed thirty seconds. You also observe IO error messages, and messages such as "MPIO is in a degraded state." You might observe the following events:
+During a failover, you observe delays that might exceed 30 seconds. You also observe IO error messages, and messages such as "MPIO is in a degraded state." You might observe the following events:

- Event ID 46
- Event ID 129
@@ -221,7 +221,7 @@ During a failover, you observe delays that might exceed thirty seconds. You also
To fix this issue, follow these steps:

1. Update all storage, HBA, and multipath drivers and firmware.
-1. To set recommended load-balancing policy and failover parameters, run the following cmdlets at a Powershell command prompt:
+1. To set recommended load-balancing policy and failover parameters, run the following cmdlets at a PowerShell command prompt:
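The cmdlets for this step are truncated in the diff. A hedged sketch using the MPIO module follows; the policy and timer values are illustrative, not the article's recommended parameters, and should be tuned per vendor guidance.

```powershell
# Set the global default load-balancing policy to Round Robin (RR)
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR

# Illustrative failover-related MPIO settings
Set-MPIOSetting -NewPathVerificationState Enabled -NewPDORemovePeriod 30 -NewDiskTimeout 60
```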
-1. Check for security or antivirus scans on the storage volumes (you might have to temporarily exclude the volumes or temporarily disable scans to test the effect on performance).
+1. Check for security or antivirus scans on the storage volumes. You might have to temporarily exclude the volumes or temporarily disable scans to test the effect on performance.
1. Make sure that Windows Server is up to date, and update drivers and firmware.
1. Use 64K allocation units for data volumes.
1. If possible, distribute disks across multiple controllers.
@@ -286,7 +286,7 @@ After you increase disk or LUN capacity, Disk Management or Failover Cluster Man
To fix this issue, follow these steps:

1. Take the affected cluster role offline, then bring it online again (or move the role to another node).
-1. To run maintenance prcesses that rescan and extend the file system run the following commands at a Windows command prompt on the node that owns the role:
+1. To run maintenance processes that rescan and extend the file system, run the following commands at a Windows command prompt on the node that owns the role:

```console
diskpart
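rem The rest of this code block is truncated in the diff. A hypothetical
rem continuation that rescans and extends (the volume number is a placeholder):
rescan
list volume
select volume 1
extend
exit
```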
@@ -303,7 +303,7 @@ After you change cabling, zoning, or storage configuration, the computer that ma
To fix this issue, follow these steps:

-1. At a PowerShell command prompt on the affected computer, run the followng commands:
+1. At a PowerShell command prompt on the affected computer, run the following commands:

```powershell
Update-HostStorageCache
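# The remaining commands are truncated in the diff. Hypothetical follow-up
# checks (not from the original article) to confirm disks and paths are visible:
Get-Disk | Format-Table -Property Number, FriendlyName, OperationalStatus
mpclaim -s -d
```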
@@ -360,7 +360,7 @@ If these procedures don't resolve your issue, contact Microsoft Support. Use the
- Cluster validation reports
- **Registry Editor:** Audit the permissions under `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Worker`
- **BIOS/UEFI:** Export settings, or create screenshots of them
-- **Minidump files** If you observe a stop or bugcheck error (also known as a bluescreen error), retrieve these files
+- **Minidump files:** If you observe a stop or bug check error (also known as a bluescreen error), retrieve these files
- **Network trace logs:** If you observe connectivity issues, collect network traces
- **Exported VM configuration files:** If you're troubleshooting import or export issues, export configuration files for the affected VMs
- **Driver versions:** Note the current versions that your storage components, network components, and security agents use