File: support/windows-server/backup-and-storage/delete-s2d-storage-pool-reuse-disks.md
---
title: Delete a Storage Spaces Direct Storage Pool and Reset the Physical Disks
description: Explains how to gracefully delete an S2D storage pool so that you can reuse the disks elsewhere.
ms.date: 01/07/2026
manager: dcscontentpm
audience: itpro
appliesto:
- ✅ <a href=https://learn.microsoft.com/windows/release-health/windows-server-release-info target=_blank>Supported versions of Windows Server</a>
---

# Delete a Storage Spaces Direct storage pool and reset the physical disks

This article uses an example Storage Spaces Direct (S2D) deployment to explain how to gracefully delete an S2D storage pool. This process cleans S2D information from the disks that the storage pool aggregates so that you can reuse the disks elsewhere. If you use a different approach to remove disks from a storage pool, both the disks and the storage pool might enter an unusable state. For more information about these issues and related events, see [More information](#more-information).

This example uses the following steps to completely remove the S2D configuration and prepare the disks for reuse:

1. [Review the current S2D configuration](#step-1-review-the-current-s2d-configuration).
1. [Remove the virtual disks from the storage pool](#step-2-remove-the-virtual-disks-from-the-storage-pool).
1. [Verify that everything is removed](#step-5-verify-that-everything-is-removed).
1. [Clean up the physical disks](#step-6-clean-up-the-physical-disks).
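
As a hedged sketch only, not the exact commands from each step, the overall teardown that these steps describe typically uses the standard Storage Spaces and failover clustering cmdlets (`Remove-VirtualDisk`, `Disable-ClusterStorageSpacesDirect`, and `Remove-StoragePool`). The pool name here comes from this article's example configuration; substitute your own names.

```powershell
# Sketch of the teardown sequence; run in an elevated PowerShell session on a cluster node.
# Assumes the example pool name "S2D on S2DclusterNew".

# Remove the virtual disks that the pool hosts.
Get-VirtualDisk | Remove-VirtualDisk -Confirm:$false

# Disable S2D, then remove the storage pool itself.
Disable-ClusterStorageSpacesDirect -Confirm:$false
Get-StoragePool -FriendlyName "S2D on S2DclusterNew" | Remove-StoragePool -Confirm:$false

# Verify that no virtual disks or nonprimordial pools remain.
Get-VirtualDisk
Get-StoragePool -IsPrimordial $false
```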

The current Windows Server Failover Cluster (WSFC) configuration isn't changed. These steps modify only the S2D configuration.
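
To confirm that the WSFC side is untouched, you can capture the cluster state before you start and compare it afterward. A minimal check, assuming the FailoverClusters module cmdlets, might look like this:

```powershell
# The cluster identity and its node list should be identical
# before and after the S2D teardown.
Get-Cluster | Format-List Name, Domain
Get-ClusterNode | Format-Table Name, State
```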
The example in this section uses the following configuration:

- **S2D on S2DclusterNew**: S2D storage pool that includes two virtual disks that host volumes (ClusterPerformanceHistory and userdata01)
- Cluster configuration:
  - **S2D-1.contoso.com**: Cluster node 1 that has three attached disks (1002, 1003, and 1004)
  - **S2D-2.contoso.com**: Cluster node 2 that has three attached disks (2002, 2003, and 2004)
  - **Disk 0**: Operating system disk
  - **Disk 1**: Temporary disk
> [!IMPORTANT]
> This example was created and tested in an Azure test lab environment. Therefore, each of the disks in the earlier list appears in the command output examples as "Msft Virtual Disk" even though the example treats them as physical disks.

To use the procedures from this example, make sure that the following permissions are set:

- You have access to PowerShell in administrator mode. The following steps require you to run Windows PowerShell commands in an administrative PowerShell Command Prompt window.
- You have the necessary permissions to run the commands on the target system.
## Step 1: Review the current S2D configuration

1. To see the properties of the S2D storage pool, open an administrative PowerShell Command Prompt window. Then, run the following command:

   ```powershell
   C:\Users\SQLVMADMIN> Get-StoragePool
   ```

   The output resembles the following example:

   ```output
   S2D on S2DclusterNew OK Healthy False False 377.99 GB 44.5 GB
   ```

> [!NOTE]
>
> - If you don't specify a node when you run `Get-PhysicalDisk` on any node within an S2D cluster, the output includes all physical disks across all nodes in the cluster. This behavior is by design. Each node maintains awareness of the entire pool's disk inventory, not only the disks that are physically attached to it.
> - In the command output, note that the **CanPool** property of the 64-GB disks (1002-1004 and 2002-2004) is **False**. This value means that the disks already belong to a storage pool.

```output
Number FriendlyName SerialNumber MediaType CanPool OperationalStatus HealthStatus Usage Size
```
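
If you do want to see only the disks that are attached to one node, you can filter by storage node instead of querying the whole cluster. The following is a sketch that uses this article's example node name; substitute your own:

```powershell
# List only the disks that are physically connected to node S2D-1.
Get-StorageNode -Name "S2D-1*" |
    Get-PhysicalDisk -PhysicallyConnected |
    Format-Table DeviceId, FriendlyName, CanPool, HealthStatus, Size
```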

## Step 6: Clean up the physical disks

To clean up the physical disks, use the script that's provided in the Step 3.1 section of the referenced article.

As described in the referenced article, substitute your cluster node or server names for the variables in the script. Run the script at a PowerShell command prompt.

In this example, we run the script on each cluster node separately. On S2D-1, the script generates output that resembles the following example.

:::image type="content" source="media/delete-s2d-storage-pool-reuse-disks/cleanup-script-node-s2d-1-output.png" alt-text="Script output from node S2D-1 that shows the clean-up process for physical disks." lightbox="media/delete-s2d-storage-pool-reuse-disks/cleanup-script-node-s2d-1-output-expanded.png":::

On S2D-2, the script generates output that resembles the following example.

:::image type="content" source="media/delete-s2d-storage-pool-reuse-disks/cleanup-script-node-s2d-2-output.png" alt-text="Script output from node S2D-2 that shows the clean-up process for physical disks." lightbox="media/delete-s2d-storage-pool-reuse-disks/cleanup-script-node-s2d-2-output-expanded.png":::

After the script finishes, verify the disk status by running `Get-PhysicalDisk` again.

The output of this command resembles the following example. Note that the **CanPool** property of the 64-GB disks is now **True**:

```output
Number FriendlyName SerialNumber MediaType CanPool OperationalStatus HealthStatus Usage Size
0 Msft Virtual Disk Unspecified False OK Healthy Auto-Select 127 GB
1 Msft Virtual Disk Unspecified False OK Healthy Auto-Select 32 GB
1004 Msft Virtual Disk HDD True OK Healthy Auto-Select 64 GB
```
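
If one disk still reports **CanPool** as **False** after the script runs, a manual reset of just that disk can look like the following sketch. Disk 1004 is from this article's example; `Reset-PhysicalDisk` and `Clear-Disk` are standard Storage module cmdlets.

```powershell
# Reset a single leftover disk and wipe any remaining pool metadata.
# WARNING: this destroys all data on the disk.
Get-PhysicalDisk | Where-Object DeviceId -EQ 1004 | Reset-PhysicalDisk
Clear-Disk -Number 1004 -RemoveData -RemoveOEM -Confirm:$false
```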

## More information

If you remove a physical disk from the storage pool infrastructure without using the process that's detailed in this article, both the storage pool and the disk become unusable. For example, if you remove a disk from one node, the System log of that node generates Event ID 157.

The storage pool is unhealthy and in a degraded state.
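
To check a node for this condition, you can query the System log for Event ID 157. The following is a diagnostic sketch:

```powershell
# Look for recent disk surprise-removal events (ID 157) in the System log.
Get-WinEvent -FilterHashtable @{ LogName = 'System'; Id = 157 } -MaxEvents 10 |
    Format-Table TimeCreated, Message -AutoSize
```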

If you connect the disk to a new server, the disk might appear to be healthy, but it remains in an unusable state. For example, if you view the disk in Disk Manager, the disk appears to be inaccessible.

:::image type="content" source="media/delete-s2d-storage-pool-reuse-disks/diskmgr-error-detail.png" alt-text="View in Disk Manager that shows a disk that was improperly moved from a storage pool to a separate server." lightbox="media/delete-s2d-storage-pool-reuse-disks/diskmgr-error-detail-expanded.png":::
According to Disk Manager, the disk still belongs to the storage pool.