Commit 50494b5

Merge pull request #312314 from eshanchomsft/docs-editor/esanperfonavs-1772062927
Create article Elastic SAN Datastore Performance on Azure VMware Solutions
2 parents 02e3b42 + 92d90dc commit 50494b5

2 files changed

Lines changed: 185 additions & 0 deletions

File tree

articles/storage/elastic-san/TOC.yml

Lines changed: 3 additions & 0 deletions

@@ -20,6 +20,9 @@ items:
     href: elastic-san-best-practices.md
   - name: Performance
     href: elastic-san-performance.md
+  - name: Elastic SAN Datastore Performance on Azure VMware Solutions
+    href: elastic-san-performance-on-azure-vmware-solutions.md
+    displayname: Performance, Azure VMware Solutions, Azure Elastic SAN
   - name: Clustered applications
     href: elastic-san-shared-volumes.md
   - name: Encryption
Lines changed: 182 additions & 0 deletions

@@ -0,0 +1,182 @@
---
title: Elastic SAN datastore performance on Azure VMware Solution
description: Benchmark results and guidance for Azure Elastic SAN datastores used with Azure VMware Solution, including IOPS- and throughput-intensive workloads.
author: eshanchomsft
ms.author: rogarana
ms.topic: concept-article
ms.service: azure-elastic-san-storage
ms.date: 02/26/2026
---

# Elastic SAN datastore performance on Azure VMware Solution

This article describes the performance characteristics of Azure Elastic SAN datastores when used with Azure VMware Solution. It presents benchmark results for common workload patterns and provides enough configuration and test detail to help you compare these results with your own environment's results.

The results in this article are intended as **reference only**, not as guaranteed performance targets. Actual performance varies depending on workload characteristics, VM configuration, and Elastic SAN provisioning.

## Workload categories

This article covers two common storage workload categories:

- **I/O‑intensive workloads**
  - Transactional or metadata‑driven workloads are common examples of I/O‑intensive workloads. These workloads have small, random I/O patterns that are typically read‑heavy.

- **Throughput‑intensive workloads**
  - Backup, scanning, analytics, and read‑ahead workloads are common examples of throughput‑intensive workloads. These workloads generate large, sequential I/O patterns.

All tests use a single Elastic SAN–backed AVS datastore, sized and configured as described in the following sections.

## Benchmark environment

### Azure VMware Solution configuration

This article uses the following Azure VMware Solution environment:

- Private cloud generation: **Gen 2**
- ESXi hosts: **Three AV64 hosts**
- Guest virtual machines: **Windows and Linux**
- Operating systems:
  - Windows Server 2022
  - Ubuntu 24.04
- VM size: **32 vCPU, 256 GB RAM**
- VM disk configuration:
  - Disk sizes: **1 TiB or 500 GiB**
  - Provisioning: **Eager‑zeroed thick**

### Azure Elastic SAN configuration

This article uses the following Elastic SAN environment:

- Deployed in the **same region and availability zone** as the AVS private cloud
- Base capacity provisioned: **100 TiB**
- Datastore backing volume size: **20 TiB**
- Maximum supported performance of the backing volume:
  - **80,000 IOPS**
  - **1,280 MBps**
- Private endpoints: **8**

The Elastic SAN used also follows all the best practices outlined in [Optimize the performance of your Elastic SAN](elastic-san-best-practices.md).
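
The SAN‑level budget implied by the base capacity can be sketched with simple arithmetic. This assumes the documented Elastic SAN scaling of 5,000 IOPS and 200 MBps per TiB of provisioned base capacity; verify those figures against the current Elastic SAN scale targets before relying on them.

```shell
# Hedged sketch: derive SAN-level limits from base capacity.
# Assumption: 5,000 IOPS and 200 MBps per TiB of base capacity --
# check the current Elastic SAN scale targets before relying on these.
BASE_TIB=100                     # base capacity provisioned above
SAN_IOPS=$((BASE_TIB * 5000))    # SAN-wide IOPS budget
SAN_MBPS=$((BASE_TIB * 200))     # SAN-wide throughput budget (MBps)
echo "SAN budget: ${SAN_IOPS} IOPS, ${SAN_MBPS} MBps"
```

Under that assumption, at 100 TiB the SAN‑wide budget far exceeds what a single volume can use, so the benchmarks in this article are bounded by the per‑volume caps rather than the SAN itself.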

## Benchmark tools

The benchmarks use industry‑standard storage testing tools: [DiskSPD](https://github.com/microsoft/diskspd) for Windows environments and [Fio](https://github.com/axboe/fio) for Linux environments.
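
On the Linux side, fio is available from the distribution repositories. A minimal install sketch for the Ubuntu 24.04 guests used here (it assumes apt and sudo access; DiskSPD for Windows is downloaded from its GitHub releases page instead):

```shell
# Install fio on the Ubuntu guest (assumes apt and sudo access).
sudo apt-get update && sudo apt-get install -y fio
fio --version   # confirm the install succeeded
```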

## Perform the benchmark tests

This section provides the example commands used to generate the benchmark results shown later in this article. The examples include both **I/O‑intensive** and **throughput‑intensive** scenarios for Windows and Linux. For each workload scenario, the benchmarks run on one or more guest VMs connected to the same Elastic SAN datastore.

### I/O‑intensive workload benchmark

#### Windows (DiskSPD)

Each guest VM runs the following command independently. All VMs run concurrently against the same datastore.

```powershell
diskspd.exe -b4K -d900 -Sh -L -o32 -t3 -r -w25 -Z1G -c20G G:\Testdata\IO.dat
```

Key parameters:

- `-b4K` – 4‑KB I/O size
- `-r -w25` – Random I/O with a 75% read / 25% write mix
- `-t3` – Three threads per VM
- `-o32` – Queue depth of 32 per thread
- `-d900` – 15‑minute steady‑state runtime
- `-c20G` – Per‑VM test file size (20 GB)
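
The queue‑depth figure reported in the results that follow comes straight from these parameters: three threads each keeping 32 I/Os outstanding. A small sketch of that arithmetic, using the four‑VM Windows scenario:

```shell
# Outstanding-I/O arithmetic for -t3 -o32 across 4 concurrent VMs.
THREADS=3                              # -t3
QD_PER_THREAD=32                       # -o32
VMS=4                                  # concurrent guest VMs
PER_VM=$((THREADS * QD_PER_THREAD))    # queue depth per VM
TOTAL=$((PER_VM * VMS))                # aggregate outstanding I/Os
echo "per-VM=${PER_VM} aggregate=${TOTAL}"
```

Per VM this is 96 outstanding I/Os, which is the queue‑depth column shown in the results tables; across four VMs, 384 I/Os are in flight against the datastore.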

##### Results

In this scenario, multiple guest VMs run concurrently against the same Elastic SAN–backed datastore. Reported results reflect aggregate datastore‑level performance across all participating VMs.

| Number of Guest VMs | I/O Pattern | I/O Size | Threads per VM | Queue Depth per VM | IOPS Achieved | MBps Achieved |
|---------------:|---------------------------|----------|---------------:|------------:|--------------:|--------------:|
| 4 | Random (Read/Write 75/25) | 4 KB | 3 | 96 | 100,000 | 414 |
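
As a consistency check, at a fixed block size throughput follows directly from IOPS. A quick sketch, assuming the tools report decimal megabytes per second:

```shell
# Cross-check IOPS against MBps at a 4-KiB block size.
IOPS=100000        # aggregate IOPS from the table above
BLOCK_BYTES=4096   # -b4K
MBPS=$((IOPS * BLOCK_BYTES / 1000000))   # decimal MBps
echo "${MBPS} MBps"   # ~410 MBps, in line with the measured 414 MBps
```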

#### Linux (fio)

Each guest VM runs the following command independently. All VMs run concurrently against the same datastore.

```shell
fio --name=randrw \
    --rw=randrw \
    --rwmixread=75 \
    --bs=4k \
    --iodepth=32 \
    --numjobs=3 \
    --time_based \
    --runtime=900 \
    --direct=1 \
    --ioengine=libaio \
    --group_reporting \
    --filename=/mnt/esan/testfile
```

Key parameters:

- `bs=4k` – 4‑KB I/O size
- `rw=randrw`, `rwmixread=75` – Random I/O with a 75% read / 25% write mix
- `numjobs=3` – Three jobs per VM
- `iodepth=32` – 32 outstanding I/Os per job
- `runtime=900` – 15‑minute steady‑state runtime
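
The fio command above targets `/mnt/esan/testfile`, so the guest needs a filesystem mounted at that path on the datastore‑backed disk. A hedged preparation sketch; the device name `/dev/sdb` is an assumption, so confirm the actual device with `lsblk` first, and note that these commands destroy any data on the device:

```shell
# Assumption: the test disk is /dev/sdb -- verify with `lsblk` first.
sudo mkfs.ext4 /dev/sdb        # destructive: formats the disk
sudo mkdir -p /mnt/esan
sudo mount /dev/sdb /mnt/esan
```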

##### Results

In this scenario, multiple guest VMs run concurrently against the same Elastic SAN–backed datastore. Reported results reflect aggregate datastore‑level performance across all participating VMs.

| Number of Guest VMs | I/O Pattern | I/O Size | Threads per VM | Queue Depth per VM | IOPS Achieved | MBps Achieved |
|---------------:|---------------------------|----------|---------------:|------------:|--------------:|--------------:|
| 6 | Random (Read/Write 75/25) | 4 KB | 3 | 96 | 85,000 | 356 |

### Throughput‑intensive workload benchmark

Throughput‑intensive workloads are represented by large, sequential I/O patterns typical of backup, scan, and read‑ahead workloads.

#### Windows (DiskSPD)

```powershell
diskspd.exe -b1M -d900 -Sh -L -o32 -t3 -si -w0 -c200G G:\Testdata\BackupIO.dat
```

Key parameters:

- `-b1M` – 1‑MB I/O size
- `-si -w0` – Sequential, read‑only I/O
- `-t3` – Three threads per VM
- `-o32` – Queue depth of 32 per thread
- `-d900` – 15‑minute steady‑state runtime
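
These settings also determine how much data each VM keeps in flight, which is what drives sequential throughput. A quick sketch of that arithmetic:

```shell
# Bytes in flight implied by -t3 -o32 -b1M (per VM).
THREADS=3; QD=32; BLOCK_MB=1
INFLIGHT_MB=$((THREADS * QD * BLOCK_MB))
echo "${INFLIGHT_MB} MB outstanding per VM"   # 96 MB in flight
```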

##### Results

In this scenario, a single guest VM runs the benchmark against the Elastic SAN–backed datastore.

| Number of Guest VMs | I/O Pattern | I/O Size | Threads per VM | Queue Depth per VM | IOPS Achieved | MBps Achieved |
|---------------:|---------------------------|----------|---------------:|------------:|--------------:|--------------:|
| 1 | Sequential (Read 100%) | 1 MB | 3 | 96 | 12,790 | 1,648 |

#### Linux (fio)

```shell
fio --name=readseq \
    --rw=read \
    --bs=1M \
    --iodepth=32 \
    --numjobs=3 \
    --time_based \
    --runtime=900 \
    --direct=1 \
    --ioengine=libaio \
    --group_reporting \
    --filename=/mnt/esan/testfile
```

##### Results

In this scenario, a single guest VM runs the benchmark against the Elastic SAN–backed datastore.

| Number of Guest VMs | I/O Pattern | I/O Size | Threads per VM | Queue Depth per VM | IOPS Achieved | MBps Achieved |
|---------------:|---------------------------|----------|---------------:|------------:|--------------:|--------------:|
| 1 | Sequential (Read 100%) | 1 MB | 3 | 96 | 13,000 | 1,519 |

## Next steps

- Review [Elastic SAN best practices](/azure/storage/elastic-san/elastic-san-best-practices)
- [Resize your Elastic SAN's base capacity to meet IOPS and throughput requirements](/azure/storage/elastic-san/elastic-san-expand)
- [Connect Azure VMware Solution to Azure Elastic SAN](/azure/azure-vmware/configure-azure-elastic-san)
