articles/storage/files/smb-performance.md
10 additions & 14 deletions
@@ -4,7 +4,7 @@ description: Learn about ways to improve performance and throughput for SSD (pre
 author: khdownie
 ms.service: azure-file-storage
 ms.topic: concept-article
-ms.date: 01/06/2026
+ms.date: 01/15/2026
 ms.author: kendownie
 ms.custom:
   - build-2025
@@ -26,7 +26,7 @@ The following tips might help you optimize performance:
 - Use multi-threaded applications and spread the load across multiple files.
 - Performance benefits of SMB Multichannel increase with the number of files distributing the load.
 - SSD share performance is bound by provisioned share size, including IOPS and throughput, and single file limits. For details, see [understanding the provisioning v1 model](understanding-billing.md#provisioned-v1-model).
-- Maximum performance of a single virtual machine (VM) client is still bound to VM limits. For example, [Standard_D32s_v3](/azure/virtual-machines/dv3-dsv3-series) supports a maximum bandwidth of approximately 1.86 GiB/sec. Egress from the VM (writes to storage) is metered, but ingress (reads from storage) isn't. File share performance is subject to machine network limits, CPUs, internal storage available network bandwidth, IO sizes, parallelism, and other factors.
+- Maximum performance of a single virtual machine (VM) client is still bound to VM limits. For example, [Standard_D32s_v3](/azure/virtual-machines/dv3-dsv3-series) supports a maximum bandwidth of approximately 1.86 GiB/sec. Ingress (writes to storage) is metered, but egress (reads from storage) isn't. File share performance is subject to machine network limits, CPUs, internal storage available network bandwidth, IO sizes, parallelism, and other factors.
 - The initial test is usually a warm-up. Discard the results and repeat the test.
 - If performance is limited by a single client and workload is still below provisioned share limits, you can achieve higher performance by spreading the load over multiple clients.
@@ -57,7 +57,7 @@ These clients must be running the appropriate kernel stack and CIFS utilities th
 
 The following are prerequisites to use SMB Multichannel with Linux.
 
-- Kernel with SMB multichannel support enabled (typically 6.8+ and up with *max_channels=4* mount flags)
+- Kernel with SMB multichannel support enabled (see [Linux SMB Multichannel support](#linux-smb-multichannel-support))
 - SMB 3.1.1
 - Port 445/TCP open between client and Azure Files endpoint
 - Ensure client side receive-side scaling (RSS) is enabled for multi-queue support
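The prerequisites above can be exercised with a mount command along these lines. This is a sketch, not taken from the article: the storage account name, share name, mount point, and credentials file are placeholders, and `max_channels=4` assumes a kernel whose in-kernel CIFS client has SMB multichannel support enabled.

```shell
# Hypothetical example: mount an SSD Azure file share over SMB 3.1.1
# with up to four channels. <storage-account> and <share> are
# placeholders; max_channels requires kernel multichannel support.
sudo mount -t cifs //<storage-account>.file.core.windows.net/<share> /mnt/<share> \
    -o vers=3.1.1,credentials=/etc/smbcredentials/<storage-account>.cred,serverino,nosharesock,max_channels=4
```

`nosharesock` keeps this mount from piggybacking on an existing TCP connection, which is what allows additional channels to be established.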
@@ -83,7 +83,7 @@ SMB Multichannel enables clients to use multiple network connections that provid
 - **Network fault tolerance**:
 Multiple connections mitigate the risk of disruption since clients no longer rely on an individual connection.
 - **Automatic configuration**:
-When SMB Multichannel is enabled on clients and storage accounts, it allows for dynamic discovery of existing connections, and can create addition connection paths as necessary.
+When SMB Multichannel is enabled on clients and storage accounts, it allows for dynamic discovery of existing connections, and can create additional connection paths as necessary.
 - **Cost optimization**:
 Workloads can achieve higher scale from a single VM, or a small set of VMs, while connecting to SSD file shares. This could reduce the total cost of ownership by reducing the number of VMs necessary to run and manage a workload.
 - **Linux client performance scaling**:
@@ -101,7 +101,7 @@ This feature provides greater performance benefits to multi-threaded application
 
 SMB Multichannel for Azure file shares currently has the following restrictions:
 
-- Only available for SSD file shares. Not available for HDD Azure file shares.
+- Only available for SSD file shares. Not available for HDD file shares.
 - Only supported on clients that are using SMB 3.1.1. Ensure SMB client operating systems are patched to recommended levels.
 - Maximum number of channels is four. For details, see [here](/troubleshoot/azure/azure-storage/files-troubleshoot-performance?toc=/azure/storage/files/toc.json#cause-4-number-of-smb-channels-exceeds-four).
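To see how the four-channel limit plays out in practice, one way to inspect the channels a Linux cifs mount actually negotiated is the kernel client's debug interface. This is a sketch under the assumption that the share is already mounted via the in-kernel CIFS client; the `/proc` path is the standard one for that client.

```shell
# Hypothetical check: DebugData lists each SMB session and its
# channels when multichannel is active. Against Azure Files the
# channel count should not exceed four.
grep -i -A 2 channel /proc/fs/cifs/DebugData
```

On Windows, the `Get-SmbMultichannelConnection` PowerShell cmdlet reports the equivalent per-connection information.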
@@ -157,7 +157,7 @@ Load was generated against 10 files with various IO sizes. The scale up test res
 
 - On a single NIC, for reads, performance increase of 2x-3x was observed and for writes, gains of 3x-4x in terms of both IOPS and throughput.
 - SMB Multichannel allowed IOPS and throughput to reach VM limits even with a single NIC and the four channel limit.
-- Because egress (or reads to storage) isn't metered, read throughput was able to exceed the VM published limit of approximately 1.86 GiB / sec. The test achieved greater than 2.7 GiB / sec. Ingress (or writes to storage) are still subject to VM limits.
+- Because egress (reads from storage) isn't metered, read throughput was able to exceed the VM's published limit of approximately 1.86 GiB/sec, achieving greater than 2.7 GiB/sec. Ingress (writes to storage) remains subject to VM throughput limits.
 - Spreading load over multiple files allowed for substantial improvements.
 
 An example command used in this testing is:
@@ -178,13 +178,9 @@ The load was generated against a single 128 GiB file. With SMB Multichannel enab
 
 ## Metadata caching for SSD file shares
 
-Metadata caching is an enhancement for SSD Azure file shares that reduces metadata latency and raises metadata scale limits. The feature increases latency consistency and available IOPS, and it boosts network throughput.
+Metadata caching is an enhancement for SSD Azure file shares that reduces metadata latency and raises metadata scale limits. The feature increases latency consistency and available IOPS, and it boosts network throughput. Both Windows and Linux clients can use it.
 
-This feature improves the performance of the following metadata APIs. Both Windows and Linux clients can use it:
-- Raised metadata scale limits
-- Increase latency consistency, available IOPS, and boost network throughput
-
-This feature improves the following metadata APIs and can be used from both Windows and Linux clients:
+This feature improves the performance of the following metadata APIs: