Commit a6ebbe5

Merge branch 'main' of https://github.com/MicrosoftDocs/azure-docs-pr into migrating-to-application-gateway-v2
2 parents cb20889 + 52b6244 commit a6ebbe5

71 files changed

Lines changed: 1482 additions & 506 deletions


articles/azure-functions/durable/durable-task-scheduler/durable-task-scheduler-auto-purge.md

Lines changed: 1 addition & 1 deletion

@@ -15,7 +15,7 @@ Autopurge operates asynchronously in the background, optimized to minimize syste
 
 ## How it works
 
-Autopurge is an opt-in feature. You can enable it by defining retention policies that control how long to keep the data of orchestrations in certain statuses. The autopurge feature purges orchestration data associated with terminal statuses. "Terminal" refers to orchestrations that have reached a final state with no further scheduling, event processing, or work item generation. Terminal statuses include:
+Autopurge is enabled by default with a 30-day policy, but you can customize it by defining retention policies that specify how long to keep orchestration data for certain statuses. Autopurge removes orchestration data in terminal statuses. A terminal status means the orchestration has reached a final state and will no longer schedule tasks, process events, or generate work items. Terminal statuses include:
 
 - `Completed`
 - `Failed`
 - `Canceled`

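The terminal-status rule in the change above can be sketched as a toy Python check. The status names and the 30-day default come from the doc text; `should_purge` and its signature are purely illustrative, not the scheduler's API:

```python
from datetime import datetime, timedelta, timezone

# Terminal statuses listed in the doc: no further scheduling, events, or work items.
TERMINAL_STATUSES = {"Completed", "Failed", "Canceled"}

def should_purge(status: str, finished_at: datetime,
                 retention: timedelta = timedelta(days=30)) -> bool:
    """Toy model: purge only terminal orchestrations older than the retention window."""
    if status not in TERMINAL_STATUSES:
        return False  # a non-terminal orchestration is never auto-purged
    return datetime.now(timezone.utc) - finished_at > retention
```

Note that a `Running` orchestration is never eligible, however old, because it can still schedule tasks or process events.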
articles/azure-netapp-files/azure-netapp-files-resource-limits.md

Lines changed: 1 addition & 2 deletions

@@ -37,10 +37,9 @@ The following table describes resource limits for the Flexible, Standard, Premiu
 | Minimum size of a single regular volume | 50 GiB | No |
 | Maximum size of a single regular volume | 100 TiB | No |
 | Minimum size of a single [large volume](large-volumes-requirements-considerations.md) | 50 TiB | No |
-| Large volume size increase | 30% of lowest provisioned size | Yes |
 | Maximum size of a single [large volume](large-volumes-requirements-considerations.md) | 1 PiB | Yes** |
 | Maximum size of a single large volume with breakthrough mode (preview) | 2,400 TiB | No |
-| Maximum size of a large volume up to 7.2 PiB** | 7.2 PiB | Yes** |
+| Maximum size of a large volume up to 7.2 PiB*** | 7.2 PiB | Yes** |
 | Maximum size of a single file | 16 TiB | No |
 | Maximum size of directory metadata in a single directory | 320 MB | No |
 | Maximum number of files in a single directory | *Approximately* 4 million. <br> See [Determine if a directory is approaching the limit size](directory-sizes-concept.md#directory-limit). | No |

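A quick unit check relates the PiB and TiB entries in the table above (binary units, so 1 PiB = 1,024 TiB):

```python
TIB_PER_PIB = 1024  # binary units: 1 PiB = 1,024 TiB

print(1 * TIB_PER_PIB)              # the 1 PiB large-volume cap, expressed in TiB
print(round(7.2 * TIB_PER_PIB, 1))  # the 7.2 PiB cap is 7372.8 TiB
```

By the same conversion, the 2,400 TiB breakthrough-mode cap is roughly 2.34 PiB.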
articles/azure-netapp-files/faq-smb.md

Lines changed: 19 additions & 0 deletions

@@ -21,6 +21,25 @@ Azure NetApp Files supports SMB 2.1 and SMB 3.1 (which includes support for SMB
 
 Yes, Windows Server 2025 domain controllers are supported as of September 9, 2025. Windows Server 2025 domain controllers must have all cumulative security updates installed, including [KB5065426](https://support.microsoft.com/en-us/topic/september-9-2025-kb5065426-update-for-windows-server-2025-os-build-26100-6584-6a59dc6a-1ff2-48f4-b375-81e93deee5dd), released on September 9, 2025. You must also enable AES encryption (AES-256) on the Active Directory connection if you plan to introduce any Windows Server 2025 domain controllers into your Active Directory environment. For more information, see [Create and Manage Active Directory connections for Azure NetApp Files](create-active-directory-connections.md).
 
+## What SMB minimum version should be configured on Windows Server 2025 domain controllers for Azure NetApp Files?
+
+For Azure NetApp Files communication with Windows Server 2025 domain controllers, set the SMB minimum dialect to SMB 3.0. If your environment requires it, SMB 2.1 can be used instead. Although Windows Server 2025 supports SMB 3.1.1, enforcing SMB 3.1.1 for this communication can break domain controller communication and prevent authentication to Azure NetApp Files SMB shares.
+
+Run one of the following commands on each Windows Server 2025 domain controller, based on your requirements:
+
+```powershell
+Set-SmbServerConfiguration -Smb2DialectMin SMB210
+```
+
+```powershell
+Set-SmbServerConfiguration -Smb2DialectMin SMB300
+```
+
+>[!NOTE]
+>This configuration must be applied individually on all Windows Server 2025 domain controllers. It doesn't replicate across the domain.
+
+As an alternative, update the Active Directory site used by Azure NetApp Files so it includes only domain controllers that aren't running Windows Server 2025.
+
 ## Does Azure NetApp Files support access to 'offline files' on SMB volumes?
 
 Azure NetApp Files supports 'manual' offline files, allowing users on Windows clients to manually select files to be cached locally.

articles/azure-netapp-files/large-volumes-requirements-considerations.md

Lines changed: 5 additions & 6 deletions

@@ -12,7 +12,7 @@ ms.author: anfdocs
 ---
 # Requirements and considerations for Azure NetApp Files large volumes
 
-Large volumes are Azure NetApp Files volumes with a size of 50 TiB to 1,024 TiB.
+Azure NetApp Files large volumes support sizes between 50 TiB and 1,024 TiB.
 
 With breakthrough mode, you can create large volumes at sizes between 2,400 GiB and 2,400 TiB. You must [request the feature](#register-for-breakthrough-mode) before using it for the first time. With cool access enabled, large volumes can scale to 7.2 PiB in certain situations; for more information, see [large volumes up to 7.2 PiB](#requirements-and-considerations-for-large-volumes-up-to-72-pib-preview).

@@ -25,7 +25,6 @@ The following requirements and considerations apply to large volumes. For perfor
 * A regular volume can’t be converted to a large volume.
 * You must create a large volume at a size of 50 TiB or larger. The maximum size of a large volume is 1,024 TiB.
 * You can't resize a large volume to less than 50 TiB.
-* A large volume can't be resized to more than 30% of its lowest provisioned size. This limit is adjustable via [a support request](azure-netapp-files-resource-limits.md#resource-limits). When requesting the resize, specify the desired size in TiB.
 * When reducing the size of a large volume, the resulting size depends on the size of files written to the volume and the snapshots currently active on the volume.
 * You can't create a large volume with application volume groups.
 * Currently, large volumes aren't suited for database (HANA, Oracle, SQL Server, etc.) data and log volumes. For database workloads requiring more than a single volume’s throughput limit, consider deploying multiple regular volumes. To optimize multiple volume deployments for databases, use [application volume groups](application-volume-group-concept.md).

@@ -39,11 +38,11 @@ The following requirements and considerations apply to large volumes. For perfor
 </tr></thead>
 <tbody>
 <tr>
-<td>Capacity tier</td>
+<td>Service level</td>
 <td>Minimum volume size<br>(TiB)</td>
-<td>Maximum volume size (TiB)*</td>
-<td>Minimum throughput for capacity tier (MiB/s)</td>
-<td>Maximum throughput for capacity tier (MiB/s)</td>
+<td>Maximum volume size (TiB)</td>
+<td>Base throughput (MiB/s) at 50 TiB</td>
+<td>Maximum throughput for service level (MiB/s)</td>
 </tr>
 <tr>
 <td>Standard (16 MiB/s per TiB)</td>

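The "MiB/s per TiB" figures in the table above imply throughput that scales linearly with provisioned size, up to the service-level maximum. A sketch for the Standard level (the 16 MiB/s per TiB factor is from the table; the helper function is illustrative, not an Azure API):

```python
STANDARD_MIBPS_PER_TIB = 16  # Standard service level, from the table

def large_volume_throughput(size_tib, mibps_per_tib=STANDARD_MIBPS_PER_TIB):
    """Linear scaling: provisioned size (TiB) times the per-TiB factor (MiB/s).
    The real service additionally caps throughput at the service-level maximum."""
    return size_tib * mibps_per_tib

print(large_volume_throughput(50))  # 800 MiB/s at the 50 TiB large-volume minimum
```

This matches the "Base throughput (MiB/s) at 50 TiB" column: 50 TiB at 16 MiB/s per TiB gives 800 MiB/s.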
articles/azure-netapp-files/whats-new.md

Lines changed: 7 additions & 0 deletions

@@ -17,6 +17,13 @@ ms.author: anfdocs
 
 Azure NetApp Files is updated regularly. This article provides a summary of the latest new features and enhancements.
 
+## March 2026
+
+* [Large volumes improvement:](large-volumes-requirements-considerations.md#requirements-and-considerations) removed the default 30% limit on large volume size increases
+
+Increasing a large volume beyond the former 30% limit no longer requires a support ticket. Customers can now automate large volume size increases without waiting for approval or human intervention.
+
 ## January 2026
 
 * [Elastic zone-redundant storage service level](elastic-zone-redundant-concept.md) (preview)

articles/azure-vmware/includes/disk-capabilities-of-the-host.md

Lines changed: 3 additions & 1 deletion

@@ -20,7 +20,7 @@ Azure VMware Solution clusters are based on a hyperconverged infrastructure. The
 | AV36P | Dual Intel Xeon Gold 6240 CPUs (Cascade Lake microarchitecture) with 18 cores/CPU @ 2.6 GHz / 3.9 GHz Turbo, Total 36 physical cores (72 logical cores with hyperthreading) | 768 | OSA | 1.5 (Intel Cache) | 19.20 (NVMe) | Selected regions (*) |
 | AV48 | Dual Intel Xeon Gold 6442Y CPUs (Sapphire Rapids microarchitecture) with 24 cores/CPU @ 2.6 GHz / 4.0 GHz Turbo, Total 48 physical cores (96 logical cores with hyperthreading) | 1,024 | ESA | N/A | 25.6 (NVMe) | Selected regions (*) |
 | AV52 | Dual Intel Xeon Platinum 8270 CPUs (Cascade Lake microarchitecture) with 26 cores/CPU @ 2.7 GHz / 4.0 GHz Turbo, Total 52 physical cores (104 logical cores with hyperthreading) | 1,536 | OSA | 1.5 (Intel Cache) | 38.40 (NVMe) | Selected regions (*) |
-| AV64 | Dual Intel Xeon Platinum 8370C CPUs (Ice Lake microarchitecture) with 32 cores/CPU @ 2.8 GHz / 3.5 GHz Turbo, Total 64 physical cores (128 logical cores with hyperthreading) | 1,024 | OSA | 3.84 (NVMe) | 15.36 (NVMe) | Selected regions (**) |
+| AV64 | Dual Intel Xeon Platinum 8370C CPUs (Ice Lake microarchitecture) with 32 cores/CPU @ 2.8 GHz / 3.5 GHz Turbo, Total 64 physical cores (128 logical cores with hyperthreading) | 1,024 | OSA / ESA**** | 3.84 (NVMe) / N/A**** | 15.36 (NVMe) / 19.25 (NVMe)**** | Selected regions (**) |
 
 An Azure VMware Solution cluster requires a minimum of three hosts. You can use hosts of the same type only in a single Azure VMware Solution private cloud. Hosts used to build or scale clusters come from an isolated pool of hosts. Those hosts passed hardware tests and had all data securely deleted before being added to a cluster.

@@ -31,3 +31,5 @@ All of the preceding host types have 100-Gbps network interface throughput.
 **AV64 prerequisite: An Azure VMware Solution private cloud deployed with AV36, AV36P, or AV52 is required before adding AV64.
 
 ***Raw is based on the [International System of Units (SI)](https://en.wikipedia.org/wiki/International_System_of_Units) reported by disk manufacturers. Example: 1 TB raw = 1,000,000,000,000 bytes. A computer calculating in binary (1 TiB = 1,099,511,627,776 bytes) reports that raw terabyte as roughly 931.3 GiB.
+
+****ESA applies to AV64 Gen 2 deployments.

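The raw-versus-binary footnote above checks out numerically:

```python
RAW_TB = 10**12   # SI: 1 TB as reported by disk manufacturers
GIB = 2**30       # one binary gigabyte (GiB)

# A 1 TB raw disk expressed in binary gigabytes:
print(round(RAW_TB / GIB, 1))  # 931.3
```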
articles/azure-vmware/introduction.md

Lines changed: 1 addition & 1 deletion

@@ -23,7 +23,7 @@ Azure VMware Solution provides two different private cloud generations:
 
 1. Azure VMware Solution Generation 1 provides VMware vSphere clusters built from dedicated bare-metal hosts deployed in Azure data center facilities. Microsoft-managed **ExpressRoute circuits** provide connectivity between VMware vSphere hosts and native Azure resources deployed in Virtual Networks.
 
-1. [Azure VMware Solution Generation 2](native-introduction.md) (Public Preview) provides VMware vSphere clusters built from dedicated Azure bare-metal hosts. Azure VMware Solution Generation 2 features an updated network architecture whereby VMware vSphere hosts are directly attached to Azure Virtual Networks. This offering is only supported on the AV64 SKU.
+1. [Azure VMware Solution Generation 2](native-introduction.md) provides VMware vSphere clusters built from dedicated Azure bare-metal hosts. Azure VMware Solution Generation 2 features an updated network architecture whereby VMware vSphere hosts are directly attached to Azure Virtual Networks. This offering is only supported on the AV64 SKU.
 
 ## Hosts, clusters, and private clouds

articles/azure-vmware/native-introduction.md

Lines changed: 8 additions & 0 deletions

@@ -55,15 +55,23 @@ Gen 2 private clouds are supported on the following SKU type:
 Gen 2 is available in the following Azure public regions.
 
 - Australia East
+- Brazil South
 - East US
 - Canada Central
 - Canada East
 - Central US
 - Malaysia West
 - North Europe
+- North Central US
 - Norway East
+- Qatar Central
 - Switzerland North
+- Switzerland West
 - UK West
 - West US 2

articles/azure-vmware/native-network-design-consideration.md

Lines changed: 3 additions & 3 deletions

@@ -83,19 +83,19 @@ Example /22 CIDR network address block **10.31.0.0/22** is divided into the foll
 | :-- | :-- | :-- | :-- |
 |VMware NSX Network | /27 | NSX Manager network. | 10.31.0.0/27 |
 |vCSA Network | /27 | vCenter Server network. | 10.31.0.32/27 |
-|avs-mgmt| /27|The management appliances (vCenter Server and NSX Manager) sit behind the "avs-mgmt" subnet, programmed as secondary IP ranges on this subnet. You may need to adjust the route tables associated with this subnet if traffic for your management appliances needs to route through an NVA or firewall. | 10.31.0.64/27 |
+|avs-mgmt| /27|The management appliances (vCenter Server, NSX Manager, and HCX Cloud Manager) sit behind the "avs-mgmt" subnet, programmed as secondary IP ranges on this subnet. You may need to adjust the route tables associated with this subnet if traffic for your management appliances needs to route through an NVA or firewall. | 10.31.0.64/27 |
 |avs-vnet-sync| /27 |Used by Azure VMware Solution Gen 2 to program routes created in VMware NSX into the virtual network. | 10.31.0.96/27 |
 |avs-services | /27 |Used for Azure VMware Solution Gen 2 provider services. Also used to configure private DNS resolution for your private cloud. | 10.31.0.224/27 |
 |avs-nsx-gw, avs-nsx-gw-1| /27 |Subnets off each of the T0 gateways per edge. These subnets are used to program VMware NSX network segments as secondary IP addresses. |10.31.0.128/27, 10.31.0.160/27 |
 |esx-mgmt-vmk1 | /25 |vmk1 is the management interface used by customers to access the host. IPs for the vmk1 interface come from these subnets. All of the vmk1 traffic for all hosts comes from this subnet range. | 10.31.1.0/25 |
 |esx-vmotion-vmk2 | /25 | vMotion VMkernel interfaces. | 10.31.1.128/25 |
 |esx-vsan-vmk3 | /25 | vSAN VMkernel interfaces and node communication. | 10.31.2.0/25 |
-|avs-network-infra-gw|/26|Used by Azure VMware Solution management for programming NSX segments. Customers don't need to modify this subnet because it's only used for Azure VMware Solution infrastructure.|10.31.2.128/26|
+|avs-network-infra-gw|/26|Used by Azure VMware Solution management for programming NSX segments. Customers don't need to modify this subnet because it's only used for Azure VMware Solution infrastructure. Your NSX network segments appear as secondary IP prefixes under this subnet; however, the workload segments still route through the avs-nsx-gw and avs-nsx-gw-1 subnets.|10.31.2.128/26|
 |Reserved | /27 | Reserved space. | 10.31.0.128/27 |
 |Reserved | /27 | Reserved space. | 10.31.0.192/27 |
 
 > [!Note]
-> For Azure VMware Solution Gen 2 deployments, customers must now allocate two additional /24 subnets for HCX management and uplink, in addition to the /22 entered during SDDC deployment. These additional /24s are not required for Gen 1.
+> For Azure VMware Solution Gen 2 deployments, customers must allocate two additional /24 subnets for HCX management and uplink, in addition to the /22 entered during SDDC deployment. In Gen 2, only the HCX management and HCX uplink subnets are required; the vMotion and replication networks are not.
 
 ## Next steps

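The /22 carve-up in the table above can be sanity-checked with Python's standard `ipaddress` module. The subnet names and CIDRs are taken from the example table (reserved ranges omitted); the script itself is only illustrative:

```python
import ipaddress

# The /22 block entered at SDDC deployment (from the doc's example)
BLOCK = ipaddress.ip_network("10.31.0.0/22")

SUBNETS = {
    "VMware NSX Network": "10.31.0.0/27",
    "vCSA Network": "10.31.0.32/27",
    "avs-mgmt": "10.31.0.64/27",
    "avs-vnet-sync": "10.31.0.96/27",
    "avs-nsx-gw": "10.31.0.128/27",
    "avs-nsx-gw-1": "10.31.0.160/27",
    "avs-services": "10.31.0.224/27",
    "esx-mgmt-vmk1": "10.31.1.0/25",
    "esx-vmotion-vmk2": "10.31.1.128/25",
    "esx-vsan-vmk3": "10.31.2.0/25",
    "avs-network-infra-gw": "10.31.2.128/26",
}

for name, cidr in SUBNETS.items():
    net = ipaddress.ip_network(cidr)
    # Every carved subnet must fall inside the /22 entered at deployment.
    assert net.subnet_of(BLOCK), f"{name} ({cidr}) is outside {BLOCK}"
    print(f"{name:22} {cidr:17} {net.num_addresses} addresses")
```

Note that the two Gen 2 HCX /24 subnets from the note above are deliberately absent: they are allocated in addition to the /22, not carved from it.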
articles/azure-vmware/toc.yml

Lines changed: 2 additions & 0 deletions

@@ -288,6 +288,8 @@ items:
     href: ecosystem-external-storage-solutions.md
   - name: Configure Azure Elastic SAN
     href: configure-azure-elastic-san.md
+  - name: Performance with Elastic SAN
+    href: ../storage/elastic-san/elastic-san-performance-on-azure-vmware-solutions.md?context=%2fazure%2fazure-vmware%2fcontext%2fcontext
   - name: Configure Azure NetApp Files
     items:
       - name: Attach Azure NetApp Files datastores to Azure VMware Solution hosts
