articles/application-gateway/migrate-v1-v2.md (5 additions & 0 deletions)
@@ -66,6 +66,11 @@ This article focuses on the configuration stage of migration. Migration of clien
 The configuration migration focuses on setting up the new V2 gateway with the settings from your existing V1 environment. Two Azure PowerShell scripts facilitate the migration of configurations (Standard or Web Application Firewall) from V1 to V2 gateways. These scripts help streamline the transition process by automating key deployment and configuration tasks.

+> [!NOTE]
+> If the existing Application Gateway V1 deployment is configured with a private-only frontend, you must [register the `EnableApplicationGatewayNetworkIsolation` feature in the subscription](../application-gateway/application-gateway-private-deployment.md#onboard-to-the-feature) for private deployment before running the migration script, even though the feature is generally available. This step is required to avoid deployment failures.
+>
+> Private Application Gateway deployments must have subnet delegation configured to `Microsoft.Network/applicationGateways`. Use the [steps to set up subnet delegation](/azure/virtual-network/manage-subnet-delegation?tabs=manage-subnet-delegation-portal).
+
 ## Enhanced cloning script (recommended)

 The enhanced cloning script is the recommended option. It offers an improved migration experience by:
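The two prerequisites called out in the added note (feature registration and subnet delegation) can be scripted ahead of the migration. A minimal Azure CLI sketch, assuming hypothetical resource names (`myResourceGroup`, `myVNet`, `myAppGwSubnet`), not a definitive procedure:

```azurecli
# Register the feature required for private-only V1 frontends
az feature register --namespace Microsoft.Network --name EnableApplicationGatewayNetworkIsolation

# Wait until the state reported here is "Registered", then re-register the provider
az feature show --namespace Microsoft.Network --name EnableApplicationGatewayNetworkIsolation --query properties.state
az provider register --namespace Microsoft.Network

# Delegate the Application Gateway subnet (resource names are placeholders)
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name myAppGwSubnet \
  --delegations Microsoft.Network/applicationGateways
```

Running these before the migration script avoids the deployment failures the note describes.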
articles/automation/automation-hrw-run-runbooks.md (2 additions & 5 deletions)
@@ -16,10 +16,7 @@ author: RochakSingh-blr
 > [!IMPORTANT]
 > - Starting 1st April 2025, all jobs running on agent-based Hybrid Worker will be stopped.
-> - Azure Automation Agent-based User Hybrid Runbook Worker (Windows and Linux) has retired on **31 August 2024** and is no longer supported. Follow the guidelines on how to [migrate from an existing Agent-based User Hybrid Runbook Workers to Extension-based Hybrid Workers](migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md).
-
-> [!NOTE]
-> Azure Automation Run As Account has retired on September 30, 2023 and is replaced with Managed Identities. Follow the guidelines on [how to start migrating your runbooks to use managed identities](automation-security-overview.md#managed-identities). For more information, see [migrating from an existing Run As accounts to managed identity](migrate-run-as-accounts-managed-identity.md#sample-scripts).
+> - Azure Automation Agent-based User Hybrid Runbook Worker (Windows and Linux) has retired on **31 August 2024** and is no longer supported. Follow the guidelines on how to [migrate from an existing Agent-based User Hybrid Runbook Workers to Extension-based Hybrid Workers](migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md)

 Runbooks that run on a [Hybrid Runbook Worker](automation-hybrid-runbook-worker.md) typically manage resources on the local computer or against resources in the local environment where the worker is deployed. Runbooks in Azure Automation typically manage resources in the Azure cloud. Even though they are used differently, runbooks that run in Azure Automation and runbooks that run on a Hybrid Runbook Worker are identical in structure.
@@ -96,7 +93,7 @@ If the *Python* executable file is at the default location *C:\Python27\python.e
 > [!NOTE]
 >- PowerShell 7.4 and Python 3.10 runbooks are supported on extension-based Linux Hybrid Workers only. Ensure the Linux Hybrid worker extension version is 1.1.23 or above.
->- PowerShell 5.1, PowerShell 7.1 (preview), Python 2.7, Python 3.8 runbooks are supported on both extension-based and agent-based Linux Hybrid Runbook Workers. For agent-based workers, ensure the Linux Hybrid Runbook worker version is 1.7.5.0 or above.
+>- PowerShell 7.1 (preview), Python 2.7, Python 3.8 runbooks are supported on both extension-based and agent-based Linux Hybrid Runbook Workers. For agent-based workers, ensure the Linux Hybrid Runbook worker version is 1.7.5.0 or above.
 >- PowerShell 7.2 runbook is supported on extension-based Linux Hybrid Workers only. Ensure the Linux Hybrid worker extension version is 1.1.11 or above.
articles/azure-netapp-files/azure-netapp-files-cost-model.md (1 addition & 1 deletion)
@@ -21,7 +21,7 @@ For cost model specific to cross-region replication, see [Cost model for cross-r
 Azure NetApp Files is billed on provisioned storage capacity, which is allocated by creating capacity pools. Capacity pools are billed monthly based on a set cost per allocated GiB per hour. Capacity pool allocation is measured hourly.

-Capacity pools must be at least 1 TiB and can be increased or decreased in 1-TiB intervals. Capacity pools contain volumes that range in size from a minimum of 50 GiB to a maximum of 100 TiB for regular volumes and up to 1 PiB for [large volumes](azure-netapp-files-understand-storage-hierarchy.md#large-volumes). Volumes are assigned quotas that are subtracted from the capacity pool’s provisioned size. For an active volume, capacity consumption against the quota is based on logical (effective) capacity, being active filesystem data or snapshot data. See [How Azure NetApp Files snapshots work](snapshots-introduction.md) for details.
+Capacity pools must be at least 1 TiB and can be increased or decreased in 1-TiB intervals. Capacity pools contain volumes that range in size from a minimum of 50 GiB to a maximum of 100 TiB for regular volumes and up to 1 PiB for [large volumes](azure-netapp-files-understand-storage-hierarchy.md#large-volumes). Volumes are assigned quotas that are subtracted from the capacity pool’s provisioned size. For an active volume, capacity consumption against the quota is based on logical (effective) capacity, being active filesystem data or snapshot data. See [Understand Azure NetApp Files snapshot-based data protection](snapshots-introduction.md) for details.
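The billing model described in this hunk (a set cost per allocated GiB per hour, measured hourly, billed monthly) can be illustrated with simple arithmetic. A minimal sketch; the hourly rate below is a made-up placeholder, not a published Azure NetApp Files price:

```python
# Sketch of per-GiB-per-hour capacity pool billing.
# RATE_PER_GIB_HOUR is a hypothetical placeholder rate, not a real price.
RATE_PER_GIB_HOUR = 0.000202  # USD per GiB per hour (assumed for illustration)

def pool_cost(pool_tib: float, hours: float) -> float:
    """Cost of a capacity pool allocated at pool_tib TiB for the given hours.

    Billing is based on allocated (provisioned) capacity, not consumed data.
    """
    gib = pool_tib * 1024  # 1 TiB = 1024 GiB
    return gib * RATE_PER_GIB_HOUR * hours

# Example: a 4-TiB pool allocated for a 730-hour month
monthly = pool_cost(4, 730)
```

Because allocation is measured hourly, resizing the pool mid-month changes the charge only for the hours after the resize.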
articles/azure-netapp-files/backup-introduction.md (2 additions & 2 deletions)
@@ -13,7 +13,7 @@ ms.custom: references_regions
 # Understand Azure NetApp Files backup

-Azure NetApp Files backup expands the data protection capabilities of Azure NetApp Files by providing fully managed backup solution for long-term recovery, archive, and compliance. Backups created by the service are stored in Azure storage, independent of volume snapshots that are available for near-term recovery or cloning. Backups taken by the service can be restored to new Azure NetApp Files volumes within the region. Azure NetApp Files backup supports both policy-based (scheduled) backups and manual (on-demand) backups. For more information, see [How Azure NetApp Files snapshots work](snapshots-introduction.md).
+Azure NetApp Files backup expands the data protection capabilities of Azure NetApp Files by providing fully managed backup solution for long-term recovery, archive, and compliance. Backups created by the service are stored in Azure storage, independent of volume snapshots that are available for near-term recovery or cloning. Backups taken by the service can be restored to new Azure NetApp Files volumes within the region. Azure NetApp Files backup supports both policy-based (scheduled) backups and manual (on-demand) backups. For more information, see [Understand Azure NetApp Files snapshot-based data protection](snapshots-introduction.md).

 ## Supported regions

@@ -60,4 +60,4 @@ If you choose to restore a backup of, for example, 600 GiB to a new volume, you'
articles/azure-netapp-files/data-protection-disaster-recovery-options.md (2 additions & 2 deletions)
@@ -28,7 +28,7 @@ The foundation of data protection solutions including volume restores and clones
 - Restore a snapshot to new volume (clone) in seconds to test or develop with current data
 - Application-consistent snapshots with [AzAcSnap integration](azacsnap-introduction.md) and third party backup tools

-To learn more, see [How Azure NetApp Files Snapshots work](snapshots-introduction.md) and [Ways to restore data from snapshots](snapshots-introduction.md#ways-to-restore-data-from-snapshots). To create a Snapshot policy, see [Manage Snapshot policies in Azure NetApp Files](snapshots-manage-policy.md).
+To learn more, see [Understand Azure NetApp Files snapshot-based data protection](snapshots-introduction.md) and [Ways to restore data from snapshots](snapshots-introduction.md#ways-to-restore-data-from-snapshots). To create a Snapshot policy, see [Manage Snapshot policies in Azure NetApp Files](snapshots-manage-policy.md).

 ## Backups

@@ -93,7 +93,7 @@ Fast data recovery (whole volume) | Revert volume from snapshot | Revert volume
articles/azure-netapp-files/elastic-zone-redundant-concept.md (3 additions & 3 deletions)
@@ -5,7 +5,7 @@ services: azure-netapp-files
 author: b-ahibbard
 ms.service: azure-netapp-files
 ms.topic: concept-article
-ms.date: 02/04/2026
+ms.date: 03/26/2026
 ms.author: anfdocs
 ms.custom: references-regions
 ---
@@ -49,7 +49,7 @@ Elastic zone-redundant storage offers several key benefits for resiliency, opera
 | Operational simplicity | Azure manages replication and failover automatically, eliminating the need for duplicate volumes or cross‑zone replication. High availability becomes a one‑click setup, simplifying operations and reducing configuration risk.|
 | Extensive feature support | Elastic zone-redundant storage volumes support a growing set of Azure NetApp Files features, including NFSv3, NFSv4.1, and SMB, along with capabilities including snapshots, backups, customer‑managed keys, and Active Directory integration, delivering enhanced resiliency as feature coverage continues to expand. |
 | Cost-effective high availability | Azure NetApp Files Elastic zone-redundant storage delivers multi‑availability zone redundancy more cost‑effectively than duplicate standby volumes by using all provisioned capacity with no idle replicas. You pay for a single resilient volume, improving utilization, lowering TCO, and avoiding the added egress and administrative costs of external replication solutions. |
-| Metadata performance | Beyond consistent throughput, Azure NetApp Files Elastic zone-redundant storage redefines performance for metadata-heavy workloads. This is critical for SAP shared files and similar environments where metadata operations drive application responsiveness. The shared QoS architecture dynamically allocates IOPS across volumes to maintain low-latency, metadata-intensive operations consistently, even across multiple availability zones. |
+| Metadata performance | Beyond consistent throughput, Azure NetApp Files Elastic zone-redundant storage redefines performance for metadata-heavy workloads. This is critical for environments where high rates of metadata operations directly influence application responsiveness. The shared QoS architecture dynamically allocates IOPS across volumes to sustain low-latency, metadata-intensive operations consistently, even across multiple availability zones. |

 ## Supported regions

@@ -106,4 +106,4 @@ For more detailed information, see [Azure NetApp Files REST API](/rest/api/netap
 -[Storage hierarchy of Azure NetApp Files](azure-netapp-files-understand-storage-hierarchy.md)
 -[Create a NetApp Elastic account](elastic-account.md)
 -[Set up an Elastic capacity pool](elastic-capacity-pool-task.md)
articles/azure-netapp-files/includes/region-pairs.md (8 additions & 1 deletion)
@@ -69,8 +69,15 @@ Azure NetApp Files volume replication is supported between various [Azure region
 | North America | West US 2 | West US 3 |
 | Sweden/Europe | Sweden Central | North Europe |
 | Sweden/Europe | Sweden Central | West Europe |
+| UAE/Sweden* | UAE North | Sweden Central |
 | UK/Europe | UK South | North Europe |
 | US Government | US Gov Arizona | US Gov Virginia |

+*Billing
 > [!NOTE]
-> There can be a discrepancy in the size and number of snapshots between the source and the destination. This discrepancy is expected. Snapshot policies and replication schedules influence the number of snapshots. Snapshot policies and replication schedules, combined with the amount of data that changes between snapshots, influence the size of snapshots. For more information, see [How Azure NetApp Files snapshots work](../snapshots-introduction.md).
+> During the initial rollout, your Azure bill may temporarily show cross-region replication charges for an alternative region pair while the final billing SKUs are being implemented. **There is no overbilling** - the cost shown is the same rate that will apply for **UAE North to Sweden Central** replication.
+
+<br/><br/>
+
+> [!NOTE]
+> There can be a discrepancy in the size and number of snapshots between the source and the destination. This discrepancy is expected. Snapshot policies and replication schedules influence the number of snapshots. Snapshot policies and replication schedules, combined with the amount of data that changes between snapshots, influence the size of snapshots. For more information, see [Understand Azure NetApp Files snapshot-based data protection](../snapshots-introduction.md).
0 commit comments