articles/application-gateway/application-gateway-faq.yml (+1 -1)

@@ -137,7 +137,7 @@ sections:
       Yes. The Application Gateway v1 SKU continues to be supported. We strongly recommend moving to v2 to take advantage of the feature updates in that SKU. For more information on the differences between v1 and v2 features, see [Autoscaling and zone-redundant Application Gateway v2](application-gateway-autoscaling-zone-redundant.md). You can manually migrate Application Gateway v1 SKU deployments to v2 by following our [v1-v2 migration document](migrate-v1-v2.md).
   - question: Does Application Gateway v2 support proxying requests with NTLM or Kerberos authentication?
-    answer: No. Application Gateway v2 doesn't support proxying requests with NTLM or Kerberos authentication.
+    answer: Yes. Application Gateway v2 now supports proxying requests with NTLM or Kerberos authentication. For more information, see [Dedicated backend connection](configuration-http-settings.md#dedicated-backend-connection).
   - question: Why are some header values not present when requests are forwarded to my application?
     answer: Request header names can contain alphanumeric characters and hyphens. Request header names that contain other characters are discarded when a request is sent to the backend target. Response header names can contain any alphanumeric characters and specific symbols as defined in [RFC 7230](https://tools.ietf.org/html/rfc7230#page-27).
articles/azure-app-configuration/quickstart-container-apps.md (+5 -2)

@@ -6,7 +6,7 @@ author: maud-lv
 ms.service: azure-app-configuration
 ms.custom: service-connector
 ms.topic: quickstart
-ms.date: 12/11/2024
+ms.date: 09/19/2025
 ms.author: malev
 ---
@@ -18,6 +18,9 @@ In this quickstart, you use Azure App Configuration in an app running in Azure C
 > [!TIP]
 > While following this quickstart, preferably register all new resources within a single resource group, so that you can regroup them all in a single place and delete them faster later on if you don't need them anymore.
 
+> [!IMPORTANT]
+> Support for Service Connector (preview) on Azure Container Apps ends on March 30, 2026. After that date, new service connections using Service Connector (preview) aren't available through any interface. For more information, see [RETIREMENT: Service Connector (Preview) on Azure Container Apps](https://aka.ms/serviceconnectoraca).
+
 ## Prerequisites
 
 - An application using an App Configuration store. If you don't have one, create an instance using the [Quickstart: Create an ASP.NET Core app with App Configuration](./quickstart-aspnet-core-app.md).
@@ -187,4 +190,4 @@ The managed identity enables one Azure resource to access another without you ma
 To learn how to configure your ASP.NET Core web app to dynamically refresh configuration settings, continue to the next tutorial.
articles/azure-netapp-files/azure-netapp-files-service-levels.md (+2 -2)

@@ -5,7 +5,7 @@ services: azure-netapp-files
 author: b-hchen
 ms.service: azure-netapp-files
 ms.topic: concept-article
-ms.date: 08/22/2025
+ms.date: 09/16/2025
 ms.author: anfdocs
 # Customer intent: "As a cloud storage administrator, I want to understand the throughput capabilities of different service levels in Azure NetApp Files, so that I can choose the right configuration to meet my application's performance requirements."
 ---
@@ -17,7 +17,7 @@ Service levels are an attribute of a capacity pool. Service levels are defined a
 Azure NetApp Files supports four service levels: *Flexible*, *Standard*, *Premium*, and *Ultra*.
 The Flexible service level enables you to adjust throughput and size limits independently. You can use the Flexible service level to create high-capacity volumes with low throughput requirements or the reverse: low-capacity volumes with high throughput requirements. The Flexible service level is designed for demanding applications such as Oracle or SAP HANA.
articles/azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md (+1 -16)

@@ -5,7 +5,7 @@ services: azure-netapp-files
 author: b-hchen
 ms.service: azure-netapp-files
 ms.topic: how-to
-ms.date: 08/14/2025
+ms.date: 09/16/2025
 ms.author: anfdocs
 ms.custom:
 - build-2025
@@ -25,21 +25,6 @@ Creating a capacity pool enables you to create volumes within it.
 >[!IMPORTANT]
 >To create a 1-TiB capacity pool with a tag, you must use API versions `2023-07-01_preview` to `2024-01-01_preview` or stable releases from `2024-01-01`.
 * The Standard, Premium, and Ultra service levels are generally available (GA). No registration is required.
-* <a name="flexible"></a> The **Flexible** service level is currently in preview and supported in all Azure NetApp Files regions. You must register the feature before using it for the first time:
articles/azure-netapp-files/manage-cool-access.md (+2 -2)

@@ -5,7 +5,7 @@ services: azure-netapp-files
 author: b-ahibbard
 ms.service: azure-netapp-files
 ms.topic: how-to
-ms.date: 08/14/2025
+ms.date: 09/16/2025
 ms.author: anfdocs
 ms.custom:
 - build-2025
@@ -138,7 +138,7 @@ No registration is required to use cool access at the Standard service level.
 # [Flexible](#tab/flexible)
-Cool access with the Flexible service level is currently in preview. You must be registered to use the [Flexible service](azure-netapp-files-set-up-capacity-pool.md#flexible) before requesting cool access with the Flexible service level. Once you confirm your registration in the Flexible service level preview, register to use cool access with the Flexible service level.
+Cool access with the Flexible service level is currently in preview. You must register the feature before using it.
articles/azure-netapp-files/whats-new.md (+7 -1)

@@ -8,7 +8,7 @@ ms.custom:
 - linux-related-content
 - build-2025
 ms.topic: overview
-ms.date: 09/04/2025
+ms.date: 09/16/2025
 ms.author: anfdocs
 # Customer intent: As a cloud administrator, I want to learn about the latest enhancements in Azure NetApp Files, so that I can effectively utilize new features for improved data security, resilience, and operational efficiency in my organization's cloud storage solutions.
 ---
@@ -19,6 +19,12 @@ Azure NetApp Files is updated regularly. This article provides a summary about t
 ## September 2025
 
+* [Flexible service level](azure-netapp-files-set-up-capacity-pool.md) is now generally available (GA)
+
+The [Flexible service level](azure-netapp-files-service-levels.md#flexible) allows you to independently configure storage capacity and throughput, optimizing costs by right-sizing according to storage and performance requirements. With separate pricing for capacity and throughput, the Flexible service level prevents overprovisioning and supports up to 640 MiB/second per TiB. This throughput is five times the performance of the Ultra service level, making it ideal for demanding workloads. It offers higher throughput for smaller capacity pools and adapts to changing requirements without the need for volume moves.
+
+The Flexible service level is only supported with _new_ manual QoS capacity pools. The Flexible service level offers a minimum throughput of 128 MiB/s and a maximum of 640 MiB/s per TiB [per pool](azure-netapp-files-service-levels.md#flexible-service-level-throughput-examples). This new service level is suitable for applications such as Oracle or SAP HANA and for creating high-capacity volumes with low throughput needs. You can adjust throughput and size limits independently, ensuring flexibility and precise scaling to meet your price-performance requirements.
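The headline figures in this announcement can be sanity-checked with a quick calculation (a sketch; the 128 and 640 MiB/s per-TiB values come from the text above, and the 2-TiB pool is hypothetical):

```shell
# Throughput figures quoted in the announcement above.
ultra_per_tib=128      # MiB/s per TiB, Ultra service level
flex_max_per_tib=640   # MiB/s per TiB, Flexible service level maximum

# "five times the performance of the Ultra service level"
echo $(( flex_max_per_tib / ultra_per_tib ))   # prints 5

# Maximum throughput ceiling for a hypothetical 2-TiB Flexible pool
echo $(( flex_max_per_tib * 2 ))               # prints 1280
```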
 * [Azure NetApp Files datastore support in Azure VMware Solution Generation 2](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md)
 
 [Azure NetApp Files datastore](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) is now supported in [Azure VMware Solution (AVS) Generation 2](../azure-vmware/native-introduction.md). AVS Generation 2 private clouds are deployed inside an Azure virtual network. This means that ExpressRoute is no longer needed to connect the Azure VMware Solution to Azure NetApp Files datastores. This deployment simplifies networking architecture, enhances data transfer speeds, reduces latency for workloads, and improves performance when accessing other Azure services. This capability is supported in all regions where Azure VMware Solution Generation 2 and Azure NetApp Files are available.
With Bicep version 0.35.1 and later, the `@secure()` decorator can be applied to module outputs to mark them as sensitive, ensuring that their values are not exposed in logs or deployment history. This is useful when a module needs to return sensitive data, such as a generated key or connection string, to the parent Bicep file without risking exposure. For more information, see [Secure outputs](./outputs.md#secure-outputs).
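As a minimal sketch of the secure-outputs pattern (the module file name, parameter, and output name here are illustrative, not taken from the article):

```bicep
// secrets.bicep (hypothetical module): the output is marked @secure(),
// so its value is redacted from deployment logs and history.
param storageAccountName string

resource storage 'Microsoft.Storage/storageAccounts@2023-01-01' existing = {
  name: storageAccountName
}

@secure()
output primaryKey string = storage.listKeys().keys[0].value
```

In the parent file, the module's `outputs.primaryKey` can then be passed to a `@secure()` parameter of another module or resource without the value being exposed.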
+
+## Module identity
+
+Starting with Bicep version 0.36.1, you can assign a user-assigned managed identity to a module. This makes the identity available within the module—for example, to access a Key Vault. However, this capability is intended for future use and is not yet supported by backend services.
+
+```bicep
+param identityId string
+
+module mod './module.bicep' = {
+  identity: {
+    type: 'UserAssigned'
+    userAssignedIdentities: {
+      '${identityId}': {}
+    }
+  }
+  name: 'mod'
+  params: {
+    keyVaultUri: 'keyVaultUri'
+    identityId: identityId
+  }
+}
+```
 
 ## Related content
 
 - For a tutorial, see [Build your first Bicep file](/training/modules/deploy-azure-resources-by-using-bicep-templates/).
articles/backup/azure-file-share-support-matrix.md (+1 -1)

@@ -33,7 +33,7 @@ Vaulted backup for Azure Files is available in the following regions: UK South,
 
 Cross Region Restore is supported in all preceding regions, except Italy North.
 
-Migration of File Shares protected with snapshot backup to vaulted backup is supported in the following regions: UK South, UK West, Southeast Asia, East Asia, West Central US, India Central, Spain Central, Jio India West, Israel Central, Australia Central 2 and Germany North.
+Migration of File Shares protected with snapshot backup to vaulted backup is supported in the following regions: UK South, UK West, Southeast Asia, East Asia, West Central US, India Central, Spain Central, Jio India West, Israel Central, Australia Central 2, Germany North, Brazil South, Switzerland North, South Africa North, Australia Southeast, Sweden Central, Norway East, UAE North, West US 3, Japan West, Korea Central, Canada East, South India, Italy North, Poland Central, and Australia Central.
 
 >[!Note]
 >Cross Subscription Backup and Restore are supported for vaulted backup.
 :::image type="content" source="./media/backup-azure-restore-files-from-vm/open-vault-for-vm.png" alt-text="Screenshot shows how to open Recovery Services vault backup item." lightbox="./media/backup-azure-restore-files-from-vm/open-vault-for-vm.png":::
 
 3. In the Backup dashboard menu, select **File Recovery**.
 > Users should note the performance limitations of this feature. As pointed out in the footnote section of the above blade, this feature should be used when the total size of recovery is 10 GB or less. The expected data transfer speeds are around 1 GB per hour.
 
-4. From the **Select recovery point** drop-down menu, select the recovery point that holds the files you want. By default, the latest recovery point is already selected.
+4. Under **Select restore point**, click **Select** to choose the restore point that contains the required files for restore.
 
 5. Select **Download Executable** (for Windows Azure VMs) or **Download Script** (for Linux Azure VMs, a Python script is generated) to download the software used to copy files from the recovery point.
 Azure downloads the executable or script to the local computer.
@@ -59,12 +57,9 @@ To restore files or folders from the recovery point, go to the virtual machine a
 
 6. The executable or script is password protected and requires a password. In the **File Recovery** menu, select the copy button to load the password into memory.
 ## Step 2: Ensure the machine meets the requirements before executing the script
-After the script is successfully downloaded, make sure you have the right machine to execute this script. The VM where you are planning to execute the script, should not have any of the following unsupported configurations. **If it does, then choose an alternate machine that meets the requirements**.
+After the script is successfully downloaded, make sure you have the right machine to execute this script. The VM where you're planning to execute the script shouldn't have any of the following unsupported configurations. **If it does, then choose an alternate machine that meets the requirements**.
 
 ### Dynamic disks
@@ -142,18 +137,18 @@ If you run the script on a computer with restricted access, ensure there's acces
 - `https://pod01-rec2.GEO-NAME.backup.windowsazure.us` (For Azure US Government) or `AzureBackup` service tag in NSG
 - `https://pod01-rec2.GEO-NAME.backup.windowsazure.de` (For Azure Germany) or `AzureBackup` service tag in NSG
 - Public DNS resolution on port 53 (outbound)
-- The access requirement of the Microsoft Entra ID are `*.microsoft.com`, `*.windowsazure.com`, and `*.windows.net` on port 443 (outbound).
+- The access requirement of Microsoft Entra ID is `*.microsoft.com`, `*.windowsazure.com`, and `*.windows.net` on port 443 (outbound).
 
 > [!NOTE]
-> Proxies may not support iSCSI protocol or give access to port 3260. Hence it is strongly recommended to run this script on machines which have direct access as required above and not on the machines which will redirect to proxy.
+> Proxies may not support the iSCSI protocol or give access to port 3260. Hence it's strongly recommended to run this script on machines that have direct access as required above, and not on machines that redirect to a proxy.
 
 > [!NOTE]
 >
-> In case, the backedup VM is Windows, then the geo-name will be mentioned in the password generated.<br><br>
+> If the backed-up VM is Windows, the geo-name is mentioned in the generated password.<br><br>
 > For example, if the generated password is *ContosoVM_wcus_GUID*, then geo-name is wcus and the URL would be: <`https://pod01-rec2.wcus.backup.windowsazure.com`><br><br>
 >
 >
-> If the backedup VM is Linux, then the script file you downloaded in step 1 [above](#step-1-generate-and-download-script-to-browse-and-recover-files) will have the **geo-name** in the name of the file. Use that **geo-name** to fill in the URL. The downloaded script name will begin with: \'VMname\'\_\'geoname\'_\'GUID\'.<br><br>
+> If the backed-up VM is Linux, the script file you downloaded in step 1 [above](#step-1-generate-and-download-script-to-browse-and-recover-files) has the **geo-name** in the file name. Use that **geo-name** to fill in the URL. The downloaded script name begins with: \'VMname\'\_\'geoname\'_\'GUID\'.<br><br>
 > So for example, if the script filename is *ContosoVM_wcus_12345678*, the **geo-name** is *wcus* and the URL would be: <`https://pod01-rec2.wcus.backup.windowsazure.com`><br><br>
 >
@@ -168,7 +163,7 @@ Also, ensure that you have the [right machine to execute the ILR script](#step-2
 
 > [!NOTE]
 >
-> The script is generated in English language only and is not localized. Hence it might require that the system locale is in English for the script to execute properly
+> The script is generated in English only and isn't localized. Hence the system locale might need to be English for the script to execute properly.
 >
@@ -215,7 +210,7 @@ If the file recovery process hangs after you run the file-restore script (for ex
 
 1. In the file /etc/iscsi/iscsid.conf, change the setting from:
    - `node.conn[0].timeo.noop_out_timeout = 5` to `node.conn[0].timeo.noop_out_timeout = 120`
-2. After making the above changes, rerun the script. If there are transient failures, ensure there is a gap of 20 to 30 minutes between reruns to avoid successive bursts of requests impacting the target preparation. This interval between re-runs will ensure the target is ready for connection from the script.
+2. After making the above changes, rerun the script. If there are transient failures, ensure there's a gap of 20 to 30 minutes between reruns to avoid successive bursts of requests impacting the target preparation. This interval between reruns ensures the target is ready for connection from the script.
 3. After file recovery, make sure you go back to the portal and select **Unmount disks** for recovery points where you weren't able to mount volumes. Essentially, this step will clean any existing processes/sessions and increase the chance of recovery.
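The timeout edit in step 1 can also be scripted. A minimal sketch follows; it operates on a temp copy of the file for illustration, since editing `/etc/iscsi/iscsid.conf` on a real VM requires root:

```shell
# Raise the iSCSI NOP-Out timeout from 5 to 120 seconds.
# A temp copy stands in for /etc/iscsi/iscsid.conf here; on a real VM,
# edit the actual file with sudo and back it up first.
CONF=$(mktemp)
echo 'node.conn[0].timeo.noop_out_timeout = 5' > "$CONF"

# Dots and brackets are escaped so sed matches the literal setting line.
sed -i 's/^node\.conn\[0\]\.timeo\.noop_out_timeout = 5$/node.conn[0].timeo.noop_out_timeout = 120/' "$CONF"

cat "$CONF"   # node.conn[0].timeo.noop_out_timeout = 120
```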