Commit c1b09d8

Merge pull request #311011 from MicrosoftDocs/main
Auto Publish – main to live - 2026-01-28 18:00 UTC
2 parents 5e4d0e1 + 9efdbf8 commit c1b09d8

25 files changed: 146 additions & 126 deletions

articles/active-directory-b2c/authorization-code-flow.md

Lines changed: 4 additions & 1 deletion
@@ -69,13 +69,16 @@ client_id=00001111-aaaa-2222-bbbb-3333cccc4444
 | redirect_uri |Required |The redirect URI of your app, where authentication responses are sent and received by your app. It must exactly match one of the redirect URIs that you registered in the portal, except that it must be URL-encoded. |
 | scope |Required |A space-separated list of scopes. The `openid` scope indicates a permission to sign in the user and get data about the user in the form of ID tokens. The `offline_access` scope is optional for web applications. It indicates that your application needs a *refresh token* for extended access to resources. The client-id indicates the token issued are intended for use by Azure AD B2C registered client. The `https://{tenant-name}/{app-id-uri}/{scope}` indicates a permission to protected resources, such as a web API. For more information, see [Request an access token](access-tokens.md#scopes). |
 | response_mode |Recommended |The method that you use to send the resulting authorization code back to your app. It can be `query`, `form_post`, or `fragment`. |
-| state |Recommended |A value included in the request that can be a string of any content that you want to use. Usually, a randomly generated unique value is used, to prevent cross-site request forgery attacks. The state also is used to encode information about the user's state in the app before the authentication request occurred. For example, the page the user was on, or the user flow that was being executed. |
 | prompt |Optional |The type of user interaction that is required. Currently, the only valid value is `login`, which forces the user to enter their credentials on that request. Single sign-on won't take effect. |
 | code_challenge | recommended / required | Used to secure authorization code grants via Proof Key for Code Exchange (PKCE). Required if `code_challenge_method` is included. You need to add logic in your application to generate the `code_verifier` and `code_challenge`. The `code_challenge` is a Base64 URL-encoded SHA256 hash of the `code_verifier`. You store the `code_verifier` in your application for later use, and send the `code_challenge` along with the authorization request. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). This is now recommended for all application types - native apps, SPAs, and confidential clients like web apps. |
 | `code_challenge_method` | recommended / required | The method used to encode the `code_verifier` for the `code_challenge` parameter. This *SHOULD* be `S256`, but the spec allows the use of `plain` if for some reason the client can't support SHA256. <br/><br/>If you exclude the `code_challenge_method`, but still include the `code_challenge`, then the `code_challenge` is assumed to be plaintext. Microsoft identity platform supports both `plain` and `S256`. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). This is required for [single page apps using the authorization code flow](tutorial-register-spa.md).|
 | login_hint | No| Can be used to prefill the sign-in name field of the sign-in page. For more information, see [Prepopulate the sign-in name](direct-signin.md#prepopulate-the-sign-in-name). |
 | domain_hint | No| Provides a hint to Azure AD B2C about the social identity provider that should be used for sign-in. If a valid value is included, the user goes directly to the identity provider sign-in page. For more information, see [Redirect sign-in to a social provider](direct-signin.md#redirect-sign-in-to-a-social-provider). |
 | Custom parameters | No| Custom parameters that can be used with [custom policies](custom-policy-overview.md). For example, [dynamic custom page content URI](customize-ui-with-html.md?pivots=b2c-custom-policy#configure-dynamic-custom-page-content-uri), or [key-value claim resolvers](claim-resolver-overview.md#oauth2-key-value-parameters). |
+| state |Recommended |A value included in the request that can be a string of any content that you want to use. Usually, a randomly generated unique value is used, to prevent cross-site request forgery attacks. The state also is used to encode information about the user's state in the app before the authentication request occurred. For example, the page the user was on, or the user flow that was being executed. |
+
+> [!IMPORTANT]
+> For security and privacy, do not put URLs or other sensitive data directly in the state parameter. Instead, use a key or identifier that corresponds to data stored in browser storage, such as localStorage or sessionStorage. This approach lets your app securely reference the necessary data after authentication.
 
 At this point, the user is asked to complete the user flow's workflow. This might involve the user entering their username and password, signing in with a social identity, signing up for the directory, or any other number of steps. User actions depend on how the user flow is defined.
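The `code_verifier`/`code_challenge` generation described in the table above can be sketched in shell, assuming `openssl` and `base64` are on the PATH. This is a minimal illustration of the S256 method from RFC 7636, not part of the changed article:

```shell
# Generate a random code_verifier (43-128 chars of [A-Za-z0-9-._~]); this uses
# openssl's CSPRNG, then converts to Base64 URL-safe encoding without padding.
code_verifier=$(openssl rand -base64 60 | tr -d '\n' | tr '+/' '-_' | tr -d '=')

# code_challenge = BASE64URL( SHA256( code_verifier ) ), per RFC 7636 (S256).
code_challenge=$(printf '%s' "$code_verifier" \
  | openssl dgst -sha256 -binary \
  | base64 | tr -d '\n' | tr '+/' '-_' | tr -d '=')

echo "code_verifier:  $code_verifier"
echo "code_challenge: $code_challenge"
```

The app stores `code_verifier`, sends `code_challenge` (with `code_challenge_method=S256`) on the authorization request, and later presents the verifier when redeeming the code.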

articles/application-gateway/application-gateway-diagnostics.md

Lines changed: 3 additions & 3 deletions
@@ -60,9 +60,9 @@ Resource-specific mode provides:
 
 For Application Gateway, resource-specific mode creates the following tables:
 
-- [AGWAccessLogs](/azure/azure-monitor/reference/tables/agwaccesslogs#columns)
-- [AGWPerformanceLogs](/azure/azure-monitor/reference/tables/agwperformancelogs#columns)
-- [AGWFirewallLogs](/azure/azure-monitor/reference/tables/agwfirewalllogs#columns)
+- [AGWAccessLogs](/azure/azure-monitor/reference/tables/agwaccesslogs)
+- [AGWPerformanceLogs](/azure/azure-monitor/reference/tables/agwperformancelogs)
+- [AGWFirewallLogs](/azure/azure-monitor/reference/tables/agwfirewalllogs)
 
 **Selecting the collection type in Log analytics**

articles/azure-functions/migrate-dotnet-to-isolated-model.md

Lines changed: 4 additions & 18 deletions
@@ -310,20 +310,13 @@ using Microsoft.Extensions.Logging;
 
 namespace Company.Function
 {
-    public class HttpTriggerCSharp
+    public class HttpTriggerCSharp(ILogger<HttpTriggerCSharp> logger)
     {
-        private readonly ILogger<HttpTriggerCSharp> _logger;
-
-        public HttpTriggerCSharp(ILogger<HttpTriggerCSharp> logger)
-        {
-            _logger = logger;
-        }
-
         [Function("HttpTriggerCSharp")]
         public IActionResult Run(
             [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
         {
-            _logger.LogInformation("C# HTTP trigger function processed a request.");
+            logger.LogInformation("C# HTTP trigger function processed a request.");
 
             return new OkObjectResult($"Welcome to Azure Functions, {req.Query["name"]}!");
         }
@@ -341,19 +334,12 @@ using System.Net;
 
 namespace Company.Function
 {
-    public class HttpTriggerCSharp
+    public class HttpTriggerCSharp(ILogger<HttpTriggerCSharp> logger)
     {
-        private readonly ILogger<HttpTriggerCSharp> _logger;
-
-        public HttpTriggerCSharp(ILogger<HttpTriggerCSharp> logger)
-        {
-            _logger = logger;
-        }
-
         [Function("HttpTriggerCSharp")]
         public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestData req)
         {
-            _logger.LogInformation("C# HTTP trigger function processed a request.");
+            logger.LogInformation("C# HTTP trigger function processed a request.");
 
             var response = req.CreateResponse(HttpStatusCode.OK);
             response.Headers.Add("Content-Type", "text/plain; charset=utf-8");

articles/azure-functions/run-functions-from-deployment-package.md

Lines changed: 1 addition & 1 deletion
@@ -34,7 +34,7 @@ To enable your function app to run from a package on the [Consumption](./consump
 
 | Value | Description |
 |---------|---------|
-| **`1`** | Indicates that the function app runs from a local package file deployed in the `c:\home\data\SitePackages` (Windows) or `/home/data/SitePackages` (Linux) folder of your function app. |
+| **`1`** | Indicates that the function app runs from a local package file deployed in the `c:\home\data\SitePackages` (Windows) or `/home/data/SitePackages` (Linux) folder of your function app. This is the default option when you use [Azure Functions Core Tools](/azure/azure-functions/functions-run-local). |
 |**`<URL>`** | Sets a URL that is the remote location of the specific package file you want to run. Required for functions apps running on Linux in a Consumption plan. |
 
 The following table indicates the recommended `WEBSITE_RUN_FROM_PACKAGE` values for deployment to a specific operating system and hosting plan:
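As a sketch of how the `WEBSITE_RUN_FROM_PACKAGE` value from the table above is applied, this config fragment uses the Azure CLI; the app and resource-group names are hypothetical and it requires an authenticated CLI session:

```shell
# Hypothetical names; replace with your own function app and resource group.
az functionapp config appsettings set \
  --name my-function-app \
  --resource-group my-resource-group \
  --settings WEBSITE_RUN_FROM_PACKAGE=1
```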

articles/azure-netapp-files/azure-netapp-files-introduction.md

Lines changed: 2 additions & 2 deletions
@@ -5,7 +5,7 @@ services: azure-netapp-files
 author: b-hchen
 ms.service: azure-netapp-files
 ms.topic: overview
-ms.date: 10/28/2025
+ms.date: 01/28/2026
 ms.author: anfdocs
 # Customer intent: As a cloud architect, I want to evaluate Azure NetApp Files for high-performance file storage, so that I can efficiently manage enterprise workloads while ensuring data availability, scalability, and security in the cloud.
 ---
@@ -39,7 +39,7 @@ Azure NetApp Files is designed to provide high-performance file storage for ente
 | Functionality | Description | Benefit |
 | - | - | - |
 | In-Azure bare-metal flash performance | Fast and reliable all-flash performance with submillisecond latency. | Run performance-intensive workloads in the cloud with on-premises infrastructure-level performance.
-| Multi-protocol support | Supports multiple protocols, including NFSv3, NFSv4.1, SMB 3.0, SMB 3.1.1, and simultaneous dual-protocol. Also supports integration with S3. | Seamlessly integrate with existing infrastructure and workflows without compatibility issues or complex configurations. Azure NetApp Files also integrates with S3 using the [object REST API](object-rest-api-introduction.md). |
+| Multi-protocol support | Supports multiple protocols, including NFSv3, NFSv4.1, SMB 3.0, SMB 3.1.1, and simultaneous dual-protocol. Also supports object REST API based on S3 protocol. | Seamlessly integrate with existing infrastructure and workflows without compatibility issues or complex configurations. Azure NetApp Files also integrates with Microsoft Fabric through OneLake, and object-based services using the [object REST API](object-rest-api-introduction.md). |
 | Four adaptable performance tiers (Flexible, Standard, Premium, Ultra) | Four performance tiers with dynamic service-level change capability based on workload needs, including cool access for cold data. | Choose the right performance level for workloads and dynamically adjust performance without overspending on resources.
 | Small-to-large volumes | Easily resize file volumes from 100 GiB up to 100 TiB without downtime. | Scale storage as business needs grow without over-provisioning, avoiding upfront cost.
 | Small-to-large volumes | Easily resize file volumes from 50 GiB up to 100 TiB without downtime. | Scale storage as business needs grow without over-provisioning, avoiding upfront cost.

articles/azure-netapp-files/faq-data-migration-protection.md

Lines changed: 6 additions & 6 deletions
@@ -49,15 +49,15 @@ The requirements for data migration from on premises to Azure NetApp Files are a
 
 By default, your data stays within the region where you deploy your Azure NetApp Files volumes. However, you can choose to replicate your data on a volume-by-volume basis to available destination regions using [cross-region replication](replication.md).
 
-### How do I create a copy of an Azure NetApp Files volume in another Azure region?
+### How do I create a copy of an Azure NetApp Files volume in another Azure zone or region?
 
-Azure NetApp Files provides NFS and SMB volumes. Any file based-copy tool can be used to replicate data between Azure regions.
+Azure NetApp Files provides NFS and SMB volumes.
 
-The [cross-region replication](replication.md) functionality enables you to asynchronously replicate data from an Azure NetApp Files volume (source) in one region to another Azure NetApp Files volume (destination) in another region. Additionally, you can [create a new volume by using a snapshot of an existing volume](snapshots-restore-new-volume.md).
+The [cross-region and cross-zone replication](replication.md) functionality enables you to asynchronously replicate volumes from an Azure NetApp Files volume in one region or zone to another Azure NetApp Files volume (destination) in another region or zone. Additionally, you can [create a new volume from a snapshot of an existing volume](snapshots-restore-new-volume.md).
 
-NetApp offers a SaaS based solution, [NetApp Cloud Sync](https://docs.netapp.com/us-en/occm38/concept_cloud_sync.html). The solution enables you to replicate NFS or SMB data to Azure NetApp Files NFS exports or SMB shares.
+Any file based-copy tool can be used to replicate data between Azure zones and regions. NetApp offers a SaaS based solution, [NetApp Cloud Sync](https://docs.netapp.com/us-en/occm38/concept_cloud_sync.html). The solution enables you to replicate NFS or SMB data to Azure NetApp Files NFS exports or SMB shares.
 
-You can also use a wide array of free tools to copy data. For NFS, you can use workloads tools such as [rsync](https://rsync.samba.org/examples.html) to copy and synchronize source data into an Azure NetApp Files volume. For SMB, you can use workloads [robocopy](/windows-server/administration/windows-commands/robocopy) in the same manner. These tools can also replicate file or folder permissions.
+You can also use a wide array of third-party tools to copy data. For NFS, you can use workload tools such as [rsync](https://rsync.samba.org/examples.html) to copy and synchronize source data into an Azure NetApp Files volume. For SMB, you can use workloads [robocopy](/windows-server/administration/windows-commands/robocopy) in the same manner. These tools can also replicate file or folder permissions.
 
 The requirements for replicating an Azure NetApp Files volume to another Azure region are as follows:
 - Ensure Azure NetApp Files is available in the target Azure region.
@@ -67,7 +67,7 @@ The requirements for replicating an Azure NetApp Files volume to another Azure r
 
 ## Migration assistant
 
-The Azure NetApp Files [migration assistant](migrate-volumes.md)
+To migrate volumes hosted on ONTAP or Cloud Volumes ONTAP, you can use the Azure NetApp Files [migration assistant](migrate-volumes.md). Migration assistant utilizes SnapMirror technology to efficiently migrate your volumes including all metadata and snapshots for faster migrations with flexible cut-over capabilities.
 
 ### Does the Azure NetApp Files migration assistant support bandwidth throttling during data transfers?

[Three image files changed: +2.07 KB, +2.83 KB, -6.21 KB; previews not shown]

articles/backup/quick-backup-aks.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 title: "Quickstart: Configure an Azure Kubernetes Services cluster backup"
 description: Learn how to configure backup for an Azure Kubernetes Service (AKS) cluster, and then use Azure Backup to back up specific items in the cluster.
 ms.topic: quickstart
-ms.date: 01/09/2026
+ms.date: 01/28/2026
 ms.service: azure-backup
 ms.custom:
 - ignite-2024
