Commit 5dd5699

Merge pull request #304471 from MicrosoftDocs/main
Auto Publish – main to live - 2025-08-20 17:00 UTC
2 parents a4c9798 + 92fd03b commit 5dd5699

50 files changed

Lines changed: 628 additions & 363 deletions


articles/api-management/azure-openai-token-limit-policy.md

Lines changed: 4 additions & 3 deletions
@@ -9,7 +9,7 @@ ms.collection: ce-skilling-ai-copilot
 ms.custom:
   - build-2024
 ms.topic: reference
-ms.date: 02/18/2025
+ms.date: 08/14/2025
 ms.update-cycle: 180-days
 ms.author: danlep
 ---
@@ -54,8 +54,8 @@ By relying on token usage metrics returned from the OpenAI endpoint, the policy
 | estimate-prompt-tokens | Boolean value that determines whether to estimate the number of tokens required for a prompt: <br> - `true`: estimate the number of tokens based on the prompt schema in the API; may reduce performance. <br> - `false`: don't estimate prompt tokens. <br><br>When set to `false`, the remaining tokens per `counter-key` are calculated using the actual token usage from the model's response. This could result in prompts being sent to the model that exceed the token limit. In such a case, the overage is detected in the response, and all subsequent requests are blocked by the policy until the token limit frees up again. | Yes | N/A |
 | retry-after-header-name | The name of a custom response header whose value is the recommended retry interval in seconds after the specified `tokens-per-minute` or `token-quota` is exceeded. Policy expressions aren't allowed. | No | `Retry-After` |
 | retry-after-variable-name | The name of a variable that stores the recommended retry interval in seconds after the specified `tokens-per-minute` or `token-quota` is exceeded. Policy expressions aren't allowed. | No | N/A |
-| remaining-quota-tokens-header-name | The name of a response header whose value after each policy execution is the number of remaining tokens corresponding to `token-quota` allowed for the `token-quota-period`. Policy expressions aren't allowed. | No | N/A |
-| remaining-quota-tokens-variable-name | The name of a variable that after each policy execution stores the number of remaining tokens corresponding to `token-quota` allowed for the `token-quota-period`. Policy expressions aren't allowed. | No | N/A |
+| remaining-quota-tokens-header-name | The name of a response header whose value after each policy execution is the estimated number of remaining tokens corresponding to `token-quota` allowed for the `token-quota-period`. Policy expressions aren't allowed. | No | N/A |
+| remaining-quota-tokens-variable-name | The name of a variable that after each policy execution stores the estimated number of remaining tokens corresponding to `token-quota` allowed for the `token-quota-period`. Policy expressions aren't allowed. | No | N/A |
 | remaining-tokens-header-name | The name of a response header whose value after each policy execution is the number of remaining tokens corresponding to `tokens-per-minute` allowed for the time interval. Policy expressions aren't allowed. | No | N/A |
 | remaining-tokens-variable-name | The name of a variable that after each policy execution stores the number of remaining tokens corresponding to `tokens-per-minute` allowed for the time interval. Policy expressions aren't allowed. | No | N/A |
 | tokens-consumed-header-name | The name of a response header whose value is the number of tokens consumed by both the prompt and completion. The header is added to the response only after the response is received from the backend. Policy expressions aren't allowed. | No | N/A |
@@ -73,6 +73,7 @@ By relying on token usage metrics returned from the OpenAI endpoint, the policy
 * This policy can optionally be configured when adding an API from Azure OpenAI using the portal.
 * Where available, when `estimate-prompt-tokens` is set to `false`, values in the usage section of the response from the Azure OpenAI API are used to determine token usage.
 * Certain Azure OpenAI endpoints support streaming of responses. When `stream` is set to `true` in the API request to enable streaming, prompt tokens are always estimated, regardless of the value of the `estimate-prompt-tokens` attribute. Completion tokens are also estimated when responses are streamed.
+* The value of `remaining-quota-tokens-variable-name` or `remaining-quota-tokens-header-name` is an estimate for informational purposes and could be larger than expected based on actual token consumption. The value becomes more accurate as the quota is approached.
 * For models that accept image input, image tokens are generally counted by the backend language model and included in limit and quota calculations. However, when streaming is used or `estimate-prompt-tokens` is set to `true`, the policy currently over-counts each image as a maximum count of 1200 tokens.
 * [!INCLUDE [api-management-rate-limit-key-scope](../../includes/api-management-rate-limit-key-scope.md)]
 * [!INCLUDE [api-management-token-limit-gateway-counts](../../includes/api-management-token-limit-gateway-counts.md)]
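For context, the attributes in the table above are set on a single `azure-openai-token-limit` policy element in the inbound section. The following is an illustrative sketch only, not taken from this commit: the attribute values are examples, and the `token-quota-period` value shown is an assumption; consult the policy reference for the full schema and valid values.

```xml
<policies>
    <inbound>
        <!-- Per-subscription rate limit and quota on Azure OpenAI token usage -->
        <azure-openai-token-limit
            counter-key="@(context.Subscription.Id)"
            tokens-per-minute="5000"
            token-quota="100000"
            token-quota-period="Monthly"
            estimate-prompt-tokens="false"
            retry-after-header-name="Retry-After"
            remaining-tokens-header-name="remaining-tokens"
            remaining-quota-tokens-header-name="remaining-quota-tokens" />
    </inbound>
</policies>
```

With a configuration like this, callers could inspect the `remaining-quota-tokens` response header, keeping in mind (per the note above) that its value is an estimate that becomes more accurate as the quota is approached.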

articles/api-management/llm-token-limit-policy.md

Lines changed: 4 additions & 3 deletions
@@ -8,7 +8,7 @@ ms.service: azure-api-management
 ms.collection: ce-skilling-ai-copilot
 ms.custom:
 ms.topic: reference
-ms.date: 02/18/2025
+ms.date: 08/14/2025
 ms.update-cycle: 180-days
 ms.author: danlep
 ---
@@ -53,8 +53,8 @@ By relying on token usage metrics returned from the LLM endpoint, the policy can
 | estimate-prompt-tokens | Boolean value that determines whether to estimate the number of tokens required for a prompt: <br> - `true`: estimate the number of tokens based on the prompt schema in the API; may reduce performance. <br> - `false`: don't estimate prompt tokens. <br><br>When set to `false`, the remaining tokens per `counter-key` are calculated using the actual token usage from the model's response. This could result in prompts being sent to the model that exceed the token limit. In such a case, the overage is detected in the response, and all subsequent requests are blocked by the policy until the token limit frees up again. | Yes | N/A |
 | retry-after-header-name | The name of a custom response header whose value is the recommended retry interval in seconds after the specified `tokens-per-minute` or `token-quota` is exceeded. Policy expressions aren't allowed. | No | `Retry-After` |
 | retry-after-variable-name | The name of a variable that stores the recommended retry interval in seconds after the specified `tokens-per-minute` or `token-quota` is exceeded. Policy expressions aren't allowed. | No | N/A |
-| remaining-quota-tokens-header-name | The name of a response header whose value after each policy execution is the number of remaining tokens corresponding to `token-quota` allowed for the `token-quota-period`. Policy expressions aren't allowed. | No | N/A |
-| remaining-quota-tokens-variable-name | The name of a variable that after each policy execution stores the number of remaining tokens corresponding to `token-quota` allowed for the `token-quota-period`. Policy expressions aren't allowed. | No | N/A |
+| remaining-quota-tokens-header-name | The name of a response header whose value after each policy execution is the estimated number of remaining tokens corresponding to `token-quota` allowed for the `token-quota-period`. Policy expressions aren't allowed. | No | N/A |
+| remaining-quota-tokens-variable-name | The name of a variable that after each policy execution stores the estimated number of remaining tokens corresponding to `token-quota` allowed for the `token-quota-period`. Policy expressions aren't allowed. | No | N/A |
 | remaining-tokens-header-name | The name of a response header whose value after each policy execution is the number of remaining tokens corresponding to `tokens-per-minute` allowed for the time interval. Policy expressions aren't allowed. | No | N/A |
 | remaining-tokens-variable-name | The name of a variable that after each policy execution stores the number of remaining tokens corresponding to `tokens-per-minute` allowed for the time interval. Policy expressions aren't allowed. | No | N/A |
 | tokens-consumed-header-name | The name of a response header whose value is the number of tokens consumed by both the prompt and completion. The header is added to the response only after the response is received from the backend. Policy expressions aren't allowed. | No | N/A |
@@ -71,6 +71,7 @@ By relying on token usage metrics returned from the LLM endpoint, the policy can
 * This policy can be used multiple times per policy definition.
 * Where available, when `estimate-prompt-tokens` is set to `false`, values in the usage section of the response from the LLM API are used to determine token usage.
 * Certain LLM endpoints support streaming of responses. When `stream` is set to `true` in the API request to enable streaming, prompt tokens are always estimated, regardless of the value of the `estimate-prompt-tokens` attribute.
+* The value of `remaining-quota-tokens-variable-name` or `remaining-quota-tokens-header-name` is an estimate for informational purposes and could be larger than expected based on actual token consumption. The value becomes more accurate as the quota is approached.
 * For models that accept image input, image tokens are generally counted by the backend language model and included in limit and quota calculations. However, when streaming is used or `estimate-prompt-tokens` is set to `true`, the policy currently over-counts each image as a maximum count of 1200 tokens.
 * [!INCLUDE [api-management-rate-limit-key-scope](../../includes/api-management-rate-limit-key-scope.md)]
 * [!INCLUDE [api-management-token-limit-gateway-counts](../../includes/api-management-token-limit-gateway-counts.md)]
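The `llm-token-limit` policy uses the same attribute set as its Azure OpenAI counterpart. As a hedged sketch (values are illustrative and the `token-quota-period` value is an assumption; see the policy reference for the full schema), a per-subscription limit with quota tracking might look like:

```xml
<policies>
    <inbound>
        <!-- Per-subscription rate limit and quota on LLM token usage -->
        <llm-token-limit
            counter-key="@(context.Subscription.Id)"
            tokens-per-minute="5000"
            token-quota="100000"
            token-quota-period="Monthly"
            estimate-prompt-tokens="false"
            retry-after-header-name="Retry-After"
            remaining-tokens-header-name="remaining-tokens"
            remaining-quota-tokens-header-name="remaining-quota-tokens" />
    </inbound>
</policies>
```

Because the policy can appear multiple times per policy definition, separate instances with different `counter-key` expressions (for example, subscription ID and client IP) could enforce limits at more than one scope.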

articles/application-gateway/application-gateway-faq.yml

Lines changed: 10 additions & 6 deletions
@@ -121,12 +121,6 @@ sections:
 
           Most deployments that use the v2 SKU take around 6 minutes to provision. However, the process can take longer depending on the type of deployment. For example, deployments across multiple availability zones with many instances can take more than 6 minutes.
 
-      - question: How does Application Gateway handle routine maintenance?
-        answer: |
-          Updates initiated to Application Gateway are applied one [update domain](/azure/virtual-machines/availability-set-overview#how-do-availability-sets-work) at a time. As each update domain's instances are being updated, the remaining instances in other update domains continue to serve traffic<sup>1</sup>. Active connections are gracefully drained from the instances being updated for up to 5 minutes to help establish connectivity to instances in a different update domain before the update begins. During the update, Application Gateway temporarily runs at reduced maximum capacity, which is determined by the number of instances configured. The update process proceeds to the next set of instances only if the current set was upgraded successfully.
-
-          <sup>1</sup> We recommend a minimum instance count of 2 for Application Gateway v1 SKU deployments to ensure at least one instance can serve traffic while updates are applied.
-
       - question: Can I use Exchange Server as a backend with Application Gateway?
         answer: |
           Application Gateway supports [TLS/TCP protocol proxy](tcp-tls-proxy-overview.md) through its Layer 4 proxy in **Preview**.
@@ -186,6 +180,16 @@ sections:
       - question: Can I change instance size from medium to large without disruption?
        answer: Yes.
 
+  - name: Maintenance
+    questions:
+      - question: How does Application Gateway handle routine maintenance?
+        answer: |
+          Updates initiated to Application Gateway are applied one [update domain](/azure/virtual-machines/availability-set-overview#how-do-availability-sets-work) at a time. As each update domain's instances are being updated, the remaining instances in other update domains continue to serve traffic. Active connections are gracefully drained from the instances being updated for up to 5 minutes to help establish connectivity to instances in a different update domain before the update begins. The update process proceeds to the next set of instances only if the current set was upgraded successfully.
+
+          Azure Application Gateway also supports [rolling upgrades with MaxSurge](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-maxsurge), a capability of Azure Virtual Machine Scale Sets that provisions new instances during rolling upgrades without taking existing ones offline. By integrating MaxSurge into the upgrade process, customers can transition to newer gateway versions without capacity degradation. MaxSurge is automatically enabled on Application Gateway and requires no configuration.
+
+          **Note:** Additional IP space is required to provision the temporary instances used by MaxSurge. If sufficient IP space isn't available during an update, Application Gateway falls back to the traditional upgrade method, which may result in reduced maximum capacity based on the number of instances.
+
   - name: Configuration
     questions:
       - question: Is Application Gateway always deployed in a virtual network?

articles/azure-app-configuration/concept-app-configuration-event.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ author: jimmyca
 ms.custom: devdivchpfy22
 ms.author: jimmyca
 ms.date: 08/30/2022
-ms.topic: article
+ms.topic: concept-article
 ms.service: azure-app-configuration
 
 ---

articles/azure-app-configuration/concept-customer-managed-keys.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ author: maud-lv
 ms.author: malev
 ms.date: 02/20/2024
 ms.custom: devdivchpfy22, devx-track-azurecli
-ms.topic: conceptual
+ms.topic: concept-article
 ms.service: azure-app-configuration
 ---
 # Use customer-managed keys to encrypt your App Configuration data

articles/azure-app-configuration/concept-disaster-recovery.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -4,7 +4,7 @@ description: Learn how to implement resiliency and disaster recovery with Azure
44
author: avanigupta
55
ms.author: avgupta
66
ms.service: azure-app-configuration
7-
ms.topic: conceptual
7+
ms.topic: concept-article
88
ms.date: 02/16/2024
99
---
1010

articles/azure-app-configuration/concept-enable-rbac.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -4,7 +4,7 @@ description: Use Microsoft Entra ID and Azure role-based access control (RBAC) t
44
author: zhenlan
55
ms.author: zhenlwa
66
ms.date: 10/05/2024
7-
ms.topic: conceptual
7+
ms.topic: concept-article
88
ms.service: azure-app-configuration
99

1010
---

articles/azure-app-configuration/concept-experimentation.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ ms.author: malev
 ms.service: azure-app-configuration
 ms.custom:
   - build-2024
-ms.topic: conceptual
+ms.topic: concept-article
 ms.date: 07/09/2025
 ms.update-cycle: 180-days
 ms.collection: ce-skilling-ai-copilot

articles/azure-app-configuration/concept-feature-management.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ author: maud-lv
 ms.author: malev
 ms.service: azure-app-configuration
 ms.custom: devdivchpfy22
-ms.topic: conceptual
+ms.topic: concept-article
 ms.date: 03/24/2025
 ---
 

articles/azure-app-configuration/concept-geo-replication.md

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ description: Details of the geo-replication feature in Azure App Configuration.
 author: maud-lv
 ms.author: malev
 ms.service: azure-app-configuration
-ms.topic: conceptual
+ms.topic: concept-article
 ms.date: 06/04/2025
 ---
 
