Commit 74fde71

Merge branch 'main' into release-sre-agent
2 parents 58cb7b2 + 1447756 commit 74fde71

4 files changed: 55 additions & 20 deletions

articles/api-management/llm-content-safety-policy.md

Lines changed: 19 additions & 16 deletions
@@ -8,7 +8,7 @@ ms.service: azure-api-management
ms.collection: ce-skilling-ai-copilot
ms.custom:
ms.topic: reference
-ms.date: 09/03/2025
+ms.date: 03/23/2026
ms.update-cycle: 180-days
ms.author: danlep
---
@@ -17,16 +17,16 @@ ms.author: danlep

[!INCLUDE [api-management-availability-premium-dev-standard-basic-premiumv2-standardv2-basicv2](../../includes/api-management-availability-premium-dev-standard-basic-premiumv2-standardv2-basicv2.md)]

-The `llm-content-safety` policy enforces content safety checks on large language model (LLM) requests (prompts) by transmitting them to the [Azure AI Content Safety](/azure/ai-services/content-safety/overview) service before sending to the backend LLM API. When the policy is enabled, and Azure AI Content Safety detects malicious content, API Management blocks the request and returns a `403` error code.
+The `llm-content-safety` policy enforces content safety checks on large language model (LLM) requests (prompts) or responses (completions) by sending them to the [Azure AI Content Safety](/azure/ai-services/content-safety/overview) service. When you enable the policy and Azure AI Content Safety detects malicious content, API Management blocks the request or response and returns a `403` error code.

> [!NOTE]
-> The terms _category_ and _categories_ used in API Management are synonymous with _harm category_ and _harm categories_ in the Azure AI Content Safety service. Details can be found on the [Harm categories in Azure AI Content Safety](/azure/ai-services/content-safety/concepts/harm-categories) page.
+> The terms _category_ and _categories_ used in API Management are synonymous with _harm category_ and _harm categories_ in the Azure AI Content Safety service. For more information, see [Harm categories in Azure AI Content Safety](/azure/ai-services/content-safety/concepts/harm-categories).

Use the policy in scenarios such as the following:

-* Block requests that contain predefined categories of harmful content or hate speech
-* Apply custom blocklists to prevent specific content from being sent
-* Shield against prompts that match attack patterns
+* Block requests or responses that contain predefined categories of harmful content or hate speech.
+* Apply custom blocklists to prevent specific content from being sent or received.
+* Shield against prompts that match attack patterns.

[!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]


@@ -41,7 +41,7 @@ Use the policy in scenarios such as the following:
## Policy statement

```xml
-<llm-content-safety backend-id="name of backend entity" shield-prompt="true | false" enforce-on-completions="true | false">
+<llm-content-safety backend-id="name of backend entity" shield-prompt="true | false" enforce-on-completions="true | false" window-size="integer" window-overlap-size="integer">
    <categories output-type="FourSeverityLevels | EightSeverityLevels">
        <category name="Hate | SelfHarm | Sexual | Violence" threshold="integer" />
        <!-- If there are multiple categories, add more category elements -->
@@ -60,8 +60,10 @@ Use the policy in scenarios such as the following:
| Attribute | Description | Required | Default |
| -------------- | ----------------------------------------------------------------------------------------------------- | -------- | ------- |
| backend-id | Identifier (name) of the Azure AI Content Safety backend to route content-safety API calls to. Policy expressions are allowed. | Yes | N/A |
-| shield-prompt | If set to `true`, content is checked for user attacks. Otherwise, skip this check. Policy expressions are allowed. | No | `false` |
-| enforce-on-completions| If set to `true`, content safety checks are enforced on chat completions for response validation. Otherwise, skip this check. Policy expressions are allowed. | No | `false` |
+| shield-prompt | If set to `true`, check content for user attacks. Otherwise, skip this check. Policy expressions are allowed. | No | `false` |
+| enforce-on-completions | If set to `true` when the policy is configured in the inbound section, content safety checks are also enforced on chat completions for response validation. When the policy is configured in the outbound section, this attribute is ignored. Policy expressions are allowed. | No | `false` |
+| window-size | The size, in characters, of the text window that the policy sends to Azure AI Content Safety for evaluation. Configurable only for responses; for requests, the default window size is always used. Policy expressions are allowed. | No | 10,000 characters (Azure AI Content Safety limit) |
+| window-overlap-size | The size, in characters, of the overlap between text windows when the content is split by using the `window-size` attribute. If you don't specify a value, windows don't overlap. Policy expressions are allowed. | No | N/A |
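To make the windowing semantics concrete, here's a minimal sketch (not the gateway's actual implementation; the function name and example values are illustrative assumptions) of how text longer than `window-size` could be split into overlapping windows:

```python
def split_into_windows(text: str, window_size: int = 10_000, overlap: int = 0) -> list[str]:
    """Illustrative only: split text into windows of up to window_size
    characters, with `overlap` characters shared between consecutive
    windows (overlap=0 mirrors the default of no overlap)."""
    if overlap >= window_size:
        raise ValueError("window-overlap-size must be smaller than window-size")
    windows = []
    start = 0
    while start < len(text):
        windows.append(text[start:start + window_size])
        if start + window_size >= len(text):
            break  # the current window already reaches the end of the text
        start += window_size - overlap
    return windows

# With window_size=4 and overlap=1, "abcdefghij" splits into
# ["abcd", "defg", "ghij"]: each window repeats the last character
# of the previous one.
```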
## Elements
@@ -83,24 +85,25 @@ Use the policy in scenarios such as the following:
| Attribute | Description | Required | Default |
| -------------- | ----------------------------------------------------------------------------------------------------- | -------- | ------- |
| name | Specifies the name of this category. The attribute must have one of the following values: `Hate`, `SelfHarm`, `Sexual`, `Violence`. Policy expressions are allowed. | Yes | N/A |
-| threshold | Specifies the threshold value for this category at which request are blocked. Requests with content severities less than the threshold aren't blocked. The value must be between 0 (most restrictive) and 7 (least restrictive). Policy expressions are allowed. | Yes | N/A |
+| threshold | Specifies the threshold value for this category at which requests or responses are blocked. Requests or responses with content severities less than the threshold aren't blocked. The value must be between 0 (most restrictive) and 7 (least restrictive). Policy expressions are allowed. | Yes | N/A |


## Usage

-- [**Policy sections:**](./api-management-howto-policies.md#understanding-policy-configuration) inbound
+- [**Policy sections:**](./api-management-howto-policies.md#understanding-policy-configuration) inbound, outbound
- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API
- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace

### Usage notes

-* The policy runs on a concatenation of all text content in a completion or chat completion request.
-* If the request exceeds the character limit of Azure AI Content Safety, a `403` error is returned.
-* This policy can be used multiple times per policy definition.
+* Configure the policy in the inbound section to check requests and in the outbound section to check responses.
+* For streaming responses, the stream handler buffers events in a sliding window and, if a content safety violation is detected, stops forwarding further events to the client. A `403` error isn't returned in this case.
+* If the request or response exceeds the character limit of Azure AI Content Safety, the policy returns a `403` error.
+* You can use this policy multiple times per policy definition.
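For illustration, a policy placed in the outbound section to check responses might look like the following sketch; the backend name `content-safety-backend` and the numeric values are assumptions, not values from this article:

```xml
<outbound>
    <llm-content-safety backend-id="content-safety-backend" window-size="8000" window-overlap-size="500">
        <categories output-type="FourSeverityLevels">
            <category name="Hate" threshold="4" />
            <category name="Violence" threshold="4" />
        </categories>
    </llm-content-safety>
</outbound>
```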

## Example

-The following example enforces content safety checks on LLM requests using the Azure AI Content Safety service. The policy blocks requests that contain speech in the `Hate` or `Violence` category with a severity level of 4 or higher. In other words, the filter allows levels 0-3 to continue whereas levels 4-7 are blocked. Raising a category's threshold raises the tolerance and potentially decreases the number of blocked requests. Lowering the threshold lowers the tolerance and potentially increases the number of blocked requests. The `shield-prompt` attribute is set to `true` to check for adversarial attacks.
+The following example, when configured in the inbound section, enforces content safety checks on LLM requests by using the Azure AI Content Safety service. The policy blocks requests that contain speech in the `Hate` or `Violence` category with a severity level of 4 or higher. In other words, the filter allows levels 0-3 to continue, whereas levels 4-7 are blocked. Raising a category's threshold raises the tolerance and potentially decreases the number of blocked requests. Lowering the threshold lowers the tolerance and potentially increases the number of blocked requests. The `shield-prompt` attribute is set to `true` to check for adversarial attacks.

```xml
<policies>
@@ -117,7 +120,7 @@ The following example enforces content safety checks on LLM requests using the A

## Related policies

-* [Content validation](api-management-policies.md#content-validation)
+* [Content validation](api-management-policies.md#content-validation) policies
* [llm-token-limit](llm-token-limit-policy.md) policy
* [llm-emit-token-metric](llm-emit-token-metric-policy.md) policy


articles/application-gateway/for-containers/application-gateway-for-containers-components.md

Lines changed: 30 additions & 1 deletion
@@ -5,7 +5,7 @@ services: application-gateway
author: mbender-ms
ms.service: azure-appgw-for-containers
ms.topic: concept-article
-ms.date: 12/05/2025
+ms.date: 3/25/2026
ms.author: mbender
# Customer intent: "As a cloud architect, I want to understand the components of Application Gateway for Containers, so that I can effectively configure and manage traffic routing to backend services in my cloud deployment."
---
@@ -138,3 +138,32 @@ Application Gateway for Containers enforces the following timeouts as it initiat

> [!NOTE]
> Request timeout strictly enforces the request to complete in the defined time irrespective if data is actively streaming or the request is idle. For example, if you're serving large file downloads and you expect transfers to take greater than 60 seconds due to size or slow transfer rates, consider increasing the request timeout value or setting it to 0.
+
+## Connectivity
+
+The following connectivity requirements must be met for successful operation of Application Gateway for Containers.
+
+### ALB controller outbound connectivity
+
+| Endpoint | Port | Purpose |
+|--|--|--|
+| management.azure.com | TCP 443 | Azure Resource Manager API |
+| login.microsoftonline.com | TCP 443 | Microsoft Entra ID authentication |
+| *.oic.prod-aks.azure.com | TCP 443 | AKS OIDC issuer (workload identity) |
+| *.alb.azure.com | TCP 443 | Configuration endpoint |
+| mcr.microsoft.com | TCP 443 | Container images for Helm deployment |
+| DNS resolution | UDP 53 | In a default AKS deployment, ALB Controller queries CoreDNS/kube-dns within the cluster |
+
+### ALB controller inbound connectivity
+
+> [!NOTE]
+> These inbound ports are exposed via a ClusterIP service and aren't published directly to the internet. They're exposed to help with troubleshooting and diagnostics, and can be blocked with a network policy if desired.
+
+| Port | Name | Purpose |
+|--|--|--|
+| TCP 8000 | backend health | Backend health endpoint (/backendHealth) |
+| TCP 8001 | metrics | Prometheus metrics endpoint (/metrics) |
+
+### Frontend connectivity
+
+Each frontend for Application Gateway for Containers is in the format `*.fzXX.alb.azure.com`, where XX is a number from 0 to 99. Frontends can listen only on ports 443 and 80.
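As a quick illustration of that naming format, the following sketch (the helper name and regex are hypothetical, assuming XX is written as one or two digits) checks whether a hostname matches the documented frontend pattern:

```python
import re

# Hypothetical helper: matches <label>.fzXX.alb.azure.com, where XX is 0-99.
FRONTEND_FQDN = re.compile(r"^[a-z0-9]([a-z0-9-]*[a-z0-9])?\.fz\d{1,2}\.alb\.azure\.com$")

def looks_like_frontend(hostname: str) -> bool:
    """Return True if hostname matches the documented frontend FQDN format."""
    return FRONTEND_FQDN.match(hostname.lower()) is not None
```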

articles/data-factory/connector-microsoft-fabric-lakehouse.md

Lines changed: 4 additions & 1 deletion
@@ -6,7 +6,7 @@ ms.author: jianleishen
author: jianleishen
ms.subservice: data-movement
ms.topic: conceptual
-ms.date: 10/23/2025
+ms.date: 01/30/2026
ms.custom:
  - synapse
  - sfi-image-nochange
@@ -692,6 +692,9 @@ For more information, see the [source transformation](data-flow-source.md) and [

To use Microsoft Fabric Lakehouse Files dataset as a source or sink dataset in mapping data flow, go to the following sections for the detailed configurations.

+> [!NOTE]
+> Mapping data flows currently support service principal authentication only.
+
#### Microsoft Fabric Lakehouse Files as a source or sink type

Microsoft Fabric Lakehouse connector supports the following file formats. Refer to each article for format-based settings.

articles/iot-operations/reference/observability-metrics-opcua-broker.md

Lines changed: 2 additions & 2 deletions
@@ -6,7 +6,7 @@ ms.author: sethm
ms.topic: reference
ms.custom:
  - ignite-2023
-ms.date: 10/22/2024
+ms.date: 03/25/2026

# CustomerIntent: As an IT admin or operator, I want to be able to monitor and visualize data
# on the health of my industrial assets and edge environment.
@@ -112,4 +112,4 @@ Emitted by all components: Supervisor, OPC UA Connector, and OPC UA Commander.

## Related content

-- [Configure observability](../configure-observability-monitoring/howto-configure-observability.md)
+[Configure observability](../configure-observability-monitoring/howto-configure-observability.md)
