@@ -23,6 +23,8 @@ Use the `azure-openai-semantic-cache-lookup` policy to perform cache lookup of r
 > [!NOTE]
 > * This policy must have a corresponding [Cache responses to Azure OpenAI API requests](azure-openai-semantic-cache-store-policy.md) policy.
 > * For prerequisites and steps to enable semantic caching, see [Enable semantic caching for LLM APIs in Azure API Management](azure-openai-enable-semantic-caching.md).
+> * Because semantic caching returns responses based on similarity (not exact match), it can surface responses that are incorrect, outdated, or unsafe for the current request. Evaluate this feature carefully for your workload and include safeguards.
@@ -23,6 +23,8 @@ The `azure-openai-semantic-cache-store` policy caches responses to Azure OpenAI
 > [!NOTE]
 > * This policy must have a corresponding [Get cached responses to Azure OpenAI API requests](azure-openai-semantic-cache-lookup-policy.md) policy.
 > * For prerequisites and steps to enable semantic caching, see [Enable semantic caching for Azure OpenAI APIs in Azure API Management](azure-openai-enable-semantic-caching.md).
+> * Because semantic caching returns responses based on similarity (not exact match), it can surface responses that are incorrect, outdated, or unsafe for the current request. Evaluate this feature carefully for your workload and include safeguards.
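For reference, a minimal sketch of how the lookup and store policies pair up in a policy definition (the backend ID, score threshold, and cache duration here are illustrative assumptions, not values taken from this change):

```xml
<policies>
    <inbound>
        <base />
        <!-- Look up a semantically similar cached response before calling the backend.
             "embeddings-deployment" is an assumed backend ID for an embeddings model. -->
        <azure-openai-semantic-cache-lookup
            score-threshold="0.05"
            embeddings-backend-id="embeddings-deployment"
            embeddings-backend-auth="system-assigned">
            <vary-by>@(context.Subscription.Id)</vary-by>
        </azure-openai-semantic-cache-lookup>
    </inbound>
    <outbound>
        <!-- Cache the backend response (60 seconds here) so later similar prompts can reuse it -->
        <azure-openai-semantic-cache-store duration="60" />
        <base />
    </outbound>
</policies>
```

A tighter `score-threshold` reduces the chance of surfacing a mismatched cached response, at the cost of fewer cache hits.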
articles/api-management/ip-filter-policy.md: 4 additions & 2 deletions
@@ -6,7 +6,7 @@ author: dlepow
 ms.service: azure-api-management
 ms.topic: reference
-ms.date: 07/23/2024
+ms.date: 02/23/2026
 ms.author: danlep
 ---
 # Restrict caller IPs
@@ -49,7 +49,9 @@ The `ip-filter` policy filters (allows/denies) calls from specific IP addresses
 
 ### Usage notes
 
-If you configure this policy at more than one scope, IP filtering is applied in the order of [policy evaluation](set-edit-policies.md#use-base-element-to-set-policy-evaluation-order) in your policy definition.
+- If you configure this policy at more than one scope, IP filtering is applied in the order of [policy evaluation](set-edit-policies.md#use-base-element-to-set-policy-evaluation-order) in your policy definition.
+
+- If `action` is set to `allow`, requests that don't match any `address` or `address-range` are denied. If `action` is set to `forbid`, requests that don't match any `address` or `address-range` are allowed.
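A minimal sketch of the allow behavior described in that note (the IP values are documentation-range placeholders, not addresses from this change):

```xml
<ip-filter action="allow">
    <!-- A single allowed caller IP (placeholder value) -->
    <address>203.0.113.10</address>
    <!-- An allowed range (placeholder values) -->
    <address-range from="203.0.113.32" to="203.0.113.63" />
</ip-filter>
```

With `action="allow"`, any caller outside these entries is denied; flipping to `action="forbid"` inverts the behavior, denying only the listed addresses.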
@@ -23,6 +23,7 @@ Use the `llm-semantic-cache-lookup` policy to perform cache lookup of responses
 > [!NOTE]
 > * This policy must have a corresponding [Cache responses to large language model API requests](llm-semantic-cache-store-policy.md) policy.
 > * For prerequisites and steps to enable semantic caching, see [Enable semantic caching for LLM APIs in Azure API Management](azure-openai-enable-semantic-caching.md).
+> * Because semantic caching returns responses based on similarity (not exact match), it can surface responses that are incorrect, outdated, or unsafe for the current request. Evaluate this feature carefully for your workload and include safeguards.
articles/api-management/llm-semantic-cache-store-policy.md: 2 additions & 1 deletion
@@ -8,7 +8,7 @@ ms.service: azure-api-management
 ms.collection: ce-skilling-ai-copilot
 ms.custom:
 ms.topic: reference
-ms.date: 12/13/2024
+ms.date: 02/23/2026
 ms.update-cycle: 180-days
 ms.author: danlep
 ---
@@ -22,6 +22,7 @@ The `llm-semantic-cache-store` policy caches responses to chat completion API re
 > [!NOTE]
 > * This policy must have a corresponding [Get cached responses to large language model API requests](llm-semantic-cache-lookup-policy.md) policy.
 > * For prerequisites and steps to enable semantic caching, see [Enable semantic caching for Azure OpenAI APIs in Azure API Management](azure-openai-enable-semantic-caching.md).
+> * Because semantic caching returns responses based on similarity (not exact match), it can surface responses that are incorrect, outdated, or unsafe for the current request. Evaluate this feature carefully for your workload and include safeguards.
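As with the Azure OpenAI variants, the `llm-` lookup and store policies are deployed as a pair. A minimal sketch (backend ID, threshold, and duration are illustrative assumptions):

```xml
<policies>
    <inbound>
        <base />
        <!-- Look up a semantically similar cached response; "embeddings-deployment"
             is an assumed backend ID for an embeddings model. -->
        <llm-semantic-cache-lookup
            score-threshold="0.05"
            embeddings-backend-id="embeddings-deployment"
            embeddings-backend-auth="system-assigned">
            <vary-by>@(context.Subscription.Id)</vary-by>
        </llm-semantic-cache-lookup>
    </inbound>
    <outbound>
        <!-- Cache the chat completion response for reuse by later similar prompts -->
        <llm-semantic-cache-store duration="60" />
        <base />
    </outbound>
</policies>
```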
-The `validate-content` policy validates the size or content of a request or response body against one or more [supported schemas](#schemas-for-content-validation).
+The `validate-content` policy validates the size or content (or both) of a request or response body against one or more [supported schemas](#schemas-for-content-validation).
 
 The following table shows the schema formats and request or response content types that the policy supports. Content type values are case insensitive.
@@ -161,6 +161,74 @@ In the following example, API Management interprets any request as a request wit
 </validate-content>
 ```
 
+### Complete policy example with content validation
+
+The following example shows a complete policy document for a customer order API that uses `validate-content` to validate incoming requests and outgoing responses. The policy validates that customer order payloads conform to the `customer-order-schema` (added to API Management) before forwarding them to the backend, and also validates that the backend's order confirmation matches the expected schema, but only detects issues rather than blocking them.
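A sketch of what such a policy document might look like, under stated assumptions: the error variable names and the 100 KB size limit are illustrative, and the response is validated against the same `customer-order-schema` for simplicity (a real confirmation payload would likely use its own schema):

```xml
<policies>
    <inbound>
        <base />
        <!-- Block (prevent) requests whose body is too large, has an unexpected
             content type, or fails validation against customer-order-schema -->
        <validate-content unspecified-content-type-action="prevent" max-size="102400" size-exceeded-action="prevent" errors-variable-name="requestBodyValidation">
            <content type="application/json" validate-as="json" action="prevent" schema-id="customer-order-schema" />
        </validate-content>
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
        <!-- Only record (detect) validation issues in the backend's confirmation;
             the response is still returned to the caller -->
        <validate-content unspecified-content-type-action="detect" max-size="102400" size-exceeded-action="detect" errors-variable-name="responseBodyValidation">
            <content type="application/json" validate-as="json" action="detect" schema-id="customer-order-schema" />
        </validate-content>
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>
```

The `detect`-only outbound section matches the lead-in's intent: backend issues are surfaced in the named errors variable (for logging or alerting) without blocking responses.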