| API threat detection with [Defender for APIs](protect-with-defender-for-apis.md)| ✔️ | ✔️ | ❌ | ❌ | ❌ |
<sup>1</sup> Depends on how the gateway is deployed, but is the responsibility of the customer.<br/>
<sup>2</sup> Connectivity to the self-hosted gateway v2 [configuration endpoint](self-hosted-gateway-overview.md#fqdn-dependencies) requires DNS resolution of the endpoint hostname.<br/>
<sup>3</sup> CA root certificates for self-hosted gateway are managed separately per gateway.<br/>
<sup>4</sup> Client protocol needs to be enabled.<br/>
<sup>5</sup> Configure using the [forward-request](forward-request-policy.md) policy.<br/>
<sup>6</sup> Configure CA certificate details for backend certificate authentication in [backend](backends.md) settings.<br/>
<sup>7</sup> In preview for classic tier instances created starting January 2026. Contact support to enable for existing classic tier instances.
### Backend APIs
<sup>1</sup> In preview for classic tier instances created starting January 2026. Contact support to enable for existing classic tier instances.
### Policies
Managed and self-hosted gateways support all available [policies](api-management-policies.md) in policy definitions with the following exceptions. See the policy reference for details about each policy.
<!-- articles/api-management/azure-ai-foundry-api.md -->
ms.service: azure-api-management
author: dlepow
ms.author: danlep
ms.topic: how-to
ms.date: 03/24/2026
ms.update-cycle: 180-days
ms.collection: ce-skilling-ai-copilot
ms.custom: template-how-to, build-2024
Learn more about managing AI APIs in API Management:

* [AI gateway capabilities in Azure API Management](genai-gateway-capabilities.md)
## Client compatibility options
API Management supports two client compatibility options for AI APIs from Microsoft Foundry. When you import the API using the wizard, choose the option suitable for your model deployment. The option determines how clients call the API and how the API Management instance routes requests to the Foundry tool.
* **Azure OpenAI**: Manage Azure OpenAI in Microsoft Foundry model deployments.
    Clients call the deployment at an `/openai` endpoint such as `/openai/deployments/my-deployment/chat/completions`. Deployment name is passed in the request path. Use this option if your Foundry tool only includes Azure OpenAI model deployments.
* **Azure AI**: Manage model endpoints in Microsoft Foundry that are exposed through the [Azure AI Model Inference API](/rest/api/aifoundry/modelinference/).
    Clients call the deployment at a `/models` endpoint such as `/my-model/models/chat/completions`. Deployment name is passed in the request body. Use this option if you want flexibility to switch between models exposed through the Azure AI Model Inference API and those deployed in Azure OpenAI in Foundry Models.
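The difference between the two options can be sketched from the client's side. The following Python helper is a hypothetical illustration (your gateway hostname, deployment names, and exact routing are determined by your instance), showing where the deployment name travels in each case:

```python
def build_chat_request(compatibility: str, deployment: str, prompt: str):
    """Build the request path and JSON body for a chat completions call,
    depending on the client compatibility option chosen at import time."""
    body = {"messages": [{"role": "user", "content": prompt}]}
    if compatibility == "azure-openai":
        # Azure OpenAI option: deployment name is passed in the request path.
        path = f"/openai/deployments/{deployment}/chat/completions"
    elif compatibility == "azure-ai":
        # Azure AI option: model (deployment) name is passed in the request body.
        path = "/models/chat/completions"
        body["model"] = deployment
    else:
        raise ValueError(f"unknown compatibility option: {compatibility}")
    return path, body

path, body = build_chat_request("azure-openai", "my-deployment", "Hello")
# path == "/openai/deployments/my-deployment/chat/completions"
```

A client would append the returned path to the API's base URL on the API Management gateway and POST the body as JSON.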
## Prerequisites
* An existing API Management instance. [Create one if you haven't already](get-started-create-service-instance.md).
* A Foundry tool in your subscription with one or more models deployed. Examples include models deployed in Microsoft Foundry or Azure OpenAI.
## Import Microsoft Foundry API using the portal
Use the following steps to import an AI API to API Management.
When you import the API, API Management automatically configures:
* Operations for each of the API's REST API endpoints.
* A system-assigned identity with the necessary permissions to access the Foundry tool deployment.
* A [backend](backends.md) resource and a [set-backend-service](set-backend-service-policy.md) policy that direct API requests to the Azure AI Services endpoint.
* Authentication to the backend using the instance's system-assigned managed identity.
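As an illustration of the configuration the wizard produces, the generated inbound policy resembles the following sketch. This is an assumption-laden example, not the exact generated definition: the `backend-id` value is hypothetical, and you should inspect your API's policy editor for the actual output.

```xml
<policies>
    <inbound>
        <base />
        <!-- Illustrative: routes requests to the backend created by the import wizard -->
        <set-backend-service backend-id="my-foundry-backend" />
        <!-- Illustrative: authenticates to the backend with the system-assigned managed identity -->
        <authentication-managed-identity resource="https://cognitiveservices.azure.com" />
    </inbound>
</policies>
```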
To import a Microsoft Foundry API to API Management:
1. On the **Select AI Service** tab:
    1. Select the **Subscription** in which to search for Foundry Tools. To get information about the model deployments in a service, select the **deployments** link next to the service name.

        :::image type="content" source="media/azure-ai-foundry-api/deployments.png" alt-text="Screenshot of deployments for an AI service in the portal.":::

    1. Select a Foundry tool.
    1. Select **Next**.
67
66
1. On the **Configure API** tab:
    1. Enter a **Display name** and optional **Description** for the API.
    1. In **Base path**, enter a path that your API Management instance uses to access the deployment endpoint.
    1. Optionally, select one or more **Products** to associate with the API.
    1. In **Client compatibility**, select either of the following based on the types of clients you intend to support. See [Client compatibility options](#client-compatibility-options) for more information.
        * **Azure OpenAI**: Select this option if your clients only need to access Azure OpenAI in Microsoft Foundry model deployments.
        * **Azure AI**: Select this option if your clients need to access other models in Microsoft Foundry.
    1. Select **Next**.
:::image type="content" source="media/azure-ai-foundry-api/client-compatibility.png" alt-text="Screenshot of Microsoft Foundry API configuration in the portal.":::
1. On the **Manage token consumption** tab, optionally enter settings or accept defaults that define the following policies to help monitor and manage the API:
1. On the **Apply semantic caching** tab, optionally enter settings or accept defaults that define the policies to help optimize performance and reduce latency for the API:
    * [Enable semantic caching of responses](azure-openai-enable-semantic-caching.md)
1. On the **AI content safety** tab, optionally enter settings or accept defaults to configure the Azure AI Content Safety service to block prompts with unsafe content:
    * [Enforce content safety checks on LLM requests](llm-content-safety-policy.md)
1. Select **Review**.
1. After settings are validated, select **Create**.
## Test the AI API
To ensure that your AI API is working as expected, test it in the API Management test console.
1. Select the API you created in the previous step.
1. Select the **Test** tab.
1. Select an operation that's compatible with the model deployment.
    The page displays fields for parameters and headers.
1. Enter parameters and headers as needed. Depending on the operation, you might need to configure or update a **Request body**. Here's a basic example request body for a chat completions operation:
    ```json
    {
        "messages": [
            {
                "role": "user",
                "content": "Hello"
            }
        ]
    }
    ```
When the test is successful, the backend responds with a successful HTTP response code and some data. Appended to the response is token usage data to help you monitor and manage your language model token consumption.
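As a sketch of what reading that usage data looks like, the following Python snippet parses the `usage` object from an illustrative chat completions response. The payload shown is made up for the example; real responses contain additional fields.

```python
import json

# Illustrative response payload; field values are invented for this example.
response_text = """
{
  "choices": [{"message": {"role": "assistant", "content": "Hello!"}}],
  "usage": {"prompt_tokens": 9, "completion_tokens": 3, "total_tokens": 12}
}
"""

response = json.loads(response_text)
usage = response["usage"]
print(f"prompt={usage['prompt_tokens']} "
      f"completion={usage['completion_tokens']} "
      f"total={usage['total_tokens']}")
```

Token counts like these are what policies such as token limiting and token metrics operate on.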