`articles/api-management/azure-ai-foundry-api.md` (+20, -21)
```diff
@@ -5,7 +5,7 @@ ms.service: azure-api-management
 author: dlepow
 ms.author: danlep
 ms.topic: how-to
-ms.date: 10/07/2025
+ms.date: 03/24/2026
 ms.update-cycle: 180-days
 ms.collection: ce-skilling-ai-copilot
 ms.custom: template-how-to, build-2024
```
```diff
@@ -21,32 +21,31 @@ Learn more about managing AI APIs in API Management:
 * [AI gateway capabilities in Azure API Management](genai-gateway-capabilities.md)
-
 ## Client compatibility options
 
 API Management supports two client compatibility options for AI APIs from Microsoft Foundry. When you import the API using the wizard, choose the option suitable for your model deployment. The option determines how clients call the API and how the API Management instance routes requests to the Foundry tool.
 
-* **Azure OpenAI** - Manage Azure OpenAI in Microsoft Foundry model deployments.
+* **Azure OpenAI**: Manage Azure OpenAI in Microsoft Foundry model deployments.
 
-    Clients call the deployment at an `/openai` endpoint such as `/openai/deployments/my-deployment/chat/completions`. Deployment name is passed in the request path. Use this option if your Foundry tool only includes Azure OpenAI model deployments.
+    Clients call the deployment at an `/openai` endpoint such as `/openai/deployments/my-deployment/chat/completions`. Deployment name is passed in the request path. Use this option if your Foundry tool only includes Azure OpenAI model deployments.
 
-* **Azure AI** - Manage model endpoints in Microsoft Foundry that are exposed through the [Azure AI Model Inference API](/azure/ai-studio/reference/reference-model-inference-api).
+* **Azure AI**: Manage model endpoints in Microsoft Foundry that are exposed through the [Azure AI Model Inference API](/rest/api/aifoundry/modelinference/).
 
     Clients call the deployment at a `/models` endpoint such as `/my-model/models/chat/completions`. Deployment name is passed in the request body. Use this option if you want flexibility to switch between models exposed through the Azure AI Model Inference API and those deployed in Azure OpenAI in Foundry Models.
 
 ## Prerequisites
 
-- An existing API Management instance. [Create one if you haven't already](get-started-create-service-instance.md).
+* An existing API Management instance. [Create one if you haven't already](get-started-create-service-instance.md).
 
-- A Foundry tool in your subscription with one or more models deployed. Examples include models deployed in Microsoft Foundry or Azure OpenAI.
+* A Foundry tool in your subscription with one or more models deployed. Examples include models deployed in Microsoft Foundry or Azure OpenAI.
 
 ## Import Microsoft Foundry API using the portal
 
-Use the following steps to import an AI API to API Management.
+Use the following steps to import an AI API to API Management.
 
 When you import the API, API Management automatically configures:
 
-* Operations for each of the API's REST API endpoints
+* Operations for each of the API's REST API endpoints.
 * A system-assigned identity with the necessary permissions to access the Foundry tool deployment.
 * A [backend](backends.md) resource and a [set-backend-service](set-backend-service-policy.md) policy that direct API requests to the Azure AI Services endpoint.
 * Authentication to the backend using the instance's system-assigned managed identity.
```
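The two compatibility options above differ only in where the deployment or model name travels: the request path for Azure OpenAI, the request body for Azure AI. A minimal sketch of that difference, with a hypothetical helper and placeholder gateway URL that are not part of the article:

```python
def build_request(mode: str, gateway_base: str, deployment: str, messages: list) -> tuple[str, dict]:
    """Return (url, body) for a chat completions call under either
    client compatibility option. Illustrative only; the helper name and
    gateway URL shape are assumptions, not from the documentation."""
    if mode == "azure-openai":
        # Azure OpenAI option: deployment name goes in the request path.
        url = f"{gateway_base}/openai/deployments/{deployment}/chat/completions"
        body = {"messages": messages}
    elif mode == "azure-ai":
        # Azure AI option: clients call a /models endpoint and pass the
        # model name in the request body instead of the path.
        url = f"{gateway_base}/models/chat/completions"
        body = {"model": deployment, "messages": messages}
    else:
        raise ValueError(f"unknown mode: {mode}")
    return url, body
```

Switching models under the Azure AI option then only changes a body field, not the URL your clients call, which is the flexibility the second bullet describes.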
````diff
@@ -62,37 +61,38 @@ To import a Microsoft Foundry API to API Management:
 1. On the **Select AI Service** tab:
     1. Select the **Subscription** in which to search for Foundry Tools. To get information about the model deployments in a service, select the **deployments** link next to the service name.
 
        :::image type="content" source="media/azure-ai-foundry-api/deployments.png" alt-text="Screenshot of deployments for an AI service in the portal.":::
-    1. Select a Foundry tool.
+    1. Select a Foundry tool.
     1. Select **Next**.
 1. On the **Configure API** tab:
     1. Enter a **Display name** and optional **Description** for the API.
     1. In **Base path**, enter a path that your API Management instance uses to access the deployment endpoint.
-    1. Optionally select one or more **Products** to associate with the API.
+    1. Optionally, select one or more **Products** to associate with the API.
     1. In **Client compatibility**, select either of the following based on the types of client you intend to support. See [Client compatibility options](#client-compatibility-options) for more information.
-        * **Azure OpenAI** - Select this option if your clients only need to access Azure OpenAI in Microsoft Foundry model deployments.
-        * **Azure AI** - Select this option if your clients need to access other models in Microsoft Foundry.
+        * **Azure OpenAI**: Select this option if your clients only need to access Azure OpenAI in Microsoft Foundry model deployments.
+        * **Azure AI**: Select this option if your clients need to access other models in Microsoft Foundry.
     1. Select **Next**.
 
     :::image type="content" source="media/azure-ai-foundry-api/client-compatibility.png" alt-text="Screenshot of Microsoft Foundry API configuration in the portal.":::
 
-1. On the **Manage token consumption** tab, optionally enter settings or accept defaults that define the following policies to help monitor and manage the API:
+1. On the **Manage token consumption** tab, optionally enter settings, or accept defaults that define the following policies to help monitor and manage the API:
-1. On the **Apply semantic caching** tab, optionally enter settings or accept defaults that define the policies to help optimize performance and reduce latency for the API:
+1. On the **Apply semantic caching** tab, optionally enter settings, or accept defaults that define the policies to help optimize performance and reduce latency for the API:
    * [Enable semantic caching of responses](azure-openai-enable-semantic-caching.md)
-1. On the **AI content safety**, optionally enter settings or accept defaults to configure the Azure AI Content Safety service to block prompts with unsafe content:
+1. On the **AI content safety** tab, optionally enter settings, or accept defaults to configure the Azure AI Content Safety service to block prompts with unsafe content:
    * [Enforce content safety checks on LLM requests](llm-content-safety-policy.md)
 1. Select **Review**.
-1. After settings are validated, select **Create**.
+1. After settings are validated, select **Create**.
 
 ## Test the AI API
 
-To ensure that your AI API is working as expected, test it in the API Management test console.
+To ensure that your AI API is working as expected, test it in the API Management test console.
+
 1. Select the API you created in the previous step.
 1. Select the **Test** tab.
 1. Select an operation that's compatible with the model deployment.
 
    The page displays fields for parameters and headers.
-1. Enter parameters and headers as needed. Depending on the operation, you might need to configure or update a **Request body**. Here's a very basic example request body for a chat completions operation:
+1. Enter parameters and headers as needed. Depending on the operation, you might need to configure or update a **Request body**. Here's a basic example request body for a chat completions operation:
 
    ```json
   {
````
```diff
@@ -113,5 +113,4 @@ To ensure that your AI API is working as expected, test it in the API Management
 When the test is successful, the backend responds with a successful HTTP response code and some data. Appended to the response is token usage data to help you monitor and manage your language model token consumption.
```
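The diff cuts off the article's example request body. As a purely illustrative sketch of the kind of chat completions body that step describes (placeholder message content and token limit, not the article's exact example):

```json
{
  "messages": [
    {
      "role": "user",
      "content": "What can you tell me about Azure API Management?"
    }
  ],
  "max_tokens": 100
}
```

Under the Azure AI compatibility option, such a body would also carry a `"model"` field, since the model name is passed in the request body rather than the path.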
`articles/app-service/tutorial-java-tomcat-connect-managed-identity-postgresql-database.md` (+1, -1)
```diff
@@ -3,7 +3,7 @@ title: 'Tutorial: Access data with managed identity in Java'
 description: Secure Azure Database for PostgreSQL connectivity with managed identity from a sample Java Tomcat app, and apply it to other Azure services.
```
`articles/app-testing/load-testing/troubleshoot-private-endpoint-tests.md` (+1, -1)
```diff
@@ -19,7 +19,7 @@ Azure Load Testing service requires outbound connectivity from the virtual netwo
 | Destination | Need for connectivity |
 | ------------|-------|
 | *.azure.com | Access to this destination is required for the Azure Load Testing service to interact with Azure Batch service. |
-| *.windows.net | Access to this destination is required for the Azure Load Testing service to interact with Azure Service Bus, Azure Event Hubs, and Azure Storage. To learn more about firewall configuration in these services, see <li> [Azure Service Bus frequently asked questions](/azure/service-bus-messaging/service-bus-faq#what-ports-do-i-need-to-open-on-the-firewall--) </li> <li> [Azure Event Hubs Firewall Rules](/azure/event-hubs/event-hubs-ip-filtering) </li> <li> [Configure Azure Storage firewalls and virtual networks](/azure/storage/common/storage-network-security?tabs=azure-portal) </li> |
+| *.windows.net, *.blob.storage.azure.net | Access to this destination is required for the Azure Load Testing service to interact with Azure Service Bus, Azure Event Hubs, and Azure Storage. To learn more about firewall configuration in these services, see <li> [Azure Service Bus frequently asked questions](/azure/service-bus-messaging/service-bus-faq#what-ports-do-i-need-to-open-on-the-firewall--) </li> <li> [Azure Event Hubs Firewall Rules](/azure/event-hubs/event-hubs-ip-filtering) </li> <li> [Configure Azure Storage firewalls and virtual networks](/azure/storage/common/storage-network-security?tabs=azure-portal) </li> |
 | *.azurecr.io | Access to this destination is required for the Azure Load Testing service to interact with Azure Container Registry. To learn more about firewall configuration in Azure Container Registry, see <li> [Firewall access rules - Azure Container Registry](/azure/container-registry/container-registry-firewall-access-rules) </li> |
 
 Optionally, outbound connectivity is needed to *.maven.org and *.github.com to download any plugins that are included in your test configuration.
```
`articles/azure-maps/private-endpoints.md` (+8, -8)
```diff
@@ -3,15 +3,15 @@ title: Use private endpoints with Azure Maps
 description: Learn how to use private endpoints with Azure Maps.
 author: pbrasil
 ms.author: peterbr
-ms.date: 02/27/2026
-ms.topic: how-to
+ms.date: 03/26/2026
+ms.topic: article
 ms.service: azure-maps
 ms.subservice: authentication
 ---
 
-# Use private endpoints with Azure Maps
+# Use private endpoints with Azure Maps (preview)
 
-Azure Maps supports [Azure Private Link](/../private-link/private-link-overview.md), enabling secure access to Azure Maps services through a private endpoint in your virtual network. A private endpoint assigns a private IP address from your virtual network to the Azure Maps service, so traffic between your applications and Azure Maps stays on the Microsoft backbone network instead of the public internet. This provides improved security and network isolation. You can create a private endpoint when you create an Azure Maps account or add one to an existing account.
+Azure Maps supports [Azure Private Link](../private-link/private-link-overview.md), enabling secure access to Azure Maps services through a private endpoint in your virtual network. A private endpoint assigns a private IP address from your virtual network to the Azure Maps service, so traffic between your applications and Azure Maps stays on the Microsoft backbone network instead of the public internet. This provides improved security and network isolation. You can create a private endpoint when you create an Azure Maps account or add one to an existing account.
 
 ## Benefits of private endpoints for Azure Maps
```
```diff
@@ -82,7 +82,7 @@ Within this zone, a DNS record maps your Azure Maps account's unique ID and regi
 Clients inside the virtual network resolve the hostname to a private IP address for private connectivity, while clients outside the network resolve the same hostname to the Azure Maps public endpoint. This split‑horizon DNS approach lets you use a single endpoint URL both inside and outside the virtual network.
 
 If you don't use automatic DNS integration, configure DNS manually so the Azure Maps account hostname
-(`<maps-account-client-id>.<location>.privatelink.account.maps.azure.com`) resolves to the private endpoint IP address within your network. For more information, see [Azure Private Endpoint DNS documentation](/../private-link/private-endpoint-dns.md).
+(`<maps-account-client-id>.<location>.account.maps.azure.com`) resolves to the private endpoint IP address within your network. For more information, see [Azure Private Endpoint DNS documentation](../private-link/private-endpoint-dns.md).
 
 ### 3. Use the private endpoint in your applications
```
```diff
@@ -92,7 +92,7 @@ To use the private endpoint, configure your applications to call the **Azure Map
 > If your application continues to use the default Azure Maps endpoint (such as `atlas.microsoft.com`), requests won't be routed through the private endpoint. Azure Maps SDKs support overriding the default endpoint, so configure your SDK or connection code to use your Azure Maps account–specific hostname. When configured, requests from within your network are automatically routed through Private Link.
```
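The account-specific hostname that clients should call is mechanical to construct from the account's client ID and region. A tiny sketch, following the corrected hostname format in this diff (the helper name and values are placeholders, not from the documentation):

```python
def maps_account_hostname(client_id: str, location: str) -> str:
    """Build the Azure Maps account-specific hostname in the format
    <maps-account-client-id>.<location>.account.maps.azure.com,
    as shown in the DNS section above. Illustrative helper only."""
    return f"{client_id}.{location}.account.maps.azure.com"
```

Because of the split-horizon DNS described above, the same hostname works inside the virtual network (private IP) and outside it (public endpoint).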
```diff
@@ -118,5 +118,5 @@
 ## Related content
 
-- [Azure Private Endpoint private DNS zone values](/../private-link/private-endpoint-dns.md)
-- [Azure Private Link availability](/../private-link/availability.md)
+- [Azure Private Endpoint private DNS zone values](../private-link/private-endpoint-dns.md)
+- [Azure Private Link availability](../private-link/availability.md)
```
`articles/azure-resource-manager/management/async-operations.md` (+2, -2)
```diff
@@ -29,7 +29,7 @@ After getting the 201 or 202 response code, you're ready to monitor the status o
 ## URL to monitor status
 
-There are two different ways to monitor the status the asynchronous operation. You determine the correct approach by examining the header values that are returned from your original request. First, look for:
+There are two different ways to monitor the status of the asynchronous operation. You determine the correct approach by examining the header values that are returned from your original request. First, look for:
 
 * `Azure-AsyncOperation` - URL for checking the ongoing status of the operation. If your operation returns this value, use it to track the status of the operation.
 * `Retry-After` - The number of seconds to wait before checking the status of the asynchronous operation.
```
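The header inspection the bullets describe can be sketched as a small helper that picks the status URL and polling delay from the original response's headers. This is illustrative only: the `Location` fallback (the second monitoring option, not visible in this hunk) and the 30-second default delay are assumptions, not taken from the article.

```python
def monitor_plan(headers: dict) -> tuple:
    """From the headers of the original request's response, return
    (status_url, delay_seconds) for polling an asynchronous operation.

    Prefers Azure-AsyncOperation, as the article recommends; the Location
    fallback and the default delay of 30 seconds are assumptions."""
    # Azure-AsyncOperation, when present, is the URL to track the operation.
    url = headers.get("Azure-AsyncOperation") or headers.get("Location")
    # Retry-After gives the number of seconds to wait between status checks.
    delay = int(headers.get("Retry-After", 30))
    return url, delay
```

A polling loop would then sleep for `delay` seconds between GETs against `url` until the operation reports a terminal state.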
```diff
-If the request is still running, you receive a status code 202. If the request is completed, your receive a status code 200. The body of the response contains the properties of the storage account that was created.
+If the request is still running, you receive a status code 202. If the request is completed, you receive a status code 200. The body of the response contains the properties of the storage account that was created.
```