Commit 989af0b

Applied suggestions from LAA
1 parent c91d7c0

1 file changed

Lines changed: 15 additions & 15 deletions

File tree

articles/api-management/azure-ai-foundry-api.md

@@ -17,38 +17,38 @@ ms.custom: template-how-to, build-2024

 You can import AI model endpoints deployed in Microsoft Foundry to your API Management instance as APIs. Use AI gateway policies and other capabilities in API Management to simplify integration, improve observability, and enhance control over the model endpoints.

-Learn more about managing AI APIs in API Management:
+To learn more about managing AI APIs in API Management, see:

 * [AI gateway capabilities in Azure API Management](genai-gateway-capabilities.md)

 ## Client compatibility options

-API Management supports the following client compatibility options for AI APIs from Microsoft Foundry. When you import the API using the wizard, choose the option suitable for your model deployment. The option determines how clients call the API and how the API Management instance routes requests to the Foundry tool.
+API Management supports the following client compatibility options for AI APIs from Microsoft Foundry. When you import the API by using the wizard, choose the option suitable for your model deployment. The option determines how clients call the API and how the API Management instance routes requests to the Foundry tool.

 * **Azure OpenAI** - Manage Azure OpenAI in Microsoft Foundry model deployments.

-Clients call the deployment at an `/openai` endpoint such as `/openai/deployments/my-deployment/chat/completions`. Deployment name is passed in the request path. Use this option if your Foundry tool only includes Azure OpenAI model deployments.
+Clients call the deployment at an `/openai` endpoint such as `/openai/deployments/my-deployment/chat/completions`. The request path includes the deployment name. Use this option if your Foundry tool only includes Azure OpenAI model deployments.

 * **Azure AI** - Manage model endpoints in Microsoft Foundry that are exposed through the [Azure AI Model Inference API](/azure/ai-studio/reference/reference-model-inference-api).

-Clients call the deployment at a `/models` endpoint such as `/my-model/models/chat/completions`. Deployment name is passed in the request body. Use this option if you want flexibility to switch between models exposed through the Azure AI Model Inference API and those deployed in Azure OpenAI in Foundry Models.
+Clients call the deployment at a `/models` endpoint such as `/my-model/models/chat/completions`. The request body includes the deployment name. Use this option if you want flexibility to switch between models exposed through the Azure AI Model Inference API and those deployed in Azure OpenAI in Foundry Models.

 * **Azure OpenAI v1** - Manage Azure OpenAI in Microsoft Foundry model deployments, using the [Azure OpenAI API version 1](/azure/foundry/openai/api-version-lifecycle).

-Clients call the deployment at an Azure OpenAI v1 model endpoint such as `my-model/models/chat/completions`. Deployment name is passed in the request body.
+Clients call the deployment at an Azure OpenAI v1 model endpoint such as `openai/v1/my-model/chat/completions`. The request body includes the deployment name.

 ## Prerequisites

 - An existing API Management instance. [Create one if you haven't already](get-started-create-service-instance.md).

 - A Foundry tool in your subscription with one or more models deployed. Examples include models deployed in Microsoft Foundry or Azure OpenAI.

-- If you want to enable semantic caching for the API, see [Enable semantic caching of responses](azure-openai-enable-semantic-caching.md) for prerequisites.
+- For enabling semantic caching for the API, see [Enable semantic caching of responses](azure-openai-enable-semantic-caching.md) for prerequisites.

-- If you want to enforce content safety checks on the API, see [Enforce content safety checks on LLM requests](llm-content-safety-policy.md) for prerequisites.
+- For enforcing content safety checks on the API, see [Enforce content safety checks on LLM requests](llm-content-safety-policy.md) for prerequisites.

-## Import Microsoft Foundry API using the portal
+## Import Microsoft Foundry API by using the portal

 Use the following steps to import an AI API to API Management.

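For context on the client compatibility options changed above, here is a minimal sketch of how a client might call an imported deployment through the API Management gateway under the **Azure OpenAI** option. All values shown (the gateway hostname, the `my-foundry` base path, the `my-deployment` deployment name, the API version, and the subscription key placeholder) are hypothetical and not taken from the article.

```python
from openai import AzureOpenAI  # assumes the `openai` Python package is installed

# Hypothetical values: replace the gateway URL, base path, deployment name,
# API version, and subscription key with those from your own instance.
client = AzureOpenAI(
    azure_endpoint="https://contoso.azure-api.net/my-foundry",  # gateway URL + API base path
    api_key="unused",  # backend authentication is handled by API Management's managed identity
    api_version="2024-10-21",
    default_headers={"Ocp-Apim-Subscription-Key": "<apim-subscription-key>"},
)

# Under the Azure OpenAI option, the deployment name travels in the request path:
# POST {base path}/openai/deployments/my-deployment/chat/completions
response = client.chat.completions.create(
    model="my-deployment",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```

Under the **Azure AI** and **Azure OpenAI v1** options, only the request shape changes as described above: the route differs and the deployment name moves into the request body.
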
@@ -57,7 +57,7 @@ When you import the API, API Management automatically configures:
 * Operations for each of the API's REST API endpoints.
 * A system-assigned identity with the necessary permissions to access the Foundry tool deployment.
 * A [backend](backends.md) resource and a [set-backend-service](set-backend-service-policy.md) policy that direct API requests to the Azure AI Services endpoint.
-* Authentication to the backend using the instance's system-assigned managed identity.
+* Authentication to the backend by using the instance's system-assigned managed identity.
 * (optionally) Policies to help you monitor and manage the API.

 To import a Microsoft Foundry API to API Management:
@@ -76,7 +76,7 @@ To import a Microsoft Foundry API to API Management:
 1. Enter a **Display name** and optional **Description** for the API.
 1. In **Base path**, enter a path that your API Management instance uses to access the deployment endpoint.
 1. Optionally select one or more **Products** to associate with the API.
-1. In **Client compatibility**, select one of the following based on the types of client you intend to support. See [Client compatibility options](#client-compatibility-options) for more information.
+1. In **Client compatibility**, select one of the following options based on the types of client you intend to support. See [Client compatibility options](#client-compatibility-options) for more information.
 * **Azure OpenAI** - Select this option if your clients only need to access Azure OpenAI in Microsoft Foundry model deployments.
 * **Azure AI** - Select this option if your clients need to access other models in Microsoft Foundry.
 * **Azure OpenAI v1** - Select this option if you want to use the Azure OpenAI API version 1 with your Foundry model deployments.
@@ -89,14 +89,14 @@ To import a Microsoft Foundry API to API Management:
 * [Track token usage](llm-emit-token-metric-policy.md)
 1. On the **Apply semantic caching** tab, optionally enter settings or accept defaults that define the policies to help optimize performance and reduce latency for the API:
 * [Enable semantic caching of responses](azure-openai-enable-semantic-caching.md)
-1. On the **AI content safety**, optionally enter settings or accept defaults to configure the Azure AI Content Safety service to block prompts with unsafe content:
+1. On the **AI content safety** tab, optionally enter settings or accept defaults to configure the Azure AI Content Safety service to block prompts with unsafe content:
 * [Enforce content safety checks on LLM requests](llm-content-safety-policy.md)
 1. Select **Review**.
-1. After settings are validated, select **Create**.
+1. After the portal validates the settings, select **Create**.

 ## Test the AI API

-To ensure that your AI API is working as expected, test it in the API Management test console.
+To make sure your AI API works as expected, test it in the API Management test console.
 1. Select the API you created in the previous step.
 1. Select the **Test** tab.
 1. Select an operation that's compatible with the model deployment.
@@ -117,10 +117,10 @@ To ensure that your AI API is working as expected, test it in the API Management
 ```

 > [!NOTE]
-> In the test console, API Management automatically populates an **Ocp-Apim-Subscription-Key** header, and configures the subscription key of the built-in [all-access subscription](api-management-subscriptions.md#all-access-subscription). This key enables access to every API in the API Management instance. Optionally display the **Ocp-Apim-Subscription-Key** header by selecting the "eye" icon next to the **HTTP Request**.
+> In the test console, API Management automatically adds an **Ocp-Apim-Subscription-Key** header and sets the subscription key for the built-in [all-access subscription](api-management-subscriptions.md#all-access-subscription). This key provides access to every API in the API Management instance. To optionally display the **Ocp-Apim-Subscription-Key** header, select the "eye" icon next to the **HTTP Request**.
 1. Select **Send**.

-When the test is successful, the backend responds with a successful HTTP response code and some data. Appended to the response is token usage data to help you monitor and manage your language model token consumption.
+When the test is successful, the backend responds with a successful HTTP response code and some data. The response includes token usage data to help you monitor and manage your language model token consumption.

[!INCLUDE [api-management-define-api-topics.md](../../includes/api-management-define-api-topics.md)]
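
As a companion to the testing note above, here is a minimal sketch of the same kind of call made outside the test console. It passes the subscription key in the **Ocp-Apim-Subscription-Key** header and reads the token usage data from the response. The gateway URL, base path, deployment name, API version, and key placeholder are all hypothetical.

```python
import requests  # assumes the `requests` package is installed

# Hypothetical values: replace the gateway URL, base path, deployment name,
# API version, and subscription key with those from your own instance.
url = (
    "https://contoso.azure-api.net/my-foundry"
    "/openai/deployments/my-deployment/chat/completions"
)
headers = {
    "Ocp-Apim-Subscription-Key": "<apim-subscription-key>",
    "Content-Type": "application/json",
}
payload = {"messages": [{"role": "user", "content": "What is Azure API Management?"}]}

response = requests.post(url, params={"api-version": "2024-10-21"}, headers=headers, json=payload)
response.raise_for_status()

data = response.json()
print(data["choices"][0]["message"]["content"])
# The usage object reports token consumption for the call
# (typically prompt_tokens, completion_tokens, and total_tokens).
print(data.get("usage"))
```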
