Commit 4c4325c

Update Azure AI Foundry and Azure AI services references
1 parent 4649134 commit 4c4325c

21 files changed: 53 additions & 53 deletions

articles/azure-functions/durable/durable-functions-storage-providers.md

Lines changed: 1 addition & 1 deletion
@@ -99,7 +99,7 @@ If the configured value is both an exact match for a single setting and a prefix
 
 ### Identity-based connections
 
-If you are using [version 2.7.0 or higher of the extension](https://github.com/Azure/azure-functions-durable-extension/releases/tag/v2.7.0) and the Azure storage provider, instead of using a connection string with a secret, you can have the app use an [Microsoft Entra identity](../../active-directory/fundamentals/active-directory-whatis.md). To do this, you would define settings under a common prefix which maps to the `connectionName` property in the trigger and binding configuration.
+If you are using [version 2.7.0 or higher of the extension](https://github.com/Azure/azure-functions-durable-extension/releases/tag/v2.7.0) and the Azure storage provider, instead of using a connection string with a secret, you can have the app use a [Microsoft Entra identity](../../active-directory/fundamentals/active-directory-whatis.md). To do this, you would define settings under a common prefix which maps to the `connectionName` property in the trigger and binding configuration.
 
 To use an identity-based connection for Durable Functions, configure the following app settings:
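For context on the paragraph changed above: an identity-based connection replaces the connection-string secret with a set of app settings that share a common prefix. A minimal `local.settings.json` sketch, assuming a hypothetical connection name `MyStorageConnection` (the `__blobServiceUri`/`__queueServiceUri`/`__tableServiceUri` setting names follow the identity-based connection convention for the Azure Storage provider; confirm the exact names in the linked article):

```json
{
  "IsEncrypted": false,
  "Values": {
    "MyStorageConnection__blobServiceUri": "https://<account>.blob.core.windows.net",
    "MyStorageConnection__queueServiceUri": "https://<account>.queue.core.windows.net",
    "MyStorageConnection__tableServiceUri": "https://<account>.table.core.windows.net"
  }
}
```

`MyStorageConnection` would then be the value of the `connectionName` property in the trigger and binding configuration, as the changed paragraph describes.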

articles/azure-functions/durable/durable-functions-troubleshooting-guide.md

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ This article provides a guide for troubleshooting common scenarios in Durable Fu
 > [!NOTE]
 > Microsoft support engineers are available to assist in diagnosing issues with your application. If you're not able to diagnose your problem using this guide, you can file a support ticket by accessing the **New Support request** blade in the **Support + troubleshooting** section of your function app page in the Azure portal.
 
-![Screenshot of support request page in Azure Portal.](./media/durable-functions-troubleshooting-guide/durable-function-support-request.png)
+![Screenshot of support request page in Azure portal.](./media/durable-functions-troubleshooting-guide/durable-function-support-request.png)
 
 > [!TIP]
 > When debugging and diagnosing issues, it's recommended that you start by ensuring your app is using the latest Durable Functions extension version. Most of the time, using the latest version mitigates known issues already reported by other users. Please read the [Upgrade Durable Functions extension version](./durable-functions-extension-upgrade.md) article for instructions on how to upgrade your extension version.

articles/azure-functions/durable/durable-functions-types-features-overview.md

Lines changed: 1 addition & 1 deletion
@@ -51,7 +51,7 @@ For more information about entity functions, see the [Durable Entities](durable-
 Orchestrator functions are triggered by an [orchestration trigger binding](durable-functions-bindings.md#orchestration-trigger) and entity functions are triggered by an [entity trigger binding](durable-functions-bindings.md#entity-trigger). Both of these triggers work by reacting to messages that are enqueued into a [task hub](durable-functions-task-hubs.md). The primary way to deliver these messages is by using an [orchestrator client binding](durable-functions-bindings.md#orchestration-client) or an [entity client binding](durable-functions-bindings.md#entity-client) from within a *client function*. Any non-orchestrator function can be a *client function*. For example, You can trigger the orchestrator from an HTTP-triggered function, an Azure Event Hub triggered function, etc. What makes a function a *client function* is its use of the durable client output binding.
 
 > [!NOTE]
-> Unlike other function types, orchestrator and entity functions cannot be triggered directly using the buttons in the Azure Portal. If you want to test an orchestrator or entity function in the Azure Portal, you must instead run a *client function* that starts an orchestrator or entity function as part of its implementation. For the simplest testing experience, a *manual trigger* function is recommended.
+> Unlike other function types, orchestrator and entity functions cannot be triggered directly using the buttons in the Azure portal. If you want to test an orchestrator or entity function in the Azure portal, you must instead run a *client function* that starts an orchestrator or entity function as part of its implementation. For the simplest testing experience, a *manual trigger* function is recommended.
 
 In addition to triggering orchestrator or entity functions, the *durable client* binding can be used to interact with running orchestrations and entities. For example, orchestrations can be queried, terminated, and can have events raised to them. For more information on managing orchestrations and entities, see the [Instance management](durable-functions-instance-management.md) article.
 

articles/azure-functions/durable/durable-task-scheduler/durable-task-scheduler-auto-scaling.md

Lines changed: 1 addition & 1 deletion
@@ -175,7 +175,7 @@ In the [Autoscaling in Azure Container Apps sample](https://github.com/Azure-Sam
 Subscription: SUBSCRIPTION_NAME (SUBSCRIPTION_ID)
 Location: West US 2
 
-You can view detailed progress in the Azure Portal:
+You can view detailed progress in the Azure portal:
 https://portal.azure.com/#view/HubsExtension/DeploymentDetailsBlade/~/overview/id/%2Fsubscriptions%SUBSCRIPTION_ID%2Fproviders%2FMicrosoft.Resources%2Fdeployments%2FCONTAINER_APP_ENVIRONMENT
 
 (✓) Done: Resource group: GENERATED_RESOURCE_GROUP (1.385s)

articles/azure-functions/durable/durable-task-scheduler/quickstart-container-apps-durable-task-sdk.md

Lines changed: 1 addition & 1 deletion
@@ -148,7 +148,7 @@ cd /samples/durable-task-sdks/java/function-chaining
 Subscription: SUBSCRIPTION_NAME (SUBSCRIPTION_ID)
 Location: West US 2
 
-You can view detailed progress in the Azure Portal:
+You can view detailed progress in the Azure portal:
 https://portal.azure.com/#view/HubsExtension/DeploymentDetailsBlade/~/overview/id/%2Fsubscriptions%SUBSCRIPTION_ID%2Fproviders%2FMicrosoft.Resources%2Fdeployments%2FCONTAINER_APP_ENVIRONMENT
 
 (✓) Done: Resource group: GENERATED_RESOURCE_GROUP (1.385s)

articles/azure-functions/functions-bindings-event-grid-output.md

Lines changed: 1 addition & 1 deletion
@@ -719,7 +719,7 @@ Use the following steps to configure a topic key:
 
 ### Identity-based authentication
 
-When using version 3.3.x or higher of the extension, you can connect to an Event Grid topic using an [Microsoft Entra identity](../active-directory/fundamentals/active-directory-whatis.md) to avoid having to obtain and work with topic keys.
+When using version 3.3.x or higher of the extension, you can connect to an Event Grid topic using a [Microsoft Entra identity](../active-directory/fundamentals/active-directory-whatis.md) to avoid having to obtain and work with topic keys.
 
 You need to create an application setting that returns the topic endpoint URI. The name of the setting should combine a _unique common prefix_ (for example, `myawesometopic`) with the value `__topicEndpointUri`. Then, you must use that common prefix (in this case, `myawesometopic`) when you define the `Connection` property in the binding.
 
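The setting-name convention described in this file's unchanged context line can be sketched as an app setting (the `myawesometopic` prefix comes from the article's own example; the endpoint value is a hypothetical placeholder):

```json
{
  "Values": {
    "myawesometopic__topicEndpointUri": "https://<topic-name>.<region>.eventgrid.azure.net/api/events"
  }
}
```

The binding's `Connection` property would then be set to the common prefix `myawesometopic`.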

articles/azure-functions/functions-bindings-openai-assistantpost-input.md

Lines changed: 5 additions & 5 deletions
@@ -82,7 +82,7 @@ Apply the `PostUserQuery` attribute to define an assistant post input binding, w
 | --------- | ----------- |
 | **Id** | The ID of the assistant to update. |
 | **UserMessage** | Gets or sets the user message for the chat completion model, encoded as a string. |
-| **AIConnectionName** | _Optional_. Gets or sets the name of the configuration section for AI service connectivity settings. For Azure OpenAI: If specified, looks for "Endpoint" and "Key" values in this configuration section. If not specified or the section doesn't exist, falls back to environment variables: AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_KEY. For user-assigned managed identity authentication, this property is required. For OpenAI service (non-Azure), set the OPENAI_API_KEY environment variable.|
+| **AIConnectionName** | _Optional_. Gets or sets the name of the configuration section for Foundry Tool connectivity settings. For Azure OpenAI: If specified, looks for "Endpoint" and "Key" values in this configuration section. If not specified or the section doesn't exist, falls back to environment variables: AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_KEY. For user-assigned managed identity authentication, this property is required. For OpenAI service (non-Azure), set the OPENAI_API_KEY environment variable.|
 | **ChatModel** | _Optional_. Gets or sets the ID of the model to use as a string, with a default value of `gpt-3.5-turbo`. |
 | **Temperature** | _Optional_. Gets or sets the sampling temperature to use, as a string between `0` and `2`. Higher values (`0.8`) make the output more random, while lower values like (`0.2`) make output more focused and deterministic. You should use either `Temperature` or `TopP`, but not both. |
 | **TopP** | _Optional_. Gets or sets an alternative to sampling with temperature, called nucleus sampling, as a string. In this sampling method, the model considers the results of the tokens with `top_p` probability mass. So `0.1` means only the tokens comprising the top 10% probability mass are considered. You should use either `Temperature` or `TopP`, but not both. |
@@ -100,7 +100,7 @@ The `PostUserQuery` annotation enables you to define an assistant post input bin
 | **name** | The name of the output binding. |
 | **id** | The ID of the assistant to update. |
 | **userMessage** | Gets or sets the user message for the chat completion model, encoded as a string. |
-| **aiConnectionName** | _Optional_. Gets or sets the name of the configuration section for AI service connectivity settings. For Azure OpenAI: If specified, looks for "Endpoint" and "Key" values in this configuration section. If not specified or the section doesn't exist, falls back to environment variables: AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_KEY. For user-assigned managed identity authentication, this property is required. For OpenAI service (non-Azure), set the OPENAI_API_KEY environment variable.|
+| **aiConnectionName** | _Optional_. Gets or sets the name of the configuration section for Foundry Tool connectivity settings. For Azure OpenAI: If specified, looks for "Endpoint" and "Key" values in this configuration section. If not specified or the section doesn't exist, falls back to environment variables: AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_KEY. For user-assigned managed identity authentication, this property is required. For OpenAI service (non-Azure), set the OPENAI_API_KEY environment variable.|
 | **chatModel** | Gets or sets the ID of the model to use as a string, with a default value of `gpt-3.5-turbo`. |
 | **temperature** | _Optional_. Gets or sets the sampling temperature to use, as a string between `0` and `2`. Higher values (`0.8`) make the output more random, while lower values like (`0.2`) make output more focused and deterministic. You should use either `Temperature` or `TopP`, but not both. |
 | **topP** | _Optional_. Gets or sets an alternative to sampling with temperature, called nucleus sampling, as a string. In this sampling method, the model considers the results of the tokens with `top_p` probability mass. So `0.1` means only the tokens comprising the top 10% probability mass are considered. You should use either `Temperature` or `TopP`, but not both. |
@@ -118,7 +118,7 @@ During the preview, define the output binding as a `generic_output_binding` bind
 | **arg_name** | The name of the variable that represents the binding parameter. |
 | **id** | The ID of the assistant to update. |
 | **user_message** | Gets or sets the user message for the chat completion model, encoded as a string. |
-| **ai_connection_name** | _Optional_. Gets or sets the name of the configuration section for AI service connectivity settings. For Azure OpenAI: If specified, looks for "Endpoint" and "Key" values in this configuration section. If not specified or the section doesn't exist, falls back to environment variables: AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_KEY. For user-assigned managed identity authentication, this property is required. For OpenAI service (non-Azure), set the OPENAI_API_KEY environment variable.|
+| **ai_connection_name** | _Optional_. Gets or sets the name of the configuration section for Foundry Tool connectivity settings. For Azure OpenAI: If specified, looks for "Endpoint" and "Key" values in this configuration section. If not specified or the section doesn't exist, falls back to environment variables: AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_KEY. For user-assigned managed identity authentication, this property is required. For OpenAI service (non-Azure), set the OPENAI_API_KEY environment variable.|
 | **chat_model** | Gets or sets the ID of the model to use as a string, with a default value of `gpt-3.5-turbo`. |
 | **temperature** | _Optional_. Gets or sets the sampling temperature to use, as a string between `0` and `2`. Higher values (`0.8`) make the output more random, while lower values like (`0.2`) make output more focused and deterministic. You should use either `Temperature` or `TopP`, but not both. |
 | **top_p** | _Optional_. Gets or sets an alternative to sampling with temperature, called nucleus sampling, as a string. In this sampling method, the model considers the results of the tokens with `top_p` probability mass. So `0.1` means only the tokens comprising the top 10% probability mass are considered. You should use either `Temperature` or `TopP`, but not both. |
@@ -138,7 +138,7 @@ The binding supports these configuration properties that you set in the function
 | **name** | The name of the output binding. |
 | **id** | The ID of the assistant to update. |
 | **userMessage** | Gets or sets the user message for the chat completion model, encoded as a string. |
-| **aiConnectionName** | _Optional_. Gets or sets the name of the configuration section for AI service connectivity settings. For Azure OpenAI: If specified, looks for "Endpoint" and "Key" values in this configuration section. If not specified or the section doesn't exist, falls back to environment variables: AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_KEY. For user-assigned managed identity authentication, this property is required. For OpenAI service (non-Azure), set the OPENAI_API_KEY environment variable.|
+| **aiConnectionName** | _Optional_. Gets or sets the name of the configuration section for Foundry Tool connectivity settings. For Azure OpenAI: If specified, looks for "Endpoint" and "Key" values in this configuration section. If not specified or the section doesn't exist, falls back to environment variables: AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_KEY. For user-assigned managed identity authentication, this property is required. For OpenAI service (non-Azure), set the OPENAI_API_KEY environment variable.|
 | **chatModel** | Gets or sets the ID of the model to use as a string, with a default value of `gpt-3.5-turbo`. |
 | **temperature** | _Optional_. Gets or sets the sampling temperature to use, as a string between `0` and `2`. Higher values (`0.8`) make the output more random, while lower values like (`0.2`) make output more focused and deterministic. You should use either `Temperature` or `TopP`, but not both. |
 | **topP** | _Optional_. Gets or sets an alternative to sampling with temperature, called nucleus sampling, as a string. In this sampling method, the model considers the results of the tokens with `top_p` probability mass. So `0.1` means only the tokens comprising the top 10% probability mass are considered. You should use either `Temperature` or `TopP`, but not both. |
@@ -155,7 +155,7 @@ The binding supports these properties, which are defined in your code:
 |-----------------------|-------------|
 | **id** | The ID of the assistant to update. |
 | **userMessage** | Gets or sets the user message for the chat completion model, encoded as a string. |
-| **aiConnectionName** | _Optional_. Gets or sets the name of the configuration section for AI service connectivity settings. For Azure OpenAI: If specified, looks for "Endpoint" and "Key" values in this configuration section. If not specified or the section doesn't exist, falls back to environment variables: AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_KEY. For user-assigned managed identity authentication, this property is required. For OpenAI service (non-Azure), set the OPENAI_API_KEY environment variable.|
+| **aiConnectionName** | _Optional_. Gets or sets the name of the configuration section for Foundry Tool connectivity settings. For Azure OpenAI: If specified, looks for "Endpoint" and "Key" values in this configuration section. If not specified or the section doesn't exist, falls back to environment variables: AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_KEY. For user-assigned managed identity authentication, this property is required. For OpenAI service (non-Azure), set the OPENAI_API_KEY environment variable.|
 | **chatModel** | Gets or sets the ID of the model to use as a string, with a default value of `gpt-3.5-turbo`. |
 | **temperature** | _Optional_. Gets or sets the sampling temperature to use, as a string between `0` and `2`. Higher values (`0.8`) make the output more random, while lower values like (`0.2`) make output more focused and deterministic. You should use either `Temperature` or `TopP`, but not both. |
 | **topP** | _Optional_. Gets or sets an alternative to sampling with temperature, called nucleus sampling, as a string. In this sampling method, the model considers the results of the tokens with `top_p` probability mass. So `0.1` means only the tokens comprising the top 10% probability mass are considered. You should use either `Temperature` or `TopP`, but not both. |
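Across all five tables, the renamed row keeps the same lookup behavior: the connection-name value names a configuration section holding "Endpoint" and "Key". As an app-settings sketch (the section name `MyAzureOpenAI` is hypothetical; double underscores map to configuration-section nesting in Functions app settings):

```json
{
  "Values": {
    "MyAzureOpenAI__Endpoint": "https://<your-resource>.openai.azure.com/",
    "MyAzureOpenAI__Key": "<your-api-key>"
  }
}
```

The binding's `AIConnectionName` (or `aiConnectionName` / `ai_connection_name`) would then be set to `MyAzureOpenAI`; if it's omitted, the extension falls back to the `AZURE_OPENAI_ENDPOINT` and `AZURE_OPENAI_KEY` environment variables, per the table text.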
