---
title: Azure OpenAI text completion input binding for Azure Functions
description: Learn how to use the Azure OpenAI text completion input binding to access Azure OpenAI text completion APIs during function execution in Azure Functions.
ms.topic: reference
ms.custom:
  - build-2024
  - devx-track-extended-java
  - devx-track-js
  - devx-track-python
  - devx-track-ts
  - build-2025
ms.collection:
  - ce-skilling-ai-copilot
ms.date: 05/15/2025
ms.update-cycle: 180-days
zone_pivot_groups: programming-languages-set-functions
---

# Azure OpenAI text completion input binding for Azure Functions

[!INCLUDE preview-support]

The Azure OpenAI text completion input binding allows you to bring the results of the text completion APIs into your code executions. You can define the binding to use either predefined prompts with parameters or to pass through an entire prompt.

For information on setup and configuration details of the Azure OpenAI extension, see Azure OpenAI extensions for Azure Functions. To learn more about Azure OpenAI completions, see Learn how to generate or manipulate text.

[!INCLUDE functions-support-notes-samples-openai]

## Example

::: zone pivot="programming-language-csharp"
This example demonstrates the templating pattern, where the HTTP trigger function takes a name parameter and embeds it into a text prompt, which is then sent to the Azure OpenAI completions API by the extension. The response to the prompt is returned in the HTTP response.

:::code language="csharp" source="~/functions-openai-extension/samples/textcompletion/csharp-ooproc/TextCompletions.cs" range="23-29":::

This example takes a prompt as input, sends it directly to the completions API, and returns the response as the output.

:::code language="csharp" source="~/functions-openai-extension/samples/textcompletion/csharp-ooproc/TextCompletions.cs" range="35-43":::

::: zone-end
::: zone pivot="programming-language-java"
This example demonstrates the templating pattern, where the HTTP trigger function takes a name parameter and embeds it into a text prompt, which is then sent to the Azure OpenAI completions API by the extension. The response to the prompt is returned in the HTTP response.

:::code language="java" source="~/functions-openai-extension/samples/textcompletion/java/src/main/java/com/azfs/TextCompletions.java" range="31-46":::

This example takes a prompt as input, sends it directly to the completions API, and returns the response as the output.

:::code language="java" source="~/functions-openai-extension/samples/textcompletion/java/src/main/java/com/azfs/TextCompletions.java" range="52-65":::

::: zone-end
::: zone pivot="programming-language-javascript"

This example demonstrates the templating pattern, where the HTTP trigger function takes a name parameter and embeds it into a text prompt, which is then sent to the Azure OpenAI completions API by the extension. The response to the prompt is returned in the HTTP response.

:::code language="javascript" source="~/functions-openai-extension/samples/textcompletion/javascript/src/functions/whois.js" :::

::: zone-end
::: zone pivot="programming-language-typescript"

This example demonstrates the templating pattern, where the HTTP trigger function takes a name parameter and embeds it into a text prompt, which is then sent to the Azure OpenAI completions API by the extension. The response to the prompt is returned in the HTTP response.

:::code language="typescript" source="~/functions-openai-extension/samples/textcompletion/typescript/src/functions/whois.ts" :::

::: zone-end
::: zone pivot="programming-language-powershell"
This example demonstrates the templating pattern, where the HTTP trigger function takes a name parameter and embeds it into a text prompt, which is then sent to the Azure OpenAI completions API by the extension. The response to the prompt is returned in the HTTP response.

Here's the function.json file for TextCompletionResponse:

:::code language="json" source="~/functions-openai-extension/samples/textcompletion/powershell/WhoIs/function.json" :::

For more information about function.json file properties, see the Configuration section.

The code simply returns the text from the completion API as the response:

:::code language="powershell" source="~/functions-openai-extension/samples/textcompletion/powershell/WhoIs/run.ps1" :::

::: zone-end
::: zone pivot="programming-language-python"
This example demonstrates the templating pattern, where the HTTP trigger function takes a name parameter and embeds it into a text prompt, which is then sent to the Azure OpenAI completions API by the extension. The response to the prompt is returned in the HTTP response.

:::code language="python" source="~/functions-openai-extension/samples/textcompletion/python/function_app.py" range="7-16" :::

This example takes a prompt as input, sends it directly to the completions API, and returns the response as the output.

:::code language="python" source="~/functions-openai-extension/samples/textcompletion/python/function_app.py" range="19-30" :::

::: zone-end

::: zone pivot="programming-language-csharp"

## Attributes

The specific attribute you apply to define a text completion input binding depends on your C# process mode:

- In the isolated worker model, apply the `TextCompletionInput` attribute to define a text completion input binding.
- In the in-process model, apply the `TextCompletion` attribute to define a text completion input binding.


The attribute supports these parameters:

| Parameter | Description |
| --- | --- |
| **Prompt** | Gets or sets the prompt to generate completions for, encoded as a string. |
| **AIConnectionName** | Optional. Gets or sets the name of the configuration section for AI service connectivity settings. For Azure OpenAI: if specified, the extension looks for "Endpoint" and "Key" values in this configuration section. If not specified, or if the section doesn't exist, it falls back to the environment variables `AZURE_OPENAI_ENDPOINT` and `AZURE_OPENAI_KEY`. This property is required for user-assigned managed identity authentication. For the OpenAI service (non-Azure), set the `OPENAI_API_KEY` environment variable. |
| **ChatModel** | Optional. Gets or sets the ID of the model to use as a string, with a default value of `gpt-3.5-turbo`. |
| **Temperature** | Optional. Gets or sets the sampling temperature to use, as a string between `0` and `2`. Higher values (like `0.8`) make the output more random, while lower values (like `0.2`) make it more focused and deterministic. Use either `Temperature` or `TopP`, but not both. |
| **TopP** | Optional. Gets or sets an alternative to sampling with temperature, called nucleus sampling, as a string. In this sampling method, the model considers the results of the tokens with `top_p` probability mass. So `0.1` means only the tokens comprising the top 10% probability mass are considered. Use either `Temperature` or `TopP`, but not both. |
| **MaxTokens** | Optional. Gets or sets the maximum number of tokens to generate in the completion, as a string with a default of `100`. The token count of your prompt plus `max_tokens` can't exceed the model's context length. Most models have a context length of 2,048 tokens (except for the newest models, which support 4,096). |
| **IsReasoningModel** | Optional. Gets or sets a value indicating whether the chat completion model is a reasoning model. This option is experimental and is associated with the reasoning model until all models have parity in the expected properties, with a default value of `false`. |

::: zone-end
::: zone pivot="programming-language-java"

## Annotations

The `TextCompletion` annotation enables you to define a text completion input binding, which supports these parameters:

| Element | Description |
| --- | --- |
| **name** | Gets or sets the name of the input binding. |
| **prompt** | Gets or sets the prompt to generate completions for, encoded as a string. |
| **aiConnectionName** | Optional. Gets or sets the name of the configuration section for AI service connectivity settings. For Azure OpenAI: if specified, the extension looks for "Endpoint" and "Key" values in this configuration section. If not specified, or if the section doesn't exist, it falls back to the environment variables `AZURE_OPENAI_ENDPOINT` and `AZURE_OPENAI_KEY`. This property is required for user-assigned managed identity authentication. For the OpenAI service (non-Azure), set the `OPENAI_API_KEY` environment variable. |
| **chatModel** | Optional. Gets or sets the ID of the model to use as a string, with a default value of `gpt-3.5-turbo`. |
| **temperature** | Optional. Gets or sets the sampling temperature to use, as a string between `0` and `2`. Higher values (like `0.8`) make the output more random, while lower values (like `0.2`) make it more focused and deterministic. Use either `temperature` or `topP`, but not both. |
| **topP** | Optional. Gets or sets an alternative to sampling with temperature, called nucleus sampling, as a string. In this sampling method, the model considers the results of the tokens with `top_p` probability mass. So `0.1` means only the tokens comprising the top 10% probability mass are considered. Use either `temperature` or `topP`, but not both. |
| **maxTokens** | Optional. Gets or sets the maximum number of tokens to generate in the completion, as a string with a default of `100`. The token count of your prompt plus `max_tokens` can't exceed the model's context length. Most models have a context length of 2,048 tokens (except for the newest models, which support 4,096). |
| **isReasoningModel** | Optional. Gets or sets a value indicating whether the chat completion model is a reasoning model. This option is experimental and is associated with the reasoning model until all models have parity in the expected properties, with a default value of `false`. |

::: zone-end
::: zone pivot="programming-language-python"

## Decorators

During the preview, define the input binding as a `generic_input_binding` binding of type `textCompletion`, which supports these parameters:

| Parameter | Description |
| --- | --- |
| **arg_name** | The name of the variable that represents the binding parameter. |
| **prompt** | Gets or sets the prompt to generate completions for, encoded as a string. |
| **ai_connection_name** | Optional. Gets or sets the name of the configuration section for AI service connectivity settings. For Azure OpenAI: if specified, the extension looks for "Endpoint" and "Key" values in this configuration section. If not specified, or if the section doesn't exist, it falls back to the environment variables `AZURE_OPENAI_ENDPOINT` and `AZURE_OPENAI_KEY`. This property is required for user-assigned managed identity authentication. For the OpenAI service (non-Azure), set the `OPENAI_API_KEY` environment variable. |
| **chat_model** | Optional. Gets or sets the ID of the model to use as a string, with a default value of `gpt-3.5-turbo`. |
| **temperature** | Optional. Gets or sets the sampling temperature to use, as a string between `0` and `2`. Higher values (like `0.8`) make the output more random, while lower values (like `0.2`) make it more focused and deterministic. Use either `temperature` or `top_p`, but not both. |
| **top_p** | Optional. Gets or sets an alternative to sampling with temperature, called nucleus sampling, as a string. In this sampling method, the model considers the results of the tokens with `top_p` probability mass. So `0.1` means only the tokens comprising the top 10% probability mass are considered. Use either `temperature` or `top_p`, but not both. |
| **max_tokens** | Optional. Gets or sets the maximum number of tokens to generate in the completion, as a string with a default of `100`. The token count of your prompt plus `max_tokens` can't exceed the model's context length. Most models have a context length of 2,048 tokens (except for the newest models, which support 4,096). |
| **is_reasoning_model** | Optional. Gets or sets a value indicating whether the chat completion model is a reasoning model. This option is experimental and is associated with the reasoning model until all models have parity in the expected properties, with a default value of `false`. |
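The `ai_connection_name` lookup order described above (named configuration section first, then well-known environment variables) can be modeled in plain Python. This sketch is illustrative only, not the extension's actual implementation; the `config` dictionary stands in for the host's configuration sections.

```python
import os
from typing import Optional


def resolve_azure_openai_connection(
    config: dict, ai_connection_name: Optional[str] = None
) -> dict:
    """Model the documented lookup order for Azure OpenAI connectivity
    settings. Illustrative sketch only, not the extension's code."""
    section = config.get(ai_connection_name) if ai_connection_name else None
    if section and "Endpoint" in section and "Key" in section:
        # The named configuration section supplies both values, so it wins.
        return {"endpoint": section["Endpoint"], "key": section["Key"]}
    # Otherwise fall back to the documented environment variables.
    return {
        "endpoint": os.environ.get("AZURE_OPENAI_ENDPOINT", ""),
        "key": os.environ.get("AZURE_OPENAI_KEY", ""),
    }
```

If the named section exists and contains both "Endpoint" and "Key", those values are used; otherwise the lookup falls back to `AZURE_OPENAI_ENDPOINT` and `AZURE_OPENAI_KEY`.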

::: zone-end
::: zone pivot="programming-language-powershell"

## Configuration

The binding supports these configuration properties that you set in the function.json file:

| Property | Description |
| --- | --- |
| **type** | Must be `textCompletion`. |
| **direction** | Must be `in`. |
| **name** | The name of the input binding. |
| **prompt** | Gets or sets the prompt to generate completions for, encoded as a string. |
| **aiConnectionName** | Optional. Gets or sets the name of the configuration section for AI service connectivity settings. For Azure OpenAI: if specified, the extension looks for "Endpoint" and "Key" values in this configuration section. If not specified, or if the section doesn't exist, it falls back to the environment variables `AZURE_OPENAI_ENDPOINT` and `AZURE_OPENAI_KEY`. This property is required for user-assigned managed identity authentication. For the OpenAI service (non-Azure), set the `OPENAI_API_KEY` environment variable. |
| **chatModel** | Optional. Gets or sets the ID of the model to use as a string, with a default value of `gpt-3.5-turbo`. |
| **temperature** | Optional. Gets or sets the sampling temperature to use, as a string between `0` and `2`. Higher values (like `0.8`) make the output more random, while lower values (like `0.2`) make it more focused and deterministic. Use either `temperature` or `topP`, but not both. |
| **topP** | Optional. Gets or sets an alternative to sampling with temperature, called nucleus sampling, as a string. In this sampling method, the model considers the results of the tokens with `top_p` probability mass. So `0.1` means only the tokens comprising the top 10% probability mass are considered. Use either `temperature` or `topP`, but not both. |
| **maxTokens** | Optional. Gets or sets the maximum number of tokens to generate in the completion, as a string with a default of `100`. The token count of your prompt plus `max_tokens` can't exceed the model's context length. Most models have a context length of 2,048 tokens (except for the newest models, which support 4,096). |
| **isReasoningModel** | Optional. Gets or sets a value indicating whether the chat completion model is a reasoning model. This option is experimental and is associated with the reasoning model until all models have parity in the expected properties, with a default value of `false`. |
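Putting these properties together, a function.json for a binding named `TextCompletionResponse` might look like the following sketch. The HTTP trigger bindings, route, prompt text, and `maxTokens` value are illustrative assumptions, not required values; only the `textCompletion` entry follows the table above.

```json
{
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "Request",
      "methods": [ "get" ],
      "route": "whois/{name}"
    },
    {
      "type": "textCompletion",
      "direction": "in",
      "name": "TextCompletionResponse",
      "prompt": "Who is {name}?",
      "maxTokens": "100"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "Response"
    }
  ]
}
```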

::: zone-end
::: zone pivot="programming-language-javascript,programming-language-typescript"

## Configuration

The binding supports these properties, which are defined in your code:

| Property | Description |
| --- | --- |
| **prompt** | Gets or sets the prompt to generate completions for, encoded as a string. |
| **aiConnectionName** | Optional. Gets or sets the name of the configuration section for AI service connectivity settings. For Azure OpenAI: if specified, the extension looks for "Endpoint" and "Key" values in this configuration section. If not specified, or if the section doesn't exist, it falls back to the environment variables `AZURE_OPENAI_ENDPOINT` and `AZURE_OPENAI_KEY`. This property is required for user-assigned managed identity authentication. For the OpenAI service (non-Azure), set the `OPENAI_API_KEY` environment variable. |
| **chatModel** | Optional. Gets or sets the ID of the model to use as a string, with a default value of `gpt-3.5-turbo`. |
| **temperature** | Optional. Gets or sets the sampling temperature to use, as a string between `0` and `2`. Higher values (like `0.8`) make the output more random, while lower values (like `0.2`) make it more focused and deterministic. Use either `temperature` or `topP`, but not both. |
| **topP** | Optional. Gets or sets an alternative to sampling with temperature, called nucleus sampling, as a string. In this sampling method, the model considers the results of the tokens with `top_p` probability mass. So `0.1` means only the tokens comprising the top 10% probability mass are considered. Use either `temperature` or `topP`, but not both. |
| **maxTokens** | Optional. Gets or sets the maximum number of tokens to generate in the completion, as a string with a default of `100`. The token count of your prompt plus `max_tokens` can't exceed the model's context length. Most models have a context length of 2,048 tokens (except for the newest models, which support 4,096). |
| **isReasoningModel** | Optional. Gets or sets a value indicating whether the chat completion model is a reasoning model. This option is experimental and is associated with the reasoning model until all models have parity in the expected properties, with a default value of `false`. |

::: zone-end

## Usage

See the Example section for complete examples.

## Related content