Commit dbb5c66: Acrolinx fixes
1 parent 65243e9
4 files changed: 11 additions & 12 deletions

Lines changed: 6 additions & 5 deletions
@@ -1,7 +1,8 @@
-Generative AI models make it possible to build intelligent chat-based applications that can understand and reason over input. Traditionally, text input is the primary mode of interaction with AI models, but multimodal models are increasingly becoming available. These models make it possible for chat applications to respond to audio input that contains speech as well as text, and to respond using audible speech.
+Speech transcription and synthesis are useful capabilities in many scenarios, including:
 
-In this module, we'll discuss speech-enabled generative AI and explore how you can use Microsoft Foundry to create speech-capable generative AI solutions that:
+- Documenting spoken conversations in calls and meetings.
+- Generating captions for videos or presentations.
+- Creating audio information for vision-impaired users.
+- Developing hands-free AI assistants that read text messages or emails aloud.
 
-- Respond to spoken prompts.
-- Transcribe speech to text.
-- Synthesize speech from text.
+In this module, we'll explore how to use speech-capable generative AI models in Microsoft Foundry to convert speech to text and text to speech.

learn-pr/wwl-data-ai/develop-generative-ai-audio-apps/includes/3-develop-audio-chat-app.md

Lines changed: 1 addition & 0 deletions
@@ -17,6 +17,7 @@ To use a speech-to-text model in your own application, you can use the **AzureOp
 from openai import AzureOpenAI
 from pathlib import Path
 
+# Create an AzureOpenAI client
 client = AzureOpenAI(
     azure_endpoint=YOUR_FOUNDRY_ENDPOINT,
     api_key=YOUR_FOUNDRY_KEY,

learn-pr/wwl-data-ai/develop-generative-ai-audio-apps/includes/3b-develop-speech-app.md

Lines changed: 1 addition & 0 deletions
@@ -16,6 +16,7 @@ Similarly to speech-to-text models, you can use the **AzureOpenAI** client in th
 from openai import AzureOpenAI
 from pathlib import Path
 
+# Create an AzureOpenAI client
 client = AzureOpenAI(
     azure_endpoint=YOUR_FOUNDRY_ENDPOINT,
     api_key=YOUR_FOUNDRY_KEY,
Lines changed: 3 additions & 7 deletions
@@ -1,11 +1,7 @@
-In this module, you learned how to use speech-capable generative AI models to transcribe and synthesize speech.
+In this module, you learned about speech-capable AI models, and how you can use Microsoft Foundry to create generative AI solutions that:
 
-Speech transcription and synthesis are useful capabilities in many scenarios, including:
-
-- Documenting spoken conversations in calls and meetings.
-- Generating captions for videos or presentations.
-- Creating audio information for vision-impaired users.
-- Developing hands-free AI assistants that read text messages or emails aloud.
+- Transcribe speech to text.
+- Synthesize speech from text.
 
 > [!TIP]
 > For more information about speech-capable models in Microsoft Foundry, see **[Audio models](/azure/foundry/foundry-models/concepts/models-sold-directly-by-azure?pivots=azure-openai#audio-models&azure-portal=true)** in the Microsoft Foundry documentation.
