learn-pr/wwl-data-ai/develop-voice-live-agent/includes/3b-voice-live-agent.md (3 additions, 3 deletions)
@@ -9,7 +9,7 @@ You can create and run an application to use Voice Live with a Microsoft Foundry
As you develop an agent in the Microsoft Foundry portal, you can enable **voice mode** to easily integrate Voice Live into your agent, and test it in the playground.
After enabling voice mode, you can use the **Configuration** pane to enable Voice Live settings, including:
@@ -18,7 +18,7 @@ After enabling voice mode, you can use the **Configuration** pane to enable Voic
- Voice activity detection (VAD) settings to detect interruptions and end of speech.
- Audio enhancement to mitigate background noise and improve audio quality.
- **Voice**: The specific voice used by the agent, and advanced voice settings to control the tone and speaking rate.
- **Interim response**: The agent can automatically generate speech while waiting for a model's response.
- **Avatar**: Inclusion of a visual avatar to represent the agent.
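The Configuration pane settings above correspond to options that are ultimately applied to the Voice Live session. As an illustrative sketch only, a session configuration might be expressed as a `session.update` message similar to the following; the field names (`voice`, `turn_detection`, `input_audio_noise_reduction`) and the voice name are assumptions based on the Voice Live API preview and should be checked against the current API reference:

```python
import json

# Illustrative Voice Live session settings mirroring the Configuration pane
# options described above. Field names and values are assumptions and may
# differ from the current Voice Live API reference.
session_config = {
    "type": "session.update",
    "session": {
        # Voice: the specific voice used by the agent, plus speaking rate.
        "voice": {
            "name": "en-US-AvaNeural",  # hypothetical voice name
            "type": "azure-standard",
            "rate": "1.0",
        },
        # Voice activity detection (VAD): interruption and end-of-speech handling.
        "turn_detection": {
            "type": "azure_semantic_vad",
            "threshold": 0.5,
            "silence_duration_ms": 500,
        },
        # Audio enhancement: background noise suppression.
        "input_audio_noise_reduction": {"type": "azure_deep_noise_suppression"},
    },
}

# The settings are sent to the service as a serialized session.update message.
message = json.dumps(session_config)
```

In the playground, these settings are managed for you; a code-based client would send a message like this over the session's WebSocket connection after connecting.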
## Create a voice agent using code
@@ -101,7 +101,7 @@ To use your agent, you need to build a client application that:
While you can implement these tasks using any of the functionality available in the APIs, the recommended pattern for Voice Live client applications is to:
- Use Microsoft Entra ID authentication to connect to the agent in a Microsoft Foundry project.
- Implement a custom **VoiceAssistant** class that encapsulates strongly-typed agent configuration, defines functions to configure and start the Voice Live session, and processes voice events.
- Implement a custom **AudioProcessor** class that encapsulates input and output through audio devices.
The following example shows a minimal implementation of this pattern in Python (using the *PyAudio* library for audio input and output).
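As a structural sketch of the recommended pattern, the two classes might be organized as follows. This is an outline only: apart from the **VoiceAssistant** and **AudioProcessor** names from the pattern above, all class members, method names, and endpoint values are hypothetical placeholders, and a real implementation would use the Voice Live WebSocket API, Microsoft Entra ID credentials, and PyAudio rather than the stubs shown here:

```python
from dataclasses import dataclass

@dataclass
class AgentConfig:
    """Strongly-typed agent configuration (illustrative fields only)."""
    project_endpoint: str
    agent_id: str
    voice: str = "en-US-AvaNeural"  # hypothetical voice name

class AudioProcessor:
    """Encapsulates audio input and output; a real version would wrap PyAudio."""
    def capture_chunk(self) -> bytes:
        return b"\x00" * 3200  # placeholder for ~100 ms of 16 kHz 16-bit PCM

    def play_chunk(self, chunk: bytes) -> int:
        return len(chunk)      # placeholder for writing to the speaker

class VoiceAssistant:
    """Configures and starts the Voice Live session and processes voice events."""
    def __init__(self, config: AgentConfig, audio: AudioProcessor):
        self.config = config
        self.audio = audio
        self.events: list[str] = []

    def start_session(self) -> None:
        # A real implementation would open a WebSocket connection to the
        # Foundry project using Microsoft Entra ID authentication and send
        # a session.update message with the agent configuration.
        self.events.append("session.started")

    def process_event(self, event_type: str) -> None:
        # Dispatch on the event type (for example, audio deltas, transcripts).
        if event_type == "response.audio.delta":
            self.audio.play_chunk(self.audio.capture_chunk())
        self.events.append(event_type)

assistant = VoiceAssistant(
    AgentConfig("https://example.services.ai.azure.com", "agent-123"),
    AudioProcessor(),
)
assistant.start_session()
assistant.process_event("response.audio.delta")
```

Separating session logic (**VoiceAssistant**) from device I/O (**AudioProcessor**) keeps the event-handling code testable without a microphone or speaker attached.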