articles/ai-services/openai/concepts/provisioned-throughput.md (1 addition, 1 deletion)

@@ -52,6 +52,6 @@ We introduced a new deployment type called **ProvisionedManaged** which provides
  Provisioned throughput quota represents a specific amount of total throughput you can deploy. Quota in the Azure OpenAI Service is managed at the subscription level, meaning that it can be consumed by different resources within that subscription.

- Quota is specific to a (deployment type, mode, region) triplet and isn't interchangeable. Meaning you can't use quota for GPT-4 to deploy GPT-35-turbo. Customers can raise a support request to move the quota across deployment types, models, or regions but we can't guarantee that it will be possible.
+ Quota is specific to a (deployment type, model, region) triplet and isn't interchangeable, meaning you can't use quota for GPT-4 to deploy GPT-35-turbo. Customers can raise a support request to move quota across deployment types, models, or regions, but we can't guarantee that it will be possible.

  While we make every attempt to ensure that quota is always deployable, quota doesn't represent a guarantee that the underlying capacity is available for the customer to use. The service assigns capacity to the customer at deployment time, and if capacity is unavailable, the deployment fails with an out-of-capacity error.
articles/ai-services/speech-service/personal-voice-create-consent.md (48 additions, 2 deletions)

@@ -6,7 +6,7 @@ author: eric-urban
  manager: nitinme
  ms.service: azure-ai-speech
  ms.topic: how-to
- ms.date: 12/1/2023
+ ms.date: 1/10/2024
  ms.author: eur
  ---
@@ -16,7 +16,7 @@ ms.author: eur
  With the personal voice feature, it's required that every voice be created with explicit consent from the user. A recorded statement from the user is required acknowledging that the customer (Azure AI Speech resource owner) will create and use their voice.

- To add user consent to the personal voice project, you get the prerecorded consent audio file from a publicly accessible URL (`Consents_Create`) or upload the audio file (`Consents_Post`). In this article, you add consent from a URL.
+ To add user consent to the personal voice project, you provide the prerecorded consent audio file [from a publicly accessible URL](#add-consent-from-a-url) (`Consents_Create`) or [upload the audio file](#add-consent-from-a-file) (`Consents_Post`).

  ## Consent statement
@@ -28,8 +28,54 @@ You can get the consent statement text for each locale from the text to speech G
  "I [state your first and last name] am aware that recordings of my voice will be used by [state the name of the company] to create and use a synthetic version of my voice."
  ```

+ ## Add consent from a file
+
+ In this scenario, the audio files must be available locally.
+
+ To add consent to a personal voice project from a local audio file, use the `Consents_Post` operation of the custom voice API. Construct the request body according to the following instructions:
+
+ - Set the required `projectId` property. See [create a project](./personal-voice-create-project.md).
+ - Set the required `voiceTalentName` property. The voice talent name can't be changed later.
+ - Set the required `companyName` property. The company name can't be changed later.
+ - Set the required `audiodata` property with the consent audio file.
+ - Set the required `locale` property. This should be the locale of the consent. The locale can't be changed later. You can find the text to speech locale list [here](/azure/ai-services/speech-service/language-support?tabs=tts).
+
+ Make an HTTP POST request using the URI as shown in the following `Consents_Post` example.
+ - Replace `YourResourceKey` with your Speech resource key.
+ - Replace `YourResourceRegion` with your Speech resource region.
+ - Replace `JessicaConsentId` with a consent ID of your choice. The case-sensitive ID is used in the consent's URI and can't be changed later.
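The `Consents_Post` request described above can be sketched as follows. This is an illustrative sketch only, not the article's own example: the endpoint path and `api-version` value are assumptions, and the snippet only assembles the request pieces rather than sending them.

```python
# Sketch of assembling a Consents_Post request (illustrative; the endpoint
# path and api-version below are assumptions, not taken from the article).
def build_consents_post(resource_key, region, consent_id,
                        project_id, voice_talent_name, company_name, locale):
    url = (f"https://{region}.api.cognitive.microsoft.com/customvoice/"
           f"consents/{consent_id}?api-version=2023-12-01-preview")
    headers = {"Ocp-Apim-Subscription-Key": resource_key}
    # Form fields sent alongside the `audiodata` file part.
    data = {
        "projectId": project_id,
        "voiceTalentName": voice_talent_name,  # can't be changed later
        "companyName": company_name,           # can't be changed later
        "locale": locale,                      # can't be changed later
    }
    return url, headers, data

url, headers, data = build_consents_post(
    "YourResourceKey", "YourResourceRegion", "JessicaConsentId",
    "ProjectId", "Jessica Smith", "Contoso", "en-US")
```

The returned pieces could then be passed to any HTTP client as a multipart POST, with the consent audio file attached as the `audiodata` part.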
+ You should receive a response body in the following format:
+
+ ```json
+ {
+   "id": "JessicaConsentId",
+   "description": "Consent for Jessica voice",
+   "projectId": "ProjectId",
+   "voiceTalentName": "Jessica Smith",
+   "companyName": "Contoso",
+   "locale": "en-US",
+   "status": "NotStarted",
+   "createdDateTime": "2023-04-01T05:30:00.000Z",
+   "lastActionDateTime": "2023-04-02T10:15:30.000Z"
+ }
+ ```
+
+ The response header contains the `Operation-Location` property. Use this URI to get details about the `Consents_Post` operation. Here's an example of the response header:
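Polling the `Operation-Location` URI until the operation finishes can be sketched as below. This is a hedged sketch: `fetch_status` stands in for an HTTP GET that extracts the operation's status field, and the terminal status names (`Succeeded`, `Failed`) are assumptions, not values confirmed by the article.

```python
import time

# Sketch of polling the Operation-Location URI returned by Consents_Post.
# `fetch_status` is an injected callable standing in for an HTTP GET; the
# terminal status names below are assumptions for illustration.
def wait_for_operation(operation_location, fetch_status, interval=0.0):
    while True:
        status = fetch_status(operation_location)
        if status in ("Succeeded", "Failed"):
            return status
        time.sleep(interval)  # back off between polls

# Stubbed fetcher: reports "Running" once, then "Succeeded".
statuses = iter(["Running", "Succeeded"])
result = wait_for_operation("https://example.invalid/op", lambda _: next(statuses))
```

In a real client, `fetch_status` would issue the GET with the `Ocp-Apim-Subscription-Key` header and a sensible non-zero `interval`.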
+
+ ## Add consent from a URL
+
+ In this scenario, the audio files must already be stored in an Azure Blob Storage container.
+
  To add consent to a personal voice project from the URL of an audio file, use the `Consents_Create` operation of the custom voice API. Construct the request body according to the following instructions:

  - Set the required `projectId` property. See [create a project](./personal-voice-create-project.md).
articles/ai-services/speech-service/personal-voice-create-voice.md (51 additions, 8 deletions)

@@ -6,7 +6,7 @@ author: eric-urban
  manager: nitinme
  ms.service: azure-ai-speech
  ms.topic: how-to
- ms.date: 12/1/2023
+ ms.date: 1/10/2024
  ms.author: eur
  ---
@@ -18,13 +18,59 @@ To use personal voice in your application, you need to get a speaker profile ID.
  You create a speaker profile ID based on the speaker's verbal consent statement and an audio prompt (a clean human voice sample between 50 - 90 seconds). The user's voice characteristics are encoded in the `speakerProfileId` property that's used for text to speech. For more information, see [use personal voice in your application](./personal-voice-how-to-use.md).

- ## Create personal voice
+ > [!NOTE]
+ > The personal voice ID and speaker profile ID aren't the same. You can choose the personal voice ID, but the speaker profile ID is generated by the service. The personal voice ID is used to manage the personal voice. The speaker profile ID is used for text to speech.
+
+ You provide the audio files [from a publicly accessible URL](#create-personal-voice-from-a-url) (`PersonalVoices_Create`) or [upload the audio files](#create-personal-voice-from-a-file) (`PersonalVoices_Post`).

- To create a personal voice and get the speaker profile ID, use the `PersonalVoices_Create` operation of the custom voice API.
+ ## Create personal voice from a file

- Before calling this API, please store audio files in Azure Blob. In the example below, audio files are https://contoso.blob.core.windows.net/voicecontainer/jessica/*.wav.
+ In this scenario, the audio files must be available locally.

- Construct the request body according to the following instructions:
+ To create a personal voice and get the speaker profile ID, use the `PersonalVoices_Post` operation of the custom voice API. Construct the request body according to the following instructions:
+
+ - Set the required `projectId` property. See [create a project](./personal-voice-create-project.md).
+ - Set the required `consentId` property. See [add user consent](./personal-voice-create-consent.md).
+ - Set the required `audiodata` property. You can specify one or more audio files in the same request.
+
+ Make an HTTP POST request using the URI as shown in the following `PersonalVoices_Post` example.
+ - Replace `YourResourceKey` with your Speech resource key.
+ - Replace `YourResourceRegion` with your Speech resource region.
+ - Replace `JessicaPersonalVoiceId` with a personal voice ID of your choice. The case-sensitive ID is used in the personal voice's URI and can't be changed later.
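Since `PersonalVoices_Post` accepts one or more audio files in a single request, the file parts might be assembled as sketched below. The field name `audiodata` comes from the article; the tuple shape (the form commonly used for multipart uploads in Python HTTP clients) and the sample file paths are assumptions for illustration.

```python
# Sketch of building the multipart `audiodata` parts for PersonalVoices_Post.
# The (field_name, (filename, payload)) tuple shape and the paths below are
# illustrative assumptions, not the article's own example.
def build_audiodata_parts(paths):
    parts = []
    for p in paths:
        filename = p.rsplit("/", 1)[-1]       # keep only the file name
        payload = b"<wav bytes>"              # stand-in for open(p, "rb").read()
        parts.append(("audiodata", (filename, payload)))
    return parts

parts = build_audiodata_parts(
    ["samples/jessica-01.wav", "samples/jessica-02.wav"])
```

Each tuple would become one `audiodata` file part in the POST body, alongside the `projectId` and `consentId` form fields.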
+ Use the `speakerProfileId` property to integrate personal voice in your text to speech application. For more information, see [use personal voice in your application](./personal-voice-how-to-use.md).
+
+ The response header contains the `Operation-Location` property. Use this URI to get details about the `PersonalVoices_Post` operation. Here's an example of the response header:
+
+ ## Create personal voice from a URL
+
+ In this scenario, the audio files must already be stored in an Azure Blob Storage container.
+
+ To create a personal voice and get the speaker profile ID, use the `PersonalVoices_Create` operation of the custom voice API. Construct the request body according to the following instructions:

  - Set the required `projectId` property. See [create a project](./personal-voice-create-project.md).
  - Set the required `consentId` property. See [add user consent](./personal-voice-create-consent.md).

@@ -33,9 +79,6 @@ Construct the request body according to the following instructions:
  - Set the required `extensions` property to the extensions of the audio files.
  - Optionally, set the `prefix` property to set a prefix for the blob name.

- > [!NOTE]
- > The personal voice ID and speaker profile ID aren't same. You can choose the personal voice ID, but the speaker profile ID is generated by the service. The personal voice ID is used to manage the personal voice. The speaker profile ID is used for text to speech.

  Make an HTTP PUT request using the URI as shown in the following `PersonalVoices_Create` example.
  - Replace `YourResourceKey` with your Speech resource key.
  - Replace `YourResourceRegion` with your Speech resource region.
articles/aks/azure-cni-overlay.md (0 additions, 17 deletions)

@@ -29,23 +29,6 @@ You can provide outbound (egress) connectivity to the internet for Overlay pods
  You can configure ingress connectivity to the cluster using an ingress controller, such as Nginx or [HTTP application routing](./http-application-routing.md). You cannot configure ingress connectivity using Azure App Gateway. For details see [Limitations with Azure CNI Overlay](#limitations-with-azure-cni-overlay).

- ## Limitations
-
- Azure CNI Overlay networking in AKS currently has the following limitations:
-
- * In case you are using your own subnet to deploy the cluster, the names of the subnet, VNET and resource group which contains the VNET, must be 63 characters or less. This comes from the fact that these names will be used as labels in AKS worker nodes, and are therefore subjected to [Kubernetes label syntax rules](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set).
-
- ## Regional availability for ARM64 node pools
-
- Azure CNI Overlay is currently unavailable for ARM64 node pools in the following regions:
-
- - East US 2
- - France Central
- - Southeast Asia
- - South Central US
- - West Europe
- - West US 3

  ## Differences between Kubenet and Azure CNI Overlay

  Like Azure CNI Overlay, Kubenet assigns IP addresses to pods from an address space logically different from the VNet, but it has scaling and other limitations. The below table provides a detailed comparison between Kubenet and Azure CNI Overlay. If you don't want to assign VNet IP addresses to pods due to IP shortage, we recommend using Azure CNI Overlay.
articles/aks/cluster-autoscaler.md (0 additions, 1 deletion)

@@ -181,7 +181,6 @@ You can also configure more granular details of the cluster autoscaler by changi
  | daemonset-eviction-for-occupied-nodes (Preview) | Whether DaemonSet pods will be gracefully terminated from non-empty nodes | true |
  | scale-down-utilization-threshold | Node utilization level, defined as sum of requested resources divided by capacity, in which a node can be considered for scale down | 0.5 |
  | max-graceful-termination-sec | Maximum number of seconds the cluster autoscaler waits for pod termination when trying to scale down a node | 600 seconds |
- | balance-similar-node-groups | Detects similar node pools and balances the number of nodes between them | false |
  | balance-similar-node-groups | Detects similar node pools and balances the number of nodes between them | false |
  | expander | Type of node pool [expander](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders) to be used in scale up. Possible values: `most-pods`, `random`, `least-waste`, `priority` | random |
  | skip-nodes-with-local-storage | If true, cluster autoscaler doesn't delete nodes with pods with local storage, for example, EmptyDir or HostPath | true |