> If you register an application in the Azure portal, this step is completed for you.
3. The last step is to [assign the "Cognitive Services User" role](/powershell/module/az.Resources/New-azRoleAssignment) to the service principal (scoped to the resource). By assigning a role, you're granting service principal access to this resource. You can grant the same service principal access to multiple resources in your subscription.
->[!NOTE]
+
+> [!NOTE]
> The ObjectId of the service principal is used, not the ObjectId for the application.
> The ACCOUNT_ID will be the Azure resource ID of the Azure AI services account you created. You can find the Azure resource ID under "Properties" of the resource in the Azure portal.
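
As an illustrative aside (not part of the article, and the resource ID below is a made-up placeholder), the ACCOUNT_ID in the note follows the standard Azure resource ID layout, which a short Python sketch can split into its segments:

```python
# Hedged sketch: parsing an Azure resource ID of the kind used for ACCOUNT_ID.
# The ID below is a hypothetical placeholder, not a real resource.
def parse_resource_id(resource_id: str) -> dict:
    """Split an Azure resource ID into its well-known segments."""
    parts = resource_id.strip("/").split("/")
    # Layout: subscriptions/{sub}/resourceGroups/{rg}/providers/{ns}/{type}/{name}
    return {
        "subscription": parts[1],
        "resource_group": parts[3],
        "provider": parts[5],
        "type": parts[6],
        "name": parts[7],
    }

account_id = ("/subscriptions/00000000-0000-0000-0000-000000000000"
              "/resourceGroups/my-rg/providers/Microsoft.CognitiveServices"
              "/accounts/my-ai-account")
print(parse_resource_id(account_id)["name"])  # my-ai-account
```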
@@ -239,32 +240,31 @@ In this sample, a password is used to authenticate the service principal. The to
```
2. Get a token:
-> [!NOTE]
-> If you're using Azure Cloud Shell, the `SecureClientSecret` class isn't available.
$responseToken = Invoke-RestMethod -Uri $tokenEndpoint -Method Post -Body $body
+$accessToken = $responseToken.access_token
+```
+> [!NOTE]
+> Anytime you use passwords in a script, the most secure option is to use the PowerShell Secrets Management module and integrate with a solution such as Azure KeyVault.
+
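
For orientation, the token request that the PowerShell above posts is a standard OAuth2 client_credentials form. A hedged Python sketch of building that body (placeholder values, not the article's code):

```python
# Hedged sketch, not the article's PowerShell: the token request body for an
# OAuth2 client_credentials grant. All identifiers are hypothetical placeholders.
from urllib.parse import urlencode

def build_token_body(client_id: str, client_secret: str, scope: str) -> str:
    """URL-encode the form fields posted to the token endpoint."""
    return urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    })

body = build_token_body("app-id", "s3cret", "https://cognitiveservices.azure.com/.default")
print("grant_type=client_credentials" in body)  # True
```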
3. Call the Computer Vision API:
```powershell-interactive
$url = $account.Endpoint+"vision/v1.0/models"
-$result = Invoke-RestMethod -Uri $url -Method Get -Headers @{"Authorization"=$token.CreateAuthorizationHeader()} -Verbose
```
**articles/aks/csi-secrets-store-identity-access.md** (3 additions, 3 deletions)
@@ -179,7 +179,7 @@ In this security model, you can grant access to your cluster's resources to team
1. Access your key vault using the [`az aks show`][az-aks-show] command and the user-assigned managed identity created by the add-on.
```azurecli-interactive
-az aks show -g <resource-group> -n <cluster-name> --query addonProfiles.azureKeyvaultSecretsProvider.identity.clientId -o tsv
+az aks show -g <resource-group> -n <cluster-name> --query addonProfiles.azureKeyvaultSecretsProvider.identity.objectId -o tsv
```
Alternatively, you can create a new managed identity and assign it to your virtual machine (VM) scale set or to each VM instance in your availability set using the following commands.
@@ -193,10 +193,10 @@ In this security model, you can grant access to your cluster's resources to team
2. Create a role assignment that grants the identity permission access to the key vault secrets, access keys, and certificates using the [`az role assignment create`][az-role-assignment-create] command.

export KEYVAULT_SCOPE=$(az keyvault show --name <key-vault-name> --query id -o tsv)
-az role assignment create --role Key Vault Administrator --assignee <identity-client-id> --scope $KEYVAULT_SCOPE
+az role assignment create --role "Key Vault Administrator" --assignee $IDENTITY_OBJECT_ID --scope $KEYVAULT_SCOPE
```
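
The quoting fix above matters because the shell splits unquoted arguments on spaces, so `Key Vault Administrator` arrives as three separate arguments. A small illustration using Python's standard `shlex` (nothing here is part of the docs):

```python
# Illustration of why this change quotes the role name: shell word-splitting
# turns an unquoted multi-word role into several arguments.
import shlex

unquoted = shlex.split('az role assignment create --role Key Vault Administrator')
quoted = shlex.split('az role assignment create --role "Key Vault Administrator"')

print(unquoted[-3:])  # ['Key', 'Vault', 'Administrator']: three arguments
print(quoted[-1])     # Key Vault Administrator: one argument, as az expects
```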
3. Create a `SecretProviderClass` using the following YAML. Make sure to use your own values for `userAssignedIdentityID`, `keyvaultName`, `tenantId`, and the objects to retrieve from your key vault.
**articles/azure-monitor/essentials/azure-monitor-workspace-manage.md** (2 additions, 2 deletions)
@@ -133,12 +133,12 @@ Create a link between the Azure Monitor workspace and the Grafana workspace by u
If your cluster is already configured to send data to an Azure Monitor managed service for Prometheus, you must disable it first using the following command:

```azurecli
-az aks update --disable-azuremonitormetrics -g <cluster-resource-group> -n <cluster-name>
+az aks update --disable-azure-monitor-metrics -g <cluster-resource-group> -n <cluster-name>
```

Then, either enable or re-enable using the following command:

```azurecli
-az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id
+az aks update --enable-azure-monitor-metrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id
```
**articles/iot-operations/connect-to-cloud/howto-configure-data-lake.md** (2 additions, 2 deletions)
@@ -270,7 +270,7 @@ The specification field of a DataLakeConnectorTopicMap resource contains the fol
- `mqttSourceTopic`: The name of the MQTT topic(s) to subscribe to. Supports [MQTT topic wildcard notation](https://chat.openai.com/share/c6f86407-af73-4c18-88e5-f6053b03bc02).
- `qos`: The quality of service level for subscribing to the MQTT topic. It can be one of 0 or 1.
- `table`: The table field specifies the configuration and properties of the Delta table in the Data Lake Storage account. It has the following subfields:
-  - `tableName`: The name of the Delta table to create or append to in the Data Lake Storage account. This field is also known as the container name when used with Azure Data Lake Storage Gen2. It can contain any English letter, upper or lower case, and underbar `_`, with length up to 256 characters. No dashes `-` or space characters are allowed.
+  - `tableName`: The name of the Delta table to create or append to in the Data Lake Storage account. This field is also known as the container name when used with Azure Data Lake Storage Gen2. It can contain any **lower case** English letter, and underbar `_`, with length up to 256 characters. No dashes `-` or space characters are allowed.
  - `schema`: The schema of the Delta table, which should match the format and fields of the message payload. It's an array of objects, each with the following subfields:
    - `name`: The name of the column in the Delta table.
    - `format`: The data type of the column in the Delta table. It can be one of `boolean`, `int8`, `int16`, `int32`, `int64`, `uInt8`, `uInt16`, `uInt32`, `uInt64`, `float16`, `float32`, `float64`, `date32`, `timestamp`, `binary`, or `utf8`. Unsigned types, like `uInt8`, aren't fully supported, and are treated as signed types if specified here.
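
To make the tightened `tableName` rule concrete, here is a hedged sketch of a validator. The regex is my reading of the rule exactly as stated in the added line (lower-case English letters and underbar only, up to 256 characters, no dashes or spaces), not an official check:

```python
# Illustrative validator for the new tableName rule: lower-case English
# letters and underbar only, length 1..256. Assumption: the rule as worded
# admits nothing else (e.g. no digits).
import re

TABLE_NAME_RE = re.compile(r"^[a-z_]{1,256}$")

def is_valid_table_name(name: str) -> bool:
    return TABLE_NAME_RE.fullmatch(name) is not None

print(is_valid_table_name("telemetry_table"))  # True
print(is_valid_table_name("Telemetry-Table"))  # False: upper case and dash
```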
**articles/machine-learning/how-to-use-automated-ml-for-ml-models.md** (14 additions, 3 deletions)
@@ -123,15 +123,26 @@ Otherwise, you see a list of your recent automated ML experiments, including th
Additional configurations|Description
------|------
Primary metric| Main metric used for scoring your model. [Learn more about model metrics](how-to-configure-auto-train.md#primary-metric).
-Debug model via the Responsible AI dashboard | Generate a Responsible AI dashboard to do a holistic assessment and debugging of the recommended best model. This includes insights such as model explanations, fairness and performance explorer, data explorer, and model error analysis. [Learn more about how you can generate a Responsible AI dashboard](./how-to-responsible-ai-insights-ui.md). The RAI dashboard can only be run if 'Serverless' compute (preview) is specified in the experiment set-up step.
+Enable ensemble stacking | Ensemble learning improves machine learning results and predictive performance by combining multiple models as opposed to using single models. [Learn more about ensemble models](concept-automated-ml.md#ensemble).
Blocked algorithm| Select algorithms you want to exclude from the training job. <br><br> Allowing algorithms is only available for [SDK experiments](how-to-configure-auto-train.md#supported-algorithms). <br> See the [supported algorithms for each task type](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels).
-Exit criterion| When any of these criteria are met, the training job is stopped. <br> *Training job time (hours)*: How long to allow the training job to run. <br> *Metric score threshold*: Minimum metric score for all pipelines. This ensures that if you have a defined target metric you want to reach, you don't spend more time on the training job than necessary.
-Concurrency| *Max concurrent iterations*: Maximum number of pipelines (iterations) to test in the training job. The job won't run more than the specified number of iterations. Learn more about how automated ML performs [multiple child jobs on clusters](how-to-configure-auto-train.md#multiple-child-runs-on-clusters).
+Explain best model| Automatically shows explainability on the best model created by Automated ML.
1. (Optional) View featurization settings: if you choose to enable **Automatic featurization** in the **Additional configuration settings** form, default featurization techniques are applied. In the **View featurization settings**, you can change these defaults and customize accordingly. Learn how to [customize featurizations](#customize-featurization).
+1. The **[Optional] Limits** form allows you to do the following.
+
+   | Option | Description |
+   |---|-----|
+   |**Max trials**| Maximum number of trials, each with a different combination of algorithm and hyperparameters, to try during the AutoML job. Must be an integer between 1 and 1000.
+   |**Max concurrent trials**| Maximum number of trial jobs that can be executed in parallel. Must be an integer between 1 and 1000.
+   |**Max nodes**| Maximum number of nodes this job can use from the selected compute target.
+   |**Metric score threshold**| When this threshold value is reached for an iteration metric, the training job terminates. Keep in mind that meaningful models have a correlation > 0; otherwise, they are as good as guessing the average. The metric threshold should be between the bounds [0, 10].
+   |**Experiment timeout (minutes)**| Maximum time in minutes the entire experiment is allowed to run. Once this limit is reached, the system cancels the AutoML job, including all its trials (child jobs).
+   |**Iteration timeout (minutes)**| Maximum time in minutes each trial job is allowed to run. Once this limit is reached, the system cancels the trial.
+   |**Enable early termination**| Select to end the job if the score is not improving in the short term.
1. The **[Optional] Validate and test** form allows you to do the following.
**articles/sap/workloads/dbms-guide-maxdb.md** (3 additions, 0 deletions)
@@ -342,6 +342,9 @@ When deploying SAP MaxDB into Azure, you must review your backup methodology. Ev
Backing up and restoring a database in Azure works the same way as it does for on-premises systems, so you can use standard SAP MaxDB backup/restore tools, which are described in one of the SAP MaxDB documentation documents listed in SAP Note [767598].

+#### <a name="01885ad6-88cf-4d5a-bdb5-6d43a6eed53e"></a>Backup and Restore with Azure Backup
+
+You can also integrate MaxDB backup with **Azure Backup** using the third-party backup tool **MaxBack** (https://maxback.io). MaxBack allows you to back up and restore MaxDB on Windows with VSS integration, which is also used by Azure Backup. The advantage of using Azure Backup is that backup and restore are done at the storage level. MaxBack ensures that the database is in the right state for backup and restore, and automatically handles log volume backups.
+
#### <a name="77cd2fbb-307e-4cbf-a65f-745553f72d2c"></a>Performance Considerations for Backup and Restore
As in bare-metal deployments, backup and restore performance are dependent on how many volumes can be read in parallel and the throughput of those volumes. Therefore, one can assume:
**articles/synapse-analytics/sql-data-warehouse/design-guidance-for-replicated-tables.md** (15 additions, 1 deletion)
@@ -3,7 +3,7 @@ title: Design guidance for replicated tables
description: Recommendations for designing replicated tables in Synapse SQL pool
author: WilliamDAssafMSFT
ms.author: wiassaf
-ms.date: 09/27/2022
+ms.date: 01/09/2024
ms.service: synapse-analytics
ms.subservice: sql-dw
ms.topic: conceptual
@@ -163,6 +163,8 @@ For example, this load pattern loads data from four sources, but only invokes on
To ensure consistent query execution times, consider forcing the build of the replicated tables after a batch load. Otherwise, the first query will still use data movement to complete the query.

+The 'Build Replicated Table Cache' operation can execute up to two operations simultaneously. For example, if you attempt to rebuild the cache for five tables, the system will utilize a staticrc20 (which cannot be modified) to concurrently build two tables at a time. Therefore, it is recommended to avoid using large replicated tables exceeding 2 GB, as this may slow down the cache rebuild across the nodes and increase the overall time.
+
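
Since the cache build handles at most two tables concurrently, N pending tables rebuild in ceil(N / 2) waves. A toy Python sketch of that scheduling (illustrative only; this is not a Synapse API, and the table names are made up):

```python
# Illustrative only: with at most two concurrent 'Build Replicated Table
# Cache' operations, five tables rebuild in ceil(5 / 2) = 3 waves.
from math import ceil

def rebuild_waves(tables, concurrency=2):
    """Group table names into the batches the builder would process together."""
    return [tables[i:i + concurrency] for i in range(0, len(tables), concurrency)]

tables = ["dim_a", "dim_b", "dim_c", "dim_d", "dim_e"]
waves = rebuild_waves(tables)
print(len(waves))  # 3
```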
This query uses the [sys.pdw_replicated_table_cache_state](/sql/relational-databases/system-catalog-views/sys-pdw-replicated-table-cache-state-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) DMV to list the replicated tables that have been modified, but not rebuilt.
```sql
@@ -184,6 +186,18 @@ To trigger a rebuild, run the following statement on each table in the preceding
SELECT TOP 1 * FROM [ReplicatedTable]
```
+To monitor the rebuild process, you can use [sys.dm_pdw_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-exec-requests-transact-sql?view=azure-sqldw-latest&preserve-view=true), where the `command` starts with 'BuildReplicatedTableCache'. For example:
+
+```sql
+-- Monitor Build Replicated Cache
+SELECT *
+FROM sys.dm_pdw_exec_requests
+WHERE command LIKE 'BuildReplicatedTableCache%'
+```
+
+> [!TIP]
+> [Table size queries](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-overview#table-size-queries) can be used to verify which table(s) have a replicated distribution policy and which are larger than 2 GB.
+
## Next steps
To create a replicated table, use one of these statements: