articles/batch/batch-linux-nodes.md (+1 −1)

@@ -45,7 +45,7 @@ The [Batch node agent](https://github.com/Azure/Batch/blob/master/changelogs/nod
 az batch pool supported-images list
 ```
 
-For more information, you can refer to [Account - List Supported Images - REST API (Azure Batch Service) | Microsoft Docs](/rest/api/batchservice/account/list-supported-images).
+For more information, you can refer to [Account - List Supported Images - REST API (Azure Batch Service) | Microsoft Docs](/rest/api/batchservice/pools/list-supported-images).
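The `az batch pool supported-images list` command shown in this hunk wraps the List Supported Images REST operation. As a sketch of calling the operation directly, the request URL can be assembled like this (the `supportedimages` path segment follows the linked reference, but the api-version value and the helper name are illustrative assumptions, not part of the diff):

```python
from urllib.parse import urlencode

def supported_images_url(batch_url: str,
                         api_version: str = "2023-11-01.18.0",
                         filter_expr: str = "",
                         max_results: int = 0) -> str:
    """Build the GET URL for the Batch 'List Supported Images' operation.

    The service exposes it as GET {batchUrl}/supportedimages; the default
    api-version here is an assumption -- substitute the version your
    account targets.
    """
    params = {"api-version": api_version}
    if filter_expr:
        params["$filter"] = filter_expr   # OData filter, e.g. on osType
    if max_results:
        params["maxresults"] = str(max_results)
    return f"{batch_url.rstrip('/')}/supportedimages?{urlencode(params)}"

print(supported_images_url("https://myaccount.eastus.batch.azure.com"))
```

The response still has to be authenticated with the account's shared key or a Microsoft Entra token, exactly as for any other Batch service call.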
articles/batch/batch-pool-node-error-checking.md (+7 −9)
@@ -65,9 +65,7 @@ Relay provider errors offer deeper insights into pool operation failures, making
 
 ### Resize timeout or failure
 
-When you create a new pool or resize an existing pool, you specify the target number of nodes. The create or resize operation completes immediately, but the actual allocation of new nodes or removal of existing nodes might take several minutes. You can specify the resize timeout in the [Pool - Add](/rest/api/batchservice/pool/add) or [Pool - Resize](/rest/api/batchservice/pool/resize) APIs. If Batch can't allocate the target number of nodes during the resize timeout period, the pool goes into a steady state, and reports resize errors.
-
-The [resizeError](/rest/api/batchservice/pool/get#resizeerror) property lists the errors that occurred for the most recent evaluation.
+When you create a new pool or resize an existing pool, you specify the target number of nodes. The create or resize operation completes immediately, but the actual allocation of new nodes or removal of existing nodes might take several minutes. You can specify the resize timeout in the [Create Pool](/rest/api/batchservice/pools/create-pool) or [Resize Pool](/rest/api/batchservice/pools/resize-pool) APIs. If Batch can't allocate the target number of nodes during the resize timeout period, the pool goes into a steady state, and reports resize errors.
 
 Common causes for resize errors include:
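The steady-state check that this paragraph describes can be sketched as follows. The `PoolStatus` and `ResizeError` classes below are stand-ins for a Get Pool response with field names converted to snake_case; they are illustrative, not the Azure SDK types:

```python
from dataclasses import dataclass, field

@dataclass
class ResizeError:            # mirrors one entry in the pool's resize errors
    code: str
    message: str

@dataclass
class PoolStatus:             # minimal stand-in for a Get Pool response
    allocation_state: str     # "steady" | "resizing" | "stopping"
    resize_errors: list = field(default_factory=list)

def check_resize(pool: PoolStatus) -> list:
    """Once the pool is back in 'steady' state, surface any resize errors."""
    if pool.allocation_state != "steady":
        return []             # still resizing; poll again later
    return [f"{e.code}: {e.message}" for e in pool.resize_errors]

print(check_resize(PoolStatus("steady",
      [ResizeError("AccountCoreQuotaReached", "Core quota exceeded")])))
```

A real client would fetch the pool via Get Pool and map the returned JSON onto a structure like this before applying the same check.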
@@ -91,15 +89,15 @@ The following issues can occur when you use automatic scaling:
 - The resulting resize operation fails and times out.
 - A problem with the automatic scaling formula leads to incorrect node target values. The resize might either work or time out.
 
-To get information about the last automatic scaling evaluation, use the [autoScaleRun](/rest/api/batchservice/pool/get#autoscalerun) property. This property reports the evaluation time, the values and result, and any performance errors.
+To get information about the last automatic scaling evaluation, use the [Evaluate Pool Auto Scale](/rest/api/batchservice/pools/evaluate-pool-auto-scale) API. The response reports the evaluation time, the values and result, and any performance errors.
 
 The [pool resize complete event](./batch-pool-resize-complete-event.md) captures information about all evaluations.
 
 ### Pool deletion failures
 
 To delete a pool that contains nodes, Batch first deletes the nodes, which can take several minutes to complete. Batch then deletes the pool object itself.
 
-Batch sets the [poolState](/rest/api/batchservice/pool/get#poolstate) to `deleting` during the deletion process. The calling application can detect if the pool deletion is taking too long by using the `state` and `stateTransitionTime` properties.
+Batch sets the [poolState](/rest/api/batchservice/pools/delete-pool) to `deleting` during the deletion process. The calling application can detect if the pool deletion is taking too long by using the `state` and `stateTransitionTime` properties.
 
 If the pool deletion is taking longer than expected, Batch retries periodically until the pool is successfully deleted. In some cases, the delay is due to an Azure service outage or other temporary issues. Other factors that prevent successful pool deletion might require you to take action to correct the issue. These factors can include the following issues:
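The slow-deletion detection described in the pool-deletion paragraph can be sketched as a simple threshold on how long the pool has sat in the `deleting` state. The snake_case parameter names mirror the `state` and `stateTransitionTime` properties named in the article, and the 15-minute threshold is an arbitrary example value:

```python
from datetime import datetime, timedelta, timezone

def deletion_stalled(state: str, state_transition_time: datetime,
                     max_wait: timedelta = timedelta(minutes=15)) -> bool:
    """Flag a pool whose 'deleting' state has persisted past max_wait.

    state / state_transition_time correspond to the pool properties the
    article names; max_wait is an example threshold, not a service limit.
    """
    if state != "deleting":
        return False              # not deleting, nothing to flag
    return datetime.now(timezone.utc) - state_transition_time > max_wait

# A transition 30 minutes ago exceeds the 15-minute example threshold.
old = datetime.now(timezone.utc) - timedelta(minutes=30)
print(deletion_stalled("deleting", old))
```

When the check trips, the application would investigate the blocking factors the article lists rather than simply retrying the delete.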
@@ -113,11 +111,11 @@ If the pool deletion is taking longer than expected, Batch retries periodically
 
 ## Node errors
 
-Even when Batch successfully allocates nodes in a pool, various issues can cause some nodes to be unhealthy and unable to run tasks. These nodes still incur charges, so it's important to detect problems to avoid paying for nodes you can't use. Knowing about common node errors and knowing the current [jobState](/rest/api/batchservice/job/get#jobstate) is useful for troubleshooting.
+Even when Batch successfully allocates nodes in a pool, various issues can cause some nodes to be unhealthy and unable to run tasks. These nodes still incur charges, so it's important to detect problems to avoid paying for nodes you can't use. Knowing about common node errors and knowing the current [job state](/rest/api/batchservice/jobs/get-job) is useful for troubleshooting.
 
 ### Start task failures
 
-You can specify an optional [startTask](/rest/api/batchservice/pool/add#starttask) for a pool. As with any task, the start task uses a command line and can download resource files from storage. The start task runs for each node when the node starts. The `waitForSuccess` property specifies whether Batch waits until the start task completes successfully before it schedules any tasks to a node. If you configure the node to wait for successful start task completion, but the start task fails, the node isn't usable but still incurs charges.
+You can specify an optional [start task](/rest/api/batchservice/tasks/create-task) for a pool. As with any task, the start task uses a command line and can download resource files from storage. The start task runs for each node when the node starts. The `waitForSuccess` property specifies whether Batch waits until the start task completes successfully before it schedules any tasks to a node. If you configure the node to wait for successful start task completion, but the start task fails, the node isn't usable but still incurs charges.
 
 You can detect start task failures by using the [taskExecutionResult](/rest/api/batchservice/computenode/get#taskexecutionresult) and [taskFailureInformation](/rest/api/batchservice/computenode/get#taskfailureinformation) properties of the top-level [startTaskInformation](/rest/api/batchservice/computenode/get#starttaskinformation) node property.
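The `startTask` fragment with `waitForSuccess` that the start-task paragraph describes might look like the following sketch. Property names follow the camelCase REST convention used in the article; the exact set of required fields can vary by api-version, and the helper name is illustrative:

```python
import json

def start_task_body(command_line: str, wait_for_success: bool = True,
                    max_retries: int = 1) -> str:
    """Sketch of the startTask fragment of a pool definition (illustrative)."""
    return json.dumps({
        "startTask": {
            "commandLine": command_line,
            # Block task scheduling on this node until the start task succeeds.
            "waitForSuccess": wait_for_success,
            "maxTaskRetryCount": max_retries,
        }
    }, indent=2)

print(start_task_body("/bin/bash -c 'apt-get -y update'"))
```

Because a failed start task with `waitForSuccess` leaves the node billable but unusable, it is worth keeping the command idempotent and bounding retries as shown.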
@@ -190,10 +188,10 @@ After you make sure to retrieve any data you need from the node or upload it to
 
 You can delete old completed jobs or tasks whose task data is still on the nodes. Look in the `recentTasks` collection in the [taskInformation](/rest/api/batchservice/computenode/get#taskinformation) on the node, or use the [File - List From Compute Node](/rest/api/batchservice/file/listfromcomputenode) API. Deleting a job deletes all the tasks in the job. Deleting the tasks in the job triggers deletion of data in the task directories on the nodes, and frees up space. Once you've freed up enough space, reboot the node. The node should move out of `unusable` state and into `idle` again.
 
-To recover an unusable node in [VirtualMachineConfiguration](/rest/api/batchservice/pool/add#virtualmachineconfiguration) pools, you can remove the node from the pool by using the [Pool - Remove Nodes](/rest/api/batchservice/pool/removenodes) API. Then you can grow the pool again to replace the bad node with a fresh one.
+To recover an unusable node in [VirtualMachineConfiguration](/rest/api/batchservice/pools/get-pool#add-a-virtualmachineconfiguration-pool-with-os-disk) pools, you can remove the node from the pool by using the [Pool - Remove Nodes](/rest/api/batchservice/pools/remove-nodes) API. Then you can grow the pool again to replace the bad node with a fresh one.
 
 > [!Important]
-> Reimage isn't currently supported for [VirtualMachineConfiguration](/rest/api/batchservice/pool/add#virtualmachineconfiguration) pools.
+> Reimage isn't currently supported for [VirtualMachineConfiguration](/rest/api/batchservice/pools/get-pool#add-a-virtualmachineconfiguration-pool-with-os-disk) pools.
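The remove-then-regrow recovery that this hunk describes can be sketched as a small planning step: collect the unusable nodes, remove them, then resize back to the original target. Node state strings mirror the Batch compute-node states; the function and return shape are illustrative, not an SDK API:

```python
def recovery_plan(node_states: dict, target: int) -> dict:
    """Plan recovery for a VirtualMachineConfiguration pool (sketch).

    node_states maps node id -> state string as reported by the service;
    the plan is: remove every 'unusable' node, then resize back to target.
    """
    bad = [nid for nid, state in sorted(node_states.items())
           if state == "unusable"]
    return {
        "remove_nodes": bad,        # body for a Remove Nodes call
        "resize_target": target,    # follow-up Resize Pool target
    }

print(recovery_plan({"tvm-1": "idle", "tvm-2": "unusable"}, target=2))
```

The two steps map onto the Remove Nodes and Resize Pool operations linked in the paragraph; the replacement node is a fresh allocation, since reimage isn't available for these pools.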
includes/azure-batch-limits.md (+1 −1)

@@ -13,7 +13,7 @@
 | Azure Batch accounts per region per subscription | 1-3 | 50 |
 | Dedicated cores per Batch account | 0-900<sup>1</sup> | Contact support |
 | Low-priority cores per Batch account | 0-100<sup>1</sup> | Contact support |
-|**[Active](/rest/api/batchservice/job/get#jobstate)** jobs and job schedules per Batch account (**completed** jobs have no limit) | 100-300 | 1,000<sup>2</sup> |
+|**[Active](/rest/api/batchservice/jobs/get-job)** jobs and job schedules per Batch account (**completed** jobs have no limit) | 100-300 | 1,000<sup>2</sup> |
 | Pools per Batch account | 0-100<sup>1</sup> | 500<sup>2</sup> |