Commit 0e3fa0c
Acrolinx
1 parent 9a91e5e commit 0e3fa0c

1 file changed: 4 additions & 4 deletions

articles/synapse-analytics/spark/apache-spark-autoscale.md
@@ -11,7 +11,7 @@ ms.date: 03/11/2024
 
 # Automatically scale Azure Synapse Analytics Apache Spark pools
 
-Apache Spark for Azure Synapse Analytics pool's Autoscale feature automatically scales the number of nodes in a cluster instance up and down. During the creation of a new Apache Spark for Azure Synapse Analytics pool, a minimum and maximum number of nodes, up to 200 nodes, can be set when Autoscale is selected. Autoscale then monitors the resource requirements of the load and scales the number of nodes up or down. There's no additional charge for this feature.
+Apache Spark for Azure Synapse Analytics pool's Autoscale feature automatically scales the number of nodes in a cluster instance up and down. During the creation of a new Apache Spark for Azure Synapse Analytics pool, a minimum and maximum number of nodes, up to 200 nodes, can be set when Autoscale is selected. Autoscale then monitors the resource requirements of the load and scales the number of nodes up or down. There's no extra charge for this feature.
 
 ## Metrics monitoring

@@ -38,10 +38,10 @@ When the following conditions are detected, Autoscale will issue a scale request
 
 For scale-up, the Azure Synapse Autoscale service calculates how many new nodes are needed to meet the current CPU and memory requirements, and then issues a scale-up request to add the required number of nodes.
 
-For scale-down, based on the number of executors, application masters per node, the current CPU and memory requirements, Autoscale issues a request to remove a certain number of nodes. The service also detects which nodes are candidates for removal based on current job execution. The scale down operation first decommissions the nodes, and then removes them from the cluster.
+For scale-down, based on the number of executors, application masters per node, and the current CPU and memory requirements, Autoscale issues a request to remove some nodes. The service also detects which nodes are candidates for removal based on current job execution. The scale-down operation first decommissions the nodes, and then removes them from the cluster.
 
 >[!NOTE]
->A note about updating and force applying autoscale configuration to an existing Spark pool. If **Force new setting** in the Azure portal or `ForceApplySetting` in [PowerShell](/powershell/module/az.synapse/update-azsynapsesparkpool) is enabled, then all existing Spark sessions are terminated and configuration changes are applied immediately. If this option is not selected, then the configuration is applied to the new Spark sessions and existing sessions are not terminated.
+>A note about updating and force applying autoscale configuration to an existing Spark pool. If **Force new setting** in the Azure portal or `ForceApplySetting` in [PowerShell](/powershell/module/az.synapse/update-azsynapsesparkpool) is enabled, then all existing Spark sessions are terminated and configuration changes are applied immediately. If this option isn't selected, then the configuration is applied to new Spark sessions and existing sessions aren't terminated.
 
 ## Get started

@@ -75,7 +75,7 @@ Apache Spark enables configuration of Dynamic Allocation of Executors through code
 ```
 The defaults specified through the code override the values set through the user interface.
 
-In this example, if your job requires only 2 executors, it will use only 2 executors. When the job requires more, it will scale up to 6 executors (1 driver, 6 executors). When the job doesn't need the executors, then it will decommission the executors. If it doesn't need the node, it will free up the node.
+In this example, if your job requires only two executors, it uses only two executors. When the job requires more, it scales up to six executors (one driver, six executors). When the job no longer needs the executors, it decommissions them. If it no longer needs a node, it frees up the node.
 
 >[!NOTE]
 >The maxExecutors will reserve the number of executors configured. Considering the example, even if you use only 2, it will reserve 6.
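The diff truncates the configuration block that the hunk above refers to, so the exact snippet isn't visible here. As a hedged sketch of what a two-to-six executor dynamic-allocation setup could look like, using Spark's standard `spark.dynamicAllocation.*` properties in a Synapse notebook `%%configure` session-configuration cell (the specific values are an assumption drawn from the example's prose, not the committed file):

```
%%configure -f
{
    "conf": {
        "spark.dynamicAllocation.enabled": "true",
        "spark.dynamicAllocation.minExecutors": "2",
        "spark.dynamicAllocation.maxExecutors": "6"
    }
}
```

As the note above states, `maxExecutors` acts as a capacity reservation: even a job that only ever uses two executors reserves six.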
