
Commit 75fb0b3

Update 5-install-libraries-for-compute.md
1 parent acbe4f7 commit 75fb0b3

1 file changed: learn-pr/wwl-databricks/select-and-configure-compute/includes/5-install-libraries-for-compute.md

Lines changed: 5 additions & 5 deletions
```diff
@@ -26,10 +26,10 @@ Maven libraries require **coordinates** in the format `groupId:artifactId:version`
 
 For R packages from CRAN, provide the package name. Unlike Python and Java libraries, CRAN installations always pull the latest version from the configured mirror. To pin specific R package versions, you need to store the package files in workspace files or volumes instead of installing from CRAN.
 
-With clusters configured in **standard access mode**, Maven coordinates and JAR file paths require **allowlist approval** before installation. This security measure ensures admins review and approve libraries that run on shared compute resources.
+With clusters configured in **standard access mode**, Maven coordinates and JAR file paths require **allow list approval** before installation. This security measure ensures admins review and approve libraries that run on shared compute resources.
 
 > [!NOTE]
-> To learn more about configuring and managing allowlists for libraries, see the [documentation](/azure/databricks/data-governance/unity-catalog/manage-privileges/allowlist).
+> To learn more about configuring and managing allow lists for libraries, see the [documentation](/azure/databricks/data-governance/unity-catalog/manage-privileges/allowlist).
 
 ## Install libraries from files
 
```
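For context on the coordinate format named in this hunk's header: a Maven coordinate is three colon-separated parts. The artifact shown below is purely illustrative and is not one the module installs.

```
groupId:artifactId:version
com.databricks:spark-xml_2.12:0.18.0
```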

```diff
@@ -47,15 +47,15 @@ Unity Catalog volumes offer enhanced security and governance for library storage
 
 Python **requirements.txt files** work with both workspace files and volumes in Databricks Runtime 15.0 and above. These files let you define multiple package dependencies in a single file, making it easier to maintain consistent environments across clusters. Upload the requirements.txt file and install it just like any other library—Azure Databricks automatically installs all listed packages.
 
-For clusters with standard access mode, you must add library file paths to the allowlist before installation. This applies to both workspace files and volumes, ensuring admins approve the libraries used on shared compute.
+For clusters with standard access mode, you must add library file paths to the allow list before installation. This applies to both workspace files and volumes, ensuring admins approve the libraries used on shared compute.
 
 ## Use init scripts for advanced configuration
 
 **Init scripts** run shell commands during **cluster startup**, before the Spark driver and executors start. While Databricks **doesn't recommend** using init scripts for library installation—cluster-scoped libraries provide a better approach—init scripts prove useful for system-level **configuration** that libraries can't handle.
 
 You might use init scripts to install system packages with `apt-get`, configure environment variables, or set up monitoring agents. For example, an init script could install a specialized database driver that requires system libraries, then configure connection parameters through environment variables. The script runs every time the cluster starts, ensuring your configuration persists across restarts.
 
-Store init scripts in Unity Catalog volumes for clusters running Databricks Runtime 13.3 LTS and above. Create a shell script file, upload it to a volume, then configure the cluster to run the script by specifying its path like `/Volumes/main/engineering/scripts/setup.sh`. For standard access mode, add the init script path to the allowlist before configuring the cluster.
+Store init scripts in Unity Catalog volumes for clusters running Databricks Runtime 13.3 LTS and above. Create a shell script file, upload it to a volume, then configure the cluster to run the script by specifying its path like `/Volumes/main/engineering/scripts/setup.sh`. For standard access mode, add the init script path to the allow list before configuring the cluster.
 
 Init scripts execute sequentially in the order you specify. If any script returns a non-zero exit code, the cluster fails to start. This failure protection prevents clusters from running with incomplete or incorrect configuration. You can troubleshoot failed init scripts by configuring cluster log delivery and examining the init script logs.
 
```
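As a concrete illustration of the requirements.txt approach this hunk touches: one file lists every dependency, one pinned version per line, so all clusters resolve the same set. The package names and versions below are hypothetical, not taken from the module.

```
# requirements.txt: pin exact versions so every cluster resolves the same set
pandas==2.1.4
pyarrow==14.0.2
requests==2.31.0
```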

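The init-script workflow in this hunk can be sketched as follows. This is a minimal illustration, not the module's own script: the package name, the environment variable, and the `/etc/environment` path are assumptions, and in practice the finished file would be uploaded to a Unity Catalog volume (such as `/Volumes/main/engineering/scripts/`) rather than kept locally.

```shell
#!/bin/bash
# Author a cluster init script locally; the upload-to-volume step is not shown.
cat > setup.sh <<'EOF'
#!/bin/bash
set -euo pipefail  # any failing command returns non-zero, so cluster startup aborts

# Install a system-level dependency a library might need (illustrative package)
apt-get update -y
apt-get install -y --no-install-recommends libsasl2-dev

# Persist a connection parameter as an environment variable (illustrative path/value)
echo "DB_CONN_HOST=db.example.internal" >> /etc/environment
EOF
chmod +x setup.sh

# Sanity-check the script's syntax before uploading it
bash -n setup.sh && echo "syntax OK"
```

Because the inner script sets `set -euo pipefail`, any failed command makes it exit non-zero, which (per the text above) causes the cluster start to fail rather than run half-configured.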
```diff
@@ -79,4 +79,4 @@ To configure the allowlist, metastore admins use Catalog Explorer, selecting the
 
 Different library installation methods suit different scenarios. The following diagram illustrates a decision flow to help you select the appropriate installation approach:
 
-![Diagram showing the different library installation methods.](../media/library-installation.png)
+![Diagram showing the different library installation methods.](../media/library-installation.png)
```
