
Commit 77f2ab2

Revise note on referencing external libraries in Synapse Spark
Updated note regarding external libraries in Synapse Apache Spark.
1 parent ca34f5d commit 77f2ab2

1 file changed

Lines changed: 1 addition & 1 deletion

File tree

articles/synapse-analytics/synapse-link/how-to-query-analytical-store-spark.md

@@ -134,7 +134,7 @@ df.write.format("cosmos.oltp").
In this gesture, you'll use Spark Streaming capability to load data from a container into a dataframe. The data will be stored in the primary data lake account (and file system) you connected to the workspace.

> [!NOTE]
- > If you're looking to reference external libraries in Synapse Apache Spark, learn more [here](../spark/apache-spark-azure-portal-add-libraries.md). For instance, if you're looking to ingest a Spark DataFrame to a container of Azure Cosmos DB for MongoDB, you can use the [MongoDB connector for Spark](https://docs.mongodb.com/spark-connector/master/).
+ > If you're looking to reference external libraries in Synapse Apache Spark, learn more [here](../spark/apache-spark-azure-portal-add-libraries.md).

## Load streaming DataFrame from Azure Cosmos DB container
In this example, you'll use Spark's structured streaming capability to load data from an Azure Cosmos DB container into a Spark streaming DataFrame using the change feed functionality in Azure Cosmos DB. The checkpoint data used by Spark will be stored in the primary data lake account (and file system) that you connected to the workspace.
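The note line removed above pointed to the MongoDB connector for Spark as a way to ingest a Spark DataFrame into an Azure Cosmos DB for MongoDB container. A minimal sketch of that approach, assuming the connector JAR is attached to the Spark pool; the source class name, option keys, and the URI, database, and collection values are illustrative placeholders, not confirmed by this diff:

```python
# Minimal sketch (assumed connector API): write a Spark DataFrame to an
# Azure Cosmos DB for MongoDB collection using the MongoDB connector for Spark.
# The connector must be attached to the Spark pool; all values are placeholders.
(df.write
    .format("com.mongodb.spark.sql.DefaultSource")  # MongoDB Spark connector source
    .mode("append")
    .option("uri", "<cosmos-db-for-mongodb-connection-string>")
    .option("database", "<database-name>")
    .option("collection", "<collection-name>")
    .save())
```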
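The "Load streaming DataFrame from Azure Cosmos DB container" section shown as trailing context describes reading the container's change feed as a streaming DataFrame, with Spark's checkpoint data kept in the primary data lake account. A minimal sketch under those assumptions — the `cosmos.oltp.changeFeed` streaming source (the batch `cosmos.oltp` format appears in the hunk header), the option keys, and the linked service, container, and folder paths below are assumptions and placeholders, not confirmed by this diff:

```python
# Minimal sketch (assumed options): load the Azure Cosmos DB change feed as a streaming
# DataFrame, then persist the output and Spark's checkpoints to the workspace's primary
# data lake file system (relative paths resolve to that file system in Synapse Spark).
dfStream = (spark.readStream
    .format("cosmos.oltp.changeFeed")                                # assumed streaming source name
    .option("spark.synapse.linkedService", "<linked-service-name>")  # placeholder linked service
    .option("spark.cosmos.container", "<container-name>")            # placeholder container
    .option("spark.cosmos.changeFeed.startFromTheBeginning", "true")
    .load())

query = (dfStream.writeStream
    .format("parquet")
    .option("path", "/streamedData")                    # placeholder output folder
    .option("checkpointLocation", "/streamCheckpoint")  # placeholder checkpoint folder
    .outputMode("append")
    .start())
```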
