Commit 57815c9

Resolve broken link

Updated note about referencing external libraries in Synapse Apache Spark.

1 parent d8639a9 commit 57815c9

1 file changed: 1 addition & 1 deletion

File changed:

articles/synapse-analytics/synapse-link/how-to-query-analytical-store-spark-3.md
```diff
@@ -226,7 +226,7 @@ df.write.format("cosmos.oltp").
 ## Load streaming DataFrame from container
 In this gesture, you use Spark Streaming capability to load data from a container into a dataframe. The data is stored in the primary data lake account (and file system) you connected to the workspace.
 > [!NOTE]
-> If you're looking to reference external libraries in Synapse Apache Spark, learn more [here](../spark/apache-spark-azure-portal-add-libraries.md). For instance, if you're looking to ingest a Spark DataFrame to a container of Azure Cosmos DB for MongoDB, you can use the MongoDB connector for Spark [here](https://docs.mongodb.com/spark-connector/master/).
+> If you're looking to reference external libraries in Synapse Apache Spark, learn more [here](../spark/apache-spark-azure-portal-add-libraries.md).

 ## Load streaming DataFrame from Azure Cosmos DB container
 In this example, you use Spark's structured streaming to load data from an Azure Cosmos DB container into a Spark streaming DataFrame, using the change feed functionality in Azure Cosmos DB. The checkpoint data used by Spark will be stored in the primary data lake account (and file system) that you connected to the workspace.
```
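The change-feed read described in the diff context above can be sketched as follows. This is a minimal, hedged sketch: it assumes the Azure Cosmos DB connector's `cosmos.oltp.changeFeed` streaming source and its `spark.synapse.linkedService` / `spark.cosmos.container` / `spark.cosmos.changeFeed.startFromTheBeginning` options as used in Synapse notebooks; the names `MyLinkedService` and `MyContainer` are placeholders, not values from this commit.

```python
# Hypothetical sketch of configuring a streaming read from the
# Azure Cosmos DB change feed in a Synapse Spark notebook.
# Option keys follow the Synapse Link connector's conventions;
# linked service and container names below are placeholders.

def change_feed_read_options(linked_service: str, container: str) -> dict:
    """Build the option map for spark.readStream.format("cosmos.oltp.changeFeed")."""
    return {
        # Linked service that points at the Azure Cosmos DB account.
        "spark.synapse.linkedService": linked_service,
        # Container whose change feed is consumed.
        "spark.cosmos.container": container,
        # Read the change feed from the beginning of the container.
        "spark.cosmos.changeFeed.startFromTheBeginning": "true",
    }

# Usage inside a Synapse notebook (requires a live Spark session,
# so it is shown here as a comment rather than executed):
# df = (spark.readStream
#           .format("cosmos.oltp.changeFeed")
#           .options(**change_feed_read_options("MyLinkedService", "MyContainer"))
#           .load())
```

The checkpoint location for such a query would then live in the workspace's primary data lake account, as the paragraph above notes.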
