Commit f68a7ac

added videos
1 parent 7c853f7 commit f68a7ac

15 files changed: 20 additions & 10 deletions

learn-pr/wwl-databricks/design-implement-data-pipelines/1-introduction.yml

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ title: Introduction
 metadata:
   title: Introduction
   description: "Introduction to designing and implementing data pipelines in Azure Databricks"
-  ms.date: 12/07/2025
+  ms.date: 03/03/2026
   author: weslbo
   ms.author: wedebols
   ms.topic: unit

learn-pr/wwl-databricks/design-implement-data-pipelines/2-design-order-operations-pipeline.yml

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ title: Design order of operations for a pipeline
 metadata:
   title: Design Order of Operations for a Pipeline
   description: Learn how to design the order of operations for a data pipeline in Azure Databricks, including ingestion, cleaning, transformation, loading, serving, and monitoring stages.
-  ms.date: 12/07/2025
+  ms.date: 03/03/2026
   author: weslbo
   ms.author: wedebols
   ms.topic: unit

learn-pr/wwl-databricks/design-implement-data-pipelines/3-choose-notebook-lakeflow-pipelines.yml

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ title: Choose notebook vs Lakeflow Pipelines
 metadata:
   title: Choose Notebook vs Lakeflow Pipelines
   description: Learn how to choose between notebooks and Lakeflow Spark Declarative Pipelines for building data pipelines in Azure Databricks, comparing flexibility, maintainability, and use cases.
-  ms.date: 12/07/2025
+  ms.date: 03/03/2026
   author: weslbo
   ms.author: wedebols
   ms.topic: unit

learn-pr/wwl-databricks/design-implement-data-pipelines/4-design-task-logic-lakeflow-job.yml

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ title: Design Lakeflow job logic
 metadata:
   title: Design Lakeflow Job Logic
   description: Learn how to design task logic for Lakeflow Jobs in Azure Databricks, including task dependencies, execution patterns, conditional logic, and parameter configuration.
-  ms.date: 12/07/2025
+  ms.date: 03/03/2026
   author: weslbo
   ms.author: wedebols
   ms.topic: unit

learn-pr/wwl-databricks/design-implement-data-pipelines/5-design-error-handling-pipelines.yml

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ title: Design error handling in pipelines and jobs
 metadata:
   title: Design Error Handling in Pipelines and Jobs
   description: Learn how to design and implement error handling strategies in Azure Databricks data pipelines, notebooks, and jobs using expectations, retry policies, notifications, and exception handling.
-  ms.date: 12/07/2025
+  ms.date: 03/03/2026
   author: weslbo
   ms.author: wedebols
   ms.topic: unit

learn-pr/wwl-databricks/design-implement-data-pipelines/6-create-pipeline-notebook-precedence.yml

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ title: Create pipeline with notebook
 metadata:
   title: Create Pipeline with Notebook
   description: Learn how to create data pipelines using notebooks in Azure Databricks with task dependencies and precedence constraints in Lakeflow Jobs.
-  ms.date: 12/07/2025
+  ms.date: 03/03/2026
   author: weslbo
   ms.author: wedebols
   ms.topic: unit

learn-pr/wwl-databricks/design-implement-data-pipelines/7-create-pipeline-lakeflow-declarative.yml

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ title: Create pipeline with Lakeflow Spark Declarative Pipelines
 metadata:
   title: Create Pipeline with Lakeflow Spark Declarative Pipelines
   description: Learn how to create data pipelines using Lakeflow Spark Declarative Pipelines in Azure Databricks, including streaming tables, materialized views, and data quality expectations.
-  ms.date: 12/07/2025
+  ms.date: 03/03/2026
   author: weslbo
   ms.author: wedebols
   ms.topic: unit

learn-pr/wwl-databricks/design-implement-data-pipelines/8-knowledge-check.yml

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ title: Module assessment
 metadata:
   title: Module assessment
   description: "Knowledge check"
-  ms.date: 12/07/2025
+  ms.date: 03/03/2026
   author: weslbo
   ms.author: wedebols
   ms.topic: unit

learn-pr/wwl-databricks/design-implement-data-pipelines/9-summary.yml

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ title: Summary
 metadata:
   title: Summary
   description: "Summary"
-  ms.date: 12/07/2025
+  ms.date: 03/03/2026
   author: weslbo
   ms.author: wedebols
   ms.topic: unit

learn-pr/wwl-databricks/design-implement-data-pipelines/includes/1-introduction.md

Lines changed: 2 additions & 0 deletions
@@ -1,3 +1,5 @@
+>[!VIDEO https://learn-video.azurefd.net/vod/player?id=ac7df0e7-8d9c-4fae-8ef4-b6bb587bdffe]
+
 Building reliable data pipelines requires more than connecting data sources to destinations. You need to design workflows that handle failures gracefully, scale with growing data volumes, and remain maintainable as business requirements evolve. Azure Databricks provides multiple approaches for creating data pipelines—from flexible **notebooks** with procedural code to **Lakeflow Spark Declarative Pipelines** that automate orchestration and data quality enforcement.
 
 When you design data pipelines, you make decisions that affect every downstream consumer of your data. The order of operations determines whether transformations build on validated, well-structured data. Your choice between notebooks and declarative pipelines influences how much orchestration code you write versus how much the platform manages for you. **Task dependencies** in **Lakeflow Jobs** control execution flow and enable parallel processing that reduces pipeline runtime.
