learn-pr/wwl/introduction-end-analytics-use-microsoft-fabric/5-knowledge-check.yml (13 additions & 2 deletions)
@@ -4,7 +4,7 @@ title: Module assessment
 metadata:
   title: Module assessment
   description: "Knowledge Check"
-  ms.date: 08/05/2025
+  ms.date: 03/04/2026
   author: angierudduck
   ms.author: anrudduc
   ms.topic: unit
@@ -44,4 +44,15 @@ quiz:
       explanation: "Incorrect. The Data Warehousing experience is for building and managing data warehouses."
     - content: "Data Factory"
       isCorrect: true
-      explanation: "Correct. The Data Factory experience is used to move and transform data using Power Query."
+      explanation: "Correct. The Data Factory workload is used to ingest, transform, and orchestrate data."
+  - content: "Why is OneLake's unified storage model important for AI capabilities in Fabric?"
+    choices:
+    - content: "It requires all data to be converted to a proprietary format for AI processing."
+      isCorrect: false
+      explanation: "Incorrect. OneLake uses the open Delta-Parquet format, not a proprietary format."
+    - content: "AI tools like Copilot and data agents can access the same governed data without separate preparation pipelines."
+      isCorrect: true
+      explanation: "Correct. Because all Fabric workloads store data in OneLake using an open format, AI capabilities can access the same governed data used by reports and dashboards."
+    - content: "It stores AI models alongside the data they process."
+      isCorrect: false
+      explanation: "Incorrect. Model management is handled by the Data Science workload, not by OneLake's storage format."
@@ -1,14 +1,7 @@
-Microsoft Fabric is an end-to-end analytics platform that provides a single, integrated environment for data professionals and the business to collaborate on data projects. Fabric provides a set of integrated services that enable you to ingest, store, process, and analyze data in a single environment.
+Organizations need to ingest, prepare, govern, and analyze data at scale, often across disconnected tools and teams. Increasingly, that same data also needs to be ready for AI workloads like machine learning models, Copilots, and intelligent agents. Managing these tasks across separate systems creates complexity, governance gaps, and duplicated effort.
 
-Microsoft Fabric provides tools for both citizen and professional data practitioners, and integrates with tools the business needs to make decisions. Fabric includes the following services:
+Microsoft Fabric is an end-to-end analytics platform that provides a single, integrated environment where data professionals and the business collaborate on data projects. Built on a unified data lake called OneLake, Fabric brings together the tools you need across that entire lifecycle.
 
-- Data engineering
-- Data integration
-- Data warehousing
-- Real-time intelligence
-- Data science
-- Business intelligence
+Because all data is ingested, prepared, and governed within Fabric, the same data that powers your reports and dashboards is also available to AI capabilities like Copilot, data agents, and Fabric IQ. This means the work you do to organize and govern your data directly supports your organization's AI initiatives.
 
-Additionally, Microsoft Fabric integrates Copilot, a generative AI assistant that enhances productivity across all workloads by providing intelligent code completion, natural language to SQL conversion, automated insights, and contextual assistance for data professionals and business users alike.
-
-This module introduces the Fabric platform, discusses who Fabric is for, explores Fabric services, and examines how Copilot enhances the analytics experience.
+This module introduces the Fabric platform, discusses who Fabric is for, explores Fabric workloads, and examines how Fabric supports both analytics and AI.
@@ -1,16 +1,18 @@
 Scalable analytics can be complex, fragmented, and expensive. Microsoft Fabric simplifies analytics solutions by providing a single, easy-to-use product that integrates various tools and services into one platform.
 
-Fabric is a unified _software-as-a-service_ (SaaS) platform where all data is stored in a single open format in OneLake. OneLake is accessible by all analytics engines in the platform, ensuring scalability, cost-effectiveness, and accessibility from anywhere with an internet connection.
+Fabric is a unified _software-as-a-service_ (SaaS) platform where all data is stored in a single open format in OneLake. All analytics engines in the platform can access OneLake, ensuring scalability, cost-effectiveness, and accessibility from anywhere with an internet connection.
 
 ## OneLake
 
-_OneLake_ is Fabric's centralized data storage architecture that enables collaboration by eliminating the need to move or copy data between systems. OneLake unifies your data across regions and clouds into a single logical lake without moving or duplicating data.
+**OneLake** is Fabric's centralized data storage architecture that enables collaboration by eliminating the need to move or copy data between systems. OneLake unifies your data across regions and clouds into a single logical lake without moving or duplicating data.
 
-OneLake is built on _Azure Data Lake Storage_ (ADLS) and supports various formats, including Delta, Parquet, CSV, and JSON. All compute engines in Fabric automatically store their data in OneLake, making it directly accessible without the need for movement or duplication. For tabular data, the analytical engines in Fabric write data in delta-parquet format and all engines interact with the format seamlessly.
+OneLake is built on **Azure Data Lake Storage Gen2** (ADLS Gen2) and supports various formats, including Delta, Parquet, CSV, and JSON. All compute engines in Fabric automatically store their data in OneLake, making it directly accessible without the need for movement or duplication. For tabular data, the analytical engines in Fabric write data in delta-parquet format and all engines interact with the format seamlessly.
 
-_Shortcuts_ are references to files or storage locations external to OneLake, allowing you to access existing cloud data without copying it. Shortcuts ensure data consistency and enable Fabric to stay in sync with the source.
+:::image type="content" border="true" source="../media/onelake-architecture.png" alt-text="Diagram of Fabric compute engines such as Data Engineering, Data Warehouse, Data Factory, Power BI, and Real-Time Intelligence all accessing the same OneLake data storage.":::
 
-:::image type="content" border="true" source="../media/onelake-architecture.png" alt-text="Diagram of the OneLake architecture with workloads accessing the same OneLake data storage and Delta-Parquet storage format as the foundation for serverless compute.":::
+**Shortcuts** are references to files or storage locations within OneLake or external data sources, such as Azure Data Lake Storage, Amazon S3, or Dataverse. Shortcuts allow you to access existing data without copying it, ensuring data consistency and enabling Fabric to stay in sync with the source.
+
+Because all Fabric workloads store data in OneLake using an open format, AI capabilities like Copilot and data agents can access the same governed data as your reports and dashboards without separate data preparation pipelines. The work you do to ingest, prepare, and govern data in Fabric is what makes that data available for AI workloads.
 
 ## Workspaces
 
@@ -22,11 +24,11 @@ Workspaces allow you to manage compute resources and integrate with Git for vers
 
 ## Administration and governance
 
-Fabric's OneLake is centrally governed and open for collaboration. Data is secured and governed in one place, which allows users to easily find and access the data they need. Fabric administration is centralized in the _Admin portal_.
+Fabric's OneLake is centrally governed and open for collaboration. Data is secured and governed in one place, which allows users to easily find and access the data they need. Fabric administration is centralized in the **Admin portal**.
 
 In the admin portal you can manage groups and permissions, configure data sources and gateways, and monitor usage and performance. You can also access the Fabric admin APIs and SDKs in the admin portal, which can automate common tasks and integrate Fabric with other systems.
 
-The _OneLake catalog_ helps you analyze, monitor, and maintain data governance. It provides guidance on sensitivity labels, item metadata, and data refresh status, offering insights into the governance status and actions for improvement.
+The **OneLake catalog** helps you analyze, monitor, and maintain data governance. It provides guidance on sensitivity labels, item metadata, and data refresh status, offering insights into the governance status and actions for improvement.
 
 > [!NOTE]
 > Review the [Microsoft Fabric administration](/fabric/admin) documentation for more information.
learn-pr/wwl/introduction-end-analytics-use-microsoft-fabric/includes/3-data-team.md (7 additions & 5 deletions)
@@ -14,12 +14,14 @@ Data scientists face difficulties integrating native data science techniques wit
 
 Microsoft Fabric simplifies the analytics development process by unifying tools into a SaaS platform. Fabric allows different roles to collaborate effectively without duplicating efforts.
 
-**Data engineers** can ingest, transform, and load data directly into OneLake using Pipelines, which automate workflows and support scheduling. They can store data in lakehouses, using the Delta-Parquet format for efficient storage and versioning. Notebooks provide advanced scripting capabilities for complex transformations.
+- **Data engineers** can ingest, transform, and load data directly into OneLake using Pipelines, which automate workflows and support scheduling. They can store data in lakehouses, using the Delta-Parquet format for efficient storage and versioning. Notebooks provide advanced scripting capabilities for complex transformations.
 
-**Analytics engineers** bridge the gap between data engineering and analysis by curating data assets in lakehouses, ensuring data quality, and enabling self-service analytics. They can create semantic models in Power BI to organize and present data effectively.
+- **Analytics engineers** bridge the gap between data engineering and analysis by curating data assets in lakehouses, ensuring data quality, and enabling self-service analytics. They can create semantic models in Power BI to organize and present data effectively.
 
-**Data analysts** can transform data upstream using dataflows and connect directly to OneLake with Direct Lake mode, reducing the need for downstream transformations. They can create interactive reports more efficiently using Power BI.
+- **Data analysts** can transform data upstream using dataflows and connect directly to OneLake with Direct Lake mode, reducing the need for downstream transformations. They can create interactive reports more efficiently using Power BI.
 
-**Data scientists** can use integrated notebooks with support for Python and Spark to build and test machine learning models. They can store and access data in lakehouses and integrate with Azure Machine Learning to operationalize and deploy models.
+- **Data scientists** can use integrated notebooks with support for Python and Spark to build and test machine learning models. They can store and access data in lakehouses and integrate with Azure Machine Learning to operationalize and deploy models. The predictions they generate can also serve as grounding data for Copilot and AI agents.
 
-**Low-to-no-code users** and **citizen developers** can discover curated datasets through the OneLake catalog and use Power BI templates to quickly create reports and dashboards. They can also use dataflows to perform simple ETL tasks without relying on data engineers.
+- **Low-to-no-code users** and **citizen developers** can discover curated datasets through the OneLake catalog and use Power BI templates to quickly create reports and dashboards. They can also use dataflows to perform simple ETL tasks without relying on data engineers, or ask questions of their data in natural language using Copilot.
+
+Every role in the data team contributes to the organization's ability to use AI effectively. Data engineers who maintain clean, well-governed data in OneLake build the foundation that Copilot and AI agents rely on. Analytics engineers who create consistent semantic models give AI tools the business context needed to generate accurate, meaningful answers.