
Commit 8c152e2

Merge pull request #54531 from MicrosoftDocs/main
Auto Publish – main to live - 2026-05-01 23:00 UTC
2 parents 652e7be + 3062110 commit 8c152e2

40 files changed

Lines changed: 469 additions & 64 deletions

learn-pr/paths/get-started-fabric/index.yml

Lines changed: 4 additions & 10 deletions

```diff
@@ -3,7 +3,7 @@ uid: learn.wwl.get-started-fabric
 metadata:
   title: Get started with Microsoft Fabric
   description: Explore how to implement data analytics solutions on a single platform with Microsoft Fabric. Integrate, transform, and store data to train AI models and create insightful reports.
-  ms.date: 02/26/2025
+  ms.date: 05/01/2026
   author: AngieRudduck
   ms.author: anrudduc
   ms.topic: learning-path
@@ -25,21 +25,15 @@ subjects:
 - data-analytics
 - data-engineering
 - data-integration
-- data-management
-- data-modeling
-- information-protection-governance
 - data-storage
-- data-visualization
 modules:
 - learn.wwl.introduction-end-analytics-use-microsoft-fabric
 - learn.wwl.get-started-lakehouses
-- learn.wwl.use-apache-spark-work-files-lakehouse
-- learn.wwl.work-delta-lake-tables-fabric
-- learn.wwl.use-data-factory-pipelines-fabric
-- learn.wwl.ingest-dataflows-gen2-fabric
 - learn.wwl.get-started-data-warehouse
 - learn.wwl.get-started-kusto-fabric
 - learn.wwl.get-started-data-science-fabric
-- learn.wwl.administer-fabric
+- learn-wwl.get-started-sql-database-microsoft-fabric
+- learn.wwl.design-semantic-models-scale
+- learn.wwl.understand-fabric-iq-fundamentals
 trophy:
   uid: learn.wwl.get-started-fabric.trophy
```

learn-pr/paths/purview-data-security-posture-management/index.yml

Lines changed: 2 additions & 0 deletions

```diff
@@ -32,6 +32,8 @@ modules:
 - learn.wwl.purview-data-security-posture-management-understand
 - learn.wwl.purview-dspm-assess-data-security-posture
 - learn.wwl.purview-data-security-posture-management-protect-remediate
+- learn.wwl.purview-data-security-posture-management-investigate-risks
+
 
 trophy:
   uid: learn.wwl.purview-data-security-posture-management.trophy
```
Lines changed: 13 additions & 0 deletions

```yml
### YamlMime:ModuleUnit
uid: learn.wwl.purview-data-security-posture-management-investigate-risks.analyze-sensitive-data-activity
title: Analyze sensitive data activity with DSPM investigation surfaces
metadata:
  title: Analyze Sensitive Data Activity with DSPM Investigation Surfaces
  description: "Analyze sensitive data activity across activity explorer, AI activities, audit logs, and reports to identify risky patterns."
  ms.date: 04/30/2026
  author: wwlpublish
  ms.author: riswinto
  ms.topic: unit
durationInMinutes: 9
content: |
  [!include[](includes/analyze-sensitive-data-activity.md)]
```
Lines changed: 85 additions & 0 deletions

When a data loss prevention (DLP) alert fires or sensitive data activity raises concern, you need to determine what happened and whether it requires action. Data Security Posture Management (DSPM) provides four investigation surfaces for this. Each answers a different investigation question.

## Investigation surfaces in Data Security Posture Management

Without DSPM, answering investigation questions means switching between audit log search, activity reports, DLP alert queues, and Insider Risk Management (IRM) dashboards independently. DSPM brings these starting points together so you can move between surfaces without losing context.

| Surface | What it answers | When to use it |
| --- | --- | --- |
| Activity explorer | What specific events occurred involving sensitive data? | Tracing individual events for specific users, time windows, or labels |
| AI activities tab | What happened during AI interactions with sensitive data? | Investigating prompts, responses, and DLP matches in AI contexts |
| Audit logs | What's the authoritative chronological record? | Reconstructing event sequences, building compliance records, tracing agent activity |
| Reports | What patterns exist across many events over time? | Identifying trends, comparing periods, spotting behavioral shifts |

A typical investigation might start in reports because a trend catches your attention, move to activity explorer to filter for the specific condition, then consult the audit log to get the authoritative record for a specific event.

### Activity explorer

Use activity explorer when you need to investigate a specific event involving sensitive data. It's accessible from **DSPM (preview)** > **Discover** > **Activity explorer** and shows activity related to content that contains sensitive information or has sensitivity labels applied.

You filter activities by:

- **Activity type**: File access, file copy, print, email send, cloud upload, sharing link creation
- **Workload**: SharePoint, OneDrive, Exchange, Teams, Endpoint devices
- **User**: Specific user principal name (UPN) or user groups
- **Time range**: Narrow to the period around a suspected event
- **Sensitivity labels**: Activities involving specific labels

A DLP alert about sensitive data shared externally becomes an investigation when you filter to that user, that time window, and that label. You then see not just the flagged event but the full sequence of events surrounding it.

:::image type="content" source="../media/activity-explorer-all-activity-types.png" alt-text="Screenshot showing activity explorer with the All activity types tab, filters, and a bar chart of sensitive data events." lightbox="../media/activity-explorer-all-activity-types.png":::
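That narrowing step — one user, one label, one time window — can be sketched in code. The snippet below is an illustrative Python sketch over mock events: the field names (`user`, `label`, `time`) and sample values are invented for this example, not a documented Purview schema, and in practice this filtering happens interactively in the portal.

```python
from datetime import datetime, timezone

# Mock events shaped loosely like activity explorer rows; field names and
# values are illustrative, not a Purview schema.
events = [
    {"activity": "FileCopiedToCloud", "user": "megan@contoso.com",
     "label": "Highly Confidential",
     "time": datetime(2026, 4, 29, 14, 5, tzinfo=timezone.utc)},
    {"activity": "FileAccessed", "user": "megan@contoso.com",
     "label": "General",
     "time": datetime(2026, 4, 29, 14, 7, tzinfo=timezone.utc)},
    {"activity": "SharingLinkCreated", "user": "alex@contoso.com",
     "label": "Highly Confidential",
     "time": datetime(2026, 4, 29, 15, 0, tzinfo=timezone.utc)},
]

def filter_events(events, user, label, start, end):
    """Narrow to one user, one label, and a time window around the alert."""
    return [e for e in events
            if e["user"] == user and e["label"] == label
            and start <= e["time"] <= end]

window_start = datetime(2026, 4, 29, 13, 0, tzinfo=timezone.utc)
window_end = datetime(2026, 4, 29, 16, 0, tzinfo=timezone.utc)
hits = filter_events(events, "megan@contoso.com", "Highly Confidential",
                     window_start, window_end)
print([e["activity"] for e in hits])  # → ['FileCopiedToCloud']
```

Each filter discards events that can't be part of the story you're reconstructing, which is why the flagged event and its surrounding sequence become visible at the same time.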
31+
32+
### AI activities tab
33+
34+
The **AI activities** tab within activity explorer shows events specific to AI interactions. AI interactions include prompts, data retrieval, responses, and DLP evaluation, all of which matter when reconstructing what happened.
35+
36+
If a DLP alert fires during a Copilot session, this tab shows the full interaction context including which sensitive data the AI accessed and what it returned.
37+
38+
:::image type="content" source="../media/activity-explorer-ai-activities.png" alt-text="Screenshot showing the AI activities tab in activity explorer with AI-specific filters and a bar chart of AI interactions." lightbox="../media/activity-explorer-ai-activities.png":::
39+
40+
> [!NOTE]
41+
> Activity explorer in DSPM (preview) is distinct from activity explorer in DSPM for AI (classic). Events in the AI activities tab originate from the preview version specifically. If you're investigating AI-related activity, confirm you're working in DSPM (preview) rather than the classic version.
42+
43+
### Audit logs
44+
45+
The unified audit log captures a chronological record of user and agent interactions. Each entry includes timestamps, user identity, the exact operation performed, and the result. Where activity explorer shows you filtered views of events, the audit log provides the authoritative compliance-grade record.
46+
47+
For Agent 365 specifically, the audit log captures agent-to-human, human-to-agent, agent-to-tools, and agent-to-agent interactions. This is the only surface that provides the full sequence of what an agent did during a specific time window.
48+
49+
Use audit logs when you need:
50+
51+
- A chronological reconstruction of events for formal reporting
52+
- The authoritative record for compliance or legal purposes
53+
- Details about agent interactions not visible in activity explorer
54+
- Correlation between multiple related events across services
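The chronological reconstruction and cross-service correlation described above amount to merging and sorting records by timestamp. The Python sketch below works on invented records: the operation names resemble common audit operations, but the record shape is illustrative, not the unified audit log schema, and retrieval itself happens through audit log search.

```python
from datetime import datetime, timezone

# Mock audit records from two workloads; keys are illustrative stand-ins
# for audit fields like CreationDate, UserIds, and Operations.
sharepoint = [
    {"time": datetime(2026, 4, 29, 14, 5, tzinfo=timezone.utc),
     "user": "megan@contoso.com", "operation": "FileDownloaded"},
]
exchange = [
    {"time": datetime(2026, 4, 29, 14, 9, tzinfo=timezone.utc),
     "user": "megan@contoso.com", "operation": "Send"},
    {"time": datetime(2026, 4, 29, 13, 58, tzinfo=timezone.utc),
     "user": "megan@contoso.com", "operation": "MailItemsAccessed"},
]

def reconstruct_timeline(*sources):
    """Merge records from several workloads into one chronological sequence."""
    merged = [record for source in sources for record in source]
    return sorted(merged, key=lambda r: r["time"])

timeline = reconstruct_timeline(sharepoint, exchange)
for r in timeline:
    print(r["time"].isoformat(), r["operation"])
```

The merged sequence (mail accessed, file downloaded, message sent) is the kind of ordered narrative a formal report or compliance record requires, which no single workload's view provides on its own.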
### Reports

DSPM reports provide aggregated views across sensitive data usage and posture trends. Navigate to **DSPM (preview)** > **Reports** to access:

- Sensitive data usage patterns over time
- Labeling progress and gaps
- Policy usage and match frequency
- Risky behavior patterns for users and AI agents

Reports answer "is this getting better or worse?" and "where is risk concentrating?" A rising DLP match rate for a specific sensitive information type might be concentrated in one workload or spread across many. Reports show you that distribution before you drill into individual events.

A behavioral shift in reports gives you the filter criteria to take into activity explorer.
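The "concentrated in one workload or spread across many" question is a simple aggregation. The sketch below uses invented DLP match records to show the shape of that distribution check; the workload breakdown mirrors the report dimension described above, not an export format.

```python
from collections import Counter

# Illustrative DLP match records; sample data invented for this example.
matches = [
    {"sit": "Credit Card Number", "workload": "Exchange"},
    {"sit": "Credit Card Number", "workload": "SharePoint"},
    {"sit": "Credit Card Number", "workload": "Exchange"},
    {"sit": "Credit Card Number", "workload": "Exchange"},
    {"sit": "U.S. SSN", "workload": "OneDrive"},
]

def match_distribution(matches, sit):
    """Count matches for one sensitive information type per workload."""
    return Counter(m["workload"] for m in matches if m["sit"] == sit)

dist = match_distribution(matches, "Credit Card Number")
# A skewed distribution tells you which workload to filter on next.
print(dist.most_common())
```

A skew like this one (most matches in a single workload) is exactly the filter criterion you'd carry into activity explorer.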
## Escalating to Data Security Investigations

Escalate when filtering and correlation don't explain the pattern or when deeper analysis is required. Data Security Investigations provides advanced analysis and collaborative remediation beyond what the DSPM surfaces offer.

To escalate from DSPM:

1. Navigate to **DSPM (preview)** > **Objectives**.
1. Select the **Prevent exfiltration to risky destinations** objective.
1. Select **View objective** to view exfiltration details and proactive insights.
1. On the **Exfiltration risk patterns** card, select **Create investigation**.
1. Provide a name, optional description, and optional AI context.
1. Select **Create investigation** to open the case in Data Security Investigations.

The investigation is automatically scoped to all sensitive data exfiltrated from your organization in the last 30 days.

> [!NOTE]
> Data Security Investigations is a separate Purview solution with its own workflows. The escalation from DSPM creates the starting point. The investigation itself continues in that solution.
Lines changed: 16 additions & 0 deletions

Organizations with sensitivity labels, data loss prevention (DLP) policies, and insider risk controls in place generate hundreds of signals daily. Alerts fire, policy matches trigger, AI agents interact with content. The challenge isn't the absence of signals. It's determining which ones represent actual risk, reconstructing what happened, and deciding whether to escalate.

Data Security Posture Management (DSPM) in Microsoft Purview provides investigation surfaces that bring these signals together. You can investigate activity, use Security Copilot to synthesize patterns, and examine AI agent behavior from a connected set of tools.

## Learning objectives

After completing this module, you'll be able to:

- Analyze sensitive data activity across activity explorer, audit logs, and DSPM reports
- Investigate data security events using Security Copilot in DSPM
- Investigate risky AI agent behavior through AI observability

## Prerequisites

- Familiarity with the Microsoft Purview portal and its navigation
- Working knowledge of sensitivity labels, DLP policies, and Insider Risk Management concepts
Lines changed: 54 additions & 0 deletions

AI agents access sensitive data at scale across workloads. When their behavior deviates from intended use or scope, you need to determine whether it's a misconfiguration, a policy gap, or an active security risk.

AI observability in Data Security Posture Management (DSPM) is the surface for this. It shows which agents are active, which are flagged as high risk, and what risky activity categories apply to each.

## AI observability for investigation

AI observability is accessible from **DSPM (preview)** > **AI observability**. The page shows all AI apps and agents with activity in the last 30 days, how many are classified as high risk by Insider Risk Management, and a breakdown of individual agents with the policies governing them.

Discovery asks "what agents exist?" Investigation asks "what is this specific agent doing that concerns me, and is it a problem?"

:::image type="content" source="../media/ai-observability-metrics.png" alt-text="Screenshot showing the AI observability landing page with key metrics for total agents, high risk agents, and agents with sensitive interactions." lightbox="../media/ai-observability-metrics.png":::

## Drilling into a specific agent

When you select a specific agent from the AI observability page, you see:

- **Agent details**: Entra-enabled status, name, created date, owner, agent user ID, policies applied, and which agent it's an instance of
- **Agent activities**: The risk level determined by Insider Risk Management, active insider risk alerts, and a count of risky activities from agent interactions
- **Recommendations**: Remediation suggestions based on identified risks

An agent with zero policies applied and no owner listed is a different risk conversation than one with 17 data protection policies and a known team behind it. The details section tells you which conversation you're having.

From the activities section, you can navigate directly to the insider risk alerts driving the agent's risk score. This lets you see whether triage agents have already categorized those alerts and whether their categorization matches what you're observing in the agent's behavior patterns.

:::image type="content" source="../media/ai-observability-app-detail.png" alt-text="Screenshot showing the agent detail page in AI observability with agent details, risk level, active insider risk alerts, and risky activities count." lightbox="../media/ai-observability-app-detail.png":::

## Interpreting agent risk signals

Insider Risk Management determines a risk level for agents based on their interaction patterns. The top risky activity categories are:

- **Oversharing**: The agent shared sensitive data more broadly than appropriate, such as surfacing content to users who shouldn't have access
- **Exfiltration**: The agent moved sensitive data outside the organization's boundaries through unauthorized channels
- **Unethical behavior**: The agent's interactions violated organizational policies or produced outputs that conflict with data security requirements

Each category leads to a different investigation path. Oversharing means examining access scope and audience. Exfiltration means tracing where data went. Unethical behavior means reviewing outputs and policy configuration.

## Agent-specific signals vs. human user signals

Agents process more interactions per hour, have broader data access, and act without real-time human approval. This means you can't apply human-investigation patterns directly. A spike in activity volume is unusual for a human user, but the same spike for an agent might be normal operation.

Agents also have a dual identity: the agent itself and its owner. Risk can originate from the agent's configuration, from the owner's decisions, or from the systems the agent accesses. An agent doing what it's designed to do but with overly broad access is a policy gap. An agent doing things it wasn't designed to do is a configuration or security issue. Identifying which you're looking at determines the response.

> [!NOTE]
> AI observability includes Agent 365 activity. The **Apps and agents** page doesn't. Use AI observability when investigating agent behavior.

## Investigation workflow for a risky agent

When AI observability flags an agent as high risk, a typical investigation follows this pattern:

1. Review the agent's risk level and risky activity categories in AI observability.
1. Examine the agent details to understand its purpose, owner, and configuration context.
1. Check sensitive interactions to see what data the agent accessed and whether those interactions are consistent with its intended function.
1. Cross-reference with the audit log for the chronological record of the agent's activities.
1. Evaluate whether the behavior represents a configuration issue, an access scope issue, or a security risk. The answer determines whether the owner fixes it, a policy needs adjustment, or immediate response is required.
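The evaluation in the final step can be sketched as a decision helper. The function below is a hypothetical illustration of the distinctions drawn in this unit (policy gap versus configuration or security issue versus governance gap); the inputs and returned labels are invented for the example, not product output.

```python
def classify_agent_finding(within_designed_scope, access_broader_than_needed,
                           has_owner):
    """Rough triage of a flagged agent, following the distinctions above:
    an agent doing its job with too-broad access is a policy gap; an agent
    acting outside its design is a configuration or security issue.
    Labels are illustrative, not product behavior."""
    if not within_designed_scope:
        return "configuration-or-security-issue: immediate response"
    if access_broader_than_needed:
        return "policy-gap: adjust access scope or protection policies"
    if not has_owner:
        return "governance-gap: assign an owner before closing"
    return "expected-behavior: document and close"

# An agent acting as designed, but with overly broad access:
print(classify_agent_finding(True, True, True))
```

Encoding the triage this way makes the response explicit: the same flagged agent routes to the owner, to policy adjustment, or to incident response depending on which branch it falls into.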
Lines changed: 53 additions & 0 deletions

An investigation typically involves multiple questions in sequence. You identify a user, check their recent activity, look for exfiltration patterns, assess whether alerts are related, and decide whether to escalate. Each of those steps can mean switching between surfaces, adjusting filters, and interpreting raw event data.

Security Copilot in Data Security Posture Management (DSPM) compresses that sequence. You describe what you're investigating in natural language. Security Copilot then correlates activity data, alerts, and risk signals into a synthesized answer.

## Use the Copilot Prompt Gallery for investigation

The Copilot Prompt Gallery for DSPM is accessible from suggested prompts on the DSPM posture page and from the **Copilot prompts** button on data security objective cards. The gallery has two tabs: **Prompts** for individual queries and **Promptbooks** for multi-step sequences.

### Filtering for investigation

The Prompts tab includes category filters that scope results to either investigation or assessment. For investigation, you can use:

- **Potentially Risky Users**
- **Potentially Suspicious Activity**
- **Alerts and Policies**

:::image type="content" source="../media/security-copilot-prompt-gallery.png" alt-text="Screenshot of the Copilot Prompt Gallery for DSPM with the category filter dropdown expanded." lightbox="../media/security-copilot-prompt-gallery.png":::

These filters return prompts scoped to users, timeframes, and behaviors. Filters like "Sensitive Data" and "Data at Risk" are scoped to data types, locations, and policy coverage, which serves posture assessment.

### The Risky user investigation Promptbook

The Risky user investigation Promptbook takes a user principal name and a timeframe of up to 30 days. It runs six prompts in sequence that cover sensitive data activity, exfiltration patterns, alert history, and recommended risk-reduction actions.

:::image type="content" source="../media/security-copilot-risky-user-investigation-prompt-book.png" alt-text="Screenshot of the Risky user investigation Promptbook showing input fields and the six prompts." lightbox="../media/security-copilot-risky-user-investigation-prompt-book.png":::

Each step builds on the previous results, correlating activity summaries, anomalies, and alerts into a single view.

If the results indicate a risk, validate through activity explorer or the audit log where you can see the authoritative event records.

### The Sensitive data protection Promptbook

This Promptbook takes a sensitive information type, trainable classifier, or sensitivity label name and a timeframe of up to 30 days. It maps where that data is stored, who interacts with it, whether it's leaving the organization, and who's responsible for the most exfiltration.

Use this Promptbook when an alert or report finding indicates a specific data type might be at risk. It answers whether the concern is real and identifies the users and paths involved. If you're using it without a specific triggering concern, you're doing assessment work rather than investigation.

## Constraints that affect investigation results

- **30-day maximum**: Time-based queries cover at most 30 days. Investigations spanning longer periods need direct analysis through activity explorer or audit logs.
- **Input specificity matters**: Use full user principal names, the exact name of sensitivity labels, and full names of sensitive information types or trainable classifiers. Partial names produce incomplete or wrong results.
- **Iterative refinement**: If the first response is too broad, add specificity. If too narrow, widen the time window or remove constraints. Security Copilot maintains conversation context across follow-up prompts.
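The first two constraints lend themselves to a pre-flight check before launching a promptbook. The helper below is a hypothetical local sketch — `validate_promptbook_input` is not a product API; it simply encodes the full-UPN and 30-day rules stated above.

```python
import re
from datetime import timedelta

MAX_WINDOW = timedelta(days=30)  # the 30-day maximum noted above

def validate_promptbook_input(upn, window):
    """Pre-flight checks: a full user principal name and a timeframe within
    the 30-day limit. Returns a list of problems (empty means ready)."""
    problems = []
    # A full UPN is user@domain; a bare alias produces incomplete results.
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", upn):
        problems.append(f"'{upn}' is not a full user principal name")
    if window > MAX_WINDOW:
        problems.append(f"{window.days}-day window exceeds the 30-day maximum")
    return problems

print(validate_promptbook_input("megan", timedelta(days=45)))
# two problems: a bare alias and an oversized window
```

Catching these issues before the promptbook runs avoids a synthesized answer that looks complete but is built on incomplete inputs.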
## When to validate with direct investigation

Security Copilot synthesizes and summarizes. That synthesis can occasionally be incomplete, miss relevant context, or present partial data. Validate through direct investigation when:

- The results seem inconsistent with what you know about the user or scenario
- You need the authoritative audit record for compliance or legal purposes
- The 30-day limit doesn't cover the period you need to investigate
- The investigation requires correlation with data outside DSPM, like endpoint signals or network logs
- You're making a high-consequence decision like escalation, disciplinary action, or policy change based on the findings

Security Copilot helps you identify what requires deeper investigation. It doesn't replace the need for direct analysis.
