Commit 244b3af

Merge pull request #53402 from riswinto/main
update dlp module
2 parents: 9ecd663 + c2cc650

5 files changed

Lines changed: 35 additions & 13 deletions

learn-pr/wwl-sci/purview-data-loss-prevention-create-manage-policies/includes/adaptive-protection-data-loss-prevention.md

Lines changed: 8 additions & 2 deletions
@@ -40,9 +40,11 @@ Rather than enforcing a fixed action in all cases, policies can:
 
 For example, a low-risk user might receive a warning for an action, while the same action performed by a higher-risk user results in blocking.
 
+A safe first use of risk-based behavior is to vary guidance rather than blocking. Starting with warnings for lower-risk users and stronger actions only at higher risk levels helps validate assumptions without introducing unnecessary disruption.
+
 ## Understand how adaptive protection fits into DLP
 
-Adaptive Protection doesn't replace DLP policies. It extends them by adjusting enforcement based on user risk.
+Adaptive Protection extends DLP policies by adjusting enforcement based on user risk.
 
 At a high level:
 
@@ -54,6 +56,8 @@ The image shows where Adaptive Protection settings appear within a DLP rule. The
 
 :::image type="content" source="../media/adaptive-protection-conditions.png" alt-text="Screenshot showing the DLP rule conditions pane with Insider risk level for Adaptive Protection and selectable risk levels." lightbox="../media/adaptive-protection-conditions.png":::
 
+A common misuse of adaptive behavior is applying it broadly before static policies are stable, which makes enforcement outcomes harder to interpret.
+
 User risk can change over time as behavior changes. When risk increases or decreases, the same rule can produce different enforcement outcomes without redefining detection or scope.
 
 This is why a policy might warn a user in one situation and block the same action later, even though the rule itself hasn't changed.
@@ -68,4 +72,6 @@ Extending DLP with adaptive behavior makes sense when:
 - User context meaningfully alters risk
 - Static policies no longer scale with real usage patterns
 
-When applied intentionally, risk-based behavior extends DLP capabilities without sacrificing clarity or control.
+If enforcement outcomes can't be clearly explained based on risk signals, adaptive behavior is adding complexity without value and should be reconsidered.
+
+When applied intentionally, risk-based behavior extends DLP capabilities without sacrificing clarity.
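The risk-tiered behavior this file describes (warn a lower-risk user, block a higher-risk user for the same action) can be sketched as a simple lookup. This is an illustrative model only; the risk-level and action names below are assumptions for the sketch, not Purview's API.

```python
# Hypothetical sketch of risk-adaptive action selection: the same rule
# match produces different enforcement outcomes at different risk levels.
RISK_ACTIONS = {
    "minor": "audit",      # log only, no user-facing friction
    "moderate": "warn",    # policy tip, override allowed
    "elevated": "block",   # strongest action reserved for highest risk
}

def select_action(user_risk_level: str) -> str:
    """Return the enforcement action for a rule match at this risk level.

    Unknown or missing risk signals fall back to audit, so an absent
    signal never escalates to blocking.
    """
    return RISK_ACTIONS.get(user_risk_level, "audit")

print(select_action("minor"))     # audit
print(select_action("elevated"))  # block
```

The fallback choice reflects the guidance above: start with the least disruptive action and reserve blocking for clearly elevated risk.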

learn-pr/wwl-sci/purview-data-loss-prevention-create-manage-policies/includes/define-policy-detection.md

Lines changed: 5 additions & 5 deletions
@@ -36,15 +36,15 @@ For example, combining content-based detection with:
 
 Can significantly reduce unnecessary triggers.
 
-The goal isn't complexity for its own sake. It's clarity. Each condition should contribute meaningfully to the scenario you're trying to address.
+Detection should favor clarity over complexity. If adding conditions makes results harder to explain, refinement has likely gone too far.
 
 ## Balance coverage with precision
 
 Broad detection increases coverage, but it also increases the chance of false positives. Narrow detection improves precision, but it can miss edge cases.
 
 Early in policy creation, it's often better to favor clarity over completeness. A policy that triggers reliably in fewer scenarios is easier to validate and refine than one that fires constantly with mixed results.
 
-Detection can always be expanded later. Noise is harder to undo.
+Refinement stops helping when additional conditions reduce understanding more than they reduce noise. Detection can always be expanded later. Noise is harder to undo.
 
 ## Define what "good enough" detection looks like
 
@@ -56,7 +56,7 @@ Detection doesn't have to be perfect on day one. What matters is whether it reli
 - Results are understandable
 - False positives are limited and explainable
 
-This creates a strong foundation for validation and tuning.
+When detection consistently highlights the same types of activity considered risky, further refinement is unlikely to add value. This provides a strong foundation for validation and tuning.
 
 ## Account for how detection choices affect false positives
 
@@ -66,14 +66,14 @@ Being intentional about detection upfront reduces the need for heavy tuning afte
 
 ## Consider scenarios where data is reused or transformed
 
-Some scenarios are more complex than simple sharing or copying. When sensitive data is reused or transformed, detection becomes more important.
+Some scenarios are more complex than simple sharing or copying. When sensitive data is reused or transformed, detection plays a larger role in distinguishing real risk from incidental use.
 
 This includes workflows where:
 
 - Content is rewritten or summarized
 - Data is combined with other inputs
 - Sensitive information appears in generated responses
 
-In these cases, detection quality matters more than aggressive enforcement. Clear, accurate detection helps ensure policies respond to real risk instead of incidental use.
+In these scenarios, adding contextual conditions is often more effective than tightening content patterns. Clear, accurate detection helps ensure policies respond to meaningful risk without over-enforcing acceptable use.
 
 With detection defined, scope becomes the deciding factor for whether those signals create insight or noise.
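The point this file makes about combining content-based detection with contextual conditions can be illustrated with a toy filter. Everything here is hypothetical (the event fields `sensitive_type`, `is_external`, and `label` are invented for the sketch); it only shows how a contextual condition narrows content-only matches.

```python
# Hypothetical sketch: content detection alone vs. content plus context.
def content_match(event: dict) -> bool:
    # Content-based condition: the event contains a sensitive info type.
    return event["sensitive_type"] == "credit_card"

def contextual_match(event: dict) -> bool:
    # Contextual conditions narrow the match to genuinely risky sharing:
    # external destination, and not content already approved for sharing.
    return event["is_external"] and event["label"] != "public-approved"

events = [
    {"sensitive_type": "credit_card", "is_external": False, "label": "internal"},
    {"sensitive_type": "credit_card", "is_external": True, "label": "internal"},
]

content_only = [e for e in events if content_match(e)]
refined = [e for e in events if content_match(e) and contextual_match(e)]
print(len(content_only), len(refined))  # 2 1
```

The internal-only event still contains sensitive content but no longer triggers, which is the kind of unnecessary-trigger reduction the section describes.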

learn-pr/wwl-sci/purview-data-loss-prevention-create-manage-policies/includes/policy-actions.md

Lines changed: 5 additions & 1 deletion
@@ -33,6 +33,8 @@ Higher-risk scenarios might justify:
 
 For many scenarios, starting with nonenforcing actions and validating behavior through simulation helps confirm assumptions before stronger enforcement is applied.
 
+Warning actions are often more effective than blocking when the goal is behavior change rather than prevention. If users consistently adjust their behavior after seeing guidance, stronger enforcement may not be necessary.
+
 ## Use policy tips and notifications as guidance
 
 Policy tips and notifications play an important role in shaping behavior. They explain why an action is risky and what users can do instead.
@@ -57,7 +59,7 @@ Overrides are most effective when:
 - Justification is required and reviewed
 - Override patterns are used to refine detection and scope
 
-If most users override a policy, it's often a signal that the policy doesn't align with real workflows.
+If most users override a policy, it's often a signal that the policy doesn't align with real workflows. Isolated overrides, by contrast, are usually expected. Repeated overrides tied to the same workflow are a stronger signal that action choice needs adjustment.
 
 ## Pay attention to what override justifications reveal
 
@@ -69,6 +71,8 @@ Patterns in justifications can help answer questions like:
 - Is the policy triggering in unexpected scenarios?
 - Does the action match the actual level of risk?
 
+Override justifications that repeat the same explanation often point to legitimate work being interrupted. Varied or unclear justifications are more likely to indicate detection or scope issues rather than action choice.
+
 Using this feedback helps improve policy quality over time.
 
 Any action choice carries assumptions, and those assumptions need to be tested before enforcement.
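The override-justification heuristic this file adds (a repeated identical explanation suggests interrupted legitimate work) can be sketched as a small frequency check. The function and threshold are hypothetical and not part of any Purview tooling; exported justification text would stand in for the sample list.

```python
from collections import Counter

def dominant_justification(justifications: list[str], threshold: float = 0.5):
    """Return the most common justification if it accounts for at least
    `threshold` of all overrides, else None (varied justifications)."""
    if not justifications:
        return None
    text, count = Counter(justifications).most_common(1)[0]
    return text if count / len(justifications) >= threshold else None

sample = [
    "sending contract to approved vendor",
    "sending contract to approved vendor",
    "sending contract to approved vendor",
    "one-off customer request",
]
print(dominant_justification(sample))  # a repeated workflow dominates
```

A non-None result points at a specific workflow to exclude or rescope; a None result with a high override rate is more consistent with detection or scope issues.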

learn-pr/wwl-sci/purview-data-loss-prevention-create-manage-policies/includes/policy-scope.md

Lines changed: 9 additions & 3 deletions
@@ -31,6 +31,8 @@ Effective scoping starts by asking:
 
 Scoping to the locations that matter most reduces noise and makes results easier to interpret.
 
+In practice, overly broad scope often shows up as alerts across many unrelated workflows or locations. When results span activities that don't share a common risk pattern, scope is usually too wide to interpret meaningfully.
+
 ## Scope users, groups, and workloads intentionally
 
 Who a policy applies to matters as much as where it applies. Broad user scope can make policies difficult to validate and harder to tune.
@@ -45,14 +47,14 @@ Starting with a narrower scope makes it easier to understand policy behavior bef
 
 ## Use pilot scopes to validate assumptions
 
-Pilot scoping isn't just a rollout tactic. It's a design tool.
-
-Applying a policy to a limited group allows you to:
+Pilot scoping helps shape policy design. Applying a policy to a limited group allows you to:
 
 - Confirm detection behaves as expected
 - Observe real usage patterns
 - Identify unexpected enforcement outcomes
 
+A pilot scope has usually done its job when results are predictable and understandable, even if they aren't perfect. If alerts consistently reflect the scenarios you expected, expanding scope becomes a design choice rather than a guess.
+
 Pilot scopes are most effective when paired with simulation, allowing you to evaluate results before enforcing restrictions more broadly.
 
## Use inclusion and exclusion patterns to reduce noise
@@ -65,6 +67,8 @@ Exclusions can help:
 - Prevent duplicate alerts
 - Reduce friction in low-risk scenarios
 
+Exclusions that repeat across multiple policies often indicate a shared low-risk workflow rather than a detection problem.
+
 Clear inclusion and exclusion patterns keep policies focused on the behavior you actually want to address.
 
 ## Avoid overly broad scope
@@ -77,6 +81,8 @@ When a policy applies everywhere and to everyone:
 - False positives increase
 - User trust in enforcement decreases
 
+Broad scope isn't always a mistake. Scope can be intentionally broad when enforcement outcomes are consistent and well understood. Problems arise when scope is broad by default rather than by design.
+
 Effective policies balance coverage with precision. Expanding scope should be a deliberate decision, not a default.
 
 ## Scope policies for AI-assisted workflows thoughtfully
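One rough way to operationalize the "alerts across many unrelated workflows" signal this file describes is a distinct-count check over alert metadata. The workflow tags and threshold below are invented for illustration; any real review would weigh workflows qualitatively, not just count them.

```python
def scope_looks_too_broad(alert_workflows: list[str], max_distinct: int = 3) -> bool:
    """Flag a policy's scope for review when its alerts span more
    distinct workflows than can share a common risk pattern."""
    return len(set(alert_workflows)) > max_distinct

# Alerts spread across five unrelated workflows: hard to interpret.
alerts = ["email-external", "usb-copy", "cloud-upload", "print", "chat-share"]
print(scope_looks_too_broad(alerts))  # True
```

A flagged policy isn't necessarily wrong, matching the caveat above: broad scope is acceptable when it's deliberate and its outcomes are well understood.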

learn-pr/wwl-sci/purview-data-loss-prevention-create-manage-policies/includes/policy-validation.md

Lines changed: 8 additions & 2 deletions
@@ -18,6 +18,8 @@ Simulation helps you:
 - Observe how scope affects real users and workflows
 - Evaluate whether selected actions are appropriate
 
+In practice, simulation results are easier to interpret when patterns repeat. Repeated matches tied to the same action or workflow often point to scoping constraints. Scattered, one-off matches across unrelated activity might suggest detection is too broad.
+
 While simulation doesn't enforce restrictions, users might still see policy tips or guidance. This allows organizations to observe not only how policies would trigger, but also how users respond to messaging before enforcement is enabled.
 
 Treating simulation as feedback encourages deliberate refinement before enforcement.
@@ -46,8 +48,12 @@ When reviewing results, look for:
 - Whether activity aligns with expected risk scenarios
 - Patterns that suggest legitimate work is being affected
 
+Repeated alerts tied to the same users or workflows often indicate scope issues. Alerts spread across unrelated activity usually point to detection that needs refinement. High override rates tend to indicate action mismatch rather than user disregard.
+
 The goal isn't zero activity. It's predictable, understandable activity.
 
+If results are surprising or difficult to explain, enforcement should wait. If an alert can't be clearly explained, users are unlikely to understand it once enforcement begins.
+
 ## Make targeted tuning adjustments
 
 Simulation often reveals small adjustments that improve policy quality.
@@ -58,7 +64,7 @@ Common tuning changes include:
 - Refining detection conditions
 - Adjusting actions to better match observed risk
 
-These changes are easier to make before enforcement, when users aren't yet affected.
+Isolated events are rarely a strong signal. Tuning decisions are more reliable when patterns persist over time.
 
 ## Identify when a policy is ready for enforcement
 
@@ -69,7 +75,7 @@ A policy is typically ready for enforcement when:
 - Actions match the organization's risk tolerance
 - Users can understand why enforcement occurs
 
-Readiness is about confidence, not perfection. Policies can continue to evolve after enforcement begins.
+Readiness is about confidence, not perfection. Some noise is expected, as long as it's predictable and understood.
 
 ## Recognize validation as the start of the policy lifecycle
 
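The simulation-review heuristics in this module (repeated matches on one workflow point at scope, scattered one-off matches point at detection) could be sketched as a toy triage function over simulation results. The field name, categories, and 0.6 threshold are assumptions for illustration, not Purview output or behavior.

```python
from collections import Counter

def triage(matches: list[dict]) -> str:
    """Classify simulation matches into a rough next step."""
    workflows = Counter(m["workflow"] for m in matches)
    if not workflows:
        return "no signal"
    top_share = workflows.most_common(1)[0][1] / len(matches)
    if top_share >= 0.6:
        return "review scope"      # one workflow dominates the matches
    if all(count == 1 for count in workflows.values()):
        return "review detection"  # scattered one-off matches
    return "mixed"

sample = [{"workflow": "invoice-export"}] * 7 + [{"workflow": "hr-report"}] * 3
print(triage(sample))  # review scope
```

The "mixed" outcome matters in practice: when neither pattern dominates, targeted tuning of individual conditions is usually safer than rescoping.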