
Commit 9e05fd9

updates m11
1 parent 254e30f commit 9e05fd9

10 files changed

Lines changed: 59 additions & 81 deletions
Lines changed: 3 additions & 5 deletions
@@ -1,11 +1,9 @@
-## Overview
-
 Modern AI agents operate across complex cloud environments where security, compliance, and responsible design are essential. This module introduces the foundational concepts solution architects must apply when designing safe and trustworthy agent-based systems.
 
 It focuses on building AI experiences that protect data, respect organizational policies, and uphold responsible AI expectations throughout the solution lifecycle.
 
-You'll explore how identity, access control, data governance, model security, and observability work together to create a defense-in-depth posture for autonomous and semi-autonomous agents. The module highlights how to translate business and compliance requirements into practical technical controls that regulate what agents can access, how they behave, and how their actions are monitored.
+You will explore how identity, access control, data governance, model security, and observability work together to create a defense-in-depth posture for autonomous and semi-autonomous agents. The module highlights how to translate business and compliance requirements into practical technical controls that regulate what agents can access, how they behave, and how their actions are monitored.
 
-Architects also learn how to identify vulnerabilities across prompts, models, data flows, and agent workflows. The content emphasizes proactive risk mitigation, layered safeguards, and structured evaluation practices to ensure solutions remain secure, predictable, and aligned with organizational standards.
+Architects will also learn how to identify vulnerabilities across prompts, models, data flows, and agent workflows. The content emphasizes proactive risk mitigation, layered safeguards, and structured evaluation practices to ensure solutions remain secure, predictable, and aligned with organizational standards.
 
-By the end of the module, you'll understand how to design AI systems that balance innovation with accountability. You'll gain the skills to build secure, governed, and compliant agent solutions that scale responsibly across the enterprise.
+By the end of the module, you will understand how to design AI systems that balance innovation with accountability. You will gain the skills to build secure, governed, and compliant agent solutions that scale responsibly across the enterprise.

learn-pr/wwl/design-responsible-ai-security-governance-risk-management-compliance/includes/2-design-security-agents.md

Lines changed: 2 additions & 4 deletions
@@ -1,5 +1,3 @@
-## Overview
-
 Design a defense in depth approach for autonomous and semi-autonomous agents that operate across Microsoft clouds. You'll translate business and compliance requirements into identity, access, data protection, observability, and threat protection controls. You'll also define how agents authenticate, what they can do, what they can see, and how their behavior is monitored and governed at scale.
 
 ### By the end of this unit, solution architects will be able to
@@ -164,7 +162,7 @@ Design a defense in depth approach for autonomous and semi-autonomous agents tha
 
 - Use environment routing to separate dev/test/prod.
 
-- Require peer review and approver signoff to publish; block publishing if mandatory checks fail.
+- Require peer review and approver sign-off to publish; block publishing if mandatory checks fail.
 
 **Prepare incident response**
 
@@ -212,7 +210,7 @@ Design a defense in depth approach for autonomous and semi-autonomous agents tha
 
 5. Outline the incident response plan for a data leakage event.
 
-**Deliverable:** A onepage architecture decision record (ADR) plus the RBAC matrix.
+**Deliverable:** A one-page architecture decision record (ADR) plus the RBAC matrix.
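The RBAC matrix called for in this deliverable can be prototyped as a deny-by-default lookup. This is a minimal sketch; the role and action names below are hypothetical, not a schema defined by the module:

```python
# Hypothetical RBAC matrix for agent governance: each role maps to the
# set of actions it is explicitly granted. Names are illustrative only.
RBAC_MATRIX = {
    "maker": {"create", "modify"},
    "publisher": {"publish"},
    "environment_admin": {"create", "modify", "publish", "configure_connectors"},
    "security_admin": {"view_logs", "revoke_access"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True only when the role explicitly grants the action (deny by default)."""
    return action in RBAC_MATRIX.get(role, set())
```

The deny-by-default shape matters: an unknown role or unlisted action resolves to "no access", which mirrors the least-privilege posture the unit recommends.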
 
 ## References
 
learn-pr/wwl/design-responsible-ai-security-governance-risk-management-compliance/includes/3-design-governance-agents.md

Lines changed: 21 additions & 27 deletions
@@ -1,12 +1,10 @@
-## Overview
-
 Effective governance ensures that AI agents operate safely, consistently, and in alignment with organizational policy. As enterprises scale agent adoption, Solution Architects must define guardrails that establish accountability, enforce security, manage data flows, and ensure that agents behave predictably. Governance extends across identity, data protection, observability, security baselines, approval workflows, and lifecycle management.
 
 This unit outlines a structured governance framework that Solution Architects can apply across Microsoft cloud environments to ensure responsible, secure, and compliant agent operations.
 
 ## Learning objectives
 
-### At the end of this unit, learners will be able to
+At the end of this unit, learners will be able to:
 
 - Design agent governance models aligned with organizational security, compliance, and operational standards.
 
@@ -20,47 +18,43 @@ This unit outlines a structured governance framework that Solution Architects ca
 
 ## Governance principles for AI agents
 
-### Accountability and ownership
+Accountability and ownership is a core governance principle.
 
 Clear ownership ensures agents operate with traceability and predictable responsibility.
 
-#### Key elements
+Key elements include:
 
 - Assign an **agent owner** responsible for lifecycle, security posture, and approvals.
 
 - Maintain an **agent registry** documenting purpose, environment, risk level, and data access.
 
 - Require **publishing approvals** for agents handling sensitive or regulated data.
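The agent registry and publishing-approval rule above can be sketched together in a few lines. The field names and the rule that medium/high-risk agents require approval are illustrative assumptions, not product behavior:

```python
from dataclasses import dataclass, field

# Hypothetical agent registry entry; fields mirror the registry attributes
# named above (purpose/owner, environment, risk level, data access).
@dataclass
class AgentRecord:
    name: str
    owner: str
    environment: str          # e.g. "dev", "test", "prod"
    risk_level: str           # "low", "medium", or "high"
    data_sources: list = field(default_factory=list)

class AgentRegistry:
    def __init__(self):
        self._agents = {}

    def register(self, record: AgentRecord) -> None:
        self._agents[record.name] = record

    def requires_publish_approval(self, name: str) -> bool:
        # Assumed policy: agents touching sensitive/regulated data
        # (modeled here as medium/high risk) need approver sign-off.
        return self._agents[name].risk_level in {"medium", "high"}
```

A real registry would live in a governed store with audit history; the point of the sketch is that ownership, environment, and risk classification are queryable attributes, not tribal knowledge.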
 
-Professional Visual:<br>**Chart - "Agent Ownership Model"**
-
-Columns: Agent | Owner | Environment | Risk Classification | Approval Required
 
-Color coding for Low / Medium / High risk.
 
 ## Identity, access, and permission governance
 
-### Establish a strong identity foundation
+## Establish a strong identity foundation
 
 Agents should operate with secure, isolated identities that restrict unintended access.
 
-#### Recommended practices
+Recommended practices include:
 
 - Use **managed identities** instead of embedded secrets.
 
 - Assign **least-privilege permissions**, scoped by environment and resource.
 
 - Segment roles for **Makers, Approvers, Admins, and Security teams**.
 
-Professional Visual:<br>**Matrix - "Agent RBAC Role Alignment"**<br>Rows: Maker, Publisher, Environment Admin, Security Admin<br>Columns: Create, Modify, Publish, Connectors, Data Access, Monitoring
+
 
 ## Data governance and protection controls
 
-### Data boundaries and classification
+Data boundaries and classification are essential for secure governance.
 
 Agents must follow defined boundaries regarding which data they can access, store, or generate.
 
-#### Key considerations
+Key considerations include:
 
 - Enforce **data classification** and restrict agent access to approved sources.
 
@@ -70,25 +64,25 @@ Agents must follow defined boundaries regarding which data they can access, stor
 
 - Use sensitivity labels to **track and govern information movement** throughout agent responses.
 
-:::image type="content" source="../media/data-governance-layering.png" alt-text="Diagram: Data governance layering":::
+:::image type="content" source="../media/data-governance-layering.png" alt-text="Diagram that shows data governance layering.":::
 
 ## Observability, monitoring, and cost governance
 
-### Centralized monitoring
+Centralized monitoring is required for observability and trust.
 
 Visibility into agent runtime activity is essential for auditing and operational trust.
 
-#### Include
+Include:
 
 - Logging prompts, actions, outcomes, errors, and escalations.
 
 - Dashboards for success rates, failure patterns, and unexpected behaviors.
 
 - Alerts for anomalous activity such as rapid token spikes or unusual data access.
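The "rapid token spikes" alert above can be approximated with a rolling-window baseline. This is a minimal sketch; the window size and 3x multiplier are assumed tuning knobs, not settings from any monitoring product:

```python
from collections import deque

# Hypothetical detector: flag a usage sample that exceeds a multiple of
# the rolling mean of recent samples.
class TokenSpikeDetector:
    def __init__(self, window: int = 10, multiplier: float = 3.0):
        self.samples = deque(maxlen=window)
        self.multiplier = multiplier

    def observe(self, tokens: int) -> bool:
        """Record one usage sample; return True when it spikes above the rolling mean."""
        spike = (
            len(self.samples) > 0
            and tokens > self.multiplier * (sum(self.samples) / len(self.samples))
        )
        self.samples.append(tokens)
        return spike
```

In practice the alert would feed the dashboards and escalation paths listed above rather than act on its own.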
 
-### Cost governance
+Cost governance is also required.
 
-#### Control consumption by
+Control consumption by:
 
 - Tagging agent resources for cost attribution.
 
@@ -98,21 +92,21 @@ Visibility into agent runtime activity is essential for auditing and operational
 
 ## Security, threat protection, and safe deployment
 
-### Runtime protection
+Runtime protection must be maintained throughout the lifecycle.
 
 Security safeguards must be active throughout an agent's lifecycle.
 
-#### Best practices
+Best practices include:
 
 - Enforce **runtime protection** and evaluate agents for insecure configurations before publish.
 
 - Apply input/output filtering to reduce prompt-injection and data-leakage risks.
 
 - Integrate the agent with organizational security monitoring and response processes.
 
-### Govern external integrations
+Govern external integrations with strict controls.
 
-#### Agents interacting with external APIs or systems must follow strict rules
+Agents interacting with external APIs or systems must follow strict rules:
 
 - Allow only **approved connectors and endpoints**.
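An "approved endpoints only" rule reduces to a host allowlist check before any outbound call. A minimal sketch, where the hostnames are illustrative examples of an organization's approved list:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of hosts an agent may call; populate from your
# organization's approved-connector inventory.
APPROVED_ENDPOINTS = {"api.contoso.com", "graph.microsoft.com"}

def endpoint_allowed(url: str) -> bool:
    """Permit an outbound call only when the target host is on the approved list."""
    return urlparse(url).hostname in APPROVED_ENDPOINTS
```

The check belongs at a choke point (gateway or connector layer) so agents cannot bypass it, which is the same deny-by-default stance as the RBAC guidance earlier in the unit.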
 
@@ -122,23 +116,23 @@ Security safeguards must be active throughout an agent's lifecycle.
 
 ## Development, versioning, and lifecycle governance
 
-### Standardized development framework
+Standardized development frameworks improve repeatability and control.
 
 Governance improves when development behavior is predictable and repeatable.
 
-#### Include
+Include:
 
 - Standard templates for agent creation and documentation.
 
 - Version control for prompts, knowledge sources, and workflows.
 
 - Mandatory pre-publish checks for security, DLP, and data-access configuration.
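The mandatory pre-publish checks above can be modeled as an all-or-nothing gate. The check names are hypothetical labels for the security, DLP, and data-access reviews the unit lists:

```python
# Hypothetical mandatory checks; a missing or failed result blocks publishing.
REQUIRED_CHECKS = ("security_scan", "dlp_policy", "data_access_review")

def can_publish(check_results: dict) -> bool:
    """Block publishing unless every mandatory check explicitly passed."""
    return all(check_results.get(name) is True for name in REQUIRED_CHECKS)
```

Note that an absent check counts as a failure: silence never implies compliance.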
 
-### Lifecycle policies
+Lifecycle policies are needed as agents evolve.
 
 Agents evolve—policies must govern updates and retirement.
 
-#### Policies include
+Policies include:
 
 - Scheduled reviews for accuracy, data freshness, and risk reassessment.
 

learn-pr/wwl/design-responsible-ai-security-governance-risk-management-compliance/includes/4-design-model-security.md

Lines changed: 0 additions & 2 deletions
@@ -1,5 +1,3 @@
-## Overview
-
 Securing AI models is a core responsibility for solution architects who design, deploy, and operate enterprise-grade AI systems. Model security ensures that every model—whether used in Foundry, Azure AI, or integrated within an agent pipeline—remains protected from threats such as unauthorized access, data leakage, adversarial inputs, and compromised identities.
 
 This unit provides a structured approach to designing model-level security using identity governance, workload hardening, threat protection, access control, and continuous monitoring. Solution architects will learn how to apply security guardrails that span development, deployment, and operations.

learn-pr/wwl/design-responsible-ai-security-governance-risk-management-compliance/includes/5-analyze-solution-ai-vulnerabilities-mitigations-prompt-manipulation.md

Lines changed: 4 additions & 6 deletions
@@ -1,12 +1,10 @@
-## Overview
-
 AI-powered solutions introduce unique vulnerabilities that differ from traditional application risks. Solution architects must be able to identify weak points across models, data flows, identity boundaries, and user interactions, especially those involving natural-language interfaces susceptible to prompt manipulation.
 
 This unit provides a structured framework for analyzing vulnerabilities in AI systems and defining effective mitigations. It equips architects with the skills to evaluate model behavior, detect abnormal agent activity, assess identity and RBAC exposure, and build end-to-end protections that reduce operational and security risks.
 
 ## Learning objectives
 
-- After completing this unit, learners will be able to:
+After completing this unit, learners will be able to:
 
 - Identify common AI-specific vulnerabilities, including prompt manipulation, data leakage, and insecure model behaviors.
 
@@ -22,7 +20,7 @@
 
 ### Prompt manipulation risks
 
-- Prompt manipulation occurs when a user intentionally or unintentionally attempts to steer an AI model away from intended safe behaviors. Common techniques include:
+Prompt manipulation occurs when a user intentionally or unintentionally attempts to steer an AI model away from intended safe behaviors. Common techniques include:
 
 - Overriding system instructions ("ignore previous instructions…").
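The instruction-override technique quoted above can be caught, in part, by a first-pass input filter. This is a deliberately minimal sketch: the regex patterns are illustrative, easy to evade on their own, and would sit in front of model-side safety filters rather than replace them:

```python
import re

# Hypothetical heuristics for the override phrasing quoted above.
OVERRIDE_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"disregard (the )?system prompt", re.IGNORECASE),
]

def looks_like_override(prompt: str) -> bool:
    """Return True when a prompt matches a known instruction-override phrasing."""
    return any(p.search(prompt) for p in OVERRIDE_PATTERNS)
```

A flagged prompt would typically be logged and routed to the monitoring pipeline described elsewhere in the module rather than silently dropped.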
 
@@ -82,7 +80,7 @@ Models may respond unpredictably when encountering ambiguous, adversarial, or se
 
 - Input files contain embedded malicious instructions.
 
-**Best practice:** Architect solutions using _data minimization_, _RBAC boundaries_, and _intentbased access_ aligned with user roles.
+**Best practice:** Architect solutions using _data minimization_, _RBAC boundaries_, and _intent-based access_ aligned with user roles.
 
 ### Identity, access, and RBAC gaps
 
@@ -114,7 +112,7 @@
 
 - Poor auditing and lack of rollback capability.
 
-- Unsecured flows calling third-party endpoints.
+- Unsecured flows calling non-Microsoft endpoints.
 
 #### Architects must ensure
 
learn-pr/wwl/design-responsible-ai-security-governance-risk-management-compliance/includes/6-review-solution-adherence-responsible-ai-principles.md

Lines changed: 12 additions & 14 deletions
@@ -1,10 +1,8 @@
-## Overview
-
 Responsible AI (RAI) is a foundational requirement for every solution architect designing or assessing AI systems. Reviewing a solution for adherence to Responsible AI principles ensures that systems remain safe, secure, compliant, transparent, and aligned with organizational and regulatory expectations. This unit equips solution architects with a structured, repeatable method to evaluate solutions across governance, risk, design, deployment, and ongoing operations.
 
 ## Learning objectives
 
-- After completing this unit, learners will be able to:
+After completing this unit, learners will be able to:
 
 - Evaluate an AI solution against Microsoft Responsible AI principles.
 
@@ -18,17 +16,17 @@ Responsible AI (RAI) is a foundational requirement for every solution architect
 
 Microsoft defines six core Responsible AI principles that guide design and governance decisions:
 
-1. **Fairness:** AI systems should treat all groups equitably.
+- **Fairness:** AI systems should treat all groups equitably.
 
-1. **Reliability and Safety:** Systems must function as intended and prevent harm.
+- **Reliability and Safety:** Systems must function as intended and prevent harm.
 
-1. **Privacy and Security:** Protect personal and organizational data through strong controls.
+- **Privacy and Security:** Protect personal and organizational data through strong controls.
 
-1. **Inclusiveness:** AI should empower people of all abilities and backgrounds.
+- **Inclusiveness:** AI should empower people of all abilities and backgrounds.
 
-1. **Transparency:** Solutions should be understandable, with clear disclosures on how AI is used.
+- **Transparency:** Solutions should be understandable, with clear disclosures on how AI is used.
 
-1. **Accountability:** Organizations retain responsibility for decisions made by AI.
+- **Accountability:** Organizations retain responsibility for decisions made by AI.
 
 These principles serve as the lens through which a solution architect evaluates models, agents, workflows, integrations, and user experiences.
 
@@ -102,15 +100,15 @@ The following review model ensures consistency and objectivity when assessing an
 
 ### Responsible AI validation tools
 
-#### Solution architects can leverage Microsoft's RAI toolset to validate solution performance
+#### Solution architects can leverage Microsoft's Responsible AI toolset to validate solution performance
 
 - RAI validation checks for declarative agents
 
 - Tooling for bias detection, safety evaluation, and risk assessment
 
 - Practices for documenting model lineage, data provenance, and decisions
 
-- Governance processes for review, approval, and signoff
+- Governance processes for review, approval, and sign-off
 
 ### Operational oversight and governance
 
@@ -126,16 +124,16 @@ Responsible AI is not a one-time review—it requires continuous monitoring.
 
 - Sunset criteria for models no longer meeting safety or compliance requirements
 
-:::image type="content" source="../media/responsible-ai-lifecycle.png" alt-text="Responsible AI lifecycle flow.":::
+:::image type="content" source="../media/responsible-ai-lifecycle.png" alt-text="Diagram that shows the Responsible AI lifecycle flow.":::
 
 ## References
 
-- [Microsoft AI principles and approach](https://www.microsoft.com/en-us/ai/principles-and-approach)
+- [Microsoft AI principles and approach](https://www.microsoft.com/ai/principles-and-approach)
 
 - [Responsible AI overview for Microsoft Security Copilot](/copilot/security/responsible-ai-overview-security-copilot)
 
 - [Responsible AI overview for Dynamics 365](/dynamics365/fin-ops-core/dev-itpro/responsible-ai/responsible-ai-overview)
 
-- [Microsoft AI tools and practices](https://www.microsoft.com/en-us/ai/tools-practices)
+- [Microsoft AI tools and practices](https://www.microsoft.com/ai/tools-practices)
 
 - [Responsible AI validation for Microsoft 365 Copilot extensibility](/microsoft-365-copilot/extensibility/rai-validation)

learn-pr/wwl/design-responsible-ai-security-governance-risk-management-compliance/includes/7-validate-data-residency-movement-compliance.md

Lines changed: 2 additions & 4 deletions
@@ -1,12 +1,10 @@
-## Overview
-
 Validating data residency and movement compliance is a critical responsibility for solution architects designing AI-powered solutions across Microsoft 365, Dynamics 365, and Copilot Studio. Keeping data within approved geographic boundaries ensures that solutions follow regulatory, contractual, and organizational requirements. This unit explains how to evaluate data residency posture, control data movement, and apply data governance policies that align with cloud compliance expectations.
 
 Solution architects must know where data is stored, how it moves across services, and which components participate in inference, logging, processing, or retention. This includes understanding the behavior of generative AI features, how Copilot Studio handles data, and how Microsoft Purview enforces data-handling compliance.
 
 ## Learning objectives
 
-- After completing this unit, learners will be able to:
+After completing this unit, learners will be able to:
 
 - Identify required data residency and sovereignty requirements for AI workloads.
 
@@ -32,7 +30,7 @@ Data residency defines the physical or geographic location where customer data i
 
 - Whether data used by generative AI stays within the designated region.
 
-- How multi-tenant cloud services distribute workloads.
+- How multitenant cloud services distribute workloads.
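A residency review ultimately reduces to verifying that every declared storage or processing location falls inside the approved geography. A minimal sketch, with region names chosen only as examples of an approved boundary:

```python
# Hypothetical approved geography for a workload; populate from the
# organization's residency and sovereignty requirements.
APPROVED_REGIONS = {"westeurope", "northeurope"}

def residency_compliant(declared_regions: list) -> bool:
    """True only when all declared regions sit inside the approved boundary.

    An empty declaration fails: a workload with no documented locations
    cannot be shown compliant.
    """
    return bool(declared_regions) and all(r in APPROVED_REGIONS for r in declared_regions)
```

Treating an empty declaration as non-compliant matches the unit's point that architects must know where data is stored, not assume it.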
 
 ### Copilot Studio data residency behavior
 
learn-pr/wwl/design-responsible-ai-security-governance-risk-management-compliance/includes/8-design-access-controls-ground-data-model-tune.md

Lines changed: 2 additions & 4 deletions
@@ -1,5 +1,3 @@
-## Overview
-
 Designing access controls for grounding data and model-tuning workflows is a critical responsibility for solution architects. AI systems depend on trustworthy, policy-aligned grounding data and secure tuning processes to ensure predictable, compliant, and responsible outputs. Effective controls protect sensitive assets, enforce the principle of least privilege, and ensure AI behaviors remain aligned with organizational, legal, and ethical requirements.
 
 This unit provides a structured approach for evaluating and designing access controls around data ingestion, grounding retrieval, model evaluations, and supervised fine-tuning workflows.
@@ -56,13 +54,13 @@ Guardrails protect both users and the system by preventing unsafe or non-complia
 
 - Blocklists for prohibited document types
 
-- Sanitization pipelines removing PII or contractual data
+- Sanitization pipelines removing personal data or contractual data
 
 - Automated reviews validating safety and policy alignment
 
 - Alerting and anomaly detection for unusual data access or tuning patterns
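The sanitization-pipeline guardrail above can be illustrated with a redaction pass. Real pipelines would use a classification service rather than regexes; the two patterns below (email addresses and US-style ID numbers) are a minimal, assumed sketch:

```python
import re

# Hypothetical redaction patterns applied before data enters a tuning pipeline.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def sanitize(text: str) -> str:
    """Replace recognizable personal-data patterns with redaction markers."""
    text = EMAIL.sub("[REDACTED-EMAIL]", text)
    return SSN.sub("[REDACTED-ID]", text)
```

Redaction markers (rather than deletion) keep the surrounding text usable for tuning while making leakage auditable.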
 
-:::image type="content" source="../media/guardrail-enforcement-model.png" alt-text="Guardrail Enforcement Model.":::
+
 
 ## Operational monitoring and compliance enforcement
 