You signed in with another tab or window. Reload to refresh your session.You signed out in another tab or window. Reload to refresh your session.You switched accounts on another tab or window. Reload to refresh your session.Dismiss alert
Modern AI agents operate across complex cloud environments where security, compliance, and responsible design are essential. This module introduces the foundational concepts solution architects must apply when designing safe and trustworthy agent-based systems.
4
2
5
3
It focuses on building AI experiences that protect data, respect organizational policies, and uphold responsible AI expectations throughout the solution lifecycle.
6
4
7
-
You'll explore how identity, access control, data governance, model security, and observability work together to create a defense-in-depth posture for autonomous and semi-autonomous agents. The module highlights how to translate business and compliance requirements into practical technical controls that regulate what agents can access, how they behave, and how their actions are monitored.
5
+
You will explore how identity, access control, data governance, model security, and observability work together to create a defense-in-depth posture for autonomous and semi-autonomous agents. The module highlights how to translate business and compliance requirements into practical technical controls that regulate what agents can access, how they behave, and how their actions are monitored.
8
6
9
-
Architects also learn how to identify vulnerabilities across prompts, models, data flows, and agent workflows. The content emphasizes proactive risk mitigation, layered safeguards, and structured evaluation practices to ensure solutions remain secure, predictable, and aligned with organizational standards.
7
+
Architects will also learn how to identify vulnerabilities across prompts, models, data flows, and agent workflows. The content emphasizes proactive risk mitigation, layered safeguards, and structured evaluation practices to ensure solutions remain secure, predictable, and aligned with organizational standards.
10
8
11
-
By the end of the module, you'll understand how to design AI systems that balance innovation with accountability. You'll gain the skills to build secure, governed, and compliant agent solutions that scale responsibly across the enterprise.
9
+
By the end of the module, you will understand how to design AI systems that balance innovation with accountability. You will gain the skills to build secure, governed, and compliant agent solutions that scale responsibly across the enterprise.
Copy file name to clipboardExpand all lines: learn-pr/wwl/design-responsible-ai-security-governance-risk-management-compliance/includes/2-design-security-agents.md
+2-4Lines changed: 2 additions & 4 deletions
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -1,5 +1,3 @@
1
-
## Overview
2
-
3
1
Design a defense in depth approach for autonomous and semi-autonomous agents that operate across Microsoft clouds. You'll translate business and compliance requirements into identity, access, data protection, observability, and threat protection controls. You'll also define how agents authenticate, what they can do, what they can see, and how their behavior is monitored and governed at scale.
4
2
5
3
### By the end of this unit, solution architects will be able to
@@ -164,7 +162,7 @@ Design a defense in depth approach for autonomous and semi-autonomous agents tha
164
162
165
163
- Use environment routing to separate dev/test/prod.
166
164
167
-
- Require peer review and approver signoff to publish; block publishing if mandatory checks fail.
165
+
- Require peer review and approver sign-off to publish; block publishing if mandatory checks fail.
168
166
169
167
**Prepare incident response**
170
168
@@ -212,7 +210,7 @@ Design a defense in depth approach for autonomous and semi-autonomous agents tha
212
210
213
211
5. Outline the incident response plan for a data leakage event.
214
212
215
-
**Deliverable:** A onepage architecture decision record (ADR) plus the RBAC matrix.
213
+
**Deliverable:** A one-page architecture decision record (ADR) plus the RBAC matrix.
Copy file name to clipboardExpand all lines: learn-pr/wwl/design-responsible-ai-security-governance-risk-management-compliance/includes/3-design-governance-agents.md
+21-27Lines changed: 21 additions & 27 deletions
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -1,12 +1,10 @@
1
-
## Overview
2
-
3
1
Effective governance ensures that AI agents operate safely, consistently, and in alignment with organizational policy. As enterprises scale agent adoption, Solution Architects must define guardrails that establish accountability, enforce security, manage data flows, and ensure that agents behave predictably. Governance extends across identity, data protection, observability, security baselines, approval workflows, and lifecycle management.
4
2
5
3
This unit outlines a structured governance framework that Solution Architects can apply across Microsoft cloud environments to ensure responsible, secure, and compliant agent operations.
6
4
7
5
## Learning objectives
8
6
9
-
### At the end of this unit, learners will be able to
7
+
At the end of this unit, learners will be able to:
10
8
11
9
- Design agent governance models aligned with organizational security, compliance, and operational standards.
12
10
@@ -20,47 +18,43 @@ This unit outlines a structured governance framework that Solution Architects ca
20
18
21
19
## Governance principles for AI agents
22
20
23
-
### Accountability and ownership
21
+
Accountability and ownership is a core governance principle.
24
22
25
23
Clear ownership ensures agents operate with traceability and predictable responsibility.
26
24
27
-
#### Key elements
25
+
Key elements include:
28
26
29
27
- Assign an **agent owner** responsible for lifecycle, security posture, and approvals.
30
28
31
29
- Maintain an **agent registry** documenting purpose, environment, risk level, and data access.
32
30
33
31
- Require **publishing approvals** for agents handling sensitive or regulated data.
34
32
35
-
Professional Visual:<br>**Chart - "Agent Ownership Model"**
Copy file name to clipboardExpand all lines: learn-pr/wwl/design-responsible-ai-security-governance-risk-management-compliance/includes/4-design-model-security.md
-2Lines changed: 0 additions & 2 deletions
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -1,5 +1,3 @@
1
-
## Overview
2
-
3
1
Securing AI models is a core responsibility for solution architects who design, deploy, and operate enterprise-grade AI systems. Model security ensures that every model—whether used in Foundry, Azure AI, or integrated within an agent pipeline—remains protected from threats such as unauthorized access, data leakage, adversarial inputs, and compromised identities.
4
2
5
3
This unit provides a structured approach to designing model-level security using identity governance, workload hardening, threat protection, access control, and continuous monitoring. Solution architects will learn how to apply security guardrails that span development, deployment, and operations.
Copy file name to clipboardExpand all lines: learn-pr/wwl/design-responsible-ai-security-governance-risk-management-compliance/includes/5-analyze-solution-ai-vulnerabilities-mitigations-prompt-manipulation.md
+4-6Lines changed: 4 additions & 6 deletions
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -1,12 +1,10 @@
1
-
## Overview
2
-
3
1
AI-powered solutions introduce unique vulnerabilities that differ from traditional application risks. Solution architects must be able to identify weak points across models, data flows, identity boundaries, and user interactions, especially those involving natural-language interfaces susceptible to prompt manipulation.
4
2
5
3
This unit provides a structured framework for analyzing vulnerabilities in AI systems and defining effective mitigations. It equips architects with the skills to evaluate model behavior, detect abnormal agent activity, assess identity and RBAC exposure, and build end-to-end protections that reduce operational and security risks.
6
4
7
5
## Learning objectives
8
6
9
-
-After completing this unit, learners will be able to:
7
+
After completing this unit, learners will be able to:
10
8
11
9
- Identify common AI-specific vulnerabilities, including prompt manipulation, data leakage, and insecure model behaviors.
12
10
@@ -22,7 +20,7 @@ This unit provides a structured framework for analyzing vulnerabilities in AI sy
22
20
23
21
### Prompt manipulation risks
24
22
25
-
-Prompt manipulation occurs when a user intentionally or unintentionally attempts to steer an AI model away from intended safe behaviors. Common techniques include:
23
+
Prompt manipulation occurs when a user intentionally or unintentionally attempts to steer an AI model away from intended safe behaviors. Common techniques include:
26
24
27
25
- Overriding system instructions ("ignore previous instructions…").
28
26
@@ -82,7 +80,7 @@ Models may respond unpredictably when encountering ambiguous, adversarial, or se
Copy file name to clipboardExpand all lines: learn-pr/wwl/design-responsible-ai-security-governance-risk-management-compliance/includes/6-review-solution-adherence-responsible-ai-principles.md
+12-14Lines changed: 12 additions & 14 deletions
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -1,10 +1,8 @@
1
-
## Overview
2
-
3
1
Responsible AI (RAI) is a foundational requirement for every solution architect designing or assessing AI systems. Reviewing a solution for adherence to Responsible AI principles ensures that systems remain safe, secure, compliant, transparent, and aligned with organizational and regulatory expectations. This unit equips solution architects with a structured, repeatable method to evaluate solutions across governance, risk, design, deployment, and ongoing operations.
4
2
5
3
## Learning objectives
6
4
7
-
-After completing this unit, learners will be able to:
5
+
After completing this unit, learners will be able to:
8
6
9
7
- Evaluate an AI solution against Microsoft Responsible AI principles.
10
8
@@ -18,17 +16,17 @@ Responsible AI (RAI) is a foundational requirement for every solution architect
18
16
19
17
Microsoft defines six core Responsible AI principles that guide design and governance decisions:
20
18
21
-
1.**Fairness:** AI systems should treat all groups equitably.
19
+
-**Fairness:** AI systems should treat all groups equitably.
22
20
23
-
1.**Reliability and Safety:** Systems must function as intended and prevent harm.
21
+
-**Reliability and Safety:** Systems must function as intended and prevent harm.
24
22
25
-
1.**Privacy and Security:** Protect personal and organizational data through strong controls.
23
+
-**Privacy and Security:** Protect personal and organizational data through strong controls.
26
24
27
-
1.**Inclusiveness:** AI should empower people of all abilities and backgrounds.
25
+
-**Inclusiveness:** AI should empower people of all abilities and backgrounds.
28
26
29
-
1.**Transparency:** Solutions should be understandable, with clear disclosures on how AI is used.
27
+
-**Transparency:** Solutions should be understandable, with clear disclosures on how AI is used.
30
28
31
-
1.**Accountability:** Organizations retain responsibility for decisions made by AI.
29
+
-**Accountability:** Organizations retain responsibility for decisions made by AI.
32
30
33
31
These principles serve as the lens through which a solution architect evaluates models, agents, workflows, integrations, and user experiences.
34
32
@@ -102,15 +100,15 @@ The following review model ensures consistency and objectivity when assessing an
102
100
103
101
### Responsible AI validation tools
104
102
105
-
#### Solution architects can leverage Microsoft's RAI toolset to validate solution performance
103
+
#### Solution architects can leverage Microsoft's Responsible AI toolset to validate solution performance
106
104
107
105
- RAI validation checks for declarative agents
108
106
109
107
- Tooling for bias detection, safety evaluation, and risk assessment
110
108
111
109
- Practices for documenting model lineage, data provenance, and decisions
112
110
113
-
- Governance processes for review, approval, and signoff
111
+
- Governance processes for review, approval, and sign-off
114
112
115
113
### Operational oversight and governance
116
114
@@ -126,16 +124,16 @@ Responsible AI is not a one-time review—it requires continuous monitoring.
126
124
127
125
- Sunset criteria for models no longer meeting safety or compliance requirements
128
126
129
-
:::image type="content" source="../media/responsible-ai-lifecycle.png" alt-text="Responsible AI lifecycle flow.":::
127
+
:::image type="content" source="../media/responsible-ai-lifecycle.png" alt-text="Diagram that shows the Responsible AI lifecycle flow.":::
130
128
131
129
## References
132
130
133
-
-[Microsoft AI principles and approach](https://www.microsoft.com/en-us/ai/principles-and-approach)
131
+
-[Microsoft AI principles and approach](https://www.microsoft.com/ai/principles-and-approach)
134
132
135
133
-[Responsible AI overview for Microsoft Security Copilot](/copilot/security/responsible-ai-overview-security-copilot)
136
134
137
135
-[Responsible AI overview for Dynamics 365](/dynamics365/fin-ops-core/dev-itpro/responsible-ai/responsible-ai-overview)
138
136
139
-
-[Microsoft AI tools and practices](https://www.microsoft.com/en-us/ai/tools-practices)
137
+
-[Microsoft AI tools and practices](https://www.microsoft.com/ai/tools-practices)
140
138
141
139
-[Responsible AI validation for Microsoft 365 Copilot extensibility](/microsoft-365-copilot/extensibility/rai-validation)
Copy file name to clipboardExpand all lines: learn-pr/wwl/design-responsible-ai-security-governance-risk-management-compliance/includes/7-validate-data-residency-movement-compliance.md
+2-4Lines changed: 2 additions & 4 deletions
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -1,12 +1,10 @@
1
-
## Overview
2
-
3
1
Validating data residency and movement compliance is a critical responsibility for solution architects designing AI-powered solutions across Microsoft 365, Dynamics 365, and Copilot Studio. Keeping data within approved geographic boundaries ensures that solutions follow regulatory, contractual, and organizational requirements. This unit explains how to evaluate data residency posture, control data movement, and apply data governance policies that align with cloud compliance expectations.
4
2
5
3
Solution architects must know where data is stored, how it moves across services, and which components participate in inference, logging, processing, or retention. This includes understanding the behavior of generative AI features, how Copilot Studio handles data, and how Microsoft Purview enforces data-handling compliance.
6
4
7
5
## Learning objectives
8
6
9
-
-After completing this unit, learners will be able to:
7
+
After completing this unit, learners will be able to:
10
8
11
9
- Identify required data residency and sovereignty requirements for AI workloads.
12
10
@@ -32,7 +30,7 @@ Data residency defines the physical or geographic location where customer data i
32
30
33
31
- Whether data used by generative AI stays within the designated region.
34
32
35
-
- How multi-tenant cloud services distribute workloads.
33
+
- How multitenant cloud services distribute workloads.
Copy file name to clipboardExpand all lines: learn-pr/wwl/design-responsible-ai-security-governance-risk-management-compliance/includes/8-design-access-controls-ground-data-model-tune.md
+2-4Lines changed: 2 additions & 4 deletions
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -1,5 +1,3 @@
1
-
## Overview
2
-
3
1
Designing access controls for grounding data and model-tuning workflows is a critical responsibility for solution architects. AI systems depend on trustworthy, policy-aligned grounding data and secure tuning processes to ensure predictable, compliant, and responsible outputs. Effective controls protect sensitive assets, enforce the principle of least privilege, and ensure AI behaviors remain aligned with organizational, legal, and ethical requirements.
4
2
5
3
This unit provides a structured approach for evaluating and designing access controls around data ingestion, grounding retrieval, model evaluations, and supervised fine-tuning workflows.
@@ -56,13 +54,13 @@ Guardrails protect both users and the system by preventing unsafe or non-complia
56
54
57
55
- Blocklists for prohibited document types
58
56
59
-
- Sanitization pipelines removing PII or contractual data
57
+
- Sanitization pipelines removing personal data or contractual data
60
58
61
59
- Automated reviews validating safety and policy alignment
62
60
63
61
- Alerting and anomaly detection for unusual data access or tuning patterns
0 commit comments