Commit c8342c6

Merge pull request #53984 from pablorodMS/patch-1
Patch 1 - New module11 design responsible ai security governance risk management compliance
2 parents bb84e9e + efc1b7a commit c8342c6

5 files changed

Lines changed: 20 additions & 18 deletions


learn-pr/wwl/design-responsible-ai-security-governance-risk-management-compliance/includes/2-design-security-agents.md

Lines changed: 4 additions & 4 deletions
@@ -38,7 +38,7 @@ Design a defense in depth approach for autonomous and semi-autonomous agents tha
 
 - Require approvals for publishing to production and for changes to high-risk capabilities (for example, actions that modify data).
 
-:::image type="content" source="../media/role-based-access-control.png" alt-text="RBAC design matrix.":::
+:::image type="content" source="../media/role-based-access-control.png" alt-text="Diagram illustrating an RBAC design matrix that maps roles to permissions and access levels across system resources.":::
 
 ## Data governance and protection
 
@@ -160,9 +160,9 @@ Design a defense in depth approach for autonomous and semi-autonomous agents tha
 
 **Govern environments and releases**
 
-- Use environment routing to separate dev/test/prod.
+- Use environment routing to separate development, testing and production.
 
-- Require peer review and approver sign-off to publish; block publishing if mandatory checks fail.
+- Require peer review and approver sign off to publish; block publishing if mandatory checks fail.
 
 **Prepare incident response**
 
@@ -220,4 +220,4 @@ Design a defense in depth approach for autonomous and semi-autonomous agents tha
 
 - [Security and governance in Microsoft Copilot Studio](/microsoft-copilot-studio/security-and-governance)
 
-- [Manage IAM for AI workloads on Azure](/training/paths/manage-iam-for-ai-workloads-on-azure/)
+- [Manage IAM for AI workloads on Azure](/training/paths/manage-iam-for-ai-workloads-on-azure/)
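The approval-gating guidance above (require approvals for production publishing and for changes to high-risk capabilities) can be sketched as a minimal policy check. Everything here (the role names, the capability labels, and the function itself) is an illustrative assumption, not a Copilot Studio or Azure API:

```python
# Illustrative sketch of approval gating for agent publishing.
# Names are hypothetical; not a Copilot Studio or Azure API.
HIGH_RISK_CAPABILITIES = {"modify_data", "delete_records", "external_http"}

def can_publish(environment: str, capabilities: set[str], approvals: set[str]) -> bool:
    """Allow publishing only when every required approval is present."""
    required = set()
    if environment == "production":
        required.add("release_approver")          # production needs a release sign-off
    if capabilities & HIGH_RISK_CAPABILITIES:
        required.add("security_approver")         # high-risk actions need security sign-off
    return required <= approvals                  # subset check: all required approvals held

# A production release with data-modifying actions needs both approvals.
print(can_publish("production", {"modify_data"}, {"release_approver"}))  # False
print(can_publish("production", {"modify_data"},
                  {"release_approver", "security_approver"}))            # True
```

The same gate doubles as the "block publishing if mandatory checks fail" rule: a missing approval simply returns `False`.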

learn-pr/wwl/design-responsible-ai-security-governance-risk-management-compliance/includes/5-analyze-solution-ai-vulnerabilities-mitigations-prompt-manipulation.md

Lines changed: 4 additions & 2 deletions
@@ -70,7 +70,9 @@ Models may respond unpredictably when encountering ambiguous, adversarial, or se
 
 ### Data exposure vulnerabilities
 
-#### AI systems often have access to sensitive data sources. Vulnerabilities appear when
+#### AI systems often have access to sensitive data sources.
+
+Vulnerabilities appear when:
 
 - Prompts indirectly expose sensitive information.
 
@@ -190,4 +192,4 @@ Monitoring is central to detecting prompt attacks, unusual model behavior, and u
 
 - [Threat protection for Microsoft 365 agents](/microsoft-agent-365/admin/threat-protection)
 
-- [Evaluate AI agents with the Azure AI Foundry SDK](/azure/ai-foundry/how-to/develop/agent-evaluate-sdk)
+- [Evaluate AI agents with the Azure AI Foundry SDK](/azure/ai-foundry/how-to/develop/agent-evaluate-sdk)
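The data exposure material above notes that prompts can indirectly expose sensitive information. A pre-submission scan is one mitigation pattern; the sketch below uses two illustrative regex patterns as placeholders, since production systems typically rely on classifier-based DLP services rather than hand-written patterns:

```python
# Minimal sketch of a pre-submission prompt scan for sensitive data.
# The patterns are illustrative placeholders, not a complete DLP rule set.
import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def find_sensitive(prompt: str) -> list[str]:
    """Return the categories of sensitive data detected in a prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

print(find_sensitive("Summarize the note from alice@contoso.com"))  # ['email']
```

A caller would block or redact the prompt when the returned list is non-empty.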

learn-pr/wwl/design-responsible-ai-security-governance-risk-management-compliance/includes/6-review-solution-adherence-responsible-ai-principles.md

Lines changed: 2 additions & 2 deletions
@@ -108,7 +108,7 @@ The following review model ensures consistency and objectivity when assessing an
 
 - Practices for documenting model lineage, data provenance, and decisions
 
-- Governance processes for review, approval, and sign-off
+- Governance processes for review, approval, and sign off
 
 ### Operational oversight and governance
 
@@ -136,4 +136,4 @@ Responsible AI is not a one-time review—it requires continuous monitoring.
 
 - [Microsoft AI tools and practices](https://www.microsoft.com/ai/tools-practices)
 
-- [Responsible AI validation for Microsoft 365 Copilot extensibility](/microsoft-365-copilot/extensibility/rai-validation)
+- [Responsible AI validation for Microsoft 365 Copilot extensibility](/microsoft-365-copilot/extensibility/rai-validation)

learn-pr/wwl/design-responsible-ai-security-governance-risk-management-compliance/includes/8-design-access-controls-ground-data-model-tune.md

Lines changed: 1 addition & 1 deletion
@@ -52,7 +52,7 @@ Guardrails protect both users and the system by preventing unsafe or non-complia
 
 ### Examples of guardrails
 
-- Blocklists for prohibited document types
+- Block lists for prohibited document types
 
 - Sanitization pipelines removing personal data or contractual data
 
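The block-list guardrail for prohibited document types mentioned above can be sketched as a simple extension check. The blocked extensions here are an assumed example policy, not a recommendation from the module:

```python
# Illustrative block-list guardrail for prohibited document types.
# The extension set is an example policy, not a recommended configuration.
BLOCKED_EXTENSIONS = {".exe", ".bat", ".vbs"}

def is_allowed(filename: str) -> bool:
    """Reject documents whose extension is on the block list (case-insensitive)."""
    parts = filename.lower().rsplit(".", 1)
    # Files without an extension pass; otherwise check the block list.
    return len(parts) < 2 or f".{parts[1]}" not in BLOCKED_EXTENSIONS

print(is_allowed("report.pdf"))   # True
print(is_allowed("payload.EXE"))  # False
```

The sanitization pipeline mentioned in the same hunk would run after this gate, stripping personal or contractual data from documents that pass.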

learn-pr/wwl/design-responsible-ai-security-governance-risk-management-compliance/includes/9-design-audit-trails-changes-models-data.md

Lines changed: 9 additions & 9 deletions
@@ -72,9 +72,9 @@ Architects must ensure logs capture _metadata_, not _content_, to avoid unnecess
 
 Azure AI Foundry provides a centralized control plane for model registration, environment configuration, agent deployment, and diagnostic logging.
 
-Key audit features include:
+### Key audit features include
 
-Foundry activity logs:
+#### Foundry activity logs
 
 Track administrative actions across workspaces, registries, and deployments. Logs support export to:
 
@@ -84,9 +84,9 @@ Track administrative actions across workspaces, registries, and deployments. Log
 
 - SIEM tools (such as Microsoft Sentinel)
 
-Foundry diagnostics and tracing:
+### Foundry diagnostics and tracing
 
-Diagnostics provide traceability of execution across:
+#### Diagnostics provide traceability of execution across
 
 - Model calls
 
@@ -98,7 +98,7 @@ Diagnostics provide traceability of execution across:
 
 ## Designing audit pipelines with tracing
 
-Tracing allows architects to follow execution paths and debug generative AI behaviors. When integrated into audit trails, tracing provides:
+### Tracing allows architects to follow execution paths and debug generative AI behaviors. When integrated into audit trails, tracing provides
 
 - End-to-end visibility of model inference
 
@@ -110,7 +110,7 @@ Tracing allows architects to follow execution paths and debug generative AI beha
 
 - Detection of unusual patterns (loops, excessive token spikes, cascading failures)
 
-Recommended tracing fields include:
+### Recommended tracing fields include
 
 - Correlation ID
 
@@ -128,7 +128,7 @@ Recommended tracing fields include:
 
 ## Designing audit-ready processes
 
-Governance workflows to include:
+### Governance workflows to include
 
 - **Approval workflows** for promoting new model versions
 
@@ -142,7 +142,7 @@ Governance workflows to include:
 
 ### Retention policies
 
-Define retention requirements with Legal, Compliance, and Information Security teams.<br>Common patterns:
+#### Define retention requirements with Legal, Compliance, and Information Security teams.<br>Common patterns
 
 - 90 days for low-risk workloads
 
@@ -156,4 +156,4 @@ Define retention requirements with Legal, Compliance, and Information Security t
 
 - [Tracing a generative AI app](/training/modules/tracing-generative-ai-app/)
 
-- [Enable Azure AI Foundry diagnostics](/training/modules/azure-ai-foundry-secure-environment/enable-foundry-diagnostics)
+- [Enable Azure AI Foundry diagnostics](/training/modules/azure-ai-foundry-secure-environment/enable-foundry-diagnostics)
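The tracing guidance above recommends fields such as a correlation ID and stresses logging metadata, not content. A minimal sketch of such a trace record follows; the record structure, the model name, and the token fields are illustrative assumptions, not a Foundry log schema:

```python
# Sketch of a metadata-only trace record. The correlation ID field follows
# the recommendation above; the other fields and the JSON-line format are
# illustrative assumptions, not an Azure AI Foundry schema.
import json
import time
import uuid
from dataclasses import asdict, dataclass, field

@dataclass
class TraceRecord:
    operation: str                                # e.g. "model_call", "tool_invocation"
    correlation_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: float = field(default_factory=time.time)
    model: str = "example-model"                  # hypothetical model identifier
    tokens_in: int = 0
    tokens_out: int = 0

def emit(record: TraceRecord) -> str:
    """Serialize a trace record as one JSON log line (metadata only, no prompt content)."""
    return json.dumps(asdict(record))

line = emit(TraceRecord(operation="model_call", tokens_in=512, tokens_out=128))
print(line)
```

Because each record carries a correlation ID, downstream tools can stitch model calls and tool invocations into one end-to-end trace, and a token-count field supports the spike detection mentioned above.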
