
Commit 011190f

Merge pull request #53355 from wwlpublish/LP-158678-5
Creating pull request
2 parents 92196ca + 61e3569 commit 011190f

22 files changed

Lines changed: 338 additions & 0 deletions
Lines changed: 13 additions & 0 deletions
```yaml
### YamlMime:ModuleUnit
uid: learn.wwl.protect-govern-ai-ready-infrastructure-azure.introduction
title: "Introduction"
metadata:
  title: "Introduction"
  description: "Introduction."
  ms.date: 02/03/2026
  author: wwlpublish
  ms.author: bradj
  ms.topic: unit
durationInMinutes: 3
content: |
  [!include[](includes/1-introduction.md)]
```
Lines changed: 13 additions & 0 deletions
```yaml
### YamlMime:ModuleUnit
uid: learn.wwl.protect-govern-ai-ready-infrastructure-azure.understand-ai-governance-framework-components
title: "Understand AI governance framework components"
metadata:
  title: "Understand AI governance framework components"
  description: "Understand AI governance framework components."
  ms.date: 02/03/2026
  author: wwlpublish
  ms.author: bradj
  ms.topic: unit
durationInMinutes: 12
content: |
  [!include[](includes/2-understand-ai-governance-framework-components.md)]
```
Lines changed: 13 additions & 0 deletions
```yaml
### YamlMime:ModuleUnit
uid: learn.wwl.protect-govern-ai-ready-infrastructure-azure.configure-policies-access-controls
title: "Configure policies and access controls for AI workloads"
metadata:
  title: "Configure policies and access controls for AI workloads"
  description: "Configure policies and access controls for AI workloads."
  ms.date: 02/03/2026
  author: wwlpublish
  ms.author: bradj
  ms.topic: unit
durationInMinutes: 12
content: |
  [!include[](includes/3-configure-policies-access-controls.md)]
```
Lines changed: 13 additions & 0 deletions
```yaml
### YamlMime:ModuleUnit
uid: learn.wwl.protect-govern-ai-ready-infrastructure-azure.implement-responsible-safeguards-content-filter
title: "Implement responsible AI safeguards and content filtering"
metadata:
  title: "Implement responsible AI safeguards and content filtering"
  description: "Implement responsible AI safeguards and content filtering."
  ms.date: 02/03/2026
  author: wwlpublish
  ms.author: bradj
  ms.topic: unit
durationInMinutes: 12
content: |
  [!include[](includes/4-implement-responsible-safeguards-content-filter.md)]
```
Lines changed: 13 additions & 0 deletions
```yaml
### YamlMime:ModuleUnit
uid: learn.wwl.protect-govern-ai-ready-infrastructure-azure.exercise-deploy-governed-ai-infrastructure
title: "Exercise: Configure regional data residency and identity-based access controls"
metadata:
  title: "Exercise: Configure regional data residency and identity-based access controls"
  description: "Exercise: Configure regional data residency and identity-based access controls."
  ms.date: 02/03/2026
  author: wwlpublish
  ms.author: bradj
  ms.topic: unit
durationInMinutes: 30
content: |
  [!include[](includes/5-exercise-deploy-governed-ai-infrastructure.md)]
```
Lines changed: 37 additions & 0 deletions
```yaml
### YamlMime:ModuleUnit
uid: learn.wwl.protect-govern-ai-ready-infrastructure-azure.knowledge-check
title: "Module assessment"
metadata:
  title: "Knowledge check"
  description: "Knowledge check"
  ms.date: 02/03/2026
  author: wwlpublish
  ms.author: bradj
  ms.topic: unit
  module_assessment: true
durationInMinutes: 3
content: "Choose the best response for each of the following questions."
quiz:
  questions:
  - content: "Your healthcare organization is deploying Azure OpenAI to summarize patient medical records. HIPAA regulations require that all patient data remains within the United States and uses customer-managed encryption keys. Which combination of Azure Policy definitions should you assign to enforce these requirements?"
    choices:
    - content: "Assign the **Allowed locations** policy restricting deployments to US regions and the **Cognitive Services should use customer-managed key for encryption** policy with the Deny effect at the subscription scope containing healthcare workloads"
      isCorrect: true
      explanation: "This option addresses both HIPAA requirements through Azure Policy enforcement: the Allowed locations policy prevents deployments outside authorized US regions, satisfying data residency requirements, while the customer-managed encryption policy ensures patient data is encrypted with keys the organization controls rather than Microsoft-managed keys. Assigning at subscription scope provides comprehensive coverage while allowing specific resource group exceptions if needed."
    - content: "Assign the **Audit usage of custom RBAC roles** policy and the **Diagnostic logs in Azure AI services should be enabled** policy at the resource group scope to monitor compliance through manual review"
      isCorrect: false
      explanation: "This option only provides audit capabilities without preventive enforcement, relying on manual detection after violations occur rather than blocking noncompliant configurations proactively."
    - content: "Assign the **Require a tag on resources** policy mandating a 'compliance:HIPAA' tag and the **Allowed resource types** policy limiting deployments to the Azure OpenAI Standard tier only at the management group scope"
      isCorrect: false
      explanation: "This option focuses on resource tagging and type restrictions but doesn't directly enforce the geographic restrictions or encryption requirements that HIPAA mandates, making it insufficient for healthcare compliance despite supporting general governance practices."
  - content: "Your data science team needs to deploy Azure OpenAI models for internal research on customer sentiment analysis. The team should be able to create resources and configure models but must not access production customer data or modify deployed production models. Which RBAC role assignment strategy implements least-privilege access for this scenario?"
    choices:
    - content: "Assign the Cognitive Services Contributor role at the development resource group scope and the Cognitive Services User role at the production resource group scope, with separate resource groups isolating development and production environments"
      isCorrect: true
      explanation: "This option implements least-privilege access by granting different permission levels based on environment sensitivity: Cognitive Services Contributor in development allows the team to create and configure resources for research, while Cognitive Services User in production provides read-only access for viewing configurations and consuming endpoints without modification capabilities. Separate resource groups create clear security boundaries that RBAC can enforce, preventing accidental or intentional cross-environment changes."
    - content: "Assign the Owner role at the subscription scope to enable full development flexibility and rely on Azure Policy to prevent unauthorized production changes through approval workflows"
      isCorrect: false
      explanation: "This option violates least-privilege principles by granting the Owner role, which includes permissions to modify access controls, delete resources, and change billing settings, far beyond what the research scenario requires."
    - content: "Create a custom role with wildcard permissions for all Cognitive Services operations and assign it at the resource group scope, then configure conditional access policies requiring manager approval for production resource access"
      isCorrect: false
      explanation: "This option's wildcard permissions eliminate granular control and still grant excessive access, while conditional access policies alone can't enforce resource-level permission restrictions: they control authentication and access conditions but don't replace RBAC for authorization decisions."
```
Lines changed: 13 additions & 0 deletions
```yaml
### YamlMime:ModuleUnit
uid: learn.wwl.protect-govern-ai-ready-infrastructure-azure.summary
title: "Summary"
metadata:
  title: "Summary"
  description: "Summary."
  ms.date: 02/03/2026
  author: wwlpublish
  ms.author: bradj
  ms.topic: unit
durationInMinutes: 3
content: |
  [!include[](includes/7-summary.md)]
```
Lines changed: 17 additions & 0 deletions
Your organization deployed Azure OpenAI agents across three departments: sales, customer service, and product development. Within the first week, the Chief Information Security Officer (CISO) raised urgent concerns: customer service agents are processing personal health information without documented safeguards, sales teams are storing prompts containing proprietary pricing strategies in unencrypted logs, and no one can answer the compliance auditor's question about who approved the model deployment. Without clear governance, you risk exposing sensitive data, violating HIPAA and other regulations, and losing stakeholder trust in your AI initiatives.

This module equips you to implement comprehensive AI governance using Microsoft Foundry and Azure AI services. You configure Azure Policy definitions that automatically enforce encryption and regional restrictions, establish Microsoft Entra ID access controls that protect sensitive resources through least-privilege principles, implement Azure AI Content Safety filters that prevent harmful outputs while maintaining operational efficiency, and create monitoring workflows through Azure Monitor and Microsoft Purview that provide audit trails for compliance reviews. By the end, you can demonstrate to auditors and leadership that your AI operations balance innovation velocity with risk management and regulatory adherence.

In this module, you learn to:

- Evaluate AI governance requirements and align them with Microsoft Foundry capabilities
- Configure Azure Policy and role-based access controls for AI workloads
- Implement content filtering and responsible AI safeguards using Azure AI services
- Establish monitoring and audit trails for AI operations using Azure Monitor and Microsoft Purview
- Apply governance best practices for model lifecycle management and data protection

Before starting this module, you should have:

- Familiarity with Azure fundamentals, including resource groups, subscriptions, and the Azure portal
- Basic understanding of AI and machine learning concepts such as models, prompts, and completions
- Experience navigating the Azure portal and executing Azure CLI commands
Lines changed: 39 additions & 0 deletions
You might be wondering how organizations manage AI deployments when projects span multiple teams, handle sensitive data, and must satisfy auditors from healthcare, finance, and privacy regulatory bodies. The answer lies in coordinated controls that work together rather than isolated tools. Enterprise AI governance requires five interconnected pillars that address different aspects of risk and compliance.

At the foundation, policy enforcement through Azure Policy ensures every AI resource meets organizational standards before deployment. Consider a financial services firm that must keep customer data within specific geographic boundaries: Azure Policy definitions evaluate each resource creation request and block deployments to unauthorized regions automatically, preventing compliance violations before they occur. With this approach, your security team defines rules once and enforces them consistently across all subscriptions, eliminating the risk that developers accidentally deploy resources to noncompliant locations.
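A minimal sketch can make the evaluation concrete. The dict below mirrors the if/then shape of an Azure Policy rule, but the helper function and region list are illustrative assumptions, not the service's actual implementation:

```python
# Hypothetical policy dict mirroring Azure Policy's if/then rule shape.
ALLOWED_LOCATIONS_POLICY = {
    "if": {"field": "location", "notIn": ["switzerlandnorth", "switzerlandwest"]},
    "then": {"effect": "deny"},
}

def evaluate(policy: dict, request: dict) -> str:
    """Return the effect applied to a resource creation request."""
    cond = policy["if"]
    value = request.get(cond["field"])
    # A 'notIn' condition triggers when the field value is absent from the list.
    triggered = "notIn" in cond and value not in cond["notIn"]
    return policy["then"]["effect"] if triggered else "allow"

print(evaluate(ALLOWED_LOCATIONS_POLICY, {"location": "eastus"}))            # deny
print(evaluate(ALLOWED_LOCATIONS_POLICY, {"location": "switzerlandnorth"}))  # allow
```

The point of the sketch is the ordering: the rule is checked before the resource exists, so a noncompliant deployment never happens rather than being cleaned up afterward.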

Identity and access management ensures that AI resources are protected through least-privilege access and adaptive security controls.

- Microsoft Entra ID and RBAC assign scoped roles that limit access to only what users need, reducing risk from excessive permissions.
- Conditional access policies strengthen security for contractors and partners by enforcing factors like multifactor authentication and device compliance.
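The adaptive idea above can be sketched as a toy decision function. Every signal name here is hypothetical; real conditional access policies are configured declaratively in Microsoft Entra ID, not written as application code:

```python
# Toy conditional-access style decision over hypothetical signals.
def access_decision(signals: dict) -> str:
    if not signals.get("device_compliant", False):
        return "block"            # unmanaged or noncompliant device: deny
    if signals.get("external_user", False) and not signals.get("mfa_completed", False):
        return "require_mfa"      # contractors and partners must satisfy MFA
    return "allow"

print(access_decision({"device_compliant": False}))                        # block
print(access_decision({"device_compliant": True, "external_user": True}))  # require_mfa
```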

Data protection mechanisms safeguard sensitive information as it moves through AI systems.

- Microsoft Purview automatically discovers, classifies, and labels sensitive data so protections persist throughout the data lifecycle.
- Azure Key Vault secures encryption keys in hardware security modules, ensuring data remains protected even from privileged administrators.
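As a loose analogy for automatic discovery and labeling, the sketch below matches simple patterns and attaches labels. The patterns and label names are simplified stand-ins; Purview's built-in classifiers are far richer:

```python
import re

# Illustrative sensitive-data patterns mapped to sensitivity labels.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text: str) -> list[str]:
    """Return the sensitivity labels whose patterns appear in the text."""
    return [label for label, rx in PATTERNS.items() if rx.search(text)]

print(classify("Contact jane@contoso.com about claim 123-45-6789"))  # ['SSN', 'Email']
print(classify("quarterly roadmap review"))                          # []
```

Once a label is attached, downstream controls (encryption, access restrictions, retention) can key off it for the rest of the data's lifecycle.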

Model lifecycle governance controls how AI models are tested, approved, and released to production.

- Azure Machine Learning enforces versioning and approval gates so models meet performance, security, and compliance standards before deployment.
- Parallel testing environments allow teams to maintain development speed while reducing risks associated with unvalidated production changes.
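The approval-gate idea can be sketched in a few lines. The class and field names are hypothetical; Azure Machine Learning registries provide versioning and gated deployment natively:

```python
# Toy model registry with an approval gate before production deployment.
class ModelRegistry:
    def __init__(self):
        self.versions = {}  # version -> {"approved": bool}

    def register(self, version: str):
        self.versions[version] = {"approved": False}

    def approve(self, version: str):
        self.versions[version]["approved"] = True

    def deploy(self, version: str, environment: str) -> str:
        # The gate: unapproved versions may only go to non-production targets.
        if environment == "production" and not self.versions[version]["approved"]:
            raise PermissionError(f"{version} is not approved for production")
        return f"deployed {version} to {environment}"

reg = ModelRegistry()
reg.register("v2")
print(reg.deploy("v2", "staging"))     # parallel testing is allowed immediately
reg.approve("v2")
print(reg.deploy("v2", "production"))  # passes the gate only after approval
```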

:::image type="content" source="../media/azure-policy-regions-compliance.png" alt-text="Diagram showing how Azure Policy enforces region compliance by blocking deployments outside approved EU locations.":::

Azure Monitor and Microsoft Purview provide end-to-end auditing and real-time monitoring that deliver auditable compliance evidence and enable proactive operational response.

- Every policy decision, access request, content filter action, and model deployment is automatically logged to immutable Log Analytics audit trails.
- Auditors can quickly answer compliance questions, such as model approvals or content safety violations, using authoritative logs instead of manual records.
- Real-time Azure Monitor alerts flag policy violations or abuse patterns early, allowing teams to respond before issues become regulatory incidents.
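An append-only trail of structured entries is the core idea behind such audit logs. The field names below are hypothetical; in Azure these records land in a Log Analytics workspace and are queried with KQL rather than Python:

```python
import datetime
import json

# Append-only audit trail: entries are serialized once and never edited.
audit_log: list[str] = []

def record(event: str, actor: str, target: str):
    audit_log.append(json.dumps({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "event": event, "actor": actor, "target": target,
    }))

record("model_deployment_approved", "alice@contoso.com", "summarizer/v2")
record("content_filter_block", "svc-agent", "prompt-8812")

# An auditor's question ("who approved this deployment?") becomes a log query.
approvals = [json.loads(e) for e in audit_log
             if json.loads(e)["event"] == "model_deployment_approved"]
print(approvals[0]["actor"])  # alice@contoso.com
```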

The five governance pillars work together as a cohesive framework that balances strong oversight with continued innovation across AI systems.

- Policy enforcement, identity management, data protection, model lifecycle controls, and audit capabilities function as an integrated governance system rather than isolated tools.
- Microsoft Foundry and Azure AI services supply the technical foundation, while organizations define the policies and procedures that align with regulatory needs, risk tolerance, and operational maturity.
- Understanding how these components interconnect allows teams to protect stakeholders, meet compliance requirements, and enable innovation without unnecessary friction.

:::image type="content" source="../media/architecture-governance-framework-top-branch.png" alt-text="Diagram showing AI Governance Framework at the top branching into five pillars.":::

*AI governance framework architecture showing five interconnected pillars with their supporting Microsoft services*
Lines changed: 27 additions & 0 deletions
Consider a scenario where your data science team wants to deploy Azure OpenAI models across development, staging, and production environments. Without governance, developers might choose the most convenient Azure region regardless of data residency requirements, use default encryption that doesn't meet security standards, or deploy expensive SKUs in test environments that inflate costs. Azure Policy definitions prevent these issues by validating resource configurations before deployment succeeds.

With Azure Policy, you define rules once and enforce them consistently. For example, an "Allowed locations" policy restricts Azure OpenAI deployments to European regions only, ensuring regulatory compliance by preventing data transfer outside the EU. When a developer attempts to create a resource in an unauthorized region, the deployment fails immediately with a clear error message explaining the violation and suggesting compliant alternatives. This becomes especially important for organizations operating in multiple regulatory jurisdictions: you assign different policy sets to subscriptions based on their data classification, automatically adapting controls to each workload's risk profile.

Encryption and configuration policies enforce consistent security and cost controls across AI resources.

- Policy enforcement requires customer-managed keys in Azure Key Vault for services processing sensitive data, preventing deployments without approved encryption.
- Policies also restrict unsupported SKU tiers, reducing the risk of accidental cost overruns from inappropriate resource pricing choices.
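A combined validation can be sketched as follows. The field names and the approved SKU set are hypothetical stand-ins for the resource properties Azure Policy actually inspects:

```python
# Hypothetical approved SKU set for test and production AI resources.
APPROVED_SKUS = {"S0"}

def validate_deployment(resource: dict) -> list[str]:
    """Return the policy violations for a proposed AI resource configuration."""
    violations = []
    if resource.get("encryption") != "CustomerManagedKey":
        violations.append("deny: customer-managed key in Key Vault required")
    if resource.get("sku") not in APPROVED_SKUS:
        violations.append(f"deny: SKU {resource.get('sku')!r} is not approved")
    return violations

print(validate_deployment({"encryption": "MicrosoftManaged", "sku": "Premium"}))
print(validate_deployment({"encryption": "CustomerManagedKey", "sku": "S0"}))  # []
```

Returning every violation at once, rather than failing on the first, mirrors how a batch of policy assignments reports all noncompliant settings together.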

Identity-based access controls ensure resources are accessed only under appropriate conditions.

- Microsoft Entra ID conditional access evaluates user context, such as device compliance and location, before granting access to AI roles.
- Adaptive controls require stronger authentication for higher-risk scenarios, blocking access from unmanaged or noncompliant devices.

Role-based access control aligns permissions with job responsibilities and enforces separation of duties.

- Built-in RBAC roles provide scoped access for common tasks, such as consumption, configuration, and usage monitoring.
- Custom roles allow organizations to separate development and approval responsibilities, reducing the risk of unauthorized or unreviewed production deployments.
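Scope inheritance is what makes RBAC assignments composable: a role granted at a broader scope applies to everything beneath it. The sketch below uses simplified scope strings (real ARM scopes are full resource IDs, and matching respects path segments):

```python
# Illustrative assignments: different roles per environment resource group.
assignments = [
    ("data-science-team", "Cognitive Services Contributor", "/sub/rg-dev"),
    ("data-science-team", "Cognitive Services User", "/sub/rg-prod"),
]

def roles_for(principal: str, resource_scope: str) -> set[str]:
    """Collect roles assigned at the resource's scope or any parent scope."""
    return {role for p, role, scope in assignments
            if p == principal and resource_scope.startswith(scope)}

print(roles_for("data-science-team", "/sub/rg-dev/openai-research"))
# -> contributor rights: can create and configure research resources
print(roles_for("data-science-team", "/sub/rg-prod/openai-live"))
# -> user rights only: read-only consumption in production
```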

Security engineers typically assign policies at the subscription or resource group scope to balance governance coverage with administrative overhead. A policy assigned to the subscription affects all resources within that subscription, providing broad protection with minimal configuration. However, resource group scoping enables exceptions for legitimate scenarios: for example, a research group might receive an exemption from the standard SKU policy to experiment with premium-tier features, while production resource groups remain strictly governed. Organizations document exception processes that require business justification, security review, and time-limited approvals, ensuring that policy exemptions don't create permanent compliance gaps.

:::image type="content" source="../media/policy-scenario-decision-tool-interactive.png" alt-text="Diagram showing Azure Policy assignment workflow in the Azure portal.":::
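One way to picture a time-limited exemption is the check below. The field names are illustrative, though Azure Policy exemptions do expose a comparable expiration property:

```python
import datetime

def is_exempt(exemption: dict, scope: str, today: datetime.date) -> bool:
    """An exemption applies only within its scope and before its expiry date."""
    return scope.startswith(exemption["scope"]) and today <= exemption["expires"]

research_exemption = {
    "scope": "/sub/rg-research",
    "justification": "premium-tier experiment, security review completed",
    "expires": datetime.date(2026, 6, 30),
}

print(is_exempt(research_exemption, "/sub/rg-research/openai-exp",
                datetime.date(2026, 3, 1)))  # True: inside the approval window
print(is_exempt(research_exemption, "/sub/rg-research/openai-exp",
                datetime.date(2026, 9, 1)))  # False: the approval has lapsed
```

Making expiry part of the data structure, rather than a calendar reminder, is what prevents exemptions from quietly becoming permanent compliance gaps.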

With policies and access controls working together, you establish defense in depth: policies prevent misconfigured resources from being created, conditional access protects against compromised credentials or unauthorized access attempts, and RBAC limits damage from insider threats by restricting each user to the minimum permissions required for their role. This layered approach addresses multiple attack vectors simultaneously while maintaining developer productivity through clear guardrails rather than blanket restrictions.

:::image type="content" source="../media/access-control-policy-validation-flow.png" alt-text="Diagram showing user requesting AI resource access through Microsoft Entra ID authentication.":::

*Access control and policy validation flow showing authentication, authorization, and policy enforcement sequence for AI resource access*
