
Commit 7db2bd1

Merge pull request #53328 from wwlpublish/Mod159721
Created GitHubRepo M159721
2 parents 99d1e99 + 09281f6 commit 7db2bd1

44 files changed

Lines changed: 765 additions & 0 deletions

Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
### YamlMime:ModuleUnit
uid: learn.wwl.implement-secure-ai-ready-infrastructure-azure-services.introduction
title: "Introduction"
metadata:
  title: "Introduction"
  description: "Learn how to build secure AI infrastructure with Microsoft Foundry Hubs, Azure OpenAI Service, and Azure Container Registry. Get started now!"
  ms.date: 02/04/2026
  author: wwlpublish
  ms.author: bradj
  ms.topic: unit
durationInMinutes: 5
content: |
  [!include[](includes/1-introduction.md)]
@@ -0,0 +1,13 @@
### YamlMime:ModuleUnit
uid: learn.wwl.implement-secure-ai-ready-infrastructure-azure-services.understand-microsoft-foundry-security-architecture
title: "Understand Microsoft Foundry security architecture"
metadata:
  title: "Understand Microsoft Foundry Security Architecture"
  description: "Learn how Microsoft Foundry Hubs enforce centralized security governance, ensuring consistency across AI projects while maintaining team autonomy."
  ms.date: 02/04/2026
  author: wwlpublish
  ms.author: bradj
  ms.topic: unit
durationInMinutes: 12
content: |
  [!include[](includes/2-understand-microsoft-foundry-security-architecture.md)]
Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
### YamlMime:ModuleUnit
uid: learn.wwl.implement-secure-ai-ready-infrastructure-azure-services.secure-azure-openai-cognitive-services
title: "Secure Azure OpenAI and Cognitive Services"
metadata:
  title: "Secure Azure OpenAI Service and Cognitive Services"
  description: "Learn how to secure Azure OpenAI Service and Cognitive Services with managed identities, private endpoints, and content filtering for compliance."
  ms.date: 02/04/2026
  author: wwlpublish
  ms.author: bradj
  ms.topic: unit
durationInMinutes: 13
content: |
  [!include[](includes/3-secure-azure-openai-cognitive-services.md)]
@@ -0,0 +1,13 @@
### YamlMime:ModuleUnit
uid: learn.wwl.implement-secure-ai-ready-infrastructure-azure-services.secure-ai-container-images-azure-container
title: "Secure AI container images with Azure Container Registry"
metadata:
  title: "Secure AI Container Images with Azure Container Registry"
  description: "Secure AI container images using Azure Container Registry with automated vulnerability scanning, access control, and content trust features."
  ms.date: 02/04/2026
  author: wwlpublish
  ms.author: bradj
  ms.topic: unit
durationInMinutes: 11
content: |
  [!include[](includes/4-secure-ai-container-images-azure-container.md)]
@@ -0,0 +1,13 @@
### YamlMime:ModuleUnit
uid: learn.wwl.implement-secure-ai-ready-infrastructure-azure-services.exercise-configure-secure-ai-infrastructure
title: "Configure secure AI infrastructure in Azure"
metadata:
  title: "Configure Secure AI Infrastructure in Azure"
  description: "Learn how to configure secure AI infrastructure in Azure using private networking, managed connectivity, and customer-managed keys for compliance."
  ms.date: 02/04/2026
  author: wwlpublish
  ms.author: bradj
  ms.topic: unit
durationInMinutes: 12
content: |
  [!include[](includes/5-exercise-configure-secure-ai-infrastructure.md)]
Lines changed: 48 additions & 0 deletions
@@ -0,0 +1,48 @@
### YamlMime:ModuleUnit
uid: learn.wwl.implement-secure-ai-ready-infrastructure-azure-services.knowledge-check
title: "Module assessment"
metadata:
  title: "Knowledge check"
  description: "Knowledge check"
  ms.date: 02/03/2026
  author: wwlpublish
  ms.author: bradj
  ms.topic: unit
  module_assessment: false
durationInMinutes: 5
content: "Choose the best response for each of the following questions."
quiz:
  questions:
  - content: "Your fraud detection team reports that their Azure OpenAI Service calls are failing with authentication errors after you configured managed identity. The team's application runs in an Azure Kubernetes Service pod and previously used API keys stored in Kubernetes secrets. What is the most likely cause of the authentication failure?"
    choices:
    - content: "The AKS pod identity needs the Cognitive Services OpenAI User role assigned at the Azure OpenAI Service scope through Azure RBAC"
      isCorrect: true
      explanation: "Correct. Managed identities require explicit RBAC role assignments to access Azure services. The pod identity must have the Cognitive Services OpenAI User role assigned on the target OpenAI Service instance."
    - content: "Managed identities can't authenticate to Azure OpenAI Service from AKS pods and require API keys stored in Azure Key Vault instead"
      isCorrect: false
      explanation: "Incorrect. Managed identities fully support authentication from AKS pods through Azure AD Pod Identity or workload identity."
    - content: "The private endpoint configuration is blocking managed identity token validation and needs a service endpoint exception rule"
      isCorrect: false
      explanation: "Incorrect. Private endpoints don't interfere with managed identity authentication—tokens are validated through Azure Active Directory, not through network connectivity to the service."
  - content: "Your compliance team requires that all AI service requests and responses remain within European Azure regions to satisfy regulatory data residency requirements. You've deployed Azure OpenAI Service in West Europe with a private endpoint. Which additional configuration ensures complete data residency compliance?"
    choices:
    - content: "Configure diagnostic settings to send logs to a Log Analytics workspace located in the same West Europe region where the OpenAI service is deployed"
      isCorrect: true
      explanation: "Correct. Diagnostic logs contain request metadata and potentially sensitive information that must stay within the same region to maintain complete data residency. Storing logs in a West Europe Log Analytics workspace ensures all data processing and storage occurs within the compliant region."
    - content: "Enable content filtering with geo-fencing rules that block requests originating from IP addresses outside the European Economic Area"
      isCorrect: false
      explanation: "Incorrect. Content filtering doesn't control request geography—private endpoints and virtual network routing control network paths, not IP-based geo-fencing."
    - content: "Deploy geo-replication for the OpenAI service to North Europe region to provide automatic failover while maintaining European data residency"
      isCorrect: false
      explanation: "Incorrect. Azure OpenAI Service doesn't support geo-replication; you would deploy separate OpenAI instances per region rather than replicating a single instance."
  - content: "Your security team discovered a critical CVE in the TensorFlow library used by your fraud detection model container. Microsoft Defender for Containers identified the vulnerability, but your deployment pipeline still allows the vulnerable image to deploy to production. What configuration change prevents deployment of containers with critical vulnerabilities?"
    choices:
    - content: "Implement Azure Policy with a custom policy definition that denies AKS pod deployments when the source container image has unresolved critical or high severity CVEs"
      isCorrect: true
      explanation: "Correct. Azure Policy provides enforcement at deployment time by evaluating container image security scan results and denying deployments that violate policy rules. Custom policy definitions can query Defender's vulnerability assessments and block AKS deployments when critical CVEs exist."
    - content: "Enable content trust on Azure Container Registry to require digital signatures, which automatically block images with security vulnerabilities from being pulled"
      isCorrect: false
      explanation: "Incorrect. Content trust verifies publisher identity and image integrity, not vulnerability status—a signed image can still contain vulnerable libraries."
    - content: "Configure Microsoft Defender to automatically quarantine vulnerable images by moving them to a separate repository with restricted pull permissions"
      isCorrect: false
      explanation: "Incorrect. Defender identifies vulnerabilities but doesn't automatically quarantine images; you must implement separate processes or policies to enforce deployment restrictions based on scan results."
Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
### YamlMime:ModuleUnit
uid: learn.wwl.implement-secure-ai-ready-infrastructure-azure-services.summary
title: "Summary"
metadata:
  title: "Summary"
  description: "Learn how to build secure AI infrastructure with Azure services, ensuring governance, vulnerability scanning, and compliance for production AI."
  ms.date: 02/04/2026
  author: wwlpublish
  ms.author: bradj
  ms.topic: unit
durationInMinutes: 3
content: |
  [!include[](includes/7-summary.md)]
Lines changed: 26 additions & 0 deletions
@@ -0,0 +1,26 @@
Microsoft Foundry Hubs and Projects deliver the governance framework your security team requires. By combining Foundry's centralized policy enforcement with Azure OpenAI Service, Azure Cognitive Services, and Azure Container Registry, you build AI infrastructure that passes enterprise security reviews while maintaining developer agility. This approach eliminates public internet exposure through private endpoints, removes credential sprawl through managed identities, and provides visibility into container vulnerabilities before production deployment.

In this module, you configure a Foundry Hub to enforce security policies across multiple AI projects, integrate Azure OpenAI Service with network isolation and identity controls, and deploy Azure Container Registry with automated vulnerability scanning. By the end, you have production-ready AI infrastructure that satisfies your compliance team and accelerates safe AI adoption across your organization.

## Learning objectives

By the end of this module, you're able to:

- Configure Microsoft Foundry Hubs and Projects for secure AI development environments
- Implement Azure OpenAI Service and Cognitive Services with enterprise security controls
- Secure AI container images and deployments using Azure Container Registry
- Apply network isolation and identity governance to protect AI infrastructure

## Prerequisites

Before starting this module, you should be familiar with:

- Azure fundamentals including resource groups, virtual networks, and identity management
- Basic AI and machine learning concepts
- Container fundamentals and Docker basics

## More resources

- [Microsoft Foundry documentation](/azure/ai-foundry/) - Official documentation for Foundry Hubs and Projects
- [Azure OpenAI Service security best practices](/azure/ai-services/openai/how-to/managed-identity) - Guidance on securing Azure OpenAI deployments
Lines changed: 45 additions & 0 deletions
@@ -0,0 +1,45 @@

## The centralized governance challenge

You might be wondering how to enforce consistent security policies across multiple AI development teams without slowing down innovation. Traditional approaches create isolated environments for each team, forcing security administrators to configure network rules, identity permissions, and encryption settings repeatedly. With five AI projects underway, your security team faces configuring private endpoints, managed identities, and audit logging five separate times—multiplying administrative overhead and increasing the risk of configuration drift.

Microsoft Foundry Hubs solve this challenge by providing centralized security governance that applies automatically to all connected projects. A Foundry Hub acts as the policy enforcement layer where you configure network isolation, identity controls, and data protection once. Every Foundry Project that connects to the hub inherits these security controls, ensuring consistency without requiring per-project configuration. This separation allows your security team to maintain governance while data science teams retain autonomy within their project workspaces.

## How Hubs and Projects work together

Consider a scenario where your fraud detection team and customer service automation team both need access to Azure OpenAI Service. With traditional Azure resource groups, each team would create separate OpenAI instances, configure their own virtual networks, and manage independent sets of managed identities. This creates security islands where one team might enable public access while another enforces private endpoints—introducing compliance gaps your auditors flag.

:::image type="content" source="../media/manage-independent-sets-managed-identities.png" alt-text="Diagram showing traditional Azure resource groups where each team creates separate OpenAI instances.":::

Foundry changes this architecture fundamentally. You create one Foundry Hub that defines your organization's security baseline: private endpoint connectivity required, managed identity authentication mandatory, and diagnostic logging enabled for all resources. When the fraud detection team creates their Foundry Project, it automatically inherits the hub's networking configuration—their Azure OpenAI Service endpoint is private by default. The customer service team's project gets identical security controls without additional configuration. Both teams work independently within their project boundaries while the hub ensures security consistency.

This becomes especially important when your security team needs to respond to new compliance requirements. Instead of updating five separate environments, you modify the hub's policy once. All connected projects inherit the change immediately, reducing your compliance window from weeks to hours and eliminating the risk of overlooking an environment during updates.

## Core security components

The hub integrates three foundational security services that protect your AI workloads. Microsoft Entra ID provides identity and access management through role-based access control (RBAC), allowing you to define who can create projects, deploy models, and access training data. Unlike service-specific authentication, Microsoft Entra ID centralizes identity management—your existing user groups and conditional access policies apply automatically to AI resources. At the same time, managed identities eliminate credential storage by allowing applications and services to authenticate directly using Microsoft Entra ID tokens, removing API keys from your codebase entirely.
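The RBAC pattern here can be sketched in a few lines of plain Python. This is a conceptual illustration only—not the Azure SDK or a real authorization engine—and the identity, role, and resource names are hypothetical examples echoing this module's fraud detection scenario:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RoleAssignment:
    principal: str  # e.g. a managed identity's object ID
    role: str       # e.g. "Cognitive Services OpenAI User"
    scope: str      # resource ID (or parent scope) the role applies to

def has_access(assignments: list[RoleAssignment],
               principal: str, role: str, resource: str) -> bool:
    """An Entra ID token alone is not enough: access requires a matching
    RBAC role assignment at the target resource's scope (or a parent scope)."""
    return any(
        a.principal == principal and a.role == role and resource.startswith(a.scope)
        for a in assignments
    )

# The fraud detection pod can call the OpenAI resource only after this assignment exists.
openai_scope = "/subscriptions/sub1/resourceGroups/ai/providers/Microsoft.CognitiveServices/accounts/openai-prod"
assignments = [RoleAssignment("fraud-pod-identity", "Cognitive Services OpenAI User", openai_scope)]

print(has_access(assignments, "fraud-pod-identity", "Cognitive Services OpenAI User", openai_scope))
print(has_access(assignments, "junior-analyst", "Cognitive Services OpenAI User", openai_scope))
```

This mirrors the knowledge check scenario later in this commit: a valid managed identity without the role assignment still fails authorization.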

Azure Virtual Network delivers network isolation by creating private connectivity between your hub and Azure services. When you deploy a private endpoint for Azure OpenAI Service, all API traffic flows through your virtual network rather than the public internet. This network boundary prevents external access attempts and ensures compliance with data residency requirements. For example, your fraud detection models processing customer transactions stay within EU Azure regions, satisfying data localization mandates.

Azure Key Vault completes the protection layer by securing secrets, certificates, and encryption keys. Even when services require API keys for backward compatibility, Key Vault stores them with access logging and automatic rotation. Your applications reference keys using Key Vault URIs rather than embedding credentials in configuration files. With this approach, your security team gains audit trails showing when keys were accessed, by which identity, and from which application—visibility that's critical during security investigations.
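The Key Vault URI pattern can be sketched as follows. The vault and secret names are hypothetical; real code would resolve the URI through the Azure SDK with a managed identity rather than constructing and parsing it locally, so treat this as an illustration of the URI shape only:

```python
from urllib.parse import urlparse

def secret_uri(vault: str, name: str) -> str:
    """Build the standard Key Vault secret URI shape:
    https://<vault>.vault.azure.net/secrets/<name>"""
    return f"https://{vault}.vault.azure.net/secrets/{name}"

def parse_secret_uri(uri: str) -> tuple[str, str]:
    """Extract (vault name, secret name) from a Key Vault secret URI."""
    parsed = urlparse(uri)
    vault = parsed.hostname.split(".")[0]
    name = parsed.path.split("/")[2]  # path is /secrets/<name>
    return vault, name

# Configuration files store this URI, never the key value itself.
uri = secret_uri("fraud-kv", "openai-api-key")
print(uri)
print(parse_secret_uri(uri))
```

Because the application only holds a reference, rotating the secret in Key Vault requires no configuration change, and every retrieval is logged against the caller's identity.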

## Policy inheritance and workspace isolation

Now that you understand the hub's role, let's examine how projects maintain both security consistency and team autonomy. Each Foundry Project operates as an isolated workspace with dedicated compute resources, storage accounts, and AI service connections. Data scientists in the fraud detection project can't access datasets or models in the customer service project—even though both projects connect to the same hub. This isolation prevents accidental data leakage between business domains while simplifying access control through project-level RBAC assignments.

Building on this concept, policy inheritance works like a security firewall with mandatory baseline rules. The hub enforces non-negotiable requirements—private endpoints, managed identities, encryption at rest—that projects can't disable. Within these boundaries, project administrators customize permissions for their team members and configure project-specific resources like storage containers or compute clusters. For instance, the fraud detection project might grant senior data scientists permission to deploy production models while restricting junior analysts to development experiments. These project-level permissions add flexibility without compromising hub-enforced security controls.

Consider what happens when a new data scientist joins your fraud detection team. The project administrator assigns them the "Data Scientist" role within the Foundry Project. This role inherits hub-level policies automatically—they can only access AI services through private endpoints and must authenticate with their Microsoft Entra ID credentials. The project role then grants specific permissions to training datasets and development compute resources. This layered approach means your security team manages baseline controls once at the hub level, while project administrators handle day-to-day access management within their domains.
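The inheritance model described above can be sketched as a simple merge in which hub-enforced settings always win. This is a conceptual sketch, not Foundry's actual policy schema—the setting names are invented for illustration:

```python
# Hub baseline: non-negotiable requirements every connected project inherits.
HUB_BASELINE = {
    "network": "private-endpoint",
    "auth": "managed-identity",
    "encryption_at_rest": True,
}

def effective_policy(project_settings: dict) -> dict:
    """Project settings apply first; hub-enforced keys overwrite any
    project attempt to weaken them, while project-only keys survive."""
    merged = dict(project_settings)
    merged.update(HUB_BASELINE)
    return merged

# A project can add its own settings but cannot relax the baseline.
fraud_project = {"network": "public", "compute": "gpu-cluster"}
print(effective_policy(fraud_project))
```

Note how updating `HUB_BASELINE` in one place changes the effective policy of every project on the next evaluation, which is the mechanism behind the "update the hub once, all projects inherit" claim earlier in this unit.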

:::image type="content" source="../media/architecture-centralized-security-controls.png" alt-text="Diagram showing the Microsoft Foundry Hub architecture with centralized security controls.":::

*Microsoft Foundry Hub architecture showing centralized security controls with policy inheritance flowing to isolated project workspaces*

## More resources

- [Plan your Microsoft Foundry Hub architecture](/azure/ai-foundry/concepts/hubs-projects) - Architectural guidance for hub and project design patterns
- [Configure managed identities for Azure AI services](/azure/ai-services/authentication) - Step-by-step guide for passwordless authentication