Commit a4837d9 ("added md files"), parent f3eb68c

When your data science team requests an Azure OpenAI deployment, Microsoft Foundry orchestrates a series of governance checks before provisioning any resources. This orchestration happens through four integrated components that work together to enforce enterprise policies while maintaining developer productivity.

## Resource catalog: Preapproved AI infrastructure templates

The resource catalog acts as your organization's AI service storefront. Instead of developers creating resources from scratch and potentially misconfiguring security settings, they select from preapproved templates that already embed your security baselines. With this approach, a developer requesting GPU compute automatically gets managed identities enabled, diagnostic logging configured, and network isolation applied—all without understanding the underlying policy requirements. The catalog integrates with Azure Resource Manager and uses Bicep templates to ensure consistent deployments across all environments.
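The idea can be sketched in a few lines of Python. This is a hypothetical illustration, not the Foundry API: the `CATALOG` structure, template names, and `request_from_catalog` helper are invented here to show how a baked-in baseline prevents misconfiguration.

```python
# Hypothetical sketch of a resource catalog: every template ships with the
# security baseline already embedded, so a requester can't omit it.
CATALOG = {
    "azure-openai-standard": {
        "service": "Azure OpenAI",
        "baseline": {
            "managed_identity": True,     # identity enabled by default
            "diagnostic_logging": True,   # logs flow to a central workspace
            "network_isolation": True,    # private endpoint only
        },
    },
    "gpu-compute-small": {
        "service": "Azure Machine Learning",
        "baseline": {
            "managed_identity": True,
            "diagnostic_logging": True,
            "network_isolation": True,
        },
    },
}

def request_from_catalog(template_name: str, project: str) -> dict:
    """Return a deployment spec; the baseline can't be overridden by the caller."""
    template = CATALOG[template_name]
    return {"project": project, "service": template["service"], **template["baseline"]}

spec = request_from_catalog("azure-openai-standard", project="sentiment-poc")
```

Because the baseline fields come from the template rather than the request, a developer never has the opportunity to leave network isolation or logging disabled.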

Building on this foundation, the policy engine evaluates each request against your governance rules before any provisioning occurs.

## Policy engine: Automated rule evaluation

The policy engine connects to both Azure Policy for platform-level controls and custom Foundry policies for AI-specific requirements. When a developer selects an Azure OpenAI template from the catalog, the engine validates the request against rules like budget thresholds, approved model versions, and data residency requirements. Unlike traditional governance approaches that rely on manual reviews, this automated evaluation happens in seconds and provides immediate feedback to the requester. The engine evaluates security policies (such as requiring private endpoints), cost policies (such as monthly spending caps per business unit), and compliance policies (such as restricting customer data to specific Azure regions) simultaneously.
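A minimal Python sketch shows what simultaneous evaluation means in practice: all rule families run against the request and every violation is reported at once, rather than failing on the first. The `evaluate` function and the request fields are hypothetical, not Foundry's actual schema.

```python
# Hypothetical sketch: security, cost, and compliance rules evaluated together,
# returning every violation at once so the requester gets complete feedback.
def evaluate(request: dict) -> list[str]:
    violations = []
    if not request.get("private_endpoint"):
        violations.append("security: private endpoint required")
    if request.get("monthly_cost", 0) > request.get("budget_cap", 0):
        violations.append("cost: exceeds monthly spending cap")
    if request.get("region") not in request.get("approved_regions", []):
        violations.append("compliance: region not approved for customer data")
    return violations

result = evaluate({
    "private_endpoint": True,
    "monthly_cost": 1200, "budget_cap": 1000,
    "region": "eastus", "approved_regions": ["westeurope"],
})
# result contains a cost violation and a compliance violation
```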

This becomes especially important when requests exceed standard approval thresholds and require stakeholder review.

## Approval workflows: Intelligent request routing

Approval workflows integrate with Microsoft Entra ID to route high-value or high-risk requests to appropriate decision makers. For example, requests under $1,000 monthly cost might autoapprove, while production deployments requiring GPT-4 models trigger a workflow involving the AI governance board, security team, and budget owner. The workflow engine tracks approval history, enforces timeout policies, and escalates stalled requests automatically. With Power Automate integration, you can customize routing logic based on your organizational hierarchy and risk appetite.
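The routing rules in the example above can be sketched as a small decision function. This is an illustrative Python sketch under the thresholds stated in this unit; `route_request` and its parameters are hypothetical, and real routing would come from your Entra ID groups and Power Automate flows.

```python
def route_request(monthly_cost: float, model: str, environment: str) -> list[str]:
    """Hypothetical routing logic: return the approvers a request must pass.

    An empty list means the request autoapproves.
    """
    # High-risk production deployments always go to the full review chain.
    if environment == "prod" and model.startswith("gpt-4"):
        return ["ai-governance-board", "security-team", "budget-owner"]
    # Low-cost requests autoapprove to keep developers productive.
    if monthly_cost < 1000:
        return []
    # Everything else needs at least the budget owner's sign-off.
    return ["budget-owner"]
```

Encoding the routing as data-driven rules like this keeps the risk appetite reviewable in one place instead of scattered across ad hoc approval emails.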

:::image type="content" source="../media/approval-workflows-integrate-compliant-lifecycle.png" alt-text="Diagram that illustrates continuous monitoring that keeps approved resources compliant throughout their lifecycle.":::

At the same time, continuous monitoring ensures that approved resources remain compliant throughout their lifecycle.

## Compliance scanner: Continuous assessment

The compliance scanner continuously evaluates deployed AI resources against your governance policies, detecting configuration drift and unauthorized changes. This scanner integrates with Azure Monitor and Microsoft Defender for Cloud to correlate security alerts with policy violations. Consider what happens when a developer manually disables diagnostic logging on an Azure OpenAI endpoint: the scanner detects the drift within minutes, creates an incident ticket, and optionally autoremediates by re-enabling logging. These compliance results feed governance reports that demonstrate adherence to regulatory frameworks like SOC 2, ISO 27001, or industry-specific requirements.
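Drift detection reduces to comparing an approved configuration against the live one and pushing drifted settings back. The following Python sketch is a hypothetical illustration of that loop; the setting names and the `detect_drift`/`remediate` helpers are invented for this example.

```python
def detect_drift(desired: dict, actual: dict) -> dict:
    """Return each setting whose live value deviates from the approved value."""
    return {k: actual.get(k) for k, v in desired.items() if actual.get(k) != v}

def remediate(actual: dict, drift: dict, desired: dict) -> dict:
    """Push drifted settings back to their approved values (autoremediation)."""
    return {**actual, **{k: desired[k] for k in drift}}

# Approved configuration vs. what the scanner observes after a manual change.
desired = {"diagnostic_logging": True, "public_network_access": False}
actual = {"diagnostic_logging": False, "public_network_access": False}

drift = detect_drift(desired, actual)        # logging was manually disabled
fixed = remediate(actual, drift, desired)    # logging is re-enabled
```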

Now that you understand how these components work together, let's examine the specific integration points that connect Foundry to your existing Azure environment. The following diagram shows how a resource request flows through each governance component:

## Integration architecture

Microsoft Foundry doesn't operate in isolation—it extends and orchestrates your existing Azure governance tools. The policy engine uses Azure Policy definitions you created, enriching them with AI-specific rules. Microsoft Entra ID provides the identity foundation for both approval workflows and resource access controls, eliminating duplicate identity management. Azure Monitor streams telemetry from provisioned resources back to the compliance scanner, creating a closed-loop governance system. This integration approach means you're enhancing your current governance investments rather than replacing them.

With this architectural understanding in place, you can now compare how each component contributes to different governance scenarios. The following diagram maps governance requirements to the Foundry components that address them:

:::image type="content" source="../media/microsoft-foundry-governance-architecture.png" alt-text="Diagram showing an AI developer requesting resources through a catalog.":::

*Microsoft Foundry governance architecture showing request flow from developer to provisioned resources with policy checkpoints*

---
Your organization's AI governance requirements vary dramatically across different contexts: the marketing team experimenting with sentiment analysis needs different controls than the finance team deploying fraud detection models in production. Microsoft Foundry addresses this complexity through hierarchical policy inheritance that balances consistency with flexibility.

## Establish base security policies

Start by defining organization-wide security policies that apply universally across all business units and environments. These base policies enforce non-negotiable requirements like mandating managed identities for all AI services, requiring encryption at rest and in transit, and enabling diagnostic logging to centralized workspaces. With this foundation in place, no team can accidentally deploy an Azure OpenAI endpoint with unencrypted data or anonymous access—the policy engine blocks such requests before provisioning begins. Base policies integrate directly with Azure Policy definitions, which means your existing security baselines automatically extend to AI workloads managed through Foundry.
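The non-negotiable nature of base policies can be captured as a fail-closed gate: every requirement must be explicitly satisfied, and anything missing counts as a failure. This Python sketch is hypothetical; `BASE_POLICY` and `passes_base_policy` are illustrations, not Foundry or Azure Policy objects.

```python
# Hypothetical base security policy: applies to every business unit and environment.
BASE_POLICY = {
    "managed_identity": True,
    "encryption_at_rest": True,
    "encryption_in_transit": True,
    "diagnostic_logging": True,
}

def passes_base_policy(request: dict) -> bool:
    """Fail closed: a setting that is absent or False blocks provisioning."""
    return all(request.get(key) == value for key, value in BASE_POLICY.items())
```

Because the check uses `request.get(key)`, a request that simply omits a required setting fails exactly like one that disables it, which is the behavior you want from a non-negotiable baseline.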

Building on these universal controls, you can then layer environment-specific policies that adjust governance strictness based on risk profiles.

## Implement environment-specific controls

Development environments typically need relaxed quotas and broader service approval lists to enable rapid experimentation. For example, your development policy might allow any Azure AI service, set GPU quota limits at 10 vCPUs per project, and autoapprove all requests under **$500** monthly cost.

Production environments require the opposite approach:
- Strict service lists limited to validated AI models
- Mandatory approval workflows for any new deployment
- Cost thresholds requiring executive sign-off above **$5,000** monthly spend

This differentiation happens through policy scopes—you assign development policies to subscriptions or resource groups tagged with `Environment: Dev`, while production policies target resources tagged `Environment: Prod`.
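Tag-based scoping can be sketched as a lookup from resource tags to the policy set that applies. The structure below is a hypothetical Python illustration using the dev and prod thresholds from this unit; it isn't how Azure Policy assignments are actually stored.

```python
# Hypothetical policy scopes keyed by (tag name, tag value).
POLICY_SCOPES = {
    ("Environment", "Dev"): {
        "allowed_services": "any",       # broad list for experimentation
        "gpu_vcpu_quota": 10,            # per project
        "autoapprove_under": 500,        # monthly USD
    },
    ("Environment", "Prod"): {
        "allowed_services": "validated-only",
        "autoapprove_under": 0,          # every deployment needs approval
        "exec_signoff_above": 5000,      # monthly USD
    },
}

def policies_for(tags: dict) -> list[dict]:
    """Return every policy set whose scope tag matches the resource's tags."""
    return [policy for (name, value), policy in POLICY_SCOPES.items()
            if tags.get(name) == value]
```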

:::image type="content" source="../media/implement-environment-quotas-controls.png" alt-text="Diagram that illustrates business unit policies addressing organizational boundaries and budget ownership.":::

At the same time, business unit policies let you address organizational boundaries and budget ownership.

## Define business unit policies

Each business unit manages distinct budgets and faces different regulatory requirements. Your healthcare division might enforce policies requiring all customer data to remain in HIPAA-compliant regions, while your retail division prioritizes cost optimization over geographic restrictions. Business unit policies operate at the management group or subscription level, inheriting base security policies while adding unit-specific rules.

Consider a scenario where the finance team receives a monthly Azure OpenAI budget allocation of **$50,000** and the marketing team receives **$20,000**. These quotas live in the business unit policies, and the policy engine enforces them automatically. When finance attempts to deploy resources that would exceed its allocation, Foundry routes the request through an approval workflow to the finance VP rather than blocking it outright.
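The key design choice in that scenario is that an over-budget request is escalated, not denied. A hypothetical Python sketch of that behavior, using the allocations above (the `BUDGETS` table and `check_budget` function are invented for illustration):

```python
# Hypothetical monthly budget allocations per business unit (USD).
BUDGETS = {"finance": 50_000, "marketing": 20_000}

def check_budget(unit: str, current_spend: float, requested: float) -> str:
    """Approve within budget; route over-budget requests to the unit's VP
    instead of rejecting them outright."""
    if current_spend + requested <= BUDGETS[unit]:
        return "approve"
    return "route-to-vp"
```

Routing instead of denying keeps the decision with the budget owner, who may prefer to raise the allocation rather than block a legitimate workload.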

:::image type="content" source="../media/implement-environment-specific-controls.png" alt-text="Diagram that illustrates environment-specific controls.":::

This becomes especially important when you need to balance innovation velocity with compliance requirements across multiple jurisdictions.

## Configure data residency and compliance policies

Data residency policies ensure that AI workloads processing customer data deploy only to approved Azure regions. If your organization operates in the European Union, you might define a compliance policy requiring all production AI services to provision in `westeurope` or `northeurope` regions exclusively. The policy engine evaluates region specifications during request submission and rejects deployments targeting noncompliant regions before any resources are created. Unlike manual governance approaches that rely on post-deployment audits, this preventive enforcement eliminates compliance violations before they occur. Foundry also supports policy exemptions for specific scenarios—perhaps your data science team needs temporary access to a preview AI service available only in `eastus` for evaluation purposes. In such cases, the approval workflow routes the exemption request to your compliance officer for documented approval.
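The residency check with a documented-exemption escape hatch can be sketched in a few lines. This Python illustration uses the EU regions named above; `check_region` and the `exemption_approved` flag are hypothetical, standing in for the real exemption workflow.

```python
# Hypothetical allowlist for production AI workloads processing EU customer data.
ALLOWED_REGIONS = {"westeurope", "northeurope"}

def check_region(region: str, exemption_approved: bool = False) -> bool:
    """Reject noncompliant regions at request time, before any resource exists,
    unless the compliance officer has approved a documented exemption."""
    return region in ALLOWED_REGIONS or exemption_approved
```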

Now that you understand policy types and scopes, let's examine how these policies interact when multiple rules apply to a single request. The following diagram illustrates policy inheritance across organizational hierarchies:

:::image type="content" source="../media/data-resilience-policy-approval.png" alt-text="Diagram that illustrates policy inheritance across organizational hierarchies.":::

## Policy evaluation hierarchy

When a developer requests an Azure OpenAI deployment, Foundry evaluates policies in a specific order: base security policies first, then business unit policies, then environment-specific policies, and finally project-level overrides. This hierarchy ensures that universal security requirements always apply, while allowing appropriate flexibility at lower levels. Consider what happens when conflicting policies exist: if base policy requires managed identities (allowed: true) but a development environment policy attempts to disable this requirement (allowed: false), the more restrictive policy wins—in this case, the base policy enforcement. This fail-secure approach prevents accidental weakening of security postures through misconfigurations.
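The restrictive-wins merge described above can be sketched for boolean requirements: once any layer enforces a setting, no later layer can relax it. The `merge_policies` function is a hypothetical Python illustration of the principle, not Foundry's evaluation engine.

```python
def merge_policies(layers: list[dict]) -> dict:
    """Merge policy layers in evaluation order (base -> business unit ->
    environment -> project). For boolean requirements, an enforced True can
    never be downgraded by a later layer: the restrictive value wins."""
    merged: dict = {}
    for layer in layers:
        for key, required in layer.items():
            merged[key] = merged.get(key, False) or required
    return merged

effective = merge_policies([
    {"managed_identity": True},    # base policy: required
    {},                            # business unit: silent on this setting
    {"managed_identity": False},   # dev environment tries to relax it
])
# managed_identity stays enforced: the base policy wins
```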

With this hierarchical understanding in place, you're ready to examine the specific policy types and their practical applications. The following diagram shows common governance scenarios and their corresponding policy layers:

:::image type="content" source="../media/hierarchical-policy-structure-microsoft-foundry.png" alt-text="Diagram showing organization at top with base security policy applied universally.":::

*Hierarchical policy structure in Microsoft Foundry showing base policies inherited across organizational units with environment-specific customizations*