## The three-tier organizational model

When your organization deploys AI at scale, you need infrastructure that balances centralized governance with team autonomy. Microsoft Foundry addresses this challenge through a three-tier hierarchy that mirrors how enterprises actually organize work. At the top, the Microsoft Foundry Portal provides a unified management interface where IT leadership views all AI initiatives across departments. With this single pane of glass, your CTO can monitor spending, track project health, and identify underutilized resources without switching between multiple Azure services.

Beneath the Portal, hubs establish shared governance boundaries—typically aligned with departments, environments, or compliance requirements. Your organization might create a Production Hub for customer-facing applications that enforces strict network isolation and a Development Hub where data scientists experiment with public endpoints. Each hub contains configuration that automatically applies to every project within it: virtual network integration, managed identities for authentication, and Azure Policy assignments that prevent teams from deploying noncompliant resources. This inheritance model reduces your operational overhead by 40-60% compared to configuring security settings separately for each AI project.

Projects sit at the bottom tier as isolated workspaces where teams build AI solutions. Think of projects as dedicated folders within a shared drive—each team has full autonomy to train models, manage datasets, and deploy applications while inheriting the hub's network and security settings. The Support Team's chatbot project and the Sales Team's forecasting project both live in the same Production Hub, sharing network rules and managed identities while maintaining separate code, data, and model repositories. This structure answers the common challenge: "How do we let teams move fast without creating security or compliance gaps?"

:::image type="content" source="../media/hub-level-policy-inheritance.png" alt-text="Diagram showing hub-level policy inheritance with three lanes.":::
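
The hub-to-project inheritance described above can be sketched in a few lines of Python. This is purely an illustrative model, not a Foundry SDK: the `Hub` and `Project` classes, field names, and identity values are hypothetical, and the real relationships are expressed through Azure Resource Manager.

```python
from dataclasses import dataclass, field

@dataclass
class Hub:
    """Hypothetical hub: holds settings every child project inherits."""
    name: str
    network_isolation: bool        # private endpoints vs. public access
    managed_identity: str          # system-assigned identity shared by projects
    policies: list = field(default_factory=list)

@dataclass
class Project:
    """Hypothetical project: own workspace, but security comes from the hub."""
    name: str
    hub: Hub

    # Inherited settings are read from the hub; projects cannot override them.
    @property
    def network_isolation(self) -> bool:
        return self.hub.network_isolation

    @property
    def identity(self) -> str:
        return self.hub.managed_identity

prod = Hub("Production", network_isolation=True,
           managed_identity="prod-hub-mi", policies=["require-tags"])
chatbot = Project("Support Chatbot", prod)
forecast = Project("Sales Forecasting", prod)

# Both projects share the hub's boundary while remaining separate workspaces.
assert chatbot.network_isolation and forecast.network_isolation
assert chatbot.identity == forecast.identity == "prod-hub-mi"
```

The point of the sketch is the one-way flow: projects read hub settings but never define their own network or identity configuration, which is exactly what makes the audit surface small.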
## Connected resources and the sharing model

Now that you understand the three-tier hierarchy, consider how Azure services integrate with this architecture. Connected resources—like Azure AI Search, Azure OpenAI Service, Azure Storage, and Application Insights—attach at either the hub or project level depending on whether they're shared or dedicated. Hub-level connections make the most impact: when you connect Azure AI Search at the hub level, all projects within that hub can query the same search service without duplicating configuration or incurring separate costs. Your five AI projects share one Standard-tier search instance instead of each provisioning its own Basic tier, cutting your monthly search spend from $1,250 to $250.

With this shared connection in place, project teams maintain logical isolation through separate search indexes. The E-commerce Bot Project queries the Product Catalog index containing 2 million product documents, while the Support Agent Project queries the Knowledge Base index with 500,000 troubleshooting articles. Both indexes live in the same hub-connected search service, yet each project only accesses its authorized data. At the same time, your centralized operations team monitors performance, manages capacity, and reviews audit logs from a single Azure AI Search resource instead of tracking metrics across multiple isolated instances.

Project-level connections serve scenarios requiring dedicated resources. If your healthcare application needs an isolated Azure OpenAI deployment with specific data residency guarantees, you connect that service directly to the Healthcare Project rather than sharing it hub-wide. This flexibility becomes essential when compliance, performance, or cost allocation requirements demand separation. However, most organizations start with hub-level connections for common services like search and storage, then add project-specific connections only when justified by regulatory or technical constraints.
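
The scoping rule from these paragraphs, project-level connections take precedence and everything else falls back to the hub, can be modeled as a simple lookup. All names below (connection IDs, index names) are hypothetical placeholders, not real Azure resource names.

```python
# Hypothetical connection registry mirroring the sharing model in the text.
HUB_CONNECTIONS = {"search": "shared-standard-search"}  # visible to all projects

PROJECT_CONNECTIONS = {
    # Dedicated connection, e.g. for data-residency requirements.
    "Healthcare Project": {"openai": "isolated-openai-eastus"},
}

# Logical isolation inside the shared search service: one index per project.
PROJECT_INDEXES = {
    "E-commerce Bot Project": "product-catalog",
    "Support Agent Project": "knowledge-base",
}

def resolve_connection(project: str, service: str) -> str:
    """Project-level connections win; otherwise fall back to the hub's."""
    dedicated = PROJECT_CONNECTIONS.get(project, {})
    if service in dedicated:
        return dedicated[service]
    return HUB_CONNECTIONS[service]

# Both bot projects resolve to the same shared search service...
assert resolve_connection("E-commerce Bot Project", "search") == "shared-standard-search"
# ...but query different indexes, preserving logical isolation.
assert PROJECT_INDEXES["E-commerce Bot Project"] != PROJECT_INDEXES["Support Agent Project"]
# The healthcare workload gets its dedicated resource.
assert resolve_connection("Healthcare Project", "openai") == "isolated-openai-eastus"
```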
## Identity, networking, and policy inheritance

Building on this foundation of shared and dedicated resources, examine how hubs enforce consistent security across projects. When you provision a hub, Azure creates a system-assigned managed identity that authenticates to connected services without storing credentials. This managed identity rotates automatically, eliminating the security risk of long-lived API keys scattered across project code. Your hub's managed identity receives the Search Index Data Reader role on the connected Azure AI Search service, and every project inheriting that connection uses the same identity—your security team audits one permission assignment instead of 15.

Network configuration follows the same inheritance pattern. Configure the hub with virtual network integration and a private endpoint to your on-premises data center, and all child projects automatically route traffic through that secure path. Developers building the Customer Insights Project never see network configuration options—they inherit the Production Hub's network topology and focus on training models. This becomes especially important when your compliance team requires network traffic logs: Azure Network Watcher captures packets at the hub level, providing unified visibility across all AI workloads instead of requiring per-project monitoring.

Azure Policy assignments applied at the hub scope cascade to projects, preventing teams from accidentally violating organizational standards. Assign a policy requiring specific tags on all resources (CostCenter, DataClassification, Owner), and every storage account or compute instance deployed in any project automatically inherits those requirements. If a developer attempts to create a resource without required tags, Azure blocks the deployment with a clear error message pointing to the policy definition. With this approach, your governance team defines rules once at the hub level and enforces them consistently across dozens of projects without manual audits.

:::image type="content" source="../media/azure-policy-hub-scope-cascade.png" alt-text="Diagram showing Azure Policy assignments applied at the hub scope cascade to projects.":::
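
The required-tags check described above can be sketched as a small function. Note that the real enforcement happens inside Azure Policy's deny effect at deployment time; this Python sketch only models the decision logic, and the tag names come from the example in the text.

```python
# Required tags from the example policy in the text.
REQUIRED_TAGS = {"CostCenter", "DataClassification", "Owner"}

def check_deployment(resource_tags: dict) -> tuple[bool, list]:
    """Mimic a deny-effect policy: block when any required tag is missing.

    Returns (allowed, missing_tags) so callers can surface a clear
    remediation message, like the portal error the text describes.
    """
    missing = sorted(REQUIRED_TAGS - resource_tags.keys())
    return (not missing, missing)

# A deployment missing DataClassification is blocked with the gap named.
ok, missing = check_deployment({"CostCenter": "AI-42", "Owner": "dana"})
assert not ok and missing == ["DataClassification"]

# A fully tagged resource passes.
ok, missing = check_deployment(
    {"CostCenter": "AI-42", "DataClassification": "Internal", "Owner": "dana"})
assert ok and missing == []
```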
28+
29+
## Practical organizational patterns

Consider what happens when your organization scales from three pilot projects to 20 production applications across five business units. Creating separate hubs for Production, Staging, and Development environments provides clear promotion paths with appropriate security boundaries. Development hubs use public endpoints and relaxed policies to maximize experimentation velocity, while Production hubs enforce private endpoints, require multifactor authentication for administrative access, and log every API call to Azure Monitor. Projects move through this pipeline: data scientists build prototypes in Development, DevOps engineers validate configurations in Staging, and only approved applications graduate to Production.

Now that you understand how hubs, projects, and connected resources interact, you're ready to configure these components in your Azure subscription. The next unit walks through the specific settings and decisions you'll make when provisioning hubs and organizing projects to support your organization's AI initiatives.

:::image type="content" source="../media/foundry-three-tier-architecture.png" alt-text="Diagram of the Microsoft Foundry three-tier architecture with the Portal managing multiple hubs.":::

*Microsoft Foundry three-tier architecture showing Portal managing multiple hubs, each hub containing projects with shared connections to Azure AI Search, Azure OpenAI Service, and Azure Storage at the hub level*
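
The promotion pipeline above can be summarized as per-environment baselines plus a rule that projects only move one stage at a time toward stricter settings. The baseline values below restate the text's examples; the dictionary keys themselves are hypothetical names chosen for this sketch.

```python
# Per-environment hub baselines, restating the settings from the text.
ENVIRONMENT_BASELINES = {
    "Development": {"endpoint": "public",  "mfa_admin": False, "log_api_calls": False},
    "Staging":     {"endpoint": "private", "mfa_admin": True,  "log_api_calls": True},
    "Production":  {"endpoint": "private", "mfa_admin": True,  "log_api_calls": True},
}

def can_promote(source: str, target: str) -> bool:
    """A project may only advance to the next, stricter environment."""
    order = ["Development", "Staging", "Production"]
    return order.index(target) == order.index(source) + 1

assert can_promote("Development", "Staging")
assert can_promote("Staging", "Production")
assert not can_promote("Development", "Production")  # no skipping Staging
```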
## More resources

- [Microsoft Foundry documentation](/azure/ai-studio/) - Official architecture overview and service limits
- [Managed identities for Azure resources](/azure/active-directory/managed-identities-azure-resources/overview) - Authentication and credential rotation patterns
## Hub provisioning decisions

Before you create your first hub, consider three foundational decisions that determine governance boundaries for all child projects. Start with region selection: choose the Azure region closest to your users to minimize latency and closest to your data sources to reduce egress costs. If your organization operates globally, you'll likely provision multiple hubs—one per region—to satisfy data residency regulations. The second decision is resource group strategy: organizations typically create resource groups for Production, Staging, and Development environments, making it straightforward to calculate costs per environment and delete entire environments during testing.

The third decision—networking configuration—carries the most security implications. Public access enables rapid prototyping: developers connect from laptops without VPN, and automated CI/CD pipelines deploy models from GitHub Actions. However, production workloads handling customer data usually demand private endpoints that route all traffic through your virtual network. When you select private endpoint mode during hub creation, Azure provisions network interfaces in your specified subnet, blocks public internet access, and requires on-premises users to connect through ExpressRoute or VPN. This configuration satisfies compliance frameworks requiring network isolation but adds complexity: you must configure DNS resolution for privatelink.api.azureml.ms hostnames and ensure your corporate firewall allows traffic to Azure services.

At the same time, Azure creates a system-assigned managed identity for your hub. This identity eliminates credential management: instead of storing connection strings or API keys in project code, the hub's managed identity authenticates to connected Azure services like Azure AI Search and Azure Storage. Building on this foundation, the identity receives the Contributor role on the hub's resource group by default, enabling it to provision compute resources and manage child projects. Your security team benefits from centralized identity lifecycle management—rotate credentials by regenerating the managed identity, and Azure Active Directory automatically propagates the new credentials to all connected services within minutes.
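
The three provisioning decisions (region, resource group per environment, networking mode) can be captured as a pre-flight validator. This is a hedged sketch: the allowed-region list is invented for illustration, and the private-DNS obligation simply restates the requirement from the networking paragraph.

```python
# Illustrative allow-list; substitute your organization's approved regions.
ALLOWED_REGIONS = {"eastus", "westeurope", "southeastasia"}

def validate_hub_config(cfg: dict) -> list:
    """Return a list of problems; an empty list means the config is ready."""
    problems = []
    if cfg["region"] not in ALLOWED_REGIONS:
        problems.append(f"region {cfg['region']!r} not approved")
    if cfg["environment"] not in {"Production", "Staging", "Development"}:
        problems.append("environment must map to a known resource group")
    # Private networking implies the DNS work called out in the text.
    if cfg["networking"] == "private" and not cfg.get("private_dns_configured"):
        problems.append("private endpoints need DNS for privatelink hostnames")
    return problems

# A compliant production hub passes cleanly.
assert validate_hub_config(
    {"region": "eastus", "environment": "Production",
     "networking": "private", "private_dns_configured": True}) == []

# An unapproved region plus missing DNS setup yields two findings.
issues = validate_hub_config(
    {"region": "brazilsouth", "environment": "Production", "networking": "private"})
assert len(issues) == 2
```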
## Role-based access control configuration

Now that your hub exists with networking and identity configured, assign Azure RBAC roles to control who can create projects, manage connections, and view activity logs. The Cognitive Services Contributor role grants AI engineers permission to create projects, deploy models, and configure project-level connections without giving them access to modify hub networking or delete the hub itself. Assign this role to your AI development team's Azure AD group, and every team member inherits the same permissions automatically as they join or leave the group.

For example, suppose your compliance team needs read-only visibility into AI infrastructure without the ability to modify resources. Assign the Reader role at the hub scope, and compliance officers can navigate to the hub in Azure portal, review connected resources, examine activity logs showing all administrative actions, and export deployment metadata—all without permission to change configurations. This separation of duties becomes critical during SOC 2 or ISO 27001 audits when you must demonstrate that monitoring personnel can't alter the systems they audit.

With roles assigned, consider custom roles for specialized scenarios. Your finance team might need a custom role that permits viewing cost analysis data and resource tags but denies access to view actual AI models or datasets. Azure RBAC's fine-grained permissions enable you to create a "Cost Analyst" role that includes actions like Microsoft.CostManagement/reports/read and Microsoft.Resources/tags/read while excluding Microsoft.MachineLearningServices/workspaces/data/read. This becomes especially important when contractual obligations or industry regulations require strict segregation between financial auditors and technical personnel handling sensitive data.
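
The "Cost Analyst" custom role can be sketched with Azure RBAC's actions/notActions structure. The operation strings come from the paragraph above; the evaluation function is a simplified stand-in for Azure's real authorization engine, which also considers scopes and wildcards.

```python
import fnmatch

# Hypothetical custom role mirroring the example in the text.
COST_ANALYST = {
    "actions": [
        "Microsoft.CostManagement/reports/read",
        "Microsoft.Resources/tags/read",
    ],
    # notActions carves out exclusions; it would also block data/read
    # even if a broader wildcard action were added later.
    "notActions": [
        "Microsoft.MachineLearningServices/workspaces/data/read",
    ],
}

def is_allowed(role: dict, operation: str) -> bool:
    """Allowed if matched by actions and not excluded by notActions."""
    allowed = any(fnmatch.fnmatch(operation, a) for a in role["actions"])
    denied = any(fnmatch.fnmatch(operation, n) for n in role["notActions"])
    return allowed and not denied

assert is_allowed(COST_ANALYST, "Microsoft.CostManagement/reports/read")
assert not is_allowed(COST_ANALYST, "Microsoft.MachineLearningServices/workspaces/data/read")
```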
## Project creation and organization patterns

Building on your hub's security foundation, organize projects to reflect how teams actually work. Create one project per AI application or use case: the Customer Support Bot project contains all code, training data, and deployed models for your chatbot initiative, while the Fraud Detection project houses the separate machine learning pipeline analyzing transaction patterns. This one-to-one mapping simplifies ownership—each project has a designated product owner who approves changes—and makes billing transparent through Azure tags.

When you create a project, it automatically inherits the parent hub's network configuration, managed identity, and policy assignments. The Support Bot project created in your Production Hub with private endpoints can't accidentally expose data through public internet access—the inheritance model enforces that security boundary. However, projects maintain complete isolation for code repositories, datasets, model registries, and deployment endpoints. Two projects in the same hub can use different machine learning frameworks, deploy models to different compute clusters, and manage separate CI/CD pipelines without interfering with each other.

Consider organizational patterns as your AI initiatives scale. Tag projects with metadata that supports cost allocation and lifecycle management: assign CostCenter tags matching your accounting system's department codes, Environment tags (Production, Staging, Development) that drive automated cleanup policies, and DataClassification tags (Public, Internal, Confidential) that trigger security scanning. With these tags in place, finance generates monthly chargeback reports showing each department's AI spending, operations teams use Azure Automation to delete Development-tagged projects older than 90 days, and security teams configure Azure Defender to alert when Confidential-tagged projects attempt to connect to public endpoints.

:::image type="content" source="../media/microsoft-foundry-hub-creation-wizard.png" alt-text="Diagram showing the Microsoft Foundry hub creation wizard.":::
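
The tag-driven chargeback and cleanup workflows described above reduce to simple aggregation over project metadata. Project names, cost-center codes, and spend figures below are entirely made up for illustration.

```python
from collections import defaultdict

# Hypothetical tagged projects with illustrative monthly spend.
projects = [
    {"name": "support-bot",     "tags": {"CostCenter": "CS-100",  "Environment": "Production"},  "spend": 4200.0},
    {"name": "fraud-detection", "tags": {"CostCenter": "FIN-200", "Environment": "Production"},  "spend": 8800.0},
    {"name": "demo-sandbox",    "tags": {"CostCenter": "CS-100",  "Environment": "Development"}, "spend": 650.0},
]

def chargeback_by_cost_center(items):
    """Roll monthly spend up to the CostCenter tag for finance reports."""
    totals = defaultdict(float)
    for p in items:
        totals[p["tags"]["CostCenter"]] += p["spend"]
    return dict(totals)

def stale_dev_projects(items, ages_days, limit=90):
    """Candidates for the automated Development cleanup the text describes."""
    return [p["name"] for p in items
            if p["tags"]["Environment"] == "Development"
            and ages_days.get(p["name"], 0) > limit]

report = chargeback_by_cost_center(projects)
assert report == {"CS-100": 4850.0, "FIN-200": 8800.0}
assert stale_dev_projects(projects, {"demo-sandbox": 120}) == ["demo-sandbox"]
```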
## Governance at scale through Azure Policy

Now that you understand project organization, examine how Azure Policy enforces standards across all projects within a hub. Assign policies at the hub's resource group scope, and they automatically apply to every project created within that hub. A common baseline includes policies requiring specific tags on all resources, restricting deployment to approved Azure regions, and enforcing encryption at rest with customer-managed keys. When a developer attempts to create a storage account in a project without the required CostCenter tag, Azure blocks the deployment immediately and displays a policy violation message with remediation instructions.

This policy inheritance becomes powerful when combined with Azure Policy's audit and remediation capabilities. Enable the "Allowed virtual machine SKUs" policy in Deny mode at your Production Hub, restricting projects to general-purpose and memory-optimized VM families (D-series, E-series) that match your cost optimization strategy. If your data science team requests GPU-accelerated VMs for model training, they must submit a policy exemption request that your cloud governance team reviews and approves through a formal process. Without this guardrail, teams deploy expensive NC-series VMs that idle overnight, inflating your monthly compute bill by thousands of dollars.

Building on these controls, use Azure Policy to enforce security baselines recommended by Microsoft Defender for Cloud. Apply the "Azure AI services should use private link" policy to ensure all projects connect to Azure OpenAI Service through private endpoints rather than public internet. Configure the "Diagnostic logs should be enabled" policy to automatically create Log Analytics workspace connections for every new project, capturing audit trails that satisfy regulatory requirements. With this approach, your security team defines standards once through policy assignments, and Azure enforces them automatically across hundreds of projects without requiring manual configuration reviews.

:::image type="content" source="../media/hub-project-provision-workflow.png" alt-text="Diagram showing hub and project provisioning workflow showing administrator actions.":::

*Hub and project provisioning workflow showing administrator actions, Azure Resource Manager deployment, identity creation, RBAC assignment, policy enforcement, and project inheritance*
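
The SKU guardrail plus exemption workflow can be modeled as a small evaluator. This sketch only captures the decision logic: the real policy runs inside Azure Resource Manager, and the SKU names and exemption list here are illustrative.

```python
# Allowed families from the text: D-series and E-series VM SKUs.
ALLOWED_SKU_PREFIXES = ("Standard_D", "Standard_E")

# Projects whose exemption requests the governance team has approved.
EXEMPTIONS = {"fraud-detection"}

def evaluate_vm_request(project: str, sku: str) -> str:
    """Mimic an 'Allowed virtual machine SKUs' policy in Deny mode."""
    if sku.startswith(ALLOWED_SKU_PREFIXES):
        return "allow"
    if project in EXEMPTIONS:
        return "allow (exempt)"   # formal exemption bypasses the deny
    return "deny"

assert evaluate_vm_request("support-bot", "Standard_E8s_v5") == "allow"
assert evaluate_vm_request("support-bot", "Standard_NC24ads_A100_v4") == "deny"
assert evaluate_vm_request("fraud-detection", "Standard_NC24ads_A100_v4") == "allow (exempt)"
```

The design point is that exemptions are explicit and reviewable, so GPU capacity is granted deliberately rather than deployed ad hoc.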
## More resources

- [Hub network configuration options](/azure/ai-studio/how-to/configure-private-link) - Detailed guidance on private endpoint setup and DNS configuration
- [Azure Policy built-in definitions for AI services](/azure/governance/policy/samples/built-in-policies#cognitive-services) - Preconfigured policies for encryption, networking, and logging