Skip to content

Commit 293edfc

Browse files
committed
Fix broken links in security fundamentals articles
- ai-security-best-practices.md: Update Azure OpenAI links to AI Foundry paths, fix RBAC link, update compute isolation link, remove non-existent SFI link - operational-best-practices.md: Remove invalid .md extensions from Entra ID documentation links - zero-trust.md: Remove invalid .md extension from Entra ID link
1 parent e218f00 commit 293edfc

3 files changed

Lines changed: 6 additions & 7 deletions

File tree

articles/security/fundamentals/ai-security-best-practices.md

Lines changed: 4 additions & 5 deletions
Original file line numberDiff line numberDiff line change
@@ -45,7 +45,7 @@ Before you can secure AI workloads, you need visibility into what AI application
4545
Azure OpenAI Service provides REST API access to powerful language models. Securing these deployments is critical for protecting your data and preventing misuse.
4646

4747
**Best practice**: Use private endpoints for network isolation.
48-
**Detail**: Configure Azure OpenAI Service to use private endpoints, removing the public endpoint and restricting access to your virtual network. For more information, see [Configure Azure OpenAI Service with private endpoints](/azure/ai-services/openai/how-to/private-endpoints).
48+
**Detail**: Configure Azure OpenAI Service to use private endpoints, removing the public endpoint and restricting access to your virtual network. For more information, see [Network and access configuration for Azure OpenAI](/azure/ai-foundry/openai/how-to/on-your-data-configuration).
4949

5050
**Best practice**: Use managed identity for authentication.
5151
**Detail**: Configure applications to authenticate using Microsoft Entra managed identities instead of API keys, eliminating the need to manage and rotate secrets. For more information, see [Configure Azure OpenAI Service with Microsoft Entra ID authentication](/azure/ai-services/openai/how-to/managed-identity).
@@ -63,7 +63,7 @@ For more information, see [Content filtering](/azure/ai-services/openai/concepts
6363
**Detail**: Design system prompts that clearly define the model's role, include explicit instructions to reject malicious inputs, and instruct the model to prioritize system instructions over user inputs. Use spotlighting techniques to isolate untrusted data within prompts and integrate [Prompt Shields](/azure/ai-services/content-safety/concepts/jailbreak-detection) to detect jailbreak attempts.
6464

6565
**Best practice**: Monitor usage with diagnostic logs.
66-
**Detail**: Enable diagnostic logging to track API requests, token usage, content filtering results, and errors. Send logs to Azure Monitor for analysis and alerting. For more information, see [Monitor Azure OpenAI Service](/azure/ai-services/openai/how-to/monitor).
66+
**Detail**: Enable diagnostic logging to track API requests, token usage, content filtering results, and errors. Send logs to Azure Monitor for analysis and alerting. For more information, see [Monitor Azure OpenAI](/azure/ai-foundry/openai/how-to/monitor-openai).
6767

6868
## Secure Azure AI Foundry and Azure Machine Learning
6969

@@ -73,13 +73,13 @@ Azure AI Foundry and Azure Machine Learning provide platforms for building and d
7373
**Detail**: Create Azure AI Foundry hubs and Azure Machine Learning workspaces with managed virtual networks that provide private endpoints for dependent services and outbound traffic control. For more information, see [Managed network isolation for Azure AI Foundry](/azure/ai-studio/how-to/configure-managed-network) and [Configure a private endpoint for Azure Machine Learning](/azure/machine-learning/how-to-configure-private-link).
7474

7575
**Best practice**: Implement least-privilege access control.
76-
**Detail**: Configure RBAC using built-in roles and assign permissions at the project or workspace level. Use Microsoft Entra Agent ID for AI agent identity management, applying scoped, short-lived tokens for agent function access. For more information, see [Role-based access control in Azure AI Foundry](/azure/ai-studio/concepts/rbac-azure-ai-studio).
76+
**Detail**: Configure RBAC using built-in roles and assign permissions at the project or workspace level. Use Microsoft Entra Agent ID for AI agent identity management, applying scoped, short-lived tokens for agent function access. For more information, see [Role-based access control for Microsoft Foundry](/azure/ai-foundry/concepts/rbac-foundry).
7777

7878
**Best practice**: Deploy only approved AI models.
7979
**Detail**: Use Azure Machine Learning model registry to track model provenance, verification status, and approval history. Configure automated scanning to validate model integrity and test against adversarial inputs before deployment. Deploy the "[Preview]: Azure Machine Learning Deployments should only use approved Registry Models" Azure Policy to enforce governance. For more information, see [Model management and deployment](/azure/machine-learning/concept-model-management-and-deployment).
8080

8181
**Best practice**: Secure compute resources.
82-
**Detail**: Configure compute instances without public IPs, use managed identity authentication, enable user isolation for shared clusters, and encrypt disks with customer-managed keys. For more information, see [Compute isolation](/azure/machine-learning/concept-compute-isolation).
82+
**Detail**: Configure compute instances without public IPs, use managed identity authentication, enable user isolation for shared clusters, and encrypt disks with customer-managed keys. For more information, see [Secure an Azure Machine Learning training environment](/azure/machine-learning/how-to-secure-training-vnet).
8383

8484
## Implement AI-specific threat protection
8585

@@ -127,4 +127,3 @@ AI applications must comply with regulatory requirements and organizational poli
127127
- Learn about the [AI shared responsibility model](shared-responsibility-ai.md)
128128
- Review [Microsoft Cloud Security Benchmark v2 - Artificial Intelligence Security](/security/benchmark/azure/mcsb-v2-artificial-intelligence-security)
129129
- Explore [Security for AI](/security/security-for-ai/) for comprehensive AI security guidance
130-
- Learn about [Secure Foundations Initiative (SFI)](/security/zero-trust/sfi/security-pillars) security principles

articles/security/fundamentals/operational-best-practices.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -259,7 +259,7 @@ For more information, see [Create and manage policies to enforce compliance](../
259259
<a name='monitor-azure-ad-risk-reports'></a>
260260

261261
## Monitor Microsoft Entra risk reports
262-
The vast majority of security breaches take place when attackers gain access to an environment by stealing a users identity. Discovering compromised identities is no easy task. Microsoft Entra ID uses adaptive machine learning algorithms and heuristics to detect suspicious actions that are related to your user accounts. Each detected suspicious action is stored in a record called a [risk detection](/entra/id-protection/overview-identity-protection). Risk detections are recorded in Microsoft Entra security reports. For more information, read about the [users at risk security report](/entra/id-protection/overview-identity-protection) and the [risky sign-ins security report](/entra/id-protection/overview-identity-protection).
262+
The vast majority of security breaches take place when attackers gain access to an environment by stealing a user's identity. Discovering compromised identities is no easy task. Microsoft Entra ID uses adaptive machine learning algorithms and heuristics to detect suspicious actions that are related to your user accounts. Each detected suspicious action is stored in a record called a [risk detection](/entra/id-protection/overview-identity-protection). Risk detections are recorded in Microsoft Entra security reports. For more information, read about the [users at risk security report](/entra/id-protection/overview-identity-protection) and the [risky sign-ins security report](/entra/id-protection/overview-identity-protection).
263263

264264
## Next steps
265265
See [Incident response overview](incident-response-overview.md) for guidance on responding to security incidents in your Azure environment.

articles/security/fundamentals/zero-trust.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -114,7 +114,7 @@ Additional detailed guidance is available for specific domains:
114114

115115
## Application development and Zero Trust
116116

117-
Applications deployed on Azure must authenticate and authorize every request rather than relying on implicit trust from network location. Key principles include using Microsoft Entra ID for identity verification, requesting minimum permissions, protecting sensitive data, and using managed identities instead of stored credentials. For comprehensive guidance, see [Develop using Zero Trust principles](/security/zero-trust/develop/overview) and [Build Zero Trust-ready apps using Microsoft identity platform](/entra/identity-platform/zero-trust-for-developers.md).
117+
Applications deployed on Azure must authenticate and authorize every request rather than relying on implicit trust from network location. Key principles include using Microsoft Entra ID for identity verification, requesting minimum permissions, protecting sensitive data, and using managed identities instead of stored credentials. For comprehensive guidance, see [Develop using Zero Trust principles](/security/zero-trust/develop/overview) and [Build Zero Trust-ready apps using Microsoft identity platform](/entra/identity-platform/zero-trust-for-developers).
118118

119119
## Next steps
120120

0 commit comments

Comments
 (0)