You signed in with another tab or window. Reload to refresh your session.You signed out in another tab or window. Reload to refresh your session.You switched accounts on another tab or window. Reload to refresh your session.Dismiss alert
Copy file name to clipboardExpand all lines: articles/security/fundamentals/ai-security-best-practices.md
+4-5Lines changed: 4 additions & 5 deletions
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -45,7 +45,7 @@ Before you can secure AI workloads, you need visibility into what AI application
45
45
Azure OpenAI Service provides REST API access to powerful language models. Securing these deployments is critical for protecting your data and preventing misuse.
46
46
47
47
**Best practice**: Use private endpoints for network isolation.
48
-
**Detail**: Configure Azure OpenAI Service to use private endpoints, removing the public endpoint and restricting access to your virtual network. For more information, see [Configure Azure OpenAI Service with private endpoints](/azure/ai-services/openai/how-to/private-endpoints).
48
+
**Detail**: Configure Azure OpenAI Service to use private endpoints, removing the public endpoint and restricting access to your virtual network. For more information, see [Network and access configuration for Azure OpenAI](/azure/ai-foundry/openai/how-to/on-your-data-configuration).
49
49
50
50
**Best practice**: Use managed identity for authentication.
51
51
**Detail**: Configure applications to authenticate using Microsoft Entra managed identities instead of API keys, eliminating the need to manage and rotate secrets. For more information, see [Configure Azure OpenAI Service with Microsoft Entra ID authentication](/azure/ai-services/openai/how-to/managed-identity).
@@ -63,7 +63,7 @@ For more information, see [Content filtering](/azure/ai-services/openai/concepts
63
63
**Detail**: Design system prompts that clearly define the model's role, include explicit instructions to reject malicious inputs, and instruct the model to prioritize system instructions over user inputs. Use spotlighting techniques to isolate untrusted data within prompts and integrate [Prompt Shields](/azure/ai-services/content-safety/concepts/jailbreak-detection) to detect jailbreak attempts.
64
64
65
65
**Best practice**: Monitor usage with diagnostic logs.
66
-
**Detail**: Enable diagnostic logging to track API requests, token usage, content filtering results, and errors. Send logs to Azure Monitor for analysis and alerting. For more information, see [Monitor Azure OpenAI Service](/azure/ai-services/openai/how-to/monitor).
66
+
**Detail**: Enable diagnostic logging to track API requests, token usage, content filtering results, and errors. Send logs to Azure Monitor for analysis and alerting. For more information, see [Monitor Azure OpenAI](/azure/ai-foundry/openai/how-to/monitor-openai).
67
67
68
68
## Secure Azure AI Foundry and Azure Machine Learning
69
69
@@ -73,13 +73,13 @@ Azure AI Foundry and Azure Machine Learning provide platforms for building and d
73
73
**Detail**: Create Azure AI Foundry hubs and Azure Machine Learning workspaces with managed virtual networks that provide private endpoints for dependent services and outbound traffic control. For more information, see [Managed network isolation for Azure AI Foundry](/azure/ai-studio/how-to/configure-managed-network) and [Configure a private endpoint for Azure Machine Learning](/azure/machine-learning/how-to-configure-private-link).
**Detail**: Configure RBAC using built-in roles and assign permissions at the project or workspace level. Use Microsoft Entra Agent ID for AI agent identity management, applying scoped, short-lived tokens for agent function access. For more information, see [Role-based access control in Azure AI Foundry](/azure/ai-studio/concepts/rbac-azure-ai-studio).
76
+
**Detail**: Configure RBAC using built-in roles and assign permissions at the project or workspace level. Use Microsoft Entra Agent ID for AI agent identity management, applying scoped, short-lived tokens for agent function access. For more information, see [Role-based access control for Microsoft Foundry](/azure/ai-foundry/concepts/rbac-foundry).
77
77
78
78
**Best practice**: Deploy only approved AI models.
79
79
**Detail**: Use Azure Machine Learning model registry to track model provenance, verification status, and approval history. Configure automated scanning to validate model integrity and test against adversarial inputs before deployment. Deploy the "[Preview]: Azure Machine Learning Deployments should only use approved Registry Models" Azure Policy to enforce governance. For more information, see [Model management and deployment](/azure/machine-learning/concept-model-management-and-deployment).
80
80
81
81
**Best practice**: Secure compute resources.
82
-
**Detail**: Configure compute instances without public IPs, use managed identity authentication, enable user isolation for shared clusters, and encrypt disks with customer-managed keys. For more information, see [Compute isolation](/azure/machine-learning/concept-compute-isolation).
82
+
**Detail**: Configure compute instances without public IPs, use managed identity authentication, enable user isolation for shared clusters, and encrypt disks with customer-managed keys. For more information, see [Secure an Azure Machine Learning training environment](/azure/machine-learning/how-to-secure-training-vnet).
83
83
84
84
## Implement AI-specific threat protection
85
85
@@ -127,4 +127,3 @@ AI applications must comply with regulatory requirements and organizational poli
127
127
- Learn about the [AI shared responsibility model](shared-responsibility-ai.md)
Copy file name to clipboardExpand all lines: articles/security/fundamentals/operational-best-practices.md
+1-1Lines changed: 1 addition & 1 deletion
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -259,7 +259,7 @@ For more information, see [Create and manage policies to enforce compliance](../
259
259
<aname='monitor-azure-ad-risk-reports'></a>
260
260
261
261
## Monitor Microsoft Entra risk reports
262
-
The vast majority of security breaches take place when attackers gain access to an environment by stealing a user’s identity. Discovering compromised identities is no easy task. Microsoft Entra ID uses adaptive machine learning algorithms and heuristics to detect suspicious actions that are related to your user accounts. Each detected suspicious action is stored in a record called a [risk detection](/entra/id-protection/overview-identity-protection). Risk detections are recorded in Microsoft Entra security reports. For more information, read about the [users at risk security report](/entra/id-protection/overview-identity-protection) and the [risky sign-ins security report](/entra/id-protection/overview-identity-protection).
262
+
The vast majority of security breaches take place when attackers gain access to an environment by stealing a user's identity. Discovering compromised identities is no easy task. Microsoft Entra ID uses adaptive machine learning algorithms and heuristics to detect suspicious actions that are related to your user accounts. Each detected suspicious action is stored in a record called a [risk detection](/entra/id-protection/overview-identity-protection). Risk detections are recorded in Microsoft Entra security reports. For more information, read about the [users at risk security report](/entra/id-protection/overview-identity-protection) and the [risky sign-ins security report](/entra/id-protection/overview-identity-protection).
263
263
264
264
## Next steps
265
265
See [Incident response overview](incident-response-overview.md) for guidance on responding to security incidents in your Azure environment.
Copy file name to clipboardExpand all lines: articles/security/fundamentals/zero-trust.md
+1-1Lines changed: 1 addition & 1 deletion
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -114,7 +114,7 @@ Additional detailed guidance is available for specific domains:
114
114
115
115
## Application development and Zero Trust
116
116
117
-
Applications deployed on Azure must authenticate and authorize every request rather than relying on implicit trust from network location. Key principles include using Microsoft Entra ID for identity verification, requesting minimum permissions, protecting sensitive data, and using managed identities instead of stored credentials. For comprehensive guidance, see [Develop using Zero Trust principles](/security/zero-trust/develop/overview) and [Build Zero Trust-ready apps using Microsoft identity platform](/entra/identity-platform/zero-trust-for-developers.md).
117
+
Applications deployed on Azure must authenticate and authorize every request rather than relying on implicit trust from network location. Key principles include using Microsoft Entra ID for identity verification, requesting minimum permissions, protecting sensitive data, and using managed identities instead of stored credentials. For comprehensive guidance, see [Develop using Zero Trust principles](/security/zero-trust/develop/overview) and [Build Zero Trust-ready apps using Microsoft identity platform](/entra/identity-platform/zero-trust-for-developers).
0 commit comments