description: Learn about AI model security in Microsoft Defender for Cloud.
ms.topic: concept-article
ms.date: 03/25/2026
ms.author: elkrieger
zone_pivot_groups: defender-portal-experience
---

# AI model security
As organizations increasingly use artificial intelligence (AI) models to drive automation, insights, and intelligent decision-making, security teams need visibility and control to assess the safety and compliance of AI models entering their environments. These models often have broad access to data and infrastructure, so without that visibility it becomes difficult to enforce internal security standards.
AI model security in Microsoft Defender for Cloud supports AI model scanning, which proactively detects unsafe or malicious artifacts and continuously monitors models for risk throughout the AI lifecycle.
AI model security automatically scans AI models for security risks such as embedded malware, unsafe operators, and exposed secrets before those models reach production. Integrated directly with Azure Machine Learning and CI/CD pipelines, the service surfaces real-time findings and actionable remediation guidance, so teams can stop risky models early in the development process.
By using AI model security, security teams can scan custom AI models uploaded to Azure Machine Learning workspaces and registries to identify threats like embedded malware, unsafe operators, and exposed secrets. Defender for Cloud presents the results, giving teams visibility into security findings along with severity ratings, remediation guidance, and relevant model metadata to support effective triage and prioritization. Developers can also trigger model scans during build or release stages by using CLI tools integrated with Azure DevOps or GitHub pipelines, enabling static scanning and early risk detection before models reach production.
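For example, a build pipeline can run the scan as a step and publish the results alongside the build. The following GitHub Actions workflow is a hypothetical sketch: the workflow and step names are illustrative, and it assumes the Defender CLI is already available on the runner (how the CLI is installed depends on how it's distributed in your organization).

```yaml
name: model-scan
on: [push]

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Assumes the Defender CLI is already available on the runner,
      # or installed by a preceding step.
      - name: Scan model artifacts
        run: defender scan model ./models/my-model.pkl --modelscanner-Output results.sarif

      # Keep the SARIF results with the build for triage.
      - name: Upload scan results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: model-scan-results
          path: results.sarif
```

Whether a finding fails the build depends on the CLI's exit-code behavior, so gate your release stage on the scan step or on the exported SARIF results accordingly.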
## Prerequisites
- You must have an Azure subscription that contains AI models registered in [Azure Machine Learning](/azure/machine-learning/quickstart-create-resources) registries or workspaces.
> [!NOTE]
> Workspaces and registries that use a private link aren't supported.
- [Enable the Defender for Cloud Security Posture Management plan](tutorial-enable-cspm-plan.md).
- You must [enable threat protection for AI services](ai-onboarding.md) and the [AI model security](ai-onboarding.md#enable-ai-model-security) component of the plan.
- Required permissions: To enable the plan, you need **Owner** or **Contributor** level permissions on the Azure Machine Learning resources.
- File size limit: 10 GB. Model files larger than 10 GB can't be scanned.
- The scan occurs once a week.
::: zone pivot="azure-portal"
## Locate all AI models in your environment
1. Sign in to the [Azure portal](https://portal.azure.com/).
1. Search for and select **Microsoft Defender for Cloud**.
1. Select **Cloud Security Explorer**.
1. Select **AI & MLs** > **AI models**.
:::image type="content" source="media/models/ai-models.png" alt-text="Screenshot that shows where to select AI models from the drop-down list in the Cloud Security Explorer." lightbox="media/models/ai-models.png":::
1. Select **Done**.
1. Select **+**.
1. Select **Metadata** > **AI Model Metadata**.
:::image type="content" source="media/models/ai-models-metadata.png" alt-text="Screenshot that shows how to select the 'AI Models Metadata' option." lightbox="media/models/ai-models-metadata.png":::
1. Select **Search**.
The Cloud Security Explorer displays all AI models in your environment. You can select **view details** to see more information about each selected model.
## Locate AI models with security findings
Use the Cloud Security Explorer to find AI models that have active security findings.
1. Follow steps 1-7 from the [Locate all AI models in your environment](#locate-all-ai-models-in-your-environment) section.
:::image type="content" source="media/models/all-recommendations.png" alt-text="Screenshot that shows how to select the 'All recommendations' option." lightbox="media/models/all-recommendations.png":::
1. Select **Search**.
The Cloud Security Explorer displays all AI models in your environment that have active security findings. Select **view details** to see more information about each model and the associated findings.
::: zone-end
::: zone pivot="defender-portal"
## Locate all AI models in your environment
The Defender portal **Assets** page provides a comprehensive view of all AI models in your environment.
1. Sign in to the [Microsoft Defender portal](https://security.microsoft.com/).
1. Go to **Assets** > **Cloud** > **AI** > **AI models**.
:::image type="content" source="media/models/defender-ai-models.png" alt-text="Screenshot that shows how to navigate to the Defender portal's Assets page with all of the AI models presented." lightbox="media/models/defender-ai-models.png":::
1. Select an AI model with recommendations.
:::image type="content" source="media/models/ai-models-recommendation.png" alt-text="Screenshot that shows AI models that have at least one recommendation affecting them.":::
1. Select **Open asset page**.
:::image type="content" source="media/models/asset-page.png" alt-text="Screenshot that shows where the 'Open asset page' button is located." lightbox="media/models/asset-page.png":::
1. Select **Security recommendations** > the relevant recommendation.
1. Review and remediate the security finding as needed.
You can also manage the recommendation in the Azure portal.
::: zone-end
## Next step
> [!div class="nextstepaction"]
> [Build queries with Cloud Security Explorer](how-to-manage-cloud-security-explorer.md)
**articles/defender-for-cloud/ai-onboarding.md** (42 additions, 5 deletions)
title: Enable threat protection for AI services
description: Learn how to enable threat protection for AI services on your Azure subscription for Microsoft Defender for Cloud.
ms.topic: install-set-up-deploy
ms.date: 04/01/2026
ms.author: elkrieger
author: Elazark
---
- [Enable Defender for Cloud](get-started.md#enable-defender-for-cloud-on-your-azure-subscription) on your Azure subscription.

- Required permissions: To enable the plan, you need **Owner** or **Contributor** level permissions.

## Enable threat protection for AI services

1. Sign in to the [Azure portal](https://portal.azure.com).
:::image type="content" source="media/ai-onboarding/enable-ai-workloads-plan.png" alt-text="Screenshot that shows you how to toggle threat protection for AI services to on." lightbox="media/ai-onboarding/enable-ai-workloads-plan.png":::

## Enable the components of the plan

With the AI services threat protection plan enabled, you can control whether the individual components of the plan are enabled. These include:

- **[Suspicious prompt evidence](#enable-suspicious-prompt-evidence)**: Receive alerts that include suspicious portions of user prompts and model responses, with sensitive data automatically redacted, to help you analyze AI-related security alerts. These prompt snippets appear in the Defender portal as part of each alert's evidence.

- **[Data security for AI interactions](#enable-data-security-for-microsoft-foundry-with-microsoft-purview)**: Allows Microsoft Purview to access and analyze prompts, responses, and related metadata to provide data security and compliance capabilities such as sensitive information type (SIT) classification, auditing, insider risk, communication compliance, and eDiscovery. This is a paid Purview feature and isn't included in the Defender for AI Services plan.

- **[AI model security](#enable-ai-model-security)**: AI model scanning gives you a clear, unified view of all your models registered in Azure Machine Learning registries. It helps teams stay ahead of security risks by automatically checking for issues like serialization vulnerabilities, malware, and missing scans. By surfacing misconfigurations and integrating with Defender for Cloud and developer workflows, it helps keep your AI models protected and ready for production.

### Enable suspicious prompt evidence

With the AI services threat protection plan enabled, you can control whether alerts include suspicious segments taken directly from your users' prompts or from the model responses of your AI applications or agents. Enabling suspicious prompt evidence helps you triage and classify alerts and understand your users' intentions.
1. Select **Continue**.

### Enable Data Security for Microsoft Foundry with Microsoft Purview

> [!NOTE]
> This feature requires a Microsoft Purview license, which isn't included with Microsoft Defender for Cloud's Defender for AI Services plan.
> [!NOTE]
> Data Security Policies for Microsoft Foundry interactions are supported only for API calls that use Microsoft Entra ID authentication with a user-context token, or for API calls that explicitly include user context. To learn more, see [Gain end-user context for Azure AI API calls](gain-end-user-context-ai.md). For all other authentication scenarios, user interactions captured in Purview show up only in Purview Audit and DSPM for AI Activity Explorer.

1. Sign in to the [Azure portal](https://portal.azure.com/).

1. Search for and select **Microsoft Defender for Cloud**.
1. Select **Continue**.

#### Troubleshooting

If you don't see user interactions for Entra ID authenticated users in Microsoft Purview Activity Explorer after turning on the toggle, follow these steps to troubleshoot:

Run the following commands in Azure PowerShell:
AI model security in Defender for Cloud provides organizations with proactive protection for machine learning (ML) models. The service scans models for security risks, such as embedded malware, unsafe operators, and exposed secrets, before models reach production.

By integrating directly with Azure Machine Learning workspaces, registries, and continuous integration and continuous delivery (CI/CD) pipelines, AI model security gives security teams visibility into the safety and compliance status of the AI models in their environments.

1. Sign in to the [Azure portal](https://portal.azure.com/).

1. Search for and select **Microsoft Defender for Cloud**.

1. In the Defender for Cloud menu, select **Environment settings**.

1. Select the relevant Azure subscription.

1. Locate AI services and select **Settings**.

1. Toggle AI model security to **On**.

:::image type="content" source="media/ai-onboarding/model-security.png" alt-text="Screenshot that shows where the toggle for AI model security is located." lightbox="media/ai-onboarding/model-security.png":::

1. Select **Continue**.

Learn more about [AI model security](ai-model-security.md).

## Related content

- [Add user and application context to AI alerts](gain-end-user-context-ai.md)
**articles/defender-for-cloud/alerts-ai-workloads.md** (9 additions, 1 deletion)
title: Alerts for AI services
description: This article lists the security alerts for AI services visible in Microsoft Defender for Cloud.
ms.topic: reference
ms.custom: linux-related-content
ms.date: 03/25/2026
ai-usage: ai-assisted
ms.author: elkrieger
author: Elazark
**Severity:** Low

### Malicious content detected in uploaded AI model

(Ai.AIModelScan_MalwareDetected)

**Description:** A user-uploaded machine learning model was scanned and found to contain malware. The detection indicates that the file might execute malicious code if loaded, posing a threat to account integrity, data confidentiality, and the compute environment.

**Severity:** High

## Next steps

- [Security alerts in Microsoft Defender for Cloud](alerts-overview.md)
Use the `defender scan model` command to scan AI models for security risks including malware, unsafe operators, and exposed secrets. This command supports models stored locally or in cloud registries such as Hugging Face, and common formats including Pickle (`.pkl`), ONNX (`.onnx`), TorchScript (`.pt`), TensorFlow, and SafeTensors.
### Usage
```bash
defender scan model <target> [--modelscanner-Output <path>]
```

| Parameter | Required | Type | Description |
|--|--|--|--|
| `<target>` | Yes | String | The path to a local model file or directory, or a Hugging Face model URL (for example, `./models/my-model.pkl`, `https://huggingface.co/org/model`). |
| `--modelscanner-Output` | No | String | Output path for SARIF scan results. |
### Supported model formats
- Pickle (`.pkl`)
109
+
- ONNX (`.onnx`)
110
+
- TorchScript (`.pt`)
111
+
- TensorFlow (`.tf`, `.pb`)
- SafeTensors (`.safetensors`)
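Pickle files in particular warrant scanning because unpickling can execute arbitrary code: the format lets an object define how it's reconstructed, and a malicious model file can abuse that hook. A minimal, benign illustration of the mechanism (the payload here just calls `print`):

```python
import pickle

class Payload:
    # __reduce__ tells pickle how to rebuild the object on load.
    # A malicious model file can return any callable here -- for
    # example os.system -- which then runs during deserialization.
    def __reduce__(self):
        return (print, ("executed during unpickling",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # prints "executed during unpickling"
```

This is why scanners flag unsafe operators in serialized models, and why tensor-only formats such as SafeTensors are safer to share.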
### Examples
#### Scan a local model
```bash
defender scan model ./models/my-model.pkl
```
#### Scan a Hugging Face model
```bash
defender scan model "https://huggingface.co/org/model-name"
```
#### Scan and export SARIF results
```bash
defender scan model ./models/my-model.onnx --modelscanner-Output results.sarif
```
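Because SARIF is a standard JSON format, exported results can be post-processed with a short script, for example to gate a release on the presence of findings. The sketch below relies only on the standard SARIF layout (`runs[].results[]`); the exact rule IDs and message fields your scan emits are not documented here, so treat them as assumptions.

```python
import json

def summarize_sarif(path):
    """Return (rule_id, message) pairs for each finding in a SARIF log."""
    with open(path) as f:
        log = json.load(f)
    findings = []
    # SARIF logs contain one or more runs, each with a list of results.
    for run in log.get("runs", []):
        for result in run.get("results", []):
            findings.append((result.get("ruleId", "unknown"),
                             result.get("message", {}).get("text", "")))
    return findings

# Example: fail a pipeline stage when any findings exist.
# if summarize_sarif("results.sarif"):
#     raise SystemExit("model scan reported findings")
```

The same summary can feed a build log or a ticketing workflow, so triage doesn't require opening the raw SARIF file.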