
Commit ac05afc

Merge pull request #2680 from MicrosoftDocs/main
Auto Publish – main to live - 2026-03-30 17:10 UTC
2 parents 11b2ed4 + 6fca408 commit ac05afc

32 files changed

Lines changed: 287 additions & 59 deletions

articles/defender-for-cloud/TOC.yml

Lines changed: 3 additions & 0 deletions
@@ -482,6 +482,9 @@
 - name: Discover generative AI workloads
   displayName: AI, workloads, models, applications, apps, AI BOM
   href: identify-ai-workload-model.md
+- name: Discover AI models
+  displayName: AI, models, generative, applications, apps
+  href: ai-model-security.md
 - name: Explore risks to pre-deployment generative AI artifacts
   displayName: AI, risks, generative, applications, apps
   href: explore-ai-risk.md
articles/defender-for-cloud/ai-model-security.md

Lines changed: 114 additions & 0 deletions
@@ -0,0 +1,114 @@
---
title: Discover AI models
description: Learn about AI model security in Microsoft Defender for Cloud.
ms.topic: concept-article
ms.date: 03/25/2026
ms.author: elkrieger
zone_pivot_groups: defender-portal-experience
---

# AI model security

As organizations increasingly use artificial intelligence (AI) models to drive automation, insights, and intelligent decision-making, security teams need visibility and control to assess the safety and compliance of AI models entering their environments. These models often have broad access to data and infrastructure. Without that visibility and control, it becomes increasingly difficult to enforce internal standards.

The Defender for AI Services plan in Microsoft Defender for Cloud supports AI model scanning, which proactively detects unsafe or malicious artifacts and continuously monitors models for risk throughout the AI lifecycle.

AI model security automatically scans AI models for security risks such as embedded malware, unsafe operators, and exposed secrets before those models reach production. Integrated directly with Azure Machine Learning and CI/CD pipelines, the service surfaces real-time findings and actionable remediation guidance so that teams can stop risky models early in the development process.

By using AI model security, security teams can scan custom AI models uploaded to Azure Machine Learning workspaces and registries to identify threats like embedded malware, unsafe operators, and exposed secrets. Defender for Cloud presents the results, giving teams visibility into security findings along with severity ratings, remediation guidance, and relevant model metadata to support effective triage and prioritization. Developers can also trigger model scans during build or release stages by using CLI tools integrated with Azure DevOps or GitHub pipelines, enabling static scanning and early risk detection before models reach production.

## Prerequisites

- You must have an Azure subscription that contains AI models registered in [Azure Machine Learning](/azure/machine-learning/quickstart-create-resources) registries or workspaces.

  > [!NOTE]
  > Workspaces and registries that use a private link aren't supported.

- [Enable the Defender Cloud Security Posture Management (CSPM) plan](tutorial-enable-cspm-plan.md).

- You must [enable threat protection for AI services](ai-onboarding.md) and the [AI model security](ai-onboarding.md#enable-ai-model-security) component of the plan.

- Required permissions: To enable the plan, you need **Owner** or **Contributor** permissions on the Azure Machine Learning resources.

- Supported model file formats: Pickle (`.pkl`), HDF5 (`.h5`), TorchScript (`.pt`), ONNX (`.onnx`), SafeTensors (`.safetensors`), TensorFlow SavedModel / TFLite (FlatBuffers), NumPy (`.npy`), Arrow, MsgPack, dill, joblib, PMML, JSON, POJO, MOJO, GGUF.

- File size limit: 10 GB. Model files larger than 10 GB can't be scanned.

- Scans run once a week.
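Several of the supported formats, most notably Pickle, can execute arbitrary code when a model file is deserialized, which is a key reason to scan models before deployment. The following minimal Python sketch (illustrative only, not part of Defender for Cloud) shows how a pickled object can smuggle in a callable that runs automatically on load:

```python
import pickle

class RiggedModel:
    # __reduce__ tells pickle which callable to invoke during load.
    # A harmless callable (str.upper) is used here; a real attack would
    # substitute something like os.system with an attacker-chosen command.
    def __reduce__(self):
        return (str.upper, ("payload ran on load",))

payload = pickle.dumps(RiggedModel())

# Merely loading the "model" executes the embedded callable.
result = pickle.loads(payload)
print(result)  # PAYLOAD RAN ON LOAD
```

This load-time code execution is why formats such as SafeTensors, which store only tensor data, are generally considered safer than Pickle-based checkpoints.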
::: zone pivot="azure-portal"

## Locate all AI models in your environment

1. Sign in to the [Azure portal](https://portal.azure.com/).

1. Search for and select **Microsoft Defender for Cloud**.

1. Select **Cloud Security Explorer**.

1. Select **AI & ML** > **AI models**.

    :::image type="content" source="media/models/ai-models.png" alt-text="Screenshot that shows where to select AI models from the drop-down list in the Cloud Security Explorer." lightbox="media/models/ai-models.png":::

1. Select **Done**.

1. Select **+**.

1. Select **Metadata** > **AI Model Metadata**.

    :::image type="content" source="media/models/ai-models-metadata.png" alt-text="Screenshot that shows how to select the 'AI Models Metadata' option." lightbox="media/models/ai-models-metadata.png":::

1. Select **Search**.

The Cloud Security Explorer displays all AI models in your environment. Select **View details** to see more information about each model.

## Locate AI models with security findings

Use the Cloud Security Explorer to find AI models that have active security findings.

1. Follow steps 1-7 from the [Locate all AI models in your environment](#locate-all-ai-models-in-your-environment) section.

1. Select **+**.

1. Select **Recommendations** > **All recommendations**.

    :::image type="content" source="media/models/all-recommendations.png" alt-text="Screenshot that shows how to select the 'All recommendations' option." lightbox="media/models/all-recommendations.png":::

1. Select **Search**.

The Cloud Security Explorer displays all AI models in your environment that have active security findings. Select **View details** to see more information about each model and the associated findings.

::: zone-end

::: zone pivot="defender-portal"

## Locate all AI models in your environment

The Defender portal **Assets** page provides a comprehensive view of all AI models in your environment.

1. Sign in to the [Microsoft Defender portal](https://security.microsoft.com/).

1. Go to **Assets** > **Cloud** > **AI** > **AI models**.

    :::image type="content" source="media/models/defender-ai-models.png" alt-text="Screenshot that shows how to navigate to the Defender portal's Assets page with all of the AI models presented." lightbox="media/models/defender-ai-models.png":::

1. Select an AI model with recommendations.

    :::image type="content" source="media/models/ai-models-recommendation.png" alt-text="Screenshot that shows AI models that have at least one recommendation affecting them.":::

1. Select **Open asset page**.

    :::image type="content" source="media/models/asset-page.png" alt-text="Screenshot that shows where the 'Open asset page' button is located." lightbox="media/models/asset-page.png":::

1. Select **Security recommendations**, and then select the relevant recommendation.

1. Review and remediate the security finding as needed.

You can also manage the recommendation in the Azure portal.

::: zone-end

## Next step

> [!div class="nextstepaction"]
> [Build queries with cloud security explorer](how-to-manage-cloud-security-explorer.md)

articles/defender-for-cloud/ai-onboarding.md

Lines changed: 42 additions & 5 deletions
@@ -2,7 +2,7 @@
 title: Enable threat protection for AI services
 description: Learn how to enable threat protection for AI services on your Azure subscription for Microsoft Defender for Cloud.
 ms.topic: install-set-up-deploy
-ms.date: 05/20/2025
+ms.date: 04/01/2026
 ms.author: elkrieger
 author: Elazark
 ---
@@ -19,6 +19,8 @@ Threat protection for AI services in Microsoft Defender for Cloud protects Micro

- [Enable Defender for Cloud](get-started.md#enable-defender-for-cloud-on-your-azure-subscription) on your Azure subscription.

+- Required permissions: To enable the plan, you need **Owner** or **Contributor** level permissions.

## Enable threat protection for AI services

1. Sign in to the [Azure portal](https://portal.azure.com).
@@ -33,7 +35,17 @@ Threat protection for AI services in Microsoft Defender for Cloud protects Micro

:::image type="content" source="media/ai-onboarding/enable-ai-workloads-plan.png" alt-text="Screenshot that shows you how to toggle threat protection for AI services to on." lightbox="media/ai-onboarding/enable-ai-workloads-plan.png":::

-## Enable user prompt evidence
+## Enable the components of the plan

With the AI services threat protection plan enabled, you can control whether the different components of the plan are enabled. These components include:

- **[Suspicious prompt evidence](#enable-suspicious-prompt-evidence)**: Receive alerts that include suspicious portions of user prompts and model responses to help analyze AI-related security alerts, with sensitive data automatically redacted. These prompt snippets appear in the Defender portal as part of each alert's evidence.

- **[Data security for AI interactions](#enable-data-security-for-microsoft-foundry-with-microsoft-purview)**: Allow Microsoft Purview to access and analyze prompts, responses, and related metadata to provide data security and compliance capabilities such as sensitive information type (SIT) classification, auditing, insider risk, communication compliance, and eDiscovery. This is a paid Purview feature and isn't included in the Defender for AI Services plan.

- **[AI model security](#enable-ai-model-security)**: Get a clear, unified view of all your models registered in Azure Machine Learning registries. AI model scanning helps teams stay ahead of security risks by automatically checking for issues like serialization vulnerabilities, malware, and missing scans. By surfacing misconfigurations and integrating with Defender for Cloud and developer workflows, it helps keep your AI models continuously protected and ready for production.

### Enable suspicious prompt evidence

With the AI services threat protection plan enabled, you can control whether alerts include suspicious segments directly from your users' prompts, or the model responses from your AI applications or agents. Enabling user prompt evidence helps you triage and classify alerts, and understand your users' intentions.

@@ -59,7 +71,7 @@ If User prompt evidence is disabled, Microsoft Defender for Cloud continues anal

1. Select **Continue**.

-## **Enable Data Security for Microsoft Foundry with Microsoft Purview**
+### Enable Data Security for Microsoft Foundry with Microsoft Purview

> [!NOTE]
> This feature requires a Microsoft Purview license, which isn't included with Microsoft Defender for Cloud's Defender for AI Services plan.
@@ -89,6 +101,7 @@ This capability helps your organization manage and monitor AI-generated data i

> [!NOTE]
> Data Security Policies for Microsoft Foundry interactions are supported only for API calls that use Microsoft Entra ID authentication with a user-context token, or for API calls that explicitly include user context. To learn more, see [Gain end-user context for Azure AI API calls](gain-end-user-context-ai.md). For all other authentication scenarios, user interactions captured in Purview show up only in Purview Audit and DSPM for AI Activity Explorer.
+
1. Sign in to the [Azure portal](https://portal.azure.com/).

1. Search for and select **Microsoft Defender for Cloud**.
@@ -105,7 +118,7 @@ This capability helps your organization manage and monitor AI-generated data i

1. Select **Continue**.

-### **Troubleshooting**
+#### Troubleshooting

If you don't see user interactions for Entra ID authenticated users in Microsoft Purview Activity Explorer after turning on the toggle, follow these steps to troubleshoot:

Run the following commands in Azure PowerShell:
@@ -127,7 +140,31 @@ Run the following commands in Azure PowerShell

```PowerShell
New-AzADServicePrincipal -ApplicationId "9ec59623-ce40-4dc8-a635-ed0275b5d58a"
```

### Enable AI model security

AI model security in Defender for Cloud provides organizations with proactive protection for machine learning (ML) models. The service scans models for security risks, such as embedded malware, unsafe operators, and exposed secrets, before models reach production.

By integrating directly with Azure Machine Learning workspaces, registries, and continuous integration and continuous delivery (CI/CD) pipelines, AI model security gives security teams visibility into the safety and compliance status of AI models in their environments.

1. Sign in to the [Azure portal](https://portal.azure.com/).

1. Search for and select **Microsoft Defender for Cloud**.

1. In the Defender for Cloud menu, select **Environment settings**.

1. Select the relevant Azure subscription.

1. Locate AI services and select **Settings**.

1. Toggle AI model security to **On**.

    :::image type="content" source="media/ai-onboarding/model-security.png" alt-text="Screenshot that shows where the toggle for AI model security is located." lightbox="media/ai-onboarding/model-security.png":::

1. Select **Continue**.

Learn more about [AI model security](ai-model-security.md).

## Related content

- [Add user and application context to AI alerts](gain-end-user-context-ai.md)

articles/defender-for-cloud/alerts-ai-workloads.md

Lines changed: 9 additions & 1 deletion
@@ -3,7 +3,7 @@ title: Alerts for AI services
description: This article lists the security alerts for AI services visible in Microsoft Defender for Cloud.
ms.topic: reference
ms.custom: linux-related-content
-ms.date: 02/22/2026
+ms.date: 03/25/2026
ai-usage: ai-assisted
ms.author: elkrieger
author: Elazark

@@ -321,6 +321,14 @@ Severity: High

**Severity:** Low

+### Malicious content detected in uploaded AI model
+
+(Ai.AIModelScan_MalwareDetected)
+
+**Description:** A user-uploaded machine learning model was scanned and found to contain malware. The detection indicates that the file might execute malicious code if loaded, posing a threat to account integrity, data confidentiality, and the compute environment.
+
+**Severity:** High

## Next steps

- [Security alerts in Microsoft Defender for Cloud](alerts-overview.md)

articles/defender-for-cloud/defender-cli-syntax.md

Lines changed: 46 additions & 1 deletion
@@ -4,7 +4,7 @@ description: Discover how to scan container images for security risks using Micr
#customer intent: As a DevOps engineer, I want to scan container images for vulnerabilities so that I can ensure the security of my deployments.
author: Elazark
ms.author: elkrieger
-ms.date: 11/06/2025
+ms.date: 02/12/2026
ms.topic: concept-article
---

@@ -84,4 +84,49 @@ defender scan sbom my-image:latest

```bash
defender scan sbom /home/src --sbom-format cyclonedx1.6-xml
```

## AI model scan

Use the `defender scan model` command to scan AI models for security risks, including malware, unsafe operators, and exposed secrets. The command supports models stored locally or in cloud registries such as Hugging Face, and common formats including Pickle (`.pkl`), ONNX (`.onnx`), TorchScript (`.pt`), TensorFlow, and SafeTensors.

### Usage

```bash
defender scan model <target> [--modelscanner-Output <path>]
```

### Options

| Name | Required | Type | Description |
| --- | --- | --- | --- |
| `<target>` | Yes | String | The path to a local model file or directory, or a Hugging Face model URL (for example, `./models/my-model.pkl` or `https://huggingface.co/org/model`). |
| `--modelscanner-Output` | No | String | Output path for SARIF scan results. |

### Supported model formats

- Pickle (`.pkl`)
- ONNX (`.onnx`)
- TorchScript (`.pt`)
- TensorFlow (`.tf`, `.pb`)
- SafeTensors (`.safetensors`)

### Examples

#### Scan a local model

```bash
defender scan model ./models/my-model.pkl
```

#### Scan a Hugging Face model

```bash
defender scan model "https://huggingface.co/org/model-name"
```

#### Scan and export SARIF results

```bash
defender scan model ./models/my-model.onnx --modelscanner-Output results.sarif
```
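Because the exported file is standard SARIF (a JSON format), scan findings can be post-processed with ordinary tooling. The Python sketch below (an illustration, not part of the Defender CLI) summarizes results from a SARIF file such as the `results.sarif` produced above; the sample rule ID and message are made up, since the scanner's actual output fields aren't documented here:

```python
import json

def summarize_sarif(path):
    """Return (level, message) pairs for every result in a SARIF file."""
    with open(path) as f:
        report = json.load(f)
    findings = []
    for run in report.get("runs", []):
        for result in run.get("results", []):
            # "level" defaults to "warning" in SARIF when omitted.
            level = result.get("level", "warning")
            findings.append((level, result["message"]["text"]))
    return findings

# Build a minimal hand-written SARIF document to demonstrate the parser
# (field names follow the SARIF 2.1.0 schema; the finding is hypothetical).
sample = {
    "version": "2.1.0",
    "runs": [{
        "results": [{
            "ruleId": "unsafe-operator",
            "level": "error",
            "message": {"text": "Model invokes os.system during load."},
        }]
    }],
}
with open("results.sarif", "w") as f:
    json.dump(sample, f)

for level, text in summarize_sarif("results.sarif"):
    print(f"{level}: {text}")  # error: Model invokes os.system during load.
```

A summary like this is handy for failing a CI job when any `error`-level finding is present.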
