Skip to content

Commit 11def5f

Browse files
Merge pull request #2466 from liranylevy/docs-editor/alerts-ai-workloads-1770123582
Update alerts-ai-workloads.md
2 parents bd47823 + 909effb commit 11def5f

1 file changed

Lines changed: 126 additions & 3 deletions

File tree

articles/defender-for-cloud/alerts-ai-workloads.md

Lines changed: 126 additions & 3 deletions
Original file line numberDiff line numberDiff line change
@@ -9,9 +9,9 @@ ms.author: elkrieger
99
author: Elazark
1010
---
1111

12-
# Alerts for AI services
12+
# Alerts for AI
1313

14-
This article lists the security alerts you might get for AI services from Microsoft Defender for Cloud and any Microsoft Defender plans you enabled. The alerts shown in your environment depend on the resources and services you're protecting, and your customized configuration.
14+
This article lists the security alerts you might get for AI from Microsoft Defender for Cloud and any Microsoft Defender plans you enabled. The alerts shown in your environment depend on the resources and services you're protecting, and your customized configuration.
1515

1616
[Learn how to respond to these alerts](manage-respond-alerts.md).
1717

@@ -23,7 +23,7 @@ This article lists the security alerts you might get for AI services from Micros
2323
> [!NOTE]
2424
> For alerts that are in preview: [!INCLUDE [Legalese](./includes/defender-for-cloud-preview-legal-text.md)]
2525
26-
## AI services alerts
26+
## Alerts for AI
2727

2828
### Detected credential theft attempts on an Azure AI model deployment
2929

@@ -195,6 +195,129 @@ This article lists the security alerts you might get for AI services from Micros
195195

196196
**Severity**: Low
197197

198+
## Alerts for AI agents
199+
200+
201+
### (Preview) A Jailbreak attempt on your Azure AI agent was detected by Prompt Shields
202+
203+
(AI.Azure_Agentic_Jailbreak) 
204+
205+
**Description**: The Jailbreak alert, carried out using a direct prompt injection technique, is designed to notify the SOC there was an attempt to manipulate the system prompt to bypass the generative AI’s safeguards, potentially accessing sensitive data or privileged functions. It indicated that such attempts were detected by Azure Responsible AI Content Safety (also known as Prompt Shields) but weren't blocked due to content filtering settings or due to low confidence. 
206+
207+
**[MITRE tactics](/azure/defender-for-cloud/alerts-reference)**: Privilege Escalation, Defense Evasion 
208+
209+
**Severity**: Medium 
210+
211+
### (Preview) A Jailbreak attempt on your Azure AI agent was blocked by Prompt Shields
212+
213+
(Azure_Agentic_BlockedJailbreak) 
214+
215+
**Description**: The Jailbreak alert, carried out using a direct prompt injection technique, is designed to notify the SOC there was an attempt to manipulate the system prompt to bypass the generative AI’s safeguards, potentially accessing sensitive data or privileged functions. It indicated that such attempts were blocked by Azure Responsible AI Content Safety (also known as Prompt Shields), ensuring the integrity of the AI resources and the data security. 
216+
217+
**[MITRE tactics](/azure/defender-for-cloud/alerts-reference)**: Privilege Escalation, Defense Evasion 
218+
219+
**Severity**: Medium 
220+
221+
### (Preview) An ASCII smuggling attempt was detected on an AI agent
222+
223+
(AI.Azure_Agentic_ASCIISmuggling) 
224+
225+
**Description**: ASCII smuggling technique allows an attacker to send invisible instructions to an AI model. These attacks are commonly attributed to indirect prompt injections, where the malicious threat actor is passing hidden instructions to bypass the application and model guardrails. These attacks are usually applied without the user's knowledge given their lack of visibility in the text and can compromise the application tools or connected data sets. 
226+
227+
**[MITRE tactics](/azure/defender-for-cloud/alerts-reference)**: Impact 
228+
229+
**Severity**: High 
230+
231+
### (Preview) A user phishing attempt was detected on an AI agent 
232+
233+
(AI.Azure_Agentic_MaliciousUrl.UserPrompt) 
234+
235+
**Description**: This alert indicates a URL used for phishing attack was sent by a user to an AI agent. The content typically lures visitors into entering their corporate credentials or financial information into a legitimate looking website. Sending this to an AI agent might be for the purpose of corrupting it, poisoning the data sources it has access to, or gaining access to employees or other customers via the agent tools. 
236+
237+
**[MITRE tactics](/azure/defender-for-cloud/alerts-reference)**: Collection 
238+
239+
**Severity**: High 
240+
241+
### (Preview) A suspicious IP access was detected on an AI agent 
242+
243+
(AI.Azure_Agentic_AccessFromSuspiciousIP) 
244+
245+
**Description**: An IP address accessing one of your AI agents was identified by Microsoft Threat Intelligence as having a high probability of being a threat. While observing malicious Internet traffic, this IP came up as involved in attacking other online targets. 
246+
247+
[MITRE tactics](/azure/defender-for-cloud/alerts-reference): Execution 
248+
249+
Severity: High 
250+
251+
### (Preview) An anonymized IP access was detected on an AI agent
252+
253+
(AI.Azure_Agentic_AccessFromAnonymizedIP) 
254+
255+
**Description**: An IP address from the Tor network accessed by one of the AI agents. Tor is a network that allows people to access the Internet while keeping their real IP hidden. Though there are legitimate uses, it is frequently used by attackers to hide their identity when they target people's systems online. 
256+
257+
**[MITRE tactics](/azure/defender-for-cloud/alerts-reference)**: Execution 
258+
259+
**Severity**: High 
260+
261+
### (Preview) A suspicious user-agent access was detected on an AI agent
262+
263+
(AI.Azure_Agentic_AccessFromSuspiciousUserAgent) 
264+
265+
**Description**: The user agent of a request accessing one of your AI agents contained anomalous values indicative of an attempt to abuse or manipulate the agent. The suspicious user agent in question has been mapped by Microsoft threat intelligence as suspected of malicious intent and hence your resources were likely compromised. 
266+
267+
**[MITRE tactics](/azure/defender-for-cloud/alerts-reference)**: Execution, Reconnaissance, Initial access 
268+
269+
**Severity**: Medium 
270+
271+
### (Preview) A malicious URL detected in AI agent response
272+
273+
(AI.Azure_Agentic_MaliciousUrl.ModelResponse) 
274+
275+
**Description**: This alert indicates a corruption of an AI agent developed by the organization, as it has actively shared a known malicious URL used for phishing with a user. The URL originated within the agent itself, the AI model, the tools, or the data the agent can access. 
276+
277+
**[MITRE tactics](/azure/defender-for-cloud/alerts-reference)**: Impact (Defacement)  
278+
279+
**Severity**: High 
280+
281+
### (Preview) A malicious URL was detected in an AI agent’s tool response 
282+
283+
(AI.Azure_Agentic_MaliciousUrl.ToolOutput) 
284+
285+
**Description**: This alert indicates a corruption of an AI agent developed by the organization, as it has actively shared a known malicious URL used for phishing with a user. The URL originated within the tools the agent can access. 
286+
287+
**[MITRE tactics](/azure/defender-for-cloud/alerts-reference)**: Impact 
288+
289+
**Severity**: High
290+
291+
### (Preview) Suspected wallet attack - volume anomaly
292+
293+
(AI.Azure_Agentic_DOWVolumeAnomaly) 
294+
295+
 **Description**: Wallet attacks are a family of attacks common for AI resources that consist of threat actors excessively engage with an AI resource directly or through an application in hopes of causing the organization large financial damages. This detection tracks high volumes of requests and responses by the resource that are inconsistent with its historical usage patterns. 
296+
297+
**[MITRE tactics](/azure/defender-for-cloud/alerts-reference)**: Impact 
298+
299+
**Severity**: Medium 
300+
301+
### (Preview) AI agent instruction prompt leak detected
302+
303+
(AI.Azure_Agentic_InstructionLeakage) 
304+
305+
**Description**: A threat actor attempted to extract system-level instructions from your AI agent, including hidden prompts, policies, or internal configurations. Exposure of this information can compromise security controls and facilitate follow-on attacks such as prompt injection, jailbreaks, or misuse of the model. 
306+
307+
**[MITRE tactics](/azure/defender-for-cloud/alerts-reference)**: Impact 
308+
309+
**Severity**: Low 
310+
311+
### (Preview) AI agent Reconnaissance Attempt Detected  
312+
313+
(AI.Azure_Agentic_LLMReconaissance) 
314+
315+
**Description:** A threat actor is interacting with your Agent in a way that resembles reconnaissance behavior, including attempts to extract system instructions, Agent capabilities, or bypass safety guardrails. These prompts may precede attempted prompt injection or jailbreak attacks. 
316+
317+
**[MITRE tactics](/azure/defender-for-cloud/alerts-reference):** Reconnaissance 
318+
319+
**Severity:** Low 
320+
198321
## Next steps
199322

200323
- [Security alerts in Microsoft Defender for Cloud](alerts-overview.md)

0 commit comments

Comments
 (0)