support/power-platform/copilot-studio/authoring/error-codes.md (+3 −3)
@@ -39,7 +39,7 @@ As an agent maker, if a problem occurs when you're using the test pane to [test
 |[InfiniteLoopInBotContent](#infiniteloopinbotcontent)| A node was executed too many times. |
 |[InvalidContent](#invalidcontent)| Invalid content was added to the code editor. |
 |[LatestPublishedVersionNotFound](#latestpublishedversionnotfound)| Unable to retrieve the published version of the agent. |
-|[OpenAIHate](#openaiahate)| Hate content was detected. |
+|[OpenAIHate](#openaihate)| Hate content was detected. |
 |[OpenAIJailBreak](#openaijailbreak)| Jailbreak content was detected. |
 |[OpenAIndirectAttack](#openaindirectattack)| Indirect attack content was detected. |
 |[OpenAISelfHarm](#openaiselfharm)| Self-harm content was detected. |
@@ -182,7 +182,7 @@ This includes, but is not limited to:
 
 #### OpenAIJailBreak
 
-**Error message**: The content was blocked by a security check for a jailbreak attempt. This is a user prompt attack that is ignoring system prompts with the goal of altering the intended agent behavior. Classes of attacks include attempt to change system rules, embedding a conversation mockup to confuse the model, role-play, or encoding attacks. For more information, see [Prompt Shields in Azure AI Content Safety](azure/ai-services/content-safety/concepts/jailbreak-detection).
+**Error message**: The content was blocked by a security check for a jailbreak attempt. This is a user prompt attack that is ignoring system prompts with the goal of altering the intended agent behavior. Classes of attacks include attempt to change system rules, embedding a conversation mockup to confuse the model, role-play, or encoding attacks. For more information, see [Prompt Shields in Azure AI Content Safety](/azure/ai-services/content-safety/concepts/jailbreak-detection).
 
 **Resolution**: You can reinforce responsible AI guidelines with your agent users to avoid this situation. Optionally, you can also update the agent content moderation policies.
 
@@ -197,7 +197,7 @@ This includes, but is not limited to:
 - Fraud
 - Code execution and infecting other systems
 
-For more information, see [Prompt Shields for documents](azure/ai-services/content-safety/concepts/jailbreak-detection#prompt-shields-for-documents).
+For more information, see [Prompt Shields for documents](/azure/ai-services/content-safety/concepts/jailbreak-detection#prompt-shields-for-documents).
 
 **Resolution**: If you're testing and didn't mean it to be an attack, make sure your instructions are in line with what you want the agent to be able to do. Otherwise, you can reinforce responsible AI guidelines with your agent users to avoid this situation.