Commit a6d6808

Added error codes for Responsible AI
1 parent 8c67eeb commit a6d6808

1 file changed

Lines changed: 3 additions & 3 deletions

File tree

support/power-platform/copilot-studio/authoring/error-codes.md

@@ -39,7 +39,7 @@ As an agent maker, if a problem occurs when you're using the test pane to [test
 | [InfiniteLoopInBotContent](#infiniteloopinbotcontent) | A node was executed too many times. |
 | [InvalidContent](#invalidcontent) | Invalid content was added to the code editor. |
 | [LatestPublishedVersionNotFound](#latestpublishedversionnotfound) | Unable to retrieve the published version of the agent. |
-| [OpenAIHate](#openaiahate) | Hate content was detected. |
+| [OpenAIHate](#openaihate) | Hate content was detected. |
 | [OpenAIJailBreak](#openaijailbreak) | Jailbreak content was detected. |
 | [OpenAIndirectAttack](#openaindirectattack) | Indirect attack content was detected. |
 | [OpenAISelfHarm](#openaiselfharm) | Self-harm content was detected. |
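This hunk repairs a broken in-page anchor: most Markdown renderers derive a heading's link slug by lowercasing the heading text and dropping punctuation, so the heading `#### OpenAIHate` yields `#openaihate`, and the stray `a` in `#openaiahate` pointed at nothing. A minimal sketch of that slug rule (the helper name is hypothetical, and real renderers differ in edge cases):

```python
import re

def md_anchor(heading: str) -> str:
    """Approximate the anchor slug a Markdown renderer derives from a heading:
    strip the leading '#' markers, lowercase, drop punctuation, spaces -> hyphens."""
    text = heading.lstrip("#").strip().lower()
    text = re.sub(r"[^\w\- ]", "", text)  # keep word chars, hyphens, spaces
    return "#" + text.replace(" ", "-")

# The table row must link to the slug of the actual heading "#### OpenAIHate".
print(md_anchor("#### OpenAIHate"))  # -> #openaihate
```

The same rule explains the other anchors in the table, such as `#openaijailbreak` for `#### OpenAIJailBreak`.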
@@ -182,7 +182,7 @@ This includes, but is not limited to:
 
 #### OpenAIJailBreak
 
-**Error message**: The content was blocked by a security check for a jailbreak attempt. This is a user prompt attack that is ignoring system prompts with the goal of altering the intended agent behavior. Classes of attacks include attempt to change system rules, embedding a conversation mockup to confuse the model, role-play, or encoding attacks. For more information, see [Prompt Shields in Azure AI Content Safety](azure/ai-services/content-safety/concepts/jailbreak-detection).
+**Error message**: The content was blocked by a security check for a jailbreak attempt. This is a user prompt attack that is ignoring system prompts with the goal of altering the intended agent behavior. Classes of attacks include attempt to change system rules, embedding a conversation mockup to confuse the model, role-play, or encoding attacks. For more information, see [Prompt Shields in Azure AI Content Safety](/azure/ai-services/content-safety/concepts/jailbreak-detection).
 
 **Resolution**: You can reinforce responsible AI guidelines with your agent users to avoid this situation. Optionally, you can also update the agent content moderation policies.
 
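This hunk (and the next) adds a leading slash to the `azure/...` link targets. Without it, the link is resolved relative to the current page's path; with it, the link resolves from the site root, which is where the Azure docs live. The effect can be sketched with standard URL resolution (the base URL below is a hypothetical page location, not taken from the diff):

```python
from urllib.parse import urljoin

# Hypothetical published location of this docs page (for illustration only).
base = "https://learn.microsoft.com/power-platform/copilot-studio/error-codes"

# Without a leading slash, the target resolves relative to the page's directory.
broken = urljoin(base, "azure/ai-services/content-safety/concepts/jailbreak-detection")

# With the leading slash, it resolves from the site root, as intended.
fixed = urljoin(base, "/azure/ai-services/content-safety/concepts/jailbreak-detection")

print(broken)  # .../power-platform/copilot-studio/azure/ai-services/... (wrong)
print(fixed)   # https://learn.microsoft.com/azure/ai-services/...       (right)
```

This is the standard relative-reference resolution of RFC 3986, so the same fix applies to the `#prompt-shields-for-documents` link in the following hunk.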

@@ -197,7 +197,7 @@ This includes, but is not limited to:
 - Fraud
 - Code execution and infecting other systems
 
-For more information, see [Prompt Shields for documents](azure/ai-services/content-safety/concepts/jailbreak-detection#prompt-shields-for-documents).
+For more information, see [Prompt Shields for documents](/azure/ai-services/content-safety/concepts/jailbreak-detection#prompt-shields-for-documents).
 
 **Resolution**: If you're testing and didn't mean it to be an attack, make sure your instructions are in line with what you want the agent to be able to do. Otherwise, you can reinforce responsible AI guidelines with your agent users to avoid this situation.
 