support/power-platform/copilot-studio/authoring/error-codes.md

Lines changed: 18 additions & 6 deletions
```diff
@@ -166,7 +166,9 @@ Common problems include:
 
 #### OpenAIHate
 
-**Error message**: The content was blocked by a Responsible AI check for hateful content. Hate harms refer to any content that attacks or uses discriminatory language with reference to a person or identity group based on certain differentiating attributes of these groups.
+**Error message**: The content was filtered due to Responsible AI restrictions.
+
+The content was blocked by a Responsible AI check for hateful content. Hate harms refer to any content that attacks or uses discriminatory language with reference to a person or identity group based on certain differentiating attributes of these groups.
 
 This includes, but isn't limited to:
 
```
```diff
@@ -182,13 +184,17 @@ This includes, but isn't limited to:
 
 #### OpenAIJailBreak
 
-**Error message**: The content was blocked by a security check for a jailbreak attempt. This is a user prompt attack that's ignoring system prompts with the goal of altering the intended agent behavior. Classes of attacks include attempt to change system rules, embedding a conversation mockup to confuse the model, role-play, or encoding attacks. For more information, see [Prompt Shields in Azure AI Content Safety](/azure/ai-services/content-safety/concepts/jailbreak-detection).
+**Error message**: The content was filtered due to Responsible AI restrictions.
+
+The content was blocked by a security check for a jailbreak attempt. This is a user prompt attack that's ignoring system prompts with the goal of altering the intended agent behavior. Classes of attacks include attempt to change system rules, embedding a conversation mockup to confuse the model, role-play, or encoding attacks. For more information, see [Prompt Shields in Azure AI Content Safety](/azure/ai-services/content-safety/concepts/jailbreak-detection).
 
 **Resolution**: You can reinforce responsible AI guidelines with your agent users to avoid this situation. Optionally, you can also update the agent content moderation policies.
 
 #### OpenAIndirectAttack
 
-**Error message**: There was an attack detected from information not directly supplied by the agent author or the end user, such as external documents. Attacker attempts to embed instructions in grounded data provided by the user to maliciously gain control of the system by:
+**Error message**: The content was filtered due to Responsible AI restrictions.
+
+There was an attack detected from information not directly supplied by the agent author or the end user, such as external documents. Attacker attempts to embed instructions in grounded data provided by the user to maliciously gain control of the system by:
 
 - Manipulating content
 - Intrusion
```
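The jailbreak and indirect-attack checks in the two sections above map to the Prompt Shields feature of Azure AI Content Safety that the file links to. As a minimal sketch of how a client might call that REST operation and interpret its verdict — the `api-version` value, header name, and response field names below are assumptions drawn from the linked Content Safety documentation, not from this diff:

```python
import json
from urllib import request

API_VERSION = "2024-09-01"  # assumed api-version; verify against the Content Safety docs

def build_shield_request(endpoint: str, key: str,
                         user_prompt: str, documents: list) -> request.Request:
    """Build a Prompt Shields call: userPrompt screens for direct jailbreak
    attempts, documents screens grounding data for indirect attacks."""
    url = endpoint + "/contentsafety/text:shieldPrompt?api-version=" + API_VERSION
    body = json.dumps({"userPrompt": user_prompt, "documents": documents}).encode()
    return request.Request(url, data=body, headers={
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": key,  # resource key; placeholder here
    })

def attack_detected(response_body: dict) -> bool:
    """True if the user prompt or any grounding document was flagged."""
    if response_body.get("userPromptAnalysis", {}).get("attackDetected"):
        return True
    return any(d.get("attackDetected")
               for d in response_body.get("documentsAnalysis", []))
```

A response shaped like `{"userPromptAnalysis": {"attackDetected": true}, "documentsAnalysis": []}` would correspond to the `OpenAIJailBreak` case, while a flagged entry in `documentsAnalysis` corresponds to `OpenAIndirectAttack`.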
```diff
@@ -203,7 +209,9 @@ For more information, see [Prompt Shields for documents](/azure/ai-services/content-safety/concepts/jailbreak-detection).
 
 #### OpenAISelfHarm
 
-**Error message**: The content was blocked by a Responsible AI check for content related to self-harm. Self-harm describes language related to physical actions intended to purposely hurt, injure, damage one's body or kill oneself.
+**Error message**: The content was filtered due to Responsible AI restrictions.
+
+The content was blocked by a Responsible AI check for content related to self-harm. Self-harm describes language related to physical actions intended to purposely hurt, injure, damage one's body or kill oneself.
 
 This includes, but isn't limited to:
 
```
```diff
@@ -214,7 +222,9 @@ This includes, but isn't limited to:
 
 #### OpenAISexual
 
-**Error message**: The content was blocked by a Responsible AI check for sexual content. Sexual describes language related to anatomical organs and genitals, romantic relationships and sexual acts, acts portrayed in erotic or affectionate terms, including those portrayed as an assault or a forced sexual violent act against one's will.
+**Error message**: The content was filtered due to Responsible AI restrictions.
+
+The content was blocked by a Responsible AI check for sexual content. Sexual describes language related to anatomical organs and genitals, romantic relationships and sexual acts, acts portrayed in erotic or affectionate terms, including those portrayed as an assault or a forced sexual violent act against one's will.
 
 This includes, but isn't limited to:
 
```
```diff
@@ -234,7 +244,9 @@ This includes, but isn't limited to:
 
 #### OpenAIViolence
 
-**Error message**: The content was blocked by a Responsible AI check for violent content. Violence describes language related to physical actions intended to hurt, injure, damage, or kill someone or something; describes weapons, guns, and related entities.
+**Error message**: The content was filtered due to Responsible AI restrictions.
+
+The content was blocked by a Responsible AI check for violent content. Violence describes language related to physical actions intended to hurt, injure, damage, or kill someone or something; describes weapons, guns, and related entities.
```
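Taken together, the hunks in this change converge all six Responsible AI error codes on one user-facing string, keeping each category-specific explanation as follow-on text. A sketch of how a client consuming these codes might surface that shared message — the function and fallback string are illustrative, not part of Copilot Studio:

```python
# The six error codes touched by this change; all now surface the same message.
RAI_ERROR_CODES = {
    "OpenAIHate", "OpenAIJailBreak", "OpenAIndirectAttack",
    "OpenAISelfHarm", "OpenAISexual", "OpenAIViolence",
}
GENERIC_MESSAGE = "The content was filtered due to Responsible AI restrictions."

def user_facing_message(error_code: str) -> str:
    """Return the generic filtered message for any Responsible AI code,
    with a neutral fallback for codes this change doesn't cover."""
    if error_code in RAI_ERROR_CODES:
        return GENERIC_MESSAGE
    return "An unexpected error occurred."
```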