You signed in with another tab or window. Reload to refresh your session.You signed out in another tab or window. Reload to refresh your session.You switched accounts on another tab or window. Reload to refresh your session.Dismiss alert
Copy file name to clipboardExpand all lines: support/power-platform/copilot-studio/authoring/error-codes.md
+7-7Lines changed: 7 additions & 7 deletions
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -168,7 +168,7 @@ Common problems include:
168
168
169
169
**Error message**: The content was filtered due to Responsible AI restrictions.
170
170
171
-
The content was blocked by a Responsible AI check for hateful content. Hate harms refer to any content that attacks or uses discriminatory language with reference to a person or identity group based on certain differentiating attributes of these groups.
171
+
The content was blocked by a Responsible AI check for hateful content. Hateful content refer to any content that attacks or uses discriminatory language with reference to a person or identity group based on certain differentiating attributes of these groups.
172
172
173
173
This includes, but isn't limited to:
174
174
@@ -186,15 +186,15 @@ This includes, but isn't limited to:
186
186
187
187
**Error message**: The content was filtered due to Responsible AI restrictions.
188
188
189
-
The content was blocked by a security check for a jailbreak attempt. This is a user prompt attack that's ignoring system prompts with the goal of altering the intended agent behavior. Classes of attacks include attempt to change system rules, embedding a conversation mockup to confuse the model, role-play, or encoding attacks. For more information, see [Prompt Shields in Azure AI Content Safety](/azure/ai-services/content-safety/concepts/jailbreak-detection).
189
+
The content was blocked by a security check for a jailbreak attempt. A jailbreak attempt is a user prompt attack that ignores system prompts with the goal of altering the intended agent behavior. These attacks include attempts to change system rules, embedding a conversation mockup to confuse the model, role-play, or encoding attacks. For more information, see [Prompt Shields in Azure AI Content Safety](/azure/ai-services/content-safety/concepts/jailbreak-detection).
190
190
191
-
**Resolution**: You can reinforce responsible AI guidelines with your agent users to avoid this situation. Optionally, you can also update the agent content moderation policies.
191
+
**Resolution**: You can reinforce responsible AI guidelines with your agent users to avoid this situation. Optionally, you can also update the agent [content moderation](/microsoft-copilot-studio/knowledge-copilot-studio#content-moderation) policies.
192
192
193
193
#### OpenAIndirectAttack
194
194
195
195
**Error message**: The content was filtered due to Responsible AI restrictions.
196
196
197
-
There was an attack detected from information not directly supplied by the agent author or the end user, such as external documents. Attacker attempts to embed instructions in grounded data provided by the user to maliciously gain control of the system by:
197
+
There was an attack detected from information not directly supplied by the agent author or the end user, such as external documents. Attackers attempts to embed instructions in grounded data provided by the user to maliciously gain control of the system by:
198
198
199
199
- Manipulating content
200
200
- Intrusion
@@ -211,7 +211,7 @@ For more information, see [Prompt Shields for documents](/azure/ai-services/cont
211
211
212
212
**Error message**: The content was filtered due to Responsible AI restrictions.
213
213
214
-
The content was blocked by a Responsible AI check for content related to self-harm. Self-harm describes language related to physical actions intended to purposely hurt, injure, damage one's body or kill oneself.
214
+
The content was blocked by a Responsible AI check for content related to self-harm. Self-harm describes language related to physical actions intended to purposely hurt, injure, damage one's body, or kill oneself.
215
215
216
216
This includes, but isn't limited to:
217
217
@@ -224,7 +224,7 @@ This includes, but isn't limited to:
224
224
225
225
**Error message**: The content was filtered due to Responsible AI restrictions.
226
226
227
-
The content was blocked by a Responsible AI check for sexual content. Sexual describes language related to anatomical organs and genitals, romantic relationships and sexual acts, acts portrayed in erotic or affectionate terms, including those portrayed as an assault or a forced sexual violent act against one's will.
227
+
The content was blocked by a Responsible AI check for sexual content. Sexual content describes language related to anatomical organs and genitals, romantic relationships, sexual acts, and acts portrayed in erotic or affectionate terms, including those portrayed as an assault or a forced sexual violent act against one's will.
228
228
229
229
This includes, but isn't limited to:
230
230
@@ -246,7 +246,7 @@ This includes, but isn't limited to:
246
246
247
247
**Error message**: The content was filtered due to Responsible AI restrictions.
248
248
249
-
The content was blocked by a Responsible AI check for violent content. Violence describes language related to physical actions intended to hurt, injure, damage, or kill someone or something; describes weapons, guns, and related entities.
249
+
The content was blocked by a Responsible AI check for violent content. Violent content describes language related to physical actions intended to hurt, injure, damage, or kill someone or something; describes weapons, guns, and related entities.
0 commit comments