
Commit 5646dbd

Refine explanations in knowledge check YAML

Updated explanations for multiple choice questions to improve clarity and consistency.

1 parent 5c71306

1 file changed: 5 additions & 5 deletions

File tree

learn-pr/wwl/design-responsible-ai-security-governance-risk-management-compliance/10-knowledge-check.yml

@@ -17,27 +17,27 @@ quiz:
     choices:
     - content: "Grant broad access so the agent can retrieve any data it may need in future tasks"
       isCorrect: false
-      explanation: "Incorrect. Granting broad access increases the risk of data breaches and does not align with the principles of defense-in-depth."
+      explanation: "Incorrect. Granting broad access increases the risk of data breaches and doesn't align with the principles of defense-in-depth."
     - content: "Use layered identity, access, data governance, monitoring, and threat protection controls"
       isCorrect: true
       explanation: "Correct. Defense-in-depth requires multiple layers of security—identity, RBAC, data governance, observability, threat protection, and controlled workflows. It prevents single-point failures and ensures agents remain predictable, safe, and aligned with organizational policies."
     - content: "Allow the agent to self-correct risky behaviors without human oversight"
       isCorrect: false
-      explanation: "Incorrect. Allowing agents to self-correct without oversight introduces risks and does not align with defense-in-depth principles."
+      explanation: "Incorrect. Allowing agents to self-correct without oversight introduces risks and doesn't align with defense-in-depth principles."
     - content: "Disable logging to reduce operational costs"
       isCorrect: false
       explanation: "Incorrect. Disabling logging reduces visibility and monitoring, which are critical components of a defense-in-depth strategy."
   - content: "What is the most effective way to reduce the risk of AI agents exposing sensitive information?"
     choices:
     - content: "Allow unrestricted connector access to improve retrieval accuracy"
       isCorrect: false
-      explanation: "Incorrect. Unrestricted access increases the risk of exposing sensitive information and does not align with secure practices."
+      explanation: "Incorrect. Unrestricted access increases the risk of exposing sensitive information and doesn't align with secure practices."
     - content: "Rely solely on model instructions to avoid returning sensitive content"
       isCorrect: false
       explanation: "Incorrect. Relying solely on model instructions is insufficient to prevent sensitive information exposure."
     - content: "Apply DLP policies, sensitivity labels, and least-privilege boundaries across all data sources"
       isCorrect: true
-      explanation: "Correct. Combining DLP enforcement, sensitivity labeling, and least-privilege access ensures agents can only interact with allowed data and cannot inadvertently expose sensitive, regulated, or high-risk information through prompts, retrieval, or outputs."
+      explanation: "Correct. Combining DLP enforcement, sensitivity labeling, and least-privilege access ensures agents can only interact with allowed data and can't inadvertently expose sensitive, regulated, or high-risk information through prompts, retrieval, or outputs."
     - content: "Store sensitive data in agent prompts so it can reason more accurately"
       isCorrect: false
-      explanation: "Incorrect. Storing sensitive data in prompts increases the risk of exposure and is not a secure practice."
+      explanation: "Incorrect. Storing sensitive data in prompts increases the risk of exposure and isn't a secure practice."
