learn-pr/wwl/design-responsible-ai-security-governance-risk-management-compliance/10-knowledge-check.yml (5 additions, 5 deletions)
@@ -17,27 +17,27 @@ quiz:
     choices:
     - content: "Grant broad access so the agent can retrieve any data it may need in future tasks"
       isCorrect: false
-      explanation: "Incorrect. Granting broad access increases the risk of data breaches and does not align with the principles of defense-in-depth."
+      explanation: "Incorrect. Granting broad access increases the risk of data breaches and doesn't align with the principles of defense-in-depth."
     - content: "Use layered identity, access, data governance, monitoring, and threat protection controls"
       isCorrect: true
       explanation: "Correct. Defense-in-depth requires multiple layers of security—identity, RBAC, data governance, observability, threat protection, and controlled workflows. It prevents single-point failures and ensures agents remain predictable, safe, and aligned with organizational policies."
     - content: "Allow the agent to self-correct risky behaviors without human oversight"
       isCorrect: false
-      explanation: "Incorrect. Allowing agents to self-correct without oversight introduces risks and does not align with defense-in-depth principles."
+      explanation: "Incorrect. Allowing agents to self-correct without oversight introduces risks and doesn't align with defense-in-depth principles."
     - content: "Disable logging to reduce operational costs"
       isCorrect: false
       explanation: "Incorrect. Disabling logging reduces visibility and monitoring, which are critical components of a defense-in-depth strategy."
   - content: "What is the most effective way to reduce the risk of AI agents exposing sensitive information?"
     choices:
     - content: "Allow unrestricted connector access to improve retrieval accuracy"
       isCorrect: false
-      explanation: "Incorrect. Unrestricted access increases the risk of exposing sensitive information and does not align with secure practices."
+      explanation: "Incorrect. Unrestricted access increases the risk of exposing sensitive information and doesn't align with secure practices."
     - content: "Rely solely on model instructions to avoid returning sensitive content"
       isCorrect: false
       explanation: "Incorrect. Relying solely on model instructions is insufficient to prevent sensitive information exposure."
     - content: "Apply DLP policies, sensitivity labels, and least-privilege boundaries across all data sources"
       isCorrect: true
-      explanation: "Correct. Combining DLP enforcement, sensitivity labeling, and least-privilege access ensures agents can only interact with allowed data and cannot inadvertently expose sensitive, regulated, or high-risk information through prompts, retrieval, or outputs."
+      explanation: "Correct. Combining DLP enforcement, sensitivity labeling, and least-privilege access ensures agents can only interact with allowed data and can't inadvertently expose sensitive, regulated, or high-risk information through prompts, retrieval, or outputs."
     - content: "Store sensitive data in agent prompts so it can reason more accurately"
       isCorrect: false
-      explanation: "Incorrect. Storing sensitive data in prompts increases the risk of exposure and is not a secure practice."
+      explanation: "Incorrect. Storing sensitive data in prompts increases the risk of exposure and isn't a secure practice."