-   description: Learn about reviewing AI open-source libraries to ensure that they are secure and reliable
-   ms.date: 03/06/2026
+   description: Learn how to evaluate open-source AI libraries for security risks, including AI-specific supply chain threats like model provenance and serialization vulnerabilities.
-   description: Learn about content filters and how they can help you secure your AI systems
-   ms.date: 03/06/2026
+   description: Learn how content filters detect and block harmful content in AI systems, including input and output filtering pipelines and configuration strategies.
-   description: Learn about metaprompts and how they can help you secure your AI systems
-   ms.date: 03/06/2026
+   description: Learn how to design effective metaprompts (system prompts) as a security control, including role definition, safety rules, grounding instructions, and anti-manipulation techniques.
+   description: Learn how grounding reduces hallucinations and security risks by connecting AI responses to verified data sources through RAG and other techniques.
learn-pr/advocates/ai-security-controls/7-implement-application-security-best-practices-for-ai-enabled-applications.yml
  title: Implement application security best practices for AI enabled applications
  metadata:
    title: Implement application security best practices for AI enabled applications
-   description: Learn about application security best practices for AIenabled applications
-   ms.date: 03/06/2026
+   description: Learn how to apply application security best practices to AI-enabled applications, including secure SDLC, agent tool security, and monitoring.

+   description: Check your understanding of AI security controls including content filters, metaprompts, data security, and monitoring.
+   ms.date: 04/24/2026
    author: ceperezb
    ms.author: ceperezb
    ms.topic: unit
    module_assessment: true
- durationInMinutes: 3
+ durationInMinutes: 5
  content: Choose the best response for each question.
  quiz:
    questions:
-   - content: "What steps can you take to improve the data security of an AIenabled application?"
+   - content: "What steps can you take to improve the data security of an AI-enabled application?"
      choices:
-     - content: "AI systems should only be able to enact existing data security controls and use the access permissions of the user they're acting on behalf of."
+     - content: "Ensure the AI system only accesses data that the user it's acting on behalf of is authorized to see"
        isCorrect: true
-       explanation: "Access control decisions should never be devolved to an AI system."
-     - content: "Keep your AIenabled application isolated from the rest of your IT environment."
+       explanation: "Access control decisions should never be devolved to an AI system. The AI should operate under the user's permissions using delegated access or under its own least-privilege identity for background tasks."
+     - content: "Keep your AI-enabled application isolated from the rest of your IT environment"
        isCorrect: false
-       explanation: "Isolating an AI enabled application limits usability rather than improve data security."
-     - content: "Run your AIenabled application on premises rather than in the cloud."
+       explanation: "Isolating an AI application limits usability rather than improving data security. The correct approach is to apply proper access controls and identity management."
+     - content: "Run your AI-enabled application on premises rather than in the cloud"
        isCorrect: false
-       explanation: "Running an AI enabled application on premises requires you to create your own AI platform. This is not feasible for most organizations. It will also not improve data security."
-   - content: "What kind AI harms and attacks does a metaprompt help to mitigate?"
+       explanation: "Running on premises doesn't inherently improve data security and requires building your own AI platform. Proper access controls are effective regardless of deployment location."
+   - content: "What type of AI security issue does a metaprompt (system prompt) help mitigate?"
      choices:
      - content: "Model poisoning"
        isCorrect: false
-       explanation: "Metaprompts can't help mitigate model poisoning attacks."
-     - content: "AI overreliance"
+       explanation: "Metaprompts can't mitigate model poisoning because poisoning occurs during training, before the metaprompt is applied at inference time."
+     - content: "Jailbreaks and harmful content generation"
+       isCorrect: true
+       explanation: "A metaprompt defines behavioral guardrails that instruct the model how to respond and what to refuse, helping mitigate jailbreak attempts, harmful content generation, and prompt manipulation."
+     - content: "Network-level denial of service attacks"
+       isCorrect: false
+       explanation: "Metaprompts operate at the model behavior level and can't mitigate infrastructure-level attacks like denial of service."
+   - content: "You want to prevent your AI application from returning harmful content. What should you implement?"
+     choices:
+     - content: "Metaprompts only"
+       isCorrect: false
+       explanation: "While metaprompts help, they can be bypassed. Content safety filters provide an additional layer that evaluates both inputs and outputs against harmful content categories."
+     - content: "Content safety filters as part of a defense-in-depth approach"
+       isCorrect: true
+       explanation: "Content safety filters can be configured to detect and block harmful content categories. For best results, combine them with metaprompts and other controls for layered defense."
+     - content: "Application security best practices alone"
+       isCorrect: false
+       explanation: "General application security practices don't address the specific challenge of harmful AI-generated content. Content filters are purpose-built for this scenario."
+   - content: "Why is grounding an important security control for AI systems?"
+     choices:
+     - content: "It prevents all types of prompt injection attacks"
        isCorrect: false
-       explanation: "Metaprompts can't help mitigate overreliance on AI."
-     - content: "Jailbreaks"
+       explanation: "Grounding doesn't prevent prompt injection. It constrains the model to respond based on verified data, which reduces fabricated outputs and limits the scope of responses."
+     - content: "It reduces fabricated outputs by constraining responses to verified data sources"
        isCorrect: true
-       explanation: "A metaprompt is a set of natural language instructions that tell an AI system how to behave or not behave."
-   - content: "You want to restrict your AI application from returning content of a sexual or hateful nature, what would you implement to accomplish this goal?"
+       explanation: "Grounding connects AI responses to specific, verified data through techniques like RAG. This reduces the risk of fabricated outputs and helps keep responses within the intended scope."
+     - content: "It encrypts the model's training data"
+       isCorrect: false
+       explanation: "Grounding has nothing to do with encryption. It's about connecting model responses to verified, real-world data sources at inference time."
+   - content: "What is an AI-specific supply chain risk when adopting open-source AI libraries?"
      choices:
-     - content: "Metaprompts"
+     - content: "Open-source licenses are always incompatible with commercial use"
        isCorrect: false
-       explanation: "Metaprompts can't be configured to filter unwanted content."
-     - content: "Content safety filters"
+       explanation: "Many open-source licenses are compatible with commercial use. License compatibility should be verified but isn't an inherent blocker."
+     - content: "Pre-trained models included in libraries may contain backdoors or biased behavior that's hard to detect through code review"
        isCorrect: true
-       explanation: "Content safety filters can be configured to prevent an application from returning content of a sexual or hateful nature."
-     - content: "Application security best practices"
+       explanation: "Pre-trained models are a unique AI supply chain risk. A compromised model can contain backdoors that aren't visible in code review, making model provenance verification and scanning essential."
+     - content: "Open-source AI libraries can't be updated after deployment"
        isCorrect: false
-       explanation: "Application security best practices cannot prevent an AI application from returning content of a sexual or hateful nature."
+       explanation: "Open-source libraries can be updated like any other dependency. The challenge is that AI libraries evolve rapidly, and pinning to older versions may mean missing critical security patches."
+   description: Summary of AI security controls covered in this module, including supply chain security, content filtering, data security, metaprompts, grounding, and monitoring.