Commit 77f8992

Merge pull request #54411 from ceperezb/CEPEREZB-ai-security-controls

update module

2 parents 4796496 + cb16b93, commit 77f8992

45 files changed: 1202 additions and 143 deletions


learn-pr/advocates/ai-security-controls/1-introduction.yml

Lines changed: 3 additions & 3 deletions
@@ -3,11 +3,11 @@ uid: learn.ai-security-controls.introduction
 title: Introduction
 metadata:
   title: Introduction
-  description: Overview of the security controls that you can implement in AI systems to increase the security posture of AI environments.
-  ms.date: 03/06/2026
+  description: Introduction to AI security controls, including learning objectives and prerequisites.
+  ms.date: 04/24/2026
   author: ceperezb
   ms.author: ceperezb
   ms.topic: unit
-durationInMinutes: 1
+durationInMinutes: 2
 content: |
   [!include[](includes/1-introduction.md)]

learn-pr/advocates/ai-security-controls/2-review-ai-open-source-libraries.yml

Lines changed: 3 additions & 3 deletions
@@ -3,11 +3,11 @@ uid: learn.ai-security-controls.review-ai-open-source-libraries
 title: Review AI open-source libraries
 metadata:
   title: Review AI open-source libraries
-  description: Learn about reviewing AI open-source libraries to ensure that they are secure and reliable
-  ms.date: 03/06/2026
+  description: Learn how to evaluate open-source AI libraries for security risks, including AI-specific supply chain threats like model provenance and serialization vulnerabilities.
+  ms.date: 04/24/2026
   author: ceperezb
   ms.author: ceperezb
   ms.topic: unit
-durationInMinutes: 4
+durationInMinutes: 5
 content: |
   [!include[](includes/2-review-ai-open-source-libraries.md)]

learn-pr/advocates/ai-security-controls/3-content-filters.yml

Lines changed: 3 additions & 3 deletions
@@ -3,11 +3,11 @@ uid: learn.ai-security-controls.content-filters
 title: Content filters
 metadata:
   title: Content filters
-  description: Learn about content filters and how they can help you secure your AI systems
-  ms.date: 03/06/2026
+  description: Learn how content filters detect and block harmful content in AI systems, including input and output filtering pipelines and configuration strategies.
+  ms.date: 04/24/2026
   author: ceperezb
   ms.author: ceperezb
   ms.topic: unit
-durationInMinutes: 3
+durationInMinutes: 5
 content: |
   [!include[](includes/3-content-filters.md)]

learn-pr/advocates/ai-security-controls/4-implement-ai-data-security.yml

Lines changed: 3 additions & 3 deletions
@@ -3,11 +3,11 @@ uid: learn.ai-security-controls.implement-ai-data-security
 title: Implement AI data security
 metadata:
   title: Implement AI data security
-  description: Learn about AI data security and how to implement it in your AI systems
-  ms.date: 03/06/2026
+  description: Learn about AI data security, including the four types of data in AI systems, agent identity management, and access control strategies.
+  ms.date: 04/24/2026
   author: ceperezb
   ms.author: ceperezb
   ms.topic: unit
-durationInMinutes: 2
+durationInMinutes: 6
 content: |
   [!include[](includes/4-implement-ai-data-security.md)]

learn-pr/advocates/ai-security-controls/5-create-metaprompts.yml

Lines changed: 3 additions & 3 deletions
@@ -3,11 +3,11 @@ uid: learn.ai-security-controls.create-metaprompts
 title: Create metaprompts
 metadata:
   title: Create metaprompts
-  description: Learn about metaprompts and how they can help you secure your AI systems
-  ms.date: 03/06/2026
+  description: Learn how to design effective metaprompts (system prompts) as a security control, including role definition, safety rules, grounding instructions, and anti-manipulation techniques.
+  ms.date: 04/24/2026
   author: ceperezb
   ms.author: ceperezb
   ms.topic: unit
-durationInMinutes: 2
+durationInMinutes: 5
 content: |
   [!include[](includes/5-create-metaprompts.md)]

learn-pr/advocates/ai-security-controls/6-ground-ai-systems.yml

Lines changed: 3 additions & 3 deletions
@@ -3,11 +3,11 @@ uid: learn.ai-security-controls.ground-ai-systems
 title: Ground AI systems
 metadata:
   title: Ground AI systems
-  description: Learn about grounding AI systems
-  ms.date: 03/06/2026
+  description: Learn how grounding reduces hallucinations and security risks by connecting AI responses to verified data sources through RAG and other techniques.
+  ms.date: 04/24/2026
   author: ceperezb
   ms.author: ceperezb
   ms.topic: unit
-durationInMinutes: 3
+durationInMinutes: 5
 content: |
   [!include[](includes/6-ground-ai-systems.md)]

learn-pr/advocates/ai-security-controls/7-implement-application-security-best-practices-for-ai-enabled-applications.yml

Lines changed: 3 additions & 3 deletions
@@ -3,11 +3,11 @@ uid: learn.ai-security-controls.implementing-application-security-best-practices
 title: Implement application security best practices for AI enabled applications
 metadata:
   title: Implement application security best practices for AI enabled applications
-  description: Learn about application security best practices for AI enabled applications
-  ms.date: 03/06/2026
+  description: Learn how to apply application security best practices to AI-enabled applications, including secure SDLC, agent tool security, and monitoring.
+  ms.date: 04/24/2026
   author: ceperezb
   ms.author: ceperezb
   ms.topic: unit
-durationInMinutes: 2
+durationInMinutes: 6
 content: |
   [!include[](includes/7-implement-application-security-best-practices-for-ai-enabled-applications.md)]
Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
+### YamlMime:ModuleUnit
+uid: learn.ai-security-controls.monitor-detect-ai-threats
+title: Monitor and detect AI-specific threats
+metadata:
+  title: Monitor and detect AI-specific threats
+  description: Learn how to monitor AI systems for security threats including jailbreak attempts, prompt injection, and anomalous agent behavior.
+  ms.date: 04/24/2026
+  author: ceperezb
+  ms.author: ceperezb
+  ms.topic: unit
+durationInMinutes: 7
+content: |
+  [!include[](includes/7a-monitor-detect-ai-threats.md)]
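The new unit's description mentions detecting jailbreak attempts and prompt injection. As a rough illustration of the simplest form such detection can take, here is a hypothetical rule-based input screen; the pattern list and function name are illustrative assumptions, not anything defined in the module (production systems typically rely on trained classifiers and platform safety services rather than fixed regexes):

```python
import re

# Illustrative patterns only; real detection uses trained classifiers
# and platform safety services, not a fixed regex list.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal your system prompt",
    r"you are now in developer mode",
]

def flag_prompt(prompt: str) -> bool:
    """Return True if the prompt matches a known jailbreak-style pattern."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS)

print(flag_prompt("Please ignore all previous instructions and ..."))  # True
print(flag_prompt("Summarize this quarterly report"))                  # False
```

Flagged prompts would typically be logged and correlated with other signals (anomalous agent behavior, repeated refusals) rather than blocked outright on a single match.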

learn-pr/advocates/ai-security-controls/8-knowledge-check.yml

Lines changed: 45 additions & 24 deletions
@@ -3,47 +3,68 @@ uid: learn.ai-security-controls.knowledge-check
 title: Module assessment
 metadata:
   title: Module assessment
-  description: Display your knowledge
-  ms.date: 03/06/2026
+  description: Check your understanding of AI security controls including content filters, metaprompts, data security, and monitoring.
+  ms.date: 04/24/2026
   author: ceperezb
   ms.author: ceperezb
   ms.topic: unit
   module_assessment: true
-durationInMinutes: 3
+durationInMinutes: 5
 content: Choose the best response for each question.
 quiz:
   questions:
-  - content: "What steps can you take to improve the data security of an AI enabled application?"
+  - content: "What steps can you take to improve the data security of an AI-enabled application?"
     choices:
-    - content: "AI systems should only be able to enact existing data security controls and use the access permissions of the user they're acting on behalf of."
+    - content: "Ensure the AI system only accesses data that the user it's acting on behalf of is authorized to see"
       isCorrect: true
-      explanation: "Access control decisions should never be devolved to an AI system."
-    - content: "Keep your AI enabled application isolated from the rest of your IT environment."
+      explanation: "Access control decisions should never be devolved to an AI system. The AI should operate under the user's permissions using delegated access or under its own least-privilege identity for background tasks."
+    - content: "Keep your AI-enabled application isolated from the rest of your IT environment"
       isCorrect: false
-      explanation: "Isolating an AI enabled application limits usability rather than improve data security."
-    - content: "Run your AI enabled application on premises rather than in the cloud."
+      explanation: "Isolating an AI application limits usability rather than improving data security. The correct approach is to apply proper access controls and identity management."
+    - content: "Run your AI-enabled application on premises rather than in the cloud"
       isCorrect: false
-      explanation: "Running an AI enabled application on premises requires you to create your own AI platform. This is not feasible for most organizations. It will also not improve data security."
-  - content: "What kind AI harms and attacks does a metaprompt help to mitigate?"
+      explanation: "Running on premises doesn't inherently improve data security and requires building your own AI platform. Proper access controls are effective regardless of deployment location."
+  - content: "What type of AI security issue does a metaprompt (system prompt) help mitigate?"
     choices:
     - content: "Model poisoning"
       isCorrect: false
-      explanation: "Metaprompts can't help mitigate model poisoning attacks."
-    - content: "AI overreliance"
+      explanation: "Metaprompts can't mitigate model poisoning because poisoning occurs during training, before the metaprompt is applied at inference time."
+    - content: "Jailbreaks and harmful content generation"
+      isCorrect: true
+      explanation: "A metaprompt defines behavioral guardrails that instruct the model how to respond and what to refuse, helping mitigate jailbreak attempts, harmful content generation, and prompt manipulation."
+    - content: "Network-level denial of service attacks"
+      isCorrect: false
+      explanation: "Metaprompts operate at the model behavior level and can't mitigate infrastructure-level attacks like denial of service."
+  - content: "You want to prevent your AI application from returning harmful content. What should you implement?"
+    choices:
+    - content: "Metaprompts only"
+      isCorrect: false
+      explanation: "While metaprompts help, they can be bypassed. Content safety filters provide an additional layer that evaluates both inputs and outputs against harmful content categories."
+    - content: "Content safety filters as part of a defense-in-depth approach"
+      isCorrect: true
+      explanation: "Content safety filters can be configured to detect and block harmful content categories. For best results, combine them with metaprompts and other controls for layered defense."
+    - content: "Application security best practices alone"
+      isCorrect: false
+      explanation: "General application security practices don't address the specific challenge of harmful AI-generated content. Content filters are purpose-built for this scenario."
+  - content: "Why is grounding an important security control for AI systems?"
+    choices:
+    - content: "It prevents all types of prompt injection attacks"
       isCorrect: false
-      explanation: "Metaprompts can't help mitigate overreliance on AI."
-    - content: "Jailbreaks"
+      explanation: "Grounding doesn't prevent prompt injection. It constrains the model to respond based on verified data, which reduces fabricated outputs and limits the scope of responses."
+    - content: "It reduces fabricated outputs by constraining responses to verified data sources"
       isCorrect: true
-      explanation: "A metaprompt is a set of natural language instructions that tell an AI system how to behave or not behave."
-  - content: "You want to restrict your AI application from returning content of a sexual or hateful nature, what would you implement to accomplish this goal?"
+      explanation: "Grounding connects AI responses to specific, verified data through techniques like RAG. This reduces the risk of fabricated outputs and helps keep responses within the intended scope."
+    - content: "It encrypts the model's training data"
+      isCorrect: false
+      explanation: "Grounding has nothing to do with encryption. It's about connecting model responses to verified, real-world data sources at inference time."
+  - content: "What is an AI-specific supply chain risk when adopting open-source AI libraries?"
     choices:
-    - content: "Metaprompts"
+    - content: "Open-source licenses are always incompatible with commercial use"
       isCorrect: false
-      explanation: "Metaprompts can't be configured to filter unwanted content."
-    - content: "Content safety filters"
+      explanation: "Many open-source licenses are compatible with commercial use. License compatibility should be verified but isn't an inherent blocker."
+    - content: "Pre-trained models included in libraries may contain backdoors or biased behavior that's hard to detect through code review"
       isCorrect: true
-      explanation: "Content safety filters can be configured to prevent an application from returning content of a sexual or hateful nature."
-    - content: "Application security best practices"
+      explanation: "Pre-trained models are a unique AI supply chain risk. A compromised model can contain backdoors that aren't visible in code review, making model provenance verification and scanning essential."
+    - content: "Open-source AI libraries can't be updated after deployment"
       isCorrect: false
-      explanation: "Application security best practices cannot prevent an AI application from returning content of a sexual or hateful nature."
-
+      explanation: "Open-source libraries can be updated like any other dependency. The challenge is that AI libraries evolve rapidly, and pinning to older versions may mean missing critical security patches."
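The revised quiz answers repeatedly stress layering controls: filter inputs and outputs, let the AI act only under the user's delegated permissions, and ground responses in authorized data. A toy sketch of how those checks might be ordered in an application; every function, string, and permission name here is a hypothetical stand-in, not an API from the module:

```python
def blocked_by_content_filter(text: str) -> bool:
    # Stand-in for a call to a real content safety service.
    return "harmful" in text.lower()

def user_can_access(user_permissions: set[str], resource: str) -> bool:
    # The AI acts with the user's delegated permissions, never its own.
    return resource in user_permissions

def answer(prompt: str, resource: str, user_permissions: set[str]) -> str:
    # Layer 1: input content filter.
    if blocked_by_content_filter(prompt):
        return "Request blocked by input filter."
    # Layer 2: access control under the user's identity.
    if not user_can_access(user_permissions, resource):
        return "Access denied: user lacks permission for this data."
    # Layer 3: grounded response, constrained to the authorized resource.
    response = f"Answer based only on {resource}."
    # Layer 4: output content filter before returning anything.
    if blocked_by_content_filter(response):
        return "Response blocked by output filter."
    return response

print(answer("Summarize sales", "sales-db", {"sales-db"}))
# Answer based only on sales-db.
```

The point of the ordering is that no single layer is trusted on its own: a prompt that slips past the input filter still hits the permission check, and a permitted request still has its output screened.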

learn-pr/advocates/ai-security-controls/9-summary.yml

Lines changed: 2 additions & 2 deletions
@@ -3,8 +3,8 @@ uid: learn.ai-security-controls.summary
 title: Summary
 metadata:
   title: Summary
-  description: A summary of information
-  ms.date: 03/06/2026
+  description: Summary of AI security controls covered in this module, including supply chain security, content filtering, data security, metaprompts, grounding, and monitoring.
+  ms.date: 04/24/2026
   author: ceperezb
   ms.author: ceperezb
   ms.topic: unit
