Skip to content

Commit e4b00fe

Browse files
Learn Build Service GitHub AppLearn Build Service GitHub App
authored andcommitted
Merging changes synced from https://github.com/MicrosoftDocs/learn-pr (branch live)
2 parents e55f1fd + 923ccc2 commit e4b00fe

42 files changed

Lines changed: 251 additions & 201 deletions

File tree

Some content is hidden

Large Commits have some content hidden by default. Use the searchbox below for content that may be hidden.

learn-pr/advocates/moderate-content-detect-harm-azure-ai-content-safety-studio/1-introduction.yml

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -6,7 +6,7 @@ metadata:
66
description: Learn how to choose and configure generative AI guardrails in Azure AI Foundry.
77
author: aprilgittens
88
ms.author: apspeigh
9-
ms.date: 08/29/2024
9+
ms.date: 04/20/2026
1010
ms.update-cycle: 180-days
1111
ms.topic: unit
1212
ms.collection:

learn-pr/advocates/moderate-content-detect-harm-azure-ai-content-safety-studio/10-knowledge-check.yml

Lines changed: 8 additions & 8 deletions
Original file line numberDiff line numberDiff line change
@@ -6,7 +6,7 @@ metadata:
66
description: Validate your understanding of the key concepts covered in this module.
77
author: aprilgittens
88
ms.author: apspeigh
9-
ms.date: 08/29/2024
9+
ms.date: 04/20/2026
1010
ms.update-cycle: 180-days
1111
ms.topic: unit
1212
ms.collection:
@@ -32,28 +32,28 @@ quiz:
3232
- content: To measure the speed at which the model detects harmful content.
3333
isCorrect: false
3434
explanation: The Precision metric doesn't measure the speed of detection.
35-
- content: Which feature of Azure AI Content Safety is responsible for detecting and blocking incorrect information in model outputs?
35+
- content: Which Azure AI Content Safety feature helps determine whether a model response is supported by the source material that you provided?
3636
choices:
3737
- content: Text moderation
3838
isCorrect: false
39-
explanation: Text moderation detects harmful content in text.
40-
- content: Prompt shields
39+
explanation: Text moderation detects harmful content in text. It doesn't evaluate whether a model completion is supported by grounding sources.
40+
- content: Prompt Shields
4141
isCorrect: false
42-
explanation: To detect user prompt attacks and document attacks, prompt shields analyze large language model (LLM) inputs.
42+
explanation: Prompt Shields analyzes user prompts and documents for direct and indirect prompt attacks.
4343
- content: Image moderation
4444
isCorrect: false
45-
explanation: Image moderation analyzes images to identify and block offensive content.
45+
explanation: Image moderation analyzes images for harmful content. It doesn't measure whether a text completion is grounded in source material.
4646
- content: Groundedness detection
4747
isCorrect: true
48-
explanation: The groundedness detection feature detects and blocks incorrect information in model outputs, to help ensure that the text responses are factual and accurate based on the provided source materials.
48+
explanation: Groundedness detection evaluates whether a model completion is grounded in the source material that you supplied.
4949
- content: What is the purpose of the F1 score metric?
5050
choices:
5151
- content: To measure the total volume of harmful content that the model identifies.
5252
isCorrect: false
5353
explanation: The F1 score metric doesn't measure the total volume of harmful content.
5454
- content: To measure the balance between **Precision** and **Recall**.
5555
isCorrect: true
56-
explanation: The F1 score metric in Content Safety is used when there's a need to balance Precision (the accuracy of identified harmful content) and Recall (the model's ability to identify actual harmful content).
56+
explanation: The F1 score in Content Safety is the harmonic mean of Precision (the accuracy of identified harmful content) and Recall (the model's ability to identify actual harmful content), so it gives you a single number that balances the two.
5757
- content: To measure the speed at which the model detects harmful content.
5858
isCorrect: false
5959
explanation: The F1 score metric doesn't measure the speed of detection.

learn-pr/advocates/moderate-content-detect-harm-azure-ai-content-safety-studio/11-summary.yml

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -6,7 +6,7 @@ metadata:
66
description: Summarize what you learned in this module about Azure AI Content Safety.
77
author: aprilgittens
88
ms.author: apspeigh
9-
ms.date: 08/29/2024
9+
ms.date: 04/20/2026
1010
ms.update-cycle: 180-days
1111
ms.topic: unit
1212
ms.collection:

learn-pr/advocates/moderate-content-detect-harm-azure-ai-content-safety-studio/2-content-safety-studio.yml

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -6,7 +6,7 @@ metadata:
66
description: Learn about the Azure AI Content Safety service and how it can help you moderate content.
77
author: aprilgittens
88
ms.author: apspeigh
9-
ms.date: 08/29/2024
9+
ms.date: 04/20/2026
1010
ms.update-cycle: 180-days
1111
ms.topic: unit
1212
ms.collection:

learn-pr/advocates/moderate-content-detect-harm-azure-ai-content-safety-studio/3-prepare.yml

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -6,7 +6,7 @@ metadata:
66
description: Learn how to prepare for the content moderation process in Azure AI Content Safety.
77
author: aprilgittens
88
ms.author: apspeigh
9-
ms.date: 08/29/2024
9+
ms.date: 04/20/2026
1010
ms.update-cycle: 180-days
1111
ms.topic: unit
1212
ms.collection:

learn-pr/advocates/moderate-content-detect-harm-azure-ai-content-safety-studio/4-harm-categories-severity-levels.yml

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -6,7 +6,7 @@ metadata:
66
description: Learn about the harm categories and severity levels in Azure AI Content Safety.
77
author: aprilgittens
88
ms.author: apspeigh
9-
ms.date: 08/29/2024
9+
ms.date: 04/20/2026
1010
ms.update-cycle: 180-days
1111
ms.topic: unit
1212
ms.collection:

learn-pr/advocates/moderate-content-detect-harm-azure-ai-content-safety-studio/5-exercise-text-moderation.yml

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -6,7 +6,7 @@ metadata:
66
description: Learn how to configure text guardrails by using Azure AI Content Safety.
77
author: aprilgittens
88
ms.author: apspeigh
9-
ms.date: 08/29/2024
9+
ms.date: 04/20/2026
1010
ms.update-cycle: 180-days
1111
ms.topic: unit
1212
ms.collection:

learn-pr/advocates/moderate-content-detect-harm-azure-ai-content-safety-studio/6-exercise-image-moderation.yml

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -6,7 +6,7 @@ metadata:
66
description: Learn how to configure image guardrails by using Azure AI Content Safety.
77
author: aprilgittens
88
ms.author: apspeigh
9-
ms.date: 08/29/2024
9+
ms.date: 04/20/2026
1010
ms.update-cycle: 180-days
1111
ms.topic: unit
1212
ms.collection:

learn-pr/advocates/moderate-content-detect-harm-azure-ai-content-safety-studio/7-exercise-groundedness-detection.yml

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -6,7 +6,7 @@ metadata:
66
description: Learn how to detect groundedness in content by using Azure AI Content Safety.
77
author: aprilgittens
88
ms.author: apspeigh
9-
ms.date: 08/29/2024
9+
ms.date: 04/20/2026
1010
ms.update-cycle: 180-days
1111
ms.topic: unit
1212
ms.collection:

learn-pr/advocates/moderate-content-detect-harm-azure-ai-content-safety-studio/8-exercise-prompt-shields.yml

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -6,7 +6,7 @@ metadata:
66
description: Learn how to use prompt shields to moderate content by using Azure AI Content Safety.
77
author: aprilgittens
88
ms.author: apspeigh
9-
ms.date: 08/29/2024
9+
ms.date: 04/20/2026
1010
ms.update-cycle: 180-days
1111
ms.topic: unit
1212
ms.collection:

0 commit comments

Comments
 (0)