
Commit c1bb09f

Merge pull request #53933 from GraemeMalcolm/main

Updates

2 parents 8f24eb5 + ee8f451

14 files changed: 84 additions & 138 deletions

learn-pr/wwl-data-ai/foundry-sdk/07-knowledge-check.yml

Lines changed: 15 additions & 37 deletions
@@ -15,39 +15,17 @@ durationInMinutes: 5
 quiz:
   title: Check your knowledge
   questions:
-  - content: What is the primary purpose of the project client in the Microsoft Foundry SDK?
+  - content: Which endpoint offers the broadest support for OpenAI APIs with Foundry Models?
     choices:
-    - content: To generate AI responses using the Responses API
+    - content: The Foundry project endpoint
       isCorrect: false
-      explanation: Incorrect. The OpenAI-compatible client is used to generate AI responses.
-    - content: To access Foundry-native operations like listing connections and managing project properties
+      explanation: Incorrect. The Foundry Project endpoint is used for Foundry-native operations, and OpenAI API support for the Responses API with Foundry direct models.
+    - content: The Azure OpenAI endpoint
       isCorrect: true
-      explanation: Correct. The project client provides access to Foundry-native operations that don't have OpenAI equivalents.
-    - content: To deploy new AI models to the project
+      explanation: Correct. The Azure OpenAI endpoint provides broad support for OpenAI APIs with Foundry Models.
+    - content: The Foundry Tools endpoint
       isCorrect: false
-      explanation: Incorrect. Model deployment is done through the Foundry portal, not the SDK.
-  - content: Which method do you use to generate responses with the Responses API?
-    choices:
-    - content: client.chat.completions.create()
-      isCorrect: false
-      explanation: Incorrect. This is the older chat completions API method.
-    - content: client.responses.create()
-      isCorrect: true
-      explanation: Correct. The Responses API uses the responses.create() method.
-    - content: client.generate.response()
-      isCorrect: false
-      explanation: Incorrect. This method doesn't exist in the SDK.
-  - content: What does the previous_response_id parameter do in the Responses API?
-    choices:
-    - content: It links responses together to maintain conversation context
-      isCorrect: true
-      explanation: Correct. The previous_response_id parameter maintains conversation context across multiple API calls.
-    - content: It retrieves an older response from the database
-      isCorrect: false
-      explanation: Incorrect. Use the responses.retrieve() method to retrieve previous responses.
-    - content: It deletes previous responses to save storage space
-      isCorrect: false
-      explanation: Incorrect. The parameter doesn't delete responses.
+      explanation: Incorrect. Foundry Tools do not provide broad support for OpenAI APIs with Foundry Models.
   - content: Which package must you install to use the Microsoft Foundry SDK in Python?
     choices:
     - content: Package `azure-foundry`
@@ -59,14 +37,14 @@ quiz:
     - content: Package `microsoft-foundry-sdk`
       isCorrect: false
       explanation: Incorrect. This package doesn't exist.
-  - content: What advantage does the Responses API offer over the ChatCompletions API?
+  - content: Which method do you use to generate responses with the Responses API?
     choices:
-    - content: It works only with Azure OpenAI models
+    - content: client.chat.completions.create()
       isCorrect: false
-      explanation: Incorrect. The Responses API works with both Azure OpenAI and Foundry direct models.
-    - content: It provides stateful, multi-turn conversation support
-      isCorrect: true
-      explanation: Correct. The Responses API maintains conversation context and provides stateful interactions.
-    - content: It requires less authentication configuration
+      explanation: Incorrect. This is the older chat completions API method.
+    - content: client.get_response_id()
       isCorrect: false
-      explanation: Incorrect. Both APIs require the same authentication approach.
+      explanation: Incorrect. This method doesn't generate a response.
+    - content: client.responses.create()
+      isCorrect: true
+      explanation: Correct. The Responses API uses the responses.create() method.
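The quiz's correct answers above center on `client.responses.create()` and (in the removed questions) the `previous_response_id` parameter that chains responses into a conversation. The sketch below illustrates how those pieces fit together; it is a minimal illustration only, assuming the OpenAI Python SDK's Responses API — `build_responses_kwargs` is a hypothetical helper, and the model name and response id are placeholders, not values from this commit.

```python
# Hedged sketch of the Responses API usage the quiz tests.
# Assumption: the OpenAI Python SDK ("openai" package) is the client in
# question; build_responses_kwargs is a hypothetical helper, and the
# model name / response id below are placeholders.
from typing import Optional


def build_responses_kwargs(
    model: str,
    user_input: str,
    previous_response_id: Optional[str] = None,
) -> dict:
    """Assemble keyword arguments for client.responses.create().

    Passing previous_response_id links this call to an earlier response,
    which is how the Responses API maintains multi-turn conversation
    context across calls.
    """
    kwargs = {"model": model, "input": user_input}
    if previous_response_id is not None:
        kwargs["previous_response_id"] = previous_response_id
    return kwargs


# First turn: nothing to link to yet.
first = build_responses_kwargs("gpt-4o", "Suggest a hike near Seattle.")

# Follow-up turn: link to the first response's id to keep context.
follow_up = build_responses_kwargs(
    "gpt-4o", "How long is it?", previous_response_id="resp_abc123"
)

# With a configured client you would then call, for example:
#   response = client.responses.create(**follow_up)
print(first)
print(follow_up["previous_response_id"])
```

Note the contrast the quiz draws: `client.chat.completions.create()` is the older chat-completions method, while `responses.create()` is the Responses API entry point that accepts `previous_response_id`.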

learn-pr/wwl-data-ai/responsible-ai-studio/1-introduction.yml

Lines changed: 1 addition & 2 deletions
@@ -6,11 +6,10 @@ metadata:
   description: Introduce responsible generative AI.
   author: ivorb
   ms.author: berryivor
-  ms.date: 02/26/2026
+  ms.date: 03/22/2026
   ms.topic: unit
   ms.collection:
   - wwl-ai-copilot
 durationInMinutes: 1
 content: |
   [!include[](includes/1-introduction.md)]
-

learn-pr/wwl-data-ai/responsible-ai-studio/2-plan-responsible-ai.yml

Lines changed: 1 addition & 2 deletions
@@ -6,11 +6,10 @@ metadata:
   description: Describe a practical process for responsible generative AI development.
   author: ivorb
   ms.author: berryivor
-  ms.date: 02/26/2026
+  ms.date: 03/22/2026
   ms.topic: unit
   ms.collection:
   - wwl-ai-copilot
 durationInMinutes: 2
 content: |
   [!include[](includes/2-plan-responsible-ai.md)]
-

learn-pr/wwl-data-ai/responsible-ai-studio/3-identify-harms.yml

Lines changed: 1 addition & 2 deletions
@@ -6,11 +6,10 @@ metadata:
   description: Identify, prioritize, validate and document potential harms in a responsible AI solution.
   author: ivorb
   ms.author: berryivor
-  ms.date: 02/26/2026
+  ms.date: 03/22/2026
   ms.topic: unit
   ms.collection:
   - wwl-ai-copilot
 durationInMinutes: 5
 content: |
   [!include[](includes/3-identify-harms.md)]
-

learn-pr/wwl-data-ai/responsible-ai-studio/4-measure-harms.yml

Lines changed: 1 addition & 2 deletions
@@ -6,11 +6,10 @@ metadata:
   description: Measure the presence of potential harm in a responsible AI solution.
   author: ivorb
   ms.author: berryivor
-  ms.date: 02/26/2026
+  ms.date: 03/22/2026
   ms.topic: unit
   ms.collection:
   - wwl-ai-copilot
 durationInMinutes: 5
 content: |
   [!include[](includes/4-measure-harms.md)]
-

learn-pr/wwl-data-ai/responsible-ai-studio/5-mitigate-harms.yml

Lines changed: 1 addition & 2 deletions
@@ -6,11 +6,10 @@ metadata:
   description: Mitigate potential harms at multiple levels of a responsible AI solution.
   author: ivorb
   ms.author: berryivor
-  ms.date: 02/26/2026
+  ms.date: 03/22/2026
   ms.topic: unit
   ms.collection:
   - wwl-ai-copilot
 durationInMinutes: 5
 content: |
   [!include[](includes/5-mitigate-harms.md)]
-

learn-pr/wwl-data-ai/responsible-ai-studio/6-operate-responsibly.yml

Lines changed: 1 addition & 2 deletions
@@ -6,11 +6,10 @@ metadata:
   description: Release and operate a generative AI solution responsibly.
   author: ivorb
   ms.author: berryivor
-  ms.date: 02/26/2026
+  ms.date: 03/22/2026
   ms.topic: unit
   ms.collection:
   - wwl-ai-copilot
 durationInMinutes: 3
 content: |
   [!include[](includes/6-operate-responsibly.md)]
-
Lines changed: 5 additions & 7 deletions
@@ -1,17 +1,15 @@
 ### YamlMime:ModuleUnit
 uid: learn.wwl.responsible-ai-studio.exercise-content-filters
-title: Exercise - Apply content filters to prevent the output of harmful content
+title: Exercise - Apply guardrails to prevent the output of harmful content
 metadata:
-  title: Exercise - Apply content filters to prevent the output of harmful content
-  description: Hands-on exercise to explore content filters in Microsoft Foundry Portal.
+  title: Exercise - Apply guardrails to prevent the output of harmful content
+  description: Hands-on exercise to explore guardrails in Microsoft Foundry Portal.
   author: ivorb
   ms.author: berryivor
-  ms.date: 02/26/2026
+  ms.date: 03/22/2026
   ms.topic: unit
   ms.collection:
   - wwl-ai-copilot
-durationInMinutes: 25
+durationInMinutes: 20
 content: |
   [!include[](includes/7-exercise-content-filters.md)]
-
-

learn-pr/wwl-data-ai/responsible-ai-studio/8-knowledge-check.yml

Lines changed: 36 additions & 38 deletions
@@ -6,47 +6,45 @@ metadata:
   description: Check your understanding of the content presented in this module.
   author: ivorb
   ms.author: berryivor
-  ms.date: 02/26/2026
+  ms.date: 03/22/2026
   ms.topic: unit
   ms.collection:
   - wwl-ai-copilot
 module_assessment: true
 durationInMinutes: 3
 content: |
-quiz:
-  questions:
-  - content: "Why should you consider creating an AI Impact Assessment when designing a generative AI solution?"
-    choices:
-    - content: "To make a legal case that indemnifies you from responsibility for harms caused by the solution"
-      isCorrect: false
-      explanation: "An AI Impact Assessment does not indemnify you from responsibility for harm"
-    - content: "To document the purpose, expected use, and potential harms for the solution"
-      isCorrect: true
-      explanation: "An AI Impact Assessment guide documents the expected use of the system and helps identify potential harms."
-    - content: "To evaluate the cost of cloud services required to implement your solution"
-      isCorrect: false
-      explanation: "An AI Impact Assessment typically doesn't include evaluations of cloud service costs."
-  - content: "What capability of Microsoft Foundry helps mitigate harmful content generation at the Safety System level?"
-    choices:
-    - content: "DALL-E model support"
-      isCorrect: false
-      explanation: "DALL-E models are used to generate images."
-    - content: "Fine-tuning"
-      isCorrect: false
-      explanation: "Fine-tuning is used to customize models, and provides mitigation at the Model layer"
-    - content: "Content filters"
-      isCorrect: true
-      explanation: "Content filters enable you to suppress harmful content at the Safety System layer."
-  - content: "Why should you consider a phased delivery plan for your generative AI solution?"
-    choices:
-    - content: "To enable you to gather feedback and identify issues before releasing the solution more broadly"
-      isCorrect: true
-      explanation: "An initial release to a restricted user base enables you to minimize harm by gather feedback and identifying issues before broad release."
-    - content: "To eliminate the need to map, measure, mitigate, and manage potential harms"
-      isCorrect: false
-      explanation: "A phased delivery plan doesn't eliminate the need to identify, measure, and mitigate potential harms."
-    - content: "To enable you to charge more for the solution"
-      isCorrect: false
-      explanation: "The goal of a phased delivery plan is to reduce potential harm, not to maximize revenue."
-
-
+quiz:
+  questions:
+  - content: "Why should you consider creating an AI Impact Assessment when designing a generative AI solution?"
+    choices:
+    - content: "To make a legal case that indemnifies you from responsibility for harms caused by the solution"
+      isCorrect: false
+      explanation: "An AI Impact Assessment does not indemnify you from responsibility for harm"
+    - content: "To document the purpose, expected use, and potential harms for the solution"
+      isCorrect: true
+      explanation: "An AI Impact Assessment guide documents the expected use of the system and helps identify potential harms."
+    - content: "To evaluate the cost of cloud services required to implement your solution"
+      isCorrect: false
+      explanation: "An AI Impact Assessment typically doesn't include evaluations of cloud service costs."
+  - content: "What capability of Microsoft Foundry helps mitigate harmful content generation at the Safety System level?"
+    choices:
+    - content: "DALL-E model support"
+      isCorrect: false
+      explanation: "DALL-E models are used to generate images."
+    - content: "Fine-tuning"
+      isCorrect: false
+      explanation: "Fine-tuning is used to customize models, and provides mitigation at the Model layer"
+    - content: "Guardrails"
+      isCorrect: true
+      explanation: "Guardrails enable you to suppress harmful content at the Safety System layer."
+  - content: "Why should you consider a phased delivery plan for your generative AI solution?"
+    choices:
+    - content: "To enable you to gather feedback and identify issues before releasing the solution more broadly"
+      isCorrect: true
+      explanation: "An initial release to a restricted user base enables you to minimize harm by gather feedback and identifying issues before broad release."
+    - content: "To eliminate the need to map, measure, mitigate, and manage potential harms"
+      isCorrect: false
+      explanation: "A phased delivery plan doesn't eliminate the need to identify, measure, and mitigate potential harms."
+    - content: "To enable you to charge more for the solution"
+      isCorrect: false
+      explanation: "The goal of a phased delivery plan is to reduce potential harm, not to maximize revenue."

learn-pr/wwl-data-ai/responsible-ai-studio/9-summary.yml

Lines changed: 1 addition & 2 deletions
@@ -6,11 +6,10 @@ metadata:
   description: Summarize a responsible approach for generative AI.
   author: ivorb
   ms.author: berryivor
-  ms.date: 02/26/2026
+  ms.date: 03/22/2026
   ms.topic: unit
   ms.collection:
   - wwl-ai-copilot
 durationInMinutes: 1
 content: |
   [!include[](includes/9-summary.md)]
-
