### YamlMime:ModuleUnit
uid: learn.dpu.introduction-microsoft-foundry-windows.knowledge-check
title: Knowledge check
metadata:
  title: Knowledge Check
  description: This content is part of the "Introduction to Microsoft Foundry on Windows" module.
  ms.date: 03/24/2026
  author: hhreyes21
  ms.author: v-homerreyes
  ms.topic: unit
durationInMinutes: 4
quiz:
  title: Check your knowledge
  questions:
  - content: "What is the primary purpose of Microsoft Foundry on Windows?"
    choices:
    - content: "To train large-scale AI models exclusively in the cloud"
      isCorrect: false
      explanation: "Incorrect. The module states that large-scale model training typically remains a cloud-based workload and is not the primary focus of Microsoft Foundry on Windows."
    - content: "To provide a unified platform for building AI-powered applications that run locally on Windows and support hybrid scenarios"
      isCorrect: true
      explanation: "Correct. The module defines Microsoft Foundry on Windows as a unified AI development platform for local and hybrid AI applications."
    - content: "To replace Windows operating system AI features"
      isCorrect: false
      explanation: "Incorrect. Microsoft Foundry on Windows builds on and integrates existing Windows AI capabilities rather than replacing them."
    - content: "To manage hardware drivers for CPUs, GPUs, and NPUs"
      isCorrect: false
      explanation: "Incorrect. While the platform abstracts hardware usage, its purpose is AI development, not hardware driver management."
  - content: "Which scenario best illustrates a key advantage of running AI workloads locally on a Windows device?"
    choices:
    - content: "Training very large models using massive distributed datasets"
      isCorrect: false
      explanation: "Incorrect. The module explains that large-scale model training is typically handled in the cloud."
    - content: "Processing sensitive data without sending it to cloud services"
      isCorrect: true
      explanation: "Correct. Local AI execution allows sensitive data to remain on the device, supporting privacy and regulatory requirements."
    - content: "Scaling AI workloads to unlimited compute capacity"
      isCorrect: false
      explanation: "Incorrect. Unlimited scalability is described as a strength of cloud AI, not local execution."
    - content: "Replacing all cloud-based analytics systems"
      isCorrect: false
      explanation: "Incorrect. The module emphasizes that local AI works alongside cloud AI rather than replacing it."
  - content: "What role does Windows ML play in Microsoft Foundry on Windows?"
    choices:
    - content: "Providing built-in task-based AI features such as OCR and text summarization"
      isCorrect: false
      explanation: "Incorrect. This describes the role of Windows AI APIs, not Windows ML."
    - content: "Enabling local execution of curated open-source models"
      isCorrect: false
      explanation: "Incorrect. This is the primary role of Foundry Local."
    - content: "Serving as a runtime for running custom AI models across diverse Windows hardware"
      isCorrect: true
      explanation: "Correct. Windows ML provides a runtime that abstracts hardware differences and supports CPUs, GPUs, and NPUs."
    - content: "Managing network connectivity for hybrid AI applications"
      isCorrect: false
      explanation: "Incorrect. Network connectivity is not described as a function of Windows ML."
  - content: "Which processors can Windows automatically use to run AI workloads based on device capabilities?"
    choices:
    - content: "Only CPUs"
      isCorrect: false
      explanation: "Incorrect. While CPUs are supported, the module explains that GPUs and NPUs are also used when available."
    - content: "Only GPUs and NPUs"
      isCorrect: false
      explanation: "Incorrect. CPUs are included as part of the supported hardware for AI workloads."
    - content: "CPUs, GPUs, and NPUs"
      isCorrect: true
      explanation: "Correct. Windows automatically selects from CPUs, GPUs, and NPUs based on the device configuration and workload."
    - content: "Dedicated cloud accelerators"
      isCorrect: false
      explanation: "Incorrect. The content focuses on on-device hardware rather than cloud-specific accelerators."