Commit da488b9

Merge pull request #1144 from MicrosoftDocs/fix/windows-ai-comparison-rewrite
Rewrite Windows AI comparison page — Copilot+ terminology decoder + modern scenarios
2 parents 3b1bd48 + ca980f2 commit da488b9

2 files changed: 152 additions & 16 deletions

docs/TOC.yml (2 additions, 2 deletions)
```diff
@@ -3,6 +3,8 @@ items:
 items:
 - name: Overview
   href: overview.md
+- name: Choose your AI solution
+  href: ./windows-ai-comparison.md
 - name: Ready-to-use AI
   items:
   - name: Large Language Models
@@ -1138,7 +1140,5 @@ items:
 - name: Release notes
   href: windows-ml/release-notes.md
   items:
-- name: Compare AI solutions
-  href: ./windows-ai-comparison.md
 - name: FAQ
   href: windows-ml/faq.yml
```

docs/windows-ai-comparison.md (150 additions, 14 deletions)
````diff
@@ -1,27 +1,163 @@
 ---
-title: Chose your Windows AI solution
-description: Get help choosing which Microsoft AI solution is right for your application.
-ms.date: 02/13/2025
+title: Choose your Windows AI solution
+description: Scenario-based guidance for choosing between Windows AI APIs, Foundry Local, Windows ML, and Azure AI — including a terminology reference to decode the Windows AI landscape.
+ms.date: 03/18/2026
 ms.topic: article
-keywords: windows 10, windows ai, windows ml, winml, windows machine learning, microsoft ai, compare, comparison, windows vision skills, Direct ML
+keywords: windows ai, phi silica, foundry local, windows ml, copilot plus, windows ai apis, which api, choose
+no-loc: [Microsoft Foundry on Windows, Windows AI APIs, Foundry Local, Windows ML, Phi Silica, Copilot+]
 ---
 
-# Which AI solution is right for me?
+# Choose your Windows AI solution
 
-Microsoft offers several different AI solutions, which means you have several options at your disposal. But how do you choose which one to use for your application? Let's break it down.
+The Windows AI landscape has grown quickly and the terminology can be hard to navigate. This page cuts through it: find your scenario, pick your starting point.
 
-### I want to integrate a machine learning model into my application and run it on the device by taking full advantage of hardware acceleration
+> [!TIP]
+> The #1 question to answer first: **does your app need to run on all Windows hardware, or only Copilot+ PCs?**
+> - **All Windows hardware** → [Foundry Local](#foundry-local) or [Windows ML](#windows-ml)
+> - **Copilot+ PCs only** → [Windows AI APIs](#windows-ai-apis--copilot-pcs) (faster, zero setup)
+> - **Cloud or hybrid** → [Azure AI](#azure-ai--cloud--hybrid)
 
-[Windows Machine Learning](windows-ml/overview.md) is the right choice for you. These high-level WinRT APIs work on Windows 10 applications (UWP, desktop) and evaluate models directly on the device. You can even choose to take advantage of the device's GPU (if it has one) for better performance.
+## Decision table
 
-### I want to have fuller control over resource utilization during model execution for high-intensive applications
+| I want to… | Recommended option | Hardware required |
+|---|---|---|
+| Add a chat or summarization feature to my app with minimal code | [Windows AI APIs — Phi Silica](./apis/phi-silica.md) | Copilot+ PC |
+| Generate, describe, or manipulate images on-device | [Windows AI APIs — Imaging](./apis/imaging.md) | Copilot+ PC |
+| Run an LLM locally on any Windows PC (no cloud, no NPU needed) | [Foundry Local](./foundry-local/get-started.md) | Windows 10+ (CPU/GPU) |
+| On-device speech-to-text on any hardware | [Foundry Local — Whisper](./foundry-local/get-started.md) | Windows 10+ |
+| Run a custom or Hugging Face model with hardware acceleration | [Windows ML](./new-windows-ml/overview.md) | Windows 10+ (CPU/GPU/NPU) |
+| Low-level GPU/NPU control for a game or graphics-intensive app | [DirectML](./directml/dml.md) | Windows 10+ (GPU) |
+| Use large cloud-hosted models (GPT-4o, DALL-E, etc.) | [Azure AI](./cloud-ai.md) | Any (internet required) |
+| Combine on-device and cloud AI, fall back gracefully | [Foundry Local + Azure AI](./cloud-ai.md) | Windows 10+ |
+| Semantic in-app search over documents or images | [Windows AI APIs — Semantic Search](./apis/app-content-search.md) | Copilot+ PC |
+| Fine-tune Phi Silica on my own data | [Windows AI APIs — LoRA](./apis/phi-silica-lora.md) | Copilot+ PC |
````

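The decision table in the diff above reduces to a short routing procedure: cloud requirement first, then hardware reach, then whether you bring your own model. A minimal sketch of that logic, with entirely hypothetical names (it mirrors the table, not any real SDK):

```python
def recommend(cloud_model: bool, all_hardware: bool, custom_model: bool) -> str:
    """Route to a Windows AI option; distilled from the decision table above."""
    if cloud_model:
        return "Azure AI"         # hosted frontier models, internet required
    if not all_hardware:
        return "Windows AI APIs"  # Copilot+ PCs only, least code
    if custom_model:
        return "Windows ML"       # bring your own ONNX model, CPU/GPU/NPU
    return "Foundry Local"        # ready-to-use local models on any PC

# For example, a local LLM feature that must run on any Windows PC:
choice = recommend(cloud_model=False, all_hardware=True, custom_model=False)
```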
````diff
-[DirectML](/windows/desktop/direct3d12/dml) is what you want. These DirectX-style APIs provide a programming paradigm that will feel familiar to C++ game developers, and allow you to take full advantage of the hardware.
+## Options in detail
 
-### I want to train, test, and deploy ML models with a framework that is familiar to a .NET developer
+### Windows AI APIs — Copilot+ PCs
 
-Check out [ML.NET](https://dotnet.microsoft.com/apps/machinelearning-ai/ml-dotnet), a machine learning framework built for .NET developers.
+**Best for**: New Windows features with the least code, on Copilot+ PCs.
 
-### I want to leverage the power of the Azure cloud for training and deploying ML models
+Windows AI APIs are part of the Windows App SDK and give packaged apps access to on-device AI models that are installed and maintained by Windows — you don't ship the model, don't manage its lifecycle, and it's shared across apps. Inference runs on the NPU for near-instant, private, battery-efficient results.
 
-See [What are the machine learning products at Microsoft?](/azure/architecture/data-guide/technology-choices/data-science-and-machine-learning) for a comprehensive list of the solutions available from Microsoft, including many products and services that run on Azure.
+Available AI capabilities include: text generation (Phi Silica), image description, image generation, image segmentation, image super-resolution, object erase, OCR, semantic search, and video super-resolution. See the [full API list](./apis/index.md).
+
+**Requirements**: Copilot+ PC (NPU with 40+ TOPS — Snapdragon X series, Intel Core Ultra 200V series, AMD Ryzen AI 300 series). Your app must be packaged (MSIX or sparse identity).
+
+> [!div class="nextstepaction"]
+> [Get started with Windows AI APIs](./apis/get-started.md)
+
+---
+
````
````diff
+### Foundry Local
+
+**Best for**: LLM or speech-to-text features on any Windows hardware (no Copilot+ required).
+
+Foundry Local runs 20+ open-source models locally via an OpenAI-compatible REST API. You can reuse existing AI code with zero changes — just point your `OpenAIClient` at your locally running Foundry Local OpenAI-compatible endpoint instead of the cloud (see the Foundry Local docs for endpoint details). Models are managed by Foundry Local (not your app), and run on CPU, GPU, or NPU depending on what the device has.
+
+```bash
+winget install Microsoft.AIFoundry.Local
+foundry model run phi-4-mini
+```
+
+**Requirements**: Windows 10 and later. Performance scales with available GPU/NPU.
+
+> [!div class="nextstepaction"]
+> [Get started with Foundry Local](./foundry-local/get-started.md)
+
+---
+
````
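The Foundry Local section above says existing OpenAI-style client code can simply be re-pointed at the local endpoint. A minimal sketch of what such a request looks like, using only the Python standard library; the base URL here is an assumption for illustration (Foundry Local assigns the actual port at startup, so consult its docs for the real endpoint):

```python
import json
import urllib.request

# Hypothetical endpoint; the real port is assigned by Foundry Local at startup.
BASE_URL = "http://localhost:5273/v1"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a standard OpenAI-style chat-completions request.

    The payload shape is the common OpenAI chat format; only the base URL
    differs between Foundry Local and a cloud endpoint.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("phi-4-mini", "Summarize Windows ML in one sentence.")
# urllib.request.urlopen(req) would send it once `foundry model run phi-4-mini`
# is serving locally; it is not executed here.
```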
````diff
+### Windows ML
+
+**Best for**: Running custom, fine-tuned, or Hugging Face models with full control over the inference pipeline.
+
+Windows ML (the new ONNX Runtime-based version) lets you bring your own model in ONNX format and run it locally with hardware acceleration on CPU, GPU, or NPU. It's the right choice when the ready-to-use models in Windows AI APIs and Foundry Local don't cover your scenario, or when you need to integrate a proprietary model.
+
+> [!NOTE]
+> There are two APIs named "Windows ML." The **legacy WinRT Windows ML** is the older, inbox API available since Windows 10 1809. The **new Windows ML (NuGet)** is the current ONNX Runtime-based version with better performance and hardware support. New projects should use the [new Windows ML](./new-windows-ml/overview.md).
+
+**Requirements**: Windows 10 and later. Model compatibility and performance vary by hardware.
+
+> [!div class="nextstepaction"]
+> [Get started with Windows ML](./new-windows-ml/get-started.md)
+
+---
+
+### Azure AI (cloud and hybrid)
+
+**Best for**: Large frontier models (GPT-4o, DALL-E, Claude, Gemini), scenarios requiring the latest model capabilities, or apps where local hardware can't be assumed.
+
+Azure AI gives you access to the full range of hosted AI models via standard REST APIs. It's the right choice when your scenario requires capabilities beyond what on-device models offer, or when you're building a service rather than a client app.
+
+You can combine Azure AI with Foundry Local in the same app — running fast, private inference locally when the device supports it, and falling back to Azure when it doesn't.
+
+> [!div class="nextstepaction"]
+> [Use cloud AI with Windows apps](./cloud-ai.md)
+
+---
+
+## Terminology decoder
+
+The Windows AI space has gone through rapid rebranding. Here's a translation table for terms you might encounter:
+
+| Term you've seen | What it means now |
+|---|---|
+| **Copilot Runtime APIs** | Old name (2024) for **Windows AI APIs**. Same functionality, renamed. |
+| **Windows Copilot Runtime** | Old umbrella term for the AI features now called **Microsoft Foundry on Windows**. |
+| **Microsoft Foundry on Windows** | Current umbrella brand covering Windows AI APIs + Foundry Local + Windows ML. |
+| **Windows AI Foundry** | Informal shorthand for Microsoft Foundry on Windows. Not the same as **Azure AI Foundry**. |
+| **Azure AI Foundry** | Microsoft's cloud-based AI platform. Different product, different team, similar name. |
+| **Phi Silica** | The specific Phi model optimized for and built into Windows on Copilot+ PCs. Accessed via Windows AI APIs. |
+| **Phi** (general) | Microsoft's family of small language models. Phi-4-mini etc. are available on Azure AI and via Foundry Local. Phi Silica is the NPU-optimized Windows inbox version. |
+| **Windows ML** (old) | Legacy WinRT-based inference API, inbox since Windows 10 1809. Still works; no new investment. |
+| **Windows ML** (new) | New ONNX Runtime-based NuGet package. Current and actively developed. |
+| **DirectML** | Low-level DirectX 12 ML API for GPU/NPU acceleration. Used internally by Windows ML and ONNX Runtime; direct use is for advanced scenarios. |
+| **Copilot+ PC** | A PC category defined by hardware: NPU with 40+ TOPS, 16GB+ RAM, specific SoCs. Required for Windows AI APIs; not required for Foundry Local or Windows ML. |
+| **NPU** | Neural Processing Unit — dedicated AI acceleration hardware in Copilot+ PCs. Windows AI APIs route inference through the NPU automatically. |
+
+## Combine options in the same app
+
+These options aren't mutually exclusive. A typical pattern for a resilient AI feature:
+
+```csharp
+// 1. Try Windows AI APIs (fastest — Copilot+ only)
+var readyState = LanguageModel.GetReadyState();
+if (readyState == AIFeatureReadyState.EnsureNeeded)
+{
+    var deploymentResult = await LanguageModel.EnsureReadyAsync();
+    if (deploymentResult.Status == PackageDeploymentStatus.CompletedSuccess)
+    {
+        readyState = LanguageModel.GetReadyState();
+    }
+    else
+    {
+        // Optional: inspect deploymentResult.ExtendedError for diagnostics.
+        // Treat as unavailable so we fall through to Foundry/Azure.
+        readyState = AIFeatureReadyState.NotSupportedOnCurrentSystem;
+    }
+}
+
+if (readyState != AIFeatureReadyState.NotSupportedOnCurrentSystem)
+{
+    // Use Phi Silica via Windows AI APIs
+    using LanguageModel languageModel = await LanguageModel.CreateAsync();
+}
+// 2. Fall back to Foundry Local (any hardware)
+else if (await foundryClient.IsModelAvailableAsync("phi-4-mini"))
+{
+    // Use Foundry Local OpenAI-compatible API
+}
+// 3. Fall back to Azure AI (always available)
+else
+{
+    // Use Azure OpenAI
+}
+```
+
+This pattern gives Copilot+ users the best experience while keeping the feature working on all hardware.
+
````
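The C# chain in the diff above is Windows-specific, but the shape of the fallback is portable. A language-neutral sketch in Python, where the boolean probes stand in for the real readiness checks (they are illustrative, not real APIs):

```python
def pick_backend(npu_ready: bool, local_available: bool) -> str:
    """Three-tier fallback mirroring the C# pattern above:
    Windows AI APIs (NPU) -> Foundry Local (any hardware) -> Azure AI (cloud).
    """
    if npu_ready:            # stands in for LanguageModel.GetReadyState()
        return "windows-ai-apis"
    if local_available:      # stands in for a Foundry Local availability probe
        return "foundry-local"
    return "azure-ai"        # always reachable when the network is
```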
````diff
+## Still unsure?
+
+- [Microsoft Foundry on Windows overview](./overview.md) — decision tree and full comparison table
+- [AI Dev Gallery](./ai-dev-gallery/index.md) — browse working samples for every API
+- [Stack Overflow — windows-ai tag](https://stackoverflow.com/questions/tagged/windows-ai)
````
