
Commit 078cfca

Updating WinML docs to focus on self-contained (#1147)
* Updating WinML to target the WASDK component package
* Updates to overview
* Review updates
* Apply suggestions from code review

Co-authored-by: Copilot <[email protected]>
1 parent 7f56269 commit 078cfca

5 files changed: 27 additions & 23 deletions


docs/new-windows-ml/distributing-your-app.md

Lines changed: 3 additions & 3 deletions
@@ -1,7 +1,7 @@
 ---
 title: Deploy your app that uses Windows ML
 description: Learn how to deploy your app that uses Windows Machine Learning (ML).
-ms.date: 03/10/2026
+ms.date: 03/20/2026
 ms.topic: concept-article
 ---

@@ -11,11 +11,11 @@ Windows ML is deployed like any other Windows App SDK component, supporting both
 
 ## Framework-dependent
 
-Your app depends on the Windows App SDK runtime and/or framework package being present on the target machine. Framework-dependent deployment is the default deployment mode of the Windows App SDK for its efficient use of machine resources and serviceability. See [Deployment architecture and overview for framework-dependent apps](/windows/apps/windows-app-sdk/deployment-architecture) for more details.
+Your app depends on the Windows App SDK runtime and/or framework package being present on the target machine. This minimizes the dependencies your app has to carry (by using a shared system-wide copy of the Windows App SDK runtime), and enables evergreen servicing without you needing to release an update to your app. When targeting the main [`Microsoft.WindowsAppSDK` NuGet package](https://www.nuget.org/packages/Microsoft.WindowsAppSDK), framework-dependent deployment is the default deployment mode. See [Deployment architecture and overview for framework-dependent apps](/windows/apps/windows-app-sdk/deployment-architecture) for more details.
 
 ## Self-contained
 
-Your app carries the Windows App SDK dependencies with it. See [Deployment guide for self-contained apps](/windows/apps/package-and-deploy/self-contained-deploy/deploy-self-contained-apps) for more details.
+Your app carries the Windows ML dependencies with it. When your app uses individual Windows App SDK component NuGet packages, like the [`Microsoft.WindowsAppSDK.ML` NuGet package](https://www.nuget.org/packages/Microsoft.WindowsAppSDK.ML) for Windows ML, self-contained deployment is the default deployment mode. See [Deployment guide for self-contained apps](/windows/apps/package-and-deploy/self-contained-deploy/deploy-self-contained-apps) for more details.
 
 In self-contained mode, ONNX Runtime binaries are deployed alongside your application:
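For context on the deployment modes this change discusses: an app targeting the main `Microsoft.WindowsAppSDK` package can opt into self-contained deployment with the `WindowsAppSDKSelfContained` MSBuild property. This snippet is not part of the commit; it is a minimal sketch of the documented property:

```xml
<!-- .csproj excerpt: opt a Windows App SDK app into self-contained deployment,
     so the Windows App SDK (including Windows ML) binaries ship with the app. -->
<PropertyGroup>
  <WindowsAppSDKSelfContained>true</WindowsAppSDKSelfContained>
</PropertyGroup>
```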

docs/new-windows-ml/get-started.md

Lines changed: 9 additions & 5 deletions
@@ -1,15 +1,15 @@
 ---
 title: Get started with Windows ML
 description: Learn how to use Windows ML to download and register AI execution providers for hardware-optimized inference.
-ms.date: 03/16/2026
+ms.date: 03/20/2026
 ms.topic: how-to
 ---
 
 # Get started with Windows ML
 
-This topic shows you how to install and use Windows ML to discover, download, and register execution providers (EPs) for use with the ONNX Runtime shipped with Windows ML. Windows ML handles the complexity of package management and hardware selection, automatically downloading the latest execution providers compatible with your device's hardware.
+This topic shows you how to install and use Windows ML to discover, download, and register execution providers (EPs) for use with the ONNX Runtime shipped with Windows ML. Windows ML handles the complexity of package management and hardware selection, allowing you to download the latest execution providers compatible with your users' hardware.
 
-If you're not already familiar with the ONNX Runtime, we suggest reading the [ONNX Runtime docs](https://onnxruntime.ai/docs/). In short, Windows ML provides a shared Windows-wide copy of the ONNX Runtime, plus the ability to dynamically download execution providers (EPs).
+If you're not already familiar with the ONNX Runtime, we suggest reading the [ONNX Runtime docs](https://onnxruntime.ai/docs/). In short, Windows ML provides a copy of the ONNX Runtime, plus the ability to dynamically download execution providers (EPs).
 
 ## Prerequisites
 
@@ -43,7 +43,11 @@ Python versions 3.10 to 3.13, on x64 and ARM64 devices.
 
 Windows ML is included in [Windows App SDK 1.8.1 or greater](/windows/apps/windows-app-sdk/stable-channel).
 
-See [use the Windows App SDK in an existing project](/windows/apps/windows-app-sdk/use-windows-app-sdk-in-existing-project) for how to add the Windows App SDK to your project, or if you're already using Windows App SDK, update your packages.
+The easiest way to use Windows ML is to install the [`Microsoft.WindowsAppSDK.ML` NuGet package](https://www.nuget.org/packages/Microsoft.WindowsAppSDK.ML), which uses self-contained deployment by default. See [Deploy your app](./distributing-your-app.md) to learn more about deployment options.
+
+```bash
+dotnet add package Microsoft.WindowsAppSDK.ML
+```
 
 ### [C++/WinRT](#tab/cppwinrt)
 
@@ -53,7 +57,7 @@ See [use the Windows App SDK in an existing project](/windows/apps/windows-app-s
 
 ### [C/C++](#tab/c)
 
-Windows ML C APIs are included in the [`Microsoft.WindowsAppSDK.ML` NuGet package](https://www.nuget.org/packages/Microsoft.WindowsAppSDK.ML) version 1.8.6 or greater. The NuGet package ships CMake config files under `build/cmake/` that are directly consumable. The `@WINML_VERSION@` token is resolved at NuGet build time, so no additional processing is required.
+Windows ML C APIs are included in the [`Microsoft.WindowsAppSDK.ML` NuGet package](https://www.nuget.org/packages/Microsoft.WindowsAppSDK.ML) version `1.8.2141` or greater. The NuGet package ships CMake config files under `build/cmake/` that are directly consumable. The `@WINML_VERSION@` token is resolved at NuGet build time, so no additional processing is required.
 
 To use the NuGet package...
 
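After installing the package, the "discover, download, and register" flow that this get-started topic describes is driven by Windows ML's execution provider catalog API. The following is a minimal C# sketch; it assumes the `Microsoft.Windows.AI.MachineLearning` namespace and the `ExecutionProviderCatalog` type described in the Windows ML docs, so verify the exact names against the current API reference:

```csharp
using Microsoft.Windows.AI.MachineLearning;

// Discover, download, and register EPs compatible with this device's hardware.
// Assumes the ExecutionProviderCatalog API from the Windows ML reference docs.
var catalog = ExecutionProviderCatalog.GetDefault();
await catalog.EnsureAndRegisterCertifiedAsync();

// After registration, create ONNX Runtime inference sessions as usual;
// the runtime can now place work on the downloaded execution providers.
```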

docs/new-windows-ml/migrate-to-windows-ml.md

Lines changed: 4 additions & 4 deletions
@@ -1,7 +1,7 @@
 ---
 title: Migrate from standalone ONNX Runtime to Windows ML's ONNX Runtime
 description: Learn how to migrate from using the standalone ONNX Runtime to using the ONNX Runtime included in Windows Machine Learning (ML) for hardware-optimized inference.
-ms.date: 08/14/2025
+ms.date: 03/20/2026
 ms.topic: how-to
 ---

@@ -13,9 +13,9 @@ This guide explains how to migrate from using the [standalone ONNX Runtime](http
 
 - **Smaller app download / install size** - Your app doesn't need to distribute large EPs and the ONNX Runtime
 - **EPs are dynamically downloaded via Windows ML**, so that you don't have to bundle them with your app
-- **The ONNX Runtime in Windows ML is a shared system-wide copy**, so your app doesn't have to bundle it with your app
-- **Dynamically uses latest EPs** - Automatically downloads and manages the latest compatible hardware-specific execution providers, without requiring your app to update
-- **Dynamically uses latest ONNX Runtime** - Automatically updates the ONNX Runtime without requiring your app to update. See the [ONNX versions](./onnx-versions.md) docs for more info
+- **Optionally use a shared system-wide ONNX Runtime**, so you don't have to bundle it with your app
+- **Evergreen EPs** - Automatically updates to the latest compatible hardware-specific execution providers, without requiring your app to update
+- **Optional evergreen ONNX Runtime** - By using framework-dependent deployment, your app can automatically receive updates to the ONNX Runtime without requiring an app update. See the [ONNX versions](./onnx-versions.md) docs for more info
 
 ## System requirements for Windows ML
 

docs/new-windows-ml/onnx-versions.md

Lines changed: 2 additions & 2 deletions
@@ -1,13 +1,13 @@
 ---
 title: ONNX Runtime versions shipped in Windows ML
 description: Understand which versions of the ONNX Runtime were shipped in which versions of Windows ML.
-ms.date: 03/19/2026
+ms.date: 03/20/2026
 ms.topic: concept-article
 ---
 
 # ONNX Runtime versions shipped in Windows ML
 
-Each Windows App SDK release includes Windows ML, which includes a copy of the ONNX Runtime, so that your app can depend on a shared system-wide copy of the ONNX Runtime rather than distributing your own copy.
+Each Windows App SDK release includes Windows ML, which includes a copy of the ONNX Runtime, so that your app can depend on a shared system-wide copy of the ONNX Runtime rather than distributing your own copy (if you choose framework-dependent deployment). See [Deploy your app](./distributing-your-app.md) for more details.
 
 ## Versions of ONNX Runtime in Windows ML
 
docs/new-windows-ml/overview.md

Lines changed: 9 additions & 9 deletions
@@ -2,7 +2,7 @@
 title: What is Windows ML?
 description: Learn how Windows Machine Learning (ML) helps your Windows apps run AI models locally.
 ms.topic: article
-ms.date: 11/18/2025
+ms.date: 03/20/2026
 ---
 
 # What is Windows ML?

@@ -15,9 +15,9 @@ If you're not already familiar with the ONNX Runtime, we suggest reading the [ON
 
 ## Key benefits
 
-- **Dynamically get latest EPs** - Automatically downloads and manages the latest hardware-specific execution providers
-- **Shared [ONNX Runtime](https://onnxruntime.ai/docs/)** - Uses system-wide runtime instead of bundling your own, reducing app size
-- **Smaller downloads/installs** - No need to carry large EPs and the ONNX Runtime in your app
+- **Shared [ONNX Runtime](https://onnxruntime.ai/docs/)** - Optional system-wide runtime instead of bundling your own, reducing app size
+- **Dynamically get latest EPs** - Download the latest hardware-specific execution providers with one API call
+- **Smaller app downloads/installs** - No need to carry large EPs and the ONNX Runtime in your app
 - **Broad hardware support** - Runs on Windows PCs (x64 and ARM64) and Windows Server with any hardware configuration
 
 ## System requirements

@@ -39,14 +39,14 @@ You can [see the list of EPs that Windows ML supports here](./supported-executio
 
 Windows ML includes a copy of the [ONNX Runtime](https://onnxruntime.ai/) and allows you to dynamically download vendor-specific **execution providers** (EPs), so your model inference can be optimized across the wide variety of CPUs, GPUs, and NPUs in the Windows ecosystem.
 
-### Automatic deployment
+Execution provider acquisition:
 
-1. **App installation** - Windows App SDK bootstrapper initializes Windows ML
-2. **Hardware detection** - Runtime identifies available processors
-3. **EP download** - Automatically downloads optimal execution providers
-4. **Ready to run** - Your app can immediately use AI models
+1. **Hardware detection** - Windows ML identifies compatible EPs
+2. **EP download** - Call APIs to download compatible EPs
+3. **Ready to run** - Your app can accelerate AI models with EPs
 
 This eliminates the need to:
+
 - Bundle execution providers for specific hardware vendors
 - Create separate app builds for different execution providers
 - Handle execution provider updates manually
