docs/new-windows-ml/distributing-your-app.md (+3 −3)
@@ -1,7 +1,7 @@
 ---
 title: Deploy your app that uses Windows ML
 description: Learn how to deploy your app that uses Windows Machine Learning (ML).
-ms.date: 03/10/2026
+ms.date: 03/20/2026
 ms.topic: concept-article
 ---
 
@@ -11,11 +11,11 @@ Windows ML is deployed like any other Windows App SDK component, supporting both
 
 ## Framework-dependent
 
-Your app depends on the Windows App SDK runtime and/or framework package being present on the target machine. Framework-dependent deployment is the default deployment mode of the Windows App SDK for its efficient use of machine resources and serviceability. See [Deployment architecture and overview for framework-dependent apps](/windows/apps/windows-app-sdk/deployment-architecture) for more details.
+Your app depends on the Windows App SDK runtime and/or framework package being present on the target machine. This minimizes the dependencies your app has to carry (by using a shared system-wide copy of the Windows App SDK runtime), and enables evergreen servicing without you needing to release an update to your app. When targeting the main [`Microsoft.WindowsAppSDK` NuGet package](https://www.nuget.org/packages/Microsoft.WindowsAppSDK), framework-dependent deployment is the default deployment mode. See [Deployment architecture and overview for framework-dependent apps](/windows/apps/windows-app-sdk/deployment-architecture) for more details.
 
 ## Self-contained
 
-Your app carries the Windows App SDK dependencies with it. See [Deployment guide for self-contained apps](/windows/apps/package-and-deploy/self-contained-deploy/deploy-self-contained-apps) for more details.
+Your app carries the Windows ML dependencies with it. When your app uses individual Windows App SDK component NuGet packages, like the [`Microsoft.WindowsAppSDK.ML` NuGet package](https://www.nuget.org/packages/Microsoft.WindowsAppSDK.ML) for Windows ML, self-contained deployment is the default deployment mode. See [Deployment guide for self-contained apps](/windows/apps/package-and-deploy/self-contained-deploy/deploy-self-contained-apps) for more details.
 
 In self-contained mode, ONNX Runtime binaries are deployed alongside your application:
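To make the two modes concrete, here is a minimal project-file sketch for opting a Windows App SDK app into self-contained deployment. It uses the documented `WindowsAppSDKSelfContained` MSBuild property; verify it against the deployment guide for your SDK version before relying on it.

```xml
<!-- Project file fragment (sketch): opting into self-contained deployment,
     so the Windows App SDK (and Windows ML) binaries ship with the app. -->
<PropertyGroup>
  <WindowsAppSDKSelfContained>true</WindowsAppSDKSelfContained>
</PropertyGroup>
```

Omitting the property (or setting it to `false`) leaves the app framework-dependent, relying on the shared runtime described above.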
docs/new-windows-ml/get-started.md (+9 −5)
@@ -1,15 +1,15 @@
 ---
 title: Get started with Windows ML
 description: Learn how to use Windows ML to download and register AI execution providers for hardware-optimized inference.
-ms.date: 03/16/2026
+ms.date: 03/20/2026
 ms.topic: how-to
 ---
 
 # Get started with Windows ML
 
-This topic shows you how to install and use Windows ML to discover, download, and register execution providers (EPs) for use with the ONNX Runtime shipped with Windows ML. Windows ML handles the complexity of package management and hardware selection, automatically downloading the latest execution providers compatible with your device's hardware.
+This topic shows you how to install and use Windows ML to discover, download, and register execution providers (EPs) for use with the ONNX Runtime shipped with Windows ML. Windows ML handles the complexity of package management and hardware selection, allowing you to download the latest execution providers compatible with your users' hardware.
 
-If you're not already familiar with the ONNX Runtime, we suggest reading the [ONNX Runtime docs](https://onnxruntime.ai/docs/). In short, Windows ML provides a shared Windows-wide copy of the ONNX Runtime, plus the ability to dynamically download execution providers (EPs).
+If you're not already familiar with the ONNX Runtime, we suggest reading the [ONNX Runtime docs](https://onnxruntime.ai/docs/). In short, Windows ML provides a copy of the ONNX Runtime, plus the ability to dynamically download execution providers (EPs).
 
 ## Prerequisites
 
@@ -43,7 +43,11 @@ Python versions 3.10 to 3.13, on x64 and ARM64 devices.
 
 Windows ML is included in [Windows App SDK 1.8.1 or greater](/windows/apps/windows-app-sdk/stable-channel).
 
-See [use the Windows App SDK in an existing project](/windows/apps/windows-app-sdk/use-windows-app-sdk-in-existing-project) for how to add the Windows App SDK to your project, or if you're already using Windows App SDK, update your packages.
+The easiest way to use Windows ML is to install the [`Microsoft.WindowsAppSDK.ML` NuGet package](https://www.nuget.org/packages/Microsoft.WindowsAppSDK.ML), which uses self-contained deployment by default. See [Deploy your app](./distributing-your-app.md) to learn more about deployment options.
+
+```bash
+dotnet add package Microsoft.WindowsAppSDK.ML
+```
 
 ### [C++/WinRT](#tab/cppwinrt)
 
@@ -53,7 +57,7 @@
 
 ### [C/C++](#tab/c)
 
-Windows ML C APIs are included in the [`Microsoft.WindowsAppSDK.ML` NuGet package](https://www.nuget.org/packages/Microsoft.WindowsAppSDK.ML) version 1.8.6 or greater. The NuGet package ships CMake config files under `build/cmake/` that are directly consumable. The `@WINML_VERSION@` token is resolved at NuGet build time, so no additional processing is required.
+Windows ML C APIs are included in the [`Microsoft.WindowsAppSDK.ML` NuGet package](https://www.nuget.org/packages/Microsoft.WindowsAppSDK.ML) version `1.8.2141` or greater. The NuGet package ships CMake config files under `build/cmake/` that are directly consumable. The `@WINML_VERSION@` token is resolved at NuGet build time, so no additional processing is required.
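As an illustrative sketch only, consuming those shipped config files from a CMake project might look like the following. The package name `WindowsML` and target `WindowsML::WindowsML` are assumptions, not confirmed by this page; check the actual files under `build/cmake/` in the extracted NuGet package for the exact names.

```cmake
# Hypothetical sketch: point CMake at the config files shipped in the
# Microsoft.WindowsAppSDK.ML NuGet package, then consume the exported target.
# "WindowsML" (package) and "WindowsML::WindowsML" (target) are assumed names;
# substitute the real ones found under build/cmake/ in the package.
cmake_minimum_required(VERSION 3.20)
project(my_winml_app CXX)

list(APPEND CMAKE_PREFIX_PATH
     "${CMAKE_SOURCE_DIR}/packages/Microsoft.WindowsAppSDK.ML/build/cmake")
find_package(WindowsML CONFIG REQUIRED)

add_executable(my_winml_app main.cpp)
target_link_libraries(my_winml_app PRIVATE WindowsML::WindowsML)
```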
docs/new-windows-ml/migrate-to-windows-ml.md (+4 −4)
@@ -1,7 +1,7 @@
 ---
 title: Migrate from standalone ONNX Runtime to Windows ML's ONNX Runtime
 description: Learn how to migrate from using the standalone ONNX Runtime to using the ONNX Runtime included in Windows Machine Learning (ML) for hardware-optimized inference.
-ms.date: 08/14/2025
+ms.date: 03/20/2026
 ms.topic: how-to
 ---
 
@@ -13,9 +13,9 @@ This guide explains how to migrate from using the [standalone ONNX Runtime](http
 
 - **Smaller app download / install size** - Your app doesn't need to distribute large EPs and the ONNX Runtime
 - **EPs are dynamically downloaded via Windows ML**, so that you don't have to bundle them with your app
-- **The ONNX Runtime in Windows ML is a shared system-wide copy**, so your app doesn't have to bundle it with your app
-- **Dynamically uses latest EPs** - Automatically downloads and manages the latest compatible hardware-specific execution providers, without requiring your app to update
-- **Dynamically uses latest ONNX Runtime** - Automatically updates the ONNX Runtime without requiring your app to update. See the [ONNX versions](./onnx-versions.md) docs for more info
+- **Optionally use a shared system-wide ONNX Runtime**, so you don't have to bundle it with your app
+- **Evergreen EPs** - Automatically updates to the latest compatible hardware-specific execution providers, without requiring your app to update
+- **Optional evergreen ONNX Runtime** - By using framework-dependent deployment, your app can automatically receive updates to the ONNX Runtime without requiring you to update your app. See the [ONNX versions](./onnx-versions.md) docs for more info
docs/new-windows-ml/onnx-versions.md (+2 −2)
@@ -1,13 +1,13 @@
 ---
 title: ONNX Runtime versions shipped in Windows ML
 description: Understand which versions of the ONNX Runtime were shipped in which versions of Windows ML.
-ms.date: 03/19/2026
+ms.date: 03/20/2026
 ms.topic: concept-article
 ---
 
 # ONNX Runtime versions shipped in Windows ML
 
-Each Windows App SDK release includes Windows ML, which includes a copy of the ONNX Runtime, so that your app can depend on a shared system-wide copy of the ONNX Runtime rather than distributing your own copy.
+Each Windows App SDK release includes Windows ML, which includes a copy of the ONNX Runtime, so that your app can depend on a shared system-wide copy of the ONNX Runtime rather than distributing your own copy (if you choose framework-dependent deployment). See [Deploy your app](./distributing-your-app.md) for more details.
docs/new-windows-ml/overview.md (+9 −9)
@@ -2,7 +2,7 @@
 title: What is Windows ML?
 description: Learn how Windows Machine Learning (ML) helps your Windows apps run AI models locally.
 ms.topic: article
-ms.date: 11/18/2025
+ms.date: 03/20/2026
 ---
 
 # What is Windows ML?
@@ -15,9 +15,9 @@ If you're not already familiar with the ONNX Runtime, we suggest reading the [ON
 
 ## Key benefits
 
--**Dynamically get latest EPs** - Automatically downloads and manages the latest hardware-specific execution providers
--**Shared [ONNX Runtime](https://onnxruntime.ai/docs/)** - Uses system-wide runtime instead of bundling your own, reducing app size
--**Smaller downloads/installs** - No need to carry large EPs and the ONNX Runtime in your app
+- **Shared [ONNX Runtime](https://onnxruntime.ai/docs/)** - Optional system-wide runtime instead of bundling your own, reducing app size
+- **Dynamically get latest EPs** - Download the latest hardware-specific execution providers with one API call
+- **Smaller app downloads/installs** - No need to carry large EPs and the ONNX Runtime in your app
 - **Broad hardware support** - Runs on Windows PCs (x64 and ARM64) and Windows Server with any hardware configuration
 
 ## System requirements
@@ -39,14 +39,14 @@ You can [see the list of EPs that Windows ML supports here](./supported-executio
 
 Windows ML includes a copy of the [ONNX Runtime](https://onnxruntime.ai/) and allows you to dynamically download vendor-specific **execution providers** (EPs), so your model inference can be optimized across the wide variety of CPUs, GPUs, and NPUs in the Windows ecosystem.
 
-### Automatic deployment
+Execution provider acquisition:
 
 1. **App installation** - Windows App SDK bootstrapper initializes Windows ML
-2. **Hardware detection** - Runtime identifies available processors
0 commit comments