WSL/tutorials/gpu-compute.md

ms.topic: article
ms.localizationpriority: medium
---
# GPU accelerated machine learning training in the Windows Subsystem for Linux
Support for GPU compute, the #1 most requested WSL feature, is now available for preview via the Windows Insider program. [Read the blog post](https://blogs.windows.com/windowsdeveloper/?p=55781).
## What is GPU compute?
Offloading compute-intensive work to a GPU is generally referred to as "GPU compute". GPU computing uses the GPU (graphics processing unit) to accelerate math-heavy workloads, exploiting its highly parallel architecture to complete the required calculations faster, in many cases, than a CPU alone can. This parallelism enables significant processing speed improvements over running the same workload only on a CPU. Training machine learning models is a great example of a computationally expensive task that GPU compute can significantly accelerate.
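As a loose, CPU-only analogy (not part of the original guide, and no GPU required), the benefit of data-parallel execution can be sketched with NumPy: a vectorized operation fans the same arithmetic out across an entire array at once, much as a GPU spreads work across its many cores, while a plain Python loop processes one element at a time.

```python
import time
import numpy as np

# A math-heavy workload: an element-wise multiply-add over a large array.
n = 2_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Scalar loop: one element at a time, like a single sequential thread.
start = time.perf_counter()
out_loop = [a[i] * b[i] + 1.0 for i in range(n)]
loop_s = time.perf_counter() - start

# Vectorized: the whole array in one data-parallel operation.
start = time.perf_counter()
out_vec = a * b + 1.0
vec_s = time.perf_counter() - start

print(f"loop: {loop_s:.3f}s  vectorized: {vec_s:.3f}s")
```

The two approaches compute identical results; only the degree of parallelism differs, which is exactly the property GPU compute exploits at much larger scale.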
## Install and set up
Learn more about WSL 2 support and how to start training machine learning models in the [GPU Accelerated Training guide](https://docs.microsoft.com/windows/win32/direct3d12/gpu-accelerated-training) inside the DirectML docs. This guide covers:
* Guidance for beginners or students to set up TensorFlow with DirectML
* Guidance for professionals to start running their existing CUDA ML workflows