
Commit 3d844c2

Merge branch 'site' into 8-24a
2 parents 49f62b2 + d2f6e4e commit 3d844c2

19 files changed (+601 lines, -9 lines)

_get_started/previous-versions.md

Lines changed: 47 additions & 0 deletions
@@ -17,6 +17,53 @@ your convenience.

## Commands for Versions >= 1.0.0

### v2.4.0

#### Conda

##### OSX

```
# conda
conda install pytorch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 -c pytorch
```

##### Linux and Windows

```
# CUDA 11.8
conda install pytorch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 pytorch-cuda=11.8 -c pytorch -c nvidia
# CUDA 12.1
conda install pytorch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 pytorch-cuda=12.1 -c pytorch -c nvidia
# CUDA 12.4
conda install pytorch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 pytorch-cuda=12.4 -c pytorch -c nvidia
# CPU Only
conda install pytorch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 cpuonly -c pytorch
```

#### Wheel

##### OSX

```
pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0
```

##### Linux and Windows

```
# ROCM 6.1 (Linux only)
pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/rocm6.1
# CUDA 11.8
pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu118
# CUDA 12.1
pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu121
# CUDA 12.4
pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu124
# CPU only
pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cpu
```
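As an editorial aside, the pinned versions above can be sanity-checked after installation. The sketch below is illustrative only; `installed_version` and `matches_pin` are hypothetical helper names, not PyTorch APIs, and the script degrades gracefully when a package is absent:

```python
# Illustrative post-install check; helper names are hypothetical, not PyTorch APIs.
import importlib.metadata
import importlib.util


def installed_version(pkg):
    """Return the installed distribution version string, or None if absent."""
    if importlib.util.find_spec(pkg) is None:
        return None
    try:
        return importlib.metadata.version(pkg)
    except importlib.metadata.PackageNotFoundError:
        return None


def matches_pin(version, pin):
    """Compare a local version like '2.4.0+cu121' against a pin like '2.4.0'."""
    return version is not None and version.split("+")[0] == pin


for pkg, pin in [("torch", "2.4.0"), ("torchvision", "0.19.0"), ("torchaudio", "2.4.0")]:
    status = "OK" if matches_pin(installed_version(pkg), pin) else "missing or mismatched"
    print(f"{pkg}=={pin}: {status}")
```

The `version.split("+")[0]` step strips local build tags such as `+cu121`, which the wheels from the index URLs above carry.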
### v2.3.1

#### Conda

_mobile/android.md

Lines changed: 5 additions & 0 deletions
@@ -8,6 +8,11 @@ order: 3
published: true
---

<div class="note-card">
<h4>Note</h4>
<p>PyTorch Mobile is no longer actively supported. Please check out <a href="/executorch-overview">ExecuTorch</a>, PyTorch’s all-new on-device inference library. You can also review <a href="https://pytorch.org/executorch/stable/demo-apps-android.html">this page</a> to learn more about how to use ExecuTorch to build an Android app.</p>
</div>

# Android

## Quickstart with a HelloWorld Example

_mobile/home.md

Lines changed: 5 additions & 0 deletions
@@ -9,6 +9,11 @@ published: true
redirect_from: "/mobile/"
---

<div class="note-card">
<h4>Note</h4>
<p>PyTorch Mobile is no longer actively supported. Please check out <a href="/executorch-overview">ExecuTorch</a>, PyTorch’s all-new on-device inference library.</p>
</div>

# PyTorch Mobile

There is a growing need to execute ML models on edge devices to reduce latency, preserve privacy, and enable new interactive use cases.

_mobile/ios.md

Lines changed: 5 additions & 0 deletions
@@ -8,6 +8,11 @@ order: 2
published: true
---

<div class="note-card">
<h4>Note</h4>
<p>PyTorch Mobile is no longer actively supported. Please check out <a href="/executorch-overview">ExecuTorch</a>, PyTorch’s all-new on-device inference library. You can also review <a href="https://pytorch.org/executorch/stable/demo-apps-ios.html">this page</a> to learn more about how to use ExecuTorch to build an iOS app.</p>
</div>

# iOS

To get started with PyTorch on iOS, we recommend exploring the following [HelloWorld](https://github.com/pytorch/ios-demo-app/tree/master/HelloWorld).

_posts/2023-06-07-join-pytorch.md

Lines changed: 2 additions & 2 deletions
@@ -29,11 +29,11 @@ Being a part of the PyTorch Foundation grants opportunities to help build the fu
## How to join

- Premier members [must submit an application](https://docs.google.com/forms/d/1JVzFIaFu-El5Ug0IlzpHKwPbZLe9MvaAUXl0FZgnNQw/edit) to be considered for board level membership. General and associate members are welcome to [join automatically](https://enrollment.lfx.linuxfoundation.org/?project=pytorch). See below for specific tiering and details on each type of membership.
+ Commercial organizations are invited to apply for General membership, while non-profits and academic institutions are encouraged to apply for Associate membership.

### Premier Members

- Premier members are the highest tier. They will appoint one voting representative in any subcommittees or activities of the PTF Governing Board, and receive prominent placement in displays of membership including website, landscape and marketing materials, exclusive live webinars with PyTorch online programs and everything included within a “general” membership. The annual fee is $150,000 + an LF Silver Membership.
+ Organizations are welcome to submit an application to be considered as a Premier member. Premier members are the highest tier. They will appoint one voting representative in any subcommittees or activities of the PTF Governing Board, and receive prominent placement in displays of membership including website, landscape and marketing materials, exclusive live webinars with PyTorch online programs and everything included within a “general” membership. The annual fee is $150,000 + an LF Silver Membership.

### General Members
Lines changed: 63 additions & 0 deletions
@@ -0,0 +1,63 @@
---
layout: blog_detail
title: "Accelerate Your AI: PyTorch 2.4 Now Supports Intel GPUs for Faster Workloads"
author: the PyTorch Team at Intel
---

We have exciting news! PyTorch 2.4 now supports the Intel® Data Center GPU Max Series and the SYCL software stack, making it easier to speed up your AI workflows for both training and inference. This update gives you a consistent programming experience with minimal coding effort and extends PyTorch’s device and runtime capabilities (device, stream, event, generator, allocator, and guard) to seamlessly support streaming devices. This enhancement simplifies deploying PyTorch on ubiquitous hardware and makes it easier to integrate different hardware back ends.

Intel GPU support upstreamed into PyTorch provides both eager and graph modes, fully running the Dynamo Hugging Face benchmarks. Eager mode now includes common Aten operators implemented with SYCL. The most performance-critical graphs and operators are highly optimized using the oneAPI Deep Neural Network Library (oneDNN) and the oneAPI Math Kernel Library (oneMKL). Graph mode (torch.compile) now enables an Intel GPU back end to implement optimizations for Intel GPUs and to integrate Triton. Furthermore, data types such as FP32, BF16, FP16, and automatic mixed precision (AMP) are supported. The PyTorch Profiler, based on Kineto and oneMKL, is being developed for the upcoming PyTorch 2.5 release.

Take a look at the current and planned front-end and back-end improvements for Intel GPU upstreamed into PyTorch.

![the current and planned front-end and back-end improvements for Intel GPU upstreamed into PyTorch](/assets/images/intel-gpus-pytorch-2-4.jpg){:style="width:100%"}

PyTorch 2.4 on Linux supports the Intel Data Center GPU Max Series for training and inference while maintaining the same user experience as other hardware. If you’re migrating code from CUDA, you can run your existing application on an Intel GPU with minimal changes: just update the device name from `cuda` to `xpu`. For example:

```
# CUDA Code
tensor = torch.tensor([1.0, 2.0]).to("cuda")

# Code for Intel GPU
tensor = torch.tensor([1.0, 2.0]).to("xpu")
```
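For scripts that must run on machines with or without an accelerator, the device string can be chosen at runtime. The sketch below is an editorial illustration, not part of the post: the `pick_device` helper name is hypothetical, and it assumes the `torch.xpu` module shipped in PyTorch 2.4 builds with Intel GPU support (it falls back to `"cpu"` when PyTorch or an accelerator is unavailable):

```python
# Hypothetical helper: choose the best available device string.
# Falls back to "cpu" when PyTorch or an accelerator is not present.
import importlib.util


def pick_device():
    if importlib.util.find_spec("torch") is None:
        return "cpu"  # PyTorch is not installed
    import torch
    if torch.cuda.is_available():
        return "cuda"
    # torch.xpu exists in PyTorch 2.4 builds with Intel GPU support;
    # getattr guards against older builds that lack the module.
    xpu = getattr(torch, "xpu", None)
    if xpu is not None and xpu.is_available():
        return "xpu"
    return "cpu"


device = pick_device()
print(device)  # prints "cpu" when no accelerator is present
```

Tensors can then be created with `torch.tensor([1.0, 2.0]).to(pick_device())`, mirroring the `cuda`-to-`xpu` swap shown above without hard-coding either device.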
## Get Started

Try PyTorch 2.4 on the Intel Data Center GPU Max Series through the [Intel® Tiber™ Developer Cloud](https://cloud.intel.com/). Get a tour of the [environment setup, source build, and examples](https://pytorch.org/docs/main/notes/get_start_xpu.html#examples). To learn how to create a free Standard account, see [Get Started](https://console.cloud.intel.com/docs/guides/get_started.html), then do the following:

1. Sign in to the [cloud console](https://console.cloud.intel.com/docs/guides/get_started.html).

2. From the [Training](https://console.cloud.intel.com/training) section, open the **PyTorch 2.4 on Intel GPUs** notebook.

3. Ensure that the **PyTorch 2.4** kernel is selected for the notebook.

## Summary

PyTorch 2.4 introduces initial support for the Intel Data Center GPU Max Series to accelerate your AI workloads. With Intel GPU support, you get continuous software support, unified distribution, and synchronized release schedules for a smoother development experience. We’re enhancing this functionality to reach Beta quality in PyTorch 2.5. Planned features in 2.5 include:

* More Aten operators and full Dynamo Torchbench and TIMM support in eager mode.

* Full Dynamo Torchbench and TIMM benchmark support in torch.compile.

* Intel GPU support in torch.profiler.

* PyPI wheels distribution.

* Windows and Intel Client GPU Series support.

We welcome the community to evaluate these new contributions to [Intel GPU support on PyTorch](https://github.com/pytorch/pytorch?tab=readme-ov-file#intel-gpu-support).

## Resources

* [PyTorch 2.4: Get Started on an Intel GPU](https://pytorch.org/docs/main/notes/get_start_xpu.html)

* [PyTorch Release Notes](https://github.com/pytorch/pytorch/releases)

## Acknowledgments

We want to thank the PyTorch open source community for their technical discussions and insights: [Nikita Shulga](https://github.com/malfet), [Jason Ansel](https://github.com/jansel), [Andrey Talman](https://github.com/atalman), [Alban Desmaison](https://github.com/alband), and [Bin Bao](https://github.com/desertfire).

We also thank our collaborators from PyTorch for their professional support and guidance.

1 To enable GPU support and improve performance, we suggest installing the [Intel® Extension for PyTorch](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/).
