
Commit c08845b

Author: Svetlana Karslioglu
Merge branch 'site' into remove-pytorch-enterprise
2 parents 72f032a + d6ee455, commit c08845b

File tree: 2,785 files changed (+36798 / -27384 lines)


404.html

Lines changed: 76 additions & 20 deletions
@@ -5,23 +5,79 @@
 layout: general
 ---

-<!DOCTYPE html>
-<html>
-<body>
-<div style="text-align: center;">
-  <img src="{{ site.baseurl }}/assets/images/404_sign.png" />
-
-  <h1>Oops!</h1>
-
-  <h4>You've reached a dead end.</h4>
-
-  <h4>
-    If you feel like something should be here, you can <a href="https://github.com/pytorch/pytorch.github.io/issues">open an issue</a> on GitHub.
-  </h4>
-
-  <h4>
-    Click <a href="/">here</a> to go back to the main page.
-  </h4>
-</div>
-</body>
-</html>
+<script type="text/javascript" charset="utf-8">
+  const FALLBACK_URL = '';
+  const REDIRECT_STYLE = {
+    // Redirect completely, appending the path to the newly specified location.
+    // This is useful for project renames or moving to a different org.
+    FULL: 0,
+    // Redirect to the specific location, losing path information.
+    // This is useful when you just want to capture the audience to a known working page.
+    SIMPLE: 1,
+    // Redirect to the project's 404 page, injecting the original URL.
+    FOUROHFOUR_DEFAULT: 2,
+    // Redirect to the specified path, replacing ${from} with the original URL.
+    FOUROHFOUR_CUSTOM: 3,
+  };
+
+  const PROJECTS = {
+    live: {
+      location: 'https://playtorch.dev/',
+      style: REDIRECT_STYLE.FULL,
+    },
+  };
+
+  // e.g. "https://facebook.github.io/flux/docs/overview/"
+  const ORIGINAL_URL = window.location.href;
+  // e.g. [ "", "flux", "docs", "overview", "" ]
+  const PATH_PARTS = window.location.pathname.split('/');
+  // e.g. "flux"
+  const PROJECT = PATH_PARTS[1];
+  // e.g. "docs/overview/"
+  const SUBPATH = PATH_PARTS.slice(2).join('/');
+
+
+  // Perform the redirect only for explicitly defined projects.
+  // Otherwise, show the 404 page below.
+  if (PROJECTS.hasOwnProperty(PROJECT)) {
+    let newUrl = '';
+    let project = PROJECTS[PROJECT];
+    switch (project.style) {
+      case REDIRECT_STYLE.FULL:
+        newUrl = project.location + SUBPATH;
+        break;
+      case REDIRECT_STYLE.SIMPLE:
+        newUrl = project.location;
+        break;
+      case REDIRECT_STYLE.FOUROHFOUR_DEFAULT:
+        newUrl = project.location + '404.html?from=' + ORIGINAL_URL;
+        break;
+      case REDIRECT_STYLE.FOUROHFOUR_CUSTOM:
+        newUrl = project.location.replace('${from}', ORIGINAL_URL);
+        break;
+      default:
+        newUrl = FALLBACK_URL;
+    }
+
+    if (newUrl !== '') {
+      window.location.href = newUrl;
+    }
+  }
+</script>
+
+<div style="text-align: center;">
+  <img src="{{ site.baseurl }}/assets/images/404_sign.png" />
+
+  <h1>Oops!</h1>
+
+  <h4>You've reached a dead end.</h4>
+
+  <h4>
+    If you feel like something should be here, you can <a href="https://github.com/pytorch/pytorch.github.io/issues">open an issue</a> on GitHub.
+  </h4>
+
+  <h4>
+    Click <a href="/">here</a> to go back to the main page.
+  </h4>
+</div>
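
For readers skimming the redirect logic above, here is a minimal, illustrative Python sketch of the same path-based mapping. It is not part of the commit; the request URL below is a made-up example, and only the `live` project from the script above is assumed.

```python
from urllib.parse import urlsplit

# Hypothetical request URL; "live" is the only project defined in the script above.
original_url = "https://pytorch.org/live/tutorials/get-started/"
projects = {"live": {"location": "https://playtorch.dev/", "style": "FULL"}}

path_parts = urlsplit(original_url).path.split("/")  # ['', 'live', 'tutorials', 'get-started', '']
project = path_parts[1]                              # 'live'
subpath = "/".join(path_parts[2:])                   # 'tutorials/get-started/'

if project in projects and projects[project]["style"] == "FULL":
    new_url = projects[project]["location"] + subpath
    print(new_url)  # https://playtorch.dev/tutorials/get-started/
```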
Lines changed: 28 additions & 0 deletions
@@ -0,0 +1,28 @@
+---
+category: event
+title: PyTorch Conference – Dec 2nd 2022 | Save the Date
+date: December 2, 2022
+header-image: assets/images/pytorch_conference–dec_2nd_2022.gif
+---
+
+[Content Submission Form](https://docs.google.com/forms/d/121ptOuhqhmcPev9g5Zt2Ffl-NtB_oeyFk5CWjumUVLQ/edit) — Complete by Sept. 30th
+
+We are excited to announce that the PyTorch Conference returns in person as a satellite event to [NeurIPS](https://nips.cc/) (<font size="3">Neural Information Processing Systems</font>) in New Orleans on Dec. 2nd. This is an opportunity to be part of the biggest PyTorch event of the year!
+
+**When**: Dec 2nd, 2022
+
+**Where**: New Orleans, Louisiana (USA) at Generations Hall &#124; *Virtual option as well*
+
+**What**: The PyTorch Conference brings together leading academics, researchers, and developers from the machine learning community to learn more about software releases on PyTorch, ways PyTorch is being used in academia and industry, development trends, and more.
+
+Join us for technical talks, project deep dives, and a poster exhibition with the opportunity to meet the authors, learn more about their PyTorch projects, and network with the machine learning community. There will also be a virtual option for those who can't join in person. More information will be shared soon.
+
+**How**: If you would like to contribute, submit your content for consideration in [this form by Sept. 30th](https://forms.gle/A92Y1h9U4cDjYjnK9). Please note that we cannot accept all submissions.
+
+**Guidelines for Content**: When submitting content, it must meet the following criteria:
+- Products/features are launched and available at the time of the event
+- Products/features must be open source projects
+- Products/features are new or innovative to the community
+- Content can be shared openly with the community
+
+**Questions**: Contact pytorch-marketing@fb.com

_get_started/installation/aws.md

Lines changed: 2 additions & 2 deletions
@@ -94,8 +94,8 @@ Once you decided upon your instance type, you will need to create, optionally co
 * [Windows](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/EC2_GetStarted.html){:target="_blank"}
 * [Command-line](https://docs.aws.amazon.com/cli/latest/userguide/cli-using-ec2.html){:target="_blank"}

-## AWS SageMaker
-{: #aws-sagemaker}
+## Amazon SageMaker
+{: #amazon-sagemaker}

 With [SageMaker](https://aws.amazon.com/sagemaker), AWS provides a fully managed service that allows developers and data scientists to build, train, and deploy machine learning models.

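
Not part of the diff, but as a hedged sketch of what the SageMaker workflow mentioned above can look like with the SageMaker Python SDK. The script name, IAM role ARN, S3 path, instance type, and version strings below are placeholders, not values from this commit; check the currently supported PyTorch images before using them.

```python
from sagemaker.pytorch import PyTorch

# All values below are placeholders; adjust to your own account, script, data and region.
estimator = PyTorch(
    entry_point="train.py",                                    # your training script
    role="arn:aws:iam::123456789012:role/SageMakerExecution",  # IAM role SageMaker assumes
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    framework_version="1.12",                                  # assumed supported framework version
    py_version="py38",
)

# Launch a fully managed training job against data staged in S3.
estimator.fit({"training": "s3://my-bucket/train-data/"})
```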

_get_started/installation/linux.md

Lines changed: 19 additions & 8 deletions
@@ -1,7 +1,7 @@
 # Installing on Linux
 {:.no_toc}

-PyTorch can be installed and used on various Linux distributions. Depending on your system and compute requirements, your experience with PyTorch on Linux may vary in terms of processing time. It is recommended, but not required, that your Linux system has an NVIDIA GPU in order to harness the full power of PyTorch's [CUDA](https://developer.nvidia.com/cuda-zone) [support](https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html?highlight=cuda#cuda-tensors)..
+PyTorch can be installed and used on various Linux distributions. Depending on your system and compute requirements, your experience with PyTorch on Linux may vary in terms of processing time. It is recommended, but not required, that your Linux system has an NVIDIA or AMD GPU in order to harness the full power of PyTorch's [CUDA](https://developer.nvidia.com/cuda-zone) [support](https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html?highlight=cuda#cuda-tensors) or [ROCm](https://docs.amd.com) support.

 ## Prerequisites
 {: #linux-prerequisites}
@@ -80,28 +80,37 @@ sudo apt install python3-pip
 ### Anaconda
 {: #linux-anaconda}

-#### No CUDA
+#### No CUDA/ROCm

-To install PyTorch via Anaconda, and do not have a [CUDA-capable](https://developer.nvidia.com/cuda-zone) system or do not require CUDA, in the above selector, choose OS: Linux, Package: Conda and CUDA: None.
+To install PyTorch via Anaconda without a [CUDA-capable](https://developer.nvidia.com/cuda-zone) or [ROCm-capable](https://docs.amd.com) system, or if you do not require CUDA/ROCm (i.e. GPU support), in the above selector choose OS: Linux, Package: Conda, Language: Python and Compute Platform: CPU.
 Then, run the command that is presented to you.

 #### With CUDA

 To install PyTorch via Anaconda on a [CUDA-capable](https://developer.nvidia.com/cuda-zone) system, in the above selector choose OS: Linux, Package: Conda and the CUDA version suited to your machine. Often, the latest CUDA version is better.
 Then, run the command that is presented to you.

+#### With ROCm
+
+PyTorch via Anaconda is currently not supported on ROCm. Please use pip instead.
+

 ### pip
 {: #linux-pip}

 #### No CUDA

-To install PyTorch via pip, and do not have a [CUDA-capable](https://developer.nvidia.com/cuda-zone) system or do not require CUDA, in the above selector, choose OS: Linux, Package: Pip and CUDA: None.
+To install PyTorch via pip without a [CUDA-capable](https://developer.nvidia.com/cuda-zone) or [ROCm-capable](https://docs.amd.com) system, or if you do not require CUDA/ROCm (i.e. GPU support), in the above selector choose OS: Linux, Package: Pip, Language: Python and Compute Platform: CPU.
 Then, run the command that is presented to you.

 #### With CUDA

-To install PyTorch via pip on a [CUDA-capable](https://developer.nvidia.com/cuda-zone) system, in the above selector choose OS: Linux, Package: Pip and the CUDA version suited to your machine. Often, the latest CUDA version is better.
+To install PyTorch via pip on a [CUDA-capable](https://developer.nvidia.com/cuda-zone) system, in the above selector choose OS: Linux, Package: Pip, Language: Python and the CUDA version suited to your machine. Often, the latest CUDA version is better.
+Then, run the command that is presented to you.
+
+#### With ROCm
+
+To install PyTorch via pip on a [ROCm-capable](https://docs.amd.com) system, in the above selector choose OS: Linux, Package: Pip, Language: Python and the supported ROCm version.
 Then, run the command that is presented to you.

 ## Verification
@@ -126,7 +135,7 @@ tensor([[0.3380, 0.3845, 0.3217],
 [0.4675, 0.3947, 0.1426]])
 ```

-Additionally, to check if your GPU driver and CUDA is enabled and accessible by PyTorch, run the following commands to return whether or not the CUDA driver is enabled:
+Additionally, to check whether your GPU driver and CUDA/ROCm are enabled and accessible by PyTorch, run the following commands to return whether or not the GPU driver is enabled. The ROCm build of PyTorch uses the same semantics at the Python API level ([HIP interfaces reuse the CUDA interfaces](https://github.com/pytorch/pytorch/blob/master/docs/source/notes/hip.rst#hip-interfaces-reuse-the-cuda-interfaces)), so the commands below also work for ROCm:

 ```python
 import torch
@@ -141,8 +150,10 @@ For the majority of PyTorch users, installing from a pre-built binary via a pack
 ### Prerequisites
 {: #linux-prerequisites-2}

-1. Install [Anaconda](#anaconda)
-2. Install [CUDA](https://developer.nvidia.com/cuda-downloads), if your machine has a [CUDA-enabled GPU](https://developer.nvidia.com/cuda-gpus).
+1. Install [Anaconda](#anaconda) or [Pip](#pip)
+2. If you need to build PyTorch with GPU support:
+   a. For NVIDIA GPUs, install [CUDA](https://developer.nvidia.com/cuda-downloads), if your machine has a [CUDA-enabled GPU](https://developer.nvidia.com/cuda-gpus).
+   b. For AMD GPUs, install [ROCm](https://docs.amd.com), if your machine has a [ROCm-enabled GPU](https://docs.amd.com).
 3. Follow the steps described here: [https://github.com/pytorch/pytorch#from-source](https://github.com/pytorch/pytorch#from-source)

 You can verify the installation as described [above](#linux-verification).
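
The verification snippet in the third hunk is cut off in this view; a minimal version of the check it refers to, consistent with the note that the ROCm build reuses the `torch.cuda` interfaces, would look like:

```python
import torch

# A random tensor confirms the core library works.
x = torch.rand(5, 3)
print(x)

# True if PyTorch can reach a GPU. The ROCm build reuses the torch.cuda
# interface, so the same call also covers supported AMD GPUs.
print(torch.cuda.is_available())
```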

_includes/production.html

Lines changed: 3 additions & 0 deletions
@@ -237,6 +237,9 @@ <h2>Technology</h2>
   <div class="production-item">
     <a class="technology-4" href="https://www.youtube.com/watch?v=LBOIxA5sg2A">PyTorch Community Voices Interview with Alexander O’Connor and Binghui Ouyang of Autodesk</a>
   </div>
+  <div class="production-item">
+    <a class="technology-5" href="https://customers.microsoft.com/en-us/story/1360878173403369154-isid-partner-professional-services-azure-en">ISID heightens value buried in text using Azure Machine Learning and PyTorch</a>
+  </div>
 </div>
 <div id="travel" class="production-section col-md-8 offset-md-1 container">
   <h2>Travel</h2>

_includes/quick_start_local.html

Lines changed: 1 addition & 1 deletion
@@ -33,7 +33,7 @@
   <div class="option-text">PyTorch Build</div>
 </div>
 <div class="col-md-4 option block version selected" id="stable">
-  <div class="option-text">Stable (1.12.0)</div>
+  <div class="option-text">Stable (1.12.1)</div>
 </div>
 <div class="col-md-4 option block version" id="preview">
   <div class="option-text">Preview (Nightly)</div>

_posts/2022-2-8-quantization-in-practice.md

Lines changed: 6 additions & 2 deletions
@@ -257,17 +257,20 @@ PTQ also pre-quantizes model weights but instead of calibrating activations on-t

 import torch
 from torch import nn
+import copy

 backend = "fbgemm"  # running on an x86 CPU. Use "qnnpack" if running on ARM.

-m = nn.Sequential(
+model = nn.Sequential(
      nn.Conv2d(2,64,3),
      nn.ReLU(),
      nn.Conv2d(64, 128, 3),
      nn.ReLU()
 )

 ## EAGER MODE
+m = copy.deepcopy(model)
+m.eval()
 """Fuse
 - Inplace fusion replaces the first module in the sequence with the fused module, and the rest with identity modules
 """
@@ -300,10 +303,11 @@ print(m[[1]].weight().element_size()) # 1 byte instead of 4 bytes for FP32

 ## FX GRAPH
 from torch.quantization import quantize_fx
+m = copy.deepcopy(model)
 m.eval()
 qconfig_dict = {"": torch.quantization.get_default_qconfig(backend)}
 # Prepare
-model_prepared = quantize_fx.prepare_fx(model_to_quantize, qconfig_dict)
+model_prepared = quantize_fx.prepare_fx(m, qconfig_dict)
 # Calibrate - Use representative (validation) data.
 with torch.inference_mode():
     for _ in range(10):
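
To make the effect of the `copy.deepcopy` fix concrete, here is a self-contained sketch of the FX graph mode PTQ flow from this post, using made-up random calibration data. The `prepare_fx(model, qconfig_dict)` signature follows the 1.x API used in the post; newer releases also expect example inputs.

```python
import copy

import torch
from torch import nn
from torch.quantization import quantize_fx

backend = "fbgemm"  # x86 CPU; use "qnnpack" on ARM

model = nn.Sequential(
    nn.Conv2d(2, 64, 3),
    nn.ReLU(),
    nn.Conv2d(64, 128, 3),
    nn.ReLU(),
)

# Quantize a copy so the original FP32 model stays untouched.
m = copy.deepcopy(model)
m.eval()

qconfig_dict = {"": torch.quantization.get_default_qconfig(backend)}
model_prepared = quantize_fx.prepare_fx(m, qconfig_dict)

# Calibrate the observers with representative data (random placeholders here).
with torch.inference_mode():
    for _ in range(10):
        model_prepared(torch.randn(4, 2, 28, 28))

# Convert to the quantized model.
model_quantized = quantize_fx.convert_fx(model_prepared)
print(type(model_quantized))
```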

0 commit comments
