Commit cf4dc17

Author: Svetlana Karslioglu
Merge branch 'main' into suraj813-patch-1
2 parents ed493d1 + 48d8207, commit cf4dc17

File tree: 68 files changed, +3679 −1135 lines changed

.circleci/README.md

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+Do not edit `config.yml` directly, make all the changes to `config.yml.in` and then run `regenerate.py`

.circleci/config.yml

Lines changed: 3 additions & 0 deletions
@@ -181,6 +181,9 @@ pytorch_windows_build_worker: &pytorch_windows_build_worker
       - beginner_source/data
       - intermediate_source/data
      - prototype_source/data
+      - store_artifacts:
+          path: ./docs/build/html
+          destination: docs

 jobs:
   pytorch_tutorial_pr_build_manager:

.circleci/config.yml.in

Lines changed: 3 additions & 0 deletions
@@ -181,6 +181,9 @@ pytorch_windows_build_worker: &pytorch_windows_build_worker
       - beginner_source/data
       - intermediate_source/data
       - prototype_source/data
+      - store_artifacts:
+          path: ./docs/build/html
+          destination: docs
 {% endraw %}

 jobs:
 {{ jobs("pr") }}

.github/ISSUE_TEMPLATE/bug-report.yml

Lines changed: 60 additions & 0 deletions
@@ -0,0 +1,60 @@
+name: 🐛 Bug Report
+description: Create a tutorial bug report
+title: "[BUG] - <title>"
+labels: [
+  "bug"
+]
+
+body:
+- type: markdown
+  attributes:
+    value: >
+      #### Before submitting a bug, please make sure the issue hasn't been already addressed by searching through [the existing and past issues](https://github.com/pytorch/tutorials/issues?q=is%3Aissue+sort%3Acreated-desc+).
+- type: textarea
+  attributes:
+    label: Add Link
+    description: |
+      **Add the link to the tutorial**
+    placeholder: |
+      Link to the tutorial on the website:
+  validations:
+    required: true
+- type: textarea
+  attributes:
+    label: Describe the bug
+    description: |
+      **Add the bug description**
+    placeholder: |
+      Provide a detailed description of the issue with code samples if relevant
+      ```python
+
+      # Sample code to reproduce the problem if relevant
+      ```
+
+      **Expected Result:** (Describe what you were expecting to see)
+
+      **Actual Result:** (Describe the result)
+
+      ```
+      The error message you got, with the full traceback.
+      ```
+
+  validations:
+    required: true
+- type: textarea
+  attributes:
+    label: Describe your environment
+    description: |
+      **Describe the environment you encountered the bug in:**
+    placeholder: |
+      * Platform (i.e., macOS, Linux, Google Colab):
+      * CUDA (yes/no, version?):
+      * PyTorch version (run `python -c "import torch; print(torch.__version__)"`):
+
+  validations:
+    required: true
+- type: markdown
+  attributes:
+    value: >
+      Thanks for contributing 🎉!
Lines changed: 37 additions & 0 deletions
@@ -0,0 +1,37 @@
+name: 🚀 Feature request
+description: Submit a proposal for a new PyTorch tutorial or improvement of an existing tutorial
+title: "💡 [REQUEST] - <title>"
+labels: [
+  "feature"
+]
+
+body:
+- type: textarea
+  attributes:
+    label: 🚀 Describe the improvement or the new tutorial
+    description: |
+      **Describe the improvement**
+    placeholder: |
+      Explain why this improvement or new tutorial is important. For example, *"This tutorial will help users to better understand feature X of PyTorch."* If there is a tutorial that you propose to replace, add it here. If this is related to another GitHub issue, add a link here.
+  validations:
+    required: true
+- type: textarea
+  attributes:
+    label: Existing tutorials on this topic
+    description: |
+      **Add a list of existing tutorials on the same topic.**
+    placeholder: |
+      List tutorials that already explain this functionality, if they exist, on pytorch.org or elsewhere.
+      * Link
+      * Link
+- type: textarea
+  attributes:
+    label: Additional context
+    description: |
+      **Add additional context**
+    placeholder: |
+      Add any other context or screenshots about the feature request.
+- type: markdown
+  attributes:
+    value: >
+      Thanks for contributing 🎉!

.github/workflows/spelling.yml

Lines changed: 20 additions & 0 deletions
@@ -0,0 +1,20 @@
+name: Check spelling
+
+on:
+  pull_request:
+  push:
+    branches:
+      - main
+jobs:
+  pyspelling:
+    runs-on: ubuntu-20.04
+    steps:
+      - uses: actions/checkout@v3
+      - uses: actions/setup-python@v4
+        with:
+          python-version: '3.9'
+          cache: 'pip'
+      - run: pip install pyspelling
+      - run: sudo apt-get install aspell aspell-en
+      - run: pyspelling

.gitignore

Lines changed: 3 additions & 0 deletions
@@ -124,3 +124,6 @@ cleanup.sh

 # VSCode
 *.vscode
+
+# pyspelling
+dictionary.dic

.jenkins/get_sphinx_filenames.py

Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
+from pathlib import Path
+from typing import List
+
+from get_files_to_run import get_all_files
+from validate_tutorials_built import NOT_RUN
+
+
+def get_files_for_sphinx() -> List[str]:
+    all_py_files = get_all_files()
+    return [x for x in all_py_files if all(y not in x for y in NOT_RUN)]
+
+
+SPHINX_SHOULD_RUN = "|".join(get_files_for_sphinx())
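The filtering this helper performs can be sketched outside the repo with stand-in data; the file names below are my own illustration, not values from this commit:

```python
from typing import List

# Stand-ins for get_files_to_run.get_all_files() and the NOT_RUN list
# (illustrative values only; the real ones live in the .jenkins scripts).
NOT_RUN = ["beginner_source/profiler", "prototype_source/vmap_recipe"]


def get_all_files() -> List[str]:
    return [
        "beginner_source/profiler.py",
        "beginner_source/basics/tensors.py",
        "prototype_source/vmap_recipe.py",
    ]


def get_files_for_sphinx() -> List[str]:
    # Keep only files whose path contains none of the NOT_RUN substrings
    all_py_files = get_all_files()
    return [x for x in all_py_files if all(y not in x for y in NOT_RUN)]


# Sphinx receives the surviving files joined into one alternation pattern
SPHINX_SHOULD_RUN = "|".join(get_files_for_sphinx())
print(SPHINX_SHOULD_RUN)  # beginner_source/basics/tensors.py
```

Joining with `|` makes the result usable directly as a regex alternation in the Sphinx configuration.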

.jenkins/validate_tutorials_built.py

Lines changed: 43 additions & 45 deletions
@@ -9,50 +9,49 @@
 # the file name to explain why, like intro.html), or fix the tutorial and remove it from this list).

 NOT_RUN = [
-    "basics/intro",  # no code
-    "translation_transformer",
-    "profiler",
-    "saving_loading_models",
-    "introyt/captumyt",
-    "examples_nn/polynomial_module",
-    "examples_nn/dynamic_net",
-    "examples_nn/polynomial_optim",
-    "former_torchies/autograd_tutorial_old",
-    "former_torchies/tensor_tutorial_old",
-    "examples_autograd/polynomial_autograd",
-    "examples_autograd/polynomial_custom_function",
-    "parametrizations",
-    "mnist_train_nas",  # used by ax_multiobjective_nas_tutorial.py
-    "fx_conv_bn_fuser",
-    "super_resolution_with_onnxruntime",
-    "ddp_pipeline",  # requires 4 gpus
-    "fx_graph_mode_ptq_dynamic",
-    "vmap_recipe",
-    "torchscript_freezing",
-    "nestedtensor",
-    "recipes/saving_and_loading_models_for_inference",
-    "recipes/saving_multiple_models_in_one_file",
-    "recipes/loading_data_recipe",
-    "recipes/tensorboard_with_pytorch",
-    "recipes/what_is_state_dict",
-    "recipes/profiler_recipe",
-    "recipes/save_load_across_devices",
-    "recipes/warmstarting_model_using_parameters_from_a_different_model",
-    "torch_compile_tutorial_",
-    "recipes/dynamic_quantization",
-    "recipes/saving_and_loading_a_general_checkpoint",
-    "recipes/benchmark",
-    "recipes/tuning_guide",
-    "recipes/zeroing_out_gradients",
-    "recipes/defining_a_neural_network",
-    "recipes/timer_quick_start",
-    "recipes/amp_recipe",
-    "recipes/Captum_Recipe",
-    "flask_rest_api_tutorial",
-    "text_to_speech_with_torchaudio",
+    "beginner_source/basics/intro",  # no code
+    "beginner_source/translation_transformer",
+    "beginner_source/profiler",
+    "beginner_source/saving_loading_models",
+    "beginner_source/introyt/captumyt",
+    "beginner_source/examples_nn/polynomial_module",
+    "beginner_source/examples_nn/dynamic_net",
+    "beginner_source/examples_nn/polynomial_optim",
+    "beginner_source/former_torchies/autograd_tutorial_old",
+    "beginner_source/former_torchies/tensor_tutorial_old",
+    "beginner_source/examples_autograd/polynomial_autograd",
+    "beginner_source/examples_autograd/polynomial_custom_function",
+    "intermediate_source/parametrizations",
+    "intermediate_source/mnist_train_nas",  # used by ax_multiobjective_nas_tutorial.py
+    "intermediate_source/fx_conv_bn_fuser",
+    "advanced_source/super_resolution_with_onnxruntime",
+    "advanced_source/ddp_pipeline",  # requires 4 gpus
+    "prototype_source/fx_graph_mode_ptq_dynamic",
+    "prototype_source/vmap_recipe",
+    "prototype_source/torchscript_freezing",
+    "prototype_source/nestedtensor",
+    "recipes_source/recipes/saving_and_loading_models_for_inference",
+    "recipes_source/recipes/saving_multiple_models_in_one_file",
+    "recipes_source/recipes/loading_data_recipe",
+    "recipes_source/recipes/tensorboard_with_pytorch",
+    "recipes_source/recipes/what_is_state_dict",
+    "recipes_source/recipes/profiler_recipe",
+    "recipes_source/recipes/save_load_across_devices",
+    "recipes_source/recipes/warmstarting_model_using_parameters_from_a_different_model",
+    "recipes_source/recipes/dynamic_quantization",
+    "recipes_source/recipes/saving_and_loading_a_general_checkpoint",
+    "recipes_source/recipes/benchmark",
+    "recipes_source/recipes/tuning_guide",
+    "recipes_source/recipes/zeroing_out_gradients",
+    "recipes_source/recipes/defining_a_neural_network",
+    "recipes_source/recipes/timer_quick_start",
+    "recipes_source/recipes/amp_recipe",
+    "recipes_source/recipes/Captum_Recipe",
+    "intermediate_source/flask_rest_api_tutorial",
+    "intermediate_source/text_to_speech_with_torchaudio",
+    "intermediate_source/tensorboard_profiler_tutorial"  # reenable after 2.0 release.
 ]

-
 def tutorial_source_dirs() -> List[Path]:
     return [
         p.relative_to(REPO_ROOT).with_name(p.stem[:-7])
@@ -67,6 +66,7 @@ def main() -> None:
     glob_path = f"{tutorial_source_dir}/**/*.html"
     html_file_paths += docs_dir.glob(glob_path)

+    should_not_run = [f'{x.replace("_source", "")}.html' for x in NOT_RUN]
     did_not_run = []
     for html_file_path in html_file_paths:
         with open(html_file_path, "r", encoding="utf-8") as html_file:
@@ -77,9 +77,7 @@ def main() -> None:
         if (
             "Total running time of the script: ( 0 minutes 0.000 seconds)"
             in elem.text
-            and not any(
-                html_file_path.match(file) for file in NOT_RUN
-            )
+            and not any(html_file_path.match(file) for file in should_not_run)
         ):
             did_not_run.append(html_file_path.as_posix())
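The key change in this file: `NOT_RUN` entries now carry their `*_source` directory prefix, and `main()` strips the `_source` suffix before matching against built HTML paths, since the rendered docs tree drops it. A minimal sketch of that mapping, using two sample entries from the list:

```python
# NOT_RUN entries include their "*_source" directory; the built HTML tree
# drops the "_source" suffix, so strip it before matching.
NOT_RUN = [
    "beginner_source/profiler",
    "recipes_source/recipes/benchmark",
]

should_not_run = [f'{x.replace("_source", "")}.html' for x in NOT_RUN]
print(should_not_run)
# ['beginner/profiler.html', 'recipes/recipes/benchmark.html']
```

Note that `str.replace` rewrites every occurrence of `"_source"` in a path, which is fine here because the substring only appears in the directory prefix.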

.pyspelling.yml

Lines changed: 42 additions & 0 deletions
@@ -0,0 +1,42 @@
+spellchecker: aspell
+matrix:
+- name: python
+  sources:
+  - beginner_source/*.py
+  dictionary:
+    wordlists:
+    - en-wordlist.txt
+  pipeline:
+  - pyspelling.filters.python:
+      group_comments: true
+  - pyspelling.filters.context:
+      context_visible_first: true
+      delimiters:
+      # Exclude figure rST tags
+      - open: '\.\.\s+(figure|literalinclude|math|image|grid)::'
+        close: '\n'
+      # Exclude raw directive
+      - open: '\.\. (raw)::.*$\n*'
+        close: '\n'
+      # Exclude Python coding directives
+      - open: '-\*- coding:'
+        close: '\n'
+      # Exclude Authors:
+      - open: 'Author(|s):'
+        close: '\n'
+      # Exclude .rst directives:
+      - open: ':math:`.*`'
+        close: ' '
+      # Ignore multiline content in codeblock
+      - open: '(?s)^::\n\n  '
+        close: '^\n'
+      # Ignore reStructuredText block directives
+      - open: '\.\. (code-block)::.*$\n*'
+        content: '(?P<first>(^(?P<indent>[ ]+).*$\n))(?P<other>(^([ \t]+.*|[ \t]*)$\n)*)'
+        close: '(^(?![ \t]+.*$))'
+  - pyspelling.filters.markdown:
+  - pyspelling.filters.html:
+      ignores:
+      - code
+      - pre
+  - pyspelling.filters.url:
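The context-filter delimiters in this config are plain regular expressions. As a quick sanity check (my own example, not part of the commit), the "Exclude figure rST tags" pattern matches exactly the directives it is meant to hide from the spell checker:

```python
import re

# The "Exclude figure rST tags" open pattern from .pyspelling.yml
open_pattern = re.compile(r'\.\.\s+(figure|literalinclude|math|image|grid)::')

# Directives in the alternation are matched and excluded from spell checking
assert open_pattern.search(".. figure:: /_static/img/invpendulum.gif")
assert open_pattern.search(".. image:: diagram.png")

# Other directives (e.g. notes) are left visible to the spell checker
assert open_pattern.search(".. note::") is None
print("delimiter matches as intended")
```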

_static/img/invpendulum.gif

29.6 KB

advanced_source/cpp_frontend.rst

Lines changed: 2 additions & 2 deletions
@@ -946,9 +946,9 @@ we use implement the `Adam <https://arxiv.org/pdf/1412.6980.pdf>`_ algorithm:
 .. code-block:: cpp

   torch::optim::Adam generator_optimizer(
-      generator->parameters(), torch::optim::AdamOptions(2e-4).beta1(0.5));
+      generator->parameters(), torch::optim::AdamOptions(2e-4).betas(std::make_tuple(0.5, 0.5)));
   torch::optim::Adam discriminator_optimizer(
-      discriminator->parameters(), torch::optim::AdamOptions(5e-4).beta1(0.5));
+      discriminator->parameters(), torch::optim::AdamOptions(5e-4).betas(std::make_tuple(0.5, 0.5)));

 .. note::

advanced_source/static_quantization_tutorial.rst

Lines changed: 5 additions & 3 deletions
@@ -458,7 +458,8 @@ quantizing for x86 architectures. This configuration does the following:
     per_channel_quantized_model = load_model(saved_model_dir + float_model_file)
     per_channel_quantized_model.eval()
     per_channel_quantized_model.fuse_model()
-    per_channel_quantized_model.qconfig = torch.ao.quantization.get_default_qconfig('fbgemm')
+    # The old 'fbgemm' is still available but 'x86' is the recommended default.
+    per_channel_quantized_model.qconfig = torch.ao.quantization.get_default_qconfig('x86')
     print(per_channel_quantized_model.qconfig)

     torch.ao.quantization.prepare(per_channel_quantized_model, inplace=True)
@@ -534,8 +535,9 @@ We fuse modules as before
     qat_model = load_model(saved_model_dir + float_model_file)
     qat_model.fuse_model()

-    optimizer = torch.optim.SGD(qat_model.parameters(), lr = 0.0001)
-    qat_model.qconfig = torch.ao.quantization.get_default_qat_qconfig('fbgemm')
+    optimizer = torch.optim.SGD(qat_model.parameters(), lr = 0.0001)
+    # The old 'fbgemm' is still available but 'x86' is the recommended default.
+    qat_model.qconfig = torch.ao.quantization.get_default_qat_qconfig('x86')

 Finally, ``prepare_qat`` performs the "fake quantization", preparing the model for quantization-aware training

beginner_source/Intro_to_TorchScript_tutorial.py

Lines changed: 4 additions & 4 deletions
@@ -2,7 +2,7 @@
 Introduction to TorchScript
 ===========================

-*James Reed (jamesreed@fb.com), Michael Suo (suo@fb.com)*, rev2
+**Authors:** James Reed (jamesreed@fb.com), Michael Suo (suo@fb.com), rev2

 This tutorial is an introduction to TorchScript, an intermediate
 representation of a PyTorch model (subclass of ``nn.Module``) that
@@ -147,7 +147,7 @@ def forward(self, x, h):


 ######################################################################
-# We’ve once again redefined our MyCell class, but here we’ve defined
+# We’ve once again redefined our ``MyCell`` class, but here we’ve defined
 # ``MyDecisionGate``. This module utilizes **control flow**. Control flow
 # consists of things like loops and ``if``-statements.
 #
@@ -202,7 +202,7 @@ def forward(self, x, h):
 # inputs* the network might see.
 #
 # What exactly has this done? It has invoked the ``Module``, recorded the
-# operations that occured when the ``Module`` was run, and created an
+# operations that occurred when the ``Module`` was run, and created an
 # instance of ``torch.jit.ScriptModule`` (of which ``TracedModule`` is an
 # instance)
 #
@@ -283,7 +283,7 @@ def forward(self, x, h):
 # Looking at the ``.code`` output, we can see that the ``if-else`` branch
 # is nowhere to be found! Why? Tracing does exactly what we said it would:
 # run the code, record the operations *that happen* and construct a
-# ScriptModule that does exactly that. Unfortunately, things like control
+# ``ScriptModule`` that does exactly that. Unfortunately, things like control
 # flow are erased.
 #
 # How can we faithfully represent this module in TorchScript? We provide a
Lines changed: 2 additions & 2 deletions
@@ -1,10 +1,10 @@
 Audio Datasets
 ==============

-This tutorial has been moved to https://pytorch.org/tutorials/beginner/audio_datasets_tutorial.html
+This tutorial has been moved to https://pytorch.org/audio/stable/tutorials/audio_datasets_tutorial.html

 It will redirect in 3 seconds.

 .. raw:: html

-   <meta http-equiv="Refresh" content="3; url='https://pytorch.org/tutorials/beginner/audio_datasets_tutorial.html'" />
+   <meta http-equiv="Refresh" content="3; url='https://pytorch.org/audio/stable/tutorials/audio_datasets_tutorial.html'" />
