Commit 0b06ec1

Merge branch 'master' into patch-1

2 parents 0daa06f + 67f76d3, commit 0b06ec1

14 files changed: +93 additions, -73 deletions

.jenkins/build.sh

Lines changed: 13 additions & 3 deletions

```diff
@@ -86,6 +86,16 @@ if [[ "${JOB_BASE_NAME}" == *worker_* ]]; then
       FILES_TO_RUN+=($(basename $filename .py))
     fi
     count=$((count+1))
+  done
+  for filename in $(find prototype_source/ -name '*.py' -not -path '*/data/*'); do
+    if [ $(($count % $NUM_WORKERS)) != $WORKER_ID ]; then
+      echo "Removing runnable code from "$filename
+      python $DIR/remove_runnable_code.py $filename $filename
+    else
+      echo "Keeping "$filename
+      FILES_TO_RUN+=($(basename $filename .py))
+    fi
+    count=$((count+1))
   done
   echo "FILES_TO_RUN: " ${FILES_TO_RUN[@]}
@@ -94,13 +104,13 @@ if [[ "${JOB_BASE_NAME}" == *worker_* ]]; then

   # Step 4: If any of the generated files are not related the tutorial files we want to run,
   # then we remove them
-  for filename in $(find docs/beginner docs/intermediate docs/advanced docs/recipes -name '*.html'); do
+  for filename in $(find docs/beginner docs/intermediate docs/advanced docs/recipes docs/prototype -name '*.html'); do
     file_basename=$(basename $filename .html)
     if [[ ! " ${FILES_TO_RUN[@]} " =~ " ${file_basename} " ]]; then
       rm $filename
     fi
   done
-  for filename in $(find docs/beginner docs/intermediate docs/advanced docs/recipes -name '*.rst'); do
+  for filename in $(find docs/beginner docs/intermediate docs/advanced docs/recipes docs/prototype -name '*.rst'); do
     file_basename=$(basename $filename .rst)
     if [[ ! " ${FILES_TO_RUN[@]} " =~ " ${file_basename} " ]]; then
       rm $filename
@@ -124,7 +134,7 @@ if [[ "${JOB_BASE_NAME}" == *worker_* ]]; then
       rm $filename
     fi
   done
-  for filename in $(find docs/.doctrees/beginner docs/.doctrees/intermediate docs/.doctrees/advanced docs/.doctrees/recipes -name '*.doctree'); do
+  for filename in $(find docs/.doctrees/beginner docs/.doctrees/intermediate docs/.doctrees/advanced docs/.doctrees/recipes docs/.doctrees/prototype -name '*.doctree'); do
     file_basename=$(basename $filename .doctree)
     if [[ ! " ${FILES_TO_RUN[@]} " =~ " ${file_basename} " ]]; then
       rm $filename
```
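The `prototype_source` loop added to `.jenkins/build.sh` reuses the script's round-robin sharding: file *i* is executed by worker `i % NUM_WORKERS`, and every other worker strips that file's runnable code, so each tutorial is built exactly once across the worker fleet. A minimal Python sketch of that selection logic (function and file names here are illustrative, not from the repo):

```python
def shard_files(filenames, num_workers, worker_id):
    """Return the files this worker keeps runnable (the script's FILES_TO_RUN);
    every other file would have its runnable code removed instead."""
    files_to_run = []
    for count, filename in enumerate(filenames):
        if count % num_workers == worker_id:  # same test as the shell script
            files_to_run.append(filename)
    return files_to_run

files = ["a_tutorial.py", "b_tutorial.py", "c_tutorial.py", "d_tutorial.py"]
print(shard_files(files, 2, 0))  # ['a_tutorial.py', 'c_tutorial.py']
print(shard_files(files, 2, 1))  # ['b_tutorial.py', 'd_tutorial.py']
```

One subtlety the sketch glosses over: in the real script `count` carries over from the earlier beginner/intermediate/advanced/recipes loops, so the prototype files continue the same round-robin rather than restarting at zero.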

README.md

Lines changed: 2 additions & 2 deletions

```diff
@@ -14,8 +14,8 @@ We use sphinx-gallery's [notebook styled examples](https://sphinx-gallery.github
 Here's how to create a new tutorial or recipe:
 1. Create a notebook styled python file. If you want it executed while inserted into documentation, save the file with suffix `tutorial` so that file name is `your_tutorial.py`.
 2. Put it in one of the beginner_source, intermediate_source, advanced_source based on the level. If it is a recipe, add to recipes_source.
-2. For Tutorials, include it in the TOC tree at index.rst
-3. For Tutorials, create a thumbnail in the [index.rst file](https://github.com/pytorch/tutorials/blob/master/index.rst) using a command like `.. customcarditem:: beginner/your_tutorial.html`. For Recipes, create a thumbnail in the [recipes_index.rst](https://github.com/pytorch/tutorials/blob/master/recipes_source/recipes_index.rst)
+2. For Tutorials (except if it is a prototype feature), include it in the TOC tree at index.rst
+3. For Tutorials (except if it is a prototype feature), create a thumbnail in the [index.rst file](https://github.com/pytorch/tutorials/blob/master/index.rst) using a command like `.. customcarditem:: beginner/your_tutorial.html`. For Recipes, create a thumbnail in the [recipes_index.rst](https://github.com/pytorch/tutorials/blob/master/recipes_source/recipes_index.rst)

 In case you prefer to write your tutorial in jupyter, you can use [this script](https://gist.github.com/chsasank/7218ca16f8d022e02a9c0deb94a310fe) to convert the notebook to python file. After conversion and addition to the project, please make sure the sections headings etc are in logical order.
```

advanced_source/dynamic_quantization_tutorial.py

Lines changed: 2 additions & 2 deletions

```diff
@@ -1,5 +1,5 @@
 """
-(experimental) Dynamic Quantization on an LSTM Word Language Model
+(beta) Dynamic Quantization on an LSTM Word Language Model
 ==================================================================

 **Author**: `James Reed <https://github.com/jamesr66a>`_
@@ -13,7 +13,7 @@
 to int, which can result in smaller model size and faster inference with only a small
 hit to accuracy.

-In this tutorial, we'll apply the easiest form of quantization - 
+In this tutorial, we'll apply the easiest form of quantization -
 `dynamic quantization <https://pytorch.org/docs/stable/quantization.html#torch.quantization.quantize_dynamic>`_ -
 to an LSTM-based next word-prediction model, closely following the
 `word language model <https://github.com/pytorch/examples/tree/master/word_language_model>`_
```
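The tutorial retitled above covers dynamic quantization, which maps float32 weights to int8 with a per-tensor scale, shrinking weight storage about 4x. As a toy illustration of the underlying arithmetic only, not the library implementation (PyTorch's actual entry point is the one-liner `torch.quantization.quantize_dynamic(model, {nn.LSTM, nn.Linear}, dtype=torch.qint8)`):

```python
# Toy symmetric int8 quantizer: scale chosen so the largest-magnitude
# weight maps to +/-127, then every weight is rounded to an integer.
def quantize_int8(values):
    """Quantize a list of floats to int8-range integers plus a scale."""
    scale = max(abs(v) for v in values) / 127 or 1.0  # guard against all-zero input
    return [round(v / scale) for v in values], scale

def dequantize(quantized, scale):
    """Map the integers back to approximate floats."""
    return [q * scale for q in quantized]

q, scale = quantize_int8([0.5, -1.27, 0.02])
print(q)                     # [50, -127, 2]
print(dequantize(q, scale))  # close to the original weights, up to rounding error
```

"Dynamic" refers to activations being quantized on the fly at inference time; the sketch above shows only the weight-side mapping.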

advanced_source/static_quantization_tutorial.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,5 +1,5 @@
 """
-(experimental) Static Quantization with Eager Mode in PyTorch
+(beta) Static Quantization with Eager Mode in PyTorch
 =========================================================

 **Author**: `Raghuraman Krishnamoorthi <https://github.com/raghuramank100>`_
```

conf.py

Lines changed: 3 additions & 2 deletions

```diff
@@ -63,8 +63,9 @@

 sphinx_gallery_conf = {
     'examples_dirs': ['beginner_source', 'intermediate_source',
-                      'advanced_source', 'recipes_source'],
-    'gallery_dirs': ['beginner', 'intermediate', 'advanced', 'recipes'],
+                      'advanced_source', 'recipes_source', 'prototype_source'],
+    'gallery_dirs': ['beginner', 'intermediate', 'advanced', 'recipes', 'prototype'],
+    'filename_pattern': 'tutorial.py',
     'backreferences_dir': False
 }

```
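The new `'filename_pattern': 'tutorial.py'` entry controls which gallery files sphinx-gallery actually *executes*: the pattern is treated as a regular expression and applied with `re.search` against each example's path, and non-matching files are rendered without being run. A small sketch of that matching (the file names are made up for illustration):

```python
import re

pattern = 'tutorial.py'  # a regex, so the '.' matches any single character
paths = [
    'prototype_source/my_feature_tutorial.py',  # contains "tutorial" + '.' + "py" -> executed
    'beginner_source/helper_functions.py',      # no match -> rendered, not executed
    'conf.py',                                  # no match
]
executed = [p for p in paths if re.search(pattern, p)]
print(executed)  # ['prototype_source/my_feature_tutorial.py']
```

This is consistent with the repo's README convention of naming runnable files `your_tutorial.py`; files without the `tutorial` suffix are skipped by the executor.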

index.rst

Lines changed: 6 additions & 6 deletions

```diff
@@ -203,14 +203,14 @@ Welcome to PyTorch Tutorials
 .. Frontend APIs

 .. customcarditem::
-   :header: (experimental) Introduction to Named Tensors in PyTorch
+   :header: (prototype) Introduction to Named Tensors in PyTorch
    :card_description: Learn how to use PyTorch to train a Deep Q Learning (DQN) agent on the CartPole-v0 task from the OpenAI Gym.
    :image: _static/img/thumbnails/cropped/experimental-Introduction-to-Named-Tensors-in-PyTorch.png
    :link: intermediate/memory_format_tutorial.html
    :tags: Frontend-APIs,Named-Tensor,Best-Practice

 .. customcarditem::
-   :header: (experimental) Channels Last Memory Format in PyTorch
+   :header: (beta) Channels Last Memory Format in PyTorch
    :card_description: Get an overview of Channels Last memory format and understand how it is used to order NCHW tensors in memory preserving dimensions.
    :image: _static/img/thumbnails/cropped/experimental-Channels-Last-Memory-Format-in-PyTorch.png
    :link: intermediate/memory_format_tutorial.html
@@ -261,28 +261,28 @@ Welcome to PyTorch Tutorials
    :tags: Model-Optimization,Best-Practice

 .. customcarditem::
-   :header: (experimental) Dynamic Quantization on an LSTM Word Language Model
+   :header: (beta) Dynamic Quantization on an LSTM Word Language Model
    :card_description: Apply dynamic quantization, the easiest form of quantization, to a LSTM-based next word prediction model.
    :image: _static/img/thumbnails/cropped/experimental-Dynamic-Quantization-on-an-LSTM-Word-Language-Model.png
    :link: advanced/dynamic_quantization_tutorial.html
    :tags: Text,Quantization,Model-Optimization

 .. customcarditem::
-   :header: (experimental) Dynamic Quantization on BERT
+   :header: (beta) Dynamic Quantization on BERT
    :card_description: Apply the dynamic quantization on a BERT (Bidirectional Embedding Representations from Transformers) model.
    :image: _static/img/thumbnails/cropped/experimental-Dynamic-Quantization-on-BERT.png
    :link: intermediate/dynamic_quantization_bert_tutorial.html
    :tags: Text,Quantization,Model-Optimization

 .. customcarditem::
-   :header: (experimental) Static Quantization with Eager Mode in PyTorch
+   :header: (beta) Static Quantization with Eager Mode in PyTorch
    :card_description: Learn techniques to impove a model's accuracy = post-training static quantization, per-channel quantization, and quantization-aware training.
    :image: _static/img/thumbnails/cropped/experimental-Static-Quantization-with-Eager-Mode-in-PyTorch.png
    :link: advanced/static_quantization_tutorial.html
    :tags: Image/Video,Quantization,Model-Optimization

 .. customcarditem::
-   :header: (experimental) Quantized Transfer Learning for Computer Vision Tutorial
+   :header: (beta) Quantized Transfer Learning for Computer Vision Tutorial
    :card_description: Learn techniques to impove a model's accuracy - post-training static quantization, per-channel quantization, and quantization-aware training.
    :image: _static/img/thumbnails/cropped/experimental-Quantized-Transfer-Learning-for-Computer-Vision-Tutorial.png
    :link: advanced/static_quantization_tutorial.html
```

intermediate_source/dynamic_quantization_bert_tutorial.rst

Lines changed: 4 additions & 4 deletions

```diff
@@ -1,10 +1,10 @@
-(experimental) Dynamic Quantization on BERT
+(beta) Dynamic Quantization on BERT
 ===========================================

 .. tip::
-   To get the most of this tutorial, we suggest using this 
+   To get the most of this tutorial, we suggest using this
    `Colab Version <https://colab.research.google.com/github/pytorch/tutorials/blob/gh-pages/_downloads/dynamic_quantization_bert_tutorial.ipynb>`_. This will allow you to experiment with the information presented below.
- 
+
 **Author**: `Jianyu Huang <https://github.com/jianyuh>`_

 **Reviewed by**: `Raghuraman Krishnamoorthi <https://github.com/raghuramank100>`_
@@ -71,7 +71,7 @@ built-in F1 score calculation helper function.
 pip install transformers


-Because we will be using the experimental parts of the PyTorch, it is
+Because we will be using the beta parts of the PyTorch, it is
 recommended to install the latest version of torch and torchvision. You
 can find the most recent instructions on local installation `here
 <https://pytorch.org/get-started/locally/>`_. For example, to install on
```

intermediate_source/memory_format_tutorial.py

Lines changed: 14 additions & 14 deletions

```diff
@@ -1,6 +1,6 @@
 # -*- coding: utf-8 -*-
 """
-(experimental) Channels Last Memory Format in PyTorch
+(beta) Channels Last Memory Format in PyTorch
 *******************************************************
 **Author**: `Vitaly Fedyunin <https://github.com/VitalyFedyunin>`_

@@ -11,12 +11,12 @@

 For example, classic (contiguous) storage of NCHW tensor (in our case it is two 2x2 images with 3 color channels) look like this:

-.. figure:: /_static/img/classic_memory_format.png 
+.. figure:: /_static/img/classic_memory_format.png
   :alt: classic_memory_format

 Channels Last memory format orders data differently:

-.. figure:: /_static/img/channels_last_memory_format.png 
+.. figure:: /_static/img/channels_last_memory_format.png
   :alt: channels_last_memory_format

 Pytorch supports memory formats (and provides back compatibility with existing models including eager, JIT, and TorchScript) by utilizing existing strides structure.
@@ -34,7 +34,7 @@
 # Memory Format API
 # -----------------------
 #
-# Here is how to convert tensors between contiguous and channels 
+# Here is how to convert tensors between contiguous and channels
 # last memory formats.

 ######################################################################
@@ -104,9 +104,9 @@
 ######################################################################
 # Performance Gains
 # -------------------------------------------------------------------------------------------
-# The most significant performance gains are observed on NVidia's hardware with
+# The most significant performance gains are observed on Nvidia's hardware with
 # Tensor Cores support. We were able to archive over 22% perf gains while running '
-# AMP (Automated Mixed Precision) training scripts supplied by NVidia https://github.com/NVIDIA/apex.
+# AMP (Automated Mixed Precision) training scripts supplied by Nvidia https://github.com/NVIDIA/apex.
 #
 # ``python main_amp.py -a resnet50 --b 200 --workers 16 --opt-level O2 ./data``

@@ -144,7 +144,7 @@

 ######################################################################
 # Passing ``--channels-last true`` allows running a model in Channels Last format with observed 22% perf gain.
-# 
+#
 # ``python main_amp.py -a resnet50 --b 200 --workers 16 --opt-level O2 --channels-last true ./data``

 # opt_level = O2
@@ -192,7 +192,7 @@
 # Converting existing models
 # --------------------------
 #
-# Channels Last support not limited by existing models, as any model can be converted to Channels Last and propagate format through the graph as soon as input formatted correctly. 
+# Channels Last support not limited by existing models, as any model can be converted to Channels Last and propagate format through the graph as soon as input formatted correctly.
 #

 # Need to be done once, after model initialization (or load)
@@ -203,12 +203,12 @@
 output = model(input)

 #######################################################################
-# However, not all operators fully converted to support Channels Last (usually returning 
-# contiguous output instead). That means you need to verify the list of used operators 
-# against supported operators list https://github.com/pytorch/pytorch/wiki/Operators-with-Channels-Last-support, 
+# However, not all operators fully converted to support Channels Last (usually returning
+# contiguous output instead). That means you need to verify the list of used operators
+# against supported operators list https://github.com/pytorch/pytorch/wiki/Operators-with-Channels-Last-support,
 # or introduce memory format checks into eager execution mode and run your model.
-# 
-# After running the code below, operators will raise an exception if the output of the 
+#
+# After running the code below, operators will raise an exception if the output of the
 # operator doesn't match the memory format of the input.
 #
 #
@@ -282,7 +282,7 @@ def attribute(m):

 ######################################################################
 # If you found an operator that doesn't support Channels Last tensors
-# and you want to contribute, feel free to use following developers 
+# and you want to contribute, feel free to use following developers
 # guide https://github.com/pytorch/pytorch/wiki/Writing-memory-format-aware-operators.
 #
```
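The channels last idea this tutorial describes is that a tensor keeps its logical (N, C, H, W) shape while its elements are stored in NHWC order, which PyTorch expresses purely through strides. A plain-Python stride-arithmetic sketch (independent of torch) for the tutorial's example of two 2x2 images with 3 channels:

```python
def contiguous_strides(shape):
    """Row-major (NCHW-contiguous) strides, in elements."""
    strides = [1] * len(shape)
    for i in range(len(shape) - 2, -1, -1):
        strides[i] = strides[i + 1] * shape[i + 1]
    return tuple(strides)

def channels_last_strides(n, c, h, w):
    """Strides of an (N, C, H, W)-shaped tensor stored as NHWC in memory,
    reported back in N, C, H, W order."""
    return (h * w * c, 1, w * c, c)

shape = (2, 3, 2, 2)  # two 2x2 images with 3 color channels
print(contiguous_strides(shape))      # (12, 4, 2, 1)
print(channels_last_strides(*shape))  # (12, 1, 6, 3)
```

These tuples should match what `x.stride()` reports for a contiguous tensor of that shape versus one converted with `x.to(memory_format=torch.channels_last)`: the channel stride drops to 1, so the three color values of each pixel sit next to each other in memory.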

intermediate_source/named_tensor_tutorial.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,6 +1,6 @@
 # -*- coding: utf-8 -*-
 """
-(experimental) Introduction to Named Tensors in PyTorch
+(prototype) Introduction to Named Tensors in PyTorch
 *******************************************************
 **Author**: `Richard Zou <https://github.com/zou3519>`_

```

intermediate_source/quantized_transfer_learning_tutorial.rst

Lines changed: 5 additions & 5 deletions

```diff
@@ -1,10 +1,10 @@
-(experimental) Quantized Transfer Learning for Computer Vision Tutorial
+(beta) Quantized Transfer Learning for Computer Vision Tutorial
 ========================================================================

 .. tip::
-   To get the most of this tutorial, we suggest using this 
-   `Colab Version <https://colab.research.google.com/github/pytorch/tutorials/blob/gh-pages/_downloads/quantized_transfer_learning_tutorial.ipynb>`_. 
-   This will allow you to experiment with the information presented below. 
+   To get the most of this tutorial, we suggest using this
+   `Colab Version <https://colab.research.google.com/github/pytorch/tutorials/blob/gh-pages/_downloads/quantized_transfer_learning_tutorial.ipynb>`_.
+   This will allow you to experiment with the information presented below.

 **Author**: `Zafar Takhirov <https://github.com/z-a-f>`_

@@ -62,7 +62,7 @@ such as installations and data loading/visualizations.
 Installing the Nightly Build
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-Because you will be using the experimental parts of the PyTorch, it is
+Because you will be using the beta parts of the PyTorch, it is
 recommended to install the latest version of ``torch`` and
 ``torchvision``. You can find the most recent instructions on local
 installation `here <https://pytorch.org/get-started/locally/>`_.
```

intermediate_source/rpc_tutorial.rst

Lines changed: 1 addition & 1 deletion

```diff
@@ -5,7 +5,7 @@ Getting Started with Distributed RPC Framework

 This tutorial uses two simple examples to demonstrate how to build distributed
 training with the `torch.distributed.rpc <https://pytorch.org/docs/master/rpc.html>`__
-package which is first introduced as an experimental feature in PyTorch v1.4.
+package which is first introduced as a prototype feature in PyTorch v1.4.
 Source code of the two examples can be found in
 `PyTorch examples <https://github.com/pytorch/examples>`__.
```

prototype_source/README.md

Lines changed: 7 additions & 0 deletions

```diff
@@ -0,0 +1,7 @@
+# Prototype Tutorials
+
+This directory contains tutorials demonstrating prototype features in PyTorch.
+
+**Prototype features** are not available as part of binary distributions like PyPI or Conda (except maybe behind run-time flags). To test these features we would, depending on the feature, recommend building from master or using the nightly wheelss that are made available on pytorch.org.
+
+*Level of commitment:* We are committing to gathering high bandwidth feedback only on these features. Based on this feedback and potential further engagement between community members, we as a community will decide if we want to upgrade the level of commitment or to fail fast.
```

prototype_source/README.txt

Lines changed: 2 additions & 0 deletions

```diff
@@ -0,0 +1,2 @@
+Prototype Tutorials
+------------------
```
