
Commit 0d48b72

Author: Jessica Lin
Merge pull request #1035 from jlin27/master
Update feature classification labels
2 parents 43e3eb7 + 9a7250d commit 0d48b72

9 files changed: 66 additions, 66 deletions

advanced_source/dynamic_quantization_tutorial.py

Lines changed: 2 additions & 2 deletions
@@ -1,5 +1,5 @@
"""
-(experimental) Dynamic Quantization on an LSTM Word Language Model
+(beta) Dynamic Quantization on an LSTM Word Language Model
==================================================================

**Author**: `James Reed <https://github.com/jamesr66a>`_
@@ -13,7 +13,7 @@
to int, which can result in smaller model size and faster inference with only a small
hit to accuracy.

-In this tutorial, we'll apply the easiest form of quantization -
+In this tutorial, we'll apply the easiest form of quantization -
`dynamic quantization <https://pytorch.org/docs/stable/quantization.html#torch.quantization.quantize_dynamic>`_ -
to an LSTM-based next word-prediction model, closely following the
`word language model <https://github.com/pytorch/examples/tree/master/word_language_model>`_
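As a quick reminder of what the relabeled tutorial covers, dynamic quantization comes down to a single call on a trained float model. A minimal sketch (the toy model, vocabulary size, and dimensions below are illustrative and not taken from the tutorial):

import torch
import torch.nn as nn

# Hypothetical stand-in for a word-language model: embedding -> LSTM -> linear decoder.
class ToyWordLM(nn.Module):
    def __init__(self, vocab_size=1000, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.LSTM(dim, dim, num_layers=2)
        self.decoder = nn.Linear(dim, vocab_size)

    def forward(self, tokens, hidden=None):
        out, hidden = self.rnn(self.embed(tokens), hidden)
        return self.decoder(out), hidden

model = ToyWordLM()

# Weights of the listed module types are converted to int8; activations are
# quantized on the fly at inference time, so no calibration data is needed.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.LSTM, nn.Linear}, dtype=torch.qint8
)
print(quantized)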

advanced_source/static_quantization_tutorial.py

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
"""
-(experimental) Static Quantization with Eager Mode in PyTorch
+(beta) Static Quantization with Eager Mode in PyTorch
=========================================================

**Author**: `Raghuraman Krishnamoorthi <https://github.com/raghuramank100>`_

index.rst

Lines changed: 6 additions & 6 deletions
@@ -203,14 +203,14 @@ Welcome to PyTorch Tutorials
.. Frontend APIs

.. customcarditem::
-   :header: (experimental) Introduction to Named Tensors in PyTorch
+   :header: (prototype) Introduction to Named Tensors in PyTorch
   :card_description: Learn how to use PyTorch to train a Deep Q Learning (DQN) agent on the CartPole-v0 task from the OpenAI Gym.
   :image: _static/img/thumbnails/cropped/experimental-Introduction-to-Named-Tensors-in-PyTorch.png
   :link: intermediate/memory_format_tutorial.html
   :tags: Frontend-APIs,Named-Tensor,Best-Practice

.. customcarditem::
-   :header: (experimental) Channels Last Memory Format in PyTorch
+   :header: (beta) Channels Last Memory Format in PyTorch
   :card_description: Get an overview of Channels Last memory format and understand how it is used to order NCHW tensors in memory preserving dimensions.
   :image: _static/img/thumbnails/cropped/experimental-Channels-Last-Memory-Format-in-PyTorch.png
   :link: intermediate/memory_format_tutorial.html
@@ -261,28 +261,28 @@ Welcome to PyTorch Tutorials
   :tags: Model-Optimization,Best-Practice

.. customcarditem::
-   :header: (experimental) Dynamic Quantization on an LSTM Word Language Model
+   :header: (beta) Dynamic Quantization on an LSTM Word Language Model
   :card_description: Apply dynamic quantization, the easiest form of quantization, to a LSTM-based next word prediction model.
   :image: _static/img/thumbnails/cropped/experimental-Dynamic-Quantization-on-an-LSTM-Word-Language-Model.png
   :link: advanced/dynamic_quantization_tutorial.html
   :tags: Text,Quantization,Model-Optimization

.. customcarditem::
-   :header: (experimental) Dynamic Quantization on BERT
+   :header: (beta) Dynamic Quantization on BERT
   :card_description: Apply the dynamic quantization on a BERT (Bidirectional Embedding Representations from Transformers) model.
   :image: _static/img/thumbnails/cropped/experimental-Dynamic-Quantization-on-BERT.png
   :link: intermediate/dynamic_quantization_bert_tutorial.html
   :tags: Text,Quantization,Model-Optimization

.. customcarditem::
-   :header: (experimental) Static Quantization with Eager Mode in PyTorch
+   :header: (beta) Static Quantization with Eager Mode in PyTorch
   :card_description: Learn techniques to impove a model's accuracy = post-training static quantization, per-channel quantization, and quantization-aware training.
   :image: _static/img/thumbnails/cropped/experimental-Static-Quantization-with-Eager-Mode-in-PyTorch.png
   :link: advanced/static_quantization_tutorial.html
   :tags: Image/Video,Quantization,Model-Optimization

.. customcarditem::
-   :header: (experimental) Quantized Transfer Learning for Computer Vision Tutorial
+   :header: (beta) Quantized Transfer Learning for Computer Vision Tutorial
   :card_description: Learn techniques to impove a model's accuracy - post-training static quantization, per-channel quantization, and quantization-aware training.
   :image: _static/img/thumbnails/cropped/experimental-Quantized-Transfer-Learning-for-Computer-Vision-Tutorial.png
   :link: advanced/static_quantization_tutorial.html

intermediate_source/dynamic_quantization_bert_tutorial.rst

Lines changed: 4 additions & 4 deletions
@@ -1,10 +1,10 @@
-(experimental) Dynamic Quantization on BERT
+(beta) Dynamic Quantization on BERT
===========================================

.. tip::
-   To get the most of this tutorial, we suggest using this
+   To get the most of this tutorial, we suggest using this
   `Colab Version <https://colab.research.google.com/github/pytorch/tutorials/blob/gh-pages/_downloads/dynamic_quantization_bert_tutorial.ipynb>`_. This will allow you to experiment with the information presented below.
-
+
**Author**: `Jianyu Huang <https://github.com/jianyuh>`_

**Reviewed by**: `Raghuraman Krishnamoorthi <https://github.com/raghuramank100>`_
@@ -71,7 +71,7 @@ built-in F1 score calculation helper function.
   pip install transformers


-Because we will be using the experimental parts of the PyTorch, it is
+Because we will be using the beta parts of the PyTorch, it is
recommended to install the latest version of torch and torchvision. You
can find the most recent instructions on local installation `here
<https://pytorch.org/get-started/locally/>`_. For example, to install on

intermediate_source/memory_format_tutorial.py

Lines changed: 14 additions & 14 deletions
@@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
"""
-(experimental) Channels Last Memory Format in PyTorch
+(beta) Channels Last Memory Format in PyTorch
*******************************************************
**Author**: `Vitaly Fedyunin <https://github.com/VitalyFedyunin>`_

@@ -11,12 +11,12 @@

For example, classic (contiguous) storage of NCHW tensor (in our case it is two 2x2 images with 3 color channels) look like this:

-.. figure:: /_static/img/classic_memory_format.png
+.. figure:: /_static/img/classic_memory_format.png
   :alt: classic_memory_format

Channels Last memory format orders data differently:

-.. figure:: /_static/img/channels_last_memory_format.png
+.. figure:: /_static/img/channels_last_memory_format.png
   :alt: channels_last_memory_format

Pytorch supports memory formats (and provides back compatibility with existing models including eager, JIT, and TorchScript) by utilizing existing strides structure.
@@ -34,7 +34,7 @@
# Memory Format API
# -----------------------
#
-# Here is how to convert tensors between contiguous and channels
+# Here is how to convert tensors between contiguous and channels
# last memory formats.

######################################################################
@@ -104,9 +104,9 @@
######################################################################
# Performance Gains
# -------------------------------------------------------------------------------------------
-# The most significant performance gains are observed on NVidia's hardware with
+# The most significant performance gains are observed on Nvidia's hardware with
# Tensor Cores support. We were able to archive over 22% perf gains while running '
-# AMP (Automated Mixed Precision) training scripts supplied by NVidia https://github.com/NVIDIA/apex.
+# AMP (Automated Mixed Precision) training scripts supplied by Nvidia https://github.com/NVIDIA/apex.
#
# ``python main_amp.py -a resnet50 --b 200 --workers 16 --opt-level O2 ./data``

@@ -144,7 +144,7 @@

######################################################################
# Passing ``--channels-last true`` allows running a model in Channels Last format with observed 22% perf gain.
-#
+#
# ``python main_amp.py -a resnet50 --b 200 --workers 16 --opt-level O2 --channels-last true ./data``

# opt_level = O2
@@ -192,7 +192,7 @@
# Converting existing models
# --------------------------
#
-# Channels Last support not limited by existing models, as any model can be converted to Channels Last and propagate format through the graph as soon as input formatted correctly.
+# Channels Last support not limited by existing models, as any model can be converted to Channels Last and propagate format through the graph as soon as input formatted correctly.
#

# Need to be done once, after model initialization (or load)
@@ -203,12 +203,12 @@
output = model(input)

#######################################################################
-# However, not all operators fully converted to support Channels Last (usually returning
-# contiguous output instead). That means you need to verify the list of used operators
-# against supported operators list https://github.com/pytorch/pytorch/wiki/Operators-with-Channels-Last-support,
+# However, not all operators fully converted to support Channels Last (usually returning
+# contiguous output instead). That means you need to verify the list of used operators
+# against supported operators list https://github.com/pytorch/pytorch/wiki/Operators-with-Channels-Last-support,
# or introduce memory format checks into eager execution mode and run your model.
-#
-# After running the code below, operators will raise an exception if the output of the
+#
+# After running the code below, operators will raise an exception if the output of the
# operator doesn't match the memory format of the input.
#
#
@@ -282,7 +282,7 @@ def attribute(m):

######################################################################
# If you found an operator that doesn't support Channels Last tensors
-# and you want to contribute, feel free to use following developers
+# and you want to contribute, feel free to use following developers
# guide https://github.com/pytorch/pytorch/wiki/Writing-memory-format-aware-operators.
#
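The comments touched in this file describe converting tensors and models to Channels Last; the conversion itself is a one-line call. A minimal sketch (the Conv2d layer and shapes are illustrative, not the tutorial's own code):

import torch
import torch.nn as nn

x = torch.randn(2, 3, 4, 4)                                    # NCHW tensor
x_cl = x.to(memory_format=torch.channels_last)                 # same shape, NHWC-style strides
print(x_cl.is_contiguous(memory_format=torch.channels_last))   # True

# Back to the classic contiguous layout.
x_back = x_cl.contiguous(memory_format=torch.contiguous_format)
print(x_back.is_contiguous())                                  # True

# Converting an existing model is done once, after initialization (or load);
# conv outputs then stay channels-last as the format propagates through the graph.
model = nn.Conv2d(3, 8, kernel_size=3).to(memory_format=torch.channels_last)
out = model(x_cl)
print(out.is_contiguous(memory_format=torch.channels_last))    # True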

intermediate_source/named_tensor_tutorial.py

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
"""
-(experimental) Introduction to Named Tensors in PyTorch
+(prototype) Introduction to Named Tensors in PyTorch
*******************************************************
**Author**: `Richard Zou <https://github.com/zou3519>`_


intermediate_source/quantized_transfer_learning_tutorial.rst

Lines changed: 5 additions & 5 deletions
@@ -1,10 +1,10 @@
-(experimental) Quantized Transfer Learning for Computer Vision Tutorial
+(beta) Quantized Transfer Learning for Computer Vision Tutorial
========================================================================

.. tip::
-   To get the most of this tutorial, we suggest using this
-   `Colab Version <https://colab.research.google.com/github/pytorch/tutorials/blob/gh-pages/_downloads/quantized_transfer_learning_tutorial.ipynb>`_.
-   This will allow you to experiment with the information presented below.
+   To get the most of this tutorial, we suggest using this
+   `Colab Version <https://colab.research.google.com/github/pytorch/tutorials/blob/gh-pages/_downloads/quantized_transfer_learning_tutorial.ipynb>`_.
+   This will allow you to experiment with the information presented below.

**Author**: `Zafar Takhirov <https://github.com/z-a-f>`_

@@ -62,7 +62,7 @@ such as installations and data loading/visualizations.
Installing the Nightly Build
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-Because you will be using the experimental parts of the PyTorch, it is
+Because you will be using the beta parts of the PyTorch, it is
recommended to install the latest version of ``torch`` and
``torchvision``. You can find the most recent instructions on local
installation `here <https://pytorch.org/get-started/locally/>`_.

intermediate_source/rpc_tutorial.rst

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ Getting Started with Distributed RPC Framework

This tutorial uses two simple examples to demonstrate how to build distributed
training with the `torch.distributed.rpc <https://pytorch.org/docs/master/rpc.html>`__
-package which is first introduced as an experimental feature in PyTorch v1.4.
+package which is first introduced as a prototype feature in PyTorch v1.4.
Source code of the two examples can be found in
`PyTorch examples <https://github.com/pytorch/examples>`__.


recipes_source/recipes/dynamic_quantization.py

Lines changed: 32 additions & 32 deletions
@@ -127,13 +127,13 @@

# define a very, very simple LSTM for demonstration purposes
# in this case, we are wrapping nn.LSTM, one layer, no pre or post processing
-# inspired by
+# inspired by
# https://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html, by Robert Guthrie
# and https://pytorch.org/tutorials/advanced/dynamic_quantization_tutorial.html
class lstm_for_demonstration(nn.Module):
  """Elementary Long Short Term Memory style model which simply wraps nn.LSTM
-     Not to be used for anything other than demonstration.
-  """
+     Not to be used for anything other than demonstration.
+  """
  def __init__(self,in_dim,out_dim,depth):
    super(lstm_for_demonstration,self).__init__()
    self.lstm = nn.LSTM(in_dim,out_dim,depth)
@@ -142,7 +142,7 @@ def forward(self,inputs,hidden):
    out,hidden = self.lstm(inputs,hidden)
    return out, hidden

-
+
torch.manual_seed(29592)  # set the seed for reproducibility

#shape parameters
@@ -154,32 +154,32 @@ def forward(self,inputs,hidden):
# random data for input
inputs = torch.randn(sequence_length,batch_size,model_dimension)
# hidden is actually is a tuple of the initial hidden state and the initial cell state
-hidden = (torch.randn(lstm_depth,batch_size,model_dimension), torch.randn(lstm_depth,batch_size,model_dimension))
+hidden = (torch.randn(lstm_depth,batch_size,model_dimension), torch.randn(lstm_depth,batch_size,model_dimension))


######################################################################
# 2: Do the Quantization
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-#
+#
# Now we get to the fun part. First we create an instance of the model
# called float\_lstm then we are going to quantize it. We're going to use
# the
-#
+#
# ::
-#
+#
#    torch.quantization.quantize_dynamic()
-#
+#
# function here (`see
# documentation <https://pytorch.org/docs/stable/quantization.html#torch.quantization.quantize_dynamic>`__)
# which takes the model, then a list of the submodules which we want to
# have quantized if they appear, then the datatype we are targeting. This
# function returns a quantized version of the original model as a new
# module.
-#
+#
# That's all it takes.
-#
+#

-# here is our floating point instance
+# here is our floating point instance
float_lstm = lstm_for_demonstration(model_dimension, model_dimension,lstm_depth)

# this is the call that does the work
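The hunk ends just before the call it describes; roughly, that call looks like the following sketch, which reuses the recipe's names but is not an exact excerpt (the dimensions are illustrative):

import torch
import torch.nn as nn

class lstm_for_demonstration(nn.Module):
    # Minimal restatement of the recipe's demo wrapper around nn.LSTM.
    def __init__(self, in_dim, out_dim, depth):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, out_dim, depth)
    def forward(self, inputs, hidden):
        return self.lstm(inputs, hidden)

float_lstm = lstm_for_demonstration(8, 8, 1)   # illustrative sizes

# The call described above: the model, the set of submodule types to quantize,
# and the target dtype; it returns a new quantized module.
quantized_lstm = torch.quantization.quantize_dynamic(
    float_lstm, {nn.LSTM}, dtype=torch.qint8
)
print(quantized_lstm)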
@@ -206,7 +206,7 @@ def forward(self,inputs,hidden):
# (for example you can set model dimension to something like 80) this will
# converge towards 4x smaller as the stored model size dominated more and
# more by the parameter values.
-#
+#

def print_size_of_model(model, label=""):
    torch.save(model.state_dict(), "temp.p")
@@ -221,23 +221,23 @@ def print_size_of_model(model, label=""):
    print("{0:.2f} times smaller".format(f/q))

# note that this value is wrong in PyTorch 1.4 due to https://github.com/pytorch/pytorch/issues/31468
-# this will be fixed in 1.5 with https://github.com/pytorch/pytorch/pull/31540
+# this will be fixed in 1.5 with https://github.com/pytorch/pytorch/pull/31540


######################################################################
# 4. Look at Latency
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# The second benefit is that the quantized model will typically run
# faster. This is due to a combinations of effects including at least:
-#
+#
# 1. Less time spent moving parameter data in
# 2. Faster INT8 operations
-#
+#
# As you will see the quantized version of this super-simple network runs
# faster. This will generally be true of more complex networks but as they
# say "your milage may vary" depending on a number of factors including
# the structure of the model and the hardware you are running on.
-#
+#

# compare the performance
print("Floating point FP32")
@@ -255,10 +255,10 @@ def print_size_of_model(model, label=""):
# trained one. However, I think it is worth quickly showing that the
# quantized network does produce output tensors that are "in the same
# ballpark" as the original one.
-#
+#
# For a more detailed analysis please see the more advanced tutorials
# referenced at the end of this recipe.
-#
+#

# run the float model
out1, hidden1 = float_lstm(inputs, hidden)
@@ -270,7 +270,7 @@ def print_size_of_model(model, label=""):
mag2 = torch.mean(abs(out2)).item()
print('mean absolute value of output tensor values in the INT8 model is {0:.5f}'.format(mag2))

-# compare them
+# compare them
mag3 = torch.mean(abs(out1-out2)).item()
print('mean absolute value of the difference between the output tensors is {0:.5f} or {1:.2f} percent'.format(mag3,mag3/mag1*100))

@@ -281,26 +281,26 @@ def print_size_of_model(model, label=""):
# We've explained what dynamic quantization is, what benefits it brings,
# and you have used the ``torch.quantization.quantize_dynamic()`` function
# to quickly quantize a simple LSTM model.
-#
+#
# This was a fast and high level treatment of this material; for more
-# detail please continue learning with `(experimental) Dynamic Quantization on an LSTM Word Language Model Tutorial <https://pytorch.org/tutorials/advanced/dynamic\_quantization\_tutorial.html>`_.
-#
-#
+# detail please continue learning with `(beta) Dynamic Quantization on an LSTM Word Language Model Tutorial <https://pytorch.org/tutorials/advanced/dynamic\_quantization\_tutorial.html>`_.
+#
+#
# Additional Resources
# =========
# Documentation
# ~~~~~~~~~~~~~~
-#
+#
# `Quantization API Documentaion <https://pytorch.org/docs/stable/quantization.html>`_
-#
+#
# Tutorials
# ~~~~~~~~~~~~~~
-#
-# `(experimental) Dynamic Quantization on BERT <https://pytorch.org/tutorials/intermediate/dynamic\_quantization\_bert\_tutorial.html>`_
-#
-# `(experimental) Dynamic Quantization on an LSTM Word Language Model <https://pytorch.org/tutorials/advanced/dynamic\_quantization\_tutorial.html>`_
-#
+#
+# `(beta) Dynamic Quantization on BERT <https://pytorch.org/tutorials/intermediate/dynamic\_quantization\_bert\_tutorial.html>`_
+#
+# `(beta) Dynamic Quantization on an LSTM Word Language Model <https://pytorch.org/tutorials/advanced/dynamic\_quantization\_tutorial.html>`_
+#
# Blogs
# ~~~~~~~~~~~~~~
# ` Introduction to Quantization on PyTorch <https://pytorch.org/blog/introduction-to-quantization-on-pytorch/>`_
-#
+#
