Commit 7e8508b

Merge branch 'master' into add-youtube-tutorials
2 parents 8ee6bcf + 7c6ff80

11 files changed (+417, -20 lines)

_static/torchvision_finetuning_instance_segmentation.ipynb

Lines changed: 1 addition & 1 deletion
@@ -1448,7 +1448,7 @@
     "        self.masks = list(sorted(os.listdir(os.path.join(root, \"PedMasks\"))))\n",
     "\n",
     "    def __getitem__(self, idx):\n",
-    "        # load images ad masks\n",
+    "        # load images and masks\n",
     "        img_path = os.path.join(self.root, \"PNGImages\", self.imgs[idx])\n",
     "        mask_path = os.path.join(self.root, \"PedMasks\", self.masks[idx])\n",
     "        img = Image.open(img_path).convert(\"RGB\")\n",

_static/tv-training-code.py

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ def __init__(self, root, transforms):
         self.masks = list(sorted(os.listdir(os.path.join(root, "PedMasks"))))

     def __getitem__(self, idx):
-        # load images ad masks
+        # load images and masks
         img_path = os.path.join(self.root, "PNGImages", self.imgs[idx])
         mask_path = os.path.join(self.root, "PedMasks", self.masks[idx])
         img = Image.open(img_path).convert("RGB")

beginner_source/basics/quickstart_tutorial.py

Lines changed: 1 addition & 1 deletion
@@ -83,7 +83,7 @@
 # Creating Models
 # ------------------
 # To define a neural network in PyTorch, we create a class that inherits
-# from `nn.Module <https://pytorch.org/docs/stable/generated/torch.nn.Module.html)>`_. We define the layers of the network
+# from `nn.Module <https://pytorch.org/docs/stable/generated/torch.nn.Module.html>`_. We define the layers of the network
 # in the ``__init__`` function and specify how data will pass through the network in the ``forward`` function. To accelerate
 # operations in the neural network, we move it to the GPU if available.

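For reference, the sentence this hunk fixes describes the standard module-definition pattern: layers are declared in ``__init__`` and the data flow in ``forward``. A minimal sketch of that pattern (the layer sizes here are illustrative, not taken from this commit):

    import torch
    from torch import nn

    class NeuralNetwork(nn.Module):
        def __init__(self):
            super().__init__()
            self.flatten = nn.Flatten()
            self.stack = nn.Sequential(
                nn.Linear(28 * 28, 512),
                nn.ReLU(),
                nn.Linear(512, 10),
            )

        def forward(self, x):
            # Define how data passes through the network.
            return self.stack(self.flatten(x))

    # Move the model to the GPU if one is available.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = NeuralNetwork().to(device)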

beginner_source/blitz/autograd_tutorial.py

Lines changed: 1 addition & 1 deletion
@@ -299,7 +299,7 @@
 # The only parameters that compute gradients are the weights and bias of ``model.fc``.

 # Optimize only the classifier
-optimizer = optim.SGD(model.fc.parameters(), lr=1e-2, momentum=0.9)
+optimizer = optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)

 ##########################################################################
 # Notice although we register all the parameters in the optimizer,
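The surrounding tutorial text explains why this change is safe: in the finetuning example every parameter except ``model.fc`` is frozen, so registering all parameters in the optimizer still updates only the classifier. A minimal sketch of that setup, assuming the tutorial's ``resnet18`` example:

    import torch
    from torch import nn, optim
    from torchvision.models import resnet18

    model = resnet18(pretrained=True)
    # Freeze the backbone; these parameters will not compute gradients.
    for param in model.parameters():
        param.requires_grad = False
    # Replace the classifier; new parameters require gradients by default.
    model.fc = nn.Linear(512, 10)

    # Registering all parameters is harmless: frozen ones receive no gradient,
    # so only model.fc is actually updated.
    optimizer = optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)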

beginner_source/blitz/neural_networks_tutorial.py

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@

 Now that you had a glimpse of ``autograd``, ``nn`` depends on
 ``autograd`` to define models and differentiate them.
-An ``nn.Module`` contains layers, and a method ``forward(input)``\ that
+An ``nn.Module`` contains layers, and a method ``forward(input)`` that
 returns the ``output``.

 For example, look at this network that classifies digit images:

intermediate_source/seq2seq_translation_tutorial.py

Lines changed: 0 additions & 11 deletions
@@ -76,17 +76,6 @@
 helpful as those concepts are very similar to the Encoder and Decoder
 models, respectively.

-And for more, read the papers that introduced these topics:
-
-- `Learning Phrase Representations using RNN Encoder-Decoder for
-  Statistical Machine Translation <https://arxiv.org/abs/1406.1078>`__
-- `Sequence to Sequence Learning with Neural
-  Networks <https://arxiv.org/abs/1409.3215>`__
-- `Neural Machine Translation by Jointly Learning to Align and
-  Translate <https://arxiv.org/abs/1409.0473>`__
-- `A Neural Conversational Model <https://arxiv.org/abs/1506.05869>`__
-
-
 **Requirements**
 """
 from __future__ import unicode_literals, print_function, division

intermediate_source/torchvision_tutorial.rst

Lines changed: 1 addition & 1 deletion
@@ -122,7 +122,7 @@ Let’s write a ``torch.utils.data.Dataset`` class for this dataset.
         self.masks = list(sorted(os.listdir(os.path.join(root, "PedMasks"))))

     def __getitem__(self, idx):
-        # load images ad masks
+        # load images and masks
         img_path = os.path.join(self.root, "PNGImages", self.imgs[idx])
         mask_path = os.path.join(self.root, "PedMasks", self.masks[idx])
         img = Image.open(img_path).convert("RGB")
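For context, the class this hunk touches is the tutorial's ``PennFudanDataset``. A hypothetical usage sketch (the ``batch_size`` and ``collate_fn`` here are illustrative, not from this commit): ``__getitem__`` returns an ``(image, target)`` pair, and detection targets are dicts of varying size, so batching needs a custom ``collate_fn``:

    import torch

    dataset = PennFudanDataset("PennFudanPed", transforms=None)
    img, target = dataset[0]  # exercises the __getitem__ shown above

    # Targets are dicts of per-image tensors, so keep samples as tuples
    # instead of stacking them into batch tensors.
    data_loader = torch.utils.data.DataLoader(
        dataset, batch_size=2, shuffle=True,
        collate_fn=lambda batch: tuple(zip(*batch)),
    )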

prototype_source/fx_graph_mode_ptq_static.rst

Lines changed: 9 additions & 3 deletions
@@ -311,6 +311,7 @@ The purpose for calibration is to run through some sample examples that is repre
 the statistics of the Tensors and we can later use this information to calculate quantization parameters.

 .. code:: python
+
     def calibrate(model, data_loader):
         model.eval()
         with torch.no_grad():
@@ -320,17 +321,19 @@ the statistics of the Tensors and we can later use this information to calculate

 7. Convert the Model to a Quantized Model
 -----------------------------------------
-``convert_fx`` takes a calibrated model and produces a quantized model. 
+``convert_fx`` takes a calibrated model and produces a quantized model.

 .. code:: python
-    quantized_model = convert_fx(prepared_model)
+
+    quantized_model = convert_fx(prepared_model)
     print(quantized_model)
-
+
 8. Evaluation
 -------------
 We can now print the size and accuracy of the quantized model.

 .. code:: python
+
     print("Size of model before quantization")
     print_size_of_model(float_model)
     print("Size of model after quantization")
@@ -372,6 +375,7 @@ we'll first call fuse explicitly to fuse the conv and bn in the model:
 Note that ``fuse_fx`` only works in eval mode.

 .. code:: python
+
     fused = fuse_fx(float_model)

     conv1_weight_after_fuse = fused.conv1[0].weight[0]
@@ -383,6 +387,7 @@
 --------------------------------------------------------------------

 .. code:: python
+
     scripted_float_model_file = "resnet18_scripted.pth"

     print("Size of baseline model")
@@ -397,6 +402,7 @@ quantized in eager mode. FX graph mode and eager mode produce very similar quant
 so the expectation is that the accuracy and speedup are similar as well.

 .. code:: python
+
     print("Size of Fx graph mode quantized model")
     print_size_of_model(quantized_model)
     top1, top5 = evaluate(quantized_model, criterion, data_loader_test)
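Taken together, these hunks document the FX graph mode flow: ``prepare_fx`` inserts observers, ``calibrate`` runs representative data through the prepared model, and ``convert_fx`` emits the quantized model. A condensed sketch, assuming the ``torch.quantization.quantize_fx`` API this tutorial targets (``float_model``, ``calibrate``, and ``data_loader_test`` are names taken from the tutorial itself):

    import torch
    from torch.quantization import get_default_qconfig
    from torch.quantization.quantize_fx import prepare_fx, convert_fx

    float_model.eval()  # post-training quantization requires eval mode
    qconfig_dict = {"": get_default_qconfig("fbgemm")}
    prepared_model = prepare_fx(float_model, qconfig_dict)  # insert observers
    calibrate(prepared_model, data_loader_test)             # collect statistics
    quantized_model = convert_fx(prepared_model)            # produce quantized model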
