Commit 2db7b5b

Merge branch 'master' into patch-1
2 parents 535ce4f + 4bf264b commit 2db7b5b

7 files changed: +22 additions, -13 deletions

beginner_source/blitz/autograd_tutorial.py

Lines changed: 1 addition & 1 deletion
@@ -299,7 +299,7 @@
 # The only parameters that compute gradients are the weights and bias of ``model.fc``.

 # Optimize only the classifier
-optimizer = optim.SGD(model.fc.parameters(), lr=1e-2, momentum=0.9)
+optimizer = optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)

 ##########################################################################
 # Notice although we register all the parameters in the optimizer,
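
This hunk relies on the tutorial's earlier setup: every backbone parameter is frozen with requires_grad=False and only model.fc is replaced, so handing the full parameter list to SGD still updates only the classifier. A minimal sketch of that pattern, assuming the tutorial's resnet18 fine-tuning setup:

    import torch
    from torch import nn, optim
    import torchvision

    model = torchvision.models.resnet18(pretrained=True)
    for param in model.parameters():
        param.requires_grad = False   # freeze the backbone

    model.fc = nn.Linear(512, 10)     # new head; requires_grad=True by default

    # Registering every parameter is harmless: frozen parameters never
    # receive gradients, so SGD only updates the weights and bias of model.fc.
    optimizer = optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)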

beginner_source/blitz/cifar10_tutorial.py

Lines changed: 5 additions & 3 deletions
@@ -70,14 +70,16 @@
 [transforms.ToTensor(),
  transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

+batch_size = 4
+
 trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                         download=True, transform=transform)
-trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
+trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
                                           shuffle=True, num_workers=2)

 testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                        download=True, transform=transform)
-testloader = torch.utils.data.DataLoader(testset, batch_size=4,
+testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size,
                                          shuffle=False, num_workers=2)

 classes = ('plane', 'car', 'bird', 'cat',
@@ -106,7 +108,7 @@ def imshow(img):
 # show images
 imshow(torchvision.utils.make_grid(images))
 # print labels
-print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
+print(' '.join('%5s' % classes[labels[j]] for j in range(batch_size)))


 ########################################################################
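
Factoring the literal 4 into a batch_size variable keeps the DataLoader configuration and the label printout in sync. A quick sketch (using a toy dataset rather than the tutorial's CIFAR10) showing that a full batch has exactly batch_size elements, so range(batch_size) matches it:

    import torch

    batch_size = 4
    loader = torch.utils.data.DataLoader(range(12), batch_size=batch_size)
    batch = next(iter(loader))

    # A full batch has batch_size elements, so loops written with
    # range(batch_size) stay consistent if the batch size is changed once.
    assert len(batch) == batch_size
    print(' '.join('%5s' % batch[j].item() for j in range(batch_size)))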

intermediate_source/fx_conv_bn_fuser.py

Lines changed: 2 additions & 2 deletions
@@ -51,7 +51,7 @@ def __init__(self):
             nn.BatchNorm2d(1),
             nn.Conv2d(1, 1, 1),
         )
-        self.wrapped = WrappedBatchnorm()
+        self.wrapped = WrappedBatchNorm()

     def forward(self, x):
         x = self.conv1(x)
@@ -259,4 +259,4 @@ def benchmark(model, iters=20):
 # feedback you have about using it. Please feel free to use the
 # PyTorch Forums (https://discuss.pytorch.org/) and the issue tracker
 # (https://github.com/pytorch/pytorch/issues) to provide any feedback
-# you might have.
+# you might have.

intermediate_source/torchvision_tutorial.rst

Lines changed: 3 additions & 3 deletions
@@ -54,7 +54,7 @@ should return:
 
 If your model returns the above methods, they will make it work for both
 training and evaluation, and will use the evaluation scripts from
-``pycocotools``.
+``pycocotools`` which can be installed with ``pip install pycocotools``.

 .. note ::
   For Windows, please install ``pycocotools`` from `gautamchitnis <https://github.com/gautamchitnis/cocoapi>`__ with command
@@ -317,8 +317,8 @@ Putting everything together
 In ``references/detection/``, we have a number of helper functions to
 simplify training and evaluating detection models. Here, we will use
 ``references/detection/engine.py``, ``references/detection/utils.py``
-and ``references/detection/transforms.py``. Just copy them to your
-folder and use them here.
+and ``references/detection/transforms.py``. Just copy everything under
+``references/detection`` to your folder and use them here.

 Let’s write some helper functions for data augmentation /
 transformation:

prototype_source/fx_graph_mode_ptq_static.rst

Lines changed: 9 additions & 3 deletions
@@ -311,6 +311,7 @@ The purpose for calibration is to run through some sample examples that is repre
 the statistics of the Tensors and we can later use this information to calculate quantization parameters.

 .. code:: python
+
     def calibrate(model, data_loader):
         model.eval()
         with torch.no_grad():
@@ -320,17 +321,19 @@ the statistics of the Tensors and we can later use this information to calculate
 
 7. Convert the Model to a Quantized Model
 -----------------------------------------
-``convert_fx`` takes a calibrated model and produces a quantized model.
+``convert_fx`` takes a calibrated model and produces a quantized model.

 .. code:: python
-    quantized_model = convert_fx(prepared_model)
+
+    quantized_model = convert_fx(prepared_model)
     print(quantized_model)
-
+
 8. Evaluation
 -------------
 We can now print the size and accuracy of the quantized model.

 .. code:: python
+
     print("Size of model before quantization")
     print_size_of_model(float_model)
     print("Size of model after quantization")
@@ -372,6 +375,7 @@ we'll first call fuse explicitly to fuse the conv and bn in the model:
 Note that ``fuse_fx`` only works in eval mode.

 .. code:: python
+
     fused = fuse_fx(float_model)

     conv1_weight_after_fuse = fused.conv1[0].weight[0]
@@ -383,6 +387,7 @@ Note that ``fuse_fx`` only works in eval mode.
 --------------------------------------------------------------------

 .. code:: python
+
     scripted_float_model_file = "resnet18_scripted.pth"

     print("Size of baseline model")
@@ -397,6 +402,7 @@ quantized in eager mode. FX graph mode and eager mode produce very similar quant
 so the expectation is that the accuracy and speedup are similar as well.

 .. code:: python
+
     print("Size of Fx graph mode quantized model")
     print_size_of_model(quantized_model)
     top1, top5 = evaluate(quantized_model, criterion, data_loader_test)
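
For context, the code blocks being fixed above all belong to one flow: ``prepare_fx`` inserts observers, calibration runs sample data through them, and ``convert_fx`` produces the quantized model. A self-contained sketch of that flow, using toy stand-ins for the tutorial's float_model and data_loader (the model, data, and qconfig choice are illustrative; the calls follow the FX graph mode quantization prototype API of this era):

    import torch
    from torch import nn
    from torch.quantization import get_default_qconfig
    from torch.quantization.quantize_fx import prepare_fx, convert_fx

    # Hypothetical stand-ins for the tutorial's float_model and data_loader.
    float_model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()).eval()
    data_loader = [(torch.randn(1, 3, 32, 32), 0) for _ in range(4)]

    qconfig_dict = {"": get_default_qconfig("fbgemm")}
    prepared_model = prepare_fx(float_model, qconfig_dict)  # insert observers

    def calibrate(model, loader):
        model.eval()
        with torch.no_grad():
            for image, _ in loader:
                model(image)                  # record activation statistics

    calibrate(prepared_model, data_loader)
    quantized_model = convert_fx(prepared_model)  # swap in quantized ops
    print(quantized_model)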

recipes_source/recipes/loading_data_recipe.py

Lines changed: 1 addition & 1 deletion
@@ -101,7 +101,7 @@
 
 # A data point in Yesno is a tuple (waveform, sample_rate, labels) where labels
 # is a list of integers with 1 for yes and 0 for no.
-yesno_data_trainset = torchaudio.datasets.YESNO('./', download=True)
+yesno_data = torchaudio.datasets.YESNO('./', download=True)

 # Pick data point number 3 to see an example of the the yesno_data:
 n = 3

requirements.txt

Lines changed: 1 addition & 0 deletions
@@ -2,6 +2,7 @@
 
 sphinx==1.8.2
 sphinx-gallery==0.3.1
+docutils==0.16
 sphinx-copybutton
 tqdm
 numpy
