
Commit db015b3

adamjstewart authored and soumith committed
Typo fixes in 60-min blitz (#289)
1 parent 76f69e5 commit db015b3

File tree

3 files changed: +16 -16 lines changed


beginner_source/blitz/autograd_tutorial.py

Lines changed: 8 additions & 8 deletions
@@ -30,8 +30,8 @@
 
 To prevent tracking history (and using memory), you can also wrap the code block
 in ``with torch.no_grad():``. This can be particularly helpful when evaluating a
-model because the model may have trainable parameters with `requires_grad=True`,
-but for which we don't need the gradients.
+model because the model may have trainable parameters with
+``requires_grad=True``, but for which we don't need the gradients.
 
 There’s one more class which is very important for autograd
 implementation - a ``Function``.
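
The evaluation pattern this paragraph refers to is not shown in the diff; a minimal sketch, assuming a generic ``model`` and an ``inputs`` batch (both placeholder names), would be:

    import torch

    model = torch.nn.Linear(4, 2)      # stand-in for a trained model
    inputs = torch.randn(8, 4)         # stand-in for an evaluation batch

    model.eval()
    with torch.no_grad():              # no history is recorded inside this block
        outputs = model(inputs)

    print(outputs.requires_grad)       # False, even though the weights have requires_grad=True
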
@@ -52,12 +52,12 @@
 import torch
 
 ###############################################################
-# Create a tensor and set requires_grad=True to track computation with it
+# Create a tensor and set ``requires_grad=True`` to track computation with it
 x = torch.ones(2, 2, requires_grad=True)
 print(x)
 
 ###############################################################
-# Do an operation of tensor:
+# Do a tensor operation:
 y = x + 2
 print(y)
 
@@ -66,7 +66,7 @@
 print(y.grad_fn)
 
 ###############################################################
-# Do more operations on y
+# Do more operations on ``y``
 z = y * y * 3
 out = z.mean()
 
@@ -86,14 +86,14 @@
 ###############################################################
 # Gradients
 # ---------
-# Let's backprop now
+# Let's backprop now.
 # Because ``out`` contains a single scalar, ``out.backward()`` is
 # equivalent to ``out.backward(torch.tensor(1.))``.
 
 out.backward()
 
 ###############################################################
-# print gradients d(out)/dx
+# Print gradients d(out)/dx
 #
 
 print(x.grad)
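
As a cross-check (not part of the diff), the printed gradient can be derived by hand: ``out`` is the mean of ``3 * (x + 2) ** 2`` over four elements, so d(out)/dx_i = (3/2) * (x_i + 2) = 4.5 at x_i = 1. A short sketch reproducing it:

    import torch

    x = torch.ones(2, 2, requires_grad=True)
    out = (3 * (x + 2) ** 2).mean()
    out.backward()
    print(x.grad)                      # a 2x2 tensor of 4.5s, matching the hand computation
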
@@ -172,7 +172,7 @@
 ###############################################################
 # You can also stop autograd from tracking history on Tensors
 # with ``.requires_grad=True`` by wrapping the code block in
-# ``with torch.no_grad()``:
+# ``with torch.no_grad():``
 print(x.requires_grad)
 print((x ** 2).requires_grad)
 
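For reference, the behaviour the corrected comment describes (a sketch, not from the commit): the same expression stops tracking history inside the block:

    import torch

    x = torch.ones(2, 2, requires_grad=True)
    print((x ** 2).requires_grad)      # True - history is tracked

    with torch.no_grad():
        print((x ** 2).requires_grad)  # False - tracking is suspended in the block
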
beginner_source/blitz/cifar10_tutorial.py

Lines changed: 4 additions & 4 deletions
@@ -208,7 +208,7 @@ def forward(self, x):
 
 ########################################################################
 # The outputs are energies for the 10 classes.
-# Higher the energy for a class, the more the network
+# The higher the energy for a class, the more the network
 # thinks that the image is of the particular class.
 # So, let's get the index of the highest energy:
 _, predicted = torch.max(outputs, 1)
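
As a reminder of what ``torch.max(outputs, 1)`` returns (a toy sketch with made-up scores, not from the tutorial): the maximum value and its index along dimension 1, where the index serves as the predicted class:

    import torch

    # fake "energies" for a batch of 2 images over 4 hypothetical classes
    outputs = torch.tensor([[0.1, 2.3, 0.4, 0.2],
                            [1.5, 0.3, 0.9, 0.1]])
    values, predicted = torch.max(outputs, 1)
    print(predicted)                   # tensor([1, 0]) - index of the highest energy per row
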
@@ -267,20 +267,20 @@ def forward(self, x):
 #
 # Training on GPU
 # ----------------
-# Just like how you transfer a Tensor on to the GPU, you transfer the neural
+# Just like how you transfer a Tensor onto the GPU, you transfer the neural
 # net onto the GPU.
 #
 # Let's first define our device as the first visible cuda device if we have
 # CUDA available:
 
 device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
 
-# Assume that we are on a CUDA machine, then this should print a CUDA device:
+# Assuming that we are on a CUDA machine, this should print a CUDA device:
 
 print(device)
 
 ########################################################################
-# The rest of this section assumes that `device` is a CUDA device.
+# The rest of this section assumes that ``device`` is a CUDA device.
 #
 # Then these methods will recursively go over all modules and convert their
 # parameters and buffers to CUDA tensors:
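
The methods referred to are the ``.to(device)`` calls; a sketch of the step the tutorial takes next, assuming ``net`` is the tutorial's network and ``data`` is a batch from the data loader:

    net.to(device)                     # moves all parameters and buffers onto the GPU

    # the inputs and targets also have to be sent to the device at every step
    inputs, labels = data[0].to(device), data[1].to(device)
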

beginner_source/blitz/neural_networks_tutorial.py

Lines changed: 4 additions & 4 deletions
@@ -90,8 +90,8 @@ def num_flat_features(self, x):
 print(params[0].size()) # conv1's .weight
 
 ########################################################################
-# Let try a random 32x32 input
-# Note: Expected input size to this net(LeNet) is 32x32. To use this net on
+# Let try a random 32x32 input.
+# Note: expected input size of this net (LeNet) is 32x32. To use this net on
 # MNIST dataset, please resize the images from the dataset to 32x32.
 
 input = torch.randn(1, 1, 32, 32)
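
The resize mentioned in the note could be done with a torchvision transform, for example (a sketch, not part of this commit; assumes torchvision is available):

    import torchvision
    import torchvision.transforms as transforms

    # resize the 28x28 MNIST images to the 32x32 input LeNet expects
    transform = transforms.Compose([transforms.Resize(32), transforms.ToTensor()])
    mnist = torchvision.datasets.MNIST(root='./data', train=True, download=True,
                                       transform=transform)
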
@@ -130,8 +130,8 @@ def num_flat_features(self, x):
 # registered as a parameter when assigned as an attribute to a*
 # ``Module``.
 # - ``autograd.Function`` - Implements *forward and backward definitions
-# of an autograd operation*. Every ``Tensor`` operation, creates at
-# least a single ``Function`` node, that connects to functions that
+# of an autograd operation*. Every ``Tensor`` operation creates at
+# least a single ``Function`` node that connects to functions that
 # created a ``Tensor`` and *encodes its history*.
 #
 # **At this point, we covered:**
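
A small illustration of the wording fixed above (a sketch, not from the commit): each operation on a tracked tensor attaches a ``grad_fn`` node that links back to the functions that produced its inputs:

    import torch

    a = torch.ones(2, requires_grad=True)
    b = a * 2
    c = b.sum()
    print(b.grad_fn)                   # a MulBackward node created by the multiplication
    print(c.grad_fn)                   # a SumBackward node
    print(c.grad_fn.next_functions)    # links that encode the history back through b
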

0 commit comments
