
Commit 9939096

Fix grammar
1 parent a8025b7 commit 9939096

File tree

1 file changed: +4 -4 lines changed


beginner_source/blitz/neural_networks_tutorial.py

Lines changed: 4 additions & 4 deletions
@@ -90,9 +90,9 @@ def num_flat_features(self, x):
 print(params[0].size())  # conv1's .weight
 
 ########################################################################
-# Let try a random 32x32 input.
+# Let's try a random 32x32 input.
 # Note: expected input size of this net (LeNet) is 32x32. To use this net on
-# MNIST dataset, please resize the images from the dataset to 32x32.
+# the MNIST dataset, please resize the images from the dataset to 32x32.
 
 input = torch.randn(1, 1, 32, 32)
 out = net(input)
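
For context on the note above (LeNet expects 32x32 inputs, while raw MNIST images are 28x28), a minimal sketch of resizing the dataset with torchvision might look like the following; the loading code here is an assumption for illustration, not part of this commit or the tutorial file.

# Hypothetical sketch: load MNIST resized to 32x32 so it matches
# LeNet's expected input size. Not part of the commit above.
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader

transform = transforms.Compose([
    transforms.Resize((32, 32)),  # MNIST images are 28x28; LeNet expects 32x32
    transforms.ToTensor(),
])
trainset = torchvision.datasets.MNIST(root='./data', train=True,
                                      download=True, transform=transform)
loader = DataLoader(trainset, batch_size=1, shuffle=True)
images, labels = next(iter(loader))
print(images.size())  # torch.Size([1, 1, 32, 32])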
@@ -227,7 +227,7 @@ def num_flat_features(self, x):
 #
 # ``weight = weight - learning_rate * gradient``
 #
-# We can implement this using simple python code:
+# We can implement this using simple Python code:
 #
 # .. code:: python
 #
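
The ``.. code:: python`` block that follows this comment in the tutorial is not shown in the diff; a minimal, self-contained sketch of that update rule applied to a model's parameters (using a stand-in ``nn.Linear`` rather than the tutorial's LeNet) could look like this.

# Hypothetical sketch of ``weight = weight - learning_rate * gradient``
# applied to every parameter, using a tiny stand-in model.
import torch
import torch.nn as nn

net = nn.Linear(4, 2)                      # stand-in for the tutorial's LeNet
loss = net(torch.randn(1, 4)).sum()
loss.backward()                            # populates .grad on each parameter

learning_rate = 0.01
with torch.no_grad():                      # update without recording autograd history
    for p in net.parameters():
        p -= learning_rate * p.grad        # the plain SGD step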
@@ -258,4 +258,4 @@ def num_flat_features(self, x):
 #
 # Observe how gradient buffers had to be manually set to zero using
 # ``optimizer.zero_grad()``. This is because gradients are accumulated
-# as explained in `Backprop`_ section.
+# as explained in the `Backprop`_ section.
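
To illustrate why ``optimizer.zero_grad()`` matters: gradients from each ``backward()`` call are added into ``.grad`` rather than overwriting it, so a training step usually clears them first. The model, criterion, and data below are stand-ins chosen for this sketch, not taken from the commit.

# Hypothetical sketch of one training step showing where zero_grad() fits.
import torch
import torch.nn as nn
import torch.optim as optim

net = nn.Linear(4, 2)                  # stand-in model
criterion = nn.MSELoss()
optimizer = optim.SGD(net.parameters(), lr=0.01)
input, target = torch.randn(1, 4), torch.randn(1, 2)

optimizer.zero_grad()                  # clear gradients accumulated previously
output = net(input)
loss = criterion(output, target)
loss.backward()                        # accumulates new gradients into .grad
optimizer.step()                       # apply the update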

0 commit comments
