
Commit a075e3d

Update pytorch_tutorial.py (#1480)
Somebody left the apostrophe out of the word "lets" every time. Makes it very hard to understand :( Co-authored-by: holly1238 <77758406+holly1238@users.noreply.github.com>
1 parent cd32ed9 commit a075e3d

File tree: 1 file changed, +4 -4 lines


beginner_source/nlp/pytorch_tutorial.py

Lines changed: 4 additions & 4 deletions
@@ -9,7 +9,7 @@
 All of deep learning is computations on tensors, which are
 generalizations of a matrix that can be indexed in more than 2
 dimensions. We will see exactly what this means in-depth later. First,
-lets look what we can do with tensors.
+let's look what we can do with tensors.
 """
 # Author: Robert Guthrie
 
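The hunk above comes from the tutorial's introduction of tensors as generalizations of matrices that can be indexed in more than two dimensions. As a minimal, hypothetical sketch (not part of this commit), tensors of increasing dimensionality look like this:

import torch

# 1-D tensor (vector) built from a Python list
v = torch.tensor([1., 2., 3.])

# 2-D tensor (matrix)
m = torch.tensor([[1., 2., 3.], [4., 5., 6.]])

# 3-D tensor, indexed along three dimensions
t = torch.randn(2, 3, 4)

print(v.shape)    # torch.Size([3])
print(m[0])       # first row of the matrix
print(t[0, 1])    # a 1-D slice of the 3-D tensor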
@@ -162,7 +162,7 @@
 # other operation, etc.)
 #
 # If ``requires_grad=True``, the Tensor object keeps track of how it was
-# created. Lets see it in action.
+# created. Let's see it in action.
 #
 
 # Tensor factory methods have a ``requires_grad`` flag
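The hunk above refers to the ``requires_grad`` flag on tensor factory methods: a tensor created with ``requires_grad=True`` remembers the operation that produced it. A small illustrative sketch, separate from the tutorial's own code:

import torch

x = torch.tensor([1., 2., 3.], requires_grad=True)
y = torch.tensor([4., 5., 6.], requires_grad=True)

z = x + y          # z records that it was created by an addition
print(z.grad_fn)   # something like <AddBackward0 object at 0x...>

w = torch.tensor([1., 2., 3.])  # requires_grad defaults to False
print(w.grad_fn)                # None: no history is tracked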
@@ -187,7 +187,7 @@
 # But how does that help us compute a gradient?
 #
 
-# Lets sum up all the entries in z
+# Let's sum up all the entries in z
 s = z.sum()
 print(s)
 print(s.grad_fn)
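The surrounding tutorial sums ``z`` into a scalar ``s`` before asking for a gradient. Assuming ``z`` was built as ``x + y`` from tensors with ``requires_grad=True`` (an assumption; the hunk does not show how ``z`` was defined), each entry of ``x`` contributes to the sum exactly once, so the gradient with respect to ``x`` is all ones. A self-contained sketch of that reasoning:

import torch

x = torch.tensor([1., 2., 3.], requires_grad=True)
y = torch.tensor([4., 5., 6.], requires_grad=True)
z = x + y
s = z.sum()        # s = sum_i (x_i + y_i)

# Each x_i appears exactly once in the sum, so ds/dx_i = 1 for every i.
s.backward()
print(x.grad)      # tensor([1., 1., 1.])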
@@ -222,7 +222,7 @@
 
 
 ######################################################################
-# Lets have Pytorch compute the gradient, and see that we were right:
+# Let's have Pytorch compute the gradient, and see that we were right:
 # (note if you run this block multiple times, the gradient will increment.
 # That is because Pytorch *accumulates* the gradient into the .grad
 # property, since for many models this is very convenient.)
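The note in the last hunk about PyTorch accumulating gradients into ``.grad`` can be seen directly: a second backward pass adds to whatever is already stored, which is why training loops zero the gradients between steps. A brief sketch, not part of this commit:

import torch

x = torch.ones(3, requires_grad=True)

(x * 2).sum().backward()
print(x.grad)      # tensor([2., 2., 2.])

# A second backward pass *adds* to the existing .grad ...
(x * 2).sum().backward()
print(x.grad)      # tensor([4., 4., 4.]) -- accumulated

# ... so gradients are typically reset between optimization steps.
x.grad.zero_()
print(x.grad)      # tensor([0., 0., 0.])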
