Update pytorch_tutorial.py #1480

Merged: 2 commits, Apr 23, 2021
8 changes: 4 additions & 4 deletions beginner_source/nlp/pytorch_tutorial.py
@@ -9,7 +9,7 @@
 All of deep learning is computations on tensors, which are
 generalizations of a matrix that can be indexed in more than 2
 dimensions. We will see exactly what this means in-depth later. First,
-lets look what we can do with tensors.
+let's look what we can do with tensors.
 """
 # Author: Robert Guthrie
 
@@ -162,7 +162,7 @@
 # other operation, etc.)
 #
 # If ``requires_grad=True``, the Tensor object keeps track of how it was
-# created. Lets see it in action.
+# created. Let's see it in action.
 #
 
 # Tensor factory methods have a ``requires_grad`` flag
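Note (illustrative, not part of the diff): with ``requires_grad=True``, every result tensor remembers the operation that created it via its ``grad_fn`` attribute. A minimal sketch with example values of my own, not the tutorial's exact cells:

import torch

x = torch.tensor([1., 2., 3.], requires_grad=True)
y = torch.tensor([4., 5., 6.], requires_grad=True)
z = x + y
print(z.grad_fn)  # an AddBackward node: z knows it was created by addition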
@@ -187,7 +187,7 @@
 # But how does that help us compute a gradient?
 #
 
-# Lets sum up all the entries in z
+# Let's sum up all the entries in z
 s = z.sum()
 print(s)
 print(s.grad_fn)
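Note (illustrative, not part of the diff): summing answers the question because s = Σ(x_i + y_i), so ∂s/∂x_i = 1 for every entry, and calling ``backward()`` on the scalar s reports exactly that. A minimal sketch with example values of my own:

import torch

x = torch.tensor([1., 2., 3.], requires_grad=True)
y = torch.tensor([4., 5., 6.], requires_grad=True)
z = x + y
s = z.sum()
s.backward()
print(x.grad)  # tensor([1., 1., 1.]): ds/dx_i = 1 for each entry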
@@ -222,7 +222,7 @@
 
 
 ######################################################################
-# Lets have Pytorch compute the gradient, and see that we were right:
+# Let's have Pytorch compute the gradient, and see that we were right:
 # (note if you run this block multiple times, the gradient will increment.
 # That is because Pytorch *accumulates* the gradient into the .grad
 # property, since for many models this is very convenient.)
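Note (illustrative, not part of the diff): the accumulation behavior described above is easy to demonstrate; a second ``backward()`` pass doubles ``.grad`` unless it is zeroed in between. A minimal sketch:

import torch

x = torch.ones(3, requires_grad=True)
x.sum().backward()
print(x.grad)       # tensor([1., 1., 1.])
x.sum().backward()  # second backward pass on a fresh graph
print(x.grad)       # tensor([2., 2., 2.]): gradients were accumulated
x.grad.zero_()      # reset .grad in place when accumulation is not wanted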