
Fix errors in beginner/blitz #79


Merged: 2 commits, May 17, 2017
8 changes: 4 additions & 4 deletions beginner_source/blitz/autograd_tutorial.py
@@ -36,9 +36,9 @@

 ``Variable`` and ``Function`` are interconnected and build up an acyclic
 graph, that encodes a complete history of computation. Each variable has
-a ``.creator`` attribute that references a ``Function`` that has created
+a ``.grad_fn`` attribute that references a ``Function`` that has created
 the ``Variable`` (except for Variables created by the user - their
-``creator is None``).
+``grad_fn is None``).
 
 If you want to compute the derivatives, you can call ``.backward()`` on
 a ``Variable``. If ``Variable`` is a scalar (i.e. it holds a one element
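The rename is mechanical, but worth seeing end to end. A minimal sketch, assuming PyTorch >= 0.2 (where ``.creator`` became ``.grad_fn``; in current PyTorch, ``Variable`` has since been merged into ``Tensor``)::

    import torch
    from torch.autograd import Variable

    x = Variable(torch.ones(2, 2), requires_grad=True)
    print(x.grad_fn)   # None: x was created by the user, not by an operation

    y = (x + 2).sum()  # a scalar produced by operations on x
    print(y.grad_fn)   # the Function that created y (a Sum backward node)

    y.backward()       # scalar output, so no gradient argument is needed
    print(x.grad)      # dy/dx, a 2x2 tensor of ones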
@@ -61,8 +61,8 @@
 print(y)
 
 ###############################################################
-# ``y`` was created as a result of an operation, so it has a creator.
-print(y.creator)
+# ``y`` was created as a result of an operation, so it has a ``grad_fn``.
+print(y.grad_fn)
 
 ###############################################################
 # Do more operations on y
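The truncated cell above goes on to chain more operations; as a self-contained sketch under the same assumption (PyTorch >= 0.2), each further operation records its own ``grad_fn`` node::

    import torch
    from torch.autograd import Variable

    x = Variable(torch.ones(2, 2), requires_grad=True)
    y = x + 2
    z = y * y * 3
    out = z.mean()
    print(y.grad_fn)    # an Add backward node
    print(z.grad_fn)    # a Mul backward node
    print(out.grad_fn)  # a Mean backward node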
10 changes: 5 additions & 5 deletions beginner_source/blitz/neural_networks_tutorial.py
@@ -157,15 +157,15 @@ def num_flat_features(self, x):
 # For example:
 
 output = net(input)
-target = Variable(torch.range(1, 10)) # a dummy target, for example
+target = Variable(torch.arange(1, 11)) # a dummy target, for example
 criterion = nn.MSELoss()
 
 loss = criterion(output, target)
 print(loss)
 
 ########################################################################
 # Now, if you follow ``loss`` in the backward direction, using it’s
-# ``.creator`` attribute, you will see a graph of computations that looks
+# ``.grad_fn`` attribute, you will see a graph of computations that looks
 # like this:
 #
 # ::
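Note the endpoint changes along with the name: ``torch.range`` included its endpoint (and was later deprecated), while ``torch.arange`` excludes it, matching Python's built-in ``range``, so both calls produce the ten elements the net's output expects. A quick check::

    import torch

    t = torch.arange(1, 11)  # values 1 through 10
    print(t.numel())         # 10, one entry per network output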
@@ -181,9 +181,9 @@
 #
 # For illustration, let us follow a few steps backward:
 
-print(loss.creator) # MSELoss
-print(loss.creator.previous_functions[0][0]) # Linear
-print(loss.creator.previous_functions[0][0].previous_functions[0][0]) # ReLU
+print(loss.grad_fn) # MSELoss
+print(loss.grad_fn.next_functions[0][0]) # Linear
+print(loss.grad_fn.next_functions[0][0].next_functions[0][0]) # ReLU
 
 ########################################################################
 # Backprop
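For anyone re-running this cell, a minimal self-contained sketch of the same graph walk, again assuming PyTorch >= 0.2; the exact class names printed and the index that reaches each parent inside ``next_functions`` vary with version and input ordering, so treat the chain as illustrative::

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torch.autograd import Variable

    x = Variable(torch.randn(1, 4), requires_grad=True)
    h = F.relu(nn.Linear(4, 3)(x))
    loss = nn.MSELoss()(nn.Linear(3, 2)(h), Variable(torch.randn(1, 2)))

    print(loss.grad_fn)  # the MSELoss backward node
    for fn, _ in loss.grad_fn.next_functions:
        print(fn)        # parents; None for the target, which needs no grad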