Fix issues #51 and #52

Merged 1 commit on Mar 28, 2017.

beginner_source/examples_autograd/two_layer_net_autograd.py (11 changes: 6 additions & 5 deletions)
@@ -46,16 +46,12 @@
     # Tensors, but we do not need to keep references to intermediate values since
     # we are not implementing the backward pass by hand.
     y_pred = x.mm(w1).clamp(min=0).mm(w2)

     # Compute and print loss using operations on Variables.
     # Now loss is a Variable of shape (1,) and loss.data is a Tensor of shape
     # (1,); loss.data[0] is a scalar value holding the loss.
     loss = (y_pred - y).pow(2).sum()
     print(t, loss.data[0])

-    # Manually zero the gradients before running the backward pass
-    w1.grad.data.zero_()
-    w2.grad.data.zero_()
-
     # Use autograd to compute the backward pass. This call will compute the
     # gradient of loss with respect to all Variables with requires_grad=True.
@@ -68,3 +64,8 @@
     # Tensors.
     w1.data -= learning_rate * w1.grad.data
     w2.data -= learning_rate * w2.grad.data
+
+    # Manually zero the gradients after updating weights
+    w1.grad.data.zero_()
+    w2.grad.data.zero_()
+

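Read together, the two hunks above move the gradient zeroing from before loss.backward() to after the weight update. For context, below is a minimal, self-contained sketch of the training loop as it reads after this change, written against the Variable-era API the tutorial uses; the sizes N, D_in, H, D_out, the iteration count, and the learning rate are illustrative stand-ins rather than values taken from this diff.

import torch
from torch.autograd import Variable

# Illustrative dimensions: batch size, input width, hidden width, output width.
N, D_in, H, D_out = 64, 1000, 100, 10

# Random data; only the weights need gradients.
x = Variable(torch.randn(N, D_in), requires_grad=False)
y = Variable(torch.randn(N, D_out), requires_grad=False)
w1 = Variable(torch.randn(D_in, H), requires_grad=True)
w2 = Variable(torch.randn(H, D_out), requires_grad=True)

learning_rate = 1e-6
for t in range(500):
    # Forward pass and squared-error loss.
    y_pred = x.mm(w1).clamp(min=0).mm(w2)
    loss = (y_pred - y).pow(2).sum()

    # Backward pass: autograd accumulates gradients into w1.grad and w2.grad.
    loss.backward()

    # Gradient-descent step on the underlying tensors.
    w1.data -= learning_rate * w1.grad.data
    w2.data -= learning_rate * w2.grad.data

    # Zero the gradients after the update; backward() accumulates, so skipping
    # this would add each iteration's gradient onto the previous ones.
    w1.grad.data.zero_()
    w2.grad.data.zero_()

Placing the zeroing after the update also guarantees that w1.grad and w2.grad already exist when zero_() is called, since backward() has populated them earlier in the same iteration.
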
@@ -60,22 +60,23 @@ def backward(self, grad_output):
 for t in range(500):
     # Construct an instance of our MyReLU class to use in our network
     relu = MyReLU()

     # Forward pass: compute predicted y using operations on Variables; we compute
     # ReLU using our custom autograd operation.
     y_pred = relu(x.mm(w1)).mm(w2)

     # Compute and print loss
     loss = (y_pred - y).pow(2).sum()
     print(t, loss.data[0])

-    # Manually zero the gradients before running the backward pass
-    w1.grad.data.zero_()
-    w2.grad.data.zero_()

     # Use autograd to compute the backward pass.
     loss.backward()

     # Update weights using gradient descent
     w1.data -= learning_rate * w1.grad.data
     w2.data -= learning_rate * w2.grad.data
+
+    # Manually zero the gradients after updating weights
+    w1.grad.data.zero_()
+    w2.grad.data.zero_()
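
The second file exercises a hand-written autograd operation; the MyReLU class itself is defined above the hunk shown here. As a reference point, below is a sketch of what such a legacy-style Function can look like, reconstructed from the calls in the diff (relu = MyReLU(), relu(x.mm(w1))) and the backward(self, grad_output) signature in the hunk header; the method bodies are an assumption for illustration, not lines copied from the file.

from torch.autograd import Function

class MyReLU(Function):
    # Legacy (pre-0.4) style: forward/backward are instance methods and the
    # Function object is instantiated once per use, as in the loop above.

    def forward(self, input):
        # Stash the input so backward() can mask the gradient.
        self.save_for_backward(input)
        return input.clamp(min=0)

    def backward(self, grad_output):
        # ReLU gradient: pass grad_output through where the input was positive,
        # zero it elsewhere.
        input, = self.saved_tensors
        grad_input = grad_output.clone()
        grad_input[input < 0] = 0
        return grad_input

On PyTorch 0.4 and later, custom Functions are instead written with static forward and backward methods that take a ctx argument and are invoked through MyReLU.apply; the instance-method form above matches the 2017-era API this tutorial targets.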