
Commit e679f9f

Merge pull request #91 from pytorch/fix-creator
Fix attribute names (creator -> grad_fn)
2 parents 0f52422 + 898dd68 commit e679f9f

File tree: 2 files changed, +11 −11 lines changed

beginner_source/former_torchies/autograd_tutorial.py

Lines changed: 6 additions & 6 deletions

@@ -25,9 +25,9 @@
 There’s one more class which is very important for autograd
 implementation - a ``Function``. ``Variable`` and ``Function`` are
 interconnected and build up an acyclic graph, that encodes a complete
-history of computation. Each variable has a ``.creator`` attribute that
+history of computation. Each variable has a ``.grad_fn`` attribute that
 references a function that has created a function (except for Variables
-created by the user - these have ``None`` as ``.creator``).
+created by the user - these have ``None`` as ``.grad_fn``).

 If you want to compute the derivatives, you can call ``.backward()`` on
 a ``Variable``. If ``Variable`` is a scalar (i.e. it holds a one element
@@ -52,7 +52,7 @@
 ###############################################################
 #

-print(x.creator)  # we've created x ourselves
+print(x.grad_fn)  # we've created x ourselves

 ###############################################################
 # Do an operation of x:
@@ -62,8 +62,8 @@

 ###############################################################
 # y was created as a result of an operation,
-# so it has a creator
-print(y.creator)
+# so it has a grad_fn
+print(y.grad_fn)

 ###############################################################
 # More operations on y:
@@ -91,7 +91,7 @@

 x = Variable(torch.ones(2, 2), requires_grad=True)
 y = x + 2
-y.backward(torch.ones(2, 2), retain_variables=True)
+y.backward(torch.ones(2, 2), retain_graph=True)
 # the retain_variables flag will prevent the internal buffers from being freed
 print(x.grad)
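For context, a minimal sketch of the renamed autograd API in use. It assumes a PyTorch build from around the time of this commit, where ``torch.autograd.Variable`` is still the autograd wrapper; the variable names here are illustrative only.

import torch
from torch.autograd import Variable

x = Variable(torch.ones(2, 2), requires_grad=True)
print(x.grad_fn)   # None: x was created by the user, not by an operation

y = x + 2
print(y.grad_fn)   # an autograd Function object recording the addition

# retain_graph=True (formerly retain_variables=True) keeps the graph's
# internal buffers alive so backward() can run through the same graph again.
y.backward(torch.ones(2, 2), retain_graph=True)
y.backward(torch.ones(2, 2))
print(x.grad)      # gradients from both backward calls accumulate here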

beginner_source/nlp/pytorch_tutorial.py

Lines changed: 5 additions & 5 deletions

@@ -177,13 +177,13 @@
 print(z.data)

 # BUT z knows something extra.
-print(z.creator)
+print(z.grad_fn)


 ######################################################################
 # So Variables know what created them. z knows that it wasn't read in from
 # a file, it wasn't the result of a multiplication or exponential or
-# whatever. And if you keep following z.creator, you will find yourself at
+# whatever. And if you keep following z.grad_fn, you will find yourself at
 # x and y.
 #
 # But how does that help us compute a gradient?
@@ -192,7 +192,7 @@
 # Lets sum up all the entries in z
 s = z.sum()
 print(s)
-print(s.creator)
+print(s.grad_fn)


 ######################################################################
@@ -248,15 +248,15 @@
 var_y = autograd.Variable(y)
 # var_z contains enough information to compute gradients, as we saw above
 var_z = var_x + var_y
-print(var_z.creator)
+print(var_z.grad_fn)

 var_z_data = var_z.data  # Get the wrapped Tensor object out of var_z...
 # Re-wrap the tensor in a new variable
 new_var_z = autograd.Variable(var_z_data)

 # ... does new_var_z have information to backprop to x and y?
 # NO!
-print(new_var_z.creator)
+print(new_var_z.grad_fn)
 # And how could it? We yanked the tensor out of var_z (that is
 # what var_z.data is). This tensor doesn't know anything about
 # how it was computed. We pass it into new_var_z, and this is all the
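The second file makes the complementary point: pulling the raw tensor out via ``.data`` and re-wrapping it discards the recorded history. A minimal sketch, again assuming the Variable-era API, with illustrative names:

import torch
from torch import autograd

var_x = autograd.Variable(torch.randn(2, 2), requires_grad=True)
var_y = autograd.Variable(torch.randn(2, 2), requires_grad=True)

var_z = var_x + var_y
print(var_z.grad_fn)      # records the addition, so gradients can flow back

# Re-wrapping the underlying tensor produces a Variable with no history.
new_var_z = autograd.Variable(var_z.data)
print(new_var_z.grad_fn)  # None: no path back to var_x and var_y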

0 commit comments
