
Commit f35c04a

samarth4149, brianjo, and holly1238 authored

Update dcgan_faces_tutorial.py (#816)
* Update dcgan_faces_tutorial.py

  The previous comment might have been slightly misleading. It is easier to understand when made explicit that the gradients of errD_real and errD_fake w.r.t. the parameters of netD get added up (accumulated) because of the successive backward() calls without a zero_grad() in between.

* Update dcgan_faces_tutorial.py

Co-authored-by: Brian Johnson <brianjo@fb.com>
Co-authored-by: holly1238 <77758406+holly1238@users.noreply.github.com>
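The accumulation behavior the new comment describes can be seen in a minimal, self-contained sketch (a toy example, not the tutorial code): calling backward() twice without a zero_grad() in between sums the gradients into .grad.

import torch

# Toy stand-in for netD: a single linear layer.
netD = torch.nn.Linear(2, 1)
x = torch.randn(4, 2)

# First backward pass (analogous to errD_real.backward()).
loss_real = netD(x).mean()
loss_real.backward()
grad_after_first = netD.weight.grad.clone()

# Second backward pass without zero_grad() in between
# (analogous to errD_fake.backward()): gradients accumulate.
loss_fake = netD(x).mean()
loss_fake.backward()

# .grad now holds the sum of the two passes' gradients.
assert torch.allclose(netD.weight.grad, 2 * grad_after_first)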
1 parent: ffcaefd

File tree

1 file changed (+2, -2 lines)

beginner_source/dcgan_faces_tutorial.py

Lines changed: 2 additions & 2 deletions
@@ -610,10 +610,10 @@ def forward(self, input):
         output = netD(fake.detach()).view(-1)
         # Calculate D's loss on the all-fake batch
         errD_fake = criterion(output, label)
-        # Calculate the gradients for this batch
+        # Calculate the gradients for this batch, accumulated (summed) with previous gradients
         errD_fake.backward()
         D_G_z1 = output.mean().item()
-        # Add the gradients from the all-real and all-fake batches
+        # Compute error of D as sum over the fake and the real batches
         errD = errD_real + errD_fake
         # Update D
         optimizerD.step()
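For context, here is a runnable toy sketch of the discriminator step these lines belong to. The tiny netD/netG below are hypothetical stand-ins, not the tutorial's DCGAN architectures; the point is the flow the commit message describes: zero_grad() is called once up front, backward() runs twice, and the single optimizerD.step() therefore uses the summed gradients of both batches.

import torch
import torch.nn as nn

# Minimal stand-ins for the tutorial's objects, just to make the flow runnable.
netD = nn.Sequential(nn.Flatten(), nn.Linear(16, 1), nn.Sigmoid())
netG = nn.Sequential(nn.Linear(8, 16), nn.Unflatten(1, (1, 4, 4)))
criterion = nn.BCELoss()
optimizerD = torch.optim.Adam(netD.parameters(), lr=2e-4)

real = torch.randn(32, 1, 4, 4)      # an "all-real" batch
label = torch.full((32,), 1.0)       # real_label = 1

netD.zero_grad()                     # clear gradients once, up front
# (1) backward on the all-real batch
output = netD(real).view(-1)
errD_real = criterion(output, label)
errD_real.backward()
# (2) backward on the all-fake batch, WITHOUT zero_grad() in between:
# these gradients are accumulated (summed) with the real-batch gradients
noise = torch.randn(32, 8)
fake = netG(noise)
label.fill_(0.0)                     # fake_label = 0
output = netD(fake.detach()).view(-1)
errD_fake = criterion(output, label)
errD_fake.backward()
errD = errD_real + errD_fake         # reported loss: sum over both batches
optimizerD.step()                    # one step using the summed gradients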
