
Commit 327f259

spzalasubramen and Suraj Subramanian authored
Use the correct batch values in the output (#2191)
The current logic uses the enumerate counter (the variable `batch`) when printing progress. The loss is reported every 100 batches via `batch % 100 == 0`, i.e. for the 1st, 101st, 201st batch and so on (batch = 0, 100, 200, ...). Because `batch` is zero-based, the printed sample count is off by one batch: the first report should read [ 64/60000] instead of [ 0/60000], the second [ 6464/60000] instead of [ 6400/60000], and so on. As printed, an output such as loss: 2.306185 [ 0/60000] reads as a loss observed over zero input samples, which is incorrect: that loss was computed after the first batch of 64 samples, and the second report comes after the 101st batch, i.e. 6464 samples. Using (batch + 1) * len(X) makes the printed count match the number of samples actually processed.

Co-authored-by: Suraj Subramanian <5676233+suraj813@users.noreply.github.com>
1 parent 9ec2625 commit 327f259
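
To see the off-by-one concretely, here is a minimal, self-contained sketch (assuming a batch size of 64 and 60,000 samples, as in the FashionMNIST tutorials; the dataloader is simulated with plain lists, so no PyTorch is needed):

batch_size = 64
num_samples = 60000
# Simulated dataloader: 937 batches of 64 items each
dataloader = [list(range(batch_size))] * (num_samples // batch_size)

for batch, X in enumerate(dataloader):
    if batch % 100 == 0:
        old = batch * len(X)        # old report: 0, 6400, 12800, ...
        new = (batch + 1) * len(X)  # fixed report: 64, 6464, 12864, ...
        print(f"old [{old:>5d}/{num_samples}]  new [{new:>5d}/{num_samples}]")

The first line printed is "old [    0/60000]  new [   64/60000]", matching the numbers in the commit message above.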

File tree

2 files changed: +2 −2 lines changed


beginner_source/basics/optimization_tutorial.py

Lines changed: 1 addition & 1 deletion
@@ -160,7 +160,7 @@ def train_loop(dataloader, model, loss_fn, optimizer):
         optimizer.step()
 
         if batch % 100 == 0:
-            loss, current = loss.item(), batch * len(X)
+            loss, current = loss.item(), (batch + 1) * len(X)
             print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")
 
 
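For context, the full train_loop around this change looks roughly as follows. This is a sketch reconstructed from the diff context and the tutorial's conventions, not a verbatim quote of the file:

def train_loop(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    for batch, (X, y) in enumerate(dataloader):
        # Compute prediction and loss
        pred = model(X)
        loss = loss_fn(pred, y)

        # Backpropagation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if batch % 100 == 0:
            # enumerate() is zero-based, so (batch + 1) * len(X) is the
            # number of samples processed so far, not batch * len(X)
            loss, current = loss.item(), (batch + 1) * len(X)
            print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")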

beginner_source/basics/quickstart_tutorial.py

Lines changed: 1 addition & 1 deletion
@@ -151,7 +151,7 @@ def train(dataloader, model, loss_fn, optimizer):
         optimizer.step()
 
         if batch % 100 == 0:
-            loss, current = loss.item(), batch * len(X)
+            loss, current = loss.item(), (batch + 1) * len(X)
             print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")
 
 ##############################################################################
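
In the quickstart tutorial this train function is driven by an epoch loop; a sketch of the call site, reconstructed rather than quoted, with names as used in that tutorial:

epochs = 5
for t in range(epochs):
    print(f"Epoch {t+1}\n-------------------------------")
    train(train_dataloader, model, loss_fn, optimizer)
print("Done!")

With the fix, the first progress line of each epoch reads "loss: ... [   64/60000]" rather than "[    0/60000]".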
