Commit b1d8993

Update nn_tutorial.py (#1774)
Duplicate of #984 and #1040.
1 parent 0e39ee6

File tree

1 file changed (+3, -4)

beginner_source/nn_tutorial.py

Lines changed: 3 additions & 4 deletions
@@ -134,8 +134,8 @@ def log_softmax(x):
 def model(xb):
     return log_softmax(xb @ weights + bias)
 
-###############################################################################
-# In the above, the ``@`` stands for the dot product operation. We will call
+######################################################################################
+# In the above, the ``@`` stands for the matrix multiplication operation. We will call
 # our function on one batch of data (in this case, 64 images). This is
 # one *forward pass*. Note that our predictions won't be any better than
 # random at this stage, since we start with random weights.
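
For context, the behaviour that corrected comment describes can be sketched in a few lines. The snippet below is a minimal, self-contained example, assuming a batch of 64 flattened 28*28 images; the random data and the scaled random initialisation stand in for the tutorial's actual setup and are illustrative only.

import torch

# Hypothetical setup mirroring the tutorial: 784 input pixels, 10 classes.
weights = torch.randn(784, 10) / 784 ** 0.5  # scaled random init (illustrative)
bias = torch.zeros(10)

def log_softmax(x):
    return x - x.exp().sum(-1).log().unsqueeze(-1)

def model(xb):
    # ``@`` is matrix multiplication: (64, 784) @ (784, 10) -> (64, 10)
    return log_softmax(xb @ weights + bias)

xb = torch.randn(64, 784)   # one batch of 64 "images" (random stand-ins)
preds = model(xb)           # one forward pass
print(preds.shape)          # torch.Size([64, 10])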
@@ -753,8 +753,7 @@ def preprocess(x):
 #
 # Our CNN is fairly concise, but it only works with MNIST, because:
 # - It assumes the input is a 28\*28 long vector
-# - It assumes that the final CNN grid size is 4\*4 (since that's the average
-#   pooling kernel size we used)
+# - It assumes that the final CNN grid size is 4\*4 (since that's the average pooling kernel size we used)
 #
 # Let's get rid of these two assumptions, so our model works with any 2d
 # single channel image. First, we can remove the initial Lambda layer by
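
To see where those two MNIST-specific assumptions come from, here is a rough shape trace, assuming three stride-2, padding-1 convolutions of the kind the tutorial's CNN uses: a 28x28 input shrinks to a 4x4 grid, which is why the final average pooling hard-codes a 4x4 kernel. The channel counts and layer names below are illustrative, not a copy of the tutorial's model.

import torch
import torch.nn.functional as F
from torch import nn

# Three stride-2 convolutions shrink the spatial grid: 28x28 -> 14x14 -> 7x7 -> 4x4.
conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1)
conv2 = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1)
conv3 = nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1)

xb = torch.randn(64, 1, 28, 28)          # batch of 64 single-channel 28x28 images
out = F.relu(conv3(F.relu(conv2(F.relu(conv1(xb))))))
print(out.shape)                          # torch.Size([64, 10, 4, 4])
out = F.avg_pool2d(out, 4)               # the hard-coded 4x4 pooling kernel
print(out.view(out.size(0), -1).shape)   # torch.Size([64, 10])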
