1 file changed, +3 -4 lines changed

@@ -134,8 +134,8 @@ def log_softmax(x):
 def model(xb):
     return log_softmax(xb @ weights + bias)

-###############################################################################
-# In the above, the ``@`` stands for the dot product operation. We will call
+######################################################################################
+# In the above, the ``@`` stands for the matrix multiplication operation. We will call
 # our function on one batch of data (in this case, 64 images). This is
 # one *forward pass*. Note that our predictions won't be any better than
 # random at this stage, since we start with random weights.
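For readers skimming the diff, a minimal, self-contained sketch of the forward pass this hunk documents. It approximates the tutorial's surrounding setup; the `weights`, `bias`, and `log_softmax` definitions are reconstructed assumptions, not part of this commit:

    import torch

    # Assumed stand-ins for the tutorial's randomly initialized parameters.
    weights = torch.randn(784, 10) / 784 ** 0.5
    weights.requires_grad_()
    bias = torch.zeros(10, requires_grad=True)

    def log_softmax(x):
        return x - x.exp().sum(-1).log().unsqueeze(-1)

    def model(xb):
        # ``@`` is Python's matrix multiplication operator (PEP 465):
        # (64, 784) @ (784, 10) -> (64, 10), one row of scores per image.
        return log_softmax(xb @ weights + bias)

    xb = torch.randn(64, 784)  # one batch of 64 flattened 28x28 images
    preds = model(xb)          # one forward pass
    print(preds.shape)         # torch.Size([64, 10])

Since `weights` starts out random, `preds` is no better than chance, which is exactly the point the comment makes.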
@@ -753,8 +753,7 @@ def preprocess(x):
 #
 # Our CNN is fairly concise, but it only works with MNIST, because:
 #  - It assumes the input is a 28\*28 long vector
-#  - It assumes that the final CNN grid size is 4\*4 (since that's the average
-#    pooling kernel size we used)
+#  - It assumes that the final CNN grid size is 4\*4 (since that's the average pooling kernel size we used)
 #
 # Let's get rid of these two assumptions, so our model works with any 2d
 # single channel image. First, we can remove the initial Lambda layer by
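As a companion to this hunk, a hedged sketch of where the tutorial is heading: `nn.AdaptiveAvgPool2d` is the standard PyTorch layer for fixing the output grid size regardless of input resolution, so both MNIST-specific assumptions can go. The layer sizes below follow the tutorial's three-convolution CNN, but the use of `nn.Flatten` in place of the tutorial's Lambda layer is an assumption of this sketch:

    import torch
    from torch import nn

    # The CNN with the two assumptions removed: no reshaping of a flat
    # 784-vector (input arrives as N x 1 x H x W), and adaptive average
    # pooling squeezes whatever the final grid is down to 1x1, so nothing
    # depends on a 28x28 input or a 4x4 final grid.
    model = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
        nn.ReLU(),
        nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),
        nn.ReLU(),
        nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),  # 1x1 output grid for any input size
        nn.Flatten(),             # (N, 10, 1, 1) -> (N, 10)
    )

    xb = torch.randn(64, 1, 28, 28)  # any 2d single-channel size works
    print(model(xb).shape)           # torch.Size([64, 10])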