1 parent e2e9c36 commit ffda41a
README.md
@@ -201,7 +201,7 @@ for step = 1 to (training_steps + 1) do
    // Run the optimization to update W and b values.
    // Wrap computation inside a GradientTape for automatic differentiation.
    use g = tf.GradientTape()
-   // Linear regressoin (Wx + b).
+   // Linear regression (Wx + b).
    let pred = W * train_X + b
    // Mean square error.
    let loss = tf.reduce_sum(tf.pow(pred - train_Y,2)) / (2 * n_samples)
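For context, here is a minimal sketch of the training loop this hunk sits in, reconstructed from the surrounding example in this README and the TensorFlow.NET API it uses. The dataset values, hyperparameters, and initial weights below are placeholder assumptions, not the repository's actual values; the loss line implements mean squared error, sum((pred - y)^2) / (2 * n).

```fsharp
#r "nuget: TensorFlow.NET"
#r "nuget: TensorFlow.Keras"
#r "nuget: SciSharp.TensorFlow.Redist"

open Tensorflow
open Tensorflow.NumPy
open type Tensorflow.Binding
open type Tensorflow.KerasApi

tf.enable_eager_execution()

// Placeholder training data (assumed; the README defines its own dataset).
let train_X = np.array(3.3f, 4.4f, 5.5f, 6.71f, 6.93f, 4.168f)
let train_Y = np.array(1.7f, 2.76f, 2.09f, 3.19f, 1.694f, 1.573f)
let n_samples = int train_X.shape.[0]

// Trainable parameters with arbitrary initial values.
let W = tf.Variable(-0.06f, name = "weight")
let b = tf.Variable(-0.73f, name = "bias")

// Placeholder hyperparameters.
let learning_rate = 0.01f
let training_steps = 1000
let display_step = 50
let optimizer = keras.optimizers.SGD(learning_rate)

for step = 1 to training_steps do
    // Wrap the forward pass in a GradientTape for automatic differentiation.
    use g = tf.GradientTape()
    // Linear regression (Wx + b).
    let pred = W * train_X + b
    // Mean square error: sum((pred - y)^2) / (2 * n).
    let loss = tf.reduce_sum(tf.pow(pred - train_Y, 2)) / (2 * n_samples)
    // Differentiate the loss with respect to W and b ...
    let gradients = g.gradient(loss, struct (W, b))
    // ... and take one SGD step.
    optimizer.apply_gradients(zip(gradients, struct (W, b)))
    if step % display_step = 0 then
        printfn $"step: {step}, loss: {loss.numpy()}"
```

Note that `use g = tf.GradientTape()` disposes the tape at the end of each iteration, so only that iteration's forward pass is recorded for differentiation.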