From c929ed412a5e67cb5f38327ac53712a6f68ebc3f Mon Sep 17 00:00:00 2001
From: Jugesh Sundram
Date: Mon, 3 Oct 2022 14:37:05 +0200
Subject: [PATCH] Fix typos

---
 beginner_source/introyt/autogradyt_tutorial.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/beginner_source/introyt/autogradyt_tutorial.py b/beginner_source/introyt/autogradyt_tutorial.py
index b3609eed4e2..19c25c5e81d 100644
--- a/beginner_source/introyt/autogradyt_tutorial.py
+++ b/beginner_source/introyt/autogradyt_tutorial.py
@@ -40,7 +40,7 @@
 
 ###########################################################################
 # A machine learning model is a *function*, with inputs and outputs. For
-# this discussion, we’ll treat the inputs a as an *i*-dimensional vector
+# this discussion, we’ll treat the inputs as an *i*-dimensional vector
 # :math:`\vec{x}`, with elements :math:`x_{i}`. We can then express the
 # model, *M*, as a vector-valued function of the input: :math:`\vec{y} =
 # \vec{M}(\vec{x})`. (We treat the value of M’s output as
@@ -226,7 +226,7 @@
 # of which should be :math:`2 * cos(a)`. Looking at the graph above,
 # that’s just what we see.
 #
-# Be aware than only *leaf nodes* of the computation have their gradients
+# Be aware that only *leaf nodes* of the computation have their gradients
 # computed. If you tried, for example, ``print(c.grad)`` you’d get back
 # ``None``. In this simple example, only the input is a leaf node, so only
 # it has gradients computed.
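
The second hunk's corrected sentence (only *leaf nodes* get gradients populated) can be verified directly. Below is a minimal sketch of that behavior; the chain `b = sin(a)`, `c = 2 * b` is assumed from the surrounding context (the tutorial's expected gradient is `2 * cos(a)` and it references `c.grad`), not copied from the patched file itself:

```python
import math
import torch

# a is created directly by the user, so it is a leaf of the autograd graph.
a = torch.linspace(0.0, 2.0 * math.pi, steps=25, requires_grad=True)
b = torch.sin(a)   # intermediate (non-leaf) node
c = 2 * b          # intermediate (non-leaf) node

c.sum().backward()

print(a.is_leaf)           # only a was created directly, so only a is a leaf
print(a.grad is not None)  # leaf: gradient is populated (2 * cos(a))
print(c.grad)              # non-leaf: grad is None (freed during backward)
```

Note that `a.grad` here equals `2 * torch.cos(a)`, matching the derivative quoted in the hunk's context; accessing `.grad` on a non-leaf like `c` returns `None` (with a warning) unless `c.retain_grad()` is called before `backward()`.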