
Commit 8999538

Fix typos (#2064)
1 parent 9795575 commit 8999538

File tree

1 file changed: +2 −2 lines changed

beginner_source/introyt/autogradyt_tutorial.py

Lines changed: 2 additions & 2 deletions
@@ -40,7 +40,7 @@
 
 ###########################################################################
 # A machine learning model is a *function*, with inputs and outputs. For
-# this discussion, we’ll treat the inputs a as an *i*-dimensional vector
+# this discussion, we’ll treat the inputs as an *i*-dimensional vector
 # :math:`\vec{x}`, with elements :math:`x_{i}`. We can then express the
 # model, *M*, as a vector-valued function of the input: :math:`\vec{y} =
 # \vec{M}(\vec{x})`. (We treat the value of M’s output as
@@ -226,7 +226,7 @@
 # of which should be :math:`2 * cos(a)`. Looking at the graph above,
 # that’s just what we see.
 #
-# Be aware than only *leaf nodes* of the computation have their gradients
+# Be aware that only *leaf nodes* of the computation have their gradients
 # computed. If you tried, for example, ``print(c.grad)`` you’d get back
 # ``None``. In this simple example, only the input is a leaf node, so only
 # it has gradients computed.
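The leaf-node behavior described in the corrected passage can be illustrated with a short runnable sketch. This is a minimal reconstruction, assuming PyTorch is installed; the variable names `a` and `c` mirror the tutorial's example, whose full code this commit does not show.

```python
import torch

# Leaf tensor: created directly by the user, with requires_grad=True.
a = torch.linspace(0., 2. * torch.pi, steps=25, requires_grad=True)

# Intermediate (non-leaf) tensors, produced by operations on `a`.
b = torch.sin(a)
c = 2 * b

# Reduce to a scalar and backpropagate.
out = c.sum()
out.backward()

# Only the leaf node `a` has its gradient populated...
print(a.grad is None)                            # False
# ...and, as the tutorial text says, d(out)/da = 2 * cos(a).
print(torch.allclose(a.grad, 2 * torch.cos(a)))  # True

# Non-leaf tensors do not retain gradients by default, so c.grad is None
# (PyTorch also emits a UserWarning when you access it).
print(c.grad is None)                            # True
```

If you do need gradients on an intermediate tensor, calling `c.retain_grad()` before `backward()` makes PyTorch keep them.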
