Commit 1399cad

More typo fixes
1 parent b1cd8bb commit 1399cad

File tree: 1 file changed (+7, -7 lines)


intermediate_source/named_tensor_tutorial.py

Lines changed: 7 additions & 7 deletions
@@ -288,10 +288,10 @@ def catch_error(fn):
 # Autograd support
 # ----------------
 #
-# Autograd currently supports named tensors in a limited manner: autograd
-# ignores names on all tensors. Gradient computation is still correct but we
-# lose the safety that names give us. It is on the roadmap to introduce
-# handling of names to autograd.
+# Autograd currently ignores names on all tensors and just treats them like
+# regular tensors. Gradient computation is correct but we lose the safety that
+# names give us. It is on the roadmap to introduce handling of names to
+# autograd.
 
 x = torch.randn(3, names=('D',))
 weight = torch.randn(3, names=('D',), requires_grad=True)
@@ -316,8 +316,8 @@ def catch_error(fn):
 # Other supported (and unsupported) features
 # ------------------------------------------
 #
-# See here (link to be included) for a detailed breakdown of what is
-# supported with the 1.3 release.
+# `See here <https://pytorch.org/docs/stable/named_tensor.html>`_ for a
+# detailed breakdown of what is supported with the 1.3 release.
 #
 # In particular, we want to call out three important features that are not
 # currently supported:
@@ -346,7 +346,7 @@ def fn(x):
 # Now we'll go through a complete example of implementing a common
 # PyTorch ``nn.Module``: multi-head attention. We assume the reader is already
 # familiar with multi-head attention; for a refresher, check out
-# `this explanation <https://nlp.seas.harvard.edu/2018/04/03/attention.html>` _
+# `this explanation <https://nlp.seas.harvard.edu/2018/04/03/attention.html>`_
 # or
 # `this explanation <http://jalammar.github.io/illustrated-transformer/>`_.
 #
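
For context on the autograd hunk above, here is a minimal sketch (not part of this commit, assuming PyTorch 1.3 or later with named tensor support) showing that the backward pass runs as usual while the resulting gradient carries no names:

import torch

x = torch.randn(3, names=('D',))
weight = torch.randn(3, names=('D',), requires_grad=True)

# Gradient computation works, but autograd ignores the 'D' name throughout.
loss = (x - weight).abs().sum()
loss.backward()

print(weight.grad.names)  # expected to be (None,): the gradient is unnamed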
