Commit 2b7cbb7

a few more comments
1 parent 2ede031 commit 2b7cbb7

2 files changed (+5, -6 lines)

beginner_source/scaled_dot_product_attention_tutorial.py

Lines changed: 4 additions & 5 deletions
@@ -1,5 +1,5 @@
 """
-An overview of torch.nn.functional.scaled_dot_product_attention
+Create High-Performance Transformer Variations with Scaled Dot Product Attention
 ===============================================================

 """
@@ -14,8 +14,7 @@
 # function is named ``torch.nn.functional.scaled_dot_product_attention``.
 # There is some extensive documentation on the function in the `PyTorch
 # documentation <https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html#torch.nn.functional.scaled_dot_product_attention>`__.
-# This function has already been incorporated into torch.nn.MHA
-# (Multi-Head Attention) and ``torch.nn.TransformerEncoderLayer``.
+# This function has already been incorporated into torch.nn.MultiheadAttention (Multi-Head Attention) and ``torch.nn.TransformerEncoderLayer``.
 #
 # Overview
 # ~~~~~~~
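
As a concrete illustration of the function these comments describe (not part of the commit; the tensor shapes below are illustrative assumptions), a minimal call looks like:

    import torch
    import torch.nn.functional as F

    # Illustrative shapes: (batch, num_heads, seq_len, head_dim)
    query = torch.rand(2, 8, 128, 64)
    key = torch.rand(2, 8, 128, 64)
    value = torch.rand(2, 8, 128, 64)

    # Dispatches to a fused attention kernel when the inputs allow it;
    # otherwise falls back to the default math implementation.
    out = F.scaled_dot_product_attention(query, key, value)
    print(out.shape)  # torch.Size([2, 8, 128, 64])

This same entry point is what ``torch.nn.MultiheadAttention`` and ``torch.nn.TransformerEncoderLayer`` route through, per the updated comment above.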
@@ -57,8 +56,8 @@
 # implementations, the user can also explicitly control the dispatch via
 # the use of a context manager. This context manager allows users to
 # explicitly disable certain implementations. If a user wants to ensure
-# the function is indeed using the fasted implementation for their
-# specific inputs the context manager can be used to sweep through
+# the function is indeed using the fastest implementation for their
+# specific inputs, the context manager can be used to sweep through
 # measuring performance.
 #
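The hunk above describes sweeping through implementations with the dispatch context manager. A minimal sketch of such a sweep, assuming the PyTorch 2.0-era ``torch.backends.cuda.sdp_kernel`` context manager (the commit itself does not show this code; inputs and device are illustrative, and a real sweep would add timing, e.g. via ``torch.utils.benchmark``):

    import torch
    import torch.nn.functional as F
    from torch.backends.cuda import sdp_kernel

    # Illustrative inputs; the flash and mem_efficient kernels need CUDA.
    q = k = v = torch.rand(2, 8, 128, 64, device="cuda", dtype=torch.float16)

    # Enable exactly one implementation at a time and compare.
    configs = {
        "math": dict(enable_math=True, enable_flash=False, enable_mem_efficient=False),
        "flash": dict(enable_math=False, enable_flash=True, enable_mem_efficient=False),
        "mem_efficient": dict(enable_math=False, enable_flash=False, enable_mem_efficient=True),
    }
    for name, flags in configs.items():
        with sdp_kernel(**flags):
            out = F.scaled_dot_product_attention(q, k, v)
        print(name, tuple(out.shape))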

index.rst

Lines changed: 1 addition & 1 deletion
@@ -525,7 +525,7 @@ What's new in PyTorch tutorials?
    :tags: Model-Optimization

 .. customcarditem::
-   :header: (beta) An overview of torch.nn.functional.scaled_dot_product_attention
+   :header: (beta) Create High-Performance Transformer Variations with Scaled Dot Product Attention
    :card_description: This tutorial explores the new torch.nn.functional.scaled_dot_product_attention and how it can be used to construct Transformer components.
    :image: _static/img/thumbnails/cropped/pytorch-logo.png
    :link: beginner/scaled_dot_product_attention_tutorial.html
