2 files changed (+5 −6 lines)
 """
-An overview of torch.nn.functional.scaled_dot_product_attention
+Create High-Performance Transformer Variations with Scaled Dot Product Attention
 ===============================================================

 """
 # function is named ``torch.nn.functional.scaled_dot_product_attention``.
 # There is some extensive documentation on the function in the `PyTorch
 # documentation <https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html#torch.nn.functional.scaled_dot_product_attention>`__.
-# This function has already been incorporated into torch.nn.MHA
-# (Multi-Head Attention) and ``torch.nn.TransformerEncoderLayer``.
+# This function has already been incorporated into torch.nn.MultiheadAttention (Multi-Head Attention) and ``torch.nn.TransformerEncoderLayer``.
 #
 # Overview
 # ~~~~~~~~
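As a minimal sketch of what the fused function being discussed looks like when called directly (the tensor shapes below are illustrative, not from the tutorial):

```python
import torch
import torch.nn.functional as F

# Attention inputs with shape (batch, num_heads, seq_len, head_dim).
query = torch.rand(2, 8, 16, 64)
key = torch.rand(2, 8, 16, 64)
value = torch.rand(2, 8, 16, 64)

# One fused call; PyTorch dispatches to the best available backend
# (flash, memory-efficient, or the reference math implementation).
out = F.scaled_dot_product_attention(query, key, value)
print(out.shape)  # torch.Size([2, 8, 16, 64])
```

This is the same primitive that ``torch.nn.MultiheadAttention`` and ``torch.nn.TransformerEncoderLayer`` now route through internally.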
 # implementations, the user can also explicitly control the dispatch via
 # the use of a context manager. This context manager allows users to
 # explicitly disable certain implementations. If a user wants to ensure
-# the function is indeed using the fasted implementation for their
-# specific inputs the context manager can be used to sweep through
+# the function is indeed using the fastest implementation for their
+# specific inputs, the context manager can be used to sweep through
 # measuring performance.
 #
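A sketch of the dispatch-control pattern the hunk above describes, using the ``torch.backends.cuda.sdp_kernel`` context manager (the interface available around the time of this PR; newer releases expose ``torch.nn.attention.sdpa_kernel`` instead):

```python
import torch
import torch.nn.functional as F
from torch.backends.cuda import sdp_kernel

q = k = v = torch.rand(2, 8, 128, 64)

# Disable the fused kernels so that, on CUDA inputs, only the reference
# math implementation may be selected. Sweeping these three flags while
# timing each configuration is how one finds the fastest backend for a
# given input shape.
with sdp_kernel(enable_flash=False, enable_math=True, enable_mem_efficient=False):
    out = F.scaled_dot_product_attention(q, k, v)
```

On CPU tensors the flags are inert, but the pattern is the same: wrap the call, toggle one backend on at a time, and benchmark.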
@@ -525,7 +525,7 @@ What's new in PyTorch tutorials?
    :tags: Model-Optimization

 .. customcarditem::
-   :header: (beta) An overview of torch.nn.functional.scaled_dot_product_attention
+   :header: (beta) Create High-Performance Transformer Variations with Scaled Dot Product Attention
    :card_description: This tutorial explores the new torch.nn.functional.scaled_dot_product_attention and how it can be used to construct Transformer components.
    :image: _static/img/thumbnails/cropped/pytorch-logo.png
    :link: beginner/scaled_dot_product_attention_tutorial.html