
Commit e28eace

Authored by ganghe74 and Svetlana Karslioglu

Fix a typo in scaled_dot_product_attention_tutorial.py (#2549)

``CausaulSelfAttention`` to ``CausalSelfAttention``

Co-authored-by: Svetlana Karslioglu <svekars@fb.com>

1 parent 28c6e96 commit e28eace

File tree

1 file changed: +1 −1 lines changed


intermediate_source/scaled_dot_product_attention_tutorial.py

Lines changed: 1 addition & 1 deletion
@@ -317,7 +317,7 @@ def generate_rand_batch(
 # on the same set of functions for both modules.
 # The reason for this here is that ``torch.compile`` is very good at removing the
 # framework overhead associated with PyTorch. If your model is launching
-# large, efficient CUDA kernels, which in this case ``CausaulSelfAttention``
+# large, efficient CUDA kernels, which in this case ``CausalSelfAttention``
 # is, then the overhead of PyTorch can be hidden.
 #
 # In reality, your module does not normally consist of a singular
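The changed comment explains that ``torch.compile`` can hide PyTorch's framework overhead when a module launches large, efficient CUDA kernels. A minimal sketch of that pattern, assuming PyTorch 2.x; the ``TinyAttention`` module below is hypothetical, not the tutorial's ``CausalSelfAttention``, and ``backend="eager"`` is chosen only so the sketch runs without a compiler toolchain:

```python
import torch
import torch.nn.functional as F

class TinyAttention(torch.nn.Module):
    # Hypothetical minimal module illustrating the torch.compile pattern.
    def forward(self, q, k, v):
        # scaled_dot_product_attention dispatches to fused kernels when
        # available; is_causal=True applies a causal mask.
        return F.scaled_dot_product_attention(q, k, v, is_causal=True)

model = TinyAttention()
# torch.compile wraps the module; with the default inductor backend it can
# remove per-call framework overhead around the underlying kernels.
compiled = torch.compile(model, backend="eager")

# (batch, heads, seq_len, head_dim)
q = k = v = torch.randn(2, 4, 8, 16)
out = compiled(q, k, v)
print(out.shape)  # torch.Size([2, 4, 8, 16])
```

The output shape matches the input query shape, as with uncompiled attention; compilation changes how the forward pass is executed, not its result.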
