@@ -415,7 +415,7 @@ Welcome to PyTorch Tutorials
    :card_description: Learn how to use compiled autograd to capture a larger backward graph.
    :image: _static/img/thumbnails/cropped/generic-pytorch-logo.png
    :link: intermediate/compiled_autograd_tutorial
-   :tags: Model-Optimization,CUDA
+   :tags: Model-Optimization,CUDA,Compiler
 
 .. customcarditem::
    :header: Custom C++ and CUDA Operators
@@ -585,7 +585,7 @@ Welcome to PyTorch Tutorials
    :card_description: Train BERT, prune it to be 2:4 sparse, and then accelerate it to achieve 2x inference speedups with semi-structured sparsity and torch.compile.
    :image: _static/img/thumbnails/cropped/Pruning-Tutorial.png
    :link: advanced/semi_structured_sparse.html
-   :tags: Text,Model-Optimization
+   :tags: Text,Model-Optimization,Compiler
 
 .. customcarditem::
    :header: (beta) Dynamic Quantization on an LSTM Word Language Model
@@ -637,18 +637,18 @@ Welcome to PyTorch Tutorials
    :tags: Model-Optimization,Best-Practice,Ax,TorchX
 
 .. customcarditem::
-   :header: torch.compile Tutorial
+   :header: Introduction to torch.compile
    :card_description: Speed up your models with minimal code changes using torch.compile, the latest PyTorch compiler solution.
    :image: _static/img/thumbnails/cropped/generic-pytorch-logo.png
    :link: intermediate/torch_compile_tutorial.html
-   :tags: Model-Optimization
+   :tags: Model-Optimization,Compiler
 
 .. customcarditem::
    :header: Inductor CPU Backend Debugging and Profiling
    :card_description: Learn the usage, debugging and performance profiling for ``torch.compile`` with Inductor CPU backend.
    :image: _static/img/thumbnails/cropped/generic-pytorch-logo.png
    :link: intermediate/inductor_debug_cpu.html
-   :tags: Model-Optimization
+   :tags: Model-Optimization,Compiler
 
 .. customcarditem::
    :header: (beta) Implementing High-Performance Transformers with SCALED DOT PRODUCT ATTENTION
@@ -957,6 +957,17 @@ Additional Resources
    beginner/vt_tutorial
    intermediate/tiatoolbox_tutorial
 
+.. toctree::
+   :maxdepth: 1
+   :includehidden:
+   :hidden:
+   :caption: Compiler
+
+   intermediate/torch_compile_tutorial
+   intermediate/compiled_autograd_tutorial
+   intermediate/transformer_building_blocks
+   intermediate/inductor_debug_cpu
+
 .. toctree::
    :maxdepth: 2
    :includehidden:
@@ -1079,8 +1090,6 @@ Additional Resources
    intermediate/torchserve_with_ipex_2
    intermediate/nvfuser_intro_tutorial
    intermediate/ax_multiobjective_nas_tutorial
-   intermediate/torch_compile_tutorial
-   intermediate/compiled_autograd_tutorial
    intermediate/inductor_debug_cpu
    intermediate/scaled_dot_product_attention_tutorial
    beginner/knowledge_distillation_tutorial