
Commit f085c9e

Merge branch 'main' into fix-typos

2 parents: c59e6cb + 001e1a5

34 files changed: 2,138 additions and 530 deletions

.ci/docker/requirements.txt

Lines changed: 5 additions & 5 deletions

```diff
@@ -14,7 +14,7 @@ tqdm==4.66.1
 numpy==1.24.4
 matplotlib
 librosa
-torch==2.6
+torch==2.7
 torchvision
 torchdata
 networkx
@@ -36,7 +36,7 @@ datasets
 transformers
 torchmultimodal-nightly # needs to be updated to stable as soon as it's avaialable
 onnx
-onnxscript
+onnxscript>=0.2.2
 onnxruntime
 evaluate
 accelerate>=0.20.1
@@ -67,7 +67,7 @@ iopath
 pygame==2.6.0
 pycocotools
 semilearn==0.3.2
-torchao==0.5.0
+torchao==0.10.0
 segment_anything==1.0
-torchrec==1.0.0; platform_system == "Linux"
-fbgemm-gpu==1.1.0; platform_system == "Linux"
+torchrec==1.1.0; platform_system == "Linux"
+fbgemm-gpu==1.2.0; platform_system == "Linux"
```
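The bumped pins only take effect once the CI image is rebuilt against this file. A minimal sketch of a post-install sanity check against the new pins; the expected versions below are hand-copied from this diff, not read from the requirements file:

```python
# Minimal sketch: verify installed package versions against the pins above.
# Expected versions are transcribed from this diff; adjust if the file changes.
from importlib.metadata import version, PackageNotFoundError

EXPECTED = {
    "torch": "2.7",        # exact pin: torch==2.7
    "torchao": "0.10.0",   # exact pin: torchao==0.10.0
    "torchrec": "1.1.0",   # Linux-only pin
    "fbgemm-gpu": "1.2.0", # Linux-only pin
}

for pkg, expected in EXPECTED.items():
    try:
        installed = version(pkg)
    except PackageNotFoundError:
        print(f"{pkg}: NOT INSTALLED (expected {expected})")
        continue
    status = "ok" if installed.startswith(expected) else f"MISMATCH (expected {expected})"
    print(f"{pkg}: {installed} {status}")
```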

.github/workflows/build-tutorials.yml

Lines changed: 9 additions & 9 deletions

```diff
@@ -22,15 +22,15 @@ jobs:
           - { shard: 4, num_shards: 15, runner: "linux.g5.4xlarge.nvidia.gpu" }
           - { shard: 5, num_shards: 15, runner: "linux.g5.4xlarge.nvidia.gpu" }
           - { shard: 6, num_shards: 15, runner: "linux.g5.4xlarge.nvidia.gpu" }
-          - { shard: 7, num_shards: 15, runner: "linux.4xlarge.nvidia.gpu" }
-          - { shard: 8, num_shards: 15, runner: "linux.4xlarge.nvidia.gpu" }
-          - { shard: 9, num_shards: 15, runner: "linux.4xlarge.nvidia.gpu" }
-          - { shard: 10, num_shards: 15, runner: "linux.4xlarge.nvidia.gpu" }
-          - { shard: 11, num_shards: 15, runner: "linux.4xlarge.nvidia.gpu" }
-          - { shard: 12, num_shards: 15, runner: "linux.4xlarge.nvidia.gpu" }
-          - { shard: 13, num_shards: 15, runner: "linux.4xlarge.nvidia.gpu" }
-          - { shard: 14, num_shards: 15, runner: "linux.4xlarge.nvidia.gpu" }
-          - { shard: 15, num_shards: 15, runner: "linux.4xlarge.nvidia.gpu" }
+          - { shard: 7, num_shards: 15, runner: "linux.g5.4xlarge.nvidia.gpu" }
+          - { shard: 8, num_shards: 15, runner: "linux.g5.4xlarge.nvidia.gpu" }
+          - { shard: 9, num_shards: 15, runner: "linux.g5.4xlarge.nvidia.gpu" }
+          - { shard: 10, num_shards: 15, runner: "linux.g5.4xlarge.nvidia.gpu" }
+          - { shard: 11, num_shards: 15, runner: "linux.g5.4xlarge.nvidia.gpu" }
+          - { shard: 12, num_shards: 15, runner: "linux.g5.4xlarge.nvidia.gpu" }
+          - { shard: 13, num_shards: 15, runner: "linux.g5.4xlarge.nvidia.gpu" }
+          - { shard: 14, num_shards: 15, runner: "linux.g5.4xlarge.nvidia.gpu" }
+          - { shard: 15, num_shards: 15, runner: "linux.g5.4xlarge.nvidia.gpu" }
      fail-fast: false
    runs-on: ${{ matrix.runner }}
    steps:
```
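With this change all 15 shards target the same g5 GPU runner class. How each shard picks its slice of the tutorial suite is conceptually simple round-robin over the file list; a hypothetical sketch of that idea (illustrative only, not the repo's actual sharding code):

```python
# Illustrative round-robin sharding: shard k of n gets every n-th file.
# Mirrors the matrix above (num_shards=15) but is NOT the repo's code.
def files_for_shard(files: list[str], shard: int, num_shards: int) -> list[str]:
    # The GitHub matrix uses 1-based shard numbers, hence the "- 1" offset.
    return [f for i, f in enumerate(sorted(files)) if i % num_shards == shard - 1]

tutorials = [f"tutorial_{i:02d}.py" for i in range(40)]
print(files_for_shard(tutorials, shard=7, num_shards=15))
```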

.jenkins/build.sh

Lines changed: 2 additions & 3 deletions

```diff
@@ -22,11 +22,10 @@ sudo apt-get install -y pandoc
 #Install PyTorch Nightly for test.
 # Nightly - pip install --pre torch torchvision torchaudio -f https://download.pytorch.org/whl/nightly/cu102/torch_nightly.html
 # Install 2.5 to merge all 2.4 PRs - uncomment to install nightly binaries (update the version as needed).
-# sudo pip uninstall -y torch torchvision torchaudio torchtext torchdata
-# sudo pip3 install torch==2.6.0 torchvision --no-cache-dir --index-url https://download.pytorch.org/whl/test/cu124
 # sudo pip uninstall -y fbgemm-gpu torchrec
+# sudo pip uninstall -y torch torchvision torchaudio torchtext torchdata torchrl tensordict
 # sudo pip3 install fbgemm-gpu==1.1.0 torchrec==1.0.0 --no-cache-dir --index-url https://download.pytorch.org/whl/test/cu124
-
+# pip3 install torch==2.7.0 torchvision torchaudio --no-cache-dir --index-url https://download.pytorch.org/whl/test/cu126
 # Install two language tokenizers for Translation with TorchText tutorial
 python -m spacy download en_core_web_sm
 python -m spacy download de_core_news_sm
```

.jenkins/validate_tutorials_built.py

Lines changed: 3 additions & 1 deletion

```diff
@@ -31,6 +31,7 @@
     "prototype_source/vmap_recipe",
     "prototype_source/torchscript_freezing",
     "prototype_source/nestedtensor",
+    "prototype_source/gpu_direct_storage", # requires specific filesystem + GPUDirect Storage to be set up
     "recipes_source/recipes/saving_and_loading_models_for_inference",
     "recipes_source/recipes/saving_multiple_models_in_one_file",
     "recipes_source/recipes/tensorboard_with_pytorch",
@@ -50,7 +51,8 @@
     "intermediate_source/flask_rest_api_tutorial",
     "intermediate_source/text_to_speech_with_torchaudio",
     "intermediate_source/tensorboard_profiler_tutorial", # reenable after 2.0 release.
-    "advanced_source/semi_structured_sparse" # reenable after 3303 is fixed.
+    "advanced_source/semi_structured_sparse", # reenable after 3303 is fixed.
+    "intermediate_source/torchrec_intro_tutorial", # reenable after 3302 is fixe
 ]
 
 def tutorial_source_dirs() -> List[Path]:
```
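The entries added above land in the script's skip list, so those tutorials are excluded before execution. A rough sketch of how that kind of filtering works, assuming a `NOT_RUN` list like the one being edited (simplified; not the script's exact logic):

```python
# Rough sketch of skip-list filtering, simplified from the idea in
# .jenkins/validate_tutorials_built.py; not the script's actual code.
from pathlib import Path

NOT_RUN = [
    "prototype_source/gpu_direct_storage",        # needs GPUDirect Storage set up
    "intermediate_source/torchrec_intro_tutorial",
]

def should_validate(tutorial: Path) -> bool:
    # Compare the extension-less posix path against the skip list.
    stem = tutorial.with_suffix("").as_posix()
    return stem not in NOT_RUN

print(should_validate(Path("prototype_source/gpu_direct_storage.py")))    # False
print(should_validate(Path("beginner_source/basics/tensor_tutorial.py"))) # True
```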

_static/img/install_msvc.png

Binary image file changed (131 KB → 117 KB); preview not rendered here.

_templates/layout.html

Lines changed: 0 additions & 9 deletions

```diff
@@ -211,14 +211,5 @@
 
 <img height="1" width="1" style="border-style:none;" alt="" src="https://www.googleadservices.com/pagead/conversion/795629140/?label=txkmCPmdtosBENSssfsC&amp;guid=ON&amp;script=0"/>
 
-//temporarily add a link to survey
-<script>
-  var survey = '<div class="survey-banner"><p><i class="fas fa-poll" aria-hidden="true">&nbsp </i> Take the <a href="https://forms.gle/KZ4xGL65VRMYNbbG6">PyTorch Docs/Tutorials survey</a>.</p></div>'
-  if ($(".pytorch-call-to-action-links").length) {
-    $(".pytorch-call-to-action-links").before(survey);
-  } else {
-    $("#pytorch-article").prepend(survey);
-  }
-</script>
 
 {% endblock %}
```

beginner_source/basics/README.txt

Lines changed: 1 addition & 1 deletion

```diff
@@ -13,7 +13,7 @@ Learn the Basics
    Tensors
    https://pytorch.org/tutorials/beginner/basics/tensor_tutorial.html
 
-4. dataquickstart_tutorial.py
+4. data_tutorial.py
    Datasets & DataLoaders
    https://pytorch.org/tutorials/beginner/basics/data_tutorial.html
 
```

beginner_source/basics/optimization_tutorial.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -76,7 +76,7 @@ def forward(self, x):
 # (`read more <https://pytorch.org/tutorials/beginner/hyperparameter_tuning_tutorial.html>`__ about hyperparameter tuning)
 #
 # We define the following hyperparameters for training:
-# - **Number of Epochs** - the number times to iterate over the dataset
+# - **Number of Epochs** - the number of times to iterate over the dataset
 # - **Batch Size** - the number of data samples propagated through the network before the parameters are updated
 # - **Learning Rate** - how much to update models parameters at each batch/epoch. Smaller values yield slow learning speed, while large values may result in unpredictable behavior during training.
 #
```
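For reference, the three hyperparameters described in that list are ordinary Python variables later in the tutorial; a minimal sketch with illustrative default values (not taken from this diff):

```python
# The three hyperparameters from the list above, as plain variables.
# Values are illustrative defaults, not dictated by this diff.
learning_rate = 1e-3  # step size for each parameter update
batch_size = 64       # samples propagated per parameter update
epochs = 5            # full passes over the dataset
```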

beginner_source/colab.rst

Lines changed: 2 additions & 2 deletions

```diff
@@ -11,7 +11,7 @@ PyTorch Version in Google Colab
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Wen you are running a tutorial that requires a version of PyTorch that has
-jst been released, that version might not be yet available in Google Colab.
+just been released, that version might not be yet available in Google Colab.
 To check that you have the required ``torch`` and compatible domain libraries
 installed, run ``!pip list``.
 
@@ -27,7 +27,7 @@ Using Tutorial Data from Google Drive in Colab
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 We've added a new feature to tutorials that allows users to open the
-ntebook associated with a tutorial in Google Colab. You may need to
+notebook associated with a tutorial in Google Colab. You may need to
 copy data to your Google drive account to get the more complex tutorials
 to work.
 
```

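Besides `!pip list`, a notebook cell can check the installed version directly; a minimal sketch (the required version below is a placeholder, not prescribed by this doc):

```python
# Quick in-notebook check that Colab's preinstalled torch is new enough.
# (2, 7) is a placeholder for whatever version the tutorial requires.
import torch

required = (2, 7)
installed = tuple(int(x) for x in torch.__version__.split("+")[0].split(".")[:2])
print(f"torch {torch.__version__}: {'ok' if installed >= required else 'too old, upgrade'}")
```
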
beginner_source/examples_autograd/polynomial_autograd.py

Lines changed: 0 additions & 1 deletion

```diff
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 """
 PyTorch: Tensors and autograd
 -------------------------------
```

beginner_source/examples_autograd/polynomial_custom_function.py

Lines changed: 0 additions & 1 deletion

```diff
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 """
 PyTorch: Defining New autograd Functions
 ----------------------------------------
```

conf.py

Lines changed: 40 additions & 11 deletions

```diff
@@ -33,8 +33,6 @@
 sys.path.insert(0, os.path.abspath('./.jenkins'))
 import pytorch_sphinx_theme
 import torch
-import numpy
-import gc
 import glob
 import random
 import shutil
@@ -49,6 +47,46 @@
 pio.renderers.default = 'sphinx_gallery'
 
 
+import sphinx_gallery.gen_rst
+import multiprocessing
+
+# Monkey patch sphinx gallery to run each example in an isolated process so that
+# we don't need to worry about examples changing global state.
+#
+# Alt option 1: Parallelism was added to sphinx gallery (a later version that we
+# are not using yet) using joblib, but it seems to result in errors for us, and
+# it has no effect if you set parallel = 1 (it will not put each file run into
+# its own process and run singly) so you need parallel >= 2, and there may be
+# tutorials that cannot be run in parallel.
+#
+# Alt option 2: Run sphinx gallery once per file (similar to how we shard in CI
+# but with shard sizes of 1), but running sphinx gallery for each file has a
+# ~5min overhead, resulting in the entire suite taking ~2x time
+def call_fn(func, args, kwargs, result_queue):
+    try:
+        result = func(*args, **kwargs)
+        result_queue.put((True, result))
+    except Exception as e:
+        result_queue.put((False, str(e)))
+
+def call_in_subprocess(func):
+    def wrapper(*args, **kwargs):
+        result_queue = multiprocessing.Queue()
+        p = multiprocessing.Process(
+            target=call_fn,
+            args=(func, args, kwargs, result_queue)
+        )
+        p.start()
+        p.join()
+        success, result = result_queue.get()
+        if success:
+            return result
+        else:
+            raise RuntimeError(f"Error in subprocess: {result}")
+    return wrapper
+
+sphinx_gallery.gen_rst.generate_file_rst = call_in_subprocess(sphinx_gallery.gen_rst.generate_file_rst)
+
 try:
     import torchvision
 except ImportError:
@@ -97,14 +135,6 @@
 
 # -- Sphinx-gallery configuration --------------------------------------------
 
-def reset_seeds(gallery_conf, fname):
-    torch.cuda.empty_cache()
-    torch.manual_seed(42)
-    torch.set_default_device(None)
-    random.seed(10)
-    numpy.random.seed(10)
-    gc.collect()
-
 sphinx_gallery_conf = {
     'examples_dirs': ['beginner_source', 'intermediate_source',
                       'advanced_source', 'recipes_source', 'prototype_source'],
@@ -115,7 +145,6 @@ def reset_seeds(gallery_conf, fname):
     'first_notebook_cell': ("# For tips on running notebooks in Google Colab, see\n"
                             "# https://pytorch.org/tutorials/beginner/colab\n"
                             "%matplotlib inline"),
-    'reset_modules': (reset_seeds),
     'ignore_pattern': r'_torch_export_nightly_tutorial.py',
     'pypandoc': {'extra_args': ['--mathjax', '--toc'],
                  'filters': ['.jenkins/custom_pandoc_filter.py'],
```
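The monkey patch works because each `generate_file_rst` call now mutates state only in a throwaway child process, which is also why the `reset_seeds` hook above becomes unnecessary. A small standalone demo of the same wrapper pattern, showing that a wrapped function cannot leak global state back into the parent (demo code, not part of conf.py):

```python
# Standalone demo of the subprocess-isolation pattern used above: the wrapped
# call runs in a child process, so its global mutations never reach the parent.
import multiprocessing

STATE = {"polluted": False}

def _call_fn(func, args, kwargs, result_queue):
    try:
        result_queue.put((True, func(*args, **kwargs)))
    except Exception as e:
        result_queue.put((False, str(e)))

def call_in_subprocess(func):
    def wrapper(*args, **kwargs):
        q = multiprocessing.Queue()
        p = multiprocessing.Process(target=_call_fn, args=(func, args, kwargs, q))
        p.start()
        p.join()
        ok, result = q.get()
        if not ok:
            raise RuntimeError(f"Error in subprocess: {result}")
        return result
    return wrapper

def pollute():
    STATE["polluted"] = True  # visible only inside the child process
    return "done"

if __name__ == "__main__":  # guard required on spawn-based platforms
    print(call_in_subprocess(pollute)())  # -> "done"
    print(STATE["polluted"])              # -> False: parent state untouched
```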

en-wordlist.txt

Lines changed: 11 additions & 0 deletions

```diff
@@ -698,3 +698,14 @@ TorchServe
 Inductor’s
 onwards
 recompilations
+BiasCorrection
+ELU
+GELU
+NNCF
+OpenVINO
+OpenVINOQuantizer
+PReLU
+Quantizer
+SmoothQuant
+quantizer
+quantizers
```

index.rst

Lines changed: 8 additions & 10 deletions

```diff
@@ -3,13 +3,11 @@ Welcome to PyTorch Tutorials
 
 **What's new in PyTorch tutorials?**
 
-* `Dynamic Compilation Control with torch.compiler.set_stance <https://pytorch.org/tutorials/recipes/torch_compiler_set_stance_tutorial.html>`__
-* `Accelerating PyTorch Transformers by replacing nn.Transformer with Nested Tensors and torch.compile() <https://pytorch.org/tutorials/intermediate/transformer_building_blocks.html>`__
-* `Understanding the torch.export Flow and Solutions to Common Challenges <https://pytorch.org/tutorials/recipes/torch_export_challenges_solutions.html>`__
-* Updated `torch.export Tutorial <https://pytorch.org/tutorials/intermediate/torch_export_tutorial.html#constraints-dynamic-shapes>`__ with automatic dynamic shapes ``Dim.AUTO``
-* Updated `torch.export AOTInductor Tutorial for Python runtime <https://pytorch.org/tutorials/recipes/torch_export_aoti_python.html>`__
-* Updated `Using User-Defined Triton Kernels with torch.compile <https://pytorch.org/tutorials/recipes/torch_compile_user_defined_triton_kernel_tutorial.html#composability>`__ with new ``torch.library.triton_op``
-* Updated `Compile Time Caching in torch.compile <https://pytorch.org/tutorials/recipes/torch_compile_caching_tutorial.html>`__ with new ``Mega-Cache``
+* `Utilizing Torch Function modes with torch.compile <https://pytorch.org/tutorials/recipes/torch_compile_torch_function_modes.html>`__
+* `Context Parallel Tutorial <https://pytorch.org/tutorials/prototype/context_parallel.html>`__
+* `PyTorch 2 Export Quantization with Intel GPU Backend through Inductor <https://pytorch.org/tutorials/prototype/pt2e_quant_xpu_inductor.html>`__
+* `(beta) Explicit horizontal fusion with foreach_map and torch.compile <https://pytorch.org/tutorials/recipes/foreach_map.html>`__
+* Updated `Inductor Windows CPU Tutorial <https://pytorch.org/tutorials/prototype/inductor_windows.html>`__
 
 .. raw:: html
 
@@ -768,14 +766,14 @@
    :tags: Parallel-and-Distributed-Training
 
 .. customcarditem::
-   :header: Getting Started with Fully Sharded Data Parallel(FSDP)
-   :card_description: Learn how to train models with Fully Sharded Data Parallel package.
+   :header: Getting Started with Fully Sharded Data Parallel (FSDP2)
+   :card_description: Learn how to train models with Fully Sharded Data Parallel (fully_shard) package.
    :image: _static/img/thumbnails/cropped/Getting-Started-with-FSDP.png
    :link: intermediate/FSDP_tutorial.html
    :tags: Parallel-and-Distributed-Training
 
 .. customcarditem::
-   :header: Advanced Model Training with Fully Sharded Data Parallel (FSDP)
+   :header: Advanced Model Training with Fully Sharded Data Parallel (FSDP1)
    :card_description: Explore advanced model training with Fully Sharded Data Parallel package.
    :image: _static/img/thumbnails/cropped/Getting-Started-with-FSDP.png
    :link: intermediate/FSDP_advanced_tutorial.html
```
