Commit fd289a4

Author: Svetlana Karslioglu
Merge branch 'main' into add-issue-template-2
2 parents: 7a176c1 + 33492c7

File tree

5 files changed (+15, -16 lines)

.jenkins/validate_tutorials_built.py

Lines changed: 0 additions & 1 deletion

@@ -14,7 +14,6 @@
     "profiler",
     "saving_loading_models",
     "introyt/captumyt",
-    "introyt/trainingyt",
     "examples_nn/polynomial_module",
     "examples_nn/dynamic_net",
     "examples_nn/polynomial_optim",

beginner_source/introyt/captumyt.py

Lines changed: 4 additions & 4 deletions

@@ -106,13 +106,13 @@
 To install Captum in an Anaconda or pip virtual environment, use the
 appropriate command for your environment below:

-With ``conda``:
+With ``conda``::

-``conda install pytorch torchvision captum -c pytorch``
+    conda install pytorch torchvision captum -c pytorch

-With ``pip``:
+With ``pip``::

-``pip install torch torchvision captum``
+    pip install torch torchvision captum

 Restart this notebook in the environment you set up, and you’re ready to
 go!

beginner_source/introyt/tensorboardyt_tutorial.py

Lines changed: 5 additions & 5 deletions

@@ -24,14 +24,14 @@
 To run this tutorial, you’ll need to install PyTorch, TorchVision,
 Matplotlib, and TensorBoard.

-With ``conda``:
+With ``conda``::

-``conda install pytorch torchvision -c pytorch``
-``conda install matplotlib tensorboard``
+    conda install pytorch torchvision -c pytorch
+    conda install matplotlib tensorboard

-With ``pip``:
+With ``pip``::

-``pip install torch torchvision matplotlib tensorboard``
+    pip install torch torchvision matplotlib tensorboard

 Once the dependencies are installed, restart this notebook in the Python
 environment where you installed them.

beginner_source/introyt/trainingyt.py

Lines changed: 2 additions & 2 deletions

@@ -81,8 +81,8 @@
 validation_set = torchvision.datasets.FashionMNIST('./data', train=False, transform=transform, download=True)

 # Create data loaders for our datasets; shuffle for training, not for validation
-training_loader = torch.utils.data.DataLoader(training_set, batch_size=4, shuffle=True, num_workers=2)
-validation_loader = torch.utils.data.DataLoader(validation_set, batch_size=4, shuffle=False, num_workers=2)
+training_loader = torch.utils.data.DataLoader(training_set, batch_size=4, shuffle=True)
+validation_loader = torch.utils.data.DataLoader(validation_set, batch_size=4, shuffle=False)

 # Class labels
 classes = ('T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
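The loaders above batch the dataset and reshuffle it each epoch (for training only). As a rough pure-Python sketch of that behavior only, assuming a simple list-backed dataset (this is not the actual ``torch.utils.data.DataLoader`` implementation, and ``iter_batches`` is a hypothetical helper):

```python
import random

def iter_batches(dataset, batch_size=4, shuffle=False, seed=None):
    """Yield lists of samples, mimicking the batching/shuffling the loaders above provide."""
    indices = list(range(len(dataset)))
    if shuffle:
        # Reshuffle once per pass, as DataLoader(shuffle=True) does each epoch
        random.Random(seed).shuffle(indices)
    for start in range(0, len(indices), batch_size):
        yield [dataset[i] for i in indices[start:start + batch_size]]

samples = list(range(10))
ordered = list(iter_batches(samples, batch_size=4, shuffle=False))
# Without shuffling, batches preserve dataset order; the last batch may be short
```

With ``shuffle=True`` the order differs between passes, but every sample still appears exactly once per pass, which is why validation loaders disable shuffling: it buys nothing there.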

recipes_source/recipes/amp_recipe.py

Lines changed: 4 additions & 4 deletions

@@ -11,15 +11,15 @@
 range of ``float32``. Mixed precision tries to match each op to its appropriate datatype,
 which can reduce your network's runtime and memory footprint.

-Ordinarily, "automatic mixed precision training" uses `torch.autocast <https://pytorch.org/docs/stable/amp.html#torch.cuda.amp.autocast>`_ and
+Ordinarily, "automatic mixed precision training" uses `torch.autocast <https://pytorch.org/docs/stable/amp.html#torch.autocast>`_ and
 `torch.cuda.amp.GradScaler <https://pytorch.org/docs/stable/amp.html#torch.cuda.amp.GradScaler>`_ together.

 This recipe measures the performance of a simple network in default precision,
 then walks through adding ``autocast`` and ``GradScaler`` to run the same network in
 mixed precision with improved performance.

 You may download and run this recipe as a standalone Python script.
-The only requirements are Pytorch 1.6+ and a CUDA-capable GPU.
+The only requirements are PyTorch 1.6 or later and a CUDA-capable GPU.

 Mixed precision primarily benefits Tensor Core-enabled architectures (Volta, Turing, Ampere).
 This recipe should show significant (2-3X) speedup on those architectures.
@@ -105,7 +105,7 @@ def make_model(in_size, out_size, num_layers):
 ##########################################################
 # Adding autocast
 # ---------------
-# Instances of `torch.cuda.amp.autocast <https://pytorch.org/docs/stable/amp.html#autocasting>`_
+# Instances of `torch.autocast <https://pytorch.org/docs/stable/amp.html#autocasting>`_
 # serve as context managers that allow regions of your script to run in mixed precision.
 #
 # In these regions, CUDA ops run in a dtype chosen by autocast
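The hunk above describes ``torch.autocast`` as a context manager whose region runs ops in a lower-precision dtype. As an illustration of that pattern only (``toy_autocast`` and ``_compute_dtype`` are hypothetical stand-ins, not real torch machinery), a context manager can switch a mode on entry and restore it on exit:

```python
from contextlib import contextmanager

# Hypothetical stand-in for the dtype a region would compute in; not real torch state.
_compute_dtype = "float32"

@contextmanager
def toy_autocast(dtype="float16"):
    """Temporarily switch the (toy) compute dtype, restoring it on exit."""
    global _compute_dtype
    previous = _compute_dtype
    _compute_dtype = dtype
    try:
        yield
    finally:
        # Restore even if the body raises, as a real context manager must
        _compute_dtype = previous

with toy_autocast():
    inside = _compute_dtype   # "float16" inside the region
outside = _compute_dtype      # back to "float32" after the region exits
```

The ``try``/``finally`` is the important part of the pattern: the previous mode is restored even if an exception escapes the region.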
@@ -310,7 +310,7 @@ def make_model(in_size, out_size, num_layers):
 # 1. Disable ``autocast`` or ``GradScaler`` individually (by passing ``enabled=False`` to their constructor) and see if infs/NaNs persist.
 # 2. If you suspect part of your network (e.g., a complicated loss function) overflows, run that forward region in ``float32``
 #    and see if infs/NaNs persist.
-# `The autocast docstring <https://pytorch.org/docs/stable/amp.html#torch.cuda.amp.autocast>`_'s last code snippet
+# `The autocast docstring <https://pytorch.org/docs/stable/amp.html#torch.autocast>`_'s last code snippet
 # shows forcing a subregion to run in ``float32`` (by locally disabling autocast and casting the subregion's inputs).
 #
 # Type mismatch error (may manifest as CUDNN_STATUS_BAD_PARAM)
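The debugging tips above hinge on ``GradScaler``'s core behavior: the loss is scaled up, gradients are unscaled before the step, and the step is skipped (and the scale reduced) when infs/NaNs appear. A schematic pure-Python sketch of that skip logic with toy numbers (``scaled_step`` is a hypothetical helper, not the real ``torch.cuda.amp.GradScaler``):

```python
import math

def scaled_step(grads, scale):
    """Unscale gradients and decide whether the optimizer step should run.

    Schematically mirrors GradScaler.step: if any unscaled gradient is
    inf/NaN, skip the step and back off the scale; otherwise return the
    unscaled gradients for the optimizer to apply.
    """
    unscaled = [g / scale for g in grads]
    if any(math.isinf(g) or math.isnan(g) for g in unscaled):
        return None, scale / 2.0   # skip this step, reduce the scale
    return unscaled, scale         # safe to step with these gradients

grads_ok, scale_ok = scaled_step([1024.0, -2048.0], scale=1024.0)   # healthy step
grads_bad, scale_bad = scaled_step([float("inf")], scale=1024.0)    # skipped step
```

This is why ``enabled=False`` on ``GradScaler`` is a useful diagnostic: with scaling out of the loop, persisting infs/NaNs point at the network itself rather than the scaling machinery.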

0 commit comments