Commit 85ab17c

Commit message: fdsa
1 parent 00b83bf commit 85ab17c

File tree: 1 file changed, +4 -1 lines changed

recipes_source/recipes/amp_recipe.py

Lines changed: 4 additions & 1 deletion
@@ -169,7 +169,7 @@ def make_model(in_size, out_size, num_layers):
         opt.zero_grad()
 
 ##########################################################
-# All together ("Automatic Mixed Precision")
+# All together: "Automatic Mixed Precision"
 # ------------------------------------------
 # (The following also demonstrates ``enabled``, an optional convenience argument to ``autocast`` and ``GradScaler``.
 # If False, ``autocast`` and ``GradScaler``\ 's calls become no-ops.
@@ -270,6 +270,9 @@ def make_model(in_size, out_size, num_layers):
 # * Multiple GPUs (``torch.nn.DataParallel`` or ``torch.nn.parallel.DistributedDataParallel``)
 # * Custom autograd functions (subclasses of ``torch.autograd.Function``)
 #
+# If you perform multiple convergence runs in the same script, each run should use
+# a dedicated fresh GradScaler instance. GradScaler instances are lightweight.
+#
 # If you're registering a custom C++ op with the dispatcher, see the
 # `autocast section <https://pytorch.org/tutorials/advanced/dispatcher.html#autocast>`_
 # of the dispatcher tutorial.
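
For context, a minimal sketch of the pattern the added note describes: one fresh GradScaler per convergence run, with the ``enabled`` argument (mentioned in the first hunk) turning ``autocast`` and ``GradScaler`` calls into no-ops when False. This is not the recipe's own code; the toy Linear model, tensor sizes, and step counts are placeholders, and it assumes a CUDA-capable device is available.

import torch

device = "cuda"      # assumes a CUDA-capable GPU
use_amp = True       # if False, autocast and GradScaler calls become no-ops

net = torch.nn.Linear(64, 64).to(device)
loss_fn = torch.nn.MSELoss()

for run in range(2):  # e.g. two separate convergence runs in one script
    opt = torch.optim.SGD(net.parameters(), lr=0.001)
    # Each run gets its own fresh GradScaler; instances are lightweight.
    scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

    for step in range(10):
        inp = torch.randn(8, 64, device=device)
        target = torch.randn(8, 64, device=device)
        with torch.cuda.amp.autocast(enabled=use_amp):
            out = net(inp)
            loss = loss_fn(out, target)
        scaler.scale(loss).backward()
        scaler.step(opt)   # unscales gradients; skips opt.step() on inf/NaN grads
        scaler.update()
        opt.zero_grad()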
