Commit 54621d4

Author: Svetlana Karslioglu
Merge branch 'main' into redirect-tts
2 parents 120e67a + 48c31c4

3 files changed: 3 additions, 8 deletions


beginner_source/basics/autogradqs_tutorial.py

Lines changed: 1 addition & 3 deletions
@@ -130,9 +130,7 @@
 
 ######################################################################
 # There are reasons you might want to disable gradient tracking:
-# - To mark some parameters in your neural network as **frozen parameters**. This is
-#   a very common scenario for
-#   `finetuning a pretrained network <https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html>`__
+# - To mark some parameters in your neural network as **frozen parameters**.
 # - To **speed up computations** when you are only doing forward pass, because computations on tensors that do
 #   not track gradients would be more efficient.
 

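For reference, the two motivations kept in the tutorial text (frozen parameters and faster forward-only computation) correspond to standard PyTorch idioms. A minimal illustrative sketch, not part of this commit:

import torch

# A tensor that tracks gradients by default.
x = torch.ones(5, requires_grad=True)

# Freezing: mark a parameter-like tensor so autograd stops tracking it.
w = torch.randn(5, 3, requires_grad=True)
w.requires_grad_(False)
print(w.requires_grad)   # False

# Forward-only speedup: build no graph at all inside torch.no_grad().
with torch.no_grad():
    y = x * 2
print(y.requires_grad)   # False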
beginner_source/blitz/autograd_tutorial.py

Lines changed: 0 additions & 3 deletions
@@ -276,9 +276,6 @@
 # It is useful to "freeze" part of your model if you know in advance that you won't need the gradients of those parameters
 # (this offers some performance benefits by reducing autograd computations).
 #
-# Another common usecase where exclusion from the DAG is important is for
-# `finetuning a pretrained network <https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html>`__
-#
 # In finetuning, we freeze most of the model and typically only modify the classifier layers to make predictions on new labels.
 # Let's walk through a small example to demonstrate this. As before, we load a pretrained resnet18 model, and freeze all the parameters.
 

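The removed cross-reference pointed to the finetuning workflow that the surrounding tutorial text still describes: freeze the pretrained parameters, then swap in a fresh classifier layer. A rough sketch of that workflow (illustrative only, not part of this commit; assumes a recent torchvision API and 10 target classes):

import torch
from torch import nn
from torchvision import models

# Load a pretrained resnet18 and freeze every parameter.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Swap in a new classifier layer; its parameters track gradients again,
# so only this layer is updated during finetuning (10 classes assumed).
model.fc = nn.Linear(model.fc.in_features, 10)

# The optimizer only needs the trainable (classifier) parameters.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)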
beginner_source/ddp_series_multigpu.rst

Lines changed: 2 additions & 2 deletions
@@ -177,8 +177,8 @@ Running the distributed training job
 + ddp_setup(rank, world_size)
   dataset, model, optimizer = load_train_objs()
   train_data = prepare_dataloader(dataset, batch_size=32)
-- trainer = Trainer(model, dataset, optimizer, device, save_every)
-+ trainer = Trainer(model, dataset, optimizer, rank, save_every)
+- trainer = Trainer(model, train_data, optimizer, device, save_every)
++ trainer = Trainer(model, train_data, optimizer, rank, save_every)
   trainer.train(total_epochs)
 + destroy_process_group()
 

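For context on the snippet above: this commit corrects the documented call to pass the prepared DataLoader (train_data) rather than the raw dataset, while the tutorial's embedded diff itself shows the single-GPU device argument being replaced by the process rank. The ddp_setup / destroy_process_group calls that bracket the job typically look like the sketch below (illustrative, not taken from this commit; the master address and port values are assumptions):

import os
import torch
from torch.distributed import init_process_group, destroy_process_group

def ddp_setup(rank: int, world_size: int):
    # Each spawned process joins the NCCL process group and pins itself
    # to its own GPU; the address/port below are illustrative assumptions.
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "12355"
    init_process_group(backend="nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

def main(rank: int, world_size: int, total_epochs: int, save_every: int):
    ddp_setup(rank, world_size)
    # ... build the dataset, model, and optimizer, then run the Trainer here ...
    destroy_process_group()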