
Commit 362b005

Author: Seth Weidman
Merge branch 'master' into remove_generative_section
Parents: fb60b78 + 94cb6a3

4 files changed (+8 −5 lines)


beginner_source/transfer_learning_tutorial.py

Lines changed: 2 additions & 0 deletions

@@ -252,6 +252,8 @@ def visualize_model(model, num_images=6):
 
 model_ft = models.resnet18(pretrained=True)
 num_ftrs = model_ft.fc.in_features
+# Here the size of each output sample is set to 2.
+# Alternatively, it can be generalized to nn.Linear(num_ftrs, len(class_names)).
 model_ft.fc = nn.Linear(num_ftrs, 2)
 
 model_ft = model_ft.to(device)
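
For context, a minimal sketch of the generalized replacement the two new
comments describe. It assumes class_names comes from the tutorial's ImageFolder
datasets; a stand-in two-class list is used here.

import torch.nn as nn
from torchvision import models

class_names = ['ants', 'bees']  # stand-in for image_datasets['train'].classes

model_ft = models.resnet18(pretrained=True)
num_ftrs = model_ft.fc.in_features
# nn.Linear(num_ftrs, 2) hard-codes two output classes; sizing the layer from
# class_names generalizes the same head replacement to any number of classes.
model_ft.fc = nn.Linear(num_ftrs, len(class_names))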

index.rst

Lines changed: 1 addition & 1 deletion

@@ -307,7 +307,7 @@ PyTorch in Other Languages
    intermediate/char_rnn_generation_tutorial
    intermediate/seq2seq_translation_tutorial
    beginner/text_sentiment_ngrams_tutorial
-   beginner/translation_torchtext_tutorial
+   beginner/torchtext_translation_tutorial
    beginner/transformer_tutorial
 
 .. toctree::

intermediate_source/dist_tuto.rst

Lines changed: 1 addition & 1 deletion

@@ -1,4 +1,4 @@
-Writing Distributed Applications with PyTorch
+3. Writing Distributed Applications with PyTorch
 =============================================
 **Author**: `Séb Arnold <https://seba1511.com>`_
 

intermediate_source/model_parallel_tutorial.py

Lines changed: 4 additions & 3 deletions

@@ -28,7 +28,7 @@
 applications.
 
 Basic Usage
-================================
+-----------
 """
 
 ######################################################################
@@ -75,7 +75,7 @@ def forward(self, x):
 
 ######################################################################
 # Apply Model Parallel to Existing Modules
-# =======================
+# ----------------------------------------
 #
 # It is also possible to run an existing single-GPU module on multiple GPUs
 # with just a few lines of changes. The code below shows how to decompose
@@ -235,7 +235,7 @@ def plot(means, stds, labels, fig_name):
 
 ######################################################################
 # Speed Up by Pipelining Inputs
-# =======================
+# -----------------------------
 #
 # In the following experiments, we further divide each 120-image batch into
 # 20-image splits. As PyTorch launches CUDA operations asynchronizely, the
@@ -350,3 +350,4 @@ def forward(self, x):
 # for your environment, a proper approach is to first generate the curve to
 # figure out the best split size, and then use that split size to pipeline
 # inputs.
+#
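
The hunks above only retitle sections, but the input pipelining they refer to
is worth a sketch. The following is an illustrative forward pass in the spirit
of the tutorial's pipeline-parallel model; the names seq1, seq2, fc, the
two-GPU layout, and the 20-image split size are assumptions taken from its
model-parallel ResNet50 setup, not the tutorial's exact code.

import torch

def pipelined_forward(seq1, seq2, fc, x, split_size=20):
    # seq1 is assumed to live on cuda:0; seq2 and fc on cuda:1.
    # x is expected to already be on cuda:0.
    splits = iter(x.split(split_size, dim=0))
    s_prev = seq1(next(splits)).to('cuda:1')
    outputs = []
    for s_next in splits:
        # Because CUDA kernel launches are asynchronous, seq2 can run on
        # cuda:1 while seq1 processes the next split on cuda:0.
        s_prev = seq2(s_prev)
        outputs.append(fc(s_prev.view(s_prev.size(0), -1)))
        s_prev = seq1(s_next).to('cuda:1')
    # Drain the last split through the second stage.
    s_prev = seq2(s_prev)
    outputs.append(fc(s_prev.view(s_prev.size(0), -1)))
    return torch.cat(outputs)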
