Commit f027be9

[skip-ci] Few minor updates
1 parent 1a49e38 commit f027be9

File tree

1 file changed: +18 -12 lines


intermediate_source/torchvision_tutorial.py

Lines changed: 18 additions & 12 deletions
@@ -9,14 +9,14 @@
 # .. tip::
 #
 #    To get the most of this tutorial, we suggest using this
-#    `Colab Version <https://colab.research.google.com/github/pytorch/tutorials/blob/gh-pages/_downloads/torchvision_finetuning_instance_segmentation.ipynb>`__.
+#    `Colab Version <https://colab.research.google.com/github/pytorch/tutorials/blob/gh-pages/_downloads/torchvision_finetuning_instance_segmentation.ipynb>`_.
 #    This will allow you to experiment with the information presented below.
 #
 #
 # For this tutorial, we will be finetuning a pre-trained `Mask
-# R-CNN <https://arxiv.org/abs/1703.06870>`__ model on the `Penn-Fudan
+# R-CNN <https://arxiv.org/abs/1703.06870>`_ model on the `Penn-Fudan
 # Database for Pedestrian Detection and
-# Segmentation <https://www.cis.upenn.edu/~jshi/ped_html/>`__. It contains
+# Segmentation <https://www.cis.upenn.edu/~jshi/ped_html/>`_. It contains
 # 170 images with 345 instances of pedestrians, and we will use it to
 # illustrate how to use the new features in torchvision in order to train
 # an object detection and instance segmentation model on a custom dataset.
@@ -65,7 +65,7 @@
6565
# ``pycocotools`` which can be installed with ``pip install pycocotools``.
6666
#
6767
# .. note ::
68-
# For Windows, please install ``pycocotools`` from `gautamchitnis <https://github.com/gautamchitnis/cocoapi>`__ with command
68+
# For Windows, please install ``pycocotools`` from `gautamchitnis <https://github.com/gautamchitnis/cocoapi>`_ with command
6969
#
7070
# ``pip install git+https://github.com/gautamchitnis/cocoapi.git@cocodataset-master#subdirectory=PythonAPI``
7171
#
@@ -85,10 +85,16 @@
 # Writing a custom dataset for PennFudan
 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 #
-# Let’s write a dataset for the PennFudan dataset. After `downloading and
-# extracting the zip
-# file <https://www.cis.upenn.edu/~jshi/ped_html/PennFudanPed.zip>`__, we
-# have the following folder structure:
+# Let’s write a dataset for the PennFudan dataset. First, let's download the dataset and
+# extract the `zip file <https://www.cis.upenn.edu/~jshi/ped_html/PennFudanPed.zip>`_:
+#
+# .. code:: python
+#
+#    wget https://www.cis.upenn.edu/~jshi/ped_html/PennFudanPed.zip -P data
+#    cd data && unzip PennFudanPed.zip
+#
+#
+# We have the following folder structure:
 #
 # ::
 #
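The `wget`/`unzip` commands added in this hunk can equivalently be done with the Python standard library. A minimal sketch, assuming the same `data` destination directory as the commands above (the helper name `download_and_extract` is hypothetical, not part of the tutorial):

```python
import urllib.request
import zipfile
from pathlib import Path


def download_and_extract(url: str, dest: str = "data") -> Path:
    """Download a zip archive into ``dest`` and extract it there.

    Mirrors ``wget <url> -P data && cd data && unzip ...`` using only
    the standard library. Works for http(s) and file:// URLs.
    """
    dest_dir = Path(dest)
    dest_dir.mkdir(parents=True, exist_ok=True)
    archive = dest_dir / url.rsplit("/", 1)[-1]
    urllib.request.urlretrieve(url, archive)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest_dir)
    return dest_dir
```

Running it against the Penn-Fudan URL from the diff requires network access, so the shell commands in the tutorial remain the authoritative recipe.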
@@ -196,8 +202,8 @@ def __len__(self):
 # -------------------
 #
 # In this tutorial, we will be using `Mask
-# R-CNN <https://arxiv.org/abs/1703.06870>`__, which is based on top of
-# `Faster R-CNN <https://arxiv.org/abs/1506.01497>`__. Faster R-CNN is a
+# R-CNN <https://arxiv.org/abs/1703.06870>`_, which is based on top of
+# `Faster R-CNN <https://arxiv.org/abs/1506.01497>`_. Faster R-CNN is a
 # model that predicts both bounding boxes and class scores for potential
 # objects in the image.
 #
@@ -484,7 +490,7 @@ def get_transform(train):
 from torchvision.utils import draw_bounding_boxes, draw_segmentation_masks
 
 
-image = read_image("../_static/img/tv_tutorial/tv_image05.png")
+image = read_image("data/PennFudanPed/PNGImages/FudanPed00046.png")
 eval_transform = get_transform(train=False)
 
 model.eval()
@@ -527,4 +533,4 @@ def get_transform(train):
 # the torchvision repository.
 #
 # You can download a full source file for this tutorial
-# `here <https://pytorch.org/tutorials/_static/tv-training-code.py>`__.
+# `here <https://pytorch.org/tutorials/_static/tv-training-code.py>`_.
