@@ -97,7 +97,7 @@ So each image has a corresponding
segmentation mask, where each color correspond to a different instance.
Let’s write a ``torch.utils.data.Dataset `` class for this dataset.

- ::
+ .. code :: python

import os
import numpy as np
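
For readers who want to see what the block re-tagged above contains, here is a condensed sketch of the kind of ``torch.utils.data.Dataset`` the tutorial builds at this point. The class name ``PennFudanDataset`` and the ``PNGImages``/``PedMasks`` folder layout are assumptions based on the Penn-Fudan data the tutorial uses; they are not part of this diff.

.. code :: python

    import os
    import numpy as np
    import torch
    from PIL import Image


    class PennFudanDataset(torch.utils.data.Dataset):
        def __init__(self, root, transforms=None):
            self.root = root
            self.transforms = transforms
            # images and masks are paired by their sorted file names
            self.imgs = sorted(os.listdir(os.path.join(root, "PNGImages")))
            self.masks = sorted(os.listdir(os.path.join(root, "PedMasks")))

        def __getitem__(self, idx):
            img = Image.open(os.path.join(self.root, "PNGImages", self.imgs[idx])).convert("RGB")
            mask = np.array(Image.open(os.path.join(self.root, "PedMasks", self.masks[idx])))

            # every distinct color (instance id) in the mask is one object; 0 is background
            obj_ids = np.unique(mask)[1:]
            masks = mask == obj_ids[:, None, None]

            # derive a bounding box [xmin, ymin, xmax, ymax] from each binary mask
            boxes = []
            for m in masks:
                ys, xs = np.where(m)
                boxes.append([xs.min(), ys.min(), xs.max(), ys.max()])
            boxes = torch.as_tensor(boxes, dtype=torch.float32)

            target = {
                "boxes": boxes,
                "labels": torch.ones((len(obj_ids),), dtype=torch.int64),  # single class
                "masks": torch.as_tensor(masks, dtype=torch.uint8),
                "image_id": torch.tensor([idx]),
                "area": (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0]),
                "iscrowd": torch.zeros((len(obj_ids),), dtype=torch.int64),
            }
            if self.transforms is not None:
                img, target = self.transforms(img, target)
            return img, target

        def __len__(self):
            return len(self.imgs)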
@@ -208,7 +208,7 @@ Let’s suppose that you want to start from a model pre-trained on COCO
and want to finetune it for your particular classes. Here is a possible
way of doing it:

- ::
+ .. code :: python

import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
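
The block being re-marked here is the finetuning snippet: load a COCO-pretrained Faster R-CNN and swap its box predictor for one sized to your classes. A rough sketch of that pattern; the ``num_classes`` value is an illustrative choice, not something this diff specifies.

.. code :: python

    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    # load a Faster R-CNN model pre-trained on COCO
    # (recent torchvision releases take a ``weights=`` argument instead of ``pretrained=True``)
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)

    num_classes = 2  # e.g. 1 foreground class + background

    # replace the COCO box-classification head with one sized for our classes
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)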
@@ -227,7 +227,7 @@ way of doing it:
2 - Modifying the model to add a different backbone
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- ::
+ .. code :: python

import torchvision
from torchvision.models.detection import FasterRCNN
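
In this second option the detector's backbone is replaced entirely, which also means supplying an anchor generator and an RoI pooler explicitly. A sketch along the lines of the tutorial; the MobileNetV2 backbone and the specific anchor sizes are illustrative assumptions.

.. code :: python

    import torchvision
    from torchvision.models.detection import FasterRCNN
    from torchvision.models.detection.rpn import AnchorGenerator

    # reuse a classification network as the backbone; FasterRCNN needs to know
    # how many channels its feature maps have (1280 for MobileNetV2)
    backbone = torchvision.models.mobilenet_v2(pretrained=True).features
    backbone.out_channels = 1280

    # a custom backbone has no defaults, so define anchors and RoI pooling ourselves
    anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
                                       aspect_ratios=((0.5, 1.0, 2.0),))
    roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=['0'],
                                                    output_size=7,
                                                    sampling_ratio=2)

    model = FasterRCNN(backbone,
                       num_classes=2,
                       rpn_anchor_generator=anchor_generator,
                       box_roi_pool=roi_pooler)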
@@ -275,7 +275,7 @@ our dataset is very small, so we will be following approach number 1.
Here we want to also compute the instance segmentation masks, so we will
be using Mask R-CNN:

- ::
+ .. code :: python

import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
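
Because the dataset also provides instance masks, the tutorial switches to Mask R-CNN at this point and replaces both the box head and the mask head. A sketch of that step; the helper name ``get_instance_segmentation_model`` and the 256-channel hidden layer are conventional choices, not mandated by this diff.

.. code :: python

    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
    from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor


    def get_instance_segmentation_model(num_classes):
        # load a Mask R-CNN model pre-trained on COCO
        model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)

        # replace the box predictor with one sized for our classes
        in_features = model.roi_heads.box_predictor.cls_score.in_features
        model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

        # replace the mask predictor as well
        in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
        hidden_layer = 256
        model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask,
                                                           hidden_layer,
                                                           num_classes)
        return model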
@@ -316,7 +316,7 @@ folder and use them here.
Let’s write some helper functions for data augmentation /
transformation:

- ::
+ .. code :: python

import transforms as T

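The transform helpers come from the ``references/detection`` scripts mentioned just above this hunk; they operate on the image and the target together so that boxes and masks stay consistent under flipping. A minimal sketch, assuming that the copied ``transforms.py`` provides ``Compose``, ``ToTensor`` and ``RandomHorizontalFlip`` (as it did when this tutorial was written).

.. code :: python

    import transforms as T  # helper module copied from references/detection


    def get_transform(train):
        transforms = [T.ToTensor()]  # convert the PIL image to a tensor
        if train:
            # flip the image and its boxes/masks together with probability 0.5
            transforms.append(T.RandomHorizontalFlip(0.5))
        return T.Compose(transforms)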
@@ -330,7 +330,7 @@ transformation:
Let’s now write the main function which performs the training and the
validation:

- ::
+ .. code :: python

from engine import train_one_epoch, evaluate
import utils
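
Finally, ``train_one_epoch`` and ``evaluate`` from the copied ``engine.py`` drive the loop. A sketch of that main function, assuming the ``PennFudanDataset``, ``get_transform`` and ``get_instance_segmentation_model`` helpers sketched earlier plus ``utils.collate_fn`` from the same reference scripts; the batch sizes, learning rate and epoch count here are illustrative.

.. code :: python

    from engine import train_one_epoch, evaluate
    import utils
    import torch


    def main():
        device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
        num_classes = 2  # background + person

        # PennFudanDataset and get_transform are the helpers sketched earlier
        dataset = PennFudanDataset('PennFudanPed', get_transform(train=True))
        dataset_test = PennFudanDataset('PennFudanPed', get_transform(train=False))

        data_loader = torch.utils.data.DataLoader(
            dataset, batch_size=2, shuffle=True, collate_fn=utils.collate_fn)
        data_loader_test = torch.utils.data.DataLoader(
            dataset_test, batch_size=1, shuffle=False, collate_fn=utils.collate_fn)

        model = get_instance_segmentation_model(num_classes)
        model.to(device)

        params = [p for p in model.parameters() if p.requires_grad]
        optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=0.0005)
        lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.1)

        for epoch in range(10):
            # one pass over the training data, then step the LR and evaluate
            train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
            lr_scheduler.step()
            evaluate(model, data_loader_test, device=device)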