Commit d12916d (1 parent: e0d2d6b)

Figures added, pretrained weights link added, minor fixes (#456)

* Figures added, pretrained weights link added, minor fixes
  Signed-off-by: vnath <vnath@nvidia.com>
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci

Co-authored-by: vnath <vnath@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

File tree: 3 files changed (+52, -37 lines)

figures/SSL_Different_Augviews.png (1020 KB)

figures/SSL_Overview_Figure.png (793 KB)

self_supervised_pretraining/README.md (52 additions, 37 deletions)
@@ -1,34 +1,42 @@

# Self-Supervised Pre-training Tutorial

This directory contains two scripts. The first script, 'ssl_script_train.py', generates
a good set of pre-trained weights using unlabeled data with self-supervised tasks that
are based on augmentations of different types. The second script, 'ssl_finetune_train.py', uses
the pre-trained weights generated by the first script and performs fine-tuning on a fully supervised
task.

In case the user wants to skip the pre-training part, the pre-trained weights can be
[downloaded from here](https://drive.google.com/file/d/1z0BouIiQ9oLizubOIH9Rlpad5Kk_2RtY/view?usp=sharing)
and used for the fine-tuning tasks; in that case, skip directly to the second part of the tutorial, which uses
'ssl_finetune_train.py'.
13+
### Steps to run the tutorial
1.) Download the two datasets [TCIA-Covid19](https://wiki.cancerimagingarchive.net/display/Public/CT+Images+in+COVID-19)
& [BTCV](https://www.synapse.org/#!Synapse:syn3193805/wiki/217789) (more detail about them in the Data section)\
2.) Modify the paths for data_root, json_path & logdir in 'ssl_script_train.py'\
3.) Run 'ssl_script_train.py'\
4.) Modify the paths for data_root, json_path, pretrained_weights_path from 2.) and
logdir_path in 'ssl_finetune_train.py'\
5.) Run 'ssl_finetune_train.py'\
6.) And that's all folks, use the model to your needs
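Steps 2.) and 4.) amount to editing a few path variables near the top of the scripts. A minimal sketch of what those assignments might look like, assuming example locations (the variable names come from the steps above; all values and file names are placeholders for your environment):

```python
import os

# Placeholder values: point these at your own copies of the data (step 2).
data_root = "/workspace/data/tcia_covid19"            # downloaded TCIA-Covid19 volumes
json_path = os.path.join("json_files", "split.json")  # hypothetical split-file name
logdir = os.path.join("logs", "ssl_pretrain")         # output directory for logs/weights

# Used by the fine-tuning script (step 4); produced by ssl_script_train.py.
pretrained_weights_path = os.path.join(logdir, "best_model.pt")  # hypothetical file name

os.makedirs(logdir, exist_ok=True)
```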

### 1. Data
Pre-training Dataset: The TCIA Covid-19 dataset was used for generating the
[pre-trained weights](https://drive.google.com/file/d/1z0BouIiQ9oLizubOIH9Rlpad5Kk_2RtY/view?usp=sharing).
The dataset contains a total of 771 3D CT volumes. The volumes were split into training and validation sets
of 600 and 171 3D volumes respectively. The data is available for download at this
[link](https://wiki.cancerimagingarchive.net/display/Public/CT+Images+in+COVID-19).
If this dataset is being used in your work, please use [1] as the reference. A json file is provided
which contains the training and validation splits that were used for the training. The json file can be found in the
json_files directory of the self-supervised training tutorial.

Fine-tuning Dataset: The dataset from the Beyond the Cranial Vault Challenge
[(BTCV)](https://www.synapse.org/#!Synapse:syn3193805/wiki/217789)
2015, hosted at MICCAI, was used as a fully supervised fine-tuning task on the pre-trained weights. The dataset
consists of 30 3D volumes with annotated labels of up to 13 different organs [2]. There are 3 json files provided in the
json_files directory for the dataset. They correspond to different numbers of training volumes: 6, 12 and 24.
All 3 json files have the same validation split.
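A split file of this kind can be read with the standard json module. The sketch below assumes a datalist-style layout with "training" and "validation" keys (an assumption for illustration; check the actual files in the json_files directory for the exact structure and entries):

```python
import json

# Inline stand-in for one of the files in the json_files directory
# (the key names and file names here are assumed, not copied from the repo).
split_text = """
{
  "training":   [{"image": "volume_0001.nii.gz"}, {"image": "volume_0002.nii.gz"}],
  "validation": [{"image": "volume_0003.nii.gz"}]
}
"""
split = json.loads(split_text)
train_files = split["training"]
val_files = split["validation"]
print(len(train_files), len(val_files))  # → 2 1
```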

References:

@@ -40,14 +48,15 @@ Medical Image Analysis 69 (2021): 101894.

### 2. Network Architectures

For pre-training, a modified version of ViT [1] has been used; it can be referred to
[here](https://docs.monai.io/en/latest/networks.html#vitautoenc)
from MONAI. The original ViT was modified by attaching two 3D convolutional transpose layers to achieve a
reconstruction size similar to that of the input image. The ViT is the backbone of the UNETR [2] network architecture,
which was used for the fully supervised fine-tuning tasks.

The pre-trained ViT backbone weights were loaded into UNETR, while the decoder head still relies on random
initialization for adaptability to the new downstream task. This flexibility also allows the user to adapt the ViT
backbone to their own custom network architectures.
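The backbone-only weight transfer described above can be sketched with plain dictionaries standing in for PyTorch state dicts (all key names here are illustrative; in practice you would filter the keys of `model.state_dict()` and call `load_state_dict(..., strict=False)`):

```python
# Toy "state dicts": the pre-trained ViT autoencoder and a fresh UNETR.
pretrained = {
    "vit.patch_embedding.weight": [0.1, 0.2],
    "vit.blocks.0.attn.weight": [0.3, 0.4],
    "decoder_head.weight": [9.9, 9.9],  # SSL reconstruction head; not transferred
}
unetr = {
    "vit.patch_embedding.weight": [0.0, 0.0],
    "vit.blocks.0.attn.weight": [0.0, 0.0],
    "decoder.conv.weight": [0.5, 0.6],  # stays randomly initialized
}

# Copy only the shared ViT backbone weights into the UNETR model.
for key, value in pretrained.items():
    if key.startswith("vit.") and key in unetr:
        unetr[key] = value

print(unetr["vit.blocks.0.attn.weight"])  # → [0.3, 0.4]
print(unetr["decoder.conv.weight"])       # → [0.5, 0.6]
```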

References:

@@ -59,29 +68,33 @@ arXiv preprint arXiv:2103.10504 (2021).

### 3. Self-supervised Tasks

The pre-training pipeline has two aspects to it (refer to the figure shown below). First, it uses augmentations
(top row) to mutate the data; second, it utilizes a regularized
[contrastive loss](https://docs.monai.io/en/latest/losses.html#contrastiveloss) [3] to learn feature representations
of the unlabeled data. The multiple augmentations are applied on a randomly selected 3D foreground patch from a 3D
volume. Two augmented views of the same 3D patch are generated for the contrastive loss, which functions by drawing
the two augmented views closer to each other if they are generated from the same patch; if not, it tries to
maximize their disagreement. The contrastive loss offers this functionality on a mini-batch.

![image](../figures/SSL_Overview_Figure.png)
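The pull-together/push-apart behavior described above can be illustrated with a small, self-contained version of a temperature-scaled contrastive loss (a simplified two-view sketch, not MONAI's ContrastiveLoss implementation; the embeddings and the negative set are illustrative, and only the temperature value is taken from the hyper-parameters listed later in this tutorial):

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def contrastive_pair_loss(view1, view2, negatives, temperature=0.005):
    # Agreement between the two augmented views of the same patch (positive pair)...
    pos = math.exp(cosine(view1, view2) / temperature)
    # ...versus disagreement with views drawn from other patches in the mini-batch.
    neg = sum(math.exp(cosine(view1, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))

# Views of the same patch give a much lower loss than views of different patches.
same = contrastive_pair_loss([1.0, 0.0], [1.0, 0.0], [[0.0, 1.0]])
different = contrastive_pair_loss([1.0, 0.0], [0.0, 1.0], [[1.0, 0.0]])
```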

The augmentations mutate the 3D patch in various ways; the primary task of the network is to reconstruct
the original image. The different augmentations used are classical techniques such as in-painting [1], out-painting [1]
and noise augmentation of the image by local pixel shuffling [2]. The secondary task of the network is to simultaneously
reconstruct the two augmented views as similarly to each other as possible via the regularized contrastive loss [3], as
its objective is to maximize the agreement. The term "regularized" has been used here because the contrastive loss is
adjusted by the reconstruction loss, which acts as a dynamic weight on it.
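As a concrete illustration of one of these augmentations, local pixel shuffling can be sketched on a tiny 1D signal: intensities are permuted only within small local windows, corrupting local texture while preserving global structure. The window size, 1D setting and fixed seed are illustrative choices; the tutorial applies this to 3D patches:

```python
import random

def local_pixel_shuffle(pixels, window=4, seed=0):
    rng = random.Random(seed)  # fixed seed only so the example is repeatable
    out = []
    for start in range(0, len(pixels), window):
        block = pixels[start:start + window]
        rng.shuffle(block)  # permute within the local window only
        out.extend(block)
    return out

original = list(range(12))
shuffled = local_pixel_shuffle(original)
# Every intensity survives, and each value stays inside its own window.
```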

The example image below depicts the usage of the augmentation pipeline, where two augmented views are drawn of the
same 3D patch:

![image](../figures/SSL_Different_Augviews.png)

Multiple axial slices of a 96x96x96 patch are shown before augmentation (see "Original Patch" in the above figure).
Augmented Views 1 & 2 are different augmentations generated via the transforms on the same cubic patch. The objective
of the SSL network is to reconstruct the original top-row image from the first view. The contrastive loss
is driven by maximizing agreement of the reconstructions based on the input of the two augmented views.
`matshow3d` from `monai.visualize` was used for creating this figure; a tutorial on its usage can be found
[here](https://github.com/Project-MONAI/tutorials/blob/master/modules/transform_visualization.ipynb).

References:

@@ -104,23 +117,25 @@ Batch size: 4 3D Volumes (Total of 8 as 2 samples were drawn per 3D Volume) \
Loss Function: L1 \
Contrastive Loss Temperature: 0.005
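Given the description in Section 3 that the reconstruction loss acts as a dynamic weight on the contrastive term, the combined SSL objective might look like the following hedged sketch (the actual combination in ssl_script_train.py may differ):

```python
def total_ssl_loss(l1_recon_loss, contrastive_loss):
    # The contrastive term is scaled by the current reconstruction error,
    # so its influence shrinks as reconstruction improves.
    return l1_recon_loss + l1_recon_loss * contrastive_loss

# Early in training (poor reconstruction) the contrastive term contributes more:
early = total_ssl_loss(0.5, 2.0)   # 0.5 + 1.0 = 1.5
late = total_ssl_loss(0.05, 2.0)   # 0.05 + 0.1 ≈ 0.15
```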

Training Hyper-parameters for the Fine-tuning BTCV task (all settings have been kept consistent with the prior
[UNETR 3D Segmentation tutorial](https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/unetr_btcv_segmentation_3d.ipynb)): \
Number of Steps: 30000 \
Validation Frequency: 100 steps \
Batch Size: 1 3D Volume (4 samples are drawn per 3D volume) \
Learning Rate: 1e-4 \
Loss Function: DiceCELoss

### 4. Training & Validation Curves for pre-training SSL

![image](../figures/ssl_pretrain_losses.png)

L1 error reported for training and validation when performing the SSL training. Please note that the contrastive
loss is not an L1 loss.

### 5. Results of Fine-tuning vs Random Initialization on BTCV

| Training Volumes | Validation Volumes | Random Init Dice Score | Pre-trained Dice Score | Relative Performance Improvement |
| ---------------- | ------------------ | ---------------------- | ---------------------- | -------------------------------- |
| 6 | 6 | 63.07 | 70.09 | ~11.13% |
| 12 | 6 | 76.06 | 79.55 | ~4.58% |
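The last column of the table is the Dice gain of the pre-trained model relative to the random-initialization score, which can be checked directly (the computed values match the table rows up to rounding):

```python
def relative_improvement(random_init_dice, pretrained_dice):
    # Percentage gain relative to the random-initialization Dice score.
    return 100.0 * (pretrained_dice - random_init_dice) / random_init_dice

row_6 = relative_improvement(63.07, 70.09)   # ≈ 11.13
row_12 = relative_improvement(76.06, 79.55)  # ≈ 4.59
```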
