3.) Run the 'ssl_script_train.py'\
4.) Modify the paths for data_root, json_path, pre-trained_weights_path from 2.) and
logdir_path in 'ssl_finetuning_train.py' (see the sketch after this list)\
5.) Run the 'ssl_finetuning_train.py'\
6.) And that's all folks, use the model for your needs
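
As a minimal sketch of the edits from steps 4 & 5, the changes amount to pointing four variables at your setup. All paths below are placeholders for your environment; the variable names follow step 4, written here as valid Python identifiers:

```python
# Hypothetical values inside 'ssl_finetuning_train.py': replace with your own paths.
data_root = "/path/to/your/dataset"                   # dataset folder from step 2
json_path = "/path/to/your/datalist.json"             # data split JSON from step 2
pre_trained_weights_path = "/path/to/ssl_weights.pt"  # weights from the SSL pre-training run
logdir_path = "/path/to/finetuning/logs"              # where fine-tuning logs/checkpoints go
```
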
### 1. Data

### 3. Self-supervised Tasks

The pre-training pipeline has two aspects to it (refer to the figure shown below). First, it uses augmentations (top row) to
mutate the data and second, it utilizes a regularized
[contrastive loss](https://docs.monai.io/en/latest/losses.html#contrastiveloss) [3] to learn feature representations
of the unlabeled data. The multiple augmentations are applied to a randomly selected 3D foreground patch from a 3D
volume. Two augmented views of the same 3D patch are generated for the contrastive loss, as it functions by drawing
the two augmented views closer to each other if the views are generated from the same patch; if not, then it tries to
maximize the disagreement. The contrastive loss (CL) offers this functionality on a mini-batch.
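
As an illustrative sketch of this behavior (the embedding shape and temperature below are assumptions, not the tutorial's settings), MONAI's `ContrastiveLoss` takes the flattened embeddings of the two augmented views and scores their agreement across the mini-batch:

```python
import torch
from monai.losses import ContrastiveLoss

# Hypothetical flattened embeddings for a mini-batch of 4 patches,
# one tensor per augmented view of the same underlying patches.
emb_view1 = torch.randn(4, 512)
emb_view2 = torch.randn(4, 512)

loss_fn = ContrastiveLoss(temperature=0.5)
loss = loss_fn(emb_view1, emb_view2)  # low when matching pairs agree, higher otherwise
```
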

The augmentations mutate the 3D patch in various ways, while the primary task of the network is to reconstruct
the original image. The different augmentations used are classical techniques such as in-painting [1], out-painting [1]
and noise augmentation to the image by local pixel shuffling [2]. The secondary task of the network is to simultaneously
reconstruct the two augmented views as similar to each other as possible via the regularized contrastive loss [3], as its
objective is to maximize the agreement. The term regularized has been used here because the contrastive loss is adjusted
by the reconstruction loss, which acts as a dynamic weight on it.
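
A minimal sketch of both tasks' ingredients, assuming MONAI's coarse dropout and shuffle transforms for the painting and pixel-shuffling augmentations (the hole counts, sizes, and probabilities are illustrative, and the loss coupling shown is just one way to realize the described dynamic weighting):

```python
import torch
from monai.losses import ContrastiveLoss
from monai.transforms import Compose, RandCoarseDropout, RandCoarseShuffle

# dropout_holes=True zeroes random holes (in-painting style corruption);
# dropout_holes=False would instead keep the holes and zero the rest (out-painting).
augment = Compose([
    RandCoarseDropout(holes=6, spatial_size=8, dropout_holes=True, prob=1.0),
    RandCoarseShuffle(holes=10, spatial_size=8, prob=0.8),  # local pixel shuffling
])

patch = torch.rand(1, 96, 96, 96)                # channel-first 3D foreground patch
view1, view2 = augment(patch), augment(patch)    # two different random views of it

recon_criterion = torch.nn.L1Loss()
contrastive_criterion = ContrastiveLoss(temperature=0.5)

def total_loss(reconstruction, original, emb_view1, emb_view2):
    r_loss = recon_criterion(reconstruction, original)
    c_loss = contrastive_criterion(emb_view1, emb_view2)
    # The reconstruction loss scales the contrastive term, acting as a
    # dynamic weight: the "regularized" contrastive loss described above.
    return r_loss + c_loss * r_loss
```
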
The example image below depicts the usage of the augmentation pipeline, where two augmented views are generated from the same 3D patch:

![image](../figures/SSL_Different_Augviews.png)

Multiple axial slices of a 96x96x96 patch are shown before the augmentation (Ref Original Patch in the above figure).
Augmented View 1 & 2 are different augmentations generated via the transforms on the same cubic patch. The objective
of the SSL network is to reconstruct the original top row image from the first view. The contrastive loss
is driven by maximizing the agreement between the reconstructions produced from the two augmented views.