
Commit b9126e4

Fix typos in self_supervised_pretraining/README.md (#492)
1 parent: cbd769c

File tree (1 file changed: +5 -5 lines)


self_supervised_pretraining/README.md

Lines changed: 5 additions & 5 deletions
@@ -18,7 +18,7 @@ to use for fine-tuning tasks and directly skip to the second part of the tutoria
 3.) Run the 'ssl_script_train.py'\
 4.) Modify the paths for data_root, json_path, pre-trained_weights_path from 2.) and
 logdir_path in 'ssl_finetuning_train.py'\
-5.) Run the ssl_finetuning_script.py\
+5.) Run the 'ssl_finetuning_script.py'\
 6.) And that's all folks, use the model to your needs

 ### 1.Data
### 1.Data
@@ -69,8 +69,8 @@ arXiv preprint arXiv:2103.10504 (2021).
 ### 3. Self-supervised Tasks

 The pre-training pipeline has two aspects to it (Refer figure shown below). First, it uses augmentation (top row) to
-mutate the data and the second is it utilizes to a regularized
-[constrastive loss](https://docs.monai.io/en/latest/losses.html#contrastiveloss) [3] to learn feature representations
+mutate the data and second, it utilizes regularized
+[contrastive loss](https://docs.monai.io/en/latest/losses.html#contrastiveloss) [3] to learn feature representations
 of the unlabeled data. The multiple augmentations are applied on a randomly selected 3D foreground patch from a 3D
 volume. Two augmented views of the same 3D patch are generated for the contrastive loss as it functions by drawing
 the two augmented views closer to each other if the views are generated from the same patch, if not then it tries to
@@ -81,7 +81,7 @@ maximize the disagreement. The CL offers this functionality on a mini-batch.
 The augmentations mutate the 3D patch in various ways, the primary task of the network is to reconstruct
 the original image. The different augmentations used are classical techniques such as in-painting [1], out-painting [1]
 and noise augmentation to the image by local pixel shuffling [2]. The secondary task of the network is to simultaneously
-reconstruct the two augmented views as similar to each other as possible via regularized contrastive loss [3] as it's
+reconstruct the two augmented views as similar to each other as possible via regularized contrastive loss [3] as its
 objective is to maximize the agreement. The term regularized has been used here because contrastive loss is adjusted
 by the reconstruction loss as a dynamic weight itself.
@@ -90,7 +90,7 @@ The below example image depicts the usage of the augmentation pipeline where two

 ![image](../figures/SSL_Different_Augviews.png)

-Multiple axial slice of a 96x96x96 patch are shown before the augmentation (Ref Original Patch in the above figure).
+Multiple axial slices of a 96x96x96 patch are shown before the augmentation (Ref Original Patch in the above figure).
 Augmented View 1 & 2 are different augmentations generated via the transforms on the same cubic patch. The objective
 of the SSL network is to reconstruct the original top row image from the first view. The contrastive loss
 is driven by maximizing agreement of the reconstruction based on input of the two augmented views.
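One of the augmentations named in the diff, local pixel shuffling [2], can be sketched as follows. This is a simplified 1-D illustration under stated assumptions: the window size is arbitrary and the README's real patches are 3-D (96x96x96), so this is not the repository's implementation.

```python
import random

def local_pixel_shuffle(image, window=2, seed=None):
    """Shuffle pixels only within small local windows.

    Illustrative sketch of the 'local pixel shuffling' augmentation [2]:
    the global layout of the image survives while local texture is
    destroyed, giving the reconstruction task something to undo.
    The 1-D layout and window size are simplifying assumptions.
    """
    rng = random.Random(seed)
    out = list(image)
    for start in range(0, len(out), window):
        # Shuffle each window in place; pixels never leave their window.
        block = out[start:start + window]
        rng.shuffle(block)
        out[start:start + window] = block
    return out
```

Because pixels never cross window boundaries, every window keeps exactly its original pixel values, only reordered.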
