Fix typos in self_supervised_pretraining README #492

Merged
merged 1 commit into from Dec 22, 2021

10 changes: 5 additions & 5 deletions self_supervised_pretraining/README.md
@@ -18,7 +18,7 @@ to use for fine-tuning tasks and directly skip to the second part of the tutorial
3.) Run the 'ssl_script_train.py'\
4.) Modify the paths for data_root, json_path, pre-trained_weights_path from 2.) and
logdir_path in 'ssl_finetuning_train.py'\
-5.) Run the ssl_finetuning_script.py\
+5.) Run the 'ssl_finetuning_script.py'\
6.) And that's all folks, use the model to your needs

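The paths referenced in steps 2.) and 4.) can be set as ordinary Python variables; the exact names in the scripts may differ, and the values below are placeholders only:

```python
# Placeholder values only; point these at your own data and log folders
# before running 'ssl_script_train.py' and 'ssl_finetuning_train.py'.
data_root = "/path/to/dataset_root"
json_path = "/path/to/datalist.json"
logdir_path = "/path/to/ssl_pretrain_logs"
pre_trained_weights_path = "/path/to/ssl_pretrain_logs/best_model.pt"
```
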
### 1.Data
@@ -69,8 +69,8 @@ arXiv preprint arXiv:2103.10504 (2021).
### 3. Self-supervised Tasks

The pre-training pipeline has two aspects to it (Refer figure shown below). First, it uses augmentation (top row) to
-mutate the data and the second is it utilizes to a regularized
-[constrastive loss](https://docs.monai.io/en/latest/losses.html#contrastiveloss) [3] to learn feature representations
+mutate the data and second, it utilizes regularized
+[contrastive loss](https://docs.monai.io/en/latest/losses.html#contrastiveloss) [3] to learn feature representations
of the unlabeled data. The multiple augmentations are applied on a randomly selected 3D foreground patch from a 3D
volume. Two augmented views of the same 3D patch are generated for the contrastive loss as it functions by drawing
the two augmented views closer to each other if the views are generated from the same patch, if not then it tries to
Expand All @@ -81,7 +81,7 @@ maximize the disagreement. The CL offers this functionality on a mini-batch.
The augmentations mutate the 3D patch in various ways, the primary task of the network is to reconstruct
the original image. The different augmentations used are classical techniques such as in-painting [1], out-painting [1]
and noise augmentation to the image by local pixel shuffling [2]. The secondary task of the network is to simultaneously
-reconstruct the two augmented views as similar to each other as possible via regularized contrastive loss [3] as it's
+reconstruct the two augmented views as similar to each other as possible via regularized contrastive loss [3] as its
objective is to maximize the agreement. The term regularized has been used here because contrastive loss is adjusted
by the reconstruction loss as a dynamic weight itself.

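A minimal sketch of how such a dynamically weighted objective could be assembled is shown below; all tensors are placeholders, and the exact coupling used in 'ssl_script_train.py' may differ:

```python
import torch
from monai.losses import ContrastiveLoss

# Placeholder network outputs for a batch of four 96x96x96 patches:
# reconstructions of the two augmented views plus flattened embeddings.
orig = torch.rand(4, 1, 96, 96, 96)
rec_v1, rec_v2 = torch.rand_like(orig), torch.rand_like(orig)
emb_v1, emb_v2 = torch.randn(4, 256), torch.randn(4, 256)

recon = torch.nn.L1Loss()
r_loss = recon(rec_v1, orig) + recon(rec_v2, orig)
cl_loss = ContrastiveLoss(temperature=0.05)(emb_v1, emb_v2)

# The reconstruction loss scales the contrastive term, so the weight on
# the contrastive loss shrinks as the reconstructions improve.
total_loss = r_loss + cl_loss * r_loss
```
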
Expand All @@ -90,7 +90,7 @@ The below example image depicts the usage of the augmentation pipeline where two

![image](../figures/SSL_Different_Augviews.png)

Multiple axial slice of a 96x96x96 patch are shown before the augmentation (Ref Original Patch in the above figure).
Multiple axial slices of a 96x96x96 patch are shown before the augmentation (Ref Original Patch in the above figure).
Augmented View 1 & 2 are different augmentations generated via the transforms on the same cubic patch. The objective
of the SSL network is to reconstruct the original top row image from the first view. The contrastive loss
is driven by maximizing agreement of the reconstruction based on input of the two augmented views.
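
One way to generate two such views with MONAI array transforms is sketched below; the hole counts, region sizes, and probabilities are illustrative, and the tutorial's own transform chain may differ:

```python
import torch
from monai.transforms import Compose, RandCoarseDropout, RandCoarseShuffle

augment = Compose([
    # noise via local pixel shuffling inside small random regions
    RandCoarseShuffle(holes=10, spatial_size=8, prob=0.8),
    # in-painting: blank out random inner regions
    RandCoarseDropout(holes=6, spatial_size=20, dropout_holes=True,
                      fill_value=0, prob=0.5),
    # out-painting: keep the random regions, blank everything else
    RandCoarseDropout(holes=6, spatial_size=20, dropout_holes=False,
                      fill_value=0, prob=0.5),
])

patch = torch.rand(1, 96, 96, 96)  # channel-first 3D foreground patch
view1, view2 = augment(patch), augment(patch)  # two augmented views
```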