
Commit c501cbe

add tutorial to run nnunet via runner class (#1216)
Signed-off-by: dongy <dongy@nvidia.com>

### Description

Adding tutorial to run steps in nnunetv2 via runner class. This closes Project-MONAI/MONAI#6037.

### Checks

<!--- Put an `x` in all the boxes that apply, and remove the not applicable items -->

- [ ] Avoid including large-size files in the PR.
- [ ] Clean up long text outputs from code cells in the notebook.
- [ ] For security purposes, please check the contents and remove any sensitive info such as user names and private key.
- [ ] Ensure (1) hyperlinks and markdown anchors are working (2) use relative paths for tutorial repo files (3) put figure and graphs in the `./figure` folder
- [ ] Notebook runs automatically `./runner.sh -t <path to .ipynb file>`

---------

Signed-off-by: dongy <dongy@nvidia.com>
1 parent 803db44 commit c501cbe

File tree

- nnunet/README.md
- nnunet/docs/commands.md
- nnunet/docs/faq.md
- nnunet/docs/input.yaml
- nnunet/docs/install.md

5 files changed: +232 -0 lines changed

nnunet/README.md

Lines changed: 113 additions & 0 deletions
@@ -0,0 +1,113 @@
# MONAI and nnU-Net Integration

[nnU-Net](https://github.com/MIC-DKFZ/nnUNet) is an open-source deep learning framework designed specifically for medical image segmentation. It builds upon the popular U-Net architecture and incorporates various advanced features and improvements, such as cascaded networks, novel loss functions, and pre-processing steps. nnU-Net also provides an easy-to-use interface that allows users to train and evaluate their segmentation models quickly. It has been widely used in medical imaging applications, including brain, liver, and prostate segmentation, among others, and has consistently achieved state-of-the-art performance on benchmark datasets and challenges, demonstrating its effectiveness and potential for advancing medical image analysis.

nnU-Net and MONAI are two powerful open-source frameworks that offer advanced tools and algorithms for medical image analysis. Both frameworks have gained significant popularity in the research community, and many researchers have been using them to develop new and innovative medical imaging applications.

nnU-Net provides a standardized pipeline for training and evaluating neural networks for medical image segmentation tasks. MONAI, on the other hand, provides a comprehensive set of tools for medical image analysis, including pre-processing, data augmentation, and deep learning models. It is also built on top of PyTorch and offers a wide range of pre-trained models, as well as tools for model training and evaluation. The integration between nnU-Net and MONAI can offer several benefits to researchers in the medical imaging field: by combining the strengths of both frameworks, researchers can take advantage of the standardized pipeline provided by nnU-Net and the comprehensive set of tools provided by MONAI.

Overall, the integration between nnU-Net and MONAI can offer significant benefits to researchers in the medical imaging field. By combining the strengths of both frameworks, researchers can accelerate their research and develop new and innovative solutions to complex medical imaging challenges.

## What's New in nnU-Net V2

nnU-Net has recently released a newer version, nnU-Net V2. The main changes are as follows:
- Refactored repository: nnU-Net V2 has undergone significant changes in the repository structure, making it easier to navigate and understand. The codebase has been modularized, and the documentation has been improved, allowing for easier integration with other tools and frameworks.
- New features: nnU-Net V2 has introduced several new [features](https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/changelog.md), including:
  - region-based formulation with sigmoid activation;
  - cross-platform support;
  - multi-GPU training support.

Overall, nnU-Net V2 has introduced significant improvements and new features, making it a powerful and flexible deep learning framework for medical image segmentation. With its easy-to-use interface, modularized codebase, and advanced features, nnU-Net V2 is poised to advance the field of medical image analysis and improve patient outcomes.

## How does the integration work?

As part of the integration, we have introduced a new class called `nnUNetV2Runner`, which utilizes the Python APIs available in the official nnU-Net repository. The `nnUNetV2Runner` provides several key features that are useful for general users of MONAI:
- The new class offers high-level Python APIs to facilitate most of the components in nnU-Net, such as model training, validation, and ensembling (a minimal usage sketch is shown after this list);
- Users are only required to provide the minimum input, as specified in most of the MONAI tutorials for 3D medical image segmentation. The new class automatically handles data conversion to prepare data that meets the requirements of nnU-Net, which largely saves the time users spend preparing the datasets;
- Additionally, we have enabled users with more GPU resources to automatically allocate model training jobs in parallel. As nnU-Net requires the training of 20 segmentation models by default, distributing model training over larger resources can significantly improve overall efficiency. For instance, users with 8 GPUs can increase model training speed by 6x to 8x automatically using the new class.
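For readers who prefer to stay in Python rather than the command line, the same pipeline can be driven through the class directly. The snippet below is a minimal sketch, assuming `nnUNetV2Runner` is importable from `monai.apps.nnunet` and accepts the same `input_config` argument used by the command-line examples later in this tutorial.

```python
# Minimal sketch: driving the nnU-Net V2 pipeline from Python.
# Assumes monai.apps.nnunet exposes nnUNetV2Runner with an `input_config`
# argument, matching the CLI examples shown in this tutorial.
from monai.apps.nnunet import nnUNetV2Runner

runner = nnUNetV2Runner(input_config="./input.yaml")

# One-stop pipeline: dataset conversion, planning/pre-processing,
# training of all folds and configurations, and final ensembling.
runner.run()
```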
## Steps

### 1. nnU-Net V2 installation

The installation instructions are described [here](docs/install.md).

### 2. Run with Minimal Input using ```nnUNetV2Runner```

Users need to provide a data list (".json" file) for the new task and the data root. In general, a valid data list needs to follow the format of the ones in the [Medical Segmentation Decathlon](https://drive.google.com/drive/folders/1HqEgzS8BV2c7xYNrZdEAnrHk7osJJ--2). After creating the data list, the user can create a simple "input.yaml" file (shown below) as the minimum input for **nnUNetV2Runner**.

```yaml
modality: CT
datalist: "./msd_task09_spleen_folds.json"
dataroot: "/workspace/data/nnunet_test/test09"
```
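For reference, the data list mentioned above follows the fold-annotated layout used for Medical Segmentation Decathlon data in the MONAI tutorials. The snippet below is an illustrative sketch only: the keys ("training"/"testing", "image", "label", "fold"), file names, and fold assignments are assumptions based on that convention, so please check the linked data for the authoritative format.

```python
# Illustrative sketch of an MSD-style data list with fold assignments.
# Keys, file names, and fold values are assumptions based on the MONAI
# tutorial convention; image/label paths are relative to `dataroot`.
import json

datalist = {
    "training": [
        {"fold": 0, "image": "imagesTr/spleen_10.nii.gz", "label": "labelsTr/spleen_10.nii.gz"},
        {"fold": 1, "image": "imagesTr/spleen_12.nii.gz", "label": "labelsTr/spleen_12.nii.gz"},
        # ... one entry per training case, typically spread over 5 folds
    ],
    "testing": [
        {"image": "imagesTs/spleen_1.nii.gz"},
        # ... one entry per test case
    ],
}

with open("msd_task09_spleen_folds.json", "w") as f:
    json.dump(datalist, f, indent=4)
```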
Users can also set the values of directory variables as options in "input.yaml" if any directories need to be specified.

```yaml
nnunet_preprocessed: "./work_dir/nnUNet_preprocessed" # optional
nnunet_raw: "./work_dir/nnUNet_raw_data_base" # optional
nnunet_results: "./work_dir/nnUNet_trained_models" # optional
```

Once the minimum input information is provided, users can use the following command to start the entire nnU-Net pipeline automatically (from model training to model ensembling).

```bash
python -m monai.apps.nnunet nnUNetV2Runner run --input_config='./input.yaml'
```

### 3. Run nnU-Net modules using ```nnUNetV2Runner```

```nnUNetV2Runner``` offers a one-stop API to execute the pipeline, as well as APIs to access the underlying components of nnU-Net V2. Below are the commands for the different components.

```bash
## [component] convert dataset
python -m monai.apps.nnunet nnUNetV2Runner convert_dataset --input_config "./input_new.yaml"

## [component] convert MSD datasets
python -m monai.apps.nnunet nnUNetV2Runner convert_msd_dataset --input_config "./input.yaml" --data_dir "/workspace/data/Task05_Prostate"

## [component] experiment planning and data pre-processing
python -m monai.apps.nnunet nnUNetV2Runner plan_and_process --input_config "./input.yaml"

## [component] single-gpu training for all 20 models
python -m monai.apps.nnunet nnUNetV2Runner train --input_config "./input.yaml"

## [component] single-gpu training for a single model
python -m monai.apps.nnunet nnUNetV2Runner train_single_model --input_config "./input.yaml" \
    --config "3d_fullres" \
    --fold 0

## [component] multi-gpu training for all 20 models
export CUDA_VISIBLE_DEVICES=0,1 # optional
python -m monai.apps.nnunet nnUNetV2Runner train --input_config "./input.yaml" --num_gpus 2

## [component] multi-gpu training for a single model
export CUDA_VISIBLE_DEVICES=0,1 # optional
python -m monai.apps.nnunet nnUNetV2Runner train_single_model --input_config "./input.yaml" \
    --config "3d_fullres" \
    --fold 0 \
    --num_gpus 2

## [component] find best configuration
python -m monai.apps.nnunet nnUNetV2Runner find_best_configuration --input_config "./input.yaml"

## [component] predict, ensemble, and postprocessing
python -m monai.apps.nnunet nnUNetV2Runner predict_ensemble_postprocessing --input_config "./input.yaml"

## [component] predict only
python -m monai.apps.nnunet nnUNetV2Runner predict_ensemble_postprocessing --input_config "./input.yaml" \
    --run_ensemble false --run_postprocessing false

## [component] ensemble only
python -m monai.apps.nnunet nnUNetV2Runner predict_ensemble_postprocessing --input_config "./input.yaml" \
    --run_predict false --run_postprocessing false

## [component] post-processing only
python -m monai.apps.nnunet nnUNetV2Runner predict_ensemble_postprocessing --input_config "./input.yaml" \
    --run_predict false --run_ensemble false
```
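The same components are also reachable as methods on the runner object. The sketch below assumes that, through the Python Fire mapping used by the CLI above, each command corresponds to a method of the same name with matching keyword arguments; it is an illustration rather than an exhaustive reference.

```python
# Sketch: calling individual pipeline components from Python.
# Assumes the CLI commands above map one-to-one onto nnUNetV2Runner
# methods with the same names and keyword arguments.
from monai.apps.nnunet import nnUNetV2Runner

runner = nnUNetV2Runner(input_config="./input.yaml")

runner.convert_dataset()                                  # prepare data in nnU-Net format
runner.plan_and_process()                                 # experiment planning + pre-processing
runner.train_single_model(config="3d_fullres", fold=0)    # train one model
runner.find_best_configuration()                          # pick the best configuration
runner.predict_ensemble_postprocessing()                  # predict, ensemble, post-process
```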
## FAQ

The common questions and answers can be found [here](docs/faq.md).

nnunet/docs/commands.md

Lines changed: 63 additions & 0 deletions
@@ -0,0 +1,63 @@
## Pipeline run
```bash
## [pipeline] option 1: one-click solution
python -m monai.apps.nnunet nnUNetV2Runner run --input "./input.yaml"

## [pipeline] option 2: one-click solution with dict input
DIR_BASE="/home/dongy/Projects/MONAI/nnunet/nnunet_runner/data"
DIR_RAW="${DIR_BASE}/nnUNet_raw_data_base"
DIR_PREPROCESSED="${DIR_BASE}/nnUNet_preprocessed"
DIR_RESULTS="${DIR_BASE}/nnUNet_trained_models"

python -m monai.apps.nnunet nnUNetV2Runner run --input "{'dataset_name_or_id': 996, 'nnunet_raw': '${DIR_RAW}', 'nnunet_preprocessed': '${DIR_PREPROCESSED}', 'nnunet_results': '${DIR_RESULTS}'}"
```

## Component run
```bash
## [component] convert dataset
python -m monai.apps.nnunet nnUNetV2Runner convert_dataset --input "./input.yaml"

## [component] convert MSD datasets
python -m monai.apps.nnunet nnUNetV2Runner convert_msd_dataset --input "./input.yaml" --data_dir "/home/dongy/Data/MSD/NGC/Task05_Prostate"

## [component] experiment planning and data pre-processing
python -m monai.apps.nnunet nnUNetV2Runner plan_and_process --input "./input.yaml"

## [component] single-gpu training for all 20 models
python -m monai.apps.nnunet nnUNetV2Runner train --input "./input.yaml"

## [component] single-gpu training for a single model
python -m monai.apps.nnunet nnUNetV2Runner train_single_model --input "./input.yaml" \
    --config "3d_fullres" \
    --fold 0 \
    --trainer_class_name "nnUNetTrainer_5epochs" \
    --export_validation_probabilities true

## [component] multi-gpu training for all 20 models
export CUDA_VISIBLE_DEVICES=0,1 # optional
python -m monai.apps.nnunet nnUNetV2Runner train --input "./input.yaml" --num_gpus 2

## [component] multi-gpu training for a single model
export CUDA_VISIBLE_DEVICES=0,1 # optional
python -m monai.apps.nnunet nnUNetV2Runner train_single_model --input "./input.yaml" \
    --config "3d_fullres" \
    --fold 0 \
    --trainer_class_name "nnUNetTrainer_5epochs" \
    --export_validation_probabilities true \
    --num_gpus 2

## [component] find best configuration
python -m monai.apps.nnunet nnUNetV2Runner find_best_configuration --input "./input.yaml"

## [component] ensemble
python -m monai.apps.nnunet nnUNetV2Runner predict_ensemble --input "./input.yaml"
```

nnunet/docs/faq.md

Lines changed: 15 additions & 0 deletions
@@ -0,0 +1,15 @@
# FAQ

## Can I use a dictionary input instead of an "input.yaml" file?

Yes, ```nnUNetV2Runner``` relies on the [Google Fire Python library](https://github.com/google/python-fire), which supports dictionary-based input. The following is a concrete example.

```bash
## [pipeline] one-click solution with dict input
MODALITY="CT"
DATALIST="./msd_task09_spleen_folds.json"
DATAROOT="/workspace/data/nnunet_test/test09"

python -m monai.apps.nnunet nnUNetV2Runner run --input "{'modality': '${MODALITY}', 'datalist': '${DATALIST}', 'dataroot': '${DATAROOT}'}"
```

nnunet/docs/input.yaml

Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
modality:
- T2
- ADC
datalist: "/home/Data/MSD/test/dataset.json"
dataroot: "/home/Data/MSD/test/Task05_Prostate"

nnunet/docs/install.md

Lines changed: 36 additions & 0 deletions
@@ -0,0 +1,36 @@
# Installation

Users need to install both MONAI and nnU-Net to utilize the nnU-Net runner.

## MONAI

Users can follow this [link](https://docs.monai.io/en/stable/installation.html#option-1-as-a-part-of-your-system-wide-module) to install the dev branch of MONAI.
The following commands show an example of installing MONAI and the necessary dependencies.

```bash
# install latest monai (pip install monai)
pip install git+https://github.com/Project-MONAI/MONAI#egg=monai

# install dependencies
pip install fire nibabel
pip install "scikit-image>=0.19.0"
```

## nnU-Net (V2)

To run components of nnU-Net V2, users need to properly install PyTorch on their own or adopt the [PyTorch Docker containers](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch) maintained by NVIDIA.
The other dependent libraries can be installed by running the following commands.

```bash
# install dependencies
pip install --upgrade git+https://github.com/MIC-DKFZ/acvl_utils.git
pip install --upgrade git+https://github.com/MIC-DKFZ/dynamic-network-architectures.git

# install nnunet
pip install nnunetv2

# install hiddenlayer (optional)
pip install --upgrade git+https://github.com/julien-blanchon/hiddenlayer.git
```

The official instructions can be found [here](https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/installation_instructions.md).
