Commit c5ce6c4

Add Pancreas Seg app, multi-model app, and Jupyter Notebook tutorial (#315)
* Add the WIP multi-model example
* Quiet the style checking error
* Solved the issue with Pancreas Seg bundle by upping torch version. Now the whole app works with both Spleen and Pancreas bundles
* Update app and change readme file type
* Formatting change
* Add more comments and to trigger a Github build for testing formatting check.
* Reset to 0.5 main rebase, and re-add new Pancreas and changes
* Correct typo in comments
* Change the name of model in readme
* Add Jupyter notebook for multi_model tutorial
* Remove commented out code line

Signed-off-by: mmelqin <mingmelvinq@nvidia.com>
Signed-off-by: M Q <mingmelvinq@nvidia.com>
1 parent 60134d1 commit c5ce6c4

File tree

12 files changed: +2658 -3 lines changed

docs/source/getting_started/tutorials/index.md

Lines changed: 1 addition & 0 deletions

@@ -9,4 +9,5 @@ mednist_app
 monai_bundle_app
 segmentation_app
 segmentation_clara-viz_app
+multi_model_app
 ```
docs/source/getting_started/tutorials/multi_model_app.md

Lines changed: 71 additions & 0 deletions

# Creating a Multi-Model/Multi-AI Application

This tutorial shows how to create an inference application with multiple models, focusing on model file organization, inferring with a named model in the application, and packaging.

The models used in this example are trained with MONAI and are packaged in the [MONAI Bundle](https://docs.monai.io/en/latest/bundle_intro.html) format.

## Setup

```bash
# Create a virtual environment with Python 3.8.
# Skip if you are already in a virtual environment.
# (JupyterLab dropped its support for Python 3.6 since 2021-12-23.
# See https://github.com/jupyterlab/jupyterlab/pull/11740)
conda create -n monai python=3.8 pytorch torchvision jupyterlab cudatoolkit=11.1 -c pytorch -c conda-forge
conda activate monai

# Launch JupyterLab if you want to work in a Jupyter Notebook
jupyter-lab
```

## Executing from Jupyter Notebook

```{toctree}
:maxdepth: 4

../../notebooks/tutorials/07_multi_model_app.ipynb
```

```{raw} html
<p style="text-align: center;">
    <a class="sphinx-bs btn text-wrap btn-outline-primary col-md-6 reference external" href="../../_static/notebooks/tutorials/07_multi_model_app.ipynb">
        <span>Download 07_multi_model_app.ipynb</span>
    </a>
</p>
```

## Executing from Shell

```bash
# Clone the GitHub project (the latest version of the main branch only)
git clone --branch main --depth 1 https://github.com/Project-MONAI/monai-deploy-app-sdk.git

cd monai-deploy-app-sdk

# Install the monai-deploy-app-sdk package
pip install --upgrade monai-deploy-app-sdk

# Download the zip file containing both the models and test data
pip install gdown
gdown https://drive.google.com/uc?id=1llJ4NGNTjY187RLX4MtlmHYhfGxBNWmd

# After downloading it using gdown, unzip the zip file saved by gdown
unzip -o ai_multi_model_bundle_data.zip

# Install the packages needed by the app; note that numpy-stl and trimesh are only
# needed if the application uses the STL Conversion Operator
pip install monai torch pydicom highdicom SimpleITK Pillow nibabel scikit-image numpy-stl trimesh

# Run the app locally, either directly or using the MONAI Deploy CLI
python examples/apps/ai_multi_ai_app/app.py -i dcm/ -o output -m multi_models
# or alternatively,
monai-deploy exec examples/apps/ai_multi_ai_app/app.py -i dcm/ -o output -m multi_models

# Package the app (creating the MAP Docker image), using the `-l DEBUG` option to see progress.
# This assumes that NVIDIA Docker is installed on the local machine.
# Please see https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker to install nvidia-docker2.
monai-deploy package -b nvcr.io/nvidia/pytorch:22.08-py3 examples/apps/ai_multi_ai_app --tag multi_model_app:latest --model multi_models -l DEBUG

# Run the app with the Docker image and local input folder
monai-deploy run multi_model_app:latest dcm/ output
```
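After a local run, the DICOM Seg instances land in the `output` folder, one `.dcm` file per model, each named after its SOP Instance UID. Below is a minimal sketch for inspecting them, assuming the run above completed and using `pydicom`, which the pip command above already installs; this check is illustrative and not part of the commit.

```python
# Illustrative check (not part of the commit): list the DICOM Seg objects written
# by the app and show the custom SeriesDescription tags it sets, e.g.
# "AI Spleen Seg for research use only. Not for clinical use."
from pathlib import Path

from pydicom import dcmread  # pydicom is installed by the pip command above

for dcm_path in sorted(Path("output").glob("*.dcm")):
    ds = dcmread(dcm_path)
    # Modality should be "SEG" for the segmentation objects written by the app.
    print(dcm_path.name, "|", ds.Modality, "|", getattr(ds, "SeriesDescription", "n/a"))
```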
Lines changed: 7 additions & 0 deletions

import os
import sys

_current_dir = os.path.abspath(os.path.dirname(__file__))
if sys.path and os.path.abspath(sys.path[0]) != _current_dir:
    sys.path.insert(0, _current_dir)
del _current_dir
Lines changed: 4 additions & 0 deletions

from app import App

if __name__ == "__main__":
    App(do_run=True)

examples/apps/ai_multi_ai_app/app.py

Lines changed: 223 additions & 0 deletions
# Copyright 2021-2022 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging

# Required for setting SegmentDescription attributes. Direct import as this is not part of App SDK package.
from pydicom.sr.codedict import codes

import monai.deploy.core as md
from monai.deploy.core import Application, resource
from monai.deploy.core.domain import Image
from monai.deploy.core.io_type import IOType
from monai.deploy.operators.dicom_data_loader_operator import DICOMDataLoaderOperator
from monai.deploy.operators.dicom_seg_writer_operator import DICOMSegmentationWriterOperator, SegmentDescription
from monai.deploy.operators.dicom_series_selector_operator import DICOMSeriesSelectorOperator
from monai.deploy.operators.dicom_series_to_volume_operator import DICOMSeriesToVolumeOperator
from monai.deploy.operators.monai_bundle_inference_operator import (
    BundleConfigNames,
    IOMapping,
    MonaiBundleInferenceOperator,
)


@resource(cpu=1, gpu=1, memory="7Gi")
# Enforcing torch>=1.12.0 because one of the Bundles/TorchScripts, Pancreas CT Seg, was created
# with this version, and would fail to jit.load with a lower version of torch.
# The Bundle Inference Operator as of now only requires torch>=1.10.2, and does not yet dynamically
# parse the MONAI Bundle to get the required pip packages or versions on initialization, hence it does not set
# its own @env decorator accordingly when the app is being packaged into a MONAI Application Package.
@md.env(pip_packages=["torch>=1.12.0"])
# pip_packages can be a string that is a path (str) to a requirements.txt file or a list of packages.
# The monai pkg is not required by this class, instead by the included operators.
class App(Application):
    """This example demonstrates how to create a multi-model/multi-AI application.

    The important steps are:
        1. Place the model TorchScripts in a defined folder structure, see below for details
        2. Pass the model name to the inference operator instance in the app
        3. Connect the input to and output from the inference operators, as required by the app

    Required Model Folder Structure:
        1. The model TorchScripts, be they MONAI Bundle compliant or not, must be placed in
           a parent folder, whose path is used as the path to the model(s) on app execution
        2. Each TorchScript file needs to be in a sub-folder, whose name is the model name

    An example is shown below, where the `parent_folder` name can be of the app's own choosing, and
    the sub-folder names become model names, `pancreas_ct_dints` and `spleen_ct`, respectively.

        <parent_folder>
        ├── pancreas_ct_dints
        │   └── model.ts
        └── spleen_ct
            └── model.ts

    Note:
        1. The TorchScript files of MONAI Bundles can be downloaded from the MONAI Model Zoo, at
           https://github.com/Project-MONAI/model-zoo/tree/dev/models
        2. The input DICOM instances are from a DICOM Series of a CT Abdomen study, similar to the ones
           used in the Spleen Segmentation example
        3. This example is purely for technical demonstration, not for clinical use
    """

    def __init__(self, *args, **kwargs):
        """Creates an application instance."""
        self._logger = logging.getLogger("{}.{}".format(__name__, type(self).__name__))
        super().__init__(*args, **kwargs)

    def run(self, *args, **kwargs):
        # This method calls the base class to run. Can be omitted if simply calling through.
        self._logger.info(f"Begin {self.run.__name__}")
        super().run(*args, **kwargs)
        self._logger.info(f"End {self.run.__name__}")

    def compose(self):
        """Creates the app-specific operators and chains them up in the processing DAG."""

        logging.info(f"Begin {self.compose.__name__}")

        # Create the custom operator(s) as well as SDK built-in operator(s).
        study_loader_op = DICOMDataLoaderOperator()
        series_selector_op = DICOMSeriesSelectorOperator(Sample_Rules_Text)
        series_to_vol_op = DICOMSeriesToVolumeOperator()

        # Create the inference operator that supports MONAI Bundle and automates the inference.
        # The IOMapping labels match the input and prediction keys in the pre and post processing.
        # The model_name is optional when the app has only one model.
        # The bundle_path argument can optionally be set to an accessible bundle file path in the dev
        # environment, so when the app is packaged into a MAP, the operator can complete the bundle parsing
        # during init to provide the optional packages info, parsed from the bundle, to the packager
        # for it to install the packages in the MAP docker image.
        # Setting output IOType to DISK works only for leaf operators, which is not the case in this example.
        # When multiple models/bundles are supported, create an inference operator for each.
        #
        # Pertinent MONAI Bundles:
        # https://github.com/Project-MONAI/model-zoo/tree/dev/models/spleen_ct_segmentation, v0.3.2
        # https://github.com/Project-MONAI/model-zoo/tree/dev/models/pancreas_ct_dints_segmentation, v0.3

        config_names = BundleConfigNames(config_names=["inference"])  # Same as the default

        # This is the inference operator for the spleen_ct bundle. Note the model name.
        bundle_spleen_seg_op = MonaiBundleInferenceOperator(
            input_mapping=[IOMapping("image", Image, IOType.IN_MEMORY)],
            output_mapping=[IOMapping("pred", Image, IOType.IN_MEMORY)],
            bundle_config_names=config_names,
            model_name="spleen_ct",
        )

        # This is the inference operator for the pancreas_ct_dints bundle. Note the model name.
        bundle_pancreas_seg_op = MonaiBundleInferenceOperator(
            input_mapping=[IOMapping("image", Image, IOType.IN_MEMORY)],
            output_mapping=[IOMapping("pred", Image, IOType.IN_MEMORY)],
            model_name="pancreas_ct_dints",
        )

        # Create the DICOM Seg writers, providing the required segment description for each segment with
        # the actual algorithm and the pertinent organ/tissue. The segment_label, algorithm_name,
        # and algorithm_version are of DICOM VR LO type, limited to 64 chars.
        # https://dicom.nema.org/medical/dicom/current/output/chtml/part05/sect_6.2.html
        #
        # NOTE: Each generated DICOM Seg will be a dcm file with the name based on the SOP instance UID.

        # Description for the Spleen seg, and the seg writer obj
        seg_descriptions_spleen = [
            SegmentDescription(
                segment_label="Spleen",
                segmented_property_category=codes.SCT.Organ,
                segmented_property_type=codes.SCT.Spleen,
                algorithm_name="volumetric (3D) segmentation of the spleen from CT image",
                algorithm_family=codes.DCM.ArtificialIntelligence,
                algorithm_version="0.3.2",
            )
        ]

        custom_tags_spleen = {"SeriesDescription": "AI Spleen Seg for research use only. Not for clinical use."}
        dicom_seg_writer_spleen = DICOMSegmentationWriterOperator(
            segment_descriptions=seg_descriptions_spleen, custom_tags=custom_tags_spleen
        )

        # Description for the Pancreas seg, and the seg writer obj
        seg_descriptions_pancreas = [
            SegmentDescription(
                segment_label="Pancreas",
                segmented_property_category=codes.SCT.Organ,
                segmented_property_type=codes.SCT.Pancreas,
                algorithm_name="volumetric (3D) segmentation of the pancreas from CT image",
                algorithm_family=codes.DCM.ArtificialIntelligence,
                algorithm_version="0.3.0",
            )
        ]
        custom_tags_pancreas = {"SeriesDescription": "AI Pancreas Seg for research use only. Not for clinical use."}

        dicom_seg_writer_pancreas = DICOMSegmentationWriterOperator(
            segment_descriptions=seg_descriptions_pancreas, custom_tags=custom_tags_pancreas
        )

        # NOTE: Sharp-eyed readers can already see that the above instantiations can be simply parameterized.
        # Very true, but leaving them as is for easy reading. In fact the whole app can be parameterized for general use.

        # Create the processing pipeline, by specifying the upstream and downstream operators, and
        # ensuring the output from the former matches the input of the latter, in both name and type.
        self.add_flow(study_loader_op, series_selector_op, {"dicom_study_list": "dicom_study_list"})
        self.add_flow(
            series_selector_op, series_to_vol_op, {"study_selected_series_list": "study_selected_series_list"}
        )

        # Feed the input image to all inference operators
        self.add_flow(series_to_vol_op, bundle_spleen_seg_op, {"image": "image"})
        # The Pancreas CT Seg bundle requires PyTorch 1.12.0 to avoid failure to load.
        self.add_flow(series_to_vol_op, bundle_pancreas_seg_op, {"image": "image"})

        # Create the DICOM Seg for the spleen inference output.
        # Note below that the dicom_seg_writer requires two inputs, each coming from an upstream operator.
        self.add_flow(
            series_selector_op, dicom_seg_writer_spleen, {"study_selected_series_list": "study_selected_series_list"}
        )
        self.add_flow(bundle_spleen_seg_op, dicom_seg_writer_spleen, {"pred": "seg_image"})

        # Create the DICOM Seg for the pancreas inference output.
        # Note below that the dicom_seg_writer requires two inputs, each coming from an upstream operator.
        self.add_flow(
            series_selector_op, dicom_seg_writer_pancreas, {"study_selected_series_list": "study_selected_series_list"}
        )
        self.add_flow(bundle_pancreas_seg_op, dicom_seg_writer_pancreas, {"pred": "seg_image"})

        logging.info(f"End {self.compose.__name__}")


# This is a sample series selection rule in JSON, simply selecting CT series.
# If the study has more than 1 CT series, then all of them will be selected.
# Please see more detail in DICOMSeriesSelectorOperator.
Sample_Rules_Text = """
{
    "selections": [
        {
            "name": "CT Series",
            "conditions": {
                "StudyDescription": "(.*?)",
                "Modality": "(?i)CT",
                "SeriesDescription": "(.*?)"
            }
        }
    ]
}
"""

if __name__ == "__main__":
    # Creates the app and tests it standalone. When running in this mode, please note the following:
    #     -m <model folder>, for the path to the parent folder of the model(s)
    #     -i <DICOM folder>, for the input DICOM CT series folder
    #     -o <output folder>, for the output folder, default $PWD/output
    # e.g.
    #     monai-deploy exec app.py -i input -m multi_models
    #
    logging.basicConfig(level=logging.DEBUG)
    app_instance = App(do_run=True)
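The NOTE inside `compose()` points out that the per-model construction could be parameterized. Below is a minimal sketch of that idea, using only classes already imported by `app.py`; the helper name `make_model_branch` and its parameter list are illustrative assumptions, not part of the commit.

```python
# Hypothetical helper (not part of the commit): builds the inference operator and the
# matching DICOM Seg writer for one named model, so a compose() method could loop over
# a small parameter table instead of repeating the construction per organ.
from pydicom.sr.codedict import codes

from monai.deploy.core.domain import Image
from monai.deploy.core.io_type import IOType
from monai.deploy.operators.dicom_seg_writer_operator import DICOMSegmentationWriterOperator, SegmentDescription
from monai.deploy.operators.monai_bundle_inference_operator import IOMapping, MonaiBundleInferenceOperator


def make_model_branch(model_name, organ_label, organ_code, algorithm_version):
    """Return (inference_op, seg_writer_op) for one named model/organ."""
    infer_op = MonaiBundleInferenceOperator(
        input_mapping=[IOMapping("image", Image, IOType.IN_MEMORY)],
        output_mapping=[IOMapping("pred", Image, IOType.IN_MEMORY)],
        model_name=model_name,
    )
    seg_writer = DICOMSegmentationWriterOperator(
        segment_descriptions=[
            SegmentDescription(
                segment_label=organ_label,
                segmented_property_category=codes.SCT.Organ,
                segmented_property_type=organ_code,
                algorithm_name=f"volumetric (3D) segmentation of the {organ_label.lower()} from CT image",
                algorithm_family=codes.DCM.ArtificialIntelligence,
                algorithm_version=algorithm_version,
            )
        ],
        custom_tags={"SeriesDescription": f"AI {organ_label} Seg for research use only. Not for clinical use."},
    )
    return infer_op, seg_writer


# Example usage inside a compose() method:
#     for name, label, code, ver in [
#         ("spleen_ct", "Spleen", codes.SCT.Spleen, "0.3.2"),
#         ("pancreas_ct_dints", "Pancreas", codes.SCT.Pancreas, "0.3.0"),
#     ]:
#         infer_op, seg_writer = make_model_branch(name, label, code, ver)
#         ...  # add_flow wiring as in App.compose() above
```

With such a helper, `compose()` would loop over a short table of (model name, organ, version) entries and wire each pair with the same `add_flow` calls shown above.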
Lines changed: 26 additions & 0 deletions

# About the Multi-Model/Multi-AI Example

This example demonstrates how to create a multi-model/multi-AI application.

## The important steps
- Place the model TorchScripts in a defined folder structure, see below for details
- Pass the model name to the inference operator instance in the app
- Connect the input to and output from the inference operators, as required by the app

## Required model folder structure
- The model TorchScripts, be they MONAI Bundle compliant or not, must be placed in a parent folder, whose path is used as the path to the model(s) on app execution
- Each TorchScript file needs to be in a sub-folder, whose name is the model name

An example is shown below, where the `parent_folder` name can be of the app's own choosing, and the sub-folder names become model names, `pancreas_ct_dints` and `spleen_ct`, respectively.
```
<parent_folder>
├── pancreas_ct_dints
│   └── model.ts
└── spleen_ct
    └── model.ts
```

## Note
- The TorchScript files of MONAI Bundles can be downloaded from the MONAI Model Zoo, at https://github.com/Project-MONAI/model-zoo/tree/dev/models
- The input DICOM instances are from a DICOM Series of a CT Abdomen study, similar to the ones used in the Spleen Segmentation example
- This example is purely for technical demonstration, not for clinical use.
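Since the sub-folder names are the model names the app refers to, a quick layout check can catch mismatches before packaging. Below is a minimal sketch, assuming the layout above and the two `model_name` values used in `app.py`; the helper itself is hypothetical and not part of the SDK or this commit.

```python
# Hypothetical layout check: each model_name used by the app must have a matching
# sub-folder containing a TorchScript file under the parent model folder.
from pathlib import Path

# The model_name values used by the inference operators in app.py.
APP_MODEL_NAMES = ("spleen_ct", "pancreas_ct_dints")


def check_model_layout(parent_folder: str, model_names=APP_MODEL_NAMES) -> None:
    """Verify each named model has its TorchScript file in the expected sub-folder."""
    root = Path(parent_folder)
    for name in model_names:
        ts_file = root / name / "model.ts"
        if not ts_file.is_file():
            raise FileNotFoundError(f"Expected TorchScript for model '{name}' at {ts_file}")
    print(f"All {len(model_names)} model TorchScripts found under {root}")


if __name__ == "__main__":
    # e.g. the folder passed to the app via -m / --model in the tutorial above
    check_model_layout("multi_models")
```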
Lines changed: 7 additions & 0 deletions

import os
import sys

_current_dir = os.path.abspath(os.path.dirname(__file__))
if sys.path and os.path.abspath(sys.path[0]) != _current_dir:
    sys.path.insert(0, _current_dir)
del _current_dir
Lines changed: 4 additions & 0 deletions

from app import AISpleenSegApp

if __name__ == "__main__":
    AISpleenSegApp(do_run=True)
