Commit 1ce007c

MMelQin and gigony authored
add doc on building and running Deploy app on AArch64 (#189)
* add doc on build and run Deploy app on AArch64
* Correct typos
* Update examples/apps/deply_app_on_aarch64_interim.md (three revisions, co-authored by Gigon Bae)
* Updated after resolving merge conflict

Signed-off-by: mmelqin <mingmelvinq@nvidia.com>
Co-authored-by: Gigon Bae <gbae@nvidia.com>
1 parent e733d94 commit 1ce007c

File tree: 1 file changed (examples/apps/deply_app_on_aarch64_interim.md), +114 -0 lines
# Containerizing MONAI Deploy Application for ARMv8 AArch64 (Interim Solution)

This article describes how to containerize a MONAI Deploy application targeting ARMv8 AArch64 without using the MONAI Deploy App SDK Packager or Runner, as they do not yet support ARMv8 AArch64. A Docker image will be generated, though not in the form of the **MONAI Application Package**.

## Overview of Solution

The [MONAI Application Packager (Packager)](https://docs.monai.io/projects/monai-deploy-app-sdk/en/latest/developing_with_sdk/packaging_app.html) is a utility for building an application developed with the MONAI Deploy App SDK into a structured MONAI Application Package (**MAP**). The MAP produced by the Packager is a deployable and reusable Docker image that can be launched by applications that can parse and understand the MAP format, e.g. the [MONAI Application Runner (MAR)](https://docs.monai.io/projects/monai-deploy-app-sdk/en/latest/developing_with_sdk/executing_packaged_app_locally.html).

The Packager and MAR, however, do not support AArch64 in App SDK release v0.1, for the following reasons:

- The Packager limits the use of base Docker images to those targeting x86 on Ubuntu only
- The Packager injects a binary executable that only supports x86 into the generated MAP, and sets it as the Docker entry point
- The MAR runs the MAP Docker image with the aforementioned entry point
An interim solution is therefore provided to containerize and deploy a MONAI Deploy application for AArch64:

- Make use of a trusted AArch64-compatible base image, e.g. [nvcr.io/nvidia/clara-agx/agx-pytorch:21.05-1.7-py3](https://ngc.nvidia.com/catalog/containers/nvidia:clara-agx:agx-pytorch), which is based on Ubuntu 18.04.5 LTS and already has PyTorch 1.7
- Use a custom Dockerfile to explicitly install dependencies with a `requirements` file, and set the application's main function as the entry point
- Build the application Docker image with the aforementioned `Dockerfile` and `requirements` file on an AArch64 host computer. [Docker Buildx](https://docs.docker.com/buildx/working-with-buildx/) can also be used to build multi-platform images, though it is not used in this example (a reference sketch follows this list)
- On the AArch64 host machine, use the `docker run` command or a script to launch the application container
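For reference only, a minimal Docker Buildx sketch is shown below. It is not part of the interim solution; it assumes an x86 development host with QEMU binfmt emulation and a Buildx builder available, and reuses the image name and Dockerfile path from the later sections of this article.

```bash
# Reference only: cross-build the AArch64 image from an x86 host.
# Assumes QEMU binfmt emulation is installed and Docker Buildx is available.
docker buildx create --name arm64-builder --use
docker buildx build \
  --platform linux/arm64 \
  -t ai_unetr_seg_app \
  -f ai_unetr_seg_app/Dockerfile \
  --load .
```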
## Steps

### Create the MONAI Deploy Application

For general guidance on how to build a deploy application using the MONAI Deploy App SDK, and test it on x86 with the Packager and Runner, please refer to [Developing with SDK](https://docs.monai.io/projects/monai-deploy-app-sdk/en/latest/developing_with_sdk/index.html).

For a specific example of building and running a segmentation application, e.g. the Spleen segmentation application, please see [Creating a Segmentation App](https://docs.monai.io/projects/monai-deploy-app-sdk/en/latest/getting_started/tutorials/03_segmentation_app.html).

In the following sections, the UNETR Multi-organ Segmentation application will be used as an example.
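The application can also be exercised directly on a development host where the MONAI Deploy App SDK and the application's dependencies are installed. The command-line options below mirror those used later as the Docker entry point; the paths are placeholders to adjust as needed.

```bash
# Placeholder paths; point them to an actual DICOM series, an output folder, and the TorchScript model.
python3 ai_unetr_seg_app/app.py \
  -i /path/to/dicom_input \
  -o /path/to/output \
  -m /path/to/model.ts
```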
### Create the requirements file

Without using the MONAI Deploy App SDK Packager to automatically detect the dependencies of an application, one has to explicitly create the `requirements.txt` file to be used in the `Dockerfile`. Create the `requirements.txt` file in the application's folder with the content shown below:

```bash
monai>=0.6.0
monai-deploy-app-sdk>=0.1.0
nibabel
numpy>=1.17
pydicom>=1.4.2
torch>=1.5
```

Note: The base image to be used already has torch 1.7 and numpy 1.19.5 pre-installed.
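To double-check which versions the base image already provides before pinning others, the image (referenced by the Dockerfile in the next section) can be queried directly. This is only a sanity check; it assumes the image can be pulled from NGC and is run on the AArch64 host.

```bash
# Print the torch and numpy versions pre-installed in the base image (run on the AArch64 host).
docker run --rm nvcr.io/nvidia/clara-agx/agx-pytorch:21.05-1.7-py3 \
  python3 -c "import torch, numpy; print(torch.__version__, numpy.__version__)"
```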
### Create the Custom Dockerfile

Create the `Dockerfile` in the application folder with the content shown below:

```bash
ARG CONTAINER_REGISTRY=nvcr.io/nvidia/clara-agx
ARG AGX_PT_TAG=21.05-1.7-py3
FROM ${CONTAINER_REGISTRY}/agx-pytorch:${AGX_PT_TAG}

# This is the name of the folder containing the application files.
ENV MY_APP_NAME="ai_unetr_seg_app"

USER root

RUN pip3 install --no-cache-dir --upgrade setuptools==57.4.0 wheel==0.37.0

WORKDIR /opt/$MY_APP_NAME

COPY ./$MY_APP_NAME/requirements.txt ./
RUN pip3 install --no-cache-dir -r requirements.txt

# Copy the application source code.
COPY ./$MY_APP_NAME ./
ENTRYPOINT python3 -u ./app.py -i /input -m /model/model.ts -o /output
```

Note that:

- The application files are copied to `/opt/ai_unetr_seg_app` in this example
- The input DICOM instances are in the folder `/input`
- The TorchScript model file `model.ts` is in `/model`
- The application output will be in `/output`
### Build the Docker Image targeting AArch64

Copy the application folder, including the `requirements.txt` and `Dockerfile`, to the working directory, e.g. `my_apps`, on an AArch64 host machine, and ensure Docker is already installed. The application folder structure looks like below:

```bash
my_apps
└─ ai_unetr_seg_app
    ├── app.py
    ├── Dockerfile
    ├── __init__.py
    ├── __main__.py
    ├── requirements.txt
    └── unetr_seg_operator.py
```

In the working directory `my_apps`, build the Docker image, named `ai_unetr_seg_app` with the default tag `latest`, with the following command:

```bash
docker build -t ai_unetr_seg_app -f ai_unetr_seg_app/Dockerfile .
```
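To confirm that the image was built for the intended architecture, the image metadata can be inspected, e.g.:

```bash
# Expect "arm64" for an AArch64 image.
docker image inspect --format '{{.Architecture}}' ai_unetr_seg_app
```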
### Run the Application Docker Locally

When launching the application container, the input DICOM instances as well as the model file `model.ts` must be available, and the output folder may be a mounted NFS file share hosted on a remote machine.

A sample shell script is provided below:

```bash
#!/bin/bash

# Root of the datasets folder, change as needed.
OUTPUT_ROOT="/mnt/nfs_clientshare"
DATASETS_FOLDER="datasets"

# App specific parameters, change as needed and ensure contents are present.
APP_DATASET_FOLDER="unetr_dataset"
INPUT_FOLDER="/media/m2/monai_apps/input"
MODEL_FOLDER="/media/m2/monai_apps/models/unetr"
DOCKER_IMAGE="ai_unetr_seg_app"

APP_DATASET_PATH=${OUTPUT_ROOT}/${DATASETS_FOLDER}/${APP_DATASET_FOLDER}
echo "Set to save rendering dataset to: ${APP_DATASET_PATH} ..."

docker run -t --rm --shm-size=1G \
    -v ${INPUT_FOLDER}:/input \
    -v ${MODEL_FOLDER}:/model \
    -v ${APP_DATASET_PATH}:/output \
    ${DOCKER_IMAGE}

echo "${DOCKER_IMAGE} completed."
echo
echo "Rendering dataset files are saved in the folder, ${APP_DATASET_PATH}:"
ls ${APP_DATASET_PATH}
```

Once the application container terminates, check the application output in the folder shown in the console log.
