
Commit 7abd995

deprecate acceleration/multi_gpu_test.ipynb (#1532)
fixes #1528

Multi-GPU workflows should typically follow:

- https://github.com/Project-MONAI/tutorials/blob/main/acceleration/distributed_training/unet_training_workflows.py
- https://github.com/Project-MONAI/model-zoo/blob/dev/models/spleen_ct_segmentation/configs/multi_gpu_train.json

### Checks
<!--- Put an `x` in all the boxes that apply, and remove the not applicable items -->
- [ ] Avoid including large-size files in the PR.
- [ ] Clean up long text outputs from code cells in the notebook.
- [ ] For security purposes, please check the contents and remove any sensitive info such as user names and private key.
- [ ] Ensure (1) hyperlinks and markdown anchors are working (2) use relative paths for tutorial repo files (3) put figure and graphs in the `./figure` folder
- [ ] Notebook runs automatically `./runner.sh -t <path to .ipynb file>`

---------

Signed-off-by: Wenqi Li <831580+wyli@users.noreply.github.com>
Signed-off-by: Wenqi Li <wenqil@nvidia.com>
1 parent 19b42c9 commit 7abd995
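The distributed scripts linked in the commit message all follow the usual rank/world-size pattern: each process handles only its own shard of the dataset. As a minimal illustration of that sharding logic, independent of MONAI and PyTorch, here is a hypothetical `shard_indices` helper that mirrors what `torch.utils.data.distributed.DistributedSampler` does by default:

```python
# Sketch of rank-based data sharding as used in distributed training
# workflows. `shard_indices` is a hypothetical helper for illustration;
# real code would use torch.utils.data.distributed.DistributedSampler.

def shard_indices(num_samples: int, rank: int, world_size: int) -> list[int]:
    """Return the sample indices that process `rank` should handle."""
    # Pad with wrapped-around indices so every rank gets the same number
    # of samples, matching DistributedSampler's default behaviour.
    per_rank = -(-num_samples // world_size)  # ceiling division
    total = per_rank * world_size
    padded = [i % num_samples for i in range(total)]
    return padded[rank::world_size]

if __name__ == "__main__":
    # 10 samples across 4 ranks: each rank gets 3, with wrap-around padding.
    for r in range(4):
        print(r, shard_indices(10, r, 4))
```

Together the shards cover every sample, and each rank's loader sees a same-sized, disjoint-modulo-padding slice, which is what lets the linked workflows scale across GPUs without changing the training loop.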

File tree

3 files changed: +0 additions, -320 deletions


README.md

Lines changed: 0 additions & 2 deletions
@@ -197,8 +197,6 @@ This notebook compares the performance of `Dataset`, `CacheDataset` and `Persist
 ##### [fast_training_tutorial](./acceleration/fast_training_tutorial.ipynb)
 This tutorial compares the training performance of pure PyTorch program and optimized program in MONAI based on NVIDIA GPU device and latest CUDA library.
 The optimization methods mainly include: `AMP`, `CacheDataset`, `GPU transforms`, `ThreadDataLoader`, `DiceCELoss` and `SGD`.
-##### [multi_gpu_test](./acceleration/multi_gpu_test.ipynb)
-This notebook is a quick demo for devices, run the Ignite trainer engine on CPU, GPU and multiple GPUs.
 ##### [threadbuffer_performance](./acceleration/threadbuffer_performance.ipynb)
 Demonstrates the use of the `ThreadBuffer` class used to generate data batches during training in a separate thread.
 ##### [transform_speed](./acceleration/transform_speed.ipynb)

acceleration/README.md

Lines changed: 0 additions & 2 deletions
@@ -18,8 +18,6 @@ This notebook compares the performance of `Dataset`, `CacheDataset` and `Persist
 #### [fast_training_tutorial](./fast_training_tutorial.ipynb)
 This tutorial compares the training performance of pure PyTorch program and optimized program in MONAI based on NVIDIA GPU device and latest CUDA library.
 The optimization methods mainly include: `AMP`, `CacheDataset` and `Novograd`.
-#### [multi_gpu_test](./multi_gpu_test.ipynb)
-This notebook is a quick demo for devices, run the Ignite trainer engine on CPU, GPU and multiple GPUs.
 #### [threadbuffer_performance](./threadbuffer_performance.ipynb)
 Demonstrates the use of the `ThreadBuffer` class used to generate data batches during training in a separate thread.
 #### [transform_speed](./transform_speed.ipynb)

acceleration/multi_gpu_test.ipynb

Lines changed: 0 additions & 316 deletions
This file was deleted.

0 commit comments
