diff --git a/3d_segmentation/brats_segmentation_3d.ipynb b/3d_segmentation/brats_segmentation_3d.ipynb
index aee2c58240..5f5ba2c963 100644
--- a/3d_segmentation/brats_segmentation_3d.ipynb
+++ b/3d_segmentation/brats_segmentation_3d.ipynb
@@ -885,7 +885,7 @@
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "Python 3 (ipykernel)",
+   "display_name": "Python 3",
    "language": "python",
    "name": "python3"
   },
@@ -899,7 +899,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.8.12"
+   "version": "3.7.10"
   }
  },
  "nbformat": 4,
diff --git a/figures/openi_sample.png b/figures/openi_sample.png
new file mode 100644
index 0000000000..9069c4cbed
Binary files /dev/null and b/figures/openi_sample.png differ
diff --git a/multimodal/openi_multilabel_classification_transchex/transchex_openi_multilabel_classification.ipynb b/multimodal/openi_multilabel_classification_transchex/transchex_openi_multilabel_classification.ipynb
index 3a68226137..93adfee5d8 100644
--- a/multimodal/openi_multilabel_classification_transchex/transchex_openi_multilabel_classification.ipynb
+++ b/multimodal/openi_multilabel_classification_transchex/transchex_openi_multilabel_classification.ipynb
@@ -14,7 +14,7 @@
     "License: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)\n",
     "\n",
     "An example of images and corresponding reports in Open-I dataset is presented as follows [2]:\n",
-    "![image](https://lh3.googleusercontent.com/bmqrTg0oKbfuwced1SiNdZruqbex_srBbJ9p4nmceAfZaf0FFkl9pzc9UUFP3-6AxxxWDaWbLXfmev5E6_0RmEzd0rLQ1NciF7PTzOkbUcRTJIUKgcpxKZsYnw3L17ATvIFBD47xSIWWiCD28vWBVN1k72P2UPorK1GQJUFEbmDAfGn0XRM2rzwB29SXB2hEtQmbWbe4u4msvcX4spx2rEH-6Qrd-iQRMyDAhq0lstRYBvxtu7ZLRrwtj_P5FQRKeW0hEFqTCQZvKmC75FKoUiltHDfsAl2mig2nsUH0KDBc3atPn9lSBGBFOXsHZdsqw4Q86sXz0roz1vKQWJWcSG7l5YqmPoz5KGrspIs5OJ7QxVvVSmmbe8ctk-T7eBoz3juZ3ux5QhYT2C1BYxGVutLh017FAskyZ1on4BkDTlkLrKSUpbU5la9IrugKM_lAso_cM2ALWb07n-yjsYUJL55oyJBMLCRXyIIutrQSGJW0RwM5LBIgwyklV9P_bRF3_w36hoqtHFNbzN5zrW-RAeJS2nCTYOElmRhzbdl4CwbgVUuStEm66vfUhwtWBMgybyQKb3WVTx69FcgnNC7tuDiPHpU3UuDlNXjKkuh35kxNcbJGYh8ZTY3jmoiVd_nrN9Yh5scCaxxdMtNRgxMWaGFoj7Dl3enBM2wR2FNotZ10smre6F7acOfKSYceAvQXWCzSnZ_C5PJ1szrEFa6v3wn4=w805-h556-no?authuser=0)\n",
+    "![image](../../figures/openi_sample.png)\n",
     "\n",
     "In this tutorial, we use the TransCheX model with 2 layers for each of vision, language mixed modality encoders respectively. As an input to the TransCheX, we use the patient **report** and corresponding **chest X-ray image**. The image itself will be divided into non-overlapping patches with a specified patch resolution and projected into an embedding space. Similarly the reports are tokenized and projected into their respective embedding space. The language and vision encoders seperately encode their respective features from the projected embeddings in each modality. Furthmore, the output of vision and language encoders are fed into a mixed modality encoder which extraxts mutual information. The output of the mixed modality encoder is then utilized for the classification application. \n",
     "\n",
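For context on the TransChex notebook touched above: the markdown cell in its final hunk describes separate vision and language encoders whose outputs feed a mixed-modality encoder used for multi-label classification. Below is a minimal, illustrative PyTorch sketch of that idea only; the class name, hidden size, patch size, vocabulary size, and class count are assumptions chosen for demonstration, and this is not MONAI's Transchex implementation.

```python
import torch
import torch.nn as nn


class MiniVisionLanguageClassifier(nn.Module):
    """Toy vision-language classifier: patch + token embeddings, separate
    per-modality transformer encoders, then a mixed-modality encoder."""

    def __init__(self, num_classes=14, patch_size=32, vocab_size=30522,
                 hidden=256, num_layers=2, num_heads=4):
        super().__init__()
        # Non-overlapping image patches projected into the embedding space.
        self.patch_embed = nn.Conv2d(1, hidden, kernel_size=patch_size, stride=patch_size)
        # Report tokens (e.g. from a BERT-style tokenizer) projected likewise.
        self.token_embed = nn.Embedding(vocab_size, hidden)

        def encoder():
            layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=num_heads,
                                               batch_first=True)
            return nn.TransformerEncoder(layer, num_layers)

        self.vision_encoder = encoder()    # encodes image patches
        self.language_encoder = encoder()  # encodes report tokens
        self.mixed_encoder = encoder()     # fuses the two modalities
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, image, token_ids):
        # image: (B, 1, H, W) chest X-ray; token_ids: (B, L) report token ids.
        vis = self.patch_embed(image).flatten(2).transpose(1, 2)   # (B, P, hidden)
        lang = self.token_embed(token_ids)                         # (B, L, hidden)
        vis = self.vision_encoder(vis)
        lang = self.language_encoder(lang)
        mixed = self.mixed_encoder(torch.cat([lang, vis], dim=1))  # joint sequence
        return self.classifier(mixed.mean(dim=1))                  # multi-label logits


# Usage with random inputs, just to show the expected shapes.
model = MiniVisionLanguageClassifier()
logits = model(torch.randn(2, 1, 224, 224), torch.randint(0, 30522, (2, 64)))
print(logits.shape)  # torch.Size([2, 14])
```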