
Commit cbd769c

Add Image to TransCheX Tutorial (#490)
* Add Image to TransCheX Tutorial

Signed-off-by: ahatamizadeh <ahatamizadeh@nvidia.com>
1 parent 12796e1 commit cbd769c

File tree

3 files changed: +3 −3 lines changed

3d_segmentation/brats_segmentation_3d.ipynb

Lines changed: 2 additions & 2 deletions
@@ -885,7 +885,7 @@
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "Python 3 (ipykernel)",
+   "display_name": "Python 3",
    "language": "python",
    "name": "python3"
   },
@@ -899,7 +899,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.8.12"
+   "version": "3.7.10"
  }
 },
 "nbformat": 4,

figures/openi_sample.png

221 KB

multimodal/openi_multilabel_classification_transchex/transchex_openi_multilabel_classification.ipynb

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@
  "License: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)\n",
  "\n",
  "An example of images and corresponding reports in the Open-I dataset is presented as follows [2]:\n",
- "![image](https://lh3.googleusercontent.com/bmqrTg0oKbfuwced1SiNdZruqbex_srBbJ9p4nmceAfZaf0FFkl9pzc9UUFP3-6AxxxWDaWbLXfmev5E6_0RmEzd0rLQ1NciF7PTzOkbUcRTJIUKgcpxKZsYnw3L17ATvIFBD47xSIWWiCD28vWBVN1k72P2UPorK1GQJUFEbmDAfGn0XRM2rzwB29SXB2hEtQmbWbe4u4msvcX4spx2rEH-6Qrd-iQRMyDAhq0lstRYBvxtu7ZLRrwtj_P5FQRKeW0hEFqTCQZvKmC75FKoUiltHDfsAl2mig2nsUH0KDBc3atPn9lSBGBFOXsHZdsqw4Q86sXz0roz1vKQWJWcSG7l5YqmPoz5KGrspIs5OJ7QxVvVSmmbe8ctk-T7eBoz3juZ3ux5QhYT2C1BYxGVutLh017FAskyZ1on4BkDTlkLrKSUpbU5la9IrugKM_lAso_cM2ALWb07n-yjsYUJL55oyJBMLCRXyIIutrQSGJW0RwM5LBIgwyklV9P_bRF3_w36hoqtHFNbzN5zrW-RAeJS2nCTYOElmRhzbdl4CwbgVUuStEm66vfUhwtWBMgybyQKb3WVTx69FcgnNC7tuDiPHpU3UuDlNXjKkuh35kxNcbJGYh8ZTY3jmoiVd_nrN9Yh5scCaxxdMtNRgxMWaGFoj7Dl3enBM2wR2FNotZ10smre6F7acOfKSYceAvQXWCzSnZ_C5PJ1szrEFa6v3wn4=w805-h556-no?authuser=0)\n",
+ "![image](../../figures/openi_sample.png)\n",
  "\n",
  "In this tutorial, we use the TransCheX model with 2 layers for each of the vision, language, and mixed-modality encoders. As input to TransCheX, we use the patient **report** and the corresponding **chest X-ray image**. The image is divided into non-overlapping patches with a specified patch resolution and projected into an embedding space. Similarly, the reports are tokenized and projected into their own embedding space. The language and vision encoders separately encode features from the projected embeddings of each modality. Furthermore, the outputs of the vision and language encoders are fed into a mixed-modality encoder, which extracts mutual information. The output of the mixed-modality encoder is then used for the classification task.\n",
  "\n",
