Commit 2f5b10c

Add Image to TransCheX Tutorial
Signed-off-by: ahatamizadeh <ahatamizadeh@nvidia.com>
1 parent 29cffef · commit 2f5b10c

File tree: 1 file changed (+1 −1 lines)

multimodal/openi_multilabel_classification_transchex/transchex_openi_multilabel_classification.ipynb

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@
     "License: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)\n",
     "\n",
     "An example of images and corresponding reports in the Open-I dataset is presented as follows [2]:\n",
-    "![image](../figures/openi_sample.png)\n",
+    "![image](../../figures/openi_sample.png)\n",
     "\n",
     "In this tutorial, we use the TransCheX model with 2 layers for each of the vision, language, and mixed-modality encoders. As input to TransCheX, we use the patient **report** and the corresponding **chest X-ray image**. The image is divided into non-overlapping patches of a specified patch resolution and projected into an embedding space. Similarly, the reports are tokenized and projected into their respective embedding space. The language and vision encoders separately encode their respective features from the projected embeddings of each modality. Furthermore, the outputs of the vision and language encoders are fed into a mixed-modality encoder, which extracts mutual information between the modalities. The output of the mixed-modality encoder is then used for the classification task.\n",
     "\n",
