Multi-instance learning tutorial Loss #1478
Unanswered
relyativist asked this question in Q&A
Replies: 1 comment 1 reply
-
Hi @relyativist, thanks for your interest here. If @myron could help double-confirm, that would be great! Thanks in advance!
1 reply
-
Hello, thanks for the great multi-instance learning tutorial. I am trying to understand equation (3) from the paper, L = L_bag + lambda * L_patch. In the tutorial the loss is nn.BCEWithLogitsLoss() with "mean" reduction, together with a predefined weight decay. I understand that BCE loss can be used because of the one-hot label-encoding map transform, so each prediction is assigned to one class. But I can't see the underlying connection between equation (3) and the defined BCEWithLogitsLoss. Can someone elaborate on this? (See the sketch below for one way the two terms could map onto BCEWithLogitsLoss.)
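For readers with the same question, here is a minimal sketch of how equation (3) could be expressed with two BCEWithLogitsLoss terms. It is not the tutorial's exact code: the names bag_logits, patch_logits, lam and the assumption that the bag label is broadcast to every patch are illustrative only, and the tutorial may combine (or omit) the patch-level term differently.

```python
# Minimal sketch (assumptions, not the tutorial's code): equation (3)
# L = L_bag + lambda * L_patch written with two BCEWithLogitsLoss terms.
import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss(reduction="mean")  # same loss type for both terms


def mil_loss(bag_logits, patch_logits, bag_labels, lam=0.1):
    """
    bag_logits:   (B, C)    one logit per class for each bag (e.g. whole slide)
    patch_logits: (B, N, C) one logit per class for each of the N patches in a bag
    bag_labels:   (B, C)    one-hot / multi-hot bag-level targets (float)
    lam:          weight of the patch-level term (lambda in equation (3))
    """
    # L_bag: standard BCE between bag-level predictions and bag-level labels
    loss_bag = criterion(bag_logits, bag_labels)

    # L_patch: under the usual MIL assumption the bag label is broadcast to
    # every patch in the bag, so each patch is supervised with the same target
    patch_targets = bag_labels.unsqueeze(1).expand_as(patch_logits)
    loss_patch = criterion(patch_logits, patch_targets)

    # Equation (3): bag term plus the lambda-weighted patch term
    return loss_bag + lam * loss_patch
```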