Towards Trainable Saliency Maps in Medical Imaging
- URL: http://arxiv.org/abs/2011.07482v1
- Date: Sun, 15 Nov 2020 09:01:55 GMT
- Title: Towards Trainable Saliency Maps in Medical Imaging
- Authors: Mehak Aggarwal, Nishanth Arun, Sharut Gupta, Ashwin Vaswani, Bryan
Chen, Matthew Li, Ken Chang, Jay Patel, Katherine Hoebel, Mishka Gidwani,
Jayashree Kalpathy-Cramer, Praveer Singh
- Abstract summary: We show how introducing a model design element, agnostic to both architecture complexity and model task, yields an inherently self-explanatory model.
We compare our results with state-of-the-art non-trainable saliency maps on the RSNA Pneumonia dataset and demonstrate much higher localization efficacy using our adopted technique.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While the success of Deep Learning (DL) in automated diagnosis can be
transformative to medical practice, especially for people with little or
no access to doctors, its widespread acceptability is severely limited by
inherent black-box decision making and unsafe failure modes. While saliency
methods attempt to tackle this problem in non-medical contexts, their a priori
explanations do not transfer well to medical use cases. With this study we
validate a model design element agnostic to both architecture complexity and
model task, and show how introducing this element gives an inherently
self-explanatory model. We compare our results with state-of-the-art
non-trainable saliency maps on the RSNA Pneumonia Dataset and demonstrate much
higher localization efficacy using our adopted technique. We also compare with
a fully supervised baseline and provide a reasonable alternative to its high
data-labelling overhead. We further investigate the validity of our claims
through qualitative evaluation from an expert reader.
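The abstract does not specify the exact design element used, so the following is only a generic illustration of what a "trainable" saliency map can look like, in contrast to post-hoc methods such as Grad-CAM: a saliency head is made part of the forward pass, so the attention weights that pool spatial features for classification are learned jointly with the task and serve directly as the explanation. All function and parameter names below are hypothetical, not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax (subtract the max before exponentiating).
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def forward(features, w_sal, w_cls):
    """Classifier with a built-in, trainable saliency head (illustrative only).

    features: (H, W, C) feature map from any backbone CNN.
    w_sal:    (C,) weights of a 1x1 conv producing one saliency logit per location.
    w_cls:    (C, K) classifier weights applied to the saliency-pooled feature.

    Returns (class_probs, saliency), where saliency is a (H, W) map that
    sums to 1 over locations. Because the class prediction depends on the
    saliency weights, gradients from the task loss train the map directly --
    no post-hoc attribution step is needed.
    """
    H, W, C = features.shape
    sal_logits = features.reshape(-1, C) @ w_sal      # (H*W,) one logit per location
    saliency = softmax(sal_logits)                    # attention over spatial locations
    pooled = saliency @ features.reshape(-1, C)       # (C,) saliency-weighted pooling
    class_probs = softmax(pooled @ w_cls)             # (K,) class distribution
    return class_probs, saliency.reshape(H, W)
```

In this sketch the same tensor is both the pooling mechanism and the explanation, which is what makes the map "inherently self-explanatory": reading it off costs nothing at inference time, unlike gradient-based saliency methods that require an extra backward pass.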