Ranking Loss and Sequestering Learning for Reducing Image Search Bias in
Histopathology
- URL: http://arxiv.org/abs/2304.08498v1
- Date: Sat, 15 Apr 2023 03:38:09 GMT
- Title: Ranking Loss and Sequestering Learning for Reducing Image Search Bias in
Histopathology
- Authors: Pooria Mazaheri, Azam Asilian Bidgoli, Shahryar Rahnamayan, H.R.
Tizhoosh
- Abstract summary: This paper proposes two novel ideas to improve image search performance.
First, we use a ranking loss function to guide feature extraction toward the matching-oriented nature of the search.
Second, we introduce the concept of sequestering learning to enhance the generalization of feature extraction.
- Score: 0.6595290783361959
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, deep learning has started to play an essential role in healthcare
applications, including image search in digital pathology. Despite the recent
progress in computer vision, significant issues remain for image searching in
histopathology archives. A well-known problem is AI bias and lack of
generalization. A more specific shortcoming of deep models is their disregard for
search functionality. The former affects every model; the latter affects only
search and matching. Due to the lack of ranking-based learning, researchers
must train models based on the classification error and then use the resultant
embedding for image search purposes. Moreover, deep models appear to be prone
to internal bias even when trained on a large image repository from various hospitals.
This paper proposes two novel ideas to improve image search performance. First,
we use a ranking loss function to guide feature extraction toward the
matching-oriented nature of the search. By forcing the model to learn the
ranking of matched outputs, the representation learning is customized toward
image search instead of learning a class label. Second, we introduce the
concept of sequestering learning to enhance the generalization of feature
extraction. By excluding the images of the input hospital from the matched
outputs, i.e., sequestering the input domain, the institutional bias is
reduced. The proposed ideas are implemented and validated through the largest
public dataset of whole slide images. The experiments demonstrate superior
results compared to the state of the art.
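As a rough illustration of the first idea, the sketch below (PyTorch, with hypothetical function and tensor names; the authors' exact formulation may differ) shows a triplet-style ranking loss that forces a matched image to rank closer to the anchor than a non-matched one, so the embedding is optimized for ordering search results rather than for predicting a class label.

```python
import torch.nn.functional as F

def ranking_loss(anchor_emb, matched_emb, unmatched_emb, margin=1.0):
    """Triplet-style ranking loss: the matched image must rank closer to the
    anchor than the unmatched image by at least `margin`."""
    d_pos = F.pairwise_distance(anchor_emb, matched_emb)    # distance to a correctly matched image
    d_neg = F.pairwise_distance(anchor_emb, unmatched_emb)  # distance to a non-matching image
    return F.relu(d_pos - d_neg + margin).mean()            # penalize wrongly ordered pairs
```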
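For the second idea, a minimal sketch of sequestering at retrieval (or hard-example mining) time: every archive image coming from the query's own hospital is excluded before the ranking is computed, so institution-specific cues such as staining or scanner signatures cannot inflate the match. The hospital labels and the top-k interface here are illustrative assumptions, not the paper's API.

```python
import torch
import torch.nn.functional as F

def sequestered_top_k(query_emb, query_hospital, archive_emb, archive_hospitals, k=5):
    """Rank archive images by cosine similarity to the query while
    'sequestering' (excluding) all images from the query's own hospital."""
    sims = F.cosine_similarity(query_emb.unsqueeze(0), archive_emb)        # (N,) similarities
    same_site = torch.tensor([h == query_hospital for h in archive_hospitals])
    sims = sims.masked_fill(same_site, float("-inf"))                      # drop own-hospital candidates
    return torch.topk(sims, k).indices                                     # top-k matches from other hospitals
```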
Related papers
- Classes Are Not Equal: An Empirical Study on Image Recognition Fairness [100.36114135663836]
We experimentally demonstrate that classes are not equal and the fairness issue is prevalent for image classification models across various datasets.
Our findings reveal that models tend to exhibit greater prediction biases for classes that are more challenging to recognize.
Data augmentation and representation learning algorithms improve overall performance by promoting fairness to some degree in image classification.
arXiv Detail & Related papers (2024-02-28T07:54:50Z)
- Invisible Relevance Bias: Text-Image Retrieval Models Prefer AI-Generated Images [67.18010640829682]
We show that AI-generated images introduce an invisible relevance bias to text-image retrieval models.
The inclusion of AI-generated images in the training data of the retrieval models exacerbates the invisible relevance bias.
We propose an effective training method aimed at alleviating the invisible relevance bias.
arXiv Detail & Related papers (2023-11-23T16:22:58Z)
- Are Deep Learning Classification Results Obtained on CT Scans Fair and Interpretable? [0.0]
Most lung nodule classification papers using deep learning randomly shuffle data and split it into training, validation, and test sets, so images from the same patient can appear in both the training and test sets.
In contrast, deep neural networks trained with strict patient-level separation maintain their accuracy rates even when new patient images are tested.
Heat-map visualizations of the activations of the deep neural networks trained with strict patient-level separation indicate a higher degree of focus on the relevant nodules.
arXiv Detail & Related papers (2023-09-22T05:57:25Z)
- Spuriosity Rankings: Sorting Data to Measure and Mitigate Biases [62.54519787811138]
We present a simple but effective method to measure and mitigate model biases caused by reliance on spurious cues.
We rank images within their classes based on spuriosity, proxied via deep neural features of an interpretable network.
Our results suggest that model bias due to spurious feature reliance is influenced far more by what the model is trained on than how it is trained.
arXiv Detail & Related papers (2022-12-05T23:15:43Z)
- A domain adaptive deep learning solution for scanpath prediction of paintings [66.46953851227454]
This paper focuses on the eye-movement analysis of viewers during the visual experience of a certain number of paintings.
We introduce a new approach to predicting human visual attention, which impacts several cognitive functions for humans.
The proposed new architecture ingests images and returns scanpaths, a sequence of points featuring a high likelihood of catching viewers' attention.
arXiv Detail & Related papers (2022-09-22T22:27:08Z)
- Indicative Image Retrieval: Turning Blackbox Learning into Grey [0.0]
This paper revisits the importance of relevance/matching modeling in the deep learning era.
It shows that it is possible to skip the representation learning and model the matching evidence directly.
It sets a new record of 97.77% on Oxford-5k (97.81% on Paris-6k) without extracting any deep features.
arXiv Detail & Related papers (2022-01-28T02:21:09Z)
- On the Unreasonable Effectiveness of Centroids in Image Retrieval [0.1933681537640272]
We propose to use the mean centroid representation both during training and retrieval.
As each class is represented by a single embedding - the class centroid - both retrieval time and storage requirements are reduced significantly.
arXiv Detail & Related papers (2021-04-28T08:57:57Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z)
- Distractor-Aware Neuron Intrinsic Learning for Generic 2D Medical Image Classifications [30.62607811479386]
We observe that convolutional neural networks (CNNs) are vulnerable to distractor interference.
In this paper, we explore distractors in the CNN feature space by proposing a neuron intrinsic learning method.
The proposed method performs favorably against the state-of-the-art approaches.
arXiv Detail & Related papers (2020-07-20T09:59:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.