A Saliency-based Clustering Framework for Identifying Aberrant Predictions
- URL: http://arxiv.org/abs/2311.06454v1
- Date: Sat, 11 Nov 2023 01:53:59 GMT
- Title: A Saliency-based Clustering Framework for Identifying Aberrant Predictions
- Authors: Aina Tersol Montserrat, Alexander R. Loftus, Yael Daihes
- Abstract summary: We introduce the concept of aberrant predictions, emphasizing that the nature of classification errors is as critical as their frequency.
We propose a novel, efficient training methodology aimed at both reducing the misclassification rate and discerning aberrant predictions.
We apply this methodology to the less-explored domain of veterinary radiology, where the stakes are high but which has not been studied as extensively as human medicine.
- Score: 49.1574468325115
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In machine learning, classification tasks serve as the cornerstone of a wide
range of real-world applications. Reliable, trustworthy classification is
particularly intricate in biomedical settings, where the ground truth is often
inherently uncertain and relies on high degrees of human expertise for
labeling. Traditional metrics such as precision and recall, while valuable, are
insufficient for capturing the nuances of these ambiguous scenarios. Here we
introduce the concept of aberrant predictions, emphasizing that the nature of
classification errors is as critical as their frequency. We propose a novel,
efficient training methodology aimed at both reducing the misclassification
rate and discerning aberrant predictions. Our framework demonstrates a
substantial improvement in model performance, achieving a 20% increase in
precision. We apply this methodology to the less-explored domain of veterinary
radiology, where the stakes are high but which has not been studied as
extensively as human medicine. By focusing on the identification and mitigation of
aberrant predictions, we enhance the utility and trustworthiness of machine
learning classifiers in high-stakes, real-world scenarios, including new
applications in the veterinary world.
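The abstract names the ingredients (saliency and clustering) but not the algorithm, so the following is only a minimal sketch of how saliency-based clustering could flag aberrant predictions, assuming gradient saliency maps, per-class k-means, and a simple distance-to-centroid threshold; the function names and parameters are illustrative, not the authors' API.

```python
"""Minimal sketch: flag 'aberrant' predictions whose saliency pattern is far
from the saliency clusters of their predicted class. Illustrative only."""
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans


def gradient_saliency(model, x, target_class):
    """|d logit_target / d input|, pooled to a fixed-length vector."""
    x = x.clone().requires_grad_(True)
    logits = model(x)                       # shape (1, num_classes)
    logits[0, target_class].backward()
    sal = x.grad.abs().squeeze(0)           # same spatial shape as the input
    # Average-pool to a coarse grid so maps within a class are comparable.
    pooled = F.adaptive_avg_pool2d(sal.mean(0, keepdim=True), 8)
    return pooled.flatten().detach().numpy()


def flag_aberrant(model, inputs, preds, n_clusters=3, pct=95.0):
    """Cluster saliency vectors per predicted class; flag far-from-centroid outliers.

    inputs: (N, C, H, W) tensor; preds: array of predicted class indices.
    """
    flags = np.zeros(len(inputs), dtype=bool)
    for c in np.unique(preds):
        idx = np.where(preds == c)[0]
        sals = np.stack([gradient_saliency(model, inputs[i:i + 1], c) for i in idx])
        km = KMeans(n_clusters=min(n_clusters, len(idx)), n_init=10).fit(sals)
        dist = np.min(km.transform(sals), axis=1)   # distance to nearest centroid
        flags[idx] = dist > np.percentile(dist, pct)
    return flags
```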
Related papers
- From Uncertainty to Clarity: Uncertainty-Guided Class-Incremental Learning for Limited Biomedical Samples via Semantic Expansion [0.0]
We propose a class-incremental learning method under limited samples in the biomedical field.
Our method achieves optimal performance, surpassing state-of-the-art methods by as much as 53.54% in accuracy.
arXiv Detail & Related papers (2024-09-12T05:22:45Z)
- Learning Confidence Bounds for Classification with Imbalanced Data [42.690254618937196]
We propose a novel framework that leverages learning theory and concentration inequalities to overcome the shortcomings of traditional solutions.
Our method can effectively adapt to the varying degrees of imbalance across different classes, resulting in more robust and reliable classification outcomes.
arXiv Detail & Related papers (2024-07-16T16:02:27Z)
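The confidence-bounds entry above only names its ingredients (learning theory and concentration inequalities), so the sketch below is a generic illustration of the idea rather than that paper's construction: a Hoeffding lower bound on per-class recall, which is naturally looser for minority classes with few validation samples.

```python
"""Generic illustration (not the paper's method): a Hoeffding lower bound on
per-class recall, which shrinks for rare classes with few validation samples."""
import math


def recall_lower_bound(correct: int, total: int, delta: float = 0.05) -> float:
    """With probability >= 1 - delta, true recall >= empirical recall - sqrt(ln(1/delta)/(2n))."""
    if total == 0:
        return 0.0
    empirical = correct / total
    slack = math.sqrt(math.log(1.0 / delta) / (2.0 * total))
    return max(0.0, empirical - slack)


# A rare class with 9/10 correct gets a much weaker guarantee than a
# common class with 900/1000 correct, even though the point estimates match.
print(recall_lower_bound(9, 10))      # ~0.513
print(recall_lower_bound(900, 1000))  # ~0.861
```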
- XAL: EXplainable Active Learning Makes Classifiers Better Low-resource Learners [71.8257151788923]
We propose a novel Explainable Active Learning framework (XAL) for low-resource text classification.
XAL encourages classifiers to justify their inferences and delve into unlabeled data for which they cannot provide reasonable explanations.
Experiments on six datasets show that XAL achieves consistent improvement over 9 strong baselines.
arXiv Detail & Related papers (2023-10-09T08:07:04Z)
- Test-Time Amendment with a Coarse Classifier for Fine-Grained Classification [10.719054378755981]
We present a novel approach for Post-Hoc Correction called Hierarchical Ensembles (HiE).
HiE utilizes the label hierarchy to improve the performance of fine-grained classification at test time using coarse-grained predictions.
Our approach brings notable gains in top-1 accuracy while significantly decreasing the severity of mistakes as training data decreases for the fine-grained classes.
arXiv Detail & Related papers (2023-02-01T10:55:27Z)
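A hedged reading of the HiE mechanism above (not necessarily the authors' exact formulation) is that coarse predictions reweight fine-grained probabilities at test time, with each fine-grained class scaled by the probability of its parent coarse class:

```python
"""Sketch of hierarchy-aware test-time amendment (illustrative, assumed form):
p'(fine) is proportional to p_fine(fine) * p_coarse(parent(fine))."""
import numpy as np


def amend_with_coarse(p_fine, p_coarse, parent):
    """p_fine: (num_fine,), p_coarse: (num_coarse,), parent[i] = coarse id of fine class i."""
    amended = p_fine * p_coarse[parent]
    return amended / amended.sum()


# Toy example: 4 fine classes grouped into 2 coarse classes.
parent = np.array([0, 0, 1, 1])
p_fine = np.array([0.30, 0.10, 0.35, 0.25])    # fine model slightly prefers class 2
p_coarse = np.array([0.80, 0.20])              # coarse model is confident about group 0
print(amend_with_coarse(p_fine, p_coarse, parent))  # mass shifts back to classes 0 and 1
```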
- Benchmarking common uncertainty estimation methods with histopathological images under domain shift and label noise [62.997667081978825]
In high-risk environments, deep learning models need to be able to judge their uncertainty and reject inputs when there is a significant chance of misclassification.
We conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of Whole Slide Images.
We observe that ensembles of methods generally lead to better uncertainty estimates, as well as increased robustness to domain shift and label noise.
arXiv Detail & Related papers (2023-01-03T11:34:36Z)
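The finding above (ensembles tend to give better uncertainty estimates) can be illustrated with the standard deep-ensemble recipe, shown here as a common baseline rather than that paper's code: average the member softmax outputs and use predictive entropy as the rejection score.

```python
"""Standard deep-ensemble uncertainty (a common baseline, not the paper's code):
average member probabilities, score uncertainty by predictive entropy, reject above a threshold."""
import numpy as np


def ensemble_predict(member_probs):
    """member_probs: (n_members, n_samples, n_classes) softmax outputs."""
    mean_probs = member_probs.mean(axis=0)
    entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=1)
    return mean_probs.argmax(axis=1), entropy


def reject(entropy, threshold):
    """Boolean mask of inputs the model should abstain on."""
    return entropy > threshold


# Toy usage with 3 members, 2 samples, 3 classes.
probs = np.array([
    [[0.90, 0.05, 0.05], [0.40, 0.35, 0.25]],
    [[0.85, 0.10, 0.05], [0.30, 0.40, 0.30]],
    [[0.92, 0.04, 0.04], [0.35, 0.30, 0.35]],
])
preds, ent = ensemble_predict(probs)
print(preds, reject(ent, threshold=0.8))  # the second, disagreed-on sample is rejected
```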
- Towards Fair Classification against Poisoning Attacks [52.57443558122475]
We study the poisoning scenario where the attacker can insert a small fraction of samples into training data.
We propose a general and theoretically guaranteed framework which accommodates traditional defense methods to fair classification against poisoning attacks.
arXiv Detail & Related papers (2022-10-18T00:49:58Z)
- Learning Discriminative Representation via Metric Learning for Imbalanced Medical Image Classification [52.94051907952536]
We propose embedding metric learning into the first stage of the two-stage framework, specifically to help the feature extractor learn more discriminative feature representations.
Experiments, mainly on three medical image datasets, show that the proposed approach consistently outperforms existing one-stage and two-stage approaches.
arXiv Detail & Related papers (2022-07-14T14:57:01Z)
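As a hedged illustration of the stage-one idea above, one common choice for an embedding metric-learning objective is a batch-hard triplet loss on the extracted features; this is an assumed stand-in, not the paper's exact loss.

```python
"""Illustrative first-stage objective (assumed, not the paper's exact loss):
pull same-class features together and push different-class features apart."""
import torch
import torch.nn.functional as F


def triplet_metric_loss(features, labels, margin=0.5):
    """features: (batch, dim) embeddings; labels: (batch,) integer class labels."""
    dists = torch.cdist(features, features)            # pairwise Euclidean distances
    loss, count = features.new_zeros(()), 0
    for a in range(len(labels)):
        pos = (labels == labels[a]).nonzero().flatten()
        neg = (labels != labels[a]).nonzero().flatten()
        if len(pos) < 2 or len(neg) == 0:
            continue
        hardest_pos = dists[a, pos[pos != a]].max()     # farthest same-class sample
        hardest_neg = dists[a, neg].min()               # closest other-class sample
        loss = loss + F.relu(hardest_pos - hardest_neg + margin)
        count += 1
    return loss / max(count, 1)
```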
- Sample Efficient Learning of Image-Based Diagnostic Classifiers Using Probabilistic Labels [11.377362220429786]
We propose a way to learn and use probabilistic labels to train accurate and calibrated deep networks from relatively small datasets.
We observe gains of up to 22% in the accuracy of models trained with these labels, as compared with traditional approaches.
arXiv Detail & Related papers (2021-02-11T18:13:56Z)
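One way to picture training with probabilistic labels, as described above, is a soft-target cross-entropy in which each target is a distribution (for example, derived from annotator votes) instead of a one-hot vector; this is a generic formulation, not necessarily the loss used in that paper.

```python
"""Generic soft-label cross-entropy (illustrative): targets are probability
distributions, e.g. derived from multiple annotators, instead of one-hot labels."""
import torch
import torch.nn.functional as F


def soft_label_cross_entropy(logits, soft_targets):
    """logits: (batch, classes); soft_targets: (batch, classes), rows sum to 1."""
    log_probs = F.log_softmax(logits, dim=1)
    return -(soft_targets * log_probs).sum(dim=1).mean()


# Two annotators out of three called the second case abnormal -> target (1/3, 2/3).
logits = torch.tensor([[2.0, -1.0], [0.5, 0.2]])
targets = torch.tensor([[1.0, 0.0], [1 / 3, 2 / 3]])
print(soft_label_cross_entropy(logits, targets))
```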
- Exemplar Auditing for Multi-Label Biomedical Text Classification [0.4873362301533824]
We generalize a recently proposed zero-shot sequence labeling method, "supervised labeling via a convolutional decomposition".
The approach yields classification with "introspection", relating the fine-grained features of an inference-time prediction to their nearest neighbors.
Our proposed approach yields both a competitively effective classification model and an interrogation mechanism to aid healthcare workers in understanding the salient features that drive the model's predictions.
arXiv Detail & Related papers (2020-04-07T02:54:20Z)
- Regularizing Class-wise Predictions via Self-knowledge Distillation [80.76254453115766]
We propose a new regularization method that penalizes the predictive distribution between similar samples.
This results in regularizing the dark knowledge (i.e., the knowledge on wrong predictions) of a single network.
Our experimental results on various image classification tasks demonstrate that this simple yet powerful method significantly improves generalization ability.
arXiv Detail & Related papers (2020-03-31T06:03:51Z)
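The class-wise regularizer described above can be sketched under the assumption that it is a temperature-scaled KL penalty between the network's predictive distributions for two different samples of the same class (one network, no external teacher); this paraphrases the stated idea rather than reproducing the authors' code.

```python
"""Sketch of a class-wise self-distillation penalty (assumed form): make the
network's predictive distribution consistent across two samples of the same class."""
import torch
import torch.nn.functional as F


def class_wise_self_kd(model, x_a, x_b, temperature=4.0):
    """x_a, x_b: two batches whose i-th elements share the same class label."""
    logits_a = model(x_a)
    with torch.no_grad():                  # the 'teacher' view is not back-propagated through
        logits_b = model(x_b)
    p_teacher = F.softmax(logits_b / temperature, dim=1)
    log_p_student = F.log_softmax(logits_a / temperature, dim=1)
    # KL(teacher || student), scaled by T^2 as is conventional in distillation.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
```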
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.