Semantically Redundant Training Data Removal and Deep Model
Classification Performance: A Study with Chest X-rays
- URL: http://arxiv.org/abs/2309.09773v1
- Date: Mon, 18 Sep 2023 13:56:34 GMT
- Title: Semantically Redundant Training Data Removal and Deep Model
Classification Performance: A Study with Chest X-rays
- Authors: Sivaramakrishnan Rajaraman, Ghada Zamzmi, Feng Yang, Zhaohui Liang,
Zhiyun Xue, and Sameer Antani
- Abstract summary: We propose an entropy-based sample scoring approach to identify and remove semantically redundant training data.
We demonstrate using the publicly available NIH chest X-ray dataset that the model trained on the resulting informative subset of training data significantly outperforms the model trained on the full training set.
- Score: 5.454938535500864
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep learning (DL) has demonstrated its innate capacity to independently
learn hierarchical features from complex and multi-dimensional data. A common
understanding is that its performance scales up with the amount of training
data. Another data attribute is the inherent variety. It follows, therefore,
that semantic redundancy, which is the presence of similar or repetitive
information, would tend to lower performance and limit generalizability to
unseen data. In medical imaging data, semantic redundancy can occur due to the
presence of multiple images that have highly similar presentations for the
disease of interest. Further, the common use of augmentation methods to
generate variety in DL training may be limiting performance when applied to
semantically redundant data. We propose an entropy-based sample scoring
approach to identify and remove semantically redundant training data. We
demonstrate using the publicly available NIH chest X-ray dataset that the model
trained on the resulting informative subset of training data significantly
outperforms the model trained on the full training set, during both internal
(recall: 0.7164 vs 0.6597, p<0.05) and external testing (recall: 0.3185 vs
0.2589, p<0.05). Our findings emphasize the importance of information-oriented
training sample selection as opposed to the conventional practice of using all
available training data.
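The abstract does not describe the scoring implementation. As a minimal sketch of one plausible reading, the snippet below scores each image by the Shannon entropy of its intensity histogram and keeps the highest-scoring fraction as the informative training subset. The entropy definition, the ranking direction, and `keep_fraction` are illustrative assumptions, not the authors' published procedure.

```python
import numpy as np

def histogram_entropy(image: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy (bits) of an 8-bit image's intensity histogram."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins; 0*log(0) := 0
    return float(-(p * np.log2(p)).sum())

def select_informative_subset(images, keep_fraction=0.8):
    """Rank images by entropy and keep the highest-scoring fraction."""
    scores = np.array([histogram_entropy(img) for img in images])
    order = np.argsort(scores)[::-1]  # highest entropy first (assumed direction)
    return order[: int(len(images) * keep_fraction)]

# Toy usage on synthetic 8-bit "X-rays"
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, (224, 224), dtype=np.uint8) for _ in range(100)]
kept_idx = select_informative_subset(images, keep_fraction=0.8)
print(f"kept {len(kept_idx)} of {len(images)} training samples")
```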
Related papers
- Probing Perfection: The Relentless Art of Meddling for Pulmonary Airway Segmentation from HRCT via a Human-AI Collaboration Based Active Learning Method [13.384578466263566]
In pulmonary tracheal segmentation, the scarcity of annotated data is a prevalent issue.
Deep Learning (DL) methods face two challenges: the opacity of 'black box' models and the need for improved performance.
We address these challenges by combining diverse query strategies with various DL models.
arXiv Detail & Related papers (2024-07-03T23:27:53Z)
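The summary above mentions diverse query strategies without detail. As a generic illustration of one standard strategy (not necessarily the paper's), the sketch below performs entropy-based uncertainty sampling: it requests human labels for the unlabeled samples about which the model is least certain.

```python
import numpy as np

def uncertainty_query(probs: np.ndarray, budget: int) -> np.ndarray:
    """Pick the `budget` unlabeled samples with the highest predictive
    entropy, i.e. the ones the model is least certain about.
    `probs` has shape (n_samples, n_classes); rows sum to 1."""
    eps = 1e-12
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)
    return np.argsort(entropy)[::-1][:budget]   # most uncertain first

# Toy usage: query 5 of 100 unlabeled samples for annotation
rng = np.random.default_rng(1)
logits = rng.normal(size=(100, 3))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
to_annotate = uncertainty_query(probs, budget=5)
print("request labels for samples:", to_annotate)
```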
- Distilled Datamodel with Reverse Gradient Matching [74.75248610868685]
We introduce an efficient framework for assessing data impact, comprising offline training and online evaluation stages.
Our proposed method achieves comparable model behavior evaluation while significantly speeding up the process compared to the direct retraining method.
arXiv Detail & Related papers (2024-04-22T09:16:14Z)
- MissDiff: Training Diffusion Models on Tabular Data with Missing Values [29.894691645801597]
This work presents a unified and principled diffusion-based framework for learning from data with missing values.
We first observe that the widely adopted "impute-then-generate" pipeline may lead to a biased learning objective.
We prove the proposed method is consistent in learning the score of data distributions, and the proposed training objective serves as an upper bound for the negative likelihood in certain cases.
arXiv Detail & Related papers (2023-07-02T03:49:47Z)
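A hedged sketch of the stated idea: rather than imputing and then generating, the denoising objective can be evaluated only on observed entries, so missing values never contribute a biased training target. The toy corruption process and denoiser below are assumptions for illustration, not the authors' implementation.

```python
import torch

denoiser_net = torch.nn.Linear(9, 8)      # toy denoiser over [x_noisy, t]

def denoiser(z, t):
    return denoiser_net(torch.cat([z, t.view(-1, 1)], dim=1))

def masked_denoising_loss(x, observed_mask, t):
    """Denoising loss restricted to observed entries, so missing values
    (and any placeholder fill) never enter the training signal."""
    noise = torch.randn_like(x)
    noisy = x + t.view(-1, 1) * noise     # toy forward corruption process
    pred = denoiser(noisy, t)             # predict the injected noise
    sq_err = (pred - noise) ** 2
    return (sq_err * observed_mask).sum() / observed_mask.sum().clamp(min=1)

# Toy usage on 8-dimensional tabular rows with ~30% missing entries
x = torch.randn(32, 8)
mask = (torch.rand(32, 8) > 0.3).float()  # 1 = observed, 0 = missing
t = torch.rand(32)
loss = masked_denoising_loss(x, mask, t)
loss.backward()                           # gradients flow to denoiser_net
```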
- Exploring Data Redundancy in Real-world Image Classification through Data Selection [20.389636181891515]
Deep learning models often require large amounts of data for training, leading to increased costs.
We present two data valuation metrics based on Synaptic Intelligence and gradient norms, respectively, to study redundancy in real-world image data.
Online and offline data selection algorithms are then proposed via clustering and grouping based on the examined data values.
arXiv Detail & Related papers (2023-06-25T03:31:05Z)
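As an illustration of gradient-norm-based data valuation (one of the two metrics named above; the Synaptic Intelligence variant is omitted), the sketch below scores each sample by the L2 norm of its per-sample loss gradient, and the clustering-based selection step is simplified to a plain top-k. Details are assumptions, not the paper's recipe.

```python
import torch
import torch.nn.functional as F

def gradient_norm_scores(model, inputs, labels):
    """Value each sample by the norm of the loss gradient it induces.
    A simple per-sample loop; vectorized per-sample gradients (e.g. via
    torch.func) would be faster but less transparent."""
    scores = []
    for x, y in zip(inputs, labels):
        model.zero_grad()
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        sq = sum((p.grad ** 2).sum() for p in model.parameters() if p.grad is not None)
        scores.append(sq.sqrt().item())
    return torch.tensor(scores)

# Toy usage: rank 64 samples, keep the top half as the "informative" subset
model = torch.nn.Linear(16, 4)
X, y = torch.randn(64, 16), torch.randint(0, 4, (64,))
scores = gradient_norm_scores(model, X, y)
keep = scores.argsort(descending=True)[:32]
```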
- Stubborn Lexical Bias in Data and Models [50.79738900885665]
We use a new statistical method to examine whether spurious patterns in data appear in models trained on the data.
We apply an optimization approach to *reweight* the training data, reducing thousands of spurious correlations.
Surprisingly, though this method can successfully reduce lexical biases in the training data, we still find strong evidence of corresponding bias in the trained models.
arXiv Detail & Related papers (2023-06-03T20:12:27Z)
- Reconstructing Training Data from Model Gradient, Provably [68.21082086264555]
We reconstruct the training samples from a single gradient query at a randomly chosen parameter value.
As a provable attack that reveals sensitive training data, our findings suggest potential severe threats to privacy.
arXiv Detail & Related papers (2022-12-07T15:32:22Z)
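A common mechanism in this line of work, shown below as an assumed illustration rather than the paper's provable attack, is gradient matching: optimize a dummy input so that the gradient it induces matches the single observed gradient.

```python
import torch
import torch.nn.functional as F

# Victim model and the single gradient the attacker observes
model = torch.nn.Linear(16, 4)
x_true, y_true = torch.randn(1, 16), torch.tensor([2])
true_grads = torch.autograd.grad(
    F.cross_entropy(model(x_true), y_true), model.parameters())

# Attack: optimize a dummy sample so its gradient matches the observed one
x_dummy = torch.randn(1, 16, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)
for step in range(500):
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(
        F.cross_entropy(model(x_dummy), y_true),  # label assumed known here
        model.parameters(), create_graph=True)
    match_loss = sum(((dg - tg) ** 2).sum()
                     for dg, tg in zip(dummy_grads, true_grads))
    match_loss.backward()
    opt.step()

print("reconstruction error:", (x_dummy - x_true).norm().item())
```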
- Multi-Domain Joint Training for Person Re-Identification [51.73921349603597]
Deep learning-based person Re-IDentification (ReID) often requires a large amount of training data to achieve good performance.
Collecting more training data from diverse environments appears to improve ReID performance.
We propose an approach called Domain-Camera-Sample Dynamic network (DCSD) whose parameters can be adaptive to various factors.
arXiv Detail & Related papers (2022-01-06T09:20:59Z)
- Deep Mining External Imperfect Data for Chest X-ray Disease Screening [57.40329813850719]
We argue that incorporating an external CXR dataset leads to imperfect training data, which raises challenges such as domain and label discrepancies.
We formulate the multi-label disease classification problem as weighted independent binary tasks according to the categories.
Our framework simultaneously models and tackles the domain and label discrepancies, enabling superior knowledge mining ability.
arXiv Detail & Related papers (2020-06-06T06:48:40Z)
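A minimal sketch of the stated formulation: treat each of the C findings as an independent binary task and weight the per-task binary cross-entropy. The inverse-frequency weighting used here is an assumed example; the paper's weighting scheme may differ.

```python
import torch

def weighted_multilabel_bce(logits, targets, class_weights):
    """Multi-label disease classification as C weighted, independent
    binary tasks. `class_weights` (shape [C]) scales each task's loss."""
    bce = torch.nn.functional.binary_cross_entropy_with_logits(
        logits, targets, reduction="none")        # shape (N, C)
    return (bce * class_weights).mean()

# Toy usage: 14 findings (as in the NIH chest X-ray labels), weighted by
# inverse positive frequency -- an assumed, illustrative scheme
N, C = 8, 14
logits = torch.randn(N, C)
targets = (torch.rand(N, C) > 0.9).float()
pos_freq = targets.mean(dim=0).clamp(min=1 / N)
class_weights = 1.0 / pos_freq
loss = weighted_multilabel_bce(logits, targets, class_weights)
```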
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
To tackle the size of the created dataset, we apply a dataset distillation strategy to compress it into several informative class-wise images.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
- Unsupervised Pre-trained Models from Healthy ADLs Improve Parkinson's Disease Classification of Gait Patterns [3.5939555573102857]
We show how to extract features relevant to accelerometer gait data for Parkinson's disease classification.
Our pre-trained source model consists of a convolutional autoencoder, and the target classification model is a simple multi-layer perceptron model.
We explore two different pre-trained source models, trained using different activity groups, and analyze the influence the choice of pre-trained model has over the task of Parkinson's disease classification.
arXiv Detail & Related papers (2020-05-06T04:08:19Z)
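A compact sketch of the described pipeline under assumed shapes (3-axis accelerometer windows of length 128): pretrain a convolutional autoencoder on unlabeled activity data, then feed frozen encoder features to a simple MLP for Parkinson's disease classification. Layer sizes and window length are illustrative assumptions.

```python
import torch
import torch.nn as nn

# 1D convolutional autoencoder pretrained on unlabeled accelerometer windows
encoder = nn.Sequential(
    nn.Conv1d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
    nn.Flatten())                                  # (N, 32 * 32) for T = 128
decoder = nn.Linear(32 * 32, 3 * 128)              # toy decoder back to the window

x_unlabeled = torch.randn(64, 3, 128)              # 3-axis windows, T = 128 (assumed)
recon = decoder(encoder(x_unlabeled)).view(64, 3, 128)
pretrain_loss = nn.functional.mse_loss(recon, x_unlabeled)  # reconstruction objective

# Target task: frozen encoder features -> simple MLP, as in the summary
mlp = nn.Sequential(nn.Linear(32 * 32, 64), nn.ReLU(), nn.Linear(64, 2))
with torch.no_grad():
    feats = encoder(x_unlabeled[:16])              # stand-in for gait windows
logits = mlp(feats)                                # PD vs. healthy logits
```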
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.