Exploring Learning Complexity for Downstream Data Pruning
- URL: http://arxiv.org/abs/2402.05356v1
- Date: Thu, 8 Feb 2024 02:29:33 GMT
- Title: Exploring Learning Complexity for Downstream Data Pruning
- Authors: Wenyu Jiang, Zhenlong Liu, Zejian Xie, Songxin Zhang, Bingyi Jing,
Hongxin Wei
- Abstract summary: We propose to treat the learning complexity (LC) as the scoring function for classification and regression tasks.
For the instruction fine-tuning of large language models, our method achieves state-of-the-art performance with stable convergence.
- Score: 9.526877053855998
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over-parameterized pre-trained models pose a great challenge to
fine-tuning with limited computation resources. An intuitive solution is to
prune the less informative samples from the fine-tuning dataset. A series of
training-based scoring functions have been proposed to quantify the
informativeness of a data subset, but their pruning cost becomes
non-negligible due to the heavy parameter updates. For efficient pruning, it
is viable to adapt the similarity scoring function of geometric-based methods
from training-based to training-free. However, we empirically show that such
adaptation distorts the original pruning and results in inferior performance
on downstream tasks. In this paper, we propose to treat the learning
complexity (LC) as the scoring function for classification and regression
tasks. Specifically, the learning complexity is defined as the average
predicted confidence of subnets with different capacities, which encapsulates
data processing within a converged model. We then preserve the diverse and
easy samples for fine-tuning. Extensive experiments on vision datasets
demonstrate the effectiveness and efficiency of the proposed scoring function
for classification tasks. For the instruction fine-tuning of large language
models, our method achieves state-of-the-art performance with stable
convergence, outperforming full training with only 10% of the instruction
dataset.
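For intuition, below is a minimal, hypothetical sketch of an LC-style score in PyTorch: subnets of different capacities are approximated here by masking a growing fraction of hidden units in a converged model, a sample's score is its average true-class confidence across those subnets, and the easiest (highest-scoring) samples are kept. The `MaskedMLP`, the masking scheme, and the 70% keep ratio are illustrative assumptions, not the paper's exact construction.

```python
import torch
import torch.nn.functional as F


class MaskedMLP(torch.nn.Module):
    """Toy converged classifier whose hidden width can be scaled at inference time."""
    def __init__(self, d_in=32, d_hidden=128, n_classes=10):
        super().__init__()
        self.fc1 = torch.nn.Linear(d_in, d_hidden)
        self.fc2 = torch.nn.Linear(d_hidden, n_classes)

    def forward(self, x, keep_ratio=1.0):
        h = torch.relu(self.fc1(x))
        k = int(h.size(1) * keep_ratio)
        mask = torch.zeros_like(h)
        mask[:, :k] = 1.0                      # keep only the first k hidden units
        return self.fc2(h * mask)


@torch.no_grad()
def lc_score(model, x, y, keep_ratios=(0.25, 0.5, 0.75, 1.0)):
    """Average true-class confidence of each sample across width-reduced subnets."""
    confs = []
    for r in keep_ratios:
        probs = F.softmax(model(x, keep_ratio=r), dim=-1)
        confs.append(probs.gather(1, y.unsqueeze(1)).squeeze(1))
    return torch.stack(confs).mean(dim=0)      # higher score = easier sample


model = MaskedMLP().eval()                     # stands in for a converged model
x, y = torch.randn(256, 32), torch.randint(0, 10, (256,))
scores = lc_score(model, x, y)
keep = scores.argsort(descending=True)[: int(0.7 * len(scores))]  # prune the hardest 30%
```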
Related papers
- Data Adaptive Traceback for Vision-Language Foundation Models in Image Classification [34.37262622415682]
We propose a new adaptation framework called Data Adaptive Traceback.
Specifically, we utilize a zero-shot-based method to extract the subset of the pre-training data most related to the downstream task.
We adopt a pseudo-label-based semi-supervised technique to reuse the pre-training images and a vision-language contrastive learning method to address the confirmation bias issue in semi-supervised learning.
arXiv Detail & Related papers (2024-07-11T18:01:58Z)
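A rough sketch of the zero-shot traceback step described above, assuming CLIP-style image and class-prompt embeddings: each pre-training image is scored by its best similarity to the downstream class prompts and only the top-scoring fraction is kept. The random embeddings stand in for real encoder outputs, and the keep fraction is an arbitrary choice.

```python
import torch
import torch.nn.functional as F

def traceback_subset(image_feats, text_feats, keep_frac=0.05):
    """image_feats: (N, d) pre-training image embeddings; text_feats: (C, d)
    downstream class-prompt embeddings. Both assumed L2-normalized, as in CLIP."""
    sims = image_feats @ text_feats.t()        # (N, C) zero-shot similarities
    score = sims.max(dim=1).values             # similarity to the best-matching class
    n_keep = int(keep_frac * image_feats.size(0))
    return score.argsort(descending=True)[:n_keep]

# Toy usage with random, normalized vectors standing in for CLIP features.
img = F.normalize(torch.randn(10_000, 512), dim=1)
txt = F.normalize(torch.randn(100, 512), dim=1)
subset_idx = traceback_subset(img, txt)
```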
- Diffusion-based Neural Network Weights Generation [85.6725307453325]
We propose an efficient and adaptive transfer learning scheme through dataset-conditioned pretrained weights sampling.
Specifically, we use a latent diffusion model with a variational autoencoder that can reconstruct the neural network weights.
arXiv Detail & Related papers (2024-02-28T08:34:23Z)
- Not All Data Matters: An End-to-End Adaptive Dataset Pruning Framework for Enhancing Model Performance and Efficiency [9.460023981858319]
We propose an end-to-end Adaptive DAtaset PRUNing framework called AdaPruner.
AdaPruner iteratively prunes redundant samples to an expected pruning ratio.
It can still significantly enhance model performance even after pruning up to 10-30% of the training data.
arXiv Detail & Related papers (2023-12-09T16:01:21Z)
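As a rough illustration of iterative pruning toward a target ratio, the loop below removes a slice of the lowest-scoring samples each round and re-scores the remainder. The `score_fn` placeholder (random here) and the round count are assumptions; AdaPruner's actual informativeness criterion and model updates are not reproduced.

```python
import numpy as np

def iterative_prune(indices, score_fn, target_ratio=0.3, rounds=5):
    """Drop `target_ratio` of the samples over several rounds, re-scoring each time."""
    indices = np.asarray(indices)
    n_drop_total = int(len(indices) * target_ratio)
    for r in range(rounds):
        n_drop = n_drop_total // rounds + (1 if r < n_drop_total % rounds else 0)
        order = np.argsort(score_fn(indices))  # ascending: lowest score = most redundant
        indices = indices[order[n_drop:]]      # drop this round's most redundant slice
    return indices

# Toy usage: prune 30% of 50,000 samples with a random stand-in scoring function.
kept = iterative_prune(np.arange(50_000), score_fn=lambda idx: np.random.rand(len(idx)))
```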
- Improved Distribution Matching for Dataset Condensation [91.55972945798531]
We propose a novel dataset condensation method based on distribution matching.
Our simple yet effective method outperforms most previous optimization-oriented methods while using far fewer computational resources.
arXiv Detail & Related papers (2023-07-19T04:07:33Z)
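A generic sketch of the distribution-matching objective behind this line of work: a small synthetic set is optimized so that its feature statistics (here, the feature mean under freshly sampled random encoders) match those of the real data. This shows the basic matching idea only; the paper's specific improvements are not reproduced.

```python
import torch

def dm_step(synthetic, real, encoder, optimizer):
    """One matching step: pull the synthetic feature mean toward the real feature mean."""
    with torch.no_grad():
        target = encoder(real).mean(dim=0)
    loss = (encoder(synthetic).mean(dim=0) - target).pow(2).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: distill 1,000 real feature vectors into 10 synthetic ones.
real = torch.randn(1000, 64)
synthetic = torch.randn(10, 64, requires_grad=True)
opt = torch.optim.SGD([synthetic], lr=0.1)
for _ in range(100):
    encoder = torch.nn.Sequential(torch.nn.Linear(64, 128), torch.nn.ReLU(),
                                  torch.nn.Linear(128, 32))  # re-sampled random encoder
    dm_step(synthetic, real, encoder, opt)
```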
- Robust Learning with Progressive Data Expansion Against Spurious Correlation [65.83104529677234]
We study the learning process of a two-layer nonlinear convolutional neural network in the presence of spurious features.
Our analysis suggests that imbalanced data groups and easily learnable spurious features can lead to the dominance of spurious features during the learning process.
We propose a new training algorithm called PDE that efficiently enhances the model's robustness for a better worst-group performance.
arXiv Detail & Related papers (2023-06-08T05:44:06Z)
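One way to read the progressive-expansion idea as code: warm up on a small, group-balanced subset so that easily learnable spurious features cannot dominate early training, then gradually add the remaining data. The subset sizes and schedule below are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def expansion_schedule(group_ids, warmup_per_group=50, expand_per_stage=500, seed=0):
    """Yield the training-index subset to use at each stage of training."""
    rng = np.random.default_rng(seed)
    warm = np.concatenate([rng.choice(np.where(group_ids == g)[0], warmup_per_group,
                                      replace=False) for g in np.unique(group_ids)])
    rest = rng.permutation(np.setdiff1d(np.arange(len(group_ids)), warm))
    subset = warm
    yield subset                               # group-balanced warm-up phase
    for start in range(0, len(rest), expand_per_stage):
        subset = np.concatenate([subset, rest[start:start + expand_per_stage]])
        yield subset                           # progressively expanded phases
```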
- Towards Robust Dataset Learning [90.2590325441068]
We propose a principled, tri-level optimization to formulate the robust dataset learning problem.
Under an abstraction model that characterizes robust vs. non-robust features, the proposed method provably learns a robust dataset.
arXiv Detail & Related papers (2022-11-19T17:06:10Z)
- Benchmarking Learning Efficiency in Deep Reservoir Computing [23.753943709362794]
We introduce a benchmark of increasingly difficult tasks together with a data efficiency metric to measure how quickly machine learning models learn from training data.
We compare the learning speed of some established sequential supervised models, such as RNNs, LSTMs, or Transformers, with relatively less known alternative models based on reservoir computing.
arXiv Detail & Related papers (2022-09-29T08:16:52Z)
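One plausible form of such a data-efficiency metric, shown for concreteness: the number of training samples a model consumes before its validation accuracy first reaches a fixed threshold. The threshold and curve format are assumptions, not the benchmark's exact protocol.

```python
def samples_to_threshold(accuracy_curve, samples_seen, threshold=0.9):
    """accuracy_curve[i] is the validation accuracy after seeing samples_seen[i] samples."""
    for acc, n in zip(accuracy_curve, samples_seen):
        if acc >= threshold:
            return n                           # fewer samples = more data-efficient
    return float("inf")                        # threshold never reached

# Model A reaches 90% accuracy after 3,000 samples; model B never does.
print(samples_to_threshold([0.50, 0.80, 0.92], [1_000, 2_000, 3_000]))   # 3000
print(samples_to_threshold([0.40, 0.60, 0.70], [1_000, 2_000, 3_000]))   # inf
```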
- Dataset Pruning: Reducing Training Data by Examining Generalization Influence [30.30255670341501]
Do all training data contribute to model's performance?
How to construct a smallest subset from the entire training data as a proxy training set without significantly sacrificing the model's performance?
arXiv Detail & Related papers (2022-05-19T05:36:35Z)
- CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep Learning [55.733193075728096]
Modern deep neural networks can easily overfit to biased training data containing corrupted labels or class imbalance.
Sample re-weighting methods are popularly used to alleviate this data bias issue.
We propose a meta-model capable of adaptively learning an explicit weighting scheme directly from data.
arXiv Detail & Related papers (2022-02-11T13:49:51Z)
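A stripped-down sketch of loss-driven sample re-weighting in this spirit: a small weighting network maps each sample's loss to a weight used in the training objective (CMW-Net additionally conditions on class information). The bi-level meta-update of the weighting network on a clean meta set is omitted for brevity, so this illustrates the weighting mechanism only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightNet(nn.Module):
    """Maps a per-sample loss value to a weight in (0, 1)."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, losses):
        return torch.sigmoid(self.net(losses.unsqueeze(1))).squeeze(1)

def weighted_step(model, weight_net, optimizer, x, y):
    """One training step with adaptively re-weighted per-sample losses."""
    per_sample = F.cross_entropy(model(x), y, reduction="none")
    weights = weight_net(per_sample.detach())  # down-weight suspiciously hard samples
    loss = (weights * per_sample).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```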
- GLISTER: Generalization based Data Subset Selection for Efficient and Robust Learning [11.220278271829699]
We introduce Glister, a GeneraLIzation based data Subset selecTion for Efficient and Robust learning framework.
We propose an iterative online algorithm Glister-Online, which performs data selection iteratively along with the parameter updates.
We show that our framework improves upon the state of the art in both efficiency and accuracy (in cases (a) and (c)) and is more efficient than other state-of-the-art robust learning algorithms.
arXiv Detail & Related papers (2020-12-19T08:41:34Z)
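A rough sketch of online subset selection interleaved with training, in the spirit of Glister-Online: every few epochs the training subset is re-chosen so that per-sample gradients align with a held-out validation gradient, which is a first-order simplification of the paper's generalization-based objective. The helper callables (`grads_fn`, `val_grad_fn`, `loader_fn`) are placeholders for the user's own pipeline.

```python
import torch
import torch.nn.functional as F

def select_subset(per_sample_grads, val_grad, budget):
    """Pick `budget` samples whose gradients align best with the validation gradient."""
    scores = per_sample_grads @ val_grad       # (N,) alignment scores
    return torch.topk(scores, budget).indices

def train_with_selection(model, optimizer, loader_fn, grads_fn, val_grad_fn,
                         budget, epochs, select_every=5):
    subset = None
    for epoch in range(epochs):
        if epoch % select_every == 0:          # re-select alongside parameter updates
            subset = select_subset(grads_fn(model), val_grad_fn(model), budget)
        for x, y in loader_fn(subset):         # train only on the selected subset
            loss = F.cross_entropy(model(x), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```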
- Dynamic Scale Training for Object Detection [111.33112051962514]
We propose a Dynamic Scale Training paradigm (abbreviated as DST) to mitigate the scale variation challenge in object detection.
Experimental results demonstrate the efficacy of our proposed DST towards scale variation handling.
It does not introduce inference overhead and could serve as a free lunch for general detection configurations.
arXiv Detail & Related papers (2020-04-26T16:48:17Z)
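A simplified multi-scale training step for context: each batch is resized to a freshly drawn scale before the forward pass, so the detector sees varied object scales during training while inference is untouched. DST selects scales adaptively from training feedback; the uniform random draw below is a simplification of that.

```python
import random
import torch
import torch.nn.functional as F

def rescaled_batch(images, scales=(0.5, 0.75, 1.0, 1.25, 1.5)):
    """Resize a batch of images (N, C, H, W) to a randomly drawn training scale."""
    s = random.choice(scales)
    h, w = images.shape[-2:]
    return F.interpolate(images, size=(int(h * s), int(w * s)),
                         mode="bilinear", align_corners=False)

batch = torch.randn(8, 3, 512, 512)
print(rescaled_batch(batch).shape)             # e.g. torch.Size([8, 3, 384, 384])
```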
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.