Training Data Subset Selection for Regression with Controlled
Generalization Error
- URL: http://arxiv.org/abs/2106.12491v1
- Date: Wed, 23 Jun 2021 16:03:55 GMT
- Title: Training Data Subset Selection for Regression with Controlled
Generalization Error
- Authors: Durga Sivasubramanian, Rishabh Iyer, Ganesh Ramakrishnan, Abir De
- Abstract summary: We develop an efficient majorization-minimization algorithm for data subset selection.
SELCON trades off accuracy and efficiency more effectively than the current state-of-the-art.
- Score: 19.21682938684508
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data subset selection from a large number of training instances has been a
successful approach toward efficient and cost-effective machine learning.
However, models trained on a smaller subset may show poor generalization
ability. In this paper, our goal is to design an algorithm for selecting a
subset of the training data, so that the model can be trained quickly, without
significantly sacrificing accuracy. More specifically, we focus on data
subset selection for L2 regularized regression problems and provide a novel
problem formulation which seeks to minimize the training loss with respect to
both the trainable parameters and the subset of training data, subject to error
bounds on the validation set. We tackle this problem using several technical
innovations. First, we represent this problem with simplified constraints using
the dual of the original training problem and show that the objective of this
new representation is a monotone and alpha-submodular function, for a wide
variety of modeling choices. Such properties lead us to develop SELCON, an
efficient majorization-minimization algorithm for data subset selection, that
admits an approximation guarantee even when the training provides an imperfect
estimate of the trained model. Finally, our experiments on several datasets
show that SELCON trades off accuracy and efficiency more effectively than the
current state-of-the-art.
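To make the formulation above concrete, below is a minimal, hypothetical sketch of subset selection for L2-regularized (ridge) regression with a validation-error check. It is not the SELCON algorithm (which works with the dual of the training problem, exploits monotonicity and alpha-submodularity, and uses majorization-minimization); the greedy rule, function names, and thresholds are illustrative assumptions meant only to show the objective-plus-validation-constraint structure the abstract describes.
```python
# Minimal, hypothetical sketch (NOT the SELCON algorithm): greedy subset
# selection for L2-regularized (ridge) regression, adding training points
# only while tracking a bound on the validation loss. The greedy rule,
# names, and thresholds are illustrative assumptions, not the paper's method.
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def val_mse(w, X_val, y_val):
    """Mean squared error on the validation set."""
    return float(np.mean((X_val @ w - y_val) ** 2))

def greedy_subset(X, y, X_val, y_val, lam=1.0, budget=50, max_val_mse=0.05):
    """Greedily add the training point that most reduces validation MSE,
    stopping once the budget is reached or the validation bound is met."""
    selected, remaining = [], list(range(len(X)))
    while remaining and len(selected) < budget:
        best_i, best_mse = None, np.inf
        for i in remaining:
            idx = selected + [i]
            w = ridge_fit(X[idx], y[idx], lam)
            mse = val_mse(w, X_val, y_val)
            if mse < best_mse:
                best_i, best_mse = i, mse
        selected.append(best_i)
        remaining.remove(best_i)
        if best_mse <= max_val_mse:
            break  # validation-error constraint satisfied; stop early
    return selected

# Usage on synthetic data (all values illustrative)
rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
X = rng.normal(size=(500, 5)); y = X @ w_true + 0.1 * rng.normal(size=500)
X_val = rng.normal(size=(100, 5)); y_val = X_val @ w_true + 0.1 * rng.normal(size=100)
print("selected:", len(greedy_subset(X, y, X_val, y_val, lam=0.1, budget=20)))
```
This naive greedy loop costs one model fit per candidate per step, which is exactly the inefficiency SELCON avoids: by moving to the dual of the training problem, the paper obtains a monotone, alpha-submodular objective and an efficient majorization-minimization procedure with an approximation guarantee.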
Related papers
- Attribute-to-Delete: Machine Unlearning via Datamodel Matching [65.13151619119782]
Machine unlearning -- efficiently removing the influence of a small "forget set" of training data from a pre-trained machine learning model -- has recently attracted interest.
Recent research shows that machine unlearning techniques do not hold up in such a challenging setting.
arXiv Detail & Related papers (2024-10-30T17:20:10Z) - Efficient Grammatical Error Correction Via Multi-Task Training and
Optimized Training Schedule [55.08778142798106]
We propose auxiliary tasks that exploit the alignment between the original and corrected sentences.
We formulate each task as a sequence-to-sequence problem and perform multi-task training.
We find that the order of datasets used for training and even individual instances within a dataset may have important effects on the final performance.
arXiv Detail & Related papers (2023-11-20T14:50:12Z) - How Many Pretraining Tasks Are Needed for In-Context Learning of Linear Regression? [92.90857135952231]
Transformers pretrained on diverse tasks exhibit remarkable in-context learning (ICL) capabilities.
We study ICL in one of its simplest setups: pretraining a linearly parameterized single-layer linear attention model for linear regression.
arXiv Detail & Related papers (2023-10-12T15:01:43Z) - AdaMerging: Adaptive Model Merging for Multi-Task Learning [68.75885518081357]
This paper introduces an innovative technique called Adaptive Model Merging (AdaMerging)
It aims to autonomously learn the coefficients for model merging, either in a task-wise or layer-wise manner, without relying on the original training data.
Compared to the current state-of-the-art task arithmetic merging scheme, AdaMerging showcases a remarkable 11% improvement in performance.
arXiv Detail & Related papers (2023-10-04T04:26:33Z) - Towards Accelerated Model Training via Bayesian Data Selection [45.62338106716745]
Recent work has proposed a more reasonable data selection principle by examining the data's impact on the model's generalization loss, but its practical adoption has remained challenging.
This work addresses these issues by leveraging a lightweight Bayesian treatment and incorporating off-the-shelf zero-shot predictors built on large-scale pre-trained models.
arXiv Detail & Related papers (2023-08-21T07:58:15Z) - Alleviating the Effect of Data Imbalance on Adversarial Training [26.36714114672729]
We study adversarial training on datasets that obey the long-tailed distribution.
We propose a new adversarial training framework -- Re-balancing Adversarial Training (REAT)
arXiv Detail & Related papers (2023-07-14T07:01:48Z) - MILO: Model-Agnostic Subset Selection Framework for Efficient Model
Training and Tuning [68.12870241637636]
We propose MILO, a model-agnostic subset selection framework that decouples the subset selection from model training.
Our empirical results indicate that MILO can train models $3\times$-$10\times$ faster and tune hyperparameters $20\times$-$75\times$ faster than full-dataset training or tuning, without compromising performance.
arXiv Detail & Related papers (2023-01-30T20:59:30Z) - Towards Robust Dataset Learning [90.2590325441068]
We propose a principled, tri-level optimization to formulate the robust dataset learning problem.
Under an abstraction model that characterizes robust vs. non-robust features, the proposed method provably learns a robust dataset.
arXiv Detail & Related papers (2022-11-19T17:06:10Z) - Finding High-Value Training Data Subset through Differentiable Convex
Programming [5.5180456567480896]
In this paper, we study the problem of selecting high-value subsets of training data.
The key idea is to design a learnable framework for online subset selection.
Using this framework, we design an online alternating minimization-based algorithm for jointly learning the parameters of the selection model and ML model.
arXiv Detail & Related papers (2021-04-28T14:33:26Z) - GLISTER: Generalization based Data Subset Selection for Efficient and
Robust Learning [11.220278271829699]
We introduce Glister, a GeneraLIzation based data Subset selecTion for Efficient and Robust learning framework.
We propose an iterative online algorithm Glister-Online, which performs data selection iteratively along with the parameter updates.
We show that our framework improves upon state of the art both in efficiency and accuracy (in cases (a) and (c)) and is more efficient compared to other state-of-the-art robust learning algorithms.
arXiv Detail & Related papers (2020-12-19T08:41:34Z) - Rank-Based Multi-task Learning for Fair Regression [9.95899391250129]
We develop a novel learning approach for multi-task regression models based on a biased dataset.
We use a popular non-parametric, oracle-based approach.
arXiv Detail & Related papers (2020-09-23T22:32:57Z)