Learning to Refit for Convex Learning Problems
- URL: http://arxiv.org/abs/2111.12545v1
- Date: Wed, 24 Nov 2021 15:28:50 GMT
- Title: Learning to Refit for Convex Learning Problems
- Authors: Yingyan Zeng, Tianhao Wang, Si Chen, Hoang Anh Just, Ran Jin, Ruoxi
Jia
- Abstract summary: We propose a framework to learn to estimate optimized model parameters for different training sets using neural networks.
We rigorously characterize the expressive power of neural networks to approximate the optimizers of convex problems.
- Score: 11.464758257681197
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning (ML) models need to be frequently retrained on changing
datasets in a wide variety of application scenarios, including data valuation
and uncertainty quantification. To efficiently retrain the model, linear
approximation methods such as the influence function have been proposed to estimate
the impact of data changes on model parameters. However, these methods become
inaccurate for large dataset changes. In this work, we focus on convex learning
problems and propose a general framework to learn to estimate optimized model
parameters for different training sets using neural networks. We propose
regularization techniques that constrain the predicted model parameters to obey
optimality conditions and maintain utility, which significantly improves
generalization. Moreover, we rigorously characterize the expressive power of
neural networks to approximate the optimizer of convex problems. Empirical
results demonstrate the advantage of the proposed method in accurate and
efficient model parameter estimation compared to the state-of-the-art.
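The listing does not include the authors' code; below is a minimal, hypothetical PyTorch sketch of the core idea as the abstract describes it: a network maps a training set to the parameters minimizing a convex objective on that set, with the first-order optimality condition enforced as a regularizer. The choice of ridge regression as the convex problem, and all names (RefitNet, ridge_loss) and hyperparameters, are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of "learning to refit": a network predicts the minimizer
# of a convex objective (here ridge regression, an illustrative choice) for
# each training set, and a regularizer penalizes violation of the first-order
# optimality condition at the predicted parameters.
import torch
import torch.nn as nn

N, D, LAM = 32, 5, 0.1  # set size, feature dim, ridge penalty (assumed values)

def ridge_loss(theta, X, y):
    # Strongly convex objective: ||X @ theta - y||^2 / N + LAM * ||theta||^2
    return ((X @ theta - y) ** 2).mean() + LAM * (theta ** 2).sum()

class RefitNet(nn.Module):
    """Maps a flattened training set (X, y) to predicted optimal parameters."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N * (D + 1), 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, D),
        )

    def forward(self, X, y):
        return self.net(torch.cat([X.flatten(), y]))

model = RefitNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    # Draw a fresh training set; in data valuation these would instead be
    # subsets or perturbations of one fixed dataset.
    X = torch.randn(N, D)
    y = X @ torch.randn(D) + 0.1 * torch.randn(N)

    theta_hat = model(X, y)
    utility = ridge_loss(theta_hat, X, y)  # utility term
    # Optimality-condition regularizer: the objective's gradient w.r.t. the
    # parameters vanishes at the true minimizer, so penalize its norm.
    grad = torch.autograd.grad(utility, theta_hat, create_graph=True)[0]
    loss = utility + grad.pow(2).sum()

    opt.zero_grad()
    loss.backward()
    opt.step()
```

Since ridge regression has the closed-form minimizer theta* = (X^T X / N + LAM I)^{-1} X^T y / N, the predicted parameters can be checked directly against ground truth, which is one way to probe the kind of approximation power the abstract refers to.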
Related papers
- Efficient Frequency Selective Surface Analysis via End-to-End Model-Based Learning [2.66269503676104]
This paper introduces an innovative end-to-end model-based deep learning approach for efficient electromagnetic analysis of high-dimensional frequency selective surfaces (FSS).
Unlike traditional data-driven methods that require large datasets, this approach combines physical insights from equivalent circuit models with deep learning techniques to significantly reduce model complexity and enhance prediction accuracy.
arXiv Detail & Related papers (2024-10-22T07:27:20Z)
- Influence Functions for Scalable Data Attribution in Diffusion Models [52.92223039302037]
Diffusion models have led to significant advancements in generative modelling.
Yet their widespread adoption poses challenges regarding data attribution and interpretability.
In this paper, we aim to help address such challenges by developing an influence functions framework.
arXiv Detail & Related papers (2024-10-17T17:59:02Z)
- Training Survival Models using Scoring Rules [9.330089124239086]
Survival Analysis provides critical insights for incomplete time-to-event data.
It is also an important example of probabilistic machine learning.
We establish parametric and non-parametric sub-frameworks that allow varying degrees of flexibility.
We show that our framework can recover various parametric models and that its optimization performs on par with likelihood-based methods.
arXiv Detail & Related papers (2024-03-19T20:58:38Z)
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
- A PAC-Bayesian Perspective on the Interpolating Information Criterion [54.548058449535155]
We show how a PAC-Bayes bound is obtained for a general class of models, characterizing factors which influence performance in the interpolating regime.
We quantify how the test error for overparameterized models achieving effectively zero training error depends on the quality of the implicit regularization imposed by, e.g., the combination of model and parameter-initialization scheme.
arXiv Detail & Related papers (2023-11-13T01:48:08Z)
- Neural Networks for Parameter Estimation in Intractable Models [0.0]
We show how to estimate parameters from max-stable processes, where inference is exceptionally challenging.
We use data from model simulations as input and train deep neural networks to learn statistical parameters.
arXiv Detail & Related papers (2021-07-29T21:59:48Z)
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline yields predictions of the unknown parameters that lead to higher decision quality.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered by a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
- A Semiparametric Approach to Interpretable Machine Learning [9.87381939016363]
Black box models in machine learning have demonstrated excellent predictive performance in complex problems and high-dimensional settings.
Their lack of transparency and interpretability restricts the applicability of such models in critical decision-making processes.
We propose a novel approach to trading off interpretability and performance in prediction models using ideas from semiparametric statistics.
arXiv Detail & Related papers (2020-06-08T16:38:15Z)
- Efficient Ensemble Model Generation for Uncertainty Estimation with Bayesian Approximation in Segmentation [74.06904875527556]
We propose a generic and efficient segmentation framework to construct ensemble segmentation models.
In the proposed method, ensemble models can be efficiently generated using a layer selection method.
We also devise a new pixel-wise uncertainty loss, which improves the predictive performance.
arXiv Detail & Related papers (2020-05-21T16:08:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.