Robust priors for regularized regression
- URL: http://arxiv.org/abs/2010.02610v3
- Date: Mon, 25 Oct 2021 13:05:54 GMT
- Title: Robust priors for regularized regression
- Authors: Sebastian Bobadilla-Suarez, Matt Jones, Bradley C. Love
- Abstract summary: Penalized regression approaches like ridge regression shrink weights toward zero, but a prior of zero weights is usually not sensible.
Inspired by simple and robust decisions humans use, we constructed non-zero priors for penalized regression models.
Models with robust priors had excellent worst-case performance.
- Score: 12.945710636153537
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Induction benefits from useful priors. Penalized regression approaches, like
ridge regression, shrink weights toward zero, but zero association is usually
not a sensible prior. Inspired by simple and robust decision heuristics humans
use, we constructed non-zero priors for penalized regression models that
provide robust and interpretable solutions across several tasks. Our approach
enables estimates from a constrained model to serve as a prior for a more
general model, yielding a principled way to interpolate between models of
differing complexity. We successfully applied this approach to a number of
decision and classification problems, as well as analyzing simulated brain
imaging data. Models with robust priors had excellent worst-case performance.
Solutions followed from the form of the heuristic that was used to derive the
prior. These new algorithms can serve applications in data analysis and machine
learning, as well as help in understanding how people transition from novice to
expert performance.
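The interpolation described in the abstract has a simple closed form for ridge regression: penalize distance to a non-zero target w0 estimated by a constrained model, rather than distance to zero. Below is a minimal sketch of that mechanism, assuming an equal-weights (tallying-style) heuristic as the constrained model; the function names and toy data are illustrative, not the authors' code.

```python
import numpy as np

def equal_weights_prior(X, y):
    """Constrained 'heuristic' model: every feature shares one weight.
    Fit the single scale c that best maps the feature sum onto y."""
    s = X.sum(axis=1, keepdims=True)               # tally of all cues
    c = np.linalg.lstsq(s, y, rcond=None)[0].item()
    return np.full(X.shape[1], c)                  # prior target w0

def ridge_toward_prior(X, y, w0, lam):
    """Ridge regression shrinking toward w0 instead of zero:
        argmin_w ||y - Xw||^2 + lam * ||w - w0||^2
    with closed form (X'X + lam*I)^{-1} (X'y + lam*w0)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * w0)

# Toy demo: lam interpolates between OLS (lam=0) and the heuristic (lam->inf).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ np.array([1.2, 0.9, 1.1, 1.0, 0.8]) + rng.normal(scale=0.5, size=50)
w0 = equal_weights_prior(X, y)
for lam in (0.0, 10.0, 1e6):
    print(lam, ridge_toward_prior(X, y, w0, lam).round(2))
```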
Related papers
- Comparative study of regression vs pairwise models for surrogate-based heuristic optimisation [1.2535250082638645]
This paper addresses the formulation of surrogate problems both as regression models that approximate fitness (surface surrogate models) and, in a novel way, as classification models over candidate pairs (pairwise surrogate models).
The performance of the overall search, when using online machine learning-based surrogate models, depends not only on the accuracy of the predictive model but also on the kind of bias towards positive or negative cases.
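As a rough illustration of the pairwise idea (not the paper's exact construction), fitness regression can be recast as classifying which of two candidates is better; the difference encoding, logistic model, and toy objective below are assumptions of this sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
fitness = lambda x: x @ np.array([1.0, 2.0, 3.0, 4.0])  # toy objective (lower is better)
X = rng.uniform(size=(200, 4))
y = fitness(X)

# Build pairwise training data: label 1 if the first candidate is fitter.
i, j = rng.integers(0, len(X), size=(2, 2000))
keep = i != j
pairs = X[i[keep]] - X[j[keep]]                  # encode a pair by its difference
labels = (y[i[keep]] < y[j[keep]]).astype(int)

surrogate = LogisticRegression(max_iter=1000).fit(pairs, labels)

# The surrogate now ranks unseen candidates by pairwise votes.
cand = rng.uniform(size=(10, 4))
wins = np.zeros(len(cand))
for a in range(len(cand)):
    for b in range(len(cand)):
        if a != b:
            wins[a] += surrogate.predict((cand[a] - cand[b]).reshape(1, -1))[0]
print("predicted best candidate:", cand[wins.argmax()].round(2))
```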
arXiv Detail & Related papers (2024-10-04T13:19:06Z)
- Robust Capped $\ell_p$-Norm Support Vector Ordinal Regression [85.84718111830752]
Ordinal regression is a specialized supervised problem where the labels show an inherent order.
Support Vector Ordinal Regression, as an outstanding ordinal regression model, is widely used in many ordinal regression tasks.
We introduce a new model, Capped $\ell_p$-Norm Support Vector Ordinal Regression (CSVOR), that is robust to outliers.
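The capping mechanism itself is easy to state; the full CSVOR model adds ordinal thresholds, which this sketch omits.

```python
import numpy as np

def capped_lp_loss(residuals, p=1.0, cap=1.0):
    """Capped l_p loss: grows like |r|^p for small residuals but is
    clipped at `cap`, so one extreme outlier contributes at most `cap`
    to the objective instead of dominating it."""
    return np.minimum(np.abs(residuals) ** p, cap)

r = np.array([0.1, 0.5, 2.0, 50.0])        # last residual is an outlier
print(capped_lp_loss(r))                   # -> [0.1 0.5 1.  1. ]
print(r ** 2)                              # squared loss: outlier dominates
```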
arXiv Detail & Related papers (2024-04-25T13:56:05Z)
- Towards Stable Machine Learning Model Retraining via Slowly Varying Sequences [6.067007470552307]
We propose a methodology for finding sequences of machine learning models that are stable across retraining iterations.
We develop a mixed-integer optimization formulation that is guaranteed to recover optimal models.
Our method shows stronger stability than greedily trained models with a small, controllable sacrifice in predictive power.
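A toy version of the stability-versus-accuracy trade-off is sketched below; the paper's actual method is a mixed-integer program with optimality guarantees, so the candidate set and ridge refit here are stand-ins.

```python
import numpy as np

def stable_retrain(X, y, w_prev, lams=(0.0, 0.1, 1.0, 10.0), slack=1.05):
    """Among candidate refits, return the model closest to the previous
    weights whose loss stays within `slack` of the best achievable loss."""
    d = X.shape[1]
    cands = [np.linalg.solve(X.T @ X + lam * np.eye(d),
                             X.T @ y + lam * w_prev) for lam in lams]
    losses = [np.mean((y - X @ w) ** 2) for w in cands]
    ok = [w for w, l in zip(cands, losses) if l <= slack * min(losses)]
    return min(ok, key=lambda w: np.linalg.norm(w - w_prev))

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(size=100)
w_prev = np.array([0.9, 2.1, 2.8])          # weights currently deployed
print(stable_retrain(X, y, w_prev).round(2))
```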
arXiv Detail & Related papers (2024-03-28T22:45:38Z)
- Deep Generative Symbolic Regression [83.04219479605801]
Symbolic regression aims to discover concise closed-form mathematical equations from data.
Existing methods, ranging from search to reinforcement learning, fail to scale with the number of input variables.
We propose an instantiation of our framework, Deep Generative Symbolic Regression.
arXiv Detail & Related papers (2023-12-30T17:05:31Z)
- ZeroShape: Regression-based Zero-shot Shape Reconstruction [56.652766763775226]
We study the problem of single-image zero-shot 3D shape reconstruction.
Recent works learn zero-shot shape reconstruction through generative modeling of 3D assets.
We show that ZeroShape achieves superior performance over state-of-the-art methods.
arXiv Detail & Related papers (2023-12-21T01:56:34Z)
- A Bayesian Robust Regression Method for Corrupted Data Reconstruction [5.298637115178182]
We develop an effective robust regression method that can resist adaptive adversarial attacks.
First, we propose the novel TRIP (hard Thresholding approach to Robust regression with sImple Prior) algorithm.
We then use the idea of Bayesian reweighting to construct the more robust BRHT (robust Bayesian Reweighting regression via Hard Thresholding) algorithm.
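A generic hard-thresholding loop in this spirit (not TRIP itself, which additionally exploits a simple prior on the weights) alternates between fitting on the points currently deemed clean and re-flagging the largest residuals as corruptions.

```python
import numpy as np

def hard_threshold_regression(X, y, n_outliers, n_iter=20):
    """Alternate (1) least squares on the presumed-clean subset and
    (2) flagging the `n_outliers` largest residuals as corrupt."""
    clean = np.arange(len(y))                 # start by trusting every point
    for _ in range(n_iter):
        w, *_ = np.linalg.lstsq(X[clean], y[clean], rcond=None)
        resid = np.abs(y - X @ w)
        clean = np.argsort(resid)[: len(y) - n_outliers]
    return w

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))
y = X @ np.array([2.0, -1.0, 0.5, 3.0]) + 0.1 * rng.normal(size=200)
y[:20] += 50.0                                # corrupt 10% of the responses
print(hard_threshold_regression(X, y, n_outliers=20).round(2))
```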
arXiv Detail & Related papers (2022-12-24T17:25:53Z)
- Mutual Information Learned Regressor: an Information-theoretic Viewpoint of Training Regression Systems [10.314518385506007]
A common practice for solving regression problems is the mean squared error (MSE) minimization approach.
Recently, Yi et al. proposed a mutual information based supervised learning framework where they introduced a label entropy regularization.
In this paper, we investigate the regression under the mutual information based supervised learning framework.
arXiv Detail & Related papers (2022-11-23T03:43:22Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
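The generic iterative (MICE-style) loop underlying such frameworks is sketched below; HyperImpute's contribution is searching over learners per column, whereas this sketch assumes a plain linear model everywhere.

```python
import numpy as np

def iterative_impute(X, n_rounds=5):
    """Repeatedly regress each incomplete column on all the others and
    overwrite its missing entries with the predictions."""
    X = X.copy()
    miss = np.isnan(X)
    X[miss] = np.take(np.nanmean(X, axis=0), np.where(miss)[1])  # mean start
    for _ in range(n_rounds):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            obs = ~miss[:, j]
            A = np.c_[np.delete(X, j, axis=1), np.ones(len(X))]  # + intercept
            w, *_ = np.linalg.lstsq(A[obs], X[obs, j], rcond=None)
            X[miss[:, j], j] = A[miss[:, j]] @ w
    return X

rng = np.random.default_rng(4)
Z = rng.normal(size=(100, 3))
Z[:, 2] = Z[:, 0] + Z[:, 1]                    # third column is predictable
Z[rng.random(Z.shape) < 0.1] = np.nan          # 10% missing at random
print(iterative_impute(Z)[:3].round(2))
```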
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- Regression Bugs Are In Your Model! Measuring, Reducing and Analyzing Regressions In NLP Model Updates [68.09049111171862]
This work focuses on quantifying, reducing, and analyzing regression errors in NLP model updates.
We formulate the regression-free model updates into a constrained optimization problem.
We empirically analyze how model ensemble reduces regression.
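One common relaxation of such a constrained objective is a Lagrangian-style penalty; the weighting scheme below is a sketch of that idea, not the paper's exact formulation.

```python
import numpy as np

def update_loss(logits_new, labels, correct_old, lam=2.0):
    """Cross-entropy with extra weight `lam` on examples the OLD model
    already classified correctly, discouraging the update from flipping
    them (a soft version of a regression-free constraint)."""
    p = 1.0 / (1.0 + np.exp(-logits_new))            # sigmoid
    ce = -(labels * np.log(p) + (1 - labels) * np.log(1 - p))
    return np.mean((1.0 + lam * correct_old) * ce)

labels = np.array([1, 0, 1, 1])
correct_old = np.array([1, 1, 0, 1])   # old model's per-example correctness
print(update_loss(np.array([2.0, -1.5, 0.3, -0.2]), labels, correct_old))
```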
arXiv Detail & Related papers (2021-05-07T03:33:00Z)
- Personalizing Performance Regression Models to Black-Box Optimization Problems [0.755972004983746]
In this work, we propose a personalized regression approach for numerical optimization problems.
We also investigate the impact of selecting not a single regression model per problem, but personalized ensembles.
We test our approach on predicting the performance of numerical optimizations on the BBOB benchmark collection.
arXiv Detail & Related papers (2021-04-22T11:47:47Z)
- Continual Learning using a Bayesian Nonparametric Dictionary of Weight Factors [75.58555462743585]
Naively trained neural networks tend to experience catastrophic forgetting in sequential task settings.
We propose a principled nonparametric approach based on the Indian Buffet Process (IBP) prior, letting the data determine how much to expand the model complexity.
We demonstrate the effectiveness of our method on a number of continual learning benchmarks and analyze how weight factors are allocated and reused throughout the training.
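For intuition, sampling the IBP prior itself shows how the number of active features grows with the data rather than being fixed in advance; this sketch samples only the prior, whereas the paper places it over weight factors inside a network.

```python
import numpy as np

def sample_ibp(n_customers, alpha, rng=None):
    """Draw a binary feature-allocation matrix from the Indian Buffet
    Process: customer i takes existing dish k with probability m_k / i
    (m_k = times dish k was taken so far), then samples Poisson(alpha/i)
    brand-new dishes."""
    rng = rng or np.random.default_rng()
    counts, rows = [], []
    for i in range(1, n_customers + 1):
        row = [int(rng.random() < m / i) for m in counts]
        for k, took in enumerate(row):
            counts[k] += took
        new = rng.poisson(alpha / i)
        row += [1] * new
        counts += [1] * new
        rows.append(row)
    Z = np.zeros((n_customers, len(counts)), dtype=int)
    for i, row in enumerate(rows):
        Z[i, : len(row)] = row
    return Z

print(sample_ibp(8, alpha=2.0, rng=np.random.default_rng(5)))
```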
arXiv Detail & Related papers (2020-04-21T15:20:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.