A Survey Of Regression Algorithms And Connections With Deep Learning
- URL: http://arxiv.org/abs/2104.12647v1
- Date: Mon, 26 Apr 2021 15:18:00 GMT
- Title: A Survey Of Regression Algorithms And Connections With Deep Learning
- Authors: Yunpeng Tai
- Abstract summary: Regression has attracted immense interest lately due to its effectiveness in tasks like predicting values.
This paper characterizes a broad and thoughtful selection of recent Regression algorithms.
The relationship between Regression and Deep Learning is also discussed, concluding that Deep Learning can become more powerful in combination with Regression models in the future.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Regression has attracted immense interest lately due to its effectiveness in tasks such as value prediction, and it is widely used in fields including Economics, Finance, Business, and Biology. While considerable studies have proposed impressive models, few have provided a whole picture of how and to what extent Regression has developed. With the aim of helping beginners understand the relationships among different Regression algorithms, this paper characterizes a broad and thoughtful selection of recent Regression algorithms, providing an organized and comprehensive overview of existing work and frequently used models. The relationship between Regression and Deep Learning is also discussed, and a conclusion can be drawn that Deep Learning can become more powerful in combination with Regression models in the future.
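As a minimal illustration of the basic Regression task the survey covers, the following sketch fits an ordinary least squares model in closed form; the synthetic data and coefficients are assumptions for demonstration only.

```python
import numpy as np

# Ordinary least squares: find w minimizing ||Xw - y||^2.
# np.linalg.lstsq solves this directly via the normal equations.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
true_w = np.array([2.0, -1.0])
y = X @ true_w + 0.01 * rng.normal(size=100)  # targets with small noise

w = np.linalg.lstsq(X, y, rcond=None)[0]
print(w)  # estimated coefficients, close to [2.0, -1.0]
```

With enough samples relative to the noise level, the recovered coefficients approach the generating ones; more elaborate regressors in the survey refine this basic template.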
Related papers
- Resampling strategies for imbalanced regression: a survey and empirical analysis [5.863538874435322]
Imbalanced problems can arise in different real-world situations, and certain strategies in the form of resampling or balancing algorithms have been proposed to address them. This work presents an experimental study comprising various balancing and predictive models, and uses metrics that capture elements important to the user. It also proposes a taxonomy for imbalanced regression approaches based on three crucial criteria: regression model, learning process, and evaluation metrics.
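A naive sketch of one such resampling strategy for imbalanced regression: duplicate training examples whose targets fall in a rare region of the target distribution. The quantile-based rarity rule and the `oversample_rare` helper are stand-ins for the relevance functions used in the literature, not the paper's method.

```python
import numpy as np

def oversample_rare(X, y, quantile=0.9, factor=3):
    """Duplicate examples whose target is at or above the given quantile."""
    threshold = np.quantile(y, quantile)
    rare = y >= threshold
    # Each rare example appears `factor` times in total after resampling.
    X_extra = np.repeat(X[rare], factor - 1, axis=0)
    y_extra = np.repeat(y[rare], factor - 1)
    return np.vstack([X, X_extra]), np.concatenate([y, y_extra])

X = np.arange(20, dtype=float).reshape(-1, 1)
y = X.ravel()  # uniformly spread targets; top decile is "rare"
X_bal, y_bal = oversample_rare(X, y)
print(len(y_bal))  # → 24: the original 20 plus duplicated rare examples
```

Real balancing algorithms typically combine oversampling of rare targets with undersampling of common ones, or synthesize new rare examples rather than duplicating.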
arXiv Detail & Related papers (2025-07-16T04:34:42Z) - Exploring Training and Inference Scaling Laws in Generative Retrieval [50.82554729023865]
We investigate how model size, training data scale, and inference-time compute jointly influence generative retrieval performance.
Our experiments show that n-gram-based methods demonstrate strong alignment with both training and inference scaling laws.
We find that LLaMA models consistently outperform T5 models, suggesting a particular advantage for larger decoder-only models in generative retrieval.
arXiv Detail & Related papers (2025-03-24T17:59:03Z) - In-Context Linear Regression Demystified: Training Dynamics and Mechanistic Interpretability of Multi-Head Softmax Attention [52.159541540613915]
We study how multi-head softmax attention models are trained to perform in-context learning on linear data.
Our results reveal that in-context learning ability emerges from the trained transformer as an aggregated effect of its architecture and the underlying data distribution.
arXiv Detail & Related papers (2025-03-17T02:00:49Z) - AdaPRL: Adaptive Pairwise Regression Learning with Uncertainty Estimation for Universal Regression Tasks [0.0]
We propose a novel adaptive pairwise learning framework for regression tasks (AdaPRL)
AdaPRL leverages the relative differences between data points and integrates deep probabilistic models to quantify the uncertainty associated with predictions.
Experiments show that AdaPRL can be seamlessly integrated into recently proposed regression frameworks to gain performance improvement.
arXiv Detail & Related papers (2025-01-10T09:19:10Z) - Active learning for regression in engineering populations: A risk-informed approach [0.0]
Regression is a fundamental prediction task common in data-centric engineering applications.
Active learning is an approach for preferentially acquiring feature-label pairs in a resource-efficient manner.
It is shown that the proposed approach has superior performance in terms of expected cost -- maintaining predictive performance while reducing the number of inspections required.
arXiv Detail & Related papers (2024-09-06T15:03:42Z) - State-Space Modeling in Long Sequence Processing: A Survey on Recurrence in the Transformer Era [59.279784235147254]
This survey provides an in-depth summary of the latest approaches that are based on recurrent models for sequential data processing.
The emerging picture suggests that there is room for thinking of novel routes, constituted by learning algorithms which depart from the standard Backpropagation Through Time.
arXiv Detail & Related papers (2024-06-13T12:51:22Z) - Understanding Forgetting in Continual Learning with Linear Regression [21.8755265936716]
Continual learning, focused on sequentially learning multiple tasks, has gained significant attention recently.
We provide a general theoretical analysis of forgetting in the linear regression model via Gradient Descent.
We demonstrate that, given a sufficiently large data size, the arrangement of tasks in a sequence, where tasks with larger eigenvalues in their population data covariance matrices are trained later, tends to result in increased forgetting.
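The forgetting phenomenon described above can be illustrated in a toy continual linear regression setting: fit one task exactly, then run gradient descent on a second task and watch the first task's loss degrade. The task construction here is an illustrative assumption, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_task(w):
    """Noiseless linear regression task with ground-truth weights w."""
    X = rng.normal(size=(50, 2))
    return X, X @ w

XA, yA = make_task(np.array([1.0, 0.0]))  # task A
XB, yB = make_task(np.array([0.0, 1.0]))  # task B, conflicting weights

w = np.linalg.lstsq(XA, yA, rcond=None)[0]      # solve task A exactly
loss_A_before = np.mean((XA @ w - yA) ** 2)     # essentially zero

for _ in range(200):                             # gradient descent on task B
    w -= 0.01 * (2 / len(yB)) * XB.T @ (XB @ w - yB)

loss_A_after = np.mean((XA @ w - yA) ** 2)
print(loss_A_before, loss_A_after)  # task A's loss grows: forgetting
```

The theoretical analysis in the paper relates the magnitude of this effect to the eigenvalues of each task's population data covariance matrix and to the order in which tasks are trained.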
arXiv Detail & Related papers (2024-05-27T18:33:37Z) - On the Resurgence of Recurrent Models for Long Sequences -- Survey and
Research Opportunities in the Transformer Era [59.279784235147254]
This survey is aimed at providing an overview of these trends framed under the unifying umbrella of Recurrence.
It emphasizes novel research opportunities that become prominent when abandoning the idea of processing long sequences.
arXiv Detail & Related papers (2024-02-12T23:55:55Z) - Linked shrinkage to improve estimation of interaction effects in
regression models [0.0]
We develop an estimator that adapts well to two-way interaction terms in a regression model.
We evaluate the potential of the model for inference, which is notoriously hard for selection strategies.
Our models can be very competitive to a more advanced machine learner, like random forest, even for fairly large sample sizes.
arXiv Detail & Related papers (2023-09-25T10:03:39Z) - Tensor Regression [37.35881539885536]
Regression analysis is a key area of interest in the field of data analysis and machine learning.
The emergence of high dimensional data in technologies such as neuroimaging, computer vision, climatology and social networks, has brought challenges to traditional data representation methods.
This book provides a systematic study and analysis of tensor-based regression models and their applications in recent years.
arXiv Detail & Related papers (2023-08-22T13:04:12Z) - Constructing Effective Machine Learning Models for the Sciences: A
Multidisciplinary Perspective [77.53142165205281]
We show how flexible non-linear solutions will not always improve upon manually adding transforms and interactions between variables to linear regression models.
We discuss how to recognize this before constructing a data-driven model and how such analysis can help us move to intrinsically interpretable regression models.
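The point that hand-crafted transforms can rival flexible non-linear models can be sketched directly: adding a single interaction feature lets plain linear regression capture a multiplicative effect it otherwise misses. The data-generating process below is an assumption chosen to make the contrast visible.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
# Targets contain an interaction term that no linear function of X alone fits.
y = 1.5 * X[:, 0] - 0.5 * X[:, 1] + 2.0 * X[:, 0] * X[:, 1]

def fit_mse(features, y):
    """Least-squares fit, returning the training mean squared error."""
    w = np.linalg.lstsq(features, y, rcond=None)[0]
    return np.mean((features @ w - y) ** 2)

mse_plain = fit_mse(X, y)                          # misses the interaction
X_inter = np.column_stack([X, X[:, 0] * X[:, 1]])  # manually add x1*x2
mse_inter = fit_mse(X_inter, y)                    # now fits exactly
print(mse_plain, mse_inter)
```

The augmented linear model stays intrinsically interpretable: each coefficient, including the one on the interaction column, can be read off and reasoned about directly.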
arXiv Detail & Related papers (2022-11-21T17:48:44Z) - Interpretable Scientific Discovery with Symbolic Regression: A Review [8.414043731621419]
Symbolic regression is emerging as a promising machine learning method for learning mathematical expressions directly from data.
This survey presents a structured and comprehensive overview of symbolic regression methods and discusses their strengths and limitations.
arXiv Detail & Related papers (2022-11-20T05:12:39Z) - Regression Bugs Are In Your Model! Measuring, Reducing and Analyzing
Regressions In NLP Model Updates [68.09049111171862]
This work focuses on quantifying, reducing and analyzing regression errors in the NLP model updates.
We formulate the regression-free model updates into a constrained optimization problem.
We empirically analyze how model ensemble reduces regression.
arXiv Detail & Related papers (2021-05-07T03:33:00Z) - Nonparametric Estimation of Heterogeneous Treatment Effects: From Theory
to Learning Algorithms [91.3755431537592]
We analyze four broad meta-learning strategies which rely on plug-in estimation and pseudo-outcome regression.
We highlight how this theoretical reasoning can be used to guide principled algorithm design and translate our analyses into practice.
arXiv Detail & Related papers (2021-01-26T17:11:40Z) - Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.