Nonlinear Feature Aggregation: Two Algorithms driven by Theory
- URL: http://arxiv.org/abs/2306.11143v1
- Date: Mon, 19 Jun 2023 19:57:33 GMT
- Title: Nonlinear Feature Aggregation: Two Algorithms driven by Theory
- Authors: Paolo Bonetti, Alberto Maria Metelli, Marcello Restelli
- Abstract summary: Real-world machine learning applications are characterized by a huge number of features, leading to computational and memory issues.
We propose a dimensionality reduction algorithm (NonLinCFA) which aggregates non-linear transformations of features with a generic aggregation function.
We also test the algorithms on synthetic and real-world datasets, performing regression and classification tasks, showing competitive performance.
- Score: 45.3190496371625
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many real-world machine learning applications are characterized by a huge
number of features, leading to computational and memory issues, as well as the
risk of overfitting. Ideally, only relevant and non-redundant features should
be considered to preserve the complete information of the original data and
limit the dimensionality. Dimensionality reduction and feature selection are
common preprocessing techniques addressing the challenge of efficiently dealing
with high-dimensional data. Dimensionality reduction methods control the number
of features in the dataset while preserving its structure and minimizing
information loss. Feature selection aims to identify the most relevant features
for a task, discarding the less informative ones. Previous works have proposed
approaches that aggregate features based on their correlation, discarding none
of them and preserving their interpretability through aggregation with the
mean. A limitation of correlation-based methods is the
assumption of linearity in the relationship between features and target. In
this paper, we relax such an assumption in two ways. First, we propose a
bias-variance analysis for general models with additive Gaussian noise, leading
to a dimensionality reduction algorithm (NonLinCFA) which aggregates non-linear
transformations of features with a generic aggregation function. Then, we
extend the approach assuming that a generalized linear model regulates the
relationship between features and target. A deviance analysis leads to a second
dimensionality reduction algorithm (GenLinCFA), applicable to a larger class of
regression problems and classification settings. Finally, we test both
algorithms on synthetic and real-world datasets, performing regression and
classification tasks and showing competitive performance.
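The core aggregation idea described in the abstract can be illustrated with a minimal sketch: groups of related features are optionally passed through a nonlinear transformation and then aggregated with the mean, so each group becomes a single interpretable feature. The function name, grouping, and transforms below are hypothetical illustrations, not the authors' NonLinCFA/GenLinCFA implementation.

```python
import numpy as np

def aggregate_features(X, groups, transforms=None):
    """Reduce X (n_samples, n_features) by averaging each group of
    (optionally transformed) features into one column per group."""
    if transforms is None:
        # identity transform for every group (plain mean aggregation)
        transforms = [lambda block: block] * len(groups)
    cols = [t(X[:, g]).mean(axis=1) for g, t in zip(groups, transforms)]
    return np.column_stack(cols)

X = np.array([[1.0, 3.0, 2.0],
              [2.0, 4.0, 4.0]])
# Aggregate features 0 and 1 with the mean; keep feature 2 on its own.
Z = aggregate_features(X, groups=[[0, 1], [2]])
# Z[:, 0] is the per-sample mean of features 0 and 1: [2.0, 3.0]
```

A nonlinear variant would pass, e.g., `transforms=[np.square, lambda b: b]` to average squared features in the first group before aggregation.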
Related papers
- Interpretable Target-Feature Aggregation for Multi-Task Learning based on Bias-Variance Analysis [53.38518232934096]
Multi-task learning (MTL) is a powerful machine learning paradigm designed to leverage shared knowledge across tasks to improve generalization and performance.
We propose an MTL approach at the intersection between task clustering and feature transformation based on a two-phase iterative aggregation of targets and features.
In both phases, a key aspect is to preserve the interpretability of the reduced targets and features through the aggregation with the mean, which is motivated by applications to Earth science.
arXiv Detail & Related papers (2024-06-12T08:30:16Z) - Causal Feature Selection via Transfer Entropy [59.999594949050596]
Causal discovery aims to identify causal relationships between features with observational data.
We introduce a new causal feature selection approach that relies on the forward and backward feature selection procedures.
We provide theoretical guarantees on the regression and classification errors for both the exact and the finite-sample cases.
arXiv Detail & Related papers (2023-10-17T08:04:45Z) - Interpretable Linear Dimensionality Reduction based on Bias-Variance Analysis [45.3190496371625]
We propose a principled dimensionality reduction approach that maintains the interpretability of the resulting features.
In this way, all features are considered, the dimensionality is reduced and the interpretability is preserved.
arXiv Detail & Related papers (2023-03-26T14:30:38Z) - Information bottleneck theory of high-dimensional regression: relevancy, efficiency and optimality [6.700873164609009]
Overfitting is a central challenge in machine learning, yet many large neural networks readily achieve zero training loss.
We quantify overfitting via residual information, defined as the bits in fitted models that encode noise in training data.
arXiv Detail & Related papers (2022-08-08T00:09:12Z) - Efficient and Near-Optimal Smoothed Online Learning for Generalized Linear Functions [28.30744223973527]
We give a computationally efficient algorithm that is the first to enjoy the statistically optimal $\log(T/\sigma)$ regret for realizable $K$-wise linear classification.
We develop a novel characterization of the geometry of the disagreement region induced by generalized linear classifiers.
arXiv Detail & Related papers (2022-05-25T21:31:36Z) - Piecewise linear regression and classification [0.20305676256390928]
This paper proposes a method for solving multivariate regression and classification problems using piecewise linear predictors.
A Python implementation of the algorithm described in this paper is available at http://cse.lab.imtlucca.it/bemporad/parc.
arXiv Detail & Related papers (2021-03-10T17:07:57Z) - Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method, by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
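The $l_{2,p}$-norm regularizer mentioned in this summary penalizes the $\ell_2$ norms of a weight matrix's rows, driving whole rows (i.e., whole features) to zero. A minimal sketch of computing that quantity follows; the function name is a hypothetical illustration, not the paper's optimization algorithm.

```python
import numpy as np

def l2p_norm(W, p=1.0):
    """Compute the l_{2,p} norm of W: the l_p norm of the
    vector of per-row l_2 norms. With p=1 this is the l_{2,1}
    norm commonly used to induce row sparsity."""
    row_norms = np.sqrt((W ** 2).sum(axis=1))
    return (row_norms ** p).sum() ** (1.0 / p)

# Rows with all-zero weights (pruned features) contribute nothing:
W = np.array([[3.0, 4.0],   # row l2 norm 5
              [0.0, 0.0]])  # row l2 norm 0
# l2p_norm(W, p=1.0) evaluates to 5.0
```

Adding this term to a reconstruction-error objective, as the summary describes, makes entire feature rows of the projection matrix vanish, which is what turns the model into a feature selector.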
arXiv Detail & Related papers (2020-12-29T04:08:38Z) - Adaptive Graph-based Generalized Regression Model for Unsupervised Feature Selection [11.214334712819396]
Selecting uncorrelated and discriminative features is the key problem of unsupervised feature selection.
We present a novel generalized regression model imposed by an uncorrelated constraint and the $\ell_{2,1}$-norm regularization.
It can simultaneously select the uncorrelated and discriminative features as well as reduce the variance of these data points belonging to the same neighborhood.
arXiv Detail & Related papers (2020-12-27T09:07:26Z) - Slice Sampling for General Completely Random Measures [74.24975039689893]
We present a novel Markov chain Monte Carlo algorithm for posterior inference that adaptively sets the truncation level using auxiliary slice variables.
The efficacy of the proposed algorithm is evaluated on several popular nonparametric models.
arXiv Detail & Related papers (2020-06-24T17:53:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.