Piecewise linear regression and classification
- URL: http://arxiv.org/abs/2103.06189v1
- Date: Wed, 10 Mar 2021 17:07:57 GMT
- Title: Piecewise linear regression and classification
- Authors: Alberto Bemporad
- Abstract summary: This paper proposes a method for solving multivariate regression and classification problems using piecewise linear predictors.
A Python implementation of the algorithm described in this paper is available at http://cse.lab.imtlucca.it/~bemporad/parc.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes a method for solving multivariate regression and
classification problems using piecewise linear predictors over a polyhedral
partition of the feature space. The resulting algorithm that we call PARC
(Piecewise Affine Regression and Classification) alternates between (i) solving
ridge regression problems for numeric targets, softmax regression problems for
categorical targets, and either softmax regression or cluster centroid
computation for piecewise linear separation, and (ii) assigning the training
points to different clusters on the basis of a criterion that balances
prediction accuracy and piecewise-linear separability. We prove that PARC is a
block-coordinate descent algorithm that optimizes a suitably constructed
objective function, and that it converges in a finite number of steps to a
local minimum of that function. The accuracy of the algorithm is extensively
tested numerically on synthetic and real-world datasets, showing that the
approach provides an extension of linear regression/classification that is
particularly useful when the obtained predictor is used as part of an
optimization model. A Python implementation of the algorithm described in this
paper is available at http://cse.lab.imtlucca.it/~bemporad/parc.
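The alternation in steps (i)-(ii) is compact enough to sketch. Below is a minimal, hedged illustration of a PARC-like block-coordinate descent for numeric targets only, using the cluster-centroid variant of the separation step; the function name `parc_sketch`, the trade-off weight `sigma`, and the scikit-learn ridge solver are illustrative assumptions, not the authors' implementation (the full package, including softmax separation and categorical targets, is at the URL above).

```python
# Minimal sketch of a PARC-like alternation: numeric targets only, with the
# cluster-centroid variant of the separation step. Illustrative only; the
# authors' implementation is at http://cse.lab.imtlucca.it/~bemporad/parc
import numpy as np
from sklearn.linear_model import Ridge

def parc_sketch(X, y, K=3, alpha=1.0, sigma=1.0, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.integers(K, size=len(X))            # random initial assignment
    models = [Ridge(alpha=alpha) for _ in range(K)]
    for _ in range(iters):
        # step (i): fit one ridge regressor per cluster
        for k in range(K):
            idx = z == k
            if idx.sum() >= 2:
                models[k].fit(X[idx], y[idx])
            elif not hasattr(models[k], "coef_"):
                models[k].fit(X, y)             # fallback so predict() works
        centroids = np.stack([X[z == k].mean(axis=0) if (z == k).any()
                              else X.mean(axis=0) for k in range(K)])
        # step (ii): reassign each point to the cluster minimizing a weighted
        # sum of squared prediction error and squared distance to the centroid
        cost = np.stack([(models[k].predict(X) - y) ** 2
                         + sigma * ((X - centroids[k]) ** 2).sum(axis=1)
                         for k in range(K)], axis=1)
        z_new = cost.argmin(axis=1)
        if np.array_equal(z_new, z):            # assignments stable: done
            break
        z = z_new
    return models, z
```

At test time, a query point would be routed to the cluster with the nearest centroid, which induces a polyhedral (Voronoi) partition of the feature space consistent with the piecewise linear predictor structure described in the abstract.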
Related papers
- Agnostic Learning of Mixed Linear Regressions with EM and AM Algorithms [22.79595679373698]
Mixed linear regression is a well-studied problem in statistics and machine learning.
In this paper, we consider the more general problem of learning mixed linear regressions from samples.
We show that the AM and EM algorithms learn mixed linear regressions by converging to the population loss minimizers (a toy EM sketch appears after this list).
arXiv Detail & Related papers (2024-06-03T09:43:24Z) - Variable Substitution and Bilinear Programming for Aligning Partially Overlapping Point Sets [48.1015832267945]
This research presents a method for aligning partially overlapping point sets by minimizing the objective function of the robust point matching (RPM) algorithm.
A branch-and-bound (BnB) algorithm is devised that branches only over the parameters, thereby improving the convergence rate.
Empirical evaluations demonstrate better robustness of the proposed methodology against non-rigid deformation, positional noise, and outliers when compared with prevailing state-of-the-art approaches.
arXiv Detail & Related papers (2024-05-14T13:28:57Z) - An adaptive shortest-solution guided decimation approach to sparse
high-dimensional linear regression [2.3759847811293766]
The proposed algorithm adapts the shortest-solution guided decimation approach and is accordingly referred to as ASSD.
ASSD is especially suitable for linear regression problems with highly correlated measurement matrices encountered in real-world applications.
arXiv Detail & Related papers (2022-11-28T04:29:57Z) - Sparse high-dimensional linear regression with a partitioned empirical
Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are made by using plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z) - Adaptive LASSO estimation for functional hidden dynamic geostatistical
model [69.10717733870575]
We propose a novel model selection algorithm based on a penalized maximum likelihood estimator (PMLE) for functional hidden dynamic geostatistical models (f-HD).
The algorithm is based on iterative optimisation and uses an adaptive least absolute shrinkage and selection operator (adaptive LASSO) penalty function, wherein the weights are obtained from the unpenalised f-HD maximum-likelihood estimators.
arXiv Detail & Related papers (2022-08-10T19:17:45Z) - Robust Regularized Low-Rank Matrix Models for Regression and
Classification [14.698622796774634]
We propose a framework for matrix variate regression models based on a rank constraint, vector regularization (e.g., sparsity), and a general loss function.
We show that the algorithm is guaranteed to converge and that all accumulation points of the algorithm have estimation errors of order $O(1/\sqrt{n})$ asymptotically, essentially attaining the minimax rate.
arXiv Detail & Related papers (2022-05-14T18:03:48Z) - Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via
GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for the semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers and achieved comparable performance to pure data-driven networks while using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z) - Linear regression with partially mismatched data: local search with
theoretical guarantees [9.398989897176953]
We study an important variant of linear regression in which the predictor-response pairs are partially mismatched.
We use an optimization formulation to simultaneously learn the underlying regression coefficients and the permutation corresponding to the mismatches.
We prove that our local search algorithm converges to a nearly-optimal solution at a linear rate.
arXiv Detail & Related papers (2021-06-03T23:32:12Z) - A spectral algorithm for robust regression with subgaussian rates [0.0]
We study a new algorithm, with running time between linear and quadratic, for linear regression in the absence of strong assumptions on the underlying distributions of the samples.
The goal is to design a procedure which attains the optimal sub-gaussian error bound even though the data have only finite moments.
arXiv Detail & Related papers (2020-07-12T19:33:50Z) - Piecewise Linear Regression via a Difference of Convex Functions [50.89452535187813]
We present a new piecewise linear regression methodology that fits a difference of convex (DC) functions to the data (a subgradient-descent sketch of such a fit appears after this list).
We empirically validate the method, showing it to be practically implementable, and to have comparable performance to existing regression/classification methods on real-world datasets.
arXiv Detail & Related papers (2020-07-05T18:58:47Z) - Optimization of Graph Total Variation via Active-Set-based Combinatorial
Reconditioning [48.42916680063503]
We propose a novel adaptive preconditioning strategy for proximal algorithms on this problem class.
We show that nested-forest decomposition of the inactive edges yields a guaranteed local linear convergence rate.
Our results suggest that local convergence analysis can serve as a guideline for selecting variable metrics in proximal algorithms.
arXiv Detail & Related papers (2020-02-27T16:33:09Z)
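For the mixed linear regression entry above (Agnostic Learning of Mixed Linear Regressions with EM and AM Algorithms), here is a hedged sketch of the EM iteration for a K-component mixed linear regression; the known shared noise variance `var`, the random initialization, and the small ridge jitter in the M-step are illustrative assumptions rather than the paper's exact setting.

```python
# Toy EM for a K-component mixed linear regression with fixed, shared noise
# variance. Illustrative only; not the referenced paper's exact algorithm.
import numpy as np

def em_mixed_linreg(X, y, K=2, var=1.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    B = rng.standard_normal((K, d))             # per-component coefficients
    pi = np.full(K, 1.0 / K)                    # mixing weights
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each sample
        resid = y[:, None] - X @ B.T            # (n, K) residuals
        logp = -0.5 * resid ** 2 / var + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True) # stabilize before exp
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted least squares per component, then mixing weights
        for k in range(K):
            W = r[:, k]
            B[k] = np.linalg.solve(X.T @ (W[:, None] * X) + 1e-8 * np.eye(d),
                                   X.T @ (W * y))
        pi = r.mean(axis=0)
    return B, pi
```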
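For the DC-functions entry above (Piecewise Linear Regression via a Difference of Convex Functions), recall that a convex piecewise linear function can be written as a maximum of affine pieces, so a DC piecewise linear model is a difference of two max-affine functions. The sketch below fits such a model by plain subgradient descent on the squared loss; the number of pieces `m`, the step size, and the initialization are illustrative assumptions, and the paper's actual fitting procedure may differ.

```python
# Fit f(x) = max_k(A[k]@x + a[k]) - max_k(C[k]@x + c[k]) by subgradient
# descent on mean squared error. Illustrative sketch only.
import numpy as np

def fit_dc_maxaffine(X, y, m=4, lr=1e-2, iters=3000, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    A = 0.1 * rng.standard_normal((m, d)); a = np.zeros(m)   # pieces of g
    C = 0.1 * rng.standard_normal((m, d)); c = np.zeros(m)   # pieces of h
    for _ in range(iters):
        G, H = X @ A.T + a, X @ C.T + c        # (n, m) affine piece values
        i, j = G.argmax(axis=1), H.argmax(axis=1)
        e = G[np.arange(n), i] - H[np.arange(n), j] - y      # residuals
        # subgradients flow only through the active piece of each max
        gA, gC = np.zeros_like(A), np.zeros_like(C)
        ga, gc = np.zeros(m), np.zeros(m)
        np.add.at(gA, i, (2 * e / n)[:, None] * X)
        np.add.at(ga, i, 2 * e / n)
        np.add.at(gC, j, -(2 * e / n)[:, None] * X)
        np.add.at(gc, j, -2 * e / n)
        A -= lr * gA; a -= lr * ga
        C -= lr * gC; c -= lr * gc
    return A, a, C, c
```

Prediction is then `(X @ A.T + a).max(1) - (X @ C.T + c).max(1)`, which is piecewise linear by construction.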
This list is automatically generated from the titles and abstracts of the papers on this site.