PIANO: A Fast Parallel Iterative Algorithm for Multinomial and Sparse
Multinomial Logistic Regression
- URL: http://arxiv.org/abs/2002.09133v1
- Date: Fri, 21 Feb 2020 05:15:48 GMT
- Title: PIANO: A Fast Parallel Iterative Algorithm for Multinomial and Sparse
Multinomial Logistic Regression
- Authors: R. Jyothi and P. Babu
- Abstract summary: We show that PIANO can be easily extended to solve the Sparse Multinomial Logistic Regression problem.
We also prove that PIANO converges to a stationary point of the Multinomial and the Sparse Multinomial Logistic Regression problems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multinomial Logistic Regression is a well-studied tool for classification and
has been widely used in fields like image processing, computer vision, and
bioinformatics, to name a few. Under a supervised classification scenario, a
Multinomial Logistic Regression model learns a weight vector to differentiate
between any two classes by optimizing over the likelihood objective. With the
advent of big data, the inundation of data has resulted in large-dimensional
weight vectors and a huge number of classes, which makes the classical methods
for model estimation computationally unviable. To handle this issue, we propose
a parallel iterative algorithm, Parallel Iterative Algorithm for MultiNomial
LOgistic Regression (PIANO), which is based on the Majorization Minimization
procedure and can update each element of the weight vectors in parallel.
Further, we show that PIANO can be easily extended to solve the Sparse
Multinomial Logistic Regression problem - an extensively studied problem
because of its attractive feature-selection property. In particular, we work
out the extension of PIANO to solve the Sparse Multinomial Logistic Regression
problem with l1 and l0 regularizations. We also prove that PIANO converges to a
stationary point of both the Multinomial and the Sparse Multinomial Logistic
Regression problems. In simulations comparing PIANO with existing methods, the
proposed algorithm converged faster than the existing methods.
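The core idea in the abstract, bounding the log-likelihood curvature by a constant so that every weight coordinate can be updated independently (and hence in parallel), can be illustrated with a generic majorize-minimize sketch. This is not the authors' exact surrogate from the paper; the 0.5·XᵀX curvature bound, the soft-threshold step for the l1 penalty, and all function names below are standard textbook choices used here only for illustration:

```python
import numpy as np

def softmax(Z):
    """Row-wise softmax with a max-shift for numerical stability."""
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def soft_threshold(V, t):
    """Elementwise proximal operator of the l1 norm."""
    return np.sign(V) * np.maximum(np.abs(V) - t, 0.0)

def mm_sparse_mlr(X, Y, lam=0.0, iters=500):
    """Fit multinomial logistic regression with an optional l1 penalty
    via majorize-minimize: the negative log-likelihood Hessian is
    upper-bounded by the constant 0.5 * X^T X (blockwise), so each
    iteration minimizes a separable quadratic surrogate whose
    coordinates can all be updated in parallel.
    X: (n, d) features; Y: (n, K) one-hot labels."""
    n, d = X.shape
    K = Y.shape[1]
    W = np.zeros((d, K))
    L = 0.5 * np.linalg.norm(X, 2) ** 2  # curvature bound: spectral norm^2 / 2
    for _ in range(iters):
        G = X.T @ (softmax(X @ W) - Y)          # gradient of the negative log-likelihood
        W = soft_threshold(W - G / L, lam / L)  # minimizer of the separable surrogate
    return W
```

With lam=0 this reduces to gradient descent with the fixed MM step 1/L; with lam>0 each coordinate update is an independent soft-threshold, which is what makes the scheme embarrassingly parallel across the elements of the weight matrix.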
Related papers
- Scalable Inference for Bayesian Multinomial Logistic-Normal Dynamic Linear Models [0.5735035463793009]
This article develops an efficient and accurate approach to posterior state estimation, called Fenrir.
Our experiments suggest that Fenrir can be three orders of magnitude more efficient than Stan.
Our methods are made available to the community as a user-friendly software library written in C++ with an R interface.
arXiv Detail & Related papers (2024-10-07T23:20:14Z)
- Minimally Supervised Learning using Topological Projections in Self-Organizing Maps [55.31182147885694]
We introduce a semi-supervised learning approach based on topological projections in self-organizing maps (SOMs)
Our proposed method first trains SOMs on unlabeled data, and then a minimal number of available labeled data points are assigned to key best-matching units (BMUs).
Our results indicate that the proposed minimally supervised model significantly outperforms traditional regression techniques.
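The pipeline described in this summary (train a SOM on unlabeled data, then attach the few available labels to best-matching units and classify by the nearest labeled unit) can be sketched generically. This is not the paper's implementation; the grid size, schedules, and function names are illustrative assumptions:

```python
import numpy as np

def train_som(X, grid=(4, 4), iters=400, lr0=0.5, sigma0=1.5, seed=0):
    """Fit a small self-organizing map on unlabeled data with online updates."""
    rng = np.random.default_rng(seed)
    h, w = grid
    # Initialize units at randomly chosen data points (plus tiny jitter).
    units = X[rng.integers(len(X), size=h * w)] + 0.01 * rng.normal(size=(h * w, X.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = int(np.argmin(((units - x) ** 2).sum(axis=1)))  # best-matching unit
        frac = t / iters
        lr = lr0 * (1.0 - frac) + 0.01       # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.3  # shrinking neighborhood radius
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        nb = np.exp(-d2 / (2.0 * sigma ** 2))  # grid-neighborhood weights
        units += lr * nb[:, None] * (x - units)
    return units

def bmu_labels(units, X_lab, y_lab):
    """Assign each labeled point's label to its best-matching unit."""
    return {int(np.argmin(((units - x) ** 2).sum(axis=1))): y
            for x, y in zip(X_lab, y_lab)}

def som_predict(units, labels, X):
    """Classify each point by the nearest *labeled* unit in weight space."""
    idx = np.array(sorted(labels))
    return np.array([labels[int(idx[np.argmin(((units[idx] - x) ** 2).sum(axis=1))])]
                     for x in X])
```

The semi-supervised aspect is that `train_som` never sees labels; only a handful of labeled points are needed afterwards to tag the prototypes.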
arXiv Detail & Related papers (2024-01-12T22:51:48Z)
- An Efficient Data Analysis Method for Big Data using Multiple-Model Linear Regression [4.085654010023149]
This paper introduces a new data analysis method for big data using a newly defined regression model named multiple-model linear regression (MMLR).
The proposed data analysis method is shown to be more efficient and flexible than other regression based methods.
arXiv Detail & Related papers (2023-08-24T10:20:15Z)
- TMPNN: High-Order Polynomial Regression Based on Taylor Map Factorization [0.0]
The paper presents a method for constructing a high-order regression based on the Taylor map factorization.
By benchmarking on UCI open access datasets, we demonstrate that the proposed method performs comparable to the state-of-the-art regression methods.
arXiv Detail & Related papers (2023-07-30T01:52:00Z)
- Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood Estimation for Latent Gaussian Models [69.22568644711113]
We introduce probabilistic unrolling, a method that combines Monte Carlo sampling with iterative linear solvers to circumvent matrix inversions.
Our theoretical analyses reveal that unrolling and backpropagation through the iterations of the solver can accelerate gradient estimation for maximum likelihood estimation.
In experiments on simulated and real data, we demonstrate that probabilistic unrolling learns latent Gaussian models up to an order of magnitude faster than gradient EM, with minimal losses in model performance.
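The "iterative linear solvers to circumvent matrix inversions" idea mentioned in this summary can be illustrated with a generic conjugate-gradient sketch. This is not the paper's method, only the standard building block it refers to: solving A x = b through matrix-vector products so that A⁻¹ is never formed:

```python
import numpy as np

def conjugate_gradient(matvec, b, tol=1e-10, max_iter=200):
    """Solve A x = b for symmetric positive definite A using only
    matrix-vector products; the inverse of A is never materialized."""
    x = np.zeros_like(b)
    r = b - matvec(x)          # initial residual
    p = r.copy()               # initial search direction
    rs = float(r @ r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / float(p @ Ap)   # exact line search along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = float(r @ r)
        if rs_new < tol ** 2:        # residual small enough: done
            break
        p = r + (rs_new / rs) * p    # A-conjugate direction update
        rs = rs_new
    return x
```

Because only `matvec` is needed, the same routine works when A is available implicitly (e.g. as a covariance operator), which is what makes such solvers attractive inside larger learning loops.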
arXiv Detail & Related papers (2023-06-05T21:08:34Z)
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- Oracle Inequalities for Model Selection in Offline Reinforcement Learning [105.74139523696284]
We study the problem of model selection in offline RL with value function approximation.
We propose the first model selection algorithm for offline RL that achieves minimax rate-optimal inequalities up to logarithmic factors.
We conclude with several numerical simulations showing it is capable of reliably selecting a good model class.
arXiv Detail & Related papers (2022-11-03T17:32:34Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are used through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- Piecewise linear regression and classification [0.20305676256390928]
This paper proposes a method for solving multivariate regression and classification problems using piecewise linear predictors.
A Python implementation of the algorithm described in this paper is available at http://cse.lab.imtlucca.it/bemporad/parc.
arXiv Detail & Related papers (2021-03-10T17:07:57Z)
- A spectral algorithm for robust regression with subgaussian rates [0.0]
We study a new algorithm, running in linear up to quadratic time, for linear regression in the absence of strong assumptions on the underlying distributions of samples.
The goal is to design a procedure which attains the optimal sub-gaussian error bound even though the data have only finite moments.
arXiv Detail & Related papers (2020-07-12T19:33:50Z)
- Multi-layer Optimizations for End-to-End Data Analytics [71.05611866288196]
We introduce Iterative Functional Aggregate Queries (IFAQ), a framework that realizes an alternative approach.
IFAQ treats the feature extraction query and the learning task as one program given in the IFAQ's domain-specific language.
We show that a Scala implementation of IFAQ can outperform mlpack, Scikit, and specialization by several orders of magnitude for linear regression and regression tree models over several relational datasets.
arXiv Detail & Related papers (2020-01-10T16:14:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.