On Regularized Sparse Logistic Regression
- URL: http://arxiv.org/abs/2309.05925v2
- Date: Thu, 12 Oct 2023 01:27:46 GMT
- Title: On Regularized Sparse Logistic Regression
- Authors: Mengyuan Zhang and Kai Liu
- Abstract summary: In this paper, we propose a unified framework to solve $\ell_1$-regularized logistic regression, which extends naturally to nonconvex regularization terms.
Empirical experiments show our algorithms perform classification and feature selection effectively at a lower computational cost.
- Score: 5.174012156390378
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sparse logistic regression is for classification and feature selection
simultaneously. Although many studies have been done to solve
$\ell_1$-regularized logistic regression, there is no equivalently abundant
work on solving sparse logistic regression with nonconvex regularization term.
In this paper, we propose a unified framework to solve $\ell_1$-regularized
logistic regression, which can be naturally extended to nonconvex
regularization terms, as long as a certain requirement is satisfied. In addition,
we utilize a different line search criterion to guarantee monotone
convergence for various regularization terms. Empirical experiments on binary
classification tasks with real-world datasets demonstrate our proposed
algorithms are capable of performing classification and feature selection
effectively at a lower computational cost.
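The abstract's recipe (a smooth logistic loss, a plug-in proximal operator for the regularizer, and a line search enforcing monotone descent) can be sketched concretely. The following is a minimal proximal-gradient sketch in Python/NumPy, not the authors' algorithm verbatim: the MCP penalty stands in as one example of a nonconvex regularizer, and all function names, defaults, and the step-halving schedule are illustrative assumptions.

```python
import numpy as np

def logistic_loss(w, X, y):
    # average logistic loss with labels y in {-1, +1}; logaddexp is a
    # numerically stable log(1 + exp(-z))
    return np.mean(np.logaddexp(0.0, -y * (X @ w)))

def logistic_grad(w, X, y):
    z = y * (X @ w)
    return X.T @ (-y / (1.0 + np.exp(z))) / X.shape[0]

def prox_l1(v, step, lam):
    # soft-thresholding: exact proximal operator of step * lam * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)

def prox_mcp(v, step, lam, gamma=3.0):
    # firm thresholding: proximal operator of the nonconvex MCP penalty
    # (requires gamma > step; recovers soft-thresholding as gamma -> inf)
    soft = prox_l1(v, step, lam)
    return np.where(np.abs(v) > gamma * lam, v, soft / (1.0 - step / gamma))

def prox_grad(X, y, lam, prox=prox_l1, iters=200, step0=1.0):
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        f, g = logistic_loss(w, X, y), logistic_grad(w, X, y)
        t = step0
        while True:
            w_new = prox(w - t * g, t, lam)
            d = w_new - w
            # accept once the quadratic upper bound on the smooth part holds;
            # this guarantees the full objective decreases monotonically
            if logistic_loss(w_new, X, y) <= f + g @ d + d @ d / (2.0 * t):
                break
            t *= 0.5  # backtracking line search
        w = w_new
    return w
```

Swapping `prox=prox_mcp` into `prox_grad` changes the regularizer without touching the solver or the line search, which is the kind of plug-in extension to nonconvex terms the abstract points to.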
Related papers
- Task Shift: From Classification to Regression in Overparameterized Linear Models [5.030445392527011]
We investigate a phenomenon where latent knowledge is transferred to a more difficult task under a similar data distribution.
We show that while minimum-norm interpolators for classification cannot transfer to regression a priori, they experience surprisingly structured attenuation which enables successful task shift with limited additional data.
arXiv Detail & Related papers (2025-02-18T21:16:01Z) - Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple
Logits Retargeting Approach [102.0769560460338]
We develop a simple Logits Retargeting approach (LORT) that requires no prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z) - A Novel Approach in Solving Stochastic Generalized Linear Regression via
Nonconvex Programming [1.6874375111244329]
This paper considers a generalized linear regression model as a problem with chance constraints.
The proposed algorithm's results were 1 to 2 percent better than those of the ordinary logistic regression model.
arXiv Detail & Related papers (2024-01-16T16:45:51Z) - Deep Imbalanced Regression via Hierarchical Classification Adjustment [50.19438850112964]
Regression tasks in computer vision are often formulated as classification by quantizing the target space into classes (a minimal sketch of this quantization step appears after this list).
The majority of training samples lie in a head range of target values, while a minority of samples span a usually larger tail range.
We propose to construct hierarchical classifiers for solving imbalanced regression tasks.
Our novel hierarchical classification adjustment (HCA) for imbalanced regression shows superior results on three diverse tasks.
arXiv Detail & Related papers (2023-10-26T04:54:39Z) - Retire: Robust Expectile Regression in High Dimensions [3.9391041278203978]
Penalized quantile and expectile regression methods offer useful tools to detect heteroscedasticity in high-dimensional data.
We propose and study (penalized) robust expectile regression (retire)
We show that the proposed procedure can be efficiently solved by a semismooth Newton coordinate descent algorithm.
arXiv Detail & Related papers (2022-12-11T18:03:12Z) - Streaming Sparse Linear Regression [1.8707139489039097]
We propose a novel online sparse linear regression framework for analyzing streaming data when data points arrive sequentially.
Our proposed method is memory efficient and requires less stringent restricted strong convexity assumptions.
arXiv Detail & Related papers (2022-11-11T07:31:55Z) - Safe Screening for Logistic Regression with $\ell_0$-$\ell_2$
Regularization [0.360692933501681]
We present screening rules that safely remove features from logistic regression before solving the problem.
A high percentage of the features can be effectively and safely removed a priori, leading to a substantial speed-up in computation.
arXiv Detail & Related papers (2022-02-01T15:25:54Z) - Iterative Feature Matching: Toward Provable Domain Generalization with
Logarithmic Environments [55.24895403089543]
Domain generalization aims at performing well on unseen test environments with data from a limited number of training environments.
We present a new algorithm based on performing iterative feature matching that is guaranteed with high probability to yield a predictor that generalizes after seeing only $O(\log d_s)$ environments.
arXiv Detail & Related papers (2021-06-18T04:39:19Z) - An efficient projection neural network for $\ell_1$-regularized logistic
regression [10.517079029721257]
This paper presents a simple projection neural network for $\ell_1$-regularized logistic regression.
The proposed neural network does not require any extra auxiliary variable nor any smooth approximation.
We also investigate the convergence of the proposed neural network by using the Lyapunov theory and show that it converges to a solution of the problem with any arbitrary initial value.
arXiv Detail & Related papers (2021-05-12T06:13:44Z) - Learning Probabilistic Ordinal Embeddings for Uncertainty-Aware
Regression [91.3373131262391]
Uncertainty is the only certainty there is.
Traditionally, the direct regression formulation is considered and the uncertainty is modeled by modifying the output space to a certain family of probabilistic distributions.
How to model the uncertainty within the present-day technologies for regression remains an open issue.
arXiv Detail & Related papers (2021-03-25T06:56:09Z) - Fast OSCAR and OWL Regression via Safe Screening Rules [97.28167655721766]
Ordered Weighted $L_1$ (OWL) regularized regression is a new regression analysis for high-dimensional sparse learning.
Proximal gradient methods are used as standard approaches to solve OWL regression.
We propose the first safe screening rule for OWL regression by exploring the order of the primal solution with the unknown order structure.
arXiv Detail & Related papers (2020-06-29T23:35:53Z) - Beyond UCB: Optimal and Efficient Contextual Bandits with Regression
Oracles [112.89548995091182]
We provide the first universal and optimal reduction from contextual bandits to online regression.
Our algorithm requires no distributional assumptions beyond realizability, and works even when contexts are chosen adversarially.
arXiv Detail & Related papers (2020-02-12T11:33:46Z)
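As a side note on the hierarchical classification adjustment (HCA) entry above, flagged there, here is a minimal sketch of the quantization step that turns a long-tailed regression target into imbalanced classes; the target distribution and bin count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.lognormal(size=1000)        # assumed long-tailed continuous targets
K = 10                              # illustrative number of classes
edges = np.linspace(y.min(), y.max(), K + 1)[1:-1]  # equal-width bin edges
classes = np.digitize(y, edges)     # class labels in {0, ..., K-1}
print(np.bincount(classes, minlength=K))  # skewed counts = head/tail imbalance
```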