Persistent Reductions in Regularized Loss Minimization for Variable
Selection
- URL: http://arxiv.org/abs/2011.14549v1
- Date: Mon, 30 Nov 2020 04:59:44 GMT
- Title: Persistent Reductions in Regularized Loss Minimization for Variable
Selection
- Authors: Amin Jalali
- Abstract summary: We show that for a broad class of loss functions it is possible to efficiently identify a subset of features which are guaranteed to have zero coefficients.
We employ an existing output-sensitive algorithm for extreme ray identification, which makes our guarantee and algorithm applicable in ultra-high dimensional problems.
- Score: 3.3504365823045035
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the context of regularized loss minimization with polyhedral gauges, we
show that for a broad class of loss functions (possibly non-smooth and
non-convex) and under a simple geometric condition on the input data it is
possible to efficiently identify a subset of features which are guaranteed to
have zero coefficients in all optimal solutions in all problems with loss
functions from said class, before any iterative optimization has been performed
for the original problem. This procedure is standalone, takes only the data as
input, and does not require any calls to the loss function. Therefore, we term
this procedure a persistent reduction for the aforementioned class of
regularized loss minimization problems. This reduction can be efficiently
implemented via an extreme ray identification subroutine applied to a
polyhedral cone formed from the datapoints. We employ an existing
output-sensitive algorithm for extreme ray identification which makes our
guarantee and algorithm applicable in ultra-high dimensional problems.
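The reduction's core computational step, deciding which generators of a finitely generated polyhedral cone span extreme rays, can be illustrated with a naive redundancy check: a generator is not extreme if it is a nonnegative (conic) combination of the remaining generators, which is a linear-programming feasibility question. The sketch below is only an illustration under that assumption; it is not the paper's output-sensitive algorithm, and it does not reproduce the specific cone the paper forms from the data points. SciPy's linprog is assumed available.

```python
import numpy as np
from scipy.optimize import linprog

def redundant_generators(G):
    """Indices j such that column G[:, j] is a nonnegative combination of the
    other columns, i.e. it does not span an extreme ray of cone(G).
    Naive check: one LP feasibility problem per generator (illustration only)."""
    n, m = G.shape
    redundant = []
    for j in range(m):
        others = np.delete(G, j, axis=1)          # remaining generators
        res = linprog(c=np.zeros(m - 1),          # zero objective: pure feasibility
                      A_eq=others, b_eq=G[:, j],
                      bounds=(0, None), method="highs")
        if res.status == 0:                       # feasible: G[:, j] lies inside the cone
            redundant.append(j)
    return redundant

# Tiny check in R^2: the ray through (1, 1) lies inside cone{(1, 0), (0, 1)}.
G = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
print(redundant_generators(G))   # -> [2]
```

In the persistent-reduction setting, extreme ray identification on the cone built from the data is what singles out the features guaranteed to be zero; the output-sensitive subroutine cited in the paper is what avoids the per-generator LP cost of the naive check in ultra-high dimensions.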
Related papers
- Distributed Online Bandit Nonconvex Optimization with One-Point Residual Feedback via Dynamic Regret [10.700891331004799]
This paper considers the distributed online bandit optimization problem with nonconvex loss functions over a time-varying digraph.
In each round, each player selects a decision, and an adversary then assigns an arbitrary non-linear loss function to that player.
The expected regret of our algorithms is comparable to existing algorithms that use two-point deviation.
arXiv Detail & Related papers (2024-09-24T02:37:33Z) - A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we present a robust phase retrieval problem where the task is to recover an unknown signal.
Our proposed oracle avoids the need for computationally expensive spectral initialization, using a simple gradient step together with outlier removal.
arXiv Detail & Related papers (2024-09-07T06:37:23Z) - Efficient Low-rank Identification via Accelerated Iteratively Reweighted Nuclear Norm Minimization [8.879403568685499]
We introduce an adaptive updating strategy for smoothing parameters.
After a few iterations, this strategy effectively transforms the algorithm into one that solves the original problem.
We prove the global convergence of the proposed algorithm, guaranteeing that every accumulation point is a critical point.
arXiv Detail & Related papers (2024-06-22T02:37:13Z) - Offline Minimax Soft-Q-learning Under Realizability and Partial Coverage [100.8180383245813]
We propose value-based algorithms for offline reinforcement learning (RL).
We show an analogous result for vanilla Q-functions under a soft margin condition.
Our algorithms' loss functions arise from casting the estimation problems as nonlinear convex optimization problems and Lagrangifying them.
arXiv Detail & Related papers (2023-02-05T14:22:41Z) - An Accelerated Doubly Stochastic Gradient Method with Faster Explicit
Model Identification [97.28167655721766]
We propose a novel accelerated doubly stochastic gradient descent (ADSGD) method for sparsity regularized loss minimization problems.
We first prove that ADSGD can achieve a linear convergence rate and lower overall computational complexity.
arXiv Detail & Related papers (2022-08-11T22:27:22Z) - Support Vector Machines with the Hard-Margin Loss: Optimal Training via
Combinatorial Benders' Cuts [8.281391209717105]
We show how to train the hard-margin SVM model to global optimality.
We introduce an iterative sampling and decomposition algorithm that solves the problem.
arXiv Detail & Related papers (2022-07-15T18:21:51Z) - Adaptivity and Non-stationarity: Problem-dependent Dynamic Regret for Online Convex Optimization [70.4342220499858]
We introduce novel online algorithms that can exploit smoothness and replace the dependence on $T$ in dynamic regret with problem-dependent quantities.
Our results are adaptive to the intrinsic difficulty of the problem, since the bounds are tighter than existing results for easy problems and safeguard the same rate in the worst case.
arXiv Detail & Related papers (2021-12-29T02:42:59Z) - Solving weakly supervised regression problem using low-rank manifold
regularization [77.34726150561087]
We solve a weakly supervised regression problem.
Under "weakly" we understand that for some training points the labels are known, for some unknown, and for others uncertain due to the presence of random noise or other reasons such as lack of resources.
In the numerical section, we applied the suggested method to artificial and real datasets using Monte-Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z) - Dynamic Regret of Convex and Smooth Functions [93.71361250701075]
We investigate online convex optimization in non-stationary environments.
We choose the dynamic regret as the performance measure.
We show that it is possible to further enhance the dynamic regret by exploiting the smoothness condition.
arXiv Detail & Related papers (2020-07-07T14:10:57Z) - Linear Regression without Correspondences via Concave Minimization [24.823689223437917]
A signal is recovered in a linear regression setting without correspondences.
The associated maximum likelihood function is NP-hard to compute when the signal has dimension larger than one.
We reformulate it as a concave minimization problem, which we solve via branch-and-bound.
The resulting algorithm outperforms state-of-the-art methods for fully shuffled data and remains tractable for up to $8$-dimensional signals.
arXiv Detail & Related papers (2020-03-17T13:19:23Z)
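On that last entry, one step is easy to make concrete: for a fixed candidate signal x, minimizing the residual over all permutations of the responses reduces to a sort (by the rearrangement inequality), so the permutation-minimized least-squares objective is cheap to evaluate; the hardness noted above concerns the joint minimization over x. A minimal Python sketch of that evaluation (not of the paper's branch-and-bound solver):

```python
import numpy as np

def shuffled_residual(A, y, x):
    """Objective of linear regression without correspondences for a candidate x:
    min over permutations P of ||y - P A x||^2. For fixed x the inner minimum is
    attained by matching sorted orders, so it reduces to comparing sorted vectors."""
    z = A @ x
    return float(np.sum((np.sort(y) - np.sort(z)) ** 2))

# Tiny example: shuffle the responses and verify the true x attains zero residual.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 3))
x_true = np.array([1.0, -2.0, 0.5])
y = rng.permutation(A @ x_true)          # responses observed without correspondences
print(shuffled_residual(A, y, x_true))   # ~ 0.0
```

Here the true signal attains a (numerically) zero residual because y is just a shuffled copy of A x_true; searching over x is where the combinatorial difficulty lies.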
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.