A Unified Framework for Constructing Nonconvex Regularizations
- URL: http://arxiv.org/abs/2106.06123v1
- Date: Fri, 11 Jun 2021 02:10:01 GMT
- Title: A Unified Framework for Constructing Nonconvex Regularizations
- Authors: Zhiyong Zhou
- Abstract summary: How to construct a valid nonconvex regularization function remains open in practice.
In this paper, we fill this gap by presenting a unified framework for constructing nonconvex regularizations based on the probability density function.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Over the past decades, many individual nonconvex methods have been proposed
to achieve better sparse recovery performance in various scenarios. However,
how to construct a valid nonconvex regularization function remains open in
practice. In this paper, we fill in this gap by presenting a unified framework
for constructing the nonconvex regularization based on the probability density
function. Meanwhile, a new nonconvex sparse recovery method constructed via the
Weibull distribution is studied.
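To give a concrete feel for what a density-based nonconvex regularizer can look like, here is a minimal Python sketch built around the Weibull distribution mentioned in the abstract. The specific density-to-penalty mapping (applying the Weibull CDF to each coefficient's magnitude), the parameter values, and all function names are illustrative assumptions, not the construction validated in the paper.

```python
import numpy as np

def weibull_cdf(t, shape=0.5, scale=1.0):
    # Weibull CDF: F(t) = 1 - exp(-(t / scale)**shape), for t >= 0.
    return 1.0 - np.exp(-(t / scale) ** shape)

def weibull_penalty(x, shape=0.5, scale=1.0):
    # Illustrative assumption: penalize each coefficient by the Weibull CDF
    # of its magnitude. For shape <= 1 the CDF is concave on [0, inf), so
    # the penalty is bounded and nonconvex, and it flattens for large |x|
    # instead of growing linearly like the L1 norm -- the qualitative shape
    # a nonconvex sparse-recovery penalty needs.
    return weibull_cdf(np.abs(x), shape, scale)

def objective(x, A, b, lam=0.1):
    # Regularized sparse-recovery objective:
    #   0.5 * ||A x - b||_2^2 + lam * sum_i penalty(x_i)
    r = A @ x - b
    return 0.5 * (r @ r) + lam * weibull_penalty(x).sum()

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))   # underdetermined sensing matrix
x_true = np.zeros(200)
x_true[:5] = 1.0                     # 5-sparse ground truth
b = A @ x_true                       # noiseless measurements
print(objective(x_true, A, b))       # only the 5 nonzeros incur penalty
```

Minimizing such an objective would require a nonconvex solver (for instance, proximal gradient or iteratively reweighted L1); the sketch is only meant to show how a probability distribution can induce a bounded, sparsity-promoting penalty.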
Related papers
- A KL-based Analysis Framework with Applications to Non-Descent Optimization Methods [5.779838187603272]
We propose a novel analysis framework, based on the Kurdyka-Lojasiewicz property, for non-descent-type optimization methods.
arXiv Detail & Related papers (2024-06-04T12:49:46Z) - Equivariant Frames and the Impossibility of Continuous Canonicalization [10.02508080274145]
We show that unweighted frame-averaging can turn a smooth, non-symmetric function into a discontinuous, symmetric function.
We construct efficient and continuous weighted frames for the actions of $SO(2)$, $SO(3)$, and $S_n$ on point clouds.
arXiv Detail & Related papers (2024-02-25T12:40:42Z) - SimPro: A Simple Probabilistic Framework Towards Realistic Long-Tailed Semi-Supervised Learning [49.94607673097326]
We propose a highly adaptable framework, designated as SimPro, which does not rely on any predefined assumptions about the distribution of unlabeled data.
Our framework, grounded in a probabilistic model, innovatively refines the expectation-maximization algorithm.
Our method showcases consistent state-of-the-art performance across diverse benchmarks and data distribution scenarios.
arXiv Detail & Related papers (2024-02-21T03:39:04Z) - Exploiting hidden structures in non-convex games for convergence to Nash
equilibrium [62.88214569402201]
A wide array of modern machine learning applications can be formulated as non-cooperative games whose solutions are Nash equilibria.
We provide explicit convergence guarantees for both deterministic and stochastic environments.
arXiv Detail & Related papers (2023-12-27T15:21:25Z) - Can Decentralized Stochastic Minimax Optimization Algorithms Converge
Linearly for Finite-Sum Nonconvex-Nonconcave Problems? [56.62372517641597]
Decentralized minimax optimization has been actively studied in the past few years due to its applications in a wide range of machine learning tasks.
This paper develops two novel decentralized minimax optimization algorithms for the finite-sum nonconvex-nonconcave problem.
arXiv Detail & Related papers (2023-04-24T02:19:39Z) - Efficient Informed Proposals for Discrete Distributions via Newton's
Series Approximation [13.349005662077403]
We develop a gradient-like proposal for any discrete distribution without a strong requirement.
Our method efficiently approximates the discrete likelihood ratio via Newton's series expansion.
We prove that our method has a guaranteed convergence rate with or without the Metropolis-Hastings step.
arXiv Detail & Related papers (2023-02-27T16:28:23Z) - Distribution-Free Robust Linear Regression [5.532477732693]
We study random design linear regression with no assumptions on the distribution of the covariates.
We construct a non-linear estimator achieving excess risk of order $d/n$ with the optimal sub-exponential tail.
We prove an optimal version of the classical bound for the truncated least squares estimator due to Györfi, Kohler, Krzyzak, and Walk.
arXiv Detail & Related papers (2021-02-25T15:10:41Z) - Nonlinear Independent Component Analysis for Continuous-Time Signals [85.59763606620938]
We study the classical problem of recovering a multidimensional source process from observations of mixtures of this process.
We show that this recovery is possible for many popular models of processes (up to order and monotone scaling of their coordinates) if the mixture is given by a sufficiently differentiable, invertible function.
arXiv Detail & Related papers (2021-02-04T20:28:44Z) - Distributed Stochastic Nonconvex Optimization and Learning based on Successive Convex Approximation [26.11677569331688]
We introduce a novel framework for the distributed algorithmic minimization of the sum of the cost functions of the agents in a network.
We show that the proposed method can be applied to distributed neural networks.
arXiv Detail & Related papers (2020-04-30T15:36:46Z) - On dissipative symplectic integration with applications to gradient-based optimization [77.34726150561087]
We propose a geometric framework in which discretizations can be realized systematically.
We show that a generalization of symplectic integrators to nonconservative and, in particular, dissipative Hamiltonian systems is able to preserve rates of convergence up to a controlled error.
arXiv Detail & Related papers (2020-04-15T00:36:49Z) - Log-Likelihood Ratio Minimizing Flows: Towards Robust and Quantifiable Neural Distribution Alignment [52.02794488304448]
We propose a new distribution alignment method based on a log-likelihood ratio statistic and normalizing flows.
We experimentally verify that minimizing the resulting objective results in domain alignment that preserves the local structure of input domains.
arXiv Detail & Related papers (2020-03-26T22:10:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.