Signal Decomposition Using Masked Proximal Operators
- URL: http://arxiv.org/abs/2202.09338v1
- Date: Fri, 18 Feb 2022 18:05:33 GMT
- Title: Signal Decomposition Using Masked Proximal Operators
- Authors: Bennet E. Meyers and Stephen P. Boyd
- Abstract summary: We consider the well-studied problem of decomposing a vector time series signal into components with different characteristics.
We propose a simple and general framework in which the components are defined by loss functions (which include constraints)
We give two distributed methods which find the optimal decomposition when the component class loss functions are convex.
- Score: 9.267365602872134
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We consider the well-studied problem of decomposing a vector time series
signal into components with different characteristics, such as smooth,
periodic, nonnegative, or sparse. We propose a simple and general framework in
which the components are defined by loss functions (which include constraints),
and the signal decomposition is carried out by minimizing the sum of losses of
the components (subject to the constraints). When each loss function is the
negative log-likelihood of a density for the signal component, our method
coincides with maximum a posteriori probability (MAP) estimation; but it also
includes many other interesting cases. We give two distributed optimization
methods for computing the decomposition, which find the optimal decomposition
when the component class loss functions are convex, and are good heuristics
when they are not. Both methods require only the masked proximal operator of
each of the component loss functions, a generalization of the well-known
proximal operator that handles missing entries in its argument. Both methods
are distributed, i.e., handle each component separately. We derive tractable
methods for evaluating the masked proximal operators of some loss functions
that, to our knowledge, have not appeared in the literature.
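As a rough illustration of the masked proximal operator described in the abstract, the Python sketch below evaluates it for two simple component losses: the sum-of-squares loss and the indicator of the nonnegative orthant. The function names, the choice of rho, and the example signal are hypothetical illustrations of the definition given in the abstract, not the authors' code or API.

```python
import numpy as np

# The masked proximal operator of a loss phi, with parameter rho and a boolean
# mask `known` marking observed entries, is
#   argmin_x  phi(x) + (rho / 2) * sum_{k : known[k]} (x[k] - v[k])**2 .
# Missing entries of v enter only through phi itself.

def masked_prox_sum_squares(v, known, rho):
    """Masked prox of phi(x) = ||x||_2^2 (a simple residual/noise component).

    On known entries the minimizer of x_k^2 + (rho/2)(x_k - v_k)^2 is
    x_k = rho * v_k / (2 + rho); on unknown entries phi alone drives x_k to 0.
    """
    x = np.zeros_like(v)
    x[known] = rho * v[known] / (2.0 + rho)
    return x

def masked_prox_nonnegative(v, known, rho):
    """Masked prox of the indicator of the nonnegative orthant.

    Known entries are projected onto [0, inf); unknown entries are not
    penalized by the quadratic term, so any nonnegative value is optimal
    (we pick 0).
    """
    x = np.zeros_like(v)
    x[known] = np.maximum(v[known], 0.0)
    return x

# Tiny usage example on a length-5 signal with one missing entry.
v = np.array([1.5, -0.3, 2.0, 0.7, -1.0])
known = np.array([True, True, False, True, True])
print(masked_prox_sum_squares(v, known, rho=1.0))
print(masked_prox_nonnegative(v, known, rho=1.0))
```

In both cases the missing entries simply drop out of the quadratic term, so the operator is free to set them to whatever the component loss prefers; the paper's distributed methods coordinate one such operator per component to produce the overall decomposition.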
Related papers
- EnsLoss: Stochastic Calibrated Loss Ensembles for Preventing Overfitting in Classification [1.3778851745408134]
We propose a novel ensemble method, namely EnsLoss, to combine loss functions within the empirical risk minimization framework.
We first transform the CC conditions of losses into loss-derivatives, thereby bypassing the need for explicit loss functions.
We theoretically establish the statistical consistency of our approach and provide insights into its benefits.
arXiv Detail & Related papers (2024-09-02T02:40:42Z) - Binary Losses for Density Ratio Estimation [2.512309434783062]
Estimating the ratio of two probability densities is a central problem in machine learning and statistics.
We provide a simple recipe for constructing loss functions with certain properties, such as loss functions that prioritize an accurate estimation of large values.
This contrasts with classical loss functions, such as the logistic loss or boosting loss, which prioritize accurate estimation of small values.
arXiv Detail & Related papers (2024-07-01T15:24:34Z) - Triple Component Matrix Factorization: Untangling Global, Local, and Noisy Components [13.989390077752232]
We solve the problem of common and unique feature extraction from noisy data.
Despite the intricate nature of the problem, we provide a Taylor series characterization by solving the corresponding Karush-Kuhn-Tucker conditions.
Numerical experiments in video segmentation and anomaly detection highlight the superior feature extraction abilities of TCMF.
arXiv Detail & Related papers (2024-03-21T14:41:12Z) - On the Error-Propagation of Inexact Hotelling's Deflation for Principal Component Analysis [8.799674132085935]
This paper mathematically characterizes the error propagation of the inexact Hotelling's deflation method.
We explicitly characterize how the errors progress and affect subsequent principal component estimations.
arXiv Detail & Related papers (2023-10-06T14:33:21Z) - Mitigating the Effect of Incidental Correlations on Part-based Learning [50.682498099720114]
Part-based representations could be more interpretable and generalize better with limited data.
We present two innovative regularization methods for part-based representations.
We exhibit state-of-the-art (SoTA) performance on few-shot learning tasks on benchmark datasets.
arXiv Detail & Related papers (2023-09-30T13:44:48Z) - Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss of the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
arXiv Detail & Related papers (2023-06-13T01:18:16Z) - Gleo-Det: Deep Convolution Feature-Guided Detector with Local Entropy Optimization for Salient Points [5.955667705173262]
We propose to achieve a fine constraint based on the requirement of repeatability, and a coarse constraint guided by deep convolution features.
With the guidance of convolution features, we define the cost function from both positive and negative sides.
arXiv Detail & Related papers (2022-04-27T12:40:21Z) - Asymmetric Loss Functions for Learning with Noisy Labels [82.50250230688388]
We propose a new class of loss functions, namely asymmetric loss functions, which are robust to learning with noisy labels for various types of noise.
Experimental results on benchmark datasets demonstrate that asymmetric loss functions can outperform state-of-the-art methods.
arXiv Detail & Related papers (2021-06-06T12:52:48Z) - Universal Online Convex Optimization Meets Second-order Bounds [74.0120666722487]
We propose a simple strategy for universal online convex optimization.
The key idea is to construct a set of experts to process the original online functions, and deploy a meta-algorithm over the linearized losses.
In this way, we can plug in off-the-shelf online solvers as black-box experts to deliver problem-dependent regret bounds.
arXiv Detail & Related papers (2021-05-08T11:43:49Z) - Supervised Learning: No Loss No Cry [51.07683542418145]
Supervised learning requires the specification of a loss function to minimise.
This paper revisits the SLIsotron algorithm of Kakade et al. (2011) through a novel lens.
We show how it provides a principled procedure for learning the loss.
arXiv Detail & Related papers (2020-02-10T05:30:52Z) - Unsupervised Learning of the Set of Local Maxima [105.60049730557706]
Two functions are learned: (i) a set indicator c, which is a binary classifier, and (ii) a comparator function h that given two nearby samples, predicts which sample has the higher value of the unknown function v.
Loss terms are used to ensure that all training samples x are local maxima of v, according to h, and satisfy c(x)=1.
We present an algorithm, show an example where it is more efficient to use local maxima as an indicator function than to employ conventional classification, and derive a suitable generalization bound.
arXiv Detail & Related papers (2020-01-14T19:56:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.