Automated data-driven selection of the hyperparameters for
Total-Variation based texture segmentation
- URL: http://arxiv.org/abs/2004.09434v2
- Date: Tue, 12 May 2020 16:43:41 GMT
- Title: Automated data-driven selection of the hyperparameters for
Total-Variation based texture segmentation
- Authors: Barbara Pascal and Samuel Vaiter and Nelly Pustelnik and Patrice Abry
- Abstract summary: Generalized Stein Unbiased Risk Estimator is revisited to handle correlated Gaussian noise.
Problem formulation naturally entails inter-scale and spatially correlated noise.
- Score: 12.093824308505216
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Penalized Least Squares is widely used in signal and image
processing. Yet, it suffers from a major limitation: it requires fine-tuning
of the regularization parameters. Under assumptions on the noise probability
distribution, Stein-based approaches provide an unbiased estimator of the
quadratic risk. The Generalized Stein Unbiased Risk Estimator is revisited to
handle correlated Gaussian noise without requiring inversion of the covariance
matrix. Then, in order to avoid an expensive grid search, it is necessary to
design an algorithmic scheme that minimizes the quadratic risk with respect to
the regularization parameters. This work extends the Stein Unbiased GrAdient
estimator of the Risk (SUGAR) of Deledalle et al. to the case of correlated
Gaussian noise, deriving a general automatic tuning of regularization
parameters. First, the theoretical asymptotic unbiasedness of the gradient
estimator is demonstrated in the case of general correlated Gaussian noise.
Then, the proposed parameter selection strategy is particularized to fractal
texture segmentation, where the problem formulation naturally entails
inter-scale and spatially correlated noise. Numerical assessment is provided,
as well as a discussion of the practical issues.
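To make the tuning idea concrete, here is a minimal Monte Carlo sketch of SURE minimization for the simpler case of white Gaussian noise (the paper itself treats correlated noise and a gradient-based search rather than a grid). The soft-thresholding estimator, the coarse grid, and all variable names are illustrative assumptions, not the authors' code:

```python
import numpy as np

def soft_threshold(y, lam):
    # Proximal operator of the l1 norm: a simple penalized least-squares estimator.
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def sure_mc(y, estimator, sigma, eps=1e-3, rng=None):
    """Monte Carlo SURE for white Gaussian noise of standard deviation sigma.

    The divergence term is approximated by a finite difference in a random
    direction, which avoids forming the estimator's Jacobian explicitly.
    """
    rng = np.random.default_rng(rng)
    n = y.size
    delta = rng.standard_normal(n)
    x_hat = estimator(y)
    div = delta @ (estimator(y + eps * delta) - x_hat) / eps
    return np.sum((x_hat - y) ** 2) / n - sigma**2 + 2 * sigma**2 * div / n

# A gradient-based method would minimize SURE over lam directly;
# here a coarse scan is used purely for illustration.
rng = np.random.default_rng(0)
x_true = np.concatenate([np.zeros(500), np.ones(500)])
sigma = 0.5
y = x_true + sigma * rng.standard_normal(1000)
lams = np.linspace(0.1, 1.5, 15)
risks = [sure_mc(y, lambda z: soft_threshold(z, lam), sigma, rng=1) for lam in lams]
lam_star = lams[int(np.argmin(risks))]
```

The finite-difference divergence is the standard trick that makes such risk estimates computable for black-box estimators; the SUGAR approach summarized above additionally differentiates the risk estimate with respect to the regularization parameters.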
Related papers
- Information limits and Thouless-Anderson-Palmer equations for spiked matrix models with structured noise [19.496063739638924]
We consider a problem of Bayesian inference for a structured spiked model.
We show how to predict the statistical limits using an efficient algorithm inspired by the theory of adaptive Thouless-Anderson-Palmer equations.
arXiv Detail & Related papers (2024-05-31T16:38:35Z)
- Risk-Sensitive Diffusion for Perturbation-Robust Optimization [58.68233326265417]
We show that noisy samples induce an objective function different from the score-based one, so that optimizing the latter trains the model incorrectly.
We introduce the risk-sensitive SDE, a type of stochastic differential equation (SDE) parameterized by the risk vector.
We prove that zero instability measure is only achievable in the case where noisy samples are caused by Gaussian perturbation.
arXiv Detail & Related papers (2024-02-03T08:41:51Z)
- Batches Stabilize the Minimum Norm Risk in High Dimensional Overparameterized Linear Regression [21.83136833217205]
We show that batch partitioning offers useful trade-offs between computational efficiency and performance.
We suggest a natural small-batch version of the minimum-norm estimator, and derive an upper bound on its quadratic risk.
Our bound is derived via a novel combination of techniques, in particular normal approximation in the Wasserstein metric of noisy projections over random subspaces.
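The minimum-norm estimator and its small-batch variant described in this summary can be sketched as follows; the dimensions, the batch partition, and the simple averaging scheme are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, b = 200, 400, 4            # overparameterized regime: d > n; b batches
beta = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((n, d))
y = X @ beta + 0.1 * rng.standard_normal(n)

def min_norm(Xb, yb):
    # Minimum l2-norm interpolator: beta_hat = X^+ y (Moore-Penrose pseudoinverse).
    return np.linalg.pinv(Xb) @ yb

# Small-batch variant: average the per-batch minimum-norm solutions.
parts = np.array_split(np.arange(n), b)
beta_hat = np.mean([min_norm(X[idx], y[idx]) for idx in parts], axis=0)
risk = np.sum((beta_hat - beta) ** 2)   # quadratic risk of the averaged estimator
```

The trade-off the paper studies is between this cheap batched computation and the risk of the full minimum-norm solution; the averaging above is only one natural way to combine the batch estimates.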
arXiv Detail & Related papers (2023-06-14T11:02:08Z)
- The Optimal Noise in Noise-Contrastive Learning Is Not What You Think [80.07065346699005]
We show that deviating from this assumption can actually lead to better statistical estimators.
In particular, the optimal noise distribution differs from the data distribution, and can even belong to a different family.
arXiv Detail & Related papers (2022-03-02T13:59:20Z)
- Nonconvex Stochastic Scaled-Gradient Descent and Generalized Eigenvector Problems [98.34292831923335]
Motivated by the problem of online correlation analysis, we propose the Stochastic Scaled-Gradient Descent (SSD) algorithm.
We bring these ideas together in an application to online correlation analysis, deriving for the first time an optimal one-time-scale algorithm with an explicit rate of local convergence to normality.
arXiv Detail & Related papers (2021-12-29T18:46:52Z)
- Optimizing Information-theoretical Generalization Bounds via Anisotropic Noise in SGLD [73.55632827932101]
We optimize the information-theoretical generalization bound by manipulating the noise structure in SGLD.
We prove that, under a constraint guaranteeing low empirical risk, the optimal noise covariance is the square root of the expected gradient covariance.
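The matrix square root in this claim can be computed via an eigendecomposition; the following is a generic linear-algebra sketch, with C standing in for a hypothetical expected gradient covariance rather than anything computed from an actual SGLD run:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
C = A @ A.T                              # symmetric positive semidefinite covariance

# Matrix square root via eigendecomposition: C = V diag(w) V^T,
# so sqrt(C) = V diag(sqrt(w)) V^T (clipping guards tiny negative eigenvalues).
w, V = np.linalg.eigh(C)
C_sqrt = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

# Sanity check: C_sqrt @ C_sqrt should recover C up to numerical error.
err = np.linalg.norm(C_sqrt @ C_sqrt - C)
```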
arXiv Detail & Related papers (2021-10-26T15:02:27Z)
- Near-Optimal High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [63.304196997102494]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds whose dependence on the confidence level is either negative-power or logarithmic.
We propose novel stepsize rules for two gradient methods with clipping.
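A single clipped-gradient step, the basic building block behind such methods, can be sketched as follows; the stepsize rule itself is the paper's contribution and is not reproduced here, so the gamma and clip_level values are illustrative assumptions:

```python
import numpy as np

def clipped_step(x, grad, gamma, clip_level):
    # Rescale a (possibly heavy-tailed) stochastic gradient so its norm
    # never exceeds clip_level, then take a plain gradient step.
    g_norm = np.linalg.norm(grad)
    if g_norm > clip_level:
        grad = grad * (clip_level / g_norm)
    return x - gamma * grad

# Toy usage: minimize f(x) = ||x||^2 / 2 with heavy-tailed gradient corruption.
rng = np.random.default_rng(0)
x = np.ones(10)
for _ in range(200):
    noise = rng.standard_t(df=1.5, size=10)   # heavy-tailed (Student-t) noise
    x = clipped_step(x, x + 0.1 * noise, gamma=0.05, clip_level=1.0)
```

Clipping is what makes high-probability guarantees possible under heavy-tailed noise: each update is bounded by gamma * clip_level regardless of how large an individual noise sample is.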
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- Model-based multi-parameter mapping [0.0]
Quantitative MR imaging is increasingly favoured for its richer information content and standardised measures.
Estimation often relies on noisy subsets of the data to solve for different quantities in isolation.
Instead, a generative model can be formulated and inverted to jointly recover parameter estimates.
arXiv Detail & Related papers (2021-02-02T17:00:11Z)
- Variable selection for Gaussian process regression through a sparse projection [0.802904964931021]
This paper presents a new variable selection approach integrated with Gaussian process (GP) regression.
The choice of tuning parameters and the accuracy of the estimation are evaluated in simulations against some chosen benchmark approaches.
arXiv Detail & Related papers (2020-08-25T01:06:10Z)
- Shape Matters: Understanding the Implicit Bias of the Noise Covariance [76.54300276636982]
Noise in gradient descent provides a crucial implicit regularization effect for training overparameterized models.
We show that parameter-dependent noise -- induced by mini-batches or label perturbation -- is far more effective than Gaussian noise.
Our analysis reveals that parameter-dependent noise introduces a bias towards local minima with smaller noise variance, whereas spherical Gaussian noise does not.
arXiv Detail & Related papers (2020-06-15T18:31:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.