Efficient and Flexible Sublabel-Accurate Energy Minimization
- URL: http://arxiv.org/abs/2206.09596v1
- Date: Mon, 20 Jun 2022 06:58:55 GMT
- Title: Efficient and Flexible Sublabel-Accurate Energy Minimization
- Authors: Zhakshylyk Nurlanov, Daniel Cremers, Florian Bernard
- Abstract summary: We address the problem of minimizing a class of energy functions consisting of data and smoothness terms.
Existing continuous optimization methods can find sublabel-accurate solutions, but they are not efficient for large label spaces.
We propose an efficient sublabel-accurate method that utilizes the best properties of both continuous and discrete models.
- Score: 62.50191141358778
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We address the problem of minimizing a class of energy functions consisting
of data and smoothness terms that commonly occur in machine learning, computer
vision, and pattern recognition. While discrete optimization methods are able
to give theoretical optimality guarantees, they can only handle a finite number
of labels and therefore suffer from label discretization bias. Existing
continuous optimization methods can find sublabel-accurate solutions, but they
are not efficient for large label spaces. In this work, we propose an efficient
sublabel-accurate method that utilizes the best properties of both continuous
and discrete models. We separate the problem into two sequential steps: (i)
global discrete optimization for selecting the label range, and (ii) efficient
continuous sublabel-accurate local refinement of a convex approximation of the
energy function in the chosen range. This yields a substantial gain in time
and memory efficiency while keeping accuracy practically at the level of
continuous convex relaxation methods, and in addition provides theoretical
optimality guarantees at the level of discrete methods. Finally, we show that
the proposed approach is flexible with respect to general pairwise smoothness
terms, making it applicable to a wide range of regularizations. Experiments on
the illustrative example of image denoising demonstrate the properties of the
proposed method. The code reproducing the experiments is available at
https://github.com/nurlanov-zh/sublabel-accurate-alpha-expansion.
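A minimal sketch of the two-step scheme on a toy 1D denoising problem (our
illustration, not the released implementation: exact Viterbi on a chain stands
in for alpha-expansion in step (i), and clipped gradient descent on a smoothed
convex surrogate stands in for the convex local refinement in step (ii)):
```python
# Toy illustration of the two-step scheme on 1D TV-regularized denoising.
import numpy as np

def denoise_two_step(f, num_labels=16, lam=0.5, iters=20):
    labels = np.linspace(f.min(), f.max(), num_labels)
    n, k = len(f), num_labels

    # Step (i): global discrete optimization on the label grid
    # (Viterbi / dynamic programming, exact for a 1D chain).
    unary = (labels[None, :] - f[:, None]) ** 2              # data term
    pair = lam * np.abs(labels[None, :] - labels[:, None])   # smoothness term
    cost = unary.copy()
    back = np.zeros((n, k), dtype=int)
    for i in range(1, n):
        total = cost[i - 1][:, None] + pair                  # k x k transitions
        back[i] = total.argmin(axis=0)
        cost[i] = unary[i] + total.min(axis=0)
    idx = np.zeros(n, dtype=int)
    idx[-1] = cost[-1].argmin()
    for i in range(n - 2, -1, -1):
        idx[i] = back[i + 1, idx[i + 1]]
    u = labels[idx]

    # Step (ii): continuous sublabel-accurate refinement inside the chosen
    # label range (quadratic data term + Huber-like smoothing of |.|).
    h = (labels[1] - labels[0]) / 2                          # half label spacing
    lo, hi = u - h, u + h
    eps = 1e-3
    for _ in range(iters):
        for i in range(n):
            grad = 2 * (u[i] - f[i])
            if i > 0:
                grad += lam * (u[i] - u[i - 1]) / (abs(u[i] - u[i - 1]) + eps)
            if i < n - 1:
                grad += lam * (u[i] - u[i + 1]) / (abs(u[i] - u[i + 1]) + eps)
            u[i] = np.clip(u[i] - 0.1 * grad, lo[i], hi[i])  # stay in range
    return u

rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + 0.2 * rng.standard_normal(100)
print(np.abs(denoise_two_step(noisy) - clean).mean())
```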
Related papers
- S-CFE: Simple Counterfactual Explanations [21.975560789792073]
We tackle the problem of finding manifold-aligned counterfactual explanations for sparse data.
Our approach effectively produces sparse, manifold-aligned counterfactual explanations.
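For intuition, a generic gradient-based counterfactual search with an L1
sparsity penalty looks as follows (an illustrative sketch, not the S-CFE
algorithm: the logistic model and penalty weight are assumptions, and the
manifold-alignment term is omitted):
```python
# Generic sketch of sparsity-regularized counterfactual search.
import numpy as np

def counterfactual(x, w, b, target=1, lam=0.1, lr=0.1, steps=500):
    """Find x' near x with sigmoid(w.x'+b) pushed toward `target`,
    using an L1 penalty on (x' - x) to encourage sparse changes."""
    xc = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w @ xc + b)))
        grad = (p - target) * w            # gradient of BCE w.r.t. xc
        xc -= lr * grad
        # proximal step for lam * ||xc - x||_1 (soft-thresholding)
        d = xc - x
        xc = x + np.sign(d) * np.maximum(np.abs(d) - lr * lam, 0.0)
    return xc

w, b = np.array([1.5, -2.0, 0.3]), -0.2
x = np.array([-1.0, 1.0, 0.0])             # currently classified toward 0
print(counterfactual(x, w, b))
```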
arXiv Detail & Related papers (2024-10-21T07:42:43Z) - Dual-Decoupling Learning and Metric-Adaptive Thresholding for Semi-Supervised Multi-Label Learning [81.83013974171364]
Semi-supervised multi-label learning (SSMLL) is a powerful framework for leveraging unlabeled data to reduce the expensive cost of collecting precise multi-label annotations.
Unlike in semi-supervised learning, one cannot simply select the most probable label as the pseudo-label in SSMLL, because an instance may contain multiple semantics.
We propose a dual-perspective method to generate high-quality pseudo-labels.
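One common ingredient of such methods is per-class adaptive thresholding of
predicted marginals; a minimal sketch (the quantile rule below is an
illustrative assumption, not the paper's metric-adaptive scheme):
```python
# Per-class adaptive thresholding for multi-label pseudo-labels.
import numpy as np

def pseudo_labels(probs, class_quantile=0.8):
    """probs: (n_unlabeled, n_classes) predicted marginals.
    Each class gets its own threshold, so rare classes are not
    suppressed by a single global cutoff."""
    thresholds = np.quantile(probs, class_quantile, axis=0)  # per class
    return (probs >= thresholds[None, :]).astype(int)

probs = np.array([[0.9, 0.2, 0.4],
                  [0.3, 0.1, 0.5],
                  [0.8, 0.05, 0.6]])
print(pseudo_labels(probs))
```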
arXiv Detail & Related papers (2024-07-26T09:33:53Z) - Discretize Relaxed Solution of Spectral Clustering via a Non-Heuristic
Algorithm [77.53604156112144]
We develop a first-order term to bridge the original problem and the discretization algorithm.
Since the non-heuristic method is aware of the original graph cut problem, the final discrete solution is more reliable.
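The baseline pipeline that such work improves on computes a relaxed spectral
embedding and then discretizes it heuristically; a minimal sketch with
k-means as the heuristic discretization (the paper's non-heuristic step is
not reproduced here):
```python
# Relaxed spectral embedding followed by heuristic k-means discretization.
import numpy as np

def spectral_clustering(W, k, iters=50, seed=0):
    d = W.sum(axis=1)
    L = np.diag(d) - W                      # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)
    H = vecs[:, :k]                         # relaxed solution: bottom-k eigenvectors
    # heuristic discretization: Lloyd's k-means on the embedding rows
    rng = np.random.default_rng(seed)
    centers = H[rng.choice(len(H), k, replace=False)]
    for _ in range(iters):
        assign = ((H[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = H[assign == j].mean(axis=0)
    return assign

# two dense blocks connected weakly
W = np.ones((6, 6)) * 0.01
W[:3, :3] = 1.0; W[3:, 3:] = 1.0
np.fill_diagonal(W, 0)
print(spectral_clustering(W, 2))
```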
arXiv Detail & Related papers (2023-10-19T13:57:38Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit analogies between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
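A toy illustration of the velocity-level constraint idea, with a single
inequality constraint handled by a closed-form half-space projection (our
simplification, not the paper's algorithm):
```python
# Constraints imposed on velocities rather than positions.
import numpy as np

def velocity_step(x, grad_f, g, grad_g, alpha=1.0, lr=0.1):
    """Pick velocity v closest to -grad_f(x) subject to
    grad_g(x) . v >= -alpha * g(x), then move x by lr * v."""
    v = -grad_f(x)
    a, b = grad_g(x), -alpha * g(x)
    if a @ v < b:                            # half-space constraint active
        v = v + (b - a @ v) / (a @ a) * a    # project v onto {a . v >= b}
    return x + lr * v

# minimize f(x) = |x - (2,2)|^2 subject to g(x) = 1 - |x|^2 >= 0 (unit disk)
f_grad = lambda x: 2 * (x - np.array([2.0, 2.0]))
g = lambda x: 1.0 - x @ x
g_grad = lambda x: -2 * x

x = np.array([0.0, 0.0])
for _ in range(200):
    x = velocity_step(x, f_grad, g, g_grad)
print(x, g(x))   # settles near the disk boundary toward (2, 2)
```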
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Optimizing Diffusion Rate and Label Reliability in a Graph-Based
Semi-supervised Classifier [2.4366811507669124]
The Local and Global Consistency (LGC) algorithm is one of the most well-known graph-based semi-supervised learning (GSSL) classifiers.
We discuss how removing the self-influence of a labeled instance may be beneficial, and how it relates to leave-one-out error.
Within this framework, we propose methods to estimate label reliability and diffusion rate.
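For reference, the classical LGC solution has a closed form with the diffusion
rate alpha as its key parameter; a minimal sketch (the paper's estimators for
label reliability and the removal of self-influence are not reproduced here):
```python
# Classical Local and Global Consistency (LGC) closed form.
import numpy as np

def lgc(W, Y, alpha=0.9):
    """F = (1 - alpha) (I - alpha S)^{-1} Y,
    with S the symmetrically normalized affinity matrix."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    n = len(W)
    F = (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * S, Y)
    return F.argmax(axis=1)          # predicted class per node

# tiny graph: two triangles joined by a weak edge, one label per class
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1
Y = np.zeros((6, 2)); Y[0, 0] = 1; Y[5, 1] = 1   # nodes 0 and 5 labeled
print(lgc(W, Y))   # expected: first triangle -> class 0, second -> class 1
```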
arXiv Detail & Related papers (2022-01-10T16:58:52Z) - Sparse Quadratic Optimisation over the Stiefel Manifold with Application
to Permutation Synchronisation [71.27989298860481]
We address the non-convex optimisation problem of finding a sparse matrix on the Stiefel manifold that maximises a quadratic objective function.
We propose a simple yet effective sparsity-promoting algorithm for finding the dominant eigenspace matrix.
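One generic way to promote sparsity while staying near the Stiefel manifold is
to interleave soft-thresholding with QR re-orthonormalization; a sketch under
that assumption (the paper's actual update rule may differ):
```python
# Orthogonal iteration for the dominant eigenspace with an interleaved
# soft-thresholding step, retracting back onto the Stiefel manifold via QR.
import numpy as np

def sparse_dominant_eigenspace(A, d, thresh=0.05, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.standard_normal((A.shape[0], d)))
    for _ in range(iters):
        V = A @ U                                           # power-iteration step
        V = np.sign(V) * np.maximum(np.abs(V) - thresh, 0)  # sparsify
        U, _ = np.linalg.qr(V)                              # retract onto Stiefel
    return U

A = np.diag([5.0, 4.0, 0.5, 0.4, 0.3])   # clear dominant 2-dim eigenspace
print(np.round(sparse_dominant_eigenspace(A, 2), 3))
```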
arXiv Detail & Related papers (2021-09-30T19:17:35Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth stochastic convex optimization have complexity bounds with undesirable dependence on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
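The core mechanism here is gradient norm clipping; a minimal clipped-SGD
sketch on a non-smooth objective with heavy-tailed noise, using a generic
1/sqrt(t) stepsize rather than the paper's proposed rules:
```python
# SGD with gradient norm clipping, which tames heavy-tailed noise.
import numpy as np

def clipped_sgd(grad_fn, x0, clip_level=1.0, steps=1000, seed=0):
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for t in range(1, steps + 1):
        g = grad_fn(x, rng)
        norm = np.linalg.norm(g)
        if norm > clip_level:
            g = g * (clip_level / norm)   # scale down, keep direction
        x -= (0.5 / np.sqrt(t)) * g       # generic 1/sqrt(t) stepsize
    return x

# non-smooth objective f(x) = ||x||_1 with heavy-tailed noise
# (Student-t with df=2 has infinite variance)
def grad_fn(x, rng):
    return np.sign(x) + rng.standard_t(df=2, size=x.shape)

print(clipped_sgd(grad_fn, np.array([5.0, -3.0])))   # iterates shrink toward 0
```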
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - Regularization in neural network optimization via trimmed stochastic
gradient descent with noisy label [2.66512000865131]
Regularization is essential for avoiding over-fitting to training data in neural network optimization.
We propose a first-order optimization method (Label-Noised Trim-SGD) which combines label noise with example trimming.
The proposed algorithm enables us to impose larger label noise and obtain a stronger regularization effect than the original methods.
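A sketch of the two ingredients on a linear classifier: inject random label
noise, then trim the highest-loss examples before each gradient step (trim
fraction and noise rate are illustrative assumptions, not the paper's
settings):
```python
# Label noise + example trimming on a logistic-regression step.
import numpy as np

def trim_sgd_step(w, X, y, rng, noise_rate=0.2, trim_frac=0.2, lr=0.1):
    y_noisy = np.where(rng.random(len(y)) < noise_rate, 1 - y, y)  # flip labels
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    loss = -(y_noisy * np.log(p + 1e-12) + (1 - y_noisy) * np.log(1 - p + 1e-12))
    keep = np.argsort(loss)[: int(len(y) * (1 - trim_frac))]       # drop worst
    grad = X[keep].T @ (p[keep] - y_noisy[keep]) / len(keep)
    return w - lr * grad

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = (X @ np.array([1.0, -1.0]) > 0).astype(float)
w = np.zeros(2)
for _ in range(300):
    w = trim_sgd_step(w, X, y, rng)
print(w)   # should roughly align with the direction (1, -1)
```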
arXiv Detail & Related papers (2020-12-21T01:31:53Z) - Approximate optimization of MAXCUT with a local spin algorithm [0.0]
Local tensor methods are a class of optimization algorithms introduced in [Hastings, arXiv:1905.07047v2].
We investigate performance in practice through benchmarking experiments on instances of the MAXCUT problem.
We find that the time to solution achieved by the local tensor method is highly uncorrelated with that achieved by a widely used commercial optimization package.
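For orientation, a basic greedy spin-flip local search for MAXCUT looks as
follows (a simple baseline, not Hastings' local tensor method):
```python
# Greedy spin-flip local search for MAXCUT.
import numpy as np

def local_search_maxcut(W, seed=0):
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=len(W))     # spin +-1 encodes the two sides
    improved = True
    while improved:
        improved = False
        for i in range(len(W)):
            # flipping spin i changes the cut value by s[i] * (W[i] . s)
            if s[i] * (W[i] @ s) > 0:
                s[i] = -s[i]
                improved = True
    cut = 0.25 * (W.sum() - s @ W @ s)       # cut value from spins
    return s, cut

W = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # 4-cycle: max cut = 4
print(local_search_maxcut(W))
```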
arXiv Detail & Related papers (2020-08-13T18:00:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.