dugMatting: Decomposed-Uncertainty-Guided Matting
- URL: http://arxiv.org/abs/2306.01452v1
- Date: Fri, 2 Jun 2023 11:19:50 GMT
- Title: dugMatting: Decomposed-Uncertainty-Guided Matting
- Authors: Jiawei Wu, Changqing Zhang, Zuoyong Li, Huazhu Fu, Xi Peng, Joey
Tianyi Zhou
- Abstract summary: We propose a decomposed-uncertainty-guided matting algorithm, which explores the explicitly decomposed uncertainties to efficiently and effectively improve the results.
The proposed matting framework relieves users of the need to determine the interaction areas themselves, requiring only simple and efficient labeling.
- Score: 83.71273621169404
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cutting out an object and estimating its opacity mask, known as image
matting, is a key task in image and video editing. Because the problem is
highly ill-posed, additional inputs, typically user-defined trimaps or
scribbles, are usually needed to reduce the uncertainty. Although effective,
these inputs are either time-consuming to provide or only suitable for
experienced users who know where to place the
strokes. In this work, we propose a decomposed-uncertainty-guided matting
(dugMatting) algorithm, which explores the explicitly decomposed uncertainties
to efficiently and effectively improve the results. Based on the
characteristics of these uncertainties, the epistemic uncertainty is reduced in
the process of guiding interaction (which introduces prior knowledge), while
the aleatoric uncertainty is reduced in modeling data distribution (which
introduces statistics for both data and possible noise). The proposed matting
framework relieves users of the need to determine the interaction areas
themselves, requiring only simple and efficient labeling. Extensive
quantitative and
qualitative results validate that the proposed method significantly improves
the original matting algorithms in terms of both efficiency and efficacy.
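The decomposition the abstract relies on is the standard split of predictive uncertainty: epistemic uncertainty (model ignorance, reducible here by guided user interaction) versus aleatoric uncertainty (data noise, reduced by modeling the data distribution). Below is a minimal sketch of how such a per-pixel split can be computed with MC dropout; the model interface, the heteroscedastic variance head, and every name are illustrative assumptions, not the paper's actual architecture.

```python
import torch

def decompose_uncertainty(model, image, n_samples=16):
    """Split per-pixel predictive uncertainty into epistemic and aleatoric
    parts. Assumes `model(image)` returns (alpha_mean, alpha_logvar): an
    alpha-matte estimate plus a per-pixel log-variance head (a hypothetical
    interface, not dugMatting's actual one). Keeping dropout active at test
    time yields MC-dropout posterior samples.
    """
    model.train()  # keep dropout layers stochastic for MC sampling
    means, alea_vars = [], []
    with torch.no_grad():
        for _ in range(n_samples):
            alpha_mean, alpha_logvar = model(image)
            means.append(alpha_mean)
            alea_vars.append(alpha_logvar.exp())
    means = torch.stack(means)          # (S, ...) sampled matte estimates
    alea_vars = torch.stack(alea_vars)  # (S, ...) predicted data-noise maps
    epistemic = means.var(dim=0)        # disagreement across model samples
    aleatoric = alea_vars.mean(dim=0)   # average predicted observation noise
    return means.mean(dim=0), epistemic, aleatoric
```

Under this split, high-epistemic pixels are natural regions to surface to the user for the lightweight labeling described above, while high-aleatoric pixels are left to the noise model rather than to further interaction.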
Related papers
- Erasing Undesirable Influence in Diffusion Models [51.225365010401006]
Diffusion models are highly effective at generating high-quality images but pose risks, such as the unintentional generation of NSFW (not safe for work) content.
In this work, we introduce EraseDiff, an algorithm designed to preserve the utility of the diffusion model on retained data while removing the unwanted information associated with the data to be forgotten.
arXiv Detail & Related papers (2024-01-11T09:30:36Z)
- Identity Curvature Laplace Approximation for Improved Out-of-Distribution Detection [4.779196219827508]
Uncertainty estimation is crucial in safety-critical applications, where robust out-of-distribution detection is essential.
Traditional Bayesian methods, though effective, are often hindered by high computational demands.
We introduce the Identity Curvature Laplace Approximation (ICLA), a novel method that challenges the conventional posterior covariance formulation.
arXiv Detail & Related papers (2023-12-16T14:46:24Z)
- Scalable Bayesian Meta-Learning through Generalized Implicit Gradients [64.21628447579772]
The implicit Bayesian meta-learning (iBaML) method not only broadens the scope of learnable priors, but also quantifies the associated uncertainty.
Analytical error bounds are established to demonstrate the precision and efficiency of the generalized implicit gradient over the explicit one.
arXiv Detail & Related papers (2023-03-31T02:10:30Z)
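For the implicit meta-learning entry above: implicit gradients differentiate through the inner-loop optimum directly instead of unrolling the optimization trajectory. A generic sketch in the iMAML style, solving (I + H/lam) g = v with conjugate gradient and Hessian-vector products; this illustrates implicit gradients in general, not iBaML's generalized estimator or its error bounds, and all names are assumptions.

```python
import torch

def implicit_meta_grad(inner_loss, outer_loss, phi, lam=1.0, cg_steps=10):
    """Implicit meta-gradient sketch (iMAML-style, hypothetical names):
    instead of backpropagating through the inner-loop trajectory, solve
        (I + H/lam) g = v,  with v = d(outer_loss)/d(phi),
    where H is the Hessian of `inner_loss` at the adapted parameters `phi`
    (a single tensor here for simplicity).
    """
    v = torch.autograd.grad(outer_loss, phi, retain_graph=True)[0].detach()
    grad_inner = torch.autograd.grad(inner_loss, phi, create_graph=True)[0]

    def matvec(x):
        # (I + H/lam) x, with H x computed as a Hessian-vector product
        hvp = torch.autograd.grad(
            grad_inner, phi, grad_outputs=x, retain_graph=True)[0]
        return x + hvp / lam

    # Conjugate gradient: solve the linear system without ever forming H.
    g = torch.zeros_like(v)
    r, p = v.clone(), v.clone()
    rs_old = (r * r).sum()
    for _ in range(cg_steps):
        Ap = matvec(p)
        alpha = rs_old / (p * Ap).sum()
        g = g + alpha * p
        r = r - alpha * Ap
        rs_new = (r * r).sum()
        if rs_new.sqrt() < 1e-8:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return g  # meta-gradient to apply to the meta-parameters
```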
- A Learning-Based Optimal Uncertainty Quantification Method and Its Application to Ballistic Impact Problems [1.713291434132985]
This paper concerns the optimal (supremum and infimum) uncertainty bounds for systems where the input (or prior) measure is only partially/imperfectly known.
We demonstrate the learning-based framework on the uncertainty optimization problem.
We show that the approach can be used to construct maps for performance certification and safety in engineering practice.
arXiv Detail & Related papers (2022-12-28T14:30:53Z)
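To make the optimal-UQ entry above concrete: when the input measure is known only through moment constraints, the supremum and infimum of a quantity of interest over all admissible distributions become, after discretizing the support, a linear program in the probability weights. A toy sketch under assumed constraints (known mean, support on [0, 1]); the paper's point is to replace this kind of direct optimization with a learned framework.

```python
import numpy as np
from scipy.optimize import linprog

def ouq_bounds(f_vals, x_grid, mean):
    """Optimal (infimum, supremum) bounds on E[f(X)] over every distribution
    supported on `x_grid` with the given mean: a linear program in the
    probability weights. A toy discretization of the optimal-UQ setup, not
    the paper's learning-based method.
    """
    n = len(x_grid)
    A_eq = np.vstack([np.ones(n), x_grid])  # sum p = 1 and sum p*x = mean
    b_eq = np.array([1.0, mean])
    lo = linprog(f_vals, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * n)
    hi = linprog(-f_vals, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * n)
    return lo.fun, -hi.fun

# Hypothetical example: bound a failure probability P(X >= 0.8) over all
# distributions on [0, 1] with E[X] = 0.5.
xs = np.linspace(0.0, 1.0, 101)
f = (xs >= 0.8).astype(float)
print(ouq_bounds(f, xs, mean=0.5))  # (0.0, 0.625); sup matches Markov: 0.5/0.8
```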
- Interpolation-based Contrastive Learning for Few-Label Semi-Supervised Learning [43.51182049644767]
Semi-supervised learning (SSL) has long been proved to be an effective technique to construct powerful models with limited labels.
Regularization-based methods which force the perturbed samples to have similar predictions with the original ones have attracted much attention.
We propose a novel contrastive loss to guide the embedding of the learned network to change linearly between samples.
arXiv Detail & Related papers (2022-02-24T06:00:05Z)
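A minimal sketch of the interpolation idea in the entry above: mix pairs of inputs and require, contrastively, that the embedding of the mixture match the same mixture of the embeddings, with the rest of the batch as negatives. The encoder interface, the Beta(1, 1) mixing, and the InfoNCE form are assumptions for illustration, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def interpolation_contrastive_loss(encoder, x, tau=0.1):
    """Contrastive interpolation-consistency sketch: mix each sample with a
    shuffled partner and require (InfoNCE) the embedding of the mixture to
    be closest to the same mixture of the two embeddings, using the rest of
    the batch as negatives. Hypothetical names; not the paper's exact loss.
    """
    lam = torch.distributions.Beta(1.0, 1.0).sample().item()
    perm = torch.randperm(x.size(0), device=x.device)
    x_mix = lam * x + (1 - lam) * x[perm]

    z_mix = F.normalize(encoder(x_mix), dim=1)
    with torch.no_grad():  # interpolation targets carry no gradient
        z = F.normalize(encoder(x), dim=1)
        z_target = F.normalize(lam * z + (1 - lam) * z[perm], dim=1)

    logits = z_mix @ z_target.t() / tau                # (B, B) similarities
    labels = torch.arange(x.size(0), device=x.device)  # positives: diagonal
    return F.cross_entropy(logits, labels)
```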
- Partial Identification with Noisy Covariates: A Robust Optimization Approach [94.10051154390237]
Causal inference from observational datasets often relies on measuring and adjusting for covariates.
We show that this robust optimization approach can extend a wide range of causal adjustment methods to perform partial identification.
Across synthetic and real datasets, we find that this approach provides ATE bounds with a higher coverage probability than existing methods.
arXiv Detail & Related papers (2022-02-22T04:24:26Z)
- Low-Regret Active Learning [64.36270166907788]
We develop an online learning algorithm for identifying unlabeled data points that are most informative for training.
At the core of our work is an efficient algorithm for sleeping experts that is tailored to achieve low regret on predictable (easy) instances.
arXiv Detail & Related papers (2021-04-06T22:53:45Z)
- Mitigating Bias in Set Selection with Noisy Protected Attributes [16.882719401742175]
We show that, in the presence of noisy protected attributes, attempting to increase fairness without accounting for the noise can in fact decrease the fairness of the result!
We formulate a "denoised" selection problem which functions for a large class of fairness metrics.
Our empirical results show that this approach can produce subsets which significantly improve the fairness metrics despite the presence of noisy protected attributes.
arXiv Detail & Related papers (2020-11-09T06:45:15Z)
- Semantic Neighborhood-Aware Deep Facial Expression Recognition [14.219890078312536]
A novel method is proposed to formulate semantic perturbation and select unreliable samples during training.
Experiments show the effectiveness of the proposed method and state-of-the-art results are reported.
arXiv Detail & Related papers (2020-04-27T11:48:17Z)
- Hidden Cost of Randomized Smoothing [72.93630656906599]
In this paper, we point out the side effects of current randomized smoothing.
Specifically, we articulate and prove two major points: 1) the decision boundaries of smoothed classifiers will shrink, resulting in disparity in class-wise accuracy; 2) applying noise augmentation in the training process does not necessarily resolve the shrinking issue due to the inconsistent learning objectives.
arXiv Detail & Related papers (2020-03-02T23:37:42Z)
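For the randomized-smoothing entry above, a minimal majority-vote predictor in the style of Cohen et al. (the certification and abstention machinery is omitted); comparing per-class accuracy between the base classifier and this smoothed version is one way to observe the class-wise disparity the paper proves. The classifier interface and sample count are illustrative assumptions.

```python
import torch

def smoothed_predict(classifier, x, sigma=0.25, n_samples=100):
    """Majority-vote prediction of a randomized-smoothing classifier:
    classify Gaussian perturbations of the input and return the most
    frequent class per example. Certification and abstention are omitted;
    names and the interface are illustrative assumptions.
    """
    with torch.no_grad():
        num_classes = classifier(x).shape[1]
        counts = torch.zeros(x.shape[0], num_classes, device=x.device)
        for _ in range(n_samples):
            noisy = x + sigma * torch.randn_like(x)
            preds = classifier(noisy).argmax(dim=1)
            counts[torch.arange(x.shape[0], device=x.device), preds] += 1
    return counts.argmax(dim=1)

# Comparing per-class accuracy of `classifier` and `smoothed_predict` on a
# validation set directly exposes the shrinking-induced accuracy disparity.
```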