Learning Noise-Aware Encoder-Decoder from Noisy Labels by Alternating
Back-Propagation for Saliency Detection
- URL: http://arxiv.org/abs/2007.12211v1
- Date: Thu, 23 Jul 2020 18:47:36 GMT
- Title: Learning Noise-Aware Encoder-Decoder from Noisy Labels by Alternating
Back-Propagation for Saliency Detection
- Authors: Jing Zhang, Jianwen Xie, Nick Barnes
- Abstract summary: We propose a noise-aware encoder-decoder framework to disentangle a clean saliency predictor from noisy training examples.
The proposed model consists of two sub-models parameterized by neural networks.
- Score: 54.98042023365694
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a noise-aware encoder-decoder framework to
disentangle a clean saliency predictor from noisy training examples, where the
noisy labels are generated by unsupervised handcrafted feature-based methods.
The proposed model consists of two sub-models parameterized by neural networks:
(1) a saliency predictor that maps input images to clean saliency maps, and (2)
a noise generator, which is a latent variable model that produces noise from
Gaussian latent vectors. The full model, which represents the noisy labels, is the sum
of the two sub-models. The goal of training the model is to estimate the
parameters of both sub-models, and simultaneously infer the corresponding
latent vector of each noisy label. We propose to train the model by using an
alternating back-propagation (ABP) algorithm, which alternates the following
two steps: (1) learning back-propagation for estimating the parameters of the two
sub-models by gradient ascent, and (2) inferential back-propagation for
inferring the latent vectors of the noisy training examples by Langevin dynamics.
To prevent the network from converging to trivial solutions, we utilize an
edge-aware smoothness loss to regularize hidden saliency maps to have similar
structures to those of their corresponding images. Experimental results on several
benchmark datasets indicate the effectiveness of the proposed model.
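To make the training procedure concrete, here is a minimal PyTorch sketch of the additive model Y = f_theta(X) + g_omega(Z) and one round of the two ABP steps, including an edge-aware smoothness regularizer. The toy architectures, the Gaussian likelihood with noise level sigma, the Langevin step size and step count, and the 0.1 loss weight are all illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn.functional as F

class SaliencyPredictor(torch.nn.Module):
    """Encoder-decoder f_theta: image -> clean saliency map (toy architecture)."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(),
            torch.nn.Conv2d(16, 1, 3, padding=1), torch.nn.Sigmoid())

    def forward(self, x):
        return self.net(x)

class NoiseGenerator(torch.nn.Module):
    """Latent-variable model g_omega: Gaussian latent z -> per-pixel noise map."""
    def __init__(self, z_dim=8, size=64):
        super().__init__()
        self.size = size
        self.net = torch.nn.Sequential(
            torch.nn.Linear(z_dim, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, size * size))

    def forward(self, z):
        return self.net(z).view(-1, 1, self.size, self.size)

def edge_aware_smoothness(sal, img, alpha=10.0):
    """Penalize saliency gradients except where the image itself has edges,
    discouraging trivial solutions with structure unrelated to the image."""
    ds_x = (sal[..., :, 1:] - sal[..., :, :-1]).abs()
    ds_y = (sal[..., 1:, :] - sal[..., :-1, :]).abs()
    di_x = (img[..., :, 1:] - img[..., :, :-1]).abs().mean(1, keepdim=True)
    di_y = (img[..., 1:, :] - img[..., :-1, :]).abs().mean(1, keepdim=True)
    return (ds_x * torch.exp(-alpha * di_x)).mean() + \
           (ds_y * torch.exp(-alpha * di_y)).mean()

def langevin_infer(z, g, residual, sigma=0.3, step=0.1, n_steps=20):
    """Inferential back-propagation: sample z given the residual Y - f(X) via
    Langevin dynamics, z <- z + (step^2 / 2) * grad_z log p(z, Y|X) + step * eps."""
    z = z.clone().requires_grad_(True)
    for _ in range(n_steps):
        log_joint = (-((residual - g(z)) ** 2).sum() / (2 * sigma ** 2)
                     - 0.5 * (z ** 2).sum())   # log p(Y|X,z) + log p(z)
        grad = torch.autograd.grad(log_joint, z)[0]
        z = (z + 0.5 * step ** 2 * grad
             + step * torch.randn_like(z)).detach().requires_grad_(True)
    return z.detach()

# One illustrative update on a synthetic batch.
f, g = SaliencyPredictor(), NoiseGenerator()
opt = torch.optim.Adam(list(f.parameters()) + list(g.parameters()), lr=1e-4)
images = torch.rand(4, 3, 64, 64)        # stand-in RGB images
noisy_labels = torch.rand(4, 1, 64, 64)  # stand-in handcrafted pseudo-labels

# Step 1: inferential back-propagation for the latent vectors.
z = langevin_infer(torch.randn(4, 8), g, noisy_labels - f(images).detach())
# Step 2: learning back-propagation for the parameters of both sub-models.
sal = f(images)
recon = F.mse_loss(sal + g(z), noisy_labels)    # fit Y ~ f(X) + g(z)
loss = recon + 0.1 * edge_aware_smoothness(sal, images)
opt.zero_grad(); loss.backward(); opt.step()
```

Under the Gaussian likelihood assumed here, minimizing the reconstruction loss is equivalent (up to a constant) to the gradient ascent on the log-likelihood that the abstract describes; in a full implementation each training example would keep its own latent vector, warm-started between epochs.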
Related papers
- Bayesian identification of nonseparable Hamiltonians with multiplicative noise using deep learning and reduced-order modeling [0.5999777817331317]
This paper presents a structure-preserving Bayesian approach for learning nonseparable Hamiltonian systems.
We develop a novel algorithm for cost-effective application of Bayesian system identification to high-dimensional systems.
We show that using the Bayesian posterior as a training objective can yield upwards of 724 times improvement in Hamiltonian mean squared error.
arXiv Detail & Related papers (2024-01-23T04:05:26Z)
- Label Denoising through Cross-Model Agreement [43.5145547124009]
Memorizing noisy labels can affect model learning and lead to sub-optimal performance.
We propose a novel framework to learn robust machine-learning models from noisy labels.
arXiv Detail & Related papers (2023-08-27T00:31:04Z)
- Doubly Stochastic Models: Learning with Unbiased Label Noises and Inference Stability [85.1044381834036]
We investigate the implicit regularization effects of label noises under mini-batch sampling settings of gradient descent.
We find that this implicit regularizer favors convergence points that stabilize model outputs against perturbations of the parameters.
Our work does not assume that SGD behaves as an Ornstein-Uhlenbeck-like process, and it achieves a more general result with the convergence of the approximation proved.
arXiv Detail & Related papers (2023-04-01T14:09:07Z)
- Denoising Deep Generative Models [23.19427801594478]
Likelihood-based deep generative models have been shown to exhibit pathological behaviour under the manifold hypothesis.
We propose two methodologies aimed at addressing this problem.
arXiv Detail & Related papers (2022-11-30T19:00:00Z)
- On Robust Learning from Noisy Labels: A Permutation Layer Approach [53.798757734297986]
This paper introduces a permutation layer learning approach, termed PermLL, to dynamically calibrate the training process of a deep neural network (DNN).
We provide two variants of PermLL in this paper: one applies the permutation layer to the model's prediction, while the other applies it directly to the given noisy label.
We validate PermLL experimentally and show that it achieves state-of-the-art performance on both real and synthetic datasets.
arXiv Detail & Related papers (2022-11-29T03:01:48Z)
- Noise Distribution Adaptive Self-Supervised Image Denoising using Tweedie Distribution and Score Matching [29.97769511276935]
We show that Tweedie distributions play key roles in the modern deep learning era, leading to a distribution-independent self-supervised image denoising formula that requires no clean reference images.
Specifically, by combining the recent Noise2Score self-supervised image denoising approach with the saddle-point approximation of the Tweedie distribution, we can derive a general closed-form denoising formula.
We show that the proposed method accurately estimates noise models and parameters, and it achieves state-of-the-art self-supervised image denoising performance on benchmark and real-world datasets (the Gaussian case of the Tweedie formula is sketched after this list).
arXiv Detail & Related papers (2021-12-05T04:36:08Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
- Uncertainty Inspired RGB-D Saliency Detection [70.50583438784571]
We propose the first framework to employ uncertainty for RGB-D saliency detection by learning from the data labeling process.
Inspired by the saliency data labeling process, we propose a generative architecture to achieve probabilistic RGB-D saliency detection.
Results on six challenging RGB-D benchmark datasets show our approach's superior performance in learning the distribution of saliency maps.
arXiv Detail & Related papers (2020-09-07T13:01:45Z)
- Set Based Stochastic Subsampling [85.5331107565578]
We propose a set-based two-stage end-to-end neural subsampling model that is jointly optimized with an arbitrary downstream task network.
We show that it outperforms the relevant baselines under low subsampling rates on a variety of tasks including image classification, image reconstruction, function reconstruction and few-shot classification.
arXiv Detail & Related papers (2020-06-25T07:36:47Z)
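As background for the Tweedie-distribution entry above, the Gaussian special case of the closed-form denoiser it mentions is the classical Tweedie formula. This is a standard result stated here for context only; the paper itself generalizes beyond the Gaussian case to other exponential-family noise models via a saddle-point approximation.

```latex
% For y = x + n with n ~ N(0, \sigma^2 I), the posterior-mean (MMSE) denoiser
% follows directly from the score of the noisy marginal p(y):
\hat{x}(y) = \mathbb{E}[x \mid y] = y + \sigma^{2}\, \nabla_{y} \log p(y)
% A self-supervised estimate of the score (e.g. from Noise2Score) therefore
% yields a denoiser that needs no clean reference images.
```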
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.