Variational Positive-incentive Noise: How Noise Benefits Models
- URL: http://arxiv.org/abs/2306.07651v1
- Date: Tue, 13 Jun 2023 09:43:32 GMT
- Title: Variational Positive-incentive Noise: How Noise Benefits Models
- Authors: Hongyuan Zhang, Sida Huang, Xuelong Li
- Abstract summary: We investigate how classical models can benefit from random noise under the framework of Positive-incentive Noise (Pi-Noise).
Since the ideal objective of Pi-Noise is intractable, we propose to optimize its variational bound instead, namely variational Pi-Noise (VPN).
- Score: 84.67629229767047
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A large number of works aim to alleviate the impact of noise, owing to
the conventional assumption that noise plays a negative role. However, some
existing works show that this assumption does not always hold. In this paper, we
investigate how classical models can benefit from random noise under the
framework of Positive-incentive Noise (Pi-Noise). Since the ideal objective of
Pi-Noise is intractable, we propose to optimize its variational bound instead,
namely variational Pi-Noise (VPN). Via variational inference, a VPN generator
implemented by neural networks is designed to enhance base models and simplify
their inference, without changing their architecture. Since base models and VPN
generators are designed independently, the generator can work with most existing
models. Experiments show that the proposed VPN generator can improve the base
models. Appealingly, the trained VPN generator tends to blur the irrelevant
ingredients of complicated images, which matches our expectations.
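To make the setup concrete, here is a minimal sketch of the idea in PyTorch, assuming a Gaussian noise generator trained with the reparameterization trick and a KL regularizer; the module and variable names are illustrative, not the authors' code.

    import torch
    import torch.nn as nn

    class VPNGenerator(nn.Module):
        # Predicts a per-pixel Gaussian noise distribution and samples it
        # with the reparameterization trick; the base model is untouched.
        def __init__(self, channels=3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 2 * channels, 3, padding=1),  # mean, log-variance
            )

        def forward(self, x):
            mu, logvar = self.net(x).chunk(2, dim=1)
            noise = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
            kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).mean()
            return x + noise, kl

    base_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # any classifier
    gen = VPNGenerator()
    opt = torch.optim.Adam(gen.parameters(), lr=1e-4)

    # One training step: the generator is optimized so that the noisy input
    # still supports the base model's task loss, plus the KL term from the bound.
    x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
    noisy_x, kl = gen(x)
    loss = nn.functional.cross_entropy(base_model(noisy_x), y) + 0.1 * kl
    opt.zero_grad(); loss.backward(); opt.step()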
Related papers
- Enhance Vision-Language Alignment with Noise [59.2608298578913]
We investigate whether a frozen model can be fine-tuned by customized noise.
We propose Positive-incentive Noise (PiNI), which can fine-tune CLIP by injecting noise into both the visual and text encoders (a rough sketch follows below).
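As a loose illustration of this recipe (not the paper's code: the encoders below are stand-ins for CLIP's frozen towers, and the learnable noise scales are our assumption):

    import torch
    import torch.nn as nn

    # Placeholder encoders standing in for CLIP's frozen visual/text towers.
    image_encoder, text_encoder = nn.Linear(512, 256), nn.Linear(512, 256)
    for p in list(image_encoder.parameters()) + list(text_encoder.parameters()):
        p.requires_grad_(False)  # the towers stay frozen

    # Only these learnable noise scales would be fine-tuned (illustrative).
    img_logscale = nn.Parameter(torch.zeros(512))
    txt_logscale = nn.Parameter(torch.zeros(512))

    img, txt = torch.randn(8, 512), torch.randn(8, 512)
    z_img = image_encoder(img + img_logscale.exp() * torch.randn_like(img))
    z_txt = text_encoder(txt + txt_logscale.exp() * torch.randn_like(txt))
    # z_img / z_txt would then feed the usual CLIP contrastive alignment loss.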
arXiv Detail & Related papers (2024-12-14T12:58:15Z)
- Robust Neural Processes for Noisy Data [1.7268667700090563]
We study the behavior of in-context learning models when data is contaminated by noise.
We find that the models that perform best on clean data are different from those that perform best on noisy data.
We propose a simple method to train NP models that makes them more robust to noisy data.
arXiv Detail & Related papers (2024-11-03T20:00:55Z)
- Data Augmentation of Contrastive Learning is Estimating Positive-incentive Noise [54.24688963649581]
We scientifically investigate the connection between contrastive learning and $\pi$-noise.
Inspired by the idea of Positive-incentive Noise (Pi-Noise or $\pi$-Noise), which aims at learning reliable noise beneficial to tasks, we develop a $\pi$-noise generator.
arXiv Detail & Related papers (2024-08-19T12:07:42Z)
- Adaptive Differential Privacy in Federated Learning: A Priority-Based Approach [0.0]
Federated learning (FL) develops global models without direct access to local datasets.
Differential privacy (DP) offers a framework that gives a privacy guarantee by adding calibrated noise to parameters.
We propose adaptive noise addition in FL, which decides the amount of injected noise based on features' relative importance (illustrated in the sketch below).
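A toy sketch of one plausible reading of this idea (the importance weights and the scaling rule are our assumptions, not the paper's exact mechanism):

    import numpy as np

    def adaptive_dp_noise(params, importance, base_sigma=1.0):
        # More important coordinates receive less noise; the importance
        # vector is normalized so the scaling is relative, not absolute.
        importance = importance / importance.sum()
        sigma = base_sigma * (1.0 - importance)
        return params + np.random.normal(0.0, sigma)

    params = np.array([0.5, -1.2, 3.0])
    importance = np.array([0.7, 0.2, 0.1])  # e.g., from gradient magnitudes
    print(adaptive_dp_noise(params, importance))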
arXiv Detail & Related papers (2024-01-04T03:01:15Z)
- Generative Plug and Play: Posterior Sampling for Inverse Problems [4.417934991211913]
Plug-and-Play (PnP) has become a popular method for reconstructing images using a framework consisting of a forward model and a prior model.
We present experimental simulations using the well-known BM3D denoiser.
arXiv Detail & Related papers (2023-06-12T16:49:08Z)
- Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation, so it can be integrated with neural networks seamlessly.
arXiv Detail & Related papers (2021-12-07T11:26:41Z)
- Asymmetric Heavy Tails and Implicit Bias in Gaussian Noise Injections [73.95786440318369]
We focus on the so-called 'implicit effect' of Gaussian noise injections (GNIs), which is the effect of the injected noise on the dynamics of stochastic gradient descent (SGD).
We show that this effect induces an asymmetric heavy-tailed noise on gradient updates.
We then formally prove that GNIs induce an 'implicit bias', which varies depending on the heaviness of the tails and the level of asymmetry.
arXiv Detail & Related papers (2021-02-13T21:28:09Z)
- Noise-Equipped Convolutional Neural Networks [15.297063646935078]
Convolutional Neural Networks (CNNs) have been widely employed in image synthesis and translation tasks.
When a CNN is fed a flat input, the transformation degrades into a scaling operation due to the spatially shared nature of convolution kernels (demonstrated in the sketch below).
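This degradation is easy to verify in a few lines (a toy check of the stated property, not code from the paper):

    import torch
    import torch.nn as nn

    # A flat (constant) image through a convolution: weight sharing makes
    # every interior output pixel the same affine function of the input
    # value, so the layer reduces to a scaling operation.
    conv = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
    flat = torch.full((1, 1, 8, 8), 2.0)
    out = conv(flat)
    print(out[0, 0, 1:-1, 1:-1].std())               # ~0: interior is constant
    print(out[0, 0, 4, 4] / 2.0, conv.weight.sum())  # ratio equals the kernel sum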
arXiv Detail & Related papers (2020-12-09T09:01:45Z)
- A Contrastive Learning Approach for Training Variational Autoencoder Priors [137.62674958536712]
Variational autoencoders (VAEs) are one of the powerful likelihood-based generative models with applications in many domains.
One explanation for VAEs' poor generative quality is the prior hole problem: the prior distribution fails to match the aggregate approximate posterior.
We propose an energy-based prior defined by the product of a base prior distribution and a reweighting factor, designed to bring the base prior closer to the aggregate posterior (written out below).
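In symbols (our notation, not necessarily the paper's), the stated construction is

    p(z) \;=\; \frac{p_{\mathrm{base}}(z)\, r(z)}{Z},
    \qquad
    Z \;=\; \int p_{\mathrm{base}}(z)\, r(z)\, \mathrm{d}z,

where $r(z)$ is the learned reweighting factor and the target is the aggregate posterior $q(z) = \mathbb{E}_{p_{\mathrm{data}}(x)}[\, q(z \mid x) \,]$.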
arXiv Detail & Related papers (2020-10-06T17:59:02Z)
- BatVision with GCC-PHAT Features for Better Sound to Vision Predictions [5.9514420658483935]
We train a generative adversarial network to predict plausible depth maps and grayscale layouts from sound.
We build upon previous work with BatVision, which consists of a sound-to-vision model and a self-collected dataset.
arXiv Detail & Related papers (2020-06-14T19:49:58Z)