Can We Transfer Noise Patterns? A Multi-environment Spectrum Analysis Model Using Generated Cases
- URL: http://arxiv.org/abs/2308.01138v2
- Date: Mon, 14 Aug 2023 12:37:37 GMT
- Title: Can We Transfer Noise Patterns? A Multi-environment Spectrum Analysis Model Using Generated Cases
- Authors: Haiwen Du, Zheng Ju, Yu An, Honghui Du, Dongjie Zhu, Zhaoshuo Tian,
Aonghus Lawlor, Ruihai Dong
- Abstract summary: Spectral data-based testing devices suffer from complex noise patterns when deployed in non-laboratory environments.
We propose a noise-pattern transfer model, which takes the spectra of standard water samples in different environments as cases and learns the differences in their noise patterns.
We generate a sample-to-sample case-base to exclude the interference of sample-level noise on dataset-level noise learning.
- Score: 10.876490928902838
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spectrum analysis systems in online water quality testing are designed to
detect types and concentrations of pollutants and enable regulatory agencies to
respond promptly to pollution incidents. However, spectral data-based testing
devices suffer from complex noise patterns when deployed in non-laboratory
environments. To make the analysis model applicable to more environments, we
propose a noise-pattern transfer model that takes the spectra of standard water
samples in different environments as cases and learns the differences in their
noise patterns, thus enabling noise patterns to be transferred to unknown
samples. Unfortunately, the inevitable sample-level baseline noise prevents the
model from obtaining paired data that differ only in dataset-level
environmental noise. To address this problem, we generate a sample-to-sample
case-base that excludes the interference of sample-level noise from
dataset-level noise learning, enhancing the system's learning performance.
Experiments on spectral data with different background noises demonstrate the
strong noise-transfer ability of the proposed method against baselines ranging
from wavelet denoising to deep neural networks and generative models.
From this research, we posit that our method can enhance the performance of DL
models by generating high-quality cases. The source code is made publicly
available online at https://github.com/Magnomic/CNST.
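To make the case-base idea above concrete, the following is a minimal, illustrative sketch in Python. It is not the authors' CNST implementation (see the repository linked above); the pairing of lab and field spectra, the nearest-neighbour transfer step, and all names such as `build_case_base` and `transfer_noise` are assumptions made purely for illustration.

```python
import numpy as np

def build_case_base(lab_spectra, field_spectra):
    """Pair each standard sample's lab spectrum with its field spectrum,
    so that each case (ideally) differs only in the dataset-level
    environmental noise. Both inputs have shape (n_samples, n_wavelengths)."""
    assert lab_spectra.shape == field_spectra.shape
    residuals = field_spectra - lab_spectra  # environment-induced difference
    return list(zip(lab_spectra, residuals))

def transfer_noise(case_base, unknown_spectrum, k=3):
    """Estimate how an unknown sample would look in the target environment
    by borrowing residuals from the k most similar standard-sample cases."""
    clean = np.stack([c[0] for c in case_base])
    resid = np.stack([c[1] for c in case_base])
    distances = np.linalg.norm(clean - unknown_spectrum, axis=1)
    nearest = np.argsort(distances)[:k]
    return unknown_spectrum + resid[nearest].mean(axis=0)

# Toy usage with synthetic "spectra" (10 standard samples, 200 wavelength bins).
rng = np.random.default_rng(0)
lab = rng.normal(size=(10, 200)).cumsum(axis=1)                # smooth-ish curves
env_pattern = 0.5 * np.sin(np.linspace(0.0, 6.0, 200))         # shared dataset-level noise
field = lab + env_pattern + 0.05 * rng.normal(size=lab.shape)  # plus sample-level noise
cases = build_case_base(lab, field)
predicted_field_spectrum = transfer_noise(cases, lab[0])
```

In the paper's setting the transfer is learned by a neural model rather than a nearest-neighbour average; the sketch only illustrates why pairing spectra of the same standard sample across environments isolates dataset-level environmental noise from sample-level baseline noise.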
Related papers
- Bayesian Inference of General Noise Model Parameters from Surface Code's Syndrome Statistics [0.0]
We propose general noise model Bayesian inference methods that integrate the surface code's tensor network simulator.
For stationary noise, where the noise parameters are constant over time, we propose a method based on Markov chain Monte Carlo.
For time-varying noise, which is a more realistic situation, we introduce another method based on the sequential Monte Carlo.
arXiv Detail & Related papers (2024-06-13T10:26:04Z)
- One Noise to Rule Them All: Learning a Unified Model of Spatially-Varying Noise Patterns [33.293193191683145]
We present a single generative model which can learn to generate multiple types of noise as well as blend between them.
We also present an application of our model to improving inverse procedural material design.
arXiv Detail & Related papers (2024-04-25T02:23:11Z)
- Blue noise for diffusion models [50.99852321110366]
We introduce a novel and general class of diffusion models taking correlated noise within and across images into account.
Our framework allows introducing correlation across images within a single mini-batch to improve gradient flow.
We perform both qualitative and quantitative evaluations on a variety of datasets using our method.
arXiv Detail & Related papers (2024-02-07T14:59:25Z)
- DiffSED: Sound Event Detection with Denoising Diffusion [70.18051526555512]
We reformulate the SED problem by taking a generative learning perspective.
Specifically, we aim to generate sound temporal boundaries from noisy proposals in a denoising diffusion process.
During training, our model learns to reverse the noising process by converting noisy latent queries to the groundtruth versions.
arXiv Detail & Related papers (2023-08-14T17:29:41Z)
- Towards General Low-Light Raw Noise Synthesis and Modeling [37.87312467017369]
We introduce a new perspective to synthesize the signal-independent noise by a generative model.
Specifically, we synthesize the signal-dependent and signal-independent noise in a physics- and learning-based manner.
In this way, our method can be considered as a general model, that is, it can simultaneously learn different noise characteristics for different ISO levels.
arXiv Detail & Related papers (2023-07-31T09:10:10Z)
- An Investigation of Noise in Morphological Inflection [21.411766936034]
We investigate the types of noise encountered within a pipeline for truly unsupervised morphological paradigm completion.
We compare the effect of different types of noise on multiple state-of-the-art inflection models.
We propose a novel character-level masked language modeling (CMLM) pretraining objective and explore its impact on the models' resistance to noise.
arXiv Detail & Related papers (2023-05-26T02:14:34Z)
- Realistic Noise Synthesis with Diffusion Models [68.48859665320828]
Deep image denoising models often rely on a large amount of training data for high-quality performance.
We propose a novel method that synthesizes realistic noise using diffusion models, namely the Realistic Noise Synthesize Diffusor (RNSD).
RNSD can incorporate guided multiscale content, such that more realistic noise with spatial correlations can be generated at multiple frequencies.
arXiv Detail & Related papers (2023-05-23T12:56:01Z)
- Improving the Robustness of Summarization Models by Detecting and Removing Input Noise [50.27105057899601]
We present a large empirical study quantifying the sometimes severe loss in performance from different types of input noise for a range of datasets and model sizes.
We propose a light-weight method for detecting and removing such noise in the input during model inference without requiring any training, auxiliary models, or even prior knowledge of the type of noise.
arXiv Detail & Related papers (2022-12-20T00:33:11Z)
- Decision Forest Based EMG Signal Classification with Low Volume Dataset Augmented with Random Variance Gaussian Noise [51.76329821186873]
We produce a model that classifies six different hand gestures from a limited number of samples and generalizes well to a wider audience.
We appeal to a set of more elementary methods, such as the use of random bounds on a signal, but aim to show the power these methods can carry in an online setting.
arXiv Detail & Related papers (2022-06-29T23:22:18Z)
- Bridging the Gap Between Clean Data Training and Real-World Inference for Spoken Language Understanding [76.89426311082927]
Existing models are trained on clean data, which causes a gap between clean-data training and real-world inference.
We propose a method from the perspective of domain adaptation, by which both high- and low-quality samples are embedded into a similar vector space.
Experiments on the widely used Snips dataset and a large-scale in-house dataset (10 million training examples) demonstrate that this method not only outperforms the baseline models on a real-world (noisy) corpus but also enhances robustness, that is, it produces high-quality results in a noisy environment.
arXiv Detail & Related papers (2021-04-13T17:54:33Z)
- Analysing the Noise Model Error for Realistic Noisy Label Data [14.766574408868806]
We study the quality of estimated noise models from the theoretical side by deriving the expected error of the noise model.
We also publish NoisyNER, a new noisy label dataset from the NLP domain.
arXiv Detail & Related papers (2021-01-24T17:45:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.