Improve Noise Tolerance of Robust Loss via Noise-Awareness
- URL: http://arxiv.org/abs/2301.07306v2
- Date: Sun, 3 Sep 2023 03:15:38 GMT
- Title: Improve Noise Tolerance of Robust Loss via Noise-Awareness
- Authors: Kehui Ding, Jun Shu, Deyu Meng, Zongben Xu
- Abstract summary: We propose a meta-learning method capable of adaptively learning a hyperparameter prediction function, called the Noise-Aware-Robust-Loss-Adjuster (NARL-Adjuster for brevity).
We integrate four SOTA robust loss functions with our algorithm, and comprehensive experiments substantiate the general applicability and effectiveness of the proposed method in terms of both noise tolerance and performance.
- Score: 60.34670515595074
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robust loss minimization is an important strategy for handling the
problem of robust learning with noisy labels. Current approaches to designing
robust losses introduce noise-robust factors, i.e., hyperparameters, to
control the trade-off between noise robustness and learnability. However,
finding suitable hyperparameters for different datasets with noisy labels is a
challenging and time-consuming task. Moreover, existing robust loss methods
usually assume that all training samples share common hyperparameters that are
independent of individual instances. This limits the ability of these methods
to distinguish the noise properties of different samples and overlooks the
varying contributions of diverse training samples to helping models understand
underlying patterns. To address the above issues, we propose to equip robust
losses with instance-dependent hyperparameters to improve their noise
tolerance with a theoretical guarantee. To set such instance-dependent
hyperparameters, we propose a meta-learning method that adaptively learns a
hyperparameter prediction function, called the Noise-Aware-Robust-Loss-Adjuster
(NARL-Adjuster for brevity). Through mutual amelioration between the
hyperparameter prediction function and the classifier parameters, both can be
simultaneously refined and coordinated to attain solutions with good
generalization capability. We integrate four SOTA robust loss functions with
our algorithm, and comprehensive experiments substantiate the general
applicability and effectiveness of the proposed method in terms of both noise
tolerance and performance.
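The abstract describes a bilevel (meta-learning) scheme: an inner step updates the classifier under a robust loss whose hyperparameters are predicted per instance, and an outer step updates the predictor (the NARL-Adjuster) so that the updated classifier performs well on clean meta data. Below is a minimal, hypothetical sketch assuming generalized cross entropy (GCE) as the base robust loss and a functional linear classifier; the class name `NARLAdjuster`, the per-sample features (loss and prediction entropy), and all hyperparameters are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch of the bilevel loop described in the abstract. Assumptions:
# GCE as the base robust loss, a linear classifier (w, b) kept functional so
# the second-order meta-gradient stays explicit, loss/entropy as features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NARLAdjuster(nn.Module):
    """Maps per-sample statistics to an instance-dependent GCE hyperparameter q."""
    def __init__(self, in_dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, feats):
        # Clamp keeps q away from 0 (the CE limit) and 1 (the MAE limit).
        return self.net(feats).squeeze(-1).clamp(1e-3, 1 - 1e-3)

def gce_loss(logits, targets, q):
    """Generalized cross entropy (1 - p_y^q) / q with a per-sample q."""
    p_y = F.softmax(logits, dim=1).gather(1, targets[:, None]).squeeze(1)
    return ((1.0 - p_y.pow(q)) / q).mean()

def bilevel_step(w, b, adjuster, opt_a, noisy_batch, meta_batch, lr=0.1):
    x, y = noisy_batch
    with torch.no_grad():  # per-sample features fed to the adjuster
        logits = F.linear(x, w, b)
        ce = F.cross_entropy(logits, y, reduction="none")
        ent = -(F.softmax(logits, 1) * F.log_softmax(logits, 1)).sum(1)
    q = adjuster(torch.stack([ce, ent], dim=1))

    # Inner (virtual) classifier update under the instance-dependent robust loss.
    inner = gce_loss(F.linear(x, w, b), y, q)
    gw, gb = torch.autograd.grad(inner, (w, b), create_graph=True)

    # Outer update: meta loss on clean data, differentiated through the
    # virtual step and back into the adjuster's parameters.
    xm, ym = meta_batch
    opt_a.zero_grad()
    F.cross_entropy(F.linear(xm, w - lr * gw, b - lr * gb), ym).backward()
    opt_a.step()

    with torch.no_grad():  # commit the inner update to the classifier
        w -= lr * gw
        b -= lr * gb
        w.grad = b.grad = None  # drop grads accumulated by the meta backward
    return w, b
```

In the paper's actual method the classifier is a deep network rather than a linear model; keeping the classifier functional here simply makes the second-order meta-gradient explicit, a role libraries such as `higher` play for deep networks.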
Related papers
- Robust Learning under Hybrid Noise [24.36707245704713]
We propose a novel unified learning framework called "Feature and Label Recovery" (FLR) to combat the hybrid noise from the perspective of data recovery.
arXiv Detail & Related papers (2024-07-04T16:13:25Z)
- ROPO: Robust Preference Optimization for Large Language Models [59.10763211091664]
We propose an iterative alignment approach that integrates noise tolerance with the filtering of noisy samples, without the aid of external models.
Experiments on three widely-used datasets with Mistral-7B and Llama-2-7B demonstrate that ROPO significantly outperforms existing preference alignment methods; the filtering idea is sketched below.
arXiv Detail & Related papers (2024-04-05T13:58:51Z)
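The summary only names the ingredients (noise tolerance plus filtering), so the following is a hedged sketch of that general pattern rather than ROPO's actual objective: score each preference pair with a DPO-style per-pair loss and drop the lowest-margin (most likely noisy) pairs each iteration. The names `margin`, `beta`, and `keep_ratio` are illustrative assumptions.

```python
# Hedged illustration of filtering noisy preference pairs; NOT ROPO's exact
# objective. `margin` is the per-pair log-likelihood margin of the chosen over
# the rejected response (policy relative to reference).
import torch
import torch.nn.functional as F

def filtered_preference_loss(margin, beta=0.1, keep_ratio=0.8):
    per_pair = -F.logsigmoid(beta * margin)       # DPO-style per-pair loss
    k = max(1, int(keep_ratio * margin.numel()))
    keep = torch.topk(margin, k).indices          # keep the most confident pairs
    return per_pair[keep].mean()                  # filtering supplies robustness here
```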
- Inference Stage Denoising for Undersampled MRI Reconstruction [13.8086726938161]
Reconstruction of magnetic resonance imaging (MRI) data has been positively affected by deep learning.
A key challenge remains: to improve generalisation to distribution shifts between the training and testing data.
arXiv Detail & Related papers (2024-02-12T12:50:10Z)
- May the Noise be with you: Adversarial Training without Adversarial Examples [3.4673556247932225]
We investigate the question: can we obtain adversarially trained models without training on adversarial examples?
Our proposed approach incorporates inherent stochasticity by embedding Gaussian noise within the layers of the NN model at training time, as sketched below.
Our work contributes adversarially trained networks obtained via a completely different approach, with empirically similar robustness to adversarial training.
arXiv Detail & Related papers (2023-12-12T08:22:28Z)
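A minimal sketch of the mechanism described above: Gaussian noise embedded in the layers of the network at training time only. The wrapper design and the noise scale `sigma` are assumptions for illustration, not the paper's exact architecture.

```python
# Training-time Gaussian noise injection around arbitrary layers (illustrative).
import torch
import torch.nn as nn

class NoisyLayer(nn.Module):
    """Wraps a layer and perturbs its output with Gaussian noise while training."""
    def __init__(self, layer, sigma=0.1):
        super().__init__()
        self.layer, self.sigma = layer, sigma

    def forward(self, x):
        out = self.layer(x)
        if self.training:                          # noise only at training time
            out = out + self.sigma * torch.randn_like(out)
        return out

# Example: a small classifier with noise embedded in every layer.
model = nn.Sequential(
    NoisyLayer(nn.Linear(784, 256), sigma=0.1), nn.ReLU(),
    NoisyLayer(nn.Linear(256, 10), sigma=0.1),
)
```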
- Advancing Unsupervised Low-light Image Enhancement: Noise Estimation, Illumination Interpolation, and Self-Regulation [55.07472635587852]
Low-Light Image Enhancement (LLIE) techniques have made notable advancements in preserving image details and enhancing contrast.
These approaches encounter persistent challenges in efficiently mitigating dynamic noise and accommodating diverse low-light scenarios.
We first propose a method for estimating the noise level in low-light images quickly and accurately.
We then devise a Learnable Illumination Interpolator (LII) to satisfy general constraints between illumination and input.
arXiv Detail & Related papers (2023-05-17T13:56:48Z)
- Probabilities Are Not Enough: Formal Controller Synthesis for Stochastic Dynamical Models with Epistemic Uncertainty [68.00748155945047]
Capturing uncertainty in models of complex dynamical systems is crucial to designing safe controllers.
Several approaches use formal abstractions to synthesize policies that satisfy temporal specifications related to safety and reachability.
Our contribution is a novel abstraction-based controller synthesis method for continuous-state models with noise, uncertain parameters, and external disturbances.
arXiv Detail & Related papers (2022-10-12T07:57:03Z)
- Probe incompatibility in multiparameter noisy quantum metrology [0.0]
We study the issue of the optimal probe incompatibility in the simultaneous estimation of multiple parameters in generic noisy channels.
In particular, we show that in lossy multiple arm interferometry the probe incompatibility is as strong as in the noiseless scenario.
We introduce the concept of random quantum sensing and show how the tools developed may be applied to multiple channel discrimination problems.
arXiv Detail & Related papers (2021-04-22T18:03:16Z)
- Learning to Generate Noise for Multi-Attack Robustness [126.23656251512762]
Adversarial learning has emerged as one of the successful techniques for circumventing the susceptibility of existing methods to adversarial perturbations.
In safety-critical applications, however, defenses tuned to a single attack are insufficient, as an attacker can adopt diverse adversaries to deceive the system.
We propose a novel meta-learning framework that explicitly learns to generate noise to improve the model's robustness against multiple types of attacks; the core idea is sketched below.
arXiv Detail & Related papers (2020-06-22T10:44:05Z)
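As a hedged illustration only (the paper's meta-learning objective over multiple attack types is more involved), the sketch below alternates between a noise generator that maximizes the classifier's loss and a classifier that trains against the generated noise; the `tanh` bounding and `eps` budget are assumptions.

```python
# Illustrative learned-noise training loop; not the paper's exact algorithm.
import torch
import torch.nn.functional as F

def joint_step(model, generator, opt_m, opt_g, x, y, eps=8 / 255):
    # Generator step: craft bounded noise that maximizes the classifier's loss.
    delta = eps * torch.tanh(generator(x))
    opt_g.zero_grad()
    (-F.cross_entropy(model(x + delta), y)).backward()
    opt_g.step()

    # Model step: train on inputs perturbed by the (frozen) learned noise.
    with torch.no_grad():
        delta = eps * torch.tanh(generator(x))
    opt_m.zero_grad()
    F.cross_entropy(model(x + delta), y).backward()
    opt_m.step()
```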
- Learning Adaptive Loss for Robust Learning with Noisy Labels [59.06189240645958]
Robust loss is an important strategy for handling the robust learning issue.
We propose a meta-learning method capable of robustly tuning the hyperparameters of such losses.
Four kinds of SOTA robust loss functions are integrated with our algorithm, and experiments substantiate the general applicability and effectiveness of the proposed method.
arXiv Detail & Related papers (2020-02-16T00:53:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.