Improving Model Robustness by Adaptively Correcting Perturbation Levels
with Active Queries
- URL: http://arxiv.org/abs/2103.14824v1
- Date: Sat, 27 Mar 2021 07:09:01 GMT
- Authors: Kun-Peng Ning, Lue Tao, Songcan Chen, Sheng-Jun Huang
- Abstract summary: A novel active learning framework is proposed to allow the model to interactively query the correct perturbation level from human experts.
Both theoretical analysis and experimental studies validate the effectiveness of the proposed approach.
- Score: 43.98198697182858
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In addition to high accuracy, robustness is becoming increasingly important
for machine learning models in various applications. Recently, much research
has been devoted to improving the model robustness by training with noise
perturbations. Most existing studies assume a fixed perturbation level for all
training examples, which however hardly holds in real tasks. In fact, excessive
perturbations may destroy the discriminative content of an example, while
deficient perturbations may fail to provide helpful information for improving
the robustness. Motivated by this observation, we propose to adaptively adjust
the perturbation levels for each example in the training process. Specifically,
a novel active learning framework is proposed to allow the model to
interactively query the correct perturbation level from human experts. By
designing a cost-effective sampling strategy along with a new query type, the
robustness can be significantly improved with a few queries. Both theoretical
analysis and experimental studies validate the effectiveness of the proposed
approach.
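The abstract describes querying human experts for corrected perturbation levels via a cost-effective sampling strategy. The paper does not publish this pseudocode; the following is a minimal illustrative sketch under the assumption that examples with the least-trusted current levels are queried first, with a stand-in `oracle` playing the role of the human expert:

```python
import numpy as np

rng = np.random.default_rng(0)

def query_perturbation_levels(levels, margins, oracle, budget):
    """Query an oracle for corrected perturbation levels on the examples
    whose current level is least trusted (lowest margin).

    levels  : current per-example perturbation levels, shape (n,)
    margins : per-example confidence score; low margin = query first
    oracle  : callable i -> corrected level for example i (human expert)
    budget  : number of queries allowed
    """
    picked = np.argsort(margins)[:budget]  # cost-effective sampling: lowest margins
    corrected = levels.copy()
    for i in picked:
        corrected[i] = oracle(i)           # interactive query to the expert
    return corrected, picked

# Toy demo: true levels in [0, 1], noisy initial guesses, a stand-in margin.
true_levels = rng.uniform(0.0, 1.0, size=20)
init_levels = np.clip(true_levels + rng.normal(0.0, 0.3, size=20), 0.0, 1.0)
margins = np.abs(init_levels - 0.5)

new_levels, picked = query_perturbation_levels(
    init_levels, margins, oracle=lambda i: true_levels[i], budget=5)
```

After the queries, only the selected examples carry expert-corrected levels; the rest keep their current values, so the budget directly bounds annotation cost.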
Related papers
- Robust Deep Reinforcement Learning with Adaptive Adversarial Perturbations in Action Space [3.639580365066386]
We propose an adaptive adversarial coefficient framework to adjust the effect of the adversarial perturbation during training.
The appealing feature of our method is that it is simple to deploy in real-world applications and does not require accessing the simulator in advance.
The experiments in MuJoCo show that our method can improve the training stability and learn a robust policy when migrated to different test environments.
arXiv Detail & Related papers (2024-05-20T12:31:11Z) - Not All Steps are Equal: Efficient Generation with Progressive Diffusion Models [62.155612146799314]
We propose a novel two-stage training strategy termed Step-Adaptive Training.
In the initial stage, a base denoising model is trained to encompass all timesteps.
We partition the timesteps into distinct groups, fine-tuning the model within each group to achieve specialized denoising capabilities.
arXiv Detail & Related papers (2023-12-20T03:32:58Z) - Adaptive Robust Learning using Latent Bernoulli Variables [50.223140145910904]
We present an adaptive approach for learning from corrupted training sets.
We identify corrupted and non-corrupted samples with latent Bernoulli variables.
The resulting problem is solved via variational inference.
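The latent-Bernoulli idea above can be illustrated with a simple EM-style procedure that is not from the paper itself: model per-sample losses as a mixture of a low-loss (clean) and a high-loss (corrupted) component, and infer the posterior probability that each sample is clean.

```python
import numpy as np

def corruption_posteriors(losses, n_iter=50):
    """Illustrative EM sketch: fit a two-component Gaussian mixture to
    per-sample losses and return P(clean) for each sample, where the
    clean component is initialized at the lowest observed loss."""
    losses = np.asarray(losses, dtype=float)
    mu = np.array([losses.min(), losses.max()])   # [clean, corrupted] means
    sigma = np.full(2, losses.std() + 1e-8)
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities under each component (log-space for stability)
        log_p = (-0.5 * ((losses[:, None] - mu) / sigma) ** 2
                 - np.log(sigma) + np.log(pi))
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixture parameters
        nk = r.sum(axis=0) + 1e-8
        mu = (r * losses[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (losses[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-8
        pi = nk / len(losses)
    return r[:, 0]  # posterior probability that each sample is clean

clean = corruption_posteriors([0.1, 0.2, 0.15, 3.0, 2.8, 0.12])
```

The resulting posteriors can then be used to down-weight likely-corrupted samples during training; the paper solves the corresponding problem with variational inference rather than this plain EM loop.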
arXiv Detail & Related papers (2023-12-01T13:50:15Z) - Monitoring Machine Learning Models: Online Detection of Relevant Deviations [0.0]
Machine learning models can degrade over time due to changes in data distribution or other factors.
We propose a sequential monitoring scheme to detect relevant changes.
Our research contributes a practical solution for distinguishing between minor fluctuations and meaningful degradations.
arXiv Detail & Related papers (2023-09-26T18:46:37Z) - Exploring The Landscape of Distributional Robustness for Question Answering Models [47.178481044045505]
Investigation spans over 350 models and 16 question answering datasets.
We find that, in many cases, model variations do not affect robustness.
We release all evaluations to encourage researchers to further analyze robustness trends for question answering models.
arXiv Detail & Related papers (2022-10-22T18:17:31Z) - Learning Sample Reweighting for Accuracy and Adversarial Robustness [15.591611864928659]
We propose a novel adversarial training framework that learns to reweight the loss associated with individual training samples based on a notion of class-conditioned margin.
Our approach consistently improves both clean and robust accuracy compared to related methods and state-of-the-art baselines.
arXiv Detail & Related papers (2022-10-20T18:25:11Z) - Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z) - Dynamic Multi-Scale Loss Optimization for Object Detection [14.256807110937622]
We study the objective imbalance of multi-scale detector training.
We propose an Adaptive Variance Weighting (AVW) to balance multi-scale loss according to the statistical variance.
We develop a novel Reinforcement Learning Optimization (RLO) to decide the weighting scheme probabilistically during training.
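The abstract does not specify the exact AVW weighting formula; one plausible reading, sketched below purely for illustration, gives more weight to scales whose loss varies more across recent steps, so that under-optimized objectives are not drowned out:

```python
import numpy as np

def adaptive_variance_weights(loss_history):
    """Illustrative variance-based loss weighting (inspired by, but not
    necessarily identical to, the AVW scheme above).

    loss_history : array of shape (steps, n_scales) of recent loss values
    Returns weights over scales that sum to 1, proportional to the
    statistical variance of each scale's loss.
    """
    var = np.var(loss_history, axis=0) + 1e-8  # per-scale loss variance
    return var / var.sum()                     # normalize to a weighting

# Toy history: scale 0 still fluctuates, scales 1 and 2 have plateaued.
hist = np.array([[1.0, 0.5, 0.2],
                 [0.9, 0.5, 0.2],
                 [1.2, 0.5, 0.2]])
w = adaptive_variance_weights(hist)
```

In this toy run nearly all weight goes to scale 0, whose loss is still moving; the actual paper additionally learns the weighting scheme probabilistically with reinforcement learning.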
arXiv Detail & Related papers (2021-08-09T13:12:41Z) - Learning perturbation sets for robust machine learning [97.6757418136662]
We use a conditional generator that defines the perturbation set over a constrained region of the latent space.
We measure the quality of our learned perturbation sets both quantitatively and qualitatively.
We leverage our learned perturbation sets to train models which are empirically and certifiably robust to adversarial image corruptions and adversarial lighting variations.
arXiv Detail & Related papers (2020-07-16T16:39:54Z) - Learning Diverse Representations for Fast Adaptation to Distribution Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task.
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
arXiv Detail & Related papers (2020-06-12T12:23:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.