DeepRepair: Style-Guided Repairing for DNNs in the Real-world Operational Environment
- URL: http://arxiv.org/abs/2011.09884v1
- Date: Thu, 19 Nov 2020 15:09:44 GMT
- Title: DeepRepair: Style-Guided Repairing for DNNs in the Real-world Operational Environment
- Authors: Bing Yu, Hua Qi, Qing Guo, Felix Juefei-Xu, Xiaofei Xie, Lei Ma, Jianjun Zhao
- Abstract summary: We propose a style-guided data augmentation for repairing Deep Neural Networks (DNNs) in the operational environment.
We propose a style transfer method to learn and introduce the unknown failure patterns within the failure data into the training data via data augmentation.
- Score: 27.316150020006916
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) are being widely applied for various real-world
applications across domains due to their high performance (e.g., high accuracy
on image classification). Nevertheless, a well-trained DNN after deployment
could oftentimes raise errors during practical use in the operational
environment due to the mismatching between distributions of the training
dataset and the potential unknown noise factors in the operational environment,
e.g., weather, blur, noise etc. Hence, it poses a rather important problem for
the DNNs' real-world applications: how to repair the deployed DNNs for
correcting the failure samples (i.e., incorrect prediction) under the deployed
operational environment while not harming their capability of handling normal
or clean data. The number of failure samples we can collect in practice, caused
by the noise factors in the operational environment, is often limited.
Therefore, It is rather challenging how to repair more similar failures based
on the limited failure samples we can collect.
In this paper, we propose a style-guided data augmentation for repairing DNNs
in the operational environment. We propose a style transfer method to learn
the unknown failure patterns within the failure data and introduce them into
the training data via data augmentation. Moreover, we further propose
clustering-based failure data generation for more effective style-guided data
augmentation. We conduct a large-scale evaluation with fifteen degradation
factors that may occur in the real world and compare with four
state-of-the-art data augmentation methods and two DNN repairing methods,
demonstrating that our method can significantly enhance the deployed DNNs on
the corrupted data in the operational environment while achieving even better
accuracy on clean datasets.
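The core idea of style-guided augmentation can be illustrated with a minimal sketch. The snippet below is NOT the paper's implementation; it assumes a simple AdaIN-style statistic transfer (aligning channel-wise mean and standard deviation, a common style-transfer primitive) applied directly to image tensors, as a stand-in for the learned style transfer the paper describes. The function names `adain_style_transfer` and `augment_with_failure_style` are illustrative, not from the paper.

```python
import numpy as np

def adain_style_transfer(content, style, eps=1e-5):
    """Align channel-wise mean/std of `content` to those of `style`.

    Both arrays are shaped (C, H, W). This is the AdaIN primitive often
    used in style transfer; the paper's method learns the transfer, but
    the statistic alignment conveys the idea.
    """
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True) + eps
    # Normalize content, then re-scale/shift to the style statistics.
    return (content - c_mean) / c_std * s_std + s_mean

def augment_with_failure_style(train_batch, failure_samples, rng):
    """Style-guided augmentation: inject the 'style' (here, channel
    statistics) of randomly chosen failure samples into training images,
    so retraining sees the failure pattern on correctly labeled data."""
    idx = rng.integers(0, len(failure_samples), size=len(train_batch))
    return np.stack([adain_style_transfer(x, failure_samples[i])
                     for x, i in zip(train_batch, idx)])

rng = np.random.default_rng(0)
train_batch = rng.normal(0.0, 1.0, size=(4, 3, 8, 8))      # clean images
failure_samples = rng.normal(5.0, 2.0, size=(2, 3, 8, 8))  # corrupted images
augmented = augment_with_failure_style(train_batch, failure_samples, rng)
```

The augmented images keep the content (and hence the labels) of the clean training data while carrying the degradation statistics of the failure samples, which is what lets retraining repair similar failures without discarding performance on clean data.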
Related papers
- Mitigating the Impact of Labeling Errors on Training via Rockafellian Relaxation [0.8741284539870512]
We propose and study the implementation of Rockafellian Relaxation (RR) for neural network training.
RR can enhance standard neural network methods to achieve robust performance across classification tasks.
We find that RR can mitigate the effects of dataset corruption due to both (heavy) labeling error and/or adversarial perturbation.
arXiv Detail & Related papers (2024-05-30T23:13:01Z)
- Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) that applies affine transformations to the feature space to mitigate the malignant effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z)
- Iterative Assessment and Improvement of DNN Operational Accuracy [11.447394702830412]
We propose DAIC (DNN Assessment and Improvement Cycle), an approach which combines ''low-cost'' online pseudo-oracles and ''high-cost'' offline sampling techniques.
Preliminary results show the benefits of combining the two approaches.
arXiv Detail & Related papers (2023-03-02T14:21:54Z) - Adversarial training with informed data selection [53.19381941131439]
Adrial training is the most efficient solution to defend the network against these malicious attacks.
This work proposes a data selection strategy to be applied in the mini-batch training.
The simulation results show that a good compromise can be obtained regarding robustness and standard accuracy.
arXiv Detail & Related papers (2023-01-07T12:09:50Z) - DIRA: Dynamic Domain Incremental Regularised Adaptation [2.227417514684251]
We introduce Dynamic Incremental Regularised Adaptation (DIRA) for dynamic operational domain adaptions of Deep Neural Network (DNN)
DIRA improves on the problem of forgetting and achieves strong gains in performance when retraining using a few samples from the target domain.
Our approach shows improvements on different image classification benchmarks aimed at evaluating robustness to distribution shifts.
arXiv Detail & Related papers (2022-04-30T03:46:03Z) - Semantic Perturbations with Normalizing Flows for Improved
Generalization [62.998818375912506]
We show that perturbations in the latent space can be used to define fully unsupervised data augmentations.
We find that our latent adversarial perturbations adaptive to the classifier throughout its training are most effective.
arXiv Detail & Related papers (2021-08-18T03:20:00Z) - Sketching Curvature for Efficient Out-of-Distribution Detection for Deep
Neural Networks [32.629801680158685]
Sketching Curvature of OoD Detection (SCOD) is an architecture-agnostic framework for equipping trained Deep Neural Networks with task-relevant uncertainty estimates.
We demonstrate that SCOD achieves comparable or better OoD detection performance with lower computational burden relative to existing baselines.
arXiv Detail & Related papers (2021-02-24T21:34:40Z) - Attribute-Guided Adversarial Training for Robustness to Natural
Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attributes-space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z) - A Unified Plug-and-Play Framework for Effective Data Denoising and
Robust Abstention [4.200576272300216]
We propose a unified filtering framework leveraging underlying data density.
Our framework can effectively denoise training data and avoid predicting on uncertain test data points.
arXiv Detail & Related papers (2020-09-25T04:18:08Z) - Temporal Calibrated Regularization for Robust Noisy Label Learning [60.90967240168525]
Deep neural networks (DNNs) exhibit great success on many tasks with the help of large-scale well annotated datasets.
However, labeling large-scale data can be very costly and error-prone so that it is difficult to guarantee the annotation quality.
We propose a Temporal Calibrated Regularization (TCR) in which we utilize the original labels and the predictions in the previous epoch together.
arXiv Detail & Related papers (2020-07-01T04:48:49Z) - Provably Efficient Causal Reinforcement Learning with Confounded
Observational Data [135.64775986546505]
We study how to incorporate the dataset (observational data) collected offline, which is often abundantly available in practice, to improve the sample efficiency in the online setting.
We propose the deconfounded optimistic value iteration (DOVI) algorithm, which incorporates the confounded observational data in a provably efficient manner.
arXiv Detail & Related papers (2020-06-22T14:49:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.