DARDA: Domain-Aware Real-Time Dynamic Neural Network Adaptation
- URL: http://arxiv.org/abs/2409.09753v1
- Date: Sun, 15 Sep 2024 14:49:30 GMT
- Title: DARDA: Domain-Aware Real-Time Dynamic Neural Network Adaptation
- Authors: Shahriar Rifat, Jonathan Ashdown, Francesco Restuccia
- Abstract summary: Test Time Adaptation (TTA) has emerged as a practical solution to mitigate the performance degradation of Deep Neural Networks (DNNs) in the presence of corruption/noise affecting inputs.
We propose Domain-Aware Real-Time Dynamic Adaptation (DARDA) to address such issues.
- Score: 8.339630468077713
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Test Time Adaptation (TTA) has emerged as a practical solution to mitigate the performance degradation of Deep Neural Networks (DNNs) in the presence of corruption/noise affecting inputs. Existing approaches in TTA continuously adapt the DNN, leading to excessive resource consumption and performance degradation due to accumulation of error stemming from lack of supervision. In this work, we propose Domain-Aware Real-Time Dynamic Adaptation (DARDA) to address such issues. Our key approach is to proactively learn latent representations of some corruption types, each one associated with a sub-network state tailored to correctly classify inputs affected by that corruption. After deployment, DARDA adapts the DNN to previously unseen corruptions in an unsupervised fashion by (i) estimating the latent representation of the ongoing corruption; (ii) selecting the sub-network whose associated corruption is the closest in the latent space to the ongoing corruption; and (iii) adapting the DNN state so that its representation matches the ongoing corruption. This way, DARDA is more resource efficient and can swiftly adapt to new distributions caused by different corruptions without requiring a large variety of input data. Through experiments with two popular mobile edge devices - Raspberry Pi and NVIDIA Jetson Nano - we show that DARDA reduces energy consumption and average cache memory footprint by 1.74x and 2.64x, respectively, with respect to the state of the art, while increasing performance by 10.4%, 5.7% and 4.4% on CIFAR-10, CIFAR-100 and TinyImagenet, respectively.
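Steps (i)-(iii) amount to a nearest-neighbor lookup in the corruption latent space followed by a state swap. Below is a minimal sketch of the selection step, assuming a hypothetical corruption_encoder and a pre-built bank of (latent signature, sub-network state) pairs; it illustrates the mechanism, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_subnetwork(batch, corruption_encoder, bank):
    """Return the stored sub-network state whose corruption signature is
    closest in latent space to the corruption estimated from `batch`.

    bank: list of (signature, state_dict) pairs, one per corruption type
    learned before deployment.
    """
    # (i) estimate the latent representation of the ongoing corruption
    z = corruption_encoder(batch).mean(dim=0)      # average over the batch
    # (ii) pick the nearest stored corruption signature
    sims = torch.stack([F.cosine_similarity(z, sig, dim=0)
                        for sig, _ in bank])
    # (iii) hand back the matching state for the backbone to load
    return bank[int(sims.argmax())][1]
```

The caller would then load the returned state into the backbone (e.g. model.load_state_dict(state, strict=False)) and continue unsupervised adaptation from there.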
Related papers
- AR2: Attention-Guided Repair for the Robustness of CNNs Against Common Corruptions [5.294455344248843]
Deep neural networks suffer from significant performance degradation when exposed to common corruptions.
We propose AR2 (Attention-Guided Repair for Robustness) to enhance the corruption robustness of pretrained CNNs.
arXiv Detail & Related papers (2025-07-08T18:37:00Z) - Rapid Salient Object Detection with Difference Convolutional Neural Networks [49.838283141381716]
This paper addresses the challenge of deploying salient object detection (SOD) on resource-constrained devices with real-time performance.
We propose an efficient network design that combines traditional wisdom on SOD and the representation power of modern CNNs.
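One widely used operator in this family is the central difference convolution, which mixes a vanilla convolution with a gradient-like center term. The sketch below is a generic formulation, assuming "same" padding so the two responses align spatially; the paper's exact difference-convolution design may differ, and theta is an illustrative mixing weight.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CentralDifferenceConv2d(nn.Conv2d):
    """Vanilla convolution minus a center-pixel term weighted by the
    kernel's total mass (assumes e.g. kernel_size=3 with padding=1 so
    spatial sizes of the two responses match)."""

    def __init__(self, *args, theta=0.7, **kwargs):
        super().__init__(*args, **kwargs)
        self.theta = theta

    def forward(self, x):
        out = super().forward(x)
        # center term realized as a 1x1 conv with the summed kernel weights
        w_sum = self.weight.sum(dim=(2, 3), keepdim=True)
        center = F.conv2d(x, w_sum, None, self.stride, 0,
                          self.dilation, self.groups)
        return out - self.theta * center

# e.g. drop-in replacement for a 3x3 conv:
# conv = CentralDifferenceConv2d(3, 16, kernel_size=3, padding=1)
```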
arXiv Detail & Related papers (2025-07-01T20:41:05Z) - Adaptive Calibration: A Unified Conversion Framework of Spiking Neural Network [1.5215973379400674]
Spiking Neural Networks (SNNs) are seen as an energy-efficient alternative to traditional Artificial Neural Networks (ANNs).
We present a unified training-free conversion framework that significantly enhances both the performance and efficiency of converted SNNs.
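A typical training-free conversion pipeline calibrates per-layer firing thresholds from ANN activation statistics on a small unlabeled batch. A sketch of that generic calibration step follows; the percentile rule and hook-based collection are common practice, not necessarily this paper's exact procedure.

```python
import torch

@torch.no_grad()
def calibrate_thresholds(ann, relu_layers, calib_loader, q=0.999):
    """Per-layer firing thresholds set to the q-quantile of each ReLU's
    activations over a small calibration set."""
    ann.eval()
    stats = {layer: [] for layer in relu_layers}
    hooks = [layer.register_forward_hook(
                 lambda m, i, o, l=layer: stats[l].append(o.detach().flatten()))
             for layer in relu_layers]
    for x, _ in calib_loader:          # labels are unused
        ann(x)                         # hooks record activations
    for h in hooks:
        h.remove()
    return {l: torch.quantile(torch.cat(v), q) for l, v in stats.items()}
```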
arXiv Detail & Related papers (2024-12-18T09:38:54Z) - Improving robustness to corruptions with multiplicative weight perturbations [29.880029851866272]
We introduce an alternative approach that improves the robustness of DNNs to a wide range of corruptions without compromising accuracy on clean images.
We first demonstrate that input perturbations can be mimicked by multiplicative perturbations in the weight space.
We also examine the recently proposed Adaptive Sharpness-Aware Minimization (ASAM) and show that it optimizes DNNs under adversarial multiplicative weight perturbations.
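Concretely, that equivalence suggests a simple training recipe: resample multiplicative Gaussian noise on the weights at every step, take the gradient at the perturbed point, and apply it to the clean weights. A sketch under those assumptions follows; the noise scale sigma and per-step resampling are illustrative choices, not the paper's exact recipe.

```python
import torch

def multiplicative_perturbation_step(model, loss_fn, x, y, opt, sigma=0.1):
    """One SGD step with weights perturbed as w * (1 + sigma * eps)."""
    originals = [p.detach().clone() for p in model.parameters()]
    with torch.no_grad():
        for p in model.parameters():
            p.mul_(1.0 + sigma * torch.randn_like(p))
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                 # gradient taken at the perturbed weights
    with torch.no_grad():           # restore clean weights before the update
        for p, orig in zip(model.parameters(), originals):
            p.copy_(orig)
    opt.step()
    return loss.item()
```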
arXiv Detail & Related papers (2024-06-24T11:20:44Z) - Dynamic Batch Norm Statistics Update for Natural Robustness [5.366500153474747]
We propose a unified framework consisting of a corruption-detection model and BN statistics update.
Our results demonstrate accuracy improvements of about 8% on CIFAR10-C and 4% on ImageNet-C.
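The BN half of such a framework is compact: when a shift is detected, re-estimate the BatchNorm running statistics from unlabeled test batches while freezing everything else. A generic sketch follows; the corruption detector that gates this call is omitted.

```python
import torch
from torch import nn

@torch.no_grad()
def refresh_bn_stats(model, test_batches, momentum=0.1):
    """Update BN running_mean/running_var from unlabeled test data."""
    model.train()   # BN layers only update running stats in train mode
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.momentum = momentum
    for x in test_batches:
        model(x)    # forward passes alone refresh the statistics
    model.eval()
```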
arXiv Detail & Related papers (2023-10-31T17:20:30Z) - Diffusion Denoising Process for Perceptron Bias in Out-of-distribution Detection [67.49587673594276]
We introduce a new perceptron bias assumption that suggests discriminator models are more sensitive to certain features of the input, leading to the overconfidence problem.
We demonstrate that the diffusion denoising process (DDP) of DMs serves as a novel form of asymmetric interpolation, which is well-suited to enhancing the input and mitigating the overconfidence problem.
Our experiments on CIFAR10, CIFAR100, and ImageNet show that our method outperforms SOTA approaches.
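The detection recipe this suggests is a noise-then-denoise round trip: in-distribution inputs should be reconstructed with little feature distortion, while OOD inputs drift. A hedged sketch, where diffuse, denoise, and features stand in for a pretrained diffusion model's forward/reverse processes and a feature extractor:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ddp_ood_score(x, diffuse, denoise, features, t=200):
    """Higher score = more likely out-of-distribution."""
    x_hat = denoise(diffuse(x, t), t)          # noise, then denoise
    err = F.mse_loss(features(x_hat), features(x), reduction="none")
    return err.flatten(1).mean(dim=1)          # per-sample distortion
```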
arXiv Detail & Related papers (2022-11-21T08:45:08Z) - N2V2 -- Fixing Noise2Void Checkerboard Artifacts with Modified Sampling Strategies and a Tweaked Network Architecture [66.03918859810022]
We present two modifications to the vanilla N2V setup that both help to reduce the unwanted artifacts considerably.
We validate our modifications on a range of microscopy and natural image data.
arXiv Detail & Related papers (2022-11-15T21:12:09Z) - Back to the Source: Diffusion-Driven Test-Time Adaptation [77.4229736436935]
Test-time adaptation harnesses test inputs to improve the accuracy of a model trained on source data when tested on shifted target data.
We instead update the target data, by projecting all test inputs toward the source domain with a generative diffusion model.
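In other words, adaptation happens in the input space: each shifted test image is partially diffused and then denoised with a source-trained diffusion model, so the classifier itself never changes. A minimal sketch of that projection step follows; diffuse and denoise are stand-ins for the diffusion model's forward and reverse processes, and the full method adds refinements omitted here.

```python
import torch

@torch.no_grad()
def diffusion_project_and_classify(x, classifier, diffuse, denoise, t=100):
    x_src = denoise(diffuse(x, t), t)   # project input toward source domain
    return classifier(x_src).argmax(dim=1)
```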
arXiv Detail & Related papers (2022-07-07T17:14:10Z) - Is Neuron Coverage Needed to Make Person Detection More Robust? [3.395452700023097]
In this work, we apply coverage-guided testing (CGT) to the task of person detection in crowded scenes.
The proposed pipeline uses YOLOv3 for person detection and includes finding bugs via sampling and mutation.
We have found no evidence that the investigated coverage metrics can be advantageously used to improve robustness.
arXiv Detail & Related papers (2022-04-21T11:23:33Z) - CorrGAN: Input Transformation Technique Against Natural Corruptions [4.479638789566316]
In this work, we propose the CorrGAN approach, which can generate a benign input when a corrupted input is provided.
In this framework, we train a Generative Adversarial Network (GAN) with a novel intermediate-output-based loss function.
The GAN can denoise the corrupted input and generate a benign input.
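Read together, a plausible generator objective combines an adversarial term with a feature-matching term taken at an intermediate layer of a frozen downstream classifier. The sketch below is that reading, with the loss form and the weighting lam as assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def corrgan_generator_loss(G, D, mid_features, corrupted, clean, lam=10.0):
    restored = G(corrupted)
    logits = D(restored)
    # adversarial term: make restored images look benign to D
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    # intermediate-output term: match a frozen classifier's mid-layer
    # features between the restored and the clean image
    inter = F.mse_loss(mid_features(restored), mid_features(clean))
    return adv + lam * inter
```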
arXiv Detail & Related papers (2022-04-19T02:56:46Z) - Distribution-sensitive Information Retention for Accurate Binary Neural Network [49.971345958676196]
We present a novel Distribution-sensitive Information Retention Network (DIR-Net) to retain the information of the forward activations and backward gradients.
Our DIR-Net consistently outperforms the SOTA binarization approaches under mainstream and compact architectures.
We deploy DIR-Net on real-world resource-limited devices, where it achieves 11.1x storage savings and a 5.4x speedup.
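For context, the baseline mechanism such information-retention methods refine is sign binarization trained with a straight-through estimator (STE); DIR-Net's distribution-sensitive terms sit on top of this and are omitted here.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through gradient estimator."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.sign()

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # pass gradients only where the input lies in [-1, 1]
        return grad_out * (x.abs() <= 1).to(grad_out.dtype)

# usage: w_bin = BinarizeSTE.apply(w)
```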
arXiv Detail & Related papers (2021-09-25T10:59:39Z) - Towards Corruption-Agnostic Robust Domain Adaptation [76.66523954277945]
We investigate a new task, Corruption-agnostic Robust Domain Adaptation (CRDA): to be accurate on original data and robust against unavailable-for-training corruptions on target domains.
We propose a new approach based on two technical insights into CRDA: 1) an easy-to-plug module called Domain Discrepancy Generator (DDG) that generates samples that enlarge domain discrepancy to mimic unpredictable corruptions; 2) a simple but effective teacher-student scheme with contrastive loss to enhance the constraints on target domains.
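The DDG's core idea can be caricatured as an input perturber trained to push features away from their originals. A rough sketch under that reading follows; the bounded-perturbation form and the discrepancy loss are assumptions, and the teacher-student branch is omitted.

```python
import torch
import torch.nn.functional as F

def ddg_loss(generator, feature_extractor, x, eps=0.1):
    """Minimizing this loss maximizes feature discrepancy, mimicking
    unpredictable corruptions with a bounded input perturbation."""
    delta = eps * torch.tanh(generator(x))     # keep perturbation bounded
    x_aug = (x + delta).clamp(0, 1)
    return -F.mse_loss(feature_extractor(x_aug), feature_extractor(x))
```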
arXiv Detail & Related papers (2021-04-21T06:27:48Z) - GraN: An Efficient Gradient-Norm Based Detector for Adversarial and
Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
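The core scoring rule is small enough to sketch: backpropagate a loss taken at the model's own predicted label and use the resulting parameter-gradient norm as the detection score. GraN's exact loss and normalization choices may differ from this generic version.

```python
import torch
import torch.nn.functional as F

def gradient_norm_score(model, x):
    """Score one input by the gradient norm at its predicted label."""
    model.zero_grad()
    logits = model(x.unsqueeze(0))
    loss = F.cross_entropy(logits, logits.argmax(dim=1))
    loss.backward()
    sq = sum((p.grad ** 2).sum() for p in model.parameters()
             if p.grad is not None)
    return sq.sqrt().item()   # large norms flag suspicious inputs
```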
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.