Characterizing and Taming Model Instability Across Edge Devices
- URL: http://arxiv.org/abs/2010.09028v1
- Date: Sun, 18 Oct 2020 16:52:06 GMT
- Title: Characterizing and Taming Model Instability Across Edge Devices
- Authors: Eyal Cidon, Evgenya Pergament, Zain Asgar, Asaf Cidon, Sachin Katti
- Abstract summary: This paper presents the first methodical characterization of the variations in model prediction across real-world mobile devices.
We introduce a new metric, instability, which captures this variation.
In experiments, 14-17% of images produced divergent classifications across one or more phone models.
- Score: 4.592454933053539
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The same machine learning model running on different edge devices may produce
highly-divergent outputs on a nearly-identical input. Possible reasons for the
divergence include differences in the device sensors, the device's signal
processing hardware and software, and its operating system and processors. This
paper presents the first methodical characterization of the variations in model
prediction across real-world mobile devices. We demonstrate that accuracy is
not a useful metric to characterize prediction divergence, and introduce a new
metric, instability, which captures this variation. We characterize different
sources for instability, and show that differences in compression formats and
image signal processing account for significant instability in object
classification models. Notably, in our experiments, 14-17% of images produced
divergent classifications across one or more phone models. We evaluate three
different techniques for reducing instability. In particular, we adapt prior
work on making models robust to noise in order to fine-tune models to be robust
to variations across edge devices. We demonstrate that our fine-tuning techniques
reduce instability by 75%.
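The abstract introduces instability as a metric but does not spell out its formula. A minimal sketch, assuming instability is measured as the fraction of inputs whose top-1 predicted label differs across any pair of devices (the exact definition may differ in the paper; the function and device names below are illustrative):

```python
from typing import Dict, List

def instability(predictions: Dict[str, List[int]]) -> float:
    """Fraction of inputs whose top-1 label differs across devices.

    `predictions` maps a device name to its per-input predicted labels;
    every device must have classified the same input sequence.
    """
    # Regroup labels per input: one tuple of device predictions per image.
    per_input = list(zip(*predictions.values()))
    if not per_input:
        return 0.0
    # An input is unstable if the devices do not all agree on its label.
    unstable = sum(1 for labels in per_input if len(set(labels)) > 1)
    return unstable / len(per_input)

# Example: three hypothetical phones classifying five images.
preds = {
    "phone_a": [3, 1, 4, 1, 5],
    "phone_b": [3, 1, 4, 2, 5],
    "phone_c": [3, 1, 4, 1, 5],
}
print(instability(preds))  # 0.2: one of five images diverges
```

Under this reading, the paper's headline numbers correspond to an instability of 0.14-0.17 across phone models before fine-tuning.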
Related papers
- A Hybrid Framework for Statistical Feature Selection and Image-Based Noise-Defect Detection [55.2480439325792]
This paper presents a hybrid framework that integrates both statistical feature selection and classification techniques to improve defect detection accuracy.
We present around 55 distinct features extracted from industrial images, which are then analyzed using statistical methods.
By integrating these methods with flexible machine learning applications, the proposed framework improves detection accuracy and reduces false positives and misclassifications.
arXiv Detail & Related papers (2024-12-11T22:12:21Z)
- Provable Adversarial Robustness for Group Equivariant Tasks: Graphs, Point Clouds, Molecules, and More [9.931513542441612]
We propose a sound notion of adversarial robustness that accounts for task equivariance.
Certification methods are, however, unavailable for many models.
We derive the first architecture-specific graph edit distance certificates, i.e. sound robustness guarantees for isomorphism equivariant tasks like node classification.
arXiv Detail & Related papers (2023-12-05T12:09:45Z)
- Exploring Data Augmentations on Self-/Semi-/Fully-Supervised Pre-trained Models [24.376036129920948]
We investigate how data augmentation affects performance of vision pre-trained models.
We apply four types of data augmentation: Random Erasing, CutOut, CutMix, and MixUp.
We report their performance on vision tasks such as image classification, object detection, instance segmentation, and semantic segmentation.
arXiv Detail & Related papers (2023-10-28T23:46:31Z)
- Deep Learning-Based Defect Classification and Detection in SEM Images [1.9206693386750882]
In particular, we train RetinaNet models using different ResNet and VGGNet architectures as backbones.
We propose a preference-based ensemble strategy to combine the output predictions from different models in order to achieve better performance on classification and detection of defects.
arXiv Detail & Related papers (2022-06-20T16:34:11Z)
- Certifying Model Accuracy under Distribution Shifts [151.67113334248464]
We present provable robustness guarantees on the accuracy of a model under bounded Wasserstein shifts of the data distribution.
We show that a simple procedure that randomizes the input of the model within a transformation space is provably robust to distributional shifts under the transformation.
arXiv Detail & Related papers (2022-01-28T22:03:50Z)
- High-Robustness, Low-Transferability Fingerprinting of Neural Networks [78.2527498858308]
This paper proposes Characteristic Examples for effectively fingerprinting deep neural networks.
It features high robustness against pruning of the base model, as well as low transferability to unassociated models.
arXiv Detail & Related papers (2021-05-14T21:48:23Z)
- DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles [20.46399318111058]
Adversarial attacks can mislead CNN models with small perturbations, which can effectively transfer between different models trained on the same dataset.
We propose DVERGE, which isolates the adversarial vulnerability in each sub-model by distilling non-robust features.
The novel diversity metric and training procedure enables DVERGE to achieve higher robustness against transfer attacks.
arXiv Detail & Related papers (2020-09-30T14:57:35Z)
- Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations [71.00754846434744]
We show that imperceptible additive perturbations can significantly alter the disparity map.
We show that, when used for adversarial data augmentation, our perturbations result in trained models that are more robust.
arXiv Detail & Related papers (2020-09-21T19:20:09Z)
- Encoding Robustness to Image Style via Adversarial Feature Perturbations [72.81911076841408]
We adapt adversarial training by directly perturbing feature statistics, rather than image pixels, to produce robust models.
Our proposed method, Adversarial Batch Normalization (AdvBN), is a single network layer that generates worst-case feature perturbations during training.
arXiv Detail & Related papers (2020-09-18T17:52:34Z)
- Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations [65.05561023880351]
Adversarial examples are malicious inputs crafted to induce misclassification.
This paper studies a complementary failure mode, invariance-based adversarial examples.
We show that defenses against sensitivity-based attacks actively harm a model's accuracy on invariance-based attacks.
arXiv Detail & Related papers (2020-02-11T18:50:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.