Smart App Attack: Hacking Deep Learning Models in Android Apps
- URL: http://arxiv.org/abs/2204.11075v1
- Date: Sat, 23 Apr 2022 14:01:59 GMT
- Title: Smart App Attack: Hacking Deep Learning Models in Android Apps
- Authors: Yujin Huang, Chunyang Chen
- Abstract summary: We introduce a grey-box adversarial attack framework to hack on-device models.
We evaluate the attack's effectiveness and generality across four different settings.
Among 53 apps adopting transfer learning, we find that 71.7% of them can be successfully attacked.
- Score: 16.663345577900813
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: On-device deep learning is rapidly gaining popularity in mobile applications.
Compared to offloading deep learning from smartphones to the cloud, on-device
deep learning enables offline model inference while preserving user privacy.
However, such mechanisms inevitably store models on users' smartphones and may
invite adversarial attacks, since the models are accessible to attackers. Owing to the
characteristics of on-device models, most existing adversarial attacks cannot
be applied to them directly. In this paper, we introduce a
grey-box adversarial attack framework to hack on-device models by crafting
highly similar binary classification models based on identified transfer
learning approaches and pre-trained models from TensorFlow Hub. We evaluate the
attack's effectiveness and generality across four settings:
pre-trained models, datasets, transfer learning approaches, and
adversarial attack algorithms. The results demonstrate that the proposed
attacks remain effective regardless of different settings, and significantly
outperform state-of-the-art baselines. We further conduct an empirical study on
real-world deep learning mobile apps collected from Google Play. Among 53 apps
adopting transfer learning, we find that 71.7% of them can be successfully
attacked, which includes popular ones in medicine, automation, and finance
categories with critical usage scenarios. The results call for the awareness
and actions of deep learning mobile app developers to secure the on-device
models. The code of this work is available at
https://github.com/Jinxhy/SmartAppAttack
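The linked repository contains the authors' full pipeline. As a rough, illustrative sketch of the grey-box idea only (build a surrogate binary classifier on top of the same TensorFlow Hub feature extractor the target app is assumed to use, then craft adversarial examples against that surrogate), the snippet below uses one plausible attack algorithm, FGSM; the Hub handle, input size, and perturbation budget are assumptions rather than the paper's exact configuration.

```python
# Hedged sketch of the grey-box surrogate idea; NOT the paper's exact pipeline.
import tensorflow as tf
import tensorflow_hub as hub

# Assumed feature extractor; the real attack first identifies which Hub model the app uses.
HUB_HANDLE = "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4"
IMG_SIZE = 224

def build_surrogate():
    # Frozen pre-trained backbone + trainable binary head, mimicking the
    # feature-extraction style of transfer learning common in mobile apps.
    backbone = hub.KerasLayer(HUB_HANDLE, trainable=False,
                              input_shape=(IMG_SIZE, IMG_SIZE, 3))
    model = tf.keras.Sequential([
        backbone,
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

def fgsm_example(model, image, label, epsilon=0.03):
    # One-step FGSM on the surrogate: move the input along the sign of the loss gradient.
    image = tf.convert_to_tensor(image[tf.newaxis, ...], dtype=tf.float32)
    label = tf.convert_to_tensor([[label]], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(image)
        loss = tf.keras.losses.binary_crossentropy(label, model(image))
    gradient = tape.gradient(loss, image)
    adversarial = tf.clip_by_value(image + epsilon * tf.sign(gradient), 0.0, 1.0)
    return adversarial[0]
```

Adversarial images crafted this way against the surrogate would then be fed to the app's extracted on-device model (e.g., a TFLite file) to check whether the misclassification transfers; the actual model-identification and fine-tuning steps are in the repository above.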
Related papers
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- Defense Against Model Extraction Attacks on Recommender Systems [53.127820987326295]
We introduce Gradient-based Ranking Optimization (GRO) to defend against model extraction attacks on recommender systems.
GRO aims to minimize the loss of the protected target model while maximizing the loss of the attacker's surrogate model.
Results show GRO's superior effectiveness in defending against model extraction attacks.
arXiv Detail & Related papers (2023-10-25T03:30:42Z)
- A First Look at On-device Models in iOS Apps [40.531989012371525]
On-device deep learning models are being used in vital fields like finance, social media, and driving assistance.
Because of the transparency of the Android platform and of the models embedded in its apps, on-device models on Android smartphones have been shown to be extremely vulnerable.
Since the functionalities of the same app on Android and iOS platforms are similar, the same vulnerabilities may exist on both platforms.
arXiv Detail & Related papers (2023-07-23T13:50:44Z)
- Learning to Learn Transferable Attack [77.67399621530052]
Transfer adversarial attack is a non-trivial black-box adversarial attack that aims to craft adversarial perturbations on the surrogate model and then apply such perturbations to the victim model.
We propose a Learning to Learn Transferable Attack (LLTA) method, which makes the adversarial perturbations more generalized via learning from both data and model augmentation.
Empirical results on a widely used dataset demonstrate the effectiveness of our attack method, achieving a transfer-attack success rate 12.85% higher than that of state-of-the-art methods.
arXiv Detail & Related papers (2021-12-10T07:24:21Z)
- Evaluating Deep Learning Models and Adversarial Attacks on Accelerometer-Based Gesture Authentication [6.961253535504979]
We use a deep convolutional generative adversarial network (DC-GAN) to create adversarial samples.
We show that our deep learning model is surprisingly robust to such an attack scenario.
arXiv Detail & Related papers (2021-10-03T00:15:50Z)
- DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection [17.136757440204722]
We introduce a highly practical backdoor attack achieved with a set of reverse-engineering techniques over compiled deep learning models.
The injected backdoor can be triggered with a success rate of 93.5%, while introducing less than 2 ms of latency overhead and no more than a 1.4% accuracy decrease.
We found 54 apps that were vulnerable to our attack, including popular and security-critical ones.
arXiv Detail & Related papers (2021-01-18T06:29:30Z)
- Robustness of on-device Models: Adversarial Attack to Deep Learning Models on Android Apps [14.821745719407037]
Most deep learning models within Android apps can easily be obtained via mature reverse engineering.
In this study, we propose a simple but effective approach to hacking deep learning models using adversarial attacks.
arXiv Detail & Related papers (2021-01-12T10:49:30Z)
- An Empirical Review of Adversarial Defenses [0.913755431537592]
Deep neural networks, which form the basis of such systems, are highly susceptible to a specific type of attack, called adversarial attacks.
Even with minimal computation, an attacker can generate adversarial examples (images or data points from another class that the model consistently misclassifies as genuine) and undermine the basis of such algorithms.
We present two effective defense techniques, namely Dropout and Denoising Autoencoders, and demonstrate their success in preventing such attacks from fooling the model.
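For orientation only, here is a minimal denoising-autoencoder sketch of the input-purification style of defense mentioned above; the architecture, noise level, and 28x28 grayscale input shape are assumptions and are not taken from the paper.

```python
# Hedged sketch of a denoising autoencoder used as an input "purifier"; the layer
# sizes, Gaussian training noise, and 28x28x1 input shape are assumptions.
import numpy as np
import tensorflow as tf

def build_denoising_autoencoder(input_shape=(28, 28, 1)):
    inputs = tf.keras.Input(shape=input_shape)
    x = tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
    x = tf.keras.layers.MaxPooling2D(2, padding="same")(x)
    x = tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same")(x)
    x = tf.keras.layers.UpSampling2D(2)(x)
    outputs = tf.keras.layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

def train_purifier(autoencoder, x_clean, noise_std=0.1, epochs=5):
    # Train to map noisy (or perturbed) inputs back to their clean versions.
    x_noisy = np.clip(x_clean + np.random.normal(0.0, noise_std, x_clean.shape), 0.0, 1.0)
    autoencoder.fit(x_noisy, x_clean, epochs=epochs, batch_size=128)

# At inference time, suspect inputs are passed through the autoencoder before the
# classifier, e.g.  logits = classifier(autoencoder(x_suspect))
```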
arXiv Detail & Related papers (2020-12-10T09:34:41Z)
- Learning to Attack: Towards Textual Adversarial Attacking in Real-world Situations [81.82518920087175]
Adversarial attacking aims to fool deep neural networks with adversarial examples.
We propose a reinforcement learning based attack model, which can learn from attack history and launch attacks more efficiently.
arXiv Detail & Related papers (2020-09-19T09:12:24Z)
- Two Sides of the Same Coin: White-box and Black-box Attacks for Transfer Learning [60.784641458579124]
We show that fine-tuning effectively enhances model robustness under white-box FGSM attacks.
We also propose a black-box attack method for transfer learning models which attacks the target model with the adversarial examples produced by its source model.
To systematically measure the effect of both white-box and black-box attacks, we propose a new metric that evaluates how transferable the adversarial examples produced by a source model are to a target model.
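The paper defines its own transferability metric; as a rough, assumed stand-in for illustration, a plain transfer success rate (the fraction of adversarial examples crafted on the source model that the target model misclassifies) can be computed as below. The function and its signature are hypothetical, not the paper's metric.

```python
# Hedged sketch: a naive transfer success rate, not the metric defined in the paper.
import numpy as np

def transfer_success_rate(target_model, x_adv, y_true):
    # x_adv: adversarial inputs crafted on a source model; y_true: integer labels.
    preds = np.argmax(target_model.predict(x_adv), axis=1)
    return float(np.mean(preds != np.asarray(y_true)))
```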
arXiv Detail & Related papers (2020-08-25T15:04:32Z)
- Adversarial Imitation Attack [63.76805962712481]
A practical adversarial attack should require as little knowledge of the attacked model as possible.
Current substitute attacks need pre-trained models to generate adversarial examples.
In this study, we propose a novel adversarial imitation attack.
arXiv Detail & Related papers (2020-03-28T10:02:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.