Beyond the Model: Data Pre-processing Attack to Deep Learning Models in
Android Apps
- URL: http://arxiv.org/abs/2305.03963v2
- Date: Thu, 11 May 2023 10:20:33 GMT
- Title: Beyond the Model: Data Pre-processing Attack to Deep Learning Models in
Android Apps
- Authors: Ye Sang, Yujin Huang, Shuo Huang, Helei Cui
- Abstract summary: We introduce a data processing-based attack against real-world deep learning (DL) apps.
Our attack could influence the performance and latency of the model without affecting the operation of a DL app.
Among 320 apps utilizing MLkit, we find that 81.56% of them can be successfully attacked.
- Score: 3.2307366446033945
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The increasing popularity of deep learning (DL) models and the advantages of
on-device computing, including low latency and bandwidth savings on smartphones, have led
to the emergence of intelligent mobile applications, also known as DL apps, in
recent years. However, this technological development has also given rise to
several security concerns, including adversarial examples, model stealing, and
data poisoning. Existing work on attacks and countermeasures for on-device DL
models has primarily focused on the models themselves, while scant attention
has been paid to the impact of disturbances in data processing on model
inference. This gap highlights the need for further research to fully
understand and address the security issues surrounding data processing for
on-device models. In this paper, we introduce a data processing-based attack
against real-world DL apps. In particular, our attack
could influence the performance and latency of the model without affecting the
operation of a DL app. To demonstrate the effectiveness of our attack, we carry
out an empirical study on 517 real-world DL apps collected from Google Play.
Among 320 apps utilizing MLkit, we find that 81.56% of them can be
successfully attacked.
The results underscore the importance of DL app developers being aware of this
threat and taking action to secure on-device models at the data processing
stage.
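As a rough sketch of the threat model only (a hypothetical simulation, not the paper's implementation, which tampers with the pre-processing code of real Android apps), the Python snippet below contrasts an assumed benign image pipeline with a tampered variant whose altered scaling and redundant work silently shift the model's input distribution and inflate latency while the surrounding app keeps running:

```python
import time
import numpy as np

def benign_preprocess(frame: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a typical mobile pipeline: nearest-neighbour
    resize to 224x224 followed by scaling pixel values to [-1, 1]."""
    h, w, _ = frame.shape
    ys = np.arange(224) * h // 224
    xs = np.arange(224) * w // 224
    resized = frame[ys][:, xs].astype(np.float32)
    return (resized - 127.5) / 127.5

def tampered_preprocess(frame: np.ndarray) -> np.ndarray:
    """Attacker-modified variant: the same pipeline, but with a distribution
    shift the downstream model never saw during training, plus useless work
    that adds latency. The app itself never crashes."""
    out = benign_preprocess(frame)
    for _ in range(200):                 # artificial extra work
        out = out + 0.0
    return out * 4.0 - 2.0               # shifted/stretched input distribution

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)

t0 = time.perf_counter(); x_ok = benign_preprocess(frame); t1 = time.perf_counter()
x_bad = tampered_preprocess(frame); t2 = time.perf_counter()

print(f"benign:   mean={x_ok.mean():+.2f} std={x_ok.std():.2f} time={(t1 - t0) * 1e3:.1f} ms")
print(f"tampered: mean={x_bad.mean():+.2f} std={x_bad.std():.2f} time={(t2 - t1) * 1e3:.1f} ms")
```

The point of the sketch is that the tampering raises no exception, so the app appears to work normally while inference quality and latency quietly degrade.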
Related papers
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- MalModel: Hiding Malicious Payload in Mobile Deep Learning Models with Black-box Backdoor Attack [24.569156952823068]
We propose a method to generate or transform mobile malware by hiding the malicious payloads inside the parameters of deep learning models.
We can run malware in DL mobile applications covertly with little impact on the model performance.
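The summary above does not spell out MalModel's encoding; the sketch below only illustrates the generic idea of stashing payload bytes in model parameters (an assumed scheme for illustration, not the paper's method), storing each byte in the lowest eight mantissa bits of a float32 weight so the values are left almost unchanged:

```python
import numpy as np

def embed(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Write each payload byte into the low 8 bits of one float32 weight."""
    assert weights.dtype == np.float32 and weights.size >= len(payload)
    bits = weights.copy().view(np.uint32).ravel()
    data = np.frombuffer(payload, dtype=np.uint8)
    bits[: len(data)] = (bits[: len(data)] & np.uint32(0xFFFFFF00)) | data
    return bits.view(np.float32).reshape(weights.shape)

def extract(weights: np.ndarray, length: int) -> bytes:
    """Recover the first `length` hidden bytes from the weights."""
    bits = weights.view(np.uint32).ravel()
    return (bits[:length] & np.uint32(0xFF)).astype(np.uint8).tobytes()

w = np.random.randn(1024).astype(np.float32)
w_stego = embed(w, b"hidden payload")
print(extract(w_stego, 14))                 # b'hidden payload'
print(float(np.abs(w - w_stego).max()))     # perturbation stays tiny
```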
arXiv Detail & Related papers (2024-01-05T06:35:24Z)
- SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models [74.58014281829946]
We analyze the effectiveness of several representative attacks/defenses, including model stealing attacks, membership inference attacks, and backdoor detection on public models.
Our evaluation empirically shows the performance of these attacks/defenses can vary significantly on public models compared to self-trained models.
arXiv Detail & Related papers (2023-10-19T11:49:22Z)
- Beyond Labeling Oracles: What does it mean to steal ML models? [52.63413852460003]
Model extraction attacks are designed to steal trained models with only query access.
We investigate factors influencing the success of model extraction attacks.
Our findings urge the community to redefine the adversarial goals of model extraction attacks.
arXiv Detail & Related papers (2023-10-03T11:10:21Z)
- Isolation and Induction: Training Robust Deep Neural Networks against Model Stealing Attacks [51.51023951695014]
Existing model stealing defenses add deceptive perturbations to the victim's posterior probabilities to mislead the attackers.
This paper proposes Isolation and Induction (InI), a novel and effective training framework for model stealing defenses.
In contrast to adding perturbations to model predictions, which harms benign accuracy, InI trains models to produce uninformative outputs in response to stealing queries.
arXiv Detail & Related papers (2023-08-02T05:54:01Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
However, FL is vulnerable to poisoning attacks that undermine model integrity through untargeted performance degradation as well as targeted backdoors.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
The proposed defense, MESAS, is the first that is robust against strong adaptive adversaries and effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Smart App Attack: Hacking Deep Learning Models in Android Apps [16.663345577900813]
We introduce a grey-box adversarial attack framework to hack on-device models.
We evaluate the attack effectiveness and generality in terms of four different settings.
Among 53 apps adopting transfer learning, we find that 71.7% of them can be successfully attacked.
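As a generic, textbook-level illustration of adversarial perturbation (a toy FGSM step against a stand-in logistic model, not the grey-box framework this paper proposes):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)                  # stand-in "model": a fixed logistic classifier
x = rng.normal(size=16)                  # benign input
y = 1.0                                  # assumed true label

def predict(inp: np.ndarray) -> float:
    return float(1.0 / (1.0 + np.exp(-w @ inp)))   # sigmoid score for class 1

# Cross-entropy gradient w.r.t. the input is (p - y) * w for a logistic model.
grad_x = (predict(x) - y) * w
x_adv = x + 0.25 * np.sign(grad_x)       # FGSM step, epsilon = 0.25

print(f"clean score for class 1:       {predict(x):.3f}")
print(f"adversarial score for class 1: {predict(x_adv):.3f}")   # pushed away from the true label
```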
arXiv Detail & Related papers (2022-04-23T14:01:59Z)
- Machine Learning Security against Data Poisoning: Are We There Yet? [23.809841593870757]
This article reviews data poisoning attacks that compromise the training data used to learn machine learning models.
We discuss how to mitigate these attacks using basic security principles, or by deploying ML-oriented defensive mechanisms.
arXiv Detail & Related papers (2022-04-12T17:52:09Z)
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on ML-Doctor, a modular and reusable software tool that enables ML model owners to assess the risks of deploying their models.
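For the membership inference component specifically, a minimal loss-threshold baseline (a standard textbook heuristic evaluated here on made-up loss values, not ML-Doctor's implementation) looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sample losses from a target model: training members tend
# to have lower loss than non-members (the signal membership inference exploits).
member_losses = rng.gamma(shape=1.0, scale=0.3, size=1000)
nonmember_losses = rng.gamma(shape=2.0, scale=0.8, size=1000)

# Calibrate a threshold on known non-members, then flag low-loss samples as members.
threshold = np.quantile(nonmember_losses, 0.10)
tpr = float(np.mean(member_losses < threshold))      # members correctly flagged
fpr = float(np.mean(nonmember_losses < threshold))   # non-members falsely flagged
print(f"threshold={threshold:.3f}  TPR={tpr:.2f}  FPR={fpr:.2f}")
```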
arXiv Detail & Related papers (2021-02-04T11:35:13Z)
- An Empirical Study on Deployment Faults of Deep Learning Based Mobile Applications [7.58063287182615]
Mobile Deep Learning (DL) apps integrate DL models trained using large-scale data with DL programs.
This paper presents the first comprehensive study on the deployment faults of mobile DL apps.
We construct a fine-grained taxonomy of 23 categories of fault symptoms and distill common fix strategies for different fault types.
arXiv Detail & Related papers (2021-01-13T08:19:50Z)
- Mind Your Weight(s): A Large-scale Study on Insufficient Machine Learning Model Protection in Mobile Apps [17.421303987300902]
This paper presents the first empirical study of machine learning model protection on mobile devices.
We analyzed 46,753 popular apps collected from the US and Chinese app markets.
We found that, alarmingly, 41% of ML apps do not protect their models at all, leaving them trivially extractable from the app packages.
arXiv Detail & Related papers (2020-02-18T16:14:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.