A First Look at On-device Models in iOS Apps
- URL: http://arxiv.org/abs/2307.12328v2
- Date: Thu, 27 Jul 2023 06:04:27 GMT
- Title: A First Look at On-device Models in iOS Apps
- Authors: Han Hu, Yujin Huang, Qiuyuan Chen, Terry Yue Zhuo, Chunyang Chen
- Abstract summary: On-device deep learning models are being used in vital fields like finance, social media, and driving assistance.
Because the Android platform and the on-device models inside its apps are open to inspection, on-device models on Android smartphones have been proven to be extremely vulnerable.
Since the same app offers similar functionality on Android and iOS, the same vulnerabilities may exist on both platforms.
- Score: 40.531989012371525
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Powered by the rising popularity of deep learning techniques on smartphones, on-device deep learning models are being used in vital fields like finance, social media, and driving assistance. Because the Android platform and the on-device models inside its apps are open to inspection, on-device models on Android smartphones have been proven to be extremely vulnerable. However, despite iOS being as popular a mobile platform as Android, there is no prior work on on-device models in iOS apps, owing to the difficulty of accessing and analysing iOS app files. Since the same app offers similar functionality on Android and iOS, the same vulnerabilities may exist on both platforms. In this paper, we present the first empirical study of on-device models in iOS apps, covering their adoption of deep learning frameworks, their structure, their functionality, and their potential security issues. We also study why developers currently use different on-device models for the same app on iOS and Android. Based on our findings, we propose a more general attack against white-box models that does not rely on pre-trained models, and a new adversarial attack approach targeting iOS's gray-box on-device models. Our results show the effectiveness of both approaches. Finally, we successfully exploit the vulnerabilities of on-device models to attack real-world iOS apps.
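As a minimal illustration of the extraction step such studies rely on (a sketch, not the authors' tooling), the snippet below scans a decrypted .ipa, which is an ordinary zip archive, for bundled model files by extension. The file name and the suffix list are assumptions for illustration.

```python
import zipfile

# Illustrative sketch: an .ipa is a zip archive, so once a decrypted copy
# is obtained, bundled model files can be located by file extension.
# The suffixes below cover Core ML and TFLite as examples (assumed set).
MODEL_SUFFIXES = (".mlmodel", ".mlmodelc/model.espresso.net", ".tflite")

def find_models(ipa_path: str) -> list[str]:
    """Return paths inside the .ipa that look like on-device model files."""
    with zipfile.ZipFile(ipa_path) as ipa:
        return [name for name in ipa.namelist()
                if name.endswith(MODEL_SUFFIXES)]

if __name__ == "__main__":
    for path in find_models("example_app.ipa"):  # hypothetical file name
        print(path)
```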
Related papers
- Apple Intelligence Foundation Language Models [109.60033785567484]
This report describes the model architecture, the data used to train the model, the training process, and the evaluation results.
We highlight our focus on Responsible AI and how the principles are applied throughout the model development.
arXiv Detail & Related papers (2024-07-29T18:38:49Z)
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
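For context, the sketch below shows the classic loss-threshold membership-inference test, the generic primitive that a privacy backdoor amplifies; the losses and the threshold are made-up illustrative values, not the paper's backdoored setup.

```python
import numpy as np

# Minimal membership-inference sketch: examples with unusually low loss
# under the target model are guessed to have been in its training set.
def membership_guess(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Return True for examples whose loss falls below the threshold."""
    return losses < threshold

# Hypothetical per-example cross-entropy losses from a target model.
train_losses = np.array([0.02, 0.10, 0.05])  # seen during training
test_losses = np.array([1.30, 0.90, 2.10])   # never seen
print(membership_guess(np.concatenate([train_losses, test_losses]), 0.5))
```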
- Investigating White-Box Attacks for On-Device Models [21.329209501209665]
On-device models are vulnerable to attacks because they can be easily extracted from their corresponding mobile apps.
We propose a Reverse Engineering framework for On-device Models (REOM), which automatically converts a compiled on-device TFLite model into a debuggable model.
Our results show that REOM enables attackers to achieve higher attack success rates with perturbations a hundred times smaller.
arXiv Detail & Related papers (2024-02-08T09:03:17Z)
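As a sketch of the starting point that tooling like REOM automates, the snippet below loads an extracted TFLite file with TensorFlow's stock interpreter and prints its input and output tensor details; the model path is hypothetical.

```python
import tensorflow as tf

# Load an extracted on-device model and inspect its I/O signature.
# "extracted_model.tflite" is a hypothetical path for illustration.
interpreter = tf.lite.Interpreter(model_path="extracted_model.tflite")
interpreter.allocate_tensors()

for detail in interpreter.get_input_details():
    print("input:", detail["name"], detail["shape"], detail["dtype"])
for detail in interpreter.get_output_details():
    print("output:", detail["name"], detail["shape"], detail["dtype"])
```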
- SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models [74.58014281829946]
We analyze the effectiveness of several representative attacks/defenses, including model stealing attacks, membership inference attacks, and backdoor detection on public models.
Our evaluation empirically shows the performance of these attacks/defenses can vary significantly on public models compared to self-trained models.
arXiv Detail & Related papers (2023-10-19T11:49:22Z)
- ModelObfuscator: Obfuscating Model Information to Protect Deployed ML-based Systems [31.988501084337678]
We develop a prototype tool ModelObfuscator to automatically obfuscate on-device TFLite models.
Our experiments show that the proposed approach can dramatically improve model security.
arXiv Detail & Related papers (2023-06-01T05:24:00Z)
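A toy sketch of one idea behind such obfuscation (not the ModelObfuscator tool itself): replacing human-readable layer names in a serialized model config with opaque identifiers, so a deployed file leaks fewer structural hints. The two-layer model is invented for illustration.

```python
import hashlib
import json
import tensorflow as tf

# Toy name-obfuscation sketch (assumed, simplified): scrub readable layer
# names from a Keras model's JSON config before further processing.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", name="feature_extractor"),
    tf.keras.layers.Dense(10, name="classifier_head"),
])
model.build(input_shape=(None, 64))

config = json.loads(model.to_json())
for layer in config["config"]["layers"]:
    original = layer["config"]["name"]
    layer["config"]["name"] = hashlib.sha1(original.encode()).hexdigest()[:8]
print([layer["config"]["name"] for layer in config["config"]["layers"]])
```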
- Publishing Efficient On-device Models Increases Adversarial Vulnerability [58.6975494957865]
In this paper, we study the security considerations of publishing on-device variants of large-scale models.
We first show that an adversary can exploit on-device models to make attacking the large models easier.
We then show that the vulnerability increases with the similarity between a full-scale model and its efficient variant.
arXiv Detail & Related papers (2022-12-28T05:05:58Z)
- Smart App Attack: Hacking Deep Learning Models in Android Apps [16.663345577900813]
We introduce a grey-box adversarial attack framework to hack on-device models.
We evaluate the attack's effectiveness and generality across four different settings.
Among 53 apps adopting transfer learning, we find that 71.7% of them can be successfully attacked.
arXiv Detail & Related papers (2022-04-23T14:01:59Z)
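As a minimal sketch of the white-box building block that grey-box attacks of this kind adapt, the snippet below implements the standard fast gradient sign method (FGSM); `model`, the inputs, and `epsilon` are placeholders, not the paper's framework.

```python
import tensorflow as tf

# Standard FGSM sketch: shift the input by epsilon along the sign of the
# loss gradient to produce an adversarial example.
def fgsm_perturb(model, x, y_true, epsilon=0.01):
    """Return x perturbed in the direction that increases the loss."""
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y_true, model(x))
    gradient = tape.gradient(loss, x)
    return x + epsilon * tf.sign(gradient)
```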
- Device-Cloud Collaborative Learning for Recommendation [50.01289274123047]
We propose a novel MetaPatch learning approach on the device side to efficiently achieve "thousands of people with thousands of models" given a centralized cloud model.
With billions of updated personalized device models, we propose a "model-over-models" distillation algorithm, namely MoMoDistill, to update the centralized cloud model.
arXiv Detail & Related papers (2021-04-14T05:06:59Z)
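For reference, the sketch below shows the standard soft-label distillation loss that a "model-over-models" scheme builds on; MoMoDistill's aggregation across device models is not reproduced here, and the temperature value is an assumption.

```python
import tensorflow as tf

# Standard distillation-loss sketch: the student matches the teacher's
# temperature-softened output distribution.
def distillation_loss(teacher_logits, student_logits, temperature=4.0):
    teacher_probs = tf.nn.softmax(teacher_logits / temperature)
    student_log_probs = tf.nn.log_softmax(student_logits / temperature)
    # Cross-entropy against soft targets, rescaled by T^2 as is conventional.
    return -tf.reduce_mean(
        tf.reduce_sum(teacher_probs * student_log_probs, axis=-1)
    ) * temperature ** 2
```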
- Robustness of on-device Models: Adversarial Attack to Deep Learning Models on Android Apps [14.821745719407037]
Most deep learning models within Android apps can easily be obtained via mature reverse engineering.
In this study, we propose a simple but effective approach to hacking deep learning models using adversarial attacks.
arXiv Detail & Related papers (2021-01-12T10:49:30Z)
- Mind Your Weight(s): A Large-scale Study on Insufficient Machine Learning Model Protection in Mobile Apps [17.421303987300902]
This paper presents the first empirical study of machine learning model protection on mobile devices.
We analyzed 46,753 popular apps collected from the US and Chinese app markets.
We found that, alarmingly, 41% of ML apps do not protect their models at all; such models can be trivially stolen from app packages.
arXiv Detail & Related papers (2020-02-18T16:14:37Z)
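As an illustrative heuristic related to this finding (not the paper's methodology), the sketch below estimates a file's byte entropy: an unencrypted model file sits well below the roughly 8 bits per byte of an encrypted blob. The file name and threshold are assumptions.

```python
import math
from collections import Counter

# Entropy-based guess at whether an extracted model file is encrypted:
# near-8-bits-per-byte entropy suggests encryption; lower suggests plaintext.
def byte_entropy(path: str) -> float:
    with open(path, "rb") as f:
        data = f.read()
    counts = Counter(data)
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

entropy = byte_entropy("model.tflite")  # hypothetical extracted file
print("likely encrypted" if entropy > 7.9 else "likely unprotected", entropy)
```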
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.