Poster: Sponge ML Model Attacks of Mobile Apps
- URL: http://arxiv.org/abs/2303.01243v1
- Date: Wed, 1 Mar 2023 15:12:56 GMT
- Title: Poster: Sponge ML Model Attacks of Mobile Apps
- Authors: Souvik Paul and Nicolas Kourtellis
- Abstract summary: In this work, we focus on the recently proposed Sponge attack.
It is designed to soak up the energy consumed while executing inference (not training) of an ML model.
For the first time, in this work, we investigate this attack in the mobile setting and measure the effect it can have on ML models running inside apps on mobile devices.
- Score: 3.299672391663527
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Machine Learning (ML)-powered apps are used in pervasive devices such as
phones, tablets, smartwatches and IoT devices. Recent advances in
collaborative, distributed ML such as Federated Learning (FL) attempt to address
the privacy concerns of users and data owners, and are thus used by tech industry
leaders such as Google, Facebook and Apple. However, FL systems and models are
still vulnerable to adversarial membership and attribute inference and model
poisoning attacks, especially in recently proposed FL-as-a-Service ecosystems,
which can enable attackers to access multiple ML-powered apps. In this work, we
focus on the recently proposed Sponge attack: it is designed to soak up the energy
consumed while executing inference (not training) of an ML model, without
hampering the classifier's performance. Recent work has shown that sponge attacks on
ASIC-enabled GPUs can potentially escalate power consumption and inference
time. For the first time, in this work, we investigate this attack in the
mobile setting and measure the effect it can have on ML models running inside
apps on mobile devices.
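For context, the general idea behind sponge examples in prior work is to optimize an input so that inference consumes as much energy as possible, typically by maximizing a proxy such as the magnitude or density of intermediate activations, while keeping the prediction (and thus classifier performance) unchanged. Below is a minimal, illustrative PyTorch sketch of that idea; the tiny placeholder model, the post-ReLU activation-norm energy proxy, and all hyperparameters are assumptions for illustration only, not the measurement setup of this poster.

```python
import torch
import torch.nn as nn

# Placeholder victim: a tiny CNN classifier standing in for an on-device model.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),
).eval()

activations = []

def record(module, inp, out):
    # Collect post-ReLU activations produced during the forward pass.
    activations.append(out)

hooks = [m.register_forward_hook(record)
         for m in model.modules() if isinstance(m, nn.ReLU)]

x = torch.rand(1, 3, 64, 64, requires_grad=True)  # candidate sponge input
opt = torch.optim.Adam([x], lr=0.05)

for step in range(200):
    activations.clear()
    opt.zero_grad()
    logits = model(x)
    # Energy proxy: large, dense post-ReLU activations imply more non-zero
    # multiply-accumulates and more data movement on many accelerators.
    energy_proxy = sum(a.pow(2).sum() for a in activations)
    # Maximize the proxy (minimize its negative); a real attack would also
    # constrain the predicted class in `logits` so accuracy is not hampered.
    loss = -energy_proxy
    loss.backward()
    opt.step()
    with torch.no_grad():
        x.clamp_(0, 1)  # keep the sponge input in a valid pixel range

for h in hooks:
    h.remove()
```

On a mobile device, the effect of such crafted inputs would surface as the increased inference time and power draw that this work sets out to measure for ML models embedded in apps.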
Related papers
- MobileAIBench: Benchmarking LLMs and LMMs for On-Device Use Cases [81.70591346986582]
We introduce MobileAIBench, a benchmarking framework for evaluating Large Language Models (LLMs) and Large Multimodal Models (LMMs) on mobile devices.
MobileAIBench assesses models across different sizes, quantization levels, and tasks, measuring latency and resource consumption on real devices.
arXiv Detail & Related papers (2024-06-12T22:58:12Z)
- MalModel: Hiding Malicious Payload in Mobile Deep Learning Models with Black-box Backdoor Attack [24.569156952823068]
We propose a method to generate or transform mobile malware by hiding the malicious payloads inside the parameters of deep learning models.
We can run malware in DL mobile applications covertly with little impact on the model performance.
arXiv Detail & Related papers (2024-01-05T06:35:24Z)
- SODA: Protecting Proprietary Information in On-Device Machine Learning Models [5.352699766206808]
We present an end-to-end framework, SODA, for deploying and serving ML models on edge devices while defending against adversarial usage.
Our results demonstrate that SODA can detect adversarial usage with 89% accuracy in less than 50 queries with minimal impact on service performance, latency, and storage.
arXiv Detail & Related papers (2023-12-22T20:04:36Z)
- SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models [74.58014281829946]
We analyze the effectiveness of several representative attacks/defenses, including model stealing attacks, membership inference attacks, and backdoor detection on public models.
Our evaluation empirically shows the performance of these attacks/defenses can vary significantly on public models compared to self-trained models.
arXiv Detail & Related papers (2023-10-19T11:49:22Z)
- Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in the internet-of-things (IoT)-based smart grid.
Adversarial distortion injected into the power signal will greatly affect the system's normal control and operation.
It is imperative to conduct vulnerability assessments for MLsgAPPs applied in the context of safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
- Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection [64.67495502772866]
Large Language Models (LLMs) are increasingly being integrated into various applications.
We show how attackers can override the original instructions and employed controls using Prompt Injection attacks.
We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities.
arXiv Detail & Related papers (2023-02-23T17:14:38Z)
- A Survey of Machine Unlearning [56.017968863854186]
Recent regulations now require that, on request, private information about a user must be removed from computer systems.
ML models often 'remember' the old data.
Recent works on machine unlearning have not been able to completely solve the problem.
arXiv Detail & Related papers (2022-09-06T08:51:53Z)
- Federated Split GANs [12.007429155505767]
We propose an alternative approach that trains ML models on users' devices themselves.
We focus on GANs (generative adversarial networks) and leverage their inherent privacy-preserving attribute.
Our system preserves data privacy, keeps training time short, and yields the same accuracy as model training on unconstrained devices.
arXiv Detail & Related papers (2022-07-04T23:53:47Z)
- Smart App Attack: Hacking Deep Learning Models in Android Apps [16.663345577900813]
We introduce a grey-box adversarial attack framework to hack on-device models.
We evaluate the attack effectiveness and generality in terms of four different settings.
Among 53 apps adopting transfer learning, we find that 71.7% of them can be successfully attacked.
arXiv Detail & Related papers (2022-04-23T14:01:59Z)
- Federated Learning-based Active Authentication on Mobile Devices [98.23904302910022]
User active authentication on mobile devices aims to learn a model that can correctly recognize the enrolled user based on device sensor information.
We propose a novel user active authentication training approach, termed Federated Active Authentication (FAA).
We show that existing FL/SL methods are suboptimal for FAA as they rely on the data being distributed homogeneously.
arXiv Detail & Related papers (2021-04-14T22:59:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.