SODA: Protecting Proprietary Information in On-Device Machine Learning Models
- URL: http://arxiv.org/abs/2312.15036v1
- Date: Fri, 22 Dec 2023 20:04:36 GMT
- Title: SODA: Protecting Proprietary Information in On-Device Machine Learning Models
- Authors: Akanksha Atrey, Ritwik Sinha, Saayan Mitra, Prashant Shenoy
- Abstract summary: We present an end-to-end framework, SODA, for deploying and serving ML models on edge devices while defending against adversarial usage.
Our results demonstrate that SODA can detect adversarial usage with 89% accuracy in fewer than 50 queries, with minimal impact on service performance, latency, and storage.
- Score: 5.352699766206808
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The growth of low-end hardware has led to a proliferation of machine
learning-based services in edge applications. These applications gather
contextual information about users and provide some services, such as
personalized offers, through a machine learning (ML) model. A growing practice
has been to deploy such ML models on the user's device to reduce latency,
maintain user privacy, and minimize continuous reliance on a centralized
source. However, deploying ML models on the user's edge device can leak
proprietary information about the service provider. In this work, we
investigate on-device ML models that are used to provide mobile services and
demonstrate how simple attacks can leak proprietary information of the service
provider. We show that different adversaries can easily exploit such models to
maximize their profit and accomplish content theft. Motivated by the need to
thwart such attacks, we present an end-to-end framework, SODA, for deploying
and serving ML models on edge devices while defending against adversarial
usage. Our results demonstrate that SODA can detect adversarial usage with 89%
accuracy in fewer than 50 queries, with minimal impact on service performance,
latency, and storage.
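The abstract does not spell out how detection works; purely as a hypothetical illustration of an on-device detector that reaches a decision within a 50-query budget, the Python sketch below flags a client whose recent predicted-label distribution falls outside a benign entropy band. The window size, thresholds, and API are assumptions, not SODA's actual design.

    import math
    from collections import Counter, deque

    class UsageMonitor:
        """Hypothetical on-device detector in the spirit of SODA: watch the
        stream of predictions served to one client and flag usage whose
        label entropy falls outside a benign band. Illustrative only."""

        def __init__(self, window=50, low=0.5, high=3.0):
            self.labels = deque(maxlen=window)  # last `window` predicted labels
            self.low, self.high = low, high     # assumed benign entropy band, in bits

        def observe(self, predicted_label):
            """Record one served prediction; return True if usage looks adversarial."""
            self.labels.append(predicted_label)
            if len(self.labels) < self.labels.maxlen:
                return False                    # not enough evidence yet
            n = len(self.labels)
            entropy = -sum((c / n) * math.log2(c / n)
                           for c in Counter(self.labels).values())
            # Repeatedly probing one class gives abnormally low entropy; sweeping
            # the input space to clone the model gives abnormally high entropy.
            return entropy < self.low or entropy > self.high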
Related papers
- An Early Experience with Confidential Computing Architecture for On-Device Model Protection [6.024889136631505]
Arm Confidential Computing Architecture (CCA) is a new Arm extension for on-device machine learning (ML).
In this paper, we evaluate the performance-privacy trade-offs of deploying models within CCA.
Our framework successfully protects the model against membership inference attacks, reducing the adversary's success rate by 8.3%.
arXiv Detail & Related papers (2025-04-11T13:21:33Z)
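For context on the membership-inference result above, the standard baseline such defenses are measured against is a confidence-thresholding attack. A minimal sketch, assuming a NumPy input and an sklearn-style predict_proba; the threshold would be calibrated on known non-members:

    import numpy as np

    def is_training_member(model, x, threshold=0.9):
        """Baseline membership inference: models are typically more confident
        on training members, so flag inputs whose top class probability
        exceeds a threshold calibrated on known non-members. Illustrative."""
        probs = model.predict_proba(x.reshape(1, -1))[0]  # assumed sklearn-style API
        return float(np.max(probs)) > threshold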
- MobileAIBench: Benchmarking LLMs and LMMs for On-Device Use Cases [81.70591346986582]
We introduce MobileAIBench, a benchmarking framework for evaluating Large Language Models (LLMs) and Large Multimodal Models (LMMs) on mobile devices.
MobileAIBench assesses models across different sizes, quantization levels, and tasks, measuring latency and resource consumption on real devices.
arXiv Detail & Related papers (2024-06-12T22:58:12Z)
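MobileAIBench's real harness is more elaborate, but its core latency measurement reduces to timed inference runs after a warm-up phase. A toy sketch; the infer callable, warm-up count, and percentile choice are placeholders:

    import time
    import statistics

    def measure_latency_ms(infer, sample, warmup=5, runs=30):
        """Toy latency probe in the spirit of MobileAIBench: warm up the
        runtime, then time repeated single-input inference calls."""
        for _ in range(warmup):
            infer(sample)                       # populate caches, JIT, etc.
        times = []
        for _ in range(runs):
            start = time.perf_counter()
            infer(sample)
            times.append((time.perf_counter() - start) * 1e3)
        times.sort()
        return {"p50": statistics.median(times), "p95": times[int(0.95 * runs)]}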
- Poster: Sponge ML Model Attacks of Mobile Apps [3.299672391663527]
In this work, we focus on the recently proposed Sponge attack.
It is designed to soak up the energy consumed while executing inference (not training) of an ML model.
For the first time, we investigate this attack in the mobile setting and measure the effect it can have on ML models running inside apps on mobile devices.
arXiv Detail & Related papers (2023-03-01T15:12:56Z)
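A sponge attack searches for inputs that maximize inference cost. As a rough illustration of how one might score candidate inputs on-device, the sketch below uses wall-clock time as a proxy for energy; the attack's actual input-search loop and hardware energy counters are omitted:

    import time

    def rank_by_cost(infer, candidate_inputs, repeats=10):
        """Rank candidate inputs by average inference time, a crude proxy for
        the per-query energy a sponge attack tries to maximize. Illustrative."""
        costs = []
        for x in candidate_inputs:
            start = time.perf_counter()
            for _ in range(repeats):
                infer(x)
            costs.append(((time.perf_counter() - start) / repeats, x))
        return sorted(costs, key=lambda pair: pair[0], reverse=True)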
- Adversarial attacks and defenses on ML- and hardware-based IoT device fingerprinting and identification [0.0]
This work proposes an LSTM-CNN architecture based on hardware performance behavior for individual device identification.
Previous techniques have been compared with the proposed architecture using a hardware performance dataset collected from 45 Raspberry Pi devices.
Adversarial training and model distillation are selected as defense techniques to improve the model's resilience to evasion attacks.
arXiv Detail & Related papers (2022-12-30T13:11:35Z)
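Adversarial training, one of the two defenses selected above, augments each optimization step with inputs perturbed toward higher loss. A minimal FGSM-style PyTorch step; the epsilon value and the surrounding training loop are assumptions:

    import torch
    import torch.nn.functional as F

    def adversarial_training_step(model, x, y, optimizer, eps=0.03):
        """One FGSM-style adversarial training step: craft a worst-case
        perturbation within an eps ball, then train on the perturbed batch."""
        x_ref = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x_ref), y).backward()
        x_adv = (x_ref + eps * x_ref.grad.sign()).detach()  # move toward higher loss

        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()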
- DUET: A Tuning-Free Device-Cloud Collaborative Parameters Generation Framework for Efficient Device Model Generalization [66.27399823422665]
Device Model Generalization (DMG) is a practical yet under-investigated research topic for on-device machine learning applications.
We propose an efficient Device-cloUd collaborative parametErs generaTion framework DUET.
arXiv Detail & Related papers (2022-09-12T13:26:26Z)
- A Survey of Machine Unlearning [56.017968863854186]
Recent regulations now require that, on request, private information about a user must be removed from computer systems.
ML models often 'remember' the old data.
Recent works on machine unlearning have not been able to completely solve the problem.
arXiv Detail & Related papers (2022-09-06T08:51:53Z)
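The methods the survey covers are best understood against the naive but exact baseline: drop the requested records and retrain from scratch, which is correct by construction but usually too expensive. A sketch with a placeholder dataset layout and train_fn:

    def exact_unlearn(train_fn, dataset, forget_user_ids):
        """Exact unlearning baseline: retrain on everything except the data of
        users who requested removal. Correct but costly; approximate unlearning
        methods try to match this model without full retraining."""
        retained = [record for record in dataset
                    if record["user_id"] not in forget_user_ids]
        return train_fn(retained)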
- On the Evaluation of User Privacy in Deep Neural Networks using Timing Side Channel [14.350301915592027]
We identify and report a novel data-dependent timing side-channel leakage (termed Class Leakage) in Deep Learning (DL) implementations.
We demonstrate a practical inference-time attack where an adversary with user privilege and hard-label black-box access to an ML model can exploit Class Leakage.
We develop an easy-to-implement countermeasure that uses a constant-time branching operation to mitigate Class Leakage.
arXiv Detail & Related papers (2022-08-01T19:38:16Z)
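The countermeasure's core idea, making post-inference control flow independent of the predicted class, can be illustrated by replacing a data-dependent branch with a branch-free selection. This sketch shows the principle, not the paper's implementation, and assumes numeric path outputs:

    def postprocess_leaky(pred_class, cheap_path, costly_path, output):
        # Data-dependent branch: execution time reveals which class was predicted.
        return cheap_path(output) if pred_class == 0 else costly_path(output)

    def postprocess_constant_time(pred_class, cheap_path, costly_path, output):
        # Branch-free selection: always evaluate both paths and combine them with
        # a mask, so timing no longer depends on the predicted class.
        a = cheap_path(output)
        b = costly_path(output)
        mask = int(pred_class == 0)
        return mask * a + (1 - mask) * b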
- RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model, with the added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs.
arXiv Detail & Related papers (2022-07-12T19:34:47Z)
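One simplified reading of the relaxed target: stop minimizing once the loss reaches a floor alpha, which keeps the loss distributions of training members and non-members closer together and blunts MIAs. The full RelaxLoss algorithm does more (e.g., handling batches below the floor differently); this sketch only gates the update:

    import torch.nn.functional as F

    def relaxed_loss_step(model, x, y, optimizer, alpha=0.5):
        """Train normally while the batch loss is above the target floor
        `alpha`; skip the update once it is reached. Simplified sketch of a
        relaxed learning target, not the full RelaxLoss algorithm."""
        loss = F.cross_entropy(model(x), y)
        if loss.item() > alpha:
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        return loss.item()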
- Federated Split GANs [12.007429155505767]
We propose an alternative approach that trains ML models on users' devices themselves.
We focus on GANs (generative adversarial networks) and leverage their inherent privacy-preserving attribute.
Our system preserves data privacy, keeps training time short, and yields the same accuracy as model training on unconstrained devices.
arXiv Detail & Related papers (2022-07-04T23:53:47Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
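Secure aggregation, the primitive RoFL hardens, can be illustrated with pairwise additive masking: each pair of clients shares a random mask that one adds and the other subtracts, so individual updates look random while the masks cancel in the sum. A toy NumPy version, not RoFL's actual protocol:

    import numpy as np

    def secure_aggregate(client_updates, seed=0):
        """Toy pairwise-masking secure aggregation: the server only learns the
        sum of updates, because each shared mask is added by one client and
        subtracted by its partner. Illustrative, not RoFL's protocol."""
        rng = np.random.default_rng(seed)
        masked = [u.astype(float).copy() for u in client_updates]
        for i in range(len(masked)):
            for j in range(i + 1, len(masked)):
                mask = rng.normal(size=masked[0].shape)
                masked[i] += mask   # client i adds the shared mask
                masked[j] -= mask   # client j subtracts it; cancels in the sum
        return sum(masked)          # equals sum(client_updates) up to float error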
- Survey: Leakage and Privacy at Inference Time [59.957056214792665]
Leakage of data from publicly available Machine Learning (ML) models is an area of growing significance.
We focus on inference-time leakage, as the most likely scenario for publicly available models.
We propose a taxonomy across involuntary and malevolent leakage and the available defences, followed by the currently available assessment metrics and applications.
arXiv Detail & Related papers (2021-07-04T12:59:16Z)
- Federated Learning-based Active Authentication on Mobile Devices [98.23904302910022]
User active authentication on mobile devices aims to learn a model that can correctly recognize the enrolled user based on device sensor information.
We propose a novel user active authentication training method, termed Federated Active Authentication (FAA).
We show that existing FL/SL methods are suboptimal for FAA as they rely on the data being distributed homogeneously.
arXiv Detail & Related papers (2021-04-14T22:59:08Z)
- MDLdroid: a ChainSGD-reduce Approach to Mobile Deep Learning for Personal Mobile Sensing [14.574274428615666]
Running deep learning on devices offers several advantages, including data privacy preservation and low-latency response for both model robustness and model updates.
Personal mobile sensing applications are mostly user-specific and highly affected by the environment.
We present MDLdroid, a novel decentralized mobile deep learning framework that enables resource-aware on-device collaborative learning.
arXiv Detail & Related papers (2020-02-07T16:55:21Z)
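The name ChainSGD-reduce suggests gradients reduced along a chain of devices rather than through a central server. Purely as an illustration of that communication pattern (the real scheduler, partitioning, and fault handling are not shown):

    import numpy as np

    def chain_reduce_mean(device_grads):
        """Toy chain-style reduction: pass a running sum from device to device,
        then broadcast the average back along the chain. Illustrative of the
        communication pattern only."""
        running = np.zeros_like(device_grads[0], dtype=float)
        for grad in device_grads:           # forward pass along the device chain
            running = running + grad
        return running / len(device_grads)  # average broadcast back down the chain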
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.