Partially Oblivious Neural Network Inference
- URL: http://arxiv.org/abs/2210.15189v1
- Date: Thu, 27 Oct 2022 05:39:36 GMT
- Title: Partially Oblivious Neural Network Inference
- Authors: Panagiotis Rizomiliotis, Christos Diou, Aikaterini Triakosia, Ilias
Kyrannas and Konstantinos Tserpes
- Abstract summary: We show that for neural network models, like CNNs, some information leakage can be acceptable.
We experimentally demonstrate that in a CIFAR-10 network we can leak up to $80\%$ of the model's weights with practically no security impact.
- Score: 4.843820624525483
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Oblivious inference is the task of outsourcing an ML model, such as
a neural network, without disclosing critical and sensitive information, like
the model's parameters. One of the most prominent solutions for secure
oblivious inference is based on powerful cryptographic tools, like Homomorphic
Encryption (HE) and/or multi-party computation (MPC). Even though the
implementation of oblivious inference schemes has improved impressively over
the last decade, there are still significant limitations on the ML models that
they can practically implement, especially when the confidentiality of both the
ML model and the input data must be protected. In this paper, we introduce the
notion of partially oblivious inference. We empirically show that for neural
network models, like CNNs, some information leakage can be acceptable. We
therefore propose a novel trade-off between security and efficiency. In our
research, we investigate the impact of partially leaking the CNN model's
weights on security and on inference runtime performance. We experimentally
demonstrate that in a CIFAR-10 network we can leak up to $80\%$ of the model's
weights with practically no security impact, while the necessary
HE-multiplications are performed four times faster.
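One way to read the reported trade-off is that a leaked weight no longer needs to stay encrypted, so multiplying it with an encrypted value becomes a cheaper plaintext-ciphertext operation rather than a ciphertext-ciphertext one. The Python sketch below is a minimal, hypothetical cost model of this idea for a single linear layer; the relative operation costs, the 4:1 ratio, and the layer size are illustrative assumptions, not figures taken from the paper.

```python
# Toy cost model for one fully connected layer evaluated under HE when a
# fraction of its weights is leaked (kept in plaintext). The relative costs
# below are illustrative assumptions, not measurements from the paper.

CT_CT_MUL_COST = 4.0  # assumed cost of a ciphertext-ciphertext multiplication
PT_CT_MUL_COST = 1.0  # assumed cost of a plaintext-ciphertext multiplication


def layer_mul_cost(n_weights: int, leak_fraction: float) -> float:
    """Total multiplication cost when `leak_fraction` of the weights are public."""
    leaked = int(n_weights * leak_fraction)  # public weights -> cheap pt x ct products
    hidden = n_weights - leaked              # encrypted weights -> costly ct x ct products
    return leaked * PT_CT_MUL_COST + hidden * CT_CT_MUL_COST


if __name__ == "__main__":
    n = 64 * 64  # weights in a toy layer
    baseline = layer_mul_cost(n, leak_fraction=0.0)  # fully oblivious layer
    for frac in (0.0, 0.5, 0.8):
        print(f"leak {frac:4.0%}: {layer_mul_cost(n, frac) / baseline:.2f}x of baseline cost")
```

Under these assumed costs, leaking 80% of the weights reduces the multiplication cost to roughly 40% of the fully oblivious baseline; the four-times speedup reported in the paper depends on the concrete HE scheme, its parameters, and how the leaked weights are exploited.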
Related papers
- NeuroPlug: Plugging Side-Channel Leaks in NPUs using Space Filling Curves [0.4143603294943439]
All published countermeasures (CMs) add noise N to a signal X.
We show that it is easy to filter this noise out using targeted measurements, statistical analyses and different kinds of reasonably-assumed side information.
We present a novel CM, NeuroPlug, that is immune to these attack methodologies, mainly because we use a different formulation, CX + N (a toy illustration follows this entry).
arXiv Detail & Related papers (2024-07-18T10:40:41Z)
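To make the contrast stated in the NeuroPlug entry above concrete, the toy Python sketch below compares an additive countermeasure, Y = X + N, where averaging repeated traces filters out zero-mean noise, with the mixed formulation Y = CX + N, where averaging only recovers CX. A random matrix stands in for NeuroPlug's space-filling-curve-based construction of C, and averaging is just one simple filtering strategy; this is an illustration of the stated formulation only, not of the paper's actual attack or defense.

```python
import numpy as np

rng = np.random.default_rng(0)
d, trials = 8, 10_000

x = rng.normal(size=d)             # the repeatable signal an attacker measures
C = rng.normal(size=(d, d))        # stand-in for an unknown mixing transform
noise = rng.normal(scale=2.0, size=(trials, d))  # fresh zero-mean noise per trace

# Additive countermeasure, Y = X + N: averaging many traces removes the noise.
additive = x + noise
print("X + N  error after averaging:", np.linalg.norm(additive.mean(axis=0) - x))

# Mixed formulation, Y = CX + N: averaging converges to CX, not X, so the
# signal is not recovered without knowledge of C.
mixed = x @ C.T + noise
print("CX + N error after averaging:", np.linalg.norm(mixed.mean(axis=0) - x))
```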
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
- Co(ve)rtex: ML Models as storage channels and their (mis-)applications [2.792027541710663]
In machine learning systems, don't-care states and undefined behavior have been shown to be sources of significant vulnerabilities.
We consider the ML model as a storage channel with a capacity that increases with over-parameterization.
We develop optimizations to improve the capacity in this case, including a novel ML-specific substitution-based error correction protocol.
arXiv Detail & Related papers (2023-07-17T19:57:10Z)
- Publishing Efficient On-device Models Increases Adversarial Vulnerability [58.6975494957865]
In this paper, we study the security considerations of publishing on-device variants of large-scale models.
We first show that an adversary can exploit on-device models to make attacking the large models easier.
We then show that the vulnerability increases as the similarity between a full-scale model and its efficient variant increases.
arXiv Detail & Related papers (2022-12-28T05:05:58Z)
- MOVE: Effective and Harmless Ownership Verification via Embedded External Features [109.19238806106426]
We propose an effective and harmless model ownership verification method (MOVE) to defend against different types of model stealing simultaneously.
We conduct the ownership verification by verifying whether a suspicious model contains the knowledge of defender-specified external features.
In particular, we develop our MOVE method under both white-box and black-box settings to provide comprehensive model protection.
arXiv Detail & Related papers (2022-08-04T02:22:29Z)
- RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model, with the added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs.
arXiv Detail & Related papers (2022-07-12T19:34:47Z)
- DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories [26.067920958354]
One of the major threats to the privacy of Deep Neural Networks (DNNs) is model extraction attacks.
Recent studies show that hardware-based side-channel attacks can reveal internal knowledge about DNN models (e.g., model architectures).
We propose an advanced model extraction attack framework, DeepSteal, that effectively steals DNN weights with the aid of a memory side-channel attack.
arXiv Detail & Related papers (2021-11-08T16:55:45Z)
- Towards a Safety Case for Hardware Fault Tolerance in Convolutional Neural Networks Using Activation Range Supervision [1.7968112116887602]
Convolutional neural networks (CNNs) have become an established part of numerous safety-critical computer vision applications.
We build a prototypical safety case for CNNs by demonstrating that range supervision represents a highly reliable fault detector.
We explore novel, non-uniform range restriction methods that effectively suppress the probability of silent data corruptions and uncorrectable errors.
arXiv Detail & Related papers (2021-08-16T11:13:55Z)
- Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low-quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.