Covert Attacks on Machine Learning Training in Passively Secure MPC
- URL: http://arxiv.org/abs/2505.17092v1
- Date: Wed, 21 May 2025 00:46:45 GMT
- Title: Covert Attacks on Machine Learning Training in Passively Secure MPC
- Authors: Matthew Jagielski, Daniel Escudero, Rahul Rachuri, Peter Scholl
- Abstract summary: Multiparty computation (MPC) allows data owners to train machine learning models on combined data while keeping the underlying training data private. The MPC threat model considers either an adversary who passively corrupts some parties without affecting their overall behavior, or an adversary who actively modifies the behavior of corrupt parties. In this work we show explicit, simple, and effective attacks that an active adversary can run on existing passively secure MPC training protocols.
- Score: 24.724860700532112
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Secure multiparty computation (MPC) allows data owners to train machine learning models on combined data while keeping the underlying training data private. The MPC threat model either considers an adversary who passively corrupts some parties without affecting their overall behavior, or an adversary who actively modifies the behavior of corrupt parties. It has been argued that in some settings, active security is not a major concern, partly because of the potential risk of reputation loss if a party is detected cheating. In this work we show explicit, simple, and effective attacks that an active adversary can run on existing passively secure MPC training protocols, while keeping essentially zero risk of the attack being detected. The attacks we show can compromise both the integrity and privacy of the model, including attacks reconstructing exact training data. Our results challenge the belief that a threat model that does not include malicious behavior by the involved parties may be reasonable in the context of PPML, motivating the use of actively secure protocols for training.
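As a rough illustration of why passive security gives no integrity guarantee, consider plain additive secret sharing over a ring, a common building block in passively secure MPC training: a semi-honest protocol only assumes that corrupted parties follow the protocol steps, so an actively corrupted party can shift its share by an arbitrary offset and bias the reconstructed value without any honest party being able to tell. The Python sketch below is a minimal toy of this general failure mode, not the paper's specific attacks; the ring size, party count, and values are illustrative assumptions.

```python
import random

MOD = 2 ** 64  # many passively secure MPC frameworks compute over a 64-bit ring


def share(secret, n_parties):
    """Split `secret` into additive shares modulo MOD."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares


def reconstruct(shares):
    return sum(shares) % MOD


# An honest fixed-point gradient entry, secret-shared among three parties.
honest_gradient = 42
shares = share(honest_gradient, n_parties=3)

# Passive ("semi-honest") security assumes every party follows the protocol.
# An actively corrupted party 0 simply adds an offset of its choice to its share:
attack_offset = 1000
shares[0] = (shares[0] + attack_offset) % MOD

# Reconstruction raises no error, but the model update is now biased by the
# adversary's offset, and the honest parties cannot tell anything went wrong.
assert reconstruct(shares) == (honest_gradient + attack_offset) % MOD
print(reconstruct(shares))  # 1042
```

Actively secure protocols target exactly this kind of additive deviation, for example by attaching information-theoretic MACs to every shared value or by running consistency checks on replicated shares.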
Related papers
- Just a Simple Transformation is Enough for Data Protection in Vertical Federated Learning [83.90283731845867]
We consider feature reconstruction attacks, a common risk targeting input data compromise. We show that Federated Learning-based models are resistant to state-of-the-art feature reconstruction attacks.
arXiv Detail & Related papers (2024-12-16T12:02:12Z) - On the Conflict of Robustness and Learning in Collaborative Machine Learning [9.372984119950765]
Collaborative Machine Learning (CML) allows participants to jointly train a machine learning model while keeping their training data private.
In many scenarios where CML is seen as the solution to privacy issues, such as health-related applications, safety is also a primary concern.
To ensure that CML processes produce models that output correct and reliable decisions, researchers propose to use robust aggregators.
arXiv Detail & Related papers (2024-02-21T11:04:23Z) - Differentially Private and Adversarially Robust Machine Learning: An Empirical Evaluation [2.8084422332394428]
Malicious adversaries can attack machine learning models to infer sensitive information or damage the system by launching a series of evasion attacks.
This study explores the combination of adversarial training and differentially private training to defend against simultaneous attacks.
arXiv Detail & Related papers (2024-01-18T22:26:31Z) - A Robust Adversary Detection-Deactivation Method for Metaverse-oriented Collaborative Deep Learning [13.131323206843733]
This paper proposes an adversary detection-deactivation method, which can limit and isolate the access of potential malicious participants.
A detailed protection analysis has been conducted on a Multiview CDL case, and results show that the protocol can effectively prevent harmful access by manner analysis.
arXiv Detail & Related papers (2023-10-21T06:45:18Z) - Adversarial Robustness Unhardening via Backdoor Attacks in Federated Learning [12.232863656375098]
Federated learning enables the training of collaborative models without sharing data. This approach brings forth security challenges, notably poisoning and backdoor attacks. We introduce Adversarial Robustness Unhardening (ARU), which is employed by a subset of adversarial clients.
arXiv Detail & Related papers (2023-10-17T21:38:41Z) - Can Adversarial Examples Be Parsed to Reveal Victim Model Information? [62.814751479749695]
In this work, we ask whether it is possible to infer data-agnostic victim model (VM) information from data-specific adversarial instances.
We collect a dataset of adversarial attacks across 7 attack types generated from 135 victim models.
We show that a simple, supervised model parsing network (MPN) is able to infer VM attributes from unseen adversarial attacks.
arXiv Detail & Related papers (2023-03-13T21:21:49Z) - Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack [53.032801921915436]
Human Activity Recognition (HAR) has been employed in a wide range of applications, e.g. self-driving cars.
Recently, the robustness of skeleton-based HAR methods has been questioned due to their vulnerability to adversarial attacks.
We show such threats exist, even when the attacker only has access to the input/output of the model.
We propose the very first black-box adversarial attack approach in skeleton-based HAR called BASAR.
arXiv Detail & Related papers (2022-11-21T09:51:28Z) - Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets [53.866927712193416]
We show that an adversary who can poison a training dataset can cause models trained on this dataset to leak private details belonging to other parties.
Our attacks are effective across membership inference, attribute inference, and data extraction.
Our results cast doubts on the relevance of cryptographic privacy guarantees in multiparty protocols for machine learning.
arXiv Detail & Related papers (2022-03-31T18:06:28Z) - On the Effectiveness of Adversarial Training against Backdoor Attacks [111.8963365326168]
A backdoored model always predicts a target class in the presence of a predefined trigger pattern.
In general, adversarial training is believed to defend against backdoor attacks.
We propose a hybrid strategy which provides satisfactory robustness across different backdoor attacks.
arXiv Detail & Related papers (2022-02-22T02:24:46Z) - Certified Federated Adversarial Training [3.474871319204387]
We tackle the scenario of securing FL systems conducting adversarial training when a quorum of workers could be completely malicious.
We model an attacker who poisons the model to insert a weakness into the adversarial training such that the model displays apparent adversarial robustness.
We show that this defence can preserve adversarial robustness even against an adaptive attacker.
arXiv Detail & Related papers (2021-12-20T13:40:20Z) - Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike standard membership adversaries, works under the severe restriction of having no access to the victim model's scores.
We show that a victim model that publishes only labels is still susceptible to sampling attacks, and the adversary can recover up to 100% of its performance (see the sketch after this list).
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model as well as output perturbation at prediction time.
arXiv Detail & Related papers (2020-09-01T12:54:54Z)
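The sampling-attack entry above operates in a label-only setting, where the adversary sees only hard predictions. As a rough sketch of the general idea behind label-only membership inference (not the paper's exact sampling procedure), the hypothetical snippet below scores membership by how stably the victim model's label survives repeated noisy queries; `query_labels_fn`, the noise scale, and the decision threshold are all illustrative assumptions.

```python
import numpy as np


def label_stability(query_labels_fn, x, true_label, n_queries=50, noise_std=0.05):
    """Fraction of noisy queries on which the model still returns true_label.

    `query_labels_fn` is assumed to expose only hard labels (no scores),
    mirroring the label-only threat model.
    """
    rng = np.random.default_rng(0)
    hits = 0
    for _ in range(n_queries):
        noisy = x + rng.normal(0.0, noise_std, size=x.shape)
        if query_labels_fn(noisy) == true_label:
            hits += 1
    return hits / n_queries


def infer_membership(query_labels_fn, x, true_label, threshold=0.9):
    # Training points tend to be classified more robustly than non-members,
    # so high label stability under perturbation is taken as evidence of
    # membership even when no confidence scores are exposed.
    return label_stability(query_labels_fn, x, true_label) >= threshold
```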
This list is automatically generated from the titles and abstracts of the papers on this site.