MGIC: A Multi-Label Gradient Inversion Attack based on Canny Edge
Detection on Federated Learning
- URL: http://arxiv.org/abs/2403.08284v1
- Date: Wed, 13 Mar 2024 06:34:49 GMT
- Title: MGIC: A Multi-Label Gradient Inversion Attack based on Canny Edge
Detection on Federated Learning
- Authors: Can Liu and Jin Wang
- Abstract summary: We present a novel gradient inversion strategy based on Canny edge detection (MGIC) for both multi-label and single-label datasets.
Our strategy yields visually better inverted images than the most widely used methods while saving more than 78% of the time cost on the ImageNet dataset.
- Score: 6.721419921063687
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As a new distributed computing framework that can protect data privacy,
federated learning (FL) has attracted increasing attention in recent years. It
receives gradients from users to train the global model and distributes the
trained global model back to participating users. Nonetheless, the gradient
inversion (GI) attack exposes the risk of privacy leakage in federated learning.
Using only the gradients and hundreds of thousands of simple iterations, an
attacker can reconstruct, with reasonable accuracy, the private data stored on
users' local devices. To this end, some works propose simple yet effective
strategies for recovering user data from single-label datasets. However, these
strategies achieve a satisfactory visual quality of the inverted image at the
expense of higher time costs. Moreover, because a single label carries limited
semantics, the image obtained by gradient inversion may contain semantic errors.
We present a novel gradient inversion strategy based on Canny edge detection
(MGIC) for both multi-label and single-label datasets. To reduce the semantic
errors caused by a single label, we add new blocks of convolutional layers to
the trained model to obtain multiple labels for the image. This multi-label
representation reduces serious semantic errors in the inverted images. We then
analyze how parameters affect the difficulty of reconstructing the input image
and discuss how multiple subjects in an image affect inversion performance. Our
proposed strategy yields visually better inverted images than the most widely
used ones while saving more than 78% of the time cost on the ImageNet dataset.
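The attack mechanism the abstract describes (iteratively optimizing a dummy input until its gradient matches the leaked one) can be sketched on a toy linear model. This is a minimal illustration under assumed settings, not the paper's MGIC method; the model, sizes, and step-size schedule here are hypothetical.

```python
import numpy as np

# Toy sketch of a gradient inversion (GI) attack on a linear model
# y = W x with squared loss. The attacker observes only the gradient
# the user computed on private data, then optimizes a dummy input
# until its gradient reproduces the leaked one.
rng = np.random.default_rng(0)

W = rng.normal(size=(3, 5))        # shared global weights (known to attacker)
x_true = rng.normal(size=(5, 1))   # the user's private input (unknown)
t = rng.normal(size=(3, 1))        # target; assume the label was inferred

def weight_grad(x):
    # gradient of ||W x - t||^2 with respect to W: 2 (W x - t) x^T
    return 2.0 * (W @ x - t) @ x.T

g_leaked = weight_grad(x_true)     # what the attacker observes

x_dummy = rng.normal(size=(5, 1))
init_loss = np.sum((weight_grad(x_dummy) - g_leaked) ** 2)
loss, lr = init_loss, 1e-3
for _ in range(5000):
    u = W @ x_dummy - t
    R = 2.0 * u @ x_dummy.T - g_leaked          # gradient mismatch
    gx = 4.0 * (W.T @ R @ x_dummy + R.T @ u)    # d(mismatch) / d(x_dummy)
    x_new = x_dummy - lr * gx
    new_loss = np.sum((weight_grad(x_new) - g_leaked) ** 2)
    if new_loss < loss:                         # accept improving steps
        x_dummy, loss = x_new, new_loss
        lr *= 1.1
    else:                                       # otherwise backtrack
        lr *= 0.5

print(loss < init_loss)  # True: the gradient mismatch has decreased
```

Real GI attacks apply the same gradient-matching loop to deep networks and image batches, which is why they need the "hundreds of thousands of simple iterations" mentioned above.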
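The role of Canny edge detection in MGIC can be illustrated with a stripped-down, Canny-style edge extractor (Sobel gradients plus a double threshold). The real Canny pipeline also includes Gaussian smoothing, non-maximum suppression, and hysteresis edge tracking; the thresholds and toy image below are illustrative only.

```python
import numpy as np

# Simplified, Canny-style edge extraction: Sobel gradient magnitude
# followed by a double threshold. Pixels above `hi` are strong edges;
# pixels between `lo` and `hi` are weak candidates that full Canny
# would keep only if connected to a strong edge.
def edge_map(img, lo=0.2, hi=0.6):
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)   # horizontal gradient
            gy[i, j] = np.sum(patch * ky)   # vertical gradient
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-12                # normalize to [0, 1]
    strong = mag >= hi
    weak = (mag >= lo) & (mag < hi)
    return strong, weak

# toy image: dark left half, bright right half -> one vertical edge
img = np.zeros((8, 8))
img[:, 4:] = 1.0
strong, weak = edge_map(img)
print(strong.any())  # True: the step between the two halves is detected
```

An edge map like this gives the inversion optimizer structural information about object contours, which is the kind of prior MGIC exploits to reduce semantic errors in reconstructed images.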
Related papers
- Multimodal Unlearnable Examples: Protecting Data against Multimodal Contrastive Learning [53.766434746801366]
Multimodal contrastive learning (MCL) has shown remarkable advances in zero-shot classification by learning from millions of image-caption pairs crawled from the Internet.
Hackers may exploit image-text data for model training without authorization, potentially including personal and privacy-sensitive information.
Recent works propose generating unlearnable examples by adding imperceptible perturbations to training images to build shortcuts for protection.
We propose Multi-step Error Minimization (MEM), a novel optimization process for generating multimodal unlearnable examples.
arXiv Detail & Related papers (2024-07-23T09:00:52Z)
- GI-SMN: Gradient Inversion Attack against Federated Learning without Prior Knowledge [4.839514405631815]
Federated learning (FL) has emerged as a privacy-preserving machine learning approach.
Gradient inversion attacks can exploit the gradients of FL to recreate the original user data.
We propose a novel Gradient Inversion attack based on Style Migration Network (GI-SMN).
arXiv Detail & Related papers (2024-05-06T14:29:24Z)
- AFGI: Towards Accurate and Fast-convergent Gradient Inversion Attack in Federated Learning [13.104809524506132]
Federated learning (FL) empowers privacy preservation in model training by exposing only users' model gradients.
Yet, FL users are susceptible to gradient inversion attacks (GIAs), which can reconstruct ground-truth training data.
We present an Accurate and Fast-convergent Gradient Inversion attack algorithm, called AFGI, with two components.
arXiv Detail & Related papers (2024-03-13T09:48:04Z)
- OsmLocator: locating overlapping scatter marks with a non-training generative perspective [48.50108853199417]
Locating overlapping marks faces many difficulties, such as no texture, little contextual information, hollow shape, and tiny size.
Here, we formulate it as an optimization problem on clustering-based re-visualization from a non-training generative perspective.
We built a dataset named 2023 containing hundreds of scatter images with different markers and various levels of overlapping severity, and tested the proposed method against existing methods.
arXiv Detail & Related papers (2023-12-18T12:39:48Z)
- Self-similarity Driven Scale-invariant Learning for Weakly Supervised Person Search [66.95134080902717]
We propose a novel one-step framework, named Self-similarity driven Scale-invariant Learning (SSL).
We introduce a Multi-scale Exemplar Branch to guide the network in concentrating on the foreground and learning scale-invariant features.
Experiments on the PRW and CUHK-SYSU databases demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2023-02-25T04:48:11Z)
- Enhancing Privacy against Inversion Attacks in Federated Learning by using Mixing Gradients Strategies [0.31498833540989407]
Federated learning reduces the risk of information leakage but remains vulnerable to attacks.
We show how several neural network design decisions can defend against gradient inversion attacks.
These strategies are also shown to be useful for deep convolutional neural networks such as LeNet for image recognition.
arXiv Detail & Related papers (2022-04-26T12:08:28Z)
- Learning Self-Supervised Low-Rank Network for Single-Stage Weakly and Semi-Supervised Semantic Segmentation [119.009033745244]
This paper presents a Self-supervised Low-Rank Network (SLRNet) for single-stage weakly supervised semantic segmentation (WSSS) and semi-supervised semantic segmentation (SSSS).
SLRNet uses cross-view self-supervision; that is, it simultaneously predicts several attentive LR representations from different views of an image to learn precise pseudo-labels.
Experiments on the Pascal VOC 2012, COCO, and L2ID datasets demonstrate that our SLRNet outperforms both state-of-the-art WSSS and SSSS methods under a variety of settings.
arXiv Detail & Related papers (2022-03-19T09:19:55Z)
- Unpaired Image Captioning by Image-level Weakly-Supervised Visual Concept Recognition [83.93422034664184]
Unpaired image captioning (UIC) aims to describe images without using image-caption pairs in the training phase.
Most existing studies use off-the-shelf algorithms to obtain the visual concepts.
We propose a novel approach to achieve cost-effective UIC using image-level labels.
arXiv Detail & Related papers (2022-03-07T08:02:23Z)
- Semi-weakly Supervised Contrastive Representation Learning for Retinal Fundus Images [0.2538209532048867]
We propose a semi-weakly supervised contrastive learning framework (SWCL) for representation learning using semi-weakly annotated images.
We empirically validate the transfer learning performance of SWCL on seven public retinal fundus datasets.
arXiv Detail & Related papers (2021-08-04T15:50:09Z)
- Mixed Supervision Learning for Whole Slide Image Classification [88.31842052998319]
We propose a mixed supervision learning framework for super-high-resolution images.
During the patch training stage, this framework can make use of coarse image-level labels to refine self-supervised learning.
A comprehensive strategy is proposed to suppress pixel-level false positives and false negatives.
arXiv Detail & Related papers (2021-07-02T09:46:06Z)
- Contrastive Learning for Label-Efficient Semantic Segmentation [44.10416030868873]
Convolutional Neural Network (CNN) based semantic segmentation approaches have achieved impressive results by using large amounts of labeled data.
However, deep CNNs trained with the de facto cross-entropy loss can easily overfit to small amounts of labeled data.
We propose a simple and effective contrastive learning-based training strategy in which we first pretrain the network using a pixel-wise, label-based contrastive loss.
arXiv Detail & Related papers (2020-12-13T07:05:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.