Investigating Privacy Leakage in Dimensionality Reduction Methods via Reconstruction Attack
- URL: http://arxiv.org/abs/2408.17151v2
- Date: Tue, 03 Dec 2024 05:27:59 GMT
- Title: Investigating Privacy Leakage in Dimensionality Reduction Methods via Reconstruction Attack
- Authors: Chayadon Lumbut, Donlapark Ponnoprat
- Abstract summary: We develop a neural network capable of reconstructing high-dimensional data from low-dimensional embeddings.
We evaluate six popular dimensionality reduction techniques: PCA, sparse random projection (SRP), multidimensional scaling (MDS), Isomap, t-SNE, and UMAP.
- Abstract: This study investigates privacy leakage in dimensionality reduction methods through a novel machine learning-based reconstruction attack. Employing an informed adversary threat model, we develop a neural network capable of reconstructing high-dimensional data from low-dimensional embeddings. We evaluate six popular dimensionality reduction techniques: PCA, sparse random projection (SRP), multidimensional scaling (MDS), Isomap, t-SNE, and UMAP. Using both MNIST and NIH Chest X-ray datasets, we perform a qualitative analysis to identify key factors affecting reconstruction quality. Furthermore, we assess the effectiveness of an additive noise mechanism in mitigating these reconstruction attacks. Our experimental results on both datasets reveal that the attack is effective against deterministic methods (PCA and Isomap), but ineffective against methods that employ random initialization (SRP, MDS, t-SNE, and UMAP). When large noise is added to the images before applying PCA or Isomap, the attack produces severely distorted reconstructions. For the other four methods, in contrast, the reconstructions retain some recognizable features, though they bear little resemblance to the original images.
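The setting in the abstract can be sketched on toy data. This is a minimal NumPy illustration, not the paper's method: it uses a linear least-squares map as a stand-in for the paper's reconstruction neural network, synthetic low-rank data in place of MNIST, and an arbitrary noise scale `sigma` for the additive-noise mitigation. All names and parameters here are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for high-dimensional images with low-rank structure:
# n samples of dimension D, generated from a d-dimensional latent space.
n, D, d = 200, 64, 8
latent = rng.normal(size=(n, d))
mixing = rng.normal(size=(d, D))
X = latent @ mixing + 0.01 * rng.normal(size=(n, D))

# PCA via SVD -- deterministic, like the attacked PCA in the paper.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:d].T  # low-dimensional embeddings observed by the adversary

# Informed-adversary reconstruction: the paper trains a neural network;
# a linear least-squares map from embeddings back to data is a crude
# stand-in that suffices to show the effect on this linear toy problem.
W, *_ = np.linalg.lstsq(Z, Xc, rcond=None)
X_rec = Z @ W + X.mean(axis=0)
err_clean = np.mean((X - X_rec) ** 2)

# Additive-noise mitigation: perturb the data before PCA, as in the
# paper's defence experiment, then repeat the same attack.
sigma = 5.0  # assumed noise scale, chosen to dominate the signal
Xn = X + sigma * rng.normal(size=X.shape)
Xnc = Xn - Xn.mean(axis=0)
_, _, Vtn = np.linalg.svd(Xnc, full_matrices=False)
Zn = Xnc @ Vtn[:d].T
Wn, *_ = np.linalg.lstsq(Zn, Xnc, rcond=None)
X_rec_noisy = Zn @ Wn + Xn.mean(axis=0)
err_noisy = np.mean((X - X_rec_noisy) ** 2)

print(f"reconstruction MSE, clean PCA: {err_clean:.4f}")
print(f"reconstruction MSE, noisy PCA: {err_noisy:.4f}")
```

On this toy problem the clean-PCA embeddings are reconstructed almost exactly, while the noisy-PCA reconstructions are far from the original data, mirroring the qualitative finding reported for PCA in the abstract.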
Related papers
- Re-Visible Dual-Domain Self-Supervised Deep Unfolding Network for MRI Reconstruction [48.30341580103962]
We propose a novel re-visible dual-domain self-supervised deep unfolding network to address these issues.
We design a deep unfolding network based on Chambolle and Pock Proximal Point Algorithm (DUN-CP-PPA) to achieve end-to-end reconstruction.
Experiments conducted on the fastMRI and IXI datasets demonstrate that our method significantly outperforms state-of-the-art approaches in terms of reconstruction performance.
arXiv Detail & Related papers (2025-01-07T12:29:32Z)
- Detecting and Mitigating Adversarial Attacks on Deep Learning-Based MRI Reconstruction Without Any Retraining [2.5943586090617377]
We propose a novel approach for detecting and mitigating adversarial attacks on MRI reconstruction models without any retraining.
Our detection strategy is based on the idea of cyclic measurement consistency.
We show that our method substantially reduces the impact of adversarial perturbations across different datasets.
arXiv Detail & Related papers (2025-01-03T17:23:52Z)
- DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network with a connection to the stable diffusion's denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z)
- Reconstruction Distortion of Learned Image Compression with Imperceptible Perturbations [69.25683256447044]
We introduce an attack approach designed to effectively degrade the reconstruction quality of Learned Image Compression (LIC).
We generate adversarial examples by introducing a Frobenius norm-based loss function to maximize the discrepancy between original images and reconstructed adversarial examples.
Experiments conducted on the Kodak dataset using various LIC models demonstrate effectiveness.
arXiv Detail & Related papers (2023-06-01T20:21:05Z)
- Fast refacing of MR images with a generative neural network lowers re-identification risk and preserves volumetric consistency [5.040145546652933]
We propose a novel method for anonymised face generation for 3D T1-weighted scans based on a 3D conditional generative adversarial network.
The proposed method takes 9 seconds for face generation and is suitable for recovering consistent post-processing results after defacing.
arXiv Detail & Related papers (2023-05-26T13:34:14Z)
- Uncertainty-Aware Null Space Networks for Data-Consistent Image Reconstruction [0.0]
State-of-the-art reconstruction methods have been developed based on recent advances in deep learning.
For such approaches to be used in safety-critical domains such as medical imaging, the network reconstruction should not only provide the user with a reconstructed image, but also with some level of confidence in the reconstruction.
This work is the first approach to solving inverse problems that additionally models data-dependent uncertainty by estimating an input-dependent scale map.
arXiv Detail & Related papers (2023-04-14T06:58:44Z)
- Subject-specific quantitative susceptibility mapping using patch based deep image priors [13.734472448148333]
We propose a subject-specific, patch-based, unsupervised learning algorithm to estimate the susceptibility map.
We make the problem well-posed by exploiting the redundancies across the patches of the map using a deep convolutional neural network.
We tested the algorithm on a 3D in vivo dataset and demonstrated improved reconstructions over competing methods.
arXiv Detail & Related papers (2022-10-10T02:28:53Z)
- CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models [61.68061613161187]
This paper presents CARLA-GeAR, a tool for the automatic generation of synthetic datasets for evaluating the robustness of neural models against physical adversarial patches.
The tool is built on the CARLA simulator, using its Python API, and allows the generation of datasets for several vision tasks in the context of autonomous driving.
The paper presents an experimental study to evaluate the performance of some defense methods against such attacks, showing how the datasets generated with CARLA-GeAR might be used in future work as a benchmark for adversarial defense in the real world.
arXiv Detail & Related papers (2022-06-09T09:17:38Z)
- Meta Adversarial Perturbations [66.43754467275967]
We show the existence of a meta adversarial perturbation (MAP).
MAP causes natural images to be misclassified with high probability after being updated through only a one-step gradient ascent update.
We show that these perturbations are not only image-agnostic, but also model-agnostic, as a single perturbation generalizes well across unseen data points and different neural network architectures.
arXiv Detail & Related papers (2021-11-19T16:01:45Z)
- Probabilistic 3D surface reconstruction from sparse MRI information [58.14653650521129]
We present a novel probabilistic deep learning approach for concurrent 3D surface reconstruction from sparse 2D MR image data and aleatoric uncertainty prediction.
Our method is capable of reconstructing large surface meshes from three quasi-orthogonal MR imaging slices from limited training sets.
arXiv Detail & Related papers (2020-10-05T14:18:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.