Analysis and Mitigations of Reverse Engineering Attacks on Local Feature
Descriptors
- URL: http://arxiv.org/abs/2105.03812v1
- Date: Sun, 9 May 2021 01:41:36 GMT
- Title: Analysis and Mitigations of Reverse Engineering Attacks on Local Feature
Descriptors
- Authors: Deeksha Dangwal, Vincent T. Lee, Hyo Jin Kim, Tianwei Shen, Meghan
Cowan, Rajvi Shah, Caroline Trippel, Brandon Reagen, Timothy Sherwood,
Vasileios Balntas, Armin Alaghi, Eddy Ilg
- Abstract summary: We show under controlled conditions a reverse engineering attack on sparse feature maps and analyze the vulnerability of popular descriptors.
We evaluate potential mitigation techniques that select a subset of descriptors to carefully balance privacy reconstruction risk while preserving image matching accuracy.
- Score: 15.973484638972739
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As autonomous driving and augmented reality evolve, a practical concern is
data privacy. In particular, these applications rely on localization based on
user images. The widely adopted technology uses local feature descriptors,
which are derived from the images and were long thought to be irreversible.
However, recent work has demonstrated that under certain conditions reverse
engineering attacks are possible and allow an adversary to reconstruct RGB
images, which poses a potential risk to user privacy. We take this a step
further and model potential adversaries using a privacy threat model.
Subsequently, under controlled conditions, we demonstrate a reverse engineering
attack on sparse feature maps and analyze the vulnerability of popular
descriptors, including FREAK, SIFT, and SOSNet. Finally, we evaluate potential
mitigation techniques that select a subset of descriptors to balance privacy
(reconstruction) risk against image matching accuracy; our results show that
similar matching accuracy can be obtained while revealing less information.
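The mitigation idea above, revealing only a subset of the extracted descriptors, can be illustrated with a short sketch. The snippet below assumes OpenCV's SIFT implementation and a simple keep-the-k-strongest-keypoints heuristic; the function name and the selection rule are illustrative assumptions, not the selection strategy evaluated in the paper.

```python
# Minimal sketch of the descriptor-subset idea, assuming OpenCV's SIFT.
# The top-k-by-response heuristic is an illustrative stand-in, not the
# selection strategy evaluated in the paper.
import cv2
import numpy as np

def extract_descriptor_subset(image_path: str, k: int = 200):
    """Extract SIFT keypoints/descriptors and keep only the k strongest,
    so that less information about the image is revealed."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)
    if descriptors is None:
        return [], None
    # Rank keypoints by detector response and keep the strongest k.
    order = np.argsort([-kp.response for kp in keypoints])[:k]
    kept_kps = [keypoints[i] for i in order]
    kept_desc = descriptors[order]
    return kept_kps, kept_desc

# Usage: send only the kept descriptors (plus the keypoint geometry
# required for matching) to the localization service.
kps, desc = extract_descriptor_subset("query.jpg", k=200)
```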
Related papers
- Exploring User-level Gradient Inversion with a Diffusion Prior [17.2657358645072]
We propose a novel gradient inversion attack that applies a denoising diffusion model as a strong image prior in order to enhance recovery in the large-batch setting.
Unlike traditional attacks, which aim to reconstruct individual samples and suffer at large batch and image sizes, our approach instead aims to recover a representative image that captures the sensitive shared semantic information corresponding to the underlying user.
arXiv Detail & Related papers (2024-09-11T14:20:47Z)
- Is Diffusion Model Safe? Severe Data Leakage via Gradient-Guided Diffusion Model [13.66943548640248]
Gradient leakage has been identified as a potential source of privacy breaches in modern image processing systems.
We propose an innovative gradient-guided fine-tuning method and introduce a new reconstruction attack that is capable of stealing high-resolution images.
Our attack method significantly outperforms the SOTA attack baselines in terms of both pixel-wise accuracy and time efficiency of image reconstruction.
arXiv Detail & Related papers (2024-06-13T14:41:47Z)
- Region of Interest Loss for Anonymizing Learned Image Compression [3.0936354370614607]
We show how to achieve sufficient anonymization such that human faces become unrecognizable while persons are kept detectable.
This approach enables compression and anonymization in one step on the capture device, instead of transmitting sensitive, nonanonymized data over the network.
arXiv Detail & Related papers (2024-06-09T10:36:06Z)
- Visual Privacy Auditing with Diffusion Models [52.866433097406656]
We propose a reconstruction attack based on diffusion models (DMs) that assumes adversary access to real-world image priors.
We show that (1) real-world data priors significantly influence reconstruction success, (2) current reconstruction bounds do not model the risk posed by data priors well, and (3) DMs can serve as effective auditing tools for visualizing privacy leakage.
arXiv Detail & Related papers (2024-03-12T12:18:55Z)
- Privacy Assessment on Reconstructed Images: Are Existing Evaluation Metrics Faithful to Human Perception? [86.58989831070426]
We study the faithfulness of hand-crafted metrics to human perception of privacy information from reconstructed images.
We propose a learning-based measure called SemSim to evaluate the Semantic Similarity between the original and reconstructed images.
arXiv Detail & Related papers (2023-09-22T17:58:04Z)
- PRO-Face S: Privacy-preserving Reversible Obfuscation of Face Images via Secure Flow [69.78820726573935]
We name it PRO-Face S, short for Privacy-preserving Reversible Obfuscation of Face images via Secure flow-based model.
In the framework, an Invertible Neural Network (INN) is utilized to process the input image along with its pre-obfuscated form, and to generate a privacy-protected image that visually approximates the pre-obfuscated one.
arXiv Detail & Related papers (2023-07-18T10:55:54Z)
- Privacy-Preserving Representations are not Enough -- Recovering Scene Content from Camera Poses [63.12979986351964]
Existing work on privacy-preserving localization aims to defend against an attacker who has access to a cloud-based service.
We show that an attacker can learn about details of a scene without any such access, simply by querying a localization service.
arXiv Detail & Related papers (2023-05-08T10:25:09Z)
- DISCO: Adversarial Defense with Local Implicit Functions [79.39156814887133]
A novel adversarial defense with local implicit functions (DISCO) is proposed to remove adversarial perturbations by localized manifold projections.
DISCO consumes an adversarial image and a query pixel location and outputs a clean RGB value at the location.
arXiv Detail & Related papers (2022-12-11T23:54:26Z)
- Privacy Safe Representation Learning via Frequency Filtering Encoder [7.792424517008007]
Adversarial Representation Learning (ARL) is a common approach to train an encoder that runs on the client-side and obfuscates an image.
It is assumed that the obfuscated image can be safely transmitted and used for the task on the server without privacy concerns.
We introduce a novel ARL method enhanced through low-pass filtering, limiting the amount of information that can be encoded in the frequency domain (a minimal sketch of this filtering step appears after this list).
arXiv Detail & Related papers (2022-08-04T06:16:13Z)
- Assessing Privacy Risks from Feature Vector Reconstruction Attacks [24.262351521060676]
We develop metrics that meaningfully capture the threat of reconstructed face images.
We show that reconstructed face images enable re-identification by both commercial facial recognition systems and humans.
Our results confirm that feature vectors should be recognized as Personally Identifiable Information.
arXiv Detail & Related papers (2022-02-11T16:52:02Z)
- Privacy-Preserving Image Features via Adversarial Affine Subspace Embeddings [72.68801373979943]
Many computer vision systems require users to upload image features to the cloud for processing and storage.
We propose a new privacy-preserving feature representation.
Compared to the original features, our approach makes it significantly more difficult for an adversary to recover private information.
arXiv Detail & Related papers (2020-06-11T17:29:48Z)
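The frequency-filtering encoder entry above limits how much information the client-side representation can carry by low-pass filtering in the frequency domain. A minimal sketch of that filtering step alone follows; the adversarially trained encoder itself is not reproduced, and the cutoff parameter is an assumed illustrative value.

```python
# Minimal sketch of the low-pass filtering idea from the "Privacy Safe
# Representation Learning via Frequency Filtering Encoder" entry above.
# Only the filtering step is shown; the paper's adversarially trained
# encoder is not reproduced, and cutoff_ratio is an assumed parameter.
import numpy as np

def low_pass_filter(image: np.ndarray, cutoff_ratio: float = 0.1) -> np.ndarray:
    """Keep only low spatial frequencies of a grayscale image,
    limiting the information available for later reconstruction."""
    h, w = image.shape
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    # Circular mask around the spectrum center; radius set by cutoff_ratio.
    yy, xx = np.ogrid[:h, :w]
    radius = cutoff_ratio * min(h, w) / 2
    mask = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= radius ** 2
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.real(filtered)
```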