Learning to Censor by Noisy Sampling
- URL: http://arxiv.org/abs/2203.12192v1
- Date: Wed, 23 Mar 2022 04:50:50 GMT
- Title: Learning to Censor by Noisy Sampling
- Authors: Ayush Chopra, Abhinav Java, Abhishek Singh, Vivek Sharma, Ramesh Raskar
- Abstract summary: The goal of this work is to protect sensitive information when learning from point clouds.
We focus on preserving utility for perception tasks while mitigating attribute leakage attacks.
The key motivating insight is to leverage the localized saliency of perception tasks on point clouds to provide good privacy-utility trade-offs.
- Score: 17.06138741660826
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Point clouds are an increasingly ubiquitous input modality and the raw signal
can be efficiently processed with recent progress in deep learning. This signal
may, often inadvertently, capture sensitive information that can leak semantic
and geometric properties of the scene which the data owner does not want to
share. The goal of this work is to protect sensitive information when learning
from point clouds by censoring it before the point cloud is released for
downstream tasks. Specifically, we focus on preserving
utility for perception tasks while mitigating attribute leakage attacks. The
key motivating insight is to leverage the localized saliency of perception
tasks on point clouds to provide good privacy-utility trade-offs. We realize
this through a mechanism called Censoring by Noisy Sampling (CBNS), which is
composed of two modules: i) an Invariant Sampler, a differentiable point-cloud
sampler which learns to remove points that are invariant to utility, and ii) a
Noisy Distorter, which learns to distort the sampled points to decouple the
sensitive information from utility and mitigate privacy leakage. We validate the
effectiveness of CBNS through extensive comparisons with state-of-the-art
baselines and sensitivity analyses of key design choices. Results show that
CBNS achieves superior privacy-utility trade-offs on multiple datasets.
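The abstract specifies the two CBNS modules only at a high level. Below is a minimal PyTorch sketch of how such a censoring pipeline could be wired together; the module names follow the paper, but the internals (an MLP point scorer with a straight-through top-k relaxation and Gumbel noise, plus learned input-dependent Gaussian noise) are illustrative assumptions rather than the authors' actual design.

```python
# A minimal CBNS-style censoring pipeline (sketch only). Module names follow
# the abstract; the internals below are assumptions, not the authors' design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InvariantSampler(nn.Module):
    """Scores each point and keeps the k most utility-relevant ones,
    using a straight-through estimator so the scorer stays trainable."""
    def __init__(self, k: int, in_dim: int = 3, hidden: int = 64):
        super().__init__()
        self.k = k
        self.score = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, pts: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
        logits = self.score(pts).squeeze(-1)              # (B, N)
        if self.training:                                 # Gumbel perturbation
            u = torch.rand_like(logits).clamp_min(1e-9)
            logits = logits - torch.log(-torch.log(u))
        idx = logits.topk(self.k, dim=1).indices          # hard top-k choice
        soft = torch.sigmoid(logits / tau)                # relaxed keep-probs
        # Forward pass uses the hard selection; gradients flow through `soft`.
        w = 1.0 + torch.gather(soft - soft.detach(), 1, idx)
        keep = torch.gather(pts, 1,
                            idx.unsqueeze(-1).expand(-1, -1, pts.size(-1)))
        return keep * w.unsqueeze(-1)

class NoisyDistorter(nn.Module):
    """Adds learned, input-dependent Gaussian noise to the sampled points
    to decouple sensitive attributes from the retained utility signal."""
    def __init__(self, in_dim: int = 3, hidden: int = 64):
        super().__init__()
        self.scale = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, in_dim))

    def forward(self, pts: torch.Tensor) -> torch.Tensor:
        sigma = F.softplus(self.scale(pts))               # non-negative scales
        return pts + sigma * torch.randn_like(pts)

def censor(pts, sampler, distorter):
    """Sample utility-relevant points, then distort them before release."""
    return distorter(sampler(pts))

if __name__ == "__main__":
    sampler, distorter = InvariantSampler(k=256), NoisyDistorter()
    cloud = torch.randn(4, 1024, 3)          # a batch of toy point clouds
    print(censor(cloud, sampler, distorter).shape)  # torch.Size([4, 256, 3])
```

In a full training setup, the sampler and distorter would be optimized jointly against a utility task head and an adversarial attribute attacker, so that the released points retain task-relevant structure while leaking little about the sensitive attribute.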
Related papers
- SurANet: Surrounding-Aware Network for Concealed Object Detection via Highly-Efficient Interactive Contrastive Learning Strategy [55.570183323356964]
We propose a novel Surrounding-Aware Network, namely SurANet, for concealed object detection.
We enhance the semantics of feature maps using differential fusion of surrounding features to highlight concealed objects.
Next, a Surrounding-Aware Contrastive Loss is applied to identify concealed objects by learning surrounding feature maps contrastively.
arXiv Detail & Related papers (2024-10-09T13:02:50Z)
- Localized Gaussians as Self-Attention Weights for Point Clouds Correspondence [92.07601770031236]
We investigate semantically meaningful patterns in the attention heads of an encoder-only Transformer architecture.
We find that fixing the attention weights not only accelerates the training process but also enhances the stability of the optimization.
arXiv Detail & Related papers (2024-09-20T07:41:47Z)
- Enhancing Sampling Protocol for Robust Point Cloud Classification [7.6224558218559855]
Real-world data often suffer from corruptions such as sensor noise, which violates the assumption of benign point clouds made by current protocols.
We propose an enhanced point cloud sampling protocol, PointDR, which comprises two components: 1) Downsampling for key point identification and 2) Resampling for flexible sample size.
arXiv Detail & Related papers (2024-08-22T01:48:31Z)
- PointCaM: Cut-and-Mix for Open-Set Point Cloud Learning [72.07350827773442]
We propose to solve open-set point cloud learning using a novel Point Cut-and-Mix mechanism.
We use the Unknown-Point Simulator to simulate out-of-distribution data in the training stage.
The Unknown-Point Estimator module learns to exploit the point cloud's feature context to discriminate between known and unknown data.
arXiv Detail & Related papers (2022-12-05T03:53:51Z)
- ALLSH: Active Learning Guided by Local Sensitivity and Hardness [98.61023158378407]
We propose to retrieve unlabeled samples with a local sensitivity and hardness-aware acquisition function.
Our method achieves consistent gains over the commonly used active learning strategies in various classification tasks.
arXiv Detail & Related papers (2022-05-10T15:39:11Z)
- Decouple-and-Sample: Protecting sensitive information in task agnostic data release [17.398889291769986]
The proposed sanitizer is a framework for secure and task-agnostic data release.
We show that a better privacy-utility trade-off is achieved if sensitive information can be synthesized privately.
arXiv Detail & Related papers (2022-03-17T19:15:33Z)
- Explainability-Aware One Point Attack for Point Cloud Neural Networks [0.0]
This work proposes two new attack methods, OPA and CTA, which go in the opposite direction of prior attacks by perturbing as few points as possible.
We show that the popular point cloud networks can be deceived with almost 100% success rate by shifting only one point from the input instance.
We also show the interesting impact of different point attribution distributions on the adversarial robustness of point cloud networks.
arXiv Detail & Related papers (2021-10-08T14:29:02Z)
- Data Augmentation for Object Detection via Differentiable Neural Rendering [71.00447761415388]
It is challenging to train a robust object detector when annotated data is scarce.
Existing approaches to tackle this problem include semi-supervised learning that interpolates labeled data from unlabeled data.
We introduce an offline data augmentation method for object detection, which semantically interpolates the training data with novel views.
arXiv Detail & Related papers (2021-03-04T06:31:06Z)
- DISCO: Dynamic and Invariant Sensitive Channel Obfuscation for deep neural networks [19.307753802569156]
We propose DISCO, which learns a dynamic and data-driven pruning filter to selectively obfuscate sensitive information in the feature space (a sketch of this shared adversarial recipe appears after the list).
We also release an evaluation benchmark dataset of 1 million sensitive representations to encourage rigorous exploration of novel attack schemes.
arXiv Detail & Related papers (2020-12-20T21:15:13Z)
- Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z)
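Several entries above (DISCO, Decouple-and-Sample, the graph obfuscation work) and CBNS itself share an adversarial censoring recipe: a censor network is trained so that a task head succeeds while an attribute attacker fails. The following is a minimal sketch of that recipe under assumed architectures and a plain cross-entropy min-max objective; none of it is taken from any specific paper above.

```python
# Minimal adversarial censoring loop (sketch). Architectures, the toy data,
# and the cross-entropy min-max objective are illustrative assumptions.
import torch
import torch.nn as nn

censor = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 16))
task_head = nn.Linear(16, 10)   # utility: a 10-class task
attacker = nn.Linear(16, 2)     # adversary: a binary sensitive attribute

opt_main = torch.optim.Adam(
    list(censor.parameters()) + list(task_head.parameters()), lr=1e-3)
opt_attk = torch.optim.Adam(attacker.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()
lam = 1.0                        # privacy-utility trade-off knob

for step in range(1000):
    x = torch.randn(32, 16)                  # toy input features
    y = torch.randint(0, 10, (32,))          # task labels
    s = torch.randint(0, 2, (32,))           # sensitive attribute labels
    z = censor(x)

    # 1) The attacker tries to recover the sensitive attribute from z.
    opt_attk.zero_grad()
    ce(attacker(z.detach()), s).backward()
    opt_attk.step()

    # 2) The censor and task head preserve utility while erasing
    #    attribute information (min-max via the negated attacker loss).
    opt_main.zero_grad()
    loss = ce(task_head(z), y) - lam * ce(attacker(z), s)
    loss.backward()
    opt_main.step()
```

The `lam` coefficient plays the same role as the privacy-utility trade-off discussed in the abstract: larger values erase more attribute information at some cost to task accuracy.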