Learning to Censor by Noisy Sampling
- URL: http://arxiv.org/abs/2203.12192v1
- Date: Wed, 23 Mar 2022 04:50:50 GMT
- Title: Learning to Censor by Noisy Sampling
- Authors: Ayush Chopra, Abhinav Java, Abhishek Singh, Vivek Sharma, Ramesh
Raskar
- Abstract summary: The goal of this work is to protect sensitive information when learning from point clouds.
We focus on preserving utility for perception tasks while mitigating attribute leakage attacks.
The key motivating insight is to leverage the localized saliency of perception tasks on point clouds to provide good privacy-utility trade-offs.
- Score: 17.06138741660826
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Point clouds are an increasingly ubiquitous input modality and the raw signal
can be efficiently processed with recent progress in deep learning. This signal
may, often inadvertently, capture sensitive information that can leak semantic
and geometric properties of the scene which the data owner does not want to
share. The goal of this work is to protect sensitive information when learning
from point clouds; by censoring the sensitive information before the point
cloud is released for downstream tasks. Specifically, we focus on preserving
utility for perception tasks while mitigating attribute leakage attacks. The
key motivating insight is to leverage the localized saliency of perception
tasks on point clouds to provide good privacy-utility trade-offs. We realize
this through a mechanism called Censoring by Noisy Sampling (CBNS), which is
composed of two modules: i) Invariant Sampler: a differentiable point-cloud
sampler which learns to remove points invariant to utility and ii) Noisy
Distorter: which learns to distort sampled points to decouple the sensitive
information from utility, and mitigate privacy leakage. We validate the
effectiveness of CBNS through extensive comparisons with state-of-the-art
baselines and sensitivity analyses of key design choices. Results show that
CBNS achieves superior privacy-utility trade-offs on multiple datasets.
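The two-module mechanism described in the abstract can be illustrated with a minimal, non-learned sketch. In the paper both modules are trained end-to-end (the sampler is differentiable); here the "Invariant Sampler" is approximated by keeping the most utility-salient points and the "Noisy Distorter" by additive Gaussian noise. The function name, the saliency heuristic, and the noise model are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def censor_by_noisy_sampling(points, saliency, keep_ratio=0.5, noise_std=0.05, rng=None):
    """Illustrative CBNS-style censoring: keep task-salient points, then distort them.

    points:   (N, 3) point cloud
    saliency: (N,) utility-saliency score per point (higher = more useful for the task)
    """
    rng = np.random.default_rng(rng)
    n_keep = max(1, int(len(points) * keep_ratio))
    # "Invariant Sampler" stand-in: drop the points least salient to the utility task.
    keep_idx = np.argsort(saliency)[-n_keep:]
    sampled = points[keep_idx]
    # "Noisy Distorter" stand-in: perturb surviving points to suppress sensitive attributes.
    return sampled + rng.normal(0.0, noise_std, size=sampled.shape)

# Toy usage: 100 random points, with saliency favouring points near the origin.
pts = np.random.default_rng(0).normal(size=(100, 3))
sal = -np.linalg.norm(pts, axis=1)
censored = censor_by_noisy_sampling(pts, sal, keep_ratio=0.3, rng=0)
```

In the actual method, the privacy-utility trade-off arises because both the sampling and the distortion are optimized jointly against a utility task and an attribute-leakage adversary, rather than fixed as above.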
Related papers
- PointCaM: Cut-and-Mix for Open-Set Point Cloud Learning [72.07350827773442]
We propose to solve open-set point cloud learning using a novel Point Cut-and-Mix mechanism.
We use the Unknown-Point Simulator to simulate out-of-distribution data in the training stage.
The Unknown-Point Estimator module learns to exploit the point cloud's feature context for discriminating the known and unknown data.
arXiv Detail & Related papers (2022-12-05T03:53:51Z) - Data Augmentation-free Unsupervised Learning for 3D Point Cloud
Understanding [61.30276576646909]
We propose an augmentation-free unsupervised approach for point clouds to learn transferable point-level features via soft clustering, named SoftClu.
We exploit the affiliation of points to their clusters as a proxy to enable self-training through a pseudo-label prediction task.
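The soft-clustering pseudo-label idea behind SoftClu can be sketched as follows: each point's affiliation to a set of cluster centroids is a softmax over negative distances, and the argmax serves as a pseudo-label for self-training. This is a hand-rolled illustration under assumed fixed centroids, not the paper's learned feature-space clustering.

```python
import numpy as np

def soft_cluster_pseudo_labels(points, centroids, temperature=0.1):
    """Soft affiliation of each point to cluster centroids, usable as pseudo-labels."""
    # Squared distance from every point to every centroid: shape (N, K).
    d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    logits = -d2 / temperature
    # Numerically stable softmax over clusters: rows sum to 1.
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)
    hard = probs.argmax(axis=1)  # hard pseudo-label for the self-training task
    return probs, hard
```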
arXiv Detail & Related papers (2022-10-06T10:18:16Z) - AGO-Net: Association-Guided 3D Point Cloud Object Detection Network [86.10213302724085]
We propose a novel 3D detection framework that associates intact features for objects via domain adaptation.
We achieve new state-of-the-art performance on the KITTI 3D detection benchmark in both accuracy and speed.
arXiv Detail & Related papers (2022-08-24T16:54:38Z) - ALLSH: Active Learning Guided by Local Sensitivity and Hardness [98.61023158378407]
We propose to retrieve unlabeled samples with a local sensitivity and hardness-aware acquisition function.
Our method achieves consistent gains over the commonly used active learning strategies in various classification tasks.
arXiv Detail & Related papers (2022-05-10T15:39:11Z) - Decouple-and-Sample: Protecting sensitive information in task agnostic
data release [17.398889291769986]
Sanitizer is a framework for secure and task-agnostic data release.
We show that a better privacy-utility trade-off is achieved if sensitive information can be synthesized privately.
arXiv Detail & Related papers (2022-03-17T19:15:33Z) - Explainability-Aware One Point Attack for Point Cloud Neural Networks [0.0]
This work proposes two new attack methods, OPA and CTA, which go in the opposite direction.
We show that the popular point cloud networks can be deceived with almost 100% success rate by shifting only one point from the input instance.
We also show the interesting impact of different point attribution distributions on the adversarial robustness of point cloud networks.
arXiv Detail & Related papers (2021-10-08T14:29:02Z) - Data Augmentation for Object Detection via Differentiable Neural
Rendering [71.00447761415388]
It is challenging to train a robust object detector when annotated data is scarce.
Existing approaches to tackle this problem include semi-supervised learning that interpolates labeled data from unlabeled data.
We introduce an offline data augmentation method for object detection, which semantically interpolates the training data with novel views.
arXiv Detail & Related papers (2021-03-04T06:31:06Z) - DISCO: Dynamic and Invariant Sensitive Channel Obfuscation for deep
neural networks [19.307753802569156]
We propose DISCO which learns a dynamic and data driven pruning filter to selectively obfuscate sensitive information in the feature space.
We also release an evaluation benchmark dataset of 1 million sensitive representations to encourage rigorous exploration of novel attack schemes.
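DISCO's core idea, selectively pruning sensitive channels in the feature space, can be shown with a static stand-in: score each channel for sensitivity and zero out the highest-scoring ones. In DISCO the pruning filter is dynamic and learned per input; the function and the fixed sensitivity scores below are illustrative assumptions.

```python
import numpy as np

def obfuscate_channels(features, sensitivity, prune_ratio=0.25):
    """Zero out the channels scored most sensitive.

    features:    (C, H, W) feature map
    sensitivity: (C,) sensitivity score per channel (higher = leaks more)
    """
    n_prune = int(features.shape[0] * prune_ratio)
    # Channels with the highest sensitivity scores get pruned (set to zero).
    prune_idx = np.argsort(sensitivity)[-n_prune:]
    out = features.copy()
    out[prune_idx] = 0.0
    return out
```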
arXiv Detail & Related papers (2020-12-20T21:15:13Z) - Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z) - SPN-CNN: Boosting Sensor-Based Source Camera Attribution With Deep
Learning [1.370633147306388]
We explore means to advance source camera identification based on sensor noise in a data-driven framework.
Our focus is on improving the sensor pattern noise (SPN) extraction from a single image at test time.
A deep learning approach can yield a more suitable extractor that leads to improved source attribution.
arXiv Detail & Related papers (2020-02-07T17:55:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.