PointMask: Towards Interpretable and Bias-Resilient Point Cloud
Processing
- URL: http://arxiv.org/abs/2007.04525v1
- Date: Thu, 9 Jul 2020 03:06:06 GMT
- Title: PointMask: Towards Interpretable and Bias-Resilient Point Cloud
Processing
- Authors: Saeid Asgari Taghanaki, Kaveh Hassani, Pradeep Kumar Jayaraman, Amir
Hosein Khasahmadi, Tonya Custis
- Abstract summary: PointMask is a model-agnostic interpretable information-bottleneck approach for attribution in point cloud models.
We show that coupling a PointMask layer with an arbitrary model can discern the points in the input space which contribute the most to the prediction score.
- Score: 16.470806722781333
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep classifiers tend to associate a few discriminative input variables with
their objective function, which, in turn, may hurt their generalization
capabilities. To address this, one can design systematic experiments and/or
inspect the models via interpretability methods. In this paper, we investigate
both of these strategies on deep models operating on point clouds. We propose
PointMask, a model-agnostic interpretable information-bottleneck approach for
attribution in point cloud models. PointMask encourages exploring the majority
of variation factors in the input space while gradually converging to a general
solution. More specifically, PointMask introduces a regularization term that
minimizes the mutual information between the input and the latent features used
to mask out irrelevant variables. We show that coupling a PointMask layer with
an arbitrary model can discern the points in the input space which contribute
the most to the prediction score, thereby leading to interpretability. Through
designed bias experiments, we also show that thanks to its gradual masking
feature, our proposed method is effective in handling data bias.
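The abstract's masking idea can be illustrated with a toy forward pass. Everything below is an illustrative assumption, not the paper's actual architecture: the per-point linear scoring function, the parameter names, and the L1-style sparsity penalty (used here as a simple stand-in for PointMask's mutual-information bottleneck term) are all hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def point_mask_forward(points, w, b, temperature=1.0):
    """Compute a soft per-point mask in (0, 1) and apply it.

    points: (N, 3) array of xyz coordinates.
    w, b:   parameters of a hypothetical per-point scoring function;
            in PointMask these would be learned jointly with the task model.
    """
    scores = points @ w + b                # (N,) relevance score per point
    mask = sigmoid(scores / temperature)   # soft mask; temperature controls sharpness
    masked = points * mask[:, None]        # low-relevance points shrink toward the origin
    return masked, mask

def mask_penalty(mask, lam=0.01):
    """L1-style sparsity penalty on the mask (illustrative stand-in for the
    paper's mutual-information regularizer); added to the task loss."""
    return lam * np.abs(mask).sum()

rng = np.random.default_rng(0)
pts = rng.normal(size=(1024, 3))           # a synthetic point cloud
w = rng.normal(size=3)
masked, mask = point_mask_forward(pts, w, b=0.0)
penalty = mask_penalty(mask)
```

In training, the penalty would be summed with the downstream model's loss, so the mask gradually suppresses points that do not contribute to the prediction; the surviving high-mask points then serve as the attribution map.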
Related papers
- Bridge the Points: Graph-based Few-shot Segment Anything Semantically [79.1519244940518]
Recent advancements in pre-training techniques have enhanced the capabilities of vision foundation models.
Recent studies extend the SAM to Few-shot Semantic segmentation (FSS)
We propose a simple yet effective approach based on graph analysis.
arXiv Detail & Related papers (2024-10-09T15:02:28Z)
- Regressor-Segmenter Mutual Prompt Learning for Crowd Counting [70.49246560246736]
We propose mutual prompt learning (mPrompt) to solve bias and inaccuracy caused by annotation variance.
Experiments show that mPrompt significantly reduces the Mean Average Error (MAE)
arXiv Detail & Related papers (2023-12-04T07:53:59Z)
- MaskDiff: Modeling Mask Distribution with Diffusion Probabilistic Model for Few-Shot Instance Segmentation [31.648523213206595]
Few-shot instance segmentation extends the few-shot learning paradigm to the instance segmentation task.
Conventional approaches have attempted to address the task via prototype learning, known as point estimation.
We propose a novel approach, dubbed MaskDiff, which models the underlying conditional distribution of a binary mask.
arXiv Detail & Related papers (2023-03-09T08:24:02Z)
- Masked Autoencoding for Scalable and Generalizable Decision Making [93.84855114717062]
MaskDP is a simple and scalable self-supervised pretraining method for reinforcement learning and behavioral cloning.
We find that a MaskDP model gains the capability of zero-shot transfer to new BC tasks, such as single and multiple goal reaching.
arXiv Detail & Related papers (2022-11-23T07:04:41Z)
- Self-Supervised Arbitrary-Scale Point Clouds Upsampling via Implicit Neural Representation [79.60988242843437]
We propose a novel approach that achieves self-supervised and magnification-flexible point clouds upsampling simultaneously.
Experimental results demonstrate that our self-supervised learning based scheme achieves competitive or even better performance than supervised learning based state-of-the-art methods.
arXiv Detail & Related papers (2022-04-18T07:18:25Z)
- Masked Discrimination for Self-Supervised Learning on Point Clouds [27.652157544218234]
Masked autoencoding has achieved great success for self-supervised learning in the image and language domains.
Standard backbones like PointNet are unable to properly handle the training versus testing distribution mismatch introduced by masking during training.
We bridge this gap by proposing a discriminative mask pretraining Transformer framework, MaskPoint, for point clouds.
arXiv Detail & Related papers (2022-03-21T17:57:34Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA)
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
- How do Decisions Emerge across Layers in Neural Models? Interpretation with Differentiable Masking [70.92463223410225]
DiffMask learns to mask-out subsets of the input while maintaining differentiability.
The decision to include or disregard an input token is made with a simple model based on intermediate hidden layers.
This lets us not only plot attribution heatmaps but also analyze how decisions are formed across network layers.
arXiv Detail & Related papers (2020-04-30T17:36:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.