InfoCons: Identifying Interpretable Critical Concepts in Point Clouds via Information Theory
- URL: http://arxiv.org/abs/2505.19820v1
- Date: Mon, 26 May 2025 10:58:54 GMT
- Title: InfoCons: Identifying Interpretable Critical Concepts in Point Clouds via Information Theory
- Authors: Feifei Li, Mi Zhang, Zhaoxiang Wang, Min Yang
- Abstract summary: We focus on attributing PC model outputs to interpretable critical concepts, defined as meaningful subsets of the input point cloud. We propose InfoCons, an explanation framework that applies information-theoretic principles to decompose the point cloud into 3D concepts.
- Score: 19.044009184525333
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Interpretability of point cloud (PC) models becomes imperative given their deployment in safety-critical scenarios such as autonomous vehicles. We focus on attributing PC model outputs to interpretable critical concepts, defined as meaningful subsets of the input point cloud. To enable human-understandable diagnostics of model failures, an ideal critical subset should be *faithful* (preserving points that causally influence predictions) and *conceptually coherent* (forming semantically meaningful structures that align with human perception). We propose InfoCons, an explanation framework that applies information-theoretic principles to decompose the point cloud into 3D concepts, enabling the examination of their causal effect on model predictions with learnable priors. We evaluate InfoCons on synthetic datasets for classification, comparing it qualitatively and quantitatively with four baselines. We further demonstrate its scalability and flexibility on two real-world datasets and in two applications that utilize critical scores of PC.
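The abstract does not spell out the implementation, but the recipe it describes (a learnable per-point criticality score trained with an information-bottleneck-style trade-off between faithfulness and compression) can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' code; the classifier `clf` and all module and function names are assumptions.

```python
# Illustrative sketch only -- not the authors' implementation.
# Assumes a pretrained point-cloud classifier `clf` mapping (B, N, 3)
# clouds to class logits; all names here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptScorer(nn.Module):
    """Assigns each point a criticality score in (0, 1)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, pts):                        # pts: (B, N, 3)
        return torch.sigmoid(self.mlp(pts)).squeeze(-1)   # (B, N)

def ib_loss(clf, scorer, pts, labels, beta=0.1):
    """Information-bottleneck-style objective: the masked cloud must still
    be classified correctly (faithfulness) while the average score is
    penalised (a crude proxy for the information kept)."""
    scores = scorer(pts).unsqueeze(-1)             # (B, N, 1)
    noise = torch.randn_like(pts) * pts.std(dim=1, keepdim=True)
    masked = scores * pts + (1 - scores) * noise   # low-score points -> noise
    faithfulness = F.cross_entropy(clf(masked), labels)
    compression = scores.mean()
    return faithfulness + beta * compression
```

In this reading, points with high learned scores form the candidate critical concept, and the beta term controls how aggressively the retained subset is compressed.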
Related papers
- Individualised Counterfactual Examples Using Conformal Prediction Intervals [12.895240620484572]
High-dimensional feature spaces typical of machine learning classification models admit many possible counterfactual examples to a decision. We explicitly model the knowledge of the individual and assess the uncertainty of the individual's predictions by the width of a conformal prediction interval. We present a synthetic data set on a hypercube that allows us to fully visualise the decision boundary, and in this data set we explore the impact of a single CPICF on the knowledge of an individual locally around the original query.
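For context, the uncertainty measure mentioned here (the width of a conformal prediction interval) can be computed with the standard split-conformal recipe sketched below; this is the generic construction, not necessarily the paper's exact one.

```python
# Standard split-conformal regression interval; generic recipe, not
# necessarily the paper's exact construction.
import numpy as np

def split_conformal_interval(model, X_cal, y_cal, x_new, alpha=0.1):
    """Return a (lo, hi) prediction interval with ~(1 - alpha) coverage.
    `model` is any fitted regressor exposing .predict()."""
    residuals = np.abs(y_cal - model.predict(X_cal))       # calibration scores
    n = len(residuals)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)   # finite-sample correction
    q = np.quantile(residuals, level, method="higher")
    pred = model.predict(x_new.reshape(1, -1))[0]
    return pred - q, pred + q   # a wide interval signals high uncertainty
```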
arXiv Detail & Related papers (2025-05-28T13:13:52Z)
- Can foundation models actively gather information in interactive environments to test hypotheses? [56.651636971591536]
We introduce a framework in which a model must determine the factors influencing a hidden reward function. We investigate whether approaches such as self-correction and increased inference time improve information-gathering efficiency.
arXiv Detail & Related papers (2024-12-09T12:27:21Z)
- When Can You Trust Your Explanations? A Robustness Analysis on Feature Importances [42.36530107262305]
The robustness of explanations plays a central role in ensuring trust in both the system and the provided explanation. We propose a novel approach to analyse the robustness of neural network explanations to non-adversarial perturbations. We additionally present an ensemble method to aggregate various explanations, showing how merging explanations can be beneficial both for understanding the model's decision and for evaluating robustness.
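A rough sketch of such a non-adversarial robustness check, assuming a gradient-based feature-importance explanation and cosine similarity as the stability measure (both are assumptions for illustration, not the paper's exact choices):

```python
# Illustrative stability probe for gradient-based feature importances.
import torch

def saliency(model, x, target):
    """Absolute input gradient as a simple feature-importance explanation."""
    x = x.clone().detach().requires_grad_(True)
    model(x.unsqueeze(0))[0, target].backward()
    return x.grad.abs()

def explanation_stability(model, x, target, sigma=0.01, n_trials=20):
    """Mean cosine similarity between the explanation of x and explanations
    of mildly perturbed copies of x (values near 1 indicate robustness)."""
    base = saliency(model, x, target).flatten()
    sims = []
    for _ in range(n_trials):
        noisy = x + sigma * torch.randn_like(x)
        sims.append(torch.cosine_similarity(
            base, saliency(model, noisy, target).flatten(), dim=0))
    return torch.stack(sims).mean()

def aggregate_explanations(explanations):
    """Simple ensemble: average several explanations of the same input."""
    return torch.stack(explanations).mean(dim=0)
```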
arXiv Detail & Related papers (2024-06-20T14:17:57Z)
- InterpretCC: Intrinsic User-Centric Interpretability through Global Mixture of Experts [31.738009841932374]
Interpretability for neural networks is a trade-off between three key requirements.
We present InterpretCC, a family of interpretable-by-design neural networks that guarantee human-centric interpretability.
arXiv Detail & Related papers (2024-02-05T11:55:50Z)
- Bidirectional Knowledge Reconfiguration for Lightweight Point Cloud Analysis [74.00441177577295]
Point cloud analysis faces computational system overhead, limiting its application on mobile or edge devices.
This paper explores feature distillation for lightweight point cloud models.
We propose bidirectional knowledge reconfiguration to distill informative contextual knowledge from the teacher to the student.
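As a point of reference, the basic teacher-to-student feature matching that such distillation builds on can be sketched as below; the paper's bidirectional knowledge reconfiguration is considerably more involved than this baseline sketch.

```python
# Baseline feature-distillation loss; the paper's bidirectional
# reconfiguration is more elaborate than this sketch.
import torch.nn as nn
import torch.nn.functional as F

class FeatureDistiller(nn.Module):
    """Aligns student feature channels to the teacher and matches them."""
    def __init__(self, c_student, c_teacher):
        super().__init__()
        self.proj = nn.Conv1d(c_student, c_teacher, kernel_size=1)

    def forward(self, student_feat, teacher_feat):
        # student_feat: (B, C_s, N), teacher_feat: (B, C_t, N)
        return F.mse_loss(self.proj(student_feat), teacher_feat.detach())
```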
arXiv Detail & Related papers (2023-10-08T11:32:50Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- Unsupervised Belief Representation Learning in Polarized Networks with Information-Theoretic Variational Graph Auto-Encoders [26.640917190618612]
We develop an unsupervised algorithm for belief representation learning in polarized networks.
It learns to project both users and content items (e.g., posts that represent user views) into an appropriate disentangled latent space.
The latent representation of users and content can then be used to quantify their ideological leaning and detect/predict their stances on issues.
arXiv Detail & Related papers (2021-10-01T04:35:01Z)
- A Minimalist Dataset for Systematic Generalization of Perception, Syntax, and Semantics [131.93113552146195]
We present a new dataset, Handwritten arithmetic with INTegers (HINT), to examine machines' capability of learning generalizable concepts.
In HINT, machines are tasked with learning how concepts are perceived from raw signals such as images.
We undertake extensive experiments with various sequence-to-sequence models, including RNNs, Transformers, and GPT-3.
arXiv Detail & Related papers (2021-03-02T01:32:54Z)
- Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP).
By utilizing generative models conditioned with different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality as well as efficiency of our designed framework.
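A minimal sketch of attribute-conditioned counterfactual search in the spirit of this description, with hypothetical generator/classifier interfaces (AIP itself is more elaborate than this):

```python
# Sketch of attribute-conditioned counterfactual search with hypothetical
# generator/classifier interfaces; not the AIP implementation.
import torch
import torch.nn.functional as F

def generate_counterfactual(generator, classifier, attrs, target_label,
                            z_dim=128, steps=200, lr=0.05):
    """Optimise a latent code so that the sample generated under the desired
    attributes is classified as `target_label`."""
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    target = torch.tensor([target_label])
    for _ in range(steps):
        x = generator(z, attrs)                  # attribute-conditioned sample
        loss = F.cross_entropy(classifier(x), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(z, attrs).detach()
```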
arXiv Detail & Related papers (2021-01-18T08:37:13Z)
- Unifying Model Explainability and Robustness via Machine-Checkable Concepts [33.88198813484126]
We propose a robustness-assessment framework, at the core of which is the idea of using machine-checkable concepts.
Our framework defines a large number of concepts that the explanations could be based on and performs the explanation-conformity check at test time to assess prediction robustness.
Experiments on real-world datasets and human surveys show that our framework is able to enhance prediction robustness significantly.
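An explanation-conformity check of the kind described can be sketched as follows; the concept detectors and the class-to-concept table are assumptions for this illustration, not taken from the paper.

```python
# Illustrative explanation-conformity check; the concept detectors and the
# class-to-concept table are assumptions for this sketch.
def conformity_check(pred_class, x, concept_detectors, expected_concepts,
                     min_fraction=0.5):
    """Trust the prediction only if enough of the concepts expected for
    `pred_class` are actually detected in the input `x`."""
    expected = expected_concepts[pred_class]           # e.g. {"wheel", "headlight"}
    detected = {c for c in expected if concept_detectors[c](x)}
    return len(detected) >= min_fraction * len(expected)
```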
arXiv Detail & Related papers (2020-07-01T05:21:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.