Path Choice Matters for Clear Attribution in Path Methods
- URL: http://arxiv.org/abs/2401.10442v1
- Date: Fri, 19 Jan 2024 01:11:44 GMT
- Title: Path Choice Matters for Clear Attribution in Path Methods
- Authors: Borui Zhang, Wenzhao Zheng, Jie Zhou, Jiwen Lu
- Abstract summary: We introduce the Concentration Principle, which allocates high attributions to indispensable features.
We then present SAMP, a model-agnostic interpreter, which efficiently searches the near-optimal path.
We also propose the infinitesimal constraint (IC) and momentum strategy (MS) to improve the rigorousness and optimality.
- Score: 84.29092710376217
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rigorousness and clarity are both essential for interpretations of DNNs to
engender human trust. Path methods are commonly employed to generate rigorous
attributions that satisfy three axioms. However, the meaning of attributions
remains ambiguous due to distinct path choices. To address the ambiguity, we
introduce \textbf{Concentration Principle}, which centrally allocates high
attributions to indispensable features, thereby endowing them with aesthetics and
sparsity. We then present \textbf{SAMP}, a model-agnostic interpreter, which
efficiently searches the near-optimal path from a pre-defined set of
manipulation paths. Moreover, we propose the infinitesimal constraint (IC) and
momentum strategy (MS) to improve the rigorousness and optimality.
Visualizations show that SAMP can precisely reveal DNNs by pinpointing salient
image pixels. We also perform quantitative experiments and observe that our
method significantly outperforms the counterparts. Code:
https://github.com/zbr17/SAMP.
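The path methods discussed above compute attributions as a line integral of gradients along a path from a baseline to the input. A minimal sketch, assuming a toy quadratic model with an analytic gradient (the weights `w`, the step count, and the straight-line path are illustrative assumptions; SAMP instead searches a near-optimal path from a set of manipulation paths):

```python
import numpy as np

# Toy differentiable "model": f(x) = sum_i w_i * x_i^2, with an analytic
# gradient so the example stays self-contained (no autograd framework).
w = np.array([1.0, 2.0, 3.0])

def f(x):
    return float(np.sum(w * x ** 2))

def grad_f(x):
    return 2.0 * w * x

def path_attribution(x, baseline, n_steps=256):
    """Riemann-sum (midpoint-rule) approximation of the path integral of
    gradients along the straight line from `baseline` to `x`: the standard
    Integrated Gradients path, used here as a stand-in for a searched path."""
    alphas = (np.arange(n_steps) + 0.5) / n_steps
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    return (x - baseline) * total / n_steps

x = np.array([1.0, -1.0, 2.0])
baseline = np.zeros(3)
attr = path_attribution(x, baseline)
# Completeness axiom: attributions sum to f(x) - f(baseline).
assert abs(attr.sum() - (f(x) - f(baseline))) < 1e-6
```

The final assertion checks the completeness axiom that path methods satisfy by construction; for this quadratic model the midpoint rule is exact up to floating-point error.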
Related papers
- Ambiguity-aware Point Cloud Segmentation by Adaptive Margin Contrastive Learning [65.94127546086156]
We propose an adaptive margin contrastive learning method for semantic segmentation on point clouds. We first design AMContrast3D, a method incorporating contrastive learning into an ambiguity estimation framework. Inspired by the insight of joint training, we propose AMContrast3D++, which integrates two branches trained in parallel.
arXiv Detail & Related papers (2025-07-09T07:00:32Z)
- Soft Reasoning Paths for Knowledge Graph Completion [63.23109723605835]
Reasoning paths are reliable information in knowledge graph completion (KGC). In real-world applications, it is difficult to guarantee that computationally affordable paths exist toward all candidate entities. We introduce soft reasoning paths to make the proposed algorithm more stable against missing-path circumstances.
arXiv Detail & Related papers (2025-05-06T08:12:48Z)
- Enhancing Path Planning Performance through Image Representation Learning of High-Dimensional Configuration Spaces [0.4143603294943439]
We present a novel method for accelerating path-planning tasks in unknown scenes with obstacles.
We approximate the distribution of waypoints for a collision-free path using the Rapidly-exploring Random Tree algorithm.
Our experiments demonstrate promising results in accelerating path-planning tasks under critical time constraints.
arXiv Detail & Related papers (2025-01-11T21:14:52Z)
- Rethinking Score Distillation as a Bridge Between Image Distributions [97.27476302077545]
We show that our method seeks to transport corrupted images (source) to the natural image distribution (target).
Our method can be easily applied across many domains, matching or beating the performance of specialized methods.
We demonstrate its utility in text-to-2D, text-based NeRF optimization, translating paintings to real images, optical illusion generation, and 3D sketch-to-real.
arXiv Detail & Related papers (2024-06-13T17:59:58Z)
- Tripod: Three Complementary Inductive Biases for Disentangled Representation Learning [52.70210390424605]
In this work, we consider endowing a neural network autoencoder with three select inductive biases from the literature.
In practice, however, naively combining existing techniques instantiating these inductive biases fails to yield significant benefits.
We propose adaptations to the three techniques that simplify the learning problem, equip key regularization terms with stabilizing invariances, and quash degenerate incentives.
The resulting model, Tripod, achieves state-of-the-art results on a suite of four image disentanglement benchmarks.
arXiv Detail & Related papers (2024-04-16T04:52:41Z)
- Efficient Link Prediction via GNN Layers Induced by Negative Sampling [92.05291395292537]
Graph neural networks (GNNs) for link prediction can loosely be divided into two broad categories.
First, node-wise architectures pre-compute individual embeddings for each node that are later combined by a simple decoder to make predictions.
Second, edge-wise methods rely on the formation of edge-specific subgraph embeddings to enrich the representation of pair-wise relationships.
arXiv Detail & Related papers (2023-10-14T07:02:54Z)
- Scalable Bayesian Meta-Learning through Generalized Implicit Gradients [64.21628447579772]
The implicit Bayesian meta-learning (iBaML) method not only broadens the scope of learnable priors, but also quantifies the associated uncertainty.
Analytical error bounds are established to demonstrate the precision and efficiency of the generalized implicit gradient over the explicit one.
arXiv Detail & Related papers (2023-03-31T02:10:30Z)
- ALSO: Automotive Lidar Self-supervision by Occupancy estimation [70.70557577874155]
We propose a new self-supervised method for pre-training the backbone of deep perception models operating on point clouds.
The core idea is to train the model on a pretext task which is the reconstruction of the surface on which the 3D points are sampled.
The intuition is that if the network is able to reconstruct the scene surface, given only sparse input points, then it probably also captures some fragments of semantic information.
arXiv Detail & Related papers (2022-12-12T13:10:19Z)
- Towards More Robust Interpretation via Local Gradient Alignment [37.464250451280336]
We show that for every non-negative homogeneous neural network, a naive $\ell$-robust criterion for gradients is not normalization invariant.
We propose to combine both $\ell$- and cosine-distance-based criteria as regularization terms to leverage the advantages of both in aligning the local gradient.
We experimentally show that models trained with our method produce much more robust interpretations on CIFAR-10 and ImageNet-100.
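The combined criterion described above can be sketched as a single penalty term. This is a hedged illustration, not the paper's exact loss: the weighting `lam` and the choice of a squared distance for the first term are assumptions.

```python
import numpy as np

def alignment_penalty(g, g_perturbed, lam=0.5):
    """Combine a squared distance with a cosine distance between the local
    gradient `g` and the gradient `g_perturbed` at a nearby point, so that
    both the magnitude and the direction of the gradients are aligned."""
    dist_term = np.sum((g - g_perturbed) ** 2)
    cos_sim = np.dot(g, g_perturbed) / (
        np.linalg.norm(g) * np.linalg.norm(g_perturbed) + 1e-12
    )
    return lam * dist_term + (1.0 - cos_sim)

g = np.array([0.2, -0.5, 1.0])
assert alignment_penalty(g, g) < 1e-6  # identical gradients incur ~zero penalty
```

In this form a gradient flipped in sign is penalized by both terms, while pure rescaling is caught only by the distance term, which is the gap the cosine criterion alone would miss.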
arXiv Detail & Related papers (2022-11-29T03:38:28Z)
- ContraCLIP: Interpretable GAN generation driven by pairs of contrasting sentences [45.06326873752593]
We find non-linear interpretable paths in the latent space of pre-trained GANs in a model-agnostic manner.
By defining an objective that discovers paths that generate changes along the desired paths in the vision-language embedding space, we provide an intuitive way of controlling the underlying generative factors.
arXiv Detail & Related papers (2022-06-05T06:13:42Z)
- Gleo-Det: Deep Convolution Feature-Guided Detector with Local Entropy Optimization for Salient Points [5.955667705173262]
We propose to achieve a fine constraint based on the requirement of repeatability, and a coarse constraint with the guidance of deep convolution features.
With the guidance of convolution features, we define the cost function from both positive and negative sides.
arXiv Detail & Related papers (2022-04-27T12:40:21Z)
- Node Representation Learning in Graph via Node-to-Neighbourhood Mutual Information Maximization [27.701736055800314]
The key to learning informative node representations in graphs lies in how to gain contextual information from the neighbourhood.
We present a self-supervised node representation learning strategy via directly maximizing the mutual information between the hidden representations of nodes and their neighbourhood.
Our framework is optimized via a surrogate contrastive loss, where the positive selection underpins the quality and efficiency of representation learning.
arXiv Detail & Related papers (2022-03-23T08:21:10Z)
- Tune it the Right Way: Unsupervised Validation of Domain Adaptation via Soft Neighborhood Density [125.64297244986552]
We propose an unsupervised validation criterion that measures the density of soft neighborhoods by computing the entropy of the similarity distribution between points.
Our criterion is simpler than competing validation methods, yet more effective.
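The criterion above, the entropy of a similarity distribution over soft neighbourhoods, can be sketched as follows. The temperature `tau` and the softmax-over-cosine-similarities form are assumptions for illustration, not the paper's exact normalization.

```python
import numpy as np

def soft_neighborhood_density(features, tau=0.05):
    """Mean entropy of each point's softmax distribution over its pairwise
    cosine similarities: higher entropy means the similarity mass is spread
    over a denser soft neighbourhood rather than a single nearest point."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = (f @ f.T) / tau
    np.fill_diagonal(sim, -np.inf)          # exclude self-similarity
    p = np.exp(sim - sim.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
    return float(entropy.mean())

rng = np.random.default_rng(0)
density = soft_neighborhood_density(rng.normal(size=(10, 4)))
```

Because it needs only the feature matrix of unlabeled target points, such a score can be computed per checkpoint and used for hyperparameter selection without target labels.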
arXiv Detail & Related papers (2021-08-24T17:41:45Z)
- Guided Integrated Gradients: An Adaptive Path Method for Removing Noise [9.792727625917083]
Integrated Gradients (IG) is a commonly used feature attribution method for deep neural networks.
We show that one of the causes of the problem is the accumulation of noise along the IG path.
We propose adapting the attribution path itself -- conditioning the path not just on the image but also on the model being explained.
arXiv Detail & Related papers (2021-06-17T20:00:55Z)
- Provably Efficient Reward-Agnostic Navigation with Linear Value Iteration [143.43658264904863]
We show how iteration under a more standard notion of low inherent Bellman error, typically employed in least-square value-style algorithms, can provide strong PAC guarantees on learning a near optimal value function.
We present a computationally tractable algorithm for the reward-free setting and show how it can be used to learn a near optimal policy for any (linear) reward function.
arXiv Detail & Related papers (2020-08-18T04:34:21Z)
- ENIGMA Anonymous: Symbol-Independent Inference Guiding Machine (system description) [0.4893345190925177]
We describe an implementation of gradient boosting and neural guidance of saturation-style automated theorem provers.
For the gradient-boosting guidance, we manually create abstracted features by considering arity-based encodings of formulas.
For the neural guidance, we use symbol-independent graph neural networks (GNNs) and their embedding of the terms and clauses.
arXiv Detail & Related papers (2020-02-13T09:44:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences arising from its use.