Explaining Groups of Points in Low-Dimensional Representations
- URL: http://arxiv.org/abs/2003.01640v3
- Date: Fri, 14 Aug 2020 15:54:13 GMT
- Title: Explaining Groups of Points in Low-Dimensional Representations
- Authors: Gregory Plumb, Jonathan Terhorst, Sriram Sankararaman, Ameet Talwalkar
- Abstract summary: We introduce a new type of explanation, a Global Counterfactual Explanation (GCE), and our algorithm, Transitive Global Translations (TGT), for computing GCEs.
TGT identifies the differences between each pair of groups using compressed sensing but constrains those pairwise differences to be consistent among all of the groups.
Empirically, we demonstrate that TGT is able to identify explanations that accurately explain the model while being relatively sparse, and that these explanations match real patterns in the data.
- Score: 22.069781949309732
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A common workflow in data exploration is to learn a low-dimensional
representation of the data, identify groups of points in that representation,
and examine the differences between the groups to determine what they
represent. We treat this workflow as an interpretable machine learning problem
by leveraging the model that learned the low-dimensional representation to help
identify the key differences between the groups. To solve this problem, we
introduce a new type of explanation, a Global Counterfactual Explanation (GCE),
and our algorithm, Transitive Global Translations (TGT), for computing GCEs.
TGT identifies the differences between each pair of groups using compressed
sensing but constrains those pairwise differences to be consistent among all of
the groups. Empirically, we demonstrate that TGT is able to identify
explanations that accurately explain the model while being relatively sparse,
and that these explanations match real patterns in the data.
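As a concrete illustration of the idea, the sketch below mimics a TGT-style computation. It is not the authors' implementation: the `encode` function (the model's map into the low-dimensional representation), the mean-matching objective, the finite-difference gradient loop, and helper names such as `fit_deltas` are assumptions made purely for illustration. It does, however, reproduce the two ingredients named in the abstract: an l1 penalty stands in for the compressed-sensing-style sparsity, and every pairwise explanation is parameterized as a difference of per-group translations, which keeps the explanations consistent (transitive) across all groups.

```python
import numpy as np

def group_mean(encode, X):
    # Mean of a group of input points in the learned low-dimensional representation.
    return encode(X).mean(axis=0)

def fit_deltas(encode, groups, n_steps=800, lr=0.01, l1=0.01, eps=1e-4):
    # Fit one translation per group, relative to group 0, so that shifting group 0's
    # points by deltas[k] moves their representation onto group k.  The l1 term is a
    # stand-in for the sparsity (compressed-sensing-style) constraint in the abstract.
    d = groups[0].shape[1]
    deltas = np.zeros((len(groups), d))
    for k in range(1, len(groups)):
        delta = np.zeros(d)
        target = group_mean(encode, groups[k])
        for _ in range(n_steps):
            # Finite-difference gradient of ||mean(encode(X_0 + delta)) - mean(encode(X_k))||^2.
            base = np.sum((group_mean(encode, groups[0] + delta) - target) ** 2)
            grad = np.zeros(d)
            for j in range(d):
                step = np.zeros(d)
                step[j] = eps
                shifted = np.sum((group_mean(encode, groups[0] + delta + step) - target) ** 2)
                grad[j] = (shifted - base) / eps
            delta -= lr * (grad + l1 * np.sign(delta))  # sparse, subgradient-style update
        deltas[k] = delta
    return deltas

def explanation(deltas, i, j):
    # Counterfactual translation from group i to group j.  Because each pairwise
    # explanation is a difference of per-group deltas, the explanations compose:
    # explanation(deltas, i, j) + explanation(deltas, j, k) == explanation(deltas, i, k).
    return deltas[j] - deltas[i]

# Toy usage (purely illustrative): a linear "model" and two synthetic groups that
# differ only along the first input feature.
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 2))                        # hypothetical map to a 2-D representation
groups = [rng.normal(size=(200, 5)),
          rng.normal(size=(200, 5)) + np.array([3.0, 0.0, 0.0, 0.0, 0.0])]
deltas = fit_deltas(lambda X: X @ W, groups)
print(explanation(deltas, 0, 1))                   # a (hopefully sparse) shift from group 0 to 1
```

Given any `encode` (for example, a PCA projection or an autoencoder's encoder) and the groups found in the representation, `explanation(deltas, i, j)` returns a single feature-space shift intended to move group i's points onto group j; because the shifts are differences of per-group deltas, composing the explanation from i to j with the one from j to k exactly yields the one from i to k.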
Related papers
- Exploring Transferable Homogeneous Groups for Compositional Zero-Shot Learning [10.687828416652929]
Homogeneous Group Representation Learning (HGRL) is a new perspective that formulates state (object) representation learning as multiple homogeneous sub-group representation learning.
Our method integrates three core components designed to simultaneously enhance both the visual and prompt representation capabilities of the model.
arXiv Detail & Related papers (2025-01-18T08:19:48Z)
- Task Groupings Regularization: Data-Free Meta-Learning with Heterogeneous Pre-trained Models [83.02797560769285]
Data-Free Meta-Learning (DFML) aims to derive knowledge from a collection of pre-trained models without accessing their original data.
Current methods often overlook the heterogeneity among pre-trained models, which leads to performance degradation due to task conflicts.
arXiv Detail & Related papers (2024-05-26T13:11:55Z)
- Uncovering Prototypical Knowledge for Weakly Open-Vocabulary Semantic Segmentation [59.37587762543934]
This paper studies the problem of weakly open-vocabulary semantic segmentation (WOVSS).
Existing methods suffer from a granularity inconsistency regarding the usage of group tokens.
We propose the prototypical guidance network (PGSeg), which incorporates multi-modal regularization.
arXiv Detail & Related papers (2023-10-29T13:18:00Z)
- Rectifying Group Irregularities in Explanations for Distribution Shift [18.801357928801412]
Group-aware Shift Explanations (GSE) produces interpretable explanations by leveraging worst-group optimization to rectify group irregularities.
We show how GSE not only maintains group structures, such as demographic and hierarchical subpopulations, but also enhances feasibility and robustness in the resulting explanations.
arXiv Detail & Related papers (2023-05-25T17:57:46Z)
- Homomorphism Autoencoder -- Learning Group Structured Representations from Observed Transitions [51.71245032890532]
We propose methods enabling an agent acting upon the world to learn internal representations of sensory information consistent with actions that modify it.
In contrast to existing work, our approach does not require prior knowledge of the group and does not restrict the set of actions the agent can perform.
arXiv Detail & Related papers (2022-07-25T11:22:48Z)
- The Group Loss++: A deeper look into group loss for deep metric learning [65.19665861268574]
Group Loss is a loss function based on a differentiable label-propagation method that enforces embedding similarity across all samples of a group.
We show state-of-the-art results on clustering and image retrieval on four datasets, and present competitive results on two person re-identification datasets.
arXiv Detail & Related papers (2022-04-04T14:09:58Z)
- ACTIVE: Augmentation-Free Graph Contrastive Learning for Partial Multi-View Clustering [52.491074276133325]
We propose an augmentation-free graph contrastive learning framework to solve the problem of partial multi-view clustering.
The proposed approach elevates instance-level contrastive learning and missing-data inference to the cluster level, effectively mitigating the impact of individual missing data on clustering.
arXiv Detail & Related papers (2022-03-01T02:32:25Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- Group-disentangled Representation Learning with Weakly-Supervised Regularization [13.311886256230814]
GroupVAE is a simple yet effective Kullback-Leibler divergence-based regularization to enforce consistent and disentangled representations.
We demonstrate that learning group-disentangled representations improves performance on downstream tasks, including fair classification and 3D shape-related tasks such as reconstruction, classification, and transfer learning.
arXiv Detail & Related papers (2021-10-23T10:01:05Z)
- Deep Grouping Model for Unified Perceptual Parsing [36.73032339428497]
The perceptual-based grouping process produces a hierarchical and compositional image representation.
We propose a deep grouping model (DGM) that tightly marries the two types of representations and defines a bottom-up and a top-down process for feature exchanging.
The model achieves state-of-the-art results while having a small computational overhead compared to other contextual-based segmentation models.
arXiv Detail & Related papers (2020-03-25T21:16:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.