Rectifying Group Irregularities in Explanations for Distribution Shift
- URL: http://arxiv.org/abs/2305.16308v1
- Date: Thu, 25 May 2023 17:57:46 GMT
- Title: Rectifying Group Irregularities in Explanations for Distribution Shift
- Authors: Adam Stein, Yinjun Wu, Eric Wong, Mayur Naik
- Abstract summary: Group-aware Shift Explanations (GSE) produces interpretable explanations by leveraging worst-group optimization to rectify group irregularities.
We show how GSE not only maintains group structures, such as demographic and hierarchical subpopulations, but also enhances feasibility and robustness in the resulting explanations.
- Score: 18.801357928801412
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is well-known that real-world changes constituting distribution shift
adversely affect model performance. How to characterize those changes in an
interpretable manner is poorly understood. Existing techniques to address this
problem take the form of shift explanations that elucidate how to map samples
from the original distribution toward the shifted one by reducing the disparity
between these two distributions. However, these methods can introduce group
irregularities, leading to explanations that are less feasible and robust. To
address these issues, we propose Group-aware Shift Explanations (GSE), a method
that produces interpretable explanations by leveraging worst-group optimization
to rectify group irregularities. We demonstrate how GSE not only maintains
group structures, such as demographic and hierarchical subpopulations, but also
enhances feasibility and robustness in the resulting explanations in a wide
range of tabular, language, and image settings.
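To make the abstract concrete, here is a minimal sketch, not the authors' implementation: a simple mean-shift explanation delta fit with worst-group optimization. The group-mean disparity below stands in for whatever distributional distance GSE actually uses, and the optimizer and hyperparameters are illustrative assumptions.
```python
# A minimal sketch of a group-aware shift explanation (assumptions noted above).
import torch

def group_disparity(shifted, target, groups, g):
    # Distance between the shifted group-g mean and the target mean
    # (a stand-in for the paper's distributional distance).
    return ((shifted[groups == g].mean(0) - target.mean(0)) ** 2).sum()

def fit_group_aware_shift(source, target, groups, steps=500, lr=0.05):
    delta = torch.zeros(source.shape[1], requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        shifted = source + delta
        # Worst-group objective: shrink the largest per-group disparity rather
        # than the pooled one, which is what rectifies group irregularities.
        per_group = torch.stack([group_disparity(shifted, target, groups, g)
                                 for g in groups.unique()])
        per_group.max().backward()
        opt.step()
    return delta.detach()

# Toy usage: two groups whose shift magnitudes differ.
src = torch.randn(200, 3)
tgt = torch.randn(200, 3) + 1.0
grp = torch.randint(0, 2, (200,))
print(fit_group_aware_shift(src, tgt, grp))
```
Minimizing only the pooled disparity can leave one subpopulation mapped implausibly far; that is the group irregularity the worst-group objective guards against.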
Related papers
- Graphs Generalization under Distribution Shifts [11.963958151023732]
We introduce a novel framework, Graph Learning Invariant Domain genERation (GLIDER).
Our model outperforms baseline methods on node-level OOD generalization across domains under simultaneous distribution shift in node features and topological structures.
arXiv Detail & Related papers (2024-03-25T00:15:34Z)
- Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z)
- Proxy Methods for Domain Adaptation [78.03254010884783]
Proxy variables allow for adaptation to distribution shift without explicitly recovering or modeling latent variables.
We develop a two-stage kernel estimation approach to adapt to complex distribution shifts in both settings.
arXiv Detail & Related papers (2024-03-12T09:32:41Z)
- Modeling the Q-Diversity in a Min-max Play Game for Robust Optimization [61.39201891894024]
Group distributionally robust optimization (group DRO) can minimize the worst-case loss over pre-defined groups.
We reformulate the group DRO framework by proposing Q-Diversity.
Characterized by an interactive training mode, Q-Diversity relaxes group identification from manual annotation to direct parameterization.
arXiv Detail & Related papers (2023-05-20T07:02:27Z)
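For the entry above, here is a minimal sketch of the standard group DRO baseline that Q-Diversity reformulates, using the usual exponentiated-gradient update on group weights. The function name and hyperparameters are illustrative; Q-Diversity's parameterized group identification is not implemented here.
```python
# A sketch of a group DRO min-max step, assuming integer group labels
# 0..len(q)-1; not the Q-Diversity method itself.
import torch

def group_dro_step(model, opt, x, y, groups, q, eta_q=0.1):
    """One min-max step: ascend the group weights q, descend the weighted loss."""
    per_sample = torch.nn.functional.cross_entropy(model(x), y, reduction="none")
    group_losses = torch.stack([
        per_sample[groups == g].mean() if (groups == g).any()
        else per_sample.new_zeros(())
        for g in range(len(q))
    ])
    # "Max" player: exponentiated-gradient step shifts weight to high-loss groups.
    q = q * torch.exp(eta_q * group_losses.detach())
    q = q / q.sum()
    # "Min" player: gradient step on the q-weighted group losses.
    opt.zero_grad()
    (q * group_losses).sum().backward()
    opt.step()
    return q  # carry the updated weights into the next step
```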
- Adapting to Latent Subgroup Shifts via Concepts and Proxies [82.01141290360562]
We show that the optimal target predictor can be non-parametrically identified with the help of concept and proxy variables available only in the source domain.
For continuous observations, we propose a latent variable model specific to the data generation process at hand.
arXiv Detail & Related papers (2022-12-21T18:30:22Z)
- Domain Adaptation meets Individual Fairness. And they get along [48.95808607591299]
We show that algorithmic fairness interventions can help machine learning models overcome distribution shifts.
In particular, we show that enforcing suitable notions of individual fairness (IF) can improve the out-of-distribution accuracy of ML models.
arXiv Detail & Related papers (2022-05-01T16:19:55Z)
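One common way to enforce individual fairness, as in the entry above, is a consistency regularizer. The sketch below penalizes output changes when a hypothetical binary sensitive feature is flipped; the column index `sens_idx` and the squared-difference penalty are assumptions, not the paper's specific IF notion.
```python
# A sketch of one individual-fairness (IF) regularizer: similar individuals
# should receive similar predictions (assumptions noted above).
import torch

def if_regularized_loss(model, x, y, sens_idx, lam=1.0):
    base = torch.nn.functional.cross_entropy(model(x), y)
    x_similar = x.clone()
    x_similar[:, sens_idx] = 1.0 - x_similar[:, sens_idx]  # flip a binary feature
    # Consistency term: outputs should not change across "similar" inputs.
    consistency = ((model(x) - model(x_similar)) ** 2).mean()
    return base + lam * consistency
```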
- Group-disentangled Representation Learning with Weakly-Supervised Regularization [13.311886256230814]
GroupVAE introduces a simple yet effective Kullback-Leibler divergence-based regularization to enforce consistent and disentangled representations.
We demonstrate that learning group-disentangled representations improves downstream tasks, including fair classification and 3D shape-related tasks such as reconstruction, classification, and transfer learning.
arXiv Detail & Related papers (2021-10-23T10:01:05Z)
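A minimal sketch of a KL-based weak-supervision regularizer in the spirit of the GroupVAE entry above: for two samples known to share a factor, penalize the KL divergence between their diagonal-Gaussian posteriors on the shared latent dimensions. Which dimensions are shared and how the term is weighted are assumptions here, not the paper's specification.
```python
# A sketch, not the paper's code: per-dimension KL between diagonal-Gaussian
# posteriors, applied only on latent dimensions a pair is known to share.
import torch

def kl_diag_gaussian(mu1, logvar1, mu2, logvar2):
    # Per-dimension KL( N(mu1, e^logvar1) || N(mu2, e^logvar2) ).
    return 0.5 * (logvar2 - logvar1
                  + (logvar1.exp() + (mu1 - mu2) ** 2) / logvar2.exp()
                  - 1.0)

def group_consistency_loss(mu_a, logvar_a, mu_b, logvar_b, shared_dims):
    # shared_dims: indices the pair should agree on (an assumption about
    # how the weak group supervision is used).
    kl = kl_diag_gaussian(mu_a, logvar_a, mu_b, logvar_b)
    return kl[:, shared_dims].sum(dim=1).mean()
```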
- GroupifyVAE: from Group-based Definition to VAE-based Unsupervised Representation Disentanglement [91.9003001845855]
VAE-based unsupervised disentanglement cannot be achieved without additional inductive biases.
We address VAE-based unsupervised disentanglement by leveraging constraints derived from the group-theoretic definition of disentanglement as a non-probabilistic inductive bias.
We train 1800 models covering the most prominent VAE-based models on five datasets to verify the effectiveness of our method.
arXiv Detail & Related papers (2021-02-20T09:49:51Z)
- Explaining Groups of Points in Low-Dimensional Representations [22.069781949309732]
We introduce a new type of explanation, a Global Counterfactual Explanation (GCE), and our algorithm, Transitive Global Translations (TGT).
TGT identifies the differences between each pair of groups using compressed sensing but constrains those pairwise differences to be consistent among all of the groups.
Empirically, we demonstrate that TGT is able to identify explanations that accurately explain the model while being relatively sparse, and that these explanations match real patterns in the data.
arXiv Detail & Related papers (2020-03-03T17:06:55Z)
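A toy, assumption-laden version of the transitive translations in the entry above: each group gets an offset from a reference group, so every pairwise explanation d[j] - d[i] is consistent by construction, and an L1 penalty encourages sparse explanations. The real TGT operates on a model's low-dimensional representation via compressed sensing; this sketch simply matches group means in feature space.
```python
# A sketch of transitive, sparse group translations (not the TGT algorithm).
import torch

def fit_translations(X, groups, n_groups, lam=0.1, steps=500, lr=0.05):
    d = torch.zeros(n_groups, X.shape[1], requires_grad=True)
    opt = torch.optim.Adam([d], lr=lr)
    ref_mean = X[groups == 0].mean(0)  # group 0 serves as the reference
    for _ in range(steps):
        opt.zero_grad()
        # Translating each group by -d[g] should align it with the reference.
        fit = torch.stack([
            ((X[groups == g] - d[g]).mean(0) - ref_mean) ** 2
            for g in range(n_groups)
        ]).sum()
        (fit + lam * d.abs().sum()).backward()  # L1 keeps explanations sparse
        opt.step()
    return d.detach()  # explanation for a pair (i, j): d[j] - d[i]
```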