Curve Your Enthusiasm: Concurvity Regularization in Differentiable
Generalized Additive Models
- URL: http://arxiv.org/abs/2305.11475v3
- Date: Sat, 25 Nov 2023 17:16:37 GMT
- Title: Curve Your Enthusiasm: Concurvity Regularization in Differentiable
Generalized Additive Models
- Authors: Julien Siems, Konstantin Ditschuneit, Winfried Ripken, Alma Lindborg,
Maximilian Schambach, Johannes S. Otterbach, Martin Genzel
- Abstract summary: Generalized Additive Models (GAMs) have recently experienced a resurgence in popularity due to their interpretability.
We show how concurvity can severely impair the interpretability of GAMs.
We propose a remedy: a conceptually simple, yet effective regularizer which penalizes pairwise correlations of the non-linearly transformed feature variables.
- Score: 5.519653885553456
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generalized Additive Models (GAMs) have recently experienced a resurgence in
popularity due to their interpretability, which arises from expressing the
target value as a sum of non-linear transformations of the features. Despite
the current enthusiasm for GAMs, their susceptibility to concurvity - i.e.,
(possibly non-linear) dependencies between the features - has hitherto been
largely overlooked. Here, we demonstrate how concurvity can severely impair the
interpretability of GAMs and propose a remedy: a conceptually simple, yet
effective regularizer which penalizes pairwise correlations of the non-linearly
transformed feature variables. This procedure is applicable to any
differentiable additive model, such as Neural Additive Models or NeuralProphet,
and enhances interpretability by eliminating ambiguities due to self-canceling
feature contributions. We validate the effectiveness of our regularizer in
experiments on synthetic as well as real-world datasets for time-series and
tabular data. Our experiments show that concurvity in GAMs can be reduced
without significantly compromising prediction quality, improving
interpretability and reducing variance in the feature importances.
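The abstract describes the regularizer as a penalty on pairwise correlations of the non-linearly transformed feature variables, i.e., the per-feature contributions f_j(x_j) of the additive model. A minimal sketch of that quantity in plain NumPy is given below; the function name `concurvity_penalty` is hypothetical and this illustrates only the pairwise-correlation idea from the abstract, not the authors' actual implementation.

```python
import numpy as np

def concurvity_penalty(contributions: np.ndarray) -> float:
    """Mean absolute pairwise Pearson correlation of the per-feature
    contributions f_j(x_j), given as an (n_samples, n_features) array.

    A value near 0 indicates decorrelated feature contributions; a value
    near 1 indicates strong concurvity (self-canceling contributions).
    """
    n, d = contributions.shape
    centered = contributions - contributions.mean(axis=0)
    std = centered.std(axis=0)
    std = np.where(std == 0, 1.0, std)  # guard against constant columns
    z = centered / std                   # standardized contributions
    corr = (z.T @ z) / n                 # (d, d) correlation matrix
    iu = np.triu_indices(d, k=1)         # strictly upper-triangular pairs
    return float(np.abs(corr[iu]).mean())
```

In a differentiable additive model this term would be computed on a mini-batch of contributions and added to the training loss with a weighting coefficient, trading off prediction quality against decorrelation.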
Related papers
- PseudoNeg-MAE: Self-Supervised Point Cloud Learning using Conditional Pseudo-Negative Embeddings [55.55445978692678]
PseudoNeg-MAE is a self-supervised learning framework that enhances global feature representation of point cloud mask autoencoders.
We show that PseudoNeg-MAE achieves state-of-the-art performance on the ModelNet40 and ScanObjectNN datasets.
arXiv Detail & Related papers (2024-09-24T07:57:21Z)
- Fairness-Aware Estimation of Graphical Models [13.39268712338485]
This paper examines the issue of fairness in the estimation of graphical models (GMs)
Standard GMs can result in biased outcomes, especially when the underlying data involves sensitive characteristics or protected groups.
We introduce a comprehensive framework designed to reduce bias in the estimation of GMs related to protected attributes.
arXiv Detail & Related papers (2024-08-30T16:30:00Z)
- Supervised Contrastive Learning with Heterogeneous Similarity for Distribution Shifts [3.7819322027528113]
We propose a new regularization using the supervised contrastive learning to prevent such overfitting and to train models that do not degrade their performance under the distribution shifts.
Experiments on benchmark datasets that emulate distribution shifts, including subpopulation shift and domain generalization, demonstrate the advantage of the proposed method.
arXiv Detail & Related papers (2023-04-07T01:45:09Z)
- Boosting Differentiable Causal Discovery via Adaptive Sample Reweighting [62.23057729112182]
Differentiable score-based causal discovery methods learn a directed acyclic graph from observational data.
We propose a model-agnostic framework to boost causal discovery performance by dynamically learning the adaptive weights for the Reweighted Score function, ReScore.
arXiv Detail & Related papers (2023-03-06T14:49:59Z)
- Improving Adaptive Conformal Prediction Using Self-Supervised Learning [72.2614468437919]
We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores.
We empirically demonstrate the benefit of the additional information using both synthetic and real data on the efficiency (width), deficit, and excess of conformal prediction intervals.
arXiv Detail & Related papers (2023-02-23T18:57:14Z)
- Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z)
- Learning Disentangled Representations with Latent Variation Predictability [102.4163768995288]
This paper defines the variation predictability of latent disentangled representations.
Within an adversarial generation process, we encourage variation predictability by maximizing the mutual information between latent variations and corresponding image pairs.
We develop an evaluation metric that does not rely on the ground-truth generative factors to measure the disentanglement of latent representations.
arXiv Detail & Related papers (2020-07-25T08:54:26Z)
- Accounting for Unobserved Confounding in Domain Generalization [107.0464488046289]
This paper investigates the problem of learning robust, generalizable prediction models from a combination of datasets.
Part of the challenge of learning robust models lies in the influence of unobserved confounders.
We demonstrate the empirical performance of our approach on healthcare data from different modalities.
arXiv Detail & Related papers (2020-07-21T08:18:06Z)
- Neural Decomposition: Functional ANOVA with Variational Autoencoders [9.51828574518325]
Variational Autoencoders (VAEs) have become a popular approach for dimensionality reduction.
Due to the black-box nature of VAEs, their utility for healthcare and genomics applications has been limited.
We focus on characterising the sources of variation in Conditional VAEs.
arXiv Detail & Related papers (2020-06-25T10:29:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences arising from its use.