Improving generalisation via anchor multivariate analysis
- URL: http://arxiv.org/abs/2403.01865v2
- Date: Mon, 11 Mar 2024 13:11:51 GMT
- Title: Improving generalisation via anchor multivariate analysis
- Authors: Homer Durand, Gherardo Varando, Nathan Mankovich, Gustau Camps-Valls
- Abstract summary: We introduce a causal regularisation extension to anchor regression (AR) for improved out-of-distribution (OOD) generalisation.
We present anchor-compatible losses, aligning with the anchor framework to ensure robustness against distribution shifts.
We observe that simple regularisation enhances robustness in OOD settings.
- Score: 4.755199731453481
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We introduce a causal regularisation extension to anchor regression (AR) for improved out-of-distribution (OOD) generalisation. We present anchor-compatible losses, aligning with the anchor framework to ensure robustness against distribution shifts. Various multivariate analysis (MVA) algorithms, such as (Orthonormalized) PLS, RRR, and MLR, fall within the anchor framework. We observe that simple regularisation enhances robustness in OOD settings. Estimators for selected algorithms are provided, showcasing consistency and efficacy in synthetic and real-world climate science problems. The empirical validation highlights the versatility of anchor regularisation, emphasizing its compatibility with MVA approaches and its role in enhancing replicability while guarding against distribution shifts. The extended AR framework advances causal inference methodologies, addressing the need for reliable OOD generalisation.
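For context on the anchor framework referenced in the abstract, classical anchor regression can be written as ordinary least squares on anchor-transformed data, using the transformation $W_\gamma = I - (1 - \sqrt{\gamma})\,\Pi_A$, where $\Pi_A$ projects onto the column space of the anchor variables. The NumPy sketch below covers only this plain regression case under the assumption of centred data; it is not the paper's code, the extension to the MVA losses named in the abstract (PLS, RRR, MLR) is not shown, and the variable names and default $\gamma$ are illustrative.

```python
import numpy as np

def anchor_transform(Z, A, gamma):
    """Apply W_gamma = I - (1 - sqrt(gamma)) * P_A to the rows of Z, where
    P_A is the orthogonal projector onto the column space of the anchors A."""
    P_A_Z = A @ np.linalg.lstsq(A, Z, rcond=None)[0]  # P_A @ Z without forming P_A
    return Z - (1.0 - np.sqrt(gamma)) * P_A_Z

def anchor_regression(X, Y, A, gamma=5.0):
    """Anchor-regularised least squares: transform X and Y, then run OLS.
    X, Y, A are (n_samples x dim) arrays, assumed centred column-wise."""
    X_t = anchor_transform(X, A, gamma)
    Y_t = anchor_transform(Y, A, gamma)
    B, *_ = np.linalg.lstsq(X_t, Y_t, rcond=None)
    return B  # coefficient matrix mapping X to Y
```

For $\gamma = 1$ the transformation is the identity and the estimator reduces to ordinary least squares; larger $\gamma$ trades in-distribution fit for robustness to distribution shifts generated along the anchor directions.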
Related papers
- From Robustness to Improved Generalization and Calibration in Pre-trained Language Models [0.0]
We investigate the role of representation smoothness, achieved via Jacobian and Hessian regularization, in enhancing the performance of pre-trained language models (PLMs).
We introduce a novel two-phase regularization approach, JacHess, which minimizes the norms of the Jacobian and Hessian matrices within PLM intermediate representations.
Our evaluation using the GLUE benchmark demonstrates that JacHess significantly improves in-domain generalization and calibration in PLMs.
arXiv Detail & Related papers (2024-03-31T18:08:37Z)
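To give intuition for the kind of penalty JacHess builds on, the PyTorch sketch below estimates the squared Frobenius norm of an input-output Jacobian with random probes and vector-Jacobian products. It is a generic illustration, not the authors' implementation: JacHess also penalises Hessian norms and operates on intermediate PLM representations, neither of which is shown, and the probe count and usage line are assumptions.

```python
import torch

def jacobian_frobenius_penalty(model, x, n_probes=1):
    """Hutchinson-style estimate of ||J||_F^2, with J = d model(x) / d x.
    Relies on E_v[||v^T J||^2] = ||J||_F^2 for probes v with identity covariance."""
    x = x.detach().requires_grad_(True)
    out = model(x)
    penalty = x.new_zeros(())
    for _ in range(n_probes):
        v = torch.randn_like(out)
        (vjp,) = torch.autograd.grad(
            out, x, grad_outputs=v, create_graph=True, retain_graph=True
        )
        penalty = penalty + vjp.pow(2).sum() / n_probes
    return penalty

# Hypothetical training step:
#   loss = task_loss + lam * jacobian_frobenius_penalty(model, inputs)
```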
- Towards Robust Out-of-Distribution Generalization Bounds via Sharpness [41.65692353665847]
We study how sharpness affects a model's tolerance to data changes under domain shift.
We propose a sharpness-based OOD generalization bound by taking robustness into consideration.
arXiv Detail & Related papers (2024-03-11T02:57:27Z)
- Diagnosing and Rectifying Fake OOD Invariance: A Restructured Causal Approach [51.012396632595554]
Invariant representation learning (IRL) encourages predicting labels from invariant causal features that are de-confounded from the environments.
Recent theoretical results verified that some causal features recovered by IRL methods merely appear domain-invariant in the training environments but fail in unseen domains.
We develop an approach based on conditional mutual information with respect to the restructured SCM (RS-SCM) to rigorously rectify the spurious and fake invariant effects.
arXiv Detail & Related papers (2023-12-15T12:58:05Z)
- Improved OOD Generalization via Conditional Invariant Regularizer [43.62211060412388]
We show that, given the class label, models that are conditionally independent of spurious attributes are OOD generalizable.
Based on this, the metric Conditional Spurious Variation (CSV), which controls the OOD error, is proposed to measure such conditional independence.
An algorithm with a provable convergence rate is proposed to solve the resulting problem.
arXiv Detail & Related papers (2022-07-14T06:34:21Z)
- Trustworthy Multimodal Regression with Mixture of Normal-inverse Gamma Distributions [91.63716984911278]
We introduce a novel Mixture of Normal-Inverse Gamma distributions (MoNIG) algorithm, which efficiently estimates uncertainty in principle for adaptive integration of different modalities and produces a trustworthy regression result.
Experimental results on both synthetic and different real-world data demonstrate the effectiveness and trustworthiness of our method on various multimodal regression tasks.
arXiv Detail & Related papers (2021-11-11T14:28:12Z)
- Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z)
- Posterior Differential Regularization with f-divergence for Improving Model Robustness [95.05725916287376]
We focus on methods that regularize the model posterior difference between clean and noisy inputs.
We generalize the posterior differential regularization to the family of $f$-divergences.
Our experiments show that regularizing the posterior differential with $f$-divergence can result in well-improved model robustness.
arXiv Detail & Related papers (2020-10-23T19:58:01Z)
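To make the posterior-differential idea concrete, the PyTorch sketch below penalises a symmetric KL divergence, one member of the $f$-divergence family, between the model's predictive distributions on clean and perturbed inputs. The Gaussian input noise and the particular divergence are illustrative assumptions rather than the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def posterior_differential_penalty(model, x, noise_std=1e-3):
    """Symmetric KL between predictions on clean and noisy inputs.
    Any f-divergence could be substituted; KL is used here for simplicity."""
    log_p = F.log_softmax(model(x), dim=-1)
    log_q = F.log_softmax(model(x + noise_std * torch.randn_like(x)), dim=-1)
    kl_pq = F.kl_div(log_q, log_p, log_target=True, reduction="batchmean")  # KL(p || q)
    kl_qp = F.kl_div(log_p, log_q, log_target=True, reduction="batchmean")  # KL(q || p)
    return kl_pq + kl_qp

# Hypothetical usage: loss = task_loss + lam * posterior_differential_penalty(model, x)
```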
- Distributional Robustness and Regularization in Reinforcement Learning [62.23012916708608]
We introduce a new regularizer for empirical value functions and show that it lower bounds the Wasserstein distributionally robust value function.
It suggests using regularization as a practical tool for dealing with external uncertainty in reinforcement learning.
arXiv Detail & Related papers (2020-03-05T19:56:23Z)
- Target-Embedding Autoencoders for Supervised Representation Learning [111.07204912245841]
This paper analyzes a framework for improving generalization in a purely supervised setting, where the target space is high-dimensional.
We motivate and formalize the general framework of target-embedding autoencoders (TEA) for supervised prediction, learning intermediate latent representations jointly optimized to be both predictable from features and predictive of targets.
arXiv Detail & Related papers (2020-01-23T02:37:10Z)
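The target-embedding idea in the last entry can be summarised in a short sketch: targets are auto-encoded through a low-dimensional latent code, a feature branch is trained to predict the same code, and test-time predictions decode the feature-predicted latent. The PyTorch toy below assumes MLP components, mean-squared losses, and an equal weighting between the two terms; it illustrates the joint objective rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class TargetEmbeddingAE(nn.Module):
    """Toy target-embedding autoencoder: the latent z is trained jointly to
    reconstruct the high-dimensional target y and to be predictable from x."""

    def __init__(self, x_dim, y_dim, z_dim=16, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(y_dim, hidden), nn.ReLU(), nn.Linear(hidden, z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(), nn.Linear(hidden, y_dim))
        self.feat = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, z_dim))

    def loss(self, x, y, alpha=0.5):
        z = self.enc(y)
        recon = (self.dec(z) - y).pow(2).mean()   # latent must be predictive of targets
        align = (self.feat(x) - z).pow(2).mean()  # latent must be predictable from features
        return alpha * recon + (1.0 - alpha) * align

    def predict(self, x):
        # Targets are unavailable at test time: decode the feature-predicted latent.
        return self.dec(self.feat(x))
```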