Learning Non-Linear Invariants for Unsupervised Out-of-Distribution Detection
- URL: http://arxiv.org/abs/2407.04022v1
- Date: Thu, 4 Jul 2024 16:01:21 GMT
- Title: Learning Non-Linear Invariants for Unsupervised Out-of-Distribution Detection
- Authors: Lars Doorenbos, Raphael Sznitman, Pablo Márquez-Neila
- Abstract summary: We propose a framework consisting of a normalizing flow-like architecture capable of learning non-linear invariants.
Our approach achieves state-of-the-art results on an extensive U-OOD benchmark.
- Score: 5.019613806273252
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The inability of deep learning models to handle data drawn from unseen distributions has sparked much interest in unsupervised out-of-distribution (U-OOD) detection, as it is crucial for reliable deep learning models. Despite considerable attention, theoretically-motivated approaches are few and far between, with most methods building on top of some form of heuristic. Recently, U-OOD was formalized in the context of data invariants, allowing a clearer understanding of how to characterize U-OOD, and methods leveraging affine invariants have attained state-of-the-art results on large-scale benchmarks. Nevertheless, the restriction to affine invariants hinders the expressiveness of the approach. In this work, we broaden the affine invariants formulation to a more general case and propose a framework consisting of a normalizing flow-like architecture capable of learning non-linear invariants. Our novel approach achieves state-of-the-art results on an extensive U-OOD benchmark, and we demonstrate its further applicability to tabular data. Finally, we show our method has the same desirable properties as those based on affine invariants.
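To make the invariants idea concrete, here is a minimal, hypothetical PyTorch sketch (not the authors' implementation) of the general recipe the abstract describes: an invertible, normalizing-flow-like map is trained so that a designated subset of its output coordinates stays near-constant on in-distribution data, and the deviation of those coordinates at test time serves as the U-OOD score. All class names, the training objective, and the anti-collapse penalty below are assumptions for illustration.
```python
# Hypothetical sketch (not the authors' code) of a normalizing-flow-like
# invariant learner for U-OOD detection. An invertible map is trained so
# that its last k output coordinates stay near a constant (zero) on
# in-distribution data; their deviation at test time is the OOD score.
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    """y1 = x1, y2 = x2 + t(x1): invertible and volume-preserving."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.t = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, dim - self.half),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        return torch.cat([x1, x2 + self.t(x1)], dim=1)

class InvariantFlow(nn.Module):
    def __init__(self, dim, n_layers=4, n_invariants=8):
        super().__init__()
        self.layers = nn.ModuleList(AdditiveCoupling(dim) for _ in range(n_layers))
        # Fixed random permutations mix coordinates between coupling layers
        # (kept as plain CPU tensors for brevity in this sketch).
        self.perms = [torch.randperm(dim) for _ in range(n_layers)]
        self.k = n_invariants

    def forward(self, x):
        for perm, layer in zip(self.perms, self.layers):
            x = layer(x[:, perm])
        return x

    def score(self, x):
        # U-OOD score: squared deviation of the invariant coordinates from 0.
        z = self.forward(x)
        return (z[:, -self.k:] ** 2).sum(dim=1)

def train_step(model, x, opt, lam=1.0):
    # Assumed objective: drive the invariant coordinates to zero on training
    # data while keeping the second moment of the remaining coordinates
    # near 1, a simple guard against collapsing the representation.
    z = model(x)
    inv, rest = z[:, -model.k:], z[:, :-model.k]
    loss = (inv ** 2).mean() + lam * ((rest ** 2).mean() - 1.0).abs()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```
Under these assumptions, thresholding `model.score(x)` separates in-distribution inputs (invariant coordinates near zero) from OOD inputs; the affine-invariant special case the abstract generalizes would correspond to restricting the map to a single linear layer.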
Related papers
- Unsupervised Model Diagnosis [49.36194740479798]
This paper proposes Unsupervised Model Diagnosis (UMO) to produce semantic counterfactual explanations without any user guidance.
Our approach identifies and visualizes changes in semantics, and then matches these changes to attributes from wide-ranging text sources.
arXiv Detail & Related papers (2024-10-08T17:59:03Z)
- Topology-Aware Dynamic Reweighting for Distribution Shifts on Graph [24.44321658238713]
Graph Neural Networks (GNNs) are widely used for node classification tasks but often fail to generalize when training and test nodes come from different distributions.
We introduce the Topology-Aware Dynamic Reweighting (TAR) framework, which dynamically adjusts sample weights through gradient flow in the Wasserstein space during training.
Our framework's superiority is demonstrated through standard testing on four graph OOD datasets and three class-imbalanced node classification datasets.
arXiv Detail & Related papers (2024-06-03T07:32:05Z)
- A robust assessment for invariant representations [10.949263264442349]
We propose a novel method to evaluate invariant performance, specifically tailored for IRM-based methods.
We establish a bridge between the conditional expectations of an invariant predictor across different environments via the likelihood ratio.
Our proposed criterion offers a robust basis for evaluating invariant performance.
arXiv Detail & Related papers (2024-04-07T20:05:49Z)
- Learning Invariant Molecular Representation in Latent Discrete Space [52.13724532622099]
We propose a new framework for learning molecular representations that exhibit invariance and robustness against distribution shifts.
Our model achieves stronger generalization against state-of-the-art baselines in the presence of various distribution shifts.
arXiv Detail & Related papers (2023-10-22T04:06:44Z)
- Contrastive Unsupervised Learning of World Model with Invariant Causal Features [20.116319631571095]
We present a world model, which learns causal features using the invariance principle.
We use contrastive unsupervised learning to learn the invariant causal features.
Our proposed model performs on par with the state-of-the-art counterpart.
arXiv Detail & Related papers (2022-09-29T16:49:24Z)
- MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel framework, Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate its effectiveness, with better validity, sparsity, and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z)
- Agree to Disagree: Diversity through Disagreement for Better Transferability [54.308327969778155]
We propose D-BAT (Diversity-By-disAgreement Training), which enforces agreement among the models on the training data but disagreement on out-of-distribution data (see the sketch after this list).
We show how D-BAT naturally emerges from the notion of generalized discrepancy.
arXiv Detail & Related papers (2022-02-09T12:03:02Z)
- Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z)
- Revisiting Design Choices in Model-Based Offline Reinforcement Learning [39.01805509055988]
Offline reinforcement learning enables agents to leverage large pre-collected datasets of environment transitions to learn control policies.
This paper compares key design choices and designs novel protocols to investigate their interaction with other hyperparameters, such as the number of models or the imaginary rollout horizon.
arXiv Detail & Related papers (2021-10-08T13:51:34Z)
- Path Integrals for the Attribution of Model Uncertainties [0.18899300124593643]
We present a novel algorithm that relies on in-distribution curves connecting a feature vector to some counterfactual counterpart.
We validate our approach on benchmark image data sets with varying resolution, and show that it significantly simplifies interpretability.
arXiv Detail & Related papers (2021-07-19T11:07:34Z)
- On the Benefits of Invariance in Neural Networks [56.362579457990094]
We show that training with data augmentation leads to better estimates of the risk and of its gradients, and we provide a PAC-Bayes generalization bound for models trained with data augmentation.
We also show that compared to data augmentation, feature averaging reduces generalization error when used with convex losses, and tightens PAC-Bayes bounds.
arXiv Detail & Related papers (2020-05-01T02:08:58Z)
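As referenced in the D-BAT entry above, the following is a minimal sketch of an agree-on-train, disagree-on-OOD objective of the kind that entry describes. It paraphrases the idea rather than reproducing the paper's loss; the function name, the squared-distance disagreement measure, and the source of the OOD batch are all assumptions.
```python
# Hypothetical sketch of a D-BAT-style objective (paraphrased from the
# entry above, not the authors' loss): two classifiers f1 and f2 must
# agree with the labels on training data but disagree on OOD inputs.
import torch.nn.functional as F

def dbat_loss(f1, f2, x_train, y_train, x_ood, alpha=1.0):
    # Agreement: both models fit the labels on in-distribution data.
    ce = (F.cross_entropy(f1(x_train), y_train)
          + F.cross_entropy(f2(x_train), y_train))
    # Disagreement: push the two predictive distributions apart on OOD
    # inputs; the squared distance between softmax outputs is one simple
    # choice among many (an assumption of this sketch).
    p1 = F.softmax(f1(x_ood), dim=1)
    p2 = F.softmax(f2(x_ood), dim=1)
    disagreement = ((p1 - p2) ** 2).sum(dim=1).mean()
    return ce - alpha * disagreement
```
Minimizing this loss keeps both models accurate on labeled data while pushing their predictive distributions apart on the auxiliary OOD batch, which is what produces the diversity the title refers to.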