CURVE: Learning Causality-Inspired Invariant Representations for Robust Scene Understanding via Uncertainty-Guided Regularization
- URL: http://arxiv.org/abs/2601.20355v1
- Date: Wed, 28 Jan 2026 08:15:56 GMT
- Title: CURVE: Learning Causality-Inspired Invariant Representations for Robust Scene Understanding via Uncertainty-Guided Regularization
- Authors: Yue Liang, Jiatong Du, Ziyi Yang, Yanjun Huang, Hong Chen
- Abstract summary: CURVE is a framework that integrates variational uncertainty modeling with uncertainty-guided structural regularization to suppress high-variance relations. Specifically, we apply prototype-conditioned debiasing to disentangle invariant interaction dynamics from environment-dependent variations, promoting a sparse and domain-stable topology. Empirically, we evaluate CURVE in zero-shot transfer and low-data sim-to-real adaptation, verifying its ability to learn domain-stable sparse topologies and provide reliable uncertainty estimates to support risk prediction under distribution shifts.
- Score: 30.613712415224473
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Scene graphs provide structured abstractions for scene understanding, yet they often overfit to spurious correlations, severely hindering out-of-distribution generalization. To address this limitation, we propose CURVE, a causality-inspired framework that integrates variational uncertainty modeling with uncertainty-guided structural regularization to suppress high-variance, environment-specific relations. Specifically, we apply prototype-conditioned debiasing to disentangle invariant interaction dynamics from environment-dependent variations, promoting a sparse and domain-stable topology. Empirically, we evaluate CURVE in zero-shot transfer and low-data sim-to-real adaptation, verifying its ability to learn domain-stable sparse topologies and provide reliable uncertainty estimates to support risk prediction under distribution shifts.
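The abstract describes uncertainty-guided structural regularization only at a high level. The sketch below illustrates the general idea of suppressing high-variance relations: each candidate edge gets a predicted log-variance, a logistic gate down-weights unstable edges, and a penalty charges for keeping noisy relations. All function names, the gating form, and the penalty are illustrative assumptions, not CURVE's actual loss.

```python
import numpy as np

def edge_uncertainty_weights(edge_logvar, temperature=1.0):
    """Map per-edge log-variance to gating weights in (0, 1):
    high-variance (unstable) relations get weights near 0."""
    return 1.0 / (1.0 + np.exp(edge_logvar / temperature))

def regularized_graph_loss(task_loss, edge_logvar, lam=0.1):
    """Task loss plus an uncertainty-guided sparsity penalty:
    the penalty charges for edges with high predicted variance,
    scaled by how strongly each edge is still used, pushing the
    learned topology toward sparse, stable relations."""
    w = edge_uncertainty_weights(edge_logvar)
    sparsity = lam * np.sum(w * np.exp(edge_logvar))
    return task_loss + sparsity, w

# Toy example: 4 candidate relations with increasing predicted variance.
logvar = np.array([-2.0, -1.0, 0.5, 2.0])
loss, weights = regularized_graph_loss(task_loss=1.0, edge_logvar=logvar)
# Stable (low-variance) edges keep weights near 1; noisy ones are suppressed.
```

In this toy form the gate plays the role of a soft topology: training a scene-graph model end-to-end with such a penalty would favor dropping relations whose statistics vary across environments.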
Related papers
- Invariance on Manifolds: Understanding Robust Visual Representations for Place Recognition [19.200074425090595]
We propose a Second-Order Geometric Statistics framework that inherently captures geometric stability without training. Our approach introduces a training-free framework built upon fixed, pre-trained backbones, achieving strong zero-shot generalization without parameter updates.
arXiv Detail & Related papers (2026-01-31T18:12:29Z) - Environment-Adaptive Covariate Selection: Learning When to Use Spurious Correlations for Out-of-Distribution Prediction [33.69413571438309]
We show that out-of-distribution prediction fails when only a subset of the true causes of the outcome is observed. We propose an environment-adaptive covariate selection (EACS) algorithm. EACS consistently outperforms static causal, invariant, and ERM-based predictors under diverse distribution shifts.
arXiv Detail & Related papers (2026-01-05T18:13:02Z) - Unsupervised Invariant Risk Minimization [7.903539618132858]
We propose a novel unsupervised framework for Invariant Risk Minimization (IRM). Traditional IRM methods rely on labeled data to learn representations that are robust to distributional shifts across environments. We introduce two methods within this framework: Principal Invariant Component Analysis (PICA), a linear method that extracts invariant directions under Gaussian assumptions, and Variational Invariant Autoencoder (VIAE), a deep generative model that disentangles environment-invariant and environment-dependent latent factors.
arXiv Detail & Related papers (2025-05-18T17:54:23Z) - Synergy Between Sufficient Changes and Sparse Mixing Procedure for Disentangled Representation Learning [32.482584125236016]
Disentangled representation learning aims to uncover latent variables underlying the observed data. Some approaches rely on sufficient changes in the distribution of latent variables indicated by auxiliary variables such as domain indices. We propose an identifiability theory with less restrictive constraints regarding distribution changes and the sparse mixing procedure.
arXiv Detail & Related papers (2025-03-01T22:21:37Z) - PseudoNeg-MAE: Self-Supervised Point Cloud Learning using Conditional Pseudo-Negative Embeddings [55.55445978692678]
PseudoNeg-MAE enhances global feature representation of point cloud masked autoencoders by making them both discriminative and sensitive to transformations. We propose a novel loss that explicitly penalizes invariant collapse, enabling the network to capture richer transformation cues while preserving discriminative representations.
arXiv Detail & Related papers (2024-09-24T07:57:21Z) - Generalized Gaussian Temporal Difference Error for Uncertainty-aware Reinforcement Learning [0.19418036471925312]
We introduce a novel framework for generalized Gaussian error modeling in deep reinforcement learning. We improve the estimation and mitigation of data-dependent aleatoric uncertainty. Experiments with policy gradient algorithms demonstrate significant performance gains.
arXiv Detail & Related papers (2024-08-05T08:12:25Z) - Diagnosing and Rectifying Fake OOD Invariance: A Restructured Causal Approach [51.012396632595554]
Invariant representation learning (IRL) encourages prediction from invariant causal features to labels that are de-confounded from the environments.
Recent theoretical results verify that some causal features recovered by IRL methods merely appear domain-invariant in the training environments but fail in unseen domains.
We develop an approach based on conditional mutual information with respect to the RS-SCM, then rigorously rectify the spurious and fake invariant effects.
arXiv Detail & Related papers (2023-12-15T12:58:05Z) - Causality-oriented robustness: exploiting general noise interventions [4.64479351797195]
In this paper, we focus on causality-oriented robustness and propose Distributional Robustness via Invariant Gradients (DRIG). DRIG exploits general noise interventions in training data for robust predictions against unseen interventions. We show that our framework includes anchor regression as a special case, and that it yields prediction models that protect against more diverse perturbations.
arXiv Detail & Related papers (2023-07-18T16:22:50Z) - Toward Certified Robustness Against Real-World Distribution Shifts [65.66374339500025]
We train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model.
A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations.
We propose a general meta-algorithm for handling sigmoid activations which leverages classical notions of counter-example-guided abstraction refinement.
arXiv Detail & Related papers (2022-06-08T04:09:13Z) - Robustness and Accuracy Could Be Reconcilable by (Proper) Definition [109.62614226793833]
The trade-off between robustness and accuracy has been widely studied in the adversarial literature.
We find that it may stem from the improperly defined robust error, which imposes an inductive bias of local invariance.
The proposed self-consistent robust error (SCORE) facilitates the reconciliation between robustness and accuracy, while still handling the worst-case uncertainty.
arXiv Detail & Related papers (2022-02-21T10:36:09Z) - Which Invariance Should We Transfer? A Causal Minimax Learning Approach [18.71316951734806]
We present a comprehensive minimax analysis from a causal perspective.
We propose an efficient algorithm to search for the subset with minimal worst-case risk.
The effectiveness and efficiency of our methods are demonstrated on synthetic data and the diagnosis of Alzheimer's disease.
arXiv Detail & Related papers (2021-07-05T09:07:29Z) - Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach that learns to generate new samples so as to maximize the classifier's exposure to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z) - GenDICE: Generalized Offline Estimation of Stationary Values [108.17309783125398]
We show that effective estimation of stationary values can still be achieved in important applications.
Our approach is based on estimating a ratio that corrects for the discrepancy between the stationary and empirical distributions.
The resulting algorithm, GenDICE, is straightforward and effective.
arXiv Detail & Related papers (2020-02-21T00:27:52Z)
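GenDICE learns its correction ratio from samples via a minimax objective; the tabular toy below only illustrates what that ratio is and why reweighting by it fixes the distribution mismatch. The Markov chain, rewards, and data distribution here are made up for illustration and are not from the paper.

```python
import numpy as np

# Toy 3-state Markov chain with row-stochastic transition matrix P
# and a per-state reward r.
P = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.5, 0.5],
              [0.3, 0.0, 0.7]])
r = np.array([1.0, 0.0, 2.0])

# Stationary distribution of P: the left eigenvector with eigenvalue 1,
# found here by power iteration.
d_pi = np.ones(3) / 3
for _ in range(500):
    d_pi = d_pi @ P
d_pi /= d_pi.sum()

# Off-policy data distribution: how states actually appear in the
# logged dataset, which differs from the stationary distribution.
d_D = np.array([0.6, 0.3, 0.1])

# Correction ratio tau = d_pi / d_D. Reweighting samples drawn from
# d_D by tau recovers expectations under the stationary distribution,
# which is the quantity GenDICE estimates from data.
tau = d_pi / d_D
corrected = np.sum(d_D * tau * r)   # equals E_{d_pi}[r]
true_value = np.sum(d_pi * r)
```

In the tabular case the ratio is available in closed form; GenDICE's contribution is estimating it from off-policy samples alone, without ever computing `d_pi` explicitly.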
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.