Structural Causal 3D Reconstruction
- URL: http://arxiv.org/abs/2207.10156v1
- Date: Wed, 20 Jul 2022 19:22:06 GMT
- Title: Structural Causal 3D Reconstruction
- Authors: Weiyang Liu, Zhen Liu, Liam Paull, Adrian Weller, Bernhard Schölkopf
- Abstract summary: We look into the structure of latent space to capture a topological causal ordering of latent factors.
We first show that different causal orderings matter for 3D reconstruction, and then explore several approaches to find a task-dependent causal factor ordering.
Our experiments demonstrate that the latent space structure indeed serves as an implicit regularization and introduces an inductive bias beneficial for reconstruction.
- Score: 43.097291849527274
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper considers the problem of unsupervised 3D object reconstruction
from in-the-wild single-view images. Due to ambiguity and intrinsic
ill-posedness, this problem is inherently difficult to solve and therefore
requires strong regularization to achieve disentanglement of different latent
factors. Unlike existing works that introduce explicit regularizations into
objective functions, we look into a different space for implicit regularization
-- the structure of latent space. Specifically, we restrict the structure of
latent space to capture a topological causal ordering of latent factors (i.e.,
representing causal dependency as a directed acyclic graph). We first show that
different causal orderings matter for 3D reconstruction, and then explore
several approaches to find a task-dependent causal factor ordering. Our
experiments demonstrate that the latent space structure indeed serves as an
implicit regularization and introduces an inductive bias beneficial for
reconstruction.
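The abstract's core idea is that latent factors are arranged in a topological causal ordering, so each factor is generated as a function of its DAG parents plus independent noise. The sketch below illustrates this structure only; it is not the paper's implementation, and the factor names, linear parent maps, and dimensions are all hypothetical assumptions.

```python
import numpy as np

def sample_causal_latents(order, parents, dim=4, batch=8, seed=0):
    """Generate latent factors following a topological causal ordering.

    order:   factor names in topological order of the DAG
    parents: dict mapping each factor to the names of its causal parents
    """
    rng = np.random.default_rng(seed)
    z = {}
    for name in order:
        noise = rng.standard_normal((batch, dim))      # exogenous noise term
        parent_vals = [z[p] for p in parents.get(name, [])]
        if parent_vals:
            # each non-root factor is a (random, fixed) linear map of its
            # parents plus noise -- a stand-in for a learned mechanism
            w = rng.standard_normal((len(parent_vals) * dim, dim)) / dim
            z[name] = np.concatenate(parent_vals, axis=1) @ w + noise
        else:
            z[name] = noise                            # root factor: pure noise
    return z

# Hypothetical ordering: viewpoint -> lighting -> shape -> texture
order = ["viewpoint", "lighting", "shape", "texture"]
parents = {"lighting": ["viewpoint"], "shape": ["viewpoint"],
           "texture": ["shape", "lighting"]}
latents = sample_causal_latents(order, parents)
```

Because the loop follows the topological order, every factor's parents are already sampled when it is generated; swapping the order changes which dependencies are expressible, which is why the paper searches for a task-dependent ordering.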
Related papers
- Dynamics Within Latent Chain-of-Thought: An Empirical Study of Causal Structure [58.89643769707751]
We study latent chain-of-thought as a manipulable causal process in representation space.
We find that latent-step budgets behave less like homogeneous extra depth and more like staged functionality with non-local routing.
These results motivate mode-conditional and stability-aware analyses as more reliable tools for interpreting and improving latent reasoning systems.
arXiv Detail & Related papers (2026-02-09T15:25:12Z) - Domain Expansion: A Latent Space Construction Framework for Multi-Task Learning [26.322513515274764]
Training a single network with multiple objectives often leads to conflicting gradients that degrade shared representations.
We introduce Domain Expansion, a framework that prevents these conflicts by restructuring the latent space itself.
arXiv Detail & Related papers (2026-01-27T21:30:21Z) - Time Series Domain Adaptation via Latent Invariant Causal Mechanism [28.329164754662354]
Time series domain adaptation aims to transfer the complex temporal dependence from the labeled source domain to the unlabeled target domain.
Recent advances leverage the stable causal mechanism over observed variables to model the domain-invariant temporal dependence.
However, modeling precise causal structures in high-dimensional data, such as videos, remains challenging.
arXiv Detail & Related papers (2025-02-23T16:25:58Z) - A Causal Inspired Early-Branching Structure for Domain Generalization [46.55514281988053]
Learning domain-invariant semantic representations is crucial for achieving domain generalization.
Standard training often results in entangled semantic and domain-specific features.
Previous works suggest formulating the problem from a causal perspective.
We propose two strategies as complements for the basic framework.
arXiv Detail & Related papers (2024-03-13T16:04:29Z) - Nonparametric Partial Disentanglement via Mechanism Sparsity: Sparse
Actions, Interventions and Sparse Temporal Dependencies [58.179981892921056]
This work introduces a novel principle for disentanglement we call mechanism sparsity regularization.
We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors.
We show that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
arXiv Detail & Related papers (2024-01-10T02:38:21Z) - The Topology of Causality [0.0]
We provide a unified framework for the study of causality, non-locality and contextuality.
Our work has its roots in the sheaf-theoretic framework for contextuality by Abramsky and Brandenburger.
arXiv Detail & Related papers (2023-03-13T14:20:22Z) - Understanding and Constructing Latent Modality Structures in Multi-modal
Representation Learning [53.68371566336254]
We argue that the key to better performance lies in meaningful latent modality structures instead of perfect modality alignment.
Specifically, we design 1) a deep feature separation loss for intra-modality regularization; 2) a Brownian-bridge loss for inter-modality regularization; and 3) a geometric consistency loss for both intra- and inter-modality regularization.
arXiv Detail & Related papers (2023-03-10T14:38:49Z) - Causal Triplet: An Open Challenge for Intervention-centric Causal
Representation Learning [98.78136504619539]
Causal Triplet is a causal representation learning benchmark featuring visually more complex scenes.
We show that models built with the knowledge of disentangled or object-centric representations significantly outperform their distributed counterparts.
arXiv Detail & Related papers (2023-01-12T17:43:38Z) - Causal structure in the presence of sectorial constraints, with
application to the quantum switch [0.0]
Existing work on quantum causal structure assumes that one can perform arbitrary operations on systems of interest.
We extend the framework for quantum causal modelling to situations where a system can suffer sectorial constraints.
arXiv Detail & Related papers (2022-04-21T17:18:31Z) - Causal structure in spin-foams [0.0]
In spin-foam models for quantum gravity, the role played by the causal structure is still largely unexplored.
We propose a causal version of the EPRL spin-foam model and discuss the role of the causal structure in the reconstruction of a semiclassical spacetime geometry.
arXiv Detail & Related papers (2021-09-02T14:37:42Z) - Discovering Latent Causal Variables via Mechanism Sparsity: A New
Principle for Nonlinear ICA [81.4991350761909]
Independent component analysis (ICA) refers to an ensemble of methods which formalize this goal and provide an estimation procedure for practical application.
We show that the latent variables can be recovered up to a permutation if one regularizes the latent mechanisms to be sparse.
arXiv Detail & Related papers (2021-07-21T14:22:14Z) - Disentangling Observed Causal Effects from Latent Confounders using
Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.