Do-Operation Guided Causal Representation Learning with Reduced
Supervision Strength
- URL: http://arxiv.org/abs/2206.01802v1
- Date: Fri, 3 Jun 2022 20:18:04 GMT
- Title: Do-Operation Guided Causal Representation Learning with Reduced
Supervision Strength
- Authors: Jiageng Zhu, Hanchen Xie, Wael AbdAlmageed
- Abstract summary: Causal representation learning has been proposed to encode relationships between factors present in high-dimensional data.
We propose a framework which implements do-operation by swapping latent cause and effect factors encoded from a pair of inputs.
We also identify the inadequacy of existing causal representation metrics empirically and theoretically, and introduce new metrics for better evaluation.
- Score: 12.012459418829732
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Causal representation learning has been proposed to encode relationships
between factors present in high-dimensional data. However, existing
methods rely merely on large amounts of labeled data and ignore the
fact that samples generated by the same causal mechanism follow the same causal
relationships. In this paper, we seek to exploit this information by leveraging
do-operation for reducing supervision strength. We propose a framework which
implements do-operation by swapping latent cause and effect factors encoded
from a pair of inputs. Moreover, we also identify the inadequacy of existing
causal representation metrics empirically and theoretically, and introduce new
metrics for better evaluation. Experiments conducted on both synthetic and real
datasets demonstrate the superiority of our method over state-of-the-art
methods.
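The do-operation described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the encoder/decoder are hypothetical linear stand-ins for the learned networks, and the split of the latent into (cause, effect) halves is assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoder/decoder: fixed mutually inverse linear maps
# standing in for the paper's learned networks.
W_enc = rng.normal(size=(4, 4))
W_dec = np.linalg.inv(W_enc)

def encode(x):
    # Map an input to a latent, split into (cause, effect) halves.
    z = W_enc @ x
    return z[:2], z[2:]

def decode(cause, effect):
    # Reconstruct an input from its latent cause and effect factors.
    return W_dec @ np.concatenate([cause, effect])

def do_swap(x_a, x_b):
    """Approximate a do-operation by swapping the latent cause factors
    encoded from a pair of inputs, then decoding."""
    c_a, e_a = encode(x_a)
    c_b, e_b = encode(x_b)
    # Intervene: x_a's effect factors are paired with x_b's causes,
    # and vice versa.
    return decode(c_b, e_a), decode(c_a, e_b)

x_a, x_b = rng.normal(size=4), rng.normal(size=4)
y_a, y_b = do_swap(x_a, x_b)
# Applying the swap twice restores the originals (it is an involution),
# one consistency property such a framework can exploit for training.
z_a, z_b = do_swap(y_a, y_b)
print(np.allclose(z_a, x_a) and np.allclose(z_b, x_b))  # True
```

In the paper's setting, the decoded interventional samples would be constrained to remain consistent with the shared causal mechanism, which is what reduces the required supervision strength.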
Related papers
- Simple Ingredients for Offline Reinforcement Learning [86.1988266277766]
Offline reinforcement learning algorithms have proven effective on datasets highly connected to the target downstream task.
We show that existing methods struggle with diverse data: their performance considerably deteriorates as data collected for related but different tasks is simply added to the offline buffer.
We show that scale, more than algorithmic considerations, is the key factor influencing performance.
arXiv Detail & Related papers (2024-03-19T18:57:53Z)
- SSL Framework for Causal Inconsistency between Structures and Representations [23.035761299444953]
Cross-pollination of deep learning and causal discovery has catalyzed a burgeoning field of research seeking to elucidate causal relationships within non-statistical data forms like images, videos, and text.
We theoretically develop intervention strategies suitable for indefinite data and derive a causal consistency condition (CCC).
CCC could potentially play an influential role in various fields.
arXiv Detail & Related papers (2023-10-28T08:29:49Z)
- Inducing Causal Structure for Abstractive Text Summarization [76.1000380429553]
We introduce a Structural Causal Model (SCM) to induce the underlying causal structure of the summarization data.
We propose a Causality Inspired Sequence-to-Sequence model (CI-Seq2Seq) to learn the causal representations that can mimic the causal factors.
Experimental results on two widely used text summarization datasets demonstrate the advantages of our approach.
arXiv Detail & Related papers (2023-08-24T16:06:36Z) - Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- Causal Disentangled Variational Auto-Encoder for Preference Understanding in Recommendation [50.93536377097659]
This paper introduces the Causal Disentangled Variational Auto-Encoder (CaD-VAE), a novel approach for learning causal disentangled representations from interaction data in recommender systems.
The approach utilizes structural causal models to generate causal representations that describe the causal relationship between latent factors.
arXiv Detail & Related papers (2023-04-17T00:10:56Z)
- From Causal Pairs to Causal Graphs [1.5469452301122175]
Causal structure learning from observational data remains a non-trivial task.
Motivated by the 'Cause-Effect Pair' NIPS 2013 Workshop on Causality challenge, we take a different approach and generate a probability distribution over all possible graphs.
The goal of the paper is to propose new methods based on this probabilistic information and compare their performance with traditional and state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-08T15:28:55Z)
- Generalizable Information Theoretic Causal Representation [37.54158138447033]
We propose to learn causal representation from observational data by regularizing the learning procedure with mutual information measures according to our hypothetical causal graph.
The optimization involves a counterfactual loss, from which we deduce a theoretical guarantee that causality-inspired learning achieves reduced sample complexity and better generalization ability.
arXiv Detail & Related papers (2022-02-17T00:38:35Z)
- Towards Robust and Adaptive Motion Forecasting: A Causal Representation Perspective [72.55093886515824]
We introduce a causal formalism of motion forecasting, which casts the problem as a dynamic process with three groups of latent variables.
We devise a modular architecture that factorizes the representations of invariant mechanisms and style confounders to approximate a causal graph.
Experiment results on synthetic and real datasets show that our three proposed components significantly improve the robustness and reusability of the learned motion representations.
arXiv Detail & Related papers (2021-11-29T18:59:09Z)
- SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z)
- Weakly Supervised Disentangled Generative Causal Representation Learning [21.392372783459013]
We show that previous methods with independent priors fail to disentangle causally related factors even under supervision.
We propose a new disentangled learning method that enables causal controllable generation and causal representation learning.
arXiv Detail & Related papers (2020-10-06T11:38:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.