Causal Imitability Under Context-Specific Independence Relations
- URL: http://arxiv.org/abs/2306.00585v2
- Date: Sun, 11 Jun 2023 21:04:21 GMT
- Title: Causal Imitability Under Context-Specific Independence Relations
- Authors: Fateme Jamshidi, Sina Akbari, Negar Kiyavash
- Abstract summary: We consider the problem of causal imitation learning when context-specific independence (CSI) relations are known.
We provide a necessary graphical criterion for imitation learning under CSI and show that under a structural assumption, this criterion is also sufficient.
We propose a sound algorithmic approach for causal imitation learning which takes both CSI relations and data into account.
- Score: 18.764384254545718
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Drawbacks of ignoring the causal mechanisms when performing imitation
learning have recently been acknowledged. Several approaches both to assess the
feasibility of imitation and to circumvent causal confounding and causal
misspecifications have been proposed in the literature. However, the potential
benefits of the incorporation of additional information about the underlying
causal structure are left unexplored. An example of such overlooked information
is context-specific independence (CSI), i.e., independence that holds only in
certain contexts. We consider the problem of causal imitation learning when CSI
relations are known. We prove that the decision problem pertaining to the
feasibility of imitation in this setting is NP-hard. Further, we provide a
necessary graphical criterion for imitation learning under CSI and show that
under a structural assumption, this criterion is also sufficient. Finally, we
propose a sound algorithmic approach for causal imitation learning which takes
both CSI relations and data into account.
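To make the notion of a CSI relation concrete, here is a minimal, self-contained sketch (plain NumPy; the variables X, Z, Y and all probabilities are illustrative assumptions, not taken from the paper). It simulates a distribution in which Y is independent of X only in the context Z = 0, i.e., an independence that holds in one context and fails in the other.

```python
import numpy as np

# Toy illustration of a context-specific independence (CSI) relation:
# Y is independent of X given Z = 0, but depends on X when Z = 1.
# (Generic example of the CSI notion discussed in the abstract,
#  not code or variables from the paper itself.)

rng = np.random.default_rng(0)
n = 200_000

x = rng.integers(0, 2, size=n)   # binary variable of interest
z = rng.integers(0, 2, size=n)   # context variable
# In context z = 0, Y ignores X; in context z = 1, Y follows X with noise.
p_y1 = np.where(z == 0, 0.5, np.where(x == 1, 0.9, 0.1))
y = (rng.random(n) < p_y1).astype(int)

for context in (0, 1):
    mask = z == context
    p_y_given_x0 = y[mask & (x == 0)].mean()
    p_y_given_x1 = y[mask & (x == 1)].mean()
    print(f"Z={context}:  P(Y=1 | X=0) ~ {p_y_given_x0:.2f},  "
          f"P(Y=1 | X=1) ~ {p_y_given_x1:.2f}")

# Expected output: the two conditionals match for Z=0 (independence holds
# in that context) but differ sharply for Z=1 (independence fails).
```

Note that the ordinary conditional-independence statement X ⊥ Y | Z is simply false in this example; only the context-specific view reveals the extra structure that criteria taking CSI relations into account can exploit.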
Related papers
- Rethinking State Disentanglement in Causal Reinforcement Learning [78.12976579620165]
Causality provides rigorous theoretical support for ensuring that the underlying states can be uniquely recovered through identifiability.
We revisit this research line and find that incorporating RL-specific context can reduce unnecessary assumptions in previous identifiability analyses for latent states.
We propose a novel approach for general partially observable Markov Decision Processes (POMDPs) by replacing the complicated structural constraints in previous methods with two simple constraints for transition and reward preservation.
arXiv Detail & Related papers (2024-08-24T06:49:13Z) - New Rules for Causal Identification with Background Knowledge [59.733125324672656]
We propose two novel rules for incorporating background knowledge (BK), which offer a new perspective on the open problem.
We show that these rules are applicable in some typical causality tasks, such as determining the set of possible causal effects with observational data.
arXiv Detail & Related papers (2024-07-21T20:21:21Z) - On the Identification of Temporally Causal Representation with Instantaneous Dependence [50.14432597910128]
Temporally causal representation learning aims to identify the latent causal process from time series observations.
Most methods require the assumption that the latent causal processes do not have instantaneous relations.
We propose an IDentification framework for instantaneOus Latent dynamics.
arXiv Detail & Related papers (2024-05-24T08:08:05Z) - SSL Framework for Causal Inconsistency between Structures and Representations [23.035761299444953]
Cross-pollination of deep learning and causal discovery has catalyzed a burgeoning field of research seeking to elucidate causal relationships within non-statistical data forms like images, videos, and text.
We theoretically develop intervention strategies suitable for indefinite data and derive a causal consistency condition (CCC).
CCC could potentially play an influential role in various fields.
arXiv Detail & Related papers (2023-10-28T08:29:49Z) - Inducing Causal Structure for Abstractive Text Summarization [76.1000380429553]
We introduce a Structural Causal Model (SCM) to induce the underlying causal structure of the summarization data.
We propose a Causality Inspired Sequence-to-Sequence model (CI-Seq2Seq) to learn the causal representations that can mimic the causal factors.
Experimental results on two widely used text summarization datasets demonstrate the advantages of our approach.
arXiv Detail & Related papers (2023-08-24T16:06:36Z) - Learning a Structural Causal Model for Intuition Reasoning in Conversation [20.243323155177766]
Intuition reasoning, a crucial aspect of NLP research, has not been adequately addressed by prevailing models.
We develop a conversation cognitive model (CCM) that explains how each utterance receives and activates channels of information.
By leveraging variational inference, it explores substitutes for implicit causes, addresses the issue of their unobservability, and reconstructs the causal representations of utterances through the evidence lower bounds.
arXiv Detail & Related papers (2023-05-28T13:54:09Z) - How to Enhance Causal Discrimination of Utterances: A Case on Affective Reasoning [22.11437627661179]
We propose the incorporation of i.i.d. noise terms into the conversation process, thereby constructing a structural causal model (SCM).
To facilitate the implementation of deep learning, we introduce the cogn frameworks to handle unstructured conversation data, and employ an autoencoder architecture to regard the unobservable noise as learnable "implicit causes".
arXiv Detail & Related papers (2023-05-04T07:45:49Z) - Effect Identification in Cluster Causal Diagrams [51.42809552422494]
We introduce a new type of graphical model called cluster causal diagrams (for short, C-DAGs)
C-DAGs allow for the partial specification of relationships among variables based on limited prior knowledge.
We develop the foundations and machinery for valid causal inferences over C-DAGs.
arXiv Detail & Related papers (2022-02-22T21:27:31Z) - Causal Effect Identification with Context-specific Independence Relations of Control Variables [24.835889689036943]
We study the problem of causal effect identification from the observational distribution given the causal graph.
We introduce a set of graphical constraints under which the CSI relations can be learned from mere observational distribution.
arXiv Detail & Related papers (2021-10-22T20:58:37Z) - Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z) - Identifying Causal Effects via Context-specific Independence Relations [9.51801023527378]
Causal effect identification considers whether an interventional probability distribution can be uniquely determined from a passively observed distribution.
We show that deciding causal effect non-identifiability is NP-hard in the presence of context-specific independence relations.
Motivated by this, we design a calculus and an automated search procedure for identifying causal effects in the presence of CSIs.
arXiv Detail & Related papers (2020-09-21T11:38:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.