CASR: Refining Action Segmentation via Marginalizing Frame-level Causal
Relationships
- URL: http://arxiv.org/abs/2311.12401v4
- Date: Fri, 26 Jan 2024 07:32:39 GMT
- Title: CASR: Refining Action Segmentation via Marginalizing Frame-level Causal
Relationships
- Authors: Keqing Du, Xinyu Yang, Hang Chen
- Abstract summary: We propose Causal Abstraction Segmentation Refiner (CASR), which can refine TAS results from various models by enhancing video causality.
We define the frame-level causal model and segment-level causal model, so that the causal adjacency matrix constructed from marginalized frame-level causal relationships can represent the segment-level causal relationships.
- Score: 23.035761299444953
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Integrating deep learning and causal discovery has increased the
interpretability of Temporal Action Segmentation (TAS) tasks. However,
frame-level causal relationships contain many complicated noises beyond the
segment level, making it infeasible to directly express macro action semantics.
Thus, we propose Causal Abstraction Segmentation Refiner (CASR), which can
refine TAS results from various models by enhancing video causality through
marginalizing frame-level causal relationships. Specifically, we define the
equivalent frame-level causal model and segment-level causal model, so that the
causal adjacency matrix constructed from marginalized frame-level causal
relationships can represent the segment-level causal relationships. CASR works
by reducing the difference between the causal adjacency matrix we construct and
the one derived from the pre-segmentation results of backbone models. In
addition, we propose a novel evaluation metric, Causal Edit Distance (CED), to
evaluate causal interpretability. Extensive experimental results on mainstream
datasets indicate that CASR significantly surpasses various existing methods in
action segmentation performance, as well as in causal explainability and
generalization.
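To make the marginalization idea concrete, here is a minimal sketch, not the authors' implementation: it assumes a frame-level causal adjacency matrix and a frame-to-segment assignment, aggregates (marginalizes) frame-level edges into a segment-level causal adjacency matrix, and measures its difference from a segment-level matrix derived from a backbone's pre-segmentation, the kind of discrepancy CASR is described as reducing. The function names, the mean-pooling aggregation, and the Frobenius-norm difference are all assumptions for illustration.

```python
import numpy as np

def marginalize_frame_causality(frame_adj: np.ndarray,
                                frame_to_segment: np.ndarray,
                                num_segments: int) -> np.ndarray:
    """Aggregate a frame-level causal adjacency matrix (F x F) into a
    segment-level matrix (S x S) by mean-pooling the edges between the
    frames assigned to each pair of segments. (Hypothetical marginalization
    choice; the paper's exact operator may differ.)"""
    seg_adj = np.zeros((num_segments, num_segments))
    for a in range(num_segments):
        for b in range(num_segments):
            rows = frame_to_segment == a
            cols = frame_to_segment == b
            block = frame_adj[np.ix_(rows, cols)]
            seg_adj[a, b] = block.mean() if block.size else 0.0
    return seg_adj

def causal_refinement_gap(seg_adj_constructed: np.ndarray,
                          seg_adj_backbone: np.ndarray) -> float:
    """Difference between the constructed segment-level causal adjacency and
    the one implied by the backbone's pre-segmentation (Frobenius norm is an
    illustrative choice, not the paper's stated objective)."""
    return float(np.linalg.norm(seg_adj_constructed - seg_adj_backbone))

# Toy usage: 6 frames grouped into 2 segments.
rng = np.random.default_rng(0)
frame_adj = rng.random((6, 6))              # stand-in for frame-level causal strengths
frame_to_segment = np.array([0, 0, 0, 1, 1, 1])
seg_adj = marginalize_frame_causality(frame_adj, frame_to_segment, num_segments=2)
backbone_seg_adj = np.eye(2)                # stand-in for a backbone-derived matrix
print(seg_adj)
print(causal_refinement_gap(seg_adj, backbone_seg_adj))
```

Under these assumptions, shrinking the reported gap plays the role of aligning the marginalized frame-level causality with the segment-level structure implied by the backbone's pre-segmentation.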
Related papers
- Revisiting Spurious Correlation in Domain Generalization [12.745076668687748]
We build a structural causal model (SCM) to describe the causality within the data generation process.
We further conduct a thorough analysis of the mechanisms underlying spurious correlation.
In this regard, we propose to control confounding bias in OOD generalization by introducing a propensity score weighted estimator.
arXiv Detail & Related papers (2024-06-17T13:22:00Z)
- Effective Bayesian Causal Inference via Structural Marginalisation and Autoregressive Orders [16.682775063684907]
We decompose the structure learning problem into inferring causal order and a parent set for each variable given a causal order.
Our method yields state-of-the-art results in structure learning on simulated non-linear additive noise benchmarks with scale-free and Erdos-Renyi graph structures.
arXiv Detail & Related papers (2024-02-22T18:39:24Z)
- Inducing Causal Structure for Abstractive Text Summarization [76.1000380429553]
We introduce a Structural Causal Model (SCM) to induce the underlying causal structure of the summarization data.
We propose a Causality Inspired Sequence-to-Sequence model (CI-Seq2Seq) to learn the causal representations that can mimic the causal factors.
Experimental results on two widely used text summarization datasets demonstrate the advantages of our approach.
arXiv Detail & Related papers (2023-08-24T16:06:36Z)
- Representation Disentaglement via Regularization by Causal Identification [3.9160947065896803]
We propose the use of a causal collider structured model to describe the underlying data generative process assumptions in disentangled representation learning.
For this, we propose regularization by identification (ReI), a modular regularization engine designed to align the behavior of large scale generative models with the disentanglement constraints imposed by causal identification.
arXiv Detail & Related papers (2023-02-28T23:18:54Z)
- Estimation of Bivariate Structural Causal Models by Variational Gaussian Process Regression Under Likelihoods Parametrised by Normalising Flows [74.85071867225533]
Causal mechanisms can be described by structural causal models.
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
arXiv Detail & Related papers (2021-09-06T14:52:58Z)
- Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning [76.00395335702572]
A central goal for AI and causality is the joint discovery of abstract representations and causal structure.
Existing environments for studying causal induction are poorly suited for this objective because they have complicated task-specific causal graphs.
In this work, our goal is to facilitate research in learning representations of high-level variables as well as causal structures among them.
arXiv Detail & Related papers (2021-07-02T05:44:56Z)
- Counterfactual Maximum Likelihood Estimation for Training Deep Networks [83.44219640437657]
Deep learning models are prone to learning spurious correlations that should not be learned as predictive clues.
We propose a causality-based training framework to reduce the spurious correlations caused by observable confounders.
We conduct experiments on two real-world tasks: Natural Language Inference (NLI) and Image Captioning.
arXiv Detail & Related papers (2021-06-07T17:47:16Z)
- CausalX: Causal Explanations and Block Multilinear Factor Analysis [3.087360758008569]
We propose a unified multilinear model of wholes and parts.
We introduce an incremental bottom-up computational alternative, the Incremental M-mode Block SVD.
The resulting object representation is an interpretable choice of intrinsic causal factor representations related to an object's hierarchy of wholes and parts.
arXiv Detail & Related papers (2021-02-25T13:49:01Z)
- CausalVAE: Structured Causal Disentanglement in Variational Autoencoder [52.139696854386976]
The framework of variational autoencoder (VAE) is commonly used to disentangle independent factors from observations.
We propose a new VAE based framework named CausalVAE, which includes a Causal Layer to transform independent factors into causal endogenous ones.
Results show that the causal representations learned by CausalVAE are semantically interpretable, and their causal relationship as a Directed Acyclic Graph (DAG) is identified with good accuracy.
arXiv Detail & Related papers (2020-04-18T20:09:34Z)
- A Critical View of the Structural Causal Model [89.43277111586258]
We show that one can identify the cause and the effect without considering their interaction at all.
We propose a new adversarial training method that mimics the disentangled structure of the causal model.
Our multidimensional method outperforms the literature methods on both synthetic and real world datasets.
arXiv Detail & Related papers (2020-02-23T22:52:28Z)