Spatio-Temporal Graphical Counterfactuals: An Overview
- URL: http://arxiv.org/abs/2407.01875v1
- Date: Tue, 2 Jul 2024 01:34:13 GMT
- Title: Spatio-Temporal Graphical Counterfactuals: An Overview
- Authors: Mingyu Kang, Duxin Chen, Ziyuan Pu, Jianxi Gao, Wenwu Yu
- Abstract summary: Counterfactual thinking is a critical yet challenging topic for artificial intelligence to learn knowledge from data.
Our aim is to conduct a survey that compares and discusses different counterfactual models, theories and approaches.
- Score: 11.616701619068804
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Counterfactual thinking is a critical yet challenging topic for artificial intelligence to learn knowledge from data and ultimately improve its performance in new scenarios. Many research works, including the Potential Outcome Model and the Structural Causal Model, have been proposed to realize it. However, their modelings, theoretical foundations and application approaches usually differ. Moreover, there is a lack of a graphical approach for inferring spatio-temporal counterfactuals that considers spatial and temporal interactions between multiple units. Thus, in this work, our aim is to conduct a survey that compares and discusses different counterfactual models, theories and approaches, and further to build a unified graphical causal framework to infer spatio-temporal counterfactuals.
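For concreteness, counterfactual inference in a Structural Causal Model (one of the frameworks compared in the survey) follows Pearl's abduction-action-prediction recipe. The sketch below is a minimal illustration on a hypothetical two-variable linear SCM; the mechanisms and coefficients are assumptions for exposition, not the graphical framework proposed in the paper.

```python
import numpy as np

# Toy SCM (assumed for illustration): X := U_x,  Y := 2*X + U_y.

def abduction(x_obs, y_obs):
    # Step 1: recover the exogenous noise terms consistent with the observation.
    u_x = x_obs
    u_y = y_obs - 2.0 * x_obs
    return u_x, u_y

def action_and_prediction(u_y, x_intervened):
    # Step 2: replace the mechanism for X with the intervention do(X = x_intervened).
    # Step 3: propagate the recovered noise through the unchanged mechanism for Y.
    return 2.0 * x_intervened + u_y

x_obs, y_obs = 1.0, 2.5                               # factual observation
u_x, u_y = abduction(x_obs, y_obs)                    # abduction
y_cf = action_and_prediction(u_y, x_intervened=3.0)   # action + prediction
print(f"factual Y = {y_obs}, counterfactual Y under do(X=3) = {y_cf}")
```

Spatio-temporal counterfactuals generalize this single-unit recipe by letting each unit's mechanism depend on neighboring units and past states over a graph.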
Related papers
- A Survey on Diffusion Models for Time Series and Spatio-Temporal Data [92.1255811066468]
We review the use of diffusion models in time series and spatio-temporal data, categorizing them by model, task type, data modality, and practical application domain.
We categorize diffusion models into unconditioned and conditioned types, and discuss time series and spatio-temporal data separately.
Our survey covers their application extensively in various fields including healthcare, recommendation, climate, energy, audio, and transportation.
arXiv Detail & Related papers (2024-04-29T17:19:40Z) - Revisiting the Temporal Modeling in Spatio-Temporal Predictive Learning
under A Unified View [73.73667848619343]
We introduce USTEP (Unified Spatio-TEmporal Predictive learning), an innovative framework that reconciles recurrent-based and recurrent-free methods by integrating both micro-temporal and macro-temporal scales.
arXiv Detail & Related papers (2023-10-09T16:17:42Z) - Foundational Models Defining a New Era in Vision: A Survey and Outlook [151.49434496615427]
Vision systems that can see and reason about the compositional nature of visual scenes are fundamental to understanding our world.
The models learned to bridge the gap between such modalities, coupled with large-scale training data, facilitate contextual reasoning, generalization, and prompt capabilities at test time.
The output of such models can be modified through human-provided prompts without retraining, e.g., segmenting a particular object by providing a bounding box, having interactive dialogues by asking questions about an image or video scene, or manipulating the robot's behavior through language instructions.
arXiv Detail & Related papers (2023-07-25T17:59:18Z) - Interpretation of Time-Series Deep Models: A Survey [27.582644914283136]
We present a wide range of post-hoc interpretation methods for time-series models based on backpropagation, perturbation, and approximation.
We also highlight inherently interpretable models, a novel category of interpretation methods in which human-understandable information is designed within the models themselves.
arXiv Detail & Related papers (2023-05-23T23:43:26Z) - Foundation Models for Decision Making: Problems, Methods, and
Opportunities [124.79381732197649]
Foundation models pretrained on diverse data at scale have demonstrated extraordinary capabilities in a wide range of vision and language tasks.
New paradigms are emerging for training foundation models to interact with other agents and perform long-term reasoning.
Research at the intersection of foundation models and decision making holds tremendous promise for creating powerful new systems.
arXiv Detail & Related papers (2023-03-07T18:44:07Z) - Causal Reasoning Meets Visual Representation Learning: A Prospective
Study [117.08431221482638]
A lack of interpretability, robustness, and out-of-distribution generalization is becoming a major challenge for existing visual models.
Inspired by the strong inference ability of human-level agents, recent years have witnessed great effort in developing causal reasoning paradigms.
This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussions, and bring to the forefront the urgency of developing novel causal reasoning methods.
arXiv Detail & Related papers (2022-04-26T02:22:28Z) - Temporal Relevance Analysis for Video Action Models [70.39411261685963]
We first propose a new approach to quantify the temporal relationships between frames captured by CNN-based action models.
We then conduct comprehensive experiments and in-depth analysis to provide a better understanding of how temporal modeling is affected.
arXiv Detail & Related papers (2022-04-25T19:06:48Z) - Counterfactual Temporal Point Processes [18.37409880250174]
We develop a causal model of thinning for temporal point processes that builds upon the Gumbel-Max structural causal model.
We then simulate counterfactual realizations of the temporal point process under a given alternative intensity function.
arXiv Detail & Related papers (2021-11-15T08:46:25Z)