Think before You Simulate: Symbolic Reasoning to Orchestrate Neural Computation for Counterfactual Question Answering
- URL: http://arxiv.org/abs/2506.10753v1
- Date: Thu, 12 Jun 2025 14:37:11 GMT
- Title: Think before You Simulate: Symbolic Reasoning to Orchestrate Neural Computation for Counterfactual Question Answering
- Authors: Adam Ishay, Zhun Yang, Joohyung Lee, Ilgu Kang, Dongjae Lim
- Abstract summary: This paper introduces a method to enhance a neuro-symbolic model for counterfactual reasoning. We define the notion of a causal graph to represent causal relations. We validate the effectiveness of our approach on two benchmarks.
- Score: 9.875621856950408
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal and temporal reasoning about video dynamics is a challenging problem. While neuro-symbolic models that combine symbolic reasoning with neural-based perception and prediction have shown promise, they exhibit limitations, especially in answering counterfactual questions. This paper introduces a method to enhance a neuro-symbolic model for counterfactual reasoning, leveraging symbolic reasoning about causal relations among events. We define the notion of a causal graph to represent such relations and use Answer Set Programming (ASP), a declarative logic programming method, to find how to coordinate perception and simulation modules. We validate the effectiveness of our approach on two benchmarks, CLEVRER and CRAFT. Our enhancement achieves state-of-the-art performance on the CLEVRER challenge, significantly outperforming existing models. In the case of the CRAFT benchmark, we leverage large pre-trained language models, such as GPT-3.5 and GPT-4, as a proxy for a dynamics simulator. Our findings show that this method can further improve performance on counterfactual questions by providing alternative prompts instructed by symbolic causal reasoning.
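As a rough illustration of the causal-graph idea in the abstract (the class, predicate names, and toy scene below are hypothetical, not the paper's actual implementation, which uses ASP rather than Python), one can represent causal relations among detected events as a directed graph and use it to decide which events a counterfactual intervention would invalidate, so that only those need re-simulation:

```python
# Hypothetical sketch: causal relations among video events as a directed
# graph; downstream events of an intervened-on event must be re-simulated,
# the rest can be read off perception directly.
from collections import defaultdict


class CausalGraph:
    def __init__(self):
        self.edges = defaultdict(set)  # cause -> set of direct effects

    def add_cause(self, cause, effect):
        self.edges[cause].add(effect)

    def affected_by(self, event):
        """All events causally downstream of `event` (depth-first search)."""
        seen, stack = set(), [event]
        while stack:
            e = stack.pop()
            for nxt in self.edges[e]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen


# Toy CLEVRER-style scene: the cube hits the sphere, which then hits the
# cylinder, which then exits the frame.
g = CausalGraph()
g.add_cause("collision(cube, sphere)", "collision(sphere, cylinder)")
g.add_cause("collision(sphere, cylinder)", "exit(cylinder)")

# Counterfactual "what if the cube were removed?": only the events
# downstream of the cube's collision are in doubt and need re-simulation.
to_resimulate = g.affected_by("collision(cube, sphere)")
```

In the paper's setting, the analogous reachability reasoning is expressed declaratively in ASP, which then orchestrates the perception and simulation modules accordingly.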
Related papers
- Inverse Scaling in Test-Time Compute [51.16323216811257]
Extending the reasoning length of Large Reasoning Models (LRMs) deteriorates performance. We identify five distinct failure modes when models reason for longer. These findings suggest that while test-time compute scaling remains promising for improving model capabilities, it may inadvertently reinforce problematic reasoning patterns.
arXiv Detail & Related papers (2025-07-19T00:06:13Z)
- Towards Unified Neurosymbolic Reasoning on Knowledge Graphs [37.22138524925735]
Knowledge Graph (KG) reasoning has received significant attention in the fields of artificial intelligence and knowledge engineering. We propose a unified neurosymbolic reasoning framework, namely Tunsr, for KG reasoning.
arXiv Detail & Related papers (2025-07-04T16:29:45Z)
- Neuro Symbolic Knowledge Reasoning for Procedural Video Question Answering [26.013577822475856]
This paper introduces a new video question-answering dataset that challenges models to leverage procedural knowledge for complex reasoning. It requires recognizing visual entities, generating hypotheses, and performing contextual, causal, and counterfactual reasoning.
arXiv Detail & Related papers (2025-03-19T07:49:14Z)
- Enhancing Logical Reasoning in Large Language Models through Graph-based Synthetic Data [53.433309883370974]
This work explores the potential and limitations of using graph-based synthetic reasoning data as training signals to enhance Large Language Models' reasoning capabilities. Our experiments, conducted on two established natural language reasoning tasks, demonstrate that supervised fine-tuning with synthetic graph-based reasoning data effectively enhances LLMs' reasoning performance without compromising their effectiveness on other standard evaluation benchmarks.
arXiv Detail & Related papers (2024-09-19T03:39:09Z)
- Emulating the Human Mind: A Neural-symbolic Link Prediction Model with Fast and Slow Reasoning and Filtered Rules [4.979279893937017]
We introduce a novel Neural-Symbolic model named FaSt-FLiP.
Our objective is to combine logical and neural models for enhanced link prediction.
arXiv Detail & Related papers (2023-10-21T12:45:11Z)
- Interpretable Imitation Learning with Dynamic Causal Relations [65.18456572421702]
We propose to expose captured knowledge in the form of a directed acyclic causal graph.
We also design this causal discovery process to be state-dependent, enabling it to model the dynamics in latent causal graphs.
The proposed framework is composed of three parts: a dynamic causal discovery module, a causality encoding module, and a prediction module, and is trained in an end-to-end manner.
arXiv Detail & Related papers (2023-09-30T20:59:42Z)
- MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z)
- Neural Causal Models for Counterfactual Identification and Estimation [62.30444687707919]
We study the evaluation of counterfactual statements through neural models.
First, we show that neural causal models (NCMs) are expressive enough.
Second, we develop an algorithm for simultaneously identifying and estimating counterfactual distributions.
arXiv Detail & Related papers (2022-09-30T18:29:09Z)
- Contrastive Reasoning in Neural Networks [26.65337569468343]
Inference built on features that identify causal class dependencies is termed feed-forward inference.
In this paper, we formalize the structure of contrastive reasoning and propose a methodology to extract a neural network's notion of contrast.
We demonstrate the value of contrastively recognizing images under distortions by reporting improvements of 3.47%, 2.56%, and 5.48% in average accuracy.
arXiv Detail & Related papers (2021-03-23T05:54:36Z)
- Neural Logic Reasoning [47.622957656745356]
We propose Logic-Integrated Neural Network (LINN) to integrate the power of deep learning and logic reasoning.
LINN learns basic logical operations such as AND, OR, NOT as neural modules, and conducts propositional logical reasoning through the network for inference.
Experiments show that LINN significantly outperforms state-of-the-art recommendation models in Top-K recommendation.
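As a loose sketch of the "logic operations as neural modules" idea in the LINN summary (LINN actually learns these operators as trained networks constrained by logic regularizers; the fixed fuzzy-logic formulas below are a hypothetical stand-in to show the composition idea):

```python
# Hypothetical sketch: differentiable stand-ins for AND, OR, NOT.
# Truth values live in [0, 1], so gradients can flow through compositions.
import math


def neg(x):
    return 1.0 - x


def conj(x, y):  # soft AND (product t-norm)
    return x * y


def disj(x, y):  # soft OR (probabilistic sum)
    return x + y - x * y


# De Morgan check on fuzzy truth values: NOT(a AND b) == (NOT a) OR (NOT b)
a, b = 0.9, 0.4
assert math.isclose(neg(conj(a, b)), disj(neg(a), neg(b)))
```

Because each operator is differentiable, expressions built from them can be trained end-to-end, which is what lets a model like LINN conduct propositional reasoning inside a neural network.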
arXiv Detail & Related papers (2020-08-20T14:53:23Z)
- Neuro-Symbolic Visual Reasoning: Disentangling "Visual" from "Reasoning" [49.76230210108583]
We propose a framework to isolate and evaluate the reasoning aspect of visual question answering (VQA) separately from its perception.
We also propose a novel top-down calibration technique that allows the model to answer reasoning questions even with imperfect perception.
On the challenging GQA dataset, this framework is used to perform in-depth, disentangled comparisons between well-known VQA models.
arXiv Detail & Related papers (2020-06-20T08:48:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.