SymbolicThought: Integrating Language Models and Symbolic Reasoning for Consistent and Interpretable Human Relationship Understanding
- URL: http://arxiv.org/abs/2507.04189v2
- Date: Sun, 13 Jul 2025 22:06:13 GMT
- Title: SymbolicThought: Integrating Language Models and Symbolic Reasoning for Consistent and Interpretable Human Relationship Understanding
- Authors: Runcong Zhao, Qinglin Zhu, Hainiu Xu, Bin Liang, Lin Gui, Yulan He
- Abstract summary: SymbolicThought is a human-in-the-loop framework that combines large language models with symbolic reasoning. It constructs editable character relationship graphs, refines them using seven types of logical constraints, and enables real-time validation and conflict resolution. Experiments show that SymbolicThought improves annotation accuracy and consistency while significantly reducing time cost.
- Score: 21.30887121514266
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding character relationships is essential for interpreting complex narratives and conducting socially grounded AI research. However, manual annotation is time-consuming and low in coverage, while large language models (LLMs) often produce hallucinated or logically inconsistent outputs. We present SymbolicThought, a human-in-the-loop framework that combines LLM-based extraction with symbolic reasoning. The system constructs editable character relationship graphs, refines them using seven types of logical constraints, and enables real-time validation and conflict resolution through an interactive interface. To support logical supervision and explainable social analysis, we release a dataset of 160 interpersonal relationships with corresponding logical structures. Experiments show that SymbolicThought improves annotation accuracy and consistency while significantly reducing time cost, offering a practical tool for narrative understanding, explainable AI, and LLM evaluation.
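The workflow described in the abstract (an editable relationship graph checked against logical constraints, with conflicts surfaced for human review) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the class names, relation labels, and the two example constraints (symmetry and mutual exclusivity) are assumptions, and the paper's seven constraint types are not reproduced here.

```python
# Illustrative sketch of constraint checking over a character relationship graph.
# The relation labels and the two constraints shown are assumptions for illustration,
# not the paper's actual rule set or interface.
from dataclasses import dataclass, field

@dataclass
class RelationGraph:
    # edges maps (source, target) -> set of relation labels, e.g. "sibling_of"
    edges: dict = field(default_factory=dict)

    def add(self, src: str, dst: str, label: str) -> None:
        self.edges.setdefault((src, dst), set()).add(label)

    def labels(self, src: str, dst: str) -> set:
        return self.edges.get((src, dst), set())

# Hypothetical constraint sets; the paper defines seven constraint types, not shown here.
SYMMETRIC = {"sibling_of", "married_to"}
EXCLUSIVE = [("parent_of", "child_of")]

def find_conflicts(graph: RelationGraph) -> list:
    """Return human-readable conflict messages for the human-in-the-loop review step."""
    conflicts = []
    for (src, dst), labels in graph.edges.items():
        # Symmetry: a symmetric relation must also hold in the reverse direction.
        for label in labels & SYMMETRIC:
            if label not in graph.labels(dst, src):
                conflicts.append(f"{label}({src}, {dst}) has no reverse edge")
        # Mutual exclusivity: incompatible labels on the same ordered pair.
        for a, b in EXCLUSIVE:
            if a in labels and b in labels:
                conflicts.append(f"{a} and {b} both assigned to ({src}, {dst})")
    return conflicts

if __name__ == "__main__":
    g = RelationGraph()
    g.add("Alice", "Bob", "sibling_of")   # reverse edge missing -> flagged
    g.add("Carol", "Dan", "parent_of")
    g.add("Carol", "Dan", "child_of")     # contradictory pair -> flagged
    for msg in find_conflicts(g):
        print(msg)
```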
Related papers
- T-CPDL: A Temporal Causal Probabilistic Description Logic for Developing Logic-RAG Agent [5.439020425819001]
Temporal Causal Probabilistic Description Logic (T-CPDL) is an integrated framework that extends Description Logic with temporal interval operators, explicit causal relationships, and probabilistic annotations. T-CPDL substantially improves inference accuracy, interpretability, and confidence calibration of language model outputs. This work also lays the groundwork for developing advanced Logic-Retrieval-Augmented Generation (Logic-RAG) frameworks.
arXiv Detail & Related papers (2025-06-23T12:11:15Z) - Neuro-Symbolic Contrastive Learning for Cross-domain Inference [13.649270716741535]
Inductive logic programming (ILP) excels at inferring logical relationships across diverse, sparse and limited datasets. This paper proposes a bridge between the two approaches: neuro-symbolic contrastive learning.
arXiv Detail & Related papers (2025-02-13T11:48:46Z) - NAVER: A Neuro-Symbolic Compositional Automaton for Visual Grounding with Explicit Logic Reasoning [22.60247555240363]
This paper explores challenges for methods that require reasoning like human cognition. We propose NAVER, a compositional visual grounding method that integrates explicit probabilistic logic reasoning. Our results show that NAVER achieves SoTA performance compared with recent end-to-end and compositional baselines.
arXiv Detail & Related papers (2025-02-01T09:19:08Z) - Failure Modes of LLMs for Causal Reasoning on Narratives [51.19592551510628]
We investigate the interaction between world knowledge and logical reasoning. We find that state-of-the-art large language models (LLMs) often rely on superficial generalizations. We show that simple reformulations of the task can elicit more robust reasoning behavior.
arXiv Detail & Related papers (2024-10-31T12:48:58Z) - Large Language Models Fall Short: Understanding Complex Relationships in Detective Narratives [21.297972871264744]
We introduce a new benchmark, Conan, designed for extracting and analysing intricate character relation graphs from detective narratives.
Specifically, we designed hierarchical relationship categories and manually extracted and annotated role-oriented relationships from the perspectives of various characters.
Our experiments with advanced Large Language Models (LLMs) like GPT-3.5, GPT-4, and Llama2 reveal their limitations in inferring complex relationships and handling longer narratives.
arXiv Detail & Related papers (2024-02-16T19:59:45Z) - LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z) - When Do Program-of-Thoughts Work for Reasoning? [51.2699797837818]
We propose the complexity-impacted reasoning score (CIRS) to measure the correlation between code and reasoning abilities.
Specifically, we use the abstract syntax tree to encode the structural information and calculate logical complexity (see the illustrative sketch after this list).
Code will be integrated into the EasyInstruct framework at https://github.com/zjunlp/EasyInstruct.
arXiv Detail & Related papers (2023-08-29T17:22:39Z) - Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension [80.99865844249106]
We propose a holistic graph network (HGN) that handles context at both the discourse level and the word level as the basis for logical reasoning.
Specifically, node-level and type-level relations, which can be interpreted as bridges in the reasoning process, are modeled by a hierarchical interaction mechanism.
arXiv Detail & Related papers (2023-06-21T07:34:27Z) - Unlocking Temporal Question Answering for Large Language Models with Tailor-Made Reasoning Logic [84.59255070520673]
Large language models (LLMs) face a challenge when engaging in temporal reasoning.
We propose TempLogic, a novel framework designed specifically for temporal question-answering tasks.
arXiv Detail & Related papers (2023-05-24T10:57:53Z) - LogiGAN: Learning Logical Reasoning via Adversarial Pre-training [58.11043285534766]
We present LogiGAN, an unsupervised adversarial pre-training framework for improving logical reasoning abilities of language models.
Inspired by the facilitation effect of reflective thinking in human learning, we simulate the learning-thinking process with an adversarial Generator-Verifier architecture.
Both base- and large-size language models pre-trained with LogiGAN demonstrate clear performance improvements on 12 datasets.
arXiv Detail & Related papers (2022-05-18T08:46:49Z)
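As a rough illustration of the AST-based complexity measurement mentioned in the Program-of-Thoughts entry above, the following sketch computes two simple structural features (node count and tree depth) with Python's ast module. The feature names and the choice of features are assumptions for illustration; CIRS's actual scoring formula is not reproduced here.

```python
# Minimal sketch of AST-based structural complexity, in the spirit of CIRS.
# The feature set (node count, tree depth) is assumed for illustration only.
import ast

def ast_depth(node: ast.AST) -> int:
    """Depth of the abstract syntax tree rooted at `node`."""
    children = list(ast.iter_child_nodes(node))
    return 1 + max((ast_depth(c) for c in children), default=0)

def structural_complexity(code: str) -> dict:
    """Parse a code string and report simple structural statistics."""
    tree = ast.parse(code)
    node_count = sum(1 for _ in ast.walk(tree))
    return {"node_count": node_count, "depth": ast_depth(tree)}

if __name__ == "__main__":
    snippet = "def f(xs):\n    return sum(x * x for x in xs if x > 0)\n"
    print(structural_complexity(snippet))
```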
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.