NeSyCoCo: A Neuro-Symbolic Concept Composer for Compositional Generalization
- URL: http://arxiv.org/abs/2412.15588v1
- Date: Fri, 20 Dec 2024 05:48:58 GMT
- Title: NeSyCoCo: A Neuro-Symbolic Concept Composer for Compositional Generalization
- Authors: Danial Kamali, Elham J. Barezi, Parisa Kordjamshidi
- Abstract summary: NeSyCoCo is a neuro-symbolic framework that generates symbolic representations and maps them to differentiable neural computations.
Our framework achieves state-of-the-art results on the ReaSCAN and CLEVR-CoGenT compositional generalization benchmarks.
- Abstract: Compositional generalization is crucial for artificial intelligence agents to solve complex vision-language reasoning tasks. Neuro-symbolic approaches have demonstrated promise in capturing compositional structures, but they face critical challenges: (a) reliance on predefined predicates for symbolic representations that limit adaptability, (b) difficulty in extracting predicates from raw data, and (c) using non-differentiable operations for combining primitive concepts. To address these issues, we propose NeSyCoCo, a neuro-symbolic framework that leverages large language models (LLMs) to generate symbolic representations and map them to differentiable neural computations. NeSyCoCo introduces three innovations: (a) augmenting natural language inputs with dependency structures to enhance the alignment with symbolic representations, (b) employing distributed word representations to link diverse, linguistically motivated logical predicates to neural modules, and (c) using the soft composition of normalized predicate scores to align symbolic and differentiable reasoning. Our framework achieves state-of-the-art results on the ReaSCAN and CLEVR-CoGenT compositional generalization benchmarks and demonstrates robust performance with novel concepts in the CLEVR-SYN benchmark.
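To make the third innovation concrete, here is a minimal sketch (in PyTorch; the module names, feature sizes, and operator choices are illustrative assumptions, not the authors' code) of how normalized predicate scores can be composed differentiably: each predicate is a neural module emitting a score in [0, 1] per object, and logical operators become smooth functions of those scores.

```python
# Minimal sketch (not the authors' code) of the idea behind NeSyCoCo's
# "soft composition of normalized predicate scores": each predicate is a
# neural module that returns a score in [0, 1] per object, and logical
# composition is done with differentiable operations on those scores.
import torch

def soft_and(p, q):
    # Product t-norm: differentiable conjunction of normalized scores.
    return p * q

def soft_exists(scores):
    # Smooth existential quantifier over per-object scores.
    return torch.max(scores, dim=-1).values

# Hypothetical predicate modules: map object features to [0, 1] scores.
red = torch.nn.Sequential(torch.nn.Linear(16, 1), torch.nn.Sigmoid())
cube = torch.nn.Sequential(torch.nn.Linear(16, 1), torch.nn.Sigmoid())

objects = torch.randn(5, 16)              # 5 objects, 16-d features
red_scores = red(objects).squeeze(-1)     # score per object for "red"
cube_scores = cube(objects).squeeze(-1)   # score per object for "cube"

# "There is a red cube": compose per-object scores, then quantify.
truth = soft_exists(soft_and(red_scores, cube_scores))
truth.backward()                          # fully differentiable
```

Because every operation here is differentiable, gradients can flow from the final truth value back into the predicate networks, which is what aligns the symbolic program with neural learning.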
Related papers
- Neurosymbolic artificial intelligence via large language models and coherence-driven inference [3.522062800701924]
We generate sets of propositions that objectively instantiate graphs that support coherence-driven inference.
We benchmark the ability of large language models to reconstruct coherence graphs from propositions expressed in natural language.
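As a rough illustration of what coherence-driven inference computes (a toy reading under my own assumptions, not the paper's method), propositions can be cast as nodes of a signed graph, and inference as searching for the truth assignment that maximizes satisfied coherence constraints:

```python
# Toy sketch of coherence-driven inference: propositions are graph nodes,
# signed edges mark coherence (+1) or incoherence (-1), and inference
# picks the truth assignment maximizing total satisfied constraints.
from itertools import product

props = ["rain", "wet_ground", "sprinkler_on"]
# (i, j, sign): +1 = the two propositions cohere, -1 = they conflict.
edges = [(0, 1, +1), (2, 1, +1), (0, 2, -1)]

def coherence(assign):
    # An edge is satisfied when its sign matches the agreement of its ends.
    return sum(s * (1 if assign[i] == assign[j] else -1) for i, j, s in edges)

best = max(product([True, False], repeat=len(props)), key=coherence)
print(dict(zip(props, best)), "score:", coherence(best))
```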
arXiv Detail & Related papers (2025-02-19T18:53:16Z)
- Systematic Abductive Reasoning via Diverse Relation Representations in Vector-symbolic Architecture [10.27696004820717]
We propose a Systematic Abductive Reasoning model with diverse relation representations (Rel-SAR) in Vector-symbolic Architecture (VSA).
To derive representations with symbolic reasoning potential, we introduce not only various types of atomic vectors that represent numeric, periodic, and logical semantics, but also the structured high-dimensional representation (S).
For systematic reasoning, we propose novel numerical and logical functions and perform rule abduction and generalization execution in a unified framework that integrates these relation representations.
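For readers unfamiliar with VSA, the sketch below shows the common bipolar binding/bundling primitives such architectures build on; this is the generic MAP-style flavor under my assumptions, and the specific atomic-vector types and functions Rel-SAR introduces go beyond this minimal setup.

```python
# Minimal vector-symbolic architecture (VSA) sketch: symbols are random
# hypervectors, binding is element-wise multiplication, and bundling is
# a sign-thresholded sum.
import numpy as np

rng = np.random.default_rng(0)
D = 10_000
sym = lambda: rng.choice([-1, 1], size=D)  # random atomic vector

color, shape = sym(), sym()                # role vectors
red, circle = sym(), sym()                 # filler vectors

record = np.sign(color * red + shape * circle)  # bind roles, bundle pairs

# Unbinding: multiplying by a role vector recovers a noisy filler,
# identified by cosine similarity against the known atoms.
probe = record * color
cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print("red:", cos(probe, red), "circle:", cos(probe, circle))
```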
arXiv Detail & Related papers (2025-01-21T05:17:08Z)
- VisualPredicator: Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning [86.59849798539312]
We present Neuro-Symbolic Predicates, a first-order abstraction language that combines the strengths of symbolic and neural knowledge representations.
We show that our approach offers better sample complexity, stronger out-of-distribution generalization, and improved interpretability.
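A hedged sketch of the core abstraction (the interfaces and the classifier stub are my assumptions, not the paper's API): a symbolic predicate whose truth is decided by a learned classifier over raw observations, so a planner can evaluate preconditions directly on perception.

```python
# Sketch: a symbolic predicate name whose truth value is computed by a
# learned (here, stub) classifier over raw observations, letting a
# planner test preconditions like Holding(x) on perception.
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class NeuralPredicate:
    name: str
    classifier: Callable[[np.ndarray], float]  # observation -> probability
    threshold: float = 0.5

    def holds(self, obs: np.ndarray) -> bool:
        return self.classifier(obs) > self.threshold

# Hypothetical classifier stub standing in for a trained network.
holding = NeuralPredicate("Holding", lambda img: float(img.mean() > 0.4))

obs = np.random.rand(64, 64)
if holding.holds(obs):            # symbolic precondition, neural grounding
    print("precondition Holding(x) satisfied")
```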
arXiv Detail & Related papers (2024-10-30T16:11:05Z)
- Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences that reflect the semanticity and compositionality characteristic of symbolic systems, acquired through unsupervised learning rather than pre-defined primitives.
This approach establishes a unified framework that integrates both symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
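A classical Hopfield network gives the simplest picture of the attractor mechanism invoked here (a toy stand-in, not the paper's model): stored patterns act as discrete basins that pull corrupted continuous states back to a symbol.

```python
# Toy Hopfield-style sketch of attractor dynamics: stored patterns
# become fixed points, and the update rule pulls a corrupted state
# into a discrete basin, letting attractors play the role of symbols.
import numpy as np

rng = np.random.default_rng(1)
patterns = rng.choice([-1, 1], size=(3, 64))         # 3 "symbols"
W = patterns.T @ patterns / 64                       # Hebbian weights
np.fill_diagonal(W, 0)

state = patterns[0].copy()
state[:10] *= -1                                     # corrupt the symbol
for _ in range(10):                                  # attractor dynamics
    state = np.where(W @ state >= 0, 1, -1)

print("recovered pattern 0:", np.array_equal(state, patterns[0]))
```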
arXiv Detail & Related papers (2023-10-03T05:40:56Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
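The continuous relaxation can be pictured as follows (an illustrative sketch, not LOGICSEG's code; the class names, tensor shapes, and the particular relaxation of implication are assumptions): a hierarchy rule such as cat → animal is relaxed into a differentiable penalty on per-pixel class scores, so the logic directly produces a training signal.

```python
# Sketch: grounding a logical formula onto network outputs with fuzzy
# logic, so that the rule cat -> animal becomes a differentiable loss
# over per-pixel class probabilities.
import torch

probs = torch.rand(2, 3, 8, 8, requires_grad=True)  # (batch, class, H, W)
CAT, DOG, ANIMAL = 0, 1, 2

# Fuzzy implication a -> b relaxed as max(0, a - b): penalized whenever
# the "cat" score exceeds the "animal" score at a pixel.
logic_loss = torch.relu(probs[:, CAT] - probs[:, ANIMAL]).mean()
logic_loss.backward()                     # logic-induced training signal
```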
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- Neuro-Symbolic Causal Reasoning Meets Signaling Game for Emergent Semantic Communications [71.63189900803623]
A novel emergent semantic communication (ESC) system framework is proposed, composed of a signaling game for emergent language design and a neuro-symbolic (NeSy) artificial intelligence (AI) approach for causal reasoning.
The ESC system is designed to enhance the novel metrics of semantic information, reliability, distortion, and similarity.
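For intuition, a Lewis signaling game (the generic textbook version with simple urn-style reinforcement, not the ESC system itself) shows how a shared "emergent language" can self-organize from success feedback alone:

```python
# Minimal Lewis signaling game: a speaker maps states to messages and a
# listener maps messages to actions; reinforcing successful rounds lets
# a state-message convention emerge.
import random

N = 3                                     # states = messages = actions
speaker = [[1.0] * N for _ in range(N)]   # urn weights: state -> message
listener = [[1.0] * N for _ in range(N)]  # urn weights: message -> action
draw = lambda w: random.choices(range(N), weights=w)[0]

for _ in range(5000):
    state = random.randrange(N)
    msg = draw(speaker[state])
    act = draw(listener[msg])
    if act == state:                      # communication succeeded
        speaker[state][msg] += 1.0        # reinforce both mappings
        listener[msg][act] += 1.0

# Emerged convention: most-reinforced message per state.
print([max(range(N), key=lambda m: speaker[s][m]) for s in range(N)])
```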
arXiv Detail & Related papers (2022-10-21T15:33:37Z)
- Neural-Symbolic Recursive Machine for Systematic Generalization [113.22455566135757]
We introduce the Neural-Symbolic Recursive Machine (NSR), whose core is a Grounded Symbol System (GSS).
NSR integrates neural perception, syntactic parsing, and semantic reasoning.
We evaluate NSR's efficacy across four challenging benchmarks designed to probe systematic generalization capabilities.
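Schematically, the three stages compose as below; the toy interfaces and the arithmetic example are assumptions for illustration only, not NSR's actual components.

```python
# Schematic perceive -> parse -> execute pipeline: neural perception
# grounds tokens, a parser builds a tree, and a semantic executor
# evaluates it recursively.
def perceive(raw):                 # stand-in for a neural grounder
    return raw.split()             # e.g. "3 + 4" -> ["3", "+", "4"]

def parse(tokens):                 # stand-in for syntactic parsing
    return (tokens[1], tokens[0], tokens[2])     # (op, left, right)

def execute(tree):                 # recursive semantic reasoning
    op, l, r = tree
    return {"+": int(l) + int(r), "*": int(l) * int(r)}[op]

print(execute(parse(perceive("3 + 4"))))         # -> 7
```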
arXiv Detail & Related papers (2022-10-04T13:27:38Z)
- Modeling Content and Context with Deep Relational Learning [31.854529627213275]
We present DRaiL, an open-source declarative framework for specifying deep relational models.
Our framework supports easy integration with expressive language encoders, and provides an interface to study the interactions between representation, inference and learning.
arXiv Detail & Related papers (2020-10-20T17:09:35Z)
- Compositional Generalization by Learning Analytical Expressions [87.15737632096378]
A memory-augmented neural model is connected with analytical expressions to achieve compositional generalization.
Experiments on the well-known SCAN benchmark demonstrate that our model exhibits a strong ability for compositional generalization.
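The flavor of analytical expressions on SCAN can be seen in a hand-written rule interpreter; note that the rules below are illustrative assumptions, whereas the paper learns such expressions with a memory-augmented model.

```python
# Tiny rule-based interpreter for SCAN-style commands: "twice" repeats
# a subprogram and "and" concatenates two subprograms.
def interpret(cmd: str) -> list[str]:
    words = cmd.split()
    if words[-1] == "twice":
        return interpret(" ".join(words[:-1])) * 2
    if "and" in words:
        i = words.index("and")
        return interpret(" ".join(words[:i])) + interpret(" ".join(words[i + 1:]))
    return {"jump": ["JUMP"], "walk": ["WALK"], "run": ["RUN"]}[cmd]

print(interpret("jump twice and walk"))  # ['JUMP', 'JUMP', 'WALK']
```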
arXiv Detail & Related papers (2020-06-18T15:50:57Z)