Neuro-Symbolic RDF and Description Logic Reasoners: The State-Of-The-Art and Challenges
- URL: http://arxiv.org/abs/2308.04814v1
- Date: Wed, 9 Aug 2023 09:12:35 GMT
- Title: Neuro-Symbolic RDF and Description Logic Reasoners: The State-Of-The-Art and Challenges
- Authors: Gunjan Singh, Sumit Bhatia, Raghava Mutharaju
- Abstract summary: Ontologies are used in various domains, with RDF and OWL being prominent standards for ontology development.
As ontologies grow larger and more expressive, reasoning complexity increases, and traditional reasoners struggle to perform efficiently.
Researchers have explored neuro-symbolic approaches that combine neural networks' learning capabilities with symbolic systems' reasoning abilities.
- Score: 6.295207672539996
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ontologies are used in various domains, with RDF and OWL being prominent
standards for ontology development. RDF is favored for its simplicity and
flexibility, while OWL enables detailed domain knowledge representation.
However, as ontologies grow larger and more expressive, reasoning complexity
increases, and traditional reasoners struggle to perform efficiently. Despite
optimization efforts, scalability remains an issue. Additionally, advancements
in automated knowledge base construction have created large and expressive
ontologies that are often noisy and inconsistent, posing further challenges for
conventional reasoners. To address these challenges, researchers have explored
neuro-symbolic approaches that combine neural networks' learning capabilities
with symbolic systems' reasoning abilities. In this chapter, we provide an
overview of the existing literature in the field of neuro-symbolic deductive
reasoning supported by RDF(S), the description logics EL and ALC, and OWL 2 RL,
discussing the techniques employed, the tasks they address, and other relevant
efforts in this area.
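The deductive reasoning the abstract refers to can be illustrated with a minimal sketch of forward chaining over two RDFS entailment rules, rdfs11 (subClassOf transitivity) and rdfs9 (type propagation). This is an illustrative toy, not code from any of the surveyed systems; the class and instance names are invented.

```python
# Minimal forward-chaining sketch of two RDFS entailment rules:
# rdfs11: (C subClassOf D), (D subClassOf E) => (C subClassOf E)
# rdfs9:  (x type C), (C subClassOf D)       => (x type D)
# Triples are plain (subject, predicate, object) tuples; all names
# (ex:Dog, ex:rex, ...) are hypothetical examples.

SUBCLASS = "rdfs:subClassOf"
TYPE = "rdf:type"

def rdfs_closure(triples):
    """Apply rdfs9/rdfs11 until a fixpoint is reached."""
    closure = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for (s, p, o) in closure:
            if p == SUBCLASS:
                for (s2, p2, o2) in closure:
                    # rdfs11: chain subClassOf edges
                    if p2 == SUBCLASS and s2 == o:
                        new.add((s, SUBCLASS, o2))
                    # rdfs9: propagate types up the class hierarchy
                    if p2 == TYPE and o2 == s:
                        new.add((s2, TYPE, o))
        if new - closure:
            closure |= new
            changed = True
    return closure

graph = {
    ("ex:Dog", SUBCLASS, "ex:Mammal"),
    ("ex:Mammal", SUBCLASS, "ex:Animal"),
    ("ex:rex", TYPE, "ex:Dog"),
}
inferred = rdfs_closure(graph)
```

Materializing the closure this way is polynomial for RDFS and OWL 2 RL but blows up for expressive logics such as ALC, which is the scalability pressure that motivates the neuro-symbolic approximations surveyed here.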
Related papers
- ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning [92.76959707441954]
We introduce ZebraLogic, a comprehensive evaluation framework for assessing LLM reasoning performance.
ZebraLogic enables the generation of puzzles with controllable and quantifiable complexity.
Our results reveal a significant decline in accuracy as problem complexity grows.
arXiv Detail & Related papers (2025-02-03T06:44:49Z)
- VERUS-LM: a Versatile Framework for Combining LLMs with Symbolic Reasoning [8.867818326729367]
We introduce VERUS-LM, a novel framework for neurosymbolic reasoning.
VERUS-LM employs a generic prompting mechanism and clearly separates domain knowledge from queries.
We show that our approach succeeds in diverse reasoning on a novel dataset, markedly outperforming LLMs.
arXiv Detail & Related papers (2025-01-24T14:45:21Z)
- TransBox: EL++-closed Ontology Embedding [14.850996103983187]
We develop an effective EL++-closed embedding method that can handle many-to-one, one-to-many and many-to-many relations.
Our experiments demonstrate that TransBox achieves state-of-the-art performance across various real-world datasets for predicting complex axioms.
arXiv Detail & Related papers (2024-10-18T16:17:10Z) - Enhancing Logical Reasoning in Large Language Models through Graph-based Synthetic Data [53.433309883370974]
This work explores the potential and limitations of using graph-based synthetic reasoning data as training signals to enhance Large Language Models' reasoning capabilities.
Our experiments, conducted on two established natural language reasoning tasks, demonstrate that supervised fine-tuning with synthetic graph-based reasoning data effectively enhances LLMs' reasoning performance without compromising their effectiveness on other standard evaluation benchmarks.
arXiv Detail & Related papers (2024-09-19T03:39:09Z) - Explaining Deep Neural Networks by Leveraging Intrinsic Methods [0.9790236766474201]
This thesis contributes to the field of eXplainable AI, focusing on enhancing the interpretability of deep neural networks.
The core contributions lie in introducing novel techniques aimed at making these networks more interpretable by leveraging an analysis of their inner workings.
This research also delves into novel investigations of neurons within trained deep neural networks, shedding light on overlooked phenomena related to their activation values.
arXiv Detail & Related papers (2024-07-17T01:20:17Z) - Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
The article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z) - IID Relaxation by Logical Expressivity: A Research Agenda for Fitting Logics to Neurosymbolic Requirements [50.57072342894621]
We discuss the benefits of exploiting known data dependencies and distribution constraints for Neurosymbolic use cases.
This opens a new research agenda with general questions about Neurosymbolic background knowledge and the expressivity required of its logic.
arXiv Detail & Related papers (2024-04-30T12:09:53Z) - Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models [56.34029644009297]
Large language models (LLMs) have demonstrated the ability to overcome various limitations of formal Knowledge Representation (KR) systems.
LLMs excel most in abductive reasoning, followed by deductive reasoning, while they are least effective at inductive reasoning.
We study single-task training, multi-task training, and "chain-of-thought" knowledge distillation fine-tuning techniques to assess the performance of the models.
arXiv Detail & Related papers (2023-10-02T01:00:50Z) - A Novel Neural-symbolic System under Statistical Relational Learning [50.747658038910565]
We propose a general bi-level probabilistic graphical reasoning framework called GBPGR.
In GBPGR, the results of symbolic reasoning are utilized to refine and correct the predictions made by the deep learning models.
Our approach achieves high performance and exhibits effective generalization in both transductive and inductive tasks.
arXiv Detail & Related papers (2023-09-16T09:15:37Z) - Beyond Traditional Neural Networks: Toward adding Reasoning and Learning
Capabilities through Computational Logic Techniques [0.0]
This work proposes solutions to improve the knowledge injection process and integrate elements of ML and logic into multi-agent systems.
Neuro-Symbolic AI has emerged as a promising approach combining the strengths of neural networks and symbolic reasoning.
arXiv Detail & Related papers (2023-08-30T09:09:42Z) - Knowledge-based Analogical Reasoning in Neuro-symbolic Latent Spaces [20.260546238369205]
We propose a framework that combines the pattern recognition abilities of neural networks with symbolic reasoning and background knowledge.
We take inspiration from the 'neural algorithmic reasoning' approach [DeepMind 2020] and use problem-specific background knowledge.
We test this on visual analogy problems in the RAVEN Progressive Matrices dataset, achieving accuracy competitive with human performance.
arXiv Detail & Related papers (2022-09-19T04:03:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.