Knowledge Conceptualization Impacts RAG Efficacy
- URL: http://arxiv.org/abs/2507.09389v1
- Date: Sat, 12 Jul 2025 20:10:26 GMT
- Title: Knowledge Conceptualization Impacts RAG Efficacy
- Authors: Chris Davis Jaldi, Anmol Saini, Elham Ghiasi, O. Divine Eziolise, Cogan Shimizu
- Abstract summary: We investigate the design of transferable and interpretable neurosymbolic AI systems. Specifically, we focus on a class of systems referred to as "Agentic Retrieval-Augmented Generation" systems.
- Score: 0.0786430477112975
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainability and interpretability are cornerstones of frontier and next-generation artificial intelligence (AI) systems. This is especially true in recent systems, such as large language models (LLMs), and more broadly, generative AI. On the other hand, adaptability to new domains, contexts, or scenarios is also an important aspect of a successful system. As such, we are particularly interested in how we can merge these two efforts; that is, we investigate the design of transferable and interpretable neurosymbolic AI systems. Specifically, we focus on a class of systems referred to as "Agentic Retrieval-Augmented Generation" systems, which actively select, interpret, and query knowledge sources in response to natural language prompts. In this paper, we systematically evaluate how different conceptualizations and representations of knowledge, particularly their structure and complexity, impact an AI agent (in this case, an LLM) in effectively querying a triplestore. We report our results, which show that both factors have an impact, and discuss their implications.
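Since the abstract's key variable is how the same knowledge can be conceptualized with different structure and complexity, a small illustration may help. The sketch below is not from the paper; it uses rdflib with invented example.org names to encode one fact in two ways, as a flat triple and as a reified statement, and shows how the SPARQL query an LLM agent must generate grows with the representation's complexity:

```python
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/")

# Conceptualization A: the fact as one flat triple.
flat = Graph()
flat.add((EX.Toronto, EX.population, Literal(2794356)))

# Conceptualization B: the same fact reified as a statement node.
# This adds structure (room for provenance, qualifiers, etc.) at the
# cost of a more complex graph pattern at query time.
reified = Graph()
reified.add((EX.stmt1, EX.subject, EX.Toronto))
reified.add((EX.stmt1, EX.predicate, EX.population))
reified.add((EX.stmt1, EX.object, Literal(2794356)))

# The SPARQL an agent must generate differs accordingly: a single
# triple pattern for A, a three-way join for B.
q_flat = """
SELECT ?pop WHERE {
  <http://example.org/Toronto> <http://example.org/population> ?pop .
}
"""
q_reified = """
SELECT ?pop WHERE {
  ?stmt <http://example.org/subject>   <http://example.org/Toronto> ;
        <http://example.org/predicate> <http://example.org/population> ;
        <http://example.org/object>    ?pop .
}
"""

for graph, query in [(flat, q_flat), (reified, q_reified)]:
    # Both representations yield the same answer: [2794356]
    print([row.pop.toPython() for row in graph.query(query)])
```

Both queries return the same answer, but the reified form demands that the agent understand an extra modeling layer; differences of exactly this kind are what the paper evaluates.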
Related papers
- Explainability in Context: A Multilevel Framework Aligning AI Explanations with Stakeholders with LLMs [11.11196150521188]
This paper addresses how trust in AI is influenced by the design and delivery of explanations. The framework consists of three layers: algorithmic and domain-based, human-centered, and social explainability.
arXiv Detail & Related papers (2025-06-06T08:54:41Z)
- AI Automatons: AI Systems Intended to Imitate Humans [54.19152688545896]
There is a growing proliferation of AI systems designed to mimic people's behavior, work, abilities, likenesses, or humanness. The research, design, deployment, and availability of such AI systems have prompted growing concerns about a wide range of possible legal, ethical, and other social impacts.
arXiv Detail & Related papers (2025-03-04T03:55:38Z)
- Cutting Through the Confusion and Hype: Understanding the True Potential of Generative AI [0.0]
This paper explores the nuanced landscape of generative AI (genAI), focusing on neural network-based models such as Large Language Models (LLMs).
arXiv Detail & Related papers (2024-10-22T02:18:44Z)
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Imaginations of WALL-E: Reconstructing Experiences with an Imagination-Inspired Module for Advanced AI Systems [2.452498006404167]
Our system is equipped with an imagination-inspired module that bridges the gap between textual inputs and other modalities.
This leads to unique interpretations of a concept that may differ from human interpretations but are equally valid.
This work represents a significant advancement in the development of imagination-inspired AI systems.
arXiv Detail & Related papers (2023-08-20T20:10:55Z)
- Designing explainable artificial intelligence with active inference: A framework for transparent introspection and decision-making [0.0]
We discuss how active inference can be leveraged to design explainable AI systems.
We propose an architecture for explainable AI systems using active inference.
arXiv Detail & Related papers (2023-06-06T21:38:09Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward more pragmatic approaches to explanation that better support user understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- LioNets: A Neural-Specific Local Interpretation Technique Exploiting Penultimate Layer Information [6.570220157893279]
Interpretable machine learning (IML) is an urgent topic of research.
This paper focuses on a local-based, neural-specific interpretation process applied to textual and time-series data.
arXiv Detail & Related papers (2021-04-13T09:39:33Z)
- Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.