Bridging LLMs and Symbolic Reasoning in Educational QA Systems: Insights from the XAI Challenge at IJCNN 2025
- URL: http://arxiv.org/abs/2508.01263v1
- Date: Sat, 02 Aug 2025 08:46:06 GMT
- Title: Bridging LLMs and Symbolic Reasoning in Educational QA Systems: Insights from the XAI Challenge at IJCNN 2025
- Authors: Long S. T. Nguyen, Khang H. N. Vo, Thu H. A. Nguyen, Tuan C. Bui, Duc Q. Nguyen, Thanh-Tung Tran, Anh D. Nguyen, Minh L. Nguyen, Fabien Baldacci, Thang H. Bui, Emanuel Di Nardo, Angelo Ciaramella, Son H. Le, Ihsan Ullah, Lorenzo Di Rocco, Tho T. Quan
- Abstract summary: This paper presents a comprehensive analysis of the XAI Challenge 2025, a hackathon-style competition jointly organized by Ho Chi Minh City University of Technology (HCMUT) and the International Workshop on Trustworthiness and Reliability in Neurosymbolic AI (TRNS-AI). The challenge tasked participants with building Question-Answering (QA) systems capable of answering student queries about university policies while generating clear, logic-based natural language explanations.
- Score: 1.0166217771455643
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The growing integration of Artificial Intelligence (AI) into education has intensified the need for transparency and interpretability. While hackathons have long served as agile environments for rapid AI prototyping, few have directly addressed eXplainable AI (XAI) in real-world educational contexts. This paper presents a comprehensive analysis of the XAI Challenge 2025, a hackathon-style competition jointly organized by Ho Chi Minh City University of Technology (HCMUT) and the International Workshop on Trustworthiness and Reliability in Neurosymbolic AI (TRNS-AI), held as part of the International Joint Conference on Neural Networks (IJCNN 2025). The challenge tasked participants with building Question-Answering (QA) systems capable of answering student queries about university policies while generating clear, logic-based natural language explanations. To promote transparency and trustworthiness, solutions were required to use lightweight Large Language Models (LLMs) or hybrid LLM-symbolic systems. A high-quality dataset was provided, constructed via logic-based templates with Z3 validation and refined through expert student review to ensure alignment with real-world academic scenarios. We describe the challenge's motivation, structure, dataset construction, and evaluation protocol. Situating the competition within the broader evolution of AI hackathons, we argue that it represents a novel effort to bridge LLMs and symbolic reasoning in service of explainability. Our findings offer actionable insights for future XAI-centered educational systems and competitive research initiatives.
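The abstract's mention of "logic-based templates with Z3 validation" can be made concrete with a minimal sketch. The Python snippet below (using the z3-solver package) is an illustrative assumption, not the organizers' actual pipeline: the predicate names, the policy rule, and the entailment check are hypothetical stand-ins for whatever schema the challenge dataset uses.

```python
# Minimal sketch of Z3-validating a templated policy QA item (hypothetical schema).
from z3 import And, Bool, Implies, Not, Solver, unsat

# Hypothetical propositional encoding of a university-policy rule:
# "A student may register for the thesis course if they have completed
#  the prerequisite credits and have no outstanding tuition balance."
completed_credits = Bool("completed_credits")
tuition_cleared = Bool("tuition_cleared")
may_register = Bool("may_register")

rule = Implies(And(completed_credits, tuition_cleared), may_register)

# Scenario instantiated from a question template: both preconditions hold.
scenario = And(completed_credits, tuition_cleared)

# The templated answer is "yes, the student may register". It is logically
# valid iff rule AND scenario AND NOT(answer) is unsatisfiable, i.e. the
# answer is entailed by the rule rather than merely consistent with it.
solver = Solver()
solver.add(rule, scenario, Not(may_register))
assert solver.check() == unsat, "template answer is not entailed by the rule"
print("Answer entailed; QA item passes validation.")
```

If the check came back satisfiable, the candidate answer would not follow from the encoded policy and the item would be rejected or revised, which is the kind of automatic filtering the abstract attributes to Z3 validation before expert student review.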
Related papers
- Mind the XAI Gap: A Human-Centered LLM Framework for Democratizing Explainable AI [3.301842921686179]
We introduce a framework that ensures transparency and human-centered explanations tailored to the needs of experts and non-experts. Our framework encapsulates in one response explanations understandable by non-experts and technical information for experts.
arXiv Detail & Related papers (2025-06-13T21:41:07Z)
- Making Sense of the Unsensible: Reflection, Survey, and Challenges for XAI in Large Language Models Toward Human-Centered AI [11.454716478837014]
Explainable AI (XAI) acts as a crucial interface between the opaque reasoning of large language models (LLMs) and diverse stakeholders. This paper presents a comprehensive reflection and survey of XAI for LLMs, framed around three guiding questions: Why is explainability essential? What technical and ethical dimensions does it entail? We argue that explainability must evolve into a civic infrastructure fostering trust, enabling contestability, and aligning AI systems with institutional accountability and human-centered decision-making.
arXiv Detail & Related papers (2025-05-18T17:30:10Z)
- Enterprise Architecture as a Dynamic Capability for Scalable and Sustainable Generative AI adoption: Bridging Innovation and Governance in Large Organisations [55.2480439325792]
Generative Artificial Intelligence is a powerful new technology with the potential to boost innovation and reshape governance in many industries. However, organisations face major challenges in scaling GenAI, including technology complexity, governance gaps and resource misalignments. This study explores how Enterprise Architecture Management can meet the complex requirements of GenAI adoption within large enterprises.
arXiv Detail & Related papers (2025-05-09T07:41:33Z)
- A Multi-Layered Research Framework for Human-Centered AI: Defining the Path to Explainability and Trust [2.4578723416255754]
Human-Centered AI (HCAI) emphasizes alignment with human values, while Explainable AI (XAI) enhances transparency by making AI decisions more understandable. This paper presents a novel three-layered framework that bridges HCAI and XAI to establish a structured explainability paradigm. Our findings advance Human-Centered Explainable AI (HCXAI), fostering AI systems that are transparent, adaptable, and ethically aligned.
arXiv Detail & Related papers (2025-04-14T01:29:30Z)
- Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
The article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z)
- OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI [73.75520820608232]
We introduce OlympicArena, which includes 11,163 bilingual problems across both text-only and interleaved text-image modalities. These challenges encompass a wide range of disciplines spanning seven fields and 62 international Olympic competitions, rigorously examined for data leakage. Our evaluations reveal that even advanced models like GPT-4o achieve only 39.97% overall accuracy, illustrating current AI limitations in complex reasoning and multimodal integration.
arXiv Detail & Related papers (2024-06-18T16:20:53Z)
- Applications of Explainable artificial intelligence in Earth system science [12.454478986296152]
This review aims to provide a foundational understanding of explainable AI (XAI).
XAI offers a set of powerful tools that make models more transparent.
We identify four significant challenges that XAI faces within Earth system science (ESS).
A visionary outlook for ESS envisions a harmonious blend where process-based models govern the known, AI models explore the unknown, and XAI bridges the gap by providing explanations.
arXiv Detail & Related papers (2024-06-12T15:05:29Z)
- Socialized Learning: A Survey of the Paradigm Shift for Edge Intelligence in Networked Systems [62.252355444948904]
This paper presents the findings of a literature review on the integration of edge intelligence (EI) and socialized learning (SL).
SL is a learning paradigm predicated on social principles and behaviors, aimed at amplifying the collaborative capacity and collective intelligence of agents.
We elaborate on three integrated components: socialized architecture, socialized training, and socialized inference, analyzing their strengths and weaknesses.
arXiv Detail & Related papers (2024-04-20T11:07:29Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- A multi-component framework for the analysis and design of explainable artificial intelligence [0.0]
The rapid growth of research in explainable artificial intelligence (XAI) follows from two substantial developments. One of these is the emergence of concern for creating trusted AI systems, including the creation of regulatory principles to ensure transparency and trust in AI systems.
Here we intend to provide a strategic inventory of XAI requirements, demonstrate their connection to a history of XAI ideas, and synthesize those ideas into a simple framework to calibrate five successive levels of XAI.
arXiv Detail & Related papers (2020-05-05T01:48:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.