Enhanced Question-Answering for Skill-based learning using Knowledge-based AI and Generative AI
- URL: http://arxiv.org/abs/2504.07463v1
- Date: Thu, 10 Apr 2025 05:25:52 GMT
- Title: Enhanced Question-Answering for Skill-based learning using Knowledge-based AI and Generative AI
- Authors: Rahul K. Dass, Rochan H. Madhusudhana, Erin C. Deye, Shashank Verma, Timothy A. Bydlon, Grace Brazil, Ashok K. Goel
- Abstract summary: We introduce Ivy, an intelligent agent that generates explanations that embody teleological, causal, and compositional principles. This can potentially ensure learners develop a comprehensive understanding of skills crucial for effective problem-solving in online environments.
- Score: 2.6699230340282796
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Supporting learners' understanding of taught skills in online settings is a longstanding challenge. While exercises and chat-based agents can evaluate understanding in limited contexts, this challenge is magnified when learners seek explanations that delve into procedural knowledge (how things are done) and reasoning (why things happen). We hypothesize that an intelligent agent's ability to understand and explain learners' questions about skills can be significantly enhanced using the TMK (Task-Method-Knowledge) model, a Knowledge-based AI framework. We introduce Ivy, an intelligent agent that leverages an LLM and iterative refinement techniques to generate explanations that embody teleological, causal, and compositional principles. Our initial evaluation demonstrates that this approach goes beyond the typical shallow responses produced by an agent with access to unstructured text, thereby substantially improving the depth and relevance of feedback. This can potentially ensure learners develop a comprehensive understanding of skills crucial for effective problem-solving in online environments.
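Based only on the abstract, the sketch below illustrates one way a TMK-grounded agent with iterative refinement could be organized in Python. The `Task`/`Method` data classes, the `explain_skill` function, the `llm` and `critique` callables, and the prompt wording are illustrative assumptions, not the authors' implementation of Ivy.

```python
from dataclasses import dataclass, field
from typing import Callable, List


# Hypothetical, simplified TMK-style structures; the paper's actual TMK
# encoding of skills is richer than this sketch.
@dataclass
class Method:
    name: str                        # how a task is accomplished (causal/compositional view)
    steps: List[str] = field(default_factory=list)


@dataclass
class Task:
    name: str                        # the skill being taught
    purpose: str                     # why it is done (teleological view)
    methods: List[Method] = field(default_factory=list)


def explain_skill(question: str,
                  task: Task,
                  llm: Callable[[str], str],
                  critique: Callable[[str, str], str],
                  max_rounds: int = 3) -> str:
    """Draft an explanation grounded in the TMK encoding, then refine it
    iteratively based on critic feedback (illustrative loop only)."""
    context = f"Task: {task.name}\nPurpose: {task.purpose}\n" + "\n".join(
        f"Method: {m.name}; steps: {', '.join(m.steps)}" for m in task.methods
    )
    answer = llm(f"Answer using only this skill model.\n{context}\n\nQuestion: {question}")
    for _ in range(max_rounds):
        feedback = critique(question, answer)
        if feedback == "ok":         # critic is satisfied; stop refining
            break
        answer = llm(f"Revise the draft.\nFeedback: {feedback}\n{context}\n"
                     f"Question: {question}\nDraft: {answer}")
    return answer
```

In this reading, the TMK encoding supplies the teleological (purpose), compositional (methods and steps), and causal (step ordering) structure that the abstract says the explanations should embody, and the critic loop stands in for the iterative refinement the abstract mentions.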
Related papers
- Knowledge Tagging with Large Language Model based Multi-Agent System [17.53518487546791]
This paper investigates the use of a multi-agent system to address the limitations of previous algorithms. We highlight the significant potential of an LLM-based multi-agent system in overcoming the challenges that previous methods have encountered.
arXiv Detail & Related papers (2024-09-12T21:39:01Z)
- Integrating Cognitive AI with Generative Models for Enhanced Question Answering in Skill-based Learning [3.187381965457262]
This paper proposes a novel approach that merges Cognitive AI and Generative AI to address these challenges.
We employ a structured knowledge representation, the TMK (Task-Method-Knowledge) model, to encode skills taught in an online Knowledge-based AI course.
arXiv Detail & Related papers (2024-07-28T04:21:22Z)
- Knowledge Tagging System on Math Questions via LLMs with Flexible Demonstration Retriever [48.5585921817745]
Large Language Models (LLMs) are used to automate the knowledge tagging task.
We show strong zero- and few-shot performance on knowledge tagging tasks for math questions.
By proposing a reinforcement learning-based demonstration retriever, we successfully exploit the great potential of different-sized LLMs.
arXiv Detail & Related papers (2024-06-19T23:30:01Z)
- Large Language Models are Limited in Out-of-Context Knowledge Reasoning [65.72847298578071]
Large Language Models (LLMs) possess extensive knowledge and strong capabilities in performing in-context reasoning.
This paper focuses on a significant aspect of out-of-context reasoning: Out-of-Context Knowledge Reasoning (OCKR), which combines multiple pieces of knowledge to infer new knowledge.
arXiv Detail & Related papers (2024-06-11T15:58:59Z)
- Toward enriched Cognitive Learning with XAI [44.99833362998488]
We introduce an intelligent system (CL-XAI) for Cognitive Learning which is supported by artificial intelligence (AI) tools.
The use of CL-XAI is illustrated with a game-inspired virtual use case where learners tackle problems to enhance problem-solving skills.
arXiv Detail & Related papers (2023-12-19T16:13:47Z)
- Learning by Applying: A General Framework for Mathematical Reasoning via Enhancing Explicit Knowledge Learning [47.96987739801807]
We propose Learning by Applying (LeAp), a framework that enhances existing models (backbones) in a principled way through explicit knowledge learning.
In LeAp, we perform knowledge learning in a novel problem-knowledge-expression paradigm.
We show that LeAp improves all backbones' performances, learns accurate knowledge, and achieves a more interpretable reasoning process.
arXiv Detail & Related papers (2023-02-11T15:15:41Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Contextualized Knowledge-aware Attentive Neural Network: Enhancing Answer Selection with Knowledge [77.77684299758494]
We extensively investigate approaches to enhancing the answer selection model with external knowledge from a knowledge graph (KG).
First, we present a context-knowledge interaction learning framework, Knowledge-aware Neural Network (KNN), which learns the QA sentence representations by considering a tight interaction with the external knowledge from KG and the textual information.
To handle the diversity and complexity of KG information, we propose a Contextualized Knowledge-aware Attentive Neural Network (CKANN), which improves knowledge representation learning with structure information via a customized Graph Convolutional Network (GCN) and comprehensively learns context-based and knowledge-based sentence representations.
arXiv Detail & Related papers (2021-04-12T05:52:20Z)
- Incremental Knowledge Based Question Answering [52.041815783025186]
We propose a new incremental KBQA learning framework that can progressively expand learning capacity as humans do.
Specifically, it comprises a margin-distilled loss and a collaborative selection method to overcome the catastrophic forgetting problem.
Comprehensive experiments demonstrate its effectiveness and efficiency when working with an evolving knowledge base.
arXiv Detail & Related papers (2021-01-18T09:03:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.