Automatic Generation of Socratic Subquestions for Teaching Math Word Problems
- URL: http://arxiv.org/abs/2211.12835v1
- Date: Wed, 23 Nov 2022 10:40:22 GMT
- Title: Automatic Generation of Socratic Subquestions for Teaching Math Word Problems
- Authors: Kumar Shridhar, Jakub Macina, Mennatallah El-Assady, Tanmay Sinha,
Manu Kapur, Mrinmaya Sachan
- Abstract summary: We explore the ability of large language models (LMs) to generate sequential questions for guiding math word problem solving.
On both automatic and human quality evaluations, we find that LMs constrained with desirable question properties generate superior questions.
Results suggest that the difficulty level of problems plays an important role in determining whether questioning improves or hinders human performance.
- Score: 16.97827669744673
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Socratic questioning is an educational method that allows students to
discover answers to complex problems by asking them a series of thoughtful
questions. Generating didactically sound questions is challenging and requires an
understanding of the reasoning process involved in the problem. We hypothesize
that such a questioning strategy can not only enhance human performance, but
also assist math word problem (MWP) solvers. In this work, we explore the
ability of large language models (LMs) to generate sequential questions for
guiding math word problem solving. We propose various guided question
generation schemes based on input conditioning and reinforcement learning. On
both automatic and human quality evaluations, we find that LMs constrained with
desirable question properties generate superior questions and improve the
overall performance of a math word problem solver. We conduct a preliminary
user study to examine the potential value of such question generation models in
the education domain. Results suggest that the difficulty level of problems
plays an important role in determining whether questioning improves or hinders
human performance. We discuss the future of using such questioning strategies
in education.
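
A minimal sketch of the input-conditioning idea: prompt an off-the-shelf seq2seq LM with the word problem and ask it for guiding subquestions. The checkpoint, prompt wording, and decoding settings below are illustrative assumptions and do not reproduce the paper's conditioning schemes or reinforcement learning fine-tuning.

```python
# Sketch of input-conditioned subquestion generation with an off-the-shelf
# seq2seq LM. Checkpoint, prompt format, and decoding settings are assumptions,
# not the paper's exact setup.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "google/flan-t5-base"  # assumed checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def generate_subquestions(problem: str, num_questions: int = 3) -> list[str]:
    """Condition the LM on the word problem and ask for guiding subquestions."""
    prompt = (
        "Write Socratic subquestions that guide a student, step by step, "
        f"toward solving this math word problem:\n{problem}\nQuestions:"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=96,
        num_beams=4,
        num_return_sequences=num_questions,
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

print(generate_subquestions(
    "Liam had 12 apples. He gave 5 to Mia and bought 8 more. "
    "How many apples does he have now?"
))
```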
Related papers
- How to Engage Your Readers? Generating Guiding Questions to Promote Active Reading [60.19226384241482]
We introduce GuidingQ, a dataset of 10K in-text questions from textbooks and scientific articles.
We explore various approaches to generate such questions using language models.
We conduct a human study to understand the implication of such questions on reading comprehension.
arXiv Detail & Related papers (2024-07-19T13:42:56Z)
- Automatic question generation for propositional logical equivalences [6.221146613622175]
We develop and implement a method capable of generating tailored questions for each student.
Previous studies have investigated AQG frameworks for education that address validity, user-defined difficulty, and personalized problem generation.
Our new AQG approach produces logical equivalence problems for Discrete Mathematics, which is a core course for year-one computer science students.
arXiv Detail & Related papers (2024-05-09T02:44:42Z)
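
A toy sketch of how logical-equivalence exercises might be generated and checked automatically, using a SAT-based equivalence test from sympy. The candidate rewrites and question wording are assumptions for illustration, not the paper's AQG framework.

```python
# Toy illustration of auto-generating a logical-equivalence exercise and
# verifying the intended answer with a SAT check. A simplified stand-in for
# the paper's AQG framework, not its actual implementation.
import random
from sympy import symbols, Not, And, Or, Equivalent
from sympy.logic.inference import satisfiable

p, q, r = symbols("p q r")

def make_exercise():
    """Pick a formula pair and ask whether the two are logically equivalent."""
    candidates = [
        (Not(And(p, q)), Or(Not(p), Not(q))),          # De Morgan: equivalent
        (Not(Or(p, q)), And(Not(p), Not(q))),          # De Morgan: equivalent
        (Or(p, And(q, r)), And(Or(p, q), Or(p, r))),   # distributivity: equivalent
        (Not(And(p, q)), And(Not(p), Not(q))),         # common mistake: NOT equivalent
    ]
    lhs, rhs = random.choice(candidates)
    # Equivalent iff the negated biconditional is unsatisfiable.
    equivalent = satisfiable(Not(Equivalent(lhs, rhs))) is False
    question = f"Are {lhs} and {rhs} logically equivalent?"
    return question, equivalent

question, answer = make_exercise()
print(question, "->", "yes" if answer else "no")
```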
- Do Language Models Exhibit the Same Cognitive Biases in Problem Solving as Human Learners? [140.9751389452011]
We study the biases of large language models (LLMs) in relation to those known in children when solving arithmetic word problems.
We generate a novel set of word problems for each of these tests, using a neuro-symbolic approach that enables fine-grained control over the problem features.
arXiv Detail & Related papers (2024-01-31T18:48:20Z)
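
A toy, template-based sketch of generating word problems with fine-grained control over problem features (operation, operand size). This is a simplified stand-in for the neuro-symbolic generation described above, not a reproduction of it.

```python
# Toy sketch of generating word problems with fine-grained control over
# problem features (numbers, operation). The paper's neuro-symbolic pipeline
# is more involved than this illustration.
import random
from dataclasses import dataclass

@dataclass
class ProblemSpec:
    operation: str      # "add" or "subtract"
    max_operand: int    # controls numerical difficulty

TEMPLATES = {
    "add": "{name} has {a} marbles and finds {b} more. How many marbles does {name} have now?",
    "subtract": "{name} has {a} marbles and gives away {b}. How many marbles are left?",
}

def generate(spec: ProblemSpec, name: str = "Ava") -> tuple[str, int]:
    a = random.randint(2, spec.max_operand)
    b = random.randint(1, a)  # keep subtraction results non-negative
    answer = a + b if spec.operation == "add" else a - b
    text = TEMPLATES[spec.operation].format(name=name, a=a, b=b)
    return text, answer

problem, answer = generate(ProblemSpec(operation="subtract", max_operand=20))
print(problem, "Answer:", answer)
```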
- Three Questions Concerning the Use of Large Language Models to Facilitate Mathematics Learning [4.376598435975689]
We discuss the challenges associated with employing large language models to enhance students' mathematical problem-solving skills.
LLMs can generate incorrect reasoning processes, and also have difficulty understanding the given questions' rationales when attempting to correct students' answers.
arXiv Detail & Related papers (2023-10-20T16:05:35Z)
- Retrieval-augmented Generation to Improve Math Question-Answering: Trade-offs Between Groundedness and Human Preference [0.0]
We design prompts that retrieve and use content from a high-quality open-source math textbook to generate responses to real student questions.
We evaluate the efficacy of this RAG system for middle-school algebra and geometry QA by administering a multi-condition survey.
We argue that while RAG is able to improve response quality, designers of math QA systems must consider trade-offs between generating responses preferred by students and responses closely matched to specific educational resources.
arXiv Detail & Related papers (2023-10-04T22:09:28Z)
- Covering Uncommon Ground: Gap-Focused Question Generation for Answer Assessment [75.59538732476346]
We focus on the problem of automatically generating gap-focused questions (GFQs).
We define the task, highlight key desired aspects of a good GFQ, and propose a model that satisfies these.
arXiv Detail & Related papers (2023-07-06T22:21:42Z)
- Improving Reading Comprehension Question Generation with Data Augmentation and Overgenerate-and-rank [3.854023945160742]
Automated answer-aware reading comprehension question generation has significant potential to scale up learner support in educational activities.
One key technical challenge in this setting is that there can be multiple questions, sometimes very different from each other, with the same answer.
We propose 1) a data augmentation method that enriches the training dataset with diverse questions given the same context and answer and 2) an overgenerate-and-rank method to select the best question from a pool of candidates.
arXiv Detail & Related papers (2023-06-15T04:23:25Z)
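
A small sketch of the overgenerate-and-rank idea: produce several candidate questions for the same context/answer pair and keep the highest-scoring one. The candidate pool and the lexical-overlap scorer below are toy stand-ins; the paper's generator and ranking criterion are not reproduced here.

```python
# Sketch of overgenerate-and-rank: score a pool of candidate questions and
# keep the best. The scorer is a toy heuristic, not the paper's ranker.
def overlap_score(question: str, context: str, answer: str) -> int:
    """Toy scorer: reward context overlap, penalize questions that leak the answer."""
    q_tokens = set(question.lower().split())
    ctx_overlap = len(q_tokens & set(context.lower().split()))
    leaks_answer = answer.lower() in question.lower()
    return ctx_overlap - (10 if leaks_answer else 0)

def overgenerate_and_rank(candidates: list[str], context: str, answer: str) -> str:
    return max(candidates, key=lambda q: overlap_score(q, context, answer))

context = "Marie Curie won the Nobel Prize in Physics in 1903 for research on radiation."
answer = "1903"
candidates = [
    "When did Marie Curie win the Nobel Prize in Physics?",
    "Did Marie Curie win the Nobel Prize in 1903?",   # leaks the answer
    "What is radiation?",
]
print(overgenerate_and_rank(candidates, context, answer))
```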
- Towards a Holistic Understanding of Mathematical Questions with Contrastive Pre-training [65.10741459705739]
We propose a novel contrastive pre-training approach for mathematical question representations, namely QuesCo.
We first design two-level question augmentations, including content-level and structure-level, which generate literally diverse question pairs with similar purposes.
Then, to fully exploit hierarchical information of knowledge concepts, we propose a knowledge hierarchy-aware rank strategy.
arXiv Detail & Related papers (2023-01-18T14:23:29Z)
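
A generic InfoNCE-style contrastive loss over question embeddings, as a sketch of the contrastive pre-training idea. QuesCo's encoder, its content- and structure-level augmentations, and its knowledge hierarchy-aware ranking are not reproduced; the embeddings below are random placeholders.

```python
# Generic InfoNCE loss over paired question embeddings (in-batch negatives).
# The encoder and augmentations used by QuesCo are not implemented here.
import torch
import torch.nn.functional as F

def info_nce(anchor: torch.Tensor, positive: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """anchor, positive: [batch, dim]; each row's positive is the matching row."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t() / temperature   # [batch, batch] similarity matrix
    targets = torch.arange(anchor.size(0))         # diagonal entries are positives
    return F.cross_entropy(logits, targets)

# Tiny smoke test with random "question embeddings".
a = torch.randn(8, 128)
p = a + 0.05 * torch.randn(8, 128)  # augmented views stay close to the anchor
print(info_nce(a, p).item())
```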
- Inquisitive Question Generation for High Level Text Comprehension [60.21497846332531]
We introduce INQUISITIVE, a dataset of 19K questions that are elicited while a person is reading through a document.
We show that readers engage in a series of pragmatic strategies to seek information.
We evaluate question generation models based on GPT-2 and show that our model is able to generate reasonable questions.
arXiv Detail & Related papers (2020-10-04T19:03:39Z)
- Stay Hungry, Stay Focused: Generating Informative and Specific Questions in Information-Seeking Conversations [41.74162467619795]
We investigate the problem of generating informative questions in information-asymmetric conversations.
To generate pragmatic questions, we use reinforcement learning to optimize an informativeness metric.
We demonstrate that the resulting pragmatic questioner substantially improves the informativeness and specificity of questions generated over a baseline model.
arXiv Detail & Related papers (2020-04-30T00:49:14Z)
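
A toy REINFORCE loop that nudges a "policy" toward higher informativeness reward, as a sketch of optimizing question generation with reinforcement learning. The categorical policy and hand-set rewards are stand-ins; the paper fine-tunes a neural generator against its own informativeness metric.

```python
# Toy REINFORCE sketch: sample a question type, receive an informativeness
# reward, and update the policy. The policy and reward are illustrative only.
import torch

question_types = ["yes/no", "clarification", "open-ended follow-up"]
logits = torch.zeros(len(question_types), requires_grad=True)   # policy parameters
optimizer = torch.optim.Adam([logits], lr=0.1)

def informativeness_reward(choice: int) -> float:
    # Stand-in reward: pretend open-ended follow-ups elicit the most new information.
    return [0.1, 0.4, 1.0][choice]

for step in range(200):
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()
    reward = informativeness_reward(action.item())
    loss = -reward * dist.log_prob(action)   # REINFORCE objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

probs = torch.softmax(logits.detach(), dim=0).tolist()
print({t: round(p, 2) for t, p in zip(question_types, probs)})
```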
- Reinforced Multi-task Approach for Multi-hop Question Generation [47.15108724294234]
We address multi-hop question generation, which aims to generate relevant questions based on supporting facts in the context.
We employ multitask learning with the auxiliary task of answer-aware supporting fact prediction to guide the question generator.
We demonstrate the effectiveness of our approach through experiments on the multi-hop question answering dataset, HotPotQA.
arXiv Detail & Related papers (2020-04-05T10:16:59Z)
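
A sketch of the multitask setup described above: a shared encoder feeds both a question-generation head and an auxiliary head that predicts which context sentences are supporting facts, and the two losses are combined. The architecture, tensor shapes, and loss weighting are illustrative assumptions rather than the paper's model.

```python
# Sketch of multitask learning with an auxiliary supporting-fact prediction
# loss guiding a question generator. Shapes and weighting are assumptions.
import torch
import torch.nn as nn

class MultiTaskQG(nn.Module):
    def __init__(self, hidden: int = 64, vocab: int = 1000):
        super().__init__()
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.generator_head = nn.Linear(hidden, vocab)   # next-token logits
        self.support_head = nn.Linear(hidden, 1)         # supporting-fact logits

    def forward(self, sentence_embs: torch.Tensor):
        encoded, _ = self.encoder(sentence_embs)         # [batch, sents, hidden]
        return self.generator_head(encoded), self.support_head(encoded).squeeze(-1)

model = MultiTaskQG()
sentence_embs = torch.randn(2, 5, 64)                    # 2 contexts, 5 sentences each
target_tokens = torch.randint(0, 1000, (2, 5))           # toy generation targets
support_labels = torch.randint(0, 2, (2, 5)).float()     # which sentences are supporting facts

gen_logits, support_logits = model(sentence_embs)
gen_loss = nn.functional.cross_entropy(gen_logits.transpose(1, 2), target_tokens)
aux_loss = nn.functional.binary_cross_entropy_with_logits(support_logits, support_labels)
loss = gen_loss + 0.5 * aux_loss                          # auxiliary task guides the generator
print(float(loss))
```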
This list is automatically generated from the titles and abstracts of the papers on this site.