From Objectives to Questions: A Planning-based Framework for Educational Mathematical Question Generation
- URL: http://arxiv.org/abs/2506.00963v1
- Date: Sun, 01 Jun 2025 11:23:18 GMT
- Title: From Objectives to Questions: A Planning-based Framework for Educational Mathematical Question Generation
- Authors: Cheng Cheng, Zhenya Huang, Guanhao Zhao, Yuxiang Guo, Xin Lin, Jinze Wu, Xin Li, Shijin Wang
- Abstract summary: We propose the Educational Question Planning with self-Reflection (EQPR) method for educational mathematical question generation. By combining a planning algorithm based on Monte Carlo Tree Search with the generative capabilities of Large Language Models, we continuously optimize questions. We have demonstrated that EQPR achieves significant improvements in generating questions that meet multi-dimensional educational objectives.
- Score: 32.76585750014007
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatically generating high-quality mathematical problems that align with educational objectives is a crucial task in NLP-based educational technology. Traditional generation methods focus primarily on textual quality, but they often overlook educational objectives. Moreover, these methods address only single-dimensional, simple question generation, failing to meet complex, multifaceted educational requirements. To address these challenges, we constructed and annotated EduMath, a dataset of 16k mathematical questions with multi-dimensional educational objectives. Based on this dataset, we developed EQGEVAL, which incorporates three evaluation dimensions and is designed to assess the ability of models to generate educational questions. Drawing inspiration from teachers' problem design processes, we propose the Educational Question Planning with self-Reflection (EQPR) method for educational mathematical question generation, following a "plan-evaluate-optimize" approach. Specifically, by combining a planning algorithm based on Monte Carlo Tree Search with the generative capabilities of Large Language Models, we continuously optimize questions through iterative feedback. This self-optimization mechanism ensures that the generated questions both fit the educational context and strategically achieve specific basic educational objectives. Through extensive experiments based on EQGEVAL, we have demonstrated that EQPR achieves significant improvements in generating questions that meet multi-dimensional educational objectives.
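The "plan-evaluate-optimize" loop described in the abstract can be sketched as a small Monte Carlo Tree Search over candidate question drafts. Everything below is an illustrative assumption, not the paper's actual implementation: the node structure, the keyword-matching evaluator, and the stubbed "reviser" (which a real system would replace with LLM calls) are all hypothetical stand-ins.

```python
import math
import random

class Node:
    """A candidate question draft in the search tree (hypothetical structure)."""
    def __init__(self, question, parent=None):
        self.question = question
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0  # cumulative evaluation score

def evaluate(question, objectives):
    """Toy stand-in for the evaluator: fraction of objective keywords covered."""
    hits = sum(1 for obj in objectives if obj in question)
    return hits / len(objectives)

def expand(node):
    """Toy stand-in for the LLM reviser: append a random refinement."""
    refinements = ["fractions", "word problem", "two steps", "real-world context"]
    child = Node(node.question + " [" + random.choice(refinements) + "]", parent=node)
    node.children.append(child)
    return child

def uct_select(node, c=1.4):
    """Pick the child with the best exploration/exploitation trade-off (UCT)."""
    return max(node.children,
               key=lambda ch: ch.value / ch.visits
               + c * math.sqrt(math.log(node.visits + 1) / ch.visits))

def plan_evaluate_optimize(seed_question, objectives, iterations=50, max_children=3):
    root = Node(seed_question)
    for _ in range(iterations):
        node = root
        # Selection: descend via UCT while the current node is fully expanded.
        while len(node.children) >= max_children:
            node = uct_select(node)
        # Expansion: ask the (stubbed) reviser for a new draft.
        leaf = expand(node)
        # Evaluation: score the new draft against the educational objectives.
        score = evaluate(leaf.question, objectives)
        # Backpropagation: push the score up to the root.
        while leaf is not None:
            leaf.visits += 1
            leaf.value += score
            leaf = leaf.parent
    best = max(root.children, key=lambda ch: ch.value / ch.visits)
    return best.question
```

In this sketch the search iteratively revises the seed question and keeps statistics on which revision directions score best, mirroring the iterative-feedback idea without committing to any specific prompt or scoring design.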
Related papers
- EduAgentQG: A Multi-Agent Workflow Framework for Personalized Question Generation [56.43882334582494]
We propose EduAgentQG, a multi-agent collaborative framework for generating high-quality and diverse personalized questions. The framework consists of five specialized agents and operates through an iterative feedback loop. EduAgentQG outperforms existing single-agent and multi-agent methods in terms of question diversity, goal consistency, and overall quality.
arXiv Detail & Related papers (2025-11-08T12:25:31Z) - Assessing the Quality of AI-Generated Exams: A Large-Scale Field Study [18.104664166381877]
Large language models (LLMs) challenge conventional methods of teaching and learning. One promising application is the generation of customized exams, tailored to specific course content.
arXiv Detail & Related papers (2025-08-09T01:20:53Z) - From Answers to Questions: EQGBench for Evaluating LLMs' Educational Question Generation [30.57730587890455]
Large Language Models (LLMs) have demonstrated remarkable capabilities in mathematical problem-solving. We introduce EQGBench, a benchmark specifically designed for evaluating LLMs' performance in Chinese Educational Question Generation. The dataset incorporates user queries with varying knowledge points, difficulty gradients, and question type specifications to simulate realistic educational scenarios.
arXiv Detail & Related papers (2025-08-05T14:16:42Z) - A Survey of Deep Learning for Geometry Problem Solving [72.22844763179786]
This paper provides a survey of the applications of deep learning in geometry problem solving. It includes (i) a comprehensive summary of the relevant tasks in geometry problem solving; (ii) a thorough review of related deep learning methods; and (iii) a detailed analysis of evaluation metrics and methods. Our goal is to provide a comprehensive and practical reference of deep learning for geometry problem solving to promote further developments in this field.
arXiv Detail & Related papers (2025-07-16T06:03:08Z) - YouLeQD: Decoding the Cognitive Complexity of Questions and Engagement in Online Educational Videos from Learners' Perspectives [1.2084539012992408]
The YouLeQD dataset contains learner-posed questions from YouTube lecture video comments. We developed two RoBERTa-based classification models to detect questions and analyze their cognitive complexity.
arXiv Detail & Related papers (2025-01-20T19:54:38Z) - A Novel Approach to Scalable and Automatic Topic-Controlled Question Generation in Education [6.9238760403459425]
This paper introduces a novel approach to educational question generation that controls the topical focus of questions. The proposed Topic-Controlled Question Generation (T-CQG) method enhances the relevance and effectiveness of the generated content for educational purposes. Our results, validated through rigorous offline and human-backed evaluations, demonstrate that the proposed models effectively generate high-quality, topic-focused questions.
arXiv Detail & Related papers (2025-01-09T13:13:24Z) - Distractor Generation in Multiple-Choice Tasks: A Survey of Methods, Datasets, and Evaluation [20.14906249952034]
The distractor generation task focuses on generating incorrect but plausible options for objective questions.
The evolution of artificial intelligence (AI) has transitioned the task from traditional methods to the use of neural networks and pre-trained language models.
This survey explores distractor generation tasks, datasets, methods, and current evaluation metrics for English objective questions.
arXiv Detail & Related papers (2024-02-02T15:53:31Z) - Adapting Large Language Models for Education: Foundational Capabilities, Potentials, and Challenges [60.62904929065257]
Large language models (LLMs) offer a possible solution to this issue by comprehending individual requests.
This paper reviews the recently emerged LLM research related to educational capabilities, including mathematics, writing, programming, reasoning, and knowledge-based question answering.
arXiv Detail & Related papers (2023-12-27T14:37:32Z) - Towards Goal-oriented Intelligent Tutoring Systems in Online Education [69.06930979754627]
We propose a new task, named Goal-oriented Intelligent Tutoring Systems (GITS).
GITS aims to enable the student's mastery of a designated concept by strategically planning a customized sequence of exercises and assessments.
We propose a novel graph-based reinforcement learning framework, named Planning-Assessment-Interaction (PAI).
arXiv Detail & Related papers (2023-12-03T12:37:16Z) - Automating question generation from educational text [1.9325905076281444]
The use of question-based activities (QBAs) is widespread in education, forming an integral part of the learning and assessment process.
We design and evaluate an automated question generation tool for formative and summative assessment in schools.
arXiv Detail & Related papers (2023-09-26T15:18:44Z) - Automated Distractor and Feedback Generation for Math Multiple-choice Questions via In-context Learning [43.83422798569986]
Multiple-choice questions (MCQs) are ubiquitous at almost all levels of education since they are easy to administer and grade and provide a reliable form of assessment.
To date, the task of crafting high-quality distractors has largely remained a labor-intensive process for teachers and learning content designers.
We propose a simple, in-context learning-based solution for automated distractor and corresponding feedback message generation.
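An in-context learning approach of this kind typically works by assembling a few worked examples into a prompt before the target question. The template below is a hedged sketch of that idea only; the field names, instruction wording, and example format are illustrative assumptions, not the paper's actual prompt design.

```python
def build_distractor_prompt(examples, target_question, target_answer):
    """Assemble a few-shot prompt asking an LLM for plausible-but-wrong options.

    `examples` is a list of dicts with hypothetical keys
    'question', 'answer', and 'distractors'.
    """
    parts = ["Generate three incorrect but plausible answer options (distractors)."]
    for ex in examples:
        parts.append(f"Question: {ex['question']}")
        parts.append(f"Correct answer: {ex['answer']}")
        parts.append("Distractors: " + "; ".join(ex["distractors"]))
    # The target item is left open-ended so the model completes the pattern.
    parts.append(f"Question: {target_question}")
    parts.append(f"Correct answer: {target_answer}")
    parts.append("Distractors:")
    return "\n".join(parts)
```

The resulting string would be sent to an LLM, whose completion after the final "Distractors:" line is parsed as the generated options.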
arXiv Detail & Related papers (2023-08-07T01:03:04Z) - Towards a Holistic Understanding of Mathematical Questions with Contrastive Pre-training [65.10741459705739]
We propose a novel contrastive pre-training approach for mathematical question representations, namely QuesCo.
We first design two-level question augmentations, including content-level and structure-level, which generate literally diverse question pairs with similar purposes.
Then, to fully exploit hierarchical information of knowledge concepts, we propose a knowledge hierarchy-aware rank strategy.
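One way a hierarchy-aware ranking term could work is to weight contrastive negatives by how far their knowledge concept sits from the anchor's concept in the hierarchy, so that nearby concepts act as harder negatives. The weighting scheme and loss shape below are illustrative assumptions, not QuesCo's actual formulation.

```python
import math

def hierarchy_distance(path_a, path_b):
    """Depth at which two knowledge-concept paths (root -> leaf) diverge."""
    shared = 0
    for a, b in zip(path_a, path_b):
        if a != b:
            break
        shared += 1
    return max(len(path_a), len(path_b)) - shared

def ranked_contrastive_loss(sim_pos, sims_neg, distances, temperature=0.1):
    """InfoNCE-style loss where each negative is weighted by hierarchy distance:
    concepts closer to the anchor's (smaller distance) get larger weights,
    making them harder negatives. Illustrative sketch only."""
    weights = [1.0 / (1 + d) for d in distances]
    numerator = math.exp(sim_pos / temperature)
    denominator = numerator + sum(w * math.exp(s / temperature)
                                  for w, s in zip(weights, sims_neg))
    return -math.log(numerator / denominator)
```

Under this sketch, down-weighting distant-concept negatives lets the embedding space reflect the concept hierarchy rather than treating all negatives as equally wrong.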
arXiv Detail & Related papers (2023-01-18T14:23:29Z) - A Survey of Deep Learning for Mathematical Reasoning [71.88150173381153]
We review the key tasks, datasets, and methods at the intersection of mathematical reasoning and deep learning over the past decade.
Recent advances in large-scale neural language models have opened up new benchmarks and opportunities to use deep learning for mathematical reasoning.
arXiv Detail & Related papers (2022-12-20T18:46:16Z)