Automatic Generation of Word Problems for Academic Education via Natural
Language Processing (NLP)
- URL: http://arxiv.org/abs/2109.13123v3
- Date: Thu, 30 Sep 2021 05:01:25 GMT
- Title: Automatic Generation of Word Problems for Academic Education via Natural
Language Processing (NLP)
- Authors: Stanley Uros Keller
- Abstract summary: This thesis proposes an approach to generate diverse, context-rich word problems.
The proposed approach is proven to be effective in generating valid word problems for mathematical statistics.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Digital learning platforms enable students to learn on a flexible and
individual schedule while providing instant feedback mechanisms. The field
of STEM education requires students to solve numerous training exercises to
grasp underlying concepts. It is apparent that there are restrictions in
current online education in terms of exercise diversity and individuality. Many
exercises show little variance in structure and content, which hinders students
from developing abstraction capabilities. This thesis proposes an approach to
generate diverse, context-rich word problems. In addition to requiring the
generated language to be grammatically correct, the nature of word problems
implies additional constraints on the validity of contents. The proposed
approach is proven to be effective in generating valid word problems for
mathematical statistics. The experimental results present a tradeoff between
generation time and exercise validity. The system can easily be parametrized to
handle this tradeoff according to the requirements of specific use cases.
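To make the described tradeoff concrete, here is a minimal generate-and-validate sketch in Python. The names (generate_candidate, is_valid, max_attempts) and the validity criteria are illustrative assumptions, not the thesis' actual implementation; max_attempts is the parameter that trades generation time against exercise validity.

    import random
    from typing import Optional

    def is_valid(problem: dict) -> bool:
        # Hypothetical content check: a statistics exercise needs a usable sample
        # size and a probability strictly between 0 and 1.
        return problem["sample_size"] > 1 and 0.0 < problem["probability"] < 1.0

    def generate_candidate(context: str) -> dict:
        # Stand-in for the NLP generation step (e.g., template filling or LM decoding).
        return {
            "text": f"In {context}, a machine produces defective parts at a fixed rate...",
            "sample_size": random.randint(0, 50),
            "probability": round(random.random(), 2),
        }

    def generate_word_problem(context: str, max_attempts: int = 10) -> Optional[dict]:
        # Retry until a candidate passes validation or the attempt budget is spent;
        # max_attempts is the knob that trades generation time against validity.
        for _ in range(max_attempts):
            candidate = generate_candidate(context)
            if is_valid(candidate):
                return candidate
        return None  # caller may fall back to a vetted template

    print(generate_word_problem("a lamp factory"))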
Related papers
- From Prompts to Propositions: A Logic-Based Lens on Student-LLM Interactions [9.032718302451501]
We introduce Prompt2Constraints, a novel method that translates students' prompts into logical constraints.
We use this approach to analyze a dataset of 1,872 prompts from 203 students solving programming tasks.
We find that while successful and unsuccessful attempts tend to use a similar number of constraints overall, when students fail, they often modify their prompts more significantly.
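As a rough illustration of the constraint-level comparison described above (the extraction step itself is omitted), the following toy sketch measures how much the constraint set changed between two consecutive prompt revisions; all constraint strings are invented.

    def constraint_change(prev: set, curr: set) -> dict:
        # Quantify how much the constraint set changed between two prompt revisions.
        added, removed = curr - prev, prev - curr
        return {"n_prev": len(prev), "n_curr": len(curr), "n_changed": len(added) + len(removed)}

    # Hypothetical constraints extracted from two consecutive prompts of one student.
    prev = {"input is a list of ints", "output is sorted", "use recursion"}
    curr = {"input is a list of ints", "output is sorted", "use iteration", "handle the empty list"}
    print(constraint_change(prev, curr))  # large n_changed values were associated with failed attempts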
arXiv Detail & Related papers (2025-04-25T20:58:16Z)
- Using machine learning to measure evidence of students' sensemaking in physics courses [5.509349550209279]
In education, problem-solving correctness is often inappropriately conflated with student learning.
In this work, we contribute such a measurement scheme, which quantifies the evidence of students' physical sensemaking given their written explanations for their solutions to physics problems.
We implement three unique language encoders with logistic regression, and provide a deployability analysis on 385 real student explanations from the 2023 Introduction to Physics course at Tufts University.
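The paper pairs language encoders with logistic regression; the sketch below uses a TF-IDF encoder from scikit-learn as a stand-in, since the three encoders themselves are not detailed here, and the example explanations and labels are invented.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented examples: written explanations labeled for evidence of sensemaking (1) or not (0).
    explanations = [
        "The net force is zero, so the cart keeps moving at constant velocity.",
        "I plugged the numbers into the formula and got 4.2.",
    ]
    labels = [1, 0]

    # TF-IDF is only a stand-in encoder; the paper's three encoders are not reproduced here.
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(explanations, labels)
    print(clf.predict_proba(["Energy is conserved because there is no friction."]))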
arXiv Detail & Related papers (2025-03-19T18:49:21Z)
- MathMistake Checker: A Comprehensive Demonstration for Step-by-Step Math Problem Mistake Finding by Prompt-Guided LLMs [13.756898876556455]
We propose a novel system, MathMistake Checker, to automate step-by-step mistake finding in mathematical problems with lengthy answers.
The system aims to simplify grading, increase efficiency, and enhance learning experiences from a pedagogical perspective.
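The system's actual prompts are not reproduced here; a hedged sketch of prompt-guided, step-by-step mistake finding might look as follows, with call_llm standing in for whatever model API is used.

    def build_step_check_prompt(problem: str, steps: list) -> str:
        # Assemble a grading prompt that asks the model to flag the first incorrect step.
        numbered = "\n".join(f"Step {i + 1}: {s}" for i, s in enumerate(steps))
        return (
            "You are checking a student's worked solution.\n"
            f"Problem: {problem}\n{numbered}\n"
            "Name the first incorrect step, explain the mistake, and suggest a fix. "
            "If every step is correct, answer 'no mistake'."
        )

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("stand-in for whatever LLM API is available")

    prompt = build_step_check_prompt(
        "Solve 2x + 6 = 10.",
        ["2x = 10 + 6", "2x = 16", "x = 8"],
    )
    # feedback = call_llm(prompt)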
arXiv Detail & Related papers (2025-03-06T10:19:01Z)
- BloomWise: Enhancing Problem-Solving capabilities of Large Language Models using Bloom's-Taxonomy-Inspired Prompts [59.83547898874152]
We introduce BloomWise, a new prompting technique inspired by Bloom's taxonomy, to improve the performance of Large Language Models (LLMs).
The decision regarding the need to employ more sophisticated cognitive skills is based on self-evaluation performed by the LLM.
In extensive experiments across 4 popular math reasoning datasets, we have demonstrated the effectiveness of our proposed approach.
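A rough sketch of the escalation idea, assuming a placeholder call_llm and treating the self-evaluation as a simple yes/no check; this is an interpretation of the summary above, not the authors' prompts.

    BLOOM_LEVELS = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("stand-in for an LLM call")

    def solve_with_bloom(question: str) -> str:
        # Try successively higher cognitive levels; stop once the model's own
        # self-evaluation accepts the answer.
        answer = ""
        for level in BLOOM_LEVELS:
            answer = call_llm(f"Using the '{level}' skill from Bloom's taxonomy, solve:\n{question}")
            verdict = call_llm(f"Question: {question}\nProposed answer: {answer}\nIs this correct? Answer yes or no.")
            if verdict.strip().lower().startswith("yes"):
                break
        return answer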
arXiv Detail & Related papers (2024-10-05T09:27:52Z)
- Exploring Error Types in Formal Languages Among Students of Upper Secondary Education [0.0]
We report on an exploratory study of errors in formal languages among upper secondary education students.
Our results suggest instances of non-functional understanding of concepts.
These findings can serve as a starting point for a broader understanding of how and why students struggle with this topic.
arXiv Detail & Related papers (2024-09-23T14:16:13Z)
- Integrating A.I. in Higher Education: Protocol for a Pilot Study with 'SAMCares: An Adaptive Learning Hub' [0.6990493129893112]
This research aims to introduce an innovative study buddy called 'SAMCares'.
The system leverages a Large Language Model (LLM) and Retriever-Augmented Generation (RAG) to offer real-time, context-aware, and adaptive educational support.
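A generic retrieval-augmented prompting sketch (keyword overlap as a stand-in for a real retriever); this is not SAMCares' implementation, only an illustration of the LLM + RAG pattern named above.

    def retrieve(query: str, corpus: dict, k: int = 2) -> list:
        # Naive keyword-overlap retrieval; a real hub would use an embedding index.
        scored = sorted(
            corpus.items(),
            key=lambda item: sum(word in item[1].lower() for word in query.lower().split()),
            reverse=True,
        )
        return [text for _, text in scored[:k]]

    course_notes = {
        "ch1": "Newton's second law states that force equals mass times acceleration (F = ma).",
        "ch2": "Work is the product of force and displacement along the force's direction.",
    }
    question = "What does F = ma mean?"
    context = "\n".join(retrieve(question, course_notes))
    prompt = f"Answer using only the course notes below.\n{context}\nQuestion: {question}"
    # answer = call_llm(prompt)  # stand-in for the LLM behind the study buddy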
arXiv Detail & Related papers (2024-05-01T05:39:07Z)
- Adapting Large Language Models for Education: Foundational Capabilities, Potentials, and Challenges [60.62904929065257]
Large language models (LLMs) offer the possibility of resolving this issue by comprehending individual requests.
This paper reviews the recently emerged LLM research related to educational capabilities, including mathematics, writing, programming, reasoning, and knowledge-based question answering.
arXiv Detail & Related papers (2023-12-27T14:37:32Z)
- Parameterizing Context: Unleashing the Power of Parameter-Efficient Fine-Tuning and In-Context Tuning for Continual Table Semantic Parsing [13.51721352349583]
This paper introduces a novel method integrating parameter-efficient fine-tuning (PEFT) and in-context tuning (ICT) for training a continual table semantic parser.
The teacher addresses the few-shot problem using ICT, which procures contextual information by demonstrating a few training examples.
In turn, the student leverages the proposed PEFT framework to learn from the teacher's output distribution, and subsequently compresses and saves the contextual information to the prompts, eliminating the need to store any training examples.
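A heavily simplified sketch of the teacher-student idea in PyTorch: the teacher's output distribution supervises a student whose only trainable parameters are soft prompt vectors, while the backbone stays frozen. Module names, shapes, and the way the prompt is folded into the input are all illustrative assumptions, not the paper's code.

    import torch
    import torch.nn.functional as F

    vocab, dim, prompt_len = 100, 32, 4

    class TinyStudent(torch.nn.Module):
        # Frozen backbone plus a trainable soft prompt (the PEFT part).
        def __init__(self):
            super().__init__()
            self.backbone = torch.nn.Linear(dim, vocab)
            self.backbone.requires_grad_(False)       # keep the base model frozen
            self.soft_prompt = torch.nn.Parameter(torch.zeros(prompt_len, dim))

        def forward(self, x):                         # x: (batch, dim)
            # fold the prompt into the input representation (illustrative only)
            return self.backbone(x + self.soft_prompt.mean(0))

    student = TinyStudent()
    opt = torch.optim.Adam([student.soft_prompt], lr=1e-2)

    x = torch.randn(8, dim)
    teacher_logits = torch.randn(8, vocab)            # stand-in for the ICT teacher's outputs

    # Distill the teacher's output distribution into the student's prompt.
    loss = F.kl_div(
        F.log_softmax(student(x), dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )
    loss.backward()
    opt.step()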
arXiv Detail & Related papers (2023-10-07T13:40:41Z)
- Resilient Constrained Learning [94.27081585149836]
This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task.
We call this approach resilient constrained learning after the term used to describe ecological systems that adapt to disruptions by modifying their operation.
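Schematically, the resilient formulation lets the requirement levels themselves relax, at a price set by a perturbation cost h; the notation below is generic and may differ from the paper's exact formulation.

    \min_{\theta,\, u \ge 0} \;\; \ell_0(\theta) + h(u)
    \quad \text{s.t.} \quad \ell_i(\theta) \le c_i + u_i, \quad i = 1, \dots, m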
arXiv Detail & Related papers (2023-06-04T18:14:18Z)
- Controlled Text Generation with Natural Language Instructions [74.88938055638636]
InstructCTG is a controlled text generation framework that incorporates different constraints.
We first extract the underlying constraints of natural texts through a combination of off-the-shelf NLP tools and simple verbalizers.
By prepending natural language descriptions of the constraints and a few demonstrations, we fine-tune a pre-trained language model to incorporate various types of constraints.
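A small sketch of the verbalization step: extracted constraints are rendered as a natural-language prefix that is prepended to the target text before fine-tuning. The constraint types and wording here are invented for illustration and are not InstructCTG's actual templates.

    def verbalize(constraints: dict) -> str:
        # Turn extracted constraints into a natural-language instruction prefix.
        parts = []
        if "keywords" in constraints:
            parts.append("Include the words: " + ", ".join(constraints["keywords"]) + ".")
        if "max_words" in constraints:
            parts.append(f"Use at most {constraints['max_words']} words.")
        return " ".join(parts)

    # Constraints extracted from a target sentence by off-the-shelf NLP tools (illustrative values).
    prefix = verbalize({"keywords": ["variance", "sample"], "max_words": 25})
    training_example = prefix + "\n" + "The sample variance measures spread around the sample mean."
    print(training_example)  # (prefix, text) pairs like this are used to fine-tune the LM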
arXiv Detail & Related papers (2023-04-27T15:56:34Z)
- Multimodal Lecture Presentations Dataset: Understanding Multimodality in Educational Slides [57.86931911522967]
We test the capabilities of machine learning models in multimodal understanding of educational content.
Our dataset contains aligned slides and spoken language, for 180+ hours of video and 9000+ slides, with 10 lecturers from various subjects.
We introduce PolyViLT, a multimodal transformer trained with a multi-instance learning loss that is more effective than current approaches.
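PolyViLT's exact loss is not given here; the sketch below shows a generic multi-instance learning objective (the bag score is the maximum over instance scores) to illustrate the training signal when only slide-level alignment is known. All tensors are random placeholders.

    import torch
    import torch.nn.functional as F

    # A slide is paired with a "bag" of spoken-language segments; only the bag label
    # (matching lecture or not) is known, not which segment aligns with the slide.
    slide_emb = torch.randn(4, 16)        # 4 slides, 16-dim embeddings
    segment_emb = torch.randn(4, 6, 16)   # 6 candidate spoken segments per slide
    bag_labels = torch.tensor([1., 0., 1., 1.])

    # Instance scores: similarity between each slide and every segment in its bag.
    scores = torch.einsum("bd,bnd->bn", slide_emb, segment_emb)
    bag_scores = scores.max(dim=1).values          # multi-instance pooling: best segment wins
    loss = F.binary_cross_entropy_with_logits(bag_scores, bag_labels)
    print(loss.item())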
arXiv Detail & Related papers (2022-08-17T05:30:18Z)
- Question Generation for Adaptive Education [7.23389716633927]
We show how to fine-tune pre-trained language models for deep knowledge tracing (LM-KT).
This model accurately predicts the probability of a student answering a question correctly, and generalizes to questions not seen in training.
We then use LM-KT to specify the objective and data for training a model to generate questions conditioned on the student and target difficulty.
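A minimal sketch of how an LM-KT correctness probability can define a target difficulty and condition the question generator's training data; lmkt_prob_correct is a stand-in returning a fixed value, and the text format is an assumption rather than the paper's.

    def lmkt_prob_correct(student_history: list, question: str) -> float:
        # Stand-in for the fine-tuned knowledge-tracing model described above.
        return 0.35

    def build_generation_example(student_history: list, question: str) -> str:
        # Format (history, target difficulty, question) triples for training the generator.
        difficulty = 1.0 - lmkt_prob_correct(student_history, question)
        return (
            "history: " + " | ".join(student_history)
            + f" | target difficulty: {difficulty:.2f} -> question: {question}"
        )

    print(build_generation_example(
        ["q1: correct", "q2: wrong"],
        "Translate 'je voudrais un cafe' into English.",
    ))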
arXiv Detail & Related papers (2021-06-08T11:46:59Z)
- The empirical duality gap of constrained statistical learning [115.23598260228587]
We study constrained statistical learning problems, the unconstrained version of which is at the core of virtually all modern information processing.
We propose to tackle the constrained statistical learning problem, overcoming its infinite dimensionality, unknown distributions, and constraints by leveraging finite-dimensional parameterizations, sample averages, and duality theory.
We demonstrate the effectiveness and usefulness of this constrained formulation in a fair learning application.
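Schematically, the empirical dual problem in this line of work replaces expectations with sample averages inside a Lagrangian saddle point; the notation below is generic, and the "empirical duality gap" of the title measures how far this value lies from the constrained optimum.

    \hat{D}^\star \;=\; \max_{\lambda \ge 0} \; \min_{\theta \in \Theta} \;
    \frac{1}{N}\sum_{n=1}^{N} \ell_0\big(f_\theta(x_n), y_n\big)
    \;+\; \sum_{i=1}^{m} \lambda_i \Big[ \frac{1}{N}\sum_{n=1}^{N} \ell_i\big(f_\theta(x_n), y_n\big) - c_i \Big]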
arXiv Detail & Related papers (2020-02-12T19:12:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.