Math Operation Embeddings for Open-ended Solution Analysis and Feedback
- URL: http://arxiv.org/abs/2104.12047v1
- Date: Sun, 25 Apr 2021 02:09:17 GMT
- Title: Math Operation Embeddings for Open-ended Solution Analysis and Feedback
- Authors: Mengxue Zhang, Zichao Wang, Richard Baraniuk, Andrew Lan
- Abstract summary: We use a dataset that contains student solution steps in the Cognitive Tutor system to learn implicit and explicit representations of math operations.
Experimental results show that our learned math operation representations generalize well across different data distributions.
- Score: 2.905751301655124
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Feedback on student answers and even during intermediate steps in their
solutions to open-ended questions is an important element in math education.
Such feedback can help students correct their errors and ultimately lead to
improved learning outcomes. Most existing approaches for automated student
solution analysis and feedback require manually constructing cognitive models
and anticipating student errors for each question. This process requires
significant human effort and does not scale to most questions used in homework
and practices that do not come with this information. In this paper, we analyze
students' step-by-step solution processes to equation solving questions in an
attempt to scale up error diagnostics and feedback mechanisms developed for a
small number of questions to a much larger number of questions. Leveraging a
recent math expression encoding method, we represent each math operation
applied in solution steps as a transition in the math embedding vector space.
We use a dataset that contains student solution steps in the Cognitive Tutor
system to learn implicit and explicit representations of math operations. We
explore whether these representations can i) identify math operations a student
intends to perform in each solution step, regardless of whether they did it
correctly or not, and ii) select the appropriate feedback type for incorrect
steps. Experimental results show that our learned math operation
representations generalize well across different data distributions.
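
To make the transition view concrete, here is a minimal sketch: the operation applied between two consecutive solution steps is represented as the difference of the steps' embeddings, and a simple classifier over these transition vectors identifies the intended operation. The `encode_expression` stub below is a placeholder standing in for the learned math expression encoder the paper builds on; the toy data and operation labels are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder encoder: the paper uses a learned math expression encoder;
# here it is stubbed with a hash-seeded random projection, so predictions
# are illustrative only.
def encode_expression(expr: str, dim: int = 32) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(expr)) % (2**32))
    return rng.normal(size=dim)

def transition(prev_step: str, next_step: str) -> np.ndarray:
    """Represent the operation applied between two consecutive solution
    steps as a transition (difference) in the embedding space."""
    return encode_expression(next_step) - encode_expression(prev_step)

# Toy labelled step pairs: (previous step, next step, intended operation).
step_pairs = [
    ("2x + 3 = 7", "2x = 4", "subtract-constant"),
    ("x + 5 = 9",  "x = 4",  "subtract-constant"),
    ("3x = 12",    "x = 4",  "divide-by-coefficient"),
    ("5x = 10",    "x = 2",  "divide-by-coefficient"),
]
X = np.stack([transition(a, b) for a, b, _ in step_pairs])
y = [op for _, _, op in step_pairs]

# A simple classifier over transition vectors identifies which operation
# the student intended, whether or not it was executed correctly.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(transition("4x + 1 = 9", "4x = 8").reshape(1, -1)))
```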
Related papers
- MathAgent: Leveraging a Mixture-of-Math-Agent Framework for Real-World Multimodal Mathematical Error Detection [53.325457460187046]
We introduce MathAgent, a novel Mixture-of-Math-Agent framework designed specifically to address these challenges.
MathAgent decomposes error detection into three phases, each handled by a specialized agent.
We evaluate MathAgent on real-world educational data, demonstrating approximately 5% higher accuracy in error step identification.
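
A rough sketch of a three-phase agent pipeline in this spirit is shown below. The phase names (locate, classify, explain) and the `call_llm` stub are illustrative assumptions, not MathAgent's actual decomposition.

```python
# Minimal sketch of a three-phase error-detection pipeline. The phases and
# the call_llm stub are hypothetical; MathAgent's real agents may differ.
def call_llm(prompt: str) -> str:
    """Stand-in for a call to a multimodal LLM backend."""
    return "<model response for: " + prompt[:40] + "...>"

def locate_step(problem: str, steps: list[str]) -> str:
    return call_llm(f"Which step first goes wrong?\n{problem}\n" + "\n".join(steps))

def classify_error(suspect_step: str) -> str:
    return call_llm(f"Classify the error type in this step: {suspect_step}")

def explain_error(suspect_step: str, error_type: str) -> str:
    return call_llm(f"Explain the {error_type} error in: {suspect_step}")

def detect_error(problem: str, steps: list[str]) -> dict:
    step = locate_step(problem, steps)        # phase 1: localize the faulty step
    error_type = classify_error(step)         # phase 2: name the error
    explanation = explain_error(step, error_type)  # phase 3: explain it
    return {"step": step, "error_type": error_type, "explanation": explanation}
```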
arXiv Detail & Related papers (2025-03-23T16:25:08Z)
- MathMistake Checker: A Comprehensive Demonstration for Step-by-Step Math Problem Mistake Finding by Prompt-Guided LLMs [13.756898876556455]
We propose a novel system, MathMistake Checker, to automate step-by-step mistake finding in mathematical problems with lengthy answers.
The system aims to simplify grading, increase efficiency, and enhance learning experiences from a pedagogical perspective.
arXiv Detail & Related papers (2025-03-06T10:19:01Z)
- MATH-Perturb: Benchmarking LLMs' Math Reasoning Abilities against Hard Perturbations [90.07275414500154]
We observe significant performance drops on MATH-P-Hard across various models.
We also raise concerns about a novel form of memorization where models blindly apply learned problem-solving skills.
arXiv Detail & Related papers (2025-02-10T13:31:46Z)
- Do Language Models Exhibit the Same Cognitive Biases in Problem Solving as Human Learners? [140.9751389452011]
We study the biases of large language models (LLMs) in relation to those known in children when solving arithmetic word problems.
We generate a novel set of word problems for each of these tests, using a neuro-symbolic approach that enables fine-grained control over the problem features.
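
A minimal sketch of feature-controlled problem generation is shown below. The template, names, and feature knobs are invented for illustration; the paper's neuro-symbolic generator is considerably richer.

```python
import random

# Illustrative template-based generator with fine-grained control over
# problem features (number range, transfer verb, entities). Hypothetical.
TEMPLATE = "{name} has {a} {item}. {name} {verb} {b} more. How many {item} does {name} have now?"

def generate_problem(num_range=(2, 9), verb="buys", name="Ava", item="apples", seed=None):
    rng = random.Random(seed)
    a, b = rng.randint(*num_range), rng.randint(*num_range)
    text = TEMPLATE.format(name=name, a=a, item=item, verb=verb, b=b)
    return {"text": text, "answer": a + b,
            "features": {"num_range": num_range, "verb": verb}}

print(generate_problem(seed=0))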
arXiv Detail & Related papers (2024-01-31T18:48:20Z)
- Using machine learning to find exact analytic solutions to analytically posed physics problems [0.0]
We investigate the use of machine learning for solving analytic problems in theoretical physics.
In particular, symbolic regression (SR) has made rapid progress in recent years as a tool for fitting data with functions whose overall form is not known in advance.
We use a state-of-the-art SR package to demonstrate how an exact solution can be found and make an attempt at solving an unsolved physics problem.
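
As a rough illustration of the workflow, the sketch below runs symbolic regression on samples drawn from a known closed form and asks for a candidate exact expression. It uses PySR as one example SR package; this is an assumption for illustration and not necessarily the package used in the paper (PySR requires a working Julia backend).

```python
import numpy as np
from pysr import PySRRegressor  # example SR package; the paper's choice may differ

# Synthetic data from a known closed form, to illustrate recovering an
# exact analytic expression from numerical samples.
rng = np.random.default_rng(0)
X = rng.uniform(0.1, 3.0, size=(200, 1))
y = np.sin(X[:, 0]) / X[:, 0]

model = PySRRegressor(
    niterations=40,
    binary_operators=["+", "-", "*", "/"],
    unary_operators=["sin", "cos", "exp"],
)
model.fit(X, y)
print(model.sympy())  # candidate exact expression, e.g. sin(x0)/x0
```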
arXiv Detail & Related papers (2023-06-05T01:31:03Z)
- Interpretable Math Word Problem Solution Generation Via Step-by-step Planning [6.232269207752905]
We propose a step-by-step planning approach for intermediate solution generation.
Our approach first plans the next step by predicting the necessary math operation needed to proceed.
Experiments on the GSM8K dataset demonstrate that our approach improves the accuracy and interpretability of the solution.
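
The sketch below illustrates the plan-then-realize pattern on a toy equation: a planner picks the next math operation, and the step is then produced by applying it. The rule-based planner and the symbolic realization are stand-ins for the learned operation predictor and the language-model generation step described in the paper.

```python
import sympy as sp

x = sp.symbols("x")

def plan_operation(lhs, rhs):
    """Pick the next operation for a linear equation lhs = rhs.
    Stand-in for the learned operation predictor."""
    const = lhs.coeff(x, 0)       # constant term on the left
    if const != 0:
        return ("subtract", const)
    coeff = lhs.coeff(x, 1)       # coefficient of x
    if coeff != 1:
        return ("divide", coeff)
    return ("done", None)

def apply_step(lhs, rhs):
    """Realize the planned operation symbolically (in place of an LM)."""
    op, arg = plan_operation(lhs, rhs)
    if op == "subtract":
        return sp.simplify(lhs - arg), sp.simplify(rhs - arg), op
    if op == "divide":
        return sp.simplify(lhs / arg), sp.simplify(rhs / arg), op
    return lhs, rhs, op

lhs, rhs = 2 * x + 3, sp.Integer(7)
while True:
    lhs, rhs, op = apply_step(lhs, rhs)
    print(op, ":", sp.Eq(lhs, rhs))
    if op == "done":
        break
```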
arXiv Detail & Related papers (2023-06-01T15:16:18Z)
- Solving Math Word Problems by Combining Language Models With Symbolic Solvers [28.010617102877923]
Large language models (LLMs) can be combined with external tools to perform complex reasoning and calculation.
We propose an approach that combines an LLM that can incrementally formalize word problems as a set of variables and equations with an external symbolic solver.
Our approach achieves comparable accuracy to the original PAL on the GSM8K benchmark of math word problems and outperforms PAL by an absolute 20% on ALGEBRA.
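
The sketch below shows the hand-off this describes: the variables and equations an LLM might produce for a toy word problem are written out by hand, and SymPy plays the role of the external symbolic solver.

```python
import sympy as sp

# Problem: "Alice has twice as many apples as Bob. Together they have 18."
# The declarative formalization below is what the LLM would produce
# incrementally; the symbolic solver does the actual solving.
alice, bob = sp.symbols("alice bob")
equations = [
    sp.Eq(alice, 2 * bob),    # "twice as many as Bob"
    sp.Eq(alice + bob, 18),   # "together they have 18"
]
solution = sp.solve(equations, [alice, bob], dict=True)
print(solution)  # [{alice: 12, bob: 6}]
```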
arXiv Detail & Related papers (2023-04-16T04:16:06Z)
- Towards a Holistic Understanding of Mathematical Questions with Contrastive Pre-training [65.10741459705739]
We propose a novel contrastive pre-training approach for mathematical question representations, namely QuesCo.
We first design two-level question augmentations, including content-level and structure-level, which generate literally diverse question pairs with similar purposes.
Then, to fully exploit hierarchical information of knowledge concepts, we propose a knowledge hierarchy-aware rank strategy.
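
For intuition, the sketch below computes a generic InfoNCE-style contrastive loss that pulls each question toward its own augmentation and away from the other questions in the batch; QuesCo's actual objective additionally incorporates the knowledge-hierarchy-aware ranking, which is not shown.

```python
import numpy as np

def info_nce(anchors: np.ndarray, positives: np.ndarray, temperature: float = 0.1) -> float:
    """Generic InfoNCE loss: the i-th anchor (a question embedding) should be
    closest to its own augmentation and far from other questions in the batch."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # pairwise cosine similarities
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))     # match i-th anchor to i-th positive

# Toy batch: rows stand for embeddings of a question and of its
# content- or structure-level augmentation (from the question encoder).
rng = np.random.default_rng(0)
q = rng.normal(size=(8, 16))
q_aug = q + 0.05 * rng.normal(size=(8, 16))        # augmentations stay close
print(info_nce(q, q_aug))
```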
arXiv Detail & Related papers (2023-01-18T14:23:29Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
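
The sketch below shows the few-shot classification step in prototypical-network style: prototypes are averaged from a handful of instructor-labelled examples per feedback class, and each query is assigned to the nearest prototype. The meta-trained encoder that produces the embeddings is assumed and not shown.

```python
import numpy as np

def prototype_classify(support: np.ndarray, support_labels: np.ndarray,
                       query: np.ndarray) -> np.ndarray:
    """Average the few labelled examples per feedback class into prototypes,
    then assign each query embedding to the nearest prototype."""
    classes = np.unique(support_labels)
    prototypes = np.stack([support[support_labels == c].mean(axis=0) for c in classes])
    dists = ((query[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy example: 2 feedback classes, 3 labelled examples each, 4 queries.
rng = np.random.default_rng(1)
support = np.concatenate([rng.normal(0, 1, (3, 8)), rng.normal(3, 1, (3, 8))])
labels = np.array([0, 0, 0, 1, 1, 1])
queries = np.concatenate([rng.normal(0, 1, (2, 8)), rng.normal(3, 1, (2, 8))])
print(prototype_classify(support, labels, queries))  # likely [0 0 1 1]
```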
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- A Mutual Information Maximization Approach for the Spurious Solution Problem in Weakly Supervised Question Answering [60.768146126094955]
Weakly supervised question answering usually has only the final answers as supervision signals.
There may exist many spurious solutions that coincidentally derive the correct answer, but training on such solutions can hurt model performance.
We propose to explicitly exploit such semantic correlations by maximizing the mutual information between question-answer pairs and predicted solutions.
arXiv Detail & Related papers (2021-06-14T05:47:41Z)
- Measuring Mathematical Problem Solving With the MATH Dataset [55.4376028963537]
We introduce MATH, a dataset of 12,500 challenging competition mathematics problems.
Each problem has a full step-by-step solution which can be used to teach models to generate answer derivations and explanations.
We also contribute a large auxiliary pretraining dataset which helps teach models the fundamentals of mathematics.
arXiv Detail & Related papers (2021-03-05T18:59:39Z)
- Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider a problem known as multi-task learning, consisting of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions, so that they learn in their task specific domains while staying close to each other.
This facilitates cross-fertilization, in which data collected across different domains help improve the learning performance on each task.
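
The sketch below illustrates the coupling idea with a proximity penalty that pulls each task's regression weights toward their common mean, trained by plain gradient descent; the paper's actual formulation and algorithm differ in detail.

```python
import numpy as np

def cross_learn(Xs, ys, lam=1.0, lr=0.01, steps=2000):
    """Each task keeps its own weight vector, but a proximity penalty keeps
    the vectors close to their mean so the tasks inform each other."""
    d = Xs[0].shape[1]
    W = np.zeros((len(Xs), d))                    # one weight vector per task
    for _ in range(steps):
        mean_w = W.mean(axis=0)
        for t, (X, y) in enumerate(zip(Xs, ys)):
            grad_fit = 2 * X.T @ (X @ W[t] - y) / len(y)   # task-specific fit
            grad_prox = 2 * lam * (W[t] - mean_w)          # stay close to the others
            W[t] -= lr * (grad_fit + grad_prox)
    return W

# Three related regression tasks whose true weights are small perturbations
# of a shared vector.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
Xs = [rng.normal(size=(30, 2)) for _ in range(3)]
ys = [X @ (w_true + 0.1 * rng.normal(size=2)) for X in Xs]
print(cross_learn(Xs, ys).round(2))
```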
arXiv Detail & Related papers (2020-10-24T21:35:57Z)