An Empirical Study of Finding Similar Exercises
- URL: http://arxiv.org/abs/2111.08322v1
- Date: Tue, 16 Nov 2021 09:39:14 GMT
- Title: An Empirical Study of Finding Similar Exercises
- Authors: Tongwen Huang and Xihua Li
- Abstract summary: We release a Chinese education pre-trained language model, BERT$_{Edu}$, for the label-scarce dataset.
We propose a very effective MoE-enhanced multi-task model for the FSE task to attain a better understanding of exercises.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Education artificial intelligence aims to benefit tasks in the
education domain, such as intelligent test-paper generation and consolidation
exercises, where the main underlying technique is matching exercises, known as
the finding-similar-exercises (FSE) problem. Most approaches emphasize the
model's ability to represent an exercise; unfortunately, many challenges
remain, such as data scarcity, insufficient understanding of exercises, and
high label noise. We release a Chinese education pre-trained language model,
BERT$_{Edu}$, for the label-scarce dataset and introduce exercise
normalization to overcome the diversity of mathematical formulas and terms in
exercises. We discover new auxiliary tasks in an innovative way, based on
problem-solving ideas, and propose a very effective MoE-enhanced multi-task
model for the FSE task to attain a better understanding of exercises. In
addition, confidence learning is used to prune the training set and overcome
high noise in the labeled data. Experiments show that the methods proposed in
this paper are very effective.
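The paper does not include code, but the MoE-enhanced multi-task idea can be illustrated with a minimal multi-gate mixture-of-experts forward pass in the style of MMoE. Everything here is an assumption for illustration: the class name `MMoE`, the expert count, the dimensions, and the use of random weights and plain numpy stand in for the paper's actual BERT$_{Edu}$-based model and training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class MMoE:
    """Sketch of a multi-gate mixture-of-experts forward pass.

    Each task has its own gate that mixes a shared pool of experts,
    so auxiliary tasks can share exercise representations while
    weighting the experts differently. Inference only; weights are random.
    """

    def __init__(self, d_in, d_expert, n_experts, n_tasks):
        self.experts = [rng.normal(size=(d_in, d_expert)) for _ in range(n_experts)]
        self.gates = [rng.normal(size=(d_in, n_experts)) for _ in range(n_tasks)]
        self.towers = [rng.normal(size=(d_expert, 1)) for _ in range(n_tasks)]

    def forward(self, x):
        # Every expert transforms the shared exercise embedding: (B, E, d_expert).
        expert_out = np.stack([np.tanh(x @ W) for W in self.experts], axis=1)
        outs = []
        for gate, tower in zip(self.gates, self.towers):
            w = softmax(x @ gate)                              # (B, E) per-task gate weights
            mixed = (w[:, :, None] * expert_out).sum(axis=1)   # gated expert mixture, (B, d_expert)
            outs.append(1.0 / (1.0 + np.exp(-(mixed @ tower))))  # per-task probability, (B, 1)
        return outs
```

In a real FSE setup, `x` would be the BERT$_{Edu}$ embedding of an exercise pair, one task head would predict similarity, and the remaining heads would serve the auxiliary problem-solving-idea tasks described in the abstract.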
Related papers
- Intelligent Repetition Counting for Unseen Exercises: A Few-Shot Learning Approach with Sensor Signals [0.4998632546280975]
This study develops a method to automatically count exercise repetitions by analyzing IMU signals.
We propose a repetition counting technique utilizing a deep metric-based few-shot learning approach.
We show an 86.8% probability of accurately counting ten or more repetitions within a single set across 28 different exercises.
arXiv Detail & Related papers (2024-10-01T05:04:40Z)
- Model-Based Transfer Learning for Contextual Reinforcement Learning [0.5597941107270215]
We show how to systematically select good tasks to train, maximizing overall performance across a range of tasks.
The key idea behind our approach is to explicitly model the performance loss incurred by transferring a trained model.
We experimentally validate our methods using urban traffic and standard control benchmarks.
arXiv Detail & Related papers (2024-08-08T14:46:01Z)
- LLM Critics Help Catch Bugs in Mathematics: Towards a Better Mathematical Verifier with Natural Language Feedback [71.95402654982095]
We propose Math-Minos, a natural language feedback-enhanced verifier.
Our experiments reveal that a small set of natural language feedback can significantly boost the performance of the verifier.
arXiv Detail & Related papers (2024-06-20T06:42:27Z)
- Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning [59.98430756337374]
Supervised fine-tuning enhances the problem-solving abilities of language models across various mathematical reasoning tasks.
Our work introduces a novel technique aimed at cultivating a deeper understanding of the training problems at hand.
We propose reflective augmentation, a method that embeds problem reflection into each training instance.
arXiv Detail & Related papers (2024-06-17T19:42:22Z)
- Data-CUBE: Data Curriculum for Instruction-based Sentence Representation Learning [85.66907881270785]
We propose a data curriculum method, namely Data-CUBE, that arranges the orders of all the multi-task data for training.
In the task level, we aim to find the optimal task order to minimize the total cross-task interference risk.
In the instance level, we measure the difficulty of all instances per task, then divide them into the easy-to-difficult mini-batches for training.
arXiv Detail & Related papers (2024-01-07T18:12:20Z)
- Finding Similar Exercises in Retrieval Manner [11.694650259195756]
How to find similar exercises for a given exercise becomes a crucial technical problem.
We define "similar exercises" as a retrieval process that finds a set of similar exercises through recall, ranking, and re-ranking procedures.
A comprehensive representation of the semantic information of exercises is obtained through representation learning.
arXiv Detail & Related papers (2023-03-15T01:40:32Z)
- Effective Vision Transformer Training: A Data-Centric Perspective [24.02488085447691]
Vision Transformers (ViTs) have shown promising performance compared with Convolutional Neural Networks (CNNs).
In this paper, we define several metrics, including Dynamic Data Proportion (DDP) and Knowledge Assimilation Rate (KAR).
We propose a novel data-centric ViT training framework that dynamically measures the "difficulty" of training samples and generates "effective" samples for models at different training stages.
arXiv Detail & Related papers (2022-09-29T17:59:46Z)
- Training Data is More Valuable than You Think: A Simple and Effective Method by Retrieving from Training Data [82.92758444543689]
Retrieval-based methods have been shown to be effective in NLP tasks via introducing external knowledge.
Surprisingly, we found that REtrieving from the traINing datA (REINA) alone can lead to significant gains on multiple NLG and NLU tasks.
Experimental results show that this simple method can achieve significantly better performance on a variety of NLU and NLG tasks.
arXiv Detail & Related papers (2022-03-16T17:37:27Z)
- On Modality Bias Recognition and Reduction [70.69194431713825]
We study the modality bias problem in the context of multi-modal classification.
We propose a plug-and-play loss function method, whereby the feature space for each label is adaptively learned.
Our method yields remarkable performance improvements compared with the baselines.
arXiv Detail & Related papers (2022-02-25T13:47:09Z)
- Embedding Adaptation is Still Needed for Few-Shot Learning [25.4156194645678]
ATG is a principled clustering method for defining train and test tasksets without additional human knowledge.
We empirically demonstrate the effectiveness of ATG in generating tasksets that are easier, in-between, or harder than existing benchmarks.
We leverage our generated tasksets to shed new light on few-shot classification: gradient-based methods can outperform metric-based ones when transfer is most challenging.
arXiv Detail & Related papers (2021-04-15T06:00:04Z)
- Parrot: Data-Driven Behavioral Priors for Reinforcement Learning [79.32403825036792]
We propose a method for pre-training behavioral priors that can capture complex input-output relationships observed in successful trials.
We show how this learned prior can be used for rapidly learning new tasks without impeding the RL agent's ability to try out novel behaviors.
arXiv Detail & Related papers (2020-11-19T18:47:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.