The effects of data size on Automated Essay Scoring engines
- URL: http://arxiv.org/abs/2108.13275v1
- Date: Mon, 30 Aug 2021 14:39:59 GMT
- Title: The effects of data size on Automated Essay Scoring engines
- Authors: Christopher Ormerod, Amir Jafari, Susan Lottridge, Milan Patel, Amy
Harris, and Paul van Wamelen
- Abstract summary: We study the effects of data size and quality on the performance of automated essay scoring engines.
This work seeks to inform how to construct better training data for neural networks that will be used in production.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We study the effects of data size and quality on the performance of
Automated Essay Scoring (AES) engines that are designed in accordance with three
different paradigms: a frequency- and hand-crafted-feature-based model, a
recurrent neural network model, and a pretrained transformer-based language
model that is fine-tuned for classification. We expect each type of model to
benefit from the size and the quality of the training data in very different
ways. Standard practices for developing training data for AES engines were
established with feature-based methods in mind. However, since neural networks
are increasingly being considered in production settings, this work seeks to
inform how to establish better training data for neural networks that will be
used in production.
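The transformer paradigm described above amounts to standard sequence-classification fine-tuning. A minimal sketch with the Hugging Face transformers library follows; the checkpoint name, the 0-4 score scale, and the toy essays are illustrative assumptions, not details from the paper.
```python
# Minimal sketch: fine-tuning a pretrained transformer to score essays.
# Assumptions (not from the paper): bert-base-uncased, integer scores 0-4,
# and a toy two-essay batch.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=5  # one class per score point 0..4
)

essays = ["The essay text goes here ...", "Another essay ..."]
scores = torch.tensor([3, 1])  # hypothetical human-assigned scores

batch = tokenizer(essays, padding=True, truncation=True,
                  max_length=512, return_tensors="pt")

optimizer = AdamW(model.parameters(), lr=2e-5)
model.train()
outputs = model(**batch, labels=scores)  # cross-entropy over score points
outputs.loss.backward()
optimizer.step()
```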
Related papers
- Text Classification: Neural Networks VS Machine Learning Models VS Pre-trained Models [0.0]
We present a comparison between different techniques to perform text classification.
We take into consideration seven pre-trained models, three standard neural networks and three machine learning models.
For standard neural networks and machine learning models we also compare two embedding techniques: TF-IDF and GloVe.
arXiv Detail & Related papers (2024-12-30T15:44:05Z)
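For the classical baselines compared in the entry above, a TF-IDF representation feeding a linear classifier is the standard scikit-learn pattern. The snippet below is a generic illustration with invented toy texts and labels, not the paper's exact setup.
```python
# Generic TF-IDF + linear-classifier baseline (illustrative data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great service and food", "terrible experience", "loved it", "awful"]
labels = [1, 0, 1, 0]  # hypothetical sentiment labels

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["pretty good overall"]))
```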
- Transferable Post-training via Inverse Value Learning [83.75002867411263]
We propose modeling changes at the logits level during post-training using a separate neural network (i.e., the value network).
After training this network on a small base model using demonstrations, it can be seamlessly integrated with other pre-trained models during inference.
We demonstrate that the resulting value network has broad transferability across pre-trained models of different parameter sizes.
arXiv Detail & Related papers (2024-10-28T13:48:43Z)
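One way to read the logits-level idea above: a small value network learns a correction that is added to a frozen base model's output logits at inference. The sketch below assumes the correction is computed from the base logits themselves; the paper's actual parameterization may well differ.
```python
# Sketch of logits-level post-training: a separate "value network" produces
# a correction that is added to a frozen base model's logits at inference.
# Feeding the base logits into the value network is an assumption made here
# for illustration only.
import torch
import torch.nn as nn

vocab_size = 32000

value_net = nn.Sequential(          # small learned correction network
    nn.Linear(vocab_size, 256), nn.ReLU(), nn.Linear(256, vocab_size)
)

def combined_logits(base_model, input_ids):
    with torch.no_grad():                 # the base model stays frozen
        base = base_model(input_ids)      # (batch, vocab_size) logits
    return base + value_net(base)         # apply the learned delta
```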
- On the Effect of Purely Synthetic Training Data for Different Automatic Speech Recognition Architectures [19.823015917720284]
We evaluate the utility of synthetic data for training automatic speech recognition.
We reproduce the original training data with text-to-speech (TTS) and train ASR systems solely on the synthetic data.
We show that the TTS models generalize well, even when training scores indicate overfitting.
arXiv Detail & Related papers (2024-07-25T12:44:45Z)
- Defect Classification in Additive Manufacturing Using CNN-Based Vision Processing [76.72662577101988]
This paper examines two scenarios: first, using convolutional neural networks (CNNs) to accurately classify defects in an image dataset from AM and second, applying active learning techniques to the developed classification model.
This allows the construction of a human-in-the-loop mechanism that reduces the amount of labeled data required to train the model and helps generate further training data.
arXiv Detail & Related papers (2023-07-14T14:36:58Z)
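A common way to realize the human-in-the-loop mechanism mentioned above is uncertainty sampling: the CNN's most ambiguous unlabeled images are routed to a human annotator. The entropy-based acquisition rule below is a generic active-learning heuristic, not necessarily the paper's exact criterion.
```python
# Entropy-based acquisition: pick unlabeled images the model is least sure of.
# Generic active-learning heuristic, not necessarily the paper's rule.
import torch

def select_for_labeling(model, unlabeled_images, k=10):
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(unlabeled_images), dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return entropy.topk(k).indices  # indices to send to the human annotator
```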
- Advancing Reacting Flow Simulations with Data-Driven Models [50.9598607067535]
Key to effective use of machine learning tools in multi-physics problems is to couple them to physical and computer models.
The present chapter reviews some of the open opportunities for the application of data-driven reduced-order modeling of combustion systems.
arXiv Detail & Related papers (2022-09-05T16:48:34Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
It handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
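For a matrix-shaped input X, a Rank-R unit of the kind described above replaces the usual flattened weight vector with CP factors, computing sigma(sum_r a_r^T X b_r). A minimal PyTorch sketch follows; the shapes, initialization, and sigmoid nonlinearity are illustrative choices, not the paper's exact specification.
```python
# One Rank-R hidden unit for matrix-shaped inputs X (I x J): the weight
# "matrix" is constrained to the rank-R CP form sum_r a_r b_r^T, so the
# unit computes sigmoid(sum_r a_r^T X b_r + bias).
import torch
import torch.nn as nn

class RankRUnit(nn.Module):
    def __init__(self, I, J, R):
        super().__init__()
        self.A = nn.Parameter(torch.randn(R, I) * 0.01)  # mode-1 factors
        self.B = nn.Parameter(torch.randn(R, J) * 0.01)  # mode-2 factors
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, X):  # X: (batch, I, J)
        # a_r^T X b_r for every rank component r, then sum over r
        proj = torch.einsum("ri,bij,rj->br", self.A, X, self.B)
        return torch.sigmoid(proj.sum(dim=1) + self.bias)
```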
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
- Using GPT-2 to Create Synthetic Data to Improve the Prediction Performance of NLP Machine Learning Classification Models [0.0]
It is becoming common practice to utilize synthetic data to boost the performance of Machine Learning Models.
I used a Yelp pizza restaurant reviews dataset and transfer learning to fine-tune a pre-trained GPT-2 Transformer Model to generate synthetic pizza reviews data.
I then combined this synthetic data with the original genuine data to create a new joint dataset.
arXiv Detail & Related papers (2021-04-02T20:20:42Z)
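Once a GPT-2 model is fine-tuned as in the entry above, producing synthetic reviews is ordinary sampling from the language model. The sketch below samples from the stock gpt2 checkpoint with an invented prompt; it omits the Yelp fine-tuning step the paper performs first.
```python
# Sampling synthetic reviews from GPT-2 (stock checkpoint shown; the paper
# fine-tunes on Yelp pizza reviews first, which is omitted here).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
samples = generator(
    "The pizza at this place",  # illustrative prompt
    max_new_tokens=40,
    num_return_sequences=3,
    do_sample=True,
    top_p=0.95,
)
for s in samples:
    print(s["generated_text"])
```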
- BENDR: using transformers and a contrastive self-supervised learning task to learn from massive amounts of EEG data [15.71234837305808]
We consider how to adapt techniques and architectures used for language modelling (LM) to encephalography modelling (EM).
We find that a single pre-trained model is capable of modelling completely novel raw EEG sequences recorded with differing hardware.
Both the internal representations of this model and the entire architecture can be fine-tuned to a variety of downstream BCI and EEG classification tasks.
arXiv Detail & Related papers (2021-01-28T14:54:01Z)
- Few-Shot Named Entity Recognition: A Comprehensive Study [92.40991050806544]
We investigate three schemes to improve the model generalization ability for few-shot settings.
We perform empirical comparisons on 10 public NER datasets with various proportions of labeled data.
We create new state-of-the-art results on both few-shot and training-free settings.
arXiv Detail & Related papers (2020-12-29T23:43:16Z)
- Learning Queuing Networks by Recurrent Neural Networks [0.0]
We propose a machine-learning approach to derive performance models from data.
We exploit a deterministic approximation of their average dynamics in terms of a compact system of ordinary differential equations.
This allows for an interpretable structure of the neural network, which can be trained from system measurements to yield a white-box parameterized model.
arXiv Detail & Related papers (2020-02-25T10:56:47Z)
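For a single queue, the deterministic mean-dynamics approximation referenced in the last entry is a fluid ODE such as dx/dt = lambda - mu * min(x, 1), whose arrival and service rates are exactly the kind of interpretable parameters a white-box model can expose. A toy integration follows; the rate values are invented for illustration.
```python
# Fluid (mean-dynamics) ODE for a single queue: dx/dt = lam - mu * min(x, 1).
# lam (arrival rate) and mu (service rate) are the interpretable parameters
# such a white-box model would fit; the values below are invented.
import numpy as np
from scipy.integrate import solve_ivp

lam, mu = 0.8, 1.0

def queue_ode(t, x):
    return lam - mu * min(x[0], 1.0)

sol = solve_ivp(queue_ode, (0.0, 20.0), [5.0], t_eval=np.linspace(0, 20, 5))
print(sol.y[0])  # mean queue length draining toward the lam/mu equilibrium
```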
This list is automatically generated from the titles and abstracts of the papers on this site.