Generate labeled training data using Prompt Programming and GPT-3. An
example of Big Five Personality Classification
- URL: http://arxiv.org/abs/2303.12279v1
- Date: Wed, 22 Mar 2023 03:12:40 GMT
- Title: Generate labeled training data using Prompt Programming and GPT-3. An
example of Big Five Personality Classification
- Authors: Eason Chen
- Abstract summary: We generate 25,000 conversations labeled with Big Five Personality traits using prompt programming with GPT-3.
Then we train Big Five classification models on these data and evaluate them on 2,500 samples from the generated dialogues and from real conversational datasets labeled with Big Five traits by human annotators.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We generated 25,000 conversations labeled with Big Five Personality traits
using prompt programming with GPT-3. We then trained Big Five classification models
on these data and evaluated them on 2,500 samples from the generated dialogues and
from real conversational datasets labeled with Big Five traits by human annotators. The
results indicate that this approach is promising for creating effective
training data. Next, we compare the performance of different training approaches
and models. Our results suggest that using Adapter-Transformers with transfer
learning from a pre-trained RoBERTa sentiment analysis model performs best
with the generated data. Our best model obtained an accuracy of 0.71 on the
generated data and 0.65 on the real datasets. Finally, we discuss the potential
limitations of this approach and its confidence metric.
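As a concrete illustration of the data-generation step, the sketch below conditions a GPT-3 completion request on a randomly sampled Big Five profile and keeps that profile as the label. This is a minimal sketch, not the paper's actual prompts: the prompt wording, the high/low trait-sampling scheme, and the model name are assumptions.

```python
# Minimal sketch of prompt-programmed data generation. Assumptions (not from
# the paper): the prompt template, the high/low trait sampling, and the
# "text-davinci-003" engine name; the abstract only says "GPT-3".
import random
import openai  # legacy pre-1.0 Completion API, e.g. pip install openai==0.28.1

openai.api_key = "YOUR_API_KEY"  # placeholder

TRAITS = ["Openness", "Conscientiousness", "Extraversion",
          "Agreeableness", "Neuroticism"]

def sample_label():
    """Randomly assign a high/low level to each Big Five trait."""
    return {t: random.choice(["high", "low"]) for t in TRAITS}

def build_prompt(label):
    """Ask for a short dialogue whose Speaker A expresses the sampled profile."""
    profile = ", ".join(f"{level} {trait}" for trait, level in label.items())
    return ("Write a short two-person conversation. Speaker A has the following "
            f"personality: {profile}. The dialogue should reflect these traits.\n\n"
            "Conversation:\n")

def generate_example():
    label = sample_label()
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=build_prompt(label),
        max_tokens=256,
        temperature=0.9,
    )
    return {"text": resp.choices[0].text.strip(), "label": label}

if __name__ == "__main__":
    dataset = [generate_example() for _ in range(5)]  # scale toward 25,000 in practice
    print(dataset[0])
```

Each generated conversation inherits the sampled profile as its label, which is what makes the downstream classifiers trainable without human annotation.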
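The best-performing configuration reported in the abstract combines Adapter-Transformers with transfer learning from a RoBERTa sentiment analysis model. The sketch below shows one plausible setup using the adapters library (the successor to adapter-transformers); the sentiment checkpoint, the one-adapter-per-trait layout, and the binary high/low labels are assumptions, not details taken from the paper.

```python
# Hedged sketch: adapter-based fine-tuning on top of a sentiment RoBERTa model.
# Assumptions (not from the paper): the "cardiffnlp/twitter-roberta-base-sentiment"
# checkpoint, one binary adapter/head per trait, and the training hyperparameters.
import torch
from transformers import AutoTokenizer
from adapters import AutoAdapterModel  # pip install adapters

CHECKPOINT = "cardiffnlp/twitter-roberta-base-sentiment"

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoAdapterModel.from_pretrained(CHECKPOINT)

# One adapter plus classification head per trait; Extraversion shown here.
model.add_adapter("extraversion")
model.add_classification_head("extraversion", num_labels=2)  # high vs. low
model.train_adapter("extraversion")       # freeze the backbone, train only the adapter
model.set_active_adapters("extraversion")
model.train()

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def train_step(texts, labels):
    """One gradient step on a mini-batch of (conversation, high/low) pairs."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    logits = model(**batch).logits
    loss = torch.nn.functional.cross_entropy(logits, torch.tensor(labels))
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

# Toy batch: 1 = high Extraversion, 0 = low Extraversion.
print(train_step(
    ["A: Let's throw a party this weekend! B: Count me in, I'll invite everyone.",
     "A: Want to join the meetup? B: I'd rather stay home and read."],
    [1, 0],
))
```

Training only the adapter keeps the sentiment-pretrained backbone frozen, which is one way the transfer from sentiment analysis to personality classification could be realized; the paper's actual training details may differ.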
Related papers
- Generating Realistic Tabular Data with Large Language Models [49.03536886067729]
Large language models (LLMs) have been used for diverse tasks, but they do not capture the correct correlation between features and the target variable.
We propose an LLM-based method with three important improvements to correctly capture the ground-truth feature-class correlation in the real data.
Our experiments show that our method significantly outperforms 10 SOTA baselines on 20 datasets in downstream tasks.
arXiv Detail & Related papers (2024-10-29T04:14:32Z)
- Self-Training with Pseudo-Label Scorer for Aspect Sentiment Quad Prediction [54.23208041792073]
Aspect Sentiment Quad Prediction (ASQP) aims to predict all quads (aspect term, aspect category, opinion term, sentiment polarity) for a given review.
A key challenge in the ASQP task is the scarcity of labeled data, which limits the performance of existing methods.
We propose a self-training framework with a pseudo-label scorer, wherein a scorer assesses the match between reviews and their pseudo-labels.
arXiv Detail & Related papers (2024-06-26T05:30:21Z)
- Improving Classification Performance With Human Feedback: Label a few, we label the rest [2.7386128680964408]
This paper focuses on understanding how a continuous feedback loop can refine models, thereby enhancing their accuracy, recall, and precision.
We benchmark this approach on the Financial Phrasebank, Banking, Craigslist, TREC, and Amazon Reviews datasets to show that, with just a few labeled examples, we can surpass the accuracy of zero-shot large language models.
arXiv Detail & Related papers (2024-01-17T19:13:05Z)
- Efficient Grammatical Error Correction Via Multi-Task Training and Optimized Training Schedule [55.08778142798106]
We propose auxiliary tasks that exploit the alignment between the original and corrected sentences.
We formulate each task as a sequence-to-sequence problem and perform multi-task training.
We find that the order of datasets used for training and even individual instances within a dataset may have important effects on the final performance.
arXiv Detail & Related papers (2023-11-20T14:50:12Z)
- Investigating Pre-trained Language Models on Cross-Domain Datasets, a Step Closer to General AI [0.8889304968879164]
We investigate the ability of pre-trained language models to generalize to different non-language tasks.
The four pre-trained models that we used, T5, BART, BERT, and GPT-2, achieve outstanding results.
arXiv Detail & Related papers (2023-06-21T11:55:17Z)
- T5Score: Discriminative Fine-tuning of Generative Evaluation Metrics [94.69907794006826]
We present a framework that combines the best of both worlds, using both supervised and unsupervised signals from whatever data we have available.
We operationalize this idea by training T5Score, a metric that uses these training signals with mT5 as the backbone.
T5Score achieves the best performance on all datasets against existing top-scoring metrics at the segment level.
arXiv Detail & Related papers (2022-12-12T06:29:04Z)
- Unifying Language Learning Paradigms [96.35981503087567]
We present a unified framework for pre-training models that are universally effective across datasets and setups.
We show how different pre-training objectives can be cast as one another and how interpolating between different objectives can be effective.
Our model also achieves strong results in in-context learning, outperforming 175B GPT-3 on zero-shot SuperGLUE and tripling the performance of T5-XXL on one-shot summarization.
arXiv Detail & Related papers (2022-05-10T19:32:20Z)
- Few-shot learning through contextual data augmentation [74.20290390065475]
Machine translation models need to adapt to new data to maintain their performance over time.
We show that adaptation on the scale of one to five examples is possible.
Our model reports better accuracy scores than a reference system trained with, on average, 313 parallel examples.
arXiv Detail & Related papers (2021-03-31T09:05:43Z)
- Chatbot Interaction with Artificial Intelligence: Human Data Augmentation with T5 and Language Transformer Ensemble for Text Classification [2.492300648514128]
We present the Chatbot Interaction with Artificial Intelligence (CI-AI) framework as an approach to training deep learning chatbots for task classification.
The intelligent system augments human-sourced data via artificial paraphrasing to generate a large set of training data.
We find that all models improve when the training data is augmented by the T5 model; a minimal sketch of such augmentation follows this list.
arXiv Detail & Related papers (2020-10-12T19:37:18Z)
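As a rough illustration of the paraphrase-based augmentation described in the last entry above, the sketch below generates several paraphrases of a labeled training sentence with a paraphrase-tuned T5 model. The checkpoint name and the "paraphrase:" prefix are assumptions, not the CI-AI paper's actual pipeline.

```python
# Hypothetical T5 paraphrase augmentation; each paraphrase keeps the original label.
# Assumption: "Vamsi/T5_Paraphrase_Paws" (or any paraphrase-tuned T5) as the checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

CHECKPOINT = "Vamsi/T5_Paraphrase_Paws"

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSeq2SeqLM.from_pretrained(CHECKPOINT)

def augment(text, n=3):
    """Return up to n paraphrases of a training sentence."""
    inputs = tokenizer(f"paraphrase: {text}", return_tensors="pt")
    outputs = model.generate(
        **inputs,
        num_beams=max(n, 5),        # beam search so several distinct outputs exist
        num_return_sequences=n,
        max_length=64,
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

print(augment("Set an alarm for seven in the morning."))
```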
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.