UmBERTo-MTSA @ AcCompl-It: Improving Complexity and Acceptability
Prediction with Multi-task Learning on Self-Supervised Annotations
- URL: http://arxiv.org/abs/2011.05197v1
- Date: Tue, 10 Nov 2020 15:50:37 GMT
- Title: UmBERTo-MTSA @ AcCompl-It: Improving Complexity and Acceptability
Prediction with Multi-task Learning on Self-Supervised Annotations
- Authors: Gabriele Sarti
- Abstract summary: This work describes a self-supervised data augmentation approach used to improve learning models' performances when only a moderate amount of labeled data is available.
Neural language models are fine-tuned using this procedure in the context of the AcCompl-it shared task at EVALITA 2020.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work describes a self-supervised data augmentation approach used to
improve learning models' performances when only a moderate amount of labeled
data is available. Multiple copies of the original model are initially trained
on the downstream task. Their predictions are then used to annotate a large set
of unlabeled examples. Finally, multi-task training is performed on the
parallel annotations of the resulting training set, and final scores are
obtained by averaging annotator-specific head predictions. Neural language
models are fine-tuned using this procedure in the context of the AcCompl-it
shared task at EVALITA 2020, obtaining considerable improvements in prediction
quality.
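To make the procedure above concrete, here is a minimal sketch of the three steps (training several copies of the base model, annotating an unlabeled pool with each copy, then multi-task training on the parallel annotations and averaging the head predictions). Scikit-learn regressors stand in for the fine-tuned neural language models; all data and estimator choices are hypothetical assumptions, not the paper's setup.

```python
# Minimal sketch of the self-supervised multi-task annotation procedure
# from the abstract; sklearn regressors replace the fine-tuned neural LMs.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Ridge
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(200, 16))                         # labeled features (toy)
y_labeled = X_labeled[:, 0] + rng.normal(scale=0.1, size=200)  # gold scores (toy)
X_unlabeled = rng.normal(size=(2000, 16))                      # large unlabeled pool (toy)

# 1) Train several copies of the base model on the downstream task;
#    subsampling + different seeds make the copies differ, mimicking separate runs.
annotators = [
    GradientBoostingRegressor(subsample=0.8, random_state=seed).fit(X_labeled, y_labeled)
    for seed in range(5)
]

# 2) Each copy annotates the unlabeled examples, yielding one
#    parallel annotation column per copy.
parallel_labels = np.column_stack([m.predict(X_unlabeled) for m in annotators])

# 3) Multi-task training: one model with an output head per annotator,
#    fitted on the parallel annotations.
multitask = MultiOutputRegressor(Ridge()).fit(X_unlabeled, parallel_labels)

# 4) Final scores: average the annotator-specific head predictions.
final_scores = multitask.predict(X_labeled).mean(axis=1)
print(final_scores[:5])
```

Presumably, in the actual submission the "copies" are fine-tuned UmBERTo checkpoints and the multi-task model is a single encoder with one regression head per annotator; the averaging step is the same.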
Related papers
- ZZU-NLP at SIGHAN-2024 dimABSA Task: Aspect-Based Sentiment Analysis with Coarse-to-Fine In-context Learning [0.36332383102551763]
The DimABSA task requires fine-grained sentiment intensity prediction for restaurant reviews.
We propose a Coarse-to-Fine In-context Learning method based on the Baichuan2-7B model for the DimABSA task.
arXiv Detail & Related papers (2024-07-22T02:54:46Z) - On the Trade-off of Intra-/Inter-class Diversity for Supervised
Pre-training [72.8087629914444]
We study the impact of the trade-off between the intra-class diversity (the number of samples per class) and the inter-class diversity (the number of classes) of a supervised pre-training dataset.
With the size of the pre-training dataset fixed, the best downstream performance is achieved with a balance between intra- and inter-class diversity.
arXiv Detail & Related papers (2023-05-20T16:23:50Z) - ASPEST: Bridging the Gap Between Active Learning and Selective
Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
arXiv Detail & Related papers (2023-04-07T23:51:07Z) - Phoneme Segmentation Using Self-Supervised Speech Models [13.956691231452336]
We apply transfer learning to the task of phoneme segmentation and demonstrate the utility of representations learned in self-supervised pre-training for the task.
Our model extends transformer-style encoders with strategically placed convolutions that manipulate features learned in pre-training.
arXiv Detail & Related papers (2022-11-02T19:57:31Z) - A Generative Language Model for Few-shot Aspect-Based Sentiment Analysis [90.24921443175514]
We focus on aspect-based sentiment analysis, which involves extracting aspect terms and categories and predicting their corresponding polarities.
We propose to reformulate the extraction and prediction tasks into the sequence generation task, using a generative language model with unidirectional attention.
Our approach outperforms the previous state-of-the-art (based on BERT) on average performance by a large margin in both few-shot and full-shot settings.
arXiv Detail & Related papers (2022-04-11T18:31:53Z) - Uncertainty Estimation for Language Reward Models [5.33024001730262]
Language models can learn a range of capabilities from unsupervised training on text corpora.
It is often easier for humans to choose between options than to provide labeled data, and prior work has achieved state-of-the-art performance by training a reward model from such preference comparisons.
We seek to address these problems via uncertainty estimation, which can improve sample efficiency and robustness using active learning and risk-averse reinforcement learning.
arXiv Detail & Related papers (2022-03-14T20:13:21Z) - Optimizing Active Learning for Low Annotation Budgets [6.753808772846254]
In deep learning, active learning is usually implemented as an iterative process in which successive deep models are updated via fine tuning.
We tackle this issue by using an approach inspired by transfer learning.
We introduce a novel acquisition function which exploits the iterative nature of the AL process to select samples in a more robust fashion.
arXiv Detail & Related papers (2022-01-18T18:53:10Z) - Improved Fine-tuning by Leveraging Pre-training Data: Theory and
Practice [52.11183787786718]
Fine-tuning a pre-trained model on the target data is widely used in many deep learning applications.
Recent studies have empirically shown that training from scratch can achieve final performance that is no worse than this pre-training strategy.
We propose a novel selection strategy to select a subset from pre-training data to help improve the generalization on the target task.
arXiv Detail & Related papers (2021-11-24T06:18:32Z) - Few-shot learning through contextual data augmentation [74.20290390065475]
Machine translation models need to adapt to new data to maintain their performance over time.
We show that adaptation on the scale of one to five examples is possible.
Our model reports better accuracy scores than a reference system trained on an average of 313 parallel examples.
arXiv Detail & Related papers (2021-03-31T09:05:43Z) - Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
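As context for the prototype update mentioned in the summary above, the sketch below shows a confidence-weighted refinement of class prototypes with unlabeled queries. The softmax-over-distances confidence used here is a simple stand-in for the meta-learned confidence in that paper; all names and the toy data are hypothetical.

```python
# Hypothetical sketch: refine few-shot class prototypes with a
# confidence-weighted mean of unlabeled query embeddings.
import numpy as np

def refine_prototypes(prototypes, queries, temperature=1.0):
    """prototypes: (C, D) support-set class means; queries: (Q, D) query embeddings."""
    # Stand-in confidence: softmax over negative squared distances to each prototype.
    dists = ((queries[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)  # (Q, C)
    conf = np.exp(-dists / temperature)
    conf /= conf.sum(axis=1, keepdims=True)                                # rows sum to 1
    # Each prototype becomes a weighted average of itself (weight 1) and the
    # queries, each weighted by its confidence for that class.
    weighted = conf.T @ queries                      # (C, D) confidence-weighted query sums
    norm = conf.sum(axis=0)[:, None]                 # (C, 1) total confidence per class
    return (prototypes + weighted) / (1.0 + norm)

# Toy usage
protos = np.zeros((3, 4))
queries = np.random.default_rng(1).normal(size=(10, 4))
print(refine_prototypes(protos, queries).shape)  # -> (3, 4)
```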