Transfer Learning for Low-Resource Sentiment Analysis
- URL: http://arxiv.org/abs/2304.04703v1
- Date: Mon, 10 Apr 2023 16:44:44 GMT
- Title: Transfer Learning for Low-Resource Sentiment Analysis
- Authors: Razhan Hameed and Sina Ahmadi and Fatemeh Daneshfar
- Abstract summary: This paper describes the collection and annotation of a dataset for sentiment analysis of Central Kurdish.
We explore a few classical machine learning and neural network-based techniques for this task.
- Score: 1.2891210250935146
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sentiment analysis is the process of identifying and extracting subjective
information from text. Despite advances in automated cross-lingual approaches,
the implementation and evaluation of sentiment analysis systems require
language-specific data to account for various sociocultural and linguistic
peculiarities. In this paper, we describe the collection and annotation of a
dataset for sentiment analysis of Central Kurdish. We explore a few classical
machine learning and neural network-based techniques for this task.
Additionally, we employ a transfer-learning approach that leverages pretrained
models for data augmentation. We demonstrate that data augmentation achieves a
high F$_1$ score and accuracy despite the difficulty of the task.
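The augmentation idea in the abstract, generating label-preserving variants of scarce training sentences, can be sketched in a few lines. This is a minimal, hypothetical illustration: the small substitution table stands in for the suggestions a pretrained masked language model would produce, since the paper's actual model and augmentation procedure are not detailed here.

```python
import random

# Toy stand-in for model-suggested, label-preserving replacements.
# (Hypothetical table; a pretrained LM would supply these in practice.)
SUBSTITUTIONS = {
    "good": ["great", "fine"],
    "bad": ["poor", "awful"],
    "movie": ["film"],
}

def augment(sentence: str, rng: random.Random, p: float = 0.3) -> str:
    """Replace each token that has known substitutes with probability p."""
    out = []
    for tok in sentence.split():
        subs = SUBSTITUTIONS.get(tok.lower())
        out.append(rng.choice(subs) if subs and rng.random() < p else tok)
    return " ".join(out)

rng = random.Random(0)
augmented = [augment("the movie was good", rng) for _ in range(3)]
```

Each call yields a sentence with the same length and sentiment label, so the augmented copies can be added directly to the training set.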
Related papers
- Evaluating and explaining training strategies for zero-shot cross-lingual news sentiment analysis [8.770572911942635]
We introduce novel evaluation datasets in several less-resourced languages.
We experiment with a range of approaches including the use of machine translation.
We show that language similarity is not in itself sufficient for predicting the success of cross-lingual transfer.
arXiv Detail & Related papers (2024-09-30T07:59:41Z)
- Uzbek Sentiment Analysis based on local Restaurant Reviews [0.0]
We present work on collecting restaurant review data as a sentiment analysis dataset for the Uzbek language.
The paper details how the data was collected, how it was pre-processed to improve quality, and the experimental setups for the evaluation process.
arXiv Detail & Related papers (2022-05-31T16:21:00Z)
- Cross-lingual Lifelong Learning [53.06904052325966]
We present a principled Cross-lingual Continual Learning (CCL) evaluation paradigm.
We provide insights into what makes multilingual sequential learning particularly challenging.
The implications of this analysis include a recipe for how to measure and balance different cross-lingual continual learning desiderata.
arXiv Detail & Related papers (2022-05-23T09:25:43Z)
- A Dataset and BERT-based Models for Targeted Sentiment Analysis on Turkish Texts [0.0]
We present an annotated Turkish dataset suitable for targeted sentiment analysis.
We propose BERT-based models with different architectures to accomplish the task of targeted sentiment analysis.
arXiv Detail & Related papers (2022-05-09T10:57:39Z)
- Human-in-the-Loop Disinformation Detection: Stance, Sentiment, or Something Else? [93.91375268580806]
Both politics and pandemics have recently provided ample motivation for the development of machine learning-enabled disinformation (a.k.a. fake news) detection algorithms.
Existing literature has focused primarily on the fully-automated case, but the resulting techniques cannot reliably detect disinformation on the varied topics, sources, and time scales required for military applications.
By leveraging an already-available analyst as a human-in-the-loop, canonical machine learning techniques of sentiment analysis, aspect-based sentiment analysis, and stance detection become plausible methods to use for a partially-automated disinformation detection system.
arXiv Detail & Related papers (2021-11-09T13:30:34Z)
- Leveraging Pre-trained Language Model for Speech Sentiment Analysis [58.78839114092951]
We explore the use of pre-trained language models to learn sentiment information of written texts for speech sentiment analysis.
We propose a pseudo-label-based semi-supervised training strategy that applies a language model to an end-to-end speech sentiment approach.
arXiv Detail & Related papers (2021-06-11T20:15:21Z)
- Improving Cross-Lingual Reading Comprehension with Self-Training [62.73937175625953]
Current state-of-the-art models even surpass human performance on several benchmarks.
Previous works have revealed the abilities of pre-trained multilingual models for zero-shot cross-lingual reading comprehension.
This paper further utilizes unlabeled data to improve performance.
arXiv Detail & Related papers (2021-05-08T08:04:30Z)
- Semantic Sentiment Analysis Based on Probabilistic Graphical Models and Recurrent Neural Network [0.0]
The purpose of this study is to investigate the use of semantics to perform sentiment analysis based on probabilistic graphical models and recurrent neural networks.
The datasets used for the experiments were IMDB movie reviews, Amazon Consumer Product reviews, and Twitter Review datasets.
arXiv Detail & Related papers (2020-08-06T11:59:00Z)
- Data Augmentation for Spoken Language Understanding via Pretrained Language Models [113.56329266325902]
Training of spoken language understanding (SLU) models often faces the problem of data scarcity.
We put forward a data augmentation method using pretrained language models to boost the variability and accuracy of generated utterances.
arXiv Detail & Related papers (2020-04-29T04:07:12Z)
- Sentence Level Human Translation Quality Estimation with Attention-based Neural Networks [0.30458514384586394]
This paper explores the use of Deep Learning methods for automatic estimation of quality of human translations.
Empirical results on a large human annotated dataset show that the neural model outperforms feature-based methods significantly.
arXiv Detail & Related papers (2020-03-13T16:57:55Z)
- Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer [64.22926988297685]
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP).
In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format.
arXiv Detail & Related papers (2019-10-23T17:37:36Z)
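The text-to-text casting described in the last entry can be illustrated with a small sketch: every task becomes a pair of strings, a prefixed input and a textual target, so one sequence-to-sequence model can handle classification, summarization, and translation alike. The prefix strings below follow the convention popularized by T5-style models and are illustrative assumptions, not taken from the paper.

```python
# Unified text-to-text format: each task maps to (prefixed input, target).
def to_text_to_text(task: str, text: str, target: str) -> tuple[str, str]:
    prefixes = {
        "sentiment": "sst2 sentence: ",               # label emitted as a word
        "summarize": "summarize: ",                   # free-form summary
        "translate": "translate English to German: ", # translated sentence
    }
    return prefixes[task] + text, target

# Classification becomes generation: the model is trained to emit "positive".
src, tgt = to_text_to_text("sentiment", "a delightful film", "positive")
```

Because inputs and outputs are plain text for every task, the same model, loss, and decoding procedure serve all of them, which is the core of the unified framework.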
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.