Few-Shot Cross-Lingual Stance Detection with Sentiment-Based
Pre-Training
- URL: http://arxiv.org/abs/2109.06050v1
- Date: Mon, 13 Sep 2021 15:20:06 GMT
- Title: Few-Shot Cross-Lingual Stance Detection with Sentiment-Based
Pre-Training
- Authors: Momchil Hardalov, Arnav Arora, Preslav Nakov, Isabelle Augenstein
- Abstract summary: We present the most comprehensive study of cross-lingual stance detection to date.
We use 15 diverse datasets in 12 languages from 6 language families.
For our experiments, we build on pattern-exploiting training, proposing the addition of a novel label encoder.
- Score: 32.800766653254634
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The goal of stance detection is to determine the viewpoint expressed in a
piece of text towards a target. These viewpoints or contexts are often
expressed in many different languages depending on the user and the platform,
which can be a local news outlet, a social media platform, a news forum, etc.
Most research in stance detection, however, has been limited to working with a
single language and on a few limited targets, with little work on cross-lingual
stance detection. Moreover, non-English sources of labelled data are often
scarce and present additional challenges. Recently, large multilingual language
models have substantially improved the performance on many non-English tasks,
especially those with limited numbers of examples. This highlights the
importance of model pre-training and its ability to learn from few examples. In
this paper, we present the most comprehensive study of cross-lingual stance
detection to date: we experiment with 15 diverse datasets in 12 languages from
6 language families, and with 6 low-resource evaluation settings each. For our
experiments, we build on pattern-exploiting training, proposing the addition of
a novel label encoder to simplify the verbalisation procedure. We further
propose sentiment-based generation of stance data for pre-training, which yields
sizeable improvements of more than 6% F1 absolute in low-shot settings compared
to several strong baselines.
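To make the two modelling ideas above concrete, here is a minimal Python sketch of a PET-style stance pattern scored with a label encoder: the input is cast as natural-language text and compared against encoded label names, rather than relying on a hand-crafted verbaliser token per label. The pattern wording, the label set, and the choice of mBERT are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch of pattern-exploiting training with a label encoder.
# Assumptions (not from the paper): pattern wording, label inventory, mBERT.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)

LABELS = ["favor", "against", "neutral"]  # assumed label set

def embed(texts):
    # Mean-pool the final hidden states over non-padding tokens.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state          # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()     # (B, T, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)      # (B, H)

def predict_stance(text, target):
    # PET-style pattern: the task is phrased as text, and the "label encoder"
    # scores candidate label names by embedding similarity, which avoids
    # picking a single verbaliser token per label and per language.
    pattern = f"The stance of the following text towards '{target}' is: {text}"
    scores = embed([pattern]) @ embed(LABELS).T              # (1, num_labels)
    return LABELS[scores.argmax().item()]

print(predict_stance("Wind farms ruin the landscape.", "renewable energy"))
```

Below is a similarly hedged sketch of sentiment-based generation of stance pre-training data, under the assumption that polarity towards a review's subject can stand in as a proxy stance label; the mapping and the field layout are hypothetical, not the paper's exact recipe.

```python
# Assumed polarity-to-stance mapping for generating proxy pre-training data.
SENTIMENT_TO_STANCE = {"positive": "favor", "negative": "against"}

def stance_examples_from_sentiment(rows):
    """Yield (text, target, stance) triples from (text, subject, polarity) rows."""
    for text, subject, polarity in rows:
        if polarity in SENTIMENT_TO_STANCE:
            yield text, subject, SENTIMENT_TO_STANCE[polarity]

reviews = [
    ("Great battery life and a sharp screen.", "this phone", "positive"),
    ("The plot is dull and predictable.", "this movie", "negative"),
]
print(list(stance_examples_from_sentiment(reviews)))
```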
Related papers
- Zero-shot Cross-lingual Transfer Learning with Multiple Source and Target Languages for Information Extraction: Language Selection and Adversarial Training [38.19963761398705]
This paper provides a detailed analysis of Cross-Lingual Multi-Transferability (many-to-many transfer learning) for the recent IE corpora.
We first determine the correlation between single-language performance and a wide range of linguistic-based distances.
Next, we investigate the more general zero-shot multi-lingual transfer settings where multiple languages are involved in the training and evaluation processes.
arXiv Detail & Related papers (2024-11-13T17:13:25Z)
- Zero-shot Cross-lingual Stance Detection via Adversarial Language Adaptation [7.242609314791262]
This paper introduces Multilingual Translation-Augmented BERT (MTAB), a novel approach to zero-shot cross-lingual stance detection.
Our technique employs translation augmentation to improve zero-shot performance and pairs it with adversarial learning to further boost model efficacy.
We demonstrate the effectiveness of our proposed approach, showcasing improved results in comparison to a strong baseline model as well as ablated versions of our model.
arXiv Detail & Related papers (2024-04-22T16:56:43Z)
- Understanding Cross-Lingual Alignment -- A Survey [52.572071017877704]
Cross-lingual alignment is the meaningful similarity of representations across languages in multilingual language models.
We survey the literature on techniques to improve cross-lingual alignment, providing a taxonomy of methods and summarising insights from across the field.
arXiv Detail & Related papers (2024-04-09T11:39:53Z)
- Quantifying the Dialect Gap and its Correlates Across Languages [69.18461982439031]
This work lays the foundation for furthering the field of dialectal NLP by documenting evident disparities and identifying possible pathways for addressing them through mindful data collection.
arXiv Detail & Related papers (2023-10-23T17:42:01Z)
- BUFFET: Benchmarking Large Language Models for Few-shot Cross-lingual Transfer [81.5984433881309]
We introduce BUFFET, which unifies 15 diverse tasks across 54 languages in a sequence-to-sequence format.
BUFFET is designed to establish a rigorous and equitable evaluation framework for few-shot cross-lingual transfer.
Our findings reveal significant room for improvement in few-shot in-context cross-lingual transfer.
arXiv Detail & Related papers (2023-05-24T08:06:33Z)
- IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages [87.5457337866383]
We introduce the Image-Grounded Language Understanding Evaluation benchmark.
IGLUE brings together visual question answering, cross-modal retrieval, grounded reasoning, and grounded entailment tasks across 20 diverse languages.
We find that translate-test transfer is superior to zero-shot transfer and that few-shot learning is hard to harness for many tasks.
arXiv Detail & Related papers (2022-01-27T18:53:22Z)
- AM2iCo: Evaluating Word Meaning in Context across Low-Resource Languages with Adversarial Examples [51.048234591165155]
We present AM2iCo (Adversarial and Multilingual Meaning in Context).
It aims to faithfully assess the ability of state-of-the-art (SotA) representation models to understand the identity of word meaning in cross-lingual contexts.
Results reveal that current SotA pretrained encoders substantially lag behind human performance.
arXiv Detail & Related papers (2021-04-17T20:23:45Z)
- XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization [128.37244072182506]
XTREME (Cross-lingual TRansfer Evaluation of Multilingual Encoders) is a benchmark for evaluating the cross-lingual generalization capabilities of multilingual representations across 40 languages and 9 tasks.
We demonstrate that while models tested on English reach human performance on many tasks, there is still a sizable gap in the performance of cross-lingually transferred models.
arXiv Detail & Related papers (2020-03-24T19:09:37Z)
- X-Stance: A Multilingual Multi-Target Dataset for Stance Detection [42.46681912294797]
We extract a large-scale stance detection dataset from comments written by candidates running for office in Switzerland.
The dataset consists of German, French and Italian text, allowing for a cross-lingual evaluation of stance detection.
arXiv Detail & Related papers (2020-03-18T17:58:10Z)