Exploring Cross-lingual Textual Style Transfer with Large Multilingual
Language Models
- URL: http://arxiv.org/abs/2206.02252v1
- Date: Sun, 5 Jun 2022 20:02:30 GMT
- Title: Exploring Cross-lingual Textual Style Transfer with Large Multilingual
Language Models
- Authors: Daniil Moskovskiy, Daryna Dementieva, Alexander Panchenko
- Abstract summary: Detoxification is the task of generating text in a polite style while preserving the meaning and fluency of the original toxic text.
This work investigates multilingual and cross-lingual detoxification and the behavior of large multilingual models in this setting.
- Score: 78.12943085697283
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Detoxification is the task of generating text in a polite style while
preserving the meaning and fluency of the original toxic text. Existing
detoxification methods are designed to work in a single language. This work
investigates multilingual and cross-lingual detoxification and the behavior of
large multilingual models in this setting. Unlike previous work, we aim to make
large language models perform detoxification without direct fine-tuning in a
given language. Experiments show that multilingual models are capable of
performing multilingual style transfer. However, the models are not able to
perform cross-lingual detoxification, and direct fine-tuning in the target
language remains necessary.
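To make the experimental setup concrete, here is a minimal sketch of the recipe the abstract describes: fine-tune a multilingual sequence-to-sequence model on toxic-to-polite pairs in one language, then feed it toxic input in another language to probe cross-lingual transfer. The checkpoint (google/mt5-small), the toy training pair, the Russian test sentence, and the hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: fine-tune a multilingual seq2seq model on toxic -> polite pairs,
# then attempt zero-shot detoxification in a language it was not fine-tuned on.
# Model name, data, and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/mt5-small"  # assumption: any multilingual seq2seq model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# Hypothetical parallel detoxification pairs in the fine-tuning language (English here).
train_pairs = [("this is a stupid idea", "this idea is not convincing")]

model.train()
for toxic, polite in train_pairs:
    inputs = tokenizer(toxic, return_tensors="pt")
    labels = tokenizer(polite, return_tensors="pt").input_ids
    loss = model(**inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Cross-lingual check: a toxic sentence in a language the model was NOT fine-tuned on.
model.eval()
test = tokenizer("какая глупая идея", return_tensors="pt")  # Russian example
out = model.generate(**test, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```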
Related papers
- Counterfactually Probing Language Identity in Multilingual Models [15.260518230218414]
We use AlterRep, a method of counterfactual probing, to explore the internal structure of multilingual models.
We find that, given a template in Language X, pushing towards Language Y systematically increases the probability of Language Y words.
arXiv Detail & Related papers (2023-10-29T01:21:36Z)
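The probability shift described in the entry above can be illustrated with a crude measurement; this only shows the metric side, not the AlterRep intervention itself, and the checkpoint, template, and word lists are toy assumptions.

```python
# Sketch of the measurement only: how much masked-token probability a multilingual
# masked LM assigns to words from two languages at a masked slot. The AlterRep
# intervention that pushes representations toward Language Y is NOT reproduced here.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "xlm-roberta-base"  # assumption: any multilingual masked LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name).eval()

template = f"I would like to drink a glass of {tokenizer.mask_token}."
english_words = ["water", "milk", "juice"]   # Language X candidates (toy list)
spanish_words = ["agua", "leche", "jugo"]    # Language Y candidates (toy list)

inputs = tokenizer(template, return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
with torch.no_grad():
    probs = model(**inputs).logits[0, mask_pos].softmax(-1)

def mass(words):
    # Sum probability of each word's first subword token (a simplification).
    ids = [tokenizer.encode(" " + w, add_special_tokens=False)[0] for w in words]
    return probs[ids].sum().item()

print("English mass:", mass(english_words), "Spanish mass:", mass(spanish_words))
```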
- Soft Language Clustering for Multilingual Model Pre-training [57.18058739931463]
We propose XLM-P, which contextually retrieves prompts as flexible guidance for encoding instances conditionally.
Our XLM-P enables (1) lightweight modeling of language-invariant and language-specific knowledge across languages, and (2) easy integration with other multilingual pre-training methods.
arXiv Detail & Related papers (2023-06-13T08:08:08Z)
- Detecting Languages Unintelligible to Multilingual Models through Local Structure Probes [15.870989191524094]
We develop a general approach that requires only unlabelled text to detect which languages are not well understood by a cross-lingual model.
Our approach is derived from the hypothesis that if a model's understanding is insensitive to perturbations to text in a language, it is likely to have a limited understanding of that language.
arXiv Detail & Related papers (2022-11-09T16:45:16Z)
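One way to operationalise the hypothesis in the entry above is a similarity test: if locally shuffling a sentence barely moves its encoded representation, the model is probably not tracking that language's structure. The encoder, pooling, and perturbation below are assumptions for illustration, not the paper's exact probe.

```python
# Sketch: compare a multilingual encoder's representation of a sentence with that of a
# locally perturbed (word-shuffled) copy. High similarity despite perturbation suggests
# the model is insensitive to the language's structure. Encoder choice is an assumption.
import random
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "xlm-roberta-base"  # assumption
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name).eval()

def embed(text):
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state
    return hidden.mean(dim=1).squeeze(0)  # mean-pooled sentence vector

def perturb(text, seed=0):
    # Local perturbation: shuffle word order within the sentence.
    words = text.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

sentence = "The committee approved the new budget after a long debate."
sim = torch.cosine_similarity(embed(sentence), embed(perturb(sentence)), dim=0)
print(f"similarity original vs. perturbed: {sim.item():.3f}")  # near 1.0 = insensitive
```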
- HiJoNLP at SemEval-2022 Task 2: Detecting Idiomaticity of Multiword Expressions using Multilingual Pretrained Language Models [0.6091702876917281]
This paper describes an approach to detect idiomaticity only from the contextualized representation of an MWE over multilingual pretrained language models.
Our experiments find that larger models are usually more effective in idiomaticity detection. However, using a higher layer of the model does not guarantee better performance.
arXiv Detail & Related papers (2022-05-27T01:55:59Z)
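A bare-bones version of "classify idiomaticity from the contextualized representation of the MWE" could look as follows; the encoder, the span-locating heuristic, and the untrained classifier head are stand-in assumptions.

```python
# Sketch: pool the contextualized vectors of the MWE's subword tokens from a multilingual
# encoder and feed them to a small classifier head (untrained here, purely illustrative).
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "xlm-roberta-base"  # assumption
tokenizer = AutoTokenizer.from_pretrained(model_name)
encoder = AutoModel.from_pretrained(model_name).eval()
classifier = torch.nn.Linear(encoder.config.hidden_size, 2)  # idiomatic vs. literal

sentence = "He decided to spill the beans about the merger."
mwe = "spill the beans"

enc = tokenizer(sentence, return_tensors="pt", return_offsets_mapping=True)
start, end = sentence.index(mwe), sentence.index(mwe) + len(mwe)
# Select subword tokens whose character offsets overlap the MWE span.
mask = [(s < end and e > start) for s, e in enc.offset_mapping[0].tolist()]

with torch.no_grad():
    hidden = encoder(input_ids=enc.input_ids,
                     attention_mask=enc.attention_mask).last_hidden_state
mwe_vector = hidden[0][torch.tensor(mask)].mean(dim=0)  # pooled MWE representation
logits = classifier(mwe_vector)                          # would be trained on labeled MWEs
print(logits)
```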
- Language Anisotropic Cross-Lingual Model Editing [61.51863835749279]
Existing work studies only the monolingual scenario, which lacks the cross-lingual transferability needed to perform editing simultaneously across languages.
We propose a framework to naturally adapt monolingual model editing approaches to the cross-lingual scenario using a parallel corpus.
We empirically demonstrate the failure of monolingual baselines in propagating the edit to multiple languages and the effectiveness of the proposed language anisotropic model editing.
arXiv Detail & Related papers (2022-05-25T11:38:12Z)
- Lifting the Curse of Multilinguality by Pre-training Modular Transformers [72.46919537293068]
Multilingual pre-trained models suffer from the curse of multilinguality, which causes per-language performance to drop as they cover more languages.
We introduce language-specific modules, which allow us to grow the total capacity of the model while keeping the total number of trainable parameters per language constant.
Our approach enables adding languages post-hoc with no measurable drop in performance, no longer limiting the model usage to the set of pre-trained languages.
arXiv Detail & Related papers (2022-05-12T17:59:56Z)
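The capacity argument in the entry above (total capacity grows with the number of languages while per-language trainable parameters stay constant) can be illustrated with a toy routing layer; this is a generic adapter-style sketch, not the paper's actual architecture.

```python
# Toy sketch of language-specific modules: a shared layer plus one small bottleneck
# module per language, selected by a language id. Adding a language adds a new module
# (total capacity grows) without enlarging any other language's parameters.
import torch
from torch import nn

class ModularLayer(nn.Module):
    def __init__(self, hidden=768, bottleneck=64, languages=("en", "ru", "de")):
        super().__init__()
        self.shared = nn.Linear(hidden, hidden)
        self.lang_modules = nn.ModuleDict({
            lang: nn.Sequential(nn.Linear(hidden, bottleneck), nn.ReLU(),
                                nn.Linear(bottleneck, hidden))
            for lang in languages
        })

    def add_language(self, lang, hidden=768, bottleneck=64):
        # Post-hoc extension: only the new module needs training for the new language.
        self.lang_modules[lang] = nn.Sequential(nn.Linear(hidden, bottleneck), nn.ReLU(),
                                                nn.Linear(bottleneck, hidden))

    def forward(self, x, lang):
        return x + self.lang_modules[lang](self.shared(x))

layer = ModularLayer()
layer.add_language("uk")
x = torch.randn(2, 10, 768)
print(layer(x, "uk").shape)  # torch.Size([2, 10, 768])
```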
- Generalising Multilingual Concept-to-Text NLG with Language Agnostic Delexicalisation [0.40611352512781856]
Concept-to-text Natural Language Generation is the task of expressing an input meaning representation in natural language.
We propose Language Agnostic Delexicalisation, a novel delexicalisation method that uses multilingual pretrained embeddings.
Our experiments across five datasets and five languages show that multilingual models outperform monolingual models in concept-to-text.
arXiv Detail & Related papers (2021-05-07T17:48:53Z)
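A simplified reading of delexicalisation with multilingual embeddings, sketched under strong assumptions (word-level matching, mean-pooled XLM-R vectors; the actual method is more involved): align each attribute value of the meaning representation with the most similar word in the sentence and swap it for a slot placeholder.

```python
# Sketch: delexicalise a sentence by replacing the word most similar (in a multilingual
# embedding space) to each meaning-representation value with a slot placeholder.
# Encoder choice and word-level matching are simplifying assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "xlm-roberta-base"  # assumption
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name).eval()

def embed(text):
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        return model(**inputs).last_hidden_state.mean(dim=1).squeeze(0)

def delexicalise(sentence, meaning_representation):
    words = sentence.split()
    word_vecs = [embed(w) for w in words]
    for slot, value in meaning_representation.items():
        value_vec = embed(value)
        sims = [torch.cosine_similarity(value_vec, wv, dim=0) for wv in word_vecs]
        best = int(torch.stack(sims).argmax())
        words[best] = f"[{slot}]"
    return " ".join(words)

mr = {"name": "Aromi", "food": "Italian"}  # toy meaning representation
print(delexicalise("Aromi serves Italian food in the city centre.", mr))
# hoped-for output: "[name] serves [food] food in the city centre."
# (matching quality depends on the encoder; this is only a sketch)
```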
- How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models [96.32118305166412]
We study a set of nine typologically diverse languages with readily available pretrained monolingual models on a set of five diverse monolingual downstream tasks.
We find that languages which are adequately represented in the multilingual model's vocabulary exhibit negligible performance decreases over their monolingual counterparts.
arXiv Detail & Related papers (2020-12-31T14:11:00Z)
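The vocabulary-coverage point in the entry above can be checked quickly by comparing how many subwords a multilingual tokenizer needs per word against a dedicated monolingual tokenizer; the checkpoints and the Finnish example sentence are assumptions.

```python
# Sketch: compare subword "fertility" (subwords per word) of a multilingual tokenizer and
# a monolingual one on the same sentence. A much higher multilingual count hints at poor
# vocabulary coverage for that language. Checkpoints are illustrative assumptions.
from transformers import AutoTokenizer

multilingual = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
monolingual = AutoTokenizer.from_pretrained("TurkuNLP/bert-base-finnish-cased-v1")

sentence = "Hyvinvointivaltio perustuu verotukseen ja julkisiin palveluihin."
n_words = len(sentence.split())

for name, tok in [("multilingual", multilingual), ("monolingual", monolingual)]:
    pieces = tok.tokenize(sentence)
    print(f"{name}: {len(pieces)} subwords, fertility {len(pieces) / n_words:.2f}")
```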
- XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning [68.57658225995966]
Cross-lingual Choice of Plausible Alternatives (XCOPA) is a typologically diverse multilingual dataset for causal commonsense reasoning in 11 languages.
We evaluate a range of state-of-the-art models on this novel dataset, revealing that the performance of current methods falls short compared to translation-based transfer.
arXiv Detail & Related papers (2020-05-01T12:22:33Z)
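A bare-bones way to score a COPA-style instance, assuming a multilingual causal LM (this is an illustration, not the paper's evaluation protocol): compare the language-modeling loss of each alternative appended to the premise and pick the lower one.

```python
# Sketch: pick the more plausible alternative for a COPA-style instance by comparing the
# language-modeling loss of "premise + alternative" under a multilingual causal LM.
# Checkpoint and the toy instance are assumptions; this is not the paper's protocol.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/xglm-564M"  # assumption: any multilingual causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def nll(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()  # mean token negative log-likelihood

premise = "The man broke his toe. What was the cause?"
alternatives = ["He got a hole in his sock.", "He dropped a hammer on his foot."]

scores = [nll(premise + " " + alt) for alt in alternatives]
print("predicted:", alternatives[scores.index(min(scores))])
```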
This list is automatically generated from the titles and abstracts of the papers on this site.