A User-Centered Evaluation of Spanish Text Simplification
- URL: http://arxiv.org/abs/2308.07556v1
- Date: Tue, 15 Aug 2023 03:49:59 GMT
- Title: A User-Centered Evaluation of Spanish Text Simplification
- Authors: Adrian de Wynter, Anthony Hevia, Si-Qing Chen
- Abstract summary: We present an evaluation of text simplification (TS) in Spanish for a production system.
We compare the most prevalent Spanish-specific readability scores with neural networks, and show that the latter are consistently better at predicting user preferences regarding TS.
We release the corpora from our evaluation to the broader community in the hope of pushing forward the state of the art in Spanish natural language processing.
- Score: 6.046875672600245
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present an evaluation of text simplification (TS) in Spanish for a production system, by means of two corpora focused on both complex-sentence and complex-word identification. We compare the most prevalent Spanish-specific readability scores with neural networks, and show that the latter are consistently better at predicting user preferences regarding TS. As part of our analysis, we find that multilingual models underperform equivalent Spanish-only models on the same task, yet all models focus too often on spurious statistical features, such as sentence length. We release the corpora from our evaluation to the broader community in the hope of pushing forward the state of the art in Spanish natural language processing.
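For readers unfamiliar with Spanish-specific readability scores, one of the most prevalent is the Fernández Huerta index, a Spanish adaptation of Flesch Reading Ease. The sketch below is a minimal, illustrative Python implementation, not the paper's evaluation code: the vowel-run syllable counter is a crude heuristic, and the per-100-words form of the formula is one of two variants found in the literature.

```python
import re

def fernandez_huerta(text: str) -> float:
    """Rough Fernández Huerta readability index for Spanish text."""
    words = re.findall(r"[a-záéíóúüñ]+", text.lower())
    if not words:
        return 0.0
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    # Crude syllable counter: each run of vowels counts as one nucleus.
    syllables = sum(
        max(1, len(re.findall(r"[aeiouáéíóúü]+", w))) for w in words
    )
    p = 100.0 * syllables / len(words)  # syllables per 100 words
    f = 100.0 * sentences / len(words)  # sentences per 100 words
    return 206.84 - 0.60 * p - 1.02 * f  # higher score = easier text

print(fernandez_huerta("El gato duerme. La casa es muy grande."))
```

Scores of this form reward short words and short sentences, which is exactly the kind of surface statistic the paper flags as spurious when models over-rely on it.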
Related papers
- MASIVE: Open-Ended Affective State Identification in English and Spanish [10.41502827362741]
In this work, we broaden our scope to a practically unbounded set of affective states, including any term that humans use to describe their experiences of feeling.
We collect and publish MASIVE, a dataset of Reddit posts in English and Spanish containing over 1,000 unique affective states each.
On this task, we find that smaller finetuned multilingual models outperform much larger LLMs, even on region-specific Spanish affective states.
arXiv Detail & Related papers (2024-07-16T21:43:47Z)
- Spanish Pre-trained BERT Model and Evaluation Data [0.0]
We present a BERT-based language model pre-trained exclusively on Spanish data.
We also compiled several tasks specifically for the Spanish language in a single repository.
We have publicly released our model, the pre-training data, and the compilation of the Spanish benchmarks.
arXiv Detail & Related papers (2023-08-06T00:16:04Z)
- Multilingual Conceptual Coverage in Text-to-Image Models [98.80343331645626]
"Conceptual Coverage Across Languages" (CoCo-CroLa) is a technique for benchmarking the degree to which any generative text-to-image system provides multilingual parity to its training language in terms of tangible nouns.
For each model, we assess the "conceptual coverage" of a given target language relative to a source language by comparing the population of images generated for a series of tangible nouns in the source language to the population generated for each noun under translation in the target language.
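The summary above names the comparison but not the metric. As a hypothetical illustration only, assuming the generated images have already been embedded with an image encoder such as CLIP, a coverage-style score for one noun could be the mean pairwise cosine similarity between the two embedding populations:

```python
import numpy as np

def coverage_score(src_embs: np.ndarray, tgt_embs: np.ndarray) -> float:
    """Hypothetical per-noun score: mean pairwise cosine similarity
    between image embeddings from the source-language prompt and from
    its translation. Low values suggest the noun is poorly covered in
    the target language. Inputs are (n, d) embedding arrays."""
    src = src_embs / np.linalg.norm(src_embs, axis=1, keepdims=True)
    tgt = tgt_embs / np.linalg.norm(tgt_embs, axis=1, keepdims=True)
    return float((src @ tgt.T).mean())

# Toy usage: random vectors stand in for real image embeddings.
rng = np.random.default_rng(0)
print(coverage_score(rng.normal(size=(8, 512)), rng.normal(size=(8, 512))))
```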
arXiv Detail & Related papers (2023-06-02T17:59:09Z)
- Beyond Contrastive Learning: A Variational Generative Model for Multilingual Retrieval [109.62363167257664]
We propose a generative model for learning multilingual text embeddings.
Our model operates on parallel data in N languages.
We evaluate this method on a suite of tasks including semantic similarity, bitext mining, and cross-lingual question retrieval.
arXiv Detail & Related papers (2022-12-21T02:41:40Z)
- Analyzing the Mono- and Cross-Lingual Pretraining Dynamics of Multilingual Language Models [73.11488464916668]
This study investigates the dynamics of the multilingual pretraining process.
We probe checkpoints taken from throughout XLM-R pretraining, using a suite of linguistic tasks.
Our analysis shows that the model achieves high in-language performance early on, with lower-level linguistic skills acquired before more complex ones.
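The probing setup is only sketched in this summary. A common recipe is a linear probe: freeze each pretraining checkpoint, extract representations for a labeled linguistic task, fit a simple classifier, and track its accuracy across checkpoints. The Python sketch below uses synthetic stand-ins for the checkpoint features, labels, and step numbers, all of which are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def probe_accuracy(features: np.ndarray, labels: np.ndarray) -> float:
    """Fit a linear probe on frozen representations and report held-out
    accuracy; rising accuracy across checkpoints signals skill acquisition."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.2, random_state=0)
    return LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

for step in (1_000, 10_000, 100_000):   # hypothetical checkpoint steps
    feats = rng.normal(size=(500, 64))      # stand-in for encoder outputs
    labels = rng.integers(0, 2, size=500)   # stand-in for task labels
    print(step, probe_accuracy(feats, labels))
```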
arXiv Detail & Related papers (2022-05-24T03:35:00Z)
- Evaluation Benchmarks for Spanish Sentence Representations [24.162683655834847]
We introduce Spanish SentEval and Spanish DiscoEval, aiming to assess the capabilities of stand-alone and discourse-aware sentence representations.
In addition, we evaluate and analyze the most recent pre-trained Spanish language models to exhibit their capabilities and limitations.
arXiv Detail & Related papers (2022-04-15T17:53:05Z)
- IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages [87.5457337866383]
We introduce the Image-Grounded Language Understanding Evaluation benchmark.
IGLUE brings together visual question answering, cross-modal retrieval, grounded reasoning, and grounded entailment tasks across 20 diverse languages.
We find that translate-test transfer is superior to zero-shot transfer and that few-shot learning is hard to harness for many tasks.
arXiv Detail & Related papers (2022-01-27T18:53:22Z)
- The futility of STILTs for the classification of lexical borrowings in Spanish [0.0]
STILTs (supplementary training on intermediate labeled-data tasks) do not provide any improvement over direct fine-tuning of multilingual models.
Multilingual models trained on small subsets of languages perform somewhat better than multilingual BERT, but not as well as multilingual RoBERTa, on the given dataset.
arXiv Detail & Related papers (2021-09-17T15:32:02Z)
- Improving Cross-Lingual Reading Comprehension with Self-Training [62.73937175625953]
Current state-of-the-art models even surpass human performance on several benchmarks.
Previous work has revealed the abilities of pre-trained multilingual models for zero-shot cross-lingual reading comprehension.
This paper further utilizes unlabeled data, via self-training, to improve performance.
arXiv Detail & Related papers (2021-05-08T08:04:30Z)
- AM2iCo: Evaluating Word Meaning in Context across Low-Resource Languages with Adversarial Examples [51.048234591165155]
We present AM2iCo, Adversarial and Multilingual Meaning in Context.
It aims to faithfully assess the ability of state-of-the-art (SotA) representation models to understand the identity of word meaning in cross-lingual contexts.
Results reveal that current SotA pretrained encoders substantially lag behind human performance.
arXiv Detail & Related papers (2021-04-17T20:23:45Z)
- Predicting metrical patterns in Spanish poetry with language models [0.0]
We compare existing automated metrical pattern identification systems for Spanish against language models fine-tuned on the same task.
Our results suggest that BERT-based models retain enough structural information to perform reasonably well on Spanish scansion.
arXiv Detail & Related papers (2020-11-18T22:33:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.