Does It Capture STEL? A Modular, Similarity-based Linguistic Style Evaluation Framework
- URL: http://arxiv.org/abs/2109.04817v1
- Date: Fri, 10 Sep 2021 12:03:19 GMT
- Title: Does It Capture STEL? A Modular, Similarity-based Linguistic Style Evaluation Framework
- Authors: Anna Wegmann and Dong Nguyen
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Style is an integral part of natural language. However, evaluation methods for style measures are rare, often task-specific, and usually do not control for content. We propose the modular, fine-grained and content-controlled similarity-based STyle EvaLuation framework (STEL) to test the performance of any model that can compare two sentences on style. We illustrate STEL with two general dimensions of style (formal/informal and simple/complex) as well as two specific characteristics of style (contrac'tion and numb3r substitution). We find that BERT-based methods outperform simple versions of commonly used style measures like 3-grams, punctuation frequency and LIWC-based approaches. We invite the addition of further tasks and task instances to STEL and hope to facilitate the improvement of style-sensitive measures.
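STEL evaluates any model that can compare two sentences on style. As a rough illustration of the kind of simple baseline the abstract mentions (and finds BERT-based methods outperform), here is a minimal sketch of character 3-gram cosine similarity used to judge which of two candidates is stylistically closer to an anchor sentence. All function names and examples are illustrative assumptions, not the STEL implementation.

```python
from collections import Counter
from math import sqrt

def char_ngrams(text, n=3):
    """Character n-gram counts: a crude, content-sensitive style signal."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (Counters)."""
    dot = sum(a[g] * b[g] for g in a if g in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def same_style(anchor, cand1, cand2):
    """Pick whichever candidate is closer to the anchor in 3-gram space."""
    v = char_ngrams(anchor)
    s1 = cosine(v, char_ngrams(cand1))
    s2 = cosine(v, char_ngrams(cand2))
    return cand1 if s1 >= s2 else cand2

# Informal anchor vs. a formal and an informal candidate (made-up examples)
anchor = "gonna grab some food, u want smth?"
formal = "I am going to get some food; would you like anything?"
informal = "lol yeah i'm gonna get some too"
print(same_style(anchor, formal, informal))
```

Note that a character n-gram measure like this does not control for content, which is exactly the confound STEL's content-controlled task design is meant to expose.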
Related papers
- StyleDistance: Stronger Content-Independent Style Embeddings with Synthetic Parallel Examples
Style representations aim to embed texts with similar writing styles closely and texts with different styles far apart, regardless of content.
We introduce StyleDistance, a novel approach to training stronger content-independent style embeddings.
arXiv Detail & Related papers (2024-10-16T17:25:25Z)
- ParaGuide: Guided Diffusion Paraphrasers for Plug-and-Play Textual Style Transfer
Textual style transfer is the task of transforming the stylistic properties of text while preserving its meaning.
We introduce a novel diffusion-based framework for general-purpose style transfer that can be flexibly adapted to arbitrary target styles.
We validate the method on the Enron Email Corpus, with both human and automatic evaluations, and find that it outperforms strong baselines on formality, sentiment, and even authorship style transfer.
arXiv Detail & Related papers (2023-08-29T17:36:02Z)
- StylerDALLE: Language-Guided Style Transfer Using a Vector-Quantized Tokenizer of a Large-Scale Generative Model
We propose StylerDALLE, a style transfer method that uses natural language to describe abstract art styles.
Specifically, we formulate the language-guided style transfer task as non-autoregressive token sequence translation.
To incorporate style information, we propose a Reinforcement Learning strategy with CLIP-based language supervision.
arXiv Detail & Related papers (2023-03-16T12:44:44Z)
- Few-shot Font Generation by Learning Style Difference and Similarity
We propose a novel font generation approach that learns the Difference between different styles and the Similarity of the same style (DS-Font).
Specifically, we propose a multi-layer style projector for style encoding and realize a distinctive style representation via our proposed Cluster-level Contrastive Style (CCS) loss.
arXiv Detail & Related papers (2023-01-24T13:57:25Z)
- Controlling Styles in Neural Machine Translation with Activation Prompt
Controlling styles in neural machine translation (NMT) has attracted wide attention, as it is crucial for enhancing the user experience.
This paper presents a new benchmark and approach for controlling styles in NMT.
We propose a method named style activation prompt (StyleAP) that uses prompts from a stylized monolingual corpus and does not require extra fine-tuning.
arXiv Detail & Related papers (2022-12-17T16:05:50Z)
- Style Control for Schema-Guided Natural Language Generation
Natural Language Generation (NLG) for task-oriented dialogue systems focuses on communicating content accurately, fluently, and coherently.
We focus on stylistic control and evaluation for schema-guided NLG, with the joint goals of achieving both semantic and stylistic control.
arXiv Detail & Related papers (2021-09-24T21:47:58Z)
- StylePTB: A Compositional Benchmark for Fine-grained Controllable Text Style Transfer
Style transfer aims to controllably generate text with targeted stylistic changes while keeping the core meaning of the source sentence constant.
We introduce a large-scale benchmark, StylePTB, with paired sentences undergoing 21 fine-grained stylistic changes spanning atomic lexical, syntactic, semantic, and thematic transfers of text.
We find that existing methods on StylePTB struggle to model fine-grained changes and have an even more difficult time composing multiple styles.
arXiv Detail & Related papers (2021-04-12T04:25:09Z)
- StyleMeUp: Towards Style-Agnostic Sketch-Based Image Retrieval
The cross-modal matching problem is typically solved by learning a joint embedding space in which the semantic content shared between the photo and sketch modalities is preserved.
An effective model needs to explicitly account for this style diversity and, crucially, generalize to unseen user styles.
Our model can not only disentangle the cross-modal shared semantic content, but can also adapt the disentanglement to any unseen user style, making the model truly agnostic.
arXiv Detail & Related papers (2021-03-29T15:44:19Z)
- ST$^2$: Small-data Text Style Transfer via Multi-task Meta-Learning
Text style transfer aims to paraphrase a sentence in one style into another while preserving content.
Due to the lack of parallel training data, state-of-the-art methods are unsupervised and rely on large datasets that share content.
In this work, we develop a meta-learning framework to transfer between any kinds of text styles.
arXiv Detail & Related papers (2020-04-24T13:36:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.