The Style-Content Duality of Attractiveness: Learning to Write
Eye-Catching Headlines via Disentanglement
- URL: http://arxiv.org/abs/2012.07419v1
- Date: Mon, 14 Dec 2020 11:11:43 GMT
- Title: The Style-Content Duality of Attractiveness: Learning to Write
Eye-Catching Headlines via Disentanglement
- Authors: Mingzhe Li, Xiuying Chen, Min Yang, Shen Gao, Dongyan Zhao and Rui Yan
- Abstract summary: Eye-catching headlines function as the first device to trigger more clicks, bringing a reciprocal effect between producers and viewers.
We propose a Disentanglement-based Attractive Headline Generator (DAHG) that generates headlines that capture attractive content while following an attractive style.
- Score: 59.58372539336339
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Eye-catching headlines function as the first device to trigger more clicks,
bringing a reciprocal effect between producers and viewers: producers obtain
more traffic and profits, and readers gain access to outstanding articles.
When generating attractive headlines, it is important not only to capture the
attractive content but also to follow an eye-catching written style. In this
paper, we propose a Disentanglement-based Attractive Headline Generator (DAHG)
that generates headlines that capture attractive content while following an
attractive style. Concretely, we first devise a disentanglement module to
divide the style and content of an attractive prototype headline into separate
latent spaces, with two auxiliary constraints to ensure the two spaces are
indeed disentangled. The latent content information is then used to further
polish the document representation and help capture its salient parts. Finally,
the generator takes the polished document as input to generate a headline under
the guidance of the attractive style. Extensive experiments on the public
Kuaibao dataset show that DAHG achieves state-of-the-art performance. Human
evaluation also demonstrates that DAHG triggers 22% more clicks than existing models.
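The core disentanglement idea in the abstract can be illustrated with a minimal sketch (hypothetical code, not the authors' implementation; all names and dimensions are assumptions): a prototype headline embedding is projected into separate style and content latent spaces, and an orthogonality-style penalty stands in for the paper's two auxiliary constraints that keep the spaces disentangled.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: embedding size and the two latent spaces.
D, D_STYLE, D_CONTENT = 16, 8, 8

# Two linear projections split the prototype headline embedding
# into a style latent and a content latent.
W_style = rng.standard_normal((D, D_STYLE)) * 0.1
W_content = rng.standard_normal((D, D_CONTENT)) * 0.1

def disentangle(h):
    """Project a headline embedding h into style and content latents."""
    z_style = h @ W_style
    z_content = h @ W_content
    return z_style, z_content

def cross_correlation_penalty(z_style, z_content):
    """A stand-in auxiliary constraint: penalize the cross-correlation
    between the two latents so they encode different information."""
    c = np.outer(z_style - z_style.mean(), z_content - z_content.mean())
    return float((c ** 2).sum())  # squared Frobenius norm

h = rng.standard_normal(D)  # prototype headline embedding
z_s, z_c = disentangle(h)
penalty = cross_correlation_penalty(z_s, z_c)
```

In training, such a penalty would be added to the generation loss so the content latent can polish the document representation while the style latent guides decoding; the paper's actual constraints may differ.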
Related papers
- Generating Attractive and Authentic Copywriting from Customer Reviews [7.159225692930055]
We propose to generate copywriting based on customer reviews, as they provide firsthand practical experiences with products.
We have developed a sequence-to-sequence framework, enhanced with reinforcement learning, to produce copywriting that is attractive, authentic, and rich in information.
Our framework outperforms all existing baseline and zero-shot large language models, including LLaMA-2-chat-7B and GPT-3.5.
arXiv Detail & Related papers (2024-04-22T06:33:28Z)
- Named Entity Recognition Based Automatic Generation of Research Highlights [3.9410617513331863]
We aim to automatically generate research highlights using different sections of a research paper as input.
We investigate whether the use of named entity recognition on the input improves the quality of the generated highlights.
arXiv Detail & Related papers (2023-02-25T16:33:03Z)
- Contrastive Learning enhanced Author-Style Headline Generation [15.391087541824279]
We propose a novel Seq2Seq model called CLH3G (Contrastive Learning enhanced Historical Headlines based Headline Generation).
By taking historical headlines into account, we can integrate the stylistic features of the author into our model, and generate a headline consistent with the author's style.
Experimental results show that historical headlines of the same user can improve the headline generation significantly.
arXiv Detail & Related papers (2022-11-07T04:51:03Z)
- Paired Cross-Modal Data Augmentation for Fine-Grained Image-to-Text Retrieval [142.047662926209]
We propose a novel framework for paired data augmentation by uncovering the hidden semantic information of StyleGAN2 model.
We generate augmented text through random token replacement, then pass the augmented text into the latent space alignment module.
We evaluate the efficacy of our augmented data approach on two public cross-modal retrieval datasets.
arXiv Detail & Related papers (2022-07-29T01:21:54Z)
- Generating More Pertinent Captions by Leveraging Semantics and Style on Multi-Source Datasets [56.018551958004814]
This paper addresses the task of generating fluent descriptions by training on a non-uniform combination of data sources.
Large-scale datasets with noisy image-text pairs provide a sub-optimal source of supervision.
We propose to leverage and separate semantics and descriptive style through the incorporation of a style token and keywords extracted through a retrieval component.
arXiv Detail & Related papers (2021-11-24T19:00:05Z)
- Knowledge-Enhanced Personalized Review Generation with Capsule Graph Neural Network [81.81662828017517]
We propose a knowledge-enhanced PRG model based on a capsule graph neural network (Caps-GNN).
Our generation process contains two major steps, namely aspect sequence generation and sentence generation.
The incorporated knowledge graph is able to enhance user preference at both aspect and word levels.
arXiv Detail & Related papers (2020-10-04T03:54:40Z)
- Hooks in the Headline: Learning to Generate Headlines with Controlled Styles [69.30101340243375]
We propose a new task, Stylistic Headline Generation (SHG), to enrich the headlines with three style options.
TitleStylist generates style-specific headlines by combining the summarization and reconstruction tasks into a multitasking framework.
The attraction score of our model generated headlines surpasses that of the state-of-the-art summarization model by 9.68%, and even outperforms human-written references.
arXiv Detail & Related papers (2020-04-04T17:24:47Z)
- Learning to Select Bi-Aspect Information for Document-Scale Text Content Manipulation [50.01708049531156]
We focus on a new practical task, document-scale text content manipulation, which is the opposite of text style transfer.
In detail, the input is a set of structured records and a reference text for describing another recordset.
The output is a summary that accurately describes the partial content in the source recordset with the same writing style of the reference.
arXiv Detail & Related papers (2020-02-24T12:52:10Z) - Attractive or Faithful? Popularity-Reinforced Learning for Inspired
Headline Generation [0.0]
We propose a novel framework called POpularity-Reinforced Learning for inspired Headline Generation (PORL-HG).
PORL-HG exploits an extractive-abstractive architecture with 1) Popular Topic Attention (PTA) for guiding the extractor to select the attractive sentence from the article and 2) a popularity predictor for guiding the abstractor to rewrite the attractive sentence.
We show that PORL-HG significantly outperforms state-of-the-art headline generation models in terms of attractiveness, as evaluated by both humans (71.03%) and the predictor (at least 27.60%).
arXiv Detail & Related papers (2020-02-06T04:37:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.