Attractive or Faithful? Popularity-Reinforced Learning for Inspired
Headline Generation
- URL: http://arxiv.org/abs/2002.02095v1
- Date: Thu, 6 Feb 2020 04:37:44 GMT
- Title: Attractive or Faithful? Popularity-Reinforced Learning for Inspired
Headline Generation
- Authors: Yun-Zhu Song (1), Hong-Han Shuai (1), Sung-Lin Yeh (2), Yi-Lun Wu (1),
Lun-Wei Ku (3), Wen-Chih Peng (1) ((1) National Chiao Tung University,
Taiwan, (2) National Tsing Hua University, Taiwan, (3) Academia Sinica,
Taiwan)
- Abstract summary: We propose a novel framework called POpularity-Reinforced Learning for inspired Headline Generation (PORL-HG)
PORL-HG exploits the extractive-abstractive architecture with 1) Popular Topic Attention (PTA) for guiding the extractor to select the attractive sentence from the article and 2) a popularity predictor for guiding the abstractor to rewrite the attractive sentence.
We show that the proposed PORL-HG significantly outperforms the state-of-the-art headline generation models in terms of attractiveness evaluated by both humans (71.03%) and the predictor (at least 27.60%).
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rapid proliferation of online media sources and published news,
headlines have become increasingly important for attracting readers to news
articles, since users may otherwise be overwhelmed by the sheer volume of
information. In this paper, we generate inspired headlines that both preserve
the nature of the news article and catch the reader's eye. The task of inspired headline
generation can be viewed as a specific form of Headline Generation (HG) task,
with the emphasis on creating an attractive headline from a given news article.
To generate inspired headlines, we propose a novel framework called
POpularity-Reinforced Learning for inspired Headline Generation (PORL-HG).
PORL-HG exploits the extractive-abstractive architecture with 1) Popular Topic
Attention (PTA) for guiding the extractor to select the attractive sentence
from the article and 2) a popularity predictor for guiding the abstractor to
rewrite the attractive sentence. Moreover, since the extractor's sentence
selection is not differentiable, reinforcement learning (RL) is used to bridge
the gap, with rewards obtained from a popularity score predictor. Through
quantitative and qualitative experiments, we show that the
proposed PORL-HG significantly outperforms the state-of-the-art headline
generation models in terms of attractiveness evaluated by both humans (71.03%)
and the predictor (at least 27.60%), while the faithfulness of PORL-HG is also
comparable to the state-of-the-art generation model.
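
Because the extractor's discrete sentence choice blocks gradient flow from the popularity reward back to the selection step, the abstract describes bridging this gap with reinforcement learning. The following is a minimal, hypothetical sketch of such a policy-gradient (REINFORCE-style) update; the module names and interfaces (extractor, abstractor.generate, popularity_predictor) are illustrative assumptions, not the authors' released code.

```python
import torch

def porl_style_update(article_sents, extractor, abstractor,
                      popularity_predictor, optimizer):
    """One hypothetical RL training step bridging the non-differentiable
    sentence selection with a popularity-based reward (sketch only)."""
    # 1) Extractor scores each sentence; sample one as the attractive candidate.
    logits = extractor(article_sents)           # assumed: 1-D tensor of sentence scores
    dist = torch.distributions.Categorical(logits=logits)
    idx = dist.sample()                         # discrete, non-differentiable choice

    # 2) Abstractor rewrites the selected sentence into a headline.
    headline = abstractor.generate(article_sents[idx.item()])

    # 3) Popularity predictor scores the headline; its score serves as the reward.
    with torch.no_grad():
        reward = popularity_predictor(headline)  # assumed: scalar tensor

    # 4) REINFORCE loss: increase the log-probability of selections that
    #    lead to high-popularity headlines.
    loss = -dist.log_prob(idx) * reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return headline, reward.item()
```

In practice a variance-reducing baseline (and the abstractor's own training objective) would normally be added; the abstract does not spell out these details, so they are omitted in this sketch.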
Related papers
- Headline-Guided Extractive Summarization for Thai News Articles [0.0]
We propose CHIMA, an extractive summarization model that incorporates the contextual information of the headline for Thai news articles.
Our model utilizes a pre-trained language model to capture complex language semantics and assigns each sentence a probability of being included in the summary.
Experiments on publicly available Thai news datasets demonstrate that CHIMA outperforms baseline models across ROUGE, BLEU, and F1 scores.
arXiv Detail & Related papers (2024-12-02T15:43:10Z)
- From Words to Worth: Newborn Article Impact Prediction with LLM [69.41680520058418]
This paper introduces a promising approach, leveraging the capabilities of LLMs to predict the future impact of newborn articles.
The proposed method employs an LLM to discern the shared semantic features of highly impactful papers from a large collection of title-abstract pairs.
The quantitative results, with an MAE of 0.216 and an NDCG@20 of 0.901, demonstrate that the proposed approach achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-08-07T17:52:02Z)
- Affective and Dynamic Beam Search for Story Generation [50.3130767805383]
We propose Affective Story Generator (AffGen) for generating interesting narratives.
AffGen employs two novel techniques: Dynamic Beam Sizing and Affective Reranking.
arXiv Detail & Related papers (2023-10-23T16:37:14Z)
- HonestBait: Forward References for Attractive but Faithful Headline Generation [13.456581900511873]
Forward references (FRs) are a writing technique often used for clickbait.
A self-verification process is included during training to avoid spurious inventions.
We present PANCO1, an innovative dataset containing pairs of fake news with verified news for attractive but faithful news headline generation.
arXiv Detail & Related papers (2023-06-26T16:34:37Z)
- Contrastive Learning enhanced Author-Style Headline Generation [15.391087541824279]
We propose a novel Seq2Seq model called CLH3G (Contrastive Learning enhanced Historical Headlines based Headline Generation).
By taking historical headlines into account, we can integrate the stylistic features of the author into our model, and generate a headline consistent with the author's style.
Experimental results show that historical headlines of the same user can significantly improve headline generation.
arXiv Detail & Related papers (2022-11-07T04:51:03Z)
- A Survey on Retrieval-Augmented Text Generation [53.04991859796971]
Retrieval-augmented text generation has remarkable advantages and has achieved state-of-the-art performance in many NLP tasks.
This survey first highlights the generic paradigm of retrieval-augmented generation and then reviews notable approaches for different tasks.
arXiv Detail & Related papers (2022-02-02T16:18:41Z)
- The Style-Content Duality of Attractiveness: Learning to Write Eye-Catching Headlines via Disentanglement [59.58372539336339]
Eye-catching headlines serve as the first device to trigger clicks, creating a reciprocal effect between producers and viewers.
We propose a Disentanglement-based Attractive Headline Generator (DAHG) that generates headlines capturing attractive content in an attractive style.
arXiv Detail & Related papers (2020-12-14T11:11:43Z)
- What's New? Summarizing Contributions in Scientific Literature [85.95906677964815]
We introduce a new task of disentangled paper summarization, which seeks to generate separate summaries for the paper contributions and the context of the work.
We extend the S2ORC corpus of academic articles by adding disentangled "contribution" and "context" reference labels.
We propose a comprehensive automatic evaluation protocol which reports the relevance, novelty, and disentanglement of generated outputs.
arXiv Detail & Related papers (2020-11-06T02:23:01Z)
- Hooks in the Headline: Learning to Generate Headlines with Controlled Styles [69.30101340243375]
We propose a new task, Stylistic Headline Generation (SHG), to enrich the headlines with three style options.
TitleStylist generates style-specific headlines by combining the summarization and reconstruction tasks into a multitasking framework.
The attraction score of our model-generated headlines surpasses that of the state-of-the-art summarization model by 9.68% and even outperforms that of human-written references.
arXiv Detail & Related papers (2020-04-04T17:24:47Z)
- Generating Representative Headlines for News Stories [31.67864779497127]
Grouping articles that report the same event into news stories is a common way of assisting readers in their news consumption.
It remains a challenging research problem to efficiently and effectively generate a representative headline for each story.
We develop a distant supervision approach to train large-scale generation models without any human annotation.
arXiv Detail & Related papers (2020-01-26T02:08:22Z)