Contrastive Learning enhanced Author-Style Headline Generation
- URL: http://arxiv.org/abs/2211.03305v1
- Date: Mon, 7 Nov 2022 04:51:03 GMT
- Title: Contrastive Learning enhanced Author-Style Headline Generation
- Authors: Hui Liu, Weidong Guo, Yige Chen and Xiangyang Li
- Abstract summary: We propose a novel Seq2Seq model called CLH3G (Contrastive Learning enhanced Historical Headlines based Headline Generation)
By taking historical headlines into account, we can integrate the stylistic features of the author into our model, and generate a headline consistent with the author's style.
Experimental results show that historical headlines of the same user can improve the headline generation significantly.
- Score: 15.391087541824279
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Headline generation is the task of generating an appropriate
headline for a given article, which can be further used for machine-aided
writing or for enhancing the click-through rate. Current works use only the
article itself during generation and do not take the writing style of
headlines into consideration. In this paper, we propose a novel Seq2Seq
model called CLH3G (Contrastive Learning enhanced Historical Headlines
based Headline Generation), which uses the headlines of articles the author
has written in the past to improve headline generation for the current
article. By taking historical headlines into account, we can integrate the
author's stylistic features into our model and generate a headline that is
not only appropriate for the article but also consistent with the author's
style. To learn the author's stylistic features efficiently, we further
introduce a contrastive learning based auxiliary task for the encoder of
our model. In addition, we propose two methods that use the learned
stylistic features to guide both the pointer and the decoder during
generation. Experimental results show that the historical headlines of the
same user improve headline generation significantly, and that both the
contrastive learning module and the two style-feature fusion methods
further boost performance.
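The abstract describes two mechanisms: a contrastive auxiliary task that shapes the encoder's style representation, and fusion of that style signal into the pointer and decoder. As a rough illustration only (this is not the authors' released code; every module, argument, and name below is hypothetical), an InfoNCE-style loss over same-author headline embeddings plus a gated fusion of the pooled style vector might look like this in PyTorch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class StyleContrastiveLoss(nn.Module):
    """InfoNCE-style auxiliary loss: headlines written by the same author
    are treated as positive pairs, all other headlines in the batch as
    negatives (a common contrastive setup; the paper's exact formulation
    may differ)."""

    def __init__(self, temperature: float = 0.1):
        super().__init__()
        self.temperature = temperature

    def forward(self, style_emb: torch.Tensor, author_ids: torch.Tensor):
        # style_emb: (batch, dim) pooled encoder outputs for headlines
        # author_ids: (batch,) integer author identifiers
        z = F.normalize(style_emb, dim=-1)
        sim = z @ z.t() / self.temperature                  # (batch, batch)
        eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
        sim = sim.masked_fill(eye, -1e9)                    # exclude self-pairs
        pos = (author_ids[:, None] == author_ids[None, :]) & ~eye
        log_prob = sim - sim.logsumexp(dim=-1, keepdim=True)
        has_pos = pos.any(dim=-1)  # anchors with at least one positive
        loss = -(log_prob * pos.float()).sum(-1)[has_pos] / pos.sum(-1)[has_pos]
        return loss.mean()


class StyleGatedFusion(nn.Module):
    """One plausible way to let a learned style vector steer the decoder:
    a sigmoid gate interpolating between the decoder states and the style
    vector. The paper proposes two fusion methods (for the pointer and the
    decoder); this is a generic stand-in, not their exact design."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, dec_states: torch.Tensor, style_vec: torch.Tensor):
        # dec_states: (batch, steps, dim); style_vec: (batch, dim)
        style = style_vec.unsqueeze(1).expand_as(dec_states)
        g = torch.sigmoid(self.gate(torch.cat([dec_states, style], dim=-1)))
        return g * dec_states + (1.0 - g) * style
```

In training, a sketch like this would simply add the contrastive term to the generation loss, e.g. `total = gen_loss + lambda_cl * cl_loss` with `lambda_cl` a tuning weight; how the paper actually fuses the style vector into the pointer mechanism may differ from the gate shown here.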
Related papers
- Capturing Style in Author and Document Representation [4.323709559692927]
We propose a new architecture that learns embeddings for both authors and documents with a stylistic constraint.
We evaluate our method on three datasets: a literary corpus extracted from the Gutenberg Project, the Blog Authorship and IMDb62.
arXiv Detail & Related papers (2024-07-18T10:01:09Z)
- SCStory: Self-supervised and Continual Online Story Discovery [53.72745249384159]
SCStory helps people digest rapidly published news article streams in real-time without human annotations.
SCStory employs self-supervised and continual learning with a novel idea of story-indicative adaptive modeling of news article streams.
arXiv Detail & Related papers (2023-11-27T04:50:01Z)
- Unsupervised Neural Stylistic Text Generation using Transfer learning and Adapters [66.17039929803933]
We propose a novel transfer learning framework which updates only 0.3% of model parameters to learn style-specific attributes for response generation.
We learn style specific attributes from the PERSONALITY-CAPTIONS dataset.
arXiv Detail & Related papers (2022-10-07T00:09:22Z)
- StoryTrans: Non-Parallel Story Author-Style Transfer with Discourse Representations and Content Enhancing [73.81778485157234]
Long texts usually involve more complicated author linguistic preferences, such as discourse structures, than single sentences do.
We formulate the task of non-parallel story author-style transfer, which requires transferring an input story into a specified author style.
We use an additional training objective to disentangle stylistic features from the learned discourse representation, preventing the model from degenerating into an autoencoder.
arXiv Detail & Related papers (2022-08-29T08:47:49Z)
- Keyphrase Generation Beyond the Boundaries of Title and Abstract [28.56508031460787]
Keyphrase generation aims at generating phrases (keyphrases) that best describe a given document.
In this work, we explore whether the integration of additional data from semantically similar articles or from the full text of the given article can be helpful for a neural keyphrase generation model.
We discover that adding sentences from the full text, particularly in the form of a summary of the article, can significantly improve the generation of both types of keyphrases.
arXiv Detail & Related papers (2021-12-13T16:33:01Z)
- The Style-Content Duality of Attractiveness: Learning to Write Eye-Catching Headlines via Disentanglement [59.58372539336339]
Eye-catching headlines function as the first device to trigger more clicks, creating a reciprocal effect between producers and viewers.
We propose a Disentanglement-based Attractive Headline Generator (DAHG) that generates headlines that capture the attractive content while following the attractive style.
arXiv Detail & Related papers (2020-12-14T11:11:43Z)
- Hooks in the Headline: Learning to Generate Headlines with Controlled Styles [69.30101340243375]
We propose a new task, Stylistic Headline Generation (SHG), to enrich the headlines with three style options.
TitleStylist generates style-specific headlines by combining the summarization and reconstruction tasks into a multitasking framework.
The attraction score of the headlines generated by our model surpasses that of the state-of-the-art summarization model by 9.68%, and even outperforms human-written references.
arXiv Detail & Related papers (2020-04-04T17:24:47Z)
- Learning to Select Bi-Aspect Information for Document-Scale Text Content Manipulation [50.01708049531156]
We focus on a new practical task, document-scale text content manipulation, which is the opposite of text style transfer.
In detail, the input is a set of structured records and a reference text describing another recordset.
The output is a summary that accurately describes the partial content of the source recordset in the same writing style as the reference.
arXiv Detail & Related papers (2020-02-24T12:52:10Z)
- Attractive or Faithful? Popularity-Reinforced Learning for Inspired Headline Generation [0.0]
We propose a novel framework called POpularity-Reinforced Learning for inspired Headline Generation (PORL-HG).
PORL-HG exploits the extractive-abstractive architecture with 1) Popular Topic Attention (PTA) for guiding the extractor to select the attractive sentence from the article and 2) a popularity predictor for guiding the abstractor to rewrite the attractive sentence.
We show that the proposed PORL-HG significantly outperforms state-of-the-art headline generation models in terms of attractiveness, as evaluated by both humans (71.03%) and the predictor (at least 27.60%).
arXiv Detail & Related papers (2020-02-06T04:37:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.