StyleBART: Decorate Pretrained Model with Style Adapters for
Unsupervised Stylistic Headline Generation
- URL: http://arxiv.org/abs/2310.17743v2
- Date: Mon, 13 Nov 2023 06:38:53 GMT
- Title: StyleBART: Decorate Pretrained Model with Style Adapters for
Unsupervised Stylistic Headline Generation
- Authors: Hanqing Wang, Yajing Luo, Boya Xiong, Guanhua Chen, Yun Chen
- Abstract summary: StyleBART is an unsupervised approach for stylistic headline generation.
Our method decorates the pretrained BART model with adapters that are responsible for different styles.
We show that StyleBART achieves new state-of-the-art performance in the unsupervised stylistic headline generation task.
- Score: 13.064106986202294
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Stylistic headline generation is the task of generating a headline that not
only summarizes the content of an article, but also reflects a desired style
that attracts users. As style-specific article-headline pairs are scarce,
previous research focuses on unsupervised approaches with a standard headline
generation dataset and mono-style corpora. In this work, we follow this line
and propose StyleBART, an unsupervised approach for stylistic headline
generation. Our method decorates the pretrained BART model with adapters that
are responsible for different styles and allows the generation of headlines
with diverse styles by simply switching the adapters. Different from previous
works, StyleBART separates the task of style learning and headline generation,
making it possible to freely combine the base model and the style adapters
during inference. We further propose an inverse paraphrasing task to enhance
the style adapters. Extensive automatic and human evaluations show that
StyleBART achieves new state-of-the-art performance in the unsupervised
stylistic headline generation task, producing high-quality headlines with the
desired style.
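The abstract's core mechanism, a frozen base model decorated with per-style bottleneck adapters that are swapped at inference time, can be illustrated with a minimal numpy sketch. This is not StyleBART's actual code: the layer shapes, the zero initialization of the up-projection, and the style names are illustrative assumptions, and the real method trains adapters on mono-style corpora rather than leaving them untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

class StyleAdapter:
    """Residual bottleneck adapter: h + W_up @ relu(W_down @ h)."""
    def __init__(self, d_model, d_bottleneck):
        self.w_down = rng.normal(0.0, 0.01, (d_bottleneck, d_model))
        # Zero-init the up-projection so a fresh adapter starts as the
        # identity, leaving the frozen base model's behavior untouched.
        self.w_up = np.zeros((d_model, d_bottleneck))

    def __call__(self, h):
        return h + self.w_up @ np.maximum(self.w_down @ h, 0.0)

def frozen_base_layer(h):
    # Stand-in for a frozen pretrained (e.g. BART) layer.
    return np.tanh(h)

# One adapter per style; the base model is shared across all styles.
adapters = {s: StyleAdapter(8, 2) for s in ("humor", "romance", "clickbait")}

def forward(h, style):
    # The desired style is selected at inference simply by
    # switching which adapter wraps the frozen layer.
    return adapters[style](frozen_base_layer(h))

h = rng.normal(size=8)
out = forward(h, "humor")
```

Because style learning is decoupled from headline generation, any trained adapter can be freely combined with the base model at inference, which is the separation the abstract emphasizes.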
Related papers
- StyleShot: A Snapshot on Any Style [20.41380860802149]
We show that a good style representation is crucial and sufficient for generalized style transfer without test-time tuning.
We achieve this through constructing a style-aware encoder and a well-organized style dataset called StyleGallery.
We highlight that our approach, named StyleShot, is simple yet effective in mimicking various desired styles without test-time tuning.
arXiv Detail & Related papers (2024-07-01T16:05:18Z)
- TinyStyler: Efficient Few-Shot Text Style Transfer with Authorship Embeddings [51.30454130214374]
We introduce TinyStyler, a lightweight but effective approach to perform efficient, few-shot text style transfer.
We evaluate TinyStyler's ability to perform text attribute style transfer with automatic and human evaluations.
Our model has been made publicly available at https://huggingface.co/tinystyler/tinystyler.
arXiv Detail & Related papers (2024-06-21T18:41:22Z)
- StyleMaster: Towards Flexible Stylized Image Generation with Diffusion Models [42.45078883553856]
Stylized Text-to-Image Generation (STIG) aims to generate images based on text prompts and style reference images.
In this paper, we propose a novel framework, dubbed StyleMaster, for this task by leveraging pretrained Stable Diffusion.
Two objective functions are introduced to optimize the model together with the denoising loss, which can further enhance semantic and style consistency.
arXiv Detail & Related papers (2024-05-24T07:19:40Z)
- StyleCrafter: Enhancing Stylized Text-to-Video Generation with Style Adapter [78.75422651890776]
StyleCrafter is a generic method that enhances pre-trained T2V models with a style control adapter.
To promote content-style disentanglement, we remove style descriptions from the text prompt and extract style information solely from the reference image.
StyleCrafter efficiently generates high-quality stylized videos that align with the content of the texts and resemble the style of the reference images.
arXiv Detail & Related papers (2023-12-01T03:53:21Z)
- StyleAdapter: A Unified Stylized Image Generation Model [97.24936247688824]
StyleAdapter is a unified stylized image generation model capable of producing a variety of stylized images.
It can be integrated with existing controllable synthesis methods, such as T2I-adapter and ControlNet.
arXiv Detail & Related papers (2023-09-04T19:16:46Z)
- ParaGuide: Guided Diffusion Paraphrasers for Plug-and-Play Textual Style Transfer [57.6482608202409]
Textual style transfer is the task of transforming stylistic properties of text while preserving meaning.
We introduce a novel diffusion-based framework for general-purpose style transfer that can be flexibly adapted to arbitrary target styles.
We validate the method on the Enron Email Corpus, with both human and automatic evaluations, and find that it outperforms strong baselines on formality, sentiment, and even authorship style transfer.
arXiv Detail & Related papers (2023-08-29T17:36:02Z)
- Visual Captioning at Will: Describing Images and Videos Guided by a Few Stylized Sentences [49.66987347397398]
Few-Shot Stylized Visual Captioning aims to generate captions in any desired style, using only a few examples as guidance during inference.
We propose a framework called FS-StyleCap for this task, which utilizes a conditional encoder-decoder language model and a visual projection module.
arXiv Detail & Related papers (2023-07-31T04:26:01Z)
- Fine-Grained Control of Artistic Styles in Image Generation [24.524863555822837]
Generative models and adversarial training have enabled the artificial generation of artworks in various artistic styles.
We propose to capture the continuous spectrum of styles and apply it to a style generation task.
Our method can be used with common generative adversarial networks (such as StyleGAN).
arXiv Detail & Related papers (2021-10-19T21:51:52Z)
- Hooks in the Headline: Learning to Generate Headlines with Controlled Styles [69.30101340243375]
We propose a new task, Stylistic Headline Generation (SHG), to enrich the headlines with three style options.
TitleStylist generates style-specific headlines by combining the summarization and reconstruction tasks into a multitasking framework.
The attraction score of our model-generated headlines surpasses that of the state-of-the-art summarization model by 9.68%, and even outperforms human-written references.
arXiv Detail & Related papers (2020-04-04T17:24:47Z)