SentBS: Sentence-level Beam Search for Controllable Summarization
- URL: http://arxiv.org/abs/2210.14502v1
- Date: Wed, 26 Oct 2022 06:21:01 GMT
- Title: SentBS: Sentence-level Beam Search for Controllable Summarization
- Authors: Chenhui Shen, Liying Cheng, Lidong Bing, Yang You, Luo Si
- Abstract summary: We propose a sentence-level beam search generation method (SentBS), where evaluation is conducted throughout the generation process to select suitable sentences for subsequent generations.
Experiments show that all explored combinations for SentBS can improve the agreement between the generated text and the desired structure, with the best method significantly reducing the structural discrepancies suffered by the existing model, by approximately 68%.
- Score: 55.27670620831012
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A wide range of control perspectives have been explored in controllable text
generation. Structure-controlled summarization has recently been proposed as a
useful and interesting research direction. However, current structure-controlling
methods have limited effectiveness in enforcing the desired structure. To
address this limitation, we propose a sentence-level beam search generation
method (SentBS), where evaluation is conducted throughout the generation
process to select suitable sentences for subsequent generations. We experiment
with different combinations of decoding methods to be used as subcomponents by
SentBS and evaluate results on the structure-controlled dataset MReD.
Experiments show that all explored combinations for SentBS can improve the
agreement between the generated text and the desired structure, with the best
method significantly reducing the structural discrepancies suffered by the
existing model, by approximately 68%.
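The procedure the abstract describes (propose several candidate next sentences with different decoding methods, score each against the desired structure, and keep only the best prefixes before generating the next sentence) can be sketched as follows. The `propose_sentences` and `score_structure` functions below are hypothetical stand-ins for the paper's seq2seq decoder and structure classifier, not the authors' implementation:

```python
# Minimal sketch of sentence-level beam search, assuming toy stand-ins
# for the real generator and scorer used in the paper.

def propose_sentences(prefix, label, n_candidates=3):
    """Pretend decoder: emit candidate next sentences for a control label."""
    return [f"<{label} sentence v{i}>" for i in range(n_candidates)]

def score_structure(sentence, label):
    """Pretend classifier: higher score = better match to the desired label."""
    return 1.0 if label in sentence else 0.0

def sent_beam_search(structure, beam_size=2, n_candidates=3):
    """Sentence-level beam search over a desired sequence of structure labels."""
    beams = [([], 0.0)]  # each beam: (sentences so far, cumulative score)
    for label in structure:
        expanded = []
        for sents, score in beams:
            prefix = " ".join(sents)
            for cand in propose_sentences(prefix, label, n_candidates):
                expanded.append((sents + [cand],
                                 score + score_structure(cand, label)))
        # Keep only the highest-scoring prefixes before the next sentence.
        beams = sorted(expanded, key=lambda b: b[1], reverse=True)[:beam_size]
    return beams[0]

best_sents, best_score = sent_beam_search(["strength", "weakness", "decision"])
print(best_sents, best_score)
```

The key difference from token-level beam search is the unit of expansion: candidates are whole sentences, so the structure scorer can evaluate a complete, classifiable unit at every step.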
Related papers
- Controllable Text Generation in the Instruction-Tuning Era [3.310278632293704]
We find that prompting-based approaches outperform controllable text generation methods on most datasets and tasks.
We provide an algorithm that uses only a task dataset and a Large Language Model with in-context capabilities to automatically generate a constraint dataset.
arXiv Detail & Related papers (2024-05-02T17:24:30Z)
- Reinforcement Learning with Token-level Feedback for Controllable Text Generation [16.117006822479407]
We propose a novel reinforcement learning algorithm named TOLE which formulates TOken-LEvel rewards for controllable text generation.
Experimental results show that our algorithm can achieve superior performance on both single-attribute and multi-attribute control tasks.
arXiv Detail & Related papers (2024-03-18T08:18:37Z)
- Sequential Visual and Semantic Consistency for Semi-supervised Text Recognition [56.968108142307976]
Scene text recognition (STR) is a challenging task that requires large-scale annotated data for training.
Most existing STR methods resort to synthetic data, which may introduce domain discrepancy and degrade the performance of STR models.
This paper proposes a novel semi-supervised learning method for STR that incorporates word-level consistency regularization from both visual and semantic aspects.
arXiv Detail & Related papers (2024-02-24T13:00:54Z)
- Exploiting Data Hierarchy as a New Modality for Contrastive Learning [0.0]
This work investigates how hierarchically structured data can help neural networks learn conceptual representations of cathedrals.
The underlying WikiScenes dataset provides a spatially organized hierarchical structure of cathedral components.
We propose a novel hierarchical contrastive training approach that leverages a triplet margin loss to represent the data's spatial hierarchy in the encoder's latent space.
arXiv Detail & Related papers (2024-01-06T21:47:49Z)
- Sequentially Controlled Text Generation [97.22539956688443]
While GPT-2 generates sentences that are remarkably human-like, longer documents can ramble and fail to follow human-like writing structure.
We study the problem of imposing structure on long-range text.
We develop a sequential controlled text generation pipeline with generation and editing.
arXiv Detail & Related papers (2023-01-05T21:23:51Z)
- Classifiers are Better Experts for Controllable Text Generation [63.17266060165098]
We show that the proposed method significantly outperforms the recent PPLM, GeDi, and DExperts methods on perplexity and on sentiment accuracy as measured by an external classifier of the generated texts.
At the same time, it is also easier to implement and tune, and has significantly fewer restrictions and requirements.
arXiv Detail & Related papers (2022-05-15T12:58:35Z)
- Latent Template Induction with Gumbel-CRFs [107.17408593510372]
We explore the use of structured variational autoencoders to infer latent templates for sentence generation.
We show that, as a structured inference network, it learns interpretable templates during training.
arXiv Detail & Related papers (2020-11-29T01:00:57Z)
- Robust Group Subspace Recovery: A New Approach for Multi-Modality Data Fusion [18.202825916298437]
We propose a novel multi-modal data fusion approach based on group sparsity.
The proposed approach exploits the structural dependencies between the different modalities' data to cluster the associated target objects.
The resulting union-of-subspaces (UoS) structure is employed to classify newly observed data points, highlighting the abstraction capacity of the proposed method.
arXiv Detail & Related papers (2020-06-18T16:31:31Z)
- AutoSTR: Efficient Backbone Search for Scene Text Recognition [80.7290173000068]
Scene text recognition (STR) is very challenging due to the diversity of text instances and the complexity of scenes.
We propose automated STR (AutoSTR) to search data-dependent backbones to boost text recognition performance.
Experiments demonstrate that, by searching data-dependent backbones, AutoSTR can outperform the state-of-the-art approaches on standard benchmarks.
arXiv Detail & Related papers (2020-03-14T06:51:04Z)
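The hierarchical contrastive entry above relies on a triplet margin loss, which is a standard formulation: pull an anchor embedding toward a positive example and push it away from a negative by at least a fixed margin. A minimal sketch (the embeddings and sampling strategy here are illustrative, not taken from the paper):

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss:
    max(0, d(anchor, positive) - d(anchor, negative) + margin)."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# In a hierarchical setup, positives would be drawn from the same node
# (e.g. the same cathedral component) and negatives from a different branch.
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # nearby: same hierarchy node
n = np.array([2.0, 0.0])   # distant: different branch
print(triplet_margin_loss(a, p, n))  # 0.1 - 2.0 + 1.0 < 0, so the loss is 0.0
```

When the negative is already farther away than the positive by more than the margin, the loss is zero and that triplet contributes no gradient; training signal comes only from violating triplets.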
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.