Recurrent Inference in Text Editing
- URL: http://arxiv.org/abs/2009.12643v2
- Date: Wed, 30 Sep 2020 04:12:05 GMT
- Title: Recurrent Inference in Text Editing
- Authors: Ning Shi, Ziheng Zeng, Haotian Zhang, Yichen Gong
- Abstract summary: We propose a new inference method, Recurrence, that iteratively performs editing actions, significantly narrowing the problem space.
In each iteration, Recurrence encodes the partially edited text, decodes the latent representation, generates a short, fixed-length action, and applies the action to complete a single edit.
For a comprehensive comparison, we introduce three types of text editing tasks: Arithmetic Operators Restoration (AOR), Arithmetic Equation Simplification (AES), and Arithmetic Equation Correction (AEC).
- Score: 6.4689151804633775
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In neural text editing, prevalent sequence-to-sequence based approaches
directly map the unedited text either to the edited text or the editing
operations, in which the performance is degraded by the limited source text
encoding and long, varying decoding steps. To address this problem, we propose
a new inference method, Recurrence, that iteratively performs editing actions,
significantly narrowing the problem space. In each iteration, Recurrence
encodes the partially edited text, decodes the latent representation,
generates a short, fixed-length action, and applies the action to complete a
single edit. For a comprehensive comparison, we introduce three types of text
editing tasks: Arithmetic Operators Restoration (AOR), Arithmetic Equation
Simplification (AES), and Arithmetic Equation Correction (AEC). Extensive
experiments on these tasks with varying difficulties demonstrate that
Recurrence achieves improvements over conventional inference methods.
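As a rough illustration of this inference scheme, the loop below sketches one possible reading in Python. It is not the authors' implementation: `predict_action` is a hypothetical stand-in for the paper's encoder-decoder, replaced here by a toy rule that simplifies one multiplication per step (an AES-flavored edit), and the `EditAction` layout is an assumption.

```python
import re
from typing import NamedTuple, Optional

class EditAction(NamedTuple):
    """A short, fixed-length action: replace text[start:end] with `insert`."""
    start: int
    end: int
    insert: str

def predict_action(text: str) -> Optional[EditAction]:
    # Hypothetical stand-in for the paper's model: encode the partially
    # edited text, decode the latent representation, and emit one action.
    # Toy rule for an AES-style task: simplify the leftmost "a*b".
    m = re.search(r"(\d+)\*(\d+)", text)
    if m is None:
        return None  # the learned model would emit a STOP action instead
    return EditAction(m.start(), m.end(), str(int(m.group(1)) * int(m.group(2))))

def recurrent_inference(text: str, max_iters: int = 50) -> str:
    # Each iteration applies exactly one edit, so the decoder only ever
    # produces a short, fixed-length action rather than the full output.
    for _ in range(max_iters):
        action = predict_action(text)
        if action is None:
            break
        text = text[:action.start] + action.insert + text[action.end:]
    return text

print(recurrent_inference("2*3+4*5"))  # -> "6+20"
```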
Related papers
- Edit-Constrained Decoding for Sentence Simplification [16.795671075667205]
We propose edit operation based lexically constrained decoding for sentence simplification.
Our experiments indicate that the proposed method consistently outperforms previous studies on three English simplification corpora commonly used for this task.
arXiv Detail & Related papers (2024-09-28T05:39:50Z)
- Task-Oriented Diffusion Inversion for High-Fidelity Text-based Editing [60.730661748555214]
We introduce Task-Oriented Diffusion Inversion (TODInv), a novel framework that inverts and edits real images tailored to specific editing tasks.
TODInv seamlessly integrates inversion and editing through reciprocal optimization, ensuring both high fidelity and precise editability.
arXiv Detail & Related papers (2024-08-23T22:16:34Z)
- Object-aware Inversion and Reassembly for Image Editing [61.19822563737121]
We propose Object-aware Inversion and Reassembly (OIR) to enable object-level fine-grained editing.
We use our search metric to find the optimal inversion step for each editing pair when editing an image.
Our method achieves superior performance in editing object shapes, colors, materials, categories, etc., especially in multi-object editing scenarios.
arXiv Detail & Related papers (2023-10-18T17:59:02Z)
- Non-autoregressive Text Editing with Copy-aware Latent Alignments [31.756401120004977]
We propose a novel non-autoregressive text editing method by modeling the edit process with latent CTC alignments.
We conduct extensive experiments on GEC and sentence fusion tasks, showing that our proposed method significantly outperforms existing Seq2Edit models and achieves similar or even better results than Seq2Seq with over $4\times$ speedup.
In-depth analyses reveal the strengths of our method in terms of robustness under various scenarios and generating fluent and flexible outputs.
arXiv Detail & Related papers (2023-10-11T19:02:57Z)
- Reducing Sequence Length by Predicting Edit Operations with Large Language Models [50.66922361766939]
This paper proposes predicting edit spans for the source text for local sequence transduction tasks.
We apply instruction tuning to Large Language Models on the supervision data of edit spans.
Experiments show that the proposed method achieves comparable performance to the baseline in four tasks.
arXiv Detail & Related papers (2023-05-19T17:51:05Z)
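A natural way to obtain edit spans like those described above is to diff the source and target strings. The sketch below uses Python's difflib for illustration only; the (start, end, replacement) span format is an assumption here, not the paper's exact scheme.

```python
import difflib

def edit_spans(source: str, target: str):
    """Represent source -> target as a short list of span edits instead of
    regenerating the whole target sequence (a sketch; the paper's exact
    span format may differ)."""
    matcher = difflib.SequenceMatcher(a=source, b=target)
    spans = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":  # keep only the parts that actually change
            spans.append((i1, i2, target[j1:j2]))
    return spans

def apply_spans(source: str, spans):
    # Apply right-to-left so earlier offsets stay valid.
    for i1, i2, repl in sorted(spans, reverse=True):
        source = source[:i1] + repl + source[i2:]
    return source

spans = edit_spans("She go to school.", "She goes to school.")
print(spans)                                    # [(6, 6, 'es')]
print(apply_spans("She go to school.", spans))  # "She goes to school."
```

The payoff is that the model only has to emit the few spans that change, which is what shortens the output sequence.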
- Text Editing as Imitation Game [33.418628166176234]
We reformulate text editing as an imitation game using behavioral cloning.
We introduce a dual-decoder structure to parallelize decoding while retaining the dependencies between action tokens.
Our model consistently outperforms the autoregressive baselines in terms of performance, efficiency, and robustness.
arXiv Detail & Related papers (2022-10-21T22:07:04Z)
- Composable Text Controls in Latent Space with ODEs [97.12426987887021]
This paper proposes a new efficient approach for composable text operations in the compact latent space of text.
By connecting pretrained LMs to the latent space through efficient adaptation, we then decode the sampled vectors into desired text sequences.
Experiments show that composing those operators within our approach manages to generate or edit high-quality text.
arXiv Detail & Related papers (2022-08-01T06:51:45Z)
- Text Revision by On-the-Fly Representation Optimization [76.11035270753757]
Current state-of-the-art methods formulate these tasks as sequence-to-sequence learning problems.
We present an iterative in-place editing approach for text revision, which requires no parallel data.
It achieves competitive, and sometimes better, performance than state-of-the-art supervised methods on text simplification.
arXiv Detail & Related papers (2022-04-15T07:38:08Z)
- GRS: Combining Generation and Revision in Unsupervised Sentence Simplification [7.129708913903111]
We propose an unsupervised approach to sentence simplification that combines text generation and text revision.
We start with an iterative framework in which an input sentence is revised using explicit edit operations, and add paraphrasing as a new edit operation.
This allows us to combine the advantages of generative and revision-based approaches: paraphrasing captures complex edit operations, and the use of explicit edit operations in an iterative manner provides controllability and interpretability.
arXiv Detail & Related papers (2022-03-18T04:52:54Z)
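To make the generate-and-revise idea above concrete, here is a toy sketch of such an iterative loop. The scorer and the paraphrase table are stand-ins (GRS uses a neural paraphraser and also accounts for meaning preservation and fluency); only the control flow is the point.

```python
# Toy sketch of GRS-style iterative revision (not the authors' code):
# generate candidate revisions with explicit edit operations, score them,
# and keep the best one until no single edit improves the score.

SIMPLER = {"utilize": "use", "commence": "begin", "purchase": "buy"}

def score(sentence: str) -> float:
    # Stand-in simplicity score: penalize long words. A real scorer would
    # also reward meaning preservation, which keeps deletion in check.
    return -sum(len(w) for w in sentence.split() if len(w) > 6)

def candidates(sentence: str):
    words = sentence.split()
    for i, w in enumerate(words):
        # Yield paraphrase before delete so ties favor meaning-preserving edits.
        if w.lower() in SIMPLER:
            yield " ".join(words[:i] + [SIMPLER[w.lower()]] + words[i + 1:])
        yield " ".join(words[:i] + words[i + 1:])  # delete word i

def revise(sentence: str, max_steps: int = 5) -> str:
    for _ in range(max_steps):
        best = max(candidates(sentence), key=score, default=sentence)
        if score(best) <= score(sentence):
            break  # stop when no single edit improves the score
        sentence = best
    return sentence

print(revise("We will commence to utilize the new system"))
# -> "We will begin to use the new system"
```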
- Learning Structural Edits via Incremental Tree Transformations [102.64394890816178]
We present a generic model for incremental editing of structured data (i.e., "structural edits").
Our editor learns to iteratively generate tree edits (e.g., deleting or adding a subtree) and applies them to the partially edited data.
We evaluate our proposed editor on two source code edit datasets, where results show that, with the proposed edit encoder, our editor significantly improves accuracy over previous approaches.
arXiv Detail & Related papers (2021-01-28T16:11:32Z)
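For intuition about the tree edits mentioned above, the sketch below applies path-addressed delete/add operations to a toy tree. The `Node` type and index-path addressing are assumptions for illustration; the paper's edit vocabulary and tree encoder are more elaborate.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    children: list = field(default_factory=list)

def delete_subtree(root: Node, path: list[int]) -> Node:
    # Remove the child reached by following `path` (a list of child indices).
    parent = root
    for idx in path[:-1]:
        parent = parent.children[idx]
    parent.children.pop(path[-1])
    return root

def add_subtree(root: Node, path: list[int], subtree: Node) -> Node:
    # Insert `subtree` as a child at the position addressed by `path`.
    parent = root
    for idx in path[:-1]:
        parent = parent.children[idx]
    parent.children.insert(path[-1], subtree)
    return root

# Toy example: turn (+ 1 2) into (+ 1 3) by one delete and one add.
tree = Node("+", [Node("1"), Node("2")])
delete_subtree(tree, [1])
add_subtree(tree, [1], Node("3"))
print(tree)  # the tree now encodes (+ 1 3)
```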
- Seq2Edits: Sequence Transduction Using Span-level Edit Operations [10.785577504399077]
Seq2Edits is an open-vocabulary approach to sequence editing for natural language processing (NLP) tasks.
We evaluate our method on five NLP tasks (text normalization, sentence fusion, sentence splitting & rephrasing, text simplification, and grammatical error correction).
For grammatical error correction, our method speeds up inference by up to 5.2x compared to full sequence models.
arXiv Detail & Related papers (2020-09-23T13:28:38Z)
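As a final illustration, applying a left-to-right sequence of span-level edits of the kind Seq2Edits predicts could look like the sketch below. The (op, end, replacement) triple format is a simplification assumed here; the paper uses tagged spans with an open replacement vocabulary.

```python
def apply_span_edits(source: str, edits):
    """Apply a left-to-right sequence of span-level edits.
    Each edit is (op, end, replacement): copy or rewrite source[cursor:end].
    The triple format is an assumption for illustration only."""
    out, cursor = [], 0
    for op, end, replacement in edits:
        if op == "KEEP":
            out.append(source[cursor:end])  # copy the source span verbatim
        else:  # "REPLACE": substitute the span (empty replacement = delete)
            out.append(replacement)
        cursor = end
    out.append(source[cursor:])  # copy any trailing source text
    return "".join(out)

# Grammatical-error-correction style example:
src = "He go to school yesterday ."
edits = [("KEEP", 3, None), ("REPLACE", 5, "went"), ("KEEP", len(src), None)]
print(apply_span_edits(src, edits))  # "He went to school yesterday ."
```

Because most spans are KEEPs that copy the source, the decoder emits far fewer tokens than a full rewrite, which is where the reported inference speedup comes from.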
This list is automatically generated from the titles and abstracts of the papers on this site.