Leveraging Locality in Abstractive Text Summarization
- URL: http://arxiv.org/abs/2205.12476v1
- Date: Wed, 25 May 2022 03:59:24 GMT
- Title: Leveraging Locality in Abstractive Text Summarization
- Authors: Yixin Liu, Ansong Ni, Linyong Nan, Budhaditya Deb, Chenguang Zhu,
Ahmed H. Awadallah, Dragomir Radev
- Abstract summary: We investigate whether models with a restricted context can be competitive with memory-efficient attention models.
Our model is applied to individual pages, which contain parts of the input grouped by the principle of locality.
- Score: 44.67905693077539
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Despite the successes of neural attention models for natural language
generation tasks, the quadratic memory complexity of the self-attention module
with respect to the input length hinders their application in long text
summarization. Instead of designing more efficient attention modules, we
approach this problem by investigating whether models with a restricted context
can be competitive with memory-efficient attention models that maintain a
global context by treating the input as a single sequence. Our model is applied
to individual pages, which contain parts of the input grouped by the principle
of locality, during both the encoding and decoding stages. We empirically
investigate three kinds of locality in text summarization at different levels,
ranging from sentences to documents. Our experimental results show that our
model outperforms strong baseline models with efficient attention modules, and
our analysis provides further insights into our locality-aware modeling
strategy.
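As a rough illustration of why restricting context helps: self-attention over an n-token input needs O(n^2) memory, while splitting the input into p pages of roughly n/p tokens each needs only p * O((n/p)^2) = O(n^2/p). The Python sketch below is a minimal rendering of the page-based idea under our own assumptions, not the authors' implementation: contiguous sentence grouping stands in for the paper's locality principles, and the `summarize_page` callback stands in for any fixed-context encoder-decoder model.
```python
from typing import Callable, List

def group_into_pages(sentences: List[str], budget: int = 512) -> List[str]:
    """Group contiguous sentences into 'pages' under a token budget.

    Contiguity is the simplest locality assumption (sentence-level);
    document- or discourse-level grouping would replace this loop.
    """
    pages, current, used = [], [], 0
    for sent in sentences:
        n_tokens = len(sent.split())  # crude whitespace token count
        if current and used + n_tokens > budget:
            pages.append(" ".join(current))
            current, used = [], 0
        current.append(sent)
        used += n_tokens
    if current:
        pages.append(" ".join(current))
    return pages

def summarize_long_input(sentences: List[str],
                         summarize_page: Callable[[str], str],
                         budget: int = 512) -> str:
    """Run a restricted-context summarizer page by page and join the outputs.

    `summarize_page` is a placeholder, not the paper's API: any model whose
    attention window fits one page can be plugged in here.
    """
    pages = group_into_pages(sentences, budget)
    return " ".join(summarize_page(p) for p in pages)
```
With p pages, peak attention memory drops by roughly a factor of p, which is the trade-off against global context that the abstract describes.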
Related papers
- Determine-Then-Ensemble: Necessity of Top-k Union for Large Language Model Ensembling [23.447466392929712]
Large language models (LLMs) exhibit varying strengths and weaknesses across different tasks.
Existing LLM ensembling methods often overlook model compatibility and struggle with inefficient alignment of probabilities.
We introduce Union Top-$k$ Ensembling (UniTE), a novel approach that efficiently combines models by focusing on the union of the top-$k$ tokens from each model.
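Reading the summary above, the mechanism appears to be: take the union of each model's top-k next-token candidates, align probabilities only over that small union rather than over full vocabularies, and combine. The sketch below is a hypothetical reading of that idea, not the released UniTE code; the dictionary inputs and the renormalize-then-average rule are our assumptions.
```python
def topk_union_ensemble(probs_a: dict, probs_b: dict, k: int = 10) -> str:
    """Sketch of top-k union ensembling for one decoding step (assumed, unofficial).

    probs_a / probs_b map token strings to next-token probabilities from
    two models that share a string-level vocabulary.
    """
    top_a = sorted(probs_a, key=probs_a.get, reverse=True)[:k]
    top_b = sorted(probs_b, key=probs_b.get, reverse=True)[:k]
    union = set(top_a) | set(top_b)  # align only the union, not full vocabularies
    # Renormalize each model's probability mass over the union, then average.
    za = sum(probs_a.get(t, 0.0) for t in union) or 1.0
    zb = sum(probs_b.get(t, 0.0) for t in union) or 1.0
    combined = {t: 0.5 * (probs_a.get(t, 0.0) / za + probs_b.get(t, 0.0) / zb)
                for t in union}
    return max(combined, key=combined.get)  # greedy next-token choice
```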
arXiv Detail & Related papers (2024-10-03T08:42:38Z)
- TriSum: Learning Summarization Ability from Large Language Models with Structured Rationale [66.01943465390548]
We introduce TriSum, a framework for distilling large language models' text summarization abilities into a compact, local model.
Our method enhances local model performance on various benchmarks.
It also improves interpretability by providing insights into the summarization rationale.
arXiv Detail & Related papers (2024-03-15T14:36:38Z) - Split and Rephrase with Large Language Models [2.499907423888049]
The Split and Rephrase (SPRP) task consists of splitting complex sentences into a sequence of shorter grammatical sentences.
We evaluate large language models on the task, showing that they can provide large improvements over the state of the art on the main metrics.
arXiv Detail & Related papers (2023-12-18T10:16:37Z) - Multi-Grained Multimodal Interaction Network for Entity Linking [65.30260033700338]
The multimodal entity linking (MEL) task aims to resolve ambiguous mentions to a multimodal knowledge graph.
We propose a novel Multi-GraIned Multimodal InteraCtion Network (MIMIC) framework for solving the MEL task.
arXiv Detail & Related papers (2023-07-19T02:11:19Z)
- Generalization Properties of Retrieval-based Models [50.35325326050263]
Retrieval-based machine learning methods have enjoyed success on a wide range of problems.
Despite growing literature showcasing the promise of these models, the theoretical underpinning for such models remains underexplored.
We present a formal treatment of retrieval-based models to characterize their generalization ability.
arXiv Detail & Related papers (2022-10-06T00:33:01Z)
- Coalescing Global and Local Information for Procedural Text Understanding [70.10291759879887]
A complete procedural understanding solution should combine three core aspects: local and global views of the inputs, and a global view of the outputs.
In this paper, we propose Coalescing Global and Local Information (CGLI), a new model that builds entity and time representations.
Experiments on a popular procedural text understanding dataset show that our model achieves state-of-the-art results.
arXiv Detail & Related papers (2022-08-26T19:16:32Z)
- Compositional Generalization in Grounded Language Learning via Induced Model Sparsity [81.38804205212425]
We consider simple language-conditioned navigation problems in a grid world environment with disentangled observations.
We design an agent that encourages sparse correlations between words in the instruction and attributes of objects, composing them together to find the goal.
Our agent maintains a high level of performance on goals containing novel combinations of properties even when learning from a handful of demonstrations.
arXiv Detail & Related papers (2022-07-06T08:46:27Z)
- Modeling Multi-Granularity Hierarchical Features for Relation Extraction [26.852869800344813]
We propose a novel method to extract multi-granularity features based solely on the original input sentences.
We show that effective structured features can be attained even without external knowledge.
arXiv Detail & Related papers (2022-04-09T09:44:05Z)
- Robust and Interpretable Grounding of Spatial References with Relation Networks [40.42540299023808]
Learning representations of spatial references in natural language is a key challenge in tasks like autonomous navigation and robotic manipulation.
Recent work has investigated various neural architectures for learning multi-modal representations for spatial concepts.
We develop effective models for understanding spatial references in text that are robust and interpretable.
arXiv Detail & Related papers (2020-05-02T04:11:33Z)