Generating Related Work
- URL: http://arxiv.org/abs/2104.08668v1
- Date: Sun, 18 Apr 2021 00:19:37 GMT
- Title: Generating Related Work
- Authors: Darsh J Shah and Regina Barzilay
- Abstract summary: We model generating related work sections while being cognisant of the motivation behind citing papers.
Our model outperforms several strong state-of-the-art summarization and multi-document summarization models.
- Score: 37.161925758727456
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Communicating new research ideas involves highlighting similarities and
differences with past work. Authors write fluent, often long sections to survey
how a new paper is distinct from related work. In this work we model
generating related work sections while being cognisant of the motivation behind
citing papers. Our content planning model generates a tree of cited papers
before a surface realization model lexicalizes this skeleton. Our model
outperforms several strong state-of-the-art summarization and multi-document
summarization models on generating related work on an ACL Anthology (AA) based
dataset which we contribute.
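The core idea is a two-stage pipeline: a content planner first arranges the cited papers into a tree, and a surface realizer then lexicalizes that skeleton into prose. The sketch below illustrates the plan-then-realize split with a toy recursive realizer; the node fields and template-based realization are illustrative assumptions, not the authors' implementation.

```python
# Minimal plan-then-realize sketch: build a tree of cited papers grouped by
# citation motivation, then walk it to produce text. Illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CitationNode:
    paper_id: str                      # cited paper
    reason: str                        # motivation for citing, e.g. "baseline"
    children: List["CitationNode"] = field(default_factory=list)

def realize(node: CitationNode, depth: int = 0) -> str:
    """Toy surface realizer; in the paper this is a learned generation model
    conditioned on the planned skeleton."""
    line = f"{'  ' * depth}[{node.reason}] discuss {node.paper_id}"
    return "\n".join([line] + [realize(c, depth + 1) for c in node.children])

# Plan first, then realize: the tree groups citations by why they are cited.
plan = CitationNode("extractive-methods", "contrast",
                    [CitationNode("lead-3", "baseline"),
                     CitationNode("textrank", "baseline")])
print(realize(plan))
```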
Related papers
- GINopic: Topic Modeling with Graph Isomorphism Network [0.8962460460173959]
We introduce GINopic, a topic modeling framework based on graph isomorphism networks to capture the correlation between words.
We demonstrate the effectiveness of GINopic compared to existing topic models and highlight its potential for advancing topic modeling.
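GIN layers aggregate neighbor features with a learnable epsilon-weighted self term before an MLP, which is what lets them discriminate graph structure. Below is a minimal GIN layer in plain PyTorch applied to a toy word co-occurrence graph; the graph construction and dimensions are assumptions for illustration, not GINopic's code.

```python
# One GIN update over a word graph: h_v <- MLP((1 + eps) * h_v + sum of
# neighbor features). Toy adjacency and embedding sizes.
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))          # learnable epsilon
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        return self.mlp((1 + self.eps) * h + adj @ h)    # sum-aggregate neighbors

adj = torch.tensor([[0., 2., 0., 1.],                    # co-occurrence weights
                    [2., 0., 1., 0.],
                    [0., 1., 0., 3.],
                    [1., 0., 3., 0.]])
h = torch.randn(4, 16)                                   # word embeddings
print(GINLayer(16)(h, adj).shape)                        # torch.Size([4, 16])
```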
arXiv Detail & Related papers (2024-04-02T17:18:48Z)
- Peek Across: Improving Multi-Document Modeling via Cross-Document Question-Answering [49.85790367128085]
We pre-train a generic multi-document model with a novel cross-document question-answering pre-training objective.
This novel multi-document QA formulation directs the model to better recover cross-text informational relations.
Unlike prior multi-document models that focus on either classification or summarization tasks, our pre-training objective formulation enables the model to perform tasks that involve both short text generation and long text generation.
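One way to picture the objective: a question is posed about content in one document, and the model must answer it by pulling information from another. The sketch below shows how such a cross-document QA instance might be serialized for a seq2seq model; the separator tokens and field layout are assumptions, not the paper's exact format.

```python
# Hypothetical serialization of a cross-document QA pre-training instance.
def make_qa_instance(docs: list, question: str, answer: str) -> dict:
    source = " <doc> ".join(docs) + f" <question> {question}"
    return {"source": source, "target": answer}

docs = ["Paper A proposes a graph encoder for MDS.",
        "Paper B reports that graph encoders capture cross-document links."]
ex = make_qa_instance(docs, "What does Paper A propose?",
                      "a graph encoder for MDS")
print(ex["source"])
print(ex["target"])
```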
arXiv Detail & Related papers (2023-05-24T17:48:40Z)
- Target-aware Abstractive Related Work Generation with Contrastive Learning [48.02845973891943]
The related work section is an important component of a scientific paper: it highlights the contribution of the target paper in the context of the reference papers.
Most of the existing related work section generation methods rely on extracting off-the-shelf sentences.
We propose an abstractive target-aware related work generator (TAG), which can generate related work sections consisting of new sentences.
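The contrastive component can be pictured as pulling the target paper's representation toward its true references and pushing it away from unrelated papers. Below is a generic InfoNCE-style loss over cosine similarities as one plausible instantiation; the shapes, temperature, and similarity choice are assumptions, not TAG's exact formulation.

```python
# Generic InfoNCE loss: the positive reference should score highest among
# the candidates. Illustrative shapes.
import torch
import torch.nn.functional as F

def contrastive_loss(target, positive, negatives, tau: float = 0.1):
    """target: (d,), positive: (d,), negatives: (n, d)"""
    cands = torch.cat([positive.unsqueeze(0), negatives], dim=0)    # (n+1, d)
    logits = F.cosine_similarity(target.unsqueeze(0), cands) / tau  # (n+1,)
    return F.cross_entropy(logits.unsqueeze(0),
                           torch.zeros(1, dtype=torch.long))        # index 0 = positive

t, p, n = torch.randn(64), torch.randn(64), torch.randn(5, 64)
print(contrastive_loss(t, p, n).item())
```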
arXiv Detail & Related papers (2022-05-26T13:20:51Z)
- CORWA: A Citation-Oriented Related Work Annotation Dataset [4.740962650068886]
In natural language processing, literature reviews are usually presented in the "Related Work" section.
We train a strong baseline model that automatically tags CORWA labels on large amounts of unlabeled related-work text.
We suggest a novel framework for human-in-the-loop, iterative, abstractive related work generation.
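The loop is: auto-tag, hand uncertain cases to a human, fold corrections back in. A minimal sketch of that control flow follows; the tagger, label names, and confidence scores are all placeholders rather than CORWA's released model.

```python
# Human-in-the-loop tagging sketch: defer low-confidence labels to a person.
def auto_tag(sentence: str):
    # stand-in for the trained baseline tagger: (label, confidence)
    return ("single_summ", 0.55) if "et al." in sentence else ("narrative", 0.9)

def human_in_the_loop(sentences, correct, threshold: float = 0.7):
    for s in sentences:
        label, conf = auto_tag(s)
        if conf < threshold:
            label = correct(s)             # a human resolves uncertain tags
        yield s, label

labeled = list(human_in_the_loop(
    ["Shah et al. model citation motivation.", "Our work differs in two ways."],
    correct=lambda s: "single_summ"))
print(labeled)
```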
arXiv Detail & Related papers (2022-05-07T00:23:46Z)
- Multi-Vector Models with Textual Guidance for Fine-Grained Scientific Document Similarity [11.157086694203201]
We present a new scientific document similarity model based on matching fine-grained aspects.
Our model is trained using co-citation contexts that describe related paper aspects as a novel form of textual supervision.
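Matching fine-grained aspects suggests a multi-vector scorer: each paper is a small set of aspect embeddings, and similarity rewards the best aspect-to-aspect matches. The aggregation below (mean of row-wise maxima over cosine similarities) is one common late-interaction choice, an assumption rather than the paper's exact scoring function.

```python
# Multi-vector aspect matching: best match per query aspect, then average.
import torch
import torch.nn.functional as F

def aspect_similarity(q: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
    """q: (m, dim) aspects of one paper, d: (n, dim) aspects of another."""
    sims = F.normalize(q, dim=-1) @ F.normalize(d, dim=-1).T   # (m, n) cosine
    return sims.max(dim=1).values.mean()

q, d = torch.randn(3, 32), torch.randn(5, 32)
print(aspect_similarity(q, d).item())
```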
arXiv Detail & Related papers (2021-11-16T11:12:30Z)
- Topic-Guided Abstractive Multi-Document Summarization [21.856615677793243]
A critical challenge in multi-document summarization (MDS) is learning the relations among the input documents.
We propose a novel abstractive MDS model, in which we represent multiple documents as a heterogeneous graph.
We employ a neural topic model to jointly discover latent topics that can act as cross-document semantic units.
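Concretely, a document-topic matrix from the topic model induces the heterogeneous graph: documents connect to the topics they weight strongly, so two documents sharing a topic become two hops apart. The threshold and matrix below are toy values for illustration.

```python
# Topics as cross-document bridge nodes, built from a doc-topic matrix.
import numpy as np

doc_topic = np.array([[0.7, 0.2, 0.1],     # doc 0: mostly topic 0
                      [0.6, 0.1, 0.3],     # doc 1: mostly topic 0
                      [0.1, 0.1, 0.8]])    # doc 2: mostly topic 2

edges = [(f"doc{i}", f"topic{k}")
         for i in range(doc_topic.shape[0])
         for k in range(doc_topic.shape[1])
         if doc_topic[i, k] > 0.25]         # keep strong doc-topic links
print(edges)  # doc0 and doc1 meet at topic0, a shared semantic unit
```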
arXiv Detail & Related papers (2021-10-21T15:32:30Z)
- What's New? Summarizing Contributions in Scientific Literature [85.95906677964815]
We introduce a new task of disentangled paper summarization, which seeks to generate separate summaries for the paper contributions and the context of the work.
We extend the S2ORC corpus of academic articles by adding disentangled "contribution" and "context" reference labels.
We propose a comprehensive automatic evaluation protocol which reports the relevance, novelty, and disentanglement of generated outputs.
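As a rough intuition for the disentanglement criterion: a contribution summary and a context summary should overlap as little as possible. The toy proxy below scores that with Jaccard token overlap; it only illustrates the idea and is not the paper's evaluation protocol.

```python
# Toy disentanglement proxy: low lexical overlap between the two summaries.
def overlap(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(1, len(ta | tb))     # Jaccard similarity

contribution = "we propose a tree-structured content planner"
context = "prior work extracts sentences from cited abstracts"
print(f"disentanglement ~ {1 - overlap(contribution, context):.2f}")
```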
arXiv Detail & Related papers (2020-11-06T02:23:01Z)
- Leveraging Graph to Improve Abstractive Multi-Document Summarization [50.62418656177642]
We develop a neural abstractive multi-document summarization (MDS) model which can leverage well-known graph representations of documents.
Our model utilizes graphs to encode documents in order to capture cross-document relations, which is crucial to summarizing long documents.
Our model can also take advantage of graphs to guide the summary generation process, which is beneficial for generating coherent and concise summaries.
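One simple way a graph can steer generation is as a bias on attention: cross-document link strengths are added to the attention logits so related text units attend to each other more. The additive-bias form below is an illustrative assumption, not necessarily this model's exact mechanism.

```python
# Graph-biased attention sketch: adjacency added to scaled dot-product logits.
import torch

def graph_biased_attention(q, k, v, graph_bias):
    """q, k, v: (n, d); graph_bias: (n, n) link strengths between text units."""
    scores = q @ k.T / k.shape[-1] ** 0.5 + graph_bias
    return torch.softmax(scores, dim=-1) @ v

n, d = 4, 8
bias = torch.tensor([[0., 1., 0., 0.],      # e.g. similarity-graph edges
                     [1., 0., 1., 0.],
                     [0., 1., 0., 1.],
                     [0., 0., 1., 0.]])
out = graph_biased_attention(torch.randn(n, d), torch.randn(n, d),
                             torch.randn(n, d), bias)
print(out.shape)                            # torch.Size([4, 8])
```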
arXiv Detail & Related papers (2020-05-20T13:39:47Z)
- Explaining Relationships Between Scientific Documents [55.23390424044378]
We address the task of explaining relationships between two scientific documents using natural language text.
In this paper we establish a dataset of 622K examples from 154K documents.
arXiv Detail & Related papers (2020-02-02T03:54:47Z)