Analysis of GraphSum's Attention Weights to Improve the Explainability
of Multi-Document Summarization
- URL: http://arxiv.org/abs/2105.11908v1
- Date: Wed, 19 May 2021 08:18:59 GMT
- Title: Analysis of GraphSum's Attention Weights to Improve the Explainability
of Multi-Document Summarization
- Authors: M. Lautaro Hickmann and Fabian Wurzberger and Megi Hoxhalli and Arne
Lochner and Jessica Töllich and Ansgar Scherp
- Abstract summary: Modern multi-document summarization (MDS) methods are based on transformer architectures.
They generate state-of-the-art summaries but lack explainability.
We aim to improve the explainability of graph-based MDS models by analyzing their attention weights.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern multi-document summarization (MDS) methods are based on transformer
architectures. They generate state-of-the-art summaries, but lack
explainability. We focus on graph-based transformer models for MDS as they
have recently gained popularity. We aim to improve the explainability of
graph-based MDS models by analyzing their attention weights. In a graph-based MDS such
as GraphSum, vertices represent the textual units, while the edges form some
similarity graph over the units. We compare GraphSum's performance utilizing
different textual units, i.e., sentences versus paragraphs, on two news
benchmark datasets, namely WikiSum and MultiNews. Our experiments show that
paragraph-level representations provide the best summarization performance.
Thus, we subsequently focus on analyzing the paragraph-level attention
weights of GraphSum's multi-heads and decoding layers in order to improve the
explainability of a transformer-based MDS model. As a reference metric, we
calculate the ROUGE scores between the input paragraphs and each sentence in
the generated summary, which indicate source origin information via text
similarity. We observe a high correlation between the attention weights and
this reference metric, especially in the later decoding layers of the
transformer architecture. Finally, we investigate if the generated summaries
follow a pattern of positional bias by extracting which paragraph provided the
most information for each generated summary. Our results show that there is a
high correlation between the position in the summary and the source origin.
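The reference metric described above can be sketched in a few lines: score each generated summary sentence against every input paragraph with ROUGE-1 F1, and take the highest-scoring paragraph as that sentence's likely source origin. The snippet below is a minimal illustration using a hand-rolled unigram-overlap ROUGE-1; the function names and toy data are illustrative assumptions, not taken from GraphSum's codebase, and a real analysis would use a full ROUGE implementation with stemming and ROUGE-2/L variants.

```python
# Illustrative sketch of the source-origin reference metric:
# ROUGE-1 F1 between each summary sentence and each input paragraph.
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap ROUGE-1 F1 between two whitespace-tokenized strings."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def source_origin(summary_sentences, paragraphs):
    """For each summary sentence, the index of the paragraph with maximal ROUGE-1 F1."""
    return [
        max(range(len(paragraphs)), key=lambda i: rouge1_f1(s, paragraphs[i]))
        for s in summary_sentences
    ]

# Toy example: two input paragraphs, two generated summary sentences.
paragraphs = [
    "the storm hit the coast on monday causing floods",
    "officials opened shelters for displaced residents",
]
summary = [
    "floods hit the coast after the storm",
    "shelters opened for residents",
]
print(source_origin(summary, paragraphs))  # → [0, 1]
```

Correlating these per-paragraph ROUGE scores with the per-paragraph attention weights of each head and decoding layer, and tallying the position of the argmax paragraph per summary sentence, then yields the explainability and positional-bias analyses the abstract describes.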
Related papers
- Graph-Dictionary Signal Model for Sparse Representations of Multivariate Data [49.77103348208835]
We define a novel Graph-Dictionary signal model, where a finite set of graphs characterizes relationships in data distribution through a weighted sum of their Laplacians.
We propose a framework to infer the graph dictionary representation from observed data, along with a bilinear generalization of the primal-dual splitting algorithm to solve the learning problem.
We exploit graph-dictionary representations in a motor imagery decoding task on brain activity data, where we classify imagined motion better than standard methods.
arXiv Detail & Related papers (2024-11-08T17:40:43Z) - Compressed Heterogeneous Graph for Abstractive Multi-Document
Summarization [37.53183784486546]
Multi-document summarization (MDS) aims to generate a summary for a number of related documents.
We propose HGSUM, an MDS model that extends an encoder-decoder architecture.
This contrasts with existing MDS models which do not consider different edge types of graphs.
arXiv Detail & Related papers (2023-03-12T04:23:54Z) - Scientific Paper Extractive Summarization Enhanced by Citation Graphs [50.19266650000948]
We focus on leveraging citation graphs to improve scientific paper extractive summarization under different settings.
Preliminary results demonstrate that citation graph is helpful even in a simple unsupervised framework.
Motivated by this, we propose a Graph-based Supervised Summarization model (GSS) to achieve more accurate results on the task when large-scale labeled data are available.
arXiv Detail & Related papers (2022-12-08T11:53:12Z) - FactGraph: Evaluating Factuality in Summarization with Semantic Graph
Representations [114.94628499698096]
We propose FactGraph, a method that decomposes the document and the summary into structured meaning representations (MRs).
MRs describe core semantic concepts and their relations, aggregating the main content in both document and summary in a canonical form, and reducing data sparsity.
Experiments on different benchmarks for evaluating factuality show that FactGraph outperforms previous approaches by up to 15%.
arXiv Detail & Related papers (2022-04-13T16:45:33Z) - Representing Videos as Discriminative Sub-graphs for Action Recognition [165.54738402505194]
We introduce a new design of sub-graphs to represent and encode the discriminative patterns of each action in the videos.
We present the MUlti-scale Sub-graph LEarning (MUSLE) framework, which builds space-time graphs and clusters them into compact sub-graphs at each scale.
arXiv Detail & Related papers (2022-01-11T16:15:25Z) - SgSum: Transforming Multi-document Summarization into Sub-graph
Selection [27.40759123902261]
Most existing extractive multi-document summarization (MDS) methods score each sentence individually and extract salient sentences one by one to compose a summary.
We propose a novel MDS framework (SgSum) to formulate the MDS task as a sub-graph selection problem.
Our model can produce significantly more coherent and informative summaries compared with traditional MDS methods.
arXiv Detail & Related papers (2021-10-25T05:12:10Z) - BASS: Boosting Abstractive Summarization with Unified Semantic Graph [49.48925904426591]
BASS is a framework for Boosting Abstractive Summarization based on a unified Semantic graph.
A graph-based encoder-decoder model is proposed to improve both the document representation and summary generation process.
Empirical results show that the proposed architecture brings substantial improvements for both long-document and multi-document summarization tasks.
arXiv Detail & Related papers (2021-05-25T16:20:48Z) - Leveraging Graph to Improve Abstractive Multi-Document Summarization [50.62418656177642]
We develop a neural abstractive multi-document summarization (MDS) model which can leverage well-known graph representations of documents.
Our model utilizes graphs to encode documents in order to capture cross-document relations, which is crucial to summarizing long documents.
Our model can also take advantage of graphs to guide the summary generation process, which is beneficial for generating coherent and concise summaries.
arXiv Detail & Related papers (2020-05-20T13:39:47Z) - Knowledge Graph-Augmented Abstractive Summarization with Semantic-Driven
Cloze Reward [42.925345819778656]
We present ASGARD, a novel framework for Abstractive Summarization with Graph-Augmentation and semantic-driven RewarD.
We propose the use of dual encoders (a sequential document encoder and a graph-structured encoder) to maintain the global context and local characteristics of entities.
Results show that our models produce significantly higher ROUGE scores than a variant without the knowledge graph as input on both the New York Times and CNN/Daily Mail datasets.
arXiv Detail & Related papers (2020-05-03T18:23:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.