Multi-Query Focused Disaster Summarization via Instruction-Based Prompting
- URL: http://arxiv.org/abs/2402.09008v1
- Date: Wed, 14 Feb 2024 08:22:58 GMT
- Title: Multi-Query Focused Disaster Summarization via Instruction-Based Prompting
- Authors: Philipp Seeberger, Korbinian Riedhammer
- Abstract summary: CrisisFACTS aims to advance disaster summarization based on multi-stream fact-finding.
Here, participants are asked to develop systems that can extract key facts from several disaster-related events.
This paper describes our method to tackle this challenging task.
- Score: 3.6199702611839792
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic summarization of mass-emergency events plays a critical role in
disaster management. The second edition of CrisisFACTS aims to advance disaster
summarization based on multi-stream fact-finding with a focus on web sources
such as Twitter, Reddit, Facebook, and Webnews. Here, participants are asked to
develop systems that can extract key facts from several disaster-related
events, which ultimately serve as a summary. This paper describes our method to
tackle this challenging task. We follow previous work and propose to use a
combination of retrieval, reranking, and an embarrassingly simple
instruction-following summarization. The two-stage retrieval pipeline relies on
BM25 and MonoT5, while the summarizer module is based on the open-source Large
Language Model (LLM) LLaMA-13b. For summarization, we explore a Question
Answering (QA)-motivated prompting approach and find the evidence useful for
extracting query-relevant facts. The automatic metrics and human evaluation
show strong results but also highlight the gap between open-source and
proprietary systems.
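As a rough illustration of the pipeline the abstract describes, the sketch below wires BM25 candidate retrieval into MonoT5 pointwise reranking and then assembles a QA-style instruction prompt. This is a minimal sketch under stated assumptions: the library choices (rank_bm25, Hugging Face transformers), the example snippets, and the prompt wording are illustrative, not the authors' exact implementation; in the paper the prompt is fed to LLaMA-13b rather than printed.
```python
# Sketch of a retrieve -> rerank -> prompt-summarize pipeline, loosely
# following the abstract. Library choices and prompt wording are assumptions.
import torch
from rank_bm25 import BM25Okapi
from transformers import T5ForConditionalGeneration, T5Tokenizer

query = "What roads are closed?"
docs = [
    "Highway 12 closed in both directions due to flooding.",
    "Volunteers hand out water at the community center.",
    "Police report Main St bridge is impassable.",
]

# Stage 1: BM25 recall-oriented candidate retrieval.
bm25 = BM25Okapi([d.lower().split() for d in docs])
candidates = bm25.get_top_n(query.lower().split(), docs, n=2)

# Stage 2: MonoT5 pointwise reranking. The relevance score is the probability
# of decoding "true" given "Query: ... Document: ... Relevant:".
tok = T5Tokenizer.from_pretrained("castorini/monot5-base-msmarco")
mono = T5ForConditionalGeneration.from_pretrained("castorini/monot5-base-msmarco")
true_id, false_id = tok.encode("true")[0], tok.encode("false")[0]

def monot5_score(q: str, d: str) -> float:
    inp = tok(f"Query: {q} Document: {d} Relevant:",
              return_tensors="pt", truncation=True)
    start = torch.tensor([[mono.config.decoder_start_token_id]])
    logits = mono(**inp, decoder_input_ids=start).logits[0, 0]
    return torch.softmax(logits[[true_id, false_id]], dim=0)[0].item()

reranked = sorted(candidates, key=lambda d: monot5_score(query, d), reverse=True)

# QA-motivated instruction prompt; in the paper this goes to LLaMA-13b.
prompt = (
    "You are given social media and news snippets about a disaster event.\n"
    f"Question: {query}\nSnippets:\n"
    + "\n".join(f"- {d}" for d in reranked)
    + "\nExtract short, query-relevant facts from the snippets."
)
print(prompt)
```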
Related papers
- DIVKNOWQA: Assessing the Reasoning Ability of LLMs via Open-Domain Question Answering over Knowledge Base and Text [73.68051228972024]
Large Language Models (LLMs) have exhibited impressive generation capabilities, but they suffer from hallucinations when relying on their internal knowledge.
Retrieval-augmented LLMs have emerged as a potential solution to ground LLMs in external knowledge.
arXiv Detail & Related papers (2023-10-31T04:37:57Z)
- Progressive Evidence Refinement for Open-domain Multimodal Retrieval Question Answering [20.59485758381809]
Current multimodal retrieval question-answering models face two main challenges. For one, utilizing compressed evidence features as input to the model results in the loss of fine-grained information within the evidence.
We propose a two-stage framework for evidence retrieval and question-answering to alleviate these issues.
arXiv Detail & Related papers (2023-10-15T01:18:39Z)
- Combining Deep Neural Reranking and Unsupervised Extraction for Multi-Query Focused Summarization [0.30458514384586394]
CrisisFACTS Track aims to tackle challenges such as multi-stream fact-finding in the domain of event tracking.
We propose a combination of retrieval, reranking, and the well-known Integer Linear Programming (ILP) and Maximal Marginal Relevance (MMR) frameworks (a minimal MMR sketch follows this entry).
arXiv Detail & Related papers (2023-02-02T15:08:25Z)
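As context for the MMR framework named in the entry above, here is a minimal greedy MMR selector over embedding vectors: each step picks the candidate most relevant to the query while penalizing similarity to facts already chosen. The cosine similarity and the lambda trade-off are generic choices for illustration, not the paper's exact configuration.
```python
# Minimal Maximal Marginal Relevance (MMR): greedily select facts that are
# relevant to the query but dissimilar to already-selected facts.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def mmr(query_vec, cand_vecs, k=5, lam=0.7):
    """Return indices of k candidates balancing relevance and diversity."""
    selected: list[int] = []
    remaining = list(range(len(cand_vecs)))
    while remaining and len(selected) < k:
        def score(i):
            rel = cosine(query_vec, cand_vecs[i])          # query relevance
            red = max((cosine(cand_vecs[i], cand_vecs[j])  # redundancy
                       for j in selected), default=0.0)
            return lam * rel - (1 - lam) * red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage with random embeddings.
rng = np.random.default_rng(0)
print(mmr(rng.normal(size=8), rng.normal(size=(20, 8)), k=3))
```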
- MQAG: Multiple-choice Question Answering and Generation for Assessing Information Consistency in Summarization [55.60306377044225]
State-of-the-art summarization systems can generate highly fluent summaries.
These summaries, however, may contain factual inconsistencies and/or information not present in the source.
We introduce an alternative scheme based on standard information-theoretic measures in which the information present in the source and summary is directly compared (a toy sketch of such a comparison follows this entry).
arXiv Detail & Related papers (2023-01-28T23:08:25Z)
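The toy sketch below illustrates the core idea of the MQAG entry above for a single multiple-choice question: answer it once conditioned on the summary and once on the source, then measure the disagreement between the two answer distributions with KL divergence. The distributions here are placeholders; MQAG derives them from question-generation and question-answering models.
```python
# MQAG-style consistency check for one generated multiple-choice question:
# compare the answer distribution obtained from the summary with the one
# obtained from the source. The distributions below are placeholder values.
import numpy as np

def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-9) -> float:
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    return float((p * np.log(p / q)).sum())

p_summary = np.array([0.7, 0.1, 0.1, 0.1])  # answered from the summary
p_source = np.array([0.2, 0.5, 0.2, 0.1])   # answered from the source
print(f"disagreement (KL) = {kl_divergence(p_summary, p_source):.3f}")
```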
- Successive Prompting for Decomposing Complex Questions [50.00659445976735]
Recent works leverage the capabilities of large language models (LMs) to perform complex question answering in a few-shot setting.
We introduce "Successive Prompting", where we iteratively break down a complex task into a simpler task, solve it, and then repeat the process until we reach the final solution (a toy version of this loop is sketched after this entry).
Our best model (with successive prompting) achieves an improvement of 5% absolute F1 on a few-shot version of the DROP dataset.
arXiv Detail & Related papers (2022-12-08T06:03:38Z)
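The successive-prompting loop from the entry above can be pictured as repeated decomposition: ask for the next simple sub-question, answer it, and append the pair to the context. The sketch below assumes a hypothetical ask_llm callable (any prompt-to-text model) and a FINAL sentinel; it illustrates the loop, not the authors' implementation.
```python
# Toy successive-prompting loop. `ask_llm` is a hypothetical callable that
# maps a prompt string to model text; plug in any instruction-following LLM.
from typing import Callable

def successive_prompting(question: str,
                         ask_llm: Callable[[str], str],
                         max_steps: int = 5) -> str:
    context = f"Complex question: {question}\n"
    for _ in range(max_steps):
        # Decomposition step: produce the next simple sub-question, or the
        # sentinel FINAL once enough intermediate answers have accumulated.
        sub_q = ask_llm(context + "Next simple sub-question (or FINAL):").strip()
        if sub_q == "FINAL":
            break
        sub_a = ask_llm(context + f"Answer this sub-question: {sub_q}")
        context += f"Q: {sub_q}\nA: {sub_a}\n"  # accumulate solved steps
    return ask_llm(context + f"Now answer the original question: {question}")
```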
- Mixed-modality Representation Learning and Pre-training for Joint Table-and-Text Retrieval in OpenQA [85.17249272519626]
An optimized OpenQA Table-Text Retriever (OTTeR) is proposed.
We conduct retrieval-centric mixed-modality synthetic pre-training.
OTTeR substantially improves the performance of table-and-text retrieval on the OTT-QA dataset.
arXiv Detail & Related papers (2022-10-11T07:04:39Z)
- Cross-Lingual Query-Based Summarization of Crisis-Related Social Media: An Abstractive Approach Using Transformers [3.042890194004583]
This work proposes a cross-lingual method for retrieving and summarizing crisis-relevant information from social media postings.
We describe a uniform way of expressing various information needs through structured queries and a way of creating summaries.
arXiv Detail & Related papers (2022-04-21T16:07:52Z)
- Text Summarization with Latent Queries [60.468323530248945]
We introduce LaQSum, the first unified text summarization system that learns Latent Queries from documents for abstractive summarization with any existing query forms.
Under a deep generative framework, our system jointly optimizes a latent query model and a conditional language model, allowing users to plug-and-play queries of any type at test time.
Our system robustly outperforms strong comparison systems across summarization benchmarks with different query types, document settings, and target domains.
arXiv Detail & Related papers (2021-05-31T21:14:58Z)
- Abstractive Query Focused Summarization with Query-Free Resources [60.468323530248945]
In this work, we consider the problem of leveraging only generic summarization resources to build an abstractive QFS system.
We propose Marge, a Masked ROUGE Regression framework composed of a novel unified representation for summaries and queries.
Despite learning from minimal supervision, our system achieves state-of-the-art results in the distantly supervised setting.
arXiv Detail & Related papers (2020-12-29T14:39:35Z)
- Multi-hop Inference for Question-driven Summarization [39.08269647808958]
We propose a novel question-driven abstractive summarization method, Multi-hop Selective Generator (MSG).
MSG incorporates multi-hop reasoning into question-driven summarization and, at the same time, provides justifications for the generated summaries.
Experimental results show that the proposed method consistently outperforms state-of-the-art methods on two non-factoid QA datasets.
arXiv Detail & Related papers (2020-10-08T02:36:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.