Summaries, Highlights, and Action items: Design, implementation and evaluation of an LLM-powered meeting recap system
- URL: http://arxiv.org/abs/2307.15793v2
- Date: Thu, 29 Aug 2024 00:32:15 GMT
- Title: Summaries, Highlights, and Action items: Design, implementation and evaluation of an LLM-powered meeting recap system
- Authors: Sumit Asthana, Sagih Hilleli, Pengcheng He, Aaron Halfaker
- Abstract summary: Large language models (LLMs) for dialog summarization have the potential to improve the experience of meetings.
Despite this potential, they face technological limitations due to long transcripts and an inability to capture diverse recap needs based on users' context.
We develop a system to operationalize the representations with dialogue summarization as its building blocks.
- Score: 30.35387091657807
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Meetings play a critical infrastructural role in the coordination of work. In recent years, due to the shift to hybrid and remote work, more meetings have moved to online computer-mediated spaces. This has led to new problems (e.g., more time spent in less engaging meetings) and new opportunities (e.g., automated transcription/captioning and recap support). Recent advances in large language models (LLMs) for dialog summarization have the potential to improve the experience of meetings by reducing individuals' meeting load and increasing the clarity and alignment of meeting outputs. Despite this potential, LLMs face technological limitations due to long transcripts and an inability to capture diverse recap needs based on users' context. To address these gaps, we design, implement, and evaluate a meeting recap system in context. We first conceptualize two salient recap representations -- important highlights, and a structured, hierarchical minutes view. We develop a system that operationalizes these representations with dialogue summarization as its building block. Finally, we evaluate the effectiveness of the system with seven users in the context of their work meetings. Our findings show promise in using LLM-based dialogue summarization for meeting recap, and the need for both representations in different contexts. However, we find that LLM-based recap still lacks an understanding of what is personally relevant to participants, can miss important details, and can produce mis-attributions that are detrimental to group dynamics. We identify collaboration opportunities, such as a shared recap document, that a high-quality recap enables. We report implications for designing AI systems that partner with users to learn and improve from natural interactions, in order to overcome the limitations related to personal relevance and summarization quality.
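The abstract's two-level design (dialogue summarization as a building block, composed into a hierarchical minutes view) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation; the `summarize` callable stands in for an LLM call.

```python
from typing import Callable, Dict, List


def chunk_transcript(turns: List[str], max_turns: int = 50) -> List[List[str]]:
    """Split a long transcript into fixed-size chunks of dialogue turns,
    so each chunk fits comfortably in an LLM context window."""
    return [turns[i:i + max_turns] for i in range(0, len(turns), max_turns)]


def recap(turns: List[str],
          summarize: Callable[[str], str],
          max_turns: int = 50) -> Dict[str, object]:
    """Two-level recap: summarize each chunk into a minutes section,
    then summarize the sections into a top-level overview."""
    sections = [summarize("\n".join(chunk))
                for chunk in chunk_transcript(turns, max_turns)]
    overview = summarize("\n".join(sections))
    return {"sections": sections, "overview": overview}
```

With a real LLM, `summarize` would wrap an API call; the per-chunk sections map naturally onto the hierarchical minutes view, and highlights could be extracted from the same chunk summaries.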
Related papers
- Increasing faithfulness in human-human dialog summarization with Spoken Language Understanding tasks [0.0]
We propose an exploration of how incorporating task-related information can enhance the summarization process.
Results show that integrating models with task-related information improves summary accuracy, even with varying word error rates.
arXiv Detail & Related papers (2024-09-16T08:15:35Z) - What's Wrong? Refining Meeting Summaries with LLM Feedback [6.532478490187084]
We introduce a multi-LLM correction approach for meeting summarization using a two-phase process that mimics the human review process.
We release QMSum Mistake, a dataset of 200 automatically generated meeting summaries annotated by humans on nine error types.
We transform identified mistakes into actionable feedback to improve the quality of a given summary measured by relevance, informativeness, conciseness, and coherence.
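The two-phase review process described above could be organized roughly like this sketch, where `critique` and `rewrite` are hypothetical stand-ins for the reviewer and refiner LLM calls, not the paper's actual interface:

```python
from typing import Callable, List


def review_and_refine(summary: str,
                      critique: Callable[[str], List[str]],
                      rewrite: Callable[[str, List[str]], str],
                      max_rounds: int = 3) -> str:
    """Phase 1: a reviewer model flags mistakes (e.g. omissions, irrelevance).
    Phase 2: the flagged mistakes become actionable feedback for a rewrite.
    Repeat until the reviewer finds nothing or the round budget runs out."""
    for _ in range(max_rounds):
        mistakes = critique(summary)
        if not mistakes:
            break
        summary = rewrite(summary, mistakes)
    return summary
```

The round budget matters in practice: without it, a reviewer that keeps flagging residual issues would loop indefinitely.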
arXiv Detail & Related papers (2024-07-16T17:10:16Z) - Symbolic Planning and Code Generation for Grounded Dialogue [78.48668501764385]
Large language models (LLMs) excel at processing and generating both text and code.
We present a modular and interpretable grounded dialogue system that addresses shortcomings by composing LLMs with a symbolic planner and grounded code execution.
Our system substantially outperforms the previous state-of-the-art, including improving task success in human evaluations from 56% to 69% in the most challenging setting.
arXiv Detail & Related papers (2023-10-26T04:22:23Z) - Minuteman: Machine and Human Joining Forces in Meeting Summarization [2.900810893770134]
We propose a novel tool to enable efficient semi-automatic meeting minuting.
The tool provides a live transcript and a live meeting summary to the users, who can edit them in a collaborative manner.
The resulting application eases the cognitive load of the notetakers and allows them to easily catch up if they missed a part of the meeting due to absence or a lack of focus.
arXiv Detail & Related papers (2023-09-11T07:10:47Z) - FCC: Fusing Conversation History and Candidate Provenance for Contextual Response Ranking in Dialogue Systems [53.89014188309486]
We present a flexible neural framework that can integrate contextual information from multiple channels.
We evaluate our model on the MSDialog dataset widely used for evaluating conversational response ranking tasks.
arXiv Detail & Related papers (2023-03-31T23:58:28Z) - Collaborative Reasoning on Multi-Modal Semantic Graphs for Video-Grounded Dialogue Generation [53.87485260058957]
We study video-grounded dialogue generation, where a response is generated based on the dialogue context and the associated video.
The primary challenges of this task lie in (1) the difficulty of integrating video data into pre-trained language models (PLMs)
We propose a multi-agent reinforcement learning method to collaboratively perform reasoning on different modalities.
arXiv Detail & Related papers (2022-10-22T14:45:29Z) - Abstractive Meeting Summarization: A Survey [15.455647477995306]
A system that could reliably identify and sum up the most important points of a conversation would be valuable in a wide variety of real-world contexts.
Recent advances in deep learning have significantly improved language generation systems, opening the door to improved forms of abstractive summarization.
We provide an overview of the challenges raised by the task of abstractive meeting summarization and of the data sets, models and evaluation metrics that have been used to tackle the problems.
arXiv Detail & Related papers (2022-08-08T14:04:38Z) - A Sliding-Window Approach to Automatic Creation of Meeting Minutes [66.39584679676817]
Meeting minutes record the subjects discussed, decisions reached, and actions taken at meetings.
We present a sliding window approach to automatic generation of meeting minutes.
It aims to tackle issues associated with the nature of spoken text, including lengthy transcripts and lack of document structure.
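A sliding-window scheme of the kind this summary describes might look like the following sketch; the window size, stride, and the `summarize` stand-in are illustrative assumptions rather than the paper's settings:

```python
from typing import Callable, Iterator, List


def sliding_windows(turns: List[str],
                    window: int = 40,
                    stride: int = 20) -> Iterator[List[str]]:
    """Yield overlapping windows of dialogue turns; the overlap helps
    preserve context across boundaries in an unstructured transcript."""
    last_start = max(len(turns) - window, 0)
    for start in range(0, last_start + 1, stride):
        yield turns[start:start + window]


def windowed_minutes(turns: List[str],
                     summarize: Callable[[str], str],
                     window: int = 40,
                     stride: int = 20) -> List[str]:
    """Summarize each window into a minutes item, dropping consecutive
    duplicates that the overlap can produce."""
    items: List[str] = []
    for w in sliding_windows(turns, window, stride):
        item = summarize("\n".join(w))
        if not items or items[-1] != item:
            items.append(item)
    return items
```

The stride being smaller than the window is what gives each summary some shared context with its neighbor, which is the usual motivation for sliding windows over hard chunk boundaries.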
arXiv Detail & Related papers (2021-04-26T02:44:14Z) - Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues [88.73739515457116]
We introduce four self-supervised tasks including next session prediction, utterance restoration, incoherence detection and consistency discrimination.
We jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner.
Experiment results indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection.
arXiv Detail & Related papers (2020-09-14T08:44:46Z) - A Hierarchical Network for Abstractive Meeting Summarization with Cross-Domain Pretraining [52.11221075687124]
We propose a novel abstractive summary network that adapts to the meeting scenario.
We design a hierarchical structure to accommodate long meeting transcripts and a role vector to depict the difference among speakers.
Our model outperforms previous approaches in both automatic metrics and human evaluation.
arXiv Detail & Related papers (2020-04-04T21:00:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.