Enhancing Argument Structure Extraction with Efficient Leverage of
Contextual Information
- URL: http://arxiv.org/abs/2310.05073v1
- Date: Sun, 8 Oct 2023 08:47:10 GMT
- Title: Enhancing Argument Structure Extraction with Efficient Leverage of
Contextual Information
- Authors: Yun Luo and Zhen Yang and Fandong Meng and Yingjie Li and Jie Zhou and
Yue Zhang
- Abstract summary: We propose an Efficient Context-aware model (ECASE) that fully exploits contextual information.
We introduce a sequence-attention module and distance-weighted similarity loss to aggregate contextual information and argumentative information.
Our experiments on five datasets from various domains demonstrate that our model achieves state-of-the-art performance.
- Score: 79.06082391992545
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Argument structure extraction (ASE) aims to identify the discourse structure
of arguments within documents. Previous research has demonstrated that
contextual information is crucial for developing an effective ASE model.
However, we observe that merely concatenating sentences in a contextual window
does not fully utilize contextual information and can sometimes lead to
excessive attention on less informative sentences. To tackle this challenge, we
propose an Efficient Context-aware ASE model (ECASE) that fully exploits
contextual information by enhancing modeling capacity and augmenting training
data. Specifically, we introduce a sequence-attention module and
distance-weighted similarity loss to aggregate contextual information and
argumentative information. Additionally, we augment the training data by
randomly masking discourse markers and sentences, which reduces the model's
reliance on specific words or less informative sentences. Our experiments on
five datasets from various domains demonstrate that our model achieves
state-of-the-art performance. Furthermore, ablation studies confirm the
effectiveness of each module in our model.
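The sequence-attention module and the distance-weighted similarity loss are only named in the abstract, not defined. As a rough illustration of the stated idea (aggregating contextual information with more weight on nearby sentences), the following PyTorch sketch penalizes dissimilarity between sentence embeddings with a weight that decays with sentence distance. The function name, the exponential decay, and the use of cosine similarity are assumptions for illustration, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def distance_weighted_similarity_loss(sent_embs: torch.Tensor,
                                      decay: float = 0.5) -> torch.Tensor:
    """Hypothetical sketch: sent_embs is (num_sentences, hidden), the
    per-sentence embeddings of one document. The decay rate is assumed."""
    n = sent_embs.size(0)
    # Cosine similarity between every pair of sentence embeddings.
    normed = F.normalize(sent_embs, dim=-1)
    sim = normed @ normed.t()                                  # (n, n)
    # Weight each pair by a term that decays with sentence distance |i - j|.
    idx = torch.arange(n, dtype=torch.float32, device=sent_embs.device)
    dist = (idx.unsqueeze(0) - idx.unsqueeze(1)).abs()
    weight = torch.exp(-decay * dist)
    # Exclude self-pairs; push nearby sentences toward similar embeddings.
    mask = 1.0 - torch.eye(n, device=sent_embs.device)
    return (weight * (1.0 - sim) * mask).sum() / mask.sum()

# Usage: embeddings from any sentence encoder, e.g. 8 sentences x 768 dims.
loss = distance_weighted_similarity_loss(torch.randn(8, 768))
```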
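The augmentation step is described only as randomly masking discourse markers and sentences. A minimal sketch of that procedure, under stated assumptions, might look as follows; the marker inventory, the masking probabilities, and the [MASK] placeholder are all illustrative, not taken from the paper.

```python
import random

# Illustrative marker list; the paper's actual inventory is not given here.
DISCOURSE_MARKERS = {"however", "therefore", "because", "moreover",
                     "thus", "although", "furthermore", "consequently"}

def mask_augment(sentences, marker_p=0.3, sentence_p=0.1, mask_token="[MASK]"):
    """Randomly mask discourse markers (token level) and whole sentences."""
    augmented = []
    for sent in sentences:
        if random.random() < sentence_p:
            augmented.append(mask_token)   # mask the entire sentence
            continue
        tokens = []
        for tok in sent.split():
            is_marker = tok.lower().strip(".,;:") in DISCOURSE_MARKERS
            tokens.append(mask_token if is_marker and random.random() < marker_p
                          else tok)
        augmented.append(" ".join(tokens))
    return augmented

# Usage: produce an extra training view of a document's sentence list.
doc = ["However, the premise is weak.", "Therefore, the claim fails."]
print(mask_augment(doc))
```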
Related papers
- Personalized Video Summarization using Text-Based Queries and Conditional Modeling [3.4447129363520337]
This thesis explores enhancing video summarization by integrating text-based queries and conditional modeling.
Evaluation metrics such as accuracy and F1-score assess the quality of the generated summaries.
arXiv Detail & Related papers (2024-08-27T02:43:40Z)
- Factual Dialogue Summarization via Learning from Large Language Models [35.63037083806503]
Large language model (LLM)-based automatic text summarization models generate more factually consistent summaries.
We employ zero-shot learning to extract symbolic knowledge from LLMs, generating factually consistent (positive) and inconsistent (negative) summaries.
Our approach achieves better factual consistency while maintaining coherence, fluency, and relevance, as confirmed by various automatic evaluation metrics.
arXiv Detail & Related papers (2024-06-20T20:03:37Z)
- Discovering Elementary Discourse Units in Textual Data Using Canonical Correlation Analysis [0.0]
This study demonstrates the potential of Canonical Correlation Analysis (CCA) in identifying Elementary Discourse Units (EDUs).
The model is simple, linear, adaptable, and language-independent, making it an ideal baseline, particularly when labeled training data is scarce or nonexistent.
arXiv Detail & Related papers (2024-06-18T18:37:24Z)
- How Well Do Text Embedding Models Understand Syntax? [50.440590035493074]
The ability of text embedding models to generalize across a wide range of syntactic contexts remains under-explored.
Our findings reveal that existing text embedding models have not sufficiently addressed these syntactic understanding challenges.
We propose strategies to augment the generalization ability of text embedding models in diverse syntactic scenarios.
arXiv Detail & Related papers (2023-11-14T08:51:00Z)
- Boosting Event Extraction with Denoised Structure-to-Text Augmentation [52.21703002404442]
Event extraction aims to recognize pre-defined event triggers and arguments from texts.
Recent data augmentation methods often neglect the problem of grammatical incorrectness.
We propose DAEE, a denoised structure-to-text augmentation framework for event extraction.
arXiv Detail & Related papers (2023-05-16T16:52:07Z)
- Large Language Models with Controllable Working Memory [64.71038763708161]
Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP).
What further sets these models apart is the massive amounts of world knowledge they internalize during pretraining.
How the model's world knowledge interacts with the factual information presented in the context remains under-explored.
arXiv Detail & Related papers (2022-11-09T18:58:29Z)
- An Empirical Investigation of Commonsense Self-Supervision with Knowledge Graphs [67.23285413610243]
Self-supervision based on the information extracted from large knowledge graphs has been shown to improve the generalization of language models.
We study the effect of knowledge sampling strategies and sizes that can be used to generate synthetic data for adapting language models.
arXiv Detail & Related papers (2022-05-21T19:49:04Z)
- Efficient Multi-Modal Embeddings from Structured Data [0.0]
Multi-modal word semantics aims to enhance embeddings with perceptual input.
Visual grounding can contribute to linguistic applications as well.
The new embeddings convey complementary information to text-based embeddings.
arXiv Detail & Related papers (2021-10-06T08:42:09Z)
- Dependency Induction Through the Lens of Visual Perception [81.91502968815746]
We propose an unsupervised grammar induction model that leverages word concreteness and a structural vision-based heuristic to jointly learn constituency-structure and dependency-structure grammars.
Our experiments show that the proposed extension outperforms the current state-of-the-art visually grounded models in constituency parsing even with a smaller grammar size.
arXiv Detail & Related papers (2021-09-20T18:40:37Z)
- Attend to the beginning: A study on using bidirectional attention for extractive summarization [1.148539813252112]
We propose attending to the beginning of a document to improve the performance of extractive summarization models.
We exploit the tendency of writers to introduce important information early in a text by attending to its first few sentences in generic textual data.
arXiv Detail & Related papers (2020-02-09T17:46:22Z)