Increasing Adverse Drug Events extraction robustness on social media:
case study on negation and speculation
- URL: http://arxiv.org/abs/2209.02812v1
- Date: Tue, 6 Sep 2022 20:38:42 GMT
- Title: Increasing Adverse Drug Events extraction robustness on social media:
case study on negation and speculation
- Authors: Simone Scaboro, Beatrice Portelli, Emmanuele Chersoni, Enrico Santus,
Giuseppe Serra
- Abstract summary: In the last decade, an increasing number of users have started reporting Adverse Drug Events (ADE) on social media platforms.
This paper considers four state-of-the-art systems for ADE detection on social media texts.
We introduce SNAX, a benchmark to test their performance against samples containing negated and speculated ADEs.
- Score: 7.052238842788185
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the last decade, an increasing number of users have started reporting
Adverse Drug Events (ADE) on social media platforms, blogs, and health forums.
Given the large volume of reports, pharmacovigilance has focused on ways to use
Natural Language Processing (NLP) techniques to rapidly examine these large
collections of text, detecting mentions of drug-related adverse reactions to
trigger medical investigations. However, despite the growing interest in the
task and the advances in NLP, the robustness of these models in the face of
linguistic phenomena such as negations and speculations is an open research
question. Negations and speculations are pervasive phenomena in natural
language, and can severely hamper the ability of an automated system to
discriminate between factual and nonfactual statements in text. In this paper,
we consider four state-of-the-art systems for ADE detection on
social media texts. We introduce SNAX, a benchmark to test their performance
against samples containing negated and speculated ADEs, showing their fragility
against these phenomena. We then introduce two possible strategies to increase
the robustness of these models, showing that both of them bring significant
increases in performance, lowering the number of spurious entities predicted by
the models by 60% for negation and 80% for speculation.
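To make the evaluation concrete, below is a minimal sketch (in Python; not the authors' code, and the sample structure and field names are assumed) of how one might count the spurious ADE entities a model predicts on negated or speculated samples, where no factual ADE should be extracted:

```python
# Minimal sketch, not the SNAX evaluation code: counting spurious ADE entities
# on negated/speculated samples. The sample dict layout is an assumption.
from typing import Dict, List


def count_spurious_entities(samples: List[Dict]) -> Dict[str, float]:
    """Count ADE spans predicted on samples whose ADEs are negated or speculated.

    Each sample is assumed to look like:
        {"text": "...", "predicted_ades": ["headache", ...]}
    Because the ADE in the text is negated or speculated, every predicted
    span is counted as spurious.
    """
    total_predictions = sum(len(s["predicted_ades"]) for s in samples)
    flagged_samples = sum(1 for s in samples if s["predicted_ades"])
    return {
        "spurious_entities": total_predictions,
        "samples_with_spurious_predictions": flagged_samples,
        "spurious_rate": flagged_samples / len(samples) if samples else 0.0,
    }


if __name__ == "__main__":
    # Toy negated samples with hypothetical model predictions.
    negated = [
        {"text": "This drug did not give me a headache", "predicted_ades": ["headache"]},
        {"text": "No nausea at all after switching", "predicted_ades": []},
    ]
    print(count_spurious_entities(negated))
```

A robustness strategy would then aim to drive the spurious-entity count toward zero on such samples while leaving performance on factual ADE mentions unchanged.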
Related papers
- Misspellings in Natural Language Processing: A survey [52.419589623702336]
Misspellings have become ubiquitous in digital communication.
We reconstruct a history of misspellings as a scientific problem.
We discuss the latest advancements to address the challenge of misspellings in NLP.
arXiv Detail & Related papers (2025-01-28T10:26:04Z) - Epidemiology-informed Network for Robust Rumor Detection [59.89351792706995]
We propose a novel Epidemiology-informed Network (EIN) that integrates epidemiological knowledge to enhance performance.
To adapt epidemiology theory to rumor detection, it is expected that each user's stance toward the source information will be annotated.
Our experimental results demonstrate that the proposed EIN not only outperforms state-of-the-art methods on real-world datasets but also exhibits enhanced robustness across varying tree depths.
arXiv Detail & Related papers (2024-11-20T00:43:32Z) - Evaluating the Robustness of Adverse Drug Event Classification Models Using Templates [11.276505487445782]
An adverse drug effect (ADE) is any harmful event resulting from medical drug treatment.
Despite their importance, ADEs are often under-reported in official channels.
Some research has turned to detecting discussions of ADEs in social media.
arXiv Detail & Related papers (2024-07-02T17:09:24Z) - Humanizing Machine-Generated Content: Evading AI-Text Detection through Adversarial Attack [24.954755569786396]
We propose a framework for a broader class of adversarial attacks, designed to perform minor perturbations in machine-generated content to evade detection.
We consider two attack settings: white-box and black-box, and employ adversarial learning in dynamic scenarios to assess the potential enhancement of the current detection model's robustness.
The empirical results reveal that the current detection models can be compromised in as little as 10 seconds, leading to the misclassification of machine-generated text as human-written content.
arXiv Detail & Related papers (2024-04-02T12:49:22Z) - Decoding the Silent Majority: Inducing Belief Augmented Social Graph
with Large Language Model for Response Forecasting [74.68371461260946]
SocialSense is a framework that induces a belief-centered graph on top of an existing social network, along with graph-based propagation to capture social dynamics.
Our method surpasses existing state-of-the-art in experimental evaluations for both zero-shot and supervised settings.
arXiv Detail & Related papers (2023-10-20T06:17:02Z) - LaTeX: Language Pattern-aware Triggering Event Detection for Adverse
Experience during Pandemics [10.292364075312667]
The COVID-19 pandemic has accentuated socioeconomic disparities across various racial and ethnic groups in the United States.
This paper explores the role of social media in addressing both scarcity and challenges.
We analyze language patterns related to four types of adverse experiences.
arXiv Detail & Related papers (2023-10-05T23:09:31Z) - Measuring the Effect of Influential Messages on Varying Personas [67.1149173905004]
We present a new task, Response Forecasting on Personas for News Media, to estimate the response a persona might have upon seeing a news message.
The proposed task not only introduces personalization in the modeling but also predicts the sentiment polarity and intensity of each response.
This enables more accurate and comprehensive inference on the mental state of the persona.
arXiv Detail & Related papers (2023-05-25T21:01:00Z) - Survey of Hallucination in Natural Language Generation [69.9926849848132]
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies.
Deep learning based generation is prone to hallucinate unintended text, which degrades the system performance.
This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated texts in NLG.
arXiv Detail & Related papers (2022-02-08T03:55:01Z) - AI-based Approach for Safety Signals Detection from Social Networks:
Application to the Levothyrox Scandal in 2017 on Doctissimo Forum [1.4502611532302039]
We propose an AI-based approach for the detection of potential pharmaceutical safety signals from patients' reviews.
We focus on the Levothyrox case in France which triggered huge attention from the media following the change of the medication formula.
We investigate various NLP-based indicators extracted from patients' reviews including words and n-grams frequency, semantic similarity, Adverse Drug Reactions mentions, and sentiment analysis.
arXiv Detail & Related papers (2022-02-01T10:17:32Z) - NADE: A Benchmark for Robust Adverse Drug Events Extraction in Face of
Negations [8.380439657099906]
Adverse Drug Event (ADE) extraction models can rapidly examine large collections of social media texts, detecting mentions of drug-related adverse reactions and triggering medical investigations.
Despite the recent advances in NLP, it is currently unknown if such models are robust in the face of negation, which is pervasive across language varieties.
In this paper we evaluate three state-of-the-art systems, showing their fragility against negation, and then we introduce two possible strategies to increase the robustness of these models.
arXiv Detail & Related papers (2021-09-21T10:33:29Z) - Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News [57.9843300852526]
We introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions.
To identify the possible weaknesses that adversaries can exploit, we create a NeuralNews dataset composed of 4 different types of generated articles.
In addition to the valuable insights gleaned from our user study experiments, we provide a relatively effective approach based on detecting visual-semantic inconsistencies.
arXiv Detail & Related papers (2020-09-16T14:13:15Z) - Amnesic Probing: Behavioral Explanation with Amnesic Counterfactuals [53.484562601127195]
We point out the inability to infer behavioral conclusions from probing results.
We offer an alternative method that focuses on how the information is being used, rather than on what information is encoded.
arXiv Detail & Related papers (2020-06-01T15:00:11Z)