Detecting Attackable Sentences in Arguments
- URL: http://arxiv.org/abs/2010.02660v1
- Date: Tue, 6 Oct 2020 12:13:00 GMT
- Title: Detecting Attackable Sentences in Arguments
- Authors: Yohan Jo, Seojin Bang, Emaad Manzoor, Eduard Hovy, Chris Reed
- Abstract summary: We analyze driving reasons for attacks in argumentation and identify relevant characteristics of sentences.
We show that machine learning models can automatically detect attackable sentences in arguments.
- Score: 10.20577647332734
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Finding attackable sentences in an argument is the first step toward
successful refutation in argumentation. We present a first large-scale analysis
of sentence attackability in online arguments. We analyze driving reasons for
attacks in argumentation and identify relevant characteristics of sentences. We
demonstrate that a sentence's attackability is associated with many of these
characteristics regarding the sentence's content, proposition types, and tone,
and that an external knowledge source can provide useful information about
attackability. Building on these findings, we demonstrate that machine learning
models can automatically detect attackable sentences in arguments,
significantly better than several baselines and comparably well to laypeople.
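To make the detection task concrete, here is a minimal sketch of scoring an argument's sentences with a fine-tuned binary classifier. This is not the authors' released code: the checkpoint path, label convention, and example sentences are placeholders.
```python
# Minimal sketch: rank an argument's sentences by predicted attackability.
# Assumes a binary sequence classifier fine-tuned on attackability labels;
# "path/to/attackability-model" is a placeholder, not a released checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "path/to/attackability-model"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.eval()

def rank_attackable(sentences):
    """Return (sentence, P(attackable)) pairs, most attackable first."""
    enc = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits               # shape: (n_sentences, 2)
    probs = torch.softmax(logits, dim=-1)[:, 1]    # assume label 1 == attackable
    order = torch.argsort(probs, descending=True)
    return [(sentences[int(i)], float(probs[i])) for i in order]

argument = [
    "Remote work should be the default for office jobs.",
    "Everyone is more productive at home.",        # sweeping claim, likely attackable
    "Commuting time can be reallocated to actual work.",
]
for sent, p in rank_attackable(argument):
    print(f"{p:.2f}  {sent}")
```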
Related papers
- Explaining Arguments' Strength: Unveiling the Role of Attacks and Supports (Technical Report) [13.644164255651472]
We propose a novel theory of Relation Attribution Explanations (RAEs).
RAEs offer fine-grained insights into the role of attacks and supports in determining arguments' strength in quantitative bipolar argumentation.
We show the application value of RAEs in case studies on fraud detection and large language models.
arXiv Detail & Related papers (2024-04-22T16:02:48Z)
- CASA: Causality-driven Argument Sufficiency Assessment [79.13496878681309]
We propose CASA, a zero-shot causality-driven argument sufficiency assessment framework.
CASA estimates the probability of sufficiency (PS), which measures how likely introducing the premise event would lead to the conclusion when both the premise and conclusion events are absent (formalized below).
Experiments on two logical fallacy detection datasets demonstrate that CASA accurately identifies insufficient arguments.
arXiv Detail & Related papers (2024-01-10T16:21:18Z)
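For context, PS here follows the probability of sufficiency from the causal-inference literature; the formalization below is a gloss of the abstract in Pearl's counterfactual notation, with X the premise event, Y the conclusion event, and Y_x the conclusion under an intervention asserting the premise.
```latex
% Probability of sufficiency: the chance that forcing the premise (x) would
% bring about the conclusion, given that premise and conclusion are both absent.
\mathrm{PS} = P\left( Y_{x} = 1 \mid X = 0,\, Y = 0 \right)
```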
- Lost In Translation: Generating Adversarial Examples Robust to Round-Trip Translation [66.33340583035374]
We present a comprehensive study on the robustness of current text adversarial attacks to round-trip translation.
We demonstrate that 6 state-of-the-art text-based adversarial attacks do not maintain their efficacy after round-trip translation.
We introduce an intervention-based solution: integrating machine translation into the adversarial example generation process (a round-trip check is sketched below).
arXiv Detail & Related papers (2023-07-24T04:29:43Z)
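The round-trip check referenced above can be sketched as follows: translate the adversarial text English -> French -> English with off-the-shelf MarianMT checkpoints and re-query the victim model. Only the translation checkpoints are real models; the victim classifier and its adversarial label are placeholders.
```python
# Sketch of a round-trip-translation robustness check: an adversarial text is
# translated en -> fr -> en, then the victim classifier is queried again to see
# whether the attack still flips the prediction. The victim is a placeholder.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

def make_translator(model_name):
    tok = AutoTokenizer.from_pretrained(model_name)
    mt = AutoModelForSeq2SeqLM.from_pretrained(model_name)
    def translate(text):
        batch = tok(text, return_tensors="pt", truncation=True)
        out = mt.generate(**batch, max_new_tokens=256)
        return tok.decode(out[0], skip_special_tokens=True)
    return translate

en_fr = make_translator("Helsinki-NLP/opus-mt-en-fr")  # real MarianMT checkpoints
fr_en = make_translator("Helsinki-NLP/opus-mt-fr-en")

def survives_round_trip(adv_text, victim_predict, adversarial_label):
    """True if the attack still fools the victim after en -> fr -> en."""
    round_tripped = fr_en(en_fr(adv_text))
    return victim_predict(round_tripped) == adversarial_label
```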
- TASA: Deceiving Question Answering Models by Twin Answer Sentences Attack [93.50174324435321]
We present Twin Answer Sentences Attack (TASA), an adversarial attack method for question answering (QA) models.
TASA produces fluent and grammatical adversarial contexts while maintaining gold answers.
arXiv Detail & Related papers (2022-10-27T07:16:30Z)
- Fact-Saboteurs: A Taxonomy of Evidence Manipulation Attacks against Fact-Verification Systems [80.3811072650087]
We show that it is possible to subtly modify claim-salient snippets in the evidence and generate diverse and claim-aligned evidence.
The attacks are also robust against post-hoc modifications of the claim.
These attacks can have harmful implications for inspectable and human-in-the-loop usage scenarios.
arXiv Detail & Related papers (2022-09-07T13:39:24Z)
- Rethinking Textual Adversarial Defense for Pre-trained Language Models [79.18455635071817]
A literature review shows that pre-trained language models (PrLMs) are vulnerable to adversarial attacks.
We propose a novel metric (Degree of Anomaly) to enable current adversarial attack approaches to generate more natural and imperceptible adversarial examples.
We show that our universal defense framework achieves after-attack accuracy comparable to, or even higher than, attack-specific defenses.
arXiv Detail & Related papers (2022-07-21T07:51:45Z)
- LPAttack: A Feasible Annotation Scheme for Capturing Logic Pattern of Attacks in Arguments [33.445994192714956]
In argumentative discourse, persuasion is often achieved by refuting or attacking others' arguments.
However, no existing studies capture the complex rhetorical moves in such attacks, nor the presuppositions and value judgements they involve.
We introduce LPAttack, a novel annotation scheme that captures the common modes and complex rhetorical moves in attacks along with the implicit presuppositions and value judgements in them.
arXiv Detail & Related papers (2022-04-04T14:15:25Z)
- Argument Undermining: Counter-Argument Generation by Attacking Weak Premises [31.463885580010192]
We explore argument undermining, that is, countering an argument by attacking one of its premises.
We propose a pipeline approach that first assesses the premises' strength and then generates a counter-argument targeting the weak ones (sketched below).
arXiv Detail & Related papers (2021-05-25T08:39:14Z)
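A minimal sketch of that two-stage pipeline, as read from the abstract; strength_score and generate_counter stand in for the paper's learned premise-strength assessor and counter-argument generator.
```python
# Argument undermining as a two-stage pipeline: score each premise's strength,
# then generate a counter-argument against the weakest one. Both callables are
# placeholders, not the paper's actual components.
def undermine(premises, strength_score, generate_counter):
    weakest = min(premises, key=strength_score)    # stage 1: find the weak premise
    return weakest, generate_counter(weakest)      # stage 2: attack it

# Toy usage with stand-in components:
premises = ["All experts agree on X.", "X implies Y in some cases."]
toy_score = lambda p: len(p)                       # placeholder strength heuristic
toy_counter = lambda p: f"Counter: expert consensus is overstated in {p!r}."
print(undermine(premises, toy_score, toy_counter))
```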
- Extracting Implicitly Asserted Propositions in Argumentation [8.20413690846954]
We study methods for extracting propositions implicitly asserted in questions, reported speech, and imperatives in argumentation.
Our study may inform future research on argument mining and the semantics of these rhetorical devices in argumentation.
arXiv Detail & Related papers (2020-10-06T12:03:47Z)
- Aspect-Controlled Neural Argument Generation [65.91772010586605]
We train a language model for argument generation that can be controlled on a fine-grained level to generate sentence-level arguments for a given topic, stance, and aspect.
Our evaluation shows that our generation model is able to generate high-quality, aspect-specific arguments.
These arguments can be used to improve the performance of stance detection models via data augmentation and to generate counter-arguments (a control-code sketch follows below).
arXiv Detail & Related papers (2020-04-30T20:17:22Z)
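To illustrate what fine-grained control could look like, here is a hedged sketch of control-code conditioned generation. The code format is a stand-in, not the authors' exact scheme, and an off-the-shelf GPT-2 would first need fine-tuning on arguments prefixed this way.
```python
# Sketch of control-code conditioned argument generation: the prompt encodes
# topic, stance, and aspect as control codes. An untuned GPT-2 will not respect
# these codes; fine-tuning on code-prefixed arguments is assumed.
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "<topic> nuclear energy <stance> con <aspect> waste disposal <arg>"
ids = tok(prompt, return_tensors="pt").input_ids
out = lm.generate(ids, max_new_tokens=40, do_sample=True, top_p=0.9,
                  pad_token_id=tok.eos_token_id)
print(tok.decode(out[0], skip_special_tokens=True))
```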
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.