Assessing the Human Likeness of AI-Generated Counterspeech
- URL: http://arxiv.org/abs/2410.11007v1
- Date: Mon, 14 Oct 2024 18:48:47 GMT
- Title: Assessing the Human Likeness of AI-Generated Counterspeech
- Authors: Xiaoying Song, Sujana Mamidisetty, Eduardo Blanco, Lingzi Hong
- Abstract summary: Counterspeech is a targeted response to counteract and challenge abusive or hateful content.
Previous studies have proposed different strategies for automatically generating counterspeech.
We investigate the human likeness of AI-generated counterspeech, a critical factor influencing effectiveness.
- Score: 10.434435022492723
- Abstract: Counterspeech is a targeted response to counteract and challenge abusive or hateful content. It can effectively curb the spread of hatred and foster constructive online communication. Previous studies have proposed different strategies for automatically generating counterspeech. Evaluations, however, focus on relevance, surface form, and other shallow linguistic characteristics. In this paper, we investigate the human likeness of AI-generated counterspeech, a critical factor influencing effectiveness. We implement and evaluate several LLM-based generation strategies, and discover that AI-generated and human-written counterspeech can be easily distinguished by both simple classifiers and humans. Further, we reveal differences in linguistic characteristics, politeness, and specificity.
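The claim that "simple classifiers" can tell AI-generated and human-written counterspeech apart can be illustrated with a minimal sketch, assuming a small labeled corpus: the toy texts, labels, and the TF-IDF-plus-logistic-regression setup below are illustrative assumptions, not the paper's actual data, features, or models.

```python
# Minimal sketch of a "simple classifier" separating AI-generated from
# human-written counterspeech. Toy placeholder data; not from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

texts = [
    # human-written (toy examples)
    "Please reconsider; generalizing about a whole group spreads harm.",
    "Not cool. Everyone deserves to feel safe here.",
    "I disagree, stereotypes like this hurt real people.",
    "That's unfair, people deserve respect regardless of background.",
    # AI-generated style (toy examples)
    "It is important to recognize that such statements are harmful and unproductive.",
    "Promoting understanding and respect fosters a more constructive dialogue.",
    "Such generalizations are inaccurate and can perpetuate negative stereotypes.",
    "Engaging respectfully helps create a more inclusive online environment.",
]
labels = [0, 0, 0, 0, 1, 1, 1, 1]  # 0 = human-written, 1 = AI-generated

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=0, stratify=labels
)

# Shallow lexical features (word and bigram TF-IDF) plus a linear model.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

preds = clf.predict(vectorizer.transform(X_test))
print("accuracy:", accuracy_score(y_test, preds))
```

On a real paired corpus of human and LLM counterspeech, the abstract's point is that even such shallow lexical features are enough to separate the two classes.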
Related papers
- Is Safer Better? The Impact of Guardrails on the Argumentative Strength of LLMs in Hate Speech Countering [22.594296353433855]
We focus on two aspects of counterspeech generation to produce more cogent responses.
First, we test whether the presence of safety guardrails hinders the quality of the generations.
Second, we assess whether attacking a specific component of the hate speech results in a more effective argumentative strategy against online hate.
arXiv Detail & Related papers (2024-10-04T14:31:37Z)
- SIFToM: Robust Spoken Instruction Following through Theory of Mind [51.326266354164716]
We present a cognitively inspired model, Speech Instruction Following through Theory of Mind (SIFToM), to enable robots to pragmatically follow human instructions under diverse speech conditions.
Results show that the SIFToM model outperforms state-of-the-art speech and language models, approaching human-level accuracy on challenging speech instruction following tasks.
arXiv Detail & Related papers (2024-09-17T02:36:10Z)
- Self-Directed Turing Test for Large Language Models [56.64615470513102]
The Turing test examines whether AIs can exhibit human-like behaviour in natural language conversations.
Traditional Turing tests adopt a rigid dialogue format in which each participant sends only one message per turn.
This paper proposes the Self-Directed Turing Test, which extends the original test with a burst dialogue format.
arXiv Detail & Related papers (2024-08-19T09:57:28Z)
- Outcome-Constrained Large Language Models for Countering Hate Speech [10.434435022492723]
This study aims to develop methods for generating counterspeech constrained by conversation outcomes.
We experiment with large language models (LLMs) to incorporate two desired conversation outcomes into the text generation process.
Evaluation results show that our methods effectively steer the generation of counterspeech toward the desired outcomes (a prompting-based sketch of this idea follows this entry).
arXiv Detail & Related papers (2024-03-25T19:44:06Z)
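As a hedged sketch of how a desired conversation outcome can be folded into generation (not the paper's actual method), one can state the outcome directly in the prompt; the outcome wording, prompt text, and model name below are assumptions for illustration only.

```python
# Illustrative outcome-constrained counterspeech generation via prompting.
# Outcome description, prompt wording, and model choice are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

hateful_comment = "<hateful comment goes here>"
desired_outcome = (
    "the original poster re-enters the conversation without using hateful language"
)

prompt = (
    "Write a short, civil counterspeech reply to the comment below. "
    f"Aim for this conversation outcome: {desired_outcome}.\n\n"
    f"Comment: {hateful_comment}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```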
- Consolidating Strategies for Countering Hate Speech Using Persuasive Dialogues [3.8979646385036175]
We explore controllable strategies for generating counter-arguments to hateful comments in online conversations.
Using automatic and human evaluations, we determine the best combination of features that generate fluent, argumentative, and logically sound arguments.
We share the computational models developed for automatically annotating text with these features, along with a silver-standard annotated version of an existing hate speech dialogue corpus.
arXiv Detail & Related papers (2024-01-15T16:31:18Z)
- Beyond Denouncing Hate: Strategies for Countering Implied Biases and Stereotypes in Language [18.560379338032558]
We draw from psychology and philosophy literature to craft six psychologically inspired strategies to challenge the underlying stereotypical implications of hateful language.
We show that human-written counterspeech uses strategies that are more specific to the implied stereotype, whereas machine-generated counterspeech uses less specific strategies.
Our findings point to the importance of accounting for the underlying stereotypical implications of speech when generating counterspeech.
arXiv Detail & Related papers (2023-10-31T21:33:46Z)
- Understanding Counterspeech for Online Harm Mitigation [12.104301755723542]
Counterspeech offers direct rebuttals to hateful speech by challenging perpetrators of hate and showing support to targets of abuse.
It provides a promising alternative to more contentious measures, such as content moderation and deplatforming.
This paper systematically reviews counterspeech research in the social sciences and compares methodologies and findings with computer science efforts in automatic counterspeech generation.
arXiv Detail & Related papers (2023-07-01T20:54:01Z)
- Characterizing the adversarial vulnerability of speech self-supervised learning [95.03389072594243]
We make the first attempt to investigate the adversarial vulnerability of this paradigm under attacks from both zero-knowledge and limited-knowledge adversaries.
The experimental results illustrate that the paradigm proposed by SUPERB is seriously vulnerable to limited-knowledge adversaries.
arXiv Detail & Related papers (2021-11-08T08:44:04Z)
- An Attribute-Aligned Strategy for Learning Speech Representation [57.891727280493015]
We propose an attribute-aligned learning strategy to derive speech representations that can flexibly address these issues via an attribute-selection mechanism.
Specifically, we propose a layered-representation variational autoencoder (LR-VAE), which factorizes speech representation into attribute-sensitive nodes.
Our proposed method achieves competitive performance on identity-free SER and better performance on emotionless SV.
arXiv Detail & Related papers (2021-06-05T06:19:14Z)
- Reinforcement Learning for Emotional Text-to-Speech Synthesis with Improved Emotion Discriminability [82.39099867188547]
Emotional text-to-speech synthesis (ETTS) has seen much progress in recent years.
We propose a new interactive training paradigm for ETTS, denoted as i-ETTS.
We formulate an iterative training strategy with reinforcement learning to ensure the quality of i-ETTS optimization.
arXiv Detail & Related papers (2021-04-03T13:52:47Z)
- You Impress Me: Dialogue Generation via Mutual Persona Perception [62.89449096369027]
Research in cognitive science suggests that understanding is an essential signal for high-quality chit-chat conversation.
Motivated by this, we propose P2 Bot, a transmitter-receiver based framework with the aim of explicitly modeling understanding.
arXiv Detail & Related papers (2020-04-11T12:51:07Z)