Confronting Abusive Language Online: A Survey from the Ethical and Human
Rights Perspective
- URL: http://arxiv.org/abs/2012.12305v1
- Date: Tue, 22 Dec 2020 19:27:11 GMT
- Authors: Svetlana Kiritchenko, Isar Nejadgholi, Kathleen C. Fraser
- Abstract summary: We review a large body of NLP research on automatic abuse detection with a new focus on ethical challenges.
We highlight the need to examine the broad social impacts of this technology.
We identify several opportunities for rights-respecting, socio-technical solutions to detect and confront online abuse.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The pervasiveness of abusive content on the internet can lead to severe
psychological and physical harm. Significant effort in Natural Language
Processing (NLP) research has been devoted to addressing this problem through
abusive content detection and related sub-areas, such as the detection of hate
speech, toxicity, cyberbullying, etc. Although current technologies achieve
high classification performance in research studies, it has been observed that
the real-life application of this technology can cause unintended harms, such
as the silencing of under-represented groups. We review a large body of NLP
research on automatic abuse detection with a new focus on ethical challenges,
organized around eight established ethical principles: privacy, accountability,
safety and security, transparency and explainability, fairness and
non-discrimination, human control of technology, professional responsibility,
and promotion of human values. In many cases, these principles relate not only
to situational ethical codes, which may be context-dependent, but are in fact
connected to universal human rights, such as the right to privacy, freedom from
discrimination, and freedom of expression. We highlight the need to examine the
broad social impacts of this technology, and to bring ethical and human rights
considerations to every stage of the application life-cycle, from task
formulation and dataset design, to model training and evaluation, to
application deployment. Guided by these principles, we identify several
opportunities for rights-respecting, socio-technical solutions to detect and
confront online abuse, including 'nudging', 'quarantining', value sensitive
design, counter-narratives, style transfer, and AI-driven public education
applications.
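The abstract frames abuse detection as a text-classification task. As a purely illustrative, hypothetical sketch of that task formulation (not any system surveyed in the paper, and on invented toy data), a minimal bag-of-words Naive Bayes classifier might look like:

```python
import math
from collections import Counter

# Hypothetical toy data; real abuse-detection corpora are far larger and
# carefully annotated (e.g. for hate speech, toxicity, cyberbullying).
TRAIN = [
    ("you are a wonderful person", 0),      # 0 = benign
    ("thanks for the helpful answer", 0),
    ("what a lovely day today", 0),
    ("you are a worthless idiot", 1),       # 1 = abusive
    ("shut up you stupid fool", 1),
    ("nobody likes you idiot", 1),
]

def train(examples):
    """Fit per-class word counts and class priors for multinomial Naive Bayes."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter()
    for text, label in examples:
        priors[label] += 1
        counts[label].update(text.split())
    return counts, priors

def predict(text, counts, priors, alpha=1.0):
    """Return the most likely class label under add-alpha smoothing."""
    vocab = set(counts[0]) | set(counts[1])
    scores = {}
    for label in (0, 1):
        total = sum(counts[label].values())
        score = math.log(priors[label] / sum(priors.values()))
        for word in text.split():
            score += math.log(
                (counts[label][word] + alpha) / (total + alpha * len(vocab))
            )
        scores[label] = score
    return max(scores, key=scores.get)

counts, priors = train(TRAIN)
print(predict("you stupid idiot", counts, priors))       # abusive class
print(predict("what a helpful answer", counts, priors))  # benign class
```

The survey's central point is that high classification accuracy on sketches like this does not address the ethical risks (e.g. silencing under-represented groups) that arise when such systems are deployed at scale.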
Related papers
- Towards Privacy-aware Mental Health AI Models: Advances, Challenges, and Opportunities [61.633126163190724]
Mental illness is a widespread and debilitating condition with substantial societal and personal costs.
Recent advances in Artificial Intelligence (AI) hold great potential for recognizing and addressing conditions such as depression, anxiety disorder, bipolar disorder, schizophrenia, and post-traumatic stress disorder.
Privacy concerns, including the risk of sensitive data leakage from datasets and trained models, remain a critical barrier to deploying these AI systems in real-world clinical settings.
arXiv Detail & Related papers (2025-02-01T15:10:02Z)
- Understanding Mental Health Content on Social Media and Its Effect Towards Suicidal Ideation [0.0]
The study details the application of these technologies in analyzing vast amounts of unstructured social media data to detect linguistic patterns.
It evaluates the real-world effectiveness, limitations, and ethical considerations of employing these technologies for suicide prevention.
arXiv Detail & Related papers (2025-01-16T05:46:27Z)
- Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks.
In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z)
- Transparency, Security, and Workplace Training & Awareness in the Age of Generative AI [0.0]
As AI technologies advance, ethical considerations, transparency, data privacy, and their impact on human labor intersect with the drive for innovation and efficiency.
Our research explores publicly accessible large language models (LLMs) that often operate on the periphery, away from mainstream scrutiny.
Specifically, we examine Gab AI, a platform that centers around unrestricted communication and privacy, allowing users to interact freely without censorship.
arXiv Detail & Related papers (2024-12-19T17:40:58Z)
- Technology as uncharted territory: Contextual integrity and the notion of AI as new ethical ground [55.2480439325792]
I argue that efforts to promote responsible and ethical AI can inadvertently contribute to and seemingly legitimize this disregard for established contextual norms.
I question the current narrow prioritization in AI ethics of moral innovation over moral preservation.
arXiv Detail & Related papers (2024-12-06T15:36:13Z)
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z)
- Applications of Generative AI in Healthcare: algorithmic, ethical, legal and societal considerations [0.0]
Generative AI is rapidly transforming medical imaging and text analysis.
This paper explores issues of accuracy, informed consent, data privacy, and algorithmic limitations.
We aim to foster a roadmap for ethical and responsible implementation of generative AI in healthcare.
arXiv Detail & Related papers (2024-06-15T13:28:07Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Case Study: Deontological Ethics in NLP [119.53038547411062]
We study one ethical theory, namely deontological ethics, from the perspective of NLP.
In particular, we focus on the generalization principle and the respect for autonomy through informed consent.
We provide four case studies to demonstrate how these principles can be used with NLP systems.
arXiv Detail & Related papers (2020-10-09T16:04:51Z)
- A survey of algorithmic recourse: definitions, formulations, solutions, and prospects [24.615500469071183]
We focus on algorithmic recourse, which is concerned with providing explanations and recommendations to individuals who are unfavourably treated by automated decision-making systems.
We perform an extensive literature review, and align the efforts of many authors by presenting unified definitions, formulations, and solutions to recourse.
arXiv Detail & Related papers (2020-10-08T15:15:34Z)
- Designing for Human Rights in AI [0.0]
AI systems can help us make evidence-driven, efficient decisions, but can also confront us with unjustified, discriminatory decisions.
It is becoming evident that these technological developments are consequential to people's fundamental human rights.
Technical solutions to these complex socio-ethical problems are often developed without empirical study of societal context.
arXiv Detail & Related papers (2020-05-11T09:21:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.