Can AI Moderate Online Communities?
- URL: http://arxiv.org/abs/2306.05122v1
- Date: Thu, 8 Jun 2023 11:45:44 GMT
- Title: Can AI Moderate Online Communities?
- Authors: Henrik Axelsen, Johannes Rude Jensen, Sebastian Axelsen, Valdemar
Licht, Omri Ross
- Abstract summary: We use open-access generative pre-trained transformer (GPT) models from OpenAI to train student models.
Our preliminary findings suggest that, when properly trained, LLMs can excel at identifying actor intentions, moderating toxic comments, and rewarding positive contributions.
We contribute to the information systems (IS) discourse with a rapid development framework for the application of generative AI to online content moderation and the management of culture in decentralized, pseudonymous communities.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The task of cultivating healthy communication in online communities
becomes increasingly urgent as gaming and social media experiences become
progressively more immersive and life-like. We approach the challenge of
moderating online communities by training student models using a large
language model (LLM). We use zero-shot learning models to distill and expand
datasets, followed by a few-shot learning and fine-tuning approach, leveraging
open-access generative pre-trained transformer (GPT) models from OpenAI. Our
preliminary findings suggest that, when properly trained, LLMs can excel at
identifying actor intentions, moderating toxic comments, and rewarding
positive contributions. The student models perform above expectation on
non-contextual assignments, such as identifying classically toxic behavior,
and perform sufficiently on contextual assignments, such as identifying
positive contributions to online discourse. Further, using open-access models
like OpenAI's GPT, we experience a step-change in the development process for
what has historically been a complex modeling task. We contribute to the
information systems (IS) discourse with a rapid development framework for the
application of generative AI to online content moderation and the management
of culture in decentralized, pseudonymous communities, by providing a sample
model suite of industry-ready generative AI models based on open-access LLMs.
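The distill-expand-fine-tune pipeline the abstract describes can be prototyped in a few lines. The following is a minimal sketch, assuming an OpenAI-style chat completions API; the label set, prompt, and model name are illustrative stand-ins, not the authors' published configuration.

```python
# Minimal sketch: zero-shot label distillation with a large teacher model,
# producing a JSONL file suitable for fine-tuning a smaller student model.
# Labels, prompt, and model name are hypothetical assumptions.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

LABELS = ["toxic", "positive", "neutral"]  # hypothetical label set

def zero_shot_label(comment: str) -> str:
    """Ask the teacher model for a zero-shot moderation label."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in; the paper used earlier GPT models
        messages=[
            {"role": "system",
             "content": f"Classify the comment as one of: {', '.join(LABELS)}. "
                        "Reply with the label only."},
            {"role": "user", "content": comment},
        ],
        temperature=0,
    )
    label = response.choices[0].message.content.strip().lower()
    return label if label in LABELS else "neutral"

def build_finetune_file(comments: list[str], path: str) -> None:
    """Expand raw comments into a labeled JSONL file for student fine-tuning."""
    with open(path, "w") as f:
        for comment in comments:
            record = {
                "messages": [
                    {"role": "user", "content": comment},
                    {"role": "assistant", "content": zero_shot_label(comment)},
                ]
            }
            f.write(json.dumps(record) + "\n")
```

The messages-per-line JSONL layout matches OpenAI's chat fine-tuning format, so a dataset expanded this way can be submitted directly to a fine-tuning job for the student model.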
Related papers
- From MOOC to MAIC: Reshaping Online Teaching and Learning through LLM-driven Agents [78.15899922698631]
MAIC (Massive AI-empowered Course) is a new form of online education that leverages LLM-driven multi-agent systems to construct an AI-augmented classroom.
We conduct preliminary experiments at Tsinghua University, one of China's leading universities.
arXiv Detail & Related papers (2024-09-05T13:22:51Z)
- Learn while Unlearn: An Iterative Unlearning Framework for Generative Language Models [49.043599241803825]
The Iterative Contrastive Unlearning (ICU) framework consists of three core components.
A Knowledge Unlearning Induction module removes specific knowledge through an unlearning loss.
A Contrastive Learning Enhancement module preserves the model's expressive capabilities against the pure unlearning goal.
An Iterative Unlearning Refinement module dynamically assesses the extent of unlearning on specific data pieces and makes iterative updates (a rough sketch of the combined objective follows this entry).
arXiv Detail & Related papers (2024-07-25T07:09:35Z)
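As a rough illustration of how the first two ICU modules could combine into one training step, here is a PyTorch sketch under stated assumptions: a Hugging Face-style causal LM, a negated language-modeling loss standing in for the unlearning loss, and a KL term standing in for the contrastive preservation component. This is not the paper's exact formulation.

```python
# Sketch of an ICU-style update: gradient ascent on the forget set plus a
# preservation term on retained data. Weights and batch layout are assumptions.
import torch
import torch.nn.functional as F

def icu_step(model, forget_batch, retain_batch, ref_logits, alpha=1.0, beta=1.0):
    """One iterative unlearning update.

    forget_batch/retain_batch: dicts with input_ids (and labels for the LM loss).
    ref_logits: frozen reference logits on the retain batch, used to preserve
    expressive capability against the pure unlearning goal.
    """
    # Knowledge Unlearning Induction: maximize the LM loss on forget data.
    forget_out = model(**forget_batch)
    unlearn_loss = -forget_out.loss  # negated loss => gradient ascent

    # Preservation term (a KL stand-in for the contrastive component):
    # keep the model close to its reference distribution on retained data.
    retain_out = model(input_ids=retain_batch["input_ids"])
    preserve_loss = F.kl_div(
        F.log_softmax(retain_out.logits, dim=-1),
        F.softmax(ref_logits, dim=-1),
        reduction="batchmean",
    )
    # The Iterative Unlearning Refinement module would wrap this step,
    # periodically checking forget-set loss to decide when to stop per example.
    return alpha * unlearn_loss + beta * preserve_loss
```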
- Collective Constitutional AI: Aligning a Language Model with Public Input [20.95333081841239]
There is growing consensus that language model (LM) developers should not be the sole deciders of LM behavior.
We present Collective Constitutional AI (CCAI): a multi-stage process for sourcing and integrating public input into LMs.
We demonstrate the real-world practicality of this approach by creating what is, to our knowledge, the first LM fine-tuned with collectively sourced public input.
arXiv Detail & Related papers (2024-06-12T02:20:46Z)
- Rethinking Machine Unlearning for Large Language Models [85.92660644100582]
We explore machine unlearning in the domain of large language models (LLMs).
This initiative aims to eliminate undesirable data influence (e.g., sensitive or illegal information) and the associated model capabilities.
arXiv Detail & Related papers (2024-02-13T20:51:58Z)
- Social Learning: Towards Collaborative Learning with Large Language Models [10.24107243529341]
We introduce the framework of "social learning" in the context of large language models (LLMs).
We present and evaluate two approaches for knowledge transfer between LLMs.
We show that performance using these methods is comparable to results with the use of the original labels and prompts (one such transfer is sketched after this entry).
arXiv Detail & Related papers (2023-12-18T18:44:10Z)
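One way to picture such teacher-to-student knowledge transfer: the teacher writes synthetic examples instead of sharing the original (possibly private) labels and prompts, and the student consumes them as few-shot context. This is a toy sketch in the spirit of the paper, not its actual protocol; the llm() helper is a hypothetical stand-in for any text-generation call.

```python
# Toy sketch of prompt-based knowledge transfer between two language models.
# llm() is a hypothetical wrapper; plug in any text-generation API.
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a text-generation model call."""
    raise NotImplementedError("plug in your model API here")

def teacher_makes_examples(task_description: str, n: int = 4) -> str:
    # The teacher generates fresh examples rather than sharing raw user data.
    return llm(
        f"Task: {task_description}\n"
        f"Write {n} new labeled examples for this task, one per line."
    )

def student_answers(task_description: str, examples: str, query: str) -> str:
    # The student consumes teacher-written examples as few-shot context.
    return llm(
        f"Task: {task_description}\n"
        f"Examples:\n{examples}\n"
        f"Input: {query}\nLabel:"
    )
```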
- Adapting Large Language Models for Content Moderation: Pitfalls in Data Engineering and Supervised Fine-tuning [79.53130089003986]
Large Language Models (LLMs) have become a feasible solution for handling tasks in various domains.
In this paper, we introduce how to fine-tune an LLM that can be privately deployed for content moderation (a minimal supervised fine-tuning sketch follows this entry).
arXiv Detail & Related papers (2023-10-05T09:09:44Z)
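For the shape of such a privately deployable moderation fine-tune, here is a minimal sketch assuming Hugging Face transformers and datasets; the base model, toy data, and label scheme are illustrative assumptions, not the paper's recipe.

```python
# Minimal supervised fine-tuning sketch for an on-premise moderation
# classifier. Base model, toy data, and labels are illustrative.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

data = Dataset.from_dict({
    "text": ["you are an idiot", "great point, thanks for sharing"],
    "label": [1, 0],  # 1 = violates policy, 0 = acceptable (toy data)
})

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="moderation-model",
                           per_device_train_batch_size=2,
                           num_train_epochs=1),
    train_dataset=data.map(tokenize, batched=True),
)
trainer.train()  # the resulting checkpoint can be served entirely on-premise
```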
- Training Socially Aligned Language Models on Simulated Social Interactions [99.39979111807388]
Social alignment in AI systems aims to ensure that these models behave according to established societal values.
Current language models (LMs) are trained to rigidly replicate their training corpus in isolation.
This work presents a novel training paradigm that permits LMs to learn from simulated social interactions.
arXiv Detail & Related papers (2023-05-26T14:17:36Z)
- Large Language Models with Controllable Working Memory [64.71038763708161]
Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP).
What further sets these models apart is the massive amount of world knowledge they internalize during pretraining.
How the model's world knowledge interacts with the factual information presented in the context remains underexplored.
arXiv Detail & Related papers (2022-11-09T18:58:29Z)
- Sparsity-aware neural user behavior modeling in online interaction platforms [2.4036844268502766]
We develop generalizable neural representation learning frameworks for user behavior modeling.
Our problem settings span transductive and inductive learning scenarios.
We leverage different facets of information reflecting user behavior to enable personalized inference at scale.
arXiv Detail & Related papers (2022-02-28T00:27:11Z)
- Ex-Model: Continual Learning from a Stream of Trained Models [12.27992745065497]
We argue that continual learning systems should exploit the availability of compressed information in the form of trained models.
We introduce and formalize a new paradigm named "Ex-Model Continual Learning" (ExML), where an agent learns from a sequence of previously trained models instead of raw data (a distillation-style sketch follows this entry).
arXiv Detail & Related papers (2021-12-13T09:46:16Z)
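To make the ExML idea concrete, the sketch below distills a stream of trained expert models into a single student using only an auxiliary dataset, never the experts' original training data. It is a condensed sketch under stated assumptions (plain classifiers returning logits, knowledge distillation as the consolidation strategy), not the paper's algorithm.

```python
# Condensed sketch of Ex-Model-style continual learning: consolidate a
# sequence of trained experts into one student via distillation on
# surrogate data. All names and hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def exml_consolidate(student, expert_stream, aux_loader, lr=1e-4, temp=2.0):
    """Distill a sequence of expert models into one student."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for expert in expert_stream:          # trained models arrive one at a time
        expert.eval()
        for x in aux_loader:              # yields input batches of surrogate
            with torch.no_grad():         # data, not the experts' own data
                teacher_logits = expert(x)
            loss = F.kl_div(
                F.log_softmax(student(x) / temp, dim=-1),
                F.softmax(teacher_logits / temp, dim=-1),
                reduction="batchmean",
            ) * temp * temp               # standard distillation scaling
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```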