Can AI Moderate Online Communities?
- URL: http://arxiv.org/abs/2306.05122v1
- Date: Thu, 8 Jun 2023 11:45:44 GMT
- Title: Can AI Moderate Online Communities?
- Authors: Henrik Axelsen, Johannes Rude Jensen, Sebastian Axelsen, Valdemar
Licht, Omri Ross
- Abstract summary: We use open-access generative pre-trained transformer (GPT) models from OpenAI to train large language models (LLMs).
Our preliminary findings suggest that, when properly trained, LLMs can excel at identifying actor intentions, moderating toxic comments, and rewarding positive contributions.
We contribute to the information systems (IS) discourse with a rapid development framework for the application of generative AI to online content moderation and the management of culture in decentralized, pseudonymous communities.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The task of cultivating healthy communication in online communities becomes increasingly urgent as gaming and social media experiences become progressively more immersive and life-like. We approach the challenge of moderating online communities by training student models using a large language model (LLM). We use zero-shot learning models to distill and expand datasets, followed by a few-shot learning and fine-tuning approach, leveraging open-access generative pre-trained transformer (GPT) models from OpenAI. Our preliminary findings suggest that, when properly trained, LLMs can excel at identifying actor intentions, moderating toxic comments, and rewarding positive contributions. The student models perform above expectation on non-contextual assignments, such as identifying classically toxic behavior, and perform sufficiently on contextual assignments, such as identifying positive contributions to online discourse. Further, using open-access models like OpenAI's GPT, we experience a step-change in the development process for what has historically been a complex modeling task. We contribute to the information systems (IS) discourse with a rapid development framework for the application of generative AI to online content moderation and the management of culture in decentralized, pseudonymous communities, providing a sample model suite of industry-ready generative AI models based on open-access LLMs.
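As a rough illustration of the distill-and-fine-tune pipeline the abstract describes, the sketch below labels raw community comments zero-shot with a larger teacher GPT model, then fine-tunes a smaller student model on the distilled dataset. It assumes the OpenAI Python client; the prompts, model names, and file names are placeholder assumptions, not the authors' actual configuration.

```python
# Sketch of the zero-shot distillation -> fine-tuning pipeline described in
# the abstract. Model names, prompts, and file paths are illustrative
# assumptions, not the authors' exact setup.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def zero_shot_label(comment: str) -> str:
    """Label a raw comment zero-shot with a large teacher model."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder teacher model
        messages=[
            {"role": "system",
             "content": ("Classify the comment as 'toxic', 'neutral', or "
                         "'positive contribution'. Reply with the label only.")},
            {"role": "user", "content": comment},
        ],
    )
    return response.choices[0].message.content.strip()

# 1) Distill and expand a dataset by labeling raw community comments.
raw_comments = ["you are an idiot", "great point, thanks for sharing"]
with open("train.jsonl", "w") as f:
    for comment in raw_comments:
        f.write(json.dumps({"messages": [
            {"role": "system", "content": "You are a community moderator."},
            {"role": "user", "content": comment},
            {"role": "assistant", "content": zero_shot_label(comment)},
        ]}) + "\n")

# 2) Fine-tune a smaller student model on the distilled dataset.
training_file = client.files.create(file=open("train.jsonl", "rb"),
                                    purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=training_file.id,
                                     model="gpt-3.5-turbo")  # placeholder student
print("fine-tune job:", job.id)
```

The fine-tuned student can then be queried like any other chat model, which is what makes the rapid development loop the authors report plausible: the modeling work largely reduces to prompt design and dataset curation.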
Related papers
- Engagement-Driven Content Generation with Large Language Models [8.049552839071918]
Large Language Models (LLMs) exhibit significant persuasion capabilities in one-on-one interactions.
This study investigates the potential social impact of LLMs in settings with interconnected users and complex opinion dynamics.
arXiv Detail & Related papers (2024-11-20T10:40:08Z)
- Multi-Stage Knowledge Integration of Vision-Language Models for Continual Learning [79.46570165281084]
We propose a Multi-Stage Knowledge Integration network (MulKI) to emulate the human learning process in distillation methods.
MulKI achieves this through four stages: Eliciting Ideas, Adding New Ideas, Distinguishing Ideas, and Making Connections.
Our method demonstrates significant improvements in maintaining zero-shot capabilities while supporting continual learning across diverse downstream tasks.
arXiv Detail & Related papers (2024-11-11T07:36:19Z)
- From MOOC to MAIC: Reshaping Online Teaching and Learning through LLM-driven Agents [78.15899922698631]
MAIC (Massive AI-empowered Course) is a new form of online education that leverages LLM-driven multi-agent systems to construct an AI-augmented classroom.
We conduct preliminary experiments at Tsinghua University, one of China's leading universities.
arXiv Detail & Related papers (2024-09-05T13:22:51Z)
- Learn while Unlearn: An Iterative Unlearning Framework for Generative Language Models [49.043599241803825]
The Iterative Contrastive Unlearning (ICU) framework consists of three core components.
A Knowledge Unlearning Induction module removes specific knowledge through an unlearning loss.
A Contrastive Learning Enhancement module preserves the model's expressive capabilities against the pure unlearning objective.
An Iterative Unlearning Refinement module dynamically assesses the extent of unlearning on specific data pieces and makes iterative updates (see the sketch after this entry).
arXiv Detail & Related papers (2024-07-25T07:09:35Z)
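To make the three components above concrete, here is a minimal PyTorch sketch of one unlearning step, assuming a Hugging Face-style causal LM; the negated language-modeling loss, the KL-based preservation term, and the `lam` weight are illustrative assumptions, not ICU's exact formulation.

```python
# Minimal sketch of one iterative contrastive unlearning step, assuming a
# Hugging Face-style causal LM. The exact losses are assumptions; ICU's
# formulation is defined in the paper.
import torch
import torch.nn.functional as F

def icu_step(model, ref_model, forget_batch, retain_batch, lam=0.5):
    # Knowledge Unlearning Induction: gradient ascent on the forget data,
    # implemented here as a negated language-modeling loss.
    out_forget = model(**forget_batch, labels=forget_batch["input_ids"])
    unlearn_loss = -out_forget.loss

    # Contrastive Learning Enhancement: keep the model's distribution on
    # retain data close to a frozen reference copy, preserving expressiveness.
    with torch.no_grad():
        ref_logits = ref_model(**retain_batch).logits
    logits = model(**retain_batch).logits
    preserve_loss = F.kl_div(F.log_softmax(logits, dim=-1),
                             F.softmax(ref_logits, dim=-1),
                             reduction="batchmean")

    return unlearn_loss + lam * preserve_loss

# Iterative Unlearning Refinement would wrap this step in a loop that checks
# how well each forget example is unlearned (e.g., its remaining likelihood)
# and stops updating per example once a threshold is reached.
```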
- Rethinking Machine Unlearning for Large Language Models [85.92660644100582]
We explore machine unlearning in the domain of large language models (LLMs).
This initiative aims to eliminate undesirable data influence (e.g., sensitive or illegal information) and the associated model capabilities.
arXiv Detail & Related papers (2024-02-13T20:51:58Z)
- Social Learning: Towards Collaborative Learning with Large Language Models [10.24107243529341]
We introduce the framework of "social learning" in the context of large language models (LLMs).
We present and evaluate two approaches for knowledge transfer between LLMs.
We show that performance using these methods is comparable to results obtained with the original labels and prompts.
arXiv Detail & Related papers (2023-12-18T18:44:10Z)
- Adapting Large Language Models for Content Moderation: Pitfalls in Data Engineering and Supervised Fine-tuning [79.53130089003986]
Large Language Models (LLMs) have become a feasible solution for handling tasks in various domains.
In this paper, we introduce how to fine-tune an LLM that can be privately deployed for content moderation (see the sketch after this entry).
arXiv Detail & Related papers (2023-10-05T09:09:44Z)
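A plausible shape for the privately deployable fine-tuning recipe this entry points to, sketched with Hugging Face Transformers; the base model, dataset file, label scheme, and hyperparameters are placeholder assumptions, not the paper's setup.

```python
# Sketch: supervised fine-tuning of an open-weight classifier for content
# moderation that can run on private infrastructure. All names and
# hyperparameters are illustrative.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "distilbert-base-uncased"  # placeholder open-weight base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

# Placeholder CSV with 'text' and 'label' columns (1 = violates policy).
data = load_dataset("csv", data_files="moderation_train.csv")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                     padding="max_length"), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="moderation-model",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=data,
)
trainer.train()
trainer.save_model("moderation-model")  # weights stay on private servers
```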
- Training Socially Aligned Language Models on Simulated Social Interactions [99.39979111807388]
Social alignment in AI systems aims to ensure that these models behave according to established societal values.
Current language models (LMs) are trained to rigidly replicate their training corpus in isolation.
This work presents a novel training paradigm that permits LMs to learn from simulated social interactions.
arXiv Detail & Related papers (2023-05-26T14:17:36Z)
- Sparsity-aware neural user behavior modeling in online interaction platforms [2.4036844268502766]
We develop generalizable neural representation learning frameworks for user behavior modeling.
Our problem settings span transductive and inductive learning scenarios.
We leverage different facets of information reflecting user behavior to enable personalized inference at scale.
arXiv Detail & Related papers (2022-02-28T00:27:11Z)
- Ex-Model: Continual Learning from a Stream of Trained Models [12.27992745065497]
We argue that continual learning systems should exploit the availability of compressed information in the form of trained models.
We introduce and formalize a new paradigm named "Ex-Model Continual Learning" (ExML), where an agent learns from a sequence of previously trained models instead of raw data.
arXiv Detail & Related papers (2021-12-13T09:46:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.