Large Language Models Help Reveal Unhealthy Diet and Body Concerns in Online Eating Disorders Communities
- URL: http://arxiv.org/abs/2401.09647v2
- Date: Thu, 23 May 2024 05:12:07 GMT
- Title: Large Language Models Help Reveal Unhealthy Diet and Body Concerns in Online Eating Disorders Communities
- Authors: Minh Duc Chu, Zihao He, Rebecca Dorn, Kristina Lerman
- Abstract summary: Eating disorders (ED) affect millions of people globally, especially adolescents.
The proliferation of online communities that promote and normalize ED has been linked to this public health crisis.
We propose a novel framework to surface implicit attitudes of online communities by adapting large language models to the language of the community.
- Score: 5.392300313326522
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Eating disorders (ED), a severe mental health condition with high rates of mortality and morbidity, affect millions of people globally, especially adolescents. The proliferation of online communities that promote and normalize ED has been linked to this public health crisis. However, identifying harmful communities is challenging due to the use of coded language and other obfuscations. To address this challenge, we propose a novel framework to surface implicit attitudes of online communities by adapting large language models (LLMs) to the language of the community. We describe an alignment method and evaluate results along multiple dimensions of semantics and affect. We then use the community-aligned LLM to respond to psychometric questionnaires designed to identify ED in individuals. We demonstrate that LLMs can effectively adopt community-specific perspectives and reveal significant variations in eating disorder risks in different online communities. These findings highlight the utility of LLMs to reveal implicit attitudes and collective mindsets of communities, offering new tools for mitigating harmful content on social media.
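To make the framework in the abstract concrete, the sketch below shows one plausible reading of its second stage: a causal language model, assumed already fine-tuned on a community's posts, answers questionnaire-style items by scoring each response option under the model. The checkpoint name, the EAT-26-style items, and the Likert choices are illustrative placeholders; the paper's actual alignment recipe and psychometric instruments are not reproduced here.
```python
# Minimal sketch: administer questionnaire items to a community-aligned LM by
# scoring each answer option under the model. All names below are hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "community-aligned-lm"  # hypothetical checkpoint, assumed fine-tuned on community posts
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR)
model.eval()

# Illustrative EAT-26-style items and Likert options (not the paper's instrument).
ITEMS = [
    "I am terrified about being overweight.",
    "I feel extremely guilty after eating.",
]
CHOICES = ["Never", "Rarely", "Sometimes", "Often", "Usually", "Always"]

def choice_logprob(prompt: str, choice: str) -> float:
    """Sum of token log-probabilities of `choice` continuing `prompt`."""
    enc = tokenizer(prompt + " " + choice, return_tensors="pt")
    prefix_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    with torch.no_grad():
        logits = model(**enc).logits       # (1, seq_len, vocab)
    log_probs = logits.log_softmax(dim=-1)
    ids = enc.input_ids[0]
    # The token at position t is predicted by the logits at position t - 1.
    return sum(log_probs[0, t - 1, ids[t]].item() for t in range(prefix_len, ids.shape[0]))

for item in ITEMS:
    prompt = f'Statement: "{item}"\nHow often is this true for you? Answer:'
    best = max(CHOICES, key=lambda c: choice_logprob(prompt, c))
    print(f"{item} -> {best}")
```
Comparing the options such a model selects across checkpoints aligned to different communities is one way the "significant variations in eating disorder risks" described in the abstract could be surfaced.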
Related papers
- COMMUNITY-CROSS-INSTRUCT: Unsupervised Instruction Generation for Aligning Large Language Models to Online Communities [5.0261645603931475]
Community-Cross-Instruct is an unsupervised framework for aligning large language models to online communities.
It generates instructions in a fully unsupervised manner, enhancing scalability and generalization across domains.
This work enables cost-effective and automated surveying of diverse online communities.
arXiv Detail & Related papers (2024-06-17T20:20:47Z)
- Leveraging Prompt-Based Large Language Models: Predicting Pandemic Health Decisions and Outcomes Through Social Media Language [6.3576870613251675]
We use prompt-based LLMs to examine the relationship between social media language patterns and trends in national health outcomes.
Our work is the first to empirically link social media linguistic patterns to real-world public health trends.
arXiv Detail & Related papers (2024-03-01T21:29:32Z)
- The Wolf Within: Covert Injection of Malice into MLLM Societies via an MLLM Operative [55.08395463562242]
Multimodal Large Language Models (MLLMs) are continually redefining the boundary of Artificial General Intelligence (AGI).
Our paper explores a novel vulnerability in MLLM societies - the indirect propagation of malicious content.
arXiv Detail & Related papers (2024-02-20T23:08:21Z)
- Large Language Model for Mental Health: A Systematic Review [2.9429776664692526]
Large language models (LLMs) have attracted significant attention for potential applications in digital health.
This systematic review focuses on their strengths and limitations in early screening, digital interventions, and clinical applications.
arXiv Detail & Related papers (2024-02-19T17:58:41Z)
- Challenges of Large Language Models for Mental Health Counseling [4.604003661048267]
A global mental health crisis is looming, driven by a rapid increase in mental disorders, limited treatment resources, and the social stigma of seeking treatment.
The application of large language models (LLMs) in the mental health domain raises concerns regarding the accuracy, effectiveness, and reliability of the information provided.
This paper investigates the major challenges associated with the development of LLMs for psychological counseling, including model hallucination, interpretability, bias, privacy, and clinical effectiveness.
arXiv Detail & Related papers (2023-11-23T08:56:41Z)
- Towards Mitigating Hallucination in Large Language Models via Self-Reflection [63.2543947174318]
Large language models (LLMs) have shown promise for generative and knowledge-intensive tasks including question-answering (QA) tasks.
This paper analyses the phenomenon of hallucination in medical generative QA systems using widely adopted LLMs and datasets.
arXiv Detail & Related papers (2023-10-10T03:05:44Z)
- Factuality Challenges in the Era of Large Language Models [113.3282633305118]
Large Language Models (LLMs) can generate false, erroneous, or misleading content.
LLMs can also be exploited for malicious applications.
This poses a significant societal challenge through the potential deception of users.
arXiv Detail & Related papers (2023-10-08T14:55:02Z)
- Redefining Digital Health Interfaces with Large Language Models [69.02059202720073]
Large Language Models (LLMs) have emerged as general-purpose models with the ability to process complex information.
We show how LLMs can provide a novel interface between clinicians and digital technologies.
We develop a new prognostic tool using automated machine learning.
arXiv Detail & Related papers (2023-10-05T14:18:40Z)
- Countering Malicious Content Moderation Evasion in Online Social Networks: Simulation and Detection of Word Camouflage [64.78260098263489]
Twisting and camouflaging keywords are among the most used techniques to evade platform content moderation systems.
This article contributes to countering malicious information by developing multilingual tools to simulate and detect new content moderation evasion techniques (a toy simulation sketch follows this list).
arXiv Detail & Related papers (2022-12-27T16:08:49Z)
- Assessing the Severity of Health States based on Social Media Posts [62.52087340582502]
We propose a multiview learning framework that models both textual content and contextual information to assess the severity of a user's health state.
The diverse NLU views prove effective both on these tasks and on individual diseases when assessing a user's health.
arXiv Detail & Related papers (2020-09-21T03:45:14Z)
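As a small illustration of the word camouflage discussed in the Countering Malicious Content Moderation Evasion entry above, this sketch simulates leetspeak-style obfuscation of moderated keywords and recovers them by normalizing the substitutions before matching. The substitution table and blocklist are toy examples, not the multilingual tooling that the cited paper develops.
```python
# Toy sketch of word camouflage: obfuscate keywords with digit-for-letter
# substitutions, then detect them by normalizing text before matching.
LEET = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5"}
UNLEET = {v: k for k, v in LEET.items()}

def camouflage(word: str) -> str:
    """Replace vulnerable characters with visually similar digits."""
    return "".join(LEET.get(ch, ch) for ch in word.lower())

def normalize(text: str) -> str:
    """Undo common digit-for-letter substitutions before keyword matching."""
    return "".join(UNLEET.get(ch, ch) for ch in text.lower())

blocked = {"thinspo", "proana"}  # illustrative blocklist
post = f"new {camouflage('thinspo')} thread"   # "new th1n5p0 thread"
hits = [w for w in blocked if w in normalize(post)]
print(hits)  # ['thinspo']
```
Real-world evasion is far more varied (homoglyphs, inserted spacing, paraphrase), which is why the cited work trains dedicated multilingual simulation and detection tools rather than relying on fixed substitution tables.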