Large Language Models Help Reveal Unhealthy Diet and Body Concerns in Online Eating Disorders Communities
- URL: http://arxiv.org/abs/2401.09647v2
- Date: Thu, 23 May 2024 05:12:07 GMT
- Title: Large Language Models Help Reveal Unhealthy Diet and Body Concerns in Online Eating Disorders Communities
- Authors: Minh Duc Chu, Zihao He, Rebecca Dorn, Kristina Lerman
- Abstract summary: Eating disorders (ED) affect millions of people globally, especially adolescents.
The proliferation of online communities that promote and normalize ED has been linked to this public health crisis.
We propose a novel framework to surface implicit attitudes of online communities by adapting large language models to the language of the community.
- Score: 5.392300313326522
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Eating disorders (ED), a severe mental health condition with high rates of mortality and morbidity, affect millions of people globally, especially adolescents. The proliferation of online communities that promote and normalize ED has been linked to this public health crisis. However, identifying harmful communities is challenging due to the use of coded language and other obfuscations. To address this challenge, we propose a novel framework to surface implicit attitudes of online communities by adapting large language models (LLMs) to the language of the community. We describe an alignment method and evaluate results along multiple dimensions of semantics and affect. We then use the community-aligned LLM to respond to psychometric questionnaires designed to identify ED in individuals. We demonstrate that LLMs can effectively adopt community-specific perspectives and reveal significant variations in eating disorder risks in different online communities. These findings highlight the utility of LLMs to reveal implicit attitudes and collective mindsets of communities, offering new tools for mitigating harmful content on social media.
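The abstract outlines a two-step approach: align an LLM to a community's language, then have the aligned model answer a standardized eating-disorder screening questionnaire. Below is a minimal sketch of the second step, assuming a Hugging Face text-generation pipeline; the checkpoint name, the paraphrased screening items, and the scoring map are illustrative placeholders, not the authors' released code or the actual instrument.

```python
# Minimal sketch of the questionnaire step described in the abstract.
# Assumes an LLM has already been adapted to a community's language
# (e.g., fine-tuned on its posts); everything named here is a placeholder.
from transformers import pipeline

# Hypothetical community-aligned checkpoint; substitute the adapted model.
generator = pipeline("text-generation", model="community-aligned-llm")

# Paraphrased, EAT-26-style screening statements (illustrative only).
items = [
    "I am terrified about being overweight.",
    "I feel extremely guilty after eating.",
    "I am preoccupied with a desire to be thinner.",
]

# Likert options; the three most symptomatic responses score 3/2/1 and the
# rest 0, mirroring how EAT-26-style instruments are typically scored.
options = ["always", "usually", "often", "sometimes", "rarely", "never"]
scores = {"always": 3, "usually": 2, "often": 1,
          "sometimes": 0, "rarely": 0, "never": 0}

total = 0
for item in items:
    prompt = (
        f'Statement: "{item}"\n'
        f'How often is this true for you? '
        f'Answer with one of: {", ".join(options)}.\nAnswer:'
    )
    reply = generator(prompt, max_new_tokens=5, do_sample=False,
                      return_full_text=False)[0]["generated_text"].lower()
    # Map the free-text reply back to a Likert option (default to "never").
    answer = next((o for o in options if o in reply), "never")
    total += scores[answer]

print(f"Screening score elicited from the community-aligned model: {total}")
```

Comparing the scores elicited from models aligned to different communities is what, per the abstract, surfaces the variation in eating disorder risk across those communities.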
Related papers
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z) - "It's a conversation, not a quiz": A Risk Taxonomy and Reflection Tool for LLM Adoption in Public Health [16.418366314356184]
We conduct focus groups with health professionals and health issue experiencers to unpack their concerns.
We synthesize participants' perspectives into a risk taxonomy.
This taxonomy highlights four dimensions of risk: individual behaviors, human-centered care, the information ecosystem, and technology accountability.
arXiv Detail & Related papers (2024-11-04T20:35:10Z) - Negation Blindness in Large Language Models: Unveiling the NO Syndrome in Image Generation [63.064204206220936]
Foundational Large Language Models (LLMs) have changed the way we perceive technology.
They have been shown to excel in tasks ranging from poem writing to coding to essay generation and puzzle solving.
With the incorporation of image generation capability, they have become more comprehensive and versatile AI tools.
Currently identified flaws include hallucination, biases, and bypassing restricted commands to generate harmful content.
arXiv Detail & Related papers (2024-08-27T14:40:16Z) - Improving and Assessing the Fidelity of Large Language Models Alignment to Online Communities [5.392300313326522]
Large language models (LLMs) have shown promise in representing individuals and communities.
This paper presents a framework for aligning LLMs with online communities via instruction-tuning.
We demonstrate the utility of our approach by applying it to online communities centered on dieting and body image.
arXiv Detail & Related papers (2024-08-18T05:41:36Z) - Leveraging Prompt-Based Large Language Models: Predicting Pandemic Health Decisions and Outcomes Through Social Media Language [6.3576870613251675]
We use prompt-based LLMs to examine the relationship between social media language patterns and trends in national health outcomes.
Our work is the first to empirically link social media linguistic patterns to real-world public health trends.
arXiv Detail & Related papers (2024-03-01T21:29:32Z) - The Wolf Within: Covert Injection of Malice into MLLM Societies via an MLLM Operative [55.08395463562242]
Multimodal Large Language Models (MLLMs) are continually redefining the boundary of Artificial General Intelligence (AGI).
Our paper explores a novel vulnerability in MLLM societies - the indirect propagation of malicious content.
arXiv Detail & Related papers (2024-02-20T23:08:21Z) - Large Language Model for Mental Health: A Systematic Review [2.9429776664692526]
Large language models (LLMs) have attracted significant attention for potential applications in digital health.
This systematic review focuses on their strengths and limitations in early screening, digital interventions, and clinical applications.
arXiv Detail & Related papers (2024-02-19T17:58:41Z) - Factuality Challenges in the Era of Large Language Models [113.3282633305118]
Large Language Models (LLMs) can generate false, erroneous, or misleading content.
LLMs can be exploited for malicious applications.
This poses a significant challenge to society in terms of the potential deception of users.
arXiv Detail & Related papers (2023-10-08T14:55:02Z) - Countering Malicious Content Moderation Evasion in Online Social Networks: Simulation and Detection of Word Camouflage [64.78260098263489]
Twisting and camouflaging keywords are among the most used techniques to evade platform content moderation systems.
This article contributes significantly to countering malicious information by developing multilingual tools to simulate and detect new methods of content moderation evasion.
arXiv Detail & Related papers (2022-12-27T16:08:49Z) - Assessing the Severity of Health States based on Social Media Posts [62.52087340582502]
We propose a multiview learning framework that models both the textual content and the contextual information to assess the severity of a user's health state.
The diverse NLU views demonstrate the framework's effectiveness on both tasks, as well as on individual diseases, in assessing a user's health.
arXiv Detail & Related papers (2020-09-21T03:45:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.