Socially Aware Synthetic Data Generation for Suicidal Ideation Detection
Using Large Language Models
- URL: http://arxiv.org/abs/2402.01712v1
- Date: Thu, 25 Jan 2024 18:25:05 GMT
- Title: Socially Aware Synthetic Data Generation for Suicidal Ideation Detection
Using Large Language Models
- Authors: Hamideh Ghanadian, Isar Nejadgholi, Hussein Al Osman
- Abstract summary: We introduce an innovative strategy that leverages the capabilities of generative AI models to create synthetic data for suicidal ideation detection.
We benchmarked against state-of-the-art NLP classification models, specifically those built on BERT-family architectures.
Our synthetic data-driven method, informed by social factors, yields a consistent F1-score of 0.82 across both benchmarked models.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Suicidal ideation detection is a vital research area that holds great
potential for improving mental health support systems. However, the sensitivity
surrounding suicide-related data poses challenges in accessing large-scale,
annotated datasets necessary for training effective machine learning models. To
address this limitation, we introduce an innovative strategy that leverages the
capabilities of generative AI models, such as ChatGPT, Flan-T5, and Llama, to
create synthetic data for suicidal ideation detection. Our data generation
approach is grounded in social factors extracted from psychology literature and
aims to ensure coverage of essential information related to suicidal ideation.
In our study, we benchmarked against state-of-the-art NLP classification
models, specifically those built on BERT-family architectures. When
trained on the real-world dataset, UMD, these conventional models tend to yield
F1-scores ranging from 0.75 to 0.87. Our synthetic data-driven method, informed
by social factors, offers consistent F1-scores of 0.82 for both models,
suggesting that the richness of topics in synthetic data can bridge the
performance gap across different model complexities. Most impressively, when we
combined a mere 30% of the UMD dataset with our synthetic data, we witnessed a
substantial increase in performance, achieving an F1-score of 0.88 on the UMD
test set. Such results underscore the cost-effectiveness and potential of our
approach in confronting major challenges in the field, such as data scarcity
and the quest for diversity in data representation.
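The paper itself ships no code, but the pipeline the abstract describes is straightforward to sketch. The fragment below is a minimal illustration, assuming a Flan-T5 generator (one of the models named above), an invented list of social factors, and an invented prompt template; none of this is the authors' actual implementation, and a placeholder stands in for the UMD dataset.

```python
# Minimal sketch of social-factor-conditioned synthetic data generation.
# The factor list and prompt wording are illustrative assumptions, not
# the authors' actual prompts.
import random
from transformers import pipeline

# Flan-T5 is one of the generators named in the abstract; any
# instruction-following model could be substituted here.
generator = pipeline("text2text-generation", model="google/flan-t5-base")

# Hypothetical social factors of the kind the paper says it extracts
# from the psychology literature.
SOCIAL_FACTORS = ["social isolation", "job loss", "family conflict",
                  "bullying", "chronic illness"]

PROMPT = ("Write a short social media post by a person experiencing "
          "{factor}. The post {polarity} express suicidal ideation.")

def generate_synthetic(n_per_class):
    """Generate a balanced synthetic corpus of (text, label) pairs."""
    data = []
    for label, polarity in [(1, "should"), (0, "should not")]:
        for _ in range(n_per_class):
            factor = random.choice(SOCIAL_FACTORS)
            prompt = PROMPT.format(factor=factor, polarity=polarity)
            out = generator(prompt, max_new_tokens=64, do_sample=True)
            data.append((out[0]["generated_text"], label))
    return data

def mix_with_real(real_data, synthetic_data, real_fraction=0.3):
    """Combine a slice of real annotated data with synthetic data; the
    0.3 default mirrors the 30% UMD fraction reported in the abstract."""
    subset = random.sample(real_data, int(len(real_data) * real_fraction))
    return subset + synthetic_data
```

A BERT-family classifier fine-tuned on the output of mix_with_real and evaluated with F1 on a held-out real test set would complete the pipeline the abstract evaluates.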
Related papers
- Exploring the Landscape for Generative Sequence Models for Specialized Data Synthesis
This paper introduces a novel approach that leverages three generative models of varying complexity to synthesize malicious network traffic.
Our approach transforms numerical data into text, re-framing data generation as a language modeling task.
Our method surpasses state-of-the-art generative models in producing high-fidelity synthetic data.
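As a rough illustration of that re-framing, the serialization step might look like the minimal sketch below; the field names and the key=value encoding are assumptions, not the paper's actual scheme.

```python
# Sketch: serialize numerical network-flow records as text so a language
# model can be trained to generate them token by token. Field names and
# the key=value encoding are invented for illustration.
def flow_to_text(flow):
    # One record becomes one "sentence" for the language model.
    return " ".join(f"{key}={value}" for key, value in sorted(flow.items()))

def text_to_flow(line):
    # Inverse mapping: decode sampled text back into a numeric record.
    pairs = (token.split("=", 1) for token in line.split())
    return {key: float(value) for key, value in pairs}

record = {"duration": 0.37, "src_bytes": 491, "dst_bytes": 0, "count": 2}
line = flow_to_text(record)  # "count=2 dst_bytes=0 duration=0.37 src_bytes=491"
assert text_to_flow(line) == {k: float(v) for k, v in record.items()}
```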
arXiv Detail & Related papers (2024-11-04T09:51:10Z)
- Towards Effective and Efficient Continual Pre-training of Large Language Models
Continual pre-training (CPT) has been an important approach for adapting language models to specific domains or tasks.
This paper presents a technical report on continually pre-training Llama-3 (8B), significantly enhancing the backbone model's Chinese-language and scientific-reasoning abilities.
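As a generic illustration of CPT rather than the paper's recipe, the sketch below resumes causal-LM training of a pretrained backbone on new in-domain text; the corpus and hyperparameters are placeholders, and the Llama-3 checkpoint (a gated model) appears only by name.

```python
# Generic continual pre-training sketch: keep training an already
# pretrained causal LM on fresh domain text. Not the paper's recipe;
# corpus and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "meta-llama/Meta-Llama-3-8B"  # the backbone named in the entry
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Any in-domain corpus would do; wikitext merely stands in here.
corpus = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
tokenized = (corpus.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                             max_length=1024),
                        batched=True, remove_columns=["text"])
                   .filter(lambda ex: len(ex["input_ids"]) > 0))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="cpt-out", num_train_epochs=1,
                           per_device_train_batch_size=1, learning_rate=1e-5),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```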
arXiv Detail & Related papers (2024-07-26T13:55:21Z)
- Unveiling the Flaws: Exploring Imperfections in Synthetic Data and Mitigation Strategies for Large Language Models
Synthetic data has been proposed as a solution to address the issue of high-quality data scarcity in the training of large language models (LLMs).
Our work delves into these specific flaws associated with question-answer (Q-A) pairs, a prevalent type of synthetic data, and presents a method based on unlearning techniques to mitigate these flaws.
Our work has yielded key insights into the effective use of synthetic data, aiming to promote more robust and efficient LLM training.
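The entry does not spell out the procedure; one common unlearning recipe is gradient ascent on the unwanted examples, sketched below as a generic illustration rather than the paper's exact method.

```python
# Generic unlearning sketch: a gradient-ascent step on flawed synthetic
# Q-A pairs, so the model moves *away* from patterns it learned from
# them. One standard recipe, not necessarily the paper's.
import torch

def unlearn_step(model, batch, optimizer, ascent_scale=1.0):
    """One update that increases the LM loss on the flawed batch."""
    model.train()
    outputs = model(**batch)  # batch holds input_ids, attention_mask, labels
    loss = -ascent_scale * outputs.loss  # descending on -loss ascends the loss
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # bound each step
    optimizer.step()
    return outputs.loss.item()
```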
arXiv Detail & Related papers (2024-06-18T08:38:59Z)
- Best Practices and Lessons Learned on Synthetic Data
The success of AI models relies on the availability of large, diverse, and high-quality datasets.
Synthetic data has emerged as a promising solution by generating artificial data that mimics real-world patterns.
arXiv Detail & Related papers (2024-04-11T06:34:17Z)
- FLIGAN: Enhancing Federated Learning with Incomplete Data using GAN
Federated Learning (FL) provides a privacy-preserving mechanism for distributed training of machine learning models on networked devices.
We propose FLIGAN, a novel approach to address the issue of data incompleteness in FL.
Our methodology adheres to FL's privacy requirements by generating synthetic data in a federated manner without sharing the actual data in the process.
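FLIGAN's own implementation is not reproduced here; the sketch below shows one plausible shape of federated GAN training (local updates plus FedAvg weight averaging) that matches the entry's claim that only model weights, never raw records, leave the clients. The generator architecture, the aggregation rule, and the train_gan_locally helper are assumptions.

```python
# Federated GAN sketch in the spirit of FLIGAN: clients train local GANs
# on private tabular data; the server averages weights (FedAvg). Only
# parameters cross the network. Architectures are illustrative.
import copy
import torch
import torch.nn as nn

def make_generator(noise_dim=16, n_features=8):
    return nn.Sequential(nn.Linear(noise_dim, 64), nn.ReLU(),
                         nn.Linear(64, n_features))

def fedavg(state_dicts):
    """Average corresponding tensors across client state dicts."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
    return avg

def federated_round(global_gen, client_datasets, local_steps=100):
    client_states = []
    for data in client_datasets:
        local_gen = copy.deepcopy(global_gen)
        # train_gan_locally is a hypothetical helper: the usual adversarial
        # loop against a local discriminator, elided here for brevity.
        train_gan_locally(local_gen, data, steps=local_steps)
        client_states.append(local_gen.state_dict())
    global_gen.load_state_dict(fedavg(client_states))  # weights only
    return global_gen
```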
arXiv Detail & Related papers (2024-03-25T16:49:38Z)
- Reimagining Synthetic Tabular Data Generation through Data-Centric AI: A Comprehensive Benchmark
Synthetic data serves as an alternative for training machine learning models. However, ensuring that synthetic data mirrors the complex nuances of real-world data is a challenging task.
This paper explores the potential of integrating data-centric AI techniques to guide the synthetic data generation process.
arXiv Detail & Related papers (2023-10-25T20:32:02Z)
- Synthetic Data Generation with Large Language Models for Text Classification: Potential and Limitations
We study how the performance of models trained on synthetic data may vary with the subjectivity of the classification task.
Our results indicate that subjectivity, at both the task level and instance level, is negatively associated with the performance of the model trained on synthetic data.
arXiv Detail & Related papers (2023-10-11T19:51:13Z)
- Does Synthetic Data Make Large Language Models More Efficient?
This paper explores the nuances of synthetic data generation in NLP.
We highlight its advantages, including data augmentation potential and the introduction of structured variety.
We demonstrate the impact of template-based synthetic data on the performance of modern transformer models.
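A minimal example of what template-based synthetic data can mean in practice; the templates, slots, and labels below are invented for illustration.

```python
# Template-based synthetic data sketch: slot-filling templates expanded
# into labeled examples. Templates, slots, and labels are invented.
import itertools

TEMPLATES = [
    ("What is the {attr} of the {entity}?", "attribute_query"),
    ("List every {entity} with {attr} above {value}.", "filter_query"),
]
SLOTS = {
    "attr": ["price", "rating"],
    "entity": ["hotel", "restaurant"],
    "value": ["4", "100"],
}

def expand(template, label):
    names = [n for n in SLOTS if "{" + n + "}" in template]
    for combo in itertools.product(*(SLOTS[n] for n in names)):
        yield template.format(**dict(zip(names, combo))), label

dataset = [pair for tmpl, lbl in TEMPLATES for pair in expand(tmpl, lbl)]
# e.g. ("What is the price of the hotel?", "attribute_query")
```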
arXiv Detail & Related papers (2023-10-11T19:16:09Z)
- Synthetic data, real errors: how (not) to publish and use synthetic data
We show how the generative process affects the downstream ML task.
We introduce Deep Generative Ensemble (DGE) to approximate the posterior distribution over the generative process model parameters.
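A compact sketch of the DGE idea as the entry describes it: several generators trained under different seeds, one downstream model per synthetic dataset, and the ensemble spread read as generative-process uncertainty. The train_generator and train_classifier helpers are hypothetical.

```python
# Deep Generative Ensemble (DGE) sketch: K generative models with
# different seeds approximate the posterior over generative-process
# parameters; downstream predictions are averaged across the ensemble.
import numpy as np

def dge_predict(real_data, test_X, K=5):
    ensemble_preds = []
    for seed in range(K):
        gen = train_generator(real_data, seed=seed)   # hypothetical helper
        synthetic = gen.sample(len(real_data))
        clf = train_classifier(synthetic, seed=seed)  # hypothetical helper
        ensemble_preds.append(clf.predict_proba(test_X)[:, 1])
    preds = np.stack(ensemble_preds)
    # The mean approximates a posterior-averaged prediction; the spread
    # is a cheap readout of uncertainty due to the generative process.
    return preds.mean(axis=0), preds.std(axis=0)
```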
arXiv Detail & Related papers (2023-05-16T07:30:29Z)
- Bootstrapping Your Own Positive Sample: Contrastive Learning With Electronic Health Record Data
This paper proposes a novel contrastive regularized clinical classification model.
We introduce two unique positive sampling strategies specifically tailored for EHR data.
Our framework yields highly competitive experimental results in predicting the mortality risk on real-world COVID-19 EHR data.
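A hedged sketch of what a tailored positive-sampling strategy can look like: an InfoNCE-style loss in which records sharing an outcome label act as extra positives. The paper's actual sampling rules and encoder may differ.

```python
# Supervised-contrastive sketch: EHR records with the same outcome label
# are treated as positives for one another. The loss below is a standard
# supervised InfoNCE variant, not necessarily the paper's exact objective.
import torch
import torch.nn.functional as F

def supervised_nt_xent(embeddings, labels, temperature=0.1):
    """InfoNCE-style loss where same-label records are positives."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.T / temperature                       # pairwise similarities
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    # log-softmax over all other samples, then average over the positives
    denom = torch.logsumexp(sim.masked_fill(eye, float("-inf")),
                            dim=1, keepdim=True)
    log_prob = sim - denom
    pos_counts = pos_mask.sum(1).clamp(min=1)
    return -(log_prob * pos_mask).sum(1).div(pos_counts).mean()
```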
arXiv Detail & Related papers (2021-04-07T06:02:04Z)
- Handling Non-ignorably Missing Features in Electronic Health Records Data Using Importance-Weighted Autoencoders
We propose a novel extension of VAEs called Importance-Weighted Autoencoders (IWAEs) to flexibly handle Missing Not At Random patterns in the Physionet data.
Our proposed method models the missingness mechanism using an embedded neural network, eliminating the need to specify the exact form of the missingness mechanism a priori.
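A minimal sketch of an importance-weighted bound with a learned missingness head, matching the entry only at a high level: a VAE models the features, a small network models p(m|x), and K importance samples tighten the bound. Architectures, the unit-variance likelihood, and dropped constants are all illustrative choices.

```python
# IWAE-with-missingness sketch: importance-weighted ELBO for a VAE whose
# decoder output also feeds a network modeling the float missingness mask
# m (1.0 = observed). Densities are written up to additive constants.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class MNARIWAE(nn.Module):
    def __init__(self, d=10, z_dim=4):
        super().__init__()
        self.enc = nn.Linear(d, 2 * z_dim)  # q(z | x_obs) mean and log-var
        self.dec = nn.Linear(z_dim, d)      # mean of p(x | z), unit variance
        self.miss = nn.Linear(d, d)         # logits of p(m | x)

    def bound(self, x, m, K=5):
        """Importance-weighted bound; missing entries of x are zero-imputed."""
        mu, logvar = self.enc(x * m).chunk(2, dim=-1)
        std = (0.5 * logvar).exp()
        eps = torch.randn(K, *mu.shape, device=x.device)
        zs = mu + std * eps                                   # (K, B, z)
        x_hat = self.dec(zs)                                  # (K, B, d)
        # log p(x_obs | z): unit-variance Gaussian, observed dims only
        log_px = (-0.5 * (x - x_hat) ** 2 * m).sum(-1)
        # log p(m | x): Bernoulli head models the missingness mechanism
        log_pm = -F.binary_cross_entropy_with_logits(
            self.miss(x_hat), m.expand_as(x_hat), reduction="none").sum(-1)
        log_pz = (-0.5 * zs ** 2).sum(-1)
        log_qz = (-0.5 * eps ** 2 - 0.5 * logvar).sum(-1)
        log_w = log_px + log_pm + log_pz - log_qz             # (K, B)
        return (torch.logsumexp(log_w, dim=0) - math.log(K)).mean()
```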
arXiv Detail & Related papers (2021-01-18T22:53:29Z)
This list is automatically generated from the titles and abstracts of the papers listed on this site.