LLM vs Small Model? Large Language Model Based Text Augmentation Enhanced
Personality Detection Model
- URL: http://arxiv.org/abs/2403.07581v1
- Date: Tue, 12 Mar 2024 12:10:18 GMT
- Title: LLM vs Small Model? Large Language Model Based Text Augmentation Enhanced
Personality Detection Model
- Authors: Linmei Hu, Hongyu He, Duokang Wang, Ziwang Zhao, Yingxia Shao, Liqiang
Nie
- Abstract summary: Personality detection aims to detect one's personality traits underlying social media posts.
Most existing methods learn post features directly by fine-tuning the pre-trained language models.
We propose a large language model (LLM) based text augmentation enhanced personality detection model.
- Score: 58.887561071010985
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Personality detection aims to detect one's personality traits underlying
social media posts. One challenge of this task is the scarcity of ground-truth
personality traits which are collected from self-report questionnaires. Most
existing methods learn post features directly by fine-tuning the pre-trained
language models under the supervision of limited personality labels. This leads
to inferior quality of post features and consequently affects the performance.
In addition, they treat personality traits as one-hot classification labels,
overlooking the semantic information within them. In this paper, we propose a
large language model (LLM) based text augmentation enhanced personality
detection model, which distills the LLM's knowledge to enhance the small model
for personality detection, even when the LLM fails in this task. Specifically,
we enable the LLM to generate post analyses (augmentations) covering the semantic,
sentiment, and linguistic aspects, all of which are critical for personality
detection. By using contrastive learning to pull them together in the embedding
space, the post encoder can better capture the psycho-linguistic information
within the post representations, thus improving personality detection.
Furthermore, we utilize the LLM to enrich the semantic information of the personality
labels, further enhancing detection performance. Experimental results on the benchmark
datasets demonstrate that our model outperforms the state-of-the-art methods on
personality detection.
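The abstract does not specify the exact contrastive objective. As a rough sketch under assumed details (a batch-wise InfoNCE-style loss, cosine similarity, and a temperature of 0.1 are all illustrative choices, not the authors' formulation), pulling each post toward its LLM-generated analysis in the embedding space might look like:

```python
import numpy as np

def contrastive_loss(post_emb, aug_emb, temperature=0.1):
    """InfoNCE-style contrastive loss: each post embedding is pulled toward
    its own LLM-generated augmentation (the diagonal positive pair) and
    pushed away from the other augmentations in the batch."""
    # L2-normalize so the dot product equals cosine similarity
    post = post_emb / np.linalg.norm(post_emb, axis=1, keepdims=True)
    aug = aug_emb / np.linalg.norm(aug_emb, axis=1, keepdims=True)
    logits = post @ aug.T / temperature            # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # cross-entropy on the diagonal

# Toy batch: 4 posts with 16-dim encoder outputs. Augmentations close to
# their posts should give a much lower loss than unrelated vectors.
rng = np.random.default_rng(0)
posts = rng.standard_normal((4, 16))
matched = contrastive_loss(posts, posts + 0.01 * rng.standard_normal((4, 16)))
unmatched = contrastive_loss(posts, rng.standard_normal((4, 16)))
```

Minimizing such a loss encourages the post encoder to embed a post near the psycho-linguistic analysis of that same post, which is the intuition behind the augmentation-based training described above.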
Related papers
- Orca: Enhancing Role-Playing Abilities of Large Language Models by Integrating Personality Traits [4.092862870428798]
We propose Orca, a framework for data processing and for training custom-character LLMs by integrating personality traits.
Orca comprises four stages, beginning with personality trait inference, in which LLMs are leveraged to infer users' Big Five personality trait reports and scores.
Our experiments demonstrate that our proposed model achieves superior performance on this benchmark.
arXiv Detail & Related papers (2024-11-15T07:35:47Z)
- Neuron-based Personality Trait Induction in Large Language Models [115.08894603023712]
Large language models (LLMs) have become increasingly proficient at simulating various personality traits.
We present a neuron-based approach for personality trait induction in LLMs.
arXiv Detail & Related papers (2024-10-16T07:47:45Z)
- Humanity in AI: Detecting the Personality of Large Language Models [0.0]
Questionnaires are a common method for detecting the personality of Large Language Models (LLMs).
We propose combining text mining with the questionnaire method.
We find that the personalities of LLMs are derived from their pre-trained data.
arXiv Detail & Related papers (2024-10-11T05:53:11Z)
- Characterizing Truthfulness in Large Language Model Generations with Local Intrinsic Dimension [63.330262740414646]
We study how to characterize and predict the truthfulness of texts generated from large language models (LLMs).
We suggest investigating internal activations and quantifying an LLM's truthfulness using the local intrinsic dimension (LID) of model activations.
arXiv Detail & Related papers (2024-02-28T04:56:21Z)
- Eliciting Personality Traits in Large Language Models [0.0]
Large Language Models (LLMs) are increasingly being utilized by both candidates and employers in the recruitment context.
This study seeks to obtain a better understanding of such models by examining their output variations based on different input prompts.
arXiv Detail & Related papers (2024-02-13T10:09:00Z)
- PsyCoT: Psychological Questionnaire as Powerful Chain-of-Thought for Personality Detection [50.66968526809069]
We propose a novel personality detection method, called PsyCoT, which mimics the way individuals complete psychological questionnaires through a multi-turn dialogue.
Our experiments demonstrate that PsyCoT significantly improves the performance and robustness of GPT-3.5 in personality detection.
arXiv Detail & Related papers (2023-10-31T08:23:33Z)
- Improving Input-label Mapping with Demonstration Replay for In-context Learning [67.57288926736923]
In-context learning (ICL) is an emerging capability of large autoregressive language models.
We propose a novel ICL method called Sliding Causal Attention (RdSca).
We show that our method significantly improves the input-label mapping in ICL demonstrations.
arXiv Detail & Related papers (2023-10-30T14:29:41Z)
- Editing Personality for Large Language Models [73.59001811199823]
This paper introduces an innovative task focused on editing the personality traits of Large Language Models (LLMs).
We construct PersonalityEdit, a new benchmark dataset to address this task.
arXiv Detail & Related papers (2023-10-03T16:02:36Z)
- Personality Trait Detection Using Bagged SVM over BERT Word Embedding Ensembles [10.425280599592865]
We present a novel deep learning-based approach for automated personality detection from text.
We leverage state-of-the-art advances in natural language understanding, namely the BERT language model, to extract contextualized word embeddings.
Our model outperforms the previous state of the art by 1.04% and, at the same time, is significantly more computationally efficient to train.
arXiv Detail & Related papers (2020-10-03T09:25:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.