Tailoring Personality Traits in Large Language Models via
Unsupervisedly-Built Personalized Lexicons
- URL: http://arxiv.org/abs/2310.16582v2
- Date: Sat, 6 Jan 2024 14:17:40 GMT
- Title: Tailoring Personality Traits in Large Language Models via
Unsupervisedly-Built Personalized Lexicons
- Authors: Tianlong Li, Shihan Dou, Changze Lv, Wenhao Liu, Jianhan Xu, Muling
Wu, Zixuan Ling, Xiaoqing Zheng and Xuanjing Huang
- Abstract summary: Personality plays a pivotal role in shaping human expression patterns.
Previous methods relied on fine-tuning large language models (LLMs) on specific corpora.
We have employed a novel Unsupervisedly-Built Personalized Lexicon (UBPL) in a pluggable manner to manipulate personality traits.
- Score: 42.66142331217763
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Personality plays a pivotal role in shaping human expression patterns, thus
regulating the personality of large language models (LLMs) holds significant
potential in enhancing the user experience of LLMs. Previous methods either
relied on fine-tuning LLMs on specific corpora or necessitated manually crafted
prompts to elicit specific personalities from LLMs. However, the former
approach is inefficient and costly, while the latter cannot precisely
manipulate personality traits at a fine-grained level. To address the above
challenges, we have employed a novel Unsupervisedly-Built Personalized Lexicon
(UBPL) in a pluggable manner during the decoding phase of LLMs to manipulate
their personality traits. UBPL is a lexicon built through an unsupervised
approach from a situational judgment test dataset (SJTs4LLM). Users can utilize
UBPL to adjust the probability vectors of predicted words in the decoding phase
of LLMs, thus influencing the personality expression of LLMs. Extensive
experimentation demonstrates the remarkable effectiveness and pluggability of
our method for fine-grained manipulation of LLMs' personality.
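The decoding-time mechanism lends itself to a short sketch. The following is a minimal, hypothetical illustration of lexicon-based logit steering in the spirit of UBPL, not the authors' implementation: the token ids, bonus weights, and the `alpha` strength knob are all stand-ins (the paper builds its lexicon unsupervisedly from SJTs4LLM).
```python
import torch

# Hand-made stand-in for a personalized lexicon: token id -> logit bonus.
# In UBPL the lexicon would be built unsupervisedly from SJTs4LLM; the
# ids and weights here are hypothetical.
lexicon_bonus = {1234: 2.0, 5678: 1.5}

def adjust_logits(logits: torch.Tensor, bonus: dict, alpha: float = 1.0) -> torch.Tensor:
    """Shift next-token logits toward lexicon words; alpha scales the
    strength of the steering (an assumed knob, not taken from the paper)."""
    adjusted = logits.clone()
    for token_id, weight in bonus.items():
        adjusted[..., token_id] += alpha * weight
    return adjusted

# Inside an ordinary sampling loop, the adjustment slots in before softmax:
#   logits = model(input_ids).logits[:, -1, :]
#   probs = torch.softmax(adjust_logits(logits, lexicon_bonus, alpha=0.8), dim=-1)
#   next_id = torch.multinomial(probs, num_samples=1)
```
Because the shift is applied only when sampling, the base model itself is untouched, which is what makes the approach pluggable; scaling the bonus (here via `alpha`) is the natural handle for fine-grained trait control.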
Related papers
- LMLPA: Language Model Linguistic Personality Assessment [11.599282127259736]
Large Language Models (LLMs) are increasingly used in everyday life and research.
Measuring the personality of a given LLM is currently a challenge.
This paper introduces the Language Model Linguistic Personality Assessment (LMLPA), a system designed to evaluate the linguistic personalities of LLMs.
arXiv Detail & Related papers (2024-10-23T07:48:51Z)
- zsLLMCode: An Effective Approach for Functional Code Embedding via LLM with Zero-Shot Learning [6.976968804436321]
Large language models (LLMs) have the capability of zero-shot learning, which does not require training or fine-tuning.
We propose zsLLMCode, a novel approach that generates functional code embeddings using LLMs.
arXiv Detail & Related papers (2024-09-23T01:03:15Z)
- LLM vs Small Model? Large Language Model Based Text Augmentation Enhanced Personality Detection Model [58.887561071010985]
Personality detection aims to identify the personality traits underlying a person's social media posts.
Most existing methods learn post features directly by fine-tuning the pre-trained language models.
We propose a large language model (LLM) based text augmentation enhanced personality detection model.
arXiv Detail & Related papers (2024-03-12T12:10:18Z)
- Characterizing Truthfulness in Large Language Model Generations with Local Intrinsic Dimension [63.330262740414646]
We study how to characterize and predict the truthfulness of texts generated by large language models (LLMs).
We suggest investigating internal activations and quantifying an LLM's truthfulness using the local intrinsic dimension (LID) of model activations.
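As a rough illustration of what an LID estimate looks like, the sketch below applies the classical Levina-Bickel maximum-likelihood estimator to a cloud of activation vectors; the estimator variant, the layer the activations come from, and the neighborhood size `k` are assumptions for illustration, not details taken from the paper.
```python
import numpy as np

def lid_mle(activations: np.ndarray, query: np.ndarray, k: int = 20) -> float:
    """Levina-Bickel MLE of the local intrinsic dimension at `query`,
    from its k nearest neighbors among `activations` (one row per sample)."""
    dists = np.linalg.norm(activations - query, axis=1)
    dists = np.sort(dists[dists > 0])[:k]  # k nearest non-zero distances
    return -1.0 / np.mean(np.log(dists[:-1] / dists[-1]))

# Toy usage on random vectors standing in for hidden-layer activations.
acts = np.random.randn(1000, 768)
print(lid_mle(acts, acts[0]))
```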
arXiv Detail & Related papers (2024-02-28T04:56:21Z)
- Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models [52.98743860365194]
We propose a new fine-tuning method called Self-Play fIne-tuNing (SPIN).
At the heart of SPIN lies a self-play mechanism, where the LLM refines its capability by playing against instances of itself.
This sheds light on the promise of self-play, enabling the achievement of human-level performance in LLMs without the need for expert opponents.
arXiv Detail & Related papers (2024-01-02T18:53:13Z)
- Psychometric Predictive Power of Large Language Models [32.31556074470733]
We find that instruction tuning does not always make large language models human-like from a cognitive modeling perspective.
Next-word probabilities estimated by instruction-tuned LLMs are often worse at simulating human reading behavior than those estimated by base LLMs.
arXiv Detail & Related papers (2023-11-13T17:19:14Z)
- Let Models Speak Ciphers: Multiagent Debate through Embeddings [84.20336971784495]
We introduce CIPHER (Communicative Inter-Model Protocol Through Embedding Representation) to address this issue.
By deviating from natural language, CIPHER offers an advantage of encoding a broader spectrum of information without any modification to the model weights.
This showcases the superiority and robustness of embeddings as an alternative "language" for communication among LLMs.
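A toy sketch of the idea (sizes and logits below are placeholders, not the paper's setup): instead of committing to a single sampled token, a CIPHER-style message is the probability-weighted average of token embeddings, so the sender's full next-token belief is passed along.
```python
import torch

vocab_size, dim = 100, 16                     # toy sizes
embedding = torch.nn.Embedding(vocab_size, dim)

logits = torch.randn(vocab_size)              # stand-in for a model's next-token logits
probs = torch.softmax(logits, dim=-1)

token_message = embedding(probs.argmax())     # ordinary decoding: one discrete token
cipher_message = probs @ embedding.weight     # CIPHER-style: expected embedding

print(token_message.shape, cipher_message.shape)  # both are dim-16 vectors
```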
arXiv Detail & Related papers (2023-10-10T03:06:38Z)
- Are Large Language Models Really Robust to Word-Level Perturbations? [68.60618778027694]
We propose a novel rational evaluation approach that leverages pre-trained reward models as diagnostic tools.
Longer conversations reflect a more comprehensive grasp by the model of the questions it is asked.
Our results demonstrate that LLMs frequently exhibit vulnerability to word-level perturbations that are commonplace in daily language usage.
arXiv Detail & Related papers (2023-09-20T09:23:46Z)
- Personality Traits in Large Language Models [44.908741466152215]
Personality is a key factor determining the effectiveness of communication.
We present a comprehensive method for administering and validating personality tests on widely-used large language models.
We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
arXiv Detail & Related papers (2023-07-01T00:58:51Z)