Personality Trait Detection Using Bagged SVM over BERT Word Embedding Ensembles
- URL: http://arxiv.org/abs/2010.01309v1
- Date: Sat, 3 Oct 2020 09:25:51 GMT
- Title: Personality Trait Detection Using Bagged SVM over BERT Word Embedding Ensembles
- Authors: Amirmohammad Kazameini, Samin Fatehi, Yash Mehta, Sauleh Eetemadi, Erik Cambria
- Abstract summary: We present a novel deep learning-based approach for automated personality detection from text.
We leverage state-of-the-art advances in natural language understanding, namely the BERT language model, to extract contextualized word embeddings.
Our model outperforms the previous state of the art by 1.04% while being significantly more computationally efficient to train.
- Score: 10.425280599592865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, the automatic prediction of personality traits has received
increasing attention and has emerged as a hot topic within the field of
affective computing. In this work, we present a novel deep learning-based
approach for automated personality detection from text. We leverage
state-of-the-art advances in natural language understanding, namely the BERT
language model, to extract contextualized word embeddings from textual data for
automated author personality detection. Our primary goal is to develop a
computationally efficient, high-performance personality prediction model that
can be easily used by a large number of people without access to huge
computational resources. With this goal in mind, our extensive experiments led
us to develop a novel model which feeds contextualized embeddings, along with
psycholinguistic features, to a Bagged-SVM classifier for personality trait
prediction. Our model outperforms the previous state of the art by 1.04% while
being significantly more computationally efficient to train. We report our
results on the well-known gold-standard Essays dataset for personality
detection.
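As a rough illustration of the pipeline the abstract describes (not the authors' code), the sketch below mean-pools BERT token embeddings, concatenates them with placeholder psycholinguistic features, and trains a bagged ensemble of SVMs; the model name, pooling strategy, and toy feature extractor are all assumptions.

```python
# Minimal sketch, not the authors' implementation: BERT embeddings plus
# placeholder psycholinguistic features fed to a Bagged-SVM classifier.
# Assumptions: bert-base-uncased, mean pooling over tokens, and a toy
# stand-in for the psycholinguistic features used in the paper.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def bert_embed(texts):
    """Return one mean-pooled BERT vector per document."""
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state            # (batch, seq, 768)
    mask = enc["attention_mask"].unsqueeze(-1).float()    # ignore padding
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

def psycholinguistic_features(texts):
    """Hypothetical stand-in for document-level psycholinguistic features."""
    return np.array([[len(t.split()), t.count("!")] for t in texts], dtype=float)

def train_trait_classifier(texts, labels, n_estimators=10):
    """Bag several SVMs over concatenated embedding + feature vectors
    for a single binary personality trait."""
    X = np.hstack([bert_embed(texts), psycholinguistic_features(texts)])
    clf = BaggingClassifier(estimator=SVC(kernel="rbf"), n_estimators=n_estimators)
    return clf.fit(X, labels)
```

Keeping BERT frozen and training only the bagged SVMs is what keeps this kind of pipeline cheap relative to fine-tuning the language model end to end, which matches the efficiency goal stated in the abstract.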
Related papers
- Reverse-Engineering the Reader [43.26660964074272]
We introduce a novel alignment technique in which we fine-tune a language model to implicitly optimize the parameters of a linear regressor.
Using words as a test case, we evaluate our technique across multiple model sizes and datasets.
We find an inverse relationship between psychometric power and a model's performance on downstream NLP tasks as well as its perplexity on held-out test data.
arXiv Detail & Related papers (2024-10-16T23:05:01Z) - LLMvsSmall Model? Large Language Model Based Text Augmentation Enhanced
Personality Detection Model [58.887561071010985]
Personality detection aims to detect the personality traits underlying a person's social media posts.
Most existing methods learn post features directly by fine-tuning the pre-trained language models.
We propose a large language model (LLM) based text augmentation enhanced personality detection model.
arXiv Detail & Related papers (2024-03-12T12:10:18Z) - PsyCoT: Psychological Questionnaire as Powerful Chain-of-Thought for
Personality Detection [50.66968526809069]
We propose a novel personality detection method, called PsyCoT, which mimics the way individuals complete psychological questionnaires in a multi-turn dialogue manner.
Our experiments demonstrate that PsyCoT significantly improves the performance and robustness of GPT-3.5 in personality detection.
arXiv Detail & Related papers (2023-10-31T08:23:33Z) - Unsupervised Sentiment Analysis of Plastic Surgery Social Media Posts [91.3755431537592]
- Unsupervised Sentiment Analysis of Plastic Surgery Social Media Posts [91.3755431537592]
The massive collection of user posts across social media platforms is primarily untapped for artificial intelligence (AI) use cases.
Natural language processing (NLP) is a subfield of AI that leverages bodies of documents, known as corpora, to train computers in human-like language understanding.
This study demonstrates that the applied results of unsupervised analysis allow a computer to predict either negative, positive, or neutral user sentiment towards plastic surgery.
arXiv Detail & Related papers (2023-07-05T20:16:20Z) - ASPEST: Bridging the Gap Between Active Learning and Selective
Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
arXiv Detail & Related papers (2023-04-07T23:51:07Z) - Estimating the Personality of White-Box Language Models [0.589889361990138]
- Estimating the Personality of White-Box Language Models [0.589889361990138]
Large-scale language models, which are trained on large corpora of text, are being used in a wide range of applications everywhere.
Existing research shows that these models can and do capture human biases.
Many of these biases, especially those that could potentially cause harm, are being well-investigated.
However, studies that infer and change human personality traits inherited by these models have been scarce or non-existent.
arXiv Detail & Related papers (2022-04-25T23:53:53Z) - Pushing on Personality Detection from Verbal Behavior: A Transformer
Meets Text Contours of Psycholinguistic Features [27.799032561722893]
We report two major improvements in predicting personality traits from text data.
We integrate a pre-trained Transformer language model (BERT) with Bidirectional Long Short-Term Memory networks trained on within-text distributions of psycholinguistic features.
We evaluate the performance of the models we built on two benchmark datasets.
arXiv Detail & Related papers (2022-04-10T08:08:46Z) - Knowledge Graph-Enabled Text-Based Automatic Personality Prediction [8.357801312689622]
- Knowledge Graph-Enabled Text-Based Automatic Personality Prediction [8.357801312689622]
Text-based Automatic Personality Prediction (APP) is the automated forecasting of an individual's personality from the text they generate or exchange.
This paper presents a novel knowledge graph-enabled approach to text-based APP that relies on the Big Five personality traits.
arXiv Detail & Related papers (2022-03-17T06:01:45Z) - The state-of-the-art in text-based automatic personality prediction [1.3209941988151326]
Personality detection is a long-standing topic in psychology.
Automatic Personality Prediction (or Perception) (APP) is the automated (computational) forecasting of personality from different types of human-generated or exchanged content (such as text, speech, images, and video).
arXiv Detail & Related papers (2021-10-04T04:51:11Z) - Sentiment analysis in tweets: an assessment study from classical to
modern text representation models [59.107260266206445]
Short texts published on Twitter have earned significant attention as a rich source of information.
Their inherent characteristics, such as an informal and noisy linguistic style, remain challenging for many natural language processing (NLP) tasks.
This study assesses existing language models' ability to distinguish the sentiment expressed in tweets, using a rich collection of 22 datasets.
arXiv Detail & Related papers (2021-05-29T21:05:28Z) - Unsupervised Paraphrasing with Pretrained Language Models [85.03373221588707]
We propose a training pipeline that enables pre-trained language models to generate high-quality paraphrases in an unsupervised setting.
Our recipe consists of task-adaptation, self-supervision, and a novel decoding algorithm named Dynamic Blocking.
We show with automatic and human evaluations that our approach achieves state-of-the-art performance on both the Quora Question Pair and the ParaNMT datasets.
arXiv Detail & Related papers (2020-10-24T11:55:28Z)