Extending the Abstraction of Personality Types based on MBTI with
Machine Learning and Natural Language Processing
- URL: http://arxiv.org/abs/2105.11798v1
- Date: Tue, 25 May 2021 10:00:16 GMT
- Title: Extending the Abstraction of Personality Types based on MBTI with
Machine Learning and Natural Language Processing
- Authors: Carlos Basto
- Abstract summary: A data-centric approach with Natural Language Processing (NLP) is used to predict personality types based on the MBTI.
The experiments rested on a robust baseline of stacked models.
Focusing the data iteration loop on quality, explanatory power and representativeness allowed the most relevant features to be abstracted, improving the evaluation metrics more quickly and at lower cost than complex models such as LSTMs or BERT.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A data-centric approach with Natural Language Processing (NLP) is used to
predict personality types based on the MBTI (an introspective self-assessment
questionnaire that indicates different psychological preferences about how
people perceive the world and make decisions). The text representation is
systematically enriched, guided by domain knowledge, with features generated
from three types of analysis: sentiment, grammatical and aspect-based. The
experiments rested on a robust baseline of stacked models, with early
hyperparameter optimization through grid search and gradual feedback for each
of the four MBTI classifiers (dichotomies). The results showed that focusing
the data iteration loop on quality, explanatory power and representativeness,
so that the features most relevant to the studied phenomenon could be
abstracted, improved the evaluation metrics more quickly and at lower cost
than complex models such as LSTMs or state-of-the-art ones such as BERT;
comparisons made from various perspectives underline the importance of these
results. In addition, the study points to a broad spectrum for evolving and
deepening the task, and to possible approaches for further extending the
abstraction of personality types.
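The pipeline described in the abstract (an enriched text representation with sentiment, grammatical and aspect features, feeding stacked models tuned by grid search for each dichotomy) can be illustrated with a minimal scikit-learn sketch. Everything below is an assumption for illustration, not the paper's actual configuration: the handcrafted features, the estimators, the hyperparameter grid and the hypothetical data loader.

```python
# Minimal sketch: TF-IDF plus a few toy sentiment/grammatical features,
# a stacked ensemble, and a grid search for one MBTI dichotomy (I/E).
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import FeatureUnion, Pipeline


class HandcraftedFeatures(BaseEstimator, TransformerMixin):
    """Toy signals: exclamation/question density, average word length,
    first-person pronoun rate."""

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        rows = []
        for text in X:
            words = text.split() or [""]
            rows.append([
                text.count("!") / len(words),
                text.count("?") / len(words),
                np.mean([len(w) for w in words]),
                sum(w.lower() in {"i", "me", "my"} for w in words) / len(words),
            ])
        return np.array(rows)


features = FeatureUnion([
    ("tfidf", TfidfVectorizer(max_features=5000, ngram_range=(1, 2))),
    ("handcrafted", HandcraftedFeatures()),
])

stack = StackingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
)

pipeline = Pipeline([("features", features), ("clf", stack)])

param_grid = {  # small illustrative grid, tuned per dichotomy in practice
    "features__tfidf__max_features": [2000, 5000],
    "clf__final_estimator__C": [0.1, 1.0],
}

search = GridSearchCV(pipeline, param_grid, scoring="f1", cv=3)
# texts, is_introvert = load_mbti_corpus()  # hypothetical loader
# search.fit(texts, is_introvert)           # repeated for each of the 4 dichotomies
```

The same pipeline would be fit four times, once per MBTI dichotomy, with the data-quality work happening upstream of this code.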
Related papers
- Modeling the Human Visual System: Comparative Insights from Response-Optimized and Task-Optimized Vision Models, Language Models, and different Readout Mechanisms [1.515687944002438]
We show that response-optimized models with visual inputs offer superior prediction accuracy for early to mid-level visual areas.
We identify three distinct regions in the visual cortex that are sensitive to perceptual features of the input that are not captured by linguistic descriptions.
We propose a novel scheme that modulates receptive fields and feature maps based on semantic content, resulting in an accuracy boost of 3-23% over existing SOTAs.
arXiv Detail & Related papers (2024-10-17T21:11:13Z)
Neuron-based Personality Trait Induction in Large Language Models [115.08894603023712]
Large language models (LLMs) have become increasingly proficient at simulating various personality traits.
We present a neuron-based approach for personality trait induction in LLMs.
arXiv Detail & Related papers (2024-10-16T07:47:45Z)
PersLLM: A Personified Training Approach for Large Language Models [66.16513246245401]
We propose PersLLM, integrating psychology-grounded principles of personality: social practice, consistency, and dynamic development.
We incorporate personality traits directly into the model parameters, enhancing the model's resistance to induction, promoting consistency, and supporting the dynamic evolution of personality.
arXiv Detail & Related papers (2024-07-17T08:13:22Z)
On the Robustness of Aspect-based Sentiment Analysis: Rethinking Model, Data, and Training [109.9218185711916]
Aspect-based sentiment analysis (ABSA) aims at automatically inferring the specific sentiment polarities toward certain aspects of products or services behind social media texts or reviews.
We propose to enhance the ABSA robustness by systematically rethinking the bottlenecks from all possible angles, including model, data, and training.
arXiv Detail & Related papers (2023-04-19T11:07:43Z)
CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance over six datasets with very distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z)
Myers-Briggs personality classification from social media text using pre-trained language models [0.0]
We describe a series of experiments in which the well-known Bidirectional Encoder Representations from Transformers (BERT) model is fine-tuned to perform MBTI classification.
Our main findings suggest that this approach significantly outperforms well-known text classification models based on bag-of-words and static word embeddings alike (a minimal fine-tuning sketch in this spirit appears after this list).
arXiv Detail & Related papers (2022-07-10T14:38:09Z)
Pushing on Personality Detection from Verbal Behavior: A Transformer Meets Text Contours of Psycholinguistic Features [27.799032561722893]
We report two major improvements in predicting personality traits from text data.
We integrate a pre-trained Transformer language model, BERT, with Bidirectional Long Short-Term Memory (BiLSTM) networks trained on within-text distributions of psycholinguistic features (a BiLSTM sketch in this spirit appears after this list).
We evaluate the performance of the models we built on two benchmark datasets.
arXiv Detail & Related papers (2022-04-10T08:08:46Z)
Multitask Learning for Emotion and Personality Detection [17.029426018676997]
We build on the known correlation between personality traits and emotional behaviors, and propose a novel multitask learning framework, SoGMTL.
Our more computationally efficient CNN-based multitask model achieves state-of-the-art performance across multiple well-known personality and emotion datasets.
arXiv Detail & Related papers (2021-01-07T03:09:55Z)
Personality Trait Detection Using Bagged SVM over BERT Word Embedding Ensembles [10.425280599592865]
We present a novel deep learning-based approach for automated personality detection from text.
We leverage state-of-the-art advances in natural language understanding, namely the BERT language model, to extract contextualized word embeddings.
Our model outperforms the previous state of the art by 1.04% and is, at the same time, significantly more computationally efficient to train (a minimal bagged-SVM sketch in this spirit appears after this list).
arXiv Detail & Related papers (2020-10-03T09:25:51Z)
A Dependency Syntactic Knowledge Augmented Interactive Architecture for End-to-End Aspect-based Sentiment Analysis [73.74885246830611]
We propose a novel dependency syntactic knowledge augmented interactive architecture with multi-task learning for end-to-end ABSA.
This model fully exploits the syntactic knowledge (dependency relations and types) by leveraging a well-designed Dependency Relation Embedded Graph Convolutional Network (DreGcn).
Extensive experimental results on three benchmark datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-04T14:59:32Z)
Rethinking Generalization of Neural Models: A Named Entity Recognition Case Study [81.11161697133095]
We take the NER task as a testbed to analyze the generalization behavior of existing models from different perspectives.
Experiments with in-depth analyses diagnose the bottleneck of existing neural NER models.
As a by-product of this paper, we have open-sourced a project that involves a comprehensive summary of recent NER papers.
arXiv Detail & Related papers (2020-01-12T04:33:53Z)
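For the Myers-Briggs classification entry above, fine-tuning BERT for one MBTI dichotomy might look roughly as follows with Hugging Face transformers. The checkpoint, toy data, label convention and hyperparameters are assumptions for illustration, not the authors' setup.

```python
# Hypothetical sketch: fine-tune BERT as a binary classifier for one
# MBTI dichotomy (I vs. E); repeat per dichotomy on a real corpus.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

texts = ["I spent the weekend reading alone.", "Big parties give me energy!"]
labels = [1, 0]  # toy labels: 1 = introvert, 0 = extravert

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

dataset = Dataset.from_dict({"text": texts, "label": labels}).map(
    tokenize, batched=True)

args = TrainingArguments(output_dir="mbti-ie", num_train_epochs=3,
                         per_device_train_batch_size=8)
Trainer(model=model, args=args, train_dataset=dataset).train()
```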
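For the "Pushing on Personality Detection from Verbal Behavior" entry, the idea of running a BiLSTM over within-text contours of psycholinguistic features can be sketched as below. The three toy features, the dimensions and the single-trait head are assumptions, not the paper's feature set or architecture.

```python
# Hypothetical sketch: per-sentence psycholinguistic-style features form a
# sequence ("text contour") that a BiLSTM reads to predict one trait.
import torch
import torch.nn as nn

def sentence_features(sentence: str) -> list[float]:
    words = sentence.split() or [""]
    return [
        float(len(words)),                        # sentence length
        sum(len(w) for w in words) / len(words),  # average word length
        len(set(words)) / len(words),             # type-token ratio
    ]

class ContourBiLSTM(nn.Module):
    def __init__(self, n_features: int = 3, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)  # one personality dichotomy

    def forward(self, x):                      # x: (batch, sentences, features)
        _, (h, _) = self.lstm(x)
        h = torch.cat([h[-2], h[-1]], dim=-1)  # last forward + backward states
        return self.head(h).squeeze(-1)        # logits

doc = ["I prefer quiet evenings.", "Crowds tire me quickly.",
       "Books are my company."]
x = torch.tensor([[sentence_features(s) for s in doc]])  # shape (1, 3, 3)
logit = ContourBiLSTM()(x)
```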
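For the bagged-SVM-over-BERT-embeddings entry, a minimal sketch: BERT provides contextualized embeddings (mean-pooled here as an assumption) and scikit-learn's BaggingClassifier wraps an SVM. The texts, labels and ensemble settings are hypothetical.

```python
# Hypothetical sketch: mean-pooled BERT embeddings fed to a bagged SVM.
import torch
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    """Mean-pool the last hidden states into one vector per text."""
    batch = tokenizer(texts, padding=True, truncation=True,
                      return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state   # (batch, tokens, 768)
    mask = batch["attention_mask"].unsqueeze(-1)
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

texts = ["I recharge by being alone.", "I love meeting new people."]
features = embed(texts)                               # shape (2, 768)

clf = BaggingClassifier(estimator=SVC(kernel="rbf"),
                        n_estimators=10, random_state=0)
# clf.fit(embed(training_texts), trait_labels)  # hypothetical labeled corpus
```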
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.