Personality Assessment from Text for Machine Commonsense Reasoning
- URL: http://arxiv.org/abs/2004.09275v1
- Date: Wed, 15 Apr 2020 07:30:47 GMT
- Title: Personality Assessment from Text for Machine Commonsense Reasoning
- Authors: Niloofar Hezarjaribi, Zhila Esna Ashari, James F. Frenzel, Hassan
Ghasemzadeh, and Saied Hemati
- Abstract summary: PerSense is a framework to estimate human personality traits based on expressed texts.
Our goal is to demonstrate the feasibility of using machine learning algorithms on personality trait data.
- Score: 15.348792748868643
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This article presents PerSense, a framework to estimate human personality
traits based on expressed texts and to use them for commonsense reasoning
analysis. The personality assessment approaches include an aggregated
Probability Density Functions (PDF), and Machine Learning (ML) models. Our goal
is to demonstrate the feasibility of using machine learning algorithms on
personality trait data to predict humans' responses to open-ended commonsense
questions. We assess the performance of the PerSense algorithms for personality
assessment through an experiment focused on Neuroticism, a personality trait
crucial to mental health analysis and suicide prevention, collecting data from
a diverse population with varying Neuroticism scores.
Our analysis shows that the algorithms achieve comparable results to the ground
truth data. Specifically, the PDF approach achieves 97% accuracy when the
confidence factor, the logarithmic ratio of the first-guess probability to the
second-guess probability, is greater than 3. Additionally, the ML approach
obtains its highest accuracy, 82.2%, with a multilayer perceptron classifier. To assess the
feasibility of commonsense reasoning analysis, we train ML algorithms to
predict responses to commonsense questions. Our analysis of data collected
from 300 participants demonstrates that PerSense predicts answers to commonsense
questions with 82.3% accuracy using a Random Forest classifier.
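The confidence factor used by the PDF approach can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the abstract only says "logarithmic ratio of the first to the second guess probability," so the log base (10 here), the function names, and the acceptance helper are assumptions.

```python
import math

def confidence_factor(probabilities):
    """Logarithmic ratio of the first-guess probability to the
    second-guess probability. Base-10 log is an assumption; the
    abstract does not specify the base."""
    first, second = sorted(probabilities, reverse=True)[:2]
    return math.log10(first / second)

def accept_prediction(probabilities, threshold=3.0):
    """Accept the PDF-based guess only when the confidence factor
    exceeds the threshold; the abstract reports 97% accuracy for a
    confidence factor greater than 3."""
    return confidence_factor(probabilities) > threshold
```

For example, a probability vector of (0.9, 0.09, 0.01) over candidate trait levels yields a confidence factor of 1 (one order of magnitude between the top two guesses) and would be rejected at the reported threshold of 3.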
Related papers
- Uncertainty-aware abstention in medical diagnosis based on medical texts [87.88110503208016]
This study addresses the critical issue of reliability for AI-assisted medical diagnosis.
We focus on the selective prediction approach, which allows the diagnosis system to abstain from providing a decision when it is not confident in the diagnosis.
We introduce HUQ-2, a new state-of-the-art method for enhancing reliability in selective prediction tasks.
arXiv Detail & Related papers (2025-02-25T10:15:21Z) - Revealing the Self: Brainwave-Based Human Trait Identification [2.660113491122853]
This paper introduces a novel technique for identifying human traits in real time using brainwave data.
Our analysis uncovers several new insights, leading us to a groundbreaking unified approach for identifying diverse human traits.
We have developed an integrated, real-time trait identification solution using EEG data, based on the insights from our analysis.
arXiv Detail & Related papers (2024-12-26T03:27:34Z) - Accessible, At-Home Detection of Parkinson's Disease via Multi-task Video Analysis [3.1851272788128644]
Existing AI-based Parkinson's Disease detection methods primarily focus on unimodal analysis of motor or speech tasks.
We propose a novel Uncertainty-calibrated Fusion Network (UFNet) that leverages this multimodal data to enhance diagnostic accuracy.
UFNet significantly outperformed single-task models in terms of accuracy, area under the ROC curve (AUROC), and sensitivity while maintaining non-inferior specificity.
arXiv Detail & Related papers (2024-06-21T04:02:19Z) - Individual Text Corpora Predict Openness, Interests, Knowledge and Level of Education [0.5825410941577593]
The personality dimension of openness to experience can be predicted from an individual's Google search history.
Individual text corpora (ICs) were generated from 214 participants, with a mean of 5 million word tokens.
arXiv Detail & Related papers (2024-03-29T21:44:24Z) - Multi-Dimensional Ability Diagnosis for Machine Learning Algorithms [88.93372675846123]
We propose a task-agnostic evaluation framework Camilla for evaluating machine learning algorithms.
We use cognitive diagnosis assumptions and neural networks to learn the complex interactions among algorithms, samples and the skills of each sample.
In our experiments, Camilla outperforms state-of-the-art baselines in metric reliability, rank consistency, and rank stability.
arXiv Detail & Related papers (2023-07-14T03:15:56Z) - Auditing for Human Expertise [12.967730957018688]
We develop a statistical framework under which we can pose this question as a natural hypothesis test.
We propose a simple procedure which tests whether expert predictions are statistically independent from the outcomes of interest.
A rejection of our test thus suggests that human experts may add value to any algorithm trained on the available data.
arXiv Detail & Related papers (2023-06-02T16:15:24Z) - Measuring the Effect of Influential Messages on Varying Personas [67.1149173905004]
We present a new task, Response Forecasting on Personas for News Media, to estimate the response a persona might have upon seeing a news message.
The proposed task not only introduces personalization in the modeling but also predicts the sentiment polarity and intensity of each response.
This enables more accurate and comprehensive inference on the mental state of the persona.
arXiv Detail & Related papers (2023-05-25T21:01:00Z) - ASPEST: Bridging the Gap Between Active Learning and Selective
Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
arXiv Detail & Related papers (2023-04-07T23:51:07Z) - Dataset Bias in Human Activity Recognition [57.91018542715725]
This contribution statistically curates the training data to assess the degree to which the physical characteristics of humans influence HAR performance.
We evaluate the performance of a state-of-the-art convolutional neural network on two time-series HAR datasets that vary in sensors, activities, and recording conditions.
arXiv Detail & Related papers (2023-01-19T12:33:50Z) - Analyzing Wearables Dataset to Predict ADLs and Falls: A Pilot Study [0.0]
This paper exhaustively reviews thirty-nine wearable-based datasets that can be used to evaluate systems for recognizing Activities of Daily Living and falls.
A comparative analysis on the SisFall dataset using five machine learning methods is performed in Python.
The results of this study show that KNN outperforms the other machine learning methods in terms of accuracy, precision, and recall.
arXiv Detail & Related papers (2022-09-11T04:41:40Z) - Empirical Estimates on Hand Manipulation are Recoverable: A Step Towards
Individualized and Explainable Robotic Support in Everyday Activities [80.37857025201036]
A key challenge for robotic systems is to figure out the behavior of another agent.
Drawing correct inferences is especially challenging when (confounding) factors are not controlled experimentally.
We propose equipping robots with the necessary tools to conduct observational studies on people.
arXiv Detail & Related papers (2022-01-27T22:15:56Z) - Sensitivity analysis in differentially private machine learning using
hybrid automatic differentiation [54.88777449903538]
We introduce a novel hybrid automatic differentiation (AD) system for sensitivity analysis.
This enables modelling the sensitivity of arbitrary differentiable function compositions, such as the training of neural networks on private data.
Our approach enables principled reasoning about privacy loss in the setting of data processing.
arXiv Detail & Related papers (2021-07-09T07:19:23Z) - Beyond accuracy: quantifying trial-by-trial behaviour of CNNs and humans
by measuring error consistency [10.028543085687803]
A central problem in cognitive science and behavioural neuroscience is to ascertain whether two or more decision makers (be they brains or algorithms) use the same strategy.
We introduce trial-by-trial error consistency, a quantitative analysis for measuring whether two decision making systems systematically make errors on the same inputs.
arXiv Detail & Related papers (2020-06-30T12:47:17Z)
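The trial-by-trial error consistency measure in the last entry above can be sketched as follows. This is a minimal reconstruction under stated assumptions, not the authors' code: it assumes binary per-trial error indicators and the usual Cohen's-kappa-style normalization of observed against chance-expected consistency.

```python
def error_consistency(errors_a, errors_b):
    """Kappa-style trial-by-trial error consistency between two
    decision makers, given binary error indicators (1 = error,
    0 = correct) over the same trials."""
    n = len(errors_a)
    # observed consistency: fraction of trials where both systems
    # are jointly correct or jointly wrong
    c_obs = sum(a == b for a, b in zip(errors_a, errors_b)) / n
    # consistency expected by chance if errors were independent,
    # given each system's accuracy
    acc_a = 1 - sum(errors_a) / n
    acc_b = 1 - sum(errors_b) / n
    c_exp = acc_a * acc_b + (1 - acc_a) * (1 - acc_b)
    # normalize: 1 = errors always on the same trials, 0 = no more
    # overlap than chance (undefined when c_exp == 1)
    return (c_obs - c_exp) / (1 - c_exp)
```

Two systems with identical error patterns score 1, while two equally accurate systems whose errors fall on disjoint, chance-level trials score 0.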
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.