Explainable AI for Psychological Profiling from Digital Footprints: A
Case Study of Big Five Personality Predictions from Spending Data
- URL: http://arxiv.org/abs/2111.06908v1
- Date: Fri, 12 Nov 2021 19:28:56 GMT
- Title: Explainable AI for Psychological Profiling from Digital Footprints: A
Case Study of Big Five Personality Predictions from Spending Data
- Authors: Yanou Ramon, Sandra C. Matz, R.A. Farrokhnia, David Martens
- Abstract summary: We show how Explainable AI (XAI) can help domain experts validate, question, and improve models that classify psychological traits from digital footprints.
First, we demonstrate how global rule extraction sheds light on the spending patterns identified by the model as most predictive for personality.
Second, we implement local rule extraction to show that individuals are assigned to personality classes because of their unique financial behavior.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Every step we take in the digital world leaves behind a record of our
behavior; a digital footprint. Research has suggested that algorithms can
translate these digital footprints into accurate estimates of psychological
characteristics, including personality traits, mental health or intelligence.
The mechanisms by which AI generates these insights, however, often remain
opaque. In this paper, we show how Explainable AI (XAI) can help domain experts
and data subjects validate, question, and improve models that classify
psychological traits from digital footprints. We elaborate on two popular XAI
methods (rule extraction and counterfactual explanations) in the context of Big
Five personality predictions (traits and facets) from financial transactions
data (N = 6,408). First, we demonstrate how global rule extraction sheds light
on the spending patterns identified by the model as most predictive for
personality, and discuss how these rules can be used to explain, validate, and
improve the model. Second, we implement local rule extraction to show that
individuals are assigned to personality classes because of their unique
financial behavior, and that there exists a positive link between the model's
prediction confidence and the number of features that contributed to the
prediction. Our experiments highlight the importance of both global and local
XAI methods. By better understanding how predictive models work in general as
well as how they derive an outcome for a particular person, XAI promotes
accountability in a world in which AI impacts the lives of billions of people
around the world.
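To make the two approaches described above more concrete, the following is a minimal, hypothetical Python/scikit-learn sketch: a shallow surrogate decision tree stands in for global rule extraction, and a greedy category-removal loop stands in for a counterfactual-style local explanation. The synthetic spending-share data, the category names, and both explanation strategies are illustrative assumptions, not the authors' pipeline, which works on real transaction records (N = 6,408) with its own rule-extraction and counterfactual procedures.

```python
# Hypothetical sketch only -- illustrates the general ideas of global rule
# extraction and local counterfactual-style explanations on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Placeholder features: each row is one person, each column the share of total
# spending in a category; y is a toy binarized trait label (high/low on one trait).
categories = ["groceries", "travel", "dining", "gambling", "charity", "books"]
X = rng.dirichlet(np.ones(len(categories)), size=500)
y = (X[:, 1] + X[:, 2] > 0.35).astype(int)  # toy "trait" driven by travel + dining

# 1) Black-box model that predicts the trait class from spending patterns.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# 2) Global rule extraction: fit a shallow, human-readable tree on the black
#    box's *predictions* (a surrogate) and read off its rules.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=categories))
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")  # how well the rules mimic the model

# 3) Local, counterfactual-style explanation for one person: greedily zero out
#    their largest spending categories until the predicted class flips
#    (may list all categories if no flip occurs).
x_cf = X[0].copy()
original = black_box.predict(X[:1])[0]
removed = []
for j in np.argsort(-x_cf):
    x_cf[j] = 0.0
    removed.append(categories[j])
    if black_box.predict(x_cf.reshape(1, -1))[0] != original:
        break
print("prediction flips after removing:", removed)
```

The printed fidelity score reflects the general idea of checking how faithfully extracted rules mimic the underlying black box, and the greedy removal loop only loosely mirrors counterfactual explanation methods for behavioral data; both are stand-ins rather than the paper's actual evaluation.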
Related papers
- Deterministic AI Agent Personality Expression through Standard Psychological Diagnostics [0.0]
We show that AI models can express deterministic and consistent personalities when instructed using established psychological frameworks.
More advanced models like GPT-4o and o1 demonstrate the highest accuracy in expressing specified personalities.
These findings establish a foundation for creating AI agents with diverse and consistent personalities.
arXiv Detail & Related papers (2025-03-21T12:12:05Z)
- Twenty Years of Personality Computing: Threats, Challenges and Future Directions [76.46813522861632]
Personality Computing is a field at the intersection of Personality Psychology and Computer Science.
This paper provides an overview of the field, explores key methodologies, discusses the challenges and threats, and outlines potential future directions for responsible development and deployment of Personality Computing technologies.
arXiv Detail & Related papers (2025-03-03T22:03:48Z)
- AI Readiness in Healthcare through Storytelling XAI [0.5120567378386615]
We develop an approach that combines multi-task distillation with interpretability techniques to enable audience-centric explainability.
Our methods increase the trust of both domain experts and machine learning experts, enabling responsible AI.
arXiv Detail & Related papers (2024-10-24T13:30:18Z)
- Learning to Generate and Evaluate Fact-checking Explanations with Transformers [10.970249299147866]
The research contributes to the field of Explainable Artificial Intelligence (XAI).
We develop transformer-based fact-checking models that contextualise and justify their decisions by generating human-accessible explanations.
We emphasise the need for aligning Artificial Intelligence (AI)-generated explanations with human judgements.
arXiv Detail & Related papers (2024-10-21T06:22:51Z)
- People Attribute Purpose to Autonomous Vehicles When Explaining Their Behavior: Insights from Cognitive Science for Explainable AI [22.138074429937795]
It is often argued that effective human-centered explainable artificial intelligence (XAI) should resemble human reasoning.
We propose a framework of explanatory modes to analyze how people frame explanations, whether mechanistic, teleological, or counterfactual.
Our main finding is that participants deem teleological explanations to be of significantly better quality than counterfactual ones, with perceived teleology being the best predictor of perceived quality.
arXiv Detail & Related papers (2024-03-11T11:48:50Z)
- Natural Example-Based Explainability: a Survey [0.0]
This paper provides an overview of the state-of-the-art in natural example-based XAI.
It will explore the following family of methods: similar examples, counterfactual and semi-factual, influential instances, prototypes, and concepts.
arXiv Detail & Related papers (2023-09-05T09:46:20Z)
- Assessing Large Language Models' ability to predict how humans balance self-interest and the interest of others [0.0]
Generative artificial intelligence (AI) holds enormous potential to revolutionize decision-making processes.
By leveraging generative AI, humans can benefit from data-driven insights and predictions.
However, for AI to be a reliable assistant for decision-making it is crucial that it is able to capture the balance between self-interest and the interest of others.
arXiv Detail & Related papers (2023-07-21T13:23:31Z)
- Human Trajectory Forecasting with Explainable Behavioral Uncertainty [63.62824628085961]
Human trajectory forecasting helps to understand and predict human behaviors, enabling applications from social robots to self-driving cars.
Model-free methods offer superior prediction accuracy but lack explainability, while model-based methods provide explainability but cannot predict well.
We show that BNSP-SFM achieves up to a 50% improvement in prediction accuracy, compared with 11 state-of-the-art methods.
arXiv Detail & Related papers (2023-07-04T16:45:21Z)
- Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z)
- What Should I Know? Using Meta-gradient Descent for Predictive Feature Discovery in a Single Stream of Experience [63.75363908696257]
Computational reinforcement learning seeks to construct an agent's perception of the world through predictions of future sensations.
An open challenge in this line of work is determining from the infinitely many predictions that the agent could possibly make which predictions might best support decision-making.
We introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward.
arXiv Detail & Related papers (2022-06-13T21:31:06Z)
- Learning Theory of Mind via Dynamic Traits Attribution [59.9781556714202]
We propose a new neural ToM architecture that learns to generate a latent trait vector of an actor from the past trajectories.
This trait vector then multiplicatively modulates the prediction mechanism via a fast-weights scheme in the prediction neural network.
We empirically show that the fast weights provide a good inductive bias to model the character traits of agents and hence improves mindreading ability.
arXiv Detail & Related papers (2022-04-17T11:21:18Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with the aspects of modeling commonsense reasoning focusing on such domain as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.