Predicting Trust In Autonomous Vehicles: Modeling Young Adult Psychosocial Traits, Risk-Benefit Attitudes, And Driving Factors With Machine Learning
- URL: http://arxiv.org/abs/2409.08980v1
- Date: Fri, 13 Sep 2024 16:52:24 GMT
- Title: Predicting Trust In Autonomous Vehicles: Modeling Young Adult Psychosocial Traits, Risk-Benefit Attitudes, And Driving Factors With Machine Learning
- Authors: Robert Kaufman, Emi Lee, Manas Satish Bedmutha, David Kirsh, Nadir Weibel
- Abstract summary: Low trust remains a significant barrier to Autonomous Vehicle (AV) adoption.
We use machine learning to understand the most important factors that contribute to young adult trust.
- Score: 7.106124530294562
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Low trust remains a significant barrier to Autonomous Vehicle (AV) adoption. To design trustworthy AVs, we need to better understand the individual traits, attitudes, and experiences that impact people's trust judgements. We use machine learning to understand the most important factors that contribute to young adult trust based on a comprehensive set of personal factors gathered via survey (n = 1457). Factors ranged from psychosocial and cognitive attributes to driving style, experiences, and perceived AV risks and benefits. Using the explainable AI technique SHAP, we found that perceptions of AV risks and benefits, attitudes toward feasibility and usability, institutional trust, prior experience, and a person's mental model are the most important predictors. Surprisingly, psychosocial and many technology- and driving-specific factors were not strong predictors. Results highlight the importance of individual differences for designing trustworthy AVs for diverse groups and lead to key implications for future design and research.
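For readers unfamiliar with the SHAP workflow the abstract alludes to, the sketch below shows how mean absolute SHAP values can rank survey-derived predictors of a trust score. The feature names, synthetic data, and random-forest model are illustrative assumptions for the sketch only; the paper's actual feature set and modeling pipeline are described in the full text.

```python
# Minimal sketch: rank survey-derived predictors of AV trust with SHAP.
# The feature names, synthetic data, and model choice below are assumptions
# made for illustration, not the paper's actual pipeline.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1457  # survey sample size reported in the abstract

# Hypothetical survey features (risk-benefit attitudes, institutional trust, etc.).
X = pd.DataFrame({
    "perceived_risk": rng.normal(size=n),
    "perceived_benefit": rng.normal(size=n),
    "institutional_trust": rng.normal(size=n),
    "prior_av_experience": rng.integers(0, 2, size=n),
    "driving_style_aggressive": rng.normal(size=n),
})
# Synthetic trust score so the sketch runs end to end.
y = (0.6 * X["perceived_benefit"] - 0.5 * X["perceived_risk"]
     + 0.3 * X["institutional_trust"] + rng.normal(scale=0.5, size=n))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# TreeExplainer yields per-sample, per-feature SHAP values;
# the mean absolute value per feature gives a global importance ranking.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
importance = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(importance.sort_values(ascending=False))
```

A ranking like this is what supports the abstract's claim that risk-benefit perceptions and institutional trust dominate, while many psychosocial and driving-specific factors contribute little.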
Related papers
- What Did My Car Say? Impact of Autonomous Vehicle Explanation Errors and Driving Context On Comfort, Reliance, Satisfaction, and Driving Confidence [7.623776951753322]
We tested how autonomous vehicle (AV) explanation errors affected a passenger's comfort in relying on an AV.
Despite identical driving, explanation errors reduced ratings of the AV's driving ability.
Prior trust and expertise were positively associated with outcome ratings.
arXiv Detail & Related papers (2024-09-09T15:41:53Z)
- Trusting Your AI Agent Emotionally and Cognitively: Development and Validation of a Semantic Differential Scale for AI Trust [16.140485357046707]
We developed and validated a set of 27-item semantic differential scales for affective and cognitive trust.
Our empirical findings showed how the emotional and cognitive aspects of trust interact with each other and collectively shape a person's overall trust in AI agents.
arXiv Detail & Related papers (2024-07-25T18:55:33Z)
- Common (good) practices measuring trust in HRI [55.2480439325792]
Trust in robots is widely believed to be imperative for the adoption of robots into people's daily lives.
Researchers have been exploring how people trust robots in different ways.
Most roboticists agree that insufficient levels of trust lead to a risk of disengagement.
arXiv Detail & Related papers (2023-11-20T20:52:10Z)
- A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z)
- Decoding trust: A reinforcement learning perspective [11.04265850036115]
Behavioral experiments on the trust game have shown that trust and trustworthiness are universal among human beings.
We turn to the paradigm of reinforcement learning, where individuals update their strategies by evaluating the long-term return through accumulated experience.
In the pairwise scenario, we reveal that high levels of trust and trustworthiness emerge when individuals appreciate both their historical experience and returns in the future.
arXiv Detail & Related papers (2023-09-26T01:06:29Z)
- A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of a counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
arXiv Detail & Related papers (2023-08-02T09:48:08Z)
- Empirical Estimates on Hand Manipulation are Recoverable: A Step Towards Individualized and Explainable Robotic Support in Everyday Activities [80.37857025201036]
A key challenge for robotic systems is to figure out the behavior of another agent.
Drawing correct inferences is especially challenging when (confounding) factors are not controlled experimentally.
We propose equipping robots with the necessary tools to conduct observational studies on people.
arXiv Detail & Related papers (2022-01-27T22:15:56Z)
- Will We Trust What We Don't Understand? Impact of Model Interpretability and Outcome Feedback on Trust in AI [0.0]
We analyze the impact of interpretability and outcome feedback on trust in AI and on human performance in AI-assisted prediction tasks.
We found that interpretability led to no robust improvements in trust, while outcome feedback had a significantly greater and more reliable effect.
arXiv Detail & Related papers (2021-11-16T04:35:34Z)
- From the Head or the Heart? An Experimental Design on the Impact of Explanation on Cognitive and Affective Trust [13.274877222689168]
This study investigates the effectiveness of explanations on both cognitive and affective trust.
We expect these results to be of great significance in designing AV explanations to promote AV trust.
arXiv Detail & Related papers (2021-10-07T13:15:34Z)
- Adversarial Visual Robustness by Causal Intervention [56.766342028800445]
Adversarial training is the de facto most promising defense against adversarial examples.
Yet, its passive nature inevitably prevents it from being immune to unknown attackers.
We provide a causal viewpoint on adversarial vulnerability: the cause is the confounder that ubiquitously exists in learning.
arXiv Detail & Related papers (2021-06-17T14:23:54Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)