A risk model and analysis method for the psychological safety of human and autonomous vehicles interaction
- URL: http://arxiv.org/abs/2411.05732v3
- Date: Wed, 08 Oct 2025 12:07:29 GMT
- Title: A risk model and analysis method for the psychological safety of human and autonomous vehicles interaction
- Authors: Yandika Sirgabsou, Benjamin Hardin, François Leblanc, Efi Raili, Pericle Salvini, David Jackson, Marina Jirotka, Lars Kunze
- Abstract summary: The paper introduces a definition of psychological safety in the AV context. It proposes a risk model for identifying and assessing AV psychological hazards and risks. The paper illustrates the application of the framework for assessing potential psychological hazards using a scenario involving a family's experience with an autonomous vehicle.
- Score: 4.627842277006583
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The rapid advancement of artificial intelligence and autonomous driving technologies has significantly propelled the development of autonomous vehicles (AVs). However, psychological barriers continue to impede widespread AV adoption, despite technological progress. This paper addresses the critical yet often overlooked aspect of psychological safety in AV design and operation. While traditional safety standards focus primarily on physical safety, this paper emphasizes the psychological implications that arise from human interactions with autonomous vehicles, highlighting trust and perceived risk as significant factors influencing user acceptance. The paper makes a methodological proposal: a framework for addressing AV psychological safety consisting of three key contributions. First, it introduces a definition of psychological safety in the AV context. Second, it proposes a risk model for identifying and assessing AV psychological hazards and risks; PsySIL (Psychological Safety Integrity Level), a classification of AV psychological risk levels, is developed. Third, an adapted system-theoretic analysis method for AV psychological safety is proposed. The paper illustrates the application of the framework for assessing potential psychological hazards using a scenario involving a family's experience with an autonomous vehicle, pioneering a systems approach to evaluating situations that could lead to psychological harm. By establishing a framework that incorporates psychological safety alongside physical safety, the paper contributes to the broader discourse on the safe deployment of autonomous vehicles, aiming to guide future developments in user-centred design and regulatory practices, while acknowledging the limitations of applying the proposals to a rather simple but pedagogical illustrative example.
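The abstract does not spell out how PsySIL levels are assigned. As a hedged illustration only, a severity-by-likelihood mapping in the spirit of automotive integrity levels might look like the sketch below; every scale name, category, and threshold is a hypothetical assumption, not the paper's actual scheme.

```python
# Hypothetical sketch of a PsySIL-style classification. The paper's actual
# severity/likelihood scales and level boundaries are not given in the
# abstract, so all names and thresholds below are illustrative assumptions.

SEVERITY = {"discomfort": 1, "distress": 2, "trauma": 3}   # psychological harm severity
LIKELIHOOD = {"rare": 1, "occasional": 2, "frequent": 3}   # exposure likelihood

def psysil(severity: str, likelihood: str) -> str:
    """Map a psychological hazard to a hypothetical integrity level PsySIL 1-4."""
    score = SEVERITY[severity] * LIKELIHOOD[likelihood]
    if score <= 2:
        return "PsySIL 1"
    if score <= 4:
        return "PsySIL 2"
    if score <= 6:
        return "PsySIL 3"
    return "PsySIL 4"

# e.g. a frequently occurring distressing interaction:
print(psysil("distress", "frequent"))  # -> PsySIL 3
```

A real risk model would of course justify the scales empirically; the point of the sketch is only that a discrete severity-likelihood matrix yields a small, auditable set of risk levels.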
Related papers
- Safety First: Psychological Safety as the Key to AI Transformation [0.36944296923226316]
This study examines whether psychological safety is associated with AI adoption and usage in the workplace. Logistic and linear regression analyses show that psychological safety reliably predicts whether employees adopt AI tools. The study underscores the need to distinguish between adoption and sustained use.
arXiv Detail & Related papers (2026-02-26T17:53:37Z) - ANNIE: Be Careful of Your Robots [48.89876809734855]
We present the first systematic study of adversarial safety attacks on embodied AI systems. We show attack success rates exceeding 50% across all safety categories. Results expose a previously underexplored but highly consequential attack surface in embodied AI systems.
arXiv Detail & Related papers (2025-09-03T15:00:28Z) - Report on NSF Workshop on Science of Safe AI [75.96202715567088]
New advances in machine learning are leading to new opportunities to develop technology-based solutions to societal problems. To fulfill the promise of AI, we must address how to develop AI-based systems that are accurate and performant but also safe and trustworthy. This report is the result of the discussions in the working groups that addressed different aspects of safety at the workshop.
arXiv Detail & Related papers (2025-06-24T18:55:29Z) - Probabilistic modelling and safety assurance of an agriculture robot providing light-treatment [0.0]
Continued adoption of agricultural robots presupposes the farmer's trust in the reliability, robustness and safety of the new technology. This paper considers a probabilistic modelling and risk analysis framework for use in the early development phases.
arXiv Detail & Related papers (2025-06-24T13:39:32Z) - SafeAgent: Safeguarding LLM Agents via an Automated Risk Simulator [77.86600052899156]
Large Language Model (LLM)-based agents are increasingly deployed in real-world applications. We propose AutoSafe, the first framework that systematically enhances agent safety through fully automated synthetic data generation. We show that AutoSafe boosts safety scores by 45% on average and achieves a 28.91% improvement on real-world tasks.
arXiv Detail & Related papers (2025-05-23T10:56:06Z) - UniSTPA: A Safety Analysis Framework for End-to-End Autonomous Driving [10.063740202765343]
We propose the Unified System Theoretic Process Analysis (UniSTPA) framework. UniSTPA performs hazard analysis not only at the component level but also within the model's internal layers. The proposed framework thus offers both theoretical and practical guidance for the safe development and deployment of end-to-end autonomous driving systems.
arXiv Detail & Related papers (2025-05-21T01:23:31Z) - Impact Analysis of Inference Time Attack of Perception Sensors on Autonomous Vehicles [11.693109854958479]
We propose an impact analysis based on inference time attacks for autonomous vehicles. We demonstrate in a simulation system that such inference time attacks can also threaten the safety of both the ego vehicle and other traffic participants.
arXiv Detail & Related papers (2025-05-05T23:00:27Z) - Towards Benchmarking and Assessing the Safety and Robustness of Autonomous Driving on Safety-critical Scenarios [30.413293630867418]
Current evaluations of autonomous driving are typically conducted in natural driving scenarios.
Many accidents occur in edge cases, also known as safety-critical scenarios.
There is currently no clear definition of what constitutes a safety-critical scenario.
arXiv Detail & Related papers (2025-03-31T04:13:32Z) - Don't Let Your Robot be Harmful: Responsible Robotic Manipulation [57.70648477564976]
Unthinking execution of human instructions in robotic manipulation can lead to severe safety risks.
We present Safety-as-policy, which includes (i) a world model to automatically generate scenarios containing safety risks and conduct virtual interactions, and (ii) a mental model to infer consequences with reflections.
We show that Safety-as-policy can avoid risks and efficiently complete tasks in both synthetic dataset and real-world experiments.
arXiv Detail & Related papers (2024-11-27T12:27:50Z) - HAICOSYSTEM: An Ecosystem for Sandboxing Safety Risks in Human-AI Interactions [76.42274173122328]
We present HAICOSYSTEM, a framework examining AI agent safety within diverse and complex social interactions.
We run 1840 simulations based on 92 scenarios across seven domains (e.g., healthcare, finance, education).
Our experiments show that state-of-the-art LLMs, both proprietary and open-sourced, exhibit safety risks in over 50% cases.
arXiv Detail & Related papers (2024-09-24T19:47:21Z) - Predicting Trust In Autonomous Vehicles: Modeling Young Adult Psychosocial Traits, Risk-Benefit Attitudes, And Driving Factors With Machine Learning [7.106124530294562]
Low trust remains a significant barrier to Autonomous Vehicle (AV) adoption. We use machine learning to understand the most important factors that contribute to young adult trust.
arXiv Detail & Related papers (2024-09-13T16:52:24Z) - EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z) - Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z) - Incorporating Explanations into Human-Machine Interfaces for Trust and Situation Awareness in Autonomous Vehicles [4.1636282808157254]
We study the role of explainable AI and human-machine interface jointly in building trust in vehicle autonomy.
We present a situation awareness framework for calibrating users' trust in self-driving behavior.
arXiv Detail & Related papers (2024-04-10T23:02:13Z) - Risks from Language Models for Automated Mental Healthcare: Ethics and Structure for Implementation [0.0]
This paper proposes a structured framework that delineates levels of autonomy, outlines ethical requirements, and defines beneficial default behaviors for AI agents.
We also evaluate 14 state-of-the-art language models (ten off-the-shelf, four fine-tuned) using 16 mental health-related questionnaires.
arXiv Detail & Related papers (2024-04-02T15:05:06Z) - Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z) - PsychoGAT: A Novel Psychological Measurement Paradigm through Interactive Fiction Games with LLM Agents [68.50571379012621]
Psychological measurement is essential for mental health, self-understanding, and personal development.
PsychoGAT (Psychological Game AgenTs) achieves statistically significant excellence in psychometric metrics such as reliability, convergent validity, and discriminant validity.
arXiv Detail & Related papers (2024-02-19T18:00:30Z) - PsySafe: A Comprehensive Framework for Psychological-based Attack, Defense, and Evaluation of Multi-agent System Safety [70.84902425123406]
Multi-agent systems, when enhanced with Large Language Models (LLMs), exhibit profound capabilities in collective intelligence.
However, the potential misuse of this intelligence for malicious purposes presents significant risks.
We propose a framework (PsySafe) grounded in agent psychology, focusing on identifying how dark personality traits in agents can lead to risky behaviors.
Our experiments reveal several intriguing phenomena, such as the collective dangerous behaviors among agents, agents' self-reflection when engaging in dangerous behavior, and the correlation between agents' psychological assessments and dangerous behaviors.
arXiv Detail & Related papers (2024-01-22T12:11:55Z) - Empowering Autonomous Driving with Large Language Models: A Safety Perspective [82.90376711290808]
This paper explores the integration of Large Language Models (LLMs) into Autonomous Driving systems.
LLMs are intelligent decision-makers in behavioral planning, augmented with a safety verifier shield for contextual safety learning.
We present two key studies in a simulated environment: an adaptive LLM-conditioned Model Predictive Control (MPC) and an LLM-enabled interactive behavior planning scheme with a state machine.
arXiv Detail & Related papers (2023-11-28T03:13:09Z) - Common (good) practices measuring trust in HRI [55.2480439325792]
Trust in robots is widely believed to be imperative for the adoption of robots into people's daily lives.
Researchers have been exploring how people trust robots in different ways.
Most roboticists agree that insufficient levels of trust lead to a risk of disengagement.
arXiv Detail & Related papers (2023-11-20T20:52:10Z) - Autonomous Vehicles an overview on system, cyber security, risks, issues, and a way forward [0.0]
This chapter explores the complex realm of autonomous cars, analyzing their fundamental components and operational characteristics.
The primary focus of this investigation lies in the realm of cybersecurity, specifically in the context of autonomous vehicles.
A comprehensive analysis will be conducted to explore various risk management solutions aimed at protecting these vehicles from potential threats.
arXiv Detail & Related papers (2023-09-25T15:19:09Z) - Neurosymbolic Meta-Reinforcement Lookahead Learning Achieves Safe Self-Driving in Non-Stationary Environments [17.39580032857777]
This study introduces an algorithm for online meta-reinforcement learning, employing lookahead symbolic constraints based on Neurosymbolic Meta-Reinforcement Lookahead Learning (NUMERLA).
Experimental results demonstrate NUMERLA confers the self-driving agent with the capacity for real-time adaptability, leading to safe and self-adaptive driving under non-stationary urban human-vehicle interaction scenarios.
arXiv Detail & Related papers (2023-09-05T15:47:40Z) - A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
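As a rough, hedged illustration of the idea (not the paper's actual formulation), a counterfactual safety margin in one dimension can be computed by growing a deviation from the nominal trajectory until a collision condition is met; all quantities and thresholds below are invented for the sketch.

```python
import numpy as np

def counterfactual_safety_margin(nominal, obstacle, step=0.1, max_dev=10.0):
    """Smallest uniform lateral deviation from a nominal 1-D trajectory that
    brings the vehicle within collision distance of a point obstacle.
    Simplified 1-D illustration; the paper's formulation is richer."""
    collision_dist = 0.5  # assumed collision radius, illustrative only
    dev = 0.0
    while dev <= max_dev:
        perturbed = nominal + dev
        if np.min(np.abs(perturbed - obstacle)) < collision_dist:
            return dev
        dev += step
    return float("inf")  # no deviation up to max_dev causes a collision

nominal = np.linspace(0.0, 5.0, 50)  # nominal lateral positions
obstacle = 7.0                       # obstacle lateral position
margin = counterfactual_safety_margin(nominal, obstacle)
print(f"counterfactual safety margin: {margin:.1f}")
```

A larger margin means the nominal behavior must be perturbed more before a collision becomes possible, i.e. the behavior is scored as less risky.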
arXiv Detail & Related papers (2023-08-02T09:48:08Z) - What's on your mind? A Mental and Perceptual Load Estimation Framework
towards Adaptive In-vehicle Interaction while Driving [55.41644538483948]
We analyze the effects of mental workload and perceptual load on psychophysiological dimensions.
We classify the mental and perceptual load levels through the fusion of these measurements.
We report up to 89% mental workload classification accuracy and provide a real-time minimally-intrusive solution.
arXiv Detail & Related papers (2022-08-10T21:19:49Z)
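The measurement-fusion idea in the last entry can be sketched, very loosely, as combining normalized scores from several physiological channels into one coarse load level. The modalities, normalization constants, and thresholds below are illustrative assumptions, not the paper's actual pipeline or classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Simulated per-modality measurements (all distributions illustrative).
heart_rate = rng.normal(70, 10, n)    # cardiac channel, bpm
skin_cond = rng.normal(2.0, 0.3, n)   # electrodermal channel, microsiemens
pupil_diam = rng.normal(4.0, 0.5, n)  # ocular channel, mm

def load_level(hr, sc, pd_):
    """Fuse z-normalized modality scores into a coarse load level.
    Normalization constants and cut points are hypothetical."""
    score = (hr - 70) / 10 + (sc - 2.0) / 0.3 + (pd_ - 4.0) / 0.5
    if score < -1.0:
        return "low"
    if score < 1.0:
        return "medium"
    return "high"

levels = [load_level(h, s, p) for h, s, p in zip(heart_rate, skin_cond, pupil_diam)]
print({lvl: levels.count(lvl) for lvl in ("low", "medium", "high")})
```

A real system would replace the hand-set cut points with a trained classifier over labeled sessions; the sketch only shows the fusion step of concatenating normalized channels into one decision.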
This list is automatically generated from the titles and abstracts of the papers in this site.