Contextual Intelligent Decisions: Expert Moderation of Machine Outputs
for Fair Assessment of Commercial Driving
- URL: http://arxiv.org/abs/2202.09816v1
- Date: Sun, 20 Feb 2022 13:48:41 GMT
- Authors: Jimiama Mafeni Mase, Direnc Pekaslan, Utkarsh Agrawal, Mohammad
Mesgarpour, Peter Chapman, Mercedes Torres Torres, Grazziela P. Figueredo
- Abstract summary: We introduce a methodology towards a fairer automatic road safety assessment of drivers' behaviours.
The contextual moderation embedded within the intelligent decision-making process is underpinned by expert input.
We develop an interval-valued response-format questionnaire to capture the uncertainty of the influence of factors and variance amongst experts' views.
- Score: 2.7323386266136125
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Commercial driving is a complex multifaceted task influenced by personal
traits and external contextual factors, such as weather, traffic, road
conditions, etc. Previous intelligent commercial driver-assessment systems do
not consider these factors when analysing the impact of driving behaviours on
road safety, potentially producing biased, inaccurate, and unfair assessments.
In this paper, we introduce a methodology (Expert-centered Driver Assessment)
towards a fairer automatic road safety assessment of drivers' behaviours,
taking into consideration behaviours as a response to contextual factors. The
contextual moderation embedded within the intelligent decision-making process
is underpinned by expert input, comprising a range of associated
stakeholders in the industry. Guided by the literature and expert input, we
identify critical factors affecting driving and develop an interval-valued
response-format questionnaire to capture the uncertainty of the influence of
factors and variance amongst experts' views. Questionnaire data are modelled
and analysed using fuzzy sets, as they provide a suitable computational
approach to be incorporated into decision-making systems with uncertainty. The
methodology has allowed us to identify the factors that need to be considered
when moderating driver sensor data, and to effectively capture experts'
opinions about the effects of the factors. An example of our methodology using
Heavy Goods Vehicle professionals' input is provided to demonstrate how the
expert-centred moderation can be embedded in intelligent driver assessment
systems.
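The abstract describes capturing expert uncertainty with interval-valued questionnaire responses modelled as fuzzy sets. One common way to do this (the interval agreement approach, not necessarily the exact construction used in the paper) builds a fuzzy membership function where the membership of a value is the fraction of expert intervals that cover it. The sketch below is illustrative only; the factor name and intervals are hypothetical, not data from the paper.

```python
# Minimal sketch: aggregating interval-valued expert responses into a
# fuzzy membership function (interval agreement approach). The factor
# and the intervals are hypothetical examples, not data from the paper.

def membership(x, intervals):
    """Fraction of expert intervals [lo, hi] that contain x."""
    if not intervals:
        return 0.0
    return sum(lo <= x <= hi for lo, hi in intervals) / len(intervals)

# Hypothetical expert ratings (0-10 scale) of how strongly heavy rain
# should moderate the assessed severity of a harsh-braking event.
rain_intervals = [(6, 9), (5, 8), (7, 10), (6, 8)]

print(membership(7.0, rain_intervals))  # inside all four intervals -> 1.0
print(membership(9.5, rain_intervals))  # inside one interval -> 0.25
```

The resulting membership function reflects both the influence of the factor (where the intervals sit on the scale) and the variance amongst experts (how widely the intervals spread), which is what the interval-valued response format is designed to capture.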
Related papers
- Traffic and Safety Rule Compliance of Humans in Diverse Driving Situations [48.924085579865334]
Analyzing human data is crucial for developing autonomous systems that replicate safe driving practices.
This paper presents a comparative evaluation of human compliance with traffic and safety rules across multiple trajectory prediction datasets.
arXiv Detail & Related papers (2024-11-04T09:21:00Z)
- "A Good Bot Always Knows Its Limitations": Assessing Autonomous System Decision-making Competencies through Factorized Machine Self-confidence [5.167803438665586]
Factorized Machine Self-confidence (FaMSeC) provides a holistic description of factors driving an algorithmic decision-making process.
Indicators are derived from hierarchical 'problem-solving statistics' embedded within broad classes of probabilistic decision-making algorithms.
FaMSeC allows algorithmic 'goodness of fit' evaluations to be easily incorporated into the design of many kinds of autonomous agents.
arXiv Detail & Related papers (2024-07-29T01:22:04Z)
- Exploring the Causality of End-to-End Autonomous Driving [57.631400236930375]
We propose a comprehensive approach to explore and analyze the causality of end-to-end autonomous driving.
Our work is the first to unveil the mystery of end-to-end autonomous driving and turn the black box into a white one.
arXiv Detail & Related papers (2024-07-09T04:56:11Z)
- A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
arXiv Detail & Related papers (2023-08-02T09:48:08Z)
- Grasping Causality for the Explanation of Criticality for Automated Driving [0.0]
This work introduces a formalization of causal queries whose answers facilitate a causal understanding of safety-relevant influencing factors for automated driving.
Based on Judea Pearl's causal theory, we define a causal relation as a causal structure together with a context.
As availability and quality of data are imperative for validly estimating answers to the causal queries, we also discuss requirements on real-world and synthetic data acquisition.
arXiv Detail & Related papers (2022-10-27T12:37:00Z)
- Architectural patterns for handling runtime uncertainty of data-driven models in safety-critical perception [1.7616042687330642]
We present additional architectural patterns for handling uncertainty estimation.
We evaluate the four patterns qualitatively and quantitatively with respect to safety and performance gains.
We conclude that the consideration of context information of the driving situation makes it possible to accept more or less uncertainty depending on the inherent risk of the situation.
arXiv Detail & Related papers (2022-06-14T13:31:36Z)
- Evaluating Automated Driving Planner Robustness against Adversarial Influence [0.0]
This paper aims to help researchers assess the robustness of protections for machine learning-enabled planners against adversarial influence.
We argue that adversarial evaluation fundamentally requires a process that seeks to defeat a specific protection.
This type of inference requires precise statements about threats, protections, and aspects of planning decisions to be guarded.
arXiv Detail & Related papers (2022-05-29T15:39:26Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse to this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Improving Robustness of Learning-based Autonomous Steering Using Adversarial Images [58.287120077778205]
We introduce a framework for analyzing robustness of the learning algorithm w.r.t varying quality in the image input for autonomous driving.
Using the results of the sensitivity analysis, we propose an algorithm to improve the overall performance of the task of "learning to steer".
arXiv Detail & Related papers (2021-02-26T02:08:07Z)
- Towards robust sensing for Autonomous Vehicles: An adversarial perspective [82.83630604517249]
It is of primary importance that the resulting decisions are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of their sensing system(s) is necessary in order to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z)
- Modeling Perception Errors towards Robust Decision Making in Autonomous Vehicles [11.503090828741191]
We propose a simulation-based methodology towards answering the question: is a perception subsystem sufficient for the decision making subsystem to make robust, safe decisions?
We show how to analyze the impact of different kinds of sensing and perception errors on the behavior of the autonomous system.
arXiv Detail & Related papers (2020-01-31T08:02:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.