Risk-Driven Design of Perception Systems
- URL: http://arxiv.org/abs/2205.10677v1
- Date: Sat, 21 May 2022 21:14:56 GMT
- Title: Risk-Driven Design of Perception Systems
- Authors: Anthony L. Corso, Sydney M. Katz, Craig Innes, Xin Du, Subramanian
Ramamoorthy, Mykel J. Kochenderfer
- Abstract summary: It is important that we design perception systems to minimize errors that reduce the overall safety of the system.
We develop a risk-driven approach to designing perception systems that accounts for the effect of perceptual errors on the performance of the fully-integrated, closed-loop system.
We evaluate our techniques on a realistic vision-based aircraft detect and avoid application and show that risk-driven design reduces collision risk by 37% over a baseline system.
- Score: 47.787943101699966
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern autonomous systems rely on perception modules to process complex
sensor measurements into state estimates. These estimates are then passed to a
controller, which uses them to make safety-critical decisions. It is therefore
important that we design perception systems to minimize errors that reduce the
overall safety of the system. We develop a risk-driven approach to designing
perception systems that accounts for the effect of perceptual errors on the
performance of the fully-integrated, closed-loop system. We formulate a risk
function to quantify the effect of a given perceptual error on overall safety,
and show how we can use it to design safer perception systems by including a
risk-dependent term in the loss function and generating training data in
risk-sensitive regions. We evaluate our techniques on a realistic vision-based
aircraft detect and avoid application and show that risk-driven design reduces
collision risk by 37% over a baseline system.
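The risk-dependent loss term lends itself to a short illustration. The sketch below is a minimal PyTorch example assuming a precomputed per-sample risk weight and a simple trade-off parameter alpha; the paper's actual risk function, network, and training setup are not reproduced here, and all names are illustrative.

```python
# Hypothetical sketch of a risk-weighted training loss. Each sample is
# assumed to carry a precomputed risk weight r(s), obtained by evaluating
# a risk function on the closed-loop system; names are illustrative.
import torch
import torch.nn as nn

def risk_weighted_loss(predictions, targets, risk_weights, alpha=1.0):
    """Standard regression loss plus a risk-dependent term.

    risk_weights: per-sample risk values, higher where a perceptual
    error is more likely to compromise overall safety.
    alpha: trade-off between raw accuracy and risk sensitivity.
    """
    per_sample = nn.functional.mse_loss(
        predictions, targets, reduction="none"
    ).mean(dim=-1)
    # Up-weight errors made in risk-sensitive regions of the state space.
    return (per_sample * (1.0 + alpha * risk_weights)).mean()

# Example usage with dummy data:
preds = torch.randn(8, 2, requires_grad=True)
targets = torch.randn(8, 2)
risk = torch.rand(8)  # stand-in for the risk function's output
loss = risk_weighted_loss(preds, targets, risk)
loss.backward()
```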
Related papers
- From Silos to Systems: Process-Oriented Hazard Analysis for AI Systems [2.226040060318401]
We translate System Theoretic Process Analysis (STPA) for analyzing AI operation and development processes.
We focus on systems that rely on machine learning algorithms and conducted STPA on three case studies.
We find that key concepts and steps of conducting an STPA readily apply, albeit with a few adaptations tailored for AI systems.
arXiv Detail & Related papers (2024-10-29T20:43:18Z) - ABCD: Trust enhanced Attention based Convolutional Autoencoder for Risk Assessment [0.0]
Anomaly detection in industrial systems is crucial for preventing equipment failures, ensuring risk identification, and maintaining overall system efficiency.
Traditional monitoring methods often rely on fixed thresholds and empirical rules, which may not be sensitive enough to detect subtle changes in system health and predict impending failures.
This paper proposes an attention-based convolutional autoencoder (ABCD) for risk detection and maps the derived risk value to maintenance planning.
ABCD learns the normal behavior of conductivity from historical data of a real-world industrial cooling system and reconstructs the input data, identifying anomalies that deviate from the expected patterns.
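As a rough illustration of this reconstruction-based scheme, the sketch below shows a plain convolutional autoencoder scoring anomalies by reconstruction error; it omits the attention and trust components that give ABCD its name, and the architecture, window size, and threshold are all assumptions.

```python
# Minimal sketch of reconstruction-based anomaly detection, in the
# spirit of ABCD but without the attention/trust components described
# in the paper; all architectural choices are illustrative.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 8, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(8, 16, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(16, 8, 5, stride=2, padding=2,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(8, 1, 5, stride=2, padding=2,
                               output_padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_score(model, x):
    # Reconstruction error as the risk signal: large deviations from
    # the learned "normal" behavior indicate potential faults.
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=(1, 2))

model = ConvAutoencoder()
batch = torch.randn(4, 1, 64)  # stand-in for sensor-signal windows
scores = anomaly_score(model, batch)
flags = scores > scores.mean() + 2 * scores.std()  # illustrative threshold
```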
arXiv Detail & Related papers (2024-04-24T20:15:57Z) - Leveraging Traceability to Integrate Safety Analysis Artifacts into the
Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z) - Symbolic Perception Risk in Autonomous Driving [4.371383574272895]
We develop a novel framework to assess the risk of misperception in a traffic sign classification task.
We consider the problem in an autonomous driving setting, where visual input quality gradually improves.
We show the closed-form representation of the conditional value-at-risk (CVaR) of misperception.
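For reference, the generic definition of CVaR that such a closed-form result specializes; the paper's specific misperception model is not reproduced here.

```latex
% Standard CVaR of a loss random variable X at confidence level alpha:
\mathrm{VaR}_{\alpha}(X) = \inf\{\, x : \Pr(X \le x) \ge \alpha \,\}
\qquad
\mathrm{CVaR}_{\alpha}(X) = \mathbb{E}\!\left[\, X \mid X \ge \mathrm{VaR}_{\alpha}(X) \,\right]
```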
arXiv Detail & Related papers (2023-03-16T15:49:24Z) - Foveate, Attribute, and Rationalize: Towards Physically Safe and
Trustworthy AI [76.28956947107372]
Covertly unsafe text is an area of particular interest, as such text may arise from everyday scenarios and is challenging to detect as harmful.
We propose FARM, a novel framework leveraging external knowledge for trustworthy rationale generation in the context of safety.
Our experiments show that FARM obtains state-of-the-art results on the SafeText dataset, showing absolute improvement in safety classification accuracy by 5.9%.
arXiv Detail & Related papers (2022-12-19T17:51:47Z) - Learning Disturbances Online for Risk-Aware Control: Risk-Aware Flight
with Less Than One Minute of Data [33.7789991023177]
Recent advances in safety-critical risk-aware control are predicated on a priori knowledge of the disturbances a system might face.
This paper proposes a method to efficiently learn these disturbances in a risk-aware online context.
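A minimal sketch of the general idea, assuming disturbances are bounded empirically from residuals between observed and nominally predicted states; the paper's actual risk-aware learning scheme is more involved, and all names here are illustrative.

```python
# Hypothetical sketch of online disturbance estimation from dynamics
# residuals; a risk-aware controller could consume the resulting
# high-quantile bound as a conservative disturbance estimate.
import numpy as np

class OnlineDisturbanceEstimator:
    def __init__(self, quantile=0.95):
        self.residuals = []
        self.quantile = quantile

    def update(self, x_next, x_pred):
        # Residual between the observed and nominally predicted state.
        self.residuals.append(np.linalg.norm(x_next - x_pred))

    def bound(self):
        # Empirical high-quantile bound on the disturbance magnitude.
        if not self.residuals:
            return 0.0
        return float(np.quantile(self.residuals, self.quantile))

est = OnlineDisturbanceEstimator()
for _ in range(50):
    x_pred = np.zeros(3)
    x_next = x_pred + 0.1 * np.random.randn(3)  # simulated disturbance
    est.update(x_next, x_pred)
print(est.bound())
```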
arXiv Detail & Related papers (2022-12-12T21:40:23Z) - SOTIF Entropy: Online SOTIF Risk Quantification and Mitigation for
Autonomous Driving [16.78084912175149]
This paper proposes the "Self-Surveillance and Self-Adaption System" as a systematic approach to minimizing SOTIF risk online.
The core of this system is the risk monitoring of the implemented artificial intelligence algorithms within the autonomous vehicles.
The inherent perception algorithm risk and external collision risk are jointly quantified via SOTIF entropy.
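As a toy illustration of entropy-based risk monitoring, the sketch below scores a perception model's predictive entropy; the paper's SOTIF entropy additionally folds in external collision risk, which this example omits entirely.

```python
# Illustrative sketch of monitoring a perception model's predictive
# entropy as an online risk signal; not the paper's full SOTIF entropy.
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    """Shannon entropy of a softmax output; high entropy suggests the
    perception algorithm is uncertain and risk may be elevated."""
    p = np.clip(probs, eps, 1.0)
    return float(-np.sum(p * np.log(p)))

confident = np.array([0.97, 0.01, 0.01, 0.01])
uncertain = np.array([0.25, 0.25, 0.25, 0.25])
assert predictive_entropy(confident) < predictive_entropy(uncertain)
```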
arXiv Detail & Related papers (2022-11-08T05:02:12Z) - Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
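A minimal sketch of a CBF safety filter on a one-dimensional single integrator, without the model-uncertainty-aware terms the paper introduces; the dynamics, barrier, and constants are all illustrative.

```python
# Toy CBF safety filter for x_dot = u with barrier h(x) = x, which
# enforces the safe set x >= 0. In this scalar case the usual CBF
# quadratic program reduces to a simple clamp on the input.
def cbf_safe_input(x, u_desired, gamma=1.0):
    # CBF condition: h_dot + gamma * h >= 0  =>  u >= -gamma * x.
    u_min = -gamma * x
    return max(u_desired, u_min)

x = 0.5
u = cbf_safe_input(x, u_desired=-2.0)  # aggressive command toward x < 0
assert x + 0.01 * u >= 0               # one Euler step stays safe
```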
arXiv Detail & Related papers (2022-08-23T05:02:09Z) - Overcoming Failures of Imagination in AI Infused System Development and
Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure"
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z) - Towards robust sensing for Autonomous Vehicles: An adversarial
perspective [82.83630604517249]
It is of primary importance that the resulting decisions are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of a vehicle's sensing systems is necessary in order to build and deploy safer systems.
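One standard way to craft such perturbations is the fast gradient sign method (FGSM); the sketch below uses a placeholder model and epsilon, and is not drawn from the survey itself.

```python
# Sketch of the fast gradient sign method (FGSM) for crafting
# adversarial perturbations; model and epsilon are placeholders.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return an adversarially perturbed copy of input x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that maximally increases the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(2, 3, 32, 32)
y = torch.tensor([1, 7])
x_adv = fgsm_perturb(model, x, y)
```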
arXiv Detail & Related papers (2020-07-14T05:25:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.