Safety Concerns and Mitigation Approaches Regarding the Use of Deep
Learning in Safety-Critical Perception Tasks
- URL: http://arxiv.org/abs/2001.08001v1
- Date: Wed, 22 Jan 2020 13:22:59 GMT
- Title: Safety Concerns and Mitigation Approaches Regarding the Use of Deep
Learning in Safety-Critical Perception Tasks
- Authors: Oliver Willers, Sebastian Sudholt, Shervin Raafatnia, Stephanie
Abrecht
- Abstract summary: Safety concerns are the main reason why deep learning is not yet used for autonomous agents at large scale.
Deep learning approaches typically exhibit black-box behavior, which makes them hard to evaluate with respect to safety-critical aspects.
We present extensive discussions of possible mitigation methods and give an outlook on which mitigation methods are still missing in order to facilitate a safety argument for a deep learning method.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning methods are widely regarded as indispensable when it comes to
designing perception pipelines for autonomous agents such as robots, drones or
automated vehicles. The main reason, however, that deep learning is not yet used
for autonomous agents at large scale is safety concerns. Deep learning
approaches typically exhibit black-box behavior, which makes them hard to
evaluate with respect to safety-critical aspects. While there has been some
work on safety in deep learning, most papers focus on high-level safety
concerns. In this work, we dive into the safety concerns of deep learning
methods and present a concise enumeration of them on a deeply technical level.
Additionally, we present extensive discussions of possible mitigation methods
and give an outlook on which mitigation methods are still missing in order to
facilitate a safety argument for a deep learning method.
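The abstract does not name concrete mitigation methods here, but a runtime mitigation commonly discussed in this line of work is uncertainty monitoring of the perception network. The following is a minimal, hypothetical sketch (assuming PyTorch; the model, threshold, and names are illustrative and not taken from the paper) of Monte-Carlo-dropout uncertainty estimation that could flag low-confidence predictions for a conservative fallback.

```python
# Hypothetical sketch: Monte-Carlo-dropout uncertainty as a runtime safety monitor.
# Assumes PyTorch; the network and threshold are illustrative, not from the paper.
import torch
import torch.nn as nn


class SmallClassifier(nn.Module):
    """Stand-in perception network with dropout so MC sampling is possible."""

    def __init__(self, in_dim: int = 32, num_classes: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64),
            nn.ReLU(),
            nn.Dropout(p=0.2),
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


@torch.no_grad()
def mc_dropout_predict(model: nn.Module, x: torch.Tensor, samples: int = 20):
    """Run several stochastic forward passes and return mean class probabilities
    and predictive entropy (a simple uncertainty score)."""
    model.train()  # keep dropout active at inference time
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(samples)]
    )
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy


if __name__ == "__main__":
    model = SmallClassifier()
    x = torch.randn(8, 32)  # batch of dummy feature vectors
    mean_probs, entropy = mc_dropout_predict(model, x)
    # Flag inputs whose predictive entropy exceeds an illustrative threshold,
    # e.g. to hand them over to a conservative fallback behavior.
    flagged = entropy > 1.0
    print(mean_probs.argmax(dim=-1), flagged)
```

Keeping dropout active at inference time turns a single prediction into a small ensemble, so high predictive entropy can serve as one possible trigger for a safety fallback; this is only one example of the kind of mitigation such a safety argument could build on.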
Related papers
- Handling Long-Term Safety and Uncertainty in Safe Reinforcement Learning [17.856459823003277]
Safety is one of the key issues preventing the deployment of reinforcement learning techniques in real-world robots.
In this paper, we bridge the gap by extending the safe exploration method, ATACOM, with learnable constraints.
arXiv Detail & Related papers (2024-09-18T15:08:41Z)
- Evaluation of Safety Constraints in Autonomous Navigation with Deep Reinforcement Learning [62.997667081978825]
We compare two learnable navigation policies: safe and unsafe.
The safe policy takes the constraints into account, while the other does not.
We show that the safe policy generates trajectories with more clearance (distance to the obstacles) and causes fewer collisions during training, without sacrificing the overall performance.
arXiv Detail & Related papers (2023-07-27T01:04:57Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Enhancing Navigational Safety in Crowded Environments using Semantic-Deep-Reinforcement-Learning-based Navigation [5.706538676509249]
We propose a semantic deep-reinforcement-learning-based navigation approach that teaches object-specific safety rules by considering high-level obstacle information.
We demonstrate that the agent can learn to navigate more safely by keeping an individual safety distance that depends on the semantic information.
arXiv Detail & Related papers (2021-09-23T10:50:47Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Learning to be Safe: Deep RL with a Safety Critic [72.00568333130391]
A natural first approach toward safe RL is to manually specify constraints on the policy's behavior.
We propose to learn how to be safe in one set of tasks and environments, and then use that learned intuition to constrain future behaviors; a rough illustration of this safety-critic idea is sketched after this list.
arXiv Detail & Related papers (2020-10-27T20:53:20Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- Provably Safe PAC-MDP Exploration Using Analogies [87.41775218021044]
A key challenge in applying reinforcement learning to safety-critical domains is understanding how to balance exploration and safety.
We propose Analogous Safe-state Exploration (ASE), an algorithm for provably safe exploration in MDPs with unknown dynamics.
Our method exploits analogies between state-action pairs to safely learn a near-optimal policy in a PAC-MDP sense.
arXiv Detail & Related papers (2020-07-07T15:50:50Z)
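As referenced in the "Learning to be Safe: Deep RL with a Safety Critic" item above, the sketch below gives a rough, hypothetical illustration of how a learned safety critic can constrain behavior: candidate actions are scored with an estimated failure probability and risky ones are discarded. The architecture, threshold, sampling scheme, and all names are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of filtering actions with a learned safety critic.
# The architecture, threshold, and sampling scheme are illustrative assumptions,
# not the method from "Learning to be Safe: Deep RL with a Safety Critic".
import torch
import torch.nn as nn


class SafetyCritic(nn.Module):
    """Predicts an estimated probability of failure for a state-action pair."""

    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        x = torch.cat([state, action], dim=-1)
        return torch.sigmoid(self.net(x)).squeeze(-1)  # failure risk in [0, 1]


@torch.no_grad()
def filter_safe_actions(
    critic: SafetyCritic,
    state: torch.Tensor,
    candidate_actions: torch.Tensor,
    risk_threshold: float = 0.1,
) -> torch.Tensor:
    """Keep candidate actions whose estimated failure probability is below the
    threshold; fall back to the least risky candidate if none qualify."""
    states = state.expand(candidate_actions.shape[0], -1)
    risk = critic(states, candidate_actions)
    safe_mask = risk < risk_threshold
    if safe_mask.any():
        return candidate_actions[safe_mask]
    # No candidate is deemed safe enough: return the single least risky action.
    return candidate_actions[risk.argmin()].unsqueeze(0)


if __name__ == "__main__":
    critic = SafetyCritic(state_dim=6, action_dim=2)
    state = torch.randn(1, 6)
    candidates = torch.randn(16, 2)  # e.g. actions sampled from the task policy
    safe_candidates = filter_safe_actions(critic, state, candidates)
    print(safe_candidates.shape)
```

In a practical setup the critic would be trained on safety labels collected in earlier tasks and then reused to constrain the policy in new ones; the filtering step shown here is only one simple way to apply such a constraint.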