Towards exploring adversarial learning for anomaly detection in complex driving scenes
- URL: http://arxiv.org/abs/2307.05256v1
- Date: Sat, 17 Jun 2023 15:32:16 GMT
- Title: Towards exploring adversarial learning for anomaly detection in complex driving scenes
- Authors: Nour Habib, Yunsu Cho, Abhishek Buragohain, Andreas Rausch
- Abstract summary: Adversarial learning, a sub-field of machine learning, has proven its ability to detect anomalies in images and videos, with impressive results on simple datasets.
In this work, we investigate and provide insight into the performance of such techniques on a highly complex driving-scenes dataset called Berkeley DeepDrive.
- Score: 0.32116198597240836
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many Autonomous Systems (ASs), such as self-driving cars, perform safety-critical functions and rely on Artificial Intelligence (AI) techniques to perceive their environment. These perception components cannot be formally verified, since the accuracy of such AI-based components depends heavily on the quality of the training data. Machine learning (ML) based anomaly detection, a technique for identifying data that does not belong to the training data, can therefore serve as a safety indicator during both the development and the operation of such AI-based components. Adversarial learning, a sub-field of machine learning, has proven its ability to detect anomalies in images and videos, with impressive results on simple datasets. In this work, we therefore investigate and provide insight into the performance of such techniques on a highly complex driving-scenes dataset, Berkeley DeepDrive.
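As a concrete illustration of the adversarial anomaly-detection setup the abstract describes, the sketch below follows a common GANomaly-style recipe: a generator (here a small convolutional auto-encoder) learns to reconstruct anomaly-free driving frames, a discriminator is trained adversarially against it, and at test time the anomaly score combines the reconstruction error with the distance between discriminator features of the input and its reconstruction. This is a minimal, hypothetical PyTorch sketch under those assumptions; the layer sizes, score weights, and 64x64 input resolution are illustrative, not the architecture evaluated in the paper, and the adversarial training loop is omitted.

```python
# Hypothetical GANomaly-style sketch (illustrative only; not the paper's exact models).
import torch
import torch.nn as nn


class Generator(nn.Module):
    """Small convolutional auto-encoder that learns to reconstruct normal frames."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


class Discriminator(nn.Module):
    """Real-vs-reconstructed classifier; its feature maps are reused for scoring."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.classifier = nn.Sequential(nn.Flatten(), nn.LazyLinear(1))

    def forward(self, x):
        feats = self.features(x)
        return self.classifier(feats), feats


def anomaly_score(gen, disc, x, w_rec=0.9, w_feat=0.1):
    """Higher score = more anomalous. The weights are illustrative, not tuned values."""
    with torch.no_grad():
        x_hat = gen(x)
        _, f_real = disc(x)
        _, f_fake = disc(x_hat)
        rec = torch.mean(torch.abs(x - x_hat), dim=(1, 2, 3))     # pixel reconstruction error
        feat = torch.mean((f_real - f_fake) ** 2, dim=(1, 2, 3))  # feature-matching error
    return w_rec * rec + w_feat * feat


if __name__ == "__main__":
    gen, disc = Generator(), Discriminator()
    frames = torch.rand(4, 3, 64, 64)        # stand-in for driving frames resized to 64x64
    print(anomaly_score(gen, disc, frames))  # one score per frame
```

In such a setup, frames whose score exceeds a threshold calibrated on held-out normal data would be flagged as anomalous.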
Related papers
- Federated Learning for Drowsiness Detection in Connected Vehicles [0.19116784879310028]
Driver monitoring systems can assist in determining the driver's state.
Driver drowsiness detection presents a potential solution.
However, transmitting the data to a central machine for model training is impractical due to the large data size and privacy concerns.
We propose a federated learning framework for drowsiness detection within a vehicular network, leveraging the YawDD dataset.
arXiv Detail & Related papers (2024-05-06T09:39:13Z)
- Multimodal Detection of Unknown Objects on Roads for Autonomous Driving [4.3310896118860445]
We propose a novel pipeline to detect unknown objects.
We make use of lidar and camera data by combining state-of-the-art detection models in a sequential manner.
arXiv Detail & Related papers (2022-05-03T10:58:41Z)
- Dataset for Robust and Accurate Leading Vehicle Velocity Recognition [0.0]
Recognition of the surrounding environment using a camera is an important technology in Advanced Driver-Assistance Systems and Autonomous Driving.
To develop recognition technology that is robust in the real world, data from environments that are difficult for cameras, such as rainy weather or nighttime, are essential.
We have constructed a dataset on which this technology can be benchmarked, targeting velocity recognition of the leading vehicle.
arXiv Detail & Related papers (2022-04-27T06:06:54Z)
- Neurosymbolic hybrid approach to driver collision warning [64.02492460600905]
There are two main algorithmic approaches to autonomous driving systems.
Deep learning alone has achieved state-of-the-art results in many areas.
However, it can be very difficult to debug when a deep learning model does not work as expected.
arXiv Detail & Related papers (2022-03-28T20:29:50Z)
- Lifelong Learning Metrics [63.8376359764052]
The DARPA Lifelong Learning Machines (L2M) program seeks to yield advances in artificial intelligence (AI) systems.
This document outlines a formalism for constructing and characterizing the performance of agents performing lifelong learning scenarios.
arXiv Detail & Related papers (2022-01-20T16:29:14Z)
- Human-in-the-Loop Disinformation Detection: Stance, Sentiment, or Something Else? [93.91375268580806]
Both politics and pandemics have recently provided ample motivation for the development of machine learning-enabled disinformation (a.k.a. fake news) detection algorithms.
Existing literature has focused primarily on the fully-automated case, but the resulting techniques cannot reliably detect disinformation on the varied topics, sources, and time scales required for military applications.
By leveraging an already-available analyst as a human-in-the-loop, canonical machine learning techniques of sentiment analysis, aspect-based sentiment analysis, and stance detection become plausible methods to use for a partially-automated disinformation detection system.
arXiv Detail & Related papers (2021-11-09T13:30:34Z)
- DAE : Discriminatory Auto-Encoder for multivariate time-series anomaly detection in air transportation [68.8204255655161]
We propose a novel anomaly detection model called Discriminatory Auto-Encoder (DAE).
It uses a regular LSTM-based auto-encoder as its baseline but with several decoders, each receiving data from a specific flight phase (a minimal sketch of this multi-decoder layout follows the list below).
Results show that the DAE achieves better accuracy and faster detection.
arXiv Detail & Related papers (2021-09-08T14:07:55Z)
- Robustness testing of AI systems: A case study for traffic sign recognition [13.395753930904108]
This paper presents how the robustness of AI systems can be practically examined and which methods and metrics can be used to do so.
The robustness testing methodology is described and analysed for the example use case of traffic sign recognition in autonomous driving.
arXiv Detail & Related papers (2021-08-13T10:29:09Z)
- Diverse Complexity Measures for Dataset Curation in Self-driving [80.55417232642124]
We propose a new data selection method that exploits a diverse set of criteria that quantify the interestingness of traffic scenes.
Our experiments show that the proposed curation pipeline is able to select datasets that lead to better generalization and higher performance.
arXiv Detail & Related papers (2021-01-16T23:45:02Z)
- A Comparative Study of AI-based Intrusion Detection Techniques in Critical Infrastructures [4.8041243535151645]
We present a comparative study of Artificial Intelligence (AI)-driven intrusion detection systems for wirelessly connected sensors that track crucial applications.
Specifically, we present an in-depth analysis of the use of machine learning, deep learning and reinforcement learning solutions to recognize intrusive behavior in the collected traffic.
Results present the performance metrics for three different IDSs namely the Adaptively Supervised and Clustered Hybrid IDS, Boltzmann Machine-based Clustered IDS and Q-learning based IDS.
arXiv Detail & Related papers (2020-07-24T20:55:57Z)
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
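For the DAE entry above, the following is a minimal, hypothetical sketch of a multi-decoder LSTM auto-encoder: a shared encoder compresses a multivariate flight-data sequence, one decoder per flight phase reconstructs it, and the per-sequence reconstruction error serves as the anomaly score. The phase names, feature count, and hidden size are illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical multi-decoder LSTM auto-encoder sketch inspired by the DAE entry above
# (phase names, feature count, and hidden size are illustrative assumptions).
import torch
import torch.nn as nn


class MultiDecoderAE(nn.Module):
    def __init__(self, n_features=8, hidden=32, phases=("climb", "cruise", "descent")):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        # one decoder LSTM and one output head per flight phase
        self.decoders = nn.ModuleDict({p: nn.LSTM(hidden, hidden, batch_first=True) for p in phases})
        self.heads = nn.ModuleDict({p: nn.Linear(hidden, n_features) for p in phases})

    def forward(self, x, phase):
        # x: (batch, time, n_features); `phase` selects which decoder reconstructs it
        z, _ = self.encoder(x)
        h, _ = self.decoders[phase](z)
        return self.heads[phase](h)


def reconstruction_error(model, x, phase):
    """Per-sequence mean squared error; larger values flag anomalous flight segments."""
    with torch.no_grad():
        x_hat = model(x, phase)
    return torch.mean((x - x_hat) ** 2, dim=(1, 2))


if __name__ == "__main__":
    model = MultiDecoderAE()
    batch = torch.randn(2, 50, 8)  # two 50-step multivariate sequences
    print(reconstruction_error(model, batch, "cruise"))
```

Routing each sequence to the decoder of its own flight phase lets every decoder specialize in the dynamics of that phase, which is the intuition behind using several decoders instead of a single one.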
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.