Commonsense Reasoning-Aided Autonomous Vehicle Systems
- URL: http://arxiv.org/abs/2502.09233v1
- Date: Thu, 13 Feb 2025 11:53:25 GMT
- Title: Commonsense Reasoning-Aided Autonomous Vehicle Systems
- Authors: Keegan Kimbrell
- Abstract summary: This research involves incorporating commonsense reasoning models that use image data to improve AV systems.
This will allow AV systems to perform more accurate reasoning while also making them more adjustable, explainable, and ethical.
- Abstract: Autonomous Vehicle (AV) systems have been developed with a strong reliance on machine learning techniques. While machine learning approaches, such as deep learning, are extremely effective at tasks that involve observation and classification, they struggle when it comes to performing higher-level reasoning about situations on the road. This research involves incorporating commonsense reasoning models that use image data to improve AV systems. This will allow AV systems to perform more accurate reasoning while also making them more adjustable, explainable, and ethical. This paper discusses the findings so far and motivates the direction of the work going forward.
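To make the pairing concrete, below is a minimal sketch of how a commonsense reasoning layer might sit on top of a neural perception stage. The scene representation, rule set, and thresholds are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch of a perception + commonsense-reasoning pipeline.
# The scene representation and rules are illustrative assumptions,
# not the paper's actual model.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "pedestrian", "ball"
    distance_m: float  # estimated distance from the vehicle

def commonsense_advice(detections: list[Detection]) -> str:
    """Apply simple commonsense rules on top of raw detections."""
    labels = {d.label for d in detections}
    # Commonsense: a ball rolling into the road often precedes a child.
    if "ball" in labels:
        return "slow: object suggests a child may follow"
    # Commonsense: nearby pedestrians require caution even if stationary.
    if any(d.label == "pedestrian" and d.distance_m < 15 for d in detections):
        return "slow: pedestrian within caution radius"
    return "proceed"

if __name__ == "__main__":
    scene = [Detection("ball", 10.0), Detection("car", 40.0)]
    print(commonsense_advice(scene))  # slow: object suggests a child may follow
```

Because the rules are explicit, the resulting advice is adjustable and explainable by construction, which is the property the abstract highlights.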
Related papers
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z) - LLM4Drive: A Survey of Large Language Models for Autonomous Driving [62.10344445241105]
Large language models (LLMs) have demonstrated abilities including understanding context, logical reasoning, and generating answers.
In this paper, we systematically review the research line of Large Language Models for Autonomous Driving (LLM4AD).
arXiv Detail & Related papers (2023-11-02T07:23:33Z) - Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy efficiency of networks with varying traffic conditions by 15%, using only local observations.
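As a rough illustration of the imitation-learning setup described above, here is a minimal behavioral-cloning sketch: a policy is fit to expert (observation, action) pairs. The synthetic data and linear policy are placeholder assumptions, not the paper's actual vehicles-in-traffic setup.

```python
# Minimal behavioral-cloning sketch: fit a policy to expert actions.
# The synthetic data and linear policy are illustrative assumptions,
# not the paper's actual setup.
import numpy as np

rng = np.random.default_rng(0)

# Fake expert data: local observations -> accelerations chosen by an expert.
obs = rng.normal(size=(1000, 4))  # e.g. speed, gap, leader speed, ...
expert_action = obs @ np.array([0.5, -0.3, 0.2, 0.0]) + 0.1

# Behavioral cloning with a linear policy: least-squares fit to the expert.
X = np.hstack([obs, np.ones((len(obs), 1))])  # add bias term
theta, *_ = np.linalg.lstsq(X, expert_action, rcond=None)

def policy(o: np.ndarray) -> float:
    """Imitation policy: predict the expert's action from an observation."""
    return float(np.append(o, 1.0) @ theta)

print(policy(np.array([1.0, 0.0, 0.5, 0.0])))  # close to the expert's 0.7
```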
arXiv Detail & Related papers (2022-06-28T17:08:31Z) - Neurosymbolic hybrid approach to driver collision warning [64.02492460600905]
There are two main algorithmic approaches to autonomous driving systems.
Deep learning alone has achieved state-of-the-art results in many areas.
However, deep learning models can be very difficult to debug when they do not work as intended.
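A hedged sketch of what such a neurosymbolic split can look like: a (stubbed) neural detector produces structured facts, and hand-written symbolic rules, which are easy to inspect and debug, decide when to warn. Both pieces are illustrative assumptions rather than the paper's system.

```python
# Sketch of a neurosymbolic collision-warning pipeline: a (stubbed) neural
# detector produces structured facts, and hand-written symbolic rules decide
# when to warn. Both pieces are illustrative assumptions.
def neural_detector(frame) -> list[dict]:
    """Stand-in for a learned detector; returns symbolic facts per object."""
    return [{"type": "vehicle", "ttc_s": 1.2, "lane": "ego"}]

def symbolic_rules(facts: list[dict]) -> bool:
    """Debuggable, inspectable warning logic over the detector's output."""
    return any(f["lane"] == "ego" and f["ttc_s"] < 2.0 for f in facts)

if symbolic_rules(neural_detector(frame=None)):
    print("COLLISION WARNING")
```

The division of labor is the point: when a warning misfires, the symbolic layer can be stepped through directly instead of debugging an opaque network end to end.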
arXiv Detail & Related papers (2022-03-28T20:29:50Z) - Interpretable UAV Collision Avoidance using Deep Reinforcement Learning [1.2693545159861856]
We present autonomous UAV flight using Deep Reinforcement Learning augmented with Self-Attention Models.
We have tested our algorithm under different weather conditions and environments and found it robust compared to conventional Deep Reinforcement Learning algorithms.
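For intuition, the sketch below wires a single self-attention layer over sensor "tokens" into a tiny policy head, in the spirit of attention-augmented deep RL. The shapes and random weights are placeholders, not the paper's trained model.

```python
# Sketch of a self-attention layer over sensor "tokens" feeding a tiny
# policy head. Shapes and weights are random placeholders, not the
# paper's trained model.
import numpy as np

rng = np.random.default_rng(1)
d = 8                             # token embedding size
tokens = rng.normal(size=(5, d))  # 5 sensor regions as tokens

Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv

scores = Q @ K.T / np.sqrt(d)
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)  # softmax attention weights
context = attn @ V                       # attended features

w_policy = rng.normal(size=d)
action_logit = context.mean(axis=0) @ w_policy  # tiny policy head
print("attention weights:\n", attn.round(2))
print("action logit:", round(float(action_logit), 3))
```

A side effect relevant to interpretability: the attention weights themselves can be visualized to show which sensor regions influenced the action.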
arXiv Detail & Related papers (2021-05-25T23:21:54Z) - Explanations in Autonomous Driving: A Survey [7.353589916907923]
We provide a comprehensive survey of the existing work in explainable autonomous driving.
We identify and categorise the different stakeholders involved in the development, use, and regulation of AVs.
arXiv Detail & Related papers (2021-03-09T00:31:30Z) - Applying Machine Learning in Self-Adaptive Systems: A Systematic Literature Review [15.953995937484176]
There is currently no systematic overview of the use of machine learning in self-adaptive systems.
We focus on self-adaptive systems that are based on a traditional Monitor-Analyze-Plan-Execute (MAPE) feedback loop.
The research questions are centred on the problems that motivate the use of machine learning in self-adaptive systems, the key engineering aspects of learning in self-adaptation, and open challenges.
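As a toy illustration of learning inside the MAPE loop, the sketch below replaces a fixed threshold in the Analyze step with a simple learned statistic over past observations. The managed system and the monitored signal are hypothetical.

```python
# Toy MAPE loop sketch: the Analyze step flags anomalies with a learned
# statistic (running mean/std) instead of a fixed threshold. The managed
# system and the monitored signal are hypothetical.
import statistics

history: list[float] = []

def analyze(value: float) -> bool:
    """Flag values far from the distribution seen so far."""
    if len(history) < 5:
        return False
    mu, sigma = statistics.mean(history), statistics.stdev(history)
    return abs(value - mu) > 2 * sigma

def plan(anomalous: bool) -> str:
    return "scale_up" if anomalous else "no_op"

def execute(action: str) -> None:
    print("executing:", action)

# Monitor: a stand-in stream of sensed values; the last one is anomalous.
for v in [100.0, 101.0, 99.0, 100.0, 102.0, 140.0]:
    execute(plan(analyze(v)))
    history.append(v)
```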
arXiv Detail & Related papers (2021-03-06T13:45:59Z) - This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems aim to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
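A toy version of the counterfactual idea, with a simplistic stand-in for the paper's GAN-based image-to-image translation: minimally perturb the input until a (stub) classifier flips its decision, then report what changed.

```python
# Toy counterfactual-explanation sketch: minimally perturb an input until a
# (stub) classifier changes its decision, then report how much changed.
# The classifier and greedy search are simplistic stand-ins for the paper's
# GAN-based image-to-image translation.
import numpy as np

def classify(x: np.ndarray) -> int:
    return int(x.mean() > 0.5)  # stub "texture" classifier

x = np.full((4, 4), 0.6)        # original image, classified as 1
cf = x.copy()
flat = cf.ravel()               # view into cf; edits write through
for i in range(flat.size):      # greedily dim pixels until the label flips
    flat[i] = 0.0
    if classify(cf) != classify(x):
        break

print("label flipped after changing", int((cf != x).sum()), "pixels")
```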
arXiv Detail & Related papers (2020-12-22T10:08:05Z) - Explaining Autonomous Driving by Learning End-to-End Visual Attention [25.09407072098823]
Current deep-learning-based autonomous driving approaches yield impressive results and have led to in-production deployment in certain controlled scenarios.
One of the most popular and fascinating approaches relies on learning vehicle controls directly from data perceived by sensors.
The main drawback of this approach, as in other learning problems, is the lack of explainability: a deep network acts as a black box, outputting predictions based on previously seen driving patterns without giving any feedback on why such decisions were taken.
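The sketch below illustrates the general attention-bottleneck idea: an attention map weights image features before the control prediction, so the map itself indicates which regions drove the decision. The weights here are random placeholders, not a trained driving network.

```python
# Sketch of end-to-end control with a visual-attention bottleneck: an
# attention map weights image features before the steering prediction, so
# the map shows which regions influenced the decision. Weights are random
# placeholders, not a trained network.
import numpy as np

rng = np.random.default_rng(2)
features = rng.normal(size=(6, 6, 16))  # CNN feature map stand-in

w_attn = rng.normal(size=16)
logits = features @ w_attn              # (6, 6) attention logits
attn = np.exp(logits - logits.max())
attn /= attn.sum()                      # normalized attention map

pooled = (features * attn[..., None]).sum(axis=(0, 1))  # attended pooling
w_ctrl = rng.normal(size=16)
steering = float(np.tanh(pooled @ w_ctrl))  # steering in [-1, 1]

print("most-attended cell:", np.unravel_index(attn.argmax(), attn.shape))
print("steering command:", round(steering, 3))
```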
arXiv Detail & Related papers (2020-06-05T10:12:31Z) - When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey [17.416847623629362]
We review the learning-based approaches in autonomous systems from the perspectives of accuracy and transferability.
We focus on reviewing accuracy, transferability, or both in order to show the advantages of adversarial learning.
We discuss several challenges and future topics for using adversarial learning, RL and meta-learning in autonomous systems.
arXiv Detail & Related papers (2020-03-29T04:50:22Z) - Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows benefits of AI explanations as interfaces for machine teaching, such as supporting trust calibration and enabling rich forms of teaching feedback, as well as potential drawbacks, such as an anchoring effect toward the model's judgment and added cognitive workload.
arXiv Detail & Related papers (2020-01-24T22:52:18Z)
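A toy explainable-active-learning loop in this spirit: select the most uncertain unlabeled point and present a simple local explanation alongside the query. The nearest-centroid classifier and the "explanation" are minimal stand-ins for the paper's setup.

```python
# Toy explainable-active-learning (XAL) sketch: query the most uncertain
# unlabeled point and show a simple local explanation with it. The
# nearest-centroid classifier and the "explanation" are minimal stand-ins.
import numpy as np

rng = np.random.default_rng(3)
labeled_X = np.array([[0.0, 0.0], [1.0, 1.0]])
labeled_y = np.array([0, 1])
pool = rng.uniform(0, 1, size=(20, 2))  # unlabeled candidate pool

def margin_and_centroids(x: np.ndarray):
    """Nearest-centroid score; a small margin means high uncertainty."""
    c0 = labeled_X[labeled_y == 0].mean(axis=0)
    c1 = labeled_X[labeled_y == 1].mean(axis=0)
    return np.linalg.norm(x - c0) - np.linalg.norm(x - c1), c0, c1

query = pool[int(np.argmin([abs(margin_and_centroids(x)[0]) for x in pool]))]

_, c0, c1 = margin_and_centroids(query)
# Explanation shown to the annotator: per-feature pull toward class 1.
pull = np.abs(query - c0) - np.abs(query - c1)
print("query to label:", query.round(2))
print("per-feature pull toward class 1 (negative = toward class 0):", pull.round(2))
```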