Integrating Legal and Logical Specifications in Perception, Prediction, and Planning for Automated Driving: A Survey of Methods
- URL: http://arxiv.org/abs/2510.25386v1
- Date: Wed, 29 Oct 2025 10:57:24 GMT
- Title: Integrating Legal and Logical Specifications in Perception, Prediction, and Planning for Automated Driving: A Survey of Methods
- Authors: Kumar Manas, Mert Keser, Alois Knoll,
- Abstract summary: This survey provides an analysis of current methodologies integrating legal and logical specifications into the perception, prediction, and planning modules of automated driving systems. We systematically explore techniques ranging from logic-based frameworks to computational legal reasoning approaches. A central finding is that significant challenges arise at the intersection of perceptual reliability, legal compliance, and decision-making justifiability.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This survey provides an analysis of current methodologies integrating legal and logical specifications into the perception, prediction, and planning modules of automated driving systems. We systematically explore techniques ranging from logic-based frameworks to computational legal reasoning approaches, emphasizing their capability to ensure regulatory compliance and interpretability in dynamic and uncertain driving environments. A central finding is that significant challenges arise at the intersection of perceptual reliability, legal compliance, and decision-making justifiability. To systematically analyze these challenges, we introduce a taxonomy categorizing existing approaches by their theoretical foundations, architectural implementations, and validation strategies. We particularly focus on methods that address perceptual uncertainty and incorporate explicit legal norms, facilitating decisions that are both technically robust and legally defensible. The review covers neural-symbolic integration methods for perception, logic-driven rule representation, and norm-aware prediction strategies, all contributing toward transparent and accountable autonomous vehicle operation. We highlight critical open questions and practical trade-offs that must be addressed, offering multidisciplinary insights from engineering, logic, and law to guide future developments in legally compliant autonomous driving systems.
Related papers
- AI Deception: Risks, Dynamics, and Controls [153.71048309527225]
This project provides a comprehensive and up-to-date overview of the AI deception field. We identify a formal definition of AI deception, grounded in signaling theory from studies of animal deception. We organize the landscape of AI deception research as a deception cycle, consisting of two key components: deception emergence and deception treatment.
arXiv Detail & Related papers (2025-11-27T16:56:04Z)
- Empowering Real-World: A Survey on the Technology, Practice, and Evaluation of LLM-driven Industry Agents [63.03252293761656]
This paper systematically reviews the technologies, applications, and evaluation methods of industry agents based on large language models (LLMs). We examine the three key technological pillars that support the advancement of agent capabilities: Memory, Planning, and Tool Use. We provide an overview of the application of industry agents in real-world domains such as digital engineering, scientific discovery, embodied intelligence, collaborative business execution, and complex system simulation.
arXiv Detail & Related papers (2025-10-20T12:46:55Z)
- Argumentation-Based Explainability for Legal AI: Comparative and Regulatory Perspectives [0.9668407688201359]
Artificial Intelligence (AI) systems are increasingly deployed in legal contexts. The so-called "black box problem" undermines the legitimacy of automated decision-making. XAI has proposed a variety of methods to enhance transparency.
arXiv Detail & Related papers (2025-10-13T07:19:15Z)
- Safe and Certifiable AI Systems: Concepts, Challenges, and Lessons Learned [45.44933002008943]
This white paper presents the TÜV AUSTRIA Trusted AI framework. It is an end-to-end audit catalog and methodology for assessing and certifying machine learning systems. Building on three pillars - Secure Software Development, Functional Requirements, and Ethics & Data Privacy - it translates the high-level obligations of the EU AI Act into specific, testable criteria.
arXiv Detail & Related papers (2025-09-08T17:52:08Z)
- Alignment and Safety in Large Language Models: Safety Mechanisms, Training Paradigms, and Emerging Challenges [47.14342587731284]
This survey provides a comprehensive overview of alignment techniques, training protocols, and empirical findings in large language model (LLM) alignment. We analyze the development of alignment methods across diverse paradigms, characterizing the fundamental trade-offs between core alignment objectives. We discuss state-of-the-art techniques, including Direct Preference Optimization (DPO), Constitutional AI, brain-inspired methods, and alignment uncertainty quantification (AUQ).
arXiv Detail & Related papers (2025-07-25T20:52:58Z)
- Watermarking Without Standards Is Not AI Governance [46.71493672772134]
We argue that current implementations risk serving as symbolic compliance rather than delivering effective oversight. We propose a three-layer framework encompassing technical standards, audit infrastructure, and enforcement mechanisms.
arXiv Detail & Related papers (2025-05-27T18:10:04Z)
- Reasoning Under Threat: Symbolic and Neural Techniques for Cybersecurity Verification [0.0]
This survey presents a comprehensive overview of the role of automated reasoning in cybersecurity. We examine state-of-the-art tools and frameworks, explore integrations with AI for neural-symbolic reasoning, and highlight critical research gaps. The paper concludes with a set of well-grounded future research directions, aiming to foster the development of secure systems.
arXiv Detail & Related papers (2025-03-27T11:41:53Z)
- Towards Developing Ethical Reasoners: Integrating Probabilistic Reasoning and Decision-Making for Complex AI Systems [4.854297874710511]
A computational ethics framework is essential for AI and autonomous systems operating in complex, real-world environments. Existing approaches often lack the adaptability needed to integrate ethical principles into dynamic and ambiguous contexts. We outline the necessary ingredients for building a holistic, meta-level framework that combines intermediate representations, probabilistic reasoning, and knowledge representation.
arXiv Detail & Related papers (2025-02-28T17:25:11Z)
- Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation. We propose methods tailored to the unique properties of perception and decision-making. We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
arXiv Detail & Related papers (2024-11-03T17:32:00Z)
- On the Need and Applicability of Causality for Fairness: A Unified Framework for AI Auditing and Legal Analysis [0.0]
This article explores the significance of causal reasoning in addressing algorithmic discrimination. By reviewing landmark cases and regulatory frameworks, we illustrate the challenges inherent in proving causal claims.
arXiv Detail & Related papers (2022-07-08T10:37:22Z)
- Ethical Assurance: A practical approach to the responsible design, development, and deployment of data-driven technologies [0.0]
This article offers contributions to the interdisciplinary project of responsible research and innovation in data science and AI.
First, it provides a critical analysis of current efforts to establish practical mechanisms for algorithmic assessment.
Second, it provides an accessible introduction to the methodology of argument-based assurance.
Third, it establishes a novel version of argument-based assurance that we call 'ethical assurance'.
arXiv Detail & Related papers (2021-10-11T11:21:49Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.