Traffic-Rule-Compliant Trajectory Repair via Satisfiability Modulo Theories and Reachability Analysis
- URL: http://arxiv.org/abs/2412.15837v1
- Date: Fri, 20 Dec 2024 12:26:22 GMT
- Title: Traffic-Rule-Compliant Trajectory Repair via Satisfiability Modulo Theories and Reachability Analysis
- Authors: Yuanfei Lin, Zekun Xing, Xuyuan Han, Matthias Althoff
- Abstract summary: Complying with traffic rules is challenging for automated vehicles. We propose a trajectory repair technique to save computation time. Experiments in high-fidelity simulators and in the real world demonstrate the benefits of our proposed approach.
- Score: 6.5301153208275675
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Complying with traffic rules is challenging for automated vehicles, as numerous rules need to be considered simultaneously. If a planned trajectory violates traffic rules, it is common to replan a new trajectory from scratch. We instead propose a trajectory repair technique to save computation time. By coupling satisfiability modulo theories with set-based reachability analysis, we determine if and in what manner the initial trajectory can be repaired. Experiments in high-fidelity simulators and in the real world demonstrate the benefits of our proposed approach in various scenarios. Even in complex environments with intricate rules, we efficiently and reliably repair rule-violating trajectories, enabling automated vehicles to swiftly resume legally safe operation in real-time.
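The decision structure behind coupling satisfiability modulo theories with set-based reachability analysis can be illustrated with a small, hedged sketch. The snippet below uses the z3 SMT solver in Python: a rule violation is first detected on the planned trajectory, and the solver is then asked whether a rule-compliant repair exists inside a reachable set. The speed-limit rule, the planned speed samples, the reachable interval, and the `repair_speed` variable are all hypothetical placeholders and not the authors' actual formalization.

```python
# Minimal, illustrative sketch (not the authors' implementation):
# check whether a planned speed profile violates a speed-limit rule and,
# if so, whether a repaired profile exists inside a reachable set.
from z3 import Solver, Real, And, sat

SPEED_LIMIT = 13.9                    # hypothetical rule: v <= 13.9 m/s (50 km/h)
planned_speeds = [12.0, 14.5, 15.2]   # hypothetical planned trajectory samples

# Step 1: rule monitoring -- does the planned trajectory satisfy the rule?
violates = any(v > SPEED_LIMIT for v in planned_speeds)

if violates:
    # Step 2: repair feasibility -- ask the SMT solver whether a repaired
    # speed exists that (a) satisfies the rule and (b) lies inside the
    # reachable set (here a hand-coded interval standing in for the
    # result of set-based reachability analysis).
    reachable_lo, reachable_hi = 10.0, 14.2   # hypothetical reachable bounds
    repair_speed = Real("repair_speed")

    s = Solver()
    s.add(repair_speed <= SPEED_LIMIT)                # traffic rule
    s.add(And(repair_speed >= reachable_lo,
              repair_speed <= reachable_hi))          # reachability constraint

    if s.check() == sat:
        print("repairable, e.g. drive at", s.model()[repair_speed])
    else:
        print("no rule-compliant repair reachable; replan from scratch")
else:
    print("planned trajectory already complies with the rule")
```

In the paper, the constraints come from formalized traffic rules and the reachable sets from reachability analysis of the vehicle dynamics; the sketch only mirrors the overall decision flow: monitor the trajectory, then query whether a compliant repair lies within the reachable set.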
Related papers
- Predictive Traffic Rule Compliance using Reinforcement Learning [7.280087547993983]
This paper presents an approach that integrates a motion planner with a deep reinforcement learning model to predict potential traffic rule violations.
Our main innovation is replacing the standard actor network in an actor-critic method with a motion planning module, which ensures both stable and interpretable trajectory generation.
Experiments on an open German highway dataset show that the model can predict and prevent traffic rule violations beyond the planning horizon.
arXiv Detail & Related papers (2025-03-29T01:04:08Z) - A Multi-Loss Strategy for Vehicle Trajectory Prediction: Combining Off-Road, Diversity, and Directional Consistency Losses [68.68514648185828]
Trajectory prediction is essential for the safety and efficiency of planning in autonomous vehicles. Current models often fail to fully capture complex traffic rules and the complete range of potential vehicle movements. This study introduces three novel loss functions: Offroad Loss, Direction Consistency Error, and Diversity Loss.
arXiv Detail & Related papers (2024-11-29T14:47:08Z) - ReGentS: Real-World Safety-Critical Driving Scenario Generation Made Stable [88.08120417169971]
Machine learning based autonomous driving systems often face challenges with safety-critical scenarios that are rare in real-world data.
This work explores generating safety-critical driving scenarios by modifying complex real-world regular scenarios through trajectory optimization.
Our approach addresses unrealistic diverging trajectories and unavoidable collision scenarios that are not useful for training a robust planner.
arXiv Detail & Related papers (2024-09-12T08:26:33Z) - Provable Traffic Rule Compliance in Safe Reinforcement Learning on the Open Sea [8.017543518311196]
Reinforcement learning (RL) is a promising method to find motion plans for autonomous vehicles.
Our approach accomplishes guaranteed rule-compliance by integrating temporal logic specifications into RL.
In numerical evaluations on critical maritime traffic situations, our agent always complies with the formalized legal rules and never collides.
arXiv Detail & Related papers (2024-02-13T14:59:19Z) - SAFE-SIM: Safety-Critical Closed-Loop Traffic Simulation with Diffusion-Controllable Adversaries [94.84458417662407]
We introduce SAFE-SIM, a controllable closed-loop safety-critical simulation framework.
Our approach yields two distinct advantages: 1) generating realistic long-tail safety-critical scenarios that closely reflect real-world conditions, and 2) providing controllable adversarial behavior for more comprehensive and interactive evaluations.
We validate our framework empirically using the nuScenes and nuPlan datasets across multiple planners, demonstrating improvements in both realism and controllability.
arXiv Detail & Related papers (2023-12-31T04:14:43Z) - CAT: Closed-loop Adversarial Training for Safe End-to-End Driving [54.60865656161679]
Closed-loop Adversarial Training (CAT) is a framework for safe end-to-end driving in autonomous vehicles.
CAT aims to continuously improve the safety of driving agents by training the agent on safety-critical scenarios.
CAT can effectively generate adversarial scenarios countering the agent being trained.
arXiv Detail & Related papers (2023-10-19T02:49:31Z) - Integrating Higher-Order Dynamics and Roadway-Compliance into Constrained ILQR-based Trajectory Planning for Autonomous Vehicles [3.200238632208686]
Trajectory planning aims to produce a globally optimal route for Autonomous Passenger Vehicles.
Existing implementations utilizing the vehicle bicycle kinematic model may not guarantee controllable trajectories.
We augment this model with higher-order terms, including the first- and second-order derivatives of curvature and longitudinal jerk.
arXiv Detail & Related papers (2023-09-25T22:30:18Z) - Real-time Cooperative Vehicle Coordination at Unsignalized Road Intersections [7.860567520771493]
Cooperative coordination at unsignalized road intersections aims to improve driving safety and traffic throughput for connected and automated vehicles.
We formulate the problem as a Markov Decision Process (MDP) and tackle it with a model-free, Twin Delayed Deep Deterministic Policy Gradient (TD3)-based strategy in the deep reinforcement learning framework.
We show that the proposed strategy could achieve near-optimal performance in sub-static coordination scenarios and significantly improve control performance in realistic continuous traffic flow.
arXiv Detail & Related papers (2022-05-03T02:56:02Z) - Optimizing Trajectories for Highway Driving with Offline Reinforcement Learning [11.970409518725491]
We propose a Reinforcement Learning-based approach to autonomous driving.
We compare the performance of our agent against four other highway driving agents.
We demonstrate that our offline-trained agent, using randomly collected data, learns to drive smoothly, tracking the desired velocity as closely as possible, while outperforming the other agents.
arXiv Detail & Related papers (2022-03-21T13:13:08Z) - End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system using a continuous, model-free Deep Reinforcement Learning algorithm to train a neural network that predicts both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z) - Deep Structured Reactive Planning [94.92994828905984]
We propose a novel data-driven, reactive planning objective for self-driving vehicles.
We show that our model outperforms a non-reactive variant in successfully completing highly complex maneuvers.
arXiv Detail & Related papers (2021-01-18T01:43:36Z) - Learning from Simulation, Racing in Reality [126.56346065780895]
We present a reinforcement learning-based solution to autonomously race on a miniature race car platform.
We show that a policy that is trained purely in simulation can be successfully transferred to the real robotic setup.
arXiv Detail & Related papers (2020-11-26T14:58:49Z)