Learning a Stable, Safe, Distributed Feedback Controller for a Heterogeneous Platoon of Autonomous Vehicles
- URL: http://arxiv.org/abs/2404.12474v2
- Date: Thu, 17 Oct 2024 18:45:57 GMT
- Title: Learning a Stable, Safe, Distributed Feedback Controller for a Heterogeneous Platoon of Autonomous Vehicles
- Authors: Michael H. Shaham, Taskin Padir
- Abstract summary: We introduce an algorithm for learning a stable, safe, distributed controller for a heterogeneous platoon.
We train a controller for autonomous platooning in simulation and evaluate its performance on hardware with a platoon of four F1Tenth vehicles.
- Abstract: Platooning of autonomous vehicles has the potential to increase safety and fuel efficiency on highways. The goal of platooning is to have each vehicle drive at a specified speed (set by the leader) while maintaining a safe distance from its neighbors. Many prior works have analyzed various controllers for platooning, most commonly linear feedback and distributed model predictive controllers. In this work, we introduce an algorithm for learning a stable, safe, distributed controller for a heterogeneous platoon. Our algorithm relies on recent developments in learning neural network stability certificates. We train a controller for autonomous platooning in simulation and evaluate its performance on hardware with a platoon of four F1Tenth vehicles. We then perform further analysis in simulation with a platoon of 100 vehicles. Experimental results demonstrate the practicality of the algorithm and the learned controller by comparing the performance of the neural network controller to linear feedback and distributed model predictive controllers.
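To make the control objective concrete, the sketch below simulates a predecessor-following platoon under a distributed linear feedback law of the kind the paper uses as a baseline: each follower accelerates in proportion to its spacing error with the vehicle ahead and its velocity error with the leader's set speed. The gains, time step, and desired gap are illustrative assumptions, not values from the paper.

```python
import numpy as np

def platoon_step(pos, vel, v_leader, d_des, kp=1.0, kv=1.5, dt=0.05):
    """Advance a predecessor-following platoon by one time step.

    pos, vel: arrays of length n (vehicle 0 is the leader, positions
    decrease down the platoon). Gains kp, kv and dt are assumed values.
    """
    acc = np.zeros_like(vel)
    for i in range(1, len(pos)):
        spacing_err = (pos[i - 1] - pos[i]) - d_des  # gap error to predecessor
        vel_err = v_leader - vel[i]                  # error to leader set speed
        acc[i] = kp * spacing_err + kv * vel_err
    # The leader simply tracks its own set speed.
    acc[0] = kv * (v_leader - vel[0])
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel

# Four vehicles starting with 8 m gaps converge toward a 5 m desired gap
# while accelerating from 8 m/s to the leader's 10 m/s set speed.
pos = np.array([0.0, -8.0, -16.0, -24.0])
vel = np.full(4, 8.0)
for _ in range(2000):  # 100 s of simulated time
    pos, vel = platoon_step(pos, vel, v_leader=10.0, d_des=5.0)
gaps = pos[:-1] - pos[1:]
```

With positive gains this closed loop behaves like a damped PD system, so spacings and speeds settle near their targets; the learned neural controller in the paper replaces this fixed linear law while a stability certificate plays the role that the gain conditions play here.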
Related papers
- Autonomous Vehicle Controllers From End-to-End Differentiable Simulation [60.05963742334746]
We propose a differentiable simulator and design an analytic policy gradients (APG) approach to training AV controllers.
Our proposed framework brings the differentiable simulator into an end-to-end training loop, where gradients of environment dynamics serve as a useful prior to help the agent learn a more grounded policy.
We find significant improvements in performance and robustness to noise in the dynamics, as well as overall more intuitive human-like handling.
arXiv Detail & Related papers (2024-09-12T11:50:06Z) - Design and Realization of a Benchmarking Testbed for Evaluating Autonomous Platooning Algorithms [8.440060524215378]
This paper introduces a testbed for evaluating and benchmarking platooning algorithms on 1/10th scale vehicles with onboard sensors.
We evaluate three algorithms, linear feedback and two variations of distributed model predictive control, and compare their results on a typical platooning scenario.
We find that the distributed model predictive control algorithms outperform linear feedback on hardware and in simulation.
arXiv Detail & Related papers (2024-02-14T15:22:24Z) - Partial End-to-end Reinforcement Learning for Robustness Against Modelling Error in Autonomous Racing [0.0]
This paper addresses the issue of increasing the performance of reinforcement learning (RL) solutions for autonomous racing cars.
We propose a partial end-to-end algorithm that decouples the planning and control tasks.
By leveraging the robustness of a classical controller, our partial end-to-end driving algorithm exhibits better robustness towards model mismatches than standard end-to-end algorithms.
arXiv Detail & Related papers (2023-12-11T14:27:10Z) - Vehicle lateral control using Machine Learning for automated vehicle guidance [0.0]
Quantifying uncertainty in decision-making is crucial for machine learning models used in safety-critical systems.
In this work, we design a vehicle's lateral controller using a machine-learning model.
arXiv Detail & Related papers (2023-03-14T19:14:24Z) - Learning a Single Near-hover Position Controller for Vastly Different Quadcopters [56.37274861303324]
This paper proposes an adaptive near-hover position controller for quadcopters.
It can be deployed to quadcopters of very different mass, size and motor constants.
It also shows rapid adaptation to unknown disturbances during runtime.
arXiv Detail & Related papers (2022-09-19T17:55:05Z) - Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner training a neural network that predicts acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z) - COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z) - Neural Network Based Model Predictive Control for an Autonomous Vehicle [3.222802562733787]
We study learning based controllers as a replacement for model predictive controllers (MPC) for the control of autonomous vehicles.
We compare training by supervised learning and by reinforcement learning.
This work aims to produce controllers that are both embeddable on real-time platforms and amenable to verification by formal methods.
arXiv Detail & Related papers (2021-07-30T12:11:31Z) - DriveGAN: Towards a Controllable High-Quality Neural Simulation [147.6822288981004]
We introduce a novel high-quality neural simulator referred to as DriveGAN.
DriveGAN achieves controllability by disentangling different components without supervision.
We train DriveGAN on multiple datasets, including 160 hours of real-world driving data.
arXiv Detail & Related papers (2021-04-30T15:30:05Z) - Testing the Safety of Self-driving Vehicles by Simulating Perception and
Prediction [88.0416857308144]
We propose an alternative to sensor simulation, which is expensive and suffers from large domain gaps.
We directly simulate the outputs of the self-driving vehicle's perception and prediction system, enabling realistic motion planning testing.
arXiv Detail & Related papers (2020-08-13T17:20:02Z) - BayesRace: Learning to race autonomously using prior experience [20.64931380046805]
We present a model-based planning and control framework for autonomous racing.
Our approach alleviates the gap induced by simulation-based controller design by learning from on-board sensor measurements.
arXiv Detail & Related papers (2020-05-10T19:15:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.