Managing Bandwidth: The Key to Cloud-Assisted Autonomous Driving
- URL: http://arxiv.org/abs/2410.16227v1
- Date: Mon, 21 Oct 2024 17:32:36 GMT
- Title: Managing Bandwidth: The Key to Cloud-Assisted Autonomous Driving
- Authors: Alexander Krentsel, Peter Schafhalter, Joseph E. Gonzalez, Sylvia Ratnasamy, Scott Shenker, Ion Stoica
- Abstract summary: We argue that we can, and must, rely on the cloud for real-time control systems like self-driving cars.
We identify an opportunity to offload parts of time-sensitive and latency-critical compute to the cloud.
- Score: 73.55745551827229
- License:
- Abstract: Prevailing wisdom asserts that one cannot rely on the cloud for critical real-time control systems like self-driving cars. We argue that we can, and must. Following the trends of increasing model sizes, improvements in hardware, and evolving mobile networks, we identify an opportunity to offload parts of time-sensitive and latency-critical compute to the cloud. Doing so requires carefully allocating bandwidth to meet strict latency SLOs, while maximizing benefit to the car.
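To make the bandwidth/SLO trade-off in the abstract concrete, here is a minimal, self-contained sketch of the kind of decision it describes: given an assumed uplink rate, round-trip time, and per-task payload sizes, cloud compute times, and benefit scores (all hypothetical numbers, not from the paper), reserve bandwidth for the highest-value tasks that can still return results within the latency SLO.

```python
from dataclasses import dataclass

@dataclass
class OffloadCandidate:
    name: str
    payload_bytes: float  # data that must be uplinked for this task
    cloud_ms: float       # assumed cloud compute time
    benefit: float        # assumed value of having the cloud result (arbitrary units)

def plan_offload(candidates, uplink_bps, rtt_ms, slo_ms):
    """Greedy sketch: pick high benefit-per-byte tasks first and reserve just enough
    uplink bandwidth for each to meet the SLO. Hypothetical policy, not the paper's."""
    chosen, free_bps = [], uplink_bps
    for c in sorted(candidates, key=lambda c: c.benefit / c.payload_bytes, reverse=True):
        budget_s = (slo_ms - rtt_ms - c.cloud_ms) / 1000.0  # time left for the uplink transfer
        if budget_s <= 0:
            continue  # even an instant transfer would miss the deadline
        needed_bps = c.payload_bytes * 8 / budget_s         # slowest rate that still meets the SLO
        if needed_bps <= free_bps:
            chosen.append(c.name)
            free_bps -= needed_bps
    return chosen

tasks = [
    OffloadCandidate("trajectory_refinement", payload_bytes=200_000, cloud_ms=20, benefit=5.0),
    OffloadCandidate("rare_object_classifier", payload_bytes=1_500_000, cloud_ms=35, benefit=9.0),
]
print(plan_offload(tasks, uplink_bps=50e6, rtt_ms=30, slo_ms=100))  # -> ['trajectory_refinement']
```

Sorting by benefit per byte is just one plausible heuristic; the abstract does not describe the paper's actual allocator at this level of detail.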
Related papers
- Combining Cloud and Mobile Computing for Machine Learning [2.595189746033637]
We consider model segmentation as a way to improve the user experience.
We show that the division not only reduces the wait time for users but can also be fine-tuned to optimize the workloads of the cloud.
arXiv Detail & Related papers (2024-01-20T06:14:22Z)
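The model-segmentation idea in the entry above can be pictured as splitting a network so the first layers run on the device and the rest in the cloud, with only the intermediate activation crossing the link. The model, layer sizes, and split point below are made up for illustration; the paper's partitioning strategy is not reproduced here.

```python
import torch
import torch.nn as nn

# Hypothetical model; in practice the split point would be chosen from profiled
# per-layer compute times and the size of the intermediate activation to be uplinked.
full_model = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 10),
)
split_at = 2                          # assumed split point (after the first Linear+ReLU)
device_part = full_model[:split_at]   # runs on the phone / vehicle
cloud_part = full_model[split_at:]    # runs on the server

x = torch.randn(1, 128)
intermediate = device_part(x)
payload_bytes = intermediate.numel() * intermediate.element_size()  # what goes over the link
# ... serialize `intermediate`, send it to the cloud, and finish the forward pass there ...
logits = cloud_part(intermediate)
print(payload_bytes, logits.shape)
```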
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas that impede the robot's motion, and over the course of training approach the performance of a human driver using a similar first-person interface.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner training a neural network that predicts acceleration and steering angle.
To deploy the system on board the real self-driving car, we also develop a compact module implemented as a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- Efficient Federated Learning with Spike Neural Networks for Traffic Sign Recognition [70.306089187104]
We introduce powerful Spike Neural Networks (SNNs) into traffic sign recognition for energy-efficient and fast model training.
Numerical results indicate that the proposed federated SNN outperforms traditional federated convolutional neural networks in terms of accuracy, noise immunity, and energy efficiency.
arXiv Detail & Related papers (2022-05-28T03:11:48Z)
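For the federated traffic-sign entry above, the aggregation step can be illustrated with plain equal-weight federated averaging of client parameters. This is the generic FedAvg update, shown here on an ordinary linear layer rather than a spiking network, and is not claimed to be the paper's exact aggregation rule.

```python
import torch

def federated_average(client_state_dicts):
    """Equal-weight FedAvg sketch: element-wise mean of each parameter across clients.
    Weighting by local dataset size is the usual refinement."""
    averaged = {}
    for name in client_state_dicts[0]:
        averaged[name] = torch.stack(
            [sd[name].float() for sd in client_state_dicts]
        ).mean(dim=0)
    return averaged

# Toy usage: three "clients" sharing the same tiny model architecture.
clients = [torch.nn.Linear(8, 4).state_dict() for _ in range(3)]
global_weights = federated_average(clients)
print({k: v.shape for k, v in global_weights.items()})
```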
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- An Efficient Deep Learning Approach Using Improved Generative Adversarial Networks for Incomplete Information Completion of Self-driving [2.8504921333436832]
We propose an efficient deep learning approach to repair incomplete vehicle point clouds accurately and efficiently in autonomous driving.
The improved PF-Net achieves speedups of over 19x with almost the same accuracy as the original PF-Net.
arXiv Detail & Related papers (2021-09-01T08:06:23Z)
- Self-Supervised Pillar Motion Learning for Autonomous Driving [10.921208239968827]
We propose a learning framework that leverages free supervisory signals from point clouds and paired camera images to estimate motion purely via self-supervision.
Our model combines point-cloud-based structural consistency, augmented with probabilistic motion masking, with cross-sensor motion regularization to realize the desired self-supervision.
arXiv Detail & Related papers (2021-04-18T02:32:08Z)
- IntentNet: Learning to Predict Intention from Raw Sensor Data [86.74403297781039]
In this paper, we develop a one-stage detector and forecaster that exploits both 3D point clouds produced by a LiDAR sensor and dynamic maps of the environment.
Our multi-task model achieves better accuracy than the respective separate modules while saving computation, which is critical to reducing reaction time in self-driving applications.
arXiv Detail & Related papers (2021-01-20T00:31:52Z)
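The compute saving mentioned in the IntentNet summary comes from sharing one feature extractor between detection and intention forecasting rather than running two separate models. The sketch below shows that generic shared-backbone, multi-head pattern with made-up layer sizes; the actual architecture consumes LiDAR point clouds and map rasters, which are not modeled here.

```python
import torch
import torch.nn as nn

class SharedBackboneDetector(nn.Module):
    """Generic multi-task sketch: one feature extractor feeds both a detection head and
    an intention/forecasting head, so the expensive backbone runs once per frame."""
    def __init__(self, in_dim: int = 512, feat_dim: int = 256,
                 num_classes: int = 3, horizon: int = 10):
        super().__init__()
        self.horizon = horizon
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU(),
                                      nn.Linear(feat_dim, feat_dim), nn.ReLU())
        self.detection_head = nn.Linear(feat_dim, num_classes)   # what is around the car
        self.intention_head = nn.Linear(feat_dim, horizon * 2)   # (x, y) waypoint per future step

    def forward(self, x):
        feats = self.backbone(x)                                 # shared computation
        detections = self.detection_head(feats)
        trajectories = self.intention_head(feats).view(-1, self.horizon, 2)
        return detections, trajectories

det, traj = SharedBackboneDetector()(torch.randn(4, 512))
print(det.shape, traj.shape)   # torch.Size([4, 3]) torch.Size([4, 10, 2])
```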
- Minimizing Age-of-Information for Fog Computing-supported Vehicular Networks with Deep Q-learning [15.493225546165627]
Age of Information (AoI) is a metric for evaluating the performance of wireless links between vehicles and cloud/fog servers.
This paper introduces a novel proactive and data-driven approach to optimize the driving route, with the main objective of providing confidence guarantees on the AoI.
arXiv Detail & Related papers (2020-04-04T05:19:25Z)
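Age of Information, the metric named in the entry above, is conventionally defined as the current time minus the generation timestamp of the freshest update received so far. A minimal tracker implementing that definition is sketched below; the Deep Q-learning route optimization built on top of it in the paper is not reproduced.

```python
class AoITracker:
    """Track Age of Information: age(t) = t - generation_time(latest received update)."""
    def __init__(self):
        self.latest_generation_time = None

    def on_update_received(self, generation_time: float) -> None:
        # Only a fresher update (larger generation timestamp) resets the age.
        if self.latest_generation_time is None or generation_time > self.latest_generation_time:
            self.latest_generation_time = generation_time

    def age(self, now: float) -> float:
        if self.latest_generation_time is None:
            return float("inf")   # nothing received yet
        return now - self.latest_generation_time

tracker = AoITracker()
tracker.on_update_received(generation_time=10.0)   # sensor reading created at t = 10 s
print(tracker.age(now=12.5))                       # -> 2.5 seconds of staleness
```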
- Using AI for Mitigating the Impact of Network Delay in Cloud-based Intelligent Traffic Signal Control [8.121462458089143]
We introduce a new traffic signal control algorithm based on reinforcement learning, which performs well even under severe network delay.
The framework introduced in this paper can be helpful for all agent-based systems where network delay could be a critical concern.
arXiv Detail & Related papers (2020-02-19T17:30:07Z)
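The last entry does not spell out how the controller copes with delay, so the sketch below shows one common trick for delayed control loops rather than the paper's method: augment the observation with the actions already issued but not yet applied, so a learned policy can account for commands still in flight across the network.

```python
from collections import deque

class DelayAwareObservation:
    """Sketch of a delayed-control trick: append the last `delay_steps` issued actions
    to the raw observation so a learned policy can compensate for actuation delay.
    Illustrative only; the cited paper may handle delay differently."""
    def __init__(self, delay_steps: int, num_actions: int):
        self.pending = deque([0] * delay_steps, maxlen=delay_steps)  # actions in flight
        self.num_actions = num_actions

    def augment(self, raw_obs: list[float]) -> list[float]:
        # One-hot encode each pending action and concatenate with the raw observation.
        onehots = []
        for a in self.pending:
            vec = [0.0] * self.num_actions
            vec[a] = 1.0
            onehots.extend(vec)
        return list(raw_obs) + onehots

    def record_action(self, action: int) -> None:
        self.pending.append(action)   # the oldest pending action drops out (it has been applied)

aug = DelayAwareObservation(delay_steps=2, num_actions=4)   # e.g., 4 signal phases
obs = aug.augment([0.3, 0.7, 0.1])   # raw state plus two one-hot pending actions
aug.record_action(2)
```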
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.