LiveMap: Real-Time Dynamic Map in Automotive Edge Computing
- URL: http://arxiv.org/abs/2012.10252v1
- Date: Wed, 16 Dec 2020 15:00:49 GMT
- Title: LiveMap: Real-Time Dynamic Map in Automotive Edge Computing
- Authors: Qiang Liu, Tao Han, Jiang (Linda) Xie, BaekGyu Kim
- Abstract summary: LiveMap is a real-time dynamic map that detects, matches, and tracks objects on the road using crowdsourced data from connected vehicles at sub-second latency.
We develop the control plane of LiveMap that allows adaptive offloading of vehicle computations.
We implement LiveMap on a small-scale testbed and develop a large-scale network simulator.
- Score: 14.195521569220448
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous driving relies on various line-of-sight sensors to perceive
its surroundings, and this perception can be impaired by diverse environmental
uncertainties such as visual occlusion and extreme weather. To improve driving
safety, we explore wirelessly sharing perception information among connected
vehicles within automotive edge computing networks. Sharing massive perception
data in real time, however, is challenging under dynamic networking conditions
and varying computation workloads. In this paper, we propose LiveMap, a
real-time dynamic map that detects, matches, and tracks objects on the road
using crowdsourced data from connected vehicles at sub-second latency. We
develop the data plane of LiveMap, which efficiently processes individual
vehicle data through object detection, projection, feature extraction, and
object matching, and effectively integrates objects from multiple vehicles
through object combination. We design the control plane of LiveMap, which
allows adaptive offloading of vehicle computations, and develop an intelligent
vehicle scheduling and offloading algorithm based on deep reinforcement
learning (DRL) to reduce the offloading latency of vehicles. We implement
LiveMap on a small-scale testbed and develop a large-scale network simulator.
We evaluate the performance of LiveMap with both experiments and simulations;
the results show that LiveMap reduces average latency by 34.1% compared to the
baseline solution.
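
The data plane described above runs detection, projection, feature extraction, and matching on each vehicle's data and then combines objects across vehicles. Below is a minimal, self-contained sketch of how such a two-stage pipeline could be organized; the class and function names, the toy detector interface, and the distance-plus-feature matching rule are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a LiveMap-style data plane: per-vehicle processing
# (detection, projection, feature extraction) followed by cross-vehicle
# object combination. All names and the matching rule are assumptions
# for illustration, not the authors' code.
import math
from dataclasses import dataclass


@dataclass
class MapObject:
    label: str      # e.g. "car", "pedestrian"
    x: float        # world-frame position (m)
    y: float
    feature: list   # appearance descriptor used for cross-vehicle matching


def project_to_world(local_x, local_y, vehicle_pose):
    """Rotate/translate a detection from the vehicle frame into the world frame."""
    px, py, yaw = vehicle_pose
    wx = px + local_x * math.cos(yaw) - local_y * math.sin(yaw)
    wy = py + local_x * math.sin(yaw) + local_y * math.cos(yaw)
    return wx, wy


def cosine_similarity(a, b):
    dot = sum(u * v for u, v in zip(a, b))
    na = math.sqrt(sum(u * u for u in a)) or 1.0
    nb = math.sqrt(sum(v * v for v in b)) or 1.0
    return dot / (na * nb)


def process_vehicle_data(detections, vehicle_pose):
    """Per-vehicle stage. `detections` stands in for an object detector's
    output as (label, local_x, local_y, feature) tuples."""
    objects = []
    for label, lx, ly, feat in detections:
        wx, wy = project_to_world(lx, ly, vehicle_pose)
        objects.append(MapObject(label, wx, wy, feat))
    return objects


def combine_objects(per_vehicle_objects, sim_threshold=0.8, dist_threshold=2.0):
    """Cross-vehicle stage: observations that are close in the world frame and
    have similar features are fused into one map entry; the rest are added."""
    global_map = []
    for objects in per_vehicle_objects:
        for obj in objects:
            match = next(
                (m for m in global_map
                 if math.hypot(m.x - obj.x, m.y - obj.y) < dist_threshold
                 and cosine_similarity(m.feature, obj.feature) > sim_threshold),
                None,
            )
            if match:
                # Same physical object seen by two vehicles: average positions.
                match.x, match.y = (match.x + obj.x) / 2, (match.y + obj.y) / 2
            else:
                global_map.append(obj)
    return global_map


if __name__ == "__main__":
    # Two vehicles observe the same pedestrian from different poses.
    v1 = process_vehicle_data([("pedestrian", 5.0, 0.0, [1.0, 0.0])], (0.0, 0.0, 0.0))
    v2 = process_vehicle_data([("pedestrian", 5.0, 0.0, [0.9, 0.1])], (5.0, -5.0, math.pi / 2))
    print(combine_objects([v1, v2]))   # -> a single fused pedestrian entry
```

The thresholds and averaging-based fusion are placeholders; the point is the structure the abstract describes: per-vehicle processing of raw sensor data, followed by combination of the resulting objects into one shared map.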
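
The abstract also describes a control plane that adaptively offloads vehicle computations, with a DRL-based scheduler used to reduce offloading latency. The sketch below shows only the shape of such a decision: a per-vehicle choice of where to cut the pipeline between on-board and edge execution, using a hand-written latency model and a greedy stand-in policy instead of a trained DRL agent. All stage costs, data sizes, and names are assumptions for illustration.

```python
# Illustrative sketch of a LiveMap-style control-plane decision: for each
# vehicle, choose how many pipeline stages to run locally vs. offload to the
# edge, trading local compute time against uplink transmission time.
# The latency model and the greedy stand-in policy are assumptions; the paper
# instead trains a DRL agent to make this scheduling decision.
from dataclasses import dataclass


@dataclass
class VehicleState:
    name: str
    uplink_mbps: float        # current radio uplink rate
    local_gflops: float       # on-board compute budget

# Pipeline stages in order, with a rough per-stage cost (GFLOP) and the size
# of the intermediate result (Mbit) uplinked if we offload *after* that stage.
# Stage 0 = offload the raw frame immediately.
STAGES = [
    ("raw frame",        0.0,  40.0),
    ("detection",        8.0,   2.0),
    ("projection",       0.1,   2.0),
    ("feature extract",  4.0,   0.5),
    ("matching",         0.5,   0.1),
]


def latency(vehicle, split, edge_gflops=200.0):
    """End-to-end latency (s) if the first `split` stages run on the vehicle
    and the remaining stages run on the edge server."""
    local = sum(cost for _, cost, _ in STAGES[:split + 1]) / vehicle.local_gflops
    uplink = STAGES[split][2] / vehicle.uplink_mbps
    remote = sum(cost for _, cost, _ in STAGES[split + 1:]) / edge_gflops
    return local + uplink + remote


def choose_split(vehicle):
    """Greedy stand-in for the DRL policy: pick the split with lowest latency."""
    return min(range(len(STAGES)), key=lambda s: latency(vehicle, s))


if __name__ == "__main__":
    for v in [VehicleState("v1", uplink_mbps=5.0, local_gflops=20.0),
              VehicleState("v2", uplink_mbps=80.0, local_gflops=10.0)]:
        s = choose_split(v)
        print(f"{v.name}: offload after '{STAGES[s][0]}' "
              f"(predicted latency {latency(v, s) * 1000:.0f} ms)")
```

In this toy example, a vehicle with a slow uplink keeps most stages on board and sends only compact object data, while a vehicle with a fast uplink but limited compute offloads the raw frame. A DRL agent, as in the paper, would learn this trade-off from observed network conditions and workloads rather than from a fixed latency model.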
Related papers
- Neural Semantic Map-Learning for Autonomous Vehicles [85.8425492858912]
We present a mapping system that fuses local submaps gathered from a fleet of vehicles at a central instance to produce a coherent map of the road environment.
Our method jointly aligns and merges the noisy and incomplete local submaps using a scene-specific Neural Signed Distance Field.
We leverage memory-efficient sparse feature-grids to scale to large areas and introduce a confidence score to model uncertainty in scene reconstruction.
arXiv Detail & Related papers (2024-10-10T10:10:03Z) - SKoPe3D: A Synthetic Dataset for Vehicle Keypoint Perception in 3D from Traffic Monitoring Cameras [26.457695296042903]
We propose SKoPe3D, a unique synthetic vehicle keypoint dataset from a roadside perspective.
SKoPe3D contains over 150k vehicle instances and 4.9 million keypoints.
Our experiments highlight the dataset's applicability and the potential for knowledge transfer between synthetic and real-world data.
arXiv Detail & Related papers (2023-09-04T02:57:30Z) - Visual Perception System for Autonomous Driving [9.659835301514288]
This work introduces a visual-based perception system for autonomous driving that integrates trajectory tracking and prediction of moving objects to prevent collisions.
The system leverages motion cues from pedestrians to monitor and forecast their movements and simultaneously maps the environment.
The performance, efficiency, and resilience of this approach are substantiated through comprehensive evaluations of both simulated and real-world datasets.
arXiv Detail & Related papers (2023-03-03T23:12:43Z) - Exploring Map-based Features for Efficient Attention-based Vehicle Motion Prediction [3.222802562733787]
Motion prediction of multiple agents is a crucial task in arbitrarily complex environments.
We show how to achieve competitive performance on the Argoverse 1.0 Benchmark using efficient attention-based models.
arXiv Detail & Related papers (2022-05-25T22:38:11Z) - Collaborative 3D Object Detection for Automatic Vehicle Systems via Learnable Communications [8.633120731620307]
We propose a novel collaborative 3D object detection framework that consists of three components.
Experiment results and bandwidth usage analysis demonstrate that our approach can save communication and computation costs.
arXiv Detail & Related papers (2022-05-24T07:17:32Z) - Learnable Online Graph Representations for 3D Multi-Object Tracking [156.58876381318402]
We propose a unified, learning-based approach to the 3D multi-object tracking (MOT) problem.
We employ a Neural Message Passing network for data association that is fully trainable.
We show the merit of the proposed approach on the publicly available nuScenes dataset by achieving state-of-the-art performance of 65.6% AMOTA and 58% fewer ID-switches.
arXiv Detail & Related papers (2021-04-23T17:59:28Z) - Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z) - Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z) - Radar-based Dynamic Occupancy Grid Mapping and Object Detection [55.74894405714851]
In recent years, the classical occupancy grid map approach has been extended to dynamic occupancy grid maps.
This paper further develops a previous approach.
The data of multiple radar sensors are fused, and a grid-based object tracking and mapping method is applied.
arXiv Detail & Related papers (2020-08-09T09:26:30Z) - VehicleNet: Learning Robust Visual Representation for Vehicle Re-identification [116.1587709521173]
We propose to build a large-scale vehicle dataset (called VehicleNet) by harnessing four public vehicle datasets.
We design a simple yet effective two-stage progressive approach to learning more robust visual representation from VehicleNet.
We achieve state-of-the-art accuracy of 86.07% mAP on the private test set of the AICity Challenge.
arXiv Detail & Related papers (2020-04-14T05:06:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.