MSight: An Edge-Cloud Infrastructure-based Perception System for
Connected Automated Vehicles
- URL: http://arxiv.org/abs/2310.05290v1
- Date: Sun, 8 Oct 2023 21:32:30 GMT
- Title: MSight: An Edge-Cloud Infrastructure-based Perception System for
Connected Automated Vehicles
- Authors: Rusheng Zhang, Depu Meng, Shengyin Shen, Zhengxia Zou, Houqiang Li,
Henry X. Liu
- Abstract summary: This paper presents MSight, a cutting-edge roadside perception system specifically designed for automated vehicles.
MSight offers real-time vehicle detection, localization, tracking, and short-term trajectory prediction.
Evaluations underscore the system's capability to uphold lane-level accuracy with minimal latency.
- Score: 58.461077944514564
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As vehicular communication and networking technologies continue to advance,
infrastructure-based roadside perception emerges as a pivotal tool for
connected automated vehicle (CAV) applications. Due to their elevated
positioning, roadside sensors, including cameras and lidars, often enjoy
unobstructed views with diminished object occlusion. This provides them with a
distinct advantage over onboard perception, enabling more robust and accurate
detection of road objects. This paper presents MSight, a cutting-edge roadside
perception system specifically designed for CAVs. MSight offers real-time
vehicle detection, localization, tracking, and short-term trajectory
prediction. Evaluations underscore the system's capability to uphold lane-level
accuracy with minimal latency, revealing a range of potential applications to
enhance CAV safety and efficiency. Presently, MSight operates 24/7 at a
two-lane roundabout in the City of Ann Arbor, Michigan.
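MSight's implementation is not included in this listing. As a minimal illustration of how the tracking and short-term trajectory-prediction stages of such a roadside pipeline could fit together, the sketch below runs a constant-velocity Kalman filter over map-frame vehicle positions; the class name, time step, and noise values are assumptions of ours, not the authors' published design.

```python
import numpy as np

class ConstantVelocityTrack:
    """Toy Kalman track over state [x, y, vx, vy] in map coordinates (m).

    Illustrative only: MSight's actual tracker and motion model are not
    published in this listing; this just shows how per-frame localized
    detections can drive tracking and short-term trajectory prediction.
    """

    def __init__(self, x, y, dt=0.1):
        self.dt = dt
        self.state = np.array([x, y, 0.0, 0.0])   # position + velocity
        self.P = np.eye(4)                        # state covariance
        self.F = np.eye(4)                        # constant-velocity transition
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)                     # we observe position only
        self.R = np.eye(2) * 0.25                 # measurement noise (m^2)
        self.Q = np.eye(4) * 0.01                 # process noise

    def step(self, measured_xy):
        """One predict/update cycle with a detected, localized (x, y)."""
        self.state = self.F @ self.state          # predict
        self.P = self.F @ self.P @ self.F.T + self.Q
        z = np.asarray(measured_xy, dtype=float)  # update
        innovation = z - self.H @ self.state
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.state = self.state + K @ innovation
        self.P = (np.eye(4) - K @ self.H) @ self.P

    def predict_trajectory(self, horizon_s=2.0):
        """Roll the motion model forward for a short-term trajectory."""
        out, s = [], self.state.copy()
        for _ in range(int(horizon_s / self.dt)):
            s = self.F @ s
            out.append(s[:2].copy())
        return np.array(out)                      # (steps, 2) future (x, y)

# Example: feed three frames of detections, then look 2 s ahead.
track = ConstantVelocityTrack(x=10.0, y=5.0)
for detection in [(10.5, 5.0), (11.0, 5.1), (11.6, 5.1)]:
    track.step(detection)
print(track.predict_trajectory(horizon_s=2.0)[:3])
```

In a deployed edge-cloud setup, the detection and localization steps that feed `step()` would run on roadside hardware, with predicted trajectories broadcast to nearby CAVs.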
Related papers
- Tapping in a Remote Vehicle's onboard LLM to Complement the Ego Vehicle's Field-of-View [1.701722696403793]
We propose a concept to complement the ego vehicle's field-of-view (FOV) with another vehicle's FOV by tapping into its onboard large language model (LLM).
Our results show that very recent versions of LLMs, such as GPT-4V and GPT-4o, understand a traffic situation to an impressive level of detail, and hence can even be used to spot traffic participants (a minimal query sketch follows this list).
arXiv Detail & Related papers (2024-08-20T12:38:34Z)
- Infrastructure-Assisted Collaborative Perception in Automated Valet Parking: A Safety Perspective [11.405406875019175]
Collaborative perception (CP) can be applied to broaden the field of view of connected vehicles.
We propose a bird's-eye-view (BEV) feature-based CP network architecture for infrastructure-assisted AVP systems.
arXiv Detail & Related papers (2024-03-22T12:11:06Z)
- RoadRunner -- Learning Traversability Estimation for Autonomous Off-road Driving [13.101416329887755]
We present RoadRunner, a framework capable of predicting terrain traversability and an elevation map directly from camera and LiDAR sensor inputs.
RoadRunner enables reliable autonomous navigation by fusing sensory information, handling uncertainty, and generating contextually informed predictions.
We demonstrate the effectiveness of RoadRunner in enabling safe and reliable off-road navigation at high speeds in multiple real-world driving scenarios through unstructured desert environments.
arXiv Detail & Related papers (2024-02-29T16:47:54Z)
- Floor extraction and door detection for visually impaired guidance [78.94595951597344]
Finding obstacle-free paths in unknown environments is a major navigation challenge for visually impaired people and autonomous robots.
New devices based on computer vision systems can help visually impaired people navigate unknown environments safely.
This work proposes a combination of sensors and algorithms that can be used to build a navigation system for visually impaired people.
arXiv Detail & Related papers (2024-01-30T14:38:43Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work studies the current state of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- NVRadarNet: Real-Time Radar Obstacle and Free Space Detection for Autonomous Driving [57.03126447713602]
We present a deep neural network (DNN) that detects dynamic obstacles and drivable free space using automotive RADAR sensors.
The network runs faster than real time on an embedded GPU and shows good generalization across geographic regions.
arXiv Detail & Related papers (2022-09-29T01:30:34Z)
- Real-time Full-stack Traffic Scene Perception for Autonomous Driving with Roadside Cameras [20.527834125706526]
We propose a novel framework for traffic scene perception with roadside cameras.
The proposed framework covers a full-stack of roadside perception, including object detection, object localization, object tracking, and multi-camera information fusion.
Our framework is deployed at a two-lane roundabout located at Ellsworth Rd. and State St., Ann Arbor, MI, USA, providing 24/7 real-time traffic flow monitoring and high-precision vehicle trajectory extraction.
arXiv Detail & Related papers (2022-06-20T13:33:52Z)
- A Multi-UAV System for Exploration and Target Finding in Cluttered and GPS-Denied Environments [68.31522961125589]
We propose a framework for a team of UAVs to cooperatively explore and find a target in complex GPS-denied environments with obstacles.
The team of UAVs autonomously navigates, explores, detects, and finds the target in a cluttered environment with a known map.
Results indicate that the proposed multi-UAV system improves time cost, the proportion of search area surveyed, and success rates for search-and-rescue missions.
arXiv Detail & Related papers (2021-07-19T12:54:04Z)
- V2VNet: Vehicle-to-Vehicle Communication for Joint Perception and Prediction [74.42961817119283]
We use vehicle-to-vehicle (V2V) communication to improve the perception and motion forecasting performance of self-driving vehicles.
By intelligently aggregating the information received from multiple nearby vehicles, we can observe the same scene from different viewpoints (a toy aggregation sketch follows this list).
arXiv Detail & Related papers (2020-08-17T17:58:26Z)
- Dynamic Radar Network of UAVs: A Joint Navigation and Tracking Approach [36.587096293618366]
An emerging problem is to track unauthorized small unmanned aerial vehicles (UAVs) hiding behind buildings.
This paper proposes the idea of a dynamic radar network of UAVs for real-time and high-accuracy tracking of malicious targets.
arXiv Detail & Related papers (2020-01-13T23:23:09Z)
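The first related paper above complements the ego vehicle's FOV by querying a remote vehicle's onboard vision LLM. Its actual prompts and V2V transport are not reproduced in this listing; the snippet below is a minimal sketch of a single GPT-4o scene query via the OpenAI Python SDK, where the image file name and the prompt wording are assumptions of ours, not the paper's protocol.

```python
import base64
from openai import OpenAI  # pip install openai

# Hypothetical camera frame; in the paper's setting this image would come
# from the remote vehicle over a V2V link, not from a local file.
with open("remote_vehicle_frame.jpg", "rb") as f:
    frame_b64 = base64.b64encode(f.read()).decode("utf-8")

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "List the traffic participants you can see (vehicles, "
                     "pedestrians, cyclists) and where each one is relative "
                     "to the camera."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{frame_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```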
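Similarly, the V2VNet entry aggregates scene information received from nearby vehicles; V2VNet itself fuses learned intermediate features with a graph neural network. The toy sketch below substitutes a translation-only warp and a plain per-cell mean over occupancy-like bird's-eye-view grids, purely to show why multi-viewpoint aggregation fills occlusions; grid and cell sizes and all function names are hypothetical.

```python
import numpy as np
from scipy.ndimage import shift

def warp_to_ego(bev, sender_offset_m, cell_size_m=0.5):
    """Translate a sender's BEV grid into the ego frame.

    Simplification: only relative translation is compensated; V2VNet also
    handles rotation and operates on learned features, not occupancy.
    """
    dy, dx = np.asarray(sender_offset_m) / cell_size_m
    return shift(bev, (dy, dx), order=1, cval=0.0)  # zero-fill empty cells

def aggregate(ego_bev, received):
    """Fuse ego + received grids with a per-cell mean over non-empty cells
    (a stand-in for V2VNet's learned GNN aggregation)."""
    stack = np.stack([ego_bev] + received)            # (n_vehicles, H, W)
    counts = np.maximum((stack != 0).sum(axis=0), 1)  # avoid divide-by-zero
    return stack.sum(axis=0) / counts

# Toy 20 m x 20 m grids at 0.5 m per cell.
ego = np.zeros((40, 40)); ego[10:12, 10:12] = 1.0        # seen by ego
sender = np.zeros((40, 40)); sender[30:32, 30:32] = 1.0  # occluded for ego
fused = aggregate(ego, [warp_to_ego(sender, sender_offset_m=(-5.0, 0.0))])
print((fused > 0).sum())  # 8 cells: both objects now visible in ego frame
```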
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.