Hearing What You Cannot See: Acoustic Vehicle Detection Around Corners
- URL: http://arxiv.org/abs/2007.15739v2
- Date: Thu, 25 Feb 2021 10:47:23 GMT
- Title: Hearing What You Cannot See: Acoustic Vehicle Detection Around Corners
- Authors: Yannick Schulz, Avinash Kini Mattar, Thomas M. Hehn, Julian F. P.
Kooij
- Abstract summary: We show that approaching vehicles behind blind corners can be detected by sound before they enter the line-of-sight.
We equip a research vehicle with a roof-mounted microphone array and show, on data collected with this sensor setup, that wall reflections reveal the presence and direction of occluded approaching vehicles.
A novel method is presented to classify whether and from which direction a vehicle is approaching before it is visible.
- Score: 5.4960756528016335
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work proposes to use passive acoustic perception as an additional
sensing modality for intelligent vehicles. We demonstrate that approaching
vehicles behind blind corners can be detected by sound before such vehicles
enter the line-of-sight. We have equipped a research vehicle with a roof-mounted
microphone array, and show on data collected with this sensor setup that wall
reflections provide information on the presence and direction of occluded
approaching vehicles. A novel method is presented to classify if and from what
direction a vehicle is approaching before it is visible, using as input
Direction-of-Arrival features that can be efficiently computed from the
streaming microphone array data. Since the local geometry around the
ego-vehicle affects the perceived patterns, we systematically study several
environment types, and investigate generalization across these environments.
With a static ego-vehicle, an accuracy of 0.92 is achieved on the hidden
vehicle classification task. Compared to a state-of-the-art visual detector,
Faster R-CNN, our pipeline achieves the same accuracy more than one second
ahead, providing crucial reaction time for the situations we study. While the
ego-vehicle is driving, we demonstrate positive results on acoustic detection,
still achieving an accuracy of 0.84 within one environment type. We further
study failure cases across environments to identify future research directions.
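
The abstract's classifier consumes Direction-of-Arrival (DoA) features computed from the streaming microphone-array data. Since the listing carries no code, the following is a minimal, hypothetical sketch of how such a DoA energy spectrum could be computed with GCC-PHAT cross-correlations accumulated in SRP-PHAT style; the array geometry, sample rate, frame length, and azimuth grid are assumptions for illustration, not the authors' released pipeline.

```python
# Illustrative sketch (assumed setup, not the paper's code): build a
# Direction-of-Arrival energy spectrum from one frame of microphone-array
# audio using GCC-PHAT cross-correlations accumulated over mic pairs
# (SRP-PHAT-style steered-response power).
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C


def gcc_phat(sig_a, sig_b, fs, max_tau):
    """GCC-PHAT between two mic signals: correlation over time lags
    within [-max_tau, +max_tau]."""
    n = sig_a.shape[0] + sig_b.shape[0]
    SA = np.fft.rfft(sig_a, n=n)
    SB = np.fft.rfft(sig_b, n=n)
    R = SA * np.conj(SB)
    R /= np.abs(R) + 1e-12              # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n=n)
    max_shift = int(fs * max_tau)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    lags = np.arange(-max_shift, max_shift + 1) / fs
    return lags, cc


def srp_doa_spectrum(frame, mic_xy, fs, azimuths_deg):
    """Steered-response power over candidate azimuths (far-field model).
    `frame` is (num_mics, num_samples); `mic_xy` holds mic positions in
    metres; azimuth 0 deg is straight ahead (+y), +/-90 deg to the sides."""
    az = np.deg2rad(azimuths_deg)
    dirs = np.stack([np.sin(az), np.cos(az)], axis=1)   # unit vectors (A, 2)
    spectrum = np.zeros(len(az))
    for i in range(len(mic_xy)):
        for j in range(i + 1, len(mic_xy)):
            baseline = mic_xy[i] - mic_xy[j]
            max_tau = np.linalg.norm(baseline) / SPEED_OF_SOUND
            lags, cc = gcc_phat(frame[i], frame[j], fs, max_tau)
            taus = dirs @ baseline / SPEED_OF_SOUND     # expected pair delays
            idx = np.searchsorted(lags, taus).clip(0, len(cc) - 1)
            spectrum += cc[idx]                         # accumulate response
    return spectrum


# Example with an assumed 4-mic roof array and one 100 ms frame of audio.
fs = 48_000
mic_xy = np.array([[-0.4, 0.0], [-0.15, 0.0], [0.15, 0.0], [0.4, 0.0]])
frame = np.random.randn(4, fs // 10)    # stand-in for real recorded audio
doa_features = srp_doa_spectrum(frame, mic_xy, fs, np.linspace(-90, 90, 60))
```

A sequence of such per-frame spectra would then serve as input to a classifier that decides whether a still-occluded vehicle is approaching and from which direction, analogous to the "if and from what direction" decision described in the abstract.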
Related papers
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Real-Time Idling Vehicles Detection using Combined Audio-Visual Deep Learning [1.2733164388167968]
We present a real-time, dynamic vehicle idling detection algorithm.
The proposed method relies on a multi-sensor, audio-visual, machine-learning workflow to detect idling vehicles.
We test our system in real-time at a hospital drop-off point in Salt Lake City.
arXiv Detail & Related papers (2023-05-23T23:35:43Z)
- Exploring Contextual Representation and Multi-Modality for End-to-End Autonomous Driving [58.879758550901364]
Recent perception systems enhance spatial understanding with sensor fusion but often lack full environmental context.
We introduce a framework that integrates three cameras to emulate the human field of view, coupled with top-down bird's-eye-view semantic data to enhance contextual representation.
Our method achieves a displacement error of 0.67 m in open-loop settings, surpassing current methods by 6.9% on the nuScenes dataset.
arXiv Detail & Related papers (2022-10-13T05:56:20Z)
- Automated Mobility Context Detection with Inertial Signals [7.71058263701836]
The primary goal of this paper is the investigation of context detection for remote monitoring of daily motor functions.
We aim to understand whether inertial signals sampled with wearable accelerometers provide reliable information to classify gait-related activities as either indoor or outdoor.
arXiv Detail & Related papers (2022-05-16T09:34:43Z)
- Self-Supervised Moving Vehicle Detection from Audio-Visual Cues [29.06503735149157]
We propose a self-supervised approach that leverages audio-visual cues to detect moving vehicles in videos.
Our approach employs contrastive learning to localize vehicles in images, using corresponding pairs of images and recorded audio.
We show that our model can be used as a teacher to supervise an audio-only detection model.
arXiv Detail & Related papers (2022-01-30T09:52:14Z)
- Real Time Monocular Vehicle Velocity Estimation using Synthetic Data [78.85123603488664]
We look at the problem of estimating the velocity of road vehicles from a camera mounted on a moving car.
We propose a two-step approach where first an off-the-shelf tracker is used to extract vehicle bounding boxes and then a small neural network is used to regress the vehicle velocity.
arXiv Detail & Related papers (2021-09-16T13:10:27Z)
- Driving-Signal Aware Full-Body Avatars [49.89791440532946]
We present a learning-based method for building driving-signal aware full-body avatars.
Our model is a conditional variational autoencoder that can be animated with incomplete driving signals.
We demonstrate the efficacy of our approach on the challenging problem of full-body animation for virtual telepresence.
arXiv Detail & Related papers (2021-05-21T16:22:38Z)
- SSTN: Self-Supervised Domain Adaptation Thermal Object Detection for Autonomous Driving [6.810856082577402]
We propose a deep neural network, the Self-Supervised Thermal Network (SSTN), that learns a feature embedding maximizing the information between the visible and infrared spectrum domains via contrastive learning.
The proposed method is extensively evaluated on the two publicly available datasets: the FLIR-ADAS dataset and the KAIST Multi-Spectral dataset.
arXiv Detail & Related papers (2021-03-04T16:42:49Z)
- V2VNet: Vehicle-to-Vehicle Communication for Joint Perception and Prediction [74.42961817119283]
We use vehicle-to-vehicle (V2V) communication to improve the perception and motion forecasting performance of self-driving vehicles.
By intelligently aggregating the information received from multiple nearby vehicles, we can observe the same scene from different viewpoints.
arXiv Detail & Related papers (2020-08-17T17:58:26Z)
- PLOP: Probabilistic poLynomial Objects trajectory Planning for autonomous driving [8.105493956485583]
We use a conditional imitation learning algorithm to predict trajectories for the ego vehicle and its neighbors.
Our approach is computationally efficient and relies only on on-board sensors.
We evaluate our method offline on the publicly available dataset nuScenes.
arXiv Detail & Related papers (2020-03-09T16:55:07Z)