Instantaneous Perception of Moving Objects in 3D
- URL: http://arxiv.org/abs/2405.02781v1
- Date: Sun, 5 May 2024 01:07:24 GMT
- Title: Instantaneous Perception of Moving Objects in 3D
- Authors: Di Liu, Bingbing Zhuang, Dimitris N. Metaxas, Manmohan Chandraker
- Abstract summary: The perception of 3D motion of surrounding traffic participants is crucial for driving safety.
We propose to leverage local occupancy completion of object point clouds to densify the shape cue, and mitigate the impact of swimming artifacts.
Extensive experiments demonstrate superior performance compared to standard 3D motion estimation approaches.
- Score: 86.38144604783207
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The perception of 3D motion of surrounding traffic participants is crucial for driving safety. While existing works primarily focus on general large motions, we contend that the instantaneous detection and quantification of subtle motions is equally important as they indicate the nuances in driving behavior that may be safety-critical, such as behaviors near a stop sign or parking positions. We delve into this under-explored task, examining its unique challenges and developing our solution, accompanied by a carefully designed benchmark. Specifically, due to the lack of correspondences between consecutive frames of sparse Lidar point clouds, static objects might appear to be moving - the so-called swimming effect. This intertwines with the true object motion, thereby posing ambiguity in accurate estimation, especially for subtle motions. To address this, we propose to leverage local occupancy completion of object point clouds to densify the shape cue, and mitigate the impact of swimming artifacts. The occupancy completion is learned in an end-to-end fashion together with the detection of moving objects and the estimation of their motion, instantaneously as soon as objects start to move. Extensive experiments demonstrate superior performance compared to standard 3D motion estimation approaches, particularly highlighting our method's specialized treatment of subtle motions.
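The swimming effect described in the abstract can be reproduced in a toy 2-D sketch. Everything below is illustrative and not from the paper: the patch size, the point counts, and the nearest-neighbour translation estimator are assumptions. Two sparse scans of the same static surface share no point-to-point correspondences, so a correspondence-based motion estimate drifts, while a densified shape (a stand-in for occupancy completion) drives the spurious motion toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_patch(n):
    """Sample n LiDAR-like hits from a static 1 m x 1 m surface patch."""
    return rng.uniform(0.0, 1.0, size=(n, 2))

def nn_translation(src, dst):
    """Estimate a translation by averaging nearest-neighbour offsets."""
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=-1)
    return (dst[d.argmin(axis=1)] - src).mean(axis=0)

# Two frames of the SAME static patch, but with different scan patterns.
sparse_t0, sparse_t1 = sample_patch(15), sample_patch(15)
# Dense resampling of the same patch, a stand-in for shape completion.
dense_t0, dense_t1 = sample_patch(2000), sample_patch(2000)

swim_sparse = np.linalg.norm(nn_translation(sparse_t0, sparse_t1))
swim_dense = np.linalg.norm(nn_translation(dense_t0, dense_t1))
# The object never moved, yet the sparse estimate is generally nonzero
# ("swimming"); the dense estimate stays near zero.
print(f"sparse: {swim_sparse:.3f} m, dense: {swim_dense:.4f} m")
```

The sketch only shows why sparsity creates phantom motion; the paper's actual remedy is a learned occupancy completion trained jointly with motion estimation.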
Related papers
- Articulated Object Manipulation using Online Axis Estimation with SAM2-Based Tracking [59.87033229815062]
Articulated object manipulation requires precise object interaction, where the object's axis must be carefully considered.
Previous research employed interactive perception for manipulating articulated objects, but such open-loop approaches often overlook the interaction dynamics.
We present a closed-loop pipeline integrating interactive perception with online axis estimation from segmented 3D point clouds.
arXiv Detail & Related papers (2024-09-24T17:59:56Z)
- JSTR: Joint Spatio-Temporal Reasoning for Event-based Moving Object Detection [17.3397709143323]
Event-based moving object detection is a challenging task, where the static background and moving objects are mixed together.
We propose a novel joint spatio-temporal reasoning method for event-based moving object detection.
arXiv Detail & Related papers (2024-03-12T09:22:52Z)
- Spatio-Temporal Action Detection Under Large Motion [86.3220533375967]
We study the performance of cuboid-aware feature aggregation in action detection under large motion.
We propose to enhance actor representation under large motion by tracking actors and performing temporal feature aggregation along the respective tracks.
We find that track-aware feature aggregation consistently achieves a large improvement in action detection performance compared to the cuboid-aware baseline.
arXiv Detail & Related papers (2022-09-06T06:55:26Z) - Attentive and Contrastive Learning for Joint Depth and Motion Field
Estimation [76.58256020932312]
Estimating the motion of the camera together with the 3D structure of the scene from a monocular vision system is a complex task.
We present a self-supervised learning framework for 3D object motion field estimation from monocular videos.
arXiv Detail & Related papers (2021-10-13T16:45:01Z) - ERASOR: Egocentric Ratio of Pseudo Occupancy-based Dynamic Object
Removal for Static 3D Point Cloud Map Building [0.1474723404975345]
This paper presents a novel static map building method called ERASOR, Egocentric RAtio of pSeudo Occupancy-based dynamic object Removal.
Our approach exploits the observation that most dynamic objects in urban environments are inevitably in contact with the ground.
arXiv Detail & Related papers (2021-03-07T10:29:07Z)
- Phase Space Reconstruction Network for Lane Intrusion Action Recognition [9.351931162958465]
In this paper, we propose a novel object-level phase space reconstruction network (PSRNet) for motion time series classification.
Our PSRNet achieves the best accuracy of 98.0%, exceeding existing action recognition approaches by more than 30%.
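The PSRNet architecture itself is not detailed in this summary, but phase space reconstruction of a motion time series is classically done with delay-coordinate (Takens) embedding; a minimal sketch, where the embedding dimension and delay are chosen arbitrarily for illustration:

```python
import numpy as np

def delay_embed(x, dim=3, tau=2):
    """Map a 1-D time series into a dim-dimensional phase space using
    delay coordinates: x(t), x(t + tau), ..., x(t + (dim - 1) * tau)."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)

# A sinusoidal "motion" series embeds into a closed loop in phase space.
t = np.linspace(0, 4 * np.pi, 200)
traj = delay_embed(np.sin(t), dim=2, tau=10)
print(traj.shape)  # (190, 2)
```

Such an embedding turns a scalar trajectory into geometric structure that a downstream classifier can consume.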
arXiv Detail & Related papers (2021-02-22T16:18:35Z)
- Tracking from Patterns: Learning Corresponding Patterns in Point Clouds for 3D Object Tracking [34.40019455462043]
We propose to learn 3D object correspondences from temporal point cloud data and infer the motion information from correspondence patterns.
Our method outperforms existing 3D tracking methods on both the KITTI and the larger-scale nuScenes datasets.
arXiv Detail & Related papers (2020-10-20T06:07:20Z)
- Associate-3Ddet: Perceptual-to-Conceptual Association for 3D Point Cloud Object Detection [64.2159881697615]
Object detection from 3D point clouds remains a challenging task, though recent studies have pushed the envelope with deep learning techniques.
We propose a domain-adaptation-like approach to enhance the robustness of the feature representation.
Our simple yet effective approach fundamentally boosts the performance of 3D point cloud object detection and achieves state-of-the-art results.
arXiv Detail & Related papers (2020-06-08T05:15:06Z)
- Spatiotemporal Attacks for Embodied Agents [119.43832001301041]
We take the first step to study adversarial attacks for embodied agents.
In particular, we generate adversarial examples, which exploit the interaction history in both the temporal and spatial dimensions.
Our perturbations have strong attack and generalization abilities.
arXiv Detail & Related papers (2020-05-19T01:38:47Z)
- Drosophila-Inspired 3D Moving Object Detection Based on Point Clouds [22.850519892606716]
We have developed a motion detector based on the shallow visual neural pathway of Drosophila.
This detector is sensitive to the movement of objects and can well suppress background noise.
An improved 3D object detection network is then used to estimate the point clouds of each proposal and efficiently generate the 3D bounding boxes and object categories.
arXiv Detail & Related papers (2020-05-06T10:04:23Z)
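The shallow Drosophila visual pathway referenced above is commonly modeled by the Hassenstein-Reichardt elementary motion detector; a minimal sketch, where the drifting-grating stimulus and one-sample delay are illustrative assumptions, not the paper's actual network:

```python
import numpy as np

def reichardt(left, right, delay=1):
    """Hassenstein-Reichardt correlator: multiply one receptor's delayed
    signal by its neighbour's current signal, in both directions, and
    subtract. A positive mean output indicates left-to-right motion."""
    l_d = np.roll(left, delay)
    l_d[:delay] = 0.0
    r_d = np.roll(right, delay)
    r_d[:delay] = 0.0
    return l_d * right - r_d * left

t = np.arange(200) * 0.05
# A grating drifting rightward reaches the left receptor before the right.
left = np.sin(2 * np.pi * t)
right = np.sin(2 * np.pi * (t - 0.1))
print(reichardt(left, right).mean())  # positive for rightward motion
```

Reversing the two inputs flips the sign of the output, which is why this correlator is direction-selective while remaining insensitive to static input.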
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.