MGNet: Monocular Geometric Scene Understanding for Autonomous Driving
- URL: http://arxiv.org/abs/2206.13199v1
- Date: Mon, 27 Jun 2022 11:27:55 GMT
- Title: MGNet: Monocular Geometric Scene Understanding for Autonomous Driving
- Authors: Markus Schön, Michael Buchholz, Klaus Dietmayer
- Abstract summary: MGNet is a multi-task framework for monocular geometric scene understanding.
We define monocular geometric scene understanding as the combination of two known tasks: Panoptic segmentation and self-supervised monocular depth estimation.
Our model is designed with a focus on low latency to provide real-time inference on a single consumer-grade GPU.
- Score: 10.438741209852209
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce MGNet, a multi-task framework for monocular geometric scene
understanding. We define monocular geometric scene understanding as the
combination of two known tasks: Panoptic segmentation and self-supervised
monocular depth estimation. Panoptic segmentation captures the full scene not
only semantically, but also on an instance basis. Self-supervised monocular
depth estimation uses geometric constraints derived from the camera measurement
model in order to measure depth from monocular video sequences only. To the
best of our knowledge, we are the first to propose the combination of these two
tasks in a single model. Our model is designed with a focus on low latency to
provide real-time inference on a single consumer-grade GPU. During deployment,
our model produces dense 3D point clouds with instance-aware semantic labels
from single high-resolution camera images. We evaluate our
model on two popular autonomous driving benchmarks, i.e., Cityscapes and KITTI,
and show competitive performance among other real-time capable methods. Source
code is available at https://github.com/markusschoen/MGNet.
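The deployment step described in the abstract, producing a dense 3D point cloud with instance-aware semantic labels from a single image, amounts to back-projecting each pixel through the pinhole camera model and attaching its panoptic label. The snippet below is a minimal NumPy sketch of that step, not MGNet's actual implementation; the function name, the intrinsics matrix `K`, and the label encoding are illustrative assumptions.

```python
import numpy as np

def backproject_to_labeled_cloud(depth, panoptic, K):
    """Back-project a dense depth map into a 3D point cloud and attach
    per-pixel panoptic labels (pinhole camera model).

    depth:    (H, W) metric depth in meters
    panoptic: (H, W) instance-aware semantic label per pixel
    K:        (3, 3) camera intrinsics
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))       # pixel grid
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]

    x = (u - cx) / fx * depth                             # X = (u - cx) * Z / fx
    y = (v - cy) / fy * depth                             # Y = (v - cy) * Z / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    labels = panoptic.reshape(-1)

    valid = points[:, 2] > 0                              # keep pixels with valid depth
    return points[valid], labels[valid]
```

Fusing the panoptic head's output with the depth head's output in this way is what yields the instance-aware labeled point cloud described in the abstract.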
Related papers
- DistillNeRF: Perceiving 3D Scenes from Single-Glance Images by Distilling Neural Fields and Foundation Model Features [65.8738034806085]
DistillNeRF is a self-supervised learning framework for understanding 3D environments in autonomous driving scenes.
Our method is a generalizable feedforward model that predicts a rich neural scene representation from sparse, single-frame multi-view camera inputs.
arXiv Detail & Related papers (2024-06-17T21:15:13Z)
- Metric3D: Towards Zero-shot Metric 3D Prediction from A Single Image [85.91935485902708]
We show that the key to a zero-shot single-view metric depth model lies in the combination of large-scale data training and resolving the metric ambiguity from various camera models.
We propose a canonical camera space transformation module, which explicitly addresses the ambiguity problems and can be effortlessly plugged into existing monocular models.
Our method enables the accurate recovery of metric 3D structures on randomly collected internet images.
arXiv Detail & Related papers (2023-07-20T16:14:23Z)
- Monocular 3D Object Detection with Depth from Motion [74.29588921594853]
We take advantage of camera ego-motion for accurate object depth estimation and detection.
Our framework, named Depth from Motion (DfM), then uses the established geometry to lift 2D image features to the 3D space and detects 3D objects thereon.
Our framework outperforms state-of-the-art methods by a large margin on the KITTI benchmark.
arXiv Detail & Related papers (2022-07-26T15:48:46Z)
- AutoLay: Benchmarking amodal layout estimation for autonomous driving [18.152206533685412]
AutoLay is a dataset and benchmark for amodal layout estimation from monocular images.
In addition to fine-grained attributes such as lanes, sidewalks, and vehicles, we also provide semantically annotated 3D point clouds.
arXiv Detail & Related papers (2021-08-20T08:21:11Z)
- Learning Geometry-Guided Depth via Projective Modeling for Monocular 3D Object Detection [70.71934539556916]
We learn geometry-guided depth estimation with projective modeling to advance monocular 3D object detection.
Specifically, we devise a principled geometric formula with projective modeling of 2D and 3D depth predictions in the monocular 3D object detection network.
Our method remarkably improves the detection performance of the state-of-the-art monocular-based method by 2.80% on the moderate test setting, without using extra data.
arXiv Detail & Related papers (2021-07-29T12:30:39Z)
- Learning to Associate Every Segment for Video Panoptic Segmentation [123.03617367709303]
We learn coarse segment-level matching and fine pixel-level matching together.
We show that our per-frame computation model can achieve new state-of-the-art results on Cityscapes-VPS and VIPER datasets.
arXiv Detail & Related papers (2021-06-17T13:06:24Z)
- OmniDet: Surround View Cameras based Multi-task Visual Perception Network for Autonomous Driving [10.3540046389057]
This work presents a multi-task visual perception network on unrectified fisheye images.
It jointly addresses six primary tasks necessary for an autonomous driving system.
We demonstrate that the jointly trained model performs better than the respective single task versions.
arXiv Detail & Related papers (2021-02-15T10:46:24Z)
- Learning Monocular Depth in Dynamic Scenes via Instance-Aware Projection Consistency [114.02182755620784]
We present an end-to-end joint training framework that explicitly models 6-DoF motion of multiple dynamic objects, ego-motion and depth in a monocular camera setup without supervision.
Our framework is shown to outperform the state-of-the-art depth and motion estimation methods.
arXiv Detail & Related papers (2021-02-04T14:26:42Z)
- SynDistNet: Self-Supervised Monocular Fisheye Camera Distance Estimation Synergized with Semantic Segmentation for Autonomous Driving [37.50089104051591]
State-of-the-art self-supervised learning approaches for monocular depth estimation usually suffer from scale ambiguity.
This paper introduces a novel multi-task learning strategy to improve self-supervised monocular distance estimation on fisheye and pinhole camera images.
arXiv Detail & Related papers (2020-08-10T10:52:47Z)
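Several of the works listed above, including MGNet itself, Depth from Motion, the instance-aware projection consistency framework, and SynDistNet, build on self-supervised monocular depth estimation via photometric view synthesis: predicted depth and relative camera pose are used to warp a neighboring video frame into the current view, and the reconstruction error supervises both predictions. The sketch below illustrates that objective under a pinhole model with nearest-neighbour sampling; it is a simplified illustration with assumed names and inputs, not code from any of these papers, and practical systems typically add bilinear sampling, an SSIM term, and masking of dynamic or occluded regions.

```python
import numpy as np

def photometric_reprojection_loss(target, source, depth, K, T_src_tgt):
    """Self-supervised view-synthesis objective (minimal sketch).

    target, source: (H, W, 3) float images from adjacent video frames
    depth:          (H, W) predicted metric depth for the target frame
    K:              (3, 3) camera intrinsics
    T_src_tgt:      (4, 4) relative pose mapping target-frame points to the source frame
    """
    h, w, _ = target.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]

    # Back-project target pixels to 3D with the predicted depth (homogeneous coords).
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    pts = np.stack([x, y, depth, np.ones_like(depth)], axis=-1)   # (H, W, 4)

    # Rigidly transform into the source camera frame.
    pts_src = pts @ T_src_tgt.T
    z = np.clip(pts_src[..., 2], 1e-6, None)

    # Project into the source image plane.
    u_src = np.round(fx * pts_src[..., 0] / z + cx).astype(int)
    v_src = np.round(fy * pts_src[..., 1] / z + cy).astype(int)
    valid = (u_src >= 0) & (u_src < w) & (v_src >= 0) & (v_src < h)

    # Nearest-neighbour warp of the source image into the target view.
    warped = np.zeros_like(target)
    warped[valid] = source[v_src[valid], u_src[valid]]

    # L1 photometric error on valid pixels drives depth and pose learning.
    return np.abs(target - warped)[valid].mean()
```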