Neural Contourlet Network for Monocular 360 Depth Estimation
- URL: http://arxiv.org/abs/2208.01817v1
- Date: Wed, 3 Aug 2022 02:25:55 GMT
- Title: Neural Contourlet Network for Monocular 360 Depth Estimation
- Authors: Zhijie Shen, Chunyu Lin, Lang Nie, Kang Liao, and Yao Zhao
- Abstract summary: We provide a new perspective that constructs an interpretable and sparse representation for a 360 image.
We propose a neural contourlet network consisting of a convolutional neural network and a contourlet transform branch.
In the encoder stage, we design a spatial-spectral fusion module to effectively fuse two types of cues.
- Score: 37.82642960470551
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For a monocular 360 image, depth estimation is challenging because
the distortion increases with latitude. To perceive this distortion, existing
methods resort to designing deep and complex network architectures. In this
paper, we provide a new perspective that constructs an interpretable and sparse
representation for a 360 image. Considering the importance of the geometric
structure in depth estimation, we utilize the contourlet transform to capture
an explicit geometric cue in the spectral domain and integrate it with an
implicit cue in the spatial domain. Specifically, we propose a neural
contourlet network consisting of a convolutional neural network and a
contourlet transform branch. In the encoder stage, we design a spatial-spectral
fusion module to effectively fuse the two types of cues. In the decoder, by
contrast, we employ the inverse contourlet transform with learned low-pass
subbands and band-pass directional subbands to compose the depth map.
Experiments on three popular panoramic image datasets demonstrate that the
proposed approach outperforms state-of-the-art schemes with faster convergence. Code
is available at
https://github.com/zhijieshen-bjtu/Neural-Contourlet-Network-for-MODE.
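As a concrete illustration of the transform the abstract refers to (not the authors' implementation), the sketch below mimics a one-level contourlet-style decomposition: a Laplacian-pyramid split yields a low-pass approximation plus a band-pass residual, and the residual is divided into directional subbands. Here the directional filter bank is a simplified stand-in built from hypothetical angular frequency wedges rather than the critically sampled fan filters of a true contourlet transform; the helper names (laplacian_split, angular_masks, directional_subbands) are illustrative only. Because the wedges partition the spectrum, low-pass plus the sum of subbands reconstructs the input exactly, which is the invertibility property the decoder relies on when composing depth from learned subbands.

import numpy as np
from scipy import ndimage

def laplacian_split(img, sigma=2.0):
    # One pyramid level: low-pass approximation + band-pass residual.
    low = ndimage.gaussian_filter(img, sigma)
    return low, img - low

def angular_masks(shape, n_dirs):
    # Binary masks partitioning the 2D frequency plane into n_dirs
    # orientation wedges over [0, pi); they sum to 1 at every frequency.
    h, w = shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    angle = np.arctan2(fy, fx) % np.pi
    edges = np.pi * np.arange(n_dirs + 1) / n_dirs
    return [((angle >= lo) & (angle < hi)).astype(float)
            for lo, hi in zip(edges[:-1], edges[1:])]

def directional_subbands(band, n_dirs=4):
    # Toy directional filter bank: mask the spectrum with angular wedges.
    spec = np.fft.fft2(band)
    return [np.real(np.fft.ifft2(spec * m))
            for m in angular_masks(band.shape, n_dirs)]

# Forward transform, then the trivial inverse: low-pass + sum of subbands.
img = np.random.rand(64, 128)       # stand-in for an equirectangular image
low, band = laplacian_split(img)
subbands = directional_subbands(band)
recon = low + sum(subbands)
assert np.allclose(recon, img)      # exact up to FFT round-off

In the actual network, the decoder-side low-pass and band-pass directional subbands are learned feature maps rather than masked spectra, and the inverse contourlet transform recomposes them into the depth map.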
Related papers
- Pixel-Aligned Multi-View Generation with Depth Guided Decoder [86.1813201212539]
We propose a novel method for pixel-level image-to-multi-view generation.
Unlike prior work, we incorporate attention layers across multi-view images in the VAE decoder of a latent video diffusion model.
Our model enables better pixel alignment across multi-view images.
(arXiv 2024-08-26)
- Hyper-VolTran: Fast and Generalizable One-Shot Image to 3D Object Structure via HyperNetworks [53.67497327319569]
We introduce a novel neural rendering technique to solve image-to-3D from a single view.
Our approach employs the signed distance function as the surface representation and incorporates generalizable priors through geometry-encoding volumes and HyperNetworks.
Our experiments show the advantages of our proposed approach with consistent results and rapid generation.
(arXiv 2023-12-24)
- HYVE: Hybrid Vertex Encoder for Neural Distance Fields [9.40036617308303]
We present a neural-network architecture suitable for accurate encoding of 3D shapes in a single forward pass.
Our network is able to output valid signed distance fields without explicit prior knowledge of non-zero distance values or shape occupancy.
(arXiv 2023-10-10)
- SwinDepth: Unsupervised Depth Estimation using Monocular Sequences via Swin Transformer and Densely Cascaded Network [29.798579906253696]
Acquiring dense ground-truth depth labels for supervised training is challenging, so unsupervised depth estimation from monocular sequences has emerged as a promising alternative.
In this paper, we employ a convolution-free Swin Transformer as an image feature extractor so that the network can capture both local geometric features and global semantic features for depth estimation.
Also, we propose a Densely Cascaded Multi-scale Network (DCMNet) that directly connects every feature map with feature maps from other scales via a top-down cascade pathway.
(arXiv 2023-01-17)
- GraphCSPN: Geometry-Aware Depth Completion via Dynamic GCNs [49.55919802779889]
We propose a Graph Convolution based Spatial Propagation Network (GraphCSPN) as a general approach for depth completion.
In this work, we leverage convolutional neural networks as well as graph neural networks in a complementary way for geometric representation learning.
Our method achieves state-of-the-art performance, especially when using only a few propagation steps.
(arXiv 2022-10-19)
- PointMCD: Boosting Deep Point Cloud Encoders via Multi-view Cross-modal Distillation for 3D Shape Recognition [55.38462937452363]
We propose a unified multi-view cross-modal distillation architecture, including a pretrained deep image encoder as the teacher and a deep point encoder as the student.
By pair-wise aligning multi-view visual and geometric descriptors, we can obtain more powerful deep point encoders without exhaustive and complicated network modifications.
(arXiv 2022-07-07)
- ACDNet: Adaptively Combined Dilated Convolution for Monocular Panorama Depth Estimation [9.670696363730329]
We propose ACDNet, based on adaptively combined dilated convolution, to predict the dense depth map for a monocular panoramic image.
We conduct depth estimation experiments on three datasets (both virtual and real-world), and the results demonstrate that our ACDNet substantially outperforms current state-of-the-art (SOTA) methods.
(arXiv 2021-12-29)
- VolumeFusion: Deep Depth Fusion for 3D Scene Reconstruction [71.83308989022635]
In this paper, we advocate that replicating the traditional two-stage framework with deep neural networks improves both the interpretability and the accuracy of the results.
Our network operates in two steps: 1) local computation of depth maps with a deep MVS technique, and 2) fusion of the depth maps and image features to build a single TSDF volume.
In order to improve the matching performance between images acquired from very different viewpoints, we introduce a rotation-invariant 3D convolution kernel called PosedConv.
(arXiv 2021-08-19)
- ACORN: Adaptive Coordinate Networks for Neural Scene Representation [40.04760307540698]
Current neural representations fail to accurately represent images at resolutions greater than a megapixel or 3D scenes with more than a few hundred thousand polygons.
We introduce a new hybrid implicit-explicit network architecture and training strategy that adaptively allocates resources during training and inference.
We demonstrate the first experiments that fit gigapixel images to nearly 40 dB peak signal-to-noise ratio.
(arXiv 2021-05-06)
This list is automatically generated from the titles and abstracts of the papers on this site.