Dual-branch Graph Feature Learning for NLOS Imaging
- URL: http://arxiv.org/abs/2502.19683v1
- Date: Thu, 27 Feb 2025 01:49:00 GMT
- Title: Dual-branch Graph Feature Learning for NLOS Imaging
- Authors: Xiongfei Su, Tianyi Zhu, Lina Liu, Zheng Chen, Yulun Zhang, Siyuan Li, Juntian Ye, Feihu Xu, Xin Yuan
- Abstract summary: Non-line-of-sight (NLOS) imaging offers the capability to reveal occluded scenes that are not directly visible. The \xnet methodology integrates an albedo-focused reconstruction branch dedicated to albedo information recovery and a depth-focused reconstruction branch that extracts geometrical structure. Our method attains the highest level of performance among existing methods across synthetic and real data.
- Score: 51.31554007495926
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The domain of non-line-of-sight (NLOS) imaging is advancing rapidly, offering the capability to reveal occluded scenes that are not directly visible. However, contemporary NLOS systems face several significant challenges: (1) The computational and storage requirements are profound due to the inherent three-dimensional grid data structure, which restricts practical application. (2) The simultaneous reconstruction of albedo and depth information requires a delicate balance using hyperparameters in the loss function, rendering the concurrent reconstruction of texture and depth information difficult. This paper introduces the innovative methodology, \xnet, which integrates an albedo-focused reconstruction branch dedicated to albedo information recovery and a depth-focused reconstruction branch that extracts geometrical structure, to overcome these obstacles. The dual-branch framework segregates content delivery to the respective reconstructions, thereby enhancing the quality of the retrieved data. To our knowledge, we are the first to employ the GNN as a fundamental component to transform dense NLOS grid data into sparse structural features for efficient reconstruction. Comprehensive experiments demonstrate that our method attains the highest level of performance among existing methods across synthetic and real data. Code: https://github.com/Nicholassu/DG-NLOS.
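The dense-grid-to-sparse-graph conversion the abstract describes can be illustrated with a minimal NumPy sketch. This is a hypothetical construction (thresholded voxels as nodes, 6-adjacency as edges); the actual DG-NLOS graph builder is defined in the linked repository and may differ.

```python
import numpy as np

def grid_to_graph(volume, threshold=0.1):
    """Convert a dense 3D transient volume into a sparse graph.

    Nodes are voxels whose intensity exceeds `threshold`; edges connect
    voxels that are 6-adjacent in the grid. This mirrors the paper's idea
    of replacing dense grid processing with sparse structural features.
    """
    coords = np.argwhere(volume > threshold)      # (N, 3) node coordinates
    feats = volume[tuple(coords.T)]               # (N,) node features
    index = {tuple(c): i for i, c in enumerate(coords)}
    edges = []
    for i, c in enumerate(coords):
        for axis in range(3):                     # 6-neighbourhood lookup
            for step in (-1, 1):
                nb = c.copy()
                nb[axis] += step
                j = index.get(tuple(nb))
                if j is not None:
                    edges.append((i, j))
    return coords, feats, np.array(edges, dtype=np.int64).reshape(-1, 2)

# Toy 8x8x8 volume with a small bright 2x2x2 cluster.
vol = np.zeros((8, 8, 8))
vol[2:4, 2:4, 2:4] = 1.0
coords, feats, edges = grid_to_graph(vol)
print(coords.shape[0], edges.shape[0])  # 8 nodes, 24 directed edges vs 512 voxels
```

The payoff is the node count: 8 graph nodes stand in for 512 dense voxels, which is the storage/compute saving the abstract attributes to the GNN representation.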
Related papers
- Hierarchical Mask-Enhanced Dual Reconstruction Network for Few-Shot Fine-Grained Image Classification [7.4334395431083715]
We propose the Hierarchical Mask-enhanced Dual Reconstruction Network (HMDRN) to improve fine-grained classification. HMDRN incorporates a dual-layer feature reconstruction and fusion module that leverages complementary visual information from different network hierarchies. Experiments on three challenging fine-grained datasets demonstrate that HMDRN consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2025-06-25T09:15:59Z) - ReassembleNet: Learnable Keypoints and Diffusion for 2D Fresco Reconstruction [20.327632780374497]
We address key limitations in state-of-the-art deep learning methods for reassembly. We propose ReassembleNet, a method that reduces complexity by representing each input piece as a set of contour keypoints. We then apply diffusion-based pose estimation to recover the original structure.
arXiv Detail & Related papers (2025-05-27T12:38:06Z) - HUG: Hierarchical Urban Gaussian Splatting with Block-Based Reconstruction for Large-Scale Aerial Scenes [13.214165748862815]
3DGS methods suffer from issues such as excessive memory consumption, slow training times, prolonged partitioning processes, and significant degradation in rendering quality due to the increased data volume. We introduce HUG, a novel approach that enhances data partitioning and reconstruction quality by leveraging a hierarchical neural Gaussian representation. Our method achieves state-of-the-art results on one synthetic dataset and four real-world datasets.
arXiv Detail & Related papers (2025-04-23T10:40:40Z) - Decompositional Neural Scene Reconstruction with Generative Diffusion Prior [64.71091831762214]
Decompositional reconstruction of 3D scenes, with complete shapes and detailed texture, is intriguing for downstream applications.
Recent approaches incorporate semantic or geometric regularization to address this issue, but they suffer significant degradation in underconstrained areas.
We propose DP-Recon, which employs diffusion priors in the form of Score Distillation Sampling (SDS) to optimize the neural representation of each individual object under novel views.
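As background for the DP-Recon summary above, Score Distillation Sampling optimizes a differentiable representation by backpropagating a diffusion model's denoising residual. The standard SDS gradient (as introduced in DreamFusion; DP-Recon's exact formulation may differ) is:

```latex
\nabla_{\theta}\mathcal{L}_{\mathrm{SDS}}
  = \mathbb{E}_{t,\epsilon}\!\left[
      w(t)\,\bigl(\hat{\epsilon}_{\phi}(x_t;\, y,\, t) - \epsilon\bigr)
      \frac{\partial x}{\partial \theta}
    \right]
```

where $x = g(\theta)$ is a rendering of the neural representation, $x_t$ its noised version at timestep $t$, $\hat{\epsilon}_{\phi}$ the diffusion model's noise prediction conditioned on prompt $y$, and $w(t)$ a timestep weighting.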
arXiv Detail & Related papers (2025-03-19T02:11:31Z) - Technical Report: Towards Spatial Feature Regularization in Deep-Learning-Based Array-SAR Reconstruction [8.808245551289994]
Array synthetic aperture radar (Array-SAR) has demonstrated significant potential for high-quality 3D mapping. Most studies rely on pixel-by-pixel reconstruction, neglecting spatial features like building structures, leading to artifacts such as holes and fragmented edges. Our study integrates spatial feature regularization into DL-based Array-SAR reconstruction, addressing key questions: What spatial features are relevant in urban-area mapping? Results show that spatial feature regularization significantly improves reconstruction accuracy, retrieves more complete building structures, and enhances robustness by reducing noise and outliers.
arXiv Detail & Related papers (2024-12-22T02:31:11Z) - Optimizing Federated Graph Learning with Inherent Structural Knowledge and Dual-Densely Connected GNNs [6.185201353691423]
Federated Graph Learning (FGL) enables clients to collaboratively train powerful Graph Neural Networks (GNNs) in a distributed manner without exposing their private data.
Existing methods either overlook the inherent structural knowledge in graph data or capture it at the cost of significantly increased resource demands.
We propose FedDense, a novel FGL framework that optimizes the utilization efficiency of inherent structural knowledge.
arXiv Detail & Related papers (2024-08-21T14:37:50Z) - UGMAE: A Unified Framework for Graph Masked Autoencoders [67.75493040186859]
We propose UGMAE, a unified framework for graph masked autoencoders.
We first develop an adaptive feature mask generator to account for the unique significance of nodes.
We then design a ranking-based structure reconstruction objective joint with feature reconstruction to capture holistic graph information.
arXiv Detail & Related papers (2024-02-12T19:39:26Z) - Neural Poisson Surface Reconstruction: Resolution-Agnostic Shape Reconstruction from Point Clouds [53.02191521770926]
We introduce Neural Poisson Surface Reconstruction (nPSR), an architecture for shape reconstruction that addresses the challenge of recovering 3D shapes from points.
nPSR exhibits two main advantages: First, it enables efficient training on low-resolution data while achieving comparable performance at high-resolution evaluation.
Overall, the neural Poisson surface reconstruction not only improves upon the limitations of classical deep neural networks in shape reconstruction but also achieves superior results in terms of reconstruction quality, running time, and resolution agnosticism.
arXiv Detail & Related papers (2023-08-03T13:56:07Z) - A Novel end-to-end Framework for Occluded Pixel Reconstruction with Spatio-temporal Features for Improved Person Re-identification [0.842885453087587]
Person re-identification is vital for monitoring and tracking crowd movement to enhance public security.
In this work, we propose a plausible solution by developing an effective occlusion detection and reconstruction framework for RGB images/videos based on deep neural networks.
Specifically, a CNN-based occlusion detection model classifies individual input frames, followed by a Conv-LSTM and Autoencoder to reconstruct the occluded pixels corresponding to the occluded frames for sequential (video) and non-sequential (image) data.
arXiv Detail & Related papers (2023-04-16T08:14:29Z) - Learning Detail-Structure Alternative Optimization for Blind Super-Resolution [69.11604249813304]
We propose an effective and kernel-free network, namely DSSR, which enables recurrent detail-structure alternative optimization for blind SR without incorporating a blur-kernel prior.
In our DSSR, a detail-structure modulation module (DSMM) is built to exploit the interaction and collaboration of image details and structures.
Our method achieves state-of-the-art performance against existing methods.
arXiv Detail & Related papers (2022-12-03T14:44:17Z) - Spectral Compressive Imaging Reconstruction Using Convolution and Contextual Transformer [6.929652454131988]
We propose a hybrid network module, namely the CCoT (Convolution and Contextual Transformer) block, which combines the inductive bias of convolution with the modeling ability of the transformer.
We integrate the proposed CCoT block into deep unfolding framework based on the generalized alternating projection algorithm, and further propose the GAP-CT network.
arXiv Detail & Related papers (2022-01-15T06:30:03Z) - Over-and-Under Complete Convolutional RNN for MRI Reconstruction [57.95363471940937]
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete convolutional recurrent neural network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep learning-based methods with fewer trainable parameters.
arXiv Detail & Related papers (2021-06-16T15:56:34Z) - Dual-constrained Deep Semi-Supervised Coupled Factorization Network with Enriched Prior [80.5637175255349]
We propose a new enriched prior based Dual-constrained Deep Semi-Supervised Coupled Factorization Network, called DS2CF-Net.
To extract hidden deep features, DS2CF-Net is modeled as a deep-structure and geometrical structure-constrained neural network.
Our network can obtain state-of-the-art performance for representation learning and clustering.
arXiv Detail & Related papers (2020-09-08T13:10:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.