Deep Anti-aliasing of Whole Focal Stack Using its Slice Spectrum
- URL: http://arxiv.org/abs/2101.09420v1
- Date: Sat, 23 Jan 2021 05:14:49 GMT
- Title: Deep Anti-aliasing of Whole Focal Stack Using its Slice Spectrum
- Authors: Yaning Li, Xue Wang, Guoqing Zhou, and Qing Wang
- Abstract summary: The paper aims at removing the aliasing effects for the whole focal stack generated from a sparse 3D light field.
We first explore the structural characteristics embedded in the focal stack slice and its corresponding frequency-domain representation.
We also observe that the energy distribution of the FSS is always located within the same triangular area under different angular sampling rates.
- Score: 7.746179634773142
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The paper aims at removing the aliasing effects for the whole focal stack
generated from a sparse 3D light field, while keeping the consistency across
all the focal layers. We first explore the structural characteristics embedded
in the focal stack slice and its corresponding frequency-domain representation,
i.e., the focal stack spectrum (FSS). We also observe that the energy
distribution of the FSS is always located within the same triangular area under
different angular sampling rates, and that the continuity of the point spread
function (PSF) is intrinsically maintained in the FSS. Based on these two
findings, we propose a learning-based FSS reconstruction approach that removes
aliasing over the whole focal stack in a single pass. Moreover, a novel
conjugate-symmetric loss function is proposed for the optimization. Compared to
previous works, our method avoids explicit depth estimation and can handle
challenging large-disparity scenarios. Experimental results on both synthetic
and real light field datasets show the superiority of the proposed approach for
different scenes and various angular sampling rates.
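The abstract does not spell out the form of the conjugate-symmetric loss. As a rough illustration only: the 2D spectrum of any real-valued slice must obey Hermitian (conjugate) symmetry, F(u, v) = conj(F(-u, -v)), so one plausible penalty measures a predicted spectrum's deviation from that constraint. The function name and loss form below are hypothetical, not taken from the paper:

```python
import numpy as np

def conjugate_symmetric_loss(pred_spectrum: np.ndarray) -> float:
    """Hypothetical penalty: the 2D spectrum of a real-valued slice
    should satisfy F(u, v) == conj(F(-u, -v)) (Hermitian symmetry).
    Returns the mean squared deviation from that constraint."""
    # Mirror both frequency axes. np.flip reverses each axis;
    # np.roll by 1 realigns the zero-frequency bin so that index k
    # maps to index (N - k) mod N, i.e. the -k frequency.
    mirrored = np.roll(np.flip(pred_spectrum), shift=1, axis=(0, 1))
    return float(np.mean(np.abs(pred_spectrum - np.conj(mirrored)) ** 2))

# Sanity check: the FFT of a real-valued array is exactly
# conjugate-symmetric, so the penalty should be ~0.
slice_fft = np.fft.fft2(np.random.rand(8, 8))
assert conjugate_symmetric_loss(slice_fft) < 1e-12
```

In a training loop such a term would be added to a reconstruction loss, encouraging the network's predicted FSS to remain the spectrum of a real-valued (physically valid) focal stack.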
Related papers
- Improving Geometry in Sparse-View 3DGS via Reprojection-based DoF Separation [35.17953057142724]
Recent learning-based Multi-View Stereo models have demonstrated state-of-the-art performance in sparse-view 3D reconstruction.
We propose reprojection-based DoF separation, a method distinguishing positional DoFs in terms of uncertainty.
We show that separating the positional DoFs of Gaussians and applying targeted constraints effectively suppresses geometric artifacts.
arXiv Detail & Related papers (2024-12-19T06:39:28Z)
- Generalizable Non-Line-of-Sight Imaging with Learnable Physical Priors [52.195637608631955]
Non-line-of-sight (NLOS) imaging has attracted increasing attention due to its potential applications.
Existing NLOS reconstruction approaches are constrained by the reliance on empirical physical priors.
We introduce a novel learning-based solution, comprising two key designs: Learnable Path Compensation (LPC) and Adaptive Phasor Field (APF).
arXiv Detail & Related papers (2024-09-21T04:39:45Z)
- RaNeuS: Ray-adaptive Neural Surface Reconstruction [87.20343320266215]
We leverage a differentiable radiance field, e.g., NeRF, to reconstruct detailed 3D surfaces in addition to producing novel view renderings.
Considering that different methods formulate and optimize the projection from SDF to radiance field with a globally constant Eikonal regularization, we improve upon this with a ray-wise weighting factor.
Our proposed RaNeuS is extensively evaluated on both synthetic and real datasets.
arXiv Detail & Related papers (2024-02-01T13:13:16Z)
- On Robust Cross-View Consistency in Self-Supervised Monocular Depth Estimation [56.97699793236174]
LFdiff is a diffusion-based generative framework tailored for LF synthesis.
We propose DistgUnet, a disentanglement-based noise estimation network, to harness comprehensive LF representations.
Extensive experiments demonstrate that LFdiff excels in synthesizing visually pleasing and disparity-controllable light fields.
arXiv Detail & Related papers (2022-09-19T03:46:13Z)
- Epipolar Focus Spectrum: A Novel Light Field Representation and Application in Dense-view Reconstruction [12.461169608271812]
We study two kinds of robust cross-view consistency in this paper.
We exploit the temporal coherence in both depth feature space and 3D voxel space for self-supervised monocular depth estimation.
Experimental results on several outdoor benchmarks show that our method outperforms current state-of-the-art techniques.
arXiv Detail & Related papers (2022-04-01T04:01:46Z)
- Anomaly Detection of Defect using Energy of Point Pattern Features within Random Finite Set Framework [5.7564383437854625]
Existing light field representations, such as epipolar plane image (EPI) and sub-aperture images, do not consider the structural characteristics across the views.
This paper proposes a novel Epipolar Focus Spectrum (EFS) representation by rearranging the EPI spectrum.
arXiv Detail & Related papers (2021-08-27T08:06:37Z)
- Learning an optimal PSF-pair for ultra-dense 3D localization microscopy [33.20228745456316]
We propose an efficient approach for industrial defect detection that is modeled based on anomaly detection using point pattern data.
We are the first to propose using transfer learning of local/point pattern features to overcome these limitations.
We evaluate the proposed approach on the MVTec AD dataset.
arXiv Detail & Related papers (2020-09-29T20:54:52Z)
- Deep Selective Combinatorial Embedding and Consistency Regularization for Light Field Super-resolution [93.95828097088608]
A long-standing challenge in multiple-particle-tracking is the accurate and precise 3D localization of individual particles at close proximity.
One established approach for snapshot 3D imaging is point-spread-function (PSF) engineering, in which the PSF is modified to encode the axial information.
Here we suggest using multiple PSFs simultaneously to help overcome this challenge, and investigate the problem of engineering multiple PSFs for dense 3D localization.
arXiv Detail & Related papers (2020-09-26T08:34:37Z)
- Light Field Spatial Super-resolution via Deep Combinatorial Geometry Embedding and Structural Consistency Regularization [99.96632216070718]
Light field (LF) images acquired by hand-held devices usually suffer from low spatial resolution.
The high-dimensionality characteristic and complex geometrical structure of LF images make the problem more challenging than traditional single-image SR.
We propose a novel learning-based LF spatial SR framework to explore the coherence among LF sub-aperture images.
Experimental results over both synthetic and real-world LF datasets demonstrate the significant advantage of our approach over state-of-the-art methods.
arXiv Detail & Related papers (2020-09-26T08:34:37Z)
- Light Field Spatial Super-resolution via Deep Combinatorial Geometry Embedding and Structural Consistency Regularization [99.96632216070718]
Light field (LF) images acquired by hand-held devices usually suffer from low spatial resolution.
The high-dimensionality characteristic and complex geometrical structure of LF images make the problem more challenging than traditional single-image SR.
We propose a novel learning-based LF framework, in which each view of an LF image is first individually super-resolved.
arXiv Detail & Related papers (2020-04-05T14:39:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.