Subsurface Depths Structure Maps Reconstruction with Generative
Adversarial Networks
- URL: http://arxiv.org/abs/2206.07388v1
- Date: Wed, 15 Jun 2022 08:51:10 GMT
- Title: Subsurface Depths Structure Maps Reconstruction with Generative
Adversarial Networks
- Authors: Dmitry Ivlev
- Abstract summary: The paper describes a method for reconstruction of detailed-resolution depth structure maps, usually obtained after the 3D seismic surveys.
The method uses two algorithms based on the generative-adversarial neural network architecture.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper describes a method for reconstruction of detailed-resolution depth
structure maps, usually obtained after the 3D seismic surveys, using the data
from 2D seismic depth maps. The method uses two algorithms based on the
generative-adversarial neural network architecture. The first algorithm,
StyleGAN2-ADA, first accumulates in the latent space of the neural network
semantic images of mountainous terrain forms, and then, with the help of
transfer learning, ideally the structural geometry of stratigraphic
horizons. The second algorithm, the Pixel2Style2Pixel encoder, using the
semantic level of generalization of the first algorithm, learns to reconstruct
the original high-resolution images from their degraded copies
(super-resolution technology). The paper demonstrates a methodological approach
to transferring knowledge of the structural forms of stratigraphic horizon
boundaries from well-studied areas to underexplored ones. Using the
multimodal synthesis of Pixel2Style2Pixel encoder, it is proposed to create a
probabilistic depth space, where each point of the project area is represented
by the density of probabilistic depth distribution of equally probable
reconstructed geological forms of structural images. Assessment of the
reconstruction quality was carried out for two blocks. Using this method,
credible detailed depth reconstructions comparable with the quality of 3D
seismic maps have been obtained from 2D seismic maps.
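The probabilistic depth space described in the abstract can be sketched numerically: given N equally probable reconstructions of the same area produced by multimodal synthesis, each map point gets a distribution of depths rather than a single value. The following is a minimal illustration of that aggregation step, assuming reconstructions arrive as same-shaped 2D depth arrays; the function name and the synthetic inputs are hypothetical, not the paper's code.

```python
import numpy as np

def probabilistic_depth_space(reconstructions):
    """Summarize N equally probable depth-map reconstructions into
    per-point distribution statistics (illustrative sketch)."""
    stack = np.stack(reconstructions, axis=0)  # shape (N, H, W)
    return {
        "mean": stack.mean(axis=0),             # expected depth per point
        "std": stack.std(axis=0),               # reconstruction uncertainty
        "p10": np.percentile(stack, 10, axis=0),
        "p90": np.percentile(stack, 90, axis=0),
    }

# Example: 5 synthetic "reconstructions" of a 2x2 block of the project area
rng = np.random.default_rng(0)
maps = [1000.0 + 5.0 * rng.standard_normal((2, 2)) for _ in range(5)]
stats = probabilistic_depth_space(maps)
print(stats["mean"].shape, stats["std"].shape)
```

Each output array has the shape of the project-area grid, so the per-point spread (e.g. `std` or the `p10`-`p90` interval) can be mapped directly as an uncertainty attribute alongside the reconstructed depth.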
Related papers
- 3 Dimensional Dense Reconstruction: A Review of Algorithms and Dataset [19.7595986056387]
3D dense reconstruction refers to the process of obtaining the complete shape and texture features of 3D objects from 2D planar images.
This work systematically introduces classical methods of 3D dense reconstruction based on geometric and optical models.
It also introduces datasets for deep learning and the performance, advantages, and disadvantages demonstrated by deep learning methods on these datasets.
arXiv Detail & Related papers (2023-04-19T01:56:55Z)
- GraphCSPN: Geometry-Aware Depth Completion via Dynamic GCNs [49.55919802779889]
We propose a Graph Convolution based Spatial Propagation Network (GraphCSPN) as a general approach for depth completion.
In this work, we leverage convolutional neural networks as well as graph neural networks in a complementary way for geometric representation learning.
Our method achieves the state-of-the-art performance, especially when compared in the case of using only a few propagation steps.
arXiv Detail & Related papers (2022-10-19T17:56:03Z)
- SketchSampler: Sketch-based 3D Reconstruction via View-dependent Depth Sampling [75.957103837167]
Reconstructing a 3D shape based on a single sketch image is challenging due to the large domain gap between a sparse, irregular sketch and a regular, dense 3D shape.
Existing works try to employ the global feature extracted from sketch to directly predict the 3D coordinates, but they usually suffer from losing fine details that are not faithful to the input sketch.
arXiv Detail & Related papers (2022-08-14T16:37:51Z)
- TerrainMesh: Metric-Semantic Terrain Reconstruction from Aerial Images Using Joint 2D-3D Learning [20.81202315793742]
This paper develops a joint 2D-3D learning approach to reconstruct a local metric-semantic mesh at each camera maintained by a visual odometry algorithm.
The mesh can be assembled into a global environment model to capture the terrain topology and semantics during online operation.
arXiv Detail & Related papers (2022-04-23T05:18:39Z)
- Learnable Triangulation for Deep Learning-based 3D Reconstruction of Objects of Arbitrary Topology from Single RGB Images [12.693545159861857]
We propose a novel deep reinforcement learning-based approach for 3D object reconstruction from monocular images.
The proposed method outperforms the state-of-the-art in terms of visual quality, reconstruction accuracy, and computational time.
arXiv Detail & Related papers (2021-09-24T09:44:22Z)
- VolumeFusion: Deep Depth Fusion for 3D Scene Reconstruction [71.83308989022635]
In this paper, we advocate that replicating the traditional two-stage framework with deep neural networks improves both the interpretability and the accuracy of the results.
Our network operates in two steps: 1) local computation of depth maps with a deep MVS technique, and 2) fusion of the depth maps and image features to build a single TSDF volume.
In order to improve the matching performance between images acquired from very different viewpoints, we introduce a rotation-invariant 3D convolution kernel called PosedConv.
arXiv Detail & Related papers (2021-08-19T11:33:58Z)
- S2R-DepthNet: Learning a Generalizable Depth-specific Structural Representation [63.58891781246175]
Humans can infer the 3D geometry of a scene from a sketch instead of a realistic image, which indicates that spatial structure plays a fundamental role in understanding the depth of scenes.
We are the first to explore the learning of a depth-specific structural representation, which captures the essential feature for depth estimation and ignores irrelevant style information.
Our S2R-DepthNet can be well generalized to unseen real-world data directly even though it is only trained on synthetic data.
arXiv Detail & Related papers (2021-04-02T03:55:41Z)
- Deep Two-View Structure-from-Motion Revisited [83.93809929963969]
Two-view structure-from-motion (SfM) is the cornerstone of 3D reconstruction and visual SLAM.
We propose to revisit the problem of deep two-view SfM by leveraging the well-posedness of the classic pipeline.
Our method consists of 1) an optical flow estimation network that predicts dense correspondences between two frames; 2) a normalized pose estimation module that computes relative camera poses from the 2D optical flow correspondences, and 3) a scale-invariant depth estimation network that leverages epipolar geometry to reduce the search space, refine the dense correspondences, and estimate relative depth maps.
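The "well-posedness of the classic pipeline" that step 2) leans on is the classical two-view geometry: with calibrated cameras, relative pose can be recovered from 2D correspondences via the essential matrix. As a hedged illustration of that classical step (not the paper's code), here is the standard eight-point algorithm in plain numpy, verified on a synthetic two-view setup:

```python
import numpy as np

def eight_point_essential(x1, x2):
    """Estimate the essential matrix from >=8 normalized (calibrated)
    point correspondences via the classic eight-point algorithm.
    x1, x2: (N, 2) arrays of matched image points."""
    n = x1.shape[0]
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(n),
    ])
    _, _, Vt = np.linalg.svd(A)          # null vector of A gives E up to scale
    E = Vt[-1].reshape(3, 3)
    # Project onto the essential-matrix manifold: singular values (1, 1, 0)
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

# Synthetic two-view setup: small rotation about y plus a sideways translation
theta = 0.1
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([1.0, 0.2, 0.1])
rng = np.random.default_rng(1)
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))  # points in front of both cameras
x1 = X[:, :2] / X[:, 2:3]
Xc2 = X @ R.T + t
x2 = Xc2[:, :2] / Xc2[:, 2:3]
E = eight_point_essential(x1, x2)
# Epipolar constraint x2_h^T E x1_h should vanish for every correspondence
x1h = np.column_stack([x1, np.ones(len(x1))])
x2h = np.column_stack([x2, np.ones(len(x2))])
residuals = np.abs(np.sum(x2h * (x1h @ E.T), axis=1))
print(residuals.max())
```

With noise-free correspondences the epipolar residuals are numerically zero; the relative rotation and (up-to-scale) translation then follow from decomposing E, which is the normalization the summary's pose module exploits.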
arXiv Detail & Related papers (2021-04-01T15:31:20Z)
- GeoNet++: Iterative Geometric Neural Network with Edge-Aware Refinement for Joint Depth and Surface Normal Estimation [204.13451624763735]
We propose a geometric neural network with edge-aware refinement (GeoNet++) to jointly predict both depth and surface normal maps from a single image.
GeoNet++ effectively predicts depth and surface normals with strong 3D consistency and sharp boundaries.
In contrast to current metrics that focus on evaluating pixel-wise error/accuracy, 3DGM measures whether the predicted depth can reconstruct high-quality 3D surface normals.
arXiv Detail & Related papers (2020-12-13T06:48:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.