3D Orientation Field Transform
- URL: http://arxiv.org/abs/2010.01453v1
- Date: Sun, 4 Oct 2020 00:29:46 GMT
- Title: 3D Orientation Field Transform
- Authors: Wai-Tsun Yeung, Xiaohao Cai, Zizhen Liang, Byung-Ho Kang
- Abstract summary: The two-dimensional (2D) orientation field transform has been proven effective at enhancing 2D contours and curves in images by means of top-down processing.
It has had no counterpart in three-dimensional (3D) images because orientation in 3D is far more complicated than in 2D.
In this work, we modularise the concept and generalise it to 3D curves. Different modular combinations are found to enhance curves to different extents and with different sensitivity to the packing of the 3D curves.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The two-dimensional (2D) orientation field transform has been proven effective at enhancing 2D contours and curves in images by means of top-down processing. It has had no counterpart in three-dimensional (3D) images, however, because orientation in 3D is far more complicated than in 2D. Practically and theoretically, the demand for and interest in 3D processing can only increase. In this work, we modularise the concept and generalise it to 3D curves. Different modular combinations are found to enhance curves to different extents and with different sensitivity to the packing of the 3D curves. In principle, the proposed 3D orientation field transform can naturally tackle any number of dimensions. As a special case, it is also ideal for 2D images, with a simpler methodology than the previous 2D orientation field transform. The proposed method is demonstrated on several transmission electron microscopy tomograms, ranging from 2D curve enhancement to the more important and interesting 3D case.
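The core idea behind an orientation field transform, integrating image intensity along short line segments over a bank of candidate orientations and keeping the strongest response together with its direction, can be sketched in a few lines. The following is a minimal brute-force 2D illustration under that reading of the abstract, not the authors' modular formulation; the function name, kernel radius, and number of angles are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def orientation_field_2d(image, radius=5, n_angles=32):
    """Hypothetical sketch: per-pixel line-integral responses over a
    bank of orientations; keep the maximum response and its angle."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    offsets = np.arange(-radius, radius + 1)
    best = np.full((h, w), -np.inf)   # strongest response so far
    angle = np.zeros((h, w))          # orientation achieving it
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        dy, dx = np.sin(theta), np.cos(theta)
        resp = np.zeros((h, w))
        for t in offsets:
            # bilinear sampling of the image along the oriented segment
            coords = np.stack([ys + t * dy, xs + t * dx])
            resp += map_coordinates(image, coords, order=1, mode='nearest')
        better = resp > best
        best[better] = resp[better]
        angle[better] = theta
    return best / len(offsets), angle
```

Generalising this to 3D would replace the planar angle bank with directions sampled over the unit hemisphere; the combinatorial growth of orientations in 3D is precisely the difficulty the paper's modular design addresses.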
Related papers
- Any2Point: Empowering Any-modality Large Models for Efficient 3D Understanding [83.63231467746598]
  We introduce Any2Point, a parameter-efficient method to empower any-modality large models (vision, language, audio) for 3D understanding.
  We propose a 3D-to-any (1D or 2D) virtual projection strategy that correlates the input 3D points to the original 1D or 2D positions within the source modality.
  arXiv Detail & Related papers (2024-04-11T17:59:45Z)
- SpatialTracker: Tracking Any 2D Pixels in 3D Space [71.58016288648447]
  We propose to estimate point trajectories in 3D space to mitigate the issues caused by image projection.
  Our method, named SpatialTracker, lifts 2D pixels to 3D using monocular depth estimators (a generic sketch of this unprojection step appears after this list).
  Tracking in 3D allows us to leverage as-rigid-as-possible (ARAP) constraints while simultaneously learning a rigidity embedding that clusters pixels into different rigid parts.
  arXiv Detail & Related papers (2024-04-05T17:59:25Z)
- Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors [104.79392615848109]
  We present Magic123, a two-stage coarse-to-fine approach for generating high-quality, textured 3D meshes from a single unposed image.
  In the first stage, we optimize a neural radiance field to produce a coarse geometry.
  In the second stage, we adopt a memory-efficient differentiable mesh representation to yield a high-resolution mesh with a visually appealing texture.
  arXiv Detail & Related papers (2023-06-30T17:59:08Z)
- XDGAN: Multi-Modal 3D Shape Generation in 2D Space [60.46777591995821]
  We propose a novel method to convert 3D shapes into compact 1-channel geometry images and leverage StyleGAN3 and image-to-image translation networks to generate 3D objects in 2D space.
  The generated geometry images are quick to convert to 3D meshes, enabling real-time 3D object synthesis, visualization and interactive editing.
  We show both quantitatively and qualitatively that our method is highly effective at various tasks such as 3D shape generation, single-view reconstruction and shape manipulation, while being significantly faster and more flexible than recent 3D generative models.
  arXiv Detail & Related papers (2022-10-06T15:54:01Z)
- To The Point: Correspondence-driven monocular 3D category reconstruction [39.811816510186475]
  To The Point (TTP) is a method for reconstructing 3D objects from a single image using 2D-to-3D correspondences learned from weak supervision.
  We replace CNN-based regression of camera pose and non-rigid deformation and obtain substantially more accurate 3D reconstructions.
  arXiv Detail & Related papers (2021-06-10T11:21:14Z)
- 3D-to-2D Distillation for Indoor Scene Parsing [78.36781565047656]
  We present a new approach that enables us to leverage 3D features extracted from a large-scale 3D data repository to enhance 2D features extracted from RGB images.
  First, we distill 3D knowledge from a pretrained 3D network to supervise a 2D network to learn simulated 3D features from 2D features during training.
  Second, we design a two-stage dimension normalization scheme to calibrate the 2D and 3D features for better integration.
  Third, we design a semantic-aware adversarial training model to extend our framework for training with unpaired 3D data.
  arXiv Detail & Related papers (2021-04-06T02:22:24Z)
- Cylinder3D: An Effective 3D Framework for Driving-scene LiDAR Semantic Segmentation [87.54570024320354]
  State-of-the-art methods for large-scale driving-scene LiDAR semantic segmentation often project and process the point clouds in 2D space.
  A straightforward solution to the issues of 3D-to-2D projection is to keep the 3D representation and process the points in 3D space.
  We develop a 3D cylinder partition and a 3D cylinder convolution based framework, termed Cylinder3D, which exploits the 3D topology relations and structures of driving-scene point clouds.
  arXiv Detail & Related papers (2020-08-04T13:56:19Z)
- Generalizing Spatial Transformers to Projective Geometry with Applications to 2D/3D Registration [11.219924013808852]
  Differentiable rendering is a technique to connect 3D scenes with corresponding 2D images.
  We propose a novel Projective Spatial Transformer module that generalizes spatial transformers to projective geometry.
  arXiv Detail & Related papers (2020-03-24T17:26:50Z)
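Several of the entries above lift 2D observations into 3D using per-pixel depth (e.g. SpatialTracker). That lifting step is standard pinhole-camera unprojection; the sketch below is a generic illustration of it, not code from any of the listed papers, and the function name and array conventions are assumptions.

```python
import numpy as np

def unproject(pixels, depth, K):
    """Lift (u, v) pixel coordinates with metric depth to 3D points
    in the camera frame, assuming a pinhole model with intrinsics K.
    pixels: (N, 2) array; depth: (N,) array; K: (3, 3) matrix."""
    u, v = pixels[:, 0], pixels[:, 1]
    homogeneous = np.stack([u, v, np.ones_like(u)])   # (3, N)
    rays = np.linalg.inv(K) @ homogeneous             # back-projected rays
    return (rays * depth).T                           # (N, 3) 3D points
```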