Essential difference between 2D and 3D from the perspective of real-space renormalization group
- URL: http://arxiv.org/abs/2311.05891v2
- Date: Tue, 24 Dec 2024 11:55:58 GMT
- Title: Essential difference between 2D and 3D from the perspective of real-space renormalization group
- Authors: Xinliang Lyu, Naoki Kawashima
- Abstract summary: Mutual-information area laws imply the difficulty of Kadanoff's block-spin method in two dimensions (2D) or higher.
A leap to the tensor-network RG, in hindsight, follows the guidance of mutual information and is efficient in 2D.
In three dimensions (3D), however, entanglement grows according to the area law, posing a threat to the 3D block-tensor map as an apt RG transformation.
- Abstract: We point out that the area laws of quantum-information quantities indicate the limitations of block transformations as well-behaved real-space renormalization-group (RG) maps, which in turn guides the design of better RG schemes. Mutual-information area laws imply the difficulty of Kadanoff's block-spin method in two dimensions (2D) or higher, owing to the growth of short-scale correlations among the spins on the boundary of a block. The leap to the tensor-network RG, in hindsight, follows the guidance of mutual information and is efficient in 2D, thanks to its mixture of quantum and classical perspectives and the saturation of entanglement entropy in 2D. In three dimensions (3D), however, entanglement grows according to the area law, posing a threat to the 3D block-tensor map as an apt RG transformation. As numerical evidence, we show that estimates of the 3D Ising critical exponents fail to improve as more couplings are retained. As guidance on how to proceed, a tensor-network toy model is proposed that captures the 3D entanglement-entropy area law.
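To make the block-spin map discussed above concrete: Kadanoff's prescription tiles the lattice with blocks and replaces each block of spins by a single coarse spin, e.g. by majority rule. Below is a minimal NumPy sketch of one such step on a square-lattice Ising configuration; the function name `block_spin_step`, the odd block size, and the majority rule are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def block_spin_step(spins: np.ndarray, b: int = 3) -> np.ndarray:
    """One Kadanoff block-spin step by majority rule (illustrative sketch).

    Tiles an L x L Ising configuration of +1/-1 spins with b x b blocks and
    replaces each block by the sign of its total magnetization. An odd b
    guarantees the block sum is never zero, so no tie-breaking is needed.
    """
    L = spins.shape[0]
    assert L % b == 0, "lattice size must be divisible by the block size"
    block_sums = spins.reshape(L // b, b, L // b, b).sum(axis=(1, 3))
    return np.sign(block_sums).astype(int)

# Example: coarse-grain a random 9x9 configuration down to 3x3.
rng = np.random.default_rng(0)
config = rng.choice([-1, 1], size=(9, 9))
print(block_spin_step(config, b=3))
```

Iterating such a map and tracking the renormalized couplings is what the abstract's mutual-information argument says becomes difficult in 2D or higher: short-scale correlations among the spins on a block's boundary grow with the block's surface.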
Related papers
- Three-dimensional real space renormalization group with well-controlled approximations [0.10742675209112622]
We develop Kadanoff's block idea into a reliable 3D real-space renormalization group (RG) method.
The proposed RG is promising as a systematically improvable real-space RG method in 3D.
arXiv Detail & Related papers (2024-12-18T11:53:05Z)
- NDC-Scene: Boost Monocular 3D Semantic Scene Completion in Normalized Device Coordinates Space [77.6067460464962]
Monocular 3D Semantic Scene Completion (SSC) has garnered significant attention in recent years due to its potential to predict complex semantics and geometry shapes from a single image, requiring no 3D inputs.
We identify several critical issues in current state-of-the-art methods, including the Feature Ambiguity of 2D features projected along rays into 3D space, the Pose Ambiguity of the 3D convolution, and the Imbalance in the 3D convolution across different depth levels.
We devise a novel Normalized Device Coordinates scene completion network (NDC-Scene) that directly extends the 2
arXiv Detail & Related papers (2023-09-26T02:09:52Z)
- GazeNeRF: 3D-Aware Gaze Redirection with Neural Radiance Fields [100.53114092627577]
Existing gaze redirection methods operate on 2D images and struggle to generate 3D consistent results.
We build on the intuition that the face region and eyeballs are separate 3D structures that move in a coordinated yet independent fashion.
arXiv Detail & Related papers (2022-12-08T13:19:11Z)
- Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR-based Perception [122.53774221136193]
State-of-the-art methods for driving-scene LiDAR-based perception often project the point clouds to 2D space and then process them via 2D convolution.
A natural remedy is to use 3D voxelization and 3D convolution networks.
We propose a new framework for outdoor LiDAR segmentation, in which cylindrical partition and asymmetrical 3D convolution networks are designed to explore the 3D geometric pattern (a minimal coordinate-binning sketch follows this entry).
arXiv Detail & Related papers (2021-09-12T06:25:11Z)
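The cylindrical partition shared by this and the later LiDAR papers in this list is easy to state: convert Cartesian points to cylindrical coordinates (rho, theta, z) and bin each coordinate, so that cells stay small near the sensor where points are dense. A minimal sketch follows; the bin counts and ranges are assumptions for illustration, not the papers' actual hyperparameters.

```python
import numpy as np

def cylindrical_voxel_indices(points, rho_bins=480, theta_bins=360, z_bins=32,
                              rho_max=50.0, z_range=(-3.0, 1.0)):
    """Map Cartesian points (N, 3) to cylindrical voxel indices (N, 3).

    Converts (x, y, z) to (rho, theta, z) and discretizes each coordinate.
    All grid parameters here are illustrative placeholders.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rho = np.sqrt(x**2 + y**2)
    theta = np.arctan2(y, x)  # angle in (-pi, pi]
    r_idx = np.clip((rho / rho_max * rho_bins).astype(int), 0, rho_bins - 1)
    t_idx = np.clip(((theta + np.pi) / (2 * np.pi) * theta_bins).astype(int),
                    0, theta_bins - 1)
    z_lo, z_hi = z_range
    z_idx = np.clip(((z - z_lo) / (z_hi - z_lo) * z_bins).astype(int),
                    0, z_bins - 1)
    return np.stack([r_idx, t_idx, z_idx], axis=1)

# Example: bin 1000 random points from a driving-scene-like volume.
pts = np.random.default_rng(1).uniform([-50, -50, -3], [50, 50, 1], (1000, 3))
print(cylindrical_voxel_indices(pts)[:5])
```

Features accumulated per cylindrical voxel would then feed the asymmetrical 3D convolutions these papers describe.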
- A hybrid classification-regression approach for 3D hand pose estimation using graph convolutional networks [1.0152838128195467]
We propose a two-stage GCN-based framework that learns per-pose relationship constraints.
The first stage quantizes the 2D/3D space and classifies the joints into 2D/3D blocks based on their locality.
The second stage uses a GCN-based module with an adaptive nearest-neighbor algorithm to determine joint relationships (a minimal quantization sketch follows this entry).
arXiv Detail & Related papers (2021-05-23T10:09:10Z)
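A hedged sketch of the stage-1 quantization described above: joint coordinates normalized to the unit cube are assigned to grid blocks, yielding a per-joint classification label for the first stage. The grid size, normalization, and function name are assumptions for illustration; the paper's actual scheme may differ.

```python
import numpy as np

def joints_to_block_labels(joints: np.ndarray, grid: int = 8) -> np.ndarray:
    """Assign each 3D joint to a block on a grid^3 partition of [0, 1]^3.

    Returns one flattened block index per joint, usable as a stage-1
    classification target before stage-2 GCN-based refinement.
    """
    idx = np.clip((joints * grid).astype(int), 0, grid - 1)  # per-axis bins
    return idx[:, 0] * grid * grid + idx[:, 1] * grid + idx[:, 2]

# Example: label 21 random hand joints on an 8x8x8 grid.
joints = np.random.default_rng(2).random((21, 3))
print(joints_to_block_labels(joints)[:5])
```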
- Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR Segmentation [81.02742110604161]
State-of-the-art methods for large-scale driving-scene LiDAR segmentation often project the point clouds to 2D space and then process them via 2D convolution.
We propose a new framework for outdoor LiDAR segmentation, in which cylindrical partition and asymmetrical 3D convolution networks are designed to explore the 3D geometric pattern.
Our method achieves 1st place on the SemanticKITTI leaderboard and outperforms existing methods on nuScenes by a noticeable margin of about 4%.
arXiv Detail & Related papers (2020-11-19T18:53:11Z)
- 3D Orientation Field Transform [0.294944680995069]
The two-dimensional (2D) orientation field transform has proved effective at enhancing 2D contours and curves in images by means of top-down processing.
It has had no counterpart in three-dimensional (3D) images, because orientations in 3D are far more complicated than in 2D.
In this work, we modularise the concept and generalise it to 3D curves. Different modular combinations are found to enhance curves to different extents and with different sensitivity to the packing of the 3D curves.
arXiv Detail & Related papers (2020-10-04T00:29:46Z)
- Cylinder3D: An Effective 3D Framework for Driving-scene LiDAR Semantic Segmentation [87.54570024320354]
State-of-the-art methods for large-scale driving-scene LiDAR semantic segmentation often project and process the point clouds in the 2D space.
A straightforward solution to tackle the issue of 3D-to-2D projection is to keep the 3D representation and process the points in the 3D space.
We develop a framework based on 3D cylinder partition and 3D cylinder convolution, termed Cylinder3D, which exploits the 3D topology relations and structures of driving-scene point clouds.
arXiv Detail & Related papers (2020-08-04T13:56:19Z)
- Cylindrical Convolutional Networks for Joint Object Detection and Viewpoint Estimation [76.21696417873311]
We introduce a learnable module, cylindrical convolutional networks (CCNs), that exploits a cylindrical representation of a convolutional kernel defined in 3D space.
CCNs extract a view-specific feature through a view-specific convolutional kernel to predict object category scores at each viewpoint.
Our experiments demonstrate the effectiveness of the cylindrical convolutional networks on joint object detection and viewpoint estimation.
arXiv Detail & Related papers (2020-03-25T10:24:58Z)