X-Field: A Physically Grounded Representation for 3D X-ray Reconstruction
- URL: http://arxiv.org/abs/2503.08596v1
- Date: Tue, 11 Mar 2025 16:31:56 GMT
- Title: X-Field: A Physically Grounded Representation for 3D X-ray Reconstruction
- Authors: Feiran Wang, Jiachen Tao, Junyi Wu, Haoxuan Wang, Bin Duan, Kai Wang, Zongxin Yang, Yan Yan,
- Abstract summary: X-ray imaging is indispensable in medical diagnostics, yet its use is tightly regulated due to potential health risks. Recent research focuses on generating novel views from sparse inputs and reconstructing Computed Tomography (CT) volumes. We introduce X-Field, the first 3D representation specifically designed for X-ray imaging.
- Score: 25.13707706037451
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: X-ray imaging is indispensable in medical diagnostics, yet its use is tightly regulated due to potential health risks. To mitigate radiation exposure, recent research focuses on generating novel views from sparse inputs and reconstructing Computed Tomography (CT) volumes, borrowing representations from the 3D reconstruction area. However, these representations originally target visible light imaging that emphasizes reflection and scattering effects, while neglecting penetration and attenuation properties of X-ray imaging. In this paper, we introduce X-Field, the first 3D representation specifically designed for X-ray imaging, rooted in the energy absorption rates across different materials. To accurately model diverse materials within internal structures, we employ 3D ellipsoids with distinct attenuation coefficients. To estimate each material's energy absorption of X-rays, we devise an efficient path partitioning algorithm accounting for complex ellipsoid intersections. We further propose hybrid progressive initialization to refine the geometric accuracy of X-Field and incorporate material-based optimization to enhance model fitting along material boundaries. Experiments show that X-Field achieves superior visual fidelity on both real-world human organ and synthetic object datasets, outperforming state-of-the-art methods in X-ray Novel View Synthesis and CT Reconstruction.
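The attenuation physics the abstract invokes is the Beer-Lambert law: a ray's transmitted intensity decays exponentially with the sum of attenuation coefficient times path length over each material it crosses. A minimal sketch of that accumulation, assuming a ray has already been partitioned into per-material segments (the coefficients and lengths below are illustrative, not values from the paper):

```python
import math

def transmitted_intensity(i0, segments):
    """Beer-Lambert law for a ray crossing piecewise-constant materials.

    segments: list of (mu, length) pairs, where mu is a material's linear
    attenuation coefficient and length is the portion of the ray path
    inside it. Transmitted intensity: I = I0 * exp(-sum(mu_k * l_k)).
    """
    optical_depth = sum(mu * length for mu, length in segments)
    return i0 * math.exp(-optical_depth)

# Two hypothetical materials along one ray: soft tissue, then bone.
ray = [(0.2, 3.0), (0.5, 1.0)]  # (mu in 1/cm, path length in cm)
intensity = transmitted_intensity(100.0, ray)  # 100 * exp(-1.1)
```

X-Field's path partitioning step would be what produces such segment lists from ellipsoid-ray intersections; the sketch only shows the accumulation once the segments are known.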
Related papers
- NeAS: 3D Reconstruction from X-ray Images using Neural Attenuation Surface [0.5772546394254112]
Reconstructing 3D structures from 2D X-ray images is a valuable technique in medical applications that requires less radiation exposure than computed tomography scans. Recent approaches that use implicit neural representations have enabled the synthesis of novel views from sparse X-ray images. We propose a novel approach for reconstructing 3D scenes using a Neural Attenuation Surface (NeAS) that simultaneously captures the surface geometry and attenuation coefficient fields.
arXiv Detail & Related papers (2025-03-10T16:11:58Z)
- BGM: Background Mixup for X-ray Prohibited Items Detection [75.58709178012502]
This paper introduces a novel data augmentation approach tailored for prohibited item detection, leveraging unique characteristics inherent to X-ray imagery. Our method is motivated by observations of physical properties including: 1) X-ray Transmission Imagery: Unlike reflected light images, transmitted X-ray pixels represent composite information from multiple materials along the imaging path. We propose a simple yet effective X-ray image augmentation technique, Background Mixup (BGM), for prohibited item detection in security screening contexts.
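The physical observation behind this summary, that overlapping materials in transmission imagery compose multiplicatively rather than by alpha blending, can be sketched as follows. This is a simplified illustration of that compositing rule under the transmission model, not the exact BGM algorithm (the function name and normalization are assumptions):

```python
import numpy as np

def transmission_mixup(item_img, background_img, i0=1.0):
    """Blend two X-ray images under the transmission model.

    Unlike reflected-light mixup (a convex combination of pixel values),
    materials stacked along an X-ray path multiply their transmittances:
    I = I0 * T_item * T_bg. Images are treated as intensities normalized
    by the unattenuated intensity i0.
    """
    t_item = np.clip(item_img / i0, 1e-6, 1.0)  # per-pixel transmittance
    t_bg = np.clip(background_img / i0, 1e-6, 1.0)
    return i0 * t_item * t_bg

# A prohibited item overlaid on a uniform cluttered background.
item = np.array([[1.0, 0.5], [0.25, 1.0]])
bg = np.full((2, 2), 0.8)
mixed = transmission_mixup(item, bg)
```

The multiplicative form darkens pixels where either layer attenuates, matching how real overlapping objects appear in a scanner.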
arXiv Detail & Related papers (2024-11-30T12:26:55Z)
- Differentiable Voxel-based X-ray Rendering Improves Sparse-View 3D CBCT Reconstruction [4.941613865666241]
We present DiffVox, a self-supervised framework for Cone-Beam Computed Tomography (CBCT) reconstruction. As a result, we reconstruct high-fidelity 3D CBCT volumes from fewer X-rays, potentially reducing ionizing radiation exposure and improving diagnostic utility.
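The core operation in voxel-based X-ray rendering of this kind is a line integral of attenuation through a voxel grid, rendered per ray and compared against the measured projections. A crude sketch with nearest-neighbor sampling (a stand-in for the differentiable sampling such a framework would use; all parameters are illustrative):

```python
import numpy as np

def render_ray(volume, origin, direction, n_samples=64, step=1.0):
    """Approximate the attenuation line integral through a voxel grid.

    Steps along the ray, accumulates mu * step at each in-bounds sample,
    and returns the transmitted fraction exp(-integral). A self-supervised
    reconstruction would minimize the difference between such rendered
    projections and the measured X-rays.
    """
    total = 0.0
    for k in range(n_samples):
        p = origin + k * step * direction
        idx = np.round(p).astype(int)  # nearest-neighbor voxel lookup
        if np.all(idx >= 0) and np.all(idx < np.array(volume.shape)):
            total += volume[tuple(idx)] * step
    return np.exp(-total)

# Uniform mu = 0.1 volume; a ray along x crosses four voxels.
vol = np.full((4, 4, 4), 0.1)
t = render_ray(vol, np.array([0.0, 1.0, 1.0]),
               np.array([1.0, 0.0, 0.0]), n_samples=4)
```

Real implementations use interpolated, vectorized sampling so gradients flow back into the voxel values during optimization.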
arXiv Detail & Related papers (2024-11-28T15:49:08Z)
- DiffuX2CT: Diffusion Learning to Reconstruct CT Images from Biplanar X-Rays [41.393567374399524]
We propose DiffuX2CT, which models CT reconstruction from ultra-sparse X-rays as a conditional diffusion process.
By doing so, DiffuX2CT achieves structure-controllable reconstruction, which enables 3D structural information to be recovered from 2D X-rays.
As an extra contribution, we collect a real-world lumbar CT dataset, called LumbarV, as a new benchmark to verify the clinical significance and performance of CT reconstruction from X-rays.
arXiv Detail & Related papers (2024-07-18T14:20:04Z)
- R$^2$-Gaussian: Rectifying Radiative Gaussian Splatting for Tomographic Reconstruction [53.19869886963333]
3D Gaussian splatting (3DGS) has shown promising results in rendering image and surface reconstruction.
This paper introduces R$^2$-Gaussian, the first 3DGS-based framework for sparse-view tomographic reconstruction.
arXiv Detail & Related papers (2024-05-31T08:39:02Z)
- X-Ray: A Sequential 3D Representation For Generation [54.160173837582796]
We introduce X-Ray, a novel 3D sequential representation inspired by x-ray scans.
X-Ray transforms a 3D object into a series of surface frames at different layers, making it suitable for generating 3D models from images.
arXiv Detail & Related papers (2024-04-22T16:40:11Z)
- Radiative Gaussian Splatting for Efficient X-ray Novel View Synthesis [88.86777314004044]
We propose a 3D Gaussian splatting-based framework, namely X-Gaussian, for X-ray novel view visualization.
Experiments show that our X-Gaussian outperforms state-of-the-art methods by 6.5 dB while requiring less than 15% of the training time and achieving over 73x faster inference.
arXiv Detail & Related papers (2024-03-07T00:12:08Z)
- SdCT-GAN: Reconstructing CT from Biplanar X-Rays with Self-driven Generative Adversarial Networks [6.624839896733912]
This paper presents a new self-driven generative adversarial network model (SdCT-GAN) for reconstruction of 3D CT images.
It is designed to pay more attention to image details by introducing a novel autoencoder structure in the discriminator.
The LPIPS evaluation metric is adopted, which quantitatively evaluates the fine contours and textures of reconstructed images better than existing metrics.
arXiv Detail & Related papers (2023-09-10T08:16:02Z)
- Perspective Projection-Based 3D CT Reconstruction from Biplanar X-rays [32.98966469644061]
We propose PerX2CT, a novel CT reconstruction framework from X-ray.
Our proposed method provides a different combination of features for each coordinate, which implicitly allows the model to obtain information about the 3D location.
arXiv Detail & Related papers (2023-03-09T14:45:25Z)
- X-Ray2EM: Uncertainty-Aware Cross-Modality Image Reconstruction from X-Ray to Electron Microscopy in Connectomics [55.6985304397137]
We propose an uncertainty-aware 3D reconstruction model that translates X-ray images to EM-like images with enhanced membrane segmentation quality.
This shows its potential for developing simpler, faster, and more accurate X-ray based connectomics pipelines.
arXiv Detail & Related papers (2023-03-02T00:52:41Z)
- MedNeRF: Medical Neural Radiance Fields for Reconstructing 3D-aware CT-Projections from a Single X-ray [14.10611608681131]
Excessive ionising radiation can lead to deterministic and harmful effects on the body.
This paper proposes a Deep Learning model that learns to reconstruct CT projections from a few, or even a single, X-ray view.
arXiv Detail & Related papers (2022-02-02T13:25:23Z)
- XraySyn: Realistic View Synthesis From a Single Radiograph Through CT Priors [118.27130593216096]
A radiograph visualizes the internal anatomy of a patient through the use of X-ray, which projects 3D information onto a 2D plane.
To the best of our knowledge, this is the first work on radiograph view synthesis.
We show that by gaining an understanding of radiography in 3D space, our method can be applied to radiograph bone extraction and suppression without ground-truth bone labels.
arXiv Detail & Related papers (2020-12-04T05:08:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.