Cutting Voxel Projector: A New Approach to Construct 3D Cone Beam CT Operator
- URL: http://arxiv.org/abs/2110.09841v1
- Date: Tue, 19 Oct 2021 10:54:01 GMT
- Title: Cutting Voxel Projector: A New Approach to Construct 3D Cone Beam CT Operator
- Authors: Vojtěch Kulvait (1), Georg Rose (1) ((1) Institute for Medical
Engineering and Research Campus STIMULATE, University of Magdeburg,
Magdeburg, Germany)
- Abstract summary: We introduce a new class of projectors for 3D cone beam tomographic reconstruction.
We use analytical formulas for the relationship between the voxel volume projected onto a given detector pixel and its contribution to the extinction value detected on that pixel.
We construct a near-exact projector and backprojector that can be used especially for algebraic reconstruction techniques.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In this paper, we introduce a new class of projectors for 3D cone beam
tomographic reconstruction. We find analytical formulas for the relationship
between the voxel volume projected onto a given detector pixel and its
contribution to the extinction value detected on that pixel. Using this
approach, we construct a near-exact projector and backprojector that can be
used especially for algebraic reconstruction techniques. We have implemented
this cutting voxel projector and a less accurate, speed-optimized version of it
together with two established projectors, a ray tracing projector based on
Siddon's algorithm and a TT footprint projector. We show that the cutting voxel
projector achieves, especially for large cone beam angles, noticeably higher
accuracy than the TT projector. Moreover, our implementation of the relaxed
version of the cutting voxel projector is significantly faster than current
footprint projector implementations. We further show that Siddon's algorithm
with comparable accuracy would be much slower than the cutting voxel projector.
All algorithms are implemented within an open source framework for algebraic
reconstruction in OpenCL 1.2 and C++ and are optimized for GPU computation.
They are published as open-source software under the GNU GPL 3 license, see
https://github.com/kulvait/KCT_cbct.
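To make the role of the matched projector/backprojector pair concrete, here is a minimal C++ sketch of a Landweber-type algebraic reconstruction loop that consumes such a pair. The Projector interface, the landweber function, and all names below are illustrative placeholders chosen for this sketch; they are not the KCT_cbct API and not the paper's OpenCL implementation.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical matched operator pair: A (forward projection) and A^T (backprojection).
struct Projector {
    virtual ~Projector() = default;
    // y = A x : maps voxel values to extinction values on the detector pixels.
    virtual std::vector<double> project(const std::vector<double>& volume) const = 0;
    // x = A^T y : maps detector values back into the voxel volume.
    virtual std::vector<double> backproject(const std::vector<double>& detector) const = 0;
};

// One simple algebraic scheme, the Landweber iteration:
//   x_{k+1} = x_k + lambda * A^T (b - A x_k)
// where b holds the measured extinction values (assumed to match the detector size).
std::vector<double> landweber(const Projector& A,
                              const std::vector<double>& b,
                              std::size_t voxelCount,
                              double lambda,
                              int iterations)
{
    std::vector<double> x(voxelCount, 0.0);
    for (int k = 0; k < iterations; ++k) {
        // Residual in the detector domain: b - A x_k.
        std::vector<double> residual = A.project(x);
        for (std::size_t i = 0; i < residual.size(); ++i)
            residual[i] = b[i] - residual[i];
        // Backproject the residual and take a relaxed gradient step.
        std::vector<double> update = A.backproject(residual);
        for (std::size_t j = 0; j < voxelCount && j < update.size(); ++j)
            x[j] += lambda * update[j];
    }
    return x;
}
```

In the actual framework the operators are implemented as GPU-optimized OpenCL kernels; the sketch only shows where the accuracy of A and its adjoint enters an algebraic scheme, which is why a near-exact matched pair such as the cutting voxel projector is attractive for these methods.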
Related papers
- LAPIG: Language Guided Projector Image Generation with Surface Adaptation and Stylization [54.291669057240476]
LAPIG takes the user text prompt as input and aims to transform the surface style using the projector.
Projection surface adaptation (PSA) can generate compensable surface stylization.
The generated image is then projected to produce visually pleasing surface style morphing effects.
arXiv Detail & Related papers (2025-03-15T15:31:04Z)
- Projecting Gaussian Ellipsoids While Avoiding Affine Projection Approximation [1.4792750204228]
3D Gaussian Splatting has dominated novel-view synthesis with its real-time rendering speed and state-of-the-art rendering quality.
We introduce an ellipsoid-based projection method to calculate the projection of Gaussian ellipsoids, the primitives of 3D Gaussian Splatting, onto the image plane.
Experiments over multiple widely adopted benchmark datasets show that our ellipsoid-based projection method can enhance the rendering quality of 3D Gaussian Splatting and its extensions.
arXiv Detail & Related papers (2024-11-12T06:29:48Z)
- UniVoxel: Fast Inverse Rendering by Unified Voxelization of Scene Representation [66.95976870627064]
We design a Unified Voxelization framework, dubbed UniVoxel, for explicit learning of scene representations.
We propose to encode a scene into a latent volumetric representation, based on which the geometry, materials and illumination can be readily learned via lightweight neural networks.
Experiments show that UniVoxel boosts the optimization efficiency significantly compared to other methods, reducing the per-scene training time from hours to 18 minutes, while achieving favorable reconstruction quality.
arXiv Detail & Related papers (2024-07-28T17:24:14Z)
- GIR: 3D Gaussian Inverse Rendering for Relightable Scene Factorization [62.13932669494098]
This paper presents a 3D Gaussian Inverse Rendering (GIR) method, employing 3D Gaussian representations to factorize the scene into material properties, light, and geometry.
We compute the normal of each 3D Gaussian using the shortest eigenvector, with a directional masking scheme forcing accurate normal estimation without external supervision.
We adopt an efficient voxel-based indirect illumination tracing scheme that stores direction-aware outgoing radiance in each 3D Gaussian to disentangle secondary illumination for approximating multi-bounce light transport.
arXiv Detail & Related papers (2023-12-08T16:05:15Z)
- CompenHR: Efficient Full Compensation for High-resolution Projector [68.42060996280064]
Full projector compensation is a practical task of projector-camera systems.
It aims to find a projector input image, named compensation image, such that when projected it cancels the geometric and photometric distortions.
State-of-the-art methods use deep learning to address this problem and show promising performance for low-resolution setups.
However, directly applying deep learning to high-resolution setups is impractical due to the long training time and high memory cost.
arXiv Detail & Related papers (2023-11-22T14:13:27Z)
- Neural Projection Mapping Using Reflectance Fields [11.74757574153076]
We introduce a projector into a neural reflectance field, which allows calibrating the projector and performing photorealistic light editing.
Our neural field consists of three neural networks, estimating geometry, material, and transmittance.
We believe that neural projection mapping opens up the door to novel and exciting downstream tasks, through the joint optimization of the scene and projection images.
arXiv Detail & Related papers (2023-06-11T05:33:10Z)
- Novel projection schemes for graph-based Light Field coding [0.10499611180329801]
This paper introduces two novel projection schemes resulting in less error in disparity information.
One projection scheme can also significantly reduce time computation for both encoder and decoder.
arXiv Detail & Related papers (2022-06-09T08:10:22Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
- Directionally Decomposing Structured Light for Projector Calibration [22.062182997296805]
Intrinsic projector calibration is essential in projection mapping (PM) applications.
We present a practical calibration device that requires a minimal working volume directly in front of the projector lens.
We demonstrate that our technique can calibrate projectors with different focusing distances and aperture sizes at the same accuracy as a conventional method.
arXiv Detail & Related papers (2021-10-08T06:44:01Z)
- End-to-end Full Projector Compensation [81.19324259967742]
Full projector compensation aims to modify a projector input image to compensate for both geometric and photometric disturbance of the projection surface.
In this paper, we propose the first end-to-end differentiable solution, named CompenNeSt++, to solve the two problems jointly.
arXiv Detail & Related papers (2020-07-30T18:23:52Z)
- DeProCams: Simultaneous Relighting, Compensation and Shape Reconstruction for Projector-Camera Systems [91.45207885902786]
We propose a novel end-to-end trainable model named DeProCams to learn the photometric and geometric mappings of ProCams.
DeProCams explicitly decomposes the projector-camera image mappings into three subprocesses: shading attributes estimation, rough direct light estimation and photorealistic neural rendering.
In our experiments, DeProCams shows clear advantages over previous arts with promising quality and being fully differentiable.
arXiv Detail & Related papers (2020-03-06T05:49:16Z)