Asymmetric 3D Context Fusion for Universal Lesion Detection
- URL: http://arxiv.org/abs/2109.08684v1
- Date: Fri, 17 Sep 2021 16:25:10 GMT
- Title: Asymmetric 3D Context Fusion for Universal Lesion Detection
- Authors: Jiancheng Yang, Yi He, Kaiming Kuang, Zudi Lin, Hanspeter Pfister,
Bingbing Ni
- Abstract summary: 3D networks are strong in 3D context yet lack supervised pretraining.
Existing 3D context fusion operators are designed to be spatially symmetric, performing identical operations on each 2D slice, as convolutions do.
We propose a novel asymmetric 3D context fusion operator (A3D), which uses different weights to fuse 3D context from different 2D slices.
- Score: 55.61873234187917
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modeling 3D context is essential for high-performance 3D medical image
analysis. Although 2D networks benefit from large-scale 2D supervised
pretraining, they are weak in capturing 3D context. 3D networks are strong in 3D
context yet lack supervised pretraining. As an emerging technique, \emph{3D
context fusion operator}, which enables conversion from 2D pretrained networks,
leverages the advantages of both and has achieved great success. Existing 3D
context fusion operators are designed to be spatially symmetric, i.e.,
performing identical operations on each 2D slice, as convolutions do. However,
these operators are not truly equivariant to translation, especially when only
a few 3D slices are used as inputs. In this paper, we propose a novel
asymmetric 3D context fusion operator (A3D), which uses different weights to
fuse 3D context from different 2D slices. Notably, A3D is NOT
translation-equivariant, yet it significantly outperforms existing symmetric
context fusion operators without introducing large computational overhead. We
validate the effectiveness of the proposed method by extensive experiments on
the DeepLesion benchmark, a large-scale public dataset for universal lesion
detection from computed tomography (CT). The proposed A3D consistently
outperforms symmetric context fusion operators by considerable margins, and
establishes a new \emph{state of the art} on DeepLesion. To facilitate open
research, our code and model in PyTorch are available at
https://github.com/M3DV/AlignShift.
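As a rough illustration of the idea, the PyTorch sketch below replaces weight sharing across slices with a learnable per-slice mixing matrix along the depth axis. The class name, identity initialization, and einsum layout are assumptions made here for exposition, not the authors' implementation; see the repository above for that.

```python
import torch
import torch.nn as nn

class A3DSketch(nn.Module):
    """Illustrative asymmetric 3D context fusion over the slice axis.

    Unlike a 3D convolution, which applies identical, shift-shared
    weights to every slice, this layer mixes the D input slices with a
    learnable D x D matrix, so every output slice has its own fusion
    weights -- deliberately not translation-equivariant.
    """

    def __init__(self, num_slices: int):
        super().__init__()
        # Identity initialization: the layer starts as a no-op over depth,
        # preserving the behavior of a 2D-pretrained backbone.
        self.mix = nn.Parameter(torch.eye(num_slices))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, D, H, W); mix features along the depth axis D.
        return torch.einsum("ed,ncdhw->ncehw", self.mix, x)

# Toy usage: fuse context across 7 CT slices without changing the shape.
feat = torch.randn(2, 64, 7, 32, 32)
fused = A3DSketch(num_slices=7)(feat)  # still (2, 64, 7, 32, 32)
```

Because the mixing weights depend on absolute slice position, such a layer trades translation equivariance for slice-aware fusion, which is the asymmetry the abstract describes.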
Related papers
- EmbodiedSAM: Online Segment Any 3D Thing in Real Time [61.2321497708998]
Embodied tasks require the agent to understand a 3D scene while simultaneously exploring it.
An online, real-time, fine-grained and highly generalized 3D perception model is urgently needed.
arXiv Detail & Related papers (2024-08-21T17:57:06Z)
- MeshFormer: High-Quality Mesh Generation with 3D-Guided Reconstruction Model [34.245635412589806]
MeshFormer is a sparse-view reconstruction model that explicitly leverages 3D native structure, input guidance, and training supervision.
It can be integrated with 2D diffusion models to enable fast single-image-to-3D and text-to-3D tasks.
arXiv Detail & Related papers (2024-08-19T17:55:17Z)
- Any2Point: Empowering Any-modality Large Models for Efficient 3D Understanding [83.63231467746598]
We introduce Any2Point, a parameter-efficient method to empower any-modality large models (vision, language, audio) for 3D understanding.
We propose a 3D-to-any (1D or 2D) virtual projection strategy that correlates the input 3D points to the original 1D or 2D positions within the source modality.
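As a rough illustration of what a 3D-to-2D virtual projection could look like, the sketch below maps each 3D point to a 2D position by discarding one coordinate (a simple orthographic view); Any2Point's actual strategy is richer, so treat the function and its parameters as assumptions.

```python
import torch

# Hypothetical "virtual projection" from 3D to 2D: drop one coordinate
# (an orthographic view). The resulting 2D positions could then index a
# 2D-pretrained model's positional embeddings.
def virtual_project_2d(points: torch.Tensor, drop_axis: int = 2) -> torch.Tensor:
    keep = [a for a in range(3) if a != drop_axis]
    return points[:, keep]  # (N, 3) -> (N, 2)
```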
arXiv Detail & Related papers (2024-04-11T17:59:45Z)
- NDC-Scene: Boost Monocular 3D Semantic Scene Completion in Normalized Device Coordinates Space [77.6067460464962]
Monocular 3D Semantic Scene Completion (SSC) has garnered significant attention in recent years due to its potential to predict complex semantics and geometry shapes from a single image, requiring no 3D inputs.
We identify several critical issues in current state-of-the-art methods, including the Feature Ambiguity of projected 2D features in the ray to the 3D space, the Pose Ambiguity of the 3D convolution, and the Imbalance in the 3D convolution across different depth levels.
We devise a novel Normalized Device Coordinates scene completion network (NDC-Scene) that directly extends the 2D feature map to the normalized device coordinates space, rather than the world space, by progressively restoring the depth dimension with deconvolution operations.
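A minimal sketch of that lifting step, assuming small 3D deconvolutions that double the depth axis at each stage; the channel counts, kernel sizes, and number of stages are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Hypothetical lifting of a 2D feature map into a volume by progressively
# restoring a depth axis with 3D deconvolutions.
lift = nn.Sequential(
    nn.ConvTranspose3d(64, 64, kernel_size=(2, 1, 1), stride=(2, 1, 1)),
    nn.ConvTranspose3d(64, 64, kernel_size=(2, 1, 1), stride=(2, 1, 1)),
)
feat2d = torch.randn(1, 64, 48, 64)   # (N, C, H, W) image features
vol = lift(feat2d.unsqueeze(2))       # depth 1 -> 4: (1, 64, 4, 48, 64)
```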
arXiv Detail & Related papers (2023-09-26T02:09:52Z)
- 3D-Aware Neural Body Fitting for Occlusion Robust 3D Human Pose Estimation [28.24765523800196]
We propose 3D-aware Neural Body Fitting (3DNBF) for 3D human pose estimation.
In particular, we propose a generative model of deep features based on a volumetric human representation with Gaussian ellipsoidal kernels emitting 3D pose-dependent feature vectors.
The neural features are trained with contrastive learning to become 3D-aware and hence to overcome the 2D-3D ambiguity.
arXiv Detail & Related papers (2023-08-19T22:41:00Z)
- Act3D: 3D Feature Field Transformers for Multi-Task Robotic Manipulation [18.964403296437027]
Act3D represents the robot's workspace using a 3D feature field with adaptive resolutions dependent on the task at hand.
It samples 3D point grids in a coarse-to-fine manner, featurizes them using relative-position attention, and selects where to focus the next round of point sampling.
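The coarse-to-fine loop might be sketched as below, where `score_fn` stands in for Act3D's relative-position attention scoring; the grid size and the halving schedule are assumptions.

```python
import torch

# Hypothetical coarse-to-fine sampling loop: score a 3D point grid, then
# resample a finer grid around the best-scoring point.
def coarse_to_fine(score_fn, center, extent, levels=3, n=8):
    for _ in range(levels):
        offs = torch.linspace(-extent, extent, n)
        grid = torch.stack(
            torch.meshgrid(offs, offs, offs, indexing="ij"), dim=-1
        ).reshape(-1, 3) + center          # (n**3, 3) candidate points
        scores = score_fn(grid)            # (n**3,) relevance per point
        center = grid[scores.argmax()]     # focus the next round here
        extent = extent / 2                # shrink the search window
    return center

# Toy usage: find the point closest to the origin.
best = coarse_to_fine(lambda g: -g.norm(dim=1), torch.zeros(3), extent=1.0)
```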
arXiv Detail & Related papers (2023-06-30T17:34:06Z)
- 3D-to-2D Distillation for Indoor Scene Parsing [78.36781565047656]
We present a new approach that leverages 3D features extracted from a large-scale 3D data repository to enhance 2D features extracted from RGB images.
First, we distill 3D knowledge from a pretrained 3D network to supervise a 2D network to learn simulated 3D features from 2D features during training.
Second, we design a two-stage dimension normalization scheme to calibrate the 2D and 3D features for better integration.
Third, we design a semantic-aware adversarial training model to extend our framework for training with unpaired 3D data.
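A minimal sketch of the first step, under the assumption that the student's simulated 3D features and the frozen teacher's features are already calibrated to a shared layout; the loss choice is illustrative, not the paper's.

```python
import torch
import torch.nn.functional as F

# Hypothetical distillation objective: a 2D student's "simulated" 3D
# features regress toward a frozen, pretrained 3D teacher's features.
# Assumes both are already normalized to a shared (N, C) layout.
def distill_loss(student_feats: torch.Tensor, teacher_feats: torch.Tensor):
    return F.mse_loss(student_feats, teacher_feats.detach())
```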
arXiv Detail & Related papers (2021-04-06T02:22:24Z)
- Exemplar Fine-Tuning for 3D Human Model Fitting Towards In-the-Wild 3D Human Pose Estimation [107.07047303858664]
Large-scale human datasets with 3D ground-truth annotations are difficult to obtain in the wild.
We address this problem by augmenting existing 2D datasets with high-quality 3D pose fits.
The resulting annotations are sufficient to train, from scratch, 3D pose regressor networks that outperform the current state of the art on in-the-wild benchmarks.
arXiv Detail & Related papers (2020-04-07T20:21:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.