Interpretable 2D Vision Models for 3D Medical Images
- URL: http://arxiv.org/abs/2307.06614v3
- Date: Tue, 5 Dec 2023 10:08:45 GMT
- Title: Interpretable 2D Vision Models for 3D Medical Images
- Authors: Alexander Ziller, Ayhan Can Erdur, Marwa Trigui, Alp Güvenir, Tamara
T. Mueller, Philip Müller, Friederike Jungmann, Johannes Brandt, Jan
Peeken, Rickmer Braren, Daniel Rueckert, Georgios Kaissis
- Abstract summary: This study proposes a simple approach of adapting 2D networks with an intermediate feature representation for processing 3D images.
We show on all 3D MedMNIST datasets as benchmark and two real-world datasets consisting of several hundred high-resolution CT or MRI scans that our approach performs on par with existing methods.
- Score: 47.75089895500738
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Training Artificial Intelligence (AI) models on 3D images presents unique
challenges compared to the 2D case: Firstly, the demand for computational
resources is significantly higher, and secondly, the availability of large
datasets for pre-training is often limited, impeding training success. This
study proposes a simple approach of adapting 2D networks with an intermediate
feature representation for processing 3D images. Our method employs attention
pooling to learn to assign each slice an importance weight and, by that, obtain
a weighted average of all 2D slices. These weights directly quantify the
contribution of each slice to the prediction and thus make the model's
output inspectable. We show on all 3D MedMNIST datasets as a benchmark and
two real-world datasets consisting of several hundred high-resolution CT or MRI
scans that our approach performs on par with existing methods. Furthermore, we
compare the in-built interpretability of our approach to HiResCam, a
state-of-the-art retrospective interpretability approach.
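The slice-weighting mechanism described in the abstract can be sketched as follows. This is a minimal, framework-free illustration of attention pooling over per-slice feature vectors; the function name and the externally supplied scoring function are assumptions for illustration, not the authors' implementation (in the paper, the scoring function is learned jointly with the network):

```python
import math

def slice_attention_pool(slice_feats, score_fn):
    """Softmax-weighted average of per-slice feature vectors.

    slice_feats: list of equal-length feature vectors, one per 2D slice
                 (e.g. produced by a 2D backbone applied slice-wise).
    score_fn:    maps one feature vector to a scalar attention score.

    Returns the pooled feature vector and the per-slice weights; the
    weights quantify each slice's contribution to the prediction.
    """
    scores = [score_fn(f) for f in slice_feats]
    m = max(scores)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]      # softmax over slices
    dim = len(slice_feats[0])
    pooled = [sum(w * f[d] for w, f in zip(weights, slice_feats))
              for d in range(dim)]
    return pooled, weights
```

Because the weights sum to one, they can be read directly as an importance ranking over slices, which is the interpretability claim made above.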
Related papers
- Cross-D Conv: Cross-Dimensional Transferable Knowledge Base via Fourier Shifting Operation [3.69758875412828]
Cross-D Conv operation bridges the dimensional gap by learning the phase shifting in the Fourier domain.
Our method enables seamless weight transfer between 2D and 3D convolution operations, effectively facilitating cross-dimensional learning.
arXiv Detail & Related papers (2024-11-02T13:03:44Z)
- Deep Convolutional Neural Networks on Multiclass Classification of Three-Dimensional Brain Images for Parkinson's Disease Stage Prediction [2.931680194227131]
We developed a model capable of accurately predicting Parkinson's disease stages.
We used the entire three-dimensional (3D) brain images as input.
We incorporated an attention mechanism to account for the varying importance of different slices in the prediction process.
arXiv Detail & Related papers (2024-10-31T05:40:08Z)
- Semi-supervised 3D Semantic Scene Completion with 2D Vision Foundation Model Guidance [11.090775523892074]
We introduce a novel semi-supervised framework to alleviate the dependency on densely annotated data.
Our approach leverages 2D foundation models to generate essential 3D scene geometric and semantic cues.
Our method achieves up to 85% of the fully-supervised performance using only 10% labeled data.
arXiv Detail & Related papers (2024-08-21T12:13:18Z)
- Enhancing Generalizability of Representation Learning for Data-Efficient 3D Scene Understanding [50.448520056844885]
We propose a generative Bayesian network to produce diverse synthetic scenes with real-world patterns.
A series of experiments robustly display our method's consistent superiority over existing state-of-the-art pre-training approaches.
arXiv Detail & Related papers (2024-06-17T07:43:53Z)
- Deep Generative Models on 3D Representations: A Survey [81.73385191402419]
Generative models aim to learn the distribution of observed data by generating new instances.
Recently, researchers have started to shift their focus from 2D to 3D space; however, representing 3D data poses significantly greater challenges.
arXiv Detail & Related papers (2022-10-27T17:59:50Z)
- RiCS: A 2D Self-Occlusion Map for Harmonizing Volumetric Objects [68.85305626324694]
Ray-marching in Camera Space (RiCS) is a new method that represents the self-occlusions of 3D foreground objects in a 2D self-occlusion map.
We show that our representation map not only allows us to enhance the image quality but also to model temporally coherent complex shadow effects.
arXiv Detail & Related papers (2022-05-14T05:35:35Z)
- Super Images -- A New 2D Perspective on 3D Medical Imaging Analysis [0.0]
We present a simple yet effective 2D method to handle 3D data while efficiently embedding the 3D knowledge during training.
Our method generates a single 2D "super image" by stitching the slices of the 3D volume side by side.
While attaining results equal, if not superior, to those of 3D networks using only their 2D counterparts, model complexity is reduced by roughly threefold.
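The stitching step described here can be sketched as follows. This is a hypothetical, minimal implementation that tiles the slices of a 3D volume (given as a list of 2D row-lists) into one 2D grid; the function name and its constraints are illustrative assumptions, and the actual method's details may differ:

```python
def stitch_slices(volume, cols):
    """Tile 2D slices into one 2D 'super image' grid, `cols` slices per
    grid row. Assumes len(volume) is a multiple of `cols` and all slices
    share the same height and width.

    volume: list of slices, each a list of pixel rows.
    Returns the stitched image as a list of pixel rows.
    """
    stitched = []
    for start in range(0, len(volume), cols):
        band = volume[start:start + cols]   # slices forming one grid row
        height = len(band[0])
        for y in range(height):
            row = []
            for s in band:
                row.extend(s[y])            # concatenate slice rows side by side
            stitched.append(row)
    return stitched
```

The resulting 2D image can then be fed to an ordinary 2D network, which is the core trade-off the entry describes: spatial 3D context is flattened into the plane in exchange for lower model complexity.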
arXiv Detail & Related papers (2022-05-05T09:59:03Z)
- Cascaded deep monocular 3D human pose estimation with evolutionary training data [76.3478675752847]
Deep representation learning has achieved remarkable accuracy for monocular 3D human pose estimation.
This paper proposes a novel data augmentation method that is scalable for massive amount of training data.
Our method synthesizes unseen 3D human skeletons based on a hierarchical human representation and heuristics inspired by prior knowledge.
arXiv Detail & Related papers (2020-06-14T03:09:52Z)
- Weakly-Supervised 3D Human Pose Learning via Multi-view Images in the Wild [101.70320427145388]
We propose a weakly-supervised approach that does not require 3D annotations and learns to estimate 3D poses from unlabeled multi-view data.
We evaluate our proposed approach on two large scale datasets.
arXiv Detail & Related papers (2020-03-17T08:47:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.