Predicting Bone Degradation Using Vision Transformer and Synthetic
Cellular Microstructures Dataset
- URL: http://arxiv.org/abs/2312.03133v1
- Date: Tue, 5 Dec 2023 21:00:08 GMT
- Authors: Mohammad Saber Hashemi, Azadeh Sheidaei
- Abstract summary: A robust yet fast computational method to predict and visualize bone degradation has been developed.
Our deep-learning method, TransVNet, can take in different 3D voxelized images and predict their evolution over months.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bone degradation, especially for astronauts in microgravity
conditions, is a crucial concern for space exploration missions, since the
lower external forces applied to the skeleton substantially accelerate the
loss of bone stiffness and strength. Although existing computational models
help us understand this phenomenon and may restrict its effects in the
future, they are time-consuming when simulating in detail the changes in the
bones, not just the bone microstructures, of each individual. In this study,
a robust yet fast computational method to predict and
visualize bone degradation has been developed. Our deep-learning method,
TransVNet, takes in different 3D voxelized images and predicts their
evolution over months using a hybrid 3D-CNN-VisionTransformer autoencoder
architecture. Because of the limited available experimental data and the
challenges of obtaining new samples, a digital twin dataset of diverse
initial bone-like microstructures was generated to train TransVNet on the
evolution of the 3D images produced by a previously developed degradation
model for microgravity.
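The abstract does not detail TransVNet's internals, but the ViT side of a hybrid 3D-CNN-ViT architecture implies one concrete preprocessing step: partitioning a 3D voxelized image into patches and flattening each patch into a token vector before transformer attention. The sketch below illustrates only that tokenization step in plain NumPy; the function name, patch size, and volume size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def voxels_to_tokens(volume, patch=8):
    """Partition a cubic voxel volume into non-overlapping 3D patches
    and flatten each patch into a token vector (ViT-style input).
    Illustrative sketch; not the paper's actual implementation."""
    d = volume.shape[0]
    assert volume.shape == (d, d, d) and d % patch == 0
    n = d // patch  # number of patches along each axis
    # Split each axis into (block, within-block) pairs, group the three
    # block axes together, then flatten each patch in C order.
    tokens = (volume
              .reshape(n, patch, n, patch, n, patch)
              .transpose(0, 2, 4, 1, 3, 5)
              .reshape(n ** 3, patch ** 3))
    return tokens

# Toy 32^3 binary microstructure -> 64 tokens of length 512.
vol = (np.random.rand(32, 32, 32) > 0.5).astype(np.float32)
tok = voxels_to_tokens(vol, patch=8)
print(tok.shape)  # (64, 512)
```

In a full model, each token would pass through a learned linear embedding plus positional encoding before the transformer encoder, with a decoder mapping predicted tokens back to voxels; those stages are omitted here.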
Related papers
- A Generative Machine Learning Model for Material Microstructure 3D
Reconstruction and Performance Evaluation [4.169915659794567]
Extending microstructure reconstruction from 2D to 3D is viewed as a highly challenging inverse problem from the current technological perspective.
A novel generative model that integrates the multiscale properties of U-net with the generative capabilities of GAN has been proposed.
The model's accuracy is further improved by combining the image regularization loss with the Wasserstein distance loss.
arXiv Detail & Related papers (2024-02-24T13:42:34Z)
- Leveraging Frequency Domain Learning in 3D Vessel Segmentation [50.54833091336862]
In this study, we leverage Fourier domain learning as a substitute for multi-scale convolutional kernels in 3D hierarchical segmentation models.
We show that our novel network achieves remarkable dice performance (84.37% on ASACA500 and 80.32% on ImageCAS) in tubular vessel segmentation tasks.
arXiv Detail & Related papers (2024-01-11T19:07:58Z) - Scalable 3D Reconstruction From Single Particle X-Ray Diffraction Images
Based on Online Machine Learning [34.12502156343611]
We introduce X-RAI, an online reconstruction framework that estimates the structure of a 3D macromolecule from large X-ray SPI datasets.
We demonstrate that X-RAI achieves state-of-the-art performance for small-scale datasets in simulation and challenging experimental settings.
arXiv Detail & Related papers (2023-12-22T04:41:31Z) - TriHuman : A Real-time and Controllable Tri-plane Representation for
Detailed Human Geometry and Appearance Synthesis [76.73338151115253]
TriHuman is a novel human-tailored, deformable, and efficient tri-plane representation.
We non-rigidly warp global ray samples into our undeformed tri-plane texture space.
We show how such a tri-plane feature representation can be conditioned on the skeletal motion to account for dynamic appearance and geometry changes.
arXiv Detail & Related papers (2023-12-08T16:40:38Z) - Unsupervised 3D Pose Estimation with Non-Rigid Structure-from-Motion
Modeling [83.76377808476039]
We propose a new modeling method for human pose deformations and design an accompanying diffusion-based motion prior.
Inspired by the field of non-rigid structure-from-motion, we divide the task of reconstructing 3D human skeletons in motion into the estimation of a 3D reference skeleton and its per-frame deformation.
A mixed spatial-temporal NRSfMformer is used to simultaneously estimate the 3D reference skeleton and the skeleton deformation of each frame from 2D observations sequence.
arXiv Detail & Related papers (2023-08-18T16:41:57Z) - Enhancing Neural Rendering Methods with Image Augmentations [59.00067936686825]
We study the use of image augmentations in learning neural rendering methods (NRMs) for 3D scenes.
We find that introducing image augmentations during training presents challenges such as geometric and photometric inconsistencies.
Our experiments demonstrate the benefits of incorporating augmentations when learning NRMs, including improved photometric quality and surface reconstruction.
arXiv Detail & Related papers (2023-06-15T07:18:27Z) - CryoFormer: Continuous Heterogeneous Cryo-EM Reconstruction using
Transformer-based Neural Representations [49.49939711956354]
Cryo-electron microscopy (cryo-EM) allows for the high-resolution reconstruction of 3D structures of proteins and other biomolecules.
It is still challenging to reconstruct the continuous motions of 3D structures from noisy and randomly oriented 2D cryo-EM images.
We propose CryoFormer, a new approach for continuous heterogeneous cryo-EM reconstruction.
arXiv Detail & Related papers (2023-03-28T18:59:17Z) - Physical model simulator-trained neural network for computational 3D
phase imaging of multiple-scattering samples [1.112751058850223]
We develop a new model-based data normalization pre-processing procedure for homogenizing the sample contrast.
We demonstrate this framework's capabilities on experimental measurements of epithelial buccal cells and Caenorhabditis elegans worms.
arXiv Detail & Related papers (2021-03-29T17:43:56Z) - Generative Modelling of 3D in-silico Spongiosa with Controllable
Micro-Structural Parameters [1.0804061924593265]
We propose to apply recent advances in generative adversarial networks to generate realistic bone structures in-silico.
In a first step, we trained a volumetric generative model in a progressive manner using a Wasserstein objective and gradient penalty.
We were able to simulate the resulting bone structure after deterioration or treatment effects of osteoporosis therapies.
arXiv Detail & Related papers (2020-09-23T18:11:47Z) - Real-time 3D Nanoscale Coherent Imaging via Physics-aware Deep Learning [0.7664249650622356]
We introduce 3D-CDI-NN, a deep convolutional neural network and differential programming framework trained to predict 3D structure and strain.
Our networks are designed to be "physics-aware" in multiple aspects.
Our integrated machine learning and differential programming solution is broadly applicable across inverse problems in other application areas.
arXiv Detail & Related papers (2020-06-16T18:35:32Z) - Anatomy-aware 3D Human Pose Estimation with Bone-based Pose
Decomposition [92.99291528676021]
Instead of directly regressing the 3D joint locations, we decompose the task into bone direction prediction and bone length prediction.
Our motivation is the fact that the bone lengths of a human skeleton remain consistent across time.
Our full model outperforms the previous best results on Human3.6M and MPI-INF-3DHP datasets.
arXiv Detail & Related papers (2020-02-24T15:49:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.