The applicability of transperceptual and deep learning approaches to the
study and mimicry of complex cartilaginous tissues
- URL: http://arxiv.org/abs/2211.14314v1
- Date: Mon, 21 Nov 2022 08:51:52 GMT
- Title: The applicability of transperceptual and deep learning approaches to the
study and mimicry of complex cartilaginous tissues
- Authors: J. Waghorne, C. Howard, H. Hu, J. Pang, W.J. Peveler, L. Harris, O.
Barrera
- Abstract summary: Complex soft tissues, for example the knee meniscus, play a crucial role in mobility and joint health.
In order to design tissue substitutes, the internal architecture of the native tissue needs to be understood and replicated.
We explore a combined audio-visual approach, so-called transperceptual, to generate artificial architectures mimicking the native ones.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Complex soft tissues, for example the knee meniscus, play a crucial role in
mobility and joint health, but when damaged are incredibly difficult to repair
and replace. This is due to their highly hierarchical and porous nature which
in turn leads to their unique mechanical properties. In order to design tissue
substitutes, the internal architecture of the native tissue needs to be
understood and replicated. Here we explore a combined audio-visual approach,
so-called transperceptual, to generate artificial architectures mimicking the
native ones. The proposed method uses both traditional imagery and sound
generated from each image as a means of rapidly comparing and contrasting the
porosity and pore size within the samples. We have trained and tested a
generative adversarial network (GAN) on the 2D image stacks. The impact of the
training set of images on the similarity of the artificial to the original
dataset was assessed by analyzing two samples: the first consisting of n=478
pairs of audio and image files, for which the images were downsampled to 64
$\times$ 64 pixels; the second consisting of n=7640 pairs of audio and image
files, for which the full 256 $\times$ 256 pixel resolution is retained but
each image is divided into 16 squares to satisfy the 64 $\times$ 64 pixel
limit required by the GAN. We reconstruct the 2D stacks of artificially
generated datasets into 3D objects and run image analysis algorithms to
characterize statistically the architectural parameters - pore size, tortuosity
and pore connectivity - and compare them with the original dataset. Results
show that the artificially generated dataset that undergoes downsampling
performs better in terms of parameter matching. Our audiovisual approach has
the potential to be extended to larger datasets to explore how both
similarities and differences can be audibly recognized across multiple samples.
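The two preprocessing routes described in the abstract (downsampling slices to 64 $\times$ 64 pixels versus tiling full-resolution 256 $\times$ 256 slices into sixteen 64 $\times$ 64 squares), along with reconstructing 2D slices into a 3D volume, can be sketched roughly as below. The function names, the use of block averaging for downsampling, and the threshold-based porosity estimate are illustrative assumptions, not the authors' actual pipeline (which also measures pore size, tortuosity, and pore connectivity).

```python
import numpy as np

def downsample_64(img: np.ndarray) -> np.ndarray:
    """Reduce a 256x256 slice to 64x64 by 4x4 block averaging (one simple choice)."""
    assert img.shape == (256, 256)
    return img.reshape(64, 4, 64, 4).mean(axis=(1, 3))

def tile_64(img: np.ndarray) -> list:
    """Split a 256x256 slice into sixteen 64x64 tiles, keeping full resolution."""
    assert img.shape == (256, 256)
    return [img[r:r + 64, c:c + 64]
            for r in range(0, 256, 64)
            for c in range(0, 256, 64)]

def porosity(volume: np.ndarray, threshold: float = 0.5) -> float:
    """Porosity as the fraction of voxels below an intensity threshold."""
    return float((volume < threshold).mean())

# Toy stack of 8 grayscale slices in [0, 1].
stack = np.random.rand(8, 256, 256)

small = np.array([downsample_64(s) for s in stack])   # route 1: 64x64 inputs
tiles = [t for s in stack for t in tile_64(s)]        # route 2: 16 tiles per slice
volume = np.stack([downsample_64(s) for s in stack])  # 2D slices -> 3D volume

assert small.shape == (8, 64, 64)
assert len(tiles) == 8 * 16 and tiles[0].shape == (64, 64)
assert 0.0 <= porosity(volume) <= 1.0
```

Either route yields GAN-sized inputs; the tiling route multiplies the number of training pairs per slice by 16, which matches the much larger second sample in the abstract.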
Related papers
- Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study on the detection of deepfakes generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z)
- Between Generating Noise and Generating Images: Noise in the Correct Frequency Improves the Quality of Synthetic Histopathology Images for Digital Pathology [0.0]
Synthetic images can augment existing datasets to improve and validate AI algorithms.
We show that introducing random single-pixel noise with the appropriate spatial frequency into a semantic mask can dramatically improve the quality of the synthetic images.
Our work suggests a simple and powerful approach for generating synthetic data on demand to unbias limited datasets.
arXiv Detail & Related papers (2023-02-13T17:49:24Z)
- Representation Learning for Non-Melanoma Skin Cancer using a Latent Autoencoder [0.0]
Generative learning is a powerful tool for representation learning, and shows particular promise for problems in biomedical imaging.
It remains difficult to faithfully reconstruct images from generative models, particularly those as complex as histological images.
In this work, two existing methods (autoencoders and latent autoencoders) are combined in an attempt to improve our ability to encode and decode real images of non-melanoma skin cancer.
arXiv Detail & Related papers (2022-09-05T06:24:58Z)
- Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z)
- An unsupervised deep learning framework for medical image denoising [0.0]
This paper introduces an unsupervised medical image denoising technique that learns noise characteristics from the available images.
It comprises two blocks of data processing: patch-based dictionaries that indirectly learn the noise, and residual learning (RL) that directly learns the noise.
Experiments on MRI/CT datasets are run on a GPU-based supercomputer and the comparative results show that the proposed algorithm preserves the critical information in the images as well as improves the visual quality of the images.
arXiv Detail & Related papers (2021-03-11T10:03:02Z)
- MOGAN: Morphologic-structure-aware Generative Learning from a Single Image [59.59698650663925]
Recently proposed generative models can be trained on only a single image.
We introduce a MOrphologic-structure-aware Generative Adversarial Network named MOGAN that produces random samples with diverse appearances.
Our approach focuses on internal features including the maintenance of rational structures and variation on appearance.
arXiv Detail & Related papers (2021-03-04T12:45:23Z)
- Exploring Intensity Invariance in Deep Neural Networks for Brain Image Registration [0.0]
We investigate the effect of intensity distribution among input image pairs for deep learning-based image registration methods.
Deep learning models trained with a structure-similarity-based loss seem to perform better on both datasets.
arXiv Detail & Related papers (2020-09-21T17:49:03Z)
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
- Image Quality Assessment: Unifying Structure and Texture Similarity [38.05659069533254]
We develop the first full-reference image quality model with explicit tolerance to texture resampling.
Using a convolutional neural network, we construct an injective and differentiable function that transforms images to overcomplete representations.
arXiv Detail & Related papers (2020-04-16T16:11:46Z)
- Two-shot Spatially-varying BRDF and Shape Estimation [89.29020624201708]
We propose a novel deep learning architecture with a stage-wise estimation of shape and SVBRDF.
We create a large-scale synthetic training dataset with domain-randomized geometry and realistic materials.
Experiments on both synthetic and real-world datasets show that our network trained on a synthetic dataset can generalize well to real-world images.
arXiv Detail & Related papers (2020-04-01T12:56:13Z)
- Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.