Ethics of Generating Synthetic MRI Vocal Tract Views from the Face
- URL: http://arxiv.org/abs/2407.08403v1
- Date: Thu, 11 Jul 2024 11:12:48 GMT
- Title: Ethics of Generating Synthetic MRI Vocal Tract Views from the Face
- Authors: Muhammad Suhaib Shahid, Gleb E. Yakubov, Andrew P. French,
- Abstract summary: This paper explores the ethical implications of external-to-internal correlation modeling (E2ICM).
E2ICM uses facial movements to infer internal configurations and provides a cost-effective supporting technology for MRI.
We employ Pix2PixGAN to generate pseudo-MRI views from external articulatory data, demonstrating the feasibility of this approach.
- Score: 0.3755082744150184
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Forming oral models capable of understanding the complete dynamics of the oral cavity is vital across research areas such as speech correction, designing foods for the aging population, and dentistry. Magnetic resonance imaging (MRI) technologies, capable of capturing oral data essential for creating such detailed representations, offer a powerful tool for illustrating articulatory dynamics. However, its real-time application is hindered by expense and expertise requirements. Ever advancing generative AI approaches present themselves as a way to address this barrier by leveraging multi-modal approaches for generating pseudo-MRI views. Nonetheless, this immediately sparks ethical concerns regarding the utilisation of a technology with the capability to produce MRIs from facial observations. This paper explores the ethical implications of external-to-internal correlation modeling (E2ICM). E2ICM utilises facial movements to infer internal configurations and provides a cost-effective supporting technology for MRI. In this preliminary work, we employ Pix2PixGAN to generate pseudo-MRI views from external articulatory data, demonstrating the feasibility of this approach. Ethical considerations concerning privacy, consent, and potential misuse, which are fundamental to our examination of this innovative methodology, are discussed as a result of this experimentation.
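The abstract above describes an image-to-image setup: paired external face frames and vocal-tract MRI frames, with Pix2PixGAN learning the mapping between them. The sketch below is a minimal, hypothetical PyTorch illustration of such a conditional GAN, with a small encoder-decoder generator and a PatchGAN-style discriminator trained on an adversarial plus L1 objective; the layer sizes, 64x64 resolution, L1 weight of 100, and random stand-in tensors are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy encoder-decoder mapping a face frame to a pseudo-MRI view."""
    def __init__(self, in_ch=1, out_ch=1, feat=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(feat, feat * 2, 4, 2, 1), nn.BatchNorm2d(feat * 2), nn.LeakyReLU(0.2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1), nn.BatchNorm2d(feat), nn.ReLU(),
            nn.ConvTranspose2d(feat, out_ch, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class PatchDiscriminator(nn.Module):
    """PatchGAN-style critic over (face, MRI) pairs concatenated on channels."""
    def __init__(self, in_ch=2, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(feat, feat * 2, 4, 2, 1), nn.BatchNorm2d(feat * 2), nn.LeakyReLU(0.2),
            nn.Conv2d(feat * 2, 1, 4, 1, 1),  # patch-wise real/fake logits
        )

    def forward(self, face, mri):
        return self.net(torch.cat([face, mri], dim=1))

# One illustrative optimisation step on a random (face, MRI) pair.
G, D = Generator(), PatchDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

face = torch.randn(4, 1, 64, 64)  # external articulatory frames (stand-in data)
mri = torch.randn(4, 1, 64, 64)   # paired vocal-tract MRI frames (stand-in data)

# Discriminator: push real pairs towards 1, generated pairs towards 0.
fake = G(face)
d_real, d_fake = D(face, mri), D(face, fake.detach())
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator: fool the discriminator and stay close to the paired MRI (L1 term).
d_fake = D(face, fake)
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, mri)
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```

The L1 term ties each generated pseudo-MRI to its paired ground-truth frame, while the adversarial term mainly encourages realistic texture; this combination is the standard Pix2Pix recipe rather than anything specific to this paper.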
Related papers
- Speech Audio Generation from dynamic MRI via a Knowledge Enhanced Conditional Variational Autoencoder [6.103954504752016]
We propose a novel two-step "knowledge enhancement + variational inference" framework for generating speech audio signals from cine dynamic MRI sequences.
To the best of our knowledge, this is one of the first attempts at synthesizing speech audio directly from dynamic MRI video sequences.
arXiv Detail & Related papers (2025-03-09T12:40:16Z)
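To make the "variational inference" half of the framework above concrete, here is a minimal, hypothetical conditional VAE in PyTorch that reconstructs an audio feature frame (e.g. a mel-spectrogram frame) conditioned on features pooled from a cine MRI clip. The feature dimensions, network widths, and unit KL weight are illustrative assumptions and do not reflect the paper's architecture or its knowledge-enhancement step.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalVAE(nn.Module):
    """Toy conditional VAE: reconstruct an audio feature frame given MRI features."""
    def __init__(self, audio_dim=80, mri_dim=128, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(audio_dim + mri_dim, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)
        self.to_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + mri_dim, 256), nn.ReLU(),
            nn.Linear(256, audio_dim),
        )

    def forward(self, audio, mri_feat):
        h = self.encoder(torch.cat([audio, mri_feat], dim=-1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation trick
        recon = self.decoder(torch.cat([z, mri_feat], dim=-1))
        return recon, mu, logvar

def vae_loss(recon, target, mu, logvar, beta=1.0):
    rec = F.mse_loss(recon, target)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kld

# Illustrative step on random stand-in features.
model = ConditionalVAE()
audio = torch.randn(8, 80)      # e.g. mel-spectrogram frames (stand-in)
mri_feat = torch.randn(8, 128)  # features pooled from a cine MRI clip (stand-in)
recon, mu, logvar = model(audio, mri_feat)
loss = vae_loss(recon, audio, mu, logvar)
loss.backward()
```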
- Feasibility Study of a Diffusion-Based Model for Cross-Modal Generation of Knee MRI from X-ray: Integrating Radiographic Feature Information [8.466319668322432]
Knee osteoarthritis (KOA) is a prevalent musculoskeletal disorder, often diagnosed using X-rays due to its cost-effectiveness.
Magnetic Resonance Imaging (MRI) provides superior soft tissue visualization and serves as a valuable supplementary diagnostic tool.
arXiv Detail & Related papers (2024-10-09T15:44:34Z)
- MindFormer: Semantic Alignment of Multi-Subject fMRI for Brain Decoding [50.55024115943266]
We introduce MindFormer, a novel method for semantically aligning multi-subject fMRI signals.
This model is specifically designed to generate fMRI-conditioned feature vectors that can be used to condition a Stable Diffusion model for fMRI-to-image generation or a large language model (LLM) for fMRI-to-text generation.
Our experimental results demonstrate that MindFormer generates semantically consistent images and text across different subjects.
arXiv Detail & Related papers (2024-05-28T00:36:25Z)
- Synthetic Brain Images: Bridging the Gap in Brain Mapping With Generative Adversarial Model [0.0]
This work investigates the use of Deep Convolutional Generative Adversarial Networks (DCGAN) for producing high-fidelity and realistic MRI image slices.
While the discriminator network distinguishes between generated and real slices, the generator network learns to synthesise realistic MRI image slices.
The generator refines its capacity to generate slices that closely mimic real MRI data through an adversarial training approach.
arXiv Detail & Related papers (2024-04-11T05:06:51Z)
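As a purely illustrative version of the generator/discriminator pairing described above, a DCGAN for 64x64 single-channel MRI-like slices could be assembled as follows; the latent size and layer widths are assumptions, and the adversarial training loop would mirror the conditional-GAN step sketched earlier, minus the L1 term and the image conditioning.

```python
import torch
import torch.nn as nn

latent_dim = 100  # assumed size of the noise vector

# Generator: project a noise vector up to a 64x64 single-channel MRI-like slice.
generator = nn.Sequential(
    nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),  # 4x4
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),         # 8x8
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),           # 16x16
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),            # 32x32
    nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),                                 # 64x64
)

# Discriminator: strided convolutions down to a single real/fake logit per slice.
discriminator = nn.Sequential(
    nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2),                          # 32x32
    nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),   # 16x16
    nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2),  # 8x8
    nn.Conv2d(256, 1, 8, 1, 0),                                            # 1x1 logit
    nn.Flatten(),
)

z = torch.randn(4, latent_dim, 1, 1)  # batch of noise vectors
fake_slices = generator(z)            # -> (4, 1, 64, 64)
logits = discriminator(fake_slices)   # -> (4, 1) real/fake scores
print(fake_slices.shape, logits.shape)
```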
- Volumetric Reconstruction Resolves Off-Resonance Artifacts in Static and Dynamic PROPELLER MRI [76.60362295758596]
Off-resonance artifacts in magnetic resonance imaging (MRI) are visual distortions that occur when the actual resonant frequencies of spins within the imaging volume differ from the expected frequencies used to encode spatial information.
We propose to resolve these artifacts by lifting the 2D MRI reconstruction problem to 3D, introducing an additional "spectral" dimension to model this off-resonance.
arXiv Detail & Related papers (2023-11-22T05:44:51Z)
- Leveraging sinusoidal representation networks to predict fMRI signals from EEG [3.3121941932506473]
We propose a novel architecture that can predict fMRI signals directly from multi-channel EEG without explicit feature engineering.
Our model achieves this by implementing a Sinusoidal Representation Network (SIREN) to learn frequency information in brain dynamics.
We evaluate our model using a simultaneous EEG-fMRI dataset with 8 subjects and investigate its potential for predicting subcortical fMRI signals.
arXiv Detail & Related papers (2023-11-06T03:16:18Z)
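For reference on the entry above, a SIREN building block is a linear layer followed by a scaled sine activation with the initialisation scheme of Sitzmann et al. (2020). The sketch below wires a few such layers into a hypothetical regressor from a multi-channel EEG window to a single fMRI value; the channel count, window length, and single-region output are assumptions for illustration, not the cited model.

```python
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer with scaled sine activation, as in SIREN (Sitzmann et al., 2020)."""
    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            if is_first:
                bound = 1.0 / in_features
            else:
                bound = math.sqrt(6.0 / in_features) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

# Hypothetical regressor: a window of multi-channel EEG -> one fMRI ROI value.
eeg_channels, window = 34, 128
model = nn.Sequential(
    nn.Flatten(),
    SineLayer(eeg_channels * window, 256, is_first=True),
    SineLayer(256, 256),
    nn.Linear(256, 1),  # predicted BOLD amplitude for one region
)

eeg = torch.randn(16, eeg_channels, window)  # stand-in EEG batch
bold = torch.randn(16, 1)                    # stand-in target fMRI values
loss = nn.functional.mse_loss(model(eeg), bold)
loss.backward()
```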
- fMRI-PTE: A Large-scale fMRI Pretrained Transformer Encoder for Multi-Subject Brain Activity Decoding [54.17776744076334]
We propose fMRI-PTE, an innovative auto-encoder approach for fMRI pre-training.
Our approach involves transforming fMRI signals into unified 2D representations, ensuring consistency in dimensions and preserving brain activity patterns.
Our contributions encompass introducing fMRI-PTE, innovative data transformation, efficient training, a novel learning strategy, and the universal applicability of our approach.
arXiv Detail & Related papers (2023-11-01T07:24:22Z)
- Style transfer between Microscopy and Magnetic Resonance Imaging via Generative Adversarial Network in small sample size settings [49.84018914962972]
Cross-modal augmentation of Magnetic Resonance Imaging (MRI) and microscopic imaging based on the same tissue samples is promising.
We tested a method for generating microscopic histological images from MRI scans of the corpus callosum using a conditional generative adversarial network (cGAN) architecture.
arXiv Detail & Related papers (2023-10-16T13:58:53Z)
- Unidirectional brain-computer interface: Artificial neural network encoding natural images to fMRI response in the visual cortex [12.1427193917406]
We propose an artificial neural network dubbed VISION to mimic the human brain and show how it can foster neuroscientific inquiries.
VISION successfully predicts human hemodynamic responses as fMRI voxel values to visual inputs with an accuracy exceeding state-of-the-art performance by 45%.
arXiv Detail & Related papers (2023-09-26T15:38:26Z)
- Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z)
- fMRI from EEG is only Deep Learning away: the use of interpretable DL to unravel EEG-fMRI relationships [68.8204255655161]
We present an interpretable domain grounded solution to recover the activity of several subcortical regions from multichannel EEG data.
We recover individual spatial and time-frequency patterns of scalp EEG predictive of the hemodynamic signal in the subcortical nuclei.
arXiv Detail & Related papers (2022-10-23T15:11:37Z)
- Data and Physics Driven Learning Models for Fast MRI -- Fundamentals and Methodologies from CNN, GAN to Attention and Transformers [72.047680167969]
This article aims to introduce deep learning based, data driven techniques for fast MRI, including convolutional neural network and generative adversarial network based methods.
We will detail the research in coupling physics and data driven models for MRI acceleration.
Finally, we will demonstrate, through a few clinical applications, the importance of data harmonisation and explainable models for such fast MRI techniques in multicentre and multi-scanner studies.
arXiv Detail & Related papers (2022-04-01T22:48:08Z)
- EEG to fMRI Synthesis: Is Deep Learning a candidate? [0.913755431537592]
This work provides the first comprehensive account of how to use state-of-the-art principles from Neural Processing to synthesize fMRI data from electroencephalographic (EEG) view data.
A comparison of state-of-the-art synthesis approaches, including Autoencoders, Generative Adversarial Networks and Pairwise Learning, is undertaken.
Results highlight the feasibility of EEG to fMRI brain image mappings, pinpointing the role of current advances in Machine Learning and showing the relevance of upcoming contributions to further improve performance.
arXiv Detail & Related papers (2020-09-29T16:29:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.