Autoencoding Labeled Interpolator, Inferring Parameters From Image, And
Image From Parameters
- URL: http://arxiv.org/abs/2312.04640v1
- Date: Thu, 7 Dec 2023 19:00:50 GMT
- Title: Autoencoding Labeled Interpolator, Inferring Parameters From Image, And
Image From Parameters
- Authors: Ali SaraerToosi and Avery Broderick
- Abstract summary: This study presents an image generation tool in the form of a generative machine learning model.
It can rapidly and continuously interpolate between a training set of images and can retrieve the defining parameters of those images.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Event Horizon Telescope (EHT) provides an avenue to study black hole
accretion flows on event-horizon scales. Fitting a semi-analytical model to EHT
observations requires the construction of synthetic images, which is
computationally expensive. This study presents an image generation tool in the
form of a generative machine learning model, which extends the capabilities of
a variational autoencoder. This tool can rapidly and continuously interpolate
between a training set of images and can retrieve the defining parameters of
those images. Trained on a set of synthetic black hole images, our tool
showcases success in both interpolating black hole images and their associated
physical parameters. By reducing the computational cost of generating an image,
this tool facilitates parameter estimation and model validation for
observations of black hole systems.
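The core idea — an autoencoder whose latent code is tied to the physical parameters, so that one network can both retrieve parameters from an image and generate an image from parameters — can be sketched as follows. This is a minimal illustration with random weights and hypothetical dimensions, not the authors' trained model:

```python
import numpy as np

# Minimal sketch of a labeled interpolator: an encoder maps an image to a
# latent vector whose leading entries play the role of physical parameters
# (e.g. spin, inclination), and a decoder maps a latent vector back to an
# image. Interpolating in latent space yields intermediate images without
# re-running an expensive simulation. Weights here are random stand-ins
# for a trained network; all sizes are illustrative.
rng = np.random.default_rng(0)

IMG_DIM = 64 * 64   # flattened synthetic image
PARAM_DIM = 2       # hypothetical physical parameters
LATENT_DIM = 8      # latent code; first PARAM_DIM entries mirror the labels

W_enc = rng.normal(scale=0.01, size=(LATENT_DIM, IMG_DIM))
W_dec = rng.normal(scale=0.01, size=(IMG_DIM, LATENT_DIM))

def encode(image):
    """Image -> latent code; leading entries act as parameter estimates."""
    return np.tanh(W_enc @ image)

def decode(latent):
    """Latent code -> image."""
    return W_dec @ latent

def interpolate(img_a, img_b, t):
    """Blend the latents of two training images to generate a new image."""
    z = (1 - t) * encode(img_a) + t * encode(img_b)
    return decode(z)

img_a = rng.normal(size=IMG_DIM)
img_b = rng.normal(size=IMG_DIM)
mid = interpolate(img_a, img_b, 0.5)          # continuously interpolated image
params = encode(img_a)[:PARAM_DIM]            # retrieved "parameters"
print(mid.shape, params.shape)
```

Because a forward pass through such a network costs milliseconds rather than the hours a radiative-transfer simulation can take, the decoder can serve as a fast drop-in image generator inside a parameter-estimation loop.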
Related papers
- CARL: Camera-Agnostic Representation Learning for Spectral Image Analysis [75.25966323298003]
Spectral imaging offers promising applications across diverse domains, including medicine and urban scene understanding.
However, variability in channel dimensionality and captured wavelengths among spectral cameras impedes the development of AI-driven methodologies.
We introduce CARL, a model for Camera-Agnostic Representation Learning across RGB, multispectral, and hyperspectral imaging modalities.
arXiv Detail & Related papers (2025-04-27T13:06:40Z) - Validation and Calibration of Semi-Analytical Models for the Event Horizon Telescope Observations of Sagittarius A* [0.0]
We use alinet, a generative machine learning model, to efficiently produce radiatively inefficient accretion flow images.
We estimate the uncertainty introduced by a number of anticipated unmodeled physical effects, including interstellar scattering.
We then use this to calibrate physical parameter estimates and their associated uncertainties from RIAF model fits to mock EHT data.
arXiv Detail & Related papers (2025-04-25T18:00:04Z) - BCDDM: Branch-Corrected Denoising Diffusion Model for Black Hole Image Generation [12.638969185454846]
The properties of black holes and their accretion flows can be inferred by fitting Event Horizon Telescope (EHT) data to simulated images generated through general relativistic ray tracing (GRRT).
Due to the computationally intensive nature of GRRT, the efficiency of generating specific radiation flux images needs to be improved.
This paper introduces the Branch Correction Denoising Diffusion Model (BCDDM), which uses a branch correction mechanism and a weighted mixed loss function to improve the accuracy of generated black hole images.
arXiv Detail & Related papers (2025-02-12T16:05:46Z) - Cross-Scan Mamba with Masked Training for Robust Spectral Imaging [51.557804095896174]
We propose the Cross-Scanning Mamba, named CS-Mamba, that employs a Spatial-Spectral SSM for global-local balanced context encoding.
Experiment results show that our CS-Mamba achieves state-of-the-art performance and the masked training method can better reconstruct smooth features to improve the visual quality.
arXiv Detail & Related papers (2024-08-01T15:14:10Z) - Automation of Quantum Dot Measurement Analysis via Explainable Machine Learning [0.0]
We propose an image vectorization approach that involves mathematical modeling of synthetic triangles to mimic the experimental data.
We show that this new method offers superior explainability of model prediction without sacrificing accuracy.
This work demonstrates the feasibility and advantages of applying explainable machine learning techniques to the analysis of quantum dot measurements.
arXiv Detail & Related papers (2024-02-21T11:00:23Z) - Flying By ML -- CNN Inversion of Affine Transforms [0.0]
This paper describes a machine learning method to automate reading of cockpit gauges.
It uses a CNN to invert affine transformations and deduce aircraft states from instrument images.
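The quantity such a CNN learns to infer — the affine transform relating a reference gauge image to an observed one — can be illustrated in closed form. This toy example recovers a 2x3 affine matrix from point correspondences by least squares (the paper replaces this step with a CNN acting directly on pixels; the transform values are made up for illustration):

```python
import numpy as np

# Toy illustration of affine-transform inversion: recover the 2x3 affine
# matrix A mapping reference points to observed points. Point coordinates
# are lifted to homogeneous form so translation becomes part of A.
rng = np.random.default_rng(1)

true_A = np.array([[0.9, -0.1,  2.0],
                   [0.2,  1.1, -3.0]])  # hypothetical ground-truth transform

pts = rng.uniform(-1, 1, size=(20, 2))        # reference points
pts_h = np.hstack([pts, np.ones((20, 1))])    # homogeneous coordinates
obs = pts_h @ true_A.T                        # transformed observations

# Least-squares fit: solve pts_h @ A.T ~= obs for A.
A_hat, *_ = np.linalg.lstsq(pts_h, obs, rcond=None)
A_hat = A_hat.T

print(np.allclose(A_hat, true_A, atol=1e-8))
```

With noise-free correspondences the transform is recovered exactly; a learned model trades this exactness for robustness to real instrument imagery.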
arXiv Detail & Related papers (2023-12-22T05:24:30Z) - Generating Images of the M87* Black Hole Using GANs [1.0532948482859532]
We introduce Conditional Progressive Generative Adversarial Networks (CPGAN) to generate diverse black hole images.
GANs can be employed as cost effective models for black hole image generation and reliably augment training datasets for other parameterization algorithms.
arXiv Detail & Related papers (2023-12-02T02:47:34Z) - Physics-Driven Turbulence Image Restoration with Stochastic Refinement [80.79900297089176]
Image distortion by atmospheric turbulence is a critical problem in long-range optical imaging systems.
Fast and physics-grounded simulation tools have been introduced to help the deep-learning models adapt to real-world turbulence conditions.
This paper proposes the Physics-integrated Restoration Network (PiRN) to help the network disentangle the stochasticity from the degradation and the underlying image.
arXiv Detail & Related papers (2023-07-20T05:49:21Z) - GM-NeRF: Learning Generalizable Model-based Neural Radiance Fields from
Multi-view Images [79.39247661907397]
We introduce an effective framework Generalizable Model-based Neural Radiance Fields to synthesize free-viewpoint images.
Specifically, we propose a geometry-guided attention mechanism to register the appearance code from multi-view 2D images to a geometry proxy.
arXiv Detail & Related papers (2023-03-24T03:32:02Z) - PS-Transformer: Learning Sparse Photometric Stereo Network using
Self-Attention Mechanism [4.822598110892846]
Existing deep calibrated photometric stereo networks aggregate observations under different lights based on pre-defined operations such as linear projection and max pooling.
To tackle this issue, this paper presents a deep sparse calibrated photometric stereo network named PS-Transformer, which leverages a learnable self-attention mechanism to properly capture the complex inter-image interactions.
arXiv Detail & Related papers (2022-11-21T11:58:25Z) - Multitask AET with Orthogonal Tangent Regularity for Dark Object
Detection [84.52197307286681]
We propose a novel multitask auto encoding transformation (MAET) model to enhance object detection in a dark environment.
In a self-supervision manner, the MAET learns the intrinsic visual structure by encoding and decoding the realistic illumination-degrading transformation.
We have achieved the state-of-the-art performance using synthetic and real-world datasets.
arXiv Detail & Related papers (2022-05-06T16:27:14Z) - Aug3D-RPN: Improving Monocular 3D Object Detection by Synthetic Images
with Virtual Depth [64.29043589521308]
We propose a rendering module to augment the training data by synthesizing images with virtual-depths.
The rendering module takes as input the RGB image and its corresponding sparse depth image, outputs a variety of photo-realistic synthetic images.
Besides, we introduce an auxiliary module to improve the detection model by jointly optimizing it through a depth estimation task.
arXiv Detail & Related papers (2021-07-28T11:00:47Z) - Generative Modelling of BRDF Textures from Flash Images [50.660026124025265]
We learn a latent space for easy capture, semantic editing, and consistent, efficient reproduction of visual material appearance.
In a second step, conditioned on the material code, our method produces an infinite and diverse spatial field of BRDF model parameters.
arXiv Detail & Related papers (2021-02-23T18:45:18Z) - Category Level Object Pose Estimation via Neural Analysis-by-Synthesis [64.14028598360741]
In this paper we combine a gradient-based fitting procedure with a parametric neural image synthesis module.
The image synthesis network is designed to efficiently span the pose configuration space.
We experimentally show that the method can recover orientation of objects with high accuracy from 2D images alone.
arXiv Detail & Related papers (2020-08-18T20:30:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences.