Detecting optical transients using artificial neural networks and
reference images from different surveys
- URL: http://arxiv.org/abs/2009.14614v1
- Date: Mon, 28 Sep 2020 22:16:54 GMT
- Title: Detecting optical transients using artificial neural networks and reference images from different surveys
- Authors: Katarzyna Wardęga, Adam Zadrożny, Martin Beroiz, Richard Camuccio and Mario C. Díaz
- Abstract summary: We present a method to detect these transients based on an artificial neural network.
One image corresponds to the epoch in which a potential transient could exist; the other is a reference image of an earlier epoch.
We trained a convolutional neural network and a dense layer network on simulated source samples and tested the trained networks on samples created from real image data.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To search for optical counterparts to gravitational waves, it is crucial to
develop an efficient follow-up method that allows for both a quick telescopic
scan of the event localization region and search through the resulting image
data for plausible optical transients. We present a method to detect these
transients based on an artificial neural network. We describe the architecture
of two networks capable of comparing images of the same part of the sky taken
by different telescopes. One image corresponds to the epoch in which a
potential transient could exist; the other is a reference image of an earlier
epoch. We use data obtained by the Dr. Cristina V. Torres Memorial Astronomical
Observatory and archival reference images from the Sloan Digital Sky Survey. We
trained a convolutional neural network and a dense layer network on simulated
source samples and tested the trained networks on samples created from real
image data. Autonomous detection methods replace the standard process of
detecting transients, which is normally achieved by source extraction of a
difference image followed by human inspection of the detected candidates.
Replacing the human inspection component with an entirely autonomous method
would allow for a rapid and automatic follow-up of interesting targets of
opportunity. The method will be further tested on telescopes participating in
the Transient Optical Robotic Observatory of the South Collaboration.
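The abstract describes networks that classify a pair of co-located image stamps (new epoch plus reference epoch) as transient or not. As a minimal illustrative sketch of that idea, not the authors' actual architecture, the following stacks the two stamps into one input vector and runs it through a small, untrained dense network with random weights; the stamp size, hidden width, and all names are assumptions for the example.

```python
import numpy as np

def make_pair_input(science, reference):
    """Stack a science stamp and its reference stamp into one input vector."""
    assert science.shape == reference.shape
    return np.concatenate([science.ravel(), reference.ravel()])

class DenseTransientClassifier:
    """Toy two-layer dense network: input -> ReLU hidden layer -> sigmoid score."""
    def __init__(self, input_dim, hidden_dim=64, seed=0):
        rng = np.random.default_rng(seed)
        # Random (untrained) weights; a real detector would be trained on
        # simulated source samples, as the paper describes.
        self.W1 = rng.normal(0.0, 1.0 / np.sqrt(input_dim), (input_dim, hidden_dim))
        self.b1 = np.zeros(hidden_dim)
        self.W2 = rng.normal(0.0, 1.0 / np.sqrt(hidden_dim), (hidden_dim, 1))
        self.b2 = np.zeros(1)

    def predict(self, x):
        h = np.maximum(0.0, x @ self.W1 + self.b1)  # ReLU hidden layer
        z = h @ self.W2 + self.b2
        return 1.0 / (1.0 + np.exp(-z))             # pseudo-probability of a transient

# Usage: 21x21-pixel cutouts around a candidate position (size is an assumption).
science = np.random.default_rng(1).normal(0.0, 1.0, (21, 21))
reference = np.random.default_rng(2).normal(0.0, 1.0, (21, 21))
x = make_pair_input(science, reference)
clf = DenseTransientClassifier(input_dim=x.size)
score = clf.predict(x)
```

Feeding the pair as two channels (or a concatenated vector) lets the network learn the comparison itself, rather than relying on an explicit difference image followed by source extraction and human vetting.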
Related papers
- Deep Learning Based Speckle Filtering for Polarimetric SAR Images. Application to Sentinel-1 (arXiv, 2024-08-28): We propose a complete framework to remove speckle in polarimetric SAR images using a convolutional neural network. Experiments show that the proposed approach offers exceptional results in both speckle reduction and resolution preservation.
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers (arXiv, 2024-03-21): We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images. The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism. We validated our model on a test dataset consisting of unseen synthetic data and images collected from silicone aorta phantoms.
- Real-Time Model-Based Quantitative Ultrasound and Radar (arXiv, 2024-02-16): We propose a neural network based on the physical model of wave propagation, which defines the relationship between the received signals and physical properties. Our network can reconstruct multiple physical properties in less than one second for complex and realistic scenarios.
- UAVs and Neural Networks for search and rescue missions (arXiv, 2023-10-09): We present a method for detecting objects of interest, including cars, humans, and fire, in aerial images captured by unmanned aerial vehicles (UAVs). To achieve this, we use artificial neural networks and create a dataset for supervised learning.
- Combining multi-spectral data with statistical and deep-learning models for improved exoplanet detection in direct imaging at high contrast (arXiv, 2023-06-21): Exoplanet signals can only be identified when combining several observations with dedicated detection algorithms. We learn a model of the spatial, temporal and spectral characteristics of the nuisance directly from the observations. A convolutional neural network (CNN) is then trained in a supervised fashion to detect the residual signature of synthetic sources.
- Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images (arXiv, 2023-04-02): Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language. We pioneer a systematic study of detecting deepfakes generated by state-of-the-art diffusion models.
- Unsupervised Domain Transfer with Conditional Invertible Neural Networks (arXiv, 2023-03-17): We propose a domain transfer approach based on conditional invertible neural networks (cINNs). Our method inherently guarantees cycle consistency through its invertible architecture, and network training can be conducted efficiently with maximum likelihood. Our method enables the generation of realistic spectral data and outperforms the state of the art on two downstream classification tasks.
- Multi-modal Retinal Image Registration Using a Keypoint-Based Vessel Structure Aligning Network (arXiv, 2022-07-21): We propose an end-to-end trainable deep learning method for multi-modal retinal image registration. Our method extracts convolutional features from the vessel structure for keypoint detection and description. The keypoint detection and description network and the graph neural network are jointly trained in a self-supervised manner.
- OADAT: Experimental and Synthetic Clinical Optoacoustic Data for Standardized Image Processing (arXiv, 2022-06-17): Optoacoustic (OA) imaging is based on the excitation of biological tissues with nanosecond-duration laser pulses, followed by detection of ultrasound waves generated via light-absorption-mediated thermoelastic expansion. OA imaging offers a powerful combination of rich optical contrast and high resolution in deep tissues. No standardized datasets generated with different types of experimental setups and associated processing methods are available to facilitate advances in broader clinical applications of OA.
- AS-Net: Fast Photoacoustic Reconstruction with Multi-feature Fusion from Sparse Data (arXiv, 2021-01-22): Photoacoustic imaging is capable of acquiring high-contrast images of optical absorption at depths much greater than traditional optical imaging techniques. In this paper, we employ a novel signal processing method to make sparse PA raw data more suitable for the neural network. We then propose the Attention Steered Network (AS-Net) for PA reconstruction with multi-feature fusion.
- Real-time sparse-sampled ptychographic imaging through deep neural networks (arXiv, 2020-04-15): A ptychography reconstruction is achieved by solving a complex inverse problem that imposes constraints on both the acquisition and the analysis of the data. We propose PtychoNN, a novel approach to solving the ptychography reconstruction problem based on deep convolutional neural networks.
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.