Preliminary Report on Mantis Shrimp: a Multi-Survey Computer Vision Photometric Redshift Model
- URL: http://arxiv.org/abs/2402.03535v1
- Date: Mon, 5 Feb 2024 21:44:19 GMT
- Title: Preliminary Report on Mantis Shrimp: a Multi-Survey Computer Vision Photometric Redshift Model
- Authors: Andrew Engel, Gautham Narayan, Nell Byler
- Abstract summary: Photometric redshift estimation is a well-established subfield of astronomy.
Mantis Shrimp is a computer vision model for photometric redshift estimation that fuses ultra-violet (GALEX), optical (PanSTARRS), and infrared (UnWISE) imagery.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The availability of large, public, multi-modal astronomical datasets presents
an opportunity to execute novel research that straddles the line between
science of AI and science of astronomy. Photometric redshift estimation is a
well-established subfield of astronomy. Prior works show that computer vision
models typically outperform catalog-based models, but these models face
additional complexities when incorporating images from more than one instrument
or sensor. In this report, we detail our progress creating Mantis Shrimp, a
multi-survey computer vision model for photometric redshift estimation that
fuses ultra-violet (GALEX), optical (PanSTARRS), and infrared (UnWISE) imagery.
We use deep learning interpretability diagnostics to measure how the model
leverages information from the different inputs. We reason about the behavior
of the CNNs from the interpretability metrics, specifically framing the result
in terms of physically-grounded knowledge of galaxy properties.
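As a rough illustration of what such a multi-survey fusion model can look like, the sketch below late-fuses per-survey CNN encoders and classifies into redshift bins, one common way to output a photo-z PDF. The encoder sizes, cutout sizes, and bin count are illustrative assumptions, not the authors' actual architecture; only the band counts (GALEX FUV/NUV, PanSTARRS grizy, UnWISE W1/W2) follow the surveys named in the abstract.

```python
import torch
import torch.nn as nn

class SurveyEncoder(nn.Module):
    """Small CNN mapping one survey's image cutout to a feature vector."""
    def __init__(self, in_channels, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class LateFusionPhotoZ(nn.Module):
    """Encode GALEX, PanSTARRS, and UnWISE cutouts separately, fuse by
    concatenation, and classify into redshift bins so that a softmax
    over the logits yields a photo-z probability distribution."""
    def __init__(self, n_bins=300, feat_dim=128):
        super().__init__()
        self.uv = SurveyEncoder(2, feat_dim)   # 2 GALEX bands (FUV, NUV)
        self.opt = SurveyEncoder(5, feat_dim)  # 5 PanSTARRS bands (grizy)
        self.ir = SurveyEncoder(2, feat_dim)   # 2 UnWISE bands (W1, W2)
        self.head = nn.Linear(3 * feat_dim, n_bins)

    def forward(self, uv, opt, ir):
        fused = torch.cat([self.uv(uv), self.opt(opt), self.ir(ir)], dim=1)
        return self.head(fused)  # logits over redshift bins

model = LateFusionPhotoZ()
logits = model(torch.randn(4, 2, 32, 32),
               torch.randn(4, 5, 32, 32),
               torch.randn(4, 2, 32, 32))
pdf = logits.softmax(dim=1)  # per-galaxy redshift probability distribution
```

In a model shaped like this, one simple interpretability diagnostic of the kind the abstract mentions is input-gradient saliency: differentiating a predicted bin's logit with respect to each survey's input and comparing the gradient magnitudes indicates how much each wavelength range drives the prediction.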
Related papers
- Maven: A Multimodal Foundation Model for Supernova Science [40.20166238855543]
We present Maven, the first foundation model for supernova science.
We first pre-train our model to align photometry and spectroscopy from 0.5M synthetic supernovae.
We then fine-tune the model on 4,702 observed supernovae from the Zwicky Transient Facility.
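The summary does not spell out Maven's alignment objective; a standard way to align two modalities, shown here as a hedged sketch, is a symmetric CLIP-style contrastive (InfoNCE) loss over matched photometry/spectroscopy pairs. The encoders, embedding dimension, and temperature below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_align(phot_emb, spec_emb, temperature=0.07):
    """Symmetric InfoNCE loss: the matching (photometry, spectroscopy)
    pair for each supernova attracts; all other pairings in the batch repel."""
    p = F.normalize(phot_emb, dim=1)
    s = F.normalize(spec_emb, dim=1)
    logits = p @ s.t() / temperature      # (B, B) cosine-similarity matrix
    targets = torch.arange(p.size(0))     # diagonal entries are the true pairs
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Embeddings would come from trained photometry and spectroscopy encoders.
loss = contrastive_align(torch.randn(16, 128), torch.randn(16, 128))
```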
arXiv Detail & Related papers (2024-08-29T18:00:05Z)
- A Versatile Framework for Analyzing Galaxy Image Data by Implanting Human-in-the-loop on a Large Vision Model [14.609681101463334]
We present a framework for the general analysis of galaxy images based on a large vision model (LVM) plus downstream tasks (DST).
Considering the low signal-to-noise ratio of galaxy images, we have incorporated a Human-in-the-loop (HITL) module into our large vision model.
For object detection, trained on 1000 data points, our DST on top of the LVM achieves an accuracy of 96.7%, while ResNet50 plus Mask R-CNN gives an accuracy of 93.1%.
arXiv Detail & Related papers (2024-05-17T16:29:27Z)
- State Space Model for New-Generation Network Alternative to Transformers: A Survey [52.812260379420394]
In the post-deep learning era, the Transformer architecture has demonstrated its powerful performance across large pre-trained models and various downstream tasks.
To further reduce the complexity of attention models, numerous efforts have been made to design more efficient methods.
Among them, the State Space Model (SSM), as a possible replacement for the self-attention based Transformer model, has drawn more and more attention in recent years.
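To make the contrast with self-attention concrete, here is a toy version of the linear recurrence at the heart of SSM layers: it runs in time linear in sequence length, whereas attention is quadratic. The shapes and matrices below are illustrative, not any specific model's parameterization.

```python
import torch

def ssm_scan(u, A, B, C):
    """y_k = C x_k with x_k = A x_{k-1} + B u_k, computed sequentially.
    u: (T, d_in); A: (d_state, d_state); B: (d_state, d_in); C: (d_out, d_state)."""
    x = torch.zeros(A.size(0))
    ys = []
    for u_k in u:                 # one pass over the sequence: O(T),
        x = A @ x + B @ u_k       # unlike attention's O(T^2) pairwise scores
        ys.append(C @ x)
    return torch.stack(ys)

y = ssm_scan(torch.randn(16, 4),      # length-16 input sequence
             0.9 * torch.eye(8),      # stable toy state transition
             torch.randn(8, 4),
             torch.randn(2, 8))
```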
arXiv Detail & Related papers (2024-04-15T07:24:45Z)
- Physics-Driven Turbulence Image Restoration with Stochastic Refinement [80.79900297089176]
Image distortion by atmospheric turbulence is a critical problem in long-range optical imaging systems.
Fast and physics-grounded simulation tools have been introduced to help the deep-learning models adapt to real-world turbulence conditions.
This paper proposes the Physics-integrated Restoration Network (PiRN) to help the network disentangle the stochasticity from the degradation and the underlying image.
arXiv Detail & Related papers (2023-07-20T05:49:21Z)
- A Comparative Study on Generative Models for High Resolution Solar Observation Imaging [59.372588316558826]
This work investigates the capabilities of current state-of-the-art generative models to accurately capture the data distribution behind observed solar activity states.
Using distributed training on supercomputers, we are able to train generative models for up to 1024x1024 resolution that produce high-quality samples that human experts find indistinguishable from real observations.
arXiv Detail & Related papers (2023-04-14T14:40:32Z)
- GM-NeRF: Learning Generalizable Model-based Neural Radiance Fields from Multi-view Images [79.39247661907397]
We introduce an effective framework, Generalizable Model-based Neural Radiance Fields (GM-NeRF), to synthesize free-viewpoint images.
Specifically, we propose a geometry-guided attention mechanism to register the appearance code from multi-view 2D images to a geometry proxy.
arXiv Detail & Related papers (2023-03-24T03:32:02Z)
- Self-Supervised Learning for Modeling Gamma-ray Variability in Blazars [0.0]
Blazars are active galactic nuclei with relativistic jets pointed almost directly at Earth.
Deep learning can help uncover structure in gamma-ray blazars' complex variability patterns.
arXiv Detail & Related papers (2023-02-15T14:57:46Z)
- Explainable, Physics Aware, Trustworthy AI Paradigm Shift for Synthetic Aperture Radar [5.164409209168982]
We propose a change of paradigm for explainability in data science for the case of Synthetic Aperture Radar (SAR) data.
It aims to use explainable data transformations based on well-established models to generate inputs for AI methods.
arXiv Detail & Related papers (2023-01-09T09:22:13Z)
- Advancing Plain Vision Transformer Towards Remote Sensing Foundation Model [97.9548609175831]
We resort to plain vision transformers with about 100 million parameters and make the first attempt to propose large vision models customized for remote sensing tasks.
Specifically, to handle the large image size and objects of various orientations in RS images, we propose a new rotated varied-size window attention.
Experiments on detection tasks demonstrate the superiority of our model over all state-of-the-art models, achieving 81.16% mAP on the DOTA-V1.0 dataset.
arXiv Detail & Related papers (2022-08-08T09:08:40Z)
- Processing Images from Multiple IACTs in the TAIGA Experiment with Convolutional Neural Networks [62.997667081978825]
We use convolutional neural networks (CNNs) to analyze Monte Carlo-simulated images from the TAIGA experiment.
The analysis includes selection of the images corresponding to the showers caused by gamma rays and estimating the energy of the gamma rays.
arXiv Detail & Related papers (2021-12-31T10:49:11Z)
- Realistic galaxy image simulation via score-based generative models [0.0]
We show that a score-based generative model can be used to produce realistic yet fake images that mimic observations of galaxies.
Subjectively, the generated galaxies are highly realistic when compared with samples from the real dataset.
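For readers unfamiliar with score-based generation, the sketch below shows annealed Langevin sampling, the classic way such models turn a learned score (the gradient of the log data density at each noise level) into samples. The `score_fn` here is an assumed, already-trained network, and the sigma-scaled step schedule is the textbook recipe, not necessarily this paper's exact procedure.

```python
import torch

def langevin_sample(score_fn, shape, sigmas, steps_per_sigma=10, eps=2e-5):
    """Annealed Langevin dynamics: walk from pure noise toward the data
    manifold, following the learned score at decreasing noise scales."""
    x = torch.randn(shape)
    for sigma in sigmas:                          # sigmas sorted large -> small
        step = eps * (sigma / sigmas[-1]) ** 2    # scale step to noise level
        for _ in range(steps_per_sigma):
            noise = torch.randn_like(x)
            x = x + 0.5 * step * score_fn(x, sigma) + step ** 0.5 * noise
    return x

# Usage with a stand-in score function (a real one would be a trained CNN):
fake_score = lambda x, sigma: -x / sigma ** 2     # exact score of a Gaussian
sample = langevin_sample(fake_score, (1, 1, 32, 32),
                         sigmas=[10.0, 5.0, 1.0, 0.5])
```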
arXiv Detail & Related papers (2021-11-02T16:27:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.