POLARIS: A High-contrast Polarimetric Imaging Benchmark Dataset for Exoplanetary Disk Representation Learning
- URL: http://arxiv.org/abs/2506.03511v1
- Date: Wed, 04 Jun 2025 02:55:02 GMT
- Title: POLARIS: A High-contrast Polarimetric Imaging Benchmark Dataset for Exoplanetary Disk Representation Learning
- Authors: Fangyi Cao, Bin Ren, Zihao Wang, Shiwei Fu, Youbin Mo, Xiaoyang Liu, Yuzhou Chen, Weixin Yao,
- Abstract summary: With over 1,000,000 images from more than 10,000 exposures, can artificial intelligence (AI) serve as a transformative tool in imaging Earth-like exoplanets in the coming decade? We introduce a benchmark and explore this question from a polarimetric image representation learning perspective. This is the first uniformly reduced, high-quality exoplanet imaging dataset, rare in astrophysics and machine learning.
- Score: 27.870701240010924
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With over 1,000,000 images from more than 10,000 exposures using state-of-the-art high-contrast imagers (e.g., Gemini Planet Imager, VLT/SPHERE) in the search for exoplanets, can artificial intelligence (AI) serve as a transformative tool in imaging Earth-like exoplanets in the coming decade? In this paper, we introduce a benchmark and explore this question from a polarimetric image representation learning perspective. Despite extensive investments over the past decade, only a few new exoplanets have been directly imaged. Existing imaging approaches rely heavily on labor-intensive labeling of reference stars, which serve as background to extract circumstellar objects (disks or exoplanets) around target stars. With our POLARIS (POlarized Light dAta for total intensity Representation learning of direct Imaging of exoplanetary Systems) dataset, we classify reference star and circumstellar disk images using the full public SPHERE/IRDIS polarized-light archive since 2014, requiring less than 10 percent manual labeling. We evaluate a range of models including statistical, generative, and large vision-language models and provide baseline performance. We also propose an unsupervised generative representation learning framework that integrates these models, achieving superior performance and enhanced representational power. To our knowledge, this is the first uniformly reduced, high-quality exoplanet imaging dataset, rare in astrophysics and machine learning. By releasing this dataset and baselines, we aim to equip astrophysicists with new tools and engage data scientists in advancing direct exoplanet imaging, catalyzing major interdisciplinary breakthroughs.
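The abstract says the benchmark evaluates statistical models among its baselines for separating reference-star frames from circumstellar-disk frames. As a hedged illustration only (not the paper's actual pipeline; the synthetic images, sizes, and the PCA-plus-nearest-centroid rule are all invented stand-ins for real SPHERE/IRDIS data), such a statistical baseline might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for polarimetric frames: "reference star" images are
# pure speckle-like noise; "disk" images add a faint ring. Shapes and
# statistics are illustrative only, not SPHERE/IRDIS data.
def make_images(n, with_disk):
    imgs = rng.normal(0.0, 1.0, size=(n, 32, 32))
    if with_disk:
        yy, xx = np.mgrid[:32, :32]
        r = np.hypot(yy - 16, xx - 16)
        ring = np.exp(-((r - 10.0) ** 2) / 4.0)
        imgs += 3.0 * ring
    return imgs.reshape(n, -1)

X_ref, X_disk = make_images(100, False), make_images(100, True)
X = np.vstack([X_ref, X_disk])
y = np.array([0] * 100 + [1] * 100)

# PCA via SVD on mean-centred data: keep the top components as features.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:10].T  # 10-D representation of each frame

# Nearest-centroid classification in the PCA space.
c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)).astype(int)
accuracy = (pred == y).mean()
print(f"nearest-centroid accuracy: {accuracy:.2f}")
```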
Related papers
- STAR: A Benchmark for Astronomical Star Fields Super-Resolution [51.79340280382437]
We propose STAR, a large-scale astronomical SR dataset containing 54,738 flux-consistent star field image pairs. We propose a Flux-Invariant Super Resolution (FISR) model that could accurately infer the flux-consistent high-resolution images from input photometry.
arXiv Detail & Related papers (2025-07-22T09:28:28Z) - A New Statistical Model of Star Speckles for Learning to Detect and Characterize Exoplanets in Direct Imaging Observations [37.845442465099396]
This paper presents a novel statistical model that captures nuisance fluctuations using a multi-scale approach. It integrates into an interpretable, end-to-end learnable framework for simultaneous exoplanet detection and flux estimation. The proposed approach is computationally efficient, robust to varying data quality, and well suited for large-scale observational surveys.
arXiv Detail & Related papers (2025-03-21T13:07:55Z) - Exoplanet Detection via Differentiable Rendering [23.64604723151245]
Direct imaging of exoplanets faces significant challenges due to the high contrast between host stars and their planets. Wavefront aberrations introduce speckles in the telescope science images, which are patterns of diffracted starlight that can mimic the appearance of planets. Traditional post-processing methods, operating primarily in the image intensity domain, do not integrate wavefront sensing data. We present a differentiable rendering approach that leverages these wavefront sensing data to improve exoplanet detection.
arXiv Detail & Related papers (2025-01-03T17:30:44Z) - Contrasting Deepfakes Diffusion via Contrastive Learning and Global-Local Similarities [88.398085358514]
Contrastive Deepfake Embeddings (CoDE) is a novel embedding space specifically designed for deepfake detection.
CoDE is trained via contrastive learning by additionally enforcing global-local similarities.
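The global-local contrastive objective itself is not spelled out in this summary. As a hedged sketch of the kind of InfoNCE/NT-Xent-style loss such contrastive training typically builds on (batch size, embedding dimension, and temperature here are illustrative assumptions, not CoDE's actual settings):

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """NT-Xent-style contrastive loss for paired embeddings.

    z1[i] and z2[i] are embeddings of two views of the same sample;
    every other row in the batch acts as a negative.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                      # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # positives sit on the diagonal

rng = np.random.default_rng(0)
anchor = rng.normal(size=(8, 16))
aligned = info_nce(anchor, anchor + 0.01 * rng.normal(size=(8, 16)))
random_ = info_nce(anchor, rng.normal(size=(8, 16)))
print(aligned, random_)  # aligned pairs should score a much lower loss
```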
arXiv Detail & Related papers (2024-07-29T18:00:10Z) - A Comparative Study on Generative Models for High Resolution Solar Observation Imaging [59.372588316558826]
This work investigates capabilities of current state-of-the-art generative models to accurately capture the data distribution behind observed solar activity states.
Using distributed training on supercomputers, we are able to train generative models for up to 1024x1024 resolution that produce high quality samples indistinguishable to human experts.
arXiv Detail & Related papers (2023-04-14T14:40:32Z) - Simple and Effective Synthesis of Indoor 3D Scenes [78.95697556834536]
We study the problem of synthesizing immersive 3D indoor scenes from one or more images.
Our aim is to generate high-resolution images and videos from novel viewpoints.
We propose an image-to-image GAN that maps directly from reprojections of incomplete point clouds to full high-resolution RGB-D images.
arXiv Detail & Related papers (2022-04-06T17:54:46Z) - InvGAN: Invertible GANs [88.58338626299837]
InvGAN, short for Invertible GAN, successfully embeds real images to the latent space of a high quality generative model.
This allows us to perform image inpainting, merging, and online data augmentation.
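Embedding a real image into a generator's latent space amounts to solving an inversion problem: find a latent whose generated output reconstructs the image. A toy sketch of that problem with a linear stand-in generator (a real InvGAN generator is a deep network; `W`, the dimensions, and the Lipschitz-based step size are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "generator": a fixed linear map from a 16-D latent to a 64-D
# image. Only the shape of the inversion problem is meant to carry over.
W = rng.normal(size=(64, 16))
def generate(z):
    return W @ z

# Target image produced by a known latent, which inversion should recover.
z_true = rng.normal(size=16)
x_target = generate(z_true)

# Invert by gradient descent on the reconstruction error ||G(z) - x||^2.
# Step size 1/L with L the Lipschitz constant of the gradient (2 * sigma_max^2).
L = 2.0 * np.linalg.norm(W, ord=2) ** 2
lr = 1.0 / L
z = np.zeros(16)
for _ in range(500):
    grad = 2.0 * W.T @ (generate(z) - x_target)
    z -= lr * grad

recon_error = np.linalg.norm(generate(z) - x_target)
print(f"reconstruction error after inversion: {recon_error:.6f}")
```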
arXiv Detail & Related papers (2021-12-08T21:39:00Z) - Self-supervised similarity search for large scientific datasets [0.0]
We present the use of self-supervised learning to explore and exploit large unlabeled datasets.
We first train a self-supervised model to distil low-dimensional representations that are robust to symmetries, uncertainties, and noise in each image.
We then use the representations to construct and publicly release an interactive semantic similarity search tool.
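Once low-dimensional representations exist, a semantic similarity search reduces to nearest-neighbour lookup under cosine similarity. A minimal sketch with random stand-in representations (the real tool indexes learned embeddings of survey images):

```python
import numpy as np

def most_similar(query, reps, k=3):
    """Indices of the k representations closest to `query` by cosine similarity."""
    reps_n = reps / np.linalg.norm(reps, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sims = reps_n @ q
    return np.argsort(-sims)[:k]

rng = np.random.default_rng(0)
reps = rng.normal(size=(1000, 64))              # stand-in learned representations
query = reps[42] + 0.01 * rng.normal(size=64)   # slightly perturbed copy of item 42
top = most_similar(query, reps)
print(top)  # item 42 should rank first
```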
arXiv Detail & Related papers (2021-10-25T18:00:00Z) - Towards Robust Monocular Visual Odometry for Flying Robots on Planetary Missions [49.79068659889639]
Ingenuity, which just landed on Mars, will mark the beginning of a new era of exploration unhindered by traversability.
We present an advanced robust monocular odometry algorithm that uses efficient optical flow tracking.
We also present a novel approach to estimate the current risk of scale drift based on a principal component analysis of the relative translation information matrix.
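The scale-drift estimate above analyses the eigenstructure of the relative translation information matrix: directions with little information leave the translation, and hence the scale, poorly constrained. A heuristic sketch of that idea (the ratio-based risk score below is an assumption for illustration, not the paper's actual formula):

```python
import numpy as np

def scale_drift_risk(info_matrix):
    """Heuristic: the weaker the smallest eigenvalue of the relative
    translation information matrix, the less constrained the translation
    is along that eigendirection, so the higher the drift risk."""
    eigvals = np.linalg.eigvalsh(info_matrix)    # ascending order
    return eigvals[-1] / max(eigvals[0], 1e-12)  # condition-number-style ratio

well = np.diag([100.0, 90.0, 95.0])  # translation well constrained in all directions
weak = np.diag([100.0, 90.0, 0.5])   # nearly unobservable along one axis
print(scale_drift_risk(well), scale_drift_risk(weak))
```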
arXiv Detail & Related papers (2021-09-12T12:52:20Z) - Stereo Matching by Self-supervision of Multiscopic Vision [65.38359887232025]
We propose a new self-supervised framework for stereo matching utilizing multiple images captured at aligned camera positions.
A cross photometric loss, an uncertainty-aware mutual-supervision loss, and a new smoothness loss are introduced to optimize the network.
Our model obtains better disparity maps than previous unsupervised methods on the KITTI dataset.
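The cross photometric loss mentioned above can be illustrated by warping one view toward the other with a candidate disparity map and measuring the photometric error; correct disparities yield low error. A simplified sketch (nearest-pixel warping and edge clipping are simplifications; real losses use sub-pixel sampling and occlusion handling):

```python
import numpy as np

def photometric_loss(left, right, disparity):
    """Mean absolute photometric error after warping `right` toward `left`
    by a per-pixel horizontal disparity (nearest-pixel warp, clipped edges)."""
    h, w = left.shape
    cols = np.arange(w)[None, :] - np.round(disparity).astype(int)
    cols = np.clip(cols, 0, w - 1)
    rows = np.repeat(np.arange(h)[:, None], w, axis=1)
    warped = right[rows, cols]
    return np.abs(left - warped).mean()

rng = np.random.default_rng(0)
right = rng.normal(size=(16, 32))
true_disp = np.full((16, 32), 4.0)
left = np.roll(right, 4, axis=1)  # left view = right view shifted by 4 pixels

good = photometric_loss(left, right, true_disp)
bad = photometric_loss(left, right, np.zeros((16, 32)))
print(good, bad)  # the correct disparity yields the lower loss
```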
arXiv Detail & Related papers (2021-04-09T02:58:59Z) - Self-Supervised Representation Learning for Astronomical Images [1.0499611180329804]
Self-supervised learning recovers representations of sky survey images that are semantically useful.
We show that our approach can achieve the accuracy of supervised models while using 2-4 times fewer labels for training.
arXiv Detail & Related papers (2020-12-24T03:25:36Z) - Self-supervised Learning for Astronomical Image Classification [1.2891210250935146]
In Astronomy, a huge amount of image data is generated daily by photometric surveys.
We propose a technique to leverage unlabeled astronomical images to pre-train deep convolutional neural networks.
arXiv Detail & Related papers (2020-04-23T17:32:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.