Deep learning for exoplanet detection and characterization by direct imaging at high contrast
- URL: http://arxiv.org/abs/2509.20310v1
- Date: Wed, 24 Sep 2025 16:43:28 GMT
- Title: Deep learning for exoplanet detection and characterization by direct imaging at high contrast
- Authors: Théo Bodrito, Olivier Flasseur, Julien Mairal, Jean Ponce, Maud Langlois, Anne-Marie Lagrange,
- Abstract summary: We present a multi-scale statistical model for the nuisance component corrupting multivariate image series at high contrast. Applied to data from the VLT/SPHERE instrument, the method significantly improves the detection sensitivity and the accuracy of astrometric and photometric estimation.
- Score: 26.359902459074856
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Exoplanet imaging is a major challenge in astrophysics due to the need for high angular resolution and high contrast. We present a multi-scale statistical model for the nuisance component corrupting multivariate image series at high contrast. Integrated into a learnable architecture, it leverages the physics of the problem and enables the fusion of multiple observations of the same star in a way that is optimal in terms of detection signal-to-noise ratio. Applied to data from the VLT/SPHERE instrument, the method significantly improves the detection sensitivity and the accuracy of astrometric and photometric estimation.
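The abstract's claim that multiple observations of the same star are fused "in a way that is optimal in terms of detection signal-to-noise ratio" can be illustrated by the classical inverse-variance (matched-filter) combination for independent Gaussian noise. The sketch below uses hypothetical noise levels and a single pixel per observation; it illustrates only the fusion principle, not the paper's learnable architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three observations of the same (unit-amplitude) planet signal, each
# corrupted by independent Gaussian noise of a different level
# (hypothetical values).
sigmas = np.array([0.5, 1.0, 2.0])
obs = 1.0 + sigmas * rng.standard_normal(3)

# Inverse-variance weighting is the SNR-optimal linear fusion for
# independent Gaussian noise (the matched-filter combination).
w = 1.0 / sigmas**2
w /= w.sum()
fused = float(w @ obs)

var_fused = 1.0 / np.sum(1.0 / sigmas**2)    # variance of the optimal combo
var_mean = np.mean(sigmas**2) / sigmas.size  # variance of a naive average
```

Because the weights downweight noisy observations, the fused estimate always has variance no larger than the naive average, which is what makes the combination SNR-optimal.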
Related papers
- A Hardware-Algorithm Co-Designed Framework for HDR Imaging and Dehazing in Extreme Rocket Launch Environments [17.698797172555754]
Quantitative optical measurement of critical mechanical parameters during rocket launch faces severe challenges due to extreme imaging conditions. We propose a hardware-algorithm co-design framework that combines a custom Spatially Varying Exposure (SVE) sensor with a physics-aware dehazing algorithm. Our approach dynamically estimates haze density, performs region-adaptive illumination optimization, and applies multi-scale entropy-constrained fusion to effectively separate haze from scene radiance.
arXiv Detail & Related papers (2026-01-13T02:51:55Z) - STAR: A Benchmark for Astronomical Star Fields Super-Resolution [51.79340280382437]
We propose STAR, a large-scale astronomical SR dataset containing 54,738 flux-consistent star field image pairs. We also propose a Flux-Invariant Super Resolution (FISR) model that can accurately infer flux-consistent high-resolution images from input photometry.
arXiv Detail & Related papers (2025-07-22T09:28:28Z) - Degradation-Modeled Multipath Diffusion for Tunable Metalens Photography [44.164180405913456]
We introduce Degradation-Modeled Multipath Diffusion for tunable metalens photography. We balance high-frequency detail generation, structural fidelity, and suppression of metalens-specific degradation. We build a millimeter-scale MetaCamera for real-world validation.
arXiv Detail & Related papers (2025-06-28T04:48:37Z) - CARL: Camera-Agnostic Representation Learning for Spectral Image Analysis [75.25966323298003]
Spectral imaging offers promising applications across diverse domains, including medicine and urban scene understanding. However, variability in channel dimensionality and captured wavelengths among spectral cameras impedes the development of AI-driven methodologies. We introduce CARL, a model for Camera-Agnostic Representation Learning across RGB, multispectral, and hyperspectral imaging modalities.
arXiv Detail & Related papers (2025-04-27T13:06:40Z) - A New Statistical Model of Star Speckles for Learning to Detect and Characterize Exoplanets in Direct Imaging Observations [37.845442465099396]
This paper presents a novel statistical model that captures nuisance fluctuations using a multi-scale approach. It integrates into an interpretable, end-to-end learnable framework for simultaneous exoplanet detection and flux estimation. The proposed approach is computationally efficient, robust to varying data quality, and well suited for large-scale observational surveys.
arXiv Detail & Related papers (2025-03-21T13:07:55Z) - Adaptive Stereo Depth Estimation with Multi-Spectral Images Across All Lighting Conditions [58.88917836512819]
We propose a novel framework incorporating stereo depth estimation to enforce accurate geometric constraints.
To mitigate the effects of poor lighting on stereo matching, we introduce Degradation Masking.
Our method achieves state-of-the-art (SOTA) performance on the Multi-Spectral Stereo (MS2) dataset.
arXiv Detail & Related papers (2024-11-06T03:30:46Z) - MODEL&CO: Exoplanet detection in angular differential imaging by learning across multiple observations [37.845442465099396]
Most post-processing methods build a model of the nuisances from the target observations themselves.
We propose to build the nuisance model from an archive of multiple observations by leveraging supervised deep learning techniques.
We apply the proposed algorithm to several datasets from the VLT/SPHERE instrument, and demonstrate a superior precision-recall trade-off.
arXiv Detail & Related papers (2024-09-23T09:22:45Z) - Multi-Sensor Diffusion-Driven Optical Image Translation for Large-Scale Applications [3.4085512042262374]
We propose a method that super-resolves large-scale low spatial resolution images into high-resolution equivalents from disparate optical sensors. Our approach provides precise domain adaptation, preserving image content while improving radiometric accuracy and feature representation. We reach a mean Learned Perceptual Image Patch Similarity (mLPIPS) of 0.1884 and a Fréchet Inception Distance (FID) of 45.64, clearly outperforming all compared methods.
arXiv Detail & Related papers (2024-04-17T10:49:00Z) - Combining multi-spectral data with statistical and deep-learning models
for improved exoplanet detection in direct imaging at high contrast [39.90150176899222]
Exoplanet signals can only be identified when combining several observations with dedicated detection algorithms.
We learn a model of the spatial, temporal and spectral characteristics of the nuisance, directly from the observations.
A convolutional neural network (CNN) is then trained in a supervised fashion to detect the residual signature of synthetic sources.
arXiv Detail & Related papers (2023-06-21T13:42:07Z) - Decoupled-and-Coupled Networks: Self-Supervised Hyperspectral Image
Super-Resolution with Subpixel Fusion [67.35540259040806]
We propose a subpixel-level HS super-resolution framework by devising a novel decoupled-and-coupled network, called DC-Net.
As the name suggests, DC-Net first decouples the input into common (or cross-sensor) and sensor-specific components.
We append a self-supervised learning module behind the CSU net by guaranteeing the material consistency to enhance the detailed appearances of the restored HS product.
arXiv Detail & Related papers (2022-05-07T23:40:36Z) - RRNet: Relational Reasoning Network with Parallel Multi-scale Attention
for Salient Object Detection in Optical Remote Sensing Images [82.1679766706423]
Salient object detection (SOD) for optical remote sensing images (RSIs) aims at locating and extracting visually distinctive objects/regions from the optical RSIs.
We propose a relational reasoning network with parallel multi-scale attention for SOD in optical RSIs.
Our proposed RRNet outperforms the existing state-of-the-art SOD competitors both qualitatively and quantitatively.
arXiv Detail & Related papers (2021-10-27T07:18:32Z) - Boosting Image Super-Resolution Via Fusion of Complementary Information
Captured by Multi-Modal Sensors [21.264746234523678]
Image Super-Resolution (SR) provides a promising technique to enhance the image quality of low-resolution optical sensors.
In this paper, we attempt to leverage complementary information from a low-cost channel (visible/depth) to boost image quality of an expensive channel (thermal) using fewer parameters.
arXiv Detail & Related papers (2020-12-07T02:15:28Z) - Hyperspectral-Multispectral Image Fusion with Weighted LASSO [68.04032419397677]
We propose an approach for fusing hyperspectral and multispectral images to provide high-quality hyperspectral output.
We demonstrate that the proposed sparse fusion and reconstruction provides quantitatively superior results when compared to existing methods on publicly available images.
arXiv Detail & Related papers (2020-03-15T23:07:56Z)
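The weighted-LASSO fusion in the last entry above estimates sparse coefficients under per-coefficient penalties. A minimal sketch of such an estimator, using plain ISTA (iterative soft-thresholding) on a hypothetical toy problem rather than the paper's actual solver or data, could look like:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear model y = A x + noise with a sparse x, standing in for the
# spatial-spectral unknowns of hyperspectral/multispectral fusion.
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[[1, 4, 7]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(30)

# Per-coefficient weights (hypothetical values): larger weight, stronger shrinkage.
w = np.full(10, 0.1)

# ISTA for the weighted LASSO:  min_x 0.5*||y - A x||^2 + sum_i w_i * |x_i|
L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part
x = np.zeros(10)
for _ in range(500):
    z = x - A.T @ (A @ x - y) / L    # gradient step on the quadratic term
    x = np.sign(z) * np.maximum(np.abs(z) - w / L, 0.0)  # soft-threshold

support = np.flatnonzero(np.abs(x) > 0.3)  # recovered sparsity pattern
```

The soft-thresholding step is what enforces sparsity: coefficients whose gradient update stays below the weight threshold are set exactly to zero, so the per-coefficient weights directly control which components survive the fusion.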
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.