Beyond Spherical geometry: Unraveling complex features of objects orbiting around stars from its transit light curve using deep learning
- URL: http://arxiv.org/abs/2509.14875v1
- Date: Thu, 18 Sep 2025 11:44:10 GMT
- Title: Beyond Spherical geometry: Unraveling complex features of objects orbiting around stars from its transit light curve using deep learning
- Authors: Ushasi Bhowmick, Shivam Kumaran
- Abstract summary: We train deep neural networks to predict Fourier coefficients directly from simulated light curves. Our results demonstrate that the neural network can successfully reconstruct the low-order ellipses. The level of reconstruction achieved by the neural network underscores the utility of using light curves as a means to extract information from transiting systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Characterizing the geometry of an object orbiting around a star from its transit light curve is a powerful tool to uncover various complex phenomena. This problem is inherently ill-posed, since similar or identical light curves can be produced by multiple different shapes. In this study, we investigate the extent to which the features of a shape can be embedded in a transit light curve. We generate a library of two-dimensional random shapes and simulate their transit light curves with the light curve simulator Yuti. Each shape is decomposed into a series of elliptical components, expressed in the form of Fourier coefficients, that add progressively diminishing perturbations to an ideal ellipse. We train deep neural networks to predict these Fourier coefficients directly from simulated light curves. Our results demonstrate that the neural network can successfully reconstruct the low-order ellipses, which describe the overall shape, orientation, and large-scale perturbations. For higher-order ellipses, the scale is successfully determined, but the inference of eccentricity and orientation is limited, demonstrating the extent of shape information in the light curve. We explore the impact of non-convex shape features on reconstruction and show its dependence on shape orientation. The level of reconstruction achieved by the neural network underscores the utility of using light curves as a means to extract geometric information from transiting systems.
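The elliptical decomposition described in the abstract can be sketched with complex Fourier coefficients of a closed contour, where each conjugate pair of harmonics traces one ellipse and higher harmonics add diminishing perturbations. This is a simplified stand-in for the paper's parameterization; the function names and truncation scheme here are illustrative, not the authors' code:

```python
import numpy as np

def fourier_coefficients(x, y, n_harmonics=8):
    """Decompose a closed 2-D contour into complex Fourier coefficients.

    Each conjugate pair (c_k, c_{-k}) traces an ellipse at frequency k;
    harmonic 1 captures the overall elliptical shape and orientation,
    while higher harmonics add progressively smaller perturbations.
    """
    z = np.asarray(x, dtype=float) + 1j * np.asarray(y, dtype=float)
    coeffs = np.fft.fft(z) / len(z)          # complex coefficients c_k
    # Keep the DC term plus the first n_harmonics conjugate pairs.
    kept = np.zeros_like(coeffs)
    kept[0] = coeffs[0]
    for k in range(1, n_harmonics + 1):
        kept[k] = coeffs[k]
        kept[-k] = coeffs[-k]
    return kept

def reconstruct(coeffs):
    """Invert the truncated series back to contour points."""
    z = np.fft.ifft(coeffs) * len(coeffs)
    return z.real, z.imag

# Example: a unit circle is reproduced exactly by harmonic 1 alone.
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
cx, cy = np.cos(t), np.sin(t)
rx, ry = reconstruct(fourier_coefficients(cx, cy, n_harmonics=1))
print(np.allclose(rx, cx), np.allclose(ry, cy))   # True True
```

Truncating at a low harmonic order mirrors the ill-posedness noted above: many contours share the same low-order coefficients, so only large-scale features are recoverable from the light curve.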
Related papers
- Surprising applications of Newton's hyperbolism transform of curves in Fourier-transform spectroscopy [0.0]
We study and generalize a surprisingly elegant geometric transform, the hyperbolism of curves originally found by Isaac Newton. We show that the Bloch picture and especially corresponding phase-space representations are directly geometrically related to the Lorentzian line shape.
arXiv Detail & Related papers (2025-11-11T16:38:24Z) - RotaTouille: Rotation Equivariant Deep Learning for Contours [0.02491171962188218]
We present RotaTouille, a framework for learning from contour data. It achieves both rotation and cyclic-shift equivariance through complex-valued circular convolution. We also introduce and characterize equivariant non-linearities, coarsening layers, and global pooling layers.
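The core mechanism named here, complex-valued circular convolution, can be illustrated in a few lines of numpy. This is a minimal sketch of the equivariance property only, not the RotaTouille architecture:

```python
import numpy as np

def circular_conv(z, w):
    """Complex-valued circular convolution via the FFT."""
    return np.fft.ifft(np.fft.fft(z) * np.fft.fft(w, n=len(z)))

rng = np.random.default_rng(0)
z = rng.normal(size=32) + 1j * rng.normal(size=32)   # contour as complex sequence
w = rng.normal(size=5) + 1j * rng.normal(size=5)     # a (here random) filter

theta = 0.7
out = circular_conv(z, w)
# Rotation equivariance: rotating every contour point by theta rotates the output.
print(np.allclose(circular_conv(np.exp(1j * theta) * z, w),
                  np.exp(1j * theta) * out))          # True
# Cyclic-shift equivariance: re-indexing the contour shifts the output the same way.
print(np.allclose(circular_conv(np.roll(z, 3), w), np.roll(out, 3)))  # True
```

Both properties follow from the convolution being complex-linear and translation-invariant over the cyclic index, which is why it suits contours that have no canonical starting point or orientation.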
arXiv Detail & Related papers (2025-08-22T13:05:55Z) - NeuVAS: Neural Implicit Surfaces for Variational Shape Modeling [59.41129792124764]
NeuVAS is a variational approach to shape modeling using neural implicit surfaces constrained under sparse input shape control. We introduce a smoothness term based on a functional of surface curvatures to minimize shape variation of the zero-level set surface of a neural SDF.
arXiv Detail & Related papers (2025-06-16T02:39:45Z) - WIR3D: Visually-Informed and Geometry-Aware 3D Shape Abstraction [13.645442589551354]
WIR3D is a technique for abstracting 3D shapes through a sparse set of visually meaningful curves in 3D. We optimize the parameters of Bezier curves such that they faithfully represent both the geometry and salient visual features. We successfully apply our method for shape abstraction over a broad dataset of shapes.
arXiv Detail & Related papers (2025-05-07T21:28:05Z) - AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction [55.69271635843385]
We present AniSDF, a novel approach that learns fused-granularity neural surfaces with physics-based encoding for high-fidelity 3D reconstruction. Our method substantially boosts the quality of SDF-based methods in both geometry reconstruction and novel-view synthesis.
arXiv Detail & Related papers (2024-10-02T03:10:38Z) - Computing Transiting Exoplanet Parameters with 1D Convolutional Neural Networks [0.0]
Two 1D convolutional neural network models are presented.
One model operates on complete light curves and estimates the orbital period.
The other one operates on phase-folded light curves and estimates the semimajor axis of the orbit and the square of the planet-to-star radius ratio.
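The data flow of the second model, from a phase-folded light curve to two orbital parameters, can be sketched in numpy. The weights, filter sizes, and pooling choice here are toy values for illustration, not the architecture from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1d(x, kernels, stride=1):
    """Valid-mode 1-D convolution with ReLU: (length,) input, (n_filters, k) kernels."""
    k = kernels.shape[1]
    n_out = (len(x) - k) // stride + 1
    windows = np.stack([x[i * stride : i * stride + k] for i in range(n_out)])
    return np.maximum(windows @ kernels.T, 0.0)   # shape (n_out, n_filters)

# Untrained weights, for shape illustration only.
W1 = rng.normal(scale=0.1, size=(8, 11))   # 8 convolutional filters of width 11
W2 = rng.normal(scale=0.1, size=(2, 8))    # linear head -> 2 parameters

# Toy phase-folded transit: a Gaussian-shaped dip in normalized flux.
light_curve = 1.0 - 0.01 * np.exp(-np.linspace(-3, 3, 200) ** 2)

features = conv1d(light_curve, W1, stride=2).mean(axis=0)   # global average pooling
a_over_rs, depth = W2 @ features   # semimajor axis and (Rp/Rs)^2 estimates
print(features.shape)              # (8,)
```

The first model would apply the same convolution-and-pool pattern to the complete, unfolded light curve to estimate the orbital period instead.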
arXiv Detail & Related papers (2024-02-21T10:17:23Z) - From Complexity to Clarity: Analytical Expressions of Deep Neural Network Weights via Clifford's Geometric Algebra and Convexity [54.01594785269913]
We show that optimal weights of deep ReLU neural networks are given by the wedge product of training samples when trained with standard regularized loss.
The training problem reduces to convex optimization over wedge product features, which encode the geometric structure of the training dataset.
arXiv Detail & Related papers (2023-09-28T15:19:30Z) - Curve Your Attention: Mixed-Curvature Transformers for Graph Representation Learning [77.1421343649344]
We propose a generalization of Transformers towards operating entirely on the product of constant curvature spaces.
We also provide a kernelized approach to non-Euclidean attention, which enables our model to run in time and memory cost linear to the number of nodes and edges.
arXiv Detail & Related papers (2023-09-08T02:44:37Z) - Gradient-Based Feature Learning under Structured Data [57.76552698981579]
In the anisotropic setting, the commonly used spherical gradient dynamics may fail to recover the true direction.
We show that appropriate weight normalization that is reminiscent of batch normalization can alleviate this issue.
In particular, under the spiked model with a suitably large spike, the sample complexity of gradient-based training can be made independent of the information exponent.
arXiv Detail & Related papers (2023-09-07T16:55:50Z) - Curved Geometric Networks for Visual Anomaly Recognition [39.91252195360767]
Learning a latent embedding to understand the underlying nature of data distribution is often formulated in Euclidean spaces with zero curvature.
In this work, we investigate benefits of the curved space for analyzing anomalies or out-of-distribution objects in data.
arXiv Detail & Related papers (2022-08-02T01:15:39Z) - Neural Convolutional Surfaces [59.172308741945336]
This work is concerned with a representation of shapes that disentangles fine, local and possibly repeating geometry, from global, coarse structures.
We show that this approach achieves better neural shape compression than the state of the art, as well as enabling manipulation and transfer of shape details.
arXiv Detail & Related papers (2022-04-05T15:40:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.