Hyperspectral Lightcurve Inversion for Attitude Determination
- URL: http://arxiv.org/abs/2401.05397v1
- Date: Tue, 19 Dec 2023 16:06:50 GMT
- Title: Hyperspectral Lightcurve Inversion for Attitude Determination
- Authors: Simão da Graça Marto, Massimiliano Vasile, Andrew Campbell, Paul Murray, Stephen Marshall, Vasili Savitski
- Abstract summary: Time series single-pixel spectral measurements of spacecraft are used to infer the spacecraft's attitude and rotation.
The aim is to work with minimal information, so no prior is available on either the attitude or the inertia tensor.
Results are shown based on synthetic data.
- Score: 0.9820957505036108
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spectral lightcurves consisting of time series single-pixel spectral
measurements of spacecraft are used to infer the spacecraft's attitude and
rotation. Two methods are used: one based on numerical optimisation of a
regularised least squares cost function, and the other based on machine
learning with a neural network model. The aim is to work with minimal
information, so no prior is available on either the attitude or the inertia
tensor. The theoretical and practical aspects of this task are investigated,
and the methodology is tested on synthetic data, on which results are reported.
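A minimal sketch of the optimisation-based method is given below, under toy assumptions that are not taken from the paper: torque-free rotation at a constant angular velocity, a six-facet box model with per-facet reflectance spectra, a simple diffuse reflection law, and a regulariser that penalises large spin rates. The geometry, spectra, parameterisation, and use of scipy's Nelder-Mead optimiser are all illustrative choices.

```python
"""Minimal sketch of regularised least-squares spectral lightcurve inversion.

Toy assumptions (not from the paper): torque-free rotation at constant angular
velocity, a six-facet box with per-facet reflectance spectra, and a simple
diffuse reflection law. Only the initial attitude (as a rotation vector) and
the angular velocity are estimated.
"""
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation as R

rng = np.random.default_rng(0)
times = np.linspace(0.0, 60.0, 40)           # observation epochs [s]
wavelengths = np.linspace(0.4, 1.0, 30)      # spectral bins [um]
sun = np.array([1.0, 0.0, 0.0])              # unit vector towards the Sun
obs = np.array([0.0, 1.0, 0.0])              # unit vector towards the observer

# Hypothetical six-facet box: outward normals and per-facet reflectance spectra.
normals = np.vstack([np.eye(3), -np.eye(3)])
spectra = 0.5 + 0.5 * rng.random((6, wavelengths.size))

def forward(params):
    """Simulate a spectral lightcurve from initial attitude + spin parameters."""
    rotvec0, omega = params[:3], params[3:]
    curve = []
    for t in times:
        att = R.from_rotvec(omega * t) * R.from_rotvec(rotvec0)  # attitude at t
        n_inertial = att.apply(normals)
        # Diffuse facet visibility: illuminated and facing the observer.
        vis = np.clip(n_inertial @ sun, 0, None) * np.clip(n_inertial @ obs, 0, None)
        curve.append(vis @ spectra)
    return np.asarray(curve)                  # shape (n_times, n_wavelengths)

# Synthetic "truth" and noisy observations.
true_params = np.array([0.3, -0.2, 0.1, 0.05, 0.30, 0.10])
observed = forward(true_params) + 0.01 * rng.standard_normal((times.size, wavelengths.size))

def cost(params, lam=1e-3):
    """Regularised least squares: data misfit plus a penalty on the spin rate."""
    resid = forward(params) - observed
    return np.sum(resid**2) + lam * np.sum(params[3:]**2)

result = minimize(cost, x0=np.zeros(6), method="Nelder-Mead",
                  options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-9})
print("estimated [rotvec, omega]:", result.x)
```

Because the fit is non-convex, the attitude parameterisation, the regulariser, and the initial guess largely determine whether such an optimisation converges to a meaningful solution; the sketch only shows the shape of the cost function being minimised.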
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve performance gains of up to 48% on the PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
- Spectral operator learning for parametric PDEs without data reliance [6.7083321695379885]
We introduce a novel operator learning-based approach for solving parametric partial differential equations (PDEs) without the need for data harnessing.
The proposed framework demonstrates superior performance compared to existing scientific machine learning techniques.
arXiv Detail & Related papers (2023-10-03T12:37:15Z)
- Machine learning enabled experimental design and parameter estimation for ultrafast spin dynamics [54.172707311728885]
We introduce a methodology that combines machine learning with Bayesian optimal experimental design (BOED).
Our method employs a neural network surrogate of large-scale spin dynamics simulations to perform the precise distribution and utility calculations required by BOED.
Our numerical benchmarks demonstrate the superior performance of our method in guiding XPFS experiments, predicting model parameters, and yielding more informative measurements within limited experimental time.
arXiv Detail & Related papers (2023-06-03T06:19:20Z)
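The utility calculation at the centre of BOED can be illustrated with a generic nested Monte Carlo estimate of the expected information gain (EIG) of a candidate design; the toy forward model, Gaussian noise level, prior, and candidate designs below are assumptions standing in for the paper's spin-dynamics surrogate.

```python
"""Nested Monte Carlo estimate of expected information gain (EIG) for a design.

Generic BOED utility sketch with a toy model: the neural-network surrogate of
spin dynamics is replaced here by a simple analytic forward model
y = f(theta, d) + Gaussian noise; everything below is illustrative.
"""
import numpy as np

rng = np.random.default_rng(1)
noise_sd = 0.1

def simulate(theta, design):
    """Toy forward model standing in for an expensive simulator or surrogate."""
    return np.sin(theta * design)

def log_likelihood(y, theta, design):
    mu = simulate(theta, design)
    return -0.5 * ((y - mu) / noise_sd) ** 2 - np.log(noise_sd * np.sqrt(2 * np.pi))

def expected_information_gain(design, n_outer=2000, n_inner=2000):
    """EIG(d) ~ E_theta,y[ log p(y|theta,d) - log E_theta'[ p(y|theta',d) ] ]."""
    theta_outer = rng.normal(0.0, 1.0, n_outer)          # prior samples
    y = simulate(theta_outer, design) + noise_sd * rng.standard_normal(n_outer)
    log_cond = log_likelihood(y, theta_outer, design)
    theta_inner = rng.normal(0.0, 1.0, n_inner)          # fresh prior samples
    # Log marginal p(y|d) via log-mean-exp over the inner samples (vectorised).
    ll_matrix = log_likelihood(y[:, None], theta_inner[None, :], design)
    log_marg = np.logaddexp.reduce(ll_matrix, axis=1) - np.log(n_inner)
    return np.mean(log_cond - log_marg)

# Rank a handful of candidate designs by estimated EIG.
for d in [0.5, 1.0, 2.0, 4.0]:
    print(f"design={d:3.1f}  EIG~{expected_information_gain(d):.3f}")
```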
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
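Recovering parameters by automatic differentiation amounts to gradient descent through a differentiable forward model. In the sketch below the trained network is replaced by a toy analytic function; the parameter values, noise level, and optimiser settings are illustrative assumptions.

```python
"""Sketch of recovering model parameters by automatic differentiation.

The differentiable "surrogate" below is a toy analytic function standing in for
a neural network trained to mimic simulated scattering data; parameter values,
noise level, and optimiser settings are illustrative assumptions.
"""
import torch

torch.manual_seed(0)
q = torch.linspace(0.1, 5.0, 200)            # measurement grid (toy)

def surrogate(params, q):
    """Differentiable stand-in for a trained forward model."""
    amplitude, damping = params
    return amplitude * torch.exp(-damping * q) * torch.cos(q)

# Synthetic "experimental" data generated from hidden true parameters.
true_params = torch.tensor([2.0, 0.3])
data = surrogate(true_params, q) + 0.01 * torch.randn(q.shape)

# Gradient-based recovery of the unknown parameters through the surrogate.
params = torch.tensor([1.0, 1.0], requires_grad=True)
optimiser = torch.optim.Adam([params], lr=0.05)
for step in range(500):
    optimiser.zero_grad()
    loss = torch.mean((surrogate(params, q) - data) ** 2)
    loss.backward()
    optimiser.step()

print("recovered parameters:", params.detach().numpy())
```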
- Learning new physics efficiently with nonparametric methods [11.970219534238444]
We present a machine learning approach for model-independent new physics searches.
The corresponding algorithm is powered by recent large-scale implementations of kernel methods.
We show that our approach has dramatic advantages compared to neural network implementations in terms of training times and computational resources.
arXiv Detail & Related papers (2022-04-05T16:17:59Z)
- Information-Theoretic Odometry Learning [83.36195426897768]
We propose a unified information-theoretic framework for learning-motivated methods aimed at odometry estimation.
The proposed framework provides an elegant tool for performance evaluation and understanding in information-theoretic language.
arXiv Detail & Related papers (2022-03-11T02:37:35Z)
- Single-shot self-supervised particle tracking [0.0]
We propose a novel deep-learning method, LodeSTAR, that learns to track objects with sub-pixel accuracy from a single unlabeled image.
We demonstrate that LodeSTAR outperforms traditional methods in terms of accuracy.
Thanks to the ability to train deep-learning models with a single unlabeled image, LodeSTAR can accelerate the development of high-quality microscopic analysis pipelines.
arXiv Detail & Related papers (2022-02-28T05:02:20Z)
- Analytical Modelling of Exoplanet Transit Spectroscopy with Dimensional Analysis and Symbolic Regression [68.8204255655161]
The deep learning revolution has opened the door to deriving such analytical results directly with a computer algorithm fitted to the data.
We successfully demonstrate the use of symbolic regression on synthetic data for the transit radii of generic hot Jupiter exoplanets.
As a preprocessing step, we use dimensional analysis to identify the relevant dimensionless combinations of variables.
arXiv Detail & Related papers (2021-12-22T00:52:56Z)
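The two-step recipe of forming dimensionless groups and then running symbolic regression on them can be illustrated on a textbook problem; the projectile-range example and the use of the third-party gplearn package are assumptions for illustration, not the exoplanet-transit setup.

```python
"""Dimensional analysis plus symbolic regression, as a generic two-step recipe.

Toy example (not the exoplanet setup): projectile range R = v^2 sin(2*theta)/g.
Dimensional analysis collapses (R, v, g, theta) into two dimensionless groups,
Pi1 = R*g/v^2 and Pi2 = theta, and symbolic regression then searches for
Pi1 = f(Pi2). Assumes the third-party gplearn package is installed.
"""
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(2)
n = 500
v = rng.uniform(5.0, 50.0, n)          # launch speed [m/s]
theta = rng.uniform(0.05, 1.5, n)      # launch angle [rad]
g = 9.81
R = v**2 * np.sin(2 * theta) / g       # "measured" range, noise-free for clarity

# Step 1: dimensional analysis -> dimensionless groups.
pi1 = R * g / v**2                     # target group
pi2 = theta.reshape(-1, 1)             # input group (already dimensionless)

# Step 2: symbolic regression on the dimensionless data.
sr = SymbolicRegressor(population_size=2000, generations=20,
                       function_set=("add", "sub", "mul", "sin"),
                       parsimony_coefficient=0.01, random_state=0)
sr.fit(pi2, pi1)
print(sr._program)                     # often something close to sin(2*theta)
```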
- Investigation of Nonlinear Model Order Reduction of the Quasigeostrophic Equations through a Physics-Informed Convolutional Autoencoder [0.0]
Reduced order modeling (ROM) approximates complex physics-based models of real-world processes by inexpensive surrogates.
In this paper we explore the construction of ROMs using autoencoders (AE) that perform nonlinear projections of the system dynamics onto a low-dimensional manifold.
Our investigation using the quasi-geostrophic equations reveals that while the physics-informed (PI) cost function helps with spatial reconstruction, spatial features are less powerful than spectral features.
arXiv Detail & Related papers (2021-08-27T15:20:01Z)
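A bare-bones convolutional autoencoder of the kind used for such nonlinear projections is sketched below; the 64x64 snapshot size, layer widths, latent dimension, and plain reconstruction loss are assumptions, and the physics-informed cost term discussed in the paper is not reproduced.

```python
"""Bare-bones convolutional autoencoder for nonlinear reduced-order modelling.

Projects 64x64 scalar-field snapshots onto a small latent space and back.
Architecture, latent size, and the plain MSE loss are illustrative assumptions.
"""
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    def __init__(self, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),              # 32 -> 64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Random snapshots stand in for quasi-geostrophic solution fields.
snapshots = torch.randn(128, 1, 64, 64)
model = ConvAE(latent_dim=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(5):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(snapshots), snapshots)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}  reconstruction MSE {loss.item():.4f}")
```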
- A Scaling Law for Synthetic-to-Real Transfer: A Measure of Pre-Training [52.93808218720784]
Synthetic-to-real transfer learning is a framework in which we pre-train models with synthetically generated images and ground-truth annotations for real tasks.
Although synthetic images overcome the data scarcity issue, it remains unclear how the fine-tuning performance scales with pre-trained models.
We observe a simple and general scaling law that consistently describes learning curves in various tasks, models, and complexities of synthesized pre-training data.
arXiv Detail & Related papers (2021-08-25T02:29:28Z)
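Scaling laws of this kind are usually expressed as a parametric learning curve fitted to measured errors. The functional form err(n) = a*n^(-b) + c and the synthetic numbers below are generic assumptions, not the paper's measurements.

```python
"""Fitting a simple power-law learning curve of the kind scaling studies use.

The form err(n) = a * n**(-b) + c is a common, assumed choice, fitted here to
synthetic fine-tuning errors versus pre-training set size.
"""
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(n, a, b, c):
    return a * n**(-b) + c

# Synthetic (pre-training size, fine-tuning error) pairs for illustration.
sizes = np.array([1e3, 3e3, 1e4, 3e4, 1e5, 3e5, 1e6])
rng = np.random.default_rng(3)
errors = learning_curve(sizes, a=5.0, b=0.35, c=0.08) * (1 + 0.03 * rng.standard_normal(sizes.size))

popt, _ = curve_fit(learning_curve, sizes, errors, p0=(1.0, 0.5, 0.05))
a, b, c = popt
print(f"fitted curve: err(n) ~ {a:.2f} * n^(-{b:.2f}) + {c:.3f}")
# The constant term c is the error floor that more synthetic pre-training data cannot remove.
```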
- Efficient Multidimensional Functional Data Analysis Using Marginal Product Basis Systems [2.4554686192257424]
We propose a framework for learning continuous representations from a sample of multidimensional functional data.
We show that the resulting estimation problem can be solved efficiently by tensor decomposition.
We conclude with a real data application in neuroimaging.
arXiv Detail & Related papers (2021-07-30T16:02:15Z)
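The tensor decomposition at the core of such estimators can be sketched generically as a low-rank CP (PARAFAC) factorisation of a coefficient tensor; the random data, the rank, and the use of the third-party tensorly package are assumptions, and the marginal product basis construction itself is not reproduced.

```python
"""Low-rank CP decomposition of a coefficient tensor, sketched generically.

The random 3-way data and the rank below are illustrative; assumes a recent
version of the third-party tensorly package.
"""
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(4)
# Synthetic rank-3 coefficient tensor (three margins) plus a little noise.
factors_true = [rng.standard_normal((dim, 3)) for dim in (20, 15, 10)]
tensor = tl.cp_to_tensor((np.ones(3), factors_true)) + 0.01 * rng.standard_normal((20, 15, 10))

# CP (PARAFAC) decomposition recovers a rank-3 marginal product structure.
cp = parafac(tl.tensor(tensor), rank=3, n_iter_max=200)
reconstruction = tl.cp_to_tensor(cp)
rel_err = np.linalg.norm(reconstruction - tensor) / np.linalg.norm(tensor)
print(f"relative reconstruction error: {rel_err:.3e}")
```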
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.