Scaling Laws for Emulation of Stellar Spectra
- URL: http://arxiv.org/abs/2503.18617v2
- Date: Wed, 04 Jun 2025 20:55:17 GMT
- Title: Scaling Laws for Emulation of Stellar Spectra
- Authors: Tomasz Różański, Yuan-Sen Ting
- Abstract summary: We provide training guidelines for scaling Transformer-based spectral emulators to achieve optimal performance. Our results suggest that optimal computational resource allocation requires balanced scaling. This study establishes a foundation for developing spectral foundational models with enhanced domain transfer capabilities.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural network-based emulators for the inference of stellar parameters and elemental abundances represent an increasingly popular methodology in modern spectroscopic surveys. However, these approaches are often constrained by their emulation precision and domain transfer capabilities. Greater generalizability has previously been achieved only with significantly larger model architectures, as demonstrated by Transformer-based models in natural language processing. This observation aligns with neural scaling laws, where model performance predictably improves with increased model size, computational resources allocated to model training, and training data volume. In this study, we demonstrate that these scaling laws also apply to Transformer-based spectral emulators in astronomy. Building upon our previous work with TransformerPayne and incorporating Maximum Update Parametrization techniques from natural language models, we provide training guidelines for scaling models to achieve optimal performance. Our results show that within the explored parameter space, clear scaling relationships emerge. These findings suggest that optimal computational resource allocation requires balanced scaling. Specifically, given a tenfold increase in training compute, achieving an optimal seven-fold reduction in mean squared error necessitates an approximately 2.5-fold increase in dataset size and a 3.8-fold increase in model size. This study establishes a foundation for developing spectral foundational models with enhanced domain transfer capabilities.
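As a quick illustration of the quoted allocation, the short Python sketch below (our own reconstruction, not code from the paper; the exponent names alpha, a, and b are assumptions) converts the reported factors into power-law exponents and checks that the 2.5-fold data increase and 3.8-fold model increase roughly multiply up to the tenfold compute budget.

```python
# Illustrative sketch (not from the paper): express the reported compute-optimal
# scaling as power laws and check that the quoted factors are mutually consistent.
# All symbols (alpha, a, b) are our own notation, not the authors'.
import math

compute_factor = 10.0   # tenfold increase in training compute
mse_reduction  = 7.0    # reported optimal reduction in mean squared error
data_factor    = 2.5    # reported increase in training-set size
model_factor   = 3.8    # reported increase in model size

# Exponents of the implied power laws: MSE ~ C^(-alpha), N_data ~ C^a, N_params ~ C^b
alpha = math.log(mse_reduction, 10)   # ~0.85
a     = math.log(data_factor, 10)     # ~0.40
b     = math.log(model_factor, 10)    # ~0.58

print(f"alpha ~ {alpha:.2f}, a ~ {a:.2f}, b ~ {b:.2f}")
# Consistency check: if compute scales roughly as data times parameters
# (C ~ N_data * N_params), then a + b should be close to 1.
print(f"a + b ~ {a + b:.2f}  (2.5 * 3.8 = {data_factor * model_factor:.1f}, close to 10)")
```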
Related papers
- Exoplanet Classification through Vision Transformers with Temporal Image Analysis [0.0]
We propose a methodology that transforms raw light curve data from NASA's Kepler mission into Gramian Angular Fields (GAFs) and Recurrence Plots (RPs). These transformed images serve as inputs to the Vision Transformer (ViT) model, leveraging its ability to capture intricate temporal dependencies. We assess the performance of the model through recall, precision, and F1 score metrics, using a 5-fold cross-validation approach to obtain a robust estimate of the model's performance.
arXiv Detail & Related papers (2025-06-19T20:57:17Z) - Scaling Laws of Motion Forecasting and Planning -- A Technical Report [23.340801154900387]
We study the empirical scaling laws of a family of encoder-decoder autoregressive transformer models. We observe a strong correlation between model training loss and model evaluation metrics. We briefly study the utility of training on general logged driving data of other agents to improve the performance of the ego-agent.
arXiv Detail & Related papers (2025-06-09T20:54:23Z) - Exploring Scaling Laws for EHR Foundation Models [17.84205864956449]
We present the first empirical investigation of scaling laws for EHR foundation models. We identify consistent scaling patterns, including parabolic IsoFLOPs curves and power-law relationships between compute, model parameters, data size, and clinical utility.
arXiv Detail & Related papers (2025-05-29T01:05:11Z) - Latent Thought Models with Variational Bayes Inference-Time Computation [52.63299874322121]
Latent Thought Models (LTMs) incorporate explicit latent thought vectors that follow an explicit prior model in latent space. LTMs demonstrate superior sample and parameter efficiency compared to autoregressive models and discrete diffusion models.
arXiv Detail & Related papers (2025-02-03T17:50:34Z) - SMPLest-X: Ultimate Scaling for Expressive Human Pose and Shape Estimation [81.36747103102459]
Expressive human pose and shape estimation (EHPS) unifies body, hands, and face motion capture with numerous applications. Current state-of-the-art methods focus on training innovative architectural designs on confined datasets. We investigate the impact of scaling up EHPS towards a family of generalist foundation models.
arXiv Detail & Related papers (2025-01-16T18:59:46Z) - Optimizing Sequential Recommendation Models with Scaling Laws and Approximate Entropy [104.48511402784763]
The Performance Law for sequential recommendation (SR) models aims to theoretically investigate and model the relationship between model performance and data quality. We propose Approximate Entropy (ApEn) to assess data quality, presenting a more nuanced approach compared to traditional data quantity metrics.
arXiv Detail & Related papers (2024-11-30T10:56:30Z) - Scaling Laws for Task-Optimized Models of the Primate Visual Ventral Stream [3.4526439922541705]
We evaluate scaling laws for modeling the primate visual ventral stream (VVS). We observe that while behavioral alignment continues to scale with larger models, neural alignment saturates. Increased scaling is especially beneficial for higher-level visual areas, where small models trained on few samples exhibit only poor alignment.
arXiv Detail & Related papers (2024-11-08T17:13:53Z) - Uni-Mol2: Exploring Molecular Pretraining Model at Scale [27.172011090947823]
We present Uni-Mol2, an innovative molecular pretraining model that integrates features at the atomic level, graph level, and geometry structure level.
We successfully scale Uni-Mol2 to 1.1 billion parameters through pretraining on 800 million conformations, making it the largest molecular pretraining model to date.
arXiv Detail & Related papers (2024-06-21T08:28:54Z) - A Dynamical Model of Neural Scaling Laws [79.59705237659547]
We analyze a random feature model trained with gradient descent as a solvable model of network training and generalization.
Our theory shows how the gap between training and test loss can gradually build up over time due to repeated reuse of data.
arXiv Detail & Related papers (2024-02-02T01:41:38Z) - Navigating Scaling Laws: Compute Optimality in Adaptive Model Training [39.96209967632896]
In recent years, the state-of-the-art in deep learning has been dominated by very large models that have been pre-trained on vast amounts of data.
We extend the concept of optimality by allowing for an 'adaptive' model, i.e., a model that can change its shape during training.
arXiv Detail & Related papers (2023-11-06T16:20:28Z) - Turbulence in Focus: Benchmarking Scaling Behavior of 3D Volumetric Super-Resolution with BLASTNet 2.0 Data [4.293221567339693]
Analysis of compressible turbulent flows is essential for applications related to propulsion, energy generation, and the environment.
We present a 2.2 TB network-of-datasets containing 744 full-domain samples from 34 high-fidelity direct numerical simulations.
We benchmark a total of 49 variations of five deep learning approaches for 3D super-resolution.
arXiv Detail & Related papers (2023-09-23T18:57:02Z) - The Languini Kitchen: Enabling Language Modelling Research at Different Scales of Compute [66.84421705029624]
We introduce an experimental protocol that enables model comparisons based on equivalent compute, measured in accelerator hours.
We pre-process an existing large, diverse, and high-quality dataset of books that surpasses existing academic benchmarks in quality, diversity, and document length.
This work also provides two baseline models: a feed-forward model derived from the GPT-2 architecture and a recurrent model in the form of a novel LSTM with ten-fold throughput.
arXiv Detail & Related papers (2023-09-20T10:31:17Z) - Scaling Pre-trained Language Models to Deeper via Parameter-efficient Architecture [68.13678918660872]
We design a more capable parameter-sharing architecture based on matrix product operator (MPO) decomposition.
MPO decomposition can reorganize and factorize the information of a parameter matrix into two parts.
Our architecture shares the central tensor across all layers for reducing the model size.
arXiv Detail & Related papers (2023-03-27T02:34:09Z) - A Solvable Model of Neural Scaling Laws [72.8349503901712]
Large language models with a huge number of parameters, when trained on a near internet-sized number of tokens, have been empirically shown to obey neural scaling laws.
We propose a statistical model -- a joint generative data model and random feature model -- that captures this neural scaling phenomenology.
Key findings are the manner in which the power laws that occur in the statistics of natural datasets are extended by nonlinear random feature maps.
arXiv Detail & Related papers (2022-10-30T15:13:18Z)