Beyond mirkwood: Enhancing SED Modeling with Conformal Predictions
- URL: http://arxiv.org/abs/2312.14212v2
- Date: Sat, 10 Feb 2024 11:07:55 GMT
- Title: Beyond mirkwood: Enhancing SED Modeling with Conformal Predictions
- Authors: Sankalp Gilda
- Abstract summary: We propose an advanced machine learning-based approach that enhances flexibility and uncertainty quantification in SED fitting.
We incorporate conformalized quantile regression to convert point predictions into error bars, enhancing interpretability and reliability.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Traditional spectral energy distribution (SED) fitting techniques face
uncertainties due to assumptions in star formation histories and dust
attenuation curves. We propose an advanced machine learning-based approach that
enhances flexibility and uncertainty quantification in SED fitting. Unlike the
fixed NGBoost model used in mirkwood, our approach allows for any
sklearn-compatible model, including deterministic models. We incorporate
conformalized quantile regression to convert point predictions into error bars,
enhancing interpretability and reliability. Using CatBoost as the base
predictor, we compare results with and without conformal prediction,
demonstrating improved performance using metrics such as coverage and interval
width. Our method offers a more versatile and accurate tool for deriving galaxy
physical properties from observational data.
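The pipeline described above, conformalized quantile regression (CQR) wrapped around a base regressor, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it substitutes sklearn's `GradientBoostingRegressor` with quantile loss for the CatBoost base predictor named in the abstract, and uses synthetic data in place of galaxy observations.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for (photometry, physical property) pairs.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.standard_normal(2000)

X_train, X_cal, y_train, y_cal = train_test_split(X, y, test_size=0.5, random_state=0)

alpha = 0.1  # target 90% marginal coverage

# Step 1: fit lower/upper quantile regressors on the training split.
lo = GradientBoostingRegressor(loss="quantile", alpha=alpha / 2).fit(X_train, y_train)
hi = GradientBoostingRegressor(loss="quantile", alpha=1 - alpha / 2).fit(X_train, y_train)

# Step 2: conformity scores on the calibration split -- how far each
# held-out point falls outside its predicted quantile band.
scores = np.maximum(lo.predict(X_cal) - y_cal, y_cal - hi.predict(X_cal))
n = len(y_cal)
q_hat = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Step 3: widen (or tighten) the band by q_hat to get calibrated error bars.
X_new = rng.uniform(-3, 3, size=(500, 1))
y_new = np.sin(X_new[:, 0]) + 0.3 * rng.standard_normal(500)
lower = lo.predict(X_new) - q_hat
upper = hi.predict(X_new) + q_hat

coverage = np.mean((y_new >= lower) & (y_new <= upper))   # ~1 - alpha
width = np.mean(upper - lower)                            # interval width metric
```

The coverage and interval-width statistics computed at the end are the same two evaluation metrics the abstract reports; swapping the base predictor for any other sklearn-compatible quantile model leaves the calibration step unchanged.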
Related papers
- Supervised Score-Based Modeling by Gradient Boosting [49.556736252628745]
We propose a Supervised Score-based Model (SSM) which can be viewed as a gradient boosting algorithm combining score matching.
We provide a theoretical analysis of learning and sampling for SSM to balance inference time and prediction accuracy.
Our model outperforms existing models in both accuracy and inference time.
arXiv Detail & Related papers (2024-11-02T07:06:53Z) - Influence Functions for Scalable Data Attribution in Diffusion Models [52.92223039302037]
Diffusion models have led to significant advancements in generative modelling.
Yet their widespread adoption poses challenges regarding data attribution and interpretability.
In this paper, we aim to help address such challenges by developing an influence-functions framework.
arXiv Detail & Related papers (2024-10-17T17:59:02Z) - Flexible Bayesian Last Layer Models Using Implicit Priors and Diffusion Posterior Sampling [7.084307990641011]
We introduce a novel approach that combines diffusion techniques and implicit priors for variational learning of Bayesian last layer weights.
By delivering an explicit and computationally efficient variational lower bound, our method aims to augment the expressive abilities of BLL models.
arXiv Detail & Related papers (2024-08-07T12:59:58Z) - Semi-supervised Regression Analysis with Model Misspecification and High-dimensional Data [8.619243141968886]
We present an inference framework for estimating regression coefficients in conditional mean models.
We develop an augmented inverse probability weighted (AIPW) method, employing regularized estimators for both propensity score (PS) and outcome regression (OR) models.
Our theoretical findings are verified through extensive simulation studies and a real-world data application.
arXiv Detail & Related papers (2024-06-20T00:34:54Z) - Advancing Cross-Domain Generalizability in Face Anti-Spoofing: Insights, Design, and Metrics [10.631157315662607]
This paper presents a novel perspective for enhancing anti-spoofing performance in zero-shot data domain generalization.
Going beyond previous frame-wise spoofing prediction, we introduce a nuanced metric calculation that aggregates frame-level probabilities into a video-wise prediction.
Our final model outperforms existing state-of-the-art methods across the datasets.
arXiv Detail & Related papers (2024-06-18T04:15:22Z) - Kalman Filter for Online Classification of Non-Stationary Data [101.26838049872651]
In Online Continual Learning (OCL) a learning system receives a stream of data and sequentially performs prediction and training steps.
We introduce a probabilistic Bayesian online learning model by using a neural representation and a state space model over the linear predictor weights.
In experiments in multi-class classification we demonstrate the predictive ability of the model and its flexibility to capture non-stationarity.
arXiv Detail & Related papers (2023-06-14T11:41:42Z) - Federated Conformal Predictors for Distributed Uncertainty
Quantification [83.50609351513886]
Conformal prediction is emerging as a popular paradigm for providing rigorous uncertainty quantification in machine learning.
In this paper, we extend conformal prediction to the federated learning setting.
We propose a weaker notion of partial exchangeability, better suited to the FL setting, and use it to develop the Federated Conformal Prediction framework.
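Before the federated extension above, the core split-conformal recipe is worth seeing in miniature. The sketch below shows standard centralized split conformal prediction for regression with absolute-residual nonconformity scores; it is an assumed illustrative setup (linear model, synthetic data), not the federated framework from the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.5, size=1000)

# Split the data: fit the model on one half, calibrate on the other.
X_fit, X_cal, y_fit, y_cal = X[:500], X[500:], y[:500], y[500:]
model = LinearRegression().fit(X_fit, y_fit)

# Nonconformity scores: absolute residuals on the calibration split.
scores = np.abs(y_cal - model.predict(X_cal))
n = len(scores)
alpha = 0.1
q_hat = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction interval for a new point: symmetric band of half-width q_hat,
# with finite-sample marginal coverage >= 1 - alpha under exchangeability.
x_new = rng.normal(size=(1, 3))
pred = model.predict(x_new)[0]
interval = (pred - q_hat, pred + q_hat)
```

The exchangeability assumption in the last comment is exactly what breaks across federated clients with heterogeneous data, which is why the paper introduces the weaker partial-exchangeability notion.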
arXiv Detail & Related papers (2023-05-27T19:57:27Z) - Improving Adaptive Conformal Prediction Using Self-Supervised Learning [72.2614468437919]
We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores.
We empirically demonstrate the benefit of the additional information using both synthetic and real data on the efficiency (width), deficit, and excess of conformal prediction intervals.
arXiv Detail & Related papers (2023-02-23T18:57:14Z) - Improved prediction rule ensembling through model-based data generation [0.0]
Prediction rule ensembles (PRE) provide interpretable prediction models with relatively high accuracy.
PRE obtains a large set of decision rules from a (boosted) decision tree ensemble and achieves sparsity through Lasso-penalized regression.
This article examines the use of surrogate models to improve the performance of PRE, wherein the Lasso regression is trained with the help of a massive dataset.
arXiv Detail & Related papers (2021-09-28T12:44:10Z) - Learning Consistent Deep Generative Models from Sparse Data via
Prediction Constraints [16.48824312904122]
We develop a new framework for learning variational autoencoders and other deep generative models.
We show that these two contributions -- prediction constraints and consistency constraints -- lead to promising image classification performance.
arXiv Detail & Related papers (2020-12-12T04:18:50Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under
Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.