Neural network reconstruction of cosmology using the Pantheon compilation
- URL: http://arxiv.org/abs/2305.15499v2
- Date: Sun, 29 Oct 2023 04:27:53 GMT
- Title: Neural network reconstruction of cosmology using the Pantheon compilation
- Authors: Konstantinos F. Dialektopoulos, Purba Mukherjee, Jackson Levi Said, Jurgen Mifsud
- Abstract summary: We reconstruct the Hubble diagram using various data sets, including correlated ones.
Using ReFANN, which was built for data sets with independent uncertainties, we extend it to handle non-Gaussian data points.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we reconstruct the Hubble diagram using various data sets,
including correlated ones, with Artificial Neural Networks (ANNs). Using ReFANN,
which was built for data sets with independent uncertainties, we extend it to
handle non-Gaussian data points, as well as data sets with covariance matrices,
among others. Furthermore, we compare our results with existing ones derived
from Gaussian processes, and we perform null tests to assess the validity of
the concordance model of cosmology.
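To make the extension concrete, the sketch below shows, in heavily simplified form and not as the authors' ReFANN code, how an ANN reconstruction of H(z) can absorb correlated uncertainties: the usual sum of squared normalized residuals is replaced by the generalized chi-squared r^T C^{-1} r, which reduces to the independent-error loss when the covariance matrix C is diagonal. The network size, optimizer settings, and synthetic data are illustrative assumptions.

```python
# A minimal sketch (not the authors' code) of a ReFANN-style reconstruction:
# a small MLP maps redshift z to H(z), trained with a chi-squared loss that
# uses the full data covariance matrix C instead of independent errors.
import torch
import torch.nn as nn

class HzNet(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z):
        return self.net(z)

def chi2_loss(pred, obs, cov_inv):
    """Generalized chi-squared r^T C^{-1} r; reduces to sum((r/sigma)^2)
    when C is diagonal (the independent-uncertainty case)."""
    r = pred.flatten() - obs
    return r @ cov_inv @ r

# Illustrative correlated data set (z, H(z), covariance matrix).
z = torch.linspace(0.05, 2.0, 30).unsqueeze(1)
H_obs = 70.0 * torch.sqrt(0.3 * (1 + z) ** 3 + 0.7).flatten() + torch.randn(30)
cov = 4.0 * torch.eye(30) + 0.5 * torch.ones(30, 30)  # correlated errors
cov_inv = torch.linalg.inv(cov)

model = HzNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = chi2_loss(model(z), H_obs, cov_inv)
    loss.backward()
    opt.step()
```

A standard null test of the kind mentioned in the abstract is the Om diagnostic, Om(z) = [H^2(z)/H_0^2 - 1] / [(1+z)^3 - 1], which is constant and equal to the matter density parameter if flat LambdaCDM holds; a redshift-dependent Om(z) computed from the reconstructed H(z) would signal a deviation from the concordance model.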
Related papers
- Kolmogorov-Arnold Network Autoencoders [0.0]
Kolmogorov-Arnold Networks (KANs) are promising alternatives to Multi-Layer Perceptrons (MLPs).
KANs align closely with the Kolmogorov-Arnold representation theorem, potentially enhancing both model accuracy and interpretability.
Our results demonstrate that KAN-based autoencoders achieve competitive performance in terms of reconstruction accuracy.
arXiv Detail & Related papers (2024-10-02T22:56:00Z)
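A minimal sketch of the KAN idea summarized above, under stated assumptions: in the Kolmogorov-Arnold representation, every edge carries its own learnable univariate function rather than a scalar weight. Here each edge function is a learnable linear combination of fixed Gaussian bumps, a deliberate simplification of the B-spline parameterization used in actual KANs; the dimensions and initialization are illustrative, not from the paper.

```python
# Simplified KAN-style layer and autoencoder (illustration, not the paper's code).
import torch
import torch.nn as nn

class KANLayer(nn.Module):
    """Each edge (input p -> output q) carries its own learnable 1-D function,
    here a linear combination of fixed Gaussian bumps on a grid."""
    def __init__(self, in_dim, out_dim, n_basis=8):
        super().__init__()
        self.register_buffer("centers", torch.linspace(-2.0, 2.0, n_basis))
        self.coef = nn.Parameter(0.1 * torch.randn(in_dim, out_dim, n_basis))

    def forward(self, x):                                       # x: (batch, in_dim)
        phi = torch.exp(-(x.unsqueeze(-1) - self.centers) ** 2) # (batch, in, basis)
        # out[b, q] = sum_p sum_k coef[p, q, k] * phi[b, p, k]
        return torch.einsum("bpk,pqk->bq", phi, self.coef)

class KANAutoencoder(nn.Module):
    def __init__(self, dim=16, latent=2):
        super().__init__()
        self.encoder = KANLayer(dim, latent)
        self.decoder = KANLayer(latent, dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = KANAutoencoder()
x = torch.randn(32, 16)
loss = ((model(x) - x) ** 2).mean()  # reconstruction objective
```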
- Domain Adaptive Graph Neural Networks for Constraining Cosmological Parameters Across Multiple Data Sets [40.19690479537335]
We show that DA-GNN achieves higher accuracy and robustness on cross-dataset tasks.
This shows that DA-GNNs are a promising method for extracting domain-independent cosmological information.
arXiv Detail & Related papers (2023-11-02T20:40:21Z)
- Linking data separation, visual separation, and classifier performance using pseudo-labeling by contrastive learning [125.99533416395765]
We argue that the performance of the final classifier depends on the data separation present in the latent space and visual separation present in the projection.
We demonstrate our results by the classification of five real-world challenging image datasets of human intestinal parasites with only 1% supervised samples.
arXiv Detail & Related papers (2023-02-06T10:01:38Z)
- Neural Attentive Circuits [93.95502541529115]
We introduce a general-purpose, yet modular, neural architecture called Neural Attentive Circuits (NACs).
NACs learn the parameterization and a sparse connectivity of neural modules without using domain knowledge.
NACs achieve an 8x speedup at inference time while losing less than 3% performance.
arXiv Detail & Related papers (2022-10-14T18:00:07Z)
- Convolutional Neural Networks on Manifolds: From Graphs and Back [122.06927400759021]
We propose a manifold neural network (MNN) composed of a bank of manifold convolutional filters and point-wise nonlinearities.
To sum up, we focus on the manifold model as the limit of large graphs and construct MNNs, while we can still bring back graph neural networks by the discretization of MNNs.
arXiv Detail & Related papers (2022-10-01T21:17:39Z)
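One standard way to make the "filter bank plus pointwise nonlinearity" structure above concrete, offered here as an assumption rather than the paper's construction: a convolutional filter is a polynomial of the Laplace operator, y = sigma(sum_k h_k L^k x), and on a graph sampled from the manifold the graph Laplacian stands in for the Laplace-Beltrami operator.

```python
# Illustrative polynomial Laplacian filter layer (not the paper's code).
import numpy as np

def laplacian(adj):
    """Combinatorial graph Laplacian L = D - A."""
    return np.diag(adj.sum(axis=1)) - adj

def filter_layer(L, x, coeffs):
    """Apply a polynomial Laplacian filter, then a pointwise ReLU."""
    out = np.zeros_like(x)
    power = x.copy()
    for h_k in coeffs:           # accumulates sum_k h_k L^k x iteratively
        out += h_k * power
        power = L @ power
    return np.maximum(out, 0.0)  # pointwise nonlinearity

# Toy graph: 4-node cycle, a coarse discretization of the circle S^1.
adj = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
x = np.random.randn(4, 1)       # one input feature per node
y = filter_layer(laplacian(adj), x, coeffs=[0.5, -0.2, 0.1])
```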
- Batch-Ensemble Stochastic Neural Networks for Out-of-Distribution Detection [55.028065567756066]
Out-of-distribution (OOD) detection has recently received much attention from the machine learning community due to its importance in deploying machine learning models in real-world applications.
In this paper we propose an uncertainty quantification approach by modelling the distribution of features.
We incorporate an efficient ensemble mechanism, namely batch-ensemble, to construct the batch-ensemble neural networks (BE-SNNs) and overcome the feature collapse problem.
We show that BE-SNNs yield superior performance on several OOD benchmarks, such as the Two-Moons dataset and the FashionMNIST vs MNIST dataset, among others.
arXiv Detail & Related papers (2022-06-26T16:00:22Z)
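A generic instance of "OOD detection by modelling the distribution of features", given as an illustration of the idea rather than the BE-SNN implementation: fit a Gaussian to in-distribution feature vectors and score test inputs by their Mahalanobis distance. In a batch-ensemble, one such score would be computed per ensemble member and averaged.

```python
# Feature-density OOD scoring sketch (assumed Gaussian features, toy data).
import numpy as np

def fit_gaussian(feats):
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
    return mu, np.linalg.inv(cov)

def ood_score(f, mu, cov_inv):
    """Squared Mahalanobis distance; larger means more out-of-distribution."""
    d = f - mu
    return d @ cov_inv @ d

# Toy example: in-distribution features near the origin, an outlier far away.
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(500, 8))
mu, cov_inv = fit_gaussian(train_feats)
print(ood_score(rng.normal(size=8), mu, cov_inv))       # small score
print(ood_score(10 + rng.normal(size=8), mu, cov_inv))  # large score
```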
- Using Mixed-Effect Models to Learn Bayesian Networks from Related Data Sets [0.04297070083645048]
We provide an analogous solution for learning a Bayesian network from continuous data using mixed-effects models.
We study its structural, parametric, predictive and classification accuracy.
The improvement is marked for low sample sizes and for unbalanced data sets.
arXiv Detail & Related papers (2022-06-08T08:32:32Z)
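The mixed-effects building block behind the entry above, sketched with an assumed toy model and not the paper's Bayesian-network learning procedure: related data sets share fixed effects while each set deviates through its own random effect, here a random intercept fitted with statsmodels.

```python
# Random-intercept regression pooling related data sets (illustration only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
groups = np.repeat(["set_a", "set_b", "set_c"], 50)
x = rng.normal(size=150)
# shared slope 2.0, but each data set has its own intercept offset
offsets = {"set_a": -1.0, "set_b": 0.0, "set_c": 1.5}
y = 2.0 * x + np.array([offsets[g] for g in groups]) + rng.normal(scale=0.5, size=150)

df = pd.DataFrame({"y": y, "x": x, "dataset": groups})
model = smf.mixedlm("y ~ x", df, groups=df["dataset"]).fit()
print(model.summary())  # shared fixed effect for x plus random-intercept variance
```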
- CoarSAS2hvec: Heterogeneous Information Network Embedding with Balanced Network Sampling [0.0]
Heterogeneous information network (HIN) embedding aims to find the representations of nodes that preserve the proximity between entities of different nature.
A family of widely adopted approaches applies random walks to generate a sequence of heterogeneous contexts.
Due to the multipartite graph structure of HIN, hub nodes tend to be over-represented in the sampled sequence, giving rise to imbalanced samples of the network.
arXiv Detail & Related papers (2021-10-12T08:34:39Z)
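A toy simulation of the hub over-representation described above, using an assumed star-shaped author-paper network rather than any data set from the paper: because a random walk on a multipartite graph visits nodes in proportion to their degree, the hub dominates the sampled sequence that embedding methods train on, which is the imbalance that balanced sampling aims to correct.

```python
# Why random walks over-sample hubs in a heterogeneous network (toy example).
import random
from collections import Counter

# Bipartite "author-paper" toy HIN: one hub author on many papers,
# plus ten authors with a single paper each.
edges = {
    "hub_author": ["paper_%d" % i for i in range(10)],
    **{"author_%d" % i: ["paper_%d" % i] for i in range(10)},
}
for i in range(10):
    edges["paper_%d" % i] = ["hub_author", "author_%d" % i]

def random_walk(start, length):
    node, seq = start, []
    for _ in range(length):
        node = random.choice(edges[node])
        seq.append(node)
    return seq

counts = Counter(random_walk("paper_0", 10_000))
print(counts.most_common(3))  # hub_author appears far more than any single author
```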
- Learning a Self-Expressive Network for Subspace Clustering [15.096251922264281]
We propose a novel framework for subspace clustering, termed Self-Expressive Network (SENet), which employs a properly designed neural network to learn a self-expressive representation of the data.
Our SENet can not only learn the self-expressive coefficients with desired properties on the training data, but also handle out-of-sample data.
In particular, SENet yields highly competitive performance on MNIST, Fashion MNIST and Extended MNIST and state-of-the-art performance on CIFAR-10.
arXiv Detail & Related papers (2021-10-08T18:06:06Z)
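A minimal sketch of the self-expressive model underlying SENet, in its classic per-point optimization form rather than the paper's network: each data point is written as a combination of the others, X ~= XC with zero diagonal, and the affinity |C| + |C|^T is then clustered spectrally. Ridge regularization is assumed here for simplicity where sparse penalties are common; SENet's contribution is replacing the per-point solve with a learned network, which is what handles out-of-sample data.

```python
# Classic self-expressive coefficients via ridge regression (illustration only).
import numpy as np

def self_expressive_coeffs(X, lam=0.1):
    """Solve min_c ||x_j - X_{-j} c||^2 + lam ||c||^2 for every column j."""
    n = X.shape[1]
    C = np.zeros((n, n))
    for j in range(n):
        idx = [i for i in range(n) if i != j]      # exclude x_j itself
        A = X[:, idx]
        c = np.linalg.solve(A.T @ A + lam * np.eye(n - 1), A.T @ X[:, j])
        C[idx, j] = c
    return C

# Toy data: two 1-D subspaces (orthogonal lines) in R^2.
X = np.hstack([np.outer([1, 0], np.random.randn(5)),
               np.outer([0, 1], np.random.randn(5))])
C = self_expressive_coeffs(X)
affinity = np.abs(C) + np.abs(C).T  # block-diagonal, one block per subspace
```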
- Self-Supervised Neural Architecture Search for Imbalanced Datasets [129.3987858787811]
Neural Architecture Search (NAS) provides state-of-the-art results when trained on well-curated datasets with annotated labels.
We propose a NAS-based framework that bears threefold contributions, among them: (a) we focus on the self-supervised scenario, where no labels are required to determine the architecture, and (b) we assume the datasets are imbalanced.
arXiv Detail & Related papers (2021-09-17T14:56:36Z)
- Statistical model-based evaluation of neural networks [74.10854783437351]
We develop an experimental setup for the evaluation of neural networks (NNs).
The setup helps to benchmark a set of NNs vis-a-vis minimum-mean-square-error (MMSE) performance bounds.
This allows us to test the effects of training data size, data dimension, data geometry, noise, and mismatch between training and testing conditions.
arXiv Detail & Related papers (2020-11-18T00:33:24Z)
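An illustration of benchmarking against an MMSE bound, under an assumed linear Gaussian model rather than the paper's exact setup: for y = Ax + n with Gaussian x and n, the MMSE estimator of x from y is linear and closed-form, so any regression network's test MSE on (y, x) pairs can be compared directly to the analytic optimum.

```python
# Closed-form MMSE baseline for a linear Gaussian model (illustration only).
import numpy as np

rng = np.random.default_rng(0)
d_x, d_y, n_test = 4, 8, 10_000
A = rng.normal(size=(d_y, d_x))
sigma2 = 0.1                                   # noise variance

x = rng.normal(size=(n_test, d_x))             # x ~ N(0, I)
y = x @ A.T + np.sqrt(sigma2) * rng.normal(size=(n_test, d_y))

# MMSE estimator: x_hat = A^T (A A^T + sigma2 I)^{-1} y
W = A.T @ np.linalg.inv(A @ A.T + sigma2 * np.eye(d_y))
x_hat = y @ W.T
mmse = np.mean((x - x_hat) ** 2)               # bound any NN should approach
print("Empirical MMSE bound:", mmse)
# A NN trained on (y -> x) can then be scored by how close its MSE gets to `mmse`.
```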