Estimating galaxy masses from kinematics of globular cluster systems: a
new method based on deep learning
- URL: http://arxiv.org/abs/2102.00277v2
- Date: Wed, 3 Feb 2021 13:00:30 GMT
- Title: Estimating galaxy masses from kinematics of globular cluster systems: a
new method based on deep learning
- Authors: Rajvir Kaur, Kenji Bekki, Ghulam Mubashar Hassan, Amitava Datta
- Abstract summary: We present a new method by which the total masses of galaxies, including dark matter, can be estimated from the kinematics of their globular cluster systems (GCSs).
We apply convolutional neural networks (CNNs) to the two-dimensional maps of line-of-sight velocities ($V$) and velocity dispersions ($\sigma$) of GCSs predicted from numerical simulations of disk and elliptical galaxies.
Overall accuracy for one-channel and two-channel data is 97.6% and 97.8% respectively, which suggests that the new method is promising.
- Score: 7.512896457568841
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new method by which the total masses of galaxies including dark
matter can be estimated from the kinematics of their globular cluster systems
(GCSs). In the proposed method, we apply the convolutional neural networks
(CNNs) to the two-dimensional (2D) maps of line-of-sight-velocities ($V$) and
velocity dispersions ($\sigma$) of GCSs predicted from numerical simulations of
disk and elliptical galaxies. In this method, we first train the CNN using
either a large number ($\sim 200,000$) of synthesized 2D maps of
$\sigma$ alone ("one-channel") or maps of both $\sigma$ and $V$ ("two-channel").
Then we use the CNN to predict the total masses of galaxies (i.e., test the
CNN) on a completely unseen dataset that is not used in training the CNN. The
principal results show that overall accuracy for one-channel and two-channel
data is 97.6\% and 97.8\% respectively, which suggests that the new method is
promising. The mean absolute errors (MAEs) for one-channel and two-channel data
are 0.288 and 0.275 respectively, and the root mean square errors
(RMSEs) are 0.539 and 0.51 for one-channel and two-channel data respectively. These
smaller MAEs and RMSEs for two-channel data (i.e., better performance) suggest
that the new method can properly consider the global rotation of GCSs in the
mass estimation. We stress that the prediction accuracy in the new mass
estimation method not only depends on the architectures of CNNs but also can be
affected by the introduction of noise in the synthesized images.
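Since the method's input is just binned kinematics, the construction of a two-channel ($\sigma$, $V$) map, along with the MAE/RMSE metrics quoted above, can be sketched in a few lines of NumPy. This is a minimal illustration: the GC positions, velocities, grid size, and log-mass values below are made up, not the paper's simulation data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mock globular-cluster system (illustrative only): projected positions
# (kpc) and line-of-sight velocities (km/s) with a rotating component
# plus random motions.
n_gc = 2000
x = rng.normal(0.0, 10.0, n_gc)
y = rng.normal(0.0, 10.0, n_gc)
v_los = 50.0 * np.tanh(x / 10.0) + rng.normal(0.0, 120.0, n_gc)

# Bin onto a coarse grid: per-pixel mean velocity (V) and velocity
# dispersion (sigma), mirroring the "two-channel" CNN input.
n_pix = 16
edges = np.linspace(-30.0, 30.0, n_pix + 1)
ix = np.clip(np.digitize(x, edges) - 1, 0, n_pix - 1)
iy = np.clip(np.digitize(y, edges) - 1, 0, n_pix - 1)

v_map = np.zeros((n_pix, n_pix))
s_map = np.zeros((n_pix, n_pix))
for i in range(n_pix):
    for j in range(n_pix):
        sel = (ix == i) & (iy == j)
        if sel.sum() >= 2:
            v_map[i, j] = v_los[sel].mean()
            s_map[i, j] = v_los[sel].std()

two_channel = np.stack([s_map, v_map])  # shape (2, 16, 16)

# The MAE/RMSE metrics from the abstract, on hypothetical
# log-mass predictions:
true_logm = np.array([11.2, 11.8, 12.5])
pred_logm = np.array([11.0, 12.0, 12.4])
mae = np.abs(pred_logm - true_logm).mean()
rmse = np.sqrt(((pred_logm - true_logm) ** 2).mean())
print(two_channel.shape, round(mae, 3), round(rmse, 3))
# -> (2, 16, 16) 0.167 0.173
```

A one-channel model would see only `s_map`; stacking `v_map` as a second channel is what lets the network exploit the global rotation signal.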
Related papers
- On the rates of convergence for learning with convolutional neural networks [9.772773527230134]
We study approximation and learning capacities of convolutional neural networks (CNNs) with one-side zero-padding and multiple channels.
We derive convergence rates for estimators based on CNNs in many learning problems.
It is also shown that the obtained rates for classification are minimax optimal in some common settings.
arXiv Detail & Related papers (2024-03-25T06:42:02Z)
- E(2) Equivariant Neural Networks for Robust Galaxy Morphology Classification [0.0]
We train, validate, and test GCNNs equivariant to discrete subgroups of $E(2)$ on the Galaxy10 DECals dataset.
An architecture equivariant to the group $D_{16}$ achieves a $95.52 \pm 0.18\%$ test-set accuracy.
All GCNNs are less susceptible to one-pixel perturbations than an identically constructed CNN.
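The one-pixel robustness probe can be sketched with a toy classifier: perturb each pixel in turn and count how often the predicted label changes. The random linear "classifier" below is a hypothetical stand-in; the cited work probes trained GCNNs on Galaxy10 DECals.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in classifier: a fixed random linear map to class logits.
n_classes, h, w = 10, 8, 8
W = rng.normal(0.0, 0.01, (n_classes, h * w))

def predict(img):
    return int(np.argmax(W @ img.ravel()))

img = rng.uniform(0.0, 1.0, (h, w))
base = predict(img)

# Perturb each pixel in turn and count label flips: a minimal version
# of a one-pixel robustness probe.
flips = 0
for i in range(h):
    for j in range(w):
        pert = img.copy()
        pert[i, j] += 5.0  # push one pixel far out of range
        flips += int(predict(pert) != base)
print(flips, "of", h * w, "one-pixel perturbations changed the label")
```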
arXiv Detail & Related papers (2023-11-02T18:00:02Z)
- INK: Injecting kNN Knowledge in Nearest Neighbor Machine Translation [57.952478914459164]
kNN-MT has provided an effective paradigm to smooth the prediction based on neighbor representations during inference.
We propose an effective training framework INK to directly smooth the representation space via adjusting representations of kNN neighbors with a small number of new parameters.
Experiments on four benchmark datasets show that the method achieves average gains of 1.99 COMET and 1.0 BLEU, outperforming the state-of-the-art kNN-MT system with 0.02x the memory space and a 1.9x inference speedup.
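The neighbor-smoothing idea underlying kNN-MT can be illustrated as interpolating the model's next-token distribution with one built from retrieved neighbors. The tokens, distances, and interpolation weight below are made up, and INK's training-time adjustment of the representation space is not shown.

```python
import numpy as np

# Hypothetical vocabulary and model distribution over the next token.
vocab = ["the", "a", "galaxy", "cluster"]
p_model = np.array([0.5, 0.3, 0.1, 0.1])

# Retrieved neighbors vote for tokens, weighted by a softmax
# over negative distances in representation space.
neighbor_tokens = np.array([2, 2, 3])       # indices into vocab
neighbor_dists = np.array([0.1, 0.2, 0.9])  # smaller = closer
w = np.exp(-neighbor_dists)
w /= w.sum()
p_knn = np.zeros(len(vocab))
np.add.at(p_knn, neighbor_tokens, w)

# Standard kNN-MT interpolation of the two distributions.
lam = 0.25
p_final = lam * p_knn + (1 - lam) * p_model
print(p_final)
```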
arXiv Detail & Related papers (2023-06-10T08:39:16Z)
- Neural Inference of Gaussian Processes for Time Series Data of Quasars [72.79083473275742]
We introduce a new model that can describe quasar spectra completely.
We also introduce a new method of inference of Gaussian process parameters, which we call $\textit{Neural Inference}$.
The combination of both the CDRW model and Neural Inference significantly outperforms the baseline DRW and MLE.
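For context, the baseline damped random walk (DRW) mentioned above is an Ornstein-Uhlenbeck process. A minimal Euler-Maruyama simulation is sketched below; the tau and sigma values are illustrative, not fitted quasar parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

# Damped random walk (Ornstein-Uhlenbeck) light curve:
# dx = -(x / tau) dt + sigma dW, with illustrative parameters.
tau, sigma, dt, n = 100.0, 0.2, 1.0, 20000
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = x[t - 1] - (x[t - 1] / tau) * dt + sigma * np.sqrt(dt) * rng.normal()

# The stationary standard deviation of a DRW is sigma * sqrt(tau / 2).
print("sample std:", float(np.std(x)),
      "analytic:", round(sigma * np.sqrt(tau / 2), 3))
```

With a long enough series the sample standard deviation converges to the analytic value of about 1.414 here.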
arXiv Detail & Related papers (2022-11-17T13:01:26Z)
- Bounding the Width of Neural Networks via Coupled Initialization -- A Worst Case Analysis [121.9821494461427]
We show how to significantly reduce the number of neurons required for two-layer ReLU networks.
We also prove new lower bounds that improve upon prior work, and that under certain assumptions, are best possible.
arXiv Detail & Related papers (2022-06-26T06:51:31Z)
- Large-Margin Representation Learning for Texture Classification [67.94823375350433]
This paper presents a novel approach combining convolutional layers (CLs) and large-margin metric learning for training supervised models on small datasets for texture classification.
The experimental results on texture and histopathologic image datasets have shown that the proposed approach achieves competitive accuracy with lower computational cost and faster convergence when compared to equivalent CNNs.
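One common large-margin metric-learning objective is the triplet loss, sketched below; the cited paper's exact objective may differ, so treat this as an illustrative instance of the idea.

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """Pull same-class pairs together and push different-class pairs
    at least `margin` farther away than the positive pair."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([1.0, 0.0])  # embedding of a same-class texture patch
print(triplet_margin_loss(a, p, np.array([3.0, 0.0])))  # 0.0: margin satisfied
print(triplet_margin_loss(a, p, np.array([1.5, 0.0])))  # 0.5: negative too close
```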
arXiv Detail & Related papers (2022-06-17T04:07:45Z)
- DeepSZ: Identification of Sunyaev-Zel'dovich Galaxy Clusters using Deep Learning [5.295349225662439]
Galaxy clusters identified from the Sunyaev Zel'dovich (SZ) effect are a key ingredient in multi-wavelength cluster-based cosmology.
We present a comparison between two methods of cluster identification: the standard Matched Filter (MF) method in SZ cluster finding and a method using Convolutional Neural Networks (CNNs).
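For white noise, the matched-filter half of this comparison reduces to cross-correlating the map with the cluster template. A toy sketch follows; the Gaussian template, injected source position, and noise level are all made up.

```python
import numpy as np

rng = np.random.default_rng(4)

# Gaussian "cluster" template (7x7) and a noisy sky map with one
# injected source at pixel offset (10, 10).
r = np.arange(7) - 3
template = np.exp(-(r[:, None] ** 2 + r[None, :] ** 2) / (2 * 1.5 ** 2))

sky = rng.normal(0.0, 0.01, (32, 32))
sky[10:17, 10:17] += template

# Matched-filter response: slide the template across the map and
# take dot products (valid positions only: 32 - 7 + 1 = 26).
resp = np.zeros((26, 26))
for i in range(26):
    for j in range(26):
        resp[i, j] = np.sum(sky[i:i + 7, j:j + 7] * template)

peak = np.unravel_index(np.argmax(resp), resp.shape)
print(peak)  # -> (10, 10), the injection offset
```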
arXiv Detail & Related papers (2021-02-25T19:01:00Z)
- Deep learning for gravitational-wave data analysis: A resampling white-box approach [62.997667081978825]
We apply Convolutional Neural Networks (CNNs) to detect gravitational wave (GW) signals of compact binary coalescences, using single-interferometer data from LIGO detectors.
CNNs were quite precise in detecting noise but not sensitive enough to recall GW signals, meaning that CNNs are better suited to noise reduction than to generating GW triggers.
arXiv Detail & Related papers (2020-09-09T03:28:57Z)
- DeepMerge: Classifying High-redshift Merging Galaxies with Deep Neural Networks [0.0]
We show the use of convolutional neural networks (CNNs) for the task of distinguishing between merging and non-merging galaxies in simulated images.
We extract images of merging and non-merging galaxies from the Illustris-1 cosmological simulation and apply observational and experimental noise.
The test set classification accuracy of the CNN is $79\%$ for pristine and $76\%$ for noisy images.
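The pristine-vs-noisy split can be mimicked by adding Gaussian sky noise to a smooth mock image. The blob and noise level below are illustrative, not Illustris-1 data or the paper's observational noise model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Smooth mock "galaxy" image and a noisy counterpart; a classifier
# would be trained and tested separately on each version.
yy, xx = np.mgrid[:32, :32]
pristine = np.exp(-((xx - 16.0) ** 2 + (yy - 16.0) ** 2) / 50.0)
noisy = pristine + rng.normal(0.0, 0.05, pristine.shape)
print(pristine.shape, noisy.shape)
```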
arXiv Detail & Related papers (2020-04-24T20:36:06Z)
- Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks [52.972605601174955]
We show a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.