Two-dimensional total absorption spectroscopy with conditional
generative adversarial networks
- URL: http://arxiv.org/abs/2206.11792v3
- Date: Thu, 14 Dec 2023 16:14:17 GMT
- Title: Two-dimensional total absorption spectroscopy with conditional
generative adversarial networks
- Authors: Cade Dembski, Michelle P. Kuchera, Sean Liddick, Raghu Ramanujan,
Artemis Spyrou
- Abstract summary: We use conditional generative adversarial networks to unfold $E_x$ and $E_\gamma$ data in TAS detectors.
Our model demonstrates characterization capabilities within detector resolution limits for upwards of 93% of simulated test cases.
- Score: 0.22499166814992444
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We explore the use of machine learning techniques to remove the response of
large volume $\gamma$-ray detectors from experimental spectra. Segmented
$\gamma$-ray total absorption spectrometers (TAS) allow for the simultaneous
measurement of individual $\gamma$-ray energy (E$_\gamma$) and total excitation
energy (E$_x$). Analysis of TAS detector data is complicated by the fact that
the E$_x$ and E$_\gamma$ quantities are correlated, and therefore, techniques
that simply unfold using E$_x$ and E$_\gamma$ response functions independently
are not as accurate. In this work, we investigate the use of conditional
generative adversarial networks (cGANs) to simultaneously unfold $E_{x}$ and
$E_{\gamma}$ data in TAS detectors. Specifically, we employ a \texttt{Pix2Pix}
cGAN, a generative modeling technique based on recent advances in deep
learning, to treat raw $(E_{x}, E_{\gamma})$ matrix unfolding as an image-to-image
translation problem. We present results for simulated and experimental matrices
of single-$\gamma$ and double-$\gamma$ decay cascades. Our model demonstrates
characterization capabilities within detector resolution limits for upwards of
93% of simulated test cases.
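The forward problem the cGAN inverts can be illustrated with a toy sketch: a true $(E_x, E_\gamma)$ matrix is "folded" through the detector response, smearing sharp cascade peaks into the observed matrix, and the network learns the reverse mapping. This is a minimal NumPy illustration assuming a simple Gaussian smearing kernel; the paper itself uses simulated TAS detector responses and a \texttt{Pix2Pix} network, not this toy convolution.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2D Gaussian kernel standing in for a detector response."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def fold(true_matrix, kernel):
    """Apply the toy response: 2D convolution (resolution smearing)."""
    n = true_matrix.shape[0]
    pad = kernel.shape[0] // 2
    padded = np.pad(true_matrix, pad)
    out = np.zeros_like(true_matrix)
    for i in range(n):
        for j in range(n):
            out[i, j] = np.sum(
                padded[i:i + kernel.shape[0], j:j + kernel.shape[1]] * kernel
            )
    return out

# Toy true (E_x, E_gamma) matrix: single-gamma cascades place counts on
# the E_x == E_gamma diagonal, making the two quantities correlated.
true_m = np.zeros((32, 32))
for e in range(4, 28, 4):
    true_m[e, e] = 100.0

# The cGAN is trained on (observed, true) image pairs to learn observed -> true.
observed = fold(true_m, gaussian_kernel(5, 1.0))
```

Because the smearing couples the two axes, unfolding each axis independently with 1D response functions cannot recover `true_m` exactly, which motivates treating the whole matrix as one image-to-image translation.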
Related papers
- Collaborative non-parametric two-sample testing [55.98760097296213]
The goal is to identify nodes where the null hypothesis $p_v = q_v$ should be rejected.
We propose the non-parametric collaborative two-sample testing (CTST) framework that efficiently leverages the graph structure.
Our methodology integrates elements from f-divergence estimation, Kernel Methods, and Multitask Learning.
arXiv Detail & Related papers (2024-02-08T14:43:56Z) - Quantum state tomography, entanglement detection and Bell violation
prospects in weak decays of massive particles [0.0]
The method is based on a Bloch parameterisation of the $d$-dimensional generalised Gell-Mann representation of $\rho$.
The Wigner $P$ and $Q$ symbols are calculated for the cases of spin-half, spin-one, and spin-3/2 systems.
The methods are used to examine Monte Carlo simulations of $pp$ collisions for bipartite systems.
arXiv Detail & Related papers (2022-09-28T10:35:39Z) - Community Detection in the Hypergraph SBM: Exact Recovery Given the
Similarity Matrix [1.74048653626208]
We study the performance of algorithms which operate on the similarity matrix $W$, where $W_{ij}$ reports the number of hyperedges containing both $i$ and $j$.
We design a simple and highly efficient spectral algorithm with nearly linear runtime and show that it achieves the min-bisection threshold.
arXiv Detail & Related papers (2022-08-23T15:22:05Z) - Approximate Function Evaluation via Multi-Armed Bandits [51.146684847667125]
We study the problem of estimating the value of a known smooth function $f$ at an unknown point $\boldsymbol{\mu} \in \mathbb{R}^n$, where each component $\mu_i$ can be sampled via a noisy oracle.
We design an instance-adaptive algorithm that learns to sample according to the importance of each coordinate, and with probability at least $1-\delta$ returns an $\epsilon$-accurate estimate of $f(\boldsymbol{\mu})$.
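The idea of sampling coordinates according to their importance can be sketched with a simple two-phase Monte Carlo scheme: a uniform pilot phase estimates each $\mu_i$, then the remaining budget is allocated in proportion to the gradient magnitude $|\partial f / \partial \mu_i|$. This is a hypothetical illustration of the general principle, not the paper's bandit algorithm; the function, noise model, and allocation rule below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_f(f, grad_f, mu_true, budget=20000, pilot=200, noise=0.1):
    """Two-phase sketch: uniform pilot, then allocate remaining samples
    proportionally to each coordinate's sensitivity |df/dmu_i|."""
    n = len(mu_true)
    sample = lambda i, m: mu_true[i] + noise * rng.standard_normal(m)

    # Phase 1: uniform pilot sampling of every coordinate.
    mu_hat = np.array([sample(i, pilot).mean() for i in range(n)])

    # Phase 2: spend the rest of the budget where f is most sensitive.
    w = np.abs(grad_f(mu_hat)) + 1e-12
    alloc = ((budget - n * pilot) * w / w.sum()).astype(int)
    for i in range(n):
        if alloc[i] > 0:
            extra = sample(i, alloc[i])
            # Merge pilot and extra samples into one running mean.
            mu_hat[i] = (mu_hat[i] * pilot + extra.sum()) / (pilot + alloc[i])
    return f(mu_hat)

# Hypothetical target: f depends strongly on the first coordinate only.
f = lambda m: np.sum(m**2)
grad = lambda m: 2 * m
mu = np.array([3.0, 0.1, 0.1, 0.1])
est = estimate_f(f, grad, mu)
```

Here most of the budget flows to the first coordinate, whose error dominates the error in $f$, which is the intuition behind instance-adaptive allocation.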
arXiv Detail & Related papers (2022-03-18T18:50:52Z) - Machine learning Hadron Spectral Functions in Lattice QCD [0.16799377888527683]
We propose a novel neural network (sVAE) based on the Variational Auto-Encoder (VAE) and Bayes' theorem.
Inspired by the maximum entropy method (MEM), we construct the loss function of the neural network such that it includes a Shannon-Jaynes entropy term and a likelihood term.
The sVAE in most cases is comparable to the maximum entropy method in the quality of reconstructing spectral functions.
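The MEM-inspired loss described above combines a data-fidelity term with the Shannon-Jaynes entropy $S = \sum_i \left[\rho_i - m_i - \rho_i \log(\rho_i / m_i)\right]$ relative to a default model $m$. A minimal sketch of such a loss, assuming a linear forward model (kernel) and Gaussian noise; the actual sVAE embeds this in a variational autoencoder, which is not reproduced here.

```python
import numpy as np

def shannon_jaynes_entropy(rho, m):
    """S = sum(rho - m - rho * log(rho/m)); m is the default model (prior).
    S is zero when rho == m and negative otherwise."""
    return np.sum(rho - m - rho * np.log(rho / m))

def mem_style_loss(rho, m, kernel, data, sigma, alpha):
    """Likelihood (chi^2) term plus entropy regularizer, MEM-style:
    L = chi^2 / 2 - alpha * S."""
    pred = kernel @ rho  # forward model: correlator from spectral function
    chi2 = np.sum(((pred - data) / sigma) ** 2)
    return 0.5 * chi2 - alpha * shannon_jaynes_entropy(rho, m)

# At the default model with a perfect fit, both terms vanish.
m = np.ones(8)
K = np.eye(8)
loss_at_prior = mem_style_loss(m, m, K, K @ m, sigma=1.0, alpha=0.5)
```

The hyperparameter `alpha` trades off fidelity to the data against closeness to the default model, mirroring the regularization weight in classic MEM reconstructions.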
arXiv Detail & Related papers (2021-12-01T12:41:28Z) - Unsupervised Spectral Unmixing For Telluric Correction Using A Neural
Network Autoencoder [58.720142291102135]
We present a neural network autoencoder approach for extracting a telluric transmission spectrum from a large set of high-precision observed solar spectra from the HARPS-N radial velocity spectrograph.
arXiv Detail & Related papers (2021-11-17T12:54:48Z) - Machine learning spectral functions in lattice QCD [0.0]
We study the inverse problem of reconstructing spectral functions from Euclidean correlation functions via machine learning.
We propose a novel neural network, sVAE, which is based on the variational autoencoder (VAE) and can be naturally applied to the inverse problem.
arXiv Detail & Related papers (2021-10-26T09:23:45Z) - Tightening the Dependence on Horizon in the Sample Complexity of
Q-Learning [59.71676469100807]
This work sharpens the sample complexity of synchronous Q-learning to an order of $\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^4\varepsilon^2}$ for any $0 < \varepsilon < 1$.
Our finding unveils the effectiveness of vanilla Q-learning, which matches that of speedy Q-learning without requiring extra computation and storage.
arXiv Detail & Related papers (2021-02-12T14:22:05Z) - Linear-Sample Learning of Low-Rank Distributions [56.59844655107251]
We show that learning $k \times k$, rank-$r$ matrices to normalized $L_1$ distance requires $\Omega(\frac{kr}{\epsilon^2})$ samples.
We propose an algorithm that uses $\mathcal{O}(\frac{kr}{\epsilon^2}\log^2\frac{1}{\epsilon})$ samples, a number linear in the high dimension $k$ and nearly linear in the typically low rank $r$.
arXiv Detail & Related papers (2020-09-30T19:10:32Z) - Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and
Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP).
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
arXiv Detail & Related papers (2020-06-04T17:51:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.